From xen-devel-bounces@lists.xenproject.org Tue Dec 01 00:00:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 00:00:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41530.74736 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjt5U-0002SR-HB; Tue, 01 Dec 2020 00:00:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41530.74736; Tue, 01 Dec 2020 00:00:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjt5U-0002Rl-Dc; Tue, 01 Dec 2020 00:00:08 +0000
Received: by outflank-mailman (input) for mailman id 41530;
 Tue, 01 Dec 2020 00:00:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WCSm=FF=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1kjt5S-0002OY-TW
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 00:00:06 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 096d6e64-5b30-4155-aa85-4458a9364139;
 Tue, 01 Dec 2020 00:00:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 096d6e64-5b30-4155-aa85-4458a9364139
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606780805;
  h=from:to:cc:subject:date:message-id:mime-version;
  bh=vNn3KP4WjXfpM6kbKGzfo/xBCJOMYjHp8/tp6Or9F/g=;
  b=NhZO8mh9zIwJsBQpqe//G7KWsB9puQss7x9grCN//TMGotWEFyuka0ml
   fpCB0jZ1qf5mUvNo9n/3yoH1xwTYtjt2eEp4vxn75HjwlJrTIA9Cd4XxU
   fZhbBFKyXCRiZqRDT+iisUH3lTAixcjgEw3kKRHWmBsYopRjbRkRPVAWZ
   w=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: MU1q1OVfNhGAOCZ8vgWjDcDNyKxtTuhjZRCeMCqhSdbGBsEb+15fHdFnJqr4FqaXxl2gdjig/5
 /o2v4nmXgA07JIYyR1sII3kO4VvHP5nW2xd8gnbV/r4gbvx96kgwDbvu8tARPZ3Gb6aGCN7KwE
 LVm0CHoUXxNimPHC+lpKMjLBENjDi1fx+2xwuDEZNBF1iuL4Cw8vuOFTccFqWyvkPCRezoXwsG
 kV8HhjGjTu51Lu/BvFmK66UeLPABPtHMcXZq/LY1TbSBACc31crIARIqZCIsCYSczNUQxZOYRI
 IgQ=
X-SBRS: None
X-MesageID: 32183726
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,382,1599537600"; 
   d="scan'208";a="32183726"
From: Igor Druzhinin <igor.druzhinin@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: <jbeulich@suse.com>, <andrew.cooper3@citrix.com>, <roger.pau@citrix.com>,
	<wl@xen.org>, Igor Druzhinin <igor.druzhinin@citrix.com>
Subject: [PATCH] x86/IRQ: bump max number of guests for a shared IRQ to 31
Date: Mon, 30 Nov 2020 23:59:37 +0000
Message-ID: <1606780777-30718-1-git-send-email-igor.druzhinin@citrix.com>
X-Mailer: git-send-email 2.7.4
MIME-Version: 1.0
Content-Type: text/plain

The current limit of 7 is too restrictive for modern systems, where one GSI
could be shared by potentially many PCI INTx sources, each corresponding to
a device passed through to its own guest. Some systems do not apply due
diligence in swizzling INTx links when, e.g., INTA is declared as the
interrupt pin for the majority of PCI devices behind a single router,
resulting in overuse of a GSI.

Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
---

If people think it would make sense, I can rework the array into a list of
domain pointers to avoid the limit entirely.

---
 xen/arch/x86/irq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 8d1f9a9..194f660 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1028,7 +1028,7 @@ int __init setup_irq(unsigned int irq, unsigned int irqflags,
  * HANDLING OF GUEST-BOUND PHYSICAL IRQS
  */
 
-#define IRQ_MAX_GUESTS 7
+#define IRQ_MAX_GUESTS 31
 typedef struct {
     u8 nr_guests;
     u8 in_flight;
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 01:13:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 01:13:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41541.74750 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjuEb-0005hS-Qk; Tue, 01 Dec 2020 01:13:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41541.74750; Tue, 01 Dec 2020 01:13:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjuEb-0005hL-Ne; Tue, 01 Dec 2020 01:13:37 +0000
Received: by outflank-mailman (input) for mailman id 41541;
 Tue, 01 Dec 2020 01:13:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fi77=FF=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kjuEb-0005hG-51
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 01:13:37 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 861f628a-24df-49d1-bf64-4e7033c2fa17;
 Tue, 01 Dec 2020 01:13:36 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id B7E9E2073C;
 Tue,  1 Dec 2020 01:13:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 861f628a-24df-49d1-bf64-4e7033c2fa17
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606785215;
	bh=z5rOzcOj/GGFIXPoAItrL7xTCRxPVuL79Ztcu8USgXE=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=cJ8DdGWyvyphnLxAZ3VL77fyIMbb1eQ+Cdibiv5JulNRCu1vGPRgqz5jtZWKNSxNl
	 cSbg2w5L+58fzEwfKkqy3vRv/5YvRIAIjq7DH/gV8Dt4iB/3alFZjO8aoLHzDrrLCf
	 AuUIhuqFL7LjSnu7uRQixMUA2snPE6Dub65VFDL8=
Date: Mon, 30 Nov 2020 17:13:34 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Diederik de Haas <didi.debian@cknow.org>
cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>, 
    Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Anthony PERARD <anthony.perard@citrix.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: Re: [PATCH] Fix spelling errors.
In-Reply-To: <a60e2c98183d7c873f4e306954f900614fcdb582.1606757711.git.didi.debian@cknow.org>
Message-ID: <alpine.DEB.2.21.2011301713210.1100@sstabellini-ThinkPad-T480s>
References: <a60e2c98183d7c873f4e306954f900614fcdb582.1606757711.git.didi.debian@cknow.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 30 Nov 2020, Diederik de Haas wrote:
> Only spelling errors; no functional changes.
> 
> In docs/misc/dump-core-format.txt there are a few more instances of
> 'informations'. I'll leave that up to someone who can properly determine
> how those sentences should be constructed.
> 
> Signed-off-by: Diederik de Haas <didi.debian@cknow.org>
> 
> Please CC me in replies as I'm not subscribed to this list.

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  docs/man/xl.1.pod.in                   | 2 +-
>  docs/man/xl.cfg.5.pod.in               | 2 +-
>  docs/man/xlcpupool.cfg.5.pod           | 2 +-
>  tools/firmware/rombios/rombios.c       | 2 +-
>  tools/libs/light/libxl_stream_read.c   | 2 +-
>  tools/xl/xl_cmdtable.c                 | 2 +-
>  xen/arch/x86/boot/video.S              | 2 +-
>  xen/arch/x86/cpu/vpmu.c                | 2 +-
>  xen/arch/x86/mpparse.c                 | 2 +-
>  xen/arch/x86/x86_emulate/x86_emulate.c | 2 +-
>  xen/common/libelf/libelf-dominfo.c     | 2 +-
>  xen/drivers/passthrough/arm/smmu.c     | 2 +-
>  xen/tools/gen-cpuid.py                 | 2 +-
>  xen/xsm/flask/policy/access_vectors    | 2 +-
>  14 files changed, 14 insertions(+), 14 deletions(-)
> 
> diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
> index f92bacfa72..eaa72faad6 100644
> --- a/docs/man/xl.1.pod.in
> +++ b/docs/man/xl.1.pod.in
> @@ -1578,7 +1578,7 @@ List vsnd devices for a domain.
>  Creates a new keyboard device in the domain specified by I<domain-id>.
>  I<vkb-device> describes the device to attach, using the same format as the
>  B<VKB_SPEC_STRING> string in the domain config file. See L<xl.cfg(5)>
> -for more informations.
> +for more information.
>  
>  =item B<vkb-detach> I<domain-id> I<devid>
>  
> diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
> index 0532739c1f..b4625f56db 100644
> --- a/docs/man/xl.cfg.5.pod.in
> +++ b/docs/man/xl.cfg.5.pod.in
> @@ -2385,7 +2385,7 @@ If B<videoram> is set less than 128MB, an error will be triggered.
>  
>  =item B<stdvga=BOOLEAN>
>  
> -Speficies a standard VGA card with VBE (VESA BIOS Extensions) as the
> +Specifies a standard VGA card with VBE (VESA BIOS Extensions) as the
>  emulated graphics device. If your guest supports VBE 2.0 or
>  later (e.g. Windows XP onwards) then you should enable this.
>  stdvga supports more video ram and bigger resolutions than Cirrus.
> diff --git a/docs/man/xlcpupool.cfg.5.pod b/docs/man/xlcpupool.cfg.5.pod
> index 3c9ddf7958..c577c7ca3a 100644
> --- a/docs/man/xlcpupool.cfg.5.pod
> +++ b/docs/man/xlcpupool.cfg.5.pod
> @@ -106,7 +106,7 @@ means that cpus 2,3,5 will be member of the cpupool.
>  means that cpus 0,2,3 and 5 will be member of the cpupool. A "node:" or
>  "nodes:" modifier can be used. E.g., "0,node:1,nodes:2-3,^10-13" means
>  that pcpus 0, plus all the cpus of NUMA nodes 1,2,3 with the exception
> -of cpus 10,11,12,13 will be memeber of the cpupool.
> +of cpus 10,11,12,13 will be members of the cpupool.
>  
>  =back
>  
> diff --git a/tools/firmware/rombios/rombios.c b/tools/firmware/rombios/rombios.c
> index 51558ee57a..5cda22785f 100644
> --- a/tools/firmware/rombios/rombios.c
> +++ b/tools/firmware/rombios/rombios.c
> @@ -2607,7 +2607,7 @@ void ata_detect( )
>    write_byte(ebda_seg,&EbdaData->ata.channels[3].irq,11);
>  #endif
>  #if BX_MAX_ATA_INTERFACES > 4
> -#error Please fill the ATA interface informations
> +#error Please fill the ATA interface information
>  #endif
>  
>    // Device detection
> diff --git a/tools/libs/light/libxl_stream_read.c b/tools/libs/light/libxl_stream_read.c
> index 514f6d9f89..99a6714e76 100644
> --- a/tools/libs/light/libxl_stream_read.c
> +++ b/tools/libs/light/libxl_stream_read.c
> @@ -459,7 +459,7 @@ static void stream_continue(libxl__egc *egc,
>          while (process_record(egc, stream))
>              ; /*
>                 * Nothing! process_record() helpfully tells us if no specific
> -               * futher actions have been set up, in which case we want to go
> +               * further actions have been set up, in which case we want to go
>                 * ahead and process the next record.
>                 */
>          break;
> diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
> index 7da6c1b927..6ab5e47da3 100644
> --- a/tools/xl/xl_cmdtable.c
> +++ b/tools/xl/xl_cmdtable.c
> @@ -154,7 +154,7 @@ struct cmd_spec cmd_table[] = {
>        "-h  Print this help.\n"
>        "-c  Leave domain running after creating the snapshot.\n"
>        "-p  Leave domain paused after creating the snapshot.\n"
> -      "-D  Store the domain id in the configration."
> +      "-D  Store the domain id in the configuration."
>      },
>      { "migrate",
>        &main_migrate, 0, 1,
> diff --git a/xen/arch/x86/boot/video.S b/xen/arch/x86/boot/video.S
> index a485779ce7..0efbe8d3b3 100644
> --- a/xen/arch/x86/boot/video.S
> +++ b/xen/arch/x86/boot/video.S
> @@ -177,7 +177,7 @@ dac_set:
>          movb    $0, _param(PARAM_LFB_COLORS+7)
>  
>  dac_done:
> -# get protected mode interface informations
> +# get protected mode interface information
>          movw    $0x4f0a, %ax
>          xorw    %bx, %bx
>          xorw    %di, %di
> diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
> index 1ed39ef03f..ac32379c2e 100644
> --- a/xen/arch/x86/cpu/vpmu.c
> +++ b/xen/arch/x86/cpu/vpmu.c
> @@ -680,7 +680,7 @@ static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
>          vcpu_unpause(v);
>  }
>  
> -/* Dump some vpmu informations on console. Used in keyhandler dump_domains(). */
> +/* Dump some vpmu information on console. Used in keyhandler dump_domains(). */
>  void vpmu_dump(struct vcpu *v)
>  {
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> diff --git a/xen/arch/x86/mpparse.c b/xen/arch/x86/mpparse.c
> index d532575fee..dff02b142b 100644
> --- a/xen/arch/x86/mpparse.c
> +++ b/xen/arch/x86/mpparse.c
> @@ -170,7 +170,7 @@ static int MP_processor_info_x(struct mpc_config_processor *m,
>  	if (num_processors >= 8 && hotplug
>  	    && genapic.name == apic_default.name) {
>  		printk_once(XENLOG_WARNING
> -			    "WARNING: CPUs limit of 8 reached - ignoring futher processors\n");
> +			    "WARNING: CPUs limit of 8 reached - ignoring further processors\n");
>  		unaccounted_cpus = true;
>  		return -ENOSPC;
>  	}
> diff --git a/xen/arch/x86/x86_emulate/x86_emulate.c b/xen/arch/x86/x86_emulate/x86_emulate.c
> index a35b63634b..ecc067bffe 100644
> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
> @@ -3246,7 +3246,7 @@ x86_decode(
>              case 0x23: /* mov reg,dr */
>                  /*
>                   * Mov to/from cr/dr ignore the encoding of Mod, and behave as
> -                 * if they were encoded as reg/reg instructions.  No futher
> +                 * if they were encoded as reg/reg instructions. No further
>                   * disp/SIB bytes are fetched.
>                   */
>                  modrm_mod = 3;
> diff --git a/xen/common/libelf/libelf-dominfo.c b/xen/common/libelf/libelf-dominfo.c
> index 508f08db42..69c94b6f3b 100644
> --- a/xen/common/libelf/libelf-dominfo.c
> +++ b/xen/common/libelf/libelf-dominfo.c
> @@ -1,5 +1,5 @@
>  /*
> - * parse xen-specific informations out of elf kernel binaries.
> + * parse xen-specific information out of elf kernel binaries.
>   *
>   * This library is free software; you can redistribute it and/or
>   * modify it under the terms of the GNU Lesser General Public
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index b8321f5d8d..ed04d85e05 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -214,7 +214,7 @@ struct iommu_domain
>  	struct list_head		list;
>  };
>  
> -/* Xen: Describes informations required for a Xen domain */
> +/* Xen: Describes information required for a Xen domain */
>  struct arm_smmu_xen_domain {
>  	spinlock_t			lock;
>  	/* List of context (i.e iommu_domain) associated to this domain */
> diff --git a/xen/tools/gen-cpuid.py b/xen/tools/gen-cpuid.py
> index 50412b9a46..36f67750e5 100755
> --- a/xen/tools/gen-cpuid.py
> +++ b/xen/tools/gen-cpuid.py
> @@ -192,7 +192,7 @@ def crunch_numbers(state):
>          FXSR: [FFXSR, SSE],
>  
>          # SSE is taken to mean support for the %XMM registers as well as the
> -        # instructions.  Several futher instruction sets are built on core
> +        # instructions.  Several further instruction sets are built on core
>          # %XMM support, without specific inter-dependencies.  Additionally
>          # AMD has a special mis-alignment sub-mode.
>          SSE: [SSE2, MISALIGNSSE],
> diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
> index 1aa0bb501c..6359c7fc87 100644
> --- a/xen/xsm/flask/policy/access_vectors
> +++ b/xen/xsm/flask/policy/access_vectors
> @@ -507,7 +507,7 @@ class security
>  #
>  class version
>  {
> -# Extra informations (-unstable).
> +# Extra information (-unstable).
>      xen_extraversion
>  # Compile information of the hypervisor.
>      xen_compile_info
> -- 
> 2.29.2
> 


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 02:29:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 02:29:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41549.74763 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjvPl-0003fm-Lg; Tue, 01 Dec 2020 02:29:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41549.74763; Tue, 01 Dec 2020 02:29:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjvPl-0003ff-HF; Tue, 01 Dec 2020 02:29:13 +0000
Received: by outflank-mailman (input) for mailman id 41549;
 Tue, 01 Dec 2020 02:29:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjvPj-0003fR-Oi; Tue, 01 Dec 2020 02:29:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjvPj-0006VB-Ea; Tue, 01 Dec 2020 02:29:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjvPj-0004hR-30; Tue, 01 Dec 2020 02:29:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kjvPj-00036D-2W; Tue, 01 Dec 2020 02:29:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SWVK4qEOZjAeofUZPBmftnPp9kPNV2E0zoAEA60ta+Q=; b=tuHPecemU0E6TAYMfaKitCjxfd
	F6rnPondRXuzRC8mA/jpl4rXnLz8j17McOGMUbctkJED0j9Yw7J2iuRe4/7uxpWL3eTq09n1nCSQf
	rhpeRpLDK8aCX5ho65+eDdEPKnGblpDPulSeY+3iktbFnOKYqFruKHd2aji4OIl9uOQw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157115-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157115: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=3ae469af8e680df31eecd0a2ac6a83b58ad7ce53
X-Osstest-Versions-That:
    xen=f7d7d53f6464cff94ead4c15d21e79ce4d9173f5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 01 Dec 2020 02:29:11 +0000

flight 157115 xen-unstable real [real]
flight 157122 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157115/
http://logs.test-lab.xenproject.org/osstest/logs/157122/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 157122-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157102
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157102
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157102
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157102
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157102
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157102
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157102
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 157102
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157102
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157102
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157102
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157102
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  3ae469af8e680df31eecd0a2ac6a83b58ad7ce53
baseline version:
 xen                  f7d7d53f6464cff94ead4c15d21e79ce4d9173f5

Last test of basis   157102  2020-11-30 01:52:27 Z    1 days
Testing same since   157115  2020-11-30 17:07:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Juergen Gross <jgross@suse.com>
  Manuel Bouyer <bouyer@antioche.eu.org>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f7d7d53f64..3ae469af8e  3ae469af8e680df31eecd0a2ac6a83b58ad7ce53 -> master


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 03:29:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 03:29:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41562.74796 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjwMF-0000rx-FT; Tue, 01 Dec 2020 03:29:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41562.74796; Tue, 01 Dec 2020 03:29:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjwMF-0000rq-By; Tue, 01 Dec 2020 03:29:39 +0000
Received: by outflank-mailman (input) for mailman id 41562;
 Tue, 01 Dec 2020 03:29:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjwME-0000ri-Eu; Tue, 01 Dec 2020 03:29:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjwME-0007iY-5g; Tue, 01 Dec 2020 03:29:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjwMD-0007Cv-Ul; Tue, 01 Dec 2020 03:29:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kjwMD-0007km-UJ; Tue, 01 Dec 2020 03:29:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ngIvIYZWGtqlHYmYE9ypCOD3B80M5WD9x3SuTRVZk6g=; b=k1SC/LUMzl0abJEAv/ODH+jA4D
	S1V1sBEWmVgMIUE1MSsvYsrDThOJdivx5dpH7zVb0Fxq4X8ZtbNX81njcxco1OQs0E4aR8zpWTnRR
	x0mZ4InPjvbZAdHMM+R1dEqXDfjWpJIvxnIFGeO6mSIILjK9P/uDqNUU5/U5jlBT4xSI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157117-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157117: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=9fb629edd75e1ae1e7f4e85b0876107a7180899b
X-Osstest-Versions-That:
    ovmf=8501bb0c05ad9dd7ef6504803678866b1d23f6ab
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 01 Dec 2020 03:29:37 +0000

flight 157117 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157117/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 9fb629edd75e1ae1e7f4e85b0876107a7180899b
baseline version:
 ovmf                 8501bb0c05ad9dd7ef6504803678866b1d23f6ab

Last test of basis   157104  2020-11-30 03:09:46 Z    1 days
Testing same since   157117  2020-11-30 18:12:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Laszlo Ersek <lersek@redhat.com>
  Peter Grehan <grehan@freebsd.org>
  Rebecca Cran <rebecca@bsdio.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   8501bb0c05..9fb629edd7  9fb629edd75e1ae1e7f4e85b0876107a7180899b -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 05:13:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 05:13:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41377.74811 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjxyp-0002ol-Gq; Tue, 01 Dec 2020 05:13:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41377.74811; Tue, 01 Dec 2020 05:13:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjxyp-0002oe-Ci; Tue, 01 Dec 2020 05:13:35 +0000
Received: by outflank-mailman (input) for mailman id 41377;
 Mon, 30 Nov 2020 17:41:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JoTA=FE=cknow.org=didi.debian@srs-us1.protection.inumbo.net>)
 id 1kjnBG-0006Sb-3i
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 17:41:42 +0000
Received: from relay2-d.mail.gandi.net (unknown [217.70.183.194])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c439de24-f28b-4528-9f2d-c3e669644a02;
 Mon, 30 Nov 2020 17:41:40 +0000 (UTC)
Received: from bagend.home.cknow.org (92-110-45-68.cable.dynamic.v4.ziggo.nl
 [92.110.45.68]) (Authenticated sender: didi.debian@cknow.org)
 by relay2-d.mail.gandi.net (Postfix) with ESMTPSA id 9E01240004;
 Mon, 30 Nov 2020 17:41:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c439de24-f28b-4528-9f2d-c3e669644a02
X-Originating-IP: 92.110.45.68
From: Diederik de Haas <didi.debian@cknow.org>
To: xen-devel@lists.xenproject.org
Cc: Diederik de Haas <didi.debian@cknow.org>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [PATCH] Fix spelling errors.
Date: Mon, 30 Nov 2020 18:39:41 +0100
Message-Id: <a60e2c98183d7c873f4e306954f900614fcdb582.1606757711.git.didi.debian@cknow.org>
X-Mailer: git-send-email 2.29.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Only spelling errors; no functional changes.

In docs/misc/dump-core-format.txt there are a few more instances of
'informations'. I'll leave that up to someone who can properly determine
how those sentences should be constructed.

Signed-off-by: Diederik de Haas <didi.debian@cknow.org>

Please CC me in replies as I'm not subscribed to this list.
---
 docs/man/xl.1.pod.in                   | 2 +-
 docs/man/xl.cfg.5.pod.in               | 2 +-
 docs/man/xlcpupool.cfg.5.pod           | 2 +-
 tools/firmware/rombios/rombios.c       | 2 +-
 tools/libs/light/libxl_stream_read.c   | 2 +-
 tools/xl/xl_cmdtable.c                 | 2 +-
 xen/arch/x86/boot/video.S              | 2 +-
 xen/arch/x86/cpu/vpmu.c                | 2 +-
 xen/arch/x86/mpparse.c                 | 2 +-
 xen/arch/x86/x86_emulate/x86_emulate.c | 2 +-
 xen/common/libelf/libelf-dominfo.c     | 2 +-
 xen/drivers/passthrough/arm/smmu.c     | 2 +-
 xen/tools/gen-cpuid.py                 | 2 +-
 xen/xsm/flask/policy/access_vectors    | 2 +-
 14 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index f92bacfa72..eaa72faad6 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -1578,7 +1578,7 @@ List vsnd devices for a domain.
 Creates a new keyboard device in the domain specified by I<domain-id>.
 I<vkb-device> describes the device to attach, using the same format as the
 B<VKB_SPEC_STRING> string in the domain config file. See L<xl.cfg(5)>
-for more informations.
+for more information.
 
 =item B<vkb-detach> I<domain-id> I<devid>
 
diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 0532739c1f..b4625f56db 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2385,7 +2385,7 @@ If B<videoram> is set less than 128MB, an error will be triggered.
 
 =item B<stdvga=BOOLEAN>
 
-Speficies a standard VGA card with VBE (VESA BIOS Extensions) as the
+Specifies a standard VGA card with VBE (VESA BIOS Extensions) as the
 emulated graphics device. If your guest supports VBE 2.0 or
 later (e.g. Windows XP onwards) then you should enable this.
 stdvga supports more video ram and bigger resolutions than Cirrus.
diff --git a/docs/man/xlcpupool.cfg.5.pod b/docs/man/xlcpupool.cfg.5.pod
index 3c9ddf7958..c577c7ca3a 100644
--- a/docs/man/xlcpupool.cfg.5.pod
+++ b/docs/man/xlcpupool.cfg.5.pod
@@ -106,7 +106,7 @@ means that cpus 2,3,5 will be member of the cpupool.
 means that cpus 0,2,3 and 5 will be member of the cpupool. A "node:" or
 "nodes:" modifier can be used. E.g., "0,node:1,nodes:2-3,^10-13" means
 that pcpus 0, plus all the cpus of NUMA nodes 1,2,3 with the exception
-of cpus 10,11,12,13 will be memeber of the cpupool.
+of cpus 10,11,12,13 will be members of the cpupool.
 
 =back
 
diff --git a/tools/firmware/rombios/rombios.c b/tools/firmware/rombios/rombios.c
index 51558ee57a..5cda22785f 100644
--- a/tools/firmware/rombios/rombios.c
+++ b/tools/firmware/rombios/rombios.c
@@ -2607,7 +2607,7 @@ void ata_detect( )
   write_byte(ebda_seg,&EbdaData->ata.channels[3].irq,11);
 #endif
 #if BX_MAX_ATA_INTERFACES > 4
-#error Please fill the ATA interface informations
+#error Please fill the ATA interface information
 #endif
 
   // Device detection
diff --git a/tools/libs/light/libxl_stream_read.c b/tools/libs/light/libxl_stream_read.c
index 514f6d9f89..99a6714e76 100644
--- a/tools/libs/light/libxl_stream_read.c
+++ b/tools/libs/light/libxl_stream_read.c
@@ -459,7 +459,7 @@ static void stream_continue(libxl__egc *egc,
         while (process_record(egc, stream))
             ; /*
                * Nothing! process_record() helpfully tells us if no specific
-               * futher actions have been set up, in which case we want to go
+               * further actions have been set up, in which case we want to go
                * ahead and process the next record.
                */
         break;
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 7da6c1b927..6ab5e47da3 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -154,7 +154,7 @@ struct cmd_spec cmd_table[] = {
       "-h  Print this help.\n"
       "-c  Leave domain running after creating the snapshot.\n"
       "-p  Leave domain paused after creating the snapshot.\n"
-      "-D  Store the domain id in the configration."
+      "-D  Store the domain id in the configuration."
     },
     { "migrate",
       &main_migrate, 0, 1,
diff --git a/xen/arch/x86/boot/video.S b/xen/arch/x86/boot/video.S
index a485779ce7..0efbe8d3b3 100644
--- a/xen/arch/x86/boot/video.S
+++ b/xen/arch/x86/boot/video.S
@@ -177,7 +177,7 @@ dac_set:
         movb    $0, _param(PARAM_LFB_COLORS+7)
 
 dac_done:
-# get protected mode interface informations
+# get protected mode interface information
         movw    $0x4f0a, %ax
         xorw    %bx, %bx
         xorw    %di, %di
diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index 1ed39ef03f..ac32379c2e 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -680,7 +680,7 @@ static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
         vcpu_unpause(v);
 }
 
-/* Dump some vpmu informations on console. Used in keyhandler dump_domains(). */
+/* Dump some vpmu information on console. Used in keyhandler dump_domains(). */
 void vpmu_dump(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
diff --git a/xen/arch/x86/mpparse.c b/xen/arch/x86/mpparse.c
index d532575fee..dff02b142b 100644
--- a/xen/arch/x86/mpparse.c
+++ b/xen/arch/x86/mpparse.c
@@ -170,7 +170,7 @@ static int MP_processor_info_x(struct mpc_config_processor *m,
 	if (num_processors >= 8 && hotplug
 	    && genapic.name == apic_default.name) {
 		printk_once(XENLOG_WARNING
-			    "WARNING: CPUs limit of 8 reached - ignoring futher processors\n");
+			    "WARNING: CPUs limit of 8 reached - ignoring further processors\n");
 		unaccounted_cpus = true;
 		return -ENOSPC;
 	}
diff --git a/xen/arch/x86/x86_emulate/x86_emulate.c b/xen/arch/x86/x86_emulate/x86_emulate.c
index a35b63634b..ecc067bffe 100644
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -3246,7 +3246,7 @@ x86_decode(
             case 0x23: /* mov reg,dr */
                 /*
                  * Mov to/from cr/dr ignore the encoding of Mod, and behave as
-                 * if they were encoded as reg/reg instructions.  No futher
+                 * if they were encoded as reg/reg instructions. No further
                  * disp/SIB bytes are fetched.
                  */
                 modrm_mod = 3;
diff --git a/xen/common/libelf/libelf-dominfo.c b/xen/common/libelf/libelf-dominfo.c
index 508f08db42..69c94b6f3b 100644
--- a/xen/common/libelf/libelf-dominfo.c
+++ b/xen/common/libelf/libelf-dominfo.c
@@ -1,5 +1,5 @@
 /*
- * parse xen-specific informations out of elf kernel binaries.
+ * parse xen-specific information out of elf kernel binaries.
  *
  * This library is free software; you can redistribute it and/or
  * modify it under the terms of the GNU Lesser General Public
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index b8321f5d8d..ed04d85e05 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -214,7 +214,7 @@ struct iommu_domain
 	struct list_head		list;
 };
 
-/* Xen: Describes informations required for a Xen domain */
+/* Xen: Describes information required for a Xen domain */
 struct arm_smmu_xen_domain {
 	spinlock_t			lock;
 	/* List of context (i.e iommu_domain) associated to this domain */
diff --git a/xen/tools/gen-cpuid.py b/xen/tools/gen-cpuid.py
index 50412b9a46..36f67750e5 100755
--- a/xen/tools/gen-cpuid.py
+++ b/xen/tools/gen-cpuid.py
@@ -192,7 +192,7 @@ def crunch_numbers(state):
         FXSR: [FFXSR, SSE],
 
         # SSE is taken to mean support for the %XMM registers as well as the
-        # instructions.  Several futher instruction sets are built on core
+        # instructions.  Several further instruction sets are built on core
         # %XMM support, without specific inter-dependencies.  Additionally
         # AMD has a special mis-alignment sub-mode.
         SSE: [SSE2, MISALIGNSSE],
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 1aa0bb501c..6359c7fc87 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -507,7 +507,7 @@ class security
 #
 class version
 {
-# Extra informations (-unstable).
+# Extra information (-unstable).
     xen_extraversion
 # Compile information of the hypervisor.
     xen_compile_info
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 05:14:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 05:14:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41571.74822 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjxzW-0002uZ-TH; Tue, 01 Dec 2020 05:14:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41571.74822; Tue, 01 Dec 2020 05:14:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjxzW-0002uS-QD; Tue, 01 Dec 2020 05:14:18 +0000
Received: by outflank-mailman (input) for mailman id 41571;
 Tue, 01 Dec 2020 05:14:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dyvC=FF=intel.com=kevin.tian@srs-us1.protection.inumbo.net>)
 id 1kjxzV-0002uL-IN
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 05:14:17 +0000
Received: from mga05.intel.com (unknown [192.55.52.43])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 93725f7a-dd98-4abe-9a7e-3d8b78546769;
 Tue, 01 Dec 2020 05:14:15 +0000 (UTC)
Received: from fmsmga005.fm.intel.com ([10.253.24.32])
 by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 30 Nov 2020 21:14:13 -0800
Received: from fmsmsx601.amr.corp.intel.com ([10.18.126.81])
 by fmsmga005.fm.intel.com with ESMTP; 30 Nov 2020 21:14:13 -0800
Received: from fmsmsx609.amr.corp.intel.com (10.18.126.89) by
 fmsmsx601.amr.corp.intel.com (10.18.126.81) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.1713.5; Mon, 30 Nov 2020 21:14:13 -0800
Received: from FMSEDG603.ED.cps.intel.com (10.1.192.133) by
 fmsmsx609.amr.corp.intel.com (10.18.126.89) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.1713.5
 via Frontend Transport; Mon, 30 Nov 2020 21:14:13 -0800
Received: from NAM11-DM6-obe.outbound.protection.outlook.com (104.47.57.172)
 by edgegateway.intel.com (192.55.55.68) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.1713.5; Mon, 30 Nov 2020 21:14:13 -0800
Received: from MWHPR11MB1645.namprd11.prod.outlook.com (2603:10b6:301:b::12)
 by MWHPR11MB1567.namprd11.prod.outlook.com (2603:10b6:301:d::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.24; Tue, 1 Dec
 2020 05:14:12 +0000
Received: from MWHPR11MB1645.namprd11.prod.outlook.com
 ([fe80::31c9:44c6:7323:61ac]) by MWHPR11MB1645.namprd11.prod.outlook.com
 ([fe80::31c9:44c6:7323:61ac%7]) with mapi id 15.20.3611.025; Tue, 1 Dec 2020
 05:14:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 93725f7a-dd98-4abe-9a7e-3d8b78546769
IronPort-SDR: XkMNV1+2CrEGkgM/4VsJNHGa/3sUKEybaEjGOngtXOJiIxB+AlBgd5uLCsNyHok1LYLpjYW0n9
 +0MKC96LiaZw==
X-IronPort-AV: E=McAfee;i="6000,8403,9821"; a="257476237"
X-IronPort-AV: E=Sophos;i="5.78,383,1599548400"; 
   d="scan'208";a="257476237"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: Y3smGJXVsL4fdnji/rGLWpdc9DGdRbZRQ2hchFG0Y9h2/jOpAeyNn5C3RKtX9xyJyAfwi1h+dL
 lFyDVQPDONKQ==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.78,383,1599548400"; 
   d="scan'208";a="538870292"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lETtPNVhmMDWh7FWaiDGq9kIGR0D0WVXL2g/rkD/Mh1/2GcPO86+Azi5DcbBiLXkhXizsnCh3XyUu9bQznkXN5kPG4dAXAXxN3Po38jE6mqs/Gpyy1Yr+d4P8Py1pR5GWhwX5ZkZ6OH97B0OKxd91JS5jS0rTNUIDJ8zW4mGgE8A/3kX0UmOjpW8iRdDq5jzoSDoha8GjeKMloKQBbSdci/jPp/QFp9WsXSECMVJz8Gj0gx8ujO82s8/CYC97m/tegB+7QZH/MQj3HjsDtwVcx/qHVXatzExe0hH+IPOgx25KVUNYMqjrSmwjigd7qwkKeFjjKS3WEkaHOf3N0TjMQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VrMHMsU9ngcsF76fZaDCMsSaDHV0e/vGyRm24Rw2OTk=;
 b=czPxKyh4H6u5PAOJ0iIzV2G2UTPd1hVrhKnQmxz46mzhh9BBtvkP/uX6tm5K10yz0pV/qSZk6FDiRELRNL5DvakQqc8PcYOTss2nR5iTmtgmm1b8kY51ZP5Q5iQAfPCwJiQD5pIWSAKgcyw73ibWWdfK/IBPaYDlYFTl/JZCvinuzjAUdsuGHDA7BrXFh+9fok5PBVXSWRt3C04XtPrG8N5Pf2O2MCMj8auIV69yw3lpCNp3oILIE8cMhu1B7bi8mopaU1NbOBXWWmFpvi5793tFT0GNaL9RkkFdGeikBXg8a+w3y+EmGA6Ip0FoU7NMrmFPw7D0++QCEfm1ZbkcvA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=intel.onmicrosoft.com;
 s=selector2-intel-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VrMHMsU9ngcsF76fZaDCMsSaDHV0e/vGyRm24Rw2OTk=;
 b=J7PsWmTJ+6o12Do3ytS5wE9eU/sWkleECVrjl9d7grSMkdDHE3SL7zX79bsN5qjw5JCpi+ZMoopTZDI+o8Vx5HJT5XjxfWboHBSoyapxKPy6h6m0DAhLDvgnJJaNKE+RuNfZ0OyBU3/ymVdn3xza9rj4SL7GNJNIFoRTSBgluPI=
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Paul Durrant <pdurrant@amazon.com>, Paul Durrant <paul@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v10 5/7] vtd: use a bit field for root_entry
Thread-Topic: [PATCH v10 5/7] vtd: use a bit field for root_entry
Thread-Index: AQHWv0COaWiEdvBtMUu0J5DvTo0K7angCUXAgABza4CAAUYrEA==
Date: Tue, 1 Dec 2020 05:14:12 +0000
Message-ID: <MWHPR11MB16454C8AA702124013BE89F68CF40@MWHPR11MB1645.namprd11.prod.outlook.com>
References: <20201120132440.1141-1-paul@xen.org>
 <20201120132440.1141-6-paul@xen.org>
 <MWHPR11MB164520264945AF959D7A3ED28CF50@MWHPR11MB1645.namprd11.prod.outlook.com>
 <5962cbc3-5aaf-7855-e00d-fb525441f454@suse.com>
In-Reply-To: <5962cbc3-5aaf-7855-e00d-fb525441f454@suse.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

> From: Jan Beulich <jbeulich@suse.com>
> Sent: Monday, November 30, 2020 5:46 PM
> 
> On 30.11.2020 04:06, Tian, Kevin wrote:
> >> From: Paul Durrant <paul@xen.org>
> >> Sent: Friday, November 20, 2020 9:25 PM
> >>
> >> From: Paul Durrant <pdurrant@amazon.com>
> >>
> >> This makes the code a little easier to read and also makes it more
> consistent
> >> with iremap_entry.
> >>
> >> Also take the opportunity to tidy up the implementation of
> >> device_in_domain().
> >>
> >> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> >
> > Reviewed-by: <kevin.tian@intel.com>
> 
> Besides this looking a little odd (can be easily fixed of course)
> I wonder whether both here and for patch 6 you had seen my requests
> for smallish changes, and whether you meant to override those, or
> whether your R-b will continue to apply with them made.
> 

Let my R-b continue to apply. Those are small changes.

Thanks
Kevin


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 05:54:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 05:54:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: "Gustavo A. R. Silva" <gustavoars@kernel.org>
Cc: linux-kernel@vger.kernel.org, alsa-devel@alsa-project.org,
        amd-gfx@lists.freedesktop.org, bridge@lists.linux-foundation.org,
        ceph-devel@vger.kernel.org, cluster-devel@redhat.com,
        coreteam@netfilter.org, devel@driverdev.osuosl.org,
        dm-devel@redhat.com, drbd-dev@tron.linbit.com,
        dri-devel@lists.freedesktop.org, GR-everest-linux-l2@marvell.com,
        GR-Linux-NIC-Dev@marvell.com, intel-gfx@lists.freedesktop.org,
        intel-wired-lan@lists.osuosl.org, keyrings@vger.kernel.org,
        linux1394-devel@lists.sourceforge.net, linux-acpi@vger.kernel.org,
        linux-afs@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
        linux-arm-msm@vger.kernel.org, linux-atm-general@lists.sourceforge.net,
        linux-block@vger.kernel.org, linux-can@vger.kernel.org,
        linux-cifs@vger.kernel.org, linux-crypto@vger.kernel.org,
        linux-decnet-user@lists.sourceforge.net, linux-ext4@vger.kernel.org,
        linux-fbdev@vger.kernel.org, linux-geode@lists.infradead.org,
        linux-gpio@vger.kernel.org, linux-hams@vger.kernel.org,
        linux-hwmon@vger.kernel.org, linux-i3c@lists.infradead.org,
        linux-ide@vger.kernel.org, linux-iio@vger.kernel.org,
        linux-input@vger.kernel.org, linux-integrity@vger.kernel.org,
        linux-mediatek@lists.infradead.org, linux-media@vger.kernel.org,
        linux-mmc@vger.kernel.org, linux-mm@kvack.org,
        linux-mtd@lists.infradead.org, linux-nfs@vger.kernel.org,
        linux-rdma@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
        linux-scsi@vger.kernel.org, linux-sctp@vger.kernel.org,
        linux-security-module@vger.kernel.org,
        linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org,
        linux-watchdog@vger.kernel.org, linux-wireless@vger.kernel.org,
        netdev@vger.kernel.org, netfilter-devel@vger.kernel.org,
        nouveau@lists.freedesktop.org, op-tee@lists.trustedfirmware.org,
        oss-drivers@netronome.com, patches@opensource.cirrus.com,
        rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org,
        samba-technical@lists.samba.org, selinux@vger.kernel.org,
        target-devel@vger.kernel.org, tipc-discussion@lists.sourceforge.net,
        usb-storage@lists.one-eyed-alien.net,
        virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org,
        x86@kernel.org, xen-devel@lists.xenproject.org,
        linux-hardening@vger.kernel.org,
        Nick Desaulniers <ndesaulniers@google.com>,
        Nathan Chancellor <natechancellor@gmail.com>,
        Miguel Ojeda <ojeda@kernel.org>, Joe Perches <joe@perches.com>,
        Kees Cook <keescook@chromium.org>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
From: "Martin K. Petersen" <martin.petersen@oracle.com>
Organization: Oracle Corporation
Message-ID: <yq1h7p6gjkk.fsf@ca-mkp.ca.oracle.com>
References: <cover.1605896059.git.gustavoars@kernel.org>
Date: Tue, 01 Dec 2020 00:52:27 -0500
In-Reply-To: <cover.1605896059.git.gustavoars@kernel.org> (Gustavo
	A. R. Silva's message of "Fri, 20 Nov 2020 12:21:39 -0600")
MIME-Version: 1.0
Content-Type: text/plain


Gustavo,

> This series aims to fix almost all remaining fall-through warnings in
> order to enable -Wimplicit-fallthrough for Clang.

Applied 20-22,54,120-124 to 5.11/scsi-staging, thanks.

-- 
Martin K. Petersen	Oracle Linux Engineering


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 07:37:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 07:37:41 +0000
Subject: Re: [PATCH 04/16] x86/srat: vmap the pages for acpi_slit
To: Hongyan Xia <hx242@xen.org>
Cc: julien@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <cover.1588278317.git.hongyxia@amazon.com>
 <f4226fafcd333c0274fcee24601c280bf6494417.1588278317.git.hongyxia@amazon.com>
 <d41fee35-8889-3ab8-2a5e-f4b442747362@suse.com>
 <8118aa61528cb14acab8a399bd483557bd3c921e.camel@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <aa7cfc75-25dc-389f-24c2-4d78293ddd24@suse.com>
Date: Tue, 1 Dec 2020 08:37:14 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <8118aa61528cb14acab8a399bd483557bd3c921e.camel@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.11.2020 19:11, Hongyan Xia wrote:
> On Mon, 2020-11-30 at 11:16 +0100, Jan Beulich wrote:
>> On 30.04.2020 22:44, Hongyan Xia wrote:
>>> --- a/xen/arch/x86/srat.c
>>> +++ b/xen/arch/x86/srat.c
>>> @@ -196,7 +196,8 @@ void __init acpi_numa_slit_init(struct
>>> acpi_table_slit *slit)
>>>  		return;
>>>  	}
>>>  	mfn = alloc_boot_pages(PFN_UP(slit->header.length), 1);
>>> -	acpi_slit = mfn_to_virt(mfn_x(mfn));
>>> +	acpi_slit = vmap_boot_pages(mfn, PFN_UP(slit->header.length));
>>> +	BUG_ON(!acpi_slit);
>>>  	memcpy(acpi_slit, slit, slit->header.length);
>>>  }
>>
>> I'm not sure in how far this series is still to be considered
>> active / pending; I still have it in my inbox as something to
>> look at in any event. If it is, then I think the latest by this
>> patch it becomes clear that we either want to make vmalloc()
>> boot-allocator capable, or introduce e.g. vmalloc_boot().
>> Having this recurring pattern including the somewhat odd
>> vmap_boot_pages() is imo not the best way forward. It would
>> then also no longer be necessary to allocate contiguous pages,
>> as none of the users up to here appear to have such a need.
> 
> This series is blocked on the PTE domheap conversion series so I will
> definitely come back here after that series is merged.
> 
> vmap_boot_pages() (poorly named, there is nothing "boot" about it) is
> actually useful in other patches as well, especially when there is no
> direct map but we need to map a contiguous range, since
> map_domain_page() can only handle a single one. So I would say there
> will be a need for this function (maybe call it vmap_contig_pages()?)
> even if for this patch a boot-capable vmalloc can do the job.

Question is in how many cases contiguous allocations are actually
needed. I suspect there aren't many, and hence vmalloc() (or a
boot time clone of it, if need be) may again be the better choice.

Jan
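For reference, the helper being discussed might look roughly like the sketch below. This is a sketch only, not the actual patch: it assumes Xen's in-tree `vmap()`, which takes an array of MFNs, plus `xmalloc_array()`/`mfn_add()`, and it does not build outside the hypervisor tree.

```c
/*
 * Sketch of a vmap_contig_pages()-style helper: map `nr` physically
 * contiguous frames starting at `mfn` into vmap space by expanding
 * them into the MFN array that vmap() expects.
 */
static void *vmap_contig_pages(mfn_t mfn, unsigned int nr)
{
    unsigned int i;
    void *va = NULL;
    mfn_t *mfn_array = xmalloc_array(mfn_t, nr);

    if ( !mfn_array )
        return NULL;

    for ( i = 0; i < nr; i++ )
        mfn_array[i] = mfn_add(mfn, i);

    va = vmap(mfn_array, nr);

    xfree(mfn_array);
    return va;
}
```

Jan's point above is that if callers such as acpi_numa_slit_init() went through a boot-capable vmalloc() instead, the contiguity requirement (and this helper) could disappear entirely.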


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 07:55:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 07:55:30 +0000
Subject: Re: [PATCH V3 19/23] xen/arm: io: Abstract sign-extension
To: Oleksandr <olekstysh@gmail.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Oleksandr Tyshchenko <Oleksandr_Tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Julien Grall <julien.grall@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-20-git-send-email-olekstysh@gmail.com>
 <878sai7e1a.fsf@epam.com> <cad0d7fe-3a9f-3992-9d89-8e9bb438dfbe@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <93284ea1-e658-ffff-3223-174d633e38ad@suse.com>
Date: Tue, 1 Dec 2020 08:55:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <cad0d7fe-3a9f-3992-9d89-8e9bb438dfbe@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.12.2020 00:27, Oleksandr wrote:
> On 30.11.20 23:03, Volodymyr Babchuk wrote:
>> Oleksandr Tyshchenko writes:
>>> --- a/xen/include/asm-arm/traps.h
>>> +++ b/xen/include/asm-arm/traps.h
>>> @@ -83,6 +83,30 @@ static inline bool VABORT_GEN_BY_GUEST(const struct cpu_user_regs *regs)
>>>           (unsigned long)abort_guest_exit_end == regs->pc;
>>>   }
>>>   
>>> +/* Check whether the sign extension is required and perform it */
>>> +static inline register_t sign_extend(const struct hsr_dabt dabt, register_t r)
>>> +{
>>> +    uint8_t size = (1 << dabt.size) * 8;
>>> +
>>> +    /*
>>> +     * Sign extend if required.
>>> +     * Note that we expect the read handler to have zeroed the bits
>>> +     * outside the requested access size.
>>> +     */
>>> +    if ( dabt.sign && (r & (1UL << (size - 1))) )
>>> +    {
>>> +        /*
>>> +         * We are relying on register_t using the same as
>>> +         * an unsigned long in order to keep the 32-bit assembly
>>> +         * code smaller.
>>> +         */
>>> +        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
>>> +        r |= (~0UL) << size;
>> If `size` is 64, you will get undefined behavior there.
> I think, we don't need to worry about undefined behavior here. Having 
> size=64 would be possible with doubleword (dabt.size=3). But if "r" 
> adjustment gets called (I mean Syndrome Sign Extend bit is set) then
> we deal with byte, halfword or word operations (dabt.size<3). Or I 
> missed something?

At which point please put in a respective ASSERT(), possibly amended
by a brief comment.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 08:00:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 08:00:12 +0000
Subject: Re: [PATCH] Fix spelling errors.
To: Diederik de Haas <didi.debian@cknow.org>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xenproject.org
References: <a60e2c98183d7c873f4e306954f900614fcdb582.1606757711.git.didi.debian@cknow.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6a393816-c418-51c9-25c0-5622ef331099@suse.com>
Date: Tue, 1 Dec 2020 09:00:06 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <a60e2c98183d7c873f4e306954f900614fcdb582.1606757711.git.didi.debian@cknow.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.11.2020 18:39, Diederik de Haas wrote:
> --- a/xen/arch/x86/cpu/vpmu.c
> +++ b/xen/arch/x86/cpu/vpmu.c
> @@ -680,7 +680,7 @@ static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
>          vcpu_unpause(v);
>  }
>  
> -/* Dump some vpmu informations on console. Used in keyhandler dump_domains(). */
> +/* Dump some vpmu information on console. Used in keyhandler dump_domains(). */

Replace "on" by "to" at the same time?

> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
> @@ -3246,7 +3246,7 @@ x86_decode(
>              case 0x23: /* mov reg,dr */
>                  /*
>                   * Mov to/from cr/dr ignore the encoding of Mod, and behave as
> -                 * if they were encoded as reg/reg instructions.  No futher
> +                 * if they were encoded as reg/reg instructions. No further
>                   * disp/SIB bytes are fetched.

Please don't discard double blanks between comment sentences. More
often than not they're put this way intentionally.

Both could be easily adjusted while committing. Applicable parts
then
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 08:21:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 08:21:33 +0000
Date: Tue, 1 Dec 2020 02:20:47 -0600
From: "Gustavo A. R. Silva" <gustavoars@kernel.org>
To: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: linux-kernel@vger.kernel.org, alsa-devel@alsa-project.org,
	amd-gfx@lists.freedesktop.org, bridge@lists.linux-foundation.org,
	ceph-devel@vger.kernel.org, cluster-devel@redhat.com,
	coreteam@netfilter.org, devel@driverdev.osuosl.org,
	dm-devel@redhat.com, drbd-dev@tron.linbit.com,
	dri-devel@lists.freedesktop.org, GR-everest-linux-l2@marvell.com,
	GR-Linux-NIC-Dev@marvell.com, intel-gfx@lists.freedesktop.org,
	intel-wired-lan@lists.osuosl.org, keyrings@vger.kernel.org,
	linux1394-devel@lists.sourceforge.net, linux-acpi@vger.kernel.org,
	linux-afs@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
	linux-arm-msm@vger.kernel.org,
	linux-atm-general@lists.sourceforge.net,
	linux-block@vger.kernel.org, linux-can@vger.kernel.org,
	linux-cifs@vger.kernel.org, linux-crypto@vger.kernel.org,
	linux-decnet-user@lists.sourceforge.net, linux-ext4@vger.kernel.org,
	linux-fbdev@vger.kernel.org, linux-geode@lists.infradead.org,
	linux-gpio@vger.kernel.org, linux-hams@vger.kernel.org,
	linux-hwmon@vger.kernel.org, linux-i3c@lists.infradead.org,
	linux-ide@vger.kernel.org, linux-iio@vger.kernel.org,
	linux-input@vger.kernel.org, linux-integrity@vger.kernel.org,
	linux-mediatek@lists.infradead.org, linux-media@vger.kernel.org,
	linux-mmc@vger.kernel.org, linux-mm@kvack.org,
	linux-mtd@lists.infradead.org, linux-nfs@vger.kernel.org,
	linux-rdma@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-sctp@vger.kernel.org,
	linux-security-module@vger.kernel.org,
	linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org,
	linux-watchdog@vger.kernel.org, linux-wireless@vger.kernel.org,
	netdev@vger.kernel.org, netfilter-devel@vger.kernel.org,
	nouveau@lists.freedesktop.org, op-tee@lists.trustedfirmware.org,
	oss-drivers@netronome.com, patches@opensource.cirrus.com,
	rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org,
	samba-technical@lists.samba.org, selinux@vger.kernel.org,
	target-devel@vger.kernel.org, tipc-discussion@lists.sourceforge.net,
	usb-storage@lists.one-eyed-alien.net,
	virtualization@lists.linux-foundation.org,
	wcn36xx@lists.infradead.org, x86@kernel.org,
	xen-devel@lists.xenproject.org, linux-hardening@vger.kernel.org,
	Nick Desaulniers <ndesaulniers@google.com>,
	Nathan Chancellor <natechancellor@gmail.com>,
	Miguel Ojeda <ojeda@kernel.org>, Joe Perches <joe@perches.com>,
	Kees Cook <keescook@chromium.org>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
Message-ID: <20201201082047.GA11832@embeddedor>
References: <cover.1605896059.git.gustavoars@kernel.org>
 <yq1h7p6gjkk.fsf@ca-mkp.ca.oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <yq1h7p6gjkk.fsf@ca-mkp.ca.oracle.com>
User-Agent: Mutt/1.9.4 (2018-02-28)

On Tue, Dec 01, 2020 at 12:52:27AM -0500, Martin K. Petersen wrote:
> 
> Gustavo,
> 
> > This series aims to fix almost all remaining fall-through warnings in
> > order to enable -Wimplicit-fallthrough for Clang.
> 
> Applied 20-22,54,120-124 to 5.11/scsi-staging, thanks.

Awesome! :)

Thanks, Martin.
--
Gustavo


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 08:21:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 08:21:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41621.74909 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0uk-0004VQ-ER; Tue, 01 Dec 2020 08:21:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41621.74909; Tue, 01 Dec 2020 08:21:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0uk-0004VI-B8; Tue, 01 Dec 2020 08:21:34 +0000
Received: by outflank-mailman (input) for mailman id 41621;
 Tue, 01 Dec 2020 08:21:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UECe=FF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kk0ui-0004Uj-NE
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 08:21:32 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 24573929-9acf-4325-b865-1e4397cf6f40;
 Tue, 01 Dec 2020 08:21:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 23AC5AC2F;
 Tue,  1 Dec 2020 08:21:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 24573929-9acf-4325-b865-1e4397cf6f40
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606810891; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=uevcohjDdjyaRW4J/XJl3vfCGPtDJwSk8swOs/WEatk=;
	b=qmGLVYtTEwAPxQupeGB9PHxOUoB5b8zhWnwI/5dP1T9FMNI2wlSPT1uQZRV/2QP3WK32pu
	sVH1k4Z6Pcc/QzeWxFtbJqPxkGy8eavtdlCiMzvMnnLjTeFlM10K+f1NlHWRk1fqPaxCm1
	jMKKHgbP8tCbU0C547Rmfxe6cvXHB+0=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>
Subject: [PATCH v2 01/17] xen/cpupool: add cpu to sched_res_mask when removing it from cpupool
Date: Tue,  1 Dec 2020 09:21:12 +0100
Message-Id: <20201201082128.15239-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201201082128.15239-1-jgross@suse.com>
References: <20201201082128.15239-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When a cpu is removed from a cpupool and added to the free cpus, it
should be added to sched_res_mask, too.

The related removal from sched_res_mask in case of core scheduling
is already done in schedule_cpu_add().

As long as all cpupools share the same scheduling granularity, nothing
goes wrong because of the missing addition, but this will change when
per-cpupool granularity is fully supported.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Dario Faggioli <dfaggioli@suse.com>
---
 xen/common/sched/core.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index ed973e90ec..f8c81592af 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -3189,6 +3189,7 @@ int schedule_cpu_rm(unsigned int cpu)
             /* Adjust cpu masks of resources (old and new). */
             cpumask_clear_cpu(cpu_iter, sr->cpus);
             cpumask_set_cpu(cpu_iter, sr_new[idx]->cpus);
+            cpumask_set_cpu(cpu_iter, &sched_res_mask);
 
             /* Init timer. */
             init_timer(&sr_new[idx]->s_timer, s_timer_fn, NULL, cpu_iter);
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 08:21:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 08:21:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41622.74922 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0ul-0004XB-NU; Tue, 01 Dec 2020 08:21:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41622.74922; Tue, 01 Dec 2020 08:21:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0ul-0004X2-JX; Tue, 01 Dec 2020 08:21:35 +0000
Received: by outflank-mailman (input) for mailman id 41622;
 Tue, 01 Dec 2020 08:21:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UECe=FF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kk0uk-0004VK-E8
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 08:21:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ceb36211-fc02-48df-8343-c9d46e54bc3d;
 Tue, 01 Dec 2020 08:21:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6FB70AD8A;
 Tue,  1 Dec 2020 08:21:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ceb36211-fc02-48df-8343-c9d46e54bc3d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606810891; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jtGFdMweTBJDc6UpYyzBO+QE6apiq4QvvZ8PlcUPilY=;
	b=fV3iKp6ww6im4i9yanZ1LMMB8Ak9va/BEB82ONP/RQrpk/0g6J4y3TdLmdqhdELhulomiq
	SK0UF0OWf9EsHY+fEBlrvq7LSh/Uu1+7xBcxxBzFF6UBSgfF6k0BgdAfdLGaOnzR+ZV0ZO
	I/jVlJIcQVK3rwtM3cquFV5Lq8eYRO8=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	George Dunlap <george.dunlap@citrix.com>
Subject: [PATCH v2 03/17] xen/cpupool: sort included headers in cpupool.c
Date: Tue,  1 Dec 2020 09:21:14 +0100
Message-Id: <20201201082128.15239-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201201082128.15239-1-jgross@suse.com>
References: <20201201082128.15239-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Common style is to include header files in alphabetical order. Sort the
#include statements in cpupool.c accordingly.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Dario Faggioli <dfaggioli@suse.com>
---
 xen/common/sched/cpupool.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 6429c8f7b5..84f326ea63 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -11,15 +11,15 @@
  * (C) 2009, Juergen Gross, Fujitsu Technology Solutions
  */
 
-#include <xen/lib.h>
-#include <xen/init.h>
+#include <xen/cpu.h>
 #include <xen/cpumask.h>
+#include <xen/init.h>
+#include <xen/keyhandler.h>
+#include <xen/lib.h>
 #include <xen/param.h>
 #include <xen/percpu.h>
 #include <xen/sched.h>
 #include <xen/warning.h>
-#include <xen/keyhandler.h>
-#include <xen/cpu.h>
 
 #include "private.h"
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 08:21:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 08:21:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41623.74934 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0uo-0004Zw-16; Tue, 01 Dec 2020 08:21:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41623.74934; Tue, 01 Dec 2020 08:21:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0un-0004Zi-TT; Tue, 01 Dec 2020 08:21:37 +0000
Received: by outflank-mailman (input) for mailman id 41623;
 Tue, 01 Dec 2020 08:21:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UECe=FF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kk0un-0004VK-5O
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 08:21:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ee50f3c3-d2f7-44ea-9920-1b24a7ede1b2;
 Tue, 01 Dec 2020 08:21:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B77C0AD8C;
 Tue,  1 Dec 2020 08:21:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ee50f3c3-d2f7-44ea-9920-1b24a7ede1b2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606810891; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+NBEIZ/iUOPwcq2uRt/bFtNmsh/ACI+DXq98XdmLhFc=;
	b=hdZaX9B1RgAYsdA+smYMrR7sOXU2D4lXiNtmBlqk9Mf5/VXLwUiDI6UiBOcH796x2QcFil
	t1XytGeNcSiE5W7plZ3w9C+bxO3dHz3P3d+3YMb8fO1odqhQj5UuS7/hw5XFkLjQ1rSv0V
	Gsjm0vREuLbyTM3rt3LfR7pYlUl2S40=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 04/17] xen/cpupool: switch cpupool id to unsigned
Date: Tue,  1 Dec 2020 09:21:15 +0100
Message-Id: <20201201082128.15239-5-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201201082128.15239-1-jgross@suse.com>
References: <20201201082128.15239-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The cpupool id is an unsigned value in the public interface header, so
there is no reason for it to be a signed value in struct cpupool.

Switch it to unsigned int.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- new patch
---
 xen/common/sched/core.c    |  2 +-
 xen/common/sched/cpupool.c | 48 +++++++++++++++++++-------------------
 xen/common/sched/private.h |  8 +++----
 xen/include/xen/sched.h    |  4 ++--
 4 files changed, 31 insertions(+), 31 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index f8c81592af..6063f6d9ea 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -757,7 +757,7 @@ void sched_destroy_vcpu(struct vcpu *v)
     }
 }
 
-int sched_init_domain(struct domain *d, int poolid)
+int sched_init_domain(struct domain *d, unsigned int poolid)
 {
     void *sdom;
     int ret;
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 84f326ea63..01fa71dd00 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -187,7 +187,7 @@ static struct cpupool *alloc_cpupool_struct(void)
  * the searched id is returned
  * returns NULL if not found.
  */
-static struct cpupool *__cpupool_find_by_id(int id, bool exact)
+static struct cpupool *__cpupool_find_by_id(unsigned int id, bool exact)
 {
     struct cpupool **q;
 
@@ -200,12 +200,12 @@ static struct cpupool *__cpupool_find_by_id(int id, bool exact)
     return (!exact || (*q == NULL) || ((*q)->cpupool_id == id)) ? *q : NULL;
 }
 
-static struct cpupool *cpupool_find_by_id(int poolid)
+static struct cpupool *cpupool_find_by_id(unsigned int poolid)
 {
     return __cpupool_find_by_id(poolid, true);
 }
 
-static struct cpupool *__cpupool_get_by_id(int poolid, bool exact)
+static struct cpupool *__cpupool_get_by_id(unsigned int poolid, bool exact)
 {
     struct cpupool *c;
     spin_lock(&cpupool_lock);
@@ -216,12 +216,12 @@ static struct cpupool *__cpupool_get_by_id(int poolid, bool exact)
     return c;
 }
 
-struct cpupool *cpupool_get_by_id(int poolid)
+struct cpupool *cpupool_get_by_id(unsigned int poolid)
 {
     return __cpupool_get_by_id(poolid, true);
 }
 
-static struct cpupool *cpupool_get_next_by_id(int poolid)
+static struct cpupool *cpupool_get_next_by_id(unsigned int poolid)
 {
     return __cpupool_get_by_id(poolid, false);
 }
@@ -243,11 +243,11 @@ void cpupool_put(struct cpupool *pool)
  * - unknown scheduler
  */
 static struct cpupool *cpupool_create(
-    int poolid, unsigned int sched_id, int *perr)
+    unsigned int poolid, unsigned int sched_id, int *perr)
 {
     struct cpupool *c;
     struct cpupool **q;
-    int last = 0;
+    unsigned int last = 0;
 
     *perr = -ENOMEM;
     if ( (c = alloc_cpupool_struct()) == NULL )
@@ -256,7 +256,7 @@ static struct cpupool *cpupool_create(
     /* One reference for caller, one reference for cpupool_destroy(). */
     atomic_set(&c->refcnt, 2);
 
-    debugtrace_printk("cpupool_create(pool=%d,sched=%u)\n", poolid, sched_id);
+    debugtrace_printk("cpupool_create(pool=%u,sched=%u)\n", poolid, sched_id);
 
     spin_lock(&cpupool_lock);
 
@@ -295,7 +295,7 @@ static struct cpupool *cpupool_create(
 
     spin_unlock(&cpupool_lock);
 
-    debugtrace_printk("Created cpupool %d with scheduler %s (%s)\n",
+    debugtrace_printk("Created cpupool %u with scheduler %s (%s)\n",
                       c->cpupool_id, c->sched->name, c->sched->opt_name);
 
     *perr = 0;
@@ -337,7 +337,7 @@ static int cpupool_destroy(struct cpupool *c)
 
     cpupool_put(c);
 
-    debugtrace_printk("cpupool_destroy(pool=%d)\n", c->cpupool_id);
+    debugtrace_printk("cpupool_destroy(pool=%u)\n", c->cpupool_id);
     return 0;
 }
 
@@ -521,7 +521,7 @@ static long cpupool_unassign_cpu_helper(void *info)
     struct cpupool *c = info;
     long ret;
 
-    debugtrace_printk("cpupool_unassign_cpu(pool=%d,cpu=%d)\n",
+    debugtrace_printk("cpupool_unassign_cpu(pool=%u,cpu=%d)\n",
                       cpupool_cpu_moving->cpupool_id, cpupool_moving_cpu);
     spin_lock(&cpupool_lock);
 
@@ -551,7 +551,7 @@ static int cpupool_unassign_cpu(struct cpupool *c, unsigned int cpu)
     int ret;
     unsigned int master_cpu;
 
-    debugtrace_printk("cpupool_unassign_cpu(pool=%d,cpu=%d)\n",
+    debugtrace_printk("cpupool_unassign_cpu(pool=%u,cpu=%d)\n",
                       c->cpupool_id, cpu);
 
     if ( !cpu_online(cpu) )
@@ -561,7 +561,7 @@ static int cpupool_unassign_cpu(struct cpupool *c, unsigned int cpu)
     ret = cpupool_unassign_cpu_start(c, master_cpu);
     if ( ret )
     {
-        debugtrace_printk("cpupool_unassign_cpu(pool=%d,cpu=%d) ret %d\n",
+        debugtrace_printk("cpupool_unassign_cpu(pool=%u,cpu=%d) ret %d\n",
                           c->cpupool_id, cpu, ret);
         return ret;
     }
@@ -582,7 +582,7 @@ static int cpupool_unassign_cpu(struct cpupool *c, unsigned int cpu)
  * - pool does not exist
  * - no cpu assigned to pool
  */
-int cpupool_add_domain(struct domain *d, int poolid)
+int cpupool_add_domain(struct domain *d, unsigned int poolid)
 {
     struct cpupool *c;
     int rc;
@@ -604,7 +604,7 @@ int cpupool_add_domain(struct domain *d, int poolid)
         rc = 0;
     }
     spin_unlock(&cpupool_lock);
-    debugtrace_printk("cpupool_add_domain(dom=%d,pool=%d) n_dom %d rc %d\n",
+    debugtrace_printk("cpupool_add_domain(dom=%d,pool=%u) n_dom %d rc %d\n",
                       d->domain_id, poolid, n_dom, rc);
     return rc;
 }
@@ -614,7 +614,7 @@ int cpupool_add_domain(struct domain *d, int poolid)
  */
 void cpupool_rm_domain(struct domain *d)
 {
-    int cpupool_id;
+    unsigned int cpupool_id;
     int n_dom;
 
     if ( d->cpupool == NULL )
@@ -625,7 +625,7 @@ void cpupool_rm_domain(struct domain *d)
     n_dom = d->cpupool->n_dom;
     d->cpupool = NULL;
     spin_unlock(&cpupool_lock);
-    debugtrace_printk("cpupool_rm_domain(dom=%d,pool=%d) n_dom %d\n",
+    debugtrace_printk("cpupool_rm_domain(dom=%d,pool=%u) n_dom %d\n",
                       d->domain_id, cpupool_id, n_dom);
     return;
 }
@@ -767,7 +767,7 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
 
     case XEN_SYSCTL_CPUPOOL_OP_CREATE:
     {
-        int poolid;
+        unsigned int poolid;
 
         poolid = (op->cpupool_id == XEN_SYSCTL_CPUPOOL_PAR_ANY) ?
             CPUPOOLID_NONE: op->cpupool_id;
@@ -811,7 +811,7 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
         const cpumask_t *cpus;
 
         cpu = op->cpu;
-        debugtrace_printk("cpupool_assign_cpu(pool=%d,cpu=%d)\n",
+        debugtrace_printk("cpupool_assign_cpu(pool=%u,cpu=%u)\n",
                           op->cpupool_id, cpu);
 
         spin_lock(&cpupool_lock);
@@ -844,7 +844,7 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
 
     addcpu_out:
         spin_unlock(&cpupool_lock);
-        debugtrace_printk("cpupool_assign_cpu(pool=%d,cpu=%d) ret %d\n",
+        debugtrace_printk("cpupool_assign_cpu(pool=%u,cpu=%u) ret %d\n",
                           op->cpupool_id, cpu, ret);
 
     }
@@ -885,7 +885,7 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
             rcu_unlock_domain(d);
             break;
         }
-        debugtrace_printk("cpupool move_domain(dom=%d)->pool=%d\n",
+        debugtrace_printk("cpupool move_domain(dom=%d)->pool=%u\n",
                           d->domain_id, op->cpupool_id);
         ret = -ENOENT;
         spin_lock(&cpupool_lock);
@@ -895,7 +895,7 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
             ret = cpupool_move_domain_locked(d, c);
 
         spin_unlock(&cpupool_lock);
-        debugtrace_printk("cpupool move_domain(dom=%d)->pool=%d ret %d\n",
+        debugtrace_printk("cpupool move_domain(dom=%d)->pool=%u ret %d\n",
                           d->domain_id, op->cpupool_id, ret);
         rcu_unlock_domain(d);
     }
@@ -916,7 +916,7 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
     return ret;
 }
 
-int cpupool_get_id(const struct domain *d)
+unsigned int cpupool_get_id(const struct domain *d)
 {
     return d->cpupool ? d->cpupool->cpupool_id : CPUPOOLID_NONE;
 }
@@ -946,7 +946,7 @@ void dump_runq(unsigned char key)
 
     for_each_cpupool(c)
     {
-        printk("Cpupool %d:\n", (*c)->cpupool_id);
+        printk("Cpupool %u:\n", (*c)->cpupool_id);
         printk("Cpus: %*pbl\n", CPUMASK_PR((*c)->cpu_valid));
         sched_gran_print((*c)->gran, cpupool_get_granularity(*c));
         schedule_dump(*c);
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index 685992cab9..e69d9be1e8 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -505,8 +505,8 @@ static inline void sched_unit_unpause(const struct sched_unit *unit)
 
 struct cpupool
 {
-    int              cpupool_id;
-#define CPUPOOLID_NONE    (-1)
+    unsigned int     cpupool_id;
+#define CPUPOOLID_NONE    (~0U)
     unsigned int     n_dom;
     cpumask_var_t    cpu_valid;      /* all cpus assigned to pool */
     cpumask_var_t    res_valid;      /* all scheduling resources of pool */
@@ -601,9 +601,9 @@ int cpu_disable_scheduler(unsigned int cpu);
 int schedule_cpu_add(unsigned int cpu, struct cpupool *c);
 int schedule_cpu_rm(unsigned int cpu);
 int sched_move_domain(struct domain *d, struct cpupool *c);
-struct cpupool *cpupool_get_by_id(int poolid);
+struct cpupool *cpupool_get_by_id(unsigned int poolid);
 void cpupool_put(struct cpupool *pool);
-int cpupool_add_domain(struct domain *d, int poolid);
+int cpupool_add_domain(struct domain *d, unsigned int poolid);
 void cpupool_rm_domain(struct domain *d);
 
 #endif /* __XEN_SCHED_IF_H__ */
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index a345cc01f8..b2878e7b2a 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -691,7 +691,7 @@ void noreturn asm_domain_crash_synchronous(unsigned long addr);
 void scheduler_init(void);
 int  sched_init_vcpu(struct vcpu *v);
 void sched_destroy_vcpu(struct vcpu *v);
-int  sched_init_domain(struct domain *d, int poolid);
+int  sched_init_domain(struct domain *d, unsigned int poolid);
 void sched_destroy_domain(struct domain *d);
 long sched_adjust(struct domain *, struct xen_domctl_scheduler_op *);
 long sched_adjust_global(struct xen_sysctl_scheduler_op *);
@@ -1089,7 +1089,7 @@ static always_inline bool is_cpufreq_controller(const struct domain *d)
 
 int cpupool_move_domain(struct domain *d, struct cpupool *c);
 int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op);
-int cpupool_get_id(const struct domain *d);
+unsigned int cpupool_get_id(const struct domain *d);
 const cpumask_t *cpupool_valid_cpus(const struct cpupool *pool);
 extern void dump_runq(unsigned char key);
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 08:21:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 08:21:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41624.74942 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0uo-0004ax-K4; Tue, 01 Dec 2020 08:21:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41624.74942; Tue, 01 Dec 2020 08:21:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0uo-0004aa-9K; Tue, 01 Dec 2020 08:21:38 +0000
Received: by outflank-mailman (input) for mailman id 41624;
 Tue, 01 Dec 2020 08:21:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UECe=FF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kk0un-0004Uj-Il
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 08:21:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e3a9bb8f-12bc-44f7-ab8c-f8e9dabb0d80;
 Tue, 01 Dec 2020 08:21:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0963DAD71;
 Tue,  1 Dec 2020 08:21:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e3a9bb8f-12bc-44f7-ab8c-f8e9dabb0d80
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606810891; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=b421o4gqnXLovgEu9WEC6gmhWCRJ1CZH3fbnZ3VWOko=;
	b=XMg1+DsoO/GET8yTF8cQ33YvA+LOsnb6vddawc7zcfiWqsaUDz9B5rQSLDbyJOHgqgM7ve
	3JcDOzhdpXJG0q1BCOnbQdUSUsiFDRxm/LdwYCwWzO6De+SN+OEyRWCNs9qHsR0A5PCRdp
	mTZWRxE0VlDs+oz8gJlrMpm/PeJkLx8=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 00/17] xen: support per-cpupool scheduling granularity
Date: Tue,  1 Dec 2020 09:21:11 +0100
Message-Id: <20201201082128.15239-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Support scheduling granularity per cpupool. Setting the granularity is
done via hypfs, which needed to gain dynamic entries for that purpose.

Apart from the additional hypfs-related functionality, the main change
for cpupools is support for moving a domain to a cpupool with a
different granularity, as this requires modifying the scheduling
unit/vcpu relationship.

I have tried to do the hypfs modifications in a rather generic way in
order to be able to use the same infrastructure in other cases, too
(e.g. for per-domain entries).

The complete series has been tested by creating cpupools with different
granularities and moving busy and idle domains between those.

Changes in V2:
- Added several new patches, especially for some further cleanups in
  cpupool.c.
- Completely reworked the locking scheme with dynamic directories:
  locking of resources (cpupools in this series) is now done via new
  callbacks which are called when traversing the hypfs tree. This
  removes the need to add locking to each hypfs related cpupool
  function and it ensures data integrity across multiple callbacks.
- Reordered the first few patches in order to have already acked
  patches in pure cleanup patches first.
- Addressed several comments.

Juergen Gross (17):
  xen/cpupool: add cpu to sched_res_mask when removing it from cpupool
  xen/cpupool: add missing bits for per-cpupool scheduling granularity
  xen/cpupool: sort included headers in cpupool.c
  xen/cpupool: switch cpupool id to unsigned
  xen/cpupool: switch cpupool list to normal list interface
  xen/cpupool: use ERR_PTR() for returning error cause from
    cpupool_create()
  xen/cpupool: support moving domain between cpupools with different
    granularity
  docs: fix hypfs path documentation
  xen/hypfs: move per-node function pointers into a dedicated struct
  xen/hypfs: pass real failure reason up from hypfs_get_entry()
  xen/hypfs: add getsize() and findentry() callbacks to hypfs_funcs
  xen/hypfs: add new enter() and exit() per node callbacks
  xen/hypfs: support dynamic hypfs nodes
  xen/hypfs: add support for id-based dynamic directories
  xen/cpupool: add cpupool directories
  xen/cpupool: add scheduling granularity entry to cpupool entries
  xen/cpupool: make per-cpupool sched-gran hypfs node writable

 docs/misc/hypfs-paths.pandoc |  18 +-
 xen/common/hypfs.c           | 336 ++++++++++++++++++++++----
 xen/common/sched/core.c      | 137 ++++++++---
 xen/common/sched/cpupool.c   | 446 +++++++++++++++++++++++++++--------
 xen/common/sched/private.h   |  15 +-
 xen/include/xen/hypfs.h      | 140 ++++++++---
 xen/include/xen/param.h      |  15 +-
 xen/include/xen/sched.h      |   4 +-
 8 files changed, 875 insertions(+), 236 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 08:21:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 08:21:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41625.74958 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0ut-0004iL-5O; Tue, 01 Dec 2020 08:21:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41625.74958; Tue, 01 Dec 2020 08:21:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0ut-0004i9-1C; Tue, 01 Dec 2020 08:21:43 +0000
Received: by outflank-mailman (input) for mailman id 41625;
 Tue, 01 Dec 2020 08:21:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UECe=FF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kk0us-0004VK-5P
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 08:21:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7e073957-b4d3-4490-a2b8-ed309ab44ce7;
 Tue, 01 Dec 2020 08:21:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6EDDFAF0B;
 Tue,  1 Dec 2020 08:21:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7e073957-b4d3-4490-a2b8-ed309ab44ce7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606810892; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qB6FBv+cM43jZdWdb0tnpaGR6+tOQZRO3dWAITjC9q0=;
	b=MDEmfBgGtA6DduOQH1C265ZvdQ2g1BmM2oxV9IeH0i3JY28sWofN5W0mJt1SBw0VLG/3/y
	Xwk99r5DsCgc5iRluiuKF/q6TxjBq4/0/wQ+Yo5Bl+aR6dFVdbhWIl+AjBeakPAH+55Nmd
	QNdqGsjlsuEUWeWDk5xu/3+agETy7iM=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 08/17] docs: fix hypfs path documentation
Date: Tue,  1 Dec 2020 09:21:19 +0100
Message-Id: <20201201082128.15239-9-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201201082128.15239-1-jgross@suse.com>
References: <20201201082128.15239-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The /params/* entry is missing a writable tag.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 docs/misc/hypfs-paths.pandoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index dddb592bc5..6c7b2f7ee3 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -179,7 +179,7 @@ The minor version of Xen.
 
 A directory of runtime parameters.
 
-#### /params/*
+#### /params/* [w]
 
 The individual parameters. The description of the different parameters can be
 found in `docs/misc/xen-command-line.pandoc`.
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 08:21:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 08:21:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41626.74966 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0ut-0004jW-QB; Tue, 01 Dec 2020 08:21:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41626.74966; Tue, 01 Dec 2020 08:21:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0ut-0004j7-EH; Tue, 01 Dec 2020 08:21:43 +0000
Received: by outflank-mailman (input) for mailman id 41626;
 Tue, 01 Dec 2020 08:21:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UECe=FF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kk0us-0004Uj-Ip
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 08:21:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6c5f4ada-2f3d-4529-8a2f-4cd8dd020c8a;
 Tue, 01 Dec 2020 08:21:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 47B51AD75;
 Tue,  1 Dec 2020 08:21:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c5f4ada-2f3d-4529-8a2f-4cd8dd020c8a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606810891; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ykS7CP8XhSozhB7wIOpYgHQJFC/AfOdzJDIhKXHUBwQ=;
	b=UdjbPyXi8Vb+7zlaMLgGUS4u+pFTXmkJPEKnOgIESqKr5fz57ciQ+DWuNyqzZVVjMA1AYd
	d0VXWMH6CDFwm9JQHmTGgnnssKaMVSctkvpzuJDOMSJSn9QLTvuHpZH99hkYXPvBUdUSAq
	NPeSUB247Q3iLQ4JZlLrTgVnaIQo5lM=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	George Dunlap <george.dunlap@citrix.com>
Subject: [PATCH v2 02/17] xen/cpupool: add missing bits for per-cpupool scheduling granularity
Date: Tue,  1 Dec 2020 09:21:13 +0100
Message-Id: <20201201082128.15239-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201201082128.15239-1-jgross@suse.com>
References: <20201201082128.15239-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Even though the scheduling granularity is stored in struct cpupool,
a few bits are still missing for supporting cpupools with different
granularities (apart from the missing interface for setting the
individual granularities): the number of cpus in a scheduling unit
is always taken from the global sched_granularity variable.

So store the value in struct cpupool and use that instead of
sched_granularity.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Dario Faggioli <dfaggioli@suse.com>
---
 xen/common/sched/cpupool.c | 3 ++-
 xen/common/sched/private.h | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 7ea641ca26..6429c8f7b5 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -151,7 +151,7 @@ static void __init cpupool_gran_init(void)
 
 unsigned int cpupool_get_granularity(const struct cpupool *c)
 {
-    return c ? sched_granularity : 1;
+    return c ? c->sched_gran : 1;
 }
 
 static void free_cpupool_struct(struct cpupool *c)
@@ -289,6 +289,7 @@ static struct cpupool *cpupool_create(
     }
     c->sched->cpupool = c;
     c->gran = opt_sched_granularity;
+    c->sched_gran = sched_granularity;
 
     *q = c;
 
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index df50976eb2..685992cab9 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -514,6 +514,7 @@ struct cpupool
     struct scheduler *sched;
     atomic_t         refcnt;
     enum sched_gran  gran;
+    unsigned int     sched_gran;     /* Number of cpus per sched-item. */
 };
 
 static inline cpumask_t *cpupool_domain_master_cpumask(const struct domain *d)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 08:21:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 08:21:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41629.74982 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0uz-0004tc-4z; Tue, 01 Dec 2020 08:21:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41629.74982; Tue, 01 Dec 2020 08:21:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0uz-0004tT-0C; Tue, 01 Dec 2020 08:21:49 +0000
Received: by outflank-mailman (input) for mailman id 41629;
 Tue, 01 Dec 2020 08:21:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UECe=FF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kk0ux-0004VK-5f
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 08:21:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 47543d46-c993-4e43-b9b2-580602c148f4;
 Tue, 01 Dec 2020 08:21:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 08B27ADB3;
 Tue,  1 Dec 2020 08:21:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 47543d46-c993-4e43-b9b2-580602c148f4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606810892; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=GDlThUHp1khXcMw14yFbWzJcy5msoaFCmMjGVwghDTw=;
	b=RIPBYEVfMZ0L6sdH0/scPIO3SUWJ8lAPqqmZDol9/cWCRirYglJ1FgSX3NSmO3Nu5pX/Lq
	4WFVSznlXIbgXjQ2yZ9id3JAVFzKg0DdKbWCHcvMaUAB8PRGhVyc3eV/Rwol412BKcfIOD
	ikPqSQ/8m2KDtAfgCoLyP1ZQYOA4A+U=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>
Subject: [PATCH v2 06/17] xen/cpupool: use ERR_PTR() for returning error cause from cpupool_create()
Date: Tue,  1 Dec 2020 09:21:17 +0100
Message-Id: <20201201082128.15239-7-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201201082128.15239-1-jgross@suse.com>
References: <20201201082128.15239-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of passing a pointer to an error variable as a parameter, just
use ERR_PTR() to return the cause of an error from cpupool_create().

This propagates to scheduler_alloc(), too.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- new patch
---
 xen/common/sched/core.c    | 13 ++++++-------
 xen/common/sched/cpupool.c | 38 ++++++++++++++++++++------------------
 xen/common/sched/private.h |  2 +-
 3 files changed, 27 insertions(+), 26 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 6063f6d9ea..a429fc7640 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -3233,26 +3233,25 @@ struct scheduler *scheduler_get_default(void)
     return &ops;
 }
 
-struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr)
+struct scheduler *scheduler_alloc(unsigned int sched_id)
 {
     int i;
+    int ret;
     struct scheduler *sched;
 
     for ( i = 0; i < NUM_SCHEDULERS; i++ )
         if ( schedulers[i] && schedulers[i]->sched_id == sched_id )
             goto found;
-    *perr = -ENOENT;
-    return NULL;
+    return ERR_PTR(-ENOENT);
 
  found:
-    *perr = -ENOMEM;
     if ( (sched = xmalloc(struct scheduler)) == NULL )
-        return NULL;
+        return ERR_PTR(-ENOMEM);
     memcpy(sched, schedulers[i], sizeof(*sched));
-    if ( (*perr = sched_init(sched)) != 0 )
+    if ( (ret = sched_init(sched)) != 0 )
     {
         xfree(sched);
-        sched = NULL;
+        sched = ERR_PTR(ret);
     }
 
     return sched;
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 714cd47ae9..0db7d77219 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -240,15 +240,15 @@ void cpupool_put(struct cpupool *pool)
  * - poolid already used
  * - unknown scheduler
  */
-static struct cpupool *cpupool_create(
-    unsigned int poolid, unsigned int sched_id, int *perr)
+static struct cpupool *cpupool_create(unsigned int poolid,
+                                      unsigned int sched_id)
 {
     struct cpupool *c;
     struct cpupool *q;
+    int ret;
 
-    *perr = -ENOMEM;
     if ( (c = alloc_cpupool_struct()) == NULL )
-        return NULL;
+        return ERR_PTR(-ENOMEM);
 
     /* One reference for caller, one reference for cpupool_destroy(). */
     atomic_set(&c->refcnt, 2);
@@ -267,7 +267,7 @@ static struct cpupool *cpupool_create(
             list_add_tail(&c->list, &q->list);
             if ( q->cpupool_id == poolid )
             {
-                *perr = -EEXIST;
+                ret = -EEXIST;
                 goto err;
             }
         }
@@ -294,15 +294,15 @@ static struct cpupool *cpupool_create(
     }
 
     if ( poolid == 0 )
-    {
         c->sched = scheduler_get_default();
-    }
     else
+        c->sched = scheduler_alloc(sched_id);
+    if ( IS_ERR(c->sched) )
     {
-        c->sched = scheduler_alloc(sched_id, perr);
-        if ( c->sched == NULL )
-            goto err;
+        ret = PTR_ERR(c->sched);
+        goto err;
     }
+
     c->sched->cpupool = c;
     c->gran = opt_sched_granularity;
     c->sched_gran = sched_granularity;
@@ -312,15 +312,16 @@ static struct cpupool *cpupool_create(
     debugtrace_printk("Created cpupool %u with scheduler %s (%s)\n",
                       c->cpupool_id, c->sched->name, c->sched->opt_name);
 
-    *perr = 0;
     return c;
 
  err:
     list_del(&c->list);
 
     spin_unlock(&cpupool_lock);
+
     free_cpupool_struct(c);
-    return NULL;
+
+    return ERR_PTR(ret);
 }
 /*
  * destroys the given cpupool
@@ -767,7 +768,7 @@ static void cpupool_cpu_remove_forced(unsigned int cpu)
  */
 int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
 {
-    int ret;
+    int ret = 0;
     struct cpupool *c;
 
     switch ( op->op )
@@ -779,8 +780,10 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
 
         poolid = (op->cpupool_id == XEN_SYSCTL_CPUPOOL_PAR_ANY) ?
             CPUPOOLID_NONE: op->cpupool_id;
-        c = cpupool_create(poolid, op->sched_id, &ret);
-        if ( c != NULL )
+        c = cpupool_create(poolid, op->sched_id);
+        if ( IS_ERR(c) )
+            ret = PTR_ERR(c);
+        else
         {
             op->cpupool_id = c->cpupool_id;
             cpupool_put(c);
@@ -1003,12 +1006,11 @@ static struct notifier_block cpu_nfb = {
 static int __init cpupool_init(void)
 {
     unsigned int cpu;
-    int err;
 
     cpupool_gran_init();
 
-    cpupool0 = cpupool_create(0, 0, &err);
-    BUG_ON(cpupool0 == NULL);
+    cpupool0 = cpupool_create(0, 0);
+    BUG_ON(IS_ERR(cpupool0));
     cpupool_put(cpupool0);
     register_cpu_notifier(&cpu_nfb);
 
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index 6953cefa6e..92d0d49610 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -597,7 +597,7 @@ void sched_rm_cpu(unsigned int cpu);
 const cpumask_t *sched_get_opt_cpumask(enum sched_gran opt, unsigned int cpu);
 void schedule_dump(struct cpupool *c);
 struct scheduler *scheduler_get_default(void);
-struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr);
+struct scheduler *scheduler_alloc(unsigned int sched_id);
 void scheduler_free(struct scheduler *sched);
 int cpu_disable_scheduler(unsigned int cpu);
 int schedule_cpu_add(unsigned int cpu, struct cpupool *c);
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 08:21:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 08:21:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41630.74988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0uz-0004v6-Uc; Tue, 01 Dec 2020 08:21:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41630.74988; Tue, 01 Dec 2020 08:21:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0uz-0004uW-FQ; Tue, 01 Dec 2020 08:21:49 +0000
Received: by outflank-mailman (input) for mailman id 41630;
 Tue, 01 Dec 2020 08:21:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UECe=FF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kk0ux-0004Uj-JC
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 08:21:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d4e9ee5b-0ccc-45a7-ab93-42acbd9a5809;
 Tue, 01 Dec 2020 08:21:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D6DE7ADA2;
 Tue,  1 Dec 2020 08:21:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d4e9ee5b-0ccc-45a7-ab93-42acbd9a5809
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606810891; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=mU9Pbrvb5/SbczGpyv+43AFg9f7JL1E6nM4w4+Kc8Ys=;
	b=ILvsAfc3DQfxMeBh7/UeyY7thcPBDBnKKQx/m85uM/6vnUDiWbyPot23PoNk501NzKpZEM
	KKYg3q5EdesuiQhDwclZrDFL62v2x8JPBIqUHpyfsi2HGSgabmKh+LC1RrVB5LNMxo5k/E
	vbgd9p2QtYA+jDJDCKCnQ77jyX/Ezns=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	George Dunlap <george.dunlap@citrix.com>
Subject: [PATCH v2 05/17] xen/cpupool: switch cpupool list to normal list interface
Date: Tue,  1 Dec 2020 09:21:16 +0100
Message-Id: <20201201082128.15239-6-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201201082128.15239-1-jgross@suse.com>
References: <20201201082128.15239-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of open coding a linked list, just use the available
functionality from list.h.

The allocation of a new cpupool id is not aware of a possible wrap.
Fix that.

While adding the required new include to private.h sort the includes.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- new patch
---
 xen/common/sched/cpupool.c | 100 ++++++++++++++++++++-----------------
 xen/common/sched/private.h |   4 +-
 2 files changed, 57 insertions(+), 47 deletions(-)

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 01fa71dd00..714cd47ae9 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -16,6 +16,7 @@
 #include <xen/init.h>
 #include <xen/keyhandler.h>
 #include <xen/lib.h>
+#include <xen/list.h>
 #include <xen/param.h>
 #include <xen/percpu.h>
 #include <xen/sched.h>
@@ -23,13 +24,10 @@
 
 #include "private.h"
 
-#define for_each_cpupool(ptr)    \
-    for ((ptr) = &cpupool_list; *(ptr) != NULL; (ptr) = &((*(ptr))->next))
-
 struct cpupool *cpupool0;                /* Initial cpupool with Dom0 */
 cpumask_t cpupool_free_cpus;             /* cpus not in any cpupool */
 
-static struct cpupool *cpupool_list;     /* linked list, sorted by poolid */
+static LIST_HEAD(cpupool_list);          /* linked list, sorted by poolid */
 
 static int cpupool_moving_cpu = -1;
 static struct cpupool *cpupool_cpu_moving = NULL;
@@ -189,15 +187,15 @@ static struct cpupool *alloc_cpupool_struct(void)
  */
 static struct cpupool *__cpupool_find_by_id(unsigned int id, bool exact)
 {
-    struct cpupool **q;
+    struct cpupool *q;
 
     ASSERT(spin_is_locked(&cpupool_lock));
 
-    for_each_cpupool(q)
-        if ( (*q)->cpupool_id >= id )
-            break;
+    list_for_each_entry(q, &cpupool_list, list)
+        if ( q->cpupool_id == id || (!exact && q->cpupool_id > id) )
+            return q;
 
-    return (!exact || (*q == NULL) || ((*q)->cpupool_id == id)) ? *q : NULL;
+    return NULL;
 }
 
 static struct cpupool *cpupool_find_by_id(unsigned int poolid)
@@ -246,8 +244,7 @@ static struct cpupool *cpupool_create(
     unsigned int poolid, unsigned int sched_id, int *perr)
 {
     struct cpupool *c;
-    struct cpupool **q;
-    unsigned int last = 0;
+    struct cpupool *q;
 
     *perr = -ENOMEM;
     if ( (c = alloc_cpupool_struct()) == NULL )
@@ -260,23 +257,42 @@ static struct cpupool *cpupool_create(
 
     spin_lock(&cpupool_lock);
 
-    for_each_cpupool(q)
+    if ( poolid != CPUPOOLID_NONE )
     {
-        last = (*q)->cpupool_id;
-        if ( (poolid != CPUPOOLID_NONE) && (last >= poolid) )
-            break;
+        q = __cpupool_find_by_id(poolid, false);
+        if ( !q )
+            list_add_tail(&c->list, &cpupool_list);
+        else
+        {
+            list_add_tail(&c->list, &q->list);
+            if ( q->cpupool_id == poolid )
+            {
+                *perr = -EEXIST;
+                goto err;
+            }
+        }
+
+        c->cpupool_id = poolid;
     }
-    if ( *q != NULL )
+    else
     {
-        if ( (*q)->cpupool_id == poolid )
+        /* Cpupool 0 is created with specified id at boot and never removed. */
+        ASSERT(!list_empty(&cpupool_list));
+
+        q = list_last_entry(&cpupool_list, struct cpupool, list);
+        /* In case of wrap search for first free id. */
+        if ( q->cpupool_id == CPUPOOLID_NONE - 1 )
         {
-            *perr = -EEXIST;
-            goto err;
+            list_for_each_entry(q, &cpupool_list, list)
+                if ( q->cpupool_id + 1 != list_next_entry(q, list)->cpupool_id )
+                    break;
         }
-        c->next = *q;
+
+        list_add(&c->list, &q->list);
+
+        c->cpupool_id = q->cpupool_id + 1;
     }
 
-    c->cpupool_id = (poolid == CPUPOOLID_NONE) ? (last + 1) : poolid;
     if ( poolid == 0 )
     {
         c->sched = scheduler_get_default();
@@ -291,8 +307,6 @@ static struct cpupool *cpupool_create(
     c->gran = opt_sched_granularity;
     c->sched_gran = sched_granularity;
 
-    *q = c;
-
     spin_unlock(&cpupool_lock);
 
     debugtrace_printk("Created cpupool %u with scheduler %s (%s)\n",
@@ -302,6 +316,8 @@ static struct cpupool *cpupool_create(
     return c;
 
  err:
+    list_del(&c->list);
+
     spin_unlock(&cpupool_lock);
     free_cpupool_struct(c);
     return NULL;
@@ -312,27 +328,19 @@ static struct cpupool *cpupool_create(
  * possible failures:
  * - pool still in use
  * - cpus still assigned to pool
- * - pool not in list
  */
 static int cpupool_destroy(struct cpupool *c)
 {
-    struct cpupool **q;
-
     spin_lock(&cpupool_lock);
-    for_each_cpupool(q)
-        if ( *q == c )
-            break;
-    if ( *q != c )
-    {
-        spin_unlock(&cpupool_lock);
-        return -ENOENT;
-    }
+
     if ( (c->n_dom != 0) || cpumask_weight(c->cpu_valid) )
     {
         spin_unlock(&cpupool_lock);
         return -EBUSY;
     }
-    *q = c->next;
+
+    list_del(&c->list);
+
     spin_unlock(&cpupool_lock);
 
     cpupool_put(c);
@@ -732,17 +740,17 @@ static int cpupool_cpu_remove_prologue(unsigned int cpu)
  */
 static void cpupool_cpu_remove_forced(unsigned int cpu)
 {
-    struct cpupool **c;
+    struct cpupool *c;
     int ret;
     unsigned int master_cpu = sched_get_resource_cpu(cpu);
 
-    for_each_cpupool ( c )
+    list_for_each_entry(c, &cpupool_list, list)
     {
-        if ( cpumask_test_cpu(master_cpu, (*c)->cpu_valid) )
+        if ( cpumask_test_cpu(master_cpu, c->cpu_valid) )
         {
-            ret = cpupool_unassign_cpu_start(*c, master_cpu);
+            ret = cpupool_unassign_cpu_start(c, master_cpu);
             BUG_ON(ret);
-            ret = cpupool_unassign_cpu_finish(*c);
+            ret = cpupool_unassign_cpu_finish(c);
             BUG_ON(ret);
         }
     }
@@ -929,7 +937,7 @@ const cpumask_t *cpupool_valid_cpus(const struct cpupool *pool)
 void dump_runq(unsigned char key)
 {
     s_time_t         now = NOW();
-    struct cpupool **c;
+    struct cpupool *c;
 
     spin_lock(&cpupool_lock);
 
@@ -944,12 +952,12 @@ void dump_runq(unsigned char key)
         schedule_dump(NULL);
     }
 
-    for_each_cpupool(c)
+    list_for_each_entry(c, &cpupool_list, list)
     {
-        printk("Cpupool %u:\n", (*c)->cpupool_id);
-        printk("Cpus: %*pbl\n", CPUMASK_PR((*c)->cpu_valid));
-        sched_gran_print((*c)->gran, cpupool_get_granularity(*c));
-        schedule_dump(*c);
+        printk("Cpupool %u:\n", c->cpupool_id);
+        printk("Cpus: %*pbl\n", CPUMASK_PR(c->cpu_valid));
+        sched_gran_print(c->gran, cpupool_get_granularity(c));
+        schedule_dump(c);
     }
 
     spin_unlock(&cpupool_lock);
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index e69d9be1e8..6953cefa6e 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -8,8 +8,9 @@
 #ifndef __XEN_SCHED_IF_H__
 #define __XEN_SCHED_IF_H__
 
-#include <xen/percpu.h>
 #include <xen/err.h>
+#include <xen/list.h>
+#include <xen/percpu.h>
 #include <xen/rcupdate.h>
 
 /* cpus currently in no cpupool */
@@ -510,6 +511,7 @@ struct cpupool
     unsigned int     n_dom;
     cpumask_var_t    cpu_valid;      /* all cpus assigned to pool */
     cpumask_var_t    res_valid;      /* all scheduling resources of pool */
+    struct list_head list;
     struct cpupool   *next;
     struct scheduler *sched;
     atomic_t         refcnt;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 08:21:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 08:21:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41639.75006 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0v4-00054R-9L; Tue, 01 Dec 2020 08:21:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41639.75006; Tue, 01 Dec 2020 08:21:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0v4-00054E-4b; Tue, 01 Dec 2020 08:21:54 +0000
Received: by outflank-mailman (input) for mailman id 41639;
 Tue, 01 Dec 2020 08:21:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UECe=FF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kk0v2-0004VK-5i
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 08:21:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1c2e290f-45b7-4f0b-88e8-d03f1bef03bd;
 Tue, 01 Dec 2020 08:21:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AB4D9AF0C;
 Tue,  1 Dec 2020 08:21:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c2e290f-45b7-4f0b-88e8-d03f1bef03bd
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606810892; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kUqinpHgs7lQk4VZmE9ndgJqbAEUMbPxpoEKorm7mzA=;
	b=aLg6UtsdqCn5u7PfntKCiyCpCF7kvj0R65v/6un82KaisYDyo2w/IllrDI1vPDMZ7C6asF
	fZ6A/M0p8FXu0LnT+JFWn3Lf5nqHF47aEk9dVs8F9qe2PTnaAHulrmq3lsNiTSHUIO2yno
	L8DmQOD0hJeA4od635U9bI5hsdEPp+w=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 09/17] xen/hypfs: move per-node function pointers into a dedicated struct
Date: Tue,  1 Dec 2020 09:21:20 +0100
Message-Id: <20201201082128.15239-10-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201201082128.15239-1-jgross@suse.com>
References: <20201201082128.15239-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move the function pointers currently stored in each hypfs node into a
dedicated structure in order to save some space per node. This will
save even more space as additional callbacks are added in future.

Provide some standard function vectors.

Instead of testing the write pointer for being non-NULL, provide a
dummy function just returning -EACCES. ASSERT() that all vector entries
are populated when adding a node. This avoids any potential problem
(e.g. PV domain privilege escalation) in case a non-populated vector
entry were called.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- make function vector const (Jan Beulich)
- don't allow any NULL entry (Jan Beulich)
- add callback comment
---
 xen/common/hypfs.c      | 41 ++++++++++++++++++++----
 xen/include/xen/hypfs.h | 71 ++++++++++++++++++++++++++++-------------
 xen/include/xen/param.h | 15 +++------
 3 files changed, 88 insertions(+), 39 deletions(-)

diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index 8e932b5cf9..7befd144ba 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -24,6 +24,27 @@ CHECK_hypfs_dirlistentry;
     (DIRENTRY_NAME_OFF +        \
      ROUNDUP((name_len) + 1, alignof(struct xen_hypfs_direntry)))
 
+const struct hypfs_funcs hypfs_dir_funcs = {
+    .read = hypfs_read_dir,
+    .write = hypfs_write_deny,
+};
+const struct hypfs_funcs hypfs_leaf_ro_funcs = {
+    .read = hypfs_read_leaf,
+    .write = hypfs_write_deny,
+};
+const struct hypfs_funcs hypfs_leaf_wr_funcs = {
+    .read = hypfs_read_leaf,
+    .write = hypfs_write_leaf,
+};
+const struct hypfs_funcs hypfs_bool_wr_funcs = {
+    .read = hypfs_read_leaf,
+    .write = hypfs_write_bool,
+};
+const struct hypfs_funcs hypfs_custom_wr_funcs = {
+    .read = hypfs_read_leaf,
+    .write = hypfs_write_custom,
+};
+
 static DEFINE_RWLOCK(hypfs_lock);
 enum hypfs_lock_state {
     hypfs_unlocked,
@@ -74,6 +95,9 @@ static int add_entry(struct hypfs_entry_dir *parent, struct hypfs_entry *new)
     int ret = -ENOENT;
     struct hypfs_entry *e;
 
+    ASSERT(new->funcs->read);
+    ASSERT(new->funcs->write);
+
     hypfs_write_lock();
 
     list_for_each_entry ( e, &parent->dirlist, list )
@@ -284,7 +308,7 @@ static int hypfs_read(const struct hypfs_entry *entry,
 
     guest_handle_add_offset(uaddr, sizeof(e));
 
-    ret = entry->read(entry, uaddr);
+    ret = entry->funcs->read(entry, uaddr);
 
  out:
     return ret;
@@ -297,6 +321,7 @@ int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
     int ret;
 
     ASSERT(this_cpu(hypfs_locked) == hypfs_write_locked);
+    ASSERT(leaf->e.max_size);
 
     if ( ulen > leaf->e.max_size )
         return -ENOSPC;
@@ -357,6 +382,7 @@ int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
     int ret;
 
     ASSERT(this_cpu(hypfs_locked) == hypfs_write_locked);
+    ASSERT(leaf->e.max_size);
 
     /* Avoid oversized buffer allocation. */
     if ( ulen > MAX_PARAM_SIZE )
@@ -382,19 +408,20 @@ int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
     return ret;
 }
 
+int hypfs_write_deny(struct hypfs_entry_leaf *leaf,
+                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen)
+{
+    return -EACCES;
+}
+
 static int hypfs_write(struct hypfs_entry *entry,
                        XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)
 {
     struct hypfs_entry_leaf *l;
 
-    if ( !entry->write )
-        return -EACCES;
-
-    ASSERT(entry->max_size);
-
     l = container_of(entry, struct hypfs_entry_leaf, e);
 
-    return entry->write(l, uaddr, ulen);
+    return entry->funcs->write(l, uaddr, ulen);
 }
 
 long do_hypfs_op(unsigned int cmd,
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index 5ad99cb558..25fdf3ead7 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -7,6 +7,32 @@
 #include <public/hypfs.h>
 
 struct hypfs_entry_leaf;
+struct hypfs_entry;
+
+/*
+ * Per-node callbacks:
+ *
+ * The callbacks are always called with the hypfs lock held.
+ *
+ * The read() callback is used to return the contents of a node (either
+ * directory or leaf). It is NOT used to get directory entries during traversal
+ * of the tree.
+ *
+ * The write() callback is used to modify the contents of a node. Writing
+ * directories is not supported (this means all nodes are added at boot time).
+ */
+struct hypfs_funcs {
+    int (*read)(const struct hypfs_entry *entry,
+                XEN_GUEST_HANDLE_PARAM(void) uaddr);
+    int (*write)(struct hypfs_entry_leaf *leaf,
+                 XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+};
+
+extern const struct hypfs_funcs hypfs_dir_funcs;
+extern const struct hypfs_funcs hypfs_leaf_ro_funcs;
+extern const struct hypfs_funcs hypfs_leaf_wr_funcs;
+extern const struct hypfs_funcs hypfs_bool_wr_funcs;
+extern const struct hypfs_funcs hypfs_custom_wr_funcs;
 
 struct hypfs_entry {
     unsigned short type;
@@ -15,10 +41,7 @@ struct hypfs_entry {
     unsigned int max_size;
     const char *name;
     struct list_head list;
-    int (*read)(const struct hypfs_entry *entry,
-                XEN_GUEST_HANDLE_PARAM(void) uaddr);
-    int (*write)(struct hypfs_entry_leaf *leaf,
-                 XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+    const struct hypfs_funcs *funcs;
 };
 
 struct hypfs_entry_leaf {
@@ -42,7 +65,7 @@ struct hypfs_entry_dir {
         .e.size = 0,                              \
         .e.max_size = 0,                          \
         .e.list = LIST_HEAD_INIT(var.e.list),     \
-        .e.read = hypfs_read_dir,                 \
+        .e.funcs = &hypfs_dir_funcs,              \
         .dirlist = LIST_HEAD_INIT(var.dirlist),   \
     }
 
@@ -52,7 +75,7 @@ struct hypfs_entry_dir {
         .e.encoding = XEN_HYPFS_ENC_PLAIN,        \
         .e.name = (nam),                          \
         .e.max_size = (msz),                      \
-        .e.read = hypfs_read_leaf,                \
+        .e.funcs = &hypfs_leaf_ro_funcs,          \
     }
 
 /* Content and size need to be set via hypfs_string_set_reference(). */
@@ -72,35 +95,37 @@ static inline void hypfs_string_set_reference(struct hypfs_entry_leaf *leaf,
     leaf->e.size = strlen(str) + 1;
 }
 
-#define HYPFS_FIXEDSIZE_INIT(var, typ, nam, contvar, wr) \
-    struct hypfs_entry_leaf __read_mostly var = {        \
-        .e.type = (typ),                                 \
-        .e.encoding = XEN_HYPFS_ENC_PLAIN,               \
-        .e.name = (nam),                                 \
-        .e.size = sizeof(contvar),                       \
-        .e.max_size = (wr) ? sizeof(contvar) : 0,        \
-        .e.read = hypfs_read_leaf,                       \
-        .e.write = (wr),                                 \
-        .u.content = &(contvar),                         \
+#define HYPFS_FIXEDSIZE_INIT(var, typ, nam, contvar, fn, wr) \
+    struct hypfs_entry_leaf __read_mostly var = {            \
+        .e.type = (typ),                                     \
+        .e.encoding = XEN_HYPFS_ENC_PLAIN,                   \
+        .e.name = (nam),                                     \
+        .e.size = sizeof(contvar),                           \
+        .e.max_size = (wr) ? sizeof(contvar) : 0,            \
+        .e.funcs = (fn),                                     \
+        .u.content = &(contvar),                             \
     }
 
 #define HYPFS_UINT_INIT(var, nam, contvar)                       \
-    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_UINT, nam, contvar, NULL)
+    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_UINT, nam, contvar, \
+                         &hypfs_leaf_ro_funcs, 0)
 #define HYPFS_UINT_INIT_WRITABLE(var, nam, contvar)              \
     HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_UINT, nam, contvar, \
-                         hypfs_write_leaf)
+                         &hypfs_leaf_wr_funcs, 1)
 
 #define HYPFS_INT_INIT(var, nam, contvar)                        \
-    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_INT, nam, contvar, NULL)
+    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_INT, nam, contvar,  \
+                         &hypfs_leaf_ro_funcs, 0)
 #define HYPFS_INT_INIT_WRITABLE(var, nam, contvar)               \
     HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_INT, nam, contvar, \
-                         hypfs_write_leaf)
+                         &hypfs_leaf_wr_funcs, 1)
 
 #define HYPFS_BOOL_INIT(var, nam, contvar)                       \
-    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_BOOL, nam, contvar, NULL)
+    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_BOOL, nam, contvar, \
+                         &hypfs_leaf_ro_funcs, 0)
 #define HYPFS_BOOL_INIT_WRITABLE(var, nam, contvar)              \
     HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_BOOL, nam, contvar, \
-                         hypfs_write_bool)
+                         &hypfs_bool_wr_funcs, 1)
 
 extern struct hypfs_entry_dir hypfs_root;
 
@@ -112,6 +137,8 @@ int hypfs_read_dir(const struct hypfs_entry *entry,
                    XEN_GUEST_HANDLE_PARAM(void) uaddr);
 int hypfs_read_leaf(const struct hypfs_entry *entry,
                     XEN_GUEST_HANDLE_PARAM(void) uaddr);
+int hypfs_write_deny(struct hypfs_entry_leaf *leaf,
+                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
 int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
                      XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
 int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
diff --git a/xen/include/xen/param.h b/xen/include/xen/param.h
index d0409d3a0e..1b2c7db954 100644
--- a/xen/include/xen/param.h
+++ b/xen/include/xen/param.h
@@ -116,8 +116,7 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
         { .hypfs.e.type = XEN_HYPFS_TYPE_STRING, \
           .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
           .hypfs.e.name = (nam), \
-          .hypfs.e.read = hypfs_read_leaf, \
-          .hypfs.e.write = hypfs_write_custom, \
+          .hypfs.e.funcs = &hypfs_custom_wr_funcs, \
           .init_leaf = (initfunc), \
           .func = (variable) }
 #define boolean_runtime_only_param(nam, variable) \
@@ -127,8 +126,7 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
           .hypfs.e.name = (nam), \
           .hypfs.e.size = sizeof(variable), \
           .hypfs.e.max_size = sizeof(variable), \
-          .hypfs.e.read = hypfs_read_leaf, \
-          .hypfs.e.write = hypfs_write_bool, \
+          .hypfs.e.funcs = &hypfs_bool_wr_funcs, \
           .hypfs.u.content = &(variable) }
 #define integer_runtime_only_param(nam, variable) \
     __paramfs __parfs_##variable = \
@@ -137,8 +135,7 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
           .hypfs.e.name = (nam), \
           .hypfs.e.size = sizeof(variable), \
           .hypfs.e.max_size = sizeof(variable), \
-          .hypfs.e.read = hypfs_read_leaf, \
-          .hypfs.e.write = hypfs_write_leaf, \
+          .hypfs.e.funcs = &hypfs_leaf_wr_funcs, \
           .hypfs.u.content = &(variable) }
 #define size_runtime_only_param(nam, variable) \
     __paramfs __parfs_##variable = \
@@ -147,8 +144,7 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
           .hypfs.e.name = (nam), \
           .hypfs.e.size = sizeof(variable), \
           .hypfs.e.max_size = sizeof(variable), \
-          .hypfs.e.read = hypfs_read_leaf, \
-          .hypfs.e.write = hypfs_write_leaf, \
+          .hypfs.e.funcs = &hypfs_leaf_wr_funcs, \
           .hypfs.u.content = &(variable) }
 #define string_runtime_only_param(nam, variable) \
     __paramfs __parfs_##variable = \
@@ -157,8 +153,7 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
           .hypfs.e.name = (nam), \
           .hypfs.e.size = 0, \
           .hypfs.e.max_size = sizeof(variable), \
-          .hypfs.e.read = hypfs_read_leaf, \
-          .hypfs.e.write = hypfs_write_leaf, \
+          .hypfs.e.funcs = &hypfs_leaf_wr_funcs, \
           .hypfs.u.content = &(variable) }
 
 #else
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 08:21:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 08:21:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41640.75012 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0v4-00055m-Vp; Tue, 01 Dec 2020 08:21:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41640.75012; Tue, 01 Dec 2020 08:21:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0v4-00055R-Ka; Tue, 01 Dec 2020 08:21:54 +0000
Received: by outflank-mailman (input) for mailman id 41640;
 Tue, 01 Dec 2020 08:21:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UECe=FF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kk0v2-0004Uj-JJ
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 08:21:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d5433781-6557-465d-9090-c775c13054ae;
 Tue, 01 Dec 2020 08:21:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2DC4CAEFF;
 Tue,  1 Dec 2020 08:21:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d5433781-6557-465d-9090-c775c13054ae
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606810892; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=708vSf4hdDLmhqVFIbpBu/E6M/WDwL53lyQ6wvfCs8c=;
	b=EEKGCdbaUJZM5ne+c6o2Igd6i7OnsH1A4VTVrMWIIJ8haSEigJ728u0IGOeBVBvrGYgD0G
	sZf9D/plSzZkk4iAqQKZVxKeKecfF4J/jpq2Tz6BeS2ijXbTAaGnfnE2i4avbbshJxq0pS
	iekW+Jec6R8Lugat7JGWYuvemBryrL8=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>
Subject: [PATCH v2 07/17] xen/cpupool: support moving domain between cpupools with different granularity
Date: Tue,  1 Dec 2020 09:21:18 +0100
Message-Id: <20201201082128.15239-8-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201201082128.15239-1-jgross@suse.com>
References: <20201201082128.15239-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When moving a domain between cpupools with different scheduling
granularity, the sched_units of the domain need to be adjusted.

Do that by allocating new sched_units and discarding the old ones in
sched_move_domain().

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched/core.c | 121 ++++++++++++++++++++++++++++++----------
 1 file changed, 90 insertions(+), 31 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index a429fc7640..2a61c879b3 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -613,17 +613,45 @@ static void sched_move_irqs(const struct sched_unit *unit)
         vcpu_move_irqs(v);
 }
 
+/*
+ * Move a domain from one cpupool to another.
+ *
+ * A domain with any vcpu having temporary affinity settings will be denied
+ * to move. Hard and soft affinities will be reset.
+ *
+ * In order to support cpupools with different scheduling granularities all
+ * scheduling units are replaced by new ones.
+ *
+ * The complete move is done in the following steps:
+ * - check prerequisites (no vcpu with temporary affinities)
+ * - allocate all new data structures (scheduler specific domain data, unit
+ *   memory, scheduler specific unit data)
+ * - pause domain
+ * - temporarily move all (old) units to the same scheduling resource (this
+ *   makes the final resource assignment easier in case the new cpupool has
+ *   a larger granularity than the old one, as the scheduling locks for all
+ *   vcpus must be held for that operation)
+ * - remove old units from scheduling
+ * - set new cpupool and scheduler domain data pointers in struct domain
+ * - switch all vcpus to new units, still assigned to the old scheduling
+ *   resource
+ * - migrate all new units to scheduling resources of the new cpupool
+ * - unpause the domain
+ * - free the old memory (scheduler specific domain data, unit memory,
+ *   scheduler specific unit data)
+ */
 int sched_move_domain(struct domain *d, struct cpupool *c)
 {
     struct vcpu *v;
-    struct sched_unit *unit;
+    struct sched_unit *unit, *old_unit;
+    struct sched_unit *new_units = NULL, *old_units;
+    struct sched_unit **unit_ptr = &new_units;
     unsigned int new_p, unit_idx;
-    void **unit_priv;
     void *domdata;
-    void *unitdata;
-    struct scheduler *old_ops;
+    struct scheduler *old_ops = dom_scheduler(d);
     void *old_domdata;
     unsigned int gran = cpupool_get_granularity(c);
+    unsigned int n_units = DIV_ROUND_UP(d->max_vcpus, gran);
     int ret = 0;
 
     for_each_vcpu ( d, v )
@@ -641,53 +669,78 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
         goto out;
     }
 
-    unit_priv = xzalloc_array(void *, DIV_ROUND_UP(d->max_vcpus, gran));
-    if ( unit_priv == NULL )
+    for ( unit_idx = 0; unit_idx < n_units; unit_idx++ )
     {
-        sched_free_domdata(c->sched, domdata);
-        ret = -ENOMEM;
-        goto out;
-    }
+        unit = sched_alloc_unit_mem();
+        if ( unit )
+        {
+            /* Initialize unit for sched_alloc_udata() to work. */
+            unit->domain = d;
+            unit->unit_id = unit_idx * gran;
+            unit->vcpu_list = d->vcpu[unit->unit_id];
+            unit->priv = sched_alloc_udata(c->sched, unit, domdata);
+            *unit_ptr = unit;
+        }
 
-    unit_idx = 0;
-    for_each_sched_unit ( d, unit )
-    {
-        unit_priv[unit_idx] = sched_alloc_udata(c->sched, unit, domdata);
-        if ( unit_priv[unit_idx] == NULL )
+        if ( !unit || !unit->priv )
         {
-            for ( unit_idx = 0; unit_priv[unit_idx]; unit_idx++ )
-                sched_free_udata(c->sched, unit_priv[unit_idx]);
-            xfree(unit_priv);
-            sched_free_domdata(c->sched, domdata);
+            old_units = new_units;
+            old_domdata = domdata;
             ret = -ENOMEM;
-            goto out;
+            goto out_free;
         }
-        unit_idx++;
+
+        unit_ptr = &unit->next_in_list;
     }
 
     domain_pause(d);
 
-    old_ops = dom_scheduler(d);
     old_domdata = d->sched_priv;
 
+    new_p = cpumask_first(d->cpupool->cpu_valid);
     for_each_sched_unit ( d, unit )
     {
+        spinlock_t *lock;
+
+        /*
+         * Temporarily move all units to same processor to make locking
+         * easier when moving the new units to the new processors.
+         */
+        lock = unit_schedule_lock_irq(unit);
+        sched_set_res(unit, get_sched_res(new_p));
+        spin_unlock_irq(lock);
+
         sched_remove_unit(old_ops, unit);
     }
 
+    old_units = d->sched_unit_list;
+
     d->cpupool = c;
     d->sched_priv = domdata;
 
+    unit = new_units;
+    for_each_vcpu ( d, v )
+    {
+        old_unit = v->sched_unit;
+        if ( unit->unit_id + gran == v->vcpu_id )
+            unit = unit->next_in_list;
+
+        unit->state_entry_time = old_unit->state_entry_time;
+        unit->runstate_cnt[v->runstate.state]++;
+        /* Temporarily use old resource assignment */
+        unit->res = get_sched_res(new_p);
+
+        v->sched_unit = unit;
+    }
+
+    d->sched_unit_list = new_units;
+
     new_p = cpumask_first(c->cpu_valid);
-    unit_idx = 0;
     for_each_sched_unit ( d, unit )
     {
         spinlock_t *lock;
         unsigned int unit_p = new_p;
 
-        unitdata = unit->priv;
-        unit->priv = unit_priv[unit_idx];
-
         for_each_sched_unit_vcpu ( unit, v )
         {
             migrate_timer(&v->periodic_timer, new_p);
@@ -713,8 +766,6 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
 
         sched_insert_unit(c->sched, unit);
 
-        sched_free_udata(old_ops, unitdata);
-
         unit_idx++;
     }
 
@@ -722,11 +773,19 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
 
     domain_unpause(d);
 
-    sched_free_domdata(old_ops, old_domdata);
+ out_free:
+    for ( unit = old_units; unit; )
+    {
+        if ( unit->priv )
+            sched_free_udata(c->sched, unit->priv);
+        old_unit = unit;
+        unit = unit->next_in_list;
+        xfree(old_unit);
+    }
 
-    xfree(unit_priv);
+    sched_free_domdata(old_ops, old_domdata);
 
-out:
+ out:
     rcu_read_unlock(&sched_res_rculock);
 
     return ret;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 08:21:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 08:21:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41643.75030 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0v8-0005Em-V8; Tue, 01 Dec 2020 08:21:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41643.75030; Tue, 01 Dec 2020 08:21:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0v8-0005EW-PL; Tue, 01 Dec 2020 08:21:58 +0000
Received: by outflank-mailman (input) for mailman id 41643;
 Tue, 01 Dec 2020 08:21:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UECe=FF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kk0v7-0004VK-5p
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 08:21:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dc310c53-1e04-4611-b723-ff45b9e4ba3b;
 Tue, 01 Dec 2020 08:21:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 76E9EAD75;
 Tue,  1 Dec 2020 08:21:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc310c53-1e04-4611-b723-ff45b9e4ba3b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606810894; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=se4zYxbxCXhFPxdQtqG6gW//2iyLAiNTMs1y0ZSZMwA=;
	b=VpXkgRaiPqc/ptp+PDZ3uGEMAf5rWeWN3Kp2eX7Wj8zOmp8Tb9AC9suqBG46hI5y3pKFTS
	8BPfOUFHdpKdUFulJm1s1QTnAGPzECVzPiOLnB2PaBz9mrJf2Y8p8fUyfTtbCUA6AzXt3+
	2N0W0pdw4AVqEidIu+euFXhq40JrPzg=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Dario Faggioli <dfaggioli@suse.com>
Subject: [PATCH v2 16/17] xen/cpupool: add scheduling granularity entry to cpupool entries
Date: Tue,  1 Dec 2020 09:21:27 +0100
Message-Id: <20201201082128.15239-17-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201201082128.15239-1-jgross@suse.com>
References: <20201201082128.15239-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a "sched-gran" entry to the per-cpupool hypfs directories.

For now, make this entry read-only and let it contain one of the
strings "cpu", "core" or "socket".

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- added const (Jan Beulich)
- modify test in cpupool_gran_read() (Jan Beulich)
---
 docs/misc/hypfs-paths.pandoc |  4 +++
 xen/common/sched/cpupool.c   | 69 ++++++++++++++++++++++++++++++++++--
 2 files changed, 70 insertions(+), 3 deletions(-)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index aaca1cdf92..f1ce24d7fe 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -184,6 +184,10 @@ A directory of all current cpupools.
 The individual cpupools. Each entry is a directory with the name being the
 cpupool-id (e.g. /cpupool/0/).
 
+#### /cpupool/*/sched-gran = ("cpu" | "core" | "socket")
+
+The scheduling granularity of a cpupool.
+
 #### /params/
 
 A directory of runtime parameters.
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 3e17fdf95b..cfc75ccbe4 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -41,9 +41,10 @@ static DEFINE_SPINLOCK(cpupool_lock);
 static enum sched_gran __read_mostly opt_sched_granularity = SCHED_GRAN_cpu;
 static unsigned int __read_mostly sched_granularity = 1;
 
+#define SCHED_GRAN_NAME_LEN  8
 struct sched_gran_name {
     enum sched_gran mode;
-    char name[8];
+    char name[SCHED_GRAN_NAME_LEN];
 };
 
 static const struct sched_gran_name sg_name[] = {
@@ -52,7 +53,7 @@ static const struct sched_gran_name sg_name[] = {
     {SCHED_GRAN_socket, "socket"},
 };
 
-static void sched_gran_print(enum sched_gran mode, unsigned int gran)
+static const char *sched_gran_get_name(enum sched_gran mode)
 {
     const char *name = "";
     unsigned int i;
@@ -66,8 +67,13 @@ static void sched_gran_print(enum sched_gran mode, unsigned int gran)
         }
     }
 
+    return name;
+}
+
+static void sched_gran_print(enum sched_gran mode, unsigned int gran)
+{
     printk("Scheduling granularity: %s, %u CPU%s per sched-resource\n",
-           name, gran, gran == 1 ? "" : "s");
+           sched_gran_get_name(mode), gran, gran == 1 ? "" : "s");
 }
 
 #ifdef CONFIG_HAS_SCHED_GRANULARITY
@@ -1033,9 +1039,14 @@ static int cpupool_dir_read(const struct hypfs_entry *entry,
     int ret = 0;
     const struct cpupool *c;
     unsigned int size = 0;
+    struct hypfs_dyndir_id *data;
+
+    data = hypfs_get_dyndata();
 
     list_for_each_entry(c, &cpupool_list, list)
     {
+        data->id = c->cpupool_id;
+
         size += hypfs_dynid_entry_size(entry, c->cpupool_id);
 
         ret = hypfs_read_dyndir_id_entry(&cpupool_pooldir, c->cpupool_id,
@@ -1100,6 +1111,56 @@ static struct hypfs_entry *cpupool_dir_findentry(
     return hypfs_gen_dyndir_entry_id(&cpupool_pooldir, id);
 }
 
+static int cpupool_gran_read(const struct hypfs_entry *entry,
+                             XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    const struct hypfs_dyndir_id *data;
+    const struct cpupool *cpupool;
+    const char *gran;
+
+    data = hypfs_get_dyndata();
+    cpupool = __cpupool_find_by_id(data->id, true);
+    ASSERT(cpupool);
+
+    gran = sched_gran_get_name(cpupool->gran);
+
+    if ( !*gran )
+        return -ENOENT;
+
+    return copy_to_guest(uaddr, gran, strlen(gran) + 1) ? -EFAULT : 0;
+}
+
+static unsigned int hypfs_gran_getsize(const struct hypfs_entry *entry)
+{
+    const struct hypfs_dyndir_id *data;
+    const struct cpupool *cpupool;
+    const char *gran;
+
+    data = hypfs_get_dyndata();
+    cpupool = __cpupool_find_by_id(data->id, true);
+    ASSERT(cpupool);
+
+    gran = sched_gran_get_name(cpupool->gran);
+
+    return strlen(gran) + 1;
+}
+
+static struct hypfs_funcs cpupool_gran_funcs = {
+    .enter = hypfs_node_enter,
+    .exit = hypfs_node_exit,
+    .read = cpupool_gran_read,
+    .write = hypfs_write_deny,
+    .getsize = hypfs_gran_getsize,
+    .findentry = hypfs_leaf_findentry,
+};
+
+static HYPFS_VARSIZE_INIT(cpupool_gran, XEN_HYPFS_TYPE_STRING, "sched-gran",
+                          0, &cpupool_gran_funcs);
+static char granstr[SCHED_GRAN_NAME_LEN] = {
+    [0 ... SCHED_GRAN_NAME_LEN - 2] = '?',
+    [SCHED_GRAN_NAME_LEN - 1] = 0
+};
+
 static struct hypfs_funcs cpupool_dir_funcs = {
     .enter = cpupool_dir_enter,
     .exit = cpupool_dir_exit,
@@ -1115,6 +1176,8 @@ static void cpupool_hypfs_init(void)
 {
     hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
     hypfs_add_dyndir(&cpupool_dir, &cpupool_pooldir);
+    hypfs_string_set_reference(&cpupool_gran, granstr);
+    hypfs_add_leaf(&cpupool_pooldir, &cpupool_gran, true);
 }
 #else
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 08:21:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 08:21:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41645.75035 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0v9-0005GU-KK; Tue, 01 Dec 2020 08:21:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41645.75035; Tue, 01 Dec 2020 08:21:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0v9-0005Fn-8i; Tue, 01 Dec 2020 08:21:59 +0000
Received: by outflank-mailman (input) for mailman id 41645;
 Tue, 01 Dec 2020 08:21:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UECe=FF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kk0v7-0004Uj-JN
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 08:21:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f2ebf806-5cc1-4f39-8b25-a2bca9abd9d3;
 Tue, 01 Dec 2020 08:21:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E5E62AF10;
 Tue,  1 Dec 2020 08:21:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f2ebf806-5cc1-4f39-8b25-a2bca9abd9d3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606810893; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=P8wgYZkUcmfVZFhet14HurBZ05V1wN1xeSm132mMJYo=;
	b=CP9Vx9M9inEdF2kGeks3UZVfFpSP8Y7VsG8c1gvih5SH9oigsXhd+CRbtMtozh+4eTh3fA
	C+y1UgnGOuIYp0DyctDWe4YfIFfiR1lJr4zA6q81m4iNO47eQTKvDUjVpAz/M1e40+EIBy
	XOukMaKP1VcaHdDN0TXpkMbnEEYVnM4=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 10/17] xen/hypfs: pass real failure reason up from hypfs_get_entry()
Date: Tue,  1 Dec 2020 09:21:21 +0100
Message-Id: <20201201082128.15239-11-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201201082128.15239-1-jgross@suse.com>
References: <20201201082128.15239-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of handling all errors from hypfs_get_entry() as ENOENT, pass the
real error value up via ERR_PTR().

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 xen/common/hypfs.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
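
The patch relies on the ERR_PTR()/IS_ERR()/PTR_ERR() idiom, which encodes a
small negative errno value in the top of the pointer address space so a single
pointer return can carry either a valid entry or a precise error code. A
minimal stand-alone sketch of that idiom (hypothetical re-implementations, not
Xen's actual macros):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Errno values up to 4095 fit into the last page of the address space,
 * which no valid pointer can occupy. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(intptr_t error)
{
    return (void *)error;
}

static inline intptr_t PTR_ERR(const void *ptr)
{
    return (intptr_t)ptr;
}

static inline int IS_ERR(const void *ptr)
{
    return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

/* A lookup that can now distinguish "bad argument" from "not found",
 * mirroring what hypfs_get_entry() gains in this patch. */
static void *lookup(const char *path)
{
    if ( path[0] != '/' )
        return ERR_PTR(-EINVAL);   /* malformed path */
    return ERR_PTR(-ENOENT);       /* nothing registered in this sketch */
}
```

The caller checks IS_ERR() once and forwards PTR_ERR() unchanged, instead of
collapsing every failure to -ENOENT.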

diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index 7befd144ba..fdfd0f764a 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -187,7 +187,7 @@ static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
     while ( again )
     {
         if ( dir->e.type != XEN_HYPFS_TYPE_DIR )
-            return NULL;
+            return ERR_PTR(-ENOENT);
 
         if ( !*path )
             return &dir->e;
@@ -206,7 +206,7 @@ static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
                                                      struct hypfs_entry_dir, e);
 
             if ( cmp < 0 )
-                return NULL;
+                return ERR_PTR(-ENOENT);
             if ( !cmp && strlen(entry->name) == name_len )
             {
                 if ( !*end )
@@ -221,13 +221,13 @@ static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
         }
     }
 
-    return NULL;
+    return ERR_PTR(-ENOENT);
 }
 
 static struct hypfs_entry *hypfs_get_entry(const char *path)
 {
     if ( path[0] != '/' )
-        return NULL;
+        return ERR_PTR(-EINVAL);
 
     return hypfs_get_entry_rel(&hypfs_root, path + 1);
 }
@@ -454,9 +454,9 @@ long do_hypfs_op(unsigned int cmd,
         goto out;
 
     entry = hypfs_get_entry(path);
-    if ( !entry )
+    if ( IS_ERR(entry) )
     {
-        ret = -ENOENT;
+        ret = PTR_ERR(entry);
         goto out;
     }
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 08:22:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 08:22:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41650.75053 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0vE-0005QC-1y; Tue, 01 Dec 2020 08:22:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41650.75053; Tue, 01 Dec 2020 08:22:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0vD-0005Q1-TB; Tue, 01 Dec 2020 08:22:03 +0000
Received: by outflank-mailman (input) for mailman id 41650;
 Tue, 01 Dec 2020 08:22:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UECe=FF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kk0vC-0004VK-64
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 08:22:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8fe17956-cceb-44ac-9b50-da0c9bf4e05d;
 Tue, 01 Dec 2020 08:21:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3CA09AF19;
 Tue,  1 Dec 2020 08:21:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8fe17956-cceb-44ac-9b50-da0c9bf4e05d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606810893; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=8P3UHikA7pHBmIcZYgCpCp4RulCN+1OvDj1DbtBRNCk=;
	b=GNqocWCji7UO4coxDFPLzqvgECSysm/zPrIqYeFIp17FIBp0Lmx77eaUclfLp9dIhxOLDo
	K/uRNZsOtQ38yUwi4g8WJfTzYWq7x95fDw2X4gmoK/hD68vYUsXFvKolmxWGdg+JJQ8hC+
	vHR6gfebNf+TnpCr1qn4NJbUqjeTniE=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 11/17] xen/hypfs: add getsize() and findentry() callbacks to hypfs_funcs
Date: Tue,  1 Dec 2020 09:21:22 +0100
Message-Id: <20201201082128.15239-12-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201201082128.15239-1-jgross@suse.com>
References: <20201201082128.15239-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a getsize() function pointer to struct hypfs_funcs in order to allow
dynamically filled entries without the need to take the hypfs lock each
time the contents are generated.

For directories, add a findentry callback to the vector and modify
hypfs_get_entry_rel() to use it.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- add a dummy findentry function (Jan Beulich)
- add const to findentry dir parameter
- don't move DIRENTRY_SIZE() to hypfs.h (Jan Beulich)
- split dyndir support off into new patch

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/hypfs.c      | 100 ++++++++++++++++++++++++++--------------
 xen/include/xen/hypfs.h |  25 +++++++++-
 2 files changed, 89 insertions(+), 36 deletions(-)
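
The core of the directory callback is a lookup over a name-sorted entry list
with an early exit as soon as the comparison goes negative, plus a full-length
check to reject prefix matches. A stand-alone sketch of that logic (the
struct and function names below are illustrative, not Xen's):

```c
#include <assert.h>
#include <errno.h>
#include <string.h>
#include <stddef.h>

struct my_entry {
    const char *name;
};

/* Entries must be sorted by name; that ordering is what justifies the
 * cmp < 0 early exit mirrored from hypfs_dir_findentry(). */
static const struct my_entry dir[] = {
    { "core" }, { "cpu" }, { "socket" },
};

/* Return the index of the matching entry, or -ENOENT. */
static int my_findentry(const struct my_entry *entries, size_t n,
                        const char *name, size_t name_len)
{
    for ( size_t i = 0; i < n; i++ )
    {
        int cmp = strncmp(name, entries[i].name, name_len);

        if ( cmp < 0 )          /* sorted list: no later match possible */
            return -ENOENT;
        /* strncmp() alone would accept a prefix ("cpu" vs "cpupool"),
         * so also require the stored name to have exactly name_len. */
        if ( !cmp && strlen(entries[i].name) == name_len )
            return (int)i;
    }

    return -ENOENT;
}
```

The leaf variant of the callback simply returns -ENOENT unconditionally, since
a leaf has no children to look up.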

diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index fdfd0f764a..83c5cacdca 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -27,22 +27,32 @@ CHECK_hypfs_dirlistentry;
 const struct hypfs_funcs hypfs_dir_funcs = {
     .read = hypfs_read_dir,
     .write = hypfs_write_deny,
+    .getsize = hypfs_getsize,
+    .findentry = hypfs_dir_findentry,
 };
 const struct hypfs_funcs hypfs_leaf_ro_funcs = {
     .read = hypfs_read_leaf,
     .write = hypfs_write_deny,
+    .getsize = hypfs_getsize,
+    .findentry = hypfs_leaf_findentry,
 };
 const struct hypfs_funcs hypfs_leaf_wr_funcs = {
     .read = hypfs_read_leaf,
     .write = hypfs_write_leaf,
+    .getsize = hypfs_getsize,
+    .findentry = hypfs_leaf_findentry,
 };
 const struct hypfs_funcs hypfs_bool_wr_funcs = {
     .read = hypfs_read_leaf,
     .write = hypfs_write_bool,
+    .getsize = hypfs_getsize,
+    .findentry = hypfs_leaf_findentry,
 };
 const struct hypfs_funcs hypfs_custom_wr_funcs = {
     .read = hypfs_read_leaf,
     .write = hypfs_write_custom,
+    .getsize = hypfs_getsize,
+    .findentry = hypfs_leaf_findentry,
 };
 
 static DEFINE_RWLOCK(hypfs_lock);
@@ -97,6 +107,8 @@ static int add_entry(struct hypfs_entry_dir *parent, struct hypfs_entry *new)
 
     ASSERT(new->funcs->read);
     ASSERT(new->funcs->write);
+    ASSERT(new->funcs->getsize);
+    ASSERT(new->funcs->findentry);
 
     hypfs_write_lock();
 
@@ -176,15 +188,41 @@ static int hypfs_get_path_user(char *buf,
     return 0;
 }
 
+struct hypfs_entry *hypfs_leaf_findentry(const struct hypfs_entry_dir *dir,
+                                         const char *name,
+                                         unsigned int name_len)
+{
+    return ERR_PTR(-ENOENT);
+}
+
+struct hypfs_entry *hypfs_dir_findentry(const struct hypfs_entry_dir *dir,
+                                        const char *name,
+                                        unsigned int name_len)
+{
+    struct hypfs_entry *entry;
+
+    list_for_each_entry ( entry, &dir->dirlist, list )
+    {
+        int cmp = strncmp(name, entry->name, name_len);
+
+        if ( cmp < 0 )
+            return ERR_PTR(-ENOENT);
+
+        if ( !cmp && strlen(entry->name) == name_len )
+            return entry;
+    }
+
+    return ERR_PTR(-ENOENT);
+}
+
 static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
                                                const char *path)
 {
     const char *end;
     struct hypfs_entry *entry;
     unsigned int name_len;
-    bool again = true;
 
-    while ( again )
+    for ( ; ; )
     {
         if ( dir->e.type != XEN_HYPFS_TYPE_DIR )
             return ERR_PTR(-ENOENT);
@@ -197,28 +235,12 @@ static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
             end = strchr(path, '\0');
         name_len = end - path;
 
-        again = false;
+        entry = dir->e.funcs->findentry(dir, path, name_len);
+        if ( IS_ERR(entry) || !*end )
+            return entry;
 
-        list_for_each_entry ( entry, &dir->dirlist, list )
-        {
-            int cmp = strncmp(path, entry->name, name_len);
-            struct hypfs_entry_dir *d = container_of(entry,
-                                                     struct hypfs_entry_dir, e);
-
-            if ( cmp < 0 )
-                return ERR_PTR(-ENOENT);
-            if ( !cmp && strlen(entry->name) == name_len )
-            {
-                if ( !*end )
-                    return entry;
-
-                again = true;
-                dir = d;
-                path = end + 1;
-
-                break;
-            }
-        }
+        path = end + 1;
+        dir = container_of(entry, struct hypfs_entry_dir, e);
     }
 
     return ERR_PTR(-ENOENT);
@@ -232,12 +254,17 @@ static struct hypfs_entry *hypfs_get_entry(const char *path)
     return hypfs_get_entry_rel(&hypfs_root, path + 1);
 }
 
+unsigned int hypfs_getsize(const struct hypfs_entry *entry)
+{
+    return entry->size;
+}
+
 int hypfs_read_dir(const struct hypfs_entry *entry,
                    XEN_GUEST_HANDLE_PARAM(void) uaddr)
 {
     const struct hypfs_entry_dir *d;
     const struct hypfs_entry *e;
-    unsigned int size = entry->size;
+    unsigned int size = entry->funcs->getsize(entry);
 
     ASSERT(this_cpu(hypfs_locked) != hypfs_unlocked);
 
@@ -252,7 +279,7 @@ int hypfs_read_dir(const struct hypfs_entry *entry,
         direntry.e.pad = 0;
         direntry.e.type = e->type;
         direntry.e.encoding = e->encoding;
-        direntry.e.content_len = e->size;
+        direntry.e.content_len = e->funcs->getsize(e);
         direntry.e.max_write_len = e->max_size;
         direntry.off_next = list_is_last(&e->list, &d->dirlist) ? 0 : e_len;
         if ( copy_to_guest(uaddr, &direntry, 1) )
@@ -275,18 +302,20 @@ int hypfs_read_leaf(const struct hypfs_entry *entry,
                     XEN_GUEST_HANDLE_PARAM(void) uaddr)
 {
     const struct hypfs_entry_leaf *l;
+    unsigned int size = entry->funcs->getsize(entry);
 
     ASSERT(this_cpu(hypfs_locked) != hypfs_unlocked);
 
     l = container_of(entry, const struct hypfs_entry_leaf, e);
 
-    return copy_to_guest(uaddr, l->u.content, entry->size) ? -EFAULT: 0;
+    return copy_to_guest(uaddr, l->u.content, size) ? -EFAULT : 0;
 }
 
 static int hypfs_read(const struct hypfs_entry *entry,
                       XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)
 {
     struct xen_hypfs_direntry e;
+    unsigned int size = entry->funcs->getsize(entry);
     long ret = -EINVAL;
 
     if ( ulen < sizeof(e) )
@@ -295,7 +324,7 @@ static int hypfs_read(const struct hypfs_entry *entry,
     e.pad = 0;
     e.type = entry->type;
     e.encoding = entry->encoding;
-    e.content_len = entry->size;
+    e.content_len = size;
     e.max_write_len = entry->max_size;
 
     ret = -EFAULT;
@@ -303,7 +332,7 @@ static int hypfs_read(const struct hypfs_entry *entry,
         goto out;
 
     ret = -ENOBUFS;
-    if ( ulen < entry->size + sizeof(e) )
+    if ( ulen < size + sizeof(e) )
         goto out;
 
     guest_handle_add_offset(uaddr, sizeof(e));
@@ -319,15 +348,16 @@ int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
 {
     char *buf;
     int ret;
+    struct hypfs_entry *e = &leaf->e;
 
     ASSERT(this_cpu(hypfs_locked) == hypfs_write_locked);
     ASSERT(leaf->e.max_size);
 
-    if ( ulen > leaf->e.max_size )
+    if ( ulen > e->max_size )
         return -ENOSPC;
 
-    if ( leaf->e.type != XEN_HYPFS_TYPE_STRING &&
-         leaf->e.type != XEN_HYPFS_TYPE_BLOB && ulen != leaf->e.size )
+    if ( e->type != XEN_HYPFS_TYPE_STRING &&
+         e->type != XEN_HYPFS_TYPE_BLOB && ulen != e->funcs->getsize(e) )
         return -EDOM;
 
     buf = xmalloc_array(char, ulen);
@@ -339,14 +369,14 @@ int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
         goto out;
 
     ret = -EINVAL;
-    if ( leaf->e.type == XEN_HYPFS_TYPE_STRING &&
-         leaf->e.encoding == XEN_HYPFS_ENC_PLAIN &&
+    if ( e->type == XEN_HYPFS_TYPE_STRING &&
+         e->encoding == XEN_HYPFS_ENC_PLAIN &&
          memchr(buf, 0, ulen) != (buf + ulen - 1) )
         goto out;
 
     ret = 0;
     memcpy(leaf->u.write_ptr, buf, ulen);
-    leaf->e.size = ulen;
+    e->size = ulen;
 
  out:
     xfree(buf);
@@ -360,7 +390,7 @@ int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
 
     ASSERT(this_cpu(hypfs_locked) == hypfs_write_locked);
     ASSERT(leaf->e.type == XEN_HYPFS_TYPE_BOOL &&
-           leaf->e.size == sizeof(bool) &&
+           leaf->e.funcs->getsize(&leaf->e) == sizeof(bool) &&
            leaf->e.max_size == sizeof(bool) );
 
     if ( ulen != leaf->e.max_size )
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index 25fdf3ead7..53f50772b4 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -2,17 +2,21 @@
 #define __XEN_HYPFS_H__
 
 #ifdef CONFIG_HYPFS
+#include <xen/lib.h>
 #include <xen/list.h>
 #include <xen/string.h>
 #include <public/hypfs.h>
 
 struct hypfs_entry_leaf;
+struct hypfs_entry_dir;
 struct hypfs_entry;
 
 /*
  * Per-node callbacks:
  *
- * The callbacks are always called with the hypfs lock held.
+ * The callbacks are always called with the hypfs lock held. In case multiple
+ * callbacks are called for a single operation the lock is held across all
+ * those callbacks.
  *
  * The read() callback is used to return the contents of a node (either
  * directory or leaf). It is NOT used to get directory entries during traversal
@@ -20,12 +24,24 @@ struct hypfs_entry;
  *
  * The write() callback is used to modify the contents of a node. Writing
  * directories is not supported (this means all nodes are added at boot time).
+ *
+ * getsize() is called in two cases:
+ * - when reading a node (directory or leaf) for filling in the size of the
+ *   node into the returned direntry
+ * - when reading a directory for each node in this directory
+ *
+ * findentry() is called for traversing a path from the root node to a node
+ * for all nodes on that path excluding the final node (so for looking up
+ * "/a/b/c" findentry() will be called for "/", "/a", and "/a/b").
  */
 struct hypfs_funcs {
     int (*read)(const struct hypfs_entry *entry,
                 XEN_GUEST_HANDLE_PARAM(void) uaddr);
     int (*write)(struct hypfs_entry_leaf *leaf,
                  XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+    unsigned int (*getsize)(const struct hypfs_entry *entry);
+    struct hypfs_entry *(*findentry)(const struct hypfs_entry_dir *dir,
+                                     const char *name, unsigned int name_len);
 };
 
 extern const struct hypfs_funcs hypfs_dir_funcs;
@@ -145,6 +161,13 @@ int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
                      XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
 int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
                        XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+unsigned int hypfs_getsize(const struct hypfs_entry *entry);
+struct hypfs_entry *hypfs_leaf_findentry(const struct hypfs_entry_dir *dir,
+                                         const char *name,
+                                         unsigned int name_len);
+struct hypfs_entry *hypfs_dir_findentry(const struct hypfs_entry_dir *dir,
+                                        const char *name,
+                                        unsigned int name_len);
 #endif
 
 #endif /* __XEN_HYPFS_H__ */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 08:22:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 08:22:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41652.75061 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0vE-0005Rt-RD; Tue, 01 Dec 2020 08:22:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41652.75061; Tue, 01 Dec 2020 08:22:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0vE-0005RM-Dq; Tue, 01 Dec 2020 08:22:04 +0000
Received: by outflank-mailman (input) for mailman id 41652;
 Tue, 01 Dec 2020 08:22:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UECe=FF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kk0vC-0004Uj-Je
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 08:22:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 140159a2-5e1f-4a70-a062-b217f06cd2c0;
 Tue, 01 Dec 2020 08:21:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AA3E8AF30;
 Tue,  1 Dec 2020 08:21:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 140159a2-5e1f-4a70-a062-b217f06cd2c0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606810893; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=FZxV6jsf+IHXWLLNyxfWYa8r8oAvJLZTojatFVH5etE=;
	b=jEbgpZ10S3Xr01s/3ASowbRpBT+X/cyV0c8W4GNBCHOn59lgdMPBAK10SLlkYdJMk5c4Pt
	MIsa02484ifRClmscqGm1wvmVdD6BM4+kRB8aZjoRbRbJIa5yZ+Oex3LFRIXMW/GtTjeeT
	+pSk1w+TBHplJ7DyxCvsLN/Y/vZS650=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 13/17] xen/hypfs: support dynamic hypfs nodes
Date: Tue,  1 Dec 2020 09:21:24 +0100
Message-Id: <20201201082128.15239-14-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201201082128.15239-1-jgross@suse.com>
References: <20201201082128.15239-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a HYPFS_VARDIR_INIT() macro for statically initializing a directory
with a custom function vector; it takes a struct hypfs_funcs pointer as a
parameter in addition to those of HYPFS_DIR_INIT().

Modify HYPFS_VARSIZE_INIT() to take the function vector pointer as an
additional parameter, as this will be needed for dynamic entries.

To let the generic hypfs code continue to work on normal struct hypfs_entry
entities even for dynamic nodes, add some infrastructure for allocating a
working area for the current hypfs request in order to store the information
needed for traversing the tree. This area is anchored in a per-cpu pointer
and can be retrieved at any level of the dynamic entries. The normal way to
handle allocation and freeing is to allocate the data in the enter()
callback of a node and to free it in the related exit() callback.

Add a hypfs_add_dyndir() function for adding a dynamic directory template
to the tree; this is needed in order to keep the correct reference to its
position in hypfs.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- switch to xzalloc_bytes() in hypfs_alloc_dyndata() (Jan Beulich)
- carved out from previous patch
- use enter() and exit() callbacks for allocating and freeing
  dyndata memory
- add hypfs_add_dyndir()
---
 xen/common/hypfs.c      | 31 +++++++++++++++++++++++++++++++
 xen/include/xen/hypfs.h | 28 ++++++++++++++++++----------
 2 files changed, 49 insertions(+), 10 deletions(-)
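
The dyndata scheme is one scratch allocation per in-flight request, anchored
in a pointer so any nesting level of the traversal can retrieve it without
threading it through every call: allocate in enter(), retrieve anywhere,
free in the matching exit(). A single-threaded stand-in for the per-CPU
pattern (names are illustrative, not Xen's):

```c
#include <assert.h>
#include <stdlib.h>

/* In Xen this pointer is per-CPU; a plain static suffices to show the
 * protocol in a single-threaded sketch. */
static void *dyndata;

static void *alloc_dyndata(size_t size)
{
    assert(dyndata == NULL);     /* at most one request in flight per CPU */
    dyndata = calloc(1, size);   /* zeroed, like xzalloc_bytes() */
    return dyndata;
}

static void *get_dyndata(void)
{
    assert(dyndata != NULL);     /* only valid between enter() and exit() */
    return dyndata;
}

static void free_dyndata(void)
{
    free(dyndata);
    dyndata = NULL;              /* clear the anchor, like XFREE() */
}
```

Because allocation happens in enter() and freeing in exit(), the lifetime of
the scratch area exactly brackets the node traversal that needs it.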

diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index e5adc9defe..39448c5409 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -72,6 +72,7 @@ enum hypfs_lock_state {
     hypfs_write_locked
 };
 static DEFINE_PER_CPU(enum hypfs_lock_state, hypfs_locked);
+static DEFINE_PER_CPU(struct hypfs_dyndata *, hypfs_dyndata);
 
 static DEFINE_PER_CPU(const struct hypfs_entry *, hypfs_last_node_entered);
 
@@ -157,6 +158,30 @@ static void node_exit_all(void)
         node_exit(*last);
 }
 
+void *hypfs_alloc_dyndata(unsigned long size)
+{
+    unsigned int cpu = smp_processor_id();
+
+    ASSERT(per_cpu(hypfs_locked, cpu) != hypfs_unlocked);
+    ASSERT(per_cpu(hypfs_dyndata, cpu) == NULL);
+
+    per_cpu(hypfs_dyndata, cpu) = xzalloc_bytes(size);
+
+    return per_cpu(hypfs_dyndata, cpu);
+}
+
+void *hypfs_get_dyndata(void)
+{
+    ASSERT(this_cpu(hypfs_dyndata));
+
+    return this_cpu(hypfs_dyndata);
+}
+
+void hypfs_free_dyndata(void)
+{
+    XFREE(this_cpu(hypfs_dyndata));
+}
+
 static int add_entry(struct hypfs_entry_dir *parent, struct hypfs_entry *new)
 {
     int ret = -ENOENT;
@@ -218,6 +243,12 @@ int hypfs_add_dir(struct hypfs_entry_dir *parent,
     return ret;
 }
 
+void hypfs_add_dyndir(struct hypfs_entry_dir *parent,
+                      struct hypfs_entry_dir *template)
+{
+    template->e.parent = &parent->e;
+}
+
 int hypfs_add_leaf(struct hypfs_entry_dir *parent,
                    struct hypfs_entry_leaf *leaf, bool nofault)
 {
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index 8d96abd805..be8d6c508a 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -76,7 +76,7 @@ struct hypfs_entry_dir {
     struct list_head dirlist;
 };
 
-#define HYPFS_DIR_INIT(var, nam)                  \
+#define HYPFS_VARDIR_INIT(var, nam, fn)           \
     struct hypfs_entry_dir __read_mostly var = {  \
         .e.type = XEN_HYPFS_TYPE_DIR,             \
         .e.encoding = XEN_HYPFS_ENC_PLAIN,        \
@@ -84,22 +84,25 @@ struct hypfs_entry_dir {
         .e.size = 0,                              \
         .e.max_size = 0,                          \
         .e.list = LIST_HEAD_INIT(var.e.list),     \
-        .e.funcs = &hypfs_dir_funcs,              \
+        .e.funcs = (fn),                          \
         .dirlist = LIST_HEAD_INIT(var.dirlist),   \
     }
 
-#define HYPFS_VARSIZE_INIT(var, typ, nam, msz)    \
-    struct hypfs_entry_leaf __read_mostly var = { \
-        .e.type = (typ),                          \
-        .e.encoding = XEN_HYPFS_ENC_PLAIN,        \
-        .e.name = (nam),                          \
-        .e.max_size = (msz),                      \
-        .e.funcs = &hypfs_leaf_ro_funcs,          \
+#define HYPFS_DIR_INIT(var, nam)                  \
+    HYPFS_VARDIR_INIT(var, nam, &hypfs_dir_funcs)
+
+#define HYPFS_VARSIZE_INIT(var, typ, nam, msz, fn) \
+    struct hypfs_entry_leaf __read_mostly var = {  \
+        .e.type = (typ),                           \
+        .e.encoding = XEN_HYPFS_ENC_PLAIN,         \
+        .e.name = (nam),                           \
+        .e.max_size = (msz),                       \
+        .e.funcs = (fn),                           \
     }
 
 /* Content and size need to be set via hypfs_string_set_reference(). */
 #define HYPFS_STRING_INIT(var, nam)               \
-    HYPFS_VARSIZE_INIT(var, XEN_HYPFS_TYPE_STRING, nam, 0)
+    HYPFS_VARSIZE_INIT(var, XEN_HYPFS_TYPE_STRING, nam, 0, &hypfs_leaf_ro_funcs)
 
 /*
  * Set content and size of a XEN_HYPFS_TYPE_STRING node. The node will point
@@ -150,6 +153,8 @@ extern struct hypfs_entry_dir hypfs_root;
 
 int hypfs_add_dir(struct hypfs_entry_dir *parent,
                   struct hypfs_entry_dir *dir, bool nofault);
+void hypfs_add_dyndir(struct hypfs_entry_dir *parent,
+                      struct hypfs_entry_dir *template);
 int hypfs_add_leaf(struct hypfs_entry_dir *parent,
                    struct hypfs_entry_leaf *leaf, bool nofault);
 const struct hypfs_entry *hypfs_node_enter(const struct hypfs_entry *entry);
@@ -173,6 +178,9 @@ struct hypfs_entry *hypfs_leaf_findentry(const struct hypfs_entry_dir *dir,
 struct hypfs_entry *hypfs_dir_findentry(const struct hypfs_entry_dir *dir,
                                         const char *name,
                                         unsigned int name_len);
+void *hypfs_alloc_dyndata(unsigned long size);
+void *hypfs_get_dyndata(void);
+void hypfs_free_dyndata(void);
 #endif
 
 #endif /* __XEN_HYPFS_H__ */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 08:22:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 08:22:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41658.75078 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0vJ-0005bl-2e; Tue, 01 Dec 2020 08:22:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41658.75078; Tue, 01 Dec 2020 08:22:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0vI-0005bY-Pd; Tue, 01 Dec 2020 08:22:08 +0000
Received: by outflank-mailman (input) for mailman id 41658;
 Tue, 01 Dec 2020 08:22:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UECe=FF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kk0vH-0004VK-6D
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 08:22:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 25f21fdf-10c8-425f-b015-8b72c954f7ae;
 Tue, 01 Dec 2020 08:21:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BA1F5AD8A;
 Tue,  1 Dec 2020 08:21:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 25f21fdf-10c8-425f-b015-8b72c954f7ae
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606810894; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=3G0xgIlL/9kX9DtKJGfu7x/TxV+adakU1t5bKB7Shos=;
	b=IkpN9WygXUSH6MIqzZPiyHDmb45LMWErKCm8iYpw9oM5Mo9lo6NFRIhzEfKRtrBCK30H+G
	I9itFnpDgiqpTj4f4xPcwoimg7/sfsc/K7+kG2Vzw2hBWABakSekL3R+30plQDJappTYyh
	pKHnK5heJzej4UKjt411+cYbvpyHkMs=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Dario Faggioli <dfaggioli@suse.com>
Subject: [PATCH v2 17/17] xen/cpupool: make per-cpupool sched-gran hypfs node writable
Date: Tue,  1 Dec 2020 09:21:28 +0100
Message-Id: <20201201082128.15239-18-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201201082128.15239-1-jgross@suse.com>
References: <20201201082128.15239-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Make /cpupool/<id>/sched-gran in hypfs writable. This enables a per-cpupool
selectable scheduling granularity.

Writing this node is allowed only when no cpu is assigned to the cpupool.
The allowed values are "cpu", "core" and "socket".

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- test user parameters earlier (Jan Beulich)
---
 docs/misc/hypfs-paths.pandoc |  5 ++-
 xen/common/sched/cpupool.c   | 70 ++++++++++++++++++++++++++++++------
 2 files changed, 63 insertions(+), 12 deletions(-)
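
The write path parses the user string through the same name table the boot
parameter uses: sched_gran_get() walks a small string-to-enum table and
returns -EINVAL for anything unrecognized, so the caller can reject a bad
write before touching any state. A stand-alone sketch of that pattern (the
enum and table mirror the patch, but this is an illustration, not Xen code):

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

enum sched_gran {
    SCHED_GRAN_cpu,
    SCHED_GRAN_core,
    SCHED_GRAN_socket,
};

/* Name table mapping the accepted strings onto granularity modes. */
static const struct {
    const char *name;
    enum sched_gran mode;
} sg_name[] = {
    { "cpu",    SCHED_GRAN_cpu    },
    { "core",   SCHED_GRAN_core   },
    { "socket", SCHED_GRAN_socket },
};

/* Fill *mode from str; leave *mode untouched and return -EINVAL on an
 * unknown string. */
static int sched_gran_get(const char *str, enum sched_gran *mode)
{
    for ( unsigned int i = 0; i < sizeof(sg_name) / sizeof(sg_name[0]); i++ )
    {
        if ( strcmp(sg_name[i].name, str) == 0 )
        {
            *mode = sg_name[i].mode;
            return 0;
        }
    }

    return -EINVAL;
}
```

Factoring the lookup out of the __init boot-parameter handler is what lets
the hypfs write path reuse it at runtime.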

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index f1ce24d7fe..e86f7d0dbe 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -184,10 +184,13 @@ A directory of all current cpupools.
 The individual cpupools. Each entry is a directory with the name being the
 cpupool-id (e.g. /cpupool/0/).
 
-#### /cpupool/*/sched-gran = ("cpu" | "core" | "socket")
+#### /cpupool/*/sched-gran = ("cpu" | "core" | "socket") [w]
 
 The scheduling granularity of a cpupool.
 
+Writing a value is allowed only for cpupools with no cpu assigned and only
+if the architecture supports different scheduling granularities.
+
 #### /params/
 
 A directory of runtime parameters.
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index cfc75ccbe4..b1d9507978 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -77,7 +77,7 @@ static void sched_gran_print(enum sched_gran mode, unsigned int gran)
 }
 
 #ifdef CONFIG_HAS_SCHED_GRANULARITY
-static int __init sched_select_granularity(const char *str)
+static int sched_gran_get(const char *str, enum sched_gran *mode)
 {
     unsigned int i;
 
@@ -85,36 +85,43 @@ static int __init sched_select_granularity(const char *str)
     {
         if ( strcmp(sg_name[i].name, str) == 0 )
         {
-            opt_sched_granularity = sg_name[i].mode;
+            *mode = sg_name[i].mode;
             return 0;
         }
     }
 
     return -EINVAL;
 }
+
+static int __init sched_select_granularity(const char *str)
+{
+    return sched_gran_get(str, &opt_sched_granularity);
+}
 custom_param("sched-gran", sched_select_granularity);
+#else
+static int sched_gran_get(const char *str, enum sched_gran *mode)
+{
+    return -EINVAL;
+}
 #endif
 
-static unsigned int __init cpupool_check_granularity(void)
+static unsigned int cpupool_check_granularity(enum sched_gran mode)
 {
     unsigned int cpu;
     unsigned int siblings, gran = 0;
 
-    if ( opt_sched_granularity == SCHED_GRAN_cpu )
+    if ( mode == SCHED_GRAN_cpu )
         return 1;
 
     for_each_online_cpu ( cpu )
     {
-        siblings = cpumask_weight(sched_get_opt_cpumask(opt_sched_granularity,
-                                                        cpu));
+        siblings = cpumask_weight(sched_get_opt_cpumask(mode, cpu));
         if ( gran == 0 )
             gran = siblings;
         else if ( gran != siblings )
             return 0;
     }
 
-    sched_disable_smt_switching = true;
-
     return gran;
 }
 
@@ -126,7 +133,7 @@ static void __init cpupool_gran_init(void)
 
     while ( gran == 0 )
     {
-        gran = cpupool_check_granularity();
+        gran = cpupool_check_granularity(opt_sched_granularity);
 
         if ( gran == 0 )
         {
@@ -152,6 +159,9 @@ static void __init cpupool_gran_init(void)
     if ( fallback )
         warning_add(fallback);
 
+    if ( opt_sched_granularity != SCHED_GRAN_cpu )
+        sched_disable_smt_switching = true;
+
     sched_granularity = gran;
     sched_gran_print(opt_sched_granularity, sched_granularity);
 }
@@ -1145,17 +1155,55 @@ static unsigned int hypfs_gran_getsize(const struct hypfs_entry *entry)
     return strlen(gran) + 1;
 }
 
+static int cpupool_gran_write(struct hypfs_entry_leaf *leaf,
+                              XEN_GUEST_HANDLE_PARAM(void) uaddr,
+                              unsigned int ulen)
+{
+    const struct hypfs_dyndir_id *data;
+    struct cpupool *cpupool;
+    enum sched_gran gran;
+    unsigned int sched_gran = 0;
+    char name[SCHED_GRAN_NAME_LEN];
+    int ret = 0;
+
+    if ( ulen > SCHED_GRAN_NAME_LEN )
+        return -ENOSPC;
+
+    if ( copy_from_guest(name, uaddr, ulen) )
+        return -EFAULT;
+
+    if ( memchr(name, 0, ulen) == (name + ulen - 1) )
+        sched_gran = sched_gran_get(name, &gran) ?
+                     0 : cpupool_check_granularity(gran);
+    if ( sched_gran == 0 )
+        return -EINVAL;
+
+    data = hypfs_get_dyndata();
+    cpupool = __cpupool_find_by_id(data->id, true);
+    ASSERT(cpupool);
+
+    if ( !cpumask_empty(cpupool->cpu_valid) )
+        ret = -EBUSY;
+    else
+    {
+        cpupool->gran = gran;
+        cpupool->sched_gran = sched_gran;
+    }
+
+    return ret;
+}
+
 static struct hypfs_funcs cpupool_gran_funcs = {
     .enter = hypfs_node_enter,
     .exit = hypfs_node_exit,
     .read = cpupool_gran_read,
-    .write = hypfs_write_deny,
+    .write = cpupool_gran_write,
     .getsize = hypfs_gran_getsize,
     .findentry = hypfs_leaf_findentry,
 };
 
 static HYPFS_VARSIZE_INIT(cpupool_gran, XEN_HYPFS_TYPE_STRING, "sched-gran",
-                          0, &cpupool_gran_funcs);
+                          SCHED_GRAN_NAME_LEN, &cpupool_gran_funcs);
 static char granstr[SCHED_GRAN_NAME_LEN] = {
     [0 ... SCHED_GRAN_NAME_LEN - 2] = '?',
     [SCHED_GRAN_NAME_LEN - 1] = 0
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 08:22:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 08:22:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41659.75084 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0vK-0005dp-2r; Tue, 01 Dec 2020 08:22:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41659.75084; Tue, 01 Dec 2020 08:22:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0vJ-0005cz-Be; Tue, 01 Dec 2020 08:22:09 +0000
Received: by outflank-mailman (input) for mailman id 41659;
 Tue, 01 Dec 2020 08:22:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UECe=FF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kk0vH-0004Uj-Jh
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 08:22:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 544f183e-d5f3-4c30-9d2f-26ec576f0792;
 Tue, 01 Dec 2020 08:21:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 70019AF26;
 Tue,  1 Dec 2020 08:21:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 544f183e-d5f3-4c30-9d2f-26ec576f0792
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606810893; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=5TUB0753mqlMUzFRfIKqWdQ4pi4TEHdAafXYRsY3+ps=;
	b=A0YHXj4E2/lcDG8u30FMXxMKHWlPnkqX5fXhttVGTKHbTrOBq0ysiVvNfNaXhMEnxZYSWQ
	WIDIxYZnKNEGbvmh3Q/gTg/InCxWZ3nGq39Z4NnoNGX8nmZxUssvDqtA8JiFhyF0fFQ/9G
	4/WqEKfDD4eK6+TeEMMWkvr2lM9Lc/s=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 12/17] xen/hypfs: add new enter() and exit() per node callbacks
Date: Tue,  1 Dec 2020 09:21:23 +0100
Message-Id: <20201201082128.15239-13-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201201082128.15239-1-jgross@suse.com>
References: <20201201082128.15239-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to better support resource allocation and locking for dynamic
hypfs nodes, add enter() and exit() callbacks to struct hypfs_funcs.

The enter() callback is called when entering a node during hypfs user
actions (traversing, reading or writing it), while the exit() callback
is called when leaving a node (accessing another node at the same or a
higher directory level, or when returning to the user).

To avoid recursion this requires a parent pointer in each node. Let the
enter() callback return the address of the entry, which is stored as the
last accessed node; in the case of dynamic entries this makes it possible
to use a template entry for that purpose.
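The parent-pointer bookkeeping can be sketched stand-alone: a plain static pointer stands in for the per-cpu hypfs_last_node_entered variable, and C assert() for the hypervisor's ASSERT().

```c
#include <assert.h>
#include <stddef.h>

/* Toy node carrying only the parent pointer added by this patch. */
struct node { struct node *parent; };

/* "Last node entered" (per-cpu in the real code, a static here). */
static struct node *last_node_entered;

/* Mirrors node_enter(): a node may only follow its parent. */
static void node_enter(struct node *n)
{
    assert(!last_node_entered || last_node_entered == n->parent);
    last_node_entered = n;
}

/* Mirrors node_exit(): leaving a node steps back to its parent. */
static void node_exit(struct node *n)
{
    assert(last_node_entered == n);
    last_node_entered = n->parent;
}

/* Mirrors node_exit_all(): unwind completely before returning to the user. */
static void node_exit_all(void)
{
    while ( last_node_entered )
        node_exit(last_node_entered);
}
```

Because each node records its parent, no recursion or explicit stack is needed to unwind the chain of entered nodes.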

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- new patch
---
 xen/common/hypfs.c      | 79 +++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/hypfs.h |  5 +++
 2 files changed, 84 insertions(+)

diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index 83c5cacdca..e5adc9defe 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -25,30 +25,40 @@ CHECK_hypfs_dirlistentry;
      ROUNDUP((name_len) + 1, alignof(struct xen_hypfs_direntry)))
 
 const struct hypfs_funcs hypfs_dir_funcs = {
+    .enter = hypfs_node_enter,
+    .exit = hypfs_node_exit,
     .read = hypfs_read_dir,
     .write = hypfs_write_deny,
     .getsize = hypfs_getsize,
     .findentry = hypfs_dir_findentry,
 };
 const struct hypfs_funcs hypfs_leaf_ro_funcs = {
+    .enter = hypfs_node_enter,
+    .exit = hypfs_node_exit,
     .read = hypfs_read_leaf,
     .write = hypfs_write_deny,
     .getsize = hypfs_getsize,
     .findentry = hypfs_leaf_findentry,
 };
 const struct hypfs_funcs hypfs_leaf_wr_funcs = {
+    .enter = hypfs_node_enter,
+    .exit = hypfs_node_exit,
     .read = hypfs_read_leaf,
     .write = hypfs_write_leaf,
     .getsize = hypfs_getsize,
     .findentry = hypfs_leaf_findentry,
 };
 const struct hypfs_funcs hypfs_bool_wr_funcs = {
+    .enter = hypfs_node_enter,
+    .exit = hypfs_node_exit,
     .read = hypfs_read_leaf,
     .write = hypfs_write_bool,
     .getsize = hypfs_getsize,
     .findentry = hypfs_leaf_findentry,
 };
 const struct hypfs_funcs hypfs_custom_wr_funcs = {
+    .enter = hypfs_node_enter,
+    .exit = hypfs_node_exit,
     .read = hypfs_read_leaf,
     .write = hypfs_write_custom,
     .getsize = hypfs_getsize,
@@ -63,6 +73,8 @@ enum hypfs_lock_state {
 };
 static DEFINE_PER_CPU(enum hypfs_lock_state, hypfs_locked);
 
+static DEFINE_PER_CPU(const struct hypfs_entry *, hypfs_last_node_entered);
+
 HYPFS_DIR_INIT(hypfs_root, "");
 
 static void hypfs_read_lock(void)
@@ -100,11 +112,58 @@ static void hypfs_unlock(void)
     }
 }
 
+const struct hypfs_entry *hypfs_node_enter(const struct hypfs_entry *entry)
+{
+    return entry;
+}
+
+void hypfs_node_exit(const struct hypfs_entry *entry)
+{
+}
+
+static int node_enter(const struct hypfs_entry *entry)
+{
+    const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered);
+
+    entry = entry->funcs->enter(entry);
+    if ( IS_ERR(entry) )
+        return PTR_ERR(entry);
+
+    ASSERT(!*last || *last == entry->parent);
+
+    *last = entry;
+
+    return 0;
+}
+
+static void node_exit(const struct hypfs_entry *entry)
+{
+    const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered);
+
+    if ( !*last )
+        return;
+
+    ASSERT(*last == entry);
+    *last = entry->parent;
+
+    entry->funcs->exit(entry);
+}
+
+static void node_exit_all(void)
+{
+    const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered);
+
+    while ( *last )
+        node_exit(*last);
+}
+
 static int add_entry(struct hypfs_entry_dir *parent, struct hypfs_entry *new)
 {
     int ret = -ENOENT;
     struct hypfs_entry *e;
 
+    ASSERT(new->funcs->enter);
+    ASSERT(new->funcs->exit);
     ASSERT(new->funcs->read);
     ASSERT(new->funcs->write);
     ASSERT(new->funcs->getsize);
@@ -140,6 +199,7 @@ static int add_entry(struct hypfs_entry_dir *parent, struct hypfs_entry *new)
         unsigned int sz = strlen(new->name);
 
         parent->e.size += DIRENTRY_SIZE(sz);
+        new->parent = &parent->e;
     }
 
     hypfs_unlock();
@@ -221,6 +281,7 @@ static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
     const char *end;
     struct hypfs_entry *entry;
     unsigned int name_len;
+    int ret;
 
     for ( ; ; )
     {
@@ -235,6 +296,10 @@ static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
             end = strchr(path, '\0');
         name_len = end - path;
 
+        ret = node_enter(&dir->e);
+        if ( ret )
+            return ERR_PTR(ret);
+
         entry = dir->e.funcs->findentry(dir, path, name_len);
         if ( IS_ERR(entry) || !*end )
             return entry;
@@ -265,6 +330,7 @@ int hypfs_read_dir(const struct hypfs_entry *entry,
     const struct hypfs_entry_dir *d;
     const struct hypfs_entry *e;
     unsigned int size = entry->funcs->getsize(entry);
+    int ret;
 
     ASSERT(this_cpu(hypfs_locked) != hypfs_unlocked);
 
@@ -276,12 +342,19 @@ int hypfs_read_dir(const struct hypfs_entry *entry,
         unsigned int e_namelen = strlen(e->name);
         unsigned int e_len = DIRENTRY_SIZE(e_namelen);
 
+        ret = node_enter(e);
+        if ( ret )
+            return ret;
+
         direntry.e.pad = 0;
         direntry.e.type = e->type;
         direntry.e.encoding = e->encoding;
         direntry.e.content_len = e->funcs->getsize(e);
         direntry.e.max_write_len = e->max_size;
         direntry.off_next = list_is_last(&e->list, &d->dirlist) ? 0 : e_len;
+
+        node_exit(e);
+
         if ( copy_to_guest(uaddr, &direntry, 1) )
             return -EFAULT;
 
@@ -490,6 +563,10 @@ long do_hypfs_op(unsigned int cmd,
         goto out;
     }
 
+    ret = node_enter(entry);
+    if ( ret )
+        goto out;
+
     switch ( cmd )
     {
     case XEN_HYPFS_OP_read:
@@ -506,6 +583,8 @@ long do_hypfs_op(unsigned int cmd,
     }
 
  out:
+    node_exit_all();
+
     hypfs_unlock();
 
     return ret;
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index 53f50772b4..8d96abd805 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -35,6 +35,8 @@ struct hypfs_entry;
  * "/a/b/c" findentry() will be called for "/", "/a", and "/a/b").
  */
 struct hypfs_funcs {
+    const struct hypfs_entry *(*enter)(const struct hypfs_entry *entry);
+    void (*exit)(const struct hypfs_entry *entry);
     int (*read)(const struct hypfs_entry *entry,
                 XEN_GUEST_HANDLE_PARAM(void) uaddr);
     int (*write)(struct hypfs_entry_leaf *leaf,
@@ -56,6 +58,7 @@ struct hypfs_entry {
     unsigned int size;
     unsigned int max_size;
     const char *name;
+    struct hypfs_entry *parent;
     struct list_head list;
     const struct hypfs_funcs *funcs;
 };
@@ -149,6 +152,8 @@ int hypfs_add_dir(struct hypfs_entry_dir *parent,
                   struct hypfs_entry_dir *dir, bool nofault);
 int hypfs_add_leaf(struct hypfs_entry_dir *parent,
                    struct hypfs_entry_leaf *leaf, bool nofault);
+const struct hypfs_entry *hypfs_node_enter(const struct hypfs_entry *entry);
+void hypfs_node_exit(const struct hypfs_entry *entry);
 int hypfs_read_dir(const struct hypfs_entry *entry,
                    XEN_GUEST_HANDLE_PARAM(void) uaddr);
 int hypfs_read_leaf(const struct hypfs_entry *entry,
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 08:22:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 08:22:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41669.75101 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0vO-0005of-FO; Tue, 01 Dec 2020 08:22:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41669.75101; Tue, 01 Dec 2020 08:22:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0vO-0005oT-A2; Tue, 01 Dec 2020 08:22:14 +0000
Received: by outflank-mailman (input) for mailman id 41669;
 Tue, 01 Dec 2020 08:22:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UECe=FF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kk0vM-0004Uj-Jt
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 08:22:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c3e75dc9-6b8c-4948-9e33-bfc041c331ed;
 Tue, 01 Dec 2020 08:21:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E7BA2AC2F;
 Tue,  1 Dec 2020 08:21:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3e75dc9-6b8c-4948-9e33-bfc041c331ed
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606810894; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7nEfxMc4Y111J56Pbtn4xCo86ZYNHh7NsCYZakRnxAw=;
	b=q7fVue6+a3LnDyFS4uwGeOnENuWlrCODSQKGhqEjyhoQOU0imZti8xKGiJxIgP4ZPH0XGq
	NcJcyyWri44MTb8Bd6uW73F0Exdla3Y1bf3Kz69CES6KlvO/AeUxcAaeshGW8aFjl48Syi
	NJNsGCjs8OelOTH3cwKbfq2LzCG5/UE=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 14/17] xen/hypfs: add support for id-based dynamic directories
Date: Tue,  1 Dec 2020 09:21:25 +0100
Message-Id: <20201201082128.15239-15-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201201082128.15239-1-jgross@suse.com>
References: <20201201082128.15239-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add some helpers to hypfs.c to support dynamic directories with a
numerical id as their name.

A dynamic directory is based on a template specified by the user,
allowing specific access functions to be used and providing a
predefined set of entries in the directory.
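Because the template's name is a printf-style format string (e.g. "%u"), the size of a generated directory entry can be computed without building the name, via snprintf(NULL, 0, ...). A sketch of that calculation follows; the layout constants are made up for illustration (the real offset and alignment come from struct xen_hypfs_direntry).

```c
#include <assert.h>
#include <stdio.h>

/* Assumed layout constants for this sketch only. */
#define DIRENTRY_NAME_OFF 16
#define DIRENTRY_ALIGN     8
#define ROUNDUP(x, a) (((x) + (a) - 1) & ~((a) - 1))

/* Size of a directory entry: header plus the NUL-terminated name, padded. */
#define DIRENTRY_SIZE(name_len) \
    (DIRENTRY_NAME_OFF + ROUNDUP((name_len) + 1, DIRENTRY_ALIGN))

/*
 * Mirrors hypfs_dynid_entry_size(): snprintf with a NULL buffer and size 0
 * returns the length the formatted name would have, without writing it.
 */
static unsigned int dynid_entry_size(const char *template_name,
                                     unsigned int id)
{
    return DIRENTRY_SIZE((unsigned int)snprintf(NULL, 0, template_name, id));
}
```

This is what lets the directory's getsize() callback sum up entry sizes for all ids without formatting any names.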

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- use macro for length of entry name (Jan Beulich)
- const attributes (Jan Beulich)
- use template name as format string (Jan Beulich)
- add hypfs_dynid_entry_size() helper (Jan Beulich)
- expect dyndir data having been allocated by enter() callback
---
 xen/common/hypfs.c      | 75 +++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/hypfs.h | 17 ++++++++++
 2 files changed, 92 insertions(+)

diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index 39448c5409..1819b26925 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -355,6 +355,81 @@ unsigned int hypfs_getsize(const struct hypfs_entry *entry)
     return entry->size;
 }
 
+int hypfs_read_dyndir_id_entry(const struct hypfs_entry_dir *template,
+                               unsigned int id, bool is_last,
+                               XEN_GUEST_HANDLE_PARAM(void) *uaddr)
+{
+    struct xen_hypfs_dirlistentry direntry;
+    char name[HYPFS_DYNDIR_ID_NAMELEN];
+    unsigned int e_namelen, e_len;
+
+    e_namelen = snprintf(name, sizeof(name), template->e.name, id);
+    e_len = DIRENTRY_SIZE(e_namelen);
+    direntry.e.pad = 0;
+    direntry.e.type = template->e.type;
+    direntry.e.encoding = template->e.encoding;
+    direntry.e.content_len = template->e.funcs->getsize(&template->e);
+    direntry.e.max_write_len = template->e.max_size;
+    direntry.off_next = is_last ? 0 : e_len;
+    if ( copy_to_guest(*uaddr, &direntry, 1) )
+        return -EFAULT;
+    if ( copy_to_guest_offset(*uaddr, DIRENTRY_NAME_OFF, name,
+                              e_namelen + 1) )
+        return -EFAULT;
+
+    guest_handle_add_offset(*uaddr, e_len);
+
+    return 0;
+}
+
+static struct hypfs_entry *hypfs_dyndir_findentry(
+    const struct hypfs_entry_dir *dir, const char *name, unsigned int name_len)
+{
+    const struct hypfs_dyndir_id *data;
+
+    data = hypfs_get_dyndata();
+
+    /* Use template with original findentry function. */
+    return data->template->e.funcs->findentry(data->template, name, name_len);
+}
+
+static int hypfs_read_dyndir(const struct hypfs_entry *entry,
+                             XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    const struct hypfs_dyndir_id *data;
+
+    data = hypfs_get_dyndata();
+
+    /* Use template with original read function. */
+    return data->template->e.funcs->read(&data->template->e, uaddr);
+}
+
+struct hypfs_entry *hypfs_gen_dyndir_entry_id(
+    const struct hypfs_entry_dir *template, unsigned int id)
+{
+    struct hypfs_dyndir_id *data;
+
+    data = hypfs_get_dyndata();
+
+    data->template = template;
+    data->id = id;
+    snprintf(data->name, sizeof(data->name), template->e.name, id);
+    data->dir = *template;
+    data->dir.e.name = data->name;
+    data->dir.e.funcs = &data->funcs;
+    data->funcs = *template->e.funcs;
+    data->funcs.findentry = hypfs_dyndir_findentry;
+    data->funcs.read = hypfs_read_dyndir;
+
+    return &data->dir.e;
+}
+
+unsigned int hypfs_dynid_entry_size(const struct hypfs_entry *template,
+                                    unsigned int id)
+{
+    return DIRENTRY_SIZE(snprintf(NULL, 0, template->name, id));
+}
+
 int hypfs_read_dir(const struct hypfs_entry *entry,
                    XEN_GUEST_HANDLE_PARAM(void) uaddr)
 {
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index be8d6c508a..1782c50b30 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -76,6 +76,16 @@ struct hypfs_entry_dir {
     struct list_head dirlist;
 };
 
+struct hypfs_dyndir_id {
+    struct hypfs_entry_dir dir;             /* Modified copy of template. */
+    struct hypfs_funcs funcs;               /* Dynamic functions. */
+    const struct hypfs_entry_dir *template; /* Template used. */
+#define HYPFS_DYNDIR_ID_NAMELEN 12
+    char name[HYPFS_DYNDIR_ID_NAMELEN];     /* Name of hypfs entry. */
+
+    unsigned int id;                        /* Numerical id. */
+};
+
 #define HYPFS_VARDIR_INIT(var, nam, fn)           \
     struct hypfs_entry_dir __read_mostly var = {  \
         .e.type = XEN_HYPFS_TYPE_DIR,             \
@@ -181,6 +191,13 @@ struct hypfs_entry *hypfs_dir_findentry(const struct hypfs_entry_dir *dir,
 void *hypfs_alloc_dyndata(unsigned long size);
 void *hypfs_get_dyndata(void);
 void hypfs_free_dyndata(void);
+int hypfs_read_dyndir_id_entry(const struct hypfs_entry_dir *template,
+                               unsigned int id, bool is_last,
+                               XEN_GUEST_HANDLE_PARAM(void) *uaddr);
+struct hypfs_entry *hypfs_gen_dyndir_entry_id(
+    const struct hypfs_entry_dir *template, unsigned int id);
+unsigned int hypfs_dynid_entry_size(const struct hypfs_entry *template,
+                                    unsigned int id);
 #endif
 
 #endif /* __XEN_HYPFS_H__ */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 08:22:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 08:22:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41677.75114 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0vT-0005yk-9Q; Tue, 01 Dec 2020 08:22:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41677.75114; Tue, 01 Dec 2020 08:22:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0vT-0005yV-38; Tue, 01 Dec 2020 08:22:19 +0000
Received: by outflank-mailman (input) for mailman id 41677;
 Tue, 01 Dec 2020 08:22:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UECe=FF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kk0vR-0004Uj-KF
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 08:22:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8c0bbf43-36d4-4383-b29a-fee1a0eed84c;
 Tue, 01 Dec 2020 08:21:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 37707AD71;
 Tue,  1 Dec 2020 08:21:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8c0bbf43-36d4-4383-b29a-fee1a0eed84c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606810894; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=HM46dWJ+2EckZVs46afJ02xSZlPII/ZflhbtJ/soVzY=;
	b=si2eglK8gFZfvlwn7DwLnIfjPwj443YIA2y0+2TssmsqgTD1VpUZvitNvDWOvPhJZbQS6V
	H6lMLFAHhyc3EJFWNXxzU9nCuJnsn3hnRCrUF1EVh1mxqA/k9p3Ppo39CylLKOsK5gOPwX
	ZRK+gWBGuZMp8brKkizNXEPDGrUqBxI=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Dario Faggioli <dfaggioli@suse.com>
Subject: [PATCH v2 15/17] xen/cpupool: add cpupool directories
Date: Tue,  1 Dec 2020 09:21:26 +0100
Message-Id: <20201201082128.15239-16-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201201082128.15239-1-jgross@suse.com>
References: <20201201082128.15239-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add /cpupool/<cpupool-id> directories to hypfs. These are completely
dynamic, so the related hypfs access functions need to be implemented.
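One piece of those access functions is mapping a directory name back to a pool id, accepting the name only if it is entirely numeric. A stand-alone sketch of that check, using the C library's strtoul in place of the hypervisor's simple_strtoul:

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Mirrors the name check in cpupool_dir_findentry(): the name is a valid
 * pool id only if strtoul consumed exactly name_len characters, which
 * rejects names such as "1x" or "abc".
 */
static int parse_pool_id(const char *name, size_t name_len, unsigned long *id)
{
    char *end;

    *id = strtoul(name, &end, 10);
    return end == name + name_len ? 0 : -1;
}
```

Comparing the end pointer against name + name_len (rather than just checking *end) matters because hypfs path components are length-delimited, not NUL-terminated at the component boundary.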

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- added const (Jan Beulich)
- call hypfs_add_dir() in helper (Dario Faggioli)
- switch locking to enter/exit callbacks
---
 docs/misc/hypfs-paths.pandoc |   9 +++
 xen/common/sched/cpupool.c   | 122 +++++++++++++++++++++++++++++++++++
 2 files changed, 131 insertions(+)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index 6c7b2f7ee3..aaca1cdf92 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -175,6 +175,15 @@ The major version of Xen.
 
 The minor version of Xen.
 
+#### /cpupool/
+
+A directory of all current cpupools.
+
+#### /cpupool/*/
+
+The individual cpupools. Each entry is a directory with the name being the
+cpupool-id (e.g. /cpupool/0/).
+
 #### /params/
 
 A directory of runtime parameters.
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 0db7d77219..3e17fdf95b 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -13,6 +13,8 @@
 
 #include <xen/cpu.h>
 #include <xen/cpumask.h>
+#include <xen/guest_access.h>
+#include <xen/hypfs.h>
 #include <xen/init.h>
 #include <xen/keyhandler.h>
 #include <xen/lib.h>
@@ -33,6 +35,7 @@ static int cpupool_moving_cpu = -1;
 static struct cpupool *cpupool_cpu_moving = NULL;
 static cpumask_t cpupool_locked_cpus;
 
+/* This lock nests inside sysctl or hypfs lock. */
 static DEFINE_SPINLOCK(cpupool_lock);
 
 static enum sched_gran __read_mostly opt_sched_granularity = SCHED_GRAN_cpu;
@@ -1003,12 +1006,131 @@ static struct notifier_block cpu_nfb = {
     .notifier_call = cpu_callback
 };
 
+#ifdef CONFIG_HYPFS
+static const struct hypfs_entry *cpupool_pooldir_enter(
+    const struct hypfs_entry *entry);
+
+static struct hypfs_funcs cpupool_pooldir_funcs = {
+    .enter = cpupool_pooldir_enter,
+    .exit = hypfs_node_exit,
+    .read = hypfs_read_dir,
+    .write = hypfs_write_deny,
+    .getsize = hypfs_getsize,
+    .findentry = hypfs_dir_findentry,
+};
+
+static HYPFS_VARDIR_INIT(cpupool_pooldir, "%u", &cpupool_pooldir_funcs);
+
+static const struct hypfs_entry *cpupool_pooldir_enter(
+    const struct hypfs_entry *entry)
+{
+    return &cpupool_pooldir.e;
+}
+
+static int cpupool_dir_read(const struct hypfs_entry *entry,
+                            XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    int ret = 0;
+    const struct cpupool *c;
+    unsigned int size = 0;
+
+    list_for_each_entry(c, &cpupool_list, list)
+    {
+        size += hypfs_dynid_entry_size(entry, c->cpupool_id);
+
+        ret = hypfs_read_dyndir_id_entry(&cpupool_pooldir, c->cpupool_id,
+                                         list_is_last(&c->list, &cpupool_list),
+                                         &uaddr);
+        if ( ret )
+            break;
+    }
+
+    return ret;
+}
+
+static unsigned int cpupool_dir_getsize(const struct hypfs_entry *entry)
+{
+    const struct cpupool *c;
+    unsigned int size = 0;
+
+    list_for_each_entry(c, &cpupool_list, list)
+        size += hypfs_dynid_entry_size(entry, c->cpupool_id);
+
+    return size;
+}
+
+static const struct hypfs_entry *cpupool_dir_enter(
+    const struct hypfs_entry *entry)
+{
+    struct hypfs_dyndir_id *data;
+
+    data = hypfs_alloc_dyndata(sizeof(*data));
+    if ( !data )
+        return ERR_PTR(-ENOMEM);
+    data->id = CPUPOOLID_NONE;
+
+    spin_lock(&cpupool_lock);
+
+    return entry;
+}
+
+static void cpupool_dir_exit(const struct hypfs_entry *entry)
+{
+    spin_unlock(&cpupool_lock);
+
+    hypfs_free_dyndata();
+}
+
+static struct hypfs_entry *cpupool_dir_findentry(
+    const struct hypfs_entry_dir *dir, const char *name, unsigned int name_len)
+{
+    unsigned long id;
+    const char *end;
+    const struct cpupool *cpupool;
+
+    id = simple_strtoul(name, &end, 10);
+    if ( end != name + name_len )
+        return ERR_PTR(-ENOENT);
+
+    cpupool = __cpupool_find_by_id(id, true);
+
+    if ( !cpupool )
+        return ERR_PTR(-ENOENT);
+
+    return hypfs_gen_dyndir_entry_id(&cpupool_pooldir, id);
+}
+
+static struct hypfs_funcs cpupool_dir_funcs = {
+    .enter = cpupool_dir_enter,
+    .exit = cpupool_dir_exit,
+    .read = cpupool_dir_read,
+    .write = hypfs_write_deny,
+    .getsize = cpupool_dir_getsize,
+    .findentry = cpupool_dir_findentry,
+};
+
+static HYPFS_VARDIR_INIT(cpupool_dir, "cpupool", &cpupool_dir_funcs);
+
+static void cpupool_hypfs_init(void)
+{
+    hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
+    hypfs_add_dyndir(&cpupool_dir, &cpupool_pooldir);
+}
+#else
+
+static void cpupool_hypfs_init(void)
+{
+}
+#endif
+
 static int __init cpupool_init(void)
 {
     unsigned int cpu;
 
     cpupool_gran_init();
 
+    cpupool_hypfs_init();
+
     cpupool0 = cpupool_create(0, 0);
     BUG_ON(IS_ERR(cpupool0));
     cpupool_put(cpupool0);
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 08:24:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 08:24:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41701.75125 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0xL-0006fJ-Kd; Tue, 01 Dec 2020 08:24:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41701.75125; Tue, 01 Dec 2020 08:24:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk0xL-0006fC-Hg; Tue, 01 Dec 2020 08:24:15 +0000
Received: by outflank-mailman (input) for mailman id 41701;
 Tue, 01 Dec 2020 08:24:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kk0xK-0006f1-Hp; Tue, 01 Dec 2020 08:24:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kk0xK-0006W1-DL; Tue, 01 Dec 2020 08:24:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kk0xK-0006yH-5n; Tue, 01 Dec 2020 08:24:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kk0xK-0000ym-5L; Tue, 01 Dec 2020 08:24:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Y5VkM0Y3FhZC60uDY7Y1xgDiVDit1xrKETxH2jCZGBo=; b=GH/VPYlIKl1NpTnOT5WiAUV7bG
	KqdJM2QuHpqVWbPBnTOEO/kXxM60GLEj7HF4rthUsJYpGMQ9hNpDQxi2tjbyl1BH10TgpK13Nev1A
	LTuZOOwJScQt4FF+42XDvkQLdjgQsz7XtsmK1JquUo6yn3dbTzPc9vl9V3p0f2lt08Yg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157125-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157125: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=08ae9e5f40f8bae0c3cf48f84181884ddd310fa0
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 01 Dec 2020 08:24:14 +0000

flight 157125 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157125/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              08ae9e5f40f8bae0c3cf48f84181884ddd310fa0
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  144 days
Failing since        151818  2020-07-11 04:18:52 Z  143 days  138 attempts
Testing same since   157125  2020-12-01 04:19:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 29770 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 08:33:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 08:33:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41742.75141 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk16G-0007q8-JY; Tue, 01 Dec 2020 08:33:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41742.75141; Tue, 01 Dec 2020 08:33:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk16G-0007q1-G0; Tue, 01 Dec 2020 08:33:28 +0000
Received: by outflank-mailman (input) for mailman id 41742;
 Tue, 01 Dec 2020 08:33:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kk16F-0007po-7R; Tue, 01 Dec 2020 08:33:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kk16F-0006li-1T; Tue, 01 Dec 2020 08:33:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kk16E-0007Wg-Md; Tue, 01 Dec 2020 08:33:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kk16E-0004Pb-MA; Tue, 01 Dec 2020 08:33:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=aukKySFH3+tdZz2fUeMMHS1iLyNz2Strc3q1hHQb+c4=; b=4iXpNiY43QLk7u2JD7skClnKmK
	Kx3tBdhJc9DwUWKAZ9G7C9Iu/OPEL3W6j8XmOypupMYNq6dm8yjYPDWecEvvm0+8PQ0ZsHj7qJNfX
	4z4Yh69vEAKhBBbpVdCA0Bm+t9Av7TNQGwNV6hWAvaM62+WKSmIdg7W2QycnTVLYteTs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157116-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157116: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=944fdc5e27a5b5adbb765891e8e70e88ba9a00ec
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 01 Dec 2020 08:33:26 +0000

flight 157116 qemu-mainline real [real]
flight 157127 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157116/
http://logs.test-lab.xenproject.org/osstest/logs/157127/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                944fdc5e27a5b5adbb765891e8e70e88ba9a00ec
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  102 days
Failing since        152659  2020-08-21 14:07:39 Z  101 days  213 attempts
Testing same since   157069  2020-11-28 05:42:38 Z    3 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69308 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 08:55:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 08:55:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41751.75156 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk1RV-0001Ne-JY; Tue, 01 Dec 2020 08:55:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41751.75156; Tue, 01 Dec 2020 08:55:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk1RV-0001NX-Gb; Tue, 01 Dec 2020 08:55:25 +0000
Received: by outflank-mailman (input) for mailman id 41751;
 Tue, 01 Dec 2020 08:55:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KrUB=FF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kk1RU-0001NS-E3
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 08:55:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b3b7a3dd-f7b6-444f-bf74-0270d97608e1;
 Tue, 01 Dec 2020 08:55:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E5620ACA8;
 Tue,  1 Dec 2020 08:55:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b3b7a3dd-f7b6-444f-bf74-0270d97608e1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606812922; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=4DAvSpJ/mylxBxeBiq1yxunlL1Xy2FoBL45NFThfm2E=;
	b=ZTpcSMuq5ube2ooETDSTxmQYoea4DLrZTIE+2qWETTC+IyWaICboFVdJXdMjsRWMfje2tr
	zx54s065aETpuZSmQn5gyp/yD3K/3/duSRMySH1DDmqNTYfPHSdtXnGKEdxIACO/IldF74
	6YjkGJiZaw727/74O9q/NQCf4SPdjnY=
Subject: Re: [PATCH v2 04/17] xen/cpupool: switch cpupool id to unsigned
To: Juergen Gross <jgross@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-5-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a0bac022-fe6e-aae6-6d07-6a2b9bc492b3@suse.com>
Date: Tue, 1 Dec 2020 09:55:21 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201201082128.15239-5-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.12.2020 09:21, Juergen Gross wrote:
> @@ -243,11 +243,11 @@ void cpupool_put(struct cpupool *pool)
>   * - unknown scheduler
>   */
>  static struct cpupool *cpupool_create(
> -    int poolid, unsigned int sched_id, int *perr)
> +    unsigned int poolid, unsigned int sched_id, int *perr)
>  {
>      struct cpupool *c;
>      struct cpupool **q;
> -    int last = 0;
> +    unsigned int last = 0;
>  
>      *perr = -ENOMEM;
>      if ( (c = alloc_cpupool_struct()) == NULL )
> @@ -256,7 +256,7 @@ static struct cpupool *cpupool_create(
>      /* One reference for caller, one reference for cpupool_destroy(). */
>      atomic_set(&c->refcnt, 2);
>  
> -    debugtrace_printk("cpupool_create(pool=%d,sched=%u)\n", poolid, sched_id);
> +    debugtrace_printk("cpupool_create(pool=%u,sched=%u)\n", poolid, sched_id);
>  
>      spin_lock(&cpupool_lock);

Below from here we have

    c->cpupool_id = (poolid == CPUPOOLID_NONE) ? (last + 1) : poolid;

which I think can (a) wrap to zero and (b) cause a pool with id
CPUPOOLID_NONE to be created. The former is bad in any event, and
the latter will cause confusion at least with cpupool_add_domain()
and cpupool_get_id(). I realize this is a tangential problem, i.e.
may want fixing in a separate change.

> --- a/xen/common/sched/private.h
> +++ b/xen/common/sched/private.h
> @@ -505,8 +505,8 @@ static inline void sched_unit_unpause(const struct sched_unit *unit)
>  
>  struct cpupool
>  {
> -    int              cpupool_id;
> -#define CPUPOOLID_NONE    (-1)
> +    unsigned int     cpupool_id;
> +#define CPUPOOLID_NONE    (~0U)

How about using XEN_SYSCTL_CPUPOOL_PAR_ANY here? Furthermore,
together with the remark above, I think you also want to consider
the case of sizeof(unsigned int) > sizeof(uint32_t).

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 09:00:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 09:00:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41758.75171 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk1Wk-0002Kw-Cf; Tue, 01 Dec 2020 09:00:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41758.75171; Tue, 01 Dec 2020 09:00:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk1Wk-0002Kp-8e; Tue, 01 Dec 2020 09:00:50 +0000
Received: by outflank-mailman (input) for mailman id 41758;
 Tue, 01 Dec 2020 09:00:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KrUB=FF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kk1Wi-0002Kk-FP
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 09:00:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 11074f4f-65cf-41f5-83f0-a50536beda04;
 Tue, 01 Dec 2020 09:00:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BBD9AACA8;
 Tue,  1 Dec 2020 09:00:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 11074f4f-65cf-41f5-83f0-a50536beda04
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606813246; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7C9N7AXS0BopIlIGgXrPcBYo/B9HhXxAUyIPjqTIqmU=;
	b=GQaxkkaShDPVV0mMRfi8G+mdNcWQoHLbzisF3SmUM3dO0PmsCdr1q/gLOvfUDku0sOnsLr
	wRr1opE38nsFqUDM59ewvN0pJJ1pziKPirvVOq00CSkpty6NjZe/etXsNIKUcLhhFqTBFe
	g3Ye51MENuouI4nnfbhb5bUeqfQT0EU=
Subject: Re: [PATCH v2 15/17] xen/cpupool: add cpupool directories
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Dario Faggioli <dfaggioli@suse.com>,
 xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-16-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <07118fab-6252-eb37-5844-b63e5dfc0976@suse.com>
Date: Tue, 1 Dec 2020 10:00:46 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201201082128.15239-16-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.12.2020 09:21, Juergen Gross wrote:
> Add /cpupool/<cpupool-id> directories to hypfs. Those are completely
> dynamic, so the related hypfs access functions need to be implemented.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - added const (Jan Beulich)

Any particular reason this doesn't extend to ...

> @@ -1003,12 +1006,131 @@ static struct notifier_block cpu_nfb = {
>      .notifier_call = cpu_callback
>  };
>  
> +#ifdef CONFIG_HYPFS
> +static const struct hypfs_entry *cpupool_pooldir_enter(
> +    const struct hypfs_entry *entry);
> +
> +static struct hypfs_funcs cpupool_pooldir_funcs = {

... this (similarly in the next patch)? Granted I didn't look at
the hypfs patches yet, but I don't suppose these struct instances
need to be writable.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 09:01:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 09:01:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41759.75183 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk1Wz-0002Op-Ll; Tue, 01 Dec 2020 09:01:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41759.75183; Tue, 01 Dec 2020 09:01:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk1Wz-0002Oi-HR; Tue, 01 Dec 2020 09:01:05 +0000
Received: by outflank-mailman (input) for mailman id 41759;
 Tue, 01 Dec 2020 09:01:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UECe=FF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kk1Wy-0002OT-CA
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 09:01:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 186bfbcc-18e7-4e7d-bc0b-f6408ca6e6a3;
 Tue, 01 Dec 2020 09:01:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3E4F2AB7F;
 Tue,  1 Dec 2020 09:01:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 186bfbcc-18e7-4e7d-bc0b-f6408ca6e6a3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606813262; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=gq9HZx+s/4mvB3GyXfKgnklt74JkdDMjWvBBPTawEPc=;
	b=owduYkwbTtgQ79+UMD/bdAXnxU+gkhvaOVJdqrvo/s8v2rnitlIGUppjBnCBLFRpK55Q+9
	dJ31bj8r2hegFFSOGoo5eLZoWPnN7saI/M/EyNyTZpGELwYgQThKI+OaajyYpoPy1AheqQ
	uKIPdmJD3NBaSfXarPqsZv2IeSndO+c=
Subject: Re: [PATCH v2 04/17] xen/cpupool: switch cpupool id to unsigned
To: Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-5-jgross@suse.com>
 <a0bac022-fe6e-aae6-6d07-6a2b9bc492b3@suse.com>
From: Jürgen Groß <jgross@suse.com>
Message-ID: <eed1baac-a6eb-f10b-7272-742c08f5124e@suse.com>
Date: Tue, 1 Dec 2020 10:01:01 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <a0bac022-fe6e-aae6-6d07-6a2b9bc492b3@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="h4icEOT6ODfBq5kuB7s1ePdTUsnn7F4TO"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--h4icEOT6ODfBq5kuB7s1ePdTUsnn7F4TO
Content-Type: multipart/mixed; boundary="o3PByQlchZ7AcDLF77ktJN7QW4C1jEzgU";
 protected-headers="v1"
From: Jürgen Groß <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <eed1baac-a6eb-f10b-7272-742c08f5124e@suse.com>
Subject: Re: [PATCH v2 04/17] xen/cpupool: switch cpupool id to unsigned
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-5-jgross@suse.com>
 <a0bac022-fe6e-aae6-6d07-6a2b9bc492b3@suse.com>
In-Reply-To: <a0bac022-fe6e-aae6-6d07-6a2b9bc492b3@suse.com>

--o3PByQlchZ7AcDLF77ktJN7QW4C1jEzgU
Content-Type: multipart/mixed;
 boundary="------------86159DF286D8E4151E9E14A7"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------86159DF286D8E4151E9E14A7
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 01.12.20 09:55, Jan Beulich wrote:
> On 01.12.2020 09:21, Juergen Gross wrote:
>> @@ -243,11 +243,11 @@ void cpupool_put(struct cpupool *pool)
>>    * - unknown scheduler
>>    */
>>   static struct cpupool *cpupool_create(
>> -    int poolid, unsigned int sched_id, int *perr)
>> +    unsigned int poolid, unsigned int sched_id, int *perr)
>>   {
>>       struct cpupool *c;
>>       struct cpupool **q;
>> -    int last = 0;
>> +    unsigned int last = 0;
>>
>>       *perr = -ENOMEM;
>>       if ( (c = alloc_cpupool_struct()) == NULL )
>> @@ -256,7 +256,7 @@ static struct cpupool *cpupool_create(
>>       /* One reference for caller, one reference for cpupool_destroy(). */
>>       atomic_set(&c->refcnt, 2);
>>
>> -    debugtrace_printk("cpupool_create(pool=%d,sched=%u)\n", poolid, sched_id);
>> +    debugtrace_printk("cpupool_create(pool=%u,sched=%u)\n", poolid, sched_id);
>>
>>       spin_lock(&cpupool_lock);
>
> Below from here we have
>
>      c->cpupool_id = (poolid == CPUPOOLID_NONE) ? (last + 1) : poolid;
>
> which I think can (a) wrap to zero and (b) cause a pool with id
> CPUPOOLID_NONE to be created. The former is bad in any event, and
> the latter will cause confusion at least with cpupool_add_domain()
> and cpupool_get_id(). I realize this is a tangential problem, i.e.
> may want fixing in a separate change.

Yes, this is an issue today already, and it is fixed in patch 5.

>
>> --- a/xen/common/sched/private.h
>> +++ b/xen/common/sched/private.h
>> @@ -505,8 +505,8 @@ static inline void sched_unit_unpause(const struct sched_unit *unit)
>>
>>   struct cpupool
>>   {
>> -    int              cpupool_id;
>> -#define CPUPOOLID_NONE    (-1)
>> +    unsigned int     cpupool_id;
>> +#define CPUPOOLID_NONE    (~0U)
>
> How about using XEN_SYSCTL_CPUPOOL_PAR_ANY here? Furthermore,
> together with the remark above, I think you also want to consider
> the case of sizeof(unsigned int) > sizeof(uint32_t).

With patch 5 this should be completely fine.


Juergen

--------------86159DF286D8E4151E9E14A7--

--o3PByQlchZ7AcDLF77ktJN7QW4C1jEzgU--

--h4icEOT6ODfBq5kuB7s1ePdTUsnn7F4TO
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/GBk0FAwAAAAAACgkQsN6d1ii/Ey9p
qgf/ZFYxdFIjfUPcw9KSeBF0rIxH6Nz3sauaCdlDJZ5hW3mgRwRAB0dob+EXSqaCMhdy8rk24nTr
22DD7QFwBPvk/Ved8pd3aqh8yEd2br0u2YMiDmggZLncUQcWKcFrS6ovZYFo+ZfvOes+IvVATrO5
bffCgqAAVdR8b1BWkb4w+ViryUkZAcbSqedpbXYLicHZCeUdxT+pdogfDVKqpXaUPYoKyKmgCZke
H+Qs03Jeg4N/C62SSEpccMbVjLd0smj2EzEt5pVGdfHodn2BTdOFF/NhSD8t6Ol4tGo7ZS74GjZx
q9WklHEnwieagx38ERPmZ7ICNBQGzFFEegHopCHOEA==
=zIOt
-----END PGP SIGNATURE-----

--h4icEOT6ODfBq5kuB7s1ePdTUsnn7F4TO--


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 09:03:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 09:03:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41770.75198 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk1Yz-0002bN-5T; Tue, 01 Dec 2020 09:03:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41770.75198; Tue, 01 Dec 2020 09:03:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk1Yz-0002bG-1T; Tue, 01 Dec 2020 09:03:09 +0000
Received: by outflank-mailman (input) for mailman id 41770;
 Tue, 01 Dec 2020 09:03:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UECe=FF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kk1Yy-0002bB-Fv
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 09:03:08 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 108a0a0e-9bc7-4fd6-acbf-b75450e0125c;
 Tue, 01 Dec 2020 09:03:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EB004AB7F;
 Tue,  1 Dec 2020 09:03:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 108a0a0e-9bc7-4fd6-acbf-b75450e0125c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606813387; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=tBGQA4SdlZIXUvwqO0mSdE3fF/iJT63s2Q4sRiRixj0=;
	b=BBbV30TEB7X2dv0kEdKpVWTeFvW58YcrRX4eM7yOHOcNT6LTxLE02bS2sgAuxW8t5rEPlJ
	GQC1KwVC6+wLxI1RtRNDBA+FUqoBc3Bnb7Bil9GXsJlnMIzUxsxXcuhgxY/acoV7KGeatX
	87JpTbvO7q6dg79eGAWTqyg1TR6wxDg=
Subject: Re: [PATCH v2 15/17] xen/cpupool: add cpupool directories
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Dario Faggioli <dfaggioli@suse.com>,
 xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-16-jgross@suse.com>
 <07118fab-6252-eb37-5844-b63e5dfc0976@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <d8809f56-8d6c-8597-d3c8-ff6c9bf9bde2@suse.com>
Date: Tue, 1 Dec 2020 10:03:06 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <07118fab-6252-eb37-5844-b63e5dfc0976@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="YE8XuQam5rCsXRQok1cjmn5VjJeFrOoev"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--YE8XuQam5rCsXRQok1cjmn5VjJeFrOoev
Content-Type: multipart/mixed; boundary="O5Lq6QKm3HQR8eeYWZMX9nbDmPVDM4NLw";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Dario Faggioli <dfaggioli@suse.com>,
 xen-devel@lists.xenproject.org
Message-ID: <d8809f56-8d6c-8597-d3c8-ff6c9bf9bde2@suse.com>
Subject: Re: [PATCH v2 15/17] xen/cpupool: add cpupool directories
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-16-jgross@suse.com>
 <07118fab-6252-eb37-5844-b63e5dfc0976@suse.com>
In-Reply-To: <07118fab-6252-eb37-5844-b63e5dfc0976@suse.com>

--O5Lq6QKm3HQR8eeYWZMX9nbDmPVDM4NLw
Content-Type: multipart/mixed;
 boundary="------------67038BF2CE13C3E0D04F8211"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------67038BF2CE13C3E0D04F8211
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 01.12.20 10:00, Jan Beulich wrote:
> On 01.12.2020 09:21, Juergen Gross wrote:
>> Add /cpupool/<cpupool-id> directories to hypfs. Those are completely
>> dynamic, so the related hypfs access functions need to be implemented.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> V2:
>> - added const (Jan Beulich)
> 
> Any particular reason this doesn't extend to ...
> 
>> @@ -1003,12 +1006,131 @@ static struct notifier_block cpu_nfb = {
>>       .notifier_call = cpu_callback
>>   };
>>  
>> +#ifdef CONFIG_HYPFS
>> +static const struct hypfs_entry *cpupool_pooldir_enter(
>> +    const struct hypfs_entry *entry);
>> +
>> +static struct hypfs_funcs cpupool_pooldir_funcs = {
> 
> ... this (similarly in the next patch)? Granted I didn't look at
> the hypfs patches yet, but I don't suppose these struct instances
> need to be writable.

No reason. I'll add const.


Juergen

--------------67038BF2CE13C3E0D04F8211--

--O5Lq6QKm3HQR8eeYWZMX9nbDmPVDM4NLw--

--YE8XuQam5rCsXRQok1cjmn5VjJeFrOoev
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/GBsoFAwAAAAAACgkQsN6d1ii/Ey8m
rgf/b4V257T8XPxSGuvAtBgY1uxyq/y0inFpZBdD3sJU/uOQQZ/4JUA3SRZZUa7xr7tINvQuILQG
RxHzvj/vxHpG+8nLEH6nb8f7xxieMK2PLltydGYyTpPLGhUU74sOh8pSHNLYCXlywNGUBoDpIsjX
mX5frFM8qEyUtK+nvM/tPNcxJeWkcFi572GX60GKiRr1Ny6zU0kWsCCr/Ib6zHG7HKKfVS5F+tOP
8ghF5O5qRkZ8XkNH9Dwh5VWyB4TNbQ+HR+Cp6xwUg1P4RHxQD2i7lTR7Yx/Pu9HqSuhuqt5Z6YXs
Ic3lzajLBCMI8T8An288IGUEJC7YXZIHVSKWFawvGQ==
=WgH3
-----END PGP SIGNATURE-----

--YE8XuQam5rCsXRQok1cjmn5VjJeFrOoev--


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 09:07:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 09:07:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41777.75210 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk1dM-0002q4-Nd; Tue, 01 Dec 2020 09:07:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41777.75210; Tue, 01 Dec 2020 09:07:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk1dM-0002px-KD; Tue, 01 Dec 2020 09:07:40 +0000
Received: by outflank-mailman (input) for mailman id 41777;
 Tue, 01 Dec 2020 09:07:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KrUB=FF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kk1dL-0002ps-NE
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 09:07:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b003c577-5e7c-4d79-8fb4-920b04642b5d;
 Tue, 01 Dec 2020 09:07:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 12AD8AB7F;
 Tue,  1 Dec 2020 09:07:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b003c577-5e7c-4d79-8fb4-920b04642b5d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606813658; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=4afmUn79aMo+7zFP+9CWyAWfYk6rLfDq5W/VGJdw1LQ=;
	b=Ka1c1ChuHG6f2cKd9MNL1F3kaRbEwEJ26cGtbPg0hy9mVDSLvzcux9no2t/SVyZ2R5q8kF
	sjBsPteM3Y/7XxHtwnT3KLv3o65onWt3CCE8KK1Mxht7Q4lH35sOkIFMs/k70n156kVXJq
	l3apbpvQH+gdG4uQ1AZ9DhGAX3lossY=
Subject: Re: [PATCH v2 04/17] xen/cpupool: switch cpupool id to unsigned
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-5-jgross@suse.com>
 <a0bac022-fe6e-aae6-6d07-6a2b9bc492b3@suse.com>
 <eed1baac-a6eb-f10b-7272-742c08f5124e@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e6cc3d1f-f0c5-f32e-db9c-4fc9298c2a45@suse.com>
Date: Tue, 1 Dec 2020 10:07:37 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <eed1baac-a6eb-f10b-7272-742c08f5124e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 01.12.2020 10:01, Jürgen Groß wrote:
> On 01.12.20 09:55, Jan Beulich wrote:
>> On 01.12.2020 09:21, Juergen Gross wrote:
>>> @@ -243,11 +243,11 @@ void cpupool_put(struct cpupool *pool)
>>>    * - unknown scheduler
>>>    */
>>>   static struct cpupool *cpupool_create(
>>> -    int poolid, unsigned int sched_id, int *perr)
>>> +    unsigned int poolid, unsigned int sched_id, int *perr)
>>>   {
>>>       struct cpupool *c;
>>>       struct cpupool **q;
>>> -    int last = 0;
>>> +    unsigned int last = 0;
>>>   
>>>       *perr = -ENOMEM;
>>>       if ( (c = alloc_cpupool_struct()) == NULL )
>>> @@ -256,7 +256,7 @@ static struct cpupool *cpupool_create(
>>>       /* One reference for caller, one reference for cpupool_destroy(). */
>>>       atomic_set(&c->refcnt, 2);
>>>   
>>> -    debugtrace_printk("cpupool_create(pool=%d,sched=%u)\n", poolid, sched_id);
>>> +    debugtrace_printk("cpupool_create(pool=%u,sched=%u)\n", poolid, sched_id);
>>>   
>>>       spin_lock(&cpupool_lock);
>>
>> Below from here we have
>>
>>      c->cpupool_id = (poolid == CPUPOOLID_NONE) ? (last + 1) : poolid;
>>
>> which I think can (a) wrap to zero and (b) cause a pool with id
>> CPUPOOLID_NONE to be created. The former is bad in any event, and
>> the latter will cause confusion at least with cpupool_add_domain()
>> and cpupool_get_id(). I realize this is a tangential problem, i.e.
>> may want fixing in a separate change.
> 
> Yes, this is an issue today already, and it is fixed in patch 5.
> 
>>
>>> --- a/xen/common/sched/private.h
>>> +++ b/xen/common/sched/private.h
>>> @@ -505,8 +505,8 @@ static inline void sched_unit_unpause(const struct sched_unit *unit)
>>>   
>>>   struct cpupool
>>>   {
>>> -    int              cpupool_id;
>>> -#define CPUPOOLID_NONE    (-1)
>>> +    unsigned int     cpupool_id;
>>> +#define CPUPOOLID_NONE    (~0U)
>>
>> How about using XEN_SYSCTL_CPUPOOL_PAR_ANY here? Furthermore,
>> together with the remark above, I think you also want to consider
>> the case of sizeof(unsigned int) > sizeof(uint32_t).
> 
> With patch 5 this should be completely fine.

Ah - I didn't expect this kind of fix in a patch with that title,
but yes.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 09:13:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 09:13:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41784.75222 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk1iW-0003nZ-Bz; Tue, 01 Dec 2020 09:13:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41784.75222; Tue, 01 Dec 2020 09:13:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk1iW-0003nS-8x; Tue, 01 Dec 2020 09:13:00 +0000
Received: by outflank-mailman (input) for mailman id 41784;
 Tue, 01 Dec 2020 09:12:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KrUB=FF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kk1iU-0003nN-HS
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 09:12:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3e3d97f0-72d4-4259-b98e-1ca2637a6a1b;
 Tue, 01 Dec 2020 09:12:57 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E4E8CAB7F;
 Tue,  1 Dec 2020 09:12:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e3d97f0-72d4-4259-b98e-1ca2637a6a1b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606813977; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=E7Hs6rKppoF5jfW28heEQsAkU1uHWQukzKmDwMlkejA=;
	b=A1sDDzFJ/FaYBZBpxTvyoXNGxpg2NpLgnQcT2Tc/1/ubygQkoBkyRTb41YSSDJpst2cHVW
	Iqi7jgYEV5LwTGRWO/WuxjZwrGCTzrPdwP328Jly4ih5gjPCk1KUpXDt+575mYcY6CbkFe
	IkdXMIrJKyIrxOlYO+8o9c5FpGYyBZU=
Subject: Re: [PATCH v2 05/17] xen/cpupool: switch cpupool list to normal list
 interface
To: Juergen Gross <jgross@suse.com>
Cc: Dario Faggioli <dfaggioli@suse.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-6-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <54301d8c-2d69-3206-6c42-d2638b7e7aa3@suse.com>
Date: Tue, 1 Dec 2020 10:12:56 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201201082128.15239-6-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.12.2020 09:21, Juergen Gross wrote:
> @@ -260,23 +257,42 @@ static struct cpupool *cpupool_create(
>  
>      spin_lock(&cpupool_lock);
>  
> -    for_each_cpupool(q)
> +    if ( poolid != CPUPOOLID_NONE )
>      {
> -        last = (*q)->cpupool_id;
> -        if ( (poolid != CPUPOOLID_NONE) && (last >= poolid) )
> -            break;
> +        q = __cpupool_find_by_id(poolid, false);
> +        if ( !q )
> +            list_add_tail(&c->list, &cpupool_list);
> +        else
> +        {
> +            list_add_tail(&c->list, &q->list);
> +            if ( q->cpupool_id == poolid )
> +            {
> +                *perr = -EEXIST;
> +                goto err;
> +            }

You bail _after_ having added the new entry to the list?

> +        }
> +
> +        c->cpupool_id = poolid;
>      }
> -    if ( *q != NULL )
> +    else
>      {
> -        if ( (*q)->cpupool_id == poolid )
> +        /* Cpupool 0 is created with specified id at boot and never removed. */
> +        ASSERT(!list_empty(&cpupool_list));
> +
> +        q = list_last_entry(&cpupool_list, struct cpupool, list);
> +        /* In case of wrap search for first free id. */
> +        if ( q->cpupool_id == CPUPOOLID_NONE - 1 )
>          {
> -            *perr = -EEXIST;
> -            goto err;
> +            list_for_each_entry(q, &cpupool_list, list)
> +                if ( q->cpupool_id + 1 != list_next_entry(q, list)->cpupool_id )
> +                    break;
>          }
> -        c->next = *q;
> +
> +        list_add(&c->list, &q->list);
> +
> +        c->cpupool_id = q->cpupool_id + 1;

What guarantees that you managed to find an unused ID, other
than at current CPU speeds it taking too long to create 4
billion pools? Since you're doing this under lock, wouldn't
it help anyway to have a global helper variable pointing at
the lowest pool followed by an unused ID?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 09:19:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 09:19:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41790.75234 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk1oF-000421-30; Tue, 01 Dec 2020 09:18:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41790.75234; Tue, 01 Dec 2020 09:18:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk1oE-00041u-Uw; Tue, 01 Dec 2020 09:18:54 +0000
Received: by outflank-mailman (input) for mailman id 41790;
 Tue, 01 Dec 2020 09:18:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UECe=FF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kk1oD-00041p-3w
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 09:18:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bf825411-e4df-4939-aa9b-748e5bd81716;
 Tue, 01 Dec 2020 09:18:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 54D22AB7F;
 Tue,  1 Dec 2020 09:18:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bf825411-e4df-4939-aa9b-748e5bd81716
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606814331; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=fWCxlC/CJPteGvSTp3tr4/lUzzyoA57LjvB2UgrxZWw=;
	b=rjpcSS3O7m/Xi7uTaCmvfz3tE6ivFdGzymgl7txFLvPNPvB9Drc40gIoLtlvVjsnvExPcO
	DNmm8/YzMmVMW8QnXAr9oIkbK1hHX4b6JmTjyQ5aANjF+J5wgor5ssu9dox1czWDRnmMhK
	XGS0hAPdnMrnvMe3n6O7Oj5XR/mo/OI=
Subject: Re: [PATCH v2 05/17] xen/cpupool: switch cpupool list to normal list
 interface
To: Jan Beulich <jbeulich@suse.com>
Cc: Dario Faggioli <dfaggioli@suse.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-6-jgross@suse.com>
 <54301d8c-2d69-3206-6c42-d2638b7e7aa3@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <a812d9a9-a701-bb58-01bf-9375ad4feb50@suse.com>
Date: Tue, 1 Dec 2020 10:18:50 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <54301d8c-2d69-3206-6c42-d2638b7e7aa3@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="DF6v35XDwOFfwLeKbACaaapOh4TuO42x6"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--DF6v35XDwOFfwLeKbACaaapOh4TuO42x6
Content-Type: multipart/mixed; boundary="hueREzuxdUo116A2T10fiR8Gvh92XlugY";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Dario Faggioli <dfaggioli@suse.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Message-ID: <a812d9a9-a701-bb58-01bf-9375ad4feb50@suse.com>
Subject: Re: [PATCH v2 05/17] xen/cpupool: switch cpupool list to normal list
 interface
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-6-jgross@suse.com>
 <54301d8c-2d69-3206-6c42-d2638b7e7aa3@suse.com>
In-Reply-To: <54301d8c-2d69-3206-6c42-d2638b7e7aa3@suse.com>

--hueREzuxdUo116A2T10fiR8Gvh92XlugY
Content-Type: multipart/mixed;
 boundary="------------E9628A0BADD5F12DA357219C"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------E9628A0BADD5F12DA357219C
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 01.12.20 10:12, Jan Beulich wrote:
> On 01.12.2020 09:21, Juergen Gross wrote:
>> @@ -260,23 +257,42 @@ static struct cpupool *cpupool_create(
>>  
>>       spin_lock(&cpupool_lock);
>>  
>> -    for_each_cpupool(q)
>> +    if ( poolid != CPUPOOLID_NONE )
>>       {
>> -        last = (*q)->cpupool_id;
>> -        if ( (poolid != CPUPOOLID_NONE) && (last >= poolid) )
>> -            break;
>> +        q = __cpupool_find_by_id(poolid, false);
>> +        if ( !q )
>> +            list_add_tail(&c->list, &cpupool_list);
>> +        else
>> +        {
>> +            list_add_tail(&c->list, &q->list);
>> +            if ( q->cpupool_id == poolid )
>> +            {
>> +                *perr = -EEXIST;
>> +                goto err;
>> +            }
> 
> You bail _after_ having added the new entry to the list?

Yes, this makes exit handling easier.

> 
>> +        }
>> +
>> +        c->cpupool_id = poolid;
>>       }
>> -    if ( *q != NULL )
>> +    else
>>       {
>> -        if ( (*q)->cpupool_id == poolid )
>> +        /* Cpupool 0 is created with specified id at boot and never removed. */
>> +        ASSERT(!list_empty(&cpupool_list));
>> +
>> +        q = list_last_entry(&cpupool_list, struct cpupool, list);
>> +        /* In case of wrap search for first free id. */
>> +        if ( q->cpupool_id == CPUPOOLID_NONE - 1 )
>>           {
>> -            *perr = -EEXIST;
>> -            goto err;
>> +            list_for_each_entry(q, &cpupool_list, list)
>> +                if ( q->cpupool_id + 1 != list_next_entry(q, list)->cpupool_id )
>> +                    break;
>>           }
>> -        c->next = *q;
>> +
>> +        list_add(&c->list, &q->list);
>> +
>> +        c->cpupool_id = q->cpupool_id + 1;
> 
> What guarantees that you managed to find an unused ID, other
> than at current CPU speeds it taking too long to create 4
> billion pools? Since you're doing this under lock, wouldn't
> it help anyway to have a global helper variable pointing at
> the lowest pool followed by an unused ID?

An admin doing that would be quite crazy and wouldn't deserve better.

To be usable a cpupool needs to have a cpu assigned to it. And I
don't think we are coming even close to 4 billion supported cpus. :-)

Yes, it would be possible to create 4 billion empty cpupools, but for
what purpose? There are simpler ways to make the system unusable with
dom0 root access.


Juergen



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 09:49:21 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157119-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157119: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 01 Dec 2020 09:49:13 +0000

flight 157119 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157119/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      12 debian-install           fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl          12 debian-install fail in 157109 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-examine      8 reboot           fail in 157109 pass in 157119
 test-arm64-arm64-libvirt-xsm  8 xen-boot         fail in 157109 pass in 157119
 test-arm64-arm64-xl-seattle   8 xen-boot         fail in 157109 pass in 157119
 test-arm64-arm64-xl           8 xen-boot                   fail pass in 157109
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 157109

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm 11 leak-check/basis(11) fail in 157109 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                b65054597872ce3aefbc6a666385eabdf9e288da
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  122 days
Failing since        152366  2020-08-01 20:49:34 Z  121 days  206 attempts
Testing same since   157109  2020-11-30 08:17:04 Z    1 days    2 attempts

------------------------------------------------------------
3619 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 693043 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 10:23:56 2020
Subject: Re: [PATCH V3 19/23] xen/arm: io: Abstract sign-extension
To: Oleksandr <olekstysh@gmail.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Oleksandr Tyshchenko <Oleksandr_Tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall@arm.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-20-git-send-email-olekstysh@gmail.com>
 <878sai7e1a.fsf@epam.com> <cad0d7fe-3a9f-3992-9d89-8e9bb438dfbe@gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <25057245-5885-5b11-753d-91f501eb070a@xen.org>
Date: Tue, 1 Dec 2020 10:23:31 +0000



On 30/11/2020 23:27, Oleksandr wrote:
> 
> On 30.11.20 23:03, Volodymyr Babchuk wrote:
>> Hi,
> 
> Hi Volodymyr
> 
> 
>>
>> Oleksandr Tyshchenko writes:
>>
>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>
>>> In order to avoid code duplication (both handle_read() and
>>> handle_ioserv() contain the same code for the sign-extension)
>>> put this code to a common helper to be used for both.
>>>
>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>> CC: Julien Grall <julien.grall@arm.com>
>>>
>>> ---
>>> Please note, this is a split/cleanup/hardening of Julien's PoC:
>>> "Add support for Guest IO forwarding to a device emulator"
>>>
>>> Changes V1 -> V2:
>>>     - new patch
>>>
>>> Changes V2 -> V3:
>>>     - no changes
>>> ---
>>> ---
>>>   xen/arch/arm/io.c           | 18 ++----------------
>>>   xen/arch/arm/ioreq.c        | 17 +----------------
>>>   xen/include/asm-arm/traps.h | 24 ++++++++++++++++++++++++
>>>   3 files changed, 27 insertions(+), 32 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
>>> index f44cfd4..8d6ec6c 100644
>>> --- a/xen/arch/arm/io.c
>>> +++ b/xen/arch/arm/io.c
>>> @@ -23,6 +23,7 @@
>>>   #include <asm/cpuerrata.h>
>>>   #include <asm/current.h>
>>>   #include <asm/mmio.h>
>>> +#include <asm/traps.h>
>>>   #include <asm/hvm/ioreq.h>
>>>   #include "decode.h"
>>> @@ -39,26 +40,11 @@ static enum io_state handle_read(const struct 
>>> mmio_handler *handler,
>>>        * setting r).
>>>        */
>>>       register_t r = 0;
>>> -    uint8_t size = (1 << dabt.size) * 8;
>>>       if ( !handler->ops->read(v, info, &r, handler->priv) )
>>>           return IO_ABORT;
>>> -    /*
>>> -     * Sign extend if required.
>>> -     * Note that we expect the read handler to have zeroed the bits
>>> -     * outside the requested access size.
>>> -     */
>>> -    if ( dabt.sign && (r & (1UL << (size - 1))) )
>>> -    {
>>> -        /*
>>> -         * We are relying on register_t using the same as
>>> -         * an unsigned long in order to keep the 32-bit assembly
>>> -         * code smaller.
>>> -         */
>>> -        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
>>> -        r |= (~0UL) << size;
>>> -    }
>>> +    r = sign_extend(dabt, r);
>>>       set_user_reg(regs, dabt.reg, r);
>>> diff --git a/xen/arch/arm/ioreq.c b/xen/arch/arm/ioreq.c
>>> index f08190c..2f39289 100644
>>> --- a/xen/arch/arm/ioreq.c
>>> +++ b/xen/arch/arm/ioreq.c
>>> @@ -28,7 +28,6 @@ enum io_state handle_ioserv(struct cpu_user_regs 
>>> *regs, struct vcpu *v)
>>>       const union hsr hsr = { .bits = regs->hsr };
>>>       const struct hsr_dabt dabt = hsr.dabt;
>>>       /* Code is similar to handle_read */
>>> -    uint8_t size = (1 << dabt.size) * 8;
>>>       register_t r = v->io.req.data;
>>>       /* We are done with the IO */
>>> @@ -37,21 +36,7 @@ enum io_state handle_ioserv(struct cpu_user_regs 
>>> *regs, struct vcpu *v)
>>>       if ( dabt.write )
>>>           return IO_HANDLED;
>>> -    /*
>>> -     * Sign extend if required.
>>> -     * Note that we expect the read handler to have zeroed the bits
>>> -     * outside the requested access size.
>>> -     */
>>> -    if ( dabt.sign && (r & (1UL << (size - 1))) )
>>> -    {
>>> -        /*
>>> -         * We are relying on register_t using the same as
>>> -         * an unsigned long in order to keep the 32-bit assembly
>>> -         * code smaller.
>>> -         */
>>> -        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
>>> -        r |= (~0UL) << size;
>>> -    }
>>> +    r = sign_extend(dabt, r);
>>>       set_user_reg(regs, dabt.reg, r);
>>> diff --git a/xen/include/asm-arm/traps.h b/xen/include/asm-arm/traps.h
>>> index 997c378..e301c44 100644
>>> --- a/xen/include/asm-arm/traps.h
>>> +++ b/xen/include/asm-arm/traps.h
>>> @@ -83,6 +83,30 @@ static inline bool VABORT_GEN_BY_GUEST(const 
>>> struct cpu_user_regs *regs)
>>>           (unsigned long)abort_guest_exit_end == regs->pc;
>>>   }
>>> +/* Check whether the sign extension is required and perform it */
>>> +static inline register_t sign_extend(const struct hsr_dabt dabt, 
>>> register_t r)
>>> +{
>>> +    uint8_t size = (1 << dabt.size) * 8;
>>> +
>>> +    /*
>>> +     * Sign extend if required.
>>> +     * Note that we expect the read handler to have zeroed the bits
>>> +     * outside the requested access size.
>>> +     */
>>> +    if ( dabt.sign && (r & (1UL << (size - 1))) )
>>> +    {
>>> +        /*
>>> +         * We are relying on register_t using the same as
>>> +         * an unsigned long in order to keep the 32-bit assembly
>>> +         * code smaller.
>>> +         */
>>> +        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
>>> +        r |= (~0UL) << size;
>> If `size` is 64, you will get undefined behavior there.
> I think we don't need to worry about undefined behavior here. Having
> size=64 would only be possible for a doubleword access (dabt.size=3). But
> if the "r" adjustment gets called (i.e. the Syndrome Sign Extend bit is
> set), then we deal with byte, halfword or word operations (dabt.size<3).
> Or did I miss something?

This is known and was pointed out in the commit message introducing the 
sign-extension:

"Note that the bit can only be set for access size smaller than the
register size (i.e. byte/half-word for aarch32, byte/half-word/word for
aarch64). So we don't have to worry about undefined C behavior."

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 10:30:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 10:30:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41813.75272 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk2vp-000369-5g; Tue, 01 Dec 2020 10:30:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41813.75272; Tue, 01 Dec 2020 10:30:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk2vp-000362-2O; Tue, 01 Dec 2020 10:30:49 +0000
Received: by outflank-mailman (input) for mailman id 41813;
 Tue, 01 Dec 2020 10:30:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kk2vn-00035x-Kj
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 10:30:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kk2vl-0000mV-Dy; Tue, 01 Dec 2020 10:30:45 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kk2vl-0007MT-49; Tue, 01 Dec 2020 10:30:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Dv11DqHyCy/2lJw7+6ik3PRNikc0v30V0ItAaA2NVC8=; b=mYnFuashNVPl7xNQLg+YTHMg6X
	n+3TIo3m6LhGCgFHunqhzzQCaFXTzIcqxMfBejWmBAVJhNN8eEUWhHCzm3LXf0ZzCu51wr00hm+1j
	3SM7OddnXvI3PGaidQft8J+yL2Rmpxl3HOvX/S71xDw+nMJXmT62GClqm6UE/bntf7UQ=;
Subject: Re: [PATCH V3 19/23] xen/arm: io: Abstract sign-extension
To: Jan Beulich <jbeulich@suse.com>, Oleksandr <olekstysh@gmail.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Oleksandr Tyshchenko <Oleksandr_Tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-20-git-send-email-olekstysh@gmail.com>
 <878sai7e1a.fsf@epam.com> <cad0d7fe-3a9f-3992-9d89-8e9bb438dfbe@gmail.com>
 <93284ea1-e658-ffff-3223-174d633e38ad@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <d7b8f43d-2a59-6316-5609-0595b2a86045@xen.org>
Date: Tue, 1 Dec 2020 10:30:43 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <93284ea1-e658-ffff-3223-174d633e38ad@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 01/12/2020 07:55, Jan Beulich wrote:
> On 01.12.2020 00:27, Oleksandr wrote:
>> On 30.11.20 23:03, Volodymyr Babchuk wrote:
>>> Oleksandr Tyshchenko writes:
>>>> --- a/xen/include/asm-arm/traps.h
>>>> +++ b/xen/include/asm-arm/traps.h
>>>> @@ -83,6 +83,30 @@ static inline bool VABORT_GEN_BY_GUEST(const struct cpu_user_regs *regs)
>>>>            (unsigned long)abort_guest_exit_end == regs->pc;
>>>>    }
>>>>    
>>>> +/* Check whether the sign extension is required and perform it */
>>>> +static inline register_t sign_extend(const struct hsr_dabt dabt, register_t r)
>>>> +{
>>>> +    uint8_t size = (1 << dabt.size) * 8;
>>>> +
>>>> +    /*
>>>> +     * Sign extend if required.
>>>> +     * Note that we expect the read handler to have zeroed the bits
>>>> +     * outside the requested access size.
>>>> +     */
>>>> +    if ( dabt.sign && (r & (1UL << (size - 1))) )
>>>> +    {
>>>> +        /*
>>>> +         * We are relying on register_t using the same as
>>>> +         * an unsigned long in order to keep the 32-bit assembly
>>>> +         * code smaller.
>>>> +         */
>>>> +        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
>>>> +        r |= (~0UL) << size;
>>> If `size` is 64, you will get undefined behavior there.
>> I think, we don't need to worry about undefined behavior here. Having
>> size=64 would be possible with doubleword (dabt.size=3). But if "r"
>> adjustment gets called (I mean Syndrome Sign Extend bit is set) then
>> we deal with byte, halfword or word operations (dabt.size<3). Or I
>> missed something?
> 
> At which point please put in a respective ASSERT(), possibly amended
> by a brief comment.

ASSERT()s are only meant to catch programmatic errors. However, in this
case, the bigger risk is a hardware bug such as advertising sign
extension for a 64-bit access on Arm64 (or a 32-bit access on Arm32).

Actually, the Armv8 spec is a bit more blurry when running in AArch32
state, because it suggests that the sign extension bit can be set even
for a 32-bit access. I think this is a spelling mistake, but it is
probably better to be cautious here.

Therefore, I would recommend reworking the code so the helper is only
called when len < sizeof(register_t).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 10:42:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 10:42:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41819.75285 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk36j-0004BJ-6m; Tue, 01 Dec 2020 10:42:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41819.75285; Tue, 01 Dec 2020 10:42:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk36j-0004BC-3N; Tue, 01 Dec 2020 10:42:05 +0000
Received: by outflank-mailman (input) for mailman id 41819;
 Tue, 01 Dec 2020 10:42:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Lvrb=FF=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kk36i-0004B7-6z
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 10:42:04 +0000
Received: from mail-wr1-x444.google.com (unknown [2a00:1450:4864:20::444])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6fe7f5c9-0acc-4984-80e9-7d825828368a;
 Tue, 01 Dec 2020 10:42:03 +0000 (UTC)
Received: by mail-wr1-x444.google.com with SMTP id g14so1851471wrm.13
 for <xen-devel@lists.xenproject.org>; Tue, 01 Dec 2020 02:42:03 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id k16sm2390397wrl.65.2020.12.01.02.42.01
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 01 Dec 2020 02:42:02 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6fe7f5c9-0acc-4984-80e9-7d825828368a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=iYDXREdUAV/LJlcuptjzgZ8tcA4UyMaQyotxPegfS5k=;
        b=Th00CvFdx0Yl+1TbSox9O7R9OJnEXQO7n+oc68HeCU4mruNkTn/vhqBgh8BNiWY7sY
         yuKDcVEATZhBBjqYfKsgIXV8yFTy3t88Po9tLMPG4mQz5r6P+P7xYuBiOlui583rwrKt
         5LX1UTFqFOYJAt0XgrAfZPPBw8WLATVEvStzpTdQiUqAHUVt/r/CZZOVDdEjWscA/+MV
         cy5TFfXlXr/D4ET6CAUXTEzlzr9OFYKQFTAkFxrfxdu9r4xXTFZZVhhhpHPVx0hzSJSW
         gN5MyhVdkJW7UIWiF1fFmCBEhCr3OKyZQ1icGiA7CjUMhrRBEW/t8gCWOqCLftVzmnSy
         TZ3Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=iYDXREdUAV/LJlcuptjzgZ8tcA4UyMaQyotxPegfS5k=;
        b=dgr/EyA+jS96Z7xSV0mq8Psih5f8YM8iiB3eDnLFI2hqGX5j7x5PRiQG1NlISQWToU
         di1xz1SbU9FOOPCjHJ8YOH+qZAo0FDSHPDtDhbzaCvLPE0vB5+r276oxsyxaisUNr6es
         WHTOjvv75lHgb9xAvWSCrwxW8We30wmSpvLJupsIaNB2vxDsU04K6IxO0BHgdTiRFqVl
         owhHxyRbmSbXghb4lJtPEq6wZeob6iFls7jzEpUkSafWX7ZgP0lPXB3CIlI9bVNadhM6
         7b6ZOLIWkhkvTyQwiVdzKoXE8SsyRuDD6kN+7uKfaLdwdc6pofaS61vtpINZ8V4dZg39
         7+Ig==
X-Gm-Message-State: AOAM530eemAJUoM0pqsEpmLZ1CPMJKcZeuDkf4902raGd9l69MjOllgW
	3ZPwg40XyV0u9cqC/VMGiDU=
X-Google-Smtp-Source: ABdhPJxEuo7SvEhaaLaHEsEFRfcVfd5HWV2auBfgupVEr89QKyJ+wHSJr9dcZQ+l1V6UyC85klQ2NA==
X-Received: by 2002:adf:a191:: with SMTP id u17mr3004296wru.421.1606819322470;
        Tue, 01 Dec 2020 02:42:02 -0800 (PST)
Subject: Re: [PATCH V3 19/23] xen/arm: io: Abstract sign-extension
To: Julien Grall <julien@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Oleksandr Tyshchenko <Oleksandr_Tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-20-git-send-email-olekstysh@gmail.com>
 <878sai7e1a.fsf@epam.com> <cad0d7fe-3a9f-3992-9d89-8e9bb438dfbe@gmail.com>
 <93284ea1-e658-ffff-3223-174d633e38ad@suse.com>
 <d7b8f43d-2a59-6316-5609-0595b2a86045@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <932d7826-7e48-aaee-d566-44c384f84e1c@gmail.com>
Date: Tue, 1 Dec 2020 12:42:01 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <d7b8f43d-2a59-6316-5609-0595b2a86045@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 01.12.20 12:30, Julien Grall wrote:

Hi Julien

> Hi Jan,
>
> On 01/12/2020 07:55, Jan Beulich wrote:
>> On 01.12.2020 00:27, Oleksandr wrote:
>>> On 30.11.20 23:03, Volodymyr Babchuk wrote:
>>>> Oleksandr Tyshchenko writes:
>>>>> --- a/xen/include/asm-arm/traps.h
>>>>> +++ b/xen/include/asm-arm/traps.h
>>>>> @@ -83,6 +83,30 @@ static inline bool VABORT_GEN_BY_GUEST(const 
>>>>> struct cpu_user_regs *regs)
>>>>>            (unsigned long)abort_guest_exit_end == regs->pc;
>>>>>    }
>>>>>    +/* Check whether the sign extension is required and perform it */
>>>>> +static inline register_t sign_extend(const struct hsr_dabt dabt, 
>>>>> register_t r)
>>>>> +{
>>>>> +    uint8_t size = (1 << dabt.size) * 8;
>>>>> +
>>>>> +    /*
>>>>> +     * Sign extend if required.
>>>>> +     * Note that we expect the read handler to have zeroed the bits
>>>>> +     * outside the requested access size.
>>>>> +     */
>>>>> +    if ( dabt.sign && (r & (1UL << (size - 1))) )
>>>>> +    {
>>>>> +        /*
>>>>> +         * We are relying on register_t using the same as
>>>>> +         * an unsigned long in order to keep the 32-bit assembly
>>>>> +         * code smaller.
>>>>> +         */
>>>>> +        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
>>>>> +        r |= (~0UL) << size;
>>>> If `size` is 64, you will get undefined behavior there.
>>> I think, we don't need to worry about undefined behavior here. Having
>>> size=64 would be possible with doubleword (dabt.size=3). But if "r"
>>> adjustment gets called (I mean Syndrome Sign Extend bit is set) then
>>> we deal with byte, halfword or word operations (dabt.size<3). Or I
>>> missed something?
>>
>> At which point please put in a respective ASSERT(), possibly amended
>> by a brief comment.
>
> ASSERT()s are only meant to catch programmatic errors. However, in this
> case, the bigger risk is a hardware bug such as advertising a sign
> extension for a 64-bit access on Arm64 (or a 32-bit access on Arm32).
>
> Actually, the Armv8 spec is a bit more blurry when running in AArch32
> state, because it suggests that the sign extension bit can be set even
> for a 32-bit access. I think this is a spelling mistake, but it is
> probably better to be cautious here.
>
> Therefore, I would recommend reworking the code so the helper is only
> called when len < sizeof(register_t).

I am not sure I understand the recommendation; could you please clarify?
(Also, I don't see 'len' being used here.)


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 10:49:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 10:49:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41825.75297 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk3E7-0004R9-0b; Tue, 01 Dec 2020 10:49:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41825.75297; Tue, 01 Dec 2020 10:49:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk3E6-0004R2-Tt; Tue, 01 Dec 2020 10:49:42 +0000
Received: by outflank-mailman (input) for mailman id 41825;
 Tue, 01 Dec 2020 10:49:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KrUB=FF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kk3E5-0004Qx-Ig
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 10:49:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 83d024e0-b26c-49f4-b781-36d511790e18;
 Tue, 01 Dec 2020 10:49:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 998B6AC90;
 Tue,  1 Dec 2020 10:49:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 83d024e0-b26c-49f4-b781-36d511790e18
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606819779; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=nRa7yuoH4jgSQKC7xkeLfahP/u1wYvFmASAmGAH6kOs=;
	b=nkfDCEzViB0nRB8DYyrodsOTbRqWJLodX4GeZ4zJMcU6CaKA4kgi1B82D/DwihvPs9/oa6
	f+WCiR9X+JwPIQoJFUXVy80C8WZkw6TSwK4jFkP37efd0U8R2CV/EAveeKD+JViafpgUeM
	jEe0if4cQhUdWGsHdESRQAcoGWp80iw=
Subject: Re: [PATCH V3 19/23] xen/arm: io: Abstract sign-extension
To: Julien Grall <julien@xen.org>, Oleksandr <olekstysh@gmail.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Oleksandr Tyshchenko <Oleksandr_Tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-20-git-send-email-olekstysh@gmail.com>
 <878sai7e1a.fsf@epam.com> <cad0d7fe-3a9f-3992-9d89-8e9bb438dfbe@gmail.com>
 <93284ea1-e658-ffff-3223-174d633e38ad@suse.com>
 <d7b8f43d-2a59-6316-5609-0595b2a86045@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7b6c5dd4-fcff-2ed6-2295-d70e204c26a0@suse.com>
Date: Tue, 1 Dec 2020 11:49:39 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <d7b8f43d-2a59-6316-5609-0595b2a86045@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.12.2020 11:30, Julien Grall wrote:
> Hi Jan,
> 
> On 01/12/2020 07:55, Jan Beulich wrote:
>> On 01.12.2020 00:27, Oleksandr wrote:
>>> On 30.11.20 23:03, Volodymyr Babchuk wrote:
>>>> Oleksandr Tyshchenko writes:
>>>>> --- a/xen/include/asm-arm/traps.h
>>>>> +++ b/xen/include/asm-arm/traps.h
>>>>> @@ -83,6 +83,30 @@ static inline bool VABORT_GEN_BY_GUEST(const struct cpu_user_regs *regs)
>>>>>            (unsigned long)abort_guest_exit_end == regs->pc;
>>>>>    }
>>>>>    
>>>>> +/* Check whether the sign extension is required and perform it */
>>>>> +static inline register_t sign_extend(const struct hsr_dabt dabt, register_t r)
>>>>> +{
>>>>> +    uint8_t size = (1 << dabt.size) * 8;
>>>>> +
>>>>> +    /*
>>>>> +     * Sign extend if required.
>>>>> +     * Note that we expect the read handler to have zeroed the bits
>>>>> +     * outside the requested access size.
>>>>> +     */
>>>>> +    if ( dabt.sign && (r & (1UL << (size - 1))) )
>>>>> +    {
>>>>> +        /*
>>>>> +         * We are relying on register_t using the same as
>>>>> +         * an unsigned long in order to keep the 32-bit assembly
>>>>> +         * code smaller.
>>>>> +         */
>>>>> +        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
>>>>> +        r |= (~0UL) << size;
>>>> If `size` is 64, you will get undefined behavior there.
>>> I think, we don't need to worry about undefined behavior here. Having
>>> size=64 would be possible with doubleword (dabt.size=3). But if "r"
>>> adjustment gets called (I mean Syndrome Sign Extend bit is set) then
>>> we deal with byte, halfword or word operations (dabt.size<3). Or I
>>> missed something?
>>
>> At which point please put in a respective ASSERT(), possibly amended
>> by a brief comment.
> 
> ASSERT()s are only meant to catch programmatic errors. However, in this
> case, the bigger risk is a hardware bug such as advertising a sign
> extension for a 64-bit access on Arm64 (or a 32-bit access on Arm32).
>
> Actually, the Armv8 spec is a bit more blurry when running in AArch32
> state, because it suggests that the sign extension bit can be set even
> for a 32-bit access. I think this is a spelling mistake, but it is
> probably better to be cautious here.
>
> Therefore, I would recommend reworking the code so the helper is only
> called when len < sizeof(register_t).

This would be even better in this case, I agree.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 11:03:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 11:03:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41834.75309 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk3RO-0006HA-85; Tue, 01 Dec 2020 11:03:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41834.75309; Tue, 01 Dec 2020 11:03:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk3RO-0006H3-4g; Tue, 01 Dec 2020 11:03:26 +0000
Received: by outflank-mailman (input) for mailman id 41834;
 Tue, 01 Dec 2020 11:03:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=na+5=FF=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1kk3RN-0006Gy-5y
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 11:03:25 +0000
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bae84da9-3c01-488a-b810-367f11aac04c;
 Tue, 01 Dec 2020 11:03:24 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id 23so1974181wrc.8
 for <xen-devel@lists.xenproject.org>; Tue, 01 Dec 2020 03:03:24 -0800 (PST)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id v7sm2381281wma.26.2020.12.01.03.03.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 01 Dec 2020 03:03:21 -0800 (PST)
Received: from zen (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id EEF111FF7E;
 Tue,  1 Dec 2020 11:03:20 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bae84da9-3c01-488a-b810-367f11aac04c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=references:user-agent:from:to:cc:subject:in-reply-to:date
         :message-id:mime-version:content-transfer-encoding;
        bh=ipBJUGbtRku9teUZUW1nNmFVDBPn0WzbshJ5Gm4rGOE=;
        b=avPWF8RnfQKUr8zOoYRH0VRn/ccw7yYf4BBEJWjhXMZbUxIMkKs/6ouOh18zpWt6JJ
         jjc8jJycFJet4QMK/cVtWxvbNK43lrKS3FWnmUPGmoXzVj93XSVw9yr2eexn3vVH4j46
         8oTKRyxGAnFjFkiIDXwykv37ZjC3Vh7KfoAuYBzp1aR0HF4HTv8/JbYh/NJ5Q2ofuygQ
         8TlV0o6GNTcr3AYcHMs/3n2SMh9y2+Yb4T8iHG8QtFbUuXlb43DwkI9eVTxifB8Aipwh
         IdaM17w6IQ7WJjCkha6J26G/0Su65Inm+d5RSsD+ZSg27KkKLsjacFTW2JQoq3vt389J
         E/Hg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:references:user-agent:from:to:cc:subject
         :in-reply-to:date:message-id:mime-version:content-transfer-encoding;
        bh=ipBJUGbtRku9teUZUW1nNmFVDBPn0WzbshJ5Gm4rGOE=;
        b=ms1GofobsQZ0mK2xjOBInwkq6XrzSnecVI0c1xWWjOkD00CaeIjpP59h2J3cL5tsFu
         DcXAsfqBwcH4pJ6MuOm/vzXllxMyZbRsXoqPZ3ei69Z2gCyzhfQGlYTW24iAl6JPzVca
         0zwU7nrpxeNdtA3JUlb0lFxO4phPNbKKcLrQyA8dqPazjPkbVIaof+eXyfIRT1IyUZNO
         47ubgtUlhKe4AYv5RlDQ2TuRega75PCuyjXoAbYMh0Kv4Dhi7m+6mdsusdYhlMsUP7PJ
         U5frjEdRtx1oXcec+yKMp7rsJnPA/L3v0MHA+b7mjogQz27pUK49EUiHyDL4UneQi2iQ
         IwGQ==
X-Gm-Message-State: AOAM5313WRaZv5YO0XYez8+qOFY/nZ7tCkmF+vTPeDCbkLow2pK3Fn39
	y5nX+3Nc18WkIKW45qk7ZdyC8A==
X-Google-Smtp-Source: ABdhPJzDOhKQm/zgRzj3yo7Syp+0/5eSKBVrrTB2oEhZyuwUbPW2Zwj4z3f2sZrHl5Ju3NYCuLAbww==
X-Received: by 2002:adf:dd8a:: with SMTP id x10mr3215799wrl.24.1606820603110;
        Tue, 01 Dec 2020 03:03:23 -0800 (PST)
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-2-git-send-email-olekstysh@gmail.com>
User-agent: mu4e 1.5.7; emacs 28.0.50
From: Alex Bennée <alex.bennee@linaro.org>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Paul Durrant
 <paul@xen.org>, Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Roger Pau Monné
 <roger.pau@citrix.com>, Wei
 Liu <wl@xen.org>, Julien Grall <julien@xen.org>, Stefano Stabellini
 <sstabellini@kernel.org>, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
Subject: Re: [PATCH V3 01/23] x86/ioreq: Prepare IOREQ feature for making it
 common
In-reply-to: <1606732298-22107-2-git-send-email-olekstysh@gmail.com>
Date: Tue, 01 Dec 2020 11:03:20 +0000
Message-ID: <87eek9u6tj.fsf@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable


Oleksandr Tyshchenko <olekstysh@gmail.com> writes:

> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>
> As a lot of x86 code can be re-used on Arm later on, this
> patch makes some preparation to x86/hvm/ioreq.c before moving
> to the common code. This way we will get a verbatim copy
<snip>
>
> It is worth mentioning that the code which checks the return value of
> p2m_set_ioreq_server() in hvm_map_mem_type_to_ioreq_server() was
> folded into arch_ioreq_server_map_mem_type() for a clear split.
> So p2m_change_entry_type_global() is now called with the ioreq_server
> lock held.
<snip>
>
> +/* Called with ioreq_server lock held */
> +int arch_ioreq_server_map_mem_type(struct domain *d,
> +                                   struct hvm_ioreq_server *s,
> +                                   uint32_t flags)
> +{
> +    int rc = p2m_set_ioreq_server(d, flags, s);
> +
> +    if ( rc == 0 && flags == 0 )
> +    {
> +        const struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +
> +        if ( read_atomic(&p2m->ioreq.entry_count) )
> +            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
> +    }
> +
> +    return rc;
> +}
> +
>  /*
>   * Map or unmap an ioreq server to specific memory type. For now, only
>   * HVMMEM_ioreq_server is supported, and in the future new types can be
> @@ -1112,19 +1155,11 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
>      if ( s->emulator != current->domain )
>          goto out;
>
> -    rc = p2m_set_ioreq_server(d, flags, s);
> +    rc = arch_ioreq_server_map_mem_type(d, s, flags);
>
>   out:
>      spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>
> -    if ( rc == 0 && flags == 0 )
> -    {
> -        struct p2m_domain *p2m = p2m_get_hostp2m(d);
> -
> -        if ( read_atomic(&p2m->ioreq.entry_count) )
> -            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
> -    }
> -

It should be noted that the p2m holds its own lock, but I'm unfamiliar
with Xen's locking architecture. Is there anything that prevents another
vCPU from accessing a page that is also being used by ioreq on the first
vCPU?

Assuming that deadlock isn't a possibility, to my relatively untrained
eye this looks good to me:
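To make the locking question concrete, here is a minimal, self-contained model of the fold described in the patch. The function names mirror the patch, but every body is a stand-in (none of this is Xen code); the only behaviour it demonstrates is that, after the change, p2m_change_entry_type_global() runs while the ioreq_server lock is still held, whereas before it ran after the unlock.

```c
#include <stdbool.h>

static bool ioreq_server_locked;   /* stands in for d->arch.hvm.ioreq_server.lock */
static bool changed_under_lock;    /* records the lock state at call time */

/* Stand-in: the real function updates p2m state and can fail. */
static int p2m_set_ioreq_server_stub(unsigned int flags)
{
    return 0;
}

/* Stand-in: only records whether the ioreq_server lock was held. */
static void p2m_change_entry_type_global_stub(void)
{
    changed_under_lock = ioreq_server_locked;
}

/* Called with ioreq_server lock held (as the patch comment states). */
static int arch_ioreq_server_map_mem_type_model(unsigned int flags)
{
    int rc = p2m_set_ioreq_server_stub(flags);

    if ( rc == 0 && flags == 0 )
        p2m_change_entry_type_global_stub();

    return rc;
}

static int hvm_map_mem_type_to_ioreq_server_model(unsigned int flags)
{
    int rc;

    ioreq_server_locked = true;                       /* spin_lock_recursive() */
    rc = arch_ioreq_server_map_mem_type_model(flags);
    ioreq_server_locked = false;                      /* spin_unlock_recursive() */

    return rc;
}
```

Running the model with flags == 0 shows the p2m type change now happening inside the locked region, which is exactly the ordering the review question is about.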

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

--
Alex Bennée


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 11:07:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 11:07:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41839.75321 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk3Uz-0006Rs-VW; Tue, 01 Dec 2020 11:07:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41839.75321; Tue, 01 Dec 2020 11:07:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk3Uz-0006Rl-QC; Tue, 01 Dec 2020 11:07:09 +0000
Received: by outflank-mailman (input) for mailman id 41839;
 Tue, 01 Dec 2020 11:07:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=na+5=FF=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1kk3Uy-0006Rb-6O
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 11:07:08 +0000
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a6a13458-f542-48e3-a73e-be88b2a35611;
 Tue, 01 Dec 2020 11:07:07 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id k14so2023307wrn.1
 for <xen-devel@lists.xenproject.org>; Tue, 01 Dec 2020 03:07:07 -0800 (PST)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id n123sm2377452wmn.7.2020.12.01.03.07.05
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 01 Dec 2020 03:07:05 -0800 (PST)
Received: from zen (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id 6E9221FF7E;
 Tue,  1 Dec 2020 11:07:04 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a6a13458-f542-48e3-a73e-be88b2a35611
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-3-git-send-email-olekstysh@gmail.com>
User-agent: mu4e 1.5.7; emacs 28.0.50
From: Alex Bennée <alex.bennee@linaro.org>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Paul Durrant
 <paul@xen.org>, Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Roger Pau Monné <roger.pau@citrix.com>, Wei
 Liu <wl@xen.org>, Julien Grall <julien@xen.org>, Stefano Stabellini
 <sstabellini@kernel.org>, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
Subject: Re: [PATCH V3 02/23] x86/ioreq: Add IOREQ_STATUS_* #define-s and
 update code for moving
In-reply-to: <1606732298-22107-3-git-send-email-olekstysh@gmail.com>
Date: Tue, 01 Dec 2020 11:07:04 +0000
Message-ID: <87blfdu6nb.fsf@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable


Oleksandr Tyshchenko <olekstysh@gmail.com> writes:

> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>
> This patch continues to make some preparation to x86/hvm/ioreq.c
> before moving to the common code.
>
> Add IOREQ_STATUS_* #define-s and update candidates for moving
> since X86EMUL_* shouldn't be exposed to the common code in
> that form.
>
> This support is going to be used on Arm to be able to run a device
> emulator outside of the Xen hypervisor.
>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> CC: Julien Grall <julien.grall@arm.com>
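As context for reviewers, the indirection described in the commit message can be sketched like this. The IOREQ_STATUS_* names follow the patch; the X86EMUL_* values below are illustrative stand-ins, not Xen's actual x86_emulate definitions:

```c
/* Illustrative stand-ins for x86's emulation status codes; the real
 * values live in Xen's x86_emulate framework and may differ. */
#define X86EMUL_OKAY            0
#define X86EMUL_UNHANDLEABLE    1
#define X86EMUL_RETRY           2

/* Arch-neutral names: common IOREQ code uses only these, and each
 * architecture maps them onto its own status codes, so X86EMUL_*
 * never leaks into the common code. */
#define IOREQ_STATUS_HANDLED    X86EMUL_OKAY
#define IOREQ_STATUS_UNHANDLED  X86EMUL_UNHANDLEABLE
#define IOREQ_STATUS_RETRY      X86EMUL_RETRY
```

On Arm the same IOREQ_STATUS_* names would then map to Arm-specific values, which is what makes the IOREQ code movable to common code.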

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

--
Alex Bennée


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 11:42:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 11:42:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41852.75333 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk435-0001ff-Fh; Tue, 01 Dec 2020 11:42:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41852.75333; Tue, 01 Dec 2020 11:42:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk435-0001fY-CC; Tue, 01 Dec 2020 11:42:23 +0000
Received: by outflank-mailman (input) for mailman id 41852;
 Tue, 01 Dec 2020 11:42:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dt7S=FF=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kk434-0001fT-6X
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 11:42:22 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.7.50]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c6377db3-00a4-4129-a933-2bfcd36ed77d;
 Tue, 01 Dec 2020 11:42:19 +0000 (UTC)
Received: from AM6P194CA0052.EURP194.PROD.OUTLOOK.COM (2603:10a6:209:84::29)
 by AM6PR08MB4914.eurprd08.prod.outlook.com (2603:10a6:20b:cf::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.21; Tue, 1 Dec
 2020 11:42:17 +0000
Received: from AM5EUR03FT005.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:84:cafe::52) by AM6P194CA0052.outlook.office365.com
 (2603:10a6:209:84::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.20 via Frontend
 Transport; Tue, 1 Dec 2020 11:42:17 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT005.mail.protection.outlook.com (10.152.16.146) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3611.26 via Frontend Transport; Tue, 1 Dec 2020 11:42:16 +0000
Received: ("Tessian outbound e0cdfd2b0406:v71");
 Tue, 01 Dec 2020 11:42:16 +0000
Received: from 2fb2177f5223.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 3D4D3511-B59E-4E94-A3BB-37F92A35594C.1; 
 Tue, 01 Dec 2020 11:42:01 +0000
Received: from EUR03-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 2fb2177f5223.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 01 Dec 2020 11:42:01 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4823.eurprd08.prod.outlook.com (2603:10a6:10:df::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.25; Tue, 1 Dec
 2020 11:41:48 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3632.017; Tue, 1 Dec 2020
 11:41:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c6377db3-00a4-4129-a933-2bfcd36ed77d
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 3/7] xen/arm: create a cpuinfo structure for guest
Thread-Topic: [PATCH v2 3/7] xen/arm: create a cpuinfo structure for guest
Thread-Index: AQHWxyRpRLPV3+rSX0iDOmZYGXBVV6nhHOuAgAECsoA=
Date: Tue, 1 Dec 2020 11:41:47 +0000
Message-ID: <766ECB70-DD45-4194-98DE-C1D312C8BE11@arm.com>
References: <cover.1606742184.git.bertrand.marquis@arm.com>
 <539cc9c817a80e35a2532dba5bc01e9b2533ff56.1606742184.git.bertrand.marquis@arm.com>
 <87tut67g93.fsf@epam.com>
In-Reply-To: <87tut67g93.fsf@epam.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
Content-Type: text/plain; charset="us-ascii"
Content-ID: <69981C77A1CF6140ABFA7897CD886903@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Hi Volodymyr,

> On 30 Nov 2020, at 20:15, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> wrote:
>
>
> Bertrand Marquis writes:
>
>> Create a cpuinfo structure for guest and mask into it the features that
>> we do not support in Xen or that we do not want to publish to guests.
>>
>> Modify some values in the cpuinfo structure for guests to mask some
>> features which we do not want to allow to guests (like AMU) or we do not
>> support (like SVE).
>>
>> The code is trying to group together registers modifications for the
>> same feature to be able in the long term to easily enable/disable a
>> feature depending on user parameters or add other registers modification
>> in the same place (like enabling/disabling HCR bits).
>>
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>> Changes in V2: rebase
>> ---
>> xen/arch/arm/cpufeature.c        | 51 ++++++++++++++++++++++++++++++++
>> xen/include/asm-arm/cpufeature.h |  2 ++
>> 2 files changed, 53 insertions(+)
>>
>> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
>> index 204be9b084..309941ff37 100644
>> --- a/xen/arch/arm/cpufeature.c
>> +++ b/xen/arch/arm/cpufeature.c
>> @@ -24,6 +24,8 @@
>>
>> DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
>>
>> +struct cpuinfo_arm __read_mostly guest_cpuinfo;
>> +
>> void update_cpu_capabilities(const struct arm_cpu_capabilities *caps,
>>                              const char *info)
>> {
>> @@ -156,6 +158,55 @@ void identify_cpu(struct cpuinfo_arm *c)
>> #endif
>> }
>>
>> +/*
>> + * This function is creating a cpuinfo structure with values modified to mask
>> + * all cpu features that should not be published to guest.
>> + * The created structure is then used to provide ID registers values to guests.
>> + */
>> +static int __init create_guest_cpuinfo(void)
>> +{
>> +    /*
>> +     * TODO: The code is currently using only the features detected on the boot
>> +     * core. In the long term we should try to compute values containing only
>> +     * features supported by all cores.
>> +     */
>> +    identify_cpu(&guest_cpuinfo);
>> +
>> +#ifdef CONFIG_ARM_64
>> +    /* Disable MPAM as xen does not support it */
>> +    guest_cpuinfo.pfr64.mpam = 0;
>> +    guest_cpuinfo.pfr64.mpam_frac = 0;
>> +
>> +    /* Disable SVE as Xen does not support it */
>> +    guest_cpuinfo.pfr64.sve = 0;
>> +    guest_cpuinfo.zfr64.bits[0] = 0;
>> +
>> +    /* Disable MTE as Xen does not support it */
>> +    guest_cpuinfo.pfr64.mte = 0;
>> +#endif
>> +
>> +    /* Disable AMU */
>> +#ifdef CONFIG_ARM_64
>> +    guest_cpuinfo.pfr64.amu = 0;
>> +#endif
>> +    guest_cpuinfo.pfr32.amu = 0;
>> +
>> +    /* Disable RAS as Xen does not support it */
>> +#ifdef CONFIG_ARM_64
>> +    guest_cpuinfo.pfr64.ras = 0;
>> +    guest_cpuinfo.pfr64.ras_frac = 0;
>> +#endif
>> +    guest_cpuinfo.pfr32.ras = 0;
>> +    guest_cpuinfo.pfr32.ras_frac = 0;
>> +
>> +    return 0;
>> +}
>> +/*
>> + * This function needs to be run after all smp are started to have
>> + * cpuinfo structures for all cores.
>> + */
>
> This comment contradicts with TODO at the beginning of
> create_guest_cpuinfo().
>

I think the comment is coherent, as running after SMP bring-up is a
prerequisite for solving the TODO. I wrote it this way so that nothing
would need to be modified there to handle the TODO.

So I do not really see a contradiction there; what would you suggest
saying instead?

Regards
Bertrand

>> +__initcall(create_guest_cpuinfo);
>> +
>> /*
>>  * Local variables:
>>  * mode: C
>> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
>> index 64354c3f19..0ab6dd42a0 100644
>> --- a/xen/include/asm-arm/cpufeature.h
>> +++ b/xen/include/asm-arm/cpufeature.h
>> @@ -290,6 +290,8 @@ extern void identify_cpu(struct cpuinfo_arm *);
>> extern struct cpuinfo_arm cpu_data[];
>> #define current_cpu_data cpu_data[smp_processor_id()]
>>
>> +extern struct cpuinfo_arm guest_cpuinfo;
>> +
>> #endif /* __ASSEMBLY__ */
>>=20
>> #endif
>
>
> --
> Volodymyr Babchuk at EPAM
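The masking approach in the patch above can be illustrated with a small stand-alone model. The field names follow the patch, but the struct layout is a deliberately simplified stand-in (plain integers, not the real ID-register bitfields of Xen's struct cpuinfo_arm), and identify_cpu() is replaced by a stub that pretends the boot CPU advertises every feature:

```c
#include <stdint.h>

/* Simplified stand-in for Xen's struct cpuinfo_arm: only the fields
 * the patch touches, modelled as plain integers. */
struct cpuinfo_model {
    struct { uint64_t sve, mpam, mpam_frac, mte, amu, ras, ras_frac; } pfr64;
    struct { uint64_t amu, ras, ras_frac; } pfr32;
    struct { uint64_t bits[1]; } zfr64;
};

static struct cpuinfo_model guest_cpuinfo;

/* Stub for identify_cpu(): pretend the boot CPU advertises everything. */
static void identify_cpu_model(struct cpuinfo_model *c)
{
    c->pfr64.sve = c->pfr64.mpam = c->pfr64.mpam_frac = 1;
    c->pfr64.mte = c->pfr64.amu = c->pfr64.ras = c->pfr64.ras_frac = 1;
    c->pfr32.amu = c->pfr32.ras = c->pfr32.ras_frac = 1;
    c->zfr64.bits[0] = 0xff;
}

/* Mirrors create_guest_cpuinfo(): start from the detected features and
 * zero the ones Xen does not support or does not want to expose. */
static int create_guest_cpuinfo_model(void)
{
    identify_cpu_model(&guest_cpuinfo);

    guest_cpuinfo.pfr64.mpam = guest_cpuinfo.pfr64.mpam_frac = 0; /* no MPAM */
    guest_cpuinfo.pfr64.sve = 0;                                  /* no SVE  */
    guest_cpuinfo.zfr64.bits[0] = 0;
    guest_cpuinfo.pfr64.mte = 0;                                  /* no MTE  */
    guest_cpuinfo.pfr64.amu = guest_cpuinfo.pfr32.amu = 0;        /* no AMU  */
    guest_cpuinfo.pfr64.ras = guest_cpuinfo.pfr64.ras_frac = 0;   /* no RAS  */
    guest_cpuinfo.pfr32.ras = guest_cpuinfo.pfr32.ras_frac = 0;

    return 0;
}
```

Guest reads of the ID registers would then be served from this sanitised copy rather than from the raw hardware values, which is the mechanism the follow-up patch in this series adds handlers for.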



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 11:43:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 11:43:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41856.75344 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk43g-0001l7-P0; Tue, 01 Dec 2020 11:43:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41856.75344; Tue, 01 Dec 2020 11:43:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk43g-0001l0-Lf; Tue, 01 Dec 2020 11:43:00 +0000
Received: by outflank-mailman (input) for mailman id 41856;
 Tue, 01 Dec 2020 11:42:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dt7S=FF=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kk43f-0001kr-6z
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 11:42:59 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.7.78]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ce46dd19-659b-4ca8-8c4f-a87b3e13be48;
 Tue, 01 Dec 2020 11:42:58 +0000 (UTC)
Received: from DB3PR08CA0021.eurprd08.prod.outlook.com (2603:10a6:8::34) by
 DB8PR08MB5082.eurprd08.prod.outlook.com (2603:10a6:10:ec::16) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3611.22; Tue, 1 Dec 2020 11:42:55 +0000
Received: from DB5EUR03FT004.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:8:0:cafe::f7) by DB3PR08CA0021.outlook.office365.com
 (2603:10a6:8::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.25 via Frontend
 Transport; Tue, 1 Dec 2020 11:42:55 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT004.mail.protection.outlook.com (10.152.20.128) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3611.26 via Frontend Transport; Tue, 1 Dec 2020 11:42:54 +0000
Received: ("Tessian outbound d6c201accd3c:v71");
 Tue, 01 Dec 2020 11:42:54 +0000
Received: from e2e1f31a7dca.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 4F59A5E5-A5C6-4304-9B78-3FF0416475EE.1; 
 Tue, 01 Dec 2020 11:42:49 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id e2e1f31a7dca.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 01 Dec 2020 11:42:49 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBAPR08MB5558.eurprd08.prod.outlook.com (2603:10a6:10:1b3::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.23; Tue, 1 Dec
 2020 11:42:48 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3632.017; Tue, 1 Dec 2020
 11:42:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce46dd19-659b-4ca8-8c4f-a87b3e13be48
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 4/7] xen/arm: Add handler for ID registers on arm64
Thread-Topic: [PATCH v2 4/7] xen/arm: Add handler for ID registers on arm64
Thread-Index: AQHWxyRh5FtYFTYJ+ESi9Xe847KlxqnhHqgAgAEBOwA=
Date: Tue, 1 Dec 2020 11:42:46 +0000
Message-ID: <C561BBE3-796C-4A29-B24D-188D792757CB@arm.com>
References: <cover.1606742184.git.bertrand.marquis@arm.com>
 <6db611491b25591829b9408267bd9bd50e266fe2.1606742184.git.bertrand.marquis@arm.com>
 <87pn3u7fyp.fsf@epam.com>
In-Reply-To: <87pn3u7fyp.fsf@epam.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
Content-Type: text/plain; charset="us-ascii"
Content-ID: <9DD8BCC8FDEB644BB2EECA334B020D17@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5558
Original-Authentication-Results: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT004.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f51460f9-5be8-41f4-aa61-08d895ee3968
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ZkHVxQOQcrQc5fK3i615VYRSNlJWCBwpjwHjogSJ88U2TeM8c+gjG77J0tEGopJLHWfydLLjwv0TQRo70LCSP5f9wLZupsA+RQHpp/GyuploWfBnasrx1HchH1zXE4JTw4v7qkFZPu10at6BncucJhsN365mE9nW5jVYUoeQOAaBFzlsnRqP+wZtCudR1r4mTYx/maL3LkMUsbKDqsWtLFIGT4ohczyY1+yHXVoxXncNuvEjG1g/nlS6PSeE6CoLrwQqPg8aY9q3TIevPgXc8WcqpZR0uFzyn6yCBmORR5ep0DKSjcn3pLcol0+wbFHHPAqadzqbyNLGaceWShNGRZ0F+9tRDGtIinBFsfhKcCLB4Fh6AkNekXsuKoZu/YyERp7StZXMaMwXrLUt1zVzxA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(346002)(136003)(376002)(39860400002)(46966005)(6512007)(336012)(356005)(8676002)(36756003)(47076004)(70206006)(81166007)(2616005)(5660300002)(2906002)(26005)(4326008)(8936002)(6486002)(6862004)(53546011)(186003)(70586007)(82740400003)(54906003)(316002)(33656002)(6506007)(478600001)(82310400003)(86362001)(83380400001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Dec 2020 11:42:54.9314
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: aa69a774-83fb-4871-fd6c-08d895ee3e16
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT004.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5082

Hi Volodymyr,

> On 30 Nov 2020, at 20:22, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> wrote:
>
>
>
> Bertrand Marquis writes:
>
>> Add vsysreg emulation for registers trapped when TID3 bit is activated
>> in HSR.
>> The emulation is returning the value stored in cpuinfo_guest structure
>> for most values and the hardware value for registers not stored in the
>> structure (those are mostly registers existing only as a provision for
>> feature use but who have no definition right now).
>
> I can't see the code that returns values for the registers not stored in
> the guest_cpuinfo. Perhaps you need to update the commit description?

You are right, I modified my code recently to handle all possible registers, so
this case does not exist anymore.
I will update the commit message to fix this.

Cheers
Bertrand

>
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>> Changes in V2: rebase
>> ---
>> xen/arch/arm/arm64/vsysreg.c | 49 ++++++++++++++++++++++++++++++++++++
>> 1 file changed, 49 insertions(+)
>>
>> diff --git a/xen/arch/arm/arm64/vsysreg.c b/xen/arch/arm/arm64/vsysreg.c
>> index 8a85507d9d..970ef51603 100644
>> --- a/xen/arch/arm/arm64/vsysreg.c
>> +++ b/xen/arch/arm/arm64/vsysreg.c
>> @@ -69,6 +69,14 @@ TVM_REG(CONTEXTIDR_EL1)
>>         break;                                                          \
>>     }
>>
>> +/* Macro to generate easily case for ID co-processor emulation */
>> +#define GENERATE_TID3_INFO(reg,field,offset)                            \
>> +    case HSR_SYSREG_##reg:                                              \
>> +    {                                                                   \
>> +        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr,   \
>> +                          1, guest_cpuinfo.field.bits[offset]);         \
>> +    }
>> +
>> void do_sysreg(struct cpu_user_regs *regs,
>>                const union hsr hsr)
>> {
>> @@ -259,6 +267,47 @@ void do_sysreg(struct cpu_user_regs *regs,
>>          */
>>         return handle_raz_wi(regs, regidx, hsr.sysreg.read, hsr, 1);
>>
>> +    /*
>> +     * HCR_EL2.TID3
>> +     *
>> +     * This is trapping most Identification registers used by a guest
>> +     * to identify the processor features
>> +     */
>> +    GENERATE_TID3_INFO(ID_PFR0_EL1, pfr32, 0)
>> +    GENERATE_TID3_INFO(ID_PFR1_EL1, pfr32, 1)
>> +    GENERATE_TID3_INFO(ID_PFR2_EL1, pfr32, 2)
>> +    GENERATE_TID3_INFO(ID_DFR0_EL1, dbg32, 0)
>> +    GENERATE_TID3_INFO(ID_DFR1_EL1, dbg32, 1)
>> +    GENERATE_TID3_INFO(ID_AFR0_EL1, aux32, 0)
>> +    GENERATE_TID3_INFO(ID_MMFR0_EL1, mm32, 0)
>> +    GENERATE_TID3_INFO(ID_MMFR1_EL1, mm32, 1)
>> +    GENERATE_TID3_INFO(ID_MMFR2_EL1, mm32, 2)
>> +    GENERATE_TID3_INFO(ID_MMFR3_EL1, mm32, 3)
>> +    GENERATE_TID3_INFO(ID_MMFR4_EL1, mm32, 4)
>> +    GENERATE_TID3_INFO(ID_MMFR5_EL1, mm32, 5)
>> +    GENERATE_TID3_INFO(ID_ISAR0_EL1, isa32, 0)
>> +    GENERATE_TID3_INFO(ID_ISAR1_EL1, isa32, 1)
>> +    GENERATE_TID3_INFO(ID_ISAR2_EL1, isa32, 2)
>> +    GENERATE_TID3_INFO(ID_ISAR3_EL1, isa32, 3)
>> +    GENERATE_TID3_INFO(ID_ISAR4_EL1, isa32, 4)
>> +    GENERATE_TID3_INFO(ID_ISAR5_EL1, isa32, 5)
>> +    GENERATE_TID3_INFO(ID_ISAR6_EL1, isa32, 6)
>> +    GENERATE_TID3_INFO(MVFR0_EL1, mvfr, 0)
>> +    GENERATE_TID3_INFO(MVFR1_EL1, mvfr, 1)
>> +    GENERATE_TID3_INFO(MVFR2_EL1, mvfr, 2)
>> +    GENERATE_TID3_INFO(ID_AA64PFR0_EL1, pfr64, 0)
>> +    GENERATE_TID3_INFO(ID_AA64PFR1_EL1, pfr64, 1)
>> +    GENERATE_TID3_INFO(ID_AA64DFR0_EL1, dbg64, 0)
>> +    GENERATE_TID3_INFO(ID_AA64DFR1_EL1, dbg64, 1)
>> +    GENERATE_TID3_INFO(ID_AA64ISAR0_EL1, isa64, 0)
>> +    GENERATE_TID3_INFO(ID_AA64ISAR1_EL1, isa64, 1)
>> +    GENERATE_TID3_INFO(ID_AA64MMFR0_EL1, mm64, 0)
>> +    GENERATE_TID3_INFO(ID_AA64MMFR1_EL1, mm64, 1)
>> +    GENERATE_TID3_INFO(ID_AA64MMFR2_EL1, mm64, 2)
>> +    GENERATE_TID3_INFO(ID_AA64AFR0_EL1, aux64, 0)
>> +    GENERATE_TID3_INFO(ID_AA64AFR1_EL1, aux64, 1)
>> +    GENERATE_TID3_INFO(ID_AA64ZFR0_EL1, zfr64, 0)
>> +
>>     /*
>>      * HCR_EL2.TIDCP
>>      *
>
>
> --
> Volodymyr Babchuk at EPAM



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 11:47:08 2020
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 5/7] xen/arm: Add handler for cp15 ID registers
Thread-Topic: [PATCH v2 5/7] xen/arm: Add handler for cp15 ID registers
Thread-Index: AQHWxyRuchSNSGGKDEyJCEQqqd6Vw6nhIUWAgAD/ugA=
Date: Tue, 1 Dec 2020 11:46:46 +0000
Message-ID: <AB32AAFF-DD1D-4B13-ABC0-06F460E95E1C@arm.com>
References: <cover.1606742184.git.bertrand.marquis@arm.com>
 <86c96cd3895bf968f94010c0f4ee8dce7f0338e8.1606742184.git.bertrand.marquis@arm.com>
 <87lfei7fj5.fsf@epam.com>
In-Reply-To: <87lfei7fj5.fsf@epam.com>
Accept-Language: en-GB, en-US
Content-Language: en-US

Hi,

> On 30 Nov 2020, at 20:31, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> wrote:
>
>
> Bertrand Marquis writes:
>
>> Add support for emulation of cp15 based ID registers (on arm32 or when
>> running a 32bit guest on arm64).
>> The handlers are returning the values stored in the guest_cpuinfo
>> structure.
>> In the current status the MVFR registers are not supported.
>
> It is unclear what will happen with registers that are not covered by
> the guest_cpuinfo structure. According to the ARM ARM, it is implementation
> defined if such accesses will be trapped. On the other hand, there are many
> registers which are RAZ. So, a well-behaving guest can try to read one of
> those registers and it will get an undefined instruction exception, instead
> of just reading all zeroes.

This is true in the status of this patch but this is solved by the next patch,
which adds proper handling of those registers (add CP10 exception
support), at least for the MVFR ones.

From the ARM ARM point of view, I did handle all registers listed, I think.
If you think some are missing please point me to them, as I do not
completely understand what the “registers not covered” are, unless
you mean the MVFR ones.

Cheers
Bertrand

>
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>> Changes in V2: rebase
>> ---
>> xen/arch/arm/vcpreg.c | 35 +++++++++++++++++++++++++++++++++++
>> 1 file changed, 35 insertions(+)
>>
>> diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c
>> index cdc91cdf5b..d0c6406f34 100644
>> --- a/xen/arch/arm/vcpreg.c
>> +++ b/xen/arch/arm/vcpreg.c
>> @@ -155,6 +155,14 @@ TVM_REG32(CONTEXTIDR, CONTEXTIDR_EL1)
>>         break;                                                          \
>>     }
>>
>> +/* Macro to generate easily case for ID co-processor emulation */
>> +#define GENERATE_TID3_INFO(reg,field,offset)                        \
>> +    case HSR_CPREG32(reg):                                          \
>> +    {                                                               \
>> +        return handle_ro_read_val(regs, regidx, cp32.read, hsr,     \
>> +                          1, guest_cpuinfo.field.bits[offset]);     \
>> +    }
>> +
>> void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
>> {
>>     const struct hsr_cp32 cp32 = hsr.cp32;
>> @@ -286,6 +294,33 @@ void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
>>          */
>>         return handle_raz_wi(regs, regidx, cp32.read, hsr, 1);
>>
>> +    /*
>> +     * HCR_EL2.TID3
>> +     *
>> +     * This is trapping most Identification registers used by a guest
>> +     * to identify the processor features
>> +     */
>> +    GENERATE_TID3_INFO(ID_PFR0, pfr32, 0)
>> +    GENERATE_TID3_INFO(ID_PFR1, pfr32, 1)
>> +    GENERATE_TID3_INFO(ID_PFR2, pfr32, 2)
>> +    GENERATE_TID3_INFO(ID_DFR0, dbg32, 0)
>> +    GENERATE_TID3_INFO(ID_DFR1, dbg32, 1)
>> +    GENERATE_TID3_INFO(ID_AFR0, aux32, 0)
>> +    GENERATE_TID3_INFO(ID_MMFR0, mm32, 0)
>> +    GENERATE_TID3_INFO(ID_MMFR1, mm32, 1)
>> +    GENERATE_TID3_INFO(ID_MMFR2, mm32, 2)
>> +    GENERATE_TID3_INFO(ID_MMFR3, mm32, 3)
>> +    GENERATE_TID3_INFO(ID_MMFR4, mm32, 4)
>> +    GENERATE_TID3_INFO(ID_MMFR5, mm32, 5)
>> +    GENERATE_TID3_INFO(ID_ISAR0, isa32, 0)
>> +    GENERATE_TID3_INFO(ID_ISAR1, isa32, 1)
>> +    GENERATE_TID3_INFO(ID_ISAR2, isa32, 2)
>> +    GENERATE_TID3_INFO(ID_ISAR3, isa32, 3)
>> +    GENERATE_TID3_INFO(ID_ISAR4, isa32, 4)
>> +    GENERATE_TID3_INFO(ID_ISAR5, isa32, 5)
>> +    GENERATE_TID3_INFO(ID_ISAR6, isa32, 6)
>> +    /* MVFR registers are in cp10 not cp15 */
>> +
>>     /*
>>      * HCR_EL2.TIDCP
>>      *
>
>
> --
> Volodymyr Babchuk at EPAM


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 11:54:52 2020
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 4/7] xen/arm: Add handler for ID registers on arm64
Thread-Topic: [PATCH v2 4/7] xen/arm: Add handler for ID registers on arm64
Thread-Index: AQHWxyRRk7rRSvoRk02jzxNDTD03R6nhHqgAgAEBOwCAAANKAA==
Date: Tue, 1 Dec 2020 11:54:33 +0000
Message-ID: <87y2ih68sn.fsf@epam.com>
References: <cover.1606742184.git.bertrand.marquis@arm.com>
 <6db611491b25591829b9408267bd9bd50e266fe2.1606742184.git.bertrand.marquis@arm.com>
 <87pn3u7fyp.fsf@epam.com> <C561BBE3-796C-4A29-B24D-188D792757CB@arm.com>
In-Reply-To: <C561BBE3-796C-4A29-B24D-188D792757CB@arm.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: mu4e 1.4.10; emacs 27.1
authentication-results: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [176.36.48.175]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: d4751b2c-fbf8-4822-d15b-08d895efdeb9
x-ms-traffictypediagnostic: VI1PR03MB4976:
x-microsoft-antispam-prvs: 
 <VI1PR03MB4976FFFE0306D2B304337F5AE6F40@VI1PR03MB4976.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:4303;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: VI1PR03MB6400.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d4751b2c-fbf8-4822-d15b-08d895efdeb9
X-MS-Exchange-CrossTenant-originalarrivaltime: 01 Dec 2020 11:54:33.7744
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: BoSN9Acdj2rTaGLkRq45ZM1fINGxiyrdtSS3r5T9Roydy4k+wkVMDZPru5uTrLIYGWRR54W88iVIhIGiPuu9f1R+Sw2xwMtFWGASxjhETpI=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR03MB4976
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-12-01_04:2020-11-30,2020-12-01 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 phishscore=0 malwarescore=0
 suspectscore=0 spamscore=0 mlxscore=0 bulkscore=0 priorityscore=1501
 impostorscore=0 lowpriorityscore=0 clxscore=1015 mlxlogscore=840
 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2012010077


Hi Bertrand,

Bertrand Marquis writes:

> Hi Volodymyr,
>
>> On 30 Nov 2020, at 20:22, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> wrote:
>>
>>
>>
>> Bertrand Marquis writes:
>>
>>> Add vsysreg emulation for registers trapped when TID3 bit is activated
>>> in HSR.
>>> The emulation is returning the value stored in cpuinfo_guest structure
>>> for most values and the hardware value for registers not stored in the
>>> structure (those are mostly registers existing only as a provision for
>>> feature use but who have no definition right now).
>>
>> I can't see the code that returns values for the registers not stored in
>> the guest_cpuinfo. Perhaps you need to update the commit description?
>
> You are right, I modified my code recently to handle all possible registers,
> so this case does not exist anymore.

You are covering all currently known registers. If I read the reference
manual right, there are a number of unassigned ID registers that should
be RAZ. But with this patch, accesses to them will trigger an undefined
instruction abort in the guest.

> I will update the commit message to fix this.

I believe you need to add cases for the currently unassigned registers,
so that accesses to them will read as 0.

> Cheers
> Bertrand
>
>>
>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>> ---
>>> Changes in V2: rebase
>>> ---
>>> xen/arch/arm/arm64/vsysreg.c | 49 ++++++++++++++++++++++++++++++++++++
>>> 1 file changed, 49 insertions(+)
>>>
>>> diff --git a/xen/arch/arm/arm64/vsysreg.c b/xen/arch/arm/arm64/vsysreg.c
>>> index 8a85507d9d..970ef51603 100644
>>> --- a/xen/arch/arm/arm64/vsysreg.c
>>> +++ b/xen/arch/arm/arm64/vsysreg.c
>>> @@ -69,6 +69,14 @@ TVM_REG(CONTEXTIDR_EL1)
>>>         break;                                                          \
>>>     }
>>>
>>> +/* Macro to generate easily case for ID co-processor emulation */
>>> +#define GENERATE_TID3_INFO(reg,field,offset)                          \
>>> +    case HSR_SYSREG_##reg:                                            \
>>> +    {                                                                 \
>>> +        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr, \
>>> +                          1, guest_cpuinfo.field.bits[offset]);       \
>>> +    }
>>> +
>>> void do_sysreg(struct cpu_user_regs *regs,
>>>                const union hsr hsr)
>>> {
>>> @@ -259,6 +267,47 @@ void do_sysreg(struct cpu_user_regs *regs,
>>>          */
>>>         return handle_raz_wi(regs, regidx, hsr.sysreg.read, hsr, 1);
>>>
>>> +    /*
>>> +     * HCR_EL2.TID3
>>> +     *
>>> +     * This is trapping most Identification registers used by a guest
>>> +     * to identify the processor features
>>> +     */
>>> +    GENERATE_TID3_INFO(ID_PFR0_EL1, pfr32, 0)
>>> +    GENERATE_TID3_INFO(ID_PFR1_EL1, pfr32, 1)
>>> +    GENERATE_TID3_INFO(ID_PFR2_EL1, pfr32, 2)
>>> +    GENERATE_TID3_INFO(ID_DFR0_EL1, dbg32, 0)
>>> +    GENERATE_TID3_INFO(ID_DFR1_EL1, dbg32, 1)
>>> +    GENERATE_TID3_INFO(ID_AFR0_EL1, aux32, 0)
>>> +    GENERATE_TID3_INFO(ID_MMFR0_EL1, mm32, 0)
>>> +    GENERATE_TID3_INFO(ID_MMFR1_EL1, mm32, 1)
>>> +    GENERATE_TID3_INFO(ID_MMFR2_EL1, mm32, 2)
>>> +    GENERATE_TID3_INFO(ID_MMFR3_EL1, mm32, 3)
>>> +    GENERATE_TID3_INFO(ID_MMFR4_EL1, mm32, 4)
>>> +    GENERATE_TID3_INFO(ID_MMFR5_EL1, mm32, 5)
>>> +    GENERATE_TID3_INFO(ID_ISAR0_EL1, isa32, 0)
>>> +    GENERATE_TID3_INFO(ID_ISAR1_EL1, isa32, 1)
>>> +    GENERATE_TID3_INFO(ID_ISAR2_EL1, isa32, 2)
>>> +    GENERATE_TID3_INFO(ID_ISAR3_EL1, isa32, 3)
>>> +    GENERATE_TID3_INFO(ID_ISAR4_EL1, isa32, 4)
>>> +    GENERATE_TID3_INFO(ID_ISAR5_EL1, isa32, 5)
>>> +    GENERATE_TID3_INFO(ID_ISAR6_EL1, isa32, 6)
>>> +    GENERATE_TID3_INFO(MVFR0_EL1, mvfr, 0)
>>> +    GENERATE_TID3_INFO(MVFR1_EL1, mvfr, 1)
>>> +    GENERATE_TID3_INFO(MVFR2_EL1, mvfr, 2)
>>> +    GENERATE_TID3_INFO(ID_AA64PFR0_EL1, pfr64, 0)
>>> +    GENERATE_TID3_INFO(ID_AA64PFR1_EL1, pfr64, 1)
>>> +    GENERATE_TID3_INFO(ID_AA64DFR0_EL1, dbg64, 0)
>>> +    GENERATE_TID3_INFO(ID_AA64DFR1_EL1, dbg64, 1)
>>> +    GENERATE_TID3_INFO(ID_AA64ISAR0_EL1, isa64, 0)
>>> +    GENERATE_TID3_INFO(ID_AA64ISAR1_EL1, isa64, 1)
>>> +    GENERATE_TID3_INFO(ID_AA64MMFR0_EL1, mm64, 0)
>>> +    GENERATE_TID3_INFO(ID_AA64MMFR1_EL1, mm64, 1)
>>> +    GENERATE_TID3_INFO(ID_AA64MMFR2_EL1, mm64, 2)
>>> +    GENERATE_TID3_INFO(ID_AA64AFR0_EL1, aux64, 0)
>>> +    GENERATE_TID3_INFO(ID_AA64AFR1_EL1, aux64, 1)
>>> +    GENERATE_TID3_INFO(ID_AA64ZFR0_EL1, zfr64, 0)
>>> +
>>>     /*
>>>      * HCR_EL2.TIDCP
>>>      *
>>
>>
>> -- 
>> Volodymyr Babchuk at EPAM


-- 
Volodymyr Babchuk at EPAM


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 12:07:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 12:07:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41888.75385 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk4RP-0004A7-RI; Tue, 01 Dec 2020 12:07:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41888.75385; Tue, 01 Dec 2020 12:07:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk4RP-0004A0-O8; Tue, 01 Dec 2020 12:07:31 +0000
Received: by outflank-mailman (input) for mailman id 41888;
 Tue, 01 Dec 2020 12:07:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3Rbw=FF=epam.com=prvs=0604308a42=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1kk4RO-00049v-LF
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 12:07:30 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cd296bd9-8f51-42d0-b785-31e35b458043;
 Tue, 01 Dec 2020 12:07:29 +0000 (UTC)
Received: from pps.filterd (m0174683.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 0B1C6ERn018201; Tue, 1 Dec 2020 12:07:24 GMT
Received: from eur02-am5-obe.outbound.protection.outlook.com
 (mail-am5eur02lp2057.outbound.protection.outlook.com [104.47.4.57])
 by mx0b-0039f301.pphosted.com with ESMTP id 353ejmyqjj-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 01 Dec 2020 12:07:24 +0000
Received: from VI1PR03MB6400.eurprd03.prod.outlook.com (2603:10a6:800:17e::20)
 by VI1PR03MB3520.eurprd03.prod.outlook.com (2603:10a6:803:2c::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.22; Tue, 1 Dec
 2020 12:07:13 +0000
Received: from VI1PR03MB6400.eurprd03.prod.outlook.com
 ([fe80::d7a:2503:2ffd:1c51]) by VI1PR03MB6400.eurprd03.prod.outlook.com
 ([fe80::d7a:2503:2ffd:1c51%6]) with mapi id 15.20.3611.031; Tue, 1 Dec 2020
 12:07:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd296bd9-8f51-42d0-b785-31e35b458043
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cz4kDwEKeEu6pQ3AacVRJjSxWH2zYzbjEwpWvGdNkEkex22SmPrulL/jxmFbEfmv+k3z6howfm8FdJlF2LR2ogAyvuafy1NPFMjYJamRbIGJY+l15BLx6R2MBEBKhIg9aFjsyX+rjJeBfRpgZk6355RvSZWEmT5GjqgeIJslEbMZlFQDjyPRTEykFosajjKF5Xs/kDHfyDk+g1opW7Bq31IH0zmAIqTfV8cKEKszlmbigYA4h1ywWlFc/zww9KKy7+BDl6SugRfSmlTBzqTJL0GmYpjYwQnfIvVR9+xl9h8TPSyKl9/PR7xurFnK4K9e9sbvuiqLcUHhNf0xbUuXlg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Qocu+7lHskqhuFd/LzqK8s14GfpI/b9CwmrpPIzJKeA=;
 b=oA+V8iiNU/k6+W4JToS6p+gQHUPgLPVGmWw8ZvtYUyb4q+1CEQFRlKdcfgwV6AswfVto0JvqEI0NtRN0sGVu+N4GF8qickGCt1Aifm62rI6+X82O4GugYCQwKocfkw+aH4kSzvXG0XU7dU5kUucG9sU6GErwzftcyxD4EmqY5yuZERfMVyESqg0hZ/WBH76X64VQw57cvs3qFuJahA81i//2yAUmlGF8Hi9WGqCxApNnmbJ6RoMZCwRjfpT7Kb0rcTxPnqnodiU56WxocxbFhN1GjqfGhlfYGCJ2pLjDrQLkw8afVt0ZKxSkbA+2DSU6xt5ZpeW+Ukb6fT55FDqAhQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Qocu+7lHskqhuFd/LzqK8s14GfpI/b9CwmrpPIzJKeA=;
 b=kQdgggOH13ryIIQqZ3JVew4mdEQCk/J43pm4E5mubrefQNdGh0WqEaM8VKlGiP1yRT+5tCrMdbPYP+2TvOZOOhUenL/+U5sXy/nu8MWFGIaprTkO7cv0ymzCFhKEkFz32Q3BIoHy31c5UL4Zl19ofVzEGtf3Ov6YO9b5DUSwUF5sC46a+lypUDyOU+SoTBaXlaxGuRcEsTvBw7KJNUcn8NeLMCNtBWxks9VRErRZQHqq6VXgqNPW2RU1a6CrbiXsNbDM/4JWUt43taTqYKkiauMHptOGwDHy9PqqIB2mHfYs+G4xQJ9wA1deOItJq5QD/Aezve28XK/QuvAi6TWv8Q==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Stefano
 Stabellini <sstabellini@kernel.org>,
        Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 5/7] xen/arm: Add handler for cp15 ID registers
Thread-Topic: [PATCH v2 5/7] xen/arm: Add handler for cp15 ID registers
Thread-Index: AQHWxyRSFk5jpSQ9QE+NXR5AM1rL5qnhIUQAgAD/vgCAAAW1AA==
Date: Tue, 1 Dec 2020 12:07:13 +0000
Message-ID: <87sg8p687j.fsf@epam.com>
References: <cover.1606742184.git.bertrand.marquis@arm.com>
 <86c96cd3895bf968f94010c0f4ee8dce7f0338e8.1606742184.git.bertrand.marquis@arm.com>
 <87lfei7fj5.fsf@epam.com> <AB32AAFF-DD1D-4B13-ABC0-06F460E95E1C@arm.com>
In-Reply-To: <AB32AAFF-DD1D-4B13-ABC0-06F460E95E1C@arm.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: mu4e 1.4.10; emacs 27.1
authentication-results: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [176.36.48.175]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 869ceb0f-e3b6-4a48-a902-08d895f1a364
x-ms-traffictypediagnostic: VI1PR03MB3520:
x-microsoft-antispam-prvs: 
 <VI1PR03MB3520FD2AFEEA30999514C97AE6F40@VI1PR03MB3520.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <EF347B1C31249F4DB12B42DF48EDAC5E@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: VI1PR03MB6400.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 869ceb0f-e3b6-4a48-a902-08d895f1a364
X-MS-Exchange-CrossTenant-originalarrivaltime: 01 Dec 2020 12:07:13.2869
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: hcFIZjuDybx2bj+5Uvm1tPBGQBHTD1/2eSrurYaf8xmLbrzLbpGTQ8lXvzhruPs/Ou5gRChRK5mC5fPjJdrB6CGwDD8j2ISfEkwoOjPgilA=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR03MB3520
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-12-01_04:2020-11-30,2020-12-01 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 lowpriorityscore=0
 clxscore=1015 malwarescore=0 priorityscore=1501 bulkscore=0
 mlxlogscore=791 spamscore=0 mlxscore=0 adultscore=0 phishscore=0
 impostorscore=0 suspectscore=0 classifier=spam adjust=0 reason=mlx
 scancount=1 engine=8.12.0-2009150000 definitions=main-2012010079


Hi,

Bertrand Marquis writes:

> Hi,
>
>> On 30 Nov 2020, at 20:31, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> wrote:
>> 
>> 
>> Bertrand Marquis writes:
>> 
>>> Add support for emulation of cp15 based ID registers (on arm32 or when
>>> running a 32bit guest on arm64).
>>> The handlers are returning the values stored in the guest_cpuinfo
>>> structure.
>>> In the current status the MVFR registers are no supported.
>> 
>> It is unclear what will happen with registers that are not covered by
>> guest_cpuinfo structure. According to ARM ARM, it is implementation
>> defined if such accesses will be trapped. On other hand, there are many
>> registers which are RAZ. So, good behaving guest can try to read one of
>> that registers and it will get undefined instruction exception, instead
>> of just reading all zeroes.
>
> This is true in the status of this patch but this is solved by the next patch
> which is adding proper handling of those registers (add CP10 exception
> support), at least for MVFR ones.
>
> From ARM ARM point of view, I did handle all registers listed I think.
> If you think some are missing please point me to them as I do not
> completely understand what are the “registers not covered” unless
> you mean the MVFR ones.

Well, I may be wrong for the aarch32 case, but for aarch64, there are a
number of reserved registers in the ID range. Those registers should read as
zero. You can find them in the section "C5.1.6 op0==0b11, Moves to and
from non-debug System registers and Special-purpose registers" of ARM
DDI 0487B.a. Check out "Table C5-6 System instruction encodings for
non-Debug System register accesses".


>> 
>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>> ---
>>> Changes in V2: rebase
>>> ---
>>> xen/arch/arm/vcpreg.c | 35 +++++++++++++++++++++++++++++++++++
>>> 1 file changed, 35 insertions(+)
>>> 
>>> diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c
>>> index cdc91cdf5b..d0c6406f34 100644
>>> --- a/xen/arch/arm/vcpreg.c
>>> +++ b/xen/arch/arm/vcpreg.c
>>> @@ -155,6 +155,14 @@ TVM_REG32(CONTEXTIDR, CONTEXTIDR_EL1)
>>>         break;                                                          \
>>>     }
>>> 
>>> +/* Macro to generate easily case for ID co-processor emulation */
>>> +#define GENERATE_TID3_INFO(reg,field,offset)                        \
>>> +    case HSR_CPREG32(reg):                                          \
>>> +    {                                                               \
>>> +        return handle_ro_read_val(regs, regidx, cp32.read, hsr,     \
>>> +                          1, guest_cpuinfo.field.bits[offset]);     \
>>> +    }
>>> +
>>> void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
>>> {
>>>     const struct hsr_cp32 cp32 = hsr.cp32;
>>> @@ -286,6 +294,33 @@ void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
>>>          */
>>>         return handle_raz_wi(regs, regidx, cp32.read, hsr, 1);
>>> 
>>> +    /*
>>> +     * HCR_EL2.TID3
>>> +     *
>>> +     * This is trapping most Identification registers used by a guest
>>> +     * to identify the processor features
>>> +     */
>>> +    GENERATE_TID3_INFO(ID_PFR0, pfr32, 0)
>>> +    GENERATE_TID3_INFO(ID_PFR1, pfr32, 1)
>>> +    GENERATE_TID3_INFO(ID_PFR2, pfr32, 2)
>>> +    GENERATE_TID3_INFO(ID_DFR0, dbg32, 0)
>>> +    GENERATE_TID3_INFO(ID_DFR1, dbg32, 1)
>>> +    GENERATE_TID3_INFO(ID_AFR0, aux32, 0)
>>> +    GENERATE_TID3_INFO(ID_MMFR0, mm32, 0)
>>> +    GENERATE_TID3_INFO(ID_MMFR1, mm32, 1)
>>> +    GENERATE_TID3_INFO(ID_MMFR2, mm32, 2)
>>> +    GENERATE_TID3_INFO(ID_MMFR3, mm32, 3)
>>> +    GENERATE_TID3_INFO(ID_MMFR4, mm32, 4)
>>> +    GENERATE_TID3_INFO(ID_MMFR5, mm32, 5)
>>> +    GENERATE_TID3_INFO(ID_ISAR0, isa32, 0)
>>> +    GENERATE_TID3_INFO(ID_ISAR1, isa32, 1)
>>> +    GENERATE_TID3_INFO(ID_ISAR2, isa32, 2)
>>> +    GENERATE_TID3_INFO(ID_ISAR3, isa32, 3)
>>> +    GENERATE_TID3_INFO(ID_ISAR4, isa32, 4)
>>> +    GENERATE_TID3_INFO(ID_ISAR5, isa32, 5)
>>> +    GENERATE_TID3_INFO(ID_ISAR6, isa32, 6)
>>> +    /* MVFR registers are in cp10 no cp15 */
>>> +
>>>     /*
>>>      * HCR_EL2.TIDCP
>>>      *
>> 
>> 
>> -- 
>> Volodymyr Babchuk at EPAM


-- 
Volodymyr Babchuk at EPAM


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 12:13:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 12:13:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41896.75400 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk4X3-00059S-Hy; Tue, 01 Dec 2020 12:13:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41896.75400; Tue, 01 Dec 2020 12:13:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk4X3-00059L-F1; Tue, 01 Dec 2020 12:13:21 +0000
Received: by outflank-mailman (input) for mailman id 41896;
 Tue, 01 Dec 2020 12:13:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kk4X2-00059G-Dx
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 12:13:20 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kk4X0-0002yS-Nh; Tue, 01 Dec 2020 12:13:18 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kk4X0-0002CL-DP; Tue, 01 Dec 2020 12:13:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=ef7qC0uAHcI2Y0FkRg9bQl35p9Ox/+EmcfDWLzISWWA=; b=nvgdgIxDjsk3G2RrcN1rNolgCJ
	B4/+MrWFttzgzlZVSmT0e3pam6SyeReHh1V8ssx03TmxwVpktyl8FoJ0YBv70swIWdLPCgoVsxYhN
	MrGWnZYh0T7DWABBCrKOZ3J0afmjdzlK45yZ0g/VH3xhrKHA/2YSkrWHeMPq7v09FXOw=;
Subject: Re: [PATCH V3 19/23] xen/arm: io: Abstract sign-extension
To: Oleksandr <olekstysh@gmail.com>
Cc: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Oleksandr Tyshchenko <Oleksandr_Tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-20-git-send-email-olekstysh@gmail.com>
 <878sai7e1a.fsf@epam.com> <cad0d7fe-3a9f-3992-9d89-8e9bb438dfbe@gmail.com>
 <93284ea1-e658-ffff-3223-174d633e38ad@suse.com>
 <d7b8f43d-2a59-6316-5609-0595b2a86045@xen.org>
 <932d7826-7e48-aaee-d566-44c384f84e1c@gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <a7e9a898-c096-2506-c944-b465f60d153c@xen.org>
Date: Tue, 1 Dec 2020 12:13:16 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <932d7826-7e48-aaee-d566-44c384f84e1c@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Oleksandr,

On 01/12/2020 10:42, Oleksandr wrote:
> 
> On 01.12.20 12:30, Julien Grall wrote:
> 
> Hi Julien
> 
>> Hi Jan,
>>
>> On 01/12/2020 07:55, Jan Beulich wrote:
>>> On 01.12.2020 00:27, Oleksandr wrote:
>>>> On 30.11.20 23:03, Volodymyr Babchuk wrote:
>>>>> Oleksandr Tyshchenko writes:
>>>>>> --- a/xen/include/asm-arm/traps.h
>>>>>> +++ b/xen/include/asm-arm/traps.h
>>>>>> @@ -83,6 +83,30 @@ static inline bool VABORT_GEN_BY_GUEST(const struct cpu_user_regs *regs)
>>>>>>            (unsigned long)abort_guest_exit_end == regs->pc;
>>>>>>    }
>>>>>>    +/* Check whether the sign extension is required and perform it */
>>>>>> +static inline register_t sign_extend(const struct hsr_dabt dabt, 
>>>>>> register_t r)
>>>>>> +{
>>>>>> +    uint8_t size = (1 << dabt.size) * 8;
>>>>>> +
>>>>>> +    /*
>>>>>> +     * Sign extend if required.
>>>>>> +     * Note that we expect the read handler to have zeroed the bits
>>>>>> +     * outside the requested access size.
>>>>>> +     */
>>>>>> +    if ( dabt.sign && (r & (1UL << (size - 1))) )
>>>>>> +    {
>>>>>> +        /*
>>>>>> +         * We are relying on register_t using the same as
>>>>>> +         * an unsigned long in order to keep the 32-bit assembly
>>>>>> +         * code smaller.
>>>>>> +         */
>>>>>> +        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
>>>>>> +        r |= (~0UL) << size;
>>>>> If `size` is 64, you will get undefined behavior there.
>>>> I think we don't need to worry about undefined behavior here. Having
>>>> size=64 would be possible with doubleword (dabt.size=3). But if the "r"
>>>> adjustment gets called (I mean the Syndrome Sign Extend bit is set) then
>>>> we deal with byte, halfword or word operations (dabt.size<3). Or did I
>>>> miss something?
>>>
>>> At which point please put in a respective ASSERT(), possibly amended
>>> by a brief comment.
>>
>> ASSERT()s are only meant to catch programmatic errors. However, in this 
>> case, the bigger risk is a hardware bug such as advertising a sign 
>> extension for either 64-bit (or 32-bit) on Arm64 (resp. Arm32).
>>
>> Actually the Armv8 spec is a bit more blurry when running in AArch32 
>> state because they suggest that the sign extension can be set even for 
>> 32-bit access. I think this is a spelling mistake, but it is probably 
>> better to be cautious here.
>>
>> Therefore, I would recommend reworking the code so it is only called 
>> when len < sizeof(register_t).
> 
> I am not sure I understand the recommendation, could you please clarify 
> (also I don't see 'len' being used here).

Sorry I meant 'size'. I think something like:

if ( dabt.sign && (size < sizeof(register_t) * 8) &&
     (r & (1UL << (size - 1))) )
{
}

Another posibility would be:

if ( dabt.sign && (size < sizeof(register_t) * 8) )
{
    /* find whether the sign bit is set and propagate it */
}

I have a slight preference for the latter as the "if" is easier to read.
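To illustrate the latter variant, here is a self-contained sketch (the name sign_extend_sketch and the plain integer/bool parameters are made up for the example; the real code would of course keep taking hsr_dabt and register_t, and on Arm32 register_t would be 32-bit):

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumption for the sketch: AArch64, where register_t is 64-bit. */
typedef uint64_t register_t;

/*
 * Sign extend a value read with an access of size_bits bits, but only
 * when the access is narrower than a register. The guard also avoids
 * the undefined behaviour of shifting a 64-bit value by 64.
 */
static register_t sign_extend_sketch(unsigned int size_bits, bool sign,
                                     register_t r)
{
    if ( sign && (size_bits < sizeof(register_t) * 8) )
    {
        /* Find whether the sign bit is set and propagate it. */
        if ( r & ((register_t)1 << (size_bits - 1)) )
            r |= ~(register_t)0 << size_bits;
    }

    return r;
}
```

With this shape, a (possibly bogus) hardware-reported sign-extended full-width access simply leaves the value untouched instead of shifting by the register width.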

In any case, I think this change should be done in a separate patch (I 
don't mind whether this is done after or before this one).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 12:22:52 2020
Date: Tue, 1 Dec 2020 12:22:38 +0000 (UTC)
From: Jason Long <hack3rcon@yahoo.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Message-ID: <1326147626.1969889.1606825358993@mail.yahoo.com>
Subject: Apple on Xen?
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
References: <1326147626.1969889.1606825358993.ref@mail.yahoo.com>

Hello,
According to this news (https://aws.amazon.com/blogs/aws/new-use-mac-instances-to-build-test-macos-ios-ipados-tvos-and-watchos-apps/), Amazon EC2 can run macOS.
Is this possible with the Xen Project too?

Thanks.


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 12:24:12 2020
Subject: Re: [PATCH V3 19/23] xen/arm: io: Abstract sign-extension
To: Julien Grall <julien@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Oleksandr Tyshchenko <Oleksandr_Tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-20-git-send-email-olekstysh@gmail.com>
 <878sai7e1a.fsf@epam.com> <cad0d7fe-3a9f-3992-9d89-8e9bb438dfbe@gmail.com>
 <93284ea1-e658-ffff-3223-174d633e38ad@suse.com>
 <d7b8f43d-2a59-6316-5609-0595b2a86045@xen.org>
 <932d7826-7e48-aaee-d566-44c384f84e1c@gmail.com>
 <a7e9a898-c096-2506-c944-b465f60d153c@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <458dc9d7-6ef7-6591-5212-48363bb56971@gmail.com>
Date: Tue, 1 Dec 2020 14:24:07 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <a7e9a898-c096-2506-c944-b465f60d153c@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 01.12.20 14:13, Julien Grall wrote:
> Hi Oleksandr,

Hi Julien.


>
>>>>>>> --- a/xen/include/asm-arm/traps.h
>>>>>>> +++ b/xen/include/asm-arm/traps.h
>>>>>>> @@ -83,6 +83,30 @@ static inline bool VABORT_GEN_BY_GUEST(const 
>>>>>>> struct cpu_user_regs *regs)
>>>>>>>            (unsigned long)abort_guest_exit_end == regs->pc;
>>>>>>>    }
>>>>>>>    +/* Check whether the sign extension is required and perform 
>>>>>>> it */
>>>>>>> +static inline register_t sign_extend(const struct hsr_dabt 
>>>>>>> dabt, register_t r)
>>>>>>> +{
>>>>>>> +    uint8_t size = (1 << dabt.size) * 8;
>>>>>>> +
>>>>>>> +    /*
>>>>>>> +     * Sign extend if required.
>>>>>>> +     * Note that we expect the read handler to have zeroed the 
>>>>>>> bits
>>>>>>> +     * outside the requested access size.
>>>>>>> +     */
>>>>>>> +    if ( dabt.sign && (r & (1UL << (size - 1))) )
>>>>>>> +    {
>>>>>>> +        /*
>>>>>>> +         * We are relying on register_t having the same size as
>>>>>>> +         * an unsigned long in order to keep the 32-bit assembly
>>>>>>> +         * code smaller.
>>>>>>> +         */
>>>>>>> +        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
>>>>>>> +        r |= (~0UL) << size;
>>>>>> If `size` is 64, you will get undefined behavior there.
>>>>>> I think we don't need to worry about undefined behavior here. Having
>>>>>> size=64 would only be possible for a doubleword access (dabt.size=3).
>>>>>> But if the "r" adjustment gets called (i.e. the Syndrome Sign Extend
>>>>>> bit is set), then we are dealing with a byte, halfword or word
>>>>>> operation (dabt.size<3). Or did I miss something?
>>>>
>>>> At which point please put in a respective ASSERT(), possibly amended
>>>> by a brief comment.
>>>
>>>> ASSERT()s are only meant to catch programmatic errors. However, in 
>>>> this case, the bigger risk is a hardware bug, such as advertising 
>>>> sign extension for a 64-bit (resp. 32-bit) access on Arm64 (resp. Arm32).
>>>
>>> Actually the Armv8 spec is a bit more blurry when running in AArch32 
>>> state because they suggest that the sign extension can be set even 
>>> for 32-bit access. I think this is a spelling mistake, but it is 
>>> probably better to be cautious here.
>>>
>>>> Therefore, I would recommend reworking the code so it is only called 
>>> when len < sizeof(register_t).
>>
>> I am not sure I understand the recommendation, could you please 
>> clarify (also I don't see 'len' being used here).
>
> Sorry I meant 'size'. I think something like:
>
> if ( dabt.sign && (size < sizeof(register_t) * 8) &&
>      (r & (1UL << (size - 1))) )
> {
> }
>
> Another possibility would be:
>
> if ( dabt.sign && (size < sizeof(register_t) * 8) )
> {
>    /* find whether the sign bit is set and propagate it */
> }
>
> I have a slight preference for the latter as the "if" is easier to read.
>
> In any case, I think this change should be done in a separate patch (I 
> don't mind whether this is done after or before this one).

OK, I got it, thank you for the clarification. Of course, I will do that 
in a separate patch, since the current one only avoids code duplication. 
BTW, do you have any comments on the patch itself?

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 12:28:25 2020
Subject: Re: [PATCH V3 19/23] xen/arm: io: Abstract sign-extension
To: Oleksandr <olekstysh@gmail.com>
Cc: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Oleksandr Tyshchenko <Oleksandr_Tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-20-git-send-email-olekstysh@gmail.com>
 <878sai7e1a.fsf@epam.com> <cad0d7fe-3a9f-3992-9d89-8e9bb438dfbe@gmail.com>
 <93284ea1-e658-ffff-3223-174d633e38ad@suse.com>
 <d7b8f43d-2a59-6316-5609-0595b2a86045@xen.org>
 <932d7826-7e48-aaee-d566-44c384f84e1c@gmail.com>
 <a7e9a898-c096-2506-c944-b465f60d153c@xen.org>
 <458dc9d7-6ef7-6591-5212-48363bb56971@gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <8ad8a27e-bbc3-8d9b-04e2-b68de1ff8ef4@xen.org>
Date: Tue, 1 Dec 2020 12:28:17 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <458dc9d7-6ef7-6591-5212-48363bb56971@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 01/12/2020 12:24, Oleksandr wrote:
> 
> On 01.12.20 14:13, Julien Grall wrote:
>> Hi Oleksandr,
> 
> Hi Julien.
> 
> 
>>
>>>>>>>> --- a/xen/include/asm-arm/traps.h
>>>>>>>> +++ b/xen/include/asm-arm/traps.h
>>>>>>>> @@ -83,6 +83,30 @@ static inline bool VABORT_GEN_BY_GUEST(const 
>>>>>>>> struct cpu_user_regs *regs)
>>>>>>>>            (unsigned long)abort_guest_exit_end == regs->pc;
>>>>>>>>    }
>>>>>>>>    +/* Check whether the sign extension is required and perform 
>>>>>>>> it */
>>>>>>>> +static inline register_t sign_extend(const struct hsr_dabt 
>>>>>>>> dabt, register_t r)
>>>>>>>> +{
>>>>>>>> +    uint8_t size = (1 << dabt.size) * 8;
>>>>>>>> +
>>>>>>>> +    /*
>>>>>>>> +     * Sign extend if required.
>>>>>>>> +     * Note that we expect the read handler to have zeroed the 
>>>>>>>> bits
>>>>>>>> +     * outside the requested access size.
>>>>>>>> +     */
>>>>>>>> +    if ( dabt.sign && (r & (1UL << (size - 1))) )
>>>>>>>> +    {
>>>>>>>> +        /*
>>>>>>>> +         * We are relying on register_t having the same size as
>>>>>>>> +         * an unsigned long in order to keep the 32-bit assembly
>>>>>>>> +         * code smaller.
>>>>>>>> +         */
>>>>>>>> +        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
>>>>>>>> +        r |= (~0UL) << size;
>>>>>>> If `size` is 64, you will get undefined behavior there.
>>>>>> I think we don't need to worry about undefined behavior here. Having
>>>>>> size=64 would only be possible for a doubleword access (dabt.size=3).
>>>>>> But if the "r" adjustment gets called (i.e. the Syndrome Sign Extend
>>>>>> bit is set), then we are dealing with a byte, halfword or word
>>>>>> operation (dabt.size<3). Or did I miss something?
>>>>>
>>>>> At which point please put in a respective ASSERT(), possibly amended
>>>>> by a brief comment.
>>>>
>>>>> ASSERT()s are only meant to catch programmatic errors. However, in 
>>>>> this case, the bigger risk is a hardware bug, such as advertising 
>>>>> sign extension for a 64-bit (resp. 32-bit) access on Arm64 (resp. Arm32).
>>>>
>>>> Actually the Armv8 spec is a bit more blurry when running in AArch32 
>>>> state because they suggest that the sign extension can be set even 
>>>> for 32-bit access. I think this is a spelling mistake, but it is 
>>>> probably better to be cautious here.
>>>>
>>>>> Therefore, I would recommend reworking the code so it is only called 
>>>> when len < sizeof(register_t).
>>>
>>> I am not sure I understand the recommendation, could you please 
>>> clarify (also I don't see 'len' being used here).
>>
>> Sorry I meant 'size'. I think something like:
>>
>> if ( dabt.sign && (size < sizeof(register_t) * 8) &&
>>      (r & (1UL << (size - 1))) )
>> {
>> }
>>
>> Another possibility would be:
>>
>> if ( dabt.sign && (size < sizeof(register_t) * 8) )
>> {
>>    /* find whether the sign bit is set and propagate it */
>> }
>>
>> I have a slight preference for the latter as the "if" is easier to read.
>>
>> In any case, I think this change should be done in a separate patch (I 
>> don't mind whether this is done after or before this one).
> 
> OK, I got it, thank you for the clarification. Of course, I will do that 
> in a separate patch, since the current one only avoids code duplication. 
> BTW, do you have any comments on the patch itself?

The series is on my TODO list. I will have a look in a bit :).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 12:32:54 2020
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Paul Durrant <paul@xen.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Paul Durrant <pdurrant@amazon.com>, Ian Jackson <iwj@xenproject.org>,
        Wei
 Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v4 01/23] xl / libxl: s/pcidev/pci and remove
 DEFINE_DEVICE_TYPE_STRUCT_X
Thread-Topic: [PATCH v4 01/23] xl / libxl: s/pcidev/pci and remove
 DEFINE_DEVICE_TYPE_STRUCT_X
Thread-Index: AQHWx94RQn6N+bgvwE2dBHmo2TdzxQ==
Date: Tue, 1 Dec 2020 12:32:46 +0000
Message-ID: <43e4db29-744e-89b6-462d-b6d129fdcb08@epam.com>
References: <20201124080159.11912-1-paul@xen.org>
 <20201124080159.11912-2-paul@xen.org>
In-Reply-To: <20201124080159.11912-2-paul@xen.org>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Hi, Paul!

On 11/24/20 10:01 AM, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
>
> The seemingly arbitrary use of 'pci' and 'pcidev' in the code in libxl_pci.c
> is confusing and also compromises use of some macros used for other device
> types. Indeed it seems that DEFINE_DEVICE_TYPE_STRUCT_X exists solely because
> of this duality.
>
> This patch purges use of 'pcidev' from the libxl code, allowing evaluation of
> DEFINE_DEVICE_TYPE_STRUCT_X to be replaced with DEFINE_DEVICE_TYPE_STRUCT,
> hence allowing removal of the former.
>
> For consistency the xl and libs/util code is also modified, but in this case
> it is purely cosmetic.
>
> NOTE: Some of the more gross formatting errors (such as lack of spaces after
>        keywords) that came into context have been fixed in libxl_pci.c.
>
> Signed-off-by: Pau
bCBEdXJyYW50IDxwZHVycmFudEBhbWF6b24uY29tPg0KPiAtLS0NCj4gQ2M6IElhbiBKYWNrc29u
IDxpd2pAeGVucHJvamVjdC5vcmc+DQo+IENjOiBXZWkgTGl1IDx3bEB4ZW4ub3JnPg0KPiBDYzog
QW50aG9ueSBQRVJBUkQgPGFudGhvbnkucGVyYXJkQGNpdHJpeC5jb20+DQo+IC0tLQ0KPiAgIHRv
b2xzL2luY2x1ZGUvbGlieGwuaCAgICAgICAgICAgICB8ICAxNyArLQ0KPiAgIHRvb2xzL2xpYnMv
bGlnaHQvbGlieGxfY3JlYXRlLmMgICB8ICAgNiArLQ0KPiAgIHRvb2xzL2xpYnMvbGlnaHQvbGli
eGxfZG0uYyAgICAgICB8ICAxOCArLQ0KPiAgIHRvb2xzL2xpYnMvbGlnaHQvbGlieGxfaW50ZXJu
YWwuaCB8ICA0NSArKy0NCj4gICB0b29scy9saWJzL2xpZ2h0L2xpYnhsX3BjaS5jICAgICAgfCA1
ODIgKysrKysrKysrKysrKysrKysrKy0tLS0tLS0tLS0tLS0tLS0tLS0NCj4gICB0b29scy9saWJz
L2xpZ2h0L2xpYnhsX3R5cGVzLmlkbCAgfCAgIDIgKy0NCj4gICB0b29scy9saWJzL3V0aWwvbGli
eGx1X3BjaS5jICAgICAgfCAgMzYgKy0tDQo+ICAgdG9vbHMveGwveGxfcGFyc2UuYyAgICAgICAg
ICAgICAgIHwgIDI4ICstDQo+ICAgdG9vbHMveGwveGxfcGNpLmMgICAgICAgICAgICAgICAgIHwg
IDY4ICsrLS0tDQo+ICAgdG9vbHMveGwveGxfc3hwLmMgICAgICAgICAgICAgICAgIHwgIDEyICst
DQo+ICAgMTAgZmlsZXMgY2hhbmdlZCwgNDA5IGluc2VydGlvbnMoKyksIDQwNSBkZWxldGlvbnMo
LSkNCj4NCj4gZGlmZiAtLWdpdCBhL3Rvb2xzL2luY2x1ZGUvbGlieGwuaCBiL3Rvb2xzL2luY2x1
ZGUvbGlieGwuaA0KPiBpbmRleCAxZWE1YjRmNDQ2Li5mYmU0YzgxYmE1IDEwMDY0NA0KPiAtLS0g
YS90b29scy9pbmNsdWRlL2xpYnhsLmgNCj4gKysrIGIvdG9vbHMvaW5jbHVkZS9saWJ4bC5oDQo+
IEBAIC00NDUsNiArNDQ1LDEzIEBADQo+ICAgI2RlZmluZSBMSUJYTF9IQVZFX0RJU0tfU0FGRV9S
RU1PVkUgMQ0KPiAgIA0KW3NuaXBdDQo+IC0vKiBTY2FuIHRocm91Z2ggL3N5cy8uLi4vcGNpYmFj
ay9zbG90cyBsb29raW5nIGZvciBwY2lkZXYncyBCREYgKi8NCj4gLXN0YXRpYyBpbnQgcGNpYmFj
a19kZXZfaGFzX3Nsb3QobGlieGxfX2djICpnYywgbGlieGxfZGV2aWNlX3BjaSAqcGNpZGV2KQ0K
PiArLyogU2NhbiB0aHJvdWdoIC9zeXMvLi4uL3BjaWJhY2svc2xvdHMgbG9va2luZyBmb3IgcGNp
J3MgQkRGICovDQo+ICtzdGF0aWMgaW50IHBjaWJhY2tfZGV2X2hhc19zbG90KGxpYnhsX19nYyAq
Z2MsIGxpYnhsX2RldmljZV9wY2kgKnBjaSkNCj4gICB7DQo+ICAgICAgIEZJTEUgKmY7DQo+ICAg
ICAgIGludCByYyA9IDA7DQo+IEBAIC02MzUsMTEgKzYzNSwxMSBAQCBzdGF0aWMgaW50IHBjaWJh
Y2tfZGV2X2hhc19zbG90KGxpYnhsX19nYyAqZ2MsIGxpYnhsX2RldmljZV9wY2kgKnBjaWRldikN
Cj4gICAgICAgICAgIHJldHVybiBFUlJPUl9GQUlMOw0KPiAgICAgICB9DQo+ICAgDQo+IC0gICAg
d2hpbGUoZnNjYW5mKGYsICIleDoleDoleC4lZFxuIiwgJmRvbSwgJmJ1cywgJmRldiwgJmZ1bmMp
PT00KSB7DQo+IC0gICAgICAgIGlmKGRvbSA9PSBwY2lkZXYtPmRvbWFpbg0KPiAtICAgICAgICAg
ICAmJiBidXMgPT0gcGNpZGV2LT5idXMNCj4gLSAgICAgICAgICAgJiYgZGV2ID09IHBjaWRldi0+
ZGV2DQo+IC0gICAgICAgICAgICYmIGZ1bmMgPT0gcGNpZGV2LT5mdW5jKSB7DQo+ICsgICAgd2hp
bGUgKGZzY2FuZihmLCAiJXg6JXg6JXguJWRcbiIsICZkb20sICZidXMsICZkZXYsICZmdW5jKT09
NCkgew0KU28sIHRoZW4geW91IGNhbiBwcm9iYWJseSBwdXQgc3BhY2VzIGFyb3VuZCAiNCIgaWYg
dG91Y2hpbmcgdGhpcyBsaW5lDQo+ICsgICAgICAgIGlmIChkb20gPT0gcGNpLT5kb21haW4NCj4g
KyAgICAgICAgICAgICYmIGJ1cyA9PSBwY2ktPmJ1cw0KPiArICAgICAgICAgICAgJiYgZGV2ID09
IHBjaS0+ZGV2DQo+ICsgICAgICAgICAgICAmJiBmdW5jID09IHBjaS0+ZnVuYykgew0KPiAgICAg
ICAgICAgICAgIHJjID0gMTsNCj4gICAgICAgICAgICAgICBnb3RvIG91dDsNCj4gICAgICAgICAg
IH0NCj4gQEAgLTY0OSw3ICs2NDksNyBAQCBvdXQ6DQo+ICAgICAgIHJldHVybiByYzsNCj4gICB9
DQo+ICAgDQoNClJldmlld2VkLWJ5OiBPbGVrc2FuZHIgQW5kcnVzaGNoZW5rbyA8b2xla3NhbmRy
X2FuZHJ1c2hjaGVua29AZXBhbS5jb20+DQoNClRoYW5rIHlvdSwNCg0KT2xla3NhbmRyDQo=


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 12:47:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 12:47:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41985.75499 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk53d-0000J0-Uv; Tue, 01 Dec 2020 12:47:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41985.75499; Tue, 01 Dec 2020 12:47:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk53d-0000It-Rn; Tue, 01 Dec 2020 12:47:01 +0000
Received: by outflank-mailman (input) for mailman id 41985;
 Tue, 01 Dec 2020 12:47:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kk53c-0000Io-82
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 12:47:00 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kk53a-0003fw-O1; Tue, 01 Dec 2020 12:46:58 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kk53a-00054p-IF; Tue, 01 Dec 2020 12:46:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=o0qGaJvymSsbp0G5CPySp3y92GYd0Tz7c6m2ZoOVRoM=; b=ekPVelHUHuMU1NJhq2y+kEKPI4
	k8cHyRL5vUabV4dVJFHC9CeFxMFZxlcmpVbTtGz33OcLCjR/B51VnnNuvjGG5Mw8Xu5nT23ZQCRf+
	wyb/cbdc6lhL2L4TS0Tkg2ljcnWck1a/hL13K6G9E0rJ28aqFDdvlrgO1dVHM7e3jU/k=;
Subject: Re: [PATCH V3 15/23] xen/arm: Stick around in
 leave_hypervisor_to_guest until I/O has completed
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Oleksandr Tyshchenko <Oleksandr_Tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall@arm.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-16-git-send-email-olekstysh@gmail.com>
 <87czzu7emb.fsf@epam.com>
From: Julien Grall <julien@xen.org>
Message-ID: <c3cfe831-0f78-8e57-d9d4-802be7ce283e@xen.org>
Date: Tue, 1 Dec 2020 12:46:56 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <87czzu7emb.fsf@epam.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Volodymyr,

On 30/11/2020 20:51, Volodymyr Babchuk wrote:
> Oleksandr Tyshchenko writes:
> 
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> This patch adds proper handling of return value of
>> vcpu_ioreq_handle_completion() which involves using a loop
>> in leave_hypervisor_to_guest().
>>
>> The reason to use an unbounded loop here is the fact that vCPU
>> shouldn't continue until an I/O has completed. In Xen case, if an I/O
>> never completes then it most likely means that something went horribly
>> wrong with the Device Emulator. And it is most likely not safe to
>> continue. So letting the vCPU to spin forever if I/O never completes
>> is a safer action than letting it continue and leaving the guest in
>> unclear state and is the best what we can do for now.
>>
>> This wouldn't be an issue for Xen as do_softirq() would be called at
>> every loop. In case of failure, the guest will crash and the vCPU
>> will be unscheduled.
>>
> 
> Why you don't block vcpu there and unblock it when response is ready?

The vCPU will already block while waiting for the event channel. See the
call to wait_for_event_channel() in the ioreq code. However, you can
still receive a spurious unblock (e.g. the event channel notification is
received before the I/O is handled). So you have to loop around and
check whether there is more work to do.

> If
> I got it right, "client" vcpu will spin in the loop, eating own
> scheduling budget with no useful work done.

You can't really do much about that if you have a rogue/buggy device model.

> In the worst case, it will
> prevent "server" vcpu from running.

How so? Xen will raise the schedule softirq if the I/O is handled. You 
would have a pretty buggy system if your "client" vCPU is considered to 
be a much higher priority than your "server" vCPU.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 12:50:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 12:50:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41990.75511 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk56n-00019h-Et; Tue, 01 Dec 2020 12:50:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41990.75511; Tue, 01 Dec 2020 12:50:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk56n-00019a-Ar; Tue, 01 Dec 2020 12:50:17 +0000
Received: by outflank-mailman (input) for mailman id 41990;
 Tue, 01 Dec 2020 12:50:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJqf=FF=epam.com=prvs=0604985de8=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kk56l-00019V-VM
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 12:50:16 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d5aa2bfc-2db9-47bf-a209-d2e273d1a5f1;
 Tue, 01 Dec 2020 12:50:13 +0000 (UTC)
Received: from pps.filterd (m0174681.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 0B1Cndso001533; Tue, 1 Dec 2020 12:50:11 GMT
Received: from eur04-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2053.outbound.protection.outlook.com [104.47.14.53])
 by mx0b-0039f301.pphosted.com with ESMTP id 355k5tgm44-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 01 Dec 2020 12:50:11 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB6210.eurprd03.prod.outlook.com (2603:10a6:20b:15f::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.25; Tue, 1 Dec
 2020 12:50:07 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%9]) with mapi id 15.20.3611.022; Tue, 1 Dec 2020
 12:50:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d5aa2bfc-2db9-47bf-a209-d2e273d1a5f1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OrCMfMqdHlmjGI1pXtByv6iSGsELnankCgL+hCj5cW7rygw6axwjv8gG3s9xiRf/756K0VTQNKi5VQZwpFSukHxqwt8+jjH7Pnex6cXom2YG5es+r+IxheLaq3V0CICyKEF323qRztDw2JjFgKptSRBwxP3+Pah41wuDjd60dgtTYuf794MxzIbfO6l2/uC9+v5JZhCkIjI0z7uKANh1HzbU7Gfh1L9hWvQiC+cpqfDUM+nPMWfHC61aJLqw7a0A+0kzECujSUbuIeX1ceM5eAJ6oOUhJ81Ip/1LWPotxfyBzdjL0Vef6Z5kuI2X2ehNKC9QLG7h4g7A0i/mg8xk4Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PolJpx23ga+IRRfOMT+eXk2000hjT4RUxCeO72jf4Ac=;
 b=LFBYiyaxFWCt4urLcfwtd7O655elbpFa/5UrcLk4VfDr/pVBCfwMk4HbGnNBYOpBYXRTpd+JXdP8xySwxvBq4+W8keHx6tqW9DcGYgZhwLZDBkaOYwIwyNxbRdw4Ey7veOBtdI/x6irP9BHG16HTFXFH8JTPihhC+gJXuRMDWqzMWoGI8SEhA9H5KBLzTN/GgHQayU51klx9U93c6yTeB7Oh2bpbPRu4FRJlsyCUgF8w2Y/k1S2LZTu8e7oj3RvDl7bdT4RG1ODnI5ZnhHGtP3Ho3WNbvRmHhiuj1UIHof5qRR/lcZNALEugtbKFkyPZlzxgdP03QRHsRkNd+uBVuw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PolJpx23ga+IRRfOMT+eXk2000hjT4RUxCeO72jf4Ac=;
 b=1EgLVNjNXc474RfD+DuQIGO2RpSNaDVfQZavKbiCfyaLrJdIfCd+1pAtCn7pF3PbsNrbs0DzznjjyjxvpvfyajgY8pIZDYp7zBVQadTPBMeOYxtvaECR5ztfKAFPInmrjuGktBcT8vAnp/8mj6n2aFvTSx4kfsPoiW7zmMD/A7qxtBo87P0TaHzZPzFIAkJPclf/Ek7JK1YQ9uqdz1oDwvj/P+cRRU+gdtUE9UR2Q/sQA7z7Ihese7gq38DVT6DiGa+1sdcMU44qsT7INdKNCaMJCWRtCO+rNZ8s50WUmVNHgQHZBWsvoZzpM1AD2En2fX7UQOgKGGxbzeZU5wFTAQ==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Paul Durrant <paul@xen.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Paul Durrant <pdurrant@amazon.com>, Ian Jackson <iwj@xenproject.org>,
        Wei
 Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v4 02/23] libxl: make libxl__device_list() work correctly
 for LIBXL__DEVICE_KIND_PCI...
Thread-Topic: [PATCH v4 02/23] libxl: make libxl__device_list() work correctly
 for LIBXL__DEVICE_KIND_PCI...
Thread-Index: AQHWx+B+LFFGYFSKDUmoUIVMlfBq+w==
Date: Tue, 1 Dec 2020 12:50:07 +0000
Message-ID: <e533b7bc-77ed-9303-3df2-ac11a24d0497@epam.com>
References: <20201124080159.11912-1-paul@xen.org>
 <20201124080159.11912-3-paul@xen.org>
In-Reply-To: <20201124080159.11912-3-paul@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 031c9bc4-a4fd-4e8e-2842-08d895f7a188
x-ms-traffictypediagnostic: AM0PR03MB6210:
x-microsoft-antispam-prvs: 
 <AM0PR03MB6210C7BA90BF0742CA2B2CF8E7F40@AM0PR03MB6210.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:224;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 cpgeZtNodmKvkvJCMQhshK5W0oC+7Ob3vTxTeBsLQLYj5e7xzMkodjhicnjVYpahW5Y/cfT6GVXw6SYB/WoOph7mO8yaR2Ow1pCgCT1ZNCSdwx+K9rtmBx90P9/MacLzLrowO/GgkiA0pvt1UsUtuklzP8xXW5oaYsAKuiu9BXd1auBZqRSy9WeEzJ9a0byVA5gUDwGAGBMWdgJWZexsiyJVpv8kF9RQy1K+0gmkclq+enYXvBFiHPGeGzsbIgpQy6Z1/rJU8Xrhlxp1HVQD2DS1vcKc9wTNE0H8VzKgjTR2npz+cuNa4e3Ln76HokbCnwdG7/ANAukdRZNvfp8MSKUSTAEXP91k41bhMFCff6M2RsETSgfCt5s3SVdOPcJT
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(39860400002)(136003)(396003)(376002)(346002)(76116006)(186003)(66556008)(478600001)(66946007)(66476007)(66446008)(26005)(8676002)(31686004)(6506007)(83380400001)(64756008)(53546011)(71200400001)(110136005)(31696002)(316002)(4326008)(86362001)(54906003)(8936002)(5660300002)(36756003)(6512007)(2616005)(2906002)(6486002)(101420200001);DIR:OUT;SFP:1101;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <B04ECA3726926947A3DC5CE463A7BFEF@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 031c9bc4-a4fd-4e8e-2842-08d895f7a188
X-MS-Exchange-CrossTenant-originalarrivaltime: 01 Dec 2020 12:50:07.1365
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: fS0bVW2zPCucwTPWO/BT47a/hnzNOemsJFuF8pvKoKHdNUzgePBDUTp15NMXAMmnD6jF6L5mKHyQPZaxsqT8c52mk/G6tQX6c6ix5a8DGNQ5cCcEu578gK+Fx/fX6f/n
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB6210
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-12-01_04:2020-11-30,2020-12-01 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 phishscore=0 malwarescore=0
 suspectscore=0 spamscore=0 mlxscore=0 bulkscore=0 priorityscore=1501
 impostorscore=0 lowpriorityscore=0 clxscore=1015 mlxlogscore=999
 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2012010082

SGksIFBhdWwhDQoNCk9uIDExLzI0LzIwIDEwOjAxIEFNLCBQYXVsIER1cnJhbnQgd3JvdGU6DQo+
IEZyb206IFBhdWwgRHVycmFudCA8cGR1cnJhbnRAYW1hem9uLmNvbT4NCj4NCj4gLi4uIGRldmlj
ZXMuDQo+DQo+IEN1cnJlbnRseSB0aGVyZSBpcyBhbiBhc3N1bXB0aW9uIGJ1aWx0IGludG8gbGli
eGxfX2RldmljZV9saXN0KCkgdGhhdCBkZXZpY2UNCj4gYmFja2VuZHMgYXJlIGZ1bGx5IGVudW1h
cmF0ZWQgdW5kZXIgdGhlICcvbGlieGwnIHBhdGggaW4geGVuc3RvcmUuIFRoaXMgaXMNCj4gbm90
IHRoZSBjYXNlIGZvciBQQ0kgYmFja2VuZCBkZXZpY2VzLCB3aGljaCBhcmUgb25seSBwcm9wZXJs
eSBlbnVtZXJhdGVkDQo+IHVuZGVyICcvbG9jYWwvZG9tYWluLzAvYmFja2VuZCcuDQo+DQo+IFRo
aXMgcGF0Y2ggYWRkcyBhIG5ldyBnZXRfcGF0aCgpIG1ldGhvZCB0byBsaWJ4bF9fZGV2aWNlX3R5
cGUgdG8gYWxsb3cgYQ0KPiBiYWNrZW5kIGltcGxlbWVudGF0aW9uIChzdWNoIGFzIFBDSSkgdG8g
c3BlY2lmeSB0aGUgeGVuc3RvcmUgcGF0aCB3aGVyZQ0KPiBkZXZpY2VzIGFyZSBlbnVtZXJhdGVk
IGFuZCBtb2RpZmllcyBsaWJ4bF9fZGV2aWNlX2xpc3QoKSB0byB1c2UgdGhpcyBtZXRob2QNCj4g
aWYgaXQgaXMgYXZhaWxhYmxlLiBBbHNvLCBpZiB0aGUgZ2V0X251bSgpIG1ldGhvZCBpcyBkZWZp
bmVkIHRoZW4gdGhlDQo+IGZyb21feGVuc3RvcmUoKSBtZXRob2QgZXhwZWN0cyB0byBiZSBwYXNz
ZWQgdGhlIGJhY2tlbmQgcGF0aCB3aXRob3V0IHRoZSBkZXZpY2UNCj4gbnVtYmVyIGNvbmNhdGVu
YXRlZCwgc28gdGhpcyBpc3N1ZSBpcyBhbHNvIHJlY3RpZmllZC4NCj4NCj4gSGF2aW5nIG1hZGUg
bGlieGxfX2RldmljZV9saXN0KCkgd29yayBjb3JyZWN0bHksIHRoaXMgcGF0Y2ggcmVtb3ZlcyB0
aGUNCj4gb3Blbi1jb2RlZCBsaWJ4bF9wY2lfZGV2aWNlX3BjaV9saXN0KCkgaW4gZmF2b3VyIG9m
IGFuIGV2YWx1YXRpb24gb2YgdGhlDQo+IExJQlhMX0RFRklORV9ERVZJQ0VfTElTVCgpIG1hY3Jv
LiBUaGlzIGhhcyB0aGUgc2lkZS1lZmZlY3Qgb2YgYWxzbyBkZWZpbmluZw0KPiBsaWJ4bF9wY2lf
ZGV2aWNlX3BjaV9saXN0X2ZyZWUoKSB3aGljaCB3aWxsIGJlIHVzZWQgaW4gc3Vic2VxdWVudCBw
YXRjaGVzLg0KPg0KPiBTaWduZWQtb2ZmLWJ5OiBQYXVsIER1cnJhbnQgPHBkdXJyYW50QGFtYXpv
bi5jb20+DQpSZXZpZXdlZC1ieTogT2xla3NhbmRyIEFuZHJ1c2hjaGVua28gPG9sZWtzYW5kcl9h
bmRydXNoY2hlbmtvQGVwYW0uY29tPg0KPiAtLS0NCj4gQ2M6IElhbiBKYWNrc29uIDxpd2pAeGVu
cHJvamVjdC5vcmc+DQo+IENjOiBXZWkgTGl1IDx3bEB4ZW4ub3JnPg0KPiBDYzogQW50aG9ueSBQ
RVJBUkQgPGFudGhvbnkucGVyYXJkQGNpdHJpeC5jb20+DQo+DQo+IHYzOg0KPiAgIC0gTmV3IGlu
IHYzIChyZXBsYWNpbmcgImxpYnhsOiB1c2UgTElCWExfREVGSU5FX0RFVklDRV9MSVNUIGZvciBw
Y2kgZGV2aWNlcyIpDQo+IC0tLQ0KPiAgIHRvb2xzL2luY2x1ZGUvbGlieGwuaCAgICAgICAgICAg
ICB8ICA3ICsrKysrDQo+ICAgdG9vbHMvbGlicy9saWdodC9saWJ4bF9kZXZpY2UuYyAgIHwgNjYg
KysrKysrKysrKysrKysrKysrKysrLS0tLS0tLS0tLS0tLS0tLS0tDQo+ICAgdG9vbHMvbGlicy9s
aWdodC9saWJ4bF9pbnRlcm5hbC5oIHwgIDIgKysNCj4gICB0b29scy9saWJzL2xpZ2h0L2xpYnhs
X3BjaS5jICAgICAgfCAyOSArKysrKy0tLS0tLS0tLS0tLQ0KPiAgIDQgZmlsZXMgY2hhbmdlZCwg
NTIgaW5zZXJ0aW9ucygrKSwgNTIgZGVsZXRpb25zKC0pDQo+DQo+IGRpZmYgLS1naXQgYS90b29s
cy9pbmNsdWRlL2xpYnhsLmggYi90b29scy9pbmNsdWRlL2xpYnhsLmgNCj4gaW5kZXggZmJlNGM4
MWJhNS4uZWU1MmQzY2Y3ZSAxMDA2NDQNCj4gLS0tIGEvdG9vbHMvaW5jbHVkZS9saWJ4bC5oDQo+
ICsrKyBiL3Rvb2xzL2luY2x1ZGUvbGlieGwuaA0KPiBAQCAtNDUyLDYgKzQ1MiwxMiBAQA0KPiAg
ICNkZWZpbmUgTElCWExfSEFWRV9DT05GSUdfUENJUyAxDQo+ICAgDQo+ICAgLyoNCj4gKyAqIExJ
QlhMX0hBVkVfREVWSUNFX1BDSV9MSVNUX0ZSRUUgaW5kaWNhdGVzIHRoYXQgdGhlDQo+ICsgKiBs
aWJ4bF9kZXZpY2VfcGNpX2xpc3RfZnJlZSgpIGZ1bmN0aW9uIGlzIGRlZmluZWQuDQo+ICsgKi8N
Cj4gKyNkZWZpbmUgTElCWExfSEFWRV9ERVZJQ0VfUENJX0xJU1RfRlJFRSAxDQo+ICsNCj4gKy8q
DQo+ICAgICogbGlieGwgQUJJIGNvbXBhdGliaWxpdHkNCj4gICAgKg0KPiAgICAqIFRoZSBvbmx5
IGd1YXJhbnRlZSB3aGljaCBsaWJ4bCBtYWtlcyByZWdhcmRpbmcgQUJJIGNvbXBhdGliaWxpdHkN
Cj4gQEAgLTIzMjEsNiArMjMyNyw3IEBAIGludCBsaWJ4bF9kZXZpY2VfcGNpX2Rlc3Ryb3kobGli
eGxfY3R4ICpjdHgsIHVpbnQzMl90IGRvbWlkLA0KPiAgIA0KPiAgIGxpYnhsX2RldmljZV9wY2kg
KmxpYnhsX2RldmljZV9wY2lfbGlzdChsaWJ4bF9jdHggKmN0eCwgdWludDMyX3QgZG9taWQsDQo+
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGludCAqbnVtKTsNCj4g
K3ZvaWQgbGlieGxfZGV2aWNlX3BjaV9saXN0X2ZyZWUobGlieGxfZGV2aWNlX3BjaSogbGlzdCwg
aW50IG51bSk7DQo+ICAgDQo+ICAgLyoNCj4gICAgKiBUdXJucyB0aGUgY3VycmVudCBwcm9jZXNz
IGludG8gYSBiYWNrZW5kIGRldmljZSBzZXJ2aWNlIGRhZW1vbg0KPiBkaWZmIC0tZ2l0IGEvdG9v
bHMvbGlicy9saWdodC9saWJ4bF9kZXZpY2UuYyBiL3Rvb2xzL2xpYnMvbGlnaHQvbGlieGxfZGV2
aWNlLmMNCj4gaW5kZXggZTA4MWZhZjlhOS4uYWMxNzNhMDQzZCAxMDA2NDQNCj4gLS0tIGEvdG9v
bHMvbGlicy9saWdodC9saWJ4bF9kZXZpY2UuYw0KPiArKysgYi90b29scy9saWJzL2xpZ2h0L2xp
YnhsX2RldmljZS5jDQo+IEBAIC0yMDExLDcgKzIwMTEsNyBAQCB2b2lkICpsaWJ4bF9fZGV2aWNl
X2xpc3QobGlieGxfX2djICpnYywgY29uc3QgbGlieGxfX2RldmljZV90eXBlICpkdCwNCj4gICAg
ICAgdm9pZCAqciA9IE5VTEw7DQo+ICAgICAgIHZvaWQgKmxpc3QgPSBOVUxMOw0KPiAgICAgICB2
b2lkICppdGVtID0gTlVMTDsNCj4gLSAgICBjaGFyICpsaWJ4bF9wYXRoOw0KPiArICAgIGNoYXIg
KnBhdGg7DQo+ICAgICAgIGNoYXIgKipkaXIgPSBOVUxMOw0KPiAgICAgICB1bnNpZ25lZCBpbnQg
bmRpcnMgPSAwOw0KPiAgICAgICB1bnNpZ25lZCBpbnQgbmRldnMgPSAwOw0KPiBAQCAtMjAxOSw0
MiArMjAxOSw0NiBAQCB2b2lkICpsaWJ4bF9fZGV2aWNlX2xpc3QobGlieGxfX2djICpnYywgY29u
c3QgbGlieGxfX2RldmljZV90eXBlICpkdCwNCj4gICANCj4gICAgICAgKm51bSA9IDA7DQo+ICAg
DQo+IC0gICAgbGlieGxfcGF0aCA9IEdDU1BSSU5URigiJXMvZGV2aWNlLyVzIiwNCj4gLSAgICAg
ICAgICAgICAgICAgICAgICAgICAgIGxpYnhsX194c19saWJ4bF9wYXRoKGdjLCBkb21pZCksDQo+
IC0gICAgICAgICAgICAgICAgICAgICAgICAgICBsaWJ4bF9fZGV2aWNlX2tpbmRfdG9fc3RyaW5n
KGR0LT50eXBlKSk7DQo+IC0NCj4gLSAgICBkaXIgPSBsaWJ4bF9feHNfZGlyZWN0b3J5KGdjLCBY
QlRfTlVMTCwgbGlieGxfcGF0aCwgJm5kaXJzKTsNCj4gKyAgICBpZiAoZHQtPmdldF9wYXRoKSB7
DQo+ICsgICAgICAgIHJjID0gZHQtPmdldF9wYXRoKGdjLCBkb21pZCwgJnBhdGgpOw0KPiArICAg
ICAgICBpZiAocmMpIGdvdG8gb3V0Ow0KPiArICAgIH0gZWxzZSB7DQo+ICsgICAgICAgIHBhdGgg
PSBHQ1NQUklOVEYoIiVzL2RldmljZS8lcyIsDQo+ICsgICAgICAgICAgICAgICAgICAgICAgICAg
bGlieGxfX3hzX2xpYnhsX3BhdGgoZ2MsIGRvbWlkKSwNCj4gKyAgICAgICAgICAgICAgICAgICAg
ICAgICBsaWJ4bF9fZGV2aWNlX2tpbmRfdG9fc3RyaW5nKGR0LT50eXBlKSk7DQo+ICsgICAgfQ0K
PiAgIA0KPiAtICAgIGlmIChkaXIgJiYgbmRpcnMpIHsNCj4gLSAgICAgICAgaWYgKGR0LT5nZXRf
bnVtKSB7DQo+IC0gICAgICAgICAgICBpZiAobmRpcnMgIT0gMSkgew0KPiAtICAgICAgICAgICAg
ICAgIExPR0QoRVJST1IsIGRvbWlkLCAibXVsdGlwbGUgZW50cmllcyBpbiAlc1xuIiwgbGlieGxf
cGF0aCk7DQo+IC0gICAgICAgICAgICAgICAgcmMgPSBFUlJPUl9GQUlMOw0KPiAtICAgICAgICAg
ICAgICAgIGdvdG8gb3V0Ow0KPiAtICAgICAgICAgICAgfQ0KPiAtICAgICAgICAgICAgcmMgPSBk
dC0+Z2V0X251bShnYywgR0NTUFJJTlRGKCIlcy8lcyIsIGxpYnhsX3BhdGgsICpkaXIpLCAmbmRl
dnMpOw0KPiAtICAgICAgICAgICAgaWYgKHJjKSBnb3RvIG91dDsNCj4gLSAgICAgICAgfSBlbHNl
IHsNCj4gKyAgICBpZiAoZHQtPmdldF9udW0pIHsNCj4gKyAgICAgICAgcmMgPSBkdC0+Z2V0X251
bShnYywgcGF0aCwgJm5kZXZzKTsNCj4gKyAgICAgICAgaWYgKHJjKSBnb3RvIG91dDsNCj4gKyAg
ICB9IGVsc2Ugew0KPiArICAgICAgICBkaXIgPSBsaWJ4bF9feHNfZGlyZWN0b3J5KGdjLCBYQlRf
TlVMTCwgcGF0aCwgJm5kaXJzKTsNCj4gKyAgICAgICAgaWYgKGRpciAmJiBuZGlycykNCj4gICAg
ICAgICAgICAgICBuZGV2cyA9IG5kaXJzOw0KPiAtICAgICAgICB9DQo+IC0gICAgICAgIGxpc3Qg
PSBsaWJ4bF9fbWFsbG9jKE5PR0MsIGR0LT5kZXZfZWxlbV9zaXplICogbmRldnMpOw0KPiAtICAg
ICAgICBpdGVtID0gbGlzdDsNCj4gKyAgICB9DQo+ICAgDQo+IC0gICAgICAgIHdoaWxlICgqbnVt
IDwgbmRldnMpIHsNCj4gLSAgICAgICAgICAgIGR0LT5pbml0KGl0ZW0pOw0KPiArICAgIGlmICgh
bmRldnMpDQo+ICsgICAgICAgIHJldHVybiBOVUxMOw0KPiAgIA0KPiAtICAgICAgICAgICAgaWYg
KGR0LT5mcm9tX3hlbnN0b3JlKSB7DQo+IC0gICAgICAgICAgICAgICAgaW50IG5yID0gZHQtPmdl
dF9udW0gPyAqbnVtIDogYXRvaSgqZGlyKTsNCj4gLSAgICAgICAgICAgICAgICBjaGFyICpkZXZp
Y2VfbGlieGxfcGF0aCA9IEdDU1BSSU5URigiJXMvJXMiLCBsaWJ4bF9wYXRoLCAqZGlyKTsNCj4g
LSAgICAgICAgICAgICAgICByYyA9IGR0LT5mcm9tX3hlbnN0b3JlKGdjLCBkZXZpY2VfbGlieGxf
cGF0aCwgbnIsIGl0ZW0pOw0KPiAtICAgICAgICAgICAgICAgIGlmIChyYykgZ290byBvdXQ7DQo+
IC0gICAgICAgICAgICB9DQo+ICsgICAgbGlzdCA9IGxpYnhsX19tYWxsb2MoTk9HQywgZHQtPmRl
dl9lbGVtX3NpemUgKiBuZGV2cyk7DQo+ICsgICAgaXRlbSA9IGxpc3Q7DQo+ICAgDQo+IC0gICAg
ICAgICAgICBpdGVtID0gKHVpbnQ4X3QgKilpdGVtICsgZHQtPmRldl9lbGVtX3NpemU7DQo+IC0g
ICAgICAgICAgICArKygqbnVtKTsNCj4gLSAgICAgICAgICAgIGlmICghZHQtPmdldF9udW0pDQo+
IC0gICAgICAgICAgICAgICAgKytkaXI7DQo+ICsgICAgd2hpbGUgKCpudW0gPCBuZGV2cykgew0K
PiArICAgICAgICBkdC0+aW5pdChpdGVtKTsNCj4gKw0KPiArICAgICAgICBpZiAoZHQtPmZyb21f
eGVuc3RvcmUpIHsNCj4gKyAgICAgICAgICAgIGludCBuciA9IGR0LT5nZXRfbnVtID8gKm51bSA6
IGF0b2koKmRpcik7DQo+ICsgICAgICAgICAgICBjaGFyICpkZXZpY2VfcGF0aCA9IGR0LT5nZXRf
bnVtID8gcGF0aCA6DQo+ICsgICAgICAgICAgICAgICAgR0NTUFJJTlRGKCIlcy8lZCIsIHBhdGgs
IG5yKTsNCj4gKw0KPiArICAgICAgICAgICAgcmMgPSBkdC0+ZnJvbV94ZW5zdG9yZShnYywgZGV2
aWNlX3BhdGgsIG5yLCBpdGVtKTsNCj4gKyAgICAgICAgICAgIGlmIChyYykgZ290byBvdXQ7DQo+
ICAgICAgICAgICB9DQo+ICsNCj4gKyAgICAgICAgaXRlbSA9ICh1aW50OF90ICopaXRlbSArIGR0
LT5kZXZfZWxlbV9zaXplOw0KPiArICAgICAgICArKygqbnVtKTsNCj4gKyAgICAgICAgaWYgKCFk
dC0+Z2V0X251bSkNCj4gKyAgICAgICAgICAgICsrZGlyOw0KPiAgICAgICB9DQo+ICAgDQo+ICAg
ICAgIHIgPSBsaXN0Ow0KPiBkaWZmIC0tZ2l0IGEvdG9vbHMvbGlicy9saWdodC9saWJ4bF9pbnRl
cm5hbC5oIGIvdG9vbHMvbGlicy9saWdodC9saWJ4bF9pbnRlcm5hbC5oDQo+IGluZGV4IDNlNzBm
ZjYzOWIuLmVjZWU2MWI1NDEgMTAwNjQ0DQo+IC0tLSBhL3Rvb2xzL2xpYnMvbGlnaHQvbGlieGxf
aW50ZXJuYWwuaA0KPiArKysgYi90b29scy9saWJzL2xpZ2h0L2xpYnhsX2ludGVybmFsLmgNCj4g
QEAgLTM5MTcsNiArMzkxNyw3IEBAIHR5cGVkZWYgaW50ICgqZGV2aWNlX2RtX25lZWRlZF9mbl90
KSh2b2lkICosIHVuc2lnbmVkKTsNCj4gICB0eXBlZGVmIHZvaWQgKCpkZXZpY2VfdXBkYXRlX2Nv
bmZpZ19mbl90KShsaWJ4bF9fZ2MgKiwgdm9pZCAqLCB2b2lkICopOw0KPiAgIHR5cGVkZWYgaW50
ICgqZGV2aWNlX3VwZGF0ZV9kZXZpZF9mbl90KShsaWJ4bF9fZ2MgKiwgdWludDMyX3QsIHZvaWQg
Kik7DQo+ICAgdHlwZWRlZiBpbnQgKCpkZXZpY2VfZ2V0X251bV9mbl90KShsaWJ4bF9fZ2MgKiwg
Y29uc3QgY2hhciAqLCB1bnNpZ25lZCBpbnQgKik7DQo+ICt0eXBlZGVmIGludCAoKmRldmljZV9n
ZXRfcGF0aF9mbl90KShsaWJ4bF9fZ2MgKiwgdWludDMyX3QsIGNoYXIgKiopOw0KPiAgIHR5cGVk
ZWYgaW50ICgqZGV2aWNlX2Zyb21feGVuc3RvcmVfZm5fdCkobGlieGxfX2djICosIGNvbnN0IGNo
YXIgKiwNCj4gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGxpYnhs
X2RldmlkLCB2b2lkICopOw0KPiAgIHR5cGVkZWYgaW50ICgqZGV2aWNlX3NldF94ZW5zdG9yZV9j
b25maWdfZm5fdCkobGlieGxfX2djICosIHVpbnQzMl90LCB2b2lkICosDQo+IEBAIC0zOTQxLDYg
+3942,7 @@ struct libxl__device_type {
>       device_update_config_fn_t       update_config;
>       device_update_devid_fn_t        update_devid;
>       device_get_num_fn_t             get_num;
> +    device_get_path_fn_t            get_path;
>       device_from_xenstore_fn_t       from_xenstore;
>       device_set_xenstore_config_fn_t set_xenstore_config;
>   };
> diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
> index 2ff1c64a31..9d44b28f0a 100644
> --- a/tools/libs/light/libxl_pci.c
> +++ b/tools/libs/light/libxl_pci.c
> @@ -2393,29 +2393,13 @@ static int libxl__device_pci_get_num(libxl__gc *gc, const char *be_path,
>       return rc;
>   }
>   
> -libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid, int *num)
> +static int libxl__device_pci_get_path(libxl__gc *gc, uint32_t domid,
> +                                      char **path)
>   {
> -    GC_INIT(ctx);
> -    char *be_path;
> -    unsigned int n, i;
> -    libxl_device_pci *pcis = NULL;
> -
> -    *num = 0;
> -
> -    be_path = libxl__domain_device_backend_path(gc, 0, domid, 0,
> -                                                LIBXL__DEVICE_KIND_PCI);
> -    if (libxl__device_pci_get_num(gc, be_path, &n))
> -        goto out;
> +    *path = libxl__domain_device_backend_path(gc, 0, domid, 0,
> +                                              LIBXL__DEVICE_KIND_PCI);
>   
> -    pcis = calloc(n, sizeof(libxl_device_pci));
> -
> -    for (i = 0; i < n; i++)
> -        libxl__device_pci_from_xs_be(gc, be_path, i, pcis + i);
> -
> -    *num = n;
> -out:
> -    GC_FREE;
> -    return pcis;
> +    return 0;
>   }
>   
>   void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
> @@ -2492,10 +2476,13 @@ static int libxl_device_pci_compare(const libxl_device_pci *d1,
>       return COMPARE_PCI(d1, d2);
>   }
>   
> +LIBXL_DEFINE_DEVICE_LIST(pci)
> +
>   #define libxl__device_pci_update_devid NULL
>   
>   DEFINE_DEVICE_TYPE_STRUCT(pci, PCI,
>       .get_num = libxl__device_pci_get_num,
> +    .get_path = libxl__device_pci_get_path,
>       .from_xenstore = libxl__device_pci_from_xs_be,
>   );
>   


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 13:01:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 13:01:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41999.75523 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk5Hu-0002KW-KB; Tue, 01 Dec 2020 13:01:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41999.75523; Tue, 01 Dec 2020 13:01:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk5Hu-0002KP-H5; Tue, 01 Dec 2020 13:01:46 +0000
Received: by outflank-mailman (input) for mailman id 41999;
 Tue, 01 Dec 2020 13:01:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kk5Ht-0002KH-NV; Tue, 01 Dec 2020 13:01:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kk5Ht-00040r-Bh; Tue, 01 Dec 2020 13:01:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kk5Hs-0002I2-Uq; Tue, 01 Dec 2020 13:01:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kk5Hs-0004P2-UU; Tue, 01 Dec 2020 13:01:44 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157123-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157123: tolerable FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 01 Dec 2020 13:01:44 +0000

flight 157123 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157123/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail in 157115 pass in 157123
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 157115 pass in 157123
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10     fail pass in 157115
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 20 guest-start/debianhvm.repeat fail pass in 157115

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157115
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157115
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157115
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157115
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157115
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157115
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157115
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157115
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157115
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157115
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157115
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  3ae469af8e680df31eecd0a2ac6a83b58ad7ce53
baseline version:
 xen                  3ae469af8e680df31eecd0a2ac6a83b58ad7ce53

Last test of basis   157123  2020-12-01 02:30:56 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 13:12:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 13:12:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42008.75537 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk5SL-0003Lv-E9; Tue, 01 Dec 2020 13:12:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42008.75537; Tue, 01 Dec 2020 13:12:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk5SL-0003Lo-B2; Tue, 01 Dec 2020 13:12:33 +0000
Received: by outflank-mailman (input) for mailman id 42008;
 Tue, 01 Dec 2020 13:12:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJqf=FF=epam.com=prvs=0604985de8=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kk5SK-0003Lj-IQ
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 13:12:32 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fd21798f-ee3c-41a2-87aa-d18f6de31dac;
 Tue, 01 Dec 2020 13:12:30 +0000 (UTC)
Received: from pps.filterd (m0174683.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 0B1DAEWR023559; Tue, 1 Dec 2020 13:12:27 GMT
Received: from eur03-am5-obe.outbound.protection.outlook.com
 (mail-am5eur03lp2050.outbound.protection.outlook.com [104.47.8.50])
 by mx0b-0039f301.pphosted.com with ESMTP id 353ejmywry-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 01 Dec 2020 13:12:27 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM9PR03MB6882.eurprd03.prod.outlook.com (2603:10a6:20b:283::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.22; Tue, 1 Dec
 2020 13:12:22 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%9]) with mapi id 15.20.3611.022; Tue, 1 Dec 2020
 13:12:22 +0000
X-Inumbo-ID: fd21798f-ee3c-41a2-87aa-d18f6de31dac
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Paul Durrant <paul@xen.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Paul Durrant <pdurrant@amazon.com>, Ian Jackson <iwj@xenproject.org>,
        Wei
 Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v4 03/23] libxl: Make sure devices added by pci-attach are
 reflected in the config
Thread-Topic: [PATCH v4 03/23] libxl: Make sure devices added by pci-attach
 are reflected in the config
Thread-Index: AQHWx+OabtYotvRU7E6Ior0bjSGKzA==
Date: Tue, 1 Dec 2020 13:12:22 +0000
Message-ID: <d16e33d7-a4af-8686-c639-b4f591caf77c@epam.com>
References: <20201124080159.11912-1-paul@xen.org>
 <20201124080159.11912-4-paul@xen.org>
In-Reply-To: <20201124080159.11912-4-paul@xen.org>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-ID: <695146B236D2264784747525B41B63EB@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f06f2a16-e436-499a-83ec-08d895fabdaf
X-MS-Exchange-CrossTenant-originalarrivaltime: 01 Dec 2020 13:12:22.8768
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: isZrUscEQ6VtDS3dSaxFKyAROd2fx4vJ3a9e+SJyMbfyVIZ7vhgCnQNpjVqDbIjnfot52Df0kJs5+Ky97brTbIJgR+8Z8RF7VW0ARnk2rAYQIp5EM//NF/3TUfogi5LR
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR03MB6882

Hi, Paul!

On 11/24/20 10:01 AM, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
>
> Currently libxl__device_pci_add_xenstore() is broken in that does not
> update the domain's configuration for the first device added (which causes
> creation of the overall backend area in xenstore). This can be easily observed
> by running 'xl list -l' after adding a single device: the device will be
> missing.
>
> This patch fixes the problem and adds a DEBUG log line to allow easy
> verification that the domain configuration is being modified. Also, the use
> of libxl__device_generic_add() is dropped as it leads to a confusing situation
> where only partial backend information is written under the xenstore
> '/libxl' path. For LIBXL__DEVICE_KIND_PCI devices the only definitive
> information in xenstore is under '/local/domain/0/backend' (the '0' being
> hard-coded).
>
> NOTE: This patch includes a whitespace fix in add_pcis_done().
>
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> ---
> Cc: Ian Jackson <iwj@xenproject.org>
> Cc: Wei Liu <wl@xen.org>
> Cc: Anthony PERARD <anthony.perard@citrix.com>
>
> v2:
>   - Avoid having two completely different ways of adding devices into xenstore
>
> v3:
>   - Revert some changes from v2 as there is confusion over use of the libxl
>     and backend xenstore paths which needs to be fixed
> ---
>   tools/libs/light/libxl_pci.c | 87 +++++++++++++++++++++---------------------
>   1 file changed, 45 insertions(+), 42 deletions(-)
>
> diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
> index 9d44b28f0a..da01c77ba2 100644
> --- a/tools/libs/light/libxl_pci.c
> +++ b/tools/libs/light/libxl_pci.c
> @@ -79,39 +79,55 @@ static void libxl__device_from_pci(libxl__gc *gc, uint32_t domid,
>       device->kind = LIBXL__DEVICE_KIND_PCI;
>   }
>   
> -static int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
> -                                     const libxl_device_pci *pci,
> -                                     int num)
> +static void libxl__create_pci_backend(libxl__gc *gc, xs_transaction_t t,
> +                                      uint32_t domid, const libxl_device_pci *pci)
>   {
> -    flexarray_t *front = NULL;
> -    flexarray_t *back = NULL;
> -    libxl__device device;
> -    int i;
> +    libxl_ctx *ctx = libxl__gc_owner(gc);
> +    flexarray_t *front, *back;
> +    char *fe_path, *be_path;
> +    struct xs_permissions fe_perms[2], be_perms[2];
> +
> +    LOGD(DEBUG, domid, "Creating pci backend");
>   
>       front = flexarray_make(gc, 16, 1);
>       back = flexarray_make(gc, 16, 1);
>   
> -    LOGD(DEBUG, domid, "Creating pci backend");
> -
> -    /* add pci device */
> -    libxl__device_from_pci(gc, domid, pci, &device);
> +    fe_path = libxl__domain_device_frontend_path(gc, domid, 0,
> +                                                 LIBXL__DEVICE_KIND_PCI);
> +    be_path = libxl__domain_device_backend_path(gc, 0, domid, 0,
> +                                                LIBXL__DEVICE_KIND_PCI);
>   
> +    flexarray_append_pair(back, "frontend", fe_path);
>       flexarray_append_pair(back, "frontend-id", GCSPRINTF("%d", domid));
> -    flexarray_append_pair(back, "online", "1");
> +    flexarray_append_pair(back, "online", GCSPRINTF("%d", 1));
>       flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateInitialising));
>       flexarray_append_pair(back, "domain", libxl__domid_to_name(gc, domid));
>   
> -    for (i = 0; i < num; i++, pci++)
> -        libxl_create_pci_backend_device(gc, back, i, pci);
> +    be_perms[0].id = 0;

There was a discussion [1] on PCI on ARM, and one of the questions was that it is
possible to have the pci backend running in a late hardware domain/driver domain,
which may not be Domain-0. Do you think we can avoid using 0 here and get a clue
about the backend domain from "backend=domain-id"? If it is not set it will return
Domain-0's ID and won't break anything.

Thank you,

Oleksandr

> +    be_perms[0].perms = XS_PERM_NONE;
> +    be_perms[1].id = domid;
> +    be_perms[1].perms = XS_PERM_READ;
> +
> +    xs_rm(ctx->xsh, t, be_path);
> +    xs_mkdir(ctx->xsh, t, be_path);
> +    xs_set_permissions(ctx->xsh, t, be_path, be_perms,
> +                       ARRAY_SIZE(be_perms));
> +    libxl__xs_writev(gc, t, be_path, libxl__xs_kvs_of_flexarray(gc, back));
>   
> -    flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num));
> +    flexarray_append_pair(front, "backend", be_path);
>       flexarray_append_pair(front, "backend-id", GCSPRINTF("%d", 0));
>       flexarray_append_pair(front, "state", GCSPRINTF("%d", XenbusStateInitialising));
>   
> -    return libxl__device_generic_add(gc, XBT_NULL, &device,
> -                                     libxl__xs_kvs_of_flexarray(gc, back),
> -                                     libxl__xs_kvs_of_flexarray(gc, front),
> -                                     NULL);
> +    fe_perms[0].id = domid;
> +    fe_perms[0].perms = XS_PERM_NONE;
> +    fe_perms[1].id = 0;
> +    fe_perms[1].perms = XS_PERM_READ;
> +
> +    xs_rm(ctx->xsh, t, fe_path);
> +    xs_mkdir(ctx->xsh, t, fe_path);
> +    xs_set_permissions(ctx->xsh, t, fe_path,
> +                       fe_perms, ARRAY_SIZE(fe_perms));
> +    libxl__xs_writev(gc, t, fe_path, libxl__xs_kvs_of_flexarray(gc, front));
>   }
>   
>   static int libxl__device_pci_add_xenstore(libxl__gc *gc,
> @@ -135,8 +151,6 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
>       be_path = libxl__domain_device_backend_path(gc, 0, domid, 0,
>                                                   LIBXL__DEVICE_KIND_PCI);
>       num_devs = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/num_devs", be_path));
> -    if (!num_devs)
> -        return libxl__create_pci_backend(gc, domid, pci, 1);
>   
>       libxl_domain_type domtype = libxl__domain_type(gc, domid);
>       if (domtype == LIBXL_DOMAIN_TYPE_INVALID)
> @@ -150,17 +164,17 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
>       back = flexarray_make(gc, 16, 1);
>   
>       LOGD(DEBUG, domid, "Adding new pci device to xenstore");
> -    num = atoi(num_devs);
> +    num = num_devs ? atoi(num_devs) : 0;
>       libxl_create_pci_backend_device(gc, back, num, pci);
>       flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num + 1));
> -    if (!starting)
> +    if (num && !starting)
>           flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateReconfiguring));
>   
>       /*
>        * Stubdomin config is derived from its target domain, it doesn't have
>        * its own file.
>        */
> -    if (!is_stubdomain) {
> +    if (!is_stubdomain && !starting) {
>           lock = libxl__lock_domain_userdata(gc, domid);
>           if (!lock) {
>               rc = ERROR_LOCK_FAIL;
> @@ -170,6 +184,7 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
>           rc = libxl__get_domain_configuration(gc, domid, &d_config);
>           if (rc) goto out;
>   
> +        LOGD(DEBUG, domid, "Adding new pci device to config");
>           device_add_domain_config(gc, &d_config, &libxl__pci_devtype,
>                                    pci);
>   
> @@ -186,6 +201,10 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
>               if (rc) goto out;
>           }
>   
> +        /* This is the first device, so create the backend */
> +        if (!num_devs)
> +            libxl__create_pci_backend(gc, t, domid, pci);
> +
>           libxl__xs_writev(gc, t, be_path, libxl__xs_kvs_of_flexarray(gc, back));
>   
>           rc = libxl__xs_transaction_commit(gc, &t);
> @@ -1437,7 +1456,7 @@ out_no_irq:
>           }
>       }
>   
> -    if (!starting && !libxl_get_stubdom_id(CTX, domid))
> +    if (!libxl_get_stubdom_id(CTX, domid))
>           rc = libxl__device_pci_add_xenstore(gc, domid, pci, starting);
>       else
>           rc = 0;
> @@ -1765,28 +1784,12 @@ static void libxl__add_pcis(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
>   }
>   
>   static void add_pcis_done(libxl__egc *egc, libxl__multidev *multidev,
> -                           int rc)
> +                          int rc)
>   {
>       EGC_GC;
>       add_pcis_state *apds = CONTAINER_OF(multidev, *apds, multidev);
> -
> -    /* Convenience aliases */
> -    libxl_domain_config *d_config = apds->d_config;
> -    libxl_domid domid = apds->domid;
>       libxl__ao_device *aodev = apds->outer_aodev;
>   
> -    if (rc) goto out;
> -
> -    if (d_config->num_pcis > 0 && !libxl_get_stubdom_id(CTX, domid)) {
> -        rc = libxl__create_pci_backend(gc, domid, d_config->pcis,
> -                                       d_config->num_pcis);
> -        if (rc < 0) {
> -            LOGD(ERROR, domid, "libxl_create_pci_backend failed: %d", rc);
> -            goto out;
> -        }
> -    }
> -
> -out:
>       aodev->rc = rc;
>       aodev->callback(egc, aodev);
>   }

[1] https://lists.xenproject.org/archives/html/xen-devel/2020-10/msg01861.html


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 13:13:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 13:13:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42013.75550 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk5TV-0003Vf-TH; Tue, 01 Dec 2020 13:13:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42013.75550; Tue, 01 Dec 2020 13:13:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk5TV-0003VY-PD; Tue, 01 Dec 2020 13:13:45 +0000
Received: by outflank-mailman (input) for mailman id 42013;
 Tue, 01 Dec 2020 13:13:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJqf=FF=epam.com=prvs=0604985de8=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kk5TU-0003VS-FO
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 13:13:44 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5e6c9e84-e9c2-4ffc-9bdb-881248c158fc;
 Tue, 01 Dec 2020 13:13:43 +0000 (UTC)
Received: from pps.filterd (m0174676.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 0B1DA1Nv008856; Tue, 1 Dec 2020 13:13:42 GMT
Received: from eur04-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2058.outbound.protection.outlook.com [104.47.14.58])
 by mx0a-0039f301.pphosted.com with ESMTP id 353ybc6raj-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 01 Dec 2020 13:13:42 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB3716.eurprd03.prod.outlook.com (2603:10a6:208:4a::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.21; Tue, 1 Dec
 2020 13:13:36 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%9]) with mapi id 15.20.3611.022; Tue, 1 Dec 2020
 13:13:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e6c9e84-e9c2-4ffc-9bdb-881248c158fc
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Paul Durrant <paul@xen.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Paul Durrant <pdurrant@amazon.com>, Ian Jackson <iwj@xenproject.org>,
        Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH v4 04/23] libxl: add/recover 'rdm_policy' to/from PCI
 backend in xenstore
Thread-Topic: [PATCH v4 04/23] libxl: add/recover 'rdm_policy' to/from PCI
 backend in xenstore
Thread-Index: AQHWx+PHjW13DPVvRE2wVjnAvNUDaw==
Date: Tue, 1 Dec 2020 13:13:36 +0000
Message-ID: <d37041d4-c0f8-8347-0e2d-c50bac78eb6f@epam.com>
References: <20201124080159.11912-1-paul@xen.org>
 <20201124080159.11912-5-paul@xen.org>
In-Reply-To: <20201124080159.11912-5-paul@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
Content-Type: text/plain; charset="utf-8"
Content-ID: <5D2F2D770C743B438E95DA9FF3AE9C2C@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 20858cdd-02f4-4776-790d-08d895fae9a1
X-MS-Exchange-CrossTenant-originalarrivaltime: 01 Dec 2020 13:13:36.6334
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: n2Kpa/z/IosJFAFJuFLC28JMvU74C6NYQXaz+9Nm12xuukHDM6TTA5ofVj4SRZJDHLFRBMVyxC/NQraBIhRxYF51AB0EI4ltWfV+/EAmf4KblEh9LJKsJxBK01jGQIk2
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB3716

Hi, Paul!

On 11/24/20 10:01 AM, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
>
> Other parameters, such as 'msitranslate' and 'permissive' are dealt with
> but 'rdm_policy' appears to have been completely missed.
>
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Thank you,

Oleksandr

> ---
> Cc: Ian Jackson <iwj@xenproject.org>
> Cc: Wei Liu <wl@xen.org>
> ---
>   tools/libs/light/libxl_pci.c | 9 ++++++---
>   1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
> index da01c77ba2..50c96cbfa6 100644
> --- a/tools/libs/light/libxl_pci.c
> +++ b/tools/libs/light/libxl_pci.c
> @@ -61,9 +61,9 @@ static void libxl_create_pci_backend_device(libxl__gc *gc,
>           flexarray_append_pair(back, GCSPRINTF("vdevfn-%d", num), GCSPRINTF("%x", pci->vdevfn));
>       flexarray_append(back, GCSPRINTF("opts-%d", num));
>       flexarray_append(back,
> -              GCSPRINTF("msitranslate=%d,power_mgmt=%d,permissive=%d",
> -                             pci->msitranslate, pci->power_mgmt,
> -                             pci->permissive));
> +              GCSPRINTF("msitranslate=%d,power_mgmt=%d,permissive=%d,rdm_policy=%s",
> +                        pci->msitranslate, pci->power_mgmt,
> +                        pci->permissive, libxl_rdm_reserve_policy_to_string(pci->rdm_policy)));
>       flexarray_append_pair(back, GCSPRINTF("state-%d", num), GCSPRINTF("%d", XenbusStateInitialising));
>   }
>   
> @@ -2374,6 +2374,9 @@ static int libxl__device_pci_from_xs_be(libxl__gc *gc,
>           } else if (!strcmp(p, "permissive")) {
>               p = strtok_r(NULL, ",=", &saveptr);
>               pci->permissive = atoi(p);
> +            } else if (!strcmp(p, "rdm_policy")) {
> +                p = strtok_r(NULL, ",=", &saveptr);
> +                libxl_rdm_reserve_policy_from_string(p, &pci->rdm_policy);
>               }
>           } while ((p = strtok_r(NULL, ",=", &saveptr)) != NULL);
>       }


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 13:15:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 13:15:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42018.75562 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk5VA-0003dP-8U; Tue, 01 Dec 2020 13:15:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42018.75562; Tue, 01 Dec 2020 13:15:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk5VA-0003dI-5I; Tue, 01 Dec 2020 13:15:28 +0000
Received: by outflank-mailman (input) for mailman id 42018;
 Tue, 01 Dec 2020 13:15:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0EuF=FF=gmail.com=andr2000@srs-us1.protection.inumbo.net>)
 id 1kk5V8-0003d9-IZ
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 13:15:26 +0000
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5f7492e2-814e-4c98-acf4-a4b3dd7f5980;
 Tue, 01 Dec 2020 13:15:25 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id s30so4041561lfc.4
 for <xen-devel@lists.xenproject.org>; Tue, 01 Dec 2020 05:15:25 -0800 (PST)
Received: from [192.168.10.4] ([185.199.97.5])
 by smtp.gmail.com with ESMTPSA id d20sm198780lfe.174.2020.12.01.05.15.23
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 01 Dec 2020 05:15:23 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f7492e2-814e-4c98-acf4-a4b3dd7f5980
X-Received: by 2002:a19:5015:: with SMTP id e21mr1088305lfb.566.1606828524468;
        Tue, 01 Dec 2020 05:15:24 -0800 (PST)
Subject: Re: [PATCH v4 05/23] libxl: s/detatched/detached in libxl_pci.c
To: Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
References: <20201124080159.11912-1-paul@xen.org>
 <20201124080159.11912-6-paul@xen.org>
From: Oleksandr Andrushchenko <andr2000@gmail.com>
Message-ID: <f9f05409-a0aa-abfc-ef98-c1012d242b2a@gmail.com>
Date: Tue, 1 Dec 2020 15:15:22 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201124080159.11912-6-paul@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US

Hi, Paul!

On 11/24/20 10:01 AM, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
>
> Simple spelling correction. Purely cosmetic fix.
>
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Thank you,

Oleksandr

> ---
> Cc: Ian Jackson <iwj@xenproject.org>
> Cc: Wei Liu <wl@xen.org>
> ---
>   tools/libs/light/libxl_pci.c | 22 +++++++++++-----------
>   1 file changed, 11 insertions(+), 11 deletions(-)
>
> diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
> index 50c96cbfa6..de617e95eb 100644
> --- a/tools/libs/light/libxl_pci.c
> +++ b/tools/libs/light/libxl_pci.c
> @@ -1864,7 +1864,7 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
>       libxl__ev_qmp *qmp, const libxl__json_object *response, int rc);
>   static void pci_remove_timeout(libxl__egc *egc,
>       libxl__ev_time *ev, const struct timeval *requested_abs, int rc);
> -static void pci_remove_detatched(libxl__egc *egc,
> +static void pci_remove_detached(libxl__egc *egc,
>       pci_remove_state *prs, int rc);
>   static void pci_remove_stubdom_done(libxl__egc *egc,
>       libxl__ao_device *aodev);
> @@ -1978,7 +1978,7 @@ skip1:
>   skip_irq:
>       rc = 0;
>   out_fail:
> -    pci_remove_detatched(egc, prs, rc); /* must be last */
> +    pci_remove_detached(egc, prs, rc); /* must be last */
>   }
>   
>   static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
> @@ -2002,7 +2002,7 @@ static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
>       rc = qemu_pci_remove_xenstore(gc, domid, pci, prs->force);
>   
>   out:
> -    pci_remove_detatched(egc, prs, rc);
> +    pci_remove_detached(egc, prs, rc);
>   }
>   
>   static void pci_remove_qmp_device_del(libxl__egc *egc,
> @@ -2028,7 +2028,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
>       return;
>   
>   out:
> -    pci_remove_detatched(egc, prs, rc);
> +    pci_remove_detached(egc, prs, rc);
>   }
>   
>   static void pci_remove_qmp_device_del_cb(libxl__egc *egc,
> @@ -2051,7 +2051,7 @@ static void pci_remove_qmp_device_del_cb(libxl__egc *egc,
>       return;
>   
>   out:
> -    pci_remove_detatched(egc, prs, rc);
> +    pci_remove_detached(egc, prs, rc);
>   }
>   
>   static void pci_remove_qmp_retry_timer_cb(libxl__egc *egc, libxl__ev_time *ev,
> @@ -2067,7 +2067,7 @@ static void pci_remove_qmp_retry_timer_cb(libxl__egc *egc, libxl__ev_time *ev,
>       return;
>   
>   out:
> -    pci_remove_detatched(egc, prs, rc);
> +    pci_remove_detached(egc, prs, rc);
>   }
>   
>   static void pci_remove_qmp_query_cb(libxl__egc *egc,
> @@ -2127,7 +2127,7 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
>       }
>   
>   out:
> -    pci_remove_detatched(egc, prs, rc); /* must be last */
> +    pci_remove_detached(egc, prs, rc); /* must be last */
>   }
>   
>   static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
> @@ -2146,12 +2146,12 @@ static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
>       /* If we timed out, we might still want to keep destroying the device
>        * (when force==true), so let the next function decide what to do on
>        * error */
> -    pci_remove_detatched(egc, prs, rc);
> +    pci_remove_detached(egc, prs, rc);
>   }
>   
> -static void pci_remove_detatched(libxl__egc *egc,
> -                                 pci_remove_state *prs,
> -                                 int rc)
> +static void pci_remove_detached(libxl__egc *egc,
> +                                pci_remove_state *prs,
> +                                int rc)
>   {
>       STATE_AO_GC(prs->aodev->ao);
>       int stubdomid = 0;


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 13:41:20 2020
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
CC: Paul Durrant <pdurrant@amazon.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
Subject: Re: [PATCH v4 06/23] libxl: remove extraneous arguments to
 do_pci_remove() in libxl_pci.c
Date: Tue, 1 Dec 2020 13:41:07 +0000
Message-ID: <9e535c83-b7a8-4e7e-169f-3a769abf86c5@epam.com>
References: <20201124080159.11912-1-paul@xen.org>
 <20201124080159.11912-7-paul@xen.org>
In-Reply-To: <20201124080159.11912-7-paul@xen.org>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-ID: <C53D5757A0186449B1BC6EC8CBDC904E@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Hi, Paul!

On 11/24/20 10:01 AM, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
>
> Both 'domid' and 'pci' are available in 'pci_remove_state' so there is no
> need to also pass them as separate arguments.
>
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Thank you,

Oleksandr

> ---
> Cc: Ian Jackson <iwj@xenproject.org>
> Cc: Wei Liu <wl@xen.org>
> ---
>   tools/libs/light/libxl_pci.c | 9 ++++-----
>   1 file changed, 4 insertions(+), 5 deletions(-)
>
> diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
> index de617e95eb..41e4b2b571 100644
> --- a/tools/libs/light/libxl_pci.c
> +++ b/tools/libs/light/libxl_pci.c
> @@ -1871,14 +1871,14 @@ static void pci_remove_stubdom_done(libxl__egc *egc,
>   static void pci_remove_done(libxl__egc *egc,
>       pci_remove_state *prs, int rc);
>   
> -static void do_pci_remove(libxl__egc *egc, uint32_t domid,
> -                          libxl_device_pci *pci, int force,
> -                          pci_remove_state *prs)
> +static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
>   {
>       STATE_AO_GC(prs->aodev->ao);
>       libxl_ctx *ctx = libxl__gc_owner(gc);
>       libxl_device_pci *assigned;
> +    uint32_t domid = prs->domid;
>       libxl_domain_type type = libxl__domain_type(gc, domid);
> +    libxl_device_pci *pci = prs->pci;
>       int rc, num;
>       uint32_t domainid = domid;
>   
> @@ -2275,7 +2275,6 @@ static void device_pci_remove_common_next(libxl__egc *egc,
>       EGC_GC;
>   
>       /* Convenience aliases */
> -    libxl_domid domid = prs->domid;
>       libxl_device_pci *const pci = prs->pci;
>       libxl__ao_device *const aodev = prs->aodev;
>       const unsigned int pfunc_mask = prs->pfunc_mask;
> @@ -2293,7 +2292,7 @@ static void device_pci_remove_common_next(libxl__egc *egc,
>               } else {
>                   pci->vdevfn = orig_vdev;
>               }
> -            do_pci_remove(egc, domid, pci, prs->force, prs);
> +            do_pci_remove(egc, prs);
>               return;
>           }
>       }


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 13:42:56 2020
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
CC: Paul Durrant <pdurrant@amazon.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
Subject: Re: [PATCH v4 07/23] libxl: stop using aodev->device_config in
 libxl__device_pci_add()...
Date: Tue, 1 Dec 2020 13:42:48 +0000
Message-ID: <a0bcdae3-146b-5b8e-1cdd-f76c73bba715@epam.com>
References: <20201124080159.11912-1-paul@xen.org>
 <20201124080159.11912-8-paul@xen.org>
In-Reply-To: <20201124080159.11912-8-paul@xen.org>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-ID: <C92D3C574DC0A7499F8C340C1E3D0249@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Hi, Paul!

On 11/24/20 10:01 AM, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
>
> ... to hold a pointer to the device.
>
> There is already a 'pci' field in 'pci_add_state' so simply use that from
> the start. This also allows the 'pci' (#3) argument to be dropped from
> do_pci_add().
>
> NOTE: This patch also changes the type of the 'pci_domid' field in
>        'pci_add_state' from 'int' to 'libxl_domid' which is more appropriate
>        given what the field is used for.
>
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Thank you,

Oleksandr

> ---
> Cc: Ian Jackson <iwj@xenproject.org>
> Cc: Wei Liu <wl@xen.org>
> ---
>   tools/libs/light/libxl_pci.c | 19 +++++++------------
>   1 file changed, 7 insertions(+), 12 deletions(-)
>
> diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
> index 41e4b2b571..77edd27345 100644
> --- a/tools/libs/light/libxl_pci.c
> +++ b/tools/libs/light/libxl_pci.c
> @@ -1074,7 +1074,7 @@ typedef struct pci_add_state {
>       libxl__ev_qmp qmp;
>       libxl__ev_time timeout;
>       libxl_device_pci *pci;
> -    int pci_domid;
> +    libxl_domid pci_domid;
>   } pci_add_state;
>   
>   static void pci_add_qemu_trad_watch_state_cb(libxl__egc *egc,
> @@ -1091,7 +1091,6 @@ static void pci_add_dm_done(libxl__egc *,
>   
>   static void do_pci_add(libxl__egc *egc,
>                          libxl_domid domid,
> -                       libxl_device_pci *pci,
>                          pci_add_state *pas)
>   {
>       STATE_AO_GC(pas->aodev->ao);
> @@ -1101,7 +1100,6 @@ static void do_pci_add(libxl__egc *egc,
>       /* init pci_add_state */
>       libxl__xswait_init(&pas->xswait);
>       libxl__ev_qmp_init(&pas->qmp);
> -    pas->pci = pci;
>       pas->pci_domid = domid;
>       libxl__ev_time_init(&pas->timeout);
>   
> @@ -1564,13 +1562,10 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
>       int stubdomid = 0;
>       pci_add_state *pas;
>   
> -    /* Store *pci to be used by callbacks */
> -    aodev->device_config = pci;
> -    aodev->device_type = &libxl__pci_devtype;
> -
>       GCNEW(pas);
>       pas->aodev = aodev;
>       pas->domid = domid;
> +    pas->pci = pci;
>       pas->starting = starting;
>       pas->callback = device_pci_add_stubdom_done;
>   
> @@ -1624,9 +1619,10 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
>           GCNEW(pci_s);
>           libxl_device_pci_init(pci_s);
>           libxl_device_pci_copy(CTX, pci_s, pci);
> +        pas->pci = pci_s;
>           pas->callback = device_pci_add_stubdom_wait;
>   
> -        do_pci_add(egc, stubdomid, pci_s, pas); /* must be last */
> +        do_pci_add(egc, stubdomid, pas); /* must be last */
>           return;
>       }
>   
> @@ -1681,9 +1677,8 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
>       int i;
>   
>       /* Convenience aliases */
> -    libxl__ao_device *aodev = pas->aodev;
>       libxl_domid domid = pas->domid;
> -    libxl_device_pci *pci = aodev->device_config;
> +    libxl_device_pci *pci = pas->pci;
>   
>       if (rc) goto out;
>   
> @@ -1718,7 +1713,7 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
>                   pci->vdevfn = orig_vdev;
>               }
>               pas->callback = device_pci_add_done;
> -            do_pci_add(egc, domid, pci, pas); /* must be last */
> +            do_pci_add(egc, domid, pas); /* must be last */
>               return;
>           }
>       }
> @@ -1734,7 +1729,7 @@ static void device_pci_add_done(libxl__egc *egc,
>       EGC_GC;
>       libxl__ao_device *aodev = pas->aodev;
>       libxl_domid domid = pas->domid;
> -    libxl_device_pci *pci = aodev->device_config;
> +    libxl_device_pci *pci = pas->pci;
>   
>       if (rc) {
>           LOGD(ERROR, domid,


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 13:49:07 2020
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
CC: Paul Durrant <pdurrant@amazon.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
Subject: Re: [PATCH v4 08/23] libxl: generalise 'driver_path' xenstore access
 functions in libxl_pci.c
Date: Tue, 1 Dec 2020 13:48:46 +0000
Message-ID: <3584ee50-914b-0652-4cce-0facb8092b18@epam.com>
References: <20201124080159.11912-1-paul@xen.org>
 <20201124080159.11912-9-paul@xen.org>
In-Reply-To: <20201124080159.11912-9-paul@xen.org>
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0

Hi, Paul!

On 11/24/20 10:01 AM, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
>
> For the purposes of re-binding a device to its previous driver
> libxl__device_pci_assignable_add() writes the driver path into xenstore.
> This path is then read back in libxl__device_pci_assignable_remove().
>
> The functions that support this writing to and reading from xenstore are
> currently dedicated for this purpose and hence the node name 'driver_path'
> is hard-coded. This patch generalizes these utility functions and passes
> 'driver_path' as an argument. Subsequent patches will invoke them to
> access other nodes.
>
> NOTE: Because functions will have a broader use (other than storing a
>        driver path in lieu of pciback) the base xenstore path is also
>        changed from '/libxl/pciback' to '/libxl/pci'.
>
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Thank you,

Oleksandr

> ---
> Cc: Ian Jackson <iwj@xenproject.org>
> Cc: Wei Liu <wl@xen.org>
> ---
>   tools/libs/light/libxl_pci.c | 66 ++++++++++++++++++++---------------------
>   1 file changed, 32 insertions(+), 34 deletions(-)
>
> diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
> index 77edd27345..a5d5d2e78b 100644
> --- a/tools/libs/light/libxl_pci.c
> +++ b/tools/libs/light/libxl_pci.c
> @@ -737,48 +737,46 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
>       return 0;
>   }
>   
> -#define PCIBACK_INFO_PATH "/libxl/pciback"
> +#define PCI_INFO_PATH "/libxl/pci"
>   
> -static void pci_assignable_driver_path_write(libxl__gc *gc,
> -                                             libxl_device_pci *pci,
> -                                             char *driver_path)
> +static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
> +                              const char *node)
>   {
> -    char *path;
> +    return node ?
> +        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
> +                  pci->domain, pci->bus, pci->dev, pci->func,
> +                  node) :
> +        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
> +                  pci->domain, pci->bus, pci->dev, pci->func);
> +}
> +
> +
> +static void pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
> +                              const char *node, const char *val)
> +{
> +    char *path = pci_info_xs_path(gc, pci, node);
>   
> -    path = GCSPRINTF(PCIBACK_INFO_PATH"/"PCI_BDF_XSPATH"/driver_path",
> -                     pci->domain,
> -                     pci->bus,
> -                     pci->dev,
> -                     pci->func);
> -    if ( libxl__xs_printf(gc, XBT_NULL, path, "%s", driver_path) < 0 ) {
> -        LOGE(WARN, "Write of %s to node %s failed.", driver_path, path);
> +    if ( libxl__xs_printf(gc, XBT_NULL, path, "%s", val) < 0 ) {
> +        LOGE(WARN, "Write of %s to node %s failed.", val, path);
>       }
>   }
>   
> -static char * pci_assignable_driver_path_read(libxl__gc *gc,
> -                                              libxl_device_pci *pci)
> +static char *pci_info_xs_read(libxl__gc *gc, libxl_device_pci *pci,
> +                              const char *node)
>   {
> -    return libxl__xs_read(gc, XBT_NULL,
> -                          GCSPRINTF(
> -                           PCIBACK_INFO_PATH "/" PCI_BDF_XSPATH "/driver_path",
> -                           pci->domain,
> -                           pci->bus,
> -                           pci->dev,
> -                           pci->func));
> +    char *path = pci_info_xs_path(gc, pci, node);
> +
> +    return libxl__xs_read(gc, XBT_NULL, path);
>   }
>   
> -static void pci_assignable_driver_path_remove(libxl__gc *gc,
> -                                              libxl_device_pci *pci)
> +static void pci_info_xs_remove(libxl__gc *gc, libxl_device_pci *pci,
> +                               const char *node)
>   {
> +    char *path = pci_info_xs_path(gc, pci, node);
>       libxl_ctx *ctx = libxl__gc_owner(gc);
>   
>       /* Remove the xenstore entry */
> -    xs_rm(ctx->xsh, XBT_NULL,
> -          GCSPRINTF(PCIBACK_INFO_PATH "/" PCI_BDF_XSPATH,
> -                    pci->domain,
> -                    pci->bus,
> -                    pci->dev,
> -                    pci->func) );
> +    xs_rm(ctx->xsh, XBT_NULL, path);
>   }
>   
>   static int libxl__device_pci_assignable_add(libxl__gc *gc,
> @@ -824,9 +822,9 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
>       /* Store driver_path for rebinding to dom0 */
>       if ( rebind ) {
>           if ( driver_path ) {
> -            pci_assignable_driver_path_write(gc, pci, driver_path);
> +            pci_info_xs_write(gc, pci, "driver_path", driver_path);
>           } else if ( (driver_path =
> -                     pci_assignable_driver_path_read(gc, pci)) != NULL ) {
> +                     pci_info_xs_read(gc, pci, "driver_path")) != NULL ) {
>               LOG(INFO, PCI_BDF" not bound to a driver, will be rebound to %s",
>                   dom, bus, dev, func, driver_path);
>           } else {
> @@ -834,7 +832,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
>               dom, bus, dev, func);
>           }
>       } else {
> -        pci_assignable_driver_path_remove(gc, pci);
> +        pci_info_xs_remove(gc, pci, "driver_path");
>       }
>   
>       if ( pciback_dev_assign(gc, pci) ) {
> @@ -884,7 +882,7 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
>       }
>   
>       /* Rebind if necessary */
> -    driver_path = pci_assignable_driver_path_read(gc, pci);
> +    driver_path = pci_info_xs_read(gc, pci, "driver_path");
>   
>       if ( driver_path ) {
>           if ( rebind ) {
> @@ -897,7 +895,7 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
>                   return -1;
>               }
>   
> -            pci_assignable_driver_path_remove(gc, pci);
> +            pci_info_xs_remove(gc, pci, "driver_path");
>           }
>       } else {
>           if ( rebind ) {


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 13:51:56 2020
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
CC: Paul Durrant <pdurrant@amazon.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
Subject: Re: [PATCH v4 09/23] libxl: remove unnecessary check from
 libxl__device_pci_add()
Date: Tue, 1 Dec 2020 13:51:44 +0000
Message-ID: <42fea377-76e7-2315-0868-96d758d9e4fa@epam.com>
References: <20201124080159.11912-1-paul@xen.org>
 <20201124080159.11912-10-paul@xen.org>
In-Reply-To: <20201124080159.11912-10-paul@xen.org>
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0

Hi, Paul!

On 11/24/20 10:01 AM, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
>
> The code currently checks explicitly whether the device is already assigned,
> but this is actually unnecessary as assigned devices do not form part of
> the list returned by libxl_device_pci_assignable_list() and hence the
> libxl_pci_assignable() test would have already failed.
>
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Thank you,

Oleksandr

> ---
> Cc: Ian Jackson <iwj@xenproject.org>
> Cc: Wei Liu <wl@xen.org>
> ---
>   tools/libs/light/libxl_pci.c | 16 +---------------
>   1 file changed, 1 insertion(+), 15 deletions(-)
>
> diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
> index a5d5d2e78b..ec101f255f 100644
> --- a/tools/libs/light/libxl_pci.c
> +++ b/tools/libs/light/libxl_pci.c
> @@ -1555,8 +1555,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
>   {
>       STATE_AO_GC(aodev->ao);
>       libxl_ctx *ctx = libxl__gc_owner(gc);
> -    libxl_device_pci *assigned;
> -    int num_assigned, rc;
> +    int rc;
>       int stubdomid = 0;
>       pci_add_state *pas;
>   
> @@ -1595,19 +1594,6 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
>           goto out;
>       }
>   
> -    rc = get_all_assigned_devices(gc, &assigned, &num_assigned);
> -    if ( rc ) {
> -        LOGD(ERROR, domid,
> -             "cannot determine if device is assigned, refusing to continue");
> -        goto out;
> -    }
> -    if ( is_pci_in_array(assigned, num_assigned, pci->domain,
> -                         pci->bus, pci->dev, pci->func) ) {
> -        LOGD(ERROR, domid, "PCI device already attached to a domain");
> -        rc = ERROR_FAIL;
> -        goto out;
> -    }
> -
>       libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
>   
>       stubdomid = libxl_get_stubdom_id(ctx, domid);


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 14:04:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 14:04:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42074.75626 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk6Go-0000U9-Gz; Tue, 01 Dec 2020 14:04:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42074.75626; Tue, 01 Dec 2020 14:04:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk6Go-0000U2-D5; Tue, 01 Dec 2020 14:04:42 +0000
Received: by outflank-mailman (input) for mailman id 42074;
 Tue, 01 Dec 2020 14:04:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dt7S=FF=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kk6Gn-0000Tx-Hx
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 14:04:41 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe1e::626])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 74c52e1e-8ff9-4d3d-9c46-3eba004867b5;
 Tue, 01 Dec 2020 14:04:39 +0000 (UTC)
Received: from DB9PR02CA0020.eurprd02.prod.outlook.com (2603:10a6:10:1d9::25)
 by AM9PR08MB6147.eurprd08.prod.outlook.com (2603:10a6:20b:2da::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.20; Tue, 1 Dec
 2020 14:04:36 +0000
Received: from DB5EUR03FT037.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:1d9:cafe::a) by DB9PR02CA0020.outlook.office365.com
 (2603:10a6:10:1d9::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.17 via Frontend
 Transport; Tue, 1 Dec 2020 14:04:36 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT037.mail.protection.outlook.com (10.152.20.215) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3611.26 via Frontend Transport; Tue, 1 Dec 2020 14:04:36 +0000
Received: ("Tessian outbound eeda57fffe7b:v71");
 Tue, 01 Dec 2020 14:04:36 +0000
Received: from 8cb0a397dafd.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 56DEE55A-0762-42EE-8043-32751756AA09.1; 
 Tue, 01 Dec 2020 14:04:31 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 8cb0a397dafd.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 01 Dec 2020 14:04:31 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBAPR08MB5589.eurprd08.prod.outlook.com (2603:10a6:10:1a2::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.24; Tue, 1 Dec
 2020 14:04:30 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3632.017; Tue, 1 Dec 2020
 14:04:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74c52e1e-8ff9-4d3d-9c46-3eba004867b5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=75OssYQ1ohlGTH2OCu/0MBJ4agx4rEEyL3D4EWsCiNc=;
 b=JQS1zbSm+4vL8+n5qfFRmnmPhGWux7ZtkeNzJfEWDx7gp3vvgT4MaCGaYG+9ox+bzNQF7s/p9DJxwNzy5rnw6Zqi7HXGkbjztKisejuoJRci4wXmKL9HPl6WUemDZ/9sNtjjvm7RBS+cdsrf72nLwlClAuMRPNT/S9cKklF/lU4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 5a91f0418d968cdf
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Vv6ogArxsPrzUL2g/SwkAUl43CjV1g5q8MyHTG++2o27G96kQb418yA30kryeICDei7DtFov9jw3iO2IkoAGhwnHT0co4D8cSTdDBgsyqJKvgNxOfI20B+kYhOGSjxRCJoU0vtOE+k3D9Vd2WQ+X5dfGpqYKkLx9UJJG6yWvEhtkYUt6EOiRBaAU7N6t2LcNurMjqqE+RDJTr+7d0kGyRWCVta+hBxBbkmORPK2oufMUyRAdnE/24dzwhafv4ss1/Kgu9TmAc1YEnEIT5Ie5waKuYNqH3aau5v//3N+wwNEXnkf79N/OI+TldO9TpnYwBhsIxvThSEa7vtSR776fJQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=75OssYQ1ohlGTH2OCu/0MBJ4agx4rEEyL3D4EWsCiNc=;
 b=h9dD4ZlUd7iDNrNuKsnnR9fFt5G0k+gAFDobGUwoaqR2U0AJfskUwbpR1AtKH5Gy3RtH5ITl6AOExa6wrDefj5a13pNQC3e3AlBUofRNeyRCxgbPZQA15baFxok9m+v5bwcC09euGxdT+Ozgntry8Grdy6Yo1pyZ5mZimYxx/PbOEwZmOQQhZgrH6g7VgaYNTl1wrLj1nDJc3QP1iLhUyEQ95X7yXTWtHpWBBj1v9sDh3+913FEp7sFt99bDbk5u5UsTZKSKC4dUA6cuoUXhv8fHrIyySSn/UvWNvoV1RE52pl04Y+pQrIO7HizNYr6CVNRNYu1tQg2qk8VrJxygrQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=75OssYQ1ohlGTH2OCu/0MBJ4agx4rEEyL3D4EWsCiNc=;
 b=JQS1zbSm+4vL8+n5qfFRmnmPhGWux7ZtkeNzJfEWDx7gp3vvgT4MaCGaYG+9ox+bzNQF7s/p9DJxwNzy5rnw6Zqi7HXGkbjztKisejuoJRci4wXmKL9HPl6WUemDZ/9sNtjjvm7RBS+cdsrf72nLwlClAuMRPNT/S9cKklF/lU4=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 6/7] xen/arm: Add CP10 exception support to handle VMFR
Thread-Topic: [PATCH v2 6/7] xen/arm: Add CP10 exception support to handle
 VMFR
Thread-Index: AQHWxyRimsJuJ4bKO0qQ+1JBnbZRw6nhI6EAgAEj24A=
Date: Tue, 1 Dec 2020 14:04:29 +0000
Message-ID: <82B51F6A-5D3E-48FB-89DE-F7F8B5407064@arm.com>
References: <cover.1606742184.git.bertrand.marquis@arm.com>
 <58ff66d0daf610dfe8e09516302cb0c0fe17fc59.1606742184.git.bertrand.marquis@arm.com>
 <87h7p67f52.fsf@epam.com>
In-Reply-To: <87h7p67f52.fsf@epam.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 7fa5cacd-804c-4159-a14a-08d89602098b
x-ms-traffictypediagnostic: DBAPR08MB5589:|AM9PR08MB6147:
X-Microsoft-Antispam-PRVS:
	<AM9PR08MB614763FD0354616DE84871A59DF40@AM9PR08MB6147.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 iZuumAOXCvfJeLCeiDY/IL4QL9m/G1gJnYweguO9FXcMFxFE+p0m93vXUhW8qOg/qbnB/YytpjUrvdcZ3ss9DE5PApxrg5Tk6LkfTr3VF6m1IsVbbdN1d6iK2A4rlSUwEw3LZ18rPLDqKGEdJ7fVWGTpaMU0GhjZHL9oUErjEDrOqQse8OWQ5JhpBkup9G/w3qTWv8EkXkzbYRz6IqzVNLNxurDLkAEPxGuuoOnx4HDM0427Qr0RTOh/1u6JzPOX9co80OSpaLuiEfe9o83YnR9KBE1PNrQVF+qzcgK+jiMzJicunOcNc3PyHTu3VaUL
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(396003)(346002)(366004)(39860400002)(136003)(186003)(33656002)(83380400001)(6916009)(2616005)(71200400001)(478600001)(4326008)(316002)(6512007)(6506007)(6486002)(26005)(53546011)(54906003)(8676002)(91956017)(86362001)(8936002)(64756008)(66556008)(66446008)(76116006)(36756003)(66946007)(66476007)(5660300002)(2906002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?us-ascii?Q?xdtUqwEhZYZfY/YFkvhf2DONLt8EDGC1a7g0Cz6z2qeXUMqyggePSbmnCGm2?=
 =?us-ascii?Q?eBQAYicHEXsCtcG9TUQvUZO5NI3ge5JslmIywNpTy2MtsOma2tEyGrL+yKaz?=
 =?us-ascii?Q?ZXssyIN8drMTZBhTQWMbu7uZHw6+SAFEvs8qc28/1qnFT2K8i0/wEUWTZx/S?=
 =?us-ascii?Q?uUJ2Ay2hvVoIjoEW4nB2JkCtLVDT7bux2X09of/6XYUYjqei1gl7hzDD4k1A?=
 =?us-ascii?Q?Ia4/vNv0lcEhcoC/M3GAorr5Foa+NpEKXgbrkrnbSzArR0iL9WCoaeuTUt+j?=
 =?us-ascii?Q?DYxopGjvy9Lm7dwFt/n8Yy75ZnZxjrpe/jEDtuIxmUsJVoVKAJ/pzbG3H+V+?=
 =?us-ascii?Q?tlp8SqPevTtjzPHxthYTjvqK9HKgSwt3toT3dFx53bdN2Hd6+6+/eeD5PxFD?=
 =?us-ascii?Q?iiDMYRb7XZtrQVDUshhNqTtkvIv/C4q17lZfO43JKaVlc16wHTMqd0RVZpP3?=
 =?us-ascii?Q?MntEt0hJwMQQcluI0d86hPBoh6ltIgnn0XvUG7j5NIf5WCc3f2qTTDiWTR+a?=
 =?us-ascii?Q?s2WCA++fjjih8w5cuhfeuMigFKjrM0TC+AMvk7+UNOy0zNltA0Vt5PUQKNgD?=
 =?us-ascii?Q?BNiuHscWzflSyv3G57XajYoI+tjlQ+YmG1GhdrR2R/wLaypsa8Q+BzxSnMme?=
 =?us-ascii?Q?WNjmfodrD2+OBLOFtma6uT31pPh2TW+YurNVmlzoBnA6vXoqykz9Yivx7Ygk?=
 =?us-ascii?Q?QzsVAk87KGxavdXo+MHsGxlipeM4yTP+RMKuAsaiSmvtEO7/ul3ITUd1I4/2?=
 =?us-ascii?Q?t+l66NXuCSjwCCuUkFHTTxVBQAHJF4KKLKvKUokDDkvYUjFFH5PXunl3RZyD?=
 =?us-ascii?Q?sH51Tbhsa9/fICAlqwnTQ24QuCtD1/sAr05GpHzBymTIpxcR0f8nq9U0F7XC?=
 =?us-ascii?Q?c8ADdh/AVZdfIdLOil0/zq3eIK9/q6ZVtzidD3vUqRKYRd5InOrssVMWyIae?=
 =?us-ascii?Q?arX8U427C+yU2QNsioGm2YJNYzbFtcjwoN/GJUYLskftXR7wIqtH6kwOY+od?=
 =?us-ascii?Q?Ilqg?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <8600584C81FFA74A8D69FD65D47BEA9F@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5589
Original-Authentication-Results: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	936aec29-18a7-4937-0e41-08d896020597
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	H65eRrgrSsEErC713GTvZVn5Kytbwsq1VBCD0rdHt/30mCizkYfaMyLNkYkeuoLwlVR3GWBC+/Bv57oybMTVC4ZCYJYdXHUbXUAFs/V0koUz9tLNcy3TKpHmDZRCQpGDf391u5F7YbGdvDW/JBOj9cFP01r7DRwLQHC1F0hIkf988kIB4FY1XONrcPpi5p8rxpoG5lYoKZRqGw5D8jrd5LAaJdP+zWCVOT7zHC323or6Aa8Exjoltvogeo7KhKPKgOC5VKfOYQzIdPJJ3pRd0HlXJgLRTfzKqLfJL5opzlfEVWs4/YlIz9L8++KeLL8M22K/D1TBkS950pmZ5YZLHCpA0+ohB/uTLjTEVklCombdFUITbDWQCmkQk+xEiVO2mOOFFs3BrrMyrCpg+smE3Q==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(39860400002)(136003)(376002)(396003)(46966005)(8936002)(316002)(47076004)(81166007)(356005)(86362001)(82740400003)(54906003)(6486002)(36756003)(6512007)(2906002)(53546011)(6506007)(8676002)(478600001)(26005)(336012)(70206006)(70586007)(2616005)(33656002)(83380400001)(82310400003)(5660300002)(4326008)(6862004)(186003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Dec 2020 14:04:36.7051
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7fa5cacd-804c-4159-a14a-08d89602098b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6147

Hi,

> On 30 Nov 2020, at 20:39, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> wrote:
> 
> 
> Bertrand Marquis writes:
> 
>> Add support for cp10 exceptions decoding to be able to emulate the
>> values for VMFR0 and VMFR1 when TID3 bit of HSR is activated.
>> This is required for aarch32 guests accessing VMFR0 and VMFR1 using vmrs
>> and vmsr instructions.
> 
> is it VMFR or MVFR? According to the reference manual, it is MVFR. Also,
> you are missing MVFR2.
> 

Thanks for spotting the typo; it is indeed MVFR (I will fix that).

Regarding MVFR2, it is indeed missing from the 32-bit implementation; the
assumption that it was not available on Armv7 is wrong.

Thanks a lot for catching this; I will fix and test it and send a v3 patch.

Regards
Bertrand

>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>> Changes in V2: rebase
>> ---
>> xen/arch/arm/traps.c             |  5 +++++
>> xen/arch/arm/vcpreg.c            | 38 ++++++++++++++++++++++++++++++++
>> xen/include/asm-arm/perfc_defn.h |  1 +
>> xen/include/asm-arm/traps.h      |  1 +
>> 4 files changed, 45 insertions(+)
>>
>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>> index 22bd1bd4c6..28d9d64558 100644
>> --- a/xen/arch/arm/traps.c
>> +++ b/xen/arch/arm/traps.c
>> @@ -2097,6 +2097,11 @@ void do_trap_guest_sync(struct cpu_user_regs *regs)
>>         perfc_incr(trap_cp14_dbg);
>>         do_cp14_dbg(regs, hsr);
>>         break;
>> +    case HSR_EC_CP10:
>> +        GUEST_BUG_ON(!psr_mode_is_32bit(regs));
>> +        perfc_incr(trap_cp10);
>> +        do_cp10(regs, hsr);
>> +        break;
>>     case HSR_EC_CP:
>>         GUEST_BUG_ON(!psr_mode_is_32bit(regs));
>>         perfc_incr(trap_cp);
>> diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c
>> index d0c6406f34..9d6a36ca5d 100644
>> --- a/xen/arch/arm/vcpreg.c
>> +++ b/xen/arch/arm/vcpreg.c
>> @@ -634,6 +634,44 @@ void do_cp14_dbg(struct cpu_user_regs *regs, const union hsr hsr)
>>     inject_undef_exception(regs, hsr);
>> }
>>
>> +void do_cp10(struct cpu_user_regs *regs, const union hsr hsr)
>> +{
>> +    const struct hsr_cp32 cp32 = hsr.cp32;
>> +    int regidx = cp32.reg;
>> +
>> +    if ( !check_conditional_instr(regs, hsr) )
>> +    {
>> +        advance_pc(regs, hsr);
>> +        return;
>> +    }
>> +
>> +    switch ( hsr.bits & HSR_CP32_REGS_MASK )
>> +    {
>> +    /*
>> +     * HSR.TID3 is trapping access to MVFR register used to identify the
>> +     * VFP/Simd using VMRS/VMSR instructions.
>> +     * In this case MVFR2 is not supported as the instruction does not support
>> +     * it.
>> +     * Exception encoding is using MRC/MCR standard with the reg field in Crn
>> +     * as are declared MVFR0 and MVFR1 in cpregs.h
>> +     */
>> +    GENERATE_TID3_INFO(MVFR0, mvfr, 0)
>> +    GENERATE_TID3_INFO(MVFR1, mvfr, 1)
>> +
>> +    default:
>> +        gdprintk(XENLOG_ERR,
>> +                 "%s p10, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
>> +                 cp32.read ? "mrc" : "mcr",
>> +                 cp32.op1, cp32.reg, cp32.crn, cp32.crm, cp32.op2, regs->pc);
>> +        gdprintk(XENLOG_ERR, "unhandled 32-bit CP10 access %#x\n",
>> +                 hsr.bits & HSR_CP32_REGS_MASK);
>> +        inject_undef_exception(regs, hsr);
>> +        return;
>> +    }
>> +
>> +    advance_pc(regs, hsr);
>> +}
>> +
>> void do_cp(struct cpu_user_regs *regs, const union hsr hsr)
>> {
>>     const struct hsr_cp cp = hsr.cp;
>> diff --git a/xen/include/asm-arm/perfc_defn.h b/xen/include/asm-arm/perfc_defn.h
>> index 6a83185163..31f071222b 100644
>> --- a/xen/include/asm-arm/perfc_defn.h
>> +++ b/xen/include/asm-arm/perfc_defn.h
>> @@ -11,6 +11,7 @@ PERFCOUNTER(trap_cp15_64,  "trap: cp15 64-bit access")
>> PERFCOUNTER(trap_cp14_32,  "trap: cp14 32-bit access")
>> PERFCOUNTER(trap_cp14_64,  "trap: cp14 64-bit access")
>> PERFCOUNTER(trap_cp14_dbg, "trap: cp14 dbg access")
>> +PERFCOUNTER(trap_cp10,     "trap: cp10 access")
>> PERFCOUNTER(trap_cp,       "trap: cp access")
>> PERFCOUNTER(trap_smc32,    "trap: 32-bit smc")
>> PERFCOUNTER(trap_hvc32,    "trap: 32-bit hvc")
>> diff --git a/xen/include/asm-arm/traps.h b/xen/include/asm-arm/traps.h
>> index 997c37884e..c4a3d0fb1b 100644
>> --- a/xen/include/asm-arm/traps.h
>> +++ b/xen/include/asm-arm/traps.h
>> @@ -62,6 +62,7 @@ void do_cp15_64(struct cpu_user_regs *regs, const union hsr hsr);
>> void do_cp14_32(struct cpu_user_regs *regs, const union hsr hsr);
>> void do_cp14_64(struct cpu_user_regs *regs, const union hsr hsr);
>> void do_cp14_dbg(struct cpu_user_regs *regs, const union hsr hsr);
>> +void do_cp10(struct cpu_user_regs *regs, const union hsr hsr);
>> void do_cp(struct cpu_user_regs *regs, const union hsr hsr);
>>
>> /* SMCCC handling */
> 
> 
> --
> Volodymyr Babchuk at EPAM



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 14:08:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 14:08:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42080.75638 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk6KJ-0000f8-1C; Tue, 01 Dec 2020 14:08:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42080.75638; Tue, 01 Dec 2020 14:08:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk6KI-0000f1-UH; Tue, 01 Dec 2020 14:08:18 +0000
Received: by outflank-mailman (input) for mailman id 42080;
 Tue, 01 Dec 2020 14:08:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=882D=FF=oracle.com=dan.carpenter@srs-us1.protection.inumbo.net>)
 id 1kk6KH-0000ew-Kp
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 14:08:17 +0000
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4ba12c40-de6e-434e-9ca4-44ed2eeb5e43;
 Tue, 01 Dec 2020 14:08:16 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0B1DtHP4002730;
 Tue, 1 Dec 2020 14:07:42 GMT
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by userp2130.oracle.com with ESMTP id 353dyqjnq9-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Tue, 01 Dec 2020 14:07:42 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0B1Du5R6003844;
 Tue, 1 Dec 2020 14:05:41 GMT
Received: from pps.reinject (localhost [127.0.0.1])
 by aserp3020.oracle.com with ESMTP id 3540ey0hqt-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Tue, 01 Dec 2020 14:05:41 +0000
Received: from aserp3020.oracle.com (aserp3020.oracle.com [127.0.0.1])
 by pps.reinject (8.16.0.36/8.16.0.36) with SMTP id 0B1E1twO021849;
 Tue, 1 Dec 2020 14:05:41 GMT
Received: from userv0121.oracle.com (userv0121.oracle.com [156.151.31.72])
 by aserp3020.oracle.com with ESMTP id 3540ey0hp9-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 01 Dec 2020 14:05:40 +0000
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
 by userv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 0B1E5MSD015816;
 Tue, 1 Dec 2020 14:05:23 GMT
Received: from kadam (/102.36.221.92) by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Tue, 01 Dec 2020 06:05:21 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4ba12c40-de6e-434e-9ca4-44ed2eeb5e43
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=date : from : to : cc
 : subject : message-id : references : mime-version : content-type :
 in-reply-to; s=corp-2020-01-29;
 bh=pTmh0fAx41hLIvYDyOekZfZD8/4rzzXxS/TqqdPExwg=;
 b=vLWTsjjic+1p3i9uxybiHNVi42dcBKcTvA6AfpFTEr0sUNmVqp9yRxLgg7kIK5qGUN0J
 oofgNIgToJJBxsPbFd+Am4pBxk6JPpjzRqo19VpEwymshbhsnRALOcfpiO21XOp3kmxr
 lgOLJrUqyUshUH+0ojxyxXIg1LFHdnj2t2Bklh5y68LsxqqiRxoSPtOWNIoWynIJF754
 5bVkuWqCYCEFs8tq7LyeOV+zI3/vr+tI5ZKBss7pqjTfnpXNJRaPwJeD8R6AGcywA+BA
 lerBH0PS/hUi55aURtflNdJ2juhGcYo6ht8r8gtlOn38U9XkoMGDxbASv62fD9VykISM Rg== 
Date: Tue, 1 Dec 2020 17:04:49 +0300
From: Dan Carpenter <dan.carpenter@oracle.com>
To: Kees Cook <keescook@chromium.org>
Cc: Jakub Kicinski <kuba@kernel.org>, alsa-devel@alsa-project.org,
        linux-atm-general@lists.sourceforge.net,
        reiserfs-devel@vger.kernel.org, linux-iio@vger.kernel.org,
        linux-wireless@vger.kernel.org, linux-fbdev@vger.kernel.org,
        dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
        Nathan Chancellor <natechancellor@gmail.com>,
        linux-ide@vger.kernel.org, dm-devel@redhat.com,
        keyrings@vger.kernel.org, linux-mtd@lists.infradead.org,
        GR-everest-linux-l2@marvell.com, wcn36xx@lists.infradead.org,
        samba-technical@lists.samba.org, linux-i3c@lists.infradead.org,
        linux1394-devel@lists.sourceforge.net, linux-afs@lists.infradead.org,
        usb-storage@lists.one-eyed-alien.net, drbd-dev@tron.linbit.com,
        devel@driverdev.osuosl.org, linux-cifs@vger.kernel.org,
        rds-devel@oss.oracle.com, Nick Desaulniers <ndesaulniers@google.com>,
        linux-scsi@vger.kernel.org, linux-rdma@vger.kernel.org,
        oss-drivers@netronome.com, bridge@lists.linux-foundation.org,
        linux-security-module@vger.kernel.org, amd-gfx@lists.freedesktop.org,
        linux-stm32@st-md-mailman.stormreply.com, cluster-devel@redhat.com,
        linux-acpi@vger.kernel.org, coreteam@netfilter.org,
        intel-wired-lan@lists.osuosl.org, linux-input@vger.kernel.org,
        Miguel Ojeda <ojeda@kernel.org>, tipc-discussion@lists.sourceforge.net,
        linux-ext4@vger.kernel.org, linux-media@vger.kernel.org,
        linux-watchdog@vger.kernel.org, selinux@vger.kernel.org,
        linux-arm-msm@vger.kernel.org, intel-gfx@lists.freedesktop.org,
        linux-geode@lists.infradead.org, linux-can@vger.kernel.org,
        linux-block@vger.kernel.org, linux-gpio@vger.kernel.org,
        op-tee@lists.trustedfirmware.org, linux-mediatek@lists.infradead.org,
        xen-devel@lists.xenproject.org, nouveau@lists.freedesktop.org,
        linux-hams@vger.kernel.org, ceph-devel@vger.kernel.org,
        virtualization@lists.linux-foundation.org,
        linux-arm-kernel@lists.infradead.org, linux-hwmon@vger.kernel.org,
        x86@kernel.org, linux-nfs@vger.kernel.org,
        GR-Linux-NIC-Dev@marvell.com, linux-mm@kvack.org,
        netdev@vger.kernel.org, linux-decnet-user@lists.sourceforge.net,
        linux-mmc@vger.kernel.org,
        "Gustavo A. R. Silva" <gustavoars@kernel.org>,
        linux-renesas-soc@vger.kernel.org, linux-sctp@vger.kernel.org,
        linux-usb@vger.kernel.org, netfilter-devel@vger.kernel.org,
        linux-crypto@vger.kernel.org, patches@opensource.cirrus.com,
        Joe Perches <joe@perches.com>, linux-integrity@vger.kernel.org,
        target-devel@vger.kernel.org, linux-hardening@vger.kernel.org
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
Message-ID: <20201201140449.GG2767@kadam>
References: <cover.1605896059.git.gustavoars@kernel.org>
 <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011201129.B13FDB3C@keescook>
 <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <202011220816.8B6591A@keescook>
User-Agent: Mutt/1.9.4 (2018-02-28)
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9821 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0 bulkscore=0
 clxscore=1011 mlxscore=0 spamscore=0 priorityscore=1501 mlxlogscore=999
 suspectscore=0 lowpriorityscore=0 phishscore=0 adultscore=0
 impostorscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2012010090

On Sun, Nov 22, 2020 at 08:17:03AM -0800, Kees Cook wrote:
> On Fri, Nov 20, 2020 at 11:51:42AM -0800, Jakub Kicinski wrote:
> > On Fri, 20 Nov 2020 11:30:40 -0800 Kees Cook wrote:
> > > On Fri, Nov 20, 2020 at 10:53:44AM -0800, Jakub Kicinski wrote:
> > > > On Fri, 20 Nov 2020 12:21:39 -0600 Gustavo A. R. Silva wrote:  
> > > > > This series aims to fix almost all remaining fall-through warnings in
> > > > > order to enable -Wimplicit-fallthrough for Clang.
> > > > > 
> > > > > In preparation to enable -Wimplicit-fallthrough for Clang, explicitly
> > > > > add multiple break/goto/return/fallthrough statements instead of just
> > > > > letting the code fall through to the next case.
> > > > > 
> > > > > Notice that in order to enable -Wimplicit-fallthrough for Clang, this
> > > > > change[1] is meant to be reverted at some point. So, this patch helps
> > > > > to move in that direction.
> > > > > 
> > > > > Something important to mention is that there is currently a discrepancy
> > > > > between GCC and Clang when dealing with switch fall-through to empty case
> > > > > statements or to cases that only contain a break/continue/return
> > > > > statement[2][3][4].  
> > > > 
> > > > Are we sure we want to make this change? Was it discussed before?
> > > > 
> > > > Are there any bugs Clang's puritanical definition of fallthrough helped
> > > > find?
> > > > 
> > > > IMVHO compiler warnings are supposed to warn about issues that could
> > > > be bugs. Falling through to default: break; can hardly be a bug?!  
> > > 
> > > It's certainly a place where the intent is not always clear. I think
> > > this makes all the cases unambiguous, and doesn't impact the machine
> > > code, since the compiler will happily optimize away any behavioral
> > > redundancy.
> > 
> > If none of the 140 patches here fix a real bug, and there is no change
> > to machine code then it sounds to me like a W=2 kind of a warning.
> 
> FWIW, this series has found at least one bug so far:
> https://lore.kernel.org/lkml/CAFCwf11izHF=g1mGry1fE5kvFFFrxzhPSM6qKAO8gxSp=Kr_CQ@mail.gmail.com/

This is a fallthrough to a return, not to a break.  That should
trigger a warning.  A fallthrough to a break should not generate a
warning.

The bug we're trying to fix is a "missing break statement", but if the
result of the bug is that we hit a break statement anyway, then we're
just talking about style.  GCC should limit itself to warning about
potentially buggy code.

regards,
dan carpenter


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 14:09:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 14:09:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42085.75649 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk6Lh-0000ns-G0; Tue, 01 Dec 2020 14:09:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42085.75649; Tue, 01 Dec 2020 14:09:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk6Lh-0000nl-D7; Tue, 01 Dec 2020 14:09:45 +0000
Received: by outflank-mailman (input) for mailman id 42085;
 Tue, 01 Dec 2020 14:09:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+Ars=FF=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kk6Lf-0000nf-HB
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 14:09:43 +0000
Received: from mail-wr1-x435.google.com (unknown [2a00:1450:4864:20::435])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6fd1b97d-6e24-4419-93f8-1ae1bd6afc8a;
 Tue, 01 Dec 2020 14:09:42 +0000 (UTC)
Received: by mail-wr1-x435.google.com with SMTP id 23so2812252wrc.8
 for <xen-devel@lists.xenproject.org>; Tue, 01 Dec 2020 06:09:42 -0800 (PST)
Received: from CBGR90WXYV0 (54-240-197-239.amazon.com. [54.240.197.239])
 by smtp.gmail.com with ESMTPSA id o74sm3610645wme.36.2020.12.01.06.09.40
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 01 Dec 2020 06:09:41 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6fd1b97d-6e24-4419-93f8-1ae1bd6afc8a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=TibSohplpJ5bFhrdsDRpd6q4R0TtXnEWtnbnXccrNtc=;
        b=oZlRvrP/o7tYG45nojNsCLtg9EOB6w7+VFnkK88EMJ6vKLnrqED5LXQ67qlEN/EqmQ
         qezRV5KMFDFezbhvact8gNFlm1WjZZs2x2dLa0F6QzKqYI7F00gCrrsk7ZbN1rfIQgG4
         XznHef4WDXBmYVzXEM6NJyZIEZokTbCypIXEvjGAvucD/nHmzXCWWd47Lo+9EloE5qNn
         0FEaNc2J47rGmaRpB0S31Ocnip8SvPlTkt5zAkP9ZSnoGQuBnq807qlfaIB1dt1sL7iO
         YDGRvHeaMP+Ho4gyJR7LHQzSIyWq/nPXb7VISASDQPrvLz4X97cgEkqHRU2BPhvcvJ7Y
         LURA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=TibSohplpJ5bFhrdsDRpd6q4R0TtXnEWtnbnXccrNtc=;
        b=pQO/MUOGGRwAo/88nKXMQaW0jUAIgPnViyUa7k563Jxct1u60mhPeiPPj24HMk5olU
         +lE4JoCHc2aWxSe48EPcvqqptaoXNBm8C2t8rySxRrhDsAH8Xw8KAM7PFuzGlZntsh/d
         qmbYn8Kg+QTP0LUopp/q0bR8p7LbKcmLurW02ze1KxY1t4tyojDVzPlsgwwmPd7xn15G
         kiDM6QVkYsm5csx3oPoJwYKZsueV/OlBV3i8C/3Tmr4Ilfp2cEKOqDszcxRPux7zqIcp
         rlQ95g9DfkA7Xfb59jV0x6y2sh+d5lAQLS/edZEoQ7nInkUGMQ8zRBL5DogPFkWIHh6t
         6jyg==
X-Gm-Message-State: AOAM530H+uavclMPQtrpastGBWKuI/a5QDJbzsrfGXsFWHcnYyI0g7Zf
	vqkt5BkCJRKJOhZy25KU2Ug=
X-Google-Smtp-Source: ABdhPJw/5t85SlZjv/mL74VhJ/rX9I+zWSSRDG0/wOD9MvNKVbm9GNNM0GGGaVXPbMMHQG2AMbnl6g==
X-Received: by 2002:a5d:4046:: with SMTP id w6mr4234417wrp.51.1606831781822;
        Tue, 01 Dec 2020 06:09:41 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Wei Liu'" <wl@xen.org>
Cc: <xen-devel@lists.xenproject.org>
References: <20201124190744.11343-1-paul@xen.org>
In-Reply-To: <20201124190744.11343-1-paul@xen.org>
Subject: RE: [EXTERNAL] [PATCH v3 00/13] viridian: add support for ExProcessorMasks...
Date: Tue, 1 Dec 2020 14:09:40 -0000
Message-ID: <001b01d6c7eb$9c6d0240$d54706c0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQIUz4Ho6g2LZ3vHphp6MrUkQEjyX6lmN4sw

Wei,

  I'll likely send a v4 to address the style nit Jan picked up in patch #1
but the rest should be stable now. Could you have a look over it?

  Thanks,

    Paul

> -----Original Message-----
> From: Paul Durrant <paul@xen.org>
> Sent: 24 November 2020 19:08
> To: xen-devel@lists.xenproject.org
> Cc: Durrant, Paul <pdurrant@amazon.co.uk>
> Subject: [EXTERNAL] [PATCH v3 00/13] viridian: add support for ExProcessorMasks...
>
> From: Paul Durrant <pdurrant@amazon.com>
>
> ... plus one miscellaneous cleanup patch after introducing sizeof_field().
>
> Paul Durrant (13):
>   viridian: don't blindly write to 32-bit registers if 'mode' is invalid
>   viridian: move flush hypercall implementation into separate function
>   viridian: move IPI hypercall implementation into separate function
>   viridian: introduce a per-cpu hypercall_vpmask and accessor
>     functions...
>   viridian: use hypercall_vpmask in hvcall_ipi()
>   viridian: use softirq batching in hvcall_ipi()
>   xen/include: import sizeof_field() macro from Linux stddef.h
>   viridian: add ExProcessorMasks variants of the flush hypercalls
>   viridian: add ExProcessorMasks variant of the IPI hypercall
>   viridian: log initial invocation of each type of hypercall
>   viridian: add a new '_HVMPV_ex_processor_masks' bit into
>     HVM_PARAM_VIRIDIAN...
>   xl / libxl: add 'ex_processor_mask' into
>     'libxl_viridian_enlightenment'
>   x86: replace open-coded occurrences of sizeof_field()...
>
>  docs/man/xl.cfg.5.pod.in             |   8 +
>  tools/include/libxl.h                |   7 +
>  tools/libs/light/libxl_types.idl     |   1 +
>  tools/libs/light/libxl_x86.c         |   3 +
>  xen/arch/x86/cpu/vpmu_intel.c        |   4 +-
>  xen/arch/x86/hvm/viridian/viridian.c | 601 +++++++++++++++++++++------
>  xen/arch/x86/setup.c                 |  16 +-
>  xen/include/asm-x86/hvm/viridian.h   |  10 +
>  xen/include/public/hvm/params.h      |   7 +-
>  xen/include/xen/compiler.h           |   8 +
>  10 files changed, 532 insertions(+), 133 deletions(-)
>
> --
> 2.20.1
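For reference, the sizeof_field() macro that patch #7 imports follows the
definition in Linux's stddef.h: the size of a struct member obtained from
the type alone, with no object needed. A minimal sketch (the example
struct below is made up for illustration):

```c
/* Essentially the Linux <linux/stddef.h> definition: size of MEMBER
 * within TYPE, computed from a null pointer cast, so no instance of
 * the struct is required. sizeof() never evaluates its operand, so
 * the null dereference is never executed. */
#define sizeof_field(TYPE, MEMBER) sizeof(((TYPE *)0)->MEMBER)

/* Hypothetical struct, for illustration only. */
struct example {
	unsigned long mask;
	char name[16];
};

/* e.g. sizeof_field(struct example, name) evaluates to 16 */
```

This replaces open-coded patterns such as sizeof(((struct example *)0)->name)
scattered through the tree.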




From xen-devel-bounces@lists.xenproject.org Tue Dec 01 14:10:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 14:10:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42086.75661 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk6Lz-00016h-Ok; Tue, 01 Dec 2020 14:10:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42086.75661; Tue, 01 Dec 2020 14:10:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk6Lz-00016a-Ld; Tue, 01 Dec 2020 14:10:03 +0000
Received: by outflank-mailman (input) for mailman id 42086;
 Tue, 01 Dec 2020 14:10:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=882D=FF=oracle.com=dan.carpenter@srs-us1.protection.inumbo.net>)
 id 1kk6Ly-0000xC-4E
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 14:10:02 +0000
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8859bd21-e391-4934-a85c-12e5ed24e718;
 Tue, 01 Dec 2020 14:10:00 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0B1Dt2KT119141;
 Tue, 1 Dec 2020 14:09:31 GMT
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by aserp2120.oracle.com with ESMTP id 353egkjmec-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Tue, 01 Dec 2020 14:09:31 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0B1Du5d4003800;
 Tue, 1 Dec 2020 14:09:30 GMT
Received: from pps.reinject (localhost [127.0.0.1])
 by aserp3020.oracle.com with ESMTP id 3540ey0nww-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Tue, 01 Dec 2020 14:09:30 +0000
Received: from aserp3020.oracle.com (aserp3020.oracle.com [127.0.0.1])
 by pps.reinject (8.16.0.36/8.16.0.36) with SMTP id 0B1E8IaF039759;
 Tue, 1 Dec 2020 14:09:29 GMT
Received: from userv0121.oracle.com (userv0121.oracle.com [156.151.31.72])
 by aserp3020.oracle.com with ESMTP id 3540ey0nvu-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 01 Dec 2020 14:09:29 +0000
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
 by userv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 0B1E9NOp018011;
 Tue, 1 Dec 2020 14:09:24 GMT
Received: from kadam (/102.36.221.92) by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Tue, 01 Dec 2020 06:09:23 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8859bd21-e391-4934-a85c-12e5ed24e718
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=date : from : to : cc
 : subject : message-id : references : mime-version : content-type :
 in-reply-to; s=corp-2020-01-29;
 bh=CVxJrJvszPiYRoVvo8QMoaANjq0rT1w3VAIVepGqitY=;
 b=Q8Pq03hifJp4hxcZYQuiJvbKTKZMKP2mLNTS1aHW6hAxlDpObu2xNteC6Yj/L36I1kmY
 WSfMJZMdSk8bN2J9+UmYiF9MWo5KvhVT37I7ctRf5KKEocNW4v8PFVCDeiU3k6pn404k
 Igra+V15xZpvhIpriUlAo6fac2mI985JVt4CqtU5vfkzzkfANqieFePUQJBzO6eRndXt
 r4RGB+aUeJDKgHUaSwwmUD5k/B0oZ1T+RFudNU+Upzz4yF4kbfdExpcaXwIzXbiJCqdd
 NFMbLYQpnYBf2dU4wooUhfeXNwTuWm/ESbR6RxVNN2eeS+ZfeOwZNtuQUXq4Dn72Z2fB vg== 
Date: Tue, 1 Dec 2020 17:08:49 +0300
From: Dan Carpenter <dan.carpenter@oracle.com>
To: Nick Desaulniers <ndesaulniers@google.com>
Cc: Kees Cook <keescook@chromium.org>, alsa-devel@alsa-project.org,
        linux-atm-general@lists.sourceforge.net,
        reiserfs-devel@vger.kernel.org, linux-iio@vger.kernel.org,
        linux-wireless <linux-wireless@vger.kernel.org>,
        linux-fbdev@vger.kernel.org,
        dri-devel <dri-devel@lists.freedesktop.org>,
        LKML <linux-kernel@vger.kernel.org>,
        Nathan Chancellor <natechancellor@gmail.com>,
        linux-ide@vger.kernel.org, dm-devel@redhat.com,
        keyrings@vger.kernel.org, linux-mtd@lists.infradead.org,
        GR-everest-linux-l2@marvell.com, wcn36xx@lists.infradead.org,
        samba-technical@lists.samba.org, linux-i3c@lists.infradead.org,
        linux1394-devel@lists.sourceforge.net, linux-afs@lists.infradead.org,
        usb-storage@lists.one-eyed-alien.net, drbd-dev@tron.linbit.com,
        devel@driverdev.osuosl.org, linux-cifs@vger.kernel.org,
        rds-devel@oss.oracle.com, linux-scsi@vger.kernel.org,
        linux-rdma@vger.kernel.org, oss-drivers@netronome.com,
        bridge@lists.linux-foundation.org,
        linux-security-module@vger.kernel.org,
        amd-gfx list <amd-gfx@lists.freedesktop.org>,
        linux-stm32@st-md-mailman.stormreply.com, cluster-devel@redhat.com,
        linux-acpi@vger.kernel.org, coreteam@netfilter.org,
        intel-wired-lan@lists.osuosl.org, linux-input@vger.kernel.org,
        Miguel Ojeda <ojeda@kernel.org>, Jakub Kicinski <kuba@kernel.org>,
        linux-ext4@vger.kernel.org, linux-media@vger.kernel.org,
        linux-watchdog@vger.kernel.org, selinux@vger.kernel.org,
        linux-arm-msm <linux-arm-msm@vger.kernel.org>,
        intel-gfx@lists.freedesktop.org, linux-geode@lists.infradead.org,
        linux-can@vger.kernel.org, linux-block@vger.kernel.org,
        linux-gpio@vger.kernel.org, op-tee@lists.trustedfirmware.org,
        linux-mediatek@lists.infradead.org, xen-devel@lists.xenproject.org,
        nouveau@lists.freedesktop.org, linux-hams@vger.kernel.org,
        ceph-devel@vger.kernel.org, virtualization@lists.linux-foundation.org,
        Linux ARM <linux-arm-kernel@lists.infradead.org>,
        linux-hwmon@vger.kernel.org,
        "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>,
        linux-nfs@vger.kernel.org, GR-Linux-NIC-Dev@marvell.com,
        tipc-discussion@lists.sourceforge.net,
        Linux Memory Management List <linux-mm@kvack.org>,
        Network Development <netdev@vger.kernel.org>,
        linux-decnet-user@lists.sourceforge.net, linux-mmc@vger.kernel.org,
        "Gustavo A. R. Silva" <gustavoars@kernel.org>,
        Linux-Renesas <linux-renesas-soc@vger.kernel.org>,
        linux-sctp@vger.kernel.org, linux-usb@vger.kernel.org,
        netfilter-devel@vger.kernel.org,
        "open list:HARDWARE RANDOM NUMBER GENERATOR CORE" <linux-crypto@vger.kernel.org>,
        patches@opensource.cirrus.com, Joe Perches <joe@perches.com>,
        linux-integrity@vger.kernel.org, target-devel@vger.kernel.org,
        linux-hardening@vger.kernel.org
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
Message-ID: <20201201140849.GH2767@kadam>
References: <cover.1605896059.git.gustavoars@kernel.org>
 <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011201129.B13FDB3C@keescook>
 <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook>
 <CAKwvOdntVfXj2WRR5n6Kw7BfG7FdKpTeHeh5nPu5AzwVMhOHTg@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAKwvOdntVfXj2WRR5n6Kw7BfG7FdKpTeHeh5nPu5AzwVMhOHTg@mail.gmail.com>
User-Agent: Mutt/1.9.4 (2018-02-28)
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9821 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0 bulkscore=0 suspectscore=0
 phishscore=0 mlxlogscore=999 lowpriorityscore=0 malwarescore=0
 priorityscore=1501 spamscore=0 impostorscore=0 clxscore=1015 adultscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2012010090

On Mon, Nov 23, 2020 at 05:32:51PM -0800, Nick Desaulniers wrote:
> On Sun, Nov 22, 2020 at 8:17 AM Kees Cook <keescook@chromium.org> wrote:
> >
> > On Fri, Nov 20, 2020 at 11:51:42AM -0800, Jakub Kicinski wrote:
> > > If none of the 140 patches here fix a real bug, and there is no change
> > > to machine code then it sounds to me like a W=2 kind of a warning.
> >
> > FWIW, this series has found at least one bug so far:
> > https://lore.kernel.org/lkml/CAFCwf11izHF=g1mGry1fE5kvFFFrxzhPSM6qKAO8gxSp=Kr_CQ@mail.gmail.com/
> 
> So looks like the bulk of these are:
> switch (x) {
>   case 0:
>     ++x;
>   default:
>     break;
> }

This should not generate a warning.

> 
> I have a patch that fixes those up for clang:
> https://reviews.llvm.org/D91895
> 
> There's 3 other cases that don't quite match between GCC and Clang I
> observe in the kernel:
> switch (x) {
>   case 0:
>     ++x;
>   default:
>     goto y;
> }
> y:;

This should generate a warning.

> 
> switch (x) {
>   case 0:
>     ++x;
>   default:
>     return;
> }

Warn for this.


> 
> switch (x) {
>   case 0:
>     ++x;
>   default:
>     ;
> }

Don't warn for this.

If adding a break statement changes the flow of the code then warn about
potentially missing break statements, but if it doesn't change anything
then don't warn about it.
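A hypothetical pair of functions illustrating that criterion (the
functions and names below are made up for this example, not taken from
the kernel):

```c
/* Benign fall-through: inserting a break after ++x would change
 * nothing, because the default arm only breaks. By the criterion
 * above, no warning is wanted here. */
static int benign(int x)
{
	switch (x) {
	case 0:
		++x;
		/* fall through */
	default:
		break;
	}
	return x;
}

/* Suspicious fall-through: inserting a break after ++x would make
 * the function return 1 instead of -1 for x == 0, so the flow
 * changes and a warning is useful. */
static int suspicious(int x)
{
	switch (x) {
	case 0:
		++x;
		/* fall through */
	default:
		return -1;
	}
}
```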

regards,
dan carpenter


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 14:18:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 14:18:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42100.75674 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk6U7-0001w1-Kz; Tue, 01 Dec 2020 14:18:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42100.75674; Tue, 01 Dec 2020 14:18:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk6U7-0001vs-HV; Tue, 01 Dec 2020 14:18:27 +0000
Received: by outflank-mailman (input) for mailman id 42100;
 Tue, 01 Dec 2020 14:18:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJqf=FF=epam.com=prvs=0604985de8=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kk6U6-0001vk-0D
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 14:18:26 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f734b131-177d-42cc-8890-f277ea91884f;
 Tue, 01 Dec 2020 14:18:24 +0000 (UTC)
Received: from pps.filterd (m0174677.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 0B1EBZfB024471; Tue, 1 Dec 2020 14:18:22 GMT
Received: from eur05-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2173.outbound.protection.outlook.com [104.47.17.173])
 by mx0a-0039f301.pphosted.com with ESMTP id 353fhjr418-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 01 Dec 2020 14:18:21 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB3874.eurprd03.prod.outlook.com (2603:10a6:208:6c::25)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.31; Tue, 1 Dec
 2020 14:18:17 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%9]) with mapi id 15.20.3611.022; Tue, 1 Dec 2020
 14:18:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f734b131-177d-42cc-8890-f277ea91884f
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WipC/7zznIMiOAuMwPaLkitD7IpuRpyiDqjj1BNNgVX1Pb4Wz+kdrqLVb1B9NM+fvt2bO7sG5WDSeIfTmEFW9IbG+YRNWCFW6nkMnfQl0JP+vP4xVzGxwNObpx/ryXaeb+Xt+qO9KUBSUmmU/3pv6fnn5KcuJ1P+ayvw+XhDQhRwasB72ouKlKIRIM1yleFgsJ6vb4/nMh9d5LvvP7L0/9oNgXtINUINGpR8NCX716MmmxwEqQRjp/4XrA5n6MeVY7aaiE7dMio2JyinVexfVXJl1PpR/iglUNmZHEriL3t0M0Ucgpdg4YGxR4VWdDockoP9zCI8ZZJEgnJx4Wu9Nw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0NaIsqinmCMX1FeIJvJbHdHhQW1pBkALWWvPB7Cu5sc=;
 b=Y0ngvSwikYK7jrbXd4CF3NELYrv2n6scS/K/EtptDttxKzWAmq9w1KqwjuS6VVrNIG3DH29DRs+qwXGwezT7q6SapLduGW8KeXvNbJTl7kwERSLE4/By9re/t/1Kr7iLhyBujZC2ffxOU+vHNJUneptwuGqljqigDWtm72IRqrgI2VSRl4Q/epmzgx01xnvIG4dN/UTv06KXXatEHpkW3Jhpwx9XiLOIOSCbeA1C/OaozFTaZjHbWUTC7faQF/up/sDyW6harQiBFAkd3cmsapsoyhlq7CwRBqSfOR1ahlGmAz1qrxUxZDdsDP0iD7EsrBzDflgLviTg72N3xJyNvw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0NaIsqinmCMX1FeIJvJbHdHhQW1pBkALWWvPB7Cu5sc=;
 b=f1FT86UZ6gzwBHx8Jq5hOrwECMDSnWP4RUXaWINA3n9dPVceLpq7cUpVClkNTKAllfSCvV5Zkbgbw0g3z2CJt7fnSLxMAdYtQMqz7URQvQD4Ex18giUnfnL0ncbFRpovoKGB16B6eTn3A2HIuj0QhefC+xSeX5gj5YLjgAaUulCI/r+UkxqXFE83Z5iNTvF0a2JTkuJLOY8aBfSSeph/YsIxj9JMyGffo3aVrdD0J5a25cJfYaXarza7uRr4MKXLiPaDB83/7PNrqW9R480ykBK01nNz9v6o1g9jlBTTxxf5MlnTFkclSz+iVzpoMlXenS+6IiC97Jq7Qo+xs2Gj6w==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Paul Durrant <paul@xen.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Paul Durrant <pdurrant@amazon.com>, Ian Jackson <iwj@xenproject.org>,
        Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH v4 10/23] libxl: remove get_all_assigned_devices() from
 libxl_pci.c
Thread-Topic: [PATCH v4 10/23] libxl: remove get_all_assigned_devices() from
 libxl_pci.c
Thread-Index: AQHWx+zPGDBM2XC+TUWCOgoUsRCv0g==
Date: Tue, 1 Dec 2020 14:18:17 +0000
Message-ID: <4f587c5a-bfd4-d6fc-a2b6-5868b316c94d@epam.com>
References: <20201124080159.11912-1-paul@xen.org>
 <20201124080159.11912-11-paul@xen.org>
In-Reply-To: <20201124080159.11912-11-paul@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 138fef8a-a25a-4709-ea03-08d89603f2bf
x-ms-traffictypediagnostic: AM0PR03MB3874:
x-microsoft-antispam-prvs: 
 <AM0PR03MB3874523FFF6A193DDBEEDB9FE7F40@AM0PR03MB3874.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:741;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 tggUhNKSj6zYmwlVTQDWsCJ7oTeFsL9jv0uy0DK8+okZfAozQkfgc5HWIUYv/keroBHjnQGMUb0GKz5ecyzmaRmIbHyp5TrAKICQwVzMxiMoRlGxsoog12w2ISvctQFsYLRssyuax7dFSXLl/+7iCAwumkVHV4rrBVYSprbca5avIsMhdOMwHbZwM6N106PPuaKjYLlZWrIyPnGkDfIAFIA4Vom/LNT1rjh8WyvwXJE87o0FVdSiqm7ME0FBAOl4EBp+FmshM2+ST3EAMqYwoyLRpGDoyOKEk+oN0txOTk31YkdPbYRldYp7CXSuDcHoPRHeNxAERZkreyJ5yVuONP9pUV4o93wT6IEGZqs8JEdJtPPJVk9aLvx09ELsKTJY
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(396003)(39860400002)(366004)(376002)(346002)(186003)(6506007)(6512007)(31686004)(478600001)(2616005)(36756003)(83380400001)(31696002)(5660300002)(71200400001)(8676002)(53546011)(86362001)(66476007)(4326008)(6486002)(26005)(2906002)(76116006)(110136005)(54906003)(66946007)(316002)(66556008)(8936002)(64756008)(66446008)(21314003);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 =?utf-8?B?aXlTYkdmcWJzeGRVenlibmxRRDlnN3JzbjdJVVNjaEFISUtwMUlIRVNnbTM4?=
 =?utf-8?B?VHFLTTBJRVFhS0lONzdyQXhZc1JCdkRoRUMzcHpDTjRBa1p2WDJTbjd1R0RI?=
 =?utf-8?B?YTB5bjN1VldmZW5qYmhyMEsxUUFYbUtIaWd0WWw1VnZ1Rk9TREVzTHU4M0tJ?=
 =?utf-8?B?ODU4L1ArdG41bEhKM2NReEdqVlY4UFJoY1RPeDhTSXBvTzB3S0lQV3hZbjg3?=
 =?utf-8?B?M1BFZXlsOGExV0cvcnV1Wk1nNDFpc1RXNTcraUJnZ2VSWVdvdUtJVWY1b0Rj?=
 =?utf-8?B?WWtmREJ2SzRxYmJIRENublJDYm8rNXF3eTVJUTVnREZ6YTdIdFAvUVY5bUNk?=
 =?utf-8?B?TzJGalVhQXNXZDZnQ2hrWkMrWGZmc1pKT2ZjNXZISlgrM3lvQkdMeGZVK2c1?=
 =?utf-8?B?K0JIdzQyVHhDbTJHZEMyaU9Odk9UaE90aTM1aFR0YWNDNEdXNTJHem9wVTcz?=
 =?utf-8?B?M204ZFdFOGZCb3JLWHZGWkxldEpVZWZhLzViVmZFc1dHZ3EvNHIzL25xb3lo?=
 =?utf-8?B?MHBLemJDWjNuclVITTJNS3VXb1JGUEkwdnBwYXJaRWRwN3NJNDVTWVhUT2x6?=
 =?utf-8?B?ditiUzdVUzk3VnBwWFRPdFBORUVFL1JwcmFzbFl2UUlIRkZQQStvK2xJakZ4?=
 =?utf-8?B?eTNpRGFRVmhDZTZlNzNUV210Znp2NktvVHlrR1l6c1VpR3haT3d6WnV2QXYr?=
 =?utf-8?B?Zy9qUm1Zd0VlSmtSVE5tSUpCWU1MZGNWeGhXNmlNaGFBSjA3UUlRaXFESFkv?=
 =?utf-8?B?L0Y2WVl2MTYrK1pITGYza01pb3Y3ZHRrcVA2dm84THRTSWtON1c2QXprVU4z?=
 =?utf-8?B?RkV4VWJUYi9Td2NUWHJuTDNYUThtakpKZGRDV25VOXVTTWxoRFRsWXlzSSt5?=
 =?utf-8?B?Rk95V1Y1M2ZrUkdHc0RqeVVQeWRabFhLa3A3cFZENSt2SE1YbEZXYnJmTTlX?=
 =?utf-8?B?SzlRNk5GamZXQnNGRnZKcUVEYU5QSEszTGVtQ0VxOVhPMHhaejNJVE4yUUY0?=
 =?utf-8?B?VmtuMmNkR21iRmo0ancwaGcvTEtuQ2FMTi9sOFJWaE5rcG55OXp5aDhnMzZL?=
 =?utf-8?B?dCtOdkM0U0w3Rzd5d0pyMkVUbVJlSlZ0UmRRL0w4NTVTNittK01qRHNJeExM?=
 =?utf-8?B?bXcxR0gwNFJPK054bW1TMjZzcWtzYkEyVFA0dkE5Y1JPN3hPWEZ3TEZQV1M1?=
 =?utf-8?B?OFBycVdoK3V6Z0xnZFRlTWJOb2dIUkdiZ1lldTJsU3hiSE96OFIwdXNPenhE?=
 =?utf-8?B?L2lBcTFKS00venE5K25xZ3FUemFpT3NsL2dwSXdZdXpwTnRiOU04QXEyUU11?=
 =?utf-8?Q?WwNoqwUcQ2vzik4Zpg+ie3ma0gR4kVycQn?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <58FE9B0E82E5B34FB4BED45C14900713@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 138fef8a-a25a-4709-ea03-08d89603f2bf
X-MS-Exchange-CrossTenant-originalarrivaltime: 01 Dec 2020 14:18:17.2486
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: BQGcbVVVm64QDwMoSW8S6GCI7obO4mt9CpQwdMHhC+cPfRd+EdjdMNXHrJB2Ez/1Xzw941Xyk+JQ7xCfaReRj+iJfp4+4ld0t1OApcPQyOFKsAKQTz5zj21B0AHHPfr7
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB3874
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-12-01_05:2020-11-30,2020-12-01 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 lowpriorityscore=0
 impostorscore=0 malwarescore=0 priorityscore=1501 bulkscore=0
 mlxlogscore=999 phishscore=0 suspectscore=0 spamscore=0 clxscore=1015
 adultscore=0 mlxscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2012010091

SGksIFBhdWwhDQoNCk9uIDExLzI0LzIwIDEwOjAxIEFNLCBQYXVsIER1cnJhbnQgd3JvdGU6DQo+
IEZyb206IFBhdWwgRHVycmFudCA8cGR1cnJhbnRAYW1hem9uLmNvbT4NCj4NCj4gVXNlIG9mIHRo
aXMgZnVuY3Rpb24gaXMgYSB2ZXJ5IGluZWZmaWNpZW50IHdheSB0byBjaGVjayB3aGV0aGVyIGEg
ZGV2aWNlDQo+IGhhcyBhbHJlYWR5IGJlZW4gYXNzaWduZWQuDQo+DQo+IFRoaXMgcGF0Y2ggYWRk
cyBjb2RlIHRoYXQgc2F2ZXMgdGhlIGRvbWFpbiBpZCBpbiB4ZW5zdG9yZSBhdCB0aGUgcG9pbnQg
b2YNCj4gYXNzaWdubWVudCwgYW5kIHJlbW92ZXMgaXQgYWdhaW4gd2hlbiB0aGUgZGV2aWNlIGlk
IGRlLWFzc2lnbmVkIChvciB0aGUNCj4gZG9tYWluIGlzIGRlc3Ryb3llZCkuIEl0IGlzIHRoZW4g
c3RyYWlnaHRmb3J3YXJkIHRvIGNoZWNrIHdoZXRoZXIgYSBkZXZpY2UNCj4gaGFzIGJlZW4gYXNz
aWduZWQgYnkgY2hlY2tpbmcgd2hldGhlciBhIGRldmljZSBoYXMgYSBzYXZlZCBkb21haW4gaWQu
DQo+DQo+IE5PVEU6IFRvIGZhY2lsaXRhdGUgdGhlIHhlbnN0b3JlIGNoZWNrIGl0IGlzIG5lY2Vz
c2FyeSB0byBtb3ZlIHRoZQ0KPiAgICAgICAgcGNpX2luZm9feHNfcmVhZCgpIGVhcmxpZXIgaW4g
bGlieGxfcGNpLmMuIFRvIGtlZXAgcmVsYXRlZCBmdW5jdGlvbnMNCj4gICAgICAgIHRvZ2V0aGVy
LCB0aGUgcmVzdCBvZiB0aGUgcGNpX2luZm9feHNfWFhYKCkgZnVuY3Rpb25zIGFyZSBtb3ZlZCB0
b28uDQo+DQo+IFNpZ25lZC1vZmYtYnk6IFBhdWwgRHVycmFudCA8cGR1cnJhbnRAYW1hem9uLmNv
bT4NClJldmlld2VkLWJ5OiBPbGVrc2FuZHIgQW5kcnVzaGNoZW5rbyA8b2xla3NhbmRyX2FuZHJ1
c2hjaGVua29AZXBhbS5jb20+DQoNClRoYW5rIHlvdSwNCg0KT2xla3NhbmRyDQoNCj4gLS0tDQo+
IENjOiBJYW4gSmFja3NvbiA8aXdqQHhlbnByb2plY3Qub3JnPg0KPiBDYzogV2VpIExpdSA8d2xA
eGVuLm9yZz4NCj4gLS0tDQo+ICAgdG9vbHMvbGlicy9saWdodC9saWJ4bF9wY2kuYyB8IDE0OSAr
KysrKysrKysrKysrKysrLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tDQo+ICAgMSBmaWxlIGNo
YW5nZWQsIDU1IGluc2VydGlvbnMoKyksIDk0IGRlbGV0aW9ucygtKQ0KPg0KPiBkaWZmIC0tZ2l0
IGEvdG9vbHMvbGlicy9saWdodC9saWJ4bF9wY2kuYyBiL3Rvb2xzL2xpYnMvbGlnaHQvbGlieGxf
cGNpLmMNCj4gaW5kZXggZWMxMDFmMjU1Zi4uZDNjN2E1NDdjMyAxMDA2NDQNCj4gLS0tIGEvdG9v
bHMvbGlicy9saWdodC9saWJ4bF9wY2kuYw0KPiArKysgYi90b29scy9saWJzL2xpZ2h0L2xpYnhs
X3BjaS5jDQo+IEBAIC0zMzYsNTAgKzMzNiw2IEBAIHJldHJ5X3RyYW5zYWN0aW9uMjoNCj4gICAg
ICAgcmV0dXJuIDA7DQo+ICAgfQ0KPiAgIA0KPiAtc3RhdGljIGludCBnZXRfYWxsX2Fzc2lnbmVk
X2RldmljZXMobGlieGxfX2djICpnYywgbGlieGxfZGV2aWNlX3BjaSAqKmxpc3QsIGludCAqbnVt
KQ0KPiAtew0KPiAtICAgIGNoYXIgKipkb21saXN0Ow0KPiAtICAgIHVuc2lnbmVkIGludCBuZCA9
IDAsIGk7DQo+IC0NCj4gLSAgICAqbGlzdCA9IE5VTEw7DQo+IC0gICAgKm51bSA9IDA7DQo+IC0N
Cj4gLSAgICBkb21saXN0ID0gbGlieGxfX3hzX2RpcmVjdG9yeShnYywgWEJUX05VTEwsICIvbG9j
YWwvZG9tYWluIiwgJm5kKTsNCj4gLSAgICBmb3IoaSA9IDA7IGkgPCBuZDsgaSsrKSB7DQo+IC0g
ICAgICAgIGNoYXIgKnBhdGgsICpudW1fZGV2czsNCj4gLQ0KPiAtICAgICAgICBwYXRoID0gR0NT
UFJJTlRGKCIvbG9jYWwvZG9tYWluLzAvYmFja2VuZC8lcy8lcy8wL251bV9kZXZzIiwNCj4gLSAg
ICAgICAgICAgICAgICAgICAgICAgICBsaWJ4bF9fZGV2aWNlX2tpbmRfdG9fc3RyaW5nKExJQlhM
X19ERVZJQ0VfS0lORF9QQ0kpLA0KPiAtICAgICAgICAgICAgICAgICAgICAgICAgIGRvbWxpc3Rb
aV0pOw0KPiAtICAgICAgICBudW1fZGV2cyA9IGxpYnhsX194c19yZWFkKGdjLCBYQlRfTlVMTCwg
cGF0aCk7DQo+IC0gICAgICAgIGlmICggbnVtX2RldnMgKSB7DQo+IC0gICAgICAgICAgICBpbnQg
bmRldiA9IGF0b2kobnVtX2RldnMpLCBqOw0KPiAtICAgICAgICAgICAgY2hhciAqZGV2cGF0aCwg
KmJkZjsNCj4gLQ0KPiAtICAgICAgICAgICAgZm9yKGogPSAwOyBqIDwgbmRldjsgaisrKSB7DQo+
IC0gICAgICAgICAgICAgICAgZGV2cGF0aCA9IEdDU1BSSU5URigiL2xvY2FsL2RvbWFpbi8wL2Jh
Y2tlbmQvJXMvJXMvMC9kZXYtJXUiLA0KPiAtICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgbGlieGxfX2RldmljZV9raW5kX3RvX3N0cmluZyhMSUJYTF9fREVWSUNFX0tJTkRfUENJ
KSwNCj4gLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGRvbWxpc3RbaV0sIGop
Ow0KPiAtICAgICAgICAgICAgICAgIGJkZiA9IGxpYnhsX194c19yZWFkKGdjLCBYQlRfTlVMTCwg
ZGV2cGF0aCk7DQo+IC0gICAgICAgICAgICAgICAgaWYgKCBiZGYgKSB7DQo+IC0gICAgICAgICAg
ICAgICAgICAgIHVuc2lnbmVkIGRvbSwgYnVzLCBkZXYsIGZ1bmM7DQo+IC0gICAgICAgICAgICAg
ICAgICAgIGlmICggc3NjYW5mKGJkZiwgUENJX0JERiwgJmRvbSwgJmJ1cywgJmRldiwgJmZ1bmMp
ICE9IDQgKQ0KPiAtICAgICAgICAgICAgICAgICAgICAgICAgY29udGludWU7DQo+IC0NCj4gLSAg
ICAgICAgICAgICAgICAgICAgKmxpc3QgPSByZWFsbG9jKCpsaXN0LCBzaXplb2YobGlieGxfZGV2
aWNlX3BjaSkgKiAoKCpudW0pICsgMSkpOw0KPiAtICAgICAgICAgICAgICAgICAgICBpZiAoKmxp
c3QgPT0gTlVMTCkNCj4gLSAgICAgICAgICAgICAgICAgICAgICAgIHJldHVybiBFUlJPUl9OT01F
TTsNCj4gLSAgICAgICAgICAgICAgICAgICAgcGNpX3N0cnVjdF9maWxsKCpsaXN0ICsgKm51bSwg
ZG9tLCBidXMsIGRldiwgZnVuYywgMCk7DQo+IC0gICAgICAgICAgICAgICAgICAgICgqbnVtKSsr
Ow0KPiAtICAgICAgICAgICAgICAgIH0NCj4gLSAgICAgICAgICAgIH0NCj4gLSAgICAgICAgfQ0K
PiAtICAgIH0NCj4gLSAgICBsaWJ4bF9fcHRyX2FkZChnYywgKmxpc3QpOw0KPiAtDQo+IC0gICAg
cmV0dXJuIDA7DQo+IC19DQo+IC0NCj4gICBzdGF0aWMgaW50IGlzX3BjaV9pbl9hcnJheShsaWJ4
bF9kZXZpY2VfcGNpICphc3NpZ25lZCwgaW50IG51bV9hc3NpZ25lZCwNCj4gICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICBpbnQgZG9tLCBpbnQgYnVzLCBpbnQgZGV2LCBpbnQgZnVuYykNCj4g
ICB7DQo+IEBAIC00MjcsMTkgKzM4Myw1OCBAQCBzdGF0aWMgaW50IHN5c2ZzX3dyaXRlX2JkZihs
aWJ4bF9fZ2MgKmdjLCBjb25zdCBjaGFyICogc3lzZnNfcGF0aCwNCj4gICAgICAgcmV0dXJuIDA7
DQo+ICAgfQ0KPiAgIA0KPiArI2RlZmluZSBQQ0lfSU5GT19QQVRIICIvbGlieGwvcGNpIg0KPiAr
DQo+ICtzdGF0aWMgY2hhciAqcGNpX2luZm9feHNfcGF0aChsaWJ4bF9fZ2MgKmdjLCBsaWJ4bF9k
ZXZpY2VfcGNpICpwY2ksDQo+ICsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBjb25zdCBj
aGFyICpub2RlKQ0KPiArew0KPiArICAgIHJldHVybiBub2RlID8NCj4gKyAgICAgICAgR0NTUFJJ
TlRGKFBDSV9JTkZPX1BBVEgiLyJQQ0lfQkRGX1hTUEFUSCIvJXMiLA0KPiArICAgICAgICAgICAg
ICAgICAgcGNpLT5kb21haW4sIHBjaS0+YnVzLCBwY2ktPmRldiwgcGNpLT5mdW5jLA0KPiArICAg
ICAgICAgICAgICAgICAgbm9kZSkgOg0KPiArICAgICAgICBHQ1NQUklOVEYoUENJX0lORk9fUEFU
SCIvIlBDSV9CREZfWFNQQVRILA0KPiArICAgICAgICAgICAgICAgICAgcGNpLT5kb21haW4sIHBj
aS0+YnVzLCBwY2ktPmRldiwgcGNpLT5mdW5jKTsNCj4gK30NCj4gKw0KPiArDQo+ICtzdGF0aWMg
aW50IHBjaV9pbmZvX3hzX3dyaXRlKGxpYnhsX19nYyAqZ2MsIGxpYnhsX2RldmljZV9wY2kgKnBj
aSwNCj4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGNvbnN0IGNoYXIgKm5vZGUsIGNv
bnN0IGNoYXIgKnZhbCkNCj4gK3sNCj4gKyAgICBjaGFyICpwYXRoID0gcGNpX2luZm9feHNfcGF0
aChnYywgcGNpLCBub2RlKTsNCj4gKyAgICBpbnQgcmMgPSBsaWJ4bF9feHNfcHJpbnRmKGdjLCBY
QlRfTlVMTCwgcGF0aCwgIiVzIiwgdmFsKTsNCj4gKw0KPiArICAgIGlmIChyYykgTE9HRShXQVJO
LCAiV3JpdGUgb2YgJXMgdG8gbm9kZSAlcyBmYWlsZWQuIiwgdmFsLCBwYXRoKTsNCj4gKw0KPiAr
ICAgIHJldHVybiByYzsNCj4gK30NCj4gKw0KPiArc3RhdGljIGNoYXIgKnBjaV9pbmZvX3hzX3Jl
YWQobGlieGxfX2djICpnYywgbGlieGxfZGV2aWNlX3BjaSAqcGNpLA0KPiArICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgY29uc3QgY2hhciAqbm9kZSkNCj4gK3sNCj4gKyAgICBjaGFyICpw
YXRoID0gcGNpX2luZm9feHNfcGF0aChnYywgcGNpLCBub2RlKTsNCj4gKw0KPiArICAgIHJldHVy
biBsaWJ4bF9feHNfcmVhZChnYywgWEJUX05VTEwsIHBhdGgpOw0KPiArfQ0KPiArDQo+ICtzdGF0
aWMgdm9pZCBwY2lfaW5mb194c19yZW1vdmUobGlieGxfX2djICpnYywgbGlieGxfZGV2aWNlX3Bj
aSAqcGNpLA0KPiArICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGNvbnN0IGNoYXIgKm5v
ZGUpDQo+ICt7DQo+ICsgICAgY2hhciAqcGF0aCA9IHBjaV9pbmZvX3hzX3BhdGgoZ2MsIHBjaSwg
bm9kZSk7DQo+ICsgICAgbGlieGxfY3R4ICpjdHggPSBsaWJ4bF9fZ2Nfb3duZXIoZ2MpOw0KPiAr
DQo+ICsgICAgLyogUmVtb3ZlIHRoZSB4ZW5zdG9yZSBlbnRyeSAqLw0KPiArICAgIHhzX3JtKGN0
eC0+eHNoLCBYQlRfTlVMTCwgcGF0aCk7DQo+ICt9DQo+ICsNCj4gICBsaWJ4bF9kZXZpY2VfcGNp
ICpsaWJ4bF9kZXZpY2VfcGNpX2Fzc2lnbmFibGVfbGlzdChsaWJ4bF9jdHggKmN0eCwgaW50ICpu
dW0pDQo+ICAgew0KPiAgICAgICBHQ19JTklUKGN0eCk7DQo+IC0gICAgbGlieGxfZGV2aWNlX3Bj
aSAqcGNpcyA9IE5VTEwsICpuZXcsICphc3NpZ25lZDsNCj4gKyAgICBsaWJ4bF9kZXZpY2VfcGNp
ICpwY2lzID0gTlVMTCwgKm5ldzsNCj4gICAgICAgc3RydWN0IGRpcmVudCAqZGU7DQo+ICAgICAg
IERJUiAqZGlyOw0KPiAtICAgIGludCByLCBudW1fYXNzaWduZWQ7DQo+ICAgDQo+ICAgICAgICpu
dW0gPSAwOw0KPiAgIA0KPiAtICAgIHIgPSBnZXRfYWxsX2Fzc2lnbmVkX2RldmljZXMoZ2MsICZh
c3NpZ25lZCwgJm51bV9hc3NpZ25lZCk7DQo+IC0gICAgaWYgKHIpIGdvdG8gb3V0Ow0KPiAtDQo+
ICAgICAgIGRpciA9IG9wZW5kaXIoU1lTRlNfUENJQkFDS19EUklWRVIpOw0KPiAgICAgICBpZiAo
TlVMTCA9PSBkaXIpIHsNCj4gICAgICAgICAgIGlmIChlcnJubyA9PSBFTk9FTlQpIHsNCj4gQEAg
LTQ1NSw5ICs0NTAsNiBAQCBsaWJ4bF9kZXZpY2VfcGNpICpsaWJ4bF9kZXZpY2VfcGNpX2Fzc2ln
bmFibGVfbGlzdChsaWJ4bF9jdHggKmN0eCwgaW50ICpudW0pDQo+ICAgICAgICAgICBpZiAoc3Nj
YW5mKGRlLT5kX25hbWUsIFBDSV9CREYsICZkb20sICZidXMsICZkZXYsICZmdW5jKSAhPSA0KQ0K
PiAgICAgICAgICAgICAgIGNvbnRpbnVlOw0KPiAgIA0KPiAtICAgICAgICBpZiAoaXNfcGNpX2lu
X2FycmF5KGFzc2lnbmVkLCBudW1fYXNzaWduZWQsIGRvbSwgYnVzLCBkZXYsIGZ1bmMpKQ0KPiAt
ICAgICAgICAgICAgY29udGludWU7DQo+IC0NCj4gICAgICAgICAgIG5ldyA9IHJlYWxsb2MocGNp
cywgKCgqbnVtKSArIDEpICogc2l6ZW9mKCpuZXcpKTsNCj4gICAgICAgICAgIGlmIChOVUxMID09
IG5ldykNCj4gICAgICAgICAgICAgICBjb250aW51ZTsNCj4gQEAgLTQ2Nyw2ICs0NTksMTAgQEAg
bGlieGxfZGV2aWNlX3BjaSAqbGlieGxfZGV2aWNlX3BjaV9hc3NpZ25hYmxlX2xpc3QobGlieGxf
Y3R4ICpjdHgsIGludCAqbnVtKQ0KPiAgIA0KPiAgICAgICAgICAgbWVtc2V0KG5ldywgMCwgc2l6
ZW9mKCpuZXcpKTsNCj4gICAgICAgICAgIHBjaV9zdHJ1Y3RfZmlsbChuZXcsIGRvbSwgYnVzLCBk
ZXYsIGZ1bmMsIDApOw0KPiArDQo+ICsgICAgICAgIGlmIChwY2lfaW5mb194c19yZWFkKGdjLCBu
ZXcsICJkb21pZCIpKSAvKiBhbHJlYWR5IGFzc2lnbmVkICovDQo+ICsgICAgICAgICAgICBjb250
aW51ZTsNCj4gKw0KPiAgICAgICAgICAgKCpudW0pKys7DQo+ICAgICAgIH0NCj4gICANCj4gQEAg
LTczNyw0OCArNzMzLDYgQEAgc3RhdGljIGludCBwY2liYWNrX2Rldl91bmFzc2lnbihsaWJ4bF9f
Z2MgKmdjLCBsaWJ4bF9kZXZpY2VfcGNpICpwY2kpDQo+ICAgICAgIHJldHVybiAwOw0KPiAgIH0N
Cj4gICANCj4gLSNkZWZpbmUgUENJX0lORk9fUEFUSCAiL2xpYnhsL3BjaSINCj4gLQ0KPiAtc3Rh
dGljIGNoYXIgKnBjaV9pbmZvX3hzX3BhdGgobGlieGxfX2djICpnYywgbGlieGxfZGV2aWNlX3Bj
aSAqcGNpLA0KPiAtICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgY29uc3QgY2hhciAqbm9k
ZSkNCj4gLXsNCj4gLSAgICByZXR1cm4gbm9kZSA/DQo+IC0gICAgICAgIEdDU1BSSU5URihQQ0lf
SU5GT19QQVRIIi8iUENJX0JERl9YU1BBVEgiLyVzIiwNCj4gLSAgICAgICAgICAgICAgICAgIHBj
aS0+ZG9tYWluLCBwY2ktPmJ1cywgcGNpLT5kZXYsIHBjaS0+ZnVuYywNCj4gLSAgICAgICAgICAg
ICAgICAgIG5vZGUpIDoNCj4gLSAgICAgICAgR0NTUFJJTlRGKFBDSV9JTkZPX1BBVEgiLyJQQ0lf
QkRGX1hTUEFUSCwNCj4gLSAgICAgICAgICAgICAgICAgIHBjaS0+ZG9tYWluLCBwY2ktPmJ1cywg
cGNpLT5kZXYsIHBjaS0+ZnVuYyk7DQo+IC19DQo+IC0NCj4gLQ0KPiAtc3RhdGljIHZvaWQgcGNp
X2luZm9feHNfd3JpdGUobGlieGxfX2djICpnYywgbGlieGxfZGV2aWNlX3BjaSAqcGNpLA0KPiAt
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgY29uc3QgY2hhciAqbm9kZSwgY29uc3QgY2hh
ciAqdmFsKQ0KPiAtew0KPiAtICAgIGNoYXIgKnBhdGggPSBwY2lfaW5mb194c19wYXRoKGdjLCBw
Y2ksIG5vZGUpOw0KPiAtDQo+IC0gICAgaWYgKCBsaWJ4bF9feHNfcHJpbnRmKGdjLCBYQlRfTlVM
TCwgcGF0aCwgIiVzIiwgdmFsKSA8IDAgKSB7DQo+IC0gICAgICAgIExPR0UoV0FSTiwgIldyaXRl
IG9mICVzIHRvIG5vZGUgJXMgZmFpbGVkLiIsIHZhbCwgcGF0aCk7DQo+IC0gICAgfQ0KPiAtfQ0K
PiAtDQo+IC1zdGF0aWMgY2hhciAqcGNpX2luZm9feHNfcmVhZChsaWJ4bF9fZ2MgKmdjLCBsaWJ4
bF9kZXZpY2VfcGNpICpwY2ksDQo+IC0gICAgICAgICAgICAgICAgICAgICAgICAgICAgICBjb25z
dCBjaGFyICpub2RlKQ0KPiAtew0KPiAtICAgIGNoYXIgKnBhdGggPSBwY2lfaW5mb194c19wYXRo
KGdjLCBwY2ksIG5vZGUpOw0KPiAtDQo+IC0gICAgcmV0dXJuIGxpYnhsX194c19yZWFkKGdjLCBY
QlRfTlVMTCwgcGF0aCk7DQo+IC19DQo+IC0NCj4gLXN0YXRpYyB2b2lkIHBjaV9pbmZvX3hzX3Jl
bW92ZShsaWJ4bF9fZ2MgKmdjLCBsaWJ4bF9kZXZpY2VfcGNpICpwY2ksDQo+IC0gICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgY29uc3QgY2hhciAqbm9kZSkNCj4gLXsNCj4gLSAgICBjaGFy
ICpwYXRoID0gcGNpX2luZm9feHNfcGF0aChnYywgcGNpLCBub2RlKTsNCj4gLSAgICBsaWJ4bF9j
dHggKmN0eCA9IGxpYnhsX19nY19vd25lcihnYyk7DQo+IC0NCj4gLSAgICAvKiBSZW1vdmUgdGhl
IHhlbnN0b3JlIGVudHJ5ICovDQo+IC0gICAgeHNfcm0oY3R4LT54c2gsIFhCVF9OVUxMLCBwYXRo
KTsNCj4gLX0NCj4gLQ0KPiAgIHN0YXRpYyBpbnQgbGlieGxfX2RldmljZV9wY2lfYXNzaWduYWJs
ZV9hZGQobGlieGxfX2djICpnYywNCj4gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIGxpYnhsX2RldmljZV9wY2kgKnBjaSwNCj4gICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIGludCByZWJpbmQpDQo+IEBAIC0xNTk0LDYgKzE1
NDgsOSBAQCB2b2lkIGxpYnhsX19kZXZpY2VfcGNpX2FkZChsaWJ4bF9fZWdjICplZ2MsIHVpbnQz
Ml90IGRvbWlkLA0KPiAgICAgICAgICAgZ290byBvdXQ7DQo+ICAgICAgIH0NCj4gICANCj4gKyAg
ICByYyA9IHBjaV9pbmZvX3hzX3dyaXRlKGdjLCBwY2ksICJkb21pZCIsIEdDU1BSSU5URigiJXUi
LCBkb21pZCkpOw0KPiArICAgIGlmIChyYykgZ290byBvdXQ7DQo+ICsNCj4gICAgICAgbGlieGxf
X2RldmljZV9wY2lfcmVzZXQoZ2MsIHBjaS0+ZG9tYWluLCBwY2ktPmJ1cywgcGNpLT5kZXYsIHBj
aS0+ZnVuYyk7DQo+ICAgDQo+ICAgICAgIHN0dWJkb21pZCA9IGxpYnhsX2dldF9zdHViZG9tX2lk
KGN0eCwgZG9taWQpOw0KPiBAQCAtMTcyMSw2ICsxNjc4LDcgQEAgc3RhdGljIHZvaWQgZGV2aWNl
X3BjaV9hZGRfZG9uZShsaWJ4bF9fZWdjICplZ2MsDQo+ICAgICAgICAgICAgICAgICJQQ0kgZGV2
aWNlICV4OiV4OiV4LiV4IChyYyAlZCkiLA0KPiAgICAgICAgICAgICAgICBwY2ktPmRvbWFpbiwg
cGNpLT5idXMsIHBjaS0+ZGV2LCBwY2ktPmZ1bmMsDQo+ICAgICAgICAgICAgICAgIHJjKTsNCj4g
KyAgICAgICAgcGNpX2luZm9feHNfcmVtb3ZlKGdjLCBwY2ksICJkb21pZCIpOw0KPiAgICAgICB9
DQo+ICAgICAgIGFvZGV2LT5yYyA9IHJjOw0KPiAgICAgICBhb2Rldi0+Y2FsbGJhY2soZWdjLCBh
b2Rldik7DQo+IEBAIC0yMjgyLDYgKzIyNDAsOSBAQCBvdXQ6DQo+ICAgICAgIGxpYnhsX194c3dh
aXRfc3RvcChnYywgJnBycy0+eHN3YWl0KTsNCj4gICAgICAgbGlieGxfX2V2X3RpbWVfZGVyZWdp
c3RlcihnYywgJnBycy0+dGltZW91dCk7DQo+ICAgICAgIGxpYnhsX19ldl90aW1lX2RlcmVnaXN0
ZXIoZ2MsICZwcnMtPnJldHJ5X3RpbWVyKTsNCj4gKw0KPiArICAgIGlmICghcmMpIHBjaV9pbmZv
X3hzX3JlbW92ZShnYywgcGNpLCAiZG9taWQiKTsNCj4gKw0KPiAgICAgICBhb2Rldi0+cmMgPSBy
YzsNCj4gICAgICAgYW9kZXYtPmNhbGxiYWNrKGVnYywgYW9kZXYpOw0KPiAgIH0=


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 14:21:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 14:21:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42107.75686 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk6XU-0002r4-8j; Tue, 01 Dec 2020 14:21:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42107.75686; Tue, 01 Dec 2020 14:21:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk6XU-0002qx-5a; Tue, 01 Dec 2020 14:21:56 +0000
Received: by outflank-mailman (input) for mailman id 42107;
 Tue, 01 Dec 2020 14:21:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dt7S=FF=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kk6XS-0002qr-Lg
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 14:21:54 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7e1b::629])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 726b9e24-0006-45e0-aaaf-dbc24389865b;
 Tue, 01 Dec 2020 14:21:52 +0000 (UTC)
Received: from AM6PR0202CA0054.eurprd02.prod.outlook.com
 (2603:10a6:20b:3a::31) by AM0PR08MB5089.eurprd08.prod.outlook.com
 (2603:10a6:208:15b::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.20; Tue, 1 Dec
 2020 14:21:50 +0000
Received: from VE1EUR03FT006.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:3a:cafe::3c) by AM6PR0202CA0054.outlook.office365.com
 (2603:10a6:20b:3a::31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.22 via Frontend
 Transport; Tue, 1 Dec 2020 14:21:50 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT006.mail.protection.outlook.com (10.152.18.116) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3611.26 via Frontend Transport; Tue, 1 Dec 2020 14:21:50 +0000
Received: ("Tessian outbound fc5cc0046d61:v71");
 Tue, 01 Dec 2020 14:21:49 +0000
Received: from 966c964ae05d.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 6A7EDA45-3AA0-4ECC-9DE8-EA304DA7DB5C.1; 
 Tue, 01 Dec 2020 14:21:12 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 966c964ae05d.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 01 Dec 2020 14:21:12 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR08MB2694.eurprd08.prod.outlook.com (2603:10a6:6:1f::24) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.31; Tue, 1 Dec
 2020 14:21:08 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3632.017; Tue, 1 Dec 2020
 14:21:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 726b9e24-0006-45e0-aaaf-dbc24389865b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RoUAKAInsUuNczKwoDejNvXT3psocXhURQC81Mb6Sdg=;
 b=ViAgHL3+rZKxV+ZoGfAxwX1F7nl3XvM5pLVG6o9+9E/H6IKn6riVnie5k9tQZj77wYQEuGf/PD36sYuYhzBoVr0pFMoHSQUNuZJyaYEdJHreTjnya6WU3sxSeSSt0fmGvDoM5ju8JGyeTGJlKdJJ1W4qzEHG+xMYaEha8sTkVk8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: d4536d92f1525444
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=LV9Co6uxuiwV3egg9Y0ov7f398nofJLdQAIXS7Z4gZDFfFDxzZOT6oRESu4Rs6jJPccwvFI5yhTpTIaelEvZmWNwajJRLs0Mbwd6z1AkiyZS9YAVr/dz4ZzpdYUd8zPjAqBwES8LZtyqIJSQOp5wrm6qkpMaFuSq8J0tXekSNt10hD5mE+1oAWk/nZRpiZdkgJ7te0qZCPpKneCGpEsCDbTGTFQxdAjOO8MBnnBPYpqoOhoBqgR24iAaEz7efm3rwVAMLgQf3pSN2NpB/G/o4VuntbSCaq7Krd0wAGEMyzhVP6uUvw0/lTETe7ueThq6JGKhosLw0+rYEmq8R/vmMw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RoUAKAInsUuNczKwoDejNvXT3psocXhURQC81Mb6Sdg=;
 b=SuHaEJ6NvwKtlVQLvcAPuPlQKbcw69V5jnchBK8CFUB/RVd8qATn6ZKBuAk3ZaHKDF4evkGao1TSl6rUtL9t9yWrDfNu+S3s6Zk//1nRTeOMh0cj0z+2pHMCMTsQeqB5gJjK5aRAttMnfCgEgd2AGU4ylh2AQwhE/t2xi0FGG2OzlKU3QxLQa1hpjG+tHKw7arsdX6oUkt8coFDb6yorvBXx/NVPeWUerjH/JA9LtA8SSiNI49NUnm14d7D5jrVXK9XbAYnIvSSbjtFMyLaef/ij3p0oGnmkpw6eLlcr5EcpBHmIecrgzpD/jBbjklsOKz7T/C18NSbONxKt1SnMLQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RoUAKAInsUuNczKwoDejNvXT3psocXhURQC81Mb6Sdg=;
 b=ViAgHL3+rZKxV+ZoGfAxwX1F7nl3XvM5pLVG6o9+9E/H6IKn6riVnie5k9tQZj77wYQEuGf/PD36sYuYhzBoVr0pFMoHSQUNuZJyaYEdJHreTjnya6WU3sxSeSSt0fmGvDoM5ju8JGyeTGJlKdJJ1W4qzEHG+xMYaEha8sTkVk8=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 5/7] xen/arm: Add handler for cp15 ID registers
Thread-Topic: [PATCH v2 5/7] xen/arm: Add handler for cp15 ID registers
Thread-Index: AQHWxyRuchSNSGGKDEyJCEQqqd6Vw6nhIUWAgAD/ugCAAAW5gIAAJWmA
Date: Tue, 1 Dec 2020 14:21:08 +0000
Message-ID: <87243486-2A58-4497-B566-5FDE4158D18E@arm.com>
References: <cover.1606742184.git.bertrand.marquis@arm.com>
 <86c96cd3895bf968f94010c0f4ee8dce7f0338e8.1606742184.git.bertrand.marquis@arm.com>
 <87lfei7fj5.fsf@epam.com> <AB32AAFF-DD1D-4B13-ABC0-06F460E95E1C@arm.com>
 <87sg8p687j.fsf@epam.com>
In-Reply-To: <87sg8p687j.fsf@epam.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 1fa62a3d-1938-4aac-45b7-08d8960471ae
x-ms-traffictypediagnostic: DB6PR08MB2694:|AM0PR08MB5089:
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB5089F0E7D75E5335B1192A099DF40@AM0PR08MB5089.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 5Wxthw4tqc9Qt+HWSLyNswoNnY4Vc87C3zekit3/hn9NV1q5dWeeJWdVxxDSO6Ex6oV/H4+rKsArMeKyZ1+IjIMp4NebsEVnwbzf3c/dj6KGf+GQb5vrmhK1piVXooSoH0D6IdyyjxQtsV9CG4ccyz1oMOthRvx5Dr8BtTJ7h0rJS+JEz492rbolupHIjZ8ZbdYMLiEsj8G6xA3A6KC2Sarpj0WXXaHw0ShTXnUfwQcb+X93rBBX96KMAY0RQ7vzyyJYcRi7tNhaRJ+iN2yRhB8o6ytThPCRYOFcyROfmG/oPk8ZvDgHZgx8XHZurnbGFr+2l8YMk8cxdDSD04IgcQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(39860400002)(346002)(366004)(376002)(396003)(54906003)(186003)(91956017)(76116006)(6512007)(33656002)(66556008)(66446008)(64756008)(53546011)(83380400001)(6506007)(66476007)(71200400001)(66946007)(478600001)(26005)(86362001)(2616005)(6486002)(316002)(36756003)(6916009)(2906002)(5660300002)(4326008)(8936002)(8676002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?utf-8?B?TSsxOGF1cnM0ZkhDUFdFd0tBbTJZbVVmcUpuamROMUVMcjJ2Tkt5Qzd4Qlcr?=
 =?utf-8?B?em5pNUdwM3dSVjRpUEw5ZXFET0szZWJ0aG0rc3hCenJ1bjljSmcwV0J0Z2Vm?=
 =?utf-8?B?aFlTWW1KTUFLcjdpWkM3SWg2RHV4RTJQTHpHMTZ5SEhjdjBOVXhvOTFXd1ZR?=
 =?utf-8?B?em1UNWVrTCtNU0E1VHo2R0NKYXVhTG53UDFXZHBUaDZWaU1kL1pkcGp3UWZF?=
 =?utf-8?B?b1A1NHgvMjFVeDcyOTJEUVZ5UDcyN05ZQk9HMkZRV2p2UEZFeVNwNHJJeWhC?=
 =?utf-8?B?L2dTcmphZjQ5RWF5MWJpT0xTbUZ4eU5zZ1RCa2R5VkNNbUFoNWU3U01vclAv?=
 =?utf-8?B?aGZlMmY4RVg2MTA2VHZ4WkFvZVBWbWdoYlZWZmdFd1o0dE4vQ29YcDZwZXMz?=
 =?utf-8?B?cGZ6YytYS1dtNWZFa1NuTUdLMmhnOStYR3duYTB1WFZkSjNtWFh4bjM5Q2o0?=
 =?utf-8?B?eW00eFZRMW0zTEJlMmNSYldrdzdkSWdzbndXbW02My9JZ1pabmFrNmJDc2du?=
 =?utf-8?B?RE9ic0d2ZFFRZG5lbFJ5MnZTNTZIeEdneUtoWml3ZTQyVG51NW1UaHRXdFRW?=
 =?utf-8?B?aWxYa291R3JiUkVpeityQ0FJV2hmbitWZGg1TFhKQUdzQjNBcWFVeWpvQlVy?=
 =?utf-8?B?c1o3aXVkR1lPYkE2eVhwRnFtVmVSZFI4R3puTWplRk1SZGM4QXhEVmh5b1Uy?=
 =?utf-8?B?dkdQUG81cW5oSk0vbStCWHorYkoyNlFwcmFXZGVmK3pUSEpsa1BOOVlhWEFU?=
 =?utf-8?B?djlnZ2V5M21OMDcxRkNSQUtlemdVQXFXMFByOC85R3RCVUlGUm05SHlCbHhp?=
 =?utf-8?B?RU1TK1NlM2ZKcXBMcWtLODN5d0JETWNLMjcyNXFncjI0dFFWejNCeiswNzRr?=
 =?utf-8?B?MnlhZEFIZGVhajlJbkZheEpTTzhlWGhrWG9ua3NBWlU3dmRZYldYQThHa0hN?=
 =?utf-8?B?SWdBZTdzcjBLV014c0J5NGVoUEx1bWJhNEZCTUtUTnIwSXM1Z0RrMm92RGFt?=
 =?utf-8?B?OTc0VXhyeWVtejNySitTeXVRNkJSSkIyakE0c2VsYzgxTUc2N2ZYZWdxUnRR?=
 =?utf-8?B?UElsc0lZUzJXeFpzampvdm9uRjhGd05EQWpTUkJGeUlySUNBdDMra2F5UURr?=
 =?utf-8?B?MFErcDdlUEhUZDBPSXJJbDY0aDVsc0pYUkpTdEpjOE9iREQ1T2dlZ2QxSUwx?=
 =?utf-8?B?TGNiRTM4aEh1V0d5TGdZM2ZBQTcrSkxYdGJ5ODJ5VE9rbHRGMDFqUW1OcEEz?=
 =?utf-8?B?bHZVdmtMaG5kOHk3T29oSk5nSHVrVVlpZXAyUWhJWkRDaU1YVStrc05xcUJ6?=
 =?utf-8?Q?m7OR66tVPcw7+vwUKFAgc9JUBIEayngNmI?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <65A0948A7F05F241A4EA2C66F76890D6@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR08MB2694
Original-Authentication-Results: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT006.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	81c3533a-e1a0-402c-1441-08d89604587c
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	McR72uEb3RJpxNbZ66ThPuWTK5YIDE/TN818wMRIgXpKzudeEhSbTd8XM2foiTv4fh+RYasdI9dNC21UEDHbrhkZ0dkNyuf6wkYMdlCzHdTqjiqqWnH+fXxDrpQK8ER8FQsY+ChHEKeyYGuMo1eY/t0itPOeIaH0iZfLFZh5TfHvTyIRCyTtyH/r1KLE4i5cvbu8AvurxIZFXMupPBo5Bjg9D9ElDPr/5IC8DH7k8EINgB144ugqFAWXxaYxsQvrFT5MZKwlz6x1prXhwtoOpZfHvP9x6i0yDRWsZE7R/dQUzMkl/JTuW6Tqexnlw5Ig03g7K0mbIBtb/jygYyVaGQLoVmIC7Kn6JNZ1VrdmArCTQxPT0hpXjgFTa57EFjjeTb86dbBTH8JB/jc6pMVw9w==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(376002)(396003)(39860400002)(136003)(46966005)(336012)(33656002)(186003)(6486002)(54906003)(316002)(86362001)(6512007)(26005)(53546011)(6506007)(70206006)(6862004)(36756003)(70586007)(47076004)(82310400003)(81166007)(8936002)(5660300002)(4326008)(82740400003)(478600001)(8676002)(2906002)(356005)(83380400001)(2616005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Dec 2020 14:21:50.2485
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 1fa62a3d-1938-4aac-45b7-08d8960471ae
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT006.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5089

SGkgVm9sb2R5bXlyLA0KDQo+IE9uIDEgRGVjIDIwMjAsIGF0IDEyOjA3LCBWb2xvZHlteXIgQmFi
Y2h1ayA8Vm9sb2R5bXlyX0JhYmNodWtAZXBhbS5jb20+IHdyb3RlOg0KPiANCj4gDQo+IEhpLA0K
PiANCj4gQmVydHJhbmQgTWFycXVpcyB3cml0ZXM6DQo+IA0KPj4gSGksDQo+PiANCj4+PiBPbiAz
MCBOb3YgMjAyMCwgYXQgMjA6MzEsIFZvbG9keW15ciBCYWJjaHVrIDxWb2xvZHlteXJfQmFiY2h1
a0BlcGFtLmNvbT4gd3JvdGU6DQo+Pj4gDQo+Pj4gDQo+Pj4gQmVydHJhbmQgTWFycXVpcyB3cml0
ZXM6DQo+Pj4gDQo+Pj4+IEFkZCBzdXBwb3J0IGZvciBlbXVsYXRpb24gb2YgY3AxNSBiYXNlZCBJ
RCByZWdpc3RlcnMgKG9uIGFybTMyIG9yIHdoZW4NCj4+Pj4gcnVubmluZyBhIDMyYml0IGd1ZXN0
IG9uIGFybTY0KS4NCj4+Pj4gVGhlIGhhbmRsZXJzIGFyZSByZXR1cm5pbmcgdGhlIHZhbHVlcyBz
dG9yZWQgaW4gdGhlIGd1ZXN0X2NwdWluZm8NCj4+Pj4gc3RydWN0dXJlLg0KPj4+PiBJbiB0aGUg
Y3VycmVudCBzdGF0dXMgdGhlIE1WRlIgcmVnaXN0ZXJzIGFyZSBubyBzdXBwb3J0ZWQuDQo+Pj4g
DQo+Pj4gSXQgaXMgdW5jbGVhciB3aGF0IHdpbGwgaGFwcGVuIHdpdGggcmVnaXN0ZXJzIHRoYXQg
YXJlIG5vdCBjb3ZlcmVkIGJ5DQo+Pj4gZ3Vlc3RfY3B1aW5mbyBzdHJ1Y3R1cmUuIEFjY29yZGlu
ZyB0byBBUk0gQVJNLCBpdCBpcyBpbXBsZW1lbnRhdGlvbg0KPj4+IGRlZmluZWQgaWYgc3VjaCBh
Y2Nlc3NlcyB3aWxsIGJlIHRyYXBwZWQuIE9uIG90aGVyIGhhbmQsIHRoZXJlIGFyZSBtYW55DQo+
Pj4gcmVnaXN0ZXJzIHdoaWNoIGFyZSBSQVouIFNvLCBnb29kIGJlaGF2aW5nIGd1ZXN0IGNhbiB0
cnkgdG8gcmVhZCBvbmUgb2YNCj4+PiB0aGF0IHJlZ2lzdGVycyBhbmQgaXQgd2lsbCBnZXQgdW5k
ZWZpbmVkIGluc3RydWN0aW9uIGV4Y2VwdGlvbiwgaW5zdGVhZA0KPj4+IG9mIGp1c3QgcmVhZGlu
ZyBhbGwgemVyb2VzLg0KPj4gDQo+PiBUaGlzIGlzIHRydWUgaW4gdGhlIHN0YXR1cyBvZiB0aGlz
IHBhdGNoIGJ1dCB0aGlzIGlzIHNvbHZlZCBieSB0aGUgbmV4dCBwYXRjaA0KPj4gd2hpY2ggaXMg
YWRkaW5nIHByb3BlciBoYW5kbGluZyBvZiB0aG9zZSByZWdpc3RlcnMgKGFkZCBDUDEwIGV4Y2Vw
dGlvbg0KPj4gc3VwcG9ydCksIGF0IGxlYXN0IGZvciBNVkZSIG9uZXMuDQo+PiANCj4+IEZyb20g
QVJNIEFSTSBwb2ludCBvZiB2aWV3LCBJIGRpZCBoYW5kbGUgYWxsIHJlZ2lzdGVycyBsaXN0ZWQg
SSB0aGluay4NCj4+IElmIHlvdSB0aGluayBzb21lIGFyZSBtaXNzaW5nIHBsZWFzZSBwb2ludCBt
ZSB0byB0aGVtIGFzIE8gZG8gbm90DQo+PiBjb21wbGV0ZWx5IHVuZGVyc3RhbmQgd2hhdCBhcmUg
dGhlIOKAnHJlZ2lzdGVycyBub3QgY292ZXJlZOKAnSB1bmxlc3MNCj4+IHlvdSBtZWFuIHRoZSBN
VkZSIG9uZXMuDQo+IA0KPiBXZWxsLCBJIG1heSBiZSB3cm9uZyBmb3IgYWFyY2gzMiBjYXNlLCBi
dXQgZm9yIGFhcmNoNjQsIHRoZXJlIGFyZSBudW1iZXINCj4gb2YgcmVzZXJ2ZWQgcmVnaXN0ZXJz
IGluIElEcyByYW5nZS4gVGhvc2UgcmVnaXN0ZXJzIHNob3VsZCByZWFkIGFzDQo+IHplcm8uIFlv
dSBjYW4gZmluZCB0aGVtIGluIHRoZSBzZWN0aW9uICJDNS4xLjYgb3AwPT0wYjExLCBNb3ZlcyB0
byBhbmQNCj4gZnJvbSBub24tZGVidWcgU3lzdGVtIHJlZ2lzdGVycyBhbmQgU3BlY2lhbC1wdXJw
b3NlIHJlZ2lzdGVycyIgb2YgQVJNDQo+IERESSAwNDg3Qi5hLiBDaGVjayBvdXQgIlRhYmxlIEM1
LTYgU3lzdGVtIGluc3RydWN0aW9uIGVuY29kaW5ncyBmb3INCj4gbm9uLURlYnVnIFN5c3RlbSBy
ZWdpc3RlciBhY2Nlc3NlcyIuDQoNClRoZSBwb2ludCBvZiB0aGUgc2VyaWUgaXMgdG8gaGFuZGxl
IGFsbCByZWdpc3RlcnMgdHJhcHBlZCBkdWUgdG8gVElEMyBiaXQgaW4gSENSX0VMMi4NCg0KQW5k
IGkgdGhpbmsgSSBoYW5kbGVkIGFsbCBvZiB0aGVtIGJ1dCBJIG1pZ2h0IGJlIHdyb25nLg0KDQpI
YW5kbGluZyBhbGwgcmVnaXN0ZXJzIGZvciBvcDA9PTBiMTEgd2lsbCBjb3ZlciBhIGxvdCBtb3Jl
IHRoaW5ncy4NClRoaXMgY2FuIGJlIGRvbmUgb2YgY291cnNlIGJ1dCB0aGlzIHdhcyBub3QgdGhl
IHBvaW50IG9mIHRoaXMgc2VyaWUuDQoNClRoZSBsaXN0aW5nIGluIEhDUl9FTDIgZG9jdW1lbnRh
dGlvbiBpcyBwcmV0dHkgY29tcGxldGUgYW5kIGlmIEkgbWlzcyBhbnkgcmVnaXN0ZXINCnRoZXJl
IHBsZWFzZSB0ZWxsIG1lIGJ1dCBJIGRvIG5vIHVuZGVyc3RhbmQgZnJvbSB0aGUgZG9jdW1lbnRh
dGlvbiB0aGF0IGFsbCByZWdpc3RlcnMNCndpdGggb3AwIDMgYXJlIHRyYXBwZWQgYnkgVElEMy4N
Cg0KUmVnYXJkcw0KQmVydHJhbmQNCg0KDQo+IA0KPiANCj4+PiANCj4+Pj4gU2lnbmVkLW9mZi1i
eTogQmVydHJhbmQgTWFycXVpcyA8YmVydHJhbmQubWFycXVpc0Bhcm0uY29tPg0KPj4+PiAtLS0N
Cj4+Pj4gQ2hhbmdlcyBpbiBWMjogcmViYXNlDQo+Pj4+IC0tLQ0KPj4+PiB4ZW4vYXJjaC9hcm0v
dmNwcmVnLmMgfCAzNSArKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKw0KPj4+PiAx
IGZpbGUgY2hhbmdlZCwgMzUgaW5zZXJ0aW9ucygrKQ0KPj4+PiANCj4+Pj4gZGlmZiAtLWdpdCBh
L3hlbi9hcmNoL2FybS92Y3ByZWcuYyBiL3hlbi9hcmNoL2FybS92Y3ByZWcuYw0KPj4+PiBpbmRl
eCBjZGM5MWNkZjViLi5kMGM2NDA2ZjM0IDEwMDY0NA0KPj4+PiAtLS0gYS94ZW4vYXJjaC9hcm0v
dmNwcmVnLmMNCj4+Pj4gKysrIGIveGVuL2FyY2gvYXJtL3ZjcHJlZy5jDQo+Pj4+IEBAIC0xNTUs
NiArMTU1LDE0IEBAIFRWTV9SRUczMihDT05URVhUSURSLCBDT05URVhUSURSX0VMMSkNCj4+Pj4g
ICAgICAgIGJyZWFrOyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIFwNCj4+Pj4gICAgfQ0KPj4+PiANCj4+Pj4gKy8qIE1hY3JvIHRvIGdlbmVyYXRl
IGVhc2lseSBjYXNlIGZvciBJRCBjby1wcm9jZXNzb3IgZW11bGF0aW9uICovDQo+Pj4+ICsjZGVm
aW5lIEdFTkVSQVRFX1RJRDNfSU5GTyhyZWcsZmllbGQsb2Zmc2V0KSAgICAgICAgICAgICAgICAg
ICAgICAgIFwNCj4+Pj4gKyAgICBjYXNlIEhTUl9DUFJFRzMyKHJlZyk6ICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgXA0KPj4+PiArICAgIHsgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcDQo+Pj4+ICsg
ICAgICAgIHJldHVybiBoYW5kbGVfcm9fcmVhZF92YWwocmVncywgcmVnaWR4LCBjcDMyLnJlYWQs
IGhzciwgICAgIFwNCj4+Pj4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgMSwgZ3Vlc3RfY3B1
aW5mby5maWVsZC5iaXRzW29mZnNldF0pOyAgICAgXA0KPj4+PiArICAgIH0NCj4+Pj4gKw0KPj4+
PiB2b2lkIGRvX2NwMTVfMzIoc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MsIGNvbnN0IHVuaW9u
IGhzciBoc3IpDQo+Pj4+IHsNCj4+Pj4gICAgY29uc3Qgc3RydWN0IGhzcl9jcDMyIGNwMzIgPSBo
c3IuY3AzMjsNCj4+Pj4gQEAgLTI4Niw2ICsyOTQsMzMgQEAgdm9pZCBkb19jcDE1XzMyKHN0cnVj
dCBjcHVfdXNlcl9yZWdzICpyZWdzLCBjb25zdCB1bmlvbiBoc3IgaHNyKQ0KPj4+PiAgICAgICAg
ICovDQo+Pj4+ICAgICAgICByZXR1cm4gaGFuZGxlX3Jhel93aShyZWdzLCByZWdpZHgsIGNwMzIu
cmVhZCwgaHNyLCAxKTsNCj4+Pj4gDQo+Pj4+ICsgICAgLyoNCj4+Pj4gKyAgICAgKiBIQ1JfRUwy
LlRJRDMNCj4+Pj4gKyAgICAgKg0KPj4+PiArICAgICAqIFRoaXMgaXMgdHJhcHBpbmcgbW9zdCBJ
ZGVudGlmaWNhdGlvbiByZWdpc3RlcnMgdXNlZCBieSBhIGd1ZXN0DQo+Pj4+ICsgICAgICogdG8g
aWRlbnRpZnkgdGhlIHByb2Nlc3NvciBmZWF0dXJlcw0KPj4+PiArICAgICAqLw0KPj4+PiArICAg
IEdFTkVSQVRFX1RJRDNfSU5GTyhJRF9QRlIwLCBwZnIzMiwgMCkNCj4+Pj4gKyAgICBHRU5FUkFU
RV9USUQzX0lORk8oSURfUEZSMSwgcGZyMzIsIDEpDQo+Pj4+ICsgICAgR0VORVJBVEVfVElEM19J
TkZPKElEX1BGUjIsIHBmcjMyLCAyKQ0KPj4+PiArICAgIEdFTkVSQVRFX1RJRDNfSU5GTyhJRF9E
RlIwLCBkYmczMiwgMCkNCj4+Pj4gKyAgICBHRU5FUkFURV9USUQzX0lORk8oSURfREZSMSwgZGJn
MzIsIDEpDQo+Pj4+ICsgICAgR0VORVJBVEVfVElEM19JTkZPKElEX0FGUjAsIGF1eDMyLCAwKQ0K
Pj4+PiArICAgIEdFTkVSQVRFX1RJRDNfSU5GTyhJRF9NTUZSMCwgbW0zMiwgMCkNCj4+Pj4gKyAg
ICBHRU5FUkFURV9USUQzX0lORk8oSURfTU1GUjEsIG1tMzIsIDEpDQo+Pj4+ICsgICAgR0VORVJB
VEVfVElEM19JTkZPKElEX01NRlIyLCBtbTMyLCAyKQ0KPj4+PiArICAgIEdFTkVSQVRFX1RJRDNf
SU5GTyhJRF9NTUZSMywgbW0zMiwgMykNCj4+Pj4gKyAgICBHRU5FUkFURV9USUQzX0lORk8oSURf
TU1GUjQsIG1tMzIsIDQpDQo+Pj4+ICsgICAgR0VORVJBVEVfVElEM19JTkZPKElEX01NRlI1LCBt
bTMyLCA1KQ0KPj4+PiArICAgIEdFTkVSQVRFX1RJRDNfSU5GTyhJRF9JU0FSMCwgaXNhMzIsIDAp
DQo+Pj4+ICsgICAgR0VORVJBVEVfVElEM19JTkZPKElEX0lTQVIxLCBpc2EzMiwgMSkNCj4+Pj4g
KyAgICBHRU5FUkFURV9USUQzX0lORk8oSURfSVNBUjIsIGlzYTMyLCAyKQ0KPj4+PiArICAgIEdF
TkVSQVRFX1RJRDNfSU5GTyhJRF9JU0FSMywgaXNhMzIsIDMpDQo+Pj4+ICsgICAgR0VORVJBVEVf
VElEM19JTkZPKElEX0lTQVI0LCBpc2EzMiwgNCkNCj4+Pj4gKyAgICBHRU5FUkFURV9USUQzX0lO
Rk8oSURfSVNBUjUsIGlzYTMyLCA1KQ0KPj4+PiArICAgIEdFTkVSQVRFX1RJRDNfSU5GTyhJRF9J
U0FSNiwgaXNhMzIsIDYpDQo+Pj4+ICsgICAgLyogTVZGUiByZWdpc3RlcnMgYXJlIGluIGNwMTAg
bm8gY3AxNSAqLw0KPj4+PiArDQo+Pj4+ICAgIC8qDQo+Pj4+ICAgICAqIEhDUl9FTDIuVElEQ1AN
Cj4+Pj4gICAgICoNCj4+PiANCj4+PiANCj4+PiAtLSANCj4+PiBWb2xvZHlteXIgQmFiY2h1ayBh
dCBFUEFNDQo+IA0KPiANCj4gLS0gDQo+IFZvbG9keW15ciBCYWJjaHVrIGF0IEVQQU0NCg0K


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 14:50:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 14:50:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42138.75758 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk6yS-0005E9-8h; Tue, 01 Dec 2020 14:49:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42138.75758; Tue, 01 Dec 2020 14:49:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk6yS-0005E2-5V; Tue, 01 Dec 2020 14:49:48 +0000
Received: by outflank-mailman (input) for mailman id 42138;
 Tue, 01 Dec 2020 14:49:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Hnjp=FF=cknow.org=didi.debian@srs-us1.protection.inumbo.net>)
 id 1kk6yQ-0005Dx-L2
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 14:49:46 +0000
Received: from relay4-d.mail.gandi.net (unknown [217.70.183.196])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 689b7ce6-5655-40a8-963c-15b088002e15;
 Tue, 01 Dec 2020 14:49:44 +0000 (UTC)
Received: from bagend.home.cknow.org (92-110-45-68.cable.dynamic.v4.ziggo.nl
 [92.110.45.68]) (Authenticated sender: didi.debian@cknow.org)
 by relay4-d.mail.gandi.net (Postfix) with ESMTPSA id 97B44E0009;
 Tue,  1 Dec 2020 14:49:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 689b7ce6-5655-40a8-963c-15b088002e15
X-Originating-IP: 92.110.45.68
From: Diederik de Haas <didi.debian@cknow.org>
To: xen-devel@lists.xenproject.org
Cc: Diederik de Haas <didi.debian@cknow.org>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [PATCH for-4.14] Fix spelling errors.
Date: Tue,  1 Dec 2020 15:42:23 +0100
Message-Id: <5f4935dbc0257e19b87b9461ea62e25328a6091e.1606833490.git.didi.debian@cknow.org>
X-Mailer: git-send-email 2.29.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Only spelling errors; no functional changes.

In docs/misc/dump-core-format.txt there are a few more instances of
'informations'. I'll leave that up to someone who can properly determine
how those sentences should be constructed.

Signed-off-by: Diederik de Haas <didi.debian@cknow.org>
---

I incorporated the remarks by Jan Beulich that were made for the patch
targeted at the master branch. Other than that, they're the exact same
changes, although for libxl_stream_read.c the path was updated to match
the stable-4.14 branch.

---
 docs/man/xl.1.pod.in                   | 2 +-
 docs/man/xl.cfg.5.pod.in               | 2 +-
 docs/man/xlcpupool.cfg.5.pod           | 2 +-
 tools/firmware/rombios/rombios.c       | 2 +-
 tools/libxl/libxl_stream_read.c        | 2 +-
 tools/xl/xl_cmdtable.c                 | 2 +-
 xen/arch/x86/boot/video.S              | 2 +-
 xen/arch/x86/cpu/vpmu.c                | 2 +-
 xen/arch/x86/mpparse.c                 | 2 +-
 xen/arch/x86/x86_emulate/x86_emulate.c | 2 +-
 xen/common/libelf/libelf-dominfo.c     | 2 +-
 xen/drivers/passthrough/arm/smmu.c     | 2 +-
 xen/tools/gen-cpuid.py                 | 2 +-
 xen/xsm/flask/policy/access_vectors    | 2 +-
 14 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index 52a47a6fbd..34807071ab 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -1578,7 +1578,7 @@ List vsnd devices for a domain.
 Creates a new keyboard device in the domain specified by I<domain-id>.
 I<vkb-device> describes the device to attach, using the same format as the
 B<VKB_SPEC_STRING> string in the domain config file. See L<xl.cfg(5)>
-for more informations.
+for more information.
 
 =item B<vkb-detach> I<domain-id> I<devid>
 
diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 0532739c1f..b4625f56db 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2385,7 +2385,7 @@ If B<videoram> is set less than 128MB, an error will be triggered.
 
 =item B<stdvga=BOOLEAN>
 
-Speficies a standard VGA card with VBE (VESA BIOS Extensions) as the
+Specifies a standard VGA card with VBE (VESA BIOS Extensions) as the
 emulated graphics device. If your guest supports VBE 2.0 or
 later (e.g. Windows XP onwards) then you should enable this.
 stdvga supports more video ram and bigger resolutions than Cirrus.
diff --git a/docs/man/xlcpupool.cfg.5.pod b/docs/man/xlcpupool.cfg.5.pod
index 3c9ddf7958..c577c7ca3a 100644
--- a/docs/man/xlcpupool.cfg.5.pod
+++ b/docs/man/xlcpupool.cfg.5.pod
@@ -106,7 +106,7 @@ means that cpus 2,3,5 will be member of the cpupool.
 means that cpus 0,2,3 and 5 will be member of the cpupool. A "node:" or
 "nodes:" modifier can be used. E.g., "0,node:1,nodes:2-3,^10-13" means
 that pcpus 0, plus all the cpus of NUMA nodes 1,2,3 with the exception
-of cpus 10,11,12,13 will be memeber of the cpupool.
+of cpus 10,11,12,13 will be members of the cpupool.
 
 =back
 
diff --git a/tools/firmware/rombios/rombios.c b/tools/firmware/rombios/rombios.c
index 51558ee57a..5cda22785f 100644
--- a/tools/firmware/rombios/rombios.c
+++ b/tools/firmware/rombios/rombios.c
@@ -2607,7 +2607,7 @@ void ata_detect( )
   write_byte(ebda_seg,&EbdaData->ata.channels[3].irq,11);
 #endif
 #if BX_MAX_ATA_INTERFACES > 4
-#error Please fill the ATA interface informations
+#error Please fill the ATA interface information
 #endif
 
   // Device detection
diff --git a/tools/libxl/libxl_stream_read.c b/tools/libxl/libxl_stream_read.c
index 514f6d9f89..99a6714e76 100644
--- a/tools/libxl/libxl_stream_read.c
+++ b/tools/libxl/libxl_stream_read.c
@@ -459,7 +459,7 @@ static void stream_continue(libxl__egc *egc,
         while (process_record(egc, stream))
             ; /*
                * Nothing! process_record() helpfully tells us if no specific
-               * futher actions have been set up, in which case we want to go
+               * further actions have been set up, in which case we want to go
                * ahead and process the next record.
                */
         break;
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 08335394e5..9ad31a6cc0 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -154,7 +154,7 @@ struct cmd_spec cmd_table[] = {
       "-h  Print this help.\n"
       "-c  Leave domain running after creating the snapshot.\n"
       "-p  Leave domain paused after creating the snapshot.\n"
-      "-D  Store the domain id in the configration."
+      "-D  Store the domain id in the configuration."
     },
     { "migrate",
       &main_migrate, 0, 1,
diff --git a/xen/arch/x86/boot/video.S b/xen/arch/x86/boot/video.S
index a485779ce7..0efbe8d3b3 100644
--- a/xen/arch/x86/boot/video.S
+++ b/xen/arch/x86/boot/video.S
@@ -177,7 +177,7 @@ dac_set:
         movb    $0, _param(PARAM_LFB_COLORS+7)
 
 dac_done:
-# get protected mode interface informations
+# get protected mode interface information
         movw    $0x4f0a, %ax
         xorw    %bx, %bx
         xorw    %di, %di
diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index 1ed39ef03f..ab667361d3 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -680,7 +680,7 @@ static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
         vcpu_unpause(v);
 }
 
-/* Dump some vpmu informations on console. Used in keyhandler dump_domains(). */
+/* Dump some vpmu information to console. Used in keyhandler dump_domains(). */
 void vpmu_dump(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
diff --git a/xen/arch/x86/mpparse.c b/xen/arch/x86/mpparse.c
index d532575fee..dff02b142b 100644
--- a/xen/arch/x86/mpparse.c
+++ b/xen/arch/x86/mpparse.c
@@ -170,7 +170,7 @@ static int MP_processor_info_x(struct mpc_config_processor *m,
 	if (num_processors >= 8 && hotplug
 	    && genapic.name == apic_default.name) {
 		printk_once(XENLOG_WARNING
-			    "WARNING: CPUs limit of 8 reached - ignoring futher processors\n");
+			    "WARNING: CPUs limit of 8 reached - ignoring further processors\n");
 		unaccounted_cpus = true;
 		return -ENOSPC;
 	}
diff --git a/xen/arch/x86/x86_emulate/x86_emulate.c b/xen/arch/x86/x86_emulate/x86_emulate.c
index 9b29548e2d..5d91f03dac 100644
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -3243,7 +3243,7 @@ x86_decode(
             case 0x23: /* mov reg,dr */
                 /*
                  * Mov to/from cr/dr ignore the encoding of Mod, and behave as
-                 * if they were encoded as reg/reg instructions.  No futher
+                 * if they were encoded as reg/reg instructions.  No further
                  * disp/SIB bytes are fetched.
                  */
                 modrm_mod = 3;
diff --git a/xen/common/libelf/libelf-dominfo.c b/xen/common/libelf/libelf-dominfo.c
index 508f08db42..69c94b6f3b 100644
--- a/xen/common/libelf/libelf-dominfo.c
+++ b/xen/common/libelf/libelf-dominfo.c
@@ -1,5 +1,5 @@
 /*
- * parse xen-specific informations out of elf kernel binaries.
+ * parse xen-specific information out of elf kernel binaries.
  *
  * This library is free software; you can redistribute it and/or
  * modify it under the terms of the GNU Lesser General Public
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 4ba6d3ab94..5c95131b07 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -214,7 +214,7 @@ struct iommu_domain
 	struct list_head		list;
 };
 
-/* Xen: Describes informations required for a Xen domain */
+/* Xen: Describes information required for a Xen domain */
 struct arm_smmu_xen_domain {
 	spinlock_t			lock;
 	/* List of context (i.e iommu_domain) associated to this domain */
diff --git a/xen/tools/gen-cpuid.py b/xen/tools/gen-cpuid.py
index ffd9529fdf..14f56df89c 100755
--- a/xen/tools/gen-cpuid.py
+++ b/xen/tools/gen-cpuid.py
@@ -192,7 +192,7 @@ def crunch_numbers(state):
         FXSR: [FFXSR, SSE],
 
         # SSE is taken to mean support for the %XMM registers as well as the
-        # instructions.  Several futher instruction sets are built on core
+        # instructions.  Several further instruction sets are built on core
         # %XMM support, without specific inter-dependencies.  Additionally
         # AMD has a special mis-alignment sub-mode.
         SSE: [SSE2, MISALIGNSSE],
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index b87c99ea98..5371196f69 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -509,7 +509,7 @@ class security
 #
 class version
 {
-# Extra informations (-unstable).
+# Extra information (-unstable).
     xen_extraversion
 # Compile information of the hypervisor.
     xen_compile_info
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 15:10:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 15:10:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42152.75789 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk7IH-0007wq-7m; Tue, 01 Dec 2020 15:10:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42152.75789; Tue, 01 Dec 2020 15:10:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk7IH-0007wj-4n; Tue, 01 Dec 2020 15:10:17 +0000
Received: by outflank-mailman (input) for mailman id 42152;
 Tue, 01 Dec 2020 15:10:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KrUB=FF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kk7IF-0007we-4a
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 15:10:15 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5042aadf-43e5-4008-84ee-a57859af9665;
 Tue, 01 Dec 2020 15:10:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 54A4DAD8A;
 Tue,  1 Dec 2020 15:10:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5042aadf-43e5-4008-84ee-a57859af9665
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH for-4.14] Fix spelling errors.
To: Diederik de Haas <didi.debian@cknow.org>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xenproject.org
References: <5f4935dbc0257e19b87b9461ea62e25328a6091e.1606833490.git.didi.debian@cknow.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <44d06209-65de-f959-fb93-90a924cbf055@suse.com>
Date: Tue, 1 Dec 2020 16:10:13 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <5f4935dbc0257e19b87b9461ea62e25328a6091e.1606833490.git.didi.debian@cknow.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.12.2020 15:42, Diederik de Haas wrote:
> Only spelling errors; no functional changes.
> 
> In docs/misc/dump-core-format.txt there are a few more instances of
> 'informations'. I'll leave that up to someone who can properly determine
> how those sentences should be constructed.
> 
> Signed-off-by: Diederik de Haas <didi.debian@cknow.org>
> ---
> 
> I incorporated the remarks by Jan Beulich that were made for the patch
> targeted at the master branch. Other than that, they're the exact same
> changes, although for libxl_stream_read.c the path was updated to match
> the stable-4.14 branch.

I'm afraid this isn't the kind of change we'd be backporting, unless
you have a very good justification for such a request. The widest
scope of such a backport that I'd see as remotely possible is one
touching only documentation.

Also, process-wise, patches wouldn't normally be sent to the list
for the stable trees. Exceptions are when the backport is
non-trivial, or when something needs fixing there that no longer
exists in newer code. In all other cases it would simply be a
request to backport a certain change, possibly embedded directly
in the original submission (e.g. by means of a Fixes: tag).

Jan
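A backport request embedded in the original submission, as mentioned above, typically takes the form of a Fixes: trailer in the commit message, which lets stable-tree maintainers locate the commit that introduced the problem. A hypothetical sketch; the hash, subject, and names below are invented purely for illustration:

```
xen/foo: correct frobnication of the wibble

Only a one-line correction; no change to other behaviour intended.

Fixes: 0123456789ab ("xen/foo: introduce the wibble")
Signed-off-by: A. Contributor <a.contributor@example.org>
```

Maintainers scanning for backport candidates can then match the Fixes: hash against the history of each stable branch to decide where the fix applies.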


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 15:10:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 15:10:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42156.75802 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk7Is-00084A-N0; Tue, 01 Dec 2020 15:10:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42156.75802; Tue, 01 Dec 2020 15:10:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk7Is-000843-JG; Tue, 01 Dec 2020 15:10:54 +0000
Received: by outflank-mailman (input) for mailman id 42156;
 Tue, 01 Dec 2020 15:10:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJqf=FF=epam.com=prvs=0604985de8=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kk7Ir-00083u-CJ
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 15:10:53 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 46d74550-4b10-4d8d-af81-b0739bd7d961;
 Tue, 01 Dec 2020 15:10:51 +0000 (UTC)
Received: from pps.filterd (m0174681.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 0B1F5fYT022470; Tue, 1 Dec 2020 15:10:48 GMT
Received: from eur05-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2170.outbound.protection.outlook.com [104.47.17.170])
 by mx0b-0039f301.pphosted.com with ESMTP id 355k5th7jp-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 01 Dec 2020 15:10:47 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM9PR03MB6689.eurprd03.prod.outlook.com (2603:10a6:20b:2db::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.22; Tue, 1 Dec
 2020 15:10:43 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%9]) with mapi id 15.20.3611.022; Tue, 1 Dec 2020
 15:10:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 46d74550-4b10-4d8d-af81-b0739bd7d961
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Paul Durrant <paul@xen.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Paul Durrant <pdurrant@amazon.com>, Ian Jackson <iwj@xenproject.org>,
        Wei
 Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v4 11/23] libxl: make sure callers of
 libxl_device_pci_list() free the list after use
Thread-Topic: [PATCH v4 11/23] libxl: make sure callers of
 libxl_device_pci_list() free the list after use
Thread-Index: AQHWx/QjgNzIAVkiY0OWp7rlRRBXLw==
Date: Tue, 1 Dec 2020 15:10:43 +0000
Message-ID: <e6f0aad0-60bd-169e-1e70-f86d0517e551@epam.com>
References: <20201124080159.11912-1-paul@xen.org>
 <20201124080159.11912-12-paul@xen.org>
In-Reply-To: <20201124080159.11912-12-paul@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: e7c69f0a-2338-4c32-72ab-08d8960b45b4
x-ms-traffictypediagnostic: AM9PR03MB6689:
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <D92BCD250157C24EB4CD1ED3B476FFC3@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e7c69f0a-2338-4c32-72ab-08d8960b45b4
X-MS-Exchange-CrossTenant-originalarrivaltime: 01 Dec 2020 15:10:43.0677
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR03MB6689

SGksIFBhdWwhDQoNCk9uIDExLzI0LzIwIDEwOjAxIEFNLCBQYXVsIER1cnJhbnQgd3JvdGU6DQo+
IEZyb206IFBhdWwgRHVycmFudCA8cGR1cnJhbnRAYW1hem9uLmNvbT4NCj4NCj4gQSBwcmV2aW91
cyBwYXRjaCBpbnRyb2R1Y2VkIGxpYnhsX2RldmljZV9wY2lfbGlzdF9mcmVlKCkgd2hpY2ggc2hv
dWxkIGJlIHVzZWQNCj4gYnkgY2FsbGVycyBvZiBsaWJ4bF9kZXZpY2VfcGNpX2xpc3QoKSB0byBw
cm9wZXJseSBkaXNwb3NlIG9mIHRoZSBleHBvcnRlZA0KPiAnbGlieGxfZGV2aWNlX3BjaScgdHlw
ZXMgYW5kIHRoZSBmcmVlIHRoZSBtZW1vcnkgaG9sZGluZyB0aGVtLiBXaGlsc3QgYWxsDQo+IGN1
cnJlbnQgY2FsbGVycyBkbyBlbnN1cmUgdGhlIG1lbW9yeSBpcyBmcmVlZCwgb25seSB0aGUgY29k
ZSBpbiB4bCdzDQo+IHBjaWxpc3QoKSBmdW5jdGlvbiBhY3R1YWxseSBjYWxscyBsaWJ4bF9kZXZp
Y2VfcGNpX2Rpc3Bvc2UoKS4gQXMgaXQgc3RhbmRzDQo+IHRoaXMgbGF4aXR5IGRvZXMgbm90IGxl
YWQgdG8gYW55IG1lbW9yeSBsZWFrcywgYnV0IHRoZSBzaW1wbGUgYWRkaXRpb24gb2YNCj4gLmUu
Zy4gYSAnc3RyaW5nJyBpbnRvIHRoZSBpZGwgZGVmaW5pdGlvbiBvZiAnbGlieGxfZGV2aWNlX3Bj
aScgd291bGQgbGVhZA0KPiB0byBsZWFrcy4NCj4NCj4gVGhpcyBwYXRjaCBtYWtlcyBzdXJlIGFs
bCBjYWxsZXJzIG9mIGxpYnhsX2RldmljZV9wY2lfbGlzdCgpIGNhbiBjYWxsDQo+IGxpYnhsX2Rl
dmljZV9wY2lfbGlzdF9mcmVlKCkgYnkga2VlcGluZyBjb3BpZXMgb2YgJ2xpYnhsX2RldmljZV9w
Y2knDQo+IHN0cnVjdHVyZXMgaW5saW5lIGluICdwY2lfYWRkX3N0YXRlJyBhbmQgJ3BjaV9yZW1v
dmVfc3RhdGUnIChhbmQgYWxzbyBtYWtpbmcNCj4gc3VyZSB0aGVzZSBhcmUgcHJvcGVybHkgZGlz
cG9zZWQgYXQgdGhlIGVuZCBvZiB0aGUgb3BlcmF0aW9ucykgcmF0aGVyDQo+IHRoYW4ga2VlcGlu
ZyBwb2ludGVycyB0byB0aGUgc3RydWN0dXJlcyByZXR1cm5lZCBieSBsaWJ4bF9kZXZpY2VfcGNp
X2xpc3QoKS4NCj4NCj4gU2lnbmVkLW9mZi1ieTogUGF1bCBEdXJyYW50IDxwZHVycmFudEBhbWF6
b24uY29tPg0KUmV2aWV3ZWQtYnk6IE9sZWtzYW5kciBBbmRydXNoY2hlbmtvIDxvbGVrc2FuZHJf
YW5kcnVzaGNoZW5rb0BlcGFtLmNvbT4NCg0KVGhhbmsgeW91LA0KDQpPbGVrc2FuZHINCg0KPiAt
LS0NCj4gQ2M6IElhbiBKYWNrc29uIDxpd2pAeGVucHJvamVjdC5vcmc+DQo+IENjOiBXZWkgTGl1
IDx3bEB4ZW4ub3JnPg0KPiBDYzogQW50aG9ueSBQRVJBUkQgPGFudGhvbnkucGVyYXJkQGNpdHJp
eC5jb20+DQo+IC0tLQ0KPiAgIHRvb2xzL2xpYnMvbGlnaHQvbGlieGxfcGNpLmMgfCA2OCArKysr
KysrKysrKysrKysrKysrKysrKystLS0tLS0tLS0tLS0tLS0tLS0tLQ0KPiAgIHRvb2xzL3hsL3hs
X3BjaS5jICAgICAgICAgICAgfCAgMyArLQ0KPiAgIDIgZmlsZXMgY2hhbmdlZCwgMzggaW5zZXJ0
aW9ucygrKSwgMzMgZGVsZXRpb25zKC0pDQo+DQo+IGRpZmYgLS1naXQgYS90b29scy9saWJzL2xp
Z2h0L2xpYnhsX3BjaS5jIGIvdG9vbHMvbGlicy9saWdodC9saWJ4bF9wY2kuYw0KPiBpbmRleCBk
M2M3YTU0N2MzLi4wZjQxOTM5ZDFmIDEwMDY0NA0KPiAtLS0gYS90b29scy9saWJzL2xpZ2h0L2xp
YnhsX3BjaS5jDQo+ICsrKyBiL3Rvb2xzL2xpYnMvbGlnaHQvbGlieGxfcGNpLmMNCj4gQEAgLTEw
MjUsNyArMTAyNSw3IEBAIHR5cGVkZWYgc3RydWN0IHBjaV9hZGRfc3RhdGUgew0KPiAgICAgICBs
aWJ4bF9feHN3YWl0X3N0YXRlIHhzd2FpdDsNCj4gICAgICAgbGlieGxfX2V2X3FtcCBxbXA7DQo+
ICAgICAgIGxpYnhsX19ldl90aW1lIHRpbWVvdXQ7DQo+IC0gICAgbGlieGxfZGV2aWNlX3BjaSAq
cGNpOw0KPiArICAgIGxpYnhsX2RldmljZV9wY2kgcGNpOw0KPiAgICAgICBsaWJ4bF9kb21pZCBw
Y2lfZG9taWQ7DQo+ICAgfSBwY2lfYWRkX3N0YXRlOw0KPiAgIA0KPiBAQCAtMTA5Nyw3ICsxMDk3
LDcgQEAgc3RhdGljIHZvaWQgcGNpX2FkZF9xZW11X3RyYWRfd2F0Y2hfc3RhdGVfY2IobGlieGxf
X2VnYyAqZWdjLA0KPiAgIA0KPiAgICAgICAvKiBDb252ZW5pZW5jZSBhbGlhc2VzICovDQo+ICAg
ICAgIGxpYnhsX2RvbWlkIGRvbWlkID0gcGFzLT5kb21pZDsNCj4gLSAgICBsaWJ4bF9kZXZpY2Vf
cGNpICpwY2kgPSBwYXMtPnBjaTsNCj4gKyAgICBsaWJ4bF9kZXZpY2VfcGNpICpwY2kgPSAmcGFz
LT5wY2k7DQo+ICAgDQo+ICAgICAgIHJjID0gY2hlY2tfcWVtdV9ydW5uaW5nKGdjLCBkb21pZCwg
eHN3YSwgcmMsIHN0YXRlKTsNCj4gICAgICAgaWYgKHJjID09IEVSUk9SX05PVF9SRUFEWSkNCj4g
QEAgLTExMTgsNyArMTExOCw3IEBAIHN0YXRpYyB2b2lkIHBjaV9hZGRfcW1wX2RldmljZV9hZGQo
bGlieGxfX2VnYyAqZWdjLCBwY2lfYWRkX3N0YXRlICpwYXMpDQo+ICAgDQo+ICAgICAgIC8qIENv
bnZlbmllbmNlIGFsaWFzZXMgKi8NCj4gICAgICAgbGlieGxfZG9taWQgZG9taWQgPSBwYXMtPmRv
bWlkOw0KPiAtICAgIGxpYnhsX2RldmljZV9wY2kgKnBjaSA9IHBhcy0+cGNpOw0KPiArICAgIGxp
YnhsX2RldmljZV9wY2kgKnBjaSA9ICZwYXMtPnBjaTsNCj4gICAgICAgbGlieGxfX2V2X3FtcCAq
Y29uc3QgcW1wID0gJnBhcy0+cW1wOw0KPiAgIA0KPiAgICAgICByYyA9IGxpYnhsX19ldl90aW1l
X3JlZ2lzdGVyX3JlbChhbywgJnBhcy0+dGltZW91dCwNCj4gQEAgLTExOTksNyArMTE5OSw3IEBA
IHN0YXRpYyB2b2lkIHBjaV9hZGRfcW1wX3F1ZXJ5X3BjaV9jYihsaWJ4bF9fZWdjICplZ2MsDQo+
ICAgICAgIGludCBkZXZfc2xvdCwgZGV2X2Z1bmM7DQo+ICAgDQo+ICAgICAgIC8qIENvbnZlbmll
bmNlIGFsaWFzZXMgKi8NCj4gLSAgICBsaWJ4bF9kZXZpY2VfcGNpICpwY2kgPSBwYXMtPnBjaTsN
Cj4gKyAgICBsaWJ4bF9kZXZpY2VfcGNpICpwY2kgPSAmcGFzLT5wY2k7DQo+ICAgDQo+ICAgICAg
IGlmIChyYykgZ290byBvdXQ7DQo+ICAgDQo+IEBAIC0xMzAwLDcgKzEzMDAsNyBAQCBzdGF0aWMg
dm9pZCBwY2lfYWRkX2RtX2RvbmUobGlieGxfX2VnYyAqZWdjLA0KPiAgIA0KPiAgICAgICAvKiBD
b252ZW5pZW5jZSBhbGlhc2VzICovDQo+ICAgICAgIGJvb2wgc3RhcnRpbmcgPSBwYXMtPnN0YXJ0
aW5nOw0KPiAtICAgIGxpYnhsX2RldmljZV9wY2kgKnBjaSA9IHBhcy0+cGNpOw0KPiArICAgIGxp
YnhsX2RldmljZV9wY2kgKnBjaSA9ICZwYXMtPnBjaTsNCj4gICAgICAgYm9vbCBodm0gPSBsaWJ4
bF9fZG9tYWluX3R5cGUoZ2MsIGRvbWlkKSA9PSBMSUJYTF9ET01BSU5fVFlQRV9IVk07DQo+ICAg
DQo+ICAgICAgIGxpYnhsX19ldl9xbXBfZGlzcG9zZShnYywgJnBhcy0+cW1wKTsNCj4gQEAgLTE1
MTYsNyArMTUxNiwxMCBAQCB2b2lkIGxpYnhsX19kZXZpY2VfcGNpX2FkZChsaWJ4bF9fZWdjICpl
Z2MsIHVpbnQzMl90IGRvbWlkLA0KPiAgICAgICBHQ05FVyhwYXMpOw0KPiAgICAgICBwYXMtPmFv
ZGV2ID0gYW9kZXY7DQo+ICAgICAgIHBhcy0+ZG9taWQgPSBkb21pZDsNCj4gLSAgICBwYXMtPnBj
aSA9IHBjaTsNCj4gKw0KPiArICAgIGxpYnhsX2RldmljZV9wY2lfY29weShDVFgsICZwYXMtPnBj
aSwgcGNpKTsNCj4gKyAgICBwY2kgPSAmcGFzLT5wY2k7DQo+ICsNCj4gICAgICAgcGFzLT5zdGFy
dGluZyA9IHN0YXJ0aW5nOw0KPiAgICAgICBwYXMtPmNhbGxiYWNrID0gZGV2aWNlX3BjaV9hZGRf
c3R1YmRvbV9kb25lOw0KPiAgIA0KPiBAQCAtMTU1NSwxMiArMTU1OCw2IEBAIHZvaWQgbGlieGxf
X2RldmljZV9wY2lfYWRkKGxpYnhsX19lZ2MgKmVnYywgdWludDMyX3QgZG9taWQsDQo+ICAgDQo+
ICAgICAgIHN0dWJkb21pZCA9IGxpYnhsX2dldF9zdHViZG9tX2lkKGN0eCwgZG9taWQpOw0KPiAg
ICAgICBpZiAoc3R1YmRvbWlkICE9IDApIHsNCj4gLSAgICAgICAgbGlieGxfZGV2aWNlX3BjaSAq
cGNpX3M7DQo+IC0NCj4gLSAgICAgICAgR0NORVcocGNpX3MpOw0KPiAtICAgICAgICBsaWJ4bF9k
ZXZpY2VfcGNpX2luaXQocGNpX3MpOw0KPiAtICAgICAgICBsaWJ4bF9kZXZpY2VfcGNpX2NvcHko
Q1RYLCBwY2lfcywgcGNpKTsNCj4gLSAgICAgICAgcGFzLT5wY2kgPSBwY2lfczsNCj4gICAgICAg
ICAgIHBhcy0+Y2FsbGJhY2sgPSBkZXZpY2VfcGNpX2FkZF9zdHViZG9tX3dhaXQ7DQo+ICAgDQo+
ICAgICAgICAgICBkb19wY2lfYWRkKGVnYywgc3R1YmRvbWlkLCBwYXMpOyAvKiBtdXN0IGJlIGxh
c3QgKi8NCj4gQEAgLTE2MTksNyArMTYxNiw3IEBAIHN0YXRpYyB2b2lkIGRldmljZV9wY2lfYWRk
X3N0dWJkb21fZG9uZShsaWJ4bF9fZWdjICplZ2MsDQo+ICAgDQo+ICAgICAgIC8qIENvbnZlbmll
bmNlIGFsaWFzZXMgKi8NCj4gICAgICAgbGlieGxfZG9taWQgZG9taWQgPSBwYXMtPmRvbWlkOw0K
PiAtICAgIGxpYnhsX2RldmljZV9wY2kgKnBjaSA9IHBhcy0+cGNpOw0KPiArICAgIGxpYnhsX2Rl
dmljZV9wY2kgKnBjaSA9ICZwYXMtPnBjaTsNCj4gICANCj4gICAgICAgaWYgKHJjKSBnb3RvIG91
dDsNCj4gICANCj4gQEAgLTE2NzAsNyArMTY2Nyw3IEBAIHN0YXRpYyB2b2lkIGRldmljZV9wY2lf
YWRkX2RvbmUobGlieGxfX2VnYyAqZWdjLA0KPiAgICAgICBFR0NfR0M7DQo+ICAgICAgIGxpYnhs
X19hb19kZXZpY2UgKmFvZGV2ID0gcGFzLT5hb2RldjsNCj4gICAgICAgbGlieGxfZG9taWQgZG9t
aWQgPSBwYXMtPmRvbWlkOw0KPiAtICAgIGxpYnhsX2RldmljZV9wY2kgKnBjaSA9IHBhcy0+cGNp
Ow0KPiArICAgIGxpYnhsX2RldmljZV9wY2kgKnBjaSA9ICZwYXMtPnBjaTsNCj4gICANCj4gICAg
ICAgaWYgKHJjKSB7DQo+ICAgICAgICAgICBMT0dEKEVSUk9SLCBkb21pZCwNCj4gQEAgLTE2ODAs
NiArMTY3Nyw3IEBAIHN0YXRpYyB2b2lkIGRldmljZV9wY2lfYWRkX2RvbmUobGlieGxfX2VnYyAq
ZWdjLA0KPiAgICAgICAgICAgICAgICByYyk7DQo+ICAgICAgICAgICBwY2lfaW5mb194c19yZW1v
dmUoZ2MsIHBjaSwgImRvbWlkIik7DQo+ICAgICAgIH0NCj4gKyAgICBsaWJ4bF9kZXZpY2VfcGNp
X2Rpc3Bvc2UocGNpKTsNCj4gICAgICAgYW9kZXYtPnJjID0gcmM7DQo+ICAgICAgIGFvZGV2LT5j
YWxsYmFjayhlZ2MsIGFvZGV2KTsNCj4gICB9DQo+IEBAIC0xNzcwLDcgKzE3NjgsNyBAQCBzdGF0
aWMgaW50IHFlbXVfcGNpX3JlbW92ZV94ZW5zdG9yZShsaWJ4bF9fZ2MgKmdjLCB1aW50MzJfdCBk
b21pZCwNCj4gICB0eXBlZGVmIHN0cnVjdCBwY2lfcmVtb3ZlX3N0YXRlIHsNCj4gICAgICAgbGli
eGxfX2FvX2RldmljZSAqYW9kZXY7DQo+ICAgICAgIGxpYnhsX2RvbWlkIGRvbWlkOw0KPiAtICAg
IGxpYnhsX2RldmljZV9wY2kgKnBjaTsNCj4gKyAgICBsaWJ4bF9kZXZpY2VfcGNpIHBjaTsNCj4g
ICAgICAgYm9vbCBmb3JjZTsNCj4gICAgICAgYm9vbCBodm07DQo+ICAgICAgIHVuc2lnbmVkIGlu
dCBvcmlnX3ZkZXY7DQo+IEBAIC0xODEyLDIzICsxODEwLDI2IEBAIHN0YXRpYyB2b2lkIGRvX3Bj
aV9yZW1vdmUobGlieGxfX2VnYyAqZWdjLCBwY2lfcmVtb3ZlX3N0YXRlICpwcnMpDQo+ICAgew0K
PiAgICAgICBTVEFURV9BT19HQyhwcnMtPmFvZGV2LT5hbyk7DQo+ICAgICAgIGxpYnhsX2N0eCAq
Y3R4ID0gbGlieGxfX2djX293bmVyKGdjKTsNCj4gLSAgICBsaWJ4bF9kZXZpY2VfcGNpICphc3Np
Z25lZDsNCj4gKyAgICBsaWJ4bF9kZXZpY2VfcGNpICpwY2lzOw0KPiArICAgIGJvb2wgYXR0YWNo
ZWQ7DQo+ICAgICAgIHVpbnQzMl90IGRvbWlkID0gcHJzLT5kb21pZDsNCj4gICAgICAgbGlieGxf
ZG9tYWluX3R5cGUgdHlwZSA9IGxpYnhsX19kb21haW5fdHlwZShnYywgZG9taWQpOw0KPiAtICAg
IGxpYnhsX2RldmljZV9wY2kgKnBjaSA9IHBycy0+cGNpOw0KPiArICAgIGxpYnhsX2RldmljZV9w
Y2kgKnBjaSA9ICZwcnMtPnBjaTsNCj4gICAgICAgaW50IHJjLCBudW07DQo+ICAgICAgIHVpbnQz
Ml90IGRvbWFpbmlkID0gZG9taWQ7DQo+ICAgDQo+IC0gICAgYXNzaWduZWQgPSBsaWJ4bF9kZXZp
Y2VfcGNpX2xpc3QoY3R4LCBkb21pZCwgJm51bSk7DQo+IC0gICAgaWYgKGFzc2lnbmVkID09IE5V
TEwpIHsNCj4gKyAgICBwY2lzID0gbGlieGxfZGV2aWNlX3BjaV9saXN0KGN0eCwgZG9taWQsICZu
dW0pOw0KPiArICAgIGlmICghcGNpcykgew0KPiAgICAgICAgICAgcmMgPSBFUlJPUl9GQUlMOw0K
PiAgICAgICAgICAgZ290byBvdXRfZmFpbDsNCj4gICAgICAgfQ0KPiAtICAgIGxpYnhsX19wdHJf
YWRkKGdjLCBhc3NpZ25lZCk7DQo+ICsNCj4gKyAgICBhdHRhY2hlZCA9IGlzX3BjaV9pbl9hcnJh
eShwY2lzLCBudW0sIHBjaS0+ZG9tYWluLA0KPiArICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIHBjaS0+YnVzLCBwY2ktPmRldiwgcGNpLT5mdW5jKTsNCj4gKyAgICBsaWJ4bF9kZXZpY2Vf
cGNpX2xpc3RfZnJlZShwY2lzLCBudW0pOw0KPiAgIA0KPiAgICAgICByYyA9IEVSUk9SX0lOVkFM
Ow0KPiAtICAgIGlmICggIWlzX3BjaV9pbl9hcnJheShhc3NpZ25lZCwgbnVtLCBwY2ktPmRvbWFp
biwNCj4gLSAgICAgICAgICAgICAgICAgICAgICAgICAgcGNpLT5idXMsIHBjaS0+ZGV2LCBwY2kt
PmZ1bmMpICkgew0KPiArICAgIGlmICghYXR0YWNoZWQpIHsNCj4gICAgICAgICAgIExPR0QoRVJS
T1IsIGRvbWFpbmlkLCAiUENJIGRldmljZSBub3QgYXR0YWNoZWQgdG8gdGhpcyBkb21haW4iKTsN
Cj4gICAgICAgICAgIGdvdG8gb3V0X2ZhaWw7DQo+ICAgICAgIH0NCj4gQEAgLTE5MjgsNyArMTky
OSw3IEBAIHN0YXRpYyB2b2lkIHBjaV9yZW1vdmVfcWVtdV90cmFkX3dhdGNoX3N0YXRlX2NiKGxp
YnhsX19lZ2MgKmVnYywNCj4gICANCj4gICAgICAgLyogQ29udmVuaWVuY2UgYWxpYXNlcyAqLw0K
PiAgICAgICBsaWJ4bF9kb21pZCBkb21pZCA9IHBycy0+ZG9taWQ7DQo+IC0gICAgbGlieGxfZGV2
aWNlX3BjaSAqY29uc3QgcGNpID0gcHJzLT5wY2k7DQo+ICsgICAgbGlieGxfZGV2aWNlX3BjaSAq
Y29uc3QgcGNpID0gJnBycy0+cGNpOw0KPiAgIA0KPiAgICAgICByYyA9IGNoZWNrX3FlbXVfcnVu
bmluZyhnYywgZG9taWQsIHhzd2EsIHJjLCBzdGF0ZSk7DQo+ICAgICAgIGlmIChyYyA9PSBFUlJP
Ul9OT1RfUkVBRFkpDQo+IEBAIC0xOTUwLDcgKzE5NTEsNyBAQCBzdGF0aWMgdm9pZCBwY2lfcmVt
b3ZlX3FtcF9kZXZpY2VfZGVsKGxpYnhsX19lZ2MgKmVnYywNCj4gICAgICAgaW50IHJjOw0KPiAg
IA0KPiAgICAgICAvKiBDb252ZW5pZW5jZSBhbGlhc2VzICovDQo+IC0gICAgbGlieGxfZGV2aWNl
X3BjaSAqY29uc3QgcGNpID0gcHJzLT5wY2k7DQo+ICsgICAgbGlieGxfZGV2aWNlX3BjaSAqY29u
c3QgcGNpID0gJnBycy0+cGNpOw0KPiAgIA0KPiAgICAgICByYyA9IGxpYnhsX19ldl90aW1lX3Jl
Z2lzdGVyX3JlbChhbywgJnBycy0+dGltZW91dCwNCj4gICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgcGNpX3JlbW92ZV90aW1lb3V0LA0KPiBAQCAtMjAyMCw3ICsyMDIxLDcg
QEAgc3RhdGljIHZvaWQgcGNpX3JlbW92ZV9xbXBfcXVlcnlfY2IobGlieGxfX2VnYyAqZWdjLA0K
PiAgIA0KPiAgICAgICAvKiBDb252ZW5pZW5jZSBhbGlhc2VzICovDQo+ICAgICAgIGxpYnhsX19h
byAqY29uc3QgYW8gPSBwcnMtPmFvZGV2LT5hbzsNCj4gLSAgICBsaWJ4bF9kZXZpY2VfcGNpICpj
b25zdCBwY2kgPSBwcnMtPnBjaTsNCj4gKyAgICBsaWJ4bF9kZXZpY2VfcGNpICpjb25zdCBwY2kg
PSAmcHJzLT5wY2k7DQo+ICAgDQo+ICAgICAgIGlmIChyYykgZ290byBvdXQ7DQo+ICAgDQo+IEBA
IC0yMDc1LDcgKzIwNzYsNyBAQCBzdGF0aWMgdm9pZCBwY2lfcmVtb3ZlX3RpbWVvdXQobGlieGxf
X2VnYyAqZWdjLCBsaWJ4bF9fZXZfdGltZSAqZXYsDQo+ICAgICAgIHBjaV9yZW1vdmVfc3RhdGUg
KnBycyA9IENPTlRBSU5FUl9PRihldiwgKnBycywgdGltZW91dCk7DQo+ICAgDQo+ICAgICAgIC8q
IENvbnZlbmllbmNlIGFsaWFzZXMgKi8NCj4gLSAgICBsaWJ4bF9kZXZpY2VfcGNpICpjb25zdCBw
Y2kgPSBwcnMtPnBjaTsNCj4gKyAgICBsaWJ4bF9kZXZpY2VfcGNpICpjb25zdCBwY2kgPSAmcHJz
LT5wY2k7DQo+ICAgDQo+ICAgICAgIExPR0QoV0FSTiwgcHJzLT5kb21pZCwgInRpbWVkIG91dCB3
YWl0aW5nIGZvciBETSB0byByZW1vdmUgIg0KPiAgICAgICAgICAgIFBDSV9QVF9RREVWX0lELCBw
Y2ktPmJ1cywgcGNpLT5kZXYsIHBjaS0+ZnVuYyk7DQo+IEBAIC0yMDk2LDcgKzIwOTcsNyBAQCBz
dGF0aWMgdm9pZCBwY2lfcmVtb3ZlX2RldGFjaGVkKGxpYnhsX19lZ2MgKmVnYywNCj4gICAgICAg
Ym9vbCBpc3N0dWJkb207DQo+ICAgDQo+ICAgICAgIC8qIENvbnZlbmllbmNlIGFsaWFzZXMgKi8N
Cj4gLSAgICBsaWJ4bF9kZXZpY2VfcGNpICpjb25zdCBwY2kgPSBwcnMtPnBjaTsNCj4gKyAgICBs
aWJ4bF9kZXZpY2VfcGNpICpjb25zdCBwY2kgPSAmcHJzLT5wY2k7DQo+ICAgICAgIGxpYnhsX2Rv
bWlkIGRvbWlkID0gcHJzLT5kb21pZDsNCj4gICANCj4gICAgICAgLyogQ2xlYW5pbmcgUU1QIHN0
YXRlcyBBU0FQICovDQo+IEBAIC0yMTU5LDcgKzIxNjAsNyBAQCBzdGF0aWMgdm9pZCBwY2lfcmVt
b3ZlX2RvbmUobGlieGxfX2VnYyAqZWdjLA0KPiAgIA0KPiAgICAgICBpZiAocmMpIGdvdG8gb3V0
Ow0KPiAgIA0KPiAtICAgIGxpYnhsX19kZXZpY2VfcGNpX3JlbW92ZV94ZW5zdG9yZShnYywgcHJz
LT5kb21pZCwgcHJzLT5wY2kpOw0KPiArICAgIGxpYnhsX19kZXZpY2VfcGNpX3JlbW92ZV94ZW5z
dG9yZShnYywgcHJzLT5kb21pZCwgJnBycy0+cGNpKTsNCj4gICBvdXQ6DQo+ICAgICAgIGRldmlj
ZV9wY2lfcmVtb3ZlX2NvbW1vbl9uZXh0KGVnYywgcHJzLCByYyk7DQo+ICAgfQ0KPiBAQCAtMjE3
Nyw3ICsyMTc4LDEwIEBAIHN0YXRpYyB2b2lkIGxpYnhsX19kZXZpY2VfcGNpX3JlbW92ZV9jb21t
b24obGlieGxfX2VnYyAqZWdjLA0KPiAgICAgICBHQ05FVyhwcnMpOw0KPiAgICAgICBwcnMtPmFv
ZGV2ID0gYW9kZXY7DQo+ICAgICAgIHBycy0+ZG9taWQgPSBkb21pZDsNCj4gLSAgICBwcnMtPnBj
aSA9IHBjaTsNCj4gKw0KPiArICAgIGxpYnhsX2RldmljZV9wY2lfY29weShDVFgsICZwcnMtPnBj
aSwgcGNpKTsNCj4gKyAgICBwY2kgPSAmcHJzLT5wY2k7DQo+ICsNCj4gICAgICAgcHJzLT5mb3Jj
ZSA9IGZvcmNlOw0KPiAgICAgICBsaWJ4bF9feHN3YWl0X2luaXQoJnBycy0+eHN3YWl0KTsNCj4g
ICAgICAgbGlieGxfX2V2X3FtcF9pbml0KCZwcnMtPnFtcCk7DQo+IEBAIC0yMjEyLDcgKzIyMTYs
NyBAQCBzdGF0aWMgdm9pZCBkZXZpY2VfcGNpX3JlbW92ZV9jb21tb25fbmV4dChsaWJ4bF9fZWdj
ICplZ2MsDQo+ICAgICAgIEVHQ19HQzsNCj4gICANCj4gICAgICAgLyogQ29udmVuaWVuY2UgYWxp
YXNlcyAqLw0KPiAtICAgIGxpYnhsX2RldmljZV9wY2kgKmNvbnN0IHBjaSA9IHBycy0+cGNpOw0K
PiArICAgIGxpYnhsX2RldmljZV9wY2kgKmNvbnN0IHBjaSA9ICZwcnMtPnBjaTsNCj4gICAgICAg
bGlieGxfX2FvX2RldmljZSAqY29uc3QgYW9kZXYgPSBwcnMtPmFvZGV2Ow0KPiAgICAgICBjb25z
dCB1bnNpZ25lZCBpbnQgcGZ1bmNfbWFzayA9IHBycy0+cGZ1bmNfbWFzazsNCj4gICAgICAgY29u
c3QgdW5zaWduZWQgaW50IG9yaWdfdmRldiA9IHBycy0+b3JpZ192ZGV2Ow0KPiBAQCAtMjI0Myw2
ICsyMjQ3LDcgQEAgb3V0Og0KPiAgIA0KPiAgICAgICBpZiAoIXJjKSBwY2lfaW5mb194c19yZW1v
dmUoZ2MsIHBjaSwgImRvbWlkIik7DQo+ICAgDQo+ICsgICAgbGlieGxfZGV2aWNlX3BjaV9kaXNw
b3NlKHBjaSk7DQo+ICAgICAgIGFvZGV2LT5yYyA9IHJjOw0KPiAgICAgICBhb2Rldi0+Y2FsbGJh
Y2soZWdjLCBhb2Rldik7DQo+ICAgfQ0KPiBAQCAtMjM1Nyw3ICsyMzYyLDYgQEAgdm9pZCBsaWJ4
bF9fZGV2aWNlX3BjaV9kZXN0cm95X2FsbChsaWJ4bF9fZWdjICplZ2MsIHVpbnQzMl90IGRvbWlk
LA0KPiAgICAgICBwY2lzID0gbGlieGxfZGV2aWNlX3BjaV9saXN0KENUWCwgZG9taWQsICZudW0p
Ow0KPiAgICAgICBpZiAoIHBjaXMgPT0gTlVMTCApDQo+ICAgICAgICAgICByZXR1cm47DQo+IC0g
ICAgbGlieGxfX3B0cl9hZGQoZ2MsIHBjaXMpOw0KPiAgIA0KPiAgICAgICBmb3IgKGkgPSAwOyBp
IDwgbnVtOyBpKyspIHsNCj4gICAgICAgICAgIC8qIEZvcmNlIHJlbW92ZSBvbiBzaHV0ZG93biBz
aW5jZSwgb24gSFZNLCBxZW11IHdpbGwgbm90IGFsd2F5cw0KPiBAQCAtMjM2OCw2ICsyMzcyLDgg
QEAgdm9pZCBsaWJ4bF9fZGV2aWNlX3BjaV9kZXN0cm95X2FsbChsaWJ4bF9fZWdjICplZ2MsIHVp
bnQzMl90IGRvbWlkLA0KPiAgICAgICAgICAgbGlieGxfX2RldmljZV9wY2lfcmVtb3ZlX2NvbW1v
bihlZ2MsIGRvbWlkLCBwY2lzICsgaSwgdHJ1ZSwNCj4gICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgYW9kZXYpOw0KPiAgICAgICB9DQo+ICsNCj4gKyAgICBsaWJ4bF9k
ZXZpY2VfcGNpX2xpc3RfZnJlZShwY2lzLCBudW0pOw0KPiAgIH0NCj4gICANCj4gICBpbnQgbGli
eGxfX2dyYW50X3ZnYV9pb21lbV9wZXJtaXNzaW9uKGxpYnhsX19nYyAqZ2MsIGNvbnN0IHVpbnQz
Ml90IGRvbWlkLA0KPiBkaWZmIC0tZ2l0IGEvdG9vbHMveGwveGxfcGNpLmMgYi90b29scy94bC94
bF9wY2kuYw0KPiBpbmRleCAzNGZjZjVhNGZhLi43YzBmMTAyYWM3IDEwMDY0NA0KPiAtLS0gYS90
b29scy94bC94bF9wY2kuYw0KPiArKysgYi90b29scy94bC94bF9wY2kuYw0KPiBAQCAtMzUsOSAr
MzUsOCBAQCBzdGF0aWMgdm9pZCBwY2lsaXN0KHVpbnQzMl90IGRvbWlkKQ0KPiAgICAgICAgICAg
cHJpbnRmKCIlMDJ4LiUwMXggJTA0eDolMDJ4OiUwMnguJTAxeFxuIiwNCj4gICAgICAgICAgICAg
ICAgICAocGNpc1tpXS52ZGV2Zm4gPj4gMykgJiAweDFmLCBwY2lzW2ldLnZkZXZmbiAmIDB4NywN
Cj4gICAgICAgICAgICAgICAgICBwY2lzW2ldLmRvbWFpbiwgcGNpc1tpXS5idXMsIHBjaXNbaV0u
ZGV2LCBwY2lzW2ldLmZ1bmMpOw0KPiAtICAgICAgICBsaWJ4bF9kZXZpY2VfcGNpX2Rpc3Bvc2Uo
JnBjaXNbaV0pOw0KPiAgICAgICB9DQo+IC0gICAgZnJlZShwY2lzKTsNCj4gKyAgICBsaWJ4bF9k
ZXZpY2VfcGNpX2xpc3RfZnJlZShwY2lzLCBudW0pOw0KPiAgIH0NCj4gICANCj4gICBpbnQgbWFp
bl9wY2lsaXN0KGludCBhcmdjLCBjaGFyICoqYXJndik=


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 15:17:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 15:17:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42164.75814 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk7PS-0008Md-EI; Tue, 01 Dec 2020 15:17:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42164.75814; Tue, 01 Dec 2020 15:17:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk7PS-0008MW-AH; Tue, 01 Dec 2020 15:17:42 +0000
Received: by outflank-mailman (input) for mailman id 42164;
 Tue, 01 Dec 2020 15:17:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJqf=FF=epam.com=prvs=0604985de8=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kk7PR-0008Ln-3D
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 15:17:41 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f1745e2f-162b-4d21-8b1c-ea6f4ed8ce2b;
 Tue, 01 Dec 2020 15:17:40 +0000 (UTC)
Received: from pps.filterd (m0174676.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 0B1FG3vq031500; Tue, 1 Dec 2020 15:17:36 GMT
Received: from eur02-am5-obe.outbound.protection.outlook.com
 (mail-am5eur02lp2050.outbound.protection.outlook.com [104.47.4.50])
 by mx0a-0039f301.pphosted.com with ESMTP id 355q3mgctr-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 01 Dec 2020 15:17:35 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB4305.eurprd03.prod.outlook.com (2603:10a6:208:c0::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.25; Tue, 1 Dec
 2020 15:17:33 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%9]) with mapi id 15.20.3611.022; Tue, 1 Dec 2020
 15:17:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f1745e2f-162b-4d21-8b1c-ea6f4ed8ce2b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=LjCzCyFXBVNdrgP/GmxHq5IT53g256WQTH2C9R3SnsAanIcFRl6O//8/+iwES6pH4FOmhPXP/5Z34lcPnphExF1hoAaXo80rLH7CjtL6a8toQR9BNV0H1TS0reQBoxBZgA664knBpWkdgX0+Shaq6236qsWajKZOuK+eAvTyl7d4zHn+G6+1SN+8hOCQn2bAmWjXqWs1lBCUm14jMEyqkTaNOGa3Mi035aKXB56XqXI9BrNHZbgQA+CyvzECwzdl5m95tTTKbS+KDwg18+LDOaUndE91t9OpTjFjX3jf3040hvBpeS5KBFAg8qe6rx/fJLogXR87yEgqccz/3nVlqw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=K5rorWJok3bx4+CjDQu16nOqvt1L2HhYMKH2whDILIQ=;
 b=VdiIQdm3/u6WxD5j51N5t/2o0Ix03egx1hfIBWG6vZ9C5/UlWmDc8nHLw3L8MP1G6PskcAWopT+0Foj2PnkdD7tWj76qDQHaXv6eyUAwPYIxuZO4xnO4OJ13waRBAnZUgzMN/5JGdvBdLbqJxz1FwmZ0Ss8QXm0Lp8hYfHVfmRMGFnry6CHfprhYGPAFEzjgaMrivbVKrKvSKxFDWVoDFxPPpMJbtU0IhLlvC3W1SHrzOiQhG6x7kuSuRS0L6r6d765iulhs9Z8hn1V8IcEzGfoiDL96+k/plOtEuURVwMEYQVoBPk/WPHGmjr8WC+oMrjNRIIUr80c10rrLF3NvzQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=K5rorWJok3bx4+CjDQu16nOqvt1L2HhYMKH2whDILIQ=;
 b=XAkcuw1Whdy4JxTcqnueduZ0AbpzT72pOiVns4y/8qxylaIaovWiD2M1+UeihkDVof4KRUc/gjmsTSvAP+M3xDzd6tMil3tffn/SNR6sAzcA+KvaaPiNORGdjg3kpqa8KNJWQsUrmmErYdaGtBbbCX0dvd44JDYG6t9gZEx5gTJAptb1Djxg5X/oLXQq0XTlTkF6RSXAEvsBsHkdyJu9mCo+uwmRVsa1wOZvBYTvdwl1FKz/4HFhPCuYm+MEF5cj4nWkRvrvCJgnTQdewLCw4UoVlkHkd1dgyqXul2BsLFiZ3kaJ6ClIeJ3aWJM6jHGg/FFTd/N9tUiOoskBrL/5sQ==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Paul Durrant <paul@xen.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Paul Durrant <pdurrant@amazon.com>, Ian Jackson <iwj@xenproject.org>,
        Wei
 Liu <wl@xen.org>, Christian Lindig <christian.lindig@citrix.com>,
        David Scott
	<dave@recoil.org>,
        Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v4 12/23] libxl: add
 libxl_device_pci_assignable_list_free()...
Thread-Topic: [PATCH v4 12/23] libxl: add
 libxl_device_pci_assignable_list_free()...
Thread-Index: AQHWx/UXQdVidg9hUkmTJ/tMtrHCAA==
Date: Tue, 1 Dec 2020 15:17:33 +0000
Message-ID: <a4135498-f536-a76a-82c3-20f618f7b502@epam.com>
References: <20201124080159.11912-1-paul@xen.org>
 <20201124080159.11912-13-paul@xen.org>
In-Reply-To: <20201124080159.11912-13-paul@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 0b03b731-041a-4f61-c052-08d8960c3a1b
x-ms-traffictypediagnostic: AM0PR03MB4305:
x-microsoft-antispam-prvs: 
 <AM0PR03MB4305EB97085F2ABE65ADC5A3E7F40@AM0PR03MB4305.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:7691;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 Z7dkVT4P3zPbeQg41QCo9Bbz4O9s8BLcZ9p8z29llh9xvNZbkF9kCVe7kV6rgUZTfSrCMvi2stKLuQmly8a7Y+WwPzvkppnJmIWI6EyWJEj7g2HrfgdgYcEBzKUoXQijlNc2AF4V6+XQ+M5HotM3pvF881tkIbivm08xQP146WZhCLUQfJcXjHrPjqOcgi/GFbGw+8vN3I1xGMr4xBbMGyLW/nTNIhem8f1HvL4syfkKf6KteCf0uMQd8JgQAmaO1jztt8+uPVNWyGRSNxxmJ8He1+kPCxuXH73S5fVaFC2AvfEwMR2dqeq/rzoe+S3dyqespfeaFWR5yHmokQ6zUA==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(6512007)(8936002)(6506007)(498600001)(8676002)(71200400001)(83380400001)(64756008)(26005)(53546011)(6486002)(76116006)(66946007)(186003)(110136005)(66556008)(5660300002)(66446008)(31696002)(2906002)(54906003)(66476007)(2616005)(31686004)(86362001)(36756003)(4326008);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 =?utf-8?B?UnFSRW12VFBsUXZCM2ZxV1BxVHJ3dTF4ZlZjSFBHbFdGNEw2NnVGNmZobkdL?=
 =?utf-8?B?UGxCN3JWRFhjQUE2ODAvNWpkVkU4RGgrSW9qaExhUDR1eTBsVnZkRGx5aWtj?=
 =?utf-8?B?UUNiczI2STZwNUI3NElNRXVZWkhkMWpwTEVHbDZwRG5XUTdNMzUxVnZydnFl?=
 =?utf-8?B?c2xIL0VTbndON0p3WG1JVDRmdnZ1UVRvTEhhajFmTnhCc2VxZ2RTUmJQQkdZ?=
 =?utf-8?B?YXVzMGl6U2hPYzd1SHdBdUpJL2U4c051dXMvTVlQSGJmcXdvL2Y0b3NsSm9s?=
 =?utf-8?B?RkFCS1hMZHI2U1RGbzhKOWpXaSt3VmgwTWN0WS9qMGc0WHhyRzk5VHV5aDYx?=
 =?utf-8?B?aHliK1pUSU1YR0RVVHV6cDRvZE1qL1RXR0hEVU80VytmUldTUDFEeGFPaWJB?=
 =?utf-8?B?b2xQa1Y0QUM2YjVaSnRqTHlBampyNlczcUYyMkpQRnE1UmJTV3JwZEdmOTMv?=
 =?utf-8?B?VWh3elI4NFNLdTlzZGZSSlQ5eDF5aVg1L3IvTEtjeWNNMkhpVWJ1V3d2eEhU?=
 =?utf-8?B?NnVuRng2ZkQzUCtYL2NnM2o2bmNWQlhyWHlwNDl4aURpNWJuS205aW1SQWtN?=
 =?utf-8?B?NVpMc2ZjTXBwMmJ2a2N4SFF0ZWxDcGdlWC9JZWxUUytoU2RhcTM0TXhuZjBz?=
 =?utf-8?B?S2E3Qk1sMjN4VFFucytpcnBYN29Mc1pFWCtyTVNLR3lDeGtieXg1a29Vcmln?=
 =?utf-8?B?dmJTRUI3djllQ1NJU2RoUTdZNTZWOEVjTlRweHYySGxTWTMvbHoyWWc5WHdN?=
 =?utf-8?B?VE5tRXRTcCtZRlNmMnh2eTgzSitrZGxZQVVjWklMMGg4bjU1eC9YRWZsLzRY?=
 =?utf-8?B?STlkaVZNVC9BQ1huSU1RdmEzNm1LS0NoRFhXTll4bUtZT0tnRk13MXVRakFE?=
 =?utf-8?B?d0RuTjJ2NnJSR1lxekZ0RHFXaURFRHUxeXhFS0RmWWp1UnZYU0xDRXFWVkVN?=
 =?utf-8?B?Ym12Z2dkQlgvSzFrZkJWd1NwaC8vTnVPVXBMRkpZdFR0UHZObnlsV1NjbW5v?=
 =?utf-8?B?RG1rR2xGT3h3K1d3d1ZJZllhOG1oTEF2TWxlWmQ2YUdCU1pMRnVGRGVuZVFG?=
 =?utf-8?B?dmRyVWlTbG1TZlFzeDR3NWN5SmRUS2ZveHJCTEdTTFREU25LWkZpSlIwV1J3?=
 =?utf-8?B?OGRFbkNqdkU3WDdqNyt0NzRWTFYwSk0yZGUzSG9TMGhnZkg0aXYxcGJ1SHBE?=
 =?utf-8?B?R1g1N050SFdOT1VXQmU3OUhtL0g5UTJQaDhHTXRFZmNOY2lMc3RaajNhaHZQ?=
 =?utf-8?B?QnBMK1czZ2Rtc3pPV2FiMmh3ZmViUnNpeFQ2V0wrdUc0Y01YQ2k0WFA5UmVN?=
 =?utf-8?Q?RAfz0EZpVcsnGYAvkIUXssZd3ucWi4/IS9?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <52A19BCDA4102B4DA5FEEA5BFED035A4@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0b03b731-041a-4f61-c052-08d8960c3a1b
X-MS-Exchange-CrossTenant-originalarrivaltime: 01 Dec 2020 15:17:33.0969
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: fDpuwnZcnC+0xf+gc2nl9GRfceik0aer5+NFk2lsOLODXYd5bR/pmMQW8cxJ0D2kS/a+CeejB/tHNFyZo+ewbEFXoejSqKgeClSDHtu8R9TBFIaY7Usku96x+ox2qORX
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB4305
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-12-01_07:2020-11-30,2020-12-01 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 malwarescore=0
 mlxlogscore=999 priorityscore=1501 bulkscore=0 phishscore=0 spamscore=0
 adultscore=0 mlxscore=0 suspectscore=0 lowpriorityscore=0 clxscore=1015
 impostorscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2012010098

SGksIFBhdWwhDQoNCk9uIDExLzI0LzIwIDEwOjAxIEFNLCBQYXVsIER1cnJhbnQgd3JvdGU6DQo+
IEZyb206IFBhdWwgRHVycmFudCA8cGR1cnJhbnRAYW1hem9uLmNvbT4NCj4NCj4gLi4uIHRvIGJl
IHVzZWQgYnkgY2FsbGVycyBvZiBsaWJ4bF9kZXZpY2VfcGNpX2Fzc2lnbmFibGVfbGlzdCgpLg0K
Pg0KPiBDdXJyZW50bHkgdGhlcmUgaXMgbm8gQVBJIGZvciBjYWxsZXJzIG9mIGxpYnhsX2Rldmlj
ZV9wY2lfYXNzaWduYWJsZV9saXN0KCkNCj4gdG8gZnJlZSB0aGUgbGlzdC4gVGhlIHhsIGZ1bmN0
aW9uIHBjaWFzc2lnbmFibGVfbGlzdCgpIGNhbGxzDQo+IGxpYnhsX2RldmljZV9wY2lfZGlzcG9z
ZSgpIG9uIGVhY2ggZWxlbWVudCBvZiB0aGUgcmV0dXJuZWQgbGlzdCwgYnV0DQo+IGxpYnhsX3Bj
aV9hc3NpZ25hYmxlKCkgaW4gbGlieGxfcGNpLmMgZG9lcyBub3QuIE5laXRoZXIgZG9lcyB0aGUg
aW1wbGVtZW50YXRpb24NCj4gb2YgbGlieGxfZGV2aWNlX3BjaV9hc3NpZ25hYmxlX2xpc3QoKSBj
YWxsIGxpYnhsX2RldmljZV9wY2lfaW5pdCgpLg0KPg0KPiBUaGlzIHBhdGNoIGFkZHMgdGhlIG5l
dyBBUEkgZnVuY3Rpb24sIG1ha2VzIHN1cmUgaXQgaXMgdXNlZCBldmVyeXdoZXJlIGFuZA0KPiBh
bHNvIG1vZGlmaWVzIGxpYnhsX2RldmljZV9wY2lfYXNzaWduYWJsZV9saXN0KCkgdG8gaW5pdGlh
bGl6ZSBsaXN0DQo+IGVudHJpZXMgcmF0aGVyIHRoYW4ganVzdCB6ZXJvaW5nIHRoZW0uDQo+DQo+
IFNpZ25lZC1vZmYtYnk6IFBhdWwgRHVycmFudCA8cGR1cnJhbnRAYW1hem9uLmNvbT4NClJldmll
d2VkLWJ5OiBPbGVrc2FuZHIgQW5kcnVzaGNoZW5rbyA8b2xla3NhbmRyX2FuZHJ1c2hjaGVua29A
ZXBhbS5jb20+DQoNClRoYW5rIHlvdSwNCg0KT2xla3NhbmRyDQoNCj4gLS0tDQo+IENjOiBJYW4g
SmFja3NvbiA8aXdqQHhlbnByb2plY3Qub3JnPg0KPiBDYzogV2VpIExpdSA8d2xAeGVuLm9yZz4N
Cj4gQ2M6IENocmlzdGlhbiBMaW5kaWcgPGNocmlzdGlhbi5saW5kaWdAY2l0cml4LmNvbT4NCj4g
Q2M6IERhdmlkIFNjb3R0IDxkYXZlQHJlY29pbC5vcmc+DQo+IENjOiBBbnRob255IFBFUkFSRCA8
YW50aG9ueS5wZXJhcmRAY2l0cml4LmNvbT4NCj4gLS0tDQo+ICAgdG9vbHMvaW5jbHVkZS9saWJ4
bC5oICAgICAgICAgICAgICAgIHwgIDcgKysrKysrKw0KPiAgIHRvb2xzL2xpYnMvbGlnaHQvbGli
eGxfcGNpLmMgICAgICAgICB8IDE0ICsrKysrKysrKysrKy0tDQo+ICAgdG9vbHMvb2NhbWwvbGli
cy94bC94ZW5saWdodF9zdHVicy5jIHwgIDMgKy0tDQo+ICAgdG9vbHMveGwveGxfcGNpLmMgICAg
ICAgICAgICAgICAgICAgIHwgIDMgKy0tDQo+ICAgNCBmaWxlcyBjaGFuZ2VkLCAyMSBpbnNlcnRp
b25zKCspLCA2IGRlbGV0aW9ucygtKQ0KPg0KPiBkaWZmIC0tZ2l0IGEvdG9vbHMvaW5jbHVkZS9s
aWJ4bC5oIGIvdG9vbHMvaW5jbHVkZS9saWJ4bC5oDQo+IGluZGV4IGVlNTJkM2NmN2UuLjgyMjU4
MDlkOTQgMTAwNjQ0DQo+IC0tLSBhL3Rvb2xzL2luY2x1ZGUvbGlieGwuaA0KPiArKysgYi90b29s
cy9pbmNsdWRlL2xpYnhsLmgNCj4gQEAgLTQ1OCw2ICs0NTgsMTIgQEANCj4gICAjZGVmaW5lIExJ
QlhMX0hBVkVfREVWSUNFX1BDSV9MSVNUX0ZSRUUgMQ0KPiAgIA0KPiAgIC8qDQo+ICsgKiBMSUJY
TF9IQVZFX0RFVklDRV9QQ0lfQVNTSUdOQUJMRV9MSVNUX0ZSRUUgaW5kaWNhdGVzIHRoYXQgdGhl
DQo+ICsgKiBsaWJ4bF9kZXZpY2VfcGNpX2Fzc2lnbmFibGVfbGlzdF9mcmVlKCkgZnVuY3Rpb24g
aXMgZGVmaW5lZC4NCj4gKyAqLw0KPiArI2RlZmluZSBMSUJYTF9IQVZFX0RFVklDRV9QQ0lfQVNT
SUdOQUJMRV9MSVNUX0ZSRUUgMQ0KPiArDQo+ICsvKg0KPiAgICAqIGxpYnhsIEFCSSBjb21wYXRp
YmlsaXR5DQo+ICAgICoNCj4gICAgKiBUaGUgb25seSBndWFyYW50ZWUgd2hpY2ggbGlieGwgbWFr
ZXMgcmVnYXJkaW5nIEFCSSBjb21wYXRpYmlsaXR5DQo+IEBAIC0yMzY5LDYgKzIzNzUsNyBAQCBp
bnQgbGlieGxfZGV2aWNlX2V2ZW50c19oYW5kbGVyKGxpYnhsX2N0eCAqY3R4LA0KPiAgIGludCBs
aWJ4bF9kZXZpY2VfcGNpX2Fzc2lnbmFibGVfYWRkKGxpYnhsX2N0eCAqY3R4LCBsaWJ4bF9kZXZp
Y2VfcGNpICpwY2ksIGludCByZWJpbmQpOw0KPiAgIGludCBsaWJ4bF9kZXZpY2VfcGNpX2Fzc2ln
bmFibGVfcmVtb3ZlKGxpYnhsX2N0eCAqY3R4LCBsaWJ4bF9kZXZpY2VfcGNpICpwY2ksIGludCBy
ZWJpbmQpOw0KPiAgIGxpYnhsX2RldmljZV9wY2kgKmxpYnhsX2RldmljZV9wY2lfYXNzaWduYWJs
ZV9saXN0KGxpYnhsX2N0eCAqY3R4LCBpbnQgKm51bSk7DQo+ICt2b2lkIGxpYnhsX2RldmljZV9w
Y2lfYXNzaWduYWJsZV9saXN0X2ZyZWUobGlieGxfZGV2aWNlX3BjaSAqbGlzdCwgaW50IG51bSk7
DQo+ICAgDQo+ICAgLyogQ1BVSUQgaGFuZGxpbmcgKi8NCj4gICBpbnQgbGlieGxfY3B1aWRfcGFy
c2VfY29uZmlnKGxpYnhsX2NwdWlkX3BvbGljeV9saXN0ICpjcHVpZCwgY29uc3QgY2hhciogc3Ry
KTsNCj4gZGlmZiAtLWdpdCBhL3Rvb2xzL2xpYnMvbGlnaHQvbGlieGxfcGNpLmMgYi90b29scy9s
aWJzL2xpZ2h0L2xpYnhsX3BjaS5jDQo+IGluZGV4IDBmNDE5MzlkMWYuLjVhMzM1MmMyZWMgMTAw
NjQ0DQo+IC0tLSBhL3Rvb2xzL2xpYnMvbGlnaHQvbGlieGxfcGNpLmMNCj4gKysrIGIvdG9vbHMv
bGlicy9saWdodC9saWJ4bF9wY2kuYw0KPiBAQCAtNDU3LDcgKzQ1Nyw3IEBAIGxpYnhsX2Rldmlj
ZV9wY2kgKmxpYnhsX2RldmljZV9wY2lfYXNzaWduYWJsZV9saXN0KGxpYnhsX2N0eCAqY3R4LCBp
bnQgKm51bSkNCj4gICAgICAgICAgIHBjaXMgPSBuZXc7DQo+ICAgICAgICAgICBuZXcgPSBwY2lz
ICsgKm51bTsNCj4gICANCj4gLSAgICAgICAgbWVtc2V0KG5ldywgMCwgc2l6ZW9mKCpuZXcpKTsN
Cj4gKyAgICAgICAgbGlieGxfZGV2aWNlX3BjaV9pbml0KG5ldyk7DQo+ICAgICAgICAgICBwY2lf
c3RydWN0X2ZpbGwobmV3LCBkb20sIGJ1cywgZGV2LCBmdW5jLCAwKTsNCj4gICANCj4gICAgICAg
ICAgIGlmIChwY2lfaW5mb194c19yZWFkKGdjLCBuZXcsICJkb21pZCIpKSAvKiBhbHJlYWR5IGFz
c2lnbmVkICovDQo+IEBAIC00NzIsNiArNDcyLDE2IEBAIG91dDoNCj4gICAgICAgcmV0dXJuIHBj
aXM7DQo+ICAgfQ0KPiAgIA0KPiArdm9pZCBsaWJ4bF9kZXZpY2VfcGNpX2Fzc2lnbmFibGVfbGlz
dF9mcmVlKGxpYnhsX2RldmljZV9wY2kgKmxpc3QsIGludCBudW0pDQo+ICt7DQo+ICsgICAgaW50
IGk7DQo+ICsNCj4gKyAgICBmb3IgKGkgPSAwOyBpIDwgbnVtOyBpKyspDQo+ICsgICAgICAgIGxp
YnhsX2RldmljZV9wY2lfZGlzcG9zZSgmbGlzdFtpXSk7DQo+ICsNCj4gKyAgICBmcmVlKGxpc3Qp
Ow0KPiArfQ0KPiArDQo+ICAgLyogVW5iaW5kIGRldmljZSBmcm9tIGl0cyBjdXJyZW50IGRyaXZl
ciwgaWYgYW55LiAgSWYgZHJpdmVyX3BhdGggaXMgbm9uLU5VTEwsDQo+ICAgICogc3RvcmUgdGhl
IHBhdGggdG8gdGhlIG9yaWdpbmFsIGRyaXZlciBpbiBpdC4gKi8NCj4gICBzdGF0aWMgaW50IHN5
c2ZzX2Rldl91bmJpbmQobGlieGxfX2djICpnYywgbGlieGxfZGV2aWNlX3BjaSAqcGNpLA0KPiBA
QCAtMTQ5MCw3ICsxNTAwLDcgQEAgc3RhdGljIGludCBsaWJ4bF9wY2lfYXNzaWduYWJsZShsaWJ4
bF9jdHggKmN0eCwgbGlieGxfZGV2aWNlX3BjaSAqcGNpKQ0KPiAgICAgICAgICAgICAgIHBjaXNb
aV0uZnVuYyA9PSBwY2ktPmZ1bmMpDQo+ICAgICAgICAgICAgICAgYnJlYWs7DQo+ICAgICAgIH0N
Cj4gLSAgICBmcmVlKHBjaXMpOw0KPiArICAgIGxpYnhsX2RldmljZV9wY2lfYXNzaWduYWJsZV9s
aXN0X2ZyZWUocGNpcywgbnVtKTsNCj4gICAgICAgcmV0dXJuIGkgIT0gbnVtOw0KPiAgIH0NCj4g
ICANCj4gZGlmZiAtLWdpdCBhL3Rvb2xzL29jYW1sL2xpYnMveGwveGVubGlnaHRfc3R1YnMuYyBi
L3Rvb2xzL29jYW1sL2xpYnMveGwveGVubGlnaHRfc3R1YnMuYw0KPiBpbmRleCAxMTgxOTcxZGE0
Li4zNTJhMDAxMzRkIDEwMDY0NA0KPiAtLS0gYS90b29scy9vY2FtbC9saWJzL3hsL3hlbmxpZ2h0
X3N0dWJzLmMNCj4gKysrIGIvdG9vbHMvb2NhbWwvbGlicy94bC94ZW5saWdodF9zdHVicy5jDQo+
IEBAIC04OTQsOSArODk0LDggQEAgdmFsdWUgc3R1Yl94bF9kZXZpY2VfcGNpX2Fzc2lnbmFibGVf
bGlzdCh2YWx1ZSBjdHgpDQo+ICAgCQlGaWVsZChsaXN0LCAxKSA9IHRlbXA7DQo+ICAgCQl0ZW1w
ID0gbGlzdDsNCj4gICAJCVN0b3JlX2ZpZWxkKGxpc3QsIDAsIFZhbF9kZXZpY2VfcGNpKCZjX2xp
c3RbaV0pKTsNCj4gLQkJbGlieGxfZGV2aWNlX3BjaV9kaXNwb3NlKCZjX2xpc3RbaV0pOw0KPiAg
IAl9DQo+IC0JZnJlZShjX2xpc3QpOw0KPiArCWxpYnhsX2RldmljZV9wY2lfYXNzaWduYWJsZV9s
aXN0X2ZyZWUoY19saXN0LCBuYik7DQo+ICAgDQo+ICAgCUNBTUxyZXR1cm4obGlzdCk7DQo+ICAg
fQ0KPiBkaWZmIC0tZ2l0IGEvdG9vbHMveGwveGxfcGNpLmMgYi90b29scy94bC94bF9wY2kuYw0K
PiBpbmRleCA3YzBmMTAyYWM3Li5mNzE0OThjYmI1IDEwMDY0NA0KPiAtLS0gYS90b29scy94bC94
bF9wY2kuYw0KPiArKysgYi90b29scy94bC94bF9wY2kuYw0KPiBAQCAtMTY0LDkgKzE2NCw4IEBA
IHN0YXRpYyB2b2lkIHBjaWFzc2lnbmFibGVfbGlzdCh2b2lkKQ0KPiAgICAgICBmb3IgKGkgPSAw
OyBpIDwgbnVtOyBpKyspIHsNCj4gICAgICAgICAgIHByaW50ZigiJTA0eDolMDJ4OiUwMnguJTAx
eFxuIiwNCj4gICAgICAgICAgICAgICAgICBwY2lzW2ldLmRvbWFpbiwgcGNpc1tpXS5idXMsIHBj
aXNbaV0uZGV2LCBwY2lzW2ldLmZ1bmMpOw0KPiAtICAgICAgICBsaWJ4bF9kZXZpY2VfcGNpX2Rp
c3Bvc2UoJnBjaXNbaV0pOw0KPiAgICAgICB9DQo+IC0gICAgZnJlZShwY2lzKTsNCj4gKyAgICBs
aWJ4bF9kZXZpY2VfcGNpX2Fzc2lnbmFibGVfbGlzdF9mcmVlKHBjaXMsIG51bSk7DQo+ICAgfQ0K
PiAgIA0KPiAgIGludCBtYWluX3BjaWFzc2lnbmFibGVfbGlzdChpbnQgYXJnYywgY2hhciAqKmFy
Z3Yp


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 15:20:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 15:20:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42169.75826 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk7SD-0000me-Vb; Tue, 01 Dec 2020 15:20:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42169.75826; Tue, 01 Dec 2020 15:20:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk7SD-0000mX-SN; Tue, 01 Dec 2020 15:20:33 +0000
Received: by outflank-mailman (input) for mailman id 42169;
 Tue, 01 Dec 2020 15:20:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJqf=FF=epam.com=prvs=0604985de8=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kk7SB-0000mP-La
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 15:20:31 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 424ac773-b536-4afe-9604-cb4549e77c20;
 Tue, 01 Dec 2020 15:20:30 +0000 (UTC)
Received: from pps.filterd (m0174678.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 0B1FEwPJ012102; Tue, 1 Dec 2020 15:20:29 GMT
Received: from eur04-db3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2052.outbound.protection.outlook.com [104.47.12.52])
 by mx0a-0039f301.pphosted.com with ESMTP id 355q3hgdks-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 01 Dec 2020 15:20:28 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB5555.eurprd03.prod.outlook.com (2603:10a6:208:16d::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.31; Tue, 1 Dec
 2020 15:20:26 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%9]) with mapi id 15.20.3611.022; Tue, 1 Dec 2020
 15:20:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 424ac773-b536-4afe-9604-cb4549e77c20
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ne4e24vNTq7BGhOqI+cKdCEVddJykG/t/+Fb0I6Dmv5Ph/c+8GanJNKH6Sl0C0Hr837n+pEoT7AQk/dLHdUG+VQAULFJxUvfUY7Bh2ddcnvq77is3bE1g5Bjhkrg32h4zhJYEPJmcs+3XhJ6VXOYT42UA1+pkbJtsmksf5Z9C59E/6mjwwnMROeUbcNLhGJ3o7KORZuvZrcPXnWU6LOD4ojKpJMFqB+GBZPZa8lKD/u1XwV5PpHy3C2om13uXJrur8itDFcfY7K0fG0Z4K2j84Wt3Qs640rKKeHd3B1IlIDa18kLGm9t6y59E1MF1rXQG4SLvENqFF8XZ/EXAWLBhg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OpHT8MI2KZOG8h1yXjHL9YFDFZiOlGf6i/sOnfTamDQ=;
 b=NKPVK94hbnoFAYk+FbcsmsKXi4GxCM/Kh+WUcBcXndNXcUDmlIT/FRi4qvgpdqcDyAoQZBKdKC509VNN9OyVo909wbVqNoBNufdcJnQjBzdzXrjPc8Ks0v8g6IuEKGTpthg6JPp3gkv3Y8y+Iwl2VLjRrJwsco/5FVTTDs722dD+jst0pMgniUJn30PIp4Oqbdk9BH5jKNa8a38K382cDNTltt+X9mDLgTwwFf6z0PglRMVOR0Y/LlNMeVqzMLsTykXAb+h7VzWWB52VRO+ET0fvKJKxVD/qsOdLcayGOVLi56M6anX1nMRg8rTewC3CIiNMTmlKgncseCa75jLa3Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OpHT8MI2KZOG8h1yXjHL9YFDFZiOlGf6i/sOnfTamDQ=;
 b=mrMq+8hQr4uxZoK6kav6qJ79kmMt0fqf4ON2a85cE00ZS6qx4GM89rYYqSpzP2hhVF/soqFOiend0oJJoZovaidJIyDUxs9Duv6/aeAn8C1jViVXNjEsvh/tKePWqHnhnGZ5uFUiSZ9WxkHJlkEwhmtTwPH+/+Gn22U3EUBIL291JDznU+2TelLpjIJe0IE17owijkTvxRZtRULebq8lQhbHkNWCLnqrGnPZSNBpEtE48Kd61ezEU2T0s8ykNDoGO2p3Nf2xOBSJE03B3FJidmFEpK/DlxyWqlrHZ3nJdeuyOppiRcPwpwrQVr+56yHk2TGGc29F4f0qj23ECKUudQ==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Paul Durrant <paul@xen.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Paul Durrant <pdurrant@amazon.com>, Ian Jackson <iwj@xenproject.org>,
        Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH v4 13/23] libxl: use COMPARE_PCI() macro
 is_pci_in_array()...
Thread-Topic: [PATCH v4 13/23] libxl: use COMPARE_PCI() macro
 is_pci_in_array()...
Thread-Index: AQHWx/V+Aj8+RUjOSEaa4IIEDCcUXQ==
Date: Tue, 1 Dec 2020 15:20:26 +0000
Message-ID: <7951d30a-006e-2c42-2373-117decfe4508@epam.com>
References: <20201124080159.11912-1-paul@xen.org>
 <20201124080159.11912-14-paul@xen.org>
In-Reply-To: <20201124080159.11912-14-paul@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 3cbc9eba-2753-4e8e-eada-08d8960ca144
x-ms-traffictypediagnostic: AM0PR03MB5555:
x-microsoft-antispam-prvs: 
 <AM0PR03MB5555615CB3583C5002F7CD68E7F40@AM0PR03MB5555.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:5236;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 XjdTmkGz693yj+hWvniY+bkNN8DbSZ6ernHOlZT5QSOFBJC2HyqcE5fnIX4xgagoTJ+XuS0fratR0/6IVZciQFMJCxgFr012YRBDanGunqX6Wlciweh6W7l/rQoPCtPily3sGdvRMjpbwZ10fPecFwkpMtO7ewba9cE64P92G3NabSL0k86/61oEO8G4gBZyAvm7emaBUpjZGY86feReNJtW4JZGmAvmxL8ud6zdQUh0C+fc8lPXRSYAs7XvyYKEnvFRZKPIwFqvG4HxYgl8QebbiytxuNhzEjcO8xZmatkwNnaYlq+UEmEgJhd8MelbHkR8AS5pdBEpxFMseNpqXg==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(498600001)(36756003)(8676002)(31696002)(8936002)(71200400001)(6512007)(5660300002)(2906002)(53546011)(4326008)(83380400001)(186003)(54906003)(31686004)(86362001)(76116006)(66946007)(110136005)(6506007)(2616005)(66446008)(6486002)(26005)(66556008)(66476007)(64756008);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 =?utf-8?B?TkI5d3JVZXcwZHByd3U4R2RvcEI0bmd0TmVUbFAzSVJEMmF1VXpUR3I3TkEr?=
 =?utf-8?B?TFhaeGhXRngyYXRkdmE4dG1zU3Z3dTVhdFhKSktETW1BU0JRUlprb0l2bTBv?=
 =?utf-8?B?WXF6VDBGNHViU29wK0cxeEdFbHNnd2QrekdiZmJoeERoU0VUdXF1cndlM3Mw?=
 =?utf-8?B?RWFhUnRaRzRhWkJPWFdjTG9XaXJMRTdWQjVrNlZTbVJBVHZ0RUl3L0R0dGJz?=
 =?utf-8?B?NVgxNjZNMWVOMDh5b1ZKSlJKVC9FdHc2QmhyMXV0YTdxeTFkVjk0YXV1UDdn?=
 =?utf-8?B?ckF4WktoU2FsamlSZjFuRERwRmcwNy9UOUlGclFBOGp3Qjk0L0gvdTNRTzNW?=
 =?utf-8?B?YUJxc05qS2c2S0JlREdlY3NaVFJRVk9PQkJta0VWT3BBWjBIOTVPR2FtOG9z?=
 =?utf-8?B?RFhQeGdEWnZuU3pSdEpVTXRESUhqM2pGNWRaSWRVamJrRUhWbzErRDBxb2Nn?=
 =?utf-8?B?aG5TVnZEVVNEUHJFdzU0WlJkZWZORDZRcVpucjR0Rnl0RVRGZ1NHN1ROd0Vp?=
 =?utf-8?B?R2ZNUCtqN016b25rdnpVUE9mczJPeXgxRnZSQ2tkUmQweDlwWWxtWUY4NVFy?=
 =?utf-8?B?M3Ard2crTENqZGhmNlZ5MVN3OXppd2xJUEg1R3IrbWF6RGRtNmlXVjMxNVJY?=
 =?utf-8?B?d1FDODUzTmxsVEhYM0RmL0xxeVZNbk1vQmttLzZrTk9na21Rb1VyeEJ2bnhZ?=
 =?utf-8?B?Ly9lbWhQTi9VSEVpK2lCdGhDVFN3WFpzbkprdnhOZXZwN3MyMFVsVHVydEtP?=
 =?utf-8?B?ZlhVd2VFeFk2TlloNktxTmU3NTFjMXY2dVhJTHJJMFBWZTJnMFEvekhxOHFW?=
 =?utf-8?B?UEExL0RhUFU5bU9xclFGa2VEaU9lRVRVM2M2Y2pYU3JjaEtZbVZrM2JKYk9J?=
 =?utf-8?B?Yk45eXhSMEtHb1VUT3RpUnRTa1JjK3k5dXJwUlpzU1VJbklHQXBrSENQdWZR?=
 =?utf-8?B?NVl4bHZpa25yMmdtY0JCMzRNMW5JU3YwZG8zbVVRcXRYOUQrSnFmZ3AyRjBi?=
 =?utf-8?B?cjM1UEt1a2ovc2xjaFRsemxSNGJONUYxcDBocVpVZUgvREhOdVRRMmMyMFNX?=
 =?utf-8?B?ZlNpd1hXOFNXRFZQbVRvY3VZaHczZGtQcmlBc3N3K0VBbHB0Mk90dzhrc2kw?=
 =?utf-8?B?bE9HVW9NWTNSSVNJRWdqRCtGWlVtQ3VOVFRtWE1WOSs1NWlHOE1DTkkwS0pB?=
 =?utf-8?B?cDBSZWpzZE9saXFFSWJaYU5JU2dNTkt1ak43OERYaHk3d21kbjN4d0R5QkZJ?=
 =?utf-8?B?aHBJZ05HQm9BazFTdHlDeE1lWVU5L1hMVTRHT3RidDQ4RlRKOElsT3g5UnJZ?=
 =?utf-8?Q?GgqYiT7Elh5d6xiOv7FzyKOKF+Uua17gt9?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <0E0AD44350126E41B1FB5CD6CC99927C@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3cbc9eba-2753-4e8e-eada-08d8960ca144
X-MS-Exchange-CrossTenant-originalarrivaltime: 01 Dec 2020 15:20:26.1521
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: QzW+/Mfc1jOceULjuPLifKo8IHimtkHzzbNdGgdbtXfrRPgPKgVLgShXDeNbXiYGSqIlqQAQi57gj4KU93gxlGtimeT7Lqkp+zhrs8FO0KMbLT3pD503v4MijfTKfo4P
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB5555
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-12-01_07:2020-11-30,2020-12-01 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 bulkscore=0 phishscore=0
 spamscore=0 impostorscore=0 malwarescore=0 clxscore=1015
 priorityscore=1501 mlxlogscore=999 suspectscore=0 mlxscore=0 adultscore=0
 lowpriorityscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2012010098

SGksIFBhdWwhDQoNCk9uIDExLzI0LzIwIDEwOjAxIEFNLCBQYXVsIER1cnJhbnQgd3JvdGU6DQo+
IEZyb206IFBhdWwgRHVycmFudCA8cGR1cnJhbnRAYW1hem9uLmNvbT4NCj4NCj4gLi4uIHJhdGhl
ciB0aGFuIGFuIG9wZW4tY29kZWQgZXF1aXZhbGVudC4NCj4NCj4gVGhpcyBwYXRjaCB0aWRpZXMg
dXAgdGhlIGlzX3BjaV9pbl9hcnJheSgpIGZ1bmN0aW9uLCBtYWtpbmcgaXQgdGFrZSBhIHNpbmds
ZQ0KPiAnbGlieGxfZGV2aWNlX3BjaScgYXJndW1lbnQgcmF0aGVyIHRoYW4gc2VwYXJhdGUgZG9t
YWluLCBidXMsIGRldmljZSBhbmQNCj4gZnVuY3Rpb24gYXJndW1lbnRzLiBUaGUgYWxyZWFkeS1h
dmFpbGFibGUgQ09NUEFSRV9QQ0koKSBtYWNybyBjYW4gdGhlbiBiZQ0KPiB1c2VkIGFuZCBpdCBp
cyBhbHNvIG1vZGlmaWVkIHRvIHJldHVybiAnYm9vbCcgcmF0aGVyIHRoYW4gJ2ludCcuDQo+DQo+
IFRoZSBwYXRjaCBhbHNvIG1vZGlmaWVzIGxpYnhsX3BjaV9hc3NpZ25hYmxlKCkgdG8gdXNlIGlz
X3BjaV9pbl9hcnJheSgpIHJhdGhlcg0KPiB0aGFuIGEgc2VwYXJhdGUgb3Blbi1jb2RlZCBlcXVp
dmFsZW50LCBhbmQgYWxzbyBtb2RpZmllcyBpdCB0byByZXR1cm4gYQ0KPiAnYm9vbCcgcmF0aGVy
IHRoYW4gYW4gJ2ludCcuDQo+DQo+IE5PVEU6IFRoZSBDT01QQVJFX1BDSSgpIG1hY3JvIGlzIGFs
c28gZml4ZWQgdG8gaW5jbHVkZSB0aGUgJ2RvbWFpbicgaW4gaXRzDQo+ICAgICAgICBjb21wYXJp
c29uLCB3aGljaCBzaG91bGQgYWx3YXlzIGhhdmUgYmVlbiB0aGUgY2FzZS4NCj4NCj4gU2lnbmVk
LW9mZi1ieTogUGF1bCBEdXJyYW50IDxwZHVycmFudEBhbWF6b24uY29tPg0KUmV2aWV3ZWQtYnk6
IE9sZWtzYW5kciBBbmRydXNoY2hlbmtvIDxvbGVrc2FuZHJfYW5kcnVzaGNoZW5rb0BlcGFtLmNv
bT4NCg0KVGhhbmsgeW91LA0KDQpPbGVrc2FuZHINCg0KPiAtLS0NCj4gQ2M6IElhbiBKYWNrc29u
IDxpd2pAeGVucHJvamVjdC5vcmc+DQo+IENjOiBXZWkgTGl1IDx3bEB4ZW4ub3JnPg0KPiAtLS0N
Cj4gICB0b29scy9saWJzL2xpZ2h0L2xpYnhsX2ludGVybmFsLmggfCAgNyArKysrLS0tDQo+ICAg
dG9vbHMvbGlicy9saWdodC9saWJ4bF9wY2kuYyAgICAgIHwgMzggKysrKysrKysrKysrKy0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0NCj4gICAyIGZpbGVzIGNoYW5nZWQsIDE3IGluc2VydGlvbnMo
KyksIDI4IGRlbGV0aW9ucygtKQ0KPg0KPiBkaWZmIC0tZ2l0IGEvdG9vbHMvbGlicy9saWdodC9s
aWJ4bF9pbnRlcm5hbC5oIGIvdG9vbHMvbGlicy9saWdodC9saWJ4bF9pbnRlcm5hbC5oDQo+IGlu
ZGV4IGVjZWU2MWI1NDEuLjAyZjhhMzE3OWMgMTAwNjQ0DQo+IC0tLSBhL3Rvb2xzL2xpYnMvbGln
aHQvbGlieGxfaW50ZXJuYWwuaA0KPiArKysgYi90b29scy9saWJzL2xpZ2h0L2xpYnhsX2ludGVy
bmFsLmgNCj4gQEAgLTQ3NDYsOSArNDc0NiwxMCBAQCB2b2lkIGxpYnhsX194Y2luZm8yeGxpbmZv
KGxpYnhsX2N0eCAqY3R4LA0KPiAgICAqIGRldmljZXMgaGF2ZSBzYW1lIGlkZW50aWZpZXIuICov
DQo+ICAgI2RlZmluZSBDT01QQVJFX0RFVklEKGEsIGIpICgoYSktPmRldmlkID09IChiKS0+ZGV2
aWQpDQo+ICAgI2RlZmluZSBDT01QQVJFX0RJU0soYSwgYikgKCFzdHJjbXAoKGEpLT52ZGV2LCAo
YiktPnZkZXYpKQ0KPiAtI2RlZmluZSBDT01QQVJFX1BDSShhLCBiKSAoKGEpLT5mdW5jID09IChi
KS0+ZnVuYyAmJiAgICBcDQo+IC0gICAgICAgICAgICAgICAgICAgICAgICAgICAoYSktPmJ1cyA9
PSAoYiktPmJ1cyAmJiAgICAgIFwNCj4gLSAgICAgICAgICAgICAgICAgICAgICAgICAgIChhKS0+
ZGV2ID09IChiKS0+ZGV2KQ0KPiArI2RlZmluZSBDT01QQVJFX1BDSShhLCBiKSAoKGEpLT5kb21h
aW4gPT0gKGIpLT5kb21haW4gJiYgXA0KPiArICAgICAgICAgICAgICAgICAgICAgICAgICAgKGEp
LT5idXMgPT0gKGIpLT5idXMgJiYgICAgICAgXA0KPiArICAgICAgICAgICAgICAgICAgICAgICAg
ICAgKGEpLT5kZXYgPT0gKGIpLT5kZXYgJiYgICAgICAgXA0KPiArICAgICAgICAgICAgICAgICAg
ICAgICAgICAgKGEpLT5mdW5jID09IChiKS0+ZnVuYykNCj4gICAjZGVmaW5lIENPTVBBUkVfVVNC
KGEsIGIpICgoYSktPmN0cmwgPT0gKGIpLT5jdHJsICYmIFwNCj4gICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAoYSktPnBvcnQgPT0gKGIpLT5wb3J0KQ0KPiAgICNkZWZpbmUgQ09NUEFSRV9V
U0JDVFJMKGEsIGIpICgoYSktPmRldmlkID09IChiKS0+ZGV2aWQpDQo+IGRpZmYgLS1naXQgYS90
b29scy9saWJzL2xpZ2h0L2xpYnhsX3BjaS5jIGIvdG9vbHMvbGlicy9saWdodC9saWJ4bF9wY2ku
Yw0KPiBpbmRleCA1YTMzNTJjMmVjLi5lMGI2MTZmZTE4IDEwMDY0NA0KPiAtLS0gYS90b29scy9s
aWJzL2xpZ2h0L2xpYnhsX3BjaS5jDQo+ICsrKyBiL3Rvb2xzL2xpYnMvbGlnaHQvbGlieGxfcGNp
LmMNCj4gQEAgLTMzNiwyNCArMzM2LDE3IEBAIHJldHJ5X3RyYW5zYWN0aW9uMjoNCj4gICAgICAg
cmV0dXJuIDA7DQo+ICAgfQ0KPiAgIA0KPiAtc3RhdGljIGludCBpc19wY2lfaW5fYXJyYXkobGli
eGxfZGV2aWNlX3BjaSAqYXNzaWduZWQsIGludCBudW1fYXNzaWduZWQsDQo+IC0gICAgICAgICAg
ICAgICAgICAgICAgICAgICBpbnQgZG9tLCBpbnQgYnVzLCBpbnQgZGV2LCBpbnQgZnVuYykNCj4g
K3N0YXRpYyBib29sIGlzX3BjaV9pbl9hcnJheShsaWJ4bF9kZXZpY2VfcGNpICpwY2lzLCBpbnQg
bnVtLA0KPiArICAgICAgICAgICAgICAgICAgICAgICAgICAgIGxpYnhsX2RldmljZV9wY2kgKnBj
aSkNCj4gICB7DQo+ICAgICAgIGludCBpOw0KPiAgIA0KPiAtICAgIGZvcihpID0gMDsgaSA8IG51
bV9hc3NpZ25lZDsgaSsrKSB7DQo+IC0gICAgICAgIGlmICggYXNzaWduZWRbaV0uZG9tYWluICE9
IGRvbSApDQo+IC0gICAgICAgICAgICBjb250aW51ZTsNCj4gLSAgICAgICAgaWYgKCBhc3NpZ25l
ZFtpXS5idXMgIT0gYnVzICkNCj4gLSAgICAgICAgICAgIGNvbnRpbnVlOw0KPiAtICAgICAgICBp
ZiAoIGFzc2lnbmVkW2ldLmRldiAhPSBkZXYgKQ0KPiAtICAgICAgICAgICAgY29udGludWU7DQo+
IC0gICAgICAgIGlmICggYXNzaWduZWRbaV0uZnVuYyAhPSBmdW5jICkNCj4gLSAgICAgICAgICAg
IGNvbnRpbnVlOw0KPiAtICAgICAgICByZXR1cm4gMTsNCj4gKyAgICBmb3IgKGkgPSAwOyBpIDwg
bnVtOyBpKyspIHsNCj4gKyAgICAgICAgaWYgKENPTVBBUkVfUENJKHBjaSwgJnBjaXNbaV0pKQ0K
PiArICAgICAgICAgICAgYnJlYWs7DQo+ICAgICAgIH0NCj4gICANCj4gLSAgICByZXR1cm4gMDsN
Cj4gKyAgICByZXR1cm4gaSA8IG51bTsNCj4gICB9DQo+ICAgDQo+ICAgLyogV3JpdGUgdGhlIHN0
YW5kYXJkIEJERiBpbnRvIHRoZSBzeXNmcyBwYXRoIGdpdmVuIGJ5IHN5c2ZzX3BhdGguICovDQo+
IEBAIC0xNDg3LDIxICsxNDgwLDE3IEBAIGludCBsaWJ4bF9kZXZpY2VfcGNpX2FkZChsaWJ4bF9j
dHggKmN0eCwgdWludDMyX3QgZG9taWQsDQo+ICAgICAgIHJldHVybiBBT19JTlBST0dSRVNTOw0K
PiAgIH0NCj4gICANCj4gLXN0YXRpYyBpbnQgbGlieGxfcGNpX2Fzc2lnbmFibGUobGlieGxfY3R4
ICpjdHgsIGxpYnhsX2RldmljZV9wY2kgKnBjaSkNCj4gK3N0YXRpYyBib29sIGxpYnhsX3BjaV9h
c3NpZ25hYmxlKGxpYnhsX2N0eCAqY3R4LCBsaWJ4bF9kZXZpY2VfcGNpICpwY2kpDQo+ICAgew0K
PiAgICAgICBsaWJ4bF9kZXZpY2VfcGNpICpwY2lzOw0KPiAtICAgIGludCBudW0sIGk7DQo+ICsg
ICAgaW50IG51bTsNCj4gKyAgICBib29sIGFzc2lnbmFibGU7DQo+ICAgDQo+ICAgICAgIHBjaXMg
PSBsaWJ4bF9kZXZpY2VfcGNpX2Fzc2lnbmFibGVfbGlzdChjdHgsICZudW0pOw0KPiAtICAgIGZv
ciAoaSA9IDA7IGkgPCBudW07IGkrKykgew0KPiAtICAgICAgICBpZiAocGNpc1tpXS5kb21haW4g
PT0gcGNpLT5kb21haW4gJiYNCj4gLSAgICAgICAgICAgIHBjaXNbaV0uYnVzID09IHBjaS0+YnVz
ICYmDQo+IC0gICAgICAgICAgICBwY2lzW2ldLmRldiA9PSBwY2ktPmRldiAmJg0KPiAtICAgICAg
ICAgICAgcGNpc1tpXS5mdW5jID09IHBjaS0+ZnVuYykNCj4gLSAgICAgICAgICAgIGJyZWFrOw0K
PiAtICAgIH0NCj4gKyAgICBhc3NpZ25hYmxlID0gaXNfcGNpX2luX2FycmF5KHBjaXMsIG51bSwg
cGNpKTsNCj4gICAgICAgbGlieGxfZGV2aWNlX3BjaV9hc3NpZ25hYmxlX2xpc3RfZnJlZShwY2lz
LCBudW0pOw0KPiAtICAgIHJldHVybiBpICE9IG51bTsNCj4gKw0KPiArICAgIHJldHVybiBhc3Np
Z25hYmxlOw0KPiAgIH0NCj4gICANCj4gICBzdGF0aWMgdm9pZCBkZXZpY2VfcGNpX2FkZF9zdHVi
ZG9tX3dhaXQobGlieGxfX2VnYyAqZWdjLA0KPiBAQCAtMTgzNCw4ICsxODIzLDcgQEAgc3RhdGlj
IHZvaWQgZG9fcGNpX3JlbW92ZShsaWJ4bF9fZWdjICplZ2MsIHBjaV9yZW1vdmVfc3RhdGUgKnBy
cykNCj4gICAgICAgICAgIGdvdG8gb3V0X2ZhaWw7DQo+ICAgICAgIH0NCj4gICANCj4gLSAgICBh
dHRhY2hlZCA9IGlzX3BjaV9pbl9hcnJheShwY2lzLCBudW0sIHBjaS0+ZG9tYWluLA0KPiAtICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBjaS0+YnVzLCBwY2ktPmRldiwgcGNpLT5mdW5j
KTsNCj4gKyAgICBhdHRhY2hlZCA9IGlzX3BjaV9pbl9hcnJheShwY2lzLCBudW0sIHBjaSk7DQo+
ICAgICAgIGxpYnhsX2RldmljZV9wY2lfbGlzdF9mcmVlKHBjaXMsIG51bSk7DQo+ICAgDQo+ICAg
ICAgIHJjID0gRVJST1JfSU5WQUw7


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 15:23:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 15:23:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42177.75837 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk7V0-00011Q-DS; Tue, 01 Dec 2020 15:23:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42177.75837; Tue, 01 Dec 2020 15:23:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk7V0-00011J-AV; Tue, 01 Dec 2020 15:23:26 +0000
Received: by outflank-mailman (input) for mailman id 42177;
 Tue, 01 Dec 2020 15:23:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Hnjp=FF=cknow.org=didi.debian@srs-us1.protection.inumbo.net>)
 id 1kk7Uy-00011C-SX
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 15:23:24 +0000
Received: from relay8-d.mail.gandi.net (unknown [217.70.183.201])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 86b74837-e761-409c-b030-f221c89ab567;
 Tue, 01 Dec 2020 15:23:22 +0000 (UTC)
Received: from bagend.localnet (92-110-45-68.cable.dynamic.v4.ziggo.nl
 [92.110.45.68]) (Authenticated sender: didi.debian@cknow.org)
 by relay8-d.mail.gandi.net (Postfix) with ESMTPSA id 83C201BF213;
 Tue,  1 Dec 2020 15:23:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 86b74837-e761-409c-b030-f221c89ab567
X-Originating-IP: 92.110.45.68
From: Diederik de Haas <didi.debian@cknow.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Roger Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>, Anthony PERARD <anthony.perard@citrix.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH for-4.14] Fix spelling errors.
Date: Tue, 01 Dec 2020 16:23:17 +0100
Message-ID: <7175790.EvYhyI6sBW@bagend>
Organization: Connecting Knowledge
In-Reply-To: <44d06209-65de-f959-fb93-90a924cbf055@suse.com>
References: <5f4935dbc0257e19b87b9461ea62e25328a6091e.1606833490.git.didi.debian@cknow.org> <44d06209-65de-f959-fb93-90a924cbf055@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="nextPart3565244.kQq0lBPeGt"; micalg="pgp-sha512"; protocol="application/pgp-signature"

--nextPart3565244.kQq0lBPeGt
Content-Transfer-Encoding: 7Bit
Content-Type: text/plain; charset="us-ascii"; protected-headers="v1"
From: Diederik de Haas <didi.debian@cknow.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Roger Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>, Anthony PERARD <anthony.perard@citrix.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH for-4.14] Fix spelling errors.
Date: Tue, 01 Dec 2020 16:23:17 +0100
Message-ID: <7175790.EvYhyI6sBW@bagend>
Organization: Connecting Knowledge
In-Reply-To: <44d06209-65de-f959-fb93-90a924cbf055@suse.com>
References: <5f4935dbc0257e19b87b9461ea62e25328a6091e.1606833490.git.didi.debian@cknow.org> <44d06209-65de-f959-fb93-90a924cbf055@suse.com>

On Tuesday, 1 December 2020 16:10:13 CET Jan Beulich wrote:
> I'm afraid this isn't the kind of change we'd be backporting, 

Ok, I didn't know that.

> unless you have a very good justification for a respective request. 

I was fixing issues found by Debian's lintian tool, and 4.14 is currently in 
testing. I highly doubt that counts as a (good) justification ;)
I'll (try to) find another way.

> Also, process-wise, there wouldn't normally be patches sent to the
> list for the stable trees.

Ok, sorry for the noise and thanks for the information.

Cheers,
  Diederik
--nextPart3565244.kQq0lBPeGt
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part.
Content-Transfer-Encoding: 7Bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCgAdFiEEf+PJh5LtCd6LDwjYE45BkVx+/tYFAl/GX+UACgkQE45BkVx+
/tY1qA//a1FLmWQpVrJKMcEweFcINA2B7SuwSRfLDZdw3FzHyQEblJJXaVgRf1c/
EoQyMhyOojjuku1IPeSk9anS8avLhUePhIDIK+8cY2emYWoso8DPyN2z36RAUky6
7Th2XSXuhhsT4r33oMccV5CoHj482wOEeBImBp2OlM8LAW5gRrDXSBfsepfQ2JQs
t62ftG/KuvvIpNFIMPpNy72G2Vcb0B7pFZB+6cUrrrymzkiuKxH7FgkKRsNWwpvt
D14WjagmG5ouWyWyGAe8FX/UXP76vjA91Nh9tPcd4mCYuGtWhbXefUbMlQDE/VPk
yHB0ND9WmtSej8/9ijxwec8kbLdAjI2agQcxUNA9bcYxu5jcfyPHtS9V+jsWhoz+
PMlxJOLwzdNSSdatW+q7G5TMfB57ASdD52auAWFjiEJPxjyzz+BmiaKGe1lHIYT1
MVCGGhzLaHdHWhICN/g8F6sTwIq/9GRhLVmKidwbpCEL2+tIrnff+1iLdmMmYrjc
/KW3i0kD+ZPOBGkGjNetSNREPf9dSygwCKIMMUsZwCl0V5OczkqCeGNR5muNzqhL
1GiziWJh/JA+9eNUeRlpEa2MF58dzuqcw/JwEeD1PFs8gzneA9au4kYRT69Zrh5h
I7SsSsmQpgmevK9uxnXOZWO+K+465e021wdot3IrxnHqB65CKa0=
=ykDn
-----END PGP SIGNATURE-----

--nextPart3565244.kQq0lBPeGt--


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 15:25:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 15:25:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42182.75850 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk7Wp-00019k-Qh; Tue, 01 Dec 2020 15:25:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42182.75850; Tue, 01 Dec 2020 15:25:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk7Wp-00019d-MZ; Tue, 01 Dec 2020 15:25:19 +0000
Received: by outflank-mailman (input) for mailman id 42182;
 Tue, 01 Dec 2020 15:25:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJqf=FF=epam.com=prvs=0604985de8=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kk7Wp-00019Y-0C
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 15:25:19 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2fc1ae3c-c814-4e6e-8354-a9fdb38d814a;
 Tue, 01 Dec 2020 15:25:17 +0000 (UTC)
Received: from pps.filterd (m0174678.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 0B1FExxE012105; Tue, 1 Dec 2020 15:25:16 GMT
Received: from eur05-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2170.outbound.protection.outlook.com [104.47.17.170])
 by mx0a-0039f301.pphosted.com with ESMTP id 355q3hge5n-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 01 Dec 2020 15:25:16 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB5555.eurprd03.prod.outlook.com (2603:10a6:208:16d::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.31; Tue, 1 Dec
 2020 15:25:10 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%9]) with mapi id 15.20.3611.022; Tue, 1 Dec 2020
 15:25:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2fc1ae3c-c814-4e6e-8354-a9fdb38d814a
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=eck9nOODJqaG1hRz8LUzU/DYRkA2ppcegsy9r8+zgQlitrtlLc44mT9CclVjzF4nWzKKkZXzk/Bpp4s8fXUUOboVhNlGBztn6P+7at1r2/LFBQiaHl++COOtlHpsFzdbWWk7mA54O8RGpm0FLuycFOvc7qSyJdkf+rkgwP+osG12qB2GqOl+2BwzZVKU08xBiE9YdTMGgFM5qeQZPB+KsivFwlvKWZUPl5TEfWgm4SwlxpO09YAawOdLkvzkRrGJVWZW84kyy5mJkHt+L2K1h/5n/MmFlN94waaYke+mLYeK8H3NunNOXnq3MNPC0VG48Qyjx51kJOmU8Jh2MGwBcg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GgiuTfaLIZL8CHXQHuicwvFuL4rzvcjz2zfwZYquH4Y=;
 b=XPHrc3tkYhRQduR8vFuTlNOc5iBPjQdPGaP1cXlHhyxWz4jH4zHQFJQK7NnIh8PRK78XA6HD6R4HPjAfzHezlTWPRBFlyG29RJjjbIbSr0ndUGiPlwXPvELQ551Owwmx5L0ZOh5xHf73je0oFmlb8BgWyUy763RbSKSNIe+pOwm9vq25VVM6oYusYf2c0J8ZbbhHMYWAICHaVzx8KdhczarZzv/aM3MwK22CPFa6ychsaboN7gVOsNlW4tsfZ/PlTa1KFvjx0c0+7mb/eGtZPpKQj6PyiVK2JC/f7LQtaD7gkUyVkWv22bznGADhJ+uO494yKstMEk8zkpQfYSv5QQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GgiuTfaLIZL8CHXQHuicwvFuL4rzvcjz2zfwZYquH4Y=;
 b=mGzemsh+0ocxRItsD9Ox8FxcKHYbGhST1FhWw97BpUxgEDaRP1GERnyJpcFYy6/UTYfiTC9wI5xl11esum0Xo0N0VEvIt8YT1iJPfsFdUL+WPtL5VBhfQKOdJCf6LKWm2vADBcVLWWbnwPs9Yo/3IhoNO8uHxw4exvcCbXwr3cl+yN2GgRSBSZyKAKSN9pIVkxUpCuJHSX0FQ7IwsbAYIxitU8GXFfi5BuppLKOffE4WBXtWrgU3VJzNS2PiKOAInLn3Ubx9otq9bLOeW3G8ezxHRsC/JLqLlnvZLMZFZwvfT2moFaIHuan3+dcnFPP91E+ASs1/syidASqYgfY5rg==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Paul Durrant <paul@xen.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Paul Durrant <pdurrant@amazon.com>, Ian Jackson <iwj@xenproject.org>,
        Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH v4 14/23] docs/man: extract documentation of
 PCI_SPEC_STRING from the xl.cfg manpage...
Thread-Topic: [PATCH v4 14/23] docs/man: extract documentation of
 PCI_SPEC_STRING from the xl.cfg manpage...
Thread-Index: AQHWx/Yo8BmqKuHo0UWe0VurdxLGFw==
Date: Tue, 1 Dec 2020 15:25:10 +0000
Message-ID: <ef4b88ee-8833-5d95-f1a1-347c8af86808@epam.com>
References: <20201124080159.11912-1-paul@xen.org>
 <20201124080159.11912-15-paul@xen.org>
In-Reply-To: <20201124080159.11912-15-paul@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 882a14e4-71c3-4cba-69a2-08d8960d4afc
x-ms-traffictypediagnostic: AM0PR03MB5555:
x-microsoft-antispam-prvs: 
 <AM0PR03MB5555FD1A57D820196A183C2DE7F40@AM0PR03MB5555.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:7219;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 yNhvMvnQV8itbTLPpv+3YG5yp+SFKTELDusgQCrYWozQEWMXuHtYjosnxFRVsUcc9Rh2nC7A6z/wIMbHKl9f20T/OVNtEnsUCySzkKrqgfykyGpkSimnj828yhDe0vJkAR/dI9SSEMdYjG889RrLddapD9y+kwmO6WpzZMj/Ep4yMiT8DW10wmEFlJixToiI/v8OqUEqYZldsEqwsX7vHGTheInAuyZr/ZT5Y3RiFzxI0yzw756F04duHPXRImkKGNjmWXoA0ID70DKl0FVbokKJUhM4eURdpWkluXUJgJHK9NKzKUQ3L6ZunBXAr/KnGLgwOLXHFG9IfW5CQpYIag==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(4326008)(53546011)(83380400001)(5660300002)(2906002)(6512007)(6486002)(26005)(2616005)(66446008)(64756008)(66476007)(66556008)(54906003)(31686004)(86362001)(186003)(76116006)(110136005)(6506007)(66946007)(498600001)(36756003)(31696002)(8936002)(71200400001)(8676002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 =?utf-8?B?Q1BEUnFURmtwN0lBUnZRd0h2TnMzeW8vSWZ1U0xRTUEzVjNmUUFBd3QrUHZR?=
 =?utf-8?B?Nk1LYm1PRFYzcjdlMGNON2w3TmRwTXFrdWl5N3JYVmZHaDUwNWV6WnpxOVNz?=
 =?utf-8?B?VnhnNkhwQTQxQmxSUnZYUVpvRmNnTVhrYzVBWmcvQk9vaHp5RjdrWlN6Lzlo?=
 =?utf-8?B?YnFObERRcFYzWlZWTy9qeng1VkN5RnJzdEs2Nmd5VFIwYXVFRTBpWFhDRnB2?=
 =?utf-8?B?R2huMmVQYThqa1RyNm4vbVBHTko4ckxlUmZ4SFY1RGUyTmVveEk4SE9nYk9l?=
 =?utf-8?B?WWhYb3VkZEVMZUhiNVNqL1c2MTEzSjEzTWhlWHlIazFJVUJTaEJoVGhKRTcx?=
 =?utf-8?B?TEZzSE1YTzYwemh4NTdIZG9OTWdody9uTFZhMEJMTEFZUkFmQ0s4UW8xRE5L?=
 =?utf-8?B?RkN1b2o4MlRtZGZZNzZJQnk2aUVXWTZFbkZsZjA1elEydnYycXpCbUNJUGtF?=
 =?utf-8?B?ZURnc3JWVFhlZEdwa2dRZ3JBNDNRS3VFK2tQWml2SS81ZW1TOEQ2ZnlDZHQw?=
 =?utf-8?B?WWVsWC9WTjZ3S1haeWNnK0dCUVRiNmM1Z0FuaFFUVnQ4MVFGN2M4WlRTbHVT?=
 =?utf-8?B?WG5oa0xKZDZSdUlWN0tTdzN4S3RVWnFvQi9xTVhaa2lUSEpXRTFlb29kcEVx?=
 =?utf-8?B?SExySUxHN2tBSE1MSCtydTYzRU55NStYaHAra3JDZDFoYmxFYXIxbEkxQ0Zw?=
 =?utf-8?B?bUNiL2sxSHJITHlkajBJaHhpOVlITytoRHRiL0hjZGlrZFgya2RFNjFMMS8w?=
 =?utf-8?B?bk1INWRnS1puVlRuVXFwWmJwZGdkdFkwc0c0djhVM3UxZWpEcWhmT0tGR2pn?=
 =?utf-8?B?di9NTTR2QnA4TDNBdXYxQjF4ZXNJNytrOGpIUmZKZnFQT0tvWDR0UlV6TnBC?=
 =?utf-8?B?TVFSNE5YUGZ2VU5KZWZGbm50VEhvUXpHb3ptSFZwTHE5cm5XVTNVZXU0a1pv?=
 =?utf-8?B?Z1p5aXY0M2xNQ2ZJM0k0V1dsNjhSM0RYR1pNT0cvdGc3L2I1aXJSS0lkN1lD?=
 =?utf-8?B?aEl5YXQxdlhiSGtWdzR4K0YzY3JmVDNsdWRiQXN3MG1xNkV1SmJqNkJqYXJx?=
 =?utf-8?B?M1l1RmVmYjg1b1BlNWZUTmlZSlB2NHQ0MjRKRlhZWGVEYlAyZ1QxTVdHa05K?=
 =?utf-8?B?Tk16MkZwU1lzaU40eTVaTHV1bWV0VGVLMXNML3BrYTF1dFkyMzdhTVNrTnhv?=
 =?utf-8?B?dmVkNm1IckptSnE1R0RvNXhaYVZFSGlVNEZHc3RCY1gybFhMRitnM0h5VGlY?=
 =?utf-8?B?dlBHNk40SmExVVBidWVUclhWSjdHaytsYnZWTkQyWVkwMEJ5Z2VEMlA2ZE1X?=
 =?utf-8?Q?mNpwPNcGAuKLajTI4qYTFoSTX2ELPSE0+L?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <9205C46A915CB84A97CBC3C2DD752E23@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 882a14e4-71c3-4cba-69a2-08d8960d4afc
X-MS-Exchange-CrossTenant-originalarrivaltime: 01 Dec 2020 15:25:10.7980
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: ei/9DRXIuUYs1Bxv1D4ImIv2hj5o0g2bZjdb3v+INcleR81ge2boKtSJwiSg58ucx047tmj876KH3kvbI1vNQbQPlzBkiuOzQOH8AQJtptLb8wYBzjmFwfVXR9tYYg/z
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB5555
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-12-01_07:2020-11-30,2020-12-01 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 bulkscore=0 phishscore=0
 spamscore=0 impostorscore=0 malwarescore=0 clxscore=1015
 priorityscore=1501 mlxlogscore=987 suspectscore=0 mlxscore=0 adultscore=0
 lowpriorityscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2012010098

SGksIFBhdWwhDQoNCk9uIDExLzI0LzIwIDEwOjAxIEFNLCBQYXVsIER1cnJhbnQgd3JvdGU6DQo+
IEZyb206IFBhdWwgRHVycmFudCA8cGR1cnJhbnRAYW1hem9uLmNvbT4NCj4NCj4gLi4uIGFuZCBw
dXQgaXQgaW50byBhIG5ldyB4bC1wY2ktY29uZmlndXJhdGlvbig1KSBtYW5wYWdlLCBha2luIHRv
IHRoZQ0KPiB4bC1uZXR3b3JrLWNvbmZpZ3JhdGlvbig1KSBhbmQgeGwtZGlzay1jb25maWd1cmF0
aW9uKDUpIG1hbnBhZ2VzLg0KPg0KPiBUaGlzIHBhdGNoIG1vdmVzIHRoZSBjb250ZW50IG9mIHRo
ZSBzZWN0aW9uIHZlcmJhdGltLiBBIHN1YnNlcXVlbnQgcGF0Y2gNCj4gd2lsbCBpbXByb3ZlIHRo
ZSBkb2N1bWVudGF0aW9uLCBvbmNlIGl0IGlzIGluIGl0cyBuZXcgbG9jYXRpb24uDQo+DQo+IFNp
Z25lZC1vZmYtYnk6IFBhdWwgRHVycmFudCA8cGR1cnJhbnRAYW1hem9uLmNvbT4NCg0KUmV2aWV3
ZWQtYnk6IE9sZWtzYW5kciBBbmRydXNoY2hlbmtvIDxvbGVrc2FuZHJfYW5kcnVzaGNoZW5rb0Bl
cGFtLmNvbT4NCg0KVGhhbmsgeW91LA0KDQpPbGVrc2FuZHINCg0KPiAtLS0NCj4gQ2M6IElhbiBK
YWNrc29uIDxpd2pAeGVucHJvamVjdC5vcmc+DQo+IENjOiBXZWkgTGl1IDx3bEB4ZW4ub3JnPg0K
PiAtLS0NCj4gICBkb2NzL21hbi94bC1wY2ktY29uZmlndXJhdGlvbi41LnBvZCB8IDc4ICsrKysr
KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysNCj4gICBkb2NzL21hbi94bC5jZmcuNS5w
b2QuaW4gICAgICAgICAgICB8IDY4ICstLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tDQo+
ICAgMiBmaWxlcyBjaGFuZ2VkLCA3OSBpbnNlcnRpb25zKCspLCA2NyBkZWxldGlvbnMoLSkNCj4g
ICBjcmVhdGUgbW9kZSAxMDA2NDQgZG9jcy9tYW4veGwtcGNpLWNvbmZpZ3VyYXRpb24uNS5wb2QN
Cj4NCj4gZGlmZiAtLWdpdCBhL2RvY3MvbWFuL3hsLXBjaS1jb25maWd1cmF0aW9uLjUucG9kIGIv
ZG9jcy9tYW4veGwtcGNpLWNvbmZpZ3VyYXRpb24uNS5wb2QNCj4gbmV3IGZpbGUgbW9kZSAxMDA2
NDQNCj4gaW5kZXggMDAwMDAwMDAwMC4uNzJhMjdiZDk1ZA0KPiAtLS0gL2Rldi9udWxsDQo+ICsr
KyBiL2RvY3MvbWFuL3hsLXBjaS1jb25maWd1cmF0aW9uLjUucG9kDQo+IEBAIC0wLDAgKzEsNzgg
QEANCj4gKz1lbmNvZGluZyB1dGY4DQo+ICsNCj4gKz1oZWFkMSBOQU1FDQo+ICsNCj4gK3hsLXBj
aS1jb25maWd1cmF0aW9uIC0gWEwgUENJIENvbmZpZ3VyYXRpb24gU3ludGF4DQo+ICsNCj4gKz1o
ZWFkMSBTWU5UQVgNCj4gKw0KPiArVGhpcyBkb2N1bWVudCBzcGVjaWZpZXMgdGhlIGZvcm1hdCBm
b3IgQjxQQ0lfU1BFQ19TVFJJTkc+IHdoaWNoIGlzIHVzZWQgYnkNCj4gK3RoZSBMPHhsLmNmZyg1
KT4gcGNpIGNvbmZpZ3VyYXRpb24gb3B0aW9uLCBhbmQgcmVsYXRlZCBMPHhsKDEpPiBjb21tYW5k
cy4NCj4gKw0KPiArRWFjaCBCPFBDSV9TUEVDX1NUUklORz4gaGFzIHRoZSBmb3JtIG9mDQo+ICtC
PFtEREREOl1CQjpERC5GW0BWU0xPVF0sS0VZPVZBTFVFLEtFWT1WQUxVRSwuLi4+IHdoZXJlOg0K
PiArDQo+ICs9b3ZlciA0DQo+ICsNCj4gKz1pdGVtIEI8W0REREQ6XUJCOkRELkY+DQo+ICsNCj4g
K0lkZW50aWZpZXMgdGhlIFBDSSBkZXZpY2UgZnJvbSB0aGUgaG9zdCBwZXJzcGVjdGl2ZSBpbiB0
aGUgZG9tYWluDQo+ICsoQjxEREREPiksIEJ1cyAoQjxCQj4pLCBEZXZpY2UgKEI8REQ+KSBhbmQg
RnVuY3Rpb24gKEI8Rj4pIHN5bnRheC4gVGhpcyBpcw0KPiArdGhlIHNhbWUgc2NoZW1lIGFzIHVz
ZWQgaW4gdGhlIG91dHB1dCBvZiBCPGxzcGNpKDEpPiBmb3IgdGhlIGRldmljZSBpbg0KPiArcXVl
c3Rpb24uDQo+ICsNCj4gK05vdGU6IGJ5IGRlZmF1bHQgQjxsc3BjaSgxKT4gd2lsbCBvbWl0IHRo
ZSBkb21haW4gKEI8RERERD4pIGlmIGl0DQo+ICtpcyB6ZXJvIGFuZCBpdCBpcyBvcHRpb25hbCBo
ZXJlIGFsc28uIFlvdSBtYXkgc3BlY2lmeSB0aGUgZnVuY3Rpb24NCj4gKyhCPEY+KSBhcyBCPCo+
IHRvIGluZGljYXRlIGFsbCBmdW5jdGlvbnMuDQo+ICsNCj4gKz1pdGVtIEI8QFZTTE9UPg0KPiAr
DQo+ICtTcGVjaWZpZXMgdGhlIHZpcnR1YWwgc2xvdCB3aGVyZSB0aGUgZ3Vlc3Qgd2lsbCBzZWUg
dGhpcw0KPiArZGV2aWNlLiBUaGlzIGlzIGVxdWl2YWxlbnQgdG8gdGhlIEI8REQ+IHdoaWNoIHRo
ZSBndWVzdCBzZWVzLiBJbiBhDQo+ICtndWVzdCBCPEREREQ+IGFuZCBCPEJCPiBhcmUgQzwwMDAw
OjAwPi4NCj4gKw0KPiArPWl0ZW0gQjxwZXJtaXNzaXZlPUJPT0xFQU4+DQo+ICsNCj4gK0J5IGRl
ZmF1bHQgcGNpYmFjayBvbmx5IGFsbG93cyBQViBndWVzdHMgdG8gd3JpdGUgImtub3duIHNhZmUi
IHZhbHVlcw0KPiAraW50byBQQ0kgY29uZmlndXJhdGlvbiBzcGFjZSwgbGlrZXdpc2UgUUVNVSAo
Ym90aCBxZW11LXhlbiBhbmQNCj4gK3FlbXUteGVuLXRyYWRpdGlvbmFsKSBpbXBvc2VzIHRoZSBz
YW1lIGNvbnN0cmFpbnQgb24gSFZNIGd1ZXN0cy4NCj4gK0hvd2V2ZXIsIG1hbnkgZGV2aWNlcyBy
ZXF1aXJlIHdyaXRlcyB0byBvdGhlciBhcmVhcyBvZiB0aGUgY29uZmlndXJhdGlvbiBzcGFjZQ0K
PiAraW4gb3JkZXIgdG8gb3BlcmF0ZSBwcm9wZXJseS4gIFRoaXMgb3B0aW9uIHRlbGxzIHRoZSBi
YWNrZW5kIChwY2liYWNrIG9yIFFFTVUpDQo+ICt0byBhbGxvdyBhbGwgd3JpdGVzIHRvIHRoZSBQ
Q0kgY29uZmlndXJhdGlvbiBzcGFjZSBvZiB0aGlzIGRldmljZSBieSB0aGlzDQo+ICtkb21haW4u
DQo+ICsNCj4gK0I8VGhpcyBvcHRpb24gc2hvdWxkIGJlIGVuYWJsZWQgd2l0aCBjYXV0aW9uOj4g
aXQgZ2l2ZXMgdGhlIGd1ZXN0IG11Y2gNCj4gK21vcmUgY29udHJvbCBvdmVyIHRoZSBkZXZpY2Us
IHdoaWNoIG1heSBoYXZlIHNlY3VyaXR5IG9yIHN0YWJpbGl0eQ0KPiAraW1wbGljYXRpb25zLiAg
SXQgaXMgcmVjb21tZW5kZWQgdG8gb25seSBlbmFibGUgdGhpcyBvcHRpb24gZm9yDQo+ICt0cnVz
dGVkIFZNcyB1bmRlciBhZG1pbmlzdHJhdG9yJ3MgY29udHJvbC4NCj4gKw0KPiArPWl0ZW0gQjxt
c2l0cmFuc2xhdGU9Qk9PTEVBTj4NCj4gKw0KPiArU3BlY2lmaWVzIHRoYXQgTVNJLUlOVHggdHJh
bnNsYXRpb24gc2hvdWxkIGJlIHR1cm5lZCBvbiBmb3IgdGhlIFBDSQ0KPiArZGV2aWNlLiBXaGVu
IGVuYWJsZWQsIE1TSS1JTlR4IHRyYW5zbGF0aW9uIHdpbGwgYWx3YXlzIGVuYWJsZSBNU0kgb24N
Cj4gK3RoZSBQQ0kgZGV2aWNlIHJlZ2FyZGxlc3Mgb2Ygd2hldGhlciB0aGUgZ3Vlc3QgdXNlcyBJ
TlR4IG9yIE1TSS4gU29tZQ0KPiArZGV2aWNlIGRyaXZlcnMsIHN1Y2ggYXMgTlZJRElBJ3MsIGRl
dGVjdCBhbiBpbmNvbnNpc3RlbmN5IGFuZCBkbyBub3QNCj4gK2Z1bmN0aW9uIHdoZW4gdGhpcyBv
cHRpb24gaXMgZW5hYmxlZC4gVGhlcmVmb3JlIHRoZSBkZWZhdWx0IGlzIGZhbHNlICgwKS4NCj4g
Kw0KPiArPWl0ZW0gQjxzZWl6ZT1CT09MRUFOPg0KPiArDQo+ICtUZWxscyBCPHhsPiB0byBhdXRv
bWF0aWNhbGx5IGF0dGVtcHQgdG8gcmUtYXNzaWduIGEgZGV2aWNlIHRvDQo+ICtwY2liYWNrIGlm
IGl0IGlzIG5vdCBhbHJlYWR5IGFzc2lnbmVkLg0KPiArDQo+ICtCPFdBUk5JTkc6PiBJZiB5b3Ug
c2V0IHRoaXMgb3B0aW9uLCBCPHhsPiB3aWxsIGdsYWRseSByZS1hc3NpZ24gYSBjcml0aWNhbA0K
PiArc3lzdGVtIGRldmljZSwgc3VjaCBhcyBhIG5ldHdvcmsgb3IgYSBkaXNrIGNvbnRyb2xsZXIg
YmVpbmcgdXNlZCBieQ0KPiArZG9tMCB3aXRob3V0IGNvbmZpcm1hdGlvbi4gIFBsZWFzZSB1c2Ug
d2l0aCBjYXJlLg0KPiArDQo+ICs9aXRlbSBCPHBvd2VyX21nbXQ9Qk9PTEVBTj4NCj4gKw0KPiAr
QjwoSFZNIG9ubHkpPiBTcGVjaWZpZXMgdGhhdCB0aGUgVk0gc2hvdWxkIGJlIGFibGUgdG8gcHJv
Z3JhbSB0aGUNCj4gK0QwLUQzaG90IHBvd2VyIG1hbmFnZW1lbnQgc3RhdGVzIGZvciB0aGUgUENJ
IGRldmljZS4gVGhlIGRlZmF1bHQgaXMgZmFsc2UgKDApLg0KPiArDQo+ICs9aXRlbSBCPHJkbV9w
b2xpY3k9U1RSSU5HPg0KPiArDQo+ICtCPChIVk0veDg2IG9ubHkpPiBUaGlzIGlzIHRoZSBzYW1l
IGFzIHRoZSBwb2xpY3kgc2V0dGluZyBpbnNpZGUgdGhlIEI8cmRtPg0KPiArb3B0aW9uIGJ1dCBq
dXN0IHNwZWNpZmljIHRvIGEgZ2l2ZW4gZGV2aWNlLiBUaGUgZGVmYXVsdCBpcyAicmVsYXhlZCIu
DQo+ICsNCj4gK05vdGU6IHRoaXMgd291bGQgb3ZlcnJpZGUgZ2xvYmFsIEI8cmRtPiBvcHRpb24u
DQo+ICsNCj4gKz1iYWNrDQo+IGRpZmYgLS1naXQgYS9kb2NzL21hbi94bC5jZmcuNS5wb2QuaW4g
Yi9kb2NzL21hbi94bC5jZmcuNS5wb2QuaW4NCj4gaW5kZXggMDUzMjczOWMxZi4uYjAwNjQ0ZTg1
MiAxMDA2NDQNCj4gLS0tIGEvZG9jcy9tYW4veGwuY2ZnLjUucG9kLmluDQo+ICsrKyBiL2RvY3Mv
bWFuL3hsLmNmZy41LnBvZC5pbg0KPiBAQCAtMTEwMSw3MyArMTEwMSw3IEBAIG9wdGlvbiBpcyB2
YWxpZCBvbmx5IHdoZW4gdGhlIEI8Y29udHJvbGxlcj4gb3B0aW9uIGlzIHNwZWNpZmllZC4NCj4g
ICA9aXRlbSBCPHBjaT1bICJQQ0lfU1BFQ19TVFJJTkciLCAiUENJX1NQRUNfU1RSSU5HIiwgLi4u
XT4NCj4gICANCj4gICBTcGVjaWZpZXMgdGhlIGhvc3QgUENJIGRldmljZXMgdG8gcGFzc3Rocm91
Z2ggdG8gdGhpcyBndWVzdC4NCj4gLUVhY2ggQjxQQ0lfU1BFQ19TVFJJTkc+IGhhcyB0aGUgZm9y
bSBvZg0KPiAtQjxbRERERDpdQkI6REQuRltAVlNMT1RdLEtFWT1WQUxVRSxLRVk9VkFMVUUsLi4u
PiB3aGVyZToNCj4gLQ0KPiAtPW92ZXIgNA0KPiAtDQo+IC09aXRlbSBCPFtEREREOl1CQjpERC5G
Pg0KPiAtDQo+IC1JZGVudGlmaWVzIHRoZSBQQ0kgZGV2aWNlIGZyb20gdGhlIGhvc3QgcGVyc3Bl
Y3RpdmUgaW4gdGhlIGRvbWFpbg0KPiAtKEI8RERERD4pLCBCdXMgKEI8QkI+KSwgRGV2aWNlIChC
PEREPikgYW5kIEZ1bmN0aW9uIChCPEY+KSBzeW50YXguIFRoaXMgaXMNCj4gLXRoZSBzYW1lIHNj
aGVtZSBhcyB1c2VkIGluIHRoZSBvdXRwdXQgb2YgQjxsc3BjaSgxKT4gZm9yIHRoZSBkZXZpY2Ug
aW4NCj4gLXF1ZXN0aW9uLg0KPiAtDQo+IC1Ob3RlOiBieSBkZWZhdWx0IEI8bHNwY2koMSk+IHdp
bGwgb21pdCB0aGUgZG9tYWluIChCPEREREQ+KSBpZiBpdA0KPiAtaXMgemVybyBhbmQgaXQgaXMg
b3B0aW9uYWwgaGVyZSBhbHNvLiBZb3UgbWF5IHNwZWNpZnkgdGhlIGZ1bmN0aW9uDQo+IC0oQjxG
PikgYXMgQjwqPiB0byBpbmRpY2F0ZSBhbGwgZnVuY3Rpb25zLg0KPiAtDQo+IC09aXRlbSBCPEBW
U0xPVD4NCj4gLQ0KPiAtU3BlY2lmaWVzIHRoZSB2aXJ0dWFsIHNsb3Qgd2hlcmUgdGhlIGd1ZXN0
IHdpbGwgc2VlIHRoaXMNCj4gLWRldmljZS4gVGhpcyBpcyBlcXVpdmFsZW50IHRvIHRoZSBCPERE
PiB3aGljaCB0aGUgZ3Vlc3Qgc2Vlcy4gSW4gYQ0KPiAtZ3Vlc3QgQjxEREREPiBhbmQgQjxCQj4g
YXJlIEM8MDAwMDowMD4uDQo+IC0NCj4gLT1pdGVtIEI8cGVybWlzc2l2ZT1CT09MRUFOPg0KPiAt
DQo+IC1CeSBkZWZhdWx0IHBjaWJhY2sgb25seSBhbGxvd3MgUFYgZ3Vlc3RzIHRvIHdyaXRlICJr
bm93biBzYWZlIiB2YWx1ZXMNCj4gLWludG8gUENJIGNvbmZpZ3VyYXRpb24gc3BhY2UsIGxpa2V3
aXNlIFFFTVUgKGJvdGggcWVtdS14ZW4gYW5kDQo+IC1xZW11LXhlbi10cmFkaXRpb25hbCkgaW1w
b3NlcyB0aGUgc2FtZSBjb25zdHJhaW50IG9uIEhWTSBndWVzdHMuDQo+IC1Ib3dldmVyLCBtYW55
IGRldmljZXMgcmVxdWlyZSB3cml0ZXMgdG8gb3RoZXIgYXJlYXMgb2YgdGhlIGNvbmZpZ3VyYXRp
b24gc3BhY2UNCj4gLWluIG9yZGVyIHRvIG9wZXJhdGUgcHJvcGVybHkuICBUaGlzIG9wdGlvbiB0
ZWxscyB0aGUgYmFja2VuZCAocGNpYmFjayBvciBRRU1VKQ0KPiAtdG8gYWxsb3cgYWxsIHdyaXRl
cyB0byB0aGUgUENJIGNvbmZpZ3VyYXRpb24gc3BhY2Ugb2YgdGhpcyBkZXZpY2UgYnkgdGhpcw0K
PiAtZG9tYWluLg0KPiAtDQo+IC1CPFRoaXMgb3B0aW9uIHNob3VsZCBiZSBlbmFibGVkIHdpdGgg
Y2F1dGlvbjo+IGl0IGdpdmVzIHRoZSBndWVzdCBtdWNoDQo+IC1tb3JlIGNvbnRyb2wgb3ZlciB0
aGUgZGV2aWNlLCB3aGljaCBtYXkgaGF2ZSBzZWN1cml0eSBvciBzdGFiaWxpdHkNCj4gLWltcGxp
Y2F0aW9ucy4gIEl0IGlzIHJlY29tbWVuZGVkIHRvIG9ubHkgZW5hYmxlIHRoaXMgb3B0aW9uIGZv
cg0KPiAtdHJ1c3RlZCBWTXMgdW5kZXIgYWRtaW5pc3RyYXRvcidzIGNvbnRyb2wuDQo+IC0NCj4g
LT1pdGVtIEI8bXNpdHJhbnNsYXRlPUJPT0xFQU4+DQo+IC0NCj4gLVNwZWNpZmllcyB0aGF0IE1T
SS1JTlR4IHRyYW5zbGF0aW9uIHNob3VsZCBiZSB0dXJuZWQgb24gZm9yIHRoZSBQQ0kNCj4gLWRl
dmljZS4gV2hlbiBlbmFibGVkLCBNU0ktSU5UeCB0cmFuc2xhdGlvbiB3aWxsIGFsd2F5cyBlbmFi
bGUgTVNJIG9uDQo+IC10aGUgUENJIGRldmljZSByZWdhcmRsZXNzIG9mIHdoZXRoZXIgdGhlIGd1
ZXN0IHVzZXMgSU5UeCBvciBNU0kuIFNvbWUNCj4gLWRldmljZSBkcml2ZXJzLCBzdWNoIGFzIE5W
SURJQSdzLCBkZXRlY3QgYW4gaW5jb25zaXN0ZW5jeSBhbmQgZG8gbm90DQo+IC1mdW5jdGlvbiB3
aGVuIHRoaXMgb3B0aW9uIGlzIGVuYWJsZWQuIFRoZXJlZm9yZSB0aGUgZGVmYXVsdCBpcyBmYWxz
ZSAoMCkuDQo+IC0NCj4gLT1pdGVtIEI8c2VpemU9Qk9PTEVBTj4NCj4gLQ0KPiAtVGVsbHMgQjx4
bD4gdG8gYXV0b21hdGljYWxseSBhdHRlbXB0IHRvIHJlLWFzc2lnbiBhIGRldmljZSB0bw0KPiAt
cGNpYmFjayBpZiBpdCBpcyBub3QgYWxyZWFkeSBhc3NpZ25lZC4NCj4gLQ0KPiAtQjxXQVJOSU5H
Oj4gSWYgeW91IHNldCB0aGlzIG9wdGlvbiwgQjx4bD4gd2lsbCBnbGFkbHkgcmUtYXNzaWduIGEg
Y3JpdGljYWwNCj4gLXN5c3RlbSBkZXZpY2UsIHN1Y2ggYXMgYSBuZXR3b3JrIG9yIGEgZGlzayBj
b250cm9sbGVyIGJlaW5nIHVzZWQgYnkNCj4gLWRvbTAgd2l0aG91dCBjb25maXJtYXRpb24uICBQ
bGVhc2UgdXNlIHdpdGggY2FyZS4NCj4gLQ0KPiAtPWl0ZW0gQjxwb3dlcl9tZ210PUJPT0xFQU4+
DQo+IC0NCj4gLUI8KEhWTSBvbmx5KT4gU3BlY2lmaWVzIHRoYXQgdGhlIFZNIHNob3VsZCBiZSBh
YmxlIHRvIHByb2dyYW0gdGhlDQo+IC1EMC1EM2hvdCBwb3dlciBtYW5hZ2VtZW50IHN0YXRlcyBm
b3IgdGhlIFBDSSBkZXZpY2UuIFRoZSBkZWZhdWx0IGlzIGZhbHNlICgwKS4NCj4gLQ0KPiAtPWl0
ZW0gQjxyZG1fcG9saWN5PVNUUklORz4NCj4gLQ0KPiAtQjwoSFZNL3g4NiBvbmx5KT4gVGhpcyBp
cyB0aGUgc2FtZSBhcyB0aGUgcG9saWN5IHNldHRpbmcgaW5zaWRlIHRoZSBCPHJkbT4NCj4gLW9w
dGlvbiBidXQganVzdCBzcGVjaWZpYyB0byBhIGdpdmVuIGRldmljZS4gVGhlIGRlZmF1bHQgaXMg
InJlbGF4ZWQiLg0KPiAtDQo+IC1Ob3RlOiB0aGlzIHdvdWxkIG92ZXJyaWRlIGdsb2JhbCBCPHJk
bT4gb3B0aW9uLg0KPiAtDQo+IC09YmFjaw0KPiArU2VlIEw8eGwtcGNpLWNvbmZpZ3VyYXRpb24o
NSk+IGZvciBtb3JlIGRldGFpbHMuDQo+ICAgDQo+ICAgPWl0ZW0gQjxwY2lfcGVybWlzc2l2ZT1C
T09MRUFOPg0KPiAgIA==


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 16:55:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 16:55:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42225.75928 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk8vQ-0001yJ-J6; Tue, 01 Dec 2020 16:54:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42225.75928; Tue, 01 Dec 2020 16:54:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk8vQ-0001yC-G9; Tue, 01 Dec 2020 16:54:48 +0000
Received: by outflank-mailman (input) for mailman id 42225;
 Tue, 01 Dec 2020 16:54:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3Rbw=FF=epam.com=prvs=0604308a42=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1kk8vP-0001y7-MA
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 16:54:47 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a738e3bc-4730-4731-9d5c-92d472700c14;
 Tue, 01 Dec 2020 16:54:45 +0000 (UTC)
Received: from pps.filterd (m0174677.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 0B1GjXec025828; Tue, 1 Dec 2020 16:54:41 GMT
Received: from eur05-am6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2111.outbound.protection.outlook.com [104.47.18.111])
 by mx0a-0039f301.pphosted.com with ESMTP id 355rbugam7-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 01 Dec 2020 16:54:40 +0000
Received: from VI1PR03MB6400.eurprd03.prod.outlook.com (2603:10a6:800:17e::20)
 by VE1PR03MB5197.eurprd03.prod.outlook.com (2603:10a6:802:a3::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.24; Tue, 1 Dec
 2020 16:54:37 +0000
Received: from VI1PR03MB6400.eurprd03.prod.outlook.com
 ([fe80::d7a:2503:2ffd:1c51]) by VI1PR03MB6400.eurprd03.prod.outlook.com
 ([fe80::d7a:2503:2ffd:1c51%6]) with mapi id 15.20.3611.031; Tue, 1 Dec 2020
 16:54:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a738e3bc-4730-4731-9d5c-92d472700c14
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=E6xyjSDcH0vCOxPhurgc42zfgZl1+glVTATMyS+u7xzIKRG3JLfBPFW++8b4JgUJbHrwkRIQF3xd2GCMWrp32piXi6L7Auk/2/7KfmDgdb9LgOVUKEfrQYXqqtaySTpHxuWKBmp5AVZoU4cehcaEmnZIzV6caMqweMU4P5L3TSl5jcxQPkjFxobuuh4Wr7oj6fbm1DWMNwmKJELyVl3Xcuq7s8cRH2c5YfoZyCWt+/u/noJGOZjmjIHNMcai6s8LkXoHViV6CV7GTRWSmKvMq7yrHIEgRcRJXsbM2ut+OCTHTqHrlHVapUq51MtRaDeg/6XFbW7DXMmphKW+qdrJRg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pD7dHSjE0bgxDY7bpnmsWNoqhd5TmDlW6Zky89g0x5w=;
 b=Snjh1fZlQbKPzYVIyf6EQU0LvBGILq936M1rxCEuVciJPt9lLkEB5QRoSmF9rlKYm0XgCA4OfgYaUyAmc6ZYUnVJWrfm5IoX5VEnL/TfiKyBe1qBbxQ8h2o5BM4o95HVT+g2jN9k/Q4j4070FLM/JdH/C93Flx1jjhKbnFOw6Zgc2+HmBZSthlwUBpqpLvb8f8mb3Wftrml/iz7n1bMEmcrqsyjqtDvGEDVyI5QEnpkR70AUa5yDdupgM18PfevpL4HHpOJzCNvI4UPRj07sKiBQ2kYxTS6zRcxfFnA5bVi9VI9/Fl3Val74XhmTgbY9k3K5ZXLNhjbtAHKrK+bJbg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pD7dHSjE0bgxDY7bpnmsWNoqhd5TmDlW6Zky89g0x5w=;
 b=ARlhCPjjhcvbnUo20YBdK4N+LBUXz2AnefI8cr2BY+FB9zD8K9LbPUbUYd5fNEov6u8WQX5aUmclNRXZLFWYP+TNXrAl8X5ooGDVYDuNZuYCDCVIdtOcf+GRE25GYhtBVL69fw8qKjo2oUwKC7rUstNJ6SpVi69UFPaQvDfS4F1Qs5CnvE+YifNMZq8VaHB55dIEYzpQ4KgzyBsyN9HYcB+E1rEmmpCzUnV7aRLuuXejB0W+rXEZfhcLnafBtTmr4lKxIIlshbc0ZTiiqYHr4Qn+fPd23kTGAfGaK/GiDMC+ylCuNggsS45CQufKbgtyEkgVGvxKQlgWhIr6k9FS/g==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Stefano
 Stabellini <sstabellini@kernel.org>,
        Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 5/7] xen/arm: Add handler for cp15 ID registers
Thread-Topic: [PATCH v2 5/7] xen/arm: Add handler for cp15 ID registers
Thread-Index: AQHWxyRSFk5jpSQ9QE+NXR5AM1rL5qnhIUQAgAD/vgCAAAW1AIAAJWwAgAAq4QA=
Date: Tue, 1 Dec 2020 16:54:37 +0000
Message-ID: <87h7p55uwj.fsf@epam.com>
References: <cover.1606742184.git.bertrand.marquis@arm.com>
 <86c96cd3895bf968f94010c0f4ee8dce7f0338e8.1606742184.git.bertrand.marquis@arm.com>
 <87lfei7fj5.fsf@epam.com> <AB32AAFF-DD1D-4B13-ABC0-06F460E95E1C@arm.com>
 <87sg8p687j.fsf@epam.com> <87243486-2A58-4497-B566-5FDE4158D18E@arm.com>
In-Reply-To: <87243486-2A58-4497-B566-5FDE4158D18E@arm.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: mu4e 1.4.10; emacs 27.1
authentication-results: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [176.36.48.175]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 803075e1-1212-47e2-1a75-08d89619c9bb
x-ms-traffictypediagnostic: VE1PR03MB5197:
x-microsoft-antispam-prvs: 
 <VE1PR03MB5197707B5AA47C0450E568D8E6F40@VE1PR03MB5197.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 bTp4CgNX/silfgYUHtrKC491EAoaIlzKl4awA3dDSz2RZ/10lsvXw8Wa0oGmaju/v7ZuUjt9yt3hxOsatmAasjnxwQ2gj6k8iZuggIJ4FSi43rQA6gnScAPJoEd7Fk6cRcgYel1QuTQniFmRC+XdDOU8zwj/VXr1I6SX+tOIn+2sS339QJ9yU8QIZxzRL8eLQUBQd6Bhd7M5a4bz+QUvLVdwWkB2AguP0Lbc/2+6KO6SWu7DL87LgfZlVTLwmXN18YhUFRGAQ4qo/+YL0+msr+/5KOpj8EhXrwv7MkI6rQwH3Wksm3N2N+ZMvgSNFg1Day+nJgZjVkit83UfQbY7LA==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR03MB6400.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(39850400004)(346002)(376002)(396003)(366004)(83380400001)(6506007)(54906003)(316002)(76116006)(2906002)(36756003)(8676002)(8936002)(186003)(26005)(2616005)(91956017)(6486002)(55236004)(53546011)(66946007)(6512007)(66476007)(71200400001)(66446008)(64756008)(86362001)(5660300002)(4326008)(66556008)(478600001)(6916009);DIR:OUT;SFP:1101;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <E6DA12EE11875C458FB8C750AAD54672@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: VI1PR03MB6400.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 803075e1-1212-47e2-1a75-08d89619c9bb
X-MS-Exchange-CrossTenant-originalarrivaltime: 01 Dec 2020 16:54:37.4233
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: i+zpgUKg2lZhPV8hH5ayc8gA+6iXCNspAwQVFezWVpgQ7nmrDj0njOOxSfNGnOBS0kSmcJerzmDSVOkUZxAJ9MIGhGAxYA9OG22tYlkbUpE=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR03MB5197
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-12-01_07:2020-11-30,2020-12-01 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 bulkscore=0 clxscore=1015
 suspectscore=0 priorityscore=1501 mlxlogscore=999 spamscore=0 adultscore=0
 phishscore=0 lowpriorityscore=0 malwarescore=0 mlxscore=0 impostorscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2012010103


Hi,

Bertrand Marquis writes:

> Hi Volodymyr,
>
>> On 1 Dec 2020, at 12:07, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> wrote:
>> 
>> 
>> Hi,
>> 
>> Bertrand Marquis writes:
>> 
>>> Hi,
>>> 
>>>> On 30 Nov 2020, at 20:31, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> wrote:
>>>> 
>>>> 
>>>> Bertrand Marquis writes:
>>>> 
>>>>> Add support for emulation of cp15 based ID registers (on arm32 or when
>>>>> running a 32bit guest on arm64).
>>>>> The handlers are returning the values stored in the guest_cpuinfo
>>>>> structure.
>>>>> In the current status the MVFR registers are no supported.
>>>> 
>>>> It is unclear what will happen with registers that are not covered by
>>>> guest_cpuinfo structure. According to ARM ARM, it is implementation
>>>> defined if such accesses will be trapped. On other hand, there are many
>>>> registers which are RAZ. So, good behaving guest can try to read one of
>>>> that registers and it will get undefined instruction exception, instead
>>>> of just reading all zeroes.
>>> 
>>> This is true in the status of this patch but this is solved by the next patch
>>> which is adding proper handling of those registers (add CP10 exception
>>> support), at least for MVFR ones.
>>> 
>>> From ARM ARM point of view, I did handle all registers listed I think.
>>> If you think some are missing please point me to them as O do not
>>> completely understand what are the “registers not covered” unless
>>> you mean the MVFR ones.
>> 
>> Well, I may be wrong for aarch32 case, but for aarch64, there are number
>> of reserved registers in IDs range. Those registers should read as
>> zero. You can find them in the section "C5.1.6 op0==0b11, Moves to and
>> from non-debug System registers and Special-purpose registers" of ARM
>> DDI 0487B.a. Check out "Table C5-6 System instruction encodings for
>> non-Debug System register accesses".
>
> The point of the serie is to handle all registers trapped due to TID3 bit in HCR_EL2.
>
> And i think I handled all of them but I might be wrong.
>
> Handling all registers for op0==0b11 will cover a lot more things.
> This can be done of course but this was not the point of this serie.
>
> The listing in HCR_EL2 documentation is pretty complete and if I miss any register
> there please tell me but I do no understand from the documentation that all registers
> with op0 3 are trapped by TID3.

My concern is that the same code may observe different effects when
running baremetal and as a VM.

For example, we are trying to run a newer version of a kernel that
supports some hypothetical ARMv8.9. And it tries to read a new ID
register which is absent in ARMv8.2. There are possible cases:

0. It runs as baremetal code on a compatible architecture. So it just
accesses this register and all is fine.

1. It runs as baremetal code on an older ARMv8 architecture. Current
reference manual states that those registers should read as zero, so
all is fine, as well.

2. It runs as a VM on an older architecture. It is IMPLEMENTATION DEFINED
whether HCR.TID3 will cause traps on accesses to unassigned registers:

2a. Platform does not cause traps and software reads zeros directly from
real registers. This is a good outcome.

2b. Platform causes a trap, and your code injects the undefined
instruction exception. This is the case that bothers me. Well written code
that tries to be compatible with different versions of the architecture
will fail there.

3. It runs as a VM on a newer architecture. I can only speculate there,
but I think that the list of registers trapped by HCR.TID3 will be extended
when new features are added. At least, this does not contradict the
current IMPLEMENTATION DEFINED constraint. In this case, the hypervisor will
inject an exception when the guest tries to access a valid register.


So, in my opinion, and to be compatible with the reference manual, we
should allow guests to read "Reserved, RAZ" registers.



> Regards
> Bertrand
>
>
>> 
>> 
>>>> 
>>>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>>> ---
>>>>> Changes in V2: rebase
>>>>> ---
>>>>> xen/arch/arm/vcpreg.c | 35 +++++++++++++++++++++++++++++++++++
>>>>> 1 file changed, 35 insertions(+)
>>>>> 
>>>>> diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c
>>>>> index cdc91cdf5b..d0c6406f34 100644
>>>>> --- a/xen/arch/arm/vcpreg.c
>>>>> +++ b/xen/arch/arm/vcpreg.c
>>>>> @@ -155,6 +155,14 @@ TVM_REG32(CONTEXTIDR, CONTEXTIDR_EL1)
>>>>>        break;                                                          \
>>>>>    }
>>>>> 
>>>>> +/* Macro to generate easily case for ID co-processor emulation */
>>>>> +#define GENERATE_TID3_INFO(reg,field,offset)                        \
>>>>> +    case HSR_CPREG32(reg):                                          \
>>>>> +    {                                                               \
>>>>> +        return handle_ro_read_val(regs, regidx, cp32.read, hsr,     \
>>>>> +                          1, guest_cpuinfo.field.bits[offset]);     \
>>>>> +    }
>>>>> +
>>>>> void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
>>>>> {
>>>>>    const struct hsr_cp32 cp32 = hsr.cp32;
>>>>> @@ -286,6 +294,33 @@ void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
>>>>>         */
>>>>>        return handle_raz_wi(regs, regidx, cp32.read, hsr, 1);
>>>>> 
>>>>> +    /*
>>>>> +     * HCR_EL2.TID3
>>>>> +     *
>>>>> +     * This is trapping most Identification registers used by a guest
>>>>> +     * to identify the processor features
>>>>> +     */
>>>>> +    GENERATE_TID3_INFO(ID_PFR0, pfr32, 0)
>>>>> +    GENERATE_TID3_INFO(ID_PFR1, pfr32, 1)
>>>>> +    GENERATE_TID3_INFO(ID_PFR2, pfr32, 2)
>>>>> +    GENERATE_TID3_INFO(ID_DFR0, dbg32, 0)
>>>>> +    GENERATE_TID3_INFO(ID_DFR1, dbg32, 1)
>>>>> +    GENERATE_TID3_INFO(ID_AFR0, aux32, 0)
>>>>> +    GENERATE_TID3_INFO(ID_MMFR0, mm32, 0)
>>>>> +    GENERATE_TID3_INFO(ID_MMFR1, mm32, 1)
>>>>> +    GENERATE_TID3_INFO(ID_MMFR2, mm32, 2)
>>>>> +    GENERATE_TID3_INFO(ID_MMFR3, mm32, 3)
>>>>> +    GENERATE_TID3_INFO(ID_MMFR4, mm32, 4)
>>>>> +    GENERATE_TID3_INFO(ID_MMFR5, mm32, 5)
>>>>> +    GENERATE_TID3_INFO(ID_ISAR0, isa32, 0)
>>>>> +    GENERATE_TID3_INFO(ID_ISAR1, isa32, 1)
>>>>> +    GENERATE_TID3_INFO(ID_ISAR2, isa32, 2)
>>>>> +    GENERATE_TID3_INFO(ID_ISAR3, isa32, 3)
>>>>> +    GENERATE_TID3_INFO(ID_ISAR4, isa32, 4)
>>>>> +    GENERATE_TID3_INFO(ID_ISAR5, isa32, 5)
>>>>> +    GENERATE_TID3_INFO(ID_ISAR6, isa32, 6)
>>>>> +    /* MVFR registers are in cp10 no cp15 */
>>>>> +
>>>>>    /*
>>>>>     * HCR_EL2.TIDCP
>>>>>     *
>>>> 
>>>> 
>>>> -- 
>>>> Volodymyr Babchuk at EPAM
>> 
>> 
>> -- 
>> Volodymyr Babchuk at EPAM


-- 
Volodymyr Babchuk at EPAM
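[Editorial note: the "Reserved, RAZ" behaviour Volodymyr argues for can be sketched in isolation. The following is a hypothetical illustration with invented register numbers and values, not the actual Xen vcpreg.c dispatch: known ID registers return the value stored for the guest, and anything else in the trapped ID range reads as zero instead of raising an undefined instruction exception.]

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical sketch of a read-as-zero fallback for ID-group registers.
 * Register names and stored values are invented for the example.
 */
enum id_reg { ID_PFR0 = 0, ID_PFR1 = 1, ID_RESERVED_EXAMPLE = 42 };

static const uint32_t guest_id_regs[] = { 0x00000131u, 0x00011011u };

static uint32_t read_id_reg(enum id_reg reg)
{
    switch ( reg )
    {
    case ID_PFR0:
        return guest_id_regs[0];
    case ID_PFR1:
        return guest_id_regs[1];
    default:
        /* Unassigned/reserved ID register: RAZ rather than #UNDEF. */
        return 0;
    }
}
```

With such a fallback, a guest probing a not-yet-assigned ID register sees zero, matching what it would read on bare metal.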


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 17:41:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 17:41:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42238.75941 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk9eT-0006YM-8F; Tue, 01 Dec 2020 17:41:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42238.75941; Tue, 01 Dec 2020 17:41:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kk9eT-0006YF-4z; Tue, 01 Dec 2020 17:41:21 +0000
Received: by outflank-mailman (input) for mailman id 42238;
 Tue, 01 Dec 2020 17:41:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SK29=FF=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kk9eR-0006YA-2L
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 17:41:19 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 00dab951-975e-4f50-bae8-55646c65eb20;
 Tue, 01 Dec 2020 17:41:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 00dab951-975e-4f50-bae8-55646c65eb20
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606844477;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=iVIc1mh7sDE1Fnb0UisdiqFgekYPnrQJrEvAysjGp0Y=;
  b=Y4vkIYR8l4xtnczW3G2hiOg+qlbCtud/qICSvkB3p4X/dpKOjih1h4Bf
   uPZCydNoyp1paPg7D560AHLp8eNys52TVtLLzkohCORlpy83va2JGaQA7
   iDSVjW5WqC0OPoJPAqOblpOr3UAWT4aoeZFt4Gbpn05JCb6bR8IPadhw2
   A=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: SNTO5zvFG3vmfPRGVZxraIs5vumKDfiQncyQajZBiuCY+9XbUTxDf6uCZAfbndae1BpsEfoUP4
 LaAbZMY1ytuZTlmvYIPpaejRyUkpqUOL2MzLNxil4pSSjigdu2kR2/mUEBNg2trWrX1lDt7IaN
 SDw7YDVNiCvIGtiAnefMxYXNTsw+QiXR6KPvMOvSXYDlR3IAOkXKiGiS/Hr9ZKiBqvYYiX/AbK
 54qFJMmc+Azd9pSVNMdY7GG+yTA8KvuW8i+AplLGh9Myenu3h0jX/+Zjj3SRRTi5TYRMboK1hz
 QPo=
X-SBRS: None
X-MesageID: 32512932
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,384,1599537600"; 
   d="scan'208";a="32512932"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dqo/vHwpk/3MClzVP8XD1oAHeNbS2LDTcwOhbLB3b4cC3qF8jpFXxm/3VdSKBYyo3HFsBjoy5deYbwGAPHIXkZe3CmgfTEBS0pjjQCTtylOpCEzhG0JdMVvyrHKQVCeTulrXldiZ/ANtopTwJC7fyR5vJZHDuIa0MeLNKg6TdR3wg7mIDKF69rkcM+fXPl3lWGteAYMUnWKcDVhSjznBLR1WZTtPbp6V/VDt4TKyiCAUjqb/LXWIfnHVTgmUOBZXNbTdoZeqBn/nwmywoAvbAmkvf9JzNAaKENWIPcyG7f0ttOQ51wfsGGYKVGyX4cKwIR2u6Zp6BBTHXmtGgDftow==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QUilLypEJA41k654QAtYtSzKwPvDXaDonCVZeP+ybGU=;
 b=VBzgTq7SWTa9gheBt5P4oG7iitpxus0MpWETGiLDI1y3F/6UNZch4sGfPBWHJL+7YgH47TSKjsZ47LRwS0vtbFn2JmZSmX99Fxxy8CYD87pmdhnI/Vijz6pduYdg9BZmuzb9PJcSS73QiA8VIpPibc42KLsktF+yXMG021E3npE3skCF5gy/Gnq85Wj8Bqaqki7fBlaClo0Pj170QeI4zFPqoq6wxJKkqwK+4etFjcbj21hgCznV3E4r5dW9Z4iF5jRYEMCt8PBjEpg2sF2oWb/eF/IAkZ5pUPfEnrP5QiOvIhU338fMc/r02gAy26QnPV0H8bKiOa6dAxlv57Utug==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QUilLypEJA41k654QAtYtSzKwPvDXaDonCVZeP+ybGU=;
 b=az47rUEnquYsfKy4Kb7oDa+pXhKWvz1DI6NUhncUqrgcD5V6BIqoNZrVEkkTbrkLYZPGAf6GVgtZDJ3rgMNRoR6XAZwBCNVR1KJF+NFsrK4S1mDlyc3EJhlc8MrbdynKVCOL2jZhn9ndZORCn/5rUnIQMDTe6wPxhFhPs11TXU4=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Manuel Bouyer
	<bouyer@antioche.eu.org>
Subject: [PATCH] vpci/msix: exit early if MSI-X is disabled
Date: Tue,  1 Dec 2020 18:40:14 +0100
Message-ID: <20201201174014.27878-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.29.2
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0014.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:150::19) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: ab532a1a-80c4-4027-997f-08d896204309
X-MS-TrafficTypeDiagnostic: DM6PR03MB4604:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4604A840C0710899432703488FF40@DM6PR03MB4604.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: TZhSu8HUz2Nep/mHiP0wOjPb94JY48CffdPCEYcm00tG/msVK+C2mdxVELGegnrZWXuqk5vpqrKniE1b1JfCGH/dsWWM8sOcGjCceNp0wgbCmGRVOHaUmEBg7Z5bn9QEzXxbhJ0Cm6Uy/CnIrTrt3Qpw0zQLEs6exh3GfZfrpB6yYnadwDrqYv/OIU3e9BMy1576cnhjLRC4LkcggswqMd5Rz2NuRL32mkHOM3k7xAc8+8wFd1Azhby5lbDEphYxcsjod1NthITJyGPwIKhzbYj0BuW8aRk2u5DrUFqKw4o5gZaBxGg8RrptV0U4TeHX
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(346002)(39860400002)(366004)(396003)(136003)(66946007)(5660300002)(956004)(1076003)(6486002)(2906002)(66476007)(4326008)(2616005)(6916009)(6666004)(66556008)(8936002)(36756003)(83380400001)(16526019)(186003)(26005)(8676002)(86362001)(54906003)(478600001)(6496006)(316002);DIR:OUT;SFP:1101;
X-MS-Exchange-CrossTenant-Network-Message-Id: ab532a1a-80c4-4027-997f-08d896204309
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Dec 2020 17:40:58.2689
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: dqr+/tm6zj24vmTeL0dyhAUrswEgFvXGLkOc+MUKPxCiTblOYbl+eVJN6neHzgpgIdJlGhzbuCfeLiBkGHh5Eg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4604
X-OriginatorOrg: citrix.com

Do not attempt to mask an MSI-X entry if MSI-X is not enabled; otherwise
it leads to hitting the following assertion on debug builds:

(XEN) Panic on CPU 13:
(XEN) Assertion 'entry->arch.pirq != INVALID_PIRQ' failed at vmsi.c:843

In order to fix it, exit early from the switch in msix_write if MSI-X
is not enabled.

Fixes: d6281be9d0145 ('vpci/msix: add MSI-X handlers')
Reported-by: Manuel Bouyer <bouyer@antioche.eu.org>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/drivers/vpci/msix.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/vpci/msix.c b/xen/drivers/vpci/msix.c
index 64dd0a929c..93902ba7db 100644
--- a/xen/drivers/vpci/msix.c
+++ b/xen/drivers/vpci/msix.c
@@ -357,7 +357,11 @@ static int msix_write(struct vcpu *v, unsigned long addr, unsigned int len,
          * so that it picks the new state.
          */
         entry->masked = new_masked;
-        if ( !new_masked && msix->enabled && !msix->masked && entry->updated )
+
+        if ( !msix->enabled )
+            break;
+
+        if ( !new_masked && !msix->masked && entry->updated )
         {
             /*
              * If MSI-X is enabled, the function mask is not active, the entry
-- 
2.29.2
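[Editorial note: the fix follows a common guard pattern: record the new mask bit, but bail out of the state-update switch before touching per-entry interrupt state while the capability is disabled. A minimal standalone sketch of the pattern, with invented types rather than the real vpci structures:]

```c
#include <assert.h>
#include <stdbool.h>

#define INVALID_PIRQ (-1)

/* Invented stand-ins for the vpci MSI-X state, for illustration only. */
struct msix_state { bool enabled; bool masked; };
struct msix_entry { bool masked; bool updated; int pirq; };

/* Returns true if the unmask was propagated to the interrupt path. */
static bool entry_write_mask(struct msix_state *msix,
                             struct msix_entry *e, bool new_masked)
{
    /* Always record the guest-visible mask bit... */
    e->masked = new_masked;

    /* ...but do nothing more while MSI-X is disabled: the entry has no
     * bound pirq yet, so touching it would trip the assertion. */
    if ( !msix->enabled )
        return false;

    if ( !new_masked && !msix->masked && e->updated )
    {
        assert(e->pirq != INVALID_PIRQ); /* would fire without the guard */
        return true;
    }
    return false;
}
```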



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 18:53:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 18:53:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42287.76012 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkAm5-000567-RT; Tue, 01 Dec 2020 18:53:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42287.76012; Tue, 01 Dec 2020 18:53:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkAm5-000560-Nq; Tue, 01 Dec 2020 18:53:17 +0000
Received: by outflank-mailman (input) for mailman id 42287;
 Tue, 01 Dec 2020 18:53:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Lvrb=FF=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kkAm4-00055v-Jr
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 18:53:16 +0000
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e03f970b-01d7-4e20-b29b-614d5403cdf2;
 Tue, 01 Dec 2020 18:53:15 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id k14so4337453wrn.1
 for <xen-devel@lists.xenproject.org>; Tue, 01 Dec 2020 10:53:15 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id n4sm1049687wmc.30.2020.12.01.10.53.13
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 01 Dec 2020 10:53:14 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e03f970b-01d7-4e20-b29b-614d5403cdf2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=j4ldwK3OA/QH4qbINrvM8NZOooiQJgxxFypacLUiVqc=;
        b=Y5NHW2ZcZkuVLi6t5TbbaLuIQsqB5J9fHLp7B4ueojSreNE/f4hrMB8T7jQ/Femqzl
         vvITaEH5mdSNrXhMHmlofC5B7kRgcLeUEs4RpcpWiYPhYjOfP+Ycl31sA7ie7wA5Zwwc
         fpTgw6bpbI87wIbmmqOEzMP6i85D4sNU42gUAKrRqqpueCIaVPlDZmrpDrlldhnU7glf
         CtsJm6la9hS5243s7jLzng8Y0sxN+Jol1i8LBwfNiRKNKxm5+25hfcnFL8PKjAWbD8Bp
         x12pnFhQCfqYeed+O/FsGnMY0h77CfF3JO8FLnRYlJ/rjLJ1zI3fCc0soxVGjmICpsqR
         bTgQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=j4ldwK3OA/QH4qbINrvM8NZOooiQJgxxFypacLUiVqc=;
        b=MEX0XlE4onQLJwjejbtHw4/ExS6zDxRmVFEb4bH5lFz9JcjdUU1C5kTuXN8abWnA2K
         0ZDac/1Odt1K4WWtlhlpTb1FeolK9Fc9i8vI7ytikqknehGfOHXdAzynp8CN+P77Kd8k
         0v2Y0cRYMinr8aHAkX8MjOaNuLAR4AzroTOslzQnNi3isbSdLUuUtYE1I/7q+V8NyMGE
         pfPOV2XbHObybH2lGWYuasvXo4x9uRkdGP1hDL7stYdRUICMgkfds2QQjSK3orztZzEU
         l+1eBX+6RuSj3pJRJH9TCfOTrkIg4SogMjWYOmufRNCLoC8oxkvNSbccfuZ+G5/cPIXi
         EKbQ==
X-Gm-Message-State: AOAM530Gvt12oC+PU63NI1ewfKUIdcCtq65dtrPit4xT1FWCmvzrMhzo
	XJ73fg3Z9HmV2ej/3jZweG9SOVQvIJkvSA==
X-Google-Smtp-Source: ABdhPJy57cIS+Ur+LvNK2SJRThcmB7ubIYdAGEVkXPhUbo3C5hs+A+tmUXI8b8KVGoOgHVMmwbvZFQ==
X-Received: by 2002:adf:9d49:: with SMTP id o9mr5804255wre.413.1606848794557;
        Tue, 01 Dec 2020 10:53:14 -0800 (PST)
Subject: Re: [PATCH V3 01/23] x86/ioreq: Prepare IOREQ feature for making it
 common
To: =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, Jan Beulich <jbeulich@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-2-git-send-email-olekstysh@gmail.com>
 <87eek9u6tj.fsf@linaro.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <cd2e064e-896b-3a28-5d37-93ddaba1c13e@gmail.com>
Date: Tue, 1 Dec 2020 20:53:08 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <87eek9u6tj.fsf@linaro.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 01.12.20 13:03, Alex Bennée wrote:

Hi Alex

> Oleksandr Tyshchenko <olekstysh@gmail.com> writes:
>
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> As a lot of x86 code can be re-used on Arm later on, this
>> patch makes some preparation to x86/hvm/ioreq.c before moving
>> to the common code. This way we will get a verbatim copy
> <snip>
>> It is worth mentioning that the code which checks the return value of
>> p2m_set_ioreq_server() in hvm_map_mem_type_to_ioreq_server() was
>> folded into arch_ioreq_server_map_mem_type() for a clean split.
>> As a result, p2m_change_entry_type_global() is called with the
>> ioreq_server lock held.
> <snip>
>>   
>> +/* Called with ioreq_server lock held */
>> +int arch_ioreq_server_map_mem_type(struct domain *d,
>> +                                   struct hvm_ioreq_server *s,
>> +                                   uint32_t flags)
>> +{
>> +    int rc = p2m_set_ioreq_server(d, flags, s);
>> +
>> +    if ( rc == 0 && flags == 0 )
>> +    {
>> +        const struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> +
>> +        if ( read_atomic(&p2m->ioreq.entry_count) )
>> +            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
>> +    }
>> +
>> +    return rc;
>> +}
>> +
>>   /*
>>    * Map or unmap an ioreq server to specific memory type. For now, only
>>    * HVMMEM_ioreq_server is supported, and in the future new types can be
>> @@ -1112,19 +1155,11 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
>>       if ( s->emulator != current->domain )
>>           goto out;
>>   
>> -    rc = p2m_set_ioreq_server(d, flags, s);
>> +    rc = arch_ioreq_server_map_mem_type(d, s, flags);
>>   
>>    out:
>>       spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>>   
>> -    if ( rc == 0 && flags == 0 )
>> -    {
>> -        struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> -
>> -        if ( read_atomic(&p2m->ioreq.entry_count) )
>> -            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
>> -    }
>> -
> It should be noted that p2m holds its own lock but I'm unfamiliar with
> Xen's locking architecture. Is there anything that prevents another vCPU
> accessing a page that is also being used by ioreq on the first vCPU?
I am not sure I can provide a full explanation here.
What I understand is that p2m_change_entry_type_global() is x86
specific (we don't have the p2m_ioreq_server concept on Arm) and should
remain as such (not exposed to the common code).
IIRC, I raised a question during the V2 review about whether we could
hold the ioreq server lock around the call to
p2m_change_entry_type_global() and didn't get objections. I may be
mistaken, but it looks like the lock used in
p2m_change_entry_type_global() is a separate lock protecting page table
operations, so we are unlikely to get into trouble calling this
function with the ioreq server lock held.


>
> Assuming that deadlock isn't a possibility to my relatively untrained
> eye this looks good to me:
>
> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

Thank you.


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Dec 01 19:36:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 19:36:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42296.76024 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkBRm-0000RV-7o; Tue, 01 Dec 2020 19:36:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42296.76024; Tue, 01 Dec 2020 19:36:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkBRm-0000RO-41; Tue, 01 Dec 2020 19:36:22 +0000
Received: by outflank-mailman (input) for mailman id 42296;
 Tue, 01 Dec 2020 19:36:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=na+5=FF=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1kkBRl-0000RJ-EV
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 19:36:21 +0000
Received: from mail-wr1-x443.google.com (unknown [2a00:1450:4864:20::443])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 40d032bb-61a2-402d-8504-ad19c3f13e92;
 Tue, 01 Dec 2020 19:36:20 +0000 (UTC)
Received: by mail-wr1-x443.google.com with SMTP id o1so4524987wrx.7
 for <xen-devel@lists.xenproject.org>; Tue, 01 Dec 2020 11:36:20 -0800 (PST)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id r21sm987137wrc.16.2020.12.01.11.36.16
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 01 Dec 2020 11:36:17 -0800 (PST)
Received: from zen (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id 3DC781FF7E;
 Tue,  1 Dec 2020 19:36:16 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 40d032bb-61a2-402d-8504-ad19c3f13e92
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=references:user-agent:from:to:cc:subject:in-reply-to:date
         :message-id:mime-version:content-transfer-encoding;
        bh=JFQI+aXRmBjkxgRhRTRPjWwAL6b77ZM7cNrym6Grfj4=;
        b=mqI4ra5goyWqZOxhTP5SlobUglWOLYR32l91ywLtqYvUmMnnw0NX1TVYXVFJ3akU3B
         c99hawi8nXJmFAUHLKxwCbP4rZmeg4AuOeSo08gJSAgfOA3TfK9Na291bE7CJ+hjaJao
         9qE99n8Xptj+v0KAbW1DD9pXMt3IFf/oAAeQdWk63KkAxBg/0NUKUInRT/wGbriKMAws
         7hVUIMpQNBOzdfibiBd0zToc0v1t4k/Ytp4CBAvDZlzbzoWw3HeirZrfX6qEKF0ty/bb
         dIC2DK9s2EVvuY7S2sx11g1FmwCysBW0p1O2/oVAEl1fvidUCrOOsA1M9wsOpYm45O0j
         S4Bw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:references:user-agent:from:to:cc:subject
         :in-reply-to:date:message-id:mime-version:content-transfer-encoding;
        bh=JFQI+aXRmBjkxgRhRTRPjWwAL6b77ZM7cNrym6Grfj4=;
        b=jNGvXdU5LzGRs+a7l1F6IF369vpPCOElh/x6CS1AA/OUJbrHCyWcFzfkq3yJUn2e4z
         rlNcYOvrE2VJIjLPRnYu6P5WAy7TgjO7+v5LZsq5vAT9hcjLT/WUGiDTEnSSSq0iDc+Y
         vL7Hj7yqhGZ5/VO1Az4tlQWO9AHa5dPjjEnyYxSThdwo8YH+sAxQiI48moNHGdqGxvoj
         5iVmNvcmYLviSvs8CCB3g/BcISwcKqBq1JR3BcNVOlQwxT22GzQ7ur890Q+qh5PRw5o8
         sbKFBmYTU2vaCOGvAk5NQ47PdUhVgs2iqk6DVe6LB8CZ5E1XwwKVDyBVFgydOc36xru9
         dU3A==
X-Gm-Message-State: AOAM532m4uNqKQcnIWOMeQ/r4HuWG+wC63tj8MabvR3FV3vJqCE2yNMC
	YDF1GMbviVtAdRlxgpKRpUjmxA==
X-Google-Smtp-Source: ABdhPJxHttdiWmMSctixZ6owPkrOplh50n4DTtC0Ba1W5vEDss1FE+BP7GdcjI0EMwRHZSZZMfr0hg==
X-Received: by 2002:a5d:680c:: with SMTP id w12mr6090486wru.161.1606851379184;
        Tue, 01 Dec 2020 11:36:19 -0800 (PST)
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-2-git-send-email-olekstysh@gmail.com>
 <87eek9u6tj.fsf@linaro.org>
 <cd2e064e-896b-3a28-5d37-93ddaba1c13e@gmail.com>
User-agent: mu4e 1.5.7; emacs 28.0.50
From: Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>
To: Oleksandr <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Paul Durrant
 <paul@xen.org>, Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Roger Pau =?utf-8?Q?Monn=C3=A9?=
 <roger.pau@citrix.com>, Wei
 Liu <wl@xen.org>, Julien Grall <julien@xen.org>, Stefano Stabellini
 <sstabellini@kernel.org>, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
Subject: Re: [PATCH V3 01/23] x86/ioreq: Prepare IOREQ feature for making it
 common
In-reply-to: <cd2e064e-896b-3a28-5d37-93ddaba1c13e@gmail.com>
Date: Tue, 01 Dec 2020 19:36:16 +0000
Message-ID: <87360ptj2n.fsf@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable


Oleksandr <olekstysh@gmail.com> writes:

> On 01.12.20 13:03, Alex Benn=C3=A9e wrote:
>
> Hi Alex
>
>> Oleksandr Tyshchenko <olekstysh@gmail.com> writes:
>>
>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>
>>> As a lot of x86 code can be re-used on Arm later on, this
>>> patch makes some preparation to x86/hvm/ioreq.c before moving
>>> to the common code. This way we will get a verbatim copy
>> <snip>
>>> It is worth mentioning that the code which checks the return value of
>>> p2m_set_ioreq_server() in hvm_map_mem_type_to_ioreq_server() was
>>> folded into arch_ioreq_server_map_mem_type() for a clean split.
>>> As a result, p2m_change_entry_type_global() is called with the
>>> ioreq_server lock held.
>> <snip>
>>>=20=20=20
>>> +/* Called with ioreq_server lock held */
>>> +int arch_ioreq_server_map_mem_type(struct domain *d,
>>> +                                   struct hvm_ioreq_server *s,
>>> +                                   uint32_t flags)
>>> +{
>>> +    int rc =3D p2m_set_ioreq_server(d, flags, s);
>>> +
>>> +    if ( rc =3D=3D 0 && flags =3D=3D 0 )
>>> +    {
>>> +        const struct p2m_domain *p2m =3D p2m_get_hostp2m(d);
>>> +
>>> +        if ( read_atomic(&p2m->ioreq.entry_count) )
>>> +            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_=
rw);
>>> +    }
>>> +
>>> +    return rc;
>>> +}
>>> +
>>>   /*
>>>    * Map or unmap an ioreq server to specific memory type. For now, only
>>>    * HVMMEM_ioreq_server is supported, and in the future new types can =
be
>>> @@ -1112,19 +1155,11 @@ int hvm_map_mem_type_to_ioreq_server(struct dom=
ain *d, ioservid_t id,
>>>       if ( s->emulator !=3D current->domain )
>>>           goto out;
>>>=20=20=20
>>> -    rc =3D p2m_set_ioreq_server(d, flags, s);
>>> +    rc =3D arch_ioreq_server_map_mem_type(d, s, flags);
>>>=20=20=20
>>>    out:
>>>       spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>>>=20=20=20
>>> -    if ( rc =3D=3D 0 && flags =3D=3D 0 )
>>> -    {
>>> -        struct p2m_domain *p2m =3D p2m_get_hostp2m(d);
>>> -
>>> -        if ( read_atomic(&p2m->ioreq.entry_count) )
>>> -            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_=
rw);
>>> -    }
>>> -
>> It should be noted that p2m holds its own lock but I'm unfamiliar with
>> Xen's locking architecture. Is there anything that prevents another vCPU
>> accessing a page that is also being used by ioreq on the first vCPU?
> I am not sure I can provide a full explanation here.
> What I understand is that p2m_change_entry_type_global() is x86
> specific (we don't have the p2m_ioreq_server concept on Arm) and
> should remain as such (not exposed to the common code).
> IIRC, I raised a question during the V2 review about whether we could
> hold the ioreq server lock around the call to
> p2m_change_entry_type_global() and didn't get objections. I may be
> mistaken, but it looks like the lock used in
> p2m_change_entry_type_global() is a separate lock protecting page
> table operations, so we are unlikely to get into trouble calling this
> function with the ioreq server lock held.

The p2m lock code looks designed to be recursive, so I could only
envision a problem where a page somehow races to lock under the ioreq
lock, which I don't think is possible. However, reasoning about locking
is hard if you're not familiar with it - it's one reason we added
Promela/Spin [1] models [2] to QEMU for our various locking regimes.


[1] http://spinroot.com/spin/whatispin.html
[2] https://git.qemu.org/?p=3Dqemu.git;a=3Dtree;f=3Ddocs/spin;h=3Dcc1680251=
31676429a560ca70d7234a56f958092;hb=3DHEAD

>
>
>>
>> Assuming that deadlock isn't a possibility to my relatively untrained
>> eye this looks good to me:
>>
>> Reviewed-by: Alex Benn=C3=A9e <alex.bennee@linaro.org>
>
> Thank you.


--=20
Alex Benn=C3=A9e


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 20:08:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 20:08:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42304.76041 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkBwc-0003PF-Sl; Tue, 01 Dec 2020 20:08:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42304.76041; Tue, 01 Dec 2020 20:08:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkBwc-0003P8-P8; Tue, 01 Dec 2020 20:08:14 +0000
Received: by outflank-mailman (input) for mailman id 42304;
 Tue, 01 Dec 2020 20:08:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkBwb-0003Oz-7X; Tue, 01 Dec 2020 20:08:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkBwa-0005GB-SK; Tue, 01 Dec 2020 20:08:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkBwa-0000BR-Du; Tue, 01 Dec 2020 20:08:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kkBwa-0002pT-Ct; Tue, 01 Dec 2020 20:08:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=uzZeSzyTxbfZr1XQbSarOGznaT+W2LCirt8gsLsizVQ=; b=Y7hMZN6ChXshla5StBrRX6uE5L
	nGc7SjavHY4n7WIT49oX5dqS8PcGfapxJ1UVHaacPxEcuPTIOigNsCYfQh5dOVwfAttJl1QlNxjfE
	ggBla6PsBELaxK45iBiJTM7z5e2dA63QpSl50Ja6T2lkFUxYMZYjELoQXUoLqRef79yM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157129-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157129: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=944fdc5e27a5b5adbb765891e8e70e88ba9a00ec
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 01 Dec 2020 20:08:12 +0000

flight 157129 qemu-mainline real [real]
flight 157140 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157129/
http://logs.test-lab.xenproject.org/osstest/logs/157140/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                944fdc5e27a5b5adbb765891e8e70e88ba9a00ec
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  103 days
Failing since        152659  2020-08-21 14:07:39 Z  102 days  214 attempts
Testing same since   157069  2020-11-28 05:42:38 Z    3 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69308 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 20:55:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 20:55:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42319.76063 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkCg2-0007xu-CR; Tue, 01 Dec 2020 20:55:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42319.76063; Tue, 01 Dec 2020 20:55:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkCg2-0007xn-8I; Tue, 01 Dec 2020 20:55:10 +0000
Received: by outflank-mailman (input) for mailman id 42319;
 Tue, 01 Dec 2020 20:55:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkCg1-0007xf-8P; Tue, 01 Dec 2020 20:55:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkCg0-0006Fv-VJ; Tue, 01 Dec 2020 20:55:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkCg0-0003Eb-ME; Tue, 01 Dec 2020 20:55:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kkCg0-00017L-Lk; Tue, 01 Dec 2020 20:55:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9o+LSGoniFqayEpBhRdP8Oq3u7gnnIhQSwWzlqoHKeY=; b=v7Emm8GoXDoWdv0W+sj3fBL5M0
	0pjBGWkP+g4HJmlpSzeFRL2P11dG+4fmWJzHyY7sZirQ4s82XXlQXMz+AXTSckDUu6gy5B6x/YmAu
	0bB59ByBdggebi8t7qGMVzFH3NI3upfHu1cz9kVPO8VNGlT0O6frnQIvidNBcuwzu2qE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157131-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157131: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-xl:<job status>:broken:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:host-install(5):broken:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=b65054597872ce3aefbc6a666385eabdf9e288da
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 01 Dec 2020 20:55:08 +0000

flight 157131 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157131/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl             <job status>                 broken
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl          12 debian-install fail in 157109 REGR. vs. 152332
 test-arm64-arm64-xl-credit1 10 host-ping-check-xen fail in 157119 REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen fail in 157119 REGR. vs. 152332
 test-arm64-arm64-xl-xsm      12 debian-install fail in 157119 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-examine      8 reboot           fail in 157109 pass in 157131
 test-arm64-arm64-xl-seattle   8 xen-boot         fail in 157109 pass in 157131
 test-arm64-arm64-xl           8 xen-boot         fail in 157119 pass in 157109
 test-arm64-arm64-xl-seattle 10 host-ping-check-xen fail in 157119 pass in 157131
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 157119 pass in 157131
 test-arm64-arm64-xl-credit1   8 xen-boot                   fail pass in 157119
 test-armhf-armhf-libvirt      8 xen-boot                   fail pass in 157119
 test-arm64-arm64-libvirt-xsm  8 xen-boot                   fail pass in 157119
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 157119

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl           5 host-install(5)       broken blocked in 152332
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-xsm 11 leak-check/basis(11) fail in 157109 blocked in 152332
 test-armhf-armhf-libvirt 16 saverestore-support-check fail in 157119 like 152332
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 157119 like 152332
 test-armhf-armhf-libvirt    15 migrate-support-check fail in 157119 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                b65054597872ce3aefbc6a666385eabdf9e288da
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  123 days
Failing since        152366  2020-08-01 20:49:34 Z  122 days  207 attempts
Testing same since   157109  2020-11-30 08:17:04 Z    1 days    3 attempts

------------------------------------------------------------
3619 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          broken  
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl broken
broken-step test-arm64-arm64-xl host-install(5)

Not pushing.

(No revision log; it would be 693043 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 22:01:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 01 Dec 2020 22:01:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42328.76078 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkDiF-0005w6-EQ; Tue, 01 Dec 2020 22:01:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42328.76078; Tue, 01 Dec 2020 22:01:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkDiF-0005vz-9b; Tue, 01 Dec 2020 22:01:31 +0000
Received: by outflank-mailman (input) for mailman id 42328;
 Tue, 01 Dec 2020 22:01:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fi77=FF=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kkDiE-0005vu-5p
 for xen-devel@lists.xenproject.org; Tue, 01 Dec 2020 22:01:30 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 323fcb38-24bf-4976-acd2-1d94d43684bd;
 Tue, 01 Dec 2020 22:01:28 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id B581B20870;
 Tue,  1 Dec 2020 22:01:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 323fcb38-24bf-4976-acd2-1d94d43684bd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606860087;
	bh=Rb/c9IrolLFK3MIYlbuF/oE+cl32oEPFZ4xevFEdv7g=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=rSH4ogzji17PrlZBQnxHhiwOs3yYj60XrtsnVZW0SGQYHyBHcmwV18gDnjwcyxQYn
	 su+ZN7L7rmFWSz9QDuv9lllNzIlTBaDuMymP04qTvz76xdBZOPfns3DrVrAOmvsXwX
	 oq0ky6tDp9WPyWLk7DpPocgA/NVIsixcjz2oRGdA=
Date: Tue, 1 Dec 2020 14:01:26 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 1/8] xen/arm: Import the SMMUv3 driver from Linux
In-Reply-To: <0967bb590eb1ea4bb040e064e7c5c1bb90ef2a21.1606406359.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2012011401150.1100@sstabellini-ThinkPad-T480s>
References: <cover.1606406359.git.rahul.singh@arm.com> <0967bb590eb1ea4bb040e064e7c5c1bb90ef2a21.1606406359.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 26 Nov 2020, Rahul Singh wrote:
> Based on tag Linux 5.9.8 commit 951cbbc386ff01b50da4f46387e994e81d9ab431
> 
> It's a copy of the Linux SMMUv3 driver. Xen specific code has not
> been added yet and code has not been compiled.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/drivers/passthrough/arm/smmu-v3.c | 4164 +++++++++++++++++++++++++
>  1 file changed, 4164 insertions(+)
>  create mode 100644 xen/drivers/passthrough/arm/smmu-v3.c
> 
> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> new file mode 100644
> index 0000000000..c192544e87
> --- /dev/null
> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> @@ -0,0 +1,4164 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * IOMMU API for ARM architected SMMUv3 implementations.
> + *
> + * Copyright (C) 2015 ARM Limited
> + *
> + * Author: Will Deacon <will.deacon@arm.com>
> + *
> + * This driver is powered by bad coffee and bombay mix.
> + */
> +
> +#include <linux/acpi.h>
> +#include <linux/acpi_iort.h>
> +#include <linux/bitfield.h>
> +#include <linux/bitops.h>
> +#include <linux/crash_dump.h>
> +#include <linux/delay.h>
> +#include <linux/dma-iommu.h>
> +#include <linux/err.h>
> +#include <linux/interrupt.h>
> +#include <linux/io-pgtable.h>
> +#include <linux/iommu.h>
> +#include <linux/iopoll.h>
> +#include <linux/module.h>
> +#include <linux/msi.h>
> +#include <linux/of.h>
> +#include <linux/of_address.h>
> +#include <linux/of_iommu.h>
> +#include <linux/of_platform.h>
> +#include <linux/pci.h>
> +#include <linux/pci-ats.h>
> +#include <linux/platform_device.h>
> +
> +#include <linux/amba/bus.h>
> +
> +/* MMIO registers */
> +#define ARM_SMMU_IDR0			0x0
> +#define IDR0_ST_LVL			GENMASK(28, 27)
> +#define IDR0_ST_LVL_2LVL		1
> +#define IDR0_STALL_MODEL		GENMASK(25, 24)
> +#define IDR0_STALL_MODEL_STALL		0
> +#define IDR0_STALL_MODEL_FORCE		2
> +#define IDR0_TTENDIAN			GENMASK(22, 21)
> +#define IDR0_TTENDIAN_MIXED		0
> +#define IDR0_TTENDIAN_LE		2
> +#define IDR0_TTENDIAN_BE		3
> +#define IDR0_CD2L			(1 << 19)
> +#define IDR0_VMID16			(1 << 18)
> +#define IDR0_PRI			(1 << 16)
> +#define IDR0_SEV			(1 << 14)
> +#define IDR0_MSI			(1 << 13)
> +#define IDR0_ASID16			(1 << 12)
> +#define IDR0_ATS			(1 << 10)
> +#define IDR0_HYP			(1 << 9)
> +#define IDR0_COHACC			(1 << 4)
> +#define IDR0_TTF			GENMASK(3, 2)
> +#define IDR0_TTF_AARCH64		2
> +#define IDR0_TTF_AARCH32_64		3
> +#define IDR0_S1P			(1 << 1)
> +#define IDR0_S2P			(1 << 0)
> +
> +#define ARM_SMMU_IDR1			0x4
> +#define IDR1_TABLES_PRESET		(1 << 30)
> +#define IDR1_QUEUES_PRESET		(1 << 29)
> +#define IDR1_REL			(1 << 28)
> +#define IDR1_CMDQS			GENMASK(25, 21)
> +#define IDR1_EVTQS			GENMASK(20, 16)
> +#define IDR1_PRIQS			GENMASK(15, 11)
> +#define IDR1_SSIDSIZE			GENMASK(10, 6)
> +#define IDR1_SIDSIZE			GENMASK(5, 0)
> +
> +#define ARM_SMMU_IDR3			0xc
> +#define IDR3_RIL			(1 << 10)
> +
> +#define ARM_SMMU_IDR5			0x14
> +#define IDR5_STALL_MAX			GENMASK(31, 16)
> +#define IDR5_GRAN64K			(1 << 6)
> +#define IDR5_GRAN16K			(1 << 5)
> +#define IDR5_GRAN4K			(1 << 4)
> +#define IDR5_OAS			GENMASK(2, 0)
> +#define IDR5_OAS_32_BIT			0
> +#define IDR5_OAS_36_BIT			1
> +#define IDR5_OAS_40_BIT			2
> +#define IDR5_OAS_42_BIT			3
> +#define IDR5_OAS_44_BIT			4
> +#define IDR5_OAS_48_BIT			5
> +#define IDR5_OAS_52_BIT			6
> +#define IDR5_VAX			GENMASK(11, 10)
> +#define IDR5_VAX_52_BIT			1
> +
> +#define ARM_SMMU_CR0			0x20
> +#define CR0_ATSCHK			(1 << 4)
> +#define CR0_CMDQEN			(1 << 3)
> +#define CR0_EVTQEN			(1 << 2)
> +#define CR0_PRIQEN			(1 << 1)
> +#define CR0_SMMUEN			(1 << 0)
> +
> +#define ARM_SMMU_CR0ACK			0x24
> +
> +#define ARM_SMMU_CR1			0x28
> +#define CR1_TABLE_SH			GENMASK(11, 10)
> +#define CR1_TABLE_OC			GENMASK(9, 8)
> +#define CR1_TABLE_IC			GENMASK(7, 6)
> +#define CR1_QUEUE_SH			GENMASK(5, 4)
> +#define CR1_QUEUE_OC			GENMASK(3, 2)
> +#define CR1_QUEUE_IC			GENMASK(1, 0)
> +/* CR1 cacheability fields don't quite follow the usual TCR-style encoding */
> +#define CR1_CACHE_NC			0
> +#define CR1_CACHE_WB			1
> +#define CR1_CACHE_WT			2
> +
> +#define ARM_SMMU_CR2			0x2c
> +#define CR2_PTM				(1 << 2)
> +#define CR2_RECINVSID			(1 << 1)
> +#define CR2_E2H				(1 << 0)
> +
> +#define ARM_SMMU_GBPA			0x44
> +#define GBPA_UPDATE			(1 << 31)
> +#define GBPA_ABORT			(1 << 20)
> +
> +#define ARM_SMMU_IRQ_CTRL		0x50
> +#define IRQ_CTRL_EVTQ_IRQEN		(1 << 2)
> +#define IRQ_CTRL_PRIQ_IRQEN		(1 << 1)
> +#define IRQ_CTRL_GERROR_IRQEN		(1 << 0)
> +
> +#define ARM_SMMU_IRQ_CTRLACK		0x54
> +
> +#define ARM_SMMU_GERROR			0x60
> +#define GERROR_SFM_ERR			(1 << 8)
> +#define GERROR_MSI_GERROR_ABT_ERR	(1 << 7)
> +#define GERROR_MSI_PRIQ_ABT_ERR		(1 << 6)
> +#define GERROR_MSI_EVTQ_ABT_ERR		(1 << 5)
> +#define GERROR_MSI_CMDQ_ABT_ERR		(1 << 4)
> +#define GERROR_PRIQ_ABT_ERR		(1 << 3)
> +#define GERROR_EVTQ_ABT_ERR		(1 << 2)
> +#define GERROR_CMDQ_ERR			(1 << 0)
> +#define GERROR_ERR_MASK			0xfd
> +
> +#define ARM_SMMU_GERRORN		0x64
> +
> +#define ARM_SMMU_GERROR_IRQ_CFG0	0x68
> +#define ARM_SMMU_GERROR_IRQ_CFG1	0x70
> +#define ARM_SMMU_GERROR_IRQ_CFG2	0x74
> +
> +#define ARM_SMMU_STRTAB_BASE		0x80
> +#define STRTAB_BASE_RA			(1UL << 62)
> +#define STRTAB_BASE_ADDR_MASK		GENMASK_ULL(51, 6)
> +
> +#define ARM_SMMU_STRTAB_BASE_CFG	0x88
> +#define STRTAB_BASE_CFG_FMT		GENMASK(17, 16)
> +#define STRTAB_BASE_CFG_FMT_LINEAR	0
> +#define STRTAB_BASE_CFG_FMT_2LVL	1
> +#define STRTAB_BASE_CFG_SPLIT		GENMASK(10, 6)
> +#define STRTAB_BASE_CFG_LOG2SIZE	GENMASK(5, 0)
> +
> +#define ARM_SMMU_CMDQ_BASE		0x90
> +#define ARM_SMMU_CMDQ_PROD		0x98
> +#define ARM_SMMU_CMDQ_CONS		0x9c
> +
> +#define ARM_SMMU_EVTQ_BASE		0xa0
> +#define ARM_SMMU_EVTQ_PROD		0x100a8
> +#define ARM_SMMU_EVTQ_CONS		0x100ac
> +#define ARM_SMMU_EVTQ_IRQ_CFG0		0xb0
> +#define ARM_SMMU_EVTQ_IRQ_CFG1		0xb8
> +#define ARM_SMMU_EVTQ_IRQ_CFG2		0xbc
> +
> +#define ARM_SMMU_PRIQ_BASE		0xc0
> +#define ARM_SMMU_PRIQ_PROD		0x100c8
> +#define ARM_SMMU_PRIQ_CONS		0x100cc
> +#define ARM_SMMU_PRIQ_IRQ_CFG0		0xd0
> +#define ARM_SMMU_PRIQ_IRQ_CFG1		0xd8
> +#define ARM_SMMU_PRIQ_IRQ_CFG2		0xdc
> +
> +#define ARM_SMMU_REG_SZ			0xe00
> +
> +/* Common MSI config fields */
> +#define MSI_CFG0_ADDR_MASK		GENMASK_ULL(51, 2)
> +#define MSI_CFG2_SH			GENMASK(5, 4)
> +#define MSI_CFG2_MEMATTR		GENMASK(3, 0)
> +
> +/* Common memory attribute values */
> +#define ARM_SMMU_SH_NSH			0
> +#define ARM_SMMU_SH_OSH			2
> +#define ARM_SMMU_SH_ISH			3
> +#define ARM_SMMU_MEMATTR_DEVICE_nGnRE	0x1
> +#define ARM_SMMU_MEMATTR_OIWB		0xf
> +
> +#define Q_IDX(llq, p)			((p) & ((1 << (llq)->max_n_shift) - 1))
> +#define Q_WRP(llq, p)			((p) & (1 << (llq)->max_n_shift))
> +#define Q_OVERFLOW_FLAG			(1U << 31)
> +#define Q_OVF(p)			((p) & Q_OVERFLOW_FLAG)
> +#define Q_ENT(q, p)			((q)->base +			\
> +					 Q_IDX(&((q)->llq), p) *	\
> +					 (q)->ent_dwords)
> +
> +#define Q_BASE_RWA			(1UL << 62)
> +#define Q_BASE_ADDR_MASK		GENMASK_ULL(51, 5)
> +#define Q_BASE_LOG2SIZE			GENMASK(4, 0)
> +
> +/* Ensure DMA allocations are naturally aligned */
> +#ifdef CONFIG_CMA_ALIGNMENT
> +#define Q_MAX_SZ_SHIFT			(PAGE_SHIFT + CONFIG_CMA_ALIGNMENT)
> +#else
> +#define Q_MAX_SZ_SHIFT			(PAGE_SHIFT + MAX_ORDER - 1)
> +#endif
> +
> +/*
> + * Stream table.
> + *
> + * Linear: Enough to cover 1 << IDR1.SIDSIZE entries
> + * 2lvl: 128k L1 entries,
> + *       256 lazy entries per table (each table covers a PCI bus)
> + */
> +#define STRTAB_L1_SZ_SHIFT		20
> +#define STRTAB_SPLIT			8
> +
> +#define STRTAB_L1_DESC_DWORDS		1
> +#define STRTAB_L1_DESC_SPAN		GENMASK_ULL(4, 0)
> +#define STRTAB_L1_DESC_L2PTR_MASK	GENMASK_ULL(51, 6)
> +
> +#define STRTAB_STE_DWORDS		8
> +#define STRTAB_STE_0_V			(1UL << 0)
> +#define STRTAB_STE_0_CFG		GENMASK_ULL(3, 1)
> +#define STRTAB_STE_0_CFG_ABORT		0
> +#define STRTAB_STE_0_CFG_BYPASS		4
> +#define STRTAB_STE_0_CFG_S1_TRANS	5
> +#define STRTAB_STE_0_CFG_S2_TRANS	6
> +
> +#define STRTAB_STE_0_S1FMT		GENMASK_ULL(5, 4)
> +#define STRTAB_STE_0_S1FMT_LINEAR	0
> +#define STRTAB_STE_0_S1FMT_64K_L2	2
> +#define STRTAB_STE_0_S1CTXPTR_MASK	GENMASK_ULL(51, 6)
> +#define STRTAB_STE_0_S1CDMAX		GENMASK_ULL(63, 59)
> +
> +#define STRTAB_STE_1_S1DSS		GENMASK_ULL(1, 0)
> +#define STRTAB_STE_1_S1DSS_TERMINATE	0x0
> +#define STRTAB_STE_1_S1DSS_BYPASS	0x1
> +#define STRTAB_STE_1_S1DSS_SSID0	0x2
> +
> +#define STRTAB_STE_1_S1C_CACHE_NC	0UL
> +#define STRTAB_STE_1_S1C_CACHE_WBRA	1UL
> +#define STRTAB_STE_1_S1C_CACHE_WT	2UL
> +#define STRTAB_STE_1_S1C_CACHE_WB	3UL
> +#define STRTAB_STE_1_S1CIR		GENMASK_ULL(3, 2)
> +#define STRTAB_STE_1_S1COR		GENMASK_ULL(5, 4)
> +#define STRTAB_STE_1_S1CSH		GENMASK_ULL(7, 6)
> +
> +#define STRTAB_STE_1_S1STALLD		(1UL << 27)
> +
> +#define STRTAB_STE_1_EATS		GENMASK_ULL(29, 28)
> +#define STRTAB_STE_1_EATS_ABT		0UL
> +#define STRTAB_STE_1_EATS_TRANS		1UL
> +#define STRTAB_STE_1_EATS_S1CHK		2UL
> +
> +#define STRTAB_STE_1_STRW		GENMASK_ULL(31, 30)
> +#define STRTAB_STE_1_STRW_NSEL1		0UL
> +#define STRTAB_STE_1_STRW_EL2		2UL
> +
> +#define STRTAB_STE_1_SHCFG		GENMASK_ULL(45, 44)
> +#define STRTAB_STE_1_SHCFG_INCOMING	1UL
> +
> +#define STRTAB_STE_2_S2VMID		GENMASK_ULL(15, 0)
> +#define STRTAB_STE_2_VTCR		GENMASK_ULL(50, 32)
> +#define STRTAB_STE_2_VTCR_S2T0SZ	GENMASK_ULL(5, 0)
> +#define STRTAB_STE_2_VTCR_S2SL0		GENMASK_ULL(7, 6)
> +#define STRTAB_STE_2_VTCR_S2IR0		GENMASK_ULL(9, 8)
> +#define STRTAB_STE_2_VTCR_S2OR0		GENMASK_ULL(11, 10)
> +#define STRTAB_STE_2_VTCR_S2SH0		GENMASK_ULL(13, 12)
> +#define STRTAB_STE_2_VTCR_S2TG		GENMASK_ULL(15, 14)
> +#define STRTAB_STE_2_VTCR_S2PS		GENMASK_ULL(18, 16)
> +#define STRTAB_STE_2_S2AA64		(1UL << 51)
> +#define STRTAB_STE_2_S2ENDI		(1UL << 52)
> +#define STRTAB_STE_2_S2PTW		(1UL << 54)
> +#define STRTAB_STE_2_S2R		(1UL << 58)
> +
> +#define STRTAB_STE_3_S2TTB_MASK		GENMASK_ULL(51, 4)
> +
> +/*
> + * Context descriptors.
> + *
> + * Linear: when less than 1024 SSIDs are supported
> + * 2lvl: at most 1024 L1 entries,
> + *       1024 lazy entries per table.
> + */
> +#define CTXDESC_SPLIT			10
> +#define CTXDESC_L2_ENTRIES		(1 << CTXDESC_SPLIT)
> +
> +#define CTXDESC_L1_DESC_DWORDS		1
> +#define CTXDESC_L1_DESC_V		(1UL << 0)
> +#define CTXDESC_L1_DESC_L2PTR_MASK	GENMASK_ULL(51, 12)
> +
> +#define CTXDESC_CD_DWORDS		8
> +#define CTXDESC_CD_0_TCR_T0SZ		GENMASK_ULL(5, 0)
> +#define CTXDESC_CD_0_TCR_TG0		GENMASK_ULL(7, 6)
> +#define CTXDESC_CD_0_TCR_IRGN0		GENMASK_ULL(9, 8)
> +#define CTXDESC_CD_0_TCR_ORGN0		GENMASK_ULL(11, 10)
> +#define CTXDESC_CD_0_TCR_SH0		GENMASK_ULL(13, 12)
> +#define CTXDESC_CD_0_TCR_EPD0		(1ULL << 14)
> +#define CTXDESC_CD_0_TCR_EPD1		(1ULL << 30)
> +
> +#define CTXDESC_CD_0_ENDI		(1UL << 15)
> +#define CTXDESC_CD_0_V			(1UL << 31)
> +
> +#define CTXDESC_CD_0_TCR_IPS		GENMASK_ULL(34, 32)
> +#define CTXDESC_CD_0_TCR_TBI0		(1ULL << 38)
> +
> +#define CTXDESC_CD_0_AA64		(1UL << 41)
> +#define CTXDESC_CD_0_S			(1UL << 44)
> +#define CTXDESC_CD_0_R			(1UL << 45)
> +#define CTXDESC_CD_0_A			(1UL << 46)
> +#define CTXDESC_CD_0_ASET		(1UL << 47)
> +#define CTXDESC_CD_0_ASID		GENMASK_ULL(63, 48)
> +
> +#define CTXDESC_CD_1_TTB0_MASK		GENMASK_ULL(51, 4)
> +
> +/*
> + * When the SMMU only supports linear context descriptor tables, pick a
> + * reasonable size limit (64kB).
> + */
> +#define CTXDESC_LINEAR_CDMAX		ilog2(SZ_64K / (CTXDESC_CD_DWORDS << 3))
> +
> +/* Command queue */
> +#define CMDQ_ENT_SZ_SHIFT		4
> +#define CMDQ_ENT_DWORDS			((1 << CMDQ_ENT_SZ_SHIFT) >> 3)
> +#define CMDQ_MAX_SZ_SHIFT		(Q_MAX_SZ_SHIFT - CMDQ_ENT_SZ_SHIFT)
> +
> +#define CMDQ_CONS_ERR			GENMASK(30, 24)
> +#define CMDQ_ERR_CERROR_NONE_IDX	0
> +#define CMDQ_ERR_CERROR_ILL_IDX		1
> +#define CMDQ_ERR_CERROR_ABT_IDX		2
> +#define CMDQ_ERR_CERROR_ATC_INV_IDX	3
> +
> +#define CMDQ_PROD_OWNED_FLAG		Q_OVERFLOW_FLAG
> +
> +/*
> + * This is used to size the command queue and therefore must be at least
> + * BITS_PER_LONG so that the valid_map works correctly (it relies on the
> + * total number of queue entries being a multiple of BITS_PER_LONG).
> + */
> +#define CMDQ_BATCH_ENTRIES		BITS_PER_LONG
> +
> +#define CMDQ_0_OP			GENMASK_ULL(7, 0)
> +#define CMDQ_0_SSV			(1UL << 11)
> +
> +#define CMDQ_PREFETCH_0_SID		GENMASK_ULL(63, 32)
> +#define CMDQ_PREFETCH_1_SIZE		GENMASK_ULL(4, 0)
> +#define CMDQ_PREFETCH_1_ADDR_MASK	GENMASK_ULL(63, 12)
> +
> +#define CMDQ_CFGI_0_SSID		GENMASK_ULL(31, 12)
> +#define CMDQ_CFGI_0_SID			GENMASK_ULL(63, 32)
> +#define CMDQ_CFGI_1_LEAF		(1UL << 0)
> +#define CMDQ_CFGI_1_RANGE		GENMASK_ULL(4, 0)
> +
> +#define CMDQ_TLBI_0_NUM			GENMASK_ULL(16, 12)
> +#define CMDQ_TLBI_RANGE_NUM_MAX		31
> +#define CMDQ_TLBI_0_SCALE		GENMASK_ULL(24, 20)
> +#define CMDQ_TLBI_0_VMID		GENMASK_ULL(47, 32)
> +#define CMDQ_TLBI_0_ASID		GENMASK_ULL(63, 48)
> +#define CMDQ_TLBI_1_LEAF		(1UL << 0)
> +#define CMDQ_TLBI_1_TTL			GENMASK_ULL(9, 8)
> +#define CMDQ_TLBI_1_TG			GENMASK_ULL(11, 10)
> +#define CMDQ_TLBI_1_VA_MASK		GENMASK_ULL(63, 12)
> +#define CMDQ_TLBI_1_IPA_MASK		GENMASK_ULL(51, 12)
> +
> +#define CMDQ_ATC_0_SSID			GENMASK_ULL(31, 12)
> +#define CMDQ_ATC_0_SID			GENMASK_ULL(63, 32)
> +#define CMDQ_ATC_0_GLOBAL		(1UL << 9)
> +#define CMDQ_ATC_1_SIZE			GENMASK_ULL(5, 0)
> +#define CMDQ_ATC_1_ADDR_MASK		GENMASK_ULL(63, 12)
> +
> +#define CMDQ_PRI_0_SSID			GENMASK_ULL(31, 12)
> +#define CMDQ_PRI_0_SID			GENMASK_ULL(63, 32)
> +#define CMDQ_PRI_1_GRPID		GENMASK_ULL(8, 0)
> +#define CMDQ_PRI_1_RESP			GENMASK_ULL(13, 12)
> +
> +#define CMDQ_SYNC_0_CS			GENMASK_ULL(13, 12)
> +#define CMDQ_SYNC_0_CS_NONE		0
> +#define CMDQ_SYNC_0_CS_IRQ		1
> +#define CMDQ_SYNC_0_CS_SEV		2
> +#define CMDQ_SYNC_0_MSH			GENMASK_ULL(23, 22)
> +#define CMDQ_SYNC_0_MSIATTR		GENMASK_ULL(27, 24)
> +#define CMDQ_SYNC_0_MSIDATA		GENMASK_ULL(63, 32)
> +#define CMDQ_SYNC_1_MSIADDR_MASK	GENMASK_ULL(51, 2)
> +
> +/* Event queue */
> +#define EVTQ_ENT_SZ_SHIFT		5
> +#define EVTQ_ENT_DWORDS			((1 << EVTQ_ENT_SZ_SHIFT) >> 3)
> +#define EVTQ_MAX_SZ_SHIFT		(Q_MAX_SZ_SHIFT - EVTQ_ENT_SZ_SHIFT)
> +
> +#define EVTQ_0_ID			GENMASK_ULL(7, 0)
> +
> +/* PRI queue */
> +#define PRIQ_ENT_SZ_SHIFT		4
> +#define PRIQ_ENT_DWORDS			((1 << PRIQ_ENT_SZ_SHIFT) >> 3)
> +#define PRIQ_MAX_SZ_SHIFT		(Q_MAX_SZ_SHIFT - PRIQ_ENT_SZ_SHIFT)
> +
> +#define PRIQ_0_SID			GENMASK_ULL(31, 0)
> +#define PRIQ_0_SSID			GENMASK_ULL(51, 32)
> +#define PRIQ_0_PERM_PRIV		(1UL << 58)
> +#define PRIQ_0_PERM_EXEC		(1UL << 59)
> +#define PRIQ_0_PERM_READ		(1UL << 60)
> +#define PRIQ_0_PERM_WRITE		(1UL << 61)
> +#define PRIQ_0_PRG_LAST			(1UL << 62)
> +#define PRIQ_0_SSID_V			(1UL << 63)
> +
> +#define PRIQ_1_PRG_IDX			GENMASK_ULL(8, 0)
> +#define PRIQ_1_ADDR_MASK		GENMASK_ULL(63, 12)
> +
> +/* High-level queue structures */
> +#define ARM_SMMU_POLL_TIMEOUT_US	1000000 /* 1s! */
> +#define ARM_SMMU_POLL_SPIN_COUNT	10
> +
> +#define MSI_IOVA_BASE			0x8000000
> +#define MSI_IOVA_LENGTH			0x100000
> +
> +static bool disable_bypass = 1;
> +module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO);
> +MODULE_PARM_DESC(disable_bypass,
> +	"Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");
> +
> +enum pri_resp {
> +	PRI_RESP_DENY = 0,
> +	PRI_RESP_FAIL = 1,
> +	PRI_RESP_SUCC = 2,
> +};
> +
> +enum arm_smmu_msi_index {
> +	EVTQ_MSI_INDEX,
> +	GERROR_MSI_INDEX,
> +	PRIQ_MSI_INDEX,
> +	ARM_SMMU_MAX_MSIS,
> +};
> +
> +static phys_addr_t arm_smmu_msi_cfg[ARM_SMMU_MAX_MSIS][3] = {
> +	[EVTQ_MSI_INDEX] = {
> +		ARM_SMMU_EVTQ_IRQ_CFG0,
> +		ARM_SMMU_EVTQ_IRQ_CFG1,
> +		ARM_SMMU_EVTQ_IRQ_CFG2,
> +	},
> +	[GERROR_MSI_INDEX] = {
> +		ARM_SMMU_GERROR_IRQ_CFG0,
> +		ARM_SMMU_GERROR_IRQ_CFG1,
> +		ARM_SMMU_GERROR_IRQ_CFG2,
> +	},
> +	[PRIQ_MSI_INDEX] = {
> +		ARM_SMMU_PRIQ_IRQ_CFG0,
> +		ARM_SMMU_PRIQ_IRQ_CFG1,
> +		ARM_SMMU_PRIQ_IRQ_CFG2,
> +	},
> +};
> +
> +struct arm_smmu_cmdq_ent {
> +	/* Common fields */
> +	u8				opcode;
> +	bool				substream_valid;
> +
> +	/* Command-specific fields */
> +	union {
> +		#define CMDQ_OP_PREFETCH_CFG	0x1
> +		struct {
> +			u32			sid;
> +			u8			size;
> +			u64			addr;
> +		} prefetch;
> +
> +		#define CMDQ_OP_CFGI_STE	0x3
> +		#define CMDQ_OP_CFGI_ALL	0x4
> +		#define CMDQ_OP_CFGI_CD		0x5
> +		#define CMDQ_OP_CFGI_CD_ALL	0x6
> +		struct {
> +			u32			sid;
> +			u32			ssid;
> +			union {
> +				bool		leaf;
> +				u8		span;
> +			};
> +		} cfgi;
> +
> +		#define CMDQ_OP_TLBI_NH_ASID	0x11
> +		#define CMDQ_OP_TLBI_NH_VA	0x12
> +		#define CMDQ_OP_TLBI_EL2_ALL	0x20
> +		#define CMDQ_OP_TLBI_S12_VMALL	0x28
> +		#define CMDQ_OP_TLBI_S2_IPA	0x2a
> +		#define CMDQ_OP_TLBI_NSNH_ALL	0x30
> +		struct {
> +			u8			num;
> +			u8			scale;
> +			u16			asid;
> +			u16			vmid;
> +			bool			leaf;
> +			u8			ttl;
> +			u8			tg;
> +			u64			addr;
> +		} tlbi;
> +
> +		#define CMDQ_OP_ATC_INV		0x40
> +		#define ATC_INV_SIZE_ALL	52
> +		struct {
> +			u32			sid;
> +			u32			ssid;
> +			u64			addr;
> +			u8			size;
> +			bool			global;
> +		} atc;
> +
> +		#define CMDQ_OP_PRI_RESP	0x41
> +		struct {
> +			u32			sid;
> +			u32			ssid;
> +			u16			grpid;
> +			enum pri_resp		resp;
> +		} pri;
> +
> +		#define CMDQ_OP_CMD_SYNC	0x46
> +		struct {
> +			u64			msiaddr;
> +		} sync;
> +	};
> +};
> +
> +struct arm_smmu_ll_queue {
> +	union {
> +		u64			val;
> +		struct {
> +			u32		prod;
> +			u32		cons;
> +		};
> +		struct {
> +			atomic_t	prod;
> +			atomic_t	cons;
> +		} atomic;
> +		u8			__pad[SMP_CACHE_BYTES];
> +	} ____cacheline_aligned_in_smp;
> +	u32				max_n_shift;
> +};
> +
> +struct arm_smmu_queue {
> +	struct arm_smmu_ll_queue	llq;
> +	int				irq; /* Wired interrupt */
> +
> +	__le64				*base;
> +	dma_addr_t			base_dma;
> +	u64				q_base;
> +
> +	size_t				ent_dwords;
> +
> +	u32 __iomem			*prod_reg;
> +	u32 __iomem			*cons_reg;
> +};
> +
> +struct arm_smmu_queue_poll {
> +	ktime_t				timeout;
> +	unsigned int			delay;
> +	unsigned int			spin_cnt;
> +	bool				wfe;
> +};
> +
> +struct arm_smmu_cmdq {
> +	struct arm_smmu_queue		q;
> +	atomic_long_t			*valid_map;
> +	atomic_t			owner_prod;
> +	atomic_t			lock;
> +};
> +
> +struct arm_smmu_cmdq_batch {
> +	u64				cmds[CMDQ_BATCH_ENTRIES * CMDQ_ENT_DWORDS];
> +	int				num;
> +};
> +
> +struct arm_smmu_evtq {
> +	struct arm_smmu_queue		q;
> +	u32				max_stalls;
> +};
> +
> +struct arm_smmu_priq {
> +	struct arm_smmu_queue		q;
> +};
> +
> +/* High-level stream table and context descriptor structures */
> +struct arm_smmu_strtab_l1_desc {
> +	u8				span;
> +
> +	__le64				*l2ptr;
> +	dma_addr_t			l2ptr_dma;
> +};
> +
> +struct arm_smmu_ctx_desc {
> +	u16				asid;
> +	u64				ttbr;
> +	u64				tcr;
> +	u64				mair;
> +};
> +
> +struct arm_smmu_l1_ctx_desc {
> +	__le64				*l2ptr;
> +	dma_addr_t			l2ptr_dma;
> +};
> +
> +struct arm_smmu_ctx_desc_cfg {
> +	__le64				*cdtab;
> +	dma_addr_t			cdtab_dma;
> +	struct arm_smmu_l1_ctx_desc	*l1_desc;
> +	unsigned int			num_l1_ents;
> +};
> +
> +struct arm_smmu_s1_cfg {
> +	struct arm_smmu_ctx_desc_cfg	cdcfg;
> +	struct arm_smmu_ctx_desc	cd;
> +	u8				s1fmt;
> +	u8				s1cdmax;
> +};
> +
> +struct arm_smmu_s2_cfg {
> +	u16				vmid;
> +	u64				vttbr;
> +	u64				vtcr;
> +};
> +
> +struct arm_smmu_strtab_cfg {
> +	__le64				*strtab;
> +	dma_addr_t			strtab_dma;
> +	struct arm_smmu_strtab_l1_desc	*l1_desc;
> +	unsigned int			num_l1_ents;
> +
> +	u64				strtab_base;
> +	u32				strtab_base_cfg;
> +};
> +
> +/* An SMMUv3 instance */
> +struct arm_smmu_device {
> +	struct device			*dev;
> +	void __iomem			*base;
> +	void __iomem			*page1;
> +
> +#define ARM_SMMU_FEAT_2_LVL_STRTAB	(1 << 0)
> +#define ARM_SMMU_FEAT_2_LVL_CDTAB	(1 << 1)
> +#define ARM_SMMU_FEAT_TT_LE		(1 << 2)
> +#define ARM_SMMU_FEAT_TT_BE		(1 << 3)
> +#define ARM_SMMU_FEAT_PRI		(1 << 4)
> +#define ARM_SMMU_FEAT_ATS		(1 << 5)
> +#define ARM_SMMU_FEAT_SEV		(1 << 6)
> +#define ARM_SMMU_FEAT_MSI		(1 << 7)
> +#define ARM_SMMU_FEAT_COHERENCY		(1 << 8)
> +#define ARM_SMMU_FEAT_TRANS_S1		(1 << 9)
> +#define ARM_SMMU_FEAT_TRANS_S2		(1 << 10)
> +#define ARM_SMMU_FEAT_STALLS		(1 << 11)
> +#define ARM_SMMU_FEAT_HYP		(1 << 12)
> +#define ARM_SMMU_FEAT_STALL_FORCE	(1 << 13)
> +#define ARM_SMMU_FEAT_VAX		(1 << 14)
> +#define ARM_SMMU_FEAT_RANGE_INV		(1 << 15)
> +	u32				features;
> +
> +#define ARM_SMMU_OPT_SKIP_PREFETCH	(1 << 0)
> +#define ARM_SMMU_OPT_PAGE0_REGS_ONLY	(1 << 1)
> +	u32				options;
> +
> +	struct arm_smmu_cmdq		cmdq;
> +	struct arm_smmu_evtq		evtq;
> +	struct arm_smmu_priq		priq;
> +
> +	int				gerr_irq;
> +	int				combined_irq;
> +
> +	unsigned long			ias; /* IPA */
> +	unsigned long			oas; /* PA */
> +	unsigned long			pgsize_bitmap;
> +
> +#define ARM_SMMU_MAX_ASIDS		(1 << 16)
> +	unsigned int			asid_bits;
> +
> +#define ARM_SMMU_MAX_VMIDS		(1 << 16)
> +	unsigned int			vmid_bits;
> +	DECLARE_BITMAP(vmid_map, ARM_SMMU_MAX_VMIDS);
> +
> +	unsigned int			ssid_bits;
> +	unsigned int			sid_bits;
> +
> +	struct arm_smmu_strtab_cfg	strtab_cfg;
> +
> +	/* IOMMU core code handle */
> +	struct iommu_device		iommu;
> +};
> +
> +/* SMMU private data for each master */
> +struct arm_smmu_master {
> +	struct arm_smmu_device		*smmu;
> +	struct device			*dev;
> +	struct arm_smmu_domain		*domain;
> +	struct list_head		domain_head;
> +	u32				*sids;
> +	unsigned int			num_sids;
> +	bool				ats_enabled;
> +	unsigned int			ssid_bits;
> +};
> +
> +/* SMMU private data for an IOMMU domain */
> +enum arm_smmu_domain_stage {
> +	ARM_SMMU_DOMAIN_S1 = 0,
> +	ARM_SMMU_DOMAIN_S2,
> +	ARM_SMMU_DOMAIN_NESTED,
> +	ARM_SMMU_DOMAIN_BYPASS,
> +};
> +
> +struct arm_smmu_domain {
> +	struct arm_smmu_device		*smmu;
> +	struct mutex			init_mutex; /* Protects smmu pointer */
> +
> +	struct io_pgtable_ops		*pgtbl_ops;
> +	bool				non_strict;
> +	atomic_t			nr_ats_masters;
> +
> +	enum arm_smmu_domain_stage	stage;
> +	union {
> +		struct arm_smmu_s1_cfg	s1_cfg;
> +		struct arm_smmu_s2_cfg	s2_cfg;
> +	};
> +
> +	struct iommu_domain		domain;
> +
> +	struct list_head		devices;
> +	spinlock_t			devices_lock;
> +};
> +
> +struct arm_smmu_option_prop {
> +	u32 opt;
> +	const char *prop;
> +};
> +
> +static DEFINE_XARRAY_ALLOC1(asid_xa);
> +
> +static struct arm_smmu_option_prop arm_smmu_options[] = {
> +	{ ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" },
> +	{ ARM_SMMU_OPT_PAGE0_REGS_ONLY, "cavium,cn9900-broken-page1-regspace"},
> +	{ 0, NULL},
> +};
> +
> +static inline void __iomem *arm_smmu_page1_fixup(unsigned long offset,
> +						 struct arm_smmu_device *smmu)
> +{
> +	if (offset > SZ_64K)
> +		return smmu->page1 + offset - SZ_64K;
> +
> +	return smmu->base + offset;
> +}
> +
> +static struct arm_smmu_domain *to_smmu_domain(struct iommu_domain *dom)
> +{
> +	return container_of(dom, struct arm_smmu_domain, domain);
> +}
> +
> +static void parse_driver_options(struct arm_smmu_device *smmu)
> +{
> +	int i = 0;
> +
> +	do {
> +		if (of_property_read_bool(smmu->dev->of_node,
> +						arm_smmu_options[i].prop)) {
> +			smmu->options |= arm_smmu_options[i].opt;
> +			dev_notice(smmu->dev, "option %s\n",
> +				arm_smmu_options[i].prop);
> +		}
> +	} while (arm_smmu_options[++i].opt);
> +}
> +
> +/* Low-level queue manipulation functions */
> +static bool queue_has_space(struct arm_smmu_ll_queue *q, u32 n)
> +{
> +	u32 space, prod, cons;
> +
> +	prod = Q_IDX(q, q->prod);
> +	cons = Q_IDX(q, q->cons);
> +
> +	if (Q_WRP(q, q->prod) == Q_WRP(q, q->cons))
> +		space = (1 << q->max_n_shift) - (prod - cons);
> +	else
> +		space = cons - prod;
> +
> +	return space >= n;
> +}
> +
> +static bool queue_full(struct arm_smmu_ll_queue *q)
> +{
> +	return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) &&
> +	       Q_WRP(q, q->prod) != Q_WRP(q, q->cons);
> +}
> +
> +static bool queue_empty(struct arm_smmu_ll_queue *q)
> +{
> +	return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) &&
> +	       Q_WRP(q, q->prod) == Q_WRP(q, q->cons);
> +}
> +
> +static bool queue_consumed(struct arm_smmu_ll_queue *q, u32 prod)
> +{
> +	return ((Q_WRP(q, q->cons) == Q_WRP(q, prod)) &&
> +		(Q_IDX(q, q->cons) > Q_IDX(q, prod))) ||
> +	       ((Q_WRP(q, q->cons) != Q_WRP(q, prod)) &&
> +		(Q_IDX(q, q->cons) <= Q_IDX(q, prod)));
> +}
> +
> +static void queue_sync_cons_out(struct arm_smmu_queue *q)
> +{
> +	/*
> +	 * Ensure that all CPU accesses (reads and writes) to the queue
> +	 * are complete before we update the cons pointer.
> +	 */
> +	mb();
> +	writel_relaxed(q->llq.cons, q->cons_reg);
> +}
> +
> +static void queue_inc_cons(struct arm_smmu_ll_queue *q)
> +{
> +	u32 cons = (Q_WRP(q, q->cons) | Q_IDX(q, q->cons)) + 1;
> +	q->cons = Q_OVF(q->cons) | Q_WRP(q, cons) | Q_IDX(q, cons);
> +}
> +
> +static int queue_sync_prod_in(struct arm_smmu_queue *q)
> +{
> +	int ret = 0;
> +	u32 prod = readl_relaxed(q->prod_reg);
> +
> +	if (Q_OVF(prod) != Q_OVF(q->llq.prod))
> +		ret = -EOVERFLOW;
> +
> +	q->llq.prod = prod;
> +	return ret;
> +}
> +
> +static u32 queue_inc_prod_n(struct arm_smmu_ll_queue *q, int n)
> +{
> +	u32 prod = (Q_WRP(q, q->prod) | Q_IDX(q, q->prod)) + n;
> +	return Q_OVF(q->prod) | Q_WRP(q, prod) | Q_IDX(q, prod);
> +}
> +
> +static void queue_poll_init(struct arm_smmu_device *smmu,
> +			    struct arm_smmu_queue_poll *qp)
> +{
> +	qp->delay = 1;
> +	qp->spin_cnt = 0;
> +	qp->wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
> +	qp->timeout = ktime_add_us(ktime_get(), ARM_SMMU_POLL_TIMEOUT_US);
> +}
> +
> +static int queue_poll(struct arm_smmu_queue_poll *qp)
> +{
> +	if (ktime_compare(ktime_get(), qp->timeout) > 0)
> +		return -ETIMEDOUT;
> +
> +	if (qp->wfe) {
> +		wfe();
> +	} else if (++qp->spin_cnt < ARM_SMMU_POLL_SPIN_COUNT) {
> +		cpu_relax();
> +	} else {
> +		udelay(qp->delay);
> +		qp->delay *= 2;
> +		qp->spin_cnt = 0;
> +	}
> +
> +	return 0;
> +}
> +
> +static void queue_write(__le64 *dst, u64 *src, size_t n_dwords)
> +{
> +	int i;
> +
> +	for (i = 0; i < n_dwords; ++i)
> +		*dst++ = cpu_to_le64(*src++);
> +}
> +
> +static void queue_read(__le64 *dst, u64 *src, size_t n_dwords)
> +{
> +	int i;
> +
> +	for (i = 0; i < n_dwords; ++i)
> +		*dst++ = le64_to_cpu(*src++);
> +}
> +
> +static int queue_remove_raw(struct arm_smmu_queue *q, u64 *ent)
> +{
> +	if (queue_empty(&q->llq))
> +		return -EAGAIN;
> +
> +	queue_read(ent, Q_ENT(q, q->llq.cons), q->ent_dwords);
> +	queue_inc_cons(&q->llq);
> +	queue_sync_cons_out(q);
> +	return 0;
> +}
> +
> +/* High-level queue accessors */
> +static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
> +{
> +	memset(cmd, 0, 1 << CMDQ_ENT_SZ_SHIFT);
> +	cmd[0] |= FIELD_PREP(CMDQ_0_OP, ent->opcode);
> +
> +	switch (ent->opcode) {
> +	case CMDQ_OP_TLBI_EL2_ALL:
> +	case CMDQ_OP_TLBI_NSNH_ALL:
> +		break;
> +	case CMDQ_OP_PREFETCH_CFG:
> +		cmd[0] |= FIELD_PREP(CMDQ_PREFETCH_0_SID, ent->prefetch.sid);
> +		cmd[1] |= FIELD_PREP(CMDQ_PREFETCH_1_SIZE, ent->prefetch.size);
> +		cmd[1] |= ent->prefetch.addr & CMDQ_PREFETCH_1_ADDR_MASK;
> +		break;
> +	case CMDQ_OP_CFGI_CD:
> +		cmd[0] |= FIELD_PREP(CMDQ_CFGI_0_SSID, ent->cfgi.ssid);
> +		fallthrough;
> +	case CMDQ_OP_CFGI_STE:
> +		cmd[0] |= FIELD_PREP(CMDQ_CFGI_0_SID, ent->cfgi.sid);
> +		cmd[1] |= FIELD_PREP(CMDQ_CFGI_1_LEAF, ent->cfgi.leaf);
> +		break;
> +	case CMDQ_OP_CFGI_CD_ALL:
> +		cmd[0] |= FIELD_PREP(CMDQ_CFGI_0_SID, ent->cfgi.sid);
> +		break;
> +	case CMDQ_OP_CFGI_ALL:
> +		/* Cover the entire SID range */
> +		cmd[1] |= FIELD_PREP(CMDQ_CFGI_1_RANGE, 31);
> +		break;
> +	case CMDQ_OP_TLBI_NH_VA:
> +		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_NUM, ent->tlbi.num);
> +		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_SCALE, ent->tlbi.scale);
> +		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
> +		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_ASID, ent->tlbi.asid);
> +		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_LEAF, ent->tlbi.leaf);
> +		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TTL, ent->tlbi.ttl);
> +		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TG, ent->tlbi.tg);
> +		cmd[1] |= ent->tlbi.addr & CMDQ_TLBI_1_VA_MASK;
> +		break;
> +	case CMDQ_OP_TLBI_S2_IPA:
> +		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_NUM, ent->tlbi.num);
> +		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_SCALE, ent->tlbi.scale);
> +		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
> +		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_LEAF, ent->tlbi.leaf);
> +		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TTL, ent->tlbi.ttl);
> +		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TG, ent->tlbi.tg);
> +		cmd[1] |= ent->tlbi.addr & CMDQ_TLBI_1_IPA_MASK;
> +		break;
> +	case CMDQ_OP_TLBI_NH_ASID:
> +		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_ASID, ent->tlbi.asid);
> +		fallthrough;
> +	case CMDQ_OP_TLBI_S12_VMALL:
> +		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
> +		break;
> +	case CMDQ_OP_ATC_INV:
> +		cmd[0] |= FIELD_PREP(CMDQ_0_SSV, ent->substream_valid);
> +		cmd[0] |= FIELD_PREP(CMDQ_ATC_0_GLOBAL, ent->atc.global);
> +		cmd[0] |= FIELD_PREP(CMDQ_ATC_0_SSID, ent->atc.ssid);
> +		cmd[0] |= FIELD_PREP(CMDQ_ATC_0_SID, ent->atc.sid);
> +		cmd[1] |= FIELD_PREP(CMDQ_ATC_1_SIZE, ent->atc.size);
> +		cmd[1] |= ent->atc.addr & CMDQ_ATC_1_ADDR_MASK;
> +		break;
> +	case CMDQ_OP_PRI_RESP:
> +		cmd[0] |= FIELD_PREP(CMDQ_0_SSV, ent->substream_valid);
> +		cmd[0] |= FIELD_PREP(CMDQ_PRI_0_SSID, ent->pri.ssid);
> +		cmd[0] |= FIELD_PREP(CMDQ_PRI_0_SID, ent->pri.sid);
> +		cmd[1] |= FIELD_PREP(CMDQ_PRI_1_GRPID, ent->pri.grpid);
> +		switch (ent->pri.resp) {
> +		case PRI_RESP_DENY:
> +		case PRI_RESP_FAIL:
> +		case PRI_RESP_SUCC:
> +			break;
> +		default:
> +			return -EINVAL;
> +		}
> +		cmd[1] |= FIELD_PREP(CMDQ_PRI_1_RESP, ent->pri.resp);
> +		break;
> +	case CMDQ_OP_CMD_SYNC:
> +		if (ent->sync.msiaddr) {
> +			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_IRQ);
> +			cmd[1] |= ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
> +		} else {
> +			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
> +		}
> +		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
> +		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
> +		break;
> +	default:
> +		return -ENOENT;
> +	}
> +
> +	return 0;
> +}
> +
> +static void arm_smmu_cmdq_build_sync_cmd(u64 *cmd, struct arm_smmu_device *smmu,
> +					 u32 prod)
> +{
> +	struct arm_smmu_queue *q = &smmu->cmdq.q;
> +	struct arm_smmu_cmdq_ent ent = {
> +		.opcode = CMDQ_OP_CMD_SYNC,
> +	};
> +
> +	/*
> +	 * Beware that Hi16xx adds an extra 32 bits of goodness to its MSI
> +	 * payload, so the write will zero the entire command on that platform.
> +	 */
> +	if (smmu->features & ARM_SMMU_FEAT_MSI &&
> +	    smmu->features & ARM_SMMU_FEAT_COHERENCY) {
> +		ent.sync.msiaddr = q->base_dma + Q_IDX(&q->llq, prod) *
> +				   q->ent_dwords * 8;
> +	}
> +
> +	arm_smmu_cmdq_build_cmd(cmd, &ent);
> +}
> +
> +static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
> +{
> +	static const char *cerror_str[] = {
> +		[CMDQ_ERR_CERROR_NONE_IDX]	= "No error",
> +		[CMDQ_ERR_CERROR_ILL_IDX]	= "Illegal command",
> +		[CMDQ_ERR_CERROR_ABT_IDX]	= "Abort on command fetch",
> +		[CMDQ_ERR_CERROR_ATC_INV_IDX]	= "ATC invalidate timeout",
> +	};
> +
> +	int i;
> +	u64 cmd[CMDQ_ENT_DWORDS];
> +	struct arm_smmu_queue *q = &smmu->cmdq.q;
> +	u32 cons = readl_relaxed(q->cons_reg);
> +	u32 idx = FIELD_GET(CMDQ_CONS_ERR, cons);
> +	struct arm_smmu_cmdq_ent cmd_sync = {
> +		.opcode = CMDQ_OP_CMD_SYNC,
> +	};
> +
> +	dev_err(smmu->dev, "CMDQ error (cons 0x%08x): %s\n", cons,
> +		idx < ARRAY_SIZE(cerror_str) ?  cerror_str[idx] : "Unknown");
> +
> +	switch (idx) {
> +	case CMDQ_ERR_CERROR_ABT_IDX:
> +		dev_err(smmu->dev, "retrying command fetch\n");
> +		fallthrough;
> +	case CMDQ_ERR_CERROR_NONE_IDX:
> +		return;
> +	case CMDQ_ERR_CERROR_ATC_INV_IDX:
> +		/*
> +		 * ATC Invalidation Completion timeout. CONS is still pointing
> +		 * at the CMD_SYNC. Attempt to complete other pending commands
> +		 * by repeating the CMD_SYNC, though we might well end up back
> +		 * here since the ATC invalidation may still be pending.
> +		 */
> +		return;
> +	case CMDQ_ERR_CERROR_ILL_IDX:
> +	default:
> +		break;
> +	}
> +
> +	/*
> +	 * We may have concurrent producers, so we need to be careful
> +	 * not to touch any of the shadow cmdq state.
> +	 */
> +	queue_read(cmd, Q_ENT(q, cons), q->ent_dwords);
> +	dev_err(smmu->dev, "skipping command in error state:\n");
> +	for (i = 0; i < ARRAY_SIZE(cmd); ++i)
> +		dev_err(smmu->dev, "\t0x%016llx\n", (unsigned long long)cmd[i]);
> +
> +	/* Convert the erroneous command into a CMD_SYNC */
> +	if (arm_smmu_cmdq_build_cmd(cmd, &cmd_sync)) {
> +		dev_err(smmu->dev, "failed to convert to CMD_SYNC\n");
> +		return;
> +	}
> +
> +	queue_write(Q_ENT(q, cons), cmd, q->ent_dwords);
> +}
> +
> +/*
> + * Command queue locking.
> + * This is a form of bastardised rwlock with the following major changes:
> + *
> + * - The only LOCK routines are exclusive_trylock() and shared_lock().
> + *   Neither have barrier semantics, and instead provide only a control
> + *   dependency.
> + *
> + * - The UNLOCK routines are supplemented with shared_tryunlock(), which
> + *   fails if the caller appears to be the last lock holder (yes, this is
> + *   racy). All successful UNLOCK routines have RELEASE semantics.
> + */
> +static void arm_smmu_cmdq_shared_lock(struct arm_smmu_cmdq *cmdq)
> +{
> +	int val;
> +
> +	/*
> +	 * We can try to avoid the cmpxchg() loop by simply incrementing the
> +	 * lock counter. When held in exclusive state, the lock counter is set
> +	 * to INT_MIN so these increments won't hurt as the value will remain
> +	 * negative.
> +	 */
> +	if (atomic_fetch_inc_relaxed(&cmdq->lock) >= 0)
> +		return;
> +
> +	do {
> +		val = atomic_cond_read_relaxed(&cmdq->lock, VAL >= 0);
> +	} while (atomic_cmpxchg_relaxed(&cmdq->lock, val, val + 1) != val);
> +}
> +
> +static void arm_smmu_cmdq_shared_unlock(struct arm_smmu_cmdq *cmdq)
> +{
> +	(void)atomic_dec_return_release(&cmdq->lock);
> +}
> +
> +static bool arm_smmu_cmdq_shared_tryunlock(struct arm_smmu_cmdq *cmdq)
> +{
> +	if (atomic_read(&cmdq->lock) == 1)
> +		return false;
> +
> +	arm_smmu_cmdq_shared_unlock(cmdq);
> +	return true;
> +}
> +
> +#define arm_smmu_cmdq_exclusive_trylock_irqsave(cmdq, flags)		\
> +({									\
> +	bool __ret;							\
> +	local_irq_save(flags);						\
> +	__ret = !atomic_cmpxchg_relaxed(&cmdq->lock, 0, INT_MIN);	\
> +	if (!__ret)							\
> +		local_irq_restore(flags);				\
> +	__ret;								\
> +})
> +
> +#define arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags)		\
> +({									\
> +	atomic_set_release(&cmdq->lock, 0);				\
> +	local_irq_restore(flags);					\
> +})
> +
> +
> +/*
> + * Command queue insertion.
> + * This is made fiddly by our attempts to achieve some sort of scalability
> + * since there is one queue shared amongst all of the CPUs in the system.  If
> + * you like mixed-size concurrency, dependency ordering and relaxed atomics,
> + * then you'll *love* this monstrosity.
> + *
> + * The basic idea is to split the queue up into ranges of commands that are
> + * owned by a given CPU; the owner may not have written all of the commands
> + * itself, but is responsible for advancing the hardware prod pointer when
> + * the time comes. The algorithm is roughly:
> + *
> + * 	1. Allocate some space in the queue. At this point we also discover
> + *	   whether the head of the queue is currently owned by another CPU,
> + *	   or whether we are the owner.
> + *
> + *	2. Write our commands into our allocated slots in the queue.
> + *
> + *	3. Mark our slots as valid in arm_smmu_cmdq.valid_map.
> + *
> + *	4. If we are an owner:
> + *		a. Wait for the previous owner to finish.
> + *		b. Mark the queue head as unowned, which tells us the range
> + *		   that we are responsible for publishing.
> + *		c. Wait for all commands in our owned range to become valid.
> + *		d. Advance the hardware prod pointer.
> + *		e. Tell the next owner we've finished.
> + *
> + *	5. If we are inserting a CMD_SYNC (we may or may not have been an
> + *	   owner), then we need to stick around until it has completed:
> + *		a. If we have MSIs, the SMMU can write back into the CMD_SYNC
> + *		   to clear the first 4 bytes.
> + *		b. Otherwise, we spin waiting for the hardware cons pointer to
> + *		   advance past our command.
> + *
> + * The devil is in the details, particularly the use of locking for handling
> + * SYNC completion and freeing up space in the queue before we think that it is
> + * full.
> + */
> +static void __arm_smmu_cmdq_poll_set_valid_map(struct arm_smmu_cmdq *cmdq,
> +					       u32 sprod, u32 eprod, bool set)
> +{
> +	u32 swidx, sbidx, ewidx, ebidx;
> +	struct arm_smmu_ll_queue llq = {
> +		.max_n_shift	= cmdq->q.llq.max_n_shift,
> +		.prod		= sprod,
> +	};
> +
> +	ewidx = BIT_WORD(Q_IDX(&llq, eprod));
> +	ebidx = Q_IDX(&llq, eprod) % BITS_PER_LONG;
> +
> +	while (llq.prod != eprod) {
> +		unsigned long mask;
> +		atomic_long_t *ptr;
> +		u32 limit = BITS_PER_LONG;
> +
> +		swidx = BIT_WORD(Q_IDX(&llq, llq.prod));
> +		sbidx = Q_IDX(&llq, llq.prod) % BITS_PER_LONG;
> +
> +		ptr = &cmdq->valid_map[swidx];
> +
> +		if ((swidx == ewidx) && (sbidx < ebidx))
> +			limit = ebidx;
> +
> +		mask = GENMASK(limit - 1, sbidx);
> +
> +		/*
> +		 * The valid bit is the inverse of the wrap bit. This means
> +		 * that a zero-initialised queue is invalid and, after marking
> +		 * all entries as valid, they become invalid again when we
> +		 * wrap.
> +		 */
> +		if (set) {
> +			atomic_long_xor(mask, ptr);
> +		} else { /* Poll */
> +			unsigned long valid;
> +
> +			valid = (ULONG_MAX + !!Q_WRP(&llq, llq.prod)) & mask;
> +			atomic_long_cond_read_relaxed(ptr, (VAL & mask) == valid);
> +		}
> +
> +		llq.prod = queue_inc_prod_n(&llq, limit - sbidx);
> +	}
> +}
> +
> +/* Mark all entries in the range [sprod, eprod) as valid */
> +static void arm_smmu_cmdq_set_valid_map(struct arm_smmu_cmdq *cmdq,
> +					u32 sprod, u32 eprod)
> +{
> +	__arm_smmu_cmdq_poll_set_valid_map(cmdq, sprod, eprod, true);
> +}
> +
> +/* Wait for all entries in the range [sprod, eprod) to become valid */
> +static void arm_smmu_cmdq_poll_valid_map(struct arm_smmu_cmdq *cmdq,
> +					 u32 sprod, u32 eprod)
> +{
> +	__arm_smmu_cmdq_poll_set_valid_map(cmdq, sprod, eprod, false);
> +}
> +
> +/* Wait for the command queue to become non-full */
> +static int arm_smmu_cmdq_poll_until_not_full(struct arm_smmu_device *smmu,
> +					     struct arm_smmu_ll_queue *llq)
> +{
> +	unsigned long flags;
> +	struct arm_smmu_queue_poll qp;
> +	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
> +	int ret = 0;
> +
> +	/*
> +	 * Try to update our copy of cons by grabbing exclusive cmdq access. If
> +	 * that fails, spin until somebody else updates it for us.
> +	 */
> +	if (arm_smmu_cmdq_exclusive_trylock_irqsave(cmdq, flags)) {
> +		WRITE_ONCE(cmdq->q.llq.cons, readl_relaxed(cmdq->q.cons_reg));
> +		arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags);
> +		llq->val = READ_ONCE(cmdq->q.llq.val);
> +		return 0;
> +	}
> +
> +	queue_poll_init(smmu, &qp);
> +	do {
> +		llq->val = READ_ONCE(smmu->cmdq.q.llq.val);
> +		if (!queue_full(llq))
> +			break;
> +
> +		ret = queue_poll(&qp);
> +	} while (!ret);
> +
> +	return ret;
> +}
> +
> +/*
> + * Wait until the SMMU signals a CMD_SYNC completion MSI.
> + * Must be called with the cmdq lock held in some capacity.
> + */
> +static int __arm_smmu_cmdq_poll_until_msi(struct arm_smmu_device *smmu,
> +					  struct arm_smmu_ll_queue *llq)
> +{
> +	int ret = 0;
> +	struct arm_smmu_queue_poll qp;
> +	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
> +	u32 *cmd = (u32 *)(Q_ENT(&cmdq->q, llq->prod));
> +
> +	queue_poll_init(smmu, &qp);
> +
> +	/*
> +	 * The MSI won't generate an event, since it's being written back
> +	 * into the command queue.
> +	 */
> +	qp.wfe = false;
> +	smp_cond_load_relaxed(cmd, !VAL || (ret = queue_poll(&qp)));
> +	llq->cons = ret ? llq->prod : queue_inc_prod_n(llq, 1);
> +	return ret;
> +}
> +
> +/*
> + * Wait until the SMMU cons index passes llq->prod.
> + * Must be called with the cmdq lock held in some capacity.
> + */
> +static int __arm_smmu_cmdq_poll_until_consumed(struct arm_smmu_device *smmu,
> +					       struct arm_smmu_ll_queue *llq)
> +{
> +	struct arm_smmu_queue_poll qp;
> +	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
> +	u32 prod = llq->prod;
> +	int ret = 0;
> +
> +	queue_poll_init(smmu, &qp);
> +	llq->val = READ_ONCE(smmu->cmdq.q.llq.val);
> +	do {
> +		if (queue_consumed(llq, prod))
> +			break;
> +
> +		ret = queue_poll(&qp);
> +
> +		/*
> +		 * This needs to be a readl() so that our subsequent call
> +		 * to arm_smmu_cmdq_shared_tryunlock() can fail accurately.
> +		 *
> +		 * Specifically, we need to ensure that we observe all
> +		 * shared_lock()s by other CMD_SYNCs that share our owner,
> +		 * so that a failing call to tryunlock() means that we're
> +		 * the last one out and therefore we can safely advance
> +		 * cmdq->q.llq.cons. Roughly speaking:
> +		 *
> +		 * CPU 0		CPU1			CPU2 (us)
> +		 *
> +		 * if (sync)
> +		 * 	shared_lock();
> +		 *
> +		 * dma_wmb();
> +		 * set_valid_map();
> +		 *
> +		 * 			if (owner) {
> +		 *				poll_valid_map();
> +		 *				<control dependency>
> +		 *				writel(prod_reg);
> +		 *
> +		 *						readl(cons_reg);
> +		 *						tryunlock();
> +		 *
> +		 * Requires us to see CPU 0's shared_lock() acquisition.
> +		 */
> +		llq->cons = readl(cmdq->q.cons_reg);
> +	} while (!ret);
> +
> +	return ret;
> +}
> +
> +static int arm_smmu_cmdq_poll_until_sync(struct arm_smmu_device *smmu,
> +					 struct arm_smmu_ll_queue *llq)
> +{
> +	if (smmu->features & ARM_SMMU_FEAT_MSI &&
> +	    smmu->features & ARM_SMMU_FEAT_COHERENCY)
> +		return __arm_smmu_cmdq_poll_until_msi(smmu, llq);
> +
> +	return __arm_smmu_cmdq_poll_until_consumed(smmu, llq);
> +}
> +
> +static void arm_smmu_cmdq_write_entries(struct arm_smmu_cmdq *cmdq, u64 *cmds,
> +					u32 prod, int n)
> +{
> +	int i;
> +	struct arm_smmu_ll_queue llq = {
> +		.max_n_shift	= cmdq->q.llq.max_n_shift,
> +		.prod		= prod,
> +	};
> +
> +	for (i = 0; i < n; ++i) {
> +		u64 *cmd = &cmds[i * CMDQ_ENT_DWORDS];
> +
> +		prod = queue_inc_prod_n(&llq, i);
> +		queue_write(Q_ENT(&cmdq->q, prod), cmd, CMDQ_ENT_DWORDS);
> +	}
> +}
> +
> +/*
> + * This is the actual insertion function, and provides the following
> + * ordering guarantees to callers:
> + *
> + * - There is a dma_wmb() before publishing any commands to the queue.
> + *   This can be relied upon to order prior writes to data structures
> + *   in memory (such as a CD or an STE) before the command.
> + *
> + * - On completion of a CMD_SYNC, there is a control dependency.
> + *   This can be relied upon to order subsequent writes to memory (e.g.
> + *   freeing an IOVA) after completion of the CMD_SYNC.
> + *
> + * - Command insertion is totally ordered, so if two CPUs each race to
> + *   insert their own list of commands then all of the commands from one
> + *   CPU will appear before any of the commands from the other CPU.
> + */
> +static int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
> +				       u64 *cmds, int n, bool sync)
> +{
> +	u64 cmd_sync[CMDQ_ENT_DWORDS];
> +	u32 prod;
> +	unsigned long flags;
> +	bool owner;
> +	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
> +	struct arm_smmu_ll_queue llq = {
> +		.max_n_shift = cmdq->q.llq.max_n_shift,
> +	}, head = llq;
> +	int ret = 0;
> +
> +	/* 1. Allocate some space in the queue */
> +	local_irq_save(flags);
> +	llq.val = READ_ONCE(cmdq->q.llq.val);
> +	do {
> +		u64 old;
> +
> +		while (!queue_has_space(&llq, n + sync)) {
> +			local_irq_restore(flags);
> +			if (arm_smmu_cmdq_poll_until_not_full(smmu, &llq))
> +				dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
> +			local_irq_save(flags);
> +		}
> +
> +		head.cons = llq.cons;
> +		head.prod = queue_inc_prod_n(&llq, n + sync) |
> +					     CMDQ_PROD_OWNED_FLAG;
> +
> +		old = cmpxchg_relaxed(&cmdq->q.llq.val, llq.val, head.val);
> +		if (old == llq.val)
> +			break;
> +
> +		llq.val = old;
> +	} while (1);
> +	owner = !(llq.prod & CMDQ_PROD_OWNED_FLAG);
> +	head.prod &= ~CMDQ_PROD_OWNED_FLAG;
> +	llq.prod &= ~CMDQ_PROD_OWNED_FLAG;
> +
> +	/*
> +	 * 2. Write our commands into the queue
> +	 * Dependency ordering from the cmpxchg() loop above.
> +	 */
> +	arm_smmu_cmdq_write_entries(cmdq, cmds, llq.prod, n);
> +	if (sync) {
> +		prod = queue_inc_prod_n(&llq, n);
> +		arm_smmu_cmdq_build_sync_cmd(cmd_sync, smmu, prod);
> +		queue_write(Q_ENT(&cmdq->q, prod), cmd_sync, CMDQ_ENT_DWORDS);
> +
> +		/*
> +		 * In order to determine completion of our CMD_SYNC, we must
> +		 * ensure that the queue can't wrap twice without us noticing.
> +		 * We achieve that by taking the cmdq lock as shared before
> +		 * marking our slot as valid.
> +		 */
> +		arm_smmu_cmdq_shared_lock(cmdq);
> +	}
> +
> +	/* 3. Mark our slots as valid, ensuring commands are visible first */
> +	dma_wmb();
> +	arm_smmu_cmdq_set_valid_map(cmdq, llq.prod, head.prod);
> +
> +	/* 4. If we are the owner, take control of the SMMU hardware */
> +	if (owner) {
> +		/* a. Wait for previous owner to finish */
> +		atomic_cond_read_relaxed(&cmdq->owner_prod, VAL == llq.prod);
> +
> +		/* b. Stop gathering work by clearing the owned flag */
> +		prod = atomic_fetch_andnot_relaxed(CMDQ_PROD_OWNED_FLAG,
> +						   &cmdq->q.llq.atomic.prod);
> +		prod &= ~CMDQ_PROD_OWNED_FLAG;
> +
> +		/*
> +		 * c. Wait for any gathered work to be written to the queue.
> +		 * Note that we read our own entries so that we have the control
> +		 * dependency required by (d).
> +		 */
> +		arm_smmu_cmdq_poll_valid_map(cmdq, llq.prod, prod);
> +
> +		/*
> +		 * d. Advance the hardware prod pointer
> +		 * Control dependency ordering from the entries becoming valid.
> +		 */
> +		writel_relaxed(prod, cmdq->q.prod_reg);
> +
> +		/*
> +		 * e. Tell the next owner we're done
> +		 * Make sure we've updated the hardware first, so that we don't
> +		 * race to update prod and potentially move it backwards.
> +		 */
> +		atomic_set_release(&cmdq->owner_prod, prod);
> +	}
> +
> +	/* 5. If we are inserting a CMD_SYNC, we must wait for it to complete */
> +	if (sync) {
> +		llq.prod = queue_inc_prod_n(&llq, n);
> +		ret = arm_smmu_cmdq_poll_until_sync(smmu, &llq);
> +		if (ret) {
> +			dev_err_ratelimited(smmu->dev,
> +					    "CMD_SYNC timeout at 0x%08x [hwprod 0x%08x, hwcons 0x%08x]\n",
> +					    llq.prod,
> +					    readl_relaxed(cmdq->q.prod_reg),
> +					    readl_relaxed(cmdq->q.cons_reg));
> +		}
> +
> +		/*
> +		 * Try to unlock the cmdq lock. This will fail if we're the last
> +		 * reader, in which case we can safely update cmdq->q.llq.cons
> +		 */
> +		if (!arm_smmu_cmdq_shared_tryunlock(cmdq)) {
> +			WRITE_ONCE(cmdq->q.llq.cons, llq.cons);
> +			arm_smmu_cmdq_shared_unlock(cmdq);
> +		}
> +	}
> +
> +	local_irq_restore(flags);
> +	return ret;
> +}
> +
> +static int arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
> +				   struct arm_smmu_cmdq_ent *ent)
> +{
> +	u64 cmd[CMDQ_ENT_DWORDS];
> +
> +	if (arm_smmu_cmdq_build_cmd(cmd, ent)) {
> +		dev_warn(smmu->dev, "ignoring unknown CMDQ opcode 0x%x\n",
> +			 ent->opcode);
> +		return -EINVAL;
> +	}
> +
> +	return arm_smmu_cmdq_issue_cmdlist(smmu, cmd, 1, false);
> +}
> +
> +static int arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
> +{
> +	return arm_smmu_cmdq_issue_cmdlist(smmu, NULL, 0, true);
> +}
> +
> +static void arm_smmu_cmdq_batch_add(struct arm_smmu_device *smmu,
> +				    struct arm_smmu_cmdq_batch *cmds,
> +				    struct arm_smmu_cmdq_ent *cmd)
> +{
> +	if (cmds->num == CMDQ_BATCH_ENTRIES) {
> +		arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num, false);
> +		cmds->num = 0;
> +	}
> +	arm_smmu_cmdq_build_cmd(&cmds->cmds[cmds->num * CMDQ_ENT_DWORDS], cmd);
> +	cmds->num++;
> +}
> +
> +static int arm_smmu_cmdq_batch_submit(struct arm_smmu_device *smmu,
> +				      struct arm_smmu_cmdq_batch *cmds)
> +{
> +	return arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num, true);
> +}
> +
> +/* Context descriptor manipulation functions */
> +static void arm_smmu_sync_cd(struct arm_smmu_domain *smmu_domain,
> +			     int ssid, bool leaf)
> +{
> +	size_t i;
> +	unsigned long flags;
> +	struct arm_smmu_master *master;
> +	struct arm_smmu_cmdq_batch cmds = {};
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +	struct arm_smmu_cmdq_ent cmd = {
> +		.opcode	= CMDQ_OP_CFGI_CD,
> +		.cfgi	= {
> +			.ssid	= ssid,
> +			.leaf	= leaf,
> +		},
> +	};
> +
> +	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
> +	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
> +		for (i = 0; i < master->num_sids; i++) {
> +			cmd.cfgi.sid = master->sids[i];
> +			arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
> +		}
> +	}
> +	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
> +
> +	arm_smmu_cmdq_batch_submit(smmu, &cmds);
> +}
> +
> +static int arm_smmu_alloc_cd_leaf_table(struct arm_smmu_device *smmu,
> +					struct arm_smmu_l1_ctx_desc *l1_desc)
> +{
> +	size_t size = CTXDESC_L2_ENTRIES * (CTXDESC_CD_DWORDS << 3);
> +
> +	l1_desc->l2ptr = dmam_alloc_coherent(smmu->dev, size,
> +					     &l1_desc->l2ptr_dma, GFP_KERNEL);
> +	if (!l1_desc->l2ptr) {
> +		dev_warn(smmu->dev,
> +			 "failed to allocate context descriptor table\n");
> +		return -ENOMEM;
> +	}
> +	return 0;
> +}
> +
> +static void arm_smmu_write_cd_l1_desc(__le64 *dst,
> +				      struct arm_smmu_l1_ctx_desc *l1_desc)
> +{
> +	u64 val = (l1_desc->l2ptr_dma & CTXDESC_L1_DESC_L2PTR_MASK) |
> +		  CTXDESC_L1_DESC_V;
> +
> +	/* See comment in arm_smmu_write_ctx_desc() */
> +	WRITE_ONCE(*dst, cpu_to_le64(val));
> +}
> +
> +static __le64 *arm_smmu_get_cd_ptr(struct arm_smmu_domain *smmu_domain,
> +				   u32 ssid)
> +{
> +	__le64 *l1ptr;
> +	unsigned int idx;
> +	struct arm_smmu_l1_ctx_desc *l1_desc;
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +	struct arm_smmu_ctx_desc_cfg *cdcfg = &smmu_domain->s1_cfg.cdcfg;
> +
> +	if (smmu_domain->s1_cfg.s1fmt == STRTAB_STE_0_S1FMT_LINEAR)
> +		return cdcfg->cdtab + ssid * CTXDESC_CD_DWORDS;
> +
> +	idx = ssid >> CTXDESC_SPLIT;
> +	l1_desc = &cdcfg->l1_desc[idx];
> +	if (!l1_desc->l2ptr) {
> +		if (arm_smmu_alloc_cd_leaf_table(smmu, l1_desc))
> +			return NULL;
> +
> +		l1ptr = cdcfg->cdtab + idx * CTXDESC_L1_DESC_DWORDS;
> +		arm_smmu_write_cd_l1_desc(l1ptr, l1_desc);
> +		/* An invalid L1CD can be cached */
> +		arm_smmu_sync_cd(smmu_domain, ssid, false);
> +	}
> +	idx = ssid & (CTXDESC_L2_ENTRIES - 1);
> +	return l1_desc->l2ptr + idx * CTXDESC_CD_DWORDS;
> +}
> +
> +static int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain,
> +				   int ssid, struct arm_smmu_ctx_desc *cd)
> +{
> +	/*
> +	 * This function handles the following cases:
> +	 *
> +	 * (1) Install primary CD, for normal DMA traffic (SSID = 0).
> +	 * (2) Install a secondary CD, for SID+SSID traffic.
> +	 * (3) Update ASID of a CD. Atomically write the first 64 bits of the
> +	 *     CD, then invalidate the old entry and mappings.
> +	 * (4) Remove a secondary CD.
> +	 */
> +	u64 val;
> +	bool cd_live;
> +	__le64 *cdptr;
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +
> +	if (WARN_ON(ssid >= (1 << smmu_domain->s1_cfg.s1cdmax)))
> +		return -E2BIG;
> +
> +	cdptr = arm_smmu_get_cd_ptr(smmu_domain, ssid);
> +	if (!cdptr)
> +		return -ENOMEM;
> +
> +	val = le64_to_cpu(cdptr[0]);
> +	cd_live = !!(val & CTXDESC_CD_0_V);
> +
> +	if (!cd) { /* (4) */
> +		val = 0;
> +	} else if (cd_live) { /* (3) */
> +		val &= ~CTXDESC_CD_0_ASID;
> +		val |= FIELD_PREP(CTXDESC_CD_0_ASID, cd->asid);
> +		/*
> +		 * Until CD+TLB invalidation, both ASIDs may be used for tagging
> +		 * this substream's traffic
> +		 */
> +	} else { /* (1) and (2) */
> +		cdptr[1] = cpu_to_le64(cd->ttbr & CTXDESC_CD_1_TTB0_MASK);
> +		cdptr[2] = 0;
> +		cdptr[3] = cpu_to_le64(cd->mair);
> +
> +		/*
> +		 * STE is live, and the SMMU might read dwords of this CD in any
> +		 * order. Ensure that it observes valid values before reading
> +		 * V=1.
> +		 */
> +		arm_smmu_sync_cd(smmu_domain, ssid, true);
> +
> +		val = cd->tcr |
> +#ifdef __BIG_ENDIAN
> +			CTXDESC_CD_0_ENDI |
> +#endif
> +			CTXDESC_CD_0_R | CTXDESC_CD_0_A | CTXDESC_CD_0_ASET |
> +			CTXDESC_CD_0_AA64 |
> +			FIELD_PREP(CTXDESC_CD_0_ASID, cd->asid) |
> +			CTXDESC_CD_0_V;
> +
> +		/* STALL_MODEL==0b10 && CD.S==0 is ILLEGAL */
> +		if (smmu->features & ARM_SMMU_FEAT_STALL_FORCE)
> +			val |= CTXDESC_CD_0_S;
> +	}
> +
> +	/*
> +	 * The SMMU accesses 64-bit values atomically. See IHI0070Ca 3.21.3
> +	 * "Configuration structures and configuration invalidation completion"
> +	 *
> +	 *   The size of single-copy atomic reads made by the SMMU is
> +	 *   IMPLEMENTATION DEFINED but must be at least 64 bits. Any single
> +	 *   field within an aligned 64-bit span of a structure can be altered
> +	 *   without first making the structure invalid.
> +	 */
> +	WRITE_ONCE(cdptr[0], cpu_to_le64(val));
> +	arm_smmu_sync_cd(smmu_domain, ssid, true);
> +	return 0;
> +}
> +
> +static int arm_smmu_alloc_cd_tables(struct arm_smmu_domain *smmu_domain)
> +{
> +	int ret;
> +	size_t l1size;
> +	size_t max_contexts;
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +	struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
> +	struct arm_smmu_ctx_desc_cfg *cdcfg = &cfg->cdcfg;
> +
> +	max_contexts = 1 << cfg->s1cdmax;
> +
> +	if (!(smmu->features & ARM_SMMU_FEAT_2_LVL_CDTAB) ||
> +	    max_contexts <= CTXDESC_L2_ENTRIES) {
> +		cfg->s1fmt = STRTAB_STE_0_S1FMT_LINEAR;
> +		cdcfg->num_l1_ents = max_contexts;
> +
> +		l1size = max_contexts * (CTXDESC_CD_DWORDS << 3);
> +	} else {
> +		cfg->s1fmt = STRTAB_STE_0_S1FMT_64K_L2;
> +		cdcfg->num_l1_ents = DIV_ROUND_UP(max_contexts,
> +						  CTXDESC_L2_ENTRIES);
> +
> +		cdcfg->l1_desc = devm_kcalloc(smmu->dev, cdcfg->num_l1_ents,
> +					      sizeof(*cdcfg->l1_desc),
> +					      GFP_KERNEL);
> +		if (!cdcfg->l1_desc)
> +			return -ENOMEM;
> +
> +		l1size = cdcfg->num_l1_ents * (CTXDESC_L1_DESC_DWORDS << 3);
> +	}
> +
> +	cdcfg->cdtab = dmam_alloc_coherent(smmu->dev, l1size, &cdcfg->cdtab_dma,
> +					   GFP_KERNEL);
> +	if (!cdcfg->cdtab) {
> +		dev_warn(smmu->dev, "failed to allocate context descriptor\n");
> +		ret = -ENOMEM;
> +		goto err_free_l1;
> +	}
> +
> +	return 0;
> +
> +err_free_l1:
> +	if (cdcfg->l1_desc) {
> +		devm_kfree(smmu->dev, cdcfg->l1_desc);
> +		cdcfg->l1_desc = NULL;
> +	}
> +	return ret;
> +}
> +
> +static void arm_smmu_free_cd_tables(struct arm_smmu_domain *smmu_domain)
> +{
> +	int i;
> +	size_t size, l1size;
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +	struct arm_smmu_ctx_desc_cfg *cdcfg = &smmu_domain->s1_cfg.cdcfg;
> +
> +	if (cdcfg->l1_desc) {
> +		size = CTXDESC_L2_ENTRIES * (CTXDESC_CD_DWORDS << 3);
> +
> +		for (i = 0; i < cdcfg->num_l1_ents; i++) {
> +			if (!cdcfg->l1_desc[i].l2ptr)
> +				continue;
> +
> +			dmam_free_coherent(smmu->dev, size,
> +					   cdcfg->l1_desc[i].l2ptr,
> +					   cdcfg->l1_desc[i].l2ptr_dma);
> +		}
> +		devm_kfree(smmu->dev, cdcfg->l1_desc);
> +		cdcfg->l1_desc = NULL;
> +
> +		l1size = cdcfg->num_l1_ents * (CTXDESC_L1_DESC_DWORDS << 3);
> +	} else {
> +		l1size = cdcfg->num_l1_ents * (CTXDESC_CD_DWORDS << 3);
> +	}
> +
> +	dmam_free_coherent(smmu->dev, l1size, cdcfg->cdtab, cdcfg->cdtab_dma);
> +	cdcfg->cdtab_dma = 0;
> +	cdcfg->cdtab = NULL;
> +}
> +
> +static void arm_smmu_free_asid(struct arm_smmu_ctx_desc *cd)
> +{
> +	if (!cd->asid)
> +		return;
> +
> +	xa_erase(&asid_xa, cd->asid);
> +}
> +
> +/* Stream table manipulation functions */
> +static void
> +arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc *desc)
> +{
> +	u64 val = 0;
> +
> +	val |= FIELD_PREP(STRTAB_L1_DESC_SPAN, desc->span);
> +	val |= desc->l2ptr_dma & STRTAB_L1_DESC_L2PTR_MASK;
> +
> +	/* See comment in arm_smmu_write_ctx_desc() */
> +	WRITE_ONCE(*dst, cpu_to_le64(val));
> +}
> +
> +static void arm_smmu_sync_ste_for_sid(struct arm_smmu_device *smmu, u32 sid)
> +{
> +	struct arm_smmu_cmdq_ent cmd = {
> +		.opcode	= CMDQ_OP_CFGI_STE,
> +		.cfgi	= {
> +			.sid	= sid,
> +			.leaf	= true,
> +		},
> +	};
> +
> +	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> +	arm_smmu_cmdq_issue_sync(smmu);
> +}
> +
> +static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
> +				      __le64 *dst)
> +{
> +	/*
> +	 * This is hideously complicated, but we only really care about
> +	 * three cases at the moment:
> +	 *
> +	 * 1. Invalid (all zero) -> bypass/fault (init)
> +	 * 2. Bypass/fault -> translation/bypass (attach)
> +	 * 3. Translation/bypass -> bypass/fault (detach)
> +	 *
> +	 * Given that we can't update the STE atomically and the SMMU
> +	 * doesn't read the thing in a defined order, that leaves us
> +	 * with the following maintenance requirements:
> +	 *
> +	 * 1. Update Config, return (init time STEs aren't live)
> +	 * 2. Write everything apart from dword 0, sync, write dword 0, sync
> +	 * 3. Update Config, sync
> +	 */
> +	u64 val = le64_to_cpu(dst[0]);
> +	bool ste_live = false;
> +	struct arm_smmu_device *smmu = NULL;
> +	struct arm_smmu_s1_cfg *s1_cfg = NULL;
> +	struct arm_smmu_s2_cfg *s2_cfg = NULL;
> +	struct arm_smmu_domain *smmu_domain = NULL;
> +	struct arm_smmu_cmdq_ent prefetch_cmd = {
> +		.opcode		= CMDQ_OP_PREFETCH_CFG,
> +		.prefetch	= {
> +			.sid	= sid,
> +		},
> +	};
> +
> +	if (master) {
> +		smmu_domain = master->domain;
> +		smmu = master->smmu;
> +	}
> +
> +	if (smmu_domain) {
> +		switch (smmu_domain->stage) {
> +		case ARM_SMMU_DOMAIN_S1:
> +			s1_cfg = &smmu_domain->s1_cfg;
> +			break;
> +		case ARM_SMMU_DOMAIN_S2:
> +		case ARM_SMMU_DOMAIN_NESTED:
> +			s2_cfg = &smmu_domain->s2_cfg;
> +			break;
> +		default:
> +			break;
> +		}
> +	}
> +
> +	if (val & STRTAB_STE_0_V) {
> +		switch (FIELD_GET(STRTAB_STE_0_CFG, val)) {
> +		case STRTAB_STE_0_CFG_BYPASS:
> +			break;
> +		case STRTAB_STE_0_CFG_S1_TRANS:
> +		case STRTAB_STE_0_CFG_S2_TRANS:
> +			ste_live = true;
> +			break;
> +		case STRTAB_STE_0_CFG_ABORT:
> +			BUG_ON(!disable_bypass);
> +			break;
> +		default:
> +			BUG(); /* STE corruption */
> +		}
> +	}
> +
> +	/* Nuke the existing STE_0 value, as we're going to rewrite it */
> +	val = STRTAB_STE_0_V;
> +
> +	/* Bypass/fault */
> +	if (!smmu_domain || !(s1_cfg || s2_cfg)) {
> +		if (!smmu_domain && disable_bypass)
> +			val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_ABORT);
> +		else
> +			val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_BYPASS);
> +
> +		dst[0] = cpu_to_le64(val);
> +		dst[1] = cpu_to_le64(FIELD_PREP(STRTAB_STE_1_SHCFG,
> +						STRTAB_STE_1_SHCFG_INCOMING));
> +		dst[2] = 0; /* Nuke the VMID */
> +		/*
> +		 * The SMMU can perform negative caching, so we must sync
> +		 * the STE regardless of whether the old value was live.
> +		 */
> +		if (smmu)
> +			arm_smmu_sync_ste_for_sid(smmu, sid);
> +		return;
> +	}
> +
> +	if (s1_cfg) {
> +		BUG_ON(ste_live);
> +		dst[1] = cpu_to_le64(
> +			 FIELD_PREP(STRTAB_STE_1_S1DSS, STRTAB_STE_1_S1DSS_SSID0) |
> +			 FIELD_PREP(STRTAB_STE_1_S1CIR, STRTAB_STE_1_S1C_CACHE_WBRA) |
> +			 FIELD_PREP(STRTAB_STE_1_S1COR, STRTAB_STE_1_S1C_CACHE_WBRA) |
> +			 FIELD_PREP(STRTAB_STE_1_S1CSH, ARM_SMMU_SH_ISH) |
> +			 FIELD_PREP(STRTAB_STE_1_STRW, STRTAB_STE_1_STRW_NSEL1));
> +
> +		if (smmu->features & ARM_SMMU_FEAT_STALLS &&
> +		   !(smmu->features & ARM_SMMU_FEAT_STALL_FORCE))
> +			dst[1] |= cpu_to_le64(STRTAB_STE_1_S1STALLD);
> +
> +		val |= (s1_cfg->cdcfg.cdtab_dma & STRTAB_STE_0_S1CTXPTR_MASK) |
> +			FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_S1_TRANS) |
> +			FIELD_PREP(STRTAB_STE_0_S1CDMAX, s1_cfg->s1cdmax) |
> +			FIELD_PREP(STRTAB_STE_0_S1FMT, s1_cfg->s1fmt);
> +	}
> +
> +	if (s2_cfg) {
> +		BUG_ON(ste_live);
> +		dst[2] = cpu_to_le64(
> +			 FIELD_PREP(STRTAB_STE_2_S2VMID, s2_cfg->vmid) |
> +			 FIELD_PREP(STRTAB_STE_2_VTCR, s2_cfg->vtcr) |
> +#ifdef __BIG_ENDIAN
> +			 STRTAB_STE_2_S2ENDI |
> +#endif
> +			 STRTAB_STE_2_S2PTW | STRTAB_STE_2_S2AA64 |
> +			 STRTAB_STE_2_S2R);
> +
> +		dst[3] = cpu_to_le64(s2_cfg->vttbr & STRTAB_STE_3_S2TTB_MASK);
> +
> +		val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_S2_TRANS);
> +	}
> +
> +	if (master->ats_enabled)
> +		dst[1] |= cpu_to_le64(FIELD_PREP(STRTAB_STE_1_EATS,
> +						 STRTAB_STE_1_EATS_TRANS));
> +
> +	arm_smmu_sync_ste_for_sid(smmu, sid);
> +	/* See comment in arm_smmu_write_ctx_desc() */
> +	WRITE_ONCE(dst[0], cpu_to_le64(val));
> +	arm_smmu_sync_ste_for_sid(smmu, sid);
> +
> +	/* It's likely that we'll want to use the new STE soon */
> +	if (!(smmu->options & ARM_SMMU_OPT_SKIP_PREFETCH))
> +		arm_smmu_cmdq_issue_cmd(smmu, &prefetch_cmd);
> +}
> +
> +static void arm_smmu_init_bypass_stes(u64 *strtab, unsigned int nent)
> +{
> +	unsigned int i;
> +
> +	for (i = 0; i < nent; ++i) {
> +		arm_smmu_write_strtab_ent(NULL, -1, strtab);
> +		strtab += STRTAB_STE_DWORDS;
> +	}
> +}
> +
> +static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
> +{
> +	size_t size;
> +	void *strtab;
> +	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
> +	struct arm_smmu_strtab_l1_desc *desc = &cfg->l1_desc[sid >> STRTAB_SPLIT];
> +
> +	if (desc->l2ptr)
> +		return 0;
> +
> +	size = 1 << (STRTAB_SPLIT + ilog2(STRTAB_STE_DWORDS) + 3);
> +	strtab = &cfg->strtab[(sid >> STRTAB_SPLIT) * STRTAB_L1_DESC_DWORDS];
> +
> +	desc->span = STRTAB_SPLIT + 1;
> +	desc->l2ptr = dmam_alloc_coherent(smmu->dev, size, &desc->l2ptr_dma,
> +					  GFP_KERNEL);
> +	if (!desc->l2ptr) {
> +		dev_err(smmu->dev,
> +			"failed to allocate l2 stream table for SID %u\n",
> +			sid);
> +		return -ENOMEM;
> +	}
> +
> +	arm_smmu_init_bypass_stes(desc->l2ptr, 1 << STRTAB_SPLIT);
> +	arm_smmu_write_strtab_l1_desc(strtab, desc);
> +	return 0;
> +}
> +
> +/* IRQ and event handlers */
> +static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
> +{
> +	int i;
> +	struct arm_smmu_device *smmu = dev;
> +	struct arm_smmu_queue *q = &smmu->evtq.q;
> +	struct arm_smmu_ll_queue *llq = &q->llq;
> +	u64 evt[EVTQ_ENT_DWORDS];
> +
> +	do {
> +		while (!queue_remove_raw(q, evt)) {
> +			u8 id = FIELD_GET(EVTQ_0_ID, evt[0]);
> +
> +			dev_info(smmu->dev, "event 0x%02x received:\n", id);
> +			for (i = 0; i < ARRAY_SIZE(evt); ++i)
> +				dev_info(smmu->dev, "\t0x%016llx\n",
> +					 (unsigned long long)evt[i]);
> +		}
> +
> +		/*
> +		 * Not much we can do on overflow, so scream and pretend we're
> +		 * trying harder.
> +		 */
> +		if (queue_sync_prod_in(q) == -EOVERFLOW)
> +			dev_err(smmu->dev, "EVTQ overflow detected -- events lost\n");
> +	} while (!queue_empty(llq));
> +
> +	/* Sync our overflow flag, as we believe we're up to speed */
> +	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
> +		    Q_IDX(llq, llq->cons);
> +	return IRQ_HANDLED;
> +}
> +
> +static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
> +{
> +	u32 sid, ssid;
> +	u16 grpid;
> +	bool ssv, last;
> +
> +	sid = FIELD_GET(PRIQ_0_SID, evt[0]);
> +	ssv = FIELD_GET(PRIQ_0_SSID_V, evt[0]);
> +	ssid = ssv ? FIELD_GET(PRIQ_0_SSID, evt[0]) : 0;
> +	last = FIELD_GET(PRIQ_0_PRG_LAST, evt[0]);
> +	grpid = FIELD_GET(PRIQ_1_PRG_IDX, evt[1]);
> +
> +	dev_info(smmu->dev, "unexpected PRI request received:\n");
> +	dev_info(smmu->dev,
> +		 "\tsid 0x%08x.0x%05x: [%u%s] %sprivileged %s%s%s access at iova 0x%016llx\n",
> +		 sid, ssid, grpid, last ? "L" : "",
> +		 evt[0] & PRIQ_0_PERM_PRIV ? "" : "un",
> +		 evt[0] & PRIQ_0_PERM_READ ? "R" : "",
> +		 evt[0] & PRIQ_0_PERM_WRITE ? "W" : "",
> +		 evt[0] & PRIQ_0_PERM_EXEC ? "X" : "",
> +		 evt[1] & PRIQ_1_ADDR_MASK);
> +
> +	if (last) {
> +		struct arm_smmu_cmdq_ent cmd = {
> +			.opcode			= CMDQ_OP_PRI_RESP,
> +			.substream_valid	= ssv,
> +			.pri			= {
> +				.sid	= sid,
> +				.ssid	= ssid,
> +				.grpid	= grpid,
> +				.resp	= PRI_RESP_DENY,
> +			},
> +		};
> +
> +		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> +	}
> +}
> +
> +static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
> +{
> +	struct arm_smmu_device *smmu = dev;
> +	struct arm_smmu_queue *q = &smmu->priq.q;
> +	struct arm_smmu_ll_queue *llq = &q->llq;
> +	u64 evt[PRIQ_ENT_DWORDS];
> +
> +	do {
> +		while (!queue_remove_raw(q, evt))
> +			arm_smmu_handle_ppr(smmu, evt);
> +
> +		if (queue_sync_prod_in(q) == -EOVERFLOW)
> +			dev_err(smmu->dev, "PRIQ overflow detected -- requests lost\n");
> +	} while (!queue_empty(llq));
> +
> +	/* Sync our overflow flag, as we believe we're up to speed */
> +	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
> +		    Q_IDX(llq, llq->cons);
> +	queue_sync_cons_out(q);
> +	return IRQ_HANDLED;
> +}
> +
> +static int arm_smmu_device_disable(struct arm_smmu_device *smmu);
> +
> +static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
> +{
> +	u32 gerror, gerrorn, active;
> +	struct arm_smmu_device *smmu = dev;
> +
> +	gerror = readl_relaxed(smmu->base + ARM_SMMU_GERROR);
> +	gerrorn = readl_relaxed(smmu->base + ARM_SMMU_GERRORN);
> +
> +	active = gerror ^ gerrorn;
> +	if (!(active & GERROR_ERR_MASK))
> +		return IRQ_NONE; /* No errors pending */
> +
> +	dev_warn(smmu->dev,
> +		 "unexpected global error reported (0x%08x), this could be serious\n",
> +		 active);
> +
> +	if (active & GERROR_SFM_ERR) {
> +		dev_err(smmu->dev, "device has entered Service Failure Mode!\n");
> +		arm_smmu_device_disable(smmu);
> +	}
> +
> +	if (active & GERROR_MSI_GERROR_ABT_ERR)
> +		dev_warn(smmu->dev, "GERROR MSI write aborted\n");
> +
> +	if (active & GERROR_MSI_PRIQ_ABT_ERR)
> +		dev_warn(smmu->dev, "PRIQ MSI write aborted\n");
> +
> +	if (active & GERROR_MSI_EVTQ_ABT_ERR)
> +		dev_warn(smmu->dev, "EVTQ MSI write aborted\n");
> +
> +	if (active & GERROR_MSI_CMDQ_ABT_ERR)
> +		dev_warn(smmu->dev, "CMDQ MSI write aborted\n");
> +
> +	if (active & GERROR_PRIQ_ABT_ERR)
> +		dev_err(smmu->dev, "PRIQ write aborted -- events may have been lost\n");
> +
> +	if (active & GERROR_EVTQ_ABT_ERR)
> +		dev_err(smmu->dev, "EVTQ write aborted -- events may have been lost\n");
> +
> +	if (active & GERROR_CMDQ_ERR)
> +		arm_smmu_cmdq_skip_err(smmu);
> +
> +	writel(gerror, smmu->base + ARM_SMMU_GERRORN);
> +	return IRQ_HANDLED;
> +}
> +
> +static irqreturn_t arm_smmu_combined_irq_thread(int irq, void *dev)
> +{
> +	struct arm_smmu_device *smmu = dev;
> +
> +	arm_smmu_evtq_thread(irq, dev);
> +	if (smmu->features & ARM_SMMU_FEAT_PRI)
> +		arm_smmu_priq_thread(irq, dev);
> +
> +	return IRQ_HANDLED;
> +}
> +
> +static irqreturn_t arm_smmu_combined_irq_handler(int irq, void *dev)
> +{
> +	arm_smmu_gerror_handler(irq, dev);
> +	return IRQ_WAKE_THREAD;
> +}
> +
> +static void
> +arm_smmu_atc_inv_to_cmd(int ssid, unsigned long iova, size_t size,
> +			struct arm_smmu_cmdq_ent *cmd)
> +{
> +	size_t log2_span;
> +	size_t span_mask;
> +	/* ATC invalidates are always on 4096-byte pages */
> +	size_t inval_grain_shift = 12;
> +	unsigned long page_start, page_end;
> +
> +	*cmd = (struct arm_smmu_cmdq_ent) {
> +		.opcode			= CMDQ_OP_ATC_INV,
> +		.substream_valid	= !!ssid,
> +		.atc.ssid		= ssid,
> +	};
> +
> +	if (!size) {
> +		cmd->atc.size = ATC_INV_SIZE_ALL;
> +		return;
> +	}
> +
> +	page_start	= iova >> inval_grain_shift;
> +	page_end	= (iova + size - 1) >> inval_grain_shift;
> +
> +	/*
> +	 * In an ATS Invalidate Request, the address must be aligned to the
> +	 * range size, which must be a power-of-two multiple of the page size. We
> +	 * thus have to choose between grossly over-invalidating the region, or
> +	 * splitting the invalidation into multiple commands. For simplicity
> +	 * we'll go with the first solution, but should refine it in the future
> +	 * if multiple commands are shown to be more efficient.
> +	 *
> +	 * Find the smallest power of two that covers the range. The most
> +	 * significant differing bit between the start and end addresses,
> +	 * fls(start ^ end), indicates the required span. For example:
> +	 *
> +	 * We want to invalidate pages [8; 11]. This is already the ideal range:
> +	 *		x = 0b1000 ^ 0b1011 = 0b11
> +	 *		span = 1 << fls(x) = 4
> +	 *
> +	 * To invalidate pages [7; 10], we need to invalidate [0; 15]:
> +	 *		x = 0b0111 ^ 0b1010 = 0b1101
> +	 *		span = 1 << fls(x) = 16
> +	 */
> +	log2_span	= fls_long(page_start ^ page_end);
> +	span_mask	= (1ULL << log2_span) - 1;
> +
> +	page_start	&= ~span_mask;
> +
> +	cmd->atc.addr	= page_start << inval_grain_shift;
> +	cmd->atc.size	= log2_span;
> +}
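The worked examples in the comment can be checked in isolation. Below is a userspace sketch of the same span arithmetic (fls_long() reimplemented locally, 4KiB granule as in the code above); it is an illustration of the calculation, not driver code:

```c
#include <assert.h>

/* Local stand-in for the kernel's fls_long(): position of the most
 * significant set bit, counting from 1; 0 when no bit is set. */
static unsigned int fls_long_sketch(unsigned long x)
{
	unsigned int r = 0;

	while (x) {
		x >>= 1;
		r++;
	}
	return r;
}

/* Compute the aligned power-of-two page span covering [iova, iova+size),
 * mirroring arm_smmu_atc_inv_to_cmd(): the most significant differing
 * bit between the first and last page numbers dictates the span. */
static void atc_inv_span(unsigned long iova, unsigned long size,
			 unsigned long *addr, unsigned int *log2_span)
{
	const unsigned int grain = 12;	/* 4096-byte ATC granule */
	unsigned long start = iova >> grain;
	unsigned long end = (iova + size - 1) >> grain;
	unsigned long mask;

	*log2_span = fls_long_sketch(start ^ end);
	mask = (1UL << *log2_span) - 1;
	*addr = (start & ~mask) << grain;	/* align start down to the span */
}
```

Invalidating pages [8; 11] yields a span of 4 pages starting at page 8, while pages [7; 10] widen to the 16-page span starting at page 0, exactly as the comment describes.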
> +
> +static int arm_smmu_atc_inv_master(struct arm_smmu_master *master)
> +{
> +	int i;
> +	struct arm_smmu_cmdq_ent cmd;
> +
> +	arm_smmu_atc_inv_to_cmd(0, 0, 0, &cmd);
> +
> +	for (i = 0; i < master->num_sids; i++) {
> +		cmd.atc.sid = master->sids[i];
> +		arm_smmu_cmdq_issue_cmd(master->smmu, &cmd);
> +	}
> +
> +	return arm_smmu_cmdq_issue_sync(master->smmu);
> +}
> +
> +static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
> +				   int ssid, unsigned long iova, size_t size)
> +{
> +	int i;
> +	unsigned long flags;
> +	struct arm_smmu_cmdq_ent cmd;
> +	struct arm_smmu_master *master;
> +	struct arm_smmu_cmdq_batch cmds = {};
> +
> +	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_ATS))
> +		return 0;
> +
> +	/*
> +	 * Ensure that we've completed prior invalidation of the main TLBs
> +	 * before we read 'nr_ats_masters' in case of a concurrent call to
> +	 * arm_smmu_enable_ats():
> +	 *
> +	 *	// unmap()			// arm_smmu_enable_ats()
> +	 *	TLBI+SYNC			atomic_inc(&nr_ats_masters);
> +	 *	smp_mb();			[...]
> +	 *	atomic_read(&nr_ats_masters);	pci_enable_ats() // writel()
> +	 *
> +	 * Ensures that we always see the incremented 'nr_ats_masters' count if
> +	 * ATS was enabled at the PCI device before completion of the TLBI.
> +	 */
> +	smp_mb();
> +	if (!atomic_read(&smmu_domain->nr_ats_masters))
> +		return 0;
> +
> +	arm_smmu_atc_inv_to_cmd(ssid, iova, size, &cmd);
> +
> +	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
> +	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
> +		if (!master->ats_enabled)
> +			continue;
> +
> +		for (i = 0; i < master->num_sids; i++) {
> +			cmd.atc.sid = master->sids[i];
> +			arm_smmu_cmdq_batch_add(smmu_domain->smmu, &cmds, &cmd);
> +		}
> +	}
> +	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
> +
> +	return arm_smmu_cmdq_batch_submit(smmu_domain->smmu, &cmds);
> +}
> +
> +/* IO_PGTABLE API */
> +static void arm_smmu_tlb_inv_context(void *cookie)
> +{
> +	struct arm_smmu_domain *smmu_domain = cookie;
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +	struct arm_smmu_cmdq_ent cmd;
> +
> +	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
> +		cmd.opcode	= CMDQ_OP_TLBI_NH_ASID;
> +		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd.asid;
> +		cmd.tlbi.vmid	= 0;
> +	} else {
> +		cmd.opcode	= CMDQ_OP_TLBI_S12_VMALL;
> +		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
> +	}
> +
> +	/*
> +	 * NOTE: when io-pgtable is in non-strict mode, we may get here with
> +	 * PTEs previously cleared by unmaps on the current CPU not yet visible
> +	 * to the SMMU. We are relying on the dma_wmb() implicit during cmd
> +	 * insertion to guarantee those are observed before the TLBI. Do be
> +	 * careful, 007.
> +	 */
> +	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> +	arm_smmu_cmdq_issue_sync(smmu);
> +	arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0);
> +}
> +
> +static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size,
> +				   size_t granule, bool leaf,
> +				   struct arm_smmu_domain *smmu_domain)
> +{
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +	unsigned long start = iova, end = iova + size, num_pages = 0, tg = 0;
> +	size_t inv_range = granule;
> +	struct arm_smmu_cmdq_batch cmds = {};
> +	struct arm_smmu_cmdq_ent cmd = {
> +		.tlbi = {
> +			.leaf	= leaf,
> +		},
> +	};
> +
> +	if (!size)
> +		return;
> +
> +	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
> +		cmd.opcode	= CMDQ_OP_TLBI_NH_VA;
> +		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd.asid;
> +	} else {
> +		cmd.opcode	= CMDQ_OP_TLBI_S2_IPA;
> +		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
> +	}
> +
> +	if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
> +		/* Get the leaf page size */
> +		tg = __ffs(smmu_domain->domain.pgsize_bitmap);
> +
> +		/* Convert page size of 12,14,16 (log2) to 1,2,3 */
> +		cmd.tlbi.tg = (tg - 10) / 2;
> +
> +		/* Determine what level the granule is at */
> +		cmd.tlbi.ttl = 4 - ((ilog2(granule) - 3) / (tg - 3));
> +
> +		num_pages = size >> tg;
> +	}
> +
> +	while (iova < end) {
> +		if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
> +			/*
> +			 * On each iteration of the loop, the range is 5 bits
> +			 * worth of the aligned size remaining.
> +			 * The range in pages is:
> +			 *
> +			 * range = (num_pages & (0x1f << __ffs(num_pages)))
> +			 */
> +			unsigned long scale, num;
> +
> +			/* Determine the power of 2 multiple number of pages */
> +			scale = __ffs(num_pages);
> +			cmd.tlbi.scale = scale;
> +
> +			/* Determine how many chunks of 2^scale size we have */
> +			num = (num_pages >> scale) & CMDQ_TLBI_RANGE_NUM_MAX;
> +			cmd.tlbi.num = num - 1;
> +
> +			/* range is num * 2^scale * pgsize */
> +			inv_range = num << (scale + tg);
> +
> +			/* Clear out the lower order bits for the next iteration */
> +			num_pages -= num << scale;
> +		}
> +
> +		cmd.tlbi.addr = iova;
> +		arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
> +		iova += inv_range;
> +	}
> +	arm_smmu_cmdq_batch_submit(smmu, &cmds);
> +
> +	/*
> +	 * Unfortunately, this can't be leaf-only since we may have
> +	 * zapped an entire table.
> +	 */
> +	arm_smmu_atc_inv_domain(smmu_domain, 0, start, size);
> +}
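To illustrate the scale/num decomposition, here is a userspace sketch of how the loop above carves num_pages into range-invalidate chunks when ARM_SMMU_FEAT_RANGE_INV is set (CMDQ_TLBI_RANGE_NUM_MAX assumed to be 31, matching a 5-bit NUM field; helper names are local stand-ins):

```c
#include <assert.h>

/* Local stand-in for __ffs(): index of the least significant set bit. */
static unsigned int ffs_sketch(unsigned long x)
{
	unsigned int r = 0;

	while (!(x & 1)) {
		x >>= 1;
		r++;
	}
	return r;
}

/* Decompose num_pages into chunks of num * 2^scale pages, with num taken
 * from the low 5 bits above the lowest set bit, as the loop in
 * arm_smmu_tlb_inv_range() does. Returns how many commands are issued. */
static unsigned int tlbi_range_cmds(unsigned long num_pages)
{
	const unsigned long num_max = 31;	/* assumed CMDQ_TLBI_RANGE_NUM_MAX */
	unsigned int cmds = 0;

	while (num_pages) {
		unsigned int scale = ffs_sketch(num_pages);
		unsigned long num = (num_pages >> scale) & num_max;

		/* num is odd and non-zero, so the loop always terminates */
		num_pages -= num << scale;
		cmds++;
	}
	return cmds;
}
```

For instance, 35 pages split into a 3-page chunk plus a 32-page chunk (two commands), while any power-of-two count needs a single command.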
> +
> +static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather,
> +					 unsigned long iova, size_t granule,
> +					 void *cookie)
> +{
> +	struct arm_smmu_domain *smmu_domain = cookie;
> +	struct iommu_domain *domain = &smmu_domain->domain;
> +
> +	iommu_iotlb_gather_add_page(domain, gather, iova, granule);
> +}
> +
> +static void arm_smmu_tlb_inv_walk(unsigned long iova, size_t size,
> +				  size_t granule, void *cookie)
> +{
> +	arm_smmu_tlb_inv_range(iova, size, granule, false, cookie);
> +}
> +
> +static void arm_smmu_tlb_inv_leaf(unsigned long iova, size_t size,
> +				  size_t granule, void *cookie)
> +{
> +	arm_smmu_tlb_inv_range(iova, size, granule, true, cookie);
> +}
> +
> +static const struct iommu_flush_ops arm_smmu_flush_ops = {
> +	.tlb_flush_all	= arm_smmu_tlb_inv_context,
> +	.tlb_flush_walk = arm_smmu_tlb_inv_walk,
> +	.tlb_flush_leaf = arm_smmu_tlb_inv_leaf,
> +	.tlb_add_page	= arm_smmu_tlb_inv_page_nosync,
> +};
> +
> +/* IOMMU API */
> +static bool arm_smmu_capable(enum iommu_cap cap)
> +{
> +	switch (cap) {
> +	case IOMMU_CAP_CACHE_COHERENCY:
> +		return true;
> +	case IOMMU_CAP_NOEXEC:
> +		return true;
> +	default:
> +		return false;
> +	}
> +}
> +
> +static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
> +{
> +	struct arm_smmu_domain *smmu_domain;
> +
> +	if (type != IOMMU_DOMAIN_UNMANAGED &&
> +	    type != IOMMU_DOMAIN_DMA &&
> +	    type != IOMMU_DOMAIN_IDENTITY)
> +		return NULL;
> +
> +	/*
> +	 * Allocate the domain and initialise some of its data structures.
> +	 * We can't really do anything meaningful until we've added a
> +	 * master.
> +	 */
> +	smmu_domain = kzalloc(sizeof(*smmu_domain), GFP_KERNEL);
> +	if (!smmu_domain)
> +		return NULL;
> +
> +	if (type == IOMMU_DOMAIN_DMA &&
> +	    iommu_get_dma_cookie(&smmu_domain->domain)) {
> +		kfree(smmu_domain);
> +		return NULL;
> +	}
> +
> +	mutex_init(&smmu_domain->init_mutex);
> +	INIT_LIST_HEAD(&smmu_domain->devices);
> +	spin_lock_init(&smmu_domain->devices_lock);
> +
> +	return &smmu_domain->domain;
> +}
> +
> +static int arm_smmu_bitmap_alloc(unsigned long *map, int span)
> +{
> +	int idx, size = 1 << span;
> +
> +	do {
> +		idx = find_first_zero_bit(map, size);
> +		if (idx == size)
> +			return -ENOSPC;
> +	} while (test_and_set_bit(idx, map));
> +
> +	return idx;
> +}
> +
> +static void arm_smmu_bitmap_free(unsigned long *map, int idx)
> +{
> +	clear_bit(idx, map);
> +}
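The allocate/claim pattern above (find a zero bit, then atomically test-and-set, retrying if another CPU won the race) can be sketched single-threaded as follows; the single-word map and find_first_zero() helper are local simplifications, not kernel APIs:

```c
#include <assert.h>

/* Scan a single-word bitmap for the first clear bit, like a one-word
 * find_first_zero_bit(); returns size when the map is full. */
static int find_first_zero(unsigned long map, int size)
{
	int i;

	for (i = 0; i < size; i++)
		if (!(map & (1UL << i)))
			return i;
	return size;
}

/* Claim the first free ID in a map of 1 << span entries (span <= 6
 * assumed so one word suffices), mirroring arm_smmu_bitmap_alloc();
 * returns -1 where the driver returns -ENOSPC. */
static int bitmap_alloc_sketch(unsigned long *map, int span)
{
	int size = 1 << span;
	int idx = find_first_zero(*map, size);

	if (idx == size)
		return -1;
	*map |= 1UL << idx;	/* test_and_set_bit() in the driver */
	return idx;
}
```

In the driver the test_and_set_bit() may fail under concurrency, hence the do/while retry; the sketch omits that race.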
> +
> +static void arm_smmu_domain_free(struct iommu_domain *domain)
> +{
> +	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +
> +	iommu_put_dma_cookie(domain);
> +	free_io_pgtable_ops(smmu_domain->pgtbl_ops);
> +
> +	/* Free the CD and ASID, if we allocated them */
> +	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
> +		struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
> +
> +		if (cfg->cdcfg.cdtab)
> +			arm_smmu_free_cd_tables(smmu_domain);
> +		arm_smmu_free_asid(&cfg->cd);
> +	} else {
> +		struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
> +
> +		if (cfg->vmid)
> +			arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid);
> +	}
> +
> +	kfree(smmu_domain);
> +}
> +
> +static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
> +				       struct arm_smmu_master *master,
> +				       struct io_pgtable_cfg *pgtbl_cfg)
> +{
> +	int ret;
> +	u32 asid;
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +	struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
> +	typeof(&pgtbl_cfg->arm_lpae_s1_cfg.tcr) tcr = &pgtbl_cfg->arm_lpae_s1_cfg.tcr;
> +
> +	ret = xa_alloc(&asid_xa, &asid, &cfg->cd,
> +		       XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL);
> +	if (ret)
> +		return ret;
> +
> +	cfg->s1cdmax = master->ssid_bits;
> +
> +	ret = arm_smmu_alloc_cd_tables(smmu_domain);
> +	if (ret)
> +		goto out_free_asid;
> +
> +	cfg->cd.asid	= (u16)asid;
> +	cfg->cd.ttbr	= pgtbl_cfg->arm_lpae_s1_cfg.ttbr;
> +	cfg->cd.tcr	= FIELD_PREP(CTXDESC_CD_0_TCR_T0SZ, tcr->tsz) |
> +			  FIELD_PREP(CTXDESC_CD_0_TCR_TG0, tcr->tg) |
> +			  FIELD_PREP(CTXDESC_CD_0_TCR_IRGN0, tcr->irgn) |
> +			  FIELD_PREP(CTXDESC_CD_0_TCR_ORGN0, tcr->orgn) |
> +			  FIELD_PREP(CTXDESC_CD_0_TCR_SH0, tcr->sh) |
> +			  FIELD_PREP(CTXDESC_CD_0_TCR_IPS, tcr->ips) |
> +			  CTXDESC_CD_0_TCR_EPD1 | CTXDESC_CD_0_AA64;
> +	cfg->cd.mair	= pgtbl_cfg->arm_lpae_s1_cfg.mair;
> +
> +	/*
> +	 * Note that this will end up calling arm_smmu_sync_cd() before
> +	 * the master has been added to the devices list for this domain.
> +	 * This isn't an issue because the STE hasn't been installed yet.
> +	 */
> +	ret = arm_smmu_write_ctx_desc(smmu_domain, 0, &cfg->cd);
> +	if (ret)
> +		goto out_free_cd_tables;
> +
> +	return 0;
> +
> +out_free_cd_tables:
> +	arm_smmu_free_cd_tables(smmu_domain);
> +out_free_asid:
> +	arm_smmu_free_asid(&cfg->cd);
> +	return ret;
> +}
> +
> +static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
> +				       struct arm_smmu_master *master,
> +				       struct io_pgtable_cfg *pgtbl_cfg)
> +{
> +	int vmid;
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
> +	typeof(&pgtbl_cfg->arm_lpae_s2_cfg.vtcr) vtcr;
> +
> +	vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
> +	if (vmid < 0)
> +		return vmid;
> +
> +	vtcr = &pgtbl_cfg->arm_lpae_s2_cfg.vtcr;
> +	cfg->vmid	= (u16)vmid;
> +	cfg->vttbr	= pgtbl_cfg->arm_lpae_s2_cfg.vttbr;
> +	cfg->vtcr	= FIELD_PREP(STRTAB_STE_2_VTCR_S2T0SZ, vtcr->tsz) |
> +			  FIELD_PREP(STRTAB_STE_2_VTCR_S2SL0, vtcr->sl) |
> +			  FIELD_PREP(STRTAB_STE_2_VTCR_S2IR0, vtcr->irgn) |
> +			  FIELD_PREP(STRTAB_STE_2_VTCR_S2OR0, vtcr->orgn) |
> +			  FIELD_PREP(STRTAB_STE_2_VTCR_S2SH0, vtcr->sh) |
> +			  FIELD_PREP(STRTAB_STE_2_VTCR_S2TG, vtcr->tg) |
> +			  FIELD_PREP(STRTAB_STE_2_VTCR_S2PS, vtcr->ps);
> +	return 0;
> +}
> +
> +static int arm_smmu_domain_finalise(struct iommu_domain *domain,
> +				    struct arm_smmu_master *master)
> +{
> +	int ret;
> +	unsigned long ias, oas;
> +	enum io_pgtable_fmt fmt;
> +	struct io_pgtable_cfg pgtbl_cfg;
> +	struct io_pgtable_ops *pgtbl_ops;
> +	int (*finalise_stage_fn)(struct arm_smmu_domain *,
> +				 struct arm_smmu_master *,
> +				 struct io_pgtable_cfg *);
> +	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +
> +	if (domain->type == IOMMU_DOMAIN_IDENTITY) {
> +		smmu_domain->stage = ARM_SMMU_DOMAIN_BYPASS;
> +		return 0;
> +	}
> +
> +	/* Restrict the stage to what we can actually support */
> +	if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S1))
> +		smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
> +	if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S2))
> +		smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
> +
> +	switch (smmu_domain->stage) {
> +	case ARM_SMMU_DOMAIN_S1:
> +		ias = (smmu->features & ARM_SMMU_FEAT_VAX) ? 52 : 48;
> +		ias = min_t(unsigned long, ias, VA_BITS);
> +		oas = smmu->ias;
> +		fmt = ARM_64_LPAE_S1;
> +		finalise_stage_fn = arm_smmu_domain_finalise_s1;
> +		break;
> +	case ARM_SMMU_DOMAIN_NESTED:
> +	case ARM_SMMU_DOMAIN_S2:
> +		ias = smmu->ias;
> +		oas = smmu->oas;
> +		fmt = ARM_64_LPAE_S2;
> +		finalise_stage_fn = arm_smmu_domain_finalise_s2;
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> +
> +	pgtbl_cfg = (struct io_pgtable_cfg) {
> +		.pgsize_bitmap	= smmu->pgsize_bitmap,
> +		.ias		= ias,
> +		.oas		= oas,
> +		.coherent_walk	= smmu->features & ARM_SMMU_FEAT_COHERENCY,
> +		.tlb		= &arm_smmu_flush_ops,
> +		.iommu_dev	= smmu->dev,
> +	};
> +
> +	if (smmu_domain->non_strict)
> +		pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
> +
> +	pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain);
> +	if (!pgtbl_ops)
> +		return -ENOMEM;
> +
> +	domain->pgsize_bitmap = pgtbl_cfg.pgsize_bitmap;
> +	domain->geometry.aperture_end = (1UL << pgtbl_cfg.ias) - 1;
> +	domain->geometry.force_aperture = true;
> +
> +	ret = finalise_stage_fn(smmu_domain, master, &pgtbl_cfg);
> +	if (ret < 0) {
> +		free_io_pgtable_ops(pgtbl_ops);
> +		return ret;
> +	}
> +
> +	smmu_domain->pgtbl_ops = pgtbl_ops;
> +	return 0;
> +}
> +
> +static __le64 *arm_smmu_get_step_for_sid(struct arm_smmu_device *smmu, u32 sid)
> +{
> +	__le64 *step;
> +	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
> +
> +	if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) {
> +		struct arm_smmu_strtab_l1_desc *l1_desc;
> +		int idx;
> +
> +		/* Two-level walk */
> +		idx = (sid >> STRTAB_SPLIT) * STRTAB_L1_DESC_DWORDS;
> +		l1_desc = &cfg->l1_desc[idx];
> +		idx = (sid & ((1 << STRTAB_SPLIT) - 1)) * STRTAB_STE_DWORDS;
> +		step = &l1_desc->l2ptr[idx];
> +	} else {
> +		/* Simple linear lookup */
> +		step = &cfg->strtab[sid * STRTAB_STE_DWORDS];
> +	}
> +
> +	return step;
> +}
> +
> +static void arm_smmu_install_ste_for_dev(struct arm_smmu_master *master)
> +{
> +	int i, j;
> +	struct arm_smmu_device *smmu = master->smmu;
> +
> +	for (i = 0; i < master->num_sids; ++i) {
> +		u32 sid = master->sids[i];
> +		__le64 *step = arm_smmu_get_step_for_sid(smmu, sid);
> +
> +		/* Bridged PCI devices may end up with duplicated IDs */
> +		for (j = 0; j < i; j++)
> +			if (master->sids[j] == sid)
> +				break;
> +		if (j < i)
> +			continue;
> +
> +		arm_smmu_write_strtab_ent(master, sid, step);
> +	}
> +}
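The duplicate-SID filter above programs an STE only for the first occurrence of each StreamID, since bridged PCI devices can present the same ID more than once. A minimal sketch of that check (hypothetical helper name):

```c
#include <assert.h>
#include <stddef.h>

/* Returns 1 if sids[i] has not appeared at any earlier index, i.e. the
 * STE for it should be written; 0 if it duplicates an earlier entry,
 * matching the inner j-loop in arm_smmu_install_ste_for_dev(). */
static int sid_is_first_occurrence(const unsigned int *sids, size_t i)
{
	size_t j;

	for (j = 0; j < i; j++)
		if (sids[j] == sids[i])
			return 0;
	return 1;
}
```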
> +
> +static bool arm_smmu_ats_supported(struct arm_smmu_master *master)
> +{
> +	struct device *dev = master->dev;
> +	struct arm_smmu_device *smmu = master->smmu;
> +	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> +
> +	if (!(smmu->features & ARM_SMMU_FEAT_ATS))
> +		return false;
> +
> +	if (!(fwspec->flags & IOMMU_FWSPEC_PCI_RC_ATS))
> +		return false;
> +
> +	return dev_is_pci(dev) && pci_ats_supported(to_pci_dev(dev));
> +}
> +
> +static void arm_smmu_enable_ats(struct arm_smmu_master *master)
> +{
> +	size_t stu;
> +	struct pci_dev *pdev;
> +	struct arm_smmu_device *smmu = master->smmu;
> +	struct arm_smmu_domain *smmu_domain = master->domain;
> +
> +	/* Don't enable ATS at the endpoint if it's not enabled in the STE */
> +	if (!master->ats_enabled)
> +		return;
> +
> +	/* Smallest Translation Unit: log2 of the smallest supported granule */
> +	stu = __ffs(smmu->pgsize_bitmap);
> +	pdev = to_pci_dev(master->dev);
> +
> +	atomic_inc(&smmu_domain->nr_ats_masters);
> +	arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0);
> +	if (pci_enable_ats(pdev, stu))
> +		dev_err(master->dev, "Failed to enable ATS (STU %zu)\n", stu);
> +}
> +
> +static void arm_smmu_disable_ats(struct arm_smmu_master *master)
> +{
> +	struct arm_smmu_domain *smmu_domain = master->domain;
> +
> +	if (!master->ats_enabled)
> +		return;
> +
> +	pci_disable_ats(to_pci_dev(master->dev));
> +	/*
> +	 * Ensure ATS is disabled at the endpoint before we issue the
> +	 * ATC invalidation via the SMMU.
> +	 */
> +	wmb();
> +	arm_smmu_atc_inv_master(master);
> +	atomic_dec(&smmu_domain->nr_ats_masters);
> +}
> +
> +static int arm_smmu_enable_pasid(struct arm_smmu_master *master)
> +{
> +	int ret;
> +	int features;
> +	int num_pasids;
> +	struct pci_dev *pdev;
> +
> +	if (!dev_is_pci(master->dev))
> +		return -ENODEV;
> +
> +	pdev = to_pci_dev(master->dev);
> +
> +	features = pci_pasid_features(pdev);
> +	if (features < 0)
> +		return features;
> +
> +	num_pasids = pci_max_pasids(pdev);
> +	if (num_pasids <= 0)
> +		return num_pasids;
> +
> +	ret = pci_enable_pasid(pdev, features);
> +	if (ret) {
> +		dev_err(&pdev->dev, "Failed to enable PASID\n");
> +		return ret;
> +	}
> +
> +	master->ssid_bits = min_t(u8, ilog2(num_pasids),
> +				  master->smmu->ssid_bits);
> +	return 0;
> +}
> +
> +static void arm_smmu_disable_pasid(struct arm_smmu_master *master)
> +{
> +	struct pci_dev *pdev;
> +
> +	if (!dev_is_pci(master->dev))
> +		return;
> +
> +	pdev = to_pci_dev(master->dev);
> +
> +	if (!pdev->pasid_enabled)
> +		return;
> +
> +	master->ssid_bits = 0;
> +	pci_disable_pasid(pdev);
> +}
> +
> +static void arm_smmu_detach_dev(struct arm_smmu_master *master)
> +{
> +	unsigned long flags;
> +	struct arm_smmu_domain *smmu_domain = master->domain;
> +
> +	if (!smmu_domain)
> +		return;
> +
> +	arm_smmu_disable_ats(master);
> +
> +	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
> +	list_del(&master->domain_head);
> +	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
> +
> +	master->domain = NULL;
> +	master->ats_enabled = false;
> +	arm_smmu_install_ste_for_dev(master);
> +}
> +
> +static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
> +{
> +	int ret = 0;
> +	unsigned long flags;
> +	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> +	struct arm_smmu_device *smmu;
> +	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> +	struct arm_smmu_master *master;
> +
> +	if (!fwspec)
> +		return -ENOENT;
> +
> +	master = dev_iommu_priv_get(dev);
> +	smmu = master->smmu;
> +
> +	arm_smmu_detach_dev(master);
> +
> +	mutex_lock(&smmu_domain->init_mutex);
> +
> +	if (!smmu_domain->smmu) {
> +		smmu_domain->smmu = smmu;
> +		ret = arm_smmu_domain_finalise(domain, master);
> +		if (ret) {
> +			smmu_domain->smmu = NULL;
> +			goto out_unlock;
> +		}
> +	} else if (smmu_domain->smmu != smmu) {
> +		dev_err(dev,
> +			"cannot attach to SMMU %s (upstream of %s)\n",
> +			dev_name(smmu_domain->smmu->dev),
> +			dev_name(smmu->dev));
> +		ret = -ENXIO;
> +		goto out_unlock;
> +	} else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1 &&
> +		   master->ssid_bits != smmu_domain->s1_cfg.s1cdmax) {
> +		dev_err(dev,
> +			"cannot attach to incompatible domain (%u SSID bits != %u)\n",
> +			smmu_domain->s1_cfg.s1cdmax, master->ssid_bits);
> +		ret = -EINVAL;
> +		goto out_unlock;
> +	}
> +
> +	master->domain = smmu_domain;
> +
> +	if (smmu_domain->stage != ARM_SMMU_DOMAIN_BYPASS)
> +		master->ats_enabled = arm_smmu_ats_supported(master);
> +
> +	arm_smmu_install_ste_for_dev(master);
> +
> +	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
> +	list_add(&master->domain_head, &smmu_domain->devices);
> +	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
> +
> +	arm_smmu_enable_ats(master);
> +
> +out_unlock:
> +	mutex_unlock(&smmu_domain->init_mutex);
> +	return ret;
> +}
> +
> +static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
> +			phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
> +{
> +	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
> +
> +	if (!ops)
> +		return -ENODEV;
> +
> +	return ops->map(ops, iova, paddr, size, prot, gfp);
> +}
> +
> +static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
> +			     size_t size, struct iommu_iotlb_gather *gather)
> +{
> +	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> +	struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
> +
> +	if (!ops)
> +		return 0;
> +
> +	return ops->unmap(ops, iova, size, gather);
> +}
> +
> +static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain)
> +{
> +	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> +
> +	if (smmu_domain->smmu)
> +		arm_smmu_tlb_inv_context(smmu_domain);
> +}
> +
> +static void arm_smmu_iotlb_sync(struct iommu_domain *domain,
> +				struct iommu_iotlb_gather *gather)
> +{
> +	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> +
> +	arm_smmu_tlb_inv_range(gather->start, gather->end - gather->start,
> +			       gather->pgsize, true, smmu_domain);
> +}
> +
> +static phys_addr_t
> +arm_smmu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
> +{
> +	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
> +
> +	if (domain->type == IOMMU_DOMAIN_IDENTITY)
> +		return iova;
> +
> +	if (!ops)
> +		return 0;
> +
> +	return ops->iova_to_phys(ops, iova);
> +}
> +
> +static struct platform_driver arm_smmu_driver;
> +
> +static
> +struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
> +{
> +	struct device *dev = driver_find_device_by_fwnode(&arm_smmu_driver.driver,
> +							  fwnode);
> +	put_device(dev);
> +	return dev ? dev_get_drvdata(dev) : NULL;
> +}
> +
> +static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
> +{
> +	unsigned long limit = smmu->strtab_cfg.num_l1_ents;
> +
> +	if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB)
> +		limit *= 1UL << STRTAB_SPLIT;
> +
> +	return sid < limit;
> +}
> +
> +static struct iommu_ops arm_smmu_ops;
> +
> +static struct iommu_device *arm_smmu_probe_device(struct device *dev)
> +{
> +	int i, ret;
> +	struct arm_smmu_device *smmu;
> +	struct arm_smmu_master *master;
> +	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> +
> +	if (!fwspec || fwspec->ops != &arm_smmu_ops)
> +		return ERR_PTR(-ENODEV);
> +
> +	if (WARN_ON_ONCE(dev_iommu_priv_get(dev)))
> +		return ERR_PTR(-EBUSY);
> +
> +	smmu = arm_smmu_get_by_fwnode(fwspec->iommu_fwnode);
> +	if (!smmu)
> +		return ERR_PTR(-ENODEV);
> +
> +	master = kzalloc(sizeof(*master), GFP_KERNEL);
> +	if (!master)
> +		return ERR_PTR(-ENOMEM);
> +
> +	master->dev = dev;
> +	master->smmu = smmu;
> +	master->sids = fwspec->ids;
> +	master->num_sids = fwspec->num_ids;
> +	dev_iommu_priv_set(dev, master);
> +
> +	/* Check the SIDs are in range of the SMMU and our stream table */
> +	for (i = 0; i < master->num_sids; i++) {
> +		u32 sid = master->sids[i];
> +
> +		if (!arm_smmu_sid_in_range(smmu, sid)) {
> +			ret = -ERANGE;
> +			goto err_free_master;
> +		}
> +
> +		/* Ensure l2 strtab is initialised */
> +		if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) {
> +			ret = arm_smmu_init_l2_strtab(smmu, sid);
> +			if (ret)
> +				goto err_free_master;
> +		}
> +	}
> +
> +	master->ssid_bits = min(smmu->ssid_bits, fwspec->num_pasid_bits);
> +
> +	/*
> +	 * Note that PASID must be enabled before ATS is enabled, and must be
> +	 * disabled only after ATS is disabled:
> +	 * PCI Express Base 4.0r1.0 - 10.5.1.3 ATS Control Register
> +	 *
> +	 *   Behavior is undefined if this bit is Set and the value of the PASID
> +	 *   Enable, Execute Requested Enable, or Privileged Mode Requested bits
> +	 *   are changed.
> +	 */
> +	arm_smmu_enable_pasid(master);
> +
> +	if (!(smmu->features & ARM_SMMU_FEAT_2_LVL_CDTAB))
> +		master->ssid_bits = min_t(u8, master->ssid_bits,
> +					  CTXDESC_LINEAR_CDMAX);
> +
> +	return &smmu->iommu;
> +
> +err_free_master:
> +	kfree(master);
> +	dev_iommu_priv_set(dev, NULL);
> +	return ERR_PTR(ret);
> +}
> +
> +static void arm_smmu_release_device(struct device *dev)
> +{
> +	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> +	struct arm_smmu_master *master;
> +
> +	if (!fwspec || fwspec->ops != &arm_smmu_ops)
> +		return;
> +
> +	master = dev_iommu_priv_get(dev);
> +	arm_smmu_detach_dev(master);
> +	arm_smmu_disable_pasid(master);
> +	kfree(master);
> +	iommu_fwspec_free(dev);
> +}
> +
> +static struct iommu_group *arm_smmu_device_group(struct device *dev)
> +{
> +	struct iommu_group *group;
> +
> +	/*
> +	 * We don't support devices sharing stream IDs other than PCI RID
> +	 * aliases, since the necessary ID-to-device lookup becomes rather
> +	 * impractical given a potential sparse 32-bit stream ID space.
> +	 */
> +	if (dev_is_pci(dev))
> +		group = pci_device_group(dev);
> +	else
> +		group = generic_device_group(dev);
> +
> +	return group;
> +}
> +
> +static int arm_smmu_domain_get_attr(struct iommu_domain *domain,
> +				    enum iommu_attr attr, void *data)
> +{
> +	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> +
> +	switch (domain->type) {
> +	case IOMMU_DOMAIN_UNMANAGED:
> +		switch (attr) {
> +		case DOMAIN_ATTR_NESTING:
> +			*(int *)data = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED);
> +			return 0;
> +		default:
> +			return -ENODEV;
> +		}
> +		break;
> +	case IOMMU_DOMAIN_DMA:
> +		switch (attr) {
> +		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
> +			*(int *)data = smmu_domain->non_strict;
> +			return 0;
> +		default:
> +			return -ENODEV;
> +		}
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> +}
> +
> +static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
> +				    enum iommu_attr attr, void *data)
> +{
> +	int ret = 0;
> +	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> +
> +	mutex_lock(&smmu_domain->init_mutex);
> +
> +	switch (domain->type) {
> +	case IOMMU_DOMAIN_UNMANAGED:
> +		switch (attr) {
> +		case DOMAIN_ATTR_NESTING:
> +			if (smmu_domain->smmu) {
> +				ret = -EPERM;
> +				goto out_unlock;
> +			}
> +
> +			if (*(int *)data)
> +				smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED;
> +			else
> +				smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
> +			break;
> +		default:
> +			ret = -ENODEV;
> +		}
> +		break;
> +	case IOMMU_DOMAIN_DMA:
> +		switch (attr) {
> +		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
> +			smmu_domain->non_strict = *(int *)data;
> +			break;
> +		default:
> +			ret = -ENODEV;
> +		}
> +		break;
> +	default:
> +		ret = -EINVAL;
> +	}
> +
> +out_unlock:
> +	mutex_unlock(&smmu_domain->init_mutex);
> +	return ret;
> +}
> +
> +static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args)
> +{
> +	return iommu_fwspec_add_ids(dev, args->args, 1);
> +}
> +
> +static void arm_smmu_get_resv_regions(struct device *dev,
> +				      struct list_head *head)
> +{
> +	struct iommu_resv_region *region;
> +	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> +
> +	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
> +					 prot, IOMMU_RESV_SW_MSI);
> +	if (!region)
> +		return;
> +
> +	list_add_tail(&region->list, head);
> +
> +	iommu_dma_get_resv_regions(dev, head);
> +}
> +
> +static struct iommu_ops arm_smmu_ops = {
> +	.capable		= arm_smmu_capable,
> +	.domain_alloc		= arm_smmu_domain_alloc,
> +	.domain_free		= arm_smmu_domain_free,
> +	.attach_dev		= arm_smmu_attach_dev,
> +	.map			= arm_smmu_map,
> +	.unmap			= arm_smmu_unmap,
> +	.flush_iotlb_all	= arm_smmu_flush_iotlb_all,
> +	.iotlb_sync		= arm_smmu_iotlb_sync,
> +	.iova_to_phys		= arm_smmu_iova_to_phys,
> +	.probe_device		= arm_smmu_probe_device,
> +	.release_device		= arm_smmu_release_device,
> +	.device_group		= arm_smmu_device_group,
> +	.domain_get_attr	= arm_smmu_domain_get_attr,
> +	.domain_set_attr	= arm_smmu_domain_set_attr,
> +	.of_xlate		= arm_smmu_of_xlate,
> +	.get_resv_regions	= arm_smmu_get_resv_regions,
> +	.put_resv_regions	= generic_iommu_put_resv_regions,
> +	.pgsize_bitmap		= -1UL, /* Restricted during device attach */
> +};
> +
> +/* Probing and initialisation functions */
> +static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
> +				   struct arm_smmu_queue *q,
> +				   unsigned long prod_off,
> +				   unsigned long cons_off,
> +				   size_t dwords, const char *name)
> +{
> +	size_t qsz;
> +
> +	do {
> +		qsz = ((1 << q->llq.max_n_shift) * dwords) << 3;
> +		q->base = dmam_alloc_coherent(smmu->dev, qsz, &q->base_dma,
> +					      GFP_KERNEL);
> +		if (q->base || qsz < PAGE_SIZE)
> +			break;
> +
> +		q->llq.max_n_shift--;
> +	} while (1);
> +
> +	if (!q->base) {
> +		dev_err(smmu->dev,
> +			"failed to allocate queue (0x%zx bytes) for %s\n",
> +			qsz, name);
> +		return -ENOMEM;
> +	}
> +
> +	if (!WARN_ON(q->base_dma & (qsz - 1))) {
> +		dev_info(smmu->dev, "allocated %u entries for %s\n",
> +			 1 << q->llq.max_n_shift, name);
> +	}
> +
> +	q->prod_reg	= arm_smmu_page1_fixup(prod_off, smmu);
> +	q->cons_reg	= arm_smmu_page1_fixup(cons_off, smmu);
> +	q->ent_dwords	= dwords;
> +
> +	q->q_base  = Q_BASE_RWA;
> +	q->q_base |= q->base_dma & Q_BASE_ADDR_MASK;
> +	q->q_base |= FIELD_PREP(Q_BASE_LOG2SIZE, q->llq.max_n_shift);
> +
> +	q->llq.prod = q->llq.cons = 0;
> +	return 0;
> +}
> +
> +static void arm_smmu_cmdq_free_bitmap(void *data)
> +{
> +	unsigned long *bitmap = data;
> +	bitmap_free(bitmap);
> +}
> +
> +static int arm_smmu_cmdq_init(struct arm_smmu_device *smmu)
> +{
> +	int ret = 0;
> +	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
> +	unsigned int nents = 1 << cmdq->q.llq.max_n_shift;
> +	atomic_long_t *bitmap;
> +
> +	atomic_set(&cmdq->owner_prod, 0);
> +	atomic_set(&cmdq->lock, 0);
> +
> +	bitmap = (atomic_long_t *)bitmap_zalloc(nents, GFP_KERNEL);
> +	if (!bitmap) {
> +		dev_err(smmu->dev, "failed to allocate cmdq bitmap\n");
> +		ret = -ENOMEM;
> +	} else {
> +		cmdq->valid_map = bitmap;
> +		devm_add_action(smmu->dev, arm_smmu_cmdq_free_bitmap, bitmap);
> +	}
> +
> +	return ret;
> +}
> +
> +static int arm_smmu_init_queues(struct arm_smmu_device *smmu)
> +{
> +	int ret;
> +
> +	/* cmdq */
> +	ret = arm_smmu_init_one_queue(smmu, &smmu->cmdq.q, ARM_SMMU_CMDQ_PROD,
> +				      ARM_SMMU_CMDQ_CONS, CMDQ_ENT_DWORDS,
> +				      "cmdq");
> +	if (ret)
> +		return ret;
> +
> +	ret = arm_smmu_cmdq_init(smmu);
> +	if (ret)
> +		return ret;
> +
> +	/* evtq */
> +	ret = arm_smmu_init_one_queue(smmu, &smmu->evtq.q, ARM_SMMU_EVTQ_PROD,
> +				      ARM_SMMU_EVTQ_CONS, EVTQ_ENT_DWORDS,
> +				      "evtq");
> +	if (ret)
> +		return ret;
> +
> +	/* priq */
> +	if (!(smmu->features & ARM_SMMU_FEAT_PRI))
> +		return 0;
> +
> +	return arm_smmu_init_one_queue(smmu, &smmu->priq.q, ARM_SMMU_PRIQ_PROD,
> +				       ARM_SMMU_PRIQ_CONS, PRIQ_ENT_DWORDS,
> +				       "priq");
> +}
> +
> +static int arm_smmu_init_l1_strtab(struct arm_smmu_device *smmu)
> +{
> +	unsigned int i;
> +	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
> +	size_t size = sizeof(*cfg->l1_desc) * cfg->num_l1_ents;
> +	void *strtab = smmu->strtab_cfg.strtab;
> +
> +	cfg->l1_desc = devm_kzalloc(smmu->dev, size, GFP_KERNEL);
> +	if (!cfg->l1_desc) {
> +		dev_err(smmu->dev, "failed to allocate l1 stream table desc\n");
> +		return -ENOMEM;
> +	}
> +
> +	for (i = 0; i < cfg->num_l1_ents; ++i) {
> +		arm_smmu_write_strtab_l1_desc(strtab, &cfg->l1_desc[i]);
> +		strtab += STRTAB_L1_DESC_DWORDS << 3;
> +	}
> +
> +	return 0;
> +}
> +
> +static int arm_smmu_init_strtab_2lvl(struct arm_smmu_device *smmu)
> +{
> +	void *strtab;
> +	u64 reg;
> +	u32 size, l1size;
> +	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
> +
> +	/* Calculate the L1 size, capped to the SIDSIZE. */
> +	size = STRTAB_L1_SZ_SHIFT - (ilog2(STRTAB_L1_DESC_DWORDS) + 3);
> +	size = min(size, smmu->sid_bits - STRTAB_SPLIT);
> +	cfg->num_l1_ents = 1 << size;
> +
> +	size += STRTAB_SPLIT;
> +	if (size < smmu->sid_bits)
> +		dev_warn(smmu->dev,
> +			 "2-level strtab only covers %u/%u bits of SID\n",
> +			 size, smmu->sid_bits);
> +
> +	l1size = cfg->num_l1_ents * (STRTAB_L1_DESC_DWORDS << 3);
> +	strtab = dmam_alloc_coherent(smmu->dev, l1size, &cfg->strtab_dma,
> +				     GFP_KERNEL);
> +	if (!strtab) {
> +		dev_err(smmu->dev,
> +			"failed to allocate l1 stream table (%u bytes)\n",
> +			l1size);
> +		return -ENOMEM;
> +	}
> +	cfg->strtab = strtab;
> +
> +	/* Configure strtab_base_cfg for 2 levels */
> +	reg  = FIELD_PREP(STRTAB_BASE_CFG_FMT, STRTAB_BASE_CFG_FMT_2LVL);
> +	reg |= FIELD_PREP(STRTAB_BASE_CFG_LOG2SIZE, size);
> +	reg |= FIELD_PREP(STRTAB_BASE_CFG_SPLIT, STRTAB_SPLIT);
> +	cfg->strtab_base_cfg = reg;
> +
> +	return arm_smmu_init_l1_strtab(smmu);
> +}
> +
> +static int arm_smmu_init_strtab_linear(struct arm_smmu_device *smmu)
> +{
> +	void *strtab;
> +	u64 reg;
> +	u32 size;
> +	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
> +
> +	size = (1 << smmu->sid_bits) * (STRTAB_STE_DWORDS << 3);
> +	strtab = dmam_alloc_coherent(smmu->dev, size, &cfg->strtab_dma,
> +				     GFP_KERNEL);
> +	if (!strtab) {
> +		dev_err(smmu->dev,
> +			"failed to allocate linear stream table (%u bytes)\n",
> +			size);
> +		return -ENOMEM;
> +	}
> +	cfg->strtab = strtab;
> +	cfg->num_l1_ents = 1 << smmu->sid_bits;
> +
> +	/* Configure strtab_base_cfg for a linear table covering all SIDs */
> +	reg  = FIELD_PREP(STRTAB_BASE_CFG_FMT, STRTAB_BASE_CFG_FMT_LINEAR);
> +	reg |= FIELD_PREP(STRTAB_BASE_CFG_LOG2SIZE, smmu->sid_bits);
> +	cfg->strtab_base_cfg = reg;
> +
> +	arm_smmu_init_bypass_stes(strtab, cfg->num_l1_ents);
> +	return 0;
> +}
> +
> +static int arm_smmu_init_strtab(struct arm_smmu_device *smmu)
> +{
> +	u64 reg;
> +	int ret;
> +
> +	if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB)
> +		ret = arm_smmu_init_strtab_2lvl(smmu);
> +	else
> +		ret = arm_smmu_init_strtab_linear(smmu);
> +
> +	if (ret)
> +		return ret;
> +
> +	/* Set the strtab base address */
> +	reg  = smmu->strtab_cfg.strtab_dma & STRTAB_BASE_ADDR_MASK;
> +	reg |= STRTAB_BASE_RA;
> +	smmu->strtab_cfg.strtab_base = reg;
> +
> +	/* Allocate the first VMID for stage-2 bypass STEs */
> +	set_bit(0, smmu->vmid_map);
> +	return 0;
> +}
> +
> +static int arm_smmu_init_structures(struct arm_smmu_device *smmu)
> +{
> +	int ret;
> +
> +	ret = arm_smmu_init_queues(smmu);
> +	if (ret)
> +		return ret;
> +
> +	return arm_smmu_init_strtab(smmu);
> +}
> +
> +static int arm_smmu_write_reg_sync(struct arm_smmu_device *smmu, u32 val,
> +				   unsigned int reg_off, unsigned int ack_off)
> +{
> +	u32 reg;
> +
> +	writel_relaxed(val, smmu->base + reg_off);
> +	return readl_relaxed_poll_timeout(smmu->base + ack_off, reg, reg == val,
> +					  1, ARM_SMMU_POLL_TIMEOUT_US);
> +}
> +
> +/* GBPA is "special" */
> +static int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr)
> +{
> +	int ret;
> +	u32 reg, __iomem *gbpa = smmu->base + ARM_SMMU_GBPA;
> +
> +	ret = readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE),
> +					 1, ARM_SMMU_POLL_TIMEOUT_US);
> +	if (ret)
> +		return ret;
> +
> +	reg &= ~clr;
> +	reg |= set;
> +	writel_relaxed(reg | GBPA_UPDATE, gbpa);
> +	ret = readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE),
> +					 1, ARM_SMMU_POLL_TIMEOUT_US);
> +
> +	if (ret)
> +		dev_err(smmu->dev, "GBPA not responding to update\n");
> +	return ret;
> +}
> +
> +static void arm_smmu_free_msis(void *data)
> +{
> +	struct device *dev = data;
> +	platform_msi_domain_free_irqs(dev);
> +}
> +
> +static void arm_smmu_write_msi_msg(struct msi_desc *desc, struct msi_msg *msg)
> +{
> +	phys_addr_t doorbell;
> +	struct device *dev = msi_desc_to_dev(desc);
> +	struct arm_smmu_device *smmu = dev_get_drvdata(dev);
> +	phys_addr_t *cfg = arm_smmu_msi_cfg[desc->platform.msi_index];
> +
> +	doorbell = (((u64)msg->address_hi) << 32) | msg->address_lo;
> +	doorbell &= MSI_CFG0_ADDR_MASK;
> +
> +	writeq_relaxed(doorbell, smmu->base + cfg[0]);
> +	writel_relaxed(msg->data, smmu->base + cfg[1]);
> +	writel_relaxed(ARM_SMMU_MEMATTR_DEVICE_nGnRE, smmu->base + cfg[2]);
> +}
> +
> +static void arm_smmu_setup_msis(struct arm_smmu_device *smmu)
> +{
> +	struct msi_desc *desc;
> +	int ret, nvec = ARM_SMMU_MAX_MSIS;
> +	struct device *dev = smmu->dev;
> +
> +	/* Clear the MSI address regs */
> +	writeq_relaxed(0, smmu->base + ARM_SMMU_GERROR_IRQ_CFG0);
> +	writeq_relaxed(0, smmu->base + ARM_SMMU_EVTQ_IRQ_CFG0);
> +
> +	if (smmu->features & ARM_SMMU_FEAT_PRI)
> +		writeq_relaxed(0, smmu->base + ARM_SMMU_PRIQ_IRQ_CFG0);
> +	else
> +		nvec--;
> +
> +	if (!(smmu->features & ARM_SMMU_FEAT_MSI))
> +		return;
> +
> +	if (!dev->msi_domain) {
> +		dev_info(smmu->dev, "msi_domain absent - falling back to wired irqs\n");
> +		return;
> +	}
> +
> +	/* Allocate MSIs for evtq, gerror and priq. Ignore cmdq */
> +	ret = platform_msi_domain_alloc_irqs(dev, nvec, arm_smmu_write_msi_msg);
> +	if (ret) {
> +		dev_warn(dev, "failed to allocate MSIs - falling back to wired irqs\n");
> +		return;
> +	}
> +
> +	for_each_msi_entry(desc, dev) {
> +		switch (desc->platform.msi_index) {
> +		case EVTQ_MSI_INDEX:
> +			smmu->evtq.q.irq = desc->irq;
> +			break;
> +		case GERROR_MSI_INDEX:
> +			smmu->gerr_irq = desc->irq;
> +			break;
> +		case PRIQ_MSI_INDEX:
> +			smmu->priq.q.irq = desc->irq;
> +			break;
> +		default:	/* Unknown */
> +			continue;
> +		}
> +	}
> +
> +	/* Add callback to free MSIs on teardown */
> +	devm_add_action(dev, arm_smmu_free_msis, dev);
> +}
> +
> +static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
> +{
> +	int irq, ret;
> +
> +	arm_smmu_setup_msis(smmu);
> +
> +	/* Request interrupt lines */
> +	irq = smmu->evtq.q.irq;
> +	if (irq) {
> +		ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
> +						arm_smmu_evtq_thread,
> +						IRQF_ONESHOT,
> +						"arm-smmu-v3-evtq", smmu);
> +		if (ret < 0)
> +			dev_warn(smmu->dev, "failed to enable evtq irq\n");
> +	} else {
> +		dev_warn(smmu->dev, "no evtq irq - events will not be reported!\n");
> +	}
> +
> +	irq = smmu->gerr_irq;
> +	if (irq) {
> +		ret = devm_request_irq(smmu->dev, irq, arm_smmu_gerror_handler,
> +				       0, "arm-smmu-v3-gerror", smmu);
> +		if (ret < 0)
> +			dev_warn(smmu->dev, "failed to enable gerror irq\n");
> +	} else {
> +		dev_warn(smmu->dev, "no gerr irq - errors will not be reported!\n");
> +	}
> +
> +	if (smmu->features & ARM_SMMU_FEAT_PRI) {
> +		irq = smmu->priq.q.irq;
> +		if (irq) {
> +			ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
> +							arm_smmu_priq_thread,
> +							IRQF_ONESHOT,
> +							"arm-smmu-v3-priq",
> +							smmu);
> +			if (ret < 0)
> +				dev_warn(smmu->dev,
> +					 "failed to enable priq irq\n");
> +		} else {
> +			dev_warn(smmu->dev, "no priq irq - PRI will be broken\n");
> +		}
> +	}
> +}
> +
> +static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
> +{
> +	int ret, irq;
> +	u32 irqen_flags = IRQ_CTRL_EVTQ_IRQEN | IRQ_CTRL_GERROR_IRQEN;
> +
> +	/* Disable IRQs first */
> +	ret = arm_smmu_write_reg_sync(smmu, 0, ARM_SMMU_IRQ_CTRL,
> +				      ARM_SMMU_IRQ_CTRLACK);
> +	if (ret) {
> +		dev_err(smmu->dev, "failed to disable irqs\n");
> +		return ret;
> +	}
> +
> +	irq = smmu->combined_irq;
> +	if (irq) {
> +		/*
> +		 * Cavium ThunderX2 implementation doesn't support unique irq
> +		 * lines. Use a single irq line for all the SMMUv3 interrupts.
> +		 */
> +		ret = devm_request_threaded_irq(smmu->dev, irq,
> +					arm_smmu_combined_irq_handler,
> +					arm_smmu_combined_irq_thread,
> +					IRQF_ONESHOT,
> +					"arm-smmu-v3-combined-irq", smmu);
> +		if (ret < 0)
> +			dev_warn(smmu->dev, "failed to enable combined irq\n");
> +	} else {
> +		arm_smmu_setup_unique_irqs(smmu);
> +	}
> +
> +	if (smmu->features & ARM_SMMU_FEAT_PRI)
> +		irqen_flags |= IRQ_CTRL_PRIQ_IRQEN;
> +
> +	/* Enable interrupt generation on the SMMU */
> +	ret = arm_smmu_write_reg_sync(smmu, irqen_flags,
> +				      ARM_SMMU_IRQ_CTRL, ARM_SMMU_IRQ_CTRLACK);
> +	if (ret)
> +		dev_warn(smmu->dev, "failed to enable irqs\n");
> +
> +	return 0;
> +}
> +
> +static int arm_smmu_device_disable(struct arm_smmu_device *smmu)
> +{
> +	int ret;
> +
> +	ret = arm_smmu_write_reg_sync(smmu, 0, ARM_SMMU_CR0, ARM_SMMU_CR0ACK);
> +	if (ret)
> +		dev_err(smmu->dev, "failed to clear cr0\n");
> +
> +	return ret;
> +}
> +
> +static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
> +{
> +	int ret;
> +	u32 reg, enables;
> +	struct arm_smmu_cmdq_ent cmd;
> +
> +	/* Clear CR0 and sync (disables SMMU and queue processing) */
> +	reg = readl_relaxed(smmu->base + ARM_SMMU_CR0);
> +	if (reg & CR0_SMMUEN) {
> +		dev_warn(smmu->dev, "SMMU currently enabled! Resetting...\n");
> +		WARN_ON(is_kdump_kernel() && !disable_bypass);
> +		arm_smmu_update_gbpa(smmu, GBPA_ABORT, 0);
> +	}
> +
> +	ret = arm_smmu_device_disable(smmu);
> +	if (ret)
> +		return ret;
> +
> +	/* CR1 (table and queue memory attributes) */
> +	reg = FIELD_PREP(CR1_TABLE_SH, ARM_SMMU_SH_ISH) |
> +	      FIELD_PREP(CR1_TABLE_OC, CR1_CACHE_WB) |
> +	      FIELD_PREP(CR1_TABLE_IC, CR1_CACHE_WB) |
> +	      FIELD_PREP(CR1_QUEUE_SH, ARM_SMMU_SH_ISH) |
> +	      FIELD_PREP(CR1_QUEUE_OC, CR1_CACHE_WB) |
> +	      FIELD_PREP(CR1_QUEUE_IC, CR1_CACHE_WB);
> +	writel_relaxed(reg, smmu->base + ARM_SMMU_CR1);
> +
> +	/* CR2 (miscellaneous configuration) */
> +	reg = CR2_PTM | CR2_RECINVSID | CR2_E2H;
> +	writel_relaxed(reg, smmu->base + ARM_SMMU_CR2);
> +
> +	/* Stream table */
> +	writeq_relaxed(smmu->strtab_cfg.strtab_base,
> +		       smmu->base + ARM_SMMU_STRTAB_BASE);
> +	writel_relaxed(smmu->strtab_cfg.strtab_base_cfg,
> +		       smmu->base + ARM_SMMU_STRTAB_BASE_CFG);
> +
> +	/* Command queue */
> +	writeq_relaxed(smmu->cmdq.q.q_base, smmu->base + ARM_SMMU_CMDQ_BASE);
> +	writel_relaxed(smmu->cmdq.q.llq.prod, smmu->base + ARM_SMMU_CMDQ_PROD);
> +	writel_relaxed(smmu->cmdq.q.llq.cons, smmu->base + ARM_SMMU_CMDQ_CONS);
> +
> +	enables = CR0_CMDQEN;
> +	ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
> +				      ARM_SMMU_CR0ACK);
> +	if (ret) {
> +		dev_err(smmu->dev, "failed to enable command queue\n");
> +		return ret;
> +	}
> +
> +	/* Invalidate any cached configuration */
> +	cmd.opcode = CMDQ_OP_CFGI_ALL;
> +	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> +	arm_smmu_cmdq_issue_sync(smmu);
> +
> +	/* Invalidate any stale TLB entries */
> +	if (smmu->features & ARM_SMMU_FEAT_HYP) {
> +		cmd.opcode = CMDQ_OP_TLBI_EL2_ALL;
> +		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> +	}
> +
> +	cmd.opcode = CMDQ_OP_TLBI_NSNH_ALL;
> +	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> +	arm_smmu_cmdq_issue_sync(smmu);
> +
> +	/* Event queue */
> +	writeq_relaxed(smmu->evtq.q.q_base, smmu->base + ARM_SMMU_EVTQ_BASE);
> +	writel_relaxed(smmu->evtq.q.llq.prod,
> +		       arm_smmu_page1_fixup(ARM_SMMU_EVTQ_PROD, smmu));
> +	writel_relaxed(smmu->evtq.q.llq.cons,
> +		       arm_smmu_page1_fixup(ARM_SMMU_EVTQ_CONS, smmu));
> +
> +	enables |= CR0_EVTQEN;
> +	ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
> +				      ARM_SMMU_CR0ACK);
> +	if (ret) {
> +		dev_err(smmu->dev, "failed to enable event queue\n");
> +		return ret;
> +	}
> +
> +	/* PRI queue */
> +	if (smmu->features & ARM_SMMU_FEAT_PRI) {
> +		writeq_relaxed(smmu->priq.q.q_base,
> +			       smmu->base + ARM_SMMU_PRIQ_BASE);
> +		writel_relaxed(smmu->priq.q.llq.prod,
> +			       arm_smmu_page1_fixup(ARM_SMMU_PRIQ_PROD, smmu));
> +		writel_relaxed(smmu->priq.q.llq.cons,
> +			       arm_smmu_page1_fixup(ARM_SMMU_PRIQ_CONS, smmu));
> +
> +		enables |= CR0_PRIQEN;
> +		ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
> +					      ARM_SMMU_CR0ACK);
> +		if (ret) {
> +			dev_err(smmu->dev, "failed to enable PRI queue\n");
> +			return ret;
> +		}
> +	}
> +
> +	if (smmu->features & ARM_SMMU_FEAT_ATS) {
> +		enables |= CR0_ATSCHK;
> +		ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
> +					      ARM_SMMU_CR0ACK);
> +		if (ret) {
> +			dev_err(smmu->dev, "failed to enable ATS check\n");
> +			return ret;
> +		}
> +	}
> +
> +	ret = arm_smmu_setup_irqs(smmu);
> +	if (ret) {
> +		dev_err(smmu->dev, "failed to setup irqs\n");
> +		return ret;
> +	}
> +
> +	if (is_kdump_kernel())
> +		enables &= ~(CR0_EVTQEN | CR0_PRIQEN);
> +
> +	/* Enable the SMMU interface, or ensure bypass */
> +	if (!bypass || disable_bypass) {
> +		enables |= CR0_SMMUEN;
> +	} else {
> +		ret = arm_smmu_update_gbpa(smmu, 0, GBPA_ABORT);
> +		if (ret)
> +			return ret;
> +	}
> +	ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
> +				      ARM_SMMU_CR0ACK);
> +	if (ret) {
> +		dev_err(smmu->dev, "failed to enable SMMU interface\n");
> +		return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
> +{
> +	u32 reg;
> +	bool coherent = smmu->features & ARM_SMMU_FEAT_COHERENCY;
> +
> +	/* IDR0 */
> +	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR0);
> +
> +	/* 2-level structures */
> +	if (FIELD_GET(IDR0_ST_LVL, reg) == IDR0_ST_LVL_2LVL)
> +		smmu->features |= ARM_SMMU_FEAT_2_LVL_STRTAB;
> +
> +	if (reg & IDR0_CD2L)
> +		smmu->features |= ARM_SMMU_FEAT_2_LVL_CDTAB;
> +
> +	/*
> +	 * Translation table endianness.
> +	 * We currently require the same endianness as the CPU, but this
> +	 * could be changed later by adding a new IO_PGTABLE_QUIRK.
> +	 */
> +	switch (FIELD_GET(IDR0_TTENDIAN, reg)) {
> +	case IDR0_TTENDIAN_MIXED:
> +		smmu->features |= ARM_SMMU_FEAT_TT_LE | ARM_SMMU_FEAT_TT_BE;
> +		break;
> +#ifdef __BIG_ENDIAN
> +	case IDR0_TTENDIAN_BE:
> +		smmu->features |= ARM_SMMU_FEAT_TT_BE;
> +		break;
> +#else
> +	case IDR0_TTENDIAN_LE:
> +		smmu->features |= ARM_SMMU_FEAT_TT_LE;
> +		break;
> +#endif
> +	default:
> +		dev_err(smmu->dev, "unknown/unsupported TT endianness!\n");
> +		return -ENXIO;
> +	}
> +
> +	/* Boolean feature flags */
> +	if (IS_ENABLED(CONFIG_PCI_PRI) && reg & IDR0_PRI)
> +		smmu->features |= ARM_SMMU_FEAT_PRI;
> +
> +	if (IS_ENABLED(CONFIG_PCI_ATS) && reg & IDR0_ATS)
> +		smmu->features |= ARM_SMMU_FEAT_ATS;
> +
> +	if (reg & IDR0_SEV)
> +		smmu->features |= ARM_SMMU_FEAT_SEV;
> +
> +	if (reg & IDR0_MSI)
> +		smmu->features |= ARM_SMMU_FEAT_MSI;
> +
> +	if (reg & IDR0_HYP)
> +		smmu->features |= ARM_SMMU_FEAT_HYP;
> +
> +	/*
> +	 * The coherency feature as set by FW is used in preference to the ID
> +	 * register, but warn on mismatch.
> +	 */
> +	if (!!(reg & IDR0_COHACC) != coherent)
> +		dev_warn(smmu->dev, "IDR0.COHACC overridden by FW configuration (%s)\n",
> +			 coherent ? "true" : "false");
> +
> +	switch (FIELD_GET(IDR0_STALL_MODEL, reg)) {
> +	case IDR0_STALL_MODEL_FORCE:
> +		smmu->features |= ARM_SMMU_FEAT_STALL_FORCE;
> +		fallthrough;
> +	case IDR0_STALL_MODEL_STALL:
> +		smmu->features |= ARM_SMMU_FEAT_STALLS;
> +	}
> +
> +	if (reg & IDR0_S1P)
> +		smmu->features |= ARM_SMMU_FEAT_TRANS_S1;
> +
> +	if (reg & IDR0_S2P)
> +		smmu->features |= ARM_SMMU_FEAT_TRANS_S2;
> +
> +	if (!(reg & (IDR0_S1P | IDR0_S2P))) {
> +		dev_err(smmu->dev, "no translation support!\n");
> +		return -ENXIO;
> +	}
> +
> +	/* We only support the AArch64 table format at present */
> +	switch (FIELD_GET(IDR0_TTF, reg)) {
> +	case IDR0_TTF_AARCH32_64:
> +		smmu->ias = 40;
> +		fallthrough;
> +	case IDR0_TTF_AARCH64:
> +		break;
> +	default:
> +		dev_err(smmu->dev, "AArch64 table format not supported!\n");
> +		return -ENXIO;
> +	}
> +
> +	/* ASID/VMID sizes */
> +	smmu->asid_bits = reg & IDR0_ASID16 ? 16 : 8;
> +	smmu->vmid_bits = reg & IDR0_VMID16 ? 16 : 8;
> +
> +	/* IDR1 */
> +	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR1);
> +	if (reg & (IDR1_TABLES_PRESET | IDR1_QUEUES_PRESET | IDR1_REL)) {
> +		dev_err(smmu->dev, "embedded implementation not supported\n");
> +		return -ENXIO;
> +	}
> +
> +	/* Queue sizes, capped to ensure natural alignment */
> +	smmu->cmdq.q.llq.max_n_shift = min_t(u32, CMDQ_MAX_SZ_SHIFT,
> +					     FIELD_GET(IDR1_CMDQS, reg));
> +	if (smmu->cmdq.q.llq.max_n_shift <= ilog2(CMDQ_BATCH_ENTRIES)) {
> +		/*
> +		 * We don't support splitting up batches, so one batch of
> +		 * commands plus an extra sync needs to fit inside the command
> +		 * queue. There's also no way we can handle the weird alignment
> +		 * restrictions on the base pointer for a unit-length queue.
> +		 */
> +		dev_err(smmu->dev, "command queue size <= %d entries not supported\n",
> +			CMDQ_BATCH_ENTRIES);
> +		return -ENXIO;
> +	}
> +
> +	smmu->evtq.q.llq.max_n_shift = min_t(u32, EVTQ_MAX_SZ_SHIFT,
> +					     FIELD_GET(IDR1_EVTQS, reg));
> +	smmu->priq.q.llq.max_n_shift = min_t(u32, PRIQ_MAX_SZ_SHIFT,
> +					     FIELD_GET(IDR1_PRIQS, reg));
> +
> +	/* SID/SSID sizes */
> +	smmu->ssid_bits = FIELD_GET(IDR1_SSIDSIZE, reg);
> +	smmu->sid_bits = FIELD_GET(IDR1_SIDSIZE, reg);
> +
> +	/*
> +	 * If the SMMU supports fewer bits than would fill a single L2 stream
> +	 * table, use a linear table instead.
> +	 */
> +	if (smmu->sid_bits <= STRTAB_SPLIT)
> +		smmu->features &= ~ARM_SMMU_FEAT_2_LVL_STRTAB;
> +
> +	/* IDR3 */
> +	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR3);
> +	if (FIELD_GET(IDR3_RIL, reg))
> +		smmu->features |= ARM_SMMU_FEAT_RANGE_INV;
> +
> +	/* IDR5 */
> +	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR5);
> +
> +	/* Maximum number of outstanding stalls */
> +	smmu->evtq.max_stalls = FIELD_GET(IDR5_STALL_MAX, reg);
> +
> +	/* Page sizes */
> +	if (reg & IDR5_GRAN64K)
> +		smmu->pgsize_bitmap |= SZ_64K | SZ_512M;
> +	if (reg & IDR5_GRAN16K)
> +		smmu->pgsize_bitmap |= SZ_16K | SZ_32M;
> +	if (reg & IDR5_GRAN4K)
> +		smmu->pgsize_bitmap |= SZ_4K | SZ_2M | SZ_1G;
> +
> +	/* Input address size */
> +	if (FIELD_GET(IDR5_VAX, reg) == IDR5_VAX_52_BIT)
> +		smmu->features |= ARM_SMMU_FEAT_VAX;
> +
> +	/* Output address size */
> +	switch (FIELD_GET(IDR5_OAS, reg)) {
> +	case IDR5_OAS_32_BIT:
> +		smmu->oas = 32;
> +		break;
> +	case IDR5_OAS_36_BIT:
> +		smmu->oas = 36;
> +		break;
> +	case IDR5_OAS_40_BIT:
> +		smmu->oas = 40;
> +		break;
> +	case IDR5_OAS_42_BIT:
> +		smmu->oas = 42;
> +		break;
> +	case IDR5_OAS_44_BIT:
> +		smmu->oas = 44;
> +		break;
> +	case IDR5_OAS_52_BIT:
> +		smmu->oas = 52;
> +		smmu->pgsize_bitmap |= 1ULL << 42; /* 4TB */
> +		break;
> +	default:
> +		dev_info(smmu->dev,
> +			"unknown output address size. Truncating to 48-bit\n");
> +		fallthrough;
> +	case IDR5_OAS_48_BIT:
> +		smmu->oas = 48;
> +	}
> +
> +	if (arm_smmu_ops.pgsize_bitmap == -1UL)
> +		arm_smmu_ops.pgsize_bitmap = smmu->pgsize_bitmap;
> +	else
> +		arm_smmu_ops.pgsize_bitmap |= smmu->pgsize_bitmap;
> +
> +	/* Set the DMA mask for our table walker */
> +	if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(smmu->oas)))
> +		dev_warn(smmu->dev,
> +			 "failed to set DMA mask for table walker\n");
> +
> +	smmu->ias = max(smmu->ias, smmu->oas);
> +
> +	dev_info(smmu->dev, "ias %lu-bit, oas %lu-bit (features 0x%08x)\n",
> +		 smmu->ias, smmu->oas, smmu->features);
> +	return 0;
> +}
> +
> +#ifdef CONFIG_ACPI
> +static void acpi_smmu_get_options(u32 model, struct arm_smmu_device *smmu)
> +{
> +	switch (model) {
> +	case ACPI_IORT_SMMU_V3_CAVIUM_CN99XX:
> +		smmu->options |= ARM_SMMU_OPT_PAGE0_REGS_ONLY;
> +		break;
> +	case ACPI_IORT_SMMU_V3_HISILICON_HI161X:
> +		smmu->options |= ARM_SMMU_OPT_SKIP_PREFETCH;
> +		break;
> +	}
> +
> +	dev_notice(smmu->dev, "option mask 0x%x\n", smmu->options);
> +}
> +
> +static int arm_smmu_device_acpi_probe(struct platform_device *pdev,
> +				      struct arm_smmu_device *smmu)
> +{
> +	struct acpi_iort_smmu_v3 *iort_smmu;
> +	struct device *dev = smmu->dev;
> +	struct acpi_iort_node *node;
> +
> +	node = *(struct acpi_iort_node **)dev_get_platdata(dev);
> +
> +	/* Retrieve SMMUv3 specific data */
> +	iort_smmu = (struct acpi_iort_smmu_v3 *)node->node_data;
> +
> +	acpi_smmu_get_options(iort_smmu->model, smmu);
> +
> +	if (iort_smmu->flags & ACPI_IORT_SMMU_V3_COHACC_OVERRIDE)
> +		smmu->features |= ARM_SMMU_FEAT_COHERENCY;
> +
> +	return 0;
> +}
> +#else
> +static inline int arm_smmu_device_acpi_probe(struct platform_device *pdev,
> +					     struct arm_smmu_device *smmu)
> +{
> +	return -ENODEV;
> +}
> +#endif
> +
> +static int arm_smmu_device_dt_probe(struct platform_device *pdev,
> +				    struct arm_smmu_device *smmu)
> +{
> +	struct device *dev = &pdev->dev;
> +	u32 cells;
> +	int ret = -EINVAL;
> +
> +	if (of_property_read_u32(dev->of_node, "#iommu-cells", &cells))
> +		dev_err(dev, "missing #iommu-cells property\n");
> +	else if (cells != 1)
> +		dev_err(dev, "invalid #iommu-cells value (%d)\n", cells);
> +	else
> +		ret = 0;
> +
> +	parse_driver_options(smmu);
> +
> +	if (of_dma_is_coherent(dev->of_node))
> +		smmu->features |= ARM_SMMU_FEAT_COHERENCY;
> +
> +	return ret;
> +}
> +
> +static unsigned long arm_smmu_resource_size(struct arm_smmu_device *smmu)
> +{
> +	if (smmu->options & ARM_SMMU_OPT_PAGE0_REGS_ONLY)
> +		return SZ_64K;
> +	else
> +		return SZ_128K;
> +}
> +
> +static int arm_smmu_set_bus_ops(struct iommu_ops *ops)
> +{
> +	int err;
> +
> +#ifdef CONFIG_PCI
> +	if (pci_bus_type.iommu_ops != ops) {
> +		err = bus_set_iommu(&pci_bus_type, ops);
> +		if (err)
> +			return err;
> +	}
> +#endif
> +#ifdef CONFIG_ARM_AMBA
> +	if (amba_bustype.iommu_ops != ops) {
> +		err = bus_set_iommu(&amba_bustype, ops);
> +		if (err)
> +			goto err_reset_pci_ops;
> +	}
> +#endif
> +	if (platform_bus_type.iommu_ops != ops) {
> +		err = bus_set_iommu(&platform_bus_type, ops);
> +		if (err)
> +			goto err_reset_amba_ops;
> +	}
> +
> +	return 0;
> +
> +err_reset_amba_ops:
> +#ifdef CONFIG_ARM_AMBA
> +	bus_set_iommu(&amba_bustype, NULL);
> +#endif
> +err_reset_pci_ops: __maybe_unused;
> +#ifdef CONFIG_PCI
> +	bus_set_iommu(&pci_bus_type, NULL);
> +#endif
> +	return err;
> +}
> +
> +static void __iomem *arm_smmu_ioremap(struct device *dev, resource_size_t start,
> +				      resource_size_t size)
> +{
> +	struct resource res = {
> +		.flags = IORESOURCE_MEM,
> +		.start = start,
> +		.end = start + size - 1,
> +	};
> +
> +	return devm_ioremap_resource(dev, &res);
> +}
> +
> +static int arm_smmu_device_probe(struct platform_device *pdev)
> +{
> +	int irq, ret;
> +	struct resource *res;
> +	resource_size_t ioaddr;
> +	struct arm_smmu_device *smmu;
> +	struct device *dev = &pdev->dev;
> +	bool bypass;
> +
> +	smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL);
> +	if (!smmu) {
> +		dev_err(dev, "failed to allocate arm_smmu_device\n");
> +		return -ENOMEM;
> +	}
> +	smmu->dev = dev;
> +
> +	if (dev->of_node) {
> +		ret = arm_smmu_device_dt_probe(pdev, smmu);
> +	} else {
> +		ret = arm_smmu_device_acpi_probe(pdev, smmu);
> +		if (ret == -ENODEV)
> +			return ret;
> +	}
> +
> +	/* Set bypass mode according to firmware probing result */
> +	bypass = !!ret;
> +
> +	/* Base address */
> +	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> +	if (resource_size(res) < arm_smmu_resource_size(smmu)) {
> +		dev_err(dev, "MMIO region too small (%pr)\n", res);
> +		return -EINVAL;
> +	}
> +	ioaddr = res->start;
> +
> +	/*
> +	 * Don't map the IMPLEMENTATION DEFINED regions, since they may contain
> +	 * the PMCG registers which are reserved by the PMU driver.
> +	 */
> +	smmu->base = arm_smmu_ioremap(dev, ioaddr, ARM_SMMU_REG_SZ);
> +	if (IS_ERR(smmu->base))
> +		return PTR_ERR(smmu->base);
> +
> +	if (arm_smmu_resource_size(smmu) > SZ_64K) {
> +		smmu->page1 = arm_smmu_ioremap(dev, ioaddr + SZ_64K,
> +					       ARM_SMMU_REG_SZ);
> +		if (IS_ERR(smmu->page1))
> +			return PTR_ERR(smmu->page1);
> +	} else {
> +		smmu->page1 = smmu->base;
> +	}
> +
> +	/* Interrupt lines */
> +
> +	irq = platform_get_irq_byname_optional(pdev, "combined");
> +	if (irq > 0)
> +		smmu->combined_irq = irq;
> +	else {
> +		irq = platform_get_irq_byname_optional(pdev, "eventq");
> +		if (irq > 0)
> +			smmu->evtq.q.irq = irq;
> +
> +		irq = platform_get_irq_byname_optional(pdev, "priq");
> +		if (irq > 0)
> +			smmu->priq.q.irq = irq;
> +
> +		irq = platform_get_irq_byname_optional(pdev, "gerror");
> +		if (irq > 0)
> +			smmu->gerr_irq = irq;
> +	}
> +	/* Probe the h/w */
> +	ret = arm_smmu_device_hw_probe(smmu);
> +	if (ret)
> +		return ret;
> +
> +	/* Initialise in-memory data structures */
> +	ret = arm_smmu_init_structures(smmu);
> +	if (ret)
> +		return ret;
> +
> +	/* Record our private device structure */
> +	platform_set_drvdata(pdev, smmu);
> +
> +	/* Reset the device */
> +	ret = arm_smmu_device_reset(smmu, bypass);
> +	if (ret)
> +		return ret;
> +
> +	/* And we're up. Go go go! */
> +	ret = iommu_device_sysfs_add(&smmu->iommu, dev, NULL,
> +				     "smmu3.%pa", &ioaddr);
> +	if (ret)
> +		return ret;
> +
> +	iommu_device_set_ops(&smmu->iommu, &arm_smmu_ops);
> +	iommu_device_set_fwnode(&smmu->iommu, dev->fwnode);
> +
> +	ret = iommu_device_register(&smmu->iommu);
> +	if (ret) {
> +		dev_err(dev, "Failed to register iommu\n");
> +		return ret;
> +	}
> +
> +	return arm_smmu_set_bus_ops(&arm_smmu_ops);
> +}
> +
> +static int arm_smmu_device_remove(struct platform_device *pdev)
> +{
> +	struct arm_smmu_device *smmu = platform_get_drvdata(pdev);
> +
> +	arm_smmu_set_bus_ops(NULL);
> +	iommu_device_unregister(&smmu->iommu);
> +	iommu_device_sysfs_remove(&smmu->iommu);
> +	arm_smmu_device_disable(smmu);
> +
> +	return 0;
> +}
> +
> +static void arm_smmu_device_shutdown(struct platform_device *pdev)
> +{
> +	arm_smmu_device_remove(pdev);
> +}
> +
> +static const struct of_device_id arm_smmu_of_match[] = {
> +	{ .compatible = "arm,smmu-v3", },
> +	{ },
> +};
> +MODULE_DEVICE_TABLE(of, arm_smmu_of_match);
> +
> +static struct platform_driver arm_smmu_driver = {
> +	.driver	= {
> +		.name			= "arm-smmu-v3",
> +		.of_match_table		= arm_smmu_of_match,
> +		.suppress_bind_attrs	= true,
> +	},
> +	.probe	= arm_smmu_device_probe,
> +	.remove	= arm_smmu_device_remove,
> +	.shutdown = arm_smmu_device_shutdown,
> +};
> +module_platform_driver(arm_smmu_driver);
> +
> +MODULE_DESCRIPTION("IOMMU API for ARM architected SMMUv3 implementations");
> +MODULE_AUTHOR("Will Deacon <will@kernel.org>");
> +MODULE_ALIAS("platform:arm-smmu-v3");
> +MODULE_LICENSE("GPL v2");
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Tue Dec 01 22:23:42 2020
Date: Tue, 1 Dec 2020 14:23:33 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 2/8] xen/arm: revert atomic operation related
 command-queue insertion patch
In-Reply-To: <4a0ca6d03b5f1f5b30c4cdbdff0688cea84d9e91.1606406359.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2012011420520.1100@sstabellini-ThinkPad-T480s>
References: <cover.1606406359.git.rahul.singh@arm.com> <4a0ca6d03b5f1f5b30c4cdbdff0688cea84d9e91.1606406359.git.rahul.singh@arm.com>

On Thu, 26 Nov 2020, Rahul Singh wrote:
> Linux's SMMUv3 code implements command-queue insertion using atomic
> operations provided by Linux. The atomic functions that the insertion
> path relies on are not implemented in Xen, so revert the patch that
> based command-queue insertion on them.
> 
> Once the required atomic operations are available in Xen, the driver
> can be updated.
> 
> This reverts commit 587e6c10a7ce89a5924fdbeff2ec524fbd6a124b
> ("iommu/arm-smmu-v3: Reduce contention during command-queue insertion").

I checked 587e6c10a7ce89a5924fdbeff2ec524fbd6a124b: this patch does more
than just revert that commit. It looks like it also reverts
edd0351e7bc49555d8b5ad8438a65a7ca262c9f0 and some other commits.

Please can you provide a complete list of reverted commits? I would like
to be able to do the reverts myself on the linux tree and see that the
driver textually matches the one on the xen tree with this patch
applied.

> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> ---
>  xen/drivers/passthrough/arm/smmu-v3.c | 847 ++++++--------------------
>  1 file changed, 180 insertions(+), 667 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> index c192544e87..97eac61ea4 100644
> --- a/xen/drivers/passthrough/arm/smmu-v3.c
> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> @@ -330,15 +330,6 @@
>  #define CMDQ_ERR_CERROR_ABT_IDX		2
>  #define CMDQ_ERR_CERROR_ATC_INV_IDX	3
>  
> -#define CMDQ_PROD_OWNED_FLAG		Q_OVERFLOW_FLAG
> -
> -/*
> - * This is used to size the command queue and therefore must be at least
> - * BITS_PER_LONG so that the valid_map works correctly (it relies on the
> - * total number of queue entries being a multiple of BITS_PER_LONG).
> - */
> -#define CMDQ_BATCH_ENTRIES		BITS_PER_LONG
> -
>  #define CMDQ_0_OP			GENMASK_ULL(7, 0)
>  #define CMDQ_0_SSV			(1UL << 11)
>  
> @@ -407,8 +398,9 @@
>  #define PRIQ_1_ADDR_MASK		GENMASK_ULL(63, 12)
>  
>  /* High-level queue structures */
> -#define ARM_SMMU_POLL_TIMEOUT_US	1000000 /* 1s! */
> -#define ARM_SMMU_POLL_SPIN_COUNT	10
> +#define ARM_SMMU_POLL_TIMEOUT_US	100
> +#define ARM_SMMU_CMDQ_SYNC_TIMEOUT_US	1000000 /* 1s! */
> +#define ARM_SMMU_CMDQ_SYNC_SPIN_COUNT	10
>  
>  #define MSI_IOVA_BASE			0x8000000
>  #define MSI_IOVA_LENGTH			0x100000
> @@ -513,24 +505,15 @@ struct arm_smmu_cmdq_ent {
>  
>  		#define CMDQ_OP_CMD_SYNC	0x46
>  		struct {
> +			u32			msidata;
>  			u64			msiaddr;
>  		} sync;
>  	};
>  };
>  
>  struct arm_smmu_ll_queue {
> -	union {
> -		u64			val;
> -		struct {
> -			u32		prod;
> -			u32		cons;
> -		};
> -		struct {
> -			atomic_t	prod;
> -			atomic_t	cons;
> -		} atomic;
> -		u8			__pad[SMP_CACHE_BYTES];
> -	} ____cacheline_aligned_in_smp;
> +	u32				prod;
> +	u32				cons;
>  	u32				max_n_shift;
>  };
>  
> @@ -548,23 +531,9 @@ struct arm_smmu_queue {
>  	u32 __iomem			*cons_reg;
>  };
>  
> -struct arm_smmu_queue_poll {
> -	ktime_t				timeout;
> -	unsigned int			delay;
> -	unsigned int			spin_cnt;
> -	bool				wfe;
> -};
> -
>  struct arm_smmu_cmdq {
>  	struct arm_smmu_queue		q;
> -	atomic_long_t			*valid_map;
> -	atomic_t			owner_prod;
> -	atomic_t			lock;
> -};
> -
> -struct arm_smmu_cmdq_batch {
> -	u64				cmds[CMDQ_BATCH_ENTRIES * CMDQ_ENT_DWORDS];
> -	int				num;
> +	spinlock_t			lock;
>  };
>  
>  struct arm_smmu_evtq {
> @@ -660,6 +629,8 @@ struct arm_smmu_device {
>  
>  	int				gerr_irq;
>  	int				combined_irq;
> +	u32				sync_nr;
> +	u8				prev_cmd_opcode;
>  
>  	unsigned long			ias; /* IPA */
>  	unsigned long			oas; /* PA */
> @@ -677,6 +648,12 @@ struct arm_smmu_device {
>  
>  	struct arm_smmu_strtab_cfg	strtab_cfg;
>  
> +	/* Hi16xx adds an extra 32 bits of goodness to its MSI payload */
> +	union {
> +		u32			sync_count;
> +		u64			padding;
> +	};
> +
>  	/* IOMMU core code handle */
>  	struct iommu_device		iommu;
>  };
> @@ -763,21 +740,6 @@ static void parse_driver_options(struct arm_smmu_device *smmu)
>  }
>  
>  /* Low-level queue manipulation functions */
> -static bool queue_has_space(struct arm_smmu_ll_queue *q, u32 n)
> -{
> -	u32 space, prod, cons;
> -
> -	prod = Q_IDX(q, q->prod);
> -	cons = Q_IDX(q, q->cons);
> -
> -	if (Q_WRP(q, q->prod) == Q_WRP(q, q->cons))
> -		space = (1 << q->max_n_shift) - (prod - cons);
> -	else
> -		space = cons - prod;
> -
> -	return space >= n;
> -}
> -
>  static bool queue_full(struct arm_smmu_ll_queue *q)
>  {
>  	return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) &&
> @@ -790,12 +752,9 @@ static bool queue_empty(struct arm_smmu_ll_queue *q)
>  	       Q_WRP(q, q->prod) == Q_WRP(q, q->cons);
>  }
>  
> -static bool queue_consumed(struct arm_smmu_ll_queue *q, u32 prod)
> +static void queue_sync_cons_in(struct arm_smmu_queue *q)
>  {
> -	return ((Q_WRP(q, q->cons) == Q_WRP(q, prod)) &&
> -		(Q_IDX(q, q->cons) > Q_IDX(q, prod))) ||
> -	       ((Q_WRP(q, q->cons) != Q_WRP(q, prod)) &&
> -		(Q_IDX(q, q->cons) <= Q_IDX(q, prod)));
> +	q->llq.cons = readl_relaxed(q->cons_reg);
>  }
>  
>  static void queue_sync_cons_out(struct arm_smmu_queue *q)
> @@ -826,34 +785,46 @@ static int queue_sync_prod_in(struct arm_smmu_queue *q)
>  	return ret;
>  }
>  
> -static u32 queue_inc_prod_n(struct arm_smmu_ll_queue *q, int n)
> +static void queue_sync_prod_out(struct arm_smmu_queue *q)
>  {
> -	u32 prod = (Q_WRP(q, q->prod) | Q_IDX(q, q->prod)) + n;
> -	return Q_OVF(q->prod) | Q_WRP(q, prod) | Q_IDX(q, prod);
> +	writel(q->llq.prod, q->prod_reg);
>  }
>  
> -static void queue_poll_init(struct arm_smmu_device *smmu,
> -			    struct arm_smmu_queue_poll *qp)
> +static void queue_inc_prod(struct arm_smmu_ll_queue *q)
>  {
> -	qp->delay = 1;
> -	qp->spin_cnt = 0;
> -	qp->wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
> -	qp->timeout = ktime_add_us(ktime_get(), ARM_SMMU_POLL_TIMEOUT_US);
> +	u32 prod = (Q_WRP(q, q->prod) | Q_IDX(q, q->prod)) + 1;
> +	q->prod = Q_OVF(q->prod) | Q_WRP(q, prod) | Q_IDX(q, prod);
>  }
>  
> -static int queue_poll(struct arm_smmu_queue_poll *qp)
> +/*
> + * Wait for the SMMU to consume items. If sync is true, wait until the queue
> + * is empty. Otherwise, wait until there is at least one free slot.
> + */
> +static int queue_poll_cons(struct arm_smmu_queue *q, bool sync, bool wfe)
>  {
> -	if (ktime_compare(ktime_get(), qp->timeout) > 0)
> -		return -ETIMEDOUT;
> +	ktime_t timeout;
> +	unsigned int delay = 1, spin_cnt = 0;
>  
> -	if (qp->wfe) {
> -		wfe();
> -	} else if (++qp->spin_cnt < ARM_SMMU_POLL_SPIN_COUNT) {
> -		cpu_relax();
> -	} else {
> -		udelay(qp->delay);
> -		qp->delay *= 2;
> -		qp->spin_cnt = 0;
> +	/* Wait longer if it's a CMD_SYNC */
> +	timeout = ktime_add_us(ktime_get(), sync ?
> +					    ARM_SMMU_CMDQ_SYNC_TIMEOUT_US :
> +					    ARM_SMMU_POLL_TIMEOUT_US);
> +
> +	while (queue_sync_cons_in(q),
> +	      (sync ? !queue_empty(&q->llq) : queue_full(&q->llq))) {
> +		if (ktime_compare(ktime_get(), timeout) > 0)
> +			return -ETIMEDOUT;
> +
> +		if (wfe) {
> +			wfe();
> +		} else if (++spin_cnt < ARM_SMMU_CMDQ_SYNC_SPIN_COUNT) {
> +			cpu_relax();
> +			continue;
> +		} else {
> +			udelay(delay);
> +			delay *= 2;
> +			spin_cnt = 0;
> +		}
>  	}
>  
>  	return 0;
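[Editorial aside: the loop above spins briefly with cpu_relax() and then falls back to exponentially growing udelay()s until the hardware consumes entries or the timeout expires. A host-side model of the same strategy, with the condition and delays made deterministic so it can be checked; all names here are illustrative:]

```c
/* Illustrative model (not driver code) of the spin-then-backoff loop in
 * queue_poll_cons(): poll, spin up to SPIN_COUNT times (cpu_relax() in the
 * driver), then wait for exponentially growing delays (udelay() in the
 * driver) until the condition holds or the time budget runs out. Here the
 * condition is "polled at least ready_after times" and delays are summed
 * as virtual microseconds so the behaviour is deterministic. */
#define SPIN_COUNT 10

static long poll_backoff(unsigned int ready_after, unsigned long budget_us)
{
	unsigned int delay = 1, spin_cnt = 0, polls = 0;
	unsigned long spent_us = 0;

	for (;;) {
		if (++polls >= ready_after)	/* hardware caught up */
			return (long)spent_us;	/* virtual time spent waiting */
		if (spent_us > budget_us)
			return -1;		/* -ETIMEDOUT in the driver */
		if (++spin_cnt < SPIN_COUNT)
			continue;		/* cpu_relax() in the driver */
		spent_us += delay;		/* udelay(delay) in the driver */
		delay *= 2;
		spin_cnt = 0;
	}
}
```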
> @@ -867,6 +838,17 @@ static void queue_write(__le64 *dst, u64 *src, size_t n_dwords)
>  		*dst++ = cpu_to_le64(*src++);
>  }
>  
> +static int queue_insert_raw(struct arm_smmu_queue *q, u64 *ent)
> +{
> +	if (queue_full(&q->llq))
> +		return -ENOSPC;
> +
> +	queue_write(Q_ENT(q, q->llq.prod), ent, q->ent_dwords);
> +	queue_inc_prod(&q->llq);
> +	queue_sync_prod_out(q);
> +	return 0;
> +}
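[Editorial aside: queue_insert_raw() and the queue_full()/queue_empty() helpers rely on prod/cons indices that carry a wrap bit just above the entry-index bits, which is what distinguishes a full queue from an empty one. A simplified, self-contained sketch of that index scheme; the driver's Q_OVF overflow flag is omitted and the names are illustrative:]

```c
#include <stdint.h>
#include <stdbool.h>

/* Simplified sketch of the prod/cons index scheme: the low max_n_shift
 * bits of each index select a queue entry and the next bit up is a wrap
 * flag. Equal index bits with differing wrap bits mean "full"; equal
 * index and wrap bits mean "empty". */
struct llq {
	uint32_t prod, cons;
	uint32_t max_n_shift;
};

static uint32_t q_idx(const struct llq *q, uint32_t p)
{
	return p & ((1u << q->max_n_shift) - 1);
}

static uint32_t q_wrp(const struct llq *q, uint32_t p)
{
	return p & (1u << q->max_n_shift);
}

static bool llq_full(const struct llq *q)
{
	return q_idx(q, q->prod) == q_idx(q, q->cons) &&
	       q_wrp(q, q->prod) != q_wrp(q, q->cons);
}

static bool llq_empty(const struct llq *q)
{
	return q_idx(q, q->prod) == q_idx(q, q->cons) &&
	       q_wrp(q, q->prod) == q_wrp(q, q->cons);
}

static void llq_inc_prod(struct llq *q)
{
	uint32_t p = (q_wrp(q, q->prod) | q_idx(q, q->prod)) + 1;

	q->prod = q_wrp(q, p) | q_idx(q, p);
}
```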
> +
>  static void queue_read(__le64 *dst, u64 *src, size_t n_dwords)
>  {
>  	int i;
> @@ -964,14 +946,20 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
>  		cmd[1] |= FIELD_PREP(CMDQ_PRI_1_RESP, ent->pri.resp);
>  		break;
>  	case CMDQ_OP_CMD_SYNC:
> -		if (ent->sync.msiaddr) {
> +		if (ent->sync.msiaddr)
>  			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_IRQ);
> -			cmd[1] |= ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
> -		} else {
> +		else
>  			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
> -		}
>  		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
>  		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
> +		/*
> +		 * Commands are written little-endian, but we want the SMMU to
> +		 * receive MSIData, and thus write it back to memory, in CPU
> +		 * byte order, so big-endian needs an extra byteswap here.
> +		 */
> +		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA,
> +				     cpu_to_le32(ent->sync.msidata));
> +		cmd[1] |= ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
>  		break;
>  	default:
>  		return -ENOENT;
> @@ -980,27 +968,6 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
>  	return 0;
>  }
>  
> -static void arm_smmu_cmdq_build_sync_cmd(u64 *cmd, struct arm_smmu_device *smmu,
> -					 u32 prod)
> -{
> -	struct arm_smmu_queue *q = &smmu->cmdq.q;
> -	struct arm_smmu_cmdq_ent ent = {
> -		.opcode = CMDQ_OP_CMD_SYNC,
> -	};
> -
> -	/*
> -	 * Beware that Hi16xx adds an extra 32 bits of goodness to its MSI
> -	 * payload, so the write will zero the entire command on that platform.
> -	 */
> -	if (smmu->features & ARM_SMMU_FEAT_MSI &&
> -	    smmu->features & ARM_SMMU_FEAT_COHERENCY) {
> -		ent.sync.msiaddr = q->base_dma + Q_IDX(&q->llq, prod) *
> -				   q->ent_dwords * 8;
> -	}
> -
> -	arm_smmu_cmdq_build_cmd(cmd, &ent);
> -}
> -
>  static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
>  {
>  	static const char *cerror_str[] = {
> @@ -1058,474 +1025,109 @@ static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
>  	queue_write(Q_ENT(q, cons), cmd, q->ent_dwords);
>  }
>  
> -/*
> - * Command queue locking.
> - * This is a form of bastardised rwlock with the following major changes:
> - *
> - * - The only LOCK routines are exclusive_trylock() and shared_lock().
> - *   Neither have barrier semantics, and instead provide only a control
> - *   dependency.
> - *
> - * - The UNLOCK routines are supplemented with shared_tryunlock(), which
> - *   fails if the caller appears to be the last lock holder (yes, this is
> - *   racy). All successful UNLOCK routines have RELEASE semantics.
> - */
> -static void arm_smmu_cmdq_shared_lock(struct arm_smmu_cmdq *cmdq)
> +static void arm_smmu_cmdq_insert_cmd(struct arm_smmu_device *smmu, u64 *cmd)
>  {
> -	int val;
> -
> -	/*
> -	 * We can try to avoid the cmpxchg() loop by simply incrementing the
> -	 * lock counter. When held in exclusive state, the lock counter is set
> -	 * to INT_MIN so these increments won't hurt as the value will remain
> -	 * negative.
> -	 */
> -	if (atomic_fetch_inc_relaxed(&cmdq->lock) >= 0)
> -		return;
> -
> -	do {
> -		val = atomic_cond_read_relaxed(&cmdq->lock, VAL >= 0);
> -	} while (atomic_cmpxchg_relaxed(&cmdq->lock, val, val + 1) != val);
> -}
> -
> -static void arm_smmu_cmdq_shared_unlock(struct arm_smmu_cmdq *cmdq)
> -{
> -	(void)atomic_dec_return_release(&cmdq->lock);
> -}
> -
> -static bool arm_smmu_cmdq_shared_tryunlock(struct arm_smmu_cmdq *cmdq)
> -{
> -	if (atomic_read(&cmdq->lock) == 1)
> -		return false;
> -
> -	arm_smmu_cmdq_shared_unlock(cmdq);
> -	return true;
> -}
> -
> -#define arm_smmu_cmdq_exclusive_trylock_irqsave(cmdq, flags)		\
> -({									\
> -	bool __ret;							\
> -	local_irq_save(flags);						\
> -	__ret = !atomic_cmpxchg_relaxed(&cmdq->lock, 0, INT_MIN);	\
> -	if (!__ret)							\
> -		local_irq_restore(flags);				\
> -	__ret;								\
> -})
> -
> -#define arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags)		\
> -({									\
> -	atomic_set_release(&cmdq->lock, 0);				\
> -	local_irq_restore(flags);					\
> -})
> -
> -
> -/*
> - * Command queue insertion.
> - * This is made fiddly by our attempts to achieve some sort of scalability
> - * since there is one queue shared amongst all of the CPUs in the system.  If
> - * you like mixed-size concurrency, dependency ordering and relaxed atomics,
> - * then you'll *love* this monstrosity.
> - *
> - * The basic idea is to split the queue up into ranges of commands that are
> - * owned by a given CPU; the owner may not have written all of the commands
> - * itself, but is responsible for advancing the hardware prod pointer when
> - * the time comes. The algorithm is roughly:
> - *
> - * 	1. Allocate some space in the queue. At this point we also discover
> - *	   whether the head of the queue is currently owned by another CPU,
> - *	   or whether we are the owner.
> - *
> - *	2. Write our commands into our allocated slots in the queue.
> - *
> - *	3. Mark our slots as valid in arm_smmu_cmdq.valid_map.
> - *
> - *	4. If we are an owner:
> - *		a. Wait for the previous owner to finish.
> - *		b. Mark the queue head as unowned, which tells us the range
> - *		   that we are responsible for publishing.
> - *		c. Wait for all commands in our owned range to become valid.
> - *		d. Advance the hardware prod pointer.
> - *		e. Tell the next owner we've finished.
> - *
> - *	5. If we are inserting a CMD_SYNC (we may or may not have been an
> - *	   owner), then we need to stick around until it has completed:
> - *		a. If we have MSIs, the SMMU can write back into the CMD_SYNC
> - *		   to clear the first 4 bytes.
> - *		b. Otherwise, we spin waiting for the hardware cons pointer to
> - *		   advance past our command.
> - *
> - * The devil is in the details, particularly the use of locking for handling
> - * SYNC completion and freeing up space in the queue before we think that it is
> - * full.
> - */
> -static void __arm_smmu_cmdq_poll_set_valid_map(struct arm_smmu_cmdq *cmdq,
> -					       u32 sprod, u32 eprod, bool set)
> -{
> -	u32 swidx, sbidx, ewidx, ebidx;
> -	struct arm_smmu_ll_queue llq = {
> -		.max_n_shift	= cmdq->q.llq.max_n_shift,
> -		.prod		= sprod,
> -	};
> -
> -	ewidx = BIT_WORD(Q_IDX(&llq, eprod));
> -	ebidx = Q_IDX(&llq, eprod) % BITS_PER_LONG;
> -
> -	while (llq.prod != eprod) {
> -		unsigned long mask;
> -		atomic_long_t *ptr;
> -		u32 limit = BITS_PER_LONG;
> -
> -		swidx = BIT_WORD(Q_IDX(&llq, llq.prod));
> -		sbidx = Q_IDX(&llq, llq.prod) % BITS_PER_LONG;
> -
> -		ptr = &cmdq->valid_map[swidx];
> -
> -		if ((swidx == ewidx) && (sbidx < ebidx))
> -			limit = ebidx;
> -
> -		mask = GENMASK(limit - 1, sbidx);
> -
> -		/*
> -		 * The valid bit is the inverse of the wrap bit. This means
> -		 * that a zero-initialised queue is invalid and, after marking
> -		 * all entries as valid, they become invalid again when we
> -		 * wrap.
> -		 */
> -		if (set) {
> -			atomic_long_xor(mask, ptr);
> -		} else { /* Poll */
> -			unsigned long valid;
> +	struct arm_smmu_queue *q = &smmu->cmdq.q;
> +	bool wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
>  
> -			valid = (ULONG_MAX + !!Q_WRP(&llq, llq.prod)) & mask;
> -			atomic_long_cond_read_relaxed(ptr, (VAL & mask) == valid);
> -		}
> +	smmu->prev_cmd_opcode = FIELD_GET(CMDQ_0_OP, cmd[0]);
>  
> -		llq.prod = queue_inc_prod_n(&llq, limit - sbidx);
> +	while (queue_insert_raw(q, cmd) == -ENOSPC) {
> +		if (queue_poll_cons(q, false, wfe))
> +			dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
>  	}
>  }
>  
> -/* Mark all entries in the range [sprod, eprod) as valid */
> -static void arm_smmu_cmdq_set_valid_map(struct arm_smmu_cmdq *cmdq,
> -					u32 sprod, u32 eprod)
> -{
> -	__arm_smmu_cmdq_poll_set_valid_map(cmdq, sprod, eprod, true);
> -}
> -
> -/* Wait for all entries in the range [sprod, eprod) to become valid */
> -static void arm_smmu_cmdq_poll_valid_map(struct arm_smmu_cmdq *cmdq,
> -					 u32 sprod, u32 eprod)
> -{
> -	__arm_smmu_cmdq_poll_set_valid_map(cmdq, sprod, eprod, false);
> -}
> -
> -/* Wait for the command queue to become non-full */
> -static int arm_smmu_cmdq_poll_until_not_full(struct arm_smmu_device *smmu,
> -					     struct arm_smmu_ll_queue *llq)
> +static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
> +				    struct arm_smmu_cmdq_ent *ent)
>  {
> +	u64 cmd[CMDQ_ENT_DWORDS];
>  	unsigned long flags;
> -	struct arm_smmu_queue_poll qp;
> -	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
> -	int ret = 0;
>  
> -	/*
> -	 * Try to update our copy of cons by grabbing exclusive cmdq access. If
> -	 * that fails, spin until somebody else updates it for us.
> -	 */
> -	if (arm_smmu_cmdq_exclusive_trylock_irqsave(cmdq, flags)) {
> -		WRITE_ONCE(cmdq->q.llq.cons, readl_relaxed(cmdq->q.cons_reg));
> -		arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags);
> -		llq->val = READ_ONCE(cmdq->q.llq.val);
> -		return 0;
> +	if (arm_smmu_cmdq_build_cmd(cmd, ent)) {
> +		dev_warn(smmu->dev, "ignoring unknown CMDQ opcode 0x%x\n",
> +			 ent->opcode);
> +		return;
>  	}
>  
> -	queue_poll_init(smmu, &qp);
> -	do {
> -		llq->val = READ_ONCE(smmu->cmdq.q.llq.val);
> -		if (!queue_full(llq))
> -			break;
> -
> -		ret = queue_poll(&qp);
> -	} while (!ret);
> -
> -	return ret;
> -}
> -
> -/*
> - * Wait until the SMMU signals a CMD_SYNC completion MSI.
> - * Must be called with the cmdq lock held in some capacity.
> - */
> -static int __arm_smmu_cmdq_poll_until_msi(struct arm_smmu_device *smmu,
> -					  struct arm_smmu_ll_queue *llq)
> -{
> -	int ret = 0;
> -	struct arm_smmu_queue_poll qp;
> -	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
> -	u32 *cmd = (u32 *)(Q_ENT(&cmdq->q, llq->prod));
> -
> -	queue_poll_init(smmu, &qp);
> -
> -	/*
> -	 * The MSI won't generate an event, since it's being written back
> -	 * into the command queue.
> -	 */
> -	qp.wfe = false;
> -	smp_cond_load_relaxed(cmd, !VAL || (ret = queue_poll(&qp)));
> -	llq->cons = ret ? llq->prod : queue_inc_prod_n(llq, 1);
> -	return ret;
> +	spin_lock_irqsave(&smmu->cmdq.lock, flags);
> +	arm_smmu_cmdq_insert_cmd(smmu, cmd);
> +	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
>  }
>  
>  /*
> - * Wait until the SMMU cons index passes llq->prod.
> - * Must be called with the cmdq lock held in some capacity.
> + * The difference between val and sync_idx is bounded by the maximum size of
> + * a queue at 2^20 entries, so 32 bits is plenty for wrap-safe arithmetic.
>   */
> -static int __arm_smmu_cmdq_poll_until_consumed(struct arm_smmu_device *smmu,
> -					       struct arm_smmu_ll_queue *llq)
> +static int __arm_smmu_sync_poll_msi(struct arm_smmu_device *smmu, u32 sync_idx)
>  {
> -	struct arm_smmu_queue_poll qp;
> -	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
> -	u32 prod = llq->prod;
> -	int ret = 0;
> +	ktime_t timeout;
> +	u32 val;
>  
> -	queue_poll_init(smmu, &qp);
> -	llq->val = READ_ONCE(smmu->cmdq.q.llq.val);
> -	do {
> -		if (queue_consumed(llq, prod))
> -			break;
> -
> -		ret = queue_poll(&qp);
> -
> -		/*
> -		 * This needs to be a readl() so that our subsequent call
> -		 * to arm_smmu_cmdq_shared_tryunlock() can fail accurately.
> -		 *
> -		 * Specifically, we need to ensure that we observe all
> -		 * shared_lock()s by other CMD_SYNCs that share our owner,
> -		 * so that a failing call to tryunlock() means that we're
> -		 * the last one out and therefore we can safely advance
> -		 * cmdq->q.llq.cons. Roughly speaking:
> -		 *
> -		 * CPU 0		CPU1			CPU2 (us)
> -		 *
> -		 * if (sync)
> -		 * 	shared_lock();
> -		 *
> -		 * dma_wmb();
> -		 * set_valid_map();
> -		 *
> -		 * 			if (owner) {
> -		 *				poll_valid_map();
> -		 *				<control dependency>
> -		 *				writel(prod_reg);
> -		 *
> -		 *						readl(cons_reg);
> -		 *						tryunlock();
> -		 *
> -		 * Requires us to see CPU 0's shared_lock() acquisition.
> -		 */
> -		llq->cons = readl(cmdq->q.cons_reg);
> -	} while (!ret);
> +	timeout = ktime_add_us(ktime_get(), ARM_SMMU_CMDQ_SYNC_TIMEOUT_US);
> +	val = smp_cond_load_acquire(&smmu->sync_count,
> +				    (int)(VAL - sync_idx) >= 0 ||
> +				    !ktime_before(ktime_get(), timeout));
>  
> -	return ret;
> +	return (int)(val - sync_idx) < 0 ? -ETIMEDOUT : 0;
>  }
>  
> -static int arm_smmu_cmdq_poll_until_sync(struct arm_smmu_device *smmu,
> -					 struct arm_smmu_ll_queue *llq)
> +static int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
>  {
> -	if (smmu->features & ARM_SMMU_FEAT_MSI &&
> -	    smmu->features & ARM_SMMU_FEAT_COHERENCY)
> -		return __arm_smmu_cmdq_poll_until_msi(smmu, llq);
> -
> -	return __arm_smmu_cmdq_poll_until_consumed(smmu, llq);
> -}
> -
> -static void arm_smmu_cmdq_write_entries(struct arm_smmu_cmdq *cmdq, u64 *cmds,
> -					u32 prod, int n)
> -{
> -	int i;
> -	struct arm_smmu_ll_queue llq = {
> -		.max_n_shift	= cmdq->q.llq.max_n_shift,
> -		.prod		= prod,
> -	};
> -
> -	for (i = 0; i < n; ++i) {
> -		u64 *cmd = &cmds[i * CMDQ_ENT_DWORDS];
> -
> -		prod = queue_inc_prod_n(&llq, i);
> -		queue_write(Q_ENT(&cmdq->q, prod), cmd, CMDQ_ENT_DWORDS);
> -	}
> -}
> -
> -/*
> - * This is the actual insertion function, and provides the following
> - * ordering guarantees to callers:
> - *
> - * - There is a dma_wmb() before publishing any commands to the queue.
> - *   This can be relied upon to order prior writes to data structures
> - *   in memory (such as a CD or an STE) before the command.
> - *
> - * - On completion of a CMD_SYNC, there is a control dependency.
> - *   This can be relied upon to order subsequent writes to memory (e.g.
> - *   freeing an IOVA) after completion of the CMD_SYNC.
> - *
> - * - Command insertion is totally ordered, so if two CPUs each race to
> - *   insert their own list of commands then all of the commands from one
> - *   CPU will appear before any of the commands from the other CPU.
> - */
> -static int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
> -				       u64 *cmds, int n, bool sync)
> -{
> -	u64 cmd_sync[CMDQ_ENT_DWORDS];
> -	u32 prod;
> +	u64 cmd[CMDQ_ENT_DWORDS];
>  	unsigned long flags;
> -	bool owner;
> -	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
> -	struct arm_smmu_ll_queue llq = {
> -		.max_n_shift = cmdq->q.llq.max_n_shift,
> -	}, head = llq;
> -	int ret = 0;
> -
> -	/* 1. Allocate some space in the queue */
> -	local_irq_save(flags);
> -	llq.val = READ_ONCE(cmdq->q.llq.val);
> -	do {
> -		u64 old;
> -
> -		while (!queue_has_space(&llq, n + sync)) {
> -			local_irq_restore(flags);
> -			if (arm_smmu_cmdq_poll_until_not_full(smmu, &llq))
> -				dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
> -			local_irq_save(flags);
> -		}
> -
> -		head.cons = llq.cons;
> -		head.prod = queue_inc_prod_n(&llq, n + sync) |
> -					     CMDQ_PROD_OWNED_FLAG;
> -
> -		old = cmpxchg_relaxed(&cmdq->q.llq.val, llq.val, head.val);
> -		if (old == llq.val)
> -			break;
> -
> -		llq.val = old;
> -	} while (1);
> -	owner = !(llq.prod & CMDQ_PROD_OWNED_FLAG);
> -	head.prod &= ~CMDQ_PROD_OWNED_FLAG;
> -	llq.prod &= ~CMDQ_PROD_OWNED_FLAG;
> -
> -	/*
> -	 * 2. Write our commands into the queue
> -	 * Dependency ordering from the cmpxchg() loop above.
> -	 */
> -	arm_smmu_cmdq_write_entries(cmdq, cmds, llq.prod, n);
> -	if (sync) {
> -		prod = queue_inc_prod_n(&llq, n);
> -		arm_smmu_cmdq_build_sync_cmd(cmd_sync, smmu, prod);
> -		queue_write(Q_ENT(&cmdq->q, prod), cmd_sync, CMDQ_ENT_DWORDS);
> -
> -		/*
> -		 * In order to determine completion of our CMD_SYNC, we must
> -		 * ensure that the queue can't wrap twice without us noticing.
> -		 * We achieve that by taking the cmdq lock as shared before
> -		 * marking our slot as valid.
> -		 */
> -		arm_smmu_cmdq_shared_lock(cmdq);
> -	}
> -
> -	/* 3. Mark our slots as valid, ensuring commands are visible first */
> -	dma_wmb();
> -	arm_smmu_cmdq_set_valid_map(cmdq, llq.prod, head.prod);
> -
> -	/* 4. If we are the owner, take control of the SMMU hardware */
> -	if (owner) {
> -		/* a. Wait for previous owner to finish */
> -		atomic_cond_read_relaxed(&cmdq->owner_prod, VAL == llq.prod);
> -
> -		/* b. Stop gathering work by clearing the owned flag */
> -		prod = atomic_fetch_andnot_relaxed(CMDQ_PROD_OWNED_FLAG,
> -						   &cmdq->q.llq.atomic.prod);
> -		prod &= ~CMDQ_PROD_OWNED_FLAG;
> +	struct arm_smmu_cmdq_ent  ent = {
> +		.opcode = CMDQ_OP_CMD_SYNC,
> +		.sync	= {
> +			.msiaddr = virt_to_phys(&smmu->sync_count),
> +		},
> +	};
>  
> -		/*
> -		 * c. Wait for any gathered work to be written to the queue.
> -		 * Note that we read our own entries so that we have the control
> -		 * dependency required by (d).
> -		 */
> -		arm_smmu_cmdq_poll_valid_map(cmdq, llq.prod, prod);
> +	spin_lock_irqsave(&smmu->cmdq.lock, flags);
>  
> -		/*
> -		 * d. Advance the hardware prod pointer
> -		 * Control dependency ordering from the entries becoming valid.
> -		 */
> -		writel_relaxed(prod, cmdq->q.prod_reg);
> -
> -		/*
> -		 * e. Tell the next owner we're done
> -		 * Make sure we've updated the hardware first, so that we don't
> -		 * race to update prod and potentially move it backwards.
> -		 */
> -		atomic_set_release(&cmdq->owner_prod, prod);
> +	/* Piggy-back on the previous command if it's a SYNC */
> +	if (smmu->prev_cmd_opcode == CMDQ_OP_CMD_SYNC) {
> +		ent.sync.msidata = smmu->sync_nr;
> +	} else {
> +		ent.sync.msidata = ++smmu->sync_nr;
> +		arm_smmu_cmdq_build_cmd(cmd, &ent);
> +		arm_smmu_cmdq_insert_cmd(smmu, cmd);
>  	}
>  
> -	/* 5. If we are inserting a CMD_SYNC, we must wait for it to complete */
> -	if (sync) {
> -		llq.prod = queue_inc_prod_n(&llq, n);
> -		ret = arm_smmu_cmdq_poll_until_sync(smmu, &llq);
> -		if (ret) {
> -			dev_err_ratelimited(smmu->dev,
> -					    "CMD_SYNC timeout at 0x%08x [hwprod 0x%08x, hwcons 0x%08x]\n",
> -					    llq.prod,
> -					    readl_relaxed(cmdq->q.prod_reg),
> -					    readl_relaxed(cmdq->q.cons_reg));
> -		}
> -
> -		/*
> -		 * Try to unlock the cmdq lock. This will fail if we're the last
> -		 * reader, in which case we can safely update cmdq->q.llq.cons
> -		 */
> -		if (!arm_smmu_cmdq_shared_tryunlock(cmdq)) {
> -			WRITE_ONCE(cmdq->q.llq.cons, llq.cons);
> -			arm_smmu_cmdq_shared_unlock(cmdq);
> -		}
> -	}
> +	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
>  
> -	local_irq_restore(flags);
> -	return ret;
> +	return __arm_smmu_sync_poll_msi(smmu, ent.sync.msidata);
>  }
>  
> -static int arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
> -				   struct arm_smmu_cmdq_ent *ent)
> +static int __arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
>  {
>  	u64 cmd[CMDQ_ENT_DWORDS];
> +	unsigned long flags;
> +	bool wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
> +	struct arm_smmu_cmdq_ent ent = { .opcode = CMDQ_OP_CMD_SYNC };
> +	int ret;
>  
> -	if (arm_smmu_cmdq_build_cmd(cmd, ent)) {
> -		dev_warn(smmu->dev, "ignoring unknown CMDQ opcode 0x%x\n",
> -			 ent->opcode);
> -		return -EINVAL;
> -	}
> +	arm_smmu_cmdq_build_cmd(cmd, &ent);
>  
> -	return arm_smmu_cmdq_issue_cmdlist(smmu, cmd, 1, false);
> -}
> +	spin_lock_irqsave(&smmu->cmdq.lock, flags);
> +	arm_smmu_cmdq_insert_cmd(smmu, cmd);
> +	ret = queue_poll_cons(&smmu->cmdq.q, true, wfe);
> +	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
>  
> -static int arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
> -{
> -	return arm_smmu_cmdq_issue_cmdlist(smmu, NULL, 0, true);
> +	return ret;
>  }
>  
> -static void arm_smmu_cmdq_batch_add(struct arm_smmu_device *smmu,
> -				    struct arm_smmu_cmdq_batch *cmds,
> -				    struct arm_smmu_cmdq_ent *cmd)
> +static int arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
>  {
> -	if (cmds->num == CMDQ_BATCH_ENTRIES) {
> -		arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num, false);
> -		cmds->num = 0;
> -	}
> -	arm_smmu_cmdq_build_cmd(&cmds->cmds[cmds->num * CMDQ_ENT_DWORDS], cmd);
> -	cmds->num++;
> -}
> +	int ret;
> +	bool msi = (smmu->features & ARM_SMMU_FEAT_MSI) &&
> +		   (smmu->features & ARM_SMMU_FEAT_COHERENCY);
>  
> -static int arm_smmu_cmdq_batch_submit(struct arm_smmu_device *smmu,
> -				      struct arm_smmu_cmdq_batch *cmds)
> -{
> -	return arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num, true);
> +	ret = msi ? __arm_smmu_cmdq_issue_sync_msi(smmu)
> +		  : __arm_smmu_cmdq_issue_sync(smmu);
> +	if (ret)
> +		dev_err_ratelimited(smmu->dev, "CMD_SYNC timeout\n");
> +	return ret;
>  }
>  
>  /* Context descriptor manipulation functions */
> @@ -1535,7 +1137,6 @@ static void arm_smmu_sync_cd(struct arm_smmu_domain *smmu_domain,
>  	size_t i;
>  	unsigned long flags;
>  	struct arm_smmu_master *master;
> -	struct arm_smmu_cmdq_batch cmds = {};
>  	struct arm_smmu_device *smmu = smmu_domain->smmu;
>  	struct arm_smmu_cmdq_ent cmd = {
>  		.opcode	= CMDQ_OP_CFGI_CD,
> @@ -1549,12 +1150,12 @@ static void arm_smmu_sync_cd(struct arm_smmu_domain *smmu_domain,
>  	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
>  		for (i = 0; i < master->num_sids; i++) {
>  			cmd.cfgi.sid = master->sids[i];
> -			arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
> +			arm_smmu_cmdq_issue_cmd(smmu, &cmd);
>  		}
>  	}
>  	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
>  
> -	arm_smmu_cmdq_batch_submit(smmu, &cmds);
> +	arm_smmu_cmdq_issue_sync(smmu);
>  }
>  
>  static int arm_smmu_alloc_cd_leaf_table(struct arm_smmu_device *smmu,
> @@ -2189,16 +1790,17 @@ arm_smmu_atc_inv_to_cmd(int ssid, unsigned long iova, size_t size,
>  	cmd->atc.size	= log2_span;
>  }
>  
> -static int arm_smmu_atc_inv_master(struct arm_smmu_master *master)
> +static int arm_smmu_atc_inv_master(struct arm_smmu_master *master,
> +				   struct arm_smmu_cmdq_ent *cmd)
>  {
>  	int i;
> -	struct arm_smmu_cmdq_ent cmd;
>  
> -	arm_smmu_atc_inv_to_cmd(0, 0, 0, &cmd);
> +	if (!master->ats_enabled)
> +		return 0;
>  
>  	for (i = 0; i < master->num_sids; i++) {
> -		cmd.atc.sid = master->sids[i];
> -		arm_smmu_cmdq_issue_cmd(master->smmu, &cmd);
> +		cmd->atc.sid = master->sids[i];
> +		arm_smmu_cmdq_issue_cmd(master->smmu, cmd);
>  	}
>  
>  	return arm_smmu_cmdq_issue_sync(master->smmu);
> @@ -2207,11 +1809,10 @@ static int arm_smmu_atc_inv_master(struct arm_smmu_master *master)
>  static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
>  				   int ssid, unsigned long iova, size_t size)
>  {
> -	int i;
> +	int ret = 0;
>  	unsigned long flags;
>  	struct arm_smmu_cmdq_ent cmd;
>  	struct arm_smmu_master *master;
> -	struct arm_smmu_cmdq_batch cmds = {};
>  
>  	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_ATS))
>  		return 0;
> @@ -2236,18 +1837,11 @@ static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
>  	arm_smmu_atc_inv_to_cmd(ssid, iova, size, &cmd);
>  
>  	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
> -	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
> -		if (!master->ats_enabled)
> -			continue;
> -
> -		for (i = 0; i < master->num_sids; i++) {
> -			cmd.atc.sid = master->sids[i];
> -			arm_smmu_cmdq_batch_add(smmu_domain->smmu, &cmds, &cmd);
> -		}
> -	}
> +	list_for_each_entry(master, &smmu_domain->devices, domain_head)
> +		ret |= arm_smmu_atc_inv_master(master, &cmd);
>  	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
>  
> -	return arm_smmu_cmdq_batch_submit(smmu_domain->smmu, &cmds);
> +	return ret ? -ETIMEDOUT : 0;
>  }
>  
>  /* IO_PGTABLE API */
> @@ -2269,32 +1863,27 @@ static void arm_smmu_tlb_inv_context(void *cookie)
>  	/*
>  	 * NOTE: when io-pgtable is in non-strict mode, we may get here with
>  	 * PTEs previously cleared by unmaps on the current CPU not yet visible
> -	 * to the SMMU. We are relying on the dma_wmb() implicit during cmd
> -	 * insertion to guarantee those are observed before the TLBI. Do be
> -	 * careful, 007.
> +	 * to the SMMU. We are relying on the DSB implicit in
> +	 * queue_sync_prod_out() to guarantee those are observed before the
> +	 * TLBI. Do be careful, 007.
>  	 */
>  	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
>  	arm_smmu_cmdq_issue_sync(smmu);
>  	arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0);
>  }
>  
> -static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size,
> -				   size_t granule, bool leaf,
> -				   struct arm_smmu_domain *smmu_domain)
> +static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
> +					  size_t granule, bool leaf, void *cookie)
>  {
> +	struct arm_smmu_domain *smmu_domain = cookie;
>  	struct arm_smmu_device *smmu = smmu_domain->smmu;
> -	unsigned long start = iova, end = iova + size, num_pages = 0, tg = 0;
> -	size_t inv_range = granule;
> -	struct arm_smmu_cmdq_batch cmds = {};
>  	struct arm_smmu_cmdq_ent cmd = {
>  		.tlbi = {
>  			.leaf	= leaf,
> +			.addr	= iova,
>  		},
>  	};
>  
> -	if (!size)
> -		return;
> -
>  	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
>  		cmd.opcode	= CMDQ_OP_TLBI_NH_VA;
>  		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd.asid;
> @@ -2303,78 +1892,37 @@ static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size,
>  		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
>  	}
>  
> -	if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
> -		/* Get the leaf page size */
> -		tg = __ffs(smmu_domain->domain.pgsize_bitmap);
> -
> -		/* Convert page size of 12,14,16 (log2) to 1,2,3 */
> -		cmd.tlbi.tg = (tg - 10) / 2;
> -
> -		/* Determine what level the granule is at */
> -		cmd.tlbi.ttl = 4 - ((ilog2(granule) - 3) / (tg - 3));
> -
> -		num_pages = size >> tg;
> -	}
> -
> -	while (iova < end) {
> -		if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
> -			/*
> -			 * On each iteration of the loop, the range is 5 bits
> -			 * worth of the aligned size remaining.
> -			 * The range in pages is:
> -			 *
> -			 * range = (num_pages & (0x1f << __ffs(num_pages)))
> -			 */
> -			unsigned long scale, num;
> -
> -			/* Determine the power of 2 multiple number of pages */
> -			scale = __ffs(num_pages);
> -			cmd.tlbi.scale = scale;
> -
> -			/* Determine how many chunks of 2^scale size we have */
> -			num = (num_pages >> scale) & CMDQ_TLBI_RANGE_NUM_MAX;
> -			cmd.tlbi.num = num - 1;
> -
> -			/* range is num * 2^scale * pgsize */
> -			inv_range = num << (scale + tg);
> -
> -			/* Clear out the lower order bits for the next iteration */
> -			num_pages -= num << scale;
> -		}
> -
> -		cmd.tlbi.addr = iova;
> -		arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
> -		iova += inv_range;
> -	}
> -	arm_smmu_cmdq_batch_submit(smmu, &cmds);
> -
> -	/*
> -	 * Unfortunately, this can't be leaf-only since we may have
> -	 * zapped an entire table.
> -	 */
> -	arm_smmu_atc_inv_domain(smmu_domain, 0, start, size);
> +	do {
> +		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> +		cmd.tlbi.addr += granule;
> +	} while (size -= granule);
>  }
>  
>  static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather,
>  					 unsigned long iova, size_t granule,
>  					 void *cookie)
>  {
> -	struct arm_smmu_domain *smmu_domain = cookie;
> -	struct iommu_domain *domain = &smmu_domain->domain;
> -
> -	iommu_iotlb_gather_add_page(domain, gather, iova, granule);
> +	arm_smmu_tlb_inv_range_nosync(iova, granule, granule, true, cookie);
>  }
>  
>  static void arm_smmu_tlb_inv_walk(unsigned long iova, size_t size,
>  				  size_t granule, void *cookie)
>  {
> -	arm_smmu_tlb_inv_range(iova, size, granule, false, cookie);
> +	struct arm_smmu_domain *smmu_domain = cookie;
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +
> +	arm_smmu_tlb_inv_range_nosync(iova, size, granule, false, cookie);
> +	arm_smmu_cmdq_issue_sync(smmu);
>  }
>  
>  static void arm_smmu_tlb_inv_leaf(unsigned long iova, size_t size,
>  				  size_t granule, void *cookie)
>  {
> -	arm_smmu_tlb_inv_range(iova, size, granule, true, cookie);
> +	struct arm_smmu_domain *smmu_domain = cookie;
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +
> +	arm_smmu_tlb_inv_range_nosync(iova, size, granule, true, cookie);
> +	arm_smmu_cmdq_issue_sync(smmu);
>  }
>  
>  static const struct iommu_flush_ops arm_smmu_flush_ops = {
> @@ -2700,6 +2248,7 @@ static void arm_smmu_enable_ats(struct arm_smmu_master *master)
>  
>  static void arm_smmu_disable_ats(struct arm_smmu_master *master)
>  {
> +	struct arm_smmu_cmdq_ent cmd;
>  	struct arm_smmu_domain *smmu_domain = master->domain;
>  
>  	if (!master->ats_enabled)
> @@ -2711,8 +2260,9 @@ static void arm_smmu_disable_ats(struct arm_smmu_master *master)
>  	 * ATC invalidation via the SMMU.
>  	 */
>  	wmb();
> -	arm_smmu_atc_inv_master(master);
> -	atomic_dec(&smmu_domain->nr_ats_masters);
> +	arm_smmu_atc_inv_to_cmd(0, 0, 0, &cmd);
> +	arm_smmu_atc_inv_master(master, &cmd);
> +	atomic_dec(&smmu_domain->nr_ats_masters);
>  }
>  
>  static int arm_smmu_enable_pasid(struct arm_smmu_master *master)
> @@ -2875,10 +2425,10 @@ static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain)
>  static void arm_smmu_iotlb_sync(struct iommu_domain *domain,
>  				struct iommu_iotlb_gather *gather)
>  {
> -	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> +	struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu;
>  
> -	arm_smmu_tlb_inv_range(gather->start, gather->end - gather->start,
> -			       gather->pgsize, true, smmu_domain);
> +	if (smmu)
> +		arm_smmu_cmdq_issue_sync(smmu);
>  }
>  
>  static phys_addr_t
> @@ -3176,49 +2726,18 @@ static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
>  	return 0;
>  }
>  
> -static void arm_smmu_cmdq_free_bitmap(void *data)
> -{
> -	unsigned long *bitmap = data;
> -	bitmap_free(bitmap);
> -}
> -
> -static int arm_smmu_cmdq_init(struct arm_smmu_device *smmu)
> -{
> -	int ret = 0;
> -	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
> -	unsigned int nents = 1 << cmdq->q.llq.max_n_shift;
> -	atomic_long_t *bitmap;
> -
> -	atomic_set(&cmdq->owner_prod, 0);
> -	atomic_set(&cmdq->lock, 0);
> -
> -	bitmap = (atomic_long_t *)bitmap_zalloc(nents, GFP_KERNEL);
> -	if (!bitmap) {
> -		dev_err(smmu->dev, "failed to allocate cmdq bitmap\n");
> -		ret = -ENOMEM;
> -	} else {
> -		cmdq->valid_map = bitmap;
> -		devm_add_action(smmu->dev, arm_smmu_cmdq_free_bitmap, bitmap);
> -	}
> -
> -	return ret;
> -}
> -
>  static int arm_smmu_init_queues(struct arm_smmu_device *smmu)
>  {
>  	int ret;
>  
>  	/* cmdq */
> +	spin_lock_init(&smmu->cmdq.lock);
>  	ret = arm_smmu_init_one_queue(smmu, &smmu->cmdq.q, ARM_SMMU_CMDQ_PROD,
>  				      ARM_SMMU_CMDQ_CONS, CMDQ_ENT_DWORDS,
>  				      "cmdq");
>  	if (ret)
>  		return ret;
>  
> -	ret = arm_smmu_cmdq_init(smmu);
> -	if (ret)
> -		return ret;
> -
>  	/* evtq */
>  	ret = arm_smmu_init_one_queue(smmu, &smmu->evtq.q, ARM_SMMU_EVTQ_PROD,
>  				      ARM_SMMU_EVTQ_CONS, EVTQ_ENT_DWORDS,
> @@ -3799,15 +3318,9 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>  	/* Queue sizes, capped to ensure natural alignment */
>  	smmu->cmdq.q.llq.max_n_shift = min_t(u32, CMDQ_MAX_SZ_SHIFT,
>  					     FIELD_GET(IDR1_CMDQS, reg));
> -	if (smmu->cmdq.q.llq.max_n_shift <= ilog2(CMDQ_BATCH_ENTRIES)) {
> -		/*
> -		 * We don't support splitting up batches, so one batch of
> -		 * commands plus an extra sync needs to fit inside the command
> -		 * queue. There's also no way we can handle the weird alignment
> -		 * restrictions on the base pointer for a unit-length queue.
> -		 */
> -		dev_err(smmu->dev, "command queue size <= %d entries not supported\n",
> -			CMDQ_BATCH_ENTRIES);
> +	if (!smmu->cmdq.q.llq.max_n_shift) {
> +		/* Odd alignment restrictions on the base, so ignore for now */
> +		dev_err(smmu->dev, "unit-length command queue not supported\n");
>  		return -ENXIO;
>  	}
>  
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 00:20:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 00:20:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42348.76112 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkFsl-0002Tu-8O; Wed, 02 Dec 2020 00:20:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42348.76112; Wed, 02 Dec 2020 00:20:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkFsl-0002Tn-51; Wed, 02 Dec 2020 00:20:31 +0000
Received: by outflank-mailman (input) for mailman id 42348;
 Wed, 02 Dec 2020 00:20:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GudV=FG=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kkFsk-0002Tf-CM
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 00:20:30 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c11489d7-6c4c-4183-9e0f-11cf252e5f22;
 Wed, 02 Dec 2020 00:20:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c11489d7-6c4c-4183-9e0f-11cf252e5f22
Date: Tue, 1 Dec 2020 16:20:27 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606868428;
	bh=qyMhDrZnkHAEKbdQSOWunqEA0ieuUB8ftXYMtUKkDa8=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=Xx+DCCJdkqkD6Bo43+XUqHMIF9JmuX0r2XHH8NJd6dDlVRTnKMSqRamN9TNr5tQbK
	 GIrEl7Hbdg/zQF6TUvfV+qivEK6Ib5lV6p7GB/Wfo5xF92udV1pVm3LdHU7gibCTtN
	 d2Em2H5+N5riSZexUJ6vkhYcY7f/SJldbnbqt47w=
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 3/8] xen/arm: revert patch related to XArray
In-Reply-To: <612c1adabc1c26a539abf0dc05ea20b51e66e85f.1606406359.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2012011620180.1100@sstabellini-ThinkPad-T480s>
References: <cover.1606406359.git.rahul.singh@arm.com> <612c1adabc1c26a539abf0dc05ea20b51e66e85f.1606406359.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 26 Nov 2020, Rahul Singh wrote:
> XArray is not implemented in Xen; revert the patch that introduced the
> XArray code in the SMMUv3 driver.
> 
> Once XArray is implemented in Xen, this patch can be reapplied.
> 
> This reverts commit 0299a1a81ca056e79c1a7fb751f936ec0d5c7afe.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/drivers/passthrough/arm/smmu-v3.c | 27 +++++++++------------------
>  1 file changed, 9 insertions(+), 18 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> index 97eac61ea4..cec304e51a 100644
> --- a/xen/drivers/passthrough/arm/smmu-v3.c
> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> @@ -638,6 +638,7 @@ struct arm_smmu_device {
>  
>  #define ARM_SMMU_MAX_ASIDS		(1 << 16)
>  	unsigned int			asid_bits;
> +	DECLARE_BITMAP(asid_map, ARM_SMMU_MAX_ASIDS);
>  
>  #define ARM_SMMU_MAX_VMIDS		(1 << 16)
>  	unsigned int			vmid_bits;
> @@ -703,8 +704,6 @@ struct arm_smmu_option_prop {
>  	const char *prop;
>  };
>  
> -static DEFINE_XARRAY_ALLOC1(asid_xa);
> -
>  static struct arm_smmu_option_prop arm_smmu_options[] = {
>  	{ ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" },
>  	{ ARM_SMMU_OPT_PAGE0_REGS_ONLY, "cavium,cn9900-broken-page1-regspace"},
> @@ -1366,14 +1365,6 @@ static void arm_smmu_free_cd_tables(struct arm_smmu_domain *smmu_domain)
>  	cdcfg->cdtab = NULL;
>  }
>  
> -static void arm_smmu_free_asid(struct arm_smmu_ctx_desc *cd)
> -{
> -	if (!cd->asid)
> -		return;
> -
> -	xa_erase(&asid_xa, cd->asid);
> -}
> -
>  /* Stream table manipulation functions */
>  static void
>  arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc *desc)
> @@ -2006,9 +1997,10 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
>  	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
>  		struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
>  
> -		if (cfg->cdcfg.cdtab)
> +		if (cfg->cdcfg.cdtab) {
>  			arm_smmu_free_cd_tables(smmu_domain);
> -		arm_smmu_free_asid(&cfg->cd);
> +			arm_smmu_bitmap_free(smmu->asid_map, cfg->cd.asid);
> +		}
>  	} else {
>  		struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
>  		if (cfg->vmid)
> @@ -2023,15 +2015,14 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
>  				       struct io_pgtable_cfg *pgtbl_cfg)
>  {
>  	int ret;
> -	u32 asid;
> +	int asid;
>  	struct arm_smmu_device *smmu = smmu_domain->smmu;
>  	struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
>  	typeof(&pgtbl_cfg->arm_lpae_s1_cfg.tcr) tcr = &pgtbl_cfg->arm_lpae_s1_cfg.tcr;
>  
> -	ret = xa_alloc(&asid_xa, &asid, &cfg->cd,
> -		       XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL);
> -	if (ret)
> -		return ret;
> +	asid = arm_smmu_bitmap_alloc(smmu->asid_map, smmu->asid_bits);
> +	if (asid < 0)
> +		return asid;
>  
>  	cfg->s1cdmax = master->ssid_bits;
>  
> @@ -2064,7 +2055,7 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
>  out_free_cd_tables:
>  	arm_smmu_free_cd_tables(smmu_domain);
>  out_free_asid:
> -	arm_smmu_free_asid(&cfg->cd);
> +	arm_smmu_bitmap_free(smmu->asid_map, asid);
>  	return ret;
>  }
>  
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 00:33:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 00:33:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42354.76123 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkG58-0003bG-Ea; Wed, 02 Dec 2020 00:33:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42354.76123; Wed, 02 Dec 2020 00:33:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkG58-0003b9-Bf; Wed, 02 Dec 2020 00:33:18 +0000
Received: by outflank-mailman (input) for mailman id 42354;
 Wed, 02 Dec 2020 00:33:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GudV=FG=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kkG57-0003b4-6P
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 00:33:17 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6e387a90-f591-47e4-bd6d-4bcd3eaf26dd;
 Wed, 02 Dec 2020 00:33:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6e387a90-f591-47e4-bd6d-4bcd3eaf26dd
Date: Tue, 1 Dec 2020 16:33:13 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606869195;
	bh=XhQ+H+8voz/0mH0rCAaTQG1Ag9N3dzvfRvLIH8u8bf4=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=fnnlv7FTx8ADFtkhfMLEAHBbszAC53YqfZqXEfMZSvMn8sfb0EZp0tGbbXdQ3wxlF
	 5OypsidvoEigxVfZnoUeG/0yumYkSVU7BfBrHyWl5rS+t6vQoebQPrhsLAPEymUL2R
	 03KB5eisi0vl4/Ms9H7zmHgckKlZMteJVm+JdKkI=
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 4/8] xen/arm: Remove support for MSI on SMMUv3
In-Reply-To: <cfc6cbe23f05162d5c62df9db09fef3f8e0b8e14.1606406359.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2012011621380.1100@sstabellini-ThinkPad-T480s>
References: <cover.1606406359.git.rahul.singh@arm.com> <cfc6cbe23f05162d5c62df9db09fef3f8e0b8e14.1606406359.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 26 Nov 2020, Rahul Singh wrote:
> Xen does not support MSI on Arm platforms, so remove the MSI
> support from the SMMUv3 driver.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>

I wonder if it makes sense to #ifdef CONFIG_MSI this code instead of
removing it completely.


In the past, we tried to keep the entire file as textually similar to
the original Linux driver as possible to make it easier to backport
features and fixes. So, in this case we would probably not even have
used an #ifdef but maybe something like:

  if (/* msi_enabled */ 0)
      return;

at the beginning of arm_smmu_setup_msis.


However, that strategy didn't actually work very well because backports
have proven difficult to do anyway. So at that point we might as well at
least have clean code in Xen and do the changes properly.

So that's my reasoning for accepting this patch :-)

Julien, are you happy with this too?


> ---
>  xen/drivers/passthrough/arm/smmu-v3.c | 176 +-------------------------
>  1 file changed, 3 insertions(+), 173 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> index cec304e51a..401f7ae006 100644
> --- a/xen/drivers/passthrough/arm/smmu-v3.c
> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> @@ -416,31 +416,6 @@ enum pri_resp {
>  	PRI_RESP_SUCC = 2,
>  };
>  
> -enum arm_smmu_msi_index {
> -	EVTQ_MSI_INDEX,
> -	GERROR_MSI_INDEX,
> -	PRIQ_MSI_INDEX,
> -	ARM_SMMU_MAX_MSIS,
> -};
> -
> -static phys_addr_t arm_smmu_msi_cfg[ARM_SMMU_MAX_MSIS][3] = {
> -	[EVTQ_MSI_INDEX] = {
> -		ARM_SMMU_EVTQ_IRQ_CFG0,
> -		ARM_SMMU_EVTQ_IRQ_CFG1,
> -		ARM_SMMU_EVTQ_IRQ_CFG2,
> -	},
> -	[GERROR_MSI_INDEX] = {
> -		ARM_SMMU_GERROR_IRQ_CFG0,
> -		ARM_SMMU_GERROR_IRQ_CFG1,
> -		ARM_SMMU_GERROR_IRQ_CFG2,
> -	},
> -	[PRIQ_MSI_INDEX] = {
> -		ARM_SMMU_PRIQ_IRQ_CFG0,
> -		ARM_SMMU_PRIQ_IRQ_CFG1,
> -		ARM_SMMU_PRIQ_IRQ_CFG2,
> -	},
> -};
> -
>  struct arm_smmu_cmdq_ent {
>  	/* Common fields */
>  	u8				opcode;
> @@ -504,10 +479,6 @@ struct arm_smmu_cmdq_ent {
>  		} pri;
>  
>  		#define CMDQ_OP_CMD_SYNC	0x46
> -		struct {
> -			u32			msidata;
> -			u64			msiaddr;
> -		} sync;
>  	};
>  };
>  
> @@ -649,12 +620,6 @@ struct arm_smmu_device {
>  
>  	struct arm_smmu_strtab_cfg	strtab_cfg;
>  
> -	/* Hi16xx adds an extra 32 bits of goodness to its MSI payload */
> -	union {
> -		u32			sync_count;
> -		u64			padding;
> -	};
> -
>  	/* IOMMU core code handle */
>  	struct iommu_device		iommu;
>  };
> @@ -945,20 +910,7 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
>  		cmd[1] |= FIELD_PREP(CMDQ_PRI_1_RESP, ent->pri.resp);
>  		break;
>  	case CMDQ_OP_CMD_SYNC:
> -		if (ent->sync.msiaddr)
> -			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_IRQ);
> -		else
> -			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
> -		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
> -		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
> -		/*
> -		 * Commands are written little-endian, but we want the SMMU to
> -		 * receive MSIData, and thus write it back to memory, in CPU
> -		 * byte order, so big-endian needs an extra byteswap here.
> -		 */
> -		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA,
> -				     cpu_to_le32(ent->sync.msidata));
> -		cmd[1] |= ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
> +		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
>  		break;
>  	default:
>  		return -ENOENT;
> @@ -1054,50 +1006,6 @@ static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
>  	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
>  }
>  
> -/*
> - * The difference between val and sync_idx is bounded by the maximum size of
> - * a queue at 2^20 entries, so 32 bits is plenty for wrap-safe arithmetic.
> - */
> -static int __arm_smmu_sync_poll_msi(struct arm_smmu_device *smmu, u32 sync_idx)
> -{
> -	ktime_t timeout;
> -	u32 val;
> -
> -	timeout = ktime_add_us(ktime_get(), ARM_SMMU_CMDQ_SYNC_TIMEOUT_US);
> -	val = smp_cond_load_acquire(&smmu->sync_count,
> -				    (int)(VAL - sync_idx) >= 0 ||
> -				    !ktime_before(ktime_get(), timeout));
> -
> -	return (int)(val - sync_idx) < 0 ? -ETIMEDOUT : 0;
> -}
> -
> -static int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
> -{
> -	u64 cmd[CMDQ_ENT_DWORDS];
> -	unsigned long flags;
> -	struct arm_smmu_cmdq_ent  ent = {
> -		.opcode = CMDQ_OP_CMD_SYNC,
> -		.sync	= {
> -			.msiaddr = virt_to_phys(&smmu->sync_count),
> -		},
> -	};
> -
> -	spin_lock_irqsave(&smmu->cmdq.lock, flags);
> -
> -	/* Piggy-back on the previous command if it's a SYNC */
> -	if (smmu->prev_cmd_opcode == CMDQ_OP_CMD_SYNC) {
> -		ent.sync.msidata = smmu->sync_nr;
> -	} else {
> -		ent.sync.msidata = ++smmu->sync_nr;
> -		arm_smmu_cmdq_build_cmd(cmd, &ent);
> -		arm_smmu_cmdq_insert_cmd(smmu, cmd);
> -	}
> -
> -	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
> -
> -	return __arm_smmu_sync_poll_msi(smmu, ent.sync.msidata);
> -}
> -
>  static int __arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
>  {
>  	u64 cmd[CMDQ_ENT_DWORDS];
> @@ -1119,12 +1027,9 @@ static int __arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
>  static int arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
>  {
>  	int ret;
> -	bool msi = (smmu->features & ARM_SMMU_FEAT_MSI) &&
> -		   (smmu->features & ARM_SMMU_FEAT_COHERENCY);
>  
> -	ret = msi ? __arm_smmu_cmdq_issue_sync_msi(smmu)
> -		  : __arm_smmu_cmdq_issue_sync(smmu);
> -	if (ret)
> +	ret = __arm_smmu_cmdq_issue_sync(smmu);
> +	if ( ret )
>  		dev_err_ratelimited(smmu->dev, "CMD_SYNC timeout\n");
>  	return ret;
>  }
> @@ -2898,83 +2803,10 @@ static int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr)
>  	return ret;
>  }
>  
> -static void arm_smmu_free_msis(void *data)
> -{
> -	struct device *dev = data;
> -	platform_msi_domain_free_irqs(dev);
> -}
> -
> -static void arm_smmu_write_msi_msg(struct msi_desc *desc, struct msi_msg *msg)
> -{
> -	phys_addr_t doorbell;
> -	struct device *dev = msi_desc_to_dev(desc);
> -	struct arm_smmu_device *smmu = dev_get_drvdata(dev);
> -	phys_addr_t *cfg = arm_smmu_msi_cfg[desc->platform.msi_index];
> -
> -	doorbell = (((u64)msg->address_hi) << 32) | msg->address_lo;
> -	doorbell &= MSI_CFG0_ADDR_MASK;
> -
> -	writeq_relaxed(doorbell, smmu->base + cfg[0]);
> -	writel_relaxed(msg->data, smmu->base + cfg[1]);
> -	writel_relaxed(ARM_SMMU_MEMATTR_DEVICE_nGnRE, smmu->base + cfg[2]);
> -}
> -
> -static void arm_smmu_setup_msis(struct arm_smmu_device *smmu)
> -{
> -	struct msi_desc *desc;
> -	int ret, nvec = ARM_SMMU_MAX_MSIS;
> -	struct device *dev = smmu->dev;
> -
> -	/* Clear the MSI address regs */
> -	writeq_relaxed(0, smmu->base + ARM_SMMU_GERROR_IRQ_CFG0);
> -	writeq_relaxed(0, smmu->base + ARM_SMMU_EVTQ_IRQ_CFG0);
> -
> -	if (smmu->features & ARM_SMMU_FEAT_PRI)
> -		writeq_relaxed(0, smmu->base + ARM_SMMU_PRIQ_IRQ_CFG0);
> -	else
> -		nvec--;
> -
> -	if (!(smmu->features & ARM_SMMU_FEAT_MSI))
> -		return;
> -
> -	if (!dev->msi_domain) {
> -		dev_info(smmu->dev, "msi_domain absent - falling back to wired irqs\n");
> -		return;
> -	}
> -
> -	/* Allocate MSIs for evtq, gerror and priq. Ignore cmdq */
> -	ret = platform_msi_domain_alloc_irqs(dev, nvec, arm_smmu_write_msi_msg);
> -	if (ret) {
> -		dev_warn(dev, "failed to allocate MSIs - falling back to wired irqs\n");
> -		return;
> -	}
> -
> -	for_each_msi_entry(desc, dev) {
> -		switch (desc->platform.msi_index) {
> -		case EVTQ_MSI_INDEX:
> -			smmu->evtq.q.irq = desc->irq;
> -			break;
> -		case GERROR_MSI_INDEX:
> -			smmu->gerr_irq = desc->irq;
> -			break;
> -		case PRIQ_MSI_INDEX:
> -			smmu->priq.q.irq = desc->irq;
> -			break;
> -		default:	/* Unknown */
> -			continue;
> -		}
> -	}
> -
> -	/* Add callback to free MSIs on teardown */
> -	devm_add_action(dev, arm_smmu_free_msis, dev);
> -}
> -
>  static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
>  {
>  	int irq, ret;
>  
> -	arm_smmu_setup_msis(smmu);
> -
>  	/* Request interrupt lines */
>  	irq = smmu->evtq.q.irq;
>  	if (irq) {
> @@ -3250,8 +3082,6 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>  	if (reg & IDR0_SEV)
>  		smmu->features |= ARM_SMMU_FEAT_SEV;
>  
> -	if (reg & IDR0_MSI)
> -		smmu->features |= ARM_SMMU_FEAT_MSI;
>  
>  	if (reg & IDR0_HYP)
>  		smmu->features |= ARM_SMMU_FEAT_HYP;
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 00:39:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 00:39:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42360.76136 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkGAn-0003nx-4y; Wed, 02 Dec 2020 00:39:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42360.76136; Wed, 02 Dec 2020 00:39:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkGAn-0003nq-1w; Wed, 02 Dec 2020 00:39:09 +0000
Received: by outflank-mailman (input) for mailman id 42360;
 Wed, 02 Dec 2020 00:39:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GudV=FG=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kkGAl-0003nl-RJ
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 00:39:07 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6f8d86e0-3b20-4353-b789-0e05f7beb5b8;
 Wed, 02 Dec 2020 00:39:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6f8d86e0-3b20-4353-b789-0e05f7beb5b8
Date: Tue, 1 Dec 2020 16:39:04 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606869546;
	bh=3nSprtXt+pbb2ty+p/ANcgmcJOmruqh0rcNzj8s6KqE=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=ZywtJ5SwZfJlv2BSTup54nEvdgjFnbajxBfpA1jT7ckuJFg8Qu2migTiMqb9Q+/Zm
	 ebK68DvPNJXKBNFfYWvDb7GIWlI4otPe/0daV1zm76hHPgs6OJrLVWcfLBV8cRVpPt
	 s3wfYMET4QBdCkSXP7ecni06vRTw9sHPDa6xQHQw=
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 5/8] xen/arm: Remove support for PCI ATS on SMMUv3
In-Reply-To: <78079d1d6e9d2e7e87125da131e9bdb5809b838a.1606406359.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2012011637560.1100@sstabellini-ThinkPad-T480s>
References: <cover.1606406359.git.rahul.singh@arm.com> <78079d1d6e9d2e7e87125da131e9bdb5809b838a.1606406359.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 26 Nov 2020, Rahul Singh wrote:
> PCI ATS functionality is not implemented and tested on ARM. Remove the
> PCI ATS support; once PCI ATS support is tested and available, this
> patch can be reverted to add it back.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>

This looks like a revert of 9ce27afc0830f. Can we add that as a note to
the commit message?

One very minor comment at the bottom


> ---
>  xen/drivers/passthrough/arm/smmu-v3.c | 273 --------------------------
>  1 file changed, 273 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> index 401f7ae006..6a33628087 100644
> --- a/xen/drivers/passthrough/arm/smmu-v3.c
> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> @@ -460,16 +460,6 @@ struct arm_smmu_cmdq_ent {
>  			u64			addr;
>  		} tlbi;
>  
> -		#define CMDQ_OP_ATC_INV		0x40
> -		#define ATC_INV_SIZE_ALL	52
> -		struct {
> -			u32			sid;
> -			u32			ssid;
> -			u64			addr;
> -			u8			size;
> -			bool			global;
> -		} atc;
> -
>  		#define CMDQ_OP_PRI_RESP	0x41
>  		struct {
>  			u32			sid;
> @@ -632,7 +622,6 @@ struct arm_smmu_master {
>  	struct list_head		domain_head;
>  	u32				*sids;
>  	unsigned int			num_sids;
> -	bool				ats_enabled;
>  	unsigned int			ssid_bits;
>  };
>  
> @@ -650,7 +639,6 @@ struct arm_smmu_domain {
>  
>  	struct io_pgtable_ops		*pgtbl_ops;
>  	bool				non_strict;
> -	atomic_t			nr_ats_masters;
>  
>  	enum arm_smmu_domain_stage	stage;
>  	union {
> @@ -886,14 +874,6 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
>  	case CMDQ_OP_TLBI_S12_VMALL:
>  		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
>  		break;
> -	case CMDQ_OP_ATC_INV:
> -		cmd[0] |= FIELD_PREP(CMDQ_0_SSV, ent->substream_valid);
> -		cmd[0] |= FIELD_PREP(CMDQ_ATC_0_GLOBAL, ent->atc.global);
> -		cmd[0] |= FIELD_PREP(CMDQ_ATC_0_SSID, ent->atc.ssid);
> -		cmd[0] |= FIELD_PREP(CMDQ_ATC_0_SID, ent->atc.sid);
> -		cmd[1] |= FIELD_PREP(CMDQ_ATC_1_SIZE, ent->atc.size);
> -		cmd[1] |= ent->atc.addr & CMDQ_ATC_1_ADDR_MASK;
> -		break;
>  	case CMDQ_OP_PRI_RESP:
>  		cmd[0] |= FIELD_PREP(CMDQ_0_SSV, ent->substream_valid);
>  		cmd[0] |= FIELD_PREP(CMDQ_PRI_0_SSID, ent->pri.ssid);
> @@ -925,7 +905,6 @@ static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
>  		[CMDQ_ERR_CERROR_NONE_IDX]	= "No error",
>  		[CMDQ_ERR_CERROR_ILL_IDX]	= "Illegal command",
>  		[CMDQ_ERR_CERROR_ABT_IDX]	= "Abort on command fetch",
> -		[CMDQ_ERR_CERROR_ATC_INV_IDX]	= "ATC invalidate timeout",
>  	};
>  
>  	int i;
> @@ -945,14 +924,6 @@ static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
>  		dev_err(smmu->dev, "retrying command fetch\n");
>  	case CMDQ_ERR_CERROR_NONE_IDX:
>  		return;
> -	case CMDQ_ERR_CERROR_ATC_INV_IDX:
> -		/*
> -		 * ATC Invalidation Completion timeout. CONS is still pointing
> -		 * at the CMD_SYNC. Attempt to complete other pending commands
> -		 * by repeating the CMD_SYNC, though we might well end up back
> -		 * here since the ATC invalidation may still be pending.
> -		 */
> -		return;
>  	case CMDQ_ERR_CERROR_ILL_IDX:
>  	default:
>  		break;
> @@ -1422,9 +1393,6 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
>  		val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_S2_TRANS);
>  	}
>  
> -	if (master->ats_enabled)
> -		dst[1] |= cpu_to_le64(FIELD_PREP(STRTAB_STE_1_EATS,
> -						 STRTAB_STE_1_EATS_TRANS));
>  
>  	arm_smmu_sync_ste_for_sid(smmu, sid);
>  	/* See comment in arm_smmu_write_ctx_desc() */
> @@ -1633,112 +1601,6 @@ static irqreturn_t arm_smmu_combined_irq_handler(int irq, void *dev)
>  	return IRQ_WAKE_THREAD;
>  }
>  
> -static void
> -arm_smmu_atc_inv_to_cmd(int ssid, unsigned long iova, size_t size,
> -			struct arm_smmu_cmdq_ent *cmd)
> -{
> -	size_t log2_span;
> -	size_t span_mask;
> -	/* ATC invalidates are always on 4096-bytes pages */
> -	size_t inval_grain_shift = 12;
> -	unsigned long page_start, page_end;
> -
> -	*cmd = (struct arm_smmu_cmdq_ent) {
> -		.opcode			= CMDQ_OP_ATC_INV,
> -		.substream_valid	= !!ssid,
> -		.atc.ssid		= ssid,
> -	};
> -
> -	if (!size) {
> -		cmd->atc.size = ATC_INV_SIZE_ALL;
> -		return;
> -	}
> -
> -	page_start	= iova >> inval_grain_shift;
> -	page_end	= (iova + size - 1) >> inval_grain_shift;
> -
> -	/*
> -	 * In an ATS Invalidate Request, the address must be aligned on the
> -	 * range size, which must be a power of two number of page sizes. We
> -	 * thus have to choose between grossly over-invalidating the region, or
> -	 * splitting the invalidation into multiple commands. For simplicity
> -	 * we'll go with the first solution, but should refine it in the future
> -	 * if multiple commands are shown to be more efficient.
> -	 *
> -	 * Find the smallest power of two that covers the range. The most
> -	 * significant differing bit between the start and end addresses,
> -	 * fls(start ^ end), indicates the required span. For example:
> -	 *
> -	 * We want to invalidate pages [8; 11]. This is already the ideal range:
> -	 *		x = 0b1000 ^ 0b1011 = 0b11
> -	 *		span = 1 << fls(x) = 4
> -	 *
> -	 * To invalidate pages [7; 10], we need to invalidate [0; 15]:
> -	 *		x = 0b0111 ^ 0b1010 = 0b1101
> -	 *		span = 1 << fls(x) = 16
> -	 */
> -	log2_span	= fls_long(page_start ^ page_end);
> -	span_mask	= (1ULL << log2_span) - 1;
> -
> -	page_start	&= ~span_mask;
> -
> -	cmd->atc.addr	= page_start << inval_grain_shift;
> -	cmd->atc.size	= log2_span;
> -}
> -
> -static int arm_smmu_atc_inv_master(struct arm_smmu_master *master,
> -				   struct arm_smmu_cmdq_ent *cmd)
> -{
> -	int i;
> -
> -	if (!master->ats_enabled)
> -		return 0;
> -
> -	for (i = 0; i < master->num_sids; i++) {
> -		cmd->atc.sid = master->sids[i];
> -		arm_smmu_cmdq_issue_cmd(master->smmu, cmd);
> -	}
> -
> -	return arm_smmu_cmdq_issue_sync(master->smmu);
> -}
> -
> -static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
> -				   int ssid, unsigned long iova, size_t size)
> -{
> -	int ret = 0;
> -	unsigned long flags;
> -	struct arm_smmu_cmdq_ent cmd;
> -	struct arm_smmu_master *master;
> -
> -	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_ATS))
> -		return 0;
> -
> -	/*
> -	 * Ensure that we've completed prior invalidation of the main TLBs
> -	 * before we read 'nr_ats_masters' in case of a concurrent call to
> -	 * arm_smmu_enable_ats():
> -	 *
> -	 *	// unmap()			// arm_smmu_enable_ats()
> -	 *	TLBI+SYNC			atomic_inc(&nr_ats_masters);
> -	 *	smp_mb();			[...]
> -	 *	atomic_read(&nr_ats_masters);	pci_enable_ats() // writel()
> -	 *
> -	 * Ensures that we always see the incremented 'nr_ats_masters' count if
> -	 * ATS was enabled at the PCI device before completion of the TLBI.
> -	 */
> -	smp_mb();
> -	if (!atomic_read(&smmu_domain->nr_ats_masters))
> -		return 0;
> -
> -	arm_smmu_atc_inv_to_cmd(ssid, iova, size, &cmd);
> -
> -	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
> -	list_for_each_entry(master, &smmu_domain->devices, domain_head)
> -		ret |= arm_smmu_atc_inv_master(master, &cmd);
> -	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
> -
> -	return ret ? -ETIMEDOUT : 0;
> -}
>  
>  /* IO_PGTABLE API */
>  static void arm_smmu_tlb_inv_context(void *cookie)
> @@ -1765,7 +1627,6 @@ static void arm_smmu_tlb_inv_context(void *cookie)
>  	 */
>  	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
>  	arm_smmu_cmdq_issue_sync(smmu);
> -	arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0);
>  }
>  
>  static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
> @@ -2106,108 +1967,6 @@ static void arm_smmu_install_ste_for_dev(struct arm_smmu_master *master)
>  	}
>  }
>  
> -static bool arm_smmu_ats_supported(struct arm_smmu_master *master)
> -{
> -	struct device *dev = master->dev;
> -	struct arm_smmu_device *smmu = master->smmu;
> -	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> -
> -	if (!(smmu->features & ARM_SMMU_FEAT_ATS))
> -		return false;
> -
> -	if (!(fwspec->flags & IOMMU_FWSPEC_PCI_RC_ATS))
> -		return false;
> -
> -	return dev_is_pci(dev) && pci_ats_supported(to_pci_dev(dev));
> -}
> -
> -static void arm_smmu_enable_ats(struct arm_smmu_master *master)
> -{
> -	size_t stu;
> -	struct pci_dev *pdev;
> -	struct arm_smmu_device *smmu = master->smmu;
> -	struct arm_smmu_domain *smmu_domain = master->domain;
> -
> -	/* Don't enable ATS at the endpoint if it's not enabled in the STE */
> -	if (!master->ats_enabled)
> -		return;
> -
> -	/* Smallest Translation Unit: log2 of the smallest supported granule */
> -	stu = __ffs(smmu->pgsize_bitmap);
> -	pdev = to_pci_dev(master->dev);
> -
> -	atomic_inc(&smmu_domain->nr_ats_masters);
> -	arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0);
> -	if (pci_enable_ats(pdev, stu))
> -		dev_err(master->dev, "Failed to enable ATS (STU %zu)\n", stu);
> -}
> -
> -static void arm_smmu_disable_ats(struct arm_smmu_master *master)
> -{
> -	struct arm_smmu_cmdq_ent cmd;
> -	struct arm_smmu_domain *smmu_domain = master->domain;
> -
> -	if (!master->ats_enabled)
> -		return;
> -
> -	pci_disable_ats(to_pci_dev(master->dev));
> -	/*
> -	 * Ensure ATS is disabled at the endpoint before we issue the
> -	 * ATC invalidation via the SMMU.
> -	 */
> -	wmb();
> -	arm_smmu_atc_inv_to_cmd(0, 0, 0, &cmd);
> -	arm_smmu_atc_inv_master(master, &cmd);
> > -	atomic_dec(&smmu_domain->nr_ats_masters);
> -}
> -
> -static int arm_smmu_enable_pasid(struct arm_smmu_master *master)
> -{
> -	int ret;
> -	int features;
> -	int num_pasids;
> -	struct pci_dev *pdev;
> -
> -	if (!dev_is_pci(master->dev))
> -		return -ENODEV;
> -
> -	pdev = to_pci_dev(master->dev);
> -
> -	features = pci_pasid_features(pdev);
> -	if (features < 0)
> -		return features;
> -
> -	num_pasids = pci_max_pasids(pdev);
> -	if (num_pasids <= 0)
> -		return num_pasids;
> -
> -	ret = pci_enable_pasid(pdev, features);
> -	if (ret) {
> -		dev_err(&pdev->dev, "Failed to enable PASID\n");
> -		return ret;
> -	}
> -
> -	master->ssid_bits = min_t(u8, ilog2(num_pasids),
> -				  master->smmu->ssid_bits);
> -	return 0;
> -}
> -
> -static void arm_smmu_disable_pasid(struct arm_smmu_master *master)
> -{
> -	struct pci_dev *pdev;
> -
> -	if (!dev_is_pci(master->dev))
> -		return;
> -
> -	pdev = to_pci_dev(master->dev);
> -
> -	if (!pdev->pasid_enabled)
> -		return;
> -
> -	master->ssid_bits = 0;
> -	pci_disable_pasid(pdev);
> -}
> -
>  static void arm_smmu_detach_dev(struct arm_smmu_master *master)
>  {
>  	unsigned long flags;
> @@ -2216,14 +1975,11 @@ static void arm_smmu_detach_dev(struct arm_smmu_master *master)
>  	if (!smmu_domain)
>  		return;
>  
> -	arm_smmu_disable_ats(master);
> -
>  	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
>  	list_del(&master->domain_head);
>  	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
>  
>  	master->domain = NULL;
> -	master->ats_enabled = false;
>  	arm_smmu_install_ste_for_dev(master);
>  }
>  
> @@ -2271,17 +2027,12 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
>  
>  	master->domain = smmu_domain;
>  
> -	if (smmu_domain->stage != ARM_SMMU_DOMAIN_BYPASS)
> -		master->ats_enabled = arm_smmu_ats_supported(master);
> -
>  	arm_smmu_install_ste_for_dev(master);
>  
>  	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
>  	list_add(&master->domain_head, &smmu_domain->devices);
>  	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
>  
> -	arm_smmu_enable_ats(master);
> -
>  out_unlock:
>  	mutex_unlock(&smmu_domain->init_mutex);
>  	return ret;
> @@ -2410,16 +2161,6 @@ static struct iommu_device *arm_smmu_probe_device(struct device *dev)
>  
>  	master->ssid_bits = min(smmu->ssid_bits, fwspec->num_pasid_bits);
>  
> -	/*
> -	 * Note that PASID must be enabled before, and disabled after ATS:
> -	 * PCI Express Base 4.0r1.0 - 10.5.1.3 ATS Control Register
> -	 *
> -	 *   Behavior is undefined if this bit is Set and the value of the PASID
> -	 *   Enable, Execute Requested Enable, or Privileged Mode Requested bits
> -	 *   are changed.
> -	 */
> -	arm_smmu_enable_pasid(master);
> -
>  	if (!(smmu->features & ARM_SMMU_FEAT_2_LVL_CDTAB))
>  		master->ssid_bits = min_t(u8, master->ssid_bits,
>  					  CTXDESC_LINEAR_CDMAX);
> @@ -2442,7 +2183,6 @@ static void arm_smmu_release_device(struct device *dev)
>  
>  	master = dev_iommu_priv_get(dev);
>  	arm_smmu_detach_dev(master);
> -	arm_smmu_disable_pasid(master);
>  	kfree(master);
>  	iommu_fwspec_free(dev);
>  }
> @@ -2997,15 +2737,6 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
>  		}
>  	}
>  
> -	if (smmu->features & ARM_SMMU_FEAT_ATS) {
> -		enables |= CR0_ATSCHK;
> -		ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
> -					      ARM_SMMU_CR0ACK);
> -		if (ret) {
> -			dev_err(smmu->dev, "failed to enable ATS check\n");
> -			return ret;
> -		}
> -	}
>  
>  	ret = arm_smmu_setup_irqs(smmu);
>  	if (ret) {
> @@ -3076,13 +2807,9 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>  	if (IS_ENABLED(CONFIG_PCI_PRI) && reg & IDR0_PRI)
>  		smmu->features |= ARM_SMMU_FEAT_PRI;
>  
> -	if (IS_ENABLED(CONFIG_PCI_ATS) && reg & IDR0_ATS)
> -		smmu->features |= ARM_SMMU_FEAT_ATS;
> -
>  	if (reg & IDR0_SEV)
>  		smmu->features |= ARM_SMMU_FEAT_SEV;
>  
> -
>  	if (reg & IDR0_HYP)

spurious change


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 00:40:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 00:40:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42365.76147 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkGC1-0004bA-Jf; Wed, 02 Dec 2020 00:40:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42365.76147; Wed, 02 Dec 2020 00:40:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkGC1-0004b3-GJ; Wed, 02 Dec 2020 00:40:25 +0000
Received: by outflank-mailman (input) for mailman id 42365;
 Wed, 02 Dec 2020 00:40:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GudV=FG=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kkGBz-0004aw-DC
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 00:40:23 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bace658d-d98b-4644-8cba-479b492fd3a4;
 Wed, 02 Dec 2020 00:40:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bace658d-d98b-4644-8cba-479b492fd3a4
Date: Tue, 1 Dec 2020 16:40:20 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606869621;
	bh=wXvQPTq272+6H7csm0351ogCox2rR+SbAS3xF87DQSo=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=GD+VhzPnVzMsG8QBkfjGcdX3NzHo6DX8ZjrXaywnx3bkyE+cLsohc66yTDTFH7CSO
	 fVDc6p+XvUZshHGO9zi6aPeKdxNbdTeRA/lZpAlTLapXwYYf5565/yu2Zj9sn5mVRr
	 VArzkfh7EqFj0iQXrtjodDMRqzFxjEUnb9ypQg6U=
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Stefano Stabellini <sstabellini@kernel.org>
cc: Rahul Singh <rahul.singh@arm.com>, xen-devel@lists.xenproject.org, 
    bertrand.marquis@arm.com, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 4/8] xen/arm: Remove support for MSI on SMMUv3
In-Reply-To: <alpine.DEB.2.21.2012011621380.1100@sstabellini-ThinkPad-T480s>
Message-ID: <alpine.DEB.2.21.2012011639230.1100@sstabellini-ThinkPad-T480s>
References: <cover.1606406359.git.rahul.singh@arm.com> <cfc6cbe23f05162d5c62df9db09fef3f8e0b8e14.1606406359.git.rahul.singh@arm.com> <alpine.DEB.2.21.2012011621380.1100@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 1 Dec 2020, Stefano Stabellini wrote:
> On Thu, 26 Nov 2020, Rahul Singh wrote:
> > XEN does not support MSI on ARM platforms; therefore, remove the MSI
> > support from the SMMUv3 driver.
> > 
> > Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> 
> I wonder if it makes sense to #ifdef CONFIG_MSI this code instead of
> removing it completely.

One more thought: could this patch be achieved by reverting
166bdbd23161160f2abcea70621adba179050bee? If this patch could be done
by a couple of reverts, it would be great to say so in the commit
message.


> In the past, we tried to keep the entire file as textually similar to
> the original Linux driver as possible to make it easier to backport
> features and fixes. So, in this case we would probably not even use an
> #ifdef but maybe something like:
> 
>   if (/* msi_enabled */ 0)
>       return;
> 
> at the beginning of arm_smmu_setup_msis.
> 
> 
> However, that strategy didn't actually work very well because backports
> have proven difficult to do anyway. So at that point we might as well at
> least have clean code in Xen and do the changes properly.
> 
> So that's my reasoning for accepting this patch :-)
> 
> Julien, are you happy with this too?
> 
> 
> > ---
> >  xen/drivers/passthrough/arm/smmu-v3.c | 176 +-------------------------
> >  1 file changed, 3 insertions(+), 173 deletions(-)
> > 
> > diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> > index cec304e51a..401f7ae006 100644
> > --- a/xen/drivers/passthrough/arm/smmu-v3.c
> > +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> > @@ -416,31 +416,6 @@ enum pri_resp {
> >  	PRI_RESP_SUCC = 2,
> >  };
> >  
> > -enum arm_smmu_msi_index {
> > -	EVTQ_MSI_INDEX,
> > -	GERROR_MSI_INDEX,
> > -	PRIQ_MSI_INDEX,
> > -	ARM_SMMU_MAX_MSIS,
> > -};
> > -
> > -static phys_addr_t arm_smmu_msi_cfg[ARM_SMMU_MAX_MSIS][3] = {
> > -	[EVTQ_MSI_INDEX] = {
> > -		ARM_SMMU_EVTQ_IRQ_CFG0,
> > -		ARM_SMMU_EVTQ_IRQ_CFG1,
> > -		ARM_SMMU_EVTQ_IRQ_CFG2,
> > -	},
> > -	[GERROR_MSI_INDEX] = {
> > -		ARM_SMMU_GERROR_IRQ_CFG0,
> > -		ARM_SMMU_GERROR_IRQ_CFG1,
> > -		ARM_SMMU_GERROR_IRQ_CFG2,
> > -	},
> > -	[PRIQ_MSI_INDEX] = {
> > -		ARM_SMMU_PRIQ_IRQ_CFG0,
> > -		ARM_SMMU_PRIQ_IRQ_CFG1,
> > -		ARM_SMMU_PRIQ_IRQ_CFG2,
> > -	},
> > -};
> > -
> >  struct arm_smmu_cmdq_ent {
> >  	/* Common fields */
> >  	u8				opcode;
> > @@ -504,10 +479,6 @@ struct arm_smmu_cmdq_ent {
> >  		} pri;
> >  
> >  		#define CMDQ_OP_CMD_SYNC	0x46
> > -		struct {
> > -			u32			msidata;
> > -			u64			msiaddr;
> > -		} sync;
> >  	};
> >  };
> >  
> > @@ -649,12 +620,6 @@ struct arm_smmu_device {
> >  
> >  	struct arm_smmu_strtab_cfg	strtab_cfg;
> >  
> > -	/* Hi16xx adds an extra 32 bits of goodness to its MSI payload */
> > -	union {
> > -		u32			sync_count;
> > -		u64			padding;
> > -	};
> > -
> >  	/* IOMMU core code handle */
> >  	struct iommu_device		iommu;
> >  };
> > @@ -945,20 +910,7 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
> >  		cmd[1] |= FIELD_PREP(CMDQ_PRI_1_RESP, ent->pri.resp);
> >  		break;
> >  	case CMDQ_OP_CMD_SYNC:
> > -		if (ent->sync.msiaddr)
> > -			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_IRQ);
> > -		else
> > -			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
> > -		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
> > -		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
> > -		/*
> > -		 * Commands are written little-endian, but we want the SMMU to
> > -		 * receive MSIData, and thus write it back to memory, in CPU
> > -		 * byte order, so big-endian needs an extra byteswap here.
> > -		 */
> > -		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA,
> > -				     cpu_to_le32(ent->sync.msidata));
> > -		cmd[1] |= ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
> > +		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
> >  		break;
> >  	default:
> >  		return -ENOENT;
> > @@ -1054,50 +1006,6 @@ static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
> >  	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
> >  }
> >  
> > -/*
> > - * The difference between val and sync_idx is bounded by the maximum size of
> > - * a queue at 2^20 entries, so 32 bits is plenty for wrap-safe arithmetic.
> > - */
> > -static int __arm_smmu_sync_poll_msi(struct arm_smmu_device *smmu, u32 sync_idx)
> > -{
> > -	ktime_t timeout;
> > -	u32 val;
> > -
> > -	timeout = ktime_add_us(ktime_get(), ARM_SMMU_CMDQ_SYNC_TIMEOUT_US);
> > -	val = smp_cond_load_acquire(&smmu->sync_count,
> > -				    (int)(VAL - sync_idx) >= 0 ||
> > -				    !ktime_before(ktime_get(), timeout));
> > -
> > -	return (int)(val - sync_idx) < 0 ? -ETIMEDOUT : 0;
> > -}
> > -
> > -static int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
> > -{
> > -	u64 cmd[CMDQ_ENT_DWORDS];
> > -	unsigned long flags;
> > -	struct arm_smmu_cmdq_ent  ent = {
> > -		.opcode = CMDQ_OP_CMD_SYNC,
> > -		.sync	= {
> > -			.msiaddr = virt_to_phys(&smmu->sync_count),
> > -		},
> > -	};
> > -
> > -	spin_lock_irqsave(&smmu->cmdq.lock, flags);
> > -
> > -	/* Piggy-back on the previous command if it's a SYNC */
> > -	if (smmu->prev_cmd_opcode == CMDQ_OP_CMD_SYNC) {
> > -		ent.sync.msidata = smmu->sync_nr;
> > -	} else {
> > -		ent.sync.msidata = ++smmu->sync_nr;
> > -		arm_smmu_cmdq_build_cmd(cmd, &ent);
> > -		arm_smmu_cmdq_insert_cmd(smmu, cmd);
> > -	}
> > -
> > -	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
> > -
> > -	return __arm_smmu_sync_poll_msi(smmu, ent.sync.msidata);
> > -}
> > -
> >  static int __arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
> >  {
> >  	u64 cmd[CMDQ_ENT_DWORDS];
> > @@ -1119,12 +1027,9 @@ static int __arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
> >  static int arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
> >  {
> >  	int ret;
> > -	bool msi = (smmu->features & ARM_SMMU_FEAT_MSI) &&
> > -		   (smmu->features & ARM_SMMU_FEAT_COHERENCY);
> >  
> > -	ret = msi ? __arm_smmu_cmdq_issue_sync_msi(smmu)
> > -		  : __arm_smmu_cmdq_issue_sync(smmu);
> > -	if (ret)
> > +	ret = __arm_smmu_cmdq_issue_sync(smmu);
> > +	if ( ret )
> >  		dev_err_ratelimited(smmu->dev, "CMD_SYNC timeout\n");
> >  	return ret;
> >  }
> > @@ -2898,83 +2803,10 @@ static int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr)
> >  	return ret;
> >  }
> >  
> > -static void arm_smmu_free_msis(void *data)
> > -{
> > -	struct device *dev = data;
> > -	platform_msi_domain_free_irqs(dev);
> > -}
> > -
> > -static void arm_smmu_write_msi_msg(struct msi_desc *desc, struct msi_msg *msg)
> > -{
> > -	phys_addr_t doorbell;
> > -	struct device *dev = msi_desc_to_dev(desc);
> > -	struct arm_smmu_device *smmu = dev_get_drvdata(dev);
> > -	phys_addr_t *cfg = arm_smmu_msi_cfg[desc->platform.msi_index];
> > -
> > -	doorbell = (((u64)msg->address_hi) << 32) | msg->address_lo;
> > -	doorbell &= MSI_CFG0_ADDR_MASK;
> > -
> > -	writeq_relaxed(doorbell, smmu->base + cfg[0]);
> > -	writel_relaxed(msg->data, smmu->base + cfg[1]);
> > -	writel_relaxed(ARM_SMMU_MEMATTR_DEVICE_nGnRE, smmu->base + cfg[2]);
> > -}
> > -
> > -static void arm_smmu_setup_msis(struct arm_smmu_device *smmu)
> > -{
> > -	struct msi_desc *desc;
> > -	int ret, nvec = ARM_SMMU_MAX_MSIS;
> > -	struct device *dev = smmu->dev;
> > -
> > -	/* Clear the MSI address regs */
> > -	writeq_relaxed(0, smmu->base + ARM_SMMU_GERROR_IRQ_CFG0);
> > -	writeq_relaxed(0, smmu->base + ARM_SMMU_EVTQ_IRQ_CFG0);
> > -
> > -	if (smmu->features & ARM_SMMU_FEAT_PRI)
> > -		writeq_relaxed(0, smmu->base + ARM_SMMU_PRIQ_IRQ_CFG0);
> > -	else
> > -		nvec--;
> > -
> > -	if (!(smmu->features & ARM_SMMU_FEAT_MSI))
> > -		return;
> > -
> > -	if (!dev->msi_domain) {
> > -		dev_info(smmu->dev, "msi_domain absent - falling back to wired irqs\n");
> > -		return;
> > -	}
> > -
> > -	/* Allocate MSIs for evtq, gerror and priq. Ignore cmdq */
> > -	ret = platform_msi_domain_alloc_irqs(dev, nvec, arm_smmu_write_msi_msg);
> > -	if (ret) {
> > -		dev_warn(dev, "failed to allocate MSIs - falling back to wired irqs\n");
> > -		return;
> > -	}
> > -
> > -	for_each_msi_entry(desc, dev) {
> > -		switch (desc->platform.msi_index) {
> > -		case EVTQ_MSI_INDEX:
> > -			smmu->evtq.q.irq = desc->irq;
> > -			break;
> > -		case GERROR_MSI_INDEX:
> > -			smmu->gerr_irq = desc->irq;
> > -			break;
> > -		case PRIQ_MSI_INDEX:
> > -			smmu->priq.q.irq = desc->irq;
> > -			break;
> > -		default:	/* Unknown */
> > -			continue;
> > -		}
> > -	}
> > -
> > -	/* Add callback to free MSIs on teardown */
> > -	devm_add_action(dev, arm_smmu_free_msis, dev);
> > -}
> > -
> >  static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
> >  {
> >  	int irq, ret;
> >  
> > -	arm_smmu_setup_msis(smmu);
> > -
> >  	/* Request interrupt lines */
> >  	irq = smmu->evtq.q.irq;
> >  	if (irq) {
> > @@ -3250,8 +3082,6 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
> >  	if (reg & IDR0_SEV)
> >  		smmu->features |= ARM_SMMU_FEAT_SEV;
> >  
> > -	if (reg & IDR0_MSI)
> > -		smmu->features |= ARM_SMMU_FEAT_MSI;
> >  
> >  	if (reg & IDR0_HYP)
> >  		smmu->features |= ARM_SMMU_FEAT_HYP;
> > -- 
> > 2.17.1
> > 
> 


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 00:49:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 00:49:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42373.76166 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkGL9-0004xK-Ng; Wed, 02 Dec 2020 00:49:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42373.76166; Wed, 02 Dec 2020 00:49:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkGL9-0004xD-Jv; Wed, 02 Dec 2020 00:49:51 +0000
Received: by outflank-mailman (input) for mailman id 42373;
 Wed, 02 Dec 2020 00:49:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkGL8-0004x5-6N; Wed, 02 Dec 2020 00:49:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkGL7-0003CN-SV; Wed, 02 Dec 2020 00:49:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkGL7-0001To-0q; Wed, 02 Dec 2020 00:49:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kkGL7-0006q1-0L; Wed, 02 Dec 2020 00:49:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cNLacnHXrHVIPKhVzIn7SI1e2gikTlO4vm8eBsTDMVw=; b=CgGF1R/X22diNUHZvLE/wRqNbe
	4eZNMh41wDeBUWJ7yQhT7imxHxtvaSoi0M9efByhFnPYRszSFihkkP8bIL2qOsZgYgavS1EzVeoJc
	t0/0fgBbuGC5MO3looQaHwuE+9KCBzwr9JqOBBbeZs232hNhf2Ilsk1WnfB0G12hr+T0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157133-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 157133: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.14-testing:test-amd64-i386-xl-shadow:guest-localmigrate/x10:fail:heisenbug
    xen-4.14-testing:test-armhf-armhf-libvirt:xen-boot:fail:heisenbug
    xen-4.14-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=1d1d1f5391976456a79daac0dcfe7157da1e54f7
X-Osstest-Versions-That:
    xen=0057b1f8fa79abe8272690341db54b064c8f2b7f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 02 Dec 2020 00:49:49 +0000

flight 157133 xen-4.14-testing real [real]
flight 157144 xen-4.14-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157133/
http://logs.test-lab.xenproject.org/osstest/logs/157144/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-shadow 20 guest-localmigrate/x10 fail pass in 157144-retest
 test-armhf-armhf-libvirt      8 xen-boot            fail pass in 157144-retest
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 157144-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 16 saverestore-support-check fail in 157144 like 156989
 test-armhf-armhf-libvirt    15 migrate-support-check fail in 157144 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156989
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156989
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156989
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156989
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156989
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156989
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156989
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156989
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156989
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156989
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1d1d1f5391976456a79daac0dcfe7157da1e54f7
baseline version:
 xen                  0057b1f8fa79abe8272690341db54b064c8f2b7f

Last test of basis   156989  2020-11-24 13:36:54 Z    7 days
Testing same since   157133  2020-12-01 14:37:21 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Manuel Bouyer <bouyer@antioche.eu.org>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0057b1f8fa..1d1d1f5391  1d1d1f5391976456a79daac0dcfe7157da1e54f7 -> stable-4.14


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 00:53:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 00:53:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42380.76181 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkGOO-0005t4-Ba; Wed, 02 Dec 2020 00:53:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42380.76181; Wed, 02 Dec 2020 00:53:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkGOO-0005sx-8X; Wed, 02 Dec 2020 00:53:12 +0000
Received: by outflank-mailman (input) for mailman id 42380;
 Wed, 02 Dec 2020 00:53:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GudV=FG=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kkGOM-0005sr-Vi
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 00:53:11 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f26849cc-d071-499e-b574-868124b061c6;
 Wed, 02 Dec 2020 00:53:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f26849cc-d071-499e-b574-868124b061c6
Date: Tue, 1 Dec 2020 16:53:08 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606870389;
	bh=OmOkv6OlLSNdNNR7uZFMySdRTYxoaJ1PrG4gWM3K9yU=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=TUFYlXdbdyn4TqOfEdrBpfaSLAPkw/4p0N7+qZDSDr/nUDvq4EOFpZulc5ZoRcX3c
	 xEb9vdgwaTD+UeoR9Q5KBrm7KtHAWqYfIAl4kssJkW7tDge5oJl8ZvaVVq5mShfjpI
	 vGUf2DfjlqBYLS6pyZZ4p32PKY/6vIp1FbxsT7X0=
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 6/8] xen/arm: Remove support for Stage-1 translation
 on SMMUv3.
In-Reply-To: <29d40e76341983b175250b71e7b7a290895effd0.1606406359.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2012011645170.1100@sstabellini-ThinkPad-T480s>
References: <cover.1606406359.git.rahul.singh@arm.com> <29d40e76341983b175250b71e7b7a290895effd0.1606406359.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 26 Nov 2020, Rahul Singh wrote:
> Linux SMMUv3 driver supports both Stage-1 and Stage-2 translations.
> As of now only Stage-2 translation support has been tested.
> 
> Once Stage-1 translation support is tested this patch can be added.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>

[...]


> @@ -1871,19 +1476,9 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
>  	}
>  
>  	/* Restrict the stage to what we can actually support */
> -	if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S1))
> -		smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
> -	if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S2))
> -		smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
> +	smmu_domain->stage = ARM_SMMU_DOMAIN_S2;

It would be good to add a helpful error message if
ARM_SMMU_FEAT_TRANS_S2 is missing.


>  	switch (smmu_domain->stage) {
> -	case ARM_SMMU_DOMAIN_S1:
> -		ias = (smmu->features & ARM_SMMU_FEAT_VAX) ? 52 : 48;
> -		ias = min_t(unsigned long, ias, VA_BITS);
> -		oas = smmu->ias;
> -		fmt = ARM_64_LPAE_S1;
> -		finalise_stage_fn = arm_smmu_domain_finalise_s1;
> -		break;
>  	case ARM_SMMU_DOMAIN_NESTED:
>  	case ARM_SMMU_DOMAIN_S2:
>  		ias = smmu->ias;


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 01:48:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 01:48:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42394.76221 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkHFV-0000lN-PX; Wed, 02 Dec 2020 01:48:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42394.76221; Wed, 02 Dec 2020 01:48:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkHFV-0000lG-LM; Wed, 02 Dec 2020 01:48:05 +0000
Received: by outflank-mailman (input) for mailman id 42394;
 Wed, 02 Dec 2020 01:48:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GudV=FG=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kkHFT-0000lB-Rp
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 01:48:03 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2a66f318-f322-4656-bbd7-a3a267e6ca7e;
 Wed, 02 Dec 2020 01:48:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a66f318-f322-4656-bbd7-a3a267e6ca7e
Date: Tue, 1 Dec 2020 17:48:00 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606873681;
	bh=cpG1g38yW9lWPfjiDy7s40B+Tn5E30TyFh9TNn9S/ys=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=u/6sdwAw2vuZieRB49yTYLGNsUS6IZ9BMIFvNP+aQiYERRE7lTsddPfEg3k/pud+k
	 Ic1a0XP3r5CbyGV/RkJ97EV/32RoU27Kc0BWo6DXveSSySIaUWdD43pCFZT9CX1Nmd
	 JOsaUfudDybdeIAmxQd4OdyhALQGYDLFHoYhuZeA=
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 7/8] xen/arm: Remove Linux specific code that is not
 usable in XEN
In-Reply-To: <1d9da8ed4845aeb9e86a5ce6750b811bd7e2020e.1606406359.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2012011724350.1100@sstabellini-ThinkPad-T480s>
References: <cover.1606406359.git.rahul.singh@arm.com> <1d9da8ed4845aeb9e86a5ce6750b811bd7e2020e.1606406359.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 26 Nov 2020, Rahul Singh wrote:
> struct io_pgtable_ops, struct io_pgtable_cfg, struct iommu_flush_ops,
> and struct iommu_ops related code are linux specific.
> 
> Remove code related to above struct as code is dead code in XEN.

There are still instances of struct io_pgtable_cfg in the following
functions after applying this patch:
- arm_smmu_domain_finalise_s2
- arm_smmu_domain_finalise



> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> ---
>  xen/drivers/passthrough/arm/smmu-v3.c | 457 --------------------------
>  1 file changed, 457 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> index 40e3890a58..55d1cba194 100644
> --- a/xen/drivers/passthrough/arm/smmu-v3.c
> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> @@ -599,7 +593,6 @@ struct arm_smmu_domain {
>  	struct arm_smmu_device		*smmu;
>  	struct mutex			init_mutex; /* Protects smmu pointer */
>  
> -	struct io_pgtable_ops		*pgtbl_ops;
>  	bool				non_strict;
>  
>  	enum arm_smmu_domain_stage	stage;
> @@ -1297,74 +1290,6 @@ static void arm_smmu_tlb_inv_context(void *cookie)
>  	arm_smmu_cmdq_issue_sync(smmu);
>  }
>  
> -static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
> -					  size_t granule, bool leaf, void *cookie)
> -{
> -	struct arm_smmu_domain *smmu_domain = cookie;
> -	struct arm_smmu_device *smmu = smmu_domain->smmu;
> -	struct arm_smmu_cmdq_ent cmd = {
> -		.tlbi = {
> -			.leaf	= leaf,
> -			.addr	= iova,
> -		},
> -	};
> -
> -	cmd.opcode	= CMDQ_OP_TLBI_S2_IPA;
> -	cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
> -
> -	do {
> -		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> -		cmd.tlbi.addr += granule;
> -	} while (size -= granule);
> -}
> -
> -static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather,
> -					 unsigned long iova, size_t granule,
> -					 void *cookie)
> -{
> -	arm_smmu_tlb_inv_range_nosync(iova, granule, granule, true, cookie);
> -}
> -
> -static void arm_smmu_tlb_inv_walk(unsigned long iova, size_t size,
> -				  size_t granule, void *cookie)
> -{
> -	struct arm_smmu_domain *smmu_domain = cookie;
> -	struct arm_smmu_device *smmu = smmu_domain->smmu;
> -
> -	arm_smmu_tlb_inv_range_nosync(iova, size, granule, false, cookie);
> -	arm_smmu_cmdq_issue_sync(smmu);
> -}
> -
> -static void arm_smmu_tlb_inv_leaf(unsigned long iova, size_t size,
> -				  size_t granule, void *cookie)
> -{
> -	struct arm_smmu_domain *smmu_domain = cookie;
> -	struct arm_smmu_device *smmu = smmu_domain->smmu;
> -
> -	arm_smmu_tlb_inv_range_nosync(iova, size, granule, true, cookie);
> -	arm_smmu_cmdq_issue_sync(smmu);
> -}
> -
> -static const struct iommu_flush_ops arm_smmu_flush_ops = {
> -	.tlb_flush_all	= arm_smmu_tlb_inv_context,
> -	.tlb_flush_walk = arm_smmu_tlb_inv_walk,
> -	.tlb_flush_leaf = arm_smmu_tlb_inv_leaf,
> -	.tlb_add_page	= arm_smmu_tlb_inv_page_nosync,
> -};
> -
> -/* IOMMU API */
> -static bool arm_smmu_capable(enum iommu_cap cap)
> -{
> -	switch (cap) {
> -	case IOMMU_CAP_CACHE_COHERENCY:
> -		return true;
> -	case IOMMU_CAP_NOEXEC:
> -		return true;
> -	default:
> -		return false;
> -	}
> -}
> -
>  static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
>  {
>  	struct arm_smmu_domain *smmu_domain;
> @@ -1421,7 +1346,6 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
>  	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
>  
>  	iommu_put_dma_cookie(domain);
> -	free_io_pgtable_ops(smmu_domain->pgtbl_ops);
>  
>  	if (cfg->vmid)
>  		arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid);
> @@ -1429,7 +1353,6 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
>  	kfree(smmu_domain);
>  }
>  
> -
>  static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
>  				       struct arm_smmu_master *master,
>  				       struct io_pgtable_cfg *pgtbl_cfg)
> @@ -1437,7 +1360,6 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
>  	int vmid;
>  	struct arm_smmu_device *smmu = smmu_domain->smmu;
>  	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
> -	typeof(&pgtbl_cfg->arm_lpae_s2_cfg.vtcr) vtcr;
>  
>  	vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
>  	if (vmid < 0)
> @@ -1461,20 +1383,12 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
>  {
>  	int ret;
>  	unsigned long ias, oas;
> -	enum io_pgtable_fmt fmt;
> -	struct io_pgtable_cfg pgtbl_cfg;
> -	struct io_pgtable_ops *pgtbl_ops;
>  	int (*finalise_stage_fn)(struct arm_smmu_domain *,
>  				 struct arm_smmu_master *,
>  				 struct io_pgtable_cfg *);
>  	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>  	struct arm_smmu_device *smmu = smmu_domain->smmu;
>  
> -	if (domain->type == IOMMU_DOMAIN_IDENTITY) {
> -		smmu_domain->stage = ARM_SMMU_DOMAIN_BYPASS;
> -		return 0;
> -	}
> -
>  	/* Restrict the stage to what we can actually support */
>  	smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
>  
> @@ -1483,40 +1397,17 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
>  	case ARM_SMMU_DOMAIN_S2:
>  		ias = smmu->ias;
>  		oas = smmu->oas;
> -		fmt = ARM_64_LPAE_S2;
>  		finalise_stage_fn = arm_smmu_domain_finalise_s2;
>  		break;
>  	default:
>  		return -EINVAL;
>  	}
>  
> -	pgtbl_cfg = (struct io_pgtable_cfg) {
> -		.pgsize_bitmap	= smmu->pgsize_bitmap,
> -		.ias		= ias,
> -		.oas		= oas,
> -		.coherent_walk	= smmu->features & ARM_SMMU_FEAT_COHERENCY,
> -		.tlb		= &arm_smmu_flush_ops,
> -		.iommu_dev	= smmu->dev,
> -	};
> -
> -	if (smmu_domain->non_strict)
> -		pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
> -
> -	pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain);
> -	if (!pgtbl_ops)
> -		return -ENOMEM;
> -
> -	domain->pgsize_bitmap = pgtbl_cfg.pgsize_bitmap;
> -	domain->geometry.aperture_end = (1UL << pgtbl_cfg.ias) - 1;
> -	domain->geometry.force_aperture = true;
> -
>  	ret = finalise_stage_fn(smmu_domain, master, &pgtbl_cfg);
>  	if (ret < 0) {
> -		free_io_pgtable_ops(pgtbl_ops);
>  		return ret;
>  	}
>  
> -	smmu_domain->pgtbl_ops = pgtbl_ops;
>  	return 0;
>  }
>  
> @@ -1626,71 +1517,6 @@ out_unlock:
>  	return ret;
>  }
>  
> -static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
> -			phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
> -{
> -	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
> -
> -	if (!ops)
> -		return -ENODEV;
> -
> -	return ops->map(ops, iova, paddr, size, prot, gfp);
> -}
> -
> -static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
> -			     size_t size, struct iommu_iotlb_gather *gather)
> -{
> -	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> -	struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
> -
> -	if (!ops)
> -		return 0;
> -
> -	return ops->unmap(ops, iova, size, gather);
> -}
> -
> -static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain)
> -{
> -	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> -
> -	if (smmu_domain->smmu)
> -		arm_smmu_tlb_inv_context(smmu_domain);
> -}
> -
> -static void arm_smmu_iotlb_sync(struct iommu_domain *domain,
> -				struct iommu_iotlb_gather *gather)
> -{
> -	struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu;
> -
> -	if (smmu)
> -		arm_smmu_cmdq_issue_sync(smmu);
> -}
> -
> -static phys_addr_t
> -arm_smmu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
> -{
> -	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
> -
> -	if (domain->type == IOMMU_DOMAIN_IDENTITY)
> -		return iova;
> -
> -	if (!ops)
> -		return 0;
> -
> -	return ops->iova_to_phys(ops, iova);
> -}
> -
> -static struct platform_driver arm_smmu_driver;
> -
> -static
> -struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
> -{
> -	struct device *dev = driver_find_device_by_fwnode(&arm_smmu_driver.driver,
> -							  fwnode);
> -	put_device(dev);
> -	return dev ? dev_get_drvdata(dev) : NULL;
> -}
> -
>  static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
>  {
>  	unsigned long limit = smmu->strtab_cfg.num_l1_ents;
> @@ -1701,206 +1527,6 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
>  	return sid < limit;
>  }
>  
> -static struct iommu_ops arm_smmu_ops;
> -
> -static struct iommu_device *arm_smmu_probe_device(struct device *dev)
> -{
> -	int i, ret;
> -	struct arm_smmu_device *smmu;
> -	struct arm_smmu_master *master;
> -	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> -
> -	if (!fwspec || fwspec->ops != &arm_smmu_ops)
> -		return ERR_PTR(-ENODEV);
> -
> -	if (WARN_ON_ONCE(dev_iommu_priv_get(dev)))
> -		return ERR_PTR(-EBUSY);
> -
> -	smmu = arm_smmu_get_by_fwnode(fwspec->iommu_fwnode);
> -	if (!smmu)
> -		return ERR_PTR(-ENODEV);
> -
> -	master = kzalloc(sizeof(*master), GFP_KERNEL);
> -	if (!master)
> -		return ERR_PTR(-ENOMEM);
> -
> -	master->dev = dev;
> -	master->smmu = smmu;
> -	master->sids = fwspec->ids;
> -	master->num_sids = fwspec->num_ids;
> -	dev_iommu_priv_set(dev, master);
> -
> -	/* Check the SIDs are in range of the SMMU and our stream table */
> -	for (i = 0; i < master->num_sids; i++) {
> -		u32 sid = master->sids[i];
> -
> -		if (!arm_smmu_sid_in_range(smmu, sid)) {
> -			ret = -ERANGE;
> -			goto err_free_master;
> -		}
> -
> -		/* Ensure l2 strtab is initialised */
> -		if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) {
> -			ret = arm_smmu_init_l2_strtab(smmu, sid);
> -			if (ret)
> -				goto err_free_master;
> -		}
> -	}
> -
> -	return &smmu->iommu;
> -
> -err_free_master:
> -	kfree(master);
> -	dev_iommu_priv_set(dev, NULL);
> -	return ERR_PTR(ret);
> -}
> -
> -static void arm_smmu_release_device(struct device *dev)
> -{
> -	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> -	struct arm_smmu_master *master;
> -
> -	if (!fwspec || fwspec->ops != &arm_smmu_ops)
> -		return;
> -
> -	master = dev_iommu_priv_get(dev);
> -	arm_smmu_detach_dev(master);
> -	kfree(master);
> -	iommu_fwspec_free(dev);
> -}
> -
> -static struct iommu_group *arm_smmu_device_group(struct device *dev)
> -{
> -	struct iommu_group *group;
> -
> -	/*
> -	 * We don't support devices sharing stream IDs other than PCI RID
> -	 * aliases, since the necessary ID-to-device lookup becomes rather
> -	 * impractical given a potential sparse 32-bit stream ID space.
> -	 */
> -	if (dev_is_pci(dev))
> -		group = pci_device_group(dev);
> -	else
> -		group = generic_device_group(dev);
> -
> -	return group;
> -}
> -
> -static int arm_smmu_domain_get_attr(struct iommu_domain *domain,
> -				    enum iommu_attr attr, void *data)
> -{
> -	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> -
> -	switch (domain->type) {
> -	case IOMMU_DOMAIN_UNMANAGED:
> -		switch (attr) {
> -		case DOMAIN_ATTR_NESTING:
> -			*(int *)data = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED);
> -			return 0;
> -		default:
> -			return -ENODEV;
> -		}
> -		break;
> -	case IOMMU_DOMAIN_DMA:
> -		switch (attr) {
> -		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
> -			*(int *)data = smmu_domain->non_strict;
> -			return 0;
> -		default:
> -			return -ENODEV;
> -		}
> -		break;
> -	default:
> -		return -EINVAL;
> -	}
> -}
> -
> -static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
> -				    enum iommu_attr attr, void *data)
> -{
> -	int ret = 0;
> -	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> -
> -	mutex_lock(&smmu_domain->init_mutex);
> -
> -	switch (domain->type) {
> -	case IOMMU_DOMAIN_UNMANAGED:
> -		switch (attr) {
> -		case DOMAIN_ATTR_NESTING:
> -			if (smmu_domain->smmu) {
> -				ret = -EPERM;
> -				goto out_unlock;
> -			}
> -
> -			if (*(int *)data)
> -				smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED;
> -			else
> -				smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
> -			break;
> -		default:
> -			ret = -ENODEV;
> -		}
> -		break;
> -	case IOMMU_DOMAIN_DMA:
> -		switch(attr) {
> -		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
> -			smmu_domain->non_strict = *(int *)data;
> -			break;
> -		default:
> -			ret = -ENODEV;
> -		}
> -		break;
> -	default:
> -		ret = -EINVAL;
> -	}
> -
> -out_unlock:
> -	mutex_unlock(&smmu_domain->init_mutex);
> -	return ret;
> -}
> -
> -static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args)
> -{
> -	return iommu_fwspec_add_ids(dev, args->args, 1);
> -}
> -
> -static void arm_smmu_get_resv_regions(struct device *dev,
> -				      struct list_head *head)
> -{
> -	struct iommu_resv_region *region;
> -	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> -
> -	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
> -					 prot, IOMMU_RESV_SW_MSI);
> -	if (!region)
> -		return;
> -
> -	list_add_tail(&region->list, head);
> -
> -	iommu_dma_get_resv_regions(dev, head);
> -}

Arguably this could have been removed previously as part of the MSI
patch, but that's OK either way.


> -static struct iommu_ops arm_smmu_ops = {
> -	.capable		= arm_smmu_capable,
> -	.domain_alloc		= arm_smmu_domain_alloc,
> -	.domain_free		= arm_smmu_domain_free,
> -	.attach_dev		= arm_smmu_attach_dev,
> -	.map			= arm_smmu_map,
> -	.unmap			= arm_smmu_unmap,
> -	.flush_iotlb_all	= arm_smmu_flush_iotlb_all,
> -	.iotlb_sync		= arm_smmu_iotlb_sync,
> -	.iova_to_phys		= arm_smmu_iova_to_phys,
> -	.probe_device		= arm_smmu_probe_device,
> -	.release_device		= arm_smmu_release_device,
> -	.device_group		= arm_smmu_device_group,
> -	.domain_get_attr	= arm_smmu_domain_get_attr,
> -	.domain_set_attr	= arm_smmu_domain_set_attr,
> -	.of_xlate		= arm_smmu_of_xlate,
> -	.get_resv_regions	= arm_smmu_get_resv_regions,
> -	.put_resv_regions	= generic_iommu_put_resv_regions,
> -	.pgsize_bitmap		= -1UL, /* Restricted during device attach */
> -};
> -
>  /* Probing and initialisation functions */
>  static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
>  				   struct arm_smmu_queue *q,
> @@ -2515,21 +2139,10 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>  	default:
>  		dev_info(smmu->dev,
>  			"unknown output address size. Truncating to 48-bit\n");
> -		fallthrough;
>  	case IDR5_OAS_48_BIT:
>  		smmu->oas = 48;
>  	}
>  
> -	if (arm_smmu_ops.pgsize_bitmap == -1UL)
> -		arm_smmu_ops.pgsize_bitmap = smmu->pgsize_bitmap;
> -	else
> -		arm_smmu_ops.pgsize_bitmap |= smmu->pgsize_bitmap;
> -
> -	/* Set the DMA mask for our table walker */
> -	if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(smmu->oas)))
> -		dev_warn(smmu->dev,
> -			 "failed to set DMA mask for table walker\n");
> -
>  	smmu->ias = max(smmu->ias, smmu->oas);
>  
>  	dev_info(smmu->dev, "ias %lu-bit, oas %lu-bit (features 0x%08x)\n",
> @@ -2595,9 +2208,6 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev,
>  
>  	parse_driver_options(smmu);
>  
> -	if (of_dma_is_coherent(dev->of_node))
> -		smmu->features |= ARM_SMMU_FEAT_COHERENCY;
> -

Why this change? The ARM_SMMU_FEAT_COHERENCY flag is still used in
arm_smmu_device_hw_probe.


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 02:07:10 2020
References: <1326147626.1969889.1606825358993.ref@mail.yahoo.com> <1326147626.1969889.1606825358993@mail.yahoo.com>
In-Reply-To: <1326147626.1969889.1606825358993@mail.yahoo.com>
From: Roman Shaposhnik <roman@zededa.com>
Date: Tue, 1 Dec 2020 18:06:50 -0800
Message-ID: <CAMmSBy_3wgU=DHsAxG0EK-WKAf71PEsXvu=1hbjoJ_cyy=gcDw@mail.gmail.com>
Subject: Re: Apple on Xen?
To: Jason Long <hack3rcon@yahoo.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Tue, Dec 1, 2020 at 4:22 AM Jason Long <hack3rcon@yahoo.com> wrote:
>
> Hello,
> According to this news (https://aws.amazon.com/blogs/aws/new-use-mac-instances-to-build-test-macos-ios-ipados-tvos-and-watchos-apps/), Amazon EC2 can run macOS.
> Is it OK for Xen Project too?

Amazon recently clarified that these are bare-metal instances -- no
hypervisor of any kind involved.

Thanks,
Roman.


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 02:52:04 2020
Date: Tue, 1 Dec 2020 18:51:45 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
    Paul Durrant <paul@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 8/8] xen/arm: Add support for SMMUv3 driver
In-Reply-To: <de2101687020d18172a2b153f8977a5116d0cd66.1606406359.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2012011749550.1100@sstabellini-ThinkPad-T480s>
References: <cover.1606406359.git.rahul.singh@arm.com> <de2101687020d18172a2b153f8977a5116d0cd66.1606406359.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 26 Nov 2020, Rahul Singh wrote:
> Add support for ARM architected SMMUv3 implementation. It is based on
> the Linux SMMUv3 driver.
> 
> Major differences with regard to Linux driver are as follows:
> 1. Only Stage-2 translation is supported as compared to the Linux driver
>    that supports both Stage-1 and Stage-2 translations.
> 2. Use P2M  page table instead of creating one as SMMUv3 has the
>    capability to share the page tables with the CPU.
> 3. Tasklets are used in place of threaded IRQ's in Linux for event queue
>    and priority queue IRQ handling.
> 4. Latest version of the Linux SMMUv3 code implements the commands queue
>    access functions based on atomic operations implemented in Linux.
>    Atomic functions used by the commands queue access functions are not
>    implemented in XEN therefore we decided to port the earlier version
>    of the code. Once the proper atomic operations will be available in
>    XEN the driver can be updated.
> 5. Driver is currently supported as Tech Preview.

This patch is big and was difficult to review; nonetheless, I tried :-)

That said, the code is self-contained, marked as Tech Preview, and not
at risk of making other things unstable, so it is low risk to accept it.

Comments below.


> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> ---
>  MAINTAINERS                           |   6 +
>  SUPPORT.md                            |   1 +
>  xen/drivers/passthrough/Kconfig       |  10 +
>  xen/drivers/passthrough/arm/Makefile  |   1 +
>  xen/drivers/passthrough/arm/smmu-v3.c | 986 +++++++++++++++++++++-----
>  5 files changed, 814 insertions(+), 190 deletions(-)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index dab38a6a14..1d63489eec 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -249,6 +249,12 @@ F:	xen/include/asm-arm/
>  F:	xen/include/public/arch-arm/
>  F:	xen/include/public/arch-arm.h
>  
> +ARM SMMUv3
> +M:	Bertrand Marquis <bertrand.marquis@arm.com>
> +M:	Rahul Singh <rahul.singh@arm.com>
> +S:	Supported
> +F:	xen/drivers/passthrough/arm/smmu-v3.c
> +
>  Change Log
>  M:	Paul Durrant <paul@xen.org>
>  R:	Community Manager <community.manager@xenproject.org>
> diff --git a/SUPPORT.md b/SUPPORT.md
> index ab02aca5f4..e402c7202d 100644
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -68,6 +68,7 @@ For the Cortex A57 r0p0 - r1p1, see Errata 832075.
>      Status, ARM SMMUv1: Supported, not security supported
>      Status, ARM SMMUv2: Supported, not security supported
>      Status, Renesas IPMMU-VMSA: Supported, not security supported
> +    Status, ARM SMMUv3: Tech Preview
>  
>  ### ARM/GICv3 ITS
>  
> diff --git a/xen/drivers/passthrough/Kconfig b/xen/drivers/passthrough/Kconfig
> index 0036007ec4..5b71c59f47 100644
> --- a/xen/drivers/passthrough/Kconfig
> +++ b/xen/drivers/passthrough/Kconfig
> @@ -13,6 +13,16 @@ config ARM_SMMU
>  	  Say Y here if your SoC includes an IOMMU device implementing the
>  	  ARM SMMU architecture.
>  
> +config ARM_SMMU_V3
> +	bool "ARM Ltd. System MMU Version 3 (SMMUv3) Support" if EXPERT
> +	depends on ARM_64
> +	---help---
> +	 Support for implementations of the ARM System MMU architecture
> +	 version 3.
> +
> +	 Say Y here if your system includes an IOMMU device implementing
> +	 the ARM SMMUv3 architecture.
> +
>  config IPMMU_VMSA
>  	bool "Renesas IPMMU-VMSA found in R-Car Gen3 SoCs"
>  	depends on ARM_64
> diff --git a/xen/drivers/passthrough/arm/Makefile b/xen/drivers/passthrough/arm/Makefile
> index fcd918ea3e..c5fb3b58a5 100644
> --- a/xen/drivers/passthrough/arm/Makefile
> +++ b/xen/drivers/passthrough/arm/Makefile
> @@ -1,3 +1,4 @@
>  obj-y += iommu.o iommu_helpers.o iommu_fwspec.o
>  obj-$(CONFIG_ARM_SMMU) += smmu.o
>  obj-$(CONFIG_IPMMU_VMSA) += ipmmu-vmsa.o
> +obj-$(CONFIG_ARM_SMMU_V3) += smmu-v3.o
> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> index 55d1cba194..8f2337e7f2 100644
> --- a/xen/drivers/passthrough/arm/smmu-v3.c
> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> @@ -2,36 +2,280 @@
>  /*
>   * IOMMU API for ARM architected SMMUv3 implementations.
>   *
> - * Copyright (C) 2015 ARM Limited
> + * Based on Linux's SMMUv3 driver:
> + *    drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> + *    commit: 951cbbc386ff01b50da4f46387e994e81d9ab431
> + * and Xen's SMMU driver:
> + *    xen/drivers/passthrough/arm/smmu.c
>   *
> - * Author: Will Deacon <will.deacon@arm.com>
> + * Copyright (C) 2015 ARM Limited Will Deacon <will.deacon@arm.com>
>   *
> - * This driver is powered by bad coffee and bombay mix.
> + * Copyright (C) 2020 Arm Ltd.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + *
> + */
> +
> +#include <xen/acpi.h>
> +#include <xen/config.h>
> +#include <xen/delay.h>
> +#include <xen/errno.h>
> +#include <xen/err.h>
> +#include <xen/irq.h>
> +#include <xen/lib.h>
> +#include <xen/list.h>
> +#include <xen/mm.h>
> +#include <xen/rbtree.h>
> +#include <xen/sched.h>
> +#include <xen/sizes.h>
> +#include <xen/vmap.h>
> +#include <asm/atomic.h>
> +#include <asm/device.h>
> +#include <asm/io.h>
> +#include <asm/platform.h>
> +#include <asm/iommu_fwspec.h>
> +
> +/* Linux compatibility functions. */
> +typedef paddr_t dma_addr_t;
> +typedef unsigned int gfp_t;
> +
> +#define platform_device device
> +
> +#define GFP_KERNEL 0
> +
> +/* Alias to Xen device tree helpers */
> +#define device_node dt_device_node
> +#define of_phandle_args dt_phandle_args
> +#define of_device_id dt_device_match
> +#define of_match_node dt_match_node
> +#define of_property_read_u32(np, pname, out) (!dt_property_read_u32(np, pname, out))
> +#define of_property_read_bool dt_property_read_bool
> +#define of_parse_phandle_with_args dt_parse_phandle_with_args

Given all the changes made to the file by the previous patches, we are
basically fully (or almost fully) adapting this code to Xen.

So at that point I wonder if we should just as well make these changes
(e.g. s/of_phandle_args/dt_phandle_args/g) to the code too.

Julien, Rahul, what do you guys think?


> +/* Alias to Xen lock functions */

I think this deserves at least one statement to explain why it is OK.

Also, a similar comment to the one above applies: maybe we should add a
couple more preparation patches to s/mutex/spinlock/g and to change the
time and allocation functions too.


> +#define mutex spinlock
> +#define mutex_init spin_lock_init
> +#define mutex_lock spin_lock
> +#define mutex_unlock spin_unlock
> +
> +/* Alias to Xen time functions */
> +#define ktime_t s_time_t
> +#define ktime_get()             (NOW())
> +#define ktime_add_us(t,i)       (t + MICROSECS(i))
> +#define ktime_compare(t,i)      (t > (i))
> +
> +/* Alias to Xen allocation helpers */
> +#define kzalloc(size, flags)    _xzalloc(size, sizeof(void *))
> +#define kfree xfree
> +#define devm_kzalloc(dev, size, flags)  _xzalloc(size, sizeof(void *))
> +
> +/* Device logger functions */
> +#define dev_name(dev) dt_node_full_name(dev->of_node)
> +#define dev_dbg(dev, fmt, ...)      \
> +    printk(XENLOG_DEBUG "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
> +#define dev_notice(dev, fmt, ...)   \
> +    printk(XENLOG_INFO "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
> +#define dev_warn(dev, fmt, ...)     \
> +    printk(XENLOG_WARNING "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
> +#define dev_err(dev, fmt, ...)      \
> +    printk(XENLOG_ERR "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
> +#define dev_info(dev, fmt, ...)     \
> +    printk(XENLOG_INFO "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
> +#define dev_err_ratelimited(dev, fmt, ...)      \
> +    printk(XENLOG_ERR "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
> +
> +/*
> + * Periodically poll an address and wait between reads in us until a
> + * condition is met or a timeout occurs.

It would be good to add a statement to point out what the return value
is going to be.
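
For example, this is a simplified stand-alone model of the macro (a
counter stands in for NOW()/udelay(); the names here are made up for
illustration), just to spell out the contract I think the comment
should state: it evaluates to 0 if the condition was met, -ETIMEDOUT
otherwise:

```c
#include <errno.h>

/*
 * Simplified stand-alone model of readx_poll_timeout(): a counter
 * stands in for NOW()/udelay(). Evaluates to 0 if the condition was
 * met, -ETIMEDOUT if the deadline passed first (with one final read
 * after the deadline, as in the quoted macro).
 */
static int fake_time;

#define fake_now() (fake_time++)

#define poll_timeout_model(op, addr, val, cond, timeout_ticks)  \
({                                                              \
    int deadline = fake_now() + (timeout_ticks);                \
    for ( ;; )                                                  \
    {                                                           \
        (val) = op(addr);                                       \
        if ( cond )                                             \
            break;                                              \
        if ( fake_now() > deadline )                            \
        {                                                       \
            /* One last read after the deadline, like the quoted macro */ \
            (val) = op(addr);                                   \
            break;                                              \
        }                                                       \
    }                                                           \
    (cond) ? 0 : -ETIMEDOUT;                                    \
})

static int reg;

static int read_reg(int *r)
{
    return *r;
}
```

So a one-line comment such as "Returns 0 when the condition is met,
-ETIMEDOUT on timeout" would already cover it.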


> + */
> +#define readx_poll_timeout(op, addr, val, cond, sleep_us, timeout_us) \
> +({ \
> +     s_time_t deadline = NOW() + MICROSECS(timeout_us); \
> +     for (;;) { \
> +        (val) = op(addr); \
> +        if (cond) \
> +            break; \
> +        if (NOW() > deadline) { \
> +            (val) = op(addr); \
> +            break; \
> +        } \
> +        udelay(sleep_us); \
> +     } \
> +     (cond) ? 0 : -ETIMEDOUT; \
> +})
> +
> +#define readl_relaxed_poll_timeout(addr, val, cond, delay_us, timeout_us) \
> +    readx_poll_timeout(readl_relaxed, addr, val, cond, delay_us, timeout_us)
> +
> +#define FIELD_PREP(_mask, _val)         \
> +    (((typeof(_mask))(_val) << (__builtin_ffsll(_mask) - 1)) & (_mask))

Let's add the definition of ffsll to bitops.h
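
For reference, a generic fallback could look like this (just a sketch,
the final shape in bitops.h is of course up for discussion) with the
same contract as __builtin_ffsll: the 1-based index of the least
significant set bit, or 0 if no bit is set:

```c
/*
 * Sketch of a generic ffsll fallback for bitops.h (same contract as
 * __builtin_ffsll): returns the 1-based index of the least
 * significant set bit, or 0 if no bit is set.
 */
static inline int ffsll(long long v)
{
    unsigned long long x = v;
    int bit = 1;

    if ( !x )
        return 0;

    while ( !(x & 1) )
    {
        x >>= 1;
        bit++;
    }

    return bit;
}
```

FIELD_PREP/FIELD_GET could then use ffsll(_mask) in place of
__builtin_ffsll(_mask).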


> +#define FIELD_GET(_mask, _reg)          \
> +    (typeof(_mask))(((_reg) & (_mask)) >> (__builtin_ffsll(_mask) - 1))
> +
> +#define WRITE_ONCE(x, val)                  \
> +do {                                        \
> +    *(volatile typeof(x) *)&(x) = (val);    \
> +} while (0)

maybe we should define this in xen/include/xen/lib.h


> +
> +/* Xen: Stub out DMA domain related functions */
> +#define iommu_get_dma_cookie(dom) 0
> +#define iommu_put_dma_cookie(dom)

Shouldn't we remove any call to iommu_get_dma_cookie and
iommu_put_dma_cookie in one of the previous patches?


> +/*
> + * Helpers for DMA allocation. Just the function name is reused for
> + * porting code, these allocation are not managed allocations
>   */
> +static void *dmam_alloc_coherent(struct device *dev, size_t size,
> +                                 paddr_t *dma_handle, gfp_t gfp)
> +{
> +    void *vaddr;
> +    unsigned long alignment = size;
> +
> +    /*
> +     * _xzalloc requires that (align & (align - 1)) == 0, i.e. a
> +     * power-of-two alignment. Most allocations in the SMMU code should
> +     * pass a suitable size. If not, print a warning and fall back to
> +     * the alignment of a (void *).
> +     */
> +    if ( size & (size - 1) )
> +    {
> +        printk(XENLOG_WARNING "SMMUv3: Fixing alignment for the DMA buffer\n");
> +        alignment = sizeof(void *);
> +    }
> +
> +    vaddr = _xzalloc(size, alignment);
> +    if ( !vaddr )
> +    {
> +        printk(XENLOG_ERR "SMMUv3: DMA allocation failed\n");
> +        return NULL;
> +    }
> +
> +    *dma_handle = virt_to_maddr(vaddr);
> +
> +    return vaddr;
> +}
> +
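Note that `size & (size - 1)` is the usual power-of-two test (it is also false for size == 0, which would then be passed through unchanged as alignment). A standalone sketch of the fallback logic above (hypothetical helper names):

```c
#include <stdbool.h>
#include <stddef.h>

static bool is_pow2(size_t size)
{
    return size != 0 && (size & (size - 1)) == 0;
}

/* Mirrors the patch: use the size itself as alignment when it is a
 * power of two, otherwise fall back to pointer alignment. */
size_t pick_alignment(size_t size)
{
    return is_pow2(size) ? size : sizeof(void *);
}
```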
> +/* Xen: Type definitions for iommu_domain */
> +#define IOMMU_DOMAIN_UNMANAGED 0
> +#define IOMMU_DOMAIN_DMA 1
> +#define IOMMU_DOMAIN_IDENTITY 2
> +
> +/* Xen specific code. */
> +struct iommu_domain {
> +    /* Runtime SMMU configuration for this iommu_domain */
> +    atomic_t ref;
> +    /*
> +     * Used to link iommu_domain contexts belonging to the same domain.
> +     * There is at least one per SMMU used by the domain.
> +     */
> +    struct list_head    list;
> +};
> +
> +/* Describes information required for a Xen domain */
> +struct arm_smmu_xen_domain {
> +    spinlock_t      lock;
> +
> +    /* List of iommu domains associated with this domain */
> +    struct list_head    contexts;
> +};
> +
> +/*
> + * Information about each device stored in dev->archdata.iommu
> + * The dev->archdata.iommu stores the iommu_domain (runtime configuration of
> + * the SMMU).
> + */
> +struct arm_smmu_xen_device {
> +    struct iommu_domain *domain;
> +};

Do we need both struct arm_smmu_xen_device and struct iommu_domain?


> +/* Keep a list of devices associated with this driver */
> +static DEFINE_SPINLOCK(arm_smmu_devices_lock);
> +static LIST_HEAD(arm_smmu_devices);
> +
> +
> +static inline void *dev_iommu_priv_get(struct device *dev)
> +{
> +    struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> +
> +    return fwspec && fwspec->iommu_priv ? fwspec->iommu_priv : NULL;
> +}
> +
> +static inline void dev_iommu_priv_set(struct device *dev, void *priv)
> +{
> +    struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> +
> +    fwspec->iommu_priv = priv;
> +}
> +
> +int dt_property_match_string(const struct dt_device_node *np,
> +                             const char *propname, const char *string)
> +{
> +    const struct dt_property *dtprop = dt_find_property(np, propname, NULL);
> +    size_t l;
> +    int i;
> +    const char *p, *end;
> +
> +    if ( !dtprop )
> +        return -EINVAL;
> +
> +    if ( !dtprop->value )
> +        return -ENODATA;
> +
> +    p = dtprop->value;
> +    end = p + dtprop->length;
> +
> +    for ( i = 0; p < end; i++, p += l )
> +    {
> +        l = strnlen(p, end - p) + 1;
> +
> +        if ( p + l > end )
> +            return -EILSEQ;
> +
> +        if ( strcmp(string, p) == 0 )
> +            return i; /* Found it; return index */
> +    }
> +
> +    return -ENODATA;
> +}

I think you should either use dt_property_read_string or move the
implementation of dt_property_match_string to xen/common/device_tree.c
(in which case I suggest doing it in a separate patch).

I'd just use dt_property_read_string.
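Whichever way it ends up, for reference: "interrupt-names" is a DT string-list property, i.e. concatenated NUL-terminated strings, and the matching loop boils down to the following (standalone sketch, hypothetical names):

```c
#include <stddef.h>
#include <string.h>

/* Return the index of 'string' in a DT-style string list, -1 on a
 * malformed (unterminated) list, -2 when not found. */
int stringlist_match(const char *value, size_t length, const char *string)
{
    const char *p = value, *end = value + length;
    size_t l;
    int i;

    for (i = 0; p < end; i++, p += l) {
        l = strnlen(p, end - p) + 1;
        if (p + l > end)
            return -1;
        if (strcmp(string, p) == 0)
            return i;
    }
    return -2;
}

int demo_match(const char *name)
{
    /* sizeof includes the literal's trailing NUL */
    static const char irq_names[] = "combined\0eventq\0priq\0gerror";
    return stringlist_match(irq_names, sizeof(irq_names), name);
}
```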



> +static int platform_get_irq_byname_optional(struct device *dev,
> +                                            const char *name)
> +{
> +    int index, ret;
> +    struct dt_device_node *np  = dev_to_dt(dev);
> +
> +    if ( unlikely(!name) )
> +        return -EINVAL;
> +
> +    index = dt_property_match_string(np, "interrupt-names", name);
> +    if ( index < 0 )
> +    {
> +        dev_info(dev, "IRQ %s not found\n", name);
> +        return index;
> +    }
>  
> -#include <linux/acpi.h>
> -#include <linux/acpi_iort.h>
> -#include <linux/bitfield.h>
> -#include <linux/bitops.h>
> -#include <linux/crash_dump.h>
> -#include <linux/delay.h>
> -#include <linux/dma-iommu.h>
> -#include <linux/err.h>
> -#include <linux/interrupt.h>
> -#include <linux/io-pgtable.h>
> -#include <linux/iommu.h>
> -#include <linux/iopoll.h>
> -#include <linux/module.h>
> -#include <linux/msi.h>
> -#include <linux/of.h>
> -#include <linux/of_address.h>
> -#include <linux/of_iommu.h>
> -#include <linux/of_platform.h>
> -#include <linux/pci.h>
> -#include <linux/pci-ats.h>
> -#include <linux/platform_device.h>
> -
> -#include <linux/amba/bus.h>
> +    ret = platform_get_irq(np, index);
> +    if ( ret < 0 )
> +    {
> +        dev_err(dev, "failed to get irq index %d\n", index);
> +        return -ENODEV;
> +    }
> +
> +    return ret;
> +}
> +
> +/* Start of Linux SMMUv3 code */
>  
>  /* MMIO registers */
>  #define ARM_SMMU_IDR0			0x0
> @@ -507,6 +751,7 @@ struct arm_smmu_s2_cfg {
>  	u16				vmid;
>  	u64				vttbr;
>  	u64				vtcr;
> +	struct domain		*domain;
>  };
>  
>  struct arm_smmu_strtab_cfg {
> @@ -567,8 +812,13 @@ struct arm_smmu_device {
>  
>  	struct arm_smmu_strtab_cfg	strtab_cfg;
>  
> -	/* IOMMU core code handle */
> -	struct iommu_device		iommu;
> +	/* Need to keep a list of SMMU devices */
> +	struct list_head		devices;
> +
> +	/* Tasklets for handling evts/faults and PCI page request IRQs */
> +	struct tasklet		evtq_irq_tasklet;
> +	struct tasklet		priq_irq_tasklet;
> +	struct tasklet		combined_irq_tasklet;
>  };
>  
>  /* SMMU private data for each master */
> @@ -1110,7 +1360,7 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
>  }
>  
>  /* IRQ and event handlers */
> -static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
> +static void arm_smmu_evtq_thread(void *dev)

I think we shouldn't call it a thread given that it's a tasklet. We
should rename the function, or it will be confusing.


>  {
>  	int i;
>  	struct arm_smmu_device *smmu = dev;
> @@ -1140,7 +1390,6 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
>  	/* Sync our overflow flag, as we believe we're up to speed */
>  	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
>  		    Q_IDX(llq, llq->cons);
> -	return IRQ_HANDLED;
>  }
>  
>  static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
> @@ -1181,7 +1430,7 @@ static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
>  	}
>  }
>  
> -static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
> +static void arm_smmu_priq_thread(void *dev)

same here


>  {
>  	struct arm_smmu_device *smmu = dev;
>  	struct arm_smmu_queue *q = &smmu->priq.q;
> @@ -1200,12 +1449,12 @@ static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
>  	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
>  		      Q_IDX(llq, llq->cons);
>  	queue_sync_cons_out(q);
> -	return IRQ_HANDLED;
>  }
>  
>  static int arm_smmu_device_disable(struct arm_smmu_device *smmu);
>  
> -static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
> +static void arm_smmu_gerror_handler(int irq, void *dev,
> +				struct cpu_user_regs *regs)
>  {
>  	u32 gerror, gerrorn, active;
>  	struct arm_smmu_device *smmu = dev;
> @@ -1215,7 +1464,7 @@ static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
>  
>  	active = gerror ^ gerrorn;
>  	if (!(active & GERROR_ERR_MASK))
> -		return IRQ_NONE; /* No errors pending */
> +		return; /* No errors pending */
>  
>  	dev_warn(smmu->dev,
>  		 "unexpected global error reported (0x%08x), this could be serious\n",
> @@ -1248,26 +1497,42 @@ static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
>  		arm_smmu_cmdq_skip_err(smmu);
>  
>  	writel(gerror, smmu->base + ARM_SMMU_GERRORN);
> -	return IRQ_HANDLED;
>  }
>  
> -static irqreturn_t arm_smmu_combined_irq_thread(int irq, void *dev)
> +static void arm_smmu_combined_irq_handler(int irq, void *dev,
> +				struct cpu_user_regs *regs)
> +{
> +	struct arm_smmu_device *smmu = (struct arm_smmu_device *)dev;
> +
> +	arm_smmu_gerror_handler(irq, dev, regs);
> +
> +	tasklet_schedule(&(smmu->combined_irq_tasklet));
> +}
> +
> +static void arm_smmu_combined_irq_thread(void *dev)
>  {
>  	struct arm_smmu_device *smmu = dev;
>  
> -	arm_smmu_evtq_thread(irq, dev);
> +	arm_smmu_evtq_thread(dev);
>  	if (smmu->features & ARM_SMMU_FEAT_PRI)
> -		arm_smmu_priq_thread(irq, dev);
> -
> -	return IRQ_HANDLED;
> +		arm_smmu_priq_thread(dev);
>  }
>  
> -static irqreturn_t arm_smmu_combined_irq_handler(int irq, void *dev)
> +static void arm_smmu_evtq_irq_tasklet(int irq, void *dev,
> +				struct cpu_user_regs *regs)
>  {
> -	arm_smmu_gerror_handler(irq, dev);
> -	return IRQ_WAKE_THREAD;
> +	struct arm_smmu_device *smmu = (struct arm_smmu_device *)dev;
> +
> +	tasklet_schedule(&(smmu->evtq_irq_tasklet));
>  }
>  
> +static void arm_smmu_priq_irq_tasklet(int irq, void *dev,
> +				struct cpu_user_regs *regs)
> +{
> +	struct arm_smmu_device *smmu = (struct arm_smmu_device *)dev;
> +
> +	tasklet_schedule(&(smmu->priq_irq_tasklet));
> +}
>  
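To summarise the conversion in these hunks: the hard IRQ handlers now only handle/ack global errors and schedule a tasklet, while the heavy queue draining runs later in tasklet context. A toy standalone model of that split (the tasklet API here is a stand-in for illustration, not Xen's actual one):

```c
#include <stdbool.h>
#include <stddef.h>

/* Minimal stand-in for a tasklet API (hypothetical) */
struct tasklet {
    void (*func)(void *data);
    void *data;
    bool scheduled;
};

static void tasklet_init(struct tasklet *t, void (*func)(void *), void *data)
{
    t->func = func;
    t->data = data;
    t->scheduled = false;
}

static void tasklet_schedule(struct tasklet *t)
{
    t->scheduled = true;        /* real code: queue for softirq context */
}

static void tasklet_run_pending(struct tasklet *t)
{
    if (t->scheduled) {
        t->scheduled = false;
        t->func(t->data);       /* runs outside hard-IRQ context */
    }
}

static int events_drained;

static void evtq_tasklet(void *data)
{
    (void)data;
    events_drained++;           /* stands in for draining the event queue */
}

int demo(void)
{
    struct tasklet t;

    tasklet_init(&t, evtq_tasklet, NULL);
    tasklet_schedule(&t);       /* what the hard IRQ handler does */
    tasklet_run_pending(&t);    /* what the softirq would do later */
    return events_drained;
}
```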
>  /* IO_PGTABLE API */
>  static void arm_smmu_tlb_inv_context(void *cookie)
> @@ -1354,27 +1619,69 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
>  }
>  
>  static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
> -				       struct arm_smmu_master *master,
> -				       struct io_pgtable_cfg *pgtbl_cfg)
> +				       struct arm_smmu_master *master)
>  {
>  	int vmid;
> +	u64 reg;
>  	struct arm_smmu_device *smmu = smmu_domain->smmu;
>  	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
>  
> +	/* VTCR */
> +	reg = VTCR_RES1 | VTCR_SH0_IS | VTCR_IRGN0_WBWA | VTCR_ORGN0_WBWA;
> +
> +	switch (PAGE_SIZE) {
> +	case SZ_4K:
> +		reg |= VTCR_TG0_4K;
> +		break;
> +	case SZ_16K:
> +		reg |= VTCR_TG0_16K;
> +		break;
> +	case SZ_64K:
> +		reg |= VTCR_TG0_64K;
> +		break;
> +	}
> +
> +	switch (smmu->oas) {
> +	case 32:
> +		reg |= VTCR_PS(_AC(0x0,ULL));
> +		break;
> +	case 36:
> +		reg |= VTCR_PS(_AC(0x1,ULL));
> +		break;
> +	case 40:
> +		reg |= VTCR_PS(_AC(0x2,ULL));
> +		break;
> +	case 42:
> +		reg |= VTCR_PS(_AC(0x3,ULL));
> +		break;
> +	case 44:
> +		reg |= VTCR_PS(_AC(0x4,ULL));
> +		break;
> +	case 48:
> +		reg |= VTCR_PS(_AC(0x5,ULL));
> +		break;
> +	case 52:
> +		reg |= VTCR_PS(_AC(0x6,ULL));
> +		break;
> +	}
> +
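For reference, VTCR_EL2.PS uses the same encoding as ID_AA64MMFR0_EL1.PARange, which is what the switch above encodes (32 -> 0 through 52 -> 6). The same mapping, table-driven (standalone sketch, hypothetical helper name):

```c
#include <stdint.h>

/* VTCR_EL2.PS / ID_AA64MMFR0_EL1.PARange encoding of the output
 * address size in bits. */
static const struct { unsigned int oas; int ps; } ps_map[] = {
    { 32, 0 }, { 36, 1 }, { 40, 2 }, { 42, 3 },
    { 44, 4 }, { 48, 5 }, { 52, 6 },
};

int oas_to_ps(unsigned int oas)
{
    unsigned int i;

    for (i = 0; i < sizeof(ps_map) / sizeof(ps_map[0]); i++)
        if (ps_map[i].oas == oas)
            return ps_map[i].ps;
    return -1;                  /* unsupported output address size */
}
```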
> +	reg |= VTCR_T0SZ(64ULL - smmu->ias);
> +	reg |= VTCR_SL0(0x2);
> +	reg |= VTCR_VS;
> +
> +	cfg->vtcr   = reg;
> +
>  	vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
>  	if (vmid < 0)
>  		return vmid;
> +	cfg->vmid  = (u16)vmid;
> +
> +	cfg->vttbr  = page_to_maddr(cfg->domain->arch.p2m.root);
> +
> +	printk(XENLOG_DEBUG
> +		   "SMMUv3: d%u: vmid 0x%x vtcr 0x%"PRIpaddr" p2maddr 0x%"PRIpaddr"\n",
> +		   cfg->domain->domain_id, cfg->vmid, cfg->vtcr, cfg->vttbr);
>  
> -	vtcr = &pgtbl_cfg->arm_lpae_s2_cfg.vtcr;
> -	cfg->vmid	= (u16)vmid;
> -	cfg->vttbr	= pgtbl_cfg->arm_lpae_s2_cfg.vttbr;
> -	cfg->vtcr	= FIELD_PREP(STRTAB_STE_2_VTCR_S2T0SZ, vtcr->tsz) |
> -			  FIELD_PREP(STRTAB_STE_2_VTCR_S2SL0, vtcr->sl) |
> -			  FIELD_PREP(STRTAB_STE_2_VTCR_S2IR0, vtcr->irgn) |
> -			  FIELD_PREP(STRTAB_STE_2_VTCR_S2OR0, vtcr->orgn) |
> -			  FIELD_PREP(STRTAB_STE_2_VTCR_S2SH0, vtcr->sh) |
> -			  FIELD_PREP(STRTAB_STE_2_VTCR_S2TG, vtcr->tg) |
> -			  FIELD_PREP(STRTAB_STE_2_VTCR_S2PS, vtcr->ps);
>  	return 0;
>  }
>  
> @@ -1382,28 +1689,12 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
>  				    struct arm_smmu_master *master)
>  {
>  	int ret;
> -	unsigned long ias, oas;
> -	int (*finalise_stage_fn)(struct arm_smmu_domain *,
> -				 struct arm_smmu_master *,
> -				 struct io_pgtable_cfg *);
>  	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> -	struct arm_smmu_device *smmu = smmu_domain->smmu;
>  
>  	/* Restrict the stage to what we can actually support */
>  	smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
>  
> -	switch (smmu_domain->stage) {
> -	case ARM_SMMU_DOMAIN_NESTED:
> -	case ARM_SMMU_DOMAIN_S2:
> -		ias = smmu->ias;
> -		oas = smmu->oas;
> -		finalise_stage_fn = arm_smmu_domain_finalise_s2;
> -		break;
> -	default:
> -		return -EINVAL;
> -	}
> -
> -	ret = finalise_stage_fn(smmu_domain, master, &pgtbl_cfg);
> +	ret = arm_smmu_domain_finalise_s2(smmu_domain, master);
>  	if (ret < 0) {
>  		return ret;
>  	}
> @@ -1553,7 +1844,8 @@ static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
>  		return -ENOMEM;
>  	}
>  
> -	if (!WARN_ON(q->base_dma & (qsz - 1))) {
> +	WARN_ON(q->base_dma & (qsz - 1));
> +	if (unlikely(q->base_dma & (qsz - 1))) {
>  		dev_info(smmu->dev, "allocated %u entries for %s\n",
>  			 1 << q->llq.max_n_shift, name);

We don't need both the WARN_ON and the dev_info. You could turn the
dev_info into a dev_warn (XENLOG_WARNING). Also note the condition is now
inverted compared to Linux: the "allocated ..." message only gets printed
when base_dma is misaligned.


>  	}
> @@ -1758,9 +2050,7 @@ static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
>  	/* Request interrupt lines */
>  	irq = smmu->evtq.q.irq;
>  	if (irq) {
> -		ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
> -						arm_smmu_evtq_thread,
> -						IRQF_ONESHOT,
> +		ret = request_irq(irq, 0, arm_smmu_evtq_irq_tasklet,
>  						"arm-smmu-v3-evtq", smmu);
>  		if (ret < 0)
>  			dev_warn(smmu->dev, "failed to enable evtq irq\n");
> @@ -1770,8 +2060,8 @@ static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
>  
>  	irq = smmu->gerr_irq;
>  	if (irq) {
> -		ret = devm_request_irq(smmu->dev, irq, arm_smmu_gerror_handler,
> -				       0, "arm-smmu-v3-gerror", smmu);
> +		ret = request_irq(irq, 0, arm_smmu_gerror_handler,
> +						"arm-smmu-v3-gerror", smmu);
>  		if (ret < 0)
>  			dev_warn(smmu->dev, "failed to enable gerror irq\n");
>  	} else {
> @@ -1781,11 +2071,8 @@ static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
>  	if (smmu->features & ARM_SMMU_FEAT_PRI) {
>  		irq = smmu->priq.q.irq;
>  		if (irq) {
> -			ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
> -							arm_smmu_priq_thread,
> -							IRQF_ONESHOT,
> -							"arm-smmu-v3-priq",
> -							smmu);
> +			ret = request_irq(irq, 0, arm_smmu_priq_irq_tasklet,
> +							"arm-smmu-v3-priq", smmu);
>  			if (ret < 0)
>  				dev_warn(smmu->dev,
>  					 "failed to enable priq irq\n");
> @@ -1814,11 +2101,8 @@ static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
>  		 * Cavium ThunderX2 implementation doesn't support unique irq
>  		 * lines. Use a single irq line for all the SMMUv3 interrupts.
>  		 */
> -		ret = devm_request_threaded_irq(smmu->dev, irq,
> -					arm_smmu_combined_irq_handler,
> -					arm_smmu_combined_irq_thread,
> -					IRQF_ONESHOT,
> -					"arm-smmu-v3-combined-irq", smmu);
> +		ret = request_irq(irq, 0, arm_smmu_combined_irq_handler,
> +						"arm-smmu-v3-combined-irq", smmu);
>  		if (ret < 0)
>  			dev_warn(smmu->dev, "failed to enable combined irq\n");
>  	} else
> @@ -1857,7 +2141,7 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
>  	reg = readl_relaxed(smmu->base + ARM_SMMU_CR0);
>  	if (reg & CR0_SMMUEN) {
>  		dev_warn(smmu->dev, "SMMU currently enabled! Resetting...\n");
> -		WARN_ON(is_kdump_kernel() && !disable_bypass);
> +		WARN_ON(!disable_bypass);
>  		arm_smmu_update_gbpa(smmu, GBPA_ABORT, 0);
>  	}
>  
> @@ -1952,8 +2236,11 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
>  		return ret;
>  	}
>  
> -	if (is_kdump_kernel())
> -		enables &= ~(CR0_EVTQEN | CR0_PRIQEN);
> +	/* Initialize tasklets for threaded IRQs */
> +	tasklet_init(&smmu->evtq_irq_tasklet, arm_smmu_evtq_thread, smmu);
> +	tasklet_init(&smmu->priq_irq_tasklet, arm_smmu_priq_thread, smmu);
> +	tasklet_init(&smmu->combined_irq_tasklet, arm_smmu_combined_irq_thread,
> +				 smmu);
>  
>  	/* Enable the SMMU interface, or ensure bypass */
>  	if (!bypass || disable_bypass) {
> @@ -2195,7 +2482,7 @@ static inline int arm_smmu_device_acpi_probe(struct platform_device *pdev,
>  static int arm_smmu_device_dt_probe(struct platform_device *pdev,
>  				    struct arm_smmu_device *smmu)
>  {
> -	struct device *dev = &pdev->dev;
> +	struct device *dev = pdev;
>  	u32 cells;
>  	int ret = -EINVAL;
>  
> @@ -2219,130 +2506,449 @@ static unsigned long arm_smmu_resource_size(struct arm_smmu_device *smmu)
>  		return SZ_128K;
>  }
>  
> +/* Start of Xen specific code. */
>  static int arm_smmu_device_probe(struct platform_device *pdev)
>  {
> -	int irq, ret;
> -	struct resource *res;
> -	resource_size_t ioaddr;
> -	struct arm_smmu_device *smmu;
> -	struct device *dev = &pdev->dev;
> -	bool bypass;
> +    int irq, ret;
> +    paddr_t ioaddr, iosize;
> +    struct arm_smmu_device *smmu;
> +    bool bypass;
> +
> +    smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL);
> +    if ( !smmu )
> +    {
> +        dev_err(pdev, "failed to allocate arm_smmu_device\n");
> +        return -ENOMEM;
> +    }
> +    smmu->dev = pdev;
> +
> +    if ( pdev->of_node )
> +    {
> +        ret = arm_smmu_device_dt_probe(pdev, smmu);
> +    } else
> +    {

coding style


> +        ret = arm_smmu_device_acpi_probe(pdev, smmu);
> +        if ( ret == -ENODEV )
> +            return ret;
> +    }
> +
> +    /* Set bypass mode according to firmware probing result */
> +    bypass = !!ret;
> +
> +    /* Base address */
> +    ret = dt_device_get_address(dev_to_dt(pdev), 0, &ioaddr, &iosize);
> +    if( ret )
> +        return -ENODEV;
> +
> +    if ( iosize < arm_smmu_resource_size(smmu) )
> +    {
> +        dev_err(pdev, "MMIO region too small (%lx)\n", iosize);
> +        return -EINVAL;
> +    }
> +
> +    /*
> +     * Don't map the IMPLEMENTATION DEFINED regions, since they may contain
> +     * the PMCG registers which are reserved by the PMU driver.

Does this apply to Xen too?


> +     */
> +    smmu->base = ioremap_nocache(ioaddr, iosize);
> +    if ( IS_ERR(smmu->base) )
> +        return PTR_ERR(smmu->base);
> +
> +    if ( iosize > SZ_64K )
> +    {
> +        smmu->page1 = ioremap_nocache(ioaddr + SZ_64K, ARM_SMMU_REG_SZ);
> +        if ( IS_ERR(smmu->page1) )
> +            return PTR_ERR(smmu->page1);
> +    }
> +    else
> +    {
> +        smmu->page1 = smmu->base;
> +    }
> +
> +    /* Interrupt lines */
> +
> +    irq = platform_get_irq_byname_optional(pdev, "combined");
> +    if ( irq > 0 )
> +        smmu->combined_irq = irq;
> +    else
> +    {
> +        irq = platform_get_irq_byname_optional(pdev, "eventq");
> +        if ( irq > 0 )
> +            smmu->evtq.q.irq = irq;
> +
> +        irq = platform_get_irq_byname_optional(pdev, "priq");
> +        if ( irq > 0 )
> +            smmu->priq.q.irq = irq;
> +
> +        irq = platform_get_irq_byname_optional(pdev, "gerror");
> +        if ( irq > 0 )
> +            smmu->gerr_irq = irq;
> +    }
> +    /* Probe the h/w */
> +    ret = arm_smmu_device_hw_probe(smmu);
> +    if ( ret )
> +        return ret;
> +
> +    /* Initialise in-memory data structures */
> +    ret = arm_smmu_init_structures(smmu);
> +    if ( ret )
> +        return ret;
> +
> +    /* Reset the device */
> +    ret = arm_smmu_device_reset(smmu, bypass);
> +    if ( ret )
> +        return ret;
> +
> +    /*
> +     * Keep a list of all probed devices. This will be used to query
> +     * the smmu devices based on the fwnode.
> +     */
> +    INIT_LIST_HEAD(&smmu->devices);
> +
> +    spin_lock(&arm_smmu_devices_lock);
> +    list_add(&smmu->devices, &arm_smmu_devices);
> +    spin_unlock(&arm_smmu_devices_lock);
> +
> +    return 0;
> +}


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 03:39:06 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157134-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 157134: tolerable FAIL - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 02 Dec 2020 03:38:58 +0000

flight 157134 xen-4.12-testing real [real]
flight 157148 xen-4.12-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157134/
http://logs.test-lab.xenproject.org/osstest/logs/157148/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail pass in 157148-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156987
 test-amd64-amd64-xl-qcow2    19 guest-localmigrate/x10       fail  like 156987
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156987
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156987
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156987
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156987
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156987
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156987
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156987
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156987
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156987
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156987
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 xen                  8145d38b48009255a32ab87a02e481cd09c811f9
baseline version:
 xen                  660254422422f103e797f97545018a1fbc7548e7

Last test of basis   156987  2020-11-24 13:36:22 Z    7 days
Testing same since   157134  2020-12-01 15:05:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   6602544224..8145d38b48  8145d38b48009255a32ab87a02e481cd09c811f9 -> stable-4.12


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 07:08:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 07:08:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42439.76308 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkMFZ-0005ni-GI; Wed, 02 Dec 2020 07:08:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42439.76308; Wed, 02 Dec 2020 07:08:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkMFZ-0005nb-DB; Wed, 02 Dec 2020 07:08:29 +0000
Received: by outflank-mailman (input) for mailman id 42439;
 Wed, 02 Dec 2020 07:08:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wvYF=FG=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kkMFX-0005nW-Sl
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 07:08:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d243450e-244d-4128-87c9-23de6709c548;
 Wed, 02 Dec 2020 07:08:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 84B1CAEB9;
 Wed,  2 Dec 2020 07:08:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d243450e-244d-4128-87c9-23de6709c548
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606892905; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=QSWFIVf7dgKK9hDqy9KuN12myrRcYnWFcjH3nm9J1n0=;
	b=Rc41cFLLLQDV5whdPyFeFDUBlf6CTbsLT2nGzmeEJhoU3pWRjHoV5/VL+jSDsI1nx3Sf4K
	rokKWUZZObLOedPLhFZnKnEqMHk8KWX7gYjHSVXoY9p7ofnGBermTdBQyNlim59YB63akE
	JngIRgfF2N5Q0QBWlG17VrB6uVxw39I=
Subject: Re: [PATCH v2] tools/libs/ctrl: fix dumping of ballooned guest
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20201111100143.13820-1-jgross@suse.com>
 <3720cde6-d138-a081-37be-b68103b8aa6f@suse.com>
Message-ID: <c107c0dd-f714-9923-c489-eeace0d44c74@suse.com>
Date: Wed, 2 Dec 2020 08:08:24 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <3720cde6-d138-a081-37be-b68103b8aa6f@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="tu37e4FrpuoSm1Ifwvgx7IT2pySOQqPSV"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--tu37e4FrpuoSm1Ifwvgx7IT2pySOQqPSV
Content-Type: multipart/mixed; boundary="jVRl6AaI2AppXwcFxJtDQBVGosLJWwxIY";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <c107c0dd-f714-9923-c489-eeace0d44c74@suse.com>
Subject: Re: [PATCH v2] tools/libs/ctrl: fix dumping of ballooned guest
References: <20201111100143.13820-1-jgross@suse.com>
 <3720cde6-d138-a081-37be-b68103b8aa6f@suse.com>
In-Reply-To: <3720cde6-d138-a081-37be-b68103b8aa6f@suse.com>

--jVRl6AaI2AppXwcFxJtDQBVGosLJWwxIY
Content-Type: multipart/mixed;
 boundary="------------5CEEAD753ECA90F2829D542E"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------5CEEAD753ECA90F2829D542E
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 20.11.20 16:33, Jürgen Groß wrote:
> On 11.11.20 11:01, Juergen Gross wrote:
>> A guest with memory < maxmem often can't be dumped via xl dump-core
>> without an error message today:
>>
>> xc: info: exceeded nr_pages (262144) losing pages
>>
>> In case the last page of the guest isn't allocated the loop in
>> xc_domain_dumpcore_via_callback() will always spit out this message,
>> as the number of already dumped pages is tested before the next page
>> is checked to be valid.
>>
>> The guest's p2m_size might be lower than expected, so this should be
>> tested in order to avoid reading past the end of it.
>>
>> The guest might use high bits in p2m entries to flag special cases like
>> foreign mappings. Entries with an MFN larger than the highest MFN of
>> the host should be skipped.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>
> This is a real bug fix.
>
> Can any maintainer please have a look?

PING?


Juergen

>
>
> Juergen
>
>> ---
>>  tools/libs/ctrl/xc_core.c | 42 +++++++++++++++++++++++++++++-----------
>>  1 file changed, 31 insertions(+), 11 deletions(-)
>>
>> diff --git a/tools/libs/ctrl/xc_core.c b/tools/libs/ctrl/xc_core.c
>> index e8c6fb96f9..b47ab2f6d8 100644
>> --- a/tools/libs/ctrl/xc_core.c
>> +++ b/tools/libs/ctrl/xc_core.c
>> @@ -439,6 +439,7 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
>>      unsigned long i;
>>      unsigned long j;
>>      unsigned long nr_pages;
>> +    unsigned long max_mfn;
>>      xc_core_memory_map_t *memory_map = NULL;
>>      unsigned int nr_memory_map;
>> @@ -577,6 +578,10 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
>>                                        &p2m, &dinfo->p2m_size);
>>          if ( sts != 0 )
>>              goto out;
>> +
>> +        sts = xc_maximum_ram_page(xch, &max_mfn);
>> +        if ( sts != 0 )
>> +            goto out;
>>      }
>>      else
>>      {
>> @@ -818,19 +823,12 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
>>          {
>>              uint64_t gmfn;
>>              void *vaddr;
>> -
>> -            if ( j >= nr_pages )
>> -            {
>> -                /*
>> -                 * When live dump-mode (-L option) is specified,
>> -                 * guest domain may increase memory.
>> -                 */
>> -                IPRINTF("exceeded nr_pages (%ld) losing pages", nr_pages);
>> -                goto copy_done;
>> -            }
>>              if ( !auto_translated_physmap )
>>              {
>> +                if ( i >= dinfo->p2m_size )
>> +                    break;
>> +
>>                  if ( dinfo->guest_width >= sizeof(unsigned long) )
>>                  {
>>                      if ( dinfo->guest_width == sizeof(unsigned long) )
>> @@ -846,6 +844,14 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
>>                      if ( gmfn == (uint32_t)INVALID_PFN )
>>                          continue;
>>                  }
>> +                if ( gmfn > max_mfn )
>> +                    continue;
>> +
>> +                if ( j >= nr_pages )
>> +                {
>> +                    j++;
>> +                    continue;
>> +                }
>>                  p2m_array[j].pfn = i;
>>                  p2m_array[j].gmfn = gmfn;
>> @@ -855,6 +861,12 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
>>                  if ( !xc_core_arch_gpfn_may_present(&arch_ctxt, i) )
>>                      continue;
>> +                if ( j >= nr_pages )
>> +                {
>> +                    j++;
>> +                    continue;
>> +                }
>> +
>>                  gmfn = i;
>>                  pfn_array[j] = i;
>>              }
>> @@ -879,7 +891,15 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
>>          }
>>      }
>> -copy_done:
>> +    if ( j > nr_pages )
>> +    {
>> +        /*
>> +         * When live dump-mode (-L option) is specified,
>> +         * guest domain may increase memory.
>> +         */
>> +        IPRINTF("exceeded nr_pages (%ld) losing %ld pages", nr_pages, j - nr_pages);
>> +    }
>> +
>>      sts = dump_rtn(xch, args, dump_mem_start, dump_mem - dump_mem_start);
>>      if ( sts != 0 )
>>          goto out;
>>
>
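The counting change in the patch above can be illustrated in isolation. Below is a minimal, self-contained sketch (not the actual libxc code; `count_dumpable` and its signature are invented for illustration) of the fixed loop order: invalid or out-of-range p2m entries are skipped before the dumped-page counter is compared against nr_pages, and valid pages beyond nr_pages are only counted so the caller can report how many were lost:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define INVALID_PFN ((uint64_t)-1)

/*
 * Hypothetical illustration of the fixed loop structure: entries that
 * are unpopulated (INVALID_PFN, e.g. ballooned pages) or whose MFN
 * exceeds the host's highest MFN (special entries such as foreign
 * mappings) are skipped *before* j is compared against nr_pages.
 * Returns the number of pages that would actually be written; *lost is
 * set to the number of valid pages exceeding nr_pages.
 */
static unsigned long count_dumpable(const uint64_t *p2m, size_t p2m_size,
                                    uint64_t max_mfn, unsigned long nr_pages,
                                    unsigned long *lost)
{
    unsigned long j = 0;

    for ( size_t i = 0; i < p2m_size; i++ )
    {
        uint64_t gmfn = p2m[i];

        if ( gmfn == INVALID_PFN )   /* unpopulated page: not a real loss */
            continue;
        if ( gmfn > max_mfn )        /* special entry, e.g. foreign mapping */
            continue;
        if ( j >= nr_pages )         /* valid page but no room: count only */
        {
            j++;
            continue;
        }
        j++;                         /* page would be dumped here */
    }

    *lost = ( j > nr_pages ) ? j - nr_pages : 0;
    return ( j > nr_pages ) ? nr_pages : j;
}
```

With the old ordering, a trailing unallocated page would trip the `j >= nr_pages` check and emit the "losing pages" message even though nothing dumpable was lost; with the ordering above, only genuinely valid excess pages are reported.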


--------------5CEEAD753ECA90F2829D542E--

--jVRl6AaI2AppXwcFxJtDQBVGosLJWwxIY--


--tu37e4FrpuoSm1Ifwvgx7IT2pySOQqPSV--


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 07:49:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 07:49:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42447.76320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkMsi-00014Y-QR; Wed, 02 Dec 2020 07:48:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42447.76320; Wed, 02 Dec 2020 07:48:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkMsi-00014R-NL; Wed, 02 Dec 2020 07:48:56 +0000
Received: by outflank-mailman (input) for mailman id 42447;
 Wed, 02 Dec 2020 07:48:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wvYF=FG=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kkMsh-00014M-Kl
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 07:48:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a8c4eed8-3b31-459f-825d-5a256df6ba8b;
 Wed, 02 Dec 2020 07:48:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F383CAC2D;
 Wed,  2 Dec 2020 07:48:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a8c4eed8-3b31-459f-825d-5a256df6ba8b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606895334; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=q1Dze1pG8rgwLtTU9aw5X1Gw1LFcmWyazwJ7hYALEbs=;
	b=QitdUtl3wYEt1I99ljM8l5g31HPq6AdnyKgUiNgghRIoNniaM6oJCjRv8UYt+H5OJ0Sx5/
	LwzcEX6WYr/Rc/bT8H5lR8WoZ18KcznB0SMWisF4W3Xziv2TxTsyS0GkxbC3nudGbqOCO+
	Rjj5yfKYldc1qjjGaRNuW6T7jeM244A=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v9] xen/events: do some cleanups in evtchn_fifo_set_pending()
Date: Wed,  2 Dec 2020 08:48:52 +0100
Message-Id: <20201202074852.30473-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

evtchn_fifo_set_pending() can be simplified a little bit. In particular,
the test for the existence of the FIFO control block can be moved out of
the function's main if clause, which makes it possible to avoid testing
the LINKED bit twice.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V8:
- new patch

V9:
- move test for control_block existing to after setting PENDING bit
  (Jan Beulich)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/event_fifo.c | 33 +++++++++++++++------------------
 1 file changed, 15 insertions(+), 18 deletions(-)

diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c
index a6cca4798d..d508d57219 100644
--- a/xen/common/event_fifo.c
+++ b/xen/common/event_fifo.c
@@ -229,29 +229,24 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
         goto done;
     }
 
+    /*
+     * Control block not mapped.  The guest must not unmask an
+     * event until the control block is initialized, so we can
+     * just drop the event.
+     */
+    if ( unlikely(!v->evtchn_fifo->control_block) )
+    {
+        printk(XENLOG_G_WARNING
+               "%pv has no FIFO event channel control block\n", v);
+        goto unlock;
+    }
+
     /*
      * Link the event if it unmasked and not already linked.
      */
     if ( !guest_test_bit(d, EVTCHN_FIFO_MASKED, word) &&
-         !guest_test_bit(d, EVTCHN_FIFO_LINKED, word) )
+         !guest_test_and_set_bit(d, EVTCHN_FIFO_LINKED, word) )
     {
-        event_word_t *tail_word;
-
-        /*
-         * Control block not mapped.  The guest must not unmask an
-         * event until the control block is initialized, so we can
-         * just drop the event.
-         */
-        if ( unlikely(!v->evtchn_fifo->control_block) )
-        {
-            printk(XENLOG_G_WARNING
-                   "%pv has no FIFO event channel control block\n", v);
-            goto unlock;
-        }
-
-        if ( guest_test_and_set_bit(d, EVTCHN_FIFO_LINKED, word) )
-            goto unlock;
-
         /*
          * If this event was a tail, the old queue is now empty and
          * its tail must be invalidated to prevent adding an event to
@@ -286,6 +281,8 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
         linked = false;
         if ( q->tail )
         {
+            event_word_t *tail_word;
+
             tail_word = evtchn_fifo_word_from_port(d, q->tail);
             linked = evtchn_fifo_set_link(d, tail_word, port);
         }
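The race this test-and-set conversion closes can be sketched outside Xen with plain C11 atomics. The helper names below are hypothetical stand-ins; the real code uses guest_test_bit()/guest_test_and_set_bit() on memory shared with the guest:

```c
#include <assert.h>
#include <stdatomic.h>

/* Hypothetical stand-ins for Xen's guest_test_bit() and
 * guest_test_and_set_bit(); the real helpers operate on guest-shared
 * event words. */
#define EVTCHN_FIFO_MASKED (1u << 0)
#define EVTCHN_FIFO_LINKED (1u << 1)

typedef _Atomic unsigned int event_word_t;

static int test_bit(event_word_t *w, unsigned int bit)
{
    return (atomic_load(w) & bit) != 0;
}

static int test_and_set_bit(event_word_t *w, unsigned int bit)
{
    /* Atomically set the bit; report whether it was already set. */
    return (atomic_fetch_or(w, bit) & bit) != 0;
}

/* Mirrors the patched condition: returns 1 iff this caller won the
 * right to link the event.  Two concurrent callers could both pass a
 * plain test_bit(LINKED) check, but only one can win the
 * test-and-set. */
static int try_link(event_word_t *w)
{
    return !test_bit(w, EVTCHN_FIFO_MASKED) &&
           !test_and_set_bit(w, EVTCHN_FIFO_LINKED);
}
```

With the separate test-then-set of the old code, the window between the two operations allowed double linking; folding them into one atomic operation removes it.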
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Dec 02 07:51:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 07:51:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42451.76331 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkMvV-0001wY-4s; Wed, 02 Dec 2020 07:51:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42451.76331; Wed, 02 Dec 2020 07:51:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkMvV-0001wR-1v; Wed, 02 Dec 2020 07:51:49 +0000
Received: by outflank-mailman (input) for mailman id 42451;
 Wed, 02 Dec 2020 07:51:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkMvT-0001wJ-WE; Wed, 02 Dec 2020 07:51:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkMvT-0002Rs-P2; Wed, 02 Dec 2020 07:51:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkMvT-0001qn-DA; Wed, 02 Dec 2020 07:51:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kkMvT-0005XH-Cc; Wed, 02 Dec 2020 07:51:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vonj2oitz7xHimNIE3u3TS/ObA9Lcb3EwwboGQ/O9IE=; b=yPoRdmrbFXMATworNLAS5NTKu8
	4hXV9YJHl2pfe09MnlQrZm9RT+AycaC/WwLoO45p9ur4HMSWh53pSMf4YzwNnbQV+8lgj1TXCkbz0
	KtyIaNE7dmJ+pq4skhUzFwef0Ub2byEdjAO7nLkkJbh79rpLrKumLua+UpRUxu5wtYuE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157135-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 157135: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.13-testing:test-amd64-i386-libvirt:debian-fixup:fail:heisenbug
    xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=b5302273e2c51940172400486644636f2f4fc64a
X-Osstest-Versions-That:
    xen=5e4914e60da9a8dfdc00e839278f40c87525b8ae
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 02 Dec 2020 07:51:47 +0000

flight 157135 xen-4.13-testing real [real]
flight 157152 xen-4.13-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157135/
http://logs.test-lab.xenproject.org/osstest/logs/157152/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt      13 debian-fixup        fail pass in 157152-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt     15 migrate-support-check fail in 157152 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156988
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156988
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156988
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156988
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156988
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156988
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156988
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156988
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156988
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156988
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156988
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  b5302273e2c51940172400486644636f2f4fc64a
baseline version:
 xen                  5e4914e60da9a8dfdc00e839278f40c87525b8ae

Last test of basis   156988  2020-11-24 13:36:44 Z    7 days
Testing same since   157135  2020-12-01 15:06:11 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Manuel Bouyer <bouyer@antioche.eu.org>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5e4914e60d..b5302273e2  b5302273e2c51940172400486644636f2f4fc64a -> stable-4.13


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 08:00:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 08:00:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42472.76379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkN40-0003c2-Mr; Wed, 02 Dec 2020 08:00:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42472.76379; Wed, 02 Dec 2020 08:00:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkN40-0003bv-Jf; Wed, 02 Dec 2020 08:00:36 +0000
Received: by outflank-mailman (input) for mailman id 42472;
 Wed, 02 Dec 2020 08:00:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UQyH=FG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkN3z-0003bq-AI
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 08:00:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8e60884d-8b66-4a16-9af0-d577dcd50c5d;
 Wed, 02 Dec 2020 08:00:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 622F2AC2D;
 Wed,  2 Dec 2020 08:00:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8e60884d-8b66-4a16-9af0-d577dcd50c5d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606896033; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=g6FYMsGWpX8KoLJIZ4SxdHBjh5HKNe1VFSZksiQciHU=;
	b=TgdOqlZCy72ckynxbg33GXrVdLZ+6wPcKZ+pJo2D/k4tYjLqBzIxHmWb/wp7QXABb3Ud0D
	bpEtnY2oDzWRjmGDGr3DYjzu+yvUJvPZuw7hoodtFZEvbPkPMFohZXn8w1bWwB1NldsSco
	u6kuDx+diToy7BvMAjgThrIDRmtE6c8=
Subject: Re: [PATCH V3 01/23] x86/ioreq: Prepare IOREQ feature for making it
 common
To: Oleksandr <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-2-git-send-email-olekstysh@gmail.com>
 <87eek9u6tj.fsf@linaro.org> <cd2e064e-896b-3a28-5d37-93ddaba1c13e@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <802c49d5-00bb-9e10-70d7-2629913b08c9@suse.com>
Date: Wed, 2 Dec 2020 09:00:29 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <cd2e064e-896b-3a28-5d37-93ddaba1c13e@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 01.12.2020 19:53, Oleksandr wrote:
> On 01.12.20 13:03, Alex Bennée wrote:
>> Oleksandr Tyshchenko <olekstysh@gmail.com> writes:
>>> @@ -1112,19 +1155,11 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
>>>       if ( s->emulator != current->domain )
>>>           goto out;
>>>   
>>> -    rc = p2m_set_ioreq_server(d, flags, s);
>>> +    rc = arch_ioreq_server_map_mem_type(d, s, flags);
>>>   
>>>    out:
>>>       spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>>>   
>>> -    if ( rc == 0 && flags == 0 )
>>> -    {
>>> -        struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>> -
>>> -        if ( read_atomic(&p2m->ioreq.entry_count) )
>>> -            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
>>> -    }
>>> -
>> It should be noted that p2m holds its own lock, but I'm unfamiliar with
>> Xen's locking architecture. Is there anything that prevents another vCPU
>> accessing a page that is also being used by ioreq on the first vCPU?
> I am not sure I can provide a reasonable explanation here.
> All I understand is that p2m_change_entry_type_global() is x86
> specific (we don't have the p2m_ioreq_server concept on Arm) and should
> remain as such (not exposed to the common code).
> IIRC, I raised a question during the V2 review whether we could hold the
> ioreq server lock around the call to p2m_change_entry_type_global(), and
> didn't get objections.

Not getting objections doesn't mean much, and personally I don't
recall such a question. The important thing is that you properly
justify this change in the description (I didn't look at this
version of the patch as a whole yet, so quite likely you actually
do), because you need to guarantee that it doesn't introduce any
lock order violations. There should also be an attempt to prevent
future introduction of issues, by adding lock nesting related
comments in suitable places. Again, quite likely you do so already,
and I will notice it once I look at the patch as a whole.

All of this said, I think it should be tried hard to avoid
introducing this extra lock nesting, if there aren't other places
already where the same nesting of locks is in effect.
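The kind of lock-order discipline being asked for can be illustrated with a toy checker. The lock names and the particular level assignments below are purely illustrative, not Xen's actual lock hierarchy (Xen relies on documentation and review rather than a runtime checker of this shape):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy lock-order checker.  Each lock is assigned a level and locks may
 * only be acquired in increasing level order; acquiring them in the
 * opposite order on two CPUs is what produces an ABBA deadlock. */
struct ordered_lock {
    int level;
    bool held;
};

static int current_level;   /* level of the innermost lock held */

static bool ol_lock(struct ordered_lock *l)
{
    if ( l->level <= current_level )
        return false;       /* would violate the documented order */
    l->held = true;
    current_level = l->level;
    return true;
}

static void ol_unlock(struct ordered_lock *l)
{
    l->held = false;
    current_level = 0;      /* simplification: assumes full release */
}
```

A new nesting (such as taking a page-table lock inside the ioreq server lock) is safe only if it agrees with the order every other code path already uses; the comments Jan asks for are what make that order checkable by reviewers.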

> I may be mistaken, but it looks like the lock used
> in p2m_change_entry_type_global() is yet another lock, one protecting
> page table operations, so it is unlikely we could get into trouble
> calling this function with the ioreq server lock held.

I'm afraid I don't understand the "yet another" here: The ioreq
server lock clearly serves an entirely different purpose.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 08:38:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 08:38:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42482.76397 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkNeY-0006cu-TQ; Wed, 02 Dec 2020 08:38:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42482.76397; Wed, 02 Dec 2020 08:38:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkNeY-0006cn-PH; Wed, 02 Dec 2020 08:38:22 +0000
Received: by outflank-mailman (input) for mailman id 42482;
 Wed, 02 Dec 2020 08:38:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UQyH=FG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkNeX-0006ci-KM
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 08:38:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f8f50bd7-f247-496d-9206-5ddada90bb14;
 Wed, 02 Dec 2020 08:38:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7E111AD43;
 Wed,  2 Dec 2020 08:38:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8f50bd7-f247-496d-9206-5ddada90bb14
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606898299; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XEYyqJfDwG6UshypPKSBfFwAxJJi3ee131MIR2Zs4L8=;
	b=OmsUZPZ3nToyRt/I2zOoBW0NrM1qVRfppgD+1UrQ3xnKL8ZVlWTorVgzgFUb7Vig7mc9PD
	C+Rk1RYi2npBhtKN03TQthMkX6VlZiLxY4w61WBVVRb72JJzkHXeElA1JeKMtPk2DGEiG8
	9rGPaYCK0707URPv1IkYHA4iST5hrzo=
Subject: Re: [PATCH] vpci/msix: exit early if MSI-X is disabled
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Manuel Bouyer <bouyer@antioche.eu.org>, xen-devel@lists.xenproject.org
References: <20201201174014.27878-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <dfc96aa9-c39f-177c-c8f8-af18b80804de@suse.com>
Date: Wed, 2 Dec 2020 09:38:18 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201201174014.27878-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.12.2020 18:40, Roger Pau Monne wrote:
> Do not attempt to mask an MSI-X entry if MSI-X is not enabled.
> Otherwise the following assertion is hit on debug builds:
> 
> (XEN) Panic on CPU 13:
> (XEN) Assertion 'entry->arch.pirq != INVALID_PIRQ' failed at vmsi.c:843

Since the line number is only of limited use, I'd like to see the
function name (vpci_msix_arch_mask_entry()) also added here; this is
easily done while committing, provided the question further down can
be resolved without a code change.

> --- a/xen/drivers/vpci/msix.c
> +++ b/xen/drivers/vpci/msix.c
> @@ -357,7 +357,11 @@ static int msix_write(struct vcpu *v, unsigned long addr, unsigned int len,
>           * so that it picks the new state.
>           */
>          entry->masked = new_masked;
> -        if ( !new_masked && msix->enabled && !msix->masked && entry->updated )
> +
> +        if ( !msix->enabled )
> +            break;
> +
> +        if ( !new_masked && !msix->masked && entry->updated )
>          {
>              /*
>               * If MSI-X is enabled, the function mask is not active, the entry

What about a "disabled" -> "enabled-but-masked" transition? This,
afaict, similarly won't trigger setting up of entries from
control_write(), and hence I'd expect the ASSERT() to likewise
trigger when an entry's mask bit is subsequently altered.

I'd also be fine making this further adjustment, if you agree,
but the one thing I haven't been able to fully convince myself of
is that there's then still no need to set ->updated to true.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 08:59:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 08:59:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42488.76409 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkNyX-00005r-K1; Wed, 02 Dec 2020 08:59:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42488.76409; Wed, 02 Dec 2020 08:59:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkNyX-00005k-Ga; Wed, 02 Dec 2020 08:59:01 +0000
Received: by outflank-mailman (input) for mailman id 42488;
 Wed, 02 Dec 2020 08:59:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UQyH=FG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkNyW-00005f-GH
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 08:59:00 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2abb743c-6879-4ff1-86ae-dac99a2c597e;
 Wed, 02 Dec 2020 08:58:59 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AFA5EAC2E;
 Wed,  2 Dec 2020 08:58:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2abb743c-6879-4ff1-86ae-dac99a2c597e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606899538; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=rgDcdo3kxp9iMlIVlrHmxSpNRhD+3tvODEGKA7L2EgI=;
	b=hlD9DY3VIbOLY5VGAh0tw2YqjCupVPZ+eDuPhi6wAk5vQG0MAEY2X8QKXL+b6oMTkdmk7s
	eVquVu488AWlcMk/zS4DgPZ3gvCfi5vFmd3BL1N70rWraYPFFkV4kL2M3WQGPcElrOVd7z
	OZYldNTZ2RHh4Bm3iC73NE6rYfQXx08=
Subject: Re: [PATCH v2 06/17] xen/cpupool: use ERR_PTR() for returning error
 cause from cpupool_create()
To: Juergen Gross <jgross@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-7-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <08f377a7-7862-0597-fe42-98851dc3db37@suse.com>
Date: Wed, 2 Dec 2020 09:58:58 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201201082128.15239-7-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.12.2020 09:21, Juergen Gross wrote:
> Instead of a pointer to an error variable as parameter just use
> ERR_PTR() to return the cause of an error in cpupool_create().
> 
> This propagates to scheduler_alloc(), too.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with one small question:

> --- a/xen/common/sched/core.c
> +++ b/xen/common/sched/core.c
> @@ -3233,26 +3233,25 @@ struct scheduler *scheduler_get_default(void)
>      return &ops;
>  }
>  
> -struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr)
> +struct scheduler *scheduler_alloc(unsigned int sched_id)
>  {
>      int i;
> +    int ret;

I guess you didn't merge this with i's declaration because of a
plan/hope for i to be converted to unsigned int?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 09:05:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 09:05:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42495.76421 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkO4s-00015n-EP; Wed, 02 Dec 2020 09:05:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42495.76421; Wed, 02 Dec 2020 09:05:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkO4s-00015g-BJ; Wed, 02 Dec 2020 09:05:34 +0000
Received: by outflank-mailman (input) for mailman id 42495;
 Wed, 02 Dec 2020 09:05:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UQyH=FG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkO4r-00015b-ED
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 09:05:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 689f0276-8bdb-4c06-8263-ee973c53d642;
 Wed, 02 Dec 2020 09:05:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AD235AC2E;
 Wed,  2 Dec 2020 09:05:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 689f0276-8bdb-4c06-8263-ee973c53d642
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606899931; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RYABrRedWVHhG6rsWZPdptGnU401jSFHr8Qgyk6k58A=;
	b=anfJGqu79NDZab8bZlBiZkKPsspQ91DqRgzJK5xdI9ZI6EbW5+tYcVUowJu+Av3NM8psoX
	v6UlNVq4SEZwzV3nZIeNAFIe6xyS2/mw0vxXE/9L4vBeT5y3yC/PLsQ7HphRs7RCI+VZgt
	mWAusFSnxOzELrvidpeU9RlhF1TqaJY=
Subject: Re: [PATCH] Fix spelling errors.
To: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xenproject.org,
 Diederik de Haas <didi.debian@cknow.org>
References: <a60e2c98183d7c873f4e306954f900614fcdb582.1606757711.git.didi.debian@cknow.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3702b443-fd7b-6000-a952-0ecec6fe318c@suse.com>
Date: Wed, 2 Dec 2020 10:05:31 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <a60e2c98183d7c873f4e306954f900614fcdb582.1606757711.git.didi.debian@cknow.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.11.2020 18:39, Diederik de Haas wrote:
> Only spelling errors; no functional changes.
> 
> In docs/misc/dump-core-format.txt there are a few more instances of
> 'informations'. I'll leave that up to someone who can properly determine
> how those sentences should be constructed.
> 
> Signed-off-by: Diederik de Haas <didi.debian@cknow.org>
> 
> Please CC me in replies as I'm not subscribed to this list.
> ---
>  docs/man/xl.1.pod.in                   | 2 +-
>  docs/man/xl.cfg.5.pod.in               | 2 +-
>  docs/man/xlcpupool.cfg.5.pod           | 2 +-
>  tools/firmware/rombios/rombios.c       | 2 +-
>  tools/libs/light/libxl_stream_read.c   | 2 +-
>  tools/xl/xl_cmdtable.c                 | 2 +-

Since these are trivial and obvious adjustments, I don't intend to wait
very long for an xl/libxl side ack before committing this. Perhaps just
another day or so.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 09:22:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 09:22:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42502.76437 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkOKu-0002vO-W4; Wed, 02 Dec 2020 09:22:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42502.76437; Wed, 02 Dec 2020 09:22:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkOKu-0002vH-Rf; Wed, 02 Dec 2020 09:22:08 +0000
Received: by outflank-mailman (input) for mailman id 42502;
 Wed, 02 Dec 2020 09:22:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkOKt-0002v7-Ec
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 09:22:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkOKt-0004px-6J; Wed, 02 Dec 2020 09:22:07 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkOKs-0006CD-Ts; Wed, 02 Dec 2020 09:22:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=vKZWz15kBurVPlkeTgoU9pGf8p6XxQhWGrIoZqzdY1Y=; b=oiQDxnmwrAIF3w4tm3iFsKizQb
	waNBxZ9qTXlgJoJS6B7Ma07LKIG5oHnd+RxpTJ59ohLctwgXSui30oXwz7krwdeGcuTFHoVtryjFp
	n8BkvC30vWcksXlVYX/dQq2ZK8pgK/ALVOjyv9UEEtdIjsA5oXsfCczxJh4zrcMqZ6Zw=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>
Subject: [PATCH v4 00/11] viridian: add support for ExProcessorMasks
Date: Wed,  2 Dec 2020 09:21:56 +0000
Message-Id: <20201202092205.906-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Paul Durrant (11):
  viridian: don't blindly write to 32-bit registers if 'mode' is invalid
  viridian: move flush hypercall implementation into separate function
  viridian: move IPI hypercall implementation into separate function
  viridian: introduce a per-cpu hypercall_vpmask and accessor
    functions...
  viridian: use hypercall_vpmask in hvcall_ipi()
  viridian: use softirq batching in hvcall_ipi()
  viridian: add ExProcessorMasks variants of the flush hypercalls
  viridian: add ExProcessorMasks variant of the IPI hypercall
  viridian: log initial invocation of each type of hypercall
  viridian: add a new '_HVMPV_ex_processor_masks' bit into
    HVM_PARAM_VIRIDIAN...
  xl / libxl: add 'ex_processor_mask' into
    'libxl_viridian_enlightenment'

 docs/man/xl.cfg.5.pod.in             |   8 +
 tools/include/libxl.h                |   7 +
 tools/libs/light/libxl_types.idl     |   1 +
 tools/libs/light/libxl_x86.c         |   3 +
 xen/arch/x86/hvm/viridian/viridian.c | 604 +++++++++++++++++++++------
 xen/include/asm-x86/hvm/viridian.h   |  10 +
 xen/include/public/hvm/params.h      |   7 +-
 7 files changed, 516 insertions(+), 124 deletions(-)

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Dec 02 09:22:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 09:22:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42505.76473 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkOKy-00030H-Pd; Wed, 02 Dec 2020 09:22:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42505.76473; Wed, 02 Dec 2020 09:22:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkOKy-000308-La; Wed, 02 Dec 2020 09:22:12 +0000
Received: by outflank-mailman (input) for mailman id 42505;
 Wed, 02 Dec 2020 09:22:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkOKw-0002xr-WC
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 09:22:11 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkOKw-0004qF-Ki; Wed, 02 Dec 2020 09:22:10 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkOKw-0006CD-Cd; Wed, 02 Dec 2020 09:22:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=X3Jg3m4I7HeCYZvOk4GKjZUPANy4lRyYlPfxlQJ1voE=; b=kAA2MBMBgCy7Ave8W2H3ITyF1B
	/f1++/OK4feg7B8chJOIPCYnyXV+OtTz3UDgiMiyCzgiENFYsDk5YGkaeDLQeLCE/rgXp1RXBdrEb
	kFoQrVex4IEtX5wJiYtIMXEG7Fp7ytVw/bhsj2hWAeHYgeprtzJZ4WNBORvAYiJPUDnI=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v4 03/11] viridian: move IPI hypercall implementation into separate function
Date: Wed,  2 Dec 2020 09:21:59 +0000
Message-Id: <20201202092205.906-4-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201202092205.906-1-paul@xen.org>
References: <20201202092205.906-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This patch moves the implementation of HVCALL_SEND_IPI that is currently
inline in viridian_hypercall() into a new hvcall_ipi() function.

The new function returns Xen errno values similarly to hvcall_flush(). Hence
the errno translation code in viridian_hypercall() is generalized, removing
the need for the local 'status' variable.

NOTE: The formatting of the switch statement at the top of
      viridian_hypercall() is also adjusted as per CODING_STYLE.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v3:
 - Adjust prototype of new function

v2:
 - Different formatting adjustment
---
 xen/arch/x86/hvm/viridian/viridian.c | 168 ++++++++++++++-------------
 1 file changed, 87 insertions(+), 81 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 69b6f285e8aa..9844bb57872a 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -581,13 +581,72 @@ static int hvcall_flush(const union hypercall_input *input,
     return 0;
 }
 
+static int hvcall_ipi(const union hypercall_input *input,
+                      union hypercall_output *output,
+                      paddr_t input_params_gpa,
+                      paddr_t output_params_gpa)
+{
+    struct domain *currd = current->domain;
+    struct vcpu *v;
+    uint32_t vector;
+    uint64_t vcpu_mask;
+
+    /* Get input parameters. */
+    if ( input->fast )
+    {
+        if ( input_params_gpa >> 32 )
+            return -EINVAL;
+
+        vector = input_params_gpa;
+        vcpu_mask = output_params_gpa;
+    }
+    else
+    {
+        struct {
+            uint32_t vector;
+            uint8_t target_vtl;
+            uint8_t reserved_zero[3];
+            uint64_t vcpu_mask;
+        } input_params;
+
+        if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+                                      sizeof(input_params)) != HVMTRANS_okay )
+            return -EINVAL;
+
+        if ( input_params.target_vtl ||
+             input_params.reserved_zero[0] ||
+             input_params.reserved_zero[1] ||
+             input_params.reserved_zero[2] )
+            return -EINVAL;
+
+        vector = input_params.vector;
+        vcpu_mask = input_params.vcpu_mask;
+    }
+
+    if ( vector < 0x10 || vector > 0xff )
+        return -EINVAL;
+
+    for_each_vcpu ( currd, v )
+    {
+        if ( v->vcpu_id >= (sizeof(vcpu_mask) * 8) )
+            return -EINVAL;
+
+        if ( !(vcpu_mask & (1ul << v->vcpu_id)) )
+            continue;
+
+        vlapic_set_irq(vcpu_vlapic(v), vector, 0);
+    }
+
+    return 0;
+}
+
 int viridian_hypercall(struct cpu_user_regs *regs)
 {
     struct vcpu *curr = current;
     struct domain *currd = curr->domain;
     int mode = hvm_guest_x86_mode(curr);
     unsigned long input_params_gpa, output_params_gpa;
-    uint16_t status = HV_STATUS_SUCCESS;
+    int rc = 0;
     union hypercall_input input;
     union hypercall_output output = {};
 
@@ -600,11 +659,13 @@ int viridian_hypercall(struct cpu_user_regs *regs)
         input_params_gpa = regs->rdx;
         output_params_gpa = regs->r8;
         break;
+
     case 4:
         input.raw = (regs->rdx << 32) | regs->eax;
         input_params_gpa = (regs->rbx << 32) | regs->ecx;
         output_params_gpa = (regs->rdi << 32) | regs->esi;
         break;
+
     default:
         goto out;
     }
@@ -616,92 +677,18 @@ int viridian_hypercall(struct cpu_user_regs *regs)
          * See section 14.5.1 of the specification.
          */
         do_sched_op(SCHEDOP_yield, guest_handle_from_ptr(NULL, void));
-        status = HV_STATUS_SUCCESS;
         break;
 
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
-    {
-        int rc = hvcall_flush(&input, &output, input_params_gpa,
-                              output_params_gpa);
-
-        switch ( rc )
-        {
-        case 0:
-            break;
-
-        case -ERESTART:
-            return HVM_HCALL_preempted;
-
-        default:
-            ASSERT_UNREACHABLE();
-            /* Fallthrough */
-        case -EINVAL:
-            status = HV_STATUS_INVALID_PARAMETER;
-            break;
-        }
-
+        rc = hvcall_flush(&input, &output, input_params_gpa,
+                          output_params_gpa);
         break;
-    }
 
     case HVCALL_SEND_IPI:
-    {
-        struct vcpu *v;
-        uint32_t vector;
-        uint64_t vcpu_mask;
-
-        status = HV_STATUS_INVALID_PARAMETER;
-
-        /* Get input parameters. */
-        if ( input.fast )
-        {
-            if ( input_params_gpa >> 32 )
-                break;
-
-            vector = input_params_gpa;
-            vcpu_mask = output_params_gpa;
-        }
-        else
-        {
-            struct {
-                uint32_t vector;
-                uint8_t target_vtl;
-                uint8_t reserved_zero[3];
-                uint64_t vcpu_mask;
-            } input_params;
-
-            if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
-                                          sizeof(input_params)) !=
-                 HVMTRANS_okay )
-                break;
-
-            if ( input_params.target_vtl ||
-                 input_params.reserved_zero[0] ||
-                 input_params.reserved_zero[1] ||
-                 input_params.reserved_zero[2] )
-                break;
-
-            vector = input_params.vector;
-            vcpu_mask = input_params.vcpu_mask;
-        }
-
-        if ( vector < 0x10 || vector > 0xff )
-            break;
-
-        for_each_vcpu ( currd, v )
-        {
-            if ( v->vcpu_id >= (sizeof(vcpu_mask) * 8) )
-                break;
-
-            if ( !(vcpu_mask & (1ul << v->vcpu_id)) )
-                continue;
-
-            vlapic_set_irq(vcpu_vlapic(v), vector, 0);
-        }
-
-        status = HV_STATUS_SUCCESS;
+        rc = hvcall_ipi(&input, &output, input_params_gpa,
+                        output_params_gpa);
         break;
-    }
 
     default:
         gprintk(XENLOG_WARNING, "unimplemented hypercall %04x\n",
@@ -714,12 +701,31 @@ int viridian_hypercall(struct cpu_user_regs *regs)
          * Given that return a status of 'invalid code' has not so far
          * caused any problems it's not worth logging.
          */
-        status = HV_STATUS_INVALID_HYPERCALL_CODE;
+        rc = -EOPNOTSUPP;
         break;
     }
 
  out:
-    output.result = status;
+    switch ( rc )
+    {
+    case 0:
+        break;
+
+    case -ERESTART:
+        return HVM_HCALL_preempted;
+
+    case -EOPNOTSUPP:
+        output.result = HV_STATUS_INVALID_HYPERCALL_CODE;
+        break;
+
+    default:
+        ASSERT_UNREACHABLE();
+        /* Fallthrough */
+    case -EINVAL:
+        output.result = HV_STATUS_INVALID_PARAMETER;
+        break;
+    }
+
     switch (mode)
     {
     case 8:
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Dec 02 09:22:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 09:22:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42506.76482 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkOKz-00031g-DG; Wed, 02 Dec 2020 09:22:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42506.76482; Wed, 02 Dec 2020 09:22:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkOKz-00031M-6t; Wed, 02 Dec 2020 09:22:13 +0000
Received: by outflank-mailman (input) for mailman id 42506;
 Wed, 02 Dec 2020 09:22:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkOKy-0002zc-2x
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 09:22:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkOKx-0004qN-PV; Wed, 02 Dec 2020 09:22:11 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkOKx-0006CD-HW; Wed, 02 Dec 2020 09:22:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=UQPEbNdpYJUM2Gx6R85h5d+pKLFn2/d5C4V2C8DT2GY=; b=RyAPoQvFW1GcXRJ17jZMSw8iY4
	3mhHWJlneSxUBy0AUP+GKcb9LfZS/kT4YjdfkkYaionNTVV16lvmPP7mgORouGQwOD0Zoq9iLthqJ
	n8TxqxaWPQYUx1ZF/vcicFzJ0nn9PqwdTAUinneX3x2X/k3Sc0lCUfNThwgBl7ecXQyY=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v4 04/11] viridian: introduce a per-cpu hypercall_vpmask and accessor functions...
Date: Wed,  2 Dec 2020 09:22:00 +0000
Message-Id: <20201202092205.906-5-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201202092205.906-1-paul@xen.org>
References: <20201202092205.906-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... and make use of them in hvcall_flush()/need_flush().

Subsequent patches will need to deal with virtual processor masks potentially
wider than 64 bits. Thus, to avoid using too much stack, this patch
introduces global per-cpu virtual processor masks and converts the
implementation of hvcall_flush() to use them.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v2:
 - Modified vpmask_set() to take a base 'vp' and a 64-bit 'mask', still
   looping over the mask as bitmap.h does not provide a primitive for copying
   one mask into another at an offset
 - Added ASSERTions to verify that we don't attempt to set or test bits
   beyond the limit of the map
---
 xen/arch/x86/hvm/viridian/viridian.c | 58 ++++++++++++++++++++++++++--
 1 file changed, 54 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 9844bb57872a..cfd87504052f 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -507,15 +507,59 @@ void viridian_domain_deinit(struct domain *d)
     XFREE(d->arch.hvm.viridian);
 }
 
+struct hypercall_vpmask {
+    DECLARE_BITMAP(mask, HVM_MAX_VCPUS);
+};
+
+static DEFINE_PER_CPU(struct hypercall_vpmask, hypercall_vpmask);
+
+static void vpmask_empty(struct hypercall_vpmask *vpmask)
+{
+    bitmap_zero(vpmask->mask, HVM_MAX_VCPUS);
+}
+
+static void vpmask_set(struct hypercall_vpmask *vpmask, unsigned int vp,
+                       uint64_t mask)
+{
+    unsigned int count = sizeof(mask) * 8;
+
+    while ( count-- )
+    {
+        if ( !mask )
+            break;
+
+        if ( mask & 1 )
+        {
+            ASSERT(vp < HVM_MAX_VCPUS);
+            __set_bit(vp, vpmask->mask);
+        }
+
+        mask >>= 1;
+        vp++;
+    }
+}
+
+static void vpmask_fill(struct hypercall_vpmask *vpmask)
+{
+    bitmap_fill(vpmask->mask, HVM_MAX_VCPUS);
+}
+
+static bool vpmask_test(const struct hypercall_vpmask *vpmask,
+                        unsigned int vp)
+{
+    ASSERT(vp < HVM_MAX_VCPUS);
+    return test_bit(vp, vpmask->mask);
+}
+
 /*
  * Windows should not issue the hypercalls requiring this callback in the
  * case where vcpu_id would exceed the size of the mask.
  */
 static bool need_flush(void *ctxt, struct vcpu *v)
 {
-    uint64_t vcpu_mask = *(uint64_t *)ctxt;
+    struct hypercall_vpmask *vpmask = ctxt;
 
-    return vcpu_mask & (1ul << v->vcpu_id);
+    return vpmask_test(vpmask, v->vcpu_id);
 }
 
 union hypercall_input {
@@ -546,6 +590,7 @@ static int hvcall_flush(const union hypercall_input *input,
                         paddr_t input_params_gpa,
                         paddr_t output_params_gpa)
 {
+    struct hypercall_vpmask *vpmask = &this_cpu(hypercall_vpmask);
     struct {
         uint64_t address_space;
         uint64_t flags;
@@ -567,13 +612,18 @@ static int hvcall_flush(const union hypercall_input *input,
      * so err on the safe side.
      */
     if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
-        input_params.vcpu_mask = ~0ul;
+        vpmask_fill(vpmask);
+    else
+    {
+        vpmask_empty(vpmask);
+        vpmask_set(vpmask, 0, input_params.vcpu_mask);
+    }
 
     /*
      * A false return means that another vcpu is currently trying
      * a similar operation, so back off.
      */
-    if ( !paging_flush_tlb(need_flush, &input_params.vcpu_mask) )
+    if ( !paging_flush_tlb(need_flush, vpmask) )
         return -ERESTART;
 
     output->rep_complete = input->rep_count;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Dec 02 09:22:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 09:22:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42504.76460 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkOKx-0002yA-FB; Wed, 02 Dec 2020 09:22:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42504.76460; Wed, 02 Dec 2020 09:22:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkOKx-0002y3-C8; Wed, 02 Dec 2020 09:22:11 +0000
Received: by outflank-mailman (input) for mailman id 42504;
 Wed, 02 Dec 2020 09:22:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkOKv-0002wn-Vj
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 09:22:09 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkOKv-0004q8-Hv; Wed, 02 Dec 2020 09:22:09 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkOKv-0006CD-7v; Wed, 02 Dec 2020 09:22:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=m2JUzU4YdPfLMUj4CZFPdzmClYqKNkNVCg9Y25SzLOg=; b=p8nVOjA7bj85qCOY/txbqL8F2F
	bu+tkG9FMj/p+GQeXRJ7wVKN3uLSh/TilbpEjZzCJEWXOdbQPlwz1LPVeeIP4MCn4S1amio/9+T+F
	l0Okjhe6qoOKtkE6xbnq0l1IH1bBW8ujFqdiWYsE+HpjA3wtJ9PuagcvYFivdeRYeflQ=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v4 02/11] viridian: move flush hypercall implementation into separate function
Date: Wed,  2 Dec 2020 09:21:58 +0000
Message-Id: <20201202092205.906-3-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201202092205.906-1-paul@xen.org>
References: <20201202092205.906-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This patch moves the implementation of HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST
that is currently inline in viridian_hypercall() into a new hvcall_flush()
function.

The new function returns Xen error values which are then dealt with
appropriately. A return value of -ERESTART translates to viridian_hypercall()
returning HVM_HCALL_preempted. Other return values translate to status codes
and viridian_hypercall() returning HVM_HCALL_completed. Currently the only
values, other than -ERESTART, returned by hvcall_flush() are 0 (indicating
success) or -EINVAL.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v3:
 - Adjust prototype of new function
---
 xen/arch/x86/hvm/viridian/viridian.c | 130 ++++++++++++++++-----------
 1 file changed, 78 insertions(+), 52 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 338e705bd29c..69b6f285e8aa 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -518,6 +518,69 @@ static bool need_flush(void *ctxt, struct vcpu *v)
     return vcpu_mask & (1ul << v->vcpu_id);
 }
 
+union hypercall_input {
+    uint64_t raw;
+    struct {
+        uint16_t call_code;
+        uint16_t fast:1;
+        uint16_t rsvd1:15;
+        uint16_t rep_count:12;
+        uint16_t rsvd2:4;
+        uint16_t rep_start:12;
+        uint16_t rsvd3:4;
+    };
+};
+
+union hypercall_output {
+    uint64_t raw;
+    struct {
+        uint16_t result;
+        uint16_t rsvd1;
+        uint32_t rep_complete:12;
+        uint32_t rsvd2:20;
+    };
+};
+
+static int hvcall_flush(const union hypercall_input *input,
+                        union hypercall_output *output,
+                        paddr_t input_params_gpa,
+                        paddr_t output_params_gpa)
+{
+    struct {
+        uint64_t address_space;
+        uint64_t flags;
+        uint64_t vcpu_mask;
+    } input_params;
+
+    /* These hypercalls should never use the fast-call convention. */
+    if ( input->fast )
+        return -EINVAL;
+
+    /* Get input parameters. */
+    if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+                                  sizeof(input_params)) != HVMTRANS_okay )
+        return -EINVAL;
+
+    /*
+     * It is not clear from the spec. if we are supposed to
+     * include current virtual CPU in the set or not in this case,
+     * so err on the safe side.
+     */
+    if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
+        input_params.vcpu_mask = ~0ul;
+
+    /*
+     * A false return means that another vcpu is currently trying
+     * a similar operation, so back off.
+     */
+    if ( !paging_flush_tlb(need_flush, &input_params.vcpu_mask) )
+        return -ERESTART;
+
+    output->rep_complete = input->rep_count;
+
+    return 0;
+}
+
 int viridian_hypercall(struct cpu_user_regs *regs)
 {
     struct vcpu *curr = current;
@@ -525,29 +588,8 @@ int viridian_hypercall(struct cpu_user_regs *regs)
     int mode = hvm_guest_x86_mode(curr);
     unsigned long input_params_gpa, output_params_gpa;
     uint16_t status = HV_STATUS_SUCCESS;
-
-    union hypercall_input {
-        uint64_t raw;
-        struct {
-            uint16_t call_code;
-            uint16_t fast:1;
-            uint16_t rsvd1:15;
-            uint16_t rep_count:12;
-            uint16_t rsvd2:4;
-            uint16_t rep_start:12;
-            uint16_t rsvd3:4;
-        };
-    } input;
-
-    union hypercall_output {
-        uint64_t raw;
-        struct {
-            uint16_t result;
-            uint16_t rsvd1;
-            uint32_t rep_complete:12;
-            uint32_t rsvd2:20;
-        };
-    } output = { 0 };
+    union hypercall_input input;
+    union hypercall_output output = {};
 
     ASSERT(is_viridian_domain(currd));
 
@@ -580,41 +622,25 @@ int viridian_hypercall(struct cpu_user_regs *regs)
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
     {
-        struct {
-            uint64_t address_space;
-            uint64_t flags;
-            uint64_t vcpu_mask;
-        } input_params;
+        int rc = hvcall_flush(&input, &output, input_params_gpa,
+                              output_params_gpa);
 
-        /* These hypercalls should never use the fast-call convention. */
-        status = HV_STATUS_INVALID_PARAMETER;
-        if ( input.fast )
+        switch ( rc )
+        {
+        case 0:
             break;
 
-        /* Get input parameters. */
-        if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
-                                      sizeof(input_params)) !=
-             HVMTRANS_okay )
-            break;
-
-        /*
-         * It is not clear from the spec. if we are supposed to
-         * include current virtual CPU in the set or not in this case,
-         * so err on the safe side.
-         */
-        if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
-            input_params.vcpu_mask = ~0ul;
-
-        /*
-         * A false return means that another vcpu is currently trying
-         * a similar operation, so back off.
-         */
-        if ( !paging_flush_tlb(need_flush, &input_params.vcpu_mask) )
+        case -ERESTART:
             return HVM_HCALL_preempted;
 
-        output.rep_complete = input.rep_count;
+        default:
+            ASSERT_UNREACHABLE();
+            /* Fallthrough */
+        case -EINVAL:
+            status = HV_STATUS_INVALID_PARAMETER;
+            break;
+        }
 
-        status = HV_STATUS_SUCCESS;
         break;
     }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Dec 02 09:22:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 09:22:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42503.76449 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkOKw-0002wz-8D; Wed, 02 Dec 2020 09:22:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42503.76449; Wed, 02 Dec 2020 09:22:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkOKw-0002ws-3s; Wed, 02 Dec 2020 09:22:10 +0000
Received: by outflank-mailman (input) for mailman id 42503;
 Wed, 02 Dec 2020 09:22:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkOKu-0002vC-N5
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 09:22:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkOKu-0004q2-Br; Wed, 02 Dec 2020 09:22:08 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkOKu-0006CD-31; Wed, 02 Dec 2020 09:22:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=wsUSjMIuCJSFEzqOPxy9cU/RRj5MGvZk3yge+ryDxEU=; b=6cgIRj9yLenRSlm1Lk0kmN45G8
	YduZiUpQZwQ0rsalsdiyU42/NGcjtjwRdiSDlegGMocN5ImisRx6T6vjwAwUhNyoHP8sfQkooaLBX
	HSLiX7HdBZcenn+KEyIVEILfDpM7aOHbneE2pnc64jNSlrzmMbg/cnzvYgkcbZDel1lU=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v4 01/11] viridian: don't blindly write to 32-bit registers if 'mode' is invalid
Date: Wed,  2 Dec 2020 09:21:57 +0000
Message-Id: <20201202092205.906-2-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201202092205.906-1-paul@xen.org>
References: <20201202092205.906-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

If hvm_guest_x86_mode() returns something other than 8 or 4 then
viridian_hypercall() will return immediately but, on the way out, will write
back status as if 'mode' were 4. This patch simply makes it leave the
registers alone.

NOTE: The formatting of the 'out' label and the switch statement are also
      adjusted as per CODING_STYLE.
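The fix can be illustrated with a minimal sketch of the write-back path. The `struct regs` and `write_output()` names here are illustrative, not Xen's: the point is that only modes 8 (64-bit) and 4 (32-bit) touch the registers, and any other mode falls through without writing.

```c
#include <stdint.h>
#include <assert.h>

/* Toy stand-in for the guest register state (not Xen's cpu_user_regs). */
struct regs { uint64_t rax, rdx; };

/* Sketch of the fixed write-back: a 64-bit guest (mode 8) gets the raw
   value in rax, a 32-bit guest (mode 4) gets it split across rdx:rax,
   and any other mode leaves the registers untouched. */
static void write_output(struct regs *regs, int mode, uint64_t raw)
{
    switch ( mode )
    {
    case 8:
        regs->rax = raw;
        break;

    case 4:
        regs->rdx = raw >> 32;
        regs->rax = (uint32_t)raw;
        break;
    }
}
```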

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v4:
 - Fixed another CODING_STYLE violation.

v2:
 - New in v2
---
 xen/arch/x86/hvm/viridian/viridian.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index dc7183a54627..338e705bd29c 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -692,13 +692,15 @@ int viridian_hypercall(struct cpu_user_regs *regs)
         break;
     }
 
-out:
+ out:
     output.result = status;
-    switch (mode) {
+    switch (mode)
+    {
     case 8:
         regs->rax = output.raw;
         break;
-    default:
+
+    case 4:
         regs->rdx = output.raw >> 32;
         regs->rax = (uint32_t)output.raw;
         break;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Dec 02 09:22:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 09:22:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42507.76497 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkOL0-00035T-Q0; Wed, 02 Dec 2020 09:22:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42507.76497; Wed, 02 Dec 2020 09:22:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkOL0-00035D-K3; Wed, 02 Dec 2020 09:22:14 +0000
Received: by outflank-mailman (input) for mailman id 42507;
 Wed, 02 Dec 2020 09:22:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkOKz-00031Q-6e
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 09:22:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkOKy-0004qU-UQ; Wed, 02 Dec 2020 09:22:12 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkOKy-0006CD-MN; Wed, 02 Dec 2020 09:22:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=K8HbJNFS+eSvzpIgXAlXNXAsJGy5ROLLCyua8HaV7xs=; b=gIzu0rZ6dBx0WfduEYPTaQDnYh
	4RFD4aUDl85sqBIqAWo1Utz1Hdon6kXGCjt1TPb7cLqF7XGleXUMtykYMDq6bEJosOjWfnBgptfMP
	oKnvR2y9cfel1MjJDyxjAR4JXuTgjFzWIzQ9JaoBwLsLNoV1Qe7Bu6DSfJ2dK6KP5KdU=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v4 05/11] viridian: use hypercall_vpmask in hvcall_ipi()
Date: Wed,  2 Dec 2020 09:22:01 +0000
Message-Id: <20201202092205.906-6-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201202092205.906-1-paul@xen.org>
References: <20201202092205.906-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

A subsequent patch will need to IPI a mask of virtual processors potentially
wider than 64 bits. A previous patch introduced per-cpu hypercall_vpmask
to allow hvcall_flush() to deal with such wide masks. This patch modifies
the implementation of hvcall_ipi() to make use of the same mask structures,
introducing a for_each_vp() macro to facilitate traversing a mask.
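The shape of such a traversal macro can be sketched in a self-contained form. This uses a toy two-word mask and a hypothetical `next_bit()` helper in place of Xen's find_first_bit()/find_next_bit(); the real for_each_vp() in the patch delegates to those bitmap primitives over HVM_MAX_VCPUS bits.

```c
#include <stdint.h>
#include <assert.h>

#define MAX_VPS 128
#define BITS_PER_WORD 64

/* Toy stand-in for find_first_bit()/find_next_bit(): scan a two-word
   mask from 'start' and return MAX_VPS when no further bit is set,
   which is what terminates the loop below. */
static unsigned int next_bit(const uint64_t mask[2], unsigned int start)
{
    for ( unsigned int vp = start; vp < MAX_VPS; vp++ )
        if ( mask[vp / BITS_PER_WORD] & (1ULL << (vp % BITS_PER_WORD)) )
            return vp;
    return MAX_VPS;
}

/* Same structure as the patch's for_each_vp() macro. */
#define for_each_vp(mask, vp) \
    for ( (vp) = next_bit(mask, 0); \
          (vp) < MAX_VPS; \
          (vp) = next_bit(mask, (vp) + 1) )

static unsigned int count_vps(const uint64_t mask[2])
{
    unsigned int vp, nr = 0;

    for_each_vp ( mask, vp )
        nr++;

    return nr;
}
```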

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v3:
 - Couple of extra 'const' qualifiers

v2:
 - Drop the 'vp' loop now that vpmask_set() will do it internally
---
 xen/arch/x86/hvm/viridian/viridian.c | 44 +++++++++++++++++++++-------
 1 file changed, 33 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index cfd87504052f..fb38210e2cc7 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -551,6 +551,26 @@ static bool vpmask_test(const struct hypercall_vpmask *vpmask,
     return test_bit(vp, vpmask->mask);
 }
 
+static unsigned int vpmask_first(const struct hypercall_vpmask *vpmask)
+{
+    return find_first_bit(vpmask->mask, HVM_MAX_VCPUS);
+}
+
+static unsigned int vpmask_next(const struct hypercall_vpmask *vpmask,
+                                unsigned int vp)
+{
+    /*
+     * If vp + 1 > HVM_MAX_VCPUS then find_next_bit() will return
+     * HVM_MAX_VCPUS, ensuring the for_each_vp ( ... ) loop terminates.
+     */
+    return find_next_bit(vpmask->mask, HVM_MAX_VCPUS, vp + 1);
+}
+
+#define for_each_vp(vpmask, vp) \
+	for ( (vp) = vpmask_first(vpmask); \
+	      (vp) < HVM_MAX_VCPUS; \
+	      (vp) = vpmask_next(vpmask, vp) )
+
 /*
  * Windows should not issue the hypercalls requiring this callback in the
  * case where vcpu_id would exceed the size of the mask.
@@ -631,13 +651,21 @@ static int hvcall_flush(const union hypercall_input *input,
     return 0;
 }
 
+static void send_ipi(struct hypercall_vpmask *vpmask, uint8_t vector)
+{
+    struct domain *currd = current->domain;
+    unsigned int vp;
+
+    for_each_vp ( vpmask, vp )
+        vlapic_set_irq(vcpu_vlapic(currd->vcpu[vp]), vector, 0);
+}
+
 static int hvcall_ipi(const union hypercall_input *input,
                       union hypercall_output *output,
                       paddr_t input_params_gpa,
                       paddr_t output_params_gpa)
 {
-    struct domain *currd = current->domain;
-    struct vcpu *v;
+    struct hypercall_vpmask *vpmask = &this_cpu(hypercall_vpmask);
     uint32_t vector;
     uint64_t vcpu_mask;
 
@@ -676,16 +704,10 @@ static int hvcall_ipi(const union hypercall_input *input,
     if ( vector < 0x10 || vector > 0xff )
         return -EINVAL;
 
-    for_each_vcpu ( currd, v )
-    {
-        if ( v->vcpu_id >= (sizeof(vcpu_mask) * 8) )
-            return -EINVAL;
+    vpmask_empty(vpmask);
+    vpmask_set(vpmask, 0, vcpu_mask);
 
-        if ( !(vcpu_mask & (1ul << v->vcpu_id)) )
-            continue;
-
-        vlapic_set_irq(vcpu_vlapic(v), vector, 0);
-    }
+    send_ipi(vpmask, vector);
 
     return 0;
 }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Dec 02 09:22:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 09:22:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42508.76501 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkOL1-00036f-9M; Wed, 02 Dec 2020 09:22:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42508.76501; Wed, 02 Dec 2020 09:22:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkOL1-00036G-0H; Wed, 02 Dec 2020 09:22:15 +0000
Received: by outflank-mailman (input) for mailman id 42508;
 Wed, 02 Dec 2020 09:22:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkOL0-00034p-EE
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 09:22:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkOL0-0004qb-3K; Wed, 02 Dec 2020 09:22:14 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkOKz-0006CD-RH; Wed, 02 Dec 2020 09:22:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=rXcPbRQWkmJ068idR05ATB7vjB/FmCOYwLBsOKxDDNY=; b=U6+Q7VyP145vZC50t7PgjeJL4f
	xXg1gCbM/FM3sePTVyf0mt0YFA8Di+NSZ6sO4nPT3z42JSdIDVT0PZtTecoiEle0HHRtq8+wQNsJl
	oTGwIWbvzEKNIQKYn67DFT+ZJztKJAaS5jrgW4VUIKQXnwxrfe1VgsM2Ft1+QTRXbb1Q=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v4 06/11] viridian: use softirq batching in hvcall_ipi()
Date: Wed,  2 Dec 2020 09:22:02 +0000
Message-Id: <20201202092205.906-7-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201202092205.906-1-paul@xen.org>
References: <20201202092205.906-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

vlapic_ipi() uses a softirq batching mechanism to improve the efficiency of
sending IPIs to a large number of processors. This patch modifies send_ipi()
(the worker function called by hvcall_ipi()) to also make use of the
mechanism when there are multiple bits set in the hypercall_vpmask.
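The pattern is simple enough to sketch in isolation. The `batch_begin()`/`batch_finish()` names below are hypothetical stand-ins for cpu_raise_softirq_batch_begin()/finish(); the point is that the batching bracket is only paid for when more than one target vCPU is in the mask.

```c
#include <assert.h>

static int batch_depth;

/* Hypothetical stand-ins for cpu_raise_softirq_batch_begin()/finish();
   here they just track nesting so the pattern can be checked. */
static void batch_begin(void)  { batch_depth++; }
static void batch_finish(void) { batch_depth--; }

static int ipis_sent;

/* Stand-in for vlapic_set_irq() on one target vCPU. */
static void send_one_ipi(unsigned int vp)
{
    (void)vp;
    ipis_sent++;
}

/* Pattern from the patch: bracket the loop with batch begin/finish
   only when more than one vCPU will receive the IPI. */
static void send_ipi_sketch(const unsigned int *vps, unsigned int nr)
{
    if ( nr > 1 )
        batch_begin();

    for ( unsigned int i = 0; i < nr; i++ )
        send_one_ipi(vps[i]);

    if ( nr > 1 )
        batch_finish();
}
```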

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v2:
 - Don't add the 'nr' field to struct hypercall_vpmask and use
   bitmap_weight() instead
---
 xen/arch/x86/hvm/viridian/viridian.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index fb38210e2cc7..47f15717bcd3 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -11,6 +11,7 @@
 #include <xen/hypercall.h>
 #include <xen/domain_page.h>
 #include <xen/param.h>
+#include <xen/softirq.h>
 #include <asm/guest/hyperv-tlfs.h>
 #include <asm/paging.h>
 #include <asm/p2m.h>
@@ -571,6 +572,11 @@ static unsigned int vpmask_next(const struct hypercall_vpmask *vpmask,
 	      (vp) < HVM_MAX_VCPUS; \
 	      (vp) = vpmask_next(vpmask, vp) )
 
+static unsigned int vpmask_nr(const struct hypercall_vpmask *vpmask)
+{
+    return bitmap_weight(vpmask->mask, HVM_MAX_VCPUS);
+}
+
 /*
  * Windows should not issue the hypercalls requiring this callback in the
  * case where vcpu_id would exceed the size of the mask.
@@ -654,10 +660,17 @@ static int hvcall_flush(const union hypercall_input *input,
 static void send_ipi(struct hypercall_vpmask *vpmask, uint8_t vector)
 {
     struct domain *currd = current->domain;
+    unsigned int nr = vpmask_nr(vpmask);
     unsigned int vp;
 
+    if ( nr > 1 )
+        cpu_raise_softirq_batch_begin();
+
     for_each_vp ( vpmask, vp )
         vlapic_set_irq(vcpu_vlapic(currd->vcpu[vp]), vector, 0);
+
+    if ( nr > 1 )
+        cpu_raise_softirq_batch_finish();
 }
 
 static int hvcall_ipi(const union hypercall_input *input,
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Dec 02 09:22:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 09:22:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42509.76521 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkOL3-0003CI-OO; Wed, 02 Dec 2020 09:22:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42509.76521; Wed, 02 Dec 2020 09:22:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkOL3-0003C4-HO; Wed, 02 Dec 2020 09:22:17 +0000
Received: by outflank-mailman (input) for mailman id 42509;
 Wed, 02 Dec 2020 09:22:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkOL1-00037J-EE
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 09:22:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkOL1-0004qi-7x; Wed, 02 Dec 2020 09:22:15 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkOL0-0006CD-WC; Wed, 02 Dec 2020 09:22:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=2McqarQ48ceH9xo3HrtqQ787rnJw7CqOtU4aFIIyXmY=; b=VrfCTBgvonhJEYdohms63KyNVG
	O6YoeftfM3WaIJ9RY2iXH+dIoSfaMa6UjrpggzFVMQfAVuMbjvlKDNMjudxN7N7wrxlGbIDiMU3sK
	GxvLSWdYiNOD1H1cLJjK4pxKJUWE7moBSq3jQfKTjDg1BX3GpULQFwFmFk5T3Qd9iNw0=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v4 07/11] viridian: add ExProcessorMasks variants of the flush hypercalls
Date: Wed,  2 Dec 2020 09:22:03 +0000
Message-Id: <20201202092205.906-8-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201202092205.906-1-paul@xen.org>
References: <20201202092205.906-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

The Microsoft Hypervisor TLFS specifies variants of the already implemented
HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST hypercalls that take a 'Virtual
Processor Set' as an argument rather than a simple 64-bit mask.

This patch adds a new hvcall_flush_ex() function to implement these
(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST_EX) hypercalls. This makes use of
two new helper functions, hv_vpset_nr_banks() and hv_vpset_to_vpmask(), to
determine the size of the Virtual Processor Set (so that it can be copied
from guest memory) and to parse it into a hypercall_vpmask, respectively.

NOTE: A guest should not yet issue these hypercalls as 'ExProcessorMasks'
      support needs to be advertised via CPUID. This will be done in a
      subsequent patch.
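The size computation the patch performs can be sketched independently. The `struct vpset` below is a simplified stand-in for the TLFS hv_vpset layout assumed by the patch: a fixed header followed by one 64-bit bank per bit set in valid_bank_mask, so the number of bytes to copy from guest memory follows from a population count.

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* Simplified stand-in for hv_vpset: a variable number of 64-bit banks
   follows the header, one per bit set in valid_bank_mask. */
struct vpset {
    uint64_t format;
    uint64_t valid_bank_mask;
    uint64_t bank_contents[];
};

/* Equivalent of hweight64(): count the set bits, i.e. the number of
   banks present in the guest's set. */
static unsigned int nr_banks(uint64_t valid_bank_mask)
{
    unsigned int n = 0;

    while ( valid_bank_mask )
    {
        n += valid_bank_mask & 1;
        valid_bank_mask >>= 1;
    }

    return n;
}

/* Size in bytes that must be copied from guest memory, mirroring the
   HV_VPSET_SIZE() macro in the patch. */
static size_t vpset_size(uint64_t valid_bank_mask)
{
    return offsetof(struct vpset, bank_contents) +
           nr_banks(valid_bank_mask) * sizeof(uint64_t);
}
```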

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v3:
 - Adjust one of the helper macros
 - A few more consts and type tweaks
 - Adjust prototype of new function

v2:
 - Add helper macros to define mask and struct sizes
 - Use a union to determine the size of 'hypercall_vpset'
 - Use hweight64() in hv_vpset_nr_banks()
 - Sanity check size before hvm_copy_from_guest_phys()
---
 xen/arch/x86/hvm/viridian/viridian.c | 141 +++++++++++++++++++++++++++
 1 file changed, 141 insertions(+)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 47f15717bcd3..fceca760b41d 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -577,6 +577,69 @@ static unsigned int vpmask_nr(const struct hypercall_vpmask *vpmask)
     return bitmap_weight(vpmask->mask, HVM_MAX_VCPUS);
 }
 
+#define HV_VPSET_BANK_SIZE \
+    sizeof_field(struct hv_vpset, bank_contents[0])
+
+#define HV_VPSET_SIZE(banks)   \
+    (offsetof(struct hv_vpset, bank_contents) + \
+     ((banks) * HV_VPSET_BANK_SIZE))
+
+#define HV_VPSET_MAX_BANKS \
+    (sizeof_field(struct hv_vpset, valid_bank_mask) * 8)
+
+union hypercall_vpset {
+    struct hv_vpset set;
+    uint8_t pad[HV_VPSET_SIZE(HV_VPSET_MAX_BANKS)];
+};
+
+static DEFINE_PER_CPU(union hypercall_vpset, hypercall_vpset);
+
+static unsigned int hv_vpset_nr_banks(struct hv_vpset *vpset)
+{
+    return hweight64(vpset->valid_bank_mask);
+}
+
+static int hv_vpset_to_vpmask(const struct hv_vpset *set,
+                                   struct hypercall_vpmask *vpmask)
+{
+#define NR_VPS_PER_BANK (HV_VPSET_BANK_SIZE * 8)
+
+    switch ( set->format )
+    {
+    case HV_GENERIC_SET_ALL:
+        vpmask_fill(vpmask);
+        return 0;
+
+    case HV_GENERIC_SET_SPARSE_4K:
+    {
+        uint64_t bank_mask;
+        unsigned int vp, bank = 0;
+
+        vpmask_empty(vpmask);
+        for ( vp = 0, bank_mask = set->valid_bank_mask;
+              bank_mask;
+              vp += NR_VPS_PER_BANK, bank_mask >>= 1 )
+        {
+            if ( bank_mask & 1 )
+            {
+                uint64_t mask = set->bank_contents[bank];
+
+                vpmask_set(vpmask, vp, mask);
+                bank++;
+            }
+        }
+        return 0;
+    }
+
+    default:
+        break;
+    }
+
+    return -EINVAL;
+
+#undef NR_VPS_PER_BANK
+}
+
 /*
  * Windows should not issue the hypercalls requiring this callback in the
  * case where vcpu_id would exceed the size of the mask.
@@ -657,6 +720,78 @@ static int hvcall_flush(const union hypercall_input *input,
     return 0;
 }
 
+static int hvcall_flush_ex(const union hypercall_input *input,
+                           union hypercall_output *output,
+                           paddr_t input_params_gpa,
+                           paddr_t output_params_gpa)
+{
+    struct hypercall_vpmask *vpmask = &this_cpu(hypercall_vpmask);
+    struct {
+        uint64_t address_space;
+        uint64_t flags;
+        struct hv_vpset set;
+    } input_params;
+
+    /* These hypercalls should never use the fast-call convention. */
+    if ( input->fast )
+        return -EINVAL;
+
+    /* Get input parameters. */
+    if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+                                  sizeof(input_params)) != HVMTRANS_okay )
+        return -EINVAL;
+
+    if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
+        vpmask_fill(vpmask);
+    else
+    {
+        union hypercall_vpset *vpset = &this_cpu(hypercall_vpset);
+        struct hv_vpset *set = &vpset->set;
+        size_t size;
+        int rc;
+
+        *set = input_params.set;
+        if ( set->format == HV_GENERIC_SET_SPARSE_4K )
+        {
+            unsigned long offset = offsetof(typeof(input_params),
+                                            set.bank_contents);
+
+            size = sizeof(*set->bank_contents) * hv_vpset_nr_banks(set);
+
+            if ( offsetof(typeof(*vpset), set.bank_contents[0]) + size >
+                 sizeof(*vpset) )
+            {
+                ASSERT_UNREACHABLE();
+                return -EINVAL;
+            }
+
+            if ( hvm_copy_from_guest_phys(&set->bank_contents[0],
+                                          input_params_gpa + offset,
+                                          size) != HVMTRANS_okay )
+                return -EINVAL;
+
+            size += sizeof(*set);
+        }
+        else
+            size = sizeof(*set);
+
+        rc = hv_vpset_to_vpmask(set, vpmask);
+        if ( rc )
+            return rc;
+    }
+
+    /*
+     * A false return means that another vcpu is currently trying
+     * a similar operation, so back off.
+     */
+    if ( !paging_flush_tlb(need_flush, vpmask) )
+        return -ERESTART;
+
+    output->rep_complete = input->rep_count;
+
+    return 0;
+}
+
 static void send_ipi(struct hypercall_vpmask *vpmask, uint8_t vector)
 {
     struct domain *currd = current->domain;
@@ -770,6 +905,12 @@ int viridian_hypercall(struct cpu_user_regs *regs)
                           output_params_gpa);
         break;
 
+    case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX:
+    case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX:
+        rc = hvcall_flush_ex(&input, &output, input_params_gpa,
+                             output_params_gpa);
+        break;
+
     case HVCALL_SEND_IPI:
         rc = hvcall_ipi(&input, &output, input_params_gpa,
                         output_params_gpa);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Dec 02 09:22:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 09:22:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42511.76532 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkOL4-0003Eb-Tb; Wed, 02 Dec 2020 09:22:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42511.76532; Wed, 02 Dec 2020 09:22:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkOL4-0003Dc-40; Wed, 02 Dec 2020 09:22:18 +0000
Received: by outflank-mailman (input) for mailman id 42511;
 Wed, 02 Dec 2020 09:22:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkOL2-0003AT-Js
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 09:22:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkOL2-0004qv-Cc; Wed, 02 Dec 2020 09:22:16 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkOL2-0006CD-4h; Wed, 02 Dec 2020 09:22:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=Gt2Gm0WXXUEpvKgVkyBf36jhM5Cv2I4ub9rwEjAgTo0=; b=25jlMVPl80HFOvfhQj59s90ZWD
	s0F2bzsjBcDmkAAxhiL5Rn1/OGKLpVNbhwwM+k/ULKIvHVdybUeeiQOStJt0KpuHZPN+gdACpYQ6l
	l8w7fqvAGwLjGJe6GRxksplffffO7vrkihWOxXQZFWNddfiZgcRR4OQbjWZ7l5IgymMk=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v4 08/11] viridian: add ExProcessorMasks variant of the IPI hypercall
Date: Wed,  2 Dec 2020 09:22:04 +0000
Message-Id: <20201202092205.906-9-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201202092205.906-1-paul@xen.org>
References: <20201202092205.906-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

A previous patch introduced variants of the flush hypercalls that take a
'Virtual Processor Set' as an argument rather than a simple 64-bit mask.
This patch introduces a similar variant of the HVCALL_SEND_IPI hypercall
(HVCALL_SEND_IPI_EX).

NOTE: As with HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST_EX, a guest should
      not yet issue the HVCALL_SEND_IPI_EX hypercall as support for
      'ExProcessorMasks' is not yet advertised via CPUID.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v3:
 - Adjust prototype of new function

v2:
 - Sanity check size before hvm_copy_from_guest_phys()
---
 xen/arch/x86/hvm/viridian/viridian.c | 74 ++++++++++++++++++++++++++++
 1 file changed, 74 insertions(+)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index fceca760b41d..9aa8e6c2572c 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -860,6 +860,75 @@ static int hvcall_ipi(const union hypercall_input *input,
     return 0;
 }
 
+static int hvcall_ipi_ex(const union hypercall_input *input,
+                         union hypercall_output *output,
+                         paddr_t input_params_gpa,
+                         paddr_t output_params_gpa)
+{
+    struct hypercall_vpmask *vpmask = &this_cpu(hypercall_vpmask);
+    struct {
+        uint32_t vector;
+        uint8_t target_vtl;
+        uint8_t reserved_zero[3];
+        struct hv_vpset set;
+    } input_params;
+    union hypercall_vpset *vpset = &this_cpu(hypercall_vpset);
+    struct hv_vpset *set = &vpset->set;
+    size_t size;
+    int rc;
+
+    /* These hypercalls should never use the fast-call convention. */
+    if ( input->fast )
+        return -EINVAL;
+
+    /* Get input parameters. */
+    if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+                                  sizeof(input_params)) != HVMTRANS_okay )
+        return -EINVAL;
+
+    if ( input_params.target_vtl ||
+         input_params.reserved_zero[0] ||
+         input_params.reserved_zero[1] ||
+         input_params.reserved_zero[2] )
+        return HV_STATUS_INVALID_PARAMETER;
+
+    if ( input_params.vector < 0x10 || input_params.vector > 0xff )
+        return HV_STATUS_INVALID_PARAMETER;
+
+    *set = input_params.set;
+    if ( set->format == HV_GENERIC_SET_SPARSE_4K )
+    {
+        unsigned long offset = offsetof(typeof(input_params),
+                                        set.bank_contents);
+
+        size = sizeof(*set->bank_contents) * hv_vpset_nr_banks(set);
+
+        if ( offsetof(typeof(*vpset), set.bank_contents[0]) + size >
+             sizeof(*vpset) )
+        {
+            ASSERT_UNREACHABLE();
+            return -EINVAL;
+        }
+
+        if ( hvm_copy_from_guest_phys(&set->bank_contents,
+                                      input_params_gpa + offset,
+                                      size) != HVMTRANS_okay )
+            return -EINVAL;
+
+        size += sizeof(*set);
+    }
+    else
+        size = sizeof(*set);
+
+    rc = hv_vpset_to_vpmask(set, vpmask);
+    if ( rc )
+        return rc;
+
+    send_ipi(vpmask, input_params.vector);
+
+    return 0;
+}
+
 int viridian_hypercall(struct cpu_user_regs *regs)
 {
     struct vcpu *curr = current;
@@ -916,6 +985,11 @@ int viridian_hypercall(struct cpu_user_regs *regs)
                         output_params_gpa);
         break;
 
+    case HVCALL_SEND_IPI_EX:
+        rc = hvcall_ipi_ex(&input, &output, input_params_gpa,
+                           output_params_gpa);
+        break;
+
     default:
         gprintk(XENLOG_WARNING, "unimplemented hypercall %04x\n",
                 input.call_code);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Dec 02 09:22:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 09:22:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42512.76549 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkOL6-0003KL-Mj; Wed, 02 Dec 2020 09:22:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42512.76549; Wed, 02 Dec 2020 09:22:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkOL6-0003Jv-Eq; Wed, 02 Dec 2020 09:22:20 +0000
Received: by outflank-mailman (input) for mailman id 42512;
 Wed, 02 Dec 2020 09:22:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkOL3-0003Cx-UN
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 09:22:17 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkOL3-0004r2-Hd; Wed, 02 Dec 2020 09:22:17 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkOL3-0006CD-9c; Wed, 02 Dec 2020 09:22:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=Pgl96KUiTKNG1FAmFTz8k9V801W3LGUJk8d0gWn7PzM=; b=LCEq8Mx+1Piq1b7P3sadP0P4Bn
	GOygdLUclZW9Avp+RpKcDQrX2wk/DMaktHc7lagUQrYjglb8tRajC8C95hkBV5OpxSJlEWVo23Kp+
	OAdzhp4uBMFQ9dRtGwO+WakzP2BJKv2pgs+DzPOwmBAKdFyf49tJJyUC9+Q6D0Wisr2I=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v4 09/11] viridian: log initial invocation of each type of hypercall
Date: Wed,  2 Dec 2020 09:22:05 +0000
Message-Id: <20201202092205.906-10-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201202092205.906-1-paul@xen.org>
References: <20201202092205.906-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

To make it simpler to observe which viridian hypercalls are issued by a
particular Windows guest, this patch adds a per-domain mask to track them.
Each type of hypercall causes a different bit to be set in the mask and
when the bit transitions from clear to set, a log line is emitted showing
the name of the hypercall and the domain that issued it.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v2:
 - Use DECLARE_BITMAP() for 'hypercall_flags'
 - Use an enum for _HCALL_* values
---
 xen/arch/x86/hvm/viridian/viridian.c | 21 +++++++++++++++++++++
 xen/include/asm-x86/hvm/viridian.h   | 10 ++++++++++
 2 files changed, 31 insertions(+)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 9aa8e6c2572c..95314c1f7f6c 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -933,6 +933,7 @@ int viridian_hypercall(struct cpu_user_regs *regs)
 {
     struct vcpu *curr = current;
     struct domain *currd = curr->domain;
+    struct viridian_domain *vd = currd->arch.hvm.viridian;
     int mode = hvm_guest_x86_mode(curr);
     unsigned long input_params_gpa, output_params_gpa;
     int rc = 0;
@@ -962,6 +963,10 @@ int viridian_hypercall(struct cpu_user_regs *regs)
     switch ( input.call_code )
     {
     case HVCALL_NOTIFY_LONG_SPIN_WAIT:
+        if ( !test_and_set_bit(_HCALL_spin_wait, vd->hypercall_flags) )
+            printk(XENLOG_G_INFO "d%d: VIRIDIAN HVCALL_NOTIFY_LONG_SPIN_WAIT\n",
+                   currd->domain_id);
+
         /*
          * See section 14.5.1 of the specification.
          */
@@ -970,22 +975,38 @@ int viridian_hypercall(struct cpu_user_regs *regs)
 
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
+        if ( !test_and_set_bit(_HCALL_flush, vd->hypercall_flags) )
+            printk(XENLOG_G_INFO "%pd: VIRIDIAN HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST\n",
+                   currd);
+
         rc = hvcall_flush(&input, &output, input_params_gpa,
                           output_params_gpa);
         break;
 
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX:
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX:
+        if ( !test_and_set_bit(_HCALL_flush_ex, vd->hypercall_flags) )
+            printk(XENLOG_G_INFO "%pd: VIRIDIAN HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST_EX\n",
+                   currd);
+
         rc = hvcall_flush_ex(&input, &output, input_params_gpa,
                              output_params_gpa);
         break;
 
     case HVCALL_SEND_IPI:
+        if ( !test_and_set_bit(_HCALL_ipi, vd->hypercall_flags) )
+            printk(XENLOG_G_INFO "%pd: VIRIDIAN HVCALL_SEND_IPI\n",
+                   currd);
+
         rc = hvcall_ipi(&input, &output, input_params_gpa,
                         output_params_gpa);
         break;
 
     case HVCALL_SEND_IPI_EX:
+        if ( !test_and_set_bit(_HCALL_ipi_ex, vd->hypercall_flags) )
+            printk(XENLOG_G_INFO "%pd: VIRIDIAN HVCALL_SEND_IPI_EX\n",
+                   currd);
+
         rc = hvcall_ipi_ex(&input, &output, input_params_gpa,
                            output_params_gpa);
         break;
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index cbf77d9c760b..4c8ff6e80b6f 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -55,10 +55,20 @@ struct viridian_time_ref_count
     int64_t off;
 };
 
+enum {
+    _HCALL_spin_wait,
+    _HCALL_flush,
+    _HCALL_flush_ex,
+    _HCALL_ipi,
+    _HCALL_ipi_ex,
+    _HCALL_nr /* must be last */
+};
+
 struct viridian_domain
 {
     union hv_guest_os_id guest_os_id;
     union hv_vp_assist_page_msr hypercall_gpa;
+    DECLARE_BITMAP(hypercall_flags, _HCALL_nr);
     struct viridian_time_ref_count time_ref_count;
     struct viridian_page reference_tsc;
 };
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Dec 02 09:25:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 09:25:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42554.76573 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkONt-00049D-F8; Wed, 02 Dec 2020 09:25:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42554.76573; Wed, 02 Dec 2020 09:25:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkONt-000496-C8; Wed, 02 Dec 2020 09:25:13 +0000
Received: by outflank-mailman (input) for mailman id 42554;
 Wed, 02 Dec 2020 09:25:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UQyH=FG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkONs-000491-4m
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 09:25:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3a9b3c6e-0669-4457-9192-3736b16c5fca;
 Wed, 02 Dec 2020 09:25:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7A262ADCA;
 Wed,  2 Dec 2020 09:25:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3a9b3c6e-0669-4457-9192-3736b16c5fca
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606901109; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=efiM/dz3vSPkG+X5au0xOu1pZEnQd/QJBfpCmrN6BzY=;
	b=VaxlxOrVAC0IaoB+0wHID+d9tcWM6GnkB6ZnhEW8XKBcVOE0o7v5elU4YAj79pBegu8nyA
	d3YGavOQnKuOf9aQ6ukV1omZcjENX5EwusBfTqprxR8hu1mtPae7E6fRTea9BwjsXHHcb4
	0Ay/NO3QyxaukSmJtqOfkmyhMbCJMMw=
Subject: Re: [PATCH] x86/IRQ: bump max number of guests for a shared IRQ to 31
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Cc: andrew.cooper3@citrix.com, roger.pau@citrix.com, wl@xen.org,
 xen-devel@lists.xenproject.org
References: <1606780777-30718-1-git-send-email-igor.druzhinin@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b98d3517-6c9d-6f40-6e28-cde142978143@suse.com>
Date: Wed, 2 Dec 2020 10:25:09 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <1606780777-30718-1-git-send-email-igor.druzhinin@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.12.2020 00:59, Igor Druzhinin wrote:
> Current limit of 7 is too restrictive for modern systems where one GSI
> could be shared by potentially many PCI INTx sources where each of them
> corresponds to a device passed through to its own guest. Some systems do not
> apply due diligence in swizzling INTx links in case e.g. INTA is declared as
> interrupt pin for the majority of PCI devices behind a single router,
> resulting in overuse of a GSI.
> 
> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> ---
> 
> If people think that would make sense - I can rework the array to a list of
> domain pointers to avoid the limit.

Not sure about this. What is the biggest number you've found on any
system? (I assume the chosen value of 31 has some headroom.)

Instead I'm wondering whether this wouldn't better be a Kconfig
setting (or even command line controllable). There don't look to be
any restrictions on the precise value chosen (i.e. 2**n-1 like is
the case for old and new values here, for whatever reason), so a
simple permitted range of like 4...64 would seem fine to specify.
Whether the default then would want to be 8 (close to the current
7) or higher (around the actually observed maximum) is a different
question.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 09:28:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 09:28:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42570.76584 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkORP-0004RK-Vf; Wed, 02 Dec 2020 09:28:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42570.76584; Wed, 02 Dec 2020 09:28:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkORP-0004RD-Sk; Wed, 02 Dec 2020 09:28:51 +0000
Received: by outflank-mailman (input) for mailman id 42570;
 Wed, 02 Dec 2020 09:28:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UQyH=FG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkORP-0004R8-3p
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 09:28:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f37dbe1e-6121-4853-85cc-5e58da556245;
 Wed, 02 Dec 2020 09:28:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 78D17ADA2;
 Wed,  2 Dec 2020 09:28:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f37dbe1e-6121-4853-85cc-5e58da556245
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606901329; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=mjDGImZh8q7OUTE/8K7S0r4PTqQZWnnkUTanEAf10KA=;
	b=jNYTfDnoFxwV2ZKSVrPGKD8fwFG96NHLRqxYVhu0c9fYY6IGjX24Ozwaw2nWpkMXaVdyzX
	eaC6p7kqfEWklZBL7erpdD4N++AcmfLKv56ktYKlmSEyQ5DStmKGe68rSRQC/mIO5J3dNn
	WDg/8OGtPopKN9YkauTUYZfHzrzxKWk=
Subject: Re: [PATCH v4 01/11] viridian: don't blindly write to 32-bit
 registers is 'mode' is invalid
To: Paul Durrant <paul@xen.org>
Cc: Paul Durrant <pdurrant@amazon.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201202092205.906-1-paul@xen.org>
 <20201202092205.906-2-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f95b237e-31d3-08c8-4dab-ee273f10b585@suse.com>
Date: Wed, 2 Dec 2020 10:28:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201202092205.906-2-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 02.12.2020 10:21, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> If hvm_guest_x86_mode() returns something other than 8 or 4 then
> viridian_hypercall() will return immediately but, on the way out, will write
> back status as if 'mode' was 4. This patch simply makes it leave the registers
> alone.
> 
> NOTE: The formatting of the 'out' label and the switch statement are also
>       adjusted as per CODING_STYLE.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> ---
> Cc: Wei Liu <wl@xen.org>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> 
> v4:
>  - Fixed another CODING_STYLE violation.

Partly:

> --- a/xen/arch/x86/hvm/viridian/viridian.c
> +++ b/xen/arch/x86/hvm/viridian/viridian.c
> @@ -692,13 +692,15 @@ int viridian_hypercall(struct cpu_user_regs *regs)
>          break;
>      }
>  
> -out:
> + out:
>      output.result = status;
> -    switch (mode) {
> +    switch (mode)

There are also two blanks missing here. Will again record this as
to be taken care of while committing, once an ack arrives. (And
btw, the earlier of the two "is" in the subject also wants to be
"if".)

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 09:33:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 09:33:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42576.76597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkOVr-0005OX-Hm; Wed, 02 Dec 2020 09:33:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42576.76597; Wed, 02 Dec 2020 09:33:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkOVr-0005OQ-Ei; Wed, 02 Dec 2020 09:33:27 +0000
Received: by outflank-mailman (input) for mailman id 42576;
 Wed, 02 Dec 2020 09:33:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0ucf=FG=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kkOVq-0005OL-GD
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 09:33:26 +0000
Received: from mail-wm1-x32f.google.com (unknown [2a00:1450:4864:20::32f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c6cec683-7717-442e-b1b9-c6fe711af3f0;
 Wed, 02 Dec 2020 09:33:25 +0000 (UTC)
Received: by mail-wm1-x32f.google.com with SMTP id a3so5839401wmb.5
 for <xen-devel@lists.xenproject.org>; Wed, 02 Dec 2020 01:33:25 -0800 (PST)
Received: from CBGR90WXYV0 (54-240-197-227.amazon.com. [54.240.197.227])
 by smtp.gmail.com with ESMTPSA id d8sm1306279wrp.44.2020.12.02.01.33.23
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 02 Dec 2020 01:33:24 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c6cec683-7717-442e-b1b9-c6fe711af3f0
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
Cc: "'Paul Durrant'" <pdurrant@amazon.com>,
	"'Wei Liu'" <wl@xen.org>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
 'Roger Pau Monné' <roger.pau@citrix.com>,
	<xen-devel@lists.xenproject.org>
References: <20201202092205.906-1-paul@xen.org> <20201202092205.906-2-paul@xen.org> <f95b237e-31d3-08c8-4dab-ee273f10b585@suse.com>
In-Reply-To: <f95b237e-31d3-08c8-4dab-ee273f10b585@suse.com>
Subject: RE: [PATCH v4 01/11] viridian: don't blindly write to 32-bit registers is 'mode' is invalid
Date: Wed, 2 Dec 2020 09:33:23 -0000
Message-ID: <003301d6c88e$2eccc010$8c664030$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQFAW81zKVdA94714yPPI08guNmREwItrs0IAWpwLCuq86NOoA==

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 02 December 2020 09:29
> To: Paul Durrant <paul@xen.org>
> Cc: Paul Durrant <pdurrant@amazon.com>; Wei Liu <wl@xen.org>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH v4 01/11] viridian: don't blindly write to 32-bit registers is 'mode' is invalid
>
> On 02.12.2020 10:21, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > If hvm_guest_x86_mode() returns something other than 8 or 4 then
> > viridian_hypercall() will return immediately but, on the way out, will write
> > back status as if 'mode' was 4. This patch simply makes it leave the registers
> > alone.
> >
> > NOTE: The formatting of the 'out' label and the switch statement are also
> >       adjusted as per CODING_STYLE.
> >
> > Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> > ---
> > Cc: Wei Liu <wl@xen.org>
> > Cc: Jan Beulich <jbeulich@suse.com>
> > Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> > Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> >
> > v4:
> >  - Fixed another CODING_STYLE violation.
>
> Partly:
>
> > --- a/xen/arch/x86/hvm/viridian/viridian.c
> > +++ b/xen/arch/x86/hvm/viridian/viridian.c
> > @@ -692,13 +692,15 @@ int viridian_hypercall(struct cpu_user_regs *regs)
> >          break;
> >      }
> >
> > -out:
> > + out:
> >      output.result = status;
> > -    switch (mode) {
> > +    switch (mode)
>
> There are also two blanks missing here. Will again record this as
> to be taken care of while committing, once an ack arrives. (And
> btw, the earlier of the two "is" in the subject also wants to be
> "if".)

Gah. The lack of a style checker really is annoying. I'll fix locally in case there's a v5.

  Paul

>
> Jan
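
[Editor's note] The fix under discussion only writes the hypercall status back for the two valid guest modes and otherwise leaves the registers alone. A minimal sketch of that shape, using a toy register struct rather than the actual Xen cpu_user_regs/viridian code:

```c
#include <stdint.h>

/* Toy register file standing in for struct cpu_user_regs. */
struct toy_regs {
    uint64_t rax, rdx;
};

/*
 * Write a 64-bit hypercall status back according to guest mode:
 * 64-bit guests (mode 8) take it whole in rax, 32-bit guests (mode 4)
 * take it split across rdx:rax, and any other (invalid) mode leaves
 * the registers untouched, which is the behaviour the patch introduces.
 */
static void write_status(struct toy_regs *regs, unsigned int mode,
                         uint64_t status)
{
    switch ( mode )
    {
    case 8:
        regs->rax = status;
        break;

    case 4:
        regs->rdx = status >> 32;
        regs->rax = (uint32_t)status;
        break;

    default:
        /* Invalid mode: do not clobber guest registers. */
        break;
    }
}
```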



From xen-devel-bounces@lists.xenproject.org Wed Dec 02 09:51:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 09:51:55 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157155-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 157155: all pass - PUSHED
X-Osstest-Versions-This:
    xen=3ae469af8e680df31eecd0a2ac6a83b58ad7ce53
X-Osstest-Versions-That:
    xen=f7d7d53f6464cff94ead4c15d21e79ce4d9173f5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 02 Dec 2020 09:51:37 +0000

flight 157155 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157155/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  3ae469af8e680df31eecd0a2ac6a83b58ad7ce53
baseline version:
 xen                  f7d7d53f6464cff94ead4c15d21e79ce4d9173f5

Last test of basis   157090  2020-11-29 09:18:25 Z    3 days
Testing same since   157155  2020-12-02 09:19:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Juergen Gross <jgross@suse.com>
  Manuel Bouyer <bouyer@antioche.eu.org>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f7d7d53f64..3ae469af8e  3ae469af8e680df31eecd0a2ac6a83b58ad7ce53 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 09:56:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 09:56:20 +0000
Subject: Re: [PATCH v2 06/17] xen/cpupool: use ERR_PTR() for returning error
 cause from cpupool_create()
To: Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-7-jgross@suse.com>
 <08f377a7-7862-0597-fe42-98851dc3db37@suse.com>
From: Jürgen Groß <jgross@suse.com>
Message-ID: <b130d75b-8cfa-7158-7f27-947ee50c579f@suse.com>
Date: Wed, 2 Dec 2020 10:56:13 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <08f377a7-7862-0597-fe42-98851dc3db37@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="BTzFdMUKZGwAD3RX1qHJUCJVfghljlvb0"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--BTzFdMUKZGwAD3RX1qHJUCJVfghljlvb0
Content-Type: multipart/mixed; boundary="OWDtWKAwyDQ1SpJBJ9i3CaI5yusm0Ml4Q";
 protected-headers="v1"

--OWDtWKAwyDQ1SpJBJ9i3CaI5yusm0Ml4Q
Content-Type: multipart/mixed;
 boundary="------------D4ED1F27BF011DC212BDCC7B"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------D4ED1F27BF011DC212BDCC7B
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 02.12.20 09:58, Jan Beulich wrote:
> On 01.12.2020 09:21, Juergen Gross wrote:
>> Instead of a pointer to an error variable as parameter just use
>> ERR_PTR() to return the cause of an error in cpupool_create().
>>
>> This propagates to scheduler_alloc(), too.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> with one small question:
>
>> --- a/xen/common/sched/core.c
>> +++ b/xen/common/sched/core.c
>> @@ -3233,26 +3233,25 @@ struct scheduler *scheduler_get_default(void)
>>       return &ops;
>>   }
>>
>> -struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr)
>> +struct scheduler *scheduler_alloc(unsigned int sched_id)
>>   {
>>       int i;
>> +    int ret;
>
> I guess you didn't merge this with i's declaration because of a
> plan/hope for i to be converted to unsigned int?

The main reason is I don't like overloading variables this way.

Any sane compiler will do that for me as it will discover that the
two variables are not alive at the same time, so the generated code
should be the same, while the written code stays more readable this
way.


Juergen
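
[Editor's note] The ERR_PTR() convention the patch adopts encodes a small negative errno value in the returned pointer itself, so one return value carries either a valid pointer or the failure cause. A minimal sketch of the pattern, simplified from the Linux/Xen helpers, with a hypothetical toy_create() standing in for cpupool_create():

```c
#include <errno.h>
#include <stdbool.h>
#include <stdlib.h>

#define MAX_ERRNO 4095

/* Encode a negative errno value in a pointer. */
static void *ERR_PTR(long error) { return (void *)error; }

/* Recover the errno value from an encoded pointer. */
static long PTR_ERR(const void *ptr) { return (long)ptr; }

/* Errno-encoding pointers occupy the top MAX_ERRNO addresses. */
static bool IS_ERR(const void *ptr)
{
    return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* Toy creator: returns a valid pointer or an ERR_PTR() cause. */
static int *toy_create(int value)
{
    int *p;

    if ( value < 0 )
        return ERR_PTR(-EINVAL);  /* bad argument: report why */

    p = malloc(sizeof(*p));
    if ( !p )
        return ERR_PTR(-ENOMEM);

    *p = value;
    return p;
}
```

The caller then tests with IS_ERR() instead of threading a separate `int *perr` parameter through the call chain.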



--------------D4ED1F27BF011DC212BDCC7B--

--OWDtWKAwyDQ1SpJBJ9i3CaI5yusm0Ml4Q--


--BTzFdMUKZGwAD3RX1qHJUCJVfghljlvb0--


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 10:04:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 10:04:53 +0000
Subject: Re: [PATCH] x86/vmap: handle superpages in vmap_to_mfn()
To: Hongyan Xia <hx242@xen.org>
Cc: jgrall@amazon.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 Roger Pau Monné <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <34de4c4326673c60d3e2cbd3bbcbcca481906524.1606755042.git.hongyxia@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ef127c6f-2d8a-1ddf-f8e7-7e747518c5a8@suse.com>
Date: Wed, 2 Dec 2020 11:04:45 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <34de4c4326673c60d3e2cbd3bbcbcca481906524.1606755042.git.hongyxia@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.11.2020 17:50, Hongyan Xia wrote:
> From: Hongyan Xia <hongyxia@amazon.com>
> 
> There is simply no guarantee that vmap won't return superpages to the
> caller. It can happen if the list of MFNs are contiguous, or we simply
> have a large granularity. Although rare, if such things do happen, we
> will simply hit BUG_ON() and crash. Properly handle such cases in a new
> implementation.
> 
> Note that vmap is now too large to be a macro, so implement it as a
> normal function and move the declaration to mm.h (page.h cannot handle
> mfn_t).

I'm not happy about this movement, and it's also not really clear to
me what problem page.h would have in principle. Yes, it can't
include xen/mm.h, but I think it is long overdue that we disentangle
this at least some. Let me see what I can do as a prereq for this
change, but see also below.

> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -5194,6 +5194,49 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
>          }                                          \
>      } while ( false )
>  
> +mfn_t vmap_to_mfn(const void *v)
> +{
> +    bool locking = system_state > SYS_STATE_boot;
> +    unsigned int l2_offset = l2_table_offset((unsigned long)v);
> +    unsigned int l1_offset = l1_table_offset((unsigned long)v);
> +    l3_pgentry_t *pl3e = virt_to_xen_l3e((unsigned long)v);
> +    l2_pgentry_t *pl2e;
> +    l1_pgentry_t *pl1e;

Can't all three of them be pointer-to-const?

> +    struct page_info *l3page;
> +    mfn_t ret;
> +
> +    ASSERT(pl3e);

    if ( !pl3e )
    {
        ASSERT_UNREACHABLE();
        return INVALID_MFN;
    }

as per the bottom of ./CODING_STYLE? (Similarly further down
then.)

> +    l3page = virt_to_page(pl3e);
> +    L3T_LOCK(l3page);
> +
> +    ASSERT(l3e_get_flags(*pl3e) & _PAGE_PRESENT);
> +    if ( l3e_get_flags(*pl3e) & _PAGE_PSE )
> +    {
> +        ret = mfn_add(l3e_get_mfn(*pl3e),
> +                      (l2_offset << PAGETABLE_ORDER) + l1_offset);
> +        L3T_UNLOCK(l3page);
> +        return ret;

To keep the locked region as narrow as possible

        mfn = l3e_get_mfn(*pl3e);
        L3T_UNLOCK(l3page);
        return mfn_add(mfn, (l2_offset << PAGETABLE_ORDER) + l1_offset);

? However, in particular because of the recurring unlocks on
the exit paths I wonder whether it wouldn't be better to
restructure the whole function such that there'll be one unlock
and one return. Otoh I'm afraid what I'm asking for here is
going to yield a measurable set of goto-s ...

> +    }
> +
> +    pl2e = map_l2t_from_l3e(*pl3e) + l2_offset;
> +    ASSERT(l2e_get_flags(*pl2e) & _PAGE_PRESENT);
> +    if ( l2e_get_flags(*pl2e) & _PAGE_PSE )
> +    {
> +        ret = mfn_add(l2e_get_mfn(*pl2e), l1_offset);
> +        L3T_UNLOCK(l3page);
> +        return ret;
> +    }
> +
> +    pl1e = map_l1t_from_l2e(*pl2e) + l1_offset;
> +    UNMAP_DOMAIN_PAGE(pl2e);
> +    ASSERT(l1e_get_flags(*pl1e) & _PAGE_PRESENT);
> +    ret = l1e_get_mfn(*pl1e);
> +    L3T_UNLOCK(l3page);
> +    UNMAP_DOMAIN_PAGE(pl1e);
> +
> +    return ret;
> +}

Now for the name of the function: The only aspect tying it
somewhat to vmap() is that it assumes (asserts) it'll find a
valid mapping. I think it wants renaming, and vmap_to_mfn()
then would become a #define of it (perhaps even retaining
its property of getting unsigned long passed in), at which
point it also doesn't need moving out of page.h. As to the
actual name, xen_map_to_mfn() to somewhat match up with
map_pages_to_xen()?

Jan
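
[Editor's note] The superpage arithmetic under review, (l2_offset << PAGETABLE_ORDER) + l1_offset, is simply the number of 4KiB frames the address lies into the large mapping. A self-contained sketch with the usual x86-64 paging constants (toy helpers with illustrative names, not the Xen originals):

```c
#include <stdint.h>

#define PAGE_SHIFT       12
#define PAGETABLE_ORDER  9
#define L1_SHIFT         PAGE_SHIFT                      /* 4KiB  */
#define L2_SHIFT         (PAGE_SHIFT + PAGETABLE_ORDER)  /* 2MiB  */

/* Index of the address within its L1 page table (bits 12-20). */
static unsigned int l1_table_offset(uint64_t va)
{
    return (va >> L1_SHIFT) & ((1u << PAGETABLE_ORDER) - 1);
}

/* Index of the address within its L2 page table (bits 21-29). */
static unsigned int l2_table_offset(uint64_t va)
{
    return (va >> L2_SHIFT) & ((1u << PAGETABLE_ORDER) - 1);
}

/*
 * MFN for an address inside a 1GiB superpage: the L3 entry holds the
 * MFN of the superpage's first 4KiB frame, and the wanted frame is
 * (l2_offset << PAGETABLE_ORDER) + l1_offset frames further in.
 */
static uint64_t superpage_l3_mfn(uint64_t l3_base_mfn, uint64_t va)
{
    return l3_base_mfn +
           (l2_table_offset(va) << PAGETABLE_ORDER) + l1_table_offset(va);
}
```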


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 10:36:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 10:36:46 +0000
Subject: Re: [PATCH] xen: remove trailing semicolon in macro definition
To: trix@redhat.com, boris.ostrovsky@oracle.com, sstabellini@kernel.org,
 tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, hpa@zytor.com
Cc: x86@kernel.org, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
References: <20201127160707.2622061-1-trix@redhat.com>
From: Jürgen Groß <jgross@suse.com>
Message-ID: <3c0d78dd-b9d8-0f80-98bb-680248fc9b99@suse.com>
Date: Wed, 2 Dec 2020 11:36:36 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201127160707.2622061-1-trix@redhat.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="YNqqUgVH3u7yUvmoDsd4aqCpnG2L0pwwo"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--YNqqUgVH3u7yUvmoDsd4aqCpnG2L0pwwo
Content-Type: multipart/mixed; boundary="25zRNclcpurKruKefRf7bJ6B7KJ20d3Ty";
 protected-headers="v1"

--25zRNclcpurKruKefRf7bJ6B7KJ20d3Ty
Content-Type: multipart/mixed;
 boundary="------------4B2B39EDD71E8B73392BA0C4"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------4B2B39EDD71E8B73392BA0C4
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 27.11.20 17:07, trix@redhat.com wrote:
> From: Tom Rix <trix@redhat.com>
>
> The macro use will already have a semicolon.
>
> Signed-off-by: Tom Rix <trix@redhat.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
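
[Editor's note] The reason trailing semicolons in macro definitions matter: with the semicolon baked into the macro, an if/else use expands to a stray empty statement that detaches the else. A minimal illustration with toy macros, not taken from the patch itself:

```c
/* Call counter used to observe which branch ran. */
static int calls;

static void do_thing(void) { calls++; }

/*
 * With a trailing semicolon inside the macro, e.g.
 *     #define BAD_PING() do_thing();
 * the use "if ( cond ) BAD_PING(); else ..." expands to
 * "do_thing();; else ...": the extra empty statement ends the if,
 * so the else no longer compiles.  The do { } while ( 0 ) idiom,
 * without a trailing semicolon, expands to exactly one statement
 * and composes safely with the caller-supplied semicolon.
 */
#define PING() do { do_thing(); } while ( 0 )

static int ping_if(int cond)
{
    if ( cond )
        PING();
    else
        calls--;  /* legal only because PING() is one statement */
    return calls;
}
```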


--------------4B2B39EDD71E8B73392BA0C4--

--25zRNclcpurKruKefRf7bJ6B7KJ20d3Ty--

--YNqqUgVH3u7yUvmoDsd4aqCpnG2L0pwwo
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/HbjQFAwAAAAAACgkQsN6d1ii/Ey9q
ygf/SgFCZg1YowmATZyRWnic7FXKS/1Cb/ushTbHPxzvmgaUs8t8320iZdW78n9fVHPwKghFNGZK
xXNtHkP6UHDAJ2dLzy4gMTEhP2aqDJhRkLIHX7BsJg1H7piRYUOj7ymiFVtft+0Vzl31+KyAUDUi
M7Lsa0g81cvJ8E5KZnbG4ustRUgan9u3/qxT2glejbcuais21uFaUG5tKWklem5YWES/kYkrrhbx
D2yX7pgazkAN4g1CkRuOya3if8SOFk+mty4OGhqpxDjhxxgqKHg6Mm2OWsI87BR0KDqNgc1dhupC
Gm4mk+NubQZJBiYPNlBMmsWk+EFYtFoVLkaTdH6jWQ==
=lM4K
-----END PGP SIGNATURE-----

--YNqqUgVH3u7yUvmoDsd4aqCpnG2L0pwwo--


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 10:46:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 10:46:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42610.76660 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkPew-0003zQ-DC; Wed, 02 Dec 2020 10:46:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42610.76660; Wed, 02 Dec 2020 10:46:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkPew-0003zJ-9B; Wed, 02 Dec 2020 10:46:54 +0000
Received: by outflank-mailman (input) for mailman id 42610;
 Wed, 02 Dec 2020 10:46:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UQyH=FG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkPeu-0003zE-UZ
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 10:46:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9e8a8fae-c4a8-4b0c-9c17-365c986087fe;
 Wed, 02 Dec 2020 10:46:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 37B5EACB5;
 Wed,  2 Dec 2020 10:46:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e8a8fae-c4a8-4b0c-9c17-365c986087fe
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606906011; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=sTD3mEBlpAzVGMWs80SkdN2Ve9wTuxDg3Z7Z8vfRfMA=;
	b=Pl/nEvyVlgS7g3phEkB4jnf1kD9DofrUOkY5wCKfqpCoKGTl+2rAYumtiiwA0QD/7Ns+zD
	FCGjrpFkl6mS9hKbq02nj+FCF8EAxhOwd+o/HKJKQujS4X9skwIacOwPDlrUE2+D7WdNPD
	fpSb8SMkSHMvD+CuttsBGGufuH33dm8=
Subject: Re: [PATCH v2 06/17] xen/cpupool: use ERR_PTR() for returning error
 cause from cpupool_create()
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-7-jgross@suse.com>
 <08f377a7-7862-0597-fe42-98851dc3db37@suse.com>
 <b130d75b-8cfa-7158-7f27-947ee50c579f@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <841e972f-4b46-e606-3344-c6ba423acf5c@suse.com>
Date: Wed, 2 Dec 2020 11:46:50 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <b130d75b-8cfa-7158-7f27-947ee50c579f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 02.12.2020 10:56, Jürgen Groß wrote:
> On 02.12.20 09:58, Jan Beulich wrote:
>> On 01.12.2020 09:21, Juergen Gross wrote:
>>> Instead of a pointer to an error variable as parameter just use
>>> ERR_PTR() to return the cause of an error in cpupool_create().
>>>
>>> This propagates to scheduler_alloc(), too.
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>> with one small question:
>>
>>> --- a/xen/common/sched/core.c
>>> +++ b/xen/common/sched/core.c
>>> @@ -3233,26 +3233,25 @@ struct scheduler *scheduler_get_default(void)
>>>       return &ops;
>>>   }
>>>   
>>> -struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr)
>>> +struct scheduler *scheduler_alloc(unsigned int sched_id)
>>>   {
>>>       int i;
>>> +    int ret;
>>
>> I guess you didn't merge this with i's declaration because of a
>> plan/hope for i to be converted to unsigned int?
> 
> The main reason is I don't like overloading variables this way.

That's not what I asked. I was wondering why the new decl wasn't

    int i, ret;

Jan
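[Editor's note: a minimal, self-contained sketch of the ERR_PTR() pattern the patch adopts, mirroring the Linux/Xen helpers — the error code is folded into the returned pointer instead of being written through an int *perr out-parameter. The alloc_object() function and MAX_ERRNO stand-in are hypothetical illustrations, not cpupool_create() itself.]

```c
#include <assert.h>
#include <errno.h>

/* Stand-ins for the ERR_PTR()/IS_ERR()/PTR_ERR() helpers: an error code
 * in -MAX_ERRNO..-1 maps to the top of the address space, which never
 * aliases a valid allocation. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
    return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* Hypothetical allocator in the style the patch describes: the cause of
 * failure travels in the returned pointer, so no out-parameter is needed. */
static int dummy_object;

static void *alloc_object(int want_failure)
{
    if (want_failure)
        return ERR_PTR(-ENOMEM);   /* caller recovers -ENOMEM via PTR_ERR() */
    return &dummy_object;
}
```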


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 10:58:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 10:58:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42616.76672 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkPpp-00050i-Eh; Wed, 02 Dec 2020 10:58:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42616.76672; Wed, 02 Dec 2020 10:58:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkPpp-00050b-BT; Wed, 02 Dec 2020 10:58:09 +0000
Received: by outflank-mailman (input) for mailman id 42616;
 Wed, 02 Dec 2020 10:58:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wvYF=FG=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kkPpn-00050W-RM
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 10:58:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 26ab1df6-224b-482f-9a79-efbb5947e83d;
 Wed, 02 Dec 2020 10:58:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DCEABACB5;
 Wed,  2 Dec 2020 10:58:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26ab1df6-224b-482f-9a79-efbb5947e83d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606906686; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=pcC5tv/RPuVxDRXJIH45H5SSFlITXbWJ3gmYhUOhN2U=;
	b=hdFiXpFvO6QKSE8EpROVk3f8jgxX58gDRNqS0GSgB96TnZlOJCYGe9UYboOT3IRbTo6b2e
	SsrjFtTw4TDHLV88UzT9gzl/T8E6pRbPTt7+k2NlZvoZ7IA0J3L/r0eSEkT24b9DJbcSAR
	gRPpk6oYlZvuH/ZoHcILb1qFfGww1xo=
Subject: Re: [PATCH v2 06/17] xen/cpupool: use ERR_PTR() for returning error
 cause from cpupool_create()
To: Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-7-jgross@suse.com>
 <08f377a7-7862-0597-fe42-98851dc3db37@suse.com>
 <b130d75b-8cfa-7158-7f27-947ee50c579f@suse.com>
 <841e972f-4b46-e606-3344-c6ba423acf5c@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <17387dc8-1a10-e822-f38c-1dcd8eac2f62@suse.com>
Date: Wed, 2 Dec 2020 11:58:05 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <841e972f-4b46-e606-3344-c6ba423acf5c@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="M7mbblpm1ScmAi3QJgUvjieHDHENK8x2l"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--M7mbblpm1ScmAi3QJgUvjieHDHENK8x2l
Content-Type: multipart/mixed; boundary="XNVEaSkvSwYRw7Ek3nZiv1Nw4QwtbhnF0";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
Message-ID: <17387dc8-1a10-e822-f38c-1dcd8eac2f62@suse.com>
Subject: Re: [PATCH v2 06/17] xen/cpupool: use ERR_PTR() for returning error
 cause from cpupool_create()
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-7-jgross@suse.com>
 <08f377a7-7862-0597-fe42-98851dc3db37@suse.com>
 <b130d75b-8cfa-7158-7f27-947ee50c579f@suse.com>
 <841e972f-4b46-e606-3344-c6ba423acf5c@suse.com>
In-Reply-To: <841e972f-4b46-e606-3344-c6ba423acf5c@suse.com>

--XNVEaSkvSwYRw7Ek3nZiv1Nw4QwtbhnF0
Content-Type: multipart/mixed;
 boundary="------------0F89D87545B999E56FFF0EE0"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------0F89D87545B999E56FFF0EE0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 02.12.20 11:46, Jan Beulich wrote:
> On 02.12.2020 10:56, Jürgen Groß wrote:
>> On 02.12.20 09:58, Jan Beulich wrote:
>>> On 01.12.2020 09:21, Juergen Gross wrote:
>>>> Instead of a pointer to an error variable as parameter just use
>>>> ERR_PTR() to return the cause of an error in cpupool_create().
>>>>
>>>> This propagates to scheduler_alloc(), too.
>>>>
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>
>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>> with one small question:
>>>
>>>> --- a/xen/common/sched/core.c
>>>> +++ b/xen/common/sched/core.c
>>>> @@ -3233,26 +3233,25 @@ struct scheduler *scheduler_get_default(void)
>>>>        return &ops;
>>>>    }
>>>>
>>>> -struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr)
>>>> +struct scheduler *scheduler_alloc(unsigned int sched_id)
>>>>    {
>>>>        int i;
>>>> +    int ret;
>>>
>>> I guess you didn't merge this with i's declaration because of a
>>> plan/hope for i to be converted to unsigned int?
>>
>> The main reason is I don't like overloading variables this way.
>
> That's not what I asked. I was wondering why the new decl wasn't
>
>      int i, ret;

Oh, then I completely misunderstood your question, sorry.

I had no special reason when writing the patch, but I think changing
i to be unsigned is a good idea. I'll do that in the next iteration.


Juergen


--------------0F89D87545B999E56FFF0EE0--

--XNVEaSkvSwYRw7Ek3nZiv1Nw4QwtbhnF0--

--M7mbblpm1ScmAi3QJgUvjieHDHENK8x2l
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/Hcz0FAwAAAAAACgkQsN6d1ii/Ey+l
QQgAlWwvuDhrJYlOCbD+llwqQqvxguB7v+H0VZSajMU4+hIQnO1B8PghBiOCgjnwoyqVv22R7pDn
rZNdhwwtt29+o9VGaghL1g8VRFXM+gMmRvjxfywtjTmJlYqegSH/Mxl/hW4HFmA/y32hmUj4sPST
I5KAzEFyQuTWYFnx1S/oNtZRTNzO7Qbbwywj165r8W8cbdJY8ZfoLdJ3F8NiiTfjYIQ8oBMIryE6
bpSYrtNVeZ7GOGzNQ6ggYuVYdI8htXHF4VOlBks3oIgnRiI9NICD/Yd+Zi+6PLMYV1+JlQ88hZXZ
qfVGjU3EqdNl2n7vxDT28Jqf2cQqdD7OsljC9Bzr5g==
=WiDG
-----END PGP SIGNATURE-----

--M7mbblpm1ScmAi3QJgUvjieHDHENK8x2l--


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 11:04:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 11:04:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42622.76683 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkPvf-00060h-4B; Wed, 02 Dec 2020 11:04:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42622.76683; Wed, 02 Dec 2020 11:04:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkPvf-00060a-1G; Wed, 02 Dec 2020 11:04:11 +0000
Received: by outflank-mailman (input) for mailman id 42622;
 Wed, 02 Dec 2020 11:04:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UQyH=FG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkPve-00060V-8g
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 11:04:10 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0c066b43-7e00-4eeb-aa77-5e132e291d94;
 Wed, 02 Dec 2020 11:04:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A77D2AB7F;
 Wed,  2 Dec 2020 11:04:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0c066b43-7e00-4eeb-aa77-5e132e291d94
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606907048; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pdUvgVfYTqrQvV5+apl5wpCLRdSrGYgq/Idh/t8BaEE=;
	b=C1S5nT7paL9iTIylL1BpvzoqcU9Neii7qkak4Q5w8YXLQFeK9WGvdpadmpPuRWnI5uqvr7
	wReoYSFZHLNUS0DpadXU939QcyVoVv1o5JPoOwU9sj5S+dk4URHzLGmP4ykUEwnk4y2LU+
	lR7jVRwA1bSWv4KeFWNjBMlGKJ6R2A4=
Subject: Re: [PATCH v9] xen/events: do some cleanups in
 evtchn_fifo_set_pending()
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201202074852.30473-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <28c90eac-82a5-d1f6-f1dd-2d469a793af6@suse.com>
Date: Wed, 2 Dec 2020 12:04:08 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201202074852.30473-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 02.12.2020 08:48, Juergen Gross wrote:
> evtchn_fifo_set_pending() can be simplified a little bit. In particular,
> the test for the existence of the fifo control block can be moved out of
> the main if clause of the function, avoiding the need to test the LINKED
> bit twice.
> 
> Suggested-by: Jan Beulich <jbeulich@suse.com>
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
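[Editor's note: a behavioural sketch of the restructuring the commit message describes — handling the "no control block yet" case up front so the main path tests the LINKED bit only once. The flag layout and set_pending() function are hypothetical simplifications, not the actual evtchn_fifo_set_pending() code.]

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the per-event word bits. */
#define LINKED  0x1u
#define PENDING 0x2u

/* Returns true when the event was newly linked (i.e. the caller would
 * queue it and notify the guest). With the control-block check hoisted
 * out of the main clause, LINKED is tested exactly once on that path. */
static bool set_pending(unsigned int *word, bool have_control_block)
{
    *word |= PENDING;

    if (!have_control_block)
        return false;              /* nothing to link onto yet */

    if (!(*word & LINKED)) {       /* single LINKED test on the main path */
        *word |= LINKED;
        return true;
    }
    return false;                  /* already linked: nothing more to do */
}
```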


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 11:12:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 11:12:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42628.76696 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkQ3i-000708-5R; Wed, 02 Dec 2020 11:12:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42628.76696; Wed, 02 Dec 2020 11:12:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkQ3i-000701-1D; Wed, 02 Dec 2020 11:12:30 +0000
Received: by outflank-mailman (input) for mailman id 42628;
 Wed, 02 Dec 2020 11:12:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=19jX=FG=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kkQ3h-0006zw-0e
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 11:12:29 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com (unknown
 [40.107.0.82]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 17817f11-7116-4496-932f-71c2005b44d9;
 Wed, 02 Dec 2020 11:12:27 +0000 (UTC)
Received: from MR2P264CA0189.FRAP264.PROD.OUTLOOK.COM (2603:10a6:501::28) by
 DBBPR08MB6314.eurprd08.prod.outlook.com (2603:10a6:10:20f::10) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3611.25; Wed, 2 Dec 2020 11:12:25 +0000
Received: from VE1EUR03FT046.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:501:0:cafe::d9) by MR2P264CA0189.outlook.office365.com
 (2603:10a6:501::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.20 via Frontend
 Transport; Wed, 2 Dec 2020 11:12:25 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT046.mail.protection.outlook.com (10.152.19.226) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3632.17 via Frontend Transport; Wed, 2 Dec 2020 11:12:25 +0000
Received: ("Tessian outbound 8b6e0bb22f1c:v71");
 Wed, 02 Dec 2020 11:12:25 +0000
Received: from e68fdd47b439.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 E9C13AF4-53EF-4C44-901E-CEFBA07D9208.1; 
 Wed, 02 Dec 2020 11:12:19 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id e68fdd47b439.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 02 Dec 2020 11:12:19 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4774.eurprd08.prod.outlook.com (2603:10a6:10:d5::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.25; Wed, 2 Dec
 2020 11:12:17 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3632.017; Wed, 2 Dec 2020
 11:12:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 17817f11-7116-4496-932f-71c2005b44d9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=55YDo+2ySZR6FB/RCtllluDKqIZlFz0MgRXRLJ2YzL4=;
 b=whtWvXjwEH5IQRUA/B48svbADpGvk/Cfohu52tN6acxVDzmrpoik0R0/YSfi5UTa35mtG0KwU8y8b9b8g9CbvjuYnVVgp/npNwjH2IC0zvG5Qzo86nBgaVfHiGIOZaR0KcndCCjdnFe6FMDhoLgPIfl3C7l7ZI6rCCfvVpd7YNA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 70a1a08dcc6b693c
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mWUOMJGvk2rr8pApZJFQhzYMgVoaFEBcFCcOh5icjy3N7LfPn8FDNWpFZRnSFN4l+j2tYAJcpbLzzN+qqKE27wMT0eWP/qJhdbnRvaMd6JeDCdP423sD5wjLl7YeG4eXJ8FSHPibJoQM1jWGDU1Y6AVoMYfG8xj2gm+Szd67b+aRyqR1fIIWxJT5vQEHAigDGH4+wDd3rwTj/13KKYVptneh2ORF1Hoi4FBQeBCl7n+wwdjMhrJi/58/iVwnKEGlnOgkpxLvcKqIeqWtsWySEDe2DkRCQ5UCuulWQ8/+u5HJ94pPmhBkjmV1DNdK8Ep0Gw2i8S7AXadb71Gk1aKawA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=55YDo+2ySZR6FB/RCtllluDKqIZlFz0MgRXRLJ2YzL4=;
 b=FWdpz46PVemrnwxUtoUBNHwQQquD8YX2Mmwwo94zZmfaTEgQoPPv33Xn5Dg6RFw+A6hKEjomLm6xCiNyV2rJP6BjhiGHJosbm9/yhhW3NhaBd5lZKFQVItVIGClvy1o1pq2MIS2pqNlsHkjkSMJoeVL45F3w9rm/WvkFr/6tTsybL6WCDHtaHOKB40N7wCV6kA6/BvfhShuWntPaxOxXVOfHCCnbYYvFJSZiwPcjoUAT6XxMTEjqR0Y+nm0RYHpTtY9SRvqSVcgLNKYqxHfk6W+3YoneAmBSXT1I0bMqZhvpPMEDuiKwHAJFoCWSrHI1s9rFOJtXFyQ3qXQ1bRQp3g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=55YDo+2ySZR6FB/RCtllluDKqIZlFz0MgRXRLJ2YzL4=;
 b=whtWvXjwEH5IQRUA/B48svbADpGvk/Cfohu52tN6acxVDzmrpoik0R0/YSfi5UTa35mtG0KwU8y8b9b8g9CbvjuYnVVgp/npNwjH2IC0zvG5Qzo86nBgaVfHiGIOZaR0KcndCCjdnFe6FMDhoLgPIfl3C7l7ZI6rCCfvVpd7YNA=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 5/7] xen/arm: Add handler for cp15 ID registers
Thread-Topic: [PATCH v2 5/7] xen/arm: Add handler for cp15 ID registers
Thread-Index:
 AQHWxyRuchSNSGGKDEyJCEQqqd6Vw6nhIUWAgAD/ugCAAAW5gIAAJWmAgAAq44CAATKvAA==
Date: Wed, 2 Dec 2020 11:12:17 +0000
Message-ID: <80D814EA-B0FC-4975-BB08-4D7DAE8C8B56@arm.com>
References: <cover.1606742184.git.bertrand.marquis@arm.com>
 <86c96cd3895bf968f94010c0f4ee8dce7f0338e8.1606742184.git.bertrand.marquis@arm.com>
 <87lfei7fj5.fsf@epam.com> <AB32AAFF-DD1D-4B13-ABC0-06F460E95E1C@arm.com>
 <87sg8p687j.fsf@epam.com> <87243486-2A58-4497-B566-5FDE4158D18E@arm.com>
 <87h7p55uwj.fsf@epam.com>
In-Reply-To: <87h7p55uwj.fsf@epam.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: ee4714c2-7f3f-4841-5d38-08d896b325f6
x-ms-traffictypediagnostic: DBBPR08MB4774:|DBBPR08MB6314:
X-Microsoft-Antispam-PRVS:
	<DBBPR08MB63148D48271C4E013918164E9DF30@DBBPR08MB6314.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <27A093D0A6887147ADC7DF6087671E90@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4774
Original-Authentication-Results: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT046.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	0f474106-ac64-46e7-5e9f-08d896b32167
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Dec 2020 11:12:25.1391
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ee4714c2-7f3f-4841-5d38-08d896b325f6
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT046.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6314

Hi Volodymyr,

> On 1 Dec 2020, at 16:54, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> wrote:
> 
> 
> Hi,
> 
> Bertrand Marquis writes:
> 
>> Hi Volodymyr,
>> 
>>> On 1 Dec 2020, at 12:07, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> wrote:
>>> 
>>> 
>>> Hi,
>>> 
>>> Bertrand Marquis writes:
>>> 
>>>> Hi,
>>>> 
>>>>> On 30 Nov 2020, at 20:31, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> wrote:
>>>>> 
>>>>> 
>>>>> Bertrand Marquis writes:
>>>>> 
>>>>>> Add support for emulation of cp15-based ID registers (on arm32 or when
>>>>>> running a 32-bit guest on arm64).
>>>>>> The handlers return the values stored in the guest_cpuinfo
>>>>>> structure.
>>>>>> In the current state the MVFR registers are not supported.
>>>>> 
>>>>> It is unclear what will happen with registers that are not covered by
>>>>> the guest_cpuinfo structure. According to the ARM ARM, it is implementation
>>>>> defined whether such accesses will be trapped. On the other hand, there are
>>>>> many registers which are RAZ. So a well-behaved guest can try to read one of
>>>>> those registers and it will get an undefined instruction exception, instead
>>>>> of just reading all zeroes.
>>>> 
>>>> This is true in the current state of this patch, but it is solved by the next
>>>> patch, which adds proper handling of those registers (adding CP10 exception
>>>> support), at least for the MVFR ones.
>>>> 
>>>> From the ARM ARM point of view, I did handle all registers listed, I think.
>>>> If you think some are missing, please point me to them, as I do not
>>>> completely understand what the "registers not covered" are unless
>>>> you mean the MVFR ones.
>>> 
>>> Well, I may be wrong for the aarch32 case, but for aarch64 there are a number
>>> of reserved registers in the ID range. Those registers should read as
>>> zero. You can find them in the section "C5.1.6 op0==0b11, Moves to and
>>> from non-debug System registers and Special-purpose registers" of ARM
>>> DDI 0487B.a. Check out "Table C5-6 System instruction encodings for
>>> non-Debug System register accesses".
>> 
>> The point of the series is to handle all registers trapped due to the TID3 bit in HCR_EL2.
>> 
>> And I think I handled all of them, but I might be wrong.
>> 
>> Handling all registers for op0==0b11 will cover a lot more things.
>> This can be done of course, but it was not the point of this series.
>> 
>> The listing in the HCR_EL2 documentation is pretty complete, and if I missed any register
>> there please tell me, but I do not understand from the documentation that all registers
>> with op0 3 are trapped by TID3.
> 
> My concern is that the same code may observe different effects when
> running in baremetal mode and as a VM.
> 
> For example, we are trying to run a newer version of a kernel that
> supports some hypothetical ARMv8.9. And it tries to read a new ID
> register which is absent in ARMv8.2. These are the possible cases:
> 
> 0. It runs as baremetal code on a compatible architecture. So it just
> accesses this register and all is fine.
> 
> 1. It runs as baremetal code on an older ARMv8 architecture. The current
> reference manual states that those registers should read as zero, so
> all is fine as well.
> 
> 2. It runs as a VM on an older architecture. It is IMPLEMENTATION DEFINED
> whether HCR.TID3 will cause traps on accesses to unassigned registers:
> 
> 2a. The platform does not cause traps and software reads zeros directly from
> the real registers. This is a good outcome.
> 
> 2b. The platform causes a trap, and your code injects the undefined
> instruction exception. This is the case that bothers me. Well-written code
> that tries to be compatible with different versions of the architecture
> will fail there.
> 
> 3. It runs as a VM on a newer architecture. I can only speculate there,
> but I think that the list of registers trapped by HCR.TID3 will be extended
> when new features are added. At least, this does not contradict the
> current IMPLEMENTATION DEFINED constraint. In this case, the hypervisor will
> inject an exception when the guest tries to access a valid register.
> 
> 
> So, in my opinion and to be compatible with the reference manual, we
> should allow guests to read "Reserved, RAZ" registers.

Thanks for the very detailed explanation, I now better understand what you
mean here.

I will try to check if we could return RAZ for "other" op0=3 registers and what
should be done on cp15 registers to have something equivalent.

Regards
Bertrand

> 
> 
> 
>> Regards
>> Bertrand
>> 
>> 
>>> 
>>> 
>>>>> 
>>>>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>>>> ---
>>>>>> Changes in V2: rebase
>>>>>> ---
>>>>>> xen/arch/arm/vcpreg.c | 35 +++++++++++++++++++++++++++++++++++
>>>>>> 1 file changed, 35 insertions(+)
>>>>>> 
>>>>>> diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c
>>>>>> index cdc91cdf5b..d0c6406f34 100644
>>>>>> --- a/xen/arch/arm/vcpreg.c
>>>>>> +++ b/xen/arch/arm/vcpreg.c
>>>>>> @@ -155,6 +155,14 @@ TVM_REG32(CONTEXTIDR, CONTEXTIDR_EL1)
>>>>>>       break;                                                        \
>>>>>>   }
>>>>>> 
>>>>>> +/* Macro to easily generate a case for ID co-processor emulation */
>>>>>> +#define GENERATE_TID3_INFO(reg,field,offset)                      \
>>>>>> +    case HSR_CPREG32(reg):                                        \
>>>>>> +    {                                                             \
>>>>>> +        return handle_ro_read_val(regs, regidx, cp32.read, hsr,   \
>>>>>> +                          1, guest_cpuinfo.field.bits[offset]);   \
>>>>>> +    }
>>>>>> +
>>>>>> void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
>>>>>> {
>>>>>>   const struct hsr_cp32 cp32 = hsr.cp32;
>>>>>> @@ -286,6 +294,33 @@ void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
>>>>>>        */
>>>>>>       return handle_raz_wi(regs, regidx, cp32.read, hsr, 1);
>>>>>> 
>>>>>> +    /*
>>>>>> +     * HCR_EL2.TID3
>>>>>> +     *
>>>>>> +     * This traps most identification registers used by a guest
>>>>>> +     * to identify the processor features
>>>>>> +     */
>>>>>> +    GENERATE_TID3_INFO(ID_PFR0, pfr32, 0)
>>>>>> +    GENERATE_TID3_INFO(ID_PFR1, pfr32, 1)
>>>>>> +    GENERATE_TID3_INFO(ID_PFR2, pfr32, 2)
>>>>>> +    GENERATE_TID3_INFO(ID_DFR0, dbg32, 0)
>>>>>> +    GENERATE_TID3_INFO(ID_DFR1, dbg32, 1)
>>>>>> +    GENERATE_TID3_INFO(ID_AFR0, aux32, 0)
>>>>>> +    GENERATE_TID3_INFO(ID_MMFR0, mm32, 0)
>>>>>> +    GENERATE_TID3_INFO(ID_MMFR1, mm32, 1)
>>>>>> +    GENERATE_TID3_INFO(ID_MMFR2, mm32, 2)
>>>>>> +    GENERATE_TID3_INFO(ID_MMFR3, mm32, 3)
>>>>>> +    GENERATE_TID3_INFO(ID_MMFR4, mm32, 4)
>>>>>> +    GENERATE_TID3_INFO(ID_MMFR5, mm32, 5)
>>>>>> +    GENERATE_TID3_INFO(ID_ISAR0, isa32, 0)
>>>>>> +    GENERATE_TID3_INFO(ID_ISAR1, isa32, 1)
>>>>>> +    GENERATE_TID3_INFO(ID_ISAR2, isa32, 2)
>>>>>> +    GENERATE_TID3_INFO(ID_ISAR3, isa32, 3)
>>>>>> +    GENERATE_TID3_INFO(ID_ISAR4, isa32, 4)
>>>>>> +    GENERATE_TID3_INFO(ID_ISAR5, isa32, 5)
>>>>>> +    GENERATE_TID3_INFO(ID_ISAR6, isa32, 6)
>>>>>> +    /* MVFR registers are in cp10, not cp15 */
>>>>>> +
>>>>>>   /*
>>>>>>    * HCR_EL2.TIDCP
>>>>>>    *
>>>>> 
>>>>> 
>>>>> -- 
>>>>> Volodymyr Babchuk at EPAM
>>> 
>>> 
>>> -- 
>>> Volodymyr Babchuk at EPAM
> 
> 
> -- 
> Volodymyr Babchuk at EPAM
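[Editor's note: the "reserved ID registers read as zero" policy discussed in this message can be sketched as a standalone model. The code below is a hypothetical, simplified illustration: the `guest_id_regs` struct and the (crm, op2) slot layout are invented for the example and do not reflect Xen's real cp15 decode or `guest_cpuinfo` layout. It shows only the dispatch shape: known encodings return stored values, while any other encoding in the ID block is treated as "Reserved, RAZ" and returns zero instead of faulting.]

```c
#include <stdint.h>

/* Invented container for the ID values a hypervisor would expose to a
 * guest; a simplified stand-in for Xen's guest_cpuinfo structure. */
struct guest_id_regs {
    uint32_t pfr[2];   /* ID_PFR0..ID_PFR1 (simplified) */
    uint32_t isar[6];  /* ID_ISAR0..ID_ISAR5 (simplified) */
};

/*
 * Emulate a read of a cp15 ID-group register identified by (crm, op2).
 * Known encodings return the stored per-guest value; any other encoding
 * within the ID block reads as zero (RAZ) rather than raising an
 * undefined-instruction exception, matching what the guest would observe
 * on bare metal.
 */
uint32_t read_id_reg(const struct guest_id_regs *id,
                     unsigned int crm, unsigned int op2)
{
    switch ( crm )
    {
    case 1: /* processor feature registers (simplified slot) */
        if ( op2 < 2 )
            return id->pfr[op2];
        break;
    case 2: /* ISA feature registers (simplified slot) */
        if ( op2 < 6 )
            return id->isar[op2];
        break;
    }

    /* Reserved/unassigned encoding within the ID block: RAZ. */
    return 0;
}
```

The interesting branch is the fall-through: instead of a default that injects a fault (case 2b in the discussion), every unrecognised ID-block encoding quietly reads as zero, so feature-probing code written for a newer architecture revision degrades gracefully.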


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 11:19:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 11:19:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42634.76708 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkQAN-0007FA-0H; Wed, 02 Dec 2020 11:19:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42634.76708; Wed, 02 Dec 2020 11:19:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkQAM-0007F3-RA; Wed, 02 Dec 2020 11:19:22 +0000
Received: by outflank-mailman (input) for mailman id 42634;
 Wed, 02 Dec 2020 11:19:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkQAL-0007Ev-Fl; Wed, 02 Dec 2020 11:19:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkQAL-0007M2-7M; Wed, 02 Dec 2020 11:19:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkQAK-0007Dr-Vi; Wed, 02 Dec 2020 11:19:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kkQAK-0003Fm-R2; Wed, 02 Dec 2020 11:19:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=y5LBgJhvX84kVi1rkRPAuwvIy3X2te31MOUCvGylARw=; b=19hukAw30rA/dIAAla6VJ2WafZ
	OP96Xja1PZttw2f9Ij4+Qp6UmZoGABxh/JXtiDXAu3CVqmKXLO0mHr0osOXEUH9gyJBncKzJ9Z/iz
	3uKXxdEVyAX4YtmeVP9qQwF+q4VuQtAPlakxfuzHmuun14YnEoEB7N7pcweYb/JkzlFc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157137-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 157137: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.11-testing:test-armhf-armhf-xl-vhd:guest-start:fail:heisenbug
    xen-4.11-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=41a822c3926350f26917d747c8dfed1c44a2cf42
X-Osstest-Versions-That:
    xen=62aed78b8e0cc6dcd99b80a528650ad0619b3909
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 02 Dec 2020 11:19:20 +0000

flight 157137 xen-4.11-testing real [real]
flight 157156 xen-4.11-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157137/
http://logs.test-lab.xenproject.org/osstest/logs/157156/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-vhd      13 guest-start         fail pass in 157156-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-vhd     14 migrate-support-check fail in 157156 never pass
 test-armhf-armhf-xl-vhd 15 saverestore-support-check fail in 157156 never pass
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 156986
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 156986
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156986
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156986
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156986
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156986
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156986
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156986
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156986
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156986
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156986
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156986
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156986
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  41a822c3926350f26917d747c8dfed1c44a2cf42
baseline version:
 xen                  62aed78b8e0cc6dcd99b80a528650ad0619b3909

Last test of basis   156986  2020-11-24 13:36:13 Z    7 days
Testing same since   157137  2020-12-01 16:36:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   62aed78b8e..41a822c392  41a822c3926350f26917d747c8dfed1c44a2cf42 -> stable-4.11


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 11:19:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 11:19:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42638.76723 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkQAv-0007La-Ec; Wed, 02 Dec 2020 11:19:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42638.76723; Wed, 02 Dec 2020 11:19:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkQAv-0007LT-Aq; Wed, 02 Dec 2020 11:19:57 +0000
Received: by outflank-mailman (input) for mailman id 42638;
 Wed, 02 Dec 2020 11:19:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TBw1=FG=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kkQAu-0007LL-JM
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 11:19:56 +0000
Received: from mail-wr1-x444.google.com (unknown [2a00:1450:4864:20::444])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d196e1af-7f25-4c74-996f-f9f4d72e348b;
 Wed, 02 Dec 2020 11:19:55 +0000 (UTC)
Received: by mail-wr1-x444.google.com with SMTP id l1so3385262wrb.9
 for <xen-devel@lists.xenproject.org>; Wed, 02 Dec 2020 03:19:55 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id 20sm1673260wmk.16.2020.12.02.03.19.53
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 02 Dec 2020 03:19:54 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d196e1af-7f25-4c74-996f-f9f4d72e348b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=LFsveCACiVP0miso5kdur+j/TxB89o+KtzxvHkrVjwA=;
        b=LysWxXQsBAy1p0nLgpoK1v3sUqgaE/eIXA7h/4PgAT0M3xs8H0qA3j/g18taWiyNv4
         AaY2zzMfiHWGIkQ3h0AZE6yTanEL2G0wHQY9kXSkyQMFznFILta4Ovveaxr4EkJWc6Vv
         2oMmvYno29dJCNyLEVS/mr4epNPSaOgBQn2RRNp46+Nj/qr3cej5H6QWUgDQtjADwfWC
         xrl3Xf3xMUG7vSjFoXeWD630DF7Hnj5+PUv+iILUE2A3l0d/OEYmnZahIh43BpW5oKYn
         WKJPujGm7nVDHBC/cCZrvHg5uM6ctabQZF/g4CljK3Dn+eCzQqPX6yK8F3MmYJyMx8U1
         /oQA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=LFsveCACiVP0miso5kdur+j/TxB89o+KtzxvHkrVjwA=;
        b=rRS4ZeMBknW4Z6Dh5XiFm3fkZ8wSwSK+ELUH0V2mzaw+mj+LxQ6thgHwWjUDhXEWdG
         RfqlJC/CRnvY7Lo4De5iTPmlWGMcDX4HXmUAywri4dQI+AOKMvCbAnut3gJ14BjIW3Im
         WQZ6pIBBgdI+2LkEsffDwfo788ivocg0F3xfQ0X6/L8zqMlEGzxICjDINc3t1WMojO1L
         dfRjQ5PzG2gmmQ0HxxK1GtM/ryfQoMb9DMP+9j+Qn+yS/vuzUSulmn+H4x7iTLvbblEu
         n44U3Tey0WmgJqF6FHPHyndi1YhxwG4S9CSyL0rfCip8IYZBS919RB3UoIGA389ONNXA
         QQww==
X-Gm-Message-State: AOAM531WTSyOpfvf6CMILa0LJYGpM5uYLcfA2UIACP/0+WT9p6hs2e2o
	EfcrhxAqVqMXv+6QVG9p3nnWXRoDTOsg2A==
X-Google-Smtp-Source: ABdhPJxLKIOP8OG9AKKPR5rcUd8O0n7IeppMfMzuI3aO3qANsD6+3WxRdKonHXusJm1PvuSx6NRWoQ==
X-Received: by 2002:adf:f349:: with SMTP id e9mr2884358wrp.110.1606907994666;
        Wed, 02 Dec 2020 03:19:54 -0800 (PST)
Subject: Re: [PATCH V3 01/23] x86/ioreq: Prepare IOREQ feature for making it
 common
To: Jan Beulich <jbeulich@suse.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-2-git-send-email-olekstysh@gmail.com>
 <87eek9u6tj.fsf@linaro.org> <cd2e064e-896b-3a28-5d37-93ddaba1c13e@gmail.com>
 <802c49d5-00bb-9e10-70d7-2629913b08c9@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <7906614f-70d4-5c39-2207-95126dea7ffd@gmail.com>
Date: Wed, 2 Dec 2020 13:19:48 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <802c49d5-00bb-9e10-70d7-2629913b08c9@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 02.12.20 10:00, Jan Beulich wrote:

Hi Jan

> On 01.12.2020 19:53, Oleksandr wrote:
>> On 01.12.20 13:03, Alex Bennée wrote:
>>> Oleksandr Tyshchenko <olekstysh@gmail.com> writes:
>>>> @@ -1112,19 +1155,11 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
>>>>        if ( s->emulator != current->domain )
>>>>            goto out;
>>>>    
>>>> -    rc = p2m_set_ioreq_server(d, flags, s);
>>>> +    rc = arch_ioreq_server_map_mem_type(d, s, flags);
>>>>    
>>>>     out:
>>>>        spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>>>>    
>>>> -    if ( rc == 0 && flags == 0 )
>>>> -    {
>>>> -        struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>>> -
>>>> -        if ( read_atomic(&p2m->ioreq.entry_count) )
>>>> -            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
>>>> -    }
>>>> -
>>> It should be noted that p2m holds its own lock but I'm unfamiliar with
>>> Xen's locking architecture. Is there anything that prevents another vCPU
>>> accessing a page that is also being used by ioreq on the first vCPU?
>> I am not sure that I would be able to provide reasonable explanations here.
>> All I understand is that p2m_change_entry_type_global() is x86
>> specific (we don't have the p2m_ioreq_server concept on Arm) and should
>> remain as such (not exposed to the common code).
>> IIRC, I raised a question during the V2 review whether we could hold the ioreq
>> server lock around the call to p2m_change_entry_type_global(), and didn't
>> get objections.
> Not getting objections doesn't mean much. Personally I don't recall
> such a question, but this doesn't mean much.

Sorry for not being clear here. The discussion happened at [1], when I was 
asked to move hvm_map_mem_type_to_ioreq_server() to the common code.


>   The important thing
> here is that you properly justify this change in the description (I
> didn't look at this version of the patch as a whole yet, so quite
> likely you actually do). This is because you need to guarantee that
> you don't introduce any lock order violations by this.
Yes, almost all changes in this patch are purely mechanical and leave 
things as they are.
The change with p2m_change_entry_type_global() requires additional 
attention, so I decided to call it out explicitly in the description 
and to add a comment in the code noting that it is now called with the 
ioreq_server lock held.


[1] 
https://patchwork.kernel.org/project/xen-devel/patch/1602780274-29141-2-git-send-email-olekstysh@gmail.com/#23734839

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Dec 02 11:57:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 11:57:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42660.76754 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkQlX-0002jL-Nc; Wed, 02 Dec 2020 11:57:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42660.76754; Wed, 02 Dec 2020 11:57:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkQlX-0002jE-KX; Wed, 02 Dec 2020 11:57:47 +0000
Received: by outflank-mailman (input) for mailman id 42660;
 Wed, 02 Dec 2020 11:57:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=19jX=FG=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kkQlW-0002j9-IX
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 11:57:46 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.1.83]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e89901b7-d9ee-4831-9a8d-6a21ada420d9;
 Wed, 02 Dec 2020 11:57:45 +0000 (UTC)
Received: from AS8PR05CA0030.eurprd05.prod.outlook.com (2603:10a6:20b:311::35)
 by AM6PR08MB4502.eurprd08.prod.outlook.com (2603:10a6:20b:b9::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.18; Wed, 2 Dec
 2020 11:57:42 +0000
Received: from VE1EUR03FT062.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:311:cafe::5c) by AS8PR05CA0030.outlook.office365.com
 (2603:10a6:20b:311::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.18 via Frontend
 Transport; Wed, 2 Dec 2020 11:57:42 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT062.mail.protection.outlook.com (10.152.18.252) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3632.17 via Frontend Transport; Wed, 2 Dec 2020 11:57:41 +0000
Received: ("Tessian outbound 665ba7fbdfd9:v71");
 Wed, 02 Dec 2020 11:57:41 +0000
Received: from 8d3fbc22d040.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 54AFD028-4889-4152-9906-0E0342715DD8.1; 
 Wed, 02 Dec 2020 11:57:23 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 8d3fbc22d040.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 02 Dec 2020 11:57:23 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR08MB2696.eurprd08.prod.outlook.com (2603:10a6:6:25::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.31; Wed, 2 Dec
 2020 11:57:22 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3632.017; Wed, 2 Dec 2020
 11:57:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e89901b7-d9ee-4831-9a8d-6a21ada420d9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=BQbCXEmZIUtHqzG4a5vV0+9xRRlpqSIBpRofn89D9ww=;
 b=5d1AyhcY9qRQnkwRg5QuE9ZsHSX1KfUgu0OnUVPFgBIFEAjdHQT8tO2z1o+HNN9d3ci0gycIaXTx/GjZjCT0s50EJvss2Ag4Z5FJAr7NfnQ3p4zi9Re1K0b1NM2XF1F1sbHCsDtdgp5Xl0m3y7Giapjkw4l3qibEWG2JtuRHHj8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: bfbea0626e59e7cc
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Xj36y0URqOb1kZMbcPrsmhNm26Xvd9UpgxqK7hKYIWlOYmDnk+OTx3vwvbImBz9s0lbmasgqQo+eMVq0bpHbp+S8W7spia+sslpjOTaY/hNztMJusUTKkjUsYaOYbit5s6MI3B2l3iuGxRoLN5DeogvVKyUvcich5v04jJ4L1StARQxYby91FRqPCTG2QGamBCXUUjcHyH7fPLIzj01HsuAIFLWyX7mmhZLy7g1p1iCzSNe8R4dhCQxQrywwC+jr+XZKXh5k0HlvvsH4n8EXnwgC9Ztk/0I+Lu/t+67KoSTbFwduu21/KJ7B+THuMPx+/meQmFF72Wxosrk8/OSYew==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=BQbCXEmZIUtHqzG4a5vV0+9xRRlpqSIBpRofn89D9ww=;
 b=kd3Dig5TZOkQn1m+qmZMYpQ4NFD0ai5kB81Q7evrsuK//vfV3rQuWcHquxe/NLJ/xdZohRuW2Q3lZ29DhGNDDKGo1WtohiQsw84oUCVE7BGWZt7eRcP1/9v8LOzHyCpdZxo5tqzTB9g49idhOp516aplozZFE4GhYS5x3ULOKkLDz1mtvWS8NOVgjUjSTlhez3YY3l2JWrECVFRu9Kddluf1+YK9ujBQMiOpK3EicQtSbxP3mqBN1XhO2BlZrqL9W/gr+1VYsnEenPIP11bbq2V6Ajd4iiHwHiIBr1V+t5xu4IivcUlOfu+prYYAL4S9+BLqyvno8b8GSTt2XPiUEA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=BQbCXEmZIUtHqzG4a5vV0+9xRRlpqSIBpRofn89D9ww=;
 b=5d1AyhcY9qRQnkwRg5QuE9ZsHSX1KfUgu0OnUVPFgBIFEAjdHQT8tO2z1o+HNN9d3ci0gycIaXTx/GjZjCT0s50EJvss2Ag4Z5FJAr7NfnQ3p4zi9Re1K0b1NM2XF1F1sbHCsDtdgp5Xl0m3y7Giapjkw4l3qibEWG2JtuRHHj8=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 5/7] xen/arm: Add handler for cp15 ID registers
Thread-Topic: [PATCH v2 5/7] xen/arm: Add handler for cp15 ID registers
Thread-Index:
 AQHWxyRuchSNSGGKDEyJCEQqqd6Vw6nhIUWAgAD/ugCAAAW5gIAAJWmAgAAq44CAATKvAIAADJiA
Date: Wed, 2 Dec 2020 11:57:22 +0000
Message-ID: <2F7BDAAC-4253-4D92-A995-12AA1361AE35@arm.com>
References: <cover.1606742184.git.bertrand.marquis@arm.com>
 <86c96cd3895bf968f94010c0f4ee8dce7f0338e8.1606742184.git.bertrand.marquis@arm.com>
 <87lfei7fj5.fsf@epam.com> <AB32AAFF-DD1D-4B13-ABC0-06F460E95E1C@arm.com>
 <87sg8p687j.fsf@epam.com> <87243486-2A58-4497-B566-5FDE4158D18E@arm.com>
 <87h7p55uwj.fsf@epam.com> <80D814EA-B0FC-4975-BB08-4D7DAE8C8B56@arm.com>
In-Reply-To: <80D814EA-B0FC-4975-BB08-4D7DAE8C8B56@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 0bec37d8-e708-4bf6-b7d6-08d896b97930
x-ms-traffictypediagnostic: DB6PR08MB2696:|AM6PR08MB4502:
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB45024D0924CB2E2F11CF616C9DF30@AM6PR08MB4502.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 upMZMQEEDzGTDaOl99+N3JSbeQERdq2zYrtHA+0mwBSJBi/iEUlreXGGJhgAkt8Al6jia7bXnOaAZkafHL1GiRoT2Ka3ntc7bgtP0+gcUJ008tlPKe9u/Sn1VzDPZ38oXCJwfFr0e7kABi16/pDU/vByeFCYLRyqpES0rFuxA89cTUd3knpC4nGC2v73cfNHQKPi29Jr3zoRlAX9fCEeJ4FUCFFaxNSTrskP8m9KxaXcyEPcSqLb62KiYmgd6dQ/5Vmms3FXv0QF9dvUxfdPdLTuKS8sxQ9DdEi/DcZjiKQS2q/kTQLdvcGFUNxYRYkP8AIo+I0RBDTRHQ+UlMfXugKyD8jymBfFNtGKcW0tpVvLHjDawR9raBJyaQ6D+pEB
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(136003)(366004)(376002)(39860400002)(346002)(186003)(6506007)(86362001)(33656002)(8676002)(26005)(83380400001)(6486002)(4326008)(8936002)(6512007)(5660300002)(53546011)(71200400001)(54906003)(76116006)(36756003)(6916009)(91956017)(66556008)(66446008)(2616005)(64756008)(2906002)(66946007)(316002)(66476007)(478600001)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?utf-8?B?NXBNdGdXZEJ3MkI1WjFQMkU1K0FwbUd5N1BiQVkrQUVwVTF3ZVZnR0Z2VWZ1?=
 =?utf-8?B?a1FueGcxQ1Q1YUZIS1ZpbE1nTHUwT3dDekFjTENWYllDZ0xhV3ZuYUlxN2hK?=
 =?utf-8?B?ZFZMVEJhMnFyUnBDUWVXdExlQ1NFSE1iL0pzcmZBWXBTZHBKN1lCanpGcjVD?=
 =?utf-8?B?YzRZdFNwOExrL1BVaW9zNkdzcnY3dkFGZFlnZUMyVDZjcDBkOTFZK2hIeHhN?=
 =?utf-8?B?OWtpUGJ1SktwdCt4OUhXbFZ3cjA4QUpqUzVOOE4rUUNtWDVJNmVHRTF2VVV6?=
 =?utf-8?B?aEg5N1E2TjhCU0tvUk82V05NL0UrNzErQ2lXM1pWL3hHME9jMUZvU2w2dk5r?=
 =?utf-8?B?UmIveE80MWJqWFlLcThEaC9TTlFZRW1VY01qdWFtaWwzekJDU0g1emk4Qk15?=
 =?utf-8?B?dnIrekJZQ2xUYTJpVXlHK0pycS9HUGdPdHpEbng1THd3Z2tVd1lJZkFYTy9Z?=
 =?utf-8?B?S05FTjAyNjhKN2Y2ZDFnWVRjaktmcWQ3U2R0WWQrMUs2MmF4VlU5R1VTUVkz?=
 =?utf-8?B?Z1Y4ZGRzd043ZzlwOHN5Z2ppYXpIVGpNWGM1NTlDT2FGSnJ5cHF4YmFGOXJB?=
 =?utf-8?B?dnlvckNKYW1lZ3R5dzBpQVdzWGtsSU5QYVZCMEo4SkRQaUUrVTdXbXFOZjVH?=
 =?utf-8?B?QlFsS2x2cHNneG1icXk2ZlI0UERUekN1MERsY3dwcnFidkI4SmZjV2VVVGxn?=
 =?utf-8?B?WCs1VzNyQUNvT3VlMWxKV20yTnFmK0ZpcjNhOXpJQkpTbXJkMjV3NnpMbjha?=
 =?utf-8?B?eHY2N01PQk5FK3NZNWdrT01VeHhsNVhSaUY3TXlrWEF4N09Rdmh2dmk2bWVN?=
 =?utf-8?B?OXFld0dRS1FjbUdleXZUNkFoYXBQV0pYWS82aHJoZXlsY293YWJWUzlDYjhI?=
 =?utf-8?B?eWlqb2I5UmdQK1hOZldDNHJKRUZiWXJJSW5zYlRBS21FNVAwRXVYamIyRFRS?=
 =?utf-8?B?ZTc2ZUlSMFpVdTJ3K0QyRTNWbHVWSVdYZC9mRFFkTS9yUlIrTHFuQ1g4SmZH?=
 =?utf-8?B?TFcvTE45Q1BIb1NKUTB3U3hGdDY2azg5WkNyaEpGVEhISnlpQkdNZUlUU3Jl?=
 =?utf-8?B?dllOT2FxOVVxeTRlbTRBaFE2MFE0U0I5clFRanBkUWN5YTdUcXlQVUlZTDY0?=
 =?utf-8?B?Q0gxaXZSVWw0WmxIb0dRRzJmZVk2SW9EcklJQ3FRSmxFSUgrT2FuNmVEcVRz?=
 =?utf-8?B?cTBEbWprdGY5TFgwajFVTUxaWmY0SmdEYkV6VDZndkxMb0xVWU9OREhkcnAw?=
 =?utf-8?B?dG5iK2craWw5U0RMZWJueFhXVnJXbzdXMTVsTEw2dlQxbGJOdU9aak1ETThk?=
 =?utf-8?Q?uotGEYZDt2XNhAaIwImi+7rMQWrUH7dbtN?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <2B72DC16E1D3B7449AE6DCB04C8E7CC3@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR08MB2696
Original-Authentication-Results: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT062.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	c54f69ab-b558-4306-56d0-08d896b96d97
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	PGzGYxtJoXe5ct77wz6LH/kCUtsz7FM/deX9xE+Ycz1v9MVhQZFlbKx8bRQICL19kjhRMNF7j0/of4Uu/YrUBodfcjutRCgxTEsB5M84slpWHng0anBoNlzOvJP38tm6WEtehqVl7pPkHasymRH8Pji0K+AB5TPR/53X7GtJAmSdi3Vfg5AAs8xVS3ivTo2oo9WuJhmGWJ2zfvnRpqhv30mgKS7cXl4kaRiurKMNyU/wsAqOrMqxQLDD5LQwQorEaenO0JxqRaIFLXTy/6wfomgDDwOgorJWARMeCinimX+NDTYkC6IlBvxtX9k8YJBqu+3Oj6kfGxUe6FEaHfusbkKBiG9tOdKRN7BfNZYiuOkz37gRlhtt7uz7CF610wITNC1I9cif56oVT050B3+Q1Q==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39860400002)(346002)(376002)(396003)(136003)(46966005)(2906002)(356005)(26005)(81166007)(47076004)(53546011)(6506007)(336012)(2616005)(82740400003)(86362001)(186003)(6486002)(70586007)(33656002)(316002)(83380400001)(36756003)(4326008)(82310400003)(8936002)(54906003)(6862004)(8676002)(5660300002)(70206006)(478600001)(6512007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Dec 2020 11:57:41.7558
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 0bec37d8-e708-4bf6-b7d6-08d896b97930
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT062.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4502

SGksDQoNCj4gT24gMiBEZWMgMjAyMCwgYXQgMTE6MTIsIEJlcnRyYW5kIE1hcnF1aXMgPGJlcnRy
YW5kLm1hcnF1aXNAYXJtLmNvbT4gd3JvdGU6DQo+IA0KPiBISSBWb2xvZHlteXIsDQo+IA0KPj4g
T24gMSBEZWMgMjAyMCwgYXQgMTY6NTQsIFZvbG9keW15ciBCYWJjaHVrIDxWb2xvZHlteXJfQmFi
Y2h1a0BlcGFtLmNvbT4gd3JvdGU6DQo+PiANCj4+IA0KPj4gSGksDQo+PiANCj4+IEJlcnRyYW5k
IE1hcnF1aXMgd3JpdGVzOg0KPj4gDQo+Pj4gSGkgVm9sb2R5bXlyLA0KPj4+IA0KPj4+PiBPbiAx
IERlYyAyMDIwLCBhdCAxMjowNywgVm9sb2R5bXlyIEJhYmNodWsgPFZvbG9keW15cl9CYWJjaHVr
QGVwYW0uY29tPiB3cm90ZToNCj4+Pj4gDQo+Pj4+IA0KPj4+PiBIaSwNCj4+Pj4gDQo+Pj4+IEJl
cnRyYW5kIE1hcnF1aXMgd3JpdGVzOg0KPj4+PiANCj4+Pj4+IEhpLA0KPj4+Pj4gDQo+Pj4+Pj4g
T24gMzAgTm92IDIwMjAsIGF0IDIwOjMxLCBWb2xvZHlteXIgQmFiY2h1ayA8Vm9sb2R5bXlyX0Jh
YmNodWtAZXBhbS5jb20+IHdyb3RlOg0KPj4+Pj4+IA0KPj4+Pj4+IA0KPj4+Pj4+IEJlcnRyYW5k
IE1hcnF1aXMgd3JpdGVzOg0KPj4+Pj4+IA0KPj4+Pj4+PiBBZGQgc3VwcG9ydCBmb3IgZW11bGF0
aW9uIG9mIGNwMTUgYmFzZWQgSUQgcmVnaXN0ZXJzIChvbiBhcm0zMiBvciB3aGVuDQo+Pj4+Pj4+
IHJ1bm5pbmcgYSAzMmJpdCBndWVzdCBvbiBhcm02NCkuDQo+Pj4+Pj4+IFRoZSBoYW5kbGVycyBh
cmUgcmV0dXJuaW5nIHRoZSB2YWx1ZXMgc3RvcmVkIGluIHRoZSBndWVzdF9jcHVpbmZvDQo+Pj4+
Pj4+IHN0cnVjdHVyZS4NCj4+Pj4+Pj4gSW4gdGhlIGN1cnJlbnQgc3RhdHVzIHRoZSBNVkZSIHJl
Z2lzdGVycyBhcmUgbm8gc3VwcG9ydGVkLg0KPj4+Pj4+IA0KPj4+Pj4+IEl0IGlzIHVuY2xlYXIg
d2hhdCB3aWxsIGhhcHBlbiB3aXRoIHJlZ2lzdGVycyB0aGF0IGFyZSBub3QgY292ZXJlZCBieQ0K
Pj4+Pj4+IGd1ZXN0X2NwdWluZm8gc3RydWN0dXJlLiBBY2NvcmRpbmcgdG8gQVJNIEFSTSwgaXQg
aXMgaW1wbGVtZW50YXRpb24NCj4+Pj4+PiBkZWZpbmVkIGlmIHN1Y2ggYWNjZXNzZXMgd2lsbCBi
ZSB0cmFwcGVkLiBPbiBvdGhlciBoYW5kLCB0aGVyZSBhcmUgbWFueQ0KPj4+Pj4+IHJlZ2lzdGVy
cyB3aGljaCBhcmUgUkFaLiBTbywgZ29vZCBiZWhhdmluZyBndWVzdCBjYW4gdHJ5IHRvIHJlYWQg
b25lIG9mDQo+Pj4+Pj4gdGhhdCByZWdpc3RlcnMgYW5kIGl0IHdpbGwgZ2V0IHVuZGVmaW5lZCBp
bnN0cnVjdGlvbiBleGNlcHRpb24sIGluc3RlYWQNCj4+Pj4+PiBvZiBqdXN0IHJlYWRpbmcgYWxs
IHplcm9lcy4NCj4+Pj4+IA0KPj4+Pj4gVGhpcyBpcyB0cnVlIGluIHRoZSBzdGF0dXMgb2YgdGhp
cyBwYXRjaCBidXQgdGhpcyBpcyBzb2x2ZWQgYnkgdGhlIG5leHQgcGF0Y2gNCj4+Pj4+IHdoaWNo
IGlzIGFkZGluZyBwcm9wZXIgaGFuZGxpbmcgb2YgdGhvc2UgcmVnaXN0ZXJzIChhZGQgQ1AxMCBl
eGNlcHRpb24NCj4+Pj4+IHN1cHBvcnQpLCBhdCBsZWFzdCBmb3IgTVZGUiBvbmVzLg0KPj4+Pj4g
DQo+Pj4+PiBGcm9tIEFSTSBBUk0gcG9pbnQgb2YgdmlldywgSSBkaWQgaGFuZGxlIGFsbCByZWdp
c3RlcnMgbGlzdGVkIEkgdGhpbmsuDQo+Pj4+PiBJZiB5b3UgdGhpbmsgc29tZSBhcmUgbWlzc2lu
ZyBwbGVhc2UgcG9pbnQgbWUgdG8gdGhlbSBhcyBPIGRvIG5vdA0KPj4+Pj4gY29tcGxldGVseSB1
bmRlcnN0YW5kIHdoYXQgYXJlIHRoZSDigJxyZWdpc3RlcnMgbm90IGNvdmVyZWTigJ0gdW5sZXNz
DQo+Pj4+PiB5b3UgbWVhbiB0aGUgTVZGUiBvbmVzLg0KPj4+PiANCj4+Pj4gV2VsbCwgSSBtYXkg
YmUgd3JvbmcgZm9yIGFhcmNoMzIgY2FzZSwgYnV0IGZvciBhYXJjaDY0LCB0aGVyZSBhcmUgbnVt
YmVyDQo+Pj4+IG9mIHJlc2VydmVkIHJlZ2lzdGVycyBpbiBJRHMgcmFuZ2UuIFRob3NlIHJlZ2lz
dGVycyBzaG91bGQgcmVhZCBhcw0KPj4+PiB6ZXJvLiBZb3UgY2FuIGZpbmQgdGhlbSBpbiB0aGUg
c2VjdGlvbiAiQzUuMS42IG9wMD09MGIxMSwgTW92ZXMgdG8gYW5kDQo+Pj4+IGZyb20gbm9uLWRl
YnVnIFN5c3RlbSByZWdpc3RlcnMgYW5kIFNwZWNpYWwtcHVycG9zZSByZWdpc3RlcnMiIG9mIEFS
TQ0KPj4+PiBEREkgMDQ4N0IuYS4gQ2hlY2sgb3V0ICJUYWJsZSBDNS02IFN5c3RlbSBpbnN0cnVj
dGlvbiBlbmNvZGluZ3MgZm9yDQo+Pj4+IG5vbi1EZWJ1ZyBTeXN0ZW0gcmVnaXN0ZXIgYWNjZXNz
ZXMiLg0KPj4+IA0KPj4+IFRoZSBwb2ludCBvZiB0aGUgc2VyaWUgaXMgdG8gaGFuZGxlIGFsbCBy
ZWdpc3RlcnMgdHJhcHBlZCBkdWUgdG8gVElEMyBiaXQgaW4gSENSX0VMMi4NCj4+PiANCj4+PiBB
bmQgaSB0aGluayBJIGhhbmRsZWQgYWxsIG9mIHRoZW0gYnV0IEkgbWlnaHQgYmUgd3JvbmcuDQo+
Pj4gDQo+Pj4gSGFuZGxpbmcgYWxsIHJlZ2lzdGVycyBmb3Igb3AwPT0wYjExIHdpbGwgY292ZXIg
YSBsb3QgbW9yZSB0aGluZ3MuDQo+Pj4gVGhpcyBjYW4gYmUgZG9uZSBvZiBjb3Vyc2UgYnV0IHRo
aXMgd2FzIG5vdCB0aGUgcG9pbnQgb2YgdGhpcyBzZXJpZS4NCj4+PiANCj4+PiBUaGUgbGlzdGlu
ZyBpbiBIQ1JfRUwyIGRvY3VtZW50YXRpb24gaXMgcHJldHR5IGNvbXBsZXRlIGFuZCBpZiBJIG1p
c3MgYW55IHJlZ2lzdGVyDQo+Pj4gdGhlcmUgcGxlYXNlIHRlbGwgbWUgYnV0IEkgZG8gbm8gdW5k
ZXJzdGFuZCBmcm9tIHRoZSBkb2N1bWVudGF0aW9uIHRoYXQgYWxsIHJlZ2lzdGVycw0KPj4+IHdp
dGggb3AwIDMgYXJlIHRyYXBwZWQgYnkgVElEMy4NCj4+IA0KPj4gTXkgY29uY2VybiBpcyB0aGF0
IHRoZSBzYW1lIGNvZGUgbWF5IG9ic2VydmUgZGlmZmVyZW50IGVmZmVjdHMgd2hlbg0KPj4gcnVu
bmluZyBpbiBiYXJlbWV0YWwgbW9kZSBhbmQgYXMgVk0uDQo+PiANCj4+IEZvciBleGFtcGxlLCB3
ZSBhcmUgdHJ5aW5nIHRvIHJ1biBhIG5ld2VyIHZlcnNpb24gb2YgYSBrZXJuZWwsIHRoYXQNCj4+
IHN1cHBvcnRzIHNvbWUgaHlwb3RoZXRpY2FsIEFSTXY4LjkuIEFuZCBpdCB0cmllcyB0byByZWFk
IGEgbmV3IElEDQo+PiByZWdpc3RlciB3aGljaCBpcyBhYnNlbnQgaW4gQVJNdjguMi4gVGhlcmUg
YXJlIHBvc3NpYmxlIGNhc2VzOg0KPj4gDQo+PiAwLiBJdCBydW5zIGFzIGEgYmFyZW1ldGFsIGNv
ZGUgb24gYSBjb21wYXRpYmxlIGFyY2hpdGVjdHVyZS4gU28gaXQganVzdA0KPj4gYWNjZXNzZXMg
dGhpcyByZWdpc3RlciBhbmQgYWxsIGlzIGZpbmUuDQo+PiANCj4+IDEuIEl0IHJ1bnMgYXMgYmFy
ZW1ldGFsIGNvZGUgb24gb2xkZXIgQVJNOCBhcmNoaXRlY3R1cmUuIEN1cnJlbnQNCj4+IHJlZmVy
ZW5jZSBtYW51YWwgc3RhdGVzIHRoYXQgdGhvc2UgcmVnaXN0ZXJzIHNob3VsZCByZWFkIGFzIHpl
cm8sIHNvDQo+PiBhbGwgaXMgZmluZSwgYXMgd2VsbC4NCj4+IA0KPj4gMi4gSXQgcnVucyBhcyBW
TSBvbiBhbiBvbGRlciBhcmNoaXRlY3R1cmUuIEl0IGlzIElNUExFTUVOVEFUSU9OIERFRklORUQN
Cj4+IGlmIEhDUi5JRDMgd2lsbCBjYXVzZSB0cmFwcyB3aGVuIGFjY2VzcyB0byB1bmFzc2lnbmVk
IHJlZ2lzdGVyczoNCj4+IA0KPj4gMmEuIFBsYXRmb3JtIGRvZXMgbm90IGNhdXNlIHRyYXBzIGFu
ZCBzb2Z0d2FyZSByZWFkcyB6ZXJvcyBkaXJlY3RseSBmcm9tDQo+PiByZWFsIHJlZ2lzdGVycy4g
VGhpcyBpcyBhIGdvb2Qgb3V0Y29tZS4NCj4+IA0KPj4gMmIuIFBsYXRmb3JtIGNhdXNlcyB0cmFw
LCBhbmQgeW91ciBjb2RlIGluamVjdHMgdGhlIHVuZGVmaW5lZA0KPj4gaW5zdHJ1Y3Rpb24gZXhj
ZXB0aW9uLiBUaGlzIGlzIGEgY2FzZSB0aGF0IGJvdGhlcnMgbWUuIFdlbGwgd3JpdHRlbiBjb2Rl
DQo+PiB0aGF0IGlzIHRyaWVzIHRvIGJlIGNvbXBhdGlibGUgd2l0aCBkaWZmZXJlbnQgdmVyc2lv
bnMgb2YgYXJjaGl0ZWN0dXJlDQo+PiB3aWxsIGZhaWwgdGhlcmUuDQo+PiANCj4+IDMuIEl0IHJ1
bnMgYXMgVk0gb24gYSBuZXZlciBhcmNoaXRlY3R1cmUuIEkgY2FuIG9ubHkgc3BlY3VsYXRlIHRo
ere,
>> but I think, that list of registers trapped by HCR.ID3 will be extended
>> when new features are added. At least, this does not contradict with the
>> current IMPLEMENTATION DEFINED constraint. In this case, hypervisor will
>> inject exception when guest tries to access a valid register.
>> 
>> 
>> So, in my opinion and to be compatible with the reference manual, we
>> should allow guests to read "Reserved, RAZ" registers.
> 
> Thanks for the very detailed explanation, I now better understand what you
> mean here.
> 
> I will try to check if we could return RAZ for "other" op0=3 registers and what
> should be done on cp15 registers to have something equivalent.
> 

In fact I need to add handling for other registers mentioned by the TID3
description in the armv8 architecture manual:
"This field traps all MRS accesses to registers in the following range that are not
already mentioned in this field description: Op0 == 3, op1 == 0, CRn == c0,
CRm == {c1-c7}, op2 == {0-7}."
"This field traps all MRC accesses to encodings in the following range that are not
already mentioned in this field description: coproc == p15, opc1 == 0, CRn == c0,
CRm == {c2-c7}, opc2 == {0-7}."

I will check how I can do that.

Thanks a lot for the review.

Regards
Bertrand

> Regards
> Bertrand
> 
>> 
>> 
>> 
>>> Regards
>>> Bertrand
>>> 
>>> 
>>>> 
>>>> 
>>>>>> 
>>>>>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>>>>> ---
>>>>>>> Changes in V2: rebase
>>>>>>> ---
>>>>>>> xen/arch/arm/vcpreg.c | 35 +++++++++++++++++++++++++++++++++++
>>>>>>> 1 file changed, 35 insertions(+)
>>>>>>> 
>>>>>>> diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c
>>>>>>> index cdc91cdf5b..d0c6406f34 100644
>>>>>>> --- a/xen/arch/arm/vcpreg.c
>>>>>>> +++ b/xen/arch/arm/vcpreg.c
>>>>>>> @@ -155,6 +155,14 @@ TVM_REG32(CONTEXTIDR, CONTEXTIDR_EL1)
>>>>>>>      break;                                                      \
>>>>>>>  }
>>>>>>> 
>>>>>>> +/* Macro to generate easily case for ID co-processor emulation */
>>>>>>> +#define GENERATE_TID3_INFO(reg,field,offset)                    \
>>>>>>> +    case HSR_CPREG32(reg):                                      \
>>>>>>> +    {                                                           \
>>>>>>> +        return handle_ro_read_val(regs, regidx, cp32.read, hsr,     \
>>>>>>> +                          1, guest_cpuinfo.field.bits[offset]);     \
>>>>>>> +    }
>>>>>>> +
>>>>>>> void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
>>>>>>> {
>>>>>>>  const struct hsr_cp32 cp32 = hsr.cp32;
>>>>>>> @@ -286,6 +294,33 @@ void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
>>>>>>>       */
>>>>>>>      return handle_raz_wi(regs, regidx, cp32.read, hsr, 1);
>>>>>>> 
>>>>>>> +    /*
>>>>>>> +     * HCR_EL2.TID3
>>>>>>> +     *
>>>>>>> +     * This is trapping most Identification registers used by a guest
>>>>>>> +     * to identify the processor features
>>>>>>> +     */
>>>>>>> +    GENERATE_TID3_INFO(ID_PFR0, pfr32, 0)
>>>>>>> +    GENERATE_TID3_INFO(ID_PFR1, pfr32, 1)
>>>>>>> +    GENERATE_TID3_INFO(ID_PFR2, pfr32, 2)
>>>>>>> +    GENERATE_TID3_INFO(ID_DFR0, dbg32, 0)
>>>>>>> +    GENERATE_TID3_INFO(ID_DFR1, dbg32, 1)
>>>>>>> +    GENERATE_TID3_INFO(ID_AFR0, aux32, 0)
>>>>>>> +    GENERATE_TID3_INFO(ID_MMFR0, mm32, 0)
>>>>>>> +    GENERATE_TID3_INFO(ID_MMFR1, mm32, 1)
>>>>>>> +    GENERATE_TID3_INFO(ID_MMFR2, mm32, 2)
>>>>>>> +    GENERATE_TID3_INFO(ID_MMFR3, mm32, 3)
>>>>>>> +    GENERATE_TID3_INFO(ID_MMFR4, mm32, 4)
>>>>>>> +    GENERATE_TID3_INFO(ID_MMFR5, mm32, 5)
>>>>>>> +    GENERATE_TID3_INFO(ID_ISAR0, isa32, 0)
>>>>>>> +    GENERATE_TID3_INFO(ID_ISAR1, isa32, 1)
>>>>>>> +    GENERATE_TID3_INFO(ID_ISAR2, isa32, 2)
>>>>>>> +    GENERATE_TID3_INFO(ID_ISAR3, isa32, 3)
>>>>>>> +    GENERATE_TID3_INFO(ID_ISAR4, isa32, 4)
>>>>>>> +    GENERATE_TID3_INFO(ID_ISAR5, isa32, 5)
>>>>>>> +    GENERATE_TID3_INFO(ID_ISAR6, isa32, 6)
>>>>>>> +    /* MVFR registers are in cp10 no cp15 */
>>>>>>> +
>>>>>>>  /*
>>>>>>>   * HCR_EL2.TIDCP
>>>>>>>   *
>>>>>> 
>>>>>> 
>>>>>> -- 
>>>>>> Volodymyr Babchuk at EPAM
>>>> 
>>>> 
>>>> -- 
>>>> Volodymyr Babchuk at EPAM
>> 
>> 
>> -- 
>> Volodymyr Babchuk at EPAM


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 12:17:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 12:17:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42675.76766 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkR4i-0004lZ-TF; Wed, 02 Dec 2020 12:17:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42675.76766; Wed, 02 Dec 2020 12:17:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkR4i-0004lS-Q6; Wed, 02 Dec 2020 12:17:36 +0000
Received: by outflank-mailman (input) for mailman id 42675;
 Wed, 02 Dec 2020 12:17:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>) id 1kkR4i-0004lN-0k
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 12:17:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1kkR4f-00007M-4Y; Wed, 02 Dec 2020 12:17:33 +0000
Received: from 54-240-197-228.amazon.com ([54.240.197.228]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1kkR4e-0000jH-Oi; Wed, 02 Dec 2020 12:17:33 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Mime-Version:Content-Type:
	References:In-Reply-To:Date:Cc:To:From:Subject:Message-ID;
	bh=rq8aNccWaCFgls25NMK4LR2Ct9UOag4RrkVGOyHGqu8=; b=igaiE1Ht56MQRjcfruRkbbDLhL
	JBIbDDUhCskRGo/i/1bKKlMkqx8BrbVdV1Aa1LetW2XP6iYTpokDSRCcusz/ej2eOpgILShxHcmk2
	PGnpkyHZWUaVmOwUxvWBwZr5gKQCc44WlEpPkGg1FG4fvC/E1l5i8Hl1VeTUXY+iVgEg=;
Message-ID: <6409ea31aa21fa54ef2697b0ce959bbf7b5e2a23.camel@xen.org>
Subject: Re: [PATCH] x86/vmap: handle superpages in vmap_to_mfn()
From: Hongyan Xia <hx242@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: jgrall@amazon.com, Andrew Cooper <andrew.cooper3@citrix.com>, Roger Pau
 =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
Date: Wed, 02 Dec 2020 12:17:30 +0000
In-Reply-To: <ef127c6f-2d8a-1ddf-f8e7-7e747518c5a8@suse.com>
References: 
	<34de4c4326673c60d3e2cbd3bbcbcca481906524.1606755042.git.hongyxia@amazon.com>
	 <ef127c6f-2d8a-1ddf-f8e7-7e747518c5a8@suse.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit

On Wed, 2020-12-02 at 11:04 +0100, Jan Beulich wrote:
> On 30.11.2020 17:50, Hongyan Xia wrote:
> > From: Hongyan Xia <hongyxia@amazon.com>
> > 
> > There is simply no guarantee that vmap won't return superpages to the
> > caller. It can happen if the list of MFNs are contiguous, or we simply
> > have a large granularity. Although rare, if such things do happen, we
> > will simply hit BUG_ON() and crash. Properly handle such cases in a new
> > implementation.
> > 
> > Note that vmap is now too large to be a macro, so implement it as a
> > normal function and move the declaration to mm.h (page.h cannot handle
> > mfn_t).
> 
> I'm not happy about this movement, and it's also not really clear to
> me what problem page.h would have in principle. Yes, it can't
> include xen/mm.h, but I think it is long overdue that we disentangle
> this at least some. Let me see what I can do as a prereq for this
> change, but see also below.
> 
> > --- a/xen/arch/x86/mm.c
> > +++ b/xen/arch/x86/mm.c
> > @@ -5194,6 +5194,49 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
> >          }                                          \
> >      } while ( false )
> >  
> > +mfn_t vmap_to_mfn(const void *v)
> > +{
> > +    bool locking = system_state > SYS_STATE_boot;
> > +    unsigned int l2_offset = l2_table_offset((unsigned long)v);
> > +    unsigned int l1_offset = l1_table_offset((unsigned long)v);
> > +    l3_pgentry_t *pl3e = virt_to_xen_l3e((unsigned long)v);
> > +    l2_pgentry_t *pl2e;
> > +    l1_pgentry_t *pl1e;
> 
> Can't all three of them be pointer-to-const?

They can (and yes they should).

> > +    struct page_info *l3page;
> > +    mfn_t ret;
> > +
> > +    ASSERT(pl3e);
> 
>     if ( !pl3e )
>     {
>         ASSERT_UNREACHABLE();
>         return INVALID_MFN;
>     }
> 
> as per the bottom of ./CODING_STYLE? (Similarly further down
> then.)

Will revise.

> > +    l3page = virt_to_page(pl3e);
> > +    L3T_LOCK(l3page);
> > +
> > +    ASSERT(l3e_get_flags(*pl3e) & _PAGE_PRESENT);
> > +    if ( l3e_get_flags(*pl3e) & _PAGE_PSE )
> > +    {
> > +        ret = mfn_add(l3e_get_mfn(*pl3e),
> > +                      (l2_offset << PAGETABLE_ORDER) + l1_offset);
> > +        L3T_UNLOCK(l3page);
> > +        return ret;
> 
> To keep the locked region as narrow as possible
> 
>         mfn = l3e_get_mfn(*pl3e);
>         L3T_UNLOCK(l3page);
>         return mfn_add(mfn, (l2_offset << PAGETABLE_ORDER) +
> l1_offset);
> 
> ? However, in particular because of the recurring unlocks on
> the exit paths I wonder whether it wouldn't be better to
> restructure the whole function such that there'll be one unlock
> and one return. Otoh I'm afraid what I'm asking for here is
> going to yield a measurable set of goto-s ...

I can do that.

But what about the lock narrowing? It will be slightly trickier when
there is a goto. Naturally:

    ret = full return mfn;
    goto out;

out:
    UNLOCK();
    return ret;

but with narrowing, my first reaction is:

    ret = high bits of mfn;
    l2_offset = 0;
    l1_offset = 0;
    goto out;

out:
    UNLOCK();
    return ret + (l2_offset << TABLE_ORDER) + l1_offset;

To be honest, I doubt it is really worth it and I prefer the first one.

> 
> > +    }
> > +
> > +    pl2e = map_l2t_from_l3e(*pl3e) + l2_offset;
> > +    ASSERT(l2e_get_flags(*pl2e) & _PAGE_PRESENT);
> > +    if ( l2e_get_flags(*pl2e) & _PAGE_PSE )
> > +    {
> > +        ret = mfn_add(l2e_get_mfn(*pl2e), l1_offset);
> > +        L3T_UNLOCK(l3page);
> > +        return ret;
> > +    }
> > +
> > +    pl1e = map_l1t_from_l2e(*pl2e) + l1_offset;
> > +    UNMAP_DOMAIN_PAGE(pl2e);
> > +    ASSERT(l1e_get_flags(*pl1e) & _PAGE_PRESENT);
> > +    ret = l1e_get_mfn(*pl1e);
> > +    L3T_UNLOCK(l3page);
> > +    UNMAP_DOMAIN_PAGE(pl1e);
> > +
> > +    return ret;
> > +}
> 
> Now for the name of the function: The only aspect tying it
> somewhat to vmap() is that it assumes (asserts) it'll find a
> valid mapping. I think it wants renaming, and vmap_to_mfn()
> then would become a #define of it (perhaps even retaining
> its property of getting unsigned long passed in), at which
> point it also doesn't need moving out of page.h. As to the
> actual name, xen_map_to_mfn() to somewhat match up with
> map_pages_to_xen()?

I actually really like this idea. I will come up with something in the
next rev. But if we want to make it generic, shouldn't we drop the ASSERTs
on pl*e and the PRESENT flag and just return INVALID_MFN? Then this
function works both on mapped addresses (the assumption of vmap_to_mfn())
and in other use cases.

Hongyan



From xen-devel-bounces@lists.xenproject.org Wed Dec 02 12:23:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 12:23:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42683.76784 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkRAL-0005hy-NU; Wed, 02 Dec 2020 12:23:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42683.76784; Wed, 02 Dec 2020 12:23:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkRAL-0005hr-KV; Wed, 02 Dec 2020 12:23:25 +0000
Received: by outflank-mailman (input) for mailman id 42683;
 Wed, 02 Dec 2020 12:23:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkRAK-0005hj-K3; Wed, 02 Dec 2020 12:23:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkRAK-0000F5-D3; Wed, 02 Dec 2020 12:23:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkRAK-0002LX-5X; Wed, 02 Dec 2020 12:23:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kkRAK-0008Tm-55; Wed, 02 Dec 2020 12:23:24 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dOtx9UTkFPO9WEL0+ZKCGvAuBtXT4O9vVlR19rr7zcE=; b=RM3BkcsrN1AcCACiMPXY6MnvI8
	ywMlkxDNElD/dj3ACAYpqMdURaV1EhslYc3hAXGNkIlDUgDO9PvXmvaADIIp25HMmzIpIQlQCxygK
	dP+d0gpqAGRidcxYZOQaUkCX6Gs7o9eMnqkQrmMN70WK0BVi4BVEyfn5r7LMXAFYHyLM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157157-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157157: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=cabf60fc32d4cfa1d74a2bdfcdb294a31da5d68e
X-Osstest-Versions-That:
    xen=3ae469af8e680df31eecd0a2ac6a83b58ad7ce53
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 02 Dec 2020 12:23:24 +0000

flight 157157 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157157/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  cabf60fc32d4cfa1d74a2bdfcdb294a31da5d68e
baseline version:
 xen                  3ae469af8e680df31eecd0a2ac6a83b58ad7ce53

Last test of basis   157112  2020-11-30 14:00:26 Z    1 days
Testing same since   157157  2020-12-02 10:00:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dario Faggioli <dfaggioli@suse.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Rahul Singh <rahul.singh@arm.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3ae469af8e..cabf60fc32  cabf60fc32d4cfa1d74a2bdfcdb294a31da5d68e -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 12:32:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 12:32:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42691.76799 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkRJM-0006jz-MG; Wed, 02 Dec 2020 12:32:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42691.76799; Wed, 02 Dec 2020 12:32:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkRJM-0006js-Iv; Wed, 02 Dec 2020 12:32:44 +0000
Received: by outflank-mailman (input) for mailman id 42691;
 Wed, 02 Dec 2020 12:32:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DXNT=FG=alien8.de=bp@srs-us1.protection.inumbo.net>)
 id 1kkRJK-0006jn-Ex
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 12:32:43 +0000
Received: from mail.skyhub.de (unknown [5.9.137.197])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4cab47ae-c2ff-4eb8-a472-8e46912ccb2a;
 Wed, 02 Dec 2020 12:32:40 +0000 (UTC)
Received: from zn.tnic (p200300ec2f161b00e186258fb055049e.dip0.t-ipconnect.de
 [IPv6:2003:ec:2f16:1b00:e186:258f:b055:49e])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.skyhub.de (SuperMail on ZX Spectrum 128k) with ESMTPSA id 839061EC0445;
 Wed,  2 Dec 2020 13:32:39 +0100 (CET)
X-Inumbo-ID: 4cab47ae-c2ff-4eb8-a472-8e46912ccb2a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=alien8.de; s=dkim;
	t=1606912359;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:in-reply-to:in-reply-to:  references:references;
	bh=o/X2Q8TjgxS2AV7FMOJUAyk9P7ykB2huuy5Ud2cBljk=;
	b=fME5MejkgBN0mNXJWemh6uyBGI0q25TJUz52B4+MjOgCtx/vrQOy0f9Ub8yPrBnpMg8dIl
	1BtvFqXRtCl5jfju1qfzn/8xxugE+mK0UXIH4WMg2ulGvxAIS6TwI4OPYiLuQNYyxJFg8+
	GLKfO3eez6x6jvEShA/6zBynFK93hw4=
Date: Wed, 2 Dec 2020 13:32:35 +0100
From: Borislav Petkov <bp@alien8.de>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, peterz@infradead.org,
	luto@kernel.org, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v2 04/12] x86/xen: drop USERGS_SYSRET64 paravirt call
Message-ID: <20201202123235.GD2951@zn.tnic>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-5-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201120114630.13552-5-jgross@suse.com>

On Fri, Nov 20, 2020 at 12:46:22PM +0100, Juergen Gross wrote:
> @@ -123,12 +115,15 @@ SYM_INNER_LABEL(entry_SYSCALL_64_after_hwframe, SYM_L_GLOBAL)
>  	 * Try to use SYSRET instead of IRET if we're returning to
>  	 * a completely clean 64-bit userspace context.  If we're not,
>  	 * go to the slow exit path.
> +	 * In the Xen PV case we must use iret anyway.
>  	 */
> -	movq	RCX(%rsp), %rcx
> -	movq	RIP(%rsp), %r11
>  
> -	cmpq	%rcx, %r11	/* SYSRET requires RCX == RIP */
> -	jne	swapgs_restore_regs_and_return_to_usermode
> +	ALTERNATIVE __stringify( \
> +		movq	RCX(%rsp), %rcx; \
> +		movq	RIP(%rsp), %r11; \
> +		cmpq	%rcx, %r11;	/* SYSRET requires RCX == RIP */ \
> +		jne	swapgs_restore_regs_and_return_to_usermode), \
> +	"jmp	swapgs_restore_regs_and_return_to_usermode", X86_FEATURE_XENPV

Why such a big ALTERNATIVE when you can simply do:

        /*
         * Try to use SYSRET instead of IRET if we're returning to
         * a completely clean 64-bit userspace context.  If we're not,
         * go to the slow exit path.
         * In the Xen PV case we must use iret anyway.
         */
        ALTERNATIVE "", "jmp swapgs_restore_regs_and_return_to_usermode", X86_FEATURE_XENPV

        movq    RCX(%rsp), %rcx
        movq    RIP(%rsp), %r11
        cmpq    %rcx, %r11      /* SYSRET requires RCX == RIP */
        jne     swapgs_restore_regs_and_return_to_usermode

?

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 13:05:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 13:05:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42697.76812 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkRof-0001DF-8E; Wed, 02 Dec 2020 13:05:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42697.76812; Wed, 02 Dec 2020 13:05:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkRof-0001D8-4e; Wed, 02 Dec 2020 13:05:05 +0000
Received: by outflank-mailman (input) for mailman id 42697;
 Wed, 02 Dec 2020 13:05:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UQyH=FG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkRod-0001D3-8n
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 13:05:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d6390fc0-9699-44fb-a241-26756a7f4d40;
 Wed, 02 Dec 2020 13:05:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E69F5AB63;
 Wed,  2 Dec 2020 13:05:00 +0000 (UTC)
X-Inumbo-ID: d6390fc0-9699-44fb-a241-26756a7f4d40
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606914301; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=U40an+7nBL5IpBLWBvRUSgZjOGexcNmNVctfmAFKDsE=;
	b=kNSqeSAo1Eg+AFhzIAPDp0sEesaPo/Vx3gmC3dvpX+VOKEALIlxfzK8HdtOWcRH/zvr8wK
	3FIiwSnDE+YJG95S9s5iKj3frfds8FIklEFMOlmqKSC4VUARi2+2PKr3JfUF3RYzzSeuij
	DE/HHlaewHIHpXQ+aFU1IErIcPP8hgU=
Subject: Re: [PATCH] x86/vmap: handle superpages in vmap_to_mfn()
To: Hongyan Xia <hx242@xen.org>
Cc: jgrall@amazon.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <34de4c4326673c60d3e2cbd3bbcbcca481906524.1606755042.git.hongyxia@amazon.com>
 <ef127c6f-2d8a-1ddf-f8e7-7e747518c5a8@suse.com>
 <6409ea31aa21fa54ef2697b0ce959bbf7b5e2a23.camel@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8b6f7075-1c91-562f-cb24-878ac373001c@suse.com>
Date: Wed, 2 Dec 2020 14:05:00 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <6409ea31aa21fa54ef2697b0ce959bbf7b5e2a23.camel@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 02.12.2020 13:17, Hongyan Xia wrote:
> On Wed, 2020-12-02 at 11:04 +0100, Jan Beulich wrote:
>> On 30.11.2020 17:50, Hongyan Xia wrote:
>>> +    l3page = virt_to_page(pl3e);
>>> +    L3T_LOCK(l3page);
>>> +
>>> +    ASSERT(l3e_get_flags(*pl3e) & _PAGE_PRESENT);
>>> +    if ( l3e_get_flags(*pl3e) & _PAGE_PSE )
>>> +    {
>>> +        ret = mfn_add(l3e_get_mfn(*pl3e),
>>> +                      (l2_offset << PAGETABLE_ORDER) + l1_offset);
>>> +        L3T_UNLOCK(l3page);
>>> +        return ret;
>>
>> To keep the locked region as narrow as possible
>>
>>         mfn = l3e_get_mfn(*pl3e);
>>         L3T_UNLOCK(l3page);
>>         return mfn_add(mfn, (l2_offset << PAGETABLE_ORDER) +
>> l1_offset);
>>
>> ? However, in particular because of the recurring unlocks on
>> the exit paths I wonder whether it wouldn't be better to
>> restructure the whole function such that there'll be one unlock
>> and one return. Otoh I'm afraid what I'm asking for here is
>> going to yield a measurable set of goto-s ...
> 
> I can do that.
> 
> But what about the lock narrowing? Will be slightly more tricky when
> there is goto. Naturally:
> 
>     ret = full return mfn;
>     goto out;
> 
> out:
>     UNLOCK();
>     return ret;
> 
> but with narrowing, my first reaction is:
> 
>     ret = high bits of mfn;
>     l2_offset = 0;
>     l1_offset = 0;
>     goto out;
> 
> out:
>     UNLOCK();
>     return mfn + l2_offset << TABLE_ORDER + l1_offset;
> 
> To be honest, I doubt it is really worth it and I prefer the first one.

That's why I said "However ..." - I did realize both won't fit
together very well.

>>> +    }
>>> +
>>> +    pl2e = map_l2t_from_l3e(*pl3e) + l2_offset;
>>> +    ASSERT(l2e_get_flags(*pl2e) & _PAGE_PRESENT);
>>> +    if ( l2e_get_flags(*pl2e) & _PAGE_PSE )
>>> +    {
>>> +        ret = mfn_add(l2e_get_mfn(*pl2e), l1_offset);
>>> +        L3T_UNLOCK(l3page);
>>> +        return ret;
>>> +    }
>>> +
>>> +    pl1e = map_l1t_from_l2e(*pl2e) + l1_offset;
>>> +    UNMAP_DOMAIN_PAGE(pl2e);
>>> +    ASSERT(l1e_get_flags(*pl1e) & _PAGE_PRESENT);
>>> +    ret = l1e_get_mfn(*pl1e);
>>> +    L3T_UNLOCK(l3page);
>>> +    UNMAP_DOMAIN_PAGE(pl1e);
>>> +
>>> +    return ret;
>>> +}
>>
>> Now for the name of the function: The only aspect tying it
>> somewhat to vmap() is that it assumes (asserts) it'll find a
>> valid mapping. I think it wants renaming, and vmap_to_mfn()
>> then would become a #define of it (perhaps even retaining
>> its property of getting unsigned long passed in), at which
>> point it also doesn't need moving out of page.h. As to the
>> actual name, xen_map_to_mfn() to somewhat match up with
>> map_pages_to_xen()?
> 
> I actually really like this idea. I will come up with something in the
> next rev. But if we want to make it generic, shouldn't we not ASSERT on
> pl*e and the PRESENT flag and just return INVALID_MFN? Then this
> function works on both mapped address (assumption of vmap_to_mfn()) and
> other use cases.

Depends - we can still document that this function is going to
require a valid mapping. I did consider the generalization, too,
but this to a certain degree also collides with virt_to_xen_l3e()
allocating an L3 table, which isn't what we would want for a
fully generic lookup function. IOW - I'm undecided and will take
it from wherever you move it (albeit with no promise to not ask
for further adjustment).

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 13:05:36 2020
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 2/8] xen/arm: revert atomic operation related
 command-queue insertion patch
Date: Wed, 2 Dec 2020 13:05:18 +0000
Message-ID: <BEEC5554-0E05-4DD0-8164-710562DC4E71@arm.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <4a0ca6d03b5f1f5b30c4cdbdff0688cea84d9e91.1606406359.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2012011420520.1100@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2012011420520.1100@sstabellini-ThinkPad-T480s>

Hello Stefano,

Thanks for reviewing the code.

> On 1 Dec 2020, at 10:23 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Thu, 26 Nov 2020, Rahul Singh wrote:
>> The Linux SMMUv3 code implements command-queue insertion based on
>> atomic operations implemented in Linux. The atomic functions used by
>> the command-queue insertion are not implemented in Xen, so revert the
>> patch that implemented the command-queue insertion based on atomic
>> operations.
>>
>> Once the proper atomic operations are available in Xen the driver
>> can be updated.
>>
>> Reverted the commit 587e6c10a7ce89a5924fdbeff2ec524fbd6a124b
>> iommu/arm-smmu-v3: Reduce contention during command-queue insertion
>
> I checked 587e6c10a7ce89a5924fdbeff2ec524fbd6a124b: this patch does more
> than just reverting 587e6c10a7ce89a5924fdbeff2ec524fbd6a124b. It looks
> like it is also reverting edd0351e7bc49555d8b5ad8438a65a7ca262c9f0 and
> some other commits.
>
> Please can you provide a complete list of reverted commits? I would like
> to be able to do the reverts myself on the linux tree and see that the
> driver textually matches the one on the xen tree with this patch
> applied.
>
>

Yes, this patch also reverts the commits that build on the code that
introduced the atomic operations. I will list all the reverted commit IDs
in the commit message of the next version of the patch.

Patches that are reverted in this patch are as follows:

9e773aee8c3e1b3ba019c5c7f8435aaa836c6130  iommu/arm-smmu-v3: Batch ATC invalidation commands
edd0351e7bc49555d8b5ad8438a65a7ca262c9f0  iommu/arm-smmu-v3: Batch context descriptor invalidation
4ce8da453640147101bda418640394637c1a7cfc  iommu/arm-smmu-v3: Add command queue batching helpers
2af2e72b18b499fa36d3f7379fd010ff25d2a984  iommu/arm-smmu-v3: Defer TLB invalidation until ->iotlb_sync()
587e6c10a7ce89a5924fdbeff2ec524fbd6a124b  iommu/arm-smmu-v3: Reduce contention during command-queue insertion
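
The check Stefano asks for can be sketched as follows — a hypothetical workflow, assuming a Linux checkout at the revision the Xen port was taken from and a sibling Xen tree (both paths are illustrative, and reverts may still need hand-resolution depending on the exact base revision):

```shell
# Revert newest-first so each revert applies on top of the previous one.
git revert --no-edit \
    9e773aee8c3e1b3ba019c5c7f8435aaa836c6130 \
    edd0351e7bc49555d8b5ad8438a65a7ca262c9f0 \
    4ce8da453640147101bda418640394637c1a7cfc \
    2af2e72b18b499fa36d3f7379fd010ff25d2a984 \
    587e6c10a7ce89a5924fdbeff2ec524fbd6a124b

# Then compare the reverted Linux driver against the Xen copy.
diff -u drivers/iommu/arm-smmu-v3.c \
    ../xen/xen/drivers/passthrough/arm/smmu-v3.c
```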


Regards,
Rahul

>
>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>> ---
>> xen/drivers/passthrough/arm/smmu-v3.c | 847 ++++++--------------------
>> 1 file changed, 180 insertions(+), 667 deletions(-)
>>
>> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
>> index c192544e87..97eac61ea4 100644
>> --- a/xen/drivers/passthrough/arm/smmu-v3.c
>> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
>> @@ -330,15 +330,6 @@
>> #define CMDQ_ERR_CERROR_ABT_IDX		2
>> #define CMDQ_ERR_CERROR_ATC_INV_IDX	3
>>
>> -#define CMDQ_PROD_OWNED_FLAG		Q_OVERFLOW_FLAG
>> -
>> -/*
>> - * This is used to size the command queue and therefore must be at least
>> - * BITS_PER_LONG so that the valid_map works correctly (it relies on the
>> - * total number of queue entries being a multiple of BITS_PER_LONG).
>> - */
>> -#define CMDQ_BATCH_ENTRIES		BITS_PER_LONG
>> -
>> #define CMDQ_0_OP			GENMASK_ULL(7, 0)
>> #define CMDQ_0_SSV			(1UL << 11)
>>
>> @@ -407,8 +398,9 @@
>> #define PRIQ_1_ADDR_MASK		GENMASK_ULL(63, 12)
>>
>> /* High-level queue structures */
>> -#define ARM_SMMU_POLL_TIMEOUT_US	1000000 /* 1s! */
>> -#define ARM_SMMU_POLL_SPIN_COUNT	10
>> +#define ARM_SMMU_POLL_TIMEOUT_US	100
>> +#define ARM_SMMU_CMDQ_SYNC_TIMEOUT_US	1000000 /* 1s! */
>> +#define ARM_SMMU_CMDQ_SYNC_SPIN_COUNT	10
>>
>> #define MSI_IOVA_BASE			0x8000000
>> #define MSI_IOVA_LENGTH			0x100000
>> @@ -513,24 +505,15 @@ struct arm_smmu_cmdq_ent {
>>
>> 		#define CMDQ_OP_CMD_SYNC	0x46
>> 		struct {
>> +			u32			msidata;
>> 			u64			msiaddr;
>> 		} sync;
>> 	};
>> };
>>
>> struct arm_smmu_ll_queue {
>> -	union {
>> -		u64			val;
>> -		struct {
>> -			u32		prod;
>> -			u32		cons;
>> -		};
>> -		struct {
>> -			atomic_t	prod;
>> -			atomic_t	cons;
>> -		} atomic;
>> -		u8			__pad[SMP_CACHE_BYTES];
>> -	} ____cacheline_aligned_in_smp;
>> +	u32				prod;
>> +	u32				cons;
>> 	u32				max_n_shift;
>> };
>>
>> @@ -548,23 +531,9 @@ struct arm_smmu_queue {
>> 	u32 __iomem			*cons_reg;
>> };
>>=20
>> -struct arm_smmu_queue_poll {
>> -	ktime_t				timeout;
>> -	unsigned int			delay;
>> -	unsigned int			spin_cnt;
>> -	bool				wfe;
>> -};
>> -
>> struct arm_smmu_cmdq {
>> 	struct arm_smmu_queue		q;
>> -	atomic_long_t			*valid_map;
>> -	atomic_t			owner_prod;
>> -	atomic_t			lock;
>> -};
>> -
>> -struct arm_smmu_cmdq_batch {
>> -	u64				cmds[CMDQ_BATCH_ENTRIES * CMDQ_ENT_DWORDS];
>> -	int				num;
>> +	spinlock_t			lock;
>> };
>>
>> struct arm_smmu_evtq {
>> @@ -660,6 +629,8 @@ struct arm_smmu_device {
>>
>> 	int				gerr_irq;
>> 	int				combined_irq;
>> +	u32				sync_nr;
>> +	u8				prev_cmd_opcode;
>>
>> 	unsigned long			ias; /* IPA */
>> 	unsigned long			oas; /* PA */
>> @@ -677,6 +648,12 @@ struct arm_smmu_device {
>>
>> 	struct arm_smmu_strtab_cfg	strtab_cfg;
>>=20
>> +	/* Hi16xx adds an extra 32 bits of goodness to its MSI payload */
>> +	union {
>> +		u32			sync_count;
>> +		u64			padding;
>> +	};
>> +
>> 	/* IOMMU core code handle */
>> 	struct iommu_device		iommu;
>> };
>> @@ -763,21 +740,6 @@ static void parse_driver_options(struct arm_smmu_device *smmu)
>> }
>>
>> /* Low-level queue manipulation functions */
>> -static bool queue_has_space(struct arm_smmu_ll_queue *q, u32 n)
>> -{
>> -	u32 space, prod, cons;
>> -
>> -	prod = Q_IDX(q, q->prod);
>> -	cons = Q_IDX(q, q->cons);
>> -
>> -	if (Q_WRP(q, q->prod) == Q_WRP(q, q->cons))
>> -		space = (1 << q->max_n_shift) - (prod - cons);
>> -	else
>> -		space = cons - prod;
>> -
>> -	return space >= n;
>> -}
>> -
>> static bool queue_full(struct arm_smmu_ll_queue *q)
>> {
>> 	return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) &&
>> @@ -790,12 +752,9 @@ static bool queue_empty(struct arm_smmu_ll_queue *q)
>> 	       Q_WRP(q, q->prod) == Q_WRP(q, q->cons);
>> }
>>
>> -static bool queue_consumed(struct arm_smmu_ll_queue *q, u32 prod)
>> +static void queue_sync_cons_in(struct arm_smmu_queue *q)
>> {
>> -	return ((Q_WRP(q, q->cons) == Q_WRP(q, prod)) &&
>> -		(Q_IDX(q, q->cons) > Q_IDX(q, prod))) ||
>> -	       ((Q_WRP(q, q->cons) != Q_WRP(q, prod)) &&
>> -		(Q_IDX(q, q->cons) <= Q_IDX(q, prod)));
>> +	q->llq.cons = readl_relaxed(q->cons_reg);
>> }
>>=20
>> static void queue_sync_cons_out(struct arm_smmu_queue *q)
>> @@ -826,34 +785,46 @@ static int queue_sync_prod_in(struct arm_smmu_queue *q)
>> 	return ret;
>> }
>>
>> -static u32 queue_inc_prod_n(struct arm_smmu_ll_queue *q, int n)
>> +static void queue_sync_prod_out(struct arm_smmu_queue *q)
>> {
>> -	u32 prod = (Q_WRP(q, q->prod) | Q_IDX(q, q->prod)) + n;
>> -	return Q_OVF(q->prod) | Q_WRP(q, prod) | Q_IDX(q, prod);
>> +	writel(q->llq.prod, q->prod_reg);
>> }
>>
>> -static void queue_poll_init(struct arm_smmu_device *smmu,
>> -			    struct arm_smmu_queue_poll *qp)
>> +static void queue_inc_prod(struct arm_smmu_ll_queue *q)
>> {
>> -	qp->delay = 1;
>> -	qp->spin_cnt = 0;
>> -	qp->wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
>> -	qp->timeout = ktime_add_us(ktime_get(), ARM_SMMU_POLL_TIMEOUT_US);
>> +	u32 prod = (Q_WRP(q, q->prod) | Q_IDX(q, q->prod)) + 1;
>> +	q->prod = Q_OVF(q->prod) | Q_WRP(q, prod) | Q_IDX(q, prod);
>> }
>>
>> -static int queue_poll(struct arm_smmu_queue_poll *qp)
>> +/*
>> + * Wait for the SMMU to consume items. If sync is true, wait until the queue
>> + * is empty. Otherwise, wait until there is at least one free slot.
>> + */
>> +static int queue_poll_cons(struct arm_smmu_queue *q, bool sync, bool wfe)
>> {
>> -	if (ktime_compare(ktime_get(), qp->timeout) > 0)
>> -		return -ETIMEDOUT;
>> +	ktime_t timeout;
>> +	unsigned int delay = 1, spin_cnt = 0;
>>
>> -	if (qp->wfe) {
>> -		wfe();
>> -	} else if (++qp->spin_cnt < ARM_SMMU_POLL_SPIN_COUNT) {
>> -		cpu_relax();
>> -	} else {
>> -		udelay(qp->delay);
>> -		qp->delay *= 2;
>> -		qp->spin_cnt = 0;
>> +	/* Wait longer if it's a CMD_SYNC */
>> +	timeout = ktime_add_us(ktime_get(), sync ?
>> +					    ARM_SMMU_CMDQ_SYNC_TIMEOUT_US :
>> +					    ARM_SMMU_POLL_TIMEOUT_US);
>> +
>> +	while (queue_sync_cons_in(q),
>> +	      (sync ? !queue_empty(&q->llq) : queue_full(&q->llq))) {
>> +		if (ktime_compare(ktime_get(), timeout) > 0)
>> +			return -ETIMEDOUT;
>> +
>> +		if (wfe) {
>> +			wfe();
>> +		} else if (++spin_cnt < ARM_SMMU_CMDQ_SYNC_SPIN_COUNT) {
>> +			cpu_relax();
>> +			continue;
>> +		} else {
>> +			udelay(delay);
>> +			delay *= 2;
>> +			spin_cnt = 0;
>> +		}
>> 	}
>>
>> 	return 0;
>> @@ -867,6 +838,17 @@ static void queue_write(__le64 *dst, u64 *src, size_t n_dwords)
>> 		*dst++ = cpu_to_le64(*src++);
>> }
>>
>> +static int queue_insert_raw(struct arm_smmu_queue *q, u64 *ent)
>> +{
>> +	if (queue_full(&q->llq))
>> +		return -ENOSPC;
>> +
>> +	queue_write(Q_ENT(q, q->llq.prod), ent, q->ent_dwords);
>> +	queue_inc_prod(&q->llq);
>> +	queue_sync_prod_out(q);
>> +	return 0;
>> +}
>> +
>> static void queue_read(__le64 *dst, u64 *src, size_t n_dwords)
>> {
>> 	int i;
>> @@ -964,14 +946,20 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
>> 		cmd[1] |= FIELD_PREP(CMDQ_PRI_1_RESP, ent->pri.resp);
>> 		break;
>> 	case CMDQ_OP_CMD_SYNC:
>> -		if (ent->sync.msiaddr) {
>> +		if (ent->sync.msiaddr)
>> 			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_IRQ);
>> -			cmd[1] |= ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
>> -		} else {
>> +		else
>> 			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
>> -		}
>> 		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
>> 		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
>> +		/*
>> +		 * Commands are written little-endian, but we want the SMMU to
>> +		 * receive MSIData, and thus write it back to memory, in CPU
>> +		 * byte order, so big-endian needs an extra byteswap here.
>> +		 */
>> +		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA,
>> +				     cpu_to_le32(ent->sync.msidata));
>> +		cmd[1] |= ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
>> 		break;
>> 	default:
>> 		return -ENOENT;
>> @@ -980,27 +968,6 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
>> 	return 0;
>> }
>>
>> -static void arm_smmu_cmdq_build_sync_cmd(u64 *cmd, struct arm_smmu_device *smmu,
>> -					 u32 prod)
>> -{
>> -	struct arm_smmu_queue *q = &smmu->cmdq.q;
>> -	struct arm_smmu_cmdq_ent ent = {
>> -		.opcode = CMDQ_OP_CMD_SYNC,
>> -	};
>> -
>> -	/*
>> -	 * Beware that Hi16xx adds an extra 32 bits of goodness to its MSI
>> -	 * payload, so the write will zero the entire command on that platform.
>> -	 */
>> -	if (smmu->features & ARM_SMMU_FEAT_MSI &&
>> -	    smmu->features & ARM_SMMU_FEAT_COHERENCY) {
>> -		ent.sync.msiaddr = q->base_dma + Q_IDX(&q->llq, prod) *
>> -				   q->ent_dwords * 8;
>> -	}
>> -
>> -	arm_smmu_cmdq_build_cmd(cmd, &ent);
>> -}
>> -
>> static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
>> {
>> 	static const char *cerror_str[] = {
>> @@ -1058,474 +1025,109 @@ static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
>> 	queue_write(Q_ENT(q, cons), cmd, q->ent_dwords);
>> }
>>
>> -/*
>> - * Command queue locking.
>> - * This is a form of bastardised rwlock with the following major changes:
>> - *
>> - * - The only LOCK routines are exclusive_trylock() and shared_lock().
>> - *   Neither have barrier semantics, and instead provide only a control
>> - *   dependency.
>> - *
>> - * - The UNLOCK routines are supplemented with shared_tryunlock(), which
>> - *   fails if the caller appears to be the last lock holder (yes, this is
>> - *   racy). All successful UNLOCK routines have RELEASE semantics.
>> - */
>> -static void arm_smmu_cmdq_shared_lock(struct arm_smmu_cmdq *cmdq)
>> +static void arm_smmu_cmdq_insert_cmd(struct arm_smmu_device *smmu, u64 *cmd)
>> {
>> -	int val;
>> -
>> -	/*
>> -	 * We can try to avoid the cmpxchg() loop by simply incrementing the
>> -	 * lock counter. When held in exclusive state, the lock counter is set
>> -	 * to INT_MIN so these increments won't hurt as the value will remain
>> -	 * negative.
>> -	 */
>> -	if (atomic_fetch_inc_relaxed(&cmdq->lock) >= 0)
>> -		return;
>> -
>> -	do {
>> -		val = atomic_cond_read_relaxed(&cmdq->lock, VAL >= 0);
>> -	} while (atomic_cmpxchg_relaxed(&cmdq->lock, val, val + 1) != val);
>> -}
>> -
>> -static void arm_smmu_cmdq_shared_unlock(struct arm_smmu_cmdq *cmdq)
>> -{
>> -	(void)atomic_dec_return_release(&cmdq->lock);
>> -}
>> -
>> -static bool arm_smmu_cmdq_shared_tryunlock(struct arm_smmu_cmdq *cmdq)
>> -{
>> -	if (atomic_read(&cmdq->lock) == 1)
>> -		return false;
>> -
>> -	arm_smmu_cmdq_shared_unlock(cmdq);
>> -	return true;
>> -}
>> -
>> -#define arm_smmu_cmdq_exclusive_trylock_irqsave(cmdq, flags)		\
>> -({									\
>> -	bool __ret;							\
>> -	local_irq_save(flags);						\
>> -	__ret = !atomic_cmpxchg_relaxed(&cmdq->lock, 0, INT_MIN);	\
>> -	if (!__ret)							\
>> -		local_irq_restore(flags);				\
>> -	__ret;								\
>> -})
>> -
>> -#define arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags)		\
>> -({									\
>> -	atomic_set_release(&cmdq->lock, 0);				\
>> -	local_irq_restore(flags);					\
>> -})
>> -
>> -
>> -/*
>> - * Command queue insertion.
>> - * This is made fiddly by our attempts to achieve some sort of scalability
>> - * since there is one queue shared amongst all of the CPUs in the system.  If
>> - * you like mixed-size concurrency, dependency ordering and relaxed atomics,
>> - * then you'll *love* this monstrosity.
>> - *
>> - * The basic idea is to split the queue up into ranges of commands that are
>> - * owned by a given CPU; the owner may not have written all of the commands
>> - * itself, but is responsible for advancing the hardware prod pointer when
>> - * the time comes. The algorithm is roughly:
>> - *
>> - * 	1. Allocate some space in the queue. At this point we also discover
>> - *	   whether the head of the queue is currently owned by another CPU,
>> - *	   or whether we are the owner.
>> - *
>> - *	2. Write our commands into our allocated slots in the queue.
>> - *
>> - *	3. Mark our slots as valid in arm_smmu_cmdq.valid_map.
>> - *
>> - *	4. If we are an owner:
>> - *		a. Wait for the previous owner to finish.
>> - *		b. Mark the queue head as unowned, which tells us the range
>> - *		   that we are responsible for publishing.
>> - *		c. Wait for all commands in our owned range to become valid.
>> - *		d. Advance the hardware prod pointer.
>> - *		e. Tell the next owner we've finished.
>> - *
>> - *	5. If we are inserting a CMD_SYNC (we may or may not have been an
>> - *	   owner), then we need to stick around until it has completed:
>> - *		a. If we have MSIs, the SMMU can write back into the CMD_SYNC
>> - *		   to clear the first 4 bytes.
>> - *		b. Otherwise, we spin waiting for the hardware cons pointer to
>> - *		   advance past our command.
>> - *
>> - * The devil is in the details, particularly the use of locking for handling
>> - * SYNC completion and freeing up space in the queue before we think that it is
>> - * full.
>> - */
>> -static void __arm_smmu_cmdq_poll_set_valid_map(struct arm_smmu_cmdq *cmdq,
>> -					       u32 sprod, u32 eprod, bool set)
>> -{
>> -	u32 swidx, sbidx, ewidx, ebidx;
>> -	struct arm_smmu_ll_queue llq = {
>> -		.max_n_shift	= cmdq->q.llq.max_n_shift,
>> -		.prod		= sprod,
>> -	};
>> -
>> -	ewidx = BIT_WORD(Q_IDX(&llq, eprod));
>> -	ebidx = Q_IDX(&llq, eprod) % BITS_PER_LONG;
>> -
>> -	while (llq.prod != eprod) {
>> -		unsigned long mask;
>> -		atomic_long_t *ptr;
>> -		u32 limit = BITS_PER_LONG;
>> -
>> -		swidx = BIT_WORD(Q_IDX(&llq, llq.prod));
>> -		sbidx = Q_IDX(&llq, llq.prod) % BITS_PER_LONG;
>> -
>> -		ptr = &cmdq->valid_map[swidx];
>> -
>> -		if ((swidx == ewidx) && (sbidx < ebidx))
>> -			limit = ebidx;
>> -
>> -		mask = GENMASK(limit - 1, sbidx);
>> -
>> -		/*
>> -		 * The valid bit is the inverse of the wrap bit. This means
>> -		 * that a zero-initialised queue is invalid and, after marking
>> -		 * all entries as valid, they become invalid again when we
>> -		 * wrap.
>> -		 */
>> -		if (set) {
>> -			atomic_long_xor(mask, ptr);
>> -		} else { /* Poll */
>> -			unsigned long valid;
>> +	struct arm_smmu_queue *q = &smmu->cmdq.q;
>> +	bool wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
>>
>> -			valid = (ULONG_MAX + !!Q_WRP(&llq, llq.prod)) & mask;
>> -			atomic_long_cond_read_relaxed(ptr, (VAL & mask) == valid);
>> -		}
>> +	smmu->prev_cmd_opcode = FIELD_GET(CMDQ_0_OP, cmd[0]);
>>
>> -		llq.prod = queue_inc_prod_n(&llq, limit - sbidx);
>> +	while (queue_insert_raw(q, cmd) == -ENOSPC) {
>> +		if (queue_poll_cons(q, false, wfe))
>> +			dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
>> 	}
>> }
>>
>> -/* Mark all entries in the range [sprod, eprod) as valid */
>> -static void arm_smmu_cmdq_set_valid_map(struct arm_smmu_cmdq *cmdq,
>> -					u32 sprod, u32 eprod)
>> -{
>> -	__arm_smmu_cmdq_poll_set_valid_map(cmdq, sprod, eprod, true);
>> -}
>> -
>> -/* Wait for all entries in the range [sprod, eprod) to become valid */
>> -static void arm_smmu_cmdq_poll_valid_map(struct arm_smmu_cmdq *cmdq,
>> -					 u32 sprod, u32 eprod)
>> -{
>> -	__arm_smmu_cmdq_poll_set_valid_map(cmdq, sprod, eprod, false);
>> -}
>> -
>> -/* Wait for the command queue to become non-full */
>> -static int arm_smmu_cmdq_poll_until_not_full(struct arm_smmu_device *smmu,
>> -					     struct arm_smmu_ll_queue *llq)
>> +static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
>> +				    struct arm_smmu_cmdq_ent *ent)
>> {
>> +	u64 cmd[CMDQ_ENT_DWORDS];
>> 	unsigned long flags;
>> -	struct arm_smmu_queue_poll qp;
>> -	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
>> -	int ret = 0;
>>
>> -	/*
>> -	 * Try to update our copy of cons by grabbing exclusive cmdq access. If
>> -	 * that fails, spin until somebody else updates it for us.
>> -	 */
>> -	if (arm_smmu_cmdq_exclusive_trylock_irqsave(cmdq, flags)) {
>> -		WRITE_ONCE(cmdq->q.llq.cons, readl_relaxed(cmdq->q.cons_reg));
>> -		arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags);
>> -		llq->val = READ_ONCE(cmdq->q.llq.val);
>> -		return 0;
>> +	if (arm_smmu_cmdq_build_cmd(cmd, ent)) {
>> +		dev_warn(smmu->dev, "ignoring unknown CMDQ opcode 0x%x\n",
>> +			 ent->opcode);
>> +		return;
>> 	}
>>
>> -	queue_poll_init(smmu, &qp);
>> -	do {
>> -		llq->val = READ_ONCE(smmu->cmdq.q.llq.val);
>> -		if (!queue_full(llq))
>> -			break;
>> -
>> -		ret = queue_poll(&qp);
>> -	} while (!ret);
>> -
>> -	return ret;
>> -}
>> -
>> -/*
>> - * Wait until the SMMU signals a CMD_SYNC completion MSI.
>> - * Must be called with the cmdq lock held in some capacity.
>> - */
>> -static int __arm_smmu_cmdq_poll_until_msi(struct arm_smmu_device *smmu,
>> -					  struct arm_smmu_ll_queue *llq)
>> -{
>> -	int ret = 0;
>> -	struct arm_smmu_queue_poll qp;
>> -	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
>> -	u32 *cmd = (u32 *)(Q_ENT(&cmdq->q, llq->prod));
>> -
>> -	queue_poll_init(smmu, &qp);
>> -
>> -	/*
>> -	 * The MSI won't generate an event, since it's being written back
>> -	 * into the command queue.
>> -	 */
>> -	qp.wfe = false;
>> -	smp_cond_load_relaxed(cmd, !VAL || (ret = queue_poll(&qp)));
>> -	llq->cons = ret ? llq->prod : queue_inc_prod_n(llq, 1);
>> -	return ret;
>> +	spin_lock_irqsave(&smmu->cmdq.lock, flags);
>> +	arm_smmu_cmdq_insert_cmd(smmu, cmd);
>> +	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
>> }
>>
>> /*
>> - * Wait until the SMMU cons index passes llq->prod.
>> - * Must be called with the cmdq lock held in some capacity.
>> + * The difference between val and sync_idx is bounded by the maximum size of
>> + * a queue at 2^20 entries, so 32 bits is plenty for wrap-safe arithmetic.
>>  */
>> -static int __arm_smmu_cmdq_poll_until_consumed(struct arm_smmu_device *smmu,
>> -					       struct arm_smmu_ll_queue *llq)
>> +static int __arm_smmu_sync_poll_msi(struct arm_smmu_device *smmu, u32 sync_idx)
>> {
>> -	struct arm_smmu_queue_poll qp;
>> -	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
>> -	u32 prod = llq->prod;
>> -	int ret = 0;
>> +	ktime_t timeout;
>> +	u32 val;
>>
>> -	queue_poll_init(smmu, &qp);
>> -	llq->val = READ_ONCE(smmu->cmdq.q.llq.val);
>> -	do {
>> -		if (queue_consumed(llq, prod))
>> -			break;
>> -
>> -		ret =3D queue_poll(&qp);
>> -
>> -		/*
>> -		 * This needs to be a readl() so that our subsequent call
>> -		 * to arm_smmu_cmdq_shared_tryunlock() can fail accurately.
>> -		 *
>> -		 * Specifically, we need to ensure that we observe all
>> -		 * shared_lock()s by other CMD_SYNCs that share our owner,
>> -		 * so that a failing call to tryunlock() means that we're
>> -		 * the last one out and therefore we can safely advance
>> -		 * cmdq->q.llq.cons. Roughly speaking:
>> -		 *
>> -		 * CPU 0		CPU1			CPU2 (us)
>> -		 *
>> -		 * if (sync)
>> -		 * 	shared_lock();
>> -		 *
>> -		 * dma_wmb();
>> -		 * set_valid_map();
>> -		 *
>> -		 * 			if (owner) {
>> -		 *				poll_valid_map();
>> -		 *				<control dependency>
>> -		 *				writel(prod_reg);
>> -		 *
>> -		 *						readl(cons_reg);
>> -		 *						tryunlock();
>> -		 *
>> -		 * Requires us to see CPU 0's shared_lock() acquisition.
>> -		 */
>> -		llq->cons =3D readl(cmdq->q.cons_reg);
>> -	} while (!ret);
>> +	timeout =3D ktime_add_us(ktime_get(), ARM_SMMU_CMDQ_SYNC_TIMEOUT_US);
>> +	val =3D smp_cond_load_acquire(&smmu->sync_count,
>> +				    (int)(VAL - sync_idx) >=3D 0 ||
>> +				    !ktime_before(ktime_get(), timeout));
>>=20
>> -	return ret;
>> +	return (int)(val - sync_idx) < 0 ? -ETIMEDOUT : 0;
>> }
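The wrap-safe comparison used in __arm_smmu_sync_poll_msi() above can be modelled in plain C. This is a hypothetical standalone sketch (sync_idx_complete is an invented name, not a driver function): subtracting two u32 sequence numbers and testing the sign of the signed difference gives the right answer even across a 32-bit wrap, precisely because the comment guarantees the two indices can never be more than 2^20 apart.

```c
#include <stdint.h>

/* Hypothetical helper mirroring (int)(VAL - sync_idx) >= 0 above:
 * returns non-zero once 'val' has reached or passed 'sync_idx',
 * correctly handling wrap-around of the 32-bit counter. */
static inline int sync_idx_complete(uint32_t val, uint32_t sync_idx)
{
	return (int32_t)(val - sync_idx) >= 0;
}
```

Note that a naive `val >= sync_idx` would misfire as soon as sync_nr wraps past zero, e.g. when val is 2 and sync_idx is 0xFFFFFFFE.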
>>
>> -static int arm_smmu_cmdq_poll_until_sync(struct arm_smmu_device *smmu,
>> -					 struct arm_smmu_ll_queue *llq)
>> +static int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
>> {
>> -	if (smmu->features & ARM_SMMU_FEAT_MSI &&
>> -	    smmu->features & ARM_SMMU_FEAT_COHERENCY)
>> -		return __arm_smmu_cmdq_poll_until_msi(smmu, llq);
>> -
>> -	return __arm_smmu_cmdq_poll_until_consumed(smmu, llq);
>> -}
>> -
>> -static void arm_smmu_cmdq_write_entries(struct arm_smmu_cmdq *cmdq, u64 *cmds,
>> -					u32 prod, int n)
>> -{
>> -	int i;
>> -	struct arm_smmu_ll_queue llq = {
>> -		.max_n_shift	= cmdq->q.llq.max_n_shift,
>> -		.prod		= prod,
>> -	};
>> -
>> -	for (i = 0; i < n; ++i) {
>> -		u64 *cmd = &cmds[i * CMDQ_ENT_DWORDS];
>> -
>> -		prod = queue_inc_prod_n(&llq, i);
>> -		queue_write(Q_ENT(&cmdq->q, prod), cmd, CMDQ_ENT_DWORDS);
>> -	}
>> -}
>> -
>> -/*
>> - * This is the actual insertion function, and provides the following
>> - * ordering guarantees to callers:
>> - *
>> - * - There is a dma_wmb() before publishing any commands to the queue.
>> - *   This can be relied upon to order prior writes to data structures
>> - *   in memory (such as a CD or an STE) before the command.
>> - *
>> - * - On completion of a CMD_SYNC, there is a control dependency.
>> - *   This can be relied upon to order subsequent writes to memory (e.g.
>> - *   freeing an IOVA) after completion of the CMD_SYNC.
>> - *
>> - * - Command insertion is totally ordered, so if two CPUs each race to
>> - *   insert their own list of commands then all of the commands from one
>> - *   CPU will appear before any of the commands from the other CPU.
>> - */
>> -static int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
>> -				       u64 *cmds, int n, bool sync)
>> -{
>> -	u64 cmd_sync[CMDQ_ENT_DWORDS];
>> -	u32 prod;
>> +	u64 cmd[CMDQ_ENT_DWORDS];
>> 	unsigned long flags;
>> -	bool owner;
>> -	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
>> -	struct arm_smmu_ll_queue llq = {
>> -		.max_n_shift = cmdq->q.llq.max_n_shift,
>> -	}, head = llq;
>> -	int ret = 0;
>> -
>> -	/* 1. Allocate some space in the queue */
>> -	local_irq_save(flags);
>> -	llq.val = READ_ONCE(cmdq->q.llq.val);
>> -	do {
>> -		u64 old;
>> -
>> -		while (!queue_has_space(&llq, n + sync)) {
>> -			local_irq_restore(flags);
>> -			if (arm_smmu_cmdq_poll_until_not_full(smmu, &llq))
>> -				dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
>> -			local_irq_save(flags);
>> -		}
>> -
>> -		head.cons = llq.cons;
>> -		head.prod = queue_inc_prod_n(&llq, n + sync) |
>> -					     CMDQ_PROD_OWNED_FLAG;
>> -
>> -		old = cmpxchg_relaxed(&cmdq->q.llq.val, llq.val, head.val);
>> -		if (old == llq.val)
>> -			break;
>> -
>> -		llq.val = old;
>> -	} while (1);
>> -	owner = !(llq.prod & CMDQ_PROD_OWNED_FLAG);
>> -	head.prod &= ~CMDQ_PROD_OWNED_FLAG;
>> -	llq.prod &= ~CMDQ_PROD_OWNED_FLAG;
>> -
>> -	/*
>> -	 * 2. Write our commands into the queue
>> -	 * Dependency ordering from the cmpxchg() loop above.
>> -	 */
>> -	arm_smmu_cmdq_write_entries(cmdq, cmds, llq.prod, n);
>> -	if (sync) {
>> -		prod = queue_inc_prod_n(&llq, n);
>> -		arm_smmu_cmdq_build_sync_cmd(cmd_sync, smmu, prod);
>> -		queue_write(Q_ENT(&cmdq->q, prod), cmd_sync, CMDQ_ENT_DWORDS);
>> -
>> -		/*
>> -		 * In order to determine completion of our CMD_SYNC, we must
>> -		 * ensure that the queue can't wrap twice without us noticing.
>> -		 * We achieve that by taking the cmdq lock as shared before
>> -		 * marking our slot as valid.
>> -		 */
>> -		arm_smmu_cmdq_shared_lock(cmdq);
>> -	}
>> -
>> -	/* 3. Mark our slots as valid, ensuring commands are visible first */
>> -	dma_wmb();
>> -	arm_smmu_cmdq_set_valid_map(cmdq, llq.prod, head.prod);
>> -
>> -	/* 4. If we are the owner, take control of the SMMU hardware */
>> -	if (owner) {
>> -		/* a. Wait for previous owner to finish */
>> -		atomic_cond_read_relaxed(&cmdq->owner_prod, VAL == llq.prod);
>> -
>> -		/* b. Stop gathering work by clearing the owned flag */
>> -		prod = atomic_fetch_andnot_relaxed(CMDQ_PROD_OWNED_FLAG,
>> -						   &cmdq->q.llq.atomic.prod);
>> -		prod &= ~CMDQ_PROD_OWNED_FLAG;
>> +	struct arm_smmu_cmdq_ent ent = {
>> +		.opcode = CMDQ_OP_CMD_SYNC,
>> +		.sync	= {
>> +			.msiaddr = virt_to_phys(&smmu->sync_count),
>> +		},
>> +	};
>>
>> -		/*
>> -		 * c. Wait for any gathered work to be written to the queue.
>> -		 * Note that we read our own entries so that we have the control
>> -		 * dependency required by (d).
>> -		 */
>> -		arm_smmu_cmdq_poll_valid_map(cmdq, llq.prod, prod);
>> +	spin_lock_irqsave(&smmu->cmdq.lock, flags);
>>
>> -		/*
>> -		 * d. Advance the hardware prod pointer
>> -		 * Control dependency ordering from the entries becoming valid.
>> -		 */
>> -		writel_relaxed(prod, cmdq->q.prod_reg);
>> -
>> -		/*
>> -		 * e. Tell the next owner we're done
>> -		 * Make sure we've updated the hardware first, so that we don't
>> -		 * race to update prod and potentially move it backwards.
>> -		 */
>> -		atomic_set_release(&cmdq->owner_prod, prod);
>> +	/* Piggy-back on the previous command if it's a SYNC */
>> +	if (smmu->prev_cmd_opcode == CMDQ_OP_CMD_SYNC) {
>> +		ent.sync.msidata = smmu->sync_nr;
>> +	} else {
>> +		ent.sync.msidata = ++smmu->sync_nr;
>> +		arm_smmu_cmdq_build_cmd(cmd, &ent);
>> +		arm_smmu_cmdq_insert_cmd(smmu, cmd);
>> 	}
>>
>> -	/* 5. If we are inserting a CMD_SYNC, we must wait for it to complete */
>> -	if (sync) {
>> -		llq.prod = queue_inc_prod_n(&llq, n);
>> -		ret = arm_smmu_cmdq_poll_until_sync(smmu, &llq);
>> -		if (ret) {
>> -			dev_err_ratelimited(smmu->dev,
>> -					    "CMD_SYNC timeout at 0x%08x [hwprod 0x%08x, hwcons 0x%08x]\n",
>> -					    llq.prod,
>> -					    readl_relaxed(cmdq->q.prod_reg),
>> -					    readl_relaxed(cmdq->q.cons_reg));
>> -		}
>> -
>> -		/*
>> -		 * Try to unlock the cmdq lock. This will fail if we're the last
>> -		 * reader, in which case we can safely update cmdq->q.llq.cons
>> -		 */
>> -		if (!arm_smmu_cmdq_shared_tryunlock(cmdq)) {
>> -			WRITE_ONCE(cmdq->q.llq.cons, llq.cons);
>> -			arm_smmu_cmdq_shared_unlock(cmdq);
>> -		}
>> -	}
>> +	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
>>
>> -	local_irq_restore(flags);
>> -	return ret;
>> +	return __arm_smmu_sync_poll_msi(smmu, ent.sync.msidata);
>> }
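The piggy-back branch above can be isolated as a tiny state machine. The following is a hypothetical model (struct sync_model and both function names are invented for illustration, not driver code): a CMD_SYNC only allocates a fresh sync_nr when the previously queued command was not itself a sync; otherwise the caller just waits on the sync index already in flight, and any other command closes the piggy-back window.

```c
#include <stdint.h>

/* Toy model of the piggy-back decision in __arm_smmu_cmdq_issue_sync_msi().
 * prev_was_sync stands in for smmu->prev_cmd_opcode == CMDQ_OP_CMD_SYNC. */
struct sync_model {
	uint32_t sync_nr;	/* models smmu->sync_nr */
	int prev_was_sync;
};

/* Returns the MSI data value the caller should poll for. */
static uint32_t model_issue_sync(struct sync_model *s)
{
	if (s->prev_was_sync)
		return s->sync_nr;	/* reuse the in-flight CMD_SYNC */
	s->prev_was_sync = 1;		/* we insert a new CMD_SYNC */
	return ++s->sync_nr;
}

/* Any non-sync command clears the piggy-back window. */
static void model_issue_cmd(struct sync_model *s)
{
	s->prev_was_sync = 0;
}
```

Two back-to-back syncs therefore share one queue slot and one MSI write, which is the whole point of tracking prev_cmd_opcode under the cmdq lock.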
>>
>> -static int arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
>> -				   struct arm_smmu_cmdq_ent *ent)
>> +static int __arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
>> {
>> 	u64 cmd[CMDQ_ENT_DWORDS];
>> +	unsigned long flags;
>> +	bool wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
>> +	struct arm_smmu_cmdq_ent ent = { .opcode = CMDQ_OP_CMD_SYNC };
>> +	int ret;
>>
>> -	if (arm_smmu_cmdq_build_cmd(cmd, ent)) {
>> -		dev_warn(smmu->dev, "ignoring unknown CMDQ opcode 0x%x\n",
>> -			 ent->opcode);
>> -		return -EINVAL;
>> -	}
>> +	arm_smmu_cmdq_build_cmd(cmd, &ent);
>>
>> -	return arm_smmu_cmdq_issue_cmdlist(smmu, cmd, 1, false);
>> -}
>> +	spin_lock_irqsave(&smmu->cmdq.lock, flags);
>> +	arm_smmu_cmdq_insert_cmd(smmu, cmd);
>> +	ret = queue_poll_cons(&smmu->cmdq.q, true, wfe);
>> +	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
>>
>> -static int arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
>> -{
>> -	return arm_smmu_cmdq_issue_cmdlist(smmu, NULL, 0, true);
>> +	return ret;
>> }
>>
>> -static void arm_smmu_cmdq_batch_add(struct arm_smmu_device *smmu,
>> -				    struct arm_smmu_cmdq_batch *cmds,
>> -				    struct arm_smmu_cmdq_ent *cmd)
>> +static int arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
>> {
>> -	if (cmds->num == CMDQ_BATCH_ENTRIES) {
>> -		arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num, false);
>> -		cmds->num = 0;
>> -	}
>> -	arm_smmu_cmdq_build_cmd(&cmds->cmds[cmds->num * CMDQ_ENT_DWORDS], cmd);
>> -	cmds->num++;
>> -}
>> +	int ret;
>> +	bool msi = (smmu->features & ARM_SMMU_FEAT_MSI) &&
>> +		   (smmu->features & ARM_SMMU_FEAT_COHERENCY);
>>
>> -static int arm_smmu_cmdq_batch_submit(struct arm_smmu_device *smmu,
>> -				      struct arm_smmu_cmdq_batch *cmds)
>> -{
>> -	return arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num, true);
>> +	ret = msi ? __arm_smmu_cmdq_issue_sync_msi(smmu)
>> +		  : __arm_smmu_cmdq_issue_sync(smmu);
>> +	if (ret)
>> +		dev_err_ratelimited(smmu->dev, "CMD_SYNC timeout\n");
>> +	return ret;
>> }
>>
>> /* Context descriptor manipulation functions */
>> @@ -1535,7 +1137,6 @@ static void arm_smmu_sync_cd(struct arm_smmu_domain *smmu_domain,
>> 	size_t i;
>> 	unsigned long flags;
>> 	struct arm_smmu_master *master;
>> -	struct arm_smmu_cmdq_batch cmds = {};
>> 	struct arm_smmu_device *smmu = smmu_domain->smmu;
>> 	struct arm_smmu_cmdq_ent cmd = {
>> 		.opcode	= CMDQ_OP_CFGI_CD,
>> @@ -1549,12 +1150,12 @@ static void arm_smmu_sync_cd(struct arm_smmu_domain *smmu_domain,
>> 	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
>> 		for (i = 0; i < master->num_sids; i++) {
>> 			cmd.cfgi.sid = master->sids[i];
>> -			arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
>> +			arm_smmu_cmdq_issue_cmd(smmu, &cmd);
>> 		}
>> 	}
>> 	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
>>
>> -	arm_smmu_cmdq_batch_submit(smmu, &cmds);
>> +	arm_smmu_cmdq_issue_sync(smmu);
>> }
>>
>> static int arm_smmu_alloc_cd_leaf_table(struct arm_smmu_device *smmu,
>> @@ -2189,16 +1790,17 @@ arm_smmu_atc_inv_to_cmd(int ssid, unsigned long iova, size_t size,
>> 	cmd->atc.size	= log2_span;
>> }
>>
>> -static int arm_smmu_atc_inv_master(struct arm_smmu_master *master)
>> +static int arm_smmu_atc_inv_master(struct arm_smmu_master *master,
>> +				   struct arm_smmu_cmdq_ent *cmd)
>> {
>> 	int i;
>> -	struct arm_smmu_cmdq_ent cmd;
>>
>> -	arm_smmu_atc_inv_to_cmd(0, 0, 0, &cmd);
>> +	if (!master->ats_enabled)
>> +		return 0;
>>
>> 	for (i = 0; i < master->num_sids; i++) {
>> -		cmd.atc.sid = master->sids[i];
>> -		arm_smmu_cmdq_issue_cmd(master->smmu, &cmd);
>> +		cmd->atc.sid = master->sids[i];
>> +		arm_smmu_cmdq_issue_cmd(master->smmu, cmd);
>> 	}
>>
>> 	return arm_smmu_cmdq_issue_sync(master->smmu);
>> @@ -2207,11 +1809,10 @@ static int arm_smmu_atc_inv_master(struct arm_smmu_master *master)
>> static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
>> 				   int ssid, unsigned long iova, size_t size)
>> {
>> -	int i;
>> +	int ret = 0;
>> 	unsigned long flags;
>> 	struct arm_smmu_cmdq_ent cmd;
>> 	struct arm_smmu_master *master;
>> -	struct arm_smmu_cmdq_batch cmds = {};
>>
>> 	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_ATS))
>> 		return 0;
>> @@ -2236,18 +1837,11 @@ static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
>> 	arm_smmu_atc_inv_to_cmd(ssid, iova, size, &cmd);
>>
>> 	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
>> -	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
>> -		if (!master->ats_enabled)
>> -			continue;
>> -
>> -		for (i = 0; i < master->num_sids; i++) {
>> -			cmd.atc.sid = master->sids[i];
>> -			arm_smmu_cmdq_batch_add(smmu_domain->smmu, &cmds, &cmd);
>> -		}
>> -	}
>> +	list_for_each_entry(master, &smmu_domain->devices, domain_head)
>> +		ret |= arm_smmu_atc_inv_master(master, &cmd);
>> 	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
>>
>> -	return arm_smmu_cmdq_batch_submit(smmu_domain->smmu, &cmds);
>> +	return ret ? -ETIMEDOUT : 0;
>> }
>>
>> /* IO_PGTABLE API */
>> @@ -2269,32 +1863,27 @@ static void arm_smmu_tlb_inv_context(void *cookie)
>> 	/*
>> 	 * NOTE: when io-pgtable is in non-strict mode, we may get here with
>> 	 * PTEs previously cleared by unmaps on the current CPU not yet visible
>> -	 * to the SMMU. We are relying on the dma_wmb() implicit during cmd
>> -	 * insertion to guarantee those are observed before the TLBI. Do be
>> -	 * careful, 007.
>> +	 * to the SMMU. We are relying on the DSB implicit in
>> +	 * queue_sync_prod_out() to guarantee those are observed before the
>> +	 * TLBI. Do be careful, 007.
>> 	 */
>> 	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
>> 	arm_smmu_cmdq_issue_sync(smmu);
>> 	arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0);
>> }
>>
>> -static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size,
>> -				   size_t granule, bool leaf,
>> -				   struct arm_smmu_domain *smmu_domain)
>> +static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
>> +					  size_t granule, bool leaf, void *cookie)
>> {
>> +	struct arm_smmu_domain *smmu_domain = cookie;
>> 	struct arm_smmu_device *smmu = smmu_domain->smmu;
>> -	unsigned long start = iova, end = iova + size, num_pages = 0, tg = 0;
>> -	size_t inv_range = granule;
>> -	struct arm_smmu_cmdq_batch cmds = {};
>> 	struct arm_smmu_cmdq_ent cmd = {
>> 		.tlbi = {
>> 			.leaf	= leaf,
>> +			.addr	= iova,
>> 		},
>> 	};
>>
>> -	if (!size)
>> -		return;
>> -
>> 	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
>> 		cmd.opcode	= CMDQ_OP_TLBI_NH_VA;
>> 		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd.asid;
>> @@ -2303,78 +1892,37 @@ static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size,
>> 		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
>> 	}
>>
>> -	if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
>> -		/* Get the leaf page size */
>> -		tg = __ffs(smmu_domain->domain.pgsize_bitmap);
>> -
>> -		/* Convert page size of 12,14,16 (log2) to 1,2,3 */
>> -		cmd.tlbi.tg = (tg - 10) / 2;
>> -
>> -		/* Determine what level the granule is at */
>> -		cmd.tlbi.ttl = 4 - ((ilog2(granule) - 3) / (tg - 3));
>> -
>> -		num_pages = size >> tg;
>> -	}
>> -
>> -	while (iova < end) {
>> -		if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
>> -			/*
>> -			 * On each iteration of the loop, the range is 5 bits
>> -			 * worth of the aligned size remaining.
>> -			 * The range in pages is:
>> -			 *
>> -			 * range = (num_pages & (0x1f << __ffs(num_pages)))
>> -			 */
>> -			unsigned long scale, num;
>> -
>> -			/* Determine the power of 2 multiple number of pages */
>> -			scale = __ffs(num_pages);
>> -			cmd.tlbi.scale = scale;
>> -
>> -			/* Determine how many chunks of 2^scale size we have */
>> -			num = (num_pages >> scale) & CMDQ_TLBI_RANGE_NUM_MAX;
>> -			cmd.tlbi.num = num - 1;
>> -
>> -			/* range is num * 2^scale * pgsize */
>> -			inv_range = num << (scale + tg);
>> -
>> -			/* Clear out the lower order bits for the next iteration */
>> -			num_pages -= num << scale;
>> -		}
>> -
>> -		cmd.tlbi.addr = iova;
>> -		arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
>> -		iova += inv_range;
>> -	}
>> -	arm_smmu_cmdq_batch_submit(smmu, &cmds);
>> -
>> -	/*
>> -	 * Unfortunately, this can't be leaf-only since we may have
>> -	 * zapped an entire table.
>> -	 */
>> -	arm_smmu_atc_inv_domain(smmu_domain, 0, start, size);
>> +	do {
>> +		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
>> +		cmd.tlbi.addr += granule;
>> +	} while (size -= granule);
>> }
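The do/while at the end of arm_smmu_tlb_inv_range_nosync() emits one TLBI per granule-sized chunk. A standalone sketch (count_tlbi_cmds is an invented illustration, not driver code) shows the iteration count, and also why the caller must guarantee that size is a non-zero multiple of granule now that the `if (!size) return;` guard is gone: the `while (size -= granule)` test only terminates cleanly under that assumption.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical counter for the invalidation loop: one command per
 * granule-sized chunk. Assumes size is a non-zero multiple of granule,
 * as the io-pgtable callers are expected to guarantee. */
static unsigned int count_tlbi_cmds(uint64_t iova, size_t size, size_t granule)
{
	unsigned int n = 0;

	do {
		n++;			/* models arm_smmu_cmdq_issue_cmd() */
		iova += granule;	/* models cmd.tlbi.addr += granule */
	} while (size -= granule);

	return n;
}
```

So a 16KiB range at 4KiB granule costs four commands, and the sync is deliberately left to the _walk/_leaf wrappers below.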
>>
>> static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather,
>> 					 unsigned long iova, size_t granule,
>> 					 void *cookie)
>> {
>> -	struct arm_smmu_domain *smmu_domain = cookie;
>> -	struct iommu_domain *domain = &smmu_domain->domain;
>> -
>> -	iommu_iotlb_gather_add_page(domain, gather, iova, granule);
>> +	arm_smmu_tlb_inv_range_nosync(iova, granule, granule, true, cookie);
>> }
>>
>> static void arm_smmu_tlb_inv_walk(unsigned long iova, size_t size,
>> 				  size_t granule, void *cookie)
>> {
>> -	arm_smmu_tlb_inv_range(iova, size, granule, false, cookie);
>> +	struct arm_smmu_domain *smmu_domain = cookie;
>> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
>> +
>> +	arm_smmu_tlb_inv_range_nosync(iova, size, granule, false, cookie);
>> +	arm_smmu_cmdq_issue_sync(smmu);
>> }
>>
>> static void arm_smmu_tlb_inv_leaf(unsigned long iova, size_t size,
>> 				  size_t granule, void *cookie)
>> {
>> -	arm_smmu_tlb_inv_range(iova, size, granule, true, cookie);
>> +	struct arm_smmu_domain *smmu_domain = cookie;
>> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
>> +
>> +	arm_smmu_tlb_inv_range_nosync(iova, size, granule, true, cookie);
>> +	arm_smmu_cmdq_issue_sync(smmu);
>> }
>>
>> static const struct iommu_flush_ops arm_smmu_flush_ops = {
>> @@ -2700,6 +2248,7 @@ static void arm_smmu_enable_ats(struct arm_smmu_master *master)
>>
>> static void arm_smmu_disable_ats(struct arm_smmu_master *master)
>> {
>> +	struct arm_smmu_cmdq_ent cmd;
>> 	struct arm_smmu_domain *smmu_domain = master->domain;
>>
>> 	if (!master->ats_enabled)
>> @@ -2711,8 +2260,9 @@ static void arm_smmu_disable_ats(struct arm_smmu_master *master)
>> 	 * ATC invalidation via the SMMU.
>> 	 */
>> 	wmb();
>> -	arm_smmu_atc_inv_master(master);
>> -	atomic_dec(&smmu_domain->nr_ats_masters);
>> +	arm_smmu_atc_inv_to_cmd(0, 0, 0, &cmd);
>> +	arm_smmu_atc_inv_master(master, &cmd);
>> +	atomic_dec(&smmu_domain->nr_ats_masters);
>> }
>>
>> static int arm_smmu_enable_pasid(struct arm_smmu_master *master)
>> @@ -2875,10 +2425,10 @@ static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain)
>> static void arm_smmu_iotlb_sync(struct iommu_domain *domain,
>> 				struct iommu_iotlb_gather *gather)
>> {
>> -	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>> +	struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu;
>>
>> -	arm_smmu_tlb_inv_range(gather->start, gather->end - gather->start,
>> -			       gather->pgsize, true, smmu_domain);
>> +	if (smmu)
>> +		arm_smmu_cmdq_issue_sync(smmu);
>> }
>>
>> static phys_addr_t
>> @@ -3176,49 +2726,18 @@ static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
>> 	return 0;
>> }
>>
>> -static void arm_smmu_cmdq_free_bitmap(void *data)
>> -{
>> -	unsigned long *bitmap = data;
>> -	bitmap_free(bitmap);
>> -}
>> -
>> -static int arm_smmu_cmdq_init(struct arm_smmu_device *smmu)
>> -{
>> -	int ret = 0;
>> -	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
>> -	unsigned int nents = 1 << cmdq->q.llq.max_n_shift;
>> -	atomic_long_t *bitmap;
>> -
>> -	atomic_set(&cmdq->owner_prod, 0);
>> -	atomic_set(&cmdq->lock, 0);
>> -
>> -	bitmap = (atomic_long_t *)bitmap_zalloc(nents, GFP_KERNEL);
>> -	if (!bitmap) {
>> -		dev_err(smmu->dev, "failed to allocate cmdq bitmap\n");
>> -		ret = -ENOMEM;
>> -	} else {
>> -		cmdq->valid_map = bitmap;
>> -		devm_add_action(smmu->dev, arm_smmu_cmdq_free_bitmap, bitmap);
>> -	}
>> -
>> -	return ret;
>> -}
>> -
>> static int arm_smmu_init_queues(struct arm_smmu_device *smmu)
>> {
>> 	int ret;
>>
>> 	/* cmdq */
>> +	spin_lock_init(&smmu->cmdq.lock);
>> 	ret = arm_smmu_init_one_queue(smmu, &smmu->cmdq.q, ARM_SMMU_CMDQ_PROD,
>> 				      ARM_SMMU_CMDQ_CONS, CMDQ_ENT_DWORDS,
>> 				      "cmdq");
>> 	if (ret)
>> 		return ret;
>>
>> -	ret = arm_smmu_cmdq_init(smmu);
>> -	if (ret)
>> -		return ret;
>> -
>> 	/* evtq */
>> 	ret = arm_smmu_init_one_queue(smmu, &smmu->evtq.q, ARM_SMMU_EVTQ_PROD,
>> 				      ARM_SMMU_EVTQ_CONS, EVTQ_ENT_DWORDS,
>> @@ -3799,15 +3318,9 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>> 	/* Queue sizes, capped to ensure natural alignment */
>> 	smmu->cmdq.q.llq.max_n_shift = min_t(u32, CMDQ_MAX_SZ_SHIFT,
>> 					     FIELD_GET(IDR1_CMDQS, reg));
>> -	if (smmu->cmdq.q.llq.max_n_shift <= ilog2(CMDQ_BATCH_ENTRIES)) {
>> -		/*
>> -		 * We don't support splitting up batches, so one batch of
>> -		 * commands plus an extra sync needs to fit inside the command
>> -		 * queue. There's also no way we can handle the weird alignment
>> -		 * restrictions on the base pointer for a unit-length queue.
>> -		 */
>> -		dev_err(smmu->dev, "command queue size <= %d entries not supported\n",
>> -			CMDQ_BATCH_ENTRIES);
>> +	if (!smmu->cmdq.q.llq.max_n_shift) {
>> +		/* Odd alignment restrictions on the base, so ignore for now */
>> +		dev_err(smmu->dev, "unit-length command queue not supported\n");
>> 		return -ENXIO;
>> 	}
>>
>> -- 
>> 2.17.1
>>



From xen-devel-bounces@lists.xenproject.org Wed Dec 02 13:08:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 13:08:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42706.76836 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkRrf-0001VM-4s; Wed, 02 Dec 2020 13:08:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42706.76836; Wed, 02 Dec 2020 13:08:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkRrf-0001VE-0p; Wed, 02 Dec 2020 13:08:11 +0000
Received: by outflank-mailman (input) for mailman id 42706;
 Wed, 02 Dec 2020 13:08:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8zz6=FG=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kkRrd-0001UK-QN
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 13:08:09 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com (unknown
 [40.107.15.88]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d8ec5658-98e7-46ac-a19a-f1f7efd86ef7;
 Wed, 02 Dec 2020 13:08:07 +0000 (UTC)
Received: from AS8PR04CA0025.eurprd04.prod.outlook.com (2603:10a6:20b:310::30)
 by VE1PR08MB5134.eurprd08.prod.outlook.com (2603:10a6:803:110::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.24; Wed, 2 Dec
 2020 13:08:04 +0000
Received: from VE1EUR03FT053.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:310:cafe::33) by AS8PR04CA0025.outlook.office365.com
 (2603:10a6:20b:310::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.18 via Frontend
 Transport; Wed, 2 Dec 2020 13:08:04 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT053.mail.protection.outlook.com (10.152.19.198) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3632.17 via Frontend Transport; Wed, 2 Dec 2020 13:08:03 +0000
Received: ("Tessian outbound 6ec21dac9dd3:v71");
 Wed, 02 Dec 2020 13:08:03 +0000
Received: from 115d48dd8c1b.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 CF21F612-78FF-48A1-B430-A048B8CC1E42.1; 
 Wed, 02 Dec 2020 13:07:57 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 115d48dd8c1b.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 02 Dec 2020 13:07:57 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DBBPR08MB4629.eurprd08.prod.outlook.com (2603:10a6:10:f4::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.25; Wed, 2 Dec
 2020 13:07:54 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::21f3:34c:8f7e:42ef]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::21f3:34c:8f7e:42ef%2]) with mapi id 15.20.3611.025; Wed, 2 Dec 2020
 13:07:54 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 5/8] xen/arm: Remove support for PCI ATS on SMMUv3
Thread-Topic: [PATCH v2 5/8] xen/arm: Remove support for PCI ATS on SMMUv3
Thread-Index: AQHWxBX6mY7D6FQdO06LxAF1ZfWTsani/uUAgADROIA=
Date: Wed, 2 Dec 2020 13:07:54 +0000
Message-ID: <9DCDC4CF-6503-477A-8C38-E50D1448B598@arm.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <78079d1d6e9d2e7e87125da131e9bdb5809b838a.1606406359.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2012011637560.1100@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2012011637560.1100@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
Content-ID: <1D4995E940986A499FCA7121AB8245CA@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4629
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT053.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	24a54b48-9ccb-4ecf-ea43-08d896c347e9
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	F6TWcz2CX8ZSl89aAWCqo9BJxrLUKHl7rGdm/G8RCti+yfJ5Mn2siFcgrqic3bC/UOinG90qLVnrC22WKOQx1cZ/JhdtOyZ7RaduWudzFMMR2MRv7kDdQQHxXSGtrPLyfrLrL8UvhTU5TwjYB9b9+Kt9lyhwO0VCEpojJlIk4YGbks0ne4o8ZPkIXWcpHp2aJust4mZ26gXD83JYhXbsziA6B0a8VUXwjIXaLg5lXeDEPC6PLyy5u022ycnsyi26iiyOWKaGRvLTfUrt27OYurN1SgdFqaGLt+MzZZoPF/liO3gAi30qSSsKg5OfbOxGOjg2kg90AA92KQLSR3ly2IlQOse0aoVCDPpj4KLTDoUl1hhUT8rQ7+biWkINafuH9nAJ3hRgpj+I2zbFcz2Z80vTGQbFz72kvt+JWIOB7lA=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39860400002)(376002)(136003)(396003)(346002)(46966005)(70586007)(8936002)(81166007)(47076004)(82740400003)(6512007)(6862004)(26005)(336012)(53546011)(6506007)(186003)(316002)(83380400001)(54906003)(82310400003)(36756003)(356005)(33656002)(86362001)(2616005)(4326008)(107886003)(2906002)(30864003)(6486002)(8676002)(70206006)(478600001)(5660300002)(309714004);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Dec 2020 13:08:03.5557
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4b6fdeee-ead9-4bc1-ce9b-08d896c34d93
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT053.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5134

Hello Stefano,

> On 2 Dec 2020, at 12:39 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Thu, 26 Nov 2020, Rahul Singh wrote:
>> PCI ATS functionality is not implemented or tested on Arm. Remove the
>> PCI ATS support; once PCI ATS support is tested and available, the
>> removed code can be added back.
>>
>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>
> This looks like a revert of 9ce27afc0830f. Can we add that as a note to
> the commit message?

Ok, I will add it in the next version.

>
> One very minor comment at the bottom

Ack.

Regards,
Rahul
>
>
>> ---
>> xen/drivers/passthrough/arm/smmu-v3.c | 273 --------------------------
>> 1 file changed, 273 deletions(-)
>>
>> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
>> index 401f7ae006..6a33628087 100644
>> --- a/xen/drivers/passthrough/arm/smmu-v3.c
>> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
>> @@ -460,16 +460,6 @@ struct arm_smmu_cmdq_ent {
>> 			u64			addr;
>> 		} tlbi;
>>
>> -		#define CMDQ_OP_ATC_INV		0x40
>> -		#define ATC_INV_SIZE_ALL	52
>> -		struct {
>> -			u32			sid;
>> -			u32			ssid;
>> -			u64			addr;
>> -			u8			size;
>> -			bool			global;
>> -		} atc;
>> -
>> 		#define CMDQ_OP_PRI_RESP	0x41
>> 		struct {
>> 			u32			sid;
>> @@ -632,7 +622,6 @@ struct arm_smmu_master {
>> 	struct list_head		domain_head;
>> 	u32				*sids;
>> 	unsigned int			num_sids;
>> -	bool				ats_enabled;
>> 	unsigned int			ssid_bits;
>> };
>>
>> @@ -650,7 +639,6 @@ struct arm_smmu_domain {
>>
>> 	struct io_pgtable_ops		*pgtbl_ops;
>> 	bool				non_strict;
>> -	atomic_t			nr_ats_masters;
>>
>> 	enum arm_smmu_domain_stage	stage;
>> 	union {
>> @@ -886,14 +874,6 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
>> 	case CMDQ_OP_TLBI_S12_VMALL:
>> 		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
>> 		break;
>> -	case CMDQ_OP_ATC_INV:
>> -		cmd[0] |= FIELD_PREP(CMDQ_0_SSV, ent->substream_valid);
>> -		cmd[0] |= FIELD_PREP(CMDQ_ATC_0_GLOBAL, ent->atc.global);
>> -		cmd[0] |= FIELD_PREP(CMDQ_ATC_0_SSID, ent->atc.ssid);
>> -		cmd[0] |= FIELD_PREP(CMDQ_ATC_0_SID, ent->atc.sid);
>> -		cmd[1] |= FIELD_PREP(CMDQ_ATC_1_SIZE, ent->atc.size);
>> -		cmd[1] |= ent->atc.addr & CMDQ_ATC_1_ADDR_MASK;
>> -		break;
>> 	case CMDQ_OP_PRI_RESP:
>> 		cmd[0] |= FIELD_PREP(CMDQ_0_SSV, ent->substream_valid);
>> 		cmd[0] |= FIELD_PREP(CMDQ_PRI_0_SSID, ent->pri.ssid);
>> @@ -925,7 +905,6 @@ static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
>> 		[CMDQ_ERR_CERROR_NONE_IDX]	= "No error",
>> 		[CMDQ_ERR_CERROR_ILL_IDX]	= "Illegal command",
>> 		[CMDQ_ERR_CERROR_ABT_IDX]	= "Abort on command fetch",
>> -		[CMDQ_ERR_CERROR_ATC_INV_IDX]	= "ATC invalidate timeout",
>> 	};
>>
>> 	int i;
>> @@ -945,14 +924,6 @@ static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
>> 		dev_err(smmu->dev, "retrying command fetch\n");
>> 	case CMDQ_ERR_CERROR_NONE_IDX:
>> 		return;
>> -	case CMDQ_ERR_CERROR_ATC_INV_IDX:
>> -		/*
>> -		 * ATC Invalidation Completion timeout. CONS is still pointing
>> -		 * at the CMD_SYNC. Attempt to complete other pending commands
>> -		 * by repeating the CMD_SYNC, though we might well end up back
>> -		 * here since the ATC invalidation may still be pending.
>> -		 */
>> -		return;
>> 	case CMDQ_ERR_CERROR_ILL_IDX:
>> 	default:
>> 		break;
>> @@ -1422,9 +1393,6 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
>> 		val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_S2_TRANS);
>> 	}
>>
>> -	if (master->ats_enabled)
>> -		dst[1] |= cpu_to_le64(FIELD_PREP(STRTAB_STE_1_EATS,
>> -						 STRTAB_STE_1_EATS_TRANS));
>>
>> 	arm_smmu_sync_ste_for_sid(smmu, sid);
>> 	/* See comment in arm_smmu_write_ctx_desc() */
>> @@ -1633,112 +1601,6 @@ static irqreturn_t arm_smmu_combined_irq_handler(int irq, void *dev)
>> 	return IRQ_WAKE_THREAD;
>> }
>>
>> -static void
>> -arm_smmu_atc_inv_to_cmd(int ssid, unsigned long iova, size_t size,
>> -			struct arm_smmu_cmdq_ent *cmd)
>> -{
>> -	size_t log2_span;
>> -	size_t span_mask;
>> -	/* ATC invalidates are always on 4096-bytes pages */
>> -	size_t inval_grain_shift = 12;
>> -	unsigned long page_start, page_end;
>> -
>> -	*cmd = (struct arm_smmu_cmdq_ent) {
>> -		.opcode			= CMDQ_OP_ATC_INV,
>> -		.substream_valid	= !!ssid,
>> -		.atc.ssid		= ssid,
>> -	};
>> -
>> -	if (!size) {
>> -		cmd->atc.size = ATC_INV_SIZE_ALL;
>> -		return;
>> -	}
>> -
>> -	page_start	= iova >> inval_grain_shift;
>> -	page_end	= (iova + size - 1) >> inval_grain_shift;
>> -
>> -	/*
>> -	 * In an ATS Invalidate Request, the address must be aligned on the
>> -	 * range size, which must be a power of two number of page sizes. We
>> -	 * thus have to choose between grossly over-invalidating the region, or
>> -	 * splitting the invalidation into multiple commands. For simplicity
>> -	 * we'll go with the first solution, but should refine it in the future
>> -	 * if multiple commands are shown to be more efficient.
>> -	 *
>> -	 * Find the smallest power of two that covers the range. The most
>> -	 * significant differing bit between the start and end addresses,
>> -	 * fls(start ^ end), indicates the required span. For example:
>> -	 *
>> -	 * We want to invalidate pages [8; 11]. This is already the ideal range:
>> -	 *		x = 0b1000 ^ 0b1011 = 0b11
>> -	 *		span = 1 << fls(x) = 4
>> -	 *
>> -	 * To invalidate pages [7; 10], we need to invalidate [0; 15]:
>> -	 *		x = 0b0111 ^ 0b1010 = 0b1101
>> -	 *		span = 1 << fls(x) = 16
>> -	 */
>> -	log2_span	= fls_long(page_start ^ page_end);
>> -	span_mask	= (1ULL << log2_span) - 1;
>> -
>> -	page_start	&= ~span_mask;
>> -
>> -	cmd->atc.addr	= page_start << inval_grain_shift;
>> -	cmd->atc.size	= log2_span;
>> -}
>> -
>> -static int arm_smmu_atc_inv_master(struct arm_smmu_master *master,
>> -				   struct arm_smmu_cmdq_ent *cmd)
>> -{
>> -	int i;
>> -
>> -	if (!master->ats_enabled)
>> -		return 0;
>> -
>> -	for (i = 0; i < master->num_sids; i++) {
>> -		cmd->atc.sid = master->sids[i];
>> -		arm_smmu_cmdq_issue_cmd(master->smmu, cmd);
>> -	}
>> -
>> -	return arm_smmu_cmdq_issue_sync(master->smmu);
>> -}
>> -
>> -static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
>> -				   int ssid, unsigned long iova, size_t size)
>> -{
>> -	int ret = 0;
>> -	unsigned long flags;
>> -	struct arm_smmu_cmdq_ent cmd;
>> -	struct arm_smmu_master *master;
>> -
>> -	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_ATS))
>> -		return 0;
>> -
>> -	/*
>> -	 * Ensure that we've completed prior invalidation of the main TLBs
>> -	 * before we read 'nr_ats_masters' in case of a concurrent call to
>> -	 * arm_smmu_enable_ats():
>> -	 *
>> -	 *	// unmap()			// arm_smmu_enable_ats()
>> -	 *	TLBI+SYNC			atomic_inc(&nr_ats_masters);
>> -	 *	smp_mb();			[...]
>> -	 *	atomic_read(&nr_ats_masters);	pci_enable_ats() // writel()
>> -	 *
>> -	 * Ensures that we always see the incremented 'nr_ats_masters' count if
>> -	 * ATS was enabled at the PCI device before completion of the TLBI.
>> -	 */
>> -	smp_mb();
>> -	if (!atomic_read(&smmu_domain->nr_ats_masters))
>> -		return 0;
>> -
>> -	arm_smmu_atc_inv_to_cmd(ssid, iova, size, &cmd);
>> -
>> -	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
>> -	list_for_each_entry(master, &smmu_domain->devices, domain_head)
>> -		ret |= arm_smmu_atc_inv_master(master, &cmd);
>> -	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
>> -
>> -	return ret ? -ETIMEDOUT : 0;
>> -}
>>
>> /* IO_PGTABLE API */
>> static void arm_smmu_tlb_inv_context(void *cookie)
>> @@ -1765,7 +1627,6 @@ static void arm_smmu_tlb_inv_context(void *cookie)
>> 	 */
>> 	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
>> 	arm_smmu_cmdq_issue_sync(smmu);
>> -	arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0);
>> }
>>=20
>> static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
>> @@ -2106,108 +1967,6 @@ static void arm_smmu_install_ste_for_dev(struct arm_smmu_master *master)
>> 	}
>> }
>>=20
>> -static bool arm_smmu_ats_supported(struct arm_smmu_master *master)
>> -{
>> -	struct device *dev = master->dev;
>> -	struct arm_smmu_device *smmu = master->smmu;
>> -	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
>> -
>> -	if (!(smmu->features & ARM_SMMU_FEAT_ATS))
>> -		return false;
>> -
>> -	if (!(fwspec->flags & IOMMU_FWSPEC_PCI_RC_ATS))
>> -		return false;
>> -
>> -	return dev_is_pci(dev) && pci_ats_supported(to_pci_dev(dev));
>> -}
>> -
>> -static void arm_smmu_enable_ats(struct arm_smmu_master *master)
>> -{
>> -	size_t stu;
>> -	struct pci_dev *pdev;
>> -	struct arm_smmu_device *smmu = master->smmu;
>> -	struct arm_smmu_domain *smmu_domain = master->domain;
>> -
>> -	/* Don't enable ATS at the endpoint if it's not enabled in the STE */
>> -	if (!master->ats_enabled)
>> -		return;
>> -
>> -	/* Smallest Translation Unit: log2 of the smallest supported granule */
>> -	stu = __ffs(smmu->pgsize_bitmap);
>> -	pdev = to_pci_dev(master->dev);
>> -
>> -	atomic_inc(&smmu_domain->nr_ats_masters);
>> -	arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0);
>> -	if (pci_enable_ats(pdev, stu))
>> -		dev_err(master->dev, "Failed to enable ATS (STU %zu)\n", stu);
>> -}
>> -
>> -static void arm_smmu_disable_ats(struct arm_smmu_master *master)
>> -{
>> -	struct arm_smmu_cmdq_ent cmd;
>> -	struct arm_smmu_domain *smmu_domain = master->domain;
>> -
>> -	if (!master->ats_enabled)
>> -		return;
>> -
>> -	pci_disable_ats(to_pci_dev(master->dev));
>> -	/*
>> -	 * Ensure ATS is disabled at the endpoint before we issue the
>> -	 * ATC invalidation via the SMMU.
>> -	 */
>> -	wmb();
>> -	arm_smmu_atc_inv_to_cmd(0, 0, 0, &cmd);
>> -	arm_smmu_atc_inv_master(master, &cmd);
>> -    atomic_dec(&smmu_domain->nr_ats_masters);
>> -}
>> -
>> -static int arm_smmu_enable_pasid(struct arm_smmu_master *master)
>> -{
>> -	int ret;
>> -	int features;
>> -	int num_pasids;
>> -	struct pci_dev *pdev;
>> -
>> -	if (!dev_is_pci(master->dev))
>> -		return -ENODEV;
>> -
>> -	pdev = to_pci_dev(master->dev);
>> -
>> -	features = pci_pasid_features(pdev);
>> -	if (features < 0)
>> -		return features;
>> -
>> -	num_pasids = pci_max_pasids(pdev);
>> -	if (num_pasids <= 0)
>> -		return num_pasids;
>> -
>> -	ret = pci_enable_pasid(pdev, features);
>> -	if (ret) {
>> -		dev_err(&pdev->dev, "Failed to enable PASID\n");
>> -		return ret;
>> -	}
>> -
>> -	master->ssid_bits = min_t(u8, ilog2(num_pasids),
>> -				  master->smmu->ssid_bits);
>> -	return 0;
>> -}
>> -
>> -static void arm_smmu_disable_pasid(struct arm_smmu_master *master)
>> -{
>> -	struct pci_dev *pdev;
>> -
>> -	if (!dev_is_pci(master->dev))
>> -		return;
>> -
>> -	pdev = to_pci_dev(master->dev);
>> -
>> -	if (!pdev->pasid_enabled)
>> -		return;
>> -
>> -	master->ssid_bits = 0;
>> -	pci_disable_pasid(pdev);
>> -}
>> -
>> static void arm_smmu_detach_dev(struct arm_smmu_master *master)
>> {
>> 	unsigned long flags;
>> @@ -2216,14 +1975,11 @@ static void arm_smmu_detach_dev(struct arm_smmu_master *master)
>> 	if (!smmu_domain)
>> 		return;
>>
>> -	arm_smmu_disable_ats(master);
>> -
>> 	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
>> 	list_del(&master->domain_head);
>> 	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
>>=20
>> 	master->domain = NULL;
>> -	master->ats_enabled = false;
>> 	arm_smmu_install_ste_for_dev(master);
>> }
>>
>> @@ -2271,17 +2027,12 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
>>
>> 	master->domain = smmu_domain;
>>
>> -	if (smmu_domain->stage != ARM_SMMU_DOMAIN_BYPASS)
>> -		master->ats_enabled = arm_smmu_ats_supported(master);
>> -
>> 	arm_smmu_install_ste_for_dev(master);
>>
>> 	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
>> 	list_add(&master->domain_head, &smmu_domain->devices);
>> 	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
>>
>> -	arm_smmu_enable_ats(master);
>> -
>> out_unlock:
>> 	mutex_unlock(&smmu_domain->init_mutex);
>> 	return ret;
>> @@ -2410,16 +2161,6 @@ static struct iommu_device *arm_smmu_probe_device(struct device *dev)
>>
>> 	master->ssid_bits = min(smmu->ssid_bits, fwspec->num_pasid_bits);
>>=20
>> -	/*
>> -	 * Note that PASID must be enabled before, and disabled after ATS:
>> -	 * PCI Express Base 4.0r1.0 - 10.5.1.3 ATS Control Register
>> -	 *
>> -	 *   Behavior is undefined if this bit is Set and the value of the PASID
>> -	 *   Enable, Execute Requested Enable, or Privileged Mode Requested bits
>> -	 *   are changed.
>> -	 */
>> -	arm_smmu_enable_pasid(master);
>> -
>> 	if (!(smmu->features & ARM_SMMU_FEAT_2_LVL_CDTAB))
>> 		master->ssid_bits = min_t(u8, master->ssid_bits,
>> 					  CTXDESC_LINEAR_CDMAX);
>> @@ -2442,7 +2183,6 @@ static void arm_smmu_release_device(struct device *dev)
>>
>> 	master = dev_iommu_priv_get(dev);
>> 	arm_smmu_detach_dev(master);
>> -	arm_smmu_disable_pasid(master);
>> 	kfree(master);
>> 	iommu_fwspec_free(dev);
>> }
>> @@ -2997,15 +2737,6 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
>> 		}
>> 	}
>>
>> -	if (smmu->features & ARM_SMMU_FEAT_ATS) {
>> -		enables |= CR0_ATSCHK;
>> -		ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
>> -					      ARM_SMMU_CR0ACK);
>> -		if (ret) {
>> -			dev_err(smmu->dev, "failed to enable ATS check\n");
>> -			return ret;
>> -		}
>> -	}
>>
>> 	ret = arm_smmu_setup_irqs(smmu);
>> 	if (ret) {
>> @@ -3076,13 +2807,9 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>> 	if (IS_ENABLED(CONFIG_PCI_PRI) && reg & IDR0_PRI)
>> 		smmu->features |= ARM_SMMU_FEAT_PRI;
>>
>> -	if (IS_ENABLED(CONFIG_PCI_ATS) && reg & IDR0_ATS)
>> -		smmu->features |= ARM_SMMU_FEAT_ATS;
>> -
>> 	if (reg & IDR0_SEV)
>> 		smmu->features |= ARM_SMMU_FEAT_SEV;
>>
>> -
>> 	if (reg & IDR0_HYP)
>
> spurious change



From xen-devel-bounces@lists.xenproject.org Wed Dec 02 13:13:04 2020
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 4/8] xen/arm: Remove support for MSI on SMMUv3
Date: Wed, 2 Dec 2020 13:12:11 +0000
Message-ID: <D79D7DC5-649D-4517-A8CA-B13632595DA5@arm.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <cfc6cbe23f05162d5c62df9db09fef3f8e0b8e14.1606406359.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2012011621380.1100@sstabellini-ThinkPad-T480s>
 <alpine.DEB.2.21.2012011639230.1100@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2012011639230.1100@sstabellini-ThinkPad-T480s>

Hello Stefano,

> On 2 Dec 2020, at 12:40 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Tue, 1 Dec 2020, Stefano Stabellini wrote:
>> On Thu, 26 Nov 2020, Rahul Singh wrote:
>>> Xen does not support MSI on Arm platforms; therefore, remove the MSI
>>> support from the SMMUv3 driver.
>>>
>>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>>
>> I wonder if it makes sense to #ifdef CONFIG_MSI this code instead of
>> removing it completely.
>
> One more thought: could this patch be achieved by reverting
> 166bdbd23161160f2abcea70621adba179050bee? If this patch could be done
> by a couple of reverts, it would be great to say it in the commit
> message.
>
Ok, I will add it in the next version.

>
>> In the past, we tried to keep the entire file as textually similar to
>> the original Linux driver as possible to make it easier to backport
>> features and fixes. So, in this case we would probably not even use an
>> #ifdef but maybe something like:
>>
>>  if (/* msi_enabled */ 0)
>>      return;
>>=20
>> at the beginning of arm_smmu_setup_msis.
>>
>>
>> However, that strategy didn't actually work very well because backports
>> have proven difficult to do anyway. So at that point we might as well at
>> least have clean code in Xen and do the changes properly.

The main reason to remove features/code that are not usable in Xen is to
have clean code.

Regards,
Rahul

>>
>> So that's my reasoning for accepting this patch :-)
>>
>> Julien, are you happy with this too?
>>
>>
>>> ---
>>> xen/drivers/passthrough/arm/smmu-v3.c | 176 +-------------------------
>>> 1 file changed, 3 insertions(+), 173 deletions(-)
>>>
>>> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
>>> index cec304e51a..401f7ae006 100644
>>> --- a/xen/drivers/passthrough/arm/smmu-v3.c
>>> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
>>> @@ -416,31 +416,6 @@ enum pri_resp {
>>> 	PRI_RESP_SUCC = 2,
>>> };
>>>
>>> -enum arm_smmu_msi_index {
>>> -	EVTQ_MSI_INDEX,
>>> -	GERROR_MSI_INDEX,
>>> -	PRIQ_MSI_INDEX,
>>> -	ARM_SMMU_MAX_MSIS,
>>> -};
>>> -
>>> -static phys_addr_t arm_smmu_msi_cfg[ARM_SMMU_MAX_MSIS][3] = {
>>> -	[EVTQ_MSI_INDEX] = {
>>> -		ARM_SMMU_EVTQ_IRQ_CFG0,
>>> -		ARM_SMMU_EVTQ_IRQ_CFG1,
>>> -		ARM_SMMU_EVTQ_IRQ_CFG2,
>>> -	},
>>> -	[GERROR_MSI_INDEX] = {
>>> -		ARM_SMMU_GERROR_IRQ_CFG0,
>>> -		ARM_SMMU_GERROR_IRQ_CFG1,
>>> -		ARM_SMMU_GERROR_IRQ_CFG2,
>>> -	},
>>> -	[PRIQ_MSI_INDEX] = {
>>> -		ARM_SMMU_PRIQ_IRQ_CFG0,
>>> -		ARM_SMMU_PRIQ_IRQ_CFG1,
>>> -		ARM_SMMU_PRIQ_IRQ_CFG2,
>>> -	},
>>> -};
>>> -
>>> struct arm_smmu_cmdq_ent {
>>> 	/* Common fields */
>>> 	u8				opcode;
>>> @@ -504,10 +479,6 @@ struct arm_smmu_cmdq_ent {
>>> 		} pri;
>>>=20
>>> 		#define CMDQ_OP_CMD_SYNC	0x46
>>> -		struct {
>>> -			u32			msidata;
>>> -			u64			msiaddr;
>>> -		} sync;
>>> 	};
>>> };
>>>=20
>>> @@ -649,12 +620,6 @@ struct arm_smmu_device {
>>>=20
>>> 	struct arm_smmu_strtab_cfg	strtab_cfg;
>>>=20
>>> -	/* Hi16xx adds an extra 32 bits of goodness to its MSI payload */
>>> -	union {
>>> -		u32			sync_count;
>>> -		u64			padding;
>>> -	};
>>> -
>>> 	/* IOMMU core code handle */
>>> 	struct iommu_device		iommu;
>>> };
>>> @@ -945,20 +910,7 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struc=
t arm_smmu_cmdq_ent *ent)
>>> 		cmd[1] |=3D FIELD_PREP(CMDQ_PRI_1_RESP, ent->pri.resp);
>>> 		break;
>>> 	case CMDQ_OP_CMD_SYNC:
>>> -		if (ent->sync.msiaddr)
>>> -			cmd[0] |=3D FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_IRQ);
>>> -		else
>>> -			cmd[0] |=3D FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
>>> -		cmd[0] |=3D FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
>>> -		cmd[0] |=3D FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
>>> -		/*
>>> -		 * Commands are written little-endian, but we want the SMMU to
>>> -		 * receive MSIData, and thus write it back to memory, in CPU
>>> -		 * byte order, so big-endian needs an extra byteswap here.
>>> -		 */
>>> -		cmd[0] |=3D FIELD_PREP(CMDQ_SYNC_0_MSIDATA,
>>> -				     cpu_to_le32(ent->sync.msidata));
>>> -		cmd[1] |=3D ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
>>> +		cmd[0] |=3D FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
>>> 		break;
>>> 	default:
>>> 		return -ENOENT;
>>> @@ -1054,50 +1006,6 @@ static void arm_smmu_cmdq_issue_cmd(struct arm_s=
mmu_device *smmu,
>>> 	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
>>> }
>>>=20
>>> -/*
>>> - * The difference between val and sync_idx is bounded by the maximum s=
ize of
>>> - * a queue at 2^20 entries, so 32 bits is plenty for wrap-safe arithme=
tic.
>>> - */
>>> -static int __arm_smmu_sync_poll_msi(struct arm_smmu_device *smmu, u32 =
sync_idx)
>>> -{
>>> -	ktime_t timeout;
>>> -	u32 val;
>>> -
>>> -	timeout =3D ktime_add_us(ktime_get(), ARM_SMMU_CMDQ_SYNC_TIMEOUT_US);
>>> -	val =3D smp_cond_load_acquire(&smmu->sync_count,
>>> -				    (int)(VAL - sync_idx) >=3D 0 ||
>>> -				    !ktime_before(ktime_get(), timeout));
>>> -
>>> -	return (int)(val - sync_idx) < 0 ? -ETIMEDOUT : 0;
>>> -}
>>> -
>>> -static int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu=
)
>>> -{
>>> -	u64 cmd[CMDQ_ENT_DWORDS];
>>> -	unsigned long flags;
>>> -	struct arm_smmu_cmdq_ent  ent =3D {
>>> -		.opcode =3D CMDQ_OP_CMD_SYNC,
>>> -		.sync	=3D {
>>> -			.msiaddr =3D virt_to_phys(&smmu->sync_count),
>>> -		},
>>> -	};
>>> -
>>> -	spin_lock_irqsave(&smmu->cmdq.lock, flags);
>>> -
>>> -	/* Piggy-back on the previous command if it's a SYNC */
>>> -	if (smmu->prev_cmd_opcode =3D=3D CMDQ_OP_CMD_SYNC) {
>>> -		ent.sync.msidata =3D smmu->sync_nr;
>>> -	} else {
>>> -		ent.sync.msidata =3D ++smmu->sync_nr;
>>> -		arm_smmu_cmdq_build_cmd(cmd, &ent);
>>> -		arm_smmu_cmdq_insert_cmd(smmu, cmd);
>>> -	}
>>> -
>>> -	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
>>> -
>>> -	return __arm_smmu_sync_poll_msi(smmu, ent.sync.msidata);
>>> -}
>>> -
>>> static int __arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
>>> {
>>> 	u64 cmd[CMDQ_ENT_DWORDS];
>>> @@ -1119,12 +1027,9 @@ static int __arm_smmu_cmdq_issue_sync(struct arm=
_smmu_device *smmu)
>>> static int arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
>>> {
>>> 	int ret;
>>> -	bool msi =3D (smmu->features & ARM_SMMU_FEAT_MSI) &&
>>> -		   (smmu->features & ARM_SMMU_FEAT_COHERENCY);
>>>=20
>>> -	ret =3D msi ? __arm_smmu_cmdq_issue_sync_msi(smmu)
>>> -		  : __arm_smmu_cmdq_issue_sync(smmu);
>>> -	if (ret)
>>> +	ret =3D __arm_smmu_cmdq_issue_sync(smmu);
>>> +	if ( ret )
>>> 		dev_err_ratelimited(smmu->dev, "CMD_SYNC timeout\n");
>>> 	return ret;
>>> }
>>> @@ -2898,83 +2803,10 @@ static int arm_smmu_update_gbpa(struct arm_smmu=
_device *smmu, u32 set, u32 clr)
>>> 	return ret;
>>> }
>>>=20
>>> -static void arm_smmu_free_msis(void *data)
>>> -{
>>> -	struct device *dev =3D data;
>>> -	platform_msi_domain_free_irqs(dev);
>>> -}
>>> -
>>> -static void arm_smmu_write_msi_msg(struct msi_desc *desc, struct msi_m=
sg *msg)
>>> -{
>>> -	phys_addr_t doorbell;
>>> -	struct device *dev =3D msi_desc_to_dev(desc);
>>> -	struct arm_smmu_device *smmu =3D dev_get_drvdata(dev);
>>> -	phys_addr_t *cfg =3D arm_smmu_msi_cfg[desc->platform.msi_index];
>>> -
>>> -	doorbell =3D (((u64)msg->address_hi) << 32) | msg->address_lo;
>>> -	doorbell &=3D MSI_CFG0_ADDR_MASK;
>>> -
>>> -	writeq_relaxed(doorbell, smmu->base + cfg[0]);
>>> -	writel_relaxed(msg->data, smmu->base + cfg[1]);
>>> -	writel_relaxed(ARM_SMMU_MEMATTR_DEVICE_nGnRE, smmu->base + cfg[2]);
>>> -}
>>> -
>>> -static void arm_smmu_setup_msis(struct arm_smmu_device *smmu)
>>> -{
>>> -	struct msi_desc *desc;
>>> -	int ret, nvec =3D ARM_SMMU_MAX_MSIS;
>>> -	struct device *dev =3D smmu->dev;
>>> -
>>> -	/* Clear the MSI address regs */
>>> -	writeq_relaxed(0, smmu->base + ARM_SMMU_GERROR_IRQ_CFG0);
>>> -	writeq_relaxed(0, smmu->base + ARM_SMMU_EVTQ_IRQ_CFG0);
>>> -
>>> -	if (smmu->features & ARM_SMMU_FEAT_PRI)
>>> -		writeq_relaxed(0, smmu->base + ARM_SMMU_PRIQ_IRQ_CFG0);
>>> -	else
>>> -		nvec--;
>>> -
>>> -	if (!(smmu->features & ARM_SMMU_FEAT_MSI))
>>> -		return;
>>> -
>>> -	if (!dev->msi_domain) {
>>> -		dev_info(smmu->dev, "msi_domain absent - falling back to wired irqs\=
n");
>>> -		return;
>>> -	}
>>> -
>>> -	/* Allocate MSIs for evtq, gerror and priq. Ignore cmdq */
>>> -	ret =3D platform_msi_domain_alloc_irqs(dev, nvec, arm_smmu_write_msi_=
msg);
>>> -	if (ret) {
>>> -		dev_warn(dev, "failed to allocate MSIs - falling back to wired irqs\=
n");
>>> -		return;
>>> -	}
>>> -
>>> -	for_each_msi_entry(desc, dev) {
>>> -		switch (desc->platform.msi_index) {
>>> -		case EVTQ_MSI_INDEX:
>>> -			smmu->evtq.q.irq =3D desc->irq;
>>> -			break;
>>> -		case GERROR_MSI_INDEX:
>>> -			smmu->gerr_irq =3D desc->irq;
>>> -			break;
>>> -		case PRIQ_MSI_INDEX:
>>> -			smmu->priq.q.irq =3D desc->irq;
>>> -			break;
>>> -		default:	/* Unknown */
>>> -			continue;
>>> -		}
>>> -	}
>>> -
>>> -	/* Add callback to free MSIs on teardown */
>>> -	devm_add_action(dev, arm_smmu_free_msis, dev);
>>> -}
>>> -
>>> static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
>>> {
>>> 	int irq, ret;
>>>=20
>>> -	arm_smmu_setup_msis(smmu);
>>> -
>>> 	/* Request interrupt lines */
>>> 	irq =3D smmu->evtq.q.irq;
>>> 	if (irq) {
>>> @@ -3250,8 +3082,6 @@ static int arm_smmu_device_hw_probe(struct arm_sm=
mu_device *smmu)
>>> 	if (reg & IDR0_SEV)
>>> 		smmu->features |=3D ARM_SMMU_FEAT_SEV;
>>>=20
>>> -	if (reg & IDR0_MSI)
>>> -		smmu->features |=3D ARM_SMMU_FEAT_MSI;
>>>=20
>>> 	if (reg & IDR0_HYP)
>>> 		smmu->features |=3D ARM_SMMU_FEAT_HYP;
>>> --=20
>>> 2.17.1
>>>=20
>>=20



From xen-devel-bounces@lists.xenproject.org Wed Dec 02 13:13:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 13:13:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42720.76859 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkRx5-0002Yw-82; Wed, 02 Dec 2020 13:13:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42720.76859; Wed, 02 Dec 2020 13:13:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkRx5-0002Yp-4y; Wed, 02 Dec 2020 13:13:47 +0000
Received: by outflank-mailman (input) for mailman id 42720;
 Wed, 02 Dec 2020 13:13:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8zz6=FG=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kkRx4-0002Yj-Km
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 13:13:46 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.14.49]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 999b570c-ab0b-47df-b356-a799c0c58d9b;
 Wed, 02 Dec 2020 13:13:44 +0000 (UTC)
Received: from AM7PR04CA0003.eurprd04.prod.outlook.com (2603:10a6:20b:110::13)
 by DB8PR08MB5338.eurprd08.prod.outlook.com (2603:10a6:10:111::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.23; Wed, 2 Dec
 2020 13:13:42 +0000
Received: from AM5EUR03FT023.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:110:cafe::75) by AM7PR04CA0003.outlook.office365.com
 (2603:10a6:20b:110::13) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.17 via Frontend
 Transport; Wed, 2 Dec 2020 13:13:42 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT023.mail.protection.outlook.com (10.152.16.169) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3632.17 via Frontend Transport; Wed, 2 Dec 2020 13:13:42 +0000
Received: ("Tessian outbound 6ec21dac9dd3:v71");
 Wed, 02 Dec 2020 13:13:41 +0000
Received: from a111c2080f6c.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 CE896D0E-4039-479F-AC03-03280FD4248D.1; 
 Wed, 02 Dec 2020 13:13:04 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a111c2080f6c.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 02 Dec 2020 13:13:04 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DB8PR08MB5244.eurprd08.prod.outlook.com (2603:10a6:10:e6::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.17; Wed, 2 Dec
 2020 13:13:03 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::21f3:34c:8f7e:42ef]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::21f3:34c:8f7e:42ef%2]) with mapi id 15.20.3611.025; Wed, 2 Dec 2020
 13:13:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 999b570c-ab0b-47df-b356-a799c0c58d9b
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 6/8] xen/arm: Remove support for Stage-1 translation on
 SMMUv3.
Thread-Topic: [PATCH v2 6/8] xen/arm: Remove support for Stage-1 translation
 on SMMUv3.
Thread-Index: AQHWxBX8ar/P6AWgZUeUeAQIZFy5kKnjAtQAgADOu4A=
Date: Wed, 2 Dec 2020 13:13:03 +0000
Message-ID: <54C9C8EE-083D-47E8-82FD-461D240F8C68@arm.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <29d40e76341983b175250b71e7b7a290895effd0.1606406359.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2012011645170.1100@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2012011645170.1100@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 685f997a-9b2b-467f-e373-08d896c4174d
x-ms-traffictypediagnostic: DB8PR08MB5244:|DB8PR08MB5338:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DB8PR08MB5338DC3D5B3419B9B31F8410FCF30@DB8PR08MB5338.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:6108;OLM:6108;
X-MS-Exchange-SenderADCheck: 1
Content-Type: text/plain; charset="us-ascii"
Content-ID: <C5B7C67E2CEDBE41A5E93EC05E111CDC@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5244
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT023.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	a6690022-dfe9-4680-e9b9-08d896c4006f
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Dec 2020 13:13:42.0972
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 685f997a-9b2b-467f-e373-08d896c4174d
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT023.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5338

Hello Stefano,

> On 2 Dec 2020, at 12:53 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Thu, 26 Nov 2020, Rahul Singh wrote:
>> Linux SMMUv3 driver supports both Stage-1 and Stage-2 translations.
>> As of now only Stage-2 translation support has been tested.
>>
>> Once Stage-1 translation support is tested this patch can be added.
>>
>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>
> [...]
>
>
>> @@ -1871,19 +1476,9 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
>> 	}
>>
>> 	/* Restrict the stage to what we can actually support */
>> -	if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S1))
>> -		smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
>> -	if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S2))
>> -		smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
>> +	smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
>
> It would be good to add a helpful error message if
> ARM_SMMU_FEAT_TRANS_S2 is missing.
>

Ack. I will add an error message.

Regards,
Rahul
>
>> 	switch (smmu_domain->stage) {
>> -	case ARM_SMMU_DOMAIN_S1:
>> -		ias = (smmu->features & ARM_SMMU_FEAT_VAX) ? 52 : 48;
>> -		ias = min_t(unsigned long, ias, VA_BITS);
>> -		oas = smmu->ias;
>> -		fmt = ARM_64_LPAE_S1;
>> -		finalise_stage_fn = arm_smmu_domain_finalise_s1;
>> -		break;
>> 	case ARM_SMMU_DOMAIN_NESTED:
>> 	case ARM_SMMU_DOMAIN_S2:
>> 		ias = smmu->ias;



From xen-devel-bounces@lists.xenproject.org Wed Dec 02 13:45:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 13:45:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42732.76892 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkSRD-0005VV-0D; Wed, 02 Dec 2020 13:44:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42732.76892; Wed, 02 Dec 2020 13:44:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkSRC-0005VO-TC; Wed, 02 Dec 2020 13:44:54 +0000
Received: by outflank-mailman (input) for mailman id 42732;
 Wed, 02 Dec 2020 13:44:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kkSRB-0005VD-Ca
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 13:44:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kkSR9-0001vV-WD; Wed, 02 Dec 2020 13:44:52 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kkSR9-0007Jk-QY; Wed, 02 Dec 2020 13:44:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Subject: Re: [PATCH v2 2/8] xen/arm: revert atomic operation related
 command-queue insertion patch
To: Rahul Singh <rahul.singh@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <4a0ca6d03b5f1f5b30c4cdbdff0688cea84d9e91.1606406359.git.rahul.singh@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <39a9c619-d7b2-eca0-688c-5f35546e59fa@xen.org>
Date: Wed, 2 Dec 2020 13:44:49 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <4a0ca6d03b5f1f5b30c4cdbdff0688cea84d9e91.1606406359.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Rahul,

On 26/11/2020 17:02, Rahul Singh wrote:
> Linux SMMUv3 code implements the command-queue insertion based on
> atomic operations implemented in Linux. The atomic functions used by
> the command-queue insertion are not implemented in Xen; therefore,
> revert the patch that implemented the command-queue insertion based
> on atomic operations.

This commit message explains why we revert but not the consequences of 
the revert. Can you outline whether there are any and why they are fine?

I am also interested in having a list of *must*-haves for the driver to 
be out of tech preview.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 13:45:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 13:45:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42735.76903 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkSRe-0005ab-9t; Wed, 02 Dec 2020 13:45:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42735.76903; Wed, 02 Dec 2020 13:45:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkSRe-0005aU-6c; Wed, 02 Dec 2020 13:45:22 +0000
Received: by outflank-mailman (input) for mailman id 42735;
 Wed, 02 Dec 2020 13:45:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkSRc-0005aJ-IO; Wed, 02 Dec 2020 13:45:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkSRc-0001wB-EC; Wed, 02 Dec 2020 13:45:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkSRc-0005NT-3I; Wed, 02 Dec 2020 13:45:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kkSRc-0003QU-2q; Wed, 02 Dec 2020 13:45:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157138-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.10-testing test] 157138: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.10-testing:test-amd64-amd64-qemuu-freebsd11-amd64:guest-localmigrate/x10:fail:heisenbug
    xen-4.10-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=1d72d9915edff0dd41f601bbb0b1f83c02ff1689
X-Osstest-Versions-That:
    xen=17ec9b43af072051edb1380a5eb459a382dcafa3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 02 Dec 2020 13:45:20 +0000

flight 157138 xen-4.10-testing real [real]
flight 157159 xen-4.10-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157138/
http://logs.test-lab.xenproject.org/osstest/logs/157159/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-qemuu-freebsd11-amd64 19 guest-localmigrate/x10 fail pass in 157159-retest

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  3 hosts-allocate               fail  like 156985
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 156985
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 156985
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156985
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156985
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156985
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156985
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156985
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156985
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156985
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156985
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156985
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156985
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156985
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  1d72d9915edff0dd41f601bbb0b1f83c02ff1689
baseline version:
 xen                  17ec9b43af072051edb1380a5eb459a382dcafa3

Last test of basis   156985  2020-11-24 13:35:57 Z    8 days
Testing same since   157138  2020-12-01 17:05:42 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   17ec9b43af..1d72d9915e  1d72d9915edff0dd41f601bbb0b1f83c02ff1689 -> stable-4.10


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 13:46:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 13:46:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42742.76918 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkSSh-0005kI-Qs; Wed, 02 Dec 2020 13:46:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42742.76918; Wed, 02 Dec 2020 13:46:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkSSh-0005kA-Nc; Wed, 02 Dec 2020 13:46:27 +0000
Received: by outflank-mailman (input) for mailman id 42742;
 Wed, 02 Dec 2020 13:46:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kkSSh-0005k4-7B
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 13:46:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kkSSg-0001xP-0c; Wed, 02 Dec 2020 13:46:26 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kkSSf-0007cx-Oa; Wed, 02 Dec 2020 13:46:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Xcmsw2LVMqxENrwEyIDOWOVwj7CAt2X7+DboZoaTOKs=; b=Sgn6516yAjhQ7Y4KTR3wRxnjIh
	hxkTAixzaxsXj2rRIACGyPnzHuVWmTB7eqn9Umqk/yutK+f+8mFFTKCiRQ8JgsI/3unL4YitXmK+i
	kA1uTjG4a7p7y8wmUgVI4ITHETFrwwdIA6GyRlLrXBHBs9f9j98Vm3b/8F2LTjOBNYNs=;
Subject: Re: [PATCH v2 3/8] xen/arm: revert patch related to XArray
To: Rahul Singh <rahul.singh@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <612c1adabc1c26a539abf0dc05ea20b51e66e85f.1606406359.git.rahul.singh@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <266b918c-b9c4-e067-b8dc-4e879c913af5@xen.org>
Date: Wed, 2 Dec 2020 13:46:24 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <612c1adabc1c26a539abf0dc05ea20b51e66e85f.1606406359.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Rahul,

On 26/11/2020 17:02, Rahul Singh wrote:
> XArray is not implemented in Xen; revert the patch that introduced the
> XArray code in the SMMUv3 driver.

Similar to the atomic revert, you are explaining why the revert is 
needed but not its consequences. I think it is quite important to have 
them outlined in the commit message, as it looks like this means the 
SMMU driver would not scale.

> 
> Once XArray is implemented in Xen, this patch can be applied again.

What's the plan for that?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 13:58:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 13:58:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42753.76930 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkSdn-0006oC-S2; Wed, 02 Dec 2020 13:57:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42753.76930; Wed, 02 Dec 2020 13:57:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkSdn-0006o5-P8; Wed, 02 Dec 2020 13:57:55 +0000
Received: by outflank-mailman (input) for mailman id 42753;
 Wed, 02 Dec 2020 13:57:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kkSdm-0006o0-Bh
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 13:57:54 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kkSdl-0002Dg-2d; Wed, 02 Dec 2020 13:57:53 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kkSdk-0008MR-Qq; Wed, 02 Dec 2020 13:57:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=91dOSfOWzqw82XPyEpn2waOosAi1g4ire86s59vJbzE=; b=iTGm2lOGrwcJA0B1xt23KGsLdE
	+Zv8WJmiOP3qEpiFoE5cAPC/xcsEoU8n998b9MMcedZ407D7CQVPGZKOmFcGj3+w5bffFKcGa84c2
	ET8PcFNJn8TwdHKFQlllLMSlQvePMmYm/fVsROScGW2o6Gbl88ZmuWVK9UZtxR6OEN9s=;
Subject: Re: [PATCH v2 5/8] xen/arm: Remove support for PCI ATS on SMMUv3
To: Rahul Singh <rahul.singh@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <78079d1d6e9d2e7e87125da131e9bdb5809b838a.1606406359.git.rahul.singh@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <0c831595-0ee6-2cd1-cc20-18329cae6b7b@xen.org>
Date: Wed, 2 Dec 2020 13:57:51 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <78079d1d6e9d2e7e87125da131e9bdb5809b838a.1606406359.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Rahul,

On 26/11/2020 17:02, Rahul Singh wrote:
> PCI ATS functionality is not implemented or tested on Arm.

I agree that short term, this is not going to be implemented. However, I 
do expect this feature to be used medium term (most likely before the 
driver is out of tech preview).

So I think it would be best to keep the code around.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 14:11:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 14:11:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42759.76943 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkSqc-0000J0-3s; Wed, 02 Dec 2020 14:11:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42759.76943; Wed, 02 Dec 2020 14:11:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkSqb-0000It-VC; Wed, 02 Dec 2020 14:11:09 +0000
Received: by outflank-mailman (input) for mailman id 42759;
 Wed, 02 Dec 2020 14:11:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kkSqa-0000In-Of
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 14:11:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kkSqa-0002Z0-D7; Wed, 02 Dec 2020 14:11:08 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kkSqa-0001AL-3l; Wed, 02 Dec 2020 14:11:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=JhAUXmvFa2YAXnelqSqr2MrMrVaCGJG+Ro8or6l66i8=; b=qs0dzSljClMkN2Q9r++2ip3B/l
	pYtK4hBzwJN5KKEG0Ie9PiQZwT/XIsTGh9GVJNKBDwC4m+CeCgfB4IJ6oH/LJjuv8v4tqRjqRYa6T
	ODRxHlCdRspSBalVjmnGzLv5NWpKwciNbX1/yxSA7DRFh+VL+odxJSpJQrGoUm2iH4gs=;
Subject: Re: [PATCH v2 4/8] xen/arm: Remove support for MSI on SMMUv3
To: Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <cfc6cbe23f05162d5c62df9db09fef3f8e0b8e14.1606406359.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2012011621380.1100@sstabellini-ThinkPad-T480s>
 <alpine.DEB.2.21.2012011639230.1100@sstabellini-ThinkPad-T480s>
 <D79D7DC5-649D-4517-A8CA-B13632595DA5@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <5689cfe7-ca16-6540-d394-00d3f60f4f5f@xen.org>
Date: Wed, 2 Dec 2020 14:11:06 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <D79D7DC5-649D-4517-A8CA-B13632595DA5@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Rahul,

On 02/12/2020 13:12, Rahul Singh wrote:
> Hello Stefano,
> 
>> On 2 Dec 2020, at 12:40 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>
>> On Tue, 1 Dec 2020, Stefano Stabellini wrote:
>>> On Thu, 26 Nov 2020, Rahul Singh wrote:
>>>> Xen does not support MSI on Arm platforms; therefore remove the MSI
>>>> support from the SMMUv3 driver.
>>>>
>>>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>>>
>>> I wonder if it makes sense to #ifdef CONFIG_MSI this code instead of
>>> removing it completely.
>>
>> One more thought: could this patch be achieved by reverting
>> 166bdbd23161160f2abcea70621adba179050bee ? If this patch could be done
>> by a couple of revert, it would be great to say it in the commit
>> message.
>>
>   Ok will add in next version.
> 
>>
>>> In the past, we tried to keep the entire file as textually similar to
>>> the original Linux driver as possible to make it easier to backport
>>> features and fixes. So, in this case we would probably not even use an
>>> #ifdef but maybe something like:
>>>
>>>   if (/* msi_enabled */ 0)
>>>       return;
>>>
>>> at the beginning of arm_smmu_setup_msis.
>>>
>>>
>>> However, that strategy didn't actually work very well because backports
>>> have proven difficult to do anyway. So at that point we might as well at
>>> least have clean code in Xen and do the changes properly.

It was difficult because Linux decided to rework how IOMMU drivers 
work. I agree the risk is still there, and therefore clean code would be 
better, with some caveats (see below).
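
The two options discussed in the quoted text above (a build-time #ifdef 
versus a constant, never-taken runtime guard that keeps the file 
textually close to the Linux original) can be contrasted in a minimal, 
hypothetical sketch. The names CONFIG_HAS_MSI, setup_msis_ifdef and 
setup_msis_guard are stand-ins for illustration, not the actual driver 
code:

```c
// Hypothetical sketch (stand-in names, not the actual Xen SMMUv3 driver)
// of the two ways to disable MSI setup discussed in this thread.
#include <assert.h>
#include <stdio.h>

static int msis_configured = 0;

// Approach 1: preprocessor guard -- the MSI body vanishes from the build
// entirely when CONFIG_HAS_MSI (a stand-in Kconfig symbol) is undefined.
static void setup_msis_ifdef(void)
{
#ifdef CONFIG_HAS_MSI
    msis_configured = 1;
#endif
}

// Approach 2: runtime guard kept in the source so the textual diff
// against the upstream Linux file stays small; the condition is a
// compile-time constant, so the dead MSI-setup code is eliminated anyway.
static void setup_msis_guard(void)
{
    int msi_enabled = 0;    // Xen on Arm: MSIs unsupported for now

    if ( !msi_enabled )
        return;
    msis_configured = 1;    // unreachable while msi_enabled is 0
}
```

Either way, call sites stay unchanged; the trade-off is purely between a 
smaller binary surface (approach 1) and a smaller diff against Linux 
(approach 2).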

> 
> The main reason to remove feature/code that is not usable in Xen is to have clean code.

I agree that short term this feature will not be usable. However, I 
think we need a {medium, long}-term plan to avoid extra effort in the 
future, should the driver evolve in a way that makes reverting the 
revert impossible.

Therefore I would prefer to keep both the MSI and PCI ATS code present, 
as they are going to be useful/necessary on some platforms. It doesn't 
matter that they don't work yet, because the driver will be in tech 
preview.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 14:34:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 14:34:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42766.76955 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkTD3-0002Jk-1S; Wed, 02 Dec 2020 14:34:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42766.76955; Wed, 02 Dec 2020 14:34:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkTD2-0002Jd-Tu; Wed, 02 Dec 2020 14:34:20 +0000
Received: by outflank-mailman (input) for mailman id 42766;
 Wed, 02 Dec 2020 14:34:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8zz6=FG=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kkTD1-0002JY-CW
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 14:34:19 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.13.89]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1b5146b8-5bbe-4742-b875-35a99ccda972;
 Wed, 02 Dec 2020 14:34:15 +0000 (UTC)
Received: from DB6P18901CA0014.EURP189.PROD.OUTLOOK.COM (2603:10a6:4:16::24)
 by DB6PR0801MB2039.eurprd08.prod.outlook.com (2603:10a6:4:79::22) with
 Microsoft SMTP Server (version=TLS1_2,
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 7/8] xen/arm: Remove Linux specific code that is not
 usable in XEN
Thread-Topic: [PATCH v2 7/8] xen/arm: Remove Linux specific code that is not
 usable in XEN
Thread-Index: AQHWxBX830xFguE4YkK2klATz/ucYKnjEigAgADWCQA=
Date: Wed, 2 Dec 2020 14:34:05 +0000
Message-ID: <804B8C95-FF10-4FE5-AC82-9959EC9B8041@arm.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <1d9da8ed4845aeb9e86a5ce6750b811bd7e2020e.1606406359.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2012011724350.1100@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2012011724350.1100@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US

Hello Stefano,

> On 2 Dec 2020, at 1:48 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Thu, 26 Nov 2020, Rahul Singh wrote:
>> struct io_pgtable_ops, struct io_pgtable_cfg, struct iommu_flush_ops,
>> and struct iommu_ops related code are linux specific.
>>
>> Remove code related to above struct as code is dead code in XEN.
>
> There are still instances of struct io_pgtable_cfg after applying this
> patch in the following functions:
> - arm_smmu_domain_finalise_s2
> - arm_smmu_domain_finalise
>

Ok. I will remove the instances.
>
>
>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>> ---
>> xen/drivers/passthrough/arm/smmu-v3.c | 457 --------------------------
>> 1 file changed, 457 deletions(-)
>>
>> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
>> index 40e3890a58..55d1cba194 100644
>> --- a/xen/drivers/passthrough/arm/smmu-v3.c
>> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
>> @@ -599,7 +593,6 @@ struct arm_smmu_domain {
>> 	struct arm_smmu_device		*smmu;
>> 	struct mutex			init_mutex; /* Protects smmu pointer */
>>
>> -	struct io_pgtable_ops		*pgtbl_ops;
>> 	bool				non_strict;
>>
>> 	enum arm_smmu_domain_stage	stage;
>> @@ -1297,74 +1290,6 @@ static void arm_smmu_tlb_inv_context(void *cookie)
>> 	arm_smmu_cmdq_issue_sync(smmu);
>> }
>>
>> -static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
>> -					  size_t granule, bool leaf, void *cookie)
>> -{
>> -	struct arm_smmu_domain *smmu_domain = cookie;
>> -	struct arm_smmu_device *smmu = smmu_domain->smmu;
>> -	struct arm_smmu_cmdq_ent cmd = {
>> -		.tlbi = {
>> -			.leaf	= leaf,
>> -			.addr	= iova,
>> -		},
>> -	};
>> -
>> -	cmd.opcode	= CMDQ_OP_TLBI_S2_IPA;
>> -	cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
>> -
>> -	do {
>> -		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
>> -		cmd.tlbi.addr += granule;
>> -	} while (size -= granule);
>> -}
>> -
>> -static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather,
>> -					 unsigned long iova, size_t granule,
>> -					 void *cookie)
>> -{
>> -	arm_smmu_tlb_inv_range_nosync(iova, granule, granule, true, cookie);
>> -}
>> -
>> -static void arm_smmu_tlb_inv_walk(unsigned long iova, size_t size,
>> -				  size_t granule, void *cookie)
>> -{
>> -	struct arm_smmu_domain *smmu_domain = cookie;
>> -	struct arm_smmu_device *smmu = smmu_domain->smmu;
>> -
>> -	arm_smmu_tlb_inv_range_nosync(iova, size, granule, false, cookie);
>> -	arm_smmu_cmdq_issue_sync(smmu);
>> -}
>> -
>> -static void arm_smmu_tlb_inv_leaf(unsigned long iova, size_t size,
>> -				  size_t granule, void *cookie)
>> -{
>> -	struct arm_smmu_domain *smmu_domain = cookie;
>> -	struct arm_smmu_device *smmu = smmu_domain->smmu;
>> -
>> -	arm_smmu_tlb_inv_range_nosync(iova, size, granule, true, cookie);
>> -	arm_smmu_cmdq_issue_sync(smmu);
>> -}
>> -
>> -static const struct iommu_flush_ops arm_smmu_flush_ops = {
>> -	.tlb_flush_all	= arm_smmu_tlb_inv_context,
>> -	.tlb_flush_walk = arm_smmu_tlb_inv_walk,
>> -	.tlb_flush_leaf = arm_smmu_tlb_inv_leaf,
>> -	.tlb_add_page	= arm_smmu_tlb_inv_page_nosync,
>> -};
>> -
>> -/* IOMMU API */
>> -static bool arm_smmu_capable(enum iommu_cap cap)
>> -{
>> -	switch (cap) {
>> -	case IOMMU_CAP_CACHE_COHERENCY:
>> -		return true;
>> -	case IOMMU_CAP_NOEXEC:
>> -		return true;
>> -	default:
>> -		return false;
>> -	}
>> -}
>> -
>> static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
>> {
>> 	struct arm_smmu_domain *smmu_domain;
>> @@ -1421,7 +1346,6 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
>> 	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
>>
>> 	iommu_put_dma_cookie(domain);
>> -	free_io_pgtable_ops(smmu_domain->pgtbl_ops);
>>
>> 	if (cfg->vmid)
>> 		arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid);
>> @@ -1429,7 +1353,6 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
>> 	kfree(smmu_domain);
>> }
>>
>> -
>> static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
>> 				       struct arm_smmu_master *master,
>> 				       struct io_pgtable_cfg *pgtbl_cfg)
>> @@ -1437,7 +1360,6 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
>> 	int vmid;
>> 	struct arm_smmu_device *smmu = smmu_domain->smmu;
>> 	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
>> -	typeof(&pgtbl_cfg->arm_lpae_s2_cfg.vtcr) vtcr;
>>
>> 	vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
>> 	if (vmid < 0)
>> @@ -1461,20 +1383,12 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
>> {
>> 	int ret;
>> 	unsigned long ias, oas;
>> -	enum io_pgtable_fmt fmt;
>> -	struct io_pgtable_cfg pgtbl_cfg;
>> -	struct io_pgtable_ops *pgtbl_ops;
>> 	int (*finalise_stage_fn)(struct arm_smmu_domain *,
>> 				 struct arm_smmu_master *,
>> 				 struct io_pgtable_cfg *);
>> 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>> 	struct arm_smmu_device *smmu = smmu_domain->smmu;
>>
>> -	if (domain->type == IOMMU_DOMAIN_IDENTITY) {
>> -		smmu_domain->stage = ARM_SMMU_DOMAIN_BYPASS;
>> -		return 0;
>> -	}
>> -
>> 	/* Restrict the stage to what we can actually support */
>> 	smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
>>
>> @@ -1483,40 +1397,17 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
>> 	case ARM_SMMU_DOMAIN_S2:
>> 		ias = smmu->ias;
>> 		oas = smmu->oas;
>> -		fmt = ARM_64_LPAE_S2;
>> 		finalise_stage_fn = arm_smmu_domain_finalise_s2;
>> 		break;
>> 	default:
>> 		return -EINVAL;
>> 	}
>>
>> -	pgtbl_cfg = (struct io_pgtable_cfg) {
>> -		.pgsize_bitmap	= smmu->pgsize_bitmap,
>> -		.ias		= ias,
>> -		.oas		= oas,
>> -		.coherent_walk	= smmu->features & ARM_SMMU_FEAT_COHERENCY,
>> -		.tlb		= &arm_smmu_flush_ops,
>> -		.iommu_dev	= smmu->dev,
>> -	};
>> -
>> -	if (smmu_domain->non_strict)
>> -		pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
>> -
>> -	pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain);
>> -	if (!pgtbl_ops)
>> -		return -ENOMEM;
>> -
>> -	domain->pgsize_bitmap = pgtbl_cfg.pgsize_bitmap;
>> -	domain->geometry.aperture_end = (1UL << pgtbl_cfg.ias) - 1;
>> -	domain->geometry.force_aperture = true;
>> -
>> 	ret = finalise_stage_fn(smmu_domain, master, &pgtbl_cfg);
>> 	if (ret < 0) {
>> -		free_io_pgtable_ops(pgtbl_ops);
>> 		return ret;
>> 	}
>>
>> -	smmu_domain->pgtbl_ops = pgtbl_ops;
>> 	return 0;
>> }
>>
>> @@ -1626,71 +1517,6 @@ out_unlock:
>> 	return ret;
>> }
>>
>> -static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
>> -			phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
>> -{
>> -	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
>> -
>> -	if (!ops)
>> -		return -ENODEV;
>> -
>> -	return ops->map(ops, iova, paddr, size, prot, gfp);
>> -}
>> -
>> -static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
>> -			     size_t size, struct iommu_iotlb_gather *gather)
>> -{
>> -	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>> -	struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
>> -
>> -	if (!ops)
>> -		return 0;
>> -
>> -	return ops->unmap(ops, iova, size, gather);
>> -}
>> -
>> -static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain)
>> -{
>> -	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>> -
>> -	if (smmu_domain->smmu)
>> -		arm_smmu_tlb_inv_context(smmu_domain);
>> -}
>> -
>> -static void arm_smmu_iotlb_sync(struct iommu_domain *domain,
>> -				struct iommu_iotlb_gather *gather)
>> -{
>> -	struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu;
>> -
>> -	if (smmu)
>> -		arm_smmu_cmdq_issue_sync(smmu);
>> -}
>> -
>> -static phys_addr_t
>> -arm_smmu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
>> -{
>> -	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
>> -
>> -	if (domain->type == IOMMU_DOMAIN_IDENTITY)
>> -		return iova;
>> -
>> -	if (!ops)
>> -		return 0;
>> -
>> -	return ops->iova_to_phys(ops, iova);
>> -}
>> -
>> -static struct platform_driver arm_smmu_driver;
>> -
>> -static
>> -struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
>> -{
>> -	struct device *dev = driver_find_device_by_fwnode(&arm_smmu_driver.driver,
>> -							  fwnode);
>> -	put_device(dev);
>> -	return dev ? dev_get_drvdata(dev) : NULL;
>> -}
>> -
>> static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
>> {
>> 	unsigned long limit = smmu->strtab_cfg.num_l1_ents;
>> @@ -1701,206 +1527,6 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
>> 	return sid < limit;
>> }
>>
>> -static struct iommu_ops arm_smmu_ops;
>> -
>> -static struct iommu_device *arm_smmu_probe_device(struct device *dev)
>> -{
>> -	int i, ret;
>> -	struct arm_smmu_device *smmu;
>> -	struct arm_smmu_master *master;
>> -	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
>> -
>> -	if (!fwspec || fwspec->ops != &arm_smmu_ops)
>> -		return ERR_PTR(-ENODEV);
>> -
>> -	if (WARN_ON_ONCE(dev_iommu_priv_get(dev)))
>> -		return ERR_PTR(-EBUSY);
>> -
>> -	smmu = arm_smmu_get_by_fwnode(fwspec->iommu_fwnode);
>> -	if (!smmu)
>> -		return ERR_PTR(-ENODEV);
>> -
>> -	master = kzalloc(sizeof(*master), GFP_KERNEL);
>> -	if (!master)
>> -		return ERR_PTR(-ENOMEM);
>> -
>> -	master->dev = dev;
>> -	master->smmu = smmu;
>> -	master->sids = fwspec->ids;
>> -	master->num_sids = fwspec->num_ids;
>> -	dev_iommu_priv_set(dev, master);
>> -
>> -	/* Check the SIDs are in range of the SMMU and our stream table */
>> -	for (i = 0; i < master->num_sids; i++) {
>> -		u32 sid = master->sids[i];
>> -
>> -		if (!arm_smmu_sid_in_range(smmu, sid)) {
>> -			ret = -ERANGE;
>> -			goto err_free_master;
>> -		}
>> -
>> -		/* Ensure l2 strtab is initialised */
>> -		if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) {
>> -			ret = arm_smmu_init_l2_strtab(smmu, sid);
>> -			if (ret)
>> -				goto err_free_master;
>> -		}
>> -	}
>> -
>> -	return &smmu->iommu;
>> -
>> -err_free_master:
>> -	kfree(master);
>> -	dev_iommu_priv_set(dev, NULL);
>> -	return ERR_PTR(ret);
>> -}
>> -
>> -static void arm_smmu_release_device(struct device *dev)
>> -{
>> -	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
>> -	struct arm_smmu_master *master;
>> -
>> -	if (!fwspec || fwspec->ops != &arm_smmu_ops)
>> -		return;
>> -
>> -	master = dev_iommu_priv_get(dev);
>> -	arm_smmu_detach_dev(master);
>> -	kfree(master);
>> -	iommu_fwspec_free(dev);
>> -}
>> -
>> -static struct iommu_group *arm_smmu_device_group(struct device *dev)
>> -{
>> -	struct iommu_group *group;
>> -
>> -	/*
>> -	 * We don't support devices sharing stream IDs other than PCI RID
>> -	 * aliases, since the necessary ID-to-device lookup becomes rather
>> -	 * impractical given a potential sparse 32-bit stream ID space.
>> -	 */
>> -	if (dev_is_pci(dev))
>> -		group = pci_device_group(dev);
>> -	else
>> -		group = generic_device_group(dev);
>> -
>> -	return group;
>> -}
>> -
>> -static int arm_smmu_domain_get_attr(struct iommu_domain *domain,
>> -				    enum iommu_attr attr, void *data)
>> -{
>> -	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>> -
>> -	switch (domain->type) {
>> -	case IOMMU_DOMAIN_UNMANAGED:
>> -		switch (attr) {
>> -		case DOMAIN_ATTR_NESTING:
>> -			*(int *)data = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED);
>> -			return 0;
>> -		default:
>> -			return -ENODEV;
>> -		}
>> -		break;
>> -	case IOMMU_DOMAIN_DMA:
>> -		switch (attr) {
>> -		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
>> -			*(int *)data = smmu_domain->non_strict;
>> -			return 0;
>> -		default:
>> -			return -ENODEV;
>> -		}
>> -		break;
>> -	default:
>> -		return -EINVAL;
>> -	}
>> -}
>> -
>> -static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
>> -				    enum iommu_attr attr, void *data)
>> -{
>> -	int ret = 0;
>> -	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>> -
>> -	mutex_lock(&smmu_domain->init_mutex);
>> -
>> -	switch (domain->type) {
>> -	case IOMMU_DOMAIN_UNMANAGED:
>> -		switch (attr) {
>> -		case DOMAIN_ATTR_NESTING:
>> -			if (smmu_domain->smmu) {
>> -				ret = -EPERM;
>> -				goto out_unlock;
>> -			}
>> -
>> -			if (*(int *)data)
>> -				smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED;
>> -			else
>> -				smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
>> -			break;
>> -		default:
>> -			ret = -ENODEV;
>> -		}
>> -		break;
>> -	case IOMMU_DOMAIN_DMA:
>> -		switch(attr) {
>> -		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
>> -			smmu_domain->non_strict = *(int *)data;
>> -			break;
>> -		default:
>> -			ret = -ENODEV;
>> -		}
>> -		break;
>> -	default:
>> -		ret = -EINVAL;
>> -	}
>> -
>> -out_unlock:
>> -	mutex_unlock(&smmu_domain->init_mutex);
>> -	return ret;
>> -}
>> -
>> -static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args)
>> -{
>> -	return iommu_fwspec_add_ids(dev, args->args, 1);
>> -}
>> -
>> -static void arm_smmu_get_resv_regions(struct device *dev,
>> -				      struct list_head *head)
>> -{
>> -	struct iommu_resv_region *region;
>> -	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
>> -
>> -	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
>> -					 prot, IOMMU_RESV_SW_MSI);
>> -	if (!region)
>> -		return;
>> -
>> -	list_add_tail(&region->list, head);
>> -
>> -	iommu_dma_get_resv_regions(dev, head);
>> -}
>
> Arguably this could have been removed previously as part of the MSI
> patch, but that's OK either way.

Ack.
>
>
>> -static struct iommu_ops arm_smmu_ops = {
>> -	.capable		= arm_smmu_capable,
>> -	.domain_alloc		= arm_smmu_domain_alloc,
>> -	.domain_free		= arm_smmu_domain_free,
>> -	.attach_dev		= arm_smmu_attach_dev,
>> -	.map			= arm_smmu_map,
>> -	.unmap			= arm_smmu_unmap,
>> -	.flush_iotlb_all	= arm_smmu_flush_iotlb_all,
>> -	.iotlb_sync		= arm_smmu_iotlb_sync,
>> -	.iova_to_phys		= arm_smmu_iova_to_phys,
>> -	.probe_device		= arm_smmu_probe_device,
>> -	.release_device		= arm_smmu_release_device,
>> -	.device_group		= arm_smmu_device_group,
>> -	.domain_get_attr	= arm_smmu_domain_get_attr,
>> -	.domain_set_attr	= arm_smmu_domain_set_attr,
>> -	.of_xlate		= arm_smmu_of_xlate,
>> -	.get_resv_regions	= arm_smmu_get_resv_regions,
>> -	.put_resv_regions	= generic_iommu_put_resv_regions,
>> -	.pgsize_bitmap		= -1UL, /* Restricted during device attach */
>> -};
>> -
>> /* Probing and initialisation functions */
>> static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
>> 				   struct arm_smmu_queue *q,
>> @@ -2515,21 +2139,10 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>> 	default:
>> 		dev_info(smmu->dev,
>> 			"unknown output address size. Truncating to 48-bit\n");
>> -		fallthrough;
>> 	case IDR5_OAS_48_BIT:
>> 		smmu->oas = 48;
>> 	}
>>
>> -	if (arm_smmu_ops.pgsize_bitmap == -1UL)
>> -		arm_smmu_ops.pgsize_bitmap = smmu->pgsize_bitmap;
>> -	else
>> -		arm_smmu_ops.pgsize_bitmap |= smmu->pgsize_bitmap;
>> -
>> -	/* Set the DMA mask for our table walker */
>> -	if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(smmu->oas)))
>> -		dev_warn(smmu->dev,
>> -			 "failed to set DMA mask for table walker\n");
>> -
>> 	smmu->ias = max(smmu->ias, smmu->oas);
>>
>> 	dev_info(smmu->dev, "ias %lu-bit, oas %lu-bit (features 0x%08x)\n",
>> @@ -2595,9 +2208,6 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev,
>>
>> 	parse_driver_options(smmu);
>>
>> -	if (of_dma_is_coherent(dev->of_node))
>> -		smmu->features |= ARM_SMMU_FEAT_COHERENCY;
>> -
>
> Why this change? The ARM_SMMU_FEAT_COHERENCY flag is still used in
> arm_smmu_device_hw_probe.

I removed this because it is Linux specific. I will also remove the use of the
ARM_SMMU_FEAT_COHERENCY flag in arm_smmu_device_hw_probe.

Regards,
Rahul




From xen-devel-bounces@lists.xenproject.org Wed Dec 02 14:37:55 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157150-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157150: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=0436a144681ae4e0790932e31347d3471823c12e
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 02 Dec 2020 14:37:51 +0000

flight 157150 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157150/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              0436a144681ae4e0790932e31347d3471823c12e
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  145 days
Failing since        151818  2020-07-11 04:18:52 Z  144 days  139 attempts
Testing same since   157150  2020-12-02 04:20:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 30241 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 14:40:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 14:40:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42779.76985 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkTIa-00030g-8b; Wed, 02 Dec 2020 14:40:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42779.76985; Wed, 02 Dec 2020 14:40:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkTIa-0002zx-4s; Wed, 02 Dec 2020 14:40:04 +0000
Received: by outflank-mailman (input) for mailman id 42779;
 Wed, 02 Dec 2020 14:40:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kkTIX-0002eh-TN
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 14:40:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kkTIX-0003A7-I1; Wed, 02 Dec 2020 14:40:01 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kkTIX-00031E-Af; Wed, 02 Dec 2020 14:40:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=+hRPY9g6YRAHj9s3GL8eUWDrrun+ZidHmmbIACfNpS8=; b=u1zcfHICyNIyvWtZUuBx2wgRc/
	s0O9aNWb1zYdImVmytsJtRJz3TDWxJvumQi21j7fauxpUzOWx4X/KconPnZL0F2AD8OydVNMz+b+4
	P1ZqXYnDqxzJXp5IOSQ2aWOozkGviOLNNLzG6dFP3oIdjWGUMiCkgbeVrLA2k0G2zs60=;
Subject: Re: [PATCH v2 7/8] xen/arm: Remove Linux specific code that is not
 usable in XEN
To: Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <1d9da8ed4845aeb9e86a5ce6750b811bd7e2020e.1606406359.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2012011724350.1100@sstabellini-ThinkPad-T480s>
 <804B8C95-FF10-4FE5-AC82-9959EC9B8041@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <ad58e8e3-2e50-b355-9d6a-d6b8313aaee2@xen.org>
Date: Wed, 2 Dec 2020 14:39:59 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <804B8C95-FF10-4FE5-AC82-9959EC9B8041@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Rahul,

On 02/12/2020 14:34, Rahul Singh wrote:
>>> 	dev_info(smmu->dev, "ias %lu-bit, oas %lu-bit (features 0x%08x)\n",
>>> @@ -2595,9 +2208,6 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev,
>>>
>>> 	parse_driver_options(smmu);
>>>
>>> -	if (of_dma_is_coherent(dev->of_node))
>>> -		smmu->features |= ARM_SMMU_FEAT_COHERENCY;
>>> -
>>
>> Why this change? The ARM_SMMU_FEAT_COHERENCY flag is still used in
>> arm_smmu_device_hw_probe.
> 
> I removed this as it is Linux specific. I will remove the ARM_SMMU_FEAT_COHERENCY flag used in arm_smmu_device_hw_probe.

 From my understanding, ARM_SMMU_FEAT_COHERENCY indicates whether the 
SMMU page-table walker will snoop the cache. If the flag is not set, 
then Xen will have to clean every updated p2m entry to the PoC.

Therefore, I think we need to keep this code.

If we don't need to keep the code, then I think the reason should be 
explained in the commit message.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 14:46:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 14:46:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42788.76999 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkTOP-0003cK-1S; Wed, 02 Dec 2020 14:46:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42788.76999; Wed, 02 Dec 2020 14:46:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkTOO-0003cD-Tu; Wed, 02 Dec 2020 14:46:04 +0000
Received: by outflank-mailman (input) for mailman id 42788;
 Wed, 02 Dec 2020 14:46:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kkTOM-0003c5-Rj
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 14:46:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kkTOL-0003Ho-CI; Wed, 02 Dec 2020 14:46:01 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kkTOK-0003cm-SA; Wed, 02 Dec 2020 14:46:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=V/E2Z8jxMNoxX9cvZCbUJGhzLgA3MJqVZSMuZyxfzUs=; b=I935RW6QVKd9sCSduM7EW2PnN8
	ak13rNxOFaEMTlGokQEU0X2VemKUie+6B4QWCY+HKflc0HkCcKGU7ZPdy3rlp683Mf3qsbS5/BlaC
	e8CFyNrOMg8jc9COZXHtHmkntXU1RFJtNLvO0tv7+Rvm7bXzPHWSFQqM2kqUasfm6wVU=;
Subject: Re: [PATCH v2 7/8] xen/arm: Remove Linux specific code that is not
 usable in XEN
To: Rahul Singh <rahul.singh@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <1d9da8ed4845aeb9e86a5ce6750b811bd7e2020e.1606406359.git.rahul.singh@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <cd74f2a7-7836-ef90-9cd8-857068adb0f5@xen.org>
Date: Wed, 2 Dec 2020 14:45:59 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <1d9da8ed4845aeb9e86a5ce6750b811bd7e2020e.1606406359.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 26/11/2020 17:02, Rahul Singh wrote:
> struct io_pgtable_ops, struct io_pgtable_cfg, struct iommu_flush_ops,
> and struct iommu_ops related code are linux specific.

So the assumption is that we are only going to support sharing the 
page-tables between the P2M and the IOMMU. That's probably fine short 
term, but long term we are going to need to unshare the page-tables 
(there are issues on some platforms if the ITS doorbell is accessed by 
the processors).


I am ok with removing anything related to the unsharing code. But I 
think it should be clarified here.

> 
> Remove code related to above struct as code is dead code in XEN.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> ---
>   xen/drivers/passthrough/arm/smmu-v3.c | 457 --------------------------
>   1 file changed, 457 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> index 40e3890a58..55d1cba194 100644
> --- a/xen/drivers/passthrough/arm/smmu-v3.c
> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> @@ -402,13 +402,7 @@
>   #define ARM_SMMU_CMDQ_SYNC_TIMEOUT_US	1000000 /* 1s! */
>   #define ARM_SMMU_CMDQ_SYNC_SPIN_COUNT	10
>   
> -#define MSI_IOVA_BASE			0x8000000
> -#define MSI_IOVA_LENGTH			0x100000
> -
>   static bool disable_bypass = 1;
> -module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO);
> -MODULE_PARM_DESC(disable_bypass,
> -	"Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");

Per your commit message, this doesn't look related to what you intend to 
remove. Maybe your commit message should be updated?

>   
>   enum pri_resp {
>   	PRI_RESP_DENY = 0,
> @@ -599,7 +593,6 @@ struct arm_smmu_domain {
>   	struct arm_smmu_device		*smmu;
>   	struct mutex			init_mutex; /* Protects smmu pointer */
>   
> -	struct io_pgtable_ops		*pgtbl_ops;
>   	bool				non_strict;
>   
>   	enum arm_smmu_domain_stage	stage;
> @@ -1297,74 +1290,6 @@ static void arm_smmu_tlb_inv_context(void *cookie)
>   	arm_smmu_cmdq_issue_sync(smmu);
>   }
>   
> -static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
> -					  size_t granule, bool leaf, void *cookie)
> -{
> -	struct arm_smmu_domain *smmu_domain = cookie;
> -	struct arm_smmu_device *smmu = smmu_domain->smmu;
> -	struct arm_smmu_cmdq_ent cmd = {
> -		.tlbi = {
> -			.leaf	= leaf,
> -			.addr	= iova,
> -		},
> -	};
> -
> -	cmd.opcode	= CMDQ_OP_TLBI_S2_IPA;
> -	cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
> -
> -	do {
> -		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> -		cmd.tlbi.addr += granule;
> -	} while (size -= granule);
> -}
> -
> -static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather,
> -					 unsigned long iova, size_t granule,
> -					 void *cookie)
> -{
> -	arm_smmu_tlb_inv_range_nosync(iova, granule, granule, true, cookie);
> -}
> -
> -static void arm_smmu_tlb_inv_walk(unsigned long iova, size_t size,
> -				  size_t granule, void *cookie)
> -{
> -	struct arm_smmu_domain *smmu_domain = cookie;
> -	struct arm_smmu_device *smmu = smmu_domain->smmu;
> -
> -	arm_smmu_tlb_inv_range_nosync(iova, size, granule, false, cookie);
> -	arm_smmu_cmdq_issue_sync(smmu);
> -}
> -
> -static void arm_smmu_tlb_inv_leaf(unsigned long iova, size_t size,
> -				  size_t granule, void *cookie)
> -{
> -	struct arm_smmu_domain *smmu_domain = cookie;
> -	struct arm_smmu_device *smmu = smmu_domain->smmu;
> -
> -	arm_smmu_tlb_inv_range_nosync(iova, size, granule, true, cookie);
> -	arm_smmu_cmdq_issue_sync(smmu);
> -}
> -
> -static const struct iommu_flush_ops arm_smmu_flush_ops = {
> -	.tlb_flush_all	= arm_smmu_tlb_inv_context,
> -	.tlb_flush_walk = arm_smmu_tlb_inv_walk,
> -	.tlb_flush_leaf = arm_smmu_tlb_inv_leaf,
> -	.tlb_add_page	= arm_smmu_tlb_inv_page_nosync,
> -};
> -
> -/* IOMMU API */
> -static bool arm_smmu_capable(enum iommu_cap cap)
> -{
> -	switch (cap) {
> -	case IOMMU_CAP_CACHE_COHERENCY:
> -		return true;
> -	case IOMMU_CAP_NOEXEC:
> -		return true;
> -	default:
> -		return false;
> -	}
> -}
> -
>   static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
>   {
>   	struct arm_smmu_domain *smmu_domain;
> @@ -1421,7 +1346,6 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
>   	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
>   
>   	iommu_put_dma_cookie(domain);
> -	free_io_pgtable_ops(smmu_domain->pgtbl_ops);
>   
>   	if (cfg->vmid)
>   		arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid);
> @@ -1429,7 +1353,6 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
>   	kfree(smmu_domain);
>   }
>   
> -

Looks like a spurious change.

>   static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
>   				       struct arm_smmu_master *master,
>   				       struct io_pgtable_cfg *pgtbl_cfg)

Your commit message leads one to think that all uses of struct 
io_pgtable_cfg will be removed.

> @@ -1437,7 +1360,6 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
>   	int vmid;
>   	struct arm_smmu_device *smmu = smmu_domain->smmu;
>   	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
> -	typeof(&pgtbl_cfg->arm_lpae_s2_cfg.vtcr) vtcr;

It feels a bit odd that the definition of 'vtcr' is removed but there 
are still users.

>   
>   	vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
>   	if (vmid < 0)
> @@ -1461,20 +1383,12 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
>   {
>   	int ret;
>   	unsigned long ias, oas;
> -	enum io_pgtable_fmt fmt;
> -	struct io_pgtable_cfg pgtbl_cfg;
> -	struct io_pgtable_ops *pgtbl_ops;
>   	int (*finalise_stage_fn)(struct arm_smmu_domain *,
>   				 struct arm_smmu_master *,
>   				 struct io_pgtable_cfg *);
>   	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>   	struct arm_smmu_device *smmu = smmu_domain->smmu;
>   
> -	if (domain->type == IOMMU_DOMAIN_IDENTITY) {
> -		smmu_domain->stage = ARM_SMMU_DOMAIN_BYPASS;
> -		return 0;
> -	}
> -

Per your commit message, this doesn't look related to what you intend to 
remove. Maybe your commit message should be updated?


>   	/* Restrict the stage to what we can actually support */
>   	smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
>   
> @@ -1483,40 +1397,17 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
>   	case ARM_SMMU_DOMAIN_S2:
>   		ias = smmu->ias;
>   		oas = smmu->oas;
> -		fmt = ARM_64_LPAE_S2;
>   		finalise_stage_fn = arm_smmu_domain_finalise_s2;
>   		break;
>   	default:
>   		return -EINVAL;
>   	}
>   
> -	pgtbl_cfg = (struct io_pgtable_cfg) {
> -		.pgsize_bitmap	= smmu->pgsize_bitmap,
> -		.ias		= ias,
> -		.oas		= oas,
> -		.coherent_walk	= smmu->features & ARM_SMMU_FEAT_COHERENCY,
> -		.tlb		= &arm_smmu_flush_ops,
> -		.iommu_dev	= smmu->dev,
> -	};
> -
> -	if (smmu_domain->non_strict)
> -		pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
> -
> -	pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain);
> -	if (!pgtbl_ops)
> -		return -ENOMEM;
> -
> -	domain->pgsize_bitmap = pgtbl_cfg.pgsize_bitmap;
> -	domain->geometry.aperture_end = (1UL << pgtbl_cfg.ias) - 1;
> -	domain->geometry.force_aperture = true;
> -
>   	ret = finalise_stage_fn(smmu_domain, master, &pgtbl_cfg);
>   	if (ret < 0) {
> -		free_io_pgtable_ops(pgtbl_ops);
>   		return ret;
>   	}
>   
> -	smmu_domain->pgtbl_ops = pgtbl_ops;
>   	return 0;
>   }
>   
> @@ -1626,71 +1517,6 @@ out_unlock:
>   	return ret;
>   }
>   
> -static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
> -			phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
> -{
> -	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
> -
> -	if (!ops)
> -		return -ENODEV;
> -
> -	return ops->map(ops, iova, paddr, size, prot, gfp);
> -}
> -
> -static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
> -			     size_t size, struct iommu_iotlb_gather *gather)
> -{
> -	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> -	struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
> -
> -	if (!ops)
> -		return 0;
> -
> -	return ops->unmap(ops, iova, size, gather);
> -}
> -
> -static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain)
> -{
> -	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> -
> -	if (smmu_domain->smmu)
> -		arm_smmu_tlb_inv_context(smmu_domain);
> -}
> -
> -static void arm_smmu_iotlb_sync(struct iommu_domain *domain,
> -				struct iommu_iotlb_gather *gather)
> -{
> -	struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu;
> -
> -	if (smmu)
> -		arm_smmu_cmdq_issue_sync(smmu);
> -}
> -
> -static phys_addr_t
> -arm_smmu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
> -{
> -	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
> -
> -	if (domain->type == IOMMU_DOMAIN_IDENTITY)
> -		return iova;
> -
> -	if (!ops)
> -		return 0;
> -
> -	return ops->iova_to_phys(ops, iova);
> -}
> -
> -static struct platform_driver arm_smmu_driver;
> -
> -static
> -struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
> -{
> -	struct device *dev = driver_find_device_by_fwnode(&arm_smmu_driver.driver,
> -							  fwnode);
> -	put_device(dev);
> -	return dev ? dev_get_drvdata(dev) : NULL;
> -}
> -
>   static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
>   {
>   	unsigned long limit = smmu->strtab_cfg.num_l1_ents;
> @@ -1701,206 +1527,6 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
>   	return sid < limit;
>   }
>   
> -static struct iommu_ops arm_smmu_ops;
> -
> -static struct iommu_device *arm_smmu_probe_device(struct device *dev)
> -{

Most of the code here looks useful to Xen. I think you want to keep the 
code and re-use it afterwards.

> -	int i, ret;
> -	struct arm_smmu_device *smmu;
> -	struct arm_smmu_master *master;
> -	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> -
> -	if (!fwspec || fwspec->ops != &arm_smmu_ops)
> -		return ERR_PTR(-ENODEV);
> -
> -	if (WARN_ON_ONCE(dev_iommu_priv_get(dev)))
> -		return ERR_PTR(-EBUSY);
> -
> -	smmu = arm_smmu_get_by_fwnode(fwspec->iommu_fwnode);
> -	if (!smmu)
> -		return ERR_PTR(-ENODEV);
> -
> -	master = kzalloc(sizeof(*master), GFP_KERNEL);
> -	if (!master)
> -		return ERR_PTR(-ENOMEM);
> -
> -	master->dev = dev;
> -	master->smmu = smmu;
> -	master->sids = fwspec->ids;
> -	master->num_sids = fwspec->num_ids;
> -	dev_iommu_priv_set(dev, master);
> -
> -	/* Check the SIDs are in range of the SMMU and our stream table */
> -	for (i = 0; i < master->num_sids; i++) {
> -		u32 sid = master->sids[i];
> -
> -		if (!arm_smmu_sid_in_range(smmu, sid)) {
> -			ret = -ERANGE;
> -			goto err_free_master;
> -		}
> -
> -		/* Ensure l2 strtab is initialised */
> -		if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) {
> -			ret = arm_smmu_init_l2_strtab(smmu, sid);
> -			if (ret)
> -				goto err_free_master;
> -		}
> -	}
> -
> -	return &smmu->iommu;
> -
> -err_free_master:
> -	kfree(master);
> -	dev_iommu_priv_set(dev, NULL);
> -	return ERR_PTR(ret);
> -}
> -
> -static void arm_smmu_release_device(struct device *dev)
> -{
> -	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> -	struct arm_smmu_master *master;
> -
> -	if (!fwspec || fwspec->ops != &arm_smmu_ops)
> -		return;
> -
> -	master = dev_iommu_priv_get(dev);
> -	arm_smmu_detach_dev(master);
> -	kfree(master);
> -	iommu_fwspec_free(dev);
> -}
> -
> -static struct iommu_group *arm_smmu_device_group(struct device *dev)
> -{
> -	struct iommu_group *group;
> -
> -	/*
> -	 * We don't support devices sharing stream IDs other than PCI RID
> -	 * aliases, since the necessary ID-to-device lookup becomes rather
> -	 * impractical given a potential sparse 32-bit stream ID space.
> -	 */
> -	if (dev_is_pci(dev))
> -		group = pci_device_group(dev);
> -	else
> -		group = generic_device_group(dev);
> -
> -	return group;
> -}
> -
> -static int arm_smmu_domain_get_attr(struct iommu_domain *domain,
> -				    enum iommu_attr attr, void *data)
> -{
> -	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> -
> -	switch (domain->type) {
> -	case IOMMU_DOMAIN_UNMANAGED:
> -		switch (attr) {
> -		case DOMAIN_ATTR_NESTING:
> -			*(int *)data = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED);
> -			return 0;
> -		default:
> -			return -ENODEV;
> -		}
> -		break;
> -	case IOMMU_DOMAIN_DMA:
> -		switch (attr) {
> -		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
> -			*(int *)data = smmu_domain->non_strict;
> -			return 0;
> -		default:
> -			return -ENODEV;
> -		}
> -		break;
> -	default:
> -		return -EINVAL;
> -	}
> -}
> -
> -static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
> -				    enum iommu_attr attr, void *data)
> -{
> -	int ret = 0;
> -	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> -
> -	mutex_lock(&smmu_domain->init_mutex);
> -
> -	switch (domain->type) {
> -	case IOMMU_DOMAIN_UNMANAGED:
> -		switch (attr) {
> -		case DOMAIN_ATTR_NESTING:
> -			if (smmu_domain->smmu) {
> -				ret = -EPERM;
> -				goto out_unlock;
> -			}
> -
> -			if (*(int *)data)
> -				smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED;
> -			else
> -				smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
> -			break;
> -		default:
> -			ret = -ENODEV;
> -		}
> -		break;
> -	case IOMMU_DOMAIN_DMA:
> -		switch(attr) {
> -		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
> -			smmu_domain->non_strict = *(int *)data;
> -			break;
> -		default:
> -			ret = -ENODEV;
> -		}
> -		break;
> -	default:
> -		ret = -EINVAL;
> -	}
> -
> -out_unlock:
> -	mutex_unlock(&smmu_domain->init_mutex);
> -	return ret;
> -}
> -
> -static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args)
> -{
> -	return iommu_fwspec_add_ids(dev, args->args, 1);
> -}
> -
> -static void arm_smmu_get_resv_regions(struct device *dev,
> -				      struct list_head *head)
> -{
> -	struct iommu_resv_region *region;
> -	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> -
> -	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
> -					 prot, IOMMU_RESV_SW_MSI);
> -	if (!region)
> -		return;
> -
> -	list_add_tail(&region->list, head);
> -
> -	iommu_dma_get_resv_regions(dev, head);
> -}
> -
> -static struct iommu_ops arm_smmu_ops = {
> -	.capable		= arm_smmu_capable,
> -	.domain_alloc		= arm_smmu_domain_alloc,
> -	.domain_free		= arm_smmu_domain_free,
> -	.attach_dev		= arm_smmu_attach_dev,
> -	.map			= arm_smmu_map,
> -	.unmap			= arm_smmu_unmap,
> -	.flush_iotlb_all	= arm_smmu_flush_iotlb_all,
> -	.iotlb_sync		= arm_smmu_iotlb_sync,
> -	.iova_to_phys		= arm_smmu_iova_to_phys,
> -	.probe_device		= arm_smmu_probe_device,
> -	.release_device		= arm_smmu_release_device,
> -	.device_group		= arm_smmu_device_group,
> -	.domain_get_attr	= arm_smmu_domain_get_attr,
> -	.domain_set_attr	= arm_smmu_domain_set_attr,
> -	.of_xlate		= arm_smmu_of_xlate,
> -	.get_resv_regions	= arm_smmu_get_resv_regions,
> -	.put_resv_regions	= generic_iommu_put_resv_regions,
> -	.pgsize_bitmap		= -1UL, /* Restricted during device attach */
> -};
> -
>   /* Probing and initialisation functions */
>   static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
>   				   struct arm_smmu_queue *q,
> @@ -2406,7 +2032,6 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>   	switch (FIELD_GET(IDR0_STALL_MODEL, reg)) {
>   	case IDR0_STALL_MODEL_FORCE:
>   		smmu->features |= ARM_SMMU_FEAT_STALL_FORCE;
> -		fallthrough;

We should keep all the fallthroughs documented. So I think we want to 
introduce the fallthrough annotation in Xen as well.

>   	case IDR0_STALL_MODEL_STALL:
>   		smmu->features |= ARM_SMMU_FEAT_STALLS;
>   	}
> @@ -2426,7 +2051,6 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>   	switch (FIELD_GET(IDR0_TTF, reg)) {
>   	case IDR0_TTF_AARCH32_64:
>   		smmu->ias = 40;
> -		fallthrough;
>   	case IDR0_TTF_AARCH64:
>   		break;
>   	default:
> @@ -2515,21 +2139,10 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>   	default:
>   		dev_info(smmu->dev,
>   			"unknown output address size. Truncating to 48-bit\n");
> -		fallthrough;
>   	case IDR5_OAS_48_BIT:
>   		smmu->oas = 48;
>   	}
>   
> -	if (arm_smmu_ops.pgsize_bitmap == -1UL)
> -		arm_smmu_ops.pgsize_bitmap = smmu->pgsize_bitmap;
> -	else
> -		arm_smmu_ops.pgsize_bitmap |= smmu->pgsize_bitmap;
> -
> -	/* Set the DMA mask for our table walker */
> -	if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(smmu->oas)))
> -		dev_warn(smmu->dev,
> -			 "failed to set DMA mask for table walker\n");
> -
>   	smmu->ias = max(smmu->ias, smmu->oas);
>   
>   	dev_info(smmu->dev, "ias %lu-bit, oas %lu-bit (features 0x%08x)\n",
> @@ -2595,9 +2208,6 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev,
>   
>   	parse_driver_options(smmu);
>   
> -	if (of_dma_is_coherent(dev->of_node))
> -		smmu->features |= ARM_SMMU_FEAT_COHERENCY;
> -
>   	return ret;
>   }
>   
> @@ -2609,55 +2219,6 @@ static unsigned long arm_smmu_resource_size(struct arm_smmu_device *smmu)
>   		return SZ_128K;
>   }
>   
> -static int arm_smmu_set_bus_ops(struct iommu_ops *ops)
> -{
> -	int err;
> -
> -#ifdef CONFIG_PCI
> -	if (pci_bus_type.iommu_ops != ops) {
> -		err = bus_set_iommu(&pci_bus_type, ops);
> -		if (err)
> -			return err;
> -	}
> -#endif
> -#ifdef CONFIG_ARM_AMBA
> -	if (amba_bustype.iommu_ops != ops) {
> -		err = bus_set_iommu(&amba_bustype, ops);
> -		if (err)
> -			goto err_reset_pci_ops;
> -	}
> -#endif
> -	if (platform_bus_type.iommu_ops != ops) {
> -		err = bus_set_iommu(&platform_bus_type, ops);
> -		if (err)
> -			goto err_reset_amba_ops;
> -	}
> -
> -	return 0;
> -
> -err_reset_amba_ops:
> -#ifdef CONFIG_ARM_AMBA
> -	bus_set_iommu(&amba_bustype, NULL);
> -#endif
> -err_reset_pci_ops: __maybe_unused;
> -#ifdef CONFIG_PCI
> -	bus_set_iommu(&pci_bus_type, NULL);
> -#endif
> -	return err;
> -}
> -
> -static void __iomem *arm_smmu_ioremap(struct device *dev, resource_size_t start,
> -				      resource_size_t size)

It seems a bit odd that you are removing the function but not its 
callers. Shouldn't you keep this function until the next patch?

> -{
> -	struct resource res = {
> -		.flags = IORESOURCE_MEM,
> -		.start = start,
> -		.end = start + size - 1,
> -	};
> -
> -	return devm_ioremap_resource(dev, &res);
> -}
> -
>   static int arm_smmu_device_probe(struct platform_device *pdev)
>   {
>   	int irq, ret;
> @@ -2785,21 +2346,3 @@ static const struct of_device_id arm_smmu_of_match[] = {
>   	{ .compatible = "arm,smmu-v3", },
>   	{ },
>   };
> -MODULE_DEVICE_TABLE(of, arm_smmu_of_match);
> -
> -static struct platform_driver arm_smmu_driver = {
> -	.driver	= {
> -		.name			= "arm-smmu-v3",
> -		.of_match_table		= arm_smmu_of_match,
> -		.suppress_bind_attrs	= true,
> -	},
> -	.probe	= arm_smmu_device_probe,
> -	.remove	= arm_smmu_device_remove,
> -	.shutdown = arm_smmu_device_shutdown,
> -};
> -module_platform_driver(arm_smmu_driver);
> -
> -MODULE_DESCRIPTION("IOMMU API for ARM architected SMMUv3 implementations");
> -MODULE_AUTHOR("Will Deacon <will@kernel.org>");
> -MODULE_ALIAS("platform:arm-smmu-v3");
> -MODULE_LICENSE("GPL v2");
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 14:48:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 14:48:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42793.77012 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkTQf-0003ne-Jt; Wed, 02 Dec 2020 14:48:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42793.77012; Wed, 02 Dec 2020 14:48:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkTQf-0003nX-G2; Wed, 02 Dec 2020 14:48:25 +0000
Received: by outflank-mailman (input) for mailman id 42793;
 Wed, 02 Dec 2020 14:48:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wvYF=FG=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kkTQe-0003nR-H8
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 14:48:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1005b7ee-db6f-432d-8765-000ecfa3ef95;
 Wed, 02 Dec 2020 14:48:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6B1CCACC3;
 Wed,  2 Dec 2020 14:48:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1005b7ee-db6f-432d-8765-000ecfa3ef95
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606920502; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=3lIaTsxjFMYIQNxaeePaL9KbT8jUcnLCx8Gz61GRcM0=;
	b=XOAmE0X3npWKI/lGX5ZljwuqsxujpMp1fMK3dbXVafCTHYiZ2VJ/DLYR1nM+U9EG/I6l49
	ehM7jGwsJ87A3LNKHTX7omn9BagHn4Y7FxixMA7iZCSEuvmw6LVRG/sQVlE0asBQxR8OD9
	DXnRFMWgZMKiNwxugfX0pYJWvdhzSbU=
Subject: Re: [PATCH v2 04/12] x86/xen: drop USERGS_SYSRET64 paravirt call
To: Borislav Petkov <bp@alien8.de>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
 peterz@infradead.org, luto@kernel.org, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
 Deep Shah <sdeep@vmware.com>, "VMware, Inc." <pv-drivers@vmware.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-5-jgross@suse.com> <20201202123235.GD2951@zn.tnic>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <6be0d1a5-0079-5d90-0c38-85fe4471f1b8@suse.com>
Date: Wed, 2 Dec 2020 15:48:21 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201202123235.GD2951@zn.tnic>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="jJ9PpTcxy9lmYsonXHC7Yx0WUyHQN3ZlP"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--jJ9PpTcxy9lmYsonXHC7Yx0WUyHQN3ZlP
Content-Type: multipart/mixed; boundary="cx6HqDKbye5TwFIy5c3ypsnchRBmnlgeh";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Borislav Petkov <bp@alien8.de>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
 peterz@infradead.org, luto@kernel.org, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
 Deep Shah <sdeep@vmware.com>, "VMware, Inc." <pv-drivers@vmware.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Message-ID: <6be0d1a5-0079-5d90-0c38-85fe4471f1b8@suse.com>
Subject: Re: [PATCH v2 04/12] x86/xen: drop USERGS_SYSRET64 paravirt call
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-5-jgross@suse.com> <20201202123235.GD2951@zn.tnic>
In-Reply-To: <20201202123235.GD2951@zn.tnic>

--cx6HqDKbye5TwFIy5c3ypsnchRBmnlgeh
Content-Type: multipart/mixed;
 boundary="------------50989C0C7D65F3525DF0B3D1"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------50989C0C7D65F3525DF0B3D1
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 02.12.20 13:32, Borislav Petkov wrote:
> On Fri, Nov 20, 2020 at 12:46:22PM +0100, Juergen Gross wrote:
>> @@ -123,12 +115,15 @@ SYM_INNER_LABEL(entry_SYSCALL_64_after_hwframe, SYM_L_GLOBAL)
>>   	 * Try to use SYSRET instead of IRET if we're returning to
>>   	 * a completely clean 64-bit userspace context.  If we're not,
>>   	 * go to the slow exit path.
>> +	 * In the Xen PV case we must use iret anyway.
>>   	 */
>> -	movq	RCX(%rsp), %rcx
>> -	movq	RIP(%rsp), %r11
>> 
>> -	cmpq	%rcx, %r11	/* SYSRET requires RCX == RIP */
>> -	jne	swapgs_restore_regs_and_return_to_usermode
>> +	ALTERNATIVE __stringify( \
>> +		movq	RCX(%rsp), %rcx; \
>> +		movq	RIP(%rsp), %r11; \
>> +		cmpq	%rcx, %r11;	/* SYSRET requires RCX == RIP */ \
>> +		jne	swapgs_restore_regs_and_return_to_usermode), \
>> +	"jmp	swapgs_restore_regs_and_return_to_usermode", X86_FEATURE_XENPV
> 
> Why such a big ALTERNATIVE when you can simply do:
> 
>          /*
>           * Try to use SYSRET instead of IRET if we're returning to
>           * a completely clean 64-bit userspace context.  If we're not,
>           * go to the slow exit path.
>           * In the Xen PV case we must use iret anyway.
>           */
>          ALTERNATIVE "", "jmp swapgs_restore_regs_and_return_to_usermode", X86_FEATURE_XENPV
> 
>          movq    RCX(%rsp), %rcx;
>          movq    RIP(%rsp), %r11;
>          cmpq    %rcx, %r11;     /* SYSRET requires RCX == RIP */
>          jne     swapgs_restore_regs_and_return_to_usermode
> 
> ?
> 

I wanted to avoid the additional NOPs for the bare metal case.

If you don't mind them, I can do as you are suggesting.
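(For reference, a pseudocode sketch of the trade-off, assuming the usual automatic NOP padding of alternatives; byte counts are illustrative, not measured:)

```asm
/* Suggested variant: on bare metal the 5-byte jmp is patched out,
 * leaving padding NOPs that run on every syscall exit:
 *
 *   bare metal:  nop; nop; nop; nop; nop   <- padding for the jmp
 *   Xen PV:      jmp  swapgs_restore_regs_and_return_to_usermode
 *   (then, bare metal only:)
 *   movq RCX(%rsp), %rcx
 *   ...
 *
 * Patch variant: the original sequence is longer than the jmp, so
 * bare metal runs it with no extra NOPs; only the Xen PV replacement
 * (jmp plus dead padding) is NOP-padded.
 */
```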


Juergen

--------------50989C0C7D65F3525DF0B3D1--

--cx6HqDKbye5TwFIy5c3ypsnchRBmnlgeh--

--jJ9PpTcxy9lmYsonXHC7Yx0WUyHQN3ZlP--


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 14:48:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 14:48:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42796.77024 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkTR1-0003sm-Se; Wed, 02 Dec 2020 14:48:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42796.77024; Wed, 02 Dec 2020 14:48:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkTR1-0003sf-PC; Wed, 02 Dec 2020 14:48:47 +0000
Received: by outflank-mailman (input) for mailman id 42796;
 Wed, 02 Dec 2020 14:48:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UQyH=FG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkTQz-0003sQ-VL
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 14:48:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c4e3ed96-efd5-4a45-86a5-2e3ceb9cf14e;
 Wed, 02 Dec 2020 14:48:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 84E8DAB63;
 Wed,  2 Dec 2020 14:48:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c4e3ed96-efd5-4a45-86a5-2e3ceb9cf14e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606920524; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=pOUeix5rFQpTEf17eVWovsbIjIYjoeNzqwp31IeSKf0=;
	b=RXouTiTJAOtKSqxd7RXzwS+yoiWc2aekyhAu0Ov4PXcB7SPNuV3Fb+jdzZZ/zJZI7kzRHq
	rBwoD/yCIWdxAT9CGNpgFlWYESM3LFHahCrwiGHVHZEc1EhLaR2sCW+BqvLFrk0WJwNTS7
	MmjBI/v86KjqrAJEik1qqpyEt4mmfu8=
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Hongyan Xia <hx242@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/2] a tiny bit of header disentangling
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <75484377-160c-a529-1cfc-96de86cfc550@suse.com>
Date: Wed, 2 Dec 2020 15:48:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

While reviewing Hongyan's "x86/vmap: handle superpages in
vmap_to_mfn()" it became apparent that the interaction of
xen/mm.h and asm/page.h is problematic. Therefore some basic
page size related definitions get moved out of the latter, and
the mfn_t et al ones out of the former, each into new headers.

While various configurations build fine for me with these
changes in place, it's relatively likely that this may break
some more exotic ones. Such breakage ought to be easy to
resolve, so I hope this risk isn't going to be a hindrance to
the changes here going in.

1: include: don't use asm/page.h from common headers
2: mm: split out mfn_t / gfn_t / pfn_t definitions and helpers

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 14:50:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 14:50:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42802.77035 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkTSB-00041z-6d; Wed, 02 Dec 2020 14:49:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42802.77035; Wed, 02 Dec 2020 14:49:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkTSB-00041s-3O; Wed, 02 Dec 2020 14:49:59 +0000
Received: by outflank-mailman (input) for mailman id 42802;
 Wed, 02 Dec 2020 14:49:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UQyH=FG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkTS9-00041k-T7
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 14:49:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1f3c7dcf-7af3-42bd-aa4f-e4df48286bf8;
 Wed, 02 Dec 2020 14:49:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A5FA1AB63;
 Wed,  2 Dec 2020 14:49:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1f3c7dcf-7af3-42bd-aa4f-e4df48286bf8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606920595; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7+x+U2qTlkcoxC0HiG5vY4DnoiLDrqjJYZR5n9P0NNM=;
	b=iPOSzDxyS4dbHIQgSYsUTpxKCXTunemEYHw+KaOkv1nk2yTACyxAp8EgyTesSkvnRBr8KY
	RQabmeCDMRxtzM5kXaL7I/l3xO4tayyW9Nj8hBo/4KOLKuxO6nqeUI7BNURY3el//IAkHJ
	TihDkl3lyTKfByN++Km1df8S9KkL6YU=
Subject: [PATCH 1/2] include: don't use asm/page.h from common headers
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Hongyan Xia <hx242@xen.org>
References: <75484377-160c-a529-1cfc-96de86cfc550@suse.com>
Message-ID: <04276039-a5d0-fefd-260e-ffaa8272fd6a@suse.com>
Date: Wed, 2 Dec 2020 15:49:55 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <75484377-160c-a529-1cfc-96de86cfc550@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Doing so limits what can be done in (in particular what can be included
by) this per-arch header. Abstract out the page shift/size related
#define-s, which is all the respective headers care about. Extend the
replacement / removal to some x86 headers as well; some others now need
to include page.h (and they really should have before).

Arm's VADDR_BITS gets restricted to 32-bit, as its current value is
clearly wrong for 64-bit, but the constant also isn't used anywhere
right now (i.e. the #define could also be dropped altogether).

I wasn't sure about Arm's use of vaddr_t in PAGE_OFFSET(), and hence I
kept it and provided a way to override the #define in the common header.

Also drop the dead PAGE_FLAG_MASK altogether on this occasion.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/arm/arm64/lib/clear_page.S
+++ b/xen/arch/arm/arm64/lib/clear_page.S
@@ -14,6 +14,8 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/page-size.h>
+
 /*
  * Clear page @dest
  *
--- a/xen/include/asm-arm/config.h
+++ b/xen/include/asm-arm/config.h
@@ -176,11 +176,6 @@
 #define FIXMAP_ACPI_BEGIN  2  /* Start mappings of ACPI tables */
 #define FIXMAP_ACPI_END    (FIXMAP_ACPI_BEGIN + NUM_FIXMAP_ACPI_PAGES - 1)  /* End mappings of ACPI tables */
 
-#define PAGE_SHIFT              12
-#define PAGE_SIZE           (_AC(1,L) << PAGE_SHIFT)
-#define PAGE_MASK           (~(PAGE_SIZE-1))
-#define PAGE_FLAG_MASK      (~0)
-
 #define NR_hypercalls 64
 
 #define STACK_ORDER 3
--- a/xen/include/asm-arm/current.h
+++ b/xen/include/asm-arm/current.h
@@ -1,6 +1,7 @@
 #ifndef __ARM_CURRENT_H__
 #define __ARM_CURRENT_H__
 
+#include <xen/page-size.h>
 #include <xen/percpu.h>
 
 #include <asm/processor.h>
--- /dev/null
+++ b/xen/include/asm-arm/page-shift.h
@@ -0,0 +1,15 @@
+#ifndef __ARM_PAGE_SHIFT_H__
+#define __ARM_PAGE_SHIFT_H__
+
+#define PAGE_SHIFT              12
+
+#define PAGE_OFFSET(ptr)        ((vaddr_t)(ptr) & ~PAGE_MASK)
+
+#ifdef CONFIG_ARM_64
+#define PADDR_BITS              48
+#else
+#define PADDR_BITS              40
+#define VADDR_BITS              32
+#endif
+
+#endif /* __ARM_PAGE_SHIFT_H__ */
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -2,21 +2,11 @@
 #define __ARM_PAGE_H__
 
 #include <public/xen.h>
+#include <xen/page-size.h>
 #include <asm/processor.h>
 #include <asm/lpae.h>
 #include <asm/sysregs.h>
 
-#ifdef CONFIG_ARM_64
-#define PADDR_BITS              48
-#else
-#define PADDR_BITS              40
-#endif
-#define PADDR_MASK              ((1ULL << PADDR_BITS)-1)
-#define PAGE_OFFSET(ptr)        ((vaddr_t)(ptr) & ~PAGE_MASK)
-
-#define VADDR_BITS              32
-#define VADDR_MASK              (~0UL)
-
 /* Shareability values for the LPAE entries */
 #define LPAE_SH_NON_SHAREABLE 0x0
 #define LPAE_SH_UNPREDICTALE  0x1
--- a/xen/include/asm-x86/current.h
+++ b/xen/include/asm-x86/current.h
@@ -8,8 +8,8 @@
 #define __X86_CURRENT_H__
 
 #include <xen/percpu.h>
+#include <xen/page-size.h>
 #include <public/xen.h>
-#include <asm/page.h>
 
 /*
  * Xen's cpu stacks are 8 pages (8-page aligned), arranged as:
--- a/xen/include/asm-x86/desc.h
+++ b/xen/include/asm-x86/desc.h
@@ -1,6 +1,8 @@
 #ifndef __ARCH_DESC_H
 #define __ARCH_DESC_H
 
+#include <asm/page.h>
+
 /*
  * Xen reserves a memory page of GDT entries.
  * No guest GDT entries exist beyond the Xen reserved area.
--- a/xen/include/asm-x86/fixmap.h
+++ b/xen/include/asm-x86/fixmap.h
@@ -12,7 +12,7 @@
 #ifndef _ASM_FIXMAP_H
 #define _ASM_FIXMAP_H
 
-#include <asm/page.h>
+#include <xen/page-size.h>
 
 #define FIXADDR_TOP (VMAP_VIRT_END - PAGE_SIZE)
 #define FIXADDR_X_TOP (XEN_VIRT_END - PAGE_SIZE)
--- a/xen/include/asm-x86/guest/hyperv-hcall.h
+++ b/xen/include/asm-x86/guest/hyperv-hcall.h
@@ -20,12 +20,12 @@
 #define __X86_HYPERV_HCALL_H__
 
 #include <xen/lib.h>
+#include <xen/page-size.h>
 #include <xen/types.h>
 
 #include <asm/asm_defns.h>
 #include <asm/fixmap.h>
 #include <asm/guest/hyperv-tlfs.h>
-#include <asm/page.h>
 
 static inline uint64_t hv_do_hypercall(uint64_t control, paddr_t input_addr,
                                        paddr_t output_addr)
--- a/xen/include/asm-x86/guest/hyperv-tlfs.h
+++ b/xen/include/asm-x86/guest/hyperv-tlfs.h
@@ -10,8 +10,8 @@
 #define _ASM_X86_HYPERV_TLFS_H
 
 #include <xen/bitops.h>
+#include <xen/page-size.h>
 #include <xen/types.h>
-#include <asm/page.h>
 
 /*
  * While not explicitly listed in the TLFS, Hyper-V always runs with a page size
--- a/xen/include/asm-x86/io.h
+++ b/xen/include/asm-x86/io.h
@@ -3,7 +3,6 @@
 
 #include <xen/vmap.h>
 #include <xen/types.h>
-#include <asm/page.h>
 
 #define readb(x) (*(volatile uint8_t  *)(x))
 #define readw(x) (*(volatile uint16_t *)(x))
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -6,6 +6,7 @@
 #include <xen/spinlock.h>
 #include <xen/rwlock.h>
 #include <asm/io.h>
+#include <asm/page.h>
 #include <asm/uaccess.h>
 #include <asm/x86_emulate.h>
 
--- /dev/null
+++ b/xen/include/asm-x86/page-shift.h
@@ -0,0 +1,26 @@
+#ifndef __X86_PAGE_SHIFT_H__
+#define __X86_PAGE_SHIFT_H__
+
+#define L1_PAGETABLE_SHIFT      12
+#define L2_PAGETABLE_SHIFT      21
+#define L3_PAGETABLE_SHIFT      30
+#define L4_PAGETABLE_SHIFT      39
+#define PAGE_SHIFT              L1_PAGETABLE_SHIFT
+#define SUPERPAGE_SHIFT         L2_PAGETABLE_SHIFT
+#define ROOT_PAGETABLE_SHIFT    L4_PAGETABLE_SHIFT
+
+#define PAGETABLE_ORDER         9
+#define L1_PAGETABLE_ENTRIES    (1 << PAGETABLE_ORDER)
+#define L2_PAGETABLE_ENTRIES    (1 << PAGETABLE_ORDER)
+#define L3_PAGETABLE_ENTRIES    (1 << PAGETABLE_ORDER)
+#define L4_PAGETABLE_ENTRIES    (1 << PAGETABLE_ORDER)
+#define ROOT_PAGETABLE_ENTRIES  L4_PAGETABLE_ENTRIES
+
+#define SUPERPAGE_ORDER         PAGETABLE_ORDER
+#define SUPERPAGE_PAGES         (1 << SUPERPAGE_ORDER)
+
+/* These are architectural limits. */
+#define PADDR_BITS              52
+#define VADDR_BITS              48
+
+#endif /* __X86_PAGE_SHIFT_H__ */
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -2,15 +2,7 @@
 #define __X86_PAGE_H__
 
 #include <xen/const.h>
-
-/*
- * It is important that the masks are signed quantities. This ensures that
- * the compiler sign-extends a 32-bit mask to 64 bits if that is required.
- */
-#define PAGE_SIZE           (_AC(1,L) << PAGE_SHIFT)
-#define PAGE_MASK           (~(PAGE_SIZE-1))
-#define PAGE_FLAG_MASK      (~0)
-#define PAGE_OFFSET(ptr)    ((unsigned long)(ptr) & ~PAGE_MASK)
+#include <xen/page-size.h>
 
 #define PAGE_ORDER_4K       0
 #define PAGE_ORDER_2M       9
--- a/xen/include/asm-x86/uaccess.h
+++ b/xen/include/asm-x86/uaccess.h
@@ -6,7 +6,6 @@
 #include <xen/errno.h>
 #include <xen/prefetch.h>
 #include <asm/asm_defns.h>
-#include <asm/page.h>
 
 #include <asm/x86_64/uaccess.h>
 
--- a/xen/include/asm-x86/x86_64/page.h
+++ b/xen/include/asm-x86/x86_64/page.h
@@ -2,31 +2,8 @@
 #ifndef __X86_64_PAGE_H__
 #define __X86_64_PAGE_H__
 
-#define L1_PAGETABLE_SHIFT      12
-#define L2_PAGETABLE_SHIFT      21
-#define L3_PAGETABLE_SHIFT      30
-#define L4_PAGETABLE_SHIFT      39
-#define PAGE_SHIFT              L1_PAGETABLE_SHIFT
-#define SUPERPAGE_SHIFT         L2_PAGETABLE_SHIFT
-#define ROOT_PAGETABLE_SHIFT    L4_PAGETABLE_SHIFT
-
-#define PAGETABLE_ORDER         9
-#define L1_PAGETABLE_ENTRIES    (1<<PAGETABLE_ORDER)
-#define L2_PAGETABLE_ENTRIES    (1<<PAGETABLE_ORDER)
-#define L3_PAGETABLE_ENTRIES    (1<<PAGETABLE_ORDER)
-#define L4_PAGETABLE_ENTRIES    (1<<PAGETABLE_ORDER)
-#define ROOT_PAGETABLE_ENTRIES  L4_PAGETABLE_ENTRIES
-#define SUPERPAGE_ORDER         PAGETABLE_ORDER
-#define SUPERPAGE_PAGES         (1<<SUPERPAGE_ORDER)
-
 #define __XEN_VIRT_START        XEN_VIRT_START
 
-/* These are architectural limits. Current CPUs support only 40-bit phys. */
-#define PADDR_BITS              52
-#define VADDR_BITS              48
-#define PADDR_MASK              ((_AC(1,UL) << PADDR_BITS) - 1)
-#define VADDR_MASK              ((_AC(1,UL) << VADDR_BITS) - 1)
-
 #define VADDR_TOP_BIT           (1UL << (VADDR_BITS - 1))
 #define CANONICAL_MASK          (~0UL & ~VADDR_MASK)
 
--- a/xen/include/xen/gdbstub.h
+++ b/xen/include/xen/gdbstub.h
@@ -20,8 +20,8 @@
 #ifndef __XEN_GDBSTUB_H__
 #define __XEN_GDBSTUB_H__
 
+#include <xen/page-size.h>
 #include <asm/atomic.h>
-#include <asm/page.h>
 
 #ifdef CONFIG_CRASH_DEBUG
 
--- a/xen/include/xen/grant_table.h
+++ b/xen/include/xen/grant_table.h
@@ -26,7 +26,6 @@
 #include <xen/mm.h>
 #include <xen/rwlock.h>
 #include <public/grant_table.h>
-#include <asm/page.h>
 #include <asm/grant_table.h>
 
 #ifdef CONFIG_GRANT_TABLE
--- /dev/null
+++ b/xen/include/xen/page-size.h
@@ -0,0 +1,21 @@
+#ifndef __XEN_PAGE_SIZE_H__
+#define __XEN_PAGE_SIZE_H__
+
+#include <xen/const.h>
+#include <asm/page-shift.h>
+
+/*
+ * It is important that the masks are signed quantities. This ensures that
+ * the compiler sign-extends a 32-bit mask to 64 bits if that is required.
+ */
+#define PAGE_SIZE           (_AC(1,L) << PAGE_SHIFT)
+#define PAGE_MASK           (~(PAGE_SIZE-1))
+
+#ifndef PAGE_OFFSET
+# define PAGE_OFFSET(ptr)   ((unsigned long)(ptr) & ~PAGE_MASK)
+#endif
+
+#define PADDR_MASK          ((_AC(1,ULL) << PADDR_BITS) - 1)
+#define VADDR_MASK          (~_AC(0,UL) >> (BITS_PER_LONG - VADDR_BITS))
+
+#endif /* __XEN_PAGE_SIZE_H__ */
--- a/xen/include/xen/pfn.h
+++ b/xen/include/xen/pfn.h
@@ -1,7 +1,7 @@
 #ifndef __XEN_PFN_H__
 #define __XEN_PFN_H__
 
-#include <asm/page.h>
+#include <xen/page-size.h>
 
 #define PFN_DOWN(x)   ((x) >> PAGE_SHIFT)
 #define PFN_UP(x)     (((x) + PAGE_SIZE-1) >> PAGE_SHIFT)
--- a/xen/include/xen/vmap.h
+++ b/xen/include/xen/vmap.h
@@ -2,7 +2,7 @@
 #define __XEN_VMAP_H__
 
 #include <xen/mm.h>
-#include <asm/page.h>
+#include <xen/page-size.h>
 
 enum vmap_region {
     VMAP_DEFAULT,



From xen-devel-bounces@lists.xenproject.org Wed Dec 02 14:50:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 14:50:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42808.77048 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkTSn-0004oN-IB; Wed, 02 Dec 2020 14:50:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42808.77048; Wed, 02 Dec 2020 14:50:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkTSn-0004oG-DX; Wed, 02 Dec 2020 14:50:37 +0000
Received: by outflank-mailman (input) for mailman id 42808;
 Wed, 02 Dec 2020 14:50:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UQyH=FG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkTSm-0004nm-T7
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 14:50:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7ecea57c-c1d9-4e4f-8a57-dddf8277fe9f;
 Wed, 02 Dec 2020 14:50:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3CDE2ACC3;
 Wed,  2 Dec 2020 14:50:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ecea57c-c1d9-4e4f-8a57-dddf8277fe9f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606920631; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=MHnA0jwUib1BTdgfMxgCIDNTuw+txuIpBV3gHXdrMEk=;
	b=BkESPxBEKvBcMktSuI8jpK4IsXRhEjxcLecUJl0O9VvZJbJMpdhEgCbVXIuhuvtmr3R3et
	YtKGMxtHgO5DUh4JqG2SZY2k4yw010BFb8jfklC5tOae2Krq1LppnkKZpQ7hzbnvDcP4cu
	9qtwQ+jLdQghcwPd92tnItRepdAkPpw=
Subject: [PATCH 2/2] mm: split out mfn_t / gfn_t / pfn_t definitions and
 helpers
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Hongyan Xia <hx242@xen.org>
References: <75484377-160c-a529-1cfc-96de86cfc550@suse.com>
Message-ID: <fb4de786-7302-3336-dcb4-1a388bee34bc@suse.com>
Date: Wed, 2 Dec 2020 15:50:30 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <75484377-160c-a529-1cfc-96de86cfc550@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

xen/mm.h has heavy dependencies, while in a number of cases only these
type definitions are needed. This separation then also allows pulling in
these definitions when including xen/mm.h would cause cyclic
dependencies.

Replace xen/mm.h inclusion where possible in include/xen/. (In
xen/iommu.h, also take the opportunity to correct the few remaining
sorting issues.)

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/acpi/power.c
+++ b/xen/arch/x86/acpi/power.c
@@ -10,7 +10,6 @@
  * Slimmed with Xen specific support.
  */
 
-#include <asm/io.h>
 #include <xen/acpi.h>
 #include <xen/errno.h>
 #include <xen/iocap.h>
--- a/xen/drivers/char/meson-uart.c
+++ b/xen/drivers/char/meson-uart.c
@@ -18,7 +18,9 @@
  * License along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/errno.h>
 #include <xen/irq.h>
+#include <xen/mm.h>
 #include <xen/serial.h>
 #include <xen/vmap.h>
 #include <asm/io.h>
--- a/xen/drivers/char/mvebu-uart.c
+++ b/xen/drivers/char/mvebu-uart.c
@@ -18,7 +18,9 @@
  * License along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/errno.h>
 #include <xen/irq.h>
+#include <xen/mm.h>
 #include <xen/serial.h>
 #include <xen/vmap.h>
 #include <asm/io.h>
--- a/xen/include/asm-x86/io.h
+++ b/xen/include/asm-x86/io.h
@@ -49,6 +49,7 @@ __OUT(l,,int)
 
 /* Function pointer used to handle platform specific I/O port emulation. */
 #define IOEMUL_QUIRK_STUB_BYTES 9
+struct cpu_user_regs;
 extern unsigned int (*ioemul_handle_quirk)(
     u8 opcode, char *io_emul_stub, struct cpu_user_regs *regs);
 
--- /dev/null
+++ b/xen/include/xen/frame-num.h
@@ -0,0 +1,96 @@
+#ifndef __XEN_FRAME_NUM_H__
+#define __XEN_FRAME_NUM_H__
+
+#include <xen/kernel.h>
+#include <xen/typesafe.h>
+
+TYPE_SAFE(unsigned long, mfn);
+#define PRI_mfn          "05lx"
+#define INVALID_MFN      _mfn(~0UL)
+/*
+ * To be used for global variable initialization. This works around a bug
+ * in GCC < 5.0.
+ */
+#define INVALID_MFN_INITIALIZER { ~0UL }
+
+#ifndef mfn_t
+#define mfn_t /* Grep fodder: mfn_t, _mfn() and mfn_x() are defined above */
+#define _mfn
+#define mfn_x
+#undef mfn_t
+#undef _mfn
+#undef mfn_x
+#endif
+
+static inline mfn_t mfn_add(mfn_t mfn, unsigned long i)
+{
+    return _mfn(mfn_x(mfn) + i);
+}
+
+static inline mfn_t mfn_max(mfn_t x, mfn_t y)
+{
+    return _mfn(max(mfn_x(x), mfn_x(y)));
+}
+
+static inline mfn_t mfn_min(mfn_t x, mfn_t y)
+{
+    return _mfn(min(mfn_x(x), mfn_x(y)));
+}
+
+static inline bool_t mfn_eq(mfn_t x, mfn_t y)
+{
+    return mfn_x(x) == mfn_x(y);
+}
+
+TYPE_SAFE(unsigned long, gfn);
+#define PRI_gfn          "05lx"
+#define INVALID_GFN      _gfn(~0UL)
+/*
+ * To be used for global variable initialization. This works around a bug
+ * in GCC < 5.0 https://gcc.gnu.org/bugzilla/show_bug.cgi?id=64856
+ */
+#define INVALID_GFN_INITIALIZER { ~0UL }
+
+#ifndef gfn_t
+#define gfn_t /* Grep fodder: gfn_t, _gfn() and gfn_x() are defined above */
+#define _gfn
+#define gfn_x
+#undef gfn_t
+#undef _gfn
+#undef gfn_x
+#endif
+
+static inline gfn_t gfn_add(gfn_t gfn, unsigned long i)
+{
+    return _gfn(gfn_x(gfn) + i);
+}
+
+static inline gfn_t gfn_max(gfn_t x, gfn_t y)
+{
+    return _gfn(max(gfn_x(x), gfn_x(y)));
+}
+
+static inline gfn_t gfn_min(gfn_t x, gfn_t y)
+{
+    return _gfn(min(gfn_x(x), gfn_x(y)));
+}
+
+static inline bool_t gfn_eq(gfn_t x, gfn_t y)
+{
+    return gfn_x(x) == gfn_x(y);
+}
+
+TYPE_SAFE(unsigned long, pfn);
+#define PRI_pfn          "05lx"
+#define INVALID_PFN      (~0UL)
+
+#ifndef pfn_t
+#define pfn_t /* Grep fodder: pfn_t, _pfn() and pfn_x() are defined above */
+#define _pfn
+#define pfn_x
+#undef pfn_t
+#undef _pfn
+#undef pfn_x
+#endif
+
+#endif /* __XEN_FRAME_NUM_H__ */
--- a/xen/include/xen/grant_table.h
+++ b/xen/include/xen/grant_table.h
@@ -23,7 +23,7 @@
 #ifndef __XEN_GRANT_TABLE_H__
 #define __XEN_GRANT_TABLE_H__
 
-#include <xen/mm.h>
+#include <xen/frame-num.h>
 #include <xen/rwlock.h>
 #include <public/grant_table.h>
 #include <asm/grant_table.h>
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -19,14 +19,13 @@
 #ifndef _IOMMU_H_
 #define _IOMMU_H_
 
+#include <xen/frame-num.h>
 #include <xen/init.h>
 #include <xen/page-defs.h>
-#include <xen/spinlock.h>
 #include <xen/pci.h>
-#include <xen/typesafe.h>
-#include <xen/mm.h>
-#include <public/hvm/ioreq.h>
+#include <xen/spinlock.h>
 #include <public/domctl.h>
+#include <public/hvm/ioreq.h>
 #include <asm/device.h>
 
 TYPE_SAFE(uint64_t, dfn);
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -51,103 +51,13 @@
 #define __XEN_MM_H__
 
 #include <xen/compiler.h>
+#include <xen/frame-num.h>
 #include <xen/types.h>
 #include <xen/list.h>
 #include <xen/spinlock.h>
-#include <xen/typesafe.h>
-#include <xen/kernel.h>
 #include <xen/perfc.h>
 #include <public/memory.h>
 
-TYPE_SAFE(unsigned long, mfn);
-#define PRI_mfn          "05lx"
-#define INVALID_MFN      _mfn(~0UL)
-/*
- * To be used for global variable initialization. This workaround a bug
- * in GCC < 5.0.
- */
-#define INVALID_MFN_INITIALIZER { ~0UL }
-
-#ifndef mfn_t
-#define mfn_t /* Grep fodder: mfn_t, _mfn() and mfn_x() are defined above */
-#define _mfn
-#define mfn_x
-#undef mfn_t
-#undef _mfn
-#undef mfn_x
-#endif
-
-static inline mfn_t mfn_add(mfn_t mfn, unsigned long i)
-{
-    return _mfn(mfn_x(mfn) + i);
-}
-
-static inline mfn_t mfn_max(mfn_t x, mfn_t y)
-{
-    return _mfn(max(mfn_x(x), mfn_x(y)));
-}
-
-static inline mfn_t mfn_min(mfn_t x, mfn_t y)
-{
-    return _mfn(min(mfn_x(x), mfn_x(y)));
-}
-
-static inline bool_t mfn_eq(mfn_t x, mfn_t y)
-{
-    return mfn_x(x) == mfn_x(y);
-}
-
-TYPE_SAFE(unsigned long, gfn);
-#define PRI_gfn          "05lx"
-#define INVALID_GFN      _gfn(~0UL)
-/*
- * To be used for global variable initialization. This workaround a bug
- * in GCC < 5.0 https://gcc.gnu.org/bugzilla/show_bug.cgi?id=64856
- */
-#define INVALID_GFN_INITIALIZER { ~0UL }
-
-#ifndef gfn_t
-#define gfn_t /* Grep fodder: gfn_t, _gfn() and gfn_x() are defined above */
-#define _gfn
-#define gfn_x
-#undef gfn_t
-#undef _gfn
-#undef gfn_x
-#endif
-
-static inline gfn_t gfn_add(gfn_t gfn, unsigned long i)
-{
-    return _gfn(gfn_x(gfn) + i);
-}
-
-static inline gfn_t gfn_max(gfn_t x, gfn_t y)
-{
-    return _gfn(max(gfn_x(x), gfn_x(y)));
-}
-
-static inline gfn_t gfn_min(gfn_t x, gfn_t y)
-{
-    return _gfn(min(gfn_x(x), gfn_x(y)));
-}
-
-static inline bool_t gfn_eq(gfn_t x, gfn_t y)
-{
-    return gfn_x(x) == gfn_x(y);
-}
-
-TYPE_SAFE(unsigned long, pfn);
-#define PRI_pfn          "05lx"
-#define INVALID_PFN      (~0UL)
-
-#ifndef pfn_t
-#define pfn_t /* Grep fodder: pfn_t, _pfn() and pfn_x() are defined above */
-#define _pfn
-#define pfn_x
-#undef pfn_t
-#undef _pfn
-#undef pfn_x
-#endif
-
 struct page_info;
 
 void put_page(struct page_info *);
--- a/xen/include/xen/p2m-common.h
+++ b/xen/include/xen/p2m-common.h
@@ -1,7 +1,7 @@
 #ifndef _XEN_P2M_COMMON_H
 #define _XEN_P2M_COMMON_H
 
-#include <xen/mm.h>
+#include <xen/frame-num.h>
 
 /* Remove a page from a domain's p2m table */
 int __must_check
--- a/xen/include/xen/vmap.h
+++ b/xen/include/xen/vmap.h
@@ -1,7 +1,7 @@
 #if !defined(__XEN_VMAP_H__) && defined(VMAP_VIRT_START)
 #define __XEN_VMAP_H__
 
-#include <xen/mm.h>
+#include <xen/frame-num.h>
 #include <xen/page-size.h>
 
 enum vmap_region {
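[Editorial aside: the mfn_t / gfn_t / pfn_t definitions split out here rest on the TYPE_SAFE() macro from xen/typesafe.h, which wraps the integer in a one-member struct so the compiler rejects mixing frame-number kinds. A condensed standalone sketch of the pattern — the member name and the unconditional struct wrapping are simplifications of mine, not the real macro:]

```c
#include <assert.h>

/*
 * Condensed sketch of the TYPE_SAFE(type, name) idea: wrapping the
 * integer in a one-member struct means passing a gfn_t where an
 * mfn_t is expected fails to compile. The real macro is more
 * elaborate (e.g. it can degrade to a plain typedef).
 */
#define TYPE_SAFE(_type, _name)                                         \
    typedef struct { _type val; } _name ## _t;                          \
    static inline _name ## _t _ ## _name(_type n)                       \
    { return (_name ## _t){ n }; }                                      \
    static inline _type _name ## _x(_name ## _t n)                      \
    { return n.val; }

TYPE_SAFE(unsigned long, mfn)
TYPE_SAFE(unsigned long, gfn)

/* The helpers the patch moves to xen/frame-num.h build on these: */
static inline mfn_t mfn_add(mfn_t mfn, unsigned long i)
{
    return _mfn(mfn_x(mfn) + i);
}

static inline int mfn_eq(mfn_t x, mfn_t y)
{
    return mfn_x(x) == mfn_x(y);
}
```

[Keeping these wrappers in a light header is the point of the split: they are cheap to include, unlike the rest of xen/mm.h.]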



From xen-devel-bounces@lists.xenproject.org Wed Dec 02 14:53:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 14:53:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42818.77060 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkTVN-00055y-2x; Wed, 02 Dec 2020 14:53:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42818.77060; Wed, 02 Dec 2020 14:53:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkTVM-00055r-W7; Wed, 02 Dec 2020 14:53:16 +0000
Received: by outflank-mailman (input) for mailman id 42818;
 Wed, 02 Dec 2020 14:53:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dnsL=FG=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1kkTVL-00055l-De
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 14:53:15 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 47c90c37-bb16-4c2c-ba00-73c12babbbd1;
 Wed, 02 Dec 2020 14:53:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 47c90c37-bb16-4c2c-ba00-73c12babbbd1
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606920794;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=+km+qYoT+GfRCnvw41Pngpu9ryNDDDkSetdoW9zY/9g=;
  b=gIeSuy9MZ8mxTbgFTKjRlE9aOGjE6DCYBSU/xS3MpmpYreZjz8/XsVE/
   lb1CLMkXajQc4nV3EsffjOTmvesq3nDMM2hKMllzl2domYXQKBwLY3UiA
   P3/cf/XOZCfAXX43yoZvhsU6279BguRm0j4reAlE6hcnl0J4CpJOY0SD6
   g=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: doGrgZq6PS74a89lqfIVn4zs7R1viOiOITNT/3mDFwZ8OHQl8Pub647M4FUOYk2d0LVgheVxhU
 fZgqeTX2G6WYXLmaqg2ziuCULHNIraXQh0jjZ+3hmKcNxSL9tzJr/kmBuNhznijLIYnmB79qL5
 vupgmD35MSPAG2pnm+p70Y233gqPioVW3tdFbzhpSQ3lVv+EMfZEHdJuEQzhh1+3y0tYSBsvY0
 WZG1kOgtBvvYFF7/CCIjQ2qBWH5P8SagAvipv4/nRYBBXkZlVEW3PHKsBkadStN5CyXRVjyAcS
 2Gw=
X-SBRS: None
X-MesageID: 32368170
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,387,1599537600"; 
   d="scan'208";a="32368170"
Subject: Re: [PATCH] x86/IRQ: bump max number of guests for a shared IRQ to 31
To: Jan Beulich <jbeulich@suse.com>
CC: <andrew.cooper3@citrix.com>, <roger.pau@citrix.com>, <wl@xen.org>,
	<xen-devel@lists.xenproject.org>
References: <1606780777-30718-1-git-send-email-igor.druzhinin@citrix.com>
 <b98d3517-6c9d-6f40-6e28-cde142978143@suse.com>
From: Igor Druzhinin <igor.druzhinin@citrix.com>
Message-ID: <3c9735ec-2b04-1ace-2adb-d72b32c4a5f9@citrix.com>
Date: Wed, 2 Dec 2020 14:53:06 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <b98d3517-6c9d-6f40-6e28-cde142978143@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 02/12/2020 09:25, Jan Beulich wrote:
> On 01.12.2020 00:59, Igor Druzhinin wrote:
>> Current limit of 7 is too restrictive for modern systems where one GSI
>> could be shared by potentially many PCI INTx sources where each of them
>> corresponds to a device passed through to its own guest. Some systems do not
>> apply due diligence in swizzling INTx links in case e.g. INTA is declared as
>> interrupt pin for the majority of PCI devices behind a single router,
>> resulting in overuse of a GSI.
>>
>> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
>> ---
>>
>> If people think that would make sense - I can rework the array to a list of
>> domain pointers to avoid the limit.
> 
> Not sure about this. What is the biggest number you've found on any
> system? (I assume the chosen value of 31 has some headroom.)

The value 31 was taken as a practical maximum for one specific HP system
if all of the PCI slots in all of its riser cards are occupied with GPUs.
The value was obtained by reverse engineering their ACPI tables. Currently
we're only concerned with the number 8 (our graphics vendors do not recommend
installing more cards than that), but it's only a matter of time before it grows.
I'm also not sure why this routing scheme was chosen in the first place: is it
dictated by hardware restrictions, or were the firmware engineers simply being
lazy? We're working with HP to determine that.

> Instead I'm wondering whether this wouldn't better be a Kconfig
> setting (or even command line controllable). There don't look to be
> any restrictions on the precise value chosen (i.e. 2**n-1 like is
> the case for old and new values here, for whatever reason), so a
> simple permitted range of like 4...64 would seem fine to specify.
> Whether the default then would want to be 8 (close to the current
> 7) or higher (around the actually observed maximum) is a different
> question.

I'm in favor of a command line argument here: it would be much less trouble
if a higher limit suddenly became necessary in the field. The default IMO
should definitely be higher than 8; I'd go with 32, which should cover our
real-world scenarios and leave some headroom for the future.

Igor
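[Editorial aside: the command line option discussed above would in essence parse an integer and clamp it to the permitted range. A hypothetical standalone sketch — the option name and the 4...64 range come from the thread, but the helper itself is illustrative and not Xen's actual parameter plumbing:]

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical handler for an "irq-max-guests=<n>" style option:
 * accept only the 4..64 range floated in the thread and fall back
 * to the given default otherwise. */
static unsigned int parse_irq_max_guests(const char *arg, unsigned int def)
{
    char *end;
    unsigned long v;

    if ( !arg || !*arg )
        return def;
    v = strtoul(arg, &end, 0);
    if ( *end != '\0' || v < 4 || v > 64 )
        return def;   /* malformed or out of range: keep the default */
    return (unsigned int)v;
}
```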


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 15:21:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 15:21:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42826.77071 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkTwT-0007uF-8f; Wed, 02 Dec 2020 15:21:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42826.77071; Wed, 02 Dec 2020 15:21:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkTwT-0007u8-5i; Wed, 02 Dec 2020 15:21:17 +0000
Received: by outflank-mailman (input) for mailman id 42826;
 Wed, 02 Dec 2020 15:21:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UQyH=FG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkTwR-0007u3-97
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 15:21:15 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id aaa56127-b1fc-41fe-b81e-faab051dc165;
 Wed, 02 Dec 2020 15:21:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 49DCAAB63;
 Wed,  2 Dec 2020 15:21:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aaa56127-b1fc-41fe-b81e-faab051dc165
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606922473; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kOGzPfbLwev74s99qMBx8ajs4GS/HCJf6Xwb4QVzdG4=;
	b=nasi7uLAcF+SsfKKfZgZjUjAgGV5SYf7Zk1BJFPDwGwMcIIluxlhFE1ScfgA8wYv35sPbm
	OdGMbWOwi8mrE8knygaNsB56bc3dHQS5CxTxCeAYYrn7/Y4gmaH5YN/+QXYrqICrssAeW6
	OuZ6qrelGTOqNLbylBY+yjVNkgljKKk=
Subject: Re: [PATCH] x86/IRQ: bump max number of guests for a shared IRQ to 31
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Cc: andrew.cooper3@citrix.com, roger.pau@citrix.com, wl@xen.org,
 xen-devel@lists.xenproject.org
References: <1606780777-30718-1-git-send-email-igor.druzhinin@citrix.com>
 <b98d3517-6c9d-6f40-6e28-cde142978143@suse.com>
 <3c9735ec-2b04-1ace-2adb-d72b32c4a5f9@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <88019c81-1988-2512-282b-53b61adf09c6@suse.com>
Date: Wed, 2 Dec 2020 16:21:12 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <3c9735ec-2b04-1ace-2adb-d72b32c4a5f9@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 02.12.2020 15:53, Igor Druzhinin wrote:
> On 02/12/2020 09:25, Jan Beulich wrote:
>> On 01.12.2020 00:59, Igor Druzhinin wrote:
>>> Current limit of 7 is too restrictive for modern systems where one GSI
>>> could be shared by potentially many PCI INTx sources where each of them
>>> corresponds to a device passed through to its own guest. Some systems do not
>>> apply due diligence in swizzling INTx links in case e.g. INTA is declared as
>>> interrupt pin for the majority of PCI devices behind a single router,
>>> resulting in overuse of a GSI.
>>>
>>> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
>>> ---
>>>
>>> If people think that would make sense - I can rework the array to a list of
>>> domain pointers to avoid the limit.
>>
>> Not sure about this. What is the biggest number you've found on any
>> system? (I assume the chosen value of 31 has some headroom.)
> 
> The value 31 was taken as a practical maximum for one specific HP system
> if all of the PCI slots in all of its riser cards are occupied with GPUs.
> The value was obtained by reverse engineering their ACPI tables. Currently
> we're only concerned with the number 8 (our graphics vendors do not recommend
> installing more cards than that), but it's only a matter of time before it grows.
> I'm also not sure why this routing scheme was chosen in the first place: is it
> dictated by hardware restrictions, or were the firmware engineers simply being
> lazy? We're working with HP to determine that.

Thanks for the insight.

>> Instead I'm wondering whether this wouldn't better be a Kconfig
>> setting (or even command line controllable). There don't look to be
>> any restrictions on the precise value chosen (i.e. 2**n-1 like is
>> the case for old and new values here, for whatever reason), so a
>> simple permitted range of like 4...64 would seem fine to specify.
>> Whether the default then would want to be 8 (close to the current
>> 7) or higher (around the actually observed maximum) is a different
>> question.
> 
> I'm in favor of a command line argument here: it would be much less trouble
> if a higher limit suddenly became necessary in the field. The default IMO
> should definitely be higher than 8; I'd go with 32, which should cover our
> real-world scenarios and leave some headroom for the future.

Well, I'm concerned about the extra memory overhead. Every IRQ,
sharable or not, will get the extra slots allocated with the
current scheme. Perhaps a prereq change then would be to only
allocate multi-guest arrays for sharable IRQs, effectively
shrinking the overhead in particular for all MSI ones?

Jan
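[Editorial aside: Jan's suggestion amounts to sizing the per-IRQ guest array on demand rather than statically. A rough standalone sketch of that idea — all names are illustrative, not Xen's real irq_guest_action_t, and plain -1 returns stand in for the hypervisor's -ENOMEM/-EBUSY:]

```c
#include <assert.h>
#include <stdlib.h>

/* Allocate the guest array per IRQ only when, and as large as,
 * sharing actually requires, so non-shared IRQs (e.g. MSI) pay for
 * a single slot at most. */
struct guest;

struct irq_action {
    unsigned int nr_guests;   /* guests currently bound */
    unsigned int max_guests;  /* capacity of guests[] */
    struct guest **guests;    /* allocated on first bind */
};

static int action_bind(struct irq_action *a, struct guest *g,
                       unsigned int max_guests)
{
    if ( !a->guests )
    {
        a->guests = calloc(max_guests, sizeof(*a->guests));
        if ( !a->guests )
            return -1;        /* would be -ENOMEM in the hypervisor */
        a->max_guests = max_guests;
    }
    if ( a->nr_guests >= a->max_guests )
        return -1;            /* table full; -EBUSY in Xen terms */
    a->guests[a->nr_guests++] = g;
    return 0;
}
```

[A non-shareable IRQ would pass max_guests == 1, keeping the common case as small as today's single-guest bookkeeping.]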


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 15:36:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 15:36:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42834.77084 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkUAo-0000aY-Js; Wed, 02 Dec 2020 15:36:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42834.77084; Wed, 02 Dec 2020 15:36:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkUAo-0000aR-Gd; Wed, 02 Dec 2020 15:36:06 +0000
Received: by outflank-mailman (input) for mailman id 42834;
 Wed, 02 Dec 2020 15:36:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UQyH=FG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkUAn-0000aM-1C
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 15:36:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d95d972e-43a9-4742-9ce6-4a52e7f6393f;
 Wed, 02 Dec 2020 15:36:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CDC85AB63;
 Wed,  2 Dec 2020 15:36:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d95d972e-43a9-4742-9ce6-4a52e7f6393f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606923361; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=j3I7h9s1udaVS+hRbRz2OFRXoYsRef5P1i5uTR+suZ0=;
	b=pH7ZssuG2F6ZqNN3mDfgBKHr3nmV221P4UOLSlRpBattpx8QOYpUPRO4tTmV11JV4+KDQO
	PGLCQRugxarlE0pz59BU6VndkY8QDofq2oxE8d6KOZK8er9VP7MWqwVTMTchmwz74i6q3P
	P2RPIUan5FXg11lCjzyAEDKlyPgDrr0=
Subject: Re: [PATCH v2 09/17] xen/hypfs: move per-node function pointers into
 a dedicated struct
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-10-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ddb41dd4-485e-5ae3-9b3a-dd0aae787260@suse.com>
Date: Wed, 2 Dec 2020 16:36:01 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201201082128.15239-10-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.12.2020 09:21, Juergen Gross wrote:
> @@ -297,6 +321,7 @@ int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
>      int ret;
>  
>      ASSERT(this_cpu(hypfs_locked) == hypfs_write_locked);
> +    ASSERT(leaf->e.max_size);
>  
>      if ( ulen > leaf->e.max_size )
>          return -ENOSPC;
> @@ -357,6 +382,7 @@ int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
>      int ret;
>  
>      ASSERT(this_cpu(hypfs_locked) == hypfs_write_locked);
> +    ASSERT(leaf->e.max_size);
>  
>      /* Avoid oversized buffer allocation. */
>      if ( ulen > MAX_PARAM_SIZE )

At first glance both of these hunks look as if they don't
belong here, but I guess the ASSERT()s are being lifted here
from hypfs_write(). (The first even looks somewhat redundant
with the immediately following if().) If this understanding
of mine is correct,
Reviewed-by: Jan Beulich <jbeulich@suse.com>

> @@ -382,19 +408,20 @@ int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
>      return ret;
>  }
>  
> +int hypfs_write_deny(struct hypfs_entry_leaf *leaf,
> +                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen)
> +{
> +    return -EACCES;
> +}
> +
>  static int hypfs_write(struct hypfs_entry *entry,
>                         XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)

As a tangent, is there a reason these write functions don't take
handles of "const void"? (I realize this likely is nothing that
wants addressing right here.)

Jan
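[Editorial aside: the pattern under review gathers a node's handlers into one function-pointer struct, with read-only nodes sharing a common deny handler. A stripped-down standalone sketch — the names are illustrative rather than the real hypfs types, and the const void parameter simply mirrors the tangent raised above:]

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Per-node function pointers collected in one struct; read-only
 * nodes point their write hook at a shared deny handler instead of
 * leaving it NULL. */
struct node;

struct node_funcs {
    int (*write)(struct node *n, const void *src, unsigned int len);
};

struct node {
    const struct node_funcs *funcs;
};

/* Counterpart of hypfs_write_deny(): always refuse the write. */
static int write_deny(struct node *n, const void *src, unsigned int len)
{
    (void)n; (void)src; (void)len;
    return -EACCES;
}

static const struct node_funcs ro_funcs = { .write = write_deny };
```

[Dispatch then never needs a NULL check: every node has a callable write hook, and read-only ones share the single deny implementation.]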


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 15:41:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 15:41:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42841.77096 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkUFl-0001Wb-8m; Wed, 02 Dec 2020 15:41:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42841.77096; Wed, 02 Dec 2020 15:41:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkUFl-0001WU-3q; Wed, 02 Dec 2020 15:41:13 +0000
Received: by outflank-mailman (input) for mailman id 42841;
 Wed, 02 Dec 2020 15:41:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wvYF=FG=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kkUFk-0001WP-AY
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 15:41:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3a35072e-8951-4384-bc30-369ff31d6317;
 Wed, 02 Dec 2020 15:41:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7A00DAC2D;
 Wed,  2 Dec 2020 15:41:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3a35072e-8951-4384-bc30-369ff31d6317
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606923669; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=udBuN1ex3ZQ1V/yAGNsT7MZSXRvUWFYuB3R471utzUY=;
	b=vRV+rp0ML5zBkIVNr4h2aSqMgHY5A/Z+52ywWVAdR5qRtViYme0BKxNgRYwl3R8BqkTB6Q
	pbRLe+PrDprDTupxU+cATWlN1ZAslefVOY5AgvGtWoqOyCkIBF/+WtNXDI/7KCtRcdKGnr
	Cc9NPDkcDvglc/MaB2oKcRtWMMRGZ2A=
Subject: Re: [PATCH v2 09/17] xen/hypfs: move per-node function pointers into
 a dedicated struct
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-10-jgross@suse.com>
 <ddb41dd4-485e-5ae3-9b3a-dd0aae787260@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <d9bf90f5-26b7-d5b2-c3ae-1c8b336c487d@suse.com>
Date: Wed, 2 Dec 2020 16:41:08 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <ddb41dd4-485e-5ae3-9b3a-dd0aae787260@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit


On 02.12.20 16:36, Jan Beulich wrote:
> On 01.12.2020 09:21, Juergen Gross wrote:
>> @@ -297,6 +321,7 @@ int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
>>       int ret;
>>
>>       ASSERT(this_cpu(hypfs_locked) == hypfs_write_locked);
>> +    ASSERT(leaf->e.max_size);
>>
>>       if ( ulen > leaf->e.max_size )
>>           return -ENOSPC;
>> @@ -357,6 +382,7 @@ int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
>>       int ret;
>>
>>       ASSERT(this_cpu(hypfs_locked) == hypfs_write_locked);
>> +    ASSERT(leaf->e.max_size);
>>
>>       /* Avoid oversized buffer allocation. */
>>       if ( ulen > MAX_PARAM_SIZE )
>
> At the first glance both of these hunks look as if they
> wouldn't belong here, but I guess the ASSERT()s are getting
> lifted here from hypfs_write(). (The first looks even somewhat
> redundant with the immediately following if().) If this
> understanding of mine is correct,

It is.

> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

>
>> @@ -382,19 +408,20 @@ int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
>>       return ret;
>>   }
>>
>> +int hypfs_write_deny(struct hypfs_entry_leaf *leaf,
>> +                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen)
>> +{
>> +    return -EACCES;
>> +}
>> +
>>   static int hypfs_write(struct hypfs_entry *entry,
>>                          XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)
>
> As a tangent, is there a reason these write functions don't take
> handles of "const void"? (I realize this likely is nothing that
> wants addressing right here.)

No, not really.

I'll change that.


Juergen


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 15:42:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 15:42:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42845.77108 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkUHO-0001ef-J8; Wed, 02 Dec 2020 15:42:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42845.77108; Wed, 02 Dec 2020 15:42:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkUHO-0001eY-Ff; Wed, 02 Dec 2020 15:42:54 +0000
Received: by outflank-mailman (input) for mailman id 42845;
 Wed, 02 Dec 2020 15:42:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UQyH=FG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkUHN-0001eT-Et
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 15:42:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0854dc01-12da-491f-b673-e1a4cd091d09;
 Wed, 02 Dec 2020 15:42:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CEA41AB7F;
 Wed,  2 Dec 2020 15:42:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0854dc01-12da-491f-b673-e1a4cd091d09
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606923771; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=1aXkKaaHyUKoJj5Oe6Rs0XH7o9R1EUE9tK8pBVy7BkY=;
	b=dPOJUDYBhLc98TxAZtPAin855IzVePbD8MQHKtooagXc1+CqohePj2wwrWnOaAhCuM+qar
	b/uYXRKmb/7RxNK3Fz9ZlHE4FxBjeUTUtA8HGKUgJSTA2uJx0BaCsPNBco5s2GlvIVjQcv
	nU0LMaSQBVYAOxfcx6WbtudWlB/Oqsc=
Subject: Re: [PATCH v2 11/17] xen/hypfs: add getsize() and findentry()
 callbacks to hypfs_funcs
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-12-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e8a876c9-b1bf-62a4-d30c-a2c646cb68f7@suse.com>
Date: Wed, 2 Dec 2020 16:42:51 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201201082128.15239-12-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.12.2020 09:21, Juergen Gross wrote:
> Add a getsize() function pointer to struct hypfs_funcs for being able
> to have dynamically filled entries without the need to take the hypfs
> lock each time the contents are being generated.
> 
> For directories add a findentry callback to the vector and modify
> hypfs_get_entry_rel() to use it.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with maybe one further small adjustment:

> @@ -176,15 +188,41 @@ static int hypfs_get_path_user(char *buf,
>      return 0;
>  }
>  
> +struct hypfs_entry *hypfs_leaf_findentry(const struct hypfs_entry_dir *dir,
> +                                         const char *name,
> +                                         unsigned int name_len)
> +{
> +    return ERR_PTR(-ENOENT);
> +}

ENOENT seems odd to me here. There looks to be no counterpart to
EISDIR, so maybe ENODATA, EACCES, or EPERM?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 15:46:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 15:46:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42852.77120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkUL1-0001qb-3l; Wed, 02 Dec 2020 15:46:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42852.77120; Wed, 02 Dec 2020 15:46:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkUL1-0001qU-0e; Wed, 02 Dec 2020 15:46:39 +0000
Received: by outflank-mailman (input) for mailman id 42852;
 Wed, 02 Dec 2020 15:46:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wvYF=FG=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kkUKz-0001qP-U5
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 15:46:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ec901e13-2da2-47f2-b919-cae44d9700bf;
 Wed, 02 Dec 2020 15:46:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D5BF3AB63;
 Wed,  2 Dec 2020 15:46:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ec901e13-2da2-47f2-b919-cae44d9700bf
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606923996; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Of94Q3nLN+PvayhaMtmgKrrKbZUjhpkb3kLPQDuNWLE=;
	b=tIfSb9cGGwTNMoSHHthIG2ZUWf0Xf7sL82a3nxyIzQkA11yIO2smKQLYvxbfQFpyp5r0Ka
	efypXlWEuv0gjZLR6wfMB0a3ogWH8ryekGg6PtP0fv1qNlOEUvTmb2oGRiLkOrg10uMI28
	7k1IDPQ/NNpMOi//Ufx/uea/K0fWrVM=
Subject: Re: [PATCH v2 15/17] xen/cpupool: add cpupool directories
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Dario Faggioli <dfaggioli@suse.com>
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-16-jgross@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <89f52bed-c611-70c5-1349-63838530debd@suse.com>
Date: Wed, 2 Dec 2020 16:46:35 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201201082128.15239-16-jgross@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="hdxCukwDLpEhSpLXcecK7ulHambp58YCQ"

On 01.12.20 09:21, Juergen Gross wrote:
> Add /cpupool/<cpupool-id> directories to hypfs. Those are completely
> dynamic, so the related hypfs access functions need to be implemented.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - added const (Jan Beulich)
> - call hypfs_add_dir() in helper (Dario Faggioli)
> - switch locking to enter/exit callbacks
> ---
>   docs/misc/hypfs-paths.pandoc |   9 +++
>   xen/common/sched/cpupool.c   | 122 +++++++++++++++++++++++++++++++++++
>   2 files changed, 131 insertions(+)
>
> diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
> index 6c7b2f7ee3..aaca1cdf92 100644
> --- a/docs/misc/hypfs-paths.pandoc
> +++ b/docs/misc/hypfs-paths.pandoc
> @@ -175,6 +175,15 @@ The major version of Xen.
>
>   The minor version of Xen.
>
> +#### /cpupool/
> +
> +A directory of all current cpupools.
> +
> +#### /cpupool/*/
> +
> +The individual cpupools. Each entry is a directory with the name being the
> +cpupool-id (e.g. /cpupool/0/).
> +
>   #### /params/
>
>   A directory of runtime parameters.
> diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
> index 0db7d77219..3e17fdf95b 100644
> --- a/xen/common/sched/cpupool.c
> +++ b/xen/common/sched/cpupool.c
> @@ -13,6 +13,8 @@
>
>   #include <xen/cpu.h>
>   #include <xen/cpumask.h>
> +#include <xen/guest_access.h>
> +#include <xen/hypfs.h>
>   #include <xen/init.h>
>   #include <xen/keyhandler.h>
>   #include <xen/lib.h>
> @@ -33,6 +35,7 @@ static int cpupool_moving_cpu = -1;
>   static struct cpupool *cpupool_cpu_moving = NULL;
>   static cpumask_t cpupool_locked_cpus;
>
> +/* This lock nests inside sysctl or hypfs lock. */
>   static DEFINE_SPINLOCK(cpupool_lock);
>
>   static enum sched_gran __read_mostly opt_sched_granularity = SCHED_GRAN_cpu;
> @@ -1003,12 +1006,131 @@ static struct notifier_block cpu_nfb = {
>       .notifier_call = cpu_callback
>   };
>
> +#ifdef CONFIG_HYPFS
> +static const struct hypfs_entry *cpupool_pooldir_enter(
> +    const struct hypfs_entry *entry);
> +
> +static struct hypfs_funcs cpupool_pooldir_funcs = {
> +    .enter = cpupool_pooldir_enter,
> +    .exit = hypfs_node_exit,
> +    .read = hypfs_read_dir,
> +    .write = hypfs_write_deny,
> +    .getsize = hypfs_getsize,
> +    .findentry = hypfs_dir_findentry,
> +};
> +
> +static HYPFS_VARDIR_INIT(cpupool_pooldir, "%u", &cpupool_pooldir_funcs);
> +
> +static const struct hypfs_entry *cpupool_pooldir_enter(
> +    const struct hypfs_entry *entry)
> +{
> +    return &cpupool_pooldir.e;
> +}

I have found a more generic way to handle entering a dyndir node,
resulting in no need to have cpupool_pooldir_enter() and
cpupool_pooldir_funcs.

This will add some more lines to the previous patch, but less than
saved here.


Juergen


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 15:51:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 15:51:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42863.77132 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkUPS-0002ne-5g; Wed, 02 Dec 2020 15:51:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42863.77132; Wed, 02 Dec 2020 15:51:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkUPS-0002nX-1N; Wed, 02 Dec 2020 15:51:14 +0000
Received: by outflank-mailman (input) for mailman id 42863;
 Wed, 02 Dec 2020 15:51:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wvYF=FG=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kkUPQ-0002nS-SL
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 15:51:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fddb267f-05c3-4b22-9c25-4ebb56e15e26;
 Wed, 02 Dec 2020 15:51:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C4029AD77;
 Wed,  2 Dec 2020 15:51:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fddb267f-05c3-4b22-9c25-4ebb56e15e26
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606924270; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=l0bykKTUxbZmkR7+O5wFLkuxHXd1HeoXIre0hR6YVw8=;
	b=S+VdqnzWvC1apv+Gi5oIUiOKEruiTn9CuGKTBmCHwR71rT0XlgqmgyjU8vfcliGZBPc9Yb
	2VDH9rcBwPDnuTs9GRVJCRpDNkBMJek82hiK4ZV18ivy4jxBbrZGzDP/5rUtsADQbD5ffp
	M4Wt6ROg7rRuzlbyNdykAkzjoxci1uc=
Subject: Re: [PATCH v2 11/17] xen/hypfs: add getsize() and findentry()
 callbacks to hypfs_funcs
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-12-jgross@suse.com>
 <e8a876c9-b1bf-62a4-d30c-a2c646cb68f7@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <e2909e87-473f-dbf5-9e58-7c817ac59e3f@suse.com>
Date: Wed, 2 Dec 2020 16:51:08 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <e8a876c9-b1bf-62a4-d30c-a2c646cb68f7@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="ev5MbomkOsphj5h8NzOX96rHc0lF0rqYM"

On 02.12.20 16:42, Jan Beulich wrote:
> On 01.12.2020 09:21, Juergen Gross wrote:
>> Add a getsize() function pointer to struct hypfs_funcs for being able
>> to have dynamically filled entries without the need to take the hypfs
>> lock each time the contents are being generated.
>>
>> For directories add a findentry callback to the vector and modify
>> hypfs_get_entry_rel() to use it.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> with maybe one further small adjustment:
>
>> @@ -176,15 +188,41 @@ static int hypfs_get_path_user(char *buf,
>>       return 0;
>>   }
>>
>> +struct hypfs_entry *hypfs_leaf_findentry(const struct hypfs_entry_dir *dir,
>> +                                         const char *name,
>> +                                         unsigned int name_len)
>> +{
>> +    return ERR_PTR(-ENOENT);
>> +}
>
> ENOENT seems odd to me here. There looks to be no counterpart to
> EISDIR, so maybe ENODATA, EACCES, or EPERM?

Hmm, why?

In case I have /a/b and I'm looking for /a/b/c ENOENT seems to be just
fine?

Or I could add ENOTDIR.


Juergen

--------------2D237B77DA11AEE598BCFA87
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 16:23:13 2020
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 8/8] xen/arm: Add support for SMMUv3 driver
To: Rahul Singh <rahul.singh@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <de2101687020d18172a2b153f8977a5116d0cd66.1606406359.git.rahul.singh@arm.com>
Message-ID: <a67bb114-a4a9-651a-338b-123b350ac4b3@xen.org>
Date: Wed, 2 Dec 2020 16:22:56 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <de2101687020d18172a2b153f8977a5116d0cd66.1606406359.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Rahul,

On 26/11/2020 17:02, Rahul Singh wrote:
> Add support for ARM architected SMMUv3 implementation. It is based on
> the Linux SMMUv3 driver.
> 
> Major differences with regard to Linux driver are as follows:
> 1. Only Stage-2 translation is supported as compared to the Linux driver
>     that supports both Stage-1 and Stage-2 translations.
> 2. Use P2M  page table instead of creating one as SMMUv3 has the
>     capability to share the page tables with the CPU.
> 3. Tasklets are used in place of threaded IRQ's in Linux for event queue
>     and priority queue IRQ handling.

On the previous version, we discussed that using tasklets is not a 
suitable replacement for threaded IRQs. What's the plan to address it?

> 4. Latest version of the Linux SMMUv3 code implements the commands queue
>     access functions based on atomic operations implemented in Linux.
>     Atomic functions used by the commands queue access functions are not
>     implemented in XEN therefore we decided to port the earlier version
>     of the code. Once the proper atomic operations will be available in
>     XEN the driver can be updated.
> 5. Driver is currently supported as Tech Preview.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> ---
>   MAINTAINERS                           |   6 +
>   SUPPORT.md                            |   1 +
>   xen/drivers/passthrough/Kconfig       |  10 +
>   xen/drivers/passthrough/arm/Makefile  |   1 +
>   xen/drivers/passthrough/arm/smmu-v3.c | 986 +++++++++++++++++++++-----
>   5 files changed, 814 insertions(+), 190 deletions(-)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index dab38a6a14..1d63489eec 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -249,6 +249,12 @@ F:	xen/include/asm-arm/
>   F:	xen/include/public/arch-arm/
>   F:	xen/include/public/arch-arm.h
>   
> +ARM SMMUv3
> +M:	Bertrand Marquis <bertrand.marquis@arm.com>
> +M:	Rahul Singh <rahul.singh@arm.com>
> +S:	Supported
> +F:	xen/drivers/passthrough/arm/smmu-v3.c
> +
>   Change Log
>   M:	Paul Durrant <paul@xen.org>
>   R:	Community Manager <community.manager@xenproject.org>
> diff --git a/SUPPORT.md b/SUPPORT.md
> index ab02aca5f4..e402c7202d 100644
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -68,6 +68,7 @@ For the Cortex A57 r0p0 - r1p1, see Errata 832075.
>       Status, ARM SMMUv1: Supported, not security supported
>       Status, ARM SMMUv2: Supported, not security supported
>       Status, Renesas IPMMU-VMSA: Supported, not security supported
> +    Status, ARM SMMUv3: Tech Preview

Please move this right after "ARM SMMUv2".

>   
>   ### ARM/GICv3 ITS
>   
> diff --git a/xen/drivers/passthrough/Kconfig b/xen/drivers/passthrough/Kconfig
> index 0036007ec4..5b71c59f47 100644
> --- a/xen/drivers/passthrough/Kconfig
> +++ b/xen/drivers/passthrough/Kconfig
> @@ -13,6 +13,16 @@ config ARM_SMMU
>   	  Say Y here if your SoC includes an IOMMU device implementing the
>   	  ARM SMMU architecture.
>   
> +config ARM_SMMU_V3
> +	bool "ARM Ltd. System MMU Version 3 (SMMUv3) Support" if EXPERT
> +	depends on ARM_64
> +	---help---
> +	 Support for implementations of the ARM System MMU architecture
> +	 version 3.
> +
> +	 Say Y here if your system includes an IOMMU device implementing
> +	 the ARM SMMUv3 architecture.
> +
>   config IPMMU_VMSA
>   	bool "Renesas IPMMU-VMSA found in R-Car Gen3 SoCs"
>   	depends on ARM_64
> diff --git a/xen/drivers/passthrough/arm/Makefile b/xen/drivers/passthrough/arm/Makefile
> index fcd918ea3e..c5fb3b58a5 100644
> --- a/xen/drivers/passthrough/arm/Makefile
> +++ b/xen/drivers/passthrough/arm/Makefile
> @@ -1,3 +1,4 @@
>   obj-y += iommu.o iommu_helpers.o iommu_fwspec.o
>   obj-$(CONFIG_ARM_SMMU) += smmu.o
>   obj-$(CONFIG_IPMMU_VMSA) += ipmmu-vmsa.o
> +obj-$(CONFIG_ARM_SMMU_V3) += smmu-v3.o
> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> index 55d1cba194..8f2337e7f2 100644
> --- a/xen/drivers/passthrough/arm/smmu-v3.c
> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> @@ -2,36 +2,280 @@
>   /*
>    * IOMMU API for ARM architected SMMUv3 implementations.
>    *
> - * Copyright (C) 2015 ARM Limited
> + * Based on Linux's SMMUv3 driver:
> + *    drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> + *    commit: 951cbbc386ff01b50da4f46387e994e81d9ab431
> + * and Xen's SMMU driver:
> + *    xen/drivers/passthrough/arm/smmu.c

I would suggest listing the major differences here as well.

>    *
> - * Author: Will Deacon <will.deacon@arm.com>
> + * Copyright (C) 2015 ARM Limited Will Deacon <will.deacon@arm.com>

Why did you merge the Author and copyright line?

>    *
> - * This driver is powered by bad coffee and bombay mix.
> + * Copyright (C) 2020 Arm Ltd.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + *
> + */
> +
> +#include <xen/acpi.h>
> +#include <xen/config.h>
> +#include <xen/delay.h>
> +#include <xen/errno.h>
> +#include <xen/err.h>
> +#include <xen/irq.h>
> +#include <xen/lib.h>
> +#include <xen/list.h>
> +#include <xen/mm.h>
> +#include <xen/rbtree.h>
> +#include <xen/sched.h>
> +#include <xen/sizes.h>
> +#include <xen/vmap.h>
> +#include <asm/atomic.h>
> +#include <asm/device.h>
> +#include <asm/io.h>
> +#include <asm/platform.h>
> +#include <asm/iommu_fwspec.h>

All the headers seem to be alphabetically ordered but this one.

> +
> +/* Linux compatibility functions. */

Some of the helpers here seem to be similar to the SMMU driver. Can we 
have a header that can be shared between the two?

> +typedef paddr_t dma_addr_t;
> +typedef unsigned int gfp_t;
> +
> +#define platform_device device
> +
> +#define GFP_KERNEL 0
> +
> +/* Alias to Xen device tree helpers */
> +#define device_node dt_device_node
> +#define of_phandle_args dt_phandle_args
> +#define of_device_id dt_device_match
> +#define of_match_node dt_match_node
> +#define of_property_read_u32(np, pname, out) (!dt_property_read_u32(np, pname, out))
> +#define of_property_read_bool dt_property_read_bool
> +#define of_parse_phandle_with_args dt_parse_phandle_with_args
> +
> +/* Alias to Xen lock functions */
> +#define mutex spinlock
> +#define mutex_init spin_lock_init
> +#define mutex_lock spin_lock
> +#define mutex_unlock spin_unlock

Hmm... mutexes are not spinlocks. Can you explain why it is fine to 
switch to spinlocks here?

> +
> +/* Alias to Xen time functions */
> +#define ktime_t s_time_t
> +#define ktime_get()             (NOW())
> +#define ktime_add_us(t,i)       (t + MICROSECS(i))
> +#define ktime_compare(t,i)      (t > (i))
> +
> +/* Alias to Xen allocation helpers */
> +#define kzalloc(size, flags)    _xzalloc(size, sizeof(void *))
> +#define kfree xfree
> +#define devm_kzalloc(dev, size, flags)  _xzalloc(size, sizeof(void *))
> +
> +/* Device logger functions */
> +#define dev_name(dev) dt_node_full_name(dev->of_node)
> +#define dev_dbg(dev, fmt, ...)      \
> +    printk(XENLOG_DEBUG "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
> +#define dev_notice(dev, fmt, ...)   \
> +    printk(XENLOG_INFO "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
> +#define dev_warn(dev, fmt, ...)     \
> +    printk(XENLOG_WARNING "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
> +#define dev_err(dev, fmt, ...)      \
> +    printk(XENLOG_ERR "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
> +#define dev_info(dev, fmt, ...)     \
> +    printk(XENLOG_INFO "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
> +#define dev_err_ratelimited(dev, fmt, ...)      \
> +    printk(XENLOG_ERR "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
> +
> +/*
> + * Periodically poll an address and wait between reads in us until a
> + * condition is met or a timeout occurs.
> + */
> +#define readx_poll_timeout(op, addr, val, cond, sleep_us, timeout_us) \
> +({ \
> +     s_time_t deadline = NOW() + MICROSECS(timeout_us); \
> +     for (;;) { \
> +        (val) = op(addr); \
> +        if (cond) \
> +            break; \
> +        if (NOW() > deadline) { \
> +            (val) = op(addr); \
> +            break; \
> +        } \
> +        udelay(sleep_us); \
> +     } \
> +     (cond) ? 0 : -ETIMEDOUT; \
> +})
> +
> +#define readl_relaxed_poll_timeout(addr, val, cond, delay_us, timeout_us) \
> +    readx_poll_timeout(readl_relaxed, addr, val, cond, delay_us, timeout_us)
> +
> +#define FIELD_PREP(_mask, _val)         \
> +    (((typeof(_mask))(_val) << (__builtin_ffsll(_mask) - 1)) & (_mask))
> +
> +#define FIELD_GET(_mask, _reg)          \
> +    (typeof(_mask))(((_reg) & (_mask)) >> (__builtin_ffsll(_mask) - 1))
> +
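These two helpers are easy to sanity-check in isolation. A minimal standalone version (`EXAMPLE_MASK` is an arbitrary illustrative mask; `__builtin_ffsll` is the GCC/Clang builtin the macros rely on):

```c
#include <assert.h>

/* As in the patch: shift a value into / out of a contiguous bit mask. */
#define FIELD_PREP(_mask, _val) \
    (((__typeof__(_mask))(_val) << (__builtin_ffsll(_mask) - 1)) & (_mask))
#define FIELD_GET(_mask, _reg) \
    ((__typeof__(_mask))(((_reg) & (_mask)) >> (__builtin_ffsll(_mask) - 1)))

#define EXAMPLE_MASK 0x0000ff00ULL /* bits [15:8], chosen for the demo */

static unsigned long long pack(unsigned long long v)
{
    return FIELD_PREP(EXAMPLE_MASK, v);
}

static unsigned long long unpack(unsigned long long reg)
{
    return FIELD_GET(EXAMPLE_MASK, reg);
}
```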
> +#define WRITE_ONCE(x, val)                  \
> +do {                                        \
> +    *(volatile typeof(x) *)&(x) = (val);    \
> +} while (0)

Please implement it with write_atomic() or ACCESS_ONCE().
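For reference, a minimal sketch of what that could look like, mirroring the usual volatile-cast definition of ACCESS_ONCE (treat the exact form as an assumption; the point is that a volatile access forces the compiler to emit exactly one load/store rather than tearing or caching it):

```c
#include <assert.h>

/* Force a single, untorn access through a volatile-qualified lvalue. */
#define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))

#define WRITE_ONCE(x, val) ((void)(ACCESS_ONCE(x) = (val)))
#define READ_ONCE(x)       ACCESS_ONCE(x)

static unsigned int shared;

static void demo(void)
{
    WRITE_ONCE(shared, 42u);
}
```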

> +
> +/* Xen: Stub out DMA domain related functions */
> +#define iommu_get_dma_cookie(dom) 0
> +#define iommu_put_dma_cookie(dom)
> +
> +/*
> + * Helpers for DMA allocation. Just the function name is reused for
> + * porting code, these allocation are not managed allocations
>    */
> +static void *dmam_alloc_coherent(struct device *dev, size_t size,
> +                                 paddr_t *dma_handle, gfp_t gfp)
> +{
> +    void *vaddr;
> +    unsigned long alignment = size;
> +
> +    /*
> +     * _xzalloc requires that the (align & (align -1)) = 0. Most of the
> +     * allocations in SMMU code should send the right value for size. In
> +     * case this is not true print a warning and align to the size of a
> +     * (void *)
> +     */
> +    if ( size & (size - 1) )

We should use the same coding style within the file. As the file is 
imported from Linux, new code should follow Linux coding style.

> +    {
> +        printk(XENLOG_WARNING "SMMUv3: Fixing alignment for the DMA buffer\n");
> +        alignment = sizeof(void *);
> +    }
> +
> +    vaddr = _xzalloc(size, alignment);
> +    if ( !vaddr )
> +    {
> +        printk(XENLOG_ERR "SMMUv3: DMA allocation failed\n");
> +        return NULL;
> +    }
> +
> +    *dma_handle = virt_to_maddr(vaddr);
> +
> +    return vaddr;
> +}
> +
> +/* Xen: Type definitions for iommu_domain */
> +#define IOMMU_DOMAIN_UNMANAGED 0
> +#define IOMMU_DOMAIN_DMA 1
> +#define IOMMU_DOMAIN_IDENTITY 2
> +
> +/* Xen specific code. */
> +struct iommu_domain {
> +    /* Runtime SMMU configuration for this iommu_domain */
> +    atomic_t ref;
> +    /*
> +     * Used to link iommu_domain contexts for a same domain.
> +     * There is at least one per-SMMU to used by the domain.
> +     */
> +    struct list_head    list;
> +};
> +
> +/* Describes information required for a Xen domain */
> +struct arm_smmu_xen_domain {
> +    spinlock_t      lock;
> +
> +    /* List of iommu domains associated to this domain */
> +    struct list_head    contexts;
> +};
> +
> +/*
> + * Information about each device stored in dev->archdata.iommu
> + * The dev->archdata.iommu stores the iommu_domain (runtime configuration of
> + * the SMMU).
> + */
> +struct arm_smmu_xen_device {
> +    struct iommu_domain *domain;
> +};
> +
> +/* Keep a list of devices associated with this driver */
> +static DEFINE_SPINLOCK(arm_smmu_devices_lock);
> +static LIST_HEAD(arm_smmu_devices);
> +
> +
> +static inline void *dev_iommu_priv_get(struct device *dev)
> +{
> +    struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> +
> +    return fwspec && fwspec->iommu_priv ? fwspec->iommu_priv : NULL;
> +}
> +
> +static inline void dev_iommu_priv_set(struct device *dev, void *priv)
> +{
> +    struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> +
> +    fwspec->iommu_priv = priv;
> +}
> +
> +int dt_property_match_string(const struct dt_device_node *np,
> +                             const char *propname, const char *string)

I think this should be implemented in device_tree.c
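For the record, the core of the lookup is simple enough to model standalone: a device-tree string-list property is a sequence of NUL-terminated strings packed into one buffer, and the function returns the index of the match. A self-contained sketch of that loop (illustrative only; `sample` is a made-up "interrupt-names" value):

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

/*
 * Return the index of 'string' within a packed NUL-separated string
 * list of 'len' bytes, or a negative errno.
 */
static int stringlist_match(const char *buf, size_t len, const char *string)
{
    const char *p = buf, *end = buf + len;
    size_t l;
    int i;

    for (i = 0; p < end; i++, p += l) {
        l = strnlen(p, end - p) + 1;
        if (p + l > end)
            return -EILSEQ;    /* unterminated final string */
        if (strcmp(string, p) == 0)
            return i;          /* found it; return index */
    }
    return -ENODATA;
}

/* Hypothetical "interrupt-names" property for the demo. */
static const char sample[] = "combined\0eventq\0priq";

static int sample_index(const char *name)
{
    return stringlist_match(sample, sizeof(sample), name);
}
```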

> +{
> +    const struct dt_property *dtprop = dt_find_property(np, propname, NULL);
> +    size_t l;
> +    int i;
> +    const char *p, *end;
> +
> +    if ( !dtprop )
> +        return -EINVAL;
> +
> +    if ( !dtprop->value )
> +        return -ENODATA;
> +
> +    p = dtprop->value;
> +    end = p + dtprop->length;
> +
> +    for ( i = 0; p < end; i++, p += l )
> +    {
> +        l = strnlen(p, end - p) + 1;
> +
> +        if ( p + l > end )
> +            return -EILSEQ;
> +
> +        if ( strcmp(string, p) == 0 )
> +            return i; /* Found it; return index */
> +    }
> +
> +    return -ENODATA;
> +}
> +
> +static int platform_get_irq_byname_optional(struct device *dev,
> +                                            const char *name)
> +{
> +    int index, ret;
> +    struct dt_device_node *np  = dev_to_dt(dev);
> +
> +    if ( unlikely(!name) )
> +        return -EINVAL;
> +
> +    index = dt_property_match_string(np, "interrupt-names", name);
> +    if ( index < 0 )
> +    {
> +        dev_info(dev, "IRQ %s not found\n", name);
> +        return index;
> +    }
>   
> -#include <linux/acpi.h>
> -#include <linux/acpi_iort.h>
> -#include <linux/bitfield.h>
> -#include <linux/bitops.h>
> -#include <linux/crash_dump.h>
> -#include <linux/delay.h>
> -#include <linux/dma-iommu.h>
> -#include <linux/err.h>
> -#include <linux/interrupt.h>
> -#include <linux/io-pgtable.h>
> -#include <linux/iommu.h>
> -#include <linux/iopoll.h>
> -#include <linux/module.h>
> -#include <linux/msi.h>
> -#include <linux/of.h>
> -#include <linux/of_address.h>
> -#include <linux/of_iommu.h>
> -#include <linux/of_platform.h>
> -#include <linux/pci.h>
> -#include <linux/pci-ats.h>
> -#include <linux/platform_device.h>
> -
> -#include <linux/amba/bus.h>
> +    ret = platform_get_irq(np, index);
> +    if ( ret < 0 )
> +    {
> +        dev_err(dev, "failed to get irq index %d\n", index);
> +        return -ENODEV;
> +    }
> +
> +    return ret;
> +}
> +
> +/* Start of Linux SMMUv3 code */
>   
>   /* MMIO registers */
>   #define ARM_SMMU_IDR0			0x0
> @@ -507,6 +751,7 @@ struct arm_smmu_s2_cfg {
>   	u16				vmid;
>   	u64				vttbr;
>   	u64				vtcr;
> +	struct domain		*domain;
>   };
>   
>   struct arm_smmu_strtab_cfg {
> @@ -567,8 +812,13 @@ struct arm_smmu_device {
>   
>   	struct arm_smmu_strtab_cfg	strtab_cfg;
>   
> -	/* IOMMU core code handle */
> -	struct iommu_device		iommu;
> +	/* Need to keep a list of SMMU devices */
> +	struct list_head		devices;
> +
> +	/* Tasklets for handling evts/faults and pci page request IRQs*/
> +	struct tasklet		evtq_irq_tasklet;
> +	struct tasklet		priq_irq_tasklet;
> +	struct tasklet		combined_irq_tasklet;
>   };
>   
>   /* SMMU private data for each master */
> @@ -1110,7 +1360,7 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
>   }
>   
>   /* IRQ and event handlers */
> -static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
> +static void arm_smmu_evtq_thread(void *dev)
>   {
>   	int i;
>   	struct arm_smmu_device *smmu = dev;
> @@ -1140,7 +1390,6 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
>   	/* Sync our overflow flag, as we believe we're up to speed */
>   	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
>   		    Q_IDX(llq, llq->cons);
> -	return IRQ_HANDLED;
>   }
>   
>   static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
> @@ -1181,7 +1430,7 @@ static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
>   	}
>   }
>   
> -static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
> +static void arm_smmu_priq_thread(void *dev)
>   {
>   	struct arm_smmu_device *smmu = dev;
>   	struct arm_smmu_queue *q = &smmu->priq.q;
> @@ -1200,12 +1449,12 @@ static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
>   	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
>   		      Q_IDX(llq, llq->cons);
>   	queue_sync_cons_out(q);
> -	return IRQ_HANDLED;
>   }
>   
>   static int arm_smmu_device_disable(struct arm_smmu_device *smmu);
>   
> -static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
> +static void arm_smmu_gerror_handler(int irq, void *dev,
> +				struct cpu_user_regs *regs)
>   {
>   	u32 gerror, gerrorn, active;
>   	struct arm_smmu_device *smmu = dev;
> @@ -1215,7 +1464,7 @@ static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
>   
>   	active = gerror ^ gerrorn;
>   	if (!(active & GERROR_ERR_MASK))
> -		return IRQ_NONE; /* No errors pending */
> +		return; /* No errors pending */
>   
>   	dev_warn(smmu->dev,
>   		 "unexpected global error reported (0x%08x), this could be serious\n",
> @@ -1248,26 +1497,42 @@ static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
>   		arm_smmu_cmdq_skip_err(smmu);
>   
>   	writel(gerror, smmu->base + ARM_SMMU_GERRORN);
> -	return IRQ_HANDLED;
>   }
>   
> -static irqreturn_t arm_smmu_combined_irq_thread(int irq, void *dev)
> +static void arm_smmu_combined_irq_handler(int irq, void *dev,
> +				struct cpu_user_regs *regs)
> +{
> +	struct arm_smmu_device *smmu = (struct arm_smmu_device *)dev;

The cast is not necessary.

> +
> +	arm_smmu_gerror_handler(irq, dev, regs);
> +
> +	tasklet_schedule(&(smmu->combined_irq_tasklet));
> +}
> +
> +static void arm_smmu_combined_irq_thread(void *dev)
>   {
>   	struct arm_smmu_device *smmu = dev;
>   
> -	arm_smmu_evtq_thread(irq, dev);
> +	arm_smmu_evtq_thread(dev);
>   	if (smmu->features & ARM_SMMU_FEAT_PRI)
> -		arm_smmu_priq_thread(irq, dev);
> -
> -	return IRQ_HANDLED;
> +		arm_smmu_priq_thread(dev);
>   }
>   
> -static irqreturn_t arm_smmu_combined_irq_handler(int irq, void *dev)
> +static void arm_smmu_evtq_irq_tasklet(int irq, void *dev,
> +				struct cpu_user_regs *regs)
>   {
> -	arm_smmu_gerror_handler(irq, dev);
> -	return IRQ_WAKE_THREAD;
> +	struct arm_smmu_device *smmu = (struct arm_smmu_device *)dev;

Ditto.

> +
> +	tasklet_schedule(&(smmu->evtq_irq_tasklet));
>   }
>   
> +static void arm_smmu_priq_irq_tasklet(int irq, void *dev,
> +				struct cpu_user_regs *regs)
> +{
> +	struct arm_smmu_device *smmu = (struct arm_smmu_device *)dev;

Ditto.

> +
> +	tasklet_schedule(&(smmu->priq_irq_tasklet));
> +}
>   
>   /* IO_PGTABLE API */
>   static void arm_smmu_tlb_inv_context(void *cookie)
> @@ -1354,27 +1619,69 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
>   }
>   
>   static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
> -				       struct arm_smmu_master *master,
> -				       struct io_pgtable_cfg *pgtbl_cfg)
> +				       struct arm_smmu_master *master)
>   {
>   	int vmid;
> +	u64 reg;

I think uint32_t is sufficient here.

>   	struct arm_smmu_device *smmu = smmu_domain->smmu;
>   	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
>   
> +	/* VTCR */
> +	reg = VTCR_RES1 | VTCR_SH0_IS | VTCR_IRGN0_WBWA | VTCR_ORGN0_WBWA;

VTCR_RES1 will set bit 31 to 1. However, from the spec it looks like the 
equivalent bit in the entry (see 5.2 in ARM IHI 0070C.a) will be RES0.

> +
> +	switch (PAGE_SIZE) {
> +	case SZ_4K:
> +		reg |= VTCR_TG0_4K;
> +		break;
> +	case SZ_16K:
> +		reg |= VTCR_TG0_16K;
> +		break;
> +	case SZ_64K:
> +		reg |= VTCR_TG0_4K;
> +		break;
> +	}

I would just handle 4K here and add a BUILD_BUG_ON(PAGE_SIZE != SZ_4K).
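i.e. something along these lines (a sketch only: BUILD_BUG_ON is modelled here with C11 `_Static_assert`, Xen has its own definition, and the `VTCR_*`/`SZ_4K` values are taken as assumptions from the patch, with TG0 4K being encoding 0 in bits [15:14]):

```c
#include <assert.h>

/* Compile-time check: breaks the build if cond is true. */
#define BUILD_BUG_ON(cond) _Static_assert(!(cond), "build-time check failed")

#define SZ_4K       0x1000UL
#define PAGE_SIZE   SZ_4K          /* assumption: Xen on Arm uses 4K pages */
#define VTCR_TG0_4K (0ULL << 14)   /* TG0 = 0b00 -> 4K granule */

static unsigned long long vtcr_tg0(void)
{
    /* Only 4K is supported, so don't pretend otherwise. */
    BUILD_BUG_ON(PAGE_SIZE != SZ_4K);
    return VTCR_TG0_4K;
}
```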

> +
> +	switch (smmu->oas) {

AFAICT smmu->oas and ...

> +	case 32:
> +		reg |= VTCR_PS(_AC(0x0,ULL));
> +		break;
> +	case 36:
> +		reg |= VTCR_PS(_AC(0x1,ULL));
> +		break;
> +	case 40:
> +		reg |= VTCR_PS(_AC(0x2,ULL));
> +		break;
> +	case 42:
> +		reg |= VTCR_PS(_AC(0x3,ULL));
> +		break;
> +	case 44:
> +		reg |= VTCR_PS(_AC(0x4,ULL));
> +		break;
> +		case 48:
> +		reg |= VTCR_PS(_AC(0x5,ULL));
> +		break;
> +	case 52:
> +		reg |= VTCR_PS(_AC(0x6,ULL));
> +		break;
> +	}
> +
> +	reg |= VTCR_T0SZ(64ULL - smmu->ias);

... are directly taken from the SMMU configuration. However, as we share 
the P2M, we need to make sure the values match what the CPU is using.

For the IAS, you will want to use p2m_ipa_bits and for the output, we 
will want to cap to PADDR_BITS.
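In other words, something like the sketch below: cap what the SMMU reports to what the shared P2M actually uses, then translate the capped output size into the PS field encoding. Illustrative only; the PS encodings follow the usual VTCR_EL2.PS values, and `PADDR_BITS`/`p2m_ipa_bits` are the Xen-side limits referred to above (48 is assumed here for the demo):

```c
#include <assert.h>

#define PADDR_BITS 48 /* assumption for the demo: 48-bit physical addresses */

static unsigned int min_u(unsigned int a, unsigned int b)
{
    return a < b ? a : b;
}

/* PS encoding for a given output address size (VTCR_EL2.PS values). */
static unsigned int vtcr_ps(unsigned int oas_bits)
{
    switch (oas_bits) {
    case 32: return 0;
    case 36: return 1;
    case 40: return 2;
    case 42: return 3;
    case 44: return 4;
    case 48: return 5;
    case 52: return 6;
    default: return 5; /* cap at 48 bits when in doubt */
    }
}

/* Never claim more output bits than the P2M can actually produce. */
static unsigned int effective_oas(unsigned int smmu_oas, unsigned int host_pa)
{
    return min_u(smmu_oas, host_pa);
}
```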

> +	reg |= VTCR_SL0(0x2);

Similar to above, the starting level will depend on how the P2M was 
configured.

> +	reg |= VTCR_VS;

AFAICT, bit 179 (bit 19 in the word) indicates whether an AArch32 
or AArch64 translation table is used. However, bit 19 in VTCR_EL2 
indicates whether we are using an 8-bit or 16-bit VMID.

> +
> +	cfg->vtcr   = reg;

It would be better to initialize vtcr exactly at the same place as Linux 
does. This would make it easier to match the code.

> +
>   	vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
>   	if (vmid < 0)
>   		return vmid;
> +	cfg->vmid  = (u16)vmid;
> +
> +	cfg->vttbr  = page_to_maddr(cfg->domain->arch.p2m.root);
> +
> +	printk(XENLOG_DEBUG
> +		   "SMMUv3: d%u: vmid 0x%x vtcr 0x%"PRIpaddr" p2maddr 0x%"PRIpaddr"\n",
> +		   cfg->domain->domain_id, cfg->vmid, cfg->vtcr, cfg->vttbr);
>   
> -	vtcr = &pgtbl_cfg->arm_lpae_s2_cfg.vtcr;
> -	cfg->vmid	= (u16)vmid;
> -	cfg->vttbr	= pgtbl_cfg->arm_lpae_s2_cfg.vttbr;
> -	cfg->vtcr	= FIELD_PREP(STRTAB_STE_2_VTCR_S2T0SZ, vtcr->tsz) |
> -			  FIELD_PREP(STRTAB_STE_2_VTCR_S2SL0, vtcr->sl) |
> -			  FIELD_PREP(STRTAB_STE_2_VTCR_S2IR0, vtcr->irgn) |
> -			  FIELD_PREP(STRTAB_STE_2_VTCR_S2OR0, vtcr->orgn) |
> -			  FIELD_PREP(STRTAB_STE_2_VTCR_S2SH0, vtcr->sh) |
> -			  FIELD_PREP(STRTAB_STE_2_VTCR_S2TG, vtcr->tg) |
> -			  FIELD_PREP(STRTAB_STE_2_VTCR_S2PS, vtcr->ps);
>   	return 0;
>   }
>   
> @@ -1382,28 +1689,12 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
>   				    struct arm_smmu_master *master)
>   {
>   	int ret;
> -	unsigned long ias, oas;
> -	int (*finalise_stage_fn)(struct arm_smmu_domain *,
> -				 struct arm_smmu_master *,
> -				 struct io_pgtable_cfg *);
>   	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> -	struct arm_smmu_device *smmu = smmu_domain->smmu;
>   
>   	/* Restrict the stage to what we can actually support */
>   	smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
>   
> -	switch (smmu_domain->stage) {
> -	case ARM_SMMU_DOMAIN_NESTED:
> -	case ARM_SMMU_DOMAIN_S2:
> -		ias = smmu->ias;
> -		oas = smmu->oas;
> -		finalise_stage_fn = arm_smmu_domain_finalise_s2;
> -		break;
> -	default:
> -		return -EINVAL;
> -	}
> -
> -	ret = finalise_stage_fn(smmu_domain, master, &pgtbl_cfg);

It is not entirely clear why this code is removed here rather than in 
the previous patch.

> +	ret = arm_smmu_domain_finalise_s2(smmu_domain, master);
>   	if (ret < 0) {
>   		return ret;
>   	}
> @@ -1553,7 +1844,8 @@ static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
>   		return -ENOMEM;
>   	}
>   
> -	if (!WARN_ON(q->base_dma & (qsz - 1))) {
> +	WARN_ON(q->base_dma & (qsz - 1));

This is a call to rework our WARN_ON() in Xen.

> +	if (unlikely(q->base_dma & (qsz - 1))) {
>   		dev_info(smmu->dev, "allocated %u entries for %s\n",
>   			 1 << q->llq.max_n_shift, name);
>   	}
> @@ -1758,9 +2050,7 @@ static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
>   	/* Request interrupt lines */
>   	irq = smmu->evtq.q.irq;
>   	if (irq) {
> -		ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
> -						arm_smmu_evtq_thread,
> -						IRQF_ONESHOT,
> +		ret = request_irq(irq, 0, arm_smmu_evtq_irq_tasklet,
>   						"arm-smmu-v3-evtq", smmu);
>   		if (ret < 0)
>   			dev_warn(smmu->dev, "failed to enable evtq irq\n");
> @@ -1770,8 +2060,8 @@ static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
>   
>   	irq = smmu->gerr_irq;
>   	if (irq) {
> -		ret = devm_request_irq(smmu->dev, irq, arm_smmu_gerror_handler,
> -				       0, "arm-smmu-v3-gerror", smmu);
> +		ret = request_irq(irq, 0, arm_smmu_gerror_handler,
> +						"arm-smmu-v3-gerror", smmu);
>   		if (ret < 0)
>   			dev_warn(smmu->dev, "failed to enable gerror irq\n");
>   	} else {
> @@ -1781,11 +2071,8 @@ static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
>   	if (smmu->features & ARM_SMMU_FEAT_PRI) {
>   		irq = smmu->priq.q.irq;
>   		if (irq) {
> -			ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
> -							arm_smmu_priq_thread,
> -							IRQF_ONESHOT,
> -							"arm-smmu-v3-priq",
> -							smmu);
> +			ret = request_irq(irq, 0, arm_smmu_priq_irq_tasklet,
> +							"arm-smmu-v3-priq", smmu);
>   			if (ret < 0)
>   				dev_warn(smmu->dev,
>   					 "failed to enable priq irq\n");
> @@ -1814,11 +2101,8 @@ static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
>   		 * Cavium ThunderX2 implementation doesn't support unique irq
>   		 * lines. Use a single irq line for all the SMMUv3 interrupts.
>   		 */
> -		ret = devm_request_threaded_irq(smmu->dev, irq,
> -					arm_smmu_combined_irq_handler,
> -					arm_smmu_combined_irq_thread,
> -					IRQF_ONESHOT,
> -					"arm-smmu-v3-combined-irq", smmu);
> +		ret = request_irq(irq, 0, arm_smmu_combined_irq_handler,
> +						"arm-smmu-v3-combined-irq", smmu);
>   		if (ret < 0)
>   			dev_warn(smmu->dev, "failed to enable combined irq\n");
>   	} else
> @@ -1857,7 +2141,7 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
>   	reg = readl_relaxed(smmu->base + ARM_SMMU_CR0);
>   	if (reg & CR0_SMMUEN) {
>   		dev_warn(smmu->dev, "SMMU currently enabled! Resetting...\n");
> -		WARN_ON(is_kdump_kernel() && !disable_bypass);
> +		WARN_ON(!disable_bypass);
>   		arm_smmu_update_gbpa(smmu, GBPA_ABORT, 0);
>   	}
>   
> @@ -1952,8 +2236,11 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
>   		return ret;
>   	}
>   
> -	if (is_kdump_kernel())
> -		enables &= ~(CR0_EVTQEN | CR0_PRIQEN);
> +	/* Initialize tasklets for threaded IRQs*/
> +	tasklet_init(&smmu->evtq_irq_tasklet, arm_smmu_evtq_thread, smmu);
> +	tasklet_init(&smmu->priq_irq_tasklet, arm_smmu_priq_thread, smmu);
> +	tasklet_init(&smmu->combined_irq_tasklet, arm_smmu_combined_irq_thread,
> +				 smmu);
>   
>   	/* Enable the SMMU interface, or ensure bypass */
>   	if (!bypass || disable_bypass) {
> @@ -2195,7 +2482,7 @@ static inline int arm_smmu_device_acpi_probe(struct platform_device *pdev,
>   static int arm_smmu_device_dt_probe(struct platform_device *pdev,
>   				    struct arm_smmu_device *smmu)
>   {
> -	struct device *dev = &pdev->dev;
> +	struct device *dev = pdev;
>   	u32 cells;
>   	int ret = -EINVAL;
>   
> @@ -2219,130 +2506,449 @@ static unsigned long arm_smmu_resource_size(struct arm_smmu_device *smmu)
>   		return SZ_128K;
>   }
>   
> +/* Start of Xen specific code. */
>   static int arm_smmu_device_probe(struct platform_device *pdev)
>   {
> -	int irq, ret;
> -	struct resource *res;
> -	resource_size_t ioaddr;
> -	struct arm_smmu_device *smmu;
> -	struct device *dev = &pdev->dev;
> -	bool bypass;
> +    int irq, ret;
> +    paddr_t ioaddr, iosize;
> +    struct arm_smmu_device *smmu;
> +    bool bypass;
> +
> +    smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL);
> +    if ( !smmu )
> +    {
> +        dev_err(pdev, "failed to allocate arm_smmu_device\n");
> +        return -ENOMEM;
> +    }
> +    smmu->dev = pdev;
> +
> +    if ( pdev->of_node )
> +    {
> +        ret = arm_smmu_device_dt_probe(pdev, smmu);
> +    } else
> +    {
> +        ret = arm_smmu_device_acpi_probe(pdev, smmu);
> +        if ( ret == -ENODEV )
> +            return ret;
> +    }
> +
> +    /* Set bypass mode according to firmware probing result */
> +    bypass = !!ret;

AFAICT, bypass would be set if the device-tree is buggy. For Xen, I 
think it would be saner to not continue, as running in bypass mode 
would break isolation.
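
Concretely, the probe could fail instead of falling back to bypass. A 
rough, untested sketch of what I mean:

```diff
-    /* Set bypass mode according to firmware probing result */
-    bypass = !!ret;
+    /* Don't continue in bypass mode: that would break isolation. */
+    if ( ret )
+        return ret;
```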

> +
> +    /* Base address */
> +    ret = dt_device_get_address(dev_to_dt(pdev), 0, &ioaddr, &iosize);
> +    if( ret )
> +        return -ENODEV;
> +
> +    if ( iosize < arm_smmu_resource_size(smmu) )
> +    {
> +        dev_err(pdev, "MMIO region too small (%lx)\n", iosize);
> +        return -EINVAL;
> +    }
> +
> +    /*
> +     * Don't map the IMPLEMENTATION DEFINED regions, since they may contain
> +     * the PMCG registers which are reserved by the PMU driver.
> +     */

This comment doesn't seem to match the code. Did you intend to...


> +    smmu->base = ioremap_nocache(ioaddr, iosize);

... use ARM_SMMU_REG_SZ, which would map only the necessary region?
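
For instance (untested, and assuming ARM_SMMU_REG_SZ from the original 
Linux driver is kept in the port):

```diff
-    smmu->base = ioremap_nocache(ioaddr, iosize);
+    smmu->base = ioremap_nocache(ioaddr, ARM_SMMU_REG_SZ);
```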

> +    if ( IS_ERR(smmu->base) )
> +        return PTR_ERR(smmu->base);
> +
> +    if ( iosize > SZ_64K )
> +    {
> +        smmu->page1 = ioremap_nocache(ioaddr + SZ_64K, ARM_SMMU_REG_SZ);
> +        if ( IS_ERR(smmu->page1) )
> +            return PTR_ERR(smmu->page1);
> +    }
> +    else
> +    {
> +        smmu->page1 = smmu->base;
> +    }
> +
> +    /* Interrupt lines */
> +
> +    irq = platform_get_irq_byname_optional(pdev, "combined");
> +    if ( irq > 0 )
> +        smmu->combined_irq = irq;
> +    else
> +    {
> +        irq = platform_get_irq_byname_optional(pdev, "eventq");
> +        if ( irq > 0 )
> +            smmu->evtq.q.irq = irq;
> +
> +        irq = platform_get_irq_byname_optional(pdev, "priq");
> +        if ( irq > 0 )
> +            smmu->priq.q.irq = irq;
> +
> +        irq = platform_get_irq_byname_optional(pdev, "gerror");
> +        if ( irq > 0 )
> +            smmu->gerr_irq = irq;
> +    }
> +    /* Probe the h/w */
> +    ret = arm_smmu_device_hw_probe(smmu);
> +    if ( ret )
> +        return ret;
> +
> +    /* Initialise in-memory data structures */
> +    ret = arm_smmu_init_structures(smmu);
> +    if ( ret )
> +        return ret;
> +
> +    /* Reset the device */
> +    ret = arm_smmu_device_reset(smmu, bypass);
> +    if ( ret )
> +        return ret;
> +
> +    /*
> +     * Keep a list of all probed devices. This will be used to query
> +     * the smmu devices based on the fwnode.
> +     */
> +    INIT_LIST_HEAD(&smmu->devices);
> +
> +    spin_lock(&arm_smmu_devices_lock);
> +    list_add(&smmu->devices, &arm_smmu_devices);
> +    spin_unlock(&arm_smmu_devices_lock);
> +
> +    return 0;
> +}
> +
> +static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
> +{
> +    struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
> +    struct iommu_domain *io_domain;
> +
> +    spin_lock(&xen_domain->lock);
> +
> +    list_for_each_entry( io_domain, &xen_domain->contexts, list )
> +    {
> +        /*
> +         * Only invalidate the context when SMMU is present.
> +         * This is because the context initialization is delayed
> +         * until a master has been added.
> +         */
> +        if ( unlikely(!ACCESS_ONCE(to_smmu_domain(io_domain)->smmu)) )
> +            continue;
> +
> +        arm_smmu_tlb_inv_context(to_smmu_domain(io_domain));
> +    }
>   
> -	smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL);
> -	if (!smmu) {
> -		dev_err(dev, "failed to allocate arm_smmu_device\n");
> -		return -ENOMEM;
> -	}
> -	smmu->dev = dev;
> +    spin_unlock(&xen_domain->lock);
> +
> +    return 0;
> +}
> +
> +static int __must_check arm_smmu_iotlb_flush(struct domain *d, dfn_t dfn,
> +                                             unsigned long page_count,
> +                                             unsigned int flush_flags)
> +{
> +    return arm_smmu_iotlb_flush_all(d);
> +}
>   
> -	if (dev->of_node) {
> -		ret = arm_smmu_device_dt_probe(pdev, smmu);
> -	} else {
> -		ret = arm_smmu_device_acpi_probe(pdev, smmu);
> -		if (ret == -ENODEV)
> -			return ret;
> -	}
> +static struct arm_smmu_device *arm_smmu_get_by_dev(struct device *dev)
> +{
> +    struct arm_smmu_device *smmu = NULL;
>   
> -	/* Set bypass mode according to firmware probing result */
> -	bypass = !!ret;
> +    spin_lock(&arm_smmu_devices_lock);
> +    list_for_each_entry( smmu, &arm_smmu_devices, devices )
> +    {
> +        if ( smmu->dev  == dev )
> +        {
> +            spin_unlock(&arm_smmu_devices_lock);
> +            return smmu;
> +        }
> +    }
> +    spin_unlock(&arm_smmu_devices_lock);
>   
> -	/* Base address */
> -	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> -	if (resource_size(res) < arm_smmu_resource_size(smmu)) {
> -		dev_err(dev, "MMIO region too small (%pr)\n", res);
> -		return -EINVAL;
> -	}
> -	ioaddr = res->start;

The code removed is basically the same as the code you added, apart 
from the coding style changes. This patch is already quite long to 
review, so can we please keep the changes to the strict minimum?

If you want to do clean-ups, then they should be done in separate 
patches before/after.

> +    return NULL;
> +}
>   
> -	/*
> -	 * Don't map the IMPLEMENTATION DEFINED regions, since they may contain
> -	 * the PMCG registers which are reserved by the PMU driver.
> -	 */
> -	smmu->base = arm_smmu_ioremap(dev, ioaddr, ARM_SMMU_REG_SZ);
> -	if (IS_ERR(smmu->base))
> -		return PTR_ERR(smmu->base);
> -
> -	if (arm_smmu_resource_size(smmu) > SZ_64K) {
> -		smmu->page1 = arm_smmu_ioremap(dev, ioaddr + SZ_64K,
> -					       ARM_SMMU_REG_SZ);
> -		if (IS_ERR(smmu->page1))
> -			return PTR_ERR(smmu->page1);
> -	} else {
> -		smmu->page1 = smmu->base;
> -	}
> +/* Probing and initialisation functions */
> +static struct iommu_domain *arm_smmu_get_domain(struct domain *d,
> +                                                struct device *dev)
> +{
> +    struct iommu_domain *io_domain;
> +    struct arm_smmu_domain *smmu_domain;
> +    struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> +    struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
> +    struct arm_smmu_device *smmu = arm_smmu_get_by_dev(fwspec->iommu_dev);
> +
> +    if ( !smmu )
> +        return NULL;
>   
> -	/* Interrupt lines */
> +    /*
> +     * Loop through the &xen_domain->contexts to locate a context
> +     * assigned to this SMMU
> +     */
> +    list_for_each_entry( io_domain, &xen_domain->contexts, list )
> +    {
> +        smmu_domain = to_smmu_domain(io_domain);
> +        if ( smmu_domain->smmu == smmu )
> +            return io_domain;
> +    }
>   
> -	irq = platform_get_irq_byname_optional(pdev, "combined");
> -	if (irq > 0)
> -		smmu->combined_irq = irq;
> -	else {
> -		irq = platform_get_irq_byname_optional(pdev, "eventq");
> -		if (irq > 0)
> -			smmu->evtq.q.irq = irq;
> +    return NULL;
> +}
>   
> -		irq = platform_get_irq_byname_optional(pdev, "priq");
> -		if (irq > 0)
> -			smmu->priq.q.irq = irq;
> +static void arm_smmu_destroy_iommu_domain(struct iommu_domain *io_domain)
> +{
> +    list_del(&io_domain->list);
> +    arm_smmu_domain_free(io_domain);
> +}
>   
> -		irq = platform_get_irq_byname_optional(pdev, "gerror");
> -		if (irq > 0)
> -			smmu->gerr_irq = irq;
> -	}
> -	/* Probe the h/w */
> -	ret = arm_smmu_device_hw_probe(smmu);
> -	if (ret)
> -		return ret;
> +static int arm_smmu_assign_dev(struct domain *d, u8 devfn,
> +                               struct device *dev, u32 flag)
> +{
> +    int ret = 0;
> +    struct iommu_domain *io_domain;
> +    struct arm_smmu_domain *smmu_domain;
> +    struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
>   
> -	/* Initialise in-memory data structures */
> -	ret = arm_smmu_init_structures(smmu);
> -	if (ret)
> -		return ret;
> +    if ( !dev->archdata.iommu )
> +    {
> +        dev->archdata.iommu = xzalloc(struct arm_smmu_xen_device);
> +        if ( !dev->archdata.iommu )
> +            return -ENOMEM;
> +    }
>   
> -	/* Record our private device structure */
> -	platform_set_drvdata(pdev, smmu);
> +    spin_lock(&xen_domain->lock);
>   
> -	/* Reset the device */
> -	ret = arm_smmu_device_reset(smmu, bypass);
> -	if (ret)
> -		return ret;
> +    /*
> +     * Check to see if an iommu_domain already exists for this xen domain
> +     * under the same SMMU
> +     */
> +    io_domain = arm_smmu_get_domain(d, dev);
> +    if ( !io_domain )
> +    {
> +        io_domain = arm_smmu_domain_alloc(IOMMU_DOMAIN_DMA);
> +        if ( !io_domain )
> +        {
> +            ret = -ENOMEM;
> +            goto out;
> +        }
>   
> -	/* And we're up. Go go go! */
> -	ret = iommu_device_sysfs_add(&smmu->iommu, dev, NULL,
> -				     "smmu3.%pa", &ioaddr);
> -	if (ret)
> -		return ret;
> +        smmu_domain = to_smmu_domain(io_domain);
> +        smmu_domain->s2_cfg.domain = d;
>   
> -	iommu_device_set_ops(&smmu->iommu, &arm_smmu_ops);
> -	iommu_device_set_fwnode(&smmu->iommu, dev->fwnode);
> +        /* Chain the new context to the domain */
> +        list_add(&io_domain->list, &xen_domain->contexts);
>   
> -	ret = iommu_device_register(&smmu->iommu);
> -	if (ret) {
> -		dev_err(dev, "Failed to register iommu\n");
> -		return ret;
> -	}
> +    }
> +
> +    ret = arm_smmu_attach_dev(io_domain, dev);
> +    if ( ret )
> +    {
> +        if ( io_domain->ref.counter == 0 )
> +            arm_smmu_destroy_iommu_domain(io_domain);
> +    }
> +    else
> +    {
> +        atomic_inc(&io_domain->ref);
> +    }
>   
> -	return arm_smmu_set_bus_ops(&arm_smmu_ops);
> +out:
> +    spin_unlock(&xen_domain->lock);
> +    return ret;
>   }
>   
> -static int arm_smmu_device_remove(struct platform_device *pdev)
> +static int arm_smmu_deassign_dev(struct domain *d, struct device *dev)
>   {
> -	struct arm_smmu_device *smmu = platform_get_drvdata(pdev);
> +    struct iommu_domain *io_domain = arm_smmu_get_domain(d, dev);
> +    struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
> +    struct arm_smmu_domain *arm_smmu = to_smmu_domain(io_domain);
> +    struct arm_smmu_master *master = dev_iommu_priv_get(dev);
>   
> -	arm_smmu_set_bus_ops(NULL);
> -	iommu_device_unregister(&smmu->iommu);
> -	iommu_device_sysfs_remove(&smmu->iommu);
> -	arm_smmu_device_disable(smmu);
> +    if ( !arm_smmu || arm_smmu->s2_cfg.domain != d )
> +    {
> +        dev_err(dev, " not attached to domain %d\n", d->domain_id);
> +        return -ESRCH;
> +    }
>   
> -	return 0;
> +    spin_lock(&xen_domain->lock);
> +
> +    arm_smmu_detach_dev(master);
> +    atomic_dec(&io_domain->ref);
> +
> +    if ( io_domain->ref.counter == 0 )
> +        arm_smmu_destroy_iommu_domain(io_domain);
> +
> +    spin_unlock(&xen_domain->lock);
> +
> +    return 0;
> +}
> +
> +static int arm_smmu_reassign_dev(struct domain *s, struct domain *t,
> +                                 u8 devfn,  struct device *dev)
> +{
> +    int ret = 0;
> +
> +    /* Don't allow remapping on other domain than hwdom */
> +    if ( t && t != hardware_domain )
> +        return -EPERM;
> +
> +    if ( t == s )
> +        return 0;
> +
> +    ret = arm_smmu_deassign_dev(s, dev);
> +    if ( ret )
> +        return ret;
> +
> +    if ( t )
> +    {
> +        /* No flags are defined for ARM. */
> +        ret = arm_smmu_assign_dev(t, devfn, dev, 0);
> +        if ( ret )
> +            return ret;
> +    }
> +
> +    return 0;
> +}
> +
> +static int arm_smmu_iommu_xen_domain_init(struct domain *d)
> +{
> +    struct arm_smmu_xen_domain *xen_domain;
> +
> +    xen_domain = xzalloc(struct arm_smmu_xen_domain);
> +    if ( !xen_domain )
> +        return -ENOMEM;
> +
> +    spin_lock_init(&xen_domain->lock);
> +    INIT_LIST_HEAD(&xen_domain->contexts);
> +
> +    dom_iommu(d)->arch.priv = xen_domain;
> +
> +    return 0;
> +}
> +
> +static void __hwdom_init arm_smmu_iommu_hwdom_init(struct domain *d)
> +{
> +}
> +
> +static void arm_smmu_iommu_xen_domain_teardown(struct domain *d)
> +{
> +    struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
> +
> +    ASSERT(list_empty(&xen_domain->contexts));
> +    xfree(xen_domain);
> +}
> +
> +static int arm_smmu_dt_xlate(struct device *dev,
> +                             const struct dt_phandle_args *args)
> +{
> +    int ret;
> +    struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> +
> +    ret = iommu_fwspec_add_ids(dev, args->args, 1);
> +    if ( ret )
> +        return ret;
> +
> +    if ( dt_device_is_protected(dev_to_dt(dev)) )
> +    {
> +        dev_err(dev, "Already added to SMMUv3\n");
> +        return -EEXIST;
> +    }
> +
> +    /* Let Xen know that the master device is protected by an IOMMU. */
> +    dt_device_set_protected(dev_to_dt(dev));
> +
> +    dev_info(dev, "Added master device (SMMUv3 %s StreamIds %u)\n",
> +            dev_name(fwspec->iommu_dev), fwspec->num_ids);
> +
> +    return 0;
>   }
>   
> -static void arm_smmu_device_shutdown(struct platform_device *pdev)

I think this function should have been dropped in the previous patch.

> +static int arm_smmu_add_device(u8 devfn, struct device *dev)
>   {
> -	arm_smmu_device_remove(pdev);
> +    int i, ret;
> +    struct arm_smmu_device *smmu;
> +    struct arm_smmu_master *master;
> +    struct iommu_fwspec *fwspec;
> +
> +    fwspec = dev_iommu_fwspec_get(dev);
> +    if ( !fwspec )
> +        return -ENODEV;
> +
> +    smmu = arm_smmu_get_by_dev(fwspec->iommu_dev);
> +    if ( !smmu )
> +        return -ENODEV;
> +
> +    master = xzalloc(struct arm_smmu_master);
> +    if ( !master )
> +        return -ENOMEM;
> +
> +    master->dev = dev;
> +    master->smmu = smmu;
> +    master->sids = fwspec->ids;
> +    master->num_sids = fwspec->num_ids;
> +
> +    dev_iommu_priv_set(dev, master);
> +
> +    /* Check the SIDs are in range of the SMMU and our stream table */
> +    for ( i = 0; i < master->num_sids; i++ )
> +    {
> +        u32 sid = master->sids[i];
> +
> +        if ( !arm_smmu_sid_in_range(smmu, sid) )
> +        {
> +            ret = -ERANGE;
> +            goto err_free_master;
> +        }
> +
> +        /* Ensure l2 strtab is initialised */
> +        if ( smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB )
> +        {
> +            ret = arm_smmu_init_l2_strtab(smmu, sid);
> +            if ( ret )
> +                goto err_free_master;
> +        }
> +    }
> +
> +    return 0;
> +
> +err_free_master:
> +    xfree(master);
> +    dev_iommu_priv_set(dev, NULL);
> +    return ret;
>   }
>   
> -static const struct of_device_id arm_smmu_of_match[] = {
> -	{ .compatible = "arm,smmu-v3", },
> -	{ },
> +static const struct iommu_ops arm_smmu_iommu_ops = {
> +    .init = arm_smmu_iommu_xen_domain_init,
> +    .hwdom_init = arm_smmu_iommu_hwdom_init,
> +    .teardown = arm_smmu_iommu_xen_domain_teardown,
> +    .iotlb_flush = arm_smmu_iotlb_flush,
> +    .iotlb_flush_all = arm_smmu_iotlb_flush_all,
> +    .assign_device = arm_smmu_assign_dev,
> +    .reassign_device = arm_smmu_reassign_dev,
> +    .map_page = arm_iommu_map_page,
> +    .unmap_page = arm_iommu_unmap_page,
> +    .dt_xlate = arm_smmu_dt_xlate,
> +    .add_device = arm_smmu_add_device,
> +};
> +
> +static const struct dt_device_match arm_smmu_of_match[] = {
> +    { .compatible = "arm,smmu-v3", },
> +    { },
>   };
> +
> +static __init int arm_smmu_dt_init(struct dt_device_node *dev,
> +                                   const void *data)
> +{
> +    int rc;
> +
> +    /*
> +     * Even if the device can't be initialized, we don't want to
> +     * give the SMMU device to dom0.
> +     */
> +    dt_device_set_used_by(dev, DOMID_XEN);
> +
> +    rc = arm_smmu_device_probe(dt_to_dev(dev));
> +    if ( rc )
> +        return rc;
> +
> +    iommu_set_ops(&arm_smmu_iommu_ops);
> +    return 0;
> +}
> +
> +DT_DEVICE_START(smmuv3, "ARM SMMU V3", DEVICE_IOMMU)
> +    .dt_match = arm_smmu_of_match,
> +    .init = arm_smmu_dt_init,
> +DT_DEVICE_END
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 16:28:10 2020
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 8/8] xen/arm: Add support for SMMUv3 driver
Thread-Topic: [PATCH v2 8/8] xen/arm: Add support for SMMUv3 driver
Thread-Index: AQHWxBX/ttp8YMNqT0KunkbXuSldmKnjI/iAgADj3wA=
Date: Wed, 2 Dec 2020 16:27:21 +0000
Message-ID: <4338735A-1655-4B5F-A493-13D6021C2FBB@arm.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <de2101687020d18172a2b153f8977a5116d0cd66.1606406359.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2012011749550.1100@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2012011749550.1100@sstabellini-ThinkPad-T480s>

Hello Stefano,

> On 2 Dec 2020, at 2:51 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Thu, 26 Nov 2020, Rahul Singh wrote:
>> Add support for ARM architected SMMUv3 implementation. It is based on
>> the Linux SMMUv3 driver.
>> 
>> Major differences with regard to Linux driver are as follows:
>> 1. Only Stage-2 translation is supported as compared to the Linux driver
>>    that supports both Stage-1 and Stage-2 translations.
>> 2. Use P2M  page table instead of creating one as SMMUv3 has the
>>    capability to share the page tables with the CPU.
>> 3. Tasklets are used in place of threaded IRQ's in Linux for event queue
>>    and priority queue IRQ handling.
>> 4. Latest version of the Linux SMMUv3 code implements the commands queue
>>    access functions based on atomic operations implemented in Linux.
>>    Atomic functions used by the commands queue access functions are not
>>    implemented in XEN therefore we decided to port the earlier version
>>    of the code. Once the proper atomic operations will be available in
>>    XEN the driver can be updated.
>> 5. Driver is currently supported as Tech Preview.
> 
> This patch is big and was difficult to review, nonetheless I tried :-)

Thanks again for reviewing the code.
> 
> That said, the code is self-contained, marked as Tech Preview, and not
> at risk of making other things unstable, so it is low risk to accept it.
> 
> Comments below.
> 
> 
>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>> ---
>> MAINTAINERS                              |   6 +
>> SUPPORT.md                               |   1 +
>> xen/drivers/passthrough/Kconfig          |  10 +
>> xen/drivers/passthrough/arm/Makefile     |   1 +
>> xen/drivers/passthrough/arm/smmu-v3.c    | 986 +++++++++++++++++++++-----
>> 5 files changed, 814 insertions(+), 190 deletions(-)
>> 
>> diff --git a/MAINTAINERS b/MAINTAINERS
>> index dab38a6a14..1d63489eec 100644
>> --- a/MAINTAINERS
>> +++ b/MAINTAINERS
>> @@ -249,6 +249,12 @@ F:	xen/include/asm-arm/
>> F:	xen/include/public/arch-arm/
>> F:	xen/include/public/arch-arm.h
>> 
>> +ARM SMMUv3
>> +M:	Bertrand Marquis <bertrand.marquis@arm.com>
>> +M:	Rahul Singh <rahul.singh@arm.com>
>> +S:	Supported
>> +F:	xen/drivers/passthrough/arm/smmu-v3.c
>> +
>> Change Log
>> M:	Paul Durrant <paul@xen.org>
>> R:	Community Manager <community.manager@xenproject.org>
>> diff --git a/SUPPORT.md b/SUPPORT.md
>> index ab02aca5f4..e402c7202d 100644
>> --- a/SUPPORT.md
>> +++ b/SUPPORT.md
>> @@ -68,6 +68,7 @@ For the Cortex A57 r0p0 - r1p1, see Errata 832075.
>>     Status, ARM SMMUv1: Supported, not security supported
>>     Status, ARM SMMUv2: Supported, not security supported
>>     Status, Renesas IPMMU-VMSA: Supported, not security supported
>> +    Status, ARM SMMUv3: Tech Preview
>> 
>> ### A
Uk0vR0lDdjMgSVRTDQo+PiANCj4+IGRpZmYgLS1naXQgYS94ZW4vZHJpdmVycy9wYXNzdGhyb3Vn
aC9LY29uZmlnIGIveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvS2NvbmZpZw0KPj4gaW5kZXggMDAz
NjAwN2VjNC4uNWI3MWM1OWY0NyAxMDA2NDQNCj4+IC0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJv
dWdoL0tjb25maWcNCj4+ICsrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL0tjb25maWcNCj4+
IEBAIC0xMyw2ICsxMywxNiBAQCBjb25maWcgQVJNX1NNTVUNCj4+IAkgIFNheSBZIGhlcmUgaWYg
eW91ciBTb0MgaW5jbHVkZXMgYW4gSU9NTVUgZGV2aWNlIGltcGxlbWVudGluZyB0aGUNCj4+IAkg
IEFSTSBTTU1VIGFyY2hpdGVjdHVyZS4NCj4+IA0KPj4gK2NvbmZpZyBBUk1fU01NVV9WMw0KPj4g
Kwlib29sICJBUk0gTHRkLiBTeXN0ZW0gTU1VIFZlcnNpb24gMyAoU01NVXYzKSBTdXBwb3J0IiBp
ZiBFWFBFUlQNCj4+ICsJZGVwZW5kcyBvbiBBUk1fNjQNCj4+ICsJLS0taGVscC0tLQ0KPj4gKwkg
U3VwcG9ydCBmb3IgaW1wbGVtZW50YXRpb25zIG9mIHRoZSBBUk0gU3lzdGVtIE1NVSBhcmNoaXRl
Y3R1cmUNCj4+ICsJIHZlcnNpb24gMy4NCj4+ICsNCj4+ICsJIFNheSBZIGhlcmUgaWYgeW91ciBz
eXN0ZW0gaW5jbHVkZXMgYW4gSU9NTVUgZGV2aWNlIGltcGxlbWVudGluZw0KPj4gKwkgdGhlIEFS
TSBTTU1VdjMgYXJjaGl0ZWN0dXJlLg0KPj4gKw0KPj4gY29uZmlnIElQTU1VX1ZNU0ENCj4+IAli
b29sICJSZW5lc2FzIElQTU1VLVZNU0EgZm91bmQgaW4gUi1DYXIgR2VuMyBTb0NzIg0KPj4gCWRl
cGVuZHMgb24gQVJNXzY0DQo+PiBkaWZmIC0tZ2l0IGEveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gv
YXJtL01ha2VmaWxlIGIveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvYXJtL01ha2VmaWxlDQo+PiBp
bmRleCBmY2Q5MThlYTNlLi5jNWZiM2I1OGE1IDEwMDY0NA0KPj4gLS0tIGEveGVuL2RyaXZlcnMv
cGFzc3Rocm91Z2gvYXJtL01ha2VmaWxlDQo+PiArKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3Vn
aC9hcm0vTWFrZWZpbGUNCj4+IEBAIC0xLDMgKzEsNCBAQA0KPj4gb2JqLXkgKz0gaW9tbXUubyBp
b21tdV9oZWxwZXJzLm8gaW9tbXVfZndzcGVjLm8NCj4+IG9iai0kKENPTkZJR19BUk1fU01NVSkg
Kz0gc21tdS5vDQo+PiBvYmotJChDT05GSUdfSVBNTVVfVk1TQSkgKz0gaXBtbXUtdm1zYS5vDQo+
PiArb2JqLSQoQ09ORklHX0FSTV9TTU1VX1YzKSArPSBzbW11LXYzLm8NCj4+IGRpZmYgLS1naXQg
YS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9hcm0vc21tdS12My5jIGIveGVuL2RyaXZlcnMvcGFz
c3Rocm91Z2gvYXJtL3NtbXUtdjMuYw0KPj4gaW5kZXggNTVkMWNiYTE5NC4uOGYyMzM3ZTdmMiAx
MDA2NDQNCj4+IC0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL2FybS9zbW11LXYzLmMNCj4+
ICsrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL2FybS9zbW11LXYzLmMNCj4+IEBAIC0yLDM2
ICsyLDI4MCBAQA0KPj4gLyoNCj4+ICAqIElPTU1VIEFQSSBmb3IgQVJNIGFyY2hpdGVjdGVkIFNN
TVV2MyBpbXBsZW1lbnRhdGlvbnMuDQo+PiAgKg0KPj4gLSAqIENvcHlyaWdodCAoQykgMjAxNSBB
Uk0gTGltaXRlZA0KPj4gKyAqIEJhc2VkIG9uIExpbnV4J3MgU01NVXYzIGRyaXZlcjoNCj4+ICsg
KiAgICBkcml2ZXJzL2lvbW11L2FybS9hcm0tc21tdS12My9hcm0tc21tdS12My5jDQo+PiArICog
ICAgY29tbWl0OiA5NTFjYmJjMzg2ZmYwMWI1MGRhNGY0NjM4N2U5OTRlODFkOWFiNDMxDQo+PiAr
ICogYW5kIFhlbidzIFNNTVUgZHJpdmVyOg0KPj4gKyAqICAgIHhlbi9kcml2ZXJzL3Bhc3N0aHJv
dWdoL2FybS9zbW11LmMNCj4+ICAqDQo+PiAtICogQXV0aG9yOiBXaWxsIERlYWNvbiA8d2lsbC5k
ZWFjb25AYXJtLmNvbT4NCj4+ICsgKiBDb3B5cmlnaHQgKEMpIDIwMTUgQVJNIExpbWl0ZWQgV2ls
bCBEZWFjb24gPHdpbGwuZGVhY29uQGFybS5jb20+DQo+PiAgKg0KPj4gLSAqIFRoaXMgZHJpdmVy
IGlzIHBvd2VyZWQgYnkgYmFkIGNvZmZlZSBhbmQgYm9tYmF5IG1peC4NCj4+ICsgKiBDb3B5cmln
aHQgKEMpIDIwMjAgQXJtIEx0ZC4NCj4+ICsgKg0KPj4gKyAqIFRoaXMgcHJvZ3JhbSBpcyBmcmVl
IHNvZnR3YXJlOyB5b3UgY2FuIHJlZGlzdHJpYnV0ZSBpdCBhbmQvb3IgbW9kaWZ5DQo+PiArICog
aXQgdW5kZXIgdGhlIHRlcm1zIG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZSB2ZXJz
aW9uIDIgYXMNCj4+ICsgKiBwdWJsaXNoZWQgYnkgdGhlIEZyZWUgU29mdHdhcmUgRm91bmRhdGlv
bi4NCj4+ICsgKg0KPj4gKyAqIFRoaXMgcHJvZ3JhbSBpcyBkaXN0cmlidXRlZCBpbiB0aGUgaG9w
ZSB0aGF0IGl0IHdpbGwgYmUgdXNlZnVsLA0KPj4gKyAqIGJ1dCBXSVRIT1VUIEFOWSBXQVJSQU5U
WTsgd2l0aG91dCBldmVuIHRoZSBpbXBsaWVkIHdhcnJhbnR5IG9mDQo+PiArICogTUVSQ0hBTlRB
QklMSVRZIG9yIEZJVE5FU1MgRk9SIEEgUEFSVElDVUxBUiBQVVJQT1NFLiAgU2VlIHRoZQ0KPj4g
KyAqIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlIGZvciBtb3JlIGRldGFpbHMuDQo+PiArICoN
Cj4+ICsgKiBZb3Ugc2hvdWxkIGhhdmUgcmVjZWl2ZWQgYSBjb3B5IG9mIHRoZSBHTlUgR2VuZXJh
bCBQdWJsaWMgTGljZW5zZQ0KPj4gKyAqIGFsb25nIHdpdGggdGhpcyBwcm9ncmFtLiAgSWYgbm90
LCBzZWUgPGh0dHA6Ly93d3cuZ251Lm9yZy9saWNlbnNlcy8+Lg0KPj4gKyAqDQo+PiArICovDQo+
PiArDQo+PiArI2luY2x1ZGUgPHhlbi9hY3BpLmg+DQo+PiArI2luY2x1ZGUgPHhlbi9jb25maWcu
aD4NCj4+ICsjaW5jbHVkZSA8eGVuL2RlbGF5Lmg+DQo+PiArI2luY2x1ZGUgPHhlbi9lcnJuby5o
Pg0KPj4gKyNpbmNsdWRlIDx4ZW4vZXJyLmg+DQo+PiArI2luY2x1ZGUgPHhlbi9pcnEuaD4NCj4+
ICsjaW5jbHVkZSA8eGVuL2xpYi5oPg0KPj4gKyNpbmNsdWRlIDx4ZW4vbGlzdC5oPg0KPj4gKyNp
bmNsdWRlIDx4ZW4vbW0uaD4NCj4+ICsjaW5jbHVkZSA8eGVuL3JidHJlZS5oPg0KPj4gKyNpbmNs
dWRlIDx4ZW4vc2NoZWQuaD4NCj4+ICsjaW5jbHVkZSA8eGVuL3NpemVzLmg+DQo+PiArI2luY2x1
ZGUgPHhlbi92bWFwLmg+DQo+PiArI2luY2x1ZGUgPGFzbS9hdG9taWMuaD4NCj4+ICsjaW5jbHVk
ZSA8YXNtL2RldmljZS5oPg0KPj4gKyNpbmNsdWRlIDxhc20vaW8uaD4NCj4+ICsjaW5jbHVkZSA8
YXNtL3BsYXRmb3JtLmg+DQo+PiArI2luY2x1ZGUgPGFzbS9pb21tdV9md3NwZWMuaD4NCj4+ICsN
Cj4+ICsvKiBMaW51eCBjb21wYXRpYmlsaXR5IGZ1bmN0aW9ucy4gKi8NCj4+ICt0eXBlZGVmIHBh
ZGRyX3QgZG1hX2FkZHJfdDsNCj4+ICt0eXBlZGVmIHVuc2lnbmVkIGludCBnZnBfdDsNCj4+ICsN
Cj4+ICsjZGVmaW5lIHBsYXRmb3JtX2RldmljZSBkZXZpY2UNCj4+ICsNCj4+ICsjZGVmaW5lIEdG
UF9LRVJORUwgMA0KPj4gKw0KPj4gKy8qIEFsaWFzIHRvIFhlbiBkZXZpY2UgdHJlZSBoZWxwZXJz
ICovDQo+PiArI2RlZmluZSBkZXZpY2Vfbm9kZSBkdF9kZXZpY2Vfbm9kZQ0KPj4gKyNkZWZpbmUg
b2ZfcGhhbmRsZV9hcmdzIGR0X3BoYW5kbGVfYXJncw0KPj4gKyNkZWZpbmUgb2ZfZGV2aWNlX2lk
IGR0X2RldmljZV9tYXRjaA0KPj4gKyNkZWZpbmUgb2ZfbWF0Y2hfbm9kZSBkdF9tYXRjaF9ub2Rl
DQo+PiArI2RlZmluZSBvZl9wcm9wZXJ0eV9yZWFkX3UzMihucCwgcG5hbWUsIG91dCkgKCFkdF9w
cm9wZXJ0eV9yZWFkX3UzMihucCwgcG5hbWUsIG91dCkpDQo+PiArI2RlZmluZSBvZl9wcm9wZXJ0
eV9yZWFkX2Jvb2wgZHRfcHJvcGVydHlfcmVhZF9ib29sDQo+PiArI2RlZmluZSBvZl9wYXJzZV9w
aGFuZGxlX3dpdGhfYXJncyBkdF9wYXJzZV9waGFuZGxlX3dpdGhfYXJncw0KPiANCj4gR2l2ZW4g
YWxsIHRoZSBjaGFuZ2VzIHRvIHRoZSBmaWxlIGJ5IHRoZSBwcmV2aW91cyBwYXRjaGVzIHdlIGFy
ZQ0KPiBiYXNpY2FsbHkgZnVsbHkgKG9yIGFsbW9zdCBmdWxseSkgYWRhcHRpbmcgdGhpcyBjb2Rl
IHRvIFhlbi4NCj4gDQo+IFNvIGF0IHRoYXQgcG9pbnQgSSB3b25kZXIgaWYgd2Ugc2hvdWxkIGp1
c3QgYXMgd2VsbCBtYWtlIHRoZXNlIGNoYW5nZXMNCj4gKGUuZy4gcy9vZl9waGFuZGxlX2FyZ3Mv
ZHRfcGhhbmRsZV9hcmdzL2cpIHRvIHRoZSBjb2RlIHRvby4NCg0KWWVzLCB5b3UgYXJlIHJpZ2h0
IHRoYXQgaXMgd2h5IGluIHRoZSBmaXJzdCB2ZXJzaW9uIG9mIHRoZSBwYXRjaCBJIG1vZGlmeWlu
ZyB0aGUgY29kZSB0byBtYWtlIGl0IGZ1bGx5IFhFTiBjb21wYXRpYmxlLiBCdXQgaW4gdGhpcyBw
YXRjaCBzZXJpZXMgSSB1c2UgdGhlIExpbnV4IGNvbXBhdGliaWxpdHkgZnVuY3Rpb24gdG8gaGF2
ZSBMaW51eCBjb2RlIHVubW9kaWZpZWQuSSBhbHNvIHByZWZlciB0byBtYWtlIGNoYW5nZXMgKHMv
b2ZfcGhhbmRsZV9hcmdzL2R0X3BoYW5kbGVfYXJncy9nKS4NCg0KPiANCj4gSnVsaWVuLCBSYWh1
bCwgd2hhdCBkbyB5b3UgZ3V5cyB0aGluaz8gDQo+IA0KPiANCj4+ICsvKiBBbGlhcyB0byBYZW4g
bG9jayBmdW5jdGlvbnMgKi8NCj4gDQo+IEkgdGhpbmsgdGhpcyBkZXNlcnZlcyBhdCBsZWFzdCBv
bmUgc3RhdGVtZW50IHRvIGV4cGxhaW4gd2h5IGl0IGlzIE9LLg0KDQpBY2suIEkgV2lsbCBhZGQg
dGhlIGNvbW1lbnQuDQo+IA0KPiBBbHNvLCBhIHNpbWlsYXIgY29tbWVudCB0byB0aGUgb25lIGFi
b3ZlOiBtYXliZSB3ZSBzaG91bGQgYWRkIGEgY291cGxlDQo+IG9mIG1vcmUgcHJlcGFyYXRpb24g
cGF0Y2hlcyB0byBzL211dGV4L3NwaW5sb2NrL2cgYW5kIGNoYW5nZSB0aGUgdGltZQ0KPiBhbmQg
YWxsb2NhdGlvbiBmdW5jdGlvbnMgdG9vLg0KPiANCj4gDQo+PiArI2RlZmluZSBtdXRleCBzcGlu
bG9jaw0KPj4gKyNkZWZpbmUgbXV0ZXhfaW5pdCBzcGluX2xvY2tfaW5pdA0KPj4gKyNkZWZpbmUg
bXV0ZXhfbG9jayBzcGluX2xvY2sNCj4+ICsjZGVmaW5lIG11dGV4X3VubG9jayBzcGluX3VubG9j
aw0KPj4gKw0KPj4gKy8qIEFsaWFzIHRvIFhlbiB0aW1lIGZ1bmN0aW9ucyAqLw0KPj4gKyNkZWZp
bmUga3RpbWVfdCBzX3RpbWVfdA0KPj4gKyNkZWZpbmUga3RpbWVfZ2V0KCkgICAgICAgICAgICAg
KE5PVygpKQ0KPj4gKyNkZWZpbmUga3RpbWVfYWRkX3VzKHQsaSkgICAgICAgKHQgKyBNSUNST1NF
Q1MoaSkpDQo+PiArI2RlZmluZSBrdGltZV9jb21wYXJlKHQsaSkgICAgICAodCA+IChpKSkNCj4+
ICsNCj4+ICsvKiBBbGlhcyB0byBYZW4gYWxsb2NhdGlvbiBoZWxwZXJzICovDQo+PiArI2RlZmlu
ZSBremFsbG9jKHNpemUsIGZsYWdzKSAgICBfeHphbGxvYyhzaXplLCBzaXplb2Yodm9pZCAqKSkN
Cj4+ICsjZGVmaW5lIGtmcmVlIHhmcmVlDQo+PiArI2RlZmluZSBkZXZtX2t6YWxsb2MoZGV2LCBz
aXplLCBmbGFncykgIF94emFsbG9jKHNpemUsIHNpemVvZih2b2lkICopKQ0KPj4gKw0KPj4gKy8q
IERldmljZSBsb2dnZXIgZnVuY3Rpb25zICovDQo+PiArI2RlZmluZSBkZXZfbmFtZShkZXYpIGR0
X25vZGVfZnVsbF9uYW1lKGRldi0+b2Zfbm9kZSkNCj4+ICsjZGVmaW5lIGRldl9kYmcoZGV2LCBm
bXQsIC4uLikgICAgICBcDQo+PiArICAgIHByaW50ayhYRU5MT0dfREVCVUcgIlNNTVV2MzogJXM6
ICIgZm10LCBkZXZfbmFtZShkZXYpLCAjIyBfX1ZBX0FSR1NfXykNCj4+ICsjZGVmaW5lIGRldl9u
b3RpY2UoZGV2LCBmbXQsIC4uLikgICBcDQo+PiArICAgIHByaW50ayhYRU5MT0dfSU5GTyAiU01N
VXYzOiAlczogIiBmbXQsIGRldl9uYW1lKGRldiksICMjIF9fVkFfQVJHU19fKQ0KPj4gKyNkZWZp
bmUgZGV2X3dhcm4oZGV2LCBmbXQsIC4uLikgICAgIFwNCj4+ICsgICAgcHJpbnRrKFhFTkxPR19X
QVJOSU5HICJTTU1VdjM6ICVzOiAiIGZtdCwgZGV2X25hbWUoZGV2KSwgIyMgX19WQV9BUkdTX18p
DQo+PiArI2RlZmluZSBkZXZfZXJyKGRldiwgZm10LCAuLi4pICAgICAgXA0KPj4gKyAgICBwcmlu
dGsoWEVOTE9HX0VSUiAiU01NVXYzOiAlczogIiBmbXQsIGRldl9uYW1lKGRldiksICMjIF9fVkFf
QVJHU19fKQ0KPj4gKyNkZWZpbmUgZGV2X2luZm8oZGV2LCBmbXQsIC4uLikgICAgIFwNCj4+ICsg
ICAgcHJpbnRrKFhFTkxPR19JTkZPICJTTU1VdjM6ICVzOiAiIGZtdCwgZGV2X25hbWUoZGV2KSwg
IyMgX19WQV9BUkdTX18pDQo+PiArI2RlZmluZSBkZXZfZXJyX3JhdGVsaW1pdGVkKGRldiwgZm10
LCAuLi4pICAgICAgXA0KPj4gKyAgICBwcmludGsoWEVOTE9HX0VSUiAiU01NVXYzOiAlczogIiBm
bXQsIGRldl9uYW1lKGRldiksICMjIF9fVkFfQVJHU19fKQ0KPj4gKw0KPj4gKy8qDQo+PiArICog
UGVyaW9kaWNhbGx5IHBvbGwgYW4gYWRkcmVzcyBhbmQgd2FpdCBiZXR3ZWVuIHJlYWRzIGluIHVz
IHVudGlsIGENCj4+ICsgKiBjb25kaXRpb24gaXMgbWV0IG9yIGEgdGltZW91dCBvY2N1cnMuDQo+
IA0KPiBJdCB3b3VsZCBiZSBnb29kIHRvIGFkZCBhIHN0YXRlbWVudCB0byBwb2ludCBvdXQgd2hh
dCB0aGUgcmV0dXJuIHZhbHVlDQo+IGlzIGdvaW5nIHRvIGJlLg0KDQpBY2suIA0KPiANCj4gDQo+
PiArICovDQo+PiArI2RlZmluZSByZWFkeF9wb2xsX3RpbWVvdXQob3AsIGFkZHIsIHZhbCwgY29u
ZCwgc2xlZXBfdXMsIHRpbWVvdXRfdXMpIFwNCj4+ICsoeyBcDQo+PiArICAgICBzX3RpbWVfdCBk
ZWFkbGluZSA9IE5PVygpICsgTUlDUk9TRUNTKHRpbWVvdXRfdXMpOyBcDQo+PiArICAgICBmb3Ig
KDs7KSB7IFwNCj4+ICsgICAgICAgICh2YWwpID0gb3AoYWRkcik7IFwNCj4+ICsgICAgICAgIGlm
IChjb25kKSBcDQo+PiArICAgICAgICAgICAgYnJlYWs7IFwNCj4+ICsgICAgICAgIGlmIChOT1co
KSA+IGRlYWRsaW5lKSB7IFwNCj4+ICsgICAgICAgICAgICAodmFsKSA9IG9wKGFkZHIpOyBcDQo+
PiArICAgICAgICAgICAgYnJlYWs7IFwNCj4+ICsgICAgICAgIH0gXA0KPj4gKyAgICAgICAgdWRl
bGF5KHNsZWVwX3VzKTsgXA0KPj4gKyAgICAgfSBcDQo+PiArICAgICAoY29uZCkgPyAwIDogLUVU
SU1FRE9VVDsgXA0KPj4gK30pDQo+PiArDQo+PiArI2RlZmluZSByZWFkbF9yZWxheGVkX3BvbGxf
dGltZW91dChhZGRyLCB2YWwsIGNvbmQsIGRlbGF5X3VzLCB0aW1lb3V0X3VzKSBcDQo+PiArICAg
IHJlYWR4X3BvbGxfdGltZW91dChyZWFkbF9yZWxheGVkLCBhZGRyLCB2YWwsIGNvbmQsIGRlbGF5
X3VzLCB0aW1lb3V0X3VzKQ0KPj4gKw0KPj4gKyNkZWZpbmUgRklFTERfUFJFUChfbWFzaywgX3Zh
bCkgICAgICAgICBcDQo+PiArICAgICgoKHR5cGVvZihfbWFzaykpKF92YWwpIDw8IChfX2J1aWx0
aW5fZmZzbGwoX21hc2spIC0gMSkpICYgKF9tYXNrKSkNCj4gDQo+IExldCdzIGFkZCB0aGUgZGVm
aW5pdGlvbiBvZiBmZnNsbCB0byBiaXRvcHMuaA0KDQpPay4gDQo+IA0KPiANCj4+ICsjZGVmaW5l
IEZJRUxEX0dFVChfbWFzaywgX3JlZykgICAgICAgICAgXA0KPj4gKyAgICAodHlwZW9mKF9tYXNr
KSkoKChfcmVnKSAmIChfbWFzaykpID4+IChfX2J1aWx0aW5fZmZzbGwoX21hc2spIC0gMSkpDQo+
PiArDQo+PiArI2RlZmluZSBXUklURV9PTkNFKHgsIHZhbCkgICAgICAgICAgICAgICAgICBcDQo+
PiArZG8geyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcDQo+PiArICAg
ICoodm9sYXRpbGUgdHlwZW9mKHgpICopJih4KSA9ICh2YWwpOyAgICBcDQo+PiArfSB3aGlsZSAo
MCkNCj4gDQo+IG1heWJlIHdlIHNob3VsZCBkZWZpbmUgdGhpcyBpbiB4ZW4vaW5jbHVkZS94ZW4v
bGliLmgNCg0KT2suIA0KPiANCj4gDQo+PiArDQo+PiArLyogWGVuOiBTdHViIG91dCBETUEgZG9t
YWluIHJlbGF0ZWQgZnVuY3Rpb25zICovDQo+PiArI2RlZmluZSBpb21tdV9nZXRfZG1hX2Nvb2tp
ZShkb20pIDANCj4+ICsjZGVmaW5lIGlvbW11X3B1dF9kbWFfY29va2llKGRvbSkNCj4gDQo+IFNo
b3VsZG4ndCB3ZSByZW1vdmUgYW55IGNhbGwgdG8gaW9tbXVfZ2V0X2RtYV9jb29raWUgYW5kDQo+
IGlvbW11X3B1dF9kbWFfY29va2llIGluIG9uZSBvZiB0aGUgcHJldmlvdXMgcGF0Y2hlcz8NCg0K
QWNrLiANCj4gDQo+IA0KPj4gKy8qDQo+PiArICogSGVscGVycyBmb3IgRE1BIGFsbG9jYXRpb24u
IEp1c3QgdGhlIGZ1bmN0aW9uIG5hbWUgaXMgcmV1c2VkIGZvcg0KPj4gKyAqIHBvcnRpbmcgY29k
ZSwgdGhlc2UgYWxsb2NhdGlvbiBhcmUgbm90IG1hbmFnZWQgYWxsb2NhdGlvbnMNCj4+ICAqLw0K
Pj4gK3N0YXRpYyB2b2lkICpkbWFtX2FsbG9jX2NvaGVyZW50KHN0cnVjdCBkZXZpY2UgKmRldiwg
c2l6ZV90IHNpemUsDQo+PiArICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgcGFkZHJf
dCAqZG1hX2hhbmRsZSwgZ2ZwX3QgZ2ZwKQ0KPj4gK3sNCj4+ICsgICAgdm9pZCAqdmFkZHI7DQo+
PiArICAgIHVuc2lnbmVkIGxvbmcgYWxpZ25tZW50ID0gc2l6ZTsNCj4+ICsNCj4+ICsgICAgLyoN
Cj4+ICsgICAgICogX3h6YWxsb2MgcmVxdWlyZXMgdGhhdCB0aGUgKGFsaWduICYgKGFsaWduIC0x
KSkgPSAwLiBNb3N0IG9mIHRoZQ0KPj4gKyAgICAgKiBhbGxvY2F0aW9ucyBpbiBTTU1VIGNvZGUg
c2hvdWxkIHNlbmQgdGhlIHJpZ2h0IHZhbHVlIGZvciBzaXplLiBJbg0KPj4gKyAgICAgKiBjYXNl
IHRoaXMgaXMgbm90IHRydWUgcHJpbnQgYSB3YXJuaW5nIGFuZCBhbGlnbiB0byB0aGUgc2l6ZSBv
ZiBhDQo+PiArICAgICAqICh2b2lkICopDQo+PiArICAgICAqLw0KPj4gKyAgICBpZiAoIHNpemUg
JiAoc2l6ZSAtIDEpICkNCj4+ICsgICAgew0KPj4gKyAgICAgICAgcHJpbnRrKFhFTkxPR19XQVJO
SU5HICJTTU1VdjM6IEZpeGluZyBhbGlnbm1lbnQgZm9yIHRoZSBETUEgYnVmZmVyXG4iKTsNCj4+
ICsgICAgICAgIGFsaWdubWVudCA9IHNpemVvZih2b2lkICopOw0KPj4gKyAgICB9DQo+PiArDQo+
PiArICAgIHZhZGRyID0gX3h6YWxsb2Moc2l6ZSwgYWxpZ25tZW50KTsNCj4+ICsgICAgaWYgKCAh
dmFkZHIgKQ0KPj4gKyAgICB7DQo+PiArICAgICAgICBwcmludGsoWEVOTE9HX0VSUiAiU01NVXYz
OiBETUEgYWxsb2NhdGlvbiBmYWlsZWRcbiIpOw0KPj4gKyAgICAgICAgcmV0dXJuIE5VTEw7DQo+
PiArICAgIH0NCj4+ICsNCj4+ICsgICAgKmRtYV9oYW5kbGUgPSB2aXJ0X3RvX21hZGRyKHZhZGRy
KTsNCj4+ICsNCj4+ICsgICAgcmV0dXJuIHZhZGRyOw0KPj4gK30NCj4+ICsNCj4+ICsvKiBYZW46
IFR5cGUgZGVmaW5pdGlvbnMgZm9yIGlvbW11X2RvbWFpbiAqLw0KPj4gKyNkZWZpbmUgSU9NTVVf
RE9NQUlOX1VOTUFOQUdFRCAwDQo+PiArI2RlZmluZSBJT01NVV9ET01BSU5fRE1BIDENCj4+ICsj
ZGVmaW5lIElPTU1VX0RPTUFJTl9JREVOVElUWSAyDQo+PiArDQo+PiArLyogWGVuIHNwZWNpZmlj
IGNvZGUuICovDQo+PiArc3RydWN0IGlvbW11X2RvbWFpbiB7DQo+PiArICAgIC8qIFJ1bnRpbWUg
U01NVSBjb25maWd1cmF0aW9uIGZvciB0aGlzIGlvbW11X2RvbWFpbiAqLw0KPj4gKyAgICBhdG9t
aWNfdCByZWY7DQo+PiArICAgIC8qDQo+PiArICAgICAqIFVzZWQgdG8gbGluayBpb21tdV9kb21h
aW4gY29udGV4dHMgZm9yIGEgc2FtZSBkb21haW4uDQo+PiArICAgICAqIFRoZXJlIGlzIGF0IGxl
YXN0IG9uZSBwZXItU01NVSB0byB1c2VkIGJ5IHRoZSBkb21haW4uDQo+PiArICAgICAqLw0KPj4g
KyAgICBzdHJ1Y3QgbGlzdF9oZWFkICAgIGxpc3Q7DQo+PiArfTsNCj4+ICsNCj4+ICsvKiBEZXNj
cmliZXMgaW5mb3JtYXRpb24gcmVxdWlyZWQgZm9yIGEgWGVuIGRvbWFpbiAqLw0KPj4gK3N0cnVj
dCBhcm1fc21tdV94ZW5fZG9tYWluIHsNCj4+ICsgICAgc3BpbmxvY2tfdCAgICAgIGxvY2s7DQo+
PiArDQo+PiArICAgIC8qIExpc3Qgb2YgaW9tbXUgZG9tYWlucyBhc3NvY2lhdGVkIHRvIHRoaXMg
ZG9tYWluICovDQo+PiArICAgIHN0cnVjdCBsaXN0X2hlYWQgICAgY29udGV4dHM7DQo+PiArfTsN
Cj4+ICsNCj4+ICsvKg0KPj4gKyAqIEluZm9ybWF0aW9uIGFib3V0IGVhY2ggZGV2aWNlIHN0b3Jl
ZCBpbiBkZXYtPmFyY2hkYXRhLmlvbW11DQo+PiArICogVGhlIGRldi0+YXJjaGRhdGEuaW9tbXUg
c3RvcmVzIHRoZSBpb21tdV9kb21haW4gKHJ1bnRpbWUgY29uZmlndXJhdGlvbiBvZg0KPj4gKyAq
IHRoZSBTTU1VKS4NCj4+ICsgKi8NCj4+ICtzdHJ1Y3QgYXJtX3NtbXVfeGVuX2RldmljZSB7DQo+
PiArICAgIHN0cnVjdCBpb21tdV9kb21haW4gKmRvbWFpbjsNCj4+ICt9Ow0KPiANCj4gRG8gd2Ug
bmVlZCBib3RoIHN0cnVjdCBhcm1fc21tdV94ZW5fZGV2aWNlIGFuZCBzdHJ1Y3QgaW9tbXVfZG9t
YWluPw0KPiANCg0KTm8gd2UgZG9u4oCZdCBuZWVkIGJvdGguIEkgd2lsbCByZW1vdmUgdGhlIHN0
cnVjdCBhcm1fc21tdV94ZW5fZGV2aWNlLiANCg0KPiANCj4+ICsvKiBLZWVwIGEgbGlzdCBvZiBk
ZXZpY2VzIGFzc29jaWF0ZWQgd2l0aCB0aGlzIGRyaXZlciAqLw0KPj4gK3N0YXRpYyBERUZJTkVf
U1BJTkxPQ0soYXJtX3NtbXVfZGV2aWNlc19sb2NrKTsNCj4+ICtzdGF0aWMgTElTVF9IRUFEKGFy
bV9zbW11X2RldmljZXMpOw0KPj4gKw0KPj4gKw0KPj4gK3N0YXRpYyBpbmxpbmUgdm9pZCAqZGV2
X2lvbW11X3ByaXZfZ2V0KHN0cnVjdCBkZXZpY2UgKmRldikNCj4+ICt7DQo+PiArICAgIHN0cnVj
dCBpb21tdV9md3NwZWMgKmZ3c3BlYyA9IGRldl9pb21tdV9md3NwZWNfZ2V0KGRldik7DQo+PiAr
DQo+PiArICAgIHJldHVybiBmd3NwZWMgJiYgZndzcGVjLT5pb21tdV9wcml2ID8gZndzcGVjLT5p
b21tdV9wcml2IDogTlVMTDsNCj4+ICt9DQo+PiArDQo+PiArc3RhdGljIGlubGluZSB2b2lkIGRl
dl9pb21tdV9wcml2X3NldChzdHJ1Y3QgZGV2aWNlICpkZXYsIHZvaWQgKnByaXYpDQo+PiArew0K
Pj4gKyAgICBzdHJ1Y3QgaW9tbXVfZndzcGVjICpmd3NwZWMgPSBkZXZfaW9tbXVfZndzcGVjX2dl
dChkZXYpOw0KPj4gKw0KPj4gKyAgICBmd3NwZWMtPmlvbW11X3ByaXYgPSBwcml2Ow0KPj4gK30N
Cj4+ICsNCj4+ICtpbnQgZHRfcHJvcGVydHlfbWF0Y2hfc3RyaW5nKGNvbnN0IHN0cnVjdCBkdF9k
ZXZpY2Vfbm9kZSAqbnAsDQo+PiArICAgICAgICAgICAgICAgICAgICAgICAgICAgICBjb25zdCBj
aGFyICpwcm9wbmFtZSwgY29uc3QgY2hhciAqc3RyaW5nKQ0KPj4gK3sNCj4+ICsgICAgY29uc3Qg
c3RydWN0IGR0X3Byb3BlcnR5ICpkdHByb3AgPSBkdF9maW5kX3Byb3BlcnR5KG5wLCBwcm9wbmFt
ZSwgTlVMTCk7DQo+PiArICAgIHNpemVfdCBsOw0KPj4gKyAgICBpbnQgaTsNCj4+ICsgICAgY29u
c3QgY2hhciAqcCwgKmVuZDsNCj4+ICsNCj4+ICsgICAgaWYgKCAhZHRwcm9wICkNCj4+ICsgICAg
ICAgIHJldHVybiAtRUlOVkFMOw0KPj4gKw0KPj4gKyAgICBpZiAoICFkdHByb3AtPnZhbHVlICkN
Cj4+ICsgICAgICAgIHJldHVybiAtRU5PREFUQTsNCj4+ICsNCj4+ICsgICAgcCA9IGR0cHJvcC0+
dmFsdWU7DQo+PiArICAgIGVuZCA9IHAgKyBkdHByb3AtPmxlbmd0aDsNCj4+ICsNCj4+ICsgICAg
Zm9yICggaSA9IDA7IHAgPCBlbmQ7IGkrKywgcCArPSBsICkNCj4+ICsgICAgew0KPj4gKyAgICAg
ICAgbCA9IHN0cm5sZW4ocCwgZW5kIC0gcCkgKyAxOw0KPj4gKw0KPj4gKyAgICAgICAgaWYgKCBw
ICsgbCA+IGVuZCApDQo+PiArICAgICAgICAgICAgcmV0dXJuIC1FSUxTRVE7DQo+PiArDQo+PiAr
ICAgICAgICBpZiAoIHN0cmNtcChzdHJpbmcsIHApID09IDAgKQ0KPj4gKyAgICAgICAgICAgIHJl
dHVybiBpOyAvKiBGb3VuZCBpdDsgcmV0dXJuIGluZGV4ICovDQo+PiArICAgIH0NCj4+ICsNCj4+
ICsgICAgcmV0dXJuIC1FTk9EQVRBOw0KPj4gK30NCj4gDQo+IEkgdGhpbmsgeW91IHNob3VsZCBl
aXRoZXIgdXNlIGR0X3Byb3BlcnR5X3JlYWRfc3RyaW5nIG9yIG1vdmUgdGhlDQo+IGltcGxlbWVu
dGF0aW9uIG9mIGR0X3Byb3BlcnR5X21hdGNoX3N0cmluZyB0byB4ZW4vY29tbW9uL2RldmljZV90
cmVlLmMNCj4gKGluIHdoaWNoIGNhc2UgSSBzdWdnZXN0IHRvIGRvIGl0IGluIGEgc2VwYXJhdGUg
cGF0Y2guKQ0KDQpJIHdpbGwgcHJlZmVyIHRvIG1vdmUgdGhlIGNvZGUgdG8geGVuL2NvbW1vbi9k
ZXZpY2VfdHJlZS5jIGFzIGl0IG1pZ2h0IGhlbHAgaW4gZnV0dXJlIHRvIHVzZS4NCj4gDQo+IA0K
PiBJJ2QganVzdCB1c2UgZHRfcHJvcGVydHlfcmVhZF9zdHJpbmcuDQo+IA0KPiANCj4gDQo+PiAr
c3RhdGljIGludCBwbGF0Zm9ybV9nZXRfaXJxX2J5bmFtZV9vcHRpb25hbChzdHJ1Y3QgZGV2aWNl
ICpkZXYsDQo+PiArICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBj
b25zdCBjaGFyICpuYW1lKQ0KPj4gK3sNCj4+ICsgICAgaW50IGluZGV4LCByZXQ7DQo+PiArICAg
IHN0cnVjdCBkdF9kZXZpY2Vfbm9kZSAqbnAgID0gZGV2X3RvX2R0KGRldik7DQo+PiArDQo+PiAr
ICAgIGlmICggdW5saWtlbHkoIW5hbWUpICkNCj4+ICsgICAgICAgIHJldHVybiAtRUlOVkFMOw0K
Pj4gKw0KPj4gKyAgICBpbmRleCA9IGR0X3Byb3BlcnR5X21hdGNoX3N0cmluZyhucCwgImludGVy
cnVwdC1uYW1lcyIsIG5hbWUpOw0KPj4gKyAgICBpZiAoIGluZGV4IDwgMCApDQo+PiArICAgIHsN
Cj4+ICsgICAgICAgIGRldl9pbmZvKGRldiwgIklSUSAlcyBub3QgZm91bmRcbiIsIG5hbWUpOw0K
Pj4gKyAgICAgICAgcmV0dXJuIGluZGV4Ow0KPj4gKyAgICB9DQo+PiANCj4+IC0jaW5jbHVkZSA8
bGludXgvYWNwaS5oPg0KPj4gLSNpbmNsdWRlIDxsaW51eC9hY3BpX2lvcnQuaD4NCj4+IC0jaW5j
bHVkZSA8bGludXgvYml0ZmllbGQuaD4NCj4+IC0jaW5jbHVkZSA8bGludXgvYml0b3BzLmg+DQo+
PiAtI2luY2x1ZGUgPGxpbnV4L2NyYXNoX2R1bXAuaD4NCj4+IC0jaW5jbHVkZSA8bGludXgvZGVs
YXkuaD4NCj4+IC0jaW5jbHVkZSA8bGludXgvZG1hLWlvbW11Lmg+DQo+PiAtI2luY2x1ZGUgPGxp
bnV4L2Vyci5oPg0KPj4gLSNpbmNsdWRlIDxsaW51eC9pbnRlcnJ1cHQuaD4NCj4+IC0jaW5jbHVk
ZSA8bGludXgvaW8tcGd0YWJsZS5oPg0KPj4gLSNpbmNsdWRlIDxsaW51eC9pb21tdS5oPg0KPj4g
LSNpbmNsdWRlIDxsaW51eC9pb3BvbGwuaD4NCj4+IC0jaW5jbHVkZSA8bGludXgvbW9kdWxlLmg+
DQo+PiAtI2luY2x1ZGUgPGxpbnV4L21zaS5oPg0KPj4gLSNpbmNsdWRlIDxsaW51eC9vZi5oPg0K
Pj4gLSNpbmNsdWRlIDxsaW51eC9vZl9hZGRyZXNzLmg+DQo+PiAtI2luY2x1ZGUgPGxpbnV4L29m
X2lvbW11Lmg+DQo+PiAtI2luY2x1ZGUgPGxpbnV4L29mX3BsYXRmb3JtLmg+DQo+PiAtI2luY2x1
ZGUgPGxpbnV4L3BjaS5oPg0KPj4gLSNpbmNsdWRlIDxsaW51eC9wY2ktYXRzLmg+DQo+PiAtI2lu
Y2x1ZGUgPGxpbnV4L3BsYXRmb3JtX2RldmljZS5oPg0KPj4gLQ0KPj4gLSNpbmNsdWRlIDxsaW51
eC9hbWJhL2J1cy5oPg0KPj4gKyAgICByZXQgPSBwbGF0Zm9ybV9nZXRfaXJxKG5wLCBpbmRleCk7
DQo+PiArICAgIGlmICggcmV0IDwgMCApDQo+PiArICAgIHsNCj4+ICsgICAgICAgIGRldl9lcnIo
ZGV2LCAiZmFpbGVkIHRvIGdldCBpcnEgaW5kZXggJWRcbiIsIGluZGV4KTsNCj4+ICsgICAgICAg
IHJldHVybiAtRU5PREVWOw0KPj4gKyAgICB9DQo+PiArDQo+PiArICAgIHJldHVybiByZXQ7DQo+
PiArfQ0KPj4gKw0KPj4gKy8qIFN0YXJ0IG9mIExpbnV4IFNNTVV2MyBjb2RlICovDQo+PiANCj4+
IC8qIE1NSU8gcmVnaXN0ZXJzICovDQo+PiAjZGVmaW5lIEFSTV9TTU1VX0lEUjAJCQkweDANCj4+
IEBAIC01MDcsNiArNzUxLDcgQEAgc3RydWN0IGFybV9zbW11X3MyX2NmZyB7DQo+PiAJdTE2CQkJ
CXZtaWQ7DQo+PiAJdTY0CQkJCXZ0dGJyOw0KPj4gCXU2NAkJCQl2dGNyOw0KPj4gKwlzdHJ1Y3Qg
ZG9tYWluCQkqZG9tYWluOw0KPj4gfTsNCj4+IA0KPj4gc3RydWN0IGFybV9zbW11X3N0cnRhYl9j
Zmcgew0KPj4gQEAgLTU2Nyw4ICs4MTIsMTMgQEAgc3RydWN0IGFybV9zbW11X2RldmljZSB7DQo+
PiANCj4+IAlzdHJ1Y3QgYXJtX3NtbXVfc3RydGFiX2NmZwlzdHJ0YWJfY2ZnOw0KPj4gDQo+PiAt
CS8qIElPTU1VIGNvcmUgY29kZSBoYW5kbGUgKi8NCj4+IC0Jc3RydWN0IGlvbW11X2RldmljZQkJ
aW9tbXU7DQo+PiArCS8qIE5lZWQgdG8ga2VlcCBhIGxpc3Qgb2YgU01NVSBkZXZpY2VzICovDQo+
PiArCXN0cnVjdCBsaXN0X2hlYWQJCWRldmljZXM7DQo+PiArDQo+PiArCS8qIFRhc2tsZXRzIGZv
ciBoYW5kbGluZyBldnRzL2ZhdWx0cyBhbmQgcGNpIHBhZ2UgcmVxdWVzdCBJUlFzKi8NCj4+ICsJ
c3RydWN0IHRhc2tsZXQJCWV2dHFfaXJxX3Rhc2tsZXQ7DQo+PiArCXN0cnVjdCB0YXNrbGV0CQlw
cmlxX2lycV90YXNrbGV0Ow0KPj4gKwlzdHJ1Y3QgdGFza2xldAkJY29tYmluZWRfaXJxX3Rhc2ts
ZXQ7DQo+PiB9Ow0KPj4gDQo+PiAvKiBTTU1VIHByaXZhdGUgZGF0YSBmb3IgZWFjaCBtYXN0ZXIg
Ki8NCj4+IEBAIC0xMTEwLDcgKzEzNjAsNyBAQCBzdGF0aWMgaW50IGFybV9zbW11X2luaXRfbDJf
c3RydGFiKHN0cnVjdCBhcm1fc21tdV9kZXZpY2UgKnNtbXUsIHUzMiBzaWQpDQo+PiB9DQo+PiAN
Cj4+IC8qIElSUSBhbmQgZXZlbnQgaGFuZGxlcnMgKi8NCj4+IC1zdGF0aWMgaXJxcmV0dXJuX3Qg
YXJtX3NtbXVfZXZ0cV90aHJlYWQoaW50IGlycSwgdm9pZCAqZGV2KQ0KPj4gK3N0YXRpYyB2b2lk
IGFybV9zbW11X2V2dHFfdGhyZWFkKHZvaWQgKmRldikNCj4gDQo+IEkgdGhpbmsgd2Ugc2hvdWxk
bid0IGNhbGwgaXQgYSB0aHJlYWQgZ2l2ZW4gdGhhdCdzIGEgdGFza2xldC4gV2Ugc2hvdWxkDQo+
IHJlbmFtZSB0aGUgZnVuY3Rpb24gb3IgaXQgd2lsbCBiZSBjb25mdXNpbmcNCg0KT2suIA0KPiAN
Cj4+IHsNCj4+IAlpbnQgaTsNCj4+IAlzdHJ1Y3QgYXJtX3NtbXVfZGV2aWNlICpzbW11ID0gZGV2
Ow0KPj4gQEAgLTExNDAsNyArMTM5MCw2IEBAIHN0YXRpYyBpcnFyZXR1cm5fdCBhcm1fc21tdV9l
dnRxX3RocmVhZChpbnQgaXJxLCB2b2lkICpkZXYpDQo+PiAJLyogU3luYyBvdXIgb3ZlcmZsb3cg
ZmxhZywgYXMgd2UgYmVsaWV2ZSB3ZSdyZSB1cCB0byBzcGVlZCAqLw0KPj4gCWxscS0+Y29ucyA9
IFFfT1ZGKGxscS0+cHJvZCkgfCBRX1dSUChsbHEsIGxscS0+Y29ucykgfA0KPj4gCQkgICAgUV9J
RFgobGxxLCBsbHEtPmNvbnMpOw0KPj4gLQlyZXR1cm4gSVJRX0hBTkRMRUQ7DQo+PiB9DQo+PiAN
Cj4+IHN0YXRpYyB2b2lkIGFybV9zbW11X2hhbmRsZV9wcHIoc3RydWN0IGFybV9zbW11X2Rldmlj
ZSAqc21tdSwgdTY0ICpldnQpDQo+PiBAQCAtMTE4MSw3ICsxNDMwLDcgQEAgc3RhdGljIHZvaWQg
YXJtX3NtbXVfaGFuZGxlX3BwcihzdHJ1Y3QgYXJtX3NtbXVfZGV2aWNlICpzbW11LCB1NjQgKmV2
dCkNCj4+IAl9DQo+PiB9DQo+PiANCj4+IC1zdGF0aWMgaXJxcmV0dXJuX3QgYXJtX3NtbXVfcHJp
cV90aHJlYWQoaW50IGlycSwgdm9pZCAqZGV2KQ0KPj4gK3N0YXRpYyB2b2lkIGFybV9zbW11X3By
aXFfdGhyZWFkKHZvaWQgKmRldikNCj4gDQo+IHNhbWUgaGVyZQ0KDQpPay4gDQo+IA0KPiANCj4+
IHsNCj4+IAlzdHJ1Y3QgYXJtX3NtbXVfZGV2aWNlICpzbW11ID0gZGV2Ow0KPj4gCXN0cnVjdCBh
cm1fc21tdV9xdWV1ZSAqcSA9ICZzbW11LT5wcmlxLnE7DQo+PiBAQCAtMTIwMCwxMiArMTQ0OSwx
MiBAQCBzdGF0aWMgaXJxcmV0dXJuX3QgYXJtX3NtbXVfcHJpcV90aHJlYWQoaW50IGlycSwgdm9p
ZCAqZGV2KQ0KPj4gCWxscS0+Y29ucyA9IFFfT1ZGKGxscS0+cHJvZCkgfCBRX1dSUChsbHEsIGxs
cS0+Y29ucykgfA0KPj4gCQkgICAgICBRX0lEWChsbHEsIGxscS0+Y29ucyk7DQo+PiAJcXVldWVf
c3luY19jb25zX291dChxKTsNCj4+IC0JcmV0dXJuIElSUV9IQU5ETEVEOw0KPj4gfQ0KPj4gDQo+
PiBzdGF0aWMgaW50IGFybV9zbW11X2RldmljZV9kaXNhYmxlKHN0cnVjdCBhcm1fc21tdV9kZXZp
Y2UgKnNtbXUpOw0KPj4gDQo+PiAtc3RhdGljIGlycXJldHVybl90IGFybV9zbW11X2dlcnJvcl9o
YW5kbGVyKGludCBpcnEsIHZvaWQgKmRldikNCj4+ICtzdGF0aWMgdm9pZCBhcm1fc21tdV9nZXJy
b3JfaGFuZGxlcihpbnQgaXJxLCB2b2lkICpkZXYsDQo+PiArCQkJCXN0cnVjdCBjcHVfdXNlcl9y
ZWdzICpyZWdzKQ0KPj4gew0KPj4gCXUzMiBnZXJyb3IsIGdlcnJvcm4sIGFjdGl2ZTsNCj4+IAlz
dHJ1Y3QgYXJtX3NtbXVfZGV2aWNlICpzbW11ID0gZGV2Ow0KPj4gQEAgLTEyMTUsNyArMTQ2NCw3
IEBAIHN0YXRpYyBpcnFyZXR1cm5fdCBhcm1fc21tdV9nZXJyb3JfaGFuZGxlcihpbnQgaXJxLCB2
b2lkICpkZXYpDQo+PiANCj4+IAlhY3RpdmUgPSBnZXJyb3IgXiBnZXJyb3JuOw0KPj4gCWlmICgh
KGFjdGl2ZSAmIEdFUlJPUl9FUlJfTUFTSykpDQo+PiAtCQlyZXR1cm4gSVJRX05PTkU7IC8qIE5v
IGVycm9ycyBwZW5kaW5nICovDQo+PiArCQlyZXR1cm47IC8qIE5vIGVycm9ycyBwZW5kaW5nICov
DQo+PiANCj4+IAlkZXZfd2FybihzbW11LT5kZXYsDQo+PiAJCSAidW5leHBlY3RlZCBnbG9iYWwg
ZXJyb3IgcmVwb3J0ZWQgKDB4JTA4eCksIHRoaXMgY291bGQgYmUgc2VyaW91c1xuIiwNCj4+IEBA
IC0xMjQ4LDI2ICsxNDk3LDQyIEBAIHN0YXRpYyBpcnFyZXR1cm5fdCBhcm1fc21tdV9nZXJyb3Jf
aGFuZGxlcihpbnQgaXJxLCB2b2lkICpkZXYpDQo+PiAJCWFybV9zbW11X2NtZHFfc2tpcF9lcnIo
c21tdSk7DQo+PiANCj4+IAl3cml0ZWwoZ2Vycm9yLCBzbW11LT5iYXNlICsgQVJNX1NNTVVfR0VS
Uk9STik7DQo+PiAtCXJldHVybiBJUlFfSEFORExFRDsNCj4+IH0NCj4+IA0KPj4gLXN0YXRpYyBp
cnFyZXR1cm5fdCBhcm1fc21tdV9jb21iaW5lZF9pcnFfdGhyZWFkKGludCBpcnEsIHZvaWQgKmRl
dikNCj4+ICtzdGF0aWMgdm9pZCBhcm1fc21tdV9jb21iaW5lZF9pcnFfaGFuZGxlcihpbnQgaXJx
LCB2b2lkICpkZXYsDQo+PiArCQkJCXN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQ0KPj4gK3sN
Cj4+ICsJc3RydWN0IGFybV9zbW11X2RldmljZSAqc21tdSA9IChzdHJ1Y3QgYXJtX3NtbXVfZGV2
aWNlICopZGV2Ow0KPj4gKw0KPj4gKwlhcm1fc21tdV9nZXJyb3JfaGFuZGxlcihpcnEsIGRldiwg
cmVncyk7DQo+PiArDQo+PiArCXRhc2tsZXRfc2NoZWR1bGUoJihzbW11LT5jb21iaW5lZF9pcnFf
dGFza2xldCkpOw0KPj4gK30NCj4+ICsNCj4+ICtzdGF0aWMgdm9pZCBhcm1fc21tdV9jb21iaW5l
ZF9pcnFfdGhyZWFkKHZvaWQgKmRldikNCj4+IHsNCj4+IAlzdHJ1Y3QgYXJtX3NtbXVfZGV2aWNl
ICpzbW11ID0gZGV2Ow0KPj4gDQo+PiAtCWFybV9zbW11X2V2dHFfdGhyZWFkKGlycSwgZGV2KTsN
Cj4+ICsJYXJtX3NtbXVfZXZ0cV90aHJlYWQoZGV2KTsNCj4+IAlpZiAoc21tdS0+ZmVhdHVyZXMg
JiBBUk1fU01NVV9GRUFUX1BSSSkNCj4+IC0JCWFybV9zbW11X3ByaXFfdGhyZWFkKGlycSwgZGV2
KTsNCj4+IC0NCj4+IC0JcmV0dXJuIElSUV9IQU5ETEVEOw0KPj4gKwkJYXJtX3NtbXVfcHJpcV90
aHJlYWQoZGV2KTsNCj4+IH0NCj4+IA0KPj4gLXN0YXRpYyBpcnFyZXR1cm5fdCBhcm1fc21tdV9j
b21iaW5lZF9pcnFfaGFuZGxlcihpbnQgaXJxLCB2b2lkICpkZXYpDQo+PiArc3RhdGljIHZvaWQg
YXJtX3NtbXVfZXZ0cV9pcnFfdGFza2xldChpbnQgaXJxLCB2b2lkICpkZXYsDQo+PiArCQkJCXN0
cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQ0KPj4gew0KPj4gLQlhcm1fc21tdV9nZXJyb3JfaGFu
ZGxlcihpcnEsIGRldik7DQo+PiAtCXJldHVybiBJUlFfV0FLRV9USFJFQUQ7DQo+PiArCXN0cnVj
dCBhcm1fc21tdV9kZXZpY2UgKnNtbXUgPSAoc3RydWN0IGFybV9zbW11X2RldmljZSAqKWRldjsN
Cj4+ICsNCj4+ICsJdGFza2xldF9zY2hlZHVsZSgmKHNtbXUtPmV2dHFfaXJxX3Rhc2tsZXQpKTsN
Cj4+IH0NCj4+IA0KPj4gK3N0YXRpYyB2b2lkIGFybV9zbW11X3ByaXFfaXJxX3Rhc2tsZXQoaW50
IGlycSwgdm9pZCAqZGV2LA0KPj4gKwkJCQlzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykNCj4+
ICt7DQo+PiArCXN0cnVjdCBhcm1fc21tdV9kZXZpY2UgKnNtbXUgPSAoc3RydWN0IGFybV9zbW11
X2RldmljZSAqKWRldjsNCj4+ICsNCj4+ICsJdGFza2xldF9zY2hlZHVsZSgmKHNtbXUtPnByaXFf
aXJxX3Rhc2tsZXQpKTsNCj4+ICt9DQo+PiANCj4+IC8qIElPX1BHVEFCTEUgQVBJICovDQo+PiBz
dGF0aWMgdm9pZCBhcm1fc21tdV90bGJfaW52X2NvbnRleHQodm9pZCAqY29va2llKQ0KPj4gQEAg
LTEzNTQsMjcgKzE2MTksNjkgQEAgc3RhdGljIHZvaWQgYXJtX3NtbXVfZG9tYWluX2ZyZWUoc3Ry
dWN0IGlvbW11X2RvbWFpbiAqZG9tYWluKQ0KPj4gfQ0KPj4gDQo+PiBzdGF0aWMgaW50IGFybV9z
bW11X2RvbWFpbl9maW5hbGlzZV9zMihzdHJ1Y3QgYXJtX3NtbXVfZG9tYWluICpzbW11X2RvbWFp
biwNCj4+IC0JCQkJICAgICAgIHN0cnVjdCBhcm1fc21tdV9tYXN0ZXIgKm1hc3RlciwNCj4+IC0J
CQkJICAgICAgIHN0cnVjdCBpb19wZ3RhYmxlX2NmZyAqcGd0YmxfY2ZnKQ0KPj4gKwkJCQkgICAg
ICAgc3RydWN0IGFybV9zbW11X21hc3RlciAqbWFzdGVyKQ0KPj4gew0KPj4gCWludCB2bWlkOw0K
Pj4gKwl1NjQgcmVnOw0KPj4gCXN0cnVjdCBhcm1fc21tdV9kZXZpY2UgKnNtbXUgPSBzbW11X2Rv
bWFpbi0+c21tdTsNCj4+IAlzdHJ1Y3QgYXJtX3NtbXVfczJfY2ZnICpjZmcgPSAmc21tdV9kb21h
aW4tPnMyX2NmZzsNCj4+IA0KPj4gKwkvKiBWVENSICovDQo+PiArCXJlZyA9IFZUQ1JfUkVTMSB8
IFZUQ1JfU0gwX0lTIHwgVlRDUl9JUkdOMF9XQldBIHwgVlRDUl9PUkdOMF9XQldBOw0KPj4gKw0K
Pj4gKwlzd2l0Y2ggKFBBR0VfU0laRSkgew0KPj4gKwljYXNlIFNaXzRLOg0KPj4gKwkJcmVnIHw9
IFZUQ1JfVEcwXzRLOw0KPj4gKwkJYnJlYWs7DQo+PiArCWNhc2UgU1pfMTZLOg0KPj4gKwkJcmVn
IHw9IFZUQ1JfVEcwXzE2SzsNCj4+ICsJCWJyZWFrOw0KPj4gKwljYXNlIFNaXzY0SzoNCj4+ICsJ
CXJlZyB8PSBWVENSX1RHMF80SzsNCj4+ICsJCWJyZWFrOw0KPj4gKwl9DQo+PiArDQo+PiArCXN3
aXRjaCAoc21tdS0+b2FzKSB7DQo+PiArCWNhc2UgMzI6DQo+PiArCQlyZWcgfD0gVlRDUl9QUyhf
QUMoMHgwLFVMTCkpOw0KPj4gKwkJYnJlYWs7DQo+PiArCWNhc2UgMzY6DQo+PiArCQlyZWcgfD0g
VlRDUl9QUyhfQUMoMHgxLFVMTCkpOw0KPj4gKwkJYnJlYWs7DQo+PiArCWNhc2UgNDA6DQo+PiAr
CQlyZWcgfD0gVlRDUl9QUyhfQUMoMHgyLFVMTCkpOw0KPj4gKwkJYnJlYWs7DQo+PiArCWNhc2Ug
NDI6DQo+PiArCQlyZWcgfD0gVlRDUl9QUyhfQUMoMHgzLFVMTCkpOw0KPj4gKwkJYnJlYWs7DQo+
PiArCWNhc2UgNDQ6DQo+PiArCQlyZWcgfD0gVlRDUl9QUyhfQUMoMHg0LFVMTCkpOw0KPj4gKwkJ
YnJlYWs7DQo+PiArCQljYXNlIDQ4Og0KPj4gKwkJcmVnIHw9IFZUQ1JfUFMoX0FDKDB4NSxVTEwp
KTsNCj4+ICsJCWJyZWFrOw0KPj4gKwljYXNlIDUyOg0KPj4gKwkJcmVnIHw9IFZUQ1JfUFMoX0FD
KDB4NixVTEwpKTsNCj4+ICsJCWJyZWFrOw0KPj4gKwl9DQo+PiArDQo+PiArCXJlZyB8PSBWVENS
X1QwU1ooNjRVTEwgLSBzbW11LT5pYXMpOw0KPj4gKwlyZWcgfD0gVlRDUl9TTDAoMHgyKTsNCj4+
ICsJcmVnIHw9IFZUQ1JfVlM7DQo+PiArDQo+PiArCWNmZy0+dnRjciAgID0gcmVnOw0KPj4gKw0K
Pj4gCXZtaWQgPSBhcm1fc21tdV9iaXRtYXBfYWxsb2Moc21tdS0+dm1pZF9tYXAsIHNtbXUtPnZt
aWRfYml0cyk7DQo+PiAJaWYgKHZtaWQgPCAwKQ0KPj4gCQlyZXR1cm4gdm1pZDsNCj4+ICsJY2Zn
LT52bWlkICA9ICh1MTYpdm1pZDsNCj4+ICsNCj4+ICsJY2ZnLT52dHRiciAgPSBwYWdlX3RvX21h
ZGRyKGNmZy0+ZG9tYWluLT5hcmNoLnAybS5yb290KTsNCj4+ICsNCj4+ICsJcHJpbnRrKFhFTkxP
R19ERUJVRw0KPj4gKwkJICAgIlNNTVV2MzogZCV1OiB2bWlkIDB4JXggdnRjciAweCUiUFJJcGFk
ZHIiIHAybWFkZHIgMHglIlBSSXBhZGRyIlxuIiwNCj4+ICsJCSAgIGNmZy0+ZG9tYWluLT5kb21h
aW5faWQsIGNmZy0+dm1pZCwgY2ZnLT52dGNyLCBjZmctPnZ0dGJyKTsNCj4+IA0KPj4gLQl2dGNy
ID0gJnBndGJsX2NmZy0+YXJtX2xwYWVfczJfY2ZnLnZ0Y3I7DQo+PiAtCWNmZy0+dm1pZAk9ICh1
MTYpdm1pZDsNCj4+IC0JY2ZnLT52dHRicgk9IHBndGJsX2NmZy0+YXJtX2xwYWVfczJfY2ZnLnZ0
dGJyOw0KPj4gLQljZmctPnZ0Y3IJPSBGSUVMRF9QUkVQKFNUUlRBQl9TVEVfMl9WVENSX1MyVDBT
WiwgdnRjci0+dHN6KSB8DQo+PiAtCQkJICBGSUVMRF9QUkVQKFNUUlRBQl9TVEVfMl9WVENSX1My
U0wwLCB2dGNyLT5zbCkgfA0KPj4gLQkJCSAgRklFTERfUFJFUChTVFJUQUJfU1RFXzJfVlRDUl9T
MklSMCwgdnRjci0+aXJnbikgfA0KPj4gLQkJCSAgRklFTERfUFJFUChTVFJUQUJfU1RFXzJfVlRD
Ul9TMk9SMCwgdnRjci0+b3JnbikgfA0KPj4gLQkJCSAgRklFTERfUFJFUChTVFJUQUJfU1RFXzJf
VlRDUl9TMlNIMCwgdnRjci0+c2gpIHwNCj4+IC0JCQkgIEZJRUxEX1BSRVAoU1RSVEFCX1NURV8y
X1ZUQ1JfUzJURywgdnRjci0+dGcpIHwNCj4+IC0JCQkgIEZJRUxEX1BSRVAoU1RSVEFCX1NURV8y
X1ZUQ1JfUzJQUywgdnRjci0+cHMpOw0KPj4gCXJldHVybiAwOw0KPj4gfQ0KPj4gDQo+PiBAQCAt
MTM4MiwyOCArMTY4OSwxMiBAQCBzdGF0aWMgaW50IGFybV9zbW11X2RvbWFpbl9maW5hbGlzZShz
dHJ1Y3QgaW9tbXVfZG9tYWluICpkb21haW4sDQo+PiAJCQkJICAgIHN0cnVjdCBhcm1fc21tdV9t
YXN0ZXIgKm1hc3RlcikNCj4+IHsNCj4+IAlpbnQgcmV0Ow0KPj4gLQl1bnNpZ25lZCBsb25nIGlh
cywgb2FzOw0KPj4gLQlpbnQgKCpmaW5hbGlzZV9zdGFnZV9mbikoc3RydWN0IGFybV9zbW11X2Rv
bWFpbiAqLA0KPj4gLQkJCQkgc3RydWN0IGFybV9zbW11X21hc3RlciAqLA0KPj4gLQkJCQkgc3Ry
dWN0IGlvX3BndGFibGVfY2ZnICopOw0KPj4gCXN0cnVjdCBhcm1fc21tdV9kb21haW4gKnNtbXVf
ZG9tYWluID0gdG9fc21tdV9kb21haW4oZG9tYWluKTsNCj4+IC0Jc3RydWN0IGFybV9zbW11X2Rl
dmljZSAqc21tdSA9IHNtbXVfZG9tYWluLT5zbW11Ow0KPj4gDQo+PiAJLyogUmVzdHJpY3QgdGhl
IHN0YWdlIHRvIHdoYXQgd2UgY2FuIGFjdHVhbGx5IHN1cHBvcnQgKi8NCj4+IAlzbW11X2RvbWFp
bi0+c3RhZ2UgPSBBUk1fU01NVV9ET01BSU5fUzI7DQo+PiANCj4+IC0Jc3dpdGNoIChzbW11X2Rv
bWFpbi0+c3RhZ2UpIHsNCj4+IC0JY2FzZSBBUk1fU01NVV9ET01BSU5fTkVTVEVEOg0KPj4gLQlj
YXNlIEFSTV9TTU1VX0RPTUFJTl9TMjoNCj4+IC0JCWlhcyA9IHNtbXUtPmlhczsNCj4+IC0JCW9h
cyA9IHNtbXUtPm9hczsNCj4+IC0JCWZpbmFsaXNlX3N0YWdlX2ZuID0gYXJtX3NtbXVfZG9tYWlu
X2ZpbmFsaXNlX3MyOw0KPj4gLQkJYnJlYWs7DQo+PiAtCWRlZmF1bHQ6DQo+PiAtCQlyZXR1cm4g
LUVJTlZBTDsNCj4+IC0JfQ0KPj4gLQ0KPj4gLQlyZXQgPSBmaW5hbGlzZV9zdGFnZV9mbihzbW11
X2RvbWFpbiwgbWFzdGVyLCAmcGd0YmxfY2ZnKTsNCj4+ICsJcmV0ID0gYXJtX3NtbXVfZG9tYWlu
X2ZpbmFsaXNlX3MyKHNtbXVfZG9tYWluLCBtYXN0ZXIpOw0KPj4gCWlmIChyZXQgPCAwKSB7DQo+
PiAJCXJldHVybiByZXQ7DQo+PiAJfQ0KPj4gQEAgLTE1NTMsNyArMTg0NCw4IEBAIHN0YXRpYyBp
bnQgYXJtX3NtbXVfaW5pdF9vbmVfcXVldWUoc3RydWN0IGFybV9zbW11X2RldmljZSAqc21tdSwN
Cj4+IAkJcmV0dXJuIC1FTk9NRU07DQo+PiAJfQ0KPj4gDQo+PiAtCWlmICghV0FSTl9PTihxLT5i
YXNlX2RtYSAmIChxc3ogLSAxKSkpIHsNCj4+ICsJV0FSTl9PTihxLT5iYXNlX2RtYSAmIChxc3og
LSAxKSk7DQo+PiArCWlmICh1bmxpa2VseShxLT5iYXNlX2RtYSAmIChxc3ogLSAxKSkpIHsNCj4+
IAkJZGV2X2luZm8oc21tdS0+ZGV2LCAiYWxsb2NhdGVkICV1IGVudHJpZXMgZm9yICVzXG4iLA0K
Pj4gCQkJIDEgPDwgcS0+bGxxLm1heF9uX3NoaWZ0LCBuYW1lKTsNCj4gDQo+IFdlIGRvbid0IG5l
ZWQgYm90aCB0aGUgV0FSTklORyBhbmQgdGhlIGRldl9pbmZvLiB5b3UgY291bGQgdHVybiB0aGUN
Cj4gZGV2X3dhcm4gLyBYRU5MT0dfV0FSTklORy4NCj4gDQpBY2suDQoNCj4gDQo+PiAJfQ0KPj4g
QEAgLTE3NTgsOSArMjA1MCw3IEBAIHN0YXRpYyB2b2lkIGFybV9zbW11X3NldHVwX3VuaXF1ZV9p
cnFzKHN0cnVjdCBhcm1fc21tdV9kZXZpY2UgKnNtbXUpDQo+PiAJLyogUmVxdWVzdCBpbnRlcnJ1
cHQgbGluZXMgKi8NCj4+IAlpcnEgPSBzbW11LT5ldnRxLnEuaXJxOw0KPj4gCWlmIChpcnEpIHsN
Cj4+IC0JCXJldCA9IGRldm1fcmVxdWVzdF90aHJlYWRlZF9pcnEoc21tdS0+ZGV2LCBpcnEsIE5V
TEwsDQo+PiAtCQkJCQkJYXJtX3NtbXVfZXZ0cV90aHJlYWQsDQo+PiAtCQkJCQkJSVJRRl9PTkVT
SE9ULA0KPj4gKwkJcmV0ID0gcmVxdWVzdF9pcnEoaXJxLCAwLCBhcm1fc21tdV9ldnRxX2lycV90
YXNrbGV0LA0KPj4gCQkJCQkJImFybS1zbW11LXYzLWV2dHEiLCBzbW11KTsNCj4+IAkJaWYgKHJl
dCA8IDApDQo+PiAJCQlkZXZfd2FybihzbW11LT5kZXYsICJmYWlsZWQgdG8gZW5hYmxlIGV2dHEg
aXJxXG4iKTsNCj4+IEBAIC0xNzcwLDggKzIwNjAsOCBAQCBzdGF0aWMgdm9pZCBhcm1fc21tdV9z
ZXR1cF91bmlxdWVfaXJxcyhzdHJ1Y3QgYXJtX3NtbXVfZGV2aWNlICpzbW11KQ0KPj4gDQo+PiAJ
aXJxID0gc21tdS0+Z2Vycl9pcnE7DQo+PiAJaWYgKGlycSkgew0KPj4gLQkJcmV0ID0gZGV2bV9y
ZXF1ZXN0X2lycShzbW11LT5kZXYsIGlycSwgYXJtX3NtbXVfZ2Vycm9yX2hhbmRsZXIsDQo+PiAt
CQkJCSAgICAgICAwLCAiYXJtLXNtbXUtdjMtZ2Vycm9yIiwgc21tdSk7DQo+PiArCQlyZXQgPSBy
ZXF1ZXN0X2lycShpcnEsIDAsIGFybV9zbW11X2dlcnJvcl9oYW5kbGVyLA0KPj4gKwkJCQkJCSJh
cm0tc21tdS12My1nZXJyb3IiLCBzbW11KTsNCj4+IAkJaWYgKHJldCA8IDApDQo+PiAJCQlkZXZf
d2FybihzbW11LT5kZXYsICJmYWlsZWQgdG8gZW5hYmxlIGdlcnJvciBpcnFcbiIpOw0KPj4gCX0g
ZWxzZSB7DQo+PiBAQCAtMTc4MSwxMSArMjA3MSw4IEBAIHN0YXRpYyB2b2lkIGFybV9zbW11X3Nl
dHVwX3VuaXF1ZV9pcnFzKHN0cnVjdCBhcm1fc21tdV9kZXZpY2UgKnNtbXUpDQo+PiAJaWYgKHNt
bXUtPmZlYXR1cmVzICYgQVJNX1NNTVVfRkVBVF9QUkkpIHsNCj4+IAkJaXJxID0gc21tdS0+cHJp
cS5xLmlycTsNCj4+IAkJaWYgKGlycSkgew0KPj4gLQkJCXJldCA9IGRldm1fcmVxdWVzdF90aHJl
YWRlZF9pcnEoc21tdS0+ZGV2LCBpcnEsIE5VTEwsDQo+PiAtCQkJCQkJCWFybV9zbW11X3ByaXFf
dGhyZWFkLA0KPj4gLQkJCQkJCQlJUlFGX09ORVNIT1QsDQo+PiAtCQkJCQkJCSJhcm0tc21tdS12
My1wcmlxIiwNCj4+IC0JCQkJCQkJc21tdSk7DQo+PiArCQkJcmV0ID0gcmVxdWVzdF9pcnEoaXJx
LCAwLCBhcm1fc21tdV9wcmlxX2lycV90YXNrbGV0LA0KPj4gKwkJCQkJCQkiYXJtLXNtbXUtdjMt
cHJpcSIsIHNtbXUpOw0KPj4gCQkJaWYgKHJldCA8IDApDQo+PiAJCQkJZGV2X3dhcm4oc21tdS0+
ZGV2LA0KPj4gCQkJCQkgImZhaWxlZCB0byBlbmFibGUgcHJpcSBpcnFcbiIpOw0KPj4gQEAgLTE4
MTQsMTEgKzIxMDEsOCBAQCBzdGF0aWMgaW50IGFybV9zbW11X3NldHVwX2lycXMoc3RydWN0IGFy
bV9zbW11X2RldmljZSAqc21tdSkNCj4+IAkJICogQ2F2aXVtIFRodW5kZXJYMiBpbXBsZW1lbnRh
dGlvbiBkb2Vzbid0IHN1cHBvcnQgdW5pcXVlIGlycQ0KPj4gCQkgKiBsaW5lcy4gVXNlIGEgc2lu
Z2xlIGlycSBsaW5lIGZvciBhbGwgdGhlIFNNTVV2MyBpbnRlcnJ1cHRzLg0KPj4gCQkgKi8NCj4+
IC0JCXJldCA9IGRldm1fcmVxdWVzdF90aHJlYWRlZF9pcnEoc21tdS0+ZGV2LCBpcnEsDQo+PiAt
CQkJCQlhcm1fc21tdV9jb21iaW5lZF9pcnFfaGFuZGxlciwNCj4+IC0JCQkJCWFybV9zbW11X2Nv
bWJpbmVkX2lycV90aHJlYWQsDQo+PiAtCQkJCQlJUlFGX09ORVNIT1QsDQo+PiAtCQkJCQkiYXJt
LXNtbXUtdjMtY29tYmluZWQtaXJxIiwgc21tdSk7DQo+PiArCQlyZXQgPSByZXF1ZXN0X2lycShp
cnEsIDAsIGFybV9zbW11X2NvbWJpbmVkX2lycV9oYW5kbGVyLA0KPj4gKwkJCQkJCSJhcm0tc21t
dS12My1jb21iaW5lZC1pcnEiLCBzbW11KTsNCj4+IAkJaWYgKHJldCA8IDApDQo+PiAJCQlkZXZf
d2FybihzbW11LT5kZXYsICJmYWlsZWQgdG8gZW5hYmxlIGNvbWJpbmVkIGlycVxuIik7DQo+PiAJ
fSBlbHNlDQo+PiBAQCAtMTg1Nyw3ICsyMTQxLDcgQEAgc3RhdGljIGludCBhcm1fc21tdV9kZXZp
Y2VfcmVzZXQoc3RydWN0IGFybV9zbW11X2RldmljZSAqc21tdSwgYm9vbCBieXBhc3MpDQo+PiAJ
cmVnID0gcmVhZGxfcmVsYXhlZChzbW11LT5iYXNlICsgQVJNX1NNTVVfQ1IwKTsNCj4+IAlpZiAo
cmVnICYgQ1IwX1NNTVVFTikgew0KPj4gCQlkZXZfd2FybihzbW11LT5kZXYsICJTTU1VIGN1cnJl
bnRseSBlbmFibGVkISBSZXNldHRpbmcuLi5cbiIpOw0KPj4gLQkJV0FSTl9PTihpc19rZHVtcF9r
ZXJuZWwoKSAmJiAhZGlzYWJsZV9ieXBhc3MpOw0KPj4gKwkJV0FSTl9PTighZGlzYWJsZV9ieXBh
c3MpOw0KPj4gCQlhcm1fc21tdV91cGRhdGVfZ2JwYShzbW11LCBHQlBBX0FCT1JULCAwKTsNCj4+
IAl9DQo+PiANCj4+IEBAIC0xOTUyLDggKzIyMzYsMTEgQEAgc3RhdGljIGludCBhcm1fc21tdV9k
ZXZpY2VfcmVzZXQoc3RydWN0IGFybV9zbW11X2RldmljZSAqc21tdSwgYm9vbCBieXBhc3MpDQo+
PiAJCXJldHVybiByZXQ7DQo+PiAJfQ0KPj4gDQo+PiAtCWlmIChpc19rZHVtcF9rZXJuZWwoKSkN
Cj4+IC0JCWVuYWJsZXMgJj0gfihDUjBfRVZUUUVOIHwgQ1IwX1BSSVFFTik7DQo+PiArCS8qIElu
aXRpYWxpemUgdGFza2xldHMgZm9yIHRocmVhZGVkIElSUXMqLw0KPj4gKwl0YXNrbGV0X2luaXQo
JnNtbXUtPmV2dHFfaXJxX3Rhc2tsZXQsIGFybV9zbW11X2V2dHFfdGhyZWFkLCBzbW11KTsNCj4+
ICsJdGFza2xldF9pbml0KCZzbW11LT5wcmlxX2lycV90YXNrbGV0LCBhcm1fc21tdV9wcmlxX3Ro
cmVhZCwgc21tdSk7DQo+PiArCXRhc2tsZXRfaW5pdCgmc21tdS0+Y29tYmluZWRfaXJxX3Rhc2ts
ZXQsIGFybV9zbW11X2NvbWJpbmVkX2lycV90aHJlYWQsDQo+PiArCQkJCSBzbW11KTsNCj4+IA0K
Pj4gCS8qIEVuYWJsZSB0aGUgU01NVSBpbnRlcmZhY2UsIG9yIGVuc3VyZSBieXBhc3MgKi8NCj4+
IAlpZiAoIWJ5cGFzcyB8fCBkaXNhYmxlX2J5cGFzcykgew0KPj4gQEAgLTIxOTUsNyArMjQ4Miw3
IEBAIHN0YXRpYyBpbmxpbmUgaW50IGFybV9zbW11X2RldmljZV9hY3BpX3Byb2JlKHN0cnVjdCBw
bGF0Zm9ybV9kZXZpY2UgKnBkZXYsDQo+PiBzdGF0aWMgaW50IGFybV9zbW11X2RldmljZV9kdF9w
cm9iZShzdHJ1Y3QgcGxhdGZvcm1fZGV2aWNlICpwZGV2LA0KPj4gCQkJCSAgICBzdHJ1Y3QgYXJt
X3NtbXVfZGV2aWNlICpzbW11KQ0KPj4gew0KPj4gLQlzdHJ1Y3QgZGV2aWNlICpkZXYgPSAmcGRl
di0+ZGV2Ow0KPj4gKwlzdHJ1Y3QgZGV2aWNlICpkZXYgPSBwZGV2Ow0KPj4gCXUzMiBjZWxsczsN
Cj4+IAlpbnQgcmV0ID0gLUVJTlZBTDsNCj4+IA0KPj4gQEAgLTIyMTksMTMwICsyNTA2LDQ0OSBA
QCBzdGF0aWMgdW5zaWduZWQgbG9uZyBhcm1fc21tdV9yZXNvdXJjZV9zaXplKHN0cnVjdCBhcm1f
c21tdV9kZXZpY2UgKnNtbXUpDQo+PiAJCXJldHVybiBTWl8xMjhLOw0KPj4gfQ0KPj4gDQo+PiAr
LyogU3RhcnQgb2YgWGVuIHNwZWNpZmljIGNvZGUuICovDQo+PiBzdGF0aWMgaW50IGFybV9zbW11
X2RldmljZV9wcm9iZShzdHJ1Y3QgcGxhdGZvcm1fZGV2aWNlICpwZGV2KQ0KPj4gew0KPj4gLQlp
bnQgaXJxLCByZXQ7DQo+PiAtCXN0cnVjdCByZXNvdXJjZSAqcmVzOw0KPj4gLQlyZXNvdXJjZV9z
aXplX3QgaW9hZGRyOw0KPj4gLQlzdHJ1Y3QgYXJtX3NtbXVfZGV2aWNlICpzbW11Ow0KPj4gLQlz
dHJ1Y3QgZGV2aWNlICpkZXYgPSAmcGRldi0+ZGV2Ow0KPj4gLQlib29sIGJ5cGFzczsNCj4+ICsg
ICAgaW50IGlycSwgcmV0Ow0KPj4gKyAgICBwYWRkcl90IGlvYWRkciwgaW9zaXplOw0KPj4gKyAg
ICBzdHJ1Y3QgYXJtX3NtbXVfZGV2aWNlICpzbW11Ow0KPj4gKyAgICBib29sIGJ5cGFzczsNCj4+
ICsNCj4+ICsgICAgc21tdSA9IGRldm1fa3phbGxvYyhkZXYsIHNpemVvZigqc21tdSksIEdGUF9L
RVJORUwpOw0KPj4gKyAgICBpZiAoICFzbW11ICkNCj4+ICsgICAgew0KPj4gKyAgICAgICAgZGV2
X2VycihwZGV2LCAiZmFpbGVkIHRvIGFsbG9jYXRlIGFybV9zbW11X2RldmljZVxuIik7DQo+PiAr
ICAgICAgICByZXR1cm4gLUVOT01FTTsNCj4+ICsgICAgfQ0KPj4gKyAgICBzbW11LT5kZXYgPSBw
ZGV2Ow0KPj4gKw0KPj4gKyAgICBpZiAoIHBkZXYtPm9mX25vZGUgKQ0KPj4gKyAgICB7DQo+PiAr
ICAgICAgICByZXQgPSBhcm1fc21tdV9kZXZpY2VfZHRfcHJvYmUocGRldiwgc21tdSk7DQo+PiAr
ICAgIH0gZWxzZQ0KPj4gKyAgICB7DQo+IA0KPiBjb2Rpbmcgc3R5bGUNCg0KQWNrLiANCj4gDQo+
IA0KPj4gKyAgICAgICAgcmV0ID0gYXJtX3NtbXVfZGV2aWNlX2FjcGlfcHJvYmUocGRldiwgc21t
dSk7DQo+PiArICAgICAgICBpZiAoIHJldCA9PSAtRU5PREVWICkNCj4+ICsgICAgICAgICAgICBy
ZXR1cm4gcmV0Ow0KPj4gKyAgICB9DQo+PiArDQo+PiArICAgIC8qIFNldCBieXBhc3MgbW9kZSBh
Y2NvcmRpbmcgdG8gZmlybXdhcmUgcHJvYmluZyByZXN1bHQgKi8NCj4+ICsgICAgYnlwYXNzID0g
ISFyZXQ7DQo+PiArDQo+PiArICAgIC8qIEJhc2UgYWRkcmVzcyAqLw0KPj4gKyAgICByZXQgPSBk
dF9kZXZpY2VfZ2V0X2FkZHJlc3MoZGV2X3RvX2R0KHBkZXYpLCAwLCAmaW9hZGRyLCAmaW9zaXpl
KTsNCj4+ICsgICAgaWYoIHJldCApDQo+PiArICAgICAgICByZXR1cm4gLUVOT0RFVjsNCj4+ICsN
Cj4+ICsgICAgaWYgKCBpb3NpemUgPCBhcm1fc21tdV9yZXNvdXJjZV9zaXplKHNtbXUpICkNCj4+
ICsgICAgew0KPj4gKyAgICAgICAgZGV2X2VycihwZGV2LCAiTU1JTyByZWdpb24gdG9vIHNtYWxs
ICglbHgpXG4iLCBpb3NpemUpOw0KPj4gKyAgICAgICAgcmV0dXJuIC1FSU5WQUw7DQo+PiArICAg
IH0NCj4+ICsNCj4+ICsgICAgLyoNCj4+ICsgICAgICogRG9uJ3QgbWFwIHRoZSBJTVBMRU1FTlRB
VElPTiBERUZJTkVEIHJlZ2lvbnMsIHNpbmNlIHRoZXkgbWF5IGNvbnRhaW4NCj4+ICsgICAgICog
dGhlIFBNQ0cgcmVnaXN0ZXJzIHdoaWNoIGFyZSByZXNlcnZlZCBieSB0aGUgUE1VIGRyaXZlci4N
Cj4gDQo+IERvZXMgdGhpcyBhcHBseSB0byBYZW4gdG9vPw0KDQpZZXMuIFBlcmZvcm1hbmNlIG1v
bml0b3JpbmcgZmFjaWxpdGllcyBhcmUgb3B0aW9uYWwuIEN1cnJlbnRseSB0aGVyZSBpcyBubyBw
bGFuIHRvIHN1cHBvcnQgYWxzbyBpbiBYRU4uDQo+IA0KPiANCj4+ICsgICAgICovDQo+PiArICAg
IHNtbXUtPmJhc2UgPSBpb3JlbWFwX25vY2FjaGUoaW9hZGRyLCBpb3NpemUpOw0KPj4gKyAgICBp
ZiAoIElTX0VSUihzbW11LT5iYXNlKSApDQo+PiArICAgICAgICByZXR1cm4gUFRSX0VSUihzbW11
LT5iYXNlKTsNCj4+ICsNCj4+ICsgICAgaWYgKCBpb3NpemUgPiBTWl82NEsgKQ0KPj4gKyAgICB7
DQo+PiArICAgICAgICBzbW11LT5wYWdlMSA9IGlvcmVtYXBfbm9jYWNoZShpb2FkZHIgKyBTWl82
NEssIEFSTV9TTU1VX1JFR19TWik7DQo+PiArICAgICAgICBpZiAoIElTX0VSUihzbW11LT5wYWdl
MSkgKQ0KPj4gKyAgICAgICAgICAgIHJldHVybiBQVFJfRVJSKHNtbXUtPnBhZ2UxKTsNCj4+ICsg
ICAgfQ0KPj4gKyAgICBlbHNlDQo+PiArICAgIHsNCj4+ICsgICAgICAgIHNtbXUtPnBhZ2UxID0g
c21tdS0+YmFzZTsNCj4+ICsgICAgfQ0KPj4gKw0KPj4gKyAgICAvKiBJbnRlcnJ1cHQgbGluZXMg
Ki8NCj4+ICsNCj4+ICsgICAgaXJxID0gcGxhdGZvcm1fZ2V0X2lycV9ieW5hbWVfb3B0aW9uYWwo
cGRldiwgImNvbWJpbmVkIik7DQo+PiArICAgIGlmICggaXJxID4gMCApDQo+PiArICAgICAgICBz
bW11LT5jb21iaW5lZF9pcnEgPSBpcnE7DQo+PiArICAgIGVsc2UNCj4+ICsgICAgew0KPj4gKyAg
ICAgICAgaXJxID0gcGxhdGZvcm1fZ2V0X2lycV9ieW5hbWVfb3B0aW9uYWwocGRldiwgImV2ZW50
cSIpOw0KPj4gKyAgICAgICAgaWYgKCBpcnEgPiAwICkNCj4+ICsgICAgICAgICAgICBzbW11LT5l
dnRxLnEuaXJxID0gaXJxOw0KPj4gKw0KPj4gKyAgICAgICAgaXJxID0gcGxhdGZvcm1fZ2V0X2ly
cV9ieW5hbWVfb3B0aW9uYWwocGRldiwgInByaXEiKTsNCj4+ICsgICAgICAgIGlmICggaXJxID4g
MCApDQo+PiArICAgICAgICAgICAgc21tdS0+cHJpcS5xLmlycSA9IGlycTsNCj4+ICsNCj4+ICsg
ICAgICAgIGlycSA9IHBsYXRmb3JtX2dldF9pcnFfYnluYW1lX29wdGlvbmFsKHBkZXYsICJnZXJy
b3IiKTsNCj4+ICsgICAgICAgIGlmICggaXJxID4gMCApDQo+PiArICAgICAgICAgICAgc21tdS0+
Z2Vycl9pcnEgPSBpcnE7DQo+PiArICAgIH0NCj4+ICsgICAgLyogUHJvYmUgdGhlIGgvdyAqLw0K
Pj4gKyAgICByZXQgPSBhcm1fc21tdV9kZXZpY2VfaHdfcHJvYmUoc21tdSk7DQo+PiArICAgIGlm
ICggcmV0ICkNCj4+ICsgICAgICAgIHJldHVybiByZXQ7DQo+PiArDQo+PiArICAgIC8qIEluaXRp
YWxpc2UgaW4tbWVtb3J5IGRhdGEgc3RydWN0dXJlcyAqLw0KPj4gKyAgICByZXQgPSBhcm1fc21t
dV9pbml0X3N0cnVjdHVyZXMoc21tdSk7DQo+PiArICAgIGlmICggcmV0ICkNCj4+ICsgICAgICAg
IHJldHVybiByZXQ7DQo+PiArDQo+PiArICAgIC8qIFJlc2V0IHRoZSBkZXZpY2UgKi8NCj4+ICsg
ICAgcmV0ID0gYXJtX3NtbXVfZGV2aWNlX3Jlc2V0KHNtbXUsIGJ5cGFzcyk7DQo+PiArICAgIGlm
ICggcmV0ICkNCj4+ICsgICAgICAgIHJldHVybiByZXQ7DQo+PiArDQo+PiArICAgIC8qDQo+PiAr
ICAgICAqIEtlZXAgYSBsaXN0IG9mIGFsbCBwcm9iZWQgZGV2aWNlcy4gVGhpcyB3aWxsIGJlIHVz
ZWQgdG8gcXVlcnkNCj4+ICsgICAgICogdGhlIHNtbXUgZGV2aWNlcyBiYXNlZCBvbiB0aGUgZndu
b2RlLg0KPj4gKyAgICAgKi8NCj4+ICsgICAgSU5JVF9MSVNUX0hFQUQoJnNtbXUtPmRldmljZXMp
Ow0KPj4gKw0KPj4gKyAgICBzcGluX2xvY2soJmFybV9zbW11X2RldmljZXNfbG9jayk7DQo+PiAr
ICAgIGxpc3RfYWRkKCZzbW11LT5kZXZpY2VzLCAmYXJtX3NtbXVfZGV2aWNlcyk7DQo+PiArICAg
IHNwaW5fdW5sb2NrKCZhcm1fc21tdV9kZXZpY2VzX2xvY2spOw0KPj4gKw0KPj4gKyAgICByZXR1
cm4gMDsNCj4+ICt9DQoNCg==


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 16:35:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 16:35:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42882.77172 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkV5t-0007PF-6t; Wed, 02 Dec 2020 16:35:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42882.77172; Wed, 02 Dec 2020 16:35:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkV5t-0007P8-3E; Wed, 02 Dec 2020 16:35:05 +0000
Received: by outflank-mailman (input) for mailman id 42882;
 Wed, 02 Dec 2020 16:35:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dnsL=FG=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1kkV5s-0007P3-0a
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 16:35:04 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 896997d7-e286-4c90-bb7d-f0d01f8bb686;
 Wed, 02 Dec 2020 16:35:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 896997d7-e286-4c90-bb7d-f0d01f8bb686
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606926902;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=eycOKdvUo1CyiWQtNu/Nm7EaPdWQhjbeuuRaBlXKR4Q=;
  b=IwjzciYoSicWztxaIM9IauZHGhQn/8xBxj+mDss26ZRmR/3tX4ia3W2j
   0UwB1lnsIJ28fGaY/2Y3v5bEGArK2YN3MFJUgyeFGoFOUFvUGu2CyKPt2
   CXVPhsqoyMpVHBDOljORiTUWLsdGmC9d2V1gl7GijfLq+E6a284FMcs7J
   o=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 1NQUARUK5eMmTpNbHwSkFWjmAQc3uAN53dEqKAZZMYHmVQMxFAuCg0+idIP1f0TjWD9xsgjS7m
 gs77VQH+9dNT8lEgv/Oe9gQQIkpe7VhWlIvLXKdlTsTBa7pbbbAIgM7TFeYl/gzdvOiylRfsFq
 K9ATtHRtZIIuE4RsapGHA739VflB95/4VSx8ungxV/HUrbpC3U8hoSIv6YUB/Ql+PxfewVmpKo
 LgNR0hGGQnK1QhepEKlH0k8TSqMadXLC2DDqMpYzRGzgTx7118B7dE/T8DViJEdNDFT6H/Mclz
 YR0=
X-SBRS: None
X-MesageID: 32376019
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,387,1599537600"; 
   d="scan'208";a="32376019"
Subject: Re: [PATCH] x86/IRQ: bump max number of guests for a shared IRQ to 31
To: Jan Beulich <jbeulich@suse.com>
CC: <andrew.cooper3@citrix.com>, <roger.pau@citrix.com>, <wl@xen.org>,
	<xen-devel@lists.xenproject.org>
References: <1606780777-30718-1-git-send-email-igor.druzhinin@citrix.com>
 <b98d3517-6c9d-6f40-6e28-cde142978143@suse.com>
 <3c9735ec-2b04-1ace-2adb-d72b32c4a5f9@citrix.com>
 <88019c81-1988-2512-282b-53b61adf09c6@suse.com>
From: Igor Druzhinin <igor.druzhinin@citrix.com>
Message-ID: <bcb0964d-9444-f5e6-372f-d8daa460fcfd@citrix.com>
Date: Wed, 2 Dec 2020 16:34:55 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <88019c81-1988-2512-282b-53b61adf09c6@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 02/12/2020 15:21, Jan Beulich wrote:
> On 02.12.2020 15:53, Igor Druzhinin wrote:
>> On 02/12/2020 09:25, Jan Beulich wrote:
>>> Instead I'm wondering whether this wouldn't better be a Kconfig
>>> setting (or even command line controllable). There don't look to be
>>> any restrictions on the precise value chosen (i.e. 2**n-1 like is
>>> the case for old and new values here, for whatever reason), so a
>>> simple permitted range of like 4...64 would seem fine to specify.
>>> Whether the default then would want to be 8 (close to the current
>>> 7) or higher (around the actually observed maximum) is a different
>>> question.
>>
>> I'm in favor of a command line argument here - it would be much less trouble
>> if a higher limit was suddenly necessary in the field. The default IMO
>> should definitely be higher than 8 - I'd stick with number 32 which to me
>> should cover our real world scenarios and apply some headroom for the future.
> 
> Well, I'm concerned of the extra memory overhead. Every IRQ,
> sharable or not, will get the extra slots allocated with the
> current scheme. Perhaps a prereq change then would be to only
> allocate multi-guest arrays for sharable IRQs, effectively
> shrinking the overhead in particular for all MSI ones?

That's one way to improve overall system scalability, but in that area
there are certainly much bigger fish to fry elsewhere. With 32 elements in the
array we get 200 bytes of overhead per structure; with 16 it's just 72 extra
bytes, which in the unattainable worst-case scenario of every single vector taken
on a 512-CPU machine would only account for several MB of overhead.

I'd start with dynamic array allocation first, setting the limit to 16, which
should be enough for now. Then, if that default value needs to be raised,
we can consider further improvements.

Igor


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 16:36:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 16:36:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42887.77183 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkV6n-0007Vp-GV; Wed, 02 Dec 2020 16:36:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42887.77183; Wed, 02 Dec 2020 16:36:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkV6n-0007Vi-Df; Wed, 02 Dec 2020 16:36:01 +0000
Received: by outflank-mailman (input) for mailman id 42887;
 Wed, 02 Dec 2020 16:36:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1kkV6m-0007Vb-Sv
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 16:36:00 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1kkV6m-00068N-Qy
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 16:36:00 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1kkV6m-0003z2-PY
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 16:36:00 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1kkV6j-0005on-BI; Wed, 02 Dec 2020 16:35:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=zlQiDJR+/Pb9UkxioXvWJdFAT1Utbih89CmWbF+q2JY=; b=p151pceMRr3NSR6RAnGXDPtFx7
	i4cq0fMFFSqmORB3xBTrM89z0r/75yP9iKPsMkhIK1YIUQ4jTG0NhQmFF8U1hKLsSdWhCYiNBH6+7
	aLpKD6powGnY0xW1RVTFr4BSsyMpBIFtvHPSfm9j7twHcAAl5jbnDbxr4OAk9G9pnNs8=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24519.49773.66229.841306@mariner.uk.xensource.com>
Date: Wed, 2 Dec 2020 16:35:57 +0000
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org,
    committers@xenproject.org,
    George Dunlap <George.Dunlap@citrix.com>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    Jan Beulich <jbeulich@suse.com>,
    Stefano Stabellini <sstabellini@kernel.org>,
    =?iso-8859-1?Q?J=FCrgen_Gro=DF?=  <jgross@suse.com>,
    Paul Durrant <xadimgnik@gmail.com>,
    Wei Liu <wl@xen.org>,
    Bertrand Marquis <bertrand.marquis@arm.com>
Subject: Re: Xen 4.15: Proposed release schedule
In-Reply-To: <a0648b20-54df-850b-2992-35dfbb86b7ca@xen.org>
References: <24510.24778.433048.477008@mariner.uk.xensource.com>
	<24510.25252.447028.364012@mariner.uk.xensource.com>
	<a0648b20-54df-850b-2992-35dfbb86b7ca@xen.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Julien Grall writes ("Re: Xen 4.15: Proposed release schedule"):
> Therefore, would it be possible to push the "Feature Freeze" by a week?

Sure.  To expand on that: this was the only comment on my proposal,
and I'm happy to accept such a minor change.

Adjusting the LPD too, but not the other two tentative dates, leads to
this schedule:

   Friday 15th January    Last posting date

     Patches adding new features should be posted to the mailing list
     by this date, although perhaps not in their final version.

   Friday 29th January   Feature freeze
  
     Patches adding new features should be committed by this date.
     Straightforward bugfixes may continue to be accepted by
     maintainers.

   Friday 12th February **tentative**   Code freeze

     Bugfixes only, all changes to be approved by the Release Manager.

   Week of 12th March **tentative**   Release
     (probably Tuesday or Wednesday)

Unless anyone has objections, I will declare this official tomorrow.

Ian.


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 16:47:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 16:47:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42895.77196 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkVH7-00008e-Hz; Wed, 02 Dec 2020 16:46:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42895.77196; Wed, 02 Dec 2020 16:46:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkVH7-00008X-Ex; Wed, 02 Dec 2020 16:46:41 +0000
Received: by outflank-mailman (input) for mailman id 42895;
 Wed, 02 Dec 2020 16:46:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z8xT=FG=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kkVH6-00008R-DE
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 16:46:40 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.21])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b8fc5ae2-fdbf-480d-baa4-dd360b6448c0;
 Wed, 02 Dec 2020 16:46:38 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.4 DYNA|AUTH)
 with ESMTPSA id 60a649wB2GkUXfq
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 2 Dec 2020 17:46:30 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b8fc5ae2-fdbf-480d-baa4-dd360b6448c0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1606927597;
	s=strato-dkim-0002; d=aepfle.de;
	h=Message-Id:Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH:From:
	Subject:Sender;
	bh=CrKA4JECnYMv6hggb6IRjnNAsHHjaWmmYuVheKSuQYA=;
	b=C6Ah8Gpz5QVXe2hhwkt3Y+9eC9FK4uJKMzY3okZ5EOY+tY4nmIAhwJAbxTc21qE2wk
	fN5SPS/tQpaj15yu9hzFbozviRl1mO97nWXDcGUkxfASmXwN6aRMno+AfG2ZOEFyhVSk
	RNXUM7YqKCse+nayeWAFDcmj2wUYzdmNFUbT6TP/FMLZRJQabhBatLXcITtrKFHztNbC
	RNqBHWCTKHp3oOr+kE7UgcPCdVHNz7wlABvgM1QWWj/85ZBAV3zLrrMUrjKScCVAdiaF
	QgwURfBbm9+3ucw337QXf9GxMrKYPZ4+fgYIId315RNnphry6VgkkKbwAXexLq6S6Reh
	molA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3Gwg"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1] tools/hotplug: allow tuning of xenwatchdogd arguments
Date: Wed,  2 Dec 2020 17:46:28 +0100
Message-Id: <20201202164628.24224-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently the arguments for xenwatchdogd are hardcoded, with a 15s
keep-alive interval and a 30s timeout.

It is not possible to tweak these values via
/etc/systemd/system/xen-watchdog.service.d/*.conf because ExecStart
can not be replaced. The only option would be a private copy
/etc/systemd/system/xen-watchdog.service, which may get out of sync
with the Xen provided xen-watchdog.service.

Adjust the service file to recognize XENWATCHDOGD_ARGS= in a
private unit configuration file.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
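Stand-alone sketch of the override mechanism the init script relies on (the
file path and the '60 20' values are illustrative, not part of the patch): a
xencommons-style file, when present, replaces the built-in '30 15' default.

```shell
# Simulate a xencommons config file that an administrator edited.
conf=$(mktemp)
echo 'XENWATCHDOGD_ARGS="60 20"' > "$conf"

# Same pattern as the init script: source the file if it exists,
# then fall back to the built-in default when the variable is unset.
XENWATCHDOGD_ARGS=
test -f "$conf" && . "$conf"
test -n "$XENWATCHDOGD_ARGS" || XENWATCHDOGD_ARGS='30 15'

echo "would start: xenwatchdogd $XENWATCHDOGD_ARGS"
rm -f "$conf"
```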
 tools/hotplug/Linux/init.d/xen-watchdog.in          | 7 ++++++-
 tools/hotplug/Linux/systemd/xen-watchdog.service.in | 4 +++-
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/tools/hotplug/Linux/init.d/xen-watchdog.in b/tools/hotplug/Linux/init.d/xen-watchdog.in
index c05f1f6b6a..87e2353b49 100644
--- a/tools/hotplug/Linux/init.d/xen-watchdog.in
+++ b/tools/hotplug/Linux/init.d/xen-watchdog.in
@@ -19,6 +19,11 @@
 
 . @XEN_SCRIPT_DIR@/hotplugpath.sh
 
+xencommons_config=@CONFIG_DIR@/@CONFIG_LEAF_DIR@
+
+test -f $xencommons_config/xencommons && . $xencommons_config/xencommons
+
+test -n "$XENWATCHDOGD_ARGS" || XENWATCHDOGD_ARGS='30 15'
 DAEMON=${sbindir}/xenwatchdogd
 base=$(basename $DAEMON)
 
@@ -46,7 +51,7 @@ start() {
 	local r
 	echo -n $"Starting domain watchdog daemon: "
 
-	$DAEMON 30 15
+	$DAEMON $XENWATCHDOGD_ARGS
 	r=$?
 	[ "$r" -eq 0 ] && success $"$base startup" || failure $"$base startup"
 	echo
diff --git a/tools/hotplug/Linux/systemd/xen-watchdog.service.in b/tools/hotplug/Linux/systemd/xen-watchdog.service.in
index 1eecd2a616..637ab7fd7f 100644
--- a/tools/hotplug/Linux/systemd/xen-watchdog.service.in
+++ b/tools/hotplug/Linux/systemd/xen-watchdog.service.in
@@ -6,7 +6,9 @@ ConditionPathExists=/proc/xen/capabilities
 
 [Service]
 Type=forking
-ExecStart=@sbindir@/xenwatchdogd 30 15
+Environment="XENWATCHDOGD_ARGS=30 15"
+EnvironmentFile=-@CONFIG_DIR@/@CONFIG_LEAF_DIR@/xencommons
+ExecStart=@sbindir@/xenwatchdogd $XENWATCHDOGD_ARGS
 KillSignal=USR1
 
 [Install]


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 16:47:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 16:47:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42901.77207 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkVIH-0000FG-SY; Wed, 02 Dec 2020 16:47:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42901.77207; Wed, 02 Dec 2020 16:47:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkVIH-0000F9-PW; Wed, 02 Dec 2020 16:47:53 +0000
Received: by outflank-mailman (input) for mailman id 42901;
 Wed, 02 Dec 2020 16:47:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kkVIG-0000F3-Mv
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 16:47:52 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kkVID-0006Ok-1l; Wed, 02 Dec 2020 16:47:49 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kkVIC-0004ze-Ca; Wed, 02 Dec 2020 16:47:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=IgtC8QEvnHi7quh/t01nXLhSMnMFrS0sCPbUyz0O6V0=; b=GuSh0okzGKVc50oqOHJjXt1MOP
	JZB+egSu6BE1H+6kD1RpfhhaSxS2XGOTXZa1QDFXSmD3IlB5VKEvex5DqW9Y5n2PLcL3pgKTofElV
	PRjBCzOzwMfMABBZRo3K6OM+m6A+iWuowMILBb9H3qNHajEdd9QhVhYTg40FlSqU/JBU=;
Subject: Re: [PATCH v2 8/8] xen/arm: Add support for SMMUv3 driver
To: Stefano Stabellini <sstabellini@kernel.org>,
 Rahul Singh <rahul.singh@arm.com>
Cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <de2101687020d18172a2b153f8977a5116d0cd66.1606406359.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2012011749550.1100@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <1912278a-13f4-885d-d1ca-cc130718d064@xen.org>
Date: Wed, 2 Dec 2020 16:47:45 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2012011749550.1100@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 02/12/2020 02:51, Stefano Stabellini wrote:
> On Thu, 26 Nov 2020, Rahul Singh wrote:

>> +/* Alias to Xen device tree helpers */
>> +#define device_node dt_device_node
>> +#define of_phandle_args dt_phandle_args
>> +#define of_device_id dt_device_match
>> +#define of_match_node dt_match_node
>> +#define of_property_read_u32(np, pname, out) (!dt_property_read_u32(np, pname, out))
>> +#define of_property_read_bool dt_property_read_bool
>> +#define of_parse_phandle_with_args dt_parse_phandle_with_args
> 
> Given all the changes to the file by the previous patches we are
> basically fully (or almost fully) adapting this code to Xen.
> 
> So at that point I wonder if we should just as well make these changes
> (e.g. s/of_phandle_args/dt_phandle_args/g) to the code too.

I have already accepted the fact that keeping Linux code as-is is nearly 
impossible without many workarounds :). The benefits also tend to be 
limited, as we noticed with the SMMU driver.

I would like to point out that this may make it quite difficult (if not 
impossible) to revert the previous patches which remove support for some 
features (e.g. atomic, MSI, ATS).

If we are going to adapt the code to Xen (I'd like to keep the Linux 
coding style though), then I think we should consider keeping code that 
may be useful in the near future (at least MSI, ATS).

> 
>> +#define FIELD_GET(_mask, _reg)          \
>> +    (typeof(_mask))(((_reg) & (_mask)) >> (__builtin_ffsll(_mask) - 1))
>> +
>> +#define WRITE_ONCE(x, val)                  \
>> +do {                                        \
>> +    *(volatile typeof(x) *)&(x) = (val);    \
>> +} while (0)
> 
> maybe we should define this in xen/include/xen/lib.h

I have attempted such a discussion in the past and it resulted in more 
bikeshedding than it was worth. So I would suggest re-implementing 
WRITE_ONCE() as write_atomic() for now.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 17:04:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 17:04:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 184c0ddd-a64c-467b-877b-f0a92e5155aa
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Julien Grall
	<jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>, Jan Beulich <jbeulich@suse.com>, Andrew
 Cooper <andrew.cooper3@citrix.com>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2] xen/irq: Propagate the error from init_one_desc_irq()
 in init_*_irq_data()
Thread-Topic: [PATCH v2] xen/irq: Propagate the error from init_one_desc_irq()
 in init_*_irq_data()
Thread-Index: AQHWxXrf7kDHq9oT70mf9BS7rS/4g6nkD0iA
Date: Wed, 2 Dec 2020 17:03:57 +0000
Message-ID: <E9377A57-0AA0-4B5B-87B0-B65FEF655C59@arm.com>
References: <20201128113642.8265-1-julien@xen.org>
In-Reply-To: <20201128113642.8265-1-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-ID: <CAD069E0108CCC45A5128FE864EFB985@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0


> On 28 Nov 2020, at 11:36, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> init_one_desc_irq() can return an error if it is unable to allocate
> memory. While this is unlikely to happen during boot (called from
> init_{,local_}irq_data()), it is better to harden the code by
> propagting the return value.
> 
> Spotted by coverity.
> 
> CID: 106529
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Roger Paul Monné <roger.pau@citrix.com>

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> 
> ---
>    Changes in v2:
>        - Add Roger's reviewed-by for x86
>        - Handle
> ---
> xen/arch/arm/irq.c | 12 ++++++++++--
> xen/arch/x86/irq.c |  7 ++++++-
> 2 files changed, 16 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
> index 3877657a5277..b71b099e6fa2 100644
> --- a/xen/arch/arm/irq.c
> +++ b/xen/arch/arm/irq.c
> @@ -88,7 +88,11 @@ static int __init init_irq_data(void)
>     for ( irq = NR_LOCAL_IRQS; irq < NR_IRQS; irq++ )
>     {
>         struct irq_desc *desc = irq_to_desc(irq);
> -        init_one_irq_desc(desc);
> +        int rc = init_one_irq_desc(desc);
> +
> +        if ( rc )
> +            return rc;
> +
>         desc->irq = irq;
>         desc->action  = NULL;
>     }
> @@ -105,7 +109,11 @@ static int init_local_irq_data(void)
>     for ( irq = 0; irq < NR_LOCAL_IRQS; irq++ )
>     {
>         struct irq_desc *desc = irq_to_desc(irq);
> -        init_one_irq_desc(desc);
> +        int rc = init_one_irq_desc(desc);
> +
> +        if ( rc )
> +            return rc;
> +
>         desc->irq = irq;
>         desc->action  = NULL;
> 
> diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
> index 45966947919e..3ebd684415ac 100644
> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -428,9 +428,14 @@ int __init init_irq_data(void)
> 
>     for ( irq = 0; irq < nr_irqs_gsi; irq++ )
>     {
> +        int rc;
> +
>         desc = irq_to_desc(irq);
>         desc->irq = irq;
> -        init_one_irq_desc(desc);
> +
> +        rc = init_one_irq_desc(desc);
> +        if ( rc )
> +            return rc;
>     }
>     for ( ; irq < nr_irqs; irq++ )
>         irq_to_desc(irq)->irq = irq;
> -- 
> 2.17.1
> 
> 


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 17:08:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 17:08:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ebb0d06e-cdce-46b4-b125-b6fb0e05bed1
Date: Wed, 2 Dec 2020 18:08:23 +0100
From: Borislav Petkov <bp@alien8.de>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, peterz@infradead.org,
	luto@kernel.org, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v2 04/12] x86/xen: drop USERGS_SYSRET64 paravirt call
Message-ID: <20201202170823.GF2951@zn.tnic>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-5-jgross@suse.com>
 <20201202123235.GD2951@zn.tnic>
 <6be0d1a5-0079-5d90-0c38-85fe4471f1b8@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <6be0d1a5-0079-5d90-0c38-85fe4471f1b8@suse.com>

On Wed, Dec 02, 2020 at 03:48:21PM +0100, Jürgen Groß wrote:
> I wanted to avoid the additional NOPs for the bare metal case.

Yeah, in that case it gets optimized to a single NOP:

[    0.176692] SMP alternatives: ffffffff81a00068: [0:5) optimized NOPs: 0f 1f 44 00 00

which is nopl 0x0(%rax,%rax,1), and I don't think that's noticeable on
modern CPUs, where a NOP is basically just a rIP increment and goes
down the pipe almost for free. :-)

> If you don't mind them I can do as you are suggesting.

Yes pls, I think asm readability is more important than a 5-byte NOP.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 17:14:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 17:14:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Subject: Re: [PATCH 1/2] include: don't use asm/page.h from common headers
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Hongyan Xia <hx242@xen.org>
References: <75484377-160c-a529-1cfc-96de86cfc550@suse.com>
 <04276039-a5d0-fefd-260e-ffaa8272fd6a@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <a35fb176-e729-a542-4416-7040d6c80964@xen.org>
Date: Wed, 2 Dec 2020 17:14:12 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <04276039-a5d0-fefd-260e-ffaa8272fd6a@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 02/12/2020 14:49, Jan Beulich wrote:
> Doing so limits what can be done in (in particular included by) this per-
> arch header. Abstract out page shift/size related #define-s, which is all
> the repsecitve headers care about. Extend the replacement / removal to

s/repsecitve/respective/

> some x86 headers as well; some others now need to include page.h (and
> they really should have before).
> 
> Arm's VADDR_BITS gets restricted to 32-bit, as its current value is
> clearly wrong for 64-bit, but the constant also isn't used anywhere
> right now (i.e. the #define could also be dropped altogether).

Whoops. Thankfully this is not used.

> 
> I wasn't sure about Arm's use of vaddr_t in PAGE_OFFSET(), and hence I
> kept it and provided a way to override the #define in the common header.

vaddr_t is defined to 32-bit for arm32 or 64-bit for arm64. So I think 
it would be fine to use the generic PAGE_OFFSET() implementation.

> 
> Also drop the dead PAGE_FLAG_MASK altogether at this occasion.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/arm/arm64/lib/clear_page.S
> +++ b/xen/arch/arm/arm64/lib/clear_page.S
> @@ -14,6 +14,8 @@
>    * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>    */
>   
> +#include <xen/page-size.h>
> +
>   /*
>    * Clear page @dest
>    *
> --- a/xen/include/asm-arm/config.h
> +++ b/xen/include/asm-arm/config.h
> @@ -176,11 +176,6 @@
>   #define FIXMAP_ACPI_BEGIN  2  /* Start mappings of ACPI tables */
>   #define FIXMAP_ACPI_END    (FIXMAP_ACPI_BEGIN + NUM_FIXMAP_ACPI_PAGES - 1)  /* End mappings of ACPI tables */
>   
> -#define PAGE_SHIFT              12
> -#define PAGE_SIZE           (_AC(1,L) << PAGE_SHIFT)
> -#define PAGE_MASK           (~(PAGE_SIZE-1))
> -#define PAGE_FLAG_MASK      (~0)
> -
>   #define NR_hypercalls 64
>   
>   #define STACK_ORDER 3
> --- a/xen/include/asm-arm/current.h
> +++ b/xen/include/asm-arm/current.h
> @@ -1,6 +1,7 @@
>   #ifndef __ARM_CURRENT_H__
>   #define __ARM_CURRENT_H__
>   
> +#include <xen/page-size.h>
>   #include <xen/percpu.h>
>   
>   #include <asm/processor.h>
> --- /dev/null
> +++ b/xen/include/asm-arm/page-shift.h

The name of the file looks a bit odd given that *_BITS are also defined 
in it. So how about renaming to page-size.h?

> @@ -0,0 +1,15 @@
> +#ifndef __ARM_PAGE_SHIFT_H__
> +#define __ARM_PAGE_SHIFT_H__
> +
> +#define PAGE_SHIFT              12
> +
> +#define PAGE_OFFSET(ptr)        ((vaddr_t)(ptr) & ~PAGE_MASK)
> +
> +#ifdef CONFIG_ARM_64
> +#define PADDR_BITS              48

Shouldn't we define VADDR_BITS here? Although I wonder whether VADDR_BITS 
should instead be defined as sizeof(vaddr_t) * 8.

This would require including asm/types.h.

> +#else
> +#define PADDR_BITS              40
> +#define VADDR_BITS              32
> +#endif
> +
> +#endif /* __ARM_PAGE_SHIFT_H__ */
> --- a/xen/include/asm-arm/page.h
> +++ b/xen/include/asm-arm/page.h
> @@ -2,21 +2,11 @@
>   #define __ARM_PAGE_H__
>   
>   #include <public/xen.h>
> +#include <xen/page-size.h>
>   #include <asm/processor.h>
>   #include <asm/lpae.h>
>   #include <asm/sysregs.h>
>   
> -#ifdef CONFIG_ARM_64
> -#define PADDR_BITS              48
> -#else
> -#define PADDR_BITS              40
> -#endif
> -#define PADDR_MASK              ((1ULL << PADDR_BITS)-1)
> -#define PAGE_OFFSET(ptr)        ((vaddr_t)(ptr) & ~PAGE_MASK)
> -
> -#define VADDR_BITS              32
> -#define VADDR_MASK              (~0UL)
> -
>   /* Shareability values for the LPAE entries */
>   #define LPAE_SH_NON_SHAREABLE 0x0
>   #define LPAE_SH_UNPREDICTALE  0x1
> --- a/xen/include/asm-x86/current.h
> +++ b/xen/include/asm-x86/current.h
> @@ -8,8 +8,8 @@
>   #define __X86_CURRENT_H__
>   
>   #include <xen/percpu.h>
> +#include <xen/page-size.h>
>   #include <public/xen.h>
> -#include <asm/page.h>
>   
>   /*
>    * Xen's cpu stacks are 8 pages (8-page aligned), arranged as:
> --- a/xen/include/asm-x86/desc.h
> +++ b/xen/include/asm-x86/desc.h
> @@ -1,6 +1,8 @@
>   #ifndef __ARCH_DESC_H
>   #define __ARCH_DESC_H
>   
> +#include <asm/page.h>

May I ask why you are including <asm/page.h> and not <xen/page-size.h> here?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 17:35:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 17:35:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Subject: Re: [PATCH 2/2] mm: split out mfn_t / gfn_t / pfn_t definitions and
 helpers
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Hongyan Xia <hx242@xen.org>
References: <75484377-160c-a529-1cfc-96de86cfc550@suse.com>
 <fb4de786-7302-3336-dcb4-1a388bee34bc@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <9c240acd-f3ef-6775-eb4b-6e3b14251e51@xen.org>
Date: Wed, 2 Dec 2020 17:35:33 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <fb4de786-7302-3336-dcb4-1a388bee34bc@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 02/12/2020 14:50, Jan Beulich wrote:
> xen/mm.h has heavy dependencies, while in a number of cases only these
> type definitions are needed. This separation then also allows pulling in
> these definitions when including xen/mm.h would cause cyclic
> dependencies.
> 
> Replace xen/mm.h inclusion where possible in include/xen/. (In
> xen/iommu.h also take the opportunity and correct the few remaining
> sorting issues.)
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/acpi/power.c
> +++ b/xen/arch/x86/acpi/power.c
> @@ -10,7 +10,6 @@
>    * Slimmed with Xen specific support.
>    */
>   
> -#include <asm/io.h>

This seems to be unrelated to this work.

>   #include <xen/acpi.h>
>   #include <xen/errno.h>
>   #include <xen/iocap.h>
> --- a/xen/drivers/char/meson-uart.c
> +++ b/xen/drivers/char/meson-uart.c
> @@ -18,7 +18,9 @@
>    * License along with this program; If not, see <http://www.gnu.org/licenses/>.
>    */
>   
> +#include <xen/errno.h>
>   #include <xen/irq.h>
> +#include <xen/mm.h>

I was going to ask why xen/mm.h needs to be included here. But it looks 
like ioremap_nocache() is defined in mm.h rather than vmap.h.

I will add it as a clean-up on my side.


>   #include <xen/serial.h>
>   #include <xen/vmap.h>
>   #include <asm/io.h>
> --- a/xen/drivers/char/mvebu-uart.c
> +++ b/xen/drivers/char/mvebu-uart.c
> @@ -18,7 +18,9 @@
>    * License along with this program; If not, see <http://www.gnu.org/licenses/>.
>    */
>   
> +#include <xen/errno.h>
>   #include <xen/irq.h>
> +#include <xen/mm.h>
>   #include <xen/serial.h>
>   #include <xen/vmap.h>
>   #include <asm/io.h>
> --- a/xen/include/asm-x86/io.h
> +++ b/xen/include/asm-x86/io.h
> @@ -49,6 +49,7 @@ __OUT(l,,int)
>   
>   /* Function pointer used to handle platform specific I/O port emulation. */
>   #define IOEMUL_QUIRK_STUB_BYTES 9
> +struct cpu_user_regs;
>   extern unsigned int (*ioemul_handle_quirk)(
>       u8 opcode, char *io_emul_stub, struct cpu_user_regs *regs);
>   
> --- /dev/null
> +++ b/xen/include/xen/frame-num.h

It would feel more natural to me if the file is named mm-types.h.
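For context, the definitions being split out are the typed frame-number wrappers. A minimal sketch of the idea follows — the real Xen definitions are generated by a macro (TYPE_SAFE) and can compile down to plain integers in non-debug builds, so the names and layout here are simplified assumptions:

```c
#include <assert.h>

/* Simplified sketch of Xen's typed frame-number wrappers.  Wrapping the
 * value in a struct makes mfn/gfn distinct types, so mixing them up is
 * a compile-time error rather than a silent bug. */
typedef struct { unsigned long mfn; } mfn_t;  /* machine frame number */
typedef struct { unsigned long gfn; } gfn_t;  /* guest frame number */

static inline mfn_t _mfn(unsigned long m) { return (mfn_t){ m }; }
static inline unsigned long mfn_x(mfn_t m) { return m.mfn; }

static inline gfn_t _gfn(unsigned long g) { return (gfn_t){ g }; }
static inline unsigned long gfn_x(gfn_t g) { return g.gfn; }
```

Because definitions like these need nothing from xen/mm.h, moving them into a small standalone header lets other headers use the types without pulling in mm.h's heavy dependencies.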

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 18:03:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 18:03:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42943.77273 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkWTY-0008C1-6V; Wed, 02 Dec 2020 18:03:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42943.77273; Wed, 02 Dec 2020 18:03:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkWTY-0008Bu-3E; Wed, 02 Dec 2020 18:03:36 +0000
Received: by outflank-mailman (input) for mailman id 42943;
 Wed, 02 Dec 2020 18:03:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kkWTW-0008Bp-TT
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 18:03:35 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kkWTU-00087d-PD; Wed, 02 Dec 2020 18:03:32 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kkWTU-0006Hm-CH; Wed, 02 Dec 2020 18:03:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=h1RaJKZGs67HCwMBIg/HMcjp3ktjOJpM+aawCccispI=; b=tnQ4qzLgm/JXK+eUKqHCvHlEeQ
	+oj5pJgeOTU2+n/lWUQzJpmUowpkCkD5cGGmUF2vxCYw4lL8DFnleuNjmsd0HhPTl5385M13J6+pZ
	rzG5Qdn1HGTzGRzaPNHUlWYNCayb4TPI9HymTApvux1xVe+JRLn5kW2+dAPuXDjx2ZRk=;
Subject: Re: [PATCH] xen/iommu: vtd: Fix undefined behavior pci_vtd_quirks()
To: "Tian, Kevin" <kevin.tian@intel.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Julien Grall <jgrall@amazon.com>
References: <20201119145216.29280-1-julien@xen.org>
 <MWHPR11MB16456E395CC9B993E0C07EC48CF50@MWHPR11MB1645.namprd11.prod.outlook.com>
From: Julien Grall <julien@xen.org>
Message-ID: <adffc9f6-9418-080f-135b-e723fbd3fb28@xen.org>
Date: Wed, 2 Dec 2020 18:03:30 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <MWHPR11MB16456E395CC9B993E0C07EC48CF50@MWHPR11MB1645.namprd11.prod.outlook.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 30/11/2020 02:50, Tian, Kevin wrote:
>> From: Julien Grall <julien@xen.org>
>> Sent: Thursday, November 19, 2020 10:52 PM
>>
>> From: Julien Grall <jgrall@amazon.com>
>>
>> When booting Xen with CONFIG_UBSAN=y on Sandy Bridge, UBSAN will throw
>> the following splat:
>>
>> (XEN)
>> ================================================================
>> ================
>> (XEN) UBSAN: Undefined behaviour in quirks.c:449:63
>> (XEN) left shift of 1 by 31 places cannot be represented in type 'int'
>> (XEN) ----[ Xen-4.11.4  x86_64  debug=y   Not tainted ]----
>>
>> [...]
>>
>> (XEN) Xen call trace:
>> (XEN)    [<ffff82d0802c0ccc>] ubsan.c#ubsan_epilogue+0xa/0xad
>> (XEN)    [<ffff82d0802c16c9>]
>> __ubsan_handle_shift_out_of_bounds+0xb4/0x145
>> (XEN)    [<ffff82d0802eeecd>] pci_vtd_quirk+0x3d3/0x74f
>> (XEN)    [<ffff82d0802e508b>]
>> iommu.c#domain_context_mapping+0x45b/0x46f
>> (XEN)    [<ffff82d08053f39e>] iommu.c#setup_hwdom_device+0x22/0x3a
>> (XEN)    [<ffff82d08053dfbc>] pci.c#setup_one_hwdom_device+0x8c/0x124
>> (XEN)    [<ffff82d08053e302>] pci.c#_setup_hwdom_pci_devices+0xbb/0x2f7
>> (XEN)    [<ffff82d0802da5b7>] pci.c#pci_segments_iterate+0x4c/0x8c
>> (XEN)    [<ffff82d08053e8bd>] setup_hwdom_pci_devices+0x25/0x2c
>> (XEN)    [<ffff82d08053e916>]
>> iommu.c#intel_iommu_hwdom_init+0x52/0x2f3
>> (XEN)    [<ffff82d08053d6da>] iommu_hwdom_init+0x4e/0xa4
>> (XEN)    [<ffff82d080577f32>] dom0_construct_pv+0x23c8/0x2476
>> (XEN)    [<ffff82d08057cb50>] construct_dom0+0x6c/0xa3
>> (XEN)    [<ffff82d080564822>] __start_xen+0x4651/0x4b55
>> (XEN)    [<ffff82d0802000f3>] __high_start+0x53/0x55
>>
>> Note that the splat is from 4.11.4 and not staging, although the problem
>> is still present.
>>
>> This can be solved by making the first operand unsigned int.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> Reviewed-by: Kevin Tian <kevin.tian@intel.com>

Thanks! I have committed the patch.
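For readers following along, a minimal sketch of the undefined behaviour and the fix (the bit position 31 mirrors the splat; the helper name is hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* `1 << 31` shifts a signed int out of its representable range, which
 * is undefined behaviour in C.  Making the first operand unsigned, as
 * the patch does, keeps the shift well defined. */
static uint32_t bit_mask(unsigned int bit)
{
    /* Undefined for bit == 31:  return 1 << bit; */
    return 1u << bit;  /* well defined for bit <= 31 */
}
```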

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 18:11:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 18:11:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42950.77289 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkWaq-0000jz-1B; Wed, 02 Dec 2020 18:11:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42950.77289; Wed, 02 Dec 2020 18:11:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkWap-0000js-UR; Wed, 02 Dec 2020 18:11:07 +0000
Received: by outflank-mailman (input) for mailman id 42950;
 Wed, 02 Dec 2020 18:11:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kkWao-0000ji-U5
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 18:11:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kkWao-0008Go-Lk; Wed, 02 Dec 2020 18:11:06 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kkWao-0006th-74; Wed, 02 Dec 2020 18:11:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=lWlr9lkqrxC598c5YlkM0v2pBVs1jhg/f08ORjhLjRg=; b=O+5fMO5gCB/OmGQCQKHBg0YKDX
	VDx7G403j6A3F0TnFebVBtI909pIuj6IxDnf9GZLzyU5ysSYOs2MmUP5NIrB+D/aDAyNWblw/RMDh
	SyCQpluS5cSjXXZdttUKF2CyUSvVhwhkIc81xARuHnI8aCV9QxyJGDaxYhTip5xWzwR0=;
Subject: Re: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
To: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>
Cc: Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andre Przywara <Andre.Przywara@arm.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Kaly Xin <Kaly.Xin@arm.com>,
 nd <nd@arm.com>
References: <20201109082110.1133996-1-penny.zheng@arm.com>
 <cfa63398-8182-b79f-1602-ed068e2319ad@xen.org>
 <AM0PR08MB3747B42FC856B9BDF24646629EE60@AM0PR08MB3747.eurprd08.prod.outlook.com>
 <alpine.DEB.2.21.2011251554070.7979@sstabellini-ThinkPad-T480s>
 <AM0PR08MB3747912905438DA6D7FF969C9EF90@AM0PR08MB3747.eurprd08.prod.outlook.com>
 <8f47313a-f47a-520d-3845-3f2198fce5b4@xen.org>
 <AM0PR08MB37478D884057C8720ED1023D9EF90@AM0PR08MB3747.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
Message-ID: <0a272ffd-24de-2db4-5751-9161cc57cec3@xen.org>
Date: Wed, 2 Dec 2020 18:11:04 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <AM0PR08MB37478D884057C8720ED1023D9EF90@AM0PR08MB3747.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 26/11/2020 11:27, Wei Chen wrote:
> Hi Julien,

Hi Wei,

>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: 2020年11月26日 18:55
>> To: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
>> Andre Przywara <Andre.Przywara@arm.com>; Bertrand Marquis
>> <Bertrand.Marquis@arm.com>; Kaly Xin <Kaly.Xin@arm.com>; nd
>> <nd@arm.com>
>> Subject: Re: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
>>
>> Hi Wei,
>>
>> Your e-mail font seems to be different to the usual plain text one. Are
>> you sending the e-mail using HTML by any chance?
>>
> 
> It's strange, I always use the plain text.

Maybe Exchange decided to mangle the e-mail :). Anyway, this new message
looks fine.

> 
>> On 26/11/2020 02:07, Wei Chen wrote:
>>> Hi Stefano,
>>>
>>>> -----Original Message-----
>>>> From: Stefano Stabellini <sstabellini@kernel.org>
>>>> Sent: 2020年11月26日 8:00
>>>> To: Wei Chen <Wei.Chen@arm.com>
>>>> Cc: Julien Grall <julien@xen.org>; Penny Zheng <Penny.Zheng@arm.com>;
>> xen-
>>>> devel@lists.xenproject.org; sstabellini@kernel.org; Andre Przywara
>>>> <Andre.Przywara@arm.com>; Bertrand Marquis
>> <Bertrand.Marquis@arm.com>;
>>>> Kaly Xin <Kaly.Xin@arm.com>; nd <nd@arm.com>
>>>> Subject: RE: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
>>>>
>>>> Resuming this old thread.
>>>>
>>>> On Fri, 13 Nov 2020, Wei Chen wrote:
>>>>>> Hi,
>>>>>>
>>>>>> On 09/11/2020 08:21, Penny Zheng wrote:
>>>>>>> A CNTVCT_EL0 or CNTPCT_EL0 counter read on Cortex-A73 (all versions)
>>>>>>> might return a wrong value when the counter crosses a 32-bit boundary.
>>>>>>>
>>>>>>> So far, there is no case where Xen itself accesses CNTVCT_EL0, and it
>>>>>>> should also be the guest OS's responsibility to deal with this part.
>>>>>>>
>>>>>>> But for CNTPCT, there exist several cases in Xen involving reading
>>>>>>> CNTPCT, so a possible workaround is to perform the read twice, and to
>>>>>>> return one or the other depending on whether a transition has taken
>>>>>>> place.
>>>>>>>
>>>>>>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
>>>>>>
>>>>>> Acked-by: Julien Grall <jgrall@amazon.com>
>>>>>>
>>>>>> On a related topic, do we need a fix similar to Linux commit
>>>>>> 75a19a0202db "arm64: arch_timer: Ensure counter register reads occur
>>>>>> with seqlock held"?
>>>>>>
>>>>>
>>>>> I took a look at this Linux commit; it seems to prevent the seq-lock
>>>>> from being speculated. Using enforced ordering instead of an ISB after
>>>>> the counter read seems to be for performance reasons.
>>>>>
>>>>> I found that you had placed an ISB before the counter read in get_cycles
>>>>> to prevent the counter value from being speculated, but you haven't
>>>>> placed a second ISB after the read. Is that because we haven't used
>>>>> get_cycles in a seqlock critical context (maybe I didn't find the right
>>>>> place)? So should we fix it now, or do you prefer to fix it now for
>>>>> future usage?
>>>>
>>>> Looking at the call sites, it doesn't look like we need any ISB after
>>>> get_cycles as it is not used in any critical context. There is also a
>>>> data dependency on the value returned by it.
>>
>> I am assuming you looked at all the users of NOW(). Is that right?
>>
>>>>
>>>> So I am thinking we don't need any fix. At most we need an in-code comment?
>>>
>>> I agree with adding an in-code comment. It will remind us in the future
>>> when we use get_cycles in a critical context. Adding the barrier now will
>>> probably only lead to meaningless performance degradation.
>>
>> I read this as: there would be no performance impact if we add the
>> ordering now. Is that what you intended to say?
> 
> Sorry about my English. I intended to say: "Adding it now may introduce
> some performance cost, and that cost may not be worth it, because Xen may
> never use it in a similar scenario."

Don't worry! I think the performance impact should not be noticeable if we
use the same trick as Linux.

>> In addition, AFAICT, the x86 version of get_cycles() is already able to
>> provide that ordering, so there is a chance that code may rely on it.
>>
>> While I don't necessarily agree with adding barriers everywhere by default
>> (this may have a big impact on the platform), I think it is better to have
>> an accurate number of cycles.
>>
> 
> As x86 has done it, I think it's OK to do it for Arm. This will keep the
> function behaving the same on different architectures.

Just to be clear, I am not 100% sure this is what Intel is doing, although
that is my understanding of the comment in the code.

@Stefano, what do you think?

@Wei, assuming Stefano is happy with the proposal, would you be happy to 
send a patch for that?
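As background, the erratum 858921 double-read workaround discussed in this thread can be sketched as below. Only the selection logic is shown; the actual system-register accessors and any barriers are omitted, the helper name is hypothetical, and picking the first read on a transition mirrors my reading of the Linux variant of the workaround:

```c
#include <assert.h>
#include <stdint.h>

/* Cortex-A73 erratum 858921: a counter read may return a wrong value
 * when the counter crosses a 32-bit boundary.  The workaround reads the
 * counter twice; if the upper 32 bits differ, a boundary was crossed
 * between the reads and the first value is used, otherwise the second. */
static uint64_t pick_cntpct(uint64_t first, uint64_t second)
{
    return (((first ^ second) >> 32) & 1) ? first : second;
}
```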

Best regards,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 18:17:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 18:17:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42956.77302 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkWgn-0000xe-Oo; Wed, 02 Dec 2020 18:17:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42956.77302; Wed, 02 Dec 2020 18:17:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkWgn-0000xX-Kz; Wed, 02 Dec 2020 18:17:17 +0000
Received: by outflank-mailman (input) for mailman id 42956;
 Wed, 02 Dec 2020 18:17:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkWgm-0000xP-Jm; Wed, 02 Dec 2020 18:17:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkWgm-0008QB-AB; Wed, 02 Dec 2020 18:17:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkWgm-0001sl-1r; Wed, 02 Dec 2020 18:17:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kkWgm-000511-1O; Wed, 02 Dec 2020 18:17:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jvhRD1rRly8eoUjMSbIlivpXx+zHQ17cMsgF6Sj4vaU=; b=qcHhGTOMIcacMTnDp26zrsu1wb
	3rac3GQjRiLe1CLw8rurrBlwhoQCL+JLhQ+C7p8NAENE///JiUB9cmtjSot419IgeSAqHiLlyEipo
	wMefUw2xJyXUu3RRQfPZUhm6xA2bAgqkmA0B5DZrXKRHuFUZgDbXrlsyGD217rUkqPuQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157142-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157142: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:build-arm64-pvops:kernel-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 02 Dec 2020 18:17:16 +0000

flight 157142 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157142/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  104 days
Failing since        152659  2020-08-21 14:07:39 Z  103 days  215 attempts
Testing same since   157142  2020-12-01 20:39:57 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69355 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 18:31:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 18:31:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42965.77316 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkWug-0002p5-A3; Wed, 02 Dec 2020 18:31:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42965.77316; Wed, 02 Dec 2020 18:31:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkWug-0002oy-71; Wed, 02 Dec 2020 18:31:38 +0000
Received: by outflank-mailman (input) for mailman id 42965;
 Wed, 02 Dec 2020 18:31:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kkWue-0002ot-Cm
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 18:31:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kkWub-0000Fq-R9; Wed, 02 Dec 2020 18:31:33 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kkWub-0008CU-8Y; Wed, 02 Dec 2020 18:31:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=QWNtZF+8sKsnsiW8NH+OMTwZIn6A98UV+A8Bdjuh28U=; b=HsWiNJVpjjeV45bWsoVRj+l5GB
	gHuHisQEdUVwxQ30CAa49+YOO3H8YEL19iZXJWCTKw2mA/XbRZQl1/TQdfSWdjSzGlDucRMLF+/dN
	Kg3QjKhENLaKebKg6Ufhr6oA2P0Ib4ArZA+mtdwQQGAGfnSWZDajo2Un6ym3F5Fde9PU=;
Subject: Re: [PATCH] gnttab: don't allocate status frame tracking array when
 "gnttab=max_ver:1"
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <a484cc88-f41d-5d38-d098-4eda297569a1@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <bf921997-fc9a-b1b9-78d9-7a7f85fe4608@xen.org>
Date: Wed, 2 Dec 2020 18:31:31 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <a484cc88-f41d-5d38-d098-4eda297569a1@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 05/11/2020 15:55, Jan Beulich wrote:
> This array can be large when many grant frames are permitted; avoid
> allocating it when it's not going to be used anyway. 

Given there are not many users of grant-table v2, would it make sense to 
avoid allocating the array until the guest starts using grant-table v2?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 19:03:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 19:03:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42977.77329 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkXPY-0005jT-S5; Wed, 02 Dec 2020 19:03:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42977.77329; Wed, 02 Dec 2020 19:03:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkXPY-0005jM-OX; Wed, 02 Dec 2020 19:03:32 +0000
Received: by outflank-mailman (input) for mailman id 42977;
 Wed, 02 Dec 2020 19:03:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kkXPX-0005jH-E2
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 19:03:31 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kkXPU-0000zm-DC; Wed, 02 Dec 2020 19:03:28 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kkXPU-0002Mw-3W; Wed, 02 Dec 2020 19:03:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=T432UGitHsRICyXBtYhUNS3QUhL8d5TafwM77KhXsGQ=; b=Z7GrOVkr3clPfUxXianyTNk/R/
	wUuVDDZ2ZcEYWxpGhFcYUOsJdLBY0XN+vRRyrdEu4nmDwn5cFKh30oiARuc7+cYagFxGPfI7NLZlD
	Pq/5WTbjitNbfMQo/i+pCYQDIhu4rRywhvbo9iLQ+iiZGWGfgiAMT3l+rW3xuHvow/J8=;
Subject: Re: [PATCH v3 1/5] evtchn: drop acquiring of per-channel lock from
 send_guest_{global,vcpu}_virq()
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d709a9c3-dbe2-65c6-2c2f-6a12f486335d@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <70170293-a9a7-282a-dde6-7ed73fc2da48@xen.org>
Date: Wed, 2 Dec 2020 19:03:26 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <d709a9c3-dbe2-65c6-2c2f-6a12f486335d@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 23/11/2020 13:28, Jan Beulich wrote:
> The per-vCPU virq_lock, which is being held anyway, together with there
> not being any call to evtchn_port_set_pending() when v->virq_to_evtchn[]
> is zero, provide sufficient guarantees. 

I agree that the per-vCPU virq_lock is going to be sufficient. However, 
dropping the lock also means the event channel locking is more complex 
to understand (the long comment that was added proves it).

In fact, the locking in the event channel code has already proven to be 
quite fragile; therefore, I think this patch is not worth the risk.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 19:13:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 19:13:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.42983.77340 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkXYq-0006iU-Oq; Wed, 02 Dec 2020 19:13:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 42983.77340; Wed, 02 Dec 2020 19:13:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkXYq-0006iN-Lt; Wed, 02 Dec 2020 19:13:08 +0000
Received: by outflank-mailman (input) for mailman id 42983;
 Wed, 02 Dec 2020 19:13:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkXYp-0006iF-Tf; Wed, 02 Dec 2020 19:13:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkXYp-0001E2-IV; Wed, 02 Dec 2020 19:13:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkXYp-0004PQ-67; Wed, 02 Dec 2020 19:13:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kkXYp-00035S-5d; Wed, 02 Dec 2020 19:13:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DkEi1tWVyYgv9YYRFP12ea1K5lioIIIqGxsih9TAMWk=; b=1OxFonjKNAwMJyvgsA8KnYP/VL
	dwU965Sfp7oGBGgOwYP/onrMjnnKWSDkDXywqrQ9jbbgwHhh2dZ+3EoX5n8c/D8wL5Uha9fNqlDZv
	v20LNA5pi0QJpNkqqzbw5f3Lz6EEVbjTKMRjPk+Z1S05ArVbLTttakkTUKp8nsAN2jZo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157143-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157143: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-xl:<job status>:broken:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-install:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:host-install(5):broken:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=b65054597872ce3aefbc6a666385eabdf9e288da
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 02 Dec 2020 19:13:07 +0000

flight 157143 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157143/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl             <job status>                 broken  in 157131
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl          12 debian-install fail in 157109 REGR. vs. 152332
 test-arm64-arm64-xl-credit1 10 host-ping-check-xen fail in 157119 REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen fail in 157119 REGR. vs. 152332
 test-arm64-arm64-xl-xsm      12 debian-install fail in 157119 REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu  fail in 157119 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-seattle 10 host-ping-check-xen fail in 157119 pass in 157131
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 157119 pass in 157143
 test-armhf-armhf-libvirt      8 xen-boot         fail in 157131 pass in 157143
 test-arm64-arm64-xl-xsm       8 xen-boot         fail in 157131 pass in 157143
 test-arm64-arm64-xl           8 xen-boot                   fail pass in 157109
 test-arm64-arm64-xl-xsm      10 host-ping-check-xen        fail pass in 157119
 test-arm64-arm64-xl-credit1   8 xen-boot                   fail pass in 157119
 test-arm64-arm64-libvirt-xsm  8 xen-boot                   fail pass in 157119
 test-armhf-armhf-libvirt     18 guest-start/debian.repeat  fail pass in 157119
 test-arm64-arm64-examine      8 reboot                     fail pass in 157131
 test-arm64-arm64-xl-seattle   8 xen-boot                   fail pass in 157131

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl       5 host-install(5) broken in 157131 blocked in 152332
 test-arm64-arm64-xl-xsm 11 leak-check/basis(11) fail in 157109 blocked in 152332
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 157119 like 152332
 test-arm64-arm64-xl-seattle 11 leak-check/basis(11) fail in 157131 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                b65054597872ce3aefbc6a666385eabdf9e288da
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  123 days
Failing since        152366  2020-08-01 20:49:34 Z  122 days  208 attempts
Testing same since   157109  2020-11-30 08:17:04 Z    2 days    4 attempts

------------------------------------------------------------
3619 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl broken

Not pushing.

(No revision log; it would be 693043 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 19:27:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 19:27:30 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 8/8] xen/arm: Add support for SMMUv3 driver
Thread-Topic: [PATCH v2 8/8] xen/arm: Add support for SMMUv3 driver
Thread-Index: AQHWxBX/ttp8YMNqT0KunkbXuSldmKnjI/iAgADj3wCAADIkAA==
Date: Wed, 2 Dec 2020 19:26:48 +0000
Message-ID: <50A7E1D6-EDF6-48E3-84C5-93C5DC08ECF4@arm.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <de2101687020d18172a2b153f8977a5116d0cd66.1606406359.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2012011749550.1100@sstabellini-ThinkPad-T480s>
 <4338735A-1655-4B5F-A493-13D6021C2FBB@arm.com>
In-Reply-To: <4338735A-1655-4B5F-A493-13D6021C2FBB@arm.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0

Hi Stefano,

> On 2 Dec 2020, at 4:27 pm, Rahul Singh <Rahul.Singh@arm.com> wrote:
> 
> Hello Stefano,
> 
>> On 2 Dec 2020, at 2:51 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
>> 
>> On Thu, 26 Nov 2020, Rahul Singh wrote:
>>> Add support for ARM architected SMMUv3 implementation. It is based on
>>> the Linux SMMUv3 driver.
>>> 
>>> Major differences with regard to Linux driver are as follows:
>>> 1. Only Stage-2 translation is supported as compared to the Linux driver
>>>  that supports both Stage-1 and Stage-2 translations.
>>> 2. Use P2M  page table instead of creating one as SMMUv3 has the
>>>  capability to share the page tables with the CPU.
>>> 3. Tasklets are used in place of threaded IRQ's in Linux for event queue
>>>  and priority queue IRQ handling.
>>> 4. Latest version of the Linux SMMUv3 code implements the commands queue
>>>  access functions based on atomic operations implemented in Linux.
>>>  Atomic functions used by the commands queue access functions are not
>>>  implemented in XEN therefore we decided to port the earlier version
>>>  of the code. Once the proper atomic operations will be available in
>>>  XEN the driver can be updated.
>>> 5. Driver is currently supported as Tech Preview.
>> 
>> This patch is big and was difficult to review, nonetheless I tried :-)
> 
> Thanks again for reviewing the code.
>> 
>> That said, the code is self-contained, marked as Tech Preview, and not
>> at risk of making other things unstable, so it is low risk to accept it.
>> 
>> Comments below.
>> 
>> 
>>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>>> ---
>>> MAINTAINERS                           |   6 +
>>> SUPPORT.md                            |   1 +
>>> xen/drivers/passthrough/Kconfig       |  10 +
>>> xen/drivers/passthrough/arm/Makefile  |   1 +
>>> xen/drivers/passthrough/arm/smmu-v3.c | 986 +++++++++++++++++++++-----
>>> 5 files changed, 814 insertions(+), 190 deletions(-)
>>> 
>>> diff --git a/MAINTAINERS b/MAINTAINERS
>>> index dab38a6a14..1d63489eec 100644
>>> --- a/MAINTAINERS
>>> +++ b/MAINTAINERS
>>> @@ -249,6 +249,12 @@ F:	xen/include/asm-arm/
>>> F:	xen/include/public/arch-arm/
>>> F:	xen/include/public/arch-arm.h
>>> 
>>> +ARM SMMUv3
>>> +M:	Bertrand Marquis <bertrand.marquis@arm.com>
>>> +M:	Rahul Singh <rahul.singh@arm.com>
>>> +S:	Supported
>>> +F:	xen/drivers/passthrough/arm/smmu-v3.c
>>> +
>>> Change Log
>>> M:	Paul Durrant <paul@xen.org>
>>> R:	Community Manager <community.manager@xenproject.org>
>>> diff --git a/SUPPORT.md b/SUPPORT.md
>>> index ab02aca5f4..e402c7202d 100644
>>> --- a/SUPPORT.md
>>> +++ b/SUPPORT.md
>>> @@ -68,6 +68,7 @@ For the Cortex A57 r0p0 - r1p1, see Errata 832075.
>>>    Status, ARM SMMUv1: Supported, not security supported
>>>    Status, ARM SMMUv2: Supported, not security supported
>>>    Status, Renesas IPMMU-VMSA: Supported, not security supported
>>> +    Status, ARM SMMUv3: Tech Preview
>>> 
>>> ### ARM/GICv3 ITS
>>> 
>>> diff --git a/xen/drivers/passthrough/Kconfig b/xen/drivers/passthrough/Kconfig
>>> index 0036007ec4..5b71c59f47 100644
>>> --- a/xen/drivers/passthrough/Kconfig
>>> +++ b/xen/drivers/passthrough/Kconfig
>>> @@ -13,6 +1
MywxNiBAQCBjb25maWcgQVJNX1NNTVUNCj4+PiAJICBTYXkgWSBoZXJlIGlmIHlvdXIgU29DIGlu
Y2x1ZGVzIGFuIElPTU1VIGRldmljZSBpbXBsZW1lbnRpbmcgdGhlDQo+Pj4gCSAgQVJNIFNNTVUg
YXJjaGl0ZWN0dXJlLg0KPj4+IA0KPj4+ICtjb25maWcgQVJNX1NNTVVfVjMNCj4+PiArCWJvb2wg
IkFSTSBMdGQuIFN5c3RlbSBNTVUgVmVyc2lvbiAzIChTTU1VdjMpIFN1cHBvcnQiIGlmIEVYUEVS
VA0KPj4+ICsJZGVwZW5kcyBvbiBBUk1fNjQNCj4+PiArCS0tLWhlbHAtLS0NCj4+PiArCSBTdXBw
b3J0IGZvciBpbXBsZW1lbnRhdGlvbnMgb2YgdGhlIEFSTSBTeXN0ZW0gTU1VIGFyY2hpdGVjdHVy
ZQ0KPj4+ICsJIHZlcnNpb24gMy4NCj4+PiArDQo+Pj4gKwkgU2F5IFkgaGVyZSBpZiB5b3VyIHN5
c3RlbSBpbmNsdWRlcyBhbiBJT01NVSBkZXZpY2UgaW1wbGVtZW50aW5nDQo+Pj4gKwkgdGhlIEFS
TSBTTU1VdjMgYXJjaGl0ZWN0dXJlLg0KPj4+ICsNCj4+PiBjb25maWcgSVBNTVVfVk1TQQ0KPj4+
IAlib29sICJSZW5lc2FzIElQTU1VLVZNU0EgZm91bmQgaW4gUi1DYXIgR2VuMyBTb0NzIg0KPj4+
IAlkZXBlbmRzIG9uIEFSTV82NA0KPj4+IGRpZmYgLS1naXQgYS94ZW4vZHJpdmVycy9wYXNzdGhy
b3VnaC9hcm0vTWFrZWZpbGUgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9hcm0vTWFrZWZpbGUN
Cj4+PiBpbmRleCBmY2Q5MThlYTNlLi5jNWZiM2I1OGE1IDEwMDY0NA0KPj4+IC0tLSBhL3hlbi9k
cml2ZXJzL3Bhc3N0aHJvdWdoL2FybS9NYWtlZmlsZQ0KPj4+ICsrKyBiL3hlbi9kcml2ZXJzL3Bh
c3N0aHJvdWdoL2FybS9NYWtlZmlsZQ0KPj4+IEBAIC0xLDMgKzEsNCBAQA0KPj4+IG9iai15ICs9
IGlvbW11Lm8gaW9tbXVfaGVscGVycy5vIGlvbW11X2Z3c3BlYy5vDQo+Pj4gb2JqLSQoQ09ORklH
X0FSTV9TTU1VKSArPSBzbW11Lm8NCj4+PiBvYmotJChDT05GSUdfSVBNTVVfVk1TQSkgKz0gaXBt
bXUtdm1zYS5vDQo+Pj4gK29iai0kKENPTkZJR19BUk1fU01NVV9WMykgKz0gc21tdS12My5vDQo+
Pj4gZGlmZiAtLWdpdCBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL2FybS9zbW11LXYzLmMgYi94
ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9hcm0vc21tdS12My5jDQo+Pj4gaW5kZXggNTVkMWNiYTE5
NC4uOGYyMzM3ZTdmMiAxMDA2NDQNCj4+PiAtLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9h
cm0vc21tdS12My5jDQo+Pj4gKysrIGIveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvYXJtL3NtbXUt
djMuYw0KPj4+IEBAIC0yLDM2ICsyLDI4MCBAQA0KPj4+IC8qDQo+Pj4gKiBJT01NVSBBUEkgZm9y
IEFSTSBhcmNoaXRlY3RlZCBTTU1VdjMgaW1wbGVtZW50YXRpb25zLg0KPj4+ICoNCj4+PiAtICog
Q29weXJpZ2h0IChDKSAyMDE1IEFSTSBMaW1pdGVkDQo+Pj4gKyAqIEJhc2VkIG9uIExpbnV4J3Mg
U01NVXYzIGRyaXZlcjoNCj4+PiArICogICAgZHJpdmVycy9pb21tdS9hcm0vYXJtLXNtbXUtdjMv
YXJtLXNtbXUtdjMuYw0KPj4+ICsgKiAgICBjb21taXQ6IDk1MWNiYmMzODZmZjAxYjUwZGE0ZjQ2
Mzg3ZTk5NGU4MWQ5YWI0MzENCj4+PiArICogYW5kIFhlbidzIFNNTVUgZHJpdmVyOg0KPj4+ICsg
KiAgICB4ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9hcm0vc21tdS5jDQo+Pj4gKg0KPj4+IC0gKiBB
dXRob3I6IFdpbGwgRGVhY29uIDx3aWxsLmRlYWNvbkBhcm0uY29tPg0KPj4+ICsgKiBDb3B5cmln
aHQgKEMpIDIwMTUgQVJNIExpbWl0ZWQgV2lsbCBEZWFjb24gPHdpbGwuZGVhY29uQGFybS5jb20+
DQo+Pj4gKg0KPj4+IC0gKiBUaGlzIGRyaXZlciBpcyBwb3dlcmVkIGJ5IGJhZCBjb2ZmZWUgYW5k
IGJvbWJheSBtaXguDQo+Pj4gKyAqIENvcHlyaWdodCAoQykgMjAyMCBBcm0gTHRkLg0KPj4+ICsg
Kg0KPj4+ICsgKiBUaGlzIHByb2dyYW0gaXMgZnJlZSBzb2Z0d2FyZTsgeW91IGNhbiByZWRpc3Ry
aWJ1dGUgaXQgYW5kL29yIG1vZGlmeQ0KPj4+ICsgKiBpdCB1bmRlciB0aGUgdGVybXMgb2YgdGhl
IEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlIHZlcnNpb24gMiBhcw0KPj4+ICsgKiBwdWJsaXNo
ZWQgYnkgdGhlIEZyZWUgU29mdHdhcmUgRm91bmRhdGlvbi4NCj4+PiArICoNCj4+PiArICogVGhp
cyBwcm9ncmFtIGlzIGRpc3RyaWJ1dGVkIGluIHRoZSBob3BlIHRoYXQgaXQgd2lsbCBiZSB1c2Vm
dWwsDQo+Pj4gKyAqIGJ1dCBXSVRIT1VUIEFOWSBXQVJSQU5UWTsgd2l0aG91dCBldmVuIHRoZSBp
bXBsaWVkIHdhcnJhbnR5IG9mDQo+Pj4gKyAqIE1FUkNIQU5UQUJJTElUWSBvciBGSVRORVNTIEZP
UiBBIFBBUlRJQ1VMQVIgUFVSUE9TRS4gIFNlZSB0aGUNCj4+PiArICogR05VIEdlbmVyYWwgUHVi
bGljIExpY2Vuc2UgZm9yIG1vcmUgZGV0YWlscy4NCj4+PiArICoNCj4+PiArICogWW91IHNob3Vs
ZCBoYXZlIHJlY2VpdmVkIGEgY29weSBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UN
Cj4+PiArICogYWxvbmcgd2l0aCB0aGlzIHByb2dyYW0uICBJZiBub3QsIHNlZSA8aHR0cDovL3d3
dy5nbnUub3JnL2xpY2Vuc2VzLz4uDQo+Pj4gKyAqDQo+Pj4gKyAqLw0KPj4+ICsNCj4+PiArI2lu
Y2x1ZGUgPHhlbi9hY3BpLmg+DQo+Pj4gKyNpbmNsdWRlIDx4ZW4vY29uZmlnLmg+DQo+Pj4gKyNp
bmNsdWRlIDx4ZW4vZGVsYXkuaD4NCj4+PiArI2luY2x1ZGUgPHhlbi9lcnJuby5oPg0KPj4+ICsj
aW5jbHVkZSA8eGVuL2Vyci5oPg0KPj4+ICsjaW5jbHVkZSA8eGVuL2lycS5oPg0KPj4+ICsjaW5j
bHVkZSA8eGVuL2xpYi5oPg0KPj4+ICsjaW5jbHVkZSA8eGVuL2xpc3QuaD4NCj4+PiArI2luY2x1
ZGUgPHhlbi9tbS5oPg0KPj4+ICsjaW5jbHVkZSA8eGVuL3JidHJlZS5oPg0KPj4+ICsjaW5jbHVk
ZSA8eGVuL3NjaGVkLmg+DQo+Pj4gKyNpbmNsdWRlIDx4ZW4vc2l6ZXMuaD4NCj4+PiArI2luY2x1
ZGUgPHhlbi92bWFwLmg+DQo+Pj4gKyNpbmNsdWRlIDxhc20vYXRvbWljLmg+DQo+Pj4gKyNpbmNs
dWRlIDxhc20vZGV2aWNlLmg+DQo+Pj4gKyNpbmNsdWRlIDxhc20vaW8uaD4NCj4+PiArI2luY2x1
ZGUgPGFzbS9wbGF0Zm9ybS5oPg0KPj4+ICsjaW5jbHVkZSA8YXNtL2lvbW11X2Z3c3BlYy5oPg0K
Pj4+ICsNCj4+PiArLyogTGludXggY29tcGF0aWJpbGl0eSBmdW5jdGlvbnMuICovDQo+Pj4gK3R5
cGVkZWYgcGFkZHJfdCBkbWFfYWRkcl90Ow0KPj4+ICt0eXBlZGVmIHVuc2lnbmVkIGludCBnZnBf
dDsNCj4+PiArDQo+Pj4gKyNkZWZpbmUgcGxhdGZvcm1fZGV2aWNlIGRldmljZQ0KPj4+ICsNCj4+
PiArI2RlZmluZSBHRlBfS0VSTkVMIDANCj4+PiArDQo+Pj4gKy8qIEFsaWFzIHRvIFhlbiBkZXZp
Y2UgdHJlZSBoZWxwZXJzICovDQo+Pj4gKyNkZWZpbmUgZGV2aWNlX25vZGUgZHRfZGV2aWNlX25v
ZGUNCj4+PiArI2RlZmluZSBvZl9waGFuZGxlX2FyZ3MgZHRfcGhhbmRsZV9hcmdzDQo+Pj4gKyNk
ZWZpbmUgb2ZfZGV2aWNlX2lkIGR0X2RldmljZV9tYXRjaA0KPj4+ICsjZGVmaW5lIG9mX21hdGNo
X25vZGUgZHRfbWF0Y2hfbm9kZQ0KPj4+ICsjZGVmaW5lIG9mX3Byb3BlcnR5X3JlYWRfdTMyKG5w
LCBwbmFtZSwgb3V0KSAoIWR0X3Byb3BlcnR5X3JlYWRfdTMyKG5wLCBwbmFtZSwgb3V0KSkNCj4+
PiArI2RlZmluZSBvZl9wcm9wZXJ0eV9yZWFkX2Jvb2wgZHRfcHJvcGVydHlfcmVhZF9ib29sDQo+
Pj4gKyNkZWZpbmUgb2ZfcGFyc2VfcGhhbmRsZV93aXRoX2FyZ3MgZHRfcGFyc2VfcGhhbmRsZV93
aXRoX2FyZ3MNCj4+IA0KPj4gR2l2ZW4gYWxsIHRoZSBjaGFuZ2VzIHRvIHRoZSBmaWxlIGJ5IHRo
ZSBwcmV2aW91cyBwYXRjaGVzIHdlIGFyZQ0KPj4gYmFzaWNhbGx5IGZ1bGx5IChvciBhbG1vc3Qg
ZnVsbHkpIGFkYXB0aW5nIHRoaXMgY29kZSB0byBYZW4uDQo+PiANCj4+IFNvIGF0IHRoYXQgcG9p
bnQgSSB3b25kZXIgaWYgd2Ugc2hvdWxkIGp1c3QgYXMgd2VsbCBtYWtlIHRoZXNlIGNoYW5nZXMN
Cj4+IChlLmcuIHMvb2ZfcGhhbmRsZV9hcmdzL2R0X3BoYW5kbGVfYXJncy9nKSB0byB0aGUgY29k
ZSB0b28uDQo+IA0KPiBZZXMsIHlvdSBhcmUgcmlnaHQgdGhhdCBpcyB3aHkgaW4gdGhlIGZpcnN0
IHZlcnNpb24gb2YgdGhlIHBhdGNoIEkgbW9kaWZ5aW5nIHRoZSBjb2RlIHRvIG1ha2UgaXQgZnVs
bHkgWEVOIGNvbXBhdGlibGUuIEJ1dCBpbiB0aGlzIHBhdGNoIHNlcmllcyBJIHVzZSB0aGUgTGlu
dXggY29tcGF0aWJpbGl0eSBmdW5jdGlvbiB0byBoYXZlIExpbnV4IGNvZGUgdW5tb2RpZmllZC5J
IGFsc28gcHJlZmVyIHRvIG1ha2UgY2hhbmdlcyAocy9vZl9waGFuZGxlX2FyZ3MvZHRfcGhhbmRs
ZV9hcmdzL2cpLg0KPiANCj4+IA0KPj4gSnVsaWVuLCBSYWh1bCwgd2hhdCBkbyB5b3UgZ3V5cyB0
aGluaz8gDQo+PiANCj4+IA0KPj4+ICsvKiBBbGlhcyB0byBYZW4gbG9jayBmdW5jdGlvbnMgKi8N
Cj4+IA0KPj4gSSB0aGluayB0aGlzIGRlc2VydmVzIGF0IGxlYXN0IG9uZSBzdGF0ZW1lbnQgdG8g
ZXhwbGFpbiB3aHkgaXQgaXMgT0suDQo+IA0KPiBBY2suIEkgV2lsbCBhZGQgdGhlIGNvbW1lbnQu
DQo+PiANCj4+IEFsc28sIGEgc2ltaWxhciBjb21tZW50IHRvIHRoZSBvbmUgYWJvdmU6IG1heWJl
IHdlIHNob3VsZCBhZGQgYSBjb3VwbGUNCj4+IG9mIG1vcmUgcHJlcGFyYXRpb24gcGF0Y2hlcyB0
byBzL211dGV4L3NwaW5sb2NrL2cgYW5kIGNoYW5nZSB0aGUgdGltZQ0KPj4gYW5kIGFsbG9jYXRp
b24gZnVuY3Rpb25zIHRvby4NCj4+IA0KPj4gDQo+Pj4gKyNkZWZpbmUgbXV0ZXggc3BpbmxvY2sN
Cj4+PiArI2RlZmluZSBtdXRleF9pbml0IHNwaW5fbG9ja19pbml0DQo+Pj4gKyNkZWZpbmUgbXV0
ZXhfbG9jayBzcGluX2xvY2sNCj4+PiArI2RlZmluZSBtdXRleF91bmxvY2sgc3Bpbl91bmxvY2sN
Cj4+PiArDQo+Pj4gKy8qIEFsaWFzIHRvIFhlbiB0aW1lIGZ1bmN0aW9ucyAqLw0KPj4+ICsjZGVm
aW5lIGt0aW1lX3Qgc190aW1lX3QNCj4+PiArI2RlZmluZSBrdGltZV9nZXQoKSAgICAgICAgICAg
ICAoTk9XKCkpDQo+Pj4gKyNkZWZpbmUga3RpbWVfYWRkX3VzKHQsaSkgICAgICAgKHQgKyBNSUNS
T1NFQ1MoaSkpDQo+Pj4gKyNkZWZpbmUga3RpbWVfY29tcGFyZSh0LGkpICAgICAgKHQgPiAoaSkp
DQo+Pj4gKw0KPj4+ICsvKiBBbGlhcyB0byBYZW4gYWxsb2NhdGlvbiBoZWxwZXJzICovDQo+Pj4g
KyNkZWZpbmUga3phbGxvYyhzaXplLCBmbGFncykgICAgX3h6YWxsb2Moc2l6ZSwgc2l6ZW9mKHZv
aWQgKikpDQo+Pj4gKyNkZWZpbmUga2ZyZWUgeGZyZWUNCj4+PiArI2RlZmluZSBkZXZtX2t6YWxs
b2MoZGV2LCBzaXplLCBmbGFncykgIF94emFsbG9jKHNpemUsIHNpemVvZih2b2lkICopKQ0KPj4+
ICsNCj4+PiArLyogRGV2aWNlIGxvZ2dlciBmdW5jdGlvbnMgKi8NCj4+PiArI2RlZmluZSBkZXZf
bmFtZShkZXYpIGR0X25vZGVfZnVsbF9uYW1lKGRldi0+b2Zfbm9kZSkNCj4+PiArI2RlZmluZSBk
ZXZfZGJnKGRldiwgZm10LCAuLi4pICAgICAgXA0KPj4+ICsgICAgcHJpbnRrKFhFTkxPR19ERUJV
RyAiU01NVXYzOiAlczogIiBmbXQsIGRldl9uYW1lKGRldiksICMjIF9fVkFfQVJHU19fKQ0KPj4+
ICsjZGVmaW5lIGRldl9ub3RpY2UoZGV2LCBmbXQsIC4uLikgICBcDQo+Pj4gKyAgICBwcmludGso
WEVOTE9HX0lORk8gIlNNTVV2MzogJXM6ICIgZm10LCBkZXZfbmFtZShkZXYpLCAjIyBfX1ZBX0FS
R1NfXykNCj4+PiArI2RlZmluZSBkZXZfd2FybihkZXYsIGZtdCwgLi4uKSAgICAgXA0KPj4+ICsg
ICAgcHJpbnRrKFhFTkxPR19XQVJOSU5HICJTTU1VdjM6ICVzOiAiIGZtdCwgZGV2X25hbWUoZGV2
KSwgIyMgX19WQV9BUkdTX18pDQo+Pj4gKyNkZWZpbmUgZGV2X2VycihkZXYsIGZtdCwgLi4uKSAg
ICAgIFwNCj4+PiArICAgIHByaW50ayhYRU5MT0dfRVJSICJTTU1VdjM6ICVzOiAiIGZtdCwgZGV2
X25hbWUoZGV2KSwgIyMgX19WQV9BUkdTX18pDQo+Pj4gKyNkZWZpbmUgZGV2X2luZm8oZGV2LCBm
bXQsIC4uLikgICAgIFwNCj4+PiArICAgIHByaW50ayhYRU5MT0dfSU5GTyAiU01NVXYzOiAlczog
IiBmbXQsIGRldl9uYW1lKGRldiksICMjIF9fVkFfQVJHU19fKQ0KPj4+ICsjZGVmaW5lIGRldl9l
cnJfcmF0ZWxpbWl0ZWQoZGV2LCBmbXQsIC4uLikgICAgICBcDQo+Pj4gKyAgICBwcmludGsoWEVO
TE9HX0VSUiAiU01NVXYzOiAlczogIiBmbXQsIGRldl9uYW1lKGRldiksICMjIF9fVkFfQVJHU19f
KQ0KPj4+ICsNCj4+PiArLyoNCj4+PiArICogUGVyaW9kaWNhbGx5IHBvbGwgYW4gYWRkcmVzcyBh
bmQgd2FpdCBiZXR3ZWVuIHJlYWRzIGluIHVzIHVudGlsIGENCj4+PiArICogY29uZGl0aW9uIGlz
IG1ldCBvciBhIHRpbWVvdXQgb2NjdXJzLg0KPj4gDQo+PiBJdCB3b3VsZCBiZSBnb29kIHRvIGFk
ZCBhIHN0YXRlbWVudCB0byBwb2ludCBvdXQgd2hhdCB0aGUgcmV0dXJuIHZhbHVlDQo+PiBpcyBn
b2luZyB0byBiZS4NCj4gDQo+IEFjay4gDQo+PiANCj4+IA0KPj4+ICsgKi8NCj4+PiArI2RlZmlu
ZSByZWFkeF9wb2xsX3RpbWVvdXQob3AsIGFkZHIsIHZhbCwgY29uZCwgc2xlZXBfdXMsIHRpbWVv
dXRfdXMpIFwNCj4+PiArKHsgXA0KPj4+ICsgICAgIHNfdGltZV90IGRlYWRsaW5lID0gTk9XKCkg
KyBNSUNST1NFQ1ModGltZW91dF91cyk7IFwNCj4+PiArICAgICBmb3IgKDs7KSB7IFwNCj4+PiAr
ICAgICAgICAodmFsKSA9IG9wKGFkZHIpOyBcDQo+Pj4gKyAgICAgICAgaWYgKGNvbmQpIFwNCj4+
PiArICAgICAgICAgICAgYnJlYWs7IFwNCj4+PiArICAgICAgICBpZiAoTk9XKCkgPiBkZWFkbGlu
ZSkgeyBcDQo+Pj4gKyAgICAgICAgICAgICh2YWwpID0gb3AoYWRkcik7IFwNCj4+PiArICAgICAg
ICAgICAgYnJlYWs7IFwNCj4+PiArICAgICAgICB9IFwNCj4+PiArICAgICAgICB1ZGVsYXkoc2xl
ZXBfdXMpOyBcDQo+Pj4gKyAgICAgfSBcDQo+Pj4gKyAgICAgKGNvbmQpID8gMCA6IC1FVElNRURP
VVQ7IFwNCj4+PiArfSkNCj4+PiArDQo+Pj4gKyNkZWZpbmUgcmVhZGxfcmVsYXhlZF9wb2xsX3Rp
bWVvdXQoYWRkciwgdmFsLCBjb25kLCBkZWxheV91cywgdGltZW91dF91cykgXA0KPj4+ICsgICAg
cmVhZHhfcG9sbF90aW1lb3V0KHJlYWRsX3JlbGF4ZWQsIGFkZHIsIHZhbCwgY29uZCwgZGVsYXlf
dXMsIHRpbWVvdXRfdXMpDQo+Pj4gKw0KPj4+ICsjZGVmaW5lIEZJRUxEX1BSRVAoX21hc2ssIF92
YWwpICAgICAgICAgXA0KPj4+ICsgICAgKCgodHlwZW9mKF9tYXNrKSkoX3ZhbCkgPDwgKF9fYnVp
bHRpbl9mZnNsbChfbWFzaykgLSAxKSkgJiAoX21hc2spKQ0KPj4gDQo+PiBMZXQncyBhZGQgdGhl
IGRlZmluaXRpb24gb2YgZmZzbGwgdG8gYml0b3BzLmgNCj4gDQo+IE9rLiANCj4+IA0KPj4gDQo+
Pj4gKyNkZWZpbmUgRklFTERfR0VUKF9tYXNrLCBfcmVnKSAgICAgICAgICBcDQo+Pj4gKyAgICAo
dHlwZW9mKF9tYXNrKSkoKChfcmVnKSAmIChfbWFzaykpID4+IChfX2J1aWx0aW5fZmZzbGwoX21h
c2spIC0gMSkpDQo+Pj4gKw0KPj4+ICsjZGVmaW5lIFdSSVRFX09OQ0UoeCwgdmFsKSAgICAgICAg
ICAgICAgICAgIFwNCj4+PiArZG8geyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBcDQo+Pj4gKyAgICAqKHZvbGF0aWxlIHR5cGVvZih4KSAqKSYoeCkgPSAodmFsKTsgICAg
XA0KPj4+ICt9IHdoaWxlICgwKQ0KPj4gDQo+PiBtYXliZSB3ZSBzaG91bGQgZGVmaW5lIHRoaXMg
aW4geGVuL2luY2x1ZGUveGVuL2xpYi5oDQo+IA0KPiBPay4gDQo+PiANCj4+IA0KPj4+ICsNCj4+
PiArLyogWGVuOiBTdHViIG91dCBETUEgZG9tYWluIHJlbGF0ZWQgZnVuY3Rpb25zICovDQo+Pj4g
KyNkZWZpbmUgaW9tbXVfZ2V0X2RtYV9jb29raWUoZG9tKSAwDQo+Pj4gKyNkZWZpbmUgaW9tbXVf
cHV0X2RtYV9jb29raWUoZG9tKQ0KPj4gDQo+PiBTaG91bGRuJ3Qgd2UgcmVtb3ZlIGFueSBjYWxs
IHRvIGlvbW11X2dldF9kbWFfY29va2llIGFuZA0KPj4gaW9tbXVfcHV0X2RtYV9jb29raWUgaW4g
b25lIG9mIHRoZSBwcmV2aW91cyBwYXRjaGVzPw0KPiANCj4gQWNrLiANCj4+IA0KPj4gDQo+Pj4g
Ky8qDQo+Pj4gKyAqIEhlbHBlcnMgZm9yIERNQSBhbGxvY2F0aW9uLiBKdXN0IHRoZSBmdW5jdGlv
biBuYW1lIGlzIHJldXNlZCBmb3INCj4+PiArICogcG9ydGluZyBjb2RlLCB0aGVzZSBhbGxvY2F0
aW9uIGFyZSBub3QgbWFuYWdlZCBhbGxvY2F0aW9ucw0KPj4+ICovDQo+Pj4gK3N0YXRpYyB2b2lk
ICpkbWFtX2FsbG9jX2NvaGVyZW50KHN0cnVjdCBkZXZpY2UgKmRldiwgc2l6ZV90IHNpemUsDQo+
Pj4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBhZGRyX3QgKmRtYV9oYW5kbGUs
IGdmcF90IGdmcCkNCj4+PiArew0KPj4+ICsgICAgdm9pZCAqdmFkZHI7DQo+Pj4gKyAgICB1bnNp
Z25lZCBsb25nIGFsaWdubWVudCA9IHNpemU7DQo+Pj4gKw0KPj4+ICsgICAgLyoNCj4+PiArICAg
ICAqIF94emFsbG9jIHJlcXVpcmVzIHRoYXQgdGhlIChhbGlnbiAmIChhbGlnbiAtMSkpID0gMC4g
TW9zdCBvZiB0aGUNCj4+PiArICAgICAqIGFsbG9jYXRpb25zIGluIFNNTVUgY29kZSBzaG91bGQg
c2VuZCB0aGUgcmlnaHQgdmFsdWUgZm9yIHNpemUuIEluDQo+Pj4gKyAgICAgKiBjYXNlIHRoaXMg
aXMgbm90IHRydWUgcHJpbnQgYSB3YXJuaW5nIGFuZCBhbGlnbiB0byB0aGUgc2l6ZSBvZiBhDQo+
Pj4gKyAgICAgKiAodm9pZCAqKQ0KPj4+ICsgICAgICovDQo+Pj4gKyAgICBpZiAoIHNpemUgJiAo
c2l6ZSAtIDEpICkNCj4+PiArICAgIHsNCj4+PiArICAgICAgICBwcmludGsoWEVOTE9HX1dBUk5J
TkcgIlNNTVV2MzogRml4aW5nIGFsaWdubWVudCBmb3IgdGhlIERNQSBidWZmZXJcbiIpOw0KPj4+
ICsgICAgICAgIGFsaWdubWVudCA9IHNpemVvZih2b2lkICopOw0KPj4+ICsgICAgfQ0KPj4+ICsN
Cj4+PiArICAgIHZhZGRyID0gX3h6YWxsb2Moc2l6ZSwgYWxpZ25tZW50KTsNCj4+PiArICAgIGlm
ICggIXZhZGRyICkNCj4+PiArICAgIHsNCj4+PiArICAgICAgICBwcmludGsoWEVOTE9HX0VSUiAi
U01NVXYzOiBETUEgYWxsb2NhdGlvbiBmYWlsZWRcbiIpOw0KPj4+ICsgICAgICAgIHJldHVybiBO
VUxMOw0KPj4+ICsgICAgfQ0KPj4+ICsNCj4+PiArICAgICpkbWFfaGFuZGxlID0gdmlydF90b19t
YWRkcih2YWRkcik7DQo+Pj4gKw0KPj4+ICsgICAgcmV0dXJuIHZhZGRyOw0KPj4+ICt9DQo+Pj4g
Kw0KPj4+ICsvKiBYZW46IFR5cGUgZGVmaW5pdGlvbnMgZm9yIGlvbW11X2RvbWFpbiAqLw0KPj4+
ICsjZGVmaW5lIElPTU1VX0RPTUFJTl9VTk1BTkFHRUQgMA0KPj4+ICsjZGVmaW5lIElPTU1VX0RP
TUFJTl9ETUEgMQ0KPj4+ICsjZGVmaW5lIElPTU1VX0RPTUFJTl9JREVOVElUWSAyDQo+Pj4gKw0K
Pj4+ICsvKiBYZW4gc3BlY2lmaWMgY29kZS4gKi8NCj4+PiArc3RydWN0IGlvbW11X2RvbWFpbiB7
DQo+Pj4gKyAgICAvKiBSdW50aW1lIFNNTVUgY29uZmlndXJhdGlvbiBmb3IgdGhpcyBpb21tdV9k
b21haW4gKi8NCj4+PiArICAgIGF0b21pY190IHJlZjsNCj4+PiArICAgIC8qDQo+Pj4gKyAgICAg
KiBVc2VkIHRvIGxpbmsgaW9tbXVfZG9tYWluIGNvbnRleHRzIGZvciBhIHNhbWUgZG9tYWluLg0K
Pj4+ICsgICAgICogVGhlcmUgaXMgYXQgbGVhc3Qgb25lIHBlci1TTU1VIHRvIHVzZWQgYnkgdGhl
IGRvbWFpbi4NCj4+PiArICAgICAqLw0KPj4+ICsgICAgc3RydWN0IGxpc3RfaGVhZCAgICBsaXN0
Ow0KPj4+ICt9Ow0KPj4+ICsNCj4+PiArLyogRGVzY3JpYmVzIGluZm9ybWF0aW9uIHJlcXVpcmVk
IGZvciBhIFhlbiBkb21haW4gKi8NCj4+PiArc3RydWN0IGFybV9zbW11X3hlbl9kb21haW4gew0K
Pj4+ICsgICAgc3BpbmxvY2tfdCAgICAgIGxvY2s7DQo+Pj4gKw0KPj4+ICsgICAgLyogTGlzdCBv
ZiBpb21tdSBkb21haW5zIGFzc29jaWF0ZWQgdG8gdGhpcyBkb21haW4gKi8NCj4+PiArICAgIHN0
cnVjdCBsaXN0X2hlYWQgICAgY29udGV4dHM7DQo+Pj4gK307DQo+Pj4gKw0KPj4+ICsvKg0KPj4+
ICsgKiBJbmZvcm1hdGlvbiBhYm91dCBlYWNoIGRldmljZSBzdG9yZWQgaW4gZGV2LT5hcmNoZGF0
YS5pb21tdQ0KPj4+ICsgKiBUaGUgZGV2LT5hcmNoZGF0YS5pb21tdSBzdG9yZXMgdGhlIGlvbW11
X2RvbWFpbiAocnVudGltZSBjb25maWd1cmF0aW9uIG9mDQo+Pj4gKyAqIHRoZSBTTU1VKS4NCj4+
PiArICovDQo+Pj4gK3N0cnVjdCBhcm1fc21tdV94ZW5fZGV2aWNlIHsNCj4+PiArICAgIHN0cnVj
dCBpb21tdV9kb21haW4gKmRvbWFpbjsNCj4+PiArfTsNCj4+IA0KPj4gRG8gd2UgbmVlZCBib3Ro
IHN0cnVjdCBhcm1fc21tdV94ZW5fZGV2aWNlIGFuZCBzdHJ1Y3QgaW9tbXVfZG9tYWluPw0KPj4g
DQo+IA0KPiBObyB3ZSBkb27igJl0IG5lZWQgYm90aC4gSSB3aWxsIHJlbW92ZSB0aGUgc3RydWN0
IGFybV9zbW11X3hlbl9kZXZpY2UuIA0KPiANCj4+IA0KPj4+ICsvKiBLZWVwIGEgbGlzdCBvZiBk
ZXZpY2VzIGFzc29jaWF0ZWQgd2l0aCB0aGlzIGRyaXZlciAqLw0KPj4+ICtzdGF0aWMgREVGSU5F
X1NQSU5MT0NLKGFybV9zbW11X2RldmljZXNfbG9jayk7DQo+Pj4gK3N0YXRpYyBMSVNUX0hFQUQo
YXJtX3NtbXVfZGV2aWNlcyk7DQo+Pj4gKw0KPj4+ICsNCj4+PiArc3RhdGljIGlubGluZSB2b2lk
ICpkZXZfaW9tbXVfcHJpdl9nZXQoc3RydWN0IGRldmljZSAqZGV2KQ0KPj4+ICt7DQo+Pj4gKyAg
ICBzdHJ1Y3QgaW9tbXVfZndzcGVjICpmd3NwZWMgPSBkZXZfaW9tbXVfZndzcGVjX2dldChkZXYp
Ow0KPj4+ICsNCj4+PiArICAgIHJldHVybiBmd3NwZWMgJiYgZndzcGVjLT5pb21tdV9wcml2ID8g
ZndzcGVjLT5pb21tdV9wcml2IDogTlVMTDsNCj4+PiArfQ0KPj4+ICsNCj4+PiArc3RhdGljIGlu
bGluZSB2b2lkIGRldl9pb21tdV9wcml2X3NldChzdHJ1Y3QgZGV2aWNlICpkZXYsIHZvaWQgKnBy
aXYpDQo+Pj4gK3sNCj4+PiArICAgIHN0cnVjdCBpb21tdV9md3NwZWMgKmZ3c3BlYyA9IGRldl9p
b21tdV9md3NwZWNfZ2V0KGRldik7DQo+Pj4gKw0KPj4+ICsgICAgZndzcGVjLT5pb21tdV9wcml2
ID0gcHJpdjsNCj4+PiArfQ0KPj4+ICsNCj4+PiAraW50IGR0X3Byb3BlcnR5X21hdGNoX3N0cmlu
Zyhjb25zdCBzdHJ1Y3QgZHRfZGV2aWNlX25vZGUgKm5wLA0KPj4+ICsgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIGNvbnN0IGNoYXIgKnByb3BuYW1lLCBjb25zdCBjaGFyICpzdHJpbmcpDQo+
Pj4gK3sNCj4+PiArICAgIGNvbnN0IHN0cnVjdCBkdF9wcm9wZXJ0eSAqZHRwcm9wID0gZHRfZmlu
ZF9wcm9wZXJ0eShucCwgcHJvcG5hbWUsIE5VTEwpOw0KPj4+ICsgICAgc2l6ZV90IGw7DQo+Pj4g
KyAgICBpbnQgaTsNCj4+PiArICAgIGNvbnN0IGNoYXIgKnAsICplbmQ7DQo+Pj4gKw0KPj4+ICsg
ICAgaWYgKCAhZHRwcm9wICkNCj4+PiArICAgICAgICByZXR1cm4gLUVJTlZBTDsNCj4+PiArDQo+
Pj4gKyAgICBpZiAoICFkdHByb3AtPnZhbHVlICkNCj4+PiArICAgICAgICByZXR1cm4gLUVOT0RB
VEE7DQo+Pj4gKw0KPj4+ICsgICAgcCA9IGR0cHJvcC0+dmFsdWU7DQo+Pj4gKyAgICBlbmQgPSBw
ICsgZHRwcm9wLT5sZW5ndGg7DQo+Pj4gKw0KPj4+ICsgICAgZm9yICggaSA9IDA7IHAgPCBlbmQ7
IGkrKywgcCArPSBsICkNCj4+PiArICAgIHsNCj4+PiArICAgICAgICBsID0gc3RybmxlbihwLCBl
bmQgLSBwKSArIDE7DQo+Pj4gKw0KPj4+ICsgICAgICAgIGlmICggcCArIGwgPiBlbmQgKQ0KPj4+
ICsgICAgICAgICAgICByZXR1cm4gLUVJTFNFUTsNCj4+PiArDQo+Pj4gKyAgICAgICAgaWYgKCBz
dHJjbXAoc3RyaW5nLCBwKSA9PSAwICkNCj4+PiArICAgICAgICAgICAgcmV0dXJuIGk7IC8qIEZv
dW5kIGl0OyByZXR1cm4gaW5kZXggKi8NCj4+PiArICAgIH0NCj4+PiArDQo+Pj4gKyAgICByZXR1
cm4gLUVOT0RBVEE7DQo+Pj4gK30NCj4+IA0KPj4gSSB0aGluayB5b3Ugc2hvdWxkIGVpdGhlciB1
c2UgZHRfcHJvcGVydHlfcmVhZF9zdHJpbmcgb3IgbW92ZSB0aGUNCj4+IGltcGxlbWVudGF0aW9u
IG9mIGR0X3Byb3BlcnR5X21hdGNoX3N0cmluZyB0byB4ZW4vY29tbW9uL2RldmljZV90cmVlLmMN
Cj4+IChpbiB3aGljaCBjYXNlIEkgc3VnZ2VzdCB0byBkbyBpdCBpbiBhIHNlcGFyYXRlIHBhdGNo
LikNCj4gDQo+IEkgd2lsbCBwcmVmZXIgdG8gbW92ZSB0aGUgY29kZSB0byB4ZW4vY29tbW9uL2Rl
dmljZV90cmVlLmMgYXMgaXQgbWlnaHQgaGVscCBpbiBmdXR1cmUgdG8gdXNlLg0KPj4gDQo+PiAN
Cj4+IEknZCBqdXN0IHVzZSBkdF9wcm9wZXJ0eV9yZWFkX3N0cmluZy4NCj4+IA0KPj4gDQo+PiAN
Cj4+PiArc3RhdGljIGludCBwbGF0Zm9ybV9nZXRfaXJxX2J5bmFtZV9vcHRpb25hbChzdHJ1Y3Qg
ZGV2aWNlICpkZXYsDQo+Pj4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgY29uc3QgY2hhciAqbmFtZSkNCj4+PiArew0KPj4+ICsgICAgaW50IGluZGV4LCByZXQ7
DQo+Pj4gKyAgICBzdHJ1Y3QgZHRfZGV2aWNlX25vZGUgKm5wICA9IGRldl90b19kdChkZXYpOw0K
Pj4+ICsNCj4+PiArICAgIGlmICggdW5saWtlbHkoIW5hbWUpICkNCj4+PiArICAgICAgICByZXR1
cm4gLUVJTlZBTDsNCj4+PiArDQo+Pj4gKyAgICBpbmRleCA9IGR0X3Byb3BlcnR5X21hdGNoX3N0
cmluZyhucCwgImludGVycnVwdC1uYW1lcyIsIG5hbWUpOw0KPj4+ICsgICAgaWYgKCBpbmRleCA8
IDAgKQ0KPj4+ICsgICAgew0KPj4+ICsgICAgICAgIGRldl9pbmZvKGRldiwgIklSUSAlcyBub3Qg
Zm91bmRcbiIsIG5hbWUpOw0KPj4+ICsgICAgICAgIHJldHVybiBpbmRleDsNCj4+PiArICAgIH0N
Cj4+PiANCj4+PiAtI2luY2x1ZGUgPGxpbnV4L2FjcGkuaD4NCj4+PiAtI2luY2x1ZGUgPGxpbnV4
L2FjcGlfaW9ydC5oPg0KPj4+IC0jaW5jbHVkZSA8bGludXgvYml0ZmllbGQuaD4NCj4+PiAtI2lu
Y2x1ZGUgPGxpbnV4L2JpdG9wcy5oPg0KPj4+IC0jaW5jbHVkZSA8bGludXgvY3Jhc2hfZHVtcC5o
Pg0KPj4+IC0jaW5jbHVkZSA8bGludXgvZGVsYXkuaD4NCj4+PiAtI2luY2x1ZGUgPGxpbnV4L2Rt
YS1pb21tdS5oPg0KPj4+IC0jaW5jbHVkZSA8bGludXgvZXJyLmg+DQo+Pj4gLSNpbmNsdWRlIDxs
aW51eC9pbnRlcnJ1cHQuaD4NCj4+PiAtI2luY2x1ZGUgPGxpbnV4L2lvLXBndGFibGUuaD4NCj4+
PiAtI2luY2x1ZGUgPGxpbnV4L2lvbW11Lmg+DQo+Pj4gLSNpbmNsdWRlIDxsaW51eC9pb3BvbGwu
aD4NCj4+PiAtI2luY2x1ZGUgPGxpbnV4L21vZHVsZS5oPg0KPj4+IC0jaW5jbHVkZSA8bGludXgv
bXNpLmg+DQo+Pj4gLSNpbmNsdWRlIDxsaW51eC9vZi5oPg0KPj4+IC0jaW5jbHVkZSA8bGludXgv
b2ZfYWRkcmVzcy5oPg0KPj4+IC0jaW5jbHVkZSA8bGludXgvb2ZfaW9tbXUuaD4NCj4+PiAtI2lu
Y2x1ZGUgPGxpbnV4L29mX3BsYXRmb3JtLmg+DQo+Pj4gLSNpbmNsdWRlIDxsaW51eC9wY2kuaD4N
Cj4+PiAtI2luY2x1ZGUgPGxpbnV4L3BjaS1hdHMuaD4NCj4+PiAtI2luY2x1ZGUgPGxpbnV4L3Bs
YXRmb3JtX2RldmljZS5oPg0KPj4+IC0NCj4+PiAtI2luY2x1ZGUgPGxpbnV4L2FtYmEvYnVzLmg+
DQo+Pj4gKyAgICByZXQgPSBwbGF0Zm9ybV9nZXRfaXJxKG5wLCBpbmRleCk7DQo+Pj4gKyAgICBp
ZiAoIHJldCA8IDAgKQ0KPj4+ICsgICAgew0KPj4+ICsgICAgICAgIGRldl9lcnIoZGV2LCAiZmFp
bGVkIHRvIGdldCBpcnEgaW5kZXggJWRcbiIsIGluZGV4KTsNCj4+PiArICAgICAgICByZXR1cm4g
LUVOT0RFVjsNCj4+PiArICAgIH0NCj4+PiArDQo+Pj4gKyAgICByZXR1cm4gcmV0Ow0KPj4+ICt9
DQo+Pj4gKw0KPj4+ICsvKiBTdGFydCBvZiBMaW51eCBTTU1VdjMgY29kZSAqLw0KPj4+IA0KPj4+
IC8qIE1NSU8gcmVnaXN0ZXJzICovDQo+Pj4gI2RlZmluZSBBUk1fU01NVV9JRFIwCQkJMHgwDQo+
Pj4gQEAgLTUwNyw2ICs3NTEsNyBAQCBzdHJ1Y3QgYXJtX3NtbXVfczJfY2ZnIHsNCj4+PiAJdTE2
CQkJCXZtaWQ7DQo+Pj4gCXU2NAkJCQl2dHRicjsNCj4+PiAJdTY0CQkJCXZ0Y3I7DQo+Pj4gKwlz
dHJ1Y3QgZG9tYWluCQkqZG9tYWluOw0KPj4+IH07DQo+Pj4gDQo+Pj4gc3RydWN0IGFybV9zbW11
X3N0cnRhYl9jZmcgew0KPj4+IEBAIC01NjcsOCArODEyLDEzIEBAIHN0cnVjdCBhcm1fc21tdV9k
ZXZpY2Ugew0KPj4+IA0KPj4+IAlzdHJ1Y3QgYXJtX3NtbXVfc3RydGFiX2NmZwlzdHJ0YWJfY2Zn
Ow0KPj4+IA0KPj4+IC0JLyogSU9NTVUgY29yZSBjb2RlIGhhbmRsZSAqLw0KPj4+IC0Jc3RydWN0
IGlvbW11X2RldmljZQkJaW9tbXU7DQo+Pj4gKwkvKiBOZWVkIHRvIGtlZXAgYSBsaXN0IG9mIFNN
TVUgZGV2aWNlcyAqLw0KPj4+ICsJc3RydWN0IGxpc3RfaGVhZAkJZGV2aWNlczsNCj4+PiArDQo+
Pj4gKwkvKiBUYXNrbGV0cyBmb3IgaGFuZGxpbmcgZXZ0cy9mYXVsdHMgYW5kIHBjaSBwYWdlIHJl
cXVlc3QgSVJRcyovDQo+Pj4gKwlzdHJ1Y3QgdGFza2xldAkJZXZ0cV9pcnFfdGFza2xldDsNCj4+
PiArCXN0cnVjdCB0YXNrbGV0CQlwcmlxX2lycV90YXNrbGV0Ow0KPj4+ICsJc3RydWN0IHRhc2ts
ZXQJCWNvbWJpbmVkX2lycV90YXNrbGV0Ow0KPj4+IH07DQo+Pj4gDQo+Pj4gLyogU01NVSBwcml2
YXRlIGRhdGEgZm9yIGVhY2ggbWFzdGVyICovDQo+Pj4gQEAgLTExMTAsNyArMTM2MCw3IEBAIHN0
YXRpYyBpbnQgYXJtX3NtbXVfaW5pdF9sMl9zdHJ0YWIoc3RydWN0IGFybV9zbW11X2RldmljZSAq
c21tdSwgdTMyIHNpZCkNCj4+PiB9DQo+Pj4gDQo+Pj4gLyogSVJRIGFuZCBldmVudCBoYW5kbGVy
cyAqLw0KPj4+IC1zdGF0aWMgaXJxcmV0dXJuX3QgYXJtX3NtbXVfZXZ0cV90aHJlYWQoaW50IGly
cSwgdm9pZCAqZGV2KQ0KPj4+ICtzdGF0aWMgdm9pZCBhcm1fc21tdV9ldnRxX3RocmVhZCh2b2lk
ICpkZXYpDQo+PiANCj4+IEkgdGhpbmsgd2Ugc2hvdWxkbid0IGNhbGwgaXQgYSB0aHJlYWQgZ2l2
ZW4gdGhhdCdzIGEgdGFza2xldC4gV2Ugc2hvdWxkDQo+PiByZW5hbWUgdGhlIGZ1bmN0aW9uIG9y
IGl0IHdpbGwgYmUgY29uZnVzaW5nDQo+IA0KPiBPay4gDQo+PiANCj4+PiB7DQo+Pj4gCWludCBp
Ow0KPj4+IAlzdHJ1Y3QgYXJtX3NtbXVfZGV2aWNlICpzbW11ID0gZGV2Ow0KPj4+IEBAIC0xMTQw
LDcgKzEzOTAsNiBAQCBzdGF0aWMgaXJxcmV0dXJuX3QgYXJtX3NtbXVfZXZ0cV90aHJlYWQoaW50
IGlycSwgdm9pZCAqZGV2KQ0KPj4+IAkvKiBTeW5jIG91ciBvdmVyZmxvdyBmbGFnLCBhcyB3ZSBi
ZWxpZXZlIHdlJ3JlIHVwIHRvIHNwZWVkICovDQo+Pj4gCWxscS0+Y29ucyA9IFFfT1ZGKGxscS0+
cHJvZCkgfCBRX1dSUChsbHEsIGxscS0+Y29ucykgfA0KPj4+IAkJICAgIFFfSURYKGxscSwgbGxx
LT5jb25zKTsNCj4+PiAtCXJldHVybiBJUlFfSEFORExFRDsNCj4+PiB9DQo+Pj4gDQo+Pj4gc3Rh
dGljIHZvaWQgYXJtX3NtbXVfaGFuZGxlX3BwcihzdHJ1Y3QgYXJtX3NtbXVfZGV2aWNlICpzbW11
LCB1NjQgKmV2dCkNCj4+PiBAQCAtMTE4MSw3ICsxNDMwLDcgQEAgc3RhdGljIHZvaWQgYXJtX3Nt
bXVfaGFuZGxlX3BwcihzdHJ1Y3QgYXJtX3NtbXVfZGV2aWNlICpzbW11LCB1NjQgKmV2dCkNCj4+
PiAJfQ0KPj4+IH0NCj4+PiANCj4+PiAtc3RhdGljIGlycXJldHVybl90IGFybV9zbW11X3ByaXFf
dGhyZWFkKGludCBpcnEsIHZvaWQgKmRldikNCj4+PiArc3RhdGljIHZvaWQgYXJtX3NtbXVfcHJp
cV90aHJlYWQodm9pZCAqZGV2KQ0KPj4gDQo+PiBzYW1lIGhlcmUNCj4gDQo+IE9rLiANCj4+IA0K
Pj4gDQo+Pj4gew0KPj4+IAlzdHJ1Y3QgYXJtX3NtbXVfZGV2aWNlICpzbW11ID0gZGV2Ow0KPj4+
IAlzdHJ1Y3QgYXJtX3NtbXVfcXVldWUgKnEgPSAmc21tdS0+cHJpcS5xOw0KPj4+IEBAIC0xMjAw
LDEyICsxNDQ5LDEyIEBAIHN0YXRpYyBpcnFyZXR1cm5fdCBhcm1fc21tdV9wcmlxX3RocmVhZChp
bnQgaXJxLCB2b2lkICpkZXYpDQo+Pj4gCWxscS0+Y29ucyA9IFFfT1ZGKGxscS0+cHJvZCkgfCBR
X1dSUChsbHEsIGxscS0+Y29ucykgfA0KPj4+IAkJICAgICAgUV9JRFgobGxxLCBsbHEtPmNvbnMp
Ow0KPj4+IAlxdWV1ZV9zeW5jX2NvbnNfb3V0KHEpOw0KPj4+IC0JcmV0dXJuIElSUV9IQU5ETEVE
Ow0KPj4+IH0NCj4+PiANCj4+PiBzdGF0aWMgaW50IGFybV9zbW11X2RldmljZV9kaXNhYmxlKHN0
cnVjdCBhcm1fc21tdV9kZXZpY2UgKnNtbXUpOw0KPj4+IA0KPj4+IC1zdGF0aWMgaXJxcmV0dXJu
X3QgYXJtX3NtbXVfZ2Vycm9yX2hhbmRsZXIoaW50IGlycSwgdm9pZCAqZGV2KQ0KPj4+ICtzdGF0
aWMgdm9pZCBhcm1fc21tdV9nZXJyb3JfaGFuZGxlcihpbnQgaXJxLCB2b2lkICpkZXYsDQo+Pj4g
KwkJCQlzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykNCj4+PiB7DQo+Pj4gCXUzMiBnZXJyb3Is
IGdlcnJvcm4sIGFjdGl2ZTsNCj4+PiAJc3RydWN0IGFybV9zbW11X2RldmljZSAqc21tdSA9IGRl
djsNCj4+PiBAQCAtMTIxNSw3ICsxNDY0LDcgQEAgc3RhdGljIGlycXJldHVybl90IGFybV9zbW11
X2dlcnJvcl9oYW5kbGVyKGludCBpcnEsIHZvaWQgKmRldikNCj4+PiANCj4+PiAJYWN0aXZlID0g
Z2Vycm9yIF4gZ2Vycm9ybjsNCj4+PiAJaWYgKCEoYWN0aXZlICYgR0VSUk9SX0VSUl9NQVNLKSkN
Cj4+PiAtCQlyZXR1cm4gSVJRX05PTkU7IC8qIE5vIGVycm9ycyBwZW5kaW5nICovDQo+Pj4gKwkJ
cmV0dXJuOyAvKiBObyBlcnJvcnMgcGVuZGluZyAqLw0KPj4+IA0KPj4+IAlkZXZfd2FybihzbW11
LT5kZXYsDQo+Pj4gCQkgInVuZXhwZWN0ZWQgZ2xvYmFsIGVycm9yIHJlcG9ydGVkICgweCUwOHgp
LCB0aGlzIGNvdWxkIGJlIHNlcmlvdXNcbiIsDQo+Pj4gQEAgLTEyNDgsMjYgKzE0OTcsNDIgQEAg
c3RhdGljIGlycXJldHVybl90IGFybV9zbW11X2dlcnJvcl9oYW5kbGVyKGludCBpcnEsIHZvaWQg
KmRldikNCj4+PiAJCWFybV9zbW11X2NtZHFfc2tpcF9lcnIoc21tdSk7DQo+Pj4gDQo+Pj4gCXdy
aXRlbChnZXJyb3IsIHNtbXUtPmJhc2UgKyBBUk1fU01NVV9HRVJST1JOKTsNCj4+PiAtCXJldHVy
biBJUlFfSEFORExFRDsNCj4+PiB9DQo+Pj4gDQo+Pj4gLXN0YXRpYyBpcnFyZXR1cm5fdCBhcm1f
c21tdV9jb21iaW5lZF9pcnFfdGhyZWFkKGludCBpcnEsIHZvaWQgKmRldikNCj4+PiArc3RhdGlj
IHZvaWQgYXJtX3NtbXVfY29tYmluZWRfaXJxX2hhbmRsZXIoaW50IGlycSwgdm9pZCAqZGV2LA0K
Pj4+ICsJCQkJc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpDQo+Pj4gK3sNCj4+PiArCXN0cnVj
dCBhcm1fc21tdV9kZXZpY2UgKnNtbXUgPSAoc3RydWN0IGFybV9zbW11X2RldmljZSAqKWRldjsN
Cj4+PiArDQo+Pj4gKwlhcm1fc21tdV9nZXJyb3JfaGFuZGxlcihpcnEsIGRldiwgcmVncyk7DQo+
Pj4gKw0KPj4+ICsJdGFza2xldF9zY2hlZHVsZSgmKHNtbXUtPmNvbWJpbmVkX2lycV90YXNrbGV0
KSk7DQo+Pj4gK30NCj4+PiArDQo+Pj4gK3N0YXRpYyB2b2lkIGFybV9zbW11X2NvbWJpbmVkX2ly
cV90aHJlYWQodm9pZCAqZGV2KQ0KPj4+IHsNCj4+PiAJc3RydWN0IGFybV9zbW11X2RldmljZSAq
c21tdSA9IGRldjsNCj4+PiANCj4+PiAtCWFybV9zbW11X2V2dHFfdGhyZWFkKGlycSwgZGV2KTsN
Cj4+PiArCWFybV9zbW11X2V2dHFfdGhyZWFkKGRldik7DQo+Pj4gCWlmIChzbW11LT5mZWF0dXJl
cyAmIEFSTV9TTU1VX0ZFQVRfUFJJKQ0KPj4+IC0JCWFybV9zbW11X3ByaXFfdGhyZWFkKGlycSwg
ZGV2KTsNCj4+PiAtDQo+Pj4gLQlyZXR1cm4gSVJRX0hBTkRMRUQ7DQo+Pj4gKwkJYXJtX3NtbXVf
cHJpcV90aHJlYWQoZGV2KTsNCj4+PiB9DQo+Pj4gDQo+Pj4gLXN0YXRpYyBpcnFyZXR1cm5fdCBh
cm1fc21tdV9jb21iaW5lZF9pcnFfaGFuZGxlcihpbnQgaXJxLCB2b2lkICpkZXYpDQo+Pj4gK3N0
YXRpYyB2b2lkIGFybV9zbW11X2V2dHFfaXJxX3Rhc2tsZXQoaW50IGlycSwgdm9pZCAqZGV2LA0K
Pj4+ICsJCQkJc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpDQo+Pj4gew0KPj4+IC0JYXJtX3Nt
bXVfZ2Vycm9yX2hhbmRsZXIoaXJxLCBkZXYpOw0KPj4+IC0JcmV0dXJuIElSUV9XQUtFX1RIUkVB
RDsNCj4+PiArCXN0cnVjdCBhcm1fc21tdV9kZXZpY2UgKnNtbXUgPSAoc3RydWN0IGFybV9zbW11
X2RldmljZSAqKWRldjsNCj4+PiArDQo+Pj4gKwl0YXNrbGV0X3NjaGVkdWxlKCYoc21tdS0+ZXZ0
cV9pcnFfdGFza2xldCkpOw0KPj4+IH0NCj4+PiANCj4+PiArc3RhdGljIHZvaWQgYXJtX3NtbXVf
cHJpcV9pcnFfdGFza2xldChpbnQgaXJxLCB2b2lkICpkZXYsDQo+Pj4gKwkJCQlzdHJ1Y3QgY3B1
X3VzZXJfcmVncyAqcmVncykNCj4+PiArew0KPj4+ICsJc3RydWN0IGFybV9zbW11X2RldmljZSAq
c21tdSA9IChzdHJ1Y3QgYXJtX3NtbXVfZGV2aWNlICopZGV2Ow0KPj4+ICsNCj4+PiArCXRhc2ts
ZXRfc2NoZWR1bGUoJihzbW11LT5wcmlxX2lycV90YXNrbGV0KSk7DQo+Pj4gK30NCj4+PiANCj4+
PiAvKiBJT19QR1RBQkxFIEFQSSAqLw0KPj4+IHN0YXRpYyB2b2lkIGFybV9zbW11X3RsYl9pbnZf
Y29udGV4dCh2b2lkICpjb29raWUpDQo+Pj4gQEAgLTEzNTQsMjcgKzE2MTksNjkgQEAgc3RhdGlj
IHZvaWQgYXJtX3NtbXVfZG9tYWluX2ZyZWUoc3RydWN0IGlvbW11X2RvbWFpbiAqZG9tYWluKQ0K
Pj4+IH0NCj4+PiANCj4+PiBzdGF0aWMgaW50IGFybV9zbW11X2RvbWFpbl9maW5hbGlzZV9zMihz
dHJ1Y3QgYXJtX3NtbXVfZG9tYWluICpzbW11X2RvbWFpbiwNCj4+PiAtCQkJCSAgICAgICBzdHJ1
Y3QgYXJtX3NtbXVfbWFzdGVyICptYXN0ZXIsDQo+Pj4gLQkJCQkgICAgICAgc3RydWN0IGlvX3Bn
dGFibGVfY2ZnICpwZ3RibF9jZmcpDQo+Pj4gKwkJCQkgICAgICAgc3RydWN0IGFybV9zbW11X21h
c3RlciAqbWFzdGVyKQ0KPj4+IHsNCj4+PiAJaW50IHZtaWQ7DQo+Pj4gKwl1NjQgcmVnOw0KPj4+
IAlzdHJ1Y3QgYXJtX3NtbXVfZGV2aWNlICpzbW11ID0gc21tdV9kb21haW4tPnNtbXU7DQo+Pj4g
CXN0cnVjdCBhcm1fc21tdV9zMl9jZmcgKmNmZyA9ICZzbW11X2RvbWFpbi0+czJfY2ZnOw0KPj4+
IA0KPj4+ICsJLyogVlRDUiAqLw0KPj4+ICsJcmVnID0gVlRDUl9SRVMxIHwgVlRDUl9TSDBfSVMg
fCBWVENSX0lSR04wX1dCV0EgfCBWVENSX09SR04wX1dCV0E7DQo+Pj4gKw0KPj4+ICsJc3dpdGNo
IChQQUdFX1NJWkUpIHsNCj4+PiArCWNhc2UgU1pfNEs6DQo+Pj4gKwkJcmVnIHw9IFZUQ1JfVEcw
XzRLOw0KPj4+ICsJCWJyZWFrOw0KPj4+ICsJY2FzZSBTWl8xNks6DQo+Pj4gKwkJcmVnIHw9IFZU
Q1JfVEcwXzE2SzsNCj4+PiArCQlicmVhazsNCj4+PiArCWNhc2UgU1pfNjRLOg0KPj4+ICsJCXJl
ZyB8PSBWVENSX1RHMF80SzsNCj4+PiArCQlicmVhazsNCj4+PiArCX0NCj4+PiArDQo+Pj4gKwlz
d2l0Y2ggKHNtbXUtPm9hcykgew0KPj4+ICsJY2FzZSAzMjoNCj4+PiArCQlyZWcgfD0gVlRDUl9Q
UyhfQUMoMHgwLFVMTCkpOw0KPj4+ICsJCWJyZWFrOw0KPj4+ICsJY2FzZSAzNjoNCj4+PiArCQly
ZWcgfD0gVlRDUl9QUyhfQUMoMHgxLFVMTCkpOw0KPj4+ICsJCWJyZWFrOw0KPj4+ICsJY2FzZSA0
MDoNCj4+PiArCQlyZWcgfD0gVlRDUl9QUyhfQUMoMHgyLFVMTCkpOw0KPj4+ICsJCWJyZWFrOw0K
Pj4+ICsJY2FzZSA0MjoNCj4+PiArCQlyZWcgfD0gVlRDUl9QUyhfQUMoMHgzLFVMTCkpOw0KPj4+
ICsJCWJyZWFrOw0KPj4+ICsJY2FzZSA0NDoNCj4+PiArCQlyZWcgfD0gVlRDUl9QUyhfQUMoMHg0
LFVMTCkpOw0KPj4+ICsJCWJyZWFrOw0KPj4+ICsJCWNhc2UgNDg6DQo+Pj4gKwkJcmVnIHw9IFZU
Q1JfUFMoX0FDKDB4NSxVTEwpKTsNCj4+PiArCQlicmVhazsNCj4+PiArCWNhc2UgNTI6DQo+Pj4g
KwkJcmVnIHw9IFZUQ1JfUFMoX0FDKDB4NixVTEwpKTsNCj4+PiArCQlicmVhazsNCj4+PiArCX0N
Cj4+PiArDQo+Pj4gKwlyZWcgfD0gVlRDUl9UMFNaKDY0VUxMIC0gc21tdS0+aWFzKTsNCj4+PiAr
CXJlZyB8PSBWVENSX1NMMCgweDIpOw0KPj4+ICsJcmVnIHw9IFZUQ1JfVlM7DQo+Pj4gKw0KPj4+
ICsJY2ZnLT52dGNyICAgPSByZWc7DQo+Pj4gKw0KPj4+IAl2bWlkID0gYXJtX3NtbXVfYml0bWFw
X2FsbG9jKHNtbXUtPnZtaWRfbWFwLCBzbW11LT52bWlkX2JpdHMpOw0KPj4+IAlpZiAodm1pZCA8
IDApDQo+Pj4gCQlyZXR1cm4gdm1pZDsNCj4+PiArCWNmZy0+dm1pZCAgPSAodTE2KXZtaWQ7DQo+
Pj4gKw0KPj4+ICsJY2ZnLT52dHRiciAgPSBwYWdlX3RvX21hZGRyKGNmZy0+ZG9tYWluLT5hcmNo
LnAybS5yb290KTsNCj4+PiArDQo+Pj4gKwlwcmludGsoWEVOTE9HX0RFQlVHDQo+Pj4gKwkJICAg
IlNNTVV2MzogZCV1OiB2bWlkIDB4JXggdnRjciAweCUiUFJJcGFkZHIiIHAybWFkZHIgMHglIlBS
SXBhZGRyIlxuIiwNCj4+PiArCQkgICBjZmctPmRvbWFpbi0+ZG9tYWluX2lkLCBjZmctPnZtaWQs
IGNmZy0+dnRjciwgY2ZnLT52dHRicik7DQo+Pj4gDQo+Pj4gLQl2dGNyID0gJnBndGJsX2NmZy0+
YXJtX2xwYWVfczJfY2ZnLnZ0Y3I7DQo+Pj4gLQljZmctPnZtaWQJPSAodTE2KXZtaWQ7DQo+Pj4g
LQljZmctPnZ0dGJyCT0gcGd0YmxfY2ZnLT5hcm1fbHBhZV9zMl9jZmcudnR0YnI7DQo+Pj4gLQlj
ZmctPnZ0Y3IJPSBGSUVMRF9QUkVQKFNUUlRBQl9TVEVfMl9WVENSX1MyVDBTWiwgdnRjci0+dHN6
KSB8DQo+Pj4gLQkJCSAgRklFTERfUFJFUChTVFJUQUJfU1RFXzJfVlRDUl9TMlNMMCwgdnRjci0+
c2wpIHwNCj4+PiAtCQkJICBGSUVMRF9QUkVQKFNUUlRBQl9TVEVfMl9WVENSX1MySVIwLCB2dGNy
LT5pcmduKSB8DQo+Pj4gLQkJCSAgRklFTERfUFJFUChTVFJUQUJfU1RFXzJfVlRDUl9TMk9SMCwg
dnRjci0+b3JnbikgfA0KPj4+IC0JCQkgIEZJRUxEX1BSRVAoU1RSVEFCX1NURV8yX1ZUQ1JfUzJT
SDAsIHZ0Y3ItPnNoKSB8DQo+Pj4gLQkJCSAgRklFTERfUFJFUChTVFJUQUJfU1RFXzJfVlRDUl9T
MlRHLCB2dGNyLT50ZykgfA0KPj4+IC0JCQkgIEZJRUxEX1BSRVAoU1RSVEFCX1NURV8yX1ZUQ1Jf
UzJQUywgdnRjci0+cHMpOw0KPj4+IAlyZXR1cm4gMDsNCj4+PiB9DQo+Pj4gDQo+Pj4gQEAgLTEz
ODIsMjggKzE2ODksMTIgQEAgc3RhdGljIGludCBhcm1fc21tdV9kb21haW5fZmluYWxpc2Uoc3Ry
dWN0IGlvbW11X2RvbWFpbiAqZG9tYWluLA0KPj4+IAkJCQkgICAgc3RydWN0IGFybV9zbW11X21h
c3RlciAqbWFzdGVyKQ0KPj4+IHsNCj4+PiAJaW50IHJldDsNCj4+PiAtCXVuc2lnbmVkIGxvbmcg
aWFzLCBvYXM7DQo+Pj4gLQlpbnQgKCpmaW5hbGlzZV9zdGFnZV9mbikoc3RydWN0IGFybV9zbW11
X2RvbWFpbiAqLA0KPj4+IC0JCQkJIHN0cnVjdCBhcm1fc21tdV9tYXN0ZXIgKiwNCj4+PiAtCQkJ
CSBzdHJ1Y3QgaW9fcGd0YWJsZV9jZmcgKik7DQo+Pj4gCXN0cnVjdCBhcm1fc21tdV9kb21haW4g
KnNtbXVfZG9tYWluID0gdG9fc21tdV9kb21haW4oZG9tYWluKTsNCj4+PiAtCXN0cnVjdCBhcm1f
c21tdV9kZXZpY2UgKnNtbXUgPSBzbW11X2RvbWFpbi0+c21tdTsNCj4+PiANCj4+PiAJLyogUmVz
dHJpY3QgdGhlIHN0YWdlIHRvIHdoYXQgd2UgY2FuIGFjdHVhbGx5IHN1cHBvcnQgKi8NCj4+PiAJ
c21tdV9kb21haW4tPnN0YWdlID0gQVJNX1NNTVVfRE9NQUlOX1MyOw0KPj4+IA0KPj4+IC0Jc3dp
dGNoIChzbW11X2RvbWFpbi0+c3RhZ2UpIHsNCj4+PiAtCWNhc2UgQVJNX1NNTVVfRE9NQUlOX05F
U1RFRDoNCj4+PiAtCWNhc2UgQVJNX1NNTVVfRE9NQUlOX1MyOg0KPj4+IC0JCWlhcyA9IHNtbXUt
PmlhczsNCj4+PiAtCQlvYXMgPSBzbW11LT5vYXM7DQo+Pj4gLQkJZmluYWxpc2Vfc3RhZ2VfZm4g
PSBhcm1fc21tdV9kb21haW5fZmluYWxpc2VfczI7DQo+Pj4gLQkJYnJlYWs7DQo+Pj4gLQlkZWZh
dWx0Og0KPj4+IC0JCXJldHVybiAtRUlOVkFMOw0KPj4+IC0JfQ0KPj4+IC0NCj4+PiAtCXJldCA9
IGZpbmFsaXNlX3N0YWdlX2ZuKHNtbXVfZG9tYWluLCBtYXN0ZXIsICZwZ3RibF9jZmcpOw0KPj4+
ICsJcmV0ID0gYXJtX3NtbXVfZG9tYWluX2ZpbmFsaXNlX3MyKHNtbXVfZG9tYWluLCBtYXN0ZXIp
Ow0KPj4+IAlpZiAocmV0IDwgMCkgew0KPj4+IAkJcmV0dXJuIHJldDsNCj4+PiAJfQ0KPj4+IEBA
IC0xNTUzLDcgKzE4NDQsOCBAQCBzdGF0aWMgaW50IGFybV9zbW11X2luaXRfb25lX3F1ZXVlKHN0
cnVjdCBhcm1fc21tdV9kZXZpY2UgKnNtbXUsDQo+Pj4gCQlyZXR1cm4gLUVOT01FTTsNCj4+PiAJ
fQ0KPj4+IA0KPj4+IC0JaWYgKCFXQVJOX09OKHEtPmJhc2VfZG1hICYgKHFzeiAtIDEpKSkgew0K
Pj4+ICsJV0FSTl9PTihxLT5iYXNlX2RtYSAmIChxc3ogLSAxKSk7DQo+Pj4gKwlpZiAodW5saWtl
bHkocS0+YmFzZV9kbWEgJiAocXN6IC0gMSkpKSB7DQo+Pj4gCQlkZXZfaW5mbyhzbW11LT5kZXYs
ICJhbGxvY2F0ZWQgJXUgZW50cmllcyBmb3IgJXNcbiIsDQo+Pj4gCQkJIDEgPDwgcS0+bGxxLm1h
eF9uX3NoaWZ0LCBuYW1lKTsNCj4+IA0KPj4gV2UgZG9uJ3QgbmVlZCBib3RoIHRoZSBXQVJOSU5H
IGFuZCB0aGUgZGV2X2luZm8uIHlvdSBjb3VsZCB0dXJuIHRoZQ0KPj4gZGV2X3dhcm4gLyBYRU5M
T0dfV0FSTklORy4NCj4+IA0KPiBBY2suDQo+IA0KPj4gDQo+Pj4gCX0NCj4+PiBAQCAtMTc1OCw5
ICsyMDUwLDcgQEAgc3RhdGljIHZvaWQgYXJtX3NtbXVfc2V0dXBfdW5pcXVlX2lycXMoc3RydWN0
IGFybV9zbW11X2RldmljZSAqc21tdSkNCj4+PiAJLyogUmVxdWVzdCBpbnRlcnJ1cHQgbGluZXMg
Ki8NCj4+PiAJaXJxID0gc21tdS0+ZXZ0cS5xLmlycTsNCj4+PiAJaWYgKGlycSkgew0KPj4+IC0J
CXJldCA9IGRldm1fcmVxdWVzdF90aHJlYWRlZF9pcnEoc21tdS0+ZGV2LCBpcnEsIE5VTEwsDQo+
Pj4gLQkJCQkJCWFybV9zbW11X2V2dHFfdGhyZWFkLA0KPj4+IC0JCQkJCQlJUlFGX09ORVNIT1Qs
DQo+Pj4gKwkJcmV0ID0gcmVxdWVzdF9pcnEoaXJxLCAwLCBhcm1fc21tdV9ldnRxX2lycV90YXNr
bGV0LA0KPj4+IAkJCQkJCSJhcm0tc21tdS12My1ldnRxIiwgc21tdSk7DQo+Pj4gCQlpZiAocmV0
IDwgMCkNCj4+PiAJCQlkZXZfd2FybihzbW11LT5kZXYsICJmYWlsZWQgdG8gZW5hYmxlIGV2dHEg
aXJxXG4iKTsNCj4+PiBAQCAtMTc3MCw4ICsyMDYwLDggQEAgc3RhdGljIHZvaWQgYXJtX3NtbXVf
c2V0dXBfdW5pcXVlX2lycXMoc3RydWN0IGFybV9zbW11X2RldmljZSAqc21tdSkNCj4+PiANCj4+
PiAJaXJxID0gc21tdS0+Z2Vycl9pcnE7DQo+Pj4gCWlmIChpcnEpIHsNCj4+PiAtCQlyZXQgPSBk
ZXZtX3JlcXVlc3RfaXJxKHNtbXUtPmRldiwgaXJxLCBhcm1fc21tdV9nZXJyb3JfaGFuZGxlciwN
Cj4+PiAtCQkJCSAgICAgICAwLCAiYXJtLXNtbXUtdjMtZ2Vycm9yIiwgc21tdSk7DQo+Pj4gKwkJ
cmV0ID0gcmVxdWVzdF9pcnEoaXJxLCAwLCBhcm1fc21tdV9nZXJyb3JfaGFuZGxlciwNCj4+PiAr
CQkJCQkJImFybS1zbW11LXYzLWdlcnJvciIsIHNtbXUpOw0KPj4+IAkJaWYgKHJldCA8IDApDQo+
Pj4gCQkJZGV2X3dhcm4oc21tdS0+ZGV2LCAiZmFpbGVkIHRvIGVuYWJsZSBnZXJyb3IgaXJxXG4i
KTsNCj4+PiAJfSBlbHNlIHsNCj4+PiBAQCAtMTc4MSwxMSArMjA3MSw4IEBAIHN0YXRpYyB2b2lk
IGFybV9zbW11X3NldHVwX3VuaXF1ZV9pcnFzKHN0cnVjdCBhcm1fc21tdV9kZXZpY2UgKnNtbXUp
DQo+Pj4gCWlmIChzbW11LT5mZWF0dXJlcyAmIEFSTV9TTU1VX0ZFQVRfUFJJKSB7DQo+Pj4gCQlp
cnEgPSBzbW11LT5wcmlxLnEuaXJxOw0KPj4+IAkJaWYgKGlycSkgew0KPj4+IC0JCQlyZXQgPSBk
ZXZtX3JlcXVlc3RfdGhyZWFkZWRfaXJxKHNtbXUtPmRldiwgaXJxLCBOVUxMLA0KPj4+IC0JCQkJ
CQkJYXJtX3NtbXVfcHJpcV90aHJlYWQsDQo+Pj4gLQkJCQkJCQlJUlFGX09ORVNIT1QsDQo+Pj4g
LQkJCQkJCQkiYXJtLXNtbXUtdjMtcHJpcSIsDQo+Pj4gLQkJCQkJCQlzbW11KTsNCj4+PiArCQkJ
cmV0ID0gcmVxdWVzdF9pcnEoaXJxLCAwLCBhcm1fc21tdV9wcmlxX2lycV90YXNrbGV0LA0KPj4+
ICsJCQkJCQkJImFybS1zbW11LXYzLXByaXEiLCBzbW11KTsNCj4+PiAJCQlpZiAocmV0IDwgMCkN
Cj4+PiAJCQkJZGV2X3dhcm4oc21tdS0+ZGV2LA0KPj4+IAkJCQkJICJmYWlsZWQgdG8gZW5hYmxl
IHByaXEgaXJxXG4iKTsNCj4+PiBAQCAtMTgxNCwxMSArMjEwMSw4IEBAIHN0YXRpYyBpbnQgYXJt
X3NtbXVfc2V0dXBfaXJxcyhzdHJ1Y3QgYXJtX3NtbXVfZGV2aWNlICpzbW11KQ0KPj4+IAkJICog
Q2F2aXVtIFRodW5kZXJYMiBpbXBsZW1lbnRhdGlvbiBkb2Vzbid0IHN1cHBvcnQgdW5pcXVlIGly
cQ0KPj4+IAkJICogbGluZXMuIFVzZSBhIHNpbmdsZSBpcnEgbGluZSBmb3IgYWxsIHRoZSBTTU1V
djMgaW50ZXJydXB0cy4NCj4+PiAJCSAqLw0KPj4+IC0JCXJldCA9IGRldm1fcmVxdWVzdF90aHJl
YWRlZF9pcnEoc21tdS0+ZGV2LCBpcnEsDQo+Pj4gLQkJCQkJYXJtX3NtbXVfY29tYmluZWRfaXJx
X2hhbmRsZXIsDQo+Pj4gLQkJCQkJYXJtX3NtbXVfY29tYmluZWRfaXJxX3RocmVhZCwNCj4+PiAt
CQkJCQlJUlFGX09ORVNIT1QsDQo+Pj4gLQkJCQkJImFybS1zbW11LXYzLWNvbWJpbmVkLWlycSIs
IHNtbXUpOw0KPj4+ICsJCXJldCA9IHJlcXVlc3RfaXJxKGlycSwgMCwgYXJtX3NtbXVfY29tYmlu
ZWRfaXJxX2hhbmRsZXIsDQo+Pj4gKwkJCQkJCSJhcm0tc21tdS12My1jb21iaW5lZC1pcnEiLCBz
bW11KTsNCj4+PiAJCWlmIChyZXQgPCAwKQ0KPj4+IAkJCWRldl93YXJuKHNtbXUtPmRldiwgImZh
aWxlZCB0byBlbmFibGUgY29tYmluZWQgaXJxXG4iKTsNCj4+PiAJfSBlbHNlDQo+Pj4gQEAgLTE4
NTcsNyArMjE0MSw3IEBAIHN0YXRpYyBpbnQgYXJtX3NtbXVfZGV2aWNlX3Jlc2V0KHN0cnVjdCBh
cm1fc21tdV9kZXZpY2UgKnNtbXUsIGJvb2wgYnlwYXNzKQ0KPj4+IAlyZWcgPSByZWFkbF9yZWxh
eGVkKHNtbXUtPmJhc2UgKyBBUk1fU01NVV9DUjApOw0KPj4+IAlpZiAocmVnICYgQ1IwX1NNTVVF
Tikgew0KPj4+IAkJZGV2X3dhcm4oc21tdS0+ZGV2LCAiU01NVSBjdXJyZW50bHkgZW5hYmxlZCEg
UmVzZXR0aW5nLi4uXG4iKTsNCj4+PiAtCQlXQVJOX09OKGlzX2tkdW1wX2tlcm5lbCgpICYmICFk
aXNhYmxlX2J5cGFzcyk7DQo+Pj4gKwkJV0FSTl9PTighZGlzYWJsZV9ieXBhc3MpOw0KPj4+IAkJ
YXJtX3NtbXVfdXBkYXRlX2dicGEoc21tdSwgR0JQQV9BQk9SVCwgMCk7DQo+Pj4gCX0NCj4+PiAN
Cj4+PiBAQCAtMTk1Miw4ICsyMjM2LDExIEBAIHN0YXRpYyBpbnQgYXJtX3NtbXVfZGV2aWNlX3Jl
c2V0KHN0cnVjdCBhcm1fc21tdV9kZXZpY2UgKnNtbXUsIGJvb2wgYnlwYXNzKQ0KPj4+IAkJcmV0
dXJuIHJldDsNCj4+PiAJfQ0KPj4+IA0KPj4+IC0JaWYgKGlzX2tkdW1wX2tlcm5lbCgpKQ0KPj4+
IC0JCWVuYWJsZXMgJj0gfihDUjBfRVZUUUVOIHwgQ1IwX1BSSVFFTik7DQo+Pj4gKwkvKiBJbml0
aWFsaXplIHRhc2tsZXRzIGZvciB0aHJlYWRlZCBJUlFzKi8NCj4+PiArCXRhc2tsZXRfaW5pdCgm
c21tdS0+ZXZ0cV9pcnFfdGFza2xldCwgYXJtX3NtbXVfZXZ0cV90aHJlYWQsIHNtbXUpOw0KPj4+
ICsJdGFza2xldF9pbml0KCZzbW11LT5wcmlxX2lycV90YXNrbGV0LCBhcm1fc21tdV9wcmlxX3Ro
cmVhZCwgc21tdSk7DQo+Pj4gKwl0YXNrbGV0X2luaXQoJnNtbXUtPmNvbWJpbmVkX2lycV90YXNr
bGV0LCBhcm1fc21tdV9jb21iaW5lZF9pcnFfdGhyZWFkLA0KPj4+ICsJCQkJIHNtbXUpOw0KPj4+
IA0KPj4+IAkvKiBFbmFibGUgdGhlIFNNTVUgaW50ZXJmYWNlLCBvciBlbnN1cmUgYnlwYXNzICov
DQo+Pj4gCWlmICghYnlwYXNzIHx8IGRpc2FibGVfYnlwYXNzKSB7DQo+Pj4gQEAgLTIxOTUsNyAr
MjQ4Miw3IEBAIHN0YXRpYyBpbmxpbmUgaW50IGFybV9zbW11X2RldmljZV9hY3BpX3Byb2JlKHN0
cnVjdCBwbGF0Zm9ybV9kZXZpY2UgKnBkZXYsDQo+Pj4gc3RhdGljIGludCBhcm1fc21tdV9kZXZp
Y2VfZHRfcHJvYmUoc3RydWN0IHBsYXRmb3JtX2RldmljZSAqcGRldiwNCj4+PiAJCQkJICAgIHN0
cnVjdCBhcm1fc21tdV9kZXZpY2UgKnNtbXUpDQo+Pj4gew0KPj4+IC0Jc3RydWN0IGRldmljZSAq
ZGV2ID0gJnBkZXYtPmRldjsNCj4+PiArCXN0cnVjdCBkZXZpY2UgKmRldiA9IHBkZXY7DQo+Pj4g
CXUzMiBjZWxsczsNCj4+PiAJaW50IHJldCA9IC1FSU5WQUw7DQo+Pj4gDQo+Pj4gQEAgLTIyMTks
MTMwICsyNTA2LDQ0OSBAQCBzdGF0aWMgdW5zaWduZWQgbG9uZyBhcm1fc21tdV9yZXNvdXJjZV9z
aXplKHN0cnVjdCBhcm1fc21tdV9kZXZpY2UgKnNtbXUpDQo+Pj4gCQlyZXR1cm4gU1pfMTI4SzsN
Cj4+PiB9DQo+Pj4gDQo+Pj4gKy8qIFN0YXJ0IG9mIFhlbiBzcGVjaWZpYyBjb2RlLiAqLw0KPj4+
IHN0YXRpYyBpbnQgYXJtX3NtbXVfZGV2aWNlX3Byb2JlKHN0cnVjdCBwbGF0Zm9ybV9kZXZpY2Ug
KnBkZXYpDQo+Pj4gew0KPj4+IC0JaW50IGlycSwgcmV0Ow0KPj4+IC0Jc3RydWN0IHJlc291cmNl
ICpyZXM7DQo+Pj4gLQlyZXNvdXJjZV9zaXplX3QgaW9hZGRyOw0KPj4+IC0Jc3RydWN0IGFybV9z
bW11X2RldmljZSAqc21tdTsNCj4+PiAtCXN0cnVjdCBkZXZpY2UgKmRldiA9ICZwZGV2LT5kZXY7
DQo+Pj4gLQlib29sIGJ5cGFzczsNCj4+PiArICAgIGludCBpcnEsIHJldDsNCj4+PiArICAgIHBh
ZGRyX3QgaW9hZGRyLCBpb3NpemU7DQo+Pj4gKyAgICBzdHJ1Y3QgYXJtX3NtbXVfZGV2aWNlICpz
bW11Ow0KPj4+ICsgICAgYm9vbCBieXBhc3M7DQo+Pj4gKw0KPj4+ICsgICAgc21tdSA9IGRldm1f
a3phbGxvYyhkZXYsIHNpemVvZigqc21tdSksIEdGUF9LRVJORUwpOw0KPj4+ICsgICAgaWYgKCAh
c21tdSApDQo+Pj4gKyAgICB7DQo+Pj4gKyAgICAgICAgZGV2X2VycihwZGV2LCAiZmFpbGVkIHRv
IGFsbG9jYXRlIGFybV9zbW11X2RldmljZVxuIik7DQo+Pj4gKyAgICAgICAgcmV0dXJuIC1FTk9N
RU07DQo+Pj4gKyAgICB9DQo+Pj4gKyAgICBzbW11LT5kZXYgPSBwZGV2Ow0KPj4+ICsNCj4+PiAr
ICAgIGlmICggcGRldi0+b2Zfbm9kZSApDQo+Pj4gKyAgICB7DQo+Pj4gKyAgICAgICAgcmV0ID0g
YXJtX3NtbXVfZGV2aWNlX2R0X3Byb2JlKHBkZXYsIHNtbXUpOw0KPj4+ICsgICAgfSBlbHNlDQo+
Pj4gKyAgICB7DQo+PiANCj4+IGNvZGluZyBzdHlsZQ0KPiANCj4gQWNrLiANCj4+IA0KPj4gDQo+
Pj4gKyAgICAgICAgcmV0ID0gYXJtX3NtbXVfZGV2aWNlX2FjcGlfcHJvYmUocGRldiwgc21tdSk7
DQo+Pj4gKyAgICAgICAgaWYgKCByZXQgPT0gLUVOT0RFViApDQo+Pj4gKyAgICAgICAgICAgIHJl
dHVybiByZXQ7DQo+Pj4gKyAgICB9DQo+Pj4gKw0KPj4+ICsgICAgLyogU2V0IGJ5cGFzcyBtb2Rl
IGFjY29yZGluZyB0byBmaXJtd2FyZSBwcm9iaW5nIHJlc3VsdCAqLw0KPj4+ICsgICAgYnlwYXNz
ID0gISFyZXQ7DQo+Pj4gKw0KPj4+ICsgICAgLyogQmFzZSBhZGRyZXNzICovDQo+Pj4gKyAgICBy
ZXQgPSBkdF9kZXZpY2VfZ2V0X2FkZHJlc3MoZGV2X3RvX2R0KHBkZXYpLCAwLCAmaW9hZGRyLCAm
aW9zaXplKTsNCj4+PiArICAgIGlmKCByZXQgKQ0KPj4+ICsgICAgICAgIHJldHVybiAtRU5PREVW
Ow0KPj4+ICsNCj4+PiArICAgIGlmICggaW9zaXplIDwgYXJtX3NtbXVfcmVzb3VyY2Vfc2l6ZShz
bW11KSApDQo+Pj4gKyAgICB7DQo+Pj4gKyAgICAgICAgZGV2X2VycihwZGV2LCAiTU1JTyByZWdp
b24gdG9vIHNtYWxsICglbHgpXG4iLCBpb3NpemUpOw0KPj4+ICsgICAgICAgIHJldHVybiAtRUlO
VkFMOw0KPj4+ICsgICAgfQ0KPj4+ICsNCj4+PiArICAgIC8qDQo+Pj4gKyAgICAgKiBEb24ndCBt
YXAgdGhlIElNUExFTUVOVEFUSU9OIERFRklORUQgcmVnaW9ucywgc2luY2UgdGhleSBtYXkgY29u
dGFpbg0KPj4+ICsgICAgICogdGhlIFBNQ0cgcmVnaXN0ZXJzIHdoaWNoIGFyZSByZXNlcnZlZCBi
eSB0aGUgUE1VIGRyaXZlci4NCj4+IA0KPj4gRG9lcyB0aGlzIGFwcGx5IHRvIFhlbiB0b28/DQo+
IA0KPiBZZXMuIFBlcmZvcm1hbmNlIG1vbml0b3JpbmcgZmFjaWxpdGllcyBhcmUgb3B0aW9uYWwu
IEN1cnJlbnRseSB0aGVyZSBpcyBubyBwbGFuIHRvIHN1cHBvcnQgYWxzbyBpbiBYRU4uDQoNCkkg
bWlzc2VkIHRvIGFkZCB0aGF0IGluIG5leHQgdmVyc2lvbiBvZiB0aGUgcGF0Y2ggSSB3aWxsIHVz
ZSB0aGUgQVJNX1NNTVVfUkVHX1NaICBhcyBzaXplIGFzIGFsc28gc3VnZ2VzdGVkIGJ5IEp1bGll
bi4NCg0KUmVnYXJkcywNClJhaHVsDQo+PiANCj4+IA0KPj4+ICsgICAgICovDQo+Pj4gKyAgICBz
bW11LT5iYXNlID0gaW9yZW1hcF9ub2NhY2hlKGlvYWRkciwgaW9zaXplKTsNCj4+PiArICAgIGlm
ICggSVNfRVJSKHNtbXUtPmJhc2UpICkNCj4+PiArICAgICAgICByZXR1cm4gUFRSX0VSUihzbW11
LT5iYXNlKTsNCj4+PiArDQo+Pj4gKyAgICBpZiAoIGlvc2l6ZSA+IFNaXzY0SyApDQo+Pj4gKyAg
ICB7DQo+Pj4gKyAgICAgICAgc21tdS0+cGFnZTEgPSBpb3JlbWFwX25vY2FjaGUoaW9hZGRyICsg
U1pfNjRLLCBBUk1fU01NVV9SRUdfU1opOw0KPj4+ICsgICAgICAgIGlmICggSVNfRVJSKHNtbXUt
PnBhZ2UxKSApDQo+Pj4gKyAgICAgICAgICAgIHJldHVybiBQVFJfRVJSKHNtbXUtPnBhZ2UxKTsN
Cj4+PiArICAgIH0NCj4+PiArICAgIGVsc2UNCj4+PiArICAgIHsNCj4+PiArICAgICAgICBzbW11
LT5wYWdlMSA9IHNtbXUtPmJhc2U7DQo+Pj4gKyAgICB9DQo+Pj4gKw0KPj4+ICsgICAgLyogSW50
ZXJydXB0IGxpbmVzICovDQo+Pj4gKw0KPj4+ICsgICAgaXJxID0gcGxhdGZvcm1fZ2V0X2lycV9i
eW5hbWVfb3B0aW9uYWwocGRldiwgImNvbWJpbmVkIik7DQo+Pj4gKyAgICBpZiAoIGlycSA+IDAg
KQ0KPj4+ICsgICAgICAgIHNtbXUtPmNvbWJpbmVkX2lycSA9IGlycTsNCj4+PiArICAgIGVsc2UN
Cj4+PiArICAgIHsNCj4+PiArICAgICAgICBpcnEgPSBwbGF0Zm9ybV9nZXRfaXJxX2J5bmFtZV9v
cHRpb25hbChwZGV2LCAiZXZlbnRxIik7DQo+Pj4gKyAgICAgICAgaWYgKCBpcnEgPiAwICkNCj4+
PiArICAgICAgICAgICAgc21tdS0+ZXZ0cS5xLmlycSA9IGlycTsNCj4+PiArDQo+Pj4gKyAgICAg
ICAgaXJxID0gcGxhdGZvcm1fZ2V0X2lycV9ieW5hbWVfb3B0aW9uYWwocGRldiwgInByaXEiKTsN
Cj4+PiArICAgICAgICBpZiAoIGlycSA+IDAgKQ0KPj4+ICsgICAgICAgICAgICBzbW11LT5wcmlx
LnEuaXJxID0gaXJxOw0KPj4+ICsNCj4+PiArICAgICAgICBpcnEgPSBwbGF0Zm9ybV9nZXRfaXJx
X2J5bmFtZV9vcHRpb25hbChwZGV2LCAiZ2Vycm9yIik7DQo+Pj4gKyAgICAgICAgaWYgKCBpcnEg
PiAwICkNCj4+PiArICAgICAgICAgICAgc21tdS0+Z2Vycl9pcnEgPSBpcnE7DQo+Pj4gKyAgICB9
DQo+Pj4gKyAgICAvKiBQcm9iZSB0aGUgaC93ICovDQo+Pj4gKyAgICByZXQgPSBhcm1fc21tdV9k
ZXZpY2VfaHdfcHJvYmUoc21tdSk7DQo+Pj4gKyAgICBpZiAoIHJldCApDQo+Pj4gKyAgICAgICAg
cmV0dXJuIHJldDsNCj4+PiArDQo+Pj4gKyAgICAvKiBJbml0aWFsaXNlIGluLW1lbW9yeSBkYXRh
IHN0cnVjdHVyZXMgKi8NCj4+PiArICAgIHJldCA9IGFybV9zbW11X2luaXRfc3RydWN0dXJlcyhz
bW11KTsNCj4+PiArICAgIGlmICggcmV0ICkNCj4+PiArICAgICAgICByZXR1cm4gcmV0Ow0KPj4+
ICsNCj4+PiArICAgIC8qIFJlc2V0IHRoZSBkZXZpY2UgKi8NCj4+PiArICAgIHJldCA9IGFybV9z
bW11X2RldmljZV9yZXNldChzbW11LCBieXBhc3MpOw0KPj4+ICsgICAgaWYgKCByZXQgKQ0KPj4+
ICsgICAgICAgIHJldHVybiByZXQ7DQo+Pj4gKw0KPj4+ICsgICAgLyoNCj4+PiArICAgICAqIEtl
ZXAgYSBsaXN0IG9mIGFsbCBwcm9iZWQgZGV2aWNlcy4gVGhpcyB3aWxsIGJlIHVzZWQgdG8gcXVl
cnkNCj4+PiArICAgICAqIHRoZSBzbW11IGRldmljZXMgYmFzZWQgb24gdGhlIGZ3bm9kZS4NCj4+
PiArICAgICAqLw0KPj4+ICsgICAgSU5JVF9MSVNUX0hFQUQoJnNtbXUtPmRldmljZXMpOw0KPj4+
ICsNCj4+PiArICAgIHNwaW5fbG9jaygmYXJtX3NtbXVfZGV2aWNlc19sb2NrKTsNCj4+PiArICAg
IGxpc3RfYWRkKCZzbW11LT5kZXZpY2VzLCAmYXJtX3NtbXVfZGV2aWNlcyk7DQo+Pj4gKyAgICBz
cGluX3VubG9jaygmYXJtX3NtbXVfZGV2aWNlc19sb2NrKTsNCj4+PiArDQo+Pj4gKyAgICByZXR1
cm4gMDsNCj4+PiArfQ0KDQo=


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 20:18:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 20:18:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43002.77374 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkYZe-0004Bc-EY; Wed, 02 Dec 2020 20:18:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43002.77374; Wed, 02 Dec 2020 20:18:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkYZe-0004BV-Av; Wed, 02 Dec 2020 20:18:02 +0000
Received: by outflank-mailman (input) for mailman id 43002;
 Wed, 02 Dec 2020 20:18:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X14H=FG=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kkYZd-0004BQ-N1
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 20:18:01 +0000
Received: from mail-io1-xd43.google.com (unknown [2607:f8b0:4864:20::d43])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bfff363c-9dc3-4be9-b6de-9344e0d4efbb;
 Wed, 02 Dec 2020 20:18:00 +0000 (UTC)
Received: by mail-io1-xd43.google.com with SMTP id n4so2838840iow.12
 for <xen-devel@lists.xenproject.org>; Wed, 02 Dec 2020 12:18:00 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bfff363c-9dc3-4be9-b6de-9344e0d4efbb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=imOsHZ8UM7k2PqA8O+sHJ8lLeteNEVL/iOURmAIDRzs=;
        b=YVI++cRdislCT0pqLziNmY07cMrzMaRRyo/bg+OnOunK0TjuLN46oD1Fgsmb45irUj
         pomtFMYzxtICc1wZfYv1PpfGed/fNZanjbBDdiP6LnsfQ2NnFqtjuVrC1/eszVO5Ksyr
         dDnJoPIcfW/7t90rWKKtqmnPQOtTniKefkBae1QCYlNbxnUsmDMiZO11/iBEdVD1+MuW
         tT1vPjNXgTxzcc5LFTUhw7v5KCFuayV/JFyFBZALL12jPMFX2t4gUQo/qJYudqLDSNm9
         rLXju7QSBIi5U94JWFX+jbJAJ3sC0ne4uh4CdUv+3Dte/iopxngHLphNceER/WxjWxHz
         wVLQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=imOsHZ8UM7k2PqA8O+sHJ8lLeteNEVL/iOURmAIDRzs=;
        b=k4rpKvbTMjc7eeg1QQIvG6wgrZptLjKG5H6zo+NYCQzHy+1c1nhZruXguUdO3EjStp
         G0qxBDQSybH9GLg9wIwOiOeswcMFeyVtNqNjwe8/YSnia8Uo9JLOJKzLZHhuzdHUNx6w
         /QFsYE5vNUJEZOozhrzIlX1GCTgZURQ1RxmMukNV34MhvOWcwkg8ALqyoeGQRpwjQnLC
         NwC28A0q6cMlYFYST7YXxCYDiIlaiM/DqD5WHR6j4aBbw2h7A8dj/IzBuniRI/coOcIP
         dk/33pjL+1qIgiDyizhY5x5AywdfFjJkCcPVcctgYIZ6FjgJFTnUgCHrQzx6djXNuKIF
         Iozw==
X-Gm-Message-State: AOAM530rslzk0ouQHtI7MAIsIQzOrUhF5fzxLlFCq07jkHSvV8gtgp7S
	PRzSIMrGbfG7Bm9Lw2RcV+1lb5iFoitmBq0M6io=
X-Google-Smtp-Source: ABdhPJwKiwf3SlGeZHIvOJLeNs1DGgEjYagjHDl7JjsFhuHVeh4GR8neEtie0iv0Pib34F/M4DbydMviVlR03QOkMNg=
X-Received: by 2002:a02:c981:: with SMTP id b1mr3811297jap.6.1606940280062;
 Wed, 02 Dec 2020 12:18:00 -0800 (PST)
MIME-Version: 1.0
References: <20201202164628.24224-1-olaf@aepfle.de>
In-Reply-To: <20201202164628.24224-1-olaf@aepfle.de>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 2 Dec 2020 15:17:48 -0500
Message-ID: <CAKf6xpt8tK2otnYEFEPQHAxx3oCmGo8iogdgkRM5_Pgtsg2VkQ@mail.gmail.com>
Subject: Re: [PATCH v1] tools/hotplug: allow tuning of xenwatchdogd arguments
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, 
	Wei Liu <wl@xen.org>
Content-Type: text/plain; charset="UTF-8"

On Wed, Dec 2, 2020 at 11:47 AM Olaf Hering <olaf@aepfle.de> wrote:
>
> Currently the arguments for xenwatchdogd are hardcoded with 15s
> keep-alive interval and 30s timeout.
>
> It is not possible to tweak these values via
> /etc/systemd/system/xen-watchdog.service.d/*.conf because ExecStart
> cannot be replaced. The only option would be a private copy
> /etc/systemd/system/xen-watchdog.service, which may get out of sync
> with the Xen provided xen-watchdog.service.
>
> Adjust the service file to recognize XENWATCHDOGD_ARGS= in a
> private unit configuration file.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> ---
>  tools/hotplug/Linux/init.d/xen-watchdog.in          | 7 ++++++-
>  tools/hotplug/Linux/systemd/xen-watchdog.service.in | 4 +++-
>  2 files changed, 9 insertions(+), 2 deletions(-)
>
> diff --git a/tools/hotplug/Linux/init.d/xen-watchdog.in b/tools/hotplug/Linux/init.d/xen-watchdog.in
> index c05f1f6b6a..87e2353b49 100644
> --- a/tools/hotplug/Linux/init.d/xen-watchdog.in
> +++ b/tools/hotplug/Linux/init.d/xen-watchdog.in
> @@ -19,6 +19,11 @@
>
>  . @XEN_SCRIPT_DIR@/hotplugpath.sh
>
> +xencommons_config=@CONFIG_DIR@/@CONFIG_LEAF_DIR@
> +
> +test -f $xencommons_config/xencommons && . $xencommons_config/xencommons
> +
> +test -z "$XENWATCHDOGD_ARGS" || XENWATCHDOGD_ARGS='15 30'

This should be `test -z ... && ` or `test -n ... || ` to set the
default values properly.
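Either of the corrected forms sets the default only when the variable is empty, whereas the patch's `test -z ... ||` does the opposite (it overwrites a user-supplied value and leaves an unset variable unset). A minimal sketch of the behaviour:

```shell
# Default applies when the variable is empty/unset:
XENWATCHDOGD_ARGS=""
test -z "$XENWATCHDOGD_ARGS" && XENWATCHDOGD_ARGS='15 30'
echo "$XENWATCHDOGD_ARGS"    # -> 15 30

# A user-supplied value must survive (second suggested form):
XENWATCHDOGD_ARGS='5 10'
test -n "$XENWATCHDOGD_ARGS" || XENWATCHDOGD_ARGS='15 30'
echo "$XENWATCHDOGD_ARGS"    # -> 5 10

# Equivalent POSIX parameter expansion expressing the same default:
: "${XENWATCHDOGD_ARGS:='15 30'}"
```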

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 21:10:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 21:10:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43009.77387 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkZOg-0001F4-2y; Wed, 02 Dec 2020 21:10:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43009.77387; Wed, 02 Dec 2020 21:10:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkZOf-0001Ee-Tz; Wed, 02 Dec 2020 21:10:45 +0000
Received: by outflank-mailman (input) for mailman id 43009;
 Wed, 02 Dec 2020 21:10:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kkZOe-0001AX-Mk
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 21:10:44 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kkZOX-0003rS-RW; Wed, 02 Dec 2020 21:10:37 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kkZOX-00039w-Hj; Wed, 02 Dec 2020 21:10:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Onm0xfQJVu6IWG1SCpBTJID+WxJ8YF3bI4LRL0Fd27Y=; b=h8s+qOsm38vLVaLM3L2hgfZW8/
	yYlNsTpeqoJTnbiDVFEp/B94uf/dHD1wDgTygKNcvDZE+7plNxv697phqhU/RALPpSK+MFUmomO8F
	Zzj0HLH8JXZvNPTF7JDif8Ba+0aE+zp2LL755ihbGbmRX+ahedcn3UvdUXb5K+2pkZik=;
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <lengyelt@ainfosec.com>,
 Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>,
 Alexandru Isaila <aisaila@bitdefender.com>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d821c715-966a-b48b-a877-c5dac36822f0@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <17c90493-b438-fbc1-ca10-3bc4d89c4e5e@xen.org>
Date: Wed, 2 Dec 2020 21:10:35 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <d821c715-966a-b48b-a877-c5dac36822f0@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 23/11/2020 13:30, Jan Beulich wrote:
> While there don't look to be any problems with this right now, the lock
> order implications from holding the lock can be very difficult to follow
> (and may be easy to violate unknowingly). The present callbacks don't
> (and no such callback should) have any need for the lock to be held.
> 
> However, vm_event_disable() frees the structures used by respective
> callbacks and isn't otherwise synchronized with invocations of these
> callbacks, so maintain a count of in-progress calls, for evtchn_close()
> to wait to drop to zero before freeing the port (and dropping the lock).

AFAICT, this callback is not the only place where the synchronization is 
missing in the VM event code.

For instance, vm_event_put_request() can also race against 
vm_event_disable().

So shouldn't we handle this issue properly in VM event?

> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> Should we make this accounting optional, to be requested through a new
> parameter to alloc_unbound_xen_event_channel(), or derived from other
> than the default callback being requested?

Aside from the VM event code, do you see any value for the other callers?

> ---
> v3: Drain callbacks before proceeding with closing. Re-base.
> 
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -397,6 +397,7 @@ static long evtchn_bind_interdomain(evtc
>       
>       rchn->u.interdomain.remote_dom  = ld;
>       rchn->u.interdomain.remote_port = lport;
> +    atomic_set(&rchn->u.interdomain.active_calls, 0);
>       rchn->state                     = ECS_INTERDOMAIN;
>   
>       /*
> @@ -720,6 +721,10 @@ int evtchn_close(struct domain *d1, int
>   
>           double_evtchn_lock(chn1, chn2);
>   
> +        if ( consumer_is_xen(chn1) )
> +            while ( atomic_read(&chn1->u.interdomain.active_calls) )
> +                cpu_relax();
> +
>           evtchn_free(d1, chn1);
>   
>           chn2->state = ECS_UNBOUND;
> @@ -781,9 +786,15 @@ int evtchn_send(struct domain *ld, unsig
>           rport = lchn->u.interdomain.remote_port;
>           rchn  = evtchn_from_port(rd, rport);
>           if ( consumer_is_xen(rchn) )
> +        {
> +            /* Don't keep holding the lock for the call below. */
> +            atomic_inc(&rchn->u.interdomain.active_calls);
> +            evtchn_read_unlock(lchn);
>               xen_notification_fn(rchn)(rd->vcpu[rchn->notify_vcpu_id], rport);
> -        else
> -            evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);

atomic_dec() doesn't contain any memory barrier, so we will want one 
between xen_notification_fn() and atomic_dec() to avoid re-ordering.

> +            atomic_dec(&rchn->u.interdomain.active_calls);
> +            return 0;
> +        }
> +        evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
>           break;
>       case ECS_IPI:
>           evtchn_port_set_pending(ld, lchn->notify_vcpu_id, lchn);
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -104,6 +104,7 @@ struct evtchn
>           } unbound;     /* state == ECS_UNBOUND */
>           struct {
>               evtchn_port_t  remote_port;
> +            atomic_t       active_calls;
>               struct domain *remote_dom;
>           } interdomain; /* state == ECS_INTERDOMAIN */
>           struct {
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 21:14:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 21:14:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43016.77398 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkZSP-0001Qt-It; Wed, 02 Dec 2020 21:14:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43016.77398; Wed, 02 Dec 2020 21:14:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkZSP-0001Qm-FD; Wed, 02 Dec 2020 21:14:37 +0000
Received: by outflank-mailman (input) for mailman id 43016;
 Wed, 02 Dec 2020 21:14:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kkZSO-0001Qh-Om
 for xen-devel@lists.xenproject.org; Wed, 02 Dec 2020 21:14:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kkZSM-0003wn-Jr; Wed, 02 Dec 2020 21:14:34 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kkZSM-0003Ud-DL; Wed, 02 Dec 2020 21:14:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=PMNxof0t/q+zpIFpcEyHRgoWAfoohyTYPUe/y/T+DLU=; b=iljXsA1wn7crLPV6J1kB4AgqpT
	mmmtzAxW/BHVlHYBXM9Ds+cVgA+EFIwNXXRXuSOSjOzVjKnwPiobpnFPO588kRJUWBrEt1bQedYlm
	axy0EqBOfY3gZBqb6cZKxCIEadYC3YK3IJ2kFBvmVUcgMA0/FGK8w5VPZbin99kPJ+iM=;
Subject: Re: [PATCH v3 2/5] evtchn: avoid access tearing for
 ->virq_to_evtchn[] accesses
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <ce6ce543-d57a-4111-2e66-871c4f4633a8@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <dae588c9-69ab-36f8-f945-b9f6fb0cb14d@xen.org>
Date: Wed, 2 Dec 2020 21:14:32 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <ce6ce543-d57a-4111-2e66-871c4f4633a8@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 23/11/2020 13:28, Jan Beulich wrote:
> Use {read,write}_atomic() to exclude any eventualities, in particular
> observing that accesses aren't all happening under a consistent lock.
> 
> Requested-by: Julien Grall <julien@xen.org>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

> ---
> v3: New.
> 
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -446,7 +446,7 @@ int evtchn_bind_virq(evtchn_bind_virq_t
>   
>       spin_lock(&d->event_lock);
>   
> -    if ( v->virq_to_evtchn[virq] != 0 )
> +    if ( read_atomic(&v->virq_to_evtchn[virq]) )
>           ERROR_EXIT(-EEXIST);
>   
>       if ( port != 0 )
> @@ -474,7 +474,8 @@ int evtchn_bind_virq(evtchn_bind_virq_t
>   
>       evtchn_write_unlock(chn);
>   
> -    v->virq_to_evtchn[virq] = bind->port = port;
> +    bind->port = port;
> +    write_atomic(&v->virq_to_evtchn[virq], port);
>   
>    out:
>       spin_unlock(&d->event_lock);
> @@ -660,9 +661,9 @@ int evtchn_close(struct domain *d1, int
>       case ECS_VIRQ:
>           for_each_vcpu ( d1, v )
>           {
> -            if ( v->virq_to_evtchn[chn1->u.virq] != port1 )
> +            if ( read_atomic(&v->virq_to_evtchn[chn1->u.virq]) != port1 )
>                   continue;
> -            v->virq_to_evtchn[chn1->u.virq] = 0;
> +            write_atomic(&v->virq_to_evtchn[chn1->u.virq], 0);
>               spin_barrier(&v->virq_lock);
>           }
>           break;
> @@ -801,7 +802,7 @@ bool evtchn_virq_enabled(const struct vc
>       if ( virq_is_global(virq) && v->vcpu_id )
>           v = domain_vcpu(v->domain, 0);
>   
> -    return v->virq_to_evtchn[virq];
> +    return read_atomic(&v->virq_to_evtchn[virq]);
>   }
>   
>   void send_guest_vcpu_virq(struct vcpu *v, uint32_t virq)
> @@ -814,7 +815,7 @@ void send_guest_vcpu_virq(struct vcpu *v
>   
>       spin_lock_irqsave(&v->virq_lock, flags);
>   
> -    port = v->virq_to_evtchn[virq];
> +    port = read_atomic(&v->virq_to_evtchn[virq]);
>       if ( unlikely(port == 0) )
>           goto out;
>   
> @@ -843,7 +844,7 @@ void send_guest_global_virq(struct domai
>   
>       spin_lock_irqsave(&v->virq_lock, flags);
>   
> -    port = v->virq_to_evtchn[virq];
> +    port = read_atomic(&v->virq_to_evtchn[virq]);
>       if ( unlikely(port == 0) )
>           goto out;
>   
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 02 22:47:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 22:47:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43024.77410 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkatd-0001Ks-I0; Wed, 02 Dec 2020 22:46:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43024.77410; Wed, 02 Dec 2020 22:46:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkatd-0001Kl-Ep; Wed, 02 Dec 2020 22:46:49 +0000
Received: by outflank-mailman (input) for mailman id 43024;
 Wed, 02 Dec 2020 22:46:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkatc-0001Kd-El; Wed, 02 Dec 2020 22:46:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkatc-0005oE-5N; Wed, 02 Dec 2020 22:46:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkatb-0003EB-SL; Wed, 02 Dec 2020 22:46:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kkatb-0005bR-Rj; Wed, 02 Dec 2020 22:46:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=V9qQitRCiESvBkK1Pj6zVYz7gM8eGFae9OZAbowiKs8=; b=qaGkh4UrOLEUu6jKNQf3ueDBx6
	Bbo6t5IM/bbyFUSfizBQkAXO3N63cj+bn0hO+SFmI1PzhZ8t0pXjS2ygUlf9L5kjncLqgo+wrBGY5
	7oOoUwxTGYorIllmIr1gRdBTycgGcuWggcdv39OYxbdGCRE3+RIPaBhfhl9HYwb2GACE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157147-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157147: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-start/debianhvm.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=3ae469af8e680df31eecd0a2ac6a83b58ad7ce53
X-Osstest-Versions-That:
    xen=3ae469af8e680df31eecd0a2ac6a83b58ad7ce53
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 02 Dec 2020 22:46:47 +0000

flight 157147 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157147/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 20 guest-start/debianhvm.repeat fail in 157123 pass in 157147
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 157123

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10       fail  like 157123
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157123
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157123
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157123
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157123
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157123
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157123
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157123
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157123
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157123
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157123
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157123
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  3ae469af8e680df31eecd0a2ac6a83b58ad7ce53
baseline version:
 xen                  3ae469af8e680df31eecd0a2ac6a83b58ad7ce53

Last test of basis   157147  2020-12-02 01:52:26 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Wed Dec 02 23:31:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 02 Dec 2020 23:31:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43035.77425 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkbaW-0005vb-7E; Wed, 02 Dec 2020 23:31:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43035.77425; Wed, 02 Dec 2020 23:31:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkbaW-0005vU-2J; Wed, 02 Dec 2020 23:31:08 +0000
Received: by outflank-mailman (input) for mailman id 43035;
 Wed, 02 Dec 2020 23:31:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkbaU-0005vM-Kq; Wed, 02 Dec 2020 23:31:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkbaU-0006gp-BD; Wed, 02 Dec 2020 23:31:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkbaU-000588-1G; Wed, 02 Dec 2020 23:31:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kkbaU-0004jQ-0j; Wed, 02 Dec 2020 23:31:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LUev0f1o1cDEs4P3LR+xf5zTzPORjd7ZaRSZg3JNB4s=; b=KAul2qOnFz93i9+i0mmpln4r1Q
	yQWdMkB6Sixc5qSDW1k7IrM1VwO/SvOenG7cwVGcWmf/bncE0hew9CFftKtiFUUHkaI2wsBJrvmfb
	iwUpfMdZUoumegI7i0InhAB67rDHOd3BsYCUBPpJzZjdyFX8NGr7DBdgDZHHGjU3N6Wk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157163-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157163: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=aec46884784c2494a30221da775d4ac2c43a4d42
X-Osstest-Versions-That:
    xen=cabf60fc32d4cfa1d74a2bdfcdb294a31da5d68e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 02 Dec 2020 23:31:06 +0000

flight 157163 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157163/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  aec46884784c2494a30221da775d4ac2c43a4d42
baseline version:
 xen                  cabf60fc32d4cfa1d74a2bdfcdb294a31da5d68e

Last test of basis   157157  2020-12-02 10:00:26 Z    0 days
Testing same since   157163  2020-12-02 19:01:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   cabf60fc32..aec4688478  aec46884784c2494a30221da775d4ac2c43a4d42 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 01:59:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 01:59:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43068.77470 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkdtS-0007PY-O6; Thu, 03 Dec 2020 01:58:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43068.77470; Thu, 03 Dec 2020 01:58:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkdtS-0007PP-H0; Thu, 03 Dec 2020 01:58:50 +0000
Received: by outflank-mailman (input) for mailman id 43068;
 Thu, 03 Dec 2020 01:58:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fvcX=FH=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1kkdtR-0007PG-Ag
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 01:58:49 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e775cbf0-7adc-46fb-9b14-6f05a26a7c43;
 Thu, 03 Dec 2020 01:58:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e775cbf0-7adc-46fb-9b14-6f05a26a7c43
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606960728;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=81Rw3lZq98mQJyaIkQSLoiys0ZES5ImYdbwUcQDOUUM=;
  b=MJsw8CDw+JS4C8KZAwdh25fwGlMR616zTUq1whLTv/7cZZ4rBaJRSppy
   mIaYA2tZifi+W+hRx5NLOijG6urwJZwCGU8957wSjFt3/yIxC7VSVmYpK
   i67LkoccBLSmcukELsRKoPfUOIFWqfAolHfM8Yu2VjNMPdrA9PxoI7BBQ
   U=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: QbsnmMv0btZoWs0MjOnZTuPOXXG4Sau6n2H6oj8ItV0G3cT4fCUauXYIOkTZU9N+h8/8soDNkQ
 LO5oSZ0gddmvSsHxmauHCYgkonA2BQAZgom2HEX5i55utTSe/sozu6bRd6OL4r8qGpzsPUtjqJ
 bLQu8Xp5Cz2vr+lBd0gImyt99IN0NAG2vr98aZfADxjAPj5t8aRpJ2hL640o0Q5KMLnXBnD2Tm
 wpvJz3X2sBr4XnGM71+qdFv/pfF3sknr/Tzlg527B0aWxa/X/uRpYSWqwyp0wabpRLFishOCBa
 G7U=
X-SBRS: None
X-MesageID: 32749603
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,388,1599537600"; 
   d="scan'208";a="32749603"
From: Igor Druzhinin <igor.druzhinin@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: <andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>,
	<iwj@xenproject.org>, <jbeulich@suse.com>, <julien@xen.org>,
	<sstabellini@kernel.org>, <wl@xen.org>, <roger.pau@citrix.com>, "Igor
 Druzhinin" <igor.druzhinin@citrix.com>
Subject: [PATCH v2 2/2] x86/IRQ: allocate guest array of max size only for shareable IRQs
Date: Thu, 3 Dec 2020 01:58:26 +0000
Message-ID: <1606960706-21274-2-git-send-email-igor.druzhinin@citrix.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1606960706-21274-1-git-send-email-igor.druzhinin@citrix.com>
References: <1606960706-21274-1-git-send-email-igor.druzhinin@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

... and increase default "irq_max_guests" to 32.

There is no need for an array of size greater than one for non-shareable
IRQs, and allocating one anyway could hurt scalability when a high
"irq_max_guests" value is in use - every IRQ in the system, including
MSIs, would be supplied with an array of that size.

Since a higher "irq_max_guests" value is now less impactful, bump the
default to 32. That should give more headroom for future systems.

Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
---

New in v2.
This was suggested by Jan and is optional from my perspective.

---
 docs/misc/xen-command-line.pandoc | 2 +-
 xen/arch/x86/irq.c                | 7 ++++---
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index f5f230c..dea2a22 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1644,7 +1644,7 @@ This option is ignored in **pv-shim** mode.
 ### irq_max_guests (x86)
 > `= <integer>`
 
-> Default: `16`
+> Default: `32`
 
 Maximum number of guests IRQ could be shared between, i.e. a limit on
 the number of guests it is possible to start each having assigned a device
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 5ae9846..70b7a53 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -440,7 +440,7 @@ int __init init_irq_data(void)
         irq_to_desc(irq)->irq = irq;
 
     if ( !irq_max_guests || irq_max_guests > 255)
-        irq_max_guests = 16;
+        irq_max_guests = 32;
 
 #ifdef CONFIG_PV
     /* Never allocate the hypercall vector or Linux/BSD fast-trap vector. */
@@ -1540,6 +1540,7 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
     unsigned int        irq;
     struct irq_desc         *desc;
     irq_guest_action_t *action, *newaction = NULL;
+    unsigned int        max_nr_guests = will_share ? irq_max_guests : 1;
     int                 rc = 0;
 
     WARN_ON(!spin_is_locked(&v->domain->event_lock));
@@ -1571,7 +1572,7 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
         {
             spin_unlock_irq(&desc->lock);
             if ( (newaction = xmalloc_bytes(sizeof(irq_guest_action_t) +
-                  irq_max_guests * sizeof(action->guest[0]))) != NULL &&
+                  max_nr_guests * sizeof(action->guest[0]))) != NULL &&
                  zalloc_cpumask_var(&newaction->cpu_eoi_map) )
                 goto retry;
             xfree(newaction);
@@ -1640,7 +1641,7 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
         goto retry;
     }
 
-    if ( action->nr_guests == irq_max_guests )
+    if ( action->nr_guests == max_nr_guests )
     {
         printk(XENLOG_G_INFO "Cannot bind IRQ%d to dom%d. "
                "Already at max share %u, increase with irq_max_guests= option.\n",
-- 
2.7.4
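[Editorial note: the scalability point in the commit message above can be made concrete with a back-of-envelope calculation. The figures below (IRQ count, fraction of shareable IRQs, 8-byte pointers) are hypothetical assumptions for illustration only; they do not come from the patch.]

```python
# Illustrative sketch of the memory saved by sizing the guest[] array
# per-IRQ: shareable IRQs keep the full irq_max_guests-sized array,
# while non-shareable IRQs (e.g. MSIs) only need a single slot.
# All concrete numbers here are hypothetical.

PTR_SIZE = 8  # bytes per guest pointer, assuming a 64-bit build


def guest_array_bytes(nr_irqs, irq_max_guests, shareable_fraction):
    """Total bytes spent on guest[] arrays across all bound IRQs,
    with non-shareable IRQs getting a one-element array."""
    shareable = int(nr_irqs * shareable_fraction)
    exclusive = nr_irqs - shareable
    return (shareable * irq_max_guests + exclusive * 1) * PTR_SIZE


# Hypothetical system: 10000 bound IRQs, only 5% of them shareable.
before = 10000 * 32 * PTR_SIZE  # every IRQ gets the max-size array
after = guest_array_bytes(10000, 32, 0.05)
print(before, after)  # 2560000 204000
```

Under these assumed figures, sizing the array per-IRQ cuts the guest-array footprint by more than an order of magnitude, which is why bumping the default from 16 to 32 becomes cheap.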



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 01:59:11 2020
From: Igor Druzhinin <igor.druzhinin@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: <andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>,
	<iwj@xenproject.org>, <jbeulich@suse.com>, <julien@xen.org>,
	<sstabellini@kernel.org>, <wl@xen.org>, <roger.pau@citrix.com>, "Igor
 Druzhinin" <igor.druzhinin@citrix.com>
Subject: [PATCH v2 1/2] x86/IRQ: make max number of guests for a shared IRQ configurable
Date: Thu, 3 Dec 2020 01:58:25 +0000
Message-ID: <1606960706-21274-1-git-send-email-igor.druzhinin@citrix.com>
X-Mailer: git-send-email 2.7.4
MIME-Version: 1.0
Content-Type: text/plain

... and increase the default to 16.

The current limit of 7 is too restrictive for modern systems, where one GSI
may be shared by many PCI INTx sources, each corresponding to a device
passed through to its own guest. Some systems do not apply due diligence
in swizzling INTx links when, e.g., INTA is declared as the interrupt pin for
the majority of PCI devices behind a single router, resulting in overuse of
a GSI.

Introduce a new command line option to configure that limit and dynamically
allocate the guest array at the necessary size. Set the default to 16 for now,
which is higher than 7 and could be increased further later if necessary.

Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
---

Changes in v2:
- introduced a command line option as suggested
- set the default limit to 16 for now

---
 docs/misc/xen-command-line.pandoc |  9 +++++++++
 xen/arch/x86/irq.c                | 19 +++++++++++++------
 2 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index b4a0d60..f5f230c 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1641,6 +1641,15 @@ This option is ignored in **pv-shim** mode.
 ### nr_irqs (x86)
 > `= <integer>`
 
+### irq_max_guests (x86)
+> `= <integer>`
+
+> Default: `16`
+
+Maximum number of guests IRQ could be shared between, i.e. a limit on
+the number of guests it is possible to start each having assigned a device
+sharing a common interrupt line.  Accepts values between 1 and 255.
+
 ### numa (x86)
 > `= on | off | fake=<integer> | noacpi`
 
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 8d1f9a9..5ae9846 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -42,6 +42,10 @@ integer_param("nr_irqs", nr_irqs);
 int __read_mostly opt_irq_vector_map = OPT_IRQ_VECTOR_MAP_DEFAULT;
 custom_param("irq_vector_map", parse_irq_vector_map_param);
 
+/* Max number of guests IRQ could be shared with */
+static unsigned int __read_mostly irq_max_guests;
+integer_param("irq_max_guests", irq_max_guests);
+
 vmask_t global_used_vector_map;
 
 struct irq_desc __read_mostly *irq_desc = NULL;
@@ -435,6 +439,9 @@ int __init init_irq_data(void)
     for ( ; irq < nr_irqs; irq++ )
         irq_to_desc(irq)->irq = irq;
 
+    if ( !irq_max_guests || irq_max_guests > 255)
+        irq_max_guests = 16;
+
 #ifdef CONFIG_PV
     /* Never allocate the hypercall vector or Linux/BSD fast-trap vector. */
     set_bit(LEGACY_SYSCALL_VECTOR, used_vectors);
@@ -1028,7 +1035,6 @@ int __init setup_irq(unsigned int irq, unsigned int irqflags,
  * HANDLING OF GUEST-BOUND PHYSICAL IRQS
  */
 
-#define IRQ_MAX_GUESTS 7
 typedef struct {
     u8 nr_guests;
     u8 in_flight;
@@ -1039,7 +1045,7 @@ typedef struct {
 #define ACKTYPE_EOI    2     /* EOI on the CPU that was interrupted  */
     cpumask_var_t cpu_eoi_map; /* CPUs that need to EOI this interrupt */
     struct timer eoi_timer;
-    struct domain *guest[IRQ_MAX_GUESTS];
+    struct domain *guest[];
 } irq_guest_action_t;
 
 /*
@@ -1564,7 +1570,8 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
         if ( newaction == NULL )
         {
             spin_unlock_irq(&desc->lock);
-            if ( (newaction = xmalloc(irq_guest_action_t)) != NULL &&
+            if ( (newaction = xmalloc_bytes(sizeof(irq_guest_action_t) +
+                  irq_max_guests * sizeof(action->guest[0]))) != NULL &&
                  zalloc_cpumask_var(&newaction->cpu_eoi_map) )
                 goto retry;
             xfree(newaction);
@@ -1633,11 +1640,11 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
         goto retry;
     }
 
-    if ( action->nr_guests == IRQ_MAX_GUESTS )
+    if ( action->nr_guests == irq_max_guests )
     {
         printk(XENLOG_G_INFO "Cannot bind IRQ%d to dom%d. "
-               "Already at max share.\n",
-               pirq->pirq, v->domain->domain_id);
+               "Already at max share %u, increase with irq_max_guests= option.\n",
+               pirq->pirq, v->domain->domain_id, irq_max_guests);
         rc = -EBUSY;
         goto unlock_out;
     }
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 03:34:58 2020
From: Wei Chen <Wei.Chen@arm.com>
To: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
CC: Penny Zheng <Penny.Zheng@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Andre Przywara <Andre.Przywara@arm.com>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Kaly Xin <Kaly.Xin@arm.com>, nd
	<nd@arm.com>
Subject: RE: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
Date: Thu, 3 Dec 2020 03:34:27 +0000
Message-ID:
 <DB7PR08MB3753C88E41582E76E3C9049F9EF20@DB7PR08MB3753.eurprd08.prod.outlook.com>
References: <20201109082110.1133996-1-penny.zheng@arm.com>
 <cfa63398-8182-b79f-1602-ed068e2319ad@xen.org>
 <AM0PR08MB3747B42FC856B9BDF24646629EE60@AM0PR08MB3747.eurprd08.prod.outlook.com>
 <alpine.DEB.2.21.2011251554070.7979@sstabellini-ThinkPad-T480s>
 <AM0PR08MB3747912905438DA6D7FF969C9EF90@AM0PR08MB3747.eurprd08.prod.outlook.com>
 <8f47313a-f47a-520d-3845-3f2198fce5b4@xen.org>
 <AM0PR08MB37478D884057C8720ED1023D9EF90@AM0PR08MB3747.eurprd08.prod.outlook.com>
 <0a272ffd-24de-2db4-5751-9161cc57cec3@xen.org>
In-Reply-To: <0a272ffd-24de-2db4-5751-9161cc57cec3@xen.org>
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0

Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 3 December 2020 2:11
> To: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini <sstabellini@kernel.org>
> Cc: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> Andre Przywara <Andre.Przywara@arm.com>; Bertrand Marquis
> <Bertrand.Marquis@arm.com>; Kaly Xin <Kaly.Xin@arm.com>; nd
> <nd@arm.com>
> Subject: Re: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
>
>
>
> On 26/11/2020 11:27, Wei Chen wrote:
> > Hi Julien,
>
> Hi Wei,
>
> >> -----Original Message-----
> >> From: Julien Grall <julien@xen.org>
> >> Sent: 26 November 2020 18:55
> >> To: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>
> >> Cc: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> >> Andre Przywara <Andre.Przywara@arm.com>; Bertrand Marquis
> >> <Bertrand.Marquis@arm.com>; Kaly Xin <Kaly.Xin@arm.com>; nd
> >> <nd@arm.com>
> >> Subject: Re: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
> >>
> >> Hi Wei,
> >>
> >> Your e-mail font seems to be different to the usual plain text one. Are
> >> you sending the e-mail using HTML by any chance?
> >>
> >
> > It's strange, I always use the plain text.
>
> Maybe exchange decided to mangle the e-mail :). Anyway, this new message
> looks fine.
>
> >
> >> On 26/11/2020 02:07, Wei Chen wrote:
> >>> Hi Stefano,
> >>>
> >>>> -----Original Message-----
> >>>> From: Stefano Stabellini <sstabellini@kernel.org>
> >>>> Sent: 26 November 2020 8:00
> >>>> To: Wei Chen <Wei.Chen@arm.com>
> >>>> Cc: Julien Grall <julien@xen.org>; Penny Zheng <Penny.Zheng@arm.com>;
> >> xen-
> >>>> devel@lists.xenproject.org; sstabellini@kernel.org; Andre Przywara
> >>>> <Andre.Przywara@arm.com>; Bertrand Marquis
> >> <Bertrand.Marquis@arm.com>;
> >>>> Kaly Xin <Kaly.Xin@arm.com>; nd <nd@arm.com>
> >>>> Subject: RE: [PATCH] xen/arm: Add Cortex-A73 erratum 858921
> workaround
> >>>>
> >>>> Resuming this old thread.
> >>>>
> >>>> On Fri, 13 Nov 2020, Wei Chen wrote:
> >>>>>> Hi,
> >>>>>>
> >>>>>> On 09/11/2020 08:21, Penny Zheng wrote:
> >>>>>>> CNTVCT_EL0 or CNTPCT_EL0 counter read in Cortex-A73 (all versions)
> >>>>>>> might return a wrong value when the counter crosses a 32bit boundary.
> >>>>>>>
> >>>>>>> Until now, there is no case for Xen itself to access CNTVCT_EL0,
> >>>>>>> and it also should be the Guest OS's responsibility to deal with
> >>>>>>> this part.
> >>>>>>>
> >>>>>>> But for CNTPCT, there exists several cases in Xen involving reading
> >>>>>>> CNTPCT, so a possible workaround is that performing the read twice,
> >>>>>>> and to return one or the other depending on whether a transition has
> >>>>>>> taken place.
> >>>>>>>
> >>>>>>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> >>>>>>
> >>>>>> Acked-by: Julien Grall <jgrall@amazon.com>
> >>>>>>
> >>>>>> On a related topic, do we need a fix similar to Linux commit
> >>>>>> 75a19a0202db "arm64: arch_timer: Ensure counter register reads occur
> >>>>>> with seqlock held"?
> >>>>>>
> >>>>>
> >>>>> I take a look at this Linux commit, it seems to prevent the seq-lock to be
> >>>>> speculated.  Using an enforce ordering instead of ISB after the read
> counter
> >>>>> operation seems to be for performance reasons.
> >>>>>
> >>>>> I have found that you had placed an ISB before read counter in get_cycles
> >>>>> to prevent counter value to be speculated. But you haven't placed the
> >> second
> >>>>> ISB after reading. Is it because we haven't used the get_cycles in seq lock
> >>>>> critical context (Maybe I didn't find the right place)? So should we need to
> >> fix it
> >>>>> now, or you prefer to fix it now for future usage?
> >>>>
> >>>> Looking at the call sites, it doesn't look like we need any ISB after
> >>>> get_cycles as it is not used in any critical context. There is also a
> >>>> data dependency with the value returned by it.
> >>
> >> I am assuming you looked at all the users of NOW(). Is that right?
> >>
> >>>>
> >>>> So I am thinking we don't need any fix. At most we need an in-code
> comment?
> >>>
> >>> I agree with you to add an in-code comment. It will remind us in future
> when
> >> we
> >>> use the get_cycles in critical context. Adding it now will probably only lead
> to
> >>> meaningless performance degradation.
> >>
> >> I read this as there would be no perfomance impact if we add the
> >> ordering it now. Did you intend to say?
> >
> > Sorry about my English. I intended to say "Adding it now may introduce some
> > performance cost. And this performance cost may be not worth. Because Xen
> > may never use it in a similar scenario "
>
> Don't worry! I think the performance should not be noticeable if we use
> the same trick as Linux.
>
> >> In addition, AFAICT, the x86 version of get_cycles() is already able to
> >> provide that ordering. So there are chances that code may rely on it.
> >>
> >> While I don't necessarily agree to add barriers everywhere by default
> >> (this may have big impact on the platform). I think it is better to have
> >> an accurate number of cycles.
> >>
> >
> > As x86 had done it, I think it's ok to do it for Arm. This will keep a function
> > behaves the same on different architectures.
>
> Just to be clear, I am not 100% sure this is what Intel is doing.
> Although this is my understanding of the comment in the code.
>
> @Stefano, what do you think?
>
> @Wei, assuming Stefano is happy with the proposal, would you be happy to
> send a patch for that?
>

Of course, I am willing to do that. It seems the enforce_ordering patch has been
merged. And Vincenzo reported the enforce_ordering method will have ~4.5
performance improvement[1] (compared to ISB). So I will use the enforce_ordering
method directly instead of using ISB.

[1] https://lkml.org/lkml/2020/3/13/645

> Best regards,
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 04:13:21 2020
Date: Wed, 2 Dec 2020 20:13:09 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Rahul Singh <rahul.singh@arm.com>, xen-devel@lists.xenproject.org, 
    bertrand.marquis@arm.com, Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>, 
    Paul Durrant <paul@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 8/8] xen/arm: Add support for SMMUv3 driver
In-Reply-To: <1912278a-13f4-885d-d1ca-cc130718d064@xen.org>
Message-ID: <alpine.DEB.2.21.2012021958020.30425@sstabellini-ThinkPad-T480s>
References: <cover.1606406359.git.rahul.singh@arm.com> <de2101687020d18172a2b153f8977a5116d0cd66.1606406359.git.rahul.singh@arm.com> <alpine.DEB.2.21.2012011749550.1100@sstabellini-ThinkPad-T480s> <1912278a-13f4-885d-d1ca-cc130718d064@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 2 Dec 2020, Julien Grall wrote:
> On 02/12/2020 02:51, Stefano Stabellini wrote:
> > On Thu, 26 Nov 2020, Rahul Singh wrote:
> > > +/* Alias to Xen device tree helpers */
> > > +#define device_node dt_device_node
> > > +#define of_phandle_args dt_phandle_args
> > > +#define of_device_id dt_device_match
> > > +#define of_match_node dt_match_node
> > > +#define of_property_read_u32(np, pname, out) (!dt_property_read_u32(np, pname, out))
> > > +#define of_property_read_bool dt_property_read_bool
> > > +#define of_parse_phandle_with_args dt_parse_phandle_with_args
> > 
> > Given all the changes to the file by the previous patches, we are
> > basically fully (or almost fully) adapting this code to Xen.
> > 
> > So at that point I wonder if we might just as well make these changes
> > (e.g. s/of_phandle_args/dt_phandle_args/g) to the code too.
> 
> I have already accepted the fact that keeping Linux code as-is is nearly
> impossible without many workarounds :). The benefits also tend to be
> limited, as we noticed for the SMMU driver.
> 
> I would like to point out that this may make it quite difficult (if not
> impossible) to revert the previous patches, which removed support for some
> features (e.g. atomic, MSI, ATS).
> 
> If we are going to adapt the code to Xen (I'd like to keep the Linux code
> style though), then I think we should consider keeping code that may be
> useful in the near future (at least MSI, ATS).

(I am fine with keeping the Linux code style.)

We could try to keep the code as similar to Linux as possible, but this
didn't work out in the past.

Otherwise, we could fully adapt the driver to Xen. If we fully adapt the
driver to Xen (code style aside) it is better to be consistent and also
do substitutions like s/of_phandle_args/dt_phandle_args/g. Then the
policy becomes clear: the code comes from Linux but it is 100% adapted
to Xen.


Now the question of what to do with the MSI and ATS code is a good
one. We know that we are going to want that code at some point in the
next 2 years. Like you wrote, if we fully adapt the code to Xen and
remove the MSI and ATS code, then it is going to be harder to add it back.

So maybe keeping the MSI and ATS code for now, even if it cannot work,
would be better. I think this strategy works well if the MSI and ATS
code can be disabled easily, i.e. with a couple of lines of code in the
init function rather than #ifdef everywhere. It doesn't work well if we
have to add #ifdef everywhere.

It looks like MSI could be disabled by adding a couple of lines to
arm_smmu_setup_msis.

Similarly, ATS seems easy to disable by forcing ats_enabled to false.

So yes, this looks like a good way forward. Rahul, what do you think?
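For illustration, the "couple of lines in the init function" approach could look something like this generic sketch (all names below are hypothetical; they are not the actual SMMUv3 driver symbols):

```c
#include <stdbool.h>

/* Hypothetical feature flags; the real driver keeps its own state in
 * struct arm_smmu_device. */
struct smmu_features {
    bool msi_enabled;
    bool ats_enabled;
};

/* Keep the MSI/ATS code paths compiled in, but switch them off in one
 * place at init time instead of sprinkling #ifdefs everywhere. */
static inline void smmu_init_features(struct smmu_features *f)
{
    f->msi_enabled = false;   /* MSI code kept, but cannot work yet */
    f->ats_enabled = false;   /* same for ATS */
}
```

When the hypervisor later gains MSI/ATS support, re-enabling them would be a matter of flipping these two assignments rather than re-importing deleted code.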


 
> > > +#define FIELD_GET(_mask, _reg)          \
> > > +    (typeof(_mask))(((_reg) & (_mask)) >> (__builtin_ffsll(_mask) - 1))
> > > +
> > > +#define WRITE_ONCE(x, val)                  \
> > > +do {                                        \
> > > +    *(volatile typeof(x) *)&(x) = (val);    \
> > > +} while (0)
> > 
> > maybe we should define this in xen/include/xen/lib.h
> 
> I have attempted such a discussion in the past and it resulted in more
> bikeshedding than it is worth. So I would suggest re-implementing
> WRITE_ONCE() as write_atomic() for now.

Good suggestion: less discussion, more code :-)


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 04:15:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 04:15:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43094.77524 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkg1J-0004FK-VZ; Thu, 03 Dec 2020 04:15:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43094.77524; Thu, 03 Dec 2020 04:15:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkg1J-0004FD-SK; Thu, 03 Dec 2020 04:15:05 +0000
Received: by outflank-mailman (input) for mailman id 43094;
 Thu, 03 Dec 2020 04:15:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0Inx=FH=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kkg1J-0004F8-7q
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 04:15:05 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1c26f19a-211d-4100-af01-046564eb50a3;
 Thu, 03 Dec 2020 04:15:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c26f19a-211d-4100-af01-046564eb50a3
Date: Wed, 2 Dec 2020 20:15:02 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1606968903;
	bh=gSvh4WPDu46VitPbr/c3ps4QYUgcgZCVWP8di2342no=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=VsBE3nogPVuFPtaAKebL4wxwuXVWwh41J4T0iRzhMNQKbt4RugSvj27vX1D4VfY0b
	 r5icQH52RD4MV1W3a0Ye5fvfnC8t/ttPq1REiFBkQ5KCeKaqgHanL+P5ck6GUqFt+j
	 r6bQdNa9zdwuXKOPuX+5ypql3WhWzIeUC01wi+RMl4mf+M+WhH/VxT4bDiGr84W8Ec
	 jkL1sSAlEvNkKXJSUKaKDLI0H/k1O1iy+1qtRlZYkmn6sUFuiFbUGNlg2QfFOmTP5E
	 m5J8Ofl/PpJ0pCV327iceXGGHhgIsd0JNy863J7M5xVwWq5Ez1Uuce5vRIzBE0KLfp
	 T5t7VJ5uv3AYQ==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Wei Chen <Wei.Chen@arm.com>
cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Penny Zheng <Penny.Zheng@arm.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Andre Przywara <Andre.Przywara@arm.com>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, Kaly Xin <Kaly.Xin@arm.com>, 
    nd <nd@arm.com>
Subject: RE: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
In-Reply-To: <DB7PR08MB3753C88E41582E76E3C9049F9EF20@DB7PR08MB3753.eurprd08.prod.outlook.com>
Message-ID: <alpine.DEB.2.21.2012022014270.30425@sstabellini-ThinkPad-T480s>
References: <20201109082110.1133996-1-penny.zheng@arm.com> <cfa63398-8182-b79f-1602-ed068e2319ad@xen.org> <AM0PR08MB3747B42FC856B9BDF24646629EE60@AM0PR08MB3747.eurprd08.prod.outlook.com> <alpine.DEB.2.21.2011251554070.7979@sstabellini-ThinkPad-T480s>
 <AM0PR08MB3747912905438DA6D7FF969C9EF90@AM0PR08MB3747.eurprd08.prod.outlook.com> <8f47313a-f47a-520d-3845-3f2198fce5b4@xen.org> <AM0PR08MB37478D884057C8720ED1023D9EF90@AM0PR08MB3747.eurprd08.prod.outlook.com> <0a272ffd-24de-2db4-5751-9161cc57cec3@xen.org>
 <DB7PR08MB3753C88E41582E76E3C9049F9EF20@DB7PR08MB3753.eurprd08.prod.outlook.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1698463917-1606968903=:30425"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1698463917-1606968903=:30425
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Thu, 3 Dec 2020, Wei Chen wrote:
> Hi Julien,
> 
> > -----Original Message-----
> > From: Julien Grall <julien@xen.org>
> > Sent: 2020年12月3日 2:11
> > To: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini <sstabellini@kernel.org>
> > Cc: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> > Andre Przywara <Andre.Przywara@arm.com>; Bertrand Marquis
> > <Bertrand.Marquis@arm.com>; Kaly Xin <Kaly.Xin@arm.com>; nd
> > <nd@arm.com>
> > Subject: Re: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
> > 
> > 
> > 
> > On 26/11/2020 11:27, Wei Chen wrote:
> > > Hi Julien,
> > 
> > Hi Wei,
> > 
> > >> -----Original Message-----
> > >> From: Julien Grall <julien@xen.org>
> > >> Sent: 2020年11月26日 18:55
> > >> To: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> > <sstabellini@kernel.org>
> > >> Cc: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> > >> Andre Przywara <Andre.Przywara@arm.com>; Bertrand Marquis
> > >> <Bertrand.Marquis@arm.com>; Kaly Xin <Kaly.Xin@arm.com>; nd
> > >> <nd@arm.com>
> > >> Subject: Re: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
> > >>
> > >> Hi Wei,
> > >>
> > >> Your e-mail font seems to be different to the usual plain text one. Are
> > >> you sending the e-mail using HTML by any chance?
> > >>
> > >
> > > It's strange, I always use the plain text.
> > 
> > Maybe exchange decided to mangle the e-mail :). Anyway, this new message
> > looks fine.
> > 
> > >
> > >> On 26/11/2020 02:07, Wei Chen wrote:
> > >>> Hi Stefano,
> > >>>
> > >>>> -----Original Message-----
> > >>>> From: Stefano Stabellini <sstabellini@kernel.org>
> > >>>> Sent: 2020年11月26日 8:00
> > >>>> To: Wei Chen <Wei.Chen@arm.com>
> > >>>> Cc: Julien Grall <julien@xen.org>; Penny Zheng <Penny.Zheng@arm.com>;
> > >> xen-
> > >>>> devel@lists.xenproject.org; sstabellini@kernel.org; Andre Przywara
> > >>>> <Andre.Przywara@arm.com>; Bertrand Marquis
> > >> <Bertrand.Marquis@arm.com>;
> > >>>> Kaly Xin <Kaly.Xin@arm.com>; nd <nd@arm.com>
> > >>>> Subject: RE: [PATCH] xen/arm: Add Cortex-A73 erratum 858921
> > workaround
> > >>>>
> > >>>> Resuming this old thread.
> > >>>>
> > >>>> On Fri, 13 Nov 2020, Wei Chen wrote:
> > >>>>>> Hi,
> > >>>>>>
> > >>>>>> On 09/11/2020 08:21, Penny Zheng wrote:
> > >>>>>>> CNTVCT_EL0 or CNTPCT_EL0 counter read in Cortex-A73 (all versions)
> > >>>>>>> might return a wrong value when the counter crosses a 32bit boundary.
> > >>>>>>>
> > >>>>>>> Until now, there is no case for Xen itself to access CNTVCT_EL0,
> > >>>>>>> and it also should be the Guest OS's responsibility to deal with
> > >>>>>>> this part.
> > >>>>>>>
> > >>>>>>> But for CNTPCT, there exists several cases in Xen involving reading
> > >>>>>>> CNTPCT, so a possible workaround is that performing the read twice,
> > >>>>>>> and to return one or the other depending on whether a transition has
> > >>>>>>> taken place.
> > >>>>>>>
> > >>>>>>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > >>>>>>
> > >>>>>> Acked-by: Julien Grall <jgrall@amazon.com>
> > >>>>>>
> > >>>>>> On a related topic, do we need a fix similar to Linux commit
> > >>>>>> 75a19a0202db "arm64: arch_timer: Ensure counter register reads occur
> > >>>>>> with seqlock held"?
> > >>>>>>
> > >>>>>
> > >>>>> I took a look at this Linux commit; it seems to prevent the seqlock
> > >>>>> from being speculated. Using enforced ordering instead of an ISB
> > >>>>> after the counter read seems to be for performance reasons.
> > >>>>>
> > >>>>> I have found that you placed an ISB before the counter read in
> > >>>>> get_cycles to prevent the counter value from being speculated, but
> > >>>>> you haven't placed a second ISB after the read. Is that because we
> > >>>>> haven't used get_cycles in a seqlock critical context (maybe I
> > >>>>> didn't find the right place)? So do we need to fix it now, or do
> > >>>>> you prefer fixing it now for future usage?
> > >>>>
> > >>>> Looking at the call sites, it doesn't look like we need any ISB after
> > >>>> get_cycles, as it is not used in any critical context. There is also
> > >>>> a data dependency on the value returned by it.
> > >>
> > >> I am assuming you looked at all the users of NOW(). Is that right?
> > >>
> > >>>>
> > >>>> So I am thinking we don't need any fix. At most we need an in-code
> > >>>> comment?
> > >>>
> > >>> I agree with you about adding an in-code comment. It will remind us
> > >>> in the future when we use get_cycles in a critical context. Adding it
> > >>> now would probably only lead to meaningless performance degradation.
> > >>
> > >> I read this as saying there would be no performance impact if we add
> > >> the ordering now. Is that what you intended to say?
> > >
> > > Sorry about my English. I intended to say "Adding it now may introduce
> > > some performance cost, and this cost may not be worth it, because Xen
> > > may never use it in a similar scenario."
> > 
> > Don't worry! I think the performance should not be noticeable if we use
> > the same trick as Linux.
> > 
> > >> In addition, AFAICT, the x86 version of get_cycles() is already able to
> > >> provide that ordering. So there are chances that code may rely on it.
> > >>
> > >> While I don't necessarily agree with adding barriers everywhere by
> > >> default (this may have a big impact on the platform), I think it is
> > >> better to have an accurate number of cycles.
> > >>
> > >
> > > As x86 has already done it, I think it's OK to do it for Arm. This will
> > > keep the function behaving the same on different architectures.
> > 
> > Just to be clear, I am not 100% sure this is what Intel is doing.
> > Although this is my understanding of the comment in the code.
> > 
> > @Stefano, what do you think?
> > 
> > @Wei, assuming Stefano is happy with the proposal, would you be happy to
> > send a patch for that?
> > 
> 
> Of course, I am willing to do that. It seems the enforce_ordering patch has
> been merged, and Vincenzo reported that the enforce_ordering method gives a
> ~4.5 performance improvement[1] (compared to ISB). So I will use the
> enforce_ordering method directly instead of an ISB.
> 
> [1]https://lkml.org/lkml/2020/3/13/645

If we can enforce ordering without adding an ISB, I am all for it.
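For context, the double-read workaround described earlier in the thread can be sketched as follows (read_cntpct_raw() and fake_counter are hypothetical stand-ins for the real CNTPCT_EL0 system-register read, used here only so the logic can be exercised):

```c
#include <stdint.h>

/* Test stand-in for CNTPCT_EL0: each call advances the counter by one. */
uint64_t fake_counter;

static uint64_t read_cntpct_raw(void)
{
    return fake_counter++;
}

/* Erratum 858921 workaround: read the counter twice and, if bit 32
 * changed between the two reads (i.e. a 32-bit boundary was crossed),
 * trust the second value; otherwise return the first. */
static uint64_t read_cntpct_a73(void)
{
    uint64_t old = read_cntpct_raw();
    uint64_t cur = read_cntpct_raw();

    return (((old ^ cur) >> 32) & 1) ? cur : old;
}
```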
--8323329-1698463917-1606968903=:30425--


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 05:06:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 05:06:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43103.77542 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkgoK-000106-UX; Thu, 03 Dec 2020 05:05:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43103.77542; Thu, 03 Dec 2020 05:05:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkgoK-0000zz-RR; Thu, 03 Dec 2020 05:05:44 +0000
Received: by outflank-mailman (input) for mailman id 43103;
 Thu, 03 Dec 2020 05:05:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkgoJ-0000zr-Cd; Thu, 03 Dec 2020 05:05:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkgoJ-0002St-34; Thu, 03 Dec 2020 05:05:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkgoI-0005Vf-QV; Thu, 03 Dec 2020 05:05:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kkgoI-0001A8-Q1; Thu, 03 Dec 2020 05:05:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=eTk2dTRNnM45AvHxqFJ/s03foEPmbmo9QHdjig9jV08=; b=zT8AfhPOp9jBP5GlcYHCqfIcEk
	iUwx7MjmWb6gqRbB3w2O5y9B39fPQAEK+0UMXrfyWMlCUk7XLRvLXZCLAxAkwe9Wys0RY4m3GYPg9
	eP834wZ3o5NBnL4fj2F5X5vGeDKuWgMd1l3TA82jyssC7D8pZ6WpoVFHGPm0f6UgG68k=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157153-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 157153: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-xl-vhd:leak-check/check:fail:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=42af416d71462a72b02ba6ac632c8dcb9ce729a0
X-Osstest-Versions-That:
    linux=9f4b26f3ea18cb2066c9e58a84ff202c71739a41
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 03 Dec 2020 05:05:42 +0000

flight 157153 linux-5.4 real [real]
flight 157170 linux-5.4 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157153/
http://logs.test-lab.xenproject.org/osstest/logs/157170/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-vhd      20 leak-check/check    fail pass in 157170-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156984
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156984
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156984
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156984
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156984
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156984
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156984
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156984
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156984
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156984
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156984
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                42af416d71462a72b02ba6ac632c8dcb9ce729a0
baseline version:
 linux                9f4b26f3ea18cb2066c9e58a84ff202c71739a41

Last test of basis   156984  2020-11-24 13:11:22 Z    8 days
Testing same since   157153  2020-12-02 08:12:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alan Stern <stern@rowland.harvard.edu>
  Anand K Mistry <amistry@google.com>
  Andrew Lunn <andrew@lunn.ch>
  Ard Biesheuvel <ardb@kernel.org>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Arthur Kiyanovski <akiyano@amazon.com>
  Avraham Stern <avraham.stern@intel.com>
  Benjamin Berg <bberg@redhat.com>
  Benjamin Tissoires <benjamin.tissoires@redhat.com>
  Boqun Feng <boqun.feng@gmail.com>
  Boris Ostrovsky <boris.ostrovsky@oracle.com>
  Borislav Petkov <bp@suse.de>
  Brett Creeley <brett.creeley@intel.com>
  Brian Masney <bmasney@redhat.com>
  Cezary Rojewski <cezary.rojewski@intel.com>
  Chen Baozi <chenbaozi@phytium.com.cn>
  Chris Ye <lzye@google.com>
  Christoph Hellwig <hch@lst.de>
  Cong Wang <cong.wang@bytedance.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel Xu <dxu@dxuuu.xyz>
  Dany Madden <drt@linux.ibm.com>
  David Sterba <dsterba@suse.com>
  David Woodhouse <dwmw@amazon.co.uk>
  Dexuan Cui <decui@microsoft.com>
  Dipen Patel <dipenp@nvidia.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Filipe Manana <fdmanana@suse.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Florian Klink <flokli@flokli.de>
  Frank Yang <puilp0502@gmail.com>
  Gabriele Paoloni <gabriele.paoloni@intel.com>
  Geert Uytterhoeven <geert@linux-m68k.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Greg Kurz <groug@kaod.org>
  Grygorii Strashko <grygorii.strashko@ti.com>
  Guenter Roeck <linux@roeck-us.net>
  Hans de Goede <hdegoede@redhat.com>
  Hauke Mehrtens <hauke@hauke-m.de>
  Henrique de Moraes Holschuh <hnh@hmh.eng.br>
  Hui Su <sh_def@163.com>
  Jakub Kicinski <kuba@kernel.org>
  Jan Kara <jack@suse.cz>
  Jason Gunthorpe <jgg@nvidia.com>
  Jens Axboe <axboe@kernel.dk>
  Jens Wiklander <jens.wiklander@linaro.org>
  Jiri Kosina <jkosina@suse.cz>
  Jiri Olsa <jolsa@redhat.com>
  Johannes Berg <johannes.berg@intel.com>
  Johannes Thumshirn <johannes.thumshirn@wdc.com>
  Jon Hunter <jonathanh@nvidia.com>
  Julian Wiedmann <jwi@linux.ibm.com>
  Kaixu Xia <kaixuxia@tencent.com>
  Kalle Valo <kvalo@codeaurora.org>
  Konrad Jankowski <konrad0.jankowski@intel.com>
  Krzysztof Kozlowski <krzk@kernel.org>
  Laurent Pinchart <laurent.pinchart@ideasonboard.com>
  Lee Duncan <lduncan@suse.com>
  Lijun Pan <ljp@linux.ibm.com>
  Linux Kernel Functional Testing <lkft@linaro.org>
  liuzx@knownsec.com
  Luca Coelho <luciano.coelho@intel.com>
  Lukas Wunner <lukas@wunner.de>
  Marc Ferland <ferlandm@amotus.ca>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <maz@kernel.org>
  Mark Brown <broonie@kernel.org>
  Martijn van de Streek <martijn@zeewinde.xyz>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masami Hiramatsu <mhiramat@kernel.org>
  Mateusz Gorski <mateusz.gorski@linux.intel.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Michael Chan <michael.chan@broadcom.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael S. Tsirkin <mst@redhat.com>
  Mike Christie <michael.christie@oracle.com>
  Mike Cui <mikecui@amazon.com>
  Mike Rapoport <rppt@linux.ibm.com>
  Minwoo Im <minwoo.im.dev@gmail.com>
  Namhyung Kim <namhyung@kernel.org>
  Namjae Jeon <namjae.jeon@samsung.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Necip Fazil Yildiran <fazilyildiran@gmail.com>
  Oleksij Rempel <o.rempel@pengutronix.de>
  Pablo Ceballos <pceballos@google.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavan K S <pavan.k.s@intel.com>
  penghao <penghao@uniontech.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
  Raju Rangoju <rajur@chelsio.com>
  Rohith Surabattula <rohiths@microsoft.com>
  Rui Miguel Silva <rui.silva@linaro.org>
  Ruslan Sushko <rus@sushko.dev>
  Sami Tolvanen <samitolvanen@google.com>
  Sasha Levin <sashal@kernel.org>
  Shay Agroskin <shayagr@amazon.com>
  Simon Wunderlich <sw@simonwunderlich.de>
  Slawomir Laba <slawomirx.laba@intel.com>
  Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
  Sriram Dash <sriram.dash@samsung.com>
  Stanley Chu <stanley.chu@mediatek.com>
  Stefan Agner <stefan@agner.ch>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stephen Rothwell <sfr@canb.auug.org.au>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sudip Mukherjee <sudipm.mukherjee@gmail.com>
  Sugar Zhang <sugar.zhang@rock-chips.com>
  Sven Eckelmann <sven@narfation.org>
  Sylwester Dziedziuch <sylwesterx.dziedziuch@intel.com>
  Taehee Yoo <ap420073@gmail.com>
  Tejun Heo <tj@kernel.org>
  Thierry Reding <treding@nvidia.com>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Tony Lindgren <tony@atomide.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Vinod Koul <vkoul@kernel.org>
  Wei Liu <wei.liu@kernel.org>
  Weihang Li <liweihang@huawei.com>
  Wenpeng Liang <liangwenpeng@huawei.com>
  Will Deacon <will@kernel.org>
  Willem de Bruijn <willemb@google.com>
  Xiaochen Shen <xiaochen.shen@intel.com>
  Xiongfeng Wang <wangxiongfeng2@huawei.com>
  Yixian Liu <liuyixian@huawei.com>
  Yu Zhao <yuzhao@google.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Changzhong <zhangchangzhong@huawei.com>
  Zhang Qilong <zhangqilong3@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   9f4b26f3ea18..42af416d7146  42af416d71462a72b02ba6ac632c8dcb9ce729a0 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 05:36:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 05:36:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43065.77557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkhHX-0003r2-FS; Thu, 03 Dec 2020 05:35:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43065.77557; Thu, 03 Dec 2020 05:35:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkhHX-0003qv-CL; Thu, 03 Dec 2020 05:35:55 +0000
Received: by outflank-mailman (input) for mailman id 43065;
 Thu, 03 Dec 2020 01:17:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FEMK=FH=google.com=jwerner@srs-us1.protection.inumbo.net>)
 id 1kkdFn-0003kK-61
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 01:17:51 +0000
Received: from mail-io1-xd44.google.com (unknown [2607:f8b0:4864:20::d44])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0041973b-43b7-48b6-b8eb-9f3db2d58ec3;
 Thu, 03 Dec 2020 01:17:49 +0000 (UTC)
Received: by mail-io1-xd44.google.com with SMTP id i9so392974ioo.2
 for <xen-devel@lists.xenproject.org>; Wed, 02 Dec 2020 17:17:49 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0041973b-43b7-48b6-b8eb-9f3db2d58ec3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=lmpg5qD2M65fYVjtflfrq1MH1jnryFUeBIm4GzOqPYU=;
        b=NewOo2y8CU/oYPw9OMV6e9ukvbZiS33BdvMIIJ+GYNU3j8Wxj4K+F6lDxVEkbcBsTf
         6RiNwLOjHTnq+6bVlvatcfXpCQzFM02B+5uVjzELo+A71UPUCwh1eLyExncnxxRK8T5Q
         UdX2sGfZRUHOzZhrlzTGk2Sr0d0JbW5FSkATo=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=lmpg5qD2M65fYVjtflfrq1MH1jnryFUeBIm4GzOqPYU=;
        b=cUhTlEOa3Dx1GT3jGYPr1dZWimaWKxHokT+nzoYctRRrrU00Jy5UivpYZgcN1DUMBG
         TGRE2R1z33kKuK/cEA1HmgjvYIK4eL9+OacuEwm4fA+2/QgRuR18UtYujJ9P5zAghOEh
         6yXjHoiixZk2AY8ZWFmBvop7CLoTqVyLk3pGWtJSiHXi1kCsFsCs41J1dNwPOmdPPkJS
         NYR+IhfZgUTCU0Ac64N06wDfzjNG0iQL0+Zfu7T7EUUJIxT+gZ15o6Q+aLXD9WiE2dcT
         WhyINg8uE6GjEl0WF2MR27cpMPeMuWJV8gNIPbZAUAM9SwOMIdt+a/vJ0jzGuTfb6dQz
         earA==
X-Gm-Message-State: AOAM530R/Jpg+wHN25IbU275n2imoZOtM0fjkvjZqvumWU3rMmEcHSyc
	NYQOXldImIPD7nLoEEIsXeJYySBW0tRx3GIJCPVqUA==
X-Google-Smtp-Source: ABdhPJwjoA0fKKVjk9eLidoE5PocSKI2LcJfWILMWRz/wAwVo+R3t4mrTndLEkKe38ytToxMrMoxjf+nYsHf9jbBYf0=
X-Received: by 2002:a02:bc9:: with SMTP id 192mr1303365jad.50.1606958268859;
 Wed, 02 Dec 2020 17:17:48 -0800 (PST)
MIME-Version: 1.0
References: <20201113235242.k6fzlwmwm2xqhqsi@tomti.i.net-space.pl> <ef50dac9-3ded-6836-28b1-7addb0bab986@gmx.de>
In-Reply-To: <ef50dac9-3ded-6836-28b1-7addb0bab986@gmx.de>
From: Julius Werner <jwerner@chromium.org>
Date: Wed, 2 Dec 2020 17:17:36 -0800
Message-ID: <CAODwPW9dxvMfXY=92pJNGazgYqcynAk72EkzOcmF7JZXhHTwSQ@mail.gmail.com>
Subject: Re: [SPECIFICATION RFC] The firmware and bootloader log specification
To: Daniel Kiper <daniel.kiper@oracle.com>
Cc: Coreboot <coreboot@coreboot.org>, The development of GRUB 2 <grub-devel@gnu.org>, 
	LKML <linux-kernel@vger.kernel.org>, systemd-devel@lists.freedesktop.org, 
	trenchboot-devel@googlegroups.com, U-Boot Mailing List <u-boot@lists.denx.de>, x86@kernel.org, 
	xen-devel@lists.xenproject.org, alecb@umass.edu, 
	alexander.burmashev@oracle.com, allen.cryptic@gmail.com, 
	andrew.cooper3@citrix.com, ard.biesheuvel@linaro.org, btrotter@gmail.com, 
	dpsmith@apertussolutions.com, eric.devolder@oracle.com, 
	eric.snowberg@oracle.com, hpa@zytor.com, hun@n-dimensional.de, 
	javierm@redhat.com, joao.m.martins@oracle.com, kanth.ghatraju@oracle.com, 
	konrad.wilk@oracle.com, krystian.hebel@3mdeb.com, leif@nuviainc.com, 
	lukasz.hawrylko@intel.com, luto@amacapital.net, michal.zygowski@3mdeb.com, 
	mjg59@google.com, mtottenh@akamai.com, 
	"Vladimir 'phcoder' Serbinenko" <phcoder@gmail.com>, piotr.krol@3mdeb.com, pjones@redhat.com, 
	Paul Menzel <pmenzel@molgen.mpg.de>, roger.pau@citrix.com, ross.philipson@oracle.com, 
	tyhicks@linux.microsoft.com, Heinrich Schuchardt <xypron.glpk@gmx.de>
Content-Type: text/plain; charset="UTF-8"

Standardizing in-memory logging sounds like an interesting idea,
especially with regard to components that can run on top of different
firmware stacks (things like GRUB or TF-A). But I would be a bit wary
of creating a "new standard to rule them all" and then expecting all
projects to switch what they have over to that. I think we all know
https://xkcd.com/927/.

Have you looked around and evaluated existing solutions that already
have some proliferation first? I think it would be much easier to
convince people to standardize on something that already has existing
users and drivers available in multiple projects.

In coreboot we're using a very simple character ring buffer that only
has two 4-byte header fields: total size and cursor (i.e. current
position where you would write the next character). The top 4 bits of
the cursor field are reserved for flags, one of which is the
"overflow" flag that tells you whether the ring-buffer already
overflowed or not (so readers know whether to print the whole ring
buffer, or only from the start to the current cursor). We try to
dimension the buffers so they don't overflow on a single boot, but
this approach allows us to log multiple boots on platforms that can
persist memory across reboots, which sometimes comes in handy.
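For illustration, the scheme described above can be sketched in a few
lines of C (the field and function names here are made up for the
sketch, not coreboot's actual identifiers):

```c
#include <stdint.h>
#include <stdlib.h>

/* Sketch of the two-field ring-buffer header described above;
 * names are illustrative, not coreboot's actual code. */
#define CURSOR_MASK   0x0fffffffu  /* low 28 bits: next write position */
#define OVERFLOW_FLAG 0x80000000u  /* one of the 4 reserved flag bits  */

struct memconsole {
    uint32_t size;    /* total capacity of body[] */
    uint32_t cursor;  /* flags in the top 4 bits, position below */
    char     body[];  /* character data follows the 8-byte header */
};

/* Append one character, wrapping and flagging overflow as needed. */
static void memconsole_putc(struct memconsole *c, char ch)
{
    uint32_t pos = c->cursor & CURSOR_MASK;

    c->body[pos++] = ch;
    if (pos >= c->size) {
        pos = 0;
        c->cursor |= OVERFLOW_FLAG;
    }
    c->cursor = (c->cursor & ~CURSOR_MASK) | pos;
}
```

A reader checks the overflow flag to decide whether to dump the whole
buffer (starting at the cursor) or only body[0] up to the cursor.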

The disadvantages of that approach compared to your proposal are the
lack of some features, like the facility field (although one can still
just print a tag like "<0>" or "<4>" after each newline) or
timestamps (coreboot instead has separate timestamp logging). But I
think a really big advantage is size: in early firmware environments
before DDR training, space is often very precious and we struggle to
find more than a couple of kilobytes for the log buffer. If I look at
the structure you proposed, that's already 24 bytes of overhead per
individual message. If we were hooking that up to our normal printk()
facility in coreboot (such that each invocation creates a new message
header), that would probably waste about a third of the whole log
buffer on overhead. I think a complicated, syslog-style logging
structure that stores individual message blocks instead of a
continuous character string isn't really suitable for firmware
logging.
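The overhead estimate above is easy to sanity-check; assuming (my
number, not from the thread) an average log line of around 50
characters, the 24-byte fixed header of the proposed bf_log_msg eats
roughly a third of the buffer:

```c
/* Back-of-the-envelope check of the per-message overhead claim.
 * header_bytes = 4 + 8 + 4 + 4 + 4 = 24 from the proposed bf_log_msg
 * fixed fields; avg_msg_bytes is an assumed average line length. */
static double log_overhead_fraction(unsigned header_bytes,
                                    unsigned avg_msg_bytes)
{
    return (double)header_bytes / (double)(header_bytes + avg_msg_bytes);
}
```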

FWIW the coreboot console has existing support in Linux
(https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/firmware/google/memconsole-coreboot.c),
SeaBIOS (https://github.com/coreboot/seabios/blob/master/src/fw/coreboot.c#L219),
TF-A (https://github.com/ARM-software/arm-trusted-firmware/blob/master/drivers/coreboot/cbmem_console/aarch64/cbmem_console.S),
GRUB (https://git.savannah.gnu.org/cgit/grub.git/tree/grub-core/term/i386/coreboot/cbmemc.c),
U-Boot (https://github.com/u-boot/u-boot/blob/master/drivers/misc/cbmem_console.c)
and probably a couple of others I'm not aware of. And the code to add
support (especially when only appending) is so simple that it takes
just a couple of lines to implement (binary code size is also always
a concern in firmware environments).

On Wed, Nov 18, 2020 at 7:04 AM Heinrich Schuchardt <xypron.glpk@gmx.de> wrote:
>
> On 14.11.20 00:52, Daniel Kiper wrote:
> > Hey,
> >
> > This is the next attempt to create a firmware and bootloader log specification.
> > Due to high interest across the industry, it is an extension of the initial
> > bootloader-log-only specification. It takes into account most of the
> > comments I have received so far.
> >
> > The goal is to pass all logs produced by various boot components to the
> > running OS. The OS kernel should expose these logs to the user space
> > and/or process them internally if needed. The content of these logs
> > should be human readable. However, they should also contain the
> > information which allows admins to do e.g. boot time analysis.
> >
> > The log specification should be as platform agnostic and self-contained
> > as possible. The final version of this spec should be merged into
> > existing specifications, e.g. UEFI, ACPI, Multiboot2, or be a standalone
> > spec, e.g. as a part of the OASIS Standards. The former seems better but is
> > not perfect either...
> >
> > Here is the description (pseudocode) of the structures which will be
> > used to store the log data.
>
> Hello Daniel,
>
> thanks for your suggestion which makes good sense to me.
>
> Why can't we simply use the message format defined in "The Syslog
> Protocol", https://tools.ietf.org/html/rfc5424?
>
> >
> >   struct bf_log
> >   {
> >     uint32_t   version;
> >     char       producer[64];
> >     uint64_t   flags;
> >     uint64_t   next_bf_log_addr;
> >     uint32_t   next_msg_off;
> >     bf_log_msg msgs[];
>
> As bf_log_msg does not have a defined length, msgs[] cannot be an array.
>
> >   }
> >
> >   struct bf_log_msg
> >   {
> >     uint32_t size;
> >     uint64_t ts_nsec;
> >     uint32_t level;
> >     uint32_t facility;
> >     uint32_t msg_off;
> >     char     strings[];
> >   }
> >
> > The members of struct bf_log:
> >   - version: the firmware and bootloader log format version number, 1 for now,
> >   - producer: the producer/firmware/bootloader/... type; the length
> >     allows ASCII UUID storage if somebody needs that functionality,
> >   - flags: it can be used to store information about log state, e.g.
> >     it was truncated or not (does it make sense to have information
> >     about the number of lost messages?),
> >   - next_bf_log_addr: address of next bf_log struct; none if zero (I think
> >     newer spec versions should not change anything in first 5 bf_log members;
> >     this way older log parsers will be able to traverse/copy all logs regardless
> >     of version used in one log or another),
> >   - next_msg_off: the offset, in bytes, from the beginning of the bf_log struct,
> >     of the next byte after the last log message in the msgs[]; i.e. the offset
> >     of the next available log message slot; it is equal to the total size of
> >     the log buffer including the bf_log struct,
>
> Why would you need an offset to the first unused byte?
>
> We possibly have multiple producers of messages:
>
> - TF-A
> - U-Boot
> - iPXE
> - GRUB
>
> What we need is the offset to the next struct bf_log.
>
> >   - msgs: the array of log messages,
> >   - should we add CRC or hash or signatures here?
> >
> > The members of struct bf_log_msg:
> >   - size: total size of bf_log_msg struct,
> >   - ts_nsec: timestamp expressed in nanoseconds starting from 0,
>
> Would each message producer start from 0?
>
> Shouldn't we use the time from the hardware RTC if it is available via
> boot service GetTime()?
>
> >   - level: similar to syslog meaning; can be used to differentiate normal messages
> >     from debug messages; the exact interpretation depends on the current producer
> >     type specified in the bf_log.producer,
> >   - facility: similar to syslog meaning; can be used to differentiate the sources of
> >     the messages, e.g. message produced by networking module; the exact interpretation
> >     depends on the current producer type specified in the bf_log.producer,
> >   - msg_off: the log message offset in strings[],
>
> What is this field good for? Why don't you start the string at
> strings[0]?
> What would be useful would be the offset to the next bf_log_msg.
>
> >   - strings[0]: the beginning of log message type, similar to the facility member but
> >     NUL terminated string instead of integer; this will be used by, e.g., the GRUB2
> >     for messages printed using grub_dprintf(),
> >   - strings[msg_off]: the beginning of log message, NUL terminated string.
>
>
> Why strings in plural? Do you want to put multiple strings into
> 'strings'? What identifies the last string?
>
>
> >
> > Note: The producers are free to use/ignore any given set of level, facility and/or
> >       log type members. Though the usage of these members has to be clearly defined.
> >       Ignored integer members should be set to 0. Ignored log message type should
> >       contain an empty NUL terminated string. The log message is mandatory but can
> >       be an empty NUL terminated string.
> >
> > There is still the not yet fully solved problem of how the logs should be presented to the OS.
> > On the UEFI platforms we can use config tables to do that. Then probably
> > bf_log.next_bf_log_addr should not be used.
>
> Why? How would you otherwise find the entries of the next producer in
> the configuration table? What I am missing is a GUID for the
> configuration table.
>
> > On the ACPI and Device Tree platforms
> > we can use these mechanisms to present the logs to the OSes. The situation gets more
>
> I do not understand this.
>
> UEFI implementations use either of ACPI and device-trees and support
> configuration tables. Why do you want to use some other binding?
>
> Best regards
>
> Heinrich
>
> > difficult if neither of these mechanisms is present. However, maybe we should not
> > bother too much about that because these platforms are getting less and less
> > common.
> >
> > Anyway, I am aware that this is not a specification per se. The goal of this email is
> > to continue the discussion about the idea of the firmware and bootloader log and to
> > find out where the final specification should land. Of course taking into account the
> > assumptions made above.
> >
> > You can find previous discussions about related topics at [1], [2] and [3].
> >
> > Additionally, I am going to present this during GRUB mini-summit session on Tuesday,
> > 17th of November at 15:45 UTC. So, if you want to discuss the log design please join
> > us. You can find more details here [4].
> >
> > Daniel
> >
> > [1] https://lists.gnu.org/archive/html/grub-devel/2019-10/msg00107.html
> > [2] https://lists.gnu.org/archive/html/grub-devel/2019-11/msg00079.html
> > [3] https://lists.gnu.org/archive/html/grub-devel/2020-05/msg00223.html
> > [4] https://twitter.com/3mdeb_com/status/1327278804100931587
> >
>


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 06:35:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 06:35:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43119.77569 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkiCW-00015H-PD; Thu, 03 Dec 2020 06:34:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43119.77569; Thu, 03 Dec 2020 06:34:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkiCW-00015A-Km; Thu, 03 Dec 2020 06:34:48 +0000
Received: by outflank-mailman (input) for mailman id 43119;
 Thu, 03 Dec 2020 06:34:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lmMJ=FH=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kkiCV-000155-GT
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 06:34:47 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.21])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 89fb8e21-f1b9-4377-a070-62adddff8373;
 Thu, 03 Dec 2020 06:34:46 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.4 DYNA|AUTH)
 with ESMTPSA id 60a649wB36YdYqy
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 3 Dec 2020 07:34:39 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 89fb8e21-f1b9-4377-a070-62adddff8373
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1606977285;
	s=strato-dkim-0002; d=aepfle.de;
	h=Message-Id:Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH:From:
	Subject:Sender;
	bh=aOMOXSH1cw5HhPN/7/vRbzPCcihlD18Pa5LoVzmWzLM=;
	b=ZEqYBO48e8kD0eRUpvQOPEFHHydET8+fY90s+gt2UwYfl7ZyfwR3ljroTtAG4PXt7f
	tq23kVTcejx13b+LcWys5e5x4REPkNeoZr/uNAeL21b5Fd4JQqnkw69MYnnP8u+u/4zE
	LS3o4HkV4GHKfwzrUnnoprDsfsfcfQ8VoMAvdJ54vsrRJYCujM2PT/WBLuwMaYQuqao/
	4flR6ctwoZz3Pr9Nf/Gz9Wealg9w+ukSunlaCJH7j6bsZzvl7sLa9UPZFD9QVqSv5B/D
	dafpi2rYdj4pvhk4mXgZunc29THZb1C+hoTLt+DEV6AX9b30m9JJGE38sEuiMK4sYIYN
	GgVQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3Gwg"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2] tools/hotplug: allow tuning of xenwatchdogd arguments
Date: Thu,  3 Dec 2020 07:34:36 +0100
Message-Id: <20201203063436.4503-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently the arguments for xenwatchdogd are hardcoded to a 15s
keep-alive interval and a 30s timeout.

It is not possible to tweak these values via
/etc/systemd/system/xen-watchdog.service.d/*.conf because ExecStart
cannot be replaced. The only option would be a private copy of
/etc/systemd/system/xen-watchdog.service, which may get out of sync
with the Xen-provided xen-watchdog.service.

Adjust the service file to recognize XENWATCHDOGD_ARGS= in a
private unit configuration file.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---

v2: fix "test -n" in init.d

 tools/hotplug/Linux/init.d/xen-watchdog.in          | 7 ++++++-
 tools/hotplug/Linux/systemd/xen-watchdog.service.in | 4 +++-
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/tools/hotplug/Linux/init.d/xen-watchdog.in b/tools/hotplug/Linux/init.d/xen-watchdog.in
index c05f1f6b6a..b36a94bd8e 100644
--- a/tools/hotplug/Linux/init.d/xen-watchdog.in
+++ b/tools/hotplug/Linux/init.d/xen-watchdog.in
@@ -19,6 +19,11 @@
 
 . @XEN_SCRIPT_DIR@/hotplugpath.sh
 
+xencommons_config=@CONFIG_DIR@/@CONFIG_LEAF_DIR@
+
+test -f $xencommons_config/xencommons && . $xencommons_config/xencommons
+
+test -n "$XENWATCHDOGD_ARGS" || XENWATCHDOGD_ARGS='30 15'
 DAEMON=${sbindir}/xenwatchdogd
 base=$(basename $DAEMON)
 
@@ -46,7 +51,7 @@ start() {
 	local r
 	echo -n $"Starting domain watchdog daemon: "
 
-	$DAEMON 30 15
+	$DAEMON $XENWATCHDOGD_ARGS
 	r=$?
 	[ "$r" -eq 0 ] && success $"$base startup" || failure $"$base startup"
 	echo
diff --git a/tools/hotplug/Linux/systemd/xen-watchdog.service.in b/tools/hotplug/Linux/systemd/xen-watchdog.service.in
index 1eecd2a616..637ab7fd7f 100644
--- a/tools/hotplug/Linux/systemd/xen-watchdog.service.in
+++ b/tools/hotplug/Linux/systemd/xen-watchdog.service.in
@@ -6,7 +6,9 @@ ConditionPathExists=/proc/xen/capabilities
 
 [Service]
 Type=forking
-ExecStart=@sbindir@/xenwatchdogd 30 15
+Environment="XENWATCHDOGD_ARGS=30 15"
+EnvironmentFile=-@CONFIG_DIR@/@CONFIG_LEAF_DIR@/xencommons
+ExecStart=@sbindir@/xenwatchdogd $XENWATCHDOGD_ARGS
 KillSignal=USR1
 
 [Install]


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 06:48:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 06:48:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43125.77581 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkiPJ-00028Z-Uo; Thu, 03 Dec 2020 06:48:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43125.77581; Thu, 03 Dec 2020 06:48:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkiPJ-00028S-RK; Thu, 03 Dec 2020 06:48:01 +0000
Received: by outflank-mailman (input) for mailman id 43125;
 Thu, 03 Dec 2020 06:48:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yflw=FH=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kkiPI-00028N-JX
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 06:48:00 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 17766bde-cbb2-4103-91ce-b0af543cff0a;
 Thu, 03 Dec 2020 06:47:59 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D5C52ABCE;
 Thu,  3 Dec 2020 06:47:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 17766bde-cbb2-4103-91ce-b0af543cff0a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606978079; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=27LBl/6fLBLGCT21NxwDcQWtqBaoIqx4FnlGoT+9bQo=;
	b=eORYaJrlm387v9Lq+0dzF0oCsZsGQM3Uxoxj3vj8sNo0/Bwm9SJuMXhhQES4xGOmov66vx
	HDh9sENkmXefsIIGivL9VITItsFMfsA9NZss5x3DirFTMWSxTS38DTtiuHxZNzQuaa0K5r
	EmoElm0iRvdTZFhOOlNsK2CO99NH/DE=
Subject: Re: [PATCH v2] tools/hotplug: allow tuning of xenwatchdogd arguments
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20201203063436.4503-1-olaf@aepfle.de>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <3fc53e0a-c7b4-4c56-9ba8-0b0a55c10f50@suse.com>
Date: Thu, 3 Dec 2020 07:47:58 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201203063436.4503-1-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="XvAy2mYnAx7Bfdv28IvPpt3Hy4hEI3Krg"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--XvAy2mYnAx7Bfdv28IvPpt3Hy4hEI3Krg
Content-Type: multipart/mixed; boundary="uLJeKkrC3EAMCiXtXy7Amnfri60eK76m2";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <3fc53e0a-c7b4-4c56-9ba8-0b0a55c10f50@suse.com>
Subject: Re: [PATCH v2] tools/hotplug: allow tuning of xenwatchdogd arguments
References: <20201203063436.4503-1-olaf@aepfle.de>
In-Reply-To: <20201203063436.4503-1-olaf@aepfle.de>

--uLJeKkrC3EAMCiXtXy7Amnfri60eK76m2
Content-Type: multipart/mixed;
 boundary="------------6F5CD8E246DD33A6CF305423"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------6F5CD8E246DD33A6CF305423
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 03.12.20 07:34, Olaf Hering wrote:
> Currently the arguments for xenwatchdogd are hardcoded with 15s
> keep-alive interval and 30s timeout.
>
> It is not possible to tweak these values via
> /etc/systemd/system/xen-watchdog.service.d/*.conf because ExecStart
> can not be replaced. The only option would be a private copy
> /etc/systemd/system/xen-watchdog.service, which may get out of sync
> with the Xen provided xen-watchdog.service.
>
> Adjust the service file to recognize XENWATCHDOGD_ARGS= in a
> private unit configuration file.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> ---
>
> v2: fix "test -n" in init.d
>
>   tools/hotplug/Linux/init.d/xen-watchdog.in          | 7 ++++++-
>   tools/hotplug/Linux/systemd/xen-watchdog.service.in | 4 +++-
>   2 files changed, 9 insertions(+), 2 deletions(-)

Could you please add a section for XENWATCHDOGD_ARGS in
tools/hotplug/Linux/init.d/sysconfig.xencommons.in ?


Juergen
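
A sketch of the kind of entry being asked for, following the `## Type` / `## Default` comment style used elsewhere in sysconfig.xencommons.in (the wording shown is illustrative, not committed text):

```sh
## Type: string
## Default: "30 15"
#
# Arguments for xenwatchdogd: the watchdog timeout in seconds followed
# by the keep-alive interval in seconds, as used by the xen-watchdog
# service.
#XENWATCHDOGD_ARGS="30 15"
```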

--------------6F5CD8E246DD33A6CF305423--

--uLJeKkrC3EAMCiXtXy7Amnfri60eK76m2--

--XvAy2mYnAx7Bfdv28IvPpt3Hy4hEI3Krg
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/Iih4FAwAAAAAACgkQsN6d1ii/Ey/S
6wf/R/AhEhP8b3GhHKiqbtTy+h83FeKxZz1r1FZMQVrQepysrG0d9HXYGRR4ZSNj3HzFKKK+G/cy
DkaAPySxgvwXpA7+dthR3A2wc15hGKP3ri8IqRyr/2Wb2penoOa+L6JkFErySDVqwbdZvcU4qaxm
IdUfIP6hRLaLZv0Voyvw0QCzwx2sEokRVBn5+mnJ79FhuO/aQagomOGL68SDAJx3eyYo4TE58Bn4
vi5OXFoDNyhhdc8UFvii1N45HZpacvZ1EIUEtII5xxmsvYQvto5QwgT4rKmR6NLkdiwsX5C0OvjF
xa6D+9Xo4fIUKcowPmrexBk9F+pKBmhGdccBBSDooQ==
=X8aO
-----END PGP SIGNATURE-----

--XvAy2mYnAx7Bfdv28IvPpt3Hy4hEI3Krg--


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 07:20:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 07:20:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43133.77593 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkiuC-00054p-IV; Thu, 03 Dec 2020 07:19:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43133.77593; Thu, 03 Dec 2020 07:19:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkiuC-00054i-FR; Thu, 03 Dec 2020 07:19:56 +0000
Received: by outflank-mailman (input) for mailman id 43133;
 Thu, 03 Dec 2020 07:19:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lmMJ=FH=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kkiuA-00054d-Ob
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 07:19:54 +0000
Received: from mo4-p01-ob.smtp.rzone.de (unknown [85.215.255.53])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 978cc648-d550-4ede-ad82-04ad26eb4d88;
 Thu, 03 Dec 2020 07:19:53 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.4 DYNA|AUTH)
 with ESMTPSA id 60a649wB37JkZ2d
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 3 Dec 2020 08:19:46 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 978cc648-d550-4ede-ad82-04ad26eb4d88
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1606979992;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=JpCtL3qbfGKidI+NEmgXGicjB/JyJQLZXBgAfjzf7ZU=;
	b=EYtmm0EDHwoCwSLrv2U7sL/3QIY19uqzLs3DB9Ry/cu/L4Vu1CLn5GoIJPTbbzMnTt
	ghbmFNIhlAOsza1CEWY6XIVT6phKPv47611UWBvpY+/nVJtRle8J4+qEMecE2MyvhvXA
	WtaGZ2eb1g+RbZwPeeq0YA3L8V2r1+ygdX8ZrPWKjgb+10h29EJ81EZiR/IIhEtQeYZ6
	WV2f9yk62Zd2VMt7cldjkaSkNHedTlMMGlQZxcqh+ED+e/jzW98HTYiGgyh0femmXVFi
	Q2guXVgkH5RZWMJBGtMTG+lyQDEkCZa5vu+4pkV2u6Bau9YdTxHyfJsfVpbHjyKY4Q1h
	sikA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDXdoX8l8pYAcz5OTW+uX"
X-RZG-CLASS-ID: mo00
Date: Thu, 3 Dec 2020 08:19:39 +0100
From: Olaf Hering <olaf@aepfle.de>
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH v2] tools/hotplug: allow tuning of xenwatchdogd
 arguments
Message-ID: <20201203081939.7265ac3d.olaf@aepfle.de>
In-Reply-To: <3fc53e0a-c7b4-4c56-9ba8-0b0a55c10f50@suse.com>
References: <20201203063436.4503-1-olaf@aepfle.de>
	<3fc53e0a-c7b4-4c56-9ba8-0b0a55c10f50@suse.com>
X-Mailer: Claws Mail 2020.08.19 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 boundary="Sig_/IIE1r9olYuIgrzDFlnEO3q4"; protocol="application/pgp-signature"

--Sig_/IIE1r9olYuIgrzDFlnEO3q4
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Thu, 3 Dec 2020 07:47:58 +0100,
Jürgen Groß <jgross@suse.com> wrote:

> Could you please add a section for XENWATCHDOGD_ARGS in
> tools/hotplug/Linux/init.d/sysconfig.xencommons.in ?

No. Such details have to go into a to-be-written xencommons(8).

There will be a xenwatchdogd(8) shortly, which will cover this knob.


Olaf

--Sig_/IIE1r9olYuIgrzDFlnEO3q4
Content-Type: application/pgp-signature
Content-Description: OpenPGP digital signature

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAl/IkYsACgkQ86SN7mm1
DoBS1g/9G0hcHBXQ7TX/+gJu0C+Enrt2yj/CZ30M+ZTajY7g3uDBzwJFsq55vNpZ
MS3wxRYlpvpT+khPQ3fFJRQOa/9xSAascyAjpgXjrOQ3l8ixCi548HDdzXZYMaWJ
ShdIqvtP5ROrUO1K74Q1bySajoLbAnuD6ni3s/boE42kM/tBNMUZYFjlEWkhMH3U
dJTTRnrvyNIdWz9CiQ9SGL2B3LSk/mIvHYaAwNOnJy4xIzdR+dV7Kp1iAoxU/ekM
KAzm4nFdZSV1ibMkcYmvmd7y8BQOovVSyQkFL6p8bRSE+FN2p/oNkgoU3BUnGjQA
wav0k8z/UCykrCyFmcgOePGbaSG8GdM6JPYbG3F/aw4GbduwaSbirfIX+7WNkCbV
VanmutKmPLT2pNghd+ZAiD7txskowQocNjkYv17FXKvmROzgTBySm0L94LXm9h2a
j6OFStr6tvYME1j5sPcfsNFzZpCJz5XtV3QxzpVrZX/dsSHKOZfUIC5/ZNFWCX7V
OhP8aCw7SGbnPxheozDEnesNUppYmQPCYbEwHKQ3EHuO31J5SoEZFxmCHqHhLDpv
lPUw6Hsslaxzqs1A7jxOaV0dMv0Kifz6cGPtvqYFXsIyvxOxVlilB5lbLO11YldQ
ZmwLxcMc7oPiDaDXSRXb8GDDtP1xgrMGnTI+0qLxdwKfkenbQew=
=JEwf
-----END PGP SIGNATURE-----

--Sig_/IIE1r9olYuIgrzDFlnEO3q4--


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 07:51:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 07:51:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43139.77605 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkjOX-0000Hj-50; Thu, 03 Dec 2020 07:51:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43139.77605; Thu, 03 Dec 2020 07:51:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkjOX-0000Hc-1X; Thu, 03 Dec 2020 07:51:17 +0000
Received: by outflank-mailman (input) for mailman id 43139;
 Thu, 03 Dec 2020 07:51:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkjOV-0000HU-Nc; Thu, 03 Dec 2020 07:51:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkjOV-0005qU-IA; Thu, 03 Dec 2020 07:51:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkjOV-0006Eo-9L; Thu, 03 Dec 2020 07:51:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kkjOV-0004wE-8p; Thu, 03 Dec 2020 07:51:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ZrJkIF2hvEYw4XpOqMcBQT3ueqn2myHeRGTbXAdq/h8=; b=YWRAwX128sL4TzN1Kz7J4XZf1C
	tApljEJnCQVlQcx1W+dyRGcP3ZQi87ecLPZoJuXaEzMHKSzpqB4W2OE53jLafbQKmEJ4+gj1Rh2gh
	NhS/JyG/9hUJOU8JSE3JgKKIg3Bk2JMVQKXkR2cKyw+/LXJADsgH3niBxbVqXuU8jUsA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157162-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157162: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 03 Dec 2020 07:51:15 +0000

flight 157162 qemu-mainline real [real]
flight 157173 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157162/
http://logs.test-lab.xenproject.org/osstest/logs/157173/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  104 days
Failing since        152659  2020-08-21 14:07:39 Z  103 days  216 attempts
Testing same since   157142  2020-12-01 20:39:57 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69355 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 08:12:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 08:12:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43152.77620 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkjjE-0002oI-Bw; Thu, 03 Dec 2020 08:12:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43152.77620; Thu, 03 Dec 2020 08:12:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkjjE-0002oB-86; Thu, 03 Dec 2020 08:12:40 +0000
Received: by outflank-mailman (input) for mailman id 43152;
 Thu, 03 Dec 2020 08:12:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vSHx=FH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkjjC-0002o6-Np
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 08:12:38 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4b4bf2e8-5497-49b1-9b38-f7362e97abcb;
 Thu, 03 Dec 2020 08:12:37 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 33349AE95;
 Thu,  3 Dec 2020 08:12:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4b4bf2e8-5497-49b1-9b38-f7362e97abcb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606983156; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=LNIkhb3LvLaWFc4Cvk7s0fVDJExphp0lqT/VK6hAPps=;
	b=RmoVxP27qESIVuXTZa2aJd/1ZbAe8cv77gcDoMEvl0w9ET68CpV3YRzv+oEgX7mS8Cax5l
	8rBcHXFB6duNTcWwloL9igUthmyiFJ4Il0Whl0EewWQp+Vn/gdeG2BDGYkNr0LSl4LdNgo
	Sq8D+XKxrR5bNLGp0JlMewapGs2rzxs=
Subject: Re: [PATCH v2 11/17] xen/hypfs: add getsize() and findentry()
 callbacks to hypfs_funcs
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-12-jgross@suse.com>
 <e8a876c9-b1bf-62a4-d30c-a2c646cb68f7@suse.com>
 <e2909e87-473f-dbf5-9e58-7c817ac59e3f@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d878f80d-d71d-93cd-8ee1-06fb860bf390@suse.com>
Date: Thu, 3 Dec 2020 09:12:31 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <e2909e87-473f-dbf5-9e58-7c817ac59e3f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 02.12.2020 16:51, Jürgen Groß wrote:
> On 02.12.20 16:42, Jan Beulich wrote:
>> On 01.12.2020 09:21, Juergen Gross wrote:
>>> Add a getsize() function pointer to struct hypfs_funcs for being able
>>> to have dynamically filled entries without the need to take the hypfs
>>> lock each time the contents are being generated.
>>>
>>> For directories add a findentry callback to the vector and modify
>>> hypfs_get_entry_rel() to use it.
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>> with maybe one further small adjustment:
>>
>>> @@ -176,15 +188,41 @@ static int hypfs_get_path_user(char *buf,
>>>       return 0;
>>>   }
>>>   
>>> +struct hypfs_entry *hypfs_leaf_findentry(const struct hypfs_entry_dir *dir,
>>> +                                         const char *name,
>>> +                                         unsigned int name_len)
>>> +{
>>> +    return ERR_PTR(-ENOENT);
>>> +}
>>
>> ENOENT seems odd to me here. There looks to be no counterpart to
>> EISDIR, so maybe ENODATA, EACCES, or EPERM?
> 
> Hmm, why?
> 
> In case I have /a/b and I'm looking for /a/b/c ENOENT seems to be just
> fine?
> 
> Or I could add ENOTDIR.

Oh, there actually is supposed to be such an entry, but public/errno.h
is simply missing it. Yes - ENOTDIR is what I was thinking of when
saying "there looks to be no counterpart to EISDIR".

Jan
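[Editor's note: the ENOENT-vs-ENOTDIR distinction settled above can be sketched as follows. The ERR_PTR/PTR_ERR stand-ins and the opaque hypfs types here are illustrative simplifications, not Xen's actual definitions.]

```c
#include <errno.h>
#include <stddef.h>

/* Minimal stand-ins for Xen's ERR_PTR()/PTR_ERR() helpers, for
 * illustration only. */
#define ERR_PTR(e)  ((void *)(long)(e))
#define PTR_ERR(p)  ((long)(p))

struct hypfs_entry_dir;             /* opaque in this sketch */
struct hypfs_entry;

/* A leaf has no children, so any path lookup that tries to descend
 * below it must fail.  ENOTDIR ("not a directory") describes that
 * situation more precisely than ENOENT, which would instead suggest
 * the leaf itself is missing. */
static struct hypfs_entry *leaf_findentry(const struct hypfs_entry_dir *dir,
                                          const char *name,
                                          unsigned int name_len)
{
    (void)dir; (void)name; (void)name_len;
    return ERR_PTR(-ENOTDIR);
}
```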


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 08:20:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 08:20:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43160.77635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkjrD-0003qZ-Ea; Thu, 03 Dec 2020 08:20:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43160.77635; Thu, 03 Dec 2020 08:20:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkjrD-0003qS-BR; Thu, 03 Dec 2020 08:20:55 +0000
Received: by outflank-mailman (input) for mailman id 43160;
 Thu, 03 Dec 2020 08:20:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vSHx=FH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkjrC-0003qN-5N
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 08:20:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ceeae4b4-107d-4e50-a51c-bfc0412f8a2e;
 Thu, 03 Dec 2020 08:20:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 61F10AC55;
 Thu,  3 Dec 2020 08:20:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ceeae4b4-107d-4e50-a51c-bfc0412f8a2e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606983651; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=4puBeoSg4tODVpXmLAW0sGWhnauZKc2JuH4Jn/niFAE=;
	b=Oe06/56bn4A3nCVy8onyNtDB57y4gC/VKHD528xFqG7TyV4sm76OW6n32MuIApx7UVIN+K
	bvZsI7pWv++M/0I+28LPmz5RAgta+N12iXo34qS+F8Hfuh1CtmmGhMf4+UxsuKN/fuz009
	WUpF0Dwn9Xd0O3QtghAiEvJHA2fQMHU=
Subject: Re: [PATCH] x86/IRQ: bump max number of guests for a shared IRQ to 31
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Cc: andrew.cooper3@citrix.com, roger.pau@citrix.com, wl@xen.org,
 xen-devel@lists.xenproject.org
References: <1606780777-30718-1-git-send-email-igor.druzhinin@citrix.com>
 <b98d3517-6c9d-6f40-6e28-cde142978143@suse.com>
 <3c9735ec-2b04-1ace-2adb-d72b32c4a5f9@citrix.com>
 <88019c81-1988-2512-282b-53b61adf09c6@suse.com>
 <bcb0964d-9444-f5e6-372f-d8daa460fcfd@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c29f706f-2fef-fbca-1b45-776882b8445c@suse.com>
Date: Thu, 3 Dec 2020 09:20:50 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <bcb0964d-9444-f5e6-372f-d8daa460fcfd@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 02.12.2020 17:34, Igor Druzhinin wrote:
> On 02/12/2020 15:21, Jan Beulich wrote:
>> On 02.12.2020 15:53, Igor Druzhinin wrote:
>>> On 02/12/2020 09:25, Jan Beulich wrote:
>>>> Instead I'm wondering whether this wouldn't better be a Kconfig
>>>> setting (or even command line controllable). There don't look to be
>>>> any restrictions on the precise value chosen (i.e. 2**n-1 like is
>>>> the case for old and new values here, for whatever reason), so a
>>>> simple permitted range of like 4...64 would seem fine to specify.
>>>> Whether the default then would want to be 8 (close to the current
>>>> 7) or higher (around the actually observed maximum) is a different
>>>> question.
>>>
>>> I'm in favor of a command line argument here - it would be much less trouble
>>> if a higher limit was suddenly necessary in the field. The default IMO
>>> should definitely be higher than 8 - I'd stick with number 32 which to me
>>> should cover our real world scenarios and apply some headroom for the future.
>>
>> Well, I'm concerned of the extra memory overhead. Every IRQ,
>> sharable or not, will get the extra slots allocated with the
>> current scheme. Perhaps a prereq change then would be to only
>> allocate multi-guest arrays for sharable IRQs, effectively
>> shrinking the overhead in particular for all MSI ones?
> 
> That's one way to improve overall system scalability but in that area
> there is certainly much bigger fish to fry elsewhere. With 32 elements in the
> array we get 200 bytes of overhead per structure, with 16 it's just 72 extra
> bytes which in the unattainable worst case scenario of every single vector taken
> in 512 CPU machine would only account for several MB of overhead.

I'm generally unhappy with this way of thinking, as this is what has
been leading to unnecessary growth of all sorts of software and its
needs of resources. Yes, there surely are larger gains to be had
elsewhere, but that's imo still no excuse to grow memory allocations
"blindly" despite it being clear that in a fair share of cases a
fair part of the allocated memory won't be used. This said, ...

> I'd start with dynamic array allocation first and setting the limit to 16 that
> should be enough for now. And then if that default value needs to be raised
> we can consider further improvements.

... I'm puzzled by this plan of yours, because unless I'm
misunderstanding dynamic array allocation is what I've been asking
for, effectively. Now that we have xmalloc_flex_struct(), this
should even be relatively straightforward, i.e. in particular with
no need to open code complex expressions.

Jan
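[Editor's note: the dynamic array allocation both sides converge on above is the classic flexible-array-member pattern. The sketch below models it with a simplified structure (field names are illustrative, not irq_guest_action_t's exact layout) and plain malloc(); the offsetof(type, field[n]) size computation is the same one Xen's xmalloc_flex_struct() wraps.]

```c
#include <stdlib.h>
#include <stddef.h>

/* Simplified model of a per-IRQ action structure whose guest array
 * is sized at allocation time rather than at a fixed compile-time
 * maximum. */
struct guest_action {
    unsigned int nr_guests;
    void *guest[];              /* flexible array member */
};

/* Allocate the header plus exactly n array slots.  Only sharable
 * IRQs would pass a large n; others pay for no unused slots. */
static struct guest_action *alloc_action(unsigned int n)
{
    struct guest_action *a = malloc(offsetof(struct guest_action, guest[n]));

    if ( a )
        a->nr_guests = 0;
    return a;
}
```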


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 08:41:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 08:41:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43169.77649 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkkBS-0005kk-97; Thu, 03 Dec 2020 08:41:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43169.77649; Thu, 03 Dec 2020 08:41:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkkBS-0005kd-5m; Thu, 03 Dec 2020 08:41:50 +0000
Received: by outflank-mailman (input) for mailman id 43169;
 Thu, 03 Dec 2020 08:41:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vSHx=FH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkkBQ-0005kY-UH
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 08:41:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 01522a0f-91a4-4cba-820b-5564c787c73f;
 Thu, 03 Dec 2020 08:41:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2000FAE95;
 Thu,  3 Dec 2020 08:41:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 01522a0f-91a4-4cba-820b-5564c787c73f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606984906; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RIRp1Sgm2T1K2iNUb2R8+gnpcvMSXg8rBvnNLo4h6Fw=;
	b=jsarj3sYnEHZux6//tHjkauK+3fWu1jGcUHNbux0O7F2GNlJRbJE03gp9ln2i5XvpvsHmF
	duAaUvdT0OCjcVj5i+WhpxTgBYU/iHjbd8B4XiFGHKbYTQYDI0DTovZILc4GgdPwAJWbA4
	QhgTIaNTztynwp/NS/36u7NS82omM3U=
Subject: Re: [PATCH v2 1/2] x86/IRQ: make max number of guests for a shared
 IRQ configurable
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Cc: andrew.cooper3@citrix.com, george.dunlap@citrix.com, iwj@xenproject.org,
 julien@xen.org, sstabellini@kernel.org, wl@xen.org, roger.pau@citrix.com,
 xen-devel@lists.xenproject.org
References: <1606960706-21274-1-git-send-email-igor.druzhinin@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4374579d-9cac-3500-7954-9cfa89e88f41@suse.com>
Date: Thu, 3 Dec 2020 09:41:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <1606960706-21274-1-git-send-email-igor.druzhinin@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.12.2020 02:58, Igor Druzhinin wrote:
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -1641,6 +1641,15 @@ This option is ignored in **pv-shim** mode.
>  ### nr_irqs (x86)
>  > `= <integer>`
>  
> +### irq_max_guests (x86)

As a rule of thumb, new options want to use - instead of _ as
separators. There was a respective discussion quite some time
ago. My patch to treat - and _ equally when parsing options
wasn't liked by Andrew ...

> @@ -435,6 +439,9 @@ int __init init_irq_data(void)
>      for ( ; irq < nr_irqs; irq++ )
>          irq_to_desc(irq)->irq = irq;
>  
> +    if ( !irq_max_guests || irq_max_guests > 255)

The 255 is a consequence of the struct field being u8, aiui?
There should be a direct connection between the two, i.e. when
changing the underlying property preferably only one place
should require touching (possible e.g. by converting to a bit
field with its width established via a #define), or comments
on either side should point at the other one.

Also as a nit, there's a blank missing ahead of the closing
paren.

> @@ -1564,7 +1570,8 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
>          if ( newaction == NULL )
>          {
>              spin_unlock_irq(&desc->lock);
> -            if ( (newaction = xmalloc(irq_guest_action_t)) != NULL &&
> +            if ( (newaction = xmalloc_bytes(sizeof(irq_guest_action_t) +
> +                  irq_max_guests * sizeof(action->guest[0]))) != NULL &&

As said in the (later) reply to v1, please try to make use of
xmalloc_flex_struct() here.

> @@ -1633,11 +1640,11 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
>          goto retry;
>      }
>  
> -    if ( action->nr_guests == IRQ_MAX_GUESTS )
> +    if ( action->nr_guests == irq_max_guests )
>      {
>          printk(XENLOG_G_INFO "Cannot bind IRQ%d to dom%d. "
> -               "Already at max share.\n",
> -               pirq->pirq, v->domain->domain_id);
> +               "Already at max share %u, increase with irq_max_guests= option.\n",
> +               pirq->pirq, v->domain->domain_id, irq_max_guests);

If you already need to largely redo this printk(), please
- switch to %pd at the same time,
- don't have the format string extend across multiple lines,
- preferably avoid to use full stops (we don't use any in the vast
  majority of log messages), e.g. by making it "cannot bind IRQ%d
  to %pd, already at max share %u (increase with irq-max-guests=
  option)", if you want to stay close to the original wording (I
  think this could be expressed more compactly as well).

Jan
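[Editor's note: Jan's point that the 255 bound and the u8 field width should be derived from a single definition can be sketched like this. The macro and function names are hypothetical; the clamping logic mirrors the quoted init_irq_data() check.]

```c
/* Width of the nr_guests counter, kept in ONE place so the range
 * check and the structure field cannot silently drift apart.
 * (Names here are illustrative, not Xen's.) */
#define IRQ_MAX_GUESTS_BITS 8
#define IRQ_MAX_GUESTS_MAX  ((1u << IRQ_MAX_GUESTS_BITS) - 1)  /* 255 */

/* Mirror of the quoted check: an unset (0) or out-of-range
 * irq_max_guests= value falls back to the built-in default. */
static unsigned int clamp_irq_max_guests(unsigned int v, unsigned int dflt)
{
    if ( !v || v > IRQ_MAX_GUESTS_MAX )
        return dflt;
    return v;
}
```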


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 08:47:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 08:47:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43175.77662 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkkHD-000624-2J; Thu, 03 Dec 2020 08:47:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43175.77662; Thu, 03 Dec 2020 08:47:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkkHC-00061x-Ur; Thu, 03 Dec 2020 08:47:46 +0000
Received: by outflank-mailman (input) for mailman id 43175;
 Thu, 03 Dec 2020 08:47:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yflw=FH=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kkkHB-00061Q-Ii
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 08:47:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fcb0981a-498a-47be-8126-ef64b93b59b8;
 Thu, 03 Dec 2020 08:47:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CBBD0ACBA;
 Thu,  3 Dec 2020 08:47:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fcb0981a-498a-47be-8126-ef64b93b59b8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606985263; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=eJELMgFUfE7C1siT3HQlr2UVZB/Mr7EIBBh8OE3FV8I=;
	b=TJLUU53/FAUkx+TKCYKoChI2PXH0mnU/PPNaHmtLnVwea26/58ApPnFbfsowPw5pMNYFuo
	k7LHEQNBXkdrNvY0qk4KTEdNPMl6KzeNKJ3ByUUSnkwh7ocgMq7Dk+EZf6kXOChZ7pM7km
	n7E3ITonl54eLToPC5j+GHCde5gjhaA=
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-10-jgross@suse.com>
 <ddb41dd4-485e-5ae3-9b3a-dd0aae787260@suse.com>
From: Jürgen Groß <jgross@suse.com>
Subject: Re: [PATCH v2 09/17] xen/hypfs: move per-node function pointers into
 a dedicated struct
Message-ID: <c645058b-3e40-46a2-7110-58faa6ff3c6e@suse.com>
Date: Thu, 3 Dec 2020 09:47:42 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <ddb41dd4-485e-5ae3-9b3a-dd0aae787260@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="VEkFpObwFhTSj6qienZdOyat8n4DcElII"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--VEkFpObwFhTSj6qienZdOyat8n4DcElII
Content-Type: multipart/mixed; boundary="20vixFpbaY814X6wWbBSo8Y3bXthf4mTP";
 protected-headers="v1"
From: Jürgen Groß <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <c645058b-3e40-46a2-7110-58faa6ff3c6e@suse.com>
Subject: Re: [PATCH v2 09/17] xen/hypfs: move per-node function pointers into
 a dedicated struct
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-10-jgross@suse.com>
 <ddb41dd4-485e-5ae3-9b3a-dd0aae787260@suse.com>
In-Reply-To: <ddb41dd4-485e-5ae3-9b3a-dd0aae787260@suse.com>

--20vixFpbaY814X6wWbBSo8Y3bXthf4mTP
Content-Type: multipart/mixed;
 boundary="------------4785B4AFAAFA90CB45C9CCAF"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------4785B4AFAAFA90CB45C9CCAF
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 02.12.20 16:36, Jan Beulich wrote:
> On 01.12.2020 09:21, Juergen Gross wrote:
>>   static int hypfs_write(struct hypfs_entry *entry,
>>                          XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)
>
> As a tangent, is there a reason these write functions don't take
> handles of "const void"? (I realize this likely is nothing that
> wants addressing right here.)

Uh, this is harder than I thought.

guest_handle_cast() doesn't handle const guest handle types currently:

hypfs.c:447:58: error: unknown type name ‘const_void’; did you mean ‘const’?
          ret = hypfs_write(entry, guest_handle_cast(arg3, const_void), arg4);
                                                           ^
/home/gross/xen/unstable/xen/include/xen/guest_access.h:26:5: note: in definition of macro ‘guest_handle_cast’
      type *_x = (hnd).p;                         \
      ^~~~

Currently my ideas would be to either:

- add a new macro for constifying a guest handle (type -> const_type)
- add a new macro for casting a guest handle to a const_type
- add typedefs for the const_* types (typedef const x const_x)
- open code the cast

Or am I missing an existing variant?


Juergen

--------------4785B4AFAAFA90CB45C9CCAF
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------4785B4AFAAFA90CB45C9CCAF--

--20vixFpbaY814X6wWbBSo8Y3bXthf4mTP--

--VEkFpObwFhTSj6qienZdOyat8n4DcElII
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/Ipi4FAwAAAAAACgkQsN6d1ii/Ey8r
5Qf+MPtNWLvwT/dZdssS2SbXMTf1vkhJ8TpbWd3B3B72/Whdwa0dNqGT6oBmeQGLTanmtn0KTdWb
ErtzAd0rDsESfUw3FREcI6pyKau7rWhBgn+hkdIJLkYl4FuuvJ1HNFULHqSeJqvuGIthr04h9xpx
yJ1UPI29E6uRsV+LSyDUEuydwex6i1ozCd9SKfRwYIyagWfzmvPQigywtb1EibxfEq0HqrABRwmP
+0L8LfIKBAdXaymiGg+T8CPJtJy5SYMQ0GxcG0/mW2+TnG04Ve+01h2i8PoAT6WbSnvzW9m0YyUB
AIfY1KnvIjW8Tkd+M+yISvYDq89DrNV8IXiNdvdltg==
=Zeue
-----END PGP SIGNATURE-----

--VEkFpObwFhTSj6qienZdOyat8n4DcElII--


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 08:50:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 08:50:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43184.77678 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkkJM-0006DY-G6; Thu, 03 Dec 2020 08:50:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43184.77678; Thu, 03 Dec 2020 08:50:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkkJM-0006DR-Ci; Thu, 03 Dec 2020 08:50:00 +0000
Received: by outflank-mailman (input) for mailman id 43184;
 Thu, 03 Dec 2020 08:49:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vSHx=FH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkkJL-0006DM-1v
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 08:49:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 774c37ac-7ceb-4944-8d7f-e4daa1b993d6;
 Thu, 03 Dec 2020 08:49:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7D26AACBA;
 Thu,  3 Dec 2020 08:49:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 774c37ac-7ceb-4944-8d7f-e4daa1b993d6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606985397; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=oB6rCDgg8B1sgsCMwdLFPglfZs3fw373XJvKsUkWt6k=;
	b=CGQu5kHGp/SXtnY9InkCPoW+JVRiYVewbbd9m/FEA+aGZeI2irT+jkL6HfCUs9agfgrr1R
	fSWVWkf3uvyDDvsiud1f007Sn2JVoEi3TuZeKX3MV4FdTSENPHxBEvjVsnfSrB7u5VkroB
	gk7o/XhKLZNHSruCEvHKYWUx6yEW3iw=
Subject: Re: [PATCH v2 2/2] x86/IRQ: allocate guest array of max size only for
 shareable IRQs
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Cc: andrew.cooper3@citrix.com, george.dunlap@citrix.com, iwj@xenproject.org,
 julien@xen.org, sstabellini@kernel.org, wl@xen.org, roger.pau@citrix.com,
 xen-devel@lists.xenproject.org
References: <1606960706-21274-1-git-send-email-igor.druzhinin@citrix.com>
 <1606960706-21274-2-git-send-email-igor.druzhinin@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b38480ab-7df1-5b5b-8d6a-141cb7b99682@suse.com>
Date: Thu, 3 Dec 2020 09:49:56 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <1606960706-21274-2-git-send-email-igor.druzhinin@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.12.2020 02:58, Igor Druzhinin wrote:
> @@ -1540,6 +1540,7 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
>      unsigned int        irq;
>      struct irq_desc         *desc;
>      irq_guest_action_t *action, *newaction = NULL;
> +    unsigned int        max_nr_guests = will_share ? irq_max_guests : 1;
>      int                 rc = 0;
>  
>      WARN_ON(!spin_is_locked(&v->domain->event_lock));
> @@ -1571,7 +1572,7 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
>          {
>              spin_unlock_irq(&desc->lock);
>              if ( (newaction = xmalloc_bytes(sizeof(irq_guest_action_t) +
> -                  irq_max_guests * sizeof(action->guest[0]))) != NULL &&
> +                  max_nr_guests * sizeof(action->guest[0]))) != NULL &&
>                   zalloc_cpumask_var(&newaction->cpu_eoi_map) )
>                  goto retry;
>              xfree(newaction);
> @@ -1640,7 +1641,7 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
>          goto retry;
>      }
>  
> -    if ( action->nr_guests == irq_max_guests )
> +    if ( action->nr_guests == max_nr_guests )
>      {
>          printk(XENLOG_G_INFO "Cannot bind IRQ%d to dom%d. "
>                 "Already at max share %u, increase with irq_max_guests= option.\n",

Just as a minor remark - I don't think this last hunk is necessary,
since in the !will_share case we won't make it here unless
action->nr_guests is still zero, at which point the need for the new
local variable would also disappear. But I'm not going to insist, as
one may take the view that the code is clearer this way. However, if
clarity (in the sense of "obviously correct") is the goal, then I
think using >= instead of == would now become preferable.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 08:58:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 08:58:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43193.77690 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkkQp-0007Cs-BX; Thu, 03 Dec 2020 08:57:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43193.77690; Thu, 03 Dec 2020 08:57:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkkQp-0007Cl-7r; Thu, 03 Dec 2020 08:57:43 +0000
Received: by outflank-mailman (input) for mailman id 43193;
 Thu, 03 Dec 2020 08:57:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vSHx=FH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkkQo-0007Cg-3I
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 08:57:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ed491ce5-90f3-4e10-8cc3-decdad634163;
 Thu, 03 Dec 2020 08:57:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8A911AC55;
 Thu,  3 Dec 2020 08:57:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ed491ce5-90f3-4e10-8cc3-decdad634163
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606985860; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RaXDU8Exax7EzTDvPzEzvL05M5B5cu7wyCIl826n7+U=;
	b=dIAzhzaL5Nw6mYvA8PJguDWJxfJWRdeeDH3jM8uvhAEtEHF9U67oRyJyeF1Sr44Fd2dN/F
	X4PP1eLf7OpGAqvMfjZZMvv/SS2cNLnvAvx2761ok+84udAl/uc59ivD3qRpVZsF990xAd
	hVnkMsdScUha5y1GM+IM+t34NPLFjGk=
Subject: Re: [PATCH v2 1/2] x86/IRQ: make max number of guests for a shared
 IRQ configurable
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Cc: andrew.cooper3@citrix.com, george.dunlap@citrix.com, iwj@xenproject.org,
 julien@xen.org, sstabellini@kernel.org, wl@xen.org, roger.pau@citrix.com,
 xen-devel@lists.xenproject.org
References: <1606960706-21274-1-git-send-email-igor.druzhinin@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <22894360-d568-e399-5522-693a52898027@suse.com>
Date: Thu, 3 Dec 2020 09:57:39 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <1606960706-21274-1-git-send-email-igor.druzhinin@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.12.2020 02:58, Igor Druzhinin wrote:
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -1641,6 +1641,15 @@ This option is ignored in **pv-shim** mode.
>  ### nr_irqs (x86)
>  > `= <integer>`
>  
> +### irq_max_guests (x86)
> +> `= <integer>`
> +
> +> Default: `16`
> +
> +Maximum number of guests IRQ could be shared between, i.e. a limit on
> +the number of guests it is possible to start each having assigned a device
> +sharing a common interrupt line.  Accepts values between 1 and 255.

Reading through this again, could "IRQ" be expanded to "any individual
IRQ" or some such?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 09:02:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 09:02:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43199.77702 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkkV1-00086l-UI; Thu, 03 Dec 2020 09:02:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43199.77702; Thu, 03 Dec 2020 09:02:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkkV1-00086e-Qx; Thu, 03 Dec 2020 09:02:03 +0000
Received: by outflank-mailman (input) for mailman id 43199;
 Thu, 03 Dec 2020 09:02:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkkV1-00086W-7R; Thu, 03 Dec 2020 09:02:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkkV1-0007sL-1O; Thu, 03 Dec 2020 09:02:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkkV0-0001Ly-I6; Thu, 03 Dec 2020 09:02:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kkkV0-0005r4-HY; Thu, 03 Dec 2020 09:02:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=RL2ZxMu9Yh9HrlVCfLNoaXy27Uwg0ilDso6k12fD0to=; b=1EAlK89YfLvtXmXsIS7MA398ip
	966kGG9VOAeQR1t5xyNznzUOmuK6sKnLqU+m0uDjIubcB/fVgkCfqB3KGCYAySmvt7u9T47rp9lEl
	vzetxXRUUksu80vOHFhW+er8f0Tg99yg7tkJIiTYHeqyx3qBhn6YP3sRpTeYtSVUqC0w=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157164-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157164: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-xl-xsm:<job status>:broken:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-install(5):broken:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=509a15421674b9e1a3e1916939d0d0efd3e578da
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 03 Dec 2020 09:02:02 +0000

flight 157164 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157164/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm         <job status>                 broken
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm       5 host-install(5)       broken blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                509a15421674b9e1a3e1916939d0d0efd3e578da
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  124 days
Failing since        152366  2020-08-01 20:49:34 Z  123 days  209 attempts
Testing same since   157164  2020-12-02 19:16:42 Z    0 days    1 attempts

------------------------------------------------------------
3620 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      broken  
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-xsm broken
broken-step test-arm64-arm64-xl-xsm host-install(5)

Not pushing.

(No revision log; it would be 693490 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 09:12:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 09:12:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43208.77716 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkkf8-0000q5-4A; Thu, 03 Dec 2020 09:12:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43208.77716; Thu, 03 Dec 2020 09:12:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkkf8-0000py-1B; Thu, 03 Dec 2020 09:12:30 +0000
Received: by outflank-mailman (input) for mailman id 43208;
 Thu, 03 Dec 2020 09:12:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vSHx=FH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkkf6-0000ps-Q1
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 09:12:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 72e06943-f101-4894-9b98-f1cbf67ec185;
 Thu, 03 Dec 2020 09:12:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 154A6ACBA;
 Thu,  3 Dec 2020 09:12:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 72e06943-f101-4894-9b98-f1cbf67ec185
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606986747; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vZoaX6FrvsFfqFY3RLjHKLoWyrIICRzm5Z+5BGTjc68=;
	b=nXlagQr+vJ+E7Ojn2gJG4tdm5Esg1pCh5JvKLU/J7p8Pfhi+mow8l+kI+jcsYHGQb8Gmdd
	3ex5Zo+i4nbdS0Q2H+zUNWHZN4jAmaGQbJbo+jaPTvnJGZCutik9rgEXNGc48s4METR16I
	0/3oz3RXRxbXEL1YKl25KV1ZTTKH76c=
Subject: Re: [PATCH v2 09/17] xen/hypfs: move per-node function pointers into
 a dedicated struct
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-10-jgross@suse.com>
 <ddb41dd4-485e-5ae3-9b3a-dd0aae787260@suse.com>
 <c645058b-3e40-46a2-7110-58faa6ff3c6e@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4ddee6b8-36bb-63ae-2221-78c1768b3355@suse.com>
Date: Thu, 3 Dec 2020 10:12:26 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <c645058b-3e40-46a2-7110-58faa6ff3c6e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 03.12.2020 09:47, Jürgen Groß wrote:
> On 02.12.20 16:36, Jan Beulich wrote:
>> On 01.12.2020 09:21, Juergen Gross wrote:
>>>   static int hypfs_write(struct hypfs_entry *entry,
>>>                          XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)
>>
>> As a tangent, is there a reason these write functions don't take
>> handles of "const void"? (I realize this likely is nothing that
>> wants addressing right here.)
> 
> Uh, this is harder than I thought.
> 
> guest_handle_cast() doesn't handle const guest handle types currently:
> 
> hypfs.c:447:58: error: unknown type name ‘const_void’; did you mean ‘const’?
>           ret = hypfs_write(entry, guest_handle_cast(arg3, const_void), 
> arg4);
>                                                            ^
> /home/gross/xen/unstable/xen/include/xen/guest_access.h:26:5: note: in 
> definition of macro ‘guest_handle_cast’
>       type *_x = (hnd).p;                         \
>       ^~~~
> 
> Currently my ideas would be to either:
> 
> - add a new macro for constifying a guest handle (type -> const_type)
> - add a new macro for casting a guest handle to a const_type
> - add typedefs for the const_* types (typedef const x const_x)
> - open code the cast
> 
> Or am I missing an existing variant?

I don't think you are. Both of your first two suggestions look good
to me - ultimately we may want to have both anyway. For its
(presumed) type safety I have a slight preference for option 1,
albeit afaics guest_handle_cast() doesn't allow conversion between
arbitrary types either (only to/from void).

It's quite unfortunate that this requires an explicit cast in the
first place, but what do you do.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 09:13:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 09:13:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43213.77729 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkkgL-0000ym-Fz; Thu, 03 Dec 2020 09:13:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43213.77729; Thu, 03 Dec 2020 09:13:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkkgL-0000yf-By; Thu, 03 Dec 2020 09:13:45 +0000
Received: by outflank-mailman (input) for mailman id 43213;
 Thu, 03 Dec 2020 09:13:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkkgK-0000yW-0z; Thu, 03 Dec 2020 09:13:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkkgJ-00086D-Qz; Thu, 03 Dec 2020 09:13:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkkgJ-000201-HN; Thu, 03 Dec 2020 09:13:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kkkgJ-0003sz-Gu; Thu, 03 Dec 2020 09:13:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jdE2O9uRa4dl1pBsShSiS7o0cX+bpI4inwUXxx9XLdo=; b=b/SQ9w2n63Q7EUM89f62PS+AZG
	z+/tBQ8DzkWMIdcnt6YP+hsFzRU/ZxmZbEFL9uHdq0T6UvkkqWh/GnSYGEuSsS87ilZhmlTxviWFP
	c+8r8rqdM0Oe4TR96hBbS9I04s3XPM1SIwfp4U8lxYraf2kiikQ6L0bAOg9G2uQSS1FU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157171-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157171: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=5f6a7618993b31c1ded6e5ad650a607a5c93f6e2
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 03 Dec 2020 09:13:43 +0000

flight 157171 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157171/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              5f6a7618993b31c1ded6e5ad650a607a5c93f6e2
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  146 days
Failing since        151818  2020-07-11 04:18:52 Z  145 days  140 attempts
Testing same since   157171  2020-12-03 04:19:14 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu<tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 30497 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 09:27:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 09:27:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43223.77744 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkktb-00024q-PT; Thu, 03 Dec 2020 09:27:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43223.77744; Thu, 03 Dec 2020 09:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkktb-00024j-MR; Thu, 03 Dec 2020 09:27:27 +0000
Received: by outflank-mailman (input) for mailman id 43223;
 Thu, 03 Dec 2020 09:27:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vSHx=FH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkkta-00024e-F7
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 09:27:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f12fe71c-d742-45fa-ad22-64cfe5131b73;
 Thu, 03 Dec 2020 09:27:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2A5E0AC55;
 Thu,  3 Dec 2020 09:27:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f12fe71c-d742-45fa-ad22-64cfe5131b73
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606987644; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pVf0pzAcE5NuQeytzIki02m6gx+39mPGMJjm8+J2YLY=;
	b=QFsmUBIAB8E/9ubYwrPtGDIF0XEBMDJS3oW2R8PPStTJu2Kf3owlE7Nw/B7oXnyYAiQHyR
	DlzKSrpFpW/hN3RvsXrtsvHv8SNwAhC/syYWif/WPeDULRZ5MyGJPmuiWzi4RlnGFGUmI8
	x8u4zVzazJjB27170VmjdIN4hRGagto=
Subject: Re: [PATCH 1/2] include: don't use asm/page.h from common headers
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Hongyan Xia <hx242@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <75484377-160c-a529-1cfc-96de86cfc550@suse.com>
 <04276039-a5d0-fefd-260e-ffaa8272fd6a@suse.com>
 <a35fb176-e729-a542-4416-7040d6c80964@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <bdf294d9-e021-36d3-7e04-1c148e34701f@suse.com>
Date: Thu, 3 Dec 2020 10:27:23 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <a35fb176-e729-a542-4416-7040d6c80964@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 02.12.2020 18:14, Julien Grall wrote:
> Hi Jan,
> 
> On 02/12/2020 14:49, Jan Beulich wrote:
>> Doing so limits what can be done in (in particular included by) this per-
>> arch header. Abstract out page shift/size related #define-s, which is all
>> the repsecitve headers care about. Extend the replacement / removal to
> 
> s/repsecitve/respective/
> 
>> some x86 headers as well; some others now need to include page.h (and
>> they really should have before).
>>
>> Arm's VADDR_BITS gets restricted to 32-bit, as its current value is
>> clearly wrong for 64-bit, but the constant also isn't used anywhere
>> right now (i.e. the #define could also be dropped altogether).
> 
> Whoops. Thankfully this is not used.
> 
>>
>> I wasn't sure about Arm's use of vaddr_t in PAGE_OFFSET(), and hence I
>> kept it and provided a way to override the #define in the common header.
> 
> vaddr_t is defined to 32-bit for arm32 or 64-bit for arm64. So I think 
> it would be fine to use the generic PAGE_OFFSET() implementation.

Will switch.

>> --- /dev/null
>> +++ b/xen/include/asm-arm/page-shift.h
> 
> The name of the file looks a bit odd given that *_BITS are also defined 
> in it. So how about renaming to page-size.h?

I was initially meaning to use that name, but these headers
specifically don't define any sizes - *_BITS are still shift
values, at least in a way. If the current name isn't liked, my
next best suggestion would then be page-bits.h.

>> @@ -0,0 +1,15 @@
>> +#ifndef __ARM_PAGE_SHIFT_H__
>> +#define __ARM_PAGE_SHIFT_H__
>> +
>> +#define PAGE_SHIFT              12
>> +
>> +#define PAGE_OFFSET(ptr)        ((vaddr_t)(ptr) & ~PAGE_MASK)
>> +
>> +#ifdef CONFIG_ARM_64
>> +#define PADDR_BITS              48
> 
> Shouldn't we define VADDR_BITS here?

See the description - it's unused anyway. I'm fine with any of the
three possible ways:
1) keep as is in v1
2) drop altogether
3) also #define for 64-bit (but then you need to tell me whether 64
   is the right value to use, or what the correct one would be)

> But I wonder whether VADDR_BITS 
> should be defined as sizeof(vaddr_t) * 8.
> 
> This would require to include asm/types.h.

Which I'd specifically like to avoid. Plus use of sizeof() also
precludes the use of respective #define-s in #if-s.

>> --- a/xen/include/asm-x86/desc.h
>> +++ b/xen/include/asm-x86/desc.h
>> @@ -1,6 +1,8 @@
>>   #ifndef __ARCH_DESC_H
>>   #define __ARCH_DESC_H
>>   
>> +#include <asm/page.h>
> 
> May I ask why you are including <asm/page.h> and not <xen/page-size.h> here?

Because of

DECLARE_PER_CPU(l1_pgentry_t, gdt_l1e);

and

DECLARE_PER_CPU(l1_pgentry_t, compat_gdt_l1e);

at least (didn't check further).

Jan


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 09:39:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 09:39:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43237.77767 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkl5Q-0003Dw-1p; Thu, 03 Dec 2020 09:39:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43237.77767; Thu, 03 Dec 2020 09:39:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkl5P-0003Dp-Us; Thu, 03 Dec 2020 09:39:39 +0000
Received: by outflank-mailman (input) for mailman id 43237;
 Thu, 03 Dec 2020 09:39:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yflw=FH=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kkl5O-0003Db-6G
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 09:39:38 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0ceb0013-3ebe-4d2a-937e-8eeecf592be7;
 Thu, 03 Dec 2020 09:39:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BB34AAC55;
 Thu,  3 Dec 2020 09:39:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0ceb0013-3ebe-4d2a-937e-8eeecf592be7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606988375; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=CSV1sDJoRkaDSHb5Z8goZsZ3J2kND8hk6DyBcbnGWKw=;
	b=AppEi50RLKoFlFgRlzogpJkwjjovKw0O26f2j3qJ8tKD9FL4iAcKX9qdbyDS+J7Rf/uRFu
	3sluevOzl7Ivymxd2p7M67z4KMic/xtGO8L9xYP9VOaVb0ixn13K6yJyTBf8nSIaG74dii
	x1rsqhrH7EY3JNbcWytUD242o6PiFAI=
Subject: Re: [PATCH v2 11/17] xen/hypfs: add getsize() and findentry()
 callbacks to hypfs_funcs
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-12-jgross@suse.com>
 <e8a876c9-b1bf-62a4-d30c-a2c646cb68f7@suse.com>
 <e2909e87-473f-dbf5-9e58-7c817ac59e3f@suse.com>
 <d878f80d-d71d-93cd-8ee1-06fb860bf390@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <a3318702-692d-ec25-cd1a-fbe6d3852659@suse.com>
Date: Thu, 3 Dec 2020 10:39:34 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <d878f80d-d71d-93cd-8ee1-06fb860bf390@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="JzzXWVCs0fg4lH9gtvbEMpwK42V3VbHTD"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--JzzXWVCs0fg4lH9gtvbEMpwK42V3VbHTD
Content-Type: multipart/mixed; boundary="e4g872LdFtk3sYL28g4p5I7zCh4uetgLM";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <a3318702-692d-ec25-cd1a-fbe6d3852659@suse.com>
Subject: Re: [PATCH v2 11/17] xen/hypfs: add getsize() and findentry()
 callbacks to hypfs_funcs
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-12-jgross@suse.com>
 <e8a876c9-b1bf-62a4-d30c-a2c646cb68f7@suse.com>
 <e2909e87-473f-dbf5-9e58-7c817ac59e3f@suse.com>
 <d878f80d-d71d-93cd-8ee1-06fb860bf390@suse.com>
In-Reply-To: <d878f80d-d71d-93cd-8ee1-06fb860bf390@suse.com>

--e4g872LdFtk3sYL28g4p5I7zCh4uetgLM
Content-Type: multipart/mixed;
 boundary="------------201B8C9B7884ED2947A2D03E"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------201B8C9B7884ED2947A2D03E
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 03.12.20 09:12, Jan Beulich wrote:
> On 02.12.2020 16:51, Jürgen Groß wrote:
>> On 02.12.20 16:42, Jan Beulich wrote:
>>> On 01.12.2020 09:21, Juergen Gross wrote:
>>>> Add a getsize() function pointer to struct hypfs_funcs for being able
>>>> to have dynamically filled entries without the need to take the hypfs
>>>> lock each time the contents are being generated.
>>>>
>>>> For directories add a findentry callback to the vector and modify
>>>> hypfs_get_entry_rel() to use it.
>>>>
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>
>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>> with maybe one further small adjustment:
>>>
>>>> @@ -176,15 +188,41 @@ static int hypfs_get_path_user(char *buf,
>>>>        return 0;
>>>>    }
>>>>
>>>> +struct hypfs_entry *hypfs_leaf_findentry(const struct hypfs_entry_dir *dir,
>>>> +                                         const char *name,
>>>> +                                         unsigned int name_len)
>>>> +{
>>>> +    return ERR_PTR(-ENOENT);
>>>> +}
>>>
>>> ENOENT seems odd to me here. There looks to be no counterpart to
>>> EISDIR, so maybe ENODATA, EACCES, or EPERM?
>>
>> Hmm, why?
>>
>> In case I have /a/b and I'm looking for /a/b/c ENOENT seems to be just
>> fine?
>>
>> Or I could add ENOTDIR.
>
> Oh, there actually is supposed to be such an entry, but public/errno.h
> is simply missing it. Yes - ENOTDIR is what I was thinking of when
> saying "there looks to be no counterpart to EISDIR".

Okay, I'll add ENOTDIR and set it here.


Juergen

--------------201B8C9B7884ED2947A2D03E
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------201B8C9B7884ED2947A2D03E--

--e4g872LdFtk3sYL28g4p5I7zCh4uetgLM--

--JzzXWVCs0fg4lH9gtvbEMpwK42V3VbHTD
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/IslYFAwAAAAAACgkQsN6d1ii/Ey9l
GQf9FdRbPpnd4NjUyvELRCSc60N6aM+Co9SJ0bivWpz+Ed5NQY5TY+oO5czlYHinDo/yB0bkkrjm
h8Cp5L6OYwtY4izmKyc7kPF+MhiGcdKooaE+8ML8xoVh6twAoEi9vnrnurMtdhGLBrp3uxa1E9GP
4CjgzDlH9+OR3sDVkT+L6sJ6OF5ywf2HkbiVff3L4z9pl5CY1XzbsDYupESPUcpyOdxs0C0Fm/Qh
4MLyZxq3Bl/MUdgidS8oUgw9xC33/WF3ZO4NX1x3DCLawNCWXqi406u/OgC3fdgEaiWe5rzqw7d0
WJ6q8S4affxP1BMm84faRdctFbH2s6y4+L/O0V4g3w==
=FJQ0
-----END PGP SIGNATURE-----

--JzzXWVCs0fg4lH9gtvbEMpwK42V3VbHTD--


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 09:39:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 09:39:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43238.77780 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkl5Z-0003I1-Di; Thu, 03 Dec 2020 09:39:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43238.77780; Thu, 03 Dec 2020 09:39:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkl5Z-0003Hs-AX; Thu, 03 Dec 2020 09:39:49 +0000
Received: by outflank-mailman (input) for mailman id 43238;
 Thu, 03 Dec 2020 09:39:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vSHx=FH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkl5X-0003H0-Pl
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 09:39:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1f63d5ff-a56a-4f68-b45b-d213fc2b3a01;
 Thu, 03 Dec 2020 09:39:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3A8ABAD41;
 Thu,  3 Dec 2020 09:39:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1f63d5ff-a56a-4f68-b45b-d213fc2b3a01
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606988386; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=DSw79RRPYCY78Gq5LmoS5kY4pFo0mkxApqpjn/O9AeY=;
	b=ArAC1Xpy4B3hjPBjVHdP7utb8F+kNdAR7dX64OgOGE7V9+NOPfbGcc6Lu5fOag8sfrApJ+
	rGZ7h3nWYwpsT2rOlP/GjrYRNTOkRJz5PT6k6RFotMaBmmFy264Jmgq5UwJTJtmMoShwDG
	UWcbwsoZnuy0FngE8YdBR95ddqdWVv0=
Subject: Re: [PATCH 2/2] mm: split out mfn_t / gfn_t / pfn_t definitions and
 helpers
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Hongyan Xia <hx242@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <75484377-160c-a529-1cfc-96de86cfc550@suse.com>
 <fb4de786-7302-3336-dcb4-1a388bee34bc@suse.com>
 <9c240acd-f3ef-6775-eb4b-6e3b14251e51@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <320d042c-2e37-f5ef-ce2f-2d4c97901bae@suse.com>
Date: Thu, 3 Dec 2020 10:39:45 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <9c240acd-f3ef-6775-eb4b-6e3b14251e51@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 02.12.2020 18:35, Julien Grall wrote:
> On 02/12/2020 14:50, Jan Beulich wrote:
>> xen/mm.h has heavy dependencies, while in a number of cases only these
>> type definitions are needed. This separation then also allows pulling in
>> these definitions when including xen/mm.h would cause cyclic
>> dependencies.
>>
>> Replace xen/mm.h inclusion where possible in include/xen/. (In
>> xen/iommu.h also take the opportunity and correct the few remaining
>> sorting issues.)
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> --- a/xen/arch/x86/acpi/power.c
>> +++ b/xen/arch/x86/acpi/power.c
>> @@ -10,7 +10,6 @@
>>    * Slimmed with Xen specific support.
>>    */
>>   
>> -#include <asm/io.h>
> 
> This seems to be unrelated of this work.

Well spotted, but the answer really is "yes and no". My first
attempt at fixing build issues from this and similar asm/io.h
inclusions was to remove such unnecessary ones. But this didn't
work out - I had to fix the header instead. If you think this
extra cleanup really does any harm here, I can drop it. But I'd
prefer to keep it.

>> --- /dev/null
>> +++ b/xen/include/xen/frame-num.h
> 
> It would feel more natural to me if the file is named mm-types.h.

Indeed I was first meaning to use this name (not the least
because I don't particularly like the one chosen, but I also
couldn't think of a better one). However, then things like
struct page_info would imo also belong there (more precisely in
asm/mm-types.h to be included from here), which is specifically
something I want to avoid. Yes, eventually we may (I'm inclined
to even say "will") want such a header, but I still want to
keep these even more fundamental types in a separate one.
Otherwise we'll again end up with files including mm-types.h
just because of needing e.g. gfn_t for a function declaration.
(Note that the same isn't the case for struct page_info, which
can simply be forward declared.)

Jan


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 09:46:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 09:46:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43275.77824 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kklC9-0004WV-Kp; Thu, 03 Dec 2020 09:46:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43275.77824; Thu, 03 Dec 2020 09:46:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kklC9-0004WO-He; Thu, 03 Dec 2020 09:46:37 +0000
Received: by outflank-mailman (input) for mailman id 43275;
 Thu, 03 Dec 2020 09:46:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vSHx=FH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kklC8-0004WJ-R6
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 09:46:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5c04f2bc-b956-40c1-a54c-27a32343a6fa;
 Thu, 03 Dec 2020 09:46:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E26F8ACBA;
 Thu,  3 Dec 2020 09:46:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5c04f2bc-b956-40c1-a54c-27a32343a6fa
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606988795; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QeXzYmmdPu0yiACCqNFWW9iWhMTg03koey4W+Yzot1g=;
	b=pSQOWjddrYPJkGjNIT71GF/j2seaEqNTHA59ySjkrwJhNWada7enK18XLqpSUOQ35aVHe1
	zGywIoIGselbKFSv61tnfkwQ0Aze25IDP8pHU4Pv/dIK7lQXboqpkJA1Loszda9nZVj6QX
	aRm2WDiSkN8rsKwzZXmwOtW7/W4M3Dg=
Subject: Re: [PATCH v3 1/5] evtchn: drop acquiring of per-channel lock from
 send_guest_{global,vcpu}_virq()
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d709a9c3-dbe2-65c6-2c2f-6a12f486335d@suse.com>
 <70170293-a9a7-282a-dde6-7ed73fc2da48@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c15b1e7e-ed9c-b597-2fc1-b8cf89999c55@suse.com>
Date: Thu, 3 Dec 2020 10:46:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <70170293-a9a7-282a-dde6-7ed73fc2da48@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 02.12.2020 20:03, Julien Grall wrote:
> On 23/11/2020 13:28, Jan Beulich wrote:
>> The per-vCPU virq_lock, which is being held anyway, together with there
>> not being any call to evtchn_port_set_pending() when v->virq_to_evtchn[]
>> is zero, provide sufficient guarantees. 
> 
> I agree that the per-vCPU virq_lock is going to be sufficient, however 
> dropping the lock also means the event channel locking is more complex 
> to understand (the long comment that was added proves it).
> 
> In fact, the locking in the event channel code was already proven to be 
> quite fragile, therefore I think this patch is not worth the risk.

I agree this is a very reasonable position to take. I probably
would even have remained silent if in the meantime the
spin_lock()s there hadn't changed to read_trylock()s. I really
think we want to limit this unusual locking model to where we
strictly need it. And this change eliminates 50% of them, with
the added benefit of making the paths more lightweight.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 09:51:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 09:51:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43285.77840 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kklGY-0005V3-9g; Thu, 03 Dec 2020 09:51:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43285.77840; Thu, 03 Dec 2020 09:51:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kklGY-0005Uw-60; Thu, 03 Dec 2020 09:51:10 +0000
Received: by outflank-mailman (input) for mailman id 43285;
 Thu, 03 Dec 2020 09:51:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yflw=FH=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kklGW-0005Ur-MY
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 09:51:08 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6ff6a737-147b-45e7-94b6-8e5085aac97a;
 Thu, 03 Dec 2020 09:51:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C6796AD41;
 Thu,  3 Dec 2020 09:51:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6ff6a737-147b-45e7-94b6-8e5085aac97a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606989063; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=qbY9qgML224Q9oA2ZDLIgpLcJl5U9LYu8RN1jSojU6I=;
	b=cNsUJivVR1wmG+ZSQGLbL8JIyzij4+db4po29hRFxwl/7ze8jzzlhNInJD8BxHEJ70CP7B
	2eT+2lsRObdHSU8lBmv647DN8iit1L7f2IU1GPv+dru6VQ45lPxFZiF0g0chkJu/WhhnxZ
	wxH18W0m2RzH3WXN4+SZJN8CHdP3syI=
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-10-jgross@suse.com>
 <ddb41dd4-485e-5ae3-9b3a-dd0aae787260@suse.com>
 <c645058b-3e40-46a2-7110-58faa6ff3c6e@suse.com>
 <4ddee6b8-36bb-63ae-2221-78c1768b3355@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v2 09/17] xen/hypfs: move per-node function pointers into
 a dedicated struct
Message-ID: <5e0a7a74-1d00-7f9f-e595-27f441c5c200@suse.com>
Date: Thu, 3 Dec 2020 10:51:02 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <4ddee6b8-36bb-63ae-2221-78c1768b3355@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="iUmYVvk9AcNu7UpvuoAWd0MLEaieoAgsE"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--iUmYVvk9AcNu7UpvuoAWd0MLEaieoAgsE
Content-Type: multipart/mixed; boundary="wWa1WkU4HwNLL3kYQ6SaCgMectahfLiGa";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <5e0a7a74-1d00-7f9f-e595-27f441c5c200@suse.com>
Subject: Re: [PATCH v2 09/17] xen/hypfs: move per-node function pointers into
 a dedicated struct
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-10-jgross@suse.com>
 <ddb41dd4-485e-5ae3-9b3a-dd0aae787260@suse.com>
 <c645058b-3e40-46a2-7110-58faa6ff3c6e@suse.com>
 <4ddee6b8-36bb-63ae-2221-78c1768b3355@suse.com>
In-Reply-To: <4ddee6b8-36bb-63ae-2221-78c1768b3355@suse.com>

--wWa1WkU4HwNLL3kYQ6SaCgMectahfLiGa
Content-Type: multipart/mixed;
 boundary="------------E8F0A1D174FD905D8CBDB0A1"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------E8F0A1D174FD905D8CBDB0A1
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 03.12.20 10:12, Jan Beulich wrote:
> On 03.12.2020 09:47, J=C3=BCrgen Gro=C3=9F wrote:
>> On 02.12.20 16:36, Jan Beulich wrote:
>>> On 01.12.2020 09:21, Juergen Gross wrote:
>>>>    static int hypfs_write(struct hypfs_entry *entry,
>>>>                           XEN_GUEST_HANDLE_PARAM(void) uaddr, unsign=
ed long ulen)
>>>
>>> As a tangent, is there a reason these write functions don't take
>>> handles of "const void"? (I realize this likely is nothing that
>>> wants addressing right here.)
>>
>> Uh, this is harder than I thought.
>>
>> guest_handle_cast() doesn't handle const guest handle types currently:=

>>
>> hypfs.c:447:58: error: unknown type name =E2=80=98const_void=E2=80=99;=
 did you mean =E2=80=98const=E2=80=99?
>>            ret =3D hypfs_write(entry, guest_handle_cast(arg3, const_vo=
id),
>> arg4);
>>                                                             ^
>> /home/gross/xen/unstable/xen/include/xen/guest_access.h:26:5: note: in=

>> definition of macro =E2=80=98guest_handle_cast=E2=80=99
>>        type *_x =3D (hnd).p;                         \
>>        ^~~~
>>
>> Currently my ideas would be to either:
>>
>> - add a new macro for constifying a guest handle (type -> const_type)
>> - add a new macro for casting a guest handle to a const_type
>> - add typedefs for the const_* types (typedef const x const_x)
>> - open code the cast
>>
>> Or am I missing an existing variant?
>=20
> I don't think you are. Both of your first two suggestions look good
> to me - ultimately we may want to have both anyway, eventually. For
> its (presumed) type safety I may have a slight preference for
> option 1, albeit afaics guest_handle_cast() doesn't allow
> conversion between arbitrary types either (only to/from void).
>
> It's quite unfortunate that this requires an explicit cast in the
> first place, but what do you do.

Right.

I'm going with variant 2, as variant 1 is not really achievable
without specifying the basic type as a macro parameter - at which
point it would basically be variant 2 anyway.


Juergen

--------------E8F0A1D174FD905D8CBDB0A1--

--wWa1WkU4HwNLL3kYQ6SaCgMectahfLiGa--


--iUmYVvk9AcNu7UpvuoAWd0MLEaieoAgsE--


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 10:09:29 2020
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <lengyelt@ainfosec.com>,
 Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d821c715-966a-b48b-a877-c5dac36822f0@suse.com>
 <17c90493-b438-fbc1-ca10-3bc4d89c4e5e@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7a768bcd-80c1-d193-8796-7fb6720fa22a@suse.com>
Date: Thu, 3 Dec 2020 11:09:13 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <17c90493-b438-fbc1-ca10-3bc4d89c4e5e@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 02.12.2020 22:10, Julien Grall wrote:
> On 23/11/2020 13:30, Jan Beulich wrote:
>> While there don't look to be any problems with this right now, the lock
>> order implications from holding the lock can be very difficult to follow
>> (and may be easy to violate unknowingly). The present callbacks don't
>> (and no such callback should) have any need for the lock to be held.
>>
>> However, vm_event_disable() frees the structures used by respective
>> callbacks and isn't otherwise synchronized with invocations of these
>> callbacks, so maintain a count of in-progress calls, for evtchn_close()
>> to wait to drop to zero before freeing the port (and dropping the lock).
> 
> AFAICT, this callback is not the only place where the synchronization is 
> missing in the VM event code.
> 
> For instance, vm_event_put_request() can also race against 
> vm_event_disable().
> 
> So shouldn't we handle this issue properly in VM event?

I suppose that's a question for the VM event folks rather than for me?

>> ---
>> Should we make this accounting optional, to be requested through a new
>> parameter to alloc_unbound_xen_event_channel(), or derived from other
>> than the default callback being requested?
> 
> Aside the VM event, do you see any value for the other caller?

No (albeit I'm not entirely certain about vpl011_notification()'s
needs), hence the consideration. It's unnecessary overhead in
those cases.

>> @@ -781,9 +786,15 @@ int evtchn_send(struct domain *ld, unsig
>>           rport = lchn->u.interdomain.remote_port;
>>           rchn  = evtchn_from_port(rd, rport);
>>           if ( consumer_is_xen(rchn) )
>> +        {
>> +            /* Don't keep holding the lock for the call below. */
>> +            atomic_inc(&rchn->u.interdomain.active_calls);
>> +            evtchn_read_unlock(lchn);
>>               xen_notification_fn(rchn)(rd->vcpu[rchn->notify_vcpu_id], rport);
>> -        else
>> -            evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
> 
> atomic_dec() doesn't contain any memory barrier, so we will want one 
> between xen_notification_fn() and atomic_dec() to avoid re-ordering.

Oh, indeed. But smp_mb() is too heavy-handed here - x86 doesn't
really need any barrier, yet would gain a full MFENCE that way.
Actually - looks like I forgot we gained smp_mb__before_atomic()
a little over half a year ago.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 11:22:41 2020
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] x86/vmap: handle superpages in vmap_to_mfn()
Date: Thu,  3 Dec 2020 11:21:59 +0000
Message-Id: <4a69a1177f9496ad0e3ea77e9b1d5b802bf83b60.1606994506.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1

From: Hongyan Xia <hongyxia@amazon.com>

There is simply no guarantee that vmap won't return superpages to the
caller. It can happen if the list of MFNs is contiguous, or if we simply
have a large granularity. Although rare, when it does happen we hit
BUG_ON() and crash.

Introduce xen_map_to_mfn() to translate any mapped Xen address to an MFN
regardless of page size, and wrap vmap_to_mfn() around it.

Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

---
Changed in v2:
- const pl*e
- introduce xen_map_to_mfn().
- goto to a single exit path.
- ASSERT_UNREACHABLE instead of ASSERT.
---
 xen/arch/x86/mm.c          | 54 ++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/mm.h   |  1 +
 xen/include/asm-x86/page.h |  2 +-
 3 files changed, 56 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 5a50339284c7..53cc5c6de2e6 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5194,6 +5194,60 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
         }                                          \
     } while ( false )
 
+/* Translate mapped Xen address to MFN. */
+mfn_t xen_map_to_mfn(unsigned long va)
+{
+#define CHECK_MAPPED(cond_)     \
+    if ( !(cond_) )             \
+    {                           \
+        ASSERT_UNREACHABLE();   \
+        ret = INVALID_MFN;      \
+        goto out;               \
+    }
+
+    bool locking = system_state > SYS_STATE_boot;
+    unsigned int l2_offset = l2_table_offset(va);
+    unsigned int l1_offset = l1_table_offset(va);
+    const l3_pgentry_t *pl3e = virt_to_xen_l3e(va);
+    const l2_pgentry_t *pl2e = NULL;
+    const l1_pgentry_t *pl1e = NULL;
+    struct page_info *l3page;
+    mfn_t ret;
+
+    L3T_INIT(l3page);
+    CHECK_MAPPED(pl3e)
+    l3page = virt_to_page(pl3e);
+    L3T_LOCK(l3page);
+
+    CHECK_MAPPED(l3e_get_flags(*pl3e) & _PAGE_PRESENT)
+    if ( l3e_get_flags(*pl3e) & _PAGE_PSE )
+    {
+        ret = mfn_add(l3e_get_mfn(*pl3e),
+                      (l2_offset << PAGETABLE_ORDER) + l1_offset);
+        goto out;
+    }
+
+    pl2e = map_l2t_from_l3e(*pl3e) + l2_offset;
+    CHECK_MAPPED(l2e_get_flags(*pl2e) & _PAGE_PRESENT)
+    if ( l2e_get_flags(*pl2e) & _PAGE_PSE )
+    {
+        ret = mfn_add(l2e_get_mfn(*pl2e), l1_offset);
+        goto out;
+    }
+
+    pl1e = map_l1t_from_l2e(*pl2e) + l1_offset;
+    CHECK_MAPPED(l1e_get_flags(*pl1e) & _PAGE_PRESENT)
+    ret = l1e_get_mfn(*pl1e);
+
+#undef CHECK_MAPPED
+ out:
+    L3T_UNLOCK(l3page);
+    unmap_domain_page(pl1e);
+    unmap_domain_page(pl2e);
+    unmap_domain_page(pl3e);
+    return ret;
+}
+
 int map_pages_to_xen(
     unsigned long virt,
     mfn_t mfn,
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index deeba75a1cbb..2fc8eeaf7aad 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -578,6 +578,7 @@ mfn_t alloc_xen_pagetable_new(void);
 void free_xen_pagetable_new(mfn_t mfn);
 
 l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
+mfn_t xen_map_to_mfn(unsigned long va);
 
 int __sync_local_execstate(void);
 
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index 7a771baf7cb3..886adf17a40c 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -291,7 +291,7 @@ void copy_page_sse2(void *, const void *);
 #define pfn_to_paddr(pfn)   __pfn_to_paddr(pfn)
 #define paddr_to_pfn(pa)    __paddr_to_pfn(pa)
 #define paddr_to_pdx(pa)    pfn_to_pdx(paddr_to_pfn(pa))
-#define vmap_to_mfn(va)     l1e_get_mfn(*virt_to_xen_l1e((unsigned long)(va)))
+#define vmap_to_mfn(va)     xen_map_to_mfn((unsigned long)(va))
 #define vmap_to_page(va)    mfn_to_page(vmap_to_mfn(va))
 
 #endif /* !defined(__ASSEMBLY__) */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 11:27:07 2020
Message-ID: <1d8ce4a8f7e3bde2df8d56e9d7f96ccf492d9f6b.camel@xen.org>
Subject: Re: [PATCH] x86/vmap: handle superpages in vmap_to_mfn()
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Roger Pau =?ISO-8859-1?Q?Monn=E9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Date: Thu, 03 Dec 2020 11:27:03 +0000
In-Reply-To: <4a69a1177f9496ad0e3ea77e9b1d5b802bf83b60.1606994506.git.hongyxia@amazon.com>
References: 
	<4a69a1177f9496ad0e3ea77e9b1d5b802bf83b60.1606994506.git.hongyxia@amazon.com>
Content-Type: text/plain
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
Content-Transfer-Encoding: quoted-printable

Apologies. Missing v2 in the title.



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 11:50:02 2020
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 2/8] xen/arm: revert atomic operation related
 command-queue insertion patch
Date: Thu, 3 Dec 2020 11:49:42 +0000
Message-ID: <3CA139F8-4156-4637-8986-84351C3E1400@arm.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <4a0ca6d03b5f1f5b30c4cdbdff0688cea84d9e91.1606406359.git.rahul.singh@arm.com>
 <39a9c619-d7b2-eca0-688c-5f35546e59fa@xen.org>
In-Reply-To: <39a9c619-d7b2-eca0-688c-5f35546e59fa@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: a4aa07cd-4713-4cc1-4852-08d897818aa9
x-ms-traffictypediagnostic: DB8PR08MB5483:|AM6PR08MB2965:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB296533392824C6E61C95AC2FFCF20@AM6PR08MB2965.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:5516;OLM:5516;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 B7pbjYLkiyhrz98ExJ9jp/O5jZvXXlN04BJA/ELRHdZkIKPzEP5PtqrX9mQsmAFj2jbe2Y41UJdbNT7hhljdqfJ/luuHNjRo9FwCUqMte0n38vba3Kj0d7J1TqjELwxaB4xch8DSnA6/SE5Weypyg4dgvOzAGgaYProYB1/Z7pou7pC059gZ5mkVZ7+61wbGEb2KBx7/SB0WjpDom98WTxlHyTwifVIoMYf170ONvEHPugNbp6qY7cgUQ3i9RTCyTmI2OPTGEfcEYjtgyjrzKklfS1WzmlG8NJ/iIf43ZxQxkE0Q+nwxqV3e5OB5IopAbb8CBYV2hS4njmKprGvYZZG4hwpH42uzpGq6q0pRE2C9Y3yySJe54ccWyQfQZ5bO
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3500.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(39860400002)(376002)(366004)(396003)(136003)(66946007)(6916009)(83380400001)(4326008)(6512007)(8676002)(4744005)(6486002)(36756003)(5660300002)(8936002)(66446008)(76116006)(2906002)(64756008)(66556008)(66476007)(91956017)(86362001)(54906003)(71200400001)(186003)(26005)(2616005)(316002)(33656002)(53546011)(478600001)(6506007)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?us-ascii?Q?D35RhRdul0c5JBk9FXFTIyHFFhgXo04pVXm2Gdeiu7oYA4k/5eiZ6pWCGJVP?=
 =?us-ascii?Q?Vq+7PGzC71jxdcPvbA/zfI+yVOtS/On0bwM7Xxw2Y5qtdXHRI6MGapuyA/+7?=
 =?us-ascii?Q?BXOAu7rChSaGBBzg45a8+Nw7/80e4pzY/ssijkuUauY+bF/W+VZlqe8pUf3o?=
 =?us-ascii?Q?nJlkitbE7N77PyhzT9G45bSwBn942twGJ5hQTQDSNt82qVQwGjyeCKYllK3t?=
 =?us-ascii?Q?2I+W0jVFVdDXSXnsGW7kqIdA8Q90mZupz6NC8/SuXSt+MUxUzERanPMUzpVe?=
 =?us-ascii?Q?Gj4riqzB0nBFuF5ILI1flkWePW2piwXlB8Exl9K7QZ4gAKiWo9tOHQ1Wo885?=
 =?us-ascii?Q?+OlcqFDhlb00ovg/hVnP/5BF3KqB7ObHr6sqv2XoKXgausD4RggiLSKSlRXv?=
 =?us-ascii?Q?EWAx64uit2IqdiyUcak17qiaMfAZ5SNd21bi8qkRAUsdSgKmpMOxsIwCNg4G?=
 =?us-ascii?Q?SSw237EkNE68IBzGA/PqIZD572fKpSehmWPPxqufY96E9yFil1KK0uw0cPI7?=
 =?us-ascii?Q?asjWY4nc82tlMqTVXV2OGfv5bZCu9SyGlNxBW2/5UbVvEKOLpf91LmwzFNFt?=
 =?us-ascii?Q?AgGulBA/s4GPBf59EnqPRtLNcYfG5zekm23dsQ8fxeybCHEdFQi0nghVGJ+X?=
 =?us-ascii?Q?dXNzMxDSXxoIEZOnGSBdOPVXwLXkjPAiJZbwgEtl19tvNAbqGRCRwUBo0mWC?=
 =?us-ascii?Q?2jNCO8JxrILtrPu4PyoNIVK8+H3/396gG7jmAripm8WMRi9mTMjG9tSKjQgb?=
 =?us-ascii?Q?IFQuiDfK1JlrH+NlsC1+qpf4XFisJDGPETN5zw7Gp3EJt+eDXAMiqtotSPk5?=
 =?us-ascii?Q?UK9zw5v3CMAASN+GbjiwcvAN5VL1gZpg51Rq/FWqN03pxJ6o0UGdrdh1igLq?=
 =?us-ascii?Q?2CHVTFuTKFyF054BktvMnpNoC5FtXPaQIXT+yDCQ38D69saPX/wWGUprtRje?=
 =?us-ascii?Q?g5pP9TN2OiKLIRwNdMGq1EqwiSZ4nbGDgiJJmA1kKPBcEwAEhOERUgjkFc6o?=
 =?us-ascii?Q?1sxW?=
Content-Type: text/plain; charset="us-ascii"
Content-ID: <86C657322E75434FA8E75396A88EF856@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5483
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT044.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	71ed074c-f59f-49e1-4911-08d8978185e6
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Dec 2020 11:49:50.4561
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a4aa07cd-4713-4cc1-4852-08d897818aa9
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT044.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB2965

Hello Julien,

Thanks for reviewing the code.

> On 2 Dec 2020, at 1:44 pm, Julien Grall <julien@xen.org> wrote:
> 
> Hi Rahul,
> 
> On 26/11/2020 17:02, Rahul Singh wrote:
>> The Linux SMMUv3 code implements command-queue insertion based on
>> atomic operations implemented in Linux. The atomic functions used by
>> the command-queue insertion are not implemented in Xen, therefore
>> revert the patch that implemented command-queue insertion based on
>> atomic operations.
> 
> This commit message explains why we revert but not the consequences of
> the revert. Can you outline if there are any and why they are fine?

OK, let me try to add more detail.

> 
> I am also interested in having a list of *must*-haves for the driver to
> be out of the tech preview.

OK, let me add more information in the commit message in the next version
of the patch.

Regards,
Rahul

> 
> Cheers,
> 
> -- 
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 12:07:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 12:07:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43385.77983 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kknOO-00023t-Qn; Thu, 03 Dec 2020 12:07:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43385.77983; Thu, 03 Dec 2020 12:07:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kknOO-00023m-Nl; Thu, 03 Dec 2020 12:07:24 +0000
Received: by outflank-mailman (input) for mailman id 43385;
 Thu, 03 Dec 2020 12:07:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KUIR=FH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kknON-00023h-9Y
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 12:07:23 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 900ccd7f-0efa-4205-9892-8e945cb6446b;
 Thu, 03 Dec 2020 12:07:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 900ccd7f-0efa-4205-9892-8e945cb6446b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606997242;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=KJdP6Uqywn/Nn94UJYUt611kPhXl937zqVBQr07U+Xk=;
  b=cJ+d5ZhIauFdElNZUSBMbJCNtlEw+GSGghlYepmt0As7nuHSBQySxsPV
   g1y+fn2SmVnrVgFLZzpvdt4p2nVEVOvZw0fG8e8p+Aiew5fjFNiWRIZrR
   ewf27CGuTv4CfQ6Eftq86LcVXUVExhqwjj4POSOQm+DHBUnOIQxcmcH5m
   s=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: None
X-MesageID: 32451431
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,389,1599537600"; 
   d="scan'208";a="32451431"
Subject: Re: [PATCH 1/2] include: don't use asm/page.h from common headers
To: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>
CC: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Hongyan Xia <hx242@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <75484377-160c-a529-1cfc-96de86cfc550@suse.com>
 <04276039-a5d0-fefd-260e-ffaa8272fd6a@suse.com>
 <a35fb176-e729-a542-4416-7040d6c80964@xen.org>
 <bdf294d9-e021-36d3-7e04-1c148e34701f@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <b6080ab1-b303-d93c-88ba-7b98333e7237@citrix.com>
Date: Thu, 3 Dec 2020 12:07:15 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <bdf294d9-e021-36d3-7e04-1c148e34701f@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 03/12/2020 09:27, Jan Beulich wrote:
>>> --- /dev/null
>>> +++ b/xen/include/asm-arm/page-shift.h
>> The name of the file looks a bit odd given that *_BITS are also defined 
>> in it. So how about renaming to page-size.h?
> I was initially meaning to use that name, but these headers
> specifically don't define any sizes - *_BITS are still shift
> values, at least in a way. If the current name isn't liked, my
> next best suggestion would then be page-bits.h.

Pick a generic name, or it will bitrot quickly, and it really wants to
be the same across architectures.

The real issue is that page.h contains far too much stuff now, in both
architectures.  Long term, we want to split the architectural pagetable
definitions apart from the Xen APIs that manipulate them, which will
simplify the asm include hierarchy as well.

I'd go with page-bits.h, or just a plain pagetable.h.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 12:29:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 12:29:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43396.77994 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kknjQ-00044k-Kk; Thu, 03 Dec 2020 12:29:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43396.77994; Thu, 03 Dec 2020 12:29:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kknjQ-00044d-Hj; Thu, 03 Dec 2020 12:29:08 +0000
Received: by outflank-mailman (input) for mailman id 43396;
 Thu, 03 Dec 2020 12:29:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kknjP-00044V-FE; Thu, 03 Dec 2020 12:29:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kknjP-0003pW-7R; Thu, 03 Dec 2020 12:29:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kknjO-0003jh-Vl; Thu, 03 Dec 2020 12:29:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kknjO-0006BM-VL; Thu, 03 Dec 2020 12:29:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MsKBq3r6Rne1lS5f2tV14ccGvGLO0ZFMEFVBN+U5AnM=; b=klQwEaHyZGwIw3Jow4A2kzS+PC
	va16s39KDpy4w8N33dvbV5D+e6+OU3Nygs3sv4QgWwxpuBYuFcjntksNvk3sjEY68nflEFUfQc4f3
	bIKe3nDdguiM+HSJH7i6clki6w5lRO81XdMS80eklEdaoJvCnoANSOTLNj/vh48U+jDk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157167-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157167: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=7c4ab1c2ef60a4690177d2361f8dd44d7d7df7f8
X-Osstest-Versions-That:
    ovmf=9fb629edd75e1ae1e7f4e85b0876107a7180899b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 03 Dec 2020 12:29:06 +0000

flight 157167 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157167/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 7c4ab1c2ef60a4690177d2361f8dd44d7d7df7f8
baseline version:
 ovmf                 9fb629edd75e1ae1e7f4e85b0876107a7180899b

Last test of basis   157117  2020-11-30 18:12:47 Z    2 days
Testing same since   157167  2020-12-02 23:40:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Guo Dong <guo.dong@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   9fb629edd7..7c4ab1c2ef  7c4ab1c2ef60a4690177d2361f8dd44d7d7df7f8 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 12:42:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 12:42:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43406.78022 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kknvz-0005wD-1J; Thu, 03 Dec 2020 12:42:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43406.78022; Thu, 03 Dec 2020 12:42:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kknvy-0005w6-UY; Thu, 03 Dec 2020 12:42:06 +0000
Received: by outflank-mailman (input) for mailman id 43406;
 Thu, 03 Dec 2020 12:42:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kknvx-0005v7-G6
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 12:42:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kknvv-00045z-VV; Thu, 03 Dec 2020 12:42:03 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kknvv-00015A-Lq; Thu, 03 Dec 2020 12:42:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=TkS/3Nu1RBKiO2GK1mYbF+lmLCDon9ThVaW6ItHos8g=; b=0ZcGMieUH5oylchv/0Q1N6RoT+
	chFubiScjuIJhbbcCo/zH4WpyG1MAPRGvKytPqFMxAXYDiWVI2owoeHrLy6Zhe3NHUwrK1ksDqxOq
	ZJmyj7jwwqXjFT+MuYmyk72O9yZCWarv59e4IUXr7esdcEvqK3FWX2feEPvKmI6nO4HA=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Eslam Elnikety <elnikety@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v5 1/4] domctl: introduce a new domain create flag, XEN_DOMCTL_CDF_evtchn_fifo, ...
Date: Thu,  3 Dec 2020 12:41:56 +0000
Message-Id: <20201203124159.3688-2-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203124159.3688-1-paul@xen.org>
References: <20201203124159.3688-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

...to control the visibility of the FIFO event channel operations
(EVTCHNOP_init_control, EVTCHNOP_expand_array, and EVTCHNOP_set_priority) to
the guest.

These operations were added to the public header in commit d2d50c2f308f
("evtchn: add FIFO-based event channel ABI") and the first implementation
appeared in the two subsequent commits: edc8872aeb4a ("evtchn: implement
EVTCHNOP_set_priority and add the set_priority hook") and 88910061ec61
("evtchn: add FIFO-based event channel hypercalls and port ops"). Prior to
that, a guest issuing those operations would receive a return value of
-ENOSYS (not implemented) from Xen. Guests aware of the FIFO operations but
running on an older (pre-4.4) Xen would fall back to using the 2-level event
channel interface upon seeing this return value.

Unfortunately the uncontrollable appearance of these new operations in Xen 4.4
onwards has implications for hibernation of some Linux guests. During resume
from hibernation, there are two kernels involved: the "boot" kernel and the
"resume" kernel. The guest boot kernel may default to use FIFO operations and
instruct Xen via EVTCHNOP_init_control to switch from 2-level to FIFO. On the
other hand, the resume kernel keeps assuming 2-level, because it was hibernated
on a version of Xen that did not support the FIFO operations.

To maintain compatibility it is necessary to make Xen behave as it did
before the new operations were added and hence the code in this patch ensures
that, if XEN_DOMCTL_CDF_evtchn_fifo is not set, the FIFO event channel
operations will again result in -ENOSYS being returned to the guest.

This patch also adds an extra log line into the 'e' key handler output to
call out which event channel ABI is in use by a domain.

NOTE: To maintain current behavior, until a tool-stack option is added to
      control the flag, it is unconditionally set for all domains. A
      subsequent patch will introduce such tool-stack control.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Signed-off-by: Eslam Elnikety <elnikety@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Christian Lindig <christian.lindig@citrix.com>
Cc: David Scott <dave@recoil.org>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v5:
 - Flip the sense of the flag from disabling to enabling, as requested by
   Andrew

v4:
 - New in v4
---
 tools/libs/light/libxl_create.c |  1 +
 tools/ocaml/libs/xc/xenctrl.ml  |  1 +
 tools/ocaml/libs/xc/xenctrl.mli |  1 +
 xen/arch/arm/domain.c           |  3 ++-
 xen/arch/arm/domain_build.c     |  3 ++-
 xen/arch/arm/setup.c            |  3 ++-
 xen/arch/x86/setup.c            |  3 ++-
 xen/common/domain.c             |  2 +-
 xen/common/event_channel.c      | 24 +++++++++++++++++++++---
 xen/include/public/domctl.h     |  4 +++-
 10 files changed, 36 insertions(+), 9 deletions(-)

diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index 321a13e519b5..3ca9f00d6d83 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -607,6 +607,7 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
             .max_evtchn_port = b_info->event_channels,
             .max_grant_frames = b_info->max_grant_frames,
             .max_maptrack_frames = b_info->max_maptrack_frames,
+            .flags = XEN_DOMCTL_CDF_evtchn_fifo,
         };
 
         if (info->type != LIBXL_DOMAIN_TYPE_PV) {
diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index e878699b0a1a..fa5c7b7eb0a2 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -65,6 +65,7 @@ type domain_create_flag =
 	| CDF_XS_DOMAIN
 	| CDF_IOMMU
 	| CDF_NESTED_VIRT
+	| CDF_EVTCHN_FIFO
 
 type domain_create_iommu_opts =
 	| IOMMU_NO_SHAREPT
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index e64907df8e7e..a872002d90cc 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -58,6 +58,7 @@ type domain_create_flag =
   | CDF_XS_DOMAIN
   | CDF_IOMMU
   | CDF_NESTED_VIRT
+  | CDF_EVTCHN_FIFO
 
 type domain_create_iommu_opts =
   | IOMMU_NO_SHAREPT
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 18cafcdda7b1..59f947370053 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -622,7 +622,8 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
     unsigned int max_vcpus;
 
     /* HVM and HAP must be set. IOMMU may or may not be */
-    if ( (config->flags & ~XEN_DOMCTL_CDF_iommu) !=
+    if ( (config->flags &
+          ~(XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_evtchn_fifo)) !=
          (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap) )
     {
         dprintk(XENLOG_INFO, "Unsupported configuration %#x\n",
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index e824ba34b012..13d1e79f1463 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2478,7 +2478,8 @@ void __init create_domUs(void)
         struct domain *d;
         struct xen_domctl_createdomain d_cfg = {
             .arch.gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE,
-            .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap,
+            .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
+                     XEN_DOMCTL_CDF_evtchn_fifo,
             .max_evtchn_port = -1,
             .max_grant_frames = 64,
             .max_maptrack_frames = 1024,
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 7fcff9af2a7e..0267acfca16e 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -805,7 +805,8 @@ void __init start_xen(unsigned long boot_phys_offset,
     struct bootmodule *xen_bootmodule;
     struct domain *dom0;
     struct xen_domctl_createdomain dom0_cfg = {
-        .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap,
+        .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
+                 XEN_DOMCTL_CDF_evtchn_fifo,
         .max_evtchn_port = -1,
         .max_grant_frames = gnttab_dom0_frames(),
         .max_maptrack_frames = -1,
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 30d6f375a3af..e558241c73da 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -738,7 +738,8 @@ static struct domain *__init create_dom0(const module_t *image,
                                          const char *loader)
 {
     struct xen_domctl_createdomain dom0_cfg = {
-        .flags = IS_ENABLED(CONFIG_TBOOT) ? XEN_DOMCTL_CDF_s3_integrity : 0,
+        .flags = XEN_DOMCTL_CDF_evtchn_fifo |
+                 (IS_ENABLED(CONFIG_TBOOT) ? XEN_DOMCTL_CDF_s3_integrity : 0),
         .max_evtchn_port = -1,
         .max_grant_frames = -1,
         .max_maptrack_frames = -1,
diff --git a/xen/common/domain.c b/xen/common/domain.c
index f748806a450b..28592c7c8486 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -307,7 +307,7 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
          ~(XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
            XEN_DOMCTL_CDF_s3_integrity | XEN_DOMCTL_CDF_oos_off |
            XEN_DOMCTL_CDF_xs_domain | XEN_DOMCTL_CDF_iommu |
-           XEN_DOMCTL_CDF_nested_virt) )
+           XEN_DOMCTL_CDF_nested_virt | XEN_DOMCTL_CDF_evtchn_fifo) )
     {
         dprintk(XENLOG_INFO, "Unknown CDF flags %#x\n", config->flags);
         return -EINVAL;
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index dbfba62a4934..91133bf3c263 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -1188,10 +1188,27 @@ static long evtchn_set_priority(const struct evtchn_set_priority *set_priority)
     return ret;
 }
 
+static bool is_fifo_op(int cmd)
+{
+    switch ( cmd )
+    {
+    case EVTCHNOP_init_control:
+    case EVTCHNOP_expand_array:
+    case EVTCHNOP_set_priority:
+        return true;
+    default:
+        return false;
+    }
+}
+
 long do_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
+    if ( !(current->domain->options & XEN_DOMCTL_CDF_evtchn_fifo) &&
+         is_fifo_op(cmd) )
+        return -ENOSYS;
+
     switch ( cmd )
     {
     case EVTCHNOP_alloc_unbound: {
@@ -1568,9 +1585,10 @@ static void domain_dump_evtchn_info(struct domain *d)
     unsigned int port;
     int irq;
 
-    printk("Event channel information for domain %d:\n"
-           "Polling vCPUs: {%*pbl}\n"
-           "    port [p/m/s]\n", d->domain_id, d->max_vcpus, d->poll_mask);
+    printk("Event channel information for %pd:\n", d);
+    printk("ABI: %s\n", d->evtchn_fifo ? "FIFO" : "2-level");
+    printk("Polling vCPUs: {%*pbl}\n"
+           "    port [p/m/s]\n", d->max_vcpus, d->poll_mask);
 
     spin_lock(&d->event_lock);
 
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 666aeb71bf1b..f7149c81a7c2 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -70,9 +70,11 @@ struct xen_domctl_createdomain {
 #define XEN_DOMCTL_CDF_iommu          (1U<<_XEN_DOMCTL_CDF_iommu)
 #define _XEN_DOMCTL_CDF_nested_virt   6
 #define XEN_DOMCTL_CDF_nested_virt    (1U << _XEN_DOMCTL_CDF_nested_virt)
+#define _XEN_DOMCTL_CDF_evtchn_fifo   7
+#define XEN_DOMCTL_CDF_evtchn_fifo    (1U << _XEN_DOMCTL_CDF_evtchn_fifo)
 
 /* Max XEN_DOMCTL_CDF_* constant.  Used for ABI checking. */
-#define XEN_DOMCTL_CDF_MAX XEN_DOMCTL_CDF_nested_virt
+#define XEN_DOMCTL_CDF_MAX XEN_DOMCTL_CDF_evtchn_fifo
 
     uint32_t flags;
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 12:42:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 12:42:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43405.78009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kknvx-0005vJ-Ox; Thu, 03 Dec 2020 12:42:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43405.78009; Thu, 03 Dec 2020 12:42:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kknvx-0005vC-Lu; Thu, 03 Dec 2020 12:42:05 +0000
Received: by outflank-mailman (input) for mailman id 43405;
 Thu, 03 Dec 2020 12:42:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kknvw-0005v2-99
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 12:42:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kknvu-00045v-0r; Thu, 03 Dec 2020 12:42:02 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kknvt-00015A-LG; Thu, 03 Dec 2020 12:42:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=+0nQoHXCtgW9tqQuUMhFAMdszTnfQ1Y5Z2wyMhegY0Y=; b=ED48kwiM8f8v+vyU2LdzBf0+yB
	kfdxuCU6c46KQ6st+n0cOkbrJ7rwmCsit66Qvi2jta7Ldsu90MkZJXKGvfl+z64EOniJDn/Qzme0s
	MQXn+92hyYkVdS+6FOswDBqRyDZxg8bDgp6RRKfXf7pmm5vGPf1vGGuho8j3uVbY7/vs=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 0/4] Xen ABI feature control
Date: Thu,  3 Dec 2020 12:41:55 +0000
Message-Id: <20201203124159.3688-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This series was previously called "evtchn: Introduce a per-guest knob to
control FIFO ABI". It has been extensively re-worked and extended to cover
another ABI feature.

Paul Durrant (4):
  domctl: introduce a new domain create flag,
    XEN_DOMCTL_CDF_evtchn_fifo, ...
  domctl: introduce a new domain create flag,
    XEN_DOMCTL_CDF_evtchn_upcall, ...
  libxl: introduce a 'libxl_xen_abi_features' enumeration...
  xl: introduce a 'xen-abi-features' option...

 docs/man/xl.cfg.5.pod.in         | 50 ++++++++++++++++++++++++++++++++
 tools/include/libxl.h            | 10 +++++++
 tools/libs/light/libxl_arm.c     | 22 +++++++++-----
 tools/libs/light/libxl_create.c  | 31 ++++++++++++++++++++
 tools/libs/light/libxl_types.idl |  7 +++++
 tools/libs/light/libxl_x86.c     | 17 ++++++++++-
 tools/ocaml/libs/xc/xenctrl.ml   |  2 ++
 tools/ocaml/libs/xc/xenctrl.mli  |  2 ++
 tools/xl/xl_parse.c              | 50 ++++++++++++++++++++++++++++++--
 xen/arch/arm/domain.c            |  3 +-
 xen/arch/arm/domain_build.c      |  3 +-
 xen/arch/arm/setup.c             |  3 +-
 xen/arch/x86/domain.c            |  8 +++++
 xen/arch/x86/hvm/hvm.c           |  3 ++
 xen/arch/x86/setup.c             |  4 ++-
 xen/common/domain.c              |  3 +-
 xen/common/event_channel.c       | 24 +++++++++++++--
 xen/include/public/domctl.h      |  6 +++-
 18 files changed, 229 insertions(+), 19 deletions(-)
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Christian Lindig <christian.lindig@citrix.com>
Cc: David Scott <dave@recoil.org>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: Wei Liu <wl@xen.org>
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 12:42:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 12:42:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43407.78034 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kknw0-0005xn-B1; Thu, 03 Dec 2020 12:42:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43407.78034; Thu, 03 Dec 2020 12:42:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kknw0-0005xg-6x; Thu, 03 Dec 2020 12:42:08 +0000
Received: by outflank-mailman (input) for mailman id 43407;
 Thu, 03 Dec 2020 12:42:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kknvz-0005wR-5p
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 12:42:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kknvx-000469-PG; Thu, 03 Dec 2020 12:42:05 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kknvx-00015A-FJ; Thu, 03 Dec 2020 12:42:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=lmkdn59hshdmAjDQmfZLDVWbGtLlFRpvq2lkuaCfgJY=; b=lnfsOqa1+UyaBlTlxXau6pCzBV
	Cmfflqq2/JrDMV/gHL2vnO6Hd1dVp7LZHeKXBThMnOcj1mAmAgP3xe3TMgUCcihcaPefdpoPRNw9p
	SmtW9xPEAtzVbf1g8Umd5YTlCiEMcCC7IPjBBVQgOMwbLbqeGf0NDkpGveQglSpOkft8=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v5 2/4] domctl: introduce a new domain create flag, XEN_DOMCTL_CDF_evtchn_upcall, ...
Date: Thu,  3 Dec 2020 12:41:57 +0000
Message-Id: <20201203124159.3688-3-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203124159.3688-1-paul@xen.org>
References: <20201203124159.3688-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

...to control the visibility of the per-vCPU upcall feature for HVM guests.

Commit 04447f4453c0 ("x86/hvm: add per-vcpu evtchn upcalls") added a mechanism
by which x86 HVM guests can register a vector for each vCPU which will be used
by Xen to signal event channels on that vCPU.

This facility (an HVMOP hypercall) appeared in an uncontrolled fashion, which
has implications for the behaviour of an OS when moving from an older Xen to a
newer one. For instance, the OS may be aware of the per-vCPU upcall feature
but its support for it may be buggy. In such a case the OS will function
perfectly well on the older Xen, but fail (in a potentially non-obvious way)
on the newer Xen.

To maintain compatibility it is necessary to make Xen behave as it did
before the new hypercall was added and hence the code in this patch ensures
that, if XEN_DOMCTL_CDF_evtchn_upcall is not set, the hypercall will again
result in -ENOSYS being returned to the guest.

NOTE: To maintain current behaviour, until a tool-stack option is added to
      control the flag, it is unconditionally set for x86 HVM domains. A
      subsequent patch will introduce such tool-stack control.
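
The gating described above can be sketched in isolation. This is a
minimal model, not the actual Xen code: `mock_domain` and the flag value
are illustrative stand-ins, and only the hypercall's early-exit logic is
reproduced.

```c
#include <errno.h>
#include <stdint.h>

/* Illustrative stand-in for the XEN_DOMCTL_CDF_evtchn_upcall bit. */
#define CDF_EVTCHN_UPCALL (1u << 8)

struct mock_domain {
    uint32_t options;   /* flags recorded at domain creation */
    int is_hvm;
};

/* Models hvmop_set_evtchn_upcall_vector()'s early exits: unless the
 * creation flag was set, the hypercall behaves as if it never existed
 * (-ENOSYS); it remains invalid (-EINVAL) for non-HVM domains. */
static int set_upcall_vector(const struct mock_domain *d)
{
    if (!(d->options & CDF_EVTCHN_UPCALL))
        return -ENOSYS;
    if (!d->is_hvm)
        return -EINVAL;
    return 0;   /* vector registration would proceed here */
}
```

A guest on a domain created without the flag thus sees exactly the
pre-04447f4453c0 behaviour.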

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Christian Lindig <christian.lindig@citrix.com>
Cc: David Scott <dave@recoil.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v5:
 - New in v5
---
 tools/libs/light/libxl_x86.c    | 7 ++++++-
 tools/ocaml/libs/xc/xenctrl.ml  | 1 +
 tools/ocaml/libs/xc/xenctrl.mli | 1 +
 xen/arch/x86/domain.c           | 8 ++++++++
 xen/arch/x86/hvm/hvm.c          | 3 +++
 xen/arch/x86/setup.c            | 1 +
 xen/common/domain.c             | 3 ++-
 xen/include/public/domctl.h     | 4 +++-
 8 files changed, 25 insertions(+), 3 deletions(-)

diff --git a/tools/libs/light/libxl_x86.c b/tools/libs/light/libxl_x86.c
index 86d272999d67..f7217b422404 100644
--- a/tools/libs/light/libxl_x86.c
+++ b/tools/libs/light/libxl_x86.c
@@ -5,7 +5,12 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
                                       libxl_domain_config *d_config,
                                       struct xen_domctl_createdomain *config)
 {
-    switch(d_config->c_info.type) {
+    libxl_domain_create_info *info = &d_config->c_info;
+
+    if (info->type == LIBXL_DOMAIN_TYPE_HVM)
+        config->flags |= XEN_DOMCTL_CDF_evtchn_upcall;
+
+    switch(info->type) {
     case LIBXL_DOMAIN_TYPE_HVM:
         config->arch.emulation_flags = (XEN_X86_EMU_ALL & ~XEN_X86_EMU_VPCI);
         break;
diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index fa5c7b7eb0a2..04284c364108 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -66,6 +66,7 @@ type domain_create_flag =
 	| CDF_IOMMU
 	| CDF_NESTED_VIRT
 	| CDF_EVTCHN_FIFO
+	| CDF_EVTCHN_UPCALL
 
 type domain_create_iommu_opts =
 	| IOMMU_NO_SHAREPT
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index a872002d90cc..e40759464ae5 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -59,6 +59,7 @@ type domain_create_flag =
   | CDF_IOMMU
   | CDF_NESTED_VIRT
   | CDF_EVTCHN_FIFO
+  | CDF_EVTCHN_UPCALL
 
 type domain_create_iommu_opts =
   | IOMMU_NO_SHAREPT
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 1b894d0124d7..e7f83880219d 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -662,11 +662,19 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
     }
 
     if ( !hvm )
+    {
+        if ( config->flags & XEN_DOMCTL_CDF_evtchn_upcall )
+        {
+            dprintk(XENLOG_INFO, "Per-vCPU event channel vector only supported for HVM guests\n");
+            return -EINVAL;
+        }
+
         /*
          * It is only meaningful for XEN_DOMCTL_CDF_oos_off to be clear
          * for HVM guests.
          */
         config->flags |= XEN_DOMCTL_CDF_oos_off;
+    }
 
     if ( nested_virt && !hap )
     {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 54e32e4fe85c..7ffc42a7282e 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4037,6 +4037,9 @@ static int hvmop_set_evtchn_upcall_vector(
     struct domain *d = current->domain;
     struct vcpu *v;
 
+    if ( !(d->options & XEN_DOMCTL_CDF_evtchn_upcall) )
+        return -ENOSYS;
+
     if ( !is_hvm_domain(d) )
         return -EINVAL;
 
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index e558241c73da..3ff9a25eede6 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -751,6 +751,7 @@ static struct domain *__init create_dom0(const module_t *image,
     if ( opt_dom0_pvh )
     {
         dom0_cfg.flags |= (XEN_DOMCTL_CDF_hvm |
+                           XEN_DOMCTL_CDF_evtchn_upcall |
                            ((hvm_hap_supported() && !opt_dom0_shadow) ?
                             XEN_DOMCTL_CDF_hap : 0));
 
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 28592c7c8486..1ff2603425a3 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -307,7 +307,8 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
          ~(XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
            XEN_DOMCTL_CDF_s3_integrity | XEN_DOMCTL_CDF_oos_off |
            XEN_DOMCTL_CDF_xs_domain | XEN_DOMCTL_CDF_iommu |
-           XEN_DOMCTL_CDF_nested_virt | XEN_DOMCTL_CDF_evtchn_fifo) )
+           XEN_DOMCTL_CDF_nested_virt | XEN_DOMCTL_CDF_evtchn_fifo |
+           XEN_DOMCTL_CDF_evtchn_upcall ) )
     {
         dprintk(XENLOG_INFO, "Unknown CDF flags %#x\n", config->flags);
         return -EINVAL;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index f7149c81a7c2..f5fe43a55662 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -72,9 +72,11 @@ struct xen_domctl_createdomain {
 #define XEN_DOMCTL_CDF_nested_virt    (1U << _XEN_DOMCTL_CDF_nested_virt)
 #define _XEN_DOMCTL_CDF_evtchn_fifo   7
 #define XEN_DOMCTL_CDF_evtchn_fifo    (1U << _XEN_DOMCTL_CDF_evtchn_fifo)
+#define _XEN_DOMCTL_CDF_evtchn_upcall 8
+#define XEN_DOMCTL_CDF_evtchn_upcall  (1U << _XEN_DOMCTL_CDF_evtchn_upcall)
 
 /* Max XEN_DOMCTL_CDF_* constant.  Used for ABI checking. */
-#define XEN_DOMCTL_CDF_MAX XEN_DOMCTL_CDF_evtchn_fifo
+#define XEN_DOMCTL_CDF_MAX XEN_DOMCTL_CDF_evtchn_upcall
 
     uint32_t flags;
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 12:42:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 12:42:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43409.78058 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kknw2-00062D-B8; Thu, 03 Dec 2020 12:42:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43409.78058; Thu, 03 Dec 2020 12:42:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kknw2-000621-7d; Thu, 03 Dec 2020 12:42:10 +0000
Received: by outflank-mailman (input) for mailman id 43409;
 Thu, 03 Dec 2020 12:42:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kknw0-0005yS-JC
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 12:42:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kknvz-00046M-Nc; Thu, 03 Dec 2020 12:42:07 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kknvz-00015A-Fm; Thu, 03 Dec 2020 12:42:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=CGl8BZR/+SpIqigLmobGcANzj6zAGuP+UKJDCJC+Ltg=; b=SbbI6XX5sX2gwuUxhWLZLT1pAX
	4ZL8iL+x0MoLHveAY1p0FQnICSlV/ZcM/nIq3r996pGP3fUHPfGg0mJ7ieDkLcD8k9KmOrqQi6zyN
	I4gxHGQd/gRl5BZpvbYI+vxBZKnK8rPcIE4sGGTHyZI5PEaumxUs1Gt4GFyU53cZHGhs=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v5 4/4] xl: introduce a 'xen-abi-features' option...
Date: Thu,  3 Dec 2020 12:41:59 +0000
Message-Id: <20201203124159.3688-5-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203124159.3688-1-paul@xen.org>
References: <20201203124159.3688-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... to control which features of the Xen ABI are enabled in
'libxl_domain_build_info', and hence exposed to the guest.
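
The '!'-prefix handling this option introduces boils down to swapping
which bitmap a feature name is applied to. A standalone sketch of that
logic (plain 64-bit masks stand in for libxl_bitmap, and the name-to-bit
mapping is illustrative):

```c
#include <stdint.h>
#include <string.h>

/* Apply one xen_abi_features list item to enable/disable masks,
 * honouring a leading '!'. Returns the bit index used, or -1 for an
 * unknown feature name. */
static int apply_feature(const char *item, uint64_t *enable,
                         uint64_t *disable)
{
    uint64_t *set = enable, *reset = disable;
    int bit;

    if (*item == '!') {   /* '!' flips the set/reset targets */
        set = disable;
        reset = enable;
        item++;
    }

    if (strcmp(item, "evtchn_fifo") == 0)
        bit = 0;
    else if (strcmp(item, "evtchn_upcall") == 0)
        bit = 1;
    else
        return -1;

    *set |= 1ull << bit;
    *reset &= ~(1ull << bit);
    return bit;
}
```

Clearing the opposite mask means a later list item overrides an earlier
one, so "all", "!evtchn_upcall" ends with the upcall bit only in the
disable mask.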

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>

v5:
 - New in v5
---
 docs/man/xl.cfg.5.pod.in | 50 ++++++++++++++++++++++++++++++++++++++++
 tools/xl/xl_parse.c      | 50 ++++++++++++++++++++++++++++++++++++++--
 2 files changed, 98 insertions(+), 2 deletions(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 3f0f8de1e988..b42ab8ba9f60 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -1649,6 +1649,56 @@ This feature is a B<technology preview>.
 
 =back
 
+=item B<xen_abi_features=[ "STRING", "STRING", ...]>
+
+The features of the Xen ABI exposed to the guest. The following features
+may be specified:
+
+=over 4
+
+=item B<evtchn_fifo>
+
+A new event channel ABI was introduced in Xen 4.4. Moving a guest from an
+earlier Xen to Xen 4.4 onwards may expose bugs in the guest support for
+this ABI. Disabling this feature hides the ABI from the guest and hence
+may be used as a workaround for such bugs.
+
+The feature is enabled by default.
+
+=item B<evtchn_upcall>
+
+B<x86 HVM only>. A new hypercall to specify per-VCPU interrupt vectors to use
+for event channel upcalls in HVM guests was added in Xen 4.6. Moving a guest
+from an earlier Xen to Xen 4.6 onwards may expose bugs in the guest support
+for this hypercall. Disabling this feature hides the hypercall from the
+guest and hence may be used as a workaround for such bugs.
+
+The feature is enabled by default for B<x86 HVM> guests. Note that it is
+considered an error to enable this feature for B<Arm> or B<x86 PV> guests.
+
+=item B<all>
+
+This is a special value that enables all available features.
+
+=back
+
+Features can be disabled by prefixing the name with '!'. So, for example,
+to enable all features except B<evtchn_upcall>, specify:
+
+=over 4
+
+B<xen-abi-features=[ "all", "!evtchn_upcall" ]>
+
+=back
+
+Or, to simply enable the default features except B<evtchn_fifo>, specify:
+
+=over 4
+
+B<xen-abi-features=[ "!evtchn_fifo" ]>
+
+=back
+
 =back
 
 =head2 Paravirtualised (PV) Guest Specific Options
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index cae8eb679c5a..566e09f938f4 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1216,8 +1216,9 @@ void parse_config_data(const char *config_source,
     XLU_ConfigList *cpus, *vbds, *nics, *pcis, *cvfbs, *cpuids, *vtpms,
                    *usbctrls, *usbdevs, *p9devs, *vdispls, *pvcallsifs_devs;
     XLU_ConfigList *channels, *ioports, *irqs, *iomem, *viridian, *dtdevs,
-                   *mca_caps;
-    int num_ioports, num_irqs, num_iomem, num_cpus, num_viridian, num_mca_caps;
+                   *mca_caps, *features;
+    int num_ioports, num_irqs, num_iomem, num_cpus, num_viridian, num_mca_caps,
+        num_features;
     int pci_power_mgmt = 0;
     int pci_msitranslate = 0;
     int pci_permissive = 0;
@@ -2737,6 +2738,51 @@ skip_usbdev:
     xlu_cfg_get_defbool(config, "xend_suspend_evtchn_compat",
                         &c_info->xend_suspend_evtchn_compat, 0);
 
+    switch (xlu_cfg_get_list(config, "xen_abi_features",
+                             &features, &num_features, 1))
+    {
+    case 0: /* Success */
+        if (num_features) {
+            libxl_bitmap_alloc(ctx, &b_info->feature_enable,
+                               LIBXL_BUILDINFO_FEATURE_ENABLE_DISABLE_WIDTH);
+            libxl_bitmap_alloc(ctx, &b_info->feature_disable,
+                               LIBXL_BUILDINFO_FEATURE_ENABLE_DISABLE_WIDTH);
+        }
+        for (i = 0; i < num_features; i++) {
+            buf = xlu_cfg_get_listitem(features, i);
+
+            if (strcmp(buf, "all") == 0)
+                libxl_bitmap_set_any(&b_info->feature_enable);
+            else {
+                libxl_bitmap *s = &b_info->feature_enable;
+                libxl_bitmap *r = &b_info->feature_disable;
+                libxl_xen_abi_feature f;
+
+                if (*buf == '!') {
+                    s = &b_info->feature_disable;
+                    r = &b_info->feature_enable;
+                    buf++;
+                }
+
+                e = libxl_xen_abi_feature_from_string(buf, &f);
+                if (e) {
+                    fprintf(stderr,
+                            "xl: Unknown Xen ABI feature '%s'\n",
+                            buf);
+                    exit(-ERROR_FAIL);
+                }
+
+                libxl_bitmap_set(s, f);
+                libxl_bitmap_reset(r, f);
+            }
+        }
+        break;
+    case ESRCH: break; /* Option not present */
+    default:
+        fprintf(stderr,"xl: Unable to parse Xen ABI features.\n");
+        exit(-ERROR_FAIL);
+    }
+
     xlu_cfg_destroy(config);
 }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 12:42:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 12:42:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43408.78043 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kknw0-0005yr-W2; Thu, 03 Dec 2020 12:42:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43408.78043; Thu, 03 Dec 2020 12:42:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kknw0-0005yN-Ja; Thu, 03 Dec 2020 12:42:08 +0000
Received: by outflank-mailman (input) for mailman id 43408;
 Thu, 03 Dec 2020 12:42:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kknvz-0005xT-Qp
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 12:42:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kknvy-00046E-NK; Thu, 03 Dec 2020 12:42:06 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kknvy-00015A-FT; Thu, 03 Dec 2020 12:42:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=yJbK+YZtuE0HTqrQU4QFK2ag5uh4t73qAq6ThSOAdyw=; b=eKUdkgqtJyfjtmonjYjyorCeu3
	ABafYeN9MyRFS86S/K/zCCiqsIOQHiW6P/sAbg9E8Ovb0INx+5rtUK53pwHL41yo1/a57jsil11lL
	GDMQxLmvXId8JpTBXBGj+p9nglTAedkLaKTk5ha/KP2dr/O3CnwMLgLXpoNkFRdfWy9k=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v5 3/4] libxl: introduce a 'libxl_xen_abi_features' enumeration...
Date: Thu,  3 Dec 2020 12:41:58 +0000
Message-Id: <20201203124159.3688-4-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203124159.3688-1-paul@xen.org>
References: <20201203124159.3688-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... and bitmaps to enable or disable features.

This patch adds a new 'libxl_xen_abi_features' enumeration into the IDL which
specifies features of the Xen ABI which may be optionally enabled or disabled
via new 'feature_enable' and 'feature_disable' bitmaps added into
'libxl_domain_build_info'.

The initially defined features are enabled by default (for the relevant
architectures) and so the corresponding flags in
'struct xen_domctl_createdomain' are set if the features are missing from
'feature_disable' rather than if they are present in 'feature_enable'.
Checks are, however, added to make sure that features are not specifically
enabled in cases where they are not supported.

NOTE: A subsequent patch will add an option into xl.cfg(5) to control whether
      Xen ABI features are enabled or disabled.
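
The default-on derivation described above can be modelled as follows.
This is a simplified sketch: the real code uses libxl_bitmap, while here
plain masks and illustrative flag values stand in.

```c
#include <stdint.h>

/* Feature bit indices (matching the IDL enumeration order). */
#define FEATURE_EVTCHN_FIFO   0
#define FEATURE_EVTCHN_UPCALL 1

/* Illustrative stand-ins for the XEN_DOMCTL_CDF_* flag values. */
#define CDF_EVTCHN_FIFO   (1u << 7)
#define CDF_EVTCHN_UPCALL (1u << 8)

/* A default-on feature contributes its CDF flag unless it appears in
 * the disable mask; the enable mask is only used for validity checks. */
static uint32_t derive_flags(uint64_t disable_mask)
{
    uint32_t flags = 0;

    if (!(disable_mask & (1ull << FEATURE_EVTCHN_FIFO)))
        flags |= CDF_EVTCHN_FIFO;
    if (!(disable_mask & (1ull << FEATURE_EVTCHN_UPCALL)))
        flags |= CDF_EVTCHN_UPCALL;
    return flags;
}
```

So an empty (or unset) disable bitmap yields both flags, preserving the
existing behaviour for configurations that never mention the option.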

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>

v5:
 - New in v5
---
 tools/include/libxl.h            | 10 ++++++++++
 tools/libs/light/libxl_arm.c     | 22 +++++++++++++++-------
 tools/libs/light/libxl_create.c  | 32 +++++++++++++++++++++++++++++++-
 tools/libs/light/libxl_types.idl |  7 +++++++
 tools/libs/light/libxl_x86.c     | 14 ++++++++++++--
 5 files changed, 75 insertions(+), 10 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index eaffccb30f37..b328a5621e6f 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -451,6 +451,16 @@
  */
 #define LIBXL_HAVE_VIRIDIAN_EX_PROCESSOR_MASKS 1
 
+/*
+ * LIBXL_HAVE_BUILDINFO_XEN_ABI_FEATURE indicates that the
+ * libxl_xen_abi_feature enumeration is defined and that
+ * libxl_domain_build_info has feature_enable and _disable bitmaps
+ * of the specified width. These bitmaps are used to enable or disable
+ * features of the Xen ABI (enumerated by the new type) for a domain.
+ */
+#define LIBXL_HAVE_BUILDINFO_XEN_ABI_FEATURE 1
+#define LIBXL_BUILDINFO_FEATURE_ENABLE_DISABLE_WIDTH 64
+
 /*
  * libxl ABI compatibility
  *
diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index 66e8a065fe67..69676340a661 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -28,19 +28,27 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
     unsigned int i;
     uint32_t vuart_irq;
     bool vuart_enabled = false;
+    libxl_domain_build_info *b_info = &d_config->b_info;
+    libxl_xen_abi_feature f = LIBXL_XEN_ABI_FEATURE_EVTCHN_UPCALL;
+
+    if (libxl_bitmap_test(&b_info->feature_enable, f)) {
+        LOG(ERROR, "unsupported Xen ABI feature '%s'",
+            libxl_xen_abi_feature_to_string(f));
+        return ERROR_FAIL;
+    }
 
     /*
      * If pl011 vuart is enabled then increment the nr_spis to allow allocation
      * of SPI VIRQ for pl011.
      */
-    if (d_config->b_info.arch_arm.vuart == LIBXL_VUART_TYPE_SBSA_UART) {
+    if (b_info->arch_arm.vuart == LIBXL_VUART_TYPE_SBSA_UART) {
         nr_spis += (GUEST_VPL011_SPI - 32) + 1;
         vuart_irq = GUEST_VPL011_SPI;
         vuart_enabled = true;
     }
 
-    for (i = 0; i < d_config->b_info.num_irqs; i++) {
-        uint32_t irq = d_config->b_info.irqs[i];
+    for (i = 0; i < b_info->num_irqs; i++) {
+        uint32_t irq = b_info->irqs[i];
         uint32_t spi;
 
         /*
@@ -72,7 +80,7 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
     config->arch.nr_spis = nr_spis;
     LOG(DEBUG, " - Allocate %u SPIs", nr_spis);
 
-    switch (d_config->b_info.arch_arm.gic_version) {
+    switch (b_info->arch_arm.gic_version) {
     case LIBXL_GIC_VERSION_DEFAULT:
         config->arch.gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE;
         break;
@@ -84,11 +92,11 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         break;
     default:
         LOG(ERROR, "Unknown GIC version %d",
-            d_config->b_info.arch_arm.gic_version);
+            b_info->arch_arm.gic_version);
         return ERROR_FAIL;
     }
 
-    switch (d_config->b_info.tee) {
+    switch (b_info->tee) {
     case LIBXL_TEE_TYPE_NONE:
         config->arch.tee_type = XEN_DOMCTL_CONFIG_TEE_NONE;
         break;
@@ -97,7 +105,7 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         break;
     default:
         LOG(ERROR, "Unknown TEE type %d",
-            d_config->b_info.tee);
+            b_info->tee);
         return ERROR_FAIL;
     }
 
diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index 3ca9f00d6d83..8cf7fd5f6d1b 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -587,6 +587,7 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
     struct xs_permissions noperm[1];
     xs_transaction_t t = 0;
     libxl_vminfo *vm_list;
+    libxl_xen_abi_feature f;
 
     /* convenience aliases */
     libxl_domain_create_info *info = &d_config->c_info;
@@ -607,9 +608,38 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
             .max_evtchn_port = b_info->event_channels,
             .max_grant_frames = b_info->max_grant_frames,
             .max_maptrack_frames = b_info->max_maptrack_frames,
-            .flags = XEN_DOMCTL_CDF_evtchn_fifo,
         };
 
+        libxl_for_each_set_bit(f, b_info->feature_enable) {
+            if (!libxl_xen_abi_feature_to_string(f)) { /* check validity */
+                LOGED(ERROR, *domid, "unknown Xen ABI feature enabled");
+                rc = ERROR_FAIL;
+                goto out;
+            }
+            if (libxl_bitmap_test(&b_info->feature_disable, f)) {
+                LOGED(ERROR, *domid, "Xen ABI feature '%s' both enabled and disabled",
+                      libxl_xen_abi_feature_to_string(f));
+                rc = ERROR_FAIL;
+                goto out;
+            }
+            LOGD(DETAIL, *domid, "enable feature: '%s'",
+                 libxl_xen_abi_feature_to_string(f));
+        }
+
+        libxl_for_each_set_bit(f, b_info->feature_disable) {
+            if (!libxl_xen_abi_feature_to_string(f)) { /* check validity */
+                LOGED(ERROR, *domid, "unknown Xen ABI feature disabled");
+                rc = ERROR_FAIL;
+                goto out;
+            }
+            LOGD(DETAIL, *domid, "disable feature: '%s'",
+                 libxl_xen_abi_feature_to_string(f));
+        }
+
+        if (!libxl_bitmap_test(&b_info->feature_disable,
+                               LIBXL_XEN_ABI_FEATURE_EVTCHN_FIFO))
+            create.flags |= XEN_DOMCTL_CDF_evtchn_fifo;
+
         if (info->type != LIBXL_DOMAIN_TYPE_PV) {
             create.flags |= XEN_DOMCTL_CDF_hvm;
 
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 05324736b744..3c50724b64cd 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -477,6 +477,11 @@ libxl_tee_type = Enumeration("tee_type", [
     (1, "optee")
     ], init_val = "LIBXL_TEE_TYPE_NONE")
 
+libxl_xen_abi_feature = Enumeration("xen_abi_feature", [
+    (0, "evtchn_fifo"),
+    (1, "evtchn_upcall")
+    ])
+
 libxl_rdm_reserve = Struct("rdm_reserve", [
     ("strategy",    libxl_rdm_reserve_strategy),
     ("policy",      libxl_rdm_reserve_policy),
@@ -559,6 +564,8 @@ libxl_domain_build_info = Struct("domain_build_info",[
     ("apic",             libxl_defbool),
     ("dm_restrict",      libxl_defbool),
     ("tee",              libxl_tee_type),
+    ("feature_enable",   libxl_bitmap),
+    ("feature_disable",  libxl_bitmap),
     ("u", KeyedUnion(None, libxl_domain_type, "type",
                 [("hvm", Struct(None, [("firmware",         string),
                                        ("bios",             libxl_bios_type),
diff --git a/tools/libs/light/libxl_x86.c b/tools/libs/light/libxl_x86.c
index f7217b422404..39a9d3cbf9f8 100644
--- a/tools/libs/light/libxl_x86.c
+++ b/tools/libs/light/libxl_x86.c
@@ -6,9 +6,19 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
                                       struct xen_domctl_createdomain *config)
 {
     libxl_domain_create_info *info = &d_config->c_info;
+    libxl_domain_build_info *b_info = &d_config->b_info;
+    libxl_xen_abi_feature f = LIBXL_XEN_ABI_FEATURE_EVTCHN_UPCALL;
 
-    if (info->type == LIBXL_DOMAIN_TYPE_HVM)
-        config->flags |= XEN_DOMCTL_CDF_evtchn_upcall;
+    if (info->type != LIBXL_DOMAIN_TYPE_HVM &&
+        libxl_bitmap_test(&b_info->feature_enable, f)) {
+        LOG(ERROR, "unsupported Xen ABI feature '%s'",
+            libxl_xen_abi_feature_to_string(f));
+        return ERROR_FAIL;
+    }
+
+    if (info->type == LIBXL_DOMAIN_TYPE_HVM &&
+        !libxl_bitmap_test(&b_info->feature_disable, f))
+            config->flags |= XEN_DOMCTL_CDF_evtchn_upcall;
 
     switch(info->type) {
     case LIBXL_DOMAIN_TYPE_HVM:
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 12:57:40 2020
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 3/8] xen/arm: revert patch related to XArray
Thread-Topic: [PATCH v2 3/8] xen/arm: revert patch related to XArray
Thread-Index: AQHWxBX3rCwwjKDJrkuX4ODVkz/db6nj2uAAgAGEjAA=
Date: Thu, 3 Dec 2020 12:57:05 +0000
Message-ID: <2B1A7090-F07C-4DF9-BDEC-6E5A2D715DB4@arm.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <612c1adabc1c26a539abf0dc05ea20b51e66e85f.1606406359.git.rahul.singh@arm.com>
 <266b918c-b9c4-e067-b8dc-4e879c913af5@xen.org>
In-Reply-To: <266b918c-b9c4-e067-b8dc-4e879c913af5@xen.org>
Accept-Language: en-US
Content-Language: en-US

Hello Julien,

> On 2 Dec 2020, at 1:46 pm, Julien Grall <julien@xen.org> wrote:
>
> Hi Rahul,
>
> On 26/11/2020 17:02, Rahul Singh wrote:
>> XArray is not implemented in Xen, so revert the patch that introduced the
>> XArray code in the SMMUv3 driver.
>
> Similar to the atomic revert, you are explaining why the revert happened
> but not the consequences. I think it is quite important to have them
> outlined in the commit message, as it looks like it means the SMMU driver
> would not scale.

Ok, I will add that.
>
>> Once XArray is implemented in Xen, this patch can be re-applied.
>
> What's the plan for that?

As of now there is no plan for XArray from our side.
Is there a requirement for an XArray implementation in Xen?

Regards,
Rahul

>
> Cheers,
>
> --
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 12:59:47 2020
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 4/8] xen/arm: Remove support for MSI on SMMUv3
Thread-Topic: [PATCH v2 4/8] xen/arm: Remove support for MSI on SMMUv3
Thread-Index: AQHWxBX8l+Fkiiynvkqd+dcwplqzQKni/UOAgAAB/QCAANIPAIAAEHgAgAF+SgA=
Date: Thu, 3 Dec 2020 12:59:22 +0000
Message-ID: <EA25A499-E697-4B7B-AEBF-3842B0C95AE5@arm.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <cfc6cbe23f05162d5c62df9db09fef3f8e0b8e14.1606406359.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2012011621380.1100@sstabellini-ThinkPad-T480s>
 <alpine.DEB.2.21.2012011639230.1100@sstabellini-ThinkPad-T480s>
 <D79D7DC5-649D-4517-A8CA-B13632595DA5@arm.com>
 <5689cfe7-ca16-6540-d394-00d3f60f4f5f@xen.org>
In-Reply-To: <5689cfe7-ca16-6540-d394-00d3f60f4f5f@xen.org>
Accept-Language: en-US
Content-Language: en-US

Hello Julien,

> On 2 Dec 2020, at 2:11 pm, Julien Grall <julien@xen.org> wrote:
>
> Hi Rahul,
>
> On 02/12/2020 13:12, Rahul Singh wrote:
>> Hello Stefano,
>>> On 2 Dec 2020, at 12:40 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>
>>> On Tue, 1 Dec 2020, Stefano Stabellini wrote:
>>>> On Thu, 26 Nov 2020, Rahul Singh wrote:
>>>>> Xen does not support MSI on Arm platforms, therefore remove the MSI
>>>>> support from the SMMUv3 driver.
>>>>>
>>>>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>>>>
>>>> I wonder if it makes sense to #ifdef CONFIG_MSI this code instead of
>>>> removing it completely.
>>>
>>> One more thought: could this patch be achieved by reverting
>>> 166bdbd23161160f2abcea70621adba179050bee? If this patch can be done
>>> with a couple of reverts, it would be great to say so in the commit
>>> message.
>>>
>> Ok, I will add that in the next version.
>>>
>>>> In the past, we tried to keep the entire file as textually similar to
>>>> the original Linux driver as possible to make it easier to backport
>>>> features and fixes. So, in this case we would probably not even use an
>>>> #ifdef but maybe something like:
>>>>
>>>>  if (/* msi_enabled */ 0)
>>>>      return;
>>>>
>>>> at the beginning of arm_smmu_setup_msis.
>>>>
>>>>
>>>> However, that strategy didn't actually work very well because backports
>>>> have proven difficult to do anyway. So at that point we might as well
>>>> at least have clean code in Xen and do the changes properly.
>
> It was difficult because Linux decided to rework how IOMMU drivers work. I
> agree the risk is still there, and therefore clean code would be better,
> with some caveats (see below).
>
>> The main reason to remove features/code that are not usable in Xen is to
>> have clean code.
>
> I agree that short term this feature will not be usable. However, I think
> we need a {medium, long}-term plan to avoid extra effort in the future,
> because the driver may evolve in a way that makes reverting the revert
> impossible.
>
> Therefore I would prefer to keep both the MSI and PCI ATS code present, as
> they are going to be useful/necessary on some platforms. It doesn't matter
> that they don't work yet, because the driver will be in tech preview.

Ok. As Stefano also agreed, I will keep the PCI ATS and MSI functionality
in the next version.

Regards,
Rahul

>
> Cheers,
>
> --
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 13:00:07 2020
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Oleksandr Andrushchenko'" <Oleksandr_Andrushchenko@epam.com>,
	<xen-devel@lists.xenproject.org>
Cc: "'Paul Durrant'" <pdurrant@amazon.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Wei Liu'" <wl@xen.org>,
	"'Anthony PERARD'" <anthony.perard@citrix.com>
References: <20201124080159.11912-1-paul@xen.org> <20201124080159.11912-2-paul@xen.org> <43e4db29-744e-89b6-462d-b6d129fdcb08@epam.com>
In-Reply-To: <43e4db29-744e-89b6-462d-b6d129fdcb08@epam.com>
Subject: RE: [PATCH v4 01/23] xl / libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
Date: Thu, 3 Dec 2020 13:00:03 -0000
Message-ID: <009d01d6c974$3787e310$a697a930$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQMdgDRK4dMgv8Gkq5tfso7ziJ+CnAGMb7+NAhfTW7inOsUlYA==

> -----Original Message-----
> From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
> Sent: 01 December 2020 12:33
> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org
> Cc: Paul Durrant <pdurrant@amazon.com>; Ian Jackson <iwj@xenproject.org>; Wei Liu <wl@xen.org>;
> Anthony PERARD <anthony.perard@citrix.com>
> Subject: Re: [PATCH v4 01/23] xl / libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
> 
> Hi, Paul!
> 
> On 11/24/20 10:01 AM, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > The seemingly arbitrary use of 'pci' and 'pcidev' in the code in libxl_pci.c
> > is confusing and also compromises use of some macros used for other device
> > types. Indeed it seems that DEFINE_DEVICE_TYPE_STRUCT_X exists solely because
> > of this duality.
> >
> > This patch purges use of 'pcidev' from the libxl code, allowing evaluation of
> > DEFINE_DEVICE_TYPE_STRUCT_X to be replaced with DEFINE_DEVICE_TYPE_STRUCT,
> > hence allowing removal of the former.
> >
> > For consistency the xl and libs/util code is also modified, but in this case
> > it is purely cosmetic.
> >
> > NOTE: Some of the more gross formatting errors (such as lack of spaces after
> >        keywords) that came into context have been fixed in libxl_pci.c.
> >
> > Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> > ---
> > Cc: Ian Jackson <iwj@xenproject.org>
> > Cc: Wei Liu <wl@xen.org>
> > Cc: Anthony PERARD <anthony.perard@citrix.com>
> > ---
> >   tools/include/libxl.h             |  17 +-
> >   tools/libs/light/libxl_create.c   |   6 +-
> >   tools/libs/light/libxl_dm.c       |  18 +-
> >   tools/libs/light/libxl_internal.h |  45 ++-
> >   tools/libs/light/libxl_pci.c      | 582 +++++++++++++++++++-------------------
> >   tools/libs/light/libxl_types.idl  |   2 +-
> >   tools/libs/util/libxlu_pci.c      |  36 +--
> >   tools/xl/xl_parse.c               |  28 +-
> >   tools/xl/xl_pci.c                 |  68 ++---
> >   tools/xl/xl_sxp.c                 |  12 +-
> >   10 files changed, 409 insertions(+), 405 deletions(-)
> >
> > diff --git a/tools/include/libxl.h b/tools/include/libxl.h
> > index 1ea5b4f446..fbe4c81ba5 100644
> > --- a/tools/include/libxl.h
> > +++ b/tools/include/libxl.h
> > @@ -445,6 +445,13 @@
> >   #define LIBXL_HAVE_DISK_SAFE_REMOVE 1
> >
> [snip]
> > -/* Scan through /sys/.../pciback/slots looking for pcidev's BDF */
> > -static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pcidev)
> > +/* Scan through /sys/.../pciback/slots looking for pci's BDF */
> > +static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pci)
> >   {
> >       FILE *f;
> >       int rc = 0;
> > @@ -635,11 +635,11 @@ static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pcidev)
> >           return ERROR_FAIL;
> >       }
> >
> > -    while(fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func)==4) {
> > -        if(dom == pcidev->domain
> > -           && bus == pcidev->bus
> > -           && dev == pcidev->dev
> > -           && func == pcidev->func) {
> > +    while (fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func)==4) {
> So, then you can probably put spaces around "4" if touching this line

Oh yes. Will do.

> > +        if (dom == pci->domain
> > +            && bus == pci->bus
> > +            && dev == pci->dev
> > +            && func == pci->func) {
> >               rc = 1;
> >               goto out;
> >           }
> > @@ -649,7 +649,7 @@ out:
> >       return rc;
> >   }
> >
> 
> Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Thanks,

  Paul



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 13:15:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 13:15:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43458.78109 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkoSL-0001Ls-V3; Thu, 03 Dec 2020 13:15:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43458.78109; Thu, 03 Dec 2020 13:15:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkoSL-0001Ll-Rm; Thu, 03 Dec 2020 13:15:33 +0000
Received: by outflank-mailman (input) for mailman id 43458;
 Thu, 03 Dec 2020 13:15:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yflw=FH=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kkoSL-0001Lg-2d
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 13:15:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 270ee19b-4189-44c3-ad8a-878b09f60b4b;
 Thu, 03 Dec 2020 13:15:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CE182AC65;
 Thu,  3 Dec 2020 13:15:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 270ee19b-4189-44c3-ad8a-878b09f60b4b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607001331; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=LAnMPRGMjiZ34oYKcSxsgRANcqltXHwXg5c8BdJLG2E=;
	b=V0WgZ1/hDHoPQEMvAcYm/PAWQ0Mz+y7qlQ2nuQsCH+PdUi9pKV2SyUnZDGrq0mKuAPh1S/
	QBUPveDFRYuH/3vj11jcfOfqefmL7X8eCGV8q0jmL3uOR88cIPJjLJ98wgyyCgFbvW0875
	eQXknPFD80PDJiGY532dfMV8wz1PwQY=
To: Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott
 <dave@recoil.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Wei Liu <wl@xen.org>
References: <20201203124159.3688-1-paul@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v5 0/4] Xen ABI feature control
Message-ID: <7417f158-2cad-3909-2676-f9d5a90f4202@suse.com>
Date: Thu, 3 Dec 2020 14:15:29 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201203124159.3688-1-paul@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="pqkGRCiFSHE5wcRBaS0xucWU9Q3STh3Mh"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--pqkGRCiFSHE5wcRBaS0xucWU9Q3STh3Mh
Content-Type: multipart/mixed; boundary="bos9UewwtCQ7EMCE42hqcxFdw3FCsqRav";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott
 <dave@recoil.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Wei Liu <wl@xen.org>
Message-ID: <7417f158-2cad-3909-2676-f9d5a90f4202@suse.com>
Subject: Re: [PATCH v5 0/4] Xen ABI feature control
References: <20201203124159.3688-1-paul@xen.org>
In-Reply-To: <20201203124159.3688-1-paul@xen.org>

--bos9UewwtCQ7EMCE42hqcxFdw3FCsqRav
Content-Type: multipart/mixed;
 boundary="------------27780469BC73846B811AE500"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------27780469BC73846B811AE500
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 03.12.20 13:41, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
>
> This series was previously called "evtchn: Introduce a per-guest knob to
> control FIFO ABI". It has been extensively re-worked and extended to cover
> another ABI feature.
>
> Paul Durrant (4):
>    domctl: introduce a new domain create flag,
>      XEN_DOMCTL_CDF_evtchn_fifo, ...
>    domctl: introduce a new domain create flag,
>      XEN_DOMCTL_CDF_evtchn_upcall, ...
>    libxl: introduce a 'libxl_xen_abi_features' enumeration...
>    xl: introduce a 'xen-abi-features' option...
>
>   docs/man/xl.cfg.5.pod.in         | 50 ++++++++++++++++++++++++++++++++
>   tools/include/libxl.h            | 10 +++++++
>   tools/libs/light/libxl_arm.c     | 22 +++++++++-----
>   tools/libs/light/libxl_create.c  | 31 ++++++++++++++++++++
>   tools/libs/light/libxl_types.idl |  7 +++++
>   tools/libs/light/libxl_x86.c     | 17 ++++++++++-
>   tools/ocaml/libs/xc/xenctrl.ml   |  2 ++
>   tools/ocaml/libs/xc/xenctrl.mli  |  2 ++
>   tools/xl/xl_parse.c              | 50 ++++++++++++++++++++++++++++++--
>   xen/arch/arm/domain.c            |  3 +-
>   xen/arch/arm/domain_build.c      |  3 +-
>   xen/arch/arm/setup.c             |  3 +-
>   xen/arch/x86/domain.c            |  8 +++++
>   xen/arch/x86/hvm/hvm.c           |  3 ++
>   xen/arch/x86/setup.c             |  4 ++-
>   xen/common/domain.c              |  3 +-
>   xen/common/event_channel.c       | 24 +++++++++++++--
>   xen/include/public/domctl.h      |  6 +++-
>   18 files changed, 229 insertions(+), 19 deletions(-)
> ---
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Anthony PERARD <anthony.perard@citrix.com>
> Cc: Christian Lindig <christian.lindig@citrix.com>
> Cc: David Scott <dave@recoil.org>
> Cc: George Dunlap <george.dunlap@citrix.com>
> Cc: Ian Jackson <iwj@xenproject.org>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Julien Grall <julien@xen.org>
> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> Cc: Wei Liu <wl@xen.org>
>

Do we want to add a create flag for each such feature, or would it be
better to set options like those via hypfs?

It would be fairly easy to add dynamic hypfs paths, e.g.:

/domain/<domid>/abi-features/evtchn-fifo
/domain/<domid>/abi-features/evtchn-upcall

which would have boolean type and could be set as long as the domain
hasn't been started.

xl support could even be rather generic, without the need to add code
to xl for each new feature.

This is not an objection to this series, just an idea for how to avoid
extending the use of unstable interfaces.

Thoughts?


Juergen

--------------27780469BC73846B811AE500
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------27780469BC73846B811AE500--

--bos9UewwtCQ7EMCE42hqcxFdw3FCsqRav--

--pqkGRCiFSHE5wcRBaS0xucWU9Q3STh3Mh
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/I5PEFAwAAAAAACgkQsN6d1ii/Ey9c
2ggAmYTu2bQb0Knk/XE8yJLLXCd8OtYMj+vA05qhjasJWlTVzMnXQhbexowGgmHeAXnkCUiYyW4o
sdhFZo6uMnRKInmjGGpBrnN/+psjpDBEQgi/HMHCS2YcvS+x31c4Lhib9jUkei9mYFHYO7HifhQd
n8p1tKZKOrVOT0Eo3lVaHC7NZPg92CMzyoKeVy+56Bsnp8WlEZNVBEji0cIfm0u/bAVTWcbYdEVI
v71msPjP+dclC4FB3rhGVqpeEarKVJc2N5pCmCRdXxTjuW0dCWz+si2izbZSjx0a/CLu4MeKppUP
2gsuF2dmLNauHx27yfWuPnFHvLxLyeX3cPI3zIKaXA==
=ugaD
-----END PGP SIGNATURE-----

--pqkGRCiFSHE5wcRBaS0xucWU9Q3STh3Mh--


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 13:17:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 13:17:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43465.78121 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkoUQ-0001UZ-GX; Thu, 03 Dec 2020 13:17:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43465.78121; Thu, 03 Dec 2020 13:17:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkoUQ-0001US-DU; Thu, 03 Dec 2020 13:17:42 +0000
Received: by outflank-mailman (input) for mailman id 43465;
 Thu, 03 Dec 2020 13:17:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oiWT=FH=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kkoUP-0001UL-6h
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 13:17:41 +0000
Received: from mail-wr1-x434.google.com (unknown [2a00:1450:4864:20::434])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bc16c507-6054-4e98-8b29-ac2df93bcfb8;
 Thu, 03 Dec 2020 13:17:40 +0000 (UTC)
Received: by mail-wr1-x434.google.com with SMTP id z7so1850999wrn.3
 for <xen-devel@lists.xenproject.org>; Thu, 03 Dec 2020 05:17:40 -0800 (PST)
Received: from CBGR90WXYV0 (host86-183-162-145.range86-183.btcentralplus.com.
 [86.183.162.145])
 by smtp.gmail.com with ESMTPSA id s13sm1715974wrt.80.2020.12.03.05.17.38
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 03 Dec 2020 05:17:38 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bc16c507-6054-4e98-8b29-ac2df93bcfb8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=LivjrigrWtqG0Vw5HyVRbeR6KqZ/yS7uJpBbfZfM6qc=;
        b=HII2TH9cq93Hhcau37aKlLWfxT31E4b/kk+i/jj6TTdRXoo3dqSCB6FOeMgQw2TBXI
         EqfGqpb+CCz8LTEjzxXMN+oYY2NW+rdO7n3sGd+D+/bBmsfrsoOrGAsjixk5QKiQCRz3
         4jTrdgUVgiIsczPIG9ZLl0yV56DXNlqGk/cpbYo/kLXaqfdPCJYVQsIy8OOCzwRqE3V5
         1l3P0e6GaMTNsNcoZ7i7icDjYpEHdlPEvCWy178pVavwNKawkRe4KFwFGRmd/fqo81ql
         L1wt7rB3W8kvDS6C+H47FxT8WO2zJ9SdBduWoQiQmFpZYQk+ogKdbTscEg+Ij6uSeDwq
         Qx7A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=LivjrigrWtqG0Vw5HyVRbeR6KqZ/yS7uJpBbfZfM6qc=;
        b=tbJIk/R60KZ0L5xzMEjY8lP/C/bv3P3B232jAdnCX7CxWgNnWNPum/UipH79Ugg36s
         VQcaQN/yVSmJiMvRwidBFgVyIKXlMQXTRpk67Ibwth7c5nNcOeOwu0qhsG22wLe1FfH5
         oDWQe/bKkvQe2+FHdQ2xhDC5j8LEX/JsoRMUS+YX/dwLaAgJbqW8SnAFp6JCl3vXxuHJ
         F/eqL7QlhrNXhsIvJ9VT4bRzKD2OpoAlO/MQ1s1JoVFwvYMjv/Hd4NY/XUNBp/C5010r
         7IdNwsTR0bqcjofW+8Q2cSQkoUFzHGUPf+lUnIKkkDidheRAqHfM+JMy56KP7a1uzzvu
         OjWQ==
X-Gm-Message-State: AOAM530zKx5Kwn+D8YIGQSHkAjSh9c7x3WVHuxvs3DwQa1jCxtkMlk9a
	fO7dAbt7iMH38Dw1vpwAvUY=
X-Google-Smtp-Source: ABdhPJz2NPvHt6rxL2W8yqebyXy479s6Ch9hOjvZpgZo6iHcvt8bkLEm2x0H8ee5Q/TJjfrGPV7rDA==
X-Received: by 2002:adf:b647:: with SMTP id i7mr3694400wre.241.1607001459410;
        Thu, 03 Dec 2020 05:17:39 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Oleksandr Andrushchenko'" <Oleksandr_Andrushchenko@epam.com>,
	<xen-devel@lists.xenproject.org>
Cc: "'Paul Durrant'" <pdurrant@amazon.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Wei Liu'" <wl@xen.org>,
	"'Anthony PERARD'" <anthony.perard@citrix.com>
References: <20201124080159.11912-1-paul@xen.org> <20201124080159.11912-4-paul@xen.org> <d16e33d7-a4af-8686-c639-b4f591caf77c@epam.com>
In-Reply-To: <d16e33d7-a4af-8686-c639-b4f591caf77c@epam.com>
Subject: RE: [PATCH v4 03/23] libxl: Make sure devices added by pci-attach are reflected in the config
Date: Thu, 3 Dec 2020 13:17:38 -0000
Message-ID: <00a701d6c976$ac403020$04c09060$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQMdgDRK4dMgv8Gkq5tfso7ziJ+CnAGZk9QaAhWXG0OnOm+6MA==

> -----Original Message-----
> From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
> Sent: 01 December 2020 13:12
> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org
> Cc: Paul Durrant <pdurrant@amazon.com>; Ian Jackson <iwj@xenproject.org>; Wei Liu <wl@xen.org>;
> Anthony PERARD <anthony.perard@citrix.com>
> Subject: Re: [PATCH v4 03/23] libxl: Make sure devices added by pci-attach are reflected in the config
> 
> Hi, Paul!
> 
> On 11/24/20 10:01 AM, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > Currently libxl__device_pci_add_xenstore() is broken in that it does not
> > update the domain's configuration for the first device added (which causes
> > creation of the overall backend area in xenstore). This can be easily observed
> > by running 'xl list -l' after adding a single device: the device will be
> > missing.
> >
> > This patch fixes the problem and adds a DEBUG log line to allow easy
> > verification that the domain configuration is being modified. Also, the use
> > of libxl__device_generic_add() is dropped as it leads to a confusing situation
> > where only partial backend information is written under the xenstore
> > '/libxl' path. For LIBXL__DEVICE_KIND_PCI devices the only definitive
> > information in xenstore is under '/local/domain/0/backend' (the '0' being
> > hard-coded).
> >
> > NOTE: This patch includes a whitespace fix in add_pcis_done().
> >
> > Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> > ---
> > Cc: Ian Jackson <iwj@xenproject.org>
> > Cc: Wei Liu <wl@xen.org>
> > Cc: Anthony PERARD <anthony.perard@citrix.com>
> >
> > v2:
> >   - Avoid having two completely different ways of adding devices into xenstore
> >
> > v3:
> >   - Revert some changes form v2 as there is confusion over use of the libxl
> >     and backend xenstore paths which needs to be fixed
> > ---
> >   tools/libs/light/libxl_pci.c | 87 +++++++++++++++++++++++---------------------
> >   1 file changed, 45 insertions(+), 42 deletions(-)
> >
> > diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
> > index 9d44b28f0a..da01c77ba2 100644
> > --- a/tools/libs/light/libxl_pci.c
> > +++ b/tools/libs/light/libxl_pci.c
> > @@ -79,39 +79,55 @@ static void libxl__device_from_pci(libxl__gc *gc, uint32_t domid,
> >       device->kind = LIBXL__DEVICE_KIND_PCI;
> >   }
> >
> > -static int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
> > -                                     const libxl_device_pci *pci,
> > -                                     int num)
> > +static void libxl__create_pci_backend(libxl__gc *gc, xs_transaction_t t,
> > +                                      uint32_t domid, const libxl_device_pci *pci)
> >   {
> > -    flexarray_t *front = NULL;
> > -    flexarray_t *back = NULL;
> > -    libxl__device device;
> > -    int i;
> > +    libxl_ctx *ctx = libxl__gc_owner(gc);
> > +    flexarray_t *front, *back;
> > +    char *fe_path, *be_path;
> > +    struct xs_permissions fe_perms[2], be_perms[2];
> > +
> > +    LOGD(DEBUG, domid, "Creating pci backend");
> >
> >       front = flexarray_make(gc, 16, 1);
> >       back = flexarray_make(gc, 16, 1);
> >
> > -    LOGD(DEBUG, domid, "Creating pci backend");
> > -
> > -    /* add pci device */
> > -    libxl__device_from_pci(gc, domid, pci, &device);
> > +    fe_path = libxl__domain_device_frontend_path(gc, domid, 0,
> > +                                                 LIBXL__DEVICE_KIND_PCI);
> > +    be_path = libxl__domain_device_backend_path(gc, 0, domid, 0,
> > +                                                LIBXL__DEVICE_KIND_PCI);
> >
> > +    flexarray_append_pair(back, "frontend", fe_path);
> >       flexarray_append_pair(back, "frontend-id", GCSPRINTF("%d", domid));
> > -    flexarray_append_pair(back, "online", "1");
> > +    flexarray_append_pair(back, "online", GCSPRINTF("%d", 1));
> >       flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateInitialising));
> >       flexarray_append_pair(back, "domain", libxl__domid_to_name(gc, domid));
> >
> > -    for (i = 0; i < num; i++, pci++)
> > -        libxl_create_pci_backend_device(gc, back, i, pci);
> > +    be_perms[0].id = 0;
> 
> There was a discussion [1] on PCI on ARM and one of the questions was that it is possible
> that we have the pci backend running in a late hardware domain/driver domain, which may
> not be Domain-0. Do you think we can avoid using 0 here and get some clue of the domain
> from "backend=domain-id"? If not set it will return Domain-0's ID and won't break anything

Not sure what you're asking for since...

> 
> Thank you,
> 
> Oleksandr
> 
> > +    be_perms[0].perms = XS_PERM_NONE;
> > +    be_perms[1].id = domid;
> > +    be_perms[1].perms = XS_PERM_READ;
> > +
> > +    xs_rm(ctx->xsh, t, be_path);
> > +    xs_mkdir(ctx->xsh, t, be_path);
> > +    xs_set_permissions(ctx->xsh, t, be_path, be_perms,
> > +                       ARRAY_SIZE(be_perms));
> > +    libxl__xs_writev(gc, t, be_path, libxl__xs_kvs_of_flexarray(gc, back));
> >
> > -    flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num));
> > +    flexarray_append_pair(front, "backend", be_path);
> >       flexarray_append_pair(front, "backend-id", GCSPRINTF("%d", 0));

... backend-id is written here.

  Paul




From xen-devel-bounces@lists.xenproject.org Thu Dec 03 13:20:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 13:20:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43471.78133 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkoX4-0002LM-UP; Thu, 03 Dec 2020 13:20:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43471.78133; Thu, 03 Dec 2020 13:20:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkoX4-0002LF-RQ; Thu, 03 Dec 2020 13:20:26 +0000
Received: by outflank-mailman (input) for mailman id 43471;
 Thu, 03 Dec 2020 13:20:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1si/=FH=epam.com=prvs=0606d307f8=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kkoX2-0002LA-Q6
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 13:20:25 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 206478a7-7516-45ea-a9f2-1a3a78fb640b;
 Thu, 03 Dec 2020 13:20:23 +0000 (UTC)
Received: from pps.filterd (m0174676.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 0B3DJwSI000461; Thu, 3 Dec 2020 13:20:21 GMT
Received: from eur01-he1-obe.outbound.protection.outlook.com
 (mail-he1eur01lp2054.outbound.protection.outlook.com [104.47.0.54])
 by mx0a-0039f301.pphosted.com with ESMTP id 355vrqd6fe-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 03 Dec 2020 13:20:21 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB6065.eurprd03.prod.outlook.com (2603:10a6:208:15c::28)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.25; Thu, 3 Dec
 2020 13:20:18 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%9]) with mapi id 15.20.3632.018; Thu, 3 Dec 2020
 13:20:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 206478a7-7516-45ea-a9f2-1a3a78fb640b
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: "paul@xen.org" <paul@xen.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: 'Paul Durrant' <pdurrant@amazon.com>, 'Ian Jackson' <iwj@xenproject.org>,
        'Wei Liu' <wl@xen.org>, 'Anthony PERARD' <anthony.perard@citrix.com>
Subject: Re: [PATCH v4 03/23] libxl: Make sure devices added by pci-attach are
 reflected in the config
Thread-Topic: [PATCH v4 03/23] libxl: Make sure devices added by pci-attach
 are reflected in the config
Thread-Index: AQHWx+OabtYotvRU7E6Ior0bjSGKzKnlXZAAgAAAvYA=
Date: Thu, 3 Dec 2020 13:20:18 +0000
Message-ID: <2c30b442-5dba-3d87-11b2-ba2bfd521a8f@epam.com>
References: <20201124080159.11912-1-paul@xen.org>
 <20201124080159.11912-4-paul@xen.org>
 <d16e33d7-a4af-8686-c639-b4f591caf77c@epam.com>
 <00a701d6c976$ac403020$04c09060$@xen.org>
In-Reply-To: <00a701d6c976$ac403020$04c09060$@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
Content-Type: text/plain; charset="utf-8"
Content-ID: <25884EF5136DC249956B90901F9B070F@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e73ad96b-5222-41e9-b263-08d8978e2dca
X-MS-Exchange-CrossTenant-originalarrivaltime: 03 Dec 2020 13:20:18.1487
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB6065


On 12/3/20 3:17 PM, Paul Durrant wrote:
>> -----Original Message-----
>> From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
>> Sent: 01 December 2020 13:12
>> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org
>> Cc: Paul Durrant <pdurrant@amazon.com>; Ian Jackson <iwj@xenproject.org>; Wei Liu <wl@xen.org>;
>> Anthony PERARD <anthony.perard@citrix.com>
>> Subject: Re: [PATCH v4 03/23] libxl: Make sure devices added by pci-attach are reflected in the config
>>
>> Hi, Paul!
>>
>> On 11/24/20 10:01 AM, Paul Durrant wrote:
>>> From: Paul Durrant <pdurrant@amazon.com>
>>>
>>> Currently libxl__device_pci_add_xenstore() is broken in that does not
>>> update the domain's configuration for the first device added (which causes
>>> creation of the overall backend area in xenstore). This can be easily observed
>>> by running 'xl list -l' after adding a single device: the device will be
>>> missing.
>>>
>>> This patch fixes the problem and adds a DEBUG log line to allow easy
>>> verification that the domain configuration is being modified. Also, the use
>>> of libxl__device_generic_add() is dropped as it leads to a confusing situation
>>> where only partial backend information is written under the xenstore
>>> '/libxl' path. For LIBXL__DEVICE_KIND_PCI devices the only definitive
>>> information in xenstore is under '/local/domain/0/backend' (the '0' being
>>> hard-coded).
>>>
>>> NOTE: This patch includes a whitespace in add_pcis_done().
>>>
>>> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
>>> ---
>>> Cc: Ian Jackson <iwj@xenproject.org>
>>> Cc: Wei Liu <wl@xen.org>
>>> Cc: Anthony PERARD <anthony.perard@citrix.com>
>>>
>>> v2:
>>>    - Avoid having two completely different ways of adding devices into xenstore
>>>
>>> v3:
>>>    - Revert some changes form v2 as there is confusion over use of the libxl
>>>      and backend xenstore paths which needs to be fixed
>>> ---
>>>    tools/libs/light/libxl_pci.c | 87 +++++++++++++++++++++++---------------------
>>>    1 file changed, 45 insertions(+), 42 deletions(-)
>>>
>>> diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
>>> index 9d44b28f0a..da01c77ba2 100644
>>> --- a/tools/libs/light/libxl_pci.c
>>> +++ b/tools/libs/light/libxl_pci.c
>>> @@ -79,39 +79,55 @@ static void libxl__device_from_pci(libxl__gc *gc, uint32_t domid,
>>>        device->kind = LIBXL__DEVICE_KIND_PCI;
>>>    }
>>>
>>> -static int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
>>> -                                     const libxl_device_pci *pci,
>>> -                                     int num)
>>> +static void libxl__create_pci_backend(libxl__gc *gc, xs_transaction_t t,
>>> +                                      uint32_t domid, const libxl_device_pci *pci)
>>>    {
>>> -    flexarray_t *front = NULL;
>>> -    flexarray_t *back = NULL;
>>> -    libxl__device device;
>>> -    int i;
>>> +    libxl_ctx *ctx = libxl__gc_owner(gc);
>>> +    flexarray_t *front, *back;
>>> +    char *fe_path, *be_path;
>>> +    struct xs_permissions fe_perms[2], be_perms[2];
>>> +
>>> +    LOGD(DEBUG, domid, "Creating pci backend");
>>>
>>>        front = flexarray_make(gc, 16, 1);
>>>        back = flexarray_make(gc, 16, 1);
>>>
>>> -    LOGD(DEBUG, domid, "Creating pci backend");
>>> -
>>> -    /* add pci device */
>>> -    libxl__device_from_pci(gc, domid, pci, &device);
>>> +    fe_path = libxl__domain_device_frontend_path(gc, domid, 0,
>>> +                                                 LIBXL__DEVICE_KIND_PCI);
>>> +    be_path = libxl__domain_device_backend_path(gc, 0, domid, 0,
>>> +                                                LIBXL__DEVICE_KIND_PCI);
>>>
>>> +    flexarray_append_pair(back, "frontend", fe_path);
>>>        flexarray_append_pair(back, "frontend-id", GCSPRINTF("%d", domid));
>>> -    flexarray_append_pair(back, "online", "1");
>>> +    flexarray_append_pair(back, "online", GCSPRINTF("%d", 1));
>>>        flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateInitialising));
>>>        flexarray_append_pair(back, "domain", libxl__domid_to_name(gc, domid));
>>>
>>> -    for (i = 0; i < num; i++, pci++)
>>> -        libxl_create_pci_backend_device(gc, back, i, pci);
>>> +    be_perms[0].id = 0;
>> There was a discussion [1] on PCI on ARM and one of the question was that it is possible
>>
>> that we have the pci backend running in a late hardware domain/driver domain, which may
>>
>> not be Domain-0. Do you think we can avoid using 0 here and get some clue of the domain
>>
>> from "*backend=domain-id"? If not set it will return Domain-0's ID and won't break anything*
> Not sure what you're asking for since...

My bad, please ignore

Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Thank you,

Oleksandr

>
>> *Thank you,*
>>
>> *Oleksandr
>> *
>>
>>> +    be_perms[0].perms = XS_PERM_NONE;
>>> +    be_perms[1].id = domid;
>>> +    be_perms[1].perms = XS_PERM_READ;
>>> +
>>> +    xs_rm(ctx->xsh, t, be_path);
>>> +    xs_mkdir(ctx->xsh, t, be_path);
>>> +    xs_set_permissions(ctx->xsh, t, be_path, be_perms,
>>> +                       ARRAY_SIZE(be_perms));
>>> +    libxl__xs_writev(gc, t, be_path, libxl__xs_kvs_of_flexarray(gc, back));
>>>
>>> -    flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num));
>>> +    flexarray_append_pair(front, "backend", be_path);
>>>        flexarray_append_pair(front, "backend-id", GCSPRINTF("%d", 0));
> ... backend-id is written here.
>
>    Paul
>
>


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 13:33:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 13:33:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43484.78151 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkoj8-0003XI-AZ; Thu, 03 Dec 2020 13:32:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43484.78151; Thu, 03 Dec 2020 13:32:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkoj8-0003XB-6T; Thu, 03 Dec 2020 13:32:54 +0000
Received: by outflank-mailman (input) for mailman id 43484;
 Thu, 03 Dec 2020 13:32:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mnUA=FH=citrix.com=christian.lindig@srs-us1.protection.inumbo.net>)
 id 1kkoj7-0003X6-7P
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 13:32:53 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 26d15956-a595-40c4-9a09-6e9ead0919bb;
 Thu, 03 Dec 2020 13:32:51 +0000 (UTC)
X-Inumbo-ID: 26d15956-a595-40c4-9a09-6e9ead0919bb
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
From: Christian Lindig <christian.lindig@citrix.com>
To: =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>, Paul Durrant
	<paul@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Paul Durrant <pdurrant@amazon.com>, Andrew Cooper
	<Andrew.Cooper3@citrix.com>, Anthony Perard <anthony.perard@citrix.com>,
	David Scott <dave@recoil.org>, George Dunlap <George.Dunlap@citrix.com>, "Ian
 Jackson" <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	"Wei Liu" <wl@xen.org>
Subject: Re: [PATCH v5 0/4] Xen ABI feature control
Thread-Topic: [PATCH v5 0/4] Xen ABI feature control
Thread-Index: AQHWyXjIEJK6osKOu0eZZfeqHV9qfg==
Date: Thu, 3 Dec 2020 13:32:47 +0000
Message-ID: <DS7PR03MB5655496C2C7EFC8DF29BCA37F6F20@DS7PR03MB5655.namprd03.prod.outlook.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Content-Type: multipart/alternative;
	boundary="_000_DS7PR03MB5655496C2C7EFC8DF29BCA37F6F20DS7PR03MB5655namp_"
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5655.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: acf09ade-402c-42bd-1b4c-08d8978fec6a
X-MS-Exchange-CrossTenant-originalarrivaltime: 03 Dec 2020 13:32:47.4075
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4747
X-OriginatorOrg: citrix.com

--_000_DS7PR03MB5655496C2C7EFC8DF29BCA37F6F20DS7PR03MB5655namp_
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable


Acked-by: Christian Lindig <christian.lindig@citrix.com>


________________________________
From: Jürgen Groß
Sent: Thursday, December 03, 2020 13:15
To: Paul Durrant; xen-devel@lists.xenproject.org
Cc: Paul Durrant; Andrew Cooper; Anthony Perard; Christian Lindig; David Scott; George Dunlap; Ian Jackson; Jan Beulich; Julien Grall; Roger Pau Monne; Stefano Stabellini; Volodymyr Babchuk; Wei Liu
Subject: Re: [PATCH v5 0/4] Xen ABI feature control

On 03.12.20 13:41, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
>
> This series was previously called "evtchn: Introduce a per-guest knob to
> control FIFO ABI". It has been extensively re-worked and extended to cover
> another ABI feature.
>
> Paul Durrant (4):
>    domctl: introduce a new domain create flag,
>      XEN_DOMCTL_CDF_evtchn_fifo, ...
>    domctl: introduce a new domain create flag,
>      XEN_DOMCTL_CDF_evtchn_upcall, ...
>    libxl: introduce a 'libxl_xen_abi_features' enumeration...
>    xl: introduce a 'xen-abi-features' option...
>
>   docs/man/xl.cfg.5.pod.in         | 50 ++++++++++++++++++++++++++++++++
>   tools/include/libxl.h            | 10 +++++++
>   tools/libs/light/libxl_arm.c     | 22 +++++++++-----
>   tools/libs/light/libxl_create.c  | 31 ++++++++++++++++++++
>   tools/libs/light/libxl_types.idl |  7 +++++
>   tools/libs/light/libxl_x86.c     | 17 ++++++++++-
>   tools/ocaml/libs/xc/xenctrl.ml   |  2 ++
>   tools/ocaml/libs/xc/xenctrl.mli  |  2 ++
>   tools/xl/xl_parse.c              | 50 ++++++++++++++++++++++++++++++--
>   xen/arch/arm/domain.c            |  3 +-
>   xen/arch/arm/domain_build.c      |  3 +-
>   xen/arch/arm/setup.c             |  3 +-
>   xen/arch/x86/domain.c            |  8 +++++
>   xen/arch/x86/hvm/hvm.c           |  3 ++
>   xen/arch/x86/setup.c             |  4 ++-
>   xen/common/domain.c              |  3 +-
>   xen/common/event_channel.c       | 24 +++++++++++++--
>   xen/include/public/domctl.h      |  6 +++-
>   18 files changed, 229 insertions(+), 19 deletions(-)
> ---
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Anthony PERARD <anthony.perard@citrix.com>
> Cc: Christian Lindig <christian.lindig@citrix.com>
> Cc: David Scott <dave@recoil.org>
> Cc: George Dunlap <george.dunlap@citrix.com>
> Cc: Ian Jackson <iwj@xenproject.org>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Julien Grall <julien@xen.org>
> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> Cc: Wei Liu <wl@xen.org>
>

Do we want to add a create flag for each such feature, or would it be
better to set options like those via hypfs?

It would be fairly easy to add dynamic hypfs paths, e.g.:

/domain/<domid>/abi-features/evtchn-fifo
/domain/<domid>/abi-features/evtchn-upcall

which would have boolean type and could be set as long as the domain
hasn't been started.

xl support could even be rather generic, without the need to add coding
to xl for each new feature.

This is not an objection to this series, just an idea of how to avoid
extending the use of unstable interfaces.

Thoughts?


Juergen


--_000_DS7PR03MB5655496C2C7EFC8DF29BCA37F6F20DS7PR03MB5655namp_--


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 13:40:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 13:40:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43491.78162 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkoqW-0004Tc-7k; Thu, 03 Dec 2020 13:40:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43491.78162; Thu, 03 Dec 2020 13:40:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkoqW-0004TV-4o; Thu, 03 Dec 2020 13:40:32 +0000
Received: by outflank-mailman (input) for mailman id 43491;
 Thu, 03 Dec 2020 13:40:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vSHx=FH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkoqV-0004TO-6a
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 13:40:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1dd7175d-1265-41bf-92ef-91d91d17d076;
 Thu, 03 Dec 2020 13:40:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 62959AC2E;
 Thu,  3 Dec 2020 13:40:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1dd7175d-1265-41bf-92ef-91d91d17d076
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607002829; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wsZeX4m2pj3ShAbO2r0Y+TdBRHcQ1W0Qjt05Su5dPrY=;
	b=SjkdG+I6WBwgReOPkJTseQSwk/sVFejKQOXgIf+qjVqzawGPZCVY+6pbzDeimyS75sNmfP
	E/LpLxIlJw9QTkPk2rvuhEBcqsHst4DoDig0N4Gk74xEXh+8+JTjlAXzMUPkwBuF7fNVcZ
	ZRBpXesevyd5vvXXG6BEn0y2AIHdKxc=
Subject: Re: [PATCH] vpci/msix: exit early if MSI-X is disabled
From: Jan Beulich <jbeulich@suse.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Manuel Bouyer <bouyer@antioche.eu.org>, xen-devel@lists.xenproject.org
References: <20201201174014.27878-1-roger.pau@citrix.com>
 <dfc96aa9-c39f-177c-c8f8-af18b80804de@suse.com>
Message-ID: <cdb2a1ae-9ee7-6661-b69f-d2faacef2c12@suse.com>
Date: Thu, 3 Dec 2020 14:40:28 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <dfc96aa9-c39f-177c-c8f8-af18b80804de@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 02.12.2020 09:38, Jan Beulich wrote:
> On 01.12.2020 18:40, Roger Pau Monne wrote:
>> --- a/xen/drivers/vpci/msix.c
>> +++ b/xen/drivers/vpci/msix.c
>> @@ -357,7 +357,11 @@ static int msix_write(struct vcpu *v, unsigned long addr, unsigned int len,
>>           * so that it picks the new state.
>>           */
>>          entry->masked = new_masked;
>> -        if ( !new_masked && msix->enabled && !msix->masked && entry->updated )
>> +
>> +        if ( !msix->enabled )
>> +            break;
>> +
>> +        if ( !new_masked && !msix->masked && entry->updated )
>>          {
>>              /*
>>               * If MSI-X is enabled, the function mask is not active, the entry
> 
> What about a "disabled" -> "enabled-but-masked" transition? This,
> afaict, similarly won't trigger setting up of entries from
> control_write(), and hence I'd expect the ASSERT() to similarly
> trigger when subsequently an entry's mask bit gets altered.
> 
> I'd also be fine making this further adjustment, if you agree,
> but the one thing I haven't been able to fully convince myself of
> is that there's then still no need to set ->updated to true.

I've taken another look. I think setting ->updated (or something
equivalent) is needed in that case, in order to not lose the
setting of the entry mask bit. However, this would only defer the
problem to control_write(): This would now need to call
vpci_msix_arch_mask_entry() under suitable conditions, but avoid
calling it when the entry is disabled or was never set up. No
matter whether making the setting of ->updated conditional, or
adding a conditional call in update_entry(), we'd need to
evaluate whether the entry is currently disabled. Imo, instead of
introducing a new arch hook for this, it's easier to make
vpci_msix_arch_mask_entry() tolerate getting called on a disabled
entry. Below is my proposed alternative change.

While writing the description I started wondering why we require
address or data fields to have got written before the first
unmask. I don't think the hardware imposes such a requirement;
zeros would be used instead, whatever this means. Let's not
forget that it's only the primary purpose of MSI/MSI-X to
trigger interrupts. Forcing the writes to go elsewhere in
memory is not forbidden for all I know, and could be used by a
driver. IOW I think ->updated should start out as set to true.
But of course vpci_msi_update() then would need to check the
upper address bits and avoid setting up an interrupt if they're
not 0xfee. And further arrangements would be needed to have the
guest-requested write actually get carried out correctly.

Jan

x86/vPCI: tolerate (un)masking a disabled MSI-X entry

None of the four reasons causing vpci_msix_arch_mask_entry() to get
called (there's just a single call site) are impossible or illegal prior
to an entry actually having got set up:
- the entry may remain masked (in this case, however, a prior masked ->
  unmasked transition would already not have worked),
- MSI-X may not be enabled,
- the global mask bit may be set,
- the entry may not otherwise have been updated.
Hence the function asserting that the entry was previously set up was
simply wrong. Since the caller tracks the masked state (and setting up
of an entry would only be effected when that software bit is clear),
it's okay to skip both masking and unmasking requests in this case.

Fixes: d6281be9d0145 ('vpci/msix: add MSI-X handlers')
Reported-by: Manuel Bouyer <bouyer@antioche.eu.org>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/vmsi.c
+++ b/xen/arch/x86/hvm/vmsi.c
@@ -840,8 +840,8 @@ void vpci_msi_arch_print(const struct vp
 void vpci_msix_arch_mask_entry(struct vpci_msix_entry *entry,
                                const struct pci_dev *pdev, bool mask)
 {
-    ASSERT(entry->arch.pirq != INVALID_PIRQ);
-    vpci_mask_pirq(pdev->domain, entry->arch.pirq, mask);
+    if ( entry->arch.pirq != INVALID_PIRQ )
+        vpci_mask_pirq(pdev->domain, entry->arch.pirq, mask);
 }
 
 int vpci_msix_arch_enable_entry(struct vpci_msix_entry *entry,
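
The fix above replaces an ASSERT() with a guard, so that (un)masking a
not-yet-set-up entry becomes a tolerated no-op on the hardware side while
the caller's software mask state keeps tracking. A minimal stand-alone
sketch of that pattern (the struct and function names below are invented
for illustration and are not Xen's actual types):

```c
#include <assert.h>
#include <stdbool.h>

#define INVALID_PIRQ (-1)

/* Toy stand-in for struct vpci_msix_entry. */
struct toy_entry {
    int pirq;     /* INVALID_PIRQ until the entry has been set up */
    bool masked;  /* mask state tracked in software */
};

/*
 * Sketch of the fixed behaviour: instead of asserting that the entry
 * was set up, skip the hardware mask operation when it was not.  The
 * return value only exists to make the behaviour observable here.
 */
static bool toy_mask_entry(struct toy_entry *e, bool mask)
{
    e->masked = mask;                 /* software state always updated */
    if (e->pirq == INVALID_PIRQ)
        return false;                 /* never set up: nothing to mask */
    /* real code: vpci_mask_pirq(pdev->domain, e->pirq, mask); */
    return true;
}
```

With the ASSERT() variant, calling this on a fresh entry (pirq still
INVALID_PIRQ) would crash a debug build even though all four call paths
listed in the description can legitimately reach it.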


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 13:51:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 13:51:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43502.78175 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkp1A-0005ex-8D; Thu, 03 Dec 2020 13:51:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43502.78175; Thu, 03 Dec 2020 13:51:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkp1A-0005eq-4w; Thu, 03 Dec 2020 13:51:32 +0000
Received: by outflank-mailman (input) for mailman id 43502;
 Thu, 03 Dec 2020 13:51:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oiWT=FH=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kkp18-0005ek-Qc
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 13:51:30 +0000
Received: from mail-wm1-x332.google.com (unknown [2a00:1450:4864:20::332])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 12d3098d-0b58-407c-ae6b-06e7429356c1;
 Thu, 03 Dec 2020 13:51:29 +0000 (UTC)
Received: by mail-wm1-x332.google.com with SMTP id 3so3951346wmg.4
 for <xen-devel@lists.xenproject.org>; Thu, 03 Dec 2020 05:51:29 -0800 (PST)
Received: from CBGR90WXYV0 (host86-183-162-145.range86-183.btcentralplus.com.
 [86.183.162.145])
 by smtp.gmail.com with ESMTPSA id e4sm1881349wrr.32.2020.12.03.05.51.27
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 03 Dec 2020 05:51:28 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12d3098d-0b58-407c-ae6b-06e7429356c1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=KG0ZODBhqlJjf3yMVFp97XKyIxJKUtY7MgL/JtPCVxk=;
        b=p0R85et3SoulqgQcgIr4QEB6ii0+JdF7kqz4YTa9i+QepyoZ2+uyi9mRfCUgnTn4Kq
         usNz3Mj5pz5xFfgHdcn1qcokLz7GXvaWR3d+ubq99hiGfoSpzKfiBEiJF88nvRcibpNt
         rP66gtTKtaW1i5YddKUVT7E9SQTtSAmKfQXvhhVsmi4P+PyFz+rcd6hr+sNwtbHTxfuP
         b8d8ETy5HVfHRqITCgIezWQbsWu4LXYJx/Gd4x8RAWXs1d76HxYfg8PnaqxVTx/fxJHy
         3U0N5QmTPXa5wSA2Tz42Qg7niqeyL56/WXqbXs9eCFakQVNY1mEk2KjwoHioznyvCMpj
         VAyg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=KG0ZODBhqlJjf3yMVFp97XKyIxJKUtY7MgL/JtPCVxk=;
        b=dP8yTWbK49NfcnfHTG8y67xcT6oS9fRB/bA+umO5tjrmWqwOrhJoYDGhed5XpzZurs
         RacEMgUL28zBkeKi+jGZyfcUJJXySZshLvW0J+xe6OuDgBgKo1kCPCME+Iun6Vp76CEn
         waf7Wr0mlLquO5pZAG0CQW4S1xnwF/LJWCNAeLBtEuGmPCK5gLQM61cKseZzloFaumzM
         XZwA1Aoq2XU8rOg5NX6k73qUjAauhyhxdAao4isG8pUaYdMqd2Qnp54hSHqmzLVDp75C
         P/6v/NQP/zY+8KP0YrZxrBKp3mtWqneoS9vFb7IjCin+lq1Ior5PQ9NSgoCQs4NY6yPL
         2APw==
X-Gm-Message-State: AOAM531r5u24WXdbZOyG3E2OQzAxK5fU9DRsg37gOU4/M2gcivDF8dJ1
	8l6/+jttdCqQkFvqD3HDKgw=
X-Google-Smtp-Source: ABdhPJxLAbcX/ZEH0BEBPfmqu5og4nYc8qefFJ9zXmASR5brA65YMYuDeQkVoIfsYsMLk6MEWEmIdg==
X-Received: by 2002:a1c:6a10:: with SMTP id f16mr3502596wmc.106.1607003489054;
        Thu, 03 Dec 2020 05:51:29 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: =?utf-8?Q?'J=C3=BCrgen_Gro=C3=9F'?= <jgross@suse.com>,
	<xen-devel@lists.xenproject.org>
Cc: "'Paul Durrant'" <pdurrant@amazon.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'Anthony PERARD'" <anthony.perard@citrix.com>,
	"'Christian Lindig'" <christian.lindig@citrix.com>,
	"'David Scott'" <dave@recoil.org>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Jan Beulich'" <jbeulich@suse.com>,
	"'Julien Grall'" <julien@xen.org>,
	=?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Volodymyr Babchuk'" <Volodymyr_Babchuk@epam.com>,
	"'Wei Liu'" <wl@xen.org>
References: <20201203124159.3688-1-paul@xen.org> <7417f158-2cad-3909-2676-f9d5a90f4202@suse.com>
In-Reply-To: <7417f158-2cad-3909-2676-f9d5a90f4202@suse.com>
Subject: RE: [PATCH v5 0/4] Xen ABI feature control
Date: Thu, 3 Dec 2020 13:51:27 -0000
Message-ID: <00b101d6c97b$660c4990$3224dcb0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQJzrCHAX/gquRkO4g65wfjgQBytUAJD4qPyqJl9rgA=

> -----Original Message-----
> From: Jürgen Groß <jgross@suse.com>
> Sent: 03 December 2020 13:15
> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org
> Cc: Paul Durrant <pdurrant@amazon.com>; Andrew Cooper =
<andrew.cooper3@citrix.com>; Anthony PERARD
> <anthony.perard@citrix.com>; Christian Lindig =
<christian.lindig@citrix.com>; David Scott
> <dave@recoil.org>; George Dunlap <george.dunlap@citrix.com>; Ian =
Jackson <iwj@xenproject.org>; Jan
> Beulich <jbeulich@suse.com>; Julien Grall <julien@xen.org>; Roger Pau =
Monné <roger.pau@citrix.com>;
> Stefano Stabellini <sstabellini@kernel.org>; Volodymyr Babchuk =
<Volodymyr_Babchuk@epam.com>; Wei Liu
> <wl@xen.org>
> Subject: Re: [PATCH v5 0/4] Xen ABI feature control
>=20
> On 03.12.20 13:41, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > This series was previously called "evtchn: Introduce a per-guest =
knob to
> > control FIFO ABI". It has been extensively re-worked and extended to =
cover
> > another ABI feature.
> >
> > Paul Durrant (4):
> >    domctl: introduce a new domain create flag,
> >      XEN_DOMCTL_CDF_evtchn_fifo, ...
> >    domctl: introduce a new domain create flag,
> >      XEN_DOMCTL_CDF_evtchn_upcall, ...
> >    libxl: introduce a 'libxl_xen_abi_features' enumeration...
> >    xl: introduce a 'xen-abi-features' option...
> >
> >   docs/man/xl.cfg.5.pod.in         | 50 =
++++++++++++++++++++++++++++++++
> >   tools/include/libxl.h            | 10 +++++++
> >   tools/libs/light/libxl_arm.c     | 22 +++++++++-----
> >   tools/libs/light/libxl_create.c  | 31 ++++++++++++++++++++
> >   tools/libs/light/libxl_types.idl |  7 +++++
> >   tools/libs/light/libxl_x86.c     | 17 ++++++++++-
> >   tools/ocaml/libs/xc/xenctrl.ml   |  2 ++
> >   tools/ocaml/libs/xc/xenctrl.mli  |  2 ++
> >   tools/xl/xl_parse.c              | 50 =
++++++++++++++++++++++++++++++--
> >   xen/arch/arm/domain.c            |  3 +-
> >   xen/arch/arm/domain_build.c      |  3 +-
> >   xen/arch/arm/setup.c             |  3 +-
> >   xen/arch/x86/domain.c            |  8 +++++
> >   xen/arch/x86/hvm/hvm.c           |  3 ++
> >   xen/arch/x86/setup.c             |  4 ++-
> >   xen/common/domain.c              |  3 +-
> >   xen/common/event_channel.c       | 24 +++++++++++++--
> >   xen/include/public/domctl.h      |  6 +++-
> >   18 files changed, 229 insertions(+), 19 deletions(-)
> > ---
> > Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> > Cc: Anthony PERARD <anthony.perard@citrix.com>
> > Cc: Christian Lindig <christian.lindig@citrix.com>
> > Cc: David Scott <dave@recoil.org>
> > Cc: George Dunlap <george.dunlap@citrix.com>
> > Cc: Ian Jackson <iwj@xenproject.org>
> > Cc: Jan Beulich <jbeulich@suse.com>
> > Cc: Julien Grall <julien@xen.org>
> > Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> > Cc: Stefano Stabellini <sstabellini@kernel.org>
> > Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> > Cc: Wei Liu <wl@xen.org>
> >
>=20
> Do we want to add a create flag for each such feature, or would it be
> better to set options like those via hypfs?
>=20
> It would be fairly easy to add dynamic hypfs paths, e.g.:
>=20
> /domain/<domid>/abi-features/evtchn-fifo
> /domain/<domid>/abi-features/evtchn-upcall
>=20
> which would have boolean type and could be set as long as the domain
> hasn't been started.
>=20
> xl support could even be rather generic, without the need to add =
coding
> to xl for each new feature.
>=20
> This is no objection to this series, but just an idea how to avoid
> extending the use of unstable interfaces.
>=20
> Thoughts?
>=20

I was not aware we could have something that was dynamic only before a =
domain is started.

We'd still want libxl to write the features rather than xl doing it =
directly, I think, as we still want libxl to be the owner of the =
default settings. Personally, it still feels like this kind of setting =
wants to be an explicit part of domain creation, though using hypfs =
does sound like a neat idea.

  Paul

>=20
> Juergen



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 13:58:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 13:58:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43508.78187 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkp7m-0005tp-0O; Thu, 03 Dec 2020 13:58:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43508.78187; Thu, 03 Dec 2020 13:58:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkp7l-0005ti-Sn; Thu, 03 Dec 2020 13:58:21 +0000
Received: by outflank-mailman (input) for mailman id 43508;
 Thu, 03 Dec 2020 13:58:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yflw=FH=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kkp7j-0005td-Rr
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 13:58:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6c51bb16-f35b-404d-9181-6f17607f8d0a;
 Thu, 03 Dec 2020 13:58:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B4389AC65;
 Thu,  3 Dec 2020 13:58:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c51bb16-f35b-404d-9181-6f17607f8d0a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607003897; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=N23/COHZQncimNjvea6Pe3W/rZSgf2seWr1DUgrMBvc=;
	b=u7hN5EPd1VB/PdTkap9kNasJWwKf1VkkaXT7XT1TFBpEGJX7Z1kL01qn7F9pCoRx3XKRQu
	k+AE2+jBe4CSgGHQ/hhaJOT3xELxh1D/7qKXOjfz3WFEatQdGgnE6iQUzbUg93qIZRSfoE
	cauAhX+/IAnYTxfrXO/xE3paT6MLfWY=
Subject: Re: [PATCH v5 0/4] Xen ABI feature control
To: paul@xen.org, xen-devel@lists.xenproject.org
Cc: 'Paul Durrant' <pdurrant@amazon.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Anthony PERARD' <anthony.perard@citrix.com>,
 'Christian Lindig' <christian.lindig@citrix.com>,
 'David Scott' <dave@recoil.org>, 'George Dunlap' <george.dunlap@citrix.com>,
 'Ian Jackson' <iwj@xenproject.org>, 'Jan Beulich' <jbeulich@suse.com>,
 'Julien Grall' <julien@xen.org>, =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?=
 <roger.pau@citrix.com>, 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>, 'Wei Liu' <wl@xen.org>
References: <20201203124159.3688-1-paul@xen.org>
 <7417f158-2cad-3909-2676-f9d5a90f4202@suse.com>
 <00b101d6c97b$660c4990$3224dcb0$@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <0443f0f9-c0e1-08f4-9580-8af0eb4cd17d@suse.com>
Date: Thu, 3 Dec 2020 14:58:16 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <00b101d6c97b$660c4990$3224dcb0$@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="RfRr3j1tyiPOvXfA3ZzN67KW0SZ9saOtN"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--RfRr3j1tyiPOvXfA3ZzN67KW0SZ9saOtN
Content-Type: multipart/mixed; boundary="QPTCE5VTTwZ3Fqb1IqNCeL5BpmfRxl8uT";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: paul@xen.org, xen-devel@lists.xenproject.org
Cc: 'Paul Durrant' <pdurrant@amazon.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Anthony PERARD' <anthony.perard@citrix.com>,
 'Christian Lindig' <christian.lindig@citrix.com>,
 'David Scott' <dave@recoil.org>, 'George Dunlap' <george.dunlap@citrix.com>,
 'Ian Jackson' <iwj@xenproject.org>, 'Jan Beulich' <jbeulich@suse.com>,
 'Julien Grall' <julien@xen.org>, =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?=
 <roger.pau@citrix.com>, 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>, 'Wei Liu' <wl@xen.org>
Message-ID: <0443f0f9-c0e1-08f4-9580-8af0eb4cd17d@suse.com>
Subject: Re: [PATCH v5 0/4] Xen ABI feature control
References: <20201203124159.3688-1-paul@xen.org>
 <7417f158-2cad-3909-2676-f9d5a90f4202@suse.com>
 <00b101d6c97b$660c4990$3224dcb0$@xen.org>
In-Reply-To: <00b101d6c97b$660c4990$3224dcb0$@xen.org>

--QPTCE5VTTwZ3Fqb1IqNCeL5BpmfRxl8uT
Content-Type: multipart/mixed;
 boundary="------------4AB2994FE6748FD61D1446E7"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------4AB2994FE6748FD61D1446E7
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 03.12.20 14:51, Paul Durrant wrote:
>> -----Original Message-----
>> From: Jürgen Groß <jgross@suse.com>
>> Sent: 03 December 2020 13:15
>> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org
>> Cc: Paul Durrant <pdurrant@amazon.com>; Andrew Cooper <andrew.cooper3@=
citrix.com>; Anthony PERARD
>> <anthony.perard@citrix.com>; Christian Lindig <christian.lindig@citrix=
=2Ecom>; David Scott
>> <dave@recoil.org>; George Dunlap <george.dunlap@citrix.com>; Ian Jacks=
on <iwj@xenproject.org>; Jan
>> Beulich <jbeulich@suse.com>; Julien Grall <julien@xen.org>; Roger Pau =
Monné <roger.pau@citrix.com>;
>> Stefano Stabellini <sstabellini@kernel.org>; Volodymyr Babchuk <Volody=
myr_Babchuk@epam.com>; Wei Liu
>> <wl@xen.org>
>> Subject: Re: [PATCH v5 0/4] Xen ABI feature control
>>
>> On 03.12.20 13:41, Paul Durrant wrote:
>>> From: Paul Durrant <pdurrant@amazon.com>
>>>
>>> This series was previously called "evtchn: Introduce a per-guest knob=
 to
>>> control FIFO ABI". It has been extensively re-worked and extended to c=
over
>>> another ABI feature.
>>>
>>> Paul Durrant (4):
>>>     domctl: introduce a new domain create flag,
>>>       XEN_DOMCTL_CDF_evtchn_fifo, ...
>>>     domctl: introduce a new domain create flag,
>>>       XEN_DOMCTL_CDF_evtchn_upcall, ...
>>>     libxl: introduce a 'libxl_xen_abi_features' enumeration...
>>>     xl: introduce a 'xen-abi-features' option...
>>>
>>>    docs/man/xl.cfg.5.pod.in         | 50 ++++++++++++++++++++++++++++=
++++
>>>    tools/include/libxl.h            | 10 +++++++
>>>    tools/libs/light/libxl_arm.c     | 22 +++++++++-----
>>>    tools/libs/light/libxl_create.c  | 31 ++++++++++++++++++++
>>>    tools/libs/light/libxl_types.idl |  7 +++++
>>>    tools/libs/light/libxl_x86.c     | 17 ++++++++++-
>>>    tools/ocaml/libs/xc/xenctrl.ml   |  2 ++
>>>    tools/ocaml/libs/xc/xenctrl.mli  |  2 ++
>>>    tools/xl/xl_parse.c              | 50 ++++++++++++++++++++++++++++=
++--
>>>    xen/arch/arm/domain.c            |  3 +-
>>>    xen/arch/arm/domain_build.c      |  3 +-
>>>    xen/arch/arm/setup.c             |  3 +-
>>>    xen/arch/x86/domain.c            |  8 +++++
>>>    xen/arch/x86/hvm/hvm.c           |  3 ++
>>>    xen/arch/x86/setup.c             |  4 ++-
>>>    xen/common/domain.c              |  3 +-
>>>    xen/common/event_channel.c       | 24 +++++++++++++--
>>>    xen/include/public/domctl.h      |  6 +++-
>>>    18 files changed, 229 insertions(+), 19 deletions(-)
>>> ---
>>> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
>>> Cc: Anthony PERARD <anthony.perard@citrix.com>
>>> Cc: Christian Lindig <christian.lindig@citrix.com>
>>> Cc: David Scott <dave@recoil.org>
>>> Cc: George Dunlap <george.dunlap@citrix.com>
>>> Cc: Ian Jackson <iwj@xenproject.org>
>>> Cc: Jan Beulich <jbeulich@suse.com>
>>> Cc: Julien Grall <julien@xen.org>
>>> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
>>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>>> Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
>>> Cc: Wei Liu <wl@xen.org>
>>>
>>
>> Do we want to add a create flag for each such feature, or would it be
>> better to set options like those via hypfs?
>>
>> It would be fairly easy to add dynamic hypfs paths, e.g.:
>>
>> /domain/<domid>/abi-features/evtchn-fifo
>> /domain/<domid>/abi-features/evtchn-upcall
>>
>> which would have boolean type and could be set as long as the domain
>> hasn't been started.
>>
>> xl support could even be rather generic, without the need to add codin=
g
>> to xl for each new feature.
>>
>> This is no objection to this series, but just an idea how to avoid
>> extending the use of unstable interfaces.
>>
>> Thoughts?
>>
>=20
> I was not aware we could have something that was dynamic only before a =
domain is started.

Look at my current cpupool/hypfs series: the per-cpupool scheduling
granularity can be modified only if no cpu is assigned to the cpupool.

>=20
> We'd still want libxl to write the features rather than xl doing it =
directly, I think, as we still want libxl to be the owner of the =
default settings. Personally, it still feels like this kind of setting =
wants to be an explicit part of domain creation, though using hypfs =
does sound like a neat idea.

No problem doing it in libxl. libxl is already using libxenhypfs.
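
The "modifiable only until started" rule discussed above (as with the
per-cpupool scheduling granularity node) boils down to a write handler
that checks a lifecycle flag. A hedged sketch, with all names invented
for illustration (this is not the actual hypfs code):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Invented model of a per-domain boolean ABI-feature node. */
struct abi_feature_node {
    bool value;           /* e.g. whether evtchn-fifo is enabled */
    bool domain_started;  /* set once the domain first runs */
};

/* Writes succeed only while the domain has not been started. */
static int abi_feature_write(struct abi_feature_node *n, bool v)
{
    if (n->domain_started)
        return -EBUSY;    /* immutable once the domain is running */
    n->value = v;
    return 0;
}
```

libxl, which already links against libxenhypfs, could write such nodes
on behalf of xl before unpausing the domain, keeping libxl the owner of
the default settings.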


Juergen

--------------4AB2994FE6748FD61D1446E7
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------4AB2994FE6748FD61D1446E7--

--QPTCE5VTTwZ3Fqb1IqNCeL5BpmfRxl8uT--

--RfRr3j1tyiPOvXfA3ZzN67KW0SZ9saOtN
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/I7vgFAwAAAAAACgkQsN6d1ii/Ey9P
tQf/XD8PSfOECyUCNoHxaQu4B5TQWbLmWX6Z/w/lmTI23hSsRrDPNS30k/H99sIM9WeV5wyNl0Q8
RCrTNmTcQyL39wrL5zR1sYHUDy0FxZKB0bv5Q3ny6Ez5MWdfmRwzsKbY8sS2+ascvWJgYElCzQVN
j5+cZ7fkt7/r9oq8UM4SoQCfH6fxAlaEaMv1FdGYtNTH/UYj9QI821L8PT3DQBiTacjRPQsZl5TU
+RRvoyMqxmT2a3kQPC+eL4NNJEgyqkHRCY6cD3B9PNjdDuQ+4a4IJpnhAY19dQrsxF3jaxeuau9N
jspLr6zjXEUGNNs9T4SD6h8CoFUs7YFDrmBgQxrVuw==
=UgtZ
-----END PGP SIGNATURE-----

--RfRr3j1tyiPOvXfA3ZzN67KW0SZ9saOtN--


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:25:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 14:25:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43533.78228 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpYD-0000Uy-No; Thu, 03 Dec 2020 14:25:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43533.78228; Thu, 03 Dec 2020 14:25:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpYD-0000Uc-DW; Thu, 03 Dec 2020 14:25:41 +0000
Received: by outflank-mailman (input) for mailman id 43533;
 Thu, 03 Dec 2020 14:25:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkpYC-0000SM-J1
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 14:25:40 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpYB-0006LY-95; Thu, 03 Dec 2020 14:25:39 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpYB-00015c-1F; Thu, 03 Dec 2020 14:25:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=dwuQda3wa3/+TZihqL8Ehutb2xsyBQK1fDBJLwRytYI=; b=fSJpYyR9PG1oz/fE8xggp0AsCg
	g6VPjd1wZ+W5IDybyft+UmQbFpmhQfaL3XCJoXY/G1EjHgr/OMJng6rJMrzShEdZ/0WoJeRmK08EV
	YlBahEjz2eyIgpXFcwJxcYxJp+248etXKnGu9+z1xw39Hst2xJqyGtlpMAbAJTNbRJGw=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v5 03/23] libxl: Make sure devices added by pci-attach are reflected in the config
Date: Thu,  3 Dec 2020 14:25:14 +0000
Message-Id: <20201203142534.4017-4-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203142534.4017-1-paul@xen.org>
References: <20201203142534.4017-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Currently libxl__device_pci_add_xenstore() is broken in that it does not
update the domain's configuration for the first device added (which causes
creation of the overall backend area in xenstore). This can be easily observed
by running 'xl list -l' after adding a single device: the device will be
missing.

This patch fixes the problem and adds a DEBUG log line to allow easy
verification that the domain configuration is being modified. Also, the use
of libxl__device_generic_add() is dropped as it leads to a confusing situation
where only partial backend information is written under the xenstore
'/libxl' path. For LIBXL__DEVICE_KIND_PCI devices the only definitive
information in xenstore is under '/local/domain/0/backend' (the '0' being
hard-coded).

NOTE: This patch includes a whitespace fix in add_pcis_done().

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>

v3:
 - Revert some changes from v2 as there is confusion over use of the libxl
   and backend xenstore paths which needs to be fixed

v2:
 - Avoid having two completely different ways of adding devices into xenstore
---
 tools/libs/light/libxl_pci.c | 87 +++++++++++++++++++-----------------
 1 file changed, 45 insertions(+), 42 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 757746a8dec1..aa7633dfef16 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -79,39 +79,55 @@ static void libxl__device_from_pci(libxl__gc *gc, uint32_t domid,
     device->kind = LIBXL__DEVICE_KIND_PCI;
 }
 
-static int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
-                                     const libxl_device_pci *pci,
-                                     int num)
+static void libxl__create_pci_backend(libxl__gc *gc, xs_transaction_t t,
+                                      uint32_t domid, const libxl_device_pci *pci)
 {
-    flexarray_t *front = NULL;
-    flexarray_t *back = NULL;
-    libxl__device device;
-    int i;
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+    flexarray_t *front, *back;
+    char *fe_path, *be_path;
+    struct xs_permissions fe_perms[2], be_perms[2];
+
+    LOGD(DEBUG, domid, "Creating pci backend");
 
     front = flexarray_make(gc, 16, 1);
     back = flexarray_make(gc, 16, 1);
 
-    LOGD(DEBUG, domid, "Creating pci backend");
-
-    /* add pci device */
-    libxl__device_from_pci(gc, domid, pci, &device);
+    fe_path = libxl__domain_device_frontend_path(gc, domid, 0,
+                                                 LIBXL__DEVICE_KIND_PCI);
+    be_path = libxl__domain_device_backend_path(gc, 0, domid, 0,
+                                                LIBXL__DEVICE_KIND_PCI);
 
+    flexarray_append_pair(back, "frontend", fe_path);
     flexarray_append_pair(back, "frontend-id", GCSPRINTF("%d", domid));
-    flexarray_append_pair(back, "online", "1");
+    flexarray_append_pair(back, "online", GCSPRINTF("%d", 1));
     flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateInitialising));
     flexarray_append_pair(back, "domain", libxl__domid_to_name(gc, domid));
 
-    for (i = 0; i < num; i++, pci++)
-        libxl_create_pci_backend_device(gc, back, i, pci);
+    be_perms[0].id = 0;
+    be_perms[0].perms = XS_PERM_NONE;
+    be_perms[1].id = domid;
+    be_perms[1].perms = XS_PERM_READ;
 
-    flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num));
+    xs_rm(ctx->xsh, t, be_path);
+    xs_mkdir(ctx->xsh, t, be_path);
+    xs_set_permissions(ctx->xsh, t, be_path, be_perms,
+                       ARRAY_SIZE(be_perms));
+    libxl__xs_writev(gc, t, be_path, libxl__xs_kvs_of_flexarray(gc, back));
+
+    flexarray_append_pair(front, "backend", be_path);
     flexarray_append_pair(front, "backend-id", GCSPRINTF("%d", 0));
     flexarray_append_pair(front, "state", GCSPRINTF("%d", XenbusStateInitialising));
 
-    return libxl__device_generic_add(gc, XBT_NULL, &device,
-                                     libxl__xs_kvs_of_flexarray(gc, back),
-                                     libxl__xs_kvs_of_flexarray(gc, front),
-                                     NULL);
+    fe_perms[0].id = domid;
+    fe_perms[0].perms = XS_PERM_NONE;
+    fe_perms[1].id = 0;
+    fe_perms[1].perms = XS_PERM_READ;
+
+    xs_rm(ctx->xsh, t, fe_path);
+    xs_mkdir(ctx->xsh, t, fe_path);
+    xs_set_permissions(ctx->xsh, t, fe_path,
+                       fe_perms, ARRAY_SIZE(fe_perms));
+    libxl__xs_writev(gc, t, fe_path, libxl__xs_kvs_of_flexarray(gc, front));
 }
 
 static int libxl__device_pci_add_xenstore(libxl__gc *gc,
@@ -135,8 +151,6 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
     be_path = libxl__domain_device_backend_path(gc, 0, domid, 0,
                                                 LIBXL__DEVICE_KIND_PCI);
     num_devs = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/num_devs", be_path));
-    if (!num_devs)
-        return libxl__create_pci_backend(gc, domid, pci, 1);
 
     libxl_domain_type domtype = libxl__domain_type(gc, domid);
     if (domtype == LIBXL_DOMAIN_TYPE_INVALID)
@@ -150,17 +164,17 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
     back = flexarray_make(gc, 16, 1);
 
     LOGD(DEBUG, domid, "Adding new pci device to xenstore");
-    num = atoi(num_devs);
+    num = num_devs ? atoi(num_devs) : 0;
     libxl_create_pci_backend_device(gc, back, num, pci);
     flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num + 1));
-    if (!starting)
+    if (num && !starting)
         flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateReconfiguring));
 
     /*
      * Stubdomin config is derived from its target domain, it doesn't have
      * its own file.
      */
-    if (!is_stubdomain) {
+    if (!is_stubdomain && !starting) {
         lock = libxl__lock_domain_userdata(gc, domid);
         if (!lock) {
             rc = ERROR_LOCK_FAIL;
@@ -170,6 +184,7 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
         rc = libxl__get_domain_configuration(gc, domid, &d_config);
         if (rc) goto out;
 
+        LOGD(DEBUG, domid, "Adding new pci device to config");
         device_add_domain_config(gc, &d_config, &libxl__pci_devtype,
                                  pci);
 
@@ -186,6 +201,10 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
             if (rc) goto out;
         }
 
+        /* This is the first device, so create the backend */
+        if (!num_devs)
+            libxl__create_pci_backend(gc, t, domid, pci);
+
         libxl__xs_writev(gc, t, be_path, libxl__xs_kvs_of_flexarray(gc, back));
 
         rc = libxl__xs_transaction_commit(gc, &t);
@@ -1437,7 +1456,7 @@ out_no_irq:
         }
     }
 
-    if (!starting && !libxl_get_stubdom_id(CTX, domid))
+    if (!libxl_get_stubdom_id(CTX, domid))
         rc = libxl__device_pci_add_xenstore(gc, domid, pci, starting);
     else
         rc = 0;
@@ -1765,28 +1784,12 @@ static void libxl__add_pcis(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
 }
 
 static void add_pcis_done(libxl__egc *egc, libxl__multidev *multidev,
-                             int rc)
+                          int rc)
 {
     EGC_GC;
     add_pcis_state *apds = CONTAINER_OF(multidev, *apds, multidev);
-
-    /* Convenience aliases */
-    libxl_domain_config *d_config = apds->d_config;
-    libxl_domid domid = apds->domid;
     libxl__ao_device *aodev = apds->outer_aodev;
 
-    if (rc) goto out;
-
-    if (d_config->num_pcis > 0 && !libxl_get_stubdom_id(CTX, domid)) {
-        rc = libxl__create_pci_backend(gc, domid, d_config->pcis,
-                                       d_config->num_pcis);
-        if (rc < 0) {
-            LOGD(ERROR, domid, "libxl_create_pci_backend failed: %d", rc);
-            goto out;
-        }
-    }
-
-out:
     aodev->rc = rc;
     aodev->callback(egc, aodev);
 }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:25:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 14:25:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43532.78222 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpYD-0000Tf-CT; Thu, 03 Dec 2020 14:25:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43532.78222; Thu, 03 Dec 2020 14:25:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpYD-0000TP-2y; Thu, 03 Dec 2020 14:25:41 +0000
Received: by outflank-mailman (input) for mailman id 43532;
 Thu, 03 Dec 2020 14:25:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkpYB-0000Ry-Tb
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 14:25:39 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpYA-0006LU-5x; Thu, 03 Dec 2020 14:25:38 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpY9-00015c-UI; Thu, 03 Dec 2020 14:25:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=EELsyWGwAPTTNTQhZ1f09gDtvZWQqdShpQYlfaRbWq4=; b=J9rfVGWnFjq7NURSQm6D4QxEz2
	Iznrs/vIMYI6VZqMOOdXWpaWz0DsLklD+7wqBttI4w9f2geWERTKVbJ1LYn6AExzV2k19+czJA9uv
	xS040v09E43eh1Ji5OWlgC3kNjB6HYwjvogsgLoUPtd45C40kZ2Qfy4ECUsI0xTrdklA=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v5 02/23] libxl: make libxl__device_list() work correctly for LIBXL__DEVICE_KIND_PCI...
Date: Thu,  3 Dec 2020 14:25:13 +0000
Message-Id: <20201203142534.4017-3-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203142534.4017-1-paul@xen.org>
References: <20201203142534.4017-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... devices.

Currently there is an assumption built into libxl__device_list() that device
backends are fully enumerated under the '/libxl' path in xenstore. This is
not the case for PCI backend devices, which are only properly enumerated
under '/local/domain/0/backend'.

This patch adds a new get_path() method to libxl__device_type to allow a
backend implementation (such as PCI) to specify the xenstore path where
devices are enumerated and modifies libxl__device_list() to use this method
if it is available. Also, when the get_num() method is defined, the
from_xenstore() method now expects to be passed the backend path without the
device number appended, so libxl__device_list() is adjusted accordingly.

Having made libxl__device_list() work correctly, this patch removes the
open-coded libxl_pci_device_pci_list() in favour of an evaluation of the
LIBXL_DEFINE_DEVICE_LIST() macro. This has the side-effect of also defining
libxl_pci_device_pci_list_free() which will be used in subsequent patches.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>

v3:
 - New in v3 (replacing "libxl: use LIBXL_DEFINE_DEVICE_LIST for pci devices")
---
 tools/include/libxl.h             |  7 ++++
 tools/libs/light/libxl_device.c   | 70 ++++++++++++++++---------------
 tools/libs/light/libxl_internal.h |  2 +
 tools/libs/light/libxl_pci.c      | 29 ++++---------
 4 files changed, 54 insertions(+), 54 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index fbe4c81ba511..ee52d3cf7e7e 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -451,6 +451,12 @@
  */
 #define LIBXL_HAVE_CONFIG_PCIS 1
 
+/*
+ * LIBXL_HAVE_DEVICE_PCI_LIST_FREE indicates that the
+ * libxl_device_pci_list_free() function is defined.
+ */
+#define LIBXL_HAVE_DEVICE_PCI_LIST_FREE 1
+
 /*
  * libxl ABI compatibility
  *
@@ -2321,6 +2327,7 @@ int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
 
 libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid,
                                         int *num);
+void libxl_device_pci_list_free(libxl_device_pci* list, int num);
 
 /*
  * Turns the current process into a backend device service daemon
diff --git a/tools/libs/light/libxl_device.c b/tools/libs/light/libxl_device.c
index e081faf9a94e..ac173a043d31 100644
--- a/tools/libs/light/libxl_device.c
+++ b/tools/libs/light/libxl_device.c
@@ -2011,7 +2011,7 @@ void *libxl__device_list(libxl__gc *gc, const libxl__device_type *dt,
     void *r = NULL;
     void *list = NULL;
     void *item = NULL;
-    char *libxl_path;
+    char *path;
     char **dir = NULL;
     unsigned int ndirs = 0;
     unsigned int ndevs = 0;
@@ -2019,42 +2019,46 @@ void *libxl__device_list(libxl__gc *gc, const libxl__device_type *dt,
 
     *num = 0;
 
-    libxl_path = GCSPRINTF("%s/device/%s",
-                           libxl__xs_libxl_path(gc, domid),
-                           libxl__device_kind_to_string(dt->type));
+    if (dt->get_path) {
+        rc = dt->get_path(gc, domid, &path);
+        if (rc) goto out;
+    } else {
+        path = GCSPRINTF("%s/device/%s",
+                         libxl__xs_libxl_path(gc, domid),
+                         libxl__device_kind_to_string(dt->type));
+    }
 
-    dir = libxl__xs_directory(gc, XBT_NULL, libxl_path, &ndirs);
-
-    if (dir && ndirs) {
-        if (dt->get_num) {
-            if (ndirs != 1) {
-                LOGD(ERROR, domid, "multiple entries in %s\n", libxl_path);
-                rc = ERROR_FAIL;
-                goto out;
-            }
-            rc = dt->get_num(gc, GCSPRINTF("%s/%s", libxl_path, *dir), &ndevs);
-            if (rc) goto out;
-        } else {
+    if (dt->get_num) {
+        rc = dt->get_num(gc, path, &ndevs);
+        if (rc) goto out;
+    } else {
+        dir = libxl__xs_directory(gc, XBT_NULL, path, &ndirs);
+        if (dir && ndirs)
             ndevs = ndirs;
+    }
+
+    if (!ndevs)
+        return NULL;
+
+    list = libxl__malloc(NOGC, dt->dev_elem_size * ndevs);
+    item = list;
+
+    while (*num < ndevs) {
+        dt->init(item);
+
+        if (dt->from_xenstore) {
+            int nr = dt->get_num ? *num : atoi(*dir);
+            char *device_path = dt->get_num ? path :
+                GCSPRINTF("%s/%d", path, nr);
+
+            rc = dt->from_xenstore(gc, device_path, nr, item);
+            if (rc) goto out;
         }
-        list = libxl__malloc(NOGC, dt->dev_elem_size * ndevs);
-        item = list;
 
-        while (*num < ndevs) {
-            dt->init(item);
-
-            if (dt->from_xenstore) {
-                int nr = dt->get_num ? *num : atoi(*dir);
-                char *device_libxl_path = GCSPRINTF("%s/%s", libxl_path, *dir);
-                rc = dt->from_xenstore(gc, device_libxl_path, nr, item);
-                if (rc) goto out;
-            }
-
-            item = (uint8_t *)item + dt->dev_elem_size;
-            ++(*num);
-            if (!dt->get_num)
-                ++dir;
-        }
+        item = (uint8_t *)item + dt->dev_elem_size;
+        ++(*num);
+        if (!dt->get_num)
+            ++dir;
     }
 
     r = list;
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 3e70ff639b3c..ecee61b5419c 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -3917,6 +3917,7 @@ typedef int (*device_dm_needed_fn_t)(void *, unsigned);
 typedef void (*device_update_config_fn_t)(libxl__gc *, void *, void *);
 typedef int (*device_update_devid_fn_t)(libxl__gc *, uint32_t, void *);
 typedef int (*device_get_num_fn_t)(libxl__gc *, const char *, unsigned int *);
+typedef int (*device_get_path_fn_t)(libxl__gc *, uint32_t, char **);
 typedef int (*device_from_xenstore_fn_t)(libxl__gc *, const char *,
                                          libxl_devid, void *);
 typedef int (*device_set_xenstore_config_fn_t)(libxl__gc *, uint32_t, void *,
@@ -3941,6 +3942,7 @@ struct libxl__device_type {
     device_update_config_fn_t       update_config;
     device_update_devid_fn_t        update_devid;
     device_get_num_fn_t             get_num;
+    device_get_path_fn_t            get_path;
     device_from_xenstore_fn_t       from_xenstore;
     device_set_xenstore_config_fn_t set_xenstore_config;
 };
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 8c30642252f5..757746a8dec1 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -2393,29 +2393,13 @@ static int libxl__device_pci_get_num(libxl__gc *gc, const char *be_path,
     return rc;
 }
 
-libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid, int *num)
+static int libxl__device_pci_get_path(libxl__gc *gc, uint32_t domid,
+                                      char **path)
 {
-    GC_INIT(ctx);
-    char *be_path;
-    unsigned int n, i;
-    libxl_device_pci *pcis = NULL;
+    *path = libxl__domain_device_backend_path(gc, 0, domid, 0,
+                                              LIBXL__DEVICE_KIND_PCI);
 
-    *num = 0;
-
-    be_path = libxl__domain_device_backend_path(gc, 0, domid, 0,
-                                                LIBXL__DEVICE_KIND_PCI);
-    if (libxl__device_pci_get_num(gc, be_path, &n))
-        goto out;
-
-    pcis = calloc(n, sizeof(libxl_device_pci));
-
-    for (i = 0; i < n; i++)
-        libxl__device_pci_from_xs_be(gc, be_path, i, pcis + i);
-
-    *num = n;
-out:
-    GC_FREE;
-    return pcis;
+    return 0;
 }
 
 void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
@@ -2492,10 +2476,13 @@ static int libxl_device_pci_compare(const libxl_device_pci *d1,
     return COMPARE_PCI(d1, d2);
 }
 
+LIBXL_DEFINE_DEVICE_LIST(pci)
+
 #define libxl__device_pci_update_devid NULL
 
 DEFINE_DEVICE_TYPE_STRUCT(pci, PCI,
     .get_num = libxl__device_pci_get_num,
+    .get_path = libxl__device_pci_get_path,
     .from_xenstore = libxl__device_pci_from_xs_be,
 );
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:25:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 14:25:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43531.78211 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpYC-0000SK-LU; Thu, 03 Dec 2020 14:25:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43531.78211; Thu, 03 Dec 2020 14:25:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpYC-0000SD-Hp; Thu, 03 Dec 2020 14:25:40 +0000
Received: by outflank-mailman (input) for mailman id 43531;
 Thu, 03 Dec 2020 14:25:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkpYA-0000Rk-Nc
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 14:25:38 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpY9-0006LQ-5P; Thu, 03 Dec 2020 14:25:37 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpY8-00015c-K4; Thu, 03 Dec 2020 14:25:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=x251MOkO+WZaIbWbqux/NxcA8FNSIJPomM8sJBWlLiA=; b=NHJ8V3KC27N59oVeHtcyxMdfbW
	6lQzBGUjT+1JjCQ0FPXhkre2oAybEgadidVJWF6A4iTDtfKTemRSift3dihgMVE3MlZVgb6tWD2E+
	KECW/F91WS2S14GNBYOHbGaD+hueAwRGHN9ylm+FBrU1ezAVTwbMwNQLEx6s+8toO2eM=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v5 01/23] xl / libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
Date: Thu,  3 Dec 2020 14:25:12 +0000
Message-Id: <20201203142534.4017-2-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203142534.4017-1-paul@xen.org>
References: <20201203142534.4017-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

The seemingly arbitrary use of 'pci' and 'pcidev' in the code in libxl_pci.c
is confusing and also compromises use of some macros used for other device
types. Indeed it seems that DEFINE_DEVICE_TYPE_STRUCT_X exists solely because
of this duality.

This patch purges use of 'pcidev' from the libxl code, allowing evaluation of
DEFINE_DEVICE_TYPE_STRUCT_X to be replaced with DEFINE_DEVICE_TYPE_STRUCT,
hence allowing removal of the former.

For consistency the xl and libs/util code is also modified, but in this case
it is purely cosmetic.

NOTE: Some of the more gross formatting errors (such as lack of spaces after
      keywords) that came into context have been fixed in libxl_pci.c.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>

v5:
 - Minor cosmetic fix
---
 tools/include/libxl.h             |  17 +-
 tools/libs/light/libxl_create.c   |   6 +-
 tools/libs/light/libxl_dm.c       |  18 +-
 tools/libs/light/libxl_internal.h |  45 ++-
 tools/libs/light/libxl_pci.c      | 582 +++++++++++++++---------------
 tools/libs/light/libxl_types.idl  |   2 +-
 tools/libs/util/libxlu_pci.c      |  36 +-
 tools/xl/xl_parse.c               |  26 +-
 tools/xl/xl_pci.c                 |  68 ++--
 tools/xl/xl_sxp.c                 |  12 +-
 10 files changed, 408 insertions(+), 404 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 1ea5b4f446e8..fbe4c81ba511 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -444,6 +444,13 @@
  */
 #define LIBXL_HAVE_DISK_SAFE_REMOVE 1
 
+/*
+ * LIBXL_HAVE_CONFIG_PCIS indicates that the 'pcidevs' and 'num_pcidevs'
+ * fields in libxl_domain_config have been renamed to 'pcis' and 'num_pcis'
+ * respectively.
+ */
+#define LIBXL_HAVE_CONFIG_PCIS 1
+
 /*
  * libxl ABI compatibility
  *
@@ -2300,15 +2307,15 @@ int libxl_device_pvcallsif_destroy(libxl_ctx *ctx, uint32_t domid,
 
 /* PCI Passthrough */
 int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
-                         libxl_device_pci *pcidev,
+                         libxl_device_pci *pci,
                          const libxl_asyncop_how *ao_how)
                          LIBXL_EXTERNAL_CALLERS_ONLY;
 int libxl_device_pci_remove(libxl_ctx *ctx, uint32_t domid,
-                            libxl_device_pci *pcidev,
+                            libxl_device_pci *pci,
                             const libxl_asyncop_how *ao_how)
                             LIBXL_EXTERNAL_CALLERS_ONLY;
 int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
-                             libxl_device_pci *pcidev,
+                             libxl_device_pci *pci,
                              const libxl_asyncop_how *ao_how)
                              LIBXL_EXTERNAL_CALLERS_ONLY;
 
@@ -2352,8 +2359,8 @@ int libxl_device_events_handler(libxl_ctx *ctx,
  * added or is not bound, the functions will emit a warning but return
  * SUCCESS.
  */
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pcidev, int rebind);
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pcidev, int rebind);
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
 libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
 
 /* CPUID handling */
diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index 321a13e519b5..1f5052c52033 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -1100,7 +1100,7 @@ int libxl__domain_config_setdefault(libxl__gc *gc,
         goto error_out;
     }
 
-    bool need_pt = d_config->num_pcidevs || d_config->num_dtdevs;
+    bool need_pt = d_config->num_pcis || d_config->num_dtdevs;
     if (c_info->passthrough == LIBXL_PASSTHROUGH_DEFAULT) {
         c_info->passthrough = need_pt
             ? LIBXL_PASSTHROUGH_ENABLED : LIBXL_PASSTHROUGH_DISABLED;
@@ -1141,7 +1141,7 @@ int libxl__domain_config_setdefault(libxl__gc *gc,
      * assignment when PoD is enabled.
      */
     if (d_config->c_info.type != LIBXL_DOMAIN_TYPE_PV &&
-        d_config->num_pcidevs && pod_enabled) {
+        d_config->num_pcis && pod_enabled) {
         ret = ERROR_INVAL;
         LOGD(ERROR, domid,
              "PCI device assignment for HVM guest failed due to PoD enabled");
@@ -1817,7 +1817,7 @@ const libxl__device_type *device_type_tbl[] = {
     &libxl__vtpm_devtype,
     &libxl__usbctrl_devtype,
     &libxl__usbdev_devtype,
-    &libxl__pcidev_devtype,
+    &libxl__pci_devtype,
     &libxl__dtdev_devtype,
     &libxl__vdispl_devtype,
     &libxl__vsnd_devtype,
diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 3da83259c08e..8ebe1b60c9d7 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -442,7 +442,7 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
 
     /* Might not expose rdm. */
     if (strategy == LIBXL_RDM_RESERVE_STRATEGY_IGNORE &&
-        !d_config->num_pcidevs)
+        !d_config->num_pcis)
         return 0;
 
     /* Query all RDM entries in this platform */
@@ -469,13 +469,13 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
     }
 
     /* Query RDM entries per-device */
-    for (i = 0; i < d_config->num_pcidevs; i++) {
+    for (i = 0; i < d_config->num_pcis; i++) {
         unsigned int n, nr_entries;
 
-        seg = d_config->pcidevs[i].domain;
-        bus = d_config->pcidevs[i].bus;
-        devfn = PCI_DEVFN(d_config->pcidevs[i].dev,
-                          d_config->pcidevs[i].func);
+        seg = d_config->pcis[i].domain;
+        bus = d_config->pcis[i].bus;
+        devfn = PCI_DEVFN(d_config->pcis[i].dev,
+                          d_config->pcis[i].func);
         nr_entries = 0;
         rc = libxl__xc_device_get_rdm(gc, 0,
                                       seg, bus, devfn, &nr_entries, &xrdm);
@@ -488,7 +488,7 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
         assert(xrdm);
 
         rc = libxl__device_pci_setdefault(gc, DOMID_INVALID,
-                                          &d_config->pcidevs[i], false);
+                                          &d_config->pcis[i], false);
         if (rc)
             goto out;
 
@@ -516,7 +516,7 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
                      * global policy in this case.
                      */
                     d_config->rdms[j].policy
-                        = d_config->pcidevs[i].rdm_policy;
+                        = d_config->pcis[i].rdm_policy;
                     new = false;
                     break;
                 }
@@ -526,7 +526,7 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
                 add_rdm_entry(gc, d_config,
                               pfn_to_paddr(xrdm[n].start_pfn),
                               pfn_to_paddr(xrdm[n].nr_pages),
-                              d_config->pcidevs[i].rdm_policy);
+                              d_config->pcis[i].rdm_policy);
         }
     }
 
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index e26cda9b5045..3e70ff639b3c 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -1709,7 +1709,7 @@ _hidden int libxl__pci_topology_init(libxl__gc *gc,
 /* from libxl_pci */
 
 _hidden void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
-                                   libxl_device_pci *pcidev, bool starting,
+                                   libxl_device_pci *pci, bool starting,
                                    libxl__ao_device *aodev);
 _hidden void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
                                            libxl__multidev *);
@@ -3945,30 +3945,27 @@ struct libxl__device_type {
     device_set_xenstore_config_fn_t set_xenstore_config;
 };
 
-#define DEFINE_DEVICE_TYPE_STRUCT_X(name, sname, kind, ...)                    \
-    const libxl__device_type libxl__ ## name ## _devtype = {                   \
-        .type          = LIBXL__DEVICE_KIND_ ## kind,                       \
-        .ptr_offset    = offsetof(libxl_domain_config, name ## s),             \
-        .num_offset    = offsetof(libxl_domain_config, num_ ## name ## s),     \
-        .dev_elem_size = sizeof(libxl_device_ ## sname),                       \
-        .add           = libxl__add_ ## name ## s,                             \
-        .set_default   = (device_set_default_fn_t)                             \
-                         libxl__device_ ## sname ## _setdefault,               \
-        .to_device     = (device_to_device_fn_t)libxl__device_from_ ## name,   \
-        .init          = (device_init_fn_t)libxl_device_ ## sname ## _init,    \
-        .copy          = (device_copy_fn_t)libxl_device_ ## sname ## _copy,    \
-        .dispose       = (device_dispose_fn_t)                                 \
-                         libxl_device_ ## sname ## _dispose,                   \
-        .compare       = (device_compare_fn_t)                                 \
-                         libxl_device_ ## sname ## _compare,                   \
-        .update_devid  = (device_update_devid_fn_t)                            \
-                         libxl__device_ ## sname ## _update_devid,             \
-        __VA_ARGS__                                                            \
+#define DEFINE_DEVICE_TYPE_STRUCT(name, kind, ...)                           \
+    const libxl__device_type libxl__ ## name ## _devtype = {                 \
+        .type          = LIBXL__DEVICE_KIND_ ## kind,                        \
+        .ptr_offset    = offsetof(libxl_domain_config, name ## s),           \
+        .num_offset    = offsetof(libxl_domain_config, num_ ## name ## s),   \
+        .dev_elem_size = sizeof(libxl_device_ ## name),                      \
+        .add           = libxl__add_ ## name ## s,                           \
+        .set_default   = (device_set_default_fn_t)                           \
+                         libxl__device_ ## name ## _setdefault,              \
+        .to_device     = (device_to_device_fn_t)libxl__device_from_ ## name, \
+        .init          = (device_init_fn_t)libxl_device_ ## name ## _init,   \
+        .copy          = (device_copy_fn_t)libxl_device_ ## name ## _copy,   \
+        .dispose       = (device_dispose_fn_t)                               \
+                         libxl_device_ ## name ## _dispose,                  \
+        .compare       = (device_compare_fn_t)                               \
+                         libxl_device_ ## name ## _compare,                  \
+        .update_devid  = (device_update_devid_fn_t)                          \
+                         libxl__device_ ## name ## _update_devid,            \
+        __VA_ARGS__                                                          \
     }
 
-#define DEFINE_DEVICE_TYPE_STRUCT(name, kind, ...)                             \
-    DEFINE_DEVICE_TYPE_STRUCT_X(name, name, kind, __VA_ARGS__)
-
 static inline void **libxl__device_type_get_ptr(
     const libxl__device_type *dt, const libxl_domain_config *d_config)
 {
@@ -3995,7 +3992,7 @@ extern const libxl__device_type libxl__nic_devtype;
 extern const libxl__device_type libxl__vtpm_devtype;
 extern const libxl__device_type libxl__usbctrl_devtype;
 extern const libxl__device_type libxl__usbdev_devtype;
-extern const libxl__device_type libxl__pcidev_devtype;
+extern const libxl__device_type libxl__pci_devtype;
 extern const libxl__device_type libxl__vdispl_devtype;
 extern const libxl__device_type libxl__p9_devtype;
 extern const libxl__device_type libxl__pvcallsif_devtype;
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index bc5843b13701..8c30642252f5 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -25,51 +25,51 @@
 #define PCI_BDF_XSPATH         "%04x-%02x-%02x-%01x"
 #define PCI_PT_QDEV_ID         "pci-pt-%02x_%02x.%01x"
 
-static unsigned int pcidev_encode_bdf(libxl_device_pci *pcidev)
+static unsigned int pci_encode_bdf(libxl_device_pci *pci)
 {
     unsigned int value;
 
-    value = pcidev->domain << 16;
-    value |= (pcidev->bus & 0xff) << 8;
-    value |= (pcidev->dev & 0x1f) << 3;
-    value |= (pcidev->func & 0x7);
+    value = pci->domain << 16;
+    value |= (pci->bus & 0xff) << 8;
+    value |= (pci->dev & 0x1f) << 3;
+    value |= (pci->func & 0x7);
 
     return value;
 }
 
-static void pcidev_struct_fill(libxl_device_pci *pcidev, unsigned int domain,
-                               unsigned int bus, unsigned int dev,
-                               unsigned int func, unsigned int vdevfn)
+static void pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
+                            unsigned int bus, unsigned int dev,
+                            unsigned int func, unsigned int vdevfn)
 {
-    pcidev->domain = domain;
-    pcidev->bus = bus;
-    pcidev->dev = dev;
-    pcidev->func = func;
-    pcidev->vdevfn = vdevfn;
+    pci->domain = domain;
+    pci->bus = bus;
+    pci->dev = dev;
+    pci->func = func;
+    pci->vdevfn = vdevfn;
 }
 
 static void libxl_create_pci_backend_device(libxl__gc *gc,
                                             flexarray_t *back,
                                             int num,
-                                            const libxl_device_pci *pcidev)
+                                            const libxl_device_pci *pci)
 {
     flexarray_append(back, GCSPRINTF("key-%d", num));
-    flexarray_append(back, GCSPRINTF(PCI_BDF, pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func));
+    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->domain, pci->bus, pci->dev, pci->func));
     flexarray_append(back, GCSPRINTF("dev-%d", num));
-    flexarray_append(back, GCSPRINTF(PCI_BDF, pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func));
-    if (pcidev->vdevfn)
-        flexarray_append_pair(back, GCSPRINTF("vdevfn-%d", num), GCSPRINTF("%x", pcidev->vdevfn));
+    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->domain, pci->bus, pci->dev, pci->func));
+    if (pci->vdevfn)
+        flexarray_append_pair(back, GCSPRINTF("vdevfn-%d", num), GCSPRINTF("%x", pci->vdevfn));
     flexarray_append(back, GCSPRINTF("opts-%d", num));
     flexarray_append(back,
               GCSPRINTF("msitranslate=%d,power_mgmt=%d,permissive=%d",
-                             pcidev->msitranslate, pcidev->power_mgmt,
-                             pcidev->permissive));
+                             pci->msitranslate, pci->power_mgmt,
+                             pci->permissive));
     flexarray_append_pair(back, GCSPRINTF("state-%d", num), GCSPRINTF("%d", XenbusStateInitialising));
 }
 
-static void libxl__device_from_pcidev(libxl__gc *gc, uint32_t domid,
-                                      const libxl_device_pci *pcidev,
-                                      libxl__device *device)
+static void libxl__device_from_pci(libxl__gc *gc, uint32_t domid,
+                                   const libxl_device_pci *pci,
+                                   libxl__device *device)
 {
     device->backend_devid = 0;
     device->backend_domid = 0;
@@ -80,7 +80,7 @@ static void libxl__device_from_pcidev(libxl__gc *gc, uint32_t domid,
 }
 
 static int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
-                                     const libxl_device_pci *pcidev,
+                                     const libxl_device_pci *pci,
                                      int num)
 {
     flexarray_t *front = NULL;
@@ -94,15 +94,15 @@ static int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
     LOGD(DEBUG, domid, "Creating pci backend");
 
     /* add pci device */
-    libxl__device_from_pcidev(gc, domid, pcidev, &device);
+    libxl__device_from_pci(gc, domid, pci, &device);
 
     flexarray_append_pair(back, "frontend-id", GCSPRINTF("%d", domid));
     flexarray_append_pair(back, "online", "1");
     flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateInitialising));
     flexarray_append_pair(back, "domain", libxl__domid_to_name(gc, domid));
 
-    for (i = 0; i < num; i++, pcidev++)
-        libxl_create_pci_backend_device(gc, back, i, pcidev);
+    for (i = 0; i < num; i++, pci++)
+        libxl_create_pci_backend_device(gc, back, i, pci);
 
     flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num));
     flexarray_append_pair(front, "backend-id", GCSPRINTF("%d", 0));
@@ -116,7 +116,7 @@ static int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
 
 static int libxl__device_pci_add_xenstore(libxl__gc *gc,
                                           uint32_t domid,
-                                          const libxl_device_pci *pcidev,
+                                          const libxl_device_pci *pci,
                                           bool starting)
 {
     flexarray_t *back;
@@ -136,7 +136,7 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
                                                 LIBXL__DEVICE_KIND_PCI);
     num_devs = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/num_devs", be_path));
     if (!num_devs)
-        return libxl__create_pci_backend(gc, domid, pcidev, 1);
+        return libxl__create_pci_backend(gc, domid, pci, 1);
 
     libxl_domain_type domtype = libxl__domain_type(gc, domid);
     if (domtype == LIBXL_DOMAIN_TYPE_INVALID)
@@ -151,7 +151,7 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
 
     LOGD(DEBUG, domid, "Adding new pci device to xenstore");
     num = atoi(num_devs);
-    libxl_create_pci_backend_device(gc, back, num, pcidev);
+    libxl_create_pci_backend_device(gc, back, num, pci);
     flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num + 1));
     if (!starting)
         flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateReconfiguring));
@@ -170,8 +170,8 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
         rc = libxl__get_domain_configuration(gc, domid, &d_config);
         if (rc) goto out;
 
-        device_add_domain_config(gc, &d_config, &libxl__pcidev_devtype,
-                                 pcidev);
+        device_add_domain_config(gc, &d_config, &libxl__pci_devtype,
+                                 pci);
 
         rc = libxl__dm_check_start(gc, &d_config, domid);
         if (rc) goto out;
@@ -201,7 +201,7 @@ out:
     return rc;
 }
 
-static int libxl__device_pci_remove_xenstore(libxl__gc *gc, uint32_t domid, libxl_device_pci *pcidev)
+static int libxl__device_pci_remove_xenstore(libxl__gc *gc, uint32_t domid, libxl_device_pci *pci)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     char *be_path, *num_devs_path, *num_devs, *xsdev, *tmp, *tmppath;
@@ -231,8 +231,8 @@ static int libxl__device_pci_remove_xenstore(libxl__gc *gc, uint32_t domid, libx
         unsigned int domain = 0, bus = 0, dev = 0, func = 0;
         xsdev = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/dev-%d", be_path, i));
         sscanf(xsdev, PCI_BDF, &domain, &bus, &dev, &func);
-        if (domain == pcidev->domain && bus == pcidev->bus &&
-            pcidev->dev == dev && pcidev->func == func) {
+        if (domain == pci->domain && bus == pci->bus &&
+            pci->dev == dev && pci->func == func) {
             break;
         }
     }
@@ -350,7 +350,7 @@ static int get_all_assigned_devices(libxl__gc *gc, libxl_device_pci **list, int
                     *list = realloc(*list, sizeof(libxl_device_pci) * ((*num) + 1));
                     if (*list == NULL)
                         return ERROR_NOMEM;
-                    pcidev_struct_fill(*list + *num, dom, bus, dev, func, 0);
+                    pci_struct_fill(*list + *num, dom, bus, dev, func, 0);
                     (*num)++;
                 }
             }
@@ -361,8 +361,8 @@ static int get_all_assigned_devices(libxl__gc *gc, libxl_device_pci **list, int
     return 0;
 }
 
-static int is_pcidev_in_array(libxl_device_pci *assigned, int num_assigned,
-                       int dom, int bus, int dev, int func)
+static int is_pci_in_array(libxl_device_pci *assigned, int num_assigned,
+                           int dom, int bus, int dev, int func)
 {
     int i;
 
@@ -383,7 +383,7 @@ static int is_pcidev_in_array(libxl_device_pci *assigned, int num_assigned,
 
 /* Write the standard BDF into the sysfs path given by sysfs_path. */
 static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
-                           libxl_device_pci *pcidev)
+                           libxl_device_pci *pci)
 {
     int rc, fd;
     char *buf;
@@ -394,8 +394,8 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
         return ERROR_FAIL;
     }
 
-    buf = GCSPRINTF(PCI_BDF, pcidev->domain, pcidev->bus,
-                    pcidev->dev, pcidev->func);
+    buf = GCSPRINTF(PCI_BDF, pci->domain, pci->bus,
+                    pci->dev, pci->func);
     rc = write(fd, buf, strlen(buf));
     /* Annoying to have two if's, but we need the errno */
     if (rc < 0)
@@ -411,7 +411,7 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
 libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
 {
     GC_INIT(ctx);
-    libxl_device_pci *pcidevs = NULL, *new, *assigned;
+    libxl_device_pci *pcis = NULL, *new, *assigned;
     struct dirent *de;
     DIR *dir;
     int r, num_assigned;
@@ -436,40 +436,40 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
         if (sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4)
             continue;
 
-        if (is_pcidev_in_array(assigned, num_assigned, dom, bus, dev, func))
+        if (is_pci_in_array(assigned, num_assigned, dom, bus, dev, func))
             continue;
 
-        new = realloc(pcidevs, ((*num) + 1) * sizeof(*new));
+        new = realloc(pcis, ((*num) + 1) * sizeof(*new));
         if (NULL == new)
             continue;
 
-        pcidevs = new;
-        new = pcidevs + *num;
+        pcis = new;
+        new = pcis + *num;
 
         memset(new, 0, sizeof(*new));
-        pcidev_struct_fill(new, dom, bus, dev, func, 0);
+        pci_struct_fill(new, dom, bus, dev, func, 0);
         (*num)++;
     }
 
     closedir(dir);
 out:
     GC_FREE;
-    return pcidevs;
+    return pcis;
 }
 
 /* Unbind device from its current driver, if any.  If driver_path is non-NULL,
  * store the path to the original driver in it. */
-static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pcidev,
+static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
                             char **driver_path)
 {
     char * spath, *dp = NULL;
     struct stat st;
 
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/driver",
-                           pcidev->domain,
-                           pcidev->bus,
-                           pcidev->dev,
-                           pcidev->func);
+                           pci->domain,
+                           pci->bus,
+                           pci->dev,
+                           pci->func);
     if ( !lstat(spath, &st) ) {
         /* Find the canonical path to the driver. */
         dp = libxl__zalloc(gc, PATH_MAX);
@@ -483,7 +483,7 @@ static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pcidev,
 
         /* Unbind from the old driver */
         spath = GCSPRINTF("%s/unbind", dp);
-        if ( sysfs_write_bdf(gc, spath, pcidev) < 0 ) {
+        if ( sysfs_write_bdf(gc, spath, pci) < 0 ) {
             LOGE(ERROR, "Couldn't unbind device");
             return -1;
         }
@@ -495,11 +495,11 @@ static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pcidev,
     return 0;
 }
 
-static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pcidev)
+static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pci)
 {
     char *pci_device_vendor_path =
             GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/vendor",
-                      pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+                      pci->domain, pci->bus, pci->dev, pci->func);
     uint16_t read_items;
     uint16_t pci_device_vendor;
 
@@ -507,7 +507,7 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pcidev)
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have vendor attribute",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         return 0xffff;
     }
     read_items = fscanf(f, "0x%hx\n", &pci_device_vendor);
@@ -515,18 +515,18 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pcidev)
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read vendor of pci device "PCI_BDF,
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         return 0xffff;
     }
 
     return pci_device_vendor;
 }
 
-static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pcidev)
+static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pci)
 {
     char *pci_device_device_path =
             GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/device",
-                      pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+                      pci->domain, pci->bus, pci->dev, pci->func);
     uint16_t read_items;
     uint16_t pci_device_device;
 
@@ -534,7 +534,7 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pcidev)
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have device attribute",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         return 0xffff;
     }
     read_items = fscanf(f, "0x%hx\n", &pci_device_device);
@@ -542,25 +542,25 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pcidev)
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read device of pci device "PCI_BDF,
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         return 0xffff;
     }
 
     return pci_device_device;
 }
 
-static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pcidev,
+static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pci,
                                unsigned long *class)
 {
     char *pci_device_class_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/class",
-                     pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+                     pci->domain, pci->bus, pci->dev, pci->func);
     int read_items, ret = 0;
 
     FILE *f = fopen(pci_device_class_path, "r");
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have class attribute",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         ret = ERROR_FAIL;
         goto out;
     }
@@ -569,7 +569,7 @@ static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pcidev,
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read class of pci device "PCI_BDF,
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         ret = ERROR_FAIL;
     }
 
@@ -588,16 +588,16 @@ bool libxl__is_igd_vga_passthru(libxl__gc *gc,
     uint16_t pt_vendor, pt_device;
     unsigned long class;
 
-    for (i = 0 ; i < d_config->num_pcidevs ; i++) {
-        libxl_device_pci *pcidev = &d_config->pcidevs[i];
-        pt_vendor = sysfs_dev_get_vendor(gc, pcidev);
-        pt_device = sysfs_dev_get_device(gc, pcidev);
+    for (i = 0 ; i < d_config->num_pcis ; i++) {
+        libxl_device_pci *pci = &d_config->pcis[i];
+        pt_vendor = sysfs_dev_get_vendor(gc, pci);
+        pt_device = sysfs_dev_get_device(gc, pci);
 
         if (pt_vendor == 0xffff || pt_device == 0xffff ||
             pt_vendor != 0x8086)
             continue;
 
-        if (sysfs_dev_get_class(gc, pcidev, &class))
+        if (sysfs_dev_get_class(gc, pci, &class))
             continue;
         if (class == 0x030000)
             return true;
@@ -621,8 +621,8 @@ bool libxl__is_igd_vga_passthru(libxl__gc *gc,
  * already exist.
  */
 
-/* Scan through /sys/.../pciback/slots looking for pcidev's BDF */
-static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pcidev)
+/* Scan through /sys/.../pciback/slots looking for pci's BDF */
+static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pci)
 {
     FILE *f;
     int rc = 0;
@@ -635,11 +635,11 @@ static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pcidev)
         return ERROR_FAIL;
     }
 
-    while(fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func)==4) {
-        if(dom == pcidev->domain
-           && bus == pcidev->bus
-           && dev == pcidev->dev
-           && func == pcidev->func) {
+    while (fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func) == 4) {
+        if (dom == pci->domain
+            && bus == pci->bus
+            && dev == pci->dev
+            && func == pci->func) {
             rc = 1;
             goto out;
         }
@@ -649,7 +649,7 @@ out:
     return rc;
 }
 
-static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pcidev)
+static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
 {
     char * spath;
     int rc;
@@ -665,8 +665,8 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pcidev)
     }
 
     spath = GCSPRINTF(SYSFS_PCIBACK_DRIVER"/"PCI_BDF,
-                      pcidev->domain, pcidev->bus,
-                      pcidev->dev, pcidev->func);
+                      pci->domain, pci->bus,
+                      pci->dev, pci->func);
     rc = lstat(spath, &st);
 
     if( rc == 0 )
@@ -677,40 +677,40 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pcidev)
     return -1;
 }
 
-static int pciback_dev_assign(libxl__gc *gc, libxl_device_pci *pcidev)
+static int pciback_dev_assign(libxl__gc *gc, libxl_device_pci *pci)
 {
     int rc;
 
-    if ( (rc=pciback_dev_has_slot(gc, pcidev)) < 0 ) {
+    if ( (rc = pciback_dev_has_slot(gc, pci)) < 0 ) {
         LOGE(ERROR, "Error checking for pciback slot");
         return ERROR_FAIL;
     } else if (rc == 0) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/new_slot",
-                             pcidev) < 0 ) {
+                             pci) < 0 ) {
             LOGE(ERROR, "Couldn't bind device to pciback!");
             return ERROR_FAIL;
         }
     }
 
-    if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/bind", pcidev) < 0 ) {
+    if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/bind", pci) < 0 ) {
         LOGE(ERROR, "Couldn't bind device to pciback!");
         return ERROR_FAIL;
     }
     return 0;
 }
 
-static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pcidev)
+static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
 {
     /* Remove from pciback */
-    if ( sysfs_dev_unbind(gc, pcidev, NULL) < 0 ) {
+    if ( sysfs_dev_unbind(gc, pci, NULL) < 0 ) {
         LOG(ERROR, "Couldn't unbind device!");
         return ERROR_FAIL;
     }
 
     /* Remove slot if necessary */
-    if ( pciback_dev_has_slot(gc, pcidev) > 0 ) {
+    if ( pciback_dev_has_slot(gc, pci) > 0 ) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/remove_slot",
-                             pcidev) < 0 ) {
+                             pci) < 0 ) {
             LOGE(ERROR, "Couldn't remove pciback slot");
             return ERROR_FAIL;
         }
@@ -721,49 +721,49 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pcidev)
 #define PCIBACK_INFO_PATH "/libxl/pciback"
 
 static void pci_assignable_driver_path_write(libxl__gc *gc,
-                                            libxl_device_pci *pcidev,
+                                            libxl_device_pci *pci,
                                             char *driver_path)
 {
     char *path;
 
     path = GCSPRINTF(PCIBACK_INFO_PATH"/"PCI_BDF_XSPATH"/driver_path",
-                     pcidev->domain,
-                     pcidev->bus,
-                     pcidev->dev,
-                     pcidev->func);
+                     pci->domain,
+                     pci->bus,
+                     pci->dev,
+                     pci->func);
     if ( libxl__xs_printf(gc, XBT_NULL, path, "%s", driver_path) < 0 ) {
         LOGE(WARN, "Write of %s to node %s failed.", driver_path, path);
     }
 }
 
 static char * pci_assignable_driver_path_read(libxl__gc *gc,
-                                              libxl_device_pci *pcidev)
+                                              libxl_device_pci *pci)
 {
     return libxl__xs_read(gc, XBT_NULL,
                           GCSPRINTF(
                            PCIBACK_INFO_PATH "/" PCI_BDF_XSPATH "/driver_path",
-                           pcidev->domain,
-                           pcidev->bus,
-                           pcidev->dev,
-                           pcidev->func));
+                           pci->domain,
+                           pci->bus,
+                           pci->dev,
+                           pci->func));
 }
 
 static void pci_assignable_driver_path_remove(libxl__gc *gc,
-                                              libxl_device_pci *pcidev)
+                                              libxl_device_pci *pci)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
 
     /* Remove the xenstore entry */
     xs_rm(ctx->xsh, XBT_NULL,
           GCSPRINTF(PCIBACK_INFO_PATH "/" PCI_BDF_XSPATH,
-                    pcidev->domain,
-                    pcidev->bus,
-                    pcidev->dev,
-                    pcidev->func) );
+                    pci->domain,
+                    pci->bus,
+                    pci->dev,
+                    pci->func) );
 }
 
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
-                                            libxl_device_pci *pcidev,
+                                            libxl_device_pci *pci,
                                             int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -773,10 +773,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     struct stat st;
 
     /* Local copy for convenience */
-    dom = pcidev->domain;
-    bus = pcidev->bus;
-    dev = pcidev->dev;
-    func = pcidev->func;
+    dom = pci->domain;
+    bus = pci->bus;
+    dev = pci->dev;
+    func = pci->func;
 
     /* See if the device exists */
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF, dom, bus, dev, func);
@@ -786,7 +786,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
 
     /* Check to see if it's already assigned to pciback */
-    rc = pciback_dev_is_assigned(gc, pcidev);
+    rc = pciback_dev_is_assigned(gc, pci);
     if ( rc < 0 ) {
         return ERROR_FAIL;
     }
@@ -796,7 +796,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
 
     /* Check to see if there's already a driver that we need to unbind from */
-    if ( sysfs_dev_unbind(gc, pcidev, &driver_path ) ) {
+    if ( sysfs_dev_unbind(gc, pci, &driver_path ) ) {
         LOG(ERROR, "Couldn't unbind "PCI_BDF" from driver",
             dom, bus, dev, func);
         return ERROR_FAIL;
@@ -805,9 +805,9 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     /* Store driver_path for rebinding to dom0 */
     if ( rebind ) {
         if ( driver_path ) {
-            pci_assignable_driver_path_write(gc, pcidev, driver_path);
+            pci_assignable_driver_path_write(gc, pci, driver_path);
         } else if ( (driver_path =
-                     pci_assignable_driver_path_read(gc, pcidev)) != NULL ) {
+                     pci_assignable_driver_path_read(gc, pci)) != NULL ) {
             LOG(INFO, PCI_BDF" not bound to a driver, will be rebound to %s",
                 dom, bus, dev, func, driver_path);
         } else {
@@ -815,10 +815,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
                 dom, bus, dev, func);
         }
     } else {
-        pci_assignable_driver_path_remove(gc, pcidev);
+        pci_assignable_driver_path_remove(gc, pci);
     }
 
-    if ( pciback_dev_assign(gc, pcidev) ) {
+    if ( pciback_dev_assign(gc, pci) ) {
         LOG(ERROR, "Couldn't bind device to pciback!");
         return ERROR_FAIL;
     }
@@ -829,7 +829,7 @@ quarantine:
      * so always pass XEN_DOMCTL_DEV_RDM_RELAXED to avoid assignment being
      * unnecessarily denied.
      */
-    rc = xc_assign_device(ctx->xch, DOMID_IO, pcidev_encode_bdf(pcidev),
+    rc = xc_assign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci),
                           XEN_DOMCTL_DEV_RDM_RELAXED);
     if ( rc < 0 ) {
         LOG(ERROR, "failed to quarantine "PCI_BDF, dom, bus, dev, func);
@@ -840,7 +840,7 @@ quarantine:
 }
 
 static int libxl__device_pci_assignable_remove(libxl__gc *gc,
-                                               libxl_device_pci *pcidev,
+                                               libxl_device_pci *pci,
                                                int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -848,24 +848,24 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     char *driver_path;
 
     /* De-quarantine */
-    rc = xc_deassign_device(ctx->xch, DOMID_IO, pcidev_encode_bdf(pcidev));
+    rc = xc_deassign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci));
     if ( rc < 0 ) {
-        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pcidev->domain, pcidev->bus,
-            pcidev->dev, pcidev->func);
+        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pci->domain, pci->bus,
+            pci->dev, pci->func);
         return ERROR_FAIL;
     }
 
     /* Unbind from pciback */
-    if ( (rc=pciback_dev_is_assigned(gc, pcidev)) < 0 ) {
+    if ( (rc = pciback_dev_is_assigned(gc, pci)) < 0 ) {
         return ERROR_FAIL;
     } else if ( rc ) {
-        pciback_dev_unassign(gc, pcidev);
+        pciback_dev_unassign(gc, pci);
     } else {
         LOG(WARN, "Not bound to pciback");
     }
 
     /* Rebind if necessary */
-    driver_path = pci_assignable_driver_path_read(gc, pcidev);
+    driver_path = pci_assignable_driver_path_read(gc, pci);
 
     if ( driver_path ) {
         if ( rebind ) {
@@ -873,12 +873,12 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
 
             if ( sysfs_write_bdf(gc,
                                  GCSPRINTF("%s/bind", driver_path),
-                                 pcidev) < 0 ) {
+                                 pci) < 0 ) {
                 LOGE(ERROR, "Couldn't bind device to %s", driver_path);
                 return -1;
             }
 
-            pci_assignable_driver_path_remove(gc, pcidev);
+            pci_assignable_driver_path_remove(gc, pci);
         }
     } else {
         if ( rebind ) {
@@ -890,26 +890,26 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     return 0;
 }
 
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pcidev,
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci,
                                     int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_add(gc, pcidev, rebind);
+    rc = libxl__device_pci_assignable_add(gc, pci, rebind);
 
     GC_FREE;
     return rc;
 }
 
 
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pcidev,
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci,
                                        int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_remove(gc, pcidev, rebind);
+    rc = libxl__device_pci_assignable_remove(gc, pci, rebind);
 
     GC_FREE;
     return rc;
@@ -920,7 +920,7 @@ int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pcidev,
  * driver. It also initialises a bit-mask of which function numbers are present
  * on that device.
 */
-static int pci_multifunction_check(libxl__gc *gc, libxl_device_pci *pcidev, unsigned int *func_mask)
+static int pci_multifunction_check(libxl__gc *gc, libxl_device_pci *pci, unsigned int *func_mask)
 {
     struct dirent *de;
     DIR *dir;
@@ -940,11 +940,11 @@ static int pci_multifunction_check(libxl__gc *gc, libxl_device_pci *pcidev, unsi
 
         if ( sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4 )
             continue;
-        if ( pcidev->domain != dom )
+        if ( pci->domain != dom )
             continue;
-        if ( pcidev->bus != bus )
+        if ( pci->bus != bus )
             continue;
-        if ( pcidev->dev != dev )
+        if ( pci->dev != dev )
             continue;
 
         path = GCSPRINTF("%s/" PCI_BDF, SYSFS_PCIBACK_DRIVER, dom, bus, dev, func);
@@ -979,7 +979,7 @@ static int pci_ins_check(libxl__gc *gc, uint32_t domid, const char *state, void
 }
 
 static int qemu_pci_add_xenstore(libxl__gc *gc, uint32_t domid,
-                                 libxl_device_pci *pcidev)
+                                 libxl_device_pci *pci)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     int rc = 0;
@@ -991,15 +991,15 @@ static int qemu_pci_add_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/state");
     state = libxl__xs_read(gc, XBT_NULL, path);
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/parameter");
-    if (pcidev->vdevfn) {
+    if (pci->vdevfn) {
         libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF_VDEVFN","PCI_OPTIONS,
-                         pcidev->domain, pcidev->bus, pcidev->dev,
-                         pcidev->func, pcidev->vdevfn, pcidev->msitranslate,
-                         pcidev->power_mgmt);
+                         pci->domain, pci->bus, pci->dev,
+                         pci->func, pci->vdevfn, pci->msitranslate,
+                         pci->power_mgmt);
     } else {
         libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF","PCI_OPTIONS,
-                         pcidev->domain,  pcidev->bus, pcidev->dev,
-                         pcidev->func, pcidev->msitranslate, pcidev->power_mgmt);
+                         pci->domain,  pci->bus, pci->dev,
+                         pci->func, pci->msitranslate, pci->power_mgmt);
     }
 
     libxl__qemu_traditional_cmd(gc, domid, "pci-ins");
@@ -1010,7 +1010,7 @@ static int qemu_pci_add_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/state");
     if ( rc < 0 )
         LOGD(ERROR, domid, "qemu refused to add device: %s", vdevfn);
-    else if ( sscanf(vdevfn, "0x%x", &pcidev->vdevfn) != 1 ) {
+    else if ( sscanf(vdevfn, "0x%x", &pci->vdevfn) != 1 ) {
         LOGD(ERROR, domid, "wrong format for the vdevfn: '%s'", vdevfn);
         rc = -1;
     }
@@ -1054,7 +1054,7 @@ typedef struct pci_add_state {
     libxl__xswait_state xswait;
     libxl__ev_qmp qmp;
     libxl__ev_time timeout;
-    libxl_device_pci *pcidev;
+    libxl_device_pci *pci;
     int pci_domid;
 } pci_add_state;
 
@@ -1072,7 +1072,7 @@ static void pci_add_dm_done(libxl__egc *,
 
 static void do_pci_add(libxl__egc *egc,
                        libxl_domid domid,
-                       libxl_device_pci *pcidev,
+                       libxl_device_pci *pci,
                        pci_add_state *pas)
 {
     STATE_AO_GC(pas->aodev->ao);
@@ -1082,7 +1082,7 @@ static void do_pci_add(libxl__egc *egc,
     /* init pci_add_state */
     libxl__xswait_init(&pas->xswait);
     libxl__ev_qmp_init(&pas->qmp);
-    pas->pcidev = pcidev;
+    pas->pci = pci;
     pas->pci_domid = domid;
     libxl__ev_time_init(&pas->timeout);
 
@@ -1128,7 +1128,7 @@ static void pci_add_qemu_trad_watch_state_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pcidev = pas->pcidev;
+    libxl_device_pci *pci = pas->pci;
 
     rc = check_qemu_running(gc, domid, xswa, rc, state);
     if (rc == ERROR_NOT_READY)
@@ -1136,7 +1136,7 @@ static void pci_add_qemu_trad_watch_state_cb(libxl__egc *egc,
     if (rc)
         goto out;
 
-    rc = qemu_pci_add_xenstore(gc, domid, pcidev);
+    rc = qemu_pci_add_xenstore(gc, domid, pci);
 out:
     pci_add_dm_done(egc, pas, rc); /* must be last */
 }
@@ -1149,7 +1149,7 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pcidev = pas->pcidev;
+    libxl_device_pci *pci = pas->pci;
     libxl__ev_qmp *const qmp = &pas->qmp;
 
     rc = libxl__ev_time_register_rel(ao, &pas->timeout,
@@ -1160,14 +1160,14 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
     libxl__qmp_param_add_string(gc, &args, "driver",
                                 "xen-pci-passthrough");
     QMP_PARAMETERS_SPRINTF(&args, "id", PCI_PT_QDEV_ID,
-                           pcidev->bus, pcidev->dev, pcidev->func);
+                           pci->bus, pci->dev, pci->func);
     QMP_PARAMETERS_SPRINTF(&args, "hostaddr",
-                           "%04x:%02x:%02x.%01x", pcidev->domain,
-                           pcidev->bus, pcidev->dev, pcidev->func);
-    if (pcidev->vdevfn) {
+                           "%04x:%02x:%02x.%01x", pci->domain,
+                           pci->bus, pci->dev, pci->func);
+    if (pci->vdevfn) {
         QMP_PARAMETERS_SPRINTF(&args, "addr", "%x.%x",
-                               PCI_SLOT(pcidev->vdevfn),
-                               PCI_FUNC(pcidev->vdevfn));
+                               PCI_SLOT(pci->vdevfn),
+                               PCI_FUNC(pci->vdevfn));
     }
     /*
      * Version of QEMU prior to the XSA-131 fix did not support
@@ -1179,7 +1179,7 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
      * set the permissive flag if it is true. Users of older QEMU
      * have no reason to set the flag so this is ok.
      */
-    if (pcidev->permissive)
+    if (pci->permissive)
         libxl__qmp_param_add_bool(gc, &args, "permissive", true);
 
     qmp->ao = pas->aodev->ao;
@@ -1230,7 +1230,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
     int dev_slot, dev_func;
 
     /* Convenience aliases */
-    libxl_device_pci *pcidev = pas->pcidev;
+    libxl_device_pci *pci = pas->pci;
 
     if (rc) goto out;
 
@@ -1251,7 +1251,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
      */
 
     asked_id = GCSPRINTF(PCI_PT_QDEV_ID,
-                         pcidev->bus, pcidev->dev, pcidev->func);
+                         pci->bus, pci->dev, pci->func);
 
     for (i = 0; (bus = libxl__json_array_get(response, i)); i++) {
         devices = libxl__json_map_get("devices", bus, JSON_ARRAY);
@@ -1283,7 +1283,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
              }
              dev_func = libxl__json_object_get_integer(o);
 
-             pcidev->vdevfn = PCI_DEVFN(dev_slot, dev_func);
+             pci->vdevfn = PCI_DEVFN(dev_slot, dev_func);
 
              rc = 0;
              goto out;
@@ -1331,7 +1331,7 @@ static void pci_add_dm_done(libxl__egc *egc,
 
     /* Convenience aliases */
     bool starting = pas->starting;
-    libxl_device_pci *pcidev = pas->pcidev;
+    libxl_device_pci *pci = pas->pci;
     bool hvm = libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM;
 
     libxl__ev_qmp_dispose(gc, &pas->qmp);
@@ -1342,8 +1342,8 @@ static void pci_add_dm_done(libxl__egc *egc,
     if (isstubdom)
         starting = false;
 
-    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pcidev->domain,
-                           pcidev->bus, pcidev->dev, pcidev->func);
+    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->domain,
+                           pci->bus, pci->dev, pci->func);
     f = fopen(sysfs_path, "r");
     start = end = flags = size = 0;
     irq = 0;
@@ -1383,8 +1383,8 @@ static void pci_add_dm_done(libxl__egc *egc,
         }
     }
     fclose(f);
-    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pcidev->domain,
-                                pcidev->bus, pcidev->dev, pcidev->func);
+    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
+                                pci->bus, pci->dev, pci->func);
     f = fopen(sysfs_path, "r");
     if (f == NULL) {
         LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
@@ -1411,9 +1411,9 @@ static void pci_add_dm_done(libxl__egc *egc,
     fclose(f);
 
     /* Don't restrict writes to the PCI config space from this VM */
-    if (pcidev->permissive) {
+    if (pci->permissive) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/permissive",
-                             pcidev) < 0 ) {
+                             pci) < 0 ) {
             LOGD(ERROR, domainid, "Setting permissive for device");
             rc = ERROR_FAIL;
             goto out;
@@ -1422,14 +1422,14 @@ static void pci_add_dm_done(libxl__egc *egc,
 
 out_no_irq:
     if (!isstubdom) {
-        if (pcidev->rdm_policy == LIBXL_RDM_RESERVE_POLICY_STRICT) {
+        if (pci->rdm_policy == LIBXL_RDM_RESERVE_POLICY_STRICT) {
             flag &= ~XEN_DOMCTL_DEV_RDM_RELAXED;
-        } else if (pcidev->rdm_policy != LIBXL_RDM_RESERVE_POLICY_RELAXED) {
+        } else if (pci->rdm_policy != LIBXL_RDM_RESERVE_POLICY_RELAXED) {
             LOGED(ERROR, domainid, "unknown rdm check flag.");
             rc = ERROR_FAIL;
             goto out;
         }
-        r = xc_assign_device(ctx->xch, domid, pcidev_encode_bdf(pcidev), flag);
+        r = xc_assign_device(ctx->xch, domid, pci_encode_bdf(pci), flag);
         if (r < 0 && (hvm || errno != ENOSYS)) {
             LOGED(ERROR, domainid, "xc_assign_device failed");
             rc = ERROR_FAIL;
@@ -1438,7 +1438,7 @@ out_no_irq:
     }
 
     if (!starting && !libxl_get_stubdom_id(CTX, domid))
-        rc = libxl__device_pci_add_xenstore(gc, domid, pcidev, starting);
+        rc = libxl__device_pci_add_xenstore(gc, domid, pci, starting);
     else
         rc = 0;
 out:
@@ -1493,7 +1493,7 @@ int libxl__device_pci_setdefault(libxl__gc *gc, uint32_t domid,
 }
 
 int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
-                         libxl_device_pci *pcidev,
+                         libxl_device_pci *pci,
                          const libxl_asyncop_how *ao_how)
 {
     AO_CREATE(ctx, domid, ao_how);
@@ -1504,24 +1504,24 @@ int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
     aodev->action = LIBXL__DEVICE_ACTION_ADD;
     aodev->callback = device_addrm_aocomplete;
     aodev->update_json = true;
-    libxl__device_pci_add(egc, domid, pcidev, false, aodev);
+    libxl__device_pci_add(egc, domid, pci, false, aodev);
     return AO_INPROGRESS;
 }
 
-static int libxl_pcidev_assignable(libxl_ctx *ctx, libxl_device_pci *pcidev)
+static int libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
 {
-    libxl_device_pci *pcidevs;
+    libxl_device_pci *pcis;
     int num, i;
 
-    pcidevs = libxl_device_pci_assignable_list(ctx, &num);
+    pcis = libxl_device_pci_assignable_list(ctx, &num);
     for (i = 0; i < num; i++) {
-        if (pcidevs[i].domain == pcidev->domain &&
-            pcidevs[i].bus == pcidev->bus &&
-            pcidevs[i].dev == pcidev->dev &&
-            pcidevs[i].func == pcidev->func)
+        if (pcis[i].domain == pci->domain &&
+            pcis[i].bus == pci->bus &&
+            pcis[i].dev == pci->dev &&
+            pcis[i].func == pci->func)
             break;
     }
-    free(pcidevs);
+    free(pcis);
     return i != num;
 }
 
@@ -1535,7 +1535,7 @@ static void device_pci_add_done(libxl__egc *egc,
     pci_add_state *, int rc);
 
 void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
-                           libxl_device_pci *pcidev, bool starting,
+                           libxl_device_pci *pci, bool starting,
                            libxl__ao_device *aodev)
 {
     STATE_AO_GC(aodev->ao);
@@ -1545,9 +1545,9 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     int stubdomid = 0;
     pci_add_state *pas;
 
-    /* Store *pcidev to be used by callbacks */
-    aodev->device_config = pcidev;
-    aodev->device_type = &libxl__pcidev_devtype;
+    /* Store *pci to be used by callbacks */
+    aodev->device_config = pci;
+    aodev->device_type = &libxl__pci_devtype;
 
     GCNEW(pas);
     pas->aodev = aodev;
@@ -1556,29 +1556,29 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     pas->callback = device_pci_add_stubdom_done;
 
     if (libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {
-        rc = xc_test_assign_device(ctx->xch, domid, pcidev_encode_bdf(pcidev));
+        rc = xc_test_assign_device(ctx->xch, domid, pci_encode_bdf(pci));
         if (rc) {
             LOGD(ERROR, domid,
                  "PCI device %04x:%02x:%02x.%u %s?",
-                 pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func,
+                 pci->domain, pci->bus, pci->dev, pci->func,
                  errno == EOPNOTSUPP ? "cannot be assigned - no IOMMU"
                  : "already assigned to a different guest");
             goto out;
         }
     }
 
-    rc = libxl__device_pci_setdefault(gc, domid, pcidev, !starting);
+    rc = libxl__device_pci_setdefault(gc, domid, pci, !starting);
     if (rc) goto out;
 
-    if (pcidev->seize && !pciback_dev_is_assigned(gc, pcidev)) {
-        rc = libxl__device_pci_assignable_add(gc, pcidev, 1);
+    if (pci->seize && !pciback_dev_is_assigned(gc, pci)) {
+        rc = libxl__device_pci_assignable_add(gc, pci, 1);
         if ( rc )
             goto out;
     }
 
-    if (!libxl_pcidev_assignable(ctx, pcidev)) {
+    if (!libxl_pci_assignable(ctx, pci)) {
         LOGD(ERROR, domid, "PCI device %x:%x:%x.%x is not assignable",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         rc = ERROR_FAIL;
         goto out;
     }
@@ -1589,25 +1589,25 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
              "cannot determine if device is assigned, refusing to continue");
         goto out;
     }
-    if ( is_pcidev_in_array(assigned, num_assigned, pcidev->domain,
-                     pcidev->bus, pcidev->dev, pcidev->func) ) {
+    if ( is_pci_in_array(assigned, num_assigned, pci->domain,
+                         pci->bus, pci->dev, pci->func) ) {
         LOGD(ERROR, domid, "PCI device already attached to a domain");
         rc = ERROR_FAIL;
         goto out;
     }
 
-    libxl__device_pci_reset(gc, pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+    libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
     if (stubdomid != 0) {
-        libxl_device_pci *pcidev_s;
+        libxl_device_pci *pci_s;
 
-        GCNEW(pcidev_s);
-        libxl_device_pci_init(pcidev_s);
-        libxl_device_pci_copy(CTX, pcidev_s, pcidev);
+        GCNEW(pci_s);
+        libxl_device_pci_init(pci_s);
+        libxl_device_pci_copy(CTX, pci_s, pci);
         pas->callback = device_pci_add_stubdom_wait;
 
-        do_pci_add(egc, stubdomid, pcidev_s, pas); /* must be last */
+        do_pci_add(egc, stubdomid, pci_s, pas); /* must be last */
         return;
     }
 
@@ -1664,42 +1664,42 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
     /* Convenience aliases */
     libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pcidev = aodev->device_config;
+    libxl_device_pci *pci = aodev->device_config;
 
     if (rc) goto out;
 
-    orig_vdev = pcidev->vdevfn & ~7U;
+    orig_vdev = pci->vdevfn & ~7U;
 
-    if ( pcidev->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
-        if ( !(pcidev->vdevfn >> 3) ) {
+    if ( pci->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
+        if ( !(pci->vdevfn >> 3) ) {
             LOGD(ERROR, domid, "Must specify a v-slot for multi-function devices");
             rc = ERROR_INVAL;
             goto out;
         }
-        if ( pci_multifunction_check(gc, pcidev, &pfunc_mask) ) {
+        if ( pci_multifunction_check(gc, pci, &pfunc_mask) ) {
             rc = ERROR_FAIL;
             goto out;
         }
-        pcidev->vfunc_mask &= pfunc_mask;
+        pci->vfunc_mask &= pfunc_mask;
         /* so now vfunc_mask == pfunc_mask */
     }else{
-        pfunc_mask = (1 << pcidev->func);
+        pfunc_mask = (1 << pci->func);
     }
 
-    for(rc = 0, i = 7; i >= 0; --i) {
+    for (rc = 0, i = 7; i >= 0; --i) {
         if ( (1 << i) & pfunc_mask ) {
-            if ( pcidev->vfunc_mask == pfunc_mask ) {
-                pcidev->func = i;
-                pcidev->vdevfn = orig_vdev | i;
-            }else{
+            if ( pci->vfunc_mask == pfunc_mask ) {
+                pci->func = i;
+                pci->vdevfn = orig_vdev | i;
+            } else {
                 /* if not passing through multiple devices in a block make
                  * sure that virtual function number 0 is always used otherwise
                  * guest won't see the device
                  */
-                pcidev->vdevfn = orig_vdev;
+                pci->vdevfn = orig_vdev;
             }
             pas->callback = device_pci_add_done;
-            do_pci_add(egc, domid, pcidev, pas); /* must be last */
+            do_pci_add(egc, domid, pci, pas); /* must be last */
             return;
         }
     }
@@ -1715,13 +1715,13 @@ static void device_pci_add_done(libxl__egc *egc,
     EGC_GC;
     libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pcidev = aodev->device_config;
+    libxl_device_pci *pci = aodev->device_config;
 
     if (rc) {
         LOGD(ERROR, domid,
              "libxl__device_pci_add  failed for "
              "PCI device %x:%x:%x.%x (rc %d)",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func,
+             pci->domain, pci->bus, pci->dev, pci->func,
              rc);
     }
     aodev->rc = rc;
@@ -1733,16 +1733,16 @@ typedef struct {
     libxl__ao_device *outer_aodev;
     libxl_domain_config *d_config;
     libxl_domid domid;
-} add_pcidevs_state;
+} add_pcis_state;
 
-static void add_pcidevs_done(libxl__egc *, libxl__multidev *, int rc);
+static void add_pcis_done(libxl__egc *, libxl__multidev *, int rc);
 
-static void libxl__add_pcidevs(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
-                               libxl_domain_config *d_config,
-                               libxl__multidev *multidev)
+static void libxl__add_pcis(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
+                            libxl_domain_config *d_config,
+                            libxl__multidev *multidev)
 {
     AO_GC;
-    add_pcidevs_state *apds;
+    add_pcis_state *apds;
     int i;
 
     /* We need to start a new multidev in order to be able to execute
@@ -1752,23 +1752,23 @@ static void libxl__add_pcidevs(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
     apds->outer_aodev = libxl__multidev_prepare(multidev);
     apds->d_config = d_config;
     apds->domid = domid;
-    apds->multidev.callback = add_pcidevs_done;
+    apds->multidev.callback = add_pcis_done;
     libxl__multidev_begin(ao, &apds->multidev);
 
-    for (i = 0; i < d_config->num_pcidevs; i++) {
+    for (i = 0; i < d_config->num_pcis; i++) {
         libxl__ao_device *aodev = libxl__multidev_prepare(&apds->multidev);
-        libxl__device_pci_add(egc, domid, &d_config->pcidevs[i],
+        libxl__device_pci_add(egc, domid, &d_config->pcis[i],
                               true, aodev);
     }
 
     libxl__multidev_prepared(egc, &apds->multidev, 0);
 }
 
-static void add_pcidevs_done(libxl__egc *egc, libxl__multidev *multidev,
+static void add_pcis_done(libxl__egc *egc, libxl__multidev *multidev,
                              int rc)
 {
     EGC_GC;
-    add_pcidevs_state *apds = CONTAINER_OF(multidev, *apds, multidev);
+    add_pcis_state *apds = CONTAINER_OF(multidev, *apds, multidev);
 
     /* Convenience aliases */
     libxl_domain_config *d_config = apds->d_config;
@@ -1777,9 +1777,9 @@ static void add_pcidevs_done(libxl__egc *egc, libxl__multidev *multidev,
 
     if (rc) goto out;
 
-    if (d_config->num_pcidevs > 0 && !libxl_get_stubdom_id(CTX, domid)) {
-        rc = libxl__create_pci_backend(gc, domid, d_config->pcidevs,
-            d_config->num_pcidevs);
+    if (d_config->num_pcis > 0 && !libxl_get_stubdom_id(CTX, domid)) {
+        rc = libxl__create_pci_backend(gc, domid, d_config->pcis,
+                                       d_config->num_pcis);
         if (rc < 0) {
             LOGD(ERROR, domid, "libxl_create_pci_backend failed: %d", rc);
             goto out;
@@ -1792,7 +1792,7 @@ out:
 }
 
 static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
-                                    libxl_device_pci *pcidev, int force)
+                                    libxl_device_pci *pci, int force)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     char *state;
@@ -1804,12 +1804,12 @@ static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/state");
     state = libxl__xs_read(gc, XBT_NULL, path);
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/parameter");
-    libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF, pcidev->domain,
-                     pcidev->bus, pcidev->dev, pcidev->func);
+    libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF, pci->domain,
+                     pci->bus, pci->dev, pci->func);
 
     /* Remove all functions at once atomically by only signalling
      * device-model for function 0 */
-    if ( !force && (pcidev->vdevfn & 0x7) == 0 ) {
+    if ( !force && (pci->vdevfn & 0x7) == 0 ) {
         libxl__qemu_traditional_cmd(gc, domid, "pci-rem");
         if (libxl__wait_for_device_model_deprecated(gc, domid, "pci-removed",
                                          NULL, NULL, NULL) < 0) {
@@ -1830,7 +1830,7 @@ static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
 typedef struct pci_remove_state {
     libxl__ao_device *aodev;
     libxl_domid domid;
-    libxl_device_pci *pcidev;
+    libxl_device_pci *pci;
     bool force;
     bool hvm;
     unsigned int orig_vdev;
@@ -1844,7 +1844,7 @@ typedef struct pci_remove_state {
 } pci_remove_state;
 
 static void libxl__device_pci_remove_common(libxl__egc *egc,
-    uint32_t domid, libxl_device_pci *pcidev, bool force,
+    uint32_t domid, libxl_device_pci *pci, bool force,
     libxl__ao_device *aodev);
 static void device_pci_remove_common_next(libxl__egc *egc,
     pci_remove_state *prs, int rc);
@@ -1869,7 +1869,7 @@ static void pci_remove_done(libxl__egc *egc,
     pci_remove_state *prs, int rc);
 
 static void do_pci_remove(libxl__egc *egc, uint32_t domid,
-                          libxl_device_pci *pcidev, int force,
+                          libxl_device_pci *pci, int force,
                           pci_remove_state *prs)
 {
     STATE_AO_GC(prs->aodev->ao);
@@ -1887,8 +1887,8 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
     libxl__ptr_add(gc, assigned);
 
     rc = ERROR_INVAL;
-    if ( !is_pcidev_in_array(assigned, num, pcidev->domain,
-                      pcidev->bus, pcidev->dev, pcidev->func) ) {
+    if ( !is_pci_in_array(assigned, num, pci->domain,
+                          pci->bus, pci->dev, pci->func) ) {
         LOGD(ERROR, domainid, "PCI device not attached to this domain");
         goto out_fail;
     }
@@ -1917,8 +1917,8 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
     } else {
         assert(type == LIBXL_DOMAIN_TYPE_PV);
 
-        char *sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pcidev->domain,
-                                     pcidev->bus, pcidev->dev, pcidev->func);
+        char *sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->domain,
+                                     pci->bus, pci->dev, pci->func);
         FILE *f = fopen(sysfs_path, "r");
         unsigned int start = 0, end = 0, flags = 0, size = 0;
         int irq = 0;
@@ -1953,8 +1953,8 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
         }
         fclose(f);
 skip1:
-        sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pcidev->domain,
-                               pcidev->bus, pcidev->dev, pcidev->func);
+        sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
+                               pci->bus, pci->dev, pci->func);
         f = fopen(sysfs_path, "r");
         if (f == NULL) {
             LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
@@ -1988,7 +1988,7 @@ static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = prs->domid;
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
 
     rc = check_qemu_running(gc, domid, xswa, rc, state);
     if (rc == ERROR_NOT_READY)
@@ -1996,7 +1996,7 @@ static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
     if (rc)
         goto out;
 
-    rc = qemu_pci_remove_xenstore(gc, domid, pcidev, prs->force);
+    rc = qemu_pci_remove_xenstore(gc, domid, pci, prs->force);
 
 out:
     pci_remove_detatched(egc, prs, rc);
@@ -2010,7 +2010,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     int rc;
 
     /* Convenience aliases */
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
 
     rc = libxl__ev_time_register_rel(ao, &prs->timeout,
                                      pci_remove_timeout,
@@ -2018,7 +2018,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     if (rc) goto out;
 
     QMP_PARAMETERS_SPRINTF(&args, "id", PCI_PT_QDEV_ID,
-                           pcidev->bus, pcidev->dev, pcidev->func);
+                           pci->bus, pci->dev, pci->func);
     prs->qmp.callback = pci_remove_qmp_device_del_cb;
     rc = libxl__ev_qmp_send(egc, &prs->qmp, "device_del", args);
     if (rc) goto out;
@@ -2080,14 +2080,14 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl__ao *const ao = prs->aodev->ao;
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
 
     if (rc) goto out;
 
     libxl__ev_qmp_dispose(gc, qmp);
 
     asked_id = GCSPRINTF(PCI_PT_QDEV_ID,
-                         pcidev->bus, pcidev->dev, pcidev->func);
+                         pci->bus, pci->dev, pci->func);
 
     /* query-pci response:
      * [{ 'devices': [ 'qdev_id': 'str', ...  ], ... }]
@@ -2135,10 +2135,10 @@ static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
     pci_remove_state *prs = CONTAINER_OF(ev, *prs, timeout);
 
     /* Convenience aliases */
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
 
     LOGD(WARN, prs->domid, "timed out waiting for DM to remove "
-         PCI_PT_QDEV_ID, pcidev->bus, pcidev->dev, pcidev->func);
+         PCI_PT_QDEV_ID, pci->bus, pci->dev, pci->func);
 
     /* If we timed out, we might still want to keep destroying the device
      * (when force==true), so let the next function decide what to do on
@@ -2156,7 +2156,7 @@ static void pci_remove_detatched(libxl__egc *egc,
     bool isstubdom;
 
     /* Convenience aliases */
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
     libxl_domid domid = prs->domid;
 
     /* Cleaning QMP states ASAP */
@@ -2170,30 +2170,30 @@ static void pci_remove_detatched(libxl__egc *egc,
     isstubdom = libxl_is_stubdom(CTX, domid, &domainid);
 
     /* don't do multiple resets while some functions are still passed through */
-    if ( (pcidev->vdevfn & 0x7) == 0 ) {
-        libxl__device_pci_reset(gc, pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+    if ((pci->vdevfn & 0x7) == 0) {
+        libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
     }
 
     if (!isstubdom) {
-        rc = xc_deassign_device(CTX->xch, domid, pcidev_encode_bdf(pcidev));
+        rc = xc_deassign_device(CTX->xch, domid, pci_encode_bdf(pci));
         if (rc < 0 && (prs->hvm || errno != ENOSYS))
             LOGED(ERROR, domainid, "xc_deassign_device failed");
     }
 
     stubdomid = libxl_get_stubdom_id(CTX, domid);
     if (stubdomid != 0) {
-        libxl_device_pci *pcidev_s;
+        libxl_device_pci *pci_s;
         libxl__ao_device *const stubdom_aodev = &prs->stubdom_aodev;
 
-        GCNEW(pcidev_s);
-        libxl_device_pci_init(pcidev_s);
-        libxl_device_pci_copy(CTX, pcidev_s, pcidev);
+        GCNEW(pci_s);
+        libxl_device_pci_init(pci_s);
+        libxl_device_pci_copy(CTX, pci_s, pci);
 
         libxl__prepare_ao_device(ao, stubdom_aodev);
         stubdom_aodev->action = LIBXL__DEVICE_ACTION_REMOVE;
         stubdom_aodev->callback = pci_remove_stubdom_done;
         stubdom_aodev->update_json = prs->aodev->update_json;
-        libxl__device_pci_remove_common(egc, stubdomid, pcidev_s,
+        libxl__device_pci_remove_common(egc, stubdomid, pci_s,
                                         prs->force, stubdom_aodev);
         return;
     }
@@ -2219,14 +2219,14 @@ static void pci_remove_done(libxl__egc *egc,
 
     if (rc) goto out;
 
-    libxl__device_pci_remove_xenstore(gc, prs->domid, prs->pcidev);
+    libxl__device_pci_remove_xenstore(gc, prs->domid, prs->pci);
 out:
     device_pci_remove_common_next(egc, prs, rc);
 }
 
 static void libxl__device_pci_remove_common(libxl__egc *egc,
                                             uint32_t domid,
-                                            libxl_device_pci *pcidev,
+                                            libxl_device_pci *pci,
                                             bool force,
                                             libxl__ao_device *aodev)
 {
@@ -2237,7 +2237,7 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
     GCNEW(prs);
     prs->aodev = aodev;
     prs->domid = domid;
-    prs->pcidev = pcidev;
+    prs->pci = pci;
     prs->force = force;
     libxl__xswait_init(&prs->xswait);
     libxl__ev_qmp_init(&prs->qmp);
@@ -2247,16 +2247,16 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
     libxl__ev_time_init(&prs->timeout);
     libxl__ev_time_init(&prs->retry_timer);
 
-    prs->orig_vdev = pcidev->vdevfn & ~7U;
+    prs->orig_vdev = pci->vdevfn & ~7U;
 
-    if ( pcidev->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
-        if ( pci_multifunction_check(gc, pcidev, &prs->pfunc_mask) ) {
+    if ( pci->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
+        if ( pci_multifunction_check(gc, pci, &prs->pfunc_mask) ) {
             rc = ERROR_FAIL;
             goto out;
         }
-        pcidev->vfunc_mask &= prs->pfunc_mask;
-    }else{
-        prs->pfunc_mask = (1 << pcidev->func);
+        pci->vfunc_mask &= prs->pfunc_mask;
+    } else {
+        prs->pfunc_mask = (1 << pci->func);
     }
 
     rc = 0;
@@ -2273,7 +2273,7 @@ static void device_pci_remove_common_next(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = prs->domid;
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
     libxl__ao_device *const aodev = prs->aodev;
     const unsigned int pfunc_mask = prs->pfunc_mask;
     const unsigned int orig_vdev = prs->orig_vdev;
@@ -2284,13 +2284,13 @@ static void device_pci_remove_common_next(libxl__egc *egc,
         const int i = prs->next_func;
         prs->next_func--;
         if ( (1 << i) & pfunc_mask ) {
-            if ( pcidev->vfunc_mask == pfunc_mask ) {
-                pcidev->func = i;
-                pcidev->vdevfn = orig_vdev | i;
-            }else{
-                pcidev->vdevfn = orig_vdev;
+            if ( pci->vfunc_mask == pfunc_mask ) {
+                pci->func = i;
+                pci->vdevfn = orig_vdev | i;
+            } else {
+                pci->vdevfn = orig_vdev;
             }
-            do_pci_remove(egc, domid, pcidev, prs->force, prs);
+            do_pci_remove(egc, domid, pci, prs->force, prs);
             return;
         }
     }
@@ -2306,7 +2306,7 @@ out:
 }
 
 int libxl_device_pci_remove(libxl_ctx *ctx, uint32_t domid,
-                            libxl_device_pci *pcidev,
+                            libxl_device_pci *pci,
                             const libxl_asyncop_how *ao_how)
 
 {
@@ -2318,12 +2318,12 @@ int libxl_device_pci_remove(libxl_ctx *ctx, uint32_t domid,
     aodev->action = LIBXL__DEVICE_ACTION_REMOVE;
     aodev->callback = device_addrm_aocomplete;
     aodev->update_json = true;
-    libxl__device_pci_remove_common(egc, domid, pcidev, false, aodev);
+    libxl__device_pci_remove_common(egc, domid, pci, false, aodev);
     return AO_INPROGRESS;
 }
 
 int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
-                             libxl_device_pci *pcidev,
+                             libxl_device_pci *pci,
                              const libxl_asyncop_how *ao_how)
 {
     AO_CREATE(ctx, domid, ao_how);
@@ -2334,7 +2334,7 @@ int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
     aodev->action = LIBXL__DEVICE_ACTION_REMOVE;
     aodev->callback = device_addrm_aocomplete;
     aodev->update_json = true;
-    libxl__device_pci_remove_common(egc, domid, pcidev, true, aodev);
+    libxl__device_pci_remove_common(egc, domid, pci, true, aodev);
     return AO_INPROGRESS;
 }
 
@@ -2353,7 +2353,7 @@ static int libxl__device_pci_from_xs_be(libxl__gc *gc,
     if (s)
         vdevfn = strtol(s, (char **) NULL, 16);
 
-    pcidev_struct_fill(pci, domain, bus, dev, func, vdevfn);
+    pci_struct_fill(pci, domain, bus, dev, func, vdevfn);
 
     s = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/opts-%d", be_path, nr));
     if (s) {
@@ -2398,7 +2398,7 @@ libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid, int *num
     GC_INIT(ctx);
     char *be_path;
     unsigned int n, i;
-    libxl_device_pci *pcidevs = NULL;
+    libxl_device_pci *pcis = NULL;
 
     *num = 0;
 
@@ -2407,28 +2407,28 @@ libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid, int *num
     if (libxl__device_pci_get_num(gc, be_path, &n))
         goto out;
 
-    pcidevs = calloc(n, sizeof(libxl_device_pci));
+    pcis = calloc(n, sizeof(libxl_device_pci));
 
     for (i = 0; i < n; i++)
-        libxl__device_pci_from_xs_be(gc, be_path, i, pcidevs + i);
+        libxl__device_pci_from_xs_be(gc, be_path, i, pcis + i);
 
     *num = n;
 out:
     GC_FREE;
-    return pcidevs;
+    return pcis;
 }
 
 void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
                                    libxl__multidev *multidev)
 {
     STATE_AO_GC(multidev->ao);
-    libxl_device_pci *pcidevs;
+    libxl_device_pci *pcis;
     int num, i;
 
-    pcidevs = libxl_device_pci_list(CTX, domid, &num);
-    if ( pcidevs == NULL )
+    pcis = libxl_device_pci_list(CTX, domid, &num);
+    if ( pcis == NULL )
         return;
-    libxl__ptr_add(gc, pcidevs);
+    libxl__ptr_add(gc, pcis);
 
     for (i = 0; i < num; i++) {
         /* Force remove on shutdown since, on HVM, qemu will not always
@@ -2436,7 +2436,7 @@ void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
          * devices by the time we even get here!
          */
         libxl__ao_device *aodev = libxl__multidev_prepare(multidev);
-        libxl__device_pci_remove_common(egc, domid, pcidevs + i, true,
+        libxl__device_pci_remove_common(egc, domid, pcis + i, true,
                                         aodev);
     }
 }
@@ -2449,13 +2449,13 @@ int libxl__grant_vga_iomem_permission(libxl__gc *gc, const uint32_t domid,
     if (!libxl_defbool_val(d_config->b_info.u.hvm.gfx_passthru))
         return 0;
 
-    for (i = 0 ; i < d_config->num_pcidevs ; i++) {
+    for (i = 0 ; i < d_config->num_pcis ; i++) {
         uint64_t vga_iomem_start = 0xa0000 >> XC_PAGE_SHIFT;
         uint32_t stubdom_domid;
-        libxl_device_pci *pcidev = &d_config->pcidevs[i];
+        libxl_device_pci *pci = &d_config->pcis[i];
         unsigned long pci_device_class;
 
-        if (sysfs_dev_get_class(gc, pcidev, &pci_device_class))
+        if (sysfs_dev_get_class(gc, pci, &pci_device_class))
             continue;
         if (pci_device_class != 0x030000) /* VGA class */
             continue;
@@ -2494,7 +2494,7 @@ static int libxl_device_pci_compare(const libxl_device_pci *d1,
 
 #define libxl__device_pci_update_devid NULL
 
-DEFINE_DEVICE_TYPE_STRUCT_X(pcidev, pci, PCI,
+DEFINE_DEVICE_TYPE_STRUCT(pci, PCI,
     .get_num = libxl__device_pci_get_num,
     .from_xenstore = libxl__device_pci_from_xs_be,
 );
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 9d3f05f39978..20f8dd7cfa5d 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -940,7 +940,7 @@ libxl_domain_config = Struct("domain_config", [
 
     ("disks", Array(libxl_device_disk, "num_disks")),
     ("nics", Array(libxl_device_nic, "num_nics")),
-    ("pcidevs", Array(libxl_device_pci, "num_pcidevs")),
+    ("pcis", Array(libxl_device_pci, "num_pcis")),
     ("rdms", Array(libxl_device_rdm, "num_rdms")),
     ("dtdevs", Array(libxl_device_dtdev, "num_dtdevs")),
     ("vfbs", Array(libxl_device_vfb, "num_vfbs")),
diff --git a/tools/libs/util/libxlu_pci.c b/tools/libs/util/libxlu_pci.c
index 12fc0b3a7fdc..1d38fffce357 100644
--- a/tools/libs/util/libxlu_pci.c
+++ b/tools/libs/util/libxlu_pci.c
@@ -23,15 +23,15 @@ static int hex_convert(const char *str, unsigned int *val, unsigned int mask)
     return 0;
 }
 
-static int pcidev_struct_fill(libxl_device_pci *pcidev, unsigned int domain,
-                               unsigned int bus, unsigned int dev,
-                               unsigned int func, unsigned int vdevfn)
+static int pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
+                           unsigned int bus, unsigned int dev,
+                           unsigned int func, unsigned int vdevfn)
 {
-    pcidev->domain = domain;
-    pcidev->bus = bus;
-    pcidev->dev = dev;
-    pcidev->func = func;
-    pcidev->vdevfn = vdevfn;
+    pci->domain = domain;
+    pci->bus = bus;
+    pci->dev = dev;
+    pci->func = func;
+    pci->vdevfn = vdevfn;
     return 0;
 }
 
@@ -47,7 +47,7 @@ static int pcidev_struct_fill(libxl_device_pci *pcidev, unsigned int domain,
 #define STATE_RDM_STRATEGY      10
 #define STATE_RESERVE_POLICY    11
 #define INVALID         0xffffffff
-int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str)
+int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pci, const char *str)
 {
     unsigned state = STATE_DOMAIN;
     unsigned dom = INVALID, bus = INVALID, dev = INVALID, func = INVALID, vslot = 0;
@@ -110,11 +110,11 @@ int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str
                 }
                 *ptr = '\0';
                 if ( !strcmp(tok, "*") ) {
-                    pcidev->vfunc_mask = LIBXL_PCI_FUNC_ALL;
+                    pci->vfunc_mask = LIBXL_PCI_FUNC_ALL;
                 }else{
                     if ( hex_convert(tok, &func, 0x7) )
                         goto parse_error;
-                    pcidev->vfunc_mask = (1 << 0);
+                    pci->vfunc_mask = (1 << 0);
                 }
                 tok = ptr + 1;
             }
@@ -141,18 +141,18 @@ int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str
                 state = (*ptr == ',') ? STATE_OPTIONS_K : STATE_TERMINAL;
                 *ptr = '\0';
                 if ( !strcmp(optkey, "msitranslate") ) {
-                    pcidev->msitranslate = atoi(tok);
+                    pci->msitranslate = atoi(tok);
                 }else if ( !strcmp(optkey, "power_mgmt") ) {
-                    pcidev->power_mgmt = atoi(tok);
+                    pci->power_mgmt = atoi(tok);
                 }else if ( !strcmp(optkey, "permissive") ) {
-                    pcidev->permissive = atoi(tok);
+                    pci->permissive = atoi(tok);
                 }else if ( !strcmp(optkey, "seize") ) {
-                    pcidev->seize = atoi(tok);
+                    pci->seize = atoi(tok);
                 } else if (!strcmp(optkey, "rdm_policy")) {
                     if (!strcmp(tok, "strict")) {
-                        pcidev->rdm_policy = LIBXL_RDM_RESERVE_POLICY_STRICT;
+                        pci->rdm_policy = LIBXL_RDM_RESERVE_POLICY_STRICT;
                     } else if (!strcmp(tok, "relaxed")) {
-                        pcidev->rdm_policy = LIBXL_RDM_RESERVE_POLICY_RELAXED;
+                        pci->rdm_policy = LIBXL_RDM_RESERVE_POLICY_RELAXED;
                     } else {
                         XLU__PCI_ERR(cfg, "%s is not an valid PCI RDM property"
                                           " policy: 'strict' or 'relaxed'.",
@@ -175,7 +175,7 @@ int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str
     assert(dom != INVALID && bus != INVALID && dev != INVALID && func != INVALID);
 
     /* Just a pretty way to fill in the values */
-    pcidev_struct_fill(pcidev, dom, bus, dev, func, vslot << 3);
+    pci_struct_fill(pci, dom, bus, dev, func, vslot << 3);
 
     free(buf2);
 
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index cae8eb679c5a..0765780d9f0a 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1470,24 +1470,24 @@ void parse_config_data(const char *config_source,
     }
 
     if (!xlu_cfg_get_list (config, "pci", &pcis, 0, 0)) {
-        d_config->num_pcidevs = 0;
-        d_config->pcidevs = NULL;
+        d_config->num_pcis = 0;
+        d_config->pcis = NULL;
         for(i = 0; (buf = xlu_cfg_get_listitem (pcis, i)) != NULL; i++) {
-            libxl_device_pci *pcidev;
+            libxl_device_pci *pci;
 
-            pcidev = ARRAY_EXTEND_INIT_NODEVID(d_config->pcidevs,
-                                               d_config->num_pcidevs,
-                                               libxl_device_pci_init);
-            pcidev->msitranslate = pci_msitranslate;
-            pcidev->power_mgmt = pci_power_mgmt;
-            pcidev->permissive = pci_permissive;
-            pcidev->seize = pci_seize;
+            pci = ARRAY_EXTEND_INIT_NODEVID(d_config->pcis,
+                                            d_config->num_pcis,
+                                            libxl_device_pci_init);
+            pci->msitranslate = pci_msitranslate;
+            pci->power_mgmt = pci_power_mgmt;
+            pci->permissive = pci_permissive;
+            pci->seize = pci_seize;
             /*
              * Like other pci option, the per-device policy always follows
              * the global policy by default.
              */
-            pcidev->rdm_policy = b_info->u.hvm.rdm.policy;
-            e = xlu_pci_parse_bdf(config, pcidev, buf);
+            pci->rdm_policy = b_info->u.hvm.rdm.policy;
+            e = xlu_pci_parse_bdf(config, pci, buf);
             if (e) {
                 fprintf(stderr,
                         "unable to parse PCI BDF `%s' for passthrough\n",
@@ -1495,7 +1495,7 @@ void parse_config_data(const char *config_source,
                 exit(-e);
             }
         }
-        if (d_config->num_pcidevs && c_info->type == LIBXL_DOMAIN_TYPE_PV)
+        if (d_config->num_pcis && c_info->type == LIBXL_DOMAIN_TYPE_PV)
             libxl_defbool_set(&b_info->u.pv.e820_host, true);
     }
 
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 58345bdae213..34fcf5a4fadf 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -24,20 +24,20 @@
 
 static void pcilist(uint32_t domid)
 {
-    libxl_device_pci *pcidevs;
+    libxl_device_pci *pcis;
     int num, i;
 
-    pcidevs = libxl_device_pci_list(ctx, domid, &num);
-    if (pcidevs == NULL)
+    pcis = libxl_device_pci_list(ctx, domid, &num);
+    if (pcis == NULL)
         return;
     printf("Vdev Device\n");
     for (i = 0; i < num; i++) {
         printf("%02x.%01x %04x:%02x:%02x.%01x\n",
-               (pcidevs[i].vdevfn >> 3) & 0x1f, pcidevs[i].vdevfn & 0x7,
-               pcidevs[i].domain, pcidevs[i].bus, pcidevs[i].dev, pcidevs[i].func);
-        libxl_device_pci_dispose(&pcidevs[i]);
+               (pcis[i].vdevfn >> 3) & 0x1f, pcis[i].vdevfn & 0x7,
+               pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
+        libxl_device_pci_dispose(&pcis[i]);
     }
-    free(pcidevs);
+    free(pcis);
 }
 
 int main_pcilist(int argc, char **argv)
@@ -57,28 +57,28 @@ int main_pcilist(int argc, char **argv)
 
 static int pcidetach(uint32_t domid, const char *bdf, int force)
 {
-    libxl_device_pci pcidev;
+    libxl_device_pci pci;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pcidev);
+    libxl_device_pci_init(&pci);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_inig"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcidev, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
         fprintf(stderr, "pci-detach: malformed BDF specification \"%s\"\n", bdf);
         exit(2);
     }
     if (force) {
-        if (libxl_device_pci_destroy(ctx, domid, &pcidev, 0))
+        if (libxl_device_pci_destroy(ctx, domid, &pci, 0))
             r = 1;
     } else {
-        if (libxl_device_pci_remove(ctx, domid, &pcidev, 0))
+        if (libxl_device_pci_remove(ctx, domid, &pci, 0))
             r = 1;
     }
 
-    libxl_device_pci_dispose(&pcidev);
+    libxl_device_pci_dispose(&pci);
     xlu_cfg_destroy(config);
 
     return r;
@@ -108,24 +108,24 @@ int main_pcidetach(int argc, char **argv)
 
 static int pciattach(uint32_t domid, const char *bdf, const char *vs)
 {
-    libxl_device_pci pcidev;
+    libxl_device_pci pci;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pcidev);
+    libxl_device_pci_init(&pci);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_inig"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcidev, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
         fprintf(stderr, "pci-attach: malformed BDF specification \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_add(ctx, domid, &pcidev, 0))
+    if (libxl_device_pci_add(ctx, domid, &pci, 0))
         r = 1;
 
-    libxl_device_pci_dispose(&pcidev);
+    libxl_device_pci_dispose(&pci);
     xlu_cfg_destroy(config);
 
     return r;
@@ -155,19 +155,19 @@ int main_pciattach(int argc, char **argv)
 
 static void pciassignable_list(void)
 {
-    libxl_device_pci *pcidevs;
+    libxl_device_pci *pcis;
     int num, i;
 
-    pcidevs = libxl_device_pci_assignable_list(ctx, &num);
+    pcis = libxl_device_pci_assignable_list(ctx, &num);
 
-    if ( pcidevs == NULL )
+    if ( pcis == NULL )
         return;
     for (i = 0; i < num; i++) {
         printf("%04x:%02x:%02x.%01x\n",
-               pcidevs[i].domain, pcidevs[i].bus, pcidevs[i].dev, pcidevs[i].func);
-        libxl_device_pci_dispose(&pcidevs[i]);
+               pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
+        libxl_device_pci_dispose(&pcis[i]);
     }
-    free(pcidevs);
+    free(pcis);
 }
 
 int main_pciassignable_list(int argc, char **argv)
@@ -184,24 +184,24 @@ int main_pciassignable_list(int argc, char **argv)
 
 static int pciassignable_add(const char *bdf, int rebind)
 {
-    libxl_device_pci pcidev;
+    libxl_device_pci pci;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pcidev);
+    libxl_device_pci_init(&pci);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcidev, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
         fprintf(stderr, "pci-assignable-add: malformed BDF specification \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_add(ctx, &pcidev, rebind))
+    if (libxl_device_pci_assignable_add(ctx, &pci, rebind))
         r = 1;
 
-    libxl_device_pci_dispose(&pcidev);
+    libxl_device_pci_dispose(&pci);
     xlu_cfg_destroy(config);
 
     return r;
@@ -226,24 +226,24 @@ int main_pciassignable_add(int argc, char **argv)
 
 static int pciassignable_remove(const char *bdf, int rebind)
 {
-    libxl_device_pci pcidev;
+    libxl_device_pci pci;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pcidev);
+    libxl_device_pci_init(&pci);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcidev, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
         fprintf(stderr, "pci-assignable-remove: malformed BDF specification \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_remove(ctx, &pcidev, rebind))
+    if (libxl_device_pci_assignable_remove(ctx, &pci, rebind))
         r = 1;
 
-    libxl_device_pci_dispose(&pcidev);
+    libxl_device_pci_dispose(&pci);
     xlu_cfg_destroy(config);
 
     return r;
diff --git a/tools/xl/xl_sxp.c b/tools/xl/xl_sxp.c
index 359a0015709e..b03e348ffb9a 100644
--- a/tools/xl/xl_sxp.c
+++ b/tools/xl/xl_sxp.c
@@ -190,16 +190,16 @@ void printf_info_sexp(int domid, libxl_domain_config *d_config, FILE *fh)
         fprintf(fh, "\t)\n");
     }
 
-    for (i = 0; i < d_config->num_pcidevs; i++) {
+    for (i = 0; i < d_config->num_pcis; i++) {
         fprintf(fh, "\t(device\n");
         fprintf(fh, "\t\t(pci\n");
         fprintf(fh, "\t\t\t(pci dev %04x:%02x:%02x.%01x@%02x)\n",
-               d_config->pcidevs[i].domain, d_config->pcidevs[i].bus,
-               d_config->pcidevs[i].dev, d_config->pcidevs[i].func,
-               d_config->pcidevs[i].vdevfn);
+               d_config->pcis[i].domain, d_config->pcis[i].bus,
+               d_config->pcis[i].dev, d_config->pcis[i].func,
+               d_config->pcis[i].vdevfn);
         fprintf(fh, "\t\t\t(opts msitranslate %d power_mgmt %d)\n",
-               d_config->pcidevs[i].msitranslate,
-               d_config->pcidevs[i].power_mgmt);
+               d_config->pcis[i].msitranslate,
+               d_config->pcis[i].power_mgmt);
         fprintf(fh, "\t\t)\n");
         fprintf(fh, "\t)\n");
     }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:25:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 14:25:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43530.78199 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpY9-0000Ql-89; Thu, 03 Dec 2020 14:25:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43530.78199; Thu, 03 Dec 2020 14:25:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpY9-0000Qe-4g; Thu, 03 Dec 2020 14:25:37 +0000
Received: by outflank-mailman (input) for mailman id 43530;
 Thu, 03 Dec 2020 14:25:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkpY8-0000QZ-5G
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 14:25:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpY7-0006LH-Qf; Thu, 03 Dec 2020 14:25:35 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpY7-00015c-Gs; Thu, 03 Dec 2020 14:25:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:Message-Id:Date:
	Subject:Cc:To:From; bh=sOVGNlxZA253QNyLwDoA0fXRW+6qHDxeO7BrGEiay3g=; b=2WLHyC
	BtCak28PDg8o7CoSLJSYwMmL+Tp0NrQs8PIc24kkXXwPkaVx4f6e08CtTnw4xNyvVi+vbF5getldg
	kg37vkOyBpHgY+brkDUh/bsuLx9kMvPwMege8vdRpnHQYVWiG9LVBJaHTK3De8Q464efD58VNDfAx
	zPssWSsGEqA=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>
Subject: [PATCH v5 00/23] xl / libxl: named PCI pass-through devices
Date: Thu,  3 Dec 2020 14:25:11 +0000
Message-Id: <20201203142534.4017-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Paul Durrant (23):
  xl / libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
  libxl: make libxl__device_list() work correctly for
    LIBXL__DEVICE_KIND_PCI...
  libxl: Make sure devices added by pci-attach are reflected in the
    config
  libxl: add/recover 'rdm_policy' to/from PCI backend in xenstore
  libxl: s/detatched/detached in libxl_pci.c
  libxl: remove extraneous arguments to do_pci_remove() in libxl_pci.c
  libxl: stop using aodev->device_config in libxl__device_pci_add()...
  libxl: generalise 'driver_path' xenstore access functions in
    libxl_pci.c
  libxl: remove unnecessary check from libxl__device_pci_add()
  libxl: remove get_all_assigned_devices() from libxl_pci.c
  libxl: make sure callers of libxl_device_pci_list() free the list
    after use
  libxl: add libxl_device_pci_assignable_list_free()...
  libxl: use COMPARE_PCI() macro is_pci_in_array()...
  docs/man: extract documentation of PCI_SPEC_STRING from the xl.cfg
    manpage...
  docs/man: improve documentation of PCI_SPEC_STRING...
  docs/man: fix xl(1) documentation for 'pci' operations
  libxl: introduce 'libxl_pci_bdf' in the idl...
  libxlu: introduce xlu_pci_parse_spec_string()
  libxl: modify
    libxl_device_pci_assignable_add/remove/list/list_free()...
  docs/man: modify xl(1) in preparation for naming of assignable devices
  xl / libxl: support naming of assignable devices
  docs/man: modify xl-pci-configuration(5) to add 'name' field to
    PCI_SPEC_STRING
  xl / libxl: support 'xl pci-attach/detach' by name

 docs/man/xl-pci-configuration.5.pod  |  218 ++++++
 docs/man/xl.1.pod.in                 |   39 +-
 docs/man/xl.cfg.5.pod.in             |   68 +-
 tools/golang/xenlight/helpers.gen.go |   77 +-
 tools/golang/xenlight/types.gen.go   |    8 +-
 tools/include/libxl.h                |   67 +-
 tools/include/libxlutil.h            |    8 +-
 tools/libs/light/libxl_create.c      |    6 +-
 tools/libs/light/libxl_device.c      |   70 +-
 tools/libs/light/libxl_dm.c          |   18 +-
 tools/libs/light/libxl_internal.h    |   55 +-
 tools/libs/light/libxl_pci.c         | 1048 ++++++++++++++------------
 tools/libs/light/libxl_types.idl     |   19 +-
 tools/libs/util/libxlu_pci.c         |  379 +++++-----
 tools/ocaml/libs/xl/xenlight_stubs.c |   19 +-
 tools/xl/xl_cmdtable.c               |   16 +-
 tools/xl/xl_parse.c                  |   28 +-
 tools/xl/xl_pci.c                    |  159 ++--
 tools/xl/xl_sxp.c                    |   12 +-
 19 files changed, 1376 insertions(+), 938 deletions(-)
 create mode 100644 docs/man/xl-pci-configuration.5.pod

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:25:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 14:25:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43534.78247 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpYE-0000YR-Vm; Thu, 03 Dec 2020 14:25:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43534.78247; Thu, 03 Dec 2020 14:25:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpYE-0000Y7-Qk; Thu, 03 Dec 2020 14:25:42 +0000
Received: by outflank-mailman (input) for mailman id 43534;
 Thu, 03 Dec 2020 14:25:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkpYD-0000TU-3O
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 14:25:41 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpYC-0006Ll-8v; Thu, 03 Dec 2020 14:25:40 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpYC-00015c-1G; Thu, 03 Dec 2020 14:25:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=iIlzPHbL10Svk6/AmKBREvAXRObwBdrHx3jozrlvASg=; b=lhwOG2wyTl1S2qFrZMBGHwJ50c
	pnGrwy/pisuAViCxiirEBiNChTApt5k7beBEWVVSQ9rwerCiTYaAaG3ArpGYMWV7mUvbXER9mOXij
	tXRz8R2K5HLujumisZUgSuYyx3N8YmGZr1We/O6QrBaxOswifEsa3cb6mGcwkO0NFxOM=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 04/23] libxl: add/recover 'rdm_policy' to/from PCI backend in xenstore
Date: Thu,  3 Dec 2020 14:25:15 +0000
Message-Id: <20201203142534.4017-5-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203142534.4017-1-paul@xen.org>
References: <20201203142534.4017-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Other parameters, such as 'msitranslate' and 'permissive', are dealt with,
but 'rdm_policy' appears to have been completely missed.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index aa7633dfef16..a06c88076519 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -61,9 +61,9 @@ static void libxl_create_pci_backend_device(libxl__gc *gc,
         flexarray_append_pair(back, GCSPRINTF("vdevfn-%d", num), GCSPRINTF("%x", pci->vdevfn));
     flexarray_append(back, GCSPRINTF("opts-%d", num));
     flexarray_append(back,
-              GCSPRINTF("msitranslate=%d,power_mgmt=%d,permissive=%d",
-                             pci->msitranslate, pci->power_mgmt,
-                             pci->permissive));
+              GCSPRINTF("msitranslate=%d,power_mgmt=%d,permissive=%d,rdm_policy=%s",
+                        pci->msitranslate, pci->power_mgmt,
+                        pci->permissive, libxl_rdm_reserve_policy_to_string(pci->rdm_policy)));
     flexarray_append_pair(back, GCSPRINTF("state-%d", num), GCSPRINTF("%d", XenbusStateInitialising));
 }
 
@@ -2374,6 +2374,9 @@ static int libxl__device_pci_from_xs_be(libxl__gc *gc,
             } else if (!strcmp(p, "permissive")) {
                 p = strtok_r(NULL, ",=", &saveptr);
                 pci->permissive = atoi(p);
+            } else if (!strcmp(p, "rdm_policy")) {
+                p = strtok_r(NULL, ",=", &saveptr);
+                libxl_rdm_reserve_policy_from_string(p, &pci->rdm_policy);
             }
         } while ((p = strtok_r(NULL, ",=", &saveptr)) != NULL);
     }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:25:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 14:25:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43535.78251 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpYF-0000Zc-Gh; Thu, 03 Dec 2020 14:25:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43535.78251; Thu, 03 Dec 2020 14:25:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpYF-0000ZD-9B; Thu, 03 Dec 2020 14:25:43 +0000
Received: by outflank-mailman (input) for mailman id 43535;
 Thu, 03 Dec 2020 14:25:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkpYE-0000WJ-3Q
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 14:25:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpYD-0006Lv-9H; Thu, 03 Dec 2020 14:25:41 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpYD-00015c-1G; Thu, 03 Dec 2020 14:25:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=0AyOqDJsHZVempffCeuXOX5SYYaKuAxFB7OmnpHONLE=; b=yJbt2LN08Ynu0/wKnSQK3Ftfh3
	8604GMya913fAaxc8KMWVelVSCCfXefVUYr7wVz5ngT8bK0j3+lpA+uTBd1YRq5ECyRmOxu3lRFM+
	rDBaqWrHnXk2PKkOwmtv6LE/BQC9hufMCIV+QEHu34SZMqjLRMP5Ch5l5qhce2VvLOVw=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 05/23] libxl: s/detatched/detached in libxl_pci.c
Date: Thu,  3 Dec 2020 14:25:16 +0000
Message-Id: <20201203142534.4017-6-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203142534.4017-1-paul@xen.org>
References: <20201203142534.4017-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Simple spelling correction. Purely cosmetic fix.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index a06c88076519..35ee810d5df3 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1864,7 +1864,7 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
     libxl__ev_qmp *qmp, const libxl__json_object *response, int rc);
 static void pci_remove_timeout(libxl__egc *egc,
     libxl__ev_time *ev, const struct timeval *requested_abs, int rc);
-static void pci_remove_detatched(libxl__egc *egc,
+static void pci_remove_detached(libxl__egc *egc,
     pci_remove_state *prs, int rc);
 static void pci_remove_stubdom_done(libxl__egc *egc,
     libxl__ao_device *aodev);
@@ -1978,7 +1978,7 @@ skip1:
 skip_irq:
     rc = 0;
 out_fail:
-    pci_remove_detatched(egc, prs, rc); /* must be last */
+    pci_remove_detached(egc, prs, rc); /* must be last */
 }
 
 static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
@@ -2002,7 +2002,7 @@ static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
     rc = qemu_pci_remove_xenstore(gc, domid, pci, prs->force);
 
 out:
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
 static void pci_remove_qmp_device_del(libxl__egc *egc,
@@ -2028,7 +2028,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     return;
 
 out:
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
 static void pci_remove_qmp_device_del_cb(libxl__egc *egc,
@@ -2051,7 +2051,7 @@ static void pci_remove_qmp_device_del_cb(libxl__egc *egc,
     return;
 
 out:
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
 static void pci_remove_qmp_retry_timer_cb(libxl__egc *egc, libxl__ev_time *ev,
@@ -2067,7 +2067,7 @@ static void pci_remove_qmp_retry_timer_cb(libxl__egc *egc, libxl__ev_time *ev,
     return;
 
 out:
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
 static void pci_remove_qmp_query_cb(libxl__egc *egc,
@@ -2127,7 +2127,7 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
     }
 
 out:
-    pci_remove_detatched(egc, prs, rc); /* must be last */
+    pci_remove_detached(egc, prs, rc); /* must be last */
 }
 
 static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
@@ -2146,12 +2146,12 @@ static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
     /* If we timed out, we might still want to keep destroying the device
      * (when force==true), so let the next function decide what to do on
      * error */
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
-static void pci_remove_detatched(libxl__egc *egc,
-                                 pci_remove_state *prs,
-                                 int rc)
+static void pci_remove_detached(libxl__egc *egc,
+                                pci_remove_state *prs,
+                                int rc)
 {
     STATE_AO_GC(prs->aodev->ao);
     int stubdomid = 0;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:25:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 14:25:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43536.78271 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpYH-0000dx-8R; Thu, 03 Dec 2020 14:25:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43536.78271; Thu, 03 Dec 2020 14:25:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpYH-0000do-21; Thu, 03 Dec 2020 14:25:45 +0000
Received: by outflank-mailman (input) for mailman id 43536;
 Thu, 03 Dec 2020 14:25:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkpYF-0000Yk-1S
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 14:25:43 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpYE-0006M4-8v; Thu, 03 Dec 2020 14:25:42 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpYE-00015c-1F; Thu, 03 Dec 2020 14:25:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=jrF8G4x0cfVSYo8Z0ZIBCjNYH5IUMK69nM2Ctnos49g=; b=pquTNGyeVIvIKiyqidIYP09jgi
	cQPO4ovoCgdCJOLbNSmQy2vUGAOiKhKGGlPjIrTG9DxYpl80yAXt8vuMG8rhAm/EhEE7Qbz6LM14d
	Ksbfz9u9i1dHvcrc8QkeQ3Sln/Vj4sJ13VUCHkmKl2wCqR01gnvTF8RfhnLXCYyeKuFM=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 06/23] libxl: remove extraneous arguments to do_pci_remove() in libxl_pci.c
Date: Thu,  3 Dec 2020 14:25:17 +0000
Message-Id: <20201203142534.4017-7-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203142534.4017-1-paul@xen.org>
References: <20201203142534.4017-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Both 'domid' and 'pci' are available in 'pci_remove_state', so there is no
need to also pass them as separate arguments.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 35ee810d5df3..23f3f78992fd 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1871,14 +1871,14 @@ static void pci_remove_stubdom_done(libxl__egc *egc,
 static void pci_remove_done(libxl__egc *egc,
     pci_remove_state *prs, int rc);
 
-static void do_pci_remove(libxl__egc *egc, uint32_t domid,
-                          libxl_device_pci *pci, int force,
-                          pci_remove_state *prs)
+static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
 {
     STATE_AO_GC(prs->aodev->ao);
     libxl_ctx *ctx = libxl__gc_owner(gc);
     libxl_device_pci *assigned;
+    uint32_t domid = prs->domid;
     libxl_domain_type type = libxl__domain_type(gc, domid);
+    libxl_device_pci *pci = prs->pci;
     int rc, num;
     uint32_t domainid = domid;
 
@@ -2275,7 +2275,6 @@ static void device_pci_remove_common_next(libxl__egc *egc,
     EGC_GC;
 
     /* Convenience aliases */
-    libxl_domid domid = prs->domid;
     libxl_device_pci *const pci = prs->pci;
     libxl__ao_device *const aodev = prs->aodev;
     const unsigned int pfunc_mask = prs->pfunc_mask;
@@ -2293,7 +2292,7 @@ static void device_pci_remove_common_next(libxl__egc *egc,
             } else {
                 pci->vdevfn = orig_vdev;
             }
-            do_pci_remove(egc, domid, pci, prs->force, prs);
+            do_pci_remove(egc, prs);
             return;
         }
     }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:25:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 14:25:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43537.78278 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpYI-0000fh-3W; Thu, 03 Dec 2020 14:25:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43537.78278; Thu, 03 Dec 2020 14:25:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpYH-0000ee-H0; Thu, 03 Dec 2020 14:25:45 +0000
Received: by outflank-mailman (input) for mailman id 43537;
 Thu, 03 Dec 2020 14:25:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkpYF-0000bI-Tz
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 14:25:43 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpYF-0006ME-8g; Thu, 03 Dec 2020 14:25:43 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpYF-00015c-1F; Thu, 03 Dec 2020 14:25:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=BK8cHNgffedaGJNTcMum0Oa1ccaOx/vujwPmOihUqQ8=; b=BlVacWFPVaJDKWySjfgvowc4Uq
	lhjT3wslNYyeYeXuK5Nx+wVS1p3A6iqEKh3+NiM9GI84Ad7lxuMEhgcGJbm8tPxWSuhKX+QeV5XEi
	pXtDeVTo5ISi8UJ+LBKpxm3ucFK1IFNFHoOuEQxftAUS4jQwwxmV9kXEoXue0iNH9ZNA=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 07/23] libxl: stop using aodev->device_config in libxl__device_pci_add()...
Date: Thu,  3 Dec 2020 14:25:18 +0000
Message-Id: <20201203142534.4017-8-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203142534.4017-1-paul@xen.org>
References: <20201203142534.4017-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... to hold a pointer to the device.

There is already a 'pci' field in 'pci_add_state' so simply use that from
the start. This also allows the 'pci' (#3) argument to be dropped from
do_pci_add().

NOTE: This patch also changes the type of the 'pci_domid' field in
      'pci_add_state' from 'int' to 'libxl_domid' which is more appropriate
      given what the field is used for.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 23f3f78992fd..31eaa95923c4 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1074,7 +1074,7 @@ typedef struct pci_add_state {
     libxl__ev_qmp qmp;
     libxl__ev_time timeout;
     libxl_device_pci *pci;
-    int pci_domid;
+    libxl_domid pci_domid;
 } pci_add_state;
 
 static void pci_add_qemu_trad_watch_state_cb(libxl__egc *egc,
@@ -1091,7 +1091,6 @@ static void pci_add_dm_done(libxl__egc *,
 
 static void do_pci_add(libxl__egc *egc,
                        libxl_domid domid,
-                       libxl_device_pci *pci,
                        pci_add_state *pas)
 {
     STATE_AO_GC(pas->aodev->ao);
@@ -1101,7 +1100,6 @@ static void do_pci_add(libxl__egc *egc,
     /* init pci_add_state */
     libxl__xswait_init(&pas->xswait);
     libxl__ev_qmp_init(&pas->qmp);
-    pas->pci = pci;
     pas->pci_domid = domid;
     libxl__ev_time_init(&pas->timeout);
 
@@ -1564,13 +1562,10 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     int stubdomid = 0;
     pci_add_state *pas;
 
-    /* Store *pci to be used by callbacks */
-    aodev->device_config = pci;
-    aodev->device_type = &libxl__pci_devtype;
-
     GCNEW(pas);
     pas->aodev = aodev;
     pas->domid = domid;
+    pas->pci = pci;
     pas->starting = starting;
     pas->callback = device_pci_add_stubdom_done;
 
@@ -1624,9 +1619,10 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
         GCNEW(pci_s);
         libxl_device_pci_init(pci_s);
         libxl_device_pci_copy(CTX, pci_s, pci);
+        pas->pci = pci_s;
         pas->callback = device_pci_add_stubdom_wait;
 
-        do_pci_add(egc, stubdomid, pci_s, pas); /* must be last */
+        do_pci_add(egc, stubdomid, pas); /* must be last */
         return;
     }
 
@@ -1681,9 +1677,8 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
     int i;
 
     /* Convenience aliases */
-    libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = aodev->device_config;
+    libxl_device_pci *pci = pas->pci;
 
     if (rc) goto out;
 
@@ -1718,7 +1713,7 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
                 pci->vdevfn = orig_vdev;
             }
             pas->callback = device_pci_add_done;
-            do_pci_add(egc, domid, pci, pas); /* must be last */
+            do_pci_add(egc, domid, pas); /* must be last */
             return;
         }
     }
@@ -1734,7 +1729,7 @@ static void device_pci_add_done(libxl__egc *egc,
     EGC_GC;
     libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = aodev->device_config;
+    libxl_device_pci *pci = pas->pci;
 
     if (rc) {
         LOGD(ERROR, domid,
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:25:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 14:25:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43538.78294 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpYJ-0000jc-DC; Thu, 03 Dec 2020 14:25:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43538.78294; Thu, 03 Dec 2020 14:25:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpYJ-0000jN-3K; Thu, 03 Dec 2020 14:25:47 +0000
Received: by outflank-mailman (input) for mailman id 43538;
 Thu, 03 Dec 2020 14:25:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkpYH-0000eN-Bg
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 14:25:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpYG-0006ML-8r; Thu, 03 Dec 2020 14:25:44 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpYG-00015c-1G; Thu, 03 Dec 2020 14:25:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=8cQqWT9GuCPGDfS/ArV0IseEaCV4r+yxTp71Fb3v4Rk=; b=4sXRpP4Q92mHROeIPXs8K+Rts6
	N+iKtn3Qh2RTlK6mX9Bx8Go5zahnKeFs9O+MdL9LhunnQWB+TrVHXfTRZBWnqPqKxeE7wfLqw27oY
	c0rix4Fle3K4h+pqS4/2JKkUxxhnIgRSlDjQFf4ZBkZvwnUdXH8IGfYnSS4X5DfqiV00=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 08/23] libxl: generalise 'driver_path' xenstore access functions in libxl_pci.c
Date: Thu,  3 Dec 2020 14:25:19 +0000
Message-Id: <20201203142534.4017-9-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203142534.4017-1-paul@xen.org>
References: <20201203142534.4017-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

For the purposes of re-binding a device to its previous driver,
libxl__device_pci_assignable_add() writes the driver path into xenstore.
This path is then read back in libxl__device_pci_assignable_remove().

The functions that support this writing to and reading from xenstore are
currently dedicated for this purpose and hence the node name 'driver_path'
is hard-coded. This patch generalizes these utility functions and passes
'driver_path' as an argument. Subsequent patches will invoke them to
access other nodes.

NOTE: Because the functions will have a broader use (other than storing a
      driver path in lieu of pciback) the base xenstore path is also
      changed from '/libxl/pciback' to '/libxl/pci'.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 66 +++++++++++++++++-------------------
 1 file changed, 32 insertions(+), 34 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 31eaa95923c4..57cf6ffc85de 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -737,48 +737,46 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
     return 0;
 }
 
-#define PCIBACK_INFO_PATH "/libxl/pciback"
+#define PCI_INFO_PATH "/libxl/pci"
 
-static void pci_assignable_driver_path_write(libxl__gc *gc,
-                                            libxl_device_pci *pci,
-                                            char *driver_path)
+static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node)
 {
-    char *path;
+    return node ?
+        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
+                  pci->domain, pci->bus, pci->dev, pci->func,
+                  node) :
+        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
+                  pci->domain, pci->bus, pci->dev, pci->func);
+}
 
-    path = GCSPRINTF(PCIBACK_INFO_PATH"/"PCI_BDF_XSPATH"/driver_path",
-                     pci->domain,
-                     pci->bus,
-                     pci->dev,
-                     pci->func);
-    if ( libxl__xs_printf(gc, XBT_NULL, path, "%s", driver_path) < 0 ) {
-        LOGE(WARN, "Write of %s to node %s failed.", driver_path, path);
+
+static void pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node, const char *val)
+{
+    char *path = pci_info_xs_path(gc, pci, node);
+
+    if ( libxl__xs_printf(gc, XBT_NULL, path, "%s", val) < 0 ) {
+        LOGE(WARN, "Write of %s to node %s failed.", val, path);
     }
 }
 
-static char * pci_assignable_driver_path_read(libxl__gc *gc,
-                                              libxl_device_pci *pci)
+static char *pci_info_xs_read(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node)
 {
-    return libxl__xs_read(gc, XBT_NULL,
-                          GCSPRINTF(
-                           PCIBACK_INFO_PATH "/" PCI_BDF_XSPATH "/driver_path",
-                           pci->domain,
-                           pci->bus,
-                           pci->dev,
-                           pci->func));
+    char *path = pci_info_xs_path(gc, pci, node);
+
+    return libxl__xs_read(gc, XBT_NULL, path);
 }
 
-static void pci_assignable_driver_path_remove(libxl__gc *gc,
-                                              libxl_device_pci *pci)
+static void pci_info_xs_remove(libxl__gc *gc, libxl_device_pci *pci,
+                               const char *node)
 {
+    char *path = pci_info_xs_path(gc, pci, node);
     libxl_ctx *ctx = libxl__gc_owner(gc);
 
     /* Remove the xenstore entry */
-    xs_rm(ctx->xsh, XBT_NULL,
-          GCSPRINTF(PCIBACK_INFO_PATH "/" PCI_BDF_XSPATH,
-                    pci->domain,
-                    pci->bus,
-                    pci->dev,
-                    pci->func) );
+    xs_rm(ctx->xsh, XBT_NULL, path);
 }
 
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
@@ -824,9 +822,9 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     /* Store driver_path for rebinding to dom0 */
     if ( rebind ) {
         if ( driver_path ) {
-            pci_assignable_driver_path_write(gc, pci, driver_path);
+            pci_info_xs_write(gc, pci, "driver_path", driver_path);
         } else if ( (driver_path =
-                     pci_assignable_driver_path_read(gc, pci)) != NULL ) {
+                     pci_info_xs_read(gc, pci, "driver_path")) != NULL ) {
             LOG(INFO, PCI_BDF" not bound to a driver, will be rebound to %s",
                 dom, bus, dev, func, driver_path);
         } else {
@@ -834,7 +832,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
                 dom, bus, dev, func);
         }
     } else {
-        pci_assignable_driver_path_remove(gc, pci);
+        pci_info_xs_remove(gc, pci, "driver_path");
     }
 
     if ( pciback_dev_assign(gc, pci) ) {
@@ -884,7 +882,7 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     }
 
     /* Rebind if necessary */
-    driver_path = pci_assignable_driver_path_read(gc, pci);
+    driver_path = pci_info_xs_read(gc, pci, "driver_path");
 
     if ( driver_path ) {
         if ( rebind ) {
@@ -897,7 +895,7 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
                 return -1;
             }
 
-            pci_assignable_driver_path_remove(gc, pci);
+            pci_info_xs_remove(gc, pci, "driver_path");
         }
     } else {
         if ( rebind ) {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:25:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 14:25:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43539.78301 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpYK-0000lF-3l; Thu, 03 Dec 2020 14:25:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43539.78301; Thu, 03 Dec 2020 14:25:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpYJ-0000kT-KA; Thu, 03 Dec 2020 14:25:47 +0000
Received: by outflank-mailman (input) for mailman id 43539;
 Thu, 03 Dec 2020 14:25:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkpYI-0000gP-1F
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 14:25:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpYH-0006MQ-8w; Thu, 03 Dec 2020 14:25:45 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpYH-00015c-1G; Thu, 03 Dec 2020 14:25:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=5IAb1jvl0GRq7YOjJkqQXgaK3EOQQ6ougo6kLog0BHY=; b=vCFSyjPL6qa0CvQAFUFrrJJpH5
	E3EYSB8KHTJdXIwKhcqAWMJTCpXg+ghd9cNLyLahwR+wa/Pxu3rqnPfQEkEhG+LeXSXQ+XEZ9aXuS
	FTMiMJQYAooj/P5rwYUUzWACumn+1qt5CyR3vY64PkLhK0RIHTYkzWgxB1T4gSU4oINQ=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 09/23] libxl: remove unnecessary check from libxl__device_pci_add()
Date: Thu,  3 Dec 2020 14:25:20 +0000
Message-Id: <20201203142534.4017-10-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203142534.4017-1-paul@xen.org>
References: <20201203142534.4017-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

The code currently checks explicitly whether the device is already assigned,
but this is actually unnecessary as assigned devices do not form part of
the list returned by libxl_device_pci_assignable_list() and hence the
libxl_pci_assignable() test would have already failed.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 16 +---------------
 1 file changed, 1 insertion(+), 15 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 57cf6ffc85de..24e79afcaa36 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1555,8 +1555,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
 {
     STATE_AO_GC(aodev->ao);
     libxl_ctx *ctx = libxl__gc_owner(gc);
-    libxl_device_pci *assigned;
-    int num_assigned, rc;
+    int rc;
     int stubdomid = 0;
     pci_add_state *pas;
 
@@ -1595,19 +1594,6 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
         goto out;
     }
 
-    rc = get_all_assigned_devices(gc, &assigned, &num_assigned);
-    if ( rc ) {
-        LOGD(ERROR, domid,
-             "cannot determine if device is assigned, refusing to continue");
-        goto out;
-    }
-    if ( is_pci_in_array(assigned, num_assigned, pci->domain,
-                         pci->bus, pci->dev, pci->func) ) {
-        LOGD(ERROR, domid, "PCI device already attached to a domain");
-        rc = ERROR_FAIL;
-        goto out;
-    }
-
     libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:30:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 14:30:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43589.78319 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpd8-0002Ok-NR; Thu, 03 Dec 2020 14:30:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43589.78319; Thu, 03 Dec 2020 14:30:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpd8-0002Od-Jh; Thu, 03 Dec 2020 14:30:46 +0000
Received: by outflank-mailman (input) for mailman id 43589;
 Thu, 03 Dec 2020 14:30:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkpd7-0002NJ-Gk
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 14:30:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpd6-0006Tr-Gg; Thu, 03 Dec 2020 14:30:44 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpYV-00015c-KX; Thu, 03 Dec 2020 14:25:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=uI/MuEj5MaCfoy0lewt6fPzqPiSYJFMNZ2+WVHHc/hI=; b=zHZAsVK4d05F96UGEQsnVsTuTx
	U9vRtYwSSLAF5tX2+qCODD6o8gZV5sXtnLvORbUeZ4NTPCezmJc8SsA+0U1B0gvCzc0NJWigZdoVF
	oM1dLs7/XhIRt6cNRgnqPhSPfQMgX8v+UL4RQwMFDtKznCPsOdPQL+PwgShE0wnE5jIw=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v5 23/23] xl / libxl: support 'xl pci-attach/detach' by name
Date: Thu,  3 Dec 2020 14:25:34 +0000
Message-Id: <20201203142534.4017-24-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203142534.4017-1-paul@xen.org>
References: <20201203142534.4017-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This patch adds a 'name' field to the IDL for 'libxl_device_pci' and
modifies libxlu_pci_parse_spec_string() to parse the new 'name' parameter
of PCI_SPEC_STRING, as detailed in the updated documentation in
xl-pci-configuration(5).

If the 'name' field is non-NULL then both libxl_device_pci_add() and
libxl_device_pci_remove() will use it to look up the device BDF in
the list of assignable devices.
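A sketch of the resulting usage (the device name 'gpu0' is hypothetical, and
assumes the device was previously made assignable under that name by the
earlier patches in this series):

```
# Attach/detach by name instead of BDF:
xl pci-attach <domid> name=gpu0
xl pci-detach <domid> name=gpu0

# Equivalently in a domain config, a PCI_SPEC_STRING may give either
# a BDF or a name, but not both (enforced by the new XOR check in
# xlu_pci_parse_spec_string()):
pci = [ "name=gpu0" ]
```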

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxl.h            |  6 +++
 tools/libs/light/libxl_pci.c     | 67 +++++++++++++++++++++++++++++---
 tools/libs/light/libxl_types.idl |  1 +
 tools/libs/util/libxlu_pci.c     |  7 +++-
 4 files changed, 75 insertions(+), 6 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 4025d3a3d437..5b55a2015533 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -484,6 +484,12 @@
  */
 #define LIBXL_HAVE_PCI_ASSIGNABLE_NAME 1
 
+/*
+ * LIBXL_HAVE_DEVICE_PCI_NAME indicates that the 'name' field of
+ * libxl_device_pci is defined.
+ */
+#define LIBXL_HAVE_DEVICE_PCI_NAME 1
+
 /*
  * libxl ABI compatibility
  *
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index e62383dd7b8f..74c3db26df05 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -60,6 +60,10 @@ static void libxl_create_pci_backend_device(libxl__gc *gc,
                                             int num,
                                             const libxl_device_pci *pci)
 {
+    if (pci->name) {
+        flexarray_append(back, GCSPRINTF("name-%d", num));
+        flexarray_append(back, GCSPRINTF("%s", pci->name));
+    }
     flexarray_append(back, GCSPRINTF("key-%d", num));
     flexarray_append(back, GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func));
     flexarray_append(back, GCSPRINTF("dev-%d", num));
@@ -284,6 +288,7 @@ retry_transaction:
 
 retry_transaction2:
     t = xs_transaction_start(ctx->xsh);
+    xs_rm(ctx->xsh, t, GCSPRINTF("%s/name-%d", be_path, i));
     xs_rm(ctx->xsh, t, GCSPRINTF("%s/state-%d", be_path, i));
     xs_rm(ctx->xsh, t, GCSPRINTF("%s/key-%d", be_path, i));
     xs_rm(ctx->xsh, t, GCSPRINTF("%s/dev-%d", be_path, i));
@@ -322,6 +327,12 @@ retry_transaction2:
             xs_write(ctx->xsh, t, GCSPRINTF("%s/vdevfn-%d", be_path, j - 1), tmp, strlen(tmp));
             xs_rm(ctx->xsh, t, tmppath);
         }
+        tmppath = GCSPRINTF("%s/name-%d", be_path, j);
+        tmp = libxl__xs_read(gc, t, tmppath);
+        if (tmp) {
+            xs_write(ctx->xsh, t, GCSPRINTF("%s/name-%d", be_path, j - 1), tmp, strlen(tmp));
+            xs_rm(ctx->xsh, t, tmppath);
+        }
     }
     if (!xs_transaction_end(ctx->xsh, t, 0))
         if (errno == EAGAIN)
@@ -1619,6 +1630,23 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     pas->starting = starting;
     pas->callback = device_pci_add_stubdom_done;
 
+    if (pci->name) {
+        libxl_pci_bdf *pcibdf =
+            libxl_device_pci_assignable_name2bdf(CTX, pci->name);
+
+        if (!pcibdf) {
+            rc = ERROR_FAIL;
+            goto out;
+        }
+
+        LOGD(DETAIL, domid, "'%s' -> %04x:%02x:%02x.%u", pci->name,
+             pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func);
+
+        libxl_pci_bdf_copy(CTX, &pci->bdf, pcibdf);
+        libxl_pci_bdf_dispose(pcibdf);
+        free(pcibdf);
+    }
+
     if (libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {
         rc = xc_test_assign_device(ctx->xch, domid,
                                    pci_encode_bdf(&pci->bdf));
@@ -1767,11 +1795,19 @@ static void device_pci_add_done(libxl__egc *egc,
     libxl_device_pci *pci = &pas->pci;
 
     if (rc) {
-        LOGD(ERROR, domid,
-             "libxl__device_pci_add  failed for "
-             "PCI device %x:%x:%x.%x (rc %d)",
-             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
-             rc);
+        if (pci->name) {
+            LOGD(ERROR, domid,
+                 "libxl__device_pci_add failed for "
+                 "PCI device '%s' (rc %d)",
+                 pci->name,
+                 rc);
+        } else {
+            LOGD(ERROR, domid,
+                 "libxl__device_pci_add failed for "
+                 "PCI device %x:%x:%x.%x (rc %d)",
+                 pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
+                 rc);
+        }
         pci_info_xs_remove(gc, &pci->bdf, "domid");
     }
     libxl_device_pci_dispose(pci);
@@ -2288,6 +2324,23 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
     libxl__ev_time_init(&prs->timeout);
     libxl__ev_time_init(&prs->retry_timer);
 
+    if (pci->name) {
+        libxl_pci_bdf *pcibdf =
+            libxl_device_pci_assignable_name2bdf(CTX, pci->name);
+
+        if (!pcibdf) {
+            rc = ERROR_FAIL;
+            goto out;
+        }
+
+        LOGD(DETAIL, domid, "'%s' -> %04x:%02x:%02x.%u", pci->name,
+             pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func);
+
+        libxl_pci_bdf_copy(CTX, &prs->pci.bdf, pcibdf);
+        libxl_pci_bdf_dispose(pcibdf);
+        free(pcibdf);
+    }
+
     prs->orig_vdev = pci->vdevfn & ~7U;
 
     if ( pci->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
@@ -2422,6 +2475,10 @@ static int libxl__device_pci_from_xs_be(libxl__gc *gc,
         } while ((p = strtok_r(NULL, ",=", &saveptr)) != NULL);
     }
 
+    s = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/name-%d", be_path, nr));
+    if (s)
+        pci->name = strdup(s);
+
     return 0;
 }
 
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 2c441142fba6..44bad36f1c4c 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -778,6 +778,7 @@ libxl_pci_bdf = Struct("pci_bdf", [
 
 libxl_device_pci = Struct("device_pci", [
     ("bdf", libxl_pci_bdf),
+    ("name", string),
     ("vdevfn", uint32),
     ("vfunc_mask", uint32),
     ("msitranslate", bool),
diff --git a/tools/libs/util/libxlu_pci.c b/tools/libs/util/libxlu_pci.c
index a8b6ce542736..543a1f80e99e 100644
--- a/tools/libs/util/libxlu_pci.c
+++ b/tools/libs/util/libxlu_pci.c
@@ -151,6 +151,7 @@ int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pcidev,
 {
     const char *ptr = str;
     bool bdf_present = false;
+    bool name_present = false;
     int ret;
 
     /* Attempt to parse 'bdf' as positional parameter */
@@ -193,6 +194,10 @@ int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pcidev,
             pcidev->power_mgmt = atoi(val);
         } else if (!strcmp(key, "rdm_policy")) {
             ret = parse_rdm_policy(cfg, &pcidev->rdm_policy, val);
+        } else if (!strcmp(key, "name")) {
+            name_present = true;
+            pcidev->name = strdup(val);
+            if (!pcidev->name) ret = ERROR_NOMEM;
         } else {
             XLU__PCI_ERR(cfg, "Unknown PCI_SPEC_STRING option: %s", key);
             ret = ERROR_INVAL;
@@ -205,7 +210,7 @@ int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pcidev,
             return ret;
     }
 
-    if (!bdf_present)
+    if (!(bdf_present ^ name_present))
         return ERROR_INVAL;
 
     return 0;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:30:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 14:30:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43591.78335 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpd9-0002Pt-Fk; Thu, 03 Dec 2020 14:30:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43591.78335; Thu, 03 Dec 2020 14:30:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpd9-0002Pd-5S; Thu, 03 Dec 2020 14:30:47 +0000
Received: by outflank-mailman (input) for mailman id 43591;
 Thu, 03 Dec 2020 14:30:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkpd7-0002NW-Nx
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 14:30:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpd6-0006Tp-Dn; Thu, 03 Dec 2020 14:30:44 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpYK-00015c-EB; Thu, 03 Dec 2020 14:25:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=MEESRvcgvJthDFBHLMpeiMRWXyxIr9Qgq69ES02P57c=; b=i5A4TKPXxsCVEsXVyHW889QAY4
	3lLbmriwGVzAKoalWseA/nW/KlEkrnBbm2XFp+UN0iquNPCcFgniWKTgcQEpGgS43WePNWpszimAI
	FzOL0ZPYO0OQszamlfdkY3ZrcNneG65GCaotptGl2C6qVf5aog5T1z4gpAZSzrQbAtFw=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	David Scott <dave@recoil.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v5 12/23] libxl: add libxl_device_pci_assignable_list_free()...
Date: Thu,  3 Dec 2020 14:25:23 +0000
Message-Id: <20201203142534.4017-13-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203142534.4017-1-paul@xen.org>
References: <20201203142534.4017-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... to be used by callers of libxl_device_pci_assignable_list().

Currently there is no API for callers of libxl_device_pci_assignable_list()
to free the list. The xl function pciassignable_list() calls
libxl_device_pci_dispose() on each element of the returned list, but
libxl_pci_assignable() in libxl_pci.c does not. Neither does the implementation
of libxl_device_pci_assignable_list() call libxl_device_pci_init().

This patch adds the new API function, makes sure it is used everywhere and
also modifies libxl_device_pci_assignable_list() to initialize list
entries rather than just zeroing them.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Acked-by: Christian Lindig <christian.lindig@citrix.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: David Scott <dave@recoil.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxl.h                |  7 +++++++
 tools/libs/light/libxl_pci.c         | 14 ++++++++++++--
 tools/ocaml/libs/xl/xenlight_stubs.c |  3 +--
 tools/xl/xl_pci.c                    |  3 +--
 4 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index ee52d3cf7e7e..8225809d94a8 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -457,6 +457,12 @@
  */
 #define LIBXL_HAVE_DEVICE_PCI_LIST_FREE 1
 
+/*
+ * LIBXL_HAVE_DEVICE_PCI_ASSIGNABLE_LIST_FREE indicates that the
+ * libxl_device_pci_assignable_list_free() function is defined.
+ */
+#define LIBXL_HAVE_DEVICE_PCI_ASSIGNABLE_LIST_FREE 1
+
 /*
  * libxl ABI compatibility
  *
@@ -2369,6 +2375,7 @@ int libxl_device_events_handler(libxl_ctx *ctx,
 int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
 int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
 libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
+void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num);
 
 /* CPUID handling */
 int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str);
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 5ef37fe8dfef..e42c2ba56b73 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -457,7 +457,7 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
         pcis = new;
         new = pcis + *num;
 
-        memset(new, 0, sizeof(*new));
+        libxl_device_pci_init(new);
         pci_struct_fill(new, dom, bus, dev, func, 0);
 
         if (pci_info_xs_read(gc, new, "domid")) /* already assigned */
@@ -472,6 +472,16 @@ out:
     return pcis;
 }
 
+void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num)
+{
+    int i;
+
+    for (i = 0; i < num; i++)
+        libxl_device_pci_dispose(&list[i]);
+
+    free(list);
+}
+
 /* Unbind device from its current driver, if any.  If driver_path is non-NULL,
  * store the path to the original driver in it. */
 static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
@@ -1490,7 +1500,7 @@ static int libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
             pcis[i].func == pci->func)
             break;
     }
-    free(pcis);
+    libxl_device_pci_assignable_list_free(pcis, num);
     return i != num;
 }
 
diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/xenlight_stubs.c
index 1181971da4e7..352a00134d70 100644
--- a/tools/ocaml/libs/xl/xenlight_stubs.c
+++ b/tools/ocaml/libs/xl/xenlight_stubs.c
@@ -894,9 +894,8 @@ value stub_xl_device_pci_assignable_list(value ctx)
 		Field(list, 1) = temp;
 		temp = list;
 		Store_field(list, 0, Val_device_pci(&c_list[i]));
-		libxl_device_pci_dispose(&c_list[i]);
 	}
-	free(c_list);
+	libxl_device_pci_assignable_list_free(c_list, nb);
 
 	CAMLreturn(list);
 }
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 7c0f102ac7b7..f71498cbb570 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -164,9 +164,8 @@ static void pciassignable_list(void)
     for (i = 0; i < num; i++) {
         printf("%04x:%02x:%02x.%01x\n",
                pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
-        libxl_device_pci_dispose(&pcis[i]);
     }
-    free(pcis);
+    libxl_device_pci_assignable_list_free(pcis, num);
 }
 
 int main_pciassignable_list(int argc, char **argv)
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:30:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 14:30:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43592.78359 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpdA-0002SX-JY; Thu, 03 Dec 2020 14:30:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43592.78359; Thu, 03 Dec 2020 14:30:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpdA-0002S2-7t; Thu, 03 Dec 2020 14:30:48 +0000
Received: by outflank-mailman (input) for mailman id 43592;
 Thu, 03 Dec 2020 14:30:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkpd7-0002Nj-QW
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 14:30:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpd6-0006Tt-LU; Thu, 03 Dec 2020 14:30:44 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpYR-00015c-KY; Thu, 03 Dec 2020 14:25:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=+I+oSnidyg1ISOevzaYtma9ci/Jizf+9o1lZCYeCrsI=; b=yojnqqLPemmjav7fDjhKaIbsrS
	mIozyrPhFWiysXb0VElZ8Omr2FqrBOsrSElnZOcRzUq+sj/LfyJGx9Rt+lEBWdDgE/YViF4+5hHUP
	rC/gHozviR4/LjOze4TmHoe0wC/9Dn9VnBh/Ol5z6BRdKnzhVNubXhlfN9BrV6HnDrOA=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	David Scott <dave@recoil.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v5 19/23] libxl: modify libxl_device_pci_assignable_add/remove/list/list_free()...
Date: Thu,  3 Dec 2020 14:25:30 +0000
Message-Id: <20201203142534.4017-20-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203142534.4017-1-paul@xen.org>
References: <20201203142534.4017-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... to use 'libxl_pci_bdf' rather than 'libxl_device_pci'.

This patch modifies the API and its callers accordingly. It also modifies
several internal functions in libxl_pci.c that support the API, so that
they too use 'libxl_pci_bdf'.

NOTE: The OCaml bindings are adjusted to encapsulate the interface change.
      It should therefore not affect compatibility with OCaml-based
      utilities.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Acked-by: Christian Lindig <christian.lindig@citrix.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: David Scott <dave@recoil.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxl.h                |  15 +-
 tools/libs/light/libxl_pci.c         | 213 +++++++++++++++------------
 tools/ocaml/libs/xl/xenlight_stubs.c |  15 +-
 tools/xl/xl_pci.c                    |  32 ++--
 4 files changed, 156 insertions(+), 119 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 5edacccbd1da..5703fdf367c5 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -469,6 +469,13 @@
  */
 #define LIBXL_HAVE_PCI_BDF 1
 
+/*
+ * LIBXL_HAVE_PCI_ASSIGNABLE_BDF indicates that the
+ * libxl_device_pci_assignable_add/remove/list/list_free() functions all
+ * use the 'libxl_pci_bdf' type rather than 'libxl_device_pci' type.
+ */
+#define LIBXL_HAVE_PCI_ASSIGNABLE_BDF 1
+
 /*
  * libxl ABI compatibility
  *
@@ -2378,10 +2385,10 @@ int libxl_device_events_handler(libxl_ctx *ctx,
  * added or is not bound, the functions will emit a warning but return
  * SUCCESS.
  */
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
-libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
-void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num);
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf, int rebind);
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf, int rebind);
+libxl_pci_bdf *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
+void libxl_device_pci_assignable_list_free(libxl_pci_bdf *list, int num);
 
 /* CPUID handling */
 int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str);
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index e0bcab4ee840..eecbd6efb694 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -25,26 +25,33 @@
 #define PCI_BDF_XSPATH         "%04x-%02x-%02x-%01x"
 #define PCI_PT_QDEV_ID         "pci-pt-%02x_%02x.%01x"
 
-static unsigned int pci_encode_bdf(libxl_device_pci *pci)
+static unsigned int pci_encode_bdf(libxl_pci_bdf *pcibdf)
 {
     unsigned int value;
 
-    value = pci->bdf.domain << 16;
-    value |= (pci->bdf.bus & 0xff) << 8;
-    value |= (pci->bdf.dev & 0x1f) << 3;
-    value |= (pci->bdf.func & 0x7);
+    value = pcibdf->domain << 16;
+    value |= (pcibdf->bus & 0xff) << 8;
+    value |= (pcibdf->dev & 0x1f) << 3;
+    value |= (pcibdf->func & 0x7);
 
     return value;
 }
 
+static void pcibdf_struct_fill(libxl_pci_bdf *pcibdf, unsigned int domain,
+                               unsigned int bus, unsigned int dev,
+                               unsigned int func)
+{
+    pcibdf->domain = domain;
+    pcibdf->bus = bus;
+    pcibdf->dev = dev;
+    pcibdf->func = func;
+}
+
 static void pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
                             unsigned int bus, unsigned int dev,
                             unsigned int func, unsigned int vdevfn)
 {
-    pci->bdf.domain = domain;
-    pci->bdf.bus = bus;
-    pci->bdf.dev = dev;
-    pci->bdf.func = func;
+    pcibdf_struct_fill(&pci->bdf, domain, bus, dev, func);
     pci->vdevfn = vdevfn;
 }
 
@@ -350,8 +357,8 @@ static bool is_pci_in_array(libxl_device_pci *pcis, int num,
 }
 
 /* Write the standard BDF into the sysfs path given by sysfs_path. */
-static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
-                           libxl_device_pci *pci)
+static int sysfs_write_bdf(libxl__gc *gc, const char *sysfs_path,
+                           libxl_pci_bdf *pcibdf)
 {
     int rc, fd;
     char *buf;
@@ -362,8 +369,8 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
         return ERROR_FAIL;
     }
 
-    buf = GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus,
-                    pci->bdf.dev, pci->bdf.func);
+    buf = GCSPRINTF(PCI_BDF, pcibdf->domain, pcibdf->bus,
+                    pcibdf->dev, pcibdf->func);
     rc = write(fd, buf, strlen(buf));
     /* Annoying to have two if's, but we need the errno */
     if (rc < 0)
@@ -378,22 +385,22 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
 
 #define PCI_INFO_PATH "/libxl/pci"
 
-static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
+static char *pci_info_xs_path(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                               const char *node)
 {
     return node ?
         GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
-                  pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
+                  pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func,
                   node) :
         GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
-                  pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
+                  pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func);
 }
 
 
-static int pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
+static int pci_info_xs_write(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                               const char *node, const char *val)
 {
-    char *path = pci_info_xs_path(gc, pci, node);
+    char *path = pci_info_xs_path(gc, pcibdf, node);
     int rc = libxl__xs_printf(gc, XBT_NULL, path, "%s", val);
 
     if (rc) LOGE(WARN, "Write of %s to node %s failed.", val, path);
@@ -401,28 +408,28 @@ static int pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
     return rc;
 }
 
-static char *pci_info_xs_read(libxl__gc *gc, libxl_device_pci *pci,
+static char *pci_info_xs_read(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                               const char *node)
 {
-    char *path = pci_info_xs_path(gc, pci, node);
+    char *path = pci_info_xs_path(gc, pcibdf, node);
 
     return libxl__xs_read(gc, XBT_NULL, path);
 }
 
-static void pci_info_xs_remove(libxl__gc *gc, libxl_device_pci *pci,
+static void pci_info_xs_remove(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                                const char *node)
 {
-    char *path = pci_info_xs_path(gc, pci, node);
+    char *path = pci_info_xs_path(gc, pcibdf, node);
     libxl_ctx *ctx = libxl__gc_owner(gc);
 
     /* Remove the xenstore entry */
     xs_rm(ctx->xsh, XBT_NULL, path);
 }
 
-libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
+libxl_pci_bdf *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
 {
     GC_INIT(ctx);
-    libxl_device_pci *pcis = NULL, *new;
+    libxl_pci_bdf *pcibdfs = NULL, *new;
     struct dirent *de;
     DIR *dir;
 
@@ -443,15 +450,15 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
         if (sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4)
             continue;
 
-        new = realloc(pcis, ((*num) + 1) * sizeof(*new));
+        new = realloc(pcibdfs, ((*num) + 1) * sizeof(*new));
         if (NULL == new)
             continue;
 
-        pcis = new;
-        new = pcis + *num;
+        pcibdfs = new;
+        new = pcibdfs + *num;
 
-        libxl_device_pci_init(new);
-        pci_struct_fill(new, dom, bus, dev, func, 0);
+        libxl_pci_bdf_init(new);
+        pcibdf_struct_fill(new, dom, bus, dev, func);
 
         if (pci_info_xs_read(gc, new, "domid")) /* already assigned */
             continue;
@@ -462,32 +469,32 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
     closedir(dir);
 out:
     GC_FREE;
-    return pcis;
+    return pcibdfs;
 }
 
-void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num)
+void libxl_device_pci_assignable_list_free(libxl_pci_bdf *list, int num)
 {
     int i;
 
     for (i = 0; i < num; i++)
-        libxl_device_pci_dispose(&list[i]);
+        libxl_pci_bdf_dispose(&list[i]);
 
     free(list);
 }
 
 /* Unbind device from its current driver, if any.  If driver_path is non-NULL,
  * store the path to the original driver in it. */
-static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
+static int sysfs_dev_unbind(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                             char **driver_path)
 {
     char * spath, *dp = NULL;
     struct stat st;
 
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/driver",
-                           pci->bdf.domain,
-                           pci->bdf.bus,
-                           pci->bdf.dev,
-                           pci->bdf.func);
+                           pcibdf->domain,
+                           pcibdf->bus,
+                           pcibdf->dev,
+                           pcibdf->func);
     if ( !lstat(spath, &st) ) {
         /* Find the canonical path to the driver. */
         dp = libxl__zalloc(gc, PATH_MAX);
@@ -501,7 +508,7 @@ static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
 
         /* Unbind from the old driver */
         spath = GCSPRINTF("%s/unbind", dp);
-        if ( sysfs_write_bdf(gc, spath, pci) < 0 ) {
+        if ( sysfs_write_bdf(gc, spath, pcibdf) < 0 ) {
             LOGE(ERROR, "Couldn't unbind device");
             return -1;
         }
@@ -639,8 +646,8 @@ bool libxl__is_igd_vga_passthru(libxl__gc *gc,
  * already exist.
  */
 
-/* Scan through /sys/.../pciback/slots looking for pci's BDF */
-static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pci)
+/* Scan through /sys/.../pciback/slots looking for BDF */
+static int pciback_dev_has_slot(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 {
     FILE *f;
     int rc = 0;
@@ -654,10 +661,10 @@ static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pci)
     }
 
     while (fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func) == 4) {
-        if (dom == pci->bdf.domain
-            && bus == pci->bdf.bus
-            && dev == pci->bdf.dev
-            && func == pci->bdf.func) {
+        if (dom == pcibdf->domain
+            && bus == pcibdf->bus
+            && dev == pcibdf->dev
+            && func == pcibdf->func) {
             rc = 1;
             goto out;
         }
@@ -667,7 +674,7 @@ out:
     return rc;
 }
 
-static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
+static int pciback_dev_is_assigned(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 {
     char * spath;
     int rc;
@@ -683,8 +690,8 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
     }
 
     spath = GCSPRINTF(SYSFS_PCIBACK_DRIVER"/"PCI_BDF,
-                      pci->bdf.domain, pci->bdf.bus,
-                      pci->bdf.dev, pci->bdf.func);
+                      pcibdf->domain, pcibdf->bus,
+                      pcibdf->dev, pcibdf->func);
     rc = lstat(spath, &st);
 
     if( rc == 0 )
@@ -695,40 +702,40 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
     return -1;
 }
 
-static int pciback_dev_assign(libxl__gc *gc, libxl_device_pci *pci)
+static int pciback_dev_assign(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 {
     int rc;
 
-    if ( (rc = pciback_dev_has_slot(gc, pci)) < 0 ) {
+    if ( (rc = pciback_dev_has_slot(gc, pcibdf)) < 0 ) {
         LOGE(ERROR, "Error checking for pciback slot");
         return ERROR_FAIL;
     } else if (rc == 0) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/new_slot",
-                             pci) < 0 ) {
+                             pcibdf) < 0 ) {
             LOGE(ERROR, "Couldn't bind device to pciback!");
             return ERROR_FAIL;
         }
     }
 
-    if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/bind", pci) < 0 ) {
+    if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/bind", pcibdf) < 0 ) {
         LOGE(ERROR, "Couldn't bind device to pciback!");
         return ERROR_FAIL;
     }
     return 0;
 }
 
-static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
+static int pciback_dev_unassign(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 {
     /* Remove from pciback */
-    if ( sysfs_dev_unbind(gc, pci, NULL) < 0 ) {
+    if ( sysfs_dev_unbind(gc, pcibdf, NULL) < 0 ) {
         LOG(ERROR, "Couldn't unbind device!");
         return ERROR_FAIL;
     }
 
     /* Remove slot if necessary */
-    if ( pciback_dev_has_slot(gc, pci) > 0 ) {
+    if ( pciback_dev_has_slot(gc, pcibdf) > 0 ) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/remove_slot",
-                             pci) < 0 ) {
+                             pcibdf) < 0 ) {
             LOGE(ERROR, "Couldn't remove pciback slot");
             return ERROR_FAIL;
         }
@@ -737,7 +744,7 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
 }
 
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
-                                            libxl_device_pci *pci,
+                                            libxl_pci_bdf *pcibdf,
                                             int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -747,10 +754,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     struct stat st;
 
     /* Local copy for convenience */
-    dom = pci->bdf.domain;
-    bus = pci->bdf.bus;
-    dev = pci->bdf.dev;
-    func = pci->bdf.func;
+    dom = pcibdf->domain;
+    bus = pcibdf->bus;
+    dev = pcibdf->dev;
+    func = pcibdf->func;
 
     /* See if the device exists */
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF, dom, bus, dev, func);
@@ -760,7 +767,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
 
     /* Check to see if it's already assigned to pciback */
-    rc = pciback_dev_is_assigned(gc, pci);
+    rc = pciback_dev_is_assigned(gc, pcibdf);
     if ( rc < 0 ) {
         return ERROR_FAIL;
     }
@@ -770,7 +777,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
 
     /* Check to see if there's already a driver that we need to unbind from */
-    if ( sysfs_dev_unbind(gc, pci, &driver_path ) ) {
+    if ( sysfs_dev_unbind(gc, pcibdf, &driver_path ) ) {
         LOG(ERROR, "Couldn't unbind "PCI_BDF" from driver",
             dom, bus, dev, func);
         return ERROR_FAIL;
@@ -779,9 +786,9 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     /* Store driver_path for rebinding to dom0 */
     if ( rebind ) {
         if ( driver_path ) {
-            pci_info_xs_write(gc, pci, "driver_path", driver_path);
+            pci_info_xs_write(gc, pcibdf, "driver_path", driver_path);
         } else if ( (driver_path =
-                     pci_info_xs_read(gc, pci, "driver_path")) != NULL ) {
+                     pci_info_xs_read(gc, pcibdf, "driver_path")) != NULL ) {
             LOG(INFO, PCI_BDF" not bound to a driver, will be rebound to %s",
                 dom, bus, dev, func, driver_path);
         } else {
@@ -789,10 +796,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
                 dom, bus, dev, func);
         }
     } else {
-        pci_info_xs_remove(gc, pci, "driver_path");
+        pci_info_xs_remove(gc, pcibdf, "driver_path");
     }
 
-    if ( pciback_dev_assign(gc, pci) ) {
+    if ( pciback_dev_assign(gc, pcibdf) ) {
         LOG(ERROR, "Couldn't bind device to pciback!");
         return ERROR_FAIL;
     }
@@ -803,7 +810,7 @@ quarantine:
      * so always pass XEN_DOMCTL_DEV_RDM_RELAXED to avoid assignment being
      * unnecessarily denied.
      */
-    rc = xc_assign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci),
+    rc = xc_assign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pcibdf),
                           XEN_DOMCTL_DEV_RDM_RELAXED);
     if ( rc < 0 ) {
         LOG(ERROR, "failed to quarantine "PCI_BDF, dom, bus, dev, func);
@@ -814,7 +821,7 @@ quarantine:
 }
 
 static int libxl__device_pci_assignable_remove(libxl__gc *gc,
-                                               libxl_device_pci *pci,
+                                               libxl_pci_bdf *pcibdf,
                                                int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -822,24 +829,24 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     char *driver_path;
 
     /* De-quarantine */
-    rc = xc_deassign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci));
+    rc = xc_deassign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pcibdf));
     if ( rc < 0 ) {
-        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pci->bdf.domain, pci->bdf.bus,
-            pci->bdf.dev, pci->bdf.func);
+        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pcibdf->domain,
+            pcibdf->bus, pcibdf->dev, pcibdf->func);
         return ERROR_FAIL;
     }
 
     /* Unbind from pciback */
-    if ( (rc = pciback_dev_is_assigned(gc, pci)) < 0 ) {
+    if ( (rc = pciback_dev_is_assigned(gc, pcibdf)) < 0 ) {
         return ERROR_FAIL;
     } else if ( rc ) {
-        pciback_dev_unassign(gc, pci);
+        pciback_dev_unassign(gc, pcibdf);
     } else {
         LOG(WARN, "Not bound to pciback");
     }
 
     /* Rebind if necessary */
-    driver_path = pci_info_xs_read(gc, pci, "driver_path");
+    driver_path = pci_info_xs_read(gc, pcibdf, "driver_path");
 
     if ( driver_path ) {
         if ( rebind ) {
@@ -847,12 +854,12 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
 
             if ( sysfs_write_bdf(gc,
                                  GCSPRINTF("%s/bind", driver_path),
-                                 pci) < 0 ) {
+                                 pcibdf) < 0 ) {
                 LOGE(ERROR, "Couldn't bind device to %s", driver_path);
                 return -1;
             }
 
-            pci_info_xs_remove(gc, pci, "driver_path");
+            pci_info_xs_remove(gc, pcibdf, "driver_path");
         }
     } else {
         if ( rebind ) {
@@ -864,26 +871,26 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     return 0;
 }
 
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci,
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
                                     int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_add(gc, pci, rebind);
+    rc = libxl__device_pci_assignable_add(gc, pcibdf, rebind);
 
     GC_FREE;
     return rc;
 }
 
 
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci,
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
                                        int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_remove(gc, pci, rebind);
+    rc = libxl__device_pci_assignable_remove(gc, pcibdf, rebind);
 
     GC_FREE;
     return rc;
@@ -1385,7 +1392,7 @@ static void pci_add_dm_done(libxl__egc *egc,
     /* Don't restrict writes to the PCI config space from this VM */
     if (pci->permissive) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/permissive",
-                             pci) < 0 ) {
+                             &pci->bdf) < 0 ) {
             LOGD(ERROR, domainid, "Setting permissive for device");
             rc = ERROR_FAIL;
             goto out;
@@ -1401,7 +1408,8 @@ out_no_irq:
             rc = ERROR_FAIL;
             goto out;
         }
-        r = xc_assign_device(ctx->xch, domid, pci_encode_bdf(pci), flag);
+        r = xc_assign_device(ctx->xch, domid, pci_encode_bdf(&pci->bdf),
+                             flag);
         if (r < 0 && (hvm || errno != ENOSYS)) {
             LOGED(ERROR, domainid, "xc_assign_device failed");
             rc = ERROR_FAIL;
@@ -1480,15 +1488,28 @@ int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
     return AO_INPROGRESS;
 }
 
-static bool libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
+static int is_bdf_in_array(libxl_pci_bdf *pcibdfs, int num,
+                           libxl_pci_bdf *pcibdf)
 {
-    libxl_device_pci *pcis;
+    int i;
+
+    for(i = 0; i < num; i++) {
+        if (COMPARE_BDF(pcibdf, &pcibdfs[i]))
+            break;
+    }
+
+    return i < num;
+}
+
+static bool is_bdf_assignable(libxl_ctx *ctx, libxl_pci_bdf *pcibdf)
+{
+    libxl_pci_bdf *pcibdfs;
     int num;
     bool assignable;
 
-    pcis = libxl_device_pci_assignable_list(ctx, &num);
-    assignable = is_pci_in_array(pcis, num, pci);
-    libxl_device_pci_assignable_list_free(pcis, num);
+    pcibdfs = libxl_device_pci_assignable_list(ctx, &num);
+    assignable = is_bdf_in_array(pcibdfs, num, pcibdf);
+    libxl_device_pci_assignable_list_free(pcibdfs, num);
 
     return assignable;
 }
@@ -1523,7 +1544,8 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     pas->callback = device_pci_add_stubdom_done;
 
     if (libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {
-        rc = xc_test_assign_device(ctx->xch, domid, pci_encode_bdf(pci));
+        rc = xc_test_assign_device(ctx->xch, domid,
+                                   pci_encode_bdf(&pci->bdf));
         if (rc) {
             LOGD(ERROR, domid,
                  "PCI device %04x:%02x:%02x.%u %s?",
@@ -1537,20 +1559,20 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     rc = libxl__device_pci_setdefault(gc, domid, pci, !starting);
     if (rc) goto out;
 
-    if (pci->seize && !pciback_dev_is_assigned(gc, pci)) {
-        rc = libxl__device_pci_assignable_add(gc, pci, 1);
+    if (pci->seize && !pciback_dev_is_assigned(gc, &pci->bdf)) {
+        rc = libxl__device_pci_assignable_add(gc, &pci->bdf, 1);
         if ( rc )
             goto out;
     }
 
-    if (!libxl_pci_assignable(ctx, pci)) {
+    if (!is_bdf_assignable(ctx, &pci->bdf)) {
         LOGD(ERROR, domid, "PCI device %x:%x:%x.%x is not assignable",
              pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         rc = ERROR_FAIL;
         goto out;
     }
 
-    rc = pci_info_xs_write(gc, pci, "domid", GCSPRINTF("%u", domid));
+    rc = pci_info_xs_write(gc, &pci->bdf, "domid", GCSPRINTF("%u", domid));
     if (rc) goto out;
 
     libxl__device_pci_reset(gc, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
@@ -1674,7 +1696,7 @@ static void device_pci_add_done(libxl__egc *egc,
              "PCI device %x:%x:%x.%x (rc %d)",
              pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
              rc);
-        pci_info_xs_remove(gc, pci, "domid");
+        pci_info_xs_remove(gc, &pci->bdf, "domid");
     }
     libxl_device_pci_dispose(pci);
     aodev->rc = rc;
@@ -2114,7 +2136,8 @@ static void pci_remove_detached(libxl__egc *egc,
     }
 
     if (!isstubdom) {
-        rc = xc_deassign_device(CTX->xch, domid, pci_encode_bdf(pci));
+        rc = xc_deassign_device(CTX->xch, domid,
+                                pci_encode_bdf(&pci->bdf));
         if (rc < 0 && (prs->hvm || errno != ENOSYS))
             LOGED(ERROR, domainid, "xc_deassign_device failed");
     }
@@ -2243,7 +2266,7 @@ out:
     libxl__ev_time_deregister(gc, &prs->timeout);
     libxl__ev_time_deregister(gc, &prs->retry_timer);
 
-    if (!rc) pci_info_xs_remove(gc, pci, "domid");
+    if (!rc) pci_info_xs_remove(gc, &pci->bdf, "domid");
 
     libxl_device_pci_dispose(pci);
     aodev->rc = rc;
diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/xenlight_stubs.c
index 352a00134d70..2388f238697c 100644
--- a/tools/ocaml/libs/xl/xenlight_stubs.c
+++ b/tools/ocaml/libs/xl/xenlight_stubs.c
@@ -840,7 +840,7 @@ value stub_xl_device_pci_assignable_add(value ctx, value info, value rebind)
 	device_pci_val(CTX, &c_info, info);
 
 	caml_enter_blocking_section();
-	ret = libxl_device_pci_assignable_add(CTX, &c_info, c_rebind);
+	ret = libxl_device_pci_assignable_add(CTX, &c_info.bdf, c_rebind);
 	caml_leave_blocking_section();
 
 	libxl_device_pci_dispose(&c_info);
@@ -861,7 +861,7 @@ value stub_xl_device_pci_assignable_remove(value ctx, value info, value rebind)
 	device_pci_val(CTX, &c_info, info);
 
 	caml_enter_blocking_section();
-	ret = libxl_device_pci_assignable_remove(CTX, &c_info, c_rebind);
+	ret = libxl_device_pci_assignable_remove(CTX, &c_info.bdf, c_rebind);
 	caml_leave_blocking_section();
 
 	libxl_device_pci_dispose(&c_info);
@@ -876,7 +876,7 @@ value stub_xl_device_pci_assignable_list(value ctx)
 {
 	CAMLparam1(ctx);
 	CAMLlocal2(list, temp);
-	libxl_device_pci *c_list;
+	libxl_pci_bdf *c_list;
 	int i, nb;
 	uint32_t c_domid;
 
@@ -889,11 +889,18 @@ value stub_xl_device_pci_assignable_list(value ctx)
 
 	list = temp = Val_emptylist;
 	for (i = 0; i < nb; i++) {
+		libxl_device_pci pci;
+
+		libxl_device_pci_init(&pci);
+		libxl_pci_bdf_copy(CTX, &pci.bdf, &c_list[i]);
+
 		list = caml_alloc_small(2, Tag_cons);
 		Field(list, 0) = Val_int(0);
 		Field(list, 1) = temp;
 		temp = list;
-		Store_field(list, 0, Val_device_pci(&c_list[i]));
+		Store_field(list, 0, Val_device_pci(&pci));
+
+		libxl_device_pci_dispose(&pci);
 	}
 	libxl_device_pci_assignable_list_free(c_list, nb);
 
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 9c24496cb2dd..37708b4eb14d 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -154,19 +154,19 @@ int main_pciattach(int argc, char **argv)
 
 static void pciassignable_list(void)
 {
-    libxl_device_pci *pcis;
+    libxl_pci_bdf *pcibdfs;
     int num, i;
 
-    pcis = libxl_device_pci_assignable_list(ctx, &num);
+    pcibdfs = libxl_device_pci_assignable_list(ctx, &num);
 
-    if ( pcis == NULL )
+    if ( pcibdfs == NULL )
         return;
     for (i = 0; i < num; i++) {
         printf("%04x:%02x:%02x.%01x\n",
-               pcis[i].bdf.domain, pcis[i].bdf.bus, pcis[i].bdf.dev,
-               pcis[i].bdf.func);
+               pcibdfs[i].domain, pcibdfs[i].bus, pcibdfs[i].dev,
+               pcibdfs[i].func);
     }
-    libxl_device_pci_assignable_list_free(pcis, num);
+    libxl_device_pci_assignable_list_free(pcibdfs, num);
 }
 
 int main_pciassignable_list(int argc, char **argv)
@@ -183,24 +183,24 @@ int main_pciassignable_list(int argc, char **argv)
 
 static int pciassignable_add(const char *bdf, int rebind)
 {
-    libxl_device_pci pci;
+    libxl_pci_bdf pcibdf;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pci);
+    libxl_pci_bdf_init(&pcibdf);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci.bdf, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pcibdf, bdf)) {
         fprintf(stderr, "pci-assignable-add: malformed BDF \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_add(ctx, &pci, rebind))
+    if (libxl_device_pci_assignable_add(ctx, &pcibdf, rebind))
         r = 1;
 
-    libxl_device_pci_dispose(&pci);
+    libxl_pci_bdf_dispose(&pcibdf);
     xlu_cfg_destroy(config);
 
     return r;
@@ -225,24 +225,24 @@ int main_pciassignable_add(int argc, char **argv)
 
 static int pciassignable_remove(const char *bdf, int rebind)
 {
-    libxl_device_pci pci;
+    libxl_pci_bdf pcibdf;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pci);
+    libxl_pci_bdf_init(&pcibdf);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci.bdf, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pcibdf, bdf)) {
         fprintf(stderr, "pci-assignable-remove: malformed BDF \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_remove(ctx, &pci, rebind))
+    if (libxl_device_pci_assignable_remove(ctx, &pcibdf, rebind))
         r = 1;
 
-    libxl_device_pci_dispose(&pci);
+    libxl_pci_bdf_dispose(&pcibdf);
     xlu_cfg_destroy(config);
 
     return r;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:30:49 2020
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 13/23] libxl: use COMPARE_PCI() macro in is_pci_in_array()...
Date: Thu,  3 Dec 2020 14:25:24 +0000
Message-Id: <20201203142534.4017-14-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203142534.4017-1-paul@xen.org>
References: <20201203142534.4017-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... rather than an open-coded equivalent.

This patch tidies up the is_pci_in_array() function, making it take a single
'libxl_device_pci' argument rather than separate domain, bus, device and
function arguments. The already-available COMPARE_PCI() macro can then be
used, and the function is also modified to return 'bool' rather than 'int'.

The patch also modifies libxl_pci_assignable() to use is_pci_in_array()
rather than a separate open-coded equivalent, and likewise changes its
return type from 'int' to 'bool'.

NOTE: The COMPARE_PCI() macro is also fixed to include the 'domain' in its
      comparison, which should always have been the case.
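For illustration, the semantics of the fixed macro and the tidied helper can be sketched in isolation. The 'pci_id' struct below is a simplified stand-in for the relevant libxl_device_pci fields, not the real libxl type:

```c
#include <stdbool.h>

/* Simplified stand-in for the BDF fields of libxl_device_pci. */
typedef struct {
    unsigned int domain, bus, dev, func;
} pci_id;

/* As fixed by this patch: the domain is now part of the comparison. */
#define COMPARE_PCI(a, b) ((a)->domain == (b)->domain && \
                           (a)->bus == (b)->bus &&       \
                           (a)->dev == (b)->dev &&       \
                           (a)->func == (b)->func)

/* The tidied helper: one struct argument, bool return. */
static bool is_pci_in_array(const pci_id *pcis, int num, const pci_id *pci)
{
    int i;

    for (i = 0; i < num; i++) {
        if (COMPARE_PCI(pci, &pcis[i]))
            break;
    }

    return i < num;
}
```

Without the domain in the comparison, two devices at the same bus/dev/func in different PCI segments would wrongly compare equal, which is why the NOTE above matters.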

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_internal.h |  7 +++---
 tools/libs/light/libxl_pci.c      | 38 +++++++++++--------------------
 2 files changed, 17 insertions(+), 28 deletions(-)

diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index ecee61b5419c..02f8a3179c44 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -4746,9 +4746,10 @@ void libxl__xcinfo2xlinfo(libxl_ctx *ctx,
  * devices have same identifier. */
 #define COMPARE_DEVID(a, b) ((a)->devid == (b)->devid)
 #define COMPARE_DISK(a, b) (!strcmp((a)->vdev, (b)->vdev))
-#define COMPARE_PCI(a, b) ((a)->func == (b)->func &&    \
-                           (a)->bus == (b)->bus &&      \
-                           (a)->dev == (b)->dev)
+#define COMPARE_PCI(a, b) ((a)->domain == (b)->domain && \
+                           (a)->bus == (b)->bus &&       \
+                           (a)->dev == (b)->dev &&       \
+                           (a)->func == (b)->func)
 #define COMPARE_USB(a, b) ((a)->ctrl == (b)->ctrl && \
                            (a)->port == (b)->port)
 #define COMPARE_USBCTRL(a, b) ((a)->devid == (b)->devid)
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index e42c2ba56b73..4bf4a168e032 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -336,24 +336,17 @@ retry_transaction2:
     return 0;
 }
 
-static int is_pci_in_array(libxl_device_pci *assigned, int num_assigned,
-                           int dom, int bus, int dev, int func)
+static bool is_pci_in_array(libxl_device_pci *pcis, int num,
+                            libxl_device_pci *pci)
 {
     int i;
 
-    for(i = 0; i < num_assigned; i++) {
-        if ( assigned[i].domain != dom )
-            continue;
-        if ( assigned[i].bus != bus )
-            continue;
-        if ( assigned[i].dev != dev )
-            continue;
-        if ( assigned[i].func != func )
-            continue;
-        return 1;
+    for (i = 0; i < num; i++) {
+        if (COMPARE_PCI(pci, &pcis[i]))
+            break;
     }
 
-    return 0;
+    return i < num;
 }
 
 /* Write the standard BDF into the sysfs path given by sysfs_path. */
@@ -1487,21 +1480,17 @@ int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
     return AO_INPROGRESS;
 }
 
-static int libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
+static bool libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
 {
     libxl_device_pci *pcis;
-    int num, i;
+    int num;
+    bool assignable;
 
     pcis = libxl_device_pci_assignable_list(ctx, &num);
-    for (i = 0; i < num; i++) {
-        if (pcis[i].domain == pci->domain &&
-            pcis[i].bus == pci->bus &&
-            pcis[i].dev == pci->dev &&
-            pcis[i].func == pci->func)
-            break;
-    }
+    assignable = is_pci_in_array(pcis, num, pci);
     libxl_device_pci_assignable_list_free(pcis, num);
-    return i != num;
+
+    return assignable;
 }
 
 static void device_pci_add_stubdom_wait(libxl__egc *egc,
@@ -1834,8 +1823,7 @@ static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
         goto out_fail;
     }
 
-    attached = is_pci_in_array(pcis, num, pci->domain,
-                               pci->bus, pci->dev, pci->func);
+    attached = is_pci_in_array(pcis, num, pci);
     libxl_device_pci_list_free(pcis, num);
 
     rc = ERROR_INVAL;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:30:49 2020
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v5 18/23] libxlu: introduce xlu_pci_parse_spec_string()
Date: Thu,  3 Dec 2020 14:25:29 +0000
Message-Id: <20201203142534.4017-19-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203142534.4017-1-paul@xen.org>
References: <20201203142534.4017-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This patch largely re-writes the code that parses a PCI_SPEC_STRING, which is
now entered via the newly introduced function. The new parser also handles
'bdf' and 'vslot' as non-positional parameters, as per the documentation in
xl-pci-configuration(5).

The existing xlu_pci_parse_bdf() function remains, but now strictly parses
BDF values. Some existing callers of xlu_pci_parse_bdf() are modified to
call xlu_pci_parse_spec_string() instead, as per the documentation in xl(1).
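As a rough illustration of what "strictly parses BDF values" means, a minimal parser for the [DDDD:]BB:DD.F form might look like the sketch below. This is not the libxlu implementation; 'bdf_id' and 'parse_bdf_sketch' are hypothetical names, and the field-width checks are the usual PCI limits rather than libxlu's exact masks:

```c
#include <stdio.h>

/* Hypothetical stand-in for libxl_pci_bdf. */
typedef struct {
    unsigned int domain, bus, dev, func;
} bdf_id;

/* Accept "dddd:bb:dd.f" or "bb:dd.f" (domain defaults to 0);
 * reject trailing junk and out-of-range fields. Returns 0 on success. */
static int parse_bdf_sketch(const char *str, bdf_id *out)
{
    unsigned int dom, bus, dev, func;
    char tail;

    if (sscanf(str, "%x:%x:%x.%x%c", &dom, &bus, &dev, &func, &tail) == 4) {
        /* full domain:bus:dev.func form */
    } else if (sscanf(str, "%x:%x.%x%c", &bus, &dev, &func, &tail) == 3) {
        dom = 0; /* domain omitted, defaults to 0 */
    } else {
        return -1;
    }

    if (dom > 0xffff || bus > 0xff || dev > 0x1f || func > 7)
        return -1;

    out->domain = dom;
    out->bus = bus;
    out->dev = dev;
    out->func = func;
    return 0;
}
```

The trailing '%c' conversion is the trick that makes the parse strict: it only matches if characters remain after the function number, so a valid BDF yields exactly 4 (or 3) conversions and anything with a suffix is rejected.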

NOTE: Usage text in xl_cmdtable.c and error messages are also modified
      appropriately.

Fixes: d25cc3ec93eb ("libxl: workaround gcc 10.2 maybe-uninitialized warning")
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxlutil.h    |   8 +-
 tools/libs/util/libxlu_pci.c | 374 +++++++++++++++++++----------------
 tools/xl/xl_cmdtable.c       |   4 +-
 tools/xl/xl_parse.c          |   4 +-
 tools/xl/xl_pci.c            |  37 ++--
 5 files changed, 230 insertions(+), 197 deletions(-)

diff --git a/tools/include/libxlutil.h b/tools/include/libxlutil.h
index 92e35c546278..cdd6aab4f816 100644
--- a/tools/include/libxlutil.h
+++ b/tools/include/libxlutil.h
@@ -108,10 +108,16 @@ int xlu_disk_parse(XLU_Config *cfg, int nspecs, const char *const *specs,
    * resulting disk struct is used with libxl.
    */
 
+/*
+ * PCI BDF
+ */
+int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_pci_bdf *bdf, const char *str);
+
 /*
  * PCI specification parsing
  */
-int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str);
+int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pci,
+                              const char *str);
 
 /*
  * RDM parsing
diff --git a/tools/libs/util/libxlu_pci.c b/tools/libs/util/libxlu_pci.c
index 5c107f264260..a8b6ce542736 100644
--- a/tools/libs/util/libxlu_pci.c
+++ b/tools/libs/util/libxlu_pci.c
@@ -1,5 +1,7 @@
 #define _GNU_SOURCE
 
+#include <ctype.h>
+
 #include "libxlu_internal.h"
 #include "libxlu_disk_l.h"
 #include "libxlu_disk_i.h"
@@ -9,185 +11,213 @@
 #define XLU__PCI_ERR(_c, _x, _a...) \
     if((_c) && (_c)->report) fprintf((_c)->report, _x, ##_a)
 
-static int hex_convert(const char *str, unsigned int *val, unsigned int mask)
+static int parse_bdf(libxl_pci_bdf *bdfp, uint32_t *vfunc_maskp,
+                     const char *str, const char **endp)
 {
-    unsigned long ret;
-    char *end;
+    const char *ptr = str;
+    unsigned int colons = 0;
+    unsigned int domain, bus, dev, func;
+    int n;
 
-    ret = strtoul(str, &end, 16);
-    if ( end == str || *end != '\0' )
-        return -1;
-    if ( ret & ~mask )
-        return -1;
-    *val = (unsigned int)ret & mask;
-    return 0;
-}
-
-static int pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
-                           unsigned int bus, unsigned int dev,
-                           unsigned int func, unsigned int vdevfn)
-{
-    pci->bdf.domain = domain;
-    pci->bdf.bus = bus;
-    pci->bdf.dev = dev;
-    pci->bdf.func = func;
-    pci->vdevfn = vdevfn;
-    return 0;
-}
-
-#define STATE_DOMAIN    0
-#define STATE_BUS       1
-#define STATE_DEV       2
-#define STATE_FUNC      3
-#define STATE_VSLOT     4
-#define STATE_OPTIONS_K 6
-#define STATE_OPTIONS_V 7
-#define STATE_TERMINAL  8
-#define STATE_TYPE      9
-#define STATE_RDM_STRATEGY      10
-#define STATE_RESERVE_POLICY    11
-#define INVALID         0xffffffff
-int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pci, const char *str)
-{
-    unsigned state = STATE_DOMAIN;
-    unsigned dom = INVALID, bus = INVALID, dev = INVALID, func = INVALID, vslot = 0;
-    char *buf2, *tok, *ptr, *end, *optkey = NULL;
-
-    if ( NULL == (buf2 = ptr = strdup(str)) )
-        return ERROR_NOMEM;
-
-    for(tok = ptr, end = ptr + strlen(ptr) + 1; ptr < end; ptr++) {
-        switch(state) {
-        case STATE_DOMAIN:
-            if ( *ptr == ':' ) {
-                state = STATE_BUS;
-                *ptr = '\0';
-                if ( hex_convert(tok, &dom, 0xffff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_BUS:
-            if ( *ptr == ':' ) {
-                state = STATE_DEV;
-                *ptr = '\0';
-                if ( hex_convert(tok, &bus, 0xff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }else if ( *ptr == '.' ) {
-                state = STATE_FUNC;
-                *ptr = '\0';
-                if ( dom & ~0xff )
-                    goto parse_error;
-                bus = dom;
-                dom = 0;
-                if ( hex_convert(tok, &dev, 0xff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_DEV:
-            if ( *ptr == '.' ) {
-                state = STATE_FUNC;
-                *ptr = '\0';
-                if ( hex_convert(tok, &dev, 0xff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_FUNC:
-            if ( *ptr == '\0' || *ptr == '@' || *ptr == ',' ) {
-                switch( *ptr ) {
-                case '\0':
-                    state = STATE_TERMINAL;
-                    break;
-                case '@':
-                    state = STATE_VSLOT;
-                    break;
-                case ',':
-                    state = STATE_OPTIONS_K;
-                    break;
-                }
-                *ptr = '\0';
-                if ( !strcmp(tok, "*") ) {
-                    pci->vfunc_mask = LIBXL_PCI_FUNC_ALL;
-                }else{
-                    if ( hex_convert(tok, &func, 0x7) )
-                        goto parse_error;
-                    pci->vfunc_mask = (1 << 0);
-                }
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_VSLOT:
-            if ( *ptr == '\0' || *ptr == ',' ) {
-                state = ( *ptr == ',' ) ? STATE_OPTIONS_K : STATE_TERMINAL;
-                *ptr = '\0';
-                if ( hex_convert(tok, &vslot, 0xff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_OPTIONS_K:
-            if ( *ptr == '=' ) {
-                state = STATE_OPTIONS_V;
-                *ptr = '\0';
-                optkey = tok;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_OPTIONS_V:
-            if ( *ptr == ',' || *ptr == '\0' ) {
-                state = (*ptr == ',') ? STATE_OPTIONS_K : STATE_TERMINAL;
-                *ptr = '\0';
-                if ( !strcmp(optkey, "msitranslate") ) {
-                    pci->msitranslate = atoi(tok);
-                }else if ( !strcmp(optkey, "power_mgmt") ) {
-                    pci->power_mgmt = atoi(tok);
-                }else if ( !strcmp(optkey, "permissive") ) {
-                    pci->permissive = atoi(tok);
-                }else if ( !strcmp(optkey, "seize") ) {
-                    pci->seize = atoi(tok);
-                } else if (!strcmp(optkey, "rdm_policy")) {
-                    if (!strcmp(tok, "strict")) {
-                        pci->rdm_policy = LIBXL_RDM_RESERVE_POLICY_STRICT;
-                    } else if (!strcmp(tok, "relaxed")) {
-                        pci->rdm_policy = LIBXL_RDM_RESERVE_POLICY_RELAXED;
-                    } else {
-                        XLU__PCI_ERR(cfg, "%s is not an valid PCI RDM property"
-                                          " policy: 'strict' or 'relaxed'.",
-                                     tok);
-                        goto parse_error;
-                    }
-                } else {
-                    XLU__PCI_ERR(cfg, "Unknown PCI BDF option: %s", optkey);
-                }
-                tok = ptr + 1;
-            }
-        default:
-            break;
-        }
+    /* Count occurrences of ':' to determine presence/absence of the 'domain' */
+    while (isxdigit(*ptr) || *ptr == ':') {
+        if (*ptr == ':')
+            colons++;
+        ptr++;
     }
 
-    if ( tok != ptr || state != STATE_TERMINAL )
-        goto parse_error;
+    ptr = str;
+    switch (colons) {
+    case 1:
+        domain = 0;
+        if (sscanf(ptr, "%x:%x.%n", &bus, &dev, &n) != 2)
+            return ERROR_INVAL;
+        break;
+    case 2:
+        if (sscanf(ptr, "%x:%x:%x.%n", &domain, &bus, &dev, &n) != 3)
+            return ERROR_INVAL;
+        break;
+    default:
+        return ERROR_INVAL;
+    }
 
-    assert(dom != INVALID && bus != INVALID && dev != INVALID && func != INVALID);
+    if (domain > 0xffff || bus > 0xff || dev > 0x1f)
+        return ERROR_INVAL;
 
-    /* Just a pretty way to fill in the values */
-    pci_struct_fill(pci, dom, bus, dev, func, vslot << 3);
+    ptr += n;
+    if (*ptr == '*') {
+        if (!vfunc_maskp)
+            return ERROR_INVAL;
+        *vfunc_maskp = LIBXL_PCI_FUNC_ALL;
+        func = 0;
+        ptr++;
+    } else {
+        if (sscanf(ptr, "%x%n", &func, &n) != 1)
+            return ERROR_INVAL;
+        if (func > 7)
+            return ERROR_INVAL;
+        if (vfunc_maskp)
+            *vfunc_maskp = 1;
+        ptr += n;
+    }
 
-    free(buf2);
+    bdfp->domain = domain;
+    bdfp->bus = bus;
+    bdfp->dev = dev;
+    bdfp->func = func;
+
+    if (endp)
+        *endp = ptr;
 
     return 0;
+}
 
-parse_error:
-    free(buf2);
-    return ERROR_INVAL;
+static int parse_vslot(uint32_t *vdevfnp, const char *str, const char **endp)
+{
+    const char *ptr = str;
+    unsigned int val;
+    int n;
+
+    if (sscanf(ptr, "%x%n", &val, &n) != 1)
+        return ERROR_INVAL;
+
+    if (val > 0x1f)
+        return ERROR_INVAL;
+
+    ptr += n;
+
+    *vdevfnp = val << 3;
+
+    if (endp)
+        *endp = ptr;
+
+    return 0;
+}
+
+static int parse_key_val(char **keyp, char **valp, const char *str,
+                         const char **endp)
+{
+    const char *ptr = str;
+    char *key, *val;
+
+    while (*ptr != '=' && *ptr != '\0')
+        ptr++;
+
+    if (*ptr == '\0')
+        return ERROR_INVAL;
+
+    key = strndup(str, ptr - str);
+    if (!key)
+        return ERROR_NOMEM;
+
+    str = ++ptr; /* skip '=' */
+    while (*ptr != ',' && *ptr != '\0')
+        ptr++;
+
+    val = strndup(str, ptr - str);
+    if (!val) {
+        free(key);
+        return ERROR_NOMEM;
+    }
+
+    if (*ptr == ',')
+        ptr++;
+
+    *keyp = key;
+    *valp = val;
+    *endp = ptr;
+
+    return 0;
+}
+
+static int parse_rdm_policy(XLU_Config *cfg, libxl_rdm_reserve_policy *policy,
+                            const char *str)
+{
+    int ret = libxl_rdm_reserve_policy_from_string(str, policy);
+
+    if (ret)
+        XLU__PCI_ERR(cfg, "Unknown RDM policy: %s", str);
+
+    return ret;
+}
+
+int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_pci_bdf *bdf, const char *str)
+{
+    return parse_bdf(bdf, NULL, str, NULL);
+}
+
+int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pcidev,
+                              const char *str)
+{
+    const char *ptr = str;
+    bool bdf_present = false;
+    int ret;
+
+    /* Attempt to parse 'bdf' as a positional parameter */
+    ret = parse_bdf(&pcidev->bdf, &pcidev->vfunc_mask, ptr, &ptr);
+    if (!ret) {
+        bdf_present = true;
+
+        /* Check whether 'vslot' is present */
+        if (*ptr == '@') {
+            ret = parse_vslot(&pcidev->vdevfn, ++ptr, &ptr);
+            if (ret)
+                return ret;
+        }
+        if (*ptr == ',')
+            ptr++;
+        else if (*ptr != '\0')
+            return ERROR_INVAL;
+    }
+
+    /* Parse the rest as 'key=val' pairs */
+    while (*ptr != '\0') {
+        char *key, *val;
+
+        ret = parse_key_val(&key, &val, ptr, &ptr);
+        if (ret)
+            return ret;
+
+        if (!strcmp(key, "bdf")) {
+            ret = parse_bdf(&pcidev->bdf, &pcidev->vfunc_mask, val, NULL);
+            bdf_present = !ret;
+        } else if (!strcmp(key, "vslot")) {
+            ret = parse_vslot(&pcidev->vdevfn, val, NULL);
+        } else if (!strcmp(key, "permissive")) {
+            pcidev->permissive = atoi(val);
+        } else if (!strcmp(key, "msitranslate")) {
+            pcidev->msitranslate = atoi(val);
+        } else if (!strcmp(key, "seize")) {
+            pcidev->seize = atoi(val);
+        } else if (!strcmp(key, "power_mgmt")) {
+            pcidev->power_mgmt = atoi(val);
+        } else if (!strcmp(key, "rdm_policy")) {
+            ret = parse_rdm_policy(cfg, &pcidev->rdm_policy, val);
+        } else {
+            XLU__PCI_ERR(cfg, "Unknown PCI_SPEC_STRING option: %s", key);
+            ret = ERROR_INVAL;
+        }
+
+        free(key);
+        free(val);
+
+        if (ret)
+            return ret;
+    }
+
+    if (!bdf_present)
+        return ERROR_INVAL;
+
+    return 0;
 }
 
 int xlu_rdm_parse(XLU_Config *cfg, libxl_rdm_reserve *rdm, const char *str)
 {
+#define STATE_TYPE           0
+#define STATE_RDM_STRATEGY   1
+#define STATE_RESERVE_POLICY 2
+#define STATE_TERMINAL       3
+
     unsigned state = STATE_TYPE;
     char *buf2, *tok, *ptr, *end;
 
@@ -227,15 +257,8 @@ int xlu_rdm_parse(XLU_Config *cfg, libxl_rdm_reserve *rdm, const char *str)
             if (*ptr == ',' || *ptr == '\0') {
                 state = *ptr == ',' ? STATE_TYPE : STATE_TERMINAL;
                 *ptr = '\0';
-                if (!strcmp(tok, "strict")) {
-                    rdm->policy = LIBXL_RDM_RESERVE_POLICY_STRICT;
-                } else if (!strcmp(tok, "relaxed")) {
-                    rdm->policy = LIBXL_RDM_RESERVE_POLICY_RELAXED;
-                } else {
-                    XLU__PCI_ERR(cfg, "Unknown RDM property policy value: %s",
-                                 tok);
+                if (!parse_rdm_policy(cfg, &rdm->policy, tok))
                     goto parse_error;
-                }
                 tok = ptr + 1;
             }
         default:
@@ -253,6 +276,11 @@ int xlu_rdm_parse(XLU_Config *cfg, libxl_rdm_reserve *rdm, const char *str)
 parse_error:
     free(buf2);
     return ERROR_INVAL;
+
+#undef STATE_TYPE
+#undef STATE_RDM_STRATEGY
+#undef STATE_RESERVE_POLICY
+#undef STATE_TERMINAL
 }
 
 /*
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 7da6c1b927bb..2ee0c4967334 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -90,12 +90,12 @@ struct cmd_spec cmd_table[] = {
     { "pci-attach",
       &main_pciattach, 0, 1,
       "Insert a new pass-through pci device",
-      "<Domain> <BDF> [Virtual Slot]",
+      "<Domain> <PCI_SPEC_STRING>",
     },
     { "pci-detach",
       &main_pcidetach, 0, 1,
       "Remove a domain's pass-through pci device",
-      "<Domain> <BDF>",
+      "<Domain> <PCI_SPEC_STRING>",
     },
     { "pci-list",
       &main_pcilist, 0, 0,
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 0765780d9f0a..6a4703e745c9 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1487,10 +1487,10 @@ void parse_config_data(const char *config_source,
              * the global policy by default.
              */
             pci->rdm_policy = b_info->u.hvm.rdm.policy;
-            e = xlu_pci_parse_bdf(config, pci, buf);
+            e = xlu_pci_parse_spec_string(config, pci, buf);
             if (e) {
                 fprintf(stderr,
-                        "unable to parse PCI BDF `%s' for passthrough\n",
+                        "unable to parse PCI_SPEC_STRING `%s' for passthrough\n",
                         buf);
                 exit(-e);
             }
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index b6dc7c28401c..9c24496cb2dd 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -55,7 +55,7 @@ int main_pcilist(int argc, char **argv)
     return 0;
 }
 
-static int pcidetach(uint32_t domid, const char *bdf, int force)
+static int pcidetach(uint32_t domid, const char *spec_string, int force)
 {
     libxl_device_pci pci;
     XLU_Config *config;
@@ -66,8 +66,9 @@ static int pcidetach(uint32_t domid, const char *bdf, int force)
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_inig"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
-        fprintf(stderr, "pci-detach: malformed BDF specification \"%s\"\n", bdf);
+    if (xlu_pci_parse_spec_string(config, &pci, spec_string)) {
+        fprintf(stderr, "pci-detach: malformed PCI_SPEC_STRING \"%s\"\n",
+                spec_string);
         exit(2);
     }
     if (force) {
@@ -89,7 +90,7 @@ int main_pcidetach(int argc, char **argv)
     uint32_t domid;
     int opt;
     int force = 0;
-    const char *bdf = NULL;
+    const char *spec_string = NULL;
 
     SWITCH_FOREACH_OPT(opt, "f", NULL, "pci-detach", 2) {
     case 'f':
@@ -98,15 +99,15 @@ int main_pcidetach(int argc, char **argv)
     }
 
     domid = find_domain(argv[optind]);
-    bdf = argv[optind + 1];
+    spec_string = argv[optind + 1];
 
-    if (pcidetach(domid, bdf, force))
+    if (pcidetach(domid, spec_string, force))
         return EXIT_FAILURE;
 
     return EXIT_SUCCESS;
 }
 
-static int pciattach(uint32_t domid, const char *bdf, const char *vs)
+static int pciattach(uint32_t domid, const char *spec_string)
 {
     libxl_device_pci pci;
     XLU_Config *config;
@@ -117,8 +118,9 @@ static int pciattach(uint32_t domid, const char *bdf, const char *vs)
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_inig"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
-        fprintf(stderr, "pci-attach: malformed BDF specification \"%s\"\n", bdf);
+    if (xlu_pci_parse_spec_string(config, &pci, spec_string)) {
+        fprintf(stderr, "pci-attach: malformed PCI_SPEC_STRING \"%s\"\n",
+                spec_string);
         exit(2);
     }
 
@@ -135,19 +137,16 @@ int main_pciattach(int argc, char **argv)
 {
     uint32_t domid;
     int opt;
-    const char *bdf = NULL, *vs = NULL;
+    const char *spec_string = NULL;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "pci-attach", 2) {
         /* No options */
     }
 
     domid = find_domain(argv[optind]);
-    bdf = argv[optind + 1];
+    spec_string = argv[optind + 1];
 
-    if (optind + 1 < argc)
-        vs = argv[optind + 2];
-
-    if (pciattach(domid, bdf, vs))
+    if (pciattach(domid, spec_string))
         return EXIT_FAILURE;
 
     return EXIT_SUCCESS;
@@ -193,8 +192,8 @@ static int pciassignable_add(const char *bdf, int rebind)
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
-        fprintf(stderr, "pci-assignable-add: malformed BDF specification \"%s\"\n", bdf);
+    if (xlu_pci_parse_bdf(config, &pci.bdf, bdf)) {
+        fprintf(stderr, "pci-assignable-add: malformed BDF \"%s\"\n", bdf);
         exit(2);
     }
 
@@ -235,8 +234,8 @@ static int pciassignable_remove(const char *bdf, int rebind)
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
-        fprintf(stderr, "pci-assignable-remove: malformed BDF specification \"%s\"\n", bdf);
+    if (xlu_pci_parse_bdf(config, &pci.bdf, bdf)) {
+        fprintf(stderr, "pci-assignable-remove: malformed BDF \"%s\"\n", bdf);
         exit(2);
     }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:30:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 14:30:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43594.78370 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpdB-0002Ta-BJ; Thu, 03 Dec 2020 14:30:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43594.78370; Thu, 03 Dec 2020 14:30:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpdA-0002TC-Ok; Thu, 03 Dec 2020 14:30:48 +0000
Received: by outflank-mailman (input) for mailman id 43594;
 Thu, 03 Dec 2020 14:30:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkpd7-0002Nu-Ry
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 14:30:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpd7-0006U3-0Q; Thu, 03 Dec 2020 14:30:45 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpYI-00015c-1C; Thu, 03 Dec 2020 14:25:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=ifYrkyLdt9NPLV2krVu1d0tSlTojxLIYXgTSsFop6sA=; b=g6NoKXAa/NhYTPklXue0kR8OSx
	ZhBslP+L3iwD0apYF2wqWT+JdWtfqKPbD0A2bapOs4JSiPxyUQFOk3aAIXlKbDd8+Zdso4Xc4D6r+
	MgIcu30LfUFuul+GNuu+uNz4R+q/VKgVHxFQ16/K4KwhHmJ57L4Xp/fvf9NHUrBzIZ/I=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 10/23] libxl: remove get_all_assigned_devices() from libxl_pci.c
Date: Thu,  3 Dec 2020 14:25:21 +0000
Message-Id: <20201203142534.4017-11-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203142534.4017-1-paul@xen.org>
References: <20201203142534.4017-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Use of this function is a very inefficient way to check whether a device
has already been assigned.

This patch adds code that saves the domain id in xenstore at the point of
assignment, and removes it again when the device is de-assigned (or the
domain is destroyed). It is then straightforward to check whether a device
has been assigned by checking whether it has a saved domain id.

NOTE: To facilitate the xenstore check it is necessary to move the
      pci_info_xs_read() earlier in libxl_pci.c. To keep related functions
      together, the rest of the pci_info_xs_XXX() functions are moved too.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 149 +++++++++++++----------------------
 1 file changed, 55 insertions(+), 94 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 24e79afcaa36..0aecf3a4a8e2 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -336,50 +336,6 @@ retry_transaction2:
     return 0;
 }
 
-static int get_all_assigned_devices(libxl__gc *gc, libxl_device_pci **list, int *num)
-{
-    char **domlist;
-    unsigned int nd = 0, i;
-
-    *list = NULL;
-    *num = 0;
-
-    domlist = libxl__xs_directory(gc, XBT_NULL, "/local/domain", &nd);
-    for(i = 0; i < nd; i++) {
-        char *path, *num_devs;
-
-        path = GCSPRINTF("/local/domain/0/backend/%s/%s/0/num_devs",
-                         libxl__device_kind_to_string(LIBXL__DEVICE_KIND_PCI),
-                         domlist[i]);
-        num_devs = libxl__xs_read(gc, XBT_NULL, path);
-        if ( num_devs ) {
-            int ndev = atoi(num_devs), j;
-            char *devpath, *bdf;
-
-            for(j = 0; j < ndev; j++) {
-                devpath = GCSPRINTF("/local/domain/0/backend/%s/%s/0/dev-%u",
-                                    libxl__device_kind_to_string(LIBXL__DEVICE_KIND_PCI),
-                                    domlist[i], j);
-                bdf = libxl__xs_read(gc, XBT_NULL, devpath);
-                if ( bdf ) {
-                    unsigned dom, bus, dev, func;
-                    if ( sscanf(bdf, PCI_BDF, &dom, &bus, &dev, &func) != 4 )
-                        continue;
-
-                    *list = realloc(*list, sizeof(libxl_device_pci) * ((*num) + 1));
-                    if (*list == NULL)
-                        return ERROR_NOMEM;
-                    pci_struct_fill(*list + *num, dom, bus, dev, func, 0);
-                    (*num)++;
-                }
-            }
-        }
-    }
-    libxl__ptr_add(gc, *list);
-
-    return 0;
-}
-
 static int is_pci_in_array(libxl_device_pci *assigned, int num_assigned,
                            int dom, int bus, int dev, int func)
 {
@@ -427,19 +383,58 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
     return 0;
 }
 
+#define PCI_INFO_PATH "/libxl/pci"
+
+static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node)
+{
+    return node ?
+        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
+                  pci->domain, pci->bus, pci->dev, pci->func,
+                  node) :
+        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
+                  pci->domain, pci->bus, pci->dev, pci->func);
+}
+
+
+static int pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node, const char *val)
+{
+    char *path = pci_info_xs_path(gc, pci, node);
+    int rc = libxl__xs_printf(gc, XBT_NULL, path, "%s", val);
+
+    if (rc) LOGE(WARN, "Write of %s to node %s failed.", val, path);
+
+    return rc;
+}
+
+static char *pci_info_xs_read(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node)
+{
+    char *path = pci_info_xs_path(gc, pci, node);
+
+    return libxl__xs_read(gc, XBT_NULL, path);
+}
+
+static void pci_info_xs_remove(libxl__gc *gc, libxl_device_pci *pci,
+                               const char *node)
+{
+    char *path = pci_info_xs_path(gc, pci, node);
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+
+    /* Remove the xenstore entry */
+    xs_rm(ctx->xsh, XBT_NULL, path);
+}
+
 libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
 {
     GC_INIT(ctx);
-    libxl_device_pci *pcis = NULL, *new, *assigned;
+    libxl_device_pci *pcis = NULL, *new;
     struct dirent *de;
     DIR *dir;
-    int r, num_assigned;
 
     *num = 0;
 
-    r = get_all_assigned_devices(gc, &assigned, &num_assigned);
-    if (r) goto out;
-
     dir = opendir(SYSFS_PCIBACK_DRIVER);
     if (NULL == dir) {
         if (errno == ENOENT) {
@@ -455,9 +450,6 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
         if (sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4)
             continue;
 
-        if (is_pci_in_array(assigned, num_assigned, dom, bus, dev, func))
-            continue;
-
         new = realloc(pcis, ((*num) + 1) * sizeof(*new));
         if (NULL == new)
             continue;
@@ -467,6 +459,10 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
 
         memset(new, 0, sizeof(*new));
         pci_struct_fill(new, dom, bus, dev, func, 0);
+
+        if (pci_info_xs_read(gc, new, "domid")) /* already assigned */
+            continue;
+
         (*num)++;
     }
 
@@ -737,48 +733,6 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
     return 0;
 }
 
-#define PCI_INFO_PATH "/libxl/pci"
-
-static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
-                              const char *node)
-{
-    return node ?
-        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
-                  pci->domain, pci->bus, pci->dev, pci->func,
-                  node) :
-        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
-                  pci->domain, pci->bus, pci->dev, pci->func);
-}
-
-
-static void pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
-                              const char *node, const char *val)
-{
-    char *path = pci_info_xs_path(gc, pci, node);
-
-    if ( libxl__xs_printf(gc, XBT_NULL, path, "%s", val) < 0 ) {
-        LOGE(WARN, "Write of %s to node %s failed.", val, path);
-    }
-}
-
-static char *pci_info_xs_read(libxl__gc *gc, libxl_device_pci *pci,
-                              const char *node)
-{
-    char *path = pci_info_xs_path(gc, pci, node);
-
-    return libxl__xs_read(gc, XBT_NULL, path);
-}
-
-static void pci_info_xs_remove(libxl__gc *gc, libxl_device_pci *pci,
-                               const char *node)
-{
-    char *path = pci_info_xs_path(gc, pci, node);
-    libxl_ctx *ctx = libxl__gc_owner(gc);
-
-    /* Remove the xenstore entry */
-    xs_rm(ctx->xsh, XBT_NULL, path);
-}
-
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
                                             libxl_device_pci *pci,
                                             int rebind)
@@ -1594,6 +1548,9 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
         goto out;
     }
 
+    rc = pci_info_xs_write(gc, pci, "domid", GCSPRINTF("%u", domid));
+    if (rc) goto out;
+
     libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
@@ -1721,6 +1678,7 @@ static void device_pci_add_done(libxl__egc *egc,
              "PCI device %x:%x:%x.%x (rc %d)",
              pci->domain, pci->bus, pci->dev, pci->func,
              rc);
+        pci_info_xs_remove(gc, pci, "domid");
     }
     aodev->rc = rc;
     aodev->callback(egc, aodev);
@@ -2282,6 +2240,9 @@ out:
     libxl__xswait_stop(gc, &prs->xswait);
     libxl__ev_time_deregister(gc, &prs->timeout);
     libxl__ev_time_deregister(gc, &prs->retry_timer);
+
+    if (!rc) pci_info_xs_remove(gc, pci, "domid");
+
     aodev->rc = rc;
     aodev->callback(egc, aodev);
 }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:30:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 14:30:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43595.78384 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpdC-0002WD-CS; Thu, 03 Dec 2020 14:30:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43595.78384; Thu, 03 Dec 2020 14:30:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpdB-0002V5-Ge; Thu, 03 Dec 2020 14:30:49 +0000
Received: by outflank-mailman (input) for mailman id 43595;
 Thu, 03 Dec 2020 14:30:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkpd7-0002Nz-T2
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 14:30:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpd7-0006UB-5W; Thu, 03 Dec 2020 14:30:45 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpYS-00015c-HK; Thu, 03 Dec 2020 14:25:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=uc5JF4Aw3pRQS5p/Kqsvv3JLTlI3UZvcoVyjRuBT19E=; b=ni+xR/2s0ypTiaPZl2MQim2ZYv
	vLUYxnJMgmF0xkH09F7XnADzdfhQZuaanv+P8p3dgr+mLhudocMkUmBCFXwbEI0gPcDJUnecaZTwO
	yakLmhTFRgCY4OyxZ+vcJOe4sUkw7ZpkzCqlQp39tXAEHyZn8pisxEBznfqovzxfe9x4=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 20/23] docs/man: modify xl(1) in preparation for naming of assignable devices
Date: Thu,  3 Dec 2020 14:25:31 +0000
Message-Id: <20201203142534.4017-21-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203142534.4017-1-paul@xen.org>
References: <20201203142534.4017-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

A subsequent patch will introduce code to allow a name to be specified to
'xl pci-assignable-add' such that the assignable device may be referred to
by that name in subsequent operations.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 docs/man/xl.1.pod.in | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index c5fbce3b5c4b..0822a5842835 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -1595,19 +1595,23 @@ List virtual network interfaces for a domain.
 
 =over 4
 
-=item B<pci-assignable-list>
+=item B<pci-assignable-list> [I<-n>]
 
 List all the B<BDF> of assignable PCI devices. See
-L<xl-pci-configuration(5)> for more information.
+L<xl-pci-configuration(5)> for more information. If the -n option is
+specified then any name supplied when the device was made assignable
+will also be displayed.
 
 These are devices in the system which are configured to be
 available for passthrough and are bound to a suitable PCI
 backend driver in domain 0 rather than a real driver.
 
-=item B<pci-assignable-add> I<BDF>
+=item B<pci-assignable-add> [I<-n NAME>] I<BDF>
 
 Make the device at B<BDF> assignable to guests. See
-L<xl-pci-configuration(5)> for more information.
+L<xl-pci-configuration(5)> for more information. If the -n option is
+supplied then the assignable device entry will be named with the
+given B<NAME>.
 
 This will bind the device to the pciback driver and assign it to the
 "quarantine domain".  If it is already bound to a driver, it will
@@ -1622,10 +1626,11 @@ not to do this on a device critical to domain 0's operation, such as
 storage controllers, network interfaces, or GPUs that are currently
 being used.
 
-=item B<pci-assignable-remove> [I<-r>] I<BDF>
+=item B<pci-assignable-remove> [I<-r>] I<BDF>|I<NAME>
 
-Make the device at B<BDF> not assignable to guests. See
-L<xl-pci-configuration(5)> for more information.
+Make a device non-assignable to guests. The device may be identified
+either by its B<BDF> or the B<NAME> supplied when the device was made
+assignable. See L<xl-pci-configuration(5)> for more information.
 
 This will at least unbind the device from pciback, and
 re-assign it from the "quarantine domain" back to domain 0.  If the -r
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:30:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 14:30:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43596.78392 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpdC-0002YY-Qq; Thu, 03 Dec 2020 14:30:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43596.78392; Thu, 03 Dec 2020 14:30:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpdC-0002XW-Bz; Thu, 03 Dec 2020 14:30:50 +0000
Received: by outflank-mailman (input) for mailman id 43596;
 Thu, 03 Dec 2020 14:30:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkpd7-0002Nv-Rz
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 14:30:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpd6-0006Tz-SI; Thu, 03 Dec 2020 14:30:44 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpYJ-00015c-4Q; Thu, 03 Dec 2020 14:25:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=U82VluXy/daAwNKIohur6zA5m770zmrzyBaMk3miTKg=; b=Y8JL4sPSvss8QWcn7AmHKF++Gd
	YxDEcii7JBV2GUzf0H1131hzVyLss7D8H29OSoKKunL2oZiMs+ON4PHyP+0pBz0xHgbf2YGbAFEzo
	lso8Q1iFUQJS2z1w+cgRE+LGljgTkFrn/qraWafDXB5AWrqDJnv3I+/rhNWWomeN3cyQ=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v5 11/23] libxl: make sure callers of libxl_device_pci_list() free the list after use
Date: Thu,  3 Dec 2020 14:25:22 +0000
Message-Id: <20201203142534.4017-12-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203142534.4017-1-paul@xen.org>
References: <20201203142534.4017-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

A previous patch introduced libxl_device_pci_list_free() which should be used
by callers of libxl_device_pci_list() to properly dispose of the exported
'libxl_device_pci' types and free the memory holding them. Whilst all
current callers do ensure the memory is freed, only the code in xl's
pcilist() function actually calls libxl_device_pci_dispose(). As it stands
this laxity does not lead to any memory leaks, but the simple addition of,
e.g., a 'string' into the idl definition of 'libxl_device_pci' would lead
to leaks.

This patch makes sure all callers of libxl_device_pci_list() can call
libxl_device_pci_list_free() by keeping copies of 'libxl_device_pci'
structures inline in 'pci_add_state' and 'pci_remove_state' (and also making
sure these are properly disposed at the end of the operations) rather
than keeping pointers to the structures returned by libxl_device_pci_list().

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libs/light/libxl_pci.c | 68 ++++++++++++++++++++----------------
 tools/xl/xl_pci.c            |  3 +-
 2 files changed, 38 insertions(+), 33 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 0aecf3a4a8e2..5ef37fe8dfef 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1025,7 +1025,7 @@ typedef struct pci_add_state {
     libxl__xswait_state xswait;
     libxl__ev_qmp qmp;
     libxl__ev_time timeout;
-    libxl_device_pci *pci;
+    libxl_device_pci pci;
     libxl_domid pci_domid;
 } pci_add_state;
 
@@ -1097,7 +1097,7 @@ static void pci_add_qemu_trad_watch_state_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
 
     rc = check_qemu_running(gc, domid, xswa, rc, state);
     if (rc == ERROR_NOT_READY)
@@ -1118,7 +1118,7 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
     libxl__ev_qmp *const qmp = &pas->qmp;
 
     rc = libxl__ev_time_register_rel(ao, &pas->timeout,
@@ -1199,7 +1199,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
     int dev_slot, dev_func;
 
     /* Convenience aliases */
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
 
     if (rc) goto out;
 
@@ -1300,7 +1300,7 @@ static void pci_add_dm_done(libxl__egc *egc,
 
     /* Convenience aliases */
     bool starting = pas->starting;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
     bool hvm = libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM;
 
     libxl__ev_qmp_dispose(gc, &pas->qmp);
@@ -1516,7 +1516,10 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     GCNEW(pas);
     pas->aodev = aodev;
     pas->domid = domid;
-    pas->pci = pci;
+
+    libxl_device_pci_copy(CTX, &pas->pci, pci);
+    pci = &pas->pci;
+
     pas->starting = starting;
     pas->callback = device_pci_add_stubdom_done;
 
@@ -1555,12 +1558,6 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
     if (stubdomid != 0) {
-        libxl_device_pci *pci_s;
-
-        GCNEW(pci_s);
-        libxl_device_pci_init(pci_s);
-        libxl_device_pci_copy(CTX, pci_s, pci);
-        pas->pci = pci_s;
         pas->callback = device_pci_add_stubdom_wait;
 
         do_pci_add(egc, stubdomid, pas); /* must be last */
@@ -1619,7 +1616,7 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
 
     if (rc) goto out;
 
@@ -1670,7 +1667,7 @@ static void device_pci_add_done(libxl__egc *egc,
     EGC_GC;
     libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
 
     if (rc) {
         LOGD(ERROR, domid,
@@ -1680,6 +1677,7 @@ static void device_pci_add_done(libxl__egc *egc,
              rc);
         pci_info_xs_remove(gc, pci, "domid");
     }
+    libxl_device_pci_dispose(pci);
     aodev->rc = rc;
     aodev->callback(egc, aodev);
 }
@@ -1770,7 +1768,7 @@ static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
 typedef struct pci_remove_state {
     libxl__ao_device *aodev;
     libxl_domid domid;
-    libxl_device_pci *pci;
+    libxl_device_pci pci;
     bool force;
     bool hvm;
     unsigned int orig_vdev;
@@ -1812,23 +1810,26 @@ static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
 {
     STATE_AO_GC(prs->aodev->ao);
     libxl_ctx *ctx = libxl__gc_owner(gc);
-    libxl_device_pci *assigned;
+    libxl_device_pci *pcis;
+    bool attached;
     uint32_t domid = prs->domid;
     libxl_domain_type type = libxl__domain_type(gc, domid);
-    libxl_device_pci *pci = prs->pci;
+    libxl_device_pci *pci = &prs->pci;
     int rc, num;
     uint32_t domainid = domid;
 
-    assigned = libxl_device_pci_list(ctx, domid, &num);
-    if (assigned == NULL) {
+    pcis = libxl_device_pci_list(ctx, domid, &num);
+    if (!pcis) {
         rc = ERROR_FAIL;
         goto out_fail;
     }
-    libxl__ptr_add(gc, assigned);
+
+    attached = is_pci_in_array(pcis, num, pci->domain,
+                               pci->bus, pci->dev, pci->func);
+    libxl_device_pci_list_free(pcis, num);
 
     rc = ERROR_INVAL;
-    if ( !is_pci_in_array(assigned, num, pci->domain,
-                          pci->bus, pci->dev, pci->func) ) {
+    if (!attached) {
         LOGD(ERROR, domainid, "PCI device not attached to this domain");
         goto out_fail;
     }
@@ -1928,7 +1929,7 @@ static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = prs->domid;
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
 
     rc = check_qemu_running(gc, domid, xswa, rc, state);
     if (rc == ERROR_NOT_READY)
@@ -1950,7 +1951,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     int rc;
 
     /* Convenience aliases */
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
 
     rc = libxl__ev_time_register_rel(ao, &prs->timeout,
                                      pci_remove_timeout,
@@ -2020,7 +2021,7 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl__ao *const ao = prs->aodev->ao;
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
 
     if (rc) goto out;
 
@@ -2075,7 +2076,7 @@ static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
     pci_remove_state *prs = CONTAINER_OF(ev, *prs, timeout);
 
     /* Convenience aliases */
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
 
     LOGD(WARN, prs->domid, "timed out waiting for DM to remove "
          PCI_PT_QDEV_ID, pci->bus, pci->dev, pci->func);
@@ -2096,7 +2097,7 @@ static void pci_remove_detached(libxl__egc *egc,
     bool isstubdom;
 
     /* Convenience aliases */
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
     libxl_domid domid = prs->domid;
 
     /* Cleaning QMP states ASAP */
@@ -2159,7 +2160,7 @@ static void pci_remove_done(libxl__egc *egc,
 
     if (rc) goto out;
 
-    libxl__device_pci_remove_xenstore(gc, prs->domid, prs->pci);
+    libxl__device_pci_remove_xenstore(gc, prs->domid, &prs->pci);
 out:
     device_pci_remove_common_next(egc, prs, rc);
 }
@@ -2177,7 +2178,10 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
     GCNEW(prs);
     prs->aodev = aodev;
     prs->domid = domid;
-    prs->pci = pci;
+
+    libxl_device_pci_copy(CTX, &prs->pci, pci);
+    pci = &prs->pci;
+
     prs->force = force;
     libxl__xswait_init(&prs->xswait);
     libxl__ev_qmp_init(&prs->qmp);
@@ -2212,7 +2216,7 @@ static void device_pci_remove_common_next(libxl__egc *egc,
     EGC_GC;
 
     /* Convenience aliases */
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
     libxl__ao_device *const aodev = prs->aodev;
     const unsigned int pfunc_mask = prs->pfunc_mask;
     const unsigned int orig_vdev = prs->orig_vdev;
@@ -2243,6 +2247,7 @@ out:
 
     if (!rc) pci_info_xs_remove(gc, pci, "domid");
 
+    libxl_device_pci_dispose(pci);
     aodev->rc = rc;
     aodev->callback(egc, aodev);
 }
@@ -2357,7 +2362,6 @@ void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
     pcis = libxl_device_pci_list(CTX, domid, &num);
     if ( pcis == NULL )
         return;
-    libxl__ptr_add(gc, pcis);
 
     for (i = 0; i < num; i++) {
         /* Force remove on shutdown since, on HVM, qemu will not always
@@ -2368,6 +2372,8 @@ void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
         libxl__device_pci_remove_common(egc, domid, pcis + i, true,
                                         aodev);
     }
+
+    libxl_device_pci_list_free(pcis, num);
 }
 
 int libxl__grant_vga_iomem_permission(libxl__gc *gc, const uint32_t domid,
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 34fcf5a4fadf..7c0f102ac7b7 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -35,9 +35,8 @@ static void pcilist(uint32_t domid)
         printf("%02x.%01x %04x:%02x:%02x.%01x\n",
                (pcis[i].vdevfn >> 3) & 0x1f, pcis[i].vdevfn & 0x7,
                pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
-        libxl_device_pci_dispose(&pcis[i]);
     }
-    free(pcis);
+    libxl_device_pci_list_free(pcis, num);
 }
 
 int main_pcilist(int argc, char **argv)
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:30:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 14:30:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43597.78409 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpdE-0002bO-15; Thu, 03 Dec 2020 14:30:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43597.78409; Thu, 03 Dec 2020 14:30:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpdD-0002ag-Ap; Thu, 03 Dec 2020 14:30:51 +0000
Received: by outflank-mailman (input) for mailman id 43597;
 Thu, 03 Dec 2020 14:30:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkpd7-0002OB-Tw
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 14:30:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpd7-0006U5-2A; Thu, 03 Dec 2020 14:30:45 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpYM-00015c-E8; Thu, 03 Dec 2020 14:25:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=meTRAXGIhRqrIoXD7af8zUD3I9uYD6NxtOhZeh5k014=; b=H1/09o+6T9xzVg9iSyyvfHRClk
	hvUownJIFkNxM3mfLUbJ7EcQP9M1qnG/OOrTK9QRoMvnJT2OKfFA0HF8m+KynNFVuk4SM/Mw4u4ke
	VkpjS4JFRG9O7/NX8aHhwhxc9wWbdBM4eKQbhoCO4dacXdB4uOtF4LFfVfxkBrzsOHAg=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 14/23] docs/man: extract documentation of PCI_SPEC_STRING from the xl.cfg manpage...
Date: Thu,  3 Dec 2020 14:25:25 +0000
Message-Id: <20201203142534.4017-15-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203142534.4017-1-paul@xen.org>
References: <20201203142534.4017-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... and put it into a new xl-pci-configuration(5) manpage, akin to the
xl-network-configuration(5) and xl-disk-configuration(5) manpages.

This patch moves the content of the section verbatim. A subsequent patch
will improve the documentation, once it is in its new location.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 docs/man/xl-pci-configuration.5.pod | 78 +++++++++++++++++++++++++++++
 docs/man/xl.cfg.5.pod.in            | 68 +------------------------
 2 files changed, 79 insertions(+), 67 deletions(-)
 create mode 100644 docs/man/xl-pci-configuration.5.pod

diff --git a/docs/man/xl-pci-configuration.5.pod b/docs/man/xl-pci-configuration.5.pod
new file mode 100644
index 000000000000..72a27bd95dec
--- /dev/null
+++ b/docs/man/xl-pci-configuration.5.pod
@@ -0,0 +1,78 @@
+=encoding utf8
+
+=head1 NAME
+
+xl-pci-configuration - XL PCI Configuration Syntax
+
+=head1 SYNTAX
+
+This document specifies the format for B<PCI_SPEC_STRING> which is used by
+the L<xl.cfg(5)> pci configuration option, and related L<xl(1)> commands.
+
+Each B<PCI_SPEC_STRING> has the form of
+B<[DDDD:]BB:DD.F[@VSLOT],KEY=VALUE,KEY=VALUE,...> where:
+
+=over 4
+
+=item B<[DDDD:]BB:DD.F>
+
+Identifies the PCI device from the host perspective in the domain
+(B<DDDD>), Bus (B<BB>), Device (B<DD>) and Function (B<F>) syntax. This is
+the same scheme as used in the output of B<lspci(1)> for the device in
+question.
+
+Note: by default B<lspci(1)> will omit the domain (B<DDDD>) if it
+is zero and it is optional here also. You may specify the function
+(B<F>) as B<*> to indicate all functions.
+
+=item B<@VSLOT>
+
+Specifies the virtual slot where the guest will see this
+device. This is equivalent to the B<DD> which the guest sees. In a
+guest B<DDDD> and B<BB> are C<0000:00>.
+
+=item B<permissive=BOOLEAN>
+
+By default pciback only allows PV guests to write "known safe" values
+into PCI configuration space, likewise QEMU (both qemu-xen and
+qemu-xen-traditional) imposes the same constraint on HVM guests.
+However, many devices require writes to other areas of the configuration space
+in order to operate properly.  This option tells the backend (pciback or QEMU)
+to allow all writes to the PCI configuration space of this device by this
+domain.
+
+B<This option should be enabled with caution:> it gives the guest much
+more control over the device, which may have security or stability
+implications.  It is recommended to only enable this option for
+trusted VMs under administrator's control.
+
+=item B<msitranslate=BOOLEAN>
+
+Specifies that MSI-INTx translation should be turned on for the PCI
+device. When enabled, MSI-INTx translation will always enable MSI on
+the PCI device regardless of whether the guest uses INTx or MSI. Some
+device drivers, such as NVIDIA's, detect an inconsistency and do not
+function when this option is enabled. Therefore the default is false (0).
+
+=item B<seize=BOOLEAN>
+
+Tells B<xl> to automatically attempt to re-assign a device to
+pciback if it is not already assigned.
+
+B<WARNING:> If you set this option, B<xl> will gladly re-assign a critical
+system device, such as a network or a disk controller being used by
+dom0 without confirmation.  Please use with care.
+
+=item B<power_mgmt=BOOLEAN>
+
+B<(HVM only)> Specifies that the VM should be able to program the
+D0-D3hot power management states for the PCI device. The default is false (0).
+
+=item B<rdm_policy=STRING>
+
+B<(HVM/x86 only)> This is the same as the policy setting inside the B<rdm>
+option but just specific to a given device. The default is "relaxed".
+
+Note: this would override global B<rdm> option.
+
+=back
diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 0532739c1fff..b00644e852f9 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -1101,73 +1101,7 @@ option is valid only when the B<controller> option is specified.
 =item B<pci=[ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...]>
 
 Specifies the host PCI devices to passthrough to this guest.
-Each B<PCI_SPEC_STRING> has the form of
-B<[DDDD:]BB:DD.F[@VSLOT],KEY=VALUE,KEY=VALUE,...> where:
-
-=over 4
-
-=item B<[DDDD:]BB:DD.F>
-
-Identifies the PCI device from the host perspective in the domain
-(B<DDDD>), Bus (B<BB>), Device (B<DD>) and Function (B<F>) syntax. This is
-the same scheme as used in the output of B<lspci(1)> for the device in
-question.
-
-Note: by default B<lspci(1)> will omit the domain (B<DDDD>) if it
-is zero and it is optional here also. You may specify the function
-(B<F>) as B<*> to indicate all functions.
-
-=item B<@VSLOT>
-
-Specifies the virtual slot where the guest will see this
-device. This is equivalent to the B<DD> which the guest sees. In a
-guest B<DDDD> and B<BB> are C<0000:00>.
-
-=item B<permissive=BOOLEAN>
-
-By default pciback only allows PV guests to write "known safe" values
-into PCI configuration space, likewise QEMU (both qemu-xen and
-qemu-xen-traditional) imposes the same constraint on HVM guests.
-However, many devices require writes to other areas of the configuration space
-in order to operate properly.  This option tells the backend (pciback or QEMU)
-to allow all writes to the PCI configuration space of this device by this
-domain.
-
-B<This option should be enabled with caution:> it gives the guest much
-more control over the device, which may have security or stability
-implications.  It is recommended to only enable this option for
-trusted VMs under administrator's control.
-
-=item B<msitranslate=BOOLEAN>
-
-Specifies that MSI-INTx translation should be turned on for the PCI
-device. When enabled, MSI-INTx translation will always enable MSI on
-the PCI device regardless of whether the guest uses INTx or MSI. Some
-device drivers, such as NVIDIA's, detect an inconsistency and do not
-function when this option is enabled. Therefore the default is false (0).
-
-=item B<seize=BOOLEAN>
-
-Tells B<xl> to automatically attempt to re-assign a device to
-pciback if it is not already assigned.
-
-B<WARNING:> If you set this option, B<xl> will gladly re-assign a critical
-system device, such as a network or a disk controller being used by
-dom0 without confirmation.  Please use with care.
-
-=item B<power_mgmt=BOOLEAN>
-
-B<(HVM only)> Specifies that the VM should be able to program the
-D0-D3hot power management states for the PCI device. The default is false (0).
-
-=item B<rdm_policy=STRING>
-
-B<(HVM/x86 only)> This is the same as the policy setting inside the B<rdm>
-option but just specific to a given device. The default is "relaxed".
-
-Note: this would override global B<rdm> option.
-
-=back
+See L<xl-pci-configuration(5)> for more details.
 
 =item B<pci_permissive=BOOLEAN>
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:30:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 14:30:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43598.78422 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpdF-0002fi-7z; Thu, 03 Dec 2020 14:30:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43598.78422; Thu, 03 Dec 2020 14:30:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpdE-0002eW-Kg; Thu, 03 Dec 2020 14:30:52 +0000
Received: by outflank-mailman (input) for mailman id 43598;
 Thu, 03 Dec 2020 14:30:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkpd8-0002OL-0F
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 14:30:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpd6-0006Tv-Od; Thu, 03 Dec 2020 14:30:44 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpYP-00015c-E8; Thu, 03 Dec 2020 14:25:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=Ys1YmJE+Q9gs5LArk4V1GUH0NJCrETXal1NFINF1CWc=; b=l1WJsCCwqvvJOuHaAhqe6xE0c7
	r8ZNQ4p//F5i8xC20Gwa3Q7WY4fciEq+hgpFI1gZmi9luQ0BBVwTFdeemdab76NeddDtpdPefkUoD
	pXuODZANcPEQIJL+juRtXtdHrX/0sLGMqOFnrHBZq1je7I8e6bKRTodGFlA54M+j9fZo=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Nick Rosbrook <rosbrookn@ainfosec.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v5 17/23] libxl: introduce 'libxl_pci_bdf' in the idl...
Date: Thu,  3 Dec 2020 14:25:28 +0000
Message-Id: <20201203142534.4017-18-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203142534.4017-1-paul@xen.org>
References: <20201203142534.4017-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... and use it in 'libxl_device_pci'

This patch is preparatory work for restricting the type passed to functions
that only require BDF information, rather than passing a 'libxl_device_pci'
structure which is only partially filled. In this patch only the minimal
mechanical changes necessary to deal with the structural changes are made.
Subsequent patches will adjust the code to make better use of the new type.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/golang/xenlight/helpers.gen.go |  77 ++++++++++----
 tools/golang/xenlight/types.gen.go   |   8 +-
 tools/include/libxl.h                |   6 ++
 tools/libs/light/libxl_dm.c          |   8 +-
 tools/libs/light/libxl_internal.h    |   3 +-
 tools/libs/light/libxl_pci.c         | 148 +++++++++++++--------------
 tools/libs/light/libxl_types.idl     |  16 +--
 tools/libs/util/libxlu_pci.c         |   8 +-
 tools/xl/xl_pci.c                    |   6 +-
 tools/xl/xl_sxp.c                    |   4 +-
 10 files changed, 167 insertions(+), 117 deletions(-)

diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index c8605994e7fe..b7230f693c69 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -1999,6 +1999,41 @@ xc.colo_checkpoint_port = C.CString(x.ColoCheckpointPort)}
  return nil
  }
 
+// NewPciBdf returns an instance of PciBdf initialized with defaults.
+func NewPciBdf() (*PciBdf, error) {
+var (
+x PciBdf
+xc C.libxl_pci_bdf)
+
+C.libxl_pci_bdf_init(&xc)
+defer C.libxl_pci_bdf_dispose(&xc)
+
+if err := x.fromC(&xc); err != nil {
+return nil, err }
+
+return &x, nil}
+
+func (x *PciBdf) fromC(xc *C.libxl_pci_bdf) error {
+ x.Func = byte(xc._func)
+x.Dev = byte(xc.dev)
+x.Bus = byte(xc.bus)
+x.Domain = int(xc.domain)
+
+ return nil}
+
+func (x *PciBdf) toC(xc *C.libxl_pci_bdf) (err error){defer func(){
+if err != nil{
+C.libxl_pci_bdf_dispose(xc)}
+}()
+
+xc._func = C.uint8_t(x.Func)
+xc.dev = C.uint8_t(x.Dev)
+xc.bus = C.uint8_t(x.Bus)
+xc.domain = C.int(x.Domain)
+
+ return nil
+ }
+
 // NewDevicePci returns an instance of DevicePci initialized with defaults.
 func NewDevicePci() (*DevicePci, error) {
 var (
@@ -2014,10 +2049,9 @@ return nil, err }
 return &x, nil}
 
 func (x *DevicePci) fromC(xc *C.libxl_device_pci) error {
- x.Func = byte(xc._func)
-x.Dev = byte(xc.dev)
-x.Bus = byte(xc.bus)
-x.Domain = int(xc.domain)
+ if err := x.Bdf.fromC(&xc.bdf);err != nil {
+return fmt.Errorf("converting field Bdf: %v", err)
+}
 x.Vdevfn = uint32(xc.vdevfn)
 x.VfuncMask = uint32(xc.vfunc_mask)
 x.Msitranslate = bool(xc.msitranslate)
@@ -2033,10 +2067,9 @@ if err != nil{
 C.libxl_device_pci_dispose(xc)}
 }()
 
-xc._func = C.uint8_t(x.Func)
-xc.dev = C.uint8_t(x.Dev)
-xc.bus = C.uint8_t(x.Bus)
-xc.domain = C.int(x.Domain)
+if err := x.Bdf.toC(&xc.bdf); err != nil {
+return fmt.Errorf("converting field Bdf: %v", err)
+}
 xc.vdevfn = C.uint32_t(x.Vdevfn)
 xc.vfunc_mask = C.uint32_t(x.VfuncMask)
 xc.msitranslate = C.bool(x.Msitranslate)
@@ -2766,13 +2799,13 @@ if err := x.Nics[i].fromC(&v); err != nil {
 return fmt.Errorf("converting field Nics: %v", err) }
 }
 }
-x.Pcidevs = nil
-if n := int(xc.num_pcidevs); n > 0 {
-cPcidevs := (*[1<<28]C.libxl_device_pci)(unsafe.Pointer(xc.pcidevs))[:n:n]
-x.Pcidevs = make([]DevicePci, n)
-for i, v := range cPcidevs {
-if err := x.Pcidevs[i].fromC(&v); err != nil {
-return fmt.Errorf("converting field Pcidevs: %v", err) }
+x.Pcis = nil
+if n := int(xc.num_pcis); n > 0 {
+cPcis := (*[1<<28]C.libxl_device_pci)(unsafe.Pointer(xc.pcis))[:n:n]
+x.Pcis = make([]DevicePci, n)
+for i, v := range cPcis {
+if err := x.Pcis[i].fromC(&v); err != nil {
+return fmt.Errorf("converting field Pcis: %v", err) }
 }
 }
 x.Rdms = nil
@@ -2922,13 +2955,13 @@ return fmt.Errorf("converting field Nics: %v", err)
 }
 }
 }
-if numPcidevs := len(x.Pcidevs); numPcidevs > 0 {
-xc.pcidevs = (*C.libxl_device_pci)(C.malloc(C.ulong(numPcidevs)*C.sizeof_libxl_device_pci))
-xc.num_pcidevs = C.int(numPcidevs)
-cPcidevs := (*[1<<28]C.libxl_device_pci)(unsafe.Pointer(xc.pcidevs))[:numPcidevs:numPcidevs]
-for i,v := range x.Pcidevs {
-if err := v.toC(&cPcidevs[i]); err != nil {
-return fmt.Errorf("converting field Pcidevs: %v", err)
+if numPcis := len(x.Pcis); numPcis > 0 {
+xc.pcis = (*C.libxl_device_pci)(C.malloc(C.ulong(numPcis)*C.sizeof_libxl_device_pci))
+xc.num_pcis = C.int(numPcis)
+cPcis := (*[1<<28]C.libxl_device_pci)(unsafe.Pointer(xc.pcis))[:numPcis:numPcis]
+for i,v := range x.Pcis {
+if err := v.toC(&cPcis[i]); err != nil {
+return fmt.Errorf("converting field Pcis: %v", err)
 }
 }
 }
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index b4c5df0f2c5c..bc62ae8ce9d1 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -707,11 +707,15 @@ ColoCheckpointHost string
 ColoCheckpointPort string
 }
 
-type DevicePci struct {
+type PciBdf struct {
 Func byte
 Dev byte
 Bus byte
 Domain int
+}
+
+type DevicePci struct {
+Bdf PciBdf
 Vdevfn uint32
 VfuncMask uint32
 Msitranslate bool
@@ -896,7 +900,7 @@ CInfo DomainCreateInfo
 BInfo DomainBuildInfo
 Disks []DeviceDisk
 Nics []DeviceNic
-Pcidevs []DevicePci
+Pcis []DevicePci
 Rdms []DeviceRdm
 Dtdevs []DeviceDtdev
 Vfbs []DeviceVfb
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 8225809d94a8..5edacccbd1da 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -463,6 +463,12 @@
  */
 #define LIBXL_HAVE_DEVICE_PCI_ASSIGNABLE_LIST_FREE 1
 
+/*
+ * LIBXL_HAVE_PCI_BDF indicates that the 'libxl_pci_bdf' type is defined and
+ * is embedded in the 'libxl_device_pci' type.
+ */
+#define LIBXL_HAVE_PCI_BDF 1
+
 /*
  * libxl ABI compatibility
  *
diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 8ebe1b60c9d7..a25bf23834a2 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -472,10 +472,10 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
     for (i = 0; i < d_config->num_pcis; i++) {
         unsigned int n, nr_entries;
 
-        seg = d_config->pcis[i].domain;
-        bus = d_config->pcis[i].bus;
-        devfn = PCI_DEVFN(d_config->pcis[i].dev,
-                          d_config->pcis[i].func);
+        seg = d_config->pcis[i].bdf.domain;
+        bus = d_config->pcis[i].bdf.bus;
+        devfn = PCI_DEVFN(d_config->pcis[i].bdf.dev,
+                          d_config->pcis[i].bdf.func);
         nr_entries = 0;
         rc = libxl__xc_device_get_rdm(gc, 0,
                                       seg, bus, devfn, &nr_entries, &xrdm);
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 02f8a3179c44..da12d922099c 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -4746,10 +4746,11 @@ void libxl__xcinfo2xlinfo(libxl_ctx *ctx,
  * devices have same identifier. */
 #define COMPARE_DEVID(a, b) ((a)->devid == (b)->devid)
 #define COMPARE_DISK(a, b) (!strcmp((a)->vdev, (b)->vdev))
-#define COMPARE_PCI(a, b) ((a)->domain == (b)->domain && \
+#define COMPARE_BDF(a, b) ((a)->domain == (b)->domain && \
                            (a)->bus == (b)->bus &&       \
                            (a)->dev == (b)->dev &&       \
                            (a)->func == (b)->func)
+#define COMPARE_PCI(a, b) COMPARE_BDF(&((a)->bdf), &((b)->bdf))
 #define COMPARE_USB(a, b) ((a)->ctrl == (b)->ctrl && \
                            (a)->port == (b)->port)
 #define COMPARE_USBCTRL(a, b) ((a)->devid == (b)->devid)
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 4bf4a168e032..e0bcab4ee840 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -29,10 +29,10 @@ static unsigned int pci_encode_bdf(libxl_device_pci *pci)
 {
     unsigned int value;
 
-    value = pci->domain << 16;
-    value |= (pci->bus & 0xff) << 8;
-    value |= (pci->dev & 0x1f) << 3;
-    value |= (pci->func & 0x7);
+    value = pci->bdf.domain << 16;
+    value |= (pci->bdf.bus & 0xff) << 8;
+    value |= (pci->bdf.dev & 0x1f) << 3;
+    value |= (pci->bdf.func & 0x7);
 
     return value;
 }
@@ -41,10 +41,10 @@ static void pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
                             unsigned int bus, unsigned int dev,
                             unsigned int func, unsigned int vdevfn)
 {
-    pci->domain = domain;
-    pci->bus = bus;
-    pci->dev = dev;
-    pci->func = func;
+    pci->bdf.domain = domain;
+    pci->bdf.bus = bus;
+    pci->bdf.dev = dev;
+    pci->bdf.func = func;
     pci->vdevfn = vdevfn;
 }
 
@@ -54,9 +54,9 @@ static void libxl_create_pci_backend_device(libxl__gc *gc,
                                             const libxl_device_pci *pci)
 {
     flexarray_append(back, GCSPRINTF("key-%d", num));
-    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->domain, pci->bus, pci->dev, pci->func));
+    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func));
     flexarray_append(back, GCSPRINTF("dev-%d", num));
-    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->domain, pci->bus, pci->dev, pci->func));
+    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func));
     if (pci->vdevfn)
         flexarray_append_pair(back, GCSPRINTF("vdevfn-%d", num), GCSPRINTF("%x", pci->vdevfn));
     flexarray_append(back, GCSPRINTF("opts-%d", num));
@@ -250,8 +250,8 @@ static int libxl__device_pci_remove_xenstore(libxl__gc *gc, uint32_t domid, libx
         unsigned int domain = 0, bus = 0, dev = 0, func = 0;
         xsdev = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/dev-%d", be_path, i));
         sscanf(xsdev, PCI_BDF, &domain, &bus, &dev, &func);
-        if (domain == pci->domain && bus == pci->bus &&
-            pci->dev == dev && pci->func == func) {
+        if (domain == pci->bdf.domain && bus == pci->bdf.bus &&
+            pci->bdf.dev == dev && pci->bdf.func == func) {
             break;
         }
     }
@@ -362,8 +362,8 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
         return ERROR_FAIL;
     }
 
-    buf = GCSPRINTF(PCI_BDF, pci->domain, pci->bus,
-                    pci->dev, pci->func);
+    buf = GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus,
+                    pci->bdf.dev, pci->bdf.func);
     rc = write(fd, buf, strlen(buf));
     /* Annoying to have two if's, but we need the errno */
     if (rc < 0)
@@ -383,10 +383,10 @@ static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
 {
     return node ?
         GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
-                  pci->domain, pci->bus, pci->dev, pci->func,
+                  pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
                   node) :
         GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
-                  pci->domain, pci->bus, pci->dev, pci->func);
+                  pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 }
 
 
@@ -484,10 +484,10 @@ static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
     struct stat st;
 
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/driver",
-                           pci->domain,
-                           pci->bus,
-                           pci->dev,
-                           pci->func);
+                           pci->bdf.domain,
+                           pci->bdf.bus,
+                           pci->bdf.dev,
+                           pci->bdf.func);
     if ( !lstat(spath, &st) ) {
         /* Find the canonical path to the driver. */
         dp = libxl__zalloc(gc, PATH_MAX);
@@ -517,7 +517,7 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pci)
 {
     char *pci_device_vendor_path =
             GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/vendor",
-                      pci->domain, pci->bus, pci->dev, pci->func);
+                      pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     uint16_t read_items;
     uint16_t pci_device_vendor;
 
@@ -525,7 +525,7 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pci)
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have vendor attribute",
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         return 0xffff;
     }
     read_items = fscanf(f, "0x%hx\n", &pci_device_vendor);
@@ -533,7 +533,7 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pci)
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read vendor of pci device "PCI_BDF,
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         return 0xffff;
     }
 
@@ -544,7 +544,7 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pci)
 {
     char *pci_device_device_path =
             GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/device",
-                      pci->domain, pci->bus, pci->dev, pci->func);
+                      pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     uint16_t read_items;
     uint16_t pci_device_device;
 
@@ -552,7 +552,7 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pci)
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have device attribute",
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         return 0xffff;
     }
     read_items = fscanf(f, "0x%hx\n", &pci_device_device);
@@ -560,7 +560,7 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pci)
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read device of pci device "PCI_BDF,
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         return 0xffff;
     }
 
@@ -571,14 +571,14 @@ static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pci,
                                unsigned long *class)
 {
     char *pci_device_class_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/class",
-                     pci->domain, pci->bus, pci->dev, pci->func);
+                     pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     int read_items, ret = 0;
 
     FILE *f = fopen(pci_device_class_path, "r");
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have class attribute",
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         ret = ERROR_FAIL;
         goto out;
     }
@@ -587,7 +587,7 @@ static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pci,
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read class of pci device "PCI_BDF,
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         ret = ERROR_FAIL;
     }
 
@@ -654,10 +654,10 @@ static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pci)
     }
 
     while (fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func) == 4) {
-        if (dom == pci->domain
-            && bus == pci->bus
-            && dev == pci->dev
-            && func == pci->func) {
+        if (dom == pci->bdf.domain
+            && bus == pci->bdf.bus
+            && dev == pci->bdf.dev
+            && func == pci->bdf.func) {
             rc = 1;
             goto out;
         }
@@ -683,8 +683,8 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
     }
 
     spath = GCSPRINTF(SYSFS_PCIBACK_DRIVER"/"PCI_BDF,
-                      pci->domain, pci->bus,
-                      pci->dev, pci->func);
+                      pci->bdf.domain, pci->bdf.bus,
+                      pci->bdf.dev, pci->bdf.func);
     rc = lstat(spath, &st);
 
     if( rc == 0 )
@@ -747,10 +747,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     struct stat st;
 
     /* Local copy for convenience */
-    dom = pci->domain;
-    bus = pci->bus;
-    dev = pci->dev;
-    func = pci->func;
+    dom = pci->bdf.domain;
+    bus = pci->bdf.bus;
+    dev = pci->bdf.dev;
+    func = pci->bdf.func;
 
     /* See if the device exists */
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF, dom, bus, dev, func);
@@ -824,8 +824,8 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     /* De-quarantine */
     rc = xc_deassign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci));
     if ( rc < 0 ) {
-        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pci->domain, pci->bus,
-            pci->dev, pci->func);
+        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pci->bdf.domain, pci->bdf.bus,
+            pci->bdf.dev, pci->bdf.func);
         return ERROR_FAIL;
     }
 
@@ -914,11 +914,11 @@ static int pci_multifunction_check(libxl__gc *gc, libxl_device_pci *pci, unsigne
 
         if ( sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4 )
             continue;
-        if ( pci->domain != dom )
+        if ( pci->bdf.domain != dom )
             continue;
-        if ( pci->bus != bus )
+        if ( pci->bdf.bus != bus )
             continue;
-        if ( pci->dev != dev )
+        if ( pci->bdf.dev != dev )
             continue;
 
         path = GCSPRINTF("%s/" PCI_BDF, SYSFS_PCIBACK_DRIVER, dom, bus, dev, func);
@@ -967,13 +967,13 @@ static int qemu_pci_add_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/parameter");
     if (pci->vdevfn) {
         libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF_VDEVFN","PCI_OPTIONS,
-                         pci->domain, pci->bus, pci->dev,
-                         pci->func, pci->vdevfn, pci->msitranslate,
+                         pci->bdf.domain, pci->bdf.bus, pci->bdf.dev,
+                         pci->bdf.func, pci->vdevfn, pci->msitranslate,
                          pci->power_mgmt);
     } else {
         libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF","PCI_OPTIONS,
-                         pci->domain,  pci->bus, pci->dev,
-                         pci->func, pci->msitranslate, pci->power_mgmt);
+                         pci->bdf.domain,  pci->bdf.bus, pci->bdf.dev,
+                         pci->bdf.func, pci->msitranslate, pci->power_mgmt);
     }
 
     libxl__qemu_traditional_cmd(gc, domid, "pci-ins");
@@ -1132,10 +1132,10 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
     libxl__qmp_param_add_string(gc, &args, "driver",
                                 "xen-pci-passthrough");
     QMP_PARAMETERS_SPRINTF(&args, "id", PCI_PT_QDEV_ID,
-                           pci->bus, pci->dev, pci->func);
+                           pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     QMP_PARAMETERS_SPRINTF(&args, "hostaddr",
-                           "%04x:%02x:%02x.%01x", pci->domain,
-                           pci->bus, pci->dev, pci->func);
+                           "%04x:%02x:%02x.%01x", pci->bdf.domain,
+                           pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     if (pci->vdevfn) {
         QMP_PARAMETERS_SPRINTF(&args, "addr", "%x.%x",
                                PCI_SLOT(pci->vdevfn),
@@ -1223,7 +1223,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
      */
 
     asked_id = GCSPRINTF(PCI_PT_QDEV_ID,
-                         pci->bus, pci->dev, pci->func);
+                         pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     for (i = 0; (bus = libxl__json_array_get(response, i)); i++) {
         devices = libxl__json_map_get("devices", bus, JSON_ARRAY);
@@ -1314,8 +1314,8 @@ static void pci_add_dm_done(libxl__egc *egc,
     if (isstubdom)
         starting = false;
 
-    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->domain,
-                           pci->bus, pci->dev, pci->func);
+    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->bdf.domain,
+                           pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     f = fopen(sysfs_path, "r");
     start = end = flags = size = 0;
     irq = 0;
@@ -1355,8 +1355,8 @@ static void pci_add_dm_done(libxl__egc *egc,
         }
     }
     fclose(f);
-    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
-                                pci->bus, pci->dev, pci->func);
+    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->bdf.domain,
+                                pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     f = fopen(sysfs_path, "r");
     if (f == NULL) {
         LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
@@ -1527,7 +1527,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
         if (rc) {
             LOGD(ERROR, domid,
                  "PCI device %04x:%02x:%02x.%u %s?",
-                 pci->domain, pci->bus, pci->dev, pci->func,
+                 pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
                  errno == EOPNOTSUPP ? "cannot be assigned - no IOMMU"
                  : "already assigned to a different guest");
             goto out;
@@ -1545,7 +1545,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
 
     if (!libxl_pci_assignable(ctx, pci)) {
         LOGD(ERROR, domid, "PCI device %x:%x:%x.%x is not assignable",
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         rc = ERROR_FAIL;
         goto out;
     }
@@ -1553,7 +1553,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     rc = pci_info_xs_write(gc, pci, "domid", GCSPRINTF("%u", domid));
     if (rc) goto out;
 
-    libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
+    libxl__device_pci_reset(gc, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
     if (stubdomid != 0) {
@@ -1634,13 +1634,13 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
         pci->vfunc_mask &= pfunc_mask;
         /* so now vfunc_mask == pfunc_mask */
     }else{
-        pfunc_mask = (1 << pci->func);
+        pfunc_mask = (1 << pci->bdf.func);
     }
 
     for (rc = 0, i = 7; i >= 0; --i) {
         if ( (1 << i) & pfunc_mask ) {
             if ( pci->vfunc_mask == pfunc_mask ) {
-                pci->func = i;
+                pci->bdf.func = i;
                 pci->vdevfn = orig_vdev | i;
             } else {
                 /* if not passing through multiple devices in a block make
@@ -1672,7 +1672,7 @@ static void device_pci_add_done(libxl__egc *egc,
         LOGD(ERROR, domid,
              "libxl__device_pci_add  failed for "
              "PCI device %x:%x:%x.%x (rc %d)",
-             pci->domain, pci->bus, pci->dev, pci->func,
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
              rc);
         pci_info_xs_remove(gc, pci, "domid");
     }
@@ -1741,8 +1741,8 @@ static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/state");
     state = libxl__xs_read(gc, XBT_NULL, path);
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/parameter");
-    libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF, pci->domain,
-                     pci->bus, pci->dev, pci->func);
+    libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF, pci->bdf.domain,
+                     pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     /* Remove all functions at once atomically by only signalling
      * device-model for function 0 */
@@ -1856,8 +1856,8 @@ static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
     } else {
         assert(type == LIBXL_DOMAIN_TYPE_PV);
 
-        char *sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->domain,
-                                     pci->bus, pci->dev, pci->func);
+        char *sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->bdf.domain,
+                                     pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         FILE *f = fopen(sysfs_path, "r");
         unsigned int start = 0, end = 0, flags = 0, size = 0;
         int irq = 0;
@@ -1892,8 +1892,8 @@ static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
         }
         fclose(f);
 skip1:
-        sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
-                               pci->bus, pci->dev, pci->func);
+        sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->bdf.domain,
+                               pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         f = fopen(sysfs_path, "r");
         if (f == NULL) {
             LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
@@ -1957,7 +1957,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     if (rc) goto out;
 
     QMP_PARAMETERS_SPRINTF(&args, "id", PCI_PT_QDEV_ID,
-                           pci->bus, pci->dev, pci->func);
+                           pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     prs->qmp.callback = pci_remove_qmp_device_del_cb;
     rc = libxl__ev_qmp_send(egc, &prs->qmp, "device_del", args);
     if (rc) goto out;
@@ -2026,7 +2026,7 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
     libxl__ev_qmp_dispose(gc, qmp);
 
     asked_id = GCSPRINTF(PCI_PT_QDEV_ID,
-                         pci->bus, pci->dev, pci->func);
+                         pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     /* query-pci response:
      * [{ 'devices': [ 'qdev_id': 'str', ...  ], ... }]
@@ -2077,7 +2077,7 @@ static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
     libxl_device_pci *const pci = &prs->pci;
 
     LOGD(WARN, prs->domid, "timed out waiting for DM to remove "
-         PCI_PT_QDEV_ID, pci->bus, pci->dev, pci->func);
+         PCI_PT_QDEV_ID, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     /* If we timed out, we might still want to keep destroying the device
      * (when force==true), so let the next function decide what to do on
@@ -2110,7 +2110,7 @@ static void pci_remove_detached(libxl__egc *egc,
 
     /* don't do multiple resets while some functions are still passed through */
     if ((pci->vdevfn & 0x7) == 0) {
-        libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
+        libxl__device_pci_reset(gc, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     }
 
     if (!isstubdom) {
@@ -2198,7 +2198,7 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
         }
         pci->vfunc_mask &= prs->pfunc_mask;
     } else {
-        prs->pfunc_mask = (1 << pci->func);
+        prs->pfunc_mask = (1 << pci->bdf.func);
     }
 
     rc = 0;
@@ -2226,7 +2226,7 @@ static void device_pci_remove_common_next(libxl__egc *egc,
         prs->next_func--;
         if ( (1 << i) & pfunc_mask ) {
             if ( pci->vfunc_mask == pfunc_mask ) {
-                pci->func = i;
+                pci->bdf.func = i;
                 pci->vdevfn = orig_vdev | i;
             } else {
                 pci->vdevfn = orig_vdev;
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 20f8dd7cfa5d..2c441142fba6 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -769,18 +769,22 @@ libxl_device_nic = Struct("device_nic", [
     ("colo_checkpoint_port", string)
     ])
 
+libxl_pci_bdf = Struct("pci_bdf", [
+    ("func", uint8),
+    ("dev", uint8),
+    ("bus", uint8),
+    ("domain", integer),
+    ])
+
 libxl_device_pci = Struct("device_pci", [
-    ("func",      uint8),
-    ("dev",       uint8),
-    ("bus",       uint8),
-    ("domain",    integer),
-    ("vdevfn",    uint32),
+    ("bdf", libxl_pci_bdf),
+    ("vdevfn", uint32),
     ("vfunc_mask", uint32),
     ("msitranslate", bool),
     ("power_mgmt", bool),
     ("permissive", bool),
     ("seize", bool),
-    ("rdm_policy",      libxl_rdm_reserve_policy),
+    ("rdm_policy", libxl_rdm_reserve_policy),
     ])
 
 libxl_device_rdm = Struct("device_rdm", [
diff --git a/tools/libs/util/libxlu_pci.c b/tools/libs/util/libxlu_pci.c
index 1d38fffce357..5c107f264260 100644
--- a/tools/libs/util/libxlu_pci.c
+++ b/tools/libs/util/libxlu_pci.c
@@ -27,10 +27,10 @@ static int pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
                            unsigned int bus, unsigned int dev,
                            unsigned int func, unsigned int vdevfn)
 {
-    pci->domain = domain;
-    pci->bus = bus;
-    pci->dev = dev;
-    pci->func = func;
+    pci->bdf.domain = domain;
+    pci->bdf.bus = bus;
+    pci->bdf.dev = dev;
+    pci->bdf.func = func;
     pci->vdevfn = vdevfn;
     return 0;
 }
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index f71498cbb570..b6dc7c28401c 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -34,7 +34,8 @@ static void pcilist(uint32_t domid)
     for (i = 0; i < num; i++) {
         printf("%02x.%01x %04x:%02x:%02x.%01x\n",
                (pcis[i].vdevfn >> 3) & 0x1f, pcis[i].vdevfn & 0x7,
-               pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
+               pcis[i].bdf.domain, pcis[i].bdf.bus, pcis[i].bdf.dev,
+               pcis[i].bdf.func);
     }
     libxl_device_pci_list_free(pcis, num);
 }
@@ -163,7 +164,8 @@ static void pciassignable_list(void)
         return;
     for (i = 0; i < num; i++) {
         printf("%04x:%02x:%02x.%01x\n",
-               pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
+               pcis[i].bdf.domain, pcis[i].bdf.bus, pcis[i].bdf.dev,
+               pcis[i].bdf.func);
     }
     libxl_device_pci_assignable_list_free(pcis, num);
 }
diff --git a/tools/xl/xl_sxp.c b/tools/xl/xl_sxp.c
index b03e348ffb9a..95180b60df5b 100644
--- a/tools/xl/xl_sxp.c
+++ b/tools/xl/xl_sxp.c
@@ -194,8 +194,8 @@ void printf_info_sexp(int domid, libxl_domain_config *d_config, FILE *fh)
         fprintf(fh, "\t(device\n");
         fprintf(fh, "\t\t(pci\n");
         fprintf(fh, "\t\t\t(pci dev %04x:%02x:%02x.%01x@%02x)\n",
-               d_config->pcis[i].domain, d_config->pcis[i].bus,
-               d_config->pcis[i].dev, d_config->pcis[i].func,
+               d_config->pcis[i].bdf.domain, d_config->pcis[i].bdf.bus,
+               d_config->pcis[i].bdf.dev, d_config->pcis[i].bdf.func,
                d_config->pcis[i].vdevfn);
         fprintf(fh, "\t\t\t(opts msitranslate %d power_mgmt %d)\n",
                d_config->pcis[i].msitranslate,
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:30:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 14:30:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43599.78434 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpdG-0002jH-NP; Thu, 03 Dec 2020 14:30:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43599.78434; Thu, 03 Dec 2020 14:30:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpdF-0002hb-NI; Thu, 03 Dec 2020 14:30:53 +0000
Received: by outflank-mailman (input) for mailman id 43599;
 Thu, 03 Dec 2020 14:30:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkpd7-0002OJ-VV
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 14:30:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpd7-0006U7-46; Thu, 03 Dec 2020 14:30:45 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpYO-00015c-7h; Thu, 03 Dec 2020 14:25:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=dE8muqjMtVe9zxnJuQ+Eu3D+iUwfNz4L9IpDVriPtNY=; b=7DJui7TaJkE7mL6qNfdq1V0M44
	kAq4RQSwsv9iKtc0PnC9QKT3yIn1875Hvvf+r95QuL1Telp9DFqmkUvOI82zsebNGrR4BcLg0FWga
	k+McLTgaY23Vhh/n6b7tWHLPZKP3O8xwwbcdcn6p15iwDw2/9YY5v3iRWlmJxqXzLTEw=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 16/23] docs/man: fix xl(1) documentation for 'pci' operations
Date: Thu,  3 Dec 2020 14:25:27 +0000
Message-Id: <20201203142534.4017-17-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203142534.4017-1-paul@xen.org>
References: <20201203142534.4017-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Currently the documentation completely fails to mention the existence of
PCI_SPEC_STRING. This patch tidies things up, specifically clarifying that
'pci-assignable-add/remove' take <BDF> arguments, whereas 'pci-attach/detach'
take <PCI_SPEC_STRING> arguments (which will be enforced in a subsequent
patch).

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 docs/man/xl.1.pod.in | 28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index f92bacfa7277..c5fbce3b5c4b 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -1597,14 +1597,18 @@ List virtual network interfaces for a domain.
 
 =item B<pci-assignable-list>
 
-List all the assignable PCI devices.
+List the B<BDF> of all assignable PCI devices. See
+L<xl-pci-configuration(5)> for more information.
+
 These are devices in the system which are configured to be
 available for passthrough and are bound to a suitable PCI
 backend driver in domain 0 rather than a real driver.
 
 =item B<pci-assignable-add> I<BDF>
 
-Make the device at PCI Bus/Device/Function BDF assignable to guests.
+Make the device at B<BDF> assignable to guests. See
+L<xl-pci-configuration(5)> for more information.
+
 This will bind the device to the pciback driver and assign it to the
 "quarantine domain".  If it is already bound to a driver, it will
 first be unbound, and the original driver stored so that it can be
@@ -1620,8 +1624,10 @@ being used.
 
 =item B<pci-assignable-remove> [I<-r>] I<BDF>
 
-Make the device at PCI Bus/Device/Function BDF not assignable to
-guests.  This will at least unbind the device from pciback, and
+Make the device at B<BDF> not assignable to guests. See
+L<xl-pci-configuration(5)> for more information.
+
+This will at least unbind the device from pciback, and
 re-assign it from the "quarantine domain" back to domain 0.  If the -r
 option is specified, it will also attempt to re-bind the device to its
 original driver, making it usable by Domain 0 again.  If the device is
@@ -1637,15 +1643,15 @@ As always, this should only be done if you trust the guest, or are
 confident that the particular device you're re-assigning to dom0 will
 cancel all in-flight DMA on FLR.
 
-=item B<pci-attach> I<domain-id> I<BDF>
+=item B<pci-attach> I<domain-id> I<PCI_SPEC_STRING>
 
-Hot-plug a new pass-through pci device to the specified domain.
-B<BDF> is the PCI Bus/Device/Function of the physical device to pass-through.
+Hot-plug a new pass-through pci device to the specified domain. See
+L<xl-pci-configuration(5)> for more information.
 
-=item B<pci-detach> [I<OPTIONS>] I<domain-id> I<BDF>
+=item B<pci-detach> [I<OPTIONS>] I<domain-id> I<PCI_SPEC_STRING>
 
-Hot-unplug a previously assigned pci device from a domain. B<BDF> is the PCI
-Bus/Device/Function of the physical device to be removed from the guest domain.
+Hot-unplug a pci device that was previously passed through to a domain. See
+L<xl-pci-configuration(5)> for more information.
 
 B<OPTIONS>
 
@@ -1660,7 +1666,7 @@ even without guest domain's collaboration.
 
 =item B<pci-list> I<domain-id>
 
-List pass-through pci devices for a domain.
+List the B<BDF> of pci devices passed through to a domain.
 
 =back
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:30:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 14:30:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43600.78446 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpdI-0002oZ-EH; Thu, 03 Dec 2020 14:30:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43600.78446; Thu, 03 Dec 2020 14:30:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpdH-0002nI-GO; Thu, 03 Dec 2020 14:30:55 +0000
Received: by outflank-mailman (input) for mailman id 43600;
 Thu, 03 Dec 2020 14:30:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkpd8-0002OQ-2L
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 14:30:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpd7-0006UP-Al; Thu, 03 Dec 2020 14:30:45 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpYN-00015c-Av; Thu, 03 Dec 2020 14:25:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=9oBaH3TvR/OzX4i2SsbbWQZTQ5vUhqb0tdWkNXX9QLM=; b=wVGoqrqLKUGWEZucvgQbCCcZmL
	H1iG+g4oUvDLS0VJRFdbfjOLBBieQ18DmSDyNGFWsDVms6xpEqwtlD+MjJ4AF+Gp86Li0z1AwZyY9
	f0dTaozTgpMp9M3hMeCe5DPpJa0dg26ahmPsUdU6oTBk24cGLOe9W6ZKKwBWtVASIrfg=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 15/23] docs/man: improve documentation of PCI_SPEC_STRING...
Date: Thu,  3 Dec 2020 14:25:26 +0000
Message-Id: <20201203142534.4017-16-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203142534.4017-1-paul@xen.org>
References: <20201203142534.4017-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... and prepare for adding support for non-positional parsing of 'bdf' and
'vslot' in a subsequent patch.

Also document 'BDF' as a first-class parameter type and fix the documentation
to state that the default value of 'rdm_policy' is actually 'strict', not
'relaxed', as can be seen in libxl__device_pci_setdefault().

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
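As an aside for reviewers: the positional/named equivalence documented here
(bare tokens filling 'bdf' then 'vslot', with '@<vslot>' attachable to a
positional bdf) can be sketched as below. This is an illustration of the
documented grammar only, not xl's C parser; bare keyword flags are left out
for brevity and all names are invented for the example.

```python
def parse_pci_spec(spec):
    """Sketch of the documented PCI_SPEC_STRING rules: bare tokens fill
    the positional parameters (bdf, then vslot) in order, and any
    parameter may instead be named as <key>=<value>."""
    positional = ["bdf", "vslot"]
    params = {}

    def assign(key, value):
        # Each parameter may be specified at most once, positionally
        # or by name.
        if key in params:
            raise ValueError("duplicate parameter: %s" % key)
        if key in positional:
            positional.remove(key)
        params[key] = value

    for token in spec.split(","):
        if not token:
            continue          # empty value: the default applies
        if "=" in token:
            key, _, value = token.partition("=")
            assign(key, value)
        elif positional and positional[0] == "bdf" and "@" in token:
            bdf, _, vslot = token.partition("@")  # '<bdf>@<vslot>' form
            assign("bdf", bdf)
            assign("vslot", vslot)
        elif positional:
            assign(positional[0], token)
        else:
            raise ValueError("unexpected positional parameter: %s" % token)
    return params
```

With this sketch the three example strings in the patch all parse to the same
result.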
 docs/man/xl-pci-configuration.5.pod | 187 +++++++++++++++++++++++-----
 1 file changed, 153 insertions(+), 34 deletions(-)

diff --git a/docs/man/xl-pci-configuration.5.pod b/docs/man/xl-pci-configuration.5.pod
index 72a27bd95dec..4dd73bc498d6 100644
--- a/docs/man/xl-pci-configuration.5.pod
+++ b/docs/man/xl-pci-configuration.5.pod
@@ -6,32 +6,105 @@ xl-pci-configuration - XL PCI Configuration Syntax
 
 =head1 SYNTAX
 
-This document specifies the format for B<PCI_SPEC_STRING> which is used by
-the L<xl.cfg(5)> pci configuration option, and related L<xl(1)> commands.
+This document specifies the format for B<BDF> and B<PCI_SPEC_STRING> which are
+used by the L<xl.cfg(5)> pci configuration option, and related L<xl(1)>
+commands.
 
-Each B<PCI_SPEC_STRING> has the form of
-B<[DDDD:]BB:DD.F[@VSLOT],KEY=VALUE,KEY=VALUE,...> where:
+A B<BDF> has the following form:
+
+    [DDDD:]BB:SS.F
+
+B<DDDD> is the domain number, B<BB> is the bus number, B<SS> is the device (or
+slot) number, and B<F> is the function number. This is the same scheme as
+used in the output of L<lspci(1)> for the device in question. By default
+L<lspci(1)> will omit the domain (B<DDDD>) if it is zero and hence a zero
+value for domain may also be omitted when specifying a B<BDF>.
+
+Each B<PCI_SPEC_STRING> has one of the following forms:
 
 =over 4
 
-=item B<[DDDD:]BB:DD.F>
+    <bdf>[@<vslot>],[<key>=<value>,]*
+    [<key>=<value>,]*
 
-Identifies the PCI device from the host perspective in the domain
-(B<DDDD>), Bus (B<BB>), Device (B<DD>) and Function (B<F>) syntax. This is
-the same scheme as used in the output of B<lspci(1)> for the device in
-question.
+=back
 
-Note: by default B<lspci(1)> will omit the domain (B<DDDD>) if it
-is zero and it is optional here also. You may specify the function
-(B<F>) as B<*> to indicate all functions.
+For example, these strings are equivalent:
 
-=item B<@VSLOT>
+=over 4
 
-Specifies the virtual slot where the guest will see this
-device. This is equivalent to the B<DD> which the guest sees. In a
-guest B<DDDD> and B<BB> are C<0000:00>.
+    36:00.0@20,seize=1
+    36:00.0,vslot=20,seize=1
+    bdf=36:00.0,vslot=20,seize=1
 
-=item B<permissive=BOOLEAN>
+=back
+
+More formally, the string is a series of comma-separated keyword/value
+pairs, flags and positional parameters.  Parameters which are not bare
+keywords and which do not contain "=" symbols are assigned to the
+positional parameters, in the order specified below.  The positional
+parameters may also be specified by name.
+
+Each parameter may be specified at most once, either as a positional
+parameter or a named parameter.  Default values apply if the parameter
+is not specified, or if it is specified with an empty value (whether
+positionally or explicitly).
+
+B<NOTE>: In context of B<xl pci-detach> (see L<xl(1)>), parameters other than
+B<bdf> will be ignored.
+
+=head1 Positional Parameters
+
+=over 4
+
+=item B<bdf>=I<BDF>
+
+=over 4
+
+=item Description
+
+This identifies the PCI device from the host perspective.
+
+In the context of a B<PCI_SPEC_STRING> you may specify the function (B<F>) as
+B<*> to indicate all functions of a multi-function device.
+
+=item Default Value
+
+None. This parameter is mandatory as it identifies the device.
+
+=back
+
+=item B<vslot>=I<NUMBER>
+
+=over 4
+
+=item Description
+
+Specifies the virtual slot (device) number where the guest will see this
+device. For example, running L<lspci(1)> in a Linux guest where B<vslot>
+was specified as C<8> would identify the device as C<00:08.0>. Virtual domain
+and bus numbers are always 0.
+
+B<NOTE:> This parameter is always parsed as a hexadecimal value.
+
+=item Default Value
+
+None. This parameter is not mandatory. An available B<vslot> will be selected
+if this parameter is not specified.
+
+=back
+
+=back
+
+=head1 Other Parameters and Flags
+
+=over 4
+
+=item B<permissive>=I<BOOLEAN>
+
+=over 4
+
+=item Description
 
 By default pciback only allows PV guests to write "known safe" values
 into PCI configuration space, likewise QEMU (both qemu-xen and
@@ -46,33 +119,79 @@ more control over the device, which may have security or stability
 implications.  It is recommended to only enable this option for
 trusted VMs under administrator's control.
 
-=item B<msitranslate=BOOLEAN>
+=item Default Value
+
+0
+
+=back
+
+=item B<msitranslate>=I<BOOLEAN>
+
+=over 4
+
+=item Description
 
 Specifies that MSI-INTx translation should be turned on for the PCI
 device. When enabled, MSI-INTx translation will always enable MSI on
-the PCI device regardless of whether the guest uses INTx or MSI. Some
-device drivers, such as NVIDIA's, detect an inconsistency and do not
+the PCI device regardless of whether the guest uses INTx or MSI.
+
+=item Default Value
+
+Some device drivers, such as NVIDIA's, detect an inconsistency and do not
 function when this option is enabled. Therefore the default is false (0).
 
-=item B<seize=BOOLEAN>
+=back
 
-Tells B<xl> to automatically attempt to re-assign a device to
-pciback if it is not already assigned.
+=item B<seize>=I<BOOLEAN>
 
-B<WARNING:> If you set this option, B<xl> will gladly re-assign a critical
+=over 4
+
+=item Description
+
+Tells L<xl(1)> to automatically attempt to make the device assignable to
+guests if that has not already been done by the B<pci-assignable-add>
+command.
+
+B<WARNING:> If you set this option, L<xl(1)> will gladly re-assign a critical
 system device, such as a network or a disk controller being used by
 dom0 without confirmation.  Please use with care.
 
-=item B<power_mgmt=BOOLEAN>
+=item Default Value
 
-B<(HVM only)> Specifies that the VM should be able to program the
-D0-D3hot power management states for the PCI device. The default is false (0).
-
-=item B<rdm_policy=STRING>
-
-B<(HVM/x86 only)> This is the same as the policy setting inside the B<rdm>
-option but just specific to a given device. The default is "relaxed".
-
-Note: this would override global B<rdm> option.
+0
+
+=back
+
+=item B<power_mgmt>=I<BOOLEAN>
+
+=over 4
+
+=item Description
+
+B<(HVM only)> Specifies that the VM should be able to program the
+D0-D3hot power management states for the PCI device.
+
+=item Default Value
+
+0
+
+=back
+
+=item B<rdm_policy>=I<STRING>
+
+=over 4
+
+=item Description
+
+B<(HVM/x86 only)> This is the same as the policy setting inside the B<rdm>
+option in L<xl.cfg(5)> but just specific to a given device.
+
+B<NOTE>: This overrides the global B<rdm> option.
+
+=item Default Value
+
+"strict"
+
+=back
 
 =back
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:30:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 14:30:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43601.78458 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpdK-0002uc-7C; Thu, 03 Dec 2020 14:30:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43601.78458; Thu, 03 Dec 2020 14:30:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpdJ-0002tI-CF; Thu, 03 Dec 2020 14:30:57 +0000
Received: by outflank-mailman (input) for mailman id 43601;
 Thu, 03 Dec 2020 14:30:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkpd8-0002OV-3j
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 14:30:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpd7-0006UL-8T; Thu, 03 Dec 2020 14:30:45 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpYU-00015c-KU; Thu, 03 Dec 2020 14:25:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=/V+Xsx5DyAJ7DrcbKg6+KBwsTjqitxu0fA7ODPAM4l8=; b=hVb/32lK4Spdb9YCTzgyRAKXwV
	HJL9l1y0sJYCte4jBiAllLWIzCAH5IIY5Vv768LWwoiA/CuY2zdF0GvMWhxPQ+gUSlEPiy49Romu2
	hwOtL4THRkREvAw3pcX6ygPOLjQELLltXAulGbUBmzA+37tBaSIeeF//51T0Hb4LX+W8=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 22/23] docs/man: modify xl-pci-configuration(5) to add 'name' field to PCI_SPEC_STRING
Date: Thu,  3 Dec 2020 14:25:33 +0000
Message-Id: <20201203142534.4017-23-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203142534.4017-1-paul@xen.org>
References: <20201203142534.4017-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Since assignable devices can be named, a subsequent patch will support use
of a PCI_SPEC_STRING containing a 'name' parameter instead of a 'bdf'. In
this case the name will be used to look up the 'bdf' in the list of assignable
(or assigned) devices.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
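As an aside for reviewers: the bdf/name exclusivity rule this patch documents
(exactly one of the two must be present, with 'name' later resolved to a BDF
via the assignable-device list) can be sketched as below. Illustrative only;
the real check lives in xl's C parsing code and the function name here is
invented.

```python
def check_device_identifier(params):
    """Sketch of the documented rule: exactly one of 'bdf' or 'name'
    must identify the device; giving both, or neither, is an error.
    Returns which identifier was supplied."""
    has_bdf = "bdf" in params
    has_name = "name" in params
    if has_bdf == has_name:
        raise ValueError("specify exactly one of 'bdf' or 'name'")
    return "bdf" if has_bdf else "name"
```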
 docs/man/xl-pci-configuration.5.pod | 25 +++++++++++++++++++++++--
 1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/docs/man/xl-pci-configuration.5.pod b/docs/man/xl-pci-configuration.5.pod
index 4dd73bc498d6..db3360307cbd 100644
--- a/docs/man/xl-pci-configuration.5.pod
+++ b/docs/man/xl-pci-configuration.5.pod
@@ -51,7 +51,7 @@ is not specified, or if it is specified with an empty value (whether
 positionally or explicitly).
 
 B<NOTE>: In context of B<xl pci-detach> (see L<xl(1)>), parameters other than
-B<bdf> will be ignored.
+B<bdf> or B<name> will be ignored.
 
 =head1 Positional Parameters
 
@@ -70,7 +70,11 @@ B<*> to indicate all functions of a multi-function device.
 
 =item Default Value
 
-None. This parameter is mandatory as it identifies the device.
+None. This parameter is mandatory in its positional form. As a non-positional
+parameter it is also mandatory unless a B<name> parameter is present, in
+which case B<bdf> must not be present since the B<name> will be used to find
+the B<bdf> in the list of assignable devices. See L<xl(1)> for more information
+on naming assignable devices.
 
 =back
 
@@ -194,4 +198,21 @@ B<NOTE>: This overrides the global B<rdm> option.
 
 =back
 
+=item B<name>=I<STRING>
+
+=over 4
+
+=item Description
+
+This is the name given when the B<BDF> was made assignable. See L<xl(1)> for
+more information on naming assignable devices.
+
+=item Default Value
+
+None. This parameter must not be present if a B<bdf> parameter is present.
+If a B<bdf> parameter is not present then B<name> is mandatory as it is
+required to look up the B<BDF> in the list of assignable devices.
+
+=back
+
 =back
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:31:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 14:31:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43602.78472 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpdN-00035O-Lg; Thu, 03 Dec 2020 14:31:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43602.78472; Thu, 03 Dec 2020 14:31:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpdM-00033j-KP; Thu, 03 Dec 2020 14:31:00 +0000
Received: by outflank-mailman (input) for mailman id 43602;
 Thu, 03 Dec 2020 14:30:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kkpd9-0002QS-Hb
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 14:30:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpd7-0006UZ-CG; Thu, 03 Dec 2020 14:30:45 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kkpYT-00015c-Nk; Thu, 03 Dec 2020 14:25:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=uWAkzfhC4Qc4xLlsCYYZdM4MM7cEitdjLXrUIr0/wNY=; b=hYLaCHieifCW2bTrde73H9lIA/
	mtT6dnVJLmhqd1wol6VWGlL2b56MGAvKcfQ6TEUqRqNfnEieAQHZu1aKm6jW5p1XCLbB5jCjGyrgh
	wFTARSiZm7wZj4yQcsWgTc0MI2qdCW1mlHRYC0TKQjw+VKw54E0fFO0tn4bD7jDKrdLo=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	David Scott <dave@recoil.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v5 21/23] xl / libxl: support naming of assignable devices
Date: Thu,  3 Dec 2020 14:25:32 +0000
Message-Id: <20201203142534.4017-22-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201203142534.4017-1-paul@xen.org>
References: <20201203142534.4017-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This patch modifies libxl_device_pci_assignable_add() to take an optional
'name' argument, which (if supplied) is saved into xenstore and can hence be
used to refer to the now-assignable BDF in subsequent operations. To
facilitate this, a new libxl_device_pci_assignable_name2bdf() function is
added.

The xl code is modified to allow a name to be specified in the
'pci-assignable-add' operation and also allow an option to be specified to
'pci-assignable-list' requesting that names be displayed. The latter is
facilitated by a new libxl_device_pci_assignable_bdf2name() function. Finally
xl 'pci-assignable-remove' is modified so that either a name or BDF can be
supplied. The supplied 'identifier' is first assumed to be a name, but if
libxl_device_pci_assignable_name2bdf() fails to find a matching BDF the
identifier itself will be parsed as a BDF. Names may only include printable
characters and may not include whitespace.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Acked-by: Christian Lindig <christian.lindig@citrix.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: David Scott <dave@recoil.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>

v4:
 - Fix uninitialized return value in libxl_device_pci_assignable_name2bdf()
   that was discovered in CI
---
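As an aside for reviewers: the name sanitisation added to
libxl__device_pci_assignable_add() (at most 64 characters, each satisfying C
isgraph(), i.e. printable and non-whitespace) can be approximated in Python as
below, assuming ASCII input. Illustrative only; the function name is invented.

```python
def valid_assignable_name(name):
    """Approximate the patch's sanity check: at most 64 characters,
    each in the ASCII isgraph() range 0x21-0x7e (printable,
    non-whitespace)."""
    if len(name) > 64:
        return False
    return all(" " < c <= "~" for c in name)
```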
 tools/include/libxl.h                | 19 +++++-
 tools/libs/light/libxl_pci.c         | 86 ++++++++++++++++++++++++++--
 tools/ocaml/libs/xl/xenlight_stubs.c |  3 +-
 tools/xl/xl_cmdtable.c               | 12 ++--
 tools/xl/xl_pci.c                    | 80 ++++++++++++++++++--------
 5 files changed, 164 insertions(+), 36 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 5703fdf367c5..4025d3a3d437 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -476,6 +476,14 @@
  */
 #define LIBXL_HAVE_PCI_ASSIGNABLE_BDF 1
 
+/*
+ * LIBXL_HAVE_PCI_ASSIGNABLE_NAME indicates that the
+ * libxl_device_pci_assignable_add() function takes a 'name' argument
+ * and that the libxl_device_pci_assignable_name2bdf() and
+ * libxl_device_pci_assignable_bdf2name() functions are defined.
+ */
+#define LIBXL_HAVE_PCI_ASSIGNABLE_NAME 1
+
 /*
  * libxl ABI compatibility
  *
@@ -2385,11 +2393,18 @@ int libxl_device_events_handler(libxl_ctx *ctx,
  * added or is not bound, the functions will emit a warning but return
  * SUCCESS.
  */
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf, int rebind);
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf, int rebind);
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
+                                    const char *name, int rebind);
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
+                                       int rebind);
 libxl_pci_bdf *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
 void libxl_device_pci_assignable_list_free(libxl_pci_bdf *list, int num);
 
+libxl_pci_bdf *libxl_device_pci_assignable_name2bdf(libxl_ctx *ctx,
+                                                    const char *name);
+char *libxl_device_pci_assignable_bdf2name(libxl_ctx *ctx,
+                                           libxl_pci_bdf *pcibdf);
+
 /* CPUID handling */
 int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str);
 int libxl_cpuid_parse_config_xend(libxl_cpuid_policy_list *cpuid,
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index eecbd6efb694..e62383dd7b8f 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -745,6 +745,7 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
                                             libxl_pci_bdf *pcibdf,
+                                            const char *name,
                                             int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -753,6 +754,23 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     int rc;
     struct stat st;
 
+    /* Sanitise any name that was passed */
+    if (name) {
+        unsigned int i, n = strlen(name);
+
+        if (n > 64) { /* Reasonable upper bound on name length */
+            LOG(ERROR, "Name too long");
+            return ERROR_FAIL;
+        }
+
+        for (i = 0; i < n; i++) {
+            if (!isgraph(name[i])) {
+                LOG(ERROR, "Names may only include printable characters");
+                return ERROR_FAIL;
+            }
+        }
+    }
+
     /* Local copy for convenience */
     dom = pcibdf->domain;
     bus = pcibdf->bus;
@@ -773,7 +791,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
     if ( rc ) {
         LOG(WARN, PCI_BDF" already assigned to pciback", dom, bus, dev, func);
-        goto quarantine;
+        goto name;
     }
 
     /* Check to see if there's already a driver that we need to unbind from */
@@ -804,7 +822,12 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
         return ERROR_FAIL;
     }
 
-quarantine:
+name:
+    if (name)
+        pci_info_xs_write(gc, pcibdf, "name", name);
+    else
+        pci_info_xs_remove(gc, pcibdf, "name");
+
     /*
      * DOMID_IO is just a sentinel domain, without any actual mappings,
      * so always pass XEN_DOMCTL_DEV_RDM_RELAXED to avoid assignment being
@@ -868,16 +891,18 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
         }
     }
 
+    pci_info_xs_remove(gc, pcibdf, "name");
+
     return 0;
 }
 
 int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
-                                    int rebind)
+                                    const char *name, int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_add(gc, pcibdf, rebind);
+    rc = libxl__device_pci_assignable_add(gc, pcibdf, name, rebind);
 
     GC_FREE;
     return rc;
@@ -896,6 +921,57 @@ int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
     return rc;
 }
 
+libxl_pci_bdf *libxl_device_pci_assignable_name2bdf(libxl_ctx *ctx,
+                                                    const char *name)
+{
+    GC_INIT(ctx);
+    char **bdfs;
+    libxl_pci_bdf *pcibdf = NULL;
+    unsigned int i, n;
+
+    bdfs = libxl__xs_directory(gc, XBT_NULL, PCI_INFO_PATH, &n);
+    if (!n)
+        goto out;
+
+    pcibdf = calloc(1, sizeof(*pcibdf));
+    if (!pcibdf)
+        goto out;
+
+    for (i = 0; i < n; i++) {
+        unsigned dom, bus, dev, func;
+        const char *tmp;
+
+        if (sscanf(bdfs[i], PCI_BDF_XSPATH, &dom, &bus, &dev, &func) != 4)
+            continue;
+
+        pcibdf_struct_fill(pcibdf, dom, bus, dev, func);
+
+        tmp = pci_info_xs_read(gc, pcibdf, "name");
+        if (tmp && !strcmp(tmp, name))
+            goto out;
+    }
+
+    free(pcibdf);
+    pcibdf = NULL;
+
+out:
+    GC_FREE;
+    return pcibdf;
+}
+
+char *libxl_device_pci_assignable_bdf2name(libxl_ctx *ctx,
+                                           libxl_pci_bdf *pcibdf)
+{
+    GC_INIT(ctx);
+    char *name = NULL, *tmp = pci_info_xs_read(gc, pcibdf, "name");
+
+    if (tmp)
+        name = strdup(tmp);
+
+    GC_FREE;
+    return name;
+}
+
 /*
  * This function checks that all functions of a device are bound to pciback
  * driver. It also initialises a bit-mask of which function numbers are present
@@ -1560,7 +1636,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     if (rc) goto out;
 
     if (pci->seize && !pciback_dev_is_assigned(gc, &pci->bdf)) {
-        rc = libxl__device_pci_assignable_add(gc, &pci->bdf, 1);
+        rc = libxl__device_pci_assignable_add(gc, &pci->bdf, NULL, 1);
         if ( rc )
             goto out;
     }
diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/xenlight_stubs.c
index 2388f238697c..96bb4655e0b1 100644
--- a/tools/ocaml/libs/xl/xenlight_stubs.c
+++ b/tools/ocaml/libs/xl/xenlight_stubs.c
@@ -840,7 +840,8 @@ value stub_xl_device_pci_assignable_add(value ctx, value info, value rebind)
 	device_pci_val(CTX, &c_info, info);
 
 	caml_enter_blocking_section();
-	ret = libxl_device_pci_assignable_add(CTX, &c_info.bdf, c_rebind);
+	ret = libxl_device_pci_assignable_add(CTX, &c_info.bdf, NULL,
+					      c_rebind);
 	caml_leave_blocking_section();
 
 	libxl_device_pci_dispose(&c_info);
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 2ee0c4967334..9e9aa448e2a6 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -105,21 +105,25 @@ struct cmd_spec cmd_table[] = {
     { "pci-assignable-add",
       &main_pciassignable_add, 0, 1,
       "Make a device assignable for pci-passthru",
-      "<BDF>",
+      "[options] <BDF>",
+      "-n NAME, --name=NAME    Name the assignable device.\n"
       "-h                      Print this help.\n"
     },
     { "pci-assignable-remove",
       &main_pciassignable_remove, 0, 1,
       "Remove a device from being assignable",
-      "[options] <BDF>",
+      "[options] <BDF>|NAME",
       "-h                      Print this help.\n"
       "-r                      Attempt to re-assign the device to the\n"
-      "                        original driver"
+      "                        original driver."
     },
     { "pci-assignable-list",
       &main_pciassignable_list, 0, 0,
       "List all the assignable pci devices",
-      "",
+      "[options]",
+      "-h                      Print this help.\n"
+      "-n, --show-names        Display assignable device names where\n"
+      "                        supplied.\n"
     },
     { "pause",
       &main_pause, 0, 1,
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 37708b4eb14d..f1b58b39762a 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -152,7 +152,7 @@ int main_pciattach(int argc, char **argv)
     return EXIT_SUCCESS;
 }
 
-static void pciassignable_list(void)
+static void pciassignable_list(bool show_names)
 {
     libxl_pci_bdf *pcibdfs;
     int num, i;
@@ -162,9 +162,15 @@ static void pciassignable_list(void)
     if ( pcibdfs == NULL )
         return;
     for (i = 0; i < num; i++) {
-        printf("%04x:%02x:%02x.%01x\n",
-               pcibdfs[i].domain, pcibdfs[i].bus, pcibdfs[i].dev,
-               pcibdfs[i].func);
+        libxl_pci_bdf *pcibdf = &pcibdfs[i];
+        char *name = show_names ?
+            libxl_device_pci_assignable_bdf2name(ctx, pcibdf) : NULL;
+
+        printf("%04x:%02x:%02x.%01x %s\n",
+               pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func,
+               name ?: "");
+
+        free(name);
     }
     libxl_device_pci_assignable_list_free(pcibdfs, num);
 }
@@ -172,16 +178,23 @@ static void pciassignable_list(void)
 int main_pciassignable_list(int argc, char **argv)
 {
     int opt;
+    static struct option opts[] = {
+        {"show-names", 0, 0, 'n'},
+        COMMON_LONG_OPTS
+    };
+    bool show_names = false;
 
-    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-assignable-list", 0) {
-        /* No options */
+    SWITCH_FOREACH_OPT(opt, "n", opts, "pci-assignable-list", 0) {
+    case 'n':
+        show_names = true;
+        break;
     }
 
-    pciassignable_list();
+    pciassignable_list(show_names);
     return 0;
 }
 
-static int pciassignable_add(const char *bdf, int rebind)
+static int pciassignable_add(const char *bdf, const char *name, int rebind)
 {
     libxl_pci_bdf pcibdf;
     XLU_Config *config;
@@ -197,7 +210,7 @@ static int pciassignable_add(const char *bdf, int rebind)
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_add(ctx, &pcibdf, rebind))
+    if (libxl_device_pci_assignable_add(ctx, &pcibdf, name, rebind))
         r = 1;
 
     libxl_pci_bdf_dispose(&pcibdf);
@@ -210,39 +223,58 @@ int main_pciassignable_add(int argc, char **argv)
 {
     int opt;
     const char *bdf = NULL;
+    static struct option opts[] = {
+        {"name", 1, 0, 'n'},
+        COMMON_LONG_OPTS
+    };
+    const char *name = NULL;
 
-    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-assignable-add", 1) {
-        /* No options */
+    SWITCH_FOREACH_OPT(opt, "n:", opts, "pci-assignable-add", 0) {
+    case 'n':
+        name = optarg;
+        break;
     }
 
     bdf = argv[optind];
 
-    if (pciassignable_add(bdf, 1))
+    if (pciassignable_add(bdf, name, 1))
         return EXIT_FAILURE;
 
     return EXIT_SUCCESS;
 }
 
-static int pciassignable_remove(const char *bdf, int rebind)
+static int pciassignable_remove(const char *ident, int rebind)
 {
-    libxl_pci_bdf pcibdf;
+    libxl_pci_bdf *pcibdf;
     XLU_Config *config;
     int r = 0;
 
-    libxl_pci_bdf_init(&pcibdf);
-
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcibdf, bdf)) {
-        fprintf(stderr, "pci-assignable-remove: malformed BDF \"%s\"\n", bdf);
-        exit(2);
+    pcibdf = libxl_device_pci_assignable_name2bdf(ctx, ident);
+    if (!pcibdf) {
+        pcibdf = calloc(1, sizeof(*pcibdf));
+
+        if (!pcibdf) {
+            fprintf(stderr,
+                    "pci-assignable-remove: failed to allocate memory\n");
+            exit(2);
+        }
+
+        libxl_pci_bdf_init(pcibdf);
+        if (xlu_pci_parse_bdf(config, pcibdf, ident)) {
+            fprintf(stderr,
+                    "pci-assignable-remove: malformed BDF '%s'\n", ident);
+            exit(2);
+        }
     }
 
-    if (libxl_device_pci_assignable_remove(ctx, &pcibdf, rebind))
+    if (libxl_device_pci_assignable_remove(ctx, pcibdf, rebind))
         r = 1;
 
-    libxl_pci_bdf_dispose(&pcibdf);
+    libxl_pci_bdf_dispose(pcibdf);
+    free(pcibdf);
     xlu_cfg_destroy(config);
 
     return r;
@@ -251,7 +283,7 @@ static int pciassignable_remove(const char *bdf, int rebind)
 int main_pciassignable_remove(int argc, char **argv)
 {
     int opt;
-    const char *bdf = NULL;
+    const char *ident = NULL;
     int rebind = 0;
 
     SWITCH_FOREACH_OPT(opt, "r", NULL, "pci-assignable-remove", 1) {
@@ -260,9 +292,9 @@ int main_pciassignable_remove(int argc, char **argv)
         break;
     }
 
-    bdf = argv[optind];
+    ident = argv[optind];
 
-    if (pciassignable_remove(bdf, rebind))
+    if (pciassignable_remove(ident, rebind))
         return EXIT_FAILURE;
 
     return EXIT_SUCCESS;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:32:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 14:32:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Subject: Re: [ANNOUNCE] Call for agenda items for December 2020 Community Call
 @ 16:00 UTC
To: George Dunlap <George.Dunlap@citrix.com>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>
References: <6A1AC739-EB53-4996-A99B-EE68358E70DB@citrix.com>
 <6da4cd56-7364-bc6e-24d8-02976dbd637d@suse.com>
 <49AA35F9-5056-4D42-AE1D-5A478B0CDF7B@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6173e857-2486-08b8-e0b1-93b400e8c9ba@suse.com>
Date: Thu, 3 Dec 2020 15:32:31 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <49AA35F9-5056-4D42-AE1D-5A478B0CDF7B@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 30.11.2020 13:16, George Dunlap wrote:
>> On Nov 30, 2020, at 10:25 AM, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 27.11.2020 12:52, George Dunlap wrote:
>>> The proposed agenda is in https://cryptpad.fr/pad/#/2/pad/edit/OPN55rXaOncupuWuHxtddzWJ/ and you can edit to add items.  Alternatively, you can reply to this mail directly.
>>
>> The "New series / series requiring attention" section is gone. Was
>> this intentional? If not, I would have wanted to propose that items
>> from that list which we didn't get to on the previous call be
>> automatically propagated. According to my observation it is more
>> likely than not that nothing would have changed in their status.
>> Hence it may be easier to take one off the list if indeed it has
>> got unstalled.
> 
> Oops — I meant to delete the content, but not the header.
> 
> Hopefully “not getting to that part of the call” should be rare; but yes, copying it over (perhaps with a color to indicate that it’s been carried over from last time) sounds reasonable.  I’ll do that.

Seeing that it didn't happen for today's meeting, I've blindly copied
over the previous meeting's section, before adding new items there.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:34:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 14:34:48 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 7/8] xen/arm: Remove Linux specific code that is not
 usable in XEN
Thread-Topic: [PATCH v2 7/8] xen/arm: Remove Linux specific code that is not
 usable in XEN
Thread-Index: AQHWxBX830xFguE4YkK2klATz/ucYKnj64aAgAGO7QA=
Date: Thu, 3 Dec 2020 14:33:49 +0000
Message-ID: <51C0C24A-3CE6-48A3-85F5-14F010409DC3@arm.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <1d9da8ed4845aeb9e86a5ce6750b811bd7e2020e.1606406359.git.rahul.singh@arm.com>
 <cd74f2a7-7836-ef90-9cd8-857068adb0f5@xen.org>
In-Reply-To: <cd74f2a7-7836-ef90-9cd8-857068adb0f5@xen.org>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
Content-ID: <EDE4DE96C99B454C8B1E63886EDDFFFD@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Hello Julien,

> On 2 Dec 2020, at 2:45 pm, Julien Grall <julien@xen.org> wrote:
>
>
>
> On 26/11/2020 17:02, Rahul Singh wrote:
>> struct io_pgtable_ops, struct io_pgtable_cfg, struct iommu_flush_ops,
>> and struct iommu_ops related code are linux specific.
>
> So the assumption is we are going to support only sharing with the P2M
> and the IOMMU. That's probably fine short term, but long-term we are
> going to need unsharing the page-tables (there are issues on some
> platforms if the ITS doorbell is accessed by the processors).
>
> I am ok with removing anything related to the unsharing code. But I
> think it should be clarified here.

OK, I will state in the commit message what is removed from the code.
>
>> Remove code related to above struct as code is dead code in XEN.
>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>> ---
>>  xen/drivers/passthrough/arm/smmu-v3.c | 457 --------------------------
>>  1 file changed, 457 deletions(-)
>> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
>> index 40e3890a58..55d1cba194 100644
>> --- a/xen/drivers/passthrough/arm/smmu-v3.c
>> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
>> @@ -402,13 +402,7 @@
>>  #define ARM_SMMU_CMDQ_SYNC_TIMEOUT_US	1000000 /* 1s! */
>>  #define ARM_SMMU_CMDQ_SYNC_SPIN_COUNT	10
>>  -#define MSI_IOVA_BASE			0x8000000
>> -#define MSI_IOVA_LENGTH			0x100000
>> -
>>  static bool disable_bypass = 1;
>> -module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO);
>> -MODULE_PARM_DESC(disable_bypass,
>> -	"Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");
>
> Per your commit message, this doesn't look related to what you intend
> to remove. Maybe your commit message should be updated?

Ok.
>
>>    enum pri_resp {
>>  	PRI_RESP_DENY = 0,
>> @@ -599,7 +593,6 @@ struct arm_smmu_domain {
>>  	struct arm_smmu_device		*smmu;
>>  	struct mutex			init_mutex; /* Protects smmu pointer */
>>  -	struct io_pgtable_ops		*pgtbl_ops;
>>  	bool				non_strict;
>>    	enum arm_smmu_domain_stage	stage;
>> @@ -1297,74 +1290,6 @@ static void arm_smmu_tlb_inv_context(void *cookie)
>>  	arm_smmu_cmdq_issue_sync(smmu);
>>  }
>>  -static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
>> -					  size_t granule, bool leaf, void *cookie)
>> -{
>> -	struct arm_smmu_domain *smmu_domain = cookie;
>> -	struct arm_smmu_device *smmu = smmu_domain->smmu;
>> -	struct arm_smmu_cmdq_ent cmd = {
>> -		.tlbi = {
>> -			.leaf	= leaf,
>> -			.addr	= iova,
>> -		},
>> -	};
>> -
>> -	cmd.opcode	= CMDQ_OP_TLBI_S2_IPA;
>> -	cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
>> -
>> -	do {
>> -		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
>> -		cmd.tlbi.addr += granule;
>> -	} while (size -= granule);
>> -}
>> -
>> -static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather,
>> -					 unsigned long iova, size_t granule,
>> -					 void *cookie)
>> -{
>> -	arm_smmu_tlb_inv_range_nosync(iova, granule, granule, true, cookie);
>> -}
>> -
>> -static void arm_smmu_tlb_inv_walk(unsigned long iova, size_t size,
>> -				  size_t granule, void *cookie)
>> -{
>> -	struct arm_smmu_domain *smmu_domain = cookie;
>> -	struct arm_smmu_device *smmu = smmu_domain->smmu;
>> -
>> -	arm_smmu_tlb_inv_range_nosync(iova, size, granule, false, cookie);
>> -	arm_smmu_cmdq_issue_sync(smmu);
>> -}
>> -
>> -static void arm_smmu_tlb_inv_leaf(unsigned long iova, size_t size,
>> -				  size_t granule, void *cookie)
>> -{
>> -	struct arm_smmu_domain *smmu_domain = cookie;
>> -	struct arm_smmu_device *smmu = smmu_domain->smmu;
>> -
>> -	arm_smmu_tlb_inv_range_nosync(iova, size, granule, true, cookie);
>> -	arm_smmu_cmdq_issue_sync(smmu);
>> -}
>> -
>> -static const struct iommu_flush_ops arm_smmu_flush_ops = {
>> -	.tlb_flush_all	= arm_smmu_tlb_inv_context,
>> -	.tlb_flush_walk = arm_smmu_tlb_inv_walk,
>> -	.tlb_flush_leaf = arm_smmu_tlb_inv_leaf,
>> -	.tlb_add_page	= arm_smmu_tlb_inv_page_nosync,
>> -};
>> -
>> -/* IOMMU API */
>> -static bool arm_smmu_capable(enum iommu_cap cap)
>> -{
>> -	switch (cap) {
>> -	case IOMMU_CAP_CACHE_COHERENCY:
>> -		return true;
>> -	case IOMMU_CAP_NOEXEC:
>> -		return true;
>> -	default:
>> -		return false;
>> -	}
>> -}
>> -
>>  static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
>>  {
>>  	struct arm_smmu_domain *smmu_domain;
>> @@ -1421,7 +1346,6 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
>>  	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
>>    	iommu_put_dma_cookie(domain);
>> -	free_io_pgtable_ops(smmu_domain->pgtbl_ops);
>>    	if (cfg->vmid)
>>  		arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid);
>> @@ -1429,7 +1353,6 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
>>  	kfree(smmu_domain);
>>  }
>>  -
>
> Looks like a spurious change.

Ack.
>
>>  static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
>>  				       struct arm_smmu_master *master,
>>  				       struct io_pgtable_cfg *pgtbl_cfg)
>
> Your commit message leads one to think that all the use of struct
> io_pgtable_cfg will be removed.

OK, I will remove it.
>
>> @@ -1437,7 +1360,6 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
>>  	int vmid;
>>  	struct arm_smmu_device *smmu = smmu_domain->smmu;
>>  	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
>> -	typeof(&pgtbl_cfg->arm_lpae_s2_cfg.vtcr) vtcr;
>
> It feels a bit odd that the definition of 'vtcr' is removed but there
> are still users.

Ack. Will fix that.
>
>>    	vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
>>  	if (vmid < 0)
>> @@ -1461,20 +1383,12 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
>>  {
>>  	int ret;
>>  	unsigned long ias, oas;
>> -	enum io_pgtable_fmt fmt;
>> -	struct io_pgtable_cfg pgtbl_cfg;
>> -	struct io_pgtable_ops *pgtbl_ops;
>>  	int (*finalise_stage_fn)(struct arm_smmu_domain *,
>>  				 struct arm_smmu_master *,
>>  				 struct io_pgtable_cfg *);
>>  	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>>  	struct arm_smmu_device *smmu = smmu_domain->smmu;
>>  -	if (domain->type == IOMMU_DOMAIN_IDENTITY) {
>> -		smmu_domain->stage = ARM_SMMU_DOMAIN_BYPASS;
>> -		return 0;
>> -	}
>> -
>
> Per your commit message, this doesn't look related to what you intend
> to remove. Maybe your commit message should be updated?

OK, I will update the commit message.
>
>>  	/* Restrict the stage to what we can actually support */
>>  	smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
>>  @@ -1483,40 +1397,17 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
>>  	case ARM_SMMU_DOMAIN_S2:
>>  		ias = smmu->ias;
>>  		oas = smmu->oas;
>> -		fmt = ARM_64_LPAE_S2;
>>  		finalise_stage_fn = arm_smmu_domain_finalise_s2;
>>  		break;
>>  	default:
>>  		return -EINVAL;
>>  	}
>>  -	pgtbl_cfg = (struct io_pgtable_cfg) {
>> -		.pgsize_bitmap	= smmu->pgsize_bitmap,
>> -		.ias		= ias,
>> -		.oas		= oas,
>> -		.coherent_walk	= smmu->features & ARM_SMMU_FEAT_COHERENCY,
>> -		.tlb		= &arm_smmu_flush_ops,
>> -		.iommu_dev	= smmu->dev,
>> -	};
>> -
>> -	if (smmu_domain->non_strict)
>> -		pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
>> -
>> -	pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain);
>> -	if (!pgtbl_ops)
>> -		return -ENOMEM;
>> -
>> -	domain->pgsize_bitmap = pgtbl_cfg.pgsize_bitmap;
>> -	domain->geometry.aperture_end = (1UL << pgtbl_cfg.ias) - 1;
>> -	domain->geometry.force_aperture = true;
>> -
>>  	ret = finalise_stage_fn(smmu_domain, master, &pgtbl_cfg);
>>  	if (ret < 0) {
>> -		free_io_pgtable_ops(pgtbl_ops);
>>  		return ret;
>>  	}
>>  -	smmu_domain->pgtbl_ops = pgtbl_ops;
>>  	return 0;
>>  }
>>  @@ -1626,71 +1517,6 @@ out_unlock:
>>  	return ret;
>>  }
>>  -static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
>> -			phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
>> -{
>> -	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
>> -
>> -	if (!ops)
>> -		return -ENODEV;
>> -
>> -	return ops->map(ops, iova, paddr, size, prot, gfp);
>> -}
>> -
>> -static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
>> -			     size_t size, struct iommu_iotlb_gather *gather)
>> -{
>> -	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>> -	struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
>> -
>> -	if (!ops)
>> -		return 0;
>> -
>> -	return ops->unmap(ops, iova, size, gather);
>> -}
>> -
>> -static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain)
>> -{
>> -	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>> -
>> -	if (smmu_domain->smmu)
>> -		arm_smmu_tlb_inv_context(smmu_domain);
>> -}
>> -
>> -static void arm_smmu_iotlb_sync(struct iommu_domain *domain,
>> -				struct iommu_iotlb_gather *gather)
>> -{
>> -	struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu;
>> -
>> -	if (smmu)
>> -		arm_smmu_cmdq_issue_sync(smmu);
>> -}
>> -
>> -static phys_addr_t
>> -arm_smmu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
>> -{
>> -	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
>> -
>> -	if (domain->type == IOMMU_DOMAIN_IDENTITY)
>> -		return iova;
>> -
>> -	if (!ops)
>> -		return 0;
>> -
>> -	return ops->iova_to_phys(ops, iova);
>> -}
>> -
>> -static struct platform_driver arm_smmu_driver;
>> -
>> -static
>> -struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
>> -{
>> -	struct device *dev = driver_find_device_by_fwnode(&arm_smmu_driver.driver,
>> -							  fwnode);
>> -	put_device(dev);
>> -	return dev ? dev_get_drvdata(dev) : NULL;
>> -}
>> -
>>  static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
>>  {
>>  	unsigned long limit = smmu->strtab_cfg.num_l1_ents;
>> @@ -1701,206 +1527,6 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
>>  	return sid < limit;
>>  }
>>  -static struct iommu_ops arm_smmu_ops;
>> -
>> -static struct iommu_device *arm_smmu_probe_device(struct device *dev)
>> -{
>
> Most of the code here looks useful to Xen. I think you want to keep
> the code and re-use it afterwards.

OK. I removed the code here and added the XEN-compatible code to add
devices in the next patch. I will keep it in this patch instead and
will modify the code to make it XEN-compatible.

>
>> -	int i, ret;
>> -	struct arm_smmu_device *smmu;
>> -	struct arm_smmu_master *master;
>> -	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
>> -
>> -	if (!fwspec || fwspec->ops != &arm_smmu_ops)
>> -		return ERR_PTR(-ENODEV);
>> -
>> -	if (WARN_ON_ONCE(dev_iommu_priv_get(dev)))
>> -		return ERR_PTR(-EBUSY);
>> -
>> -	smmu = arm_smmu_get_by_fwnode(fwspec->iommu_fwnode);
>> -	if (!smmu)
>> -		return ERR_PTR(-ENODEV);
>> -
>> -	master = kzalloc(sizeof(*master), GFP_KERNEL);
>> -	if (!master)
>> -		return ERR_PTR(-ENOMEM);
>> -
>> -	master->dev = dev;
>> -	master->smmu = smmu;
>> -	master->sids = fwspec->ids;
>> -	master->num_sids = fwspec->num_ids;
>> -	dev_iommu_priv_set(dev, master);
>> -
>> -	/* Check the SIDs are in range of the SMMU and our stream table */
>> -	for (i = 0; i < master->num_sids; i++) {
>> -		u32 sid = master->sids[i];
>> -
>> -		if (!arm_smmu_sid_in_range(smmu, sid)) {
>> -			ret = -ERANGE;
>> -			goto err_free_master;
>> -		}
>> -
>> -		/* Ensure l2 strtab is initialised */
>> -		if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) {
>> -			ret = arm_smmu_init_l2_strtab(smmu, sid);
>> -			if (ret)
>> -				goto err_free_master;
>> -		}
>> -	}
>> -
>> -	return &smmu->iommu;
>> -
>> -err_free_master:
>> -	kfree(master);
>> -	dev_iommu_priv_set(dev, NULL);
>> -	return ERR_PTR(ret);
>> -}
>> -
>> -static void arm_smmu_release_device(struct device *dev)
>> -{
>> -	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
>> -	struct arm_smmu_master *master;
>> -
>> -	if (!fwspec || fwspec->ops != &arm_smmu_ops)
>> -		return;
>> -
>> -	master = dev_iommu_priv_get(dev);
>> -	arm_smmu_detach_dev(master);
>> -	kfree(master);
>> -	iommu_fwspec_free(dev);
>> -}
>> -
>> -static struct iommu_group *arm_smmu_device_group(struct device *dev)
>> -{
>> -	struct iommu_group *group;
>> -
>> -	/*
>> -	 * We don't support devices sharing stream IDs other than PCI RID
>> -	 * aliases, since the necessary ID-to-device lookup becomes rather
>> -	 * impractical given a potential sparse 32-bit stream ID space.
>> -	 */
>> -	if (dev_is_pci(dev))
>> -		group = pci_device_group(dev);
>> -	else
>> -		group = generic_device_group(dev);
>> -
>> -	return group;
>> -}
>> -
>> -static int arm_smmu_domain_get_attr(struct iommu_domain *domain,
>> -				    enum iommu_attr attr, void *data)
>> -{
>> -	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>> -
>> -	switch (domain->type) {
>> -	case IOMMU_DOMAIN_UNMANAGED:
>> -		switch (attr) {
>> -		case DOMAIN_ATTR_NESTING:
>> -			*(int *)data = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED);
>> -			return 0;
>> -		default:
>> -			return -ENODEV;
>> -		}
>> -		break;
>> -	case IOMMU_DOMAIN_DMA:
>> -		switch (attr) {
>> -		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
>> -			*(int *)data = smmu_domain->non_strict;
>> -			return 0;
>> -		default:
>> -			return -ENODEV;
>> -		}
>> -		break;
>> -	default:
>> -		return -EINVAL;
>> -	}
>> -}
>> -
>> -static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
>> -				    enum iommu_attr attr, void *data)
>> -{
>> -	int ret = 0;
>> -	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>> -
>> -	mutex_lock(&smmu_domain->init_mutex);
>> -
>> -	switch (domain->type) {
>> -	case IOMMU_DOMAIN_UNMANAGED:
>> -		switch (attr) {
>> -		case DOMAIN_ATTR_NESTING:
>> -			if (smmu_domain->smmu) {
>> -				ret = -EPERM;
>> -				goto out_unlock;
>> -			}
>> -
>> -			if (*(int *)data)
>> -				smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED;
>> -			else
>> -				smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
>> -			break;
>> -		default:
>> -			ret = -ENODEV;
>> -		}
>> -		break;
>> -	case IOMMU_DOMAIN_DMA:
>> -		switch(attr) {
>> -		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
>> -			smmu_domain->non_strict = *(int *)data;
>> -			break;
>> -		default:
>> -			ret = -ENODEV;
>> -		}
>> -		break;
>> -	default:
>> -		ret = -EINVAL;
>> -	}
>> -
>> -out_unlock:
>> -	mutex_unlock(&smmu_domain->init_mutex);
>> -	return ret;
>> -}
>> -
>> -static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args)
>> -{
>> -	return iommu_fwspec_add_ids(dev, args->args, 1);
>> -}
>> -
>> -static void arm_smmu_get_resv_regions(struct device *dev,
>> -				      struct list_head *head)
>> -{
>> -	struct iommu_resv_region *region;
>> -	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
>> -
>> -	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
>> -					 prot, IOMMU_RESV_SW_MSI);
>> -	if (!region)
>> -		return;
>> -
>> -	list_add_tail(&region->list, head);
>> -
>> -	iommu_dma_get_resv_regions(dev, head);
>> -}
>> -
>> -static struct iommu_ops arm_smmu_ops = {
>> -	.capable		= arm_smmu_capable,
>> -	.domain_alloc		= arm_smmu_domain_alloc,
>> -	.domain_free		= arm_smmu_domain_free,
>> -	.attach_dev		= arm_smmu_attach_dev,
>> -	.map			= arm_smmu_map,
>> -	.unmap			= arm_smmu_unmap,
>> -	.flush_iotlb_all	= arm_smmu_flush_iotlb_all,
>> -	.iotlb_sync		= arm_smmu_iotlb_sync,
>> -	.iova_to_phys		= arm_smmu_iova_to_phys,
>> -	.probe_device		= arm_smmu_probe_device,
>> -	.release_device		= arm_smmu_release_device,
>> -	.device_group		= arm_smmu_device_group,
>> -	.domain_get_attr	= arm_smmu_domain_get_attr,
>> -	.domain_set_attr	= arm_smmu_domain_set_attr,
>> -	.of_xlate		= arm_smmu_of_xlate,
>> -	.get_resv_regions	= arm_smmu_get_resv_regions,
>> -	.put_resv_regions	= generic_iommu_put_resv_regions,
>> -	.pgsize_bitmap		= -1UL, /* Restricted during device attach */
>> -};
>> -
>> -
>>  /* Probing and initialisation functions */
>>  static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
>>  				   struct arm_smmu_queue *q,
>> @@ -2406,7 +2032,6 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>>  	switch (FIELD_GET(IDR0_STALL_MODEL, reg)) {
>>  	case IDR0_STALL_MODEL_FORCE:
>>  		smmu->features |= ARM_SMMU_FEAT_STALL_FORCE;
>> -		fallthrough;
>
> We should keep all the fallthrough documented. So I think we want to introduce the fallthrough in Xen as well.

Ok, I will keep the fallthrough documented in this patch.

A fallthrough implementation in Xen should be a separate patch. I am not sure when we can implement it, but we will try.

>
>>  	case IDR0_STALL_MODEL_STALL:
>>  		smmu->features |= ARM_SMMU_FEAT_STALLS;
>>  	}
>> @@ -2426,7 +2051,6 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>>  	switch (FIELD_GET(IDR0_TTF, reg)) {
>>  	case IDR0_TTF_AARCH32_64:
>>  		smmu->ias = 40;
>> -		fallthrough;
>>  	case IDR0_TTF_AARCH64:
>>  		break;
>>  	default:
>> @@ -2515,21 +2139,10 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>>  	default:
>>  		dev_info(smmu->dev,
>>  			"unknown output address size. Truncating to 48-bit\n");
>> -		fallthrough;
>>  	case IDR5_OAS_48_BIT:
>>  		smmu->oas = 48;
>>  	}
>>  -	if (arm_smmu_ops.pgsize_bitmap == -1UL)
>> -		arm_smmu_ops.pgsize_bitmap = smmu->pgsize_bitmap;
>> -	else
>> -		arm_smmu_ops.pgsize_bitmap |= smmu->pgsize_bitmap;
>> -
>> -	/* Set the DMA mask for our table walker */
>> -	if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(smmu->oas)))
>> -		dev_warn(smmu->dev,
>> -			 "failed to set DMA mask for table walker\n");
>> -
>>  	smmu->ias = max(smmu->ias, smmu->oas);
>>    	dev_info(smmu->dev, "ias %lu-bit, oas %lu-bit (features 0x%08x)\n",
>> @@ -2595,9 +2208,6 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev,
>>    	parse_driver_options(smmu);
>>  -	if (of_dma_is_coherent(dev->of_node))
>> -		smmu->features |= ARM_SMMU_FEAT_COHERENCY;
>> -
>>  	return ret;
>>  }
>>  @@ -2609,55 +2219,6 @@ static unsigned long arm_smmu_resource_size(struct arm_smmu_device *smmu)
>>  		return SZ_128K;
>>  }
>>  -static int arm_smmu_set_bus_ops(struct iommu_ops *ops)
>> -{
>> -	int err;
>> -
>> -#ifdef CONFIG_PCI
>> -	if (pci_bus_type.iommu_ops != ops) {
>> -		err = bus_set_iommu(&pci_bus_type, ops);
>> -		if (err)
>> -			return err;
>> -	}
>> -#endif
>> -#ifdef CONFIG_ARM_AMBA
>> -	if (amba_bustype.iommu_ops != ops) {
>> -		err = bus_set_iommu(&amba_bustype, ops);
>> -		if (err)
>> -			goto err_reset_pci_ops;
>> -	}
>> -#endif
>> -	if (platform_bus_type.iommu_ops != ops) {
>> -		err = bus_set_iommu(&platform_bus_type, ops);
>> -		if (err)
>> -			goto err_reset_amba_ops;
>> -	}
>> -
>> -	return 0;
>> -
>> -err_reset_amba_ops:
>> -#ifdef CONFIG_ARM_AMBA
>> -	bus_set_iommu(&amba_bustype, NULL);
>> -#endif
>> -err_reset_pci_ops: __maybe_unused;
>> -#ifdef CONFIG_PCI
>> -	bus_set_iommu(&pci_bus_type, NULL);
>> -#endif
>> -	return err;
>> -}
>> -
>> -static void __iomem *arm_smmu_ioremap(struct device *dev, resource_size_t start,
>> -				      resource_size_t size)
>
> This seems a bit odd that you are removing the function but not the callers. Shouldn't you keep this function until the next patch?
>
Ok, I will keep this function in this patch and remove it in the next patch.

>> -{
>> -	struct resource res = {
>> -		.flags = IORESOURCE_MEM,
>> -		.start = start,
>> -		.end = start + size - 1,
>> -	};
>> -
>> -	return devm_ioremap_resource(dev, &res);
>> -}
>> -
>>  static int arm_smmu_device_probe(struct platform_device *pdev)
>>  {
>>  	int irq, ret;
>> @@ -2785,21 +2346,3 @@ static const struct of_device_id arm_smmu_of_match[] = {
>>  	{ .compatible = "arm,smmu-v3", },
>>  	{ },
>>  };
>> -MODULE_DEVICE_TABLE(of, arm_smmu_of_match);
>> -
>> -static struct platform_driver arm_smmu_driver = {
>> -	.driver	= {
>> -		.name			= "arm-smmu-v3",
>> -		.of_match_table		= arm_smmu_of_match,
>> -		.suppress_bind_attrs	= true,
>> -	},
>> -	.probe	= arm_smmu_device_probe,
>> -	.remove	= arm_smmu_device_remove,
>> -	.shutdown = arm_smmu_device_shutdown,
>> -};
>> -module_platform_driver(arm_smmu_driver);
>> -
>> -MODULE_DESCRIPTION("IOMMU API for ARM architected SMMUv3 implementations");
>> -MODULE_AUTHOR("Will Deacon <will@kernel.org>");
>> -MODULE_ALIAS("platform:arm-smmu-v3");
>> -MODULE_LICENSE("GPL v2");
>
> --
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:41:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 14:41:13 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Andrew Cooper <andrew.cooper3@citrix.com>, George
 Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan
 Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>, Paul Durrant
	<paul@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 8/8] xen/arm: Add support for SMMUv3 driver
Date: Thu, 3 Dec 2020 14:40:52 +0000
Message-ID: <BD247B69-7201-41E2-8687-49924B7396CA@arm.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <de2101687020d18172a2b153f8977a5116d0cd66.1606406359.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2012011749550.1100@sstabellini-ThinkPad-T480s>
 <1912278a-13f4-885d-d1ca-cc130718d064@xen.org>
 <alpine.DEB.2.21.2012021958020.30425@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2012021958020.30425@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0

Hello Stefano,

> On 3 Dec 2020, at 4:13 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Wed, 2 Dec 2020, Julien Grall wrote:
>> On 02/12/2020 02:51, Stefano Stabellini wrote:
>>> On Thu, 26 Nov 2020, Rahul Singh wrote:
>>>> +/* Alias to Xen device tree helpers */
>>>> +#define device_node dt_device_node
>>>> +#define of_phandle_args dt_phandle_args
>>>> +#define of_device_id dt_device_match
>>>> +#define of_match_node dt_match_node
>>>> +#define of_property_read_u32(np, pname, out) (!dt_property_read_u32(np,
>>>> pname, out))
>>>> +#define of_property_read_bool dt_property_read_bool
>>>> +#define of_parse_phandle_with_args dt_parse_phandle_with_args
>>>
>>> Given all the changes to the file by the previous patches we are
>>> basically fully (or almost fully) adapting this code to Xen.
>>>
>>> So at that point I wonder if we should just as well make these changes
>>> (e.g. s/of_phandle_args/dt_phandle_args/g) to the code too.
>>
>> I have already accepted the fact that keeping Linux code as-is is nearly
>> impossible without much workaround :). The benefits tend to also be limited, as
>> we noticed for the SMMU driver.
>>
>> I would like to point out that this may make it quite difficult (if not
>> impossible) to revert the previous patches which remove support for some
>> features (e.g. atomic, MSI, ATS).
>>
>> If we are going to adapt the code to Xen (I'd like to keep Linux code style
>> though), then I think we should consider keeping code that may be useful in
>> the near future (at least MSI, ATS).
>
> (I am fine with keeping the Linux code style.)
>
> We could try to keep the code as similar to Linux as possible. This
> didn't work out in the past.
>
> Otherwise, we could fully adapt the driver to Xen. If we fully adapt the
> driver to Xen (code style aside) it is better to be consistent and also
> do substitutions like s/of_phandle_args/dt_phandle_args/g. Then the
> policy becomes clear: the code comes from Linux but it is 100% adapted
> to Xen.
>
>
> Now the question about what to do about the MSI and ATS code is a good
> one. We know that we are going to want that code at some point in the
> next 2 years. Like you wrote, if we fully adapt the code to Xen and
> remove the MSI and ATS code, then it is going to be harder to add it back.
>
> So maybe keeping the MSI and ATS code for now, even if it cannot work,
> would be better. I think this strategy works well if the MSI and ATS
> code can be disabled easily, i.e. with a couple of lines of code in the
> init function rather than #ifdefs everywhere. It doesn't work well if we
> have to add #ifdefs everywhere.
>
> It looks like MSI could be disabled by adding a couple of lines to
> arm_smmu_setup_msis.
>
> Similarly, ATS seems to be easy to disable by forcing ats_enabled to
> false.
>
> So yes, this looks like a good way forward. Rahul, what do you think?


I am ok to have the PCI ATS and MSI functionality in the code.
As per the discussion, the next version of the patch will include the modifications below. Please let me know if there are any overall suggestions that should be added in the next version.

1. Keep the PCI ATS and MSI functionality code.
2. Make the code Xen-compatible (remove the Linux compatibility functions).
3. Keep the Linux coding style for code imported from Linux.
4. Fix all comments.

I have one query: what will be the coding style for new code added to make the driver work for Xen?


>
>
>
>>>> +#define FIELD_GET(_mask, _reg)          \
>>>> +    (typeof(_mask))(((_reg) & (_mask)) >> (__builtin_ffsll(_mask) - 1))
>>>> +
>>>> +#define WRITE_ONCE(x, val)                  \
>>>> +do {                                        \
>>>> +    *(volatile typeof(x) *)&(x) = (val);    \
>>>> +} while (0)
>>>
>>> maybe we should define this in xen/include/xen/lib.h
>>
>> I have attempted such a discussion in the past and it resulted in more
>> bikeshedding than it is worth. So I would suggest to re-implement WRITE_ONCE()
>> as write_atomic() for now.
>
> Good suggestion, less discussions more code :-)

I will use write_atomic().

Regards,
Rahul




From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:41:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 14:41:13 +0000
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Thu, 3 Dec 2020 09:40:34 -0500
Message-ID: <CABfawh=t+TK+Bm3JUee+ZZ7SSPRLtgnq162Sk17cKNKqJMRreA@mail.gmail.com>
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Tamas K Lengyel <lengyelt@ainfosec.com>, 
	Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>, Alexandru Isaila <aisaila@bitdefender.com>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
In-Reply-To: <7a768bcd-80c1-d193-8796-7fb6720fa22a@suse.com>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d821c715-966a-b48b-a877-c5dac36822f0@suse.com> <17c90493-b438-fbc1-ca10-3bc4d89c4e5e@xen.org>
 <7a768bcd-80c1-d193-8796-7fb6720fa22a@suse.com>
Content-Type: text/plain; charset="UTF-8"

On Thu, Dec 3, 2020 at 5:09 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 02.12.2020 22:10, Julien Grall wrote:
> > On 23/11/2020 13:30, Jan Beulich wrote:
> >> While there don't look to be any problems with this right now, the lock
> >> order implications from holding the lock can be very difficult to follow
> >> (and may be easy to violate unknowingly). The present callbacks don't
> >> (and no such callback should) have any need for the lock to be held.
> >>
> >> However, vm_event_disable() frees the structures used by respective
> >> callbacks and isn't otherwise synchronized with invocations of these
> >> callbacks, so maintain a count of in-progress calls, for evtchn_close()
> >> to wait to drop to zero before freeing the port (and dropping the lock).
> >
> > AFAICT, this callback is not the only place where the synchronization is
> > missing in the VM event code.
> >
> > For instance, vm_event_put_request() can also race against
> > vm_event_disable().
> >
> > So shouldn't we handle this issue properly in VM event?
>
> I suppose that's a question to the VM event folks rather than me?

IMHO it would obviously be better if the Xen side could handle
situations like these gracefully. OTOH it is also reasonable to expect
the privileged toolstack to perform its own sanity checks before
disabling. Right now, to disable vm_event we first pause the VM,
process all remaining events on the ring, and only then disable the
interface. This avoids the race condition mentioned above (among other
issues). It's not perfect - we have run into problems where event
replies don't have the desired effect because the VM is paused - but
for the most part this is sufficient. So I don't consider this a
priority at the moment. That said, if anyone is inclined to fix it up,
I would be happy to review & ack.

Tamas


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 14:46:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 14:46:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43690.78534 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpsV-0005r5-FW; Thu, 03 Dec 2020 14:46:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43690.78534; Thu, 03 Dec 2020 14:46:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkpsV-0005qy-CK; Thu, 03 Dec 2020 14:46:39 +0000
Received: by outflank-mailman (input) for mailman id 43690;
 Thu, 03 Dec 2020 14:46:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vSHx=FH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkpsT-0005qt-JJ
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 14:46:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 09e44b4f-6d99-4718-a99f-02da3d06edf6;
 Thu, 03 Dec 2020 14:46:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E28BDAC65;
 Thu,  3 Dec 2020 14:46:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 09e44b4f-6d99-4718-a99f-02da3d06edf6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607006795; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zkQN/tCRpveejzo9EbHpCmPYU5dLT2Jsg3nQVkFEQZQ=;
	b=K8lUnQ2+KQSkixgnsDn4EWpBvOmzl+JwiDSeTOFcNdjVpffJ2MWo5oRvuxBsEacTp/5A8k
	Ju5A/NKvTeTCrbYb+yiXBtoxxa2yCr0DgtLvikHHp6CBIk0lBshx1OJtlNDUj5JVqEHbNP
	md8yQjlvXO0H1ut66WpN1Gm/qJhEvDE=
Subject: Re: [PATCH v2 15/17] xen/cpupool: add cpupool directories
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Dario Faggioli <dfaggioli@suse.com>,
 xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-16-jgross@suse.com>
 <89f52bed-c611-70c5-1349-63838530debd@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0613c7e2-724a-e16c-91f7-f99298d04ab2@suse.com>
Date: Thu, 3 Dec 2020 15:46:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <89f52bed-c611-70c5-1349-63838530debd@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 02.12.2020 16:46, Jürgen Groß wrote:
> On 01.12.20 09:21, Juergen Gross wrote:
>> @@ -1003,12 +1006,131 @@ static struct notifier_block cpu_nfb = {
>>       .notifier_call = cpu_callback
>>   };
>>   
>> +#ifdef CONFIG_HYPFS
>> +static const struct hypfs_entry *cpupool_pooldir_enter(
>> +    const struct hypfs_entry *entry);
>> +
>> +static struct hypfs_funcs cpupool_pooldir_funcs = {
>> +    .enter = cpupool_pooldir_enter,
>> +    .exit = hypfs_node_exit,
>> +    .read = hypfs_read_dir,
>> +    .write = hypfs_write_deny,
>> +    .getsize = hypfs_getsize,
>> +    .findentry = hypfs_dir_findentry,
>> +};
>> +
>> +static HYPFS_VARDIR_INIT(cpupool_pooldir, "%u", &cpupool_pooldir_funcs);
>> +
>> +static const struct hypfs_entry *cpupool_pooldir_enter(
>> +    const struct hypfs_entry *entry)
>> +{
>> +    return &cpupool_pooldir.e;
>> +}
> I have found a more generic way to handle entering a dyndir node,
> resulting in no need to have cpupool_pooldir_enter() and
> cpupool_pooldir_funcs.
> 
> This will add some more lines to the previous patch, but less than
> saved here.

Which may then mean it's not a good use of time to look at v2 patch
14, considering there's a lot of other material in need of review?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 15:00:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 15:00:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43716.78553 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkq5N-00073M-Sf; Thu, 03 Dec 2020 14:59:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43716.78553; Thu, 03 Dec 2020 14:59:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkq5N-00073F-PK; Thu, 03 Dec 2020 14:59:57 +0000
Received: by outflank-mailman (input) for mailman id 43716;
 Thu, 03 Dec 2020 14:59:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vSHx=FH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkq5M-00072j-Ng
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 14:59:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 53321761-819b-4c81-981f-30f84b9f9c33;
 Thu, 03 Dec 2020 14:59:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D3822ABCE;
 Thu,  3 Dec 2020 14:59:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 53321761-819b-4c81-981f-30f84b9f9c33
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607007595; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pSg9rmsjxLjre1vdrJTkpLratowiaDvXfmL0E9g5R6s=;
	b=u5gI8699UbcKNvZ99xU2yxsZvbUrGYNbcf6PaNCCFTDfIZQ1isJllL+M2Kh2GpQQO3fzew
	SReP1jXyQsIFOnto2+bgY8pqyahAUYUaH3Ev3fEfM33CGgPQkEruwLYix2mu3J9sB5qbGe
	KHuwCI4uzzYd75zH1X2Fx2w6cP2xfLs=
Subject: Re: [PATCH v2 12/17] xen/hypfs: add new enter() and exit() per node
 callbacks
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-13-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0c57dd86-36d9-c378-6bdb-50221a7812b8@suse.com>
Date: Thu, 3 Dec 2020 15:59:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201201082128.15239-13-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.12.2020 09:21, Juergen Gross wrote:
> In order to better support resource allocation and locking for dynamic
> hypfs nodes add enter() and exit() callbacks to struct hypfs_funcs.
> 
> The enter() callback is called when entering a node during hypfs user
> actions (traversing, reading or writing it), while the exit() callback
> is called when leaving a node (accessing another node at the same or a
> higher directory level, or when returning to the user).
> 
> For avoiding recursion this requires a parent pointer in each node.
> Let the enter() callback return the entry address which is stored as
> the last accessed node in order to be able to use a template entry for
> that purpose in case of dynamic entries.

I guess I'll learn in subsequent patches why this is necessary /
useful. Right now it looks odd for the function to simply return
the incoming argument, as this way it's clear the caller knows
the correct value already.

> @@ -100,11 +112,58 @@ static void hypfs_unlock(void)
>      }
>  }
>  
> +const struct hypfs_entry *hypfs_node_enter(const struct hypfs_entry *entry)
> +{
> +    return entry;
> +}
> +
> +void hypfs_node_exit(const struct hypfs_entry *entry)
> +{
> +}
> +
> +static int node_enter(const struct hypfs_entry *entry)
> +{
> +    const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered);
> +
> +    entry = entry->funcs->enter(entry);
> +    if ( IS_ERR(entry) )
> +        return PTR_ERR(entry);
> +
> +    ASSERT(!*last || *last == entry->parent);
> +
> +    *last = entry;
> +
> +    return 0;
> +}
> +
> +static void node_exit(const struct hypfs_entry *entry)
> +{
> +    const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered);
> +
> +    if ( !*last )
> +        return;

Under what conditions is this legitimate to happen? IOW shouldn't
there be an ASSERT_UNREACHABLE() here?
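For reference, the enter/exit pairing under discussion can be reduced to a
single-CPU model (struct and function names simplified from the quoted
patch; this is a sketch, not the actual Xen code):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal model: enter() records the node as "last entered", exit()
 * unwinds to the parent, so the per-CPU pointer always tracks the
 * innermost node currently being accessed. */

struct entry {
    const struct entry *parent;
};

static const struct entry *last_entered;   /* stands in for the percpu var */

static void node_enter(const struct entry *e)
{
    /* Mirrors the ASSERT in the patch: entries nest strictly. */
    assert(!last_entered || last_entered == e->parent);
    last_entered = e;
}

static void node_exit(const struct entry *e)
{
    if ( !last_entered )
        return;                    /* the early-return case queried above */
    last_entered = e->parent;
}
```

Walking into a child and back out restores the parent as the last-entered
node, and fully unwinding leaves the pointer NULL again.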

Jan


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 15:09:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 15:09:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43732.78565 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkqEB-0008BH-OX; Thu, 03 Dec 2020 15:09:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43732.78565; Thu, 03 Dec 2020 15:09:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkqEB-0008BA-LA; Thu, 03 Dec 2020 15:09:03 +0000
Received: by outflank-mailman (input) for mailman id 43732;
 Thu, 03 Dec 2020 15:09:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vSHx=FH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkqEA-0008B5-2X
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 15:09:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 18f79b02-6b1e-48a9-bd20-206abfa09bd4;
 Thu, 03 Dec 2020 15:09:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 83195AC65;
 Thu,  3 Dec 2020 15:09:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 18f79b02-6b1e-48a9-bd20-206abfa09bd4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607008140; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=I5UIsF9sXAtXCzAtuI+A+nPX35jXeoAr0MhfRJx9RzA=;
	b=WsQMwAni4+N4f5bMcesMYaqH8z/JkljNA210SbDeooYdDiADe7bJR8LZa76DXr/eTmQX7o
	8663IbNjL42IK90KSaNcNMIB7pLZvPcfZNFXka/y7G9/Zi78g7YC637sUnOWF1NvBClrks
	ktxaf2oHpjf0I+ypxIImhYbgMAJEAms=
Subject: Re: [PATCH v2 13/17] xen/hypfs: support dynamic hypfs nodes
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-14-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a02fe2e6-428f-9bea-0108-92fa03729420@suse.com>
Date: Thu, 3 Dec 2020 16:08:59 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201201082128.15239-14-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.12.2020 09:21, Juergen Gross wrote:
> Add a HYPFS_VARDIR_INIT() macro for initializing such a directory
> statically, taking a struct hypfs_funcs pointer as parameter additional
> to those of HYPFS_DIR_INIT().
> 
> Modify HYPFS_VARSIZE_INIT() to take the function vector pointer as an
> additional parameter as this will be needed for dynamical entries.
> 
> For being able to let the generic hypfs coding continue to work on
> normal struct hypfs_entry entities even for dynamical nodes add some
> infrastructure for allocating a working area for the current hypfs
> request in order to store needed information for traversing the tree.
> This area is anchored in a percpu pointer and can be retrieved by any
> level of the dynamic entries. The normal way to handle allocation and
> freeing is to allocate the data in the enter() callback of a node and
> to free it in the related exit() callback.
> 
> Add a hypfs_add_dyndir() function for adding a dynamic directory
> template to the tree, which is needed for having the correct reference
> to its position in hypfs.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - switch to xzalloc_bytes() in hypfs_alloc_dyndata() (Jan Beulich)
> - carved out from previous patch
> - use enter() and exit() callbacks for allocating and freeing
>   dyndata memory

I can't spot what this describes, and the respective part of the
description therefore also remains unclear to me. Not least when
considering multi-level templates, where potentially each of the
handlers may want to allocate dyndata, yet only one party can do so
at a time.

> - add hypfs_add_dyndir()

Overall this patch adds a lot of (for now) dead code, which makes it
hard to judge whether this is what's needed. I guess I'll again
learn more by reading further patches.
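The allocate-in-enter() / free-in-exit() discipline the changelog describes
could be modeled as below (hypfs_alloc_dyndata()/hypfs_free_dyndata() are
stand-ins modelled on the names in the patch, not its real interface):

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch of the per-request working area: allocated once on enter(),
 * anchored in a (here: single) per-CPU pointer, and released on the
 * matching exit().  Illustrative only. */

static void *dyndata;              /* stands in for the percpu anchor */

static void *hypfs_alloc_dyndata(size_t size)
{
    assert(!dyndata);              /* only one area per request */
    dyndata = calloc(1, size);     /* zeroed, as with xzalloc_bytes() */
    return dyndata;
}

static void hypfs_free_dyndata(void)
{
    free(dyndata);
    dyndata = NULL;
}
```

Any level of a dynamic hierarchy can then retrieve the area through the
anchor, which is also why only one allocation can be live at a time.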

Jan


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 15:11:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 15:11:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43743.78594 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkqGQ-0000kU-Hd; Thu, 03 Dec 2020 15:11:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43743.78594; Thu, 03 Dec 2020 15:11:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkqGQ-0000kN-Ei; Thu, 03 Dec 2020 15:11:22 +0000
Received: by outflank-mailman (input) for mailman id 43743;
 Thu, 03 Dec 2020 15:11:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yflw=FH=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kkqGP-0000kE-D2
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 15:11:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b965dbb1-944c-4fd0-8e8d-b9f24e04ae41;
 Thu, 03 Dec 2020 15:11:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9A798ABE9;
 Thu,  3 Dec 2020 15:11:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b965dbb1-944c-4fd0-8e8d-b9f24e04ae41
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607008279; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=6AQrTGM70WvRjacy6wu+regu1TCCTX4sGmyLOct/hy0=;
	b=TeeLctapG7qWpcOhd2l2RMYZUNxnFdP//iQPQui7nxBIjbhEEfALnueqt7AOfxWlAQQtvi
	lhLoZchwl0xWUhxtQ+bmguUaA18xtcG3YqJYt+dIiZkM6EZQGNZjovyPyPGC7BXWVq51/2
	NtaAblCGNShg2GBq3TZ1j2+JfoBqWrA=
Subject: Re: [PATCH v2 15/17] xen/cpupool: add cpupool directories
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Dario Faggioli <dfaggioli@suse.com>,
 xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-16-jgross@suse.com>
 <89f52bed-c611-70c5-1349-63838530debd@suse.com>
 <0613c7e2-724a-e16c-91f7-f99298d04ab2@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <1cc0569c-5d6d-e28a-b5c4-36081c6eb2ac@suse.com>
Date: Thu, 3 Dec 2020 16:11:18 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <0613c7e2-724a-e16c-91f7-f99298d04ab2@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="IBRapjFuAUaEMXRlTBC2maKlUZ02LHv8c"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--IBRapjFuAUaEMXRlTBC2maKlUZ02LHv8c
Content-Type: multipart/mixed; boundary="8sj6s2QdyjsyosFvFACeITFUx1yNAG8us";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Dario Faggioli <dfaggioli@suse.com>,
 xen-devel@lists.xenproject.org
Message-ID: <1cc0569c-5d6d-e28a-b5c4-36081c6eb2ac@suse.com>
Subject: Re: [PATCH v2 15/17] xen/cpupool: add cpupool directories
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-16-jgross@suse.com>
 <89f52bed-c611-70c5-1349-63838530debd@suse.com>
 <0613c7e2-724a-e16c-91f7-f99298d04ab2@suse.com>
In-Reply-To: <0613c7e2-724a-e16c-91f7-f99298d04ab2@suse.com>

--8sj6s2QdyjsyosFvFACeITFUx1yNAG8us
Content-Type: multipart/mixed;
 boundary="------------2DBB41EF7B7C5BC488EF78F6"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------2DBB41EF7B7C5BC488EF78F6
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 03.12.20 15:46, Jan Beulich wrote:
> On 02.12.2020 16:46, Jürgen Groß wrote:
>> On 01.12.20 09:21, Juergen Gross wrote:
>>> @@ -1003,12 +1006,131 @@ static struct notifier_block cpu_nfb = {
>>>        .notifier_call = cpu_callback
>>>    };
>>>   
>>> +#ifdef CONFIG_HYPFS
>>> +static const struct hypfs_entry *cpupool_pooldir_enter(
>>> +    const struct hypfs_entry *entry);
>>> +
>>> +static struct hypfs_funcs cpupool_pooldir_funcs = {
>>> +    .enter = cpupool_pooldir_enter,
>>> +    .exit = hypfs_node_exit,
>>> +    .read = hypfs_read_dir,
>>> +    .write = hypfs_write_deny,
>>> +    .getsize = hypfs_getsize,
>>> +    .findentry = hypfs_dir_findentry,
>>> +};
>>> +
>>> +static HYPFS_VARDIR_INIT(cpupool_pooldir, "%u", &cpupool_pooldir_funcs);
>>> +
>>> +static const struct hypfs_entry *cpupool_pooldir_enter(
>>> +    const struct hypfs_entry *entry)
>>> +{
>>> +    return &cpupool_pooldir.e;
>>> +}
>> I have found a more generic way to handle entering a dyndir node,
>> resulting in no need to have cpupool_pooldir_enter() and
>> cpupool_pooldir_funcs.
>>
>> This will add some more lines to the previous patch, but less than
>> saved here.
> 
> Which may then mean it's not a good use of time to look at v2 patch
> 14, considering there's a lot of other stuff in need of looking at?

All of V2 patch 14 remains valid, there is just a generic enter function
added in V3.


Juergen


--------------2DBB41EF7B7C5BC488EF78F6--

--8sj6s2QdyjsyosFvFACeITFUx1yNAG8us--


--IBRapjFuAUaEMXRlTBC2maKlUZ02LHv8c--


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 15:14:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 15:14:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43761.78607 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkqJL-000126-13; Thu, 03 Dec 2020 15:14:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43761.78607; Thu, 03 Dec 2020 15:14:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkqJK-00011z-UD; Thu, 03 Dec 2020 15:14:22 +0000
Received: by outflank-mailman (input) for mailman id 43761;
 Thu, 03 Dec 2020 15:14:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yflw=FH=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kkqJJ-00011s-C2
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 15:14:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 287800c4-73b6-4983-97ab-e58aa7256a5a;
 Thu, 03 Dec 2020 15:14:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3ACF8AC65;
 Thu,  3 Dec 2020 15:14:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 287800c4-73b6-4983-97ab-e58aa7256a5a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607008459; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=cAn2gV3OTPE4+DePVqcaxMGFcAzZWJKht69BAfdBfVg=;
	b=ZjA27wArEfi1L+86c1bhieLj1dHbCNkTwNqAEydupkfmm0n2XD3q86/lSnlwEAx5FKQElv
	2MtVGvvX9gYqZ2on0MK5kaccqnwgwtWszS7GevldmYhowm2lZNQhWyTd/kUsLZFB4NuBIi
	C28OrlW6sxQ/T/P595qUtmGO9pG1wWI=
Subject: Re: [PATCH v2 12/17] xen/hypfs: add new enter() and exit() per node
 callbacks
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-13-jgross@suse.com>
 <0c57dd86-36d9-c378-6bdb-50221a7812b8@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <2503547c-1b3c-2224-c4a9-c647d9d1a058@suse.com>
Date: Thu, 3 Dec 2020 16:14:18 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <0c57dd86-36d9-c378-6bdb-50221a7812b8@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="DpINg4kaag5UxHPSm2KwlwvvqM6Trvnbl"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--DpINg4kaag5UxHPSm2KwlwvvqM6Trvnbl
Content-Type: multipart/mixed; boundary="AXnDHrJzysLHpeiJxwKVZv0EOYK9JNuAV";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <2503547c-1b3c-2224-c4a9-c647d9d1a058@suse.com>
Subject: Re: [PATCH v2 12/17] xen/hypfs: add new enter() and exit() per node
 callbacks
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-13-jgross@suse.com>
 <0c57dd86-36d9-c378-6bdb-50221a7812b8@suse.com>
In-Reply-To: <0c57dd86-36d9-c378-6bdb-50221a7812b8@suse.com>

--AXnDHrJzysLHpeiJxwKVZv0EOYK9JNuAV
Content-Type: multipart/mixed;
 boundary="------------29B8FDFF0FC4529743171AB6"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------29B8FDFF0FC4529743171AB6
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 03.12.20 15:59, Jan Beulich wrote:
> On 01.12.2020 09:21, Juergen Gross wrote:
>> In order to better support resource allocation and locking for dynamic
>> hypfs nodes add enter() and exit() callbacks to struct hypfs_funcs.
>>
>> The enter() callback is called when entering a node during hypfs user
>> actions (traversing, reading or writing it), while the exit() callback
>> is called when leaving a node (accessing another node at the same or a
>> higher directory level, or when returning to the user).
>>
>> For avoiding recursion this requires a parent pointer in each node.
>> Let the enter() callback return the entry address which is stored as
>> the last accessed node in order to be able to use a template entry for
>> that purpose in case of dynamic entries.
>
> I guess I'll learn in subsequent patches why this is necessary /
> useful. Right now it looks odd for the function to simply return
> the incoming argument, as this way it's clear the caller knows
> the correct value already.

Basically, for dynamic entries based on a template, the enter() function
will return the address of template->e instead of the dynamic entry
itself. This makes it possible to use the standard entry functions for
any nodes linked to the template.

>
>> @@ -100,11 +112,58 @@ static void hypfs_unlock(void)
>>       }
>>   }
>>
>> +const struct hypfs_entry *hypfs_node_enter(const struct hypfs_entry *entry)
>> +{
>> +    return entry;
>> +}
>> +
>> +void hypfs_node_exit(const struct hypfs_entry *entry)
>> +{
>> +}
>> +
>> +static int node_enter(const struct hypfs_entry *entry)
>> +{
>> +    const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered);
>> +
>> +    entry = entry->funcs->enter(entry);
>> +    if ( IS_ERR(entry) )
>> +        return PTR_ERR(entry);
>> +
>> +    ASSERT(!*last || *last == entry->parent);
>> +
>> +    *last = entry;
>> +
>> +    return 0;
>> +}
>> +
>> +static void node_exit(const struct hypfs_entry *entry)
>> +{
>> +    const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered);
>> +
>> +    if ( !*last )
>> +        return;
>
> Under what conditions is this legitimate to happen? IOW shouldn't
> there be an ASSERT_UNREACHABLE() here?

This is for the "/" node.


Juergen

--------------29B8FDFF0FC4529743171AB6
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------29B8FDFF0FC4529743171AB6--

--AXnDHrJzysLHpeiJxwKVZv0EOYK9JNuAV--

--DpINg4kaag5UxHPSm2KwlwvvqM6Trvnbl
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/JAMoFAwAAAAAACgkQsN6d1ii/Ey/S
wAgAllJbESpFlfB2uhZ3dc2arq96CBlzuSyYQj1Fz8/7RB9vOiiXP3pwbEIpAtNkBKEKDPQHHmIE
DRLxkyuz8jXWIe39jjlJbdufSeMLgAfHhHts6bFEDZkGNmN0BBDlqS+xu9X2D5INotOKzW3t/M8d
JMIcA8n40w0BVRIQ6qo4TUJs17FQnVhRtziHD69vlSxCK7fPqZDT+hRlvFK7oBaG2XCZAOmy7UpB
KSrHyjXIpVHwVwmrqD9sN4P0yQAUkbOgBOmJbpteel5ZOobtt/l5vgwU6WBHVvJf2wXLVRF26yW4
29bqLEbcj2bIl8mxSjCYg1gTISt2i8orXUr5lzkCZg==
=4kpc
-----END PGP SIGNATURE-----

--DpINg4kaag5UxHPSm2KwlwvvqM6Trvnbl--


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 15:18:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 15:18:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43769.78618 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkqNF-0001IB-Im; Thu, 03 Dec 2020 15:18:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43769.78618; Thu, 03 Dec 2020 15:18:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkqNF-0001I4-Fm; Thu, 03 Dec 2020 15:18:25 +0000
Received: by outflank-mailman (input) for mailman id 43769;
 Thu, 03 Dec 2020 15:18:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yflw=FH=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kkqND-0001Hz-Vk
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 15:18:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3d46ee3e-dd18-497c-b9ed-9d0d31441d37;
 Thu, 03 Dec 2020 15:18:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 068C5ABCE;
 Thu,  3 Dec 2020 15:18:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d46ee3e-dd18-497c-b9ed-9d0d31441d37
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607008701; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=jwDV5HhaydAvzJYaCReQGhRny3sveDf76+NkX356SB4=;
	b=RXl90IvfLV3B+eIkJ57tFQH7Z9LhFEo8jn6VF7uTy7eFhE75Kp+S/2G4M446S3wE+xeNyM
	F6o+8xUPwqxiCM3bhkg5lXCFnHxEeL5cIYZzaHk5z5aFXO0J4T6c4kCzGKj81XmEZtp+C9
	yBIjUZzztMY9OvQgs2WM6Di7GVkVAkY=
Subject: Re: [PATCH v2 13/17] xen/hypfs: support dynamic hypfs nodes
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-14-jgross@suse.com>
 <a02fe2e6-428f-9bea-0108-92fa03729420@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <0efe5d36-da32-7f0d-5515-5fb5994ea2d9@suse.com>
Date: Thu, 3 Dec 2020 16:18:20 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <a02fe2e6-428f-9bea-0108-92fa03729420@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="axbB0pP5sZ4QuvTln0IxeS0coD5uAjvTT"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--axbB0pP5sZ4QuvTln0IxeS0coD5uAjvTT
Content-Type: multipart/mixed; boundary="XrT8D4fGhKOAxx1M7yVO0Qsd6Ea7gOe3z";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <0efe5d36-da32-7f0d-5515-5fb5994ea2d9@suse.com>
Subject: Re: [PATCH v2 13/17] xen/hypfs: support dynamic hypfs nodes
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-14-jgross@suse.com>
 <a02fe2e6-428f-9bea-0108-92fa03729420@suse.com>
In-Reply-To: <a02fe2e6-428f-9bea-0108-92fa03729420@suse.com>

--XrT8D4fGhKOAxx1M7yVO0Qsd6Ea7gOe3z
Content-Type: multipart/mixed;
 boundary="------------D60063E3C3802B61D4D5134A"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------D60063E3C3802B61D4D5134A
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 03.12.20 16:08, Jan Beulich wrote:
> On 01.12.2020 09:21, Juergen Gross wrote:
>> Add a HYPFS_VARDIR_INIT() macro for initializing such a directory
>> statically, taking a struct hypfs_funcs pointer as parameter additional
>> to those of HYPFS_DIR_INIT().
>>
>> Modify HYPFS_VARSIZE_INIT() to take the function vector pointer as an
>> additional parameter as this will be needed for dynamical entries.
>>
>> For being able to let the generic hypfs coding continue to work on
>> normal struct hypfs_entry entities even for dynamical nodes add some
>> infrastructure for allocating a working area for the current hypfs
>> request in order to store needed information for traversing the tree.
>> This area is anchored in a percpu pointer and can be retrieved by any
>> level of the dynamic entries. The normal way to handle allocation and
>> freeing is to allocate the data in the enter() callback of a node and
>> to free it in the related exit() callback.
>>
>> Add a hypfs_add_dyndir() function for adding a dynamic directory
>> template to the tree, which is needed for having the correct reference
>> to its position in hypfs.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> V2:
>> - switch to xzalloc_bytes() in hypfs_alloc_dyndata() (Jan Beulich)
>> - carved out from previous patch
>> - use enter() and exit() callbacks for allocating and freeing
>>    dyndata memory
>
> I can't seem to spot what this describes, and the
> respective part of the description therefore also remains unclear

I think all pieces are coming together with patch 15.

> to me. Not the least again when considering multi-level templates,
> where potentially each of the handlers may want to allocate dyndata,
> yet only one party can at a time.

Right now: yes.

If needed, it will be rather easy to have a linked list of dyndata
entities, with the percpu dyndata variable pointing to the most recent
one (that of the currently deepest nesting level).

>
>> - add hypfs_add_dyndir()
>
> Overall this patch adds a lot of (for now) dead code, which makes it
> hard to judge whether this is what's needed. I guess I'll again
> learn more by reading further patches.

I hope so.


--------------D60063E3C3802B61D4D5134A--

--XrT8D4fGhKOAxx1M7yVO0Qsd6Ea7gOe3z--

--axbB0pP5sZ4QuvTln0IxeS0coD5uAjvTT
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/JAbwFAwAAAAAACgkQsN6d1ii/Ey94
HQgAhnWo2c4OidZB+HhTJGIh8tmFTFFuWBJKJBODAiGsvI2FbdwcnjtLx7MSmVeihAvMPYyYI7ls
BYTbp4DdTpQLKCMvnzWCTWY8iR0EZv37PL1zQUajzyvAFszVyLL9lvhxdfd27KNCgzOLy7KbwEZt
7eW0yGtJhqveO2OSA7hnIy30v9mr3NIdsU7FewSqEHpIG/B3G0Pouk4AAt8bGYbZqfJHImpyWYfl
oTn+q4c0Yxz6Wix0VZT5gdR/1XSvfb+t5B0jqR3wB/x7JDf2HdVBJLNnd15CEGQjFXruDG17B+j5
/Vo52CN3M5oHx/b1EAdhUcZzkQd1fDLKkeCsBOLAWA==
=fzpe
-----END PGP SIGNATURE-----

--axbB0pP5sZ4QuvTln0IxeS0coD5uAjvTT--


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 15:23:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 15:23:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43775.78631 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkqRh-0002Gp-9G; Thu, 03 Dec 2020 15:23:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43775.78631; Thu, 03 Dec 2020 15:23:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkqRh-0002Gi-5F; Thu, 03 Dec 2020 15:23:01 +0000
Received: by outflank-mailman (input) for mailman id 43775;
 Thu, 03 Dec 2020 15:23:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vSHx=FH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkqRf-0002Gd-Tw
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 15:22:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b0041b98-db9a-4a09-aee0-f9344932ae24;
 Thu, 03 Dec 2020 15:22:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CBDDDABCE;
 Thu,  3 Dec 2020 15:22:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b0041b98-db9a-4a09-aee0-f9344932ae24
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607008978; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=nYaGsRymu8tOlNaFh/oHGL2VMwjaNycEaEBHauIUjjQ=;
	b=koVcS2usgAx3WC4U1saA6fqW0mfZTKYhmjskys5MBukd8Iyj0mAfOO8c8gPgz6Fa4OhWqK
	KHPrGRbT2ehZ4Cl9M6pGI+iY71PnPJaGCc1fwEn9Ob4O1XFqnBc+x+XJmycvpbKJsPR5di
	1h1n3WQBn8ztY6h/cajm2he2ENh69dA=
Subject: Re: [PATCH v5 1/4] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_evtchn_fifo, ...
To: Paul Durrant <paul@xen.org>
Cc: Paul Durrant <pdurrant@amazon.com>, Eslam Elnikety <elnikety@amazon.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott
 <dave@recoil.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201203124159.3688-1-paul@xen.org>
 <20201203124159.3688-2-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <fea91a65-1d7c-cd46-81a2-9a6bcb690ed1@suse.com>
Date: Thu, 3 Dec 2020 16:22:56 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201203124159.3688-2-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.12.2020 13:41, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> ...to control the visibility of the FIFO event channel operations
> (EVTCHNOP_init_control, EVTCHNOP_expand_array, and EVTCHNOP_set_priority) to
> the guest.
> 
> These operations were added to the public header in commit d2d50c2f308f
> ("evtchn: add FIFO-based event channel ABI") and the first implementation
> appeared in the two subsequent commits: edc8872aeb4a ("evtchn: implement
> EVTCHNOP_set_priority and add the set_priority hook") and 88910061ec61
> ("evtchn: add FIFO-based event channel hypercalls and port ops"). Prior to
> that, a guest issuing those operations would receive a return value of
> -ENOSYS (not implemented) from Xen. Guests aware of the FIFO operations but
> running on an older (pre-4.4) Xen would fall back to using the 2-level event
> channel interface upon seeing this return value.
> 
> Unfortunately the uncontrollable appearance of these new operations in Xen 4.4
> onwards has implications for hibernation of some Linux guests. During resume
> from hibernation, there are two kernels involved: the "boot" kernel and the
> "resume" kernel. The guest boot kernel may default to use FIFO operations and
> instruct Xen via EVTCHNOP_init_control to switch from 2-level to FIFO. On the
> other hand, the resume kernel keeps assuming 2-level, because it was hibernated
> on a version of Xen that did not support the FIFO operations.
> 
> To maintain compatibility it is necessary to make Xen behave as it did
> before the new operations were added and hence the code in this patch ensures
> that, if XEN_DOMCTL_CDF_evtchn_fifo is not set, the FIFO event channel
> operations will again result in -ENOSYS being returned to the guest.

I have to admit I'm now even more concerned about controls such as
this going into Xen, all the more with the now 2nd use in the
subsequent patch. The implication would seem to be that whenever we
add new hypercalls or sub-ops, a domain creation control would also
need adding to determine whether that new sub-op is actually okay for
a guest to use. Failing that, I'd be keen to see criteria at least
roughly outlined up front, by which it could be established whether
such an override control is needed.

I'm also not convinced such controls really want to be opt-in rather
than opt-out. While perhaps sensible as long as a feature is
experimental, not exposing stuff by default may mean slower adoption
of new (and hopefully better) functionality. I realize there's still
the option of having the tool stack default to enable, and just the
hypervisor defaulting to disable, but anyway.

> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -622,7 +622,8 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>      unsigned int max_vcpus;
>  
>      /* HVM and HAP must be set. IOMMU may or may not be */
> -    if ( (config->flags & ~XEN_DOMCTL_CDF_iommu) !=
> +    if ( (config->flags &
> +          ~(XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_evtchn_fifo)) !=
>           (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap) )
>      {
>          dprintk(XENLOG_INFO, "Unsupported configuration %#x\n",
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -2478,7 +2478,8 @@ void __init create_domUs(void)
>          struct domain *d;
>          struct xen_domctl_createdomain d_cfg = {
>              .arch.gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE,
> -            .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap,
> +            .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
> +                     XEN_DOMCTL_CDF_evtchn_fifo,
>              .max_evtchn_port = -1,
>              .max_grant_frames = 64,
>              .max_maptrack_frames = 1024,
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -805,7 +805,8 @@ void __init start_xen(unsigned long boot_phys_offset,
>      struct bootmodule *xen_bootmodule;
>      struct domain *dom0;
>      struct xen_domctl_createdomain dom0_cfg = {
> -        .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap,
> +        .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
> +                 XEN_DOMCTL_CDF_evtchn_fifo,
>          .max_evtchn_port = -1,
>          .max_grant_frames = gnttab_dom0_frames(),
>          .max_maptrack_frames = -1,
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -738,7 +738,8 @@ static struct domain *__init create_dom0(const module_t *image,
>                                           const char *loader)
>  {
>      struct xen_domctl_createdomain dom0_cfg = {
> -        .flags = IS_ENABLED(CONFIG_TBOOT) ? XEN_DOMCTL_CDF_s3_integrity : 0,
> +        .flags = XEN_DOMCTL_CDF_evtchn_fifo |
> +                 (IS_ENABLED(CONFIG_TBOOT) ? XEN_DOMCTL_CDF_s3_integrity : 0),
>          .max_evtchn_port = -1,
>          .max_grant_frames = -1,
>          .max_maptrack_frames = -1,
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -307,7 +307,7 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
>           ~(XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
>             XEN_DOMCTL_CDF_s3_integrity | XEN_DOMCTL_CDF_oos_off |
>             XEN_DOMCTL_CDF_xs_domain | XEN_DOMCTL_CDF_iommu |
> -           XEN_DOMCTL_CDF_nested_virt) )
> +           XEN_DOMCTL_CDF_nested_virt | XEN_DOMCTL_CDF_evtchn_fifo) )
>      {
>          dprintk(XENLOG_INFO, "Unknown CDF flags %#x\n", config->flags);
>          return -EINVAL;

All of the hunks above point out a scalability issue if we were to
follow this route for even just a fair part of new sub-ops, and I
suppose you've noticed this with the next patch presumably touching
all the same places again.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 15:29:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 15:29:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43781.78643 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkqXa-0002V6-UN; Thu, 03 Dec 2020 15:29:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43781.78643; Thu, 03 Dec 2020 15:29:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkqXa-0002Uz-R0; Thu, 03 Dec 2020 15:29:06 +0000
Received: by outflank-mailman (input) for mailman id 43781;
 Thu, 03 Dec 2020 15:29:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vSHx=FH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkqXZ-0002Uu-Fg
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 15:29:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b10b02d5-7f06-4ec1-972c-e5f0a41e5e86;
 Thu, 03 Dec 2020 15:29:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D3C8CABCE;
 Thu,  3 Dec 2020 15:29:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b10b02d5-7f06-4ec1-972c-e5f0a41e5e86
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607009343; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+rozxxHWvd4WD/sJly2e0flB1+mp+wjGgFoB1B01LzY=;
	b=Om3iCYJcZiBku6VQBJx64lrNacSpxXs80HWGAy770fLH2L3X/SVPZGjK0CW3dkaXF3T8uM
	COEQkJGV/XWJVsEiA50E5aF17DIJIGvWPUeAjoZoirpadrcAYH0P++GF/NXHTatTM2lFGd
	uTt5vn50SCN+HVD9KHBrwrFPF5BrE/Q=
Subject: Re: [PATCH v2 12/17] xen/hypfs: add new enter() and exit() per node
 callbacks
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-13-jgross@suse.com>
 <0c57dd86-36d9-c378-6bdb-50221a7812b8@suse.com>
 <2503547c-1b3c-2224-c4a9-c647d9d1a058@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6593ed01-23d0-70ac-faa3-556c69adec2b@suse.com>
Date: Thu, 3 Dec 2020 16:29:03 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <2503547c-1b3c-2224-c4a9-c647d9d1a058@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 03.12.2020 16:14, Jürgen Groß wrote:
> On 03.12.20 15:59, Jan Beulich wrote:
>> On 01.12.2020 09:21, Juergen Gross wrote:
>>> @@ -100,11 +112,58 @@ static void hypfs_unlock(void)
>>>       }
>>>   }
>>>   
>>> +const struct hypfs_entry *hypfs_node_enter(const struct hypfs_entry *entry)
>>> +{
>>> +    return entry;
>>> +}
>>> +
>>> +void hypfs_node_exit(const struct hypfs_entry *entry)
>>> +{
>>> +}
>>> +
>>> +static int node_enter(const struct hypfs_entry *entry)
>>> +{
>>> +    const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered);
>>> +
>>> +    entry = entry->funcs->enter(entry);
>>> +    if ( IS_ERR(entry) )
>>> +        return PTR_ERR(entry);
>>> +
>>> +    ASSERT(!*last || *last == entry->parent);
>>> +
>>> +    *last = entry;
>>> +
>>> +    return 0;
>>> +}
>>> +
>>> +static void node_exit(const struct hypfs_entry *entry)
>>> +{
>>> +    const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered);
>>> +
>>> +    if ( !*last )
>>> +        return;
>>
>> Under what conditions is this legitimate to happen? IOW shouldn't
>> there be an ASSERT_UNREACHABLE() here?
> 
> This is for the "/" node.

I.e. would ASSERT(!entry->parent) be appropriate to add here, at
the same time serving as documentation of what you've just said?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 15:44:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 15:44:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43792.78655 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkqmL-0004SD-7k; Thu, 03 Dec 2020 15:44:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43792.78655; Thu, 03 Dec 2020 15:44:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkqmL-0004S6-4l; Thu, 03 Dec 2020 15:44:21 +0000
Received: by outflank-mailman (input) for mailman id 43792;
 Thu, 03 Dec 2020 15:44:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vSHx=FH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkqmJ-0004S1-ER
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 15:44:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 062ed0e9-2bcb-4eb5-ba1b-dcf586605d4f;
 Thu, 03 Dec 2020 15:44:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6A8D6ABE9;
 Thu,  3 Dec 2020 15:44:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 062ed0e9-2bcb-4eb5-ba1b-dcf586605d4f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607010257; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=klzws1Zvp8VyV7qej1D8cd+MzGBuegBp5ai6h1JSypU=;
	b=NjGMXeoL48zuoyhSdqxGYoHknBGhKVFejx6v/OC+BBsxQxpsZxg286yBlX9ONhblxvQ5Qn
	olUsITdm2ic/bEl9TAxYcBHicuFB8U7NYXzkhFgx3kk/KksKvtt1ks6VtXN+tlUofuOLx0
	HKYLxo/94QWdB/eESxAWAIiKEuNVXqQ=
Subject: Re: [PATCH v2 14/17] xen/hypfs: add support for id-based dynamic
 directories
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-15-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <369bcb0b-5554-8976-d3fe-5066b3d7cdce@suse.com>
Date: Thu, 3 Dec 2020 16:44:16 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201201082128.15239-15-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.12.2020 09:21, Juergen Gross wrote:
> --- a/xen/common/hypfs.c
> +++ b/xen/common/hypfs.c
> @@ -355,6 +355,81 @@ unsigned int hypfs_getsize(const struct hypfs_entry *entry)
>      return entry->size;
>  }
>  
> +int hypfs_read_dyndir_id_entry(const struct hypfs_entry_dir *template,
> +                               unsigned int id, bool is_last,
> +                               XEN_GUEST_HANDLE_PARAM(void) *uaddr)
> +{
> +    struct xen_hypfs_dirlistentry direntry;
> +    char name[HYPFS_DYNDIR_ID_NAMELEN];
> +    unsigned int e_namelen, e_len;
> +
> +    e_namelen = snprintf(name, sizeof(name), template->e.name, id);
> +    e_len = DIRENTRY_SIZE(e_namelen);
> +    direntry.e.pad = 0;
> +    direntry.e.type = template->e.type;
> +    direntry.e.encoding = template->e.encoding;
> +    direntry.e.content_len = template->e.funcs->getsize(&template->e);
> +    direntry.e.max_write_len = template->e.max_size;
> +    direntry.off_next = is_last ? 0 : e_len;
> +    if ( copy_to_guest(*uaddr, &direntry, 1) )
> +        return -EFAULT;
> +    if ( copy_to_guest_offset(*uaddr, DIRENTRY_NAME_OFF, name,
> +                              e_namelen + 1) )
> +        return -EFAULT;
> +
> +    guest_handle_add_offset(*uaddr, e_len);
> +
> +    return 0;
> +}
> +
> +static struct hypfs_entry *hypfs_dyndir_findentry(
> +    const struct hypfs_entry_dir *dir, const char *name, unsigned int name_len)
> +{
> +    const struct hypfs_dyndir_id *data;
> +
> +    data = hypfs_get_dyndata();
> +
> +    /* Use template with original findentry function. */
> +    return data->template->e.funcs->findentry(data->template, name, name_len);
> +}
> +
> +static int hypfs_read_dyndir(const struct hypfs_entry *entry,
> +                             XEN_GUEST_HANDLE_PARAM(void) uaddr)
> +{
> +    const struct hypfs_dyndir_id *data;
> +
> +    data = hypfs_get_dyndata();
> +
> +    /* Use template with original read function. */
> +    return data->template->e.funcs->read(&data->template->e, uaddr);
> +}
> +
> +struct hypfs_entry *hypfs_gen_dyndir_entry_id(
> +    const struct hypfs_entry_dir *template, unsigned int id)
> +{
> +    struct hypfs_dyndir_id *data;
> +
> +    data = hypfs_get_dyndata();
> +
> +    data->template = template;
> +    data->id = id;
> +    snprintf(data->name, sizeof(data->name), template->e.name, id);
> +    data->dir = *template;
> +    data->dir.e.name = data->name;

I'm somewhat puzzled, if not confused, by the apparent redundancy
of this name generation with that in hypfs_read_dyndir_id_entry().
Wasn't the idea to be able to use generic functions on these
generated entries?

Also, seeing that other function's name, wouldn't the one here
want to be named hypfs_gen_dyndir_id_entry()?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 15:45:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 15:45:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43797.78666 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkqnQ-0004YU-J2; Thu, 03 Dec 2020 15:45:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43797.78666; Thu, 03 Dec 2020 15:45:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkqnQ-0004YN-G6; Thu, 03 Dec 2020 15:45:28 +0000
Received: by outflank-mailman (input) for mailman id 43797;
 Thu, 03 Dec 2020 15:45:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oiWT=FH=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kkqnO-0004YH-T5
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 15:45:26 +0000
Received: from mail-wm1-x344.google.com (unknown [2a00:1450:4864:20::344])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 96f5e1c2-4d31-4160-923a-6713c82a0c92;
 Thu, 03 Dec 2020 15:45:25 +0000 (UTC)
Received: by mail-wm1-x344.google.com with SMTP id a6so3165243wmc.2
 for <xen-devel@lists.xenproject.org>; Thu, 03 Dec 2020 07:45:25 -0800 (PST)
Received: from CBGR90WXYV0 (host86-183-162-145.range86-183.btcentralplus.com.
 [86.183.162.145])
 by smtp.gmail.com with ESMTPSA id q4sm2025341wmc.2.2020.12.03.07.45.23
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 03 Dec 2020 07:45:24 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 96f5e1c2-4d31-4160-923a-6713c82a0c92
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=GzSzmJYRN4HMPNLrihzT9az/OPz9iInm9sfCMz8CPLo=;
        b=j4wjSvdqkXZkBywwbxpbvvxlzL8WrCyc07ue6Lamgw1g7QUeIgU8hOiOcnBzF20joT
         an70k8hICH5jiguisqjPdkY3FP4Gvgn1EWOsQYf0RCNSQ7AieLeYbqu8w9BtKM3mvwZi
         Xne0qGqYHC537wgcaPUjrmQjzlKxnm03/591BtjPk1fuuqUHfoTwPeKG5W+aCagMXz6B
         cnSMcEg5mdnTfGyZz3Gxk9gQbsSDAU1dcJgXXp2G5KldXRPwwGN00p3Oh2ZD9i4f65ep
         J9PdH8Ws/hRXM/GSXIWKT+Ju7J6mC6SpQZx1k1KwWmjoHCx95KKUSGLomwkLFln/QxNk
         J/wA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=GzSzmJYRN4HMPNLrihzT9az/OPz9iInm9sfCMz8CPLo=;
        b=Br6UWA8ond1/Kx1FsXkl3pl0ps8fCEGBshduTmmmrsUerAqXi3kj5h6rn95r+a1Z1K
         nQedS78Zaf50Btr4nInbK2aXWapk2tq5RMXf7Yjy7vadKpfGG6+BmVsA6vIOIverjr8N
         JDXetaAwDV/XdmG6Wx1YnM/fD4nvhXS6y+f1AHe8WCeTUVSiHUbz7nsaiqRwf9Ys2zAr
         T6MPvUBDPVvrLKrHrqIs7S3FZygG/1AuznVN6zX/zIiVc1ueireJKBq7COJpXLJPMNdL
         HYIsDNtVvlE3wkQbzTjBiQcuxgbaYHpLX24Vexfwo52MZi0emUQmnZ9QYEKLgUki6nxK
         wt+w==
X-Gm-Message-State: AOAM530dxC5zgyabbMwrzGv7PiiV2B4c10K0svzNJrvnvFevtvAcJET0
	LW6EIhm49wAlIRa1o1Qf9fo=
X-Google-Smtp-Source: ABdhPJxUSxDzqgwLUXNs9sOyya7svy5LVn1a5CWGGNM2GLQbPAz2wmBikIPVf3YwC8jArTxr8y4XJQ==
X-Received: by 2002:a05:600c:2652:: with SMTP id 18mr3977779wmy.54.1607010324882;
        Thu, 03 Dec 2020 07:45:24 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
Cc: "'Paul Durrant'" <pdurrant@amazon.com>,
	"'Eslam Elnikety'" <elnikety@amazon.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Wei Liu'" <wl@xen.org>,
	"'Anthony PERARD'" <anthony.perard@citrix.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Christian Lindig'" <christian.lindig@citrix.com>,
	"'David Scott'" <dave@recoil.org>,
	"'Volodymyr Babchuk'" <Volodymyr_Babchuk@epam.com>,
	=?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	<xen-devel@lists.xenproject.org>
References: <20201203124159.3688-1-paul@xen.org> <20201203124159.3688-2-paul@xen.org> <fea91a65-1d7c-cd46-81a2-9a6bcb690ed1@suse.com>
In-Reply-To: <fea91a65-1d7c-cd46-81a2-9a6bcb690ed1@suse.com>
Subject: RE: [PATCH v5 1/4] domctl: introduce a new domain create flag, XEN_DOMCTL_CDF_evtchn_fifo, ...
Date: Thu, 3 Dec 2020 15:45:23 -0000
Message-ID: <00ee01d6c98b$507af1c0$f170d540$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 8bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQJzrCHAX/gquRkO4g65wfjgQBytUAJ7xs67Aj6rtzSoheWkIA==

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 03 December 2020 15:23
> To: Paul Durrant <paul@xen.org>
> Cc: Paul Durrant <pdurrant@amazon.com>; Eslam Elnikety <elnikety@amazon.com>; Ian Jackson
> <iwj@xenproject.org>; Wei Liu <wl@xen.org>; Anthony PERARD <anthony.perard@citrix.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>; Julien Grall <julien@xen.org>;
> Stefano Stabellini <sstabellini@kernel.org>; Christian Lindig <christian.lindig@citrix.com>; David
> Scott <dave@recoil.org>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Roger Pau Monné
> <roger.pau@citrix.com>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH v5 1/4] domctl: introduce a new domain create flag, XEN_DOMCTL_CDF_evtchn_fifo,
> ...
>
> On 03.12.2020 13:41, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > ...to control the visibility of the FIFO event channel operations
> > (EVTCHNOP_init_control, EVTCHNOP_expand_array, and EVTCHNOP_set_priority) to
> > the guest.
> >
> > These operations were added to the public header in commit d2d50c2f308f
> > ("evtchn: add FIFO-based event channel ABI") and the first implementation
> > appeared in the two subsequent commits: edc8872aeb4a ("evtchn: implement
> > EVTCHNOP_set_priority and add the set_priority hook") and 88910061ec61
> > ("evtchn: add FIFO-based event channel hypercalls and port ops"). Prior to
> > that, a guest issuing those operations would receive a return value of
> > -ENOSYS (not implemented) from Xen. Guests aware of the FIFO operations but
> > running on an older (pre-4.4) Xen would fall back to using the 2-level event
> > channel interface upon seeing this return value.
> >
> > Unfortunately the uncontrollable appearance of these new operations in Xen 4.4
> > onwards has implications for hibernation of some Linux guests. During resume
> > from hibernation, there are two kernels involved: the "boot" kernel and the
> > "resume" kernel. The guest boot kernel may default to use FIFO operations and
> > instruct Xen via EVTCHNOP_init_control to switch from 2-level to FIFO. On the
> > other hand, the resume kernel keeps assuming 2-level, because it was hibernated
> > on a version of Xen that did not support the FIFO operations.
> >
> > To maintain compatibility it is necessary to make Xen behave as it did
> > before the new operations were added and hence the code in this patch ensures
> > that, if XEN_DOMCTL_CDF_evtchn_fifo is not set, the FIFO event channel
> > operations will again result in -ENOSYS being returned to the guest.
>
> I have to admit I'm now even more concerned of the control for such
> going into Xen, the more with the now 2nd use in the subsequent patch.
> The implication of this would seem to be that whenever we add new
> hypercalls or sub-ops, a domain creation control would also need
> adding determining whether that new sub-op is actually okay to use by
> a guest. Or else I'd be keen to up front see criteria at least roughly
> outlined by which it could be established whether such an override
> control is needed.
>

Ultimately I think any new hypercall (or related set of hypercalls) added to the ABI needs to be opt-in on a per-domain basis, so that we know that from when a domain is first created it will not see a change in its environment unless the VM administrator wants that to happen.

> I'm also not convinced such controls really want to be opt-in rather
> than opt-out.

They really need to be opt-in I think. From a cloud provider PoV it is important that nothing in a customer's environment changes unless we want it to. Otherwise we have no way to deploy an updated hypervisor version without risking crashing their VMs.

> While perhaps sensible as long as a feature is
> experimental, not exposing stuff by default may mean slower adoption
> of new (and hopefully better) functionality. I realize there's still
> the option of having the tool stack default to enable, and just the
> hypervisor defaulting to disable, but anyway.
>

Ok. I don't see a problem in default-to-enable behaviour... but I guess we will need to add ABI features to migration stream to fix things properly.

> > --- a/xen/arch/arm/domain.c
> > +++ b/xen/arch/arm/domain.c
> > @@ -622,7 +622,8 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
> >      unsigned int max_vcpus;
> >
> >      /* HVM and HAP must be set. IOMMU may or may not be */
> > -    if ( (config->flags & ~XEN_DOMCTL_CDF_iommu) !=
> > +    if ( (config->flags &
> > +          ~(XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_evtchn_fifo) !=
> >           (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap) )
> >      {
> >          dprintk(XENLOG_INFO, "Unsupported configuration %#x\n",
> > --- a/xen/arch/arm/domain_build.c
> > +++ b/xen/arch/arm/domain_build.c
> > @@ -2478,7 +2478,8 @@ void __init create_domUs(void)
> >          struct domain *d;
> >          struct xen_domctl_createdomain d_cfg = {
> >              .arch.gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE,
> > -            .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap,
> > +            .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
> > +                     XEN_DOMCTL_CDF_evtchn_fifo,
> >              .max_evtchn_port = -1,
> >              .max_grant_frames = 64,
> >              .max_maptrack_frames = 1024,
> > --- a/xen/arch/arm/setup.c
> > +++ b/xen/arch/arm/setup.c
> > @@ -805,7 +805,8 @@ void __init start_xen(unsigned long boot_phys_offset,
> >      struct bootmodule *xen_bootmodule;
> >      struct domain *dom0;
> >      struct xen_domctl_createdomain dom0_cfg = {
> > -        .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap,
> > +        .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
> > +                 XEN_DOMCTL_CDF_evtchn_fifo,
> >          .max_evtchn_port = -1,
> >          .max_grant_frames = gnttab_dom0_frames(),
> >          .max_maptrack_frames = -1,
> > --- a/xen/arch/x86/setup.c
> > +++ b/xen/arch/x86/setup.c
> > @@ -738,7 +738,8 @@ static struct domain *__init create_dom0(const module_t *image,
> >                                           const char *loader)
> >  {
> >      struct xen_domctl_createdomain dom0_cfg = {
> > -        .flags = IS_ENABLED(CONFIG_TBOOT) ? XEN_DOMCTL_CDF_s3_integrity : 0,
> > +        .flags = XEN_DOMCTL_CDF_evtchn_fifo |
> > +                 (IS_ENABLED(CONFIG_TBOOT) ? XEN_DOMCTL_CDF_s3_integrity : 0),
> >          .max_evtchn_port = -1,
> >          .max_grant_frames = -1,
> >          .max_maptrack_frames = -1,
> > --- a/xen/common/domain.c
> > +++ b/xen/common/domain.c
> > @@ -307,7 +307,7 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
> >           ~(XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
> >             XEN_DOMCTL_CDF_s3_integrity | XEN_DOMCTL_CDF_oos_off |
> >             XEN_DOMCTL_CDF_xs_domain | XEN_DOMCTL_CDF_iommu |
> > -           XEN_DOMCTL_CDF_nested_virt) )
> > +           XEN_DOMCTL_CDF_nested_virt | XEN_DOMCTL_CDF_evtchn_fifo) )
> >      {
> >          dprintk(XENLOG_INFO, "Unknown CDF flags %#x\n", config->flags);
> >          return -EINVAL;
>
> All of the hunks above point out a scalability issue if we were to
> follow this route for even just a fair part of new sub-ops, and I
> suppose you've noticed this with the next patch presumably touching
> all the same places again.

Indeed. This solution works for now but is probably not what we want in the long run. Same goes for the current way we control viridian features via an HVM param. It is good enough for now IMO since domctl is not a stable interface. Any ideas about how we might implement a better interface in the longer term?

  Paul

>
> Jan




From xen-devel-bounces@lists.xenproject.org Thu Dec 03 15:56:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 15:56:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43808.78678 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkqyR-0005hB-PR; Thu, 03 Dec 2020 15:56:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43808.78678; Thu, 03 Dec 2020 15:56:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkqyR-0005h4-MK; Thu, 03 Dec 2020 15:56:51 +0000
Received: by outflank-mailman (input) for mailman id 43808;
 Thu, 03 Dec 2020 15:56:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vSHx=FH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kkqyQ-0005gz-Gw
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 15:56:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cd6924c5-28f8-4a6c-a070-ba8ebca5276b;
 Thu, 03 Dec 2020 15:56:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6017FABE9;
 Thu,  3 Dec 2020 15:56:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd6924c5-28f8-4a6c-a070-ba8ebca5276b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607011008; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RWvbEaacuaWwtu6K66olcUmz3U/ojtYXp5FVHh/FZUQ=;
	b=Ai1jBQb0GCclTCJqP/IoMtW/Ys8at+H3qukhVeStewYElTMNUZakpKuQeb+iUo0P9iybmh
	Hs9mnVObH/bmUR2LdSbnnJyRMwKnhaHDhPVpz3fzxOXWeZL7axd2z3Ls41HQTNmNk1bOgz
	LjhlMiWbGp3T0PW0181dUuHtPAtXeuA=
Subject: Re: [PATCH v5 1/4] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_evtchn_fifo, ...
To: paul@xen.org
Cc: 'Paul Durrant' <pdurrant@amazon.com>,
 'Eslam Elnikety' <elnikety@amazon.com>, 'Ian Jackson' <iwj@xenproject.org>,
 'Wei Liu' <wl@xen.org>, 'Anthony PERARD' <anthony.perard@citrix.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Julien Grall' <julien@xen.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Christian Lindig' <christian.lindig@citrix.com>,
 'David Scott' <dave@recoil.org>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201203124159.3688-1-paul@xen.org>
 <20201203124159.3688-2-paul@xen.org>
 <fea91a65-1d7c-cd46-81a2-9a6bcb690ed1@suse.com>
 <00ee01d6c98b$507af1c0$f170d540$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8a4a2027-0df3-aee2-537a-3d2814b329ec@suse.com>
Date: Thu, 3 Dec 2020 16:56:47 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <00ee01d6c98b$507af1c0$f170d540$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 03.12.2020 16:45, Paul Durrant wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 03 December 2020 15:23
>>
>> On 03.12.2020 13:41, Paul Durrant wrote:
>>> From: Paul Durrant <pdurrant@amazon.com>
>>>
>>> ...to control the visibility of the FIFO event channel operations
>>> (EVTCHNOP_init_control, EVTCHNOP_expand_array, and EVTCHNOP_set_priority) to
>>> the guest.
>>>
>>> These operations were added to the public header in commit d2d50c2f308f
>>> ("evtchn: add FIFO-based event channel ABI") and the first implementation
>>> appeared in the two subsequent commits: edc8872aeb4a ("evtchn: implement
>>> EVTCHNOP_set_priority and add the set_priority hook") and 88910061ec61
>>> ("evtchn: add FIFO-based event channel hypercalls and port ops"). Prior to
>>> that, a guest issuing those operations would receive a return value of
>>> -ENOSYS (not implemented) from Xen. Guests aware of the FIFO operations but
>>> running on an older (pre-4.4) Xen would fall back to using the 2-level event
>>> channel interface upon seeing this return value.
>>>
>>> Unfortunately the uncontrollable appearance of these new operations in Xen 4.4
>>> onwards has implications for hibernation of some Linux guests. During resume
>>> from hibernation, there are two kernels involved: the "boot" kernel and the
>>> "resume" kernel. The guest boot kernel may default to use FIFO operations and
>>> instruct Xen via EVTCHNOP_init_control to switch from 2-level to FIFO. On the
>>> other hand, the resume kernel keeps assuming 2-level, because it was hibernated
>>> on a version of Xen that did not support the FIFO operations.
>>>
>>> To maintain compatibility it is necessary to make Xen behave as it did
>>> before the new operations were added and hence the code in this patch ensures
>>> that, if XEN_DOMCTL_CDF_evtchn_fifo is not set, the FIFO event channel
>>> operations will again result in -ENOSYS being returned to the guest.
>>
>> I have to admit I'm now even more concerned of the control for such
>> going into Xen, the more with the now 2nd use in the subsequent patch.
>> The implication of this would seem to be that whenever we add new
>> hypercalls or sub-ops, a domain creation control would also need
>> adding determining whether that new sub-op is actually okay to use by
>> a guest. Or else I'd be keen to up front see criteria at least roughly
>> outlined by which it could be established whether such an override
>> control is needed.
>>
> 
> Ultimately I think any new hypercall (or related set of hypercalls) added to the ABI needs to be opt-in on a per-domain basis, so that we know that from when a domain is first created it will not see a change in its environment unless the VM administrator wants that to happen.

A new hypercall appearing is a change to the guest's environment, yes,
but a backwards compatible one. I don't see how this would harm a guest.
This and ...

>> I'm also not convinced such controls really want to be opt-in rather
>> than opt-out.
> 
> They really need to be opt-in I think. From a cloud provider PoV it is important that nothing in a customer's environment changes unless we want it to. Otherwise we have no way to deploy an updated hypervisor version without risking crashing their VMs.

... this sound to me more like workarounds for buggy guests than
functionality the hypervisor _needs_ to have. (I can appreciate
the specific case here for the specific scenario you provide as
an exception.)

>>> --- a/xen/arch/arm/domain.c
>>> +++ b/xen/arch/arm/domain.c
>>> @@ -622,7 +622,8 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>>>      unsigned int max_vcpus;
>>>
>>>      /* HVM and HAP must be set. IOMMU may or may not be */
>>> -    if ( (config->flags & ~XEN_DOMCTL_CDF_iommu) !=
>>> +    if ( (config->flags &
>>> +          ~(XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_evtchn_fifo) !=
>>>           (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap) )
>>>      {
>>>          dprintk(XENLOG_INFO, "Unsupported configuration %#x\n",
>>> --- a/xen/arch/arm/domain_build.c
>>> +++ b/xen/arch/arm/domain_build.c
>>> @@ -2478,7 +2478,8 @@ void __init create_domUs(void)
>>>          struct domain *d;
>>>          struct xen_domctl_createdomain d_cfg = {
>>>              .arch.gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE,
>>> -            .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap,
>>> +            .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
>>> +                     XEN_DOMCTL_CDF_evtchn_fifo,
>>>              .max_evtchn_port = -1,
>>>              .max_grant_frames = 64,
>>>              .max_maptrack_frames = 1024,
>>> --- a/xen/arch/arm/setup.c
>>> +++ b/xen/arch/arm/setup.c
>>> @@ -805,7 +805,8 @@ void __init start_xen(unsigned long boot_phys_offset,
>>>      struct bootmodule *xen_bootmodule;
>>>      struct domain *dom0;
>>>      struct xen_domctl_createdomain dom0_cfg = {
>>> -        .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap,
>>> +        .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
>>> +                 XEN_DOMCTL_CDF_evtchn_fifo,
>>>          .max_evtchn_port = -1,
>>>          .max_grant_frames = gnttab_dom0_frames(),
>>>          .max_maptrack_frames = -1,
>>> --- a/xen/arch/x86/setup.c
>>> +++ b/xen/arch/x86/setup.c
>>> @@ -738,7 +738,8 @@ static struct domain *__init create_dom0(const module_t *image,
>>>                                           const char *loader)
>>>  {
>>>      struct xen_domctl_createdomain dom0_cfg = {
>>> -        .flags = IS_ENABLED(CONFIG_TBOOT) ? XEN_DOMCTL_CDF_s3_integrity : 0,
>>> +        .flags = XEN_DOMCTL_CDF_evtchn_fifo |
>>> +                 (IS_ENABLED(CONFIG_TBOOT) ? XEN_DOMCTL_CDF_s3_integrity : 0),
>>>          .max_evtchn_port = -1,
>>>          .max_grant_frames = -1,
>>>          .max_maptrack_frames = -1,
>>> --- a/xen/common/domain.c
>>> +++ b/xen/common/domain.c
>>> @@ -307,7 +307,7 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
>>>           ~(XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
>>>             XEN_DOMCTL_CDF_s3_integrity | XEN_DOMCTL_CDF_oos_off |
>>>             XEN_DOMCTL_CDF_xs_domain | XEN_DOMCTL_CDF_iommu |
>>> -           XEN_DOMCTL_CDF_nested_virt) )
>>> +           XEN_DOMCTL_CDF_nested_virt | XEN_DOMCTL_CDF_evtchn_fifo) )
>>>      {
>>>          dprintk(XENLOG_INFO, "Unknown CDF flags %#x\n", config->flags);
>>>          return -EINVAL;
>>
>> All of the hunks above point out a scalability issue if we were to
>> follow this route for even just a fair part of new sub-ops, and I
>> suppose you've noticed this with the next patch presumably touching
>> all the same places again.
> 
> Indeed. This solution works for now but is probably not what we want
> in the long run. Same goes for the current way we control viridian
> features via an HVM param. It is good enough for now IMO, since domctl
> is not a stable interface. Any ideas about how we might implement a
> better interface in the longer term?

While it has other downsides, Jürgen's proposal doesn't have any
similar scalability issue afaics. Another possible model would
seem to be to key new hypercalls to hypervisor CPUID leaf bits,
and derive their availability from a guest's CPUID policy. Of
course this won't work when needing to retrofit guarding like
you want to do here.
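For illustration, the pattern these hunks extend can be sketched in miniature. The constants and the sanitise helper below are hypothetical stand-ins (the bit values are not Xen's real XEN_DOMCTL_CDF_* assignments); the point is that every new opt-in feature widens the known-flags mask *and* every default-flags initialiser (dom0, dom0less domUs, ...), which is the scalability concern raised above:

```c
#include <stdint.h>

/* Hypothetical stand-ins for XEN_DOMCTL_CDF_* bits; the values are for
 * demonstration only, not Xen's actual definitions. */
#define CDF_hvm         (1u << 0)
#define CDF_hap         (1u << 1)
#define CDF_iommu       (1u << 2)
#define CDF_evtchn_fifo (1u << 3)

/* Every flag the hypervisor recognises.  Each new per-feature flag has
 * to be added here and to each default-flags initialiser, so the number
 * of touched sites grows with every feature. */
#define CDF_ALL (CDF_hvm | CDF_hap | CDF_iommu | CDF_evtchn_fifo)

/* Reject configurations that set bits the hypervisor doesn't know. */
int sanitise_flags(uint32_t flags)
{
    return (flags & ~CDF_ALL) ? -22 /* -EINVAL */ : 0;
}
```

A model that keys availability off a single per-domain policy object would avoid touching each of these sites for every new feature bit.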

Jan


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 16:00:17 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157166-mainreport@xen.org>
Subject: [xen-unstable test] 157166: tolerable FAIL - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 03 Dec 2020 16:00:14 +0000

flight 157166 xen-unstable real [real]
flight 157180 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157166/
http://logs.test-lab.xenproject.org/osstest/logs/157180/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 157180-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10       fail  like 157147
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157147
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157147
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157147
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157147
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157147
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157147
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157147
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157147
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157147
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157147
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157147
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  cabf60fc32d4cfa1d74a2bdfcdb294a31da5d68e
baseline version:
 xen                  3ae469af8e680df31eecd0a2ac6a83b58ad7ce53

Last test of basis   157147  2020-12-02 01:52:26 Z    1 days
Testing same since   157166  2020-12-02 23:08:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dario Faggioli <dfaggioli@suse.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Rahul Singh <rahul.singh@arm.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3ae469af8e..cabf60fc32  cabf60fc32d4cfa1d74a2bdfcdb294a31da5d68e -> master


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 16:31:10 2020
References: <20201113235242.k6fzlwmwm2xqhqsi@tomti.i.net-space.pl>
In-Reply-To: <20201113235242.k6fzlwmwm2xqhqsi@tomti.i.net-space.pl>
From: Andy Shevchenko <andy.shevchenko@gmail.com>
Date: Thu, 3 Dec 2020 18:31:49 +0200
Message-ID: <CAHp75Ve4jSBXfeyMQHn1=T21Dkf4q4DF7DWPTc2U_QO79Pn_TQ@mail.gmail.com>
Subject: Re: [SPECIFICATION RFC] The firmware and bootloader log specification
To: Daniel Kiper <daniel.kiper@oracle.com>
Cc: coreboot@coreboot.org, grub-devel@gnu.org, 
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>, systemd-devel@lists.freedesktop.org, 
	trenchboot-devel@googlegroups.com, U-Boot Mailing List <u-boot@lists.denx.de>, 
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, xen-devel@lists.xenproject.org, alecb@umass.edu, 
	alexander.burmashev@oracle.com, allen.cryptic@gmail.com, 
	andrew.cooper3@citrix.com, Ard Biesheuvel <ard.biesheuvel@linaro.org>, btrotter@gmail.com, 
	dpsmith@apertussolutions.com, eric.devolder@oracle.com, 
	eric.snowberg@oracle.com, "H. Peter Anvin" <hpa@zytor.com>, hun@n-dimensional.de, 
	Javier Martinez Canillas <javierm@redhat.com>, joao.m.martins@oracle.com, kanth.ghatraju@oracle.com, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, krystian.hebel@3mdeb.com, 
	Leif Lindholm <leif@nuviainc.com>, lukasz.hawrylko@intel.com, 
	Andy Lutomirski <luto@amacapital.net>, michal.zygowski@3mdeb.com, 
	Matthew Garrett <mjg59@google.com>, mtottenh@akamai.com, phcoder@gmail.com, 
	piotr.krol@3mdeb.com, Peter Jones <pjones@redhat.com>, 
	Paul Menzel <pmenzel@molgen.mpg.de>, roger.pau@citrix.com, ross.philipson@oracle.com, 
	tyhicks@linux.microsoft.com

On Sat, Nov 14, 2020 at 2:01 AM Daniel Kiper <daniel.kiper@oracle.com> wrote:

...

> The log specification should be as platform-agnostic and self-contained
> as possible. The final version of this spec should be merged into
> existing specifications, e.g. UEFI, ACPI, Multiboot2, or be a standalone
> spec, e.g. as a part of the OASIS Standards. The former seems better but
> is not perfect either...

With all respect... https://xkcd.com/927/


-- 
With Best Regards,
Andy Shevchenko


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 17:05:21 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157178-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157178: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=484e869dfd3b7e4b6c47cb65ae5d5f499fcc056e
X-Osstest-Versions-That:
    ovmf=7c4ab1c2ef60a4690177d2361f8dd44d7d7df7f8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 03 Dec 2020 17:05:01 +0000

flight 157178 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157178/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 484e869dfd3b7e4b6c47cb65ae5d5f499fcc056e
baseline version:
 ovmf                 7c4ab1c2ef60a4690177d2361f8dd44d7d7df7f8

Last test of basis   157167  2020-12-02 23:40:53 Z    0 days
Testing same since   157178  2020-12-03 12:29:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Ray Ni <ray.ni@intel.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   7c4ab1c2ef..484e869dfd  484e869dfd3b7e4b6c47cb65ae5d5f499fcc056e -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 17:07:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 17:07:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43883.78797 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kks52-0005QW-0X; Thu, 03 Dec 2020 17:07:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43883.78797; Thu, 03 Dec 2020 17:07:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kks51-0005QP-TL; Thu, 03 Dec 2020 17:07:43 +0000
Received: by outflank-mailman (input) for mailman id 43883;
 Thu, 03 Dec 2020 17:07:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oiWT=FH=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kks51-0005QK-58
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 17:07:43 +0000
Received: from mail-wm1-x344.google.com (unknown [2a00:1450:4864:20::344])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 58928c02-8db5-47f2-b985-4cf0a7ec52bd;
 Thu, 03 Dec 2020 17:07:41 +0000 (UTC)
Received: by mail-wm1-x344.google.com with SMTP id h21so4675786wmb.2
 for <xen-devel@lists.xenproject.org>; Thu, 03 Dec 2020 09:07:41 -0800 (PST)
Received: from CBGR90WXYV0 (host86-183-162-145.range86-183.btcentralplus.com.
 [86.183.162.145])
 by smtp.gmail.com with ESMTPSA id v12sm126244wrt.4.2020.12.03.09.07.39
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 03 Dec 2020 09:07:39 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58928c02-8db5-47f2-b985-4cf0a7ec52bd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=XjsLZyijg53AIF2WdwTwUGl3TQ8Jq0C865tkp8CATYc=;
        b=Ja9Z/mrv+3gRadIzKaT1A7w4roSSutxOJ0cFAsvhp32grnB0A1ynjzDX60oQMxRdKD
         UVQH4OmAJ10K0yCvSZ8wW0GTx7fsrTsd2QNSb1vgxB/MUw9k8pfpSh6nyUC80XNZ7mOx
         HCoDp5i4jVCsY/5+XSVzeLmDz1+XNTTueVqXGkBz+O6pBGV/cdRPDNtdvjHsoQQ+cJdW
         j4Fvha9te51zqo0nDlhJ+SjsqoguZxTwvfZP/iM3FKFMQHvXvCrlAPKmZuBrg+vuAESE
         iJ9Uc42opEWks9NfqIAJTnJXXMdhbvfXWtr1f/r9b1ALiDZjAz0cnSTThiC07UY4CSkc
         5Wxg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=XjsLZyijg53AIF2WdwTwUGl3TQ8Jq0C865tkp8CATYc=;
        b=C0ymlxY8WIOn7BL9qvbjsNL6GvrtpY0quHT2pjEzVgvbziy0S5vWUYLkN6KDl7FgwF
         DD52xdTIUZkHqgBarv3KvIOs0gdUSNJBDQbi3063grrwsfs2p7G6rk8morxsk3nfEjQX
         ufbNkHKdPgT67Lo32FimJl7eNVHPRRZL8UuUXv7Yc2SYsl7JHs64+LItjmmCnMStAhs3
         Hzl8hnAketYdF3j7iqZwPr9unUHL06t/Zjq8oC5lQZRA608SrXfs5B1haKS+CvT+chUc
         /TfClxE/PDLRV8wESfQTKmZouZAisc56QLmAuTzk7jwwpbgGAsVCH5Se7bQtKbqO06SN
         wtZA==
X-Gm-Message-State: AOAM5328QYlfeSn0tyruM5hRPTC9xr8BpReGZaAroi2j6lKgaxlRyN25
	t3YSDhyxhTltDPn6n6lnpbg=
X-Google-Smtp-Source: ABdhPJyMhSlKcRWvU4lJ4odLL/6qvd1arhOdHixsDjA4gel2YyLD0tZ3AKOBDzo1QVkxkGsuD4bYJQ==
X-Received: by 2002:a05:600c:2106:: with SMTP id u6mr136490wml.4.1607015260294;
        Thu, 03 Dec 2020 09:07:40 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
Cc: "'Paul Durrant'" <pdurrant@amazon.com>,
	"'Eslam Elnikety'" <elnikety@amazon.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Wei Liu'" <wl@xen.org>,
	"'Anthony PERARD'" <anthony.perard@citrix.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Christian Lindig'" <christian.lindig@citrix.com>,
	"'David Scott'" <dave@recoil.org>,
	"'Volodymyr Babchuk'" <Volodymyr_Babchuk@epam.com>,
	=?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	<xen-devel@lists.xenproject.org>
References: <20201203124159.3688-1-paul@xen.org> <20201203124159.3688-2-paul@xen.org> <fea91a65-1d7c-cd46-81a2-9a6bcb690ed1@suse.com> <00ee01d6c98b$507af1c0$f170d540$@xen.org> <8a4a2027-0df3-aee2-537a-3d2814b329ec@suse.com>
In-Reply-To: <8a4a2027-0df3-aee2-537a-3d2814b329ec@suse.com>
Subject: RE: [PATCH v5 1/4] domctl: introduce a new domain create flag, XEN_DOMCTL_CDF_evtchn_fifo, ...
Date: Thu, 3 Dec 2020 17:07:38 -0000
Message-ID: <00f601d6c996$ce3908d0$6aab1a70$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQJzrCHAX/gquRkO4g65wfjgQBytUAJ7xs67Aj6rtzQCSqguWAI8ixj/qGHFtuA=

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 03 December 2020 15:57
> To: paul@xen.org
> Cc: 'Paul Durrant' <pdurrant@amazon.com>; 'Eslam Elnikety' <elnikety@amazon.com>; 'Ian Jackson'
> <iwj@xenproject.org>; 'Wei Liu' <wl@xen.org>; 'Anthony PERARD' <anthony.perard@citrix.com>; 'Andrew
> Cooper' <andrew.cooper3@citrix.com>; 'George Dunlap' <george.dunlap@citrix.com>; 'Julien Grall'
> <julien@xen.org>; 'Stefano Stabellini' <sstabellini@kernel.org>; 'Christian Lindig'
> <christian.lindig@citrix.com>; 'David Scott' <dave@recoil.org>; 'Volodymyr Babchuk'
> <Volodymyr_Babchuk@epam.com>; 'Roger Pau Monné' <roger.pau@citrix.com>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH v5 1/4] domctl: introduce a new domain create flag, XEN_DOMCTL_CDF_evtchn_fifo,
> ...
> 
> On 03.12.2020 16:45, Paul Durrant wrote:
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: 03 December 2020 15:23
> >>
> >> On 03.12.2020 13:41, Paul Durrant wrote:
> >>> From: Paul Durrant <pdurrant@amazon.com>
> >>>
> >>> ...to control the visibility of the FIFO event channel operations
> >>> (EVTCHNOP_init_control, EVTCHNOP_expand_array, and EVTCHNOP_set_priority) to
> >>> the guest.
> >>>
> >>> These operations were added to the public header in commit d2d50c2f308f
> >>> ("evtchn: add FIFO-based event channel ABI") and the first implementation
> >>> appeared in the two subsequent commits: edc8872aeb4a ("evtchn: implement
> >>> EVTCHNOP_set_priority and add the set_priority hook") and 88910061ec61
> >>> ("evtchn: add FIFO-based event channel hypercalls and port ops"). Prior to
> >>> that, a guest issuing those operations would receive a return value of
> >>> -ENOSYS (not implemented) from Xen. Guests aware of the FIFO operations but
> >>> running on an older (pre-4.4) Xen would fall back to using the 2-level event
> >>> channel interface upon seeing this return value.
> >>>
> >>> Unfortunately the uncontrollable appearance of these new operations in Xen 4.4
> >>> onwards has implications for hibernation of some Linux guests. During resume
> >>> from hibernation, there are two kernels involved: the "boot" kernel and the
> >>> "resume" kernel. The guest boot kernel may default to use FIFO operations and
> >>> instruct Xen via EVTCHNOP_init_control to switch from 2-level to FIFO. On the
> >>> other hand, the resume kernel keeps assuming 2-level, because it was hibernated
> >>> on a version of Xen that did not support the FIFO operations.
> >>>
> >>> To maintain compatibility it is necessary to make Xen behave as it did
> >>> before the new operations were added and hence the code in this patch ensures
> >>> that, if XEN_DOMCTL_CDF_evtchn_fifo is not set, the FIFO event channel
> >>> operations will again result in -ENOSYS being returned to the guest.
> >>
> >> I have to admit I'm now even more concerned of the control for such
> >> going into Xen, the more with the now 2nd use in the subsequent patch.
> >> The implication of this would seem to be that whenever we add new
> >> hypercalls or sub-ops, a domain creation control would also need
> >> adding determining whether that new sub-op is actually okay to use by
> >> a guest. Or else I'd be keen to up front see criteria at least roughly
> >> outlined by which it could be established whether such an override
> >> control is needed.
> >>
> >
> > Ultimately I think any new hypercall (or related set of hypercalls) added to the ABI needs to be
> opt-in on a per-domain basis, so that we know that from when a domain is first created it will not see
> a change in its environment unless the VM administrator wants that to happen.
> 
> A new hypercall appearing is a change to the guest's environment, yes,
> but a backwards compatible one. I don't see how this would harm a guest.

Say we have a guest which is aware of the newer Xen and is coded to use
the new hypercall *but* we start it on an older Xen where the new
hypercall is not implemented. *Then* we migrate it to the newer Xen...
the new hypercall suddenly becomes available and changes the guest
behaviour. In the worst case, the guest is sufficiently confused that it
crashes.

> This and ...
> 
> >> I'm also not convinced such controls really want to be opt-in rather
> >> than opt-out.
> >
> > They really need to be opt-in I think. From a cloud provider PoV it is important that nothing in a
> customer's environment changes unless we want it to. Otherwise we have no way to deploy an updated
> hypervisor version without risking crashing their VMs.
> 
> ... this sound to me more like workarounds for buggy guests than
> functionality the hypervisor _needs_ to have. (I can appreciate
> the specific case here for the specific scenario you provide as
> an exception.)

If we want to have a hypervisor that can be used in a cloud environment
then Xen absolutely needs this capability.

> 
> >>> --- a/xen/arch/arm/domain.c
> >>> +++ b/xen/arch/arm/domain.c
> >>> @@ -622,7 +622,8 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
> >>>      unsigned int max_vcpus;
> >>>
> >>>      /* HVM and HAP must be set. IOMMU may or may not be */
> >>> -    if ( (config->flags & ~XEN_DOMCTL_CDF_iommu) !=
> >>> +    if ( (config->flags &
> >>> +          ~(XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_evtchn_fifo) !=
> >>>           (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap) )
> >>>      {
> >>>          dprintk(XENLOG_INFO, "Unsupported configuration %#x\n",
> >>> --- a/xen/arch/arm/domain_build.c
> >>> +++ b/xen/arch/arm/domain_build.c
> >>> @@ -2478,7 +2478,8 @@ void __init create_domUs(void)
> >>>          struct domain *d;
> >>>          struct xen_domctl_createdomain d_cfg = {
> >>>              .arch.gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE,
> >>> -            .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap,
> >>> +            .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
> >>> +                     XEN_DOMCTL_CDF_evtchn_fifo,
> >>>              .max_evtchn_port = -1,
> >>>              .max_grant_frames = 64,
> >>>              .max_maptrack_frames = 1024,
> >>> --- a/xen/arch/arm/setup.c
> >>> +++ b/xen/arch/arm/setup.c
> >>> @@ -805,7 +805,8 @@ void __init start_xen(unsigned long boot_phys_offset,
> >>>      struct bootmodule *xen_bootmodule;
> >>>      struct domain *dom0;
> >>>      struct xen_domctl_createdomain dom0_cfg = {
> >>> -        .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap,
> >>> +        .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
> >>> +                 XEN_DOMCTL_CDF_evtchn_fifo,
> >>>          .max_evtchn_port = -1,
> >>>          .max_grant_frames = gnttab_dom0_frames(),
> >>>          .max_maptrack_frames = -1,
> >>> --- a/xen/arch/x86/setup.c
> >>> +++ b/xen/arch/x86/setup.c
> >>> @@ -738,7 +738,8 @@ static struct domain *__init create_dom0(const module_t *image,
> >>>                                           const char *loader)
> >>>  {
> >>>      struct xen_domctl_createdomain dom0_cfg = {
> >>> -        .flags = IS_ENABLED(CONFIG_TBOOT) ? XEN_DOMCTL_CDF_s3_integrity : 0,
> >>> +        .flags = XEN_DOMCTL_CDF_evtchn_fifo |
> >>> +                 (IS_ENABLED(CONFIG_TBOOT) ? XEN_DOMCTL_CDF_s3_integrity : 0),
> >>>          .max_evtchn_port = -1,
> >>>          .max_grant_frames = -1,
> >>>          .max_maptrack_frames = -1,
> >>> --- a/xen/common/domain.c
> >>> +++ b/xen/common/domain.c
> >>> @@ -307,7 +307,7 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
> >>>           ~(XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
> >>>             XEN_DOMCTL_CDF_s3_integrity | XEN_DOMCTL_CDF_oos_off |
> >>>             XEN_DOMCTL_CDF_xs_domain | XEN_DOMCTL_CDF_iommu |
> >>> -           XEN_DOMCTL_CDF_nested_virt) )
> >>> +           XEN_DOMCTL_CDF_nested_virt | XEN_DOMCTL_CDF_evtchn_fifo) )
> >>>      {
> >>>          dprintk(XENLOG_INFO, "Unknown CDF flags %#x\n", config->flags);
> >>>          return -EINVAL;
> >>
> >> All of the hunks above point out a scalability issue if we were to
> >> follow this route for even just a fair part of new sub-ops, and I
> >> suppose you've noticed this with the next patch presumably touching
> >> all the same places again.
> >
> > Indeed. This solution works for now but is probably not what we want in the long run. Same goes for
> the current way we control viridian features via an HVM param. It is good enough for now IMO since
> domctl is not a stable interface. Any ideas about how we might implement a better interface in the
> longer term?
> 
> While it has other downsides, Jürgen's proposal doesn't have any
> similar scalability issue afaics. Another possible model would
> seem to be to key new hypercalls to hypervisor CPUID leaf bits,
> and derive their availability from a guest's CPUID policy. Of
> course this won't work when needing to retrofit guarding like
> you want to do here.
> 

Ok, I'll take a look at hypfs as an immediate solution, if that's
preferred.

  Paul



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 17:19:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 17:19:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43891.78809 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kksG2-0006Yk-2w; Thu, 03 Dec 2020 17:19:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43891.78809; Thu, 03 Dec 2020 17:19:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kksG1-0006Yd-W6; Thu, 03 Dec 2020 17:19:05 +0000
Received: by outflank-mailman (input) for mailman id 43891;
 Thu, 03 Dec 2020 17:19:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yflw=FH=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kksG0-0006YY-Og
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 17:19:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 68de2640-3324-4317-ba6b-1addd83ad69c;
 Thu, 03 Dec 2020 17:19:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E1C42AC2E;
 Thu,  3 Dec 2020 17:19:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68de2640-3324-4317-ba6b-1addd83ad69c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607015942; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=KvLxnjWl8wJ1H2F26HsBTTSU4FyGjQtfNIv6PlKMQ2Q=;
	b=o3KGe2RV9/uS86SuBhHSEJAuf5hgG3KEuOfjl3lHAzI9O0DYppkDMa9/QLBfv15uu/dwsS
	C4iPymSz1bMaK/63jTzukqRQcad19On1tZNFWP4wZ+442O/VaSnGeSM/3l9iqZEwsJlN1l
	uTLZZ8RkhK+A1q8FiF0S8oBct+kFfpU=
Subject: Re: [PATCH v5 1/4] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_evtchn_fifo, ...
To: paul@xen.org, 'Jan Beulich' <jbeulich@suse.com>
Cc: 'Paul Durrant' <pdurrant@amazon.com>,
 'Eslam Elnikety' <elnikety@amazon.com>, 'Ian Jackson' <iwj@xenproject.org>,
 'Wei Liu' <wl@xen.org>, 'Anthony PERARD' <anthony.perard@citrix.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Julien Grall' <julien@xen.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Christian Lindig' <christian.lindig@citrix.com>,
 'David Scott' <dave@recoil.org>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201203124159.3688-1-paul@xen.org>
 <20201203124159.3688-2-paul@xen.org>
 <fea91a65-1d7c-cd46-81a2-9a6bcb690ed1@suse.com>
 <00ee01d6c98b$507af1c0$f170d540$@xen.org>
 <8a4a2027-0df3-aee2-537a-3d2814b329ec@suse.com>
 <00f601d6c996$ce3908d0$6aab1a70$@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <d72cde2c-24ac-e1f5-b1e3-b746faf41406@suse.com>
Date: Thu, 3 Dec 2020 18:19:00 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <00f601d6c996$ce3908d0$6aab1a70$@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="zE5Jb3d3ChYCwnXO9sfA3nSVRjfjCWzsz"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--zE5Jb3d3ChYCwnXO9sfA3nSVRjfjCWzsz
Content-Type: multipart/mixed; boundary="4fjRtZhUgaxajFxlJUeChy7UzYNRYgTyq";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: paul@xen.org, 'Jan Beulich' <jbeulich@suse.com>
Cc: 'Paul Durrant' <pdurrant@amazon.com>,
 'Eslam Elnikety' <elnikety@amazon.com>, 'Ian Jackson' <iwj@xenproject.org>,
 'Wei Liu' <wl@xen.org>, 'Anthony PERARD' <anthony.perard@citrix.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Julien Grall' <julien@xen.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Christian Lindig' <christian.lindig@citrix.com>,
 'David Scott' <dave@recoil.org>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
Message-ID: <d72cde2c-24ac-e1f5-b1e3-b746faf41406@suse.com>
Subject: Re: [PATCH v5 1/4] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_evtchn_fifo, ...
References: <20201203124159.3688-1-paul@xen.org>
 <20201203124159.3688-2-paul@xen.org>
 <fea91a65-1d7c-cd46-81a2-9a6bcb690ed1@suse.com>
 <00ee01d6c98b$507af1c0$f170d540$@xen.org>
 <8a4a2027-0df3-aee2-537a-3d2814b329ec@suse.com>
 <00f601d6c996$ce3908d0$6aab1a70$@xen.org>
In-Reply-To: <00f601d6c996$ce3908d0$6aab1a70$@xen.org>

--4fjRtZhUgaxajFxlJUeChy7UzYNRYgTyq
Content-Type: multipart/mixed;
 boundary="------------B8483D927153A65F09AC8FF2"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------B8483D927153A65F09AC8FF2
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 03.12.20 18:07, Paul Durrant wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 03 December 2020 15:57
>> To: paul@xen.org
>> Cc: 'Paul Durrant' <pdurrant@amazon.com>; 'Eslam Elnikety' <elnikety@a=
mazon.com>; 'Ian Jackson'
>> <iwj@xenproject.org>; 'Wei Liu' <wl@xen.org>; 'Anthony PERARD' <anthon=
y.perard@citrix.com>; 'Andrew
>> Cooper' <andrew.cooper3@citrix.com>; 'George Dunlap' <george.dunlap@ci=
trix.com>; 'Julien Grall'
>> <julien@xen.org>; 'Stefano Stabellini' <sstabellini@kernel.org>; 'Chri=
stian Lindig'
>> <christian.lindig@citrix.com>; 'David Scott' <dave@recoil.org>; 'Volod=
ymyr Babchuk'
>> <Volodymyr_Babchuk@epam.com>; 'Roger Pau Monn=C3=A9' <roger.pau@citrix=
=2Ecom>; xen-devel@lists.xenproject.org
>> Subject: Re: [PATCH v5 1/4] domctl: introduce a new domain create flag=
, XEN_DOMCTL_CDF_evtchn_fifo,
>> ...
>>
>> On 03.12.2020 16:45, Paul Durrant wrote:
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>> Sent: 03 December 2020 15:23
>>>>
>>>> On 03.12.2020 13:41, Paul Durrant wrote:
>>>>> From: Paul Durrant <pdurrant@amazon.com>
>>>>>
>>>>> ...to control the visibility of the FIFO event channel operations
>>>>> (EVTCHNOP_init_control, EVTCHNOP_expand_array, and EVTCHNOP_set_pri=
ority) to
>>>>> the guest.
>>>>>
>>>>> These operations were added to the public header in commit d2d50c2f=
308f
>>>>> ("evtchn: add FIFO-based event channel ABI") and the first implemen=
tation
>>>>> appeared in the two subsequent commits: edc8872aeb4a ("evtchn: impl=
ement
>>>>> EVTCHNOP_set_priority and add the set_priority hook") and 88910061e=
c61
>>>>> ("evtchn: add FIFO-based event channel hypercalls and port ops"). P=
rior to
>>>>> that, a guest issuing those operations would receive a return value=
 of
>>>>> -ENOSYS (not implemented) from Xen. Guests aware of the FIFO operat=
ions but
>>>>> running on an older (pre-4.4) Xen would fall back to using the 2-le=
vel event
>>>>> channel interface upon seeing this return value.
>>>>>
>>>>> Unfortunately the uncontrolable appearance of these new operations =
in Xen 4.4
>>>>> onwards has implications for hibernation of some Linux guests. Duri=
ng resume
>>>>> from hibernation, there are two kernels involved: the "boot" kernel=
 and the
>>>>> "resume" kernel. The guest boot kernel may default to use FIFO oper=
ations and
>>>>> instruct Xen via EVTCHNOP_init_control to switch from 2-level to FI=
FO. On the
>>>>> other hand, the resume kernel keeps assuming 2-level, because it wa=
s hibernated
>>>>> on a version of Xen that did not support the FIFO operations.
>>>>>
>>>>> To maintain compatibility it is necessary to make Xen behave as it =
did
>>>>> before the new operations were added and hence the code in this pat=
ch ensures
>>>>> that, if XEN_DOMCTL_CDF_evtchn_fifo is not set, the FIFO event chan=
nel
>>>>> operations will again result in -ENOSYS being returned to the guest=
=2E
>>>>
>>>> I have to admit I'm now even more concerned of the control for such
>>>> going into Xen, the more with the now 2nd use in the subsequent patc=
h.
>>>> The implication of this would seem to be that whenever we add new
>>>> hypercalls or sub-ops, a domain creation control would also need
>>>> adding determining whether that new sub-op is actually okay to use b=
y
>>>> a guest. Or else I'd be keen to up front see criteria at least rough=
ly
>>>> outlined by which it could be established whether such an override
>>>> control is needed.
>>>>
>>>
>>> Ultimately I think any new hypercall (or related set of hypercalls) a=
dded to the ABI needs to be
>> opt-in on a per-domain basis, so that we know that from when a domain =
is first created it will not see
>> a change in its environment unless the VM administrator wants that to =
happen.
>>
>> A new hypercall appearing is a change to the guest's environment, yes,=

>> but a backwards compatible one. I don't see how this would harm a gues=
t.
>=20
> Say we have a guest which is aware of the newer Xen and is coded to use=
 the new hypercall *but* we start it on an older Xen where the new hyperc=
all is not implemented. *Then* we migrate it to the newer Xen... the new =
hypercall suddenly becomes available and changes the guest behaviour. In =
the worst case, the guest is sufficiently confused that it crashes.
>=20
>> This and ...
>>
>>>> I'm also not convinced such controls really want to be opt-in rather=

>>>> than opt-out.
>>>
>>> They really need to be opt-in I think. From a cloud provider PoV it i=
s important that nothing in a
>> customer's environment changes unless we want it to. Otherwise we have=
 no way to deploy an updated
>> hypervisor version without risking crashing their VMs.
>>
>> ... this sound to me more like workarounds for buggy guests than
>> functionality the hypervisor _needs_ to have. (I can appreciate
>> the specific case here for the specific scenario you provide as
>> an exception.)
>=20
> If we want to have a hypervisor that can be used in a cloud environment=
 then Xen absolutely needs this capability.
>=20
>>
>>>>> --- a/xen/arch/arm/domain.c
>>>>> +++ b/xen/arch/arm/domain.c
>>>>> @@ -622,7 +622,8 @@ int arch_sanitise_domain_config(struct xen_domc=
tl_createdomain *config)
>>>>>       unsigned int max_vcpus;
>>>>>
>>>>>       /* HVM and HAP must be set. IOMMU may or may not be */
>>>>> -    if ( (config->flags & ~XEN_DOMCTL_CDF_iommu) !=3D
>>>>> +    if ( (config->flags &
>>>>> +          ~(XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_evtchn_fifo) !=3D=

>>>>>            (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap) )
>>>>>       {
>>>>>           dprintk(XENLOG_INFO, "Unsupported configuration %#x\n",
>>>>> --- a/xen/arch/arm/domain_build.c
>>>>> +++ b/xen/arch/arm/domain_build.c
>>>>> @@ -2478,7 +2478,8 @@ void __init create_domUs(void)
>>>>>           struct domain *d;
>>>>>           struct xen_domctl_createdomain d_cfg =3D {
>>>>>               .arch.gic_version =3D XEN_DOMCTL_CONFIG_GIC_NATIVE,
>>>>> -            .flags =3D XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap,
>>>>> +            .flags =3D XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
>>>>> +                     XEN_DOMCTL_CDF_evtchn_fifo,
>>>>>               .max_evtchn_port =3D -1,
>>>>>               .max_grant_frames =3D 64,
>>>>>               .max_maptrack_frames =3D 1024,
>>>>> --- a/xen/arch/arm/setup.c
>>>>> +++ b/xen/arch/arm/setup.c
>>>>> @@ -805,7 +805,8 @@ void __init start_xen(unsigned long boot_phys_o=
ffset,
>>>>>       struct bootmodule *xen_bootmodule;
>>>>>       struct domain *dom0;
>>>>>       struct xen_domctl_createdomain dom0_cfg =3D {
>>>>> -        .flags =3D XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap,
>>>>> +        .flags =3D XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
>>>>> +                 XEN_DOMCTL_CDF_evtchn_fifo,
>>>>>           .max_evtchn_port =3D -1,
>>>>>           .max_grant_frames =3D gnttab_dom0_frames(),
>>>>>           .max_maptrack_frames =3D -1,
>>>>> --- a/xen/arch/x86/setup.c
>>>>> +++ b/xen/arch/x86/setup.c
>>>>> @@ -738,7 +738,8 @@ static struct domain *__init create_dom0(const =
module_t *image,
>>>>>                                            const char *loader)
>>>>>   {
>>>>>       struct xen_domctl_createdomain dom0_cfg =3D {
>>>>> -        .flags =3D IS_ENABLED(CONFIG_TBOOT) ? XEN_DOMCTL_CDF_s3_in=
tegrity : 0,
>>>>> +        .flags =3D XEN_DOMCTL_CDF_evtchn_fifo |
>>>>> +                 (IS_ENABLED(CONFIG_TBOOT) ? XEN_DOMCTL_CDF_s3_int=
egrity : 0),
>>>>>           .max_evtchn_port =3D -1,
>>>>>           .max_grant_frames =3D -1,
>>>>>           .max_maptrack_frames =3D -1,
>>>>> --- a/xen/common/domain.c
>>>>> +++ b/xen/common/domain.c
>>>>> @@ -307,7 +307,7 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
>>>>>            ~(XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
>>>>>              XEN_DOMCTL_CDF_s3_integrity | XEN_DOMCTL_CDF_oos_off |
>>>>>              XEN_DOMCTL_CDF_xs_domain | XEN_DOMCTL_CDF_iommu |
>>>>> -           XEN_DOMCTL_CDF_nested_virt) )
>>>>> +           XEN_DOMCTL_CDF_nested_virt | XEN_DOMCTL_CDF_evtchn_fifo) )
>>>>>       {
>>>>>           dprintk(XENLOG_INFO, "Unknown CDF flags %#x\n", config->flags);
>>>>>           return -EINVAL;
>>>>>           return -EINVAL;
>>>>
>>>> All of the hunks above point out a scalability issue if we were to
>>>> follow this route for even just a fair part of new sub-ops, and I
>>>> suppose you've noticed this with the next patch presumably touching
>>>> all the same places again.
>>>
>>> Indeed. This solution works for now but is probably not what we want in the long run. Same goes for
>> the current way we control viridian features via an HVM param. It is good enough for now IMO since
>> domctl is not a stable interface. Any ideas about how we might implement a better interface in the
>> longer term?
>>
>> While it has other downsides, Jürgen's proposal doesn't have any
>> similar scalability issue afaics. Another possible model would
>> seem to be to key new hypercalls to hypervisor CPUID leaf bits,
>> and derive their availability from a guest's CPUID policy. Of
>> course this won't work when needing to retrofit guarding like
>> you want to do here.
>>
>
> Ok, I'll take a look at hypfs as an immediate solution, if that's preferred.

Paul, if you'd like me to add a few patches to my series doing the basic
framework (per-domain nodes, interfaces for setting struct domain fields
via hypfs), I can do that. You could then concentrate on the tools side.


Juergen


--4fjRtZhUgaxajFxlJUeChy7UzYNRYgTyq--



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 18:44:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 18:44:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43919.78822 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kktaa-0006xc-H3; Thu, 03 Dec 2020 18:44:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43919.78822; Thu, 03 Dec 2020 18:44:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kktaa-0006xV-Dw; Thu, 03 Dec 2020 18:44:24 +0000
Received: by outflank-mailman (input) for mailman id 43919;
 Thu, 03 Dec 2020 18:44:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oiWT=FH=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kktaZ-0006xP-6a
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 18:44:23 +0000
Received: from mail-wr1-x42d.google.com (unknown [2a00:1450:4864:20::42d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 55cfa44d-51dd-40bc-baf0-71aab0d69465;
 Thu, 03 Dec 2020 18:44:22 +0000 (UTC)
Received: by mail-wr1-x42d.google.com with SMTP id p8so2934501wrx.5
 for <xen-devel@lists.xenproject.org>; Thu, 03 Dec 2020 10:44:22 -0800 (PST)
Received: from CBGR90WXYV0 (host86-183-162-145.range86-183.btcentralplus.com.
 [86.183.162.145])
 by smtp.gmail.com with ESMTPSA id l8sm428251wro.46.2020.12.03.10.44.20
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 03 Dec 2020 10:44:21 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 55cfa44d-51dd-40bc-baf0-71aab0d69465
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=6QEq0OQR2ej4DNaJ8dsHWczskzh0/w/6V75OIxZkJX4=;
        b=VaImNiGqfXBDh21Vg8qGs1mxqHsKF6RKzgPMhpnU0jhP0Qj6Vh3V/NBCKJxa0PKPGd
         m5YdB48lXrTXI5bLHCSD/YCJK9YWPPSjQPeraIhguO46n8MqI6TSnAoWTOrFLvIsqQSj
         9rcwpC4u83fbVpRBPNtZRSemLeDYUX+qY1BPnzBY97kiOUSzeGqATEVXXNF16qd1qycw
         zQn7UmvhoL2lgHSpAPL1/ylper5OpAJk42OUMU3ypuwc5uKg/G9PepTVUHZ/LQR1qaxI
         yMOgKW/qbStUNV4mDrpn+JMgpyqNpg1RXuMcouiGX2KHbGKQ4QQIrsONCBDwUjQNz5hD
         l+kw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=6QEq0OQR2ej4DNaJ8dsHWczskzh0/w/6V75OIxZkJX4=;
        b=WNSJxMr9MXC5Fq8SE7wEdww3Fr0HkDWNiStfr1sJh6TT73n92W2IE3NXBl0LVejQYl
         9nw6AJLAqlvJPIMmf7ms7F+BvZwoNd/dhrNPYLbot396aEOlmOz2msF7Unfz5XO2zsLg
         Efm/vuq4RsNgpEKAV5m6Q6M5Nwl1zoPz/g/3PUE0c0CKZrJhq2fujWoXc5vx1xdd8bbb
         awJS/opa3v5omSLJ5qLof4g1GsTGE81N580oaGZhfxZAGXWQt9i9mfcV67AcH7Y8h3Ck
         L9XV5UIiQWi/fn0JsBmebLdIL4q/p9uUguoAqMK8WEpKZhQLrjy6MA9Zq9dmWzS+jNHW
         sZKQ==
X-Gm-Message-State: AOAM530AouSvqj4r5zxxJvMrMc67Qkk04SW5yScLr7GmQJhF3rAX7xl3
	AQu3HoYgtvfzd7PBH3gUHVs=
X-Google-Smtp-Source: ABdhPJzFhiagCCPdnRzly3I5duj+YsIIWOdAL2IWbqxM/WD/GOgehy+EcL+6fslEpBmTslorOwMIzw==
X-Received: by 2002:a5d:444c:: with SMTP id x12mr656073wrr.6.1607021061686;
        Thu, 03 Dec 2020 10:44:21 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: =?utf-8?Q?'J=C3=BCrgen_Gro=C3=9F'?= <jgross@suse.com>,
	"'Jan Beulich'" <jbeulich@suse.com>
Cc: "'Paul Durrant'" <pdurrant@amazon.com>,
	"'Eslam Elnikety'" <elnikety@amazon.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Wei Liu'" <wl@xen.org>,
	"'Anthony PERARD'" <anthony.perard@citrix.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Christian Lindig'" <christian.lindig@citrix.com>,
	"'David Scott'" <dave@recoil.org>,
	"'Volodymyr Babchuk'" <Volodymyr_Babchuk@epam.com>,
	=?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	<xen-devel@lists.xenproject.org>
References: <20201203124159.3688-1-paul@xen.org> <20201203124159.3688-2-paul@xen.org> <fea91a65-1d7c-cd46-81a2-9a6bcb690ed1@suse.com> <00ee01d6c98b$507af1c0$f170d540$@xen.org> <8a4a2027-0df3-aee2-537a-3d2814b329ec@suse.com> <00f601d6c996$ce3908d0$6aab1a70$@xen.org> <d72cde2c-24ac-e1f5-b1e3-b746faf41406@suse.com>
In-Reply-To: <d72cde2c-24ac-e1f5-b1e3-b746faf41406@suse.com>
Subject: RE: [PATCH v5 1/4] domctl: introduce a new domain create flag, XEN_DOMCTL_CDF_evtchn_fifo, ...
Date: Thu, 3 Dec 2020 18:44:19 -0000
Message-ID: <010201d6c9a4$50194060$f04bc120$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQJzrCHAX/gquRkO4g65wfjgQBytUAJ7xs67Aj6rtzQCSqguWAI8ixj/AoRsg/ABh9SJBahBgCNw

> -----Original Message-----
[snip]
> >>>>
> >>>> All of the hunks above point out a scalability issue if we were to
> >>>> follow this route for even just a fair part of new sub-ops, and I
> >>>> suppose you've noticed this with the next patch presumably touching
> >>>> all the same places again.
> >>>
> >>> Indeed. This solution works for now but is probably not what we want in the long run. Same goes
> for
> >> the current way we control viridian features via an HVM param. It is good enough for now IMO since
> >> domctl is not a stable interface. Any ideas about how we might implement a better interface in the
> >> longer term?
> >>
> >> While it has other downsides, Jürgen's proposal doesn't have any
> >> similar scalability issue afaics. Another possible model would
> >> seem to be to key new hypercalls to hypervisor CPUID leaf bits,
> >> and derive their availability from a guest's CPUID policy. Of
> >> course this won't work when needing to retrofit guarding like
> >> you want to do here.
> >>
> >
> > Ok, I'll take a look at hypfs as an immediate solution, if that's preferred.
>
> Paul, if you'd like me to add a few patches to my series doing the basic
> framework (per-domain nodes, interfaces for setting struct domain fields
> via hypfs), I can do that. You could then concentrate on the tools side.
>

That would be very helpful. Thanks Juergen.

  Paul



From xen-devel-bounces@lists.xenproject.org Thu Dec 03 18:48:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 18:48:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43924.78834 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkte6-00076g-0q; Thu, 03 Dec 2020 18:48:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43924.78834; Thu, 03 Dec 2020 18:48:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkte5-00076Z-U1; Thu, 03 Dec 2020 18:48:01 +0000
Received: by outflank-mailman (input) for mailman id 43924;
 Thu, 03 Dec 2020 18:48:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0Inx=FH=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kkte5-00076U-DY
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 18:48:01 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3c9cc0a2-28a5-46c8-acb8-95712f13c692;
 Thu, 03 Dec 2020 18:48:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3c9cc0a2-28a5-46c8-acb8-95712f13c692
Date: Thu, 3 Dec 2020 10:47:52 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607021279;
	bh=fnHQEf2dIJMQWzsuScF0ayLSGLJiph4eWeZhD/3Evjg=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=Need5aTerq+HmBfK7p5NrrZ/dK2rK7N7CCWeo+9z3oyoticPoIO1UwJOUQcgEIw1R
	 Dd8Olm5yju+TphIGRkjv6QaJbXH4AhWJLLqytiBMmIk7QiGakqDG5Pvb8RLd3eUJId
	 yqcWyZpQ0ZIJNCY3C6tsTJqvhZw+oSnKAZeMqSTVA8EwUG8OzONLj4VLg36ssfWHwX
	 lqsY11+nr+H5hgRhM2iizlioDIv8TsoerQA0edLg7A6cpYTCTKiHoWbKBv+vAXFTi1
	 sWSlztAy8gyBYeqpZWC+T1M1YRv6ShyGfKrk3Zp4HgUVkXBc+fdN3QDyFYXiergmq/
	 oppthOVwKUd9A==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <Rahul.Singh@arm.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>, 
    Paul Durrant <paul@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 8/8] xen/arm: Add support for SMMUv3 driver
In-Reply-To: <BD247B69-7201-41E2-8687-49924B7396CA@arm.com>
Message-ID: <alpine.DEB.2.21.2012031045420.32240@sstabellini-ThinkPad-T480s>
References: <cover.1606406359.git.rahul.singh@arm.com> <de2101687020d18172a2b153f8977a5116d0cd66.1606406359.git.rahul.singh@arm.com> <alpine.DEB.2.21.2012011749550.1100@sstabellini-ThinkPad-T480s> <1912278a-13f4-885d-d1ca-cc130718d064@xen.org>
 <alpine.DEB.2.21.2012021958020.30425@sstabellini-ThinkPad-T480s> <BD247B69-7201-41E2-8687-49924B7396CA@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 3 Dec 2020, Rahul Singh wrote:
> > On 3 Dec 2020, at 4:13 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > On Wed, 2 Dec 2020, Julien Grall wrote:
> >> On 02/12/2020 02:51, Stefano Stabellini wrote:
> >>> On Thu, 26 Nov 2020, Rahul Singh wrote:
> >>>> +/* Alias to Xen device tree helpers */
> >>>> +#define device_node dt_device_node
> >>>> +#define of_phandle_args dt_phandle_args
> >>>> +#define of_device_id dt_device_match
> >>>> +#define of_match_node dt_match_node
> >>>> +#define of_property_read_u32(np, pname, out) (!dt_property_read_u32(np,
> >>>> pname, out))
> >>>> +#define of_property_read_bool dt_property_read_bool
> >>>> +#define of_parse_phandle_with_args dt_parse_phandle_with_args
> >>> 
> >>> Given all the changes to the file by the previous patches we are
> >>> basically fully (or almost fully) adapting this code to Xen.
> >>> 
> >>> So at that point I wonder if we should just as well make these changes
> >>> (e.g. s/of_phandle_args/dt_phandle_args/g) to the code too.
> >> 
> >> I have already accepted the fact that keeping Linux code as-is is nearly
> >> impossible without many workarounds :). The benefits also tend to be limited, as
> >> we noticed for the SMMU driver.
> >> 
> >> I would like to point out that this may make it quite difficult (if not
> >> impossible) to revert the previous patches which remove support for some
> >> features (e.g. atomic, MSI, ATS).
> >> 
> >> If we are going to adapt the code to Xen (I'd like to keep Linux code style
> >> though), then I think we should consider keeping code that may be useful in
> >> the near future (at least MSI, ATS).
> > 
> > (I am fine with keeping the Linux code style.)
> > 
> > We could try to keep the code as similar to Linux as possible. This
> > didn't work out in the past.
> > 
> > Otherwise, we could fully adapt the driver to Xen. If we fully adapt the
> > driver to Xen (code style aside) it is better to be consistent and also
> > do substitutions like s/of_phandle_args/dt_phandle_args/g. Then the
> > policy becomes clear: the code comes from Linux but it is 100% adapted
> > to Xen.
> > 
> > 
> > Now the question about what to do about the MSI and ATS code is a good
> > one. We know that we are going to want that code at some point in the
> > next 2 years. Like you wrote, if we fully adapt the code to Xen and
> > remove MSI and ATS code, then it is going to be harder to add it back.
> > 
> > So maybe keeping the MSI and ATS code for now, even if it cannot work,
> > would be better. I think this strategy works well if the MSI and ATS
> > code can be disabled easily, i.e. with a couple of lines of code in the
> > init function rather than #ifdef everywhere. It doesn't work well if we
> > have to add #ifdef everywhere.
> > 
> > It looks like MSI could be disabled adding a couple of lines to
> > arm_smmu_setup_msis.
> > 
> > Similarly ATS seems to be easy to disable by forcing ats_enabled to
> > false.
> > 
> > So yes, this looks like a good way forward. Rahul, what do you think?
> 
> 
> I am ok to have the PCI ATS and MSI functionality in the code.
> As per the discussion, the next version of the patch will include the modifications below. Please let me know if there are any overall suggestions that should be added in the next version.
> 
> 1. Keep the PCI ATS and MSI functionality code.

I'll repeat one point I wrote above that I think it is important: please
try to disable ATS and MSI without invasive changes, ideally just a
couple of lines to force-disable each feature.


> 2. Make the code Xen-compatible (remove the Linux compatibility functions).
> 3. Keep the Linux coding style for code imported from Linux.
> 4. Fix all comments.

Sounds good.


> I have one query: what will be the coding style for new code added to make the driver work for Xen?

We try to keep the same code style for the entirety of a source file. In
this case, the whole driver should use Linux code style (both imported
code and new code).


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 18:59:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 18:59:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43932.78846 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kktpY-0008Ed-3N; Thu, 03 Dec 2020 18:59:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43932.78846; Thu, 03 Dec 2020 18:59:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kktpY-0008EW-0B; Thu, 03 Dec 2020 18:59:52 +0000
Received: by outflank-mailman (input) for mailman id 43932;
 Thu, 03 Dec 2020 18:59:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0Inx=FH=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kktpW-0008ER-VJ
 for xen-devel@lists.xenproject.org; Thu, 03 Dec 2020 18:59:50 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f5979b25-777b-4a19-a231-3cd1481a8243;
 Thu, 03 Dec 2020 18:59:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f5979b25-777b-4a19-a231-3cd1481a8243
Date: Thu, 3 Dec 2020 10:59:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607021989;
	bh=OWWEdGrI+uaFpVJWMP3q8eS+LkmUXzLY+ZS3nn7Eo8s=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=eJSl8jSCNeVMGyYsiKnr78W9S4CCIXabO3N1UkU7fdw9cnfkOD4+lm1HTotoHTlUP
	 1icj5HNZnITFSiWqYQtFATyQf2/wGoDuwk/FOgQsC/AYuJuso4VkbxzAufXFbrOM/4
	 df1JtjmDy14suB5kegF7dzIjKdKifuBCvVe1C2XRo8VD5bFPyVWRSp7aT9heykyHh7
	 WlHzbBMshlFetC4LUBaCilrxuVa0m51djy6n25TgL1hKxa3H7qPq7HPbXNPRkH8EhF
	 UhDCRuXsG6IVdEwUzVRRrtz2cUM21mRCHR8Rr9Wt5G04NwxNCqJQSzXIehG4/yge5D
	 ohsb449Md9Njw==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
cc: Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>, 
    Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2] xen/irq: Propagate the error from init_one_desc_irq()
 in init_*_irq_data()
In-Reply-To: <E9377A57-0AA0-4B5B-87B0-B65FEF655C59@arm.com>
Message-ID: <alpine.DEB.2.21.2012031059150.32240@sstabellini-ThinkPad-T480s>
References: <20201128113642.8265-1-julien@xen.org> <E9377A57-0AA0-4B5B-87B0-B65FEF655C59@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1445478398-1607021972=:32240"
Content-ID: <alpine.DEB.2.21.2012031059450.32240@sstabellini-ThinkPad-T480s>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1445478398-1607021972=:32240
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2012031059451.32240@sstabellini-ThinkPad-T480s>

On Wed, 2 Dec 2020, Bertrand Marquis wrote:
> > On 28 Nov 2020, at 11:36, Julien Grall <julien@xen.org> wrote:
> > 
> > From: Julien Grall <jgrall@amazon.com>
> > 
> > init_one_desc_irq() can return an error if it is unable to allocate
> > memory. While this is unlikely to happen during boot (called from
> > init_{,local_}irq_data()), it is better to harden the code by
> > propagating the return value.
> > 
> > Spotted by coverity.
> > 
> > CID: 106529
> > 
> > Signed-off-by: Julien Grall <jgrall@amazon.com>
> > Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> > ---
> >    Changes in v2:
> >        - Add Roger's reviewed-by for x86
> >        - Handle
> > ---
> > xen/arch/arm/irq.c | 12 ++++++++++--
> > xen/arch/x86/irq.c |  7 ++++++-
> > 2 files changed, 16 insertions(+), 3 deletions(-)
> > 
> > diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
> > index 3877657a5277..b71b099e6fa2 100644
> > --- a/xen/arch/arm/irq.c
> > +++ b/xen/arch/arm/irq.c
> > @@ -88,7 +88,11 @@ static int __init init_irq_data(void)
> >     for ( irq = NR_LOCAL_IRQS; irq < NR_IRQS; irq++ )
> >     {
> >         struct irq_desc *desc = irq_to_desc(irq);
> > -        init_one_irq_desc(desc);
> > +        int rc = init_one_irq_desc(desc);
> > +
> > +        if ( rc )
> > +            return rc;
> > +
> >         desc->irq = irq;
> >         desc->action  = NULL;
> >     }
> > @@ -105,7 +109,11 @@ static int init_local_irq_data(void)
> >     for ( irq = 0; irq < NR_LOCAL_IRQS; irq++ )
> >     {
> >         struct irq_desc *desc = irq_to_desc(irq);
> > -        init_one_irq_desc(desc);
> > +        int rc = init_one_irq_desc(desc);
> > +
> > +        if ( rc )
> > +            return rc;
> > +
> >         desc->irq = irq;
> >         desc->action  = NULL;
> > 
> > diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
> > index 45966947919e..3ebd684415ac 100644
> > --- a/xen/arch/x86/irq.c
> > +++ b/xen/arch/x86/irq.c
> > @@ -428,9 +428,14 @@ int __init init_irq_data(void)
> > 
> >     for ( irq = 0; irq < nr_irqs_gsi; irq++ )
> >     {
> > +        int rc;
> > +
> >         desc = irq_to_desc(irq);
> >         desc->irq = irq;
> > -        init_one_irq_desc(desc);
> > +
> > +        rc = init_one_irq_desc(desc);
> > +        if ( rc )
> > +            return rc;
> >     }
> >     for ( ; irq < nr_irqs; irq++ )
> >         irq_to_desc(irq)->irq = irq;
> > -- 
> > 2.17.1
> > 
> > 
> 
> 
--8323329-1445478398-1607021972=:32240--


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 19:13:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 19:13:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43940.78858 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kku2k-0001o3-CD; Thu, 03 Dec 2020 19:13:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43940.78858; Thu, 03 Dec 2020 19:13:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kku2k-0001nw-7m; Thu, 03 Dec 2020 19:13:30 +0000
Received: by outflank-mailman (input) for mailman id 43940;
 Thu, 03 Dec 2020 19:13:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kku2j-0001no-9K; Thu, 03 Dec 2020 19:13:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kku2j-0004sC-2b; Thu, 03 Dec 2020 19:13:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kku2i-0008PE-Oe; Thu, 03 Dec 2020 19:13:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kku2i-0005Ya-OA; Thu, 03 Dec 2020 19:13:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WoNtQzgINZ86aTTrnrGRKDi78hvsuH0LiTEi+y6RIS4=; b=ZGxKlo+F3cutf/76OednvatYio
	tjr8ZZk/tH7cwrd0znPqNDzaOfTzICK+ItuQG0VpqZpzukRS2wZbsV7byPXpYxxVXMfu8cCO+VRWA
	sm4XwPoObOQadT6D8FPRDb83k0S91ZJJjH0f6jqROiBIkSFKb1buYkdPkbGuKCuoPfOc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157174-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157174: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 03 Dec 2020 19:13:28 +0000

flight 157174 qemu-mainline real [real]
flight 157185 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157174/
http://logs.test-lab.xenproject.org/osstest/logs/157185/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  105 days
Failing since        152659  2020-08-21 14:07:39 Z  104 days  217 attempts
Testing same since   157142  2020-12-01 20:39:57 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69355 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 20:12:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 03 Dec 2020 20:12:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.43956.78879 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkuxX-0007oK-So; Thu, 03 Dec 2020 20:12:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 43956.78879; Thu, 03 Dec 2020 20:12:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kkuxX-0007oD-P1; Thu, 03 Dec 2020 20:12:11 +0000
Received: by outflank-mailman (input) for mailman id 43956;
 Thu, 03 Dec 2020 20:12:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkuxW-0007o5-9l; Thu, 03 Dec 2020 20:12:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkuxW-0006Bh-0z; Thu, 03 Dec 2020 20:12:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kkuxV-00042Q-PJ; Thu, 03 Dec 2020 20:12:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kkuxV-0006fb-Oo; Thu, 03 Dec 2020 20:12:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QLG5GShdFrB8W+nktYRiMPvA5w805gmXX0+shR57kyg=; b=dzJjeHJh2kFptLfGobWbxvhP9q
	WonMPRxP7KPWWkUd4t7OVxM+vb3bwAm+ViHEgXkS0eU9mrzIxWnGiu33W9XRw00gMCnV/FlZfXsH5
	Fa/48+1TzUMb2bOI3gGbOLs7G87vlcRnaeHD6QoWj1BwIxpo/FRA9lsQ2Bs9U1C/uX5k=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157176-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157176: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 03 Dec 2020 20:12:09 +0000

flight 157176 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157176/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                34816d20f173a90389c8a7e641166d8ea9dce70a
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  125 days
Failing since        152366  2020-08-01 20:49:34 Z  123 days  210 attempts
Testing same since   157176  2020-12-03 09:04:25 Z    0 days    1 attempts

------------------------------------------------------------
3620 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 694561 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 03 21:42:00 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157184-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157184: all pass - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 03 Dec 2020 21:41:43 +0000

flight 157184 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157184/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 126115a9fb3f89f8609336c87aa82fe7da19a9aa
baseline version:
 ovmf                 484e869dfd3b7e4b6c47cb65ae5d5f499fcc056e

Last test of basis   157178  2020-12-03 12:29:37 Z    0 days
Testing same since   157184  2020-12-03 17:09:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Guo Dong <guo.dong@intel.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   484e869dfd..126115a9fb  126115a9fb3f89f8609336c87aa82fe7da19a9aa -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 00:30:24 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157182-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157182: tolerable FAIL - PUSHED
X-Osstest-Versions-This:
    xen=aec46884784c2494a30221da775d4ac2c43a4d42
X-Osstest-Versions-That:
    xen=cabf60fc32d4cfa1d74a2bdfcdb294a31da5d68e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 04 Dec 2020 00:30:04 +0000

flight 157182 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157182/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10       fail  like 157166
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157166
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157166
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157166
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157166
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157166
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157166
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157166
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157166
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157166
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157166
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157166
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  aec46884784c2494a30221da775d4ac2c43a4d42
baseline version:
 xen                  cabf60fc32d4cfa1d74a2bdfcdb294a31da5d68e

Last test of basis   157166  2020-12-02 23:08:38 Z    1 days
Testing same since   157182  2020-12-03 16:02:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   cabf60fc32..aec4688478  aec46884784c2494a30221da775d4ac2c43a4d42 -> master


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 04:14:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 04:14:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44049.78982 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl2UH-0001pP-Uj; Fri, 04 Dec 2020 04:14:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44049.78982; Fri, 04 Dec 2020 04:14:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl2UH-0001pH-Nm; Fri, 04 Dec 2020 04:14:29 +0000
Received: by outflank-mailman (input) for mailman id 44049;
 Fri, 04 Dec 2020 04:14:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kl2UG-0001p9-TE; Fri, 04 Dec 2020 04:14:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kl2UG-00051e-MO; Fri, 04 Dec 2020 04:14:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kl2UG-0002Z0-CG; Fri, 04 Dec 2020 04:14:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kl2UG-0000ZJ-Bn; Fri, 04 Dec 2020 04:14:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NgjS20NBDt3BVqJ27dza1RwLQ+ZldJdz3R/PC1cUXlE=; b=PpwpTCKiD+xRw7Em4yBZ79/Pfs
	g9bt+HL0yfDqKLxwuaL1WcUJkCJ9K8/rY966/p3clXvN3J+VdhvVV0kPm506bJMmpc5olrBWTbOBA
	+DaK3NOpMkozHfXwtFDESUlPUmb6p1vF2tYRuqjKK+yIc/hMSF9PdXXl2NedHwzwrZWc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157191-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157191: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=6af76adbbfccd31f4f8753fb0ddbbd9f4372f572
X-Osstest-Versions-That:
    ovmf=126115a9fb3f89f8609336c87aa82fe7da19a9aa
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 04 Dec 2020 04:14:28 +0000

flight 157191 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157191/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 6af76adbbfccd31f4f8753fb0ddbbd9f4372f572
baseline version:
 ovmf                 126115a9fb3f89f8609336c87aa82fe7da19a9aa

Last test of basis   157184  2020-12-03 17:09:49 Z    0 days
Testing same since   157191  2020-12-04 01:39:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Laszlo Ersek <lersek@redhat.com>
  Ray Ni <ray.ni@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   126115a9fb..6af76adbbf  6af76adbbfccd31f4f8753fb0ddbbd9f4372f572 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 06:07:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 06:07:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44063.79000 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl4FP-0004l7-0B; Fri, 04 Dec 2020 06:07:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44063.79000; Fri, 04 Dec 2020 06:07:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl4FO-0004l0-RY; Fri, 04 Dec 2020 06:07:14 +0000
Received: by outflank-mailman (input) for mailman id 44063;
 Fri, 04 Dec 2020 06:07:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kl4FN-0004ks-81; Fri, 04 Dec 2020 06:07:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kl4FM-0007zU-W3; Fri, 04 Dec 2020 06:07:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kl4FM-0008KH-Ko; Fri, 04 Dec 2020 06:07:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kl4FM-0000k4-KH; Fri, 04 Dec 2020 06:07:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BCK5rtKrRL1F0usU31palY+KT+YOshMSkSeTtRFM/Ds=; b=HKHmqpL/9w3L1jbd0RyOVd86/y
	gPUjf2FReO9YCB7n4zsXMdhGSK5t6SHoH3tNOvFhuN+XiZGjn7/UtbjBulBZhYw5geE0zAPRmjn37
	5dXFR7+AtL3JxsvES+sK3vGVewUhpcsowWCeYVNEGlF+0ZnJnZr8Nmh/U3a1Szr/bUU4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157186-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157186: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 04 Dec 2020 06:07:12 +0000

flight 157186 qemu-mainline real [real]
flight 157196 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157186/
http://logs.test-lab.xenproject.org/osstest/logs/157196/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  105 days
Failing since        152659  2020-08-21 14:07:39 Z  104 days  218 attempts
Testing same since   157142  2020-12-01 20:39:57 Z    2 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69355 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 07:48:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 07:48:56 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157188-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157188: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=34816d20f173a90389c8a7e641166d8ea9dce70a
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 04 Dec 2020 07:48:41 +0000

flight 157188 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157188/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl          10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm 10 host-ping-check-xen fail in 157176 REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen fail in 157176 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl           8 xen-boot         fail in 157176 pass in 157188
 test-arm64-arm64-xl-seattle   8 xen-boot         fail in 157176 pass in 157188
 test-arm64-arm64-xl-credit1   8 xen-boot                   fail pass in 157176
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 157176
 test-arm64-arm64-xl-credit2   8 xen-boot                   fail pass in 157176
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 157176
 test-arm64-arm64-libvirt-xsm  8 xen-boot                   fail pass in 157176

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2 11 leak-check/basis(11) fail in 157176 blocked in 152332
 test-arm64-arm64-xl-credit1 11 leak-check/basis(11) fail in 157176 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                34816d20f173a90389c8a7e641166d8ea9dce70a
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  125 days
Failing since        152366  2020-08-01 20:49:34 Z  124 days  211 attempts
Testing same since   157176  2020-12-03 09:04:25 Z    0 days    2 attempts

------------------------------------------------------------
3620 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 694561 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 07:53:16 2020
Subject: Re: [PATCH v5 1/4] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_evtchn_fifo, ...
To: paul@xen.org
Cc: 'Paul Durrant' <pdurrant@amazon.com>,
 'Eslam Elnikety' <elnikety@amazon.com>, 'Ian Jackson' <iwj@xenproject.org>,
 'Wei Liu' <wl@xen.org>, 'Anthony PERARD' <anthony.perard@citrix.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Julien Grall' <julien@xen.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Christian Lindig' <christian.lindig@citrix.com>,
 'David Scott' <dave@recoil.org>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201203124159.3688-1-paul@xen.org>
 <20201203124159.3688-2-paul@xen.org>
 <fea91a65-1d7c-cd46-81a2-9a6bcb690ed1@suse.com>
 <00ee01d6c98b$507af1c0$f170d540$@xen.org>
 <8a4a2027-0df3-aee2-537a-3d2814b329ec@suse.com>
 <00f601d6c996$ce3908d0$6aab1a70$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <946280c7-c7f7-c760-c0d3-db91e6cde68a@suse.com>
Date: Fri, 4 Dec 2020 08:53:08 +0100
In-Reply-To: <00f601d6c996$ce3908d0$6aab1a70$@xen.org>

On 03.12.2020 18:07, Paul Durrant wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 03 December 2020 15:57
>>
>> On 03.12.2020 16:45, Paul Durrant wrote:
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>> Sent: 03 December 2020 15:23
>>>>
>>>> On 03.12.2020 13:41, Paul Durrant wrote:
>>>>> From: Paul Durrant <pdurrant@amazon.com>
>>>>>
>>>>> ...to control the visibility of the FIFO event channel operations
>>>>> (EVTCHNOP_init_control, EVTCHNOP_expand_array, and EVTCHNOP_set_priority) to
>>>>> the guest.
>>>>>
>>>>> These operations were added to the public header in commit d2d50c2f308f
>>>>> ("evtchn: add FIFO-based event channel ABI") and the first implementation
>>>>> appeared in the two subsequent commits: edc8872aeb4a ("evtchn: implement
>>>>> EVTCHNOP_set_priority and add the set_priority hook") and 88910061ec61
>>>>> ("evtchn: add FIFO-based event channel hypercalls and port ops"). Prior to
>>>>> that, a guest issuing those operations would receive a return value of
>>>>> -ENOSYS (not implemented) from Xen. Guests aware of the FIFO operations but
>>>>> running on an older (pre-4.4) Xen would fall back to using the 2-level event
>>>>> channel interface upon seeing this return value.
>>>>>
>>>>> Unfortunately the uncontrollable appearance of these new operations in Xen 4.4
>>>>> onwards has implications for hibernation of some Linux guests. During resume
>>>>> from hibernation, there are two kernels involved: the "boot" kernel and the
>>>>> "resume" kernel. The guest boot kernel may default to use FIFO operations and
>>>>> instruct Xen via EVTCHNOP_init_control to switch from 2-level to FIFO. On the
>>>>> other hand, the resume kernel keeps assuming 2-level, because it was hibernated
>>>>> on a version of Xen that did not support the FIFO operations.
>>>>>
>>>>> To maintain compatibility it is necessary to make Xen behave as it did
>>>>> before the new operations were added and hence the code in this patch ensures
>>>>> that, if XEN_DOMCTL_CDF_evtchn_fifo is not set, the FIFO event channel
>>>>> operations will again result in -ENOSYS being returned to the guest.
>>>>
>>>> I have to admit I'm now even more concerned of the control for such
>>>> going into Xen, the more with the now 2nd use in the subsequent patch.
>>>> The implication of this would seem to be that whenever we add new
>>>> hypercalls or sub-ops, a domain creation control would also need
>>>> adding determining whether that new sub-op is actually okay to use by
>>>> a guest. Or else I'd be keen to up front see criteria at least roughly
>>>> outlined by which it could be established whether such an override
>>>> control is needed.
>>>>
>>>
>>> Ultimately I think any new hypercall (or related set of hypercalls) added to the ABI needs to be
>> opt-in on a per-domain basis, so that we know that from when a domain is first created it will not see
>> a change in its environment unless the VM administrator wants that to happen.
>>
>> A new hypercall appearing is a change to the guest's environment, yes,
>> but a backwards compatible one. I don't see how this would harm a guest.
> 
> Say we have a guest which is aware of the newer Xen and is coded to use the new hypercall *but* we start it on an older Xen where the new hypercall is not implemented. *Then* we migrate it to the newer Xen... the new hypercall suddenly becomes available and changes the guest behaviour. In the worst case, the guest is sufficiently confused that it crashes.

If a guest recognizes that a new hypercall is unavailable, why would it ever
try to make use of it again, unless it knows it may appear after having been
migrated (which implies it's prepared for its sudden appearance)? This still
looks to me like an attempt to work around guest bugs, and a halfhearted one
at that, as you cherry-pick just two out of many additions to the original
3.0 ABI.
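
[ Editor's note: the gating the patch implements can be sketched in
isolation: the three FIFO sub-ops are refused with -ENOSYS, restoring the
pre-4.4 behaviour, whenever the new creation flag is clear. This is a
minimal stand-alone sketch, not the actual Xen code; the flag's bit
position and the struct are illustrative only (the real definitions live
in xen/include/public/domctl.h and xen/common/event_channel.c). ]

```c
#include <assert.h>
#include <errno.h>

/* Illustrative constants; real values live in the Xen public headers.
 * The bit position of the CDF flag here is hypothetical. */
#define XEN_DOMCTL_CDF_evtchn_fifo (1u << 7)
#define EVTCHNOP_init_control  11
#define EVTCHNOP_expand_array  12
#define EVTCHNOP_set_priority  13

struct domain { unsigned int cdf_flags; };

/* Sketch: gate the FIFO sub-ops on the domain creation flag, so a guest
 * of a domain created without the flag sees -ENOSYS, exactly as on a
 * pre-4.4 hypervisor, and falls back to the 2-level ABI. */
static int do_event_channel_op(const struct domain *d, int cmd)
{
    switch ( cmd )
    {
    case EVTCHNOP_init_control:
    case EVTCHNOP_expand_array:
    case EVTCHNOP_set_priority:
        if ( !(d->cdf_flags & XEN_DOMCTL_CDF_evtchn_fifo) )
            return -ENOSYS;
        return 0; /* would dispatch to the FIFO implementation */
    default:
        return 0; /* all other sub-ops are unaffected */
    }
}
```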

>> This and ...
>>
>>>> I'm also not convinced such controls really want to be opt-in rather
>>>> than opt-out.
>>>
>>> They really need to be opt-in I think. From a cloud provider PoV it is important that nothing in a
>> customer's environment changes unless we want it to. Otherwise we have no way to deploy an updated
>> hypervisor version without risking crashing their VMs.
>>
>> ... this sound to me more like workarounds for buggy guests than
>> functionality the hypervisor _needs_ to have. (I can appreciate
>> the specific case here for the specific scenario you provide as
>> an exception.)
> 
> If we want to have a hypervisor that can be used in a cloud environment
> then Xen absolutely needs this capability.

As per above you can conclude that I'm still struggling to see the
"why" part here.

>>>>> --- a/xen/arch/arm/domain.c
>>>>> +++ b/xen/arch/arm/domain.c
>>>>> @@ -622,7 +622,8 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>>>>>      unsigned int max_vcpus;
>>>>>
>>>>>      /* HVM and HAP must be set. IOMMU may or may not be */
>>>>> -    if ( (config->flags & ~XEN_DOMCTL_CDF_iommu) !=
>>>>> +    if ( (config->flags &
>>>>> +          ~(XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_evtchn_fifo)) !=
>>>>>           (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap) )
>>>>>      {
>>>>>          dprintk(XENLOG_INFO, "Unsupported configuration %#x\n",
>>>>> --- a/xen/arch/arm/domain_build.c
>>>>> +++ b/xen/arch/arm/domain_build.c
>>>>> @@ -2478,7 +2478,8 @@ void __init create_domUs(void)
>>>>>          struct domain *d;
>>>>>          struct xen_domctl_createdomain d_cfg = {
>>>>>              .arch.gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE,
>>>>> -            .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap,
>>>>> +            .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
>>>>> +                     XEN_DOMCTL_CDF_evtchn_fifo,
>>>>>              .max_evtchn_port = -1,
>>>>>              .max_grant_frames = 64,
>>>>>              .max_maptrack_frames = 1024,
>>>>> --- a/xen/arch/arm/setup.c
>>>>> +++ b/xen/arch/arm/setup.c
>>>>> @@ -805,7 +805,8 @@ void __init start_xen(unsigned long boot_phys_offset,
>>>>>      struct bootmodule *xen_bootmodule;
>>>>>      struct domain *dom0;
>>>>>      struct xen_domctl_createdomain dom0_cfg = {
>>>>> -        .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap,
>>>>> +        .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
>>>>> +                 XEN_DOMCTL_CDF_evtchn_fifo,
>>>>>          .max_evtchn_port = -1,
>>>>>          .max_grant_frames = gnttab_dom0_frames(),
>>>>>          .max_maptrack_frames = -1,
>>>>> --- a/xen/arch/x86/setup.c
>>>>> +++ b/xen/arch/x86/setup.c
>>>>> @@ -738,7 +738,8 @@ static struct domain *__init create_dom0(const module_t *image,
>>>>>                                           const char *loader)
>>>>>  {
>>>>>      struct xen_domctl_createdomain dom0_cfg = {
>>>>> -        .flags = IS_ENABLED(CONFIG_TBOOT) ? XEN_DOMCTL_CDF_s3_integrity : 0,
>>>>> +        .flags = XEN_DOMCTL_CDF_evtchn_fifo |
>>>>> +                 (IS_ENABLED(CONFIG_TBOOT) ? XEN_DOMCTL_CDF_s3_integrity : 0),
>>>>>          .max_evtchn_port = -1,
>>>>>          .max_grant_frames = -1,
>>>>>          .max_maptrack_frames = -1,
>>>>> --- a/xen/common/domain.c
>>>>> +++ b/xen/common/domain.c
>>>>> @@ -307,7 +307,7 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
>>>>>           ~(XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
>>>>>             XEN_DOMCTL_CDF_s3_integrity | XEN_DOMCTL_CDF_oos_off |
>>>>>             XEN_DOMCTL_CDF_xs_domain | XEN_DOMCTL_CDF_iommu |
>>>>> -           XEN_DOMCTL_CDF_nested_virt) )
>>>>> +           XEN_DOMCTL_CDF_nested_virt | XEN_DOMCTL_CDF_evtchn_fifo) )
>>>>>      {
>>>>>          dprintk(XENLOG_INFO, "Unknown CDF flags %#x\n", config->flags);
>>>>>          return -EINVAL;
>>>>
>>>> All of the hunks above point out a scalability issue if we were to
>>>> follow this route for even just a fair part of new sub-ops, and I
>>>> suppose you've noticed this with the next patch presumably touching
>>>> all the same places again.
>>>
>>> Indeed. This solution works for now but is probably not what we want in the long run. Same goes for
>> the current way we control viridian features via an HVM param. It is good enough for now IMO since
>> domctl is not a stable interface. Any ideas about how we might implement a better interface in the
>> longer term?
>>
>> While it has other downsides, Jürgen's proposal doesn't have any
>> similar scalability issue afaics. Another possible model would
>> seem to be to key new hypercalls to hypervisor CPUID leaf bits,
>> and derive their availability from a guest's CPUID policy. Of
>> course this won't work when needing to retrofit guarding like
>> you want to do here.
> 
> Ok, I'll take a look hypfs as an immediate solution, if that's preferred.

Well, as said, there are also downsides to that approach. I'm not
convinced it should be just the three of us who determine which
one is better overall, especially since you don't seem to be convinced
yourself (and I'm not going to claim I am, in either direction). My
understanding is that, given the claimed need for this (see above),
this may become very widely used functionality, and if so we want
to make sure we won't regret the route we take.

For starters, just to get a better overall picture: besides the two
overrides you add here, do you have any plans to retroactively add
further controls for past ABI additions? I don't suppose you plan to
do this consistently for all of them? I ask because I could see us
going the route you've chosen for a very few retroactive additions
of overrides, while using the CPUID approach (provided there's a
suitable Arm counterpart, and ideally some understanding of whether
something similar could also be found for at least the two planned
new ports) for ABI additions going forward.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 08:12:22 2020
Subject: Re: [PATCH v4 00/11] viridian: add support for ExProcessorMasks
To: Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>
Cc: Paul Durrant <pdurrant@amazon.com>, xen-devel@lists.xenproject.org
References: <20201202092205.906-1-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <fabc2720-3cbc-0b3f-1b09-23ec25189407@suse.com>
Date: Fri, 4 Dec 2020 09:12:13 +0100
In-Reply-To: <20201202092205.906-1-paul@xen.org>

Wei,

On 02.12.2020 10:21, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> Paul Durrant (11):
>   viridian: don't blindly write to 32-bit registers if 'mode' is invalid
>   viridian: move flush hypercall implementation into separate function
>   viridian: move IPI hypercall implementation into separate function
>   viridian: introduce a per-cpu hypercall_vpmask and accessor
>     functions...
>   viridian: use hypercall_vpmask in hvcall_ipi()
>   viridian: use softirq batching in hvcall_ipi()
>   viridian: add ExProcessorMasks variants of the flush hypercalls
>   viridian: add ExProcessorMasks variant of the IPI hypercall
>   viridian: log initial invocation of each type of hypercall
>   viridian: add a new '_HVMPV_ex_processor_masks' bit into
>     HVM_PARAM_VIRIDIAN...
>   xl / libxl: add 'ex_processor_mask' into
>     'libxl_viridian_enlightenment'
> 
>  docs/man/xl.cfg.5.pod.in             |   8 +
>  tools/include/libxl.h                |   7 +
>  tools/libs/light/libxl_types.idl     |   1 +
>  tools/libs/light/libxl_x86.c         |   3 +
>  xen/arch/x86/hvm/viridian/viridian.c | 604 +++++++++++++++++++++------
>  xen/include/asm-x86/hvm/viridian.h   |  10 +
>  xen/include/public/hvm/params.h      |   7 +-
>  7 files changed, 516 insertions(+), 124 deletions(-)

the status of this series was one of the topics of yesterday's
community call. Since Paul's prior ping hasn't had a response from
you (possibly because you're on PTO for an extended period of
time), the plan is to get this series in, with as much reviewing as
I was able to do, by perhaps the middle of next week, unless of
course we hear back from you earlier, giving at least an indication
of when you might be able to look at this.

Thanks for your understanding.

Paul, I notice v4 patches 10 and 11 never arrived in my inbox.
The list archives also don't have them. Therefore I can't check
the status of the tools side changes, and I wouldn't want to
commit those without tool stack side acks anyway, especially
since they weren't part of what I've looked at.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 08:23:11 2020
X-Inumbo-ID: 04625a04-68c1-4095-9dae-9edc26883ac4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=TAivD8bq8CjlG3+boC2nMjbRXzdIRu7NJY8YKlrIJFA=;
        b=d4BUVR8Z8smrmrwndWz12cX5mOhcasaGh2/9aOYpAA+ceNWe4H9wxgepwcU/nZFec1
         zhxzBu59ITktG4ZudtZaENMY29hwF/MgclP3VXtDmPwf5YrLkqz/2fbgFyqLq2HuDIuW
         AUi3wgmYleDPK5zOHGq2x0cyOaRUswFoRZzO4C8GLgaZPtKb4GREdJ/gzvWbSfTYlPdy
         fejcCWcppyln2SgBh0c0CBnN6o3IOexdCxA3mOyG554kJfLGyMuJr4PEp7kO7xswU2hJ
         jn+URhMq1I2THQKinGKNCntG1KT65b9ZAzUQR4u/Sw6KSj7MqmNOJe9YkyK4okyhB85r
         xCNA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=TAivD8bq8CjlG3+boC2nMjbRXzdIRu7NJY8YKlrIJFA=;
        b=l8WCjcqh33UGAW4Q7fUrpzXYRLEWE0N+17ciU9iPv1G2eqq6ougTi0qwcczW4eUYsT
         gfN9T8333oh4LHe+ChBm2QqPa6kYGkYxPH9i/zQmLil1ysXYm7Yn51WGpw+54qvXFZuL
         6ihc+Dtgw+lsCkZErXR+LS0nkPhS7npSfLih5Byaefq/EF9dn7Zjb5phcJj/jCMO4P50
         NX4EV7/Rrh9U5YNUWYWflgLqQsPVQaiPYchnw99HcEYeiS+hi1H7b9QJj1OwwF+wv1NY
         OTfpv25/jqfxUAxMmLIsTVG4vlNm2nLpcw1H25IxoLIOqr3ANbzMqDRf89RHjqZQPTQ5
         H51Q==
X-Gm-Message-State: AOAM532etP+8mm4nnlVvE8iwZ9npFkytg9W+f3yj1rrRPolNDwto9EI8
	UztjCLB1UqC9c3TMBocVCbI=
X-Google-Smtp-Source: ABdhPJz9ycspFZL2/I+DqjD9ausZFNbKgomipK/S7wnXjHhl8iJtt1OiIo5QH3EsY/UfjXu/nDQQ1Q==
X-Received: by 2002:a5d:4a04:: with SMTP id m4mr3530465wrq.60.1607070181945;
        Fri, 04 Dec 2020 00:23:01 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
Cc: "'Paul Durrant'" <pdurrant@amazon.com>,
	"'Eslam Elnikety'" <elnikety@amazon.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Wei Liu'" <wl@xen.org>,
	"'Anthony PERARD'" <anthony.perard@citrix.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Christian Lindig'" <christian.lindig@citrix.com>,
	"'David Scott'" <dave@recoil.org>,
	"'Volodymyr Babchuk'" <Volodymyr_Babchuk@epam.com>,
	=?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	<xen-devel@lists.xenproject.org>
References: <20201203124159.3688-1-paul@xen.org> <20201203124159.3688-2-paul@xen.org> <fea91a65-1d7c-cd46-81a2-9a6bcb690ed1@suse.com> <00ee01d6c98b$507af1c0$f170d540$@xen.org> <8a4a2027-0df3-aee2-537a-3d2814b329ec@suse.com> <00f601d6c996$ce3908d0$6aab1a70$@xen.org> <946280c7-c7f7-c760-c0d3-db91e6cde68a@suse.com>
In-Reply-To: <946280c7-c7f7-c760-c0d3-db91e6cde68a@suse.com>
Subject: RE: [PATCH v5 1/4] domctl: introduce a new domain create flag, XEN_DOMCTL_CDF_evtchn_fifo, ...
Date: Fri, 4 Dec 2020 08:22:59 -0000
Message-ID: <011201d6ca16$ae14ac50$0a3e04f0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQJzrCHAX/gquRkO4g65wfjgQBytUAJ7xs67Aj6rtzQCSqguWAI8ixj/AoRsg/ABjwDZCKhCJokA

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 04 December 2020 07:53
> To: paul@xen.org
> Cc: 'Paul Durrant' <pdurrant@amazon.com>; 'Eslam Elnikety' <elnikety@amazon.com>; 'Ian Jackson'
> <iwj@xenproject.org>; 'Wei Liu' <wl@xen.org>; 'Anthony PERARD' <anthony.perard@citrix.com>; 'Andrew
> Cooper' <andrew.cooper3@citrix.com>; 'George Dunlap' <george.dunlap@citrix.com>; 'Julien Grall'
> <julien@xen.org>; 'Stefano Stabellini' <sstabellini@kernel.org>; 'Christian Lindig'
> <christian.lindig@citrix.com>; 'David Scott' <dave@recoil.org>; 'Volodymyr Babchuk'
> <Volodymyr_Babchuk@epam.com>; 'Roger Pau Monné' <roger.pau@citrix.com>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH v5 1/4] domctl: introduce a new domain create flag, XEN_DOMCTL_CDF_evtchn_fifo,
> ...
>
> On 03.12.2020 18:07, Paul Durrant wrote:
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: 03 December 2020 15:57
> >>
> >> On 03.12.2020 16:45, Paul Durrant wrote:
> >>>> From: Jan Beulich <jbeulich@suse.com>
> >>>> Sent: 03 December 2020 15:23
> >>>>
> >>>> On 03.12.2020 13:41, Paul Durrant wrote:
> >>>>> From: Paul Durrant <pdurrant@amazon.com>
> >>>>>
> >>>>> ...to control the visibility of the FIFO event channel operations
> >>>>> (EVTCHNOP_init_control, EVTCHNOP_expand_array, and EVTCHNOP_set_priority) to
> >>>>> the guest.
> >>>>>
> >>>>> These operations were added to the public header in commit d2d50c2f308f
> >>>>> ("evtchn: add FIFO-based event channel ABI") and the first implementation
> >>>>> appeared in the two subsequent commits: edc8872aeb4a ("evtchn: implement
> >>>>> EVTCHNOP_set_priority and add the set_priority hook") and 88910061ec61
> >>>>> ("evtchn: add FIFO-based event channel hypercalls and port ops"). Prior to
> >>>>> that, a guest issuing those operations would receive a return value of
> >>>>> -ENOSYS (not implemented) from Xen. Guests aware of the FIFO operations but
> >>>>> running on an older (pre-4.4) Xen would fall back to using the 2-level event
> >>>>> channel interface upon seeing this return value.
> >>>>>
> >>>>> Unfortunately the uncontrollable appearance of these new operations in Xen 4.4
> >>>>> onwards has implications for hibernation of some Linux guests. During resume
> >>>>> from hibernation, there are two kernels involved: the "boot" kernel and the
> >>>>> "resume" kernel. The guest boot kernel may default to use FIFO operations and
> >>>>> instruct Xen via EVTCHNOP_init_control to switch from 2-level to FIFO. On the
> >>>>> other hand, the resume kernel keeps assuming 2-level, because it was hibernated
> >>>>> on a version of Xen that did not support the FIFO operations.
> >>>>>
> >>>>> To maintain compatibility it is necessary to make Xen behave as it did
> >>>>> before the new operations were added and hence the code in this patch ensures
> >>>>> that, if XEN_DOMCTL_CDF_evtchn_fifo is not set, the FIFO event channel
> >>>>> operations will again result in -ENOSYS being returned to the guest.
> >>>>
> >>>> I have to admit I'm now even more concerned of the control for such
> >>>> going into Xen, the more with the now 2nd use in the subsequent patch.
> >>>> The implication of this would seem to be that whenever we add new
> >>>> hypercalls or sub-ops, a domain creation control would also need
> >>>> adding determining whether that new sub-op is actually okay to use by
> >>>> a guest. Or else I'd be keen to up front see criteria at least roughly
> >>>> outlined by which it could be established whether such an override
> >>>> control is needed.
> >>>>
> >>>
> >>> Ultimately I think any new hypercall (or related set of hypercalls) added to the ABI needs to be
> >> opt-in on a per-domain basis, so that we know that from when a domain is first created it will not see
> >> a change in its environment unless the VM administrator wants that to happen.
> >>
> >> A new hypercall appearing is a change to the guest's environment, yes,
> >> but a backwards compatible one. I don't see how this would harm a guest.
> >
> > Say we have a guest which is aware of the newer Xen and is coded to use the new hypercall *but* we
> > start it on an older Xen where the new hypercall is not implemented. *Then* we migrate it to the newer
> > Xen... the new hypercall suddenly becomes available and changes the guest behaviour. In the worst
> > case, the guest is sufficiently confused that it crashes.
>
> If a guest recognizes a new hypercall is unavailable, why would it ever try
> to make use of it again, unless it knows it may appear after having been
> migrated (which implies it's prepared for the sudden appearance)? This to
> me still looks like an attempt to work around guest bugs. And a halfhearted
> one, as you cherry-pick just two out of many additions to the original 3.0
> ABI.
>

This is exactly an attempt to work around guest bugs, which is something
a cloud provider needs to do where possible.

> >> This and ...
> >>
> >>>> I'm also not convinced such controls really want to be opt-in rather
> >>>> than opt-out.
> >>>
> >>> They really need to be opt-in I think. From a cloud provider PoV it is important that nothing in a
> >> customer's environment changes unless we want it to. Otherwise we have no way to deploy an updated
> >> hypervisor version without risking crashing their VMs.
> >>
> >> ... this sound to me more like workarounds for buggy guests than
> >> functionality the hypervisor _needs_ to have. (I can appreciate
> >> the specific case here for the specific scenario you provide as
> >> an exception.)
> >
> > If we want to have a hypervisor that can be used in a cloud environment
> > then Xen absolutely needs this capability.
>
> As per above you can conclude that I'm still struggling to see the
> "why" part here.
>

Imagine you are a customer. You boot your OS and everything is just
fine... you run your workload and all is good. You then shut down your
VM and re-start it. Now it starts to crash. Who are you going to blame?
You did nothing to your OS or application s/w, so you are going to blame
the cloud provider of course.

Now imagine you are the cloud provider, running Xen. What you did was
start to upgrade your hosts from an older version of Xen to a newer
version of Xen, to pick up various bug fixes and make sure you are
running a version that is within the security support envelope. You
identify that your customer's problem is a bug in their OS that was
latent on the old version of the hypervisor but is now manifesting on
the new one because it has buggy support for a hypercall that was added
between the two versions. How are you going to fix this issue, and get
your customer up and running again? Of course you'd like your customer
to upgrade their OS, but they can't even boot it to do that. You really
need a solution that can restore the old VM environment, at least
temporarily, for that customer.

> >>>>> --- a/xen/arch/arm/domain.c
> >>>>> +++ b/xen/arch/arm/domain.c
> >>>>> @@ -622,7 +622,8 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
> >>>>>      unsigned int max_vcpus;
> >>>>>
> >>>>>      /* HVM and HAP must be set. IOMMU may or may not be */
> >>>>> -    if ( (config->flags & ~XEN_DOMCTL_CDF_iommu) !=
> >>>>> +    if ( (config->flags &
> >>>>> +          ~(XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_evtchn_fifo) !=
> >>>>>           (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap) )
> >>>>>      {
> >>>>>          dprintk(XENLOG_INFO, "Unsupported configuration %#x\n",
> >>>>> --- a/xen/arch/arm/domain_build.c
> >>>>> +++ b/xen/arch/arm/domain_build.c
> >>>>> @@ -2478,7 +2478,8 @@ void __init create_domUs(void)
> >>>>>          struct domain *d;
> >>>>>          struct xen_domctl_createdomain d_cfg = {
> >>>>>              .arch.gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE,
> >>>>> -            .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap,
> >>>>> +            .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
> >>>>> +                     XEN_DOMCTL_CDF_evtchn_fifo,
> >>>>>              .max_evtchn_port = -1,
> >>>>>              .max_grant_frames = 64,
> >>>>>              .max_maptrack_frames = 1024,
> >>>>> --- a/xen/arch/arm/setup.c
> >>>>> +++ b/xen/arch/arm/setup.c
> >>>>> @@ -805,7 +805,8 @@ void __init start_xen(unsigned long boot_phys_offset,
> >>>>>      struct bootmodule *xen_bootmodule;
> >>>>>      struct domain *dom0;
> >>>>>      struct xen_domctl_createdomain dom0_cfg = {
> >>>>> -        .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap,
> >>>>> +        .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
> >>>>> +                 XEN_DOMCTL_CDF_evtchn_fifo,
> >>>>>          .max_evtchn_port = -1,
> >>>>>          .max_grant_frames = gnttab_dom0_frames(),
> >>>>>          .max_maptrack_frames = -1,
> >>>>> --- a/xen/arch/x86/setup.c
> >>>>> +++ b/xen/arch/x86/setup.c
> >>>>> @@ -738,7 +738,8 @@ static struct domain *__init create_dom0(const module_t *image,
> >>>>>                                           const char *loader)
> >>>>>  {
> >>>>>      struct xen_domctl_createdomain dom0_cfg = {
> >>>>> -        .flags = IS_ENABLED(CONFIG_TBOOT) ? XEN_DOMCTL_CDF_s3_integrity : 0,
> >>>>> +        .flags = XEN_DOMCTL_CDF_evtchn_fifo |
> >>>>> +                 (IS_ENABLED(CONFIG_TBOOT) ? XEN_DOMCTL_CDF_s3_integrity : 0),
> >>>>>          .max_evtchn_port = -1,
> >>>>>          .max_grant_frames = -1,
> >>>>>          .max_maptrack_frames = -1,
> >>>>> --- a/xen/common/domain.c
> >>>>> +++ b/xen/common/domain.c
> >>>>> @@ -307,7 +307,7 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
> >>>>>           ~(XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
> >>>>>             XEN_DOMCTL_CDF_s3_integrity | XEN_DOMCTL_CDF_oos_off |
> >>>>>             XEN_DOMCTL_CDF_xs_domain | XEN_DOMCTL_CDF_iommu |
> >>>>> -           XEN_DOMCTL_CDF_nested_virt) )
> >>>>> +           XEN_DOMCTL_CDF_nested_virt | XEN_DOMCTL_CDF_evtchn_fifo) )
> >>>>>      {
> >>>>>          dprintk(XENLOG_INFO, "Unknown CDF flags %#x\n", config->flags);
> >>>>>          return -EINVAL;
> >>>>
> >>>> All of the hunks above point out a scalability issue if we were to
> >>>> follow this route for even just a fair part of new sub-ops, and I
> >>>> suppose you've noticed this with the next patch presumably touching
> >>>> all the same places again.
> >>>
> >>> Indeed. This solution works for now but is probably not what we want in the long run. Same goes for
> >> the current way we control viridian features via an HVM param. It is good enough for now IMO since
> >> domctl is not a stable interface. Any ideas about how we might implement a better interface in the
> >> longer term?
> >>
> >> While it has other downsides, Jürgen's proposal doesn't have any
> >> similar scalability issue afaics. Another possible model would
> >> seem to be to key new hypercalls to hypervisor CPUID leaf bits,
> >> and derive their availability from a guest's CPUID policy. Of
> >> course this won't work when needing to retrofit guarding like
> >> you want to do here.
> >
> > Ok, I'll take a look at hypfs as an immediate solution, if that's preferred.
>
> Well, as said - there are also downsides with that approach. I'm
> not convinced it should be just the three of us to determine which
> one is better overall, the more that you don't seem to be convinced
> anyway (and I'm not going to claim I am, in either direction). It
> is my understanding that based on the claimed need for this (see
> above), this may become very widely used functionality, and if so
> we want to make sure we won't regret the route we went.
>

Agreed, but we don't need the final top-to-bottom solution now. The only
part of this that introduces something stable is the libxl part, and I
think the 'feature' bitmap is reasonably future-proof (being modelled on
the viridian feature bitmap, which has been extended over several
releases since it was introduced).

> For starters, just to get a better overall picture, besides the two
> overrides you add here, do you have any plans to retroactively add
> further controls for past ABI additions?

I don't have any specific plans. The two I deal with here are the causes
of observed differences in guest behaviour, one being an actual crash
and the other affecting PV driver behaviour which may or may not be the
cause of other crashes... but still something we need to have control
over.

> I don't suppose you have
> plans to consistently do this for all of them? I ask because I
> could see us going the route you've chosen for very few retroactive
> additions of overrides, but use the CPUID approach (provided
> there's a suitable Arm counterpart, and ideally some understanding
> whether something similar could also be found for at least the two
> planned new ports) for ABI additions going forward.
>

That sounds reasonable and I'd be perfectly happy to rip and replace the
use of domain creation flags when we have something 'better'. We don't
need to wait for that though... this series is perfectly functional and
doesn't take us down any one-way streets (below libxl) AFAICT.

  Paul

> Jan



From xen-devel-bounces@lists.xenproject.org Fri Dec 04 08:26:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 08:26:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44117.79089 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6Pm-0002qB-7G; Fri, 04 Dec 2020 08:26:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44117.79089; Fri, 04 Dec 2020 08:26:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6Pm-0002q4-32; Fri, 04 Dec 2020 08:26:06 +0000
Received: by outflank-mailman (input) for mailman id 44117;
 Fri, 04 Dec 2020 08:26:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2IQI=FI=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kl6Pk-0002pz-TM
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 08:26:04 +0000
Received: from mail-wr1-x42c.google.com (unknown [2a00:1450:4864:20::42c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 545975b8-69c6-4cf9-9ac7-5d3d1e285600;
 Fri, 04 Dec 2020 08:26:04 +0000 (UTC)
Received: by mail-wr1-x42c.google.com with SMTP id i2so4443859wrs.4
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 00:26:04 -0800 (PST)
Received: from CBGR90WXYV0 (54-240-197-233.amazon.com. [54.240.197.233])
 by smtp.gmail.com with ESMTPSA id g184sm2329176wma.16.2020.12.04.00.26.02
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 04 Dec 2020 00:26:02 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 545975b8-69c6-4cf9-9ac7-5d3d1e285600
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=6F6qkA6GyOCTVIrdg0AwkS09cXyFrJLXka5HfjpHUdA=;
        b=BoM8jnz05PIXfwbGOaPdzyn/bbit34cln3c6eF/aK+dGbMD1SozdPWAdRCs2i/JwLS
         TgXz8vJLLvJmP2ALwiDP6t86v3L0FGg2ywD3KhKt/t4sj/+ZRQOnQoJZ8JWf8L6ute72
         gJOEt29TZaTJ/CYloUM9Rc53xrOZC6ssIIua+f44qoic5oBJIp9zpwy7mnW3Tw8nmTDm
         vCDisr2fRy9g2iGRMNojEK6s1NWj4hnH2dB7ExO7yJO9n9oMq9ol3ksf/O4hJq5XWIKT
         Xxpreqkhp3sS9IzZ2FlGyNZKq3ChKxj3dH5dtDyur7XCNtsc/zPs4vDzGm+9FwvfRdVS
         LLLA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=6F6qkA6GyOCTVIrdg0AwkS09cXyFrJLXka5HfjpHUdA=;
        b=hLadQg1+R1dL9fI7pD/gudVaVPEf4z9rbbc0VD/mue0jIMGm1X+8leqlG622DRCWeZ
         1x7HbOThx7jlgne1z42Y0uUy1093NgMJd7YnN2nG+4TBgShC2Hg8LRb1FFOdiFqumyJ7
         t/ru0myeNmYRTOi6Us/J/nBloo3Fky/D/c4AD55+X8IeAEmFQKHVeDJU8MNsA9kCCwIQ
         g/FzlTDvBxLGlSe/H4gXa7iPmTPX3aJk3lm0Sk2dtWagMVnDVWRSKyK1OzsJszPJpsob
         NnAS7ykfjLA312a1/C+m84F+HEIDRz1enxwfHZGgm8H0Qh+fibyAXlR8zKteh+Lm3zDM
         n3+Q==
X-Gm-Message-State: AOAM532tMlkUv2zs1KXWJ/k5ZQlk60T+KQWHF0ymnrTxs2YkjBhHH69n
	BlXYS7lfSeBfLvd5k/H9or8=
X-Google-Smtp-Source: ABdhPJwWA+FWVX2BpVAW3Z0/Y+/07GedtyYFajImr9hVdF15wh1Rziqiv/IQJJHdLwVf4Axa7q4ABQ==
X-Received: by 2002:adf:ecc9:: with SMTP id s9mr3575178wro.246.1607070363290;
        Fri, 04 Dec 2020 00:26:03 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>,
	"'Wei Liu'" <wl@xen.org>
Cc: "'Paul Durrant'" <pdurrant@amazon.com>,
	<xen-devel@lists.xenproject.org>
References: <20201202092205.906-1-paul@xen.org> <fabc2720-3cbc-0b3f-1b09-23ec25189407@suse.com>
In-Reply-To: <fabc2720-3cbc-0b3f-1b09-23ec25189407@suse.com>
Subject: RE: [PATCH v4 00/11] viridian: add support for ExProcessorMasks
Date: Fri, 4 Dec 2020 08:26:01 -0000
Message-ID: <011301d6ca17$1a3fb690$4ebf23b0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQFAW81zKVdA94714yPPI08guNmREwKxDbdBqv3tT6A=

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 04 December 2020 08:12
> To: Wei Liu <wl@xen.org>; Paul Durrant <paul@xen.org>
> Cc: Paul Durrant <pdurrant@amazon.com>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH v4 00/11] viridian: add support for ExProcessorMasks
>
> Wei,
>
> On 02.12.2020 10:21, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > Paul Durrant (11):
> >   viridian: don't blindly write to 32-bit registers if 'mode' is invalid
> >   viridian: move flush hypercall implementation into separate function
> >   viridian: move IPI hypercall implementation into separate function
> >   viridian: introduce a per-cpu hypercall_vpmask and accessor
> >     functions...
> >   viridian: use hypercall_vpmask in hvcall_ipi()
> >   viridian: use softirq batching in hvcall_ipi()
> >   viridian: add ExProcessorMasks variants of the flush hypercalls
> >   viridian: add ExProcessorMasks variant of the IPI hypercall
> >   viridian: log initial invocation of each type of hypercall
> >   viridian: add a new '_HVMPV_ex_processor_masks' bit into
> >     HVM_PARAM_VIRIDIAN...
> >   xl / libxl: add 'ex_processor_mask' into
> >     'libxl_viridian_enlightenment'
> >
> >  docs/man/xl.cfg.5.pod.in             |   8 +
> >  tools/include/libxl.h                |   7 +
> >  tools/libs/light/libxl_types.idl     |   1 +
> >  tools/libs/light/libxl_x86.c         |   3 +
> >  xen/arch/x86/hvm/viridian/viridian.c | 604 +++++++++++++++++++++------
> >  xen/include/asm-x86/hvm/viridian.h   |  10 +
> >  xen/include/public/hvm/params.h      |   7 +-
> >  7 files changed, 516 insertions(+), 124 deletions(-)
>
> the status of this series was one of the topics of yesterday's
> community call. Since Paul's prior ping hasn't had a response by
> you (possibly because you're on PTO for an extended period of
> time) the plan is to get this series in with as much of
> reviewing that I was able to do by, perhaps, the middle of next
> week. Unless of course we hear back from you earlier, giving at
> least an indication of when you might be able to look at this.
>
> Thanks for your understanding.
>
> Paul, I notice v4 patches 10 and 11 never arrived in my inbox.

Oh, yes... I don't see them in my mail either. (I guess I did 'git
send-email 000*' instead of 'git send-email 00*'). I'll send v5 (with
the extra style fix) and get them on list.
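[Editorial note: for illustration, with file names assumed to follow the `git format-patch --cover-letter` numbering scheme.] The glob `000*` stops matching at patch 9, which is exactly how the last two patches of an 11-patch series get left behind:

```shell
# Simulate the series: a cover letter (0000) plus patches 0001..0011,
# as 'git format-patch --cover-letter' would number them.
dir=$(mktemp -d)
cd "$dir"
for i in $(seq -w 0 11); do touch "00${i}-example.patch"; done

ls 000* | wc -l   # matches only 0000..0009 -> 10 files
ls 00*  | wc -l   # matches the whole series -> 12 files
```

The shell expands the glob silently, so git send-email gets no hint that two patch files were omitted.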

> The list archives also don't have them. Therefore I can't check
> the status of the tools side changes, and I don't think I'd
> want to commit those anyway without tool stack side acks, the
> more that they weren't part of what I've looked at.
>

Sure. The toolstack side is pretty trivial so hopefully Anthony or Ian
would be happy to give an ack.

  Paul

> Jan



From xen-devel-bounces@lists.xenproject.org Fri Dec 04 08:31:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 08:31:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44124.79101 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6UX-0003qj-Qj; Fri, 04 Dec 2020 08:31:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44124.79101; Fri, 04 Dec 2020 08:31:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6UX-0003qc-N6; Fri, 04 Dec 2020 08:31:01 +0000
Received: by outflank-mailman (input) for mailman id 44124;
 Fri, 04 Dec 2020 08:31:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c9tS=FI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kl6UW-0003qV-Fn
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 08:31:00 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 473efd3f-d951-4b0a-bf74-9e7760a255c8;
 Fri, 04 Dec 2020 08:30:59 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 42418AE1C;
 Fri,  4 Dec 2020 08:30:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 473efd3f-d951-4b0a-bf74-9e7760a255c8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607070658; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=5P8en+k2rGXM8pq6flKcnrVroXRbzVd/CL+FGiJQLNY=;
	b=f7a2q0gBNgxBaVZ29o/beZ1Be65yDfelrqNjkPgKVhfzSAT3uVbvqhgzdfvcZ0BJV4OLDm
	tDbBOvNMmjS8WM+F2orfuPFispHpWQcjTzCeq96Vm3rVmd6YUUPE3zlzRi7krhDK7dHOys
	bFcF0uyYQLg4EYTHlTod7wpizyDxQE8=
Subject: Re: [PATCH v2 12/17] xen/hypfs: add new enter() and exit() per node
 callbacks
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-13-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ab7f1c91-ec59-1024-b902-d633bf90dd81@suse.com>
Date: Fri, 4 Dec 2020 09:30:58 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201201082128.15239-13-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.12.2020 09:21, Juergen Gross wrote:
> @@ -100,11 +112,58 @@ static void hypfs_unlock(void)
>      }
>  }
>  
> +const struct hypfs_entry *hypfs_node_enter(const struct hypfs_entry *entry)
> +{
> +    return entry;
> +}
> +
> +void hypfs_node_exit(const struct hypfs_entry *entry)
> +{
> +}
> +
> +static int node_enter(const struct hypfs_entry *entry)
> +{
> +    const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered);
> +
> +    entry = entry->funcs->enter(entry);
> +    if ( IS_ERR(entry) )
> +        return PTR_ERR(entry);
> +
> +    ASSERT(!*last || *last == entry->parent);
> +
> +    *last = entry;

In such a callback case I wonder whether you wouldn't want to also
guard against NULL coming back, perhaps simply by mistake, but of
course once handling it here such could even become permissible
behavior.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 08:33:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 08:33:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44129.79113 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6XH-00042C-Cr; Fri, 04 Dec 2020 08:33:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44129.79113; Fri, 04 Dec 2020 08:33:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6XH-000425-9q; Fri, 04 Dec 2020 08:33:51 +0000
Received: by outflank-mailman (input) for mailman id 44129;
 Fri, 04 Dec 2020 08:33:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov5/=FI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kl6XF-00041z-8E
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 08:33:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 813ff543-e725-414e-b283-9f051ed7ac18;
 Fri, 04 Dec 2020 08:33:48 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 97C75AE1C;
 Fri,  4 Dec 2020 08:33:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 813ff543-e725-414e-b283-9f051ed7ac18
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607070827; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=TRXR4H+VQZieES9nDZ0CJWVCk9vVRbeNavGyf5qi30I=;
	b=oacqaRd+TY0beFyxfpAViEafIlUtxxHMkbym95rPlQ9VDHWHUnnuiUYa2ug7YQEAlb5jmu
	tFGN6CzMb8fzt7VbWKLt4jkrthk1qRUSqhwiYfr+zF00bJBGsJh/OrFrep8DI8FgV/OJ2w
	Tu9ejly068lTVC4XHyZA+Roq+vSMFiU=
Subject: Re: [PATCH v2 12/17] xen/hypfs: add new enter() and exit() per node
 callbacks
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-13-jgross@suse.com>
 <0c57dd86-36d9-c378-6bdb-50221a7812b8@suse.com>
 <2503547c-1b3c-2224-c4a9-c647d9d1a058@suse.com>
 <6593ed01-23d0-70ac-faa3-556c69adec2b@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <2704feac-3935-d226-8476-c47ab2d72e92@suse.com>
Date: Fri, 4 Dec 2020 09:33:46 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <6593ed01-23d0-70ac-faa3-556c69adec2b@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="oqsUZUCcfd0c9j5E1B9ELEDZz4ENdsQ7O"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--oqsUZUCcfd0c9j5E1B9ELEDZz4ENdsQ7O
Content-Type: multipart/mixed; boundary="2FWaqqLCqxvcwOsmasirwRWYnSHeIq1bb";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <2704feac-3935-d226-8476-c47ab2d72e92@suse.com>
Subject: Re: [PATCH v2 12/17] xen/hypfs: add new enter() and exit() per node
 callbacks
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-13-jgross@suse.com>
 <0c57dd86-36d9-c378-6bdb-50221a7812b8@suse.com>
 <2503547c-1b3c-2224-c4a9-c647d9d1a058@suse.com>
 <6593ed01-23d0-70ac-faa3-556c69adec2b@suse.com>
In-Reply-To: <6593ed01-23d0-70ac-faa3-556c69adec2b@suse.com>

--2FWaqqLCqxvcwOsmasirwRWYnSHeIq1bb
Content-Type: multipart/mixed;
 boundary="------------785054A3C89D5DF546978FD1"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------785054A3C89D5DF546978FD1
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 03.12.20 16:29, Jan Beulich wrote:
> On 03.12.2020 16:14, Jürgen Groß wrote:
>> On 03.12.20 15:59, Jan Beulich wrote:
>>> On 01.12.2020 09:21, Juergen Gross wrote:
>>>> @@ -100,11 +112,58 @@ static void hypfs_unlock(void)
>>>>        }
>>>>    }
>>>>    
>>>> +const struct hypfs_entry *hypfs_node_enter(const struct hypfs_entry *entry)
>>>> +{
>>>> +    return entry;
>>>> +}
>>>> +
>>>> +void hypfs_node_exit(const struct hypfs_entry *entry)
>>>> +{
>>>> +}
>>>> +
>>>> +static int node_enter(const struct hypfs_entry *entry)
>>>> +{
>>>> +    const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered);
>>>> +
>>>> +    entry = entry->funcs->enter(entry);
>>>> +    if ( IS_ERR(entry) )
>>>> +        return PTR_ERR(entry);
>>>> +
>>>> +    ASSERT(!*last || *last == entry->parent);
>>>> +
>>>> +    *last = entry;
>>>> +
>>>> +    return 0;
>>>> +}
>>>> +
>>>> +static void node_exit(const struct hypfs_entry *entry)
>>>> +{
>>>> +    const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered);
>>>> +
>>>> +    if ( !*last )
>>>> +        return;
>>>
>>> Under what conditions is this legitimate to happen? IOW shouldn't
>>> there be an ASSERT_UNREACHABLE() here?
>>
>> This is for the "/" node.
> 
> I.e. would ASSERT(!entry->parent) be appropriate to add here, at
> the same time serving as documentation of what you've just said?

I rechecked and found that this was a remnant of an earlier variant.
*last can never be NULL, so the if can be dropped (a NULL will be
caught by the following ASSERT()).


Juergen

--------------785054A3C89D5DF546978FD1
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------785054A3C89D5DF546978FD1--

--2FWaqqLCqxvcwOsmasirwRWYnSHeIq1bb--

--oqsUZUCcfd0c9j5E1B9ELEDZz4ENdsQ7O
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/J9GoFAwAAAAAACgkQsN6d1ii/Ey9q
2wf8CZQnnuZHTNm7YwszHlHSdN1yZ9uPnPipFn/kZQHUTRsjl4Vnj1nHHdLhIyvso4nhE8rBqpwN
FcgSX4BG4It2dXdYia+CFlLgxbtvwE01bFud43GZMFUkK0NznzWDhqzj/p6X2fCWMPQWWV39FD3h
kiyWJyKhj1zH3f9nGBj0+awbHI/wZBkoF5ie35kWZVPJFCXPaNlNtJZu3wCFk65OuyhVnqQJF5tj
vqbBzEWgRt53Bzyi1q287Cw/Xl0aVA7kVVrMa9rvYqWsupSZxsV7pKojjhp3G4nC/AYQ4ynioD9E
hyvCsBRm9lmqnEmDr4d34thZ8+QKcSA/qqkQfJbEjg==
=uo7h
-----END PGP SIGNATURE-----

--oqsUZUCcfd0c9j5E1B9ELEDZz4ENdsQ7O--


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 08:35:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 08:35:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44135.79127 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6Yg-00049x-Rd; Fri, 04 Dec 2020 08:35:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44135.79127; Fri, 04 Dec 2020 08:35:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6Yg-00049q-OY; Fri, 04 Dec 2020 08:35:18 +0000
Received: by outflank-mailman (input) for mailman id 44135;
 Fri, 04 Dec 2020 08:35:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov5/=FI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kl6Yf-00049k-P4
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 08:35:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7bf41951-72ef-4148-b451-2c1d3b4a8dd2;
 Fri, 04 Dec 2020 08:35:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1F9D6ACC3;
 Fri,  4 Dec 2020 08:35:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7bf41951-72ef-4148-b451-2c1d3b4a8dd2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607070916; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=GUpK2PyY2kusj3Z1dx1eQCb2zLnm3D2wdLUSomCkLyI=;
	b=qX+IlipFs/KiRM2vrUTLE3XJBLtfFZcmkCALtP4gnkysREKTl2HBTEyCB5TSg8SOez2SAG
	WAtKNC1socBAN3I1Bu3Sw/OF5jF8aJmvHu8FrvSsuuDYQRvh7jDzaJD0HL/iRWHLvUm7gq
	bZDCDhjNjtfVqNQz7UIyzkXFOB31/rQ=
Subject: Re: [PATCH v2 12/17] xen/hypfs: add new enter() and exit() per node
 callbacks
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-13-jgross@suse.com>
 <ab7f1c91-ec59-1024-b902-d633bf90dd81@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <dc829d44-9479-b723-ebb3-e90c3e0f49ad@suse.com>
Date: Fri, 4 Dec 2020 09:35:15 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <ab7f1c91-ec59-1024-b902-d633bf90dd81@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="tkjX3SZCAz4HlSD9ZyziWDZEz32w5KShi"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--tkjX3SZCAz4HlSD9ZyziWDZEz32w5KShi
Content-Type: multipart/mixed; boundary="GR5bhPwhw3CVFpbz0bVb24jEdiFpRgKLi";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <dc829d44-9479-b723-ebb3-e90c3e0f49ad@suse.com>
Subject: Re: [PATCH v2 12/17] xen/hypfs: add new enter() and exit() per node
 callbacks
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-13-jgross@suse.com>
 <ab7f1c91-ec59-1024-b902-d633bf90dd81@suse.com>
In-Reply-To: <ab7f1c91-ec59-1024-b902-d633bf90dd81@suse.com>

--GR5bhPwhw3CVFpbz0bVb24jEdiFpRgKLi
Content-Type: multipart/mixed;
 boundary="------------00F1DD82E2381B9CBADD9DAE"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------00F1DD82E2381B9CBADD9DAE
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 04.12.20 09:30, Jan Beulich wrote:
> On 01.12.2020 09:21, Juergen Gross wrote:
>> @@ -100,11 +112,58 @@ static void hypfs_unlock(void)
>>       }
>>   }
>>   
>> +const struct hypfs_entry *hypfs_node_enter(const struct hypfs_entry *entry)
>> +{
>> +    return entry;
>> +}
>> +
>> +void hypfs_node_exit(const struct hypfs_entry *entry)
>> +{
>> +}
>> +
>> +static int node_enter(const struct hypfs_entry *entry)
>> +{
>> +    const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered);
>> +
>> +    entry = entry->funcs->enter(entry);
>> +    if ( IS_ERR(entry) )
>> +        return PTR_ERR(entry);
>> +
>> +    ASSERT(!*last || *last == entry->parent);
>> +
>> +    *last = entry;
> 
> In such a callback case I wonder whether you wouldn't want to also
> guard against NULL coming back, perhaps simply by mistake, but of
> course once handling it here such could even become permissible
> behavior.

I think you are right. I should add an ASSERT(entry);


Juergen

--------------00F1DD82E2381B9CBADD9DAE--

--GR5bhPwhw3CVFpbz0bVb24jEdiFpRgKLi--

--tkjX3SZCAz4HlSD9ZyziWDZEz32w5KShi
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/J9MMFAwAAAAAACgkQsN6d1ii/Ey/l
Ggf+IoPPINU11OMFZWFNQdFU0/lRvf20ek1Fq35ctd3gNewQYHV8T8+2IZQpR/KtKcCJvroH2qHi
Dfem7x3E6Bj9Y4LJEtSCrusgUsKYMXnLnJyGotwqwkjAvRSCGOmKKLTmU8WfJUxtn4EW6JL46l2g
9q1CWzA7YFIts2S/kF7WwgMtpi+YjNsX39seS+JAdMbfK+JeXPTXz7HPvecGMfuQEO3zylFXtZVg
Jb4uH8WhVO4yZwJJ4I07OllpYW/xMj+7y6jXhzvdckFOqOlysmPT0YLXfee8LBluDG6eRiPaMjG5
//FWWrK8Pojdy7ILhRb/gyhcpj4yZniOrPeGfgy8NQ==
=aGEH
-----END PGP SIGNATURE-----

--tkjX3SZCAz4HlSD9ZyziWDZEz32w5KShi--


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 08:52:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 08:52:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44148.79143 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6pI-0006Co-FX; Fri, 04 Dec 2020 08:52:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44148.79143; Fri, 04 Dec 2020 08:52:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6pI-0006Ch-B3; Fri, 04 Dec 2020 08:52:28 +0000
Received: by outflank-mailman (input) for mailman id 44148;
 Fri, 04 Dec 2020 08:52:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kl6pH-0006Cc-GP
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 08:52:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kl6pG-00047b-AA; Fri, 04 Dec 2020 08:52:26 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kl6pG-00080R-22; Fri, 04 Dec 2020 08:52:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=rA7cUkEhzOwYWHpkZlKG9tDLVRXBLDbhvxlPgKDSKcE=; b=se5/V9eA0JPLOii9tZzHK6ikeX
	ze3PtCW1c+Bo1Vg+QxrtkT+/pWMvFu8dWOnwKiVW1jTjXP08VBpwiC+pEch0PSx9J9KER7eholhaP
	MZA55J7w7T0GeBjAOkF1jQJ3pLCfGtAMXoIteP39J8QKnXVwH1HMKG3bUU/ryjWwNOqY=;
Subject: Re: [PATCH v2 3/8] xen/arm: revert patch related to XArray
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <612c1adabc1c26a539abf0dc05ea20b51e66e85f.1606406359.git.rahul.singh@arm.com>
 <266b918c-b9c4-e067-b8dc-4e879c913af5@xen.org>
 <2B1A7090-F07C-4DF9-BDEC-6E5A2D715DB4@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <0f59af9e-b639-5a79-2227-bb6d18270fcd@xen.org>
Date: Fri, 4 Dec 2020 08:52:24 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <2B1A7090-F07C-4DF9-BDEC-6E5A2D715DB4@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 03/12/2020 12:57, Rahul Singh wrote:
> Hello Julien,

Hi,

>> On 2 Dec 2020, at 1:46 pm, Julien Grall <julien@xen.org> wrote:
>>
>> Hi Rahul,
>>
>> On 26/11/2020 17:02, Rahul Singh wrote:
>>> XArray is not implemented in XEN; revert the patch that introduced the
>>> XArray code in the SMMUv3 driver.
>>
>> Similar to the atomic revert, you are explaining why the revert is done but not its consequences. I think it is quite important to have them outlined in the commit message, as it looks like the SMMU driver will not scale without this change.
> 
> OK, I will add that.
>>
>>> Once XArray is implemented in XEN, this patch can be added back.
>>
>> What's the plan for that?
> 
> As of now there is no plan for XArray from our side.
> Do we have a requirement for an XArray implementation in XEN?

That is going to depend on the consequences of reverting this patch.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 08:52:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 08:52:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44149.79155 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6pQ-0006El-Mu; Fri, 04 Dec 2020 08:52:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44149.79155; Fri, 04 Dec 2020 08:52:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6pQ-0006Ee-Jm; Fri, 04 Dec 2020 08:52:36 +0000
Received: by outflank-mailman (input) for mailman id 44149;
 Fri, 04 Dec 2020 08:52:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov5/=FI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kl6pP-0006ED-1q
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 08:52:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3720c652-cbaf-46c8-8d10-4aa98d53ccc1;
 Fri, 04 Dec 2020 08:52:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 80709ACC1;
 Fri,  4 Dec 2020 08:52:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3720c652-cbaf-46c8-8d10-4aa98d53ccc1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607071951; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=zku6R6kuLFsg36142FeNi5AOowt8MhdJ5wlwtSAhXjo=;
	b=Dufe4201viJzWoqGPmu5cKUs0R/ggVOxd+vXQoP5LKdN1vD8GLZsGL7CbuPKQToq0DVX0Y
	THr0zjnKs25hbyKkDkCUz+EBeqAnj0DXbJ3J6dCEi9VrU9qucJJGQa+4rWacF1+FHt6lmW
	usquLKM9plD/o2uwia+GXFSyufTWrbI=
Subject: Re: [PATCH v2 14/17] xen/hypfs: add support for id-based dynamic
 directories
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-15-jgross@suse.com>
 <369bcb0b-5554-8976-d3fe-5066b3d7cdce@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <774ca9f3-3bbe-817f-5ecb-76054aa619f5@suse.com>
Date: Fri, 4 Dec 2020 09:52:30 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <369bcb0b-5554-8976-d3fe-5066b3d7cdce@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="vg2sd2UFiGpcpldpQB7g7D2KVCYDpX48W"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--vg2sd2UFiGpcpldpQB7g7D2KVCYDpX48W
Content-Type: multipart/mixed; boundary="AAmXXDQhMXpMdHqJx3JuHlzZfsya0dLR3";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <774ca9f3-3bbe-817f-5ecb-76054aa619f5@suse.com>
Subject: Re: [PATCH v2 14/17] xen/hypfs: add support for id-based dynamic
 directories
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-15-jgross@suse.com>
 <369bcb0b-5554-8976-d3fe-5066b3d7cdce@suse.com>
In-Reply-To: <369bcb0b-5554-8976-d3fe-5066b3d7cdce@suse.com>

--AAmXXDQhMXpMdHqJx3JuHlzZfsya0dLR3
Content-Type: multipart/mixed;
 boundary="------------78B258B6CAB4FDC7EA31554D"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------78B258B6CAB4FDC7EA31554D
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 03.12.20 16:44, Jan Beulich wrote:
> On 01.12.2020 09:21, Juergen Gross wrote:
>> --- a/xen/common/hypfs.c
>> +++ b/xen/common/hypfs.c
>> @@ -355,6 +355,81 @@ unsigned int hypfs_getsize(const struct hypfs_entry *entry)
>>       return entry->size;
>>   }
>>   
>> +int hypfs_read_dyndir_id_entry(const struct hypfs_entry_dir *template,
>> +                               unsigned int id, bool is_last,
>> +                               XEN_GUEST_HANDLE_PARAM(void) *uaddr)
>> +{
>> +    struct xen_hypfs_dirlistentry direntry;
>> +    char name[HYPFS_DYNDIR_ID_NAMELEN];
>> +    unsigned int e_namelen, e_len;
>> +
>> +    e_namelen = snprintf(name, sizeof(name), template->e.name, id);
>> +    e_len = DIRENTRY_SIZE(e_namelen);
>> +    direntry.e.pad = 0;
>> +    direntry.e.type = template->e.type;
>> +    direntry.e.encoding = template->e.encoding;
>> +    direntry.e.content_len = template->e.funcs->getsize(&template->e);
>> +    direntry.e.max_write_len = template->e.max_size;
>> +    direntry.off_next = is_last ? 0 : e_len;
>> +    if ( copy_to_guest(*uaddr, &direntry, 1) )
>> +        return -EFAULT;
>> +    if ( copy_to_guest_offset(*uaddr, DIRENTRY_NAME_OFF, name,
>> +                              e_namelen + 1) )
>> +        return -EFAULT;
>> +
>> +    guest_handle_add_offset(*uaddr, e_len);
>> +
>> +    return 0;
>> +}
>> +
>> +static struct hypfs_entry *hypfs_dyndir_findentry(
>> +    const struct hypfs_entry_dir *dir, const char *name, unsigned int name_len)
>> +{
>> +    const struct hypfs_dyndir_id *data;
>> +
>> +    data = hypfs_get_dyndata();
>> +
>> +    /* Use template with original findentry function. */
>> +    return data->template->e.funcs->findentry(data->template, name, name_len);
>> +}
>> +
>> +static int hypfs_read_dyndir(const struct hypfs_entry *entry,
>> +                             XEN_GUEST_HANDLE_PARAM(void) uaddr)
>> +{
>> +    const struct hypfs_dyndir_id *data;
>> +
>> +    data = hypfs_get_dyndata();
>> +
>> +    /* Use template with original read function. */
>> +    return data->template->e.funcs->read(&data->template->e, uaddr);
>> +}
>> +
>> +struct hypfs_entry *hypfs_gen_dyndir_entry_id(
>> +    const struct hypfs_entry_dir *template, unsigned int id)
>> +{
>> +    struct hypfs_dyndir_id *data;
>> +
>> +    data = hypfs_get_dyndata();
>> +
>> +    data->template = template;
>> +    data->id = id;
>> +    snprintf(data->name, sizeof(data->name), template->e.name, id);
>> +    data->dir = *template;
>> +    data->dir.e.name = data->name;
> 
> I'm somewhat puzzled, if not confused, by the apparent redundancy
> of this name generation with that in hypfs_read_dyndir_id_entry().
> Wasn't the idea to be able to use generic functions on these
> generated entries?

I can add a macro replacing the double snprintf().

> Also, seeing that other function's name, wouldn't the one here
> want to be named hypfs_gen_dyndir_id_entry()?

Fine with me.


Juergen




From xen-devel-bounces@lists.xenproject.org Fri Dec 04 08:52:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 08:52:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44153.79166 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6pm-0006Mr-4r; Fri, 04 Dec 2020 08:52:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44153.79166; Fri, 04 Dec 2020 08:52:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6pm-0006Mi-1m; Fri, 04 Dec 2020 08:52:58 +0000
Received: by outflank-mailman (input) for mailman id 44153;
 Fri, 04 Dec 2020 08:52:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kl6pl-0006MW-Cw
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 08:52:57 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kl6pl-00048k-4A; Fri, 04 Dec 2020 08:52:57 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kl6pk-00081O-Ru; Fri, 04 Dec 2020 08:52:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=EQfXIx5Zj7Jggd0RKO1e+/5rvXiVA7UtKvrBeCYH6zw=; b=AQ6h9RaoJRVPPLMe+YVFivbe8p
	zIKXJ6XT6vR8PzX7wCoAjY2zFc87bUubJ5exPftq4kPzzUk9j9ohfAnxQ0rWLP2onBLuVKo7AF9Al
	eBCePA29ZW26Wm/KuL0ZIWYtv327kkfXg+2nKF+IQqk2ac34mxB5EvXo5Mussvshe4zA=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>
Subject: [PATCH v5 00/11] viridian: add support for ExProcessorMasks
Date: Fri,  4 Dec 2020 08:52:44 +0000
Message-Id: <20201204085255.26216-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Paul Durrant (11):
  viridian: don't blindly write to 32-bit registers if 'mode' is invalid
  viridian: move flush hypercall implementation into separate function
  viridian: move IPI hypercall implementation into separate function
  viridian: introduce a per-cpu hypercall_vpmask and accessor
    functions...
  viridian: use hypercall_vpmask in hvcall_ipi()
  viridian: use softirq batching in hvcall_ipi()
  viridian: add ExProcessorMasks variants of the flush hypercalls
  viridian: add ExProcessorMasks variant of the IPI hypercall
  viridian: log initial invocation of each type of hypercall
  viridian: add a new '_HVMPV_ex_processor_masks' bit into
    HVM_PARAM_VIRIDIAN...
  xl / libxl: add 'ex_processor_mask' into
    'libxl_viridian_enlightenment'

 docs/man/xl.cfg.5.pod.in             |   8 +
 tools/include/libxl.h                |   7 +
 tools/libs/light/libxl_types.idl     |   1 +
 tools/libs/light/libxl_x86.c         |   3 +
 xen/arch/x86/hvm/viridian/viridian.c | 604 +++++++++++++++++++++------
 xen/include/asm-x86/hvm/viridian.h   |  10 +
 xen/include/public/hvm/params.h      |   7 +-
 7 files changed, 516 insertions(+), 124 deletions(-)

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Dec 04 08:52:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 08:52:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44154.79179 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6pn-0006Of-CT; Fri, 04 Dec 2020 08:52:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44154.79179; Fri, 04 Dec 2020 08:52:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6pn-0006OX-9Q; Fri, 04 Dec 2020 08:52:59 +0000
Received: by outflank-mailman (input) for mailman id 44154;
 Fri, 04 Dec 2020 08:52:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kl6pm-0006O4-M1
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 08:52:58 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kl6pm-00048r-7N; Fri, 04 Dec 2020 08:52:58 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kl6pl-00081O-VI; Fri, 04 Dec 2020 08:52:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=yaMWoBKVxWVw7Z8qM/NkeD5nD3ZYv/c7zC/niSnkkOE=; b=KshhxBEgHpzYHBUii9lBAMJfY+
	uN9Nd35uC5ZdIID9uZDKOL4oZgKvwqYteu6Y8J/99Ivr3tFqgAlVyXiXORzhYbSE19+4UEm03FbU8
	vomxFx4HeyWLe7ocO1mnaeUBwj8YP9aFC/0FsZxuKKgbAnyoXZp7yGSOpKmGT0a/+qcM=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v5 01/11] viridian: don't blindly write to 32-bit registers if 'mode' is invalid
Date: Fri,  4 Dec 2020 08:52:45 +0000
Message-Id: <20201204085255.26216-2-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201204085255.26216-1-paul@xen.org>
References: <20201204085255.26216-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

If hvm_guest_x86_mode() returns something other than 8 or 4 then
viridian_hypercall() will return immediately but, on the way out, will write
back status as if 'mode' was 4. This patch simply makes it leave the registers
alone.

NOTE: The formatting of the 'out' label and the switch statement are also
      adjusted as per CODING_STYLE.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v5:
 - Fixed yet another CODING_STYLE violation.

v4:
 - Fixed another CODING_STYLE violation.

v2:
 - New in v2
---
 xen/arch/x86/hvm/viridian/viridian.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index dc7183a54627..3dbb5c2d4cc1 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -692,13 +692,15 @@ int viridian_hypercall(struct cpu_user_regs *regs)
         break;
     }
 
-out:
+ out:
     output.result = status;
-    switch (mode) {
+    switch ( mode )
+    {
     case 8:
         regs->rax = output.raw;
         break;
-    default:
+
+    case 4:
         regs->rdx = output.raw >> 32;
         regs->rax = (uint32_t)output.raw;
         break;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Dec 04 08:53:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 08:53:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44155.79191 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6po-0006Qu-NB; Fri, 04 Dec 2020 08:53:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44155.79191; Fri, 04 Dec 2020 08:53:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6po-0006Qh-Jq; Fri, 04 Dec 2020 08:53:00 +0000
Received: by outflank-mailman (input) for mailman id 44155;
 Fri, 04 Dec 2020 08:52:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kl6pn-0006PO-KK
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 08:52:59 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kl6pn-00048x-AB; Fri, 04 Dec 2020 08:52:59 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kl6pn-00081O-2M; Fri, 04 Dec 2020 08:52:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=SJu8nQYqeczNSm5Lpps/HMrQmEvc2j8XylWQJGoCW0Q=; b=PfOwomKPCdL09cX78hLgN57hek
	WxjqHmFxnF8RyIPs9ho6KA6Kn0MMrfCEEm9QlJ81whWTAp6m2v9DdbDWcmeLrS/aqXsP6PIJTIRYf
	cey5u4nMq3kJV2tjyoO+IU7O26HZJnNYMBjve5y09fe4a40b72fJYRVfuIRj6ubHHo7w=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v5 02/11] viridian: move flush hypercall implementation into separate function
Date: Fri,  4 Dec 2020 08:52:46 +0000
Message-Id: <20201204085255.26216-3-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201204085255.26216-1-paul@xen.org>
References: <20201204085255.26216-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This patch moves the implementation of HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST
that is currently inline in viridian_hypercall() into a new hvcall_flush()
function.

The new function returns Xen error values which are then dealt with
appropriately. A return value of -ERESTART translates to viridian_hypercall()
returning HVM_HCALL_preempted. Other return values translate to status codes
and viridian_hypercall() returning HVM_HCALL_completed. Currently the only
values, other than -ERESTART, returned by hvcall_flush() are 0 (indicating
success) or -EINVAL.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v3:
 - Adjust prototype of new function
---
 xen/arch/x86/hvm/viridian/viridian.c | 130 ++++++++++++++++-----------
 1 file changed, 78 insertions(+), 52 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 3dbb5c2d4cc1..f0b3ee65e3aa 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -518,6 +518,69 @@ static bool need_flush(void *ctxt, struct vcpu *v)
     return vcpu_mask & (1ul << v->vcpu_id);
 }
 
+union hypercall_input {
+    uint64_t raw;
+    struct {
+        uint16_t call_code;
+        uint16_t fast:1;
+        uint16_t rsvd1:15;
+        uint16_t rep_count:12;
+        uint16_t rsvd2:4;
+        uint16_t rep_start:12;
+        uint16_t rsvd3:4;
+    };
+};
+
+union hypercall_output {
+    uint64_t raw;
+    struct {
+        uint16_t result;
+        uint16_t rsvd1;
+        uint32_t rep_complete:12;
+        uint32_t rsvd2:20;
+    };
+};
+
+static int hvcall_flush(const union hypercall_input *input,
+                        union hypercall_output *output,
+                        paddr_t input_params_gpa,
+                        paddr_t output_params_gpa)
+{
+    struct {
+        uint64_t address_space;
+        uint64_t flags;
+        uint64_t vcpu_mask;
+    } input_params;
+
+    /* These hypercalls should never use the fast-call convention. */
+    if ( input->fast )
+        return -EINVAL;
+
+    /* Get input parameters. */
+    if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+                                  sizeof(input_params)) != HVMTRANS_okay )
+        return -EINVAL;
+
+    /*
+     * It is not clear from the spec. if we are supposed to
+     * include current virtual CPU in the set or not in this case,
+     * so err on the safe side.
+     */
+    if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
+        input_params.vcpu_mask = ~0ul;
+
+    /*
+     * A false return means that another vcpu is currently trying
+     * a similar operation, so back off.
+     */
+    if ( !paging_flush_tlb(need_flush, &input_params.vcpu_mask) )
+        return -ERESTART;
+
+    output->rep_complete = input->rep_count;
+
+    return 0;
+}
+
 int viridian_hypercall(struct cpu_user_regs *regs)
 {
     struct vcpu *curr = current;
@@ -525,29 +588,8 @@ int viridian_hypercall(struct cpu_user_regs *regs)
     int mode = hvm_guest_x86_mode(curr);
     unsigned long input_params_gpa, output_params_gpa;
     uint16_t status = HV_STATUS_SUCCESS;
-
-    union hypercall_input {
-        uint64_t raw;
-        struct {
-            uint16_t call_code;
-            uint16_t fast:1;
-            uint16_t rsvd1:15;
-            uint16_t rep_count:12;
-            uint16_t rsvd2:4;
-            uint16_t rep_start:12;
-            uint16_t rsvd3:4;
-        };
-    } input;
-
-    union hypercall_output {
-        uint64_t raw;
-        struct {
-            uint16_t result;
-            uint16_t rsvd1;
-            uint32_t rep_complete:12;
-            uint32_t rsvd2:20;
-        };
-    } output = { 0 };
+    union hypercall_input input;
+    union hypercall_output output = {};
 
     ASSERT(is_viridian_domain(currd));
 
@@ -580,41 +622,25 @@ int viridian_hypercall(struct cpu_user_regs *regs)
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
     {
-        struct {
-            uint64_t address_space;
-            uint64_t flags;
-            uint64_t vcpu_mask;
-        } input_params;
+        int rc = hvcall_flush(&input, &output, input_params_gpa,
+                              output_params_gpa);
 
-        /* These hypercalls should never use the fast-call convention. */
-        status = HV_STATUS_INVALID_PARAMETER;
-        if ( input.fast )
+        switch ( rc )
+        {
+        case 0:
             break;
 
-        /* Get input parameters. */
-        if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
-                                      sizeof(input_params)) !=
-             HVMTRANS_okay )
-            break;
-
-        /*
-         * It is not clear from the spec. if we are supposed to
-         * include current virtual CPU in the set or not in this case,
-         * so err on the safe side.
-         */
-        if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
-            input_params.vcpu_mask = ~0ul;
-
-        /*
-         * A false return means that another vcpu is currently trying
-         * a similar operation, so back off.
-         */
-        if ( !paging_flush_tlb(need_flush, &input_params.vcpu_mask) )
+        case -ERESTART:
             return HVM_HCALL_preempted;
 
-        output.rep_complete = input.rep_count;
+        default:
+            ASSERT_UNREACHABLE();
+            /* Fallthrough */
+        case -EINVAL:
+            status = HV_STATUS_INVALID_PARAMETER;
+            break;
+        }
 
-        status = HV_STATUS_SUCCESS;
         break;
     }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Dec 04 08:53:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 08:53:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44156.79203 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6pr-0006Uh-2W; Fri, 04 Dec 2020 08:53:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44156.79203; Fri, 04 Dec 2020 08:53:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6pq-0006UX-UL; Fri, 04 Dec 2020 08:53:02 +0000
Received: by outflank-mailman (input) for mailman id 44156;
 Fri, 04 Dec 2020 08:53:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kl6po-0006R7-P4
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 08:53:00 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kl6po-000494-DL; Fri, 04 Dec 2020 08:53:00 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kl6po-00081O-5d; Fri, 04 Dec 2020 08:53:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=+0Al/tr4MJNf/ubHYvoVGtFEnTG6Fvjontp4axhb608=; b=K+uA/vPlVhnw6B2Cd4n3hb6Ytv
	LhfyaPwl70xjf38pluqEIy4yYZRLstm11FVEMiNZT6IMfJOFm0pOT31UGSb5RCzPdFJMwq4QV5u95
	U0IZDX8TUWb+cgKYsVrN8BzNI2O9WTUOhdNpGbvCq3rN34dAVYfROOXizqVG75ec9rQ4=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v5 03/11] viridian: move IPI hypercall implementation into separate function
Date: Fri,  4 Dec 2020 08:52:47 +0000
Message-Id: <20201204085255.26216-4-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201204085255.26216-1-paul@xen.org>
References: <20201204085255.26216-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This patch moves the implementation of HVCALL_SEND_IPI that is currently
inline in viridian_hypercall() into a new hvcall_ipi() function.

The new function returns Xen errno values similarly to hvcall_flush(). Hence
the errno translation code in viridian_hypercall() is generalized, removing
the need for the local 'status' variable.

NOTE: The formatting of the switch statement at the top of
      viridian_hypercall() is also adjusted as per CODING_STYLE.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v3:
 - Adjust prototype of new function

v2:
 - Different formatting adjustment
---
 xen/arch/x86/hvm/viridian/viridian.c | 168 ++++++++++++++-------------
 1 file changed, 87 insertions(+), 81 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index f0b3ee65e3aa..77e90b502c69 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -581,13 +581,72 @@ static int hvcall_flush(const union hypercall_input *input,
     return 0;
 }
 
+static int hvcall_ipi(const union hypercall_input *input,
+                      union hypercall_output *output,
+                      paddr_t input_params_gpa,
+                      paddr_t output_params_gpa)
+{
+    struct domain *currd = current->domain;
+    struct vcpu *v;
+    uint32_t vector;
+    uint64_t vcpu_mask;
+
+    /* Get input parameters. */
+    if ( input->fast )
+    {
+        if ( input_params_gpa >> 32 )
+            return -EINVAL;
+
+        vector = input_params_gpa;
+        vcpu_mask = output_params_gpa;
+    }
+    else
+    {
+        struct {
+            uint32_t vector;
+            uint8_t target_vtl;
+            uint8_t reserved_zero[3];
+            uint64_t vcpu_mask;
+        } input_params;
+
+        if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+                                      sizeof(input_params)) != HVMTRANS_okay )
+            return -EINVAL;
+
+        if ( input_params.target_vtl ||
+             input_params.reserved_zero[0] ||
+             input_params.reserved_zero[1] ||
+             input_params.reserved_zero[2] )
+            return -EINVAL;
+
+        vector = input_params.vector;
+        vcpu_mask = input_params.vcpu_mask;
+    }
+
+    if ( vector < 0x10 || vector > 0xff )
+        return -EINVAL;
+
+    for_each_vcpu ( currd, v )
+    {
+        if ( v->vcpu_id >= (sizeof(vcpu_mask) * 8) )
+            return -EINVAL;
+
+        if ( !(vcpu_mask & (1ul << v->vcpu_id)) )
+            continue;
+
+        vlapic_set_irq(vcpu_vlapic(v), vector, 0);
+    }
+
+    return 0;
+}
+
 int viridian_hypercall(struct cpu_user_regs *regs)
 {
     struct vcpu *curr = current;
     struct domain *currd = curr->domain;
     int mode = hvm_guest_x86_mode(curr);
     unsigned long input_params_gpa, output_params_gpa;
-    uint16_t status = HV_STATUS_SUCCESS;
+    int rc = 0;
     union hypercall_input input;
     union hypercall_output output = {};
 
@@ -600,11 +659,13 @@ int viridian_hypercall(struct cpu_user_regs *regs)
         input_params_gpa = regs->rdx;
         output_params_gpa = regs->r8;
         break;
+
     case 4:
         input.raw = (regs->rdx << 32) | regs->eax;
         input_params_gpa = (regs->rbx << 32) | regs->ecx;
         output_params_gpa = (regs->rdi << 32) | regs->esi;
         break;
+
     default:
         goto out;
     }
@@ -616,92 +677,18 @@ int viridian_hypercall(struct cpu_user_regs *regs)
          * See section 14.5.1 of the specification.
          */
         do_sched_op(SCHEDOP_yield, guest_handle_from_ptr(NULL, void));
-        status = HV_STATUS_SUCCESS;
         break;
 
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
-    {
-        int rc = hvcall_flush(&input, &output, input_params_gpa,
-                              output_params_gpa);
-
-        switch ( rc )
-        {
-        case 0:
-            break;
-
-        case -ERESTART:
-            return HVM_HCALL_preempted;
-
-        default:
-            ASSERT_UNREACHABLE();
-            /* Fallthrough */
-        case -EINVAL:
-            status = HV_STATUS_INVALID_PARAMETER;
-            break;
-        }
-
+        rc = hvcall_flush(&input, &output, input_params_gpa,
+                          output_params_gpa);
         break;
-    }
 
     case HVCALL_SEND_IPI:
-    {
-        struct vcpu *v;
-        uint32_t vector;
-        uint64_t vcpu_mask;
-
-        status = HV_STATUS_INVALID_PARAMETER;
-
-        /* Get input parameters. */
-        if ( input.fast )
-        {
-            if ( input_params_gpa >> 32 )
-                break;
-
-            vector = input_params_gpa;
-            vcpu_mask = output_params_gpa;
-        }
-        else
-        {
-            struct {
-                uint32_t vector;
-                uint8_t target_vtl;
-                uint8_t reserved_zero[3];
-                uint64_t vcpu_mask;
-            } input_params;
-
-            if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
-                                          sizeof(input_params)) !=
-                 HVMTRANS_okay )
-                break;
-
-            if ( input_params.target_vtl ||
-                 input_params.reserved_zero[0] ||
-                 input_params.reserved_zero[1] ||
-                 input_params.reserved_zero[2] )
-                break;
-
-            vector = input_params.vector;
-            vcpu_mask = input_params.vcpu_mask;
-        }
-
-        if ( vector < 0x10 || vector > 0xff )
-            break;
-
-        for_each_vcpu ( currd, v )
-        {
-            if ( v->vcpu_id >= (sizeof(vcpu_mask) * 8) )
-                break;
-
-            if ( !(vcpu_mask & (1ul << v->vcpu_id)) )
-                continue;
-
-            vlapic_set_irq(vcpu_vlapic(v), vector, 0);
-        }
-
-        status = HV_STATUS_SUCCESS;
+        rc = hvcall_ipi(&input, &output, input_params_gpa,
+                        output_params_gpa);
         break;
-    }
 
     default:
         gprintk(XENLOG_WARNING, "unimplemented hypercall %04x\n",
@@ -714,12 +701,31 @@ int viridian_hypercall(struct cpu_user_regs *regs)
          * Given that return a status of 'invalid code' has not so far
          * caused any problems it's not worth logging.
          */
-        status = HV_STATUS_INVALID_HYPERCALL_CODE;
+        rc = -EOPNOTSUPP;
         break;
     }
 
  out:
-    output.result = status;
+    switch ( rc )
+    {
+    case 0:
+        break;
+
+    case -ERESTART:
+        return HVM_HCALL_preempted;
+
+    case -EOPNOTSUPP:
+        output.result = HV_STATUS_INVALID_HYPERCALL_CODE;
+        break;
+
+    default:
+        ASSERT_UNREACHABLE();
+        /* Fallthrough */
+    case -EINVAL:
+        output.result = HV_STATUS_INVALID_PARAMETER;
+        break;
+    }
+
     switch ( mode )
     {
     case 8:
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Dec 04 08:53:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 08:53:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44157.79211 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6pr-0006Vj-N6; Fri, 04 Dec 2020 08:53:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44157.79211; Fri, 04 Dec 2020 08:53:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6pr-0006VH-9u; Fri, 04 Dec 2020 08:53:03 +0000
Received: by outflank-mailman (input) for mailman id 44157;
 Fri, 04 Dec 2020 08:53:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kl6pp-0006TD-PK
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 08:53:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kl6pp-00049A-H0; Fri, 04 Dec 2020 08:53:01 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kl6pp-00081O-8w; Fri, 04 Dec 2020 08:53:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=xFA8GhlwFJSlImmF25Y7tztCCVN+Ec94bI0nrH45fvQ=; b=a8/kVosfWPSYwcXD7Cp/TzH/Z5
	jjX9ICTEGkx6gD8UXD0QzTvGPhONWD8ZTBW4fqgKgyKAlO7z8bEiseBVEE8I928UaWVdjmj94nOLO
	cAa3W9RpJ810U88YdC0ztRIDXIiaGSmRK/IyytNos41Oe5HI34nuJMV4jjV5LinE9oeo=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v5 04/11] viridian: introduce a per-cpu hypercall_vpmask and accessor functions...
Date: Fri,  4 Dec 2020 08:52:48 +0000
Message-Id: <20201204085255.26216-5-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201204085255.26216-1-paul@xen.org>
References: <20201204085255.26216-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... and make use of them in hvcall_flush()/need_flush().

Subsequent patches will need to deal with virtual processor masks potentially
wider than 64 bits. Thus, to avoid using too much stack, this patch
introduces global per-cpu virtual processor masks and converts the
implementation of hvcall_flush() to use them.
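
The per-cpu wide-mask idea can be sketched in standalone C (an illustration only, not the Xen code: MAX_VPS stands in for HVM_MAX_VCPUS, and open-coded bit operations replace Xen's bitmap.h primitives):

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>
#include <stdint.h>

#define MAX_VPS 128 /* stand-in for HVM_MAX_VCPUS */
#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)
#define BITMAP_WORDS ((MAX_VPS + BITS_PER_LONG - 1) / BITS_PER_LONG)

struct vpmask {
    unsigned long mask[BITMAP_WORDS];
};

/*
 * Copy a guest-supplied 64-bit mask into the wider bitmap at a base
 * offset 'vp', looping bit by bit as the patch's vpmask_set() does
 * (bitmap.h has no copy-at-offset primitive).
 */
static void vpmask_set(struct vpmask *vpmask, unsigned int vp, uint64_t mask)
{
    for ( ; mask; mask >>= 1, vp++ )
        if ( mask & 1 )
        {
            assert(vp < MAX_VPS);
            vpmask->mask[vp / BITS_PER_LONG] |= 1ul << (vp % BITS_PER_LONG);
        }
}

static bool vpmask_test(const struct vpmask *vpmask, unsigned int vp)
{
    assert(vp < MAX_VPS);
    return vpmask->mask[vp / BITS_PER_LONG] & (1ul << (vp % BITS_PER_LONG));
}
```

Because the masks live in per-cpu storage rather than on the stack, widening MAX_VPS later costs nothing in stack footprint.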

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v2:
 - Modified vpmask_set() to take a base 'vp' and a 64-bit 'mask', still
   looping over the mask as bitmap.h does not provide a primitive for copying
   one mask into another at an offset
 - Added ASSERTions to verify that we don't attempt to set or test bits
   beyond the limit of the map
---
 xen/arch/x86/hvm/viridian/viridian.c | 58 ++++++++++++++++++++++++++--
 1 file changed, 54 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 77e90b502c69..0274c8f2eec1 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -507,15 +507,59 @@ void viridian_domain_deinit(struct domain *d)
     XFREE(d->arch.hvm.viridian);
 }
 
+struct hypercall_vpmask {
+    DECLARE_BITMAP(mask, HVM_MAX_VCPUS);
+};
+
+static DEFINE_PER_CPU(struct hypercall_vpmask, hypercall_vpmask);
+
+static void vpmask_empty(struct hypercall_vpmask *vpmask)
+{
+    bitmap_zero(vpmask->mask, HVM_MAX_VCPUS);
+}
+
+static void vpmask_set(struct hypercall_vpmask *vpmask, unsigned int vp,
+                       uint64_t mask)
+{
+    unsigned int count = sizeof(mask) * 8;
+
+    while ( count-- )
+    {
+        if ( !mask )
+            break;
+
+        if ( mask & 1 )
+        {
+            ASSERT(vp < HVM_MAX_VCPUS);
+            __set_bit(vp, vpmask->mask);
+        }
+
+        mask >>= 1;
+        vp++;
+    }
+}
+
+static void vpmask_fill(struct hypercall_vpmask *vpmask)
+{
+    bitmap_fill(vpmask->mask, HVM_MAX_VCPUS);
+}
+
+static bool vpmask_test(const struct hypercall_vpmask *vpmask,
+                        unsigned int vp)
+{
+    ASSERT(vp < HVM_MAX_VCPUS);
+    return test_bit(vp, vpmask->mask);
+}
+
 /*
  * Windows should not issue the hypercalls requiring this callback in the
  * case where vcpu_id would exceed the size of the mask.
  */
 static bool need_flush(void *ctxt, struct vcpu *v)
 {
-    uint64_t vcpu_mask = *(uint64_t *)ctxt;
+    struct hypercall_vpmask *vpmask = ctxt;
 
-    return vcpu_mask & (1ul << v->vcpu_id);
+    return vpmask_test(vpmask, v->vcpu_id);
 }
 
 union hypercall_input {
@@ -546,6 +590,7 @@ static int hvcall_flush(const union hypercall_input *input,
                         paddr_t input_params_gpa,
                         paddr_t output_params_gpa)
 {
+    struct hypercall_vpmask *vpmask = &this_cpu(hypercall_vpmask);
     struct {
         uint64_t address_space;
         uint64_t flags;
@@ -567,13 +612,18 @@ static int hvcall_flush(const union hypercall_input *input,
      * so err on the safe side.
      */
     if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
-        input_params.vcpu_mask = ~0ul;
+        vpmask_fill(vpmask);
+    else
+    {
+        vpmask_empty(vpmask);
+        vpmask_set(vpmask, 0, input_params.vcpu_mask);
+    }
 
     /*
      * A false return means that another vcpu is currently trying
      * a similar operation, so back off.
      */
-    if ( !paging_flush_tlb(need_flush, &input_params.vcpu_mask) )
+    if ( !paging_flush_tlb(need_flush, vpmask) )
         return -ERESTART;
 
     output->rep_complete = input->rep_count;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Dec 04 08:53:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 08:53:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44160.79223 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6ps-0006Yf-RD; Fri, 04 Dec 2020 08:53:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44160.79223; Fri, 04 Dec 2020 08:53:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6ps-0006Xu-7j; Fri, 04 Dec 2020 08:53:04 +0000
Received: by outflank-mailman (input) for mailman id 44160;
 Fri, 04 Dec 2020 08:53:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kl6pq-0006UL-Qv
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 08:53:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kl6pq-00049V-KR; Fri, 04 Dec 2020 08:53:02 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kl6pq-00081O-CE; Fri, 04 Dec 2020 08:53:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=VB9C9TJfNJjFaO4TuFClll7Hzlvku7IXdGGyn//ncrE=; b=2wXi7Y0oDqBLVpVfPXP1r0Iyvl
	bfM1+20z4RznhHEEmQj7mM3OyF5tPgsCDnpai3pEn9rDaP666Sh20GEXBcEfYmZNb87tT1Gn9EzNN
	A5YsWwYraeRVjAANOTd0pWMljFl9ReSgxU54DP164TM+P2leEc//keUb1axcvkBYT5qs=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v5 05/11] viridian: use hypercall_vpmask in hvcall_ipi()
Date: Fri,  4 Dec 2020 08:52:49 +0000
Message-Id: <20201204085255.26216-6-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201204085255.26216-1-paul@xen.org>
References: <20201204085255.26216-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

A subsequent patch will need to IPI a mask of virtual processors potentially
wider than 64 bits. A previous patch introduced the per-cpu hypercall_vpmask
to allow hvcall_flush() to deal with such wide masks. This patch modifies
the implementation of hvcall_ipi() to make use of the same mask structures,
introducing a for_each_vp() macro to facilitate traversing a mask.
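
The find-first/find-next traversal that for_each_vp() wraps can be sketched as below (a self-contained illustration: next_bit() is a stand-in for Xen's find_next_bit(), and returning the bitmap size when no further bit is set is what terminates the loop):

```c
#include <assert.h>
#include <limits.h>

#define MAX_VPS 128 /* stand-in for HVM_MAX_VCPUS */
#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/*
 * Minimal stand-in for find_next_bit(): scan for the next set bit at or
 * after 'start', returning 'nbits' when there is none.
 */
static unsigned int next_bit(const unsigned long *map, unsigned int nbits,
                             unsigned int start)
{
    unsigned int i;

    for ( i = start; i < nbits; i++ )
        if ( map[i / BITS_PER_LONG] & (1ul << (i % BITS_PER_LONG)) )
            return i;

    return nbits;
}

/* Same shape as the patch's for_each_vp(): first bit, then successors. */
#define for_each_vp(map, vp) \
    for ( (vp) = next_bit(map, MAX_VPS, 0); \
          (vp) < MAX_VPS; \
          (vp) = next_bit(map, MAX_VPS, (vp) + 1) )
```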

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v3:
 - Couple of extra 'const' qualifiers

v2:
 - Drop the 'vp' loop now that vpmask_set() will do it internally
---
 xen/arch/x86/hvm/viridian/viridian.c | 44 +++++++++++++++++++++-------
 1 file changed, 33 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 0274c8f2eec1..3e2393be4160 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -551,6 +551,26 @@ static bool vpmask_test(const struct hypercall_vpmask *vpmask,
     return test_bit(vp, vpmask->mask);
 }
 
+static unsigned int vpmask_first(const struct hypercall_vpmask *vpmask)
+{
+    return find_first_bit(vpmask->mask, HVM_MAX_VCPUS);
+}
+
+static unsigned int vpmask_next(const struct hypercall_vpmask *vpmask,
+                                unsigned int vp)
+{
+    /*
+     * If vp + 1 > HVM_MAX_VCPUS then find_next_bit() will return
+     * HVM_MAX_VCPUS, ensuring the for_each_vp ( ... ) loop terminates.
+     */
+    return find_next_bit(vpmask->mask, HVM_MAX_VCPUS, vp + 1);
+}
+
+#define for_each_vp(vpmask, vp) \
+	for ( (vp) = vpmask_first(vpmask); \
+	      (vp) < HVM_MAX_VCPUS; \
+	      (vp) = vpmask_next(vpmask, vp) )
+
 /*
  * Windows should not issue the hypercalls requiring this callback in the
  * case where vcpu_id would exceed the size of the mask.
@@ -631,13 +651,21 @@ static int hvcall_flush(const union hypercall_input *input,
     return 0;
 }
 
+static void send_ipi(struct hypercall_vpmask *vpmask, uint8_t vector)
+{
+    struct domain *currd = current->domain;
+    unsigned int vp;
+
+    for_each_vp ( vpmask, vp )
+        vlapic_set_irq(vcpu_vlapic(currd->vcpu[vp]), vector, 0);
+}
+
 static int hvcall_ipi(const union hypercall_input *input,
                       union hypercall_output *output,
                       paddr_t input_params_gpa,
                       paddr_t output_params_gpa)
 {
-    struct domain *currd = current->domain;
-    struct vcpu *v;
+    struct hypercall_vpmask *vpmask = &this_cpu(hypercall_vpmask);
     uint32_t vector;
     uint64_t vcpu_mask;
 
@@ -676,16 +704,10 @@ static int hvcall_ipi(const union hypercall_input *input,
     if ( vector < 0x10 || vector > 0xff )
         return -EINVAL;
 
-    for_each_vcpu ( currd, v )
-    {
-        if ( v->vcpu_id >= (sizeof(vcpu_mask) * 8) )
-            return -EINVAL;
+    vpmask_empty(vpmask);
+    vpmask_set(vpmask, 0, vcpu_mask);
 
-        if ( !(vcpu_mask & (1ul << v->vcpu_id)) )
-            continue;
-
-        vlapic_set_irq(vcpu_vlapic(v), vector, 0);
-    }
+    send_ipi(vpmask, vector);
 
     return 0;
 }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Dec 04 08:53:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 08:53:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44161.79239 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6pu-0006cj-4t; Fri, 04 Dec 2020 08:53:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44161.79239; Fri, 04 Dec 2020 08:53:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6pu-0006cS-0p; Fri, 04 Dec 2020 08:53:06 +0000
Received: by outflank-mailman (input) for mailman id 44161;
 Fri, 04 Dec 2020 08:53:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kl6ps-0006XX-0s
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 08:53:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kl6pr-00049c-N7; Fri, 04 Dec 2020 08:53:03 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kl6pr-00081O-FS; Fri, 04 Dec 2020 08:53:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=e7E0uPlbvldsr+/Or2Usn28nGFXQXE0Lw/3OzuAS844=; b=OY+ZrhmXvDbIM5anfbTCwyHSQ7
	30+17YusEJP4C/9uJmN9sKEkYQJYER1LLIyicVMa7AOrV8qTPZ/tjl1ain7osv5p4sfO3bZndXP3l
	e3pWnu55uxJfdyhOg+O4cqsT7pM7MSEX42tZ32I4OshPooYduemELvob5fG99ldXLvMA=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v5 06/11] viridian: use softirq batching in hvcall_ipi()
Date: Fri,  4 Dec 2020 08:52:50 +0000
Message-Id: <20201204085255.26216-7-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201204085255.26216-1-paul@xen.org>
References: <20201204085255.26216-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

vlapic_ipi() uses a softirq batching mechanism to improve the efficiency of
sending IPIs to a large number of processors. This patch modifies send_ipi()
(the worker function called by hvcall_ipi()) to also make use of the
mechanism when there are multiple bits set in the hypercall_vpmask.
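
The conditional bracketing this describes amounts to the pattern below (a sketch only: counters stand in for cpu_raise_softirq_batch_begin()/_finish() and for the actual vlapic IPI delivery):

```c
#include <assert.h>

static int batch_depth; /* stand-in for the softirq batching state */

static void batch_begin(void)  { batch_depth++; }
static void batch_finish(void) { batch_depth--; }

/*
 * Deliver 'nr' IPIs, paying for the batch begin/finish bracketing only
 * when it can help, i.e. when more than one target is in the mask --
 * the same nr > 1 test send_ipi() applies in the patch.
 */
static unsigned int send_ipis(unsigned int nr)
{
    unsigned int sent = 0, i;

    if ( nr > 1 )
        batch_begin();

    for ( i = 0; i < nr; i++ )
        sent++; /* vlapic_set_irq(...) in the real code */

    if ( nr > 1 )
        batch_finish();

    return sent;
}
```

A single-target IPI thus keeps its original fast path, while multi-target IPIs are coalesced.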

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v2:
 - Don't add the 'nr' field to struct hypercall_vpmask and use
   bitmap_weight() instead
---
 xen/arch/x86/hvm/viridian/viridian.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 3e2393be4160..894946abcb72 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -11,6 +11,7 @@
 #include <xen/hypercall.h>
 #include <xen/domain_page.h>
 #include <xen/param.h>
+#include <xen/softirq.h>
 #include <asm/guest/hyperv-tlfs.h>
 #include <asm/paging.h>
 #include <asm/p2m.h>
@@ -571,6 +572,11 @@ static unsigned int vpmask_next(const struct hypercall_vpmask *vpmask,
 	      (vp) < HVM_MAX_VCPUS; \
 	      (vp) = vpmask_next(vpmask, vp) )
 
+static unsigned int vpmask_nr(const struct hypercall_vpmask *vpmask)
+{
+    return bitmap_weight(vpmask->mask, HVM_MAX_VCPUS);
+}
+
 /*
  * Windows should not issue the hypercalls requiring this callback in the
  * case where vcpu_id would exceed the size of the mask.
@@ -654,10 +660,17 @@ static int hvcall_flush(const union hypercall_input *input,
 static void send_ipi(struct hypercall_vpmask *vpmask, uint8_t vector)
 {
     struct domain *currd = current->domain;
+    unsigned int nr = vpmask_nr(vpmask);
     unsigned int vp;
 
+    if ( nr > 1 )
+        cpu_raise_softirq_batch_begin();
+
     for_each_vp ( vpmask, vp )
         vlapic_set_irq(vcpu_vlapic(currd->vcpu[vp]), vector, 0);
+
+    if ( nr > 1 )
+        cpu_raise_softirq_batch_finish();
 }
 
 static int hvcall_ipi(const union hypercall_input *input,
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Dec 04 08:53:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 08:53:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44162.79244 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6pu-0006eW-VW; Fri, 04 Dec 2020 08:53:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44162.79244; Fri, 04 Dec 2020 08:53:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6pu-0006e4-JC; Fri, 04 Dec 2020 08:53:06 +0000
Received: by outflank-mailman (input) for mailman id 44162;
 Fri, 04 Dec 2020 08:53:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kl6pt-0006aE-4g
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 08:53:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kl6ps-00049j-R8; Fri, 04 Dec 2020 08:53:04 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kl6ps-00081O-If; Fri, 04 Dec 2020 08:53:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=nl4mRHTd0JXoiq37UlZWoIYiBW8BoLkrX+JaJVuep5E=; b=6zsb+ZVoAijkIwEnP9g+Ejy13w
	PKwIucOQbBygvPQpn5yX8E0ZgU85N4+jHLceINxz9jCj3Dg7m/ckxe5TWbqkyM9psliQvV/sh9Drb
	IpeVpe7EsB4cFv7Q/pX0v7dnYW6OFhgiuqT4eMP3f9SpSqzJH3pRUUcIcgQMVUVcSELo=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v5 07/11] viridian: add ExProcessorMasks variants of the flush hypercalls
Date: Fri,  4 Dec 2020 08:52:51 +0000
Message-Id: <20201204085255.26216-8-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201204085255.26216-1-paul@xen.org>
References: <20201204085255.26216-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

The Microsoft Hypervisor TLFS specifies variants of the already implemented
HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST hypercalls that take a 'Virtual
Processor Set' as an argument rather than a simple 64-bit mask.

This patch adds a new hvcall_flush_ex() function to implement these
(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST_EX) hypercalls. This makes use of
new helper functions, hv_vpset_nr_banks() and hv_vpset_to_vpmask(), to
determine the size of the Virtual Processor Set (so it can be copied from
guest memory) and parse it into hypercall_vpmask (respectively).
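
The bank arithmetic these helpers rely on can be sketched as follows (a simplified standalone illustration: each set bit in valid_bank_mask selects one 64-VP bank, bank contents are stored contiguously for the valid banks only, and this sketch caps the flat mask at 128 VPs):

```c
#include <assert.h>
#include <stdint.h>

#define NR_VPS_PER_BANK 64

/* One bank per set bit in valid_bank_mask. */
static unsigned int nr_banks(uint64_t valid_bank_mask)
{
    unsigned int n = 0;

    for ( ; valid_bank_mask; valid_bank_mask &= valid_bank_mask - 1 )
        n++; /* hweight64() in the real code */

    return n;
}

/*
 * Expand a sparse set into a flat 128-VP mask (two 64-bit words).
 * bank_contents[n] holds the VPs of the n'th *valid* bank, not of
 * bank index n, so 'bank' only advances on valid banks while 'vp'
 * advances by NR_VPS_PER_BANK every iteration.
 */
static void vpset_to_flat(uint64_t valid_bank_mask,
                          const uint64_t *bank_contents, uint64_t out[2])
{
    unsigned int bank = 0, vp = 0;

    out[0] = out[1] = 0;
    for ( ; valid_bank_mask; valid_bank_mask >>= 1, vp += NR_VPS_PER_BANK )
        if ( valid_bank_mask & 1 )
        {
            assert(vp / NR_VPS_PER_BANK < 2); /* sketch caps at 2 banks */
            out[vp / NR_VPS_PER_BANK] = bank_contents[bank++];
        }
}
```

The bank count is what bounds the second copy from guest memory: only nr_banks() * 8 bytes of bank contents need fetching.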

NOTE: A guest should not yet issue these hypercalls as 'ExProcessorMasks'
      support needs to be advertised via CPUID. This will be done in a
      subsequent patch.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v3:
 - Adjust one of the helper macros
 - A few more consts and type tweaks
 - Adjust prototype of new function

v2:
 - Add helper macros to define mask and struct sizes
 - Use a union to determine the size of 'hypercall_vpset'
 - Use hweight64() in hv_vpset_nr_banks()
 - Sanity check size before hvm_copy_from_guest_phys()
---
 xen/arch/x86/hvm/viridian/viridian.c | 141 +++++++++++++++++++++++++++
 1 file changed, 141 insertions(+)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 894946abcb72..a4cece722e97 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -577,6 +577,69 @@ static unsigned int vpmask_nr(const struct hypercall_vpmask *vpmask)
     return bitmap_weight(vpmask->mask, HVM_MAX_VCPUS);
 }
 
+#define HV_VPSET_BANK_SIZE \
+    sizeof_field(struct hv_vpset, bank_contents[0])
+
+#define HV_VPSET_SIZE(banks)   \
+    (offsetof(struct hv_vpset, bank_contents) + \
+     ((banks) * HV_VPSET_BANK_SIZE))
+
+#define HV_VPSET_MAX_BANKS \
+    (sizeof_field(struct hv_vpset, valid_bank_mask) * 8)
+
+union hypercall_vpset {
+    struct hv_vpset set;
+    uint8_t pad[HV_VPSET_SIZE(HV_VPSET_MAX_BANKS)];
+};
+
+static DEFINE_PER_CPU(union hypercall_vpset, hypercall_vpset);
+
+static unsigned int hv_vpset_nr_banks(struct hv_vpset *vpset)
+{
+    return hweight64(vpset->valid_bank_mask);
+}
+
+static int hv_vpset_to_vpmask(const struct hv_vpset *set,
+                              struct hypercall_vpmask *vpmask)
+{
+#define NR_VPS_PER_BANK (HV_VPSET_BANK_SIZE * 8)
+
+    switch ( set->format )
+    {
+    case HV_GENERIC_SET_ALL:
+        vpmask_fill(vpmask);
+        return 0;
+
+    case HV_GENERIC_SET_SPARSE_4K:
+    {
+        uint64_t bank_mask;
+        unsigned int vp, bank = 0;
+
+        vpmask_empty(vpmask);
+        for ( vp = 0, bank_mask = set->valid_bank_mask;
+              bank_mask;
+              vp += NR_VPS_PER_BANK, bank_mask >>= 1 )
+        {
+            if ( bank_mask & 1 )
+            {
+                uint64_t mask = set->bank_contents[bank];
+
+                vpmask_set(vpmask, vp, mask);
+                bank++;
+            }
+        }
+        return 0;
+    }
+
+    default:
+        break;
+    }
+
+    return -EINVAL;
+
+#undef NR_VPS_PER_BANK
+}
+
 /*
  * Windows should not issue the hypercalls requiring this callback in the
  * case where vcpu_id would exceed the size of the mask.
@@ -657,6 +720,78 @@ static int hvcall_flush(const union hypercall_input *input,
     return 0;
 }
 
+static int hvcall_flush_ex(const union hypercall_input *input,
+                           union hypercall_output *output,
+                           paddr_t input_params_gpa,
+                           paddr_t output_params_gpa)
+{
+    struct hypercall_vpmask *vpmask = &this_cpu(hypercall_vpmask);
+    struct {
+        uint64_t address_space;
+        uint64_t flags;
+        struct hv_vpset set;
+    } input_params;
+
+    /* These hypercalls should never use the fast-call convention. */
+    if ( input->fast )
+        return -EINVAL;
+
+    /* Get input parameters. */
+    if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+                                  sizeof(input_params)) != HVMTRANS_okay )
+        return -EINVAL;
+
+    if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
+        vpmask_fill(vpmask);
+    else
+    {
+        union hypercall_vpset *vpset = &this_cpu(hypercall_vpset);
+        struct hv_vpset *set = &vpset->set;
+        size_t size;
+        int rc;
+
+        *set = input_params.set;
+        if ( set->format == HV_GENERIC_SET_SPARSE_4K )
+        {
+            unsigned long offset = offsetof(typeof(input_params),
+                                            set.bank_contents);
+
+            size = sizeof(*set->bank_contents) * hv_vpset_nr_banks(set);
+
+            if ( offsetof(typeof(*vpset), set.bank_contents[0]) + size >
+                 sizeof(*vpset) )
+            {
+                ASSERT_UNREACHABLE();
+                return -EINVAL;
+            }
+
+            if ( hvm_copy_from_guest_phys(&set->bank_contents[0],
+                                          input_params_gpa + offset,
+                                          size) != HVMTRANS_okay )
+                return -EINVAL;
+
+            size += sizeof(*set);
+        }
+        else
+            size = sizeof(*set);
+
+        rc = hv_vpset_to_vpmask(set, vpmask);
+        if ( rc )
+            return rc;
+    }
+
+    /*
+     * A false return means that another vcpu is currently trying
+     * a similar operation, so back off.
+     */
+    if ( !paging_flush_tlb(need_flush, vpmask) )
+        return -ERESTART;
+
+    output->rep_complete = input->rep_count;
+
+    return 0;
+}
+
 static void send_ipi(struct hypercall_vpmask *vpmask, uint8_t vector)
 {
     struct domain *currd = current->domain;
@@ -770,6 +905,12 @@ int viridian_hypercall(struct cpu_user_regs *regs)
                           output_params_gpa);
         break;
 
+    case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX:
+    case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX:
+        rc = hvcall_flush_ex(&input, &output, input_params_gpa,
+                             output_params_gpa);
+        break;
+
     case HVCALL_SEND_IPI:
         rc = hvcall_ipi(&input, &output, input_params_gpa,
                         output_params_gpa);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Dec 04 08:53:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 08:53:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44163.79262 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6pw-0006jN-Mv; Fri, 04 Dec 2020 08:53:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44163.79262; Fri, 04 Dec 2020 08:53:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6pw-0006io-Aw; Fri, 04 Dec 2020 08:53:08 +0000
Received: by outflank-mailman (input) for mailman id 44163;
 Fri, 04 Dec 2020 08:53:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kl6pu-0006dA-9S
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 08:53:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kl6pt-00049r-Te; Fri, 04 Dec 2020 08:53:05 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kl6pt-00081O-Lr; Fri, 04 Dec 2020 08:53:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=H830RGebjhB/7fuISXxF2D6b2V+2uO+On0a8uMTm7KY=; b=eGpiq7b7BXH9eR6zIvn71DZQHO
	1Ds1U6gZzv/JldBMDFJ1OZ7n/gGoWqTulXWjBe2T145I+t7/FdaM1xpHmUgJv62AA9jncSVvW4045
	huGLDJ4zqG7Voem7xs7L+2hf1veYx8jxUtHgoPRTdbWrIPDQywF8RRMuaIbx7yEKmXoo=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v5 08/11] viridian: add ExProcessorMasks variant of the IPI hypercall
Date: Fri,  4 Dec 2020 08:52:52 +0000
Message-Id: <20201204085255.26216-9-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201204085255.26216-1-paul@xen.org>
References: <20201204085255.26216-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

A previous patch introduced variants of the flush hypercalls that take a
'Virtual Processor Set' as an argument rather than a simple 64-bit mask.
This patch introduces a similar variant of the HVCALL_SEND_IPI hypercall
(HVCALL_SEND_IPI_EX).

NOTE: As with HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST_EX, a guest should
      not yet issue the HVCALL_SEND_IPI_EX hypercall as support for
      'ExProcessorMasks' is not yet advertised via CPUID.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v3:
 - Adjust prototype of new function

v2:
 - Sanity check size before hvm_copy_from_guest_phys()
---
 xen/arch/x86/hvm/viridian/viridian.c | 74 ++++++++++++++++++++++++++++
 1 file changed, 74 insertions(+)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index a4cece722e97..5e4a2fa53ad4 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -860,6 +860,75 @@ static int hvcall_ipi(const union hypercall_input *input,
     return 0;
 }
 
+static int hvcall_ipi_ex(const union hypercall_input *input,
+                         union hypercall_output *output,
+                         paddr_t input_params_gpa,
+                         paddr_t output_params_gpa)
+{
+    struct hypercall_vpmask *vpmask = &this_cpu(hypercall_vpmask);
+    struct {
+        uint32_t vector;
+        uint8_t target_vtl;
+        uint8_t reserved_zero[3];
+        struct hv_vpset set;
+    } input_params;
+    union hypercall_vpset *vpset = &this_cpu(hypercall_vpset);
+    struct hv_vpset *set = &vpset->set;
+    size_t size;
+    int rc;
+
+    /* These hypercalls should never use the fast-call convention. */
+    if ( input->fast )
+        return -EINVAL;
+
+    /* Get input parameters. */
+    if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+                                  sizeof(input_params)) != HVMTRANS_okay )
+        return -EINVAL;
+
+    if ( input_params.target_vtl ||
+         input_params.reserved_zero[0] ||
+         input_params.reserved_zero[1] ||
+         input_params.reserved_zero[2] )
+        return HV_STATUS_INVALID_PARAMETER;
+
+    if ( input_params.vector < 0x10 || input_params.vector > 0xff )
+        return HV_STATUS_INVALID_PARAMETER;
+
+    *set = input_params.set;
+    if ( set->format == HV_GENERIC_SET_SPARSE_4K )
+    {
+        unsigned long offset = offsetof(typeof(input_params),
+                                        set.bank_contents);
+
+        size = sizeof(*set->bank_contents) * hv_vpset_nr_banks(set);
+
+        if ( offsetof(typeof(*vpset), set.bank_contents[0]) + size >
+             sizeof(*vpset) )
+        {
+            ASSERT_UNREACHABLE();
+            return -EINVAL;
+        }
+
+        if ( hvm_copy_from_guest_phys(&set->bank_contents,
+                                      input_params_gpa + offset,
+                                      size) != HVMTRANS_okay )
+            return -EINVAL;
+
+        size += sizeof(*set);
+    }
+    else
+        size = sizeof(*set);
+
+    rc = hv_vpset_to_vpmask(set, vpmask);
+    if ( rc )
+        return rc;
+
+    send_ipi(vpmask, input_params.vector);
+
+    return 0;
+}
+
 int viridian_hypercall(struct cpu_user_regs *regs)
 {
     struct vcpu *curr = current;
@@ -916,6 +985,11 @@ int viridian_hypercall(struct cpu_user_regs *regs)
                         output_params_gpa);
         break;
 
+    case HVCALL_SEND_IPI_EX:
+        rc = hvcall_ipi_ex(&input, &output, input_params_gpa,
+                           output_params_gpa);
+        break;
+
     default:
         gprintk(XENLOG_WARNING, "unimplemented hypercall %04x\n",
                 input.call_code);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Dec 04 08:53:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 08:53:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44164.79270 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6px-0006lp-LS; Fri, 04 Dec 2020 08:53:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44164.79270; Fri, 04 Dec 2020 08:53:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6px-0006ku-3R; Fri, 04 Dec 2020 08:53:09 +0000
Received: by outflank-mailman (input) for mailman id 44164;
 Fri, 04 Dec 2020 08:53:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kl6pv-0006fg-5V
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 08:53:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kl6pv-0004A1-0R; Fri, 04 Dec 2020 08:53:07 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kl6pu-00081O-P7; Fri, 04 Dec 2020 08:53:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=hKsDOrpgiJSDA+ammFrF07gmisbXWrRCPKTZaRZM/PU=; b=PqaHCFE1LPrk7HnD2c1OEv7J/E
	0b34JrxmglfSclnRKx86yjDeRwJB4FAzShN68tZWbct3X8oRmNBYKIBHAT38QTmjY25AyiosPP9To
	LFkh8JGW4U6VmJpxjF2a7/pPEZiunbeFQGxBGb2IoeW9oU2c7Yrx96zge2CR73L5eR9Q=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v5 09/11] viridian: log initial invocation of each type of hypercall
Date: Fri,  4 Dec 2020 08:52:53 +0000
Message-Id: <20201204085255.26216-10-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201204085255.26216-1-paul@xen.org>
References: <20201204085255.26216-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

To make it simpler to observe which viridian hypercalls are issued by a
particular Windows guest, this patch adds a per-domain mask to track them.
Each type of hypercall causes a different bit to be set in the mask and
when the bit transitions from clear to set, a log line is emitted showing
the name of the hypercall and the domain that issued it.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v2:
 - Use DECLARE_BITMAP() for 'hypercall_flags'
 - Use an enum for _HCALL_* values
---
 xen/arch/x86/hvm/viridian/viridian.c | 21 +++++++++++++++++++++
 xen/include/asm-x86/hvm/viridian.h   | 10 ++++++++++
 2 files changed, 31 insertions(+)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 5e4a2fa53ad4..efd8e3a900c3 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -933,6 +933,7 @@ int viridian_hypercall(struct cpu_user_regs *regs)
 {
     struct vcpu *curr = current;
     struct domain *currd = curr->domain;
+    struct viridian_domain *vd = currd->arch.hvm.viridian;
     int mode = hvm_guest_x86_mode(curr);
     unsigned long input_params_gpa, output_params_gpa;
     int rc = 0;
@@ -962,6 +963,10 @@ int viridian_hypercall(struct cpu_user_regs *regs)
     switch ( input.call_code )
     {
     case HVCALL_NOTIFY_LONG_SPIN_WAIT:
+        if ( !test_and_set_bit(_HCALL_spin_wait, vd->hypercall_flags) )
+            printk(XENLOG_G_INFO "%pd: VIRIDIAN HVCALL_NOTIFY_LONG_SPIN_WAIT\n",
+                   currd);
+
         /*
          * See section 14.5.1 of the specification.
          */
@@ -970,22 +975,38 @@ int viridian_hypercall(struct cpu_user_regs *regs)
 
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
+        if ( !test_and_set_bit(_HCALL_flush, vd->hypercall_flags) )
+            printk(XENLOG_G_INFO "%pd: VIRIDIAN HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST\n",
+                   currd);
+
         rc = hvcall_flush(&input, &output, input_params_gpa,
                           output_params_gpa);
         break;
 
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX:
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX:
+        if ( !test_and_set_bit(_HCALL_flush_ex, vd->hypercall_flags) )
+            printk(XENLOG_G_INFO "%pd: VIRIDIAN HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST_EX\n",
+                   currd);
+
         rc = hvcall_flush_ex(&input, &output, input_params_gpa,
                              output_params_gpa);
         break;
 
     case HVCALL_SEND_IPI:
+        if ( !test_and_set_bit(_HCALL_ipi, vd->hypercall_flags) )
+            printk(XENLOG_G_INFO "%pd: VIRIDIAN HVCALL_SEND_IPI\n",
+                   currd);
+
         rc = hvcall_ipi(&input, &output, input_params_gpa,
                         output_params_gpa);
         break;
 
     case HVCALL_SEND_IPI_EX:
+        if ( !test_and_set_bit(_HCALL_ipi_ex, vd->hypercall_flags) )
+            printk(XENLOG_G_INFO "%pd: VIRIDIAN HVCALL_SEND_IPI_EX\n",
+                   currd);
+
         rc = hvcall_ipi_ex(&input, &output, input_params_gpa,
                            output_params_gpa);
         break;
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index cbf77d9c760b..4c8ff6e80b6f 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -55,10 +55,20 @@ struct viridian_time_ref_count
     int64_t off;
 };
 
+enum {
+    _HCALL_spin_wait,
+    _HCALL_flush,
+    _HCALL_flush_ex,
+    _HCALL_ipi,
+    _HCALL_ipi_ex,
+    _HCALL_nr /* must be last */
+};
+
 struct viridian_domain
 {
     union hv_guest_os_id guest_os_id;
     union hv_vp_assist_page_msr hypercall_gpa;
+    DECLARE_BITMAP(hypercall_flags, _HCALL_nr);
     struct viridian_time_ref_count time_ref_count;
     struct viridian_page reference_tsc;
 };
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Dec 04 08:59:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 08:59:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44220.79286 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6vc-0007kT-Tc; Fri, 04 Dec 2020 08:59:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44220.79286; Fri, 04 Dec 2020 08:59:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6vc-0007kM-Qb; Fri, 04 Dec 2020 08:59:00 +0000
Received: by outflank-mailman (input) for mailman id 44220;
 Fri, 04 Dec 2020 08:58:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c9tS=FI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kl6vb-0007kH-AJ
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 08:58:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6c7b661c-fb8e-41ed-82a6-d8adc48280cf;
 Fri, 04 Dec 2020 08:58:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 85EE1ACC6;
 Fri,  4 Dec 2020 08:58:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c7b661c-fb8e-41ed-82a6-d8adc48280cf
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607072337; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=nOSvgadS6oqi9y2g+N25P/SfFQDdmEoS8APBzeBXww8=;
	b=RI9BaDaPX/DSW/U7jZ6oiAbr8uqV6kZuHHaAlLWBFZ+PSKCW3fjmDOWv1XJcQ/UB0yRfsI
	LFSKsQoa2OzB4TltMGs9e6tS9LdPzolTMyqGUKgmx/W/yefz8wlI9uOo5tTdmu4bV/I3JY
	mXO0x7Q/lwgvdJifpOMhNxK5iWUteCE=
Subject: Re: [PATCH v2 11/17] xen/hypfs: add getsize() and findentry()
 callbacks to hypfs_funcs
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-12-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <57d62a37-b953-a4fa-8c73-79336d634cb0@suse.com>
Date: Fri, 4 Dec 2020 09:58:58 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201201082128.15239-12-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.12.2020 09:21, Juergen Gross wrote:
> @@ -197,28 +235,12 @@ static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
>              end = strchr(path, '\0');
>          name_len = end - path;
>  
> -        again = false;
> +        entry = dir->e.funcs->findentry(dir, path, name_len);
> +        if ( IS_ERR(entry) || !*end )
> +            return entry;
>  
> -        list_for_each_entry ( entry, &dir->dirlist, list )
> -        {
> -            int cmp = strncmp(path, entry->name, name_len);
> -            struct hypfs_entry_dir *d = container_of(entry,
> -                                                     struct hypfs_entry_dir, e);
> -
> -            if ( cmp < 0 )
> -                return ERR_PTR(-ENOENT);
> -            if ( !cmp && strlen(entry->name) == name_len )
> -            {
> -                if ( !*end )
> -                    return entry;
> -
> -                again = true;
> -                dir = d;
> -                path = end + 1;
> -
> -                break;
> -            }
> -        }
> +        path = end + 1;
> +        dir = container_of(entry, struct hypfs_entry_dir, e);
>      }

Looking at patch 15 my understanding is that "dir" may get invalidated
by the call to the ->findentry() hook above. That is, use of dir has
to be avoided between the two calls. This isn't at all obvious, so I
wonder whether at least a comment wouldn't want adding to avoid future
mistakes.

And of course the same comment applies to the IS_ERR() use here vs NULL
coming back that I already gave for the ->enter() call site.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 09:00:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 09:00:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44224.79299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6xK-00008x-C2; Fri, 04 Dec 2020 09:00:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44224.79299; Fri, 04 Dec 2020 09:00:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6xK-00008q-8z; Fri, 04 Dec 2020 09:00:46 +0000
Received: by outflank-mailman (input) for mailman id 44224;
 Fri, 04 Dec 2020 09:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kl6xJ-00008L-EN
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 09:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kl6xI-0004P1-FC; Fri, 04 Dec 2020 09:00:44 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kl6px-00081O-8y; Fri, 04 Dec 2020 08:53:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=/tzXLkeYgeohzzpZOobPIXPIDOQTKzNBQ+V78872kJQ=; b=ExZOQlzjNsrwe3ex28436t2A3S
	E1oOtHtTq/eANBuuOGNN5+dEWad/tpCtP1jOZjMx2fyY3+pdAy+Z3UrpTPZTx68kkjp03+qRpEsns
	r7mere0DVCt5K+uhIJ0C+e+MZqQ/m4X5iGl/89v8tROTKxG/LH//waHC/FTPHlxIpzms=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v5 11/11] xl / libxl: add 'ex_processor_mask' into 'libxl_viridian_enlightenment'
Date: Fri,  4 Dec 2020 08:52:55 +0000
Message-Id: <20201204085255.26216-12-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201204085255.26216-1-paul@xen.org>
References: <20201204085255.26216-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Adding the new value into the enumeration makes it immediately available
to xl, so this patch adjusts the xl.cfg(5) documentation accordingly.
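
For illustration, a guest configuration enabling the new enlightenment alongside the related hypercall groups might look like this (a sketch; adjust the group list to the guest's needs):

```
viridian = [ "defaults", "hcall_remote_tlb_flush", "hcall_ipi", "ex_processor_masks" ]
```

Without "ex_processor_masks", guests with more than 64 vCPUs cannot make use of the flush and IPI hypercall enlightenments, since the non-Ex variants only carry a 64-bit processor mask.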

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 docs/man/xl.cfg.5.pod.in         | 8 ++++++++
 tools/include/libxl.h            | 7 +++++++
 tools/libs/light/libxl_types.idl | 1 +
 tools/libs/light/libxl_x86.c     | 3 +++
 4 files changed, 19 insertions(+)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 0532739c1fff..3f0f8de1e988 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2318,6 +2318,14 @@ This set incorporates use of a hypercall for interprocessor interrupts.
 This enlightenment may improve performance of Windows guests with multiple
 virtual CPUs.
 
+=item B<ex_processor_masks>
+
+This set enables new hypercall variants taking a variably-sized sparse
+B<Virtual Processor Set> as an argument, rather than a simple 64-bit
+mask. Hence this enlightenment must be specified for guests with more
+than 64 vCPUs if B<hcall_remote_tlb_flush> and/or B<hcall_ipi> are also
+specified.
+
 =item B<defaults>
 
 This is a special value that enables the default set of groups, which
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 1ea5b4f446e8..eaffccb30f37 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -444,6 +444,13 @@
  */
 #define LIBXL_HAVE_DISK_SAFE_REMOVE 1
 
+/*
+ * LIBXL_HAVE_VIRIDIAN_EX_PROCESSOR_MASKS indicates that the
+ * 'ex_processor_masks' value is present in the viridian enlightenment
+ * enumeration.
+ */
+#define LIBXL_HAVE_VIRIDIAN_EX_PROCESSOR_MASKS 1
+
 /*
  * libxl ABI compatibility
  *
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 9d3f05f39978..05324736b744 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -238,6 +238,7 @@ libxl_viridian_enlightenment = Enumeration("viridian_enlightenment", [
     (7, "synic"),
     (8, "stimer"),
     (9, "hcall_ipi"),
+    (10, "ex_processor_masks"),
     ])
 
 libxl_hdtype = Enumeration("hdtype", [
diff --git a/tools/libs/light/libxl_x86.c b/tools/libs/light/libxl_x86.c
index e18274cc10e2..86d272999d67 100644
--- a/tools/libs/light/libxl_x86.c
+++ b/tools/libs/light/libxl_x86.c
@@ -366,6 +366,9 @@ static int hvm_set_viridian_features(libxl__gc *gc, uint32_t domid,
     if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_HCALL_IPI))
         mask |= HVMPV_hcall_ipi;
 
+    if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_EX_PROCESSOR_MASKS))
+        mask |= HVMPV_ex_processor_masks;
+
     if (mask != 0 &&
         xc_hvm_param_set(CTX->xch,
                          domid,
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Dec 04 09:00:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 09:00:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44225.79306 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6xK-00009R-OU; Fri, 04 Dec 2020 09:00:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44225.79306; Fri, 04 Dec 2020 09:00:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl6xK-00009H-HN; Fri, 04 Dec 2020 09:00:46 +0000
Received: by outflank-mailman (input) for mailman id 44225;
 Fri, 04 Dec 2020 09:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kl6xJ-00008k-Mi
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 09:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kl6xI-0004P3-Jy; Fri, 04 Dec 2020 09:00:44 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kl6pw-00081O-8u; Fri, 04 Dec 2020 08:53:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=J/9PuceCgCRlAlZXFD/ir5Ro6bbX2Egg10HbABKGdYw=; b=LI6eqTxBu+gcE3HIy0b4ZdcAot
	jGhA7w7d1qxEAPeKHblHOYZkxCDcLqW5L7ttFgZQAUD7fKO7Ol+Jm1IHnBjHJnlMhaXltkNXHIOKm
	Z9mzneCIyDaZWDdMO3DqPNlCkmdbsyYm/B/4PhV7aD1XmZqbXfyG9bjuN/bp9f1cShL8=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v5 10/11] viridian: add a new '_HVMPV_ex_processor_masks' bit into HVM_PARAM_VIRIDIAN...
Date: Fri,  4 Dec 2020 08:52:54 +0000
Message-Id: <20201204085255.26216-11-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201204085255.26216-1-paul@xen.org>
References: <20201204085255.26216-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... and advertise ExProcessorMasks support if it is set.

Support is advertised by setting bit 11 in CPUID:40000004:EAX.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/x86/hvm/viridian/viridian.c | 3 +++
 xen/include/public/hvm/params.h      | 7 ++++++-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index efd8e3a900c3..ed978047c12f 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -84,6 +84,7 @@ typedef union _HV_CRASH_CTL_REG_CONTENTS
 #define CPUID4A_MSR_BASED_APIC         (1 << 3)
 #define CPUID4A_RELAX_TIMER_INT        (1 << 5)
 #define CPUID4A_SYNTHETIC_CLUSTER_IPI  (1 << 10)
+#define CPUID4A_EX_PROCESSOR_MASKS     (1 << 11)
 
 /* Viridian CPUID leaf 6: Implementation HW features detected and in use */
 #define CPUID6A_APIC_OVERLAY    (1 << 0)
@@ -197,6 +198,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
             res->a |= CPUID4A_MSR_BASED_APIC;
         if ( viridian_feature_mask(d) & HVMPV_hcall_ipi )
             res->a |= CPUID4A_SYNTHETIC_CLUSTER_IPI;
+        if ( viridian_feature_mask(d) & HVMPV_ex_processor_masks )
+            res->a |= CPUID4A_EX_PROCESSOR_MASKS;
 
         /*
          * This value is the recommended number of attempts to try to
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index 0e3fdca09646..3b0a0f45da53 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -164,6 +164,10 @@
 #define _HVMPV_hcall_ipi 9
 #define HVMPV_hcall_ipi (1 << _HVMPV_hcall_ipi)
 
+/* Enable ExProcessorMasks */
+#define _HVMPV_ex_processor_masks 10
+#define HVMPV_ex_processor_masks (1 << _HVMPV_ex_processor_masks)
+
 #define HVMPV_feature_mask \
         (HVMPV_base_freq | \
          HVMPV_no_freq | \
@@ -174,7 +178,8 @@
          HVMPV_crash_ctl | \
          HVMPV_synic | \
          HVMPV_stimer | \
-         HVMPV_hcall_ipi)
+         HVMPV_hcall_ipi | \
+         HVMPV_ex_processor_masks)
 
 #endif
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Dec 04 09:05:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 09:05:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44237.79323 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl727-0000df-9s; Fri, 04 Dec 2020 09:05:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44237.79323; Fri, 04 Dec 2020 09:05:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl727-0000dY-6Y; Fri, 04 Dec 2020 09:05:43 +0000
Received: by outflank-mailman (input) for mailman id 44237;
 Fri, 04 Dec 2020 09:05:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kl726-0000dT-0z
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 09:05:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kl724-0004XG-KS; Fri, 04 Dec 2020 09:05:40 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kl724-0000e0-F6; Fri, 04 Dec 2020 09:05:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=A/FoQA/sadJy6m4W7CwLip1NCAOD/yNDGZPhkRJBuRQ=; b=PtBcDVh3D2h2/ZKPr5mYzYBKeB
	qP6yRvbTZ0YxF8+OP4OxABWJh1kQZJXPKxods4ilhxw5xuCgdbF5oDAj1t0ZY47rxLxO05Dr6we2+
	CbHSXVK4a3jiazyBC1Inybgnp7sMHVZTh2P7eUxvIurUY4Mwk5vs5OH4/R1sptK5R4Sk=;
Subject: Re: [PATCH v2 7/8] xen/arm: Remove Linux specific code that is not
 usable in XEN
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <1d9da8ed4845aeb9e86a5ce6750b811bd7e2020e.1606406359.git.rahul.singh@arm.com>
 <cd74f2a7-7836-ef90-9cd8-857068adb0f5@xen.org>
 <51C0C24A-3CE6-48A3-85F5-14F010409DC3@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <b87e9293-77bb-2c43-63d0-8d54d5fc9a7e@xen.org>
Date: Fri, 4 Dec 2020 09:05:38 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <51C0C24A-3CE6-48A3-85F5-14F010409DC3@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Rahul,

On 03/12/2020 14:33, Rahul Singh wrote:
>> On 2 Dec 2020, at 2:45 pm, Julien Grall <julien@xen.org> wrote:
>>> -
>>> -static struct iommu_device *arm_smmu_probe_device(struct device *dev)
>>> -{
>>
>> Most of the code here looks useful to Xen. I think you want to keep the code and re-use it afterwards.
> 
> Ok. I removed the code here and added the XEN-compatible code to add devices in the next patch.
> I will keep it in this patch and will modify the code to make it XEN compatible.

In general, it is preferable to keep the code rather than dropping it in 
patch A and then adding it back differently in patch B. This makes it 
easier to check that the outcome of the function is mostly the same.

>>> -static struct iommu_ops arm_smmu_ops = {
>>> -	.capable		= arm_smmu_capable,
>>> -	.domain_alloc		= arm_smmu_domain_alloc,
>>> -	.domain_free		= arm_smmu_domain_free,
>>> -	.attach_dev		= arm_smmu_attach_dev,
>>> -	.map			= arm_smmu_map,
>>> -	.unmap			= arm_smmu_unmap,
>>> -	.flush_iotlb_all	= arm_smmu_flush_iotlb_all,
>>> -	.iotlb_sync		= arm_smmu_iotlb_sync,
>>> -	.iova_to_phys		= arm_smmu_iova_to_phys,
>>> -	.probe_device		= arm_smmu_probe_device,
>>> -	.release_device		= arm_smmu_release_device,
>>> -	.device_group		= arm_smmu_device_group,
>>> -	.domain_get_attr	= arm_smmu_domain_get_attr,
>>> -	.domain_set_attr	= arm_smmu_domain_set_attr,
>>> -	.of_xlate		= arm_smmu_of_xlate,
>>> -	.get_resv_regions	= arm_smmu_get_resv_regions,
>>> -	.put_resv_regions	= generic_iommu_put_resv_regions,
>>> -	.pgsize_bitmap		= -1UL, /* Restricted during device attach */
>>> -};
>>> -
>>>   /* Probing and initialisation functions */
>>>   static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
>>>   				   struct arm_smmu_queue *q,
>>> @@ -2406,7 +2032,6 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>>>   	switch (FIELD_GET(IDR0_STALL_MODEL, reg)) {
>>>   	case IDR0_STALL_MODEL_FORCE:
>>>   		smmu->features |= ARM_SMMU_FEAT_STALL_FORCE;
>>> -		fallthrough;
>>
>> We should keep all the fallthrough documented. So I think we want to introduce the fallthrough in Xen as well.
> 
> Ok, I will keep the fallthrough documented in this patch.
> 
> The fallthrough implementation in XEN should be another patch. I am not sure when we can implement it, but we will try.

Yes, I didn't ask to implement "fallthrough" in this patch, but instead 
as a prerequisite patch.

I would implement it in include/xen/compiler.h.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 09:10:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 09:10:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44254.79334 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl776-0001XG-Sy; Fri, 04 Dec 2020 09:10:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44254.79334; Fri, 04 Dec 2020 09:10:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl776-0001X9-Pz; Fri, 04 Dec 2020 09:10:52 +0000
Received: by outflank-mailman (input) for mailman id 44254;
 Fri, 04 Dec 2020 09:10:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c9tS=FI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kl775-0001X4-17
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 09:10:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0ef017ed-80da-4caa-b087-48b7b011fd06;
 Fri, 04 Dec 2020 09:10:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0CBA8ACC4;
 Fri,  4 Dec 2020 09:10:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0ef017ed-80da-4caa-b087-48b7b011fd06
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607073049; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=GvPAS1fnvtQTnWbCJURw87a677mWEVHMf028tHQFHsk=;
	b=LRQQpUASSPqqIUaI0Mpc6Ypf/1QddP+2MJh3QPxm/qUritoutBVhUdmOM84wybYYS3CR1p
	imvAiLNm1mEA2tqEvIDYRmASZMCiqQBPPnYGssVkJ//LAeAwlNd2A9iO61Ap451ArFAI2K
	fMpG+b76+WFMMHOHXMo98wMBUu44wRQ=
Subject: Re: [PATCH v2 15/17] xen/cpupool: add cpupool directories
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Dario Faggioli <dfaggioli@suse.com>,
 xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-16-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e14fa4a4-3a3e-ceac-af38-8561baf58aa8@suse.com>
Date: Fri, 4 Dec 2020 10:10:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201201082128.15239-16-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.12.2020 09:21, Juergen Gross wrote:
> @@ -1003,12 +1006,131 @@ static struct notifier_block cpu_nfb = {
>      .notifier_call = cpu_callback
>  };
>  
> +#ifdef CONFIG_HYPFS
> +static const struct hypfs_entry *cpupool_pooldir_enter(
> +    const struct hypfs_entry *entry);
> +
> +static struct hypfs_funcs cpupool_pooldir_funcs = {

Yet one more const missing?

> +    .enter = cpupool_pooldir_enter,
> +    .exit = hypfs_node_exit,
> +    .read = hypfs_read_dir,
> +    .write = hypfs_write_deny,
> +    .getsize = hypfs_getsize,
> +    .findentry = hypfs_dir_findentry,
> +};
> +
> +static HYPFS_VARDIR_INIT(cpupool_pooldir, "%u", &cpupool_pooldir_funcs);
> +
> +static const struct hypfs_entry *cpupool_pooldir_enter(
> +    const struct hypfs_entry *entry)
> +{
> +    return &cpupool_pooldir.e;
> +}
> +
> +static int cpupool_dir_read(const struct hypfs_entry *entry,
> +                            XEN_GUEST_HANDLE_PARAM(void) uaddr)
> +{
> +    int ret = 0;
> +    const struct cpupool *c;
> +    unsigned int size = 0;
> +
> +    list_for_each_entry(c, &cpupool_list, list)
> +    {
> +        size += hypfs_dynid_entry_size(entry, c->cpupool_id);

Why do you maintain size here? I can't spot any use.

With this dropped the function then no longer depends on its
"entry" parameter, which makes me wonder ...

> +        ret = hypfs_read_dyndir_id_entry(&cpupool_pooldir, c->cpupool_id,
> +                                         list_is_last(&c->list, &cpupool_list),
> +                                         &uaddr);
> +        if ( ret )
> +            break;
> +    }
> +
> +    return ret;
> +}
> +
> +static unsigned int cpupool_dir_getsize(const struct hypfs_entry *entry)
> +{
> +    const struct cpupool *c;
> +    unsigned int size = 0;
> +
> +    list_for_each_entry(c, &cpupool_list, list)
> +        size += hypfs_dynid_entry_size(entry, c->cpupool_id);

... why this one does. To be certain their results are consistent
with one another, I think both should produce their results from
the same data.

> +    return size;
> +}
> +
> +static const struct hypfs_entry *cpupool_dir_enter(
> +    const struct hypfs_entry *entry)
> +{
> +    struct hypfs_dyndir_id *data;
> +
> +    data = hypfs_alloc_dyndata(sizeof(*data));

I generally like the added type safety of the macro wrappers
around _xmalloc(). I wonder if it wouldn't be a good idea to have
such here as well, to avoid random mistakes like

    data = hypfs_alloc_dyndata(sizeof(data));

However I further notice that the struct allocated isn't cpupool
specific at all. It would seem to me that such an allocation
therefore doesn't belong here. Therefore I wonder whether ...

> +    if ( !data )
> +        return ERR_PTR(-ENOMEM);
> +    data->id = CPUPOOLID_NONE;
> +
> +    spin_lock(&cpupool_lock);

... these two properties (initial ID and lock) shouldn't e.g. be
communicated via the template, allowing the enter/exit hooks to
become generic for all ID templates.

Yet in turn I notice that the "id" field only ever gets set, both
in patch 14 and here. But yes, I've now spotted the consumers in
patch 16.

> +    return entry;
> +}
> +
> +static void cpupool_dir_exit(const struct hypfs_entry *entry)
> +{
> +    spin_unlock(&cpupool_lock);
> +
> +    hypfs_free_dyndata();
> +}
> +
> +static struct hypfs_entry *cpupool_dir_findentry(
> +    const struct hypfs_entry_dir *dir, const char *name, unsigned int name_len)
> +{
> +    unsigned long id;
> +    const char *end;
> +    const struct cpupool *cpupool;
> +
> +    id = simple_strtoul(name, &end, 10);
> +    if ( end != name + name_len )
> +        return ERR_PTR(-ENOENT);
> +
> +    cpupool = __cpupool_find_by_id(id, true);

Silent truncation from unsigned long to unsigned int?

> +    if ( !cpupool )
> +        return ERR_PTR(-ENOENT);
> +
> +    return hypfs_gen_dyndir_entry_id(&cpupool_pooldir, id);
> +}
> +
> +static struct hypfs_funcs cpupool_dir_funcs = {

Yet another missing const?

> +    .enter = cpupool_dir_enter,
> +    .exit = cpupool_dir_exit,
> +    .read = cpupool_dir_read,
> +    .write = hypfs_write_deny,
> +    .getsize = cpupool_dir_getsize,
> +    .findentry = cpupool_dir_findentry,
> +};
> +
> +static HYPFS_VARDIR_INIT(cpupool_dir, "cpupool", &cpupool_dir_funcs);

Why VARDIR? This isn't a template, is it? Or does VARDIR really
serve multiple purposes?

> +static void cpupool_hypfs_init(void)
> +{
> +    hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
> +    hypfs_add_dyndir(&cpupool_dir, &cpupool_pooldir);
> +}
> +#else
> +
> +static void cpupool_hypfs_init(void)
> +{
> +}
> +#endif

I think you want to be consistent with the use of blank lines next
to #if / #else / #endif. In cases when they enclose multiple entities,
I think it's generally better to have intervening blank lines
everywhere. I also think in such cases commenting #else and #endif is
helpful. But you're the maintainer of this code ...

Jan


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 09:16:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 09:16:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44264.79347 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl7CQ-0001t7-HP; Fri, 04 Dec 2020 09:16:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44264.79347; Fri, 04 Dec 2020 09:16:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl7CQ-0001t0-ER; Fri, 04 Dec 2020 09:16:22 +0000
Received: by outflank-mailman (input) for mailman id 44264;
 Fri, 04 Dec 2020 09:16:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c9tS=FI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kl7CP-0001sv-NM
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 09:16:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 56139137-c5f5-40c0-aabc-8e4516927846;
 Fri, 04 Dec 2020 09:16:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EB797ACC6;
 Fri,  4 Dec 2020 09:16:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 56139137-c5f5-40c0-aabc-8e4516927846
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607073380; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=elv0qx14rjDZkrgz9XMZ0WUbGL2H6cwfwNztroy1OWk=;
	b=lh6AdLF5IIhWZ8MmfCOl5EvcYZ2mHl8eBBqWMUKorZJ9eeV47tbJSyHitq070y5jSUz/5h
	ioXwqZSjJ3zZGNlzksHOuCx0mkP91voJCieXlGlsKcuSJi/W7o0fNysb8zUqlCjxVFCP8W
	9eReQs+TIRWhBvpsAKEa4bmE563yGUg=
Subject: Re: [PATCH v2 14/17] xen/hypfs: add support for id-based dynamic
 directories
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-15-jgross@suse.com>
 <369bcb0b-5554-8976-d3fe-5066b3d7cdce@suse.com>
 <774ca9f3-3bbe-817f-5ecb-76054aa619f5@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f81a011d-101c-29e7-cba2-0b52506cc027@suse.com>
Date: Fri, 4 Dec 2020 10:16:20 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <774ca9f3-3bbe-817f-5ecb-76054aa619f5@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 04.12.2020 09:52, Jürgen Groß wrote:
> On 03.12.20 16:44, Jan Beulich wrote:
>> On 01.12.2020 09:21, Juergen Gross wrote:
>>> --- a/xen/common/hypfs.c
>>> +++ b/xen/common/hypfs.c
>>> @@ -355,6 +355,81 @@ unsigned int hypfs_getsize(const struct hypfs_entry *entry)
>>>       return entry->size;
>>>   }
>>>   
>>> +int hypfs_read_dyndir_id_entry(const struct hypfs_entry_dir *template,
>>> +                               unsigned int id, bool is_last,
>>> +                               XEN_GUEST_HANDLE_PARAM(void) *uaddr)
>>> +{
>>> +    struct xen_hypfs_dirlistentry direntry;
>>> +    char name[HYPFS_DYNDIR_ID_NAMELEN];
>>> +    unsigned int e_namelen, e_len;
>>> +
>>> +    e_namelen = snprintf(name, sizeof(name), template->e.name, id);
>>> +    e_len = DIRENTRY_SIZE(e_namelen);
>>> +    direntry.e.pad = 0;
>>> +    direntry.e.type = template->e.type;
>>> +    direntry.e.encoding = template->e.encoding;
>>> +    direntry.e.content_len = template->e.funcs->getsize(&template->e);
>>> +    direntry.e.max_write_len = template->e.max_size;
>>> +    direntry.off_next = is_last ? 0 : e_len;
>>> +    if ( copy_to_guest(*uaddr, &direntry, 1) )
>>> +        return -EFAULT;
>>> +    if ( copy_to_guest_offset(*uaddr, DIRENTRY_NAME_OFF, name,
>>> +                              e_namelen + 1) )
>>> +        return -EFAULT;
>>> +
>>> +    guest_handle_add_offset(*uaddr, e_len);
>>> +
>>> +    return 0;
>>> +}
>>> +
>>> +static struct hypfs_entry *hypfs_dyndir_findentry(
>>> +    const struct hypfs_entry_dir *dir, const char *name, unsigned int name_len)
>>> +{
>>> +    const struct hypfs_dyndir_id *data;
>>> +
>>> +    data = hypfs_get_dyndata();
>>> +
>>> +    /* Use template with original findentry function. */
>>> +    return data->template->e.funcs->findentry(data->template, name, name_len);
>>> +}
>>> +
>>> +static int hypfs_read_dyndir(const struct hypfs_entry *entry,
>>> +                             XEN_GUEST_HANDLE_PARAM(void) uaddr)
>>> +{
>>> +    const struct hypfs_dyndir_id *data;
>>> +
>>> +    data = hypfs_get_dyndata();
>>> +
>>> +    /* Use template with original read function. */
>>> +    return data->template->e.funcs->read(&data->template->e, uaddr);
>>> +}
>>> +
>>> +struct hypfs_entry *hypfs_gen_dyndir_entry_id(
>>> +    const struct hypfs_entry_dir *template, unsigned int id)
>>> +{
>>> +    struct hypfs_dyndir_id *data;
>>> +
>>> +    data = hypfs_get_dyndata();
>>> +
>>> +    data->template = template;
>>> +    data->id = id;
>>> +    snprintf(data->name, sizeof(data->name), template->e.name, id);
>>> +    data->dir = *template;
>>> +    data->dir.e.name = data->name;
>>
>> I'm somewhat puzzled, if not confused, by the apparent redundancy
>> of this name generation with that in hypfs_read_dyndir_id_entry().
>> Wasn't the idea to be able to use generic functions on these
>> generated entries?
> 
> I can add a macro replacing the double snprintf().

That wasn't my point. I'm concerned about there being two name generation
sites in the first place. Is this perhaps simply some form of
optimization, avoiding having hypfs_read_dyndir_id_entry() call
hypfs_gen_dyndir_entry_id() (but risking the two going out of sync)?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 09:27:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 09:27:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44271.79359 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl7Md-00030e-L4; Fri, 04 Dec 2020 09:26:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44271.79359; Fri, 04 Dec 2020 09:26:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl7Md-00030X-I0; Fri, 04 Dec 2020 09:26:55 +0000
Received: by outflank-mailman (input) for mailman id 44271;
 Fri, 04 Dec 2020 09:26:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kl7Mc-00030S-8e
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 09:26:54 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kl7Mb-00054X-3e; Fri, 04 Dec 2020 09:26:53 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kl7Ma-0001xR-RQ; Fri, 04 Dec 2020 09:26:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=VBQgBosC98HgPuxZ5XdIOh7Zj4Homg+IA64bM0PEdC0=; b=qxcOusCRdowinX/e2J5je7CbHn
	lCdlwChg5fUQFEVRY/aVBGChnPc77Opo9zFA6dbmpYaw7zCcjgcagERs8rsR4RRCR3t+R8r0L8ssR
	FJBC49hdDjilRPypugm/79jtMgi7GE3lQcVFn0KG1TzKoPBQxlTkhrQPPdhT7Q5RUpfw=;
Subject: Re: [PATCH 1/2] include: don't use asm/page.h from common headers
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Hongyan Xia <hx242@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <75484377-160c-a529-1cfc-96de86cfc550@suse.com>
 <04276039-a5d0-fefd-260e-ffaa8272fd6a@suse.com>
 <a35fb176-e729-a542-4416-7040d6c80964@xen.org>
 <bdf294d9-e021-36d3-7e04-1c148e34701f@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <56063ac8-f771-0269-62f5-8076ec714c96@xen.org>
Date: Fri, 4 Dec 2020 09:26:50 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <bdf294d9-e021-36d3-7e04-1c148e34701f@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 03/12/2020 09:27, Jan Beulich wrote:
> On 02.12.2020 18:14, Julien Grall wrote:
>> Hi Jan,
>>
>> On 02/12/2020 14:49, Jan Beulich wrote:
>>> Doing so limits what can be done in (in particular included by) this per-
>>> arch header. Abstract out page shift/size related #define-s, which is all
>>> the repsecitve headers care about. Extend the replacement / removal to
>>
>> s/repsecitve/respective/
>>
>>> some x86 headers as well; some others now need to include page.h (and
>>> they really should have before).
>>>
>>> Arm's VADDR_BITS gets restricted to 32-bit, as its current value is
>>> clearly wrong for 64-bit, but the constant also isn't used anywhere
>>> right now (i.e. the #define could also be dropped altogether).
>>
>> Whoops. Thankfully this is not used.
>>
>>>
>>> I wasn't sure about Arm's use of vaddr_t in PAGE_OFFSET(), and hence I
>>> kept it and provided a way to override the #define in the common header.
>>
>> vaddr_t is defined to 32-bit for arm32 or 64-bit for arm64. So I think
>> it would be fine to use the generic PAGE_OFFSET() implementation.
> 
> Will switch.
> 
>>> --- /dev/null
>>> +++ b/xen/include/asm-arm/page-shift.h
>>
>> The name of the file looks a bit odd given that *_BITS are also defined
>> in it. So how about renaming to page-size.h?
> 
> I was initially meaning to use that name, but these headers
> specifically don't define any sizes - *_BITS are still shift
> values, at least in a way. If the current name isn't liked, my
> next best suggestion would then be page-bits.h.

I would be happy with page-bits.h.

> 
>>> @@ -0,0 +1,15 @@
>>> +#ifndef __ARM_PAGE_SHIFT_H__
>>> +#define __ARM_PAGE_SHIFT_H__
>>> +
>>> +#define PAGE_SHIFT              12
>>> +
>>> +#define PAGE_OFFSET(ptr)        ((vaddr_t)(ptr) & ~PAGE_MASK)
>>> +
>>> +#ifdef CONFIG_ARM_64
>>> +#define PADDR_BITS              48
>>
>> Shouldn't we define VADDR_BITS here?
> 
> See the description - it's unused anyway. I'm fine any of the three
> possible ways:
> 1) keep as is in v1
> 2) drop altogether
> 3) also #define for 64-bit (but then you need to tell me whether 64
>     is the right value to use, or what the correct one would be)

I would go with 2).

> 
>> But I wonder whether VADDR_BITS
>> should be defined as sizeof(vaddr_t) * 8.
>>
>> This would require to include asm/types.h.
> 
> Which I'd specifically like to avoid. Plus use of sizeof() also
> precludes the use of respective #define-s in #if-s.

Fair point!

>>> --- a/xen/include/asm-x86/desc.h
>>> +++ b/xen/include/asm-x86/desc.h
>>> @@ -1,6 +1,8 @@
>>>    #ifndef __ARCH_DESC_H
>>>    #define __ARCH_DESC_H
>>>    
>>> +#include <asm/page.h>
>>
>> May I ask why you are including <asm/page.h> and not <xen/page-size.h> here?
> 
> Because of
> 
> DECLARE_PER_CPU(l1_pgentry_t, gdt_l1e);
> 
> and
> 
> DECLARE_PER_CPU(l1_pgentry_t, compat_gdt_l1e);
> 
> at least (didn't check further).
Thanks for the explanation!


> Jan
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 09:30:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 09:30:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44276.79371 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl7Pg-0003dO-5K; Fri, 04 Dec 2020 09:30:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44276.79371; Fri, 04 Dec 2020 09:30:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl7Pg-0003cr-1H; Fri, 04 Dec 2020 09:30:04 +0000
Received: by outflank-mailman (input) for mailman id 44276;
 Fri, 04 Dec 2020 09:30:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kl7Pe-0003RM-UI
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 09:30:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kl7Pd-00058p-UG; Fri, 04 Dec 2020 09:30:01 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kl7Pd-00023o-NO; Fri, 04 Dec 2020 09:30:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Pg2ajvPOv+TopAudDD1Tf0w/hz1nlX0P6LwqAjlW1Ng=; b=d+QMPl6syT1//l2wjZDHRw5apZ
	2m4H5DSxEJ6LflcTySRxG4TyTO4mwicQv1ER0dphy+OyBWl6tPDGVJbfcW2kgh3qbNhrRfY1k36y4
	LaTCPJiw1uSW1NPmhaVMbWgEytw853xJ0gEJkk6SirXSd1BQw4BtFvn4++i9ubpQCwg4=;
Subject: Re: [PATCH 2/2] mm: split out mfn_t / gfn_t / pfn_t definitions and
 helpers
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Hongyan Xia <hx242@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <75484377-160c-a529-1cfc-96de86cfc550@suse.com>
 <fb4de786-7302-3336-dcb4-1a388bee34bc@suse.com>
 <9c240acd-f3ef-6775-eb4b-6e3b14251e51@xen.org>
 <320d042c-2e37-f5ef-ce2f-2d4c97901bae@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <3ef55770-70c3-f3d6-371b-79eb7a286466@xen.org>
Date: Fri, 4 Dec 2020 09:29:59 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <320d042c-2e37-f5ef-ce2f-2d4c97901bae@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 03/12/2020 09:39, Jan Beulich wrote:
> On 02.12.2020 18:35, Julien Grall wrote:
>> On 02/12/2020 14:50, Jan Beulich wrote:
>>> xen/mm.h has heavy dependencies, while in a number of cases only these
>>> type definitions are needed. This separation then also allows pulling in
>>> these definitions when including xen/mm.h would cause cyclic
>>> dependencies.
>>>
>>> Replace xen/mm.h inclusion where possible in include/xen/. (In
>>> xen/iommu.h also take the opportunity and correct the few remaining
>>> sorting issues.)
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> --- a/xen/arch/x86/acpi/power.c
>>> +++ b/xen/arch/x86/acpi/power.c
>>> @@ -10,7 +10,6 @@
>>>     * Slimmed with Xen specific support.
>>>     */
>>>    
>>> -#include <asm/io.h>
>>
>> This seems to be unrelated of this work.
> 
> Well spotted, but the answer really is "yes and no". My first
> attempt at fixing build issues from this and similar asm/io.h
> inclusions was to remove such unnecessary ones. But this didn't
> work out - I had to fix the header instead. If you think this
> extra cleanup really does any harm here, I can drop it. But I'd
> prefer to keep it.

I am fine with keeping it here. Can you mention it in the commit message?

> 
>>> --- /dev/null
>>> +++ b/xen/include/xen/frame-num.h
>>
>> It would feel more natural to me if the file is named mm-types.h.
> 
> Indeed I was first meaning to use this name (not the least
> because I don't particularly like the one chosen, but I also
> couldn't think of a better one). However, then things like
> struct page_info would imo also belong there (more precisely in
> asm/mm-types.h to be included from here), which is specifically
> something I want to avoid. Yes, eventually we may (I'm inclined
> to even say "will") want such a header, but I still want to
> keep these even more fundamental types in a separate one.
> Otherwise we'll again end up with files including mm-types.h
> just because of needing e.g. gfn_t for a function declaration.
> (Note that the same isn't the case for struct page_info, which
> can simply be forward declared.)
Thanks for the explanation. AFAICT, this file will mostly contain the 
typesafe MM types. So how about naming it mm-typesafe.h? Or maybe 
mm-frame.h?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 09:43:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 09:43:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44284.79383 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl7c4-00058w-9j; Fri, 04 Dec 2020 09:42:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44284.79383; Fri, 04 Dec 2020 09:42:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl7c4-00058p-6Y; Fri, 04 Dec 2020 09:42:52 +0000
Received: by outflank-mailman (input) for mailman id 44284;
 Fri, 04 Dec 2020 09:42:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kl7c2-00058h-Nk; Fri, 04 Dec 2020 09:42:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kl7c2-0005Sd-I4; Fri, 04 Dec 2020 09:42:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kl7c2-0003Ff-9m; Fri, 04 Dec 2020 09:42:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kl7c2-0003hz-9H; Fri, 04 Dec 2020 09:42:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Xvn5tFHhs3oPvev1goeg0xMbO996+kTp9XnW6PBUR+o=; b=2efhCm69o4UqOJ9PvlvNhLnrO/
	F6q+zuYPRewh4DyIlvOh+BWPdt1v9AzEmA5r2JmP5rz0Rwn/RpOP8BXzE+58aqdzxnoY3RaU9kPyn
	yraljdZdHNGmuED+sGfAMO7NPQKyhzusKGLfzsbL1nDU+cJgzjebO7oudOYuUiM/W5E4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157195-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157195: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:build-armhf-libvirt:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    libvirt:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    libvirt=0d05d51b715390e08cd112f83e03b6776412aaeb
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 04 Dec 2020 09:42:50 +0000

flight 157195 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157195/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 libvirt              0d05d51b715390e08cd112f83e03b6776412aaeb
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  147 days
Failing since        151818  2020-07-11 04:18:52 Z  146 days  141 attempts
Testing same since   157195  2020-12-04 04:19:16 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 30923 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 09:43:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 09:43:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44289.79398 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl7cS-0005Fh-KW; Fri, 04 Dec 2020 09:43:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44289.79398; Fri, 04 Dec 2020 09:43:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl7cS-0005Fa-GT; Fri, 04 Dec 2020 09:43:16 +0000
Received: by outflank-mailman (input) for mailman id 44289;
 Fri, 04 Dec 2020 09:43:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c9tS=FI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kl7cQ-0005FN-SC
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 09:43:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 832b9660-dd75-4df4-88ce-24ee25bcd64b;
 Fri, 04 Dec 2020 09:43:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C61E8AC2E;
 Fri,  4 Dec 2020 09:43:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 832b9660-dd75-4df4-88ce-24ee25bcd64b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607074992; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=bCns0iio/bkHOSn5gRHhr4Rz3JzMI4PI9UrqIVqaGS8=;
	b=EE3chOfKp9ENb5juCXGd8fmqq8SGskdQ9IUxvTB7+ixuxs30XBtrus1oEXZyBeg7g8U77J
	EvJVB8lKEhBcIapbd+5uw7Sjt3ZHKHoyBHK9NGGaOQ/i/XW3ulBl3WVLsrLN9O0FoEAKFi
	QCg5nzBH4OrA9RGZSyaw+exFM/PAa/w=
Subject: Re: [PATCH v5 1/4] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_evtchn_fifo, ...
To: paul@xen.org
Cc: 'Paul Durrant' <pdurrant@amazon.com>,
 'Eslam Elnikety' <elnikety@amazon.com>, 'Ian Jackson' <iwj@xenproject.org>,
 'Wei Liu' <wl@xen.org>, 'Anthony PERARD' <anthony.perard@citrix.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Julien Grall' <julien@xen.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Christian Lindig' <christian.lindig@citrix.com>,
 'David Scott' <dave@recoil.org>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201203124159.3688-1-paul@xen.org>
 <20201203124159.3688-2-paul@xen.org>
 <fea91a65-1d7c-cd46-81a2-9a6bcb690ed1@suse.com>
 <00ee01d6c98b$507af1c0$f170d540$@xen.org>
 <8a4a2027-0df3-aee2-537a-3d2814b329ec@suse.com>
 <00f601d6c996$ce3908d0$6aab1a70$@xen.org>
 <946280c7-c7f7-c760-c0d3-db91e6cde68a@suse.com>
 <011201d6ca16$ae14ac50$0a3e04f0$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4fb9fb4c-5849-25f1-ff72-ba3a046d3fd8@suse.com>
Date: Fri, 4 Dec 2020 10:43:13 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <011201d6ca16$ae14ac50$0a3e04f0$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 04.12.2020 09:22, Paul Durrant wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 04 December 2020 07:53
>>
>> On 03.12.2020 18:07, Paul Durrant wrote:
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>> Sent: 03 December 2020 15:57
>>>>
>>>> ... this sounds to me more like workarounds for buggy guests than
>>>> functionality the hypervisor _needs_ to have. (I can appreciate
>>>> the specific case here for the specific scenario you provide as
>>>> an exception.)
>>>
>>> If we want to have a hypervisor that can be used in a cloud environment
>>> then Xen absolutely needs this capability.
>>
>> As per above you can conclude that I'm still struggling to see the
>> "why" part here.
>>
> 
> Imagine you are a customer. You boot your OS and everything is just fine... you run your workload and all is good. You then shut down your VM and re-start it. Now it starts to crash. Who are you going to blame? You did nothing to your OS or application s/w, so of course you are going to blame the cloud provider.

That's a situation OSes are in all the time. Buggy applications may
stop working on newer OS versions. It's still the application that's
in need of updating then. I guess OSes may choose to work around
some very common applications' bugs, but I'd then wonder on what
basis "very common" gets established. I dislike the underlying
asymmetry / inconsistency (if not unfairness) of such a model,
despite seeing that there may be business reasons leading people to
think they want something like this.

> Now imagine you are the cloud provider, running Xen. What you did was start to upgrade your hosts from an older version of Xen to a newer version of Xen, to pick up various bug fixes and make sure you are running a version that is within the security support envelope. You identify that your customer's problem is a bug in their OS that was latent on the old version of the hypervisor but is now manifesting on the new one because it has buggy support for a hypercall that was added between the two versions. How are you going to fix this issue, and get your customer up and running again? Of course you'd like your customer to upgrade their OS, but they can't even boot it to do that. You really need a solution that can restore the old VM environment, at least temporarily, for that customer.

Boot the guest on a not-yet-upgraded host again, to update its kernel?

>>>> While it has other downsides, Jürgen's proposal doesn't have any
>>>> similar scalability issue afaics. Another possible model would
>>>> seem to be to key new hypercalls to hypervisor CPUID leaf bits,
>>>> and derive their availability from a guest's CPUID policy. Of
>>>> course this won't work when needing to retrofit guarding like
>>>> you want to do here.
>>>
>>> Ok, I'll take a look hypfs as an immediate solution, if that's preferred.
>>
>> Well, as said - there are also downsides with that approach. I'm
>> not convinced it should be just the three of us who determine which
>> one is better overall, especially since you don't seem to be convinced
>> anyway (and I'm not going to claim I am, in either direction). It
>> is my understanding that, based on the claimed need for this (see
>> above), this may become very widely used functionality, and if so
>> we want to make sure we won't regret the route we took.
>>
> 
> Agreed, but we don't need the final top-to-bottom solution now. The only part of this that introduces something stable is the libxl part, and I think the 'feature' bitmap is reasonably future-proof (being modelled on the viridian feature bitmap, which has been extended over several releases since it was introduced).
> 
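The kind of per-domain 'feature' bitmap described above can be sketched as below. The bit names here are hypothetical, purely to illustrate the mechanism; the real bitmap and its bit assignments are defined by the patch series itself:

```c
#include <stdint.h>
#include <stdbool.h>

/*
 * Hypothetical bit positions, for illustration only -- not the
 * actual assignments from the series under review.
 */
#define FEAT_EVTCHN_FIFO  0
#define FEAT_GNTTAB_V2    1

/* Test whether a given feature bit is set in a domain's bitmap. */
static inline bool feature_enabled(uint64_t bitmap, unsigned int bit)
{
    return (bitmap >> bit) & 1;
}
```

A bitmap like this can grow by assigning new bits over several releases without changing the stable interface, which is the extensibility property being claimed for it.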
>> For starters, just to get a better overall picture, besides the two
>> overrides you add here, do you have any plans to retroactively add
>> further controls for past ABI additions?
> 
> I don't have any specific plans. The two I deal with here are the causes of observed differences in guest behaviour, one being an actual crash and the other affecting PV driver behaviour which may or may not be the cause of other crashes... but still something we need to have control over.

I.e. basically an arbitrary choice. This is again a symmetry /
consistency / fairness issue. If a guest admin pesters his cloud provider
enough, they may get the cloud provider to make hypervisor adjustments.
If another guest admin simply does the technically correct thing and
works out what needs fixing in the kernel, they may then conclude they
simply didn't shout loudly enough to spare themselves the work.

Anyway - I guess I'm about the only one seeing this from a purely
technical, non-business perspective. I suppose I'll continue looking at
the code for the purpose of finding issues (hopefully there aren't going
to be any), but will stay away from acking any parts of this. Whoever
agrees with the underlying concept can then provide their acks. (As said
elsewhere, in the particular case of the kexec issue with FIFO event
channels here, I could maybe see that as halfway acceptable
justification, although I did voice my concerns in that regard as well.
It's still working around a shortcoming in guest software.)

Nevertheless I'd like to gain clarity of what the plans are with future
ABI additions.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 09:51:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 09:51:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44299.79409 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl7kD-0006Lp-Kf; Fri, 04 Dec 2020 09:51:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44299.79409; Fri, 04 Dec 2020 09:51:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl7kD-0006Li-Hp; Fri, 04 Dec 2020 09:51:17 +0000
Received: by outflank-mailman (input) for mailman id 44299;
 Fri, 04 Dec 2020 09:51:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c9tS=FI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kl7kB-0006Ld-Qu
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 09:51:15 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 331bd1be-58ec-468a-af75-7c037ba931ff;
 Fri, 04 Dec 2020 09:51:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A3F7EACB5;
 Fri,  4 Dec 2020 09:51:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 331bd1be-58ec-468a-af75-7c037ba931ff
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607075473; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qbenaPDgJgTp3fFbIP/WDKY+cqxPgsUiAs4OIWpmYTg=;
	b=G3OsXEJ56W9cnoEU8bswQ0U83AhoAaO0SEVnjfXKQGGdh7uToSEVh3PY+S3p/43YNU3P35
	DdyAxV2dGbvpUpnLb3i41gvK+Z10OkRCKmkOuZYn1TiA6yhz8T5oXdeMfBKw0NSouBp85U
	Ds/BWQ0cX3VL1N8pm0yan3eIzAUsCPM=
Subject: Re: [PATCH 2/2] mm: split out mfn_t / gfn_t / pfn_t definitions and
 helpers
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Hongyan Xia <hx242@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <75484377-160c-a529-1cfc-96de86cfc550@suse.com>
 <fb4de786-7302-3336-dcb4-1a388bee34bc@suse.com>
 <9c240acd-f3ef-6775-eb4b-6e3b14251e51@xen.org>
 <320d042c-2e37-f5ef-ce2f-2d4c97901bae@suse.com>
 <3ef55770-70c3-f3d6-371b-79eb7a286466@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ece618f3-26f2-0315-a949-b23023900f06@suse.com>
Date: Fri, 4 Dec 2020 10:51:14 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <3ef55770-70c3-f3d6-371b-79eb7a286466@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04.12.2020 10:29, Julien Grall wrote:
> On 03/12/2020 09:39, Jan Beulich wrote:
>> On 02.12.2020 18:35, Julien Grall wrote:
>>> On 02/12/2020 14:50, Jan Beulich wrote:
>>>> xen/mm.h has heavy dependencies, while in a number of cases only these
>>>> type definitions are needed. This separation then also allows pulling in
>>>> these definitions when including xen/mm.h would cause cyclic
>>>> dependencies.
>>>>
>>>> Replace xen/mm.h inclusion where possible in include/xen/. (In
>>>> xen/iommu.h also take the opportunity and correct the few remaining
>>>> sorting issues.)
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>
>>>> --- a/xen/arch/x86/acpi/power.c
>>>> +++ b/xen/arch/x86/acpi/power.c
>>>> @@ -10,7 +10,6 @@
>>>>     * Slimmed with Xen specific support.
>>>>     */
>>>>    
>>>> -#include <asm/io.h>
>>>
>>> This seems to be unrelated of this work.
>>
>> Well spotted, but the answer really is "yes and no". My first
>> attempt at fixing build issues from this and similar asm/io.h
>> inclusions was to remove such unnecessary ones. But this didn't
>> work out - I had to fix the header instead. If you think this
>> extra cleanup really does any harm here, I can drop it. But I'd
>> prefer to keep it.
> 
> I am fine with keeping it here. Can you mention it in the commit message?

I've added a paragraph.

>>>> --- /dev/null
>>>> +++ b/xen/include/xen/frame-num.h
>>>
>>> It would feel more natural to me if the file is named mm-types.h.
>>
>> Indeed I was first meaning to use this name (not the least
>> because I don't particularly like the one chosen, but I also
>> couldn't think of a better one). However, then things like
>> struct page_info would imo also belong there (more precisely in
>> asm/mm-types.h to be included from here), which is specifically
>> something I want to avoid. Yes, eventually we may (I'm inclined
>> to even say "will") want such a header, but I still want to
>> keep these even more fundamental types in a separate one.
>> Otherwise we'll again end up with files including mm-types.h
>> just because of needing e.g. gfn_t for a function declaration.
>> (Note that the same isn't the case for struct page_info, which
>> can simply be forward declared.)
> Thanks for the explanation. AFAICT, this file will mostly contain 
> typesafe types for MM. So how about naming it mm-typesafe.h? Or maybe 
> mm-frame.h?

Hmm, yes, why not. I guess I'd slightly prefer the latter.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 10:02:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 10:02:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44308.79421 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl7ul-0007Xs-Mg; Fri, 04 Dec 2020 10:02:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44308.79421; Fri, 04 Dec 2020 10:02:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl7ul-0007Xl-Jj; Fri, 04 Dec 2020 10:02:11 +0000
Received: by outflank-mailman (input) for mailman id 44308;
 Fri, 04 Dec 2020 10:02:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c9tS=FI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kl7uk-0007Xg-DJ
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 10:02:10 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2fb25cac-dc63-4e1c-909b-5931cc4c5452;
 Fri, 04 Dec 2020 10:02:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A9816AFB4;
 Fri,  4 Dec 2020 10:02:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2fb25cac-dc63-4e1c-909b-5931cc4c5452
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607076128; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=yWdjWukSl+2mN1AWCmKhG0bEjMooLyrm1HRLOc99GTs=;
	b=T0cs8FczasGJrkUjdPIKLQ+aFXUAAtVp8jUmPvvmjDIA+0dV1NMZkslQPXoibrOQeqaiAS
	OKsmzla9oCWQcrpfOu2r/j47uHWCf30vuDok1kbfvBsla6CUTLa/WUPCSntB9jHVRt9/sn
	rwx8Ogwyf2zoOjNrjsN9Mhd7Xhn/rlU=
Subject: Re: [PATCH] gnttab: don't allocate status frame tracking array when
 "gnttab=max_ver:1"
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <a484cc88-f41d-5d38-d098-4eda297569a1@suse.com>
 <bf921997-fc9a-b1b9-78d9-7a7f85fe4608@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6fb5dd40-d9b8-a3c6-9616-070d5fadb59b@suse.com>
Date: Fri, 4 Dec 2020 11:02:09 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <bf921997-fc9a-b1b9-78d9-7a7f85fe4608@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 02.12.2020 19:31, Julien Grall wrote:
> On 05/11/2020 15:55, Jan Beulich wrote:
>> This array can be large when many grant frames are permitted; avoid
>> allocating it when it's not going to be used anyway. 
> 
> Given there are not many users of grant-table v2, would it make sense to
> avoid allocating the array until the guest starts using grant-table v2?

Hmm, yes, seems possible. Let me give this a try.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 10:16:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 10:16:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44316.79434 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl88N-0000HJ-SO; Fri, 04 Dec 2020 10:16:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44316.79434; Fri, 04 Dec 2020 10:16:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl88N-0000HC-PK; Fri, 04 Dec 2020 10:16:15 +0000
Received: by outflank-mailman (input) for mailman id 44316;
 Fri, 04 Dec 2020 10:16:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kl88M-0000H4-17; Fri, 04 Dec 2020 10:16:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kl88L-0006JP-Om; Fri, 04 Dec 2020 10:16:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kl88L-0004WT-EM; Fri, 04 Dec 2020 10:16:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kl88L-0006P9-Dq; Fri, 04 Dec 2020 10:16:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CaBulPZ0qPCJG0jOesnN1fuY8DU5H8nu4AEhWgqfZ44=; b=N0LdBhgyyfjUgMMHHaZYxxu9CW
	zKcWp4Weqtt5qB/M5M105x/TxHA0YKgEujfWAUYCZt90ycHueIqrqVDe+6uFM2DaPiHO94Auw7Kz6
	Q29JVnvid7+p5uR7yQlYJEUqgTW6mLmzTGgtkoWhVYj1dLQABhxGo0UsPVX4QG2ckAbY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157200-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157200: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=be3755af37263833cb3b1c6b1f2ba219bdf97ec3
X-Osstest-Versions-That:
    xen=aec46884784c2494a30221da775d4ac2c43a4d42
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 04 Dec 2020 10:16:13 +0000

flight 157200 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157200/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  be3755af37263833cb3b1c6b1f2ba219bdf97ec3
baseline version:
 xen                  aec46884784c2494a30221da775d4ac2c43a4d42

Last test of basis   157163  2020-12-02 19:01:29 Z    1 days
Testing same since   157200  2020-12-04 08:00:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Diederik de Haas <didi.debian@cknow.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   aec4688478..be3755af37  be3755af37263833cb3b1c6b1f2ba219bdf97ec3 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 10:36:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 10:36:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44332.79449 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl8S2-0002Om-Is; Fri, 04 Dec 2020 10:36:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44332.79449; Fri, 04 Dec 2020 10:36:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl8S2-0002Of-Ft; Fri, 04 Dec 2020 10:36:34 +0000
Received: by outflank-mailman (input) for mailman id 44332;
 Fri, 04 Dec 2020 10:36:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl8S1-0002Oa-WF
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 10:36:34 +0000
Received: from mail-wr1-f68.google.com (unknown [209.85.221.68])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a48f77cc-f3ce-4461-aeb7-5790db91d473;
 Fri, 04 Dec 2020 10:36:33 +0000 (UTC)
Received: by mail-wr1-f68.google.com with SMTP id p8so4823429wrx.5
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 02:36:33 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id y2sm3032650wrn.31.2020.12.04.02.36.31
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 02:36:31 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a48f77cc-f3ce-4461-aeb7-5790db91d473
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=iCwX12JvinfsBXupn1+ExlWFe4TW7BvvX0Qxxio66Zw=;
        b=iVfbt9oblLZ3wkcxcAUIliUWe5LPzw6qkNFEQQhYLUYuRQChq7JKiqLNv183nqJXC0
         Z7RhHYYjpAvs/ckOYTEfI5eosJ5KacanY4rBOUQq7GhUTWGcnWuB8PBOXsnQNaVU5lPS
         Oz2Rbs7n2rldQFRQjgsDEtNd5glNvKG/DJe8AvAHxUQ4bMNiFhdMN99k0ynOYMdcTG2Q
         VvTTGEsi3U+oyYbdRNxZSlkSAjdUfK74iVYgKtXn+NDclsf9Uhlqj24QTB6GZ8etKhNR
         9NxToae/BXf51sr8mtK/sXf1ev7YqEuFJquHPtvwTHNG4ERFf7D2Xy34FJxURa6FeNyQ
         NGIA==
X-Gm-Message-State: AOAM531QhjfMnEODAdKeI91c7IC2zCLt13THf95KK90htEKFiTSfxHcR
	TVuL1hb2n9wU0YqwYen9QqA=
X-Google-Smtp-Source: ABdhPJydpAA/xOKVpHwzXnu0wlIzM0yGBGdQWdr9LwoJ2pVfMgeiFJDLGZJeMErSlU26OEDEgpFmpA==
X-Received: by 2002:adf:f9cb:: with SMTP id w11mr4245212wrr.1.1607078192458;
        Fri, 04 Dec 2020 02:36:32 -0800 (PST)
Date: Fri, 4 Dec 2020 10:36:30 +0000
From: Wei Liu <wl@xen.org>
To: paul@xen.org
Cc: 'Wei Liu' <wl@xen.org>, xen-devel@lists.xenproject.org
Subject: Re: [EXTERNAL] [PATCH v3 00/13] viridian: add support for
 ExProcessorMasks...
Message-ID: <20201204103630.2v5ftevpqlhswqtg@liuwe-devbox-debian-v2>
References: <20201124190744.11343-1-paul@xen.org>
 <001b01d6c7eb$9c6d0240$d54706c0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <001b01d6c7eb$9c6d0240$d54706c0$@xen.org>
User-Agent: NeoMutt/20180716

On Tue, Dec 01, 2020 at 02:09:40PM -0000, Paul Durrant wrote:
> Wei,
> 
>   I'll likely send a v4 to address the style nit Jan picked up in patch #1, but the rest should be stable now. Could you have a look over it?

I've only been able to skim-read this patch set, but I agree in general
that adding ExProcessorMasks support is a good idea. As far as I can
tell, it is needed to cope with more than 64 CPUs.

With Jan's comments addressed.

Acked-by: Wei Liu <wl@xen.org>

Wei.



From xen-devel-bounces@lists.xenproject.org Fri Dec 04 10:40:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 10:40:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44337.79461 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl8W4-0003FX-4y; Fri, 04 Dec 2020 10:40:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44337.79461; Fri, 04 Dec 2020 10:40:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl8W4-0003FQ-1c; Fri, 04 Dec 2020 10:40:44 +0000
Received: by outflank-mailman (input) for mailman id 44337;
 Fri, 04 Dec 2020 10:40:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl8W3-0003FL-5v
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 10:40:43 +0000
Received: from mail-wr1-f66.google.com (unknown [209.85.221.66])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 23fc0eca-0223-4238-8d6f-5cb27cc9dfdc;
 Fri, 04 Dec 2020 10:40:42 +0000 (UTC)
Received: by mail-wr1-f66.google.com with SMTP id r3so4839421wrt.2
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 02:40:42 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id v4sm3049528wru.12.2020.12.04.02.40.40
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 02:40:40 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 23fc0eca-0223-4238-8d6f-5cb27cc9dfdc
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=TbXT0pcbwLfXrTNUjS4Zuw4lQG6GEg6Z33TgYA1WqXo=;
        b=E83tjs2i0mqK2Et0oik6pLfMtxR1Ocn2N2UZwwBaIA6sjXdxEtf41hTy2bxnEENK+S
         Abz6jIX7Yoyr4OMo47mTo/GdfrjLC0G2Gi2Abx6bhn9AGZbRC1/uIWT+ZkcuTht4gY5T
         vQ8HW6mDvAGUY/XL0iuQ4g4XBsf5E3lzD/D7mLZFY88Qyw0t4N2GOjuRt65nryge4E/o
         HRoFAe/oRsR9h55vRP316I6X5R9AlaoIrnJAjEag0eIENwwexPcgyX1A2vib32jegH2Y
         B4tD7IaoSZ33sHJDdSzn5ONEiaBhvjWGEMTNgJrF9i1OsyzuHDCBBeGl2MgW3R09aZ/x
         gESg==
X-Gm-Message-State: AOAM531nXce57ZVNDAu+bewWPPqY+ZdyjxFVA7HBgfM63dVqMWOCGgns
	vM5HGSDIqYX6ILnwTH73YiEWcL5hWws=
X-Google-Smtp-Source: ABdhPJwHCfEiNZ3/VZH9LBA2jIUVIV9LS2Y763e1a/zcnc5OV+rZ/j5eLdXL0gHSsMEDwqhrRBsDaQ==
X-Received: by 2002:a5d:4b81:: with SMTP id b1mr4172984wrt.372.1607078441487;
        Fri, 04 Dec 2020 02:40:41 -0800 (PST)
Date: Fri, 4 Dec 2020 10:40:39 +0000
From: Wei Liu <wl@xen.org>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: andrew.cooper3@citrix.com, cardoe@cardoe.com, wl@xen.org,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v3 01/12] automation: add a QEMU aarch64 smoke test
Message-ID: <20201204104039.44diltm2gg4twpxn@liuwe-devbox-debian-v2>
References: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>
 <20201125042745.31986-1-sstabellini@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201125042745.31986-1-sstabellini@kernel.org>
User-Agent: NeoMutt/20180716

On Tue, Nov 24, 2020 at 08:27:34PM -0800, Stefano Stabellini wrote:
> Use QEMU to start Xen (just the hypervisor) up until it stops because
> there is no dom0 kernel to boot.
> 
> It is based on the existing build job unstable-arm64v8.
> 
> Also use make -j$(nproc) to build Xen.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> ---
> Changes in v2:
> - fix x86_32 build
> ---
>  automation/gitlab-ci/test.yaml         | 22 ++++++++++++++++++
>  automation/scripts/build               |  6 ++---
>  automation/scripts/qemu-smoke-arm64.sh | 32 ++++++++++++++++++++++++++
>  3 files changed, 57 insertions(+), 3 deletions(-)
>  create mode 100755 automation/scripts/qemu-smoke-arm64.sh
> 
> diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
> index 793feafe8b..35346e3f6e 100644
> --- a/automation/gitlab-ci/test.yaml
> +++ b/automation/gitlab-ci/test.yaml
> @@ -22,6 +22,28 @@ build-each-commit-gcc:
>      - /^coverity-tested\/.*/
>      - /^stable-.*/
>  
> +qemu-smoke-arm64-gcc:
> +  stage: test
> +  image: registry.gitlab.com/xen-project/xen/${CONTAINER}
> +  variables:
> +    CONTAINER: debian:unstable-arm64v8
> +  script:
> +    - ./automation/scripts/qemu-smoke-arm64.sh 2>&1 | tee qemu-smoke-arm64.log
> +  dependencies:
> +    - debian-unstable-gcc-arm64
> +  artifacts:
> +    paths:
> +      - smoke.serial
> +      - '*.log'
> +    when: always
> +  tags:
> +    - arm64
> +  except:
> +    - master
> +    - smoke
> +    - /^coverity-tested\/.*/
> +    - /^stable-.*/
> +
>  qemu-smoke-x86-64-gcc:
>    stage: test
>    image: registry.gitlab.com/xen-project/xen/${CONTAINER}
> diff --git a/automation/scripts/build b/automation/scripts/build
> index 0cd0f3971d..7038e5eb50 100755
> --- a/automation/scripts/build
> +++ b/automation/scripts/build
> @@ -10,9 +10,9 @@ cc-ver()
>  
>  # random config or default config
>  if [[ "${RANDCONFIG}" == "y" ]]; then
> -    make -C xen KCONFIG_ALLCONFIG=tools/kconfig/allrandom.config randconfig
> +    make -j$(nproc) -C xen KCONFIG_ALLCONFIG=tools/kconfig/allrandom.config randconfig
>  else
> -    make -C xen defconfig
> +    make -j$(nproc) -C xen defconfig
>  fi
>  
>  # build up our configure options
> @@ -45,7 +45,7 @@ make -j$(nproc) dist
>  # Extract artifacts to avoid getting rewritten by customised builds
>  cp xen/.config xen-config
>  mkdir binaries
> -if [[ "${XEN_TARGET_ARCH}" == "x86_64" ]]; then
> +if [[ "${XEN_TARGET_ARCH}" != "x86_32" ]]; then
>      cp xen/xen binaries/xen
>  fi
>  
> diff --git a/automation/scripts/qemu-smoke-arm64.sh b/automation/scripts/qemu-smoke-arm64.sh
> new file mode 100755
> index 0000000000..a7efbf8b6f
> --- /dev/null
> +++ b/automation/scripts/qemu-smoke-arm64.sh
> @@ -0,0 +1,32 @@
> +#!/bin/bash
> +
> +set -ex
> +
> +# Install QEMU
> +export DEBIAN_FRONTEND=noninteractive
> +apt-get -qy update
> +apt-get -qy install --no-install-recommends qemu-system-aarch64 \
> +                                            u-boot-qemu
> +
> +# XXX Silly workaround to get the following QEMU command to work
> +cp /usr/share/qemu/pvh.bin /usr/share/qemu/efi-virtio.rom

Can you explain a bit more why this workaround works at all?

Not a blocking comment, but this will help other people who try to
modify this script.

Wei.


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 10:41:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 10:41:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44352.79473 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl8Wp-0003UG-Eu; Fri, 04 Dec 2020 10:41:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44352.79473; Fri, 04 Dec 2020 10:41:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl8Wp-0003U9-Bf; Fri, 04 Dec 2020 10:41:31 +0000
Received: by outflank-mailman (input) for mailman id 44352;
 Fri, 04 Dec 2020 10:41:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl8Wo-0003U1-3C
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 10:41:30 +0000
Received: from mail-wm1-f65.google.com (unknown [209.85.128.65])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fb43b1b6-09f0-4e33-961b-4c76a0077415;
 Fri, 04 Dec 2020 10:41:29 +0000 (UTC)
Received: by mail-wm1-f65.google.com with SMTP id g185so6583971wmf.3
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 02:41:29 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id h204sm2772306wme.17.2020.12.04.02.41.27
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 02:41:28 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fb43b1b6-09f0-4e33-961b-4c76a0077415
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=JzUvKkkB5zPz+M9oSf5zE+ISfTid8bcYA9Fpwf6hT6A=;
        b=Qu2z8vk1Zis4rL7v8yzsUvRgn6/pRwYvPXXOILKeZAYFLOAnGtL0kHDggF6eWZhCNh
         cdY1JHJB6BG6tcoW07wadaPKtu38i88xugIJtTVWLXuxSHZhlGsO4DzvcrsJTQhmbT2l
         ln8qsxjT9tfT2MSlw/ORrxrcB7FNN3Ellg3XHRC+ETUOv2a9g3SFtCpq1LHbdRJTtZct
         L+Pd/w/r0SXA3IrHgTXXB1xiLyH0M4u8AgwgWmsmJnOkuifaEI0r1XyZCk1GOwO1htO3
         u6RFxAua+wZFfvdKf+Pn2gnqW+w264ZY7eSxpfu+mzoCqBAdwWRrbXAuFrw14MfaWlnw
         PeBA==
X-Gm-Message-State: AOAM5303DTajaZ0iYx4nyg3oWX5d1q6/7dA3jWsAs00dtGmnuk57KEhJ
	/GKtDrNq8FpwEVEEuazAJvg=
X-Google-Smtp-Source: ABdhPJxfaVQLgz8YJPpsxaDMVgP8Ud6f3flhZmAxMl20YTnGBVuAQCbED6JDmyJIJFXA63Bqy9l99A==
X-Received: by 2002:a1c:f219:: with SMTP id s25mr3487486wmc.67.1607078488618;
        Fri, 04 Dec 2020 02:41:28 -0800 (PST)
Date: Fri, 4 Dec 2020 10:41:26 +0000
From: Wei Liu <wl@xen.org>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: andrew.cooper3@citrix.com, cardoe@cardoe.com, wl@xen.org,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v3 00/12] automation: improvements (mostly) for arm64
Message-ID: <20201204104126.liw4lt7vkvup5g2j@liuwe-devbox-debian-v2>
References: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>
User-Agent: NeoMutt/20180716

On Tue, Nov 24, 2020 at 08:27:21PM -0800, Stefano Stabellini wrote:
> Hi all,
> 
> This series does a few things:
> 
> 1) it introduces a simple Xen arm64 dom0less smoke test based on QEMU
> 2) it introduces alpine linux builds x86 and arm64
> 3) it introduces two tests artifacts containers
> 4) it uses said artifacts to create a dom0/domU arm64 test based on QEMU
> 
> The series is v3, but in reality only 1) above was sent out before (the
> first two patches). Everything else is new. All tests succeed currently.
> 

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 10:43:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 10:43:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44362.79484 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl8Yj-0003iA-RD; Fri, 04 Dec 2020 10:43:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44362.79484; Fri, 04 Dec 2020 10:43:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl8Yj-0003i3-OK; Fri, 04 Dec 2020 10:43:29 +0000
Received: by outflank-mailman (input) for mailman id 44362;
 Fri, 04 Dec 2020 10:43:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c9tS=FI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kl8Yi-0003hJ-DN
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 10:43:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id be81bdac-6978-4641-b6f8-899d2e316793;
 Fri, 04 Dec 2020 10:43:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C5CE9ACA8;
 Fri,  4 Dec 2020 10:43:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: be81bdac-6978-4641-b6f8-899d2e316793
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607078606; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=tdGwiXGrtPfpIpBSzV9nRg16FV7D/t1MJpPdOSzwDQg=;
	b=YMHvT5MH8Q4Gi4JZmmrcK3/x4tveblxtHaO2Ruh47dnZ1E+gxwaiS57bEyo4efU+j8jh1F
	3gpw7jAT+SEKtt9F59dC257IZtAbl+mjzd1/5TyIpjfjKgn7x37tcTjJQDj9zo776+XGKA
	WjG+yX/nPvaCvLli8oT73srnJBN9i5E=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] memory: avoid pointless continuation in
 xenmem_add_to_physmap()
Message-ID: <e41fb847-684e-2502-5261-56108ebaeab0@suse.com>
Date: Fri, 4 Dec 2020 11:43:27 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Adjust so we uniformly avoid needlessly arranging for a continuation on
the last iteration.

Fixes: 5777a3742d88 ("IOMMU: hold page ref until after deferred TLB flush")
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -854,8 +854,9 @@ int xenmem_add_to_physmap(struct domain
             ++extra.ppage;
 
         /* Check for continuation if it's not the last iteration. */
-        if ( (++done >= ARRAY_SIZE(pages) && extra.ppage) ||
-             (xatp->size > done && hypercall_preempt_check()) )
+        if ( xatp->size > ++done &&
+             ((done >= ARRAY_SIZE(pages) && extra.ppage) ||
+              hypercall_preempt_check()) )
         {
             rc = start + done;
             break;


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 10:46:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 10:46:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44367.79497 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl8bT-0003rf-E7; Fri, 04 Dec 2020 10:46:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44367.79497; Fri, 04 Dec 2020 10:46:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl8bT-0003rY-B8; Fri, 04 Dec 2020 10:46:19 +0000
Received: by outflank-mailman (input) for mailman id 44367;
 Fri, 04 Dec 2020 10:46:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl8bS-0003rS-9x
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 10:46:18 +0000
Received: from mail-wm1-f67.google.com (unknown [209.85.128.67])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bdd3fa05-2d14-4d09-b8cf-eff83be0a826;
 Fri, 04 Dec 2020 10:46:17 +0000 (UTC)
Received: by mail-wm1-f67.google.com with SMTP id f190so6646299wme.1
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 02:46:17 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id d8sm2560640wmb.11.2020.12.04.02.46.16
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 02:46:16 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bdd3fa05-2d14-4d09-b8cf-eff83be0a826
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=u6WCj7PXxa0WiaVvXHv6H3WzGC6ALrNUNnuUj/qDI0Y=;
        b=Wg8KuVD1DLJytOfS6/Po3PFlYH7eCXHE4CS5UY7AR5q+e6eU+rmPblTJYimD6SbSVM
         emQwO6GiWLESyKL87PXQo5bm5nctNgZgfcFnX5yRFG5PcX86asmIB1rdu0Gm4RLCxLep
         /uSlBrcFCjm3kuyGtb8SQHX9y1f5Hgq8tFyzxQkz+rrxbkJ8Ui4Cu+QIGE9VXn7QQJ0M
         ro4tLzccQ5ECryvuF6zlAJNwnh8rmI3aZ7am5MivSEj1naRdhuFFOwCaJls5IrDQ8cO5
         uka90VFnBnT2+BfLKlpN9ihaGkPnKjF0/+RQfX6+S0ZFb3a6k+3V67qaq/RnVF2Im4Ta
         LR8A==
X-Gm-Message-State: AOAM533FNohLrcvBagIZ7CnQGyVUdtMHGTuu0dCT4H8lOKuRnbiA/8U8
	WYlJti0HFslchC1erCIvZyc=
X-Google-Smtp-Source: ABdhPJyy3eR0gDnAhX3C/HEH14SRnfUcMXfMIdcbSZhbM9HNGfwc60C52iBkI8/ic1dG8y9Z3i4LAg==
X-Received: by 2002:a05:600c:242:: with SMTP id 2mr3540085wmj.144.1607078776876;
        Fri, 04 Dec 2020 02:46:16 -0800 (PST)
Date: Fri, 4 Dec 2020 10:46:14 +0000
From: Wei Liu <wl@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	xen-devel@lists.xenproject.org,
	Diederik de Haas <didi.debian@cknow.org>
Subject: Re: [PATCH] Fix spelling errors.
Message-ID: <20201204104614.s7wjirrxudrztoe4@liuwe-devbox-debian-v2>
References: <a60e2c98183d7c873f4e306954f900614fcdb582.1606757711.git.didi.debian@cknow.org>
 <3702b443-fd7b-6000-a952-0ecec6fe318c@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <3702b443-fd7b-6000-a952-0ecec6fe318c@suse.com>
User-Agent: NeoMutt/20180716

On Wed, Dec 02, 2020 at 10:05:31AM +0100, Jan Beulich wrote:
> On 30.11.2020 18:39, Diederik de Haas wrote:
> > Only spelling errors; no functional changes.
> > 
> > In docs/misc/dump-core-format.txt there are a few more instances of
> > 'informations'. I'll leave that up to someone who can properly determine
> > how those sentences should be constructed.
> > 
> > Signed-off-by: Diederik de Haas <didi.debian@cknow.org>
> > 
> > Please CC me in replies as I'm not subscribed to this list.
> > ---
> >  docs/man/xl.1.pod.in                   | 2 +-
> >  docs/man/xl.cfg.5.pod.in               | 2 +-
> >  docs/man/xlcpupool.cfg.5.pod           | 2 +-
> >  tools/firmware/rombios/rombios.c       | 2 +-
> >  tools/libs/light/libxl_stream_read.c   | 2 +-
> >  tools/xl/xl_cmdtable.c                 | 2 +-
> 
> Since these are trivial and obvious adjustments, I don't intend to wait
> very long for an xl/libxl side ack before deciding to commit this.
> Perhaps just another day or so.

Please feel free to do this in the future for trivial changes like
fixing typos in comments and manpages.

Acked-by: Wei Liu <wl@xen.org>

> 
> Jan


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 10:47:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 10:47:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44373.79509 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl8cx-0003zp-RN; Fri, 04 Dec 2020 10:47:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44373.79509; Fri, 04 Dec 2020 10:47:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl8cx-0003zi-Mv; Fri, 04 Dec 2020 10:47:51 +0000
Received: by outflank-mailman (input) for mailman id 44373;
 Fri, 04 Dec 2020 10:47:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl8cw-0003zb-Ig
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 10:47:50 +0000
Received: from mail-wr1-f67.google.com (unknown [209.85.221.67])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c28f864-2c61-4c9a-b610-eb1e28479484;
 Fri, 04 Dec 2020 10:47:49 +0000 (UTC)
Received: by mail-wr1-f67.google.com with SMTP id r3so4860390wrt.2
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 02:47:49 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id u23sm2699076wmc.32.2020.12.04.02.47.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 02:47:48 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c28f864-2c61-4c9a-b610-eb1e28479484
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=kuA/yQ4vyj1G+DxnjxkDCulQAGgM5Ed5CjhJHkroJBQ=;
        b=SsOdunQ03Ev3RWuLCIkr8nniJiZ1S1vS8gca0mY5UEFh3WyeDS5j0rXphfcOs8tBos
         FxnsQwPAFelhn6WYTjVHk0d+zSH6SDienOgqDfMdwZtuz3BiiSD0AdTq+DgbbKZQhwTr
         mA7Y2jeb8tU14PJo9BaMo8P/1ikmnAOr0G+PTmdcnAZoS4SA4Rww2ukLz9vmYVtoRYg+
         Fn5WC2adnL2epqniYnck1W8mcvwQY0k7X9PfTheNZpP11X8szGqXkPjHXBJXwtFbv57b
         caixsTo95aUIBsUcm/EXYPYR7oOCxoi0/e70VubdvUgxUUNUO4ME13HM9P8w5WjinygB
         YiWQ==
X-Gm-Message-State: AOAM5322ir0sFEg4aQjaFbmdtQdj4fgyIzVtWY9Bx2EFxeSn06GAXEJd
	gDNKhS74ECWjnQpchQzgc8U=
X-Google-Smtp-Source: ABdhPJyDoCSpapC7Lo3EXfLV+9P/i+CHNVNN2n6OA6NRQMCGFghA0ou6ARfLrntyTiavL5aAFezIyQ==
X-Received: by 2002:a5d:400a:: with SMTP id n10mr4232204wrp.362.1607078868865;
        Fri, 04 Dec 2020 02:47:48 -0800 (PST)
Date: Fri, 4 Dec 2020 10:47:47 +0000
From: Wei Liu <wl@xen.org>
To: paul@xen.org
Cc: 'Jan Beulich' <jbeulich@suse.com>, 'Wei Liu' <wl@xen.org>,
	'Paul Durrant' <pdurrant@amazon.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v4 00/11] viridian: add support for ExProcessorMasks
Message-ID: <20201204104747.zh7qpxmlpzg6xl2n@liuwe-devbox-debian-v2>
References: <20201202092205.906-1-paul@xen.org>
 <fabc2720-3cbc-0b3f-1b09-23ec25189407@suse.com>
 <011301d6ca17$1a3fb690$4ebf23b0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <011301d6ca17$1a3fb690$4ebf23b0$@xen.org>
User-Agent: NeoMutt/20180716

On Fri, Dec 04, 2020 at 08:26:01AM -0000, Paul Durrant wrote:
> > -----Original Message-----
> > From: Jan Beulich <jbeulich@suse.com>
> > Sent: 04 December 2020 08:12
> > To: Wei Liu <wl@xen.org>; Paul Durrant <paul@xen.org>
> > Cc: Paul Durrant <pdurrant@amazon.com>; xen-devel@lists.xenproject.org
> > Subject: Re: [PATCH v4 00/11] viridian: add support for ExProcessorMasks
> > 
> > Wei,
> > 
> > On 02.12.2020 10:21, Paul Durrant wrote:
> > > From: Paul Durrant <pdurrant@amazon.com>
> > >
> > > Paul Durrant (11):
> > >   viridian: don't blindly write to 32-bit registers if 'mode' is invalid
> > >   viridian: move flush hypercall implementation into separate function
> > >   viridian: move IPI hypercall implementation into separate function
> > >   viridian: introduce a per-cpu hypercall_vpmask and accessor
> > >     functions...
> > >   viridian: use hypercall_vpmask in hvcall_ipi()
> > >   viridian: use softirq batching in hvcall_ipi()
> > >   viridian: add ExProcessorMasks variants of the flush hypercalls
> > >   viridian: add ExProcessorMasks variant of the IPI hypercall
> > >   viridian: log initial invocation of each type of hypercall
> > >   viridian: add a new '_HVMPV_ex_processor_masks' bit into
> > >     HVM_PARAM_VIRIDIAN...
> > >   xl / libxl: add 'ex_processor_mask' into
> > >     'libxl_viridian_enlightenment'
> > >
> > >  docs/man/xl.cfg.5.pod.in             |   8 +
> > >  tools/include/libxl.h                |   7 +
> > >  tools/libs/light/libxl_types.idl     |   1 +
> > >  tools/libs/light/libxl_x86.c         |   3 +
> > >  xen/arch/x86/hvm/viridian/viridian.c | 604 +++++++++++++++++++++------
> > >  xen/include/asm-x86/hvm/viridian.h   |  10 +
> > >  xen/include/public/hvm/params.h      |   7 +-
> > >  7 files changed, 516 insertions(+), 124 deletions(-)
> > 
> > the status of this series was one of the topics of yesterday's
> > community call. Since Paul's prior ping hasn't had a response from
> > you (possibly because you're on PTO for an extended period of
> > time), the plan is to get this series in, with as much reviewing
> > as I was able to do, by perhaps the middle of next week. Unless
> > of course we hear back from you earlier, giving at least an
> > indication of when you might be able to look at this.
> > 
> > Thanks for your understanding.
> > 
> > Paul, I notice v4 patches 10 and 11 never arrived in my inbox.
> 
> Oh, yes... I don't see them in my mail either. (I guess I did 'git send-email 000*' instead of 'git send-email 00*'). I'll send v5 (with the extra style fix) and get them on list.
> 
> > The list archives also don't have them. Therefore I can't check
> > the status of the tools side changes, and I don't think I'd
> > want to commit those anyway without tool stack side acks,
> > especially as they weren't part of what I've looked at.
> > 
> 
> Sure. The toolstack side is pretty trivial so hopefully Anthony or Ian would be happy to give an ack.

You have my ack on that part too.

Wei.


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 10:51:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 10:51:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44388.79520 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl8gH-0004vd-9A; Fri, 04 Dec 2020 10:51:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44388.79520; Fri, 04 Dec 2020 10:51:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl8gH-0004vW-5u; Fri, 04 Dec 2020 10:51:17 +0000
Received: by outflank-mailman (input) for mailman id 44388;
 Fri, 04 Dec 2020 10:51:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl8gF-0004um-Ih
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 10:51:15 +0000
Received: from mail-wm1-f66.google.com (unknown [209.85.128.66])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 902497ac-0d2a-47af-9aa7-f7d44ddda45a;
 Fri, 04 Dec 2020 10:51:14 +0000 (UTC)
Received: by mail-wm1-f66.google.com with SMTP id a3so6591815wmb.5
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 02:51:14 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id g192sm2643065wme.48.2020.12.04.02.51.13
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 02:51:13 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 902497ac-0d2a-47af-9aa7-f7d44ddda45a
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to:user-agent;
        bh=PpQq+83RAP6AzSMAr/ocqzqnwUcMMjZ/QL02XG/RWw0=;
        b=cxtA6Viu0/73Bd5Z0dJPIEGzodpUlLZOVVjhhKxpOT3QINwxpTJqjH0qxEsoD0i6a2
         pxOciAhyFGLutiXlxfCBzD685pWnkfRhQOARo9jAaVwSAZOea0D9SZq/i7AVEibLtSDS
         P8BYlBpzEf3mvJ1mmjZbPT3SEO+3S59990dXhho04pYGyPDhxI7NdHed9Eh/W9W2Mn64
         qn3qJqAhlZDjum6j7CtwidTOKQKq+MavgzBhSxsrgAehqCw2MtGPvodmwHRUQgQsLIlr
         spobWIoR4ApiISmOIZW0QJQhNScCk5bE5vKURT/uqQ7WBGBEcnLVJvuiowviYD5fAm10
         vfcw==
X-Gm-Message-State: AOAM532jFL8HL5NUg2wCzjkcQRuB5bQZ80AQah0wyDflnTMm+IyOg9OC
	38m+y5iWp80mL6zrs21ekIi+HWaMzlw=
X-Google-Smtp-Source: ABdhPJzw7+wlSWiQ6Gi2AVZBIBDgvX+r9Aw1pV/dU4AMCTQTurraFtpaF8t/XQ3UhuVc0bALkDq/UA==
X-Received: by 2002:a1c:454:: with SMTP id 81mr3509489wme.178.1607079074235;
        Fri, 04 Dec 2020 02:51:14 -0800 (PST)
Date: Fri, 4 Dec 2020 10:51:12 +0000
From: Wei Liu <wl@xen.org>
To: Olaf Hering <olaf@aepfle.de>
Cc: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
	xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2] tools/hotplug: allow tuning of xenwatchdogd arguments
Message-ID: <20201204105112.5gehzmlsincrtvhu@liuwe-devbox-debian-v2>
References: <20201203063436.4503-1-olaf@aepfle.de>
 <3fc53e0a-c7b4-4c56-9ba8-0b0a55c10f50@suse.com>
 <20201203081939.7265ac3d.olaf@aepfle.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201203081939.7265ac3d.olaf@aepfle.de>
User-Agent: NeoMutt/20180716

On Thu, Dec 03, 2020 at 08:19:39AM +0100, Olaf Hering wrote:
> Am Thu, 3 Dec 2020 07:47:58 +0100
> schrieb Jürgen Groß <jgross@suse.com>:
> 
> > Could you please add a section for XENWATCHDOGD_ARGS in
> > tools/hotplug/Linux/init.d/sysconfig.xencommons.in ?
> 
> No. Such details have to go into a to-be-written xencommons(8).
> 
> There will be a xenwatchdogd(8) shortly, which will cover this knob.

The more manpages the better. :-)

Wei.

> 
> 
> Olaf




From xen-devel-bounces@lists.xenproject.org Fri Dec 04 10:53:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 10:53:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44402.79533 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl8iH-0005CD-Lr; Fri, 04 Dec 2020 10:53:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44402.79533; Fri, 04 Dec 2020 10:53:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl8iH-0005C6-II; Fri, 04 Dec 2020 10:53:21 +0000
Received: by outflank-mailman (input) for mailman id 44402;
 Fri, 04 Dec 2020 10:53:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl8iF-0005C1-Vu
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 10:53:20 +0000
Received: from mail-wr1-f66.google.com (unknown [209.85.221.66])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 958110ef-801b-4468-9d9e-f79f041726f2;
 Fri, 04 Dec 2020 10:53:18 +0000 (UTC)
Received: by mail-wr1-f66.google.com with SMTP id t4so4844130wrr.12
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 02:53:18 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id r13sm3030550wrs.6.2020.12.04.02.53.17
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 02:53:17 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 958110ef-801b-4468-9d9e-f79f041726f2
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=uQVdkYfHNmtLMlE+KpMivvufuNn5EMB4OSxsZFd4cMA=;
        b=WulgdPABbI6GNTSWmlZf9bAiWJ89Zl2f0l3t0wfeMEz9krAa2S+wZkzAJi1VHWSIDF
         hDhC3yEfiiQ6toC5uLCSL7DHXoc9AMO6dSOURw4Wtb4cn0FAgZ0T5m5wJ/kxoFqdU8+m
         m211p7LeO/dztVhXZgWaxYA8VGrKXWCGz86HWsd0jQ+IwbXo9xQkkbpvtWN+d9/NTETT
         2pNu1APkPa9LuzQDRuoWtVEqjQg+O7Ts++1R1R/1wXBjGj6bqyqjWXN8JHCuxvkGxFSk
         85/WQqfWBJ+Y1MLF7a2rtLo2zGKnhtU2u3iAgPdedt97QA1aIsOwSqbWgu7pbd7RzcVp
         udOQ==
X-Gm-Message-State: AOAM532eGRJtoDdsbLeD0vqJI+FCwn0wlPdWgIw4LmUjv5DaNDNlaeeP
	YpDYbsnZf5yFFsmoNOehooT0PkTiaws=
X-Google-Smtp-Source: ABdhPJxqVxeVy9f+z5qTImPPfxnJ1XkmFGTO6UgolkCnxdeBr/4sbys6x96TUUYCI4+jr8D6JM5fmQ==
X-Received: by 2002:adf:f84b:: with SMTP id d11mr4263796wrq.216.1607079197879;
        Fri, 04 Dec 2020 02:53:17 -0800 (PST)
Date: Fri, 4 Dec 2020 10:53:15 +0000
From: Wei Liu <wl@xen.org>
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2] tools/hotplug: allow tuning of xenwatchdogd arguments
Message-ID: <20201204105315.avponbzbotrabf4c@liuwe-devbox-debian-v2>
References: <20201203063436.4503-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201203063436.4503-1-olaf@aepfle.de>
User-Agent: NeoMutt/20180716

On Thu, Dec 03, 2020 at 07:34:36AM +0100, Olaf Hering wrote:
> Currently the arguments for xenwatchdogd are hardcoded with 15s
> keep-alive interval and 30s timeout.
> 
> It is not possible to tweak these values via
> /etc/systemd/system/xen-watchdog.service.d/*.conf because ExecStart
> cannot be replaced. The only option would be a private copy,
> /etc/systemd/system/xen-watchdog.service, which may get out of sync
> with the Xen-provided xen-watchdog.service.
> 
> Adjust the service file to recognize XENWATCHDOGD_ARGS= in a
> private unit configuration file.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> ---
> 
> v2: fix "test -n" in init.d
> 
>  tools/hotplug/Linux/init.d/xen-watchdog.in          | 7 ++++++-
>  tools/hotplug/Linux/systemd/xen-watchdog.service.in | 4 +++-
>  2 files changed, 9 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/hotplug/Linux/init.d/xen-watchdog.in b/tools/hotplug/Linux/init.d/xen-watchdog.in
> index c05f1f6b6a..b36a94bd8e 100644
> --- a/tools/hotplug/Linux/init.d/xen-watchdog.in
> +++ b/tools/hotplug/Linux/init.d/xen-watchdog.in
> @@ -19,6 +19,11 @@
>  
>  . @XEN_SCRIPT_DIR@/hotplugpath.sh
>  
> +xencommons_config=@CONFIG_DIR@/@CONFIG_LEAF_DIR@
> +
> +test -f $xencommons_config/xencommons && . $xencommons_config/xencommons
> +
> +test -n "$XENWATCHDOGD_ARGS" || XENWATCHDOGD_ARGS='15 30'
>  DAEMON=${sbindir}/xenwatchdogd
>  base=$(basename $DAEMON)
>  
> @@ -46,7 +51,7 @@ start() {
>  	local r
>  	echo -n $"Starting domain watchdog daemon: "
>  
> -	$DAEMON 30 15
> +	$DAEMON $XENWATCHDOGD_ARGS

Did you accidentally swap 15 and 30 in XENWATCHDOGD_ARGS above? I see no
reasoning in the commit message for this change.

No need to resend.  I can fix it for you. But please confirm if that's a
mistake.

Wei.

>  	r=$?
>  	[ "$r" -eq 0 ] && success $"$base startup" || failure $"$base startup"
>  	echo
> diff --git a/tools/hotplug/Linux/systemd/xen-watchdog.service.in b/tools/hotplug/Linux/systemd/xen-watchdog.service.in
> index 1eecd2a616..637ab7fd7f 100644
> --- a/tools/hotplug/Linux/systemd/xen-watchdog.service.in
> +++ b/tools/hotplug/Linux/systemd/xen-watchdog.service.in
> @@ -6,7 +6,9 @@ ConditionPathExists=/proc/xen/capabilities
>  
>  [Service]
>  Type=forking
> -ExecStart=@sbindir@/xenwatchdogd 30 15
> +Environment="XENWATCHDOGD_ARGS=30 15"
> +EnvironmentFile=-@CONFIG_DIR@/@CONFIG_LEAF_DIR@/xencommons
> +ExecStart=@sbindir@/xenwatchdogd $XENWATCHDOGD_ARGS
>  KillSignal=USR1
>  
>  [Install]
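
Outside the script, the fallback in the init.d hunk can be sketched on its own. This is only an illustration of the `test -n` idiom above, with made-up override values; it is not Xen code:

```shell
#!/bin/sh
# Mimics the init.d hunk: a value set by the (sourced) xencommons
# file wins; otherwise fall back to the built-in default.

XENWATCHDOGD_ARGS=""                        # as if xencommons set nothing
test -n "$XENWATCHDOGD_ARGS" || XENWATCHDOGD_ARGS='15 30'
echo "default:  $XENWATCHDOGD_ARGS"

XENWATCHDOGD_ARGS='45 15'                   # as if set in xencommons
test -n "$XENWATCHDOGD_ARGS" || XENWATCHDOGD_ARGS='15 30'
echo "override: $XENWATCHDOGD_ARGS"
```

The systemd side works the other way around: `Environment=` supplies the default and `EnvironmentFile=-...` overrides it (variables read from an EnvironmentFile take precedence over `Environment=` entries, and the leading `-` makes a missing file non-fatal).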


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:09:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:09:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44410.79544 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl8xN-0006Pm-1W; Fri, 04 Dec 2020 11:08:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44410.79544; Fri, 04 Dec 2020 11:08:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl8xM-0006Pf-U6; Fri, 04 Dec 2020 11:08:56 +0000
Received: by outflank-mailman (input) for mailman id 44410;
 Fri, 04 Dec 2020 11:08:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TCoV=FI=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1kl8xK-0006Pa-GA
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:08:54 +0000
Received: from out5-smtp.messagingengine.com (unknown [66.111.4.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e819146b-636c-4bf6-bf38-840aa4604e32;
 Fri, 04 Dec 2020 11:08:53 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id E2F855C0116;
 Fri,  4 Dec 2020 06:08:52 -0500 (EST)
Received: from mailfrontend1 ([10.202.2.162])
 by compute3.internal (MEProxy); Fri, 04 Dec 2020 06:08:52 -0500
Received: from mail-itl (unknown [91.64.170.89])
 by mail.messagingengine.com (Postfix) with ESMTPA id C68AC240057;
 Fri,  4 Dec 2020 06:08:50 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e819146b-636c-4bf6-bf38-840aa4604e32
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; bh=pE2wbx
	tydrzwQsmeEJiqEhudgsi/298pIjDCfrCe31k=; b=RA2fCGRA+2UCP/UlmO0o6i
	6KpC3gbmIoVy7pySCWRP9FrEZLpVXkx4S/LZLjyO9Ih2Bw1ZHZPRnxKMeExPUwwA
	bbspdFpjfSZeMQyqhRJrFFbu7yNSGj/cYv2hCnCXmSNfTIDFR/4LgW2f36Onc0BZ
	EQUD9nFpU8AJQI4QZe+uzfOBsr78rsq+RKBzMqNq1oe3RA2xbFBP4DE7u8SNoXaX
	P96MILn+Ldl8lVHWhMjetOzQ3VkNoJzDIisTpuGWCG2b3xpA1xJMzzVAyb8sSj7b
	DhmhHfry6VhTd1OWnctqxQF6orQhkpgsAhtw76jgKCN7zyMYos8T4dsnZ9Ncik3Q
	==
X-ME-Sender: <xms:wxjKX7aLi9mDY1DnK2m0WV8_t6q-Qu4gvHR-27MFbWQiHsRg1FEt6w>
    <xme:wxjKX6YcruexLR9PhWmOIrNDpE8P6KIbmYFkJHyDAtDHW6c3J3997WxYPKZCvH3ZV
    q6M1ZryFiwQZw>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedrudeikedgvdefucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvffukfhfgggtuggjsehgtderredttdejnecuhfhrohhmpeforghrvghk
    ucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesihhnvh
    hishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeetveff
    iefghfekhffggeeffffhgeevieektedthfehveeiheeiiedtudegfeetffenucfkpheple
    durdeigedrudejtddrkeelnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehm
    rghilhhfrhhomhepmhgrrhhmrghrvghksehinhhvihhsihgslhgvthhhihhnghhslhgrsg
    drtghomh
X-ME-Proxy: <xmx:wxjKX9-uR2dIjUi9EXEAJGWujLIw7jFU_9x4qJt1kXkL3-CKECziuw>
    <xmx:wxjKXxo7GT0NEcLK4IaXaF_rfslDyYrNOmSHGUIH9MIfvPbuCEt1iw>
    <xmx:wxjKX2pKE_PXD3oDPpVwtWWdg015qxxEeXAz51zfB2ta1pXIxH36sQ>
    <xmx:xBjKX_kXmOM-s69NFQAlxjPr0KM0eAOevxNE4SEtslw9Rxjno9lfnA>
Date: Fri, 4 Dec 2020 12:08:47 +0100
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Juergen Gross <jgross@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Keith Busch <kbusch@kernel.org>, Jens Axboe <axboe@fb.com>,
	Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>,
	linux-nvme@lists.infradead.org
Subject: Re: GPF on 0xdead000000000100 in nvme_map_data - Linux 5.9.9
Message-ID: <20201204110847.GU201140@mail-itl>
References: <20201129035639.GW2532@mail-itl>
 <20201130164010.GA23494@redsun51.ssa.fujisawa.hgst.com>
 <20201202000642.GJ201140@mail-itl>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="lKkRBIzN5W0l28vM"
Content-Disposition: inline
In-Reply-To: <20201202000642.GJ201140@mail-itl>


--lKkRBIzN5W0l28vM
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Subject: Re: GPF on 0xdead000000000100 in nvme_map_data - Linux 5.9.9

On Wed, Dec 02, 2020 at 01:06:46AM +0100, Marek Marczykowski-Górecki wrote:
> On Tue, Dec 01, 2020 at 01:40:10AM +0900, Keith Busch wrote:
> > On Sun, Nov 29, 2020 at 04:56:39AM +0100, Marek Marczykowski-Górecki wrote:
> > > I can reliably hit kernel panic in nvme_map_data() which looks like the
> > > one below. It happens on Linux 5.9.9, while 5.4.75 works fine. I haven't
> > > tried other versions on this hardware. Linux is running as Xen
> > > PV dom0, on top of nvme there is LUKS and then LVM with thin
> > > provisioning. The crash happens reliably when starting a Xen domU (which
> > > uses one of the thin-provisioned LVM volumes as its disk). But booting dom0
> > > works fine (even though it is using the same disk setup for its root
> > > filesystem).
> > >
> > > I did a bit of debugging and found it's about this part:
> > >
> > > drivers/nvme/host/pci.c:
> > >  800 static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
> > >  801         struct nvme_command *cmnd)
> > >  802 {
> > >  803     struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
> > >  804     blk_status_t ret = BLK_STS_RESOURCE;
> > >  805     int nr_mapped;
> > >  806
> > >  807     if (blk_rq_nr_phys_segments(req) == 1) {
> > >  808         struct bio_vec bv = req_bvec(req);
> > >  809
> > >  810         if (!is_pci_p2pdma_page(bv.bv_page)) {
> > >
> > > Here, bv.bv_page->pgmap is LIST_POISON1, while page_zonenum(bv.bv_page)
> > > says ZONE_DEVICE. So, is_pci_p2pdma_page() crashes on accessing
> > > bv.bv_page->pgmap->type.
> >
> > Something sounds off. I thought all ZONE_DEVICE pages require a pgmap
> > because that's what holds a reference to the device's live-ness. What
> > are you allocating this memory from that makes ZONE_DEVICE true without
> > a pgmap?
>
> Well, I don't allocate anything myself. I just try to start the system with
> unmodified Linux 5.9.9 and an NVMe drive...
> I didn't manage to find where this page is allocated, nor where it gets
> broken. I _suspect_ it gets allocated as a ZONE_DEVICE page and then gets
> released as a ZONE_NORMAL page, which sets another part of the union to
> LIST_POISON1. But I have absolutely no data to confirm/deny this theory.

I've bisected this (thanks to a bit of scripting, PXE and git bisect
run, it was long, but fairly painless) and identified this commit as the
culprit:

commit 9e2369c06c8a181478039258a4598c1ddd2cadfa
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Tue Sep 1 10:33:26 2020 +0200

    xen: add helpers to allocate unpopulated memory

I'm adding relevant people and xen-devel to the thread.
For completeness, here is the original crash message:

general protection fault, probably for non-canonical address 0xdead000000000100: 0000 [#1] SMP NOPTI
CPU: 1 PID: 134 Comm: kworker/u12:2 Not tainted 5.9.9-1.qubes.x86_64 #1
Hardware name: LENOVO 20M9CTO1WW/20M9CTO1WW, BIOS N2CET50W (1.33 ) 01/15/2020
Workqueue: dm-thin do_worker [dm_thin_pool]
RIP: e030:nvme_map_data+0x300/0x3a0 [nvme]
Code: b8 fe ff ff e9 a8 fe ff ff 4c 8b 56 68 8b 5e 70 8b 76 74 49 8b 02 48 c1 e8 33 83 e0 07 83 f8 04 0f 85 f2 fe ff ff 49 8b 42 08 <83> b8 d0 00 00 00 04 0f 85 e1 fe ff ff e9 38 fd ff ff 8b 55 70 be
RSP: e02b:ffffc900010e7ad8 EFLAGS: 00010246
RAX: dead000000000100 RBX: 0000000000001000 RCX: ffff8881a58f5000
RDX: 0000000000001000 RSI: 0000000000000000 RDI: ffff8881a679e000
RBP: ffff8881a5ef4c80 R08: ffff8881a5ef4c80 R09: 0000000000000002
R10: ffffea0003dfff40 R11: 0000000000000008 R12: ffff8881a679e000
R13: ffffc900010e7b20 R14: ffff8881a70b5980 R15: ffff8881a679e000
FS:  0000000000000000(0000) GS:ffff8881b5440000(0000) knlGS:0000000000000000
CS:  e030 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000001d64408 CR3: 00000001aa2c0000 CR4: 0000000000050660
Call Trace:
 nvme_queue_rq+0xa7/0x1a0 [nvme]
 __blk_mq_try_issue_directly+0x11d/0x1e0
 ? add_wait_queue_exclusive+0x70/0x70
 blk_mq_try_issue_directly+0x35/0xc0
 blk_mq_submit_bio+0x58f/0x660
 __submit_bio_noacct+0x300/0x330
 process_shared_bio+0x126/0x1b0 [dm_thin_pool]
 process_cell+0x226/0x280 [dm_thin_pool]
 process_thin_deferred_cells+0x185/0x320 [dm_thin_pool]
 process_deferred_bios+0xa4/0x2a0 [dm_thin_pool]
 do_worker+0xcc/0x130 [dm_thin_pool]
 process_one_work+0x1b4/0x370
 worker_thread+0x4c/0x310
 ? process_one_work+0x370/0x370
 kthread+0x11b/0x140
 ? __kthread_bind_mask+0x60/0x60
 ret_from_fork+0x22/0x30
Modules linked in: loop snd_seq_dummy snd_hrtimer nf_tables nfnetlink vfat
 fat snd_sof_pci snd_sof_intel_byt snd_sof_intel_ipc snd_sof_intel_hda_common
 snd_soc_hdac_hda snd_sof_xtensa_dsp snd_sof_intel_hda snd_sof snd_soc_skl
 snd_soc_sst_ipc snd_soc_sst_dsp snd_hda_ext_core snd_soc_acpi_intel_match
 snd_soc_acpi snd_soc_core snd_compress ac97_bus snd_pcm_dmaengine elan_i2c
 snd_hda_codec_hdmi mei_hdcp iTCO_wdt intel_powerclamp intel_pmc_bxt ee1004
 intel_rapl_msr iTCO_vendor_support joydev pcspkr intel_wmi_thunderbolt
 wmi_bmof thunderbolt ucsi_acpi idma64 typec_ucsi snd_hda_codec_realtek typec
 snd_hda_codec_generic snd_hda_intel snd_intel_dspcfg snd_hda_codec
 thinkpad_acpi snd_hda_core ledtrig_audio int3403_thermal snd_hwdep snd_seq
 snd_seq_device snd_pcm iwlwifi snd_timer processor_thermal_device mei_me
 cfg80211 intel_rapl_common snd e1000e mei int3400_thermal
 int340x_thermal_zone i2c_i801 acpi_thermal_rel soundcore intel_soc_dts_iosf
 i2c_smbus rfkill intel_pch_thermal xenfs ip_tables dm_thin_pool
 dm_persistent_data dm_bio_prison dm_crypt nouveau rtsx_pci_sdmmc mmc_core
 mxm_wmi crct10dif_pclmul ttm crc32_pclmul crc32c_intel i915
 ghash_clmulni_intel i2c_algo_bit serio_raw nvme drm_kms_helper cec xhci_pci
 nvme_core rtsx_pci xhci_pci_renesas drm xhci_hcd wmi video
 pinctrl_cannonlake pinctrl_intel xen_privcmd xen_pciback xen_blkback
 xen_gntalloc xen_gntdev xen_evtchn uinput
---[ end trace f8d47e4aa6724df4 ]---
RIP: e030:nvme_map_data+0x300/0x3a0 [nvme]
Code: b8 fe ff ff e9 a8 fe ff ff 4c 8b 56 68 8b 5e 70 8b 76 74 49 8b 02 48 c1 e8 33 83 e0 07 83 f8 04 0f 85 f2 fe ff ff 49 8b 42 08 <83> b8 d0 00 00 00 04 0f 85 e1 fe ff ff e9 38 fd ff ff 8b 55 70 be
RSP: e02b:ffffc900010e7ad8 EFLAGS: 00010246
RAX: dead000000000100 RBX: 0000000000001000 RCX: ffff8881a58f5000
RDX: 0000000000001000 RSI: 0000000000000000 RDI: ffff8881a679e000
RBP: ffff8881a5ef4c80 R08: ffff8881a5ef4c80 R09: 0000000000000002
R10: ffffea0003dfff40 R11: 0000000000000008 R12: ffff8881a679e000
R13: ffffc900010e7b20 R14: ffff8881a70b5980 R15: ffff8881a679e000
FS:  0000000000000000(0000) GS:ffff8881b5440000(0000) knlGS:0000000000000000
CS:  e030 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000001d64408 CR3: 00000001aa2c0000 CR4: 0000000000050660
Kernel panic - not syncing: Fatal exception
Kernel Offset: disabled


--
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?



From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:09:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:09:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44411.79557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl8xQ-0006R3-Eo; Fri, 04 Dec 2020 11:09:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44411.79557; Fri, 04 Dec 2020 11:09:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl8xQ-0006Qw-B5; Fri, 04 Dec 2020 11:09:00 +0000
Received: by outflank-mailman (input) for mailman id 44411;
 Fri, 04 Dec 2020 11:08:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov5/=FI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kl8xP-0006Pa-Hv
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:08:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 96c48557-3014-4078-a48d-3e4800ea4c71;
 Fri, 04 Dec 2020 11:08:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D5FECACA8;
 Fri,  4 Dec 2020 11:08:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 96c48557-3014-4078-a48d-3e4800ea4c71
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607080138; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=84ResGlc+MxjQNmlNLcjLRxG7gWahDfMvrJUwuJlK3k=;
	b=KbfwJmAi8Im+J0vWgJMDG/P/J2zxMI1eBTzFcXrYLXbVQwXF9zB1f5K1zRc71zTssfDPre
	u9hBG3Da9xysVr6fX8Xz1pXdbqb+/HTVrYVN1UZVmCEkdpJLT3RyZk2Hl/K7L5uzMpI/zC
	aFVnS+QeDsSD9KDwlE7WHK3orrMh2yY=
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Dario Faggioli <dfaggioli@suse.com>,
 xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-16-jgross@suse.com>
 <e14fa4a4-3a3e-ceac-af38-8561baf58aa8@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v2 15/17] xen/cpupool: add cpupool directories
Message-ID: <72e2300c-6367-5469-d7fd-767dd411dcb8@suse.com>
Date: Fri, 4 Dec 2020 12:08:56 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <e14fa4a4-3a3e-ceac-af38-8561baf58aa8@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="EuENw2daS39mqXviIERXk4Mk2BkRTOegc"


On 04.12.20 10:10, Jan Beulich wrote:
> On 01.12.2020 09:21, Juergen Gross wrote:
>> @@ -1003,12 +1006,131 @@ static struct notifier_block cpu_nfb = {
>>       .notifier_call = cpu_callback
>>   };
>>
>> +#ifdef CONFIG_HYPFS
>> +static const struct hypfs_entry *cpupool_pooldir_enter(
>> +    const struct hypfs_entry *entry);
>> +
>> +static struct hypfs_funcs cpupool_pooldir_funcs = {
>
> Yet one more const missing?

Already fixed locally.

>
>> +    .enter = cpupool_pooldir_enter,
>> +    .exit = hypfs_node_exit,
>> +    .read = hypfs_read_dir,
>> +    .write = hypfs_write_deny,
>> +    .getsize = hypfs_getsize,
>> +    .findentry = hypfs_dir_findentry,
>> +};
>> +
>> +static HYPFS_VARDIR_INIT(cpupool_pooldir, "%u", &cpupool_pooldir_funcs);
>> +
>> +static const struct hypfs_entry *cpupool_pooldir_enter(
>> +    const struct hypfs_entry *entry)
>> +{
>> +    return &cpupool_pooldir.e;
>> +}
>> +
>> +static int cpupool_dir_read(const struct hypfs_entry *entry,
>> +                            XEN_GUEST_HANDLE_PARAM(void) uaddr)
>> +{
>> +    int ret = 0;
>> +    const struct cpupool *c;
>> +    unsigned int size = 0;
>> +
>> +    list_for_each_entry(c, &cpupool_list, list)
>> +    {
>> +        size += hypfs_dynid_entry_size(entry, c->cpupool_id);
>
> Why do you maintain size here? I can't spot any use.

Oh, indeed.

This is a remnant of an earlier variant.

>
> With this dropped the function then no longer depends on its
> "entry" parameter, which makes me wonder ...
>
>> +        ret = hypfs_read_dyndir_id_entry(&cpupool_pooldir, c->cpupool_id,
>> +                                         list_is_last(&c->list, &cpupool_list),
>> +                                         &uaddr);
>> +        if ( ret )
>> +            break;
>> +    }
>> +
>> +    return ret;
>> +}
>> +
>> +static unsigned int cpupool_dir_getsize(const struct hypfs_entry *entry)
>> +{
>> +    const struct cpupool *c;
>> +    unsigned int size = 0;
>> +
>> +    list_for_each_entry(c, &cpupool_list, list)
>> +        size += hypfs_dynid_entry_size(entry, c->cpupool_id);
>
> ... why this one does. To be certain their results are consistent
> with one another, I think both should produce their results from
> the same data.

In the end they do. Creating a complete direntry just for obtaining its
size is overkill, especially as hypfs_read_dyndir_id_entry() is not
directly calculating the size, but copying the fixed and the variable
parts in two portions.

>
>> +    return size;
>> +}
>> +
>> +static const struct hypfs_entry *cpupool_dir_enter(
>> +    const struct hypfs_entry *entry)
>> +{
>> +    struct hypfs_dyndir_id *data;
>> +
>> +    data = hypfs_alloc_dyndata(sizeof(*data));
>
> I generally like the added type safety of the macro wrappers
> around _xmalloc(). I wonder if it wouldn't be a good idea to have
> such here as well, to avoid random mistakes like
>
>      data = hypfs_alloc_dyndata(sizeof(data));

Fine with me.

>
> However I further notice that the struct allocated isn't cpupool
> specific at all. It would seem to me that such an allocation
> therefore doesn't belong here. Therefore I wonder whether ...
>
>> +    if ( !data )
>> +        return ERR_PTR(-ENOMEM);
>> +    data->id = CPUPOOLID_NONE;
>> +
>> +    spin_lock(&cpupool_lock);
>
> ... these two properties (initial ID and lock) shouldn't e.g. be
> communicated via the template, allowing the enter/exit hooks to
> become generic for all ID templates.

The problem with the lock is that it is rather user specific. For
domains this will be split (rcu_read_lock(&domlist_read_lock) for
the /domain directory, and get_domain() for the per-domain level).

And memory allocation might need other data as well, so this won't
be the same structure for all cases. A two-level dynamic directory
(e.g. domain/vcpu) might want to allocate the needed dyndata for
both levels already when entering /domain.

>
> Yet in turn I notice that the "id" field only ever gets set, both
> in patch 14 and here. But yes, I've now spotted the consumers in
> patch 16.
>
>> +    return entry;
>> +}
>> +
>> +static void cpupool_dir_exit(const struct hypfs_entry *entry)
>> +{
>> +    spin_unlock(&cpupool_lock);
>> +
>> +    hypfs_free_dyndata();
>> +}
>> +
>> +static struct hypfs_entry *cpupool_dir_findentry(
>> +    const struct hypfs_entry_dir *dir, const char *name, unsigned int name_len)
>> +{
>> +    unsigned long id;
>> +    const char *end;
>> +    const struct cpupool *cpupool;
>> +
>> +    id = simple_strtoul(name, &end, 10);
>> +    if ( end != name + name_len )
>> +        return ERR_PTR(-ENOENT);
>> +
>> +    cpupool = __cpupool_find_by_id(id, true);
>
> Silent truncation from unsigned long to unsigned int?
Oh, indeed. Need to check against UINT_MAX.

>
>> +    if ( !cpupool )
>> +        return ERR_PTR(-ENOENT);
>> +
>> +    return hypfs_gen_dyndir_entry_id(&cpupool_pooldir, id);
>> +}
>> +
>> +static struct hypfs_funcs cpupool_dir_funcs = {
>
> Yet another missing const?

Already fixed.

>
>> +    .enter = cpupool_dir_enter,
>> +    .exit = cpupool_dir_exit,
>> +    .read = cpupool_dir_read,
>> +    .write = hypfs_write_deny,
>> +    .getsize = cpupool_dir_getsize,
>> +    .findentry = cpupool_dir_findentry,
>> +};
>> +
>> +static HYPFS_VARDIR_INIT(cpupool_dir, "cpupool", &cpupool_dir_funcs);
>
> Why VARDIR? This isn't a template, is it? Or does VARDIR really
> serve multiple purposes?

Basically it just takes an additional parameter for the function vector.
Maybe I should rename it to HYPFS_DIR_INIT_FUNC()?

>
>> +static void cpupool_hypfs_init(void)
>> +{
>> +    hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
>> +    hypfs_add_dyndir(&cpupool_dir, &cpupool_pooldir);
>> +}
>> +#else
>> +
>> +static void cpupool_hypfs_init(void)
>> +{
>> +}
>> +#endif
>
> I think you want to be consistent with the use of blank lines next
> to #if / #else / #endif. In cases when they enclose multiple entities,
> I think it's generally better to have intervening blank lines
> everywhere. I also think in such cases commenting #else and #endif is
> helpful. But you're the maintainer of this code ...

I think I'll change it.


Juergen



From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:13:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:13:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44422.79569 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl91n-0007Yx-4G; Fri, 04 Dec 2020 11:13:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44422.79569; Fri, 04 Dec 2020 11:13:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl91m-0007Yq-VE; Fri, 04 Dec 2020 11:13:30 +0000
Received: by outflank-mailman (input) for mailman id 44422;
 Fri, 04 Dec 2020 11:13:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl91l-0007Yk-Pa
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:13:29 +0000
Received: from mail-wr1-f68.google.com (unknown [209.85.221.68])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 043193b9-70ec-4e18-9180-36bcf6244fa2;
 Fri, 04 Dec 2020 11:13:28 +0000 (UTC)
Received: by mail-wr1-f68.google.com with SMTP id p8so4934181wrx.5
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 03:13:28 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id r128sm2916402wma.5.2020.12.04.03.13.27
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 03:13:27 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 043193b9-70ec-4e18-9180-36bcf6244fa2
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=fjPRu6a4Q/jeCgdEwZPw2yIbNE1znjlghksZFJ9u6JY=;
        b=jM5nlMcYEKt1RT6YqPbdI4Vf9RkSO19MHShVt9Y1og2fQvej7OgGIebBGefr6FVu0L
         fuXN7Gav2PxPGT5eG4h5UNNamNF3uTi0hEDBAKcMUE5OWJFEid4xrdUCbSR018ithg+A
         U2epvkEF/TDuNadGxo38hSaI/e/UAb/jVtdCQrB3uibFi+lHFCjLbnov5P/gO0CCN4rh
         YY7r0XYH4QCaeJ3EFeJX2d9eTBRifg19IvsT2tcCeZ6FzaON1xGFpWN4j1R2+mgTRALo
         SD3YrG/FNIvj7u4JLJCI1sHl4aFWFKWWzsbpa4Z6ZDVsIqtwMEm492lwLdRqaBxbh9DE
         pwfA==
X-Gm-Message-State: AOAM532Zu3l2dAaMEdhWWOSI1YnJKHun9f+4LtZE1sci88PtmFarFTtz
	5a1rGPvKz0AiqaHJOO0vxrg=
X-Google-Smtp-Source: ABdhPJyGN5DZJE+OulSDXY0NObwabSKrAGrrQfvhlImyvHq9mDMfUbrTKgqAYBvOBfFuJqPOKnXzjQ==
X-Received: by 2002:a5d:5604:: with SMTP id l4mr4331855wrv.127.1607080408077;
        Fri, 04 Dec 2020 03:13:28 -0800 (PST)
Date: Fri, 4 Dec 2020 11:13:26 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v5 01/23] xl / libxl: s/pcidev/pci and remove
 DEFINE_DEVICE_TYPE_STRUCT_X
Message-ID: <20201204111326.5pxgqertdm3tk7y2@liuwe-devbox-debian-v2>
References: <20201203142534.4017-1-paul@xen.org>
 <20201203142534.4017-2-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201203142534.4017-2-paul@xen.org>
User-Agent: NeoMutt/20180716

On Thu, Dec 03, 2020 at 02:25:12PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> The seemingly arbitrary use of 'pci' and 'pcidev' in the code in libxl_pci.c
> is confusing and also compromises use of some macros used for other device
> types. Indeed it seems that DEFINE_DEVICE_TYPE_STRUCT_X exists solely because
> of this duality.
> 
> This patch purges use of 'pcidev' from the libxl code, allowing evaluation of
> DEFINE_DEVICE_TYPE_STRUCT_X to be replaced with DEFINE_DEVICE_TYPE_STRUCT,
> hence allowing removal of the former.
> 
> For consistency the xl and libs/util code is also modified, but in this case
> it is purely cosmetic.
> 
> NOTE: Some of the more gross formatting errors (such as lack of spaces after
>       keywords) that came into context have been fixed in libxl_pci.c.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> ---
> Cc: Ian Jackson <iwj@xenproject.org>
> Cc: Wei Liu <wl@xen.org>
> Cc: Anthony PERARD <anthony.perard@citrix.com>
> 

This is going to break libxl callers because the name "pcidev" is
visible from the public header.

I agree this is confusing and inconsistent, but we didn't go to extra
lengths to maintain the inconsistency for no reason.

If you really want to change it, I won't stand in the way. In fact, I'm
all for consistency. I think the flag you added should help alleviate
the fallout. Also, you will need to submit patches to libvirt (the only
user I know of) to make use of the flag and switch to the new name and
then request such patch(es) be backported to all maintained versions of
libvirt.

See https://github.com/libvirt/libvirt/blob/0d05d51b715390e08cd112f83e03b6776412aaeb/src/libxl/libxl_conf.c#L2273

Wei.


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:14:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:14:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44426.79581 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl92O-0007eX-BG; Fri, 04 Dec 2020 11:14:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44426.79581; Fri, 04 Dec 2020 11:14:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl92O-0007eQ-88; Fri, 04 Dec 2020 11:14:08 +0000
Received: by outflank-mailman (input) for mailman id 44426;
 Fri, 04 Dec 2020 11:14:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov5/=FI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kl92M-0007eJ-TS
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:14:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ec751bf6-e80f-410e-adee-38b408202e31;
 Fri, 04 Dec 2020 11:14:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 19B46AC9A;
 Fri,  4 Dec 2020 11:14:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ec751bf6-e80f-410e-adee-38b408202e31
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607080445; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Fa+6wf9gpQk6n/ZBCOz5HQR4MFwQCd1iFk58SnFZmeg=;
	b=EMqWX1eE6rgSiCL1KsuXoE3aiA4lUs8b0e85FR3dSR9z5whLzyZGmryZNsK4HL58A2BW29
	rT7dykOatX8UtnGy8ArdxL+WnC9+O1a1s1HPA/esLb+a6KbFML45uqLWmhSAdiwxAQpLXc
	PuWCVzjOtrLq+eMjjVL443RAUwtgYU0=
Subject: Re: [PATCH v2 11/17] xen/hypfs: add getsize() and findentry()
 callbacks to hypfs_funcs
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-12-jgross@suse.com>
 <57d62a37-b953-a4fa-8c73-79336d634cb0@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <2493b7d2-ec7d-d465-15c7-a4b9b009103c@suse.com>
Date: Fri, 4 Dec 2020 12:14:04 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <57d62a37-b953-a4fa-8c73-79336d634cb0@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="SNZGQxuofIzTs1d2RJWwBBSisZ7nr2zip"


On 04.12.20 09:58, Jan Beulich wrote:
> On 01.12.2020 09:21, Juergen Gross wrote:
>> @@ -197,28 +235,12 @@ static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
>>               end = strchr(path, '\0');
>>           name_len = end - path;
>>
>> -        again = false;
>> +        entry = dir->e.funcs->findentry(dir, path, name_len);
>> +        if ( IS_ERR(entry) || !*end )
>> +            return entry;
>>
>> -        list_for_each_entry ( entry, &dir->dirlist, list )
>> -        {
>> -            int cmp = strncmp(path, entry->name, name_len);
>> -            struct hypfs_entry_dir *d = container_of(entry,
>> -                                                     struct hypfs_entry_dir, e);
>> -
>> -            if ( cmp < 0 )
>> -                return ERR_PTR(-ENOENT);
>> -            if ( !cmp && strlen(entry->name) == name_len )
>> -            {
>> -                if ( !*end )
>> -                    return entry;
>> -
>> -                again = true;
>> -                dir = d;
>> -                path = end + 1;
>> -
>> -                break;
>> -            }
>> -        }
>> +        path = end + 1;
>> +        dir = container_of(entry, struct hypfs_entry_dir, e);
>>       }
>
> Looking at patch 15 my understanding is that "dir" may get invalidated
> by the call to the ->findentry() hook above. That is, use of dir has
> to be avoided between the two calls. This isn't at all obvious, so I
> wonder whether at least a comment wouldn't want adding to avoid future
> mistakes.

Will add a comment.

>
> And of course the same comment applies to the IS_ERR() use here vs NULL
> coming back that I already gave for the ->enter() call site.

I'll add an ASSERT().


Juergen

--------------81C0EB144C998EA99B4EA86C
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=
=2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------81C0EB144C998EA99B4EA86C--

--R3Ebz5NwFFSkgryU6FeeOVCymXCMMEFvs--

--SNZGQxuofIzTs1d2RJWwBBSisZ7nr2zip
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/KGfwFAwAAAAAACgkQsN6d1ii/Ey+5
wwf+KNbt27pH+AZTk52LTcWYnCbHBhet/flU1ftGrS+nEdHcAQMsV63b9T8hoBquOE3NWfQ/F1Vx
BjoC/NdZ1clrzCJV7eRnGboWGqOCVVKd9xfoX+iXJwI/0UV89+HdaglMlmWUaVgVlnNRDARXzVlP
Zw7F1Tuaw4HzY1TnoW3gJM/A/4zcsjkKrpd8aMUgU3G25NyVS0Qt8v0RihU2DqEp5Nq9SJjQl/hp
d/2ucMgqdDYGjk8eRlrvzTq32R8L/juDrEaWvaER0lk5Tf58OzIX0MqDKZKFsTM23WS+rwaroeNk
HUCSSXP/zYo9IucdYgXAl/3+iqn1SfVxUDw6rJfqHA==
=Xc8u
-----END PGP SIGNATURE-----

--SNZGQxuofIzTs1d2RJWwBBSisZ7nr2zip--


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:16:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:16:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44432.79592 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl94G-0007nw-Op; Fri, 04 Dec 2020 11:16:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44432.79592; Fri, 04 Dec 2020 11:16:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl94G-0007np-LF; Fri, 04 Dec 2020 11:16:04 +0000
Received: by outflank-mailman (input) for mailman id 44432;
 Fri, 04 Dec 2020 11:16:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl94E-0007nk-P6
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:16:02 +0000
Received: from mail-wm1-f68.google.com (unknown [209.85.128.68])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 942dccf4-b72e-440f-9281-60580c165893;
 Fri, 04 Dec 2020 11:16:00 +0000 (UTC)
Received: by mail-wm1-f68.google.com with SMTP id e25so6712741wme.0
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 03:16:00 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id b14sm3203129wrx.35.2020.12.04.03.15.59
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 03:15:59 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 942dccf4-b72e-440f-9281-60580c165893
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=jFJIH+lw+TMVA2V8UqXknJ8gSkPNdLR7MaI2n2kAMs4=;
        b=SrqEEohvP+BZdCMV3l7ikDf5AnNkK0kwsVBPYw3smgJ/wacptD72Le2qKSW5xRoYzN
         3Z4+98OEoCH5DR1K466DJxzU7CM4MYLYUox/zvlWTKrPF2ygl3b3cwAmgFMsjP+66OI9
         P0k98ydT8FVHHcIt3o4Z3Zor5cSfF7D6qZk5xuYnoWdY990YTPs6YhedfblkZY0+k2E9
         tAp6jlKsT9kGv1aH55fANUoBTrWQhzfKXLULnmzjd8VRvzGTcTl6wE+P8ZtAyhtwbdmI
         ulDNUUcM0vWW1TZGJpeY40jmf7p2fghPBK4AHCWDyZkNAMEEVW8UmH31dgWs3fyD8yDc
         eqRA==
X-Gm-Message-State: AOAM530WQ68ci45P5q57eT+dGUksfIaD+Ln69CpDpUntcrKO4H6VYTTT
	9lmgBbN4gDtDMOHXwhs4vAA=
X-Google-Smtp-Source: ABdhPJzynoOzSvmEUiPX5nnqOc3x9oJE14/CZWa3l2WPV3av2SFoudCMCiLe8p0+FRyk1FCOSgQlmg==
X-Received: by 2002:a1c:220b:: with SMTP id i11mr3713673wmi.8.1607080559783;
        Fri, 04 Dec 2020 03:15:59 -0800 (PST)
Date: Fri, 4 Dec 2020 11:15:57 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v5 01/23] xl / libxl: s/pcidev/pci and remove
 DEFINE_DEVICE_TYPE_STRUCT_X
Message-ID: <20201204111557.34ya4w4wvqvyonao@liuwe-devbox-debian-v2>
References: <20201203142534.4017-1-paul@xen.org>
 <20201203142534.4017-2-paul@xen.org>
 <20201204111326.5pxgqertdm3tk7y2@liuwe-devbox-debian-v2>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201204111326.5pxgqertdm3tk7y2@liuwe-devbox-debian-v2>
User-Agent: NeoMutt/20180716

On Fri, Dec 04, 2020 at 11:13:26AM +0000, Wei Liu wrote:
> On Thu, Dec 03, 2020 at 02:25:12PM +0000, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> > 
> > The seemingly arbitrary use of 'pci' and 'pcidev' in the code in libxl_pci.c
> > is confusing and also compromises use of some macros used for other device
> > types. Indeed it seems that DEFINE_DEVICE_TYPE_STRUCT_X exists solely because
> > of this duality.
> > 
> > This patch purges use of 'pcidev' from the libxl code, allowing evaluation of
> > DEFINE_DEVICE_TYPE_STRUCT_X to be replaced with DEFINE_DEVICE_TYPE_STRUCT,
> > hence allowing removal of the former.
> > 
> > For consistency the xl and libs/util code is also modified, but in this case
> > it is purely cosmetic.
> > 
> > NOTE: Some of the more gross formatting errors (such as lack of spaces after
> >       keywords) that came into context have been fixed in libxl_pci.c.
> > 
> > Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> > Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> > ---
> > Cc: Ian Jackson <iwj@xenproject.org>
> > Cc: Wei Liu <wl@xen.org>
> > Cc: Anthony PERARD <anthony.perard@citrix.com>
> > 
> 
> This is going to break libxl callers because the name "pcidev" is
> visible from the public header.
> 
> I agree this is confusing and inconsistent, but we didn't go to extra
> lengths to maintain the inconsistency for no reason.
> 
> If you really want to change it, I won't stand in the way. In fact, I'm
> all for consistency. I think the flag you added should help alleviate
> the fallout. Also, you will need to submit patches to libvirt (the only
> user I know of) to make use of the flag and switch to the new name and
> then request such patch(es) be backported to all maintained versions of
> libvirt.
> 
> See https://github.com/libvirt/libvirt/blob/0d05d51b715390e08cd112f83e03b6776412aaeb/src/libxl/libxl_conf.c#L2273

Pasted in the wrong line. This is the correct one I had in mind:

https://github.com/libvirt/libvirt/blob/0d05d51b715390e08cd112f83e03b6776412aaeb/src/libxl/libxl_conf.c#L2320

Wei.


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:17:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:17:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44439.79605 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl95X-0007wO-71; Fri, 04 Dec 2020 11:17:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44439.79605; Fri, 04 Dec 2020 11:17:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl95X-0007wH-3y; Fri, 04 Dec 2020 11:17:23 +0000
Received: by outflank-mailman (input) for mailman id 44439;
 Fri, 04 Dec 2020 11:17:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl95W-0007wC-H6
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:17:22 +0000
Received: from mail-wm1-f65.google.com (unknown [209.85.128.65])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7866aea6-3b3e-4b16-b38a-aa8639c0b333;
 Fri, 04 Dec 2020 11:17:21 +0000 (UTC)
Received: by mail-wm1-f65.google.com with SMTP id g25so4496577wmh.1
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 03:17:21 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id k1sm3179051wrp.23.2020.12.04.03.17.20
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 03:17:20 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7866aea6-3b3e-4b16-b38a-aa8639c0b333
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=YrkIW5eZe1PTKwVNEi3pHF2+fCyhthMg1SVSgIuHqdQ=;
        b=uAumc/t9Txj4dXj/Py6ThvXdnXcLSgh1PcIpFLhS5jCIIRGdDqRmIWx21B4uLj/5SO
         ZAhJRSjaFhNWbxa8jyJGo4aTcvk5mhFeLNA9NapZglGWVzhHCzm9HXp7+zJsTjR4oaM7
         kLl9jjqIOWadxy+dQrEItUCYG8rYGXbmUO4aDxIkqokBmmPBisybk00wJORAaUZjo1P4
         YkFgKHKuhJRDbspKDlKMYgqwsriSWUBabuy3ZRd7YQ8x2VRroolEDPpWwECkrzubg8Rl
         6ZBYdWVd3TNgygR9uMkrBq/WMF9EXdFbGMbri+Xg3uUMmy2hJVUlLKtlcUBaBaPlQFbZ
         Zc4w==
X-Gm-Message-State: AOAM533VFSWs1ESYZssDhXQj0ZhGaAEkZbvprb9KlvoBn19aR5E3EM0/
	2kYm99p+MO0tPoRVDgXkv90=
X-Google-Smtp-Source: ABdhPJwp7Dwc62vxAd3scwCvSIKjwSMrwsxsp74qhykFOwImb9lLoBkIhoQ0g6O28D5tdjd0O4j9MA==
X-Received: by 2002:a1c:bdc1:: with SMTP id n184mr3665157wmf.125.1607080641086;
        Fri, 04 Dec 2020 03:17:21 -0800 (PST)
Date: Fri, 4 Dec 2020 11:17:19 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v5 02/23] libxl: make libxl__device_list() work correctly
 for LIBXL__DEVICE_KIND_PCI...
Message-ID: <20201204111719.cfclb6wgdcroncfp@liuwe-devbox-debian-v2>
References: <20201203142534.4017-1-paul@xen.org>
 <20201203142534.4017-3-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201203142534.4017-3-paul@xen.org>
User-Agent: NeoMutt/20180716

On Thu, Dec 03, 2020 at 02:25:13PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> ... devices.
> 
> Currently there is an assumption built into libxl__device_list() that device
> backends are fully enumerated under the '/libxl' path in xenstore. This is
> not the case for PCI backend devices, which are only properly enumerated
> under '/local/domain/0/backend'.
> 
> This patch adds a new get_path() method to libxl__device_type to allow a
> backend implementation (such as PCI) to specify the xenstore path where
> devices are enumerated and modifies libxl__device_list() to use this method
> if it is available. Also, if the get_num() method is defined then the
> from_xenstore() method expects to be passed the backend path without the device
> number concatenated, so this issue is also rectified.
> 
> Having made libxl__device_list() work correctly, this patch removes the
> open-coded libxl_pci_device_pci_list() in favour of an evaluation of the
> LIBXL_DEFINE_DEVICE_LIST() macro. This has the side-effect of also defining
> libxl_pci_device_pci_list_free() which will be used in subsequent patches.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:18:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:18:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44444.79617 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl96z-00086S-Ii; Fri, 04 Dec 2020 11:18:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44444.79617; Fri, 04 Dec 2020 11:18:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl96z-00086L-FT; Fri, 04 Dec 2020 11:18:53 +0000
Received: by outflank-mailman (input) for mailman id 44444;
 Fri, 04 Dec 2020 11:18:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl96x-00086F-Kb
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:18:51 +0000
Received: from mail-wr1-f67.google.com (unknown [209.85.221.67])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f6e05185-1e04-4afd-a384-015777f45441;
 Fri, 04 Dec 2020 11:18:50 +0000 (UTC)
Received: by mail-wr1-f67.google.com with SMTP id 91so996988wrj.7
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 03:18:50 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id k6sm2720768wmf.25.2020.12.04.03.18.49
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 03:18:49 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f6e05185-1e04-4afd-a384-015777f45441
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=/mQZwD0o5vjrIKdlWxuGSJ4YB8S+Z/KzQCzJMkdeqYY=;
        b=YC/pYI7UoiRveRRHDkdB65Fj3uvAV0YYVHVhGPP2cv4qjc5pHQvGn9sxvzxTEkw5my
         suPzO2Mexj5vTmKym7wgYt67XJyn0Z3/U5Ml6xjFrsLRlt3SO1dlqyjOPL+3GSHOem1J
         Otk2CmTQ25hPHD/i7EE1lMKflVFedLJR+J69SRa+knzJBB9FzvUi4+KHjtWhDV2GZEB5
         0ogxUkcYqPb3+rHNcNbVckOpBYFHHT30/57jlFuObG32HKoi+Dk+m92PiJs5c6zSQzDq
         n8pw+M+WlwkOOZ1/CankPUJQVsqRNMoi1Rn5Er6G8Lc7AFONGDbq0EN/VBeAJxFYN44v
         6Q9g==
X-Gm-Message-State: AOAM532mdzC0K6i8TvLI593DtqAPrqaYPvh+nTMvdlErv1HinxFRk7mG
	SrdrX9KmSeRC0VhoDZdDdKQ=
X-Google-Smtp-Source: ABdhPJztSHKwRcD1BAt/CeqBvMknVFdjwDRN0XCxQRcwVtHzJe+o7DrAGu5hDzhxffjMSsV98c1UJQ==
X-Received: by 2002:a5d:4e47:: with SMTP id r7mr4352815wrt.342.1607080730168;
        Fri, 04 Dec 2020 03:18:50 -0800 (PST)
Date: Fri, 4 Dec 2020 11:18:48 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v5 03/23] libxl: Make sure devices added by pci-attach
 are reflected in the config
Message-ID: <20201204111848.bpgebyl3gcgwaprw@liuwe-devbox-debian-v2>
References: <20201203142534.4017-1-paul@xen.org>
 <20201203142534.4017-4-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201203142534.4017-4-paul@xen.org>
User-Agent: NeoMutt/20180716

On Thu, Dec 03, 2020 at 02:25:14PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> Currently libxl__device_pci_add_xenstore() is broken in that it does not
> update the domain's configuration for the first device added (which causes
> creation of the overall backend area in xenstore). This can be easily observed
> by running 'xl list -l' after adding a single device: the device will be
> missing.
> 
> This patch fixes the problem and adds a DEBUG log line to allow easy
> verification that the domain configuration is being modified. Also, the use
> of libxl__device_generic_add() is dropped as it leads to a confusing situation
> where only partial backend information is written under the xenstore
> '/libxl' path. For LIBXL__DEVICE_KIND_PCI devices the only definitive
> information in xenstore is under '/local/domain/0/backend' (the '0' being
> hard-coded).
> 
> NOTE: This patch includes a whitespace fix in add_pcis_done().
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:19:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:19:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44449.79629 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl97M-0008CH-RO; Fri, 04 Dec 2020 11:19:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44449.79629; Fri, 04 Dec 2020 11:19:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl97M-0008C9-OD; Fri, 04 Dec 2020 11:19:16 +0000
Received: by outflank-mailman (input) for mailman id 44449;
 Fri, 04 Dec 2020 11:19:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl97L-0008Bx-87
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:19:15 +0000
Received: from mail-wr1-f68.google.com (unknown [209.85.221.68])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c4cda3a2-143b-4626-9843-fa6cef21ffcb;
 Fri, 04 Dec 2020 11:19:14 +0000 (UTC)
Received: by mail-wr1-f68.google.com with SMTP id t4so4922841wrr.12
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 03:19:14 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id n10sm3154087wrv.77.2020.12.04.03.19.13
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 03:19:13 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c4cda3a2-143b-4626-9843-fa6cef21ffcb
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=oucn0iOy6mf97liPz6ggJApsZ+F/ZSW6CQPdImEkriM=;
        b=RxdJjPuLBg52UFMc8Y+4hlMe7N9l0SN1525uKDggfut7FS3XuxubbHXwztoMJzMgNf
         Ev6kJ+vCqtAq2nreMNBj0LcBPQ8Dn2fQnF1r23Sd5kfQaq04Ac4tKxDn7Ouf5vyl0ieb
         47P/FFhxRbziG2e/OjJb5MAe/UR1kuztPDWuqU7Ks1MXjqX0v4U+a3uTEVObVm0OA4kc
         oWx6hNNzJYj/jTeZVAvNyaKYK+ugGgdAmKlJ+V7ycLAFBdN2Ow/RBoW0XNaop9tS33TK
         lEl9Q5rN7V2MA3YVYOu7mPKpZk0JAnzA/wcuQFM9gAcFBPieTya1lNMJnEIu7FDhrfyb
         isxw==
X-Gm-Message-State: AOAM533q/2GTDoKmqIThCwSYr8U1L7W7b4SYqD+MU8QrgbZdHpeIoOeu
	DnVdcKLhRJ2KUgMur207F+g=
X-Google-Smtp-Source: ABdhPJwvujTiSE4oLfPS1NKiRh2I+DJNs0vYg3OFA5gS6B2kCVw0olwXcqPYneWMYvGbl+SU9nofJA==
X-Received: by 2002:adf:bd84:: with SMTP id l4mr4472549wrh.41.1607080753849;
        Fri, 04 Dec 2020 03:19:13 -0800 (PST)
Date: Fri, 4 Dec 2020 11:19:11 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v5 04/23] libxl: add/recover 'rdm_policy' to/from PCI
 backend in xenstore
Message-ID: <20201204111911.vps6sjyqtocykjzl@liuwe-devbox-debian-v2>
References: <20201203142534.4017-1-paul@xen.org>
 <20201203142534.4017-5-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201203142534.4017-5-paul@xen.org>
User-Agent: NeoMutt/20180716

On Thu, Dec 03, 2020 at 02:25:15PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> Other parameters, such as 'msitranslate' and 'permissive' are dealt with
> but 'rdm_policy' appears to have been completely missed.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:19:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:19:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44451.79641 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl97c-0008Hp-4I; Fri, 04 Dec 2020 11:19:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44451.79641; Fri, 04 Dec 2020 11:19:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl97c-0008Hh-0r; Fri, 04 Dec 2020 11:19:32 +0000
Received: by outflank-mailman (input) for mailman id 44451;
 Fri, 04 Dec 2020 11:19:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl97b-0008HN-7v
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:19:31 +0000
Received: from mail-wr1-f66.google.com (unknown [209.85.221.66])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7d7db448-4ac8-49e7-8519-c2d6b8e1bb22;
 Fri, 04 Dec 2020 11:19:30 +0000 (UTC)
Received: by mail-wr1-f66.google.com with SMTP id k14so4966734wrn.1
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 03:19:30 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id u26sm2912463wmm.24.2020.12.04.03.19.29
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 03:19:29 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7d7db448-4ac8-49e7-8519-c2d6b8e1bb22
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=pAimBtLIAxfNJiHe4gICAMSy6aIpRsD/YxrsobslKRE=;
        b=QaCpo2kndYRLzcgzkpTyPfweMwT1smD4tp3SGD/6yRghByAdDZiBhExGHsBPJfUHWW
         8l/BUz+OK/PqYGhlFQ0wdFFHSJn7G1h5sl/N+HKcpU9NrjMd8Q0axaSvNdS+e9YQ9/8X
         lrWZwLpyF/np+ewLuLZrmW5YPCNBA0EzMpUzTlG7IcI/c7n0oNp/tRRl2/ObVwupZ2fO
         peTvOLckK2HuDb6/Slt2aFYSyt05TW4TGPmmpjxSJJA1WVFTQ4b3aAJjaHfhvO0y+5wB
         joqm7ic2DJTvg93f8SNeij3JgGVGdAPGLikh2uLEo6nrYBkWHJSa3wNcv27kE4eO3jCv
         2t/A==
X-Gm-Message-State: AOAM533HaiyvgwwHtfiKkaPQDK7qFpn4V4LWa7hI0wnphRdtXXFmmUBl
	KEaV7XY310Jt9u0bFRvsnJ8=
X-Google-Smtp-Source: ABdhPJx7vh18C0Ien/6kPEMqkKZbkNcqlttjW4qenA4K2uw4rPDqJQAxRLXS3z7EuxQ/vzhgtQtnUA==
X-Received: by 2002:adf:e84e:: with SMTP id d14mr4300031wrn.190.1607080769923;
        Fri, 04 Dec 2020 03:19:29 -0800 (PST)
Date: Fri, 4 Dec 2020 11:19:28 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v5 05/23] libxl: s/detatched/detached in libxl_pci.c
Message-ID: <20201204111928.ddq4blb6e2hsl2gc@liuwe-devbox-debian-v2>
References: <20201203142534.4017-1-paul@xen.org>
 <20201203142534.4017-6-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201203142534.4017-6-paul@xen.org>
User-Agent: NeoMutt/20180716

On Thu, Dec 03, 2020 at 02:25:16PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> Simple spelling correction. Purely cosmetic fix.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:19:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:19:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44456.79653 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl97p-0008Nt-DU; Fri, 04 Dec 2020 11:19:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44456.79653; Fri, 04 Dec 2020 11:19:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl97p-0008Nm-9y; Fri, 04 Dec 2020 11:19:45 +0000
Received: by outflank-mailman (input) for mailman id 44456;
 Fri, 04 Dec 2020 11:19:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl97o-0008NK-0n
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:19:44 +0000
Received: from mail-wm1-f68.google.com (unknown [209.85.128.68])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b219a16a-513e-4cff-8b60-20e17671e652;
 Fri, 04 Dec 2020 11:19:43 +0000 (UTC)
Received: by mail-wm1-f68.google.com with SMTP id h21so6708603wmb.2
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 03:19:43 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id w3sm2798123wma.3.2020.12.04.03.19.42
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 03:19:42 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b219a16a-513e-4cff-8b60-20e17671e652
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=a4wnqHHZ6Nn/GohLuU45bYqVvn4Z6f2TRt09X4KhAiY=;
        b=mZEnONDKNSs3xrhPUy3oxBL2e19ldzBo7m16FBQKu7jUWinDjA9Ufp9aK0/gEchdK4
         mxA0A9s5miF+SRAfOjWwvws08Bf0PL/FHzEGGoVjSRmLwJAq9JrNJ2B1yJBz6qZqNsFp
         xRvJMcO5ZDk4k3mqEthCh3YZBAXX+P4G+gpJ5cX8uA4P3XQgUzL0JNCof7LjNXSkXg8S
         l+Dm6V7RWUPu15jh4repaVkxs+LfgJAitycrfwYNhROD7reuCUZzqAs26HKlbrODAbox
         W/IOjazH4EKX3K6mpuKGovPo3L5tPB7yuaUnpeL6gqZzM/nwRLP4Z8KTHGFX0sdlM1f7
         HAqQ==
X-Gm-Message-State: AOAM533YJXO8ewaE/FvOX5/0pej2DwHWzt2teeogMoYqsJ5JihGY781/
	JGrrOTHKDsSrskzsnywhbOg=
X-Google-Smtp-Source: ABdhPJzZsfkzgkmpXAa9zLsUjnxQi/AfHj/TOwj4g/8Kd9YyrTfo0uvwfAuOzcDA2s74OxXs52co2w==
X-Received: by 2002:a1c:1bc9:: with SMTP id b192mr3640214wmb.136.1607080782666;
        Fri, 04 Dec 2020 03:19:42 -0800 (PST)
Date: Fri, 4 Dec 2020 11:19:40 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v5 06/23] libxl: remove extraneous arguments to
 do_pci_remove() in libxl_pci.c
Message-ID: <20201204111940.ocex7wl346wusvbk@liuwe-devbox-debian-v2>
References: <20201203142534.4017-1-paul@xen.org>
 <20201203142534.4017-7-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201203142534.4017-7-paul@xen.org>
User-Agent: NeoMutt/20180716

On Thu, Dec 03, 2020 at 02:25:17PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> Both 'domid' and 'pci' are available in 'pci_remove_state' so there is no
> need to also pass them as separate arguments.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:19:51 2020
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Wei Liu'" <wl@xen.org>
Cc: <xen-devel@lists.xenproject.org>,
	"'Paul Durrant'" <pdurrant@amazon.com>,
	"'Oleksandr Andrushchenko'" <oleksandr_andrushchenko@epam.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Anthony PERARD'" <anthony.perard@citrix.com>
References: <20201203142534.4017-1-paul@xen.org> <20201203142534.4017-2-paul@xen.org> <20201204111326.5pxgqertdm3tk7y2@liuwe-devbox-debian-v2>
In-Reply-To: <20201204111326.5pxgqertdm3tk7y2@liuwe-devbox-debian-v2>
Subject: RE: [PATCH v5 01/23] xl / libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
Date: Fri, 4 Dec 2020 11:19:47 -0000
Message-ID: <013d01d6ca2f$605fe7e0$211fb7a0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQLNnXLs2pPLw4H6lzH6w2mINWFu0wFb+WecAj+9ygKn3EUpUA==

> -----Original Message-----
> From: Wei Liu <wl@xen.org>
> Sent: 04 December 2020 11:13
> To: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org; Paul Durrant <pdurrant@amazon.com>; Oleksandr Andrushchenko
> <oleksandr_andrushchenko@epam.com>; Ian Jackson <iwj@xenproject.org>; Wei Liu <wl@xen.org>; Anthony
> PERARD <anthony.perard@citrix.com>
> Subject: Re: [PATCH v5 01/23] xl / libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
> 
> On Thu, Dec 03, 2020 at 02:25:12PM +0000, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > The seemingly arbitrary use of 'pci' and 'pcidev' in the code in libxl_pci.c
> > is confusing and also compromises use of some macros used for other device
> > types. Indeed it seems that DEFINE_DEVICE_TYPE_STRUCT_X exists solely because
> > of this duality.
> >
> > This patch purges use of 'pcidev' from the libxl code, allowing evaluation of
> > DEFINE_DEVICE_TYPE_STRUCT_X to be replaced with DEFINE_DEVICE_TYPE_STRUCT,
> > hence allowing removal of the former.
> >
> > For consistency the xl and libs/util code is also modified, but in this case
> > it is purely cosmetic.
> >
> > NOTE: Some of the more gross formatting errors (such as lack of spaces after
> >       keywords) that came into context have been fixed in libxl_pci.c.
> >
> > Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> > Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> > ---
> > Cc: Ian Jackson <iwj@xenproject.org>
> > Cc: Wei Liu <wl@xen.org>
> > Cc: Anthony PERARD <anthony.perard@citrix.com>
> >
> 
> This is going to break libxl callers because the name "pcidev" is
> visible from the public header.
> 
> I agree this is confusing and inconsistent, but we didn't go to extra
> lengths to maintain the inconsistency for no reason.
> 
> If you really want to change it, I won't stand in the way. In fact, I'm
> all for consistency. I think the flag you added should help alleviate
> the fallout.

Yes, I thought that was the idea... we can make API changes if we add a
flag. I could see about adding shims to translate the names and keep the
internal code clean.

  Paul

> Also, you will need to submit patches to libvirt (the only
> user I know of) to make use of the flag and switch to the new name and
> then request such patch(es) be backported to all maintained versions of
> libvirt.
> 
> See
> https://github.com/libvirt/libvirt/blob/0d05d51b715390e08cd112f83e03b6776412aaeb/src/libxl/libxl_conf.c#L2273
> 
> Wei.



From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:21:47 2020
Date: Fri, 4 Dec 2020 11:21:41 +0000
From: Wei Liu <wl@xen.org>
To: paul@xen.org
Cc: 'Wei Liu' <wl@xen.org>, xen-devel@lists.xenproject.org,
	'Paul Durrant' <pdurrant@amazon.com>,
	'Oleksandr Andrushchenko' <oleksandr_andrushchenko@epam.com>,
	'Ian Jackson' <iwj@xenproject.org>,
	'Anthony PERARD' <anthony.perard@citrix.com>
Subject: Re: [PATCH v5 01/23] xl / libxl: s/pcidev/pci and remove
 DEFINE_DEVICE_TYPE_STRUCT_X
Message-ID: <20201204112141.wdwb54brb23x2bgs@liuwe-devbox-debian-v2>
References: <20201203142534.4017-1-paul@xen.org>
 <20201203142534.4017-2-paul@xen.org>
 <20201204111326.5pxgqertdm3tk7y2@liuwe-devbox-debian-v2>
 <013d01d6ca2f$605fe7e0$211fb7a0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <013d01d6ca2f$605fe7e0$211fb7a0$@xen.org>
User-Agent: NeoMutt/20180716

On Fri, Dec 04, 2020 at 11:19:47AM -0000, Paul Durrant wrote:
> > -----Original Message-----
> > From: Wei Liu <wl@xen.org>
> > Sent: 04 December 2020 11:13
> > To: Paul Durrant <paul@xen.org>
> > Cc: xen-devel@lists.xenproject.org; Paul Durrant <pdurrant@amazon.com>; Oleksandr Andrushchenko
> > <oleksandr_andrushchenko@epam.com>; Ian Jackson <iwj@xenproject.org>; Wei Liu <wl@xen.org>; Anthony
> > PERARD <anthony.perard@citrix.com>
> > Subject: Re: [PATCH v5 01/23] xl / libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
> > 
> > On Thu, Dec 03, 2020 at 02:25:12PM +0000, Paul Durrant wrote:
> > > From: Paul Durrant <pdurrant@amazon.com>
> > >
> > > The seemingly arbitrary use of 'pci' and 'pcidev' in the code in libxl_pci.c
> > > is confusing and also compromises use of some macros used for other device
> > > types. Indeed it seems that DEFINE_DEVICE_TYPE_STRUCT_X exists solely because
> > > of this duality.
> > >
> > > This patch purges use of 'pcidev' from the libxl code, allowing evaluation of
> > > DEFINE_DEVICE_TYPE_STRUCT_X to be replaced with DEFINE_DEVICE_TYPE_STRUCT,
> > > hence allowing removal of the former.
> > >
> > > For consistency the xl and libs/util code is also modified, but in this case
> > > it is purely cosmetic.
> > >
> > > NOTE: Some of the more gross formatting errors (such as lack of spaces after
> > >       keywords) that came into context have been fixed in libxl_pci.c.
> > >
> > > Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> > > Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> > > ---
> > > Cc: Ian Jackson <iwj@xenproject.org>
> > > Cc: Wei Liu <wl@xen.org>
> > > Cc: Anthony PERARD <anthony.perard@citrix.com>
> > >
> > 
> > This is going to break libxl callers because the name "pcidev" is
> > visible from the public header.
> > 
> > I agree this is confusing and inconsistent, but we didn't go to extra
> > lengths to maintain the inconsistency for no reason.
> > 
> > If you really want to change it, I won't stand in the way. In fact, I'm
> > all for consistency. I think the flag you added should help alleviate
> > the fallout.
> 
> Yes, I thought that was the idea... we can make API changes if we add a
> flag. I could see about adding shims to translate the names and keep the
> internal code clean.

Yes, if you can add some internal shims to handle it, that would be
great. Otherwise you will need to at least fix libvirt.

Wei.


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:22:13 2020
Date: Fri, 4 Dec 2020 11:22:08 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v5 07/23] libxl: stop using aodev->device_config in
 libxl__device_pci_add()...
Message-ID: <20201204112208.yk5cpm4r4sgcoihs@liuwe-devbox-debian-v2>
References: <20201203142534.4017-1-paul@xen.org>
 <20201203142534.4017-8-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201203142534.4017-8-paul@xen.org>
User-Agent: NeoMutt/20180716

On Thu, Dec 03, 2020 at 02:25:18PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> ... to hold a pointer to the device.
> 
> There is already a 'pci' field in 'pci_add_state' so simply use that from
> the start. This also allows the 'pci' (#3) argument to be dropped from
> do_pci_add().
> 
> NOTE: This patch also changes the type of the 'pci_domid' field in
>       'pci_add_state' from 'int' to 'libxl_domid' which is more appropriate
>       given what the field is used for.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:22:33 2020
Date: Fri, 4 Dec 2020 11:22:27 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v5 08/23] libxl: generalise 'driver_path' xenstore access
 functions in libxl_pci.c
Message-ID: <20201204112227.zpnm5qjlbzwgnks7@liuwe-devbox-debian-v2>
References: <20201203142534.4017-1-paul@xen.org>
 <20201203142534.4017-9-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201203142534.4017-9-paul@xen.org>
User-Agent: NeoMutt/20180716

On Thu, Dec 03, 2020 at 02:25:19PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> For the purposes of re-binding a device to its previous driver
> libxl__device_pci_assignable_add() writes the driver path into xenstore.
> This path is then read back in libxl__device_pci_assignable_remove().
> 
> The functions that support this writing to and reading from xenstore are
> currently dedicated to this purpose, and hence the node name 'driver_path'
> is hard-coded. This patch generalizes these utility functions and passes
> 'driver_path' as an argument. Subsequent patches will invoke them to
> access other nodes.
> 
> NOTE: Because the functions will have a broader use (other than storing a
>       driver path in lieu of pciback) the base xenstore path is also
>       changed from '/libxl/pciback' to '/libxl/pci'.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:23:00 2020
Date: Fri, 4 Dec 2020 11:22:54 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v5 09/23] libxl: remove unnecessary check from
 libxl__device_pci_add()
Message-ID: <20201204112254.fjdfb4esxrb3wiil@liuwe-devbox-debian-v2>
References: <20201203142534.4017-1-paul@xen.org>
 <20201203142534.4017-10-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201203142534.4017-10-paul@xen.org>
User-Agent: NeoMutt/20180716

On Thu, Dec 03, 2020 at 02:25:20PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> The code currently checks explicitly whether the device is already assigned,
> but this is actually unnecessary as assigned devices do not form part of
> the list returned by libxl_device_pci_assignable_list() and hence the
> libxl_pci_assignable() test would have already failed.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:23:35 2020
         IbyB+omN4RSbwXN9DudB0lIoZPv4NHi2QTc75NvdenNM7VtO9P/1c56PmzJZqi6jI50m
         CX8Q==
X-Gm-Message-State: AOAM532c14AkQB8gCCqGIVb1GDOBDJK+uDZ/T96G6+dN42TkpixCCYwB
	dVxhjfVbWiAQ0FRR8Hao1N0=
X-Google-Smtp-Source: ABdhPJzBH9pBNAWMR2Q0bO5UJQYvQXuUaJSGufDVoOHvqtzxecm6dpQWLxWwaDeccedmPdDyZHB0hg==
X-Received: by 2002:adf:ba91:: with SMTP id p17mr4396625wrg.328.1607081009796;
        Fri, 04 Dec 2020 03:23:29 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Wei Liu'" <wl@xen.org>
Cc: <xen-devel@lists.xenproject.org>,
	"'Paul Durrant'" <pdurrant@amazon.com>,
	"'Oleksandr Andrushchenko'" <oleksandr_andrushchenko@epam.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Anthony PERARD'" <anthony.perard@citrix.com>
References: <20201203142534.4017-1-paul@xen.org> <20201203142534.4017-2-paul@xen.org> <20201204111326.5pxgqertdm3tk7y2@liuwe-devbox-debian-v2> <013d01d6ca2f$605fe7e0$211fb7a0$@xen.org> <20201204112141.wdwb54brb23x2bgs@liuwe-devbox-debian-v2>
In-Reply-To: <20201204112141.wdwb54brb23x2bgs@liuwe-devbox-debian-v2>
Subject: RE: [PATCH v5 01/23] xl / libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
Date: Fri, 4 Dec 2020 11:23:28 -0000
Message-ID: <014701d6ca2f$e414f260$ac3ed720$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQLNnXLs2pPLw4H6lzH6w2mINWFu0wFb+WecAj+9ygIB0uemdAExrIbdp8QiUiA=

> -----Original Message-----
> From: Wei Liu <wl@xen.org>
> Sent: 04 December 2020 11:22
> To: paul@xen.org
> Cc: 'Wei Liu' <wl@xen.org>; xen-devel@lists.xenproject.org; 'Paul Durrant' <pdurrant@amazon.com>;
> 'Oleksandr Andrushchenko' <oleksandr_andrushchenko@epam.com>; 'Ian Jackson' <iwj@xenproject.org>;
> 'Anthony PERARD' <anthony.perard@citrix.com>
> Subject: Re: [PATCH v5 01/23] xl / libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
> 
> On Fri, Dec 04, 2020 at 11:19:47AM -0000, Paul Durrant wrote:
> > > -----Original Message-----
> > > From: Wei Liu <wl@xen.org>
> > > Sent: 04 December 2020 11:13
> > > To: Paul Durrant <paul@xen.org>
> > > Cc: xen-devel@lists.xenproject.org; Paul Durrant <pdurrant@amazon.com>; Oleksandr Andrushchenko
> > > <oleksandr_andrushchenko@epam.com>; Ian Jackson <iwj@xenproject.org>; Wei Liu <wl@xen.org>;
> Anthony
> > > PERARD <anthony.perard@citrix.com>
> > > Subject: Re: [PATCH v5 01/23] xl / libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
> > >
> > > On Thu, Dec 03, 2020 at 02:25:12PM +0000, Paul Durrant wrote:
> > > > From: Paul Durrant <pdurrant@amazon.com>
> > > >
> > > > The seemingly arbitrary use of 'pci' and 'pcidev' in the code in libxl_pci.c
> > > > is confusing and also compromises use of some macros used for other device
> > > > types. Indeed it seems that DEFINE_DEVICE_TYPE_STRUCT_X exists solely because
> > > > of this duality.
> > > >
> > > > This patch purges use of 'pcidev' from the libxl code, allowing evaluation of
> > > > DEFINE_DEVICE_TYPE_STRUCT_X to be replaced with DEFINE_DEVICE_TYPE_STRUCT,
> > > > hence allowing removal of the former.
> > > >
> > > > For consistency the xl and libs/util code is also modified, but in this case
> > > > it is purely cosmetic.
> > > >
> > > > NOTE: Some of the more gross formatting errors (such as lack of spaces after
> > > >       keywords) that came into context have been fixed in libxl_pci.c.
> > > >
> > > > Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> > > > Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> > > > ---
> > > > Cc: Ian Jackson <iwj@xenproject.org>
> > > > Cc: Wei Liu <wl@xen.org>
> > > > Cc: Anthony PERARD <anthony.perard@citrix.com>
> > > >
> > >
> > > This is going to break libxl callers because the name "pcidev" is
> > > visible from the public header.
> > >
> > > I agree this is confusing and inconsistent, but we didn't go extra
> > > length to maintain the inconsistency for no reason.
> > >
> > > If you really want to change it, I won't stand in the way. In fact, I'm
> > > all for consistency. I think the flag you added should help alleviate
> > > the fallout.
> >
> > Yes, I thought that was the idea... we can make API changes if we add a flag. I could see about
> adding shims to translate the names
> > and keep the internal code clean.
> 
> Yes if you can add some internal shims to handle it that would be
> great. Otherwise you will need to at least fix libvirt.
> 

I think shims are safest. We don't know what other callers are lurking out there :-)
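To illustrate the kind of shim being discussed, here is a minimal sketch of keeping an old public-header spelling alive while the internals use the new name. The type layout and helper here are invented for illustration; they are not the actual libxl definitions or the approach the patch ultimately took:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical renamed type, as the internals would now spell it. */
typedef struct {
    unsigned int domain, bus, dev, func;
} libxl_device_pci;

/*
 * Backwards-compatibility shim: existing callers that still use the
 * old 'pcidev' spelling keep compiling, because the old names alias
 * the new ones. (Sketch only -- real libxl compatibility handling
 * may differ.)
 */
typedef libxl_device_pci libxl_device_pcidev;
#define libxl_device_pcidev_init libxl_device_pci_init

static void libxl_device_pci_init(libxl_device_pci *pci)
{
    memset(pci, 0, sizeof(*pci));
}
```

A caller written against the old API (`libxl_device_pcidev d; libxl_device_pcidev_init(&d);`) continues to work unchanged, which is the point of preferring shims over fixing every out-of-tree consumer.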

  Paul

> Wei.



From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:26:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:26:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44507.79736 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9Eb-0001oC-T9; Fri, 04 Dec 2020 11:26:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44507.79736; Fri, 04 Dec 2020 11:26:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9Eb-0001o5-QK; Fri, 04 Dec 2020 11:26:45 +0000
Received: by outflank-mailman (input) for mailman id 44507;
 Fri, 04 Dec 2020 11:26:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl9Eb-0001o0-6K
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:26:45 +0000
Received: from mail-wr1-f68.google.com (unknown [209.85.221.68])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6c303893-563c-4cb7-a3a6-7b303daa7b34;
 Fri, 04 Dec 2020 11:26:43 +0000 (UTC)
Received: by mail-wr1-f68.google.com with SMTP id 91so1019886wrj.7
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 03:26:43 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id b11sm2235812wrs.84.2020.12.04.03.26.41
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 03:26:41 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c303893-563c-4cb7-a3a6-7b303daa7b34
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=DegQS7TWt3u4cw5soBD9Ag1ldmEawFj2H5tQVuxrhUs=;
        b=imGSsAA2nbclZMSTPZy6qlumAw70ZsuRQ+s8dInZeF+lvLKTKu4qfktEW2SNuVFKly
         dnjBIyWLJAXVpQz90bT+nLqqTFUXetayHNTWT4r6k3BGVoAepFZUnHKbhDGNmxNFYRKY
         Bh4fyknk1XUcOYJUZcXx1SnZs9hfmVvBaI1Av3K51QwkP/dQ3wabLeiV348UVKb2Abz4
         4u0MpoCBcPhu/eTP6sVF7oc50ON7d7CWYZeSGtEyfMKZo+hG24CBPhFFpwSZf2XjO2wZ
         pTGVUXUTiJ4Zt/HKuU+koFJU0MEHJWfIbsxe8MusbJ6ys1ur7cLqFTuszxdGrHzfyZCn
         pt1g==
X-Gm-Message-State: AOAM532I8T3uYC1Ed0sPx8IV+75DCydd/wTnth3fU5lf9h4lV1FDmtNf
	sAI6M1tABOGXpPHjXdbXL00=
X-Google-Smtp-Source: ABdhPJwBE9eMkFnYC9WQfWhZtc3YTvZRxUtUYgdtgmqNnjjNxL4Nv2zbrkfJmOw4p19I2tHx0PsTRA==
X-Received: by 2002:a5d:634d:: with SMTP id b13mr4484603wrw.310.1607081202506;
        Fri, 04 Dec 2020 03:26:42 -0800 (PST)
Date: Fri, 4 Dec 2020 11:26:40 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v5 10/23] libxl: remove get_all_assigned_devices() from
 libxl_pci.c
Message-ID: <20201204112640.5ucp2oazwjbla6e3@liuwe-devbox-debian-v2>
References: <20201203142534.4017-1-paul@xen.org>
 <20201203142534.4017-11-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201203142534.4017-11-paul@xen.org>
User-Agent: NeoMutt/20180716

On Thu, Dec 03, 2020 at 02:25:21PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> Use of this function is a very inefficient way to check whether a device
> has already been assigned.
> 
> This patch adds code that saves the domain id in xenstore at the point of
> assignment, and removes it again when the device is de-assigned (or the
> domain is destroyed). It is then straightforward to check whether a device
> has been assigned by checking whether it has a saved domain id.
> 
> NOTE: To facilitate the xenstore check it is necessary to move the
>       pci_info_xs_read() earlier in libxl_pci.c. To keep related functions
>       together, the rest of the pci_info_xs_XXX() functions are moved too.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Acked-by: Wei Liu <wl@xen.org>
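The scheme described in the quoted commit message, saving the owning domid under a per-device key at assignment time so that the "already assigned?" question becomes a single lookup, can be modelled in miniature. The table below stands in for xenstore; all names here are invented for illustration and are not the libxl/xenstore API:

```c
#include <assert.h>
#include <string.h>

#define MAX_DEVS 16

/* Toy key/value store: one entry per device BDF, holding the owner. */
struct entry { char bdf[16]; int domid; int used; };
static struct entry store[MAX_DEVS];

/* Record the owning domid at the point of assignment. */
static void assign(const char *bdf, int domid)
{
    for (int i = 0; i < MAX_DEVS; i++)
        if (!store[i].used) {
            strcpy(store[i].bdf, bdf);
            store[i].domid = domid;
            store[i].used = 1;
            return;
        }
}

/* Remove the record again on de-assignment (or domain destruction). */
static void deassign(const char *bdf)
{
    for (int i = 0; i < MAX_DEVS; i++)
        if (store[i].used && !strcmp(store[i].bdf, bdf))
            store[i].used = 0;
}

/* Single lookup replaces scanning every domain's device list.
 * Returns the owning domid, or -1 if the device is unassigned. */
static int assigned_domid(const char *bdf)
{
    for (int i = 0; i < MAX_DEVS; i++)
        if (store[i].used && !strcmp(store[i].bdf, bdf))
            return store[i].domid;
    return -1;
}
```

The efficiency win is exactly this shape: `assigned_domid()` is O(devices) against one small table, instead of enumerating the assigned-device list of every domain.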


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:28:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:28:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44511.79749 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9G8-0001xG-96; Fri, 04 Dec 2020 11:28:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44511.79749; Fri, 04 Dec 2020 11:28:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9G8-0001x9-5p; Fri, 04 Dec 2020 11:28:20 +0000
Received: by outflank-mailman (input) for mailman id 44511;
 Fri, 04 Dec 2020 11:28:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl9G7-0001x4-H2
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:28:19 +0000
Received: from mail-wr1-f66.google.com (unknown [209.85.221.66])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a2deff73-ca01-4a94-8db8-fe2eed1459a7;
 Fri, 04 Dec 2020 11:28:18 +0000 (UTC)
Received: by mail-wr1-f66.google.com with SMTP id t4so4949165wrr.12
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 03:28:18 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id y7sm2803038wmb.37.2020.12.04.03.28.17
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 03:28:17 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a2deff73-ca01-4a94-8db8-fe2eed1459a7
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=+hjSwgSDIvBj3Sb379n+N796YCNP7m2cy9yPk0T11TA=;
        b=LgryIRHLfKD0wwrx+HQqNVOg+cqLw9q2XDVx/MIiajkjWfV3aj8GxTv34rb3l+V3fY
         /gEzuc5V3WQTgvo3KJqRnYB4RBbhL4q4IVkVCTM6LrNI7sbMM8uif5DpyAXocYfmgmSb
         vsKnNPpHRmFaqA4Q9bTzrJMMwTBRaWjl1ms984KP1pyiXu97Rz0n+iYeSYKx42AayPMK
         hx1tddpnyE3ecWGtB0ULHTA2pnGRGyZGfufAHgnTNDfDH7Y0HffD+YTDTtAdt99RiS2T
         kJba8KlB37HH36r74DGr2zh+W+TtACg2s6aiRxsFUkor9GhtAYvTxXFohs6m/uV55OBt
         2P3Q==
X-Gm-Message-State: AOAM532QVy3buqY8FXJqhaWvv3qsnib4o283cPBw3zv5NolSv301fs0q
	cDCcZNMsif/yQcSvKdV7dXk=
X-Google-Smtp-Source: ABdhPJw8TFaiQ7bKQG5+6IXVdT4S/Fcge6kqNTTir/GW1WFyk/kJmHu7Lg/O6Hf38wP7Pl1jC5lD/w==
X-Received: by 2002:adf:f3cf:: with SMTP id g15mr4406556wrp.71.1607081297899;
        Fri, 04 Dec 2020 03:28:17 -0800 (PST)
Date: Fri, 4 Dec 2020 11:28:16 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v5 11/23] libxl: make sure callers of
 libxl_device_pci_list() free the list after use
Message-ID: <20201204112816.cl6fr6pgzxaopmds@liuwe-devbox-debian-v2>
References: <20201203142534.4017-1-paul@xen.org>
 <20201203142534.4017-12-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201203142534.4017-12-paul@xen.org>
User-Agent: NeoMutt/20180716

On Thu, Dec 03, 2020 at 02:25:22PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> A previous patch introduced libxl_device_pci_list_free() which should be used
> by callers of libxl_device_pci_list() to properly dispose of the exported
> 'libxl_device_pci' types and free the memory holding them. Whilst all
> current callers do ensure the memory is freed, only the code in xl's
> pcilist() function actually calls libxl_device_pci_dispose(). As it stands
> this laxity does not lead to any memory leaks, but the simple addition of
> e.g. a 'string' into the idl definition of 'libxl_device_pci' would lead
> to leaks.
> 
> This patch makes sure all callers of libxl_device_pci_list() can call
> libxl_device_pci_list_free() by keeping copies of 'libxl_device_pci'
> structures inline in 'pci_add_state' and 'pci_remove_state' (and also making
> sure these are properly disposed at the end of the operations) rather
> than keeping pointers to the structures returned by libxl_device_pci_list().
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:29:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:29:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44516.79761 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9Gs-00023w-JG; Fri, 04 Dec 2020 11:29:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44516.79761; Fri, 04 Dec 2020 11:29:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9Gs-00023p-GB; Fri, 04 Dec 2020 11:29:06 +0000
Received: by outflank-mailman (input) for mailman id 44516;
 Fri, 04 Dec 2020 11:29:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kl9Gq-00023f-Km
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:29:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kl9Go-0008D7-Us; Fri, 04 Dec 2020 11:29:02 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kl9Go-0002ny-Nh; Fri, 04 Dec 2020 11:29:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=7uul7Eq0jzdriWKnD6cl7d748nd4WelZ/jS33nSJLPc=; b=j80BGqbxTtx9AoQ5RdwYzj2VuM
	S6nAr4un1XOA9XeustC7t8bagLAfag9HbAvvTs2Mhc3sAXM+KKhjj2gksSFkaRCbKPBnI/JHCa52h
	ohCC5bxj1aoZHbMHQuIflZYp6nXMH4QwatdnPi4f5MGBriuqYCcsXH3P6nt/gV34Slv8=;
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <lengyelt@ainfosec.com>,
 Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d821c715-966a-b48b-a877-c5dac36822f0@suse.com>
 <17c90493-b438-fbc1-ca10-3bc4d89c4e5e@xen.org>
 <7a768bcd-80c1-d193-8796-7fb6720fa22a@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <1a8250f5-ea49-ac3a-e992-be7ec40deba9@xen.org>
Date: Fri, 4 Dec 2020 11:28:59 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <7a768bcd-80c1-d193-8796-7fb6720fa22a@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 03/12/2020 10:09, Jan Beulich wrote:
> On 02.12.2020 22:10, Julien Grall wrote:
>> On 23/11/2020 13:30, Jan Beulich wrote:
>>> While there don't look to be any problems with this right now, the lock
>>> order implications from holding the lock can be very difficult to follow
>>> (and may be easy to violate unknowingly). The present callbacks don't
>>> (and no such callback should) have any need for the lock to be held.
>>>
>>> However, vm_event_disable() frees the structures used by respective
>>> callbacks and isn't otherwise synchronized with invocations of these
>>> callbacks, so maintain a count of in-progress calls, for evtchn_close()
>>> to wait to drop to zero before freeing the port (and dropping the lock).
>>
>> AFAICT, this callback is not the only place where the synchronization is
>> missing in the VM event code.
>>
>> For instance, vm_event_put_request() can also race against
>> vm_event_disable().
>>
>> So shouldn't we handle this issue properly in VM event?
> 
> I suppose that's a question to the VM event folks rather than me?

Yes. From my understanding of Tamas's e-mail, they are relying on the 
monitoring software to do the right thing.

I will refrain from commenting on this approach. However, given that the 
race is much wider than the event channel, I would recommend not adding 
more code in the event channel to deal with such a problem.

Instead, this should be fixed in the VM event code when someone has time 
to harden the subsystem.

> 
>>> ---
>>> Should we make this accounting optional, to be requested through a new
>>> parameter to alloc_unbound_xen_event_channel(), or derived from other
>>> than the default callback being requested?
>>
>> Aside the VM event, do you see any value for the other caller?
> 
> No (albeit I'm not entirely certain about vpl011_notification()'s
> needs), hence the consideration. It's unnecessary overhead in
> those cases.

I had another look and I think there is a small race in VPL011. It 
should be easy to fix (I will try to have a look later today).

> 
>>> @@ -781,9 +786,15 @@ int evtchn_send(struct domain *ld, unsig
>>>            rport = lchn->u.interdomain.remote_port;
>>>            rchn  = evtchn_from_port(rd, rport);
>>>            if ( consumer_is_xen(rchn) )
>>> +        {
>>> +            /* Don't keep holding the lock for the call below. */
>>> +            atomic_inc(&rchn->u.interdomain.active_calls);
>>> +            evtchn_read_unlock(lchn);
>>>                xen_notification_fn(rchn)(rd->vcpu[rchn->notify_vcpu_id], rport);
>>> -        else
>>> -            evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
>>
>> atomic_dec() doesn't contain any memory barrier, so we will want one
>> between xen_notification_fn() and atomic_dec() to avoid re-ordering.
> 
> Oh, indeed. But smp_mb() is too heavy handed here - x86 doesn't
> really need any barrier, yet would gain a full MFENCE that way.
> Actually - looks like I forgot we gained smp_mb__before_atomic()
> a little over half a year ago.

Ah yes, I forgot that atomic instructions are ordered on x86.
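For reference, the count-of-in-progress-calls pattern from the quoted hunk can be modelled with C11 atomics standing in for Xen's `atomic_inc()`/`atomic_dec()` and `smp_mb__before_atomic()`. This is a sketch of the ordering requirement only, not the actual evtchn code:

```c
#include <assert.h>
#include <stdatomic.h>

/* In-progress call counter, analogous to u.interdomain.active_calls. */
static atomic_int active_calls;

static int callback_ran;
static void sample_cb(void) { callback_ran = 1; }

/* Bump the counter before dropping the per-channel lock, invoke the
 * callback without the lock held, then decrement with release
 * ordering so a closer waiting for the count to drain is guaranteed
 * to observe the callback's effects (the barrier the quoted thread
 * notes is needed before the non-ordering atomic_dec()). */
static void notify(void (*fn)(void))
{
    atomic_fetch_add_explicit(&active_calls, 1, memory_order_acquire);
    /* ...per-channel lock dropped here... */
    fn();
    atomic_fetch_sub_explicit(&active_calls, 1, memory_order_release);
}

/* The close path spins until this returns true before freeing. */
static int drained(void)
{
    return atomic_load_explicit(&active_calls, memory_order_acquire) == 0;
}
```

On x86 the release fence compiles to nothing extra, matching the observation above that a full MFENCE would be too heavy-handed there.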

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:29:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:29:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44517.79772 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9H8-00029K-02; Fri, 04 Dec 2020 11:29:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44517.79772; Fri, 04 Dec 2020 11:29:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9H7-00029D-TB; Fri, 04 Dec 2020 11:29:21 +0000
Received: by outflank-mailman (input) for mailman id 44517;
 Fri, 04 Dec 2020 11:29:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl9H5-00028g-Ug
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:29:19 +0000
Received: from mail-wr1-f68.google.com (unknown [209.85.221.68])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b06e870e-fbd4-49f8-b830-32da89671e08;
 Fri, 04 Dec 2020 11:29:18 +0000 (UTC)
Received: by mail-wr1-f68.google.com with SMTP id s8so4971866wrw.10
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 03:29:18 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id 64sm2820559wmd.12.2020.12.04.03.29.16
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 03:29:16 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b06e870e-fbd4-49f8-b830-32da89671e08
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=8JcoC/cE2Sni0QVP8Rh/OxtpAcstxTODBod/LqoYel4=;
        b=HNdy3HU9kIKPLh7fwYQTh5I+Fw/HZTNtLUjGWL/bR2vma03MHoeVA+9uhj9i2rUKy8
         8DykdWZbf6NjpbMb7AtCzlxh0kRQ44XphvW0WXjKmYTb0KbkjC5zdvG1gU410pPHC2eL
         BpKlJTkbXjNkWjoEYft5aQiK4YLdrnFIsaLfrsfWTr+mgRURIF5JuJ2uQQuojYIIAfc4
         a4RDMo4CIfPG47wpxB5Ry0MCnsMCx3qy6e50GuEO//cLeyS5Xv+lsK9qVYBTAcj1Mfzn
         uhhPuiZhf5BMNt1u+RdmofRQv0Wxu0c5Ux6AQrRZ7yiypNENTCzoUhsPoNXJ8LXWAfx1
         ObLA==
X-Gm-Message-State: AOAM531yP0C4bs8Vw7zWAw3JgaaRPD13bQF0/aeAEoLwO/xW4vEgVFdF
	VeREufUPtfhmN2QRuvrpkXE=
X-Google-Smtp-Source: ABdhPJw8S0m2dIYTW0qSdEaBxQ2CYzaOJa+zaWCuc278qRh5ciSy03YAzBxbFOgfqegP9Gwq/c3mHw==
X-Received: by 2002:a5d:4046:: with SMTP id w6mr4547721wrp.51.1607081357296;
        Fri, 04 Dec 2020 03:29:17 -0800 (PST)
Date: Fri, 4 Dec 2020 11:29:15 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	David Scott <dave@recoil.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v5 12/23] libxl: add
 libxl_device_pci_assignable_list_free()...
Message-ID: <20201204112915.p3i4necmgvtpmtkp@liuwe-devbox-debian-v2>
References: <20201203142534.4017-1-paul@xen.org>
 <20201203142534.4017-13-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201203142534.4017-13-paul@xen.org>
User-Agent: NeoMutt/20180716

On Thu, Dec 03, 2020 at 02:25:23PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> ... to be used by callers of libxl_device_pci_assignable_list().
> 
> Currently there is no API for callers of libxl_device_pci_assignable_list()
> to free the list. The xl function pciassignable_list() calls
> libxl_device_pci_dispose() on each element of the returned list, but
> libxl_pci_assignable() in libxl_pci.c does not. Neither does the implementation
> of libxl_device_pci_assignable_list() call libxl_device_pci_init().
> 
> This patch adds the new API function, makes sure it is used everywhere and
> also modifies libxl_device_pci_assignable_list() to initialize list
> entries rather than just zeroing them.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> Acked-by: Christian Lindig <christian.lindig@citrix.com>
> Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:29:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:29:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44524.79785 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9HS-0002GX-Al; Fri, 04 Dec 2020 11:29:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44524.79785; Fri, 04 Dec 2020 11:29:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9HS-0002GQ-6i; Fri, 04 Dec 2020 11:29:42 +0000
Received: by outflank-mailman (input) for mailman id 44524;
 Fri, 04 Dec 2020 11:29:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl9HR-0002GK-O7
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:29:41 +0000
Received: from mail-wm1-f66.google.com (unknown [209.85.128.66])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 02bb77cf-4717-440a-b2fe-70309c1b974e;
 Fri, 04 Dec 2020 11:29:41 +0000 (UTC)
Received: by mail-wm1-f66.google.com with SMTP id 3so6677568wmg.4
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 03:29:41 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id g11sm3327683wrq.7.2020.12.04.03.29.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 03:29:39 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 02bb77cf-4717-440a-b2fe-70309c1b974e
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=j1UO5S+/1hWYytAEZdsFJhC3rqMIBogTXlWIVsZnxZ0=;
        b=TQ/SSMne55cuKvfvwY8S3DQ6AtnjSqZl+HIe9kSEZxNVZB2ZUYWX600bC2f7kKtDuk
         UXK5D5tAw70qxUbx5b+okIGTVhbeJHGbJMC16Uf+ITJhkVi074kMGMLKJgD56PVEfXf8
         h97kN2OyfxQfRn8TEztFzM+ysxiujDaHGYSigWqt5d2MjQcz+T8DW/42fxhM3Xf/6Bkx
         fUFBMD0yhaAArXHQ6NlaJxM6ZX/t53iQ7MWz4hcgeBExHyf5tmfIjyzphoxvq2R4APr4
         k1BUaigXcnrzBIgE7pCj7uxMLubkb4miXCr20JSxR85ozW4zisgpqvSU34EQqkZOCk9n
         wqbQ==
X-Gm-Message-State: AOAM533Niz54y/fT2zU0t7jA1t/PZjHBlGuM7ERfSf9k5nfNEe/t2lRH
	ZyRvH7QHzMbE3AxU6RJsDjw=
X-Google-Smtp-Source: ABdhPJyB3J/RCfgVhRMC+8GVN1AI7qLnwiWXE61G0n5OhbjHoFiYqQ1VIYy4mXJ8S4yK8P8fjWN+EQ==
X-Received: by 2002:a1c:9dd8:: with SMTP id g207mr3686418wme.15.1607081380368;
        Fri, 04 Dec 2020 03:29:40 -0800 (PST)
Date: Fri, 4 Dec 2020 11:29:38 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v5 13/23] libxl: use COMPARE_PCI() macro
 in is_pci_in_array()...
Message-ID: <20201204112938.mjbjwyew2kkrmfwj@liuwe-devbox-debian-v2>
References: <20201203142534.4017-1-paul@xen.org>
 <20201203142534.4017-14-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201203142534.4017-14-paul@xen.org>
User-Agent: NeoMutt/20180716

On Thu, Dec 03, 2020 at 02:25:24PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> ... rather than an open-coded equivalent.
> 
> This patch tidies up the is_pci_in_array() function, making it take a single
> 'libxl_device_pci' argument rather than separate domain, bus, device and
> function arguments. The already-available COMPARE_PCI() macro can then be
> used, and the function is also modified to return 'bool' rather than 'int'.
> 
> The patch also modifies libxl_pci_assignable() to use is_pci_in_array() rather
> than a separate open-coded equivalent, and also modifies it to return a
> 'bool' rather than an 'int'.
> 
> NOTE: The COMPARE_PCI() macro is also fixed to include the 'domain' in its
>       comparison, which should always have been the case.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:30:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:30:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44529.79797 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9Hk-0002N6-Jx; Fri, 04 Dec 2020 11:30:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44529.79797; Fri, 04 Dec 2020 11:30:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9Hk-0002Mz-GD; Fri, 04 Dec 2020 11:30:00 +0000
Received: by outflank-mailman (input) for mailman id 44529;
 Fri, 04 Dec 2020 11:29:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl9Hj-0002Mq-MQ
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:29:59 +0000
Received: from mail-wm1-f67.google.com (unknown [209.85.128.67])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b427e4ce-8a47-45c7-adb0-3af833753cf2;
 Fri, 04 Dec 2020 11:29:59 +0000 (UTC)
Received: by mail-wm1-f67.google.com with SMTP id g25so4518734wmh.1
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 03:29:59 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id j8sm3287537wrx.11.2020.12.04.03.29.57
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 03:29:57 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b427e4ce-8a47-45c7-adb0-3af833753cf2
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=s6XWHWmXC3O+5iWU+9mVtXLFuukn/6544FOLIbMkQ6I=;
        b=I0eLKWm8YvJd89pKambqYkj6F9rmObbFpEDuhH2i/gn4tkhHknKZu0fSX3+0IgXJ30
         304GnmDohcHX9fcS66wK8OwFSTlTNuKUJTnbCDgvLqvrJfO97PSHnZK9dSXKtVWq3F25
         5wwq0OjUpO7JiDEeTJQhLWP3uv5uXMtxBL63rQpUgD0CnqMJ7v1iusxSctcHaF20Rxhy
         MS91uugvXui2jR6yJNElxaUtdveQVyXkOXOg4BJlyJYe//SkkIatWMzUP+XLLe/cxh+Q
         kQFzHB3M2RZ87YkBVUqHiTgZtqQeAGsCnVc1pi6nWgvMf4M4CCL0v5RM5UaOQ+wTQEr4
         KS9Q==
X-Gm-Message-State: AOAM530ax6jOh3Ln+iH0iArItWfVP314nkiMCotHuLa9KudZ+g7hf83K
	tQfQfb+dQVYXdCwmXlRa/ic=
X-Google-Smtp-Source: ABdhPJwtsebRUnAWlKDnzBYGGok755CbcMrC/o1NseYzldEqt8zsru7iGn6sY4Vh3X0xpKLmERel9g==
X-Received: by 2002:a1c:790f:: with SMTP id l15mr3719496wme.188.1607081398474;
        Fri, 04 Dec 2020 03:29:58 -0800 (PST)
Date: Fri, 4 Dec 2020 11:29:56 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v5 14/23] docs/man: extract documentation of
 PCI_SPEC_STRING from the xl.cfg manpage...
Message-ID: <20201204112956.7t23kidmf3p2zhig@liuwe-devbox-debian-v2>
References: <20201203142534.4017-1-paul@xen.org>
 <20201203142534.4017-15-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201203142534.4017-15-paul@xen.org>
User-Agent: NeoMutt/20180716

On Thu, Dec 03, 2020 at 02:25:25PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> ... and put it into a new xl-pci-configuration(5) manpage, akin to the
> xl-network-configuration(5) and xl-disk-configuration(5) manpages.
> 
> This patch moves the content of the section verbatim. A subsequent patch
> will improve the documentation, once it is in its new location.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:30:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:30:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44537.79808 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9II-0003BM-Ss; Fri, 04 Dec 2020 11:30:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44537.79808; Fri, 04 Dec 2020 11:30:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9II-0003BF-Pa; Fri, 04 Dec 2020 11:30:34 +0000
Received: by outflank-mailman (input) for mailman id 44537;
 Fri, 04 Dec 2020 11:30:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl9IH-00039u-Sj
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:30:33 +0000
Received: from mail-wr1-f68.google.com (unknown [209.85.221.68])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b36151b5-6781-440b-847a-84e55dd49fbc;
 Fri, 04 Dec 2020 11:30:30 +0000 (UTC)
Received: by mail-wr1-f68.google.com with SMTP id k14so4998849wrn.1
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 03:30:30 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id u66sm2934724wmg.2.2020.12.04.03.30.29
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 03:30:29 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b36151b5-6781-440b-847a-84e55dd49fbc
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=PXJbjXPUsLZaYt42bpyelUKSIbJgB8MCV61pbqdpLc0=;
        b=gorn7Z3uCWeDU1RMzjERXtlV8lLNDUJEsRr4ClFAqyqfjBu4dHNgti1iDfRJPwbXhj
         E2SUjQU05NI22ZAGst8Fk885QJQRwXSY04IuqLGRad25M70T2uxibX9Hb9G+n3FdEplz
         LscbLsxnF2dGWYr1q4xNLw/SGZKOEPPEHADaXtxmH0mUMqId6V4RHXxDSd/HcnkwCV9x
         6Mu3zof/AitL3FJF1btieZaImUhm38yye+DhYx81U8SjTOnEjStQEcPdpymMOgiNWoOC
         7S36Pzgs+XZrieqKobRqe7oKWAL3DB+uJMDRQSNxcVETcd88lQyuOSlVpMM3dorRCJ6e
         YyPg==
X-Gm-Message-State: AOAM5331d559XmS2c8E+bMHUlNh17kmEkJFXL2gGZdztIyLkxHHVMgzf
	Aj88VoyTXxwSbgttT84JAvE=
X-Google-Smtp-Source: ABdhPJyp9t9NHyfnFD8PIVs56Ay7wcjFOw7F18JEFfHoG20/IYHbUpS1kmg0J5x9GwJm3oZzmbpcQg==
X-Received: by 2002:adf:f88c:: with SMTP id u12mr4478624wrp.209.1607081430045;
        Fri, 04 Dec 2020 03:30:30 -0800 (PST)
Date: Fri, 4 Dec 2020 11:30:28 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v5 15/23] docs/man: improve documentation of
 PCI_SPEC_STRING...
Message-ID: <20201204113028.h4s4hhgyis6u4tyn@liuwe-devbox-debian-v2>
References: <20201203142534.4017-1-paul@xen.org>
 <20201203142534.4017-16-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201203142534.4017-16-paul@xen.org>
User-Agent: NeoMutt/20180716

On Thu, Dec 03, 2020 at 02:25:26PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> ... and prepare for adding support for non-positional parsing of 'bdf' and
> 'vslot' in a subsequent patch.
> 
> Also document 'BDF' as a first-class parameter type and fix the documentation
> to state that the default value of 'rdm_policy' is actually 'strict', not
> 'relaxed', as can be seen in libxl__device_pci_setdefault().
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:31:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:31:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44543.79821 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9In-0003UU-6k; Fri, 04 Dec 2020 11:31:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44543.79821; Fri, 04 Dec 2020 11:31:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9In-0003UN-3D; Fri, 04 Dec 2020 11:31:05 +0000
Received: by outflank-mailman (input) for mailman id 44543;
 Fri, 04 Dec 2020 11:31:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl9Il-0003UC-MS
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:31:03 +0000
Received: from mail-wr1-f65.google.com (unknown [209.85.221.65])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 85c6d7ec-483b-4ea9-a860-f9284980260f;
 Fri, 04 Dec 2020 11:31:03 +0000 (UTC)
Received: by mail-wr1-f65.google.com with SMTP id s8so4977167wrw.10
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 03:31:03 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id w3sm2841035wma.3.2020.12.04.03.31.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 03:31:01 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 85c6d7ec-483b-4ea9-a860-f9284980260f
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=lWeenlqsnvb4pGse1Eiubd2rBtxCxMc/5md6CbaBnjA=;
        b=p1u2+FCp1Z9U8iZqWQt8EHgjxQR2WXOCijwiFeEoQaILtiObZ4bcCSi20iSnDE2QCb
         oNECEB4gMrpD3lNcZs2VKH1s/oecOhDpJhBdKM+iQKb7g4tPy8WfsttxwtRNoqj1s3CU
         WFOQzunmEgw2MHltu25fdS/Ci3mMyeMwPND7crLqZbfTABbA3lzmoiYkKv0MeQfxByG4
         U5AtMzkQNUmENTId+IOluCSij9LeV3nLujngkfW00ADT18WhCfiWLED1sc+CtgDLNg8c
         Jm9EXoqHVZcSRM60Bns4Hf9ZyeKw4qqC4B9nCp8g4OUmOT4KtoDrZ9eWWnd18us3xNIB
         xBvw==
X-Gm-Message-State: AOAM530pirr0HP66Bni8fLsYWgsxyZF05hNKft58YdqtQRY1VAeLPsv8
	xsVBxUra9bTr9Y48ai+N0ZfOR//wzgI=
X-Google-Smtp-Source: ABdhPJy5OmwDqy3QvLFo4vYsvhyA8PXQLUFWHN7Njl1fRcBY92i6uVz+RIeltM+NUf2tg4pJ4tfk5g==
X-Received: by 2002:adf:e551:: with SMTP id z17mr4385806wrm.374.1607081462284;
        Fri, 04 Dec 2020 03:31:02 -0800 (PST)
Date: Fri, 4 Dec 2020 11:31:00 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v5 16/23] docs/man: fix xl(1) documentation for 'pci'
 operations
Message-ID: <20201204113100.v6dhjzvqnrj4njxn@liuwe-devbox-debian-v2>
References: <20201203142534.4017-1-paul@xen.org>
 <20201203142534.4017-17-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201203142534.4017-17-paul@xen.org>
User-Agent: NeoMutt/20180716

On Thu, Dec 03, 2020 at 02:25:27PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> Currently the documentation completely fails to mention the existence of
> PCI_SPEC_STRING. This patch tidies things up, specifically clarifying that
> 'pci-assignable-add/remove' take <BDF> arguments whereas 'pci-attach/detach'
> take <PCI_SPEC_STRING> arguments (which will be enforced in a subsequent
> patch).
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:32:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:32:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44549.79833 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9Jh-0003dl-Gk; Fri, 04 Dec 2020 11:32:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44549.79833; Fri, 04 Dec 2020 11:32:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9Jh-0003de-D1; Fri, 04 Dec 2020 11:32:01 +0000
Received: by outflank-mailman (input) for mailman id 44549;
 Fri, 04 Dec 2020 11:32:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl9Jf-0003dV-UC
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:31:59 +0000
Received: from mail-wm1-f65.google.com (unknown [209.85.128.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d660ac91-ff95-4495-b952-28d566274875;
 Fri, 04 Dec 2020 11:31:59 +0000 (UTC)
Received: by mail-wm1-f65.google.com with SMTP id a3so6701247wmb.5
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 03:31:59 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id a13sm2500624wrm.39.2020.12.04.03.31.57
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 03:31:57 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d660ac91-ff95-4495-b952-28d566274875
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=2dGeClOo6CkgVFL97zGExp3XBN7NVsuRMwITjZfQOFU=;
        b=gznEApqmxxl2E5Jx+gz7umrEFcwAup4oE2Y46jZyyGuY1HpoVmk2h1dmgQpw8nSN7Z
         MryaTMF7sAMNzcBcuT7zCNaKEmxKphqsOHV+nLvKgyvmEA4o/c3/t6klPvT5KmNSN2XF
         eXOx76JQE475/z6FYGni/nwILQ84WDRu/RelMOrMJ5M1aD8VS1uynV+aDGjMGkt6o8T+
         iSYBrbpiNsCtHKXMKJxpGyCwYtiDiTFJCFGwWopP7IuLLNknJxu9Zq1FtjEhZxQM+xZy
         MB6qTvJUvioUux1H3b65H5oSlgyvZlbzsDQ4P7uaZwrOt5Dyi4ih7hStAsKC5wwAGWR3
         DxjQ==
X-Gm-Message-State: AOAM530j/VR8Mc3i4qZSRCyJofYDN+O6U230p2QgdjoWwMPh2CHN1kEy
	uxfF/cYQx4x5MkxztvsefNk=
X-Google-Smtp-Source: ABdhPJyijUxXhCWeU2bbKH6t0CGkQef+sxJegilKGYz0fnoLuXeWUNtv2PjVMKXeP8FzG9hwq8KfPQ==
X-Received: by 2002:a1c:2394:: with SMTP id j142mr3810422wmj.42.1607081518387;
        Fri, 04 Dec 2020 03:31:58 -0800 (PST)
Date: Fri, 4 Dec 2020 11:31:56 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Nick Rosbrook <rosbrookn@ainfosec.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v5 17/23] libxl: introduce 'libxl_pci_bdf' in the idl...
Message-ID: <20201204113156.xckbfbujcky2mohi@liuwe-devbox-debian-v2>
References: <20201203142534.4017-1-paul@xen.org>
 <20201203142534.4017-18-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201203142534.4017-18-paul@xen.org>
User-Agent: NeoMutt/20180716

On Thu, Dec 03, 2020 at 02:25:28PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> ... and use in 'libxl_device_pci'
> 
> This patch is preparatory work for restricting the type passed to functions
> that only require BDF information, rather than passing a 'libxl_device_pci'
> structure which is only partially filled. In this patch only the minimal
> mechanical changes necessary to deal with the structural changes are made.
> Subsequent patches will adjust the code to make better use of the new type.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Acked-by: Wei Liu <wl@xen.org>

George and Nick, please comment on the go bindings.


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:34:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:34:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44564.79853 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9Lt-0003sa-1s; Fri, 04 Dec 2020 11:34:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44564.79853; Fri, 04 Dec 2020 11:34:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9Ls-0003sT-UR; Fri, 04 Dec 2020 11:34:16 +0000
Received: by outflank-mailman (input) for mailman id 44564;
 Fri, 04 Dec 2020 11:34:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl9Ls-0003sN-9C
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:34:16 +0000
Received: from mail-wm1-f65.google.com (unknown [209.85.128.65])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6c26a945-301f-4e69-b624-a910d7422574;
 Fri, 04 Dec 2020 11:34:15 +0000 (UTC)
Received: by mail-wm1-f65.google.com with SMTP id h21so6746392wmb.2
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 03:34:15 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id g78sm2725202wme.33.2020.12.04.03.34.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 03:34:14 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c26a945-301f-4e69-b624-a910d7422574
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=TAwsEZkVakWz2W8UPc1EIi7gA6bN724STdrEp/845nE=;
        b=KfBz8rRlSfVNfT8th08YIZBu8F1CH7V18mPLu65fgMSapcS4C3De0uk78IfuIRbpPi
         cGEv1jQmxpJ8eDThrk6JiNuqkwkqHwKdQ6lxE0sn04hkKwkPjrDKpZ0jj6PRnvqsSBt9
         sHnaMGlu8tKxH2yYo8sgMRkYRJ5tY8GLTt9u/K6eB9AeI5NgOSn2GZFy07Ds9dHuuahJ
         FW+LbEOWpAf6g92Sh7qKeHsYmFRqG5PaT1lxvSKbqGc/KiXFsC7ULNcJlyFUFcp3aicr
         GzhXs1qtVtZJJ+nRAE1d8pZ/FqyQ4kViCqzn9u0kxAtfJt/2lPeJERapDjTDlHaK7aaE
         x09g==
X-Gm-Message-State: AOAM533kwW3075fbxKUgqqiWJSap7KLQFv//otC8E8xFfyF3pjZ2BPYq
	f8Z2T0x+5SyHI+02+XRNy9o=
X-Google-Smtp-Source: ABdhPJyorZyReJqJvksu+ExEcJqObTA07fyhtu+2Z7T6CoqTZ7UFaphKvCuLyBl56JPvLzeaZ0CD3A==
X-Received: by 2002:a7b:cb84:: with SMTP id m4mr3658343wmi.157.1607081654841;
        Fri, 04 Dec 2020 03:34:14 -0800 (PST)
Date: Fri, 4 Dec 2020 11:34:13 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v5 18/23] libxlu: introduce xlu_pci_parse_spec_string()
Message-ID: <20201204113413.iebyf2ldzq6rfpsg@liuwe-devbox-debian-v2>
References: <20201203142534.4017-1-paul@xen.org>
 <20201203142534.4017-19-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201203142534.4017-19-paul@xen.org>
User-Agent: NeoMutt/20180716

On Thu, Dec 03, 2020 at 02:25:29PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> This patch largely re-writes the code to parse a PCI_SPEC_STRING and enters
> it via the newly introduced function. The new parser also deals with 'bdf'
> and 'vslot' as non-positional parameters, as per the documentation in
> xl-pci-configuration(5).
> 
> The existing xlu_pci_parse_bdf() function remains, but now strictly parses
> BDF values. Some existing callers of xlu_pci_parse_bdf() are
> modified to call xlu_pci_parse_spec_string() as per the documentation in xl(1).
> 
> NOTE: Usage text in xl_cmdtable.c and error messages are also modified
>       appropriately.
> 
> Fixes: d25cc3ec93eb ("libxl: workaround gcc 10.2 maybe-uninitialized warning")

I don't think d25cc3ec93eb is buggy, so this tag is not needed.

> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:35:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:35:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44572.79865 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9NE-00040y-CV; Fri, 04 Dec 2020 11:35:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44572.79865; Fri, 04 Dec 2020 11:35:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9NE-00040r-9Q; Fri, 04 Dec 2020 11:35:40 +0000
Received: by outflank-mailman (input) for mailman id 44572;
 Fri, 04 Dec 2020 11:35:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl9NC-00040j-OO
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:35:38 +0000
Received: from mail-wr1-f66.google.com (unknown [209.85.221.66])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ce496fa9-c339-473d-a8c4-ddf966bcc699;
 Fri, 04 Dec 2020 11:35:37 +0000 (UTC)
Received: by mail-wr1-f66.google.com with SMTP id i2so4998002wrs.4
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 03:35:37 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id f17sm2721488wmh.10.2020.12.04.03.35.36
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 03:35:36 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce496fa9-c339-473d-a8c4-ddf966bcc699
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=B8gPDM+qTyBAwXnMOvGRSEv4u7A/+ghbt9IuqV0i4Sc=;
        b=VXbnrMKyBMGsw49mIoVLtRoytAMaXgmgHEI7E6XDf8617xZNSZ83iES/tGFWYR0eLP
         WQm15K0jg3BxNADfRUQSbBnuWc1erL719uurWpXlLWA1UVj/zCh+EaGg5Fh6GaiB8JgQ
         S+N5IfcPobgqq6G3saQE4QycWEAl/wTjOhVx9t57ldwf5QgUkVC+r33MeB3mNN/2dK6N
         u2gVKXuRn+q9y39kKV6EnFtFlgfa/S/uw/EJz+OlgPbcUvOfhhf8YOYLLuLvwXPwvfpD
         r1rwhhh/gEsm2b4cf8GNfnkXMBb8wfFdZiJFfB7z2dfo7yxsg+eWTMHCBbeqb32SBvdg
         gOrA==
X-Gm-Message-State: AOAM533U1tTDJ4zMOD4+ySTOR4lCxrs6okzTHObWXcA8Jn0dCUFlsJZh
	6Xz8SBvWUWxvJcY7R91sRx4=
X-Google-Smtp-Source: ABdhPJy9ETK3U4N14nx2DDEPe670AotD347l51w1iM7RG9Rxh1AdmR2mR7Dd7Y/eGVcKlvUJEB59jQ==
X-Received: by 2002:a05:6000:104b:: with SMTP id c11mr4401985wrx.329.1607081737135;
        Fri, 04 Dec 2020 03:35:37 -0800 (PST)
Date: Fri, 4 Dec 2020 11:35:35 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	David Scott <dave@recoil.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v5 19/23] libxl: modify
 libxl_device_pci_assignable_add/remove/list/list_free()...
Message-ID: <20201204113535.kogsoqqvfykbg32x@liuwe-devbox-debian-v2>
References: <20201203142534.4017-1-paul@xen.org>
 <20201203142534.4017-20-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201203142534.4017-20-paul@xen.org>
User-Agent: NeoMutt/20180716

On Thu, Dec 03, 2020 at 02:25:30PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> ... to use 'libxl_pci_bdf' rather than 'libxl_device_pci'.
> 
> This patch modifies the API and callers accordingly. It also modifies
> several internal functions in libxl_pci.c that support the API to also use
> 'libxl_pci_bdf'.
> 
> NOTE: The OCaml bindings are adjusted to contain the interface change. It
>       should therefore not affect compatibility with OCaml-based utilities.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> Acked-by: Christian Lindig <christian.lindig@citrix.com>

> ---
> Cc: Ian Jackson <iwj@xenproject.org>
> Cc: Wei Liu <wl@xen.org>
> Cc: David Scott <dave@recoil.org>
> Cc: Anthony PERARD <anthony.perard@citrix.com>
> ---
>  tools/include/libxl.h                |  15 +-
>  tools/libs/light/libxl_pci.c         | 213 +++++++++++++++------------
>  tools/ocaml/libs/xl/xenlight_stubs.c |  15 +-
>  tools/xl/xl_pci.c                    |  32 ++--
>  4 files changed, 156 insertions(+), 119 deletions(-)
> 
> diff --git a/tools/include/libxl.h b/tools/include/libxl.h
> index 5edacccbd1da..5703fdf367c5 100644
> --- a/tools/include/libxl.h
> +++ b/tools/include/libxl.h
> @@ -469,6 +469,13 @@
>   */
>  #define LIBXL_HAVE_PCI_BDF 1
>  
> +/*
> + * LIBXL_HAVE_PCI_ASSIGNABLE_BDF indicates that the
> + * libxl_device_pci_assignable_add/remove/list/list_free() functions all
> + * use the 'libxl_pci_bdf' type rather than 'libxl_device_pci' type.
> + */
> +#define LIBXL_HAVE_PCI_ASSIGNABLE_BDF 1
> +
>  /*
>   * libxl ABI compatibility
>   *
> @@ -2378,10 +2385,10 @@ int libxl_device_events_handler(libxl_ctx *ctx,
>   * added or is not bound, the functions will emit a warning but return
>   * SUCCESS.
>   */
> -int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
> -int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
> -libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
> -void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num);
> +int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf, int rebind);
> +int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf, int rebind);
> +libxl_pci_bdf *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
> +void libxl_device_pci_assignable_list_free(libxl_pci_bdf *list, int num);

Given these APIs are visible to external callers, you will need to
provide fallbacks for the old APIs.

Wei.


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:36:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:36:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44574.79881 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9NX-00048L-Tp; Fri, 04 Dec 2020 11:35:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44574.79881; Fri, 04 Dec 2020 11:35:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9NX-00048E-PK; Fri, 04 Dec 2020 11:35:59 +0000
Received: by outflank-mailman (input) for mailman id 44574;
 Fri, 04 Dec 2020 11:35:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl9NV-00047s-Si
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:35:57 +0000
Received: from mail-wr1-f66.google.com (unknown [209.85.221.66])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3b4588be-fad6-49df-9508-e6c1386420b1;
 Fri, 04 Dec 2020 11:35:57 +0000 (UTC)
Received: by mail-wr1-f66.google.com with SMTP id t4so4971587wrr.12
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 03:35:57 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id q25sm3271443wmq.37.2020.12.04.03.35.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 03:35:55 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3b4588be-fad6-49df-9508-e6c1386420b1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=ErI/XEwoAHpbtqywJLNc96olsP+VGo4+KTZeTNi37/I=;
        b=BykguFccJD+Ix6ZacyBYdLm4bs1mJpGm7SbHv+jhtHC4Bc01B2DwXYBg+0v0MqXekE
         dlUXWaWJqK5rtTS3bWif83CnB5fW6bQ1g5JySTyTc59nncL/fiOOHZY24ui/P94rq3c7
         g3/A1cn3cDFHO/19lzY+3Sg7rMwNTaXDKA9sQNNuQwLtlDjxSUxez18hFWAKRRDl2u7Z
         K9PsbqwUCQ8IO7Hl3cXm/E3TQZi1ICXtT3HBdTZUOk9/sa/ky2S7oyuW5rVbHGOeUb+Y
         daVr466kfocnGOswnE+hIrmk6LdDksmhPAPhJ5jkfqBDaR0d4eHTDfouBvUHycmRIXrq
         0M2w==
X-Gm-Message-State: AOAM530vFSGBoxxG0ovCMAsDvmmxMgqPkWvhMaIk4Yzzr6MivO15DCo3
	4OyFicglFEQhwo/PqykB4Rg=
X-Google-Smtp-Source: ABdhPJxt9hcEZNab/glVmUevPhScA7E/ULg6C7oqAzcTSnmj7idDU4Kk+anU9yshLB7+iZxYcdX8bw==
X-Received: by 2002:a5d:504f:: with SMTP id h15mr4433372wrt.402.1607081756381;
        Fri, 04 Dec 2020 03:35:56 -0800 (PST)
Date: Fri, 4 Dec 2020 11:35:54 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v5 20/23] docs/man: modify xl(1) in preparation for
 naming of assignable devices
Message-ID: <20201204113554.xpjmhizlxxzv62i2@liuwe-devbox-debian-v2>
References: <20201203142534.4017-1-paul@xen.org>
 <20201203142534.4017-21-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201203142534.4017-21-paul@xen.org>
User-Agent: NeoMutt/20180716

On Thu, Dec 03, 2020 at 02:25:31PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> A subsequent patch will introduce code to allow a name to be specified to
> 'xl pci-assignable-add' such that the assignable device may be referred to
> by that name in subsequent operations.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:37:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:37:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44585.79893 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9PM-0004Iq-9I; Fri, 04 Dec 2020 11:37:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44585.79893; Fri, 04 Dec 2020 11:37:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9PM-0004Ij-5y; Fri, 04 Dec 2020 11:37:52 +0000
Received: by outflank-mailman (input) for mailman id 44585;
 Fri, 04 Dec 2020 11:37:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl9PK-0004Ie-Uh
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:37:50 +0000
Received: from mail-wm1-f67.google.com (unknown [209.85.128.67])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0096801f-e8dd-4030-959d-9f7713db16a7;
 Fri, 04 Dec 2020 11:37:50 +0000 (UTC)
Received: by mail-wm1-f67.google.com with SMTP id a3so6716665wmb.5
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 03:37:50 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id a12sm3178214wrq.58.2020.12.04.03.37.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 03:37:49 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0096801f-e8dd-4030-959d-9f7713db16a7
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=cFZURopZcAIzrqA+B6SryNkbfjCB4Ovz/Lugkvxi3a0=;
        b=JDi4hwy6jo8B4cgtxjNTwC7/q8Pjdbb2PJrWqSxTYC9tDU9NDtPbLBws9SedazHWPY
         csMDZl4RWSqDKI6mulYmtyDZKYGnPXzN/fVB80IoPQUivpMYkG4PrHZ8soHjGHes8MHT
         nrkg8hr40Peg5s0H8YQpin1sHTAmuk3loVyuaa80Rqa+yBitPT8sxlCdsai/cXnGAO86
         FnFJpiHe0xJql2ndp3F3g+ZXkqp7+A+aOpIsitYzvH+y6rpbDlmktIfS1q/rTt8ypRqd
         OcCVbAg7k/Fb2MIW2NEReVqmLYJ6oGQ2jG2NvzoJS7Q/UWyFBeYYF/yBr+BWuduG5Fqm
         czHQ==
X-Gm-Message-State: AOAM532z5OPLbjmkCe3nv1HtEaMMqqgwXoJwSW+tXXI6BCukPNpXUhyW
	aUi7AlUQp1VQwGpHY0aEe3Y=
X-Google-Smtp-Source: ABdhPJydnh21yTEIz/RPzMRWYbNnc813Cqojl5BPWFOtDOIQI5tcfuSrHN/uUYCoiWoMAuyiKlX7UA==
X-Received: by 2002:a1c:bd43:: with SMTP id n64mr3723316wmf.169.1607081869594;
        Fri, 04 Dec 2020 03:37:49 -0800 (PST)
Date: Fri, 4 Dec 2020 11:37:47 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	David Scott <dave@recoil.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v5 21/23] xl / libxl: support naming of assignable devices
Message-ID: <20201204113747.figsnjdsvifdgezl@liuwe-devbox-debian-v2>
References: <20201203142534.4017-1-paul@xen.org>
 <20201203142534.4017-22-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201203142534.4017-22-paul@xen.org>
User-Agent: NeoMutt/20180716

On Thu, Dec 03, 2020 at 02:25:32PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> This patch modifies libxl_device_pci_assignable_add() to take an optional
> 'name' argument, which (if supplied) is saved into xenstore and can hence be
> used to refer to the now-assignable BDF in subsequent operations. To
> facilitate this, a new libxl_device_pci_assignable_name2bdf() function is
> added.
> 
> The xl code is modified to allow a name to be specified in the
> 'pci-assignable-add' operation and also allow an option to be specified to
> 'pci-assignable-list' requesting that names be displayed. The latter is
> facilitated by a new libxl_device_pci_assignable_bdf2name() function. Finally,
> xl 'pci-assignable-remove' is modified so that either a name or a BDF can be
> supplied. The supplied 'identifier' is first assumed to be a name, but if
> libxl_device_pci_assignable_name2bdf() fails to find a matching BDF the
> identifier itself will be parsed as a BDF. Names may only include printable
> characters and may not include whitespace.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> Acked-by: Christian Lindig <christian.lindig@citrix.com>
> ---
> Cc: Ian Jackson <iwj@xenproject.org>
> Cc: Wei Liu <wl@xen.org>
> Cc: David Scott <dave@recoil.org>
> Cc: Anthony PERARD <anthony.perard@citrix.com>
> 
> v4:
>  - Fix uninitialized return value in libxl_device_pci_assignable_name2bdf()
>    that was discovered in CI
> ---
>  tools/include/libxl.h                | 19 +++++-
>  tools/libs/light/libxl_pci.c         | 86 ++++++++++++++++++++++++++--
>  tools/ocaml/libs/xl/xenlight_stubs.c |  3 +-
>  tools/xl/xl_cmdtable.c               | 12 ++--
>  tools/xl/xl_pci.c                    | 80 ++++++++++++++++++--------
>  5 files changed, 164 insertions(+), 36 deletions(-)
> 
> diff --git a/tools/include/libxl.h b/tools/include/libxl.h
> index 5703fdf367c5..4025d3a3d437 100644
> --- a/tools/include/libxl.h
> +++ b/tools/include/libxl.h
> @@ -476,6 +476,14 @@
>   */
>  #define LIBXL_HAVE_PCI_ASSIGNABLE_BDF 1
>  
> +/*
> + * LIBXL_HAVE_PCI_ASSIGNABLE_NAME indicates that the
> + * libxl_device_pci_assignable_add() function takes a 'name' argument
> + * and that the libxl_device_pci_assignable_name2bdf() and
> + * libxl_device_pci_assignable_bdf2name() functions are defined.
> + */
> +#define LIBXL_HAVE_PCI_ASSIGNABLE_NAME 1
> +
>  /*
>   * libxl ABI compatibility
>   *
> @@ -2385,11 +2393,18 @@ int libxl_device_events_handler(libxl_ctx *ctx,
>   * added or is not bound, the functions will emit a warning but return
>   * SUCCESS.
>   */
> -int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf, int rebind);
> -int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf, int rebind);
> +int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
> +                                    const char *name, int rebind);
> +int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
> +                                       int rebind);
>  libxl_pci_bdf *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
>  void libxl_device_pci_assignable_list_free(libxl_pci_bdf *list, int num);
>  
> +libxl_pci_bdf *libxl_device_pci_assignable_name2bdf(libxl_ctx *ctx,
> +                                                    const char *name);
> +char *libxl_device_pci_assignable_bdf2name(libxl_ctx *ctx,
> +                                           libxl_pci_bdf *pcibdf);

Again, these functions require shims to remain backward compatible.

Wei.


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:38:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:38:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44586.79905 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9Pb-0004NJ-LA; Fri, 04 Dec 2020 11:38:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44586.79905; Fri, 04 Dec 2020 11:38:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9Pb-0004N9-Ev; Fri, 04 Dec 2020 11:38:07 +0000
Received: by outflank-mailman (input) for mailman id 44586;
 Fri, 04 Dec 2020 11:38:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl9Pa-0004Mt-As
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:38:06 +0000
Received: from mail-wr1-f65.google.com (unknown [209.85.221.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d8f23af4-1ce7-447c-9a20-7347d57d853b;
 Fri, 04 Dec 2020 11:38:05 +0000 (UTC)
Received: by mail-wr1-f65.google.com with SMTP id 23so5004503wrc.8
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 03:38:05 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id l8sm3031676wmf.35.2020.12.04.03.38.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 03:38:04 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d8f23af4-1ce7-447c-9a20-7347d57d853b
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=vmPUvN8VywZHRkKgOrY9KaFFX4boj2ETYu4jbuWNrYE=;
        b=dJ1mtobzv/8zu+PlNTOtSLv+E0/W8M0oYMbAwBOFB61ftQBgpwEnw59cNSQRrYEQHl
         fWjRazoKbkyw2noJMSPZL6IjmWcIvIhwO1OgJQZR7fMf5+xDynpGhQ2h+n1m3tT5/rn/
         dN8y40K1mkxuelba96P9MlmBN84lkeoBgGIXmktoo0967T2DsvmJxc6n5kE11DjME9oe
         J3c9phwJgELUohUV5O55wzr9v/zXdGpUN8RfpQRsMKD4UM4V3HMKLfDdOLIdv1HUHMaY
         aVSdLUrrsBMGJyAGTPRFM5u/Syp9ci/L/LU3o7AQOgikgIYecT0lTjSZb5fImV93IsQQ
         OMMg==
X-Gm-Message-State: AOAM533wWNmCrdSPoGR6dZNXlxrToamyVsOmBh7VJfAWa9NQ74rGEvJR
	uVPkb8cPYT7wE+MKHggCVuM=
X-Google-Smtp-Source: ABdhPJyRe8U1OPjHMiB/16PKPnvBYXJOHShk+Oqjph1TAHfwZf0E1D86yMkA5JSO9T71n8fx2acibw==
X-Received: by 2002:a5d:67c2:: with SMTP id n2mr4390966wrw.139.1607081884865;
        Fri, 04 Dec 2020 03:38:04 -0800 (PST)
Date: Fri, 4 Dec 2020 11:38:03 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v5 22/23] docs/man: modify xl-pci-configuration(5) to add
 'name' field to PCI_SPEC_STRING
Message-ID: <20201204113803.qikiaxzlkjcwljw6@liuwe-devbox-debian-v2>
References: <20201203142534.4017-1-paul@xen.org>
 <20201203142534.4017-23-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201203142534.4017-23-paul@xen.org>
User-Agent: NeoMutt/20180716

On Thu, Dec 03, 2020 at 02:25:33PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> Since assignable devices can be named, a subsequent patch will support use
> of a PCI_SPEC_STRING containing a 'name' parameter instead of a 'bdf'. In
> this case the name will be used to look up the 'bdf' in the list of assignable
> (or assigned) devices.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:38:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:38:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44593.79917 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9Px-0004Vv-SD; Fri, 04 Dec 2020 11:38:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44593.79917; Fri, 04 Dec 2020 11:38:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9Px-0004Vo-OO; Fri, 04 Dec 2020 11:38:29 +0000
Received: by outflank-mailman (input) for mailman id 44593;
 Fri, 04 Dec 2020 11:38:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl9Pw-0004VX-LM
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:38:28 +0000
Received: from mail-wm1-f66.google.com (unknown [209.85.128.66])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 89266af4-bb93-4676-a4cf-9a2a4e0a3d65;
 Fri, 04 Dec 2020 11:38:28 +0000 (UTC)
Received: by mail-wm1-f66.google.com with SMTP id g25so4532932wmh.1
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 03:38:27 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id p3sm3172013wrs.50.2020.12.04.03.38.26
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 03:38:26 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 89266af4-bb93-4676-a4cf-9a2a4e0a3d65
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=d2Hs/IyxORafj1wNe3j9Btm1enpTpYSVPM2koPpNpn4=;
        b=mLWCPTXqa9bcUSOXw1gqsAGS+zCjOInpxeLRKFQNunO1T//5q6u97Gx545mwrJboPz
         yyiiFabupQn6k4jGUJR2jm8nJBG0zupNrXn2EX5I2R5pSjQTVGP3kQyT7lF4k+PG7moc
         ZGrzV/o5tO4jIsJMksbV5y/V+WYVi1rJTrsJE/fIC/MZJAkAuzxI1NG8drHw4i/BjgHx
         CJI5BKAXLxu8CkyXjgjhdGsxOZ6ohFEM3Qg55XQgRE2jZ3aNm/dz7Sw8dm8uxEK7vFvd
         efaaD9qpOZXfXuRbdhkBEBjEK9JMQ8qMZ79gRSnQg4CR+uSiD9FiyF88VChJkjxJ4RKb
         MD3g==
X-Gm-Message-State: AOAM533wCxVsJgag7EN1OkvLbLCDAVgWinQLb19VdqitrzDYRXXFGvxM
	K1/922EWKzsNrXk1ICdPu6s=
X-Google-Smtp-Source: ABdhPJysJ4asnN+EzJIw2QbY3D9tzR3VxVZskACpBfEg+wqHueFQa1VeNlQRGo9t+uBEhvQWYwwjpg==
X-Received: by 2002:a7b:c00b:: with SMTP id c11mr3810600wmb.122.1607081907283;
        Fri, 04 Dec 2020 03:38:27 -0800 (PST)
Date: Fri, 4 Dec 2020 11:38:25 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v5 23/23] xl / libxl: support 'xl pci-attach/detach' by
 name
Message-ID: <20201204113825.32mzkysfp7d6frmz@liuwe-devbox-debian-v2>
References: <20201203142534.4017-1-paul@xen.org>
 <20201203142534.4017-24-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201203142534.4017-24-paul@xen.org>
User-Agent: NeoMutt/20180716

On Thu, Dec 03, 2020 at 02:25:34PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> This patch adds a 'name' field into the idl for 'libxl_device_pci' and
> libxlu_pci_parse_spec_string() is modified to parse the new 'name'
> parameter of PCI_SPEC_STRING detailed in the updated documentation in
> xl-pci-configuration(5).
> 
> If the 'name' field is non-NULL then both libxl_device_pci_add() and
> libxl_device_pci_remove() will use it to look up the device BDF in
> the list of assignable devices.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:41:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:41:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44606.79928 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9SR-0005b6-8j; Fri, 04 Dec 2020 11:41:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44606.79928; Fri, 04 Dec 2020 11:41:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9SR-0005az-5n; Fri, 04 Dec 2020 11:41:03 +0000
Received: by outflank-mailman (input) for mailman id 44606;
 Fri, 04 Dec 2020 11:41:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl9SP-0005au-Aa
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:41:01 +0000
Received: from mail-wr1-f43.google.com (unknown [209.85.221.43])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id eef73eb3-f2d6-4b9c-a7c5-f9b187ec382e;
 Fri, 04 Dec 2020 11:41:00 +0000 (UTC)
Received: by mail-wr1-f43.google.com with SMTP id s8so5005884wrw.10
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 03:41:00 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id t188sm2789666wmf.9.2020.12.04.03.40.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 03:40:58 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eef73eb3-f2d6-4b9c-a7c5-f9b187ec382e
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=hzURqLkexr9QsxLSKy90LiXlEh0rMjx1e/YuOmeVHZE=;
        b=sYqTo93p8UDY9X3qvJ6gYQH0jK3EbhSRmnHqX0bZmUBYKeKYNSvS8h5mM8Zm8EXhzd
         shoRRzJ03ayeILDKxduX6RSFz9E73NThgwQ2R2wuxkE07oFZpB1bx1fxYnT8I/7Frdjn
         +iTcAzUSNioxoJx0y43rBJhN+hy7/EVtKen8i9VaZ2A1Ot1ROxtAPBa46iFw4SBT5cQg
         w1DmAZlztBFzdVzDF/A28TDS6Psni2Nkd6qgjidd5FtbtwOEnbM7NXb3T2KgMfg3LgdP
         fKzFZEuhWAT136E+T2jGcY9GPMmQ1JGbcAb8urRj5O3yGRyIpHy63JoZ1xF3ND4XvDMC
         6u3g==
X-Gm-Message-State: AOAM5313yRKATLWjsfOFq3RUjKRQozAo8p74XrfcXdkswORaHPbzqKXw
	CNI4b4DbBhO5O0KoRHISmQo=
X-Google-Smtp-Source: ABdhPJzDp+Dqyvxb1je0hUMJvNBegKZWZJ4miZvAVCjSI5t39FScVoboUwqfFNyVk9IJGIUoBd6k/A==
X-Received: by 2002:a5d:634d:: with SMTP id b13mr4552273wrw.310.1607082059502;
        Fri, 04 Dec 2020 03:40:59 -0800 (PST)
Date: Fri, 4 Dec 2020 11:40:57 +0000
From: Wei Liu <wl@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Subject: Re: [PATCH v2 0/2] x86/IRQ: a little bit of tidying
Message-ID: <20201204114057.lyywlve3gibc7vwn@liuwe-devbox-debian-v2>
References: <935d31ab-cb65-02d7-a624-d5e047316389@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <935d31ab-cb65-02d7-a624-d5e047316389@suse.com>
User-Agent: NeoMutt/20180716

On Mon, Nov 23, 2020 at 04:00:50PM +0100, Jan Beulich wrote:
> 1: drop three unused variables
> 2: reduce casting involved in guest action retrieval

Reviewed-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:43:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:43:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44612.79940 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9V8-0005lt-NZ; Fri, 04 Dec 2020 11:43:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44612.79940; Fri, 04 Dec 2020 11:43:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9V8-0005lm-Js; Fri, 04 Dec 2020 11:43:50 +0000
Received: by outflank-mailman (input) for mailman id 44612;
 Fri, 04 Dec 2020 11:43:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl9V6-0005lh-MZ
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:43:48 +0000
Received: from mail-wr1-f67.google.com (unknown [209.85.221.67])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fdf0a977-74ff-4814-b36b-1c9a2445ed16;
 Fri, 04 Dec 2020 11:43:48 +0000 (UTC)
Received: by mail-wr1-f67.google.com with SMTP id e7so5018385wrv.6
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 03:43:48 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id c4sm3543772wrw.72.2020.12.04.03.43.46
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 03:43:46 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fdf0a977-74ff-4814-b36b-1c9a2445ed16
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=TBsEIpScqNtA8NjsGdqkjDDgChffWoqT9fq2HeODi5k=;
        b=kn2KzRGzsz5ZMAyXoveBrk8/T4OW29FZljT6qkydVOlXcKozZN0PeHAu6GAF3zracE
         ZRlRm19qkeBqM2dEtU9O8yyR6J0o6JMeQ3kruM/EJMl39yT/QCSu2kuMcsUEqTSdo/tC
         FXxXQyUM0F7Z/7J1fcTkSUHs1QDErCbS/d9eV3SczxGCAaY82XEcb3LEqP/QjH3ykov1
         igASPLYi/e2ynuYM0S7wfNbJPLgdnrf9YhlPGgMEgFUWKxFtDVq+ggyCdK1eRcpuemw9
         pffl2UOZKOTEqQ3CKujcbEiI30sGwIelBB6/FGaj2BB6gvBPFokrghTUqfXNq9uT3Ec3
         VVpg==
X-Gm-Message-State: AOAM531XHlZ+oZ/stYwAvOd5gFx22Ifhh2Jd+rZ30mPTnte0+SFMIBUe
	jJmA+Oeyof0huZXmcM9IZyI=
X-Google-Smtp-Source: ABdhPJzRSI29a+WSBAXVgoVxCU2YR7XeXSWUlbj9yt2Ezi+u8MjcugCTWZvJ9q9eSggeFI/JErVKKw==
X-Received: by 2002:a5d:5741:: with SMTP id q1mr4404691wrw.160.1607082227354;
        Fri, 04 Dec 2020 03:43:47 -0800 (PST)
Date: Fri, 4 Dec 2020 11:43:45 +0000
From: Wei Liu <wl@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
	Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [PATCH v3 0/8] xen: beginnings of moving library-like code into
 an archive
Message-ID: <20201204114345.4hbw3gkpzqnb4uf3@liuwe-devbox-debian-v2>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
User-Agent: NeoMutt/20180716

On Mon, Nov 23, 2020 at 04:16:02PM +0100, Jan Beulich wrote:
[...]
> 
> 1: xen: fix build when $(obj-y) consists of just blanks
> 2: lib: collect library files in an archive
> 3: lib: move list sorting code
> 4: lib: move parse_size_and_unit()
> 5: lib: move init_constructors()
> 6: lib: move rbtree code
> 7: lib: move bsearch code
> 8: lib: move sort code

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:45:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:45:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44618.79953 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9X4-0005uC-2D; Fri, 04 Dec 2020 11:45:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44618.79953; Fri, 04 Dec 2020 11:45:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9X3-0005u5-VQ; Fri, 04 Dec 2020 11:45:49 +0000
Received: by outflank-mailman (input) for mailman id 44618;
 Fri, 04 Dec 2020 11:45:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kl9X2-0005tz-OG
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:45:48 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kl9X1-0000E7-F8; Fri, 04 Dec 2020 11:45:47 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kl9X1-0004YX-7L; Fri, 04 Dec 2020 11:45:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=VSVyP/m4Qvvvrf4ff4WBQc0vdMocntDXlzKiksuiZt8=; b=Oqrj7U5JnQuc2KU0/m57wuye5b
	Hig6yCoOQooCh99smVPWhmF1KkHagIvhulfE+fl2oA1zoZEpCMQ5ddOLKxRIJEYai1PMKaovjArB9
	lhL7avY0wTsH9Y6Fwpsx+JFcyzwss+Rks6GsyuwOl7GWS/9rEDPGSlRuPE5lqjo5H8T4=;
Subject: Re: [PATCH v5 1/4] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_evtchn_fifo, ...
To: Jan Beulich <jbeulich@suse.com>, paul@xen.org
Cc: 'Paul Durrant' <pdurrant@amazon.com>,
 'Eslam Elnikety' <elnikety@amazon.com>, 'Ian Jackson' <iwj@xenproject.org>,
 'Wei Liu' <wl@xen.org>, 'Anthony PERARD' <anthony.perard@citrix.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Christian Lindig' <christian.lindig@citrix.com>,
 'David Scott' <dave@recoil.org>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201203124159.3688-1-paul@xen.org>
 <20201203124159.3688-2-paul@xen.org>
 <fea91a65-1d7c-cd46-81a2-9a6bcb690ed1@suse.com>
 <00ee01d6c98b$507af1c0$f170d540$@xen.org>
 <8a4a2027-0df3-aee2-537a-3d2814b329ec@suse.com>
 <00f601d6c996$ce3908d0$6aab1a70$@xen.org>
 <946280c7-c7f7-c760-c0d3-db91e6cde68a@suse.com>
 <011201d6ca16$ae14ac50$0a3e04f0$@xen.org>
 <4fb9fb4c-5849-25f1-ff72-ba3a046d3fd8@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <df1df316-9512-7b0c-fde1-aa4fc60ac70b@xen.org>
Date: Fri, 4 Dec 2020 11:45:44 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <4fb9fb4c-5849-25f1-ff72-ba3a046d3fd8@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

I haven't looked at the series yet. Just adding some thoughts on why one 
would want such an option.

On 04/12/2020 09:43, Jan Beulich wrote:
> On 04.12.2020 09:22, Paul Durrant wrote:
>>> From: Jan Beulich <jbeulich@suse.com>
>>> Sent: 04 December 2020 07:53
>>>
>>> On 03.12.2020 18:07, Paul Durrant wrote:
>>>>> From: Jan Beulich <jbeulich@suse.com>
>>>>> Sent: 03 December 2020 15:57
>>>>>
>>>>> ... this sounds to me more like workarounds for buggy guests than
>>>>> functionality the hypervisor _needs_ to have. (I can appreciate
>>>>> the specific case here for the specific scenario you provide as
>>>>> an exception.)
>>>>
>>>> If we want to have a hypervisor that can be used in a cloud environment
>>>> then Xen absolutely needs this capability.
>>>
>>> As per above you can conclude that I'm still struggling to see the
>>> "why" part here.
>>>
>>
>> Imagine you are a customer. You boot your OS and everything is just fine... you run your workload and all is good. You then shut down your VM and re-start it. Now it starts to crash. Who are you going to blame? You did nothing to your OS or application s/w, so you are going to blame the cloud provider of course.
> 
> That's a situation OSes are in all the time. Buggy applications may
> stop working on newer OS versions. It's still the application that's
> in need of updating then. I guess OSes may choose to work around
> some very common applications' bugs, but I'd then wonder on what
> basis "very common" gets established. I dislike the underlying
> asymmetry / inconsistency (if not unfairness) of such a model,
> despite seeing that there may be business reasons leading people to
> think they want something like this.

The discussion seems to be geared towards buggy guests so far. However, 
this is not the only reason one may want to avoid exposing some 
features:

    1) In light of recent security issues (such as XSA-343), a knob to 
disable FIFO would be quite beneficial for vendors that don't need the 
feature.

    2) Fleet management. You may have a fleet with multiple versions of 
Xen. You don't want your customer to start relying on features that may 
not be available on all hosts; otherwise it complicates guest placement.

FAOD, I am sure there are other features that may need to be disabled, 
but we have to start somewhere :).

> 
>> Now imagine you are the cloud provider, running Xen. What you did was start to upgrade your hosts from an older version of Xen to a newer version of Xen, to pick up various bug fixes and make sure you are running a version that is within the security support envelope. You identify that your customer's problem is a bug in their OS that was latent on the old version of the hypervisor but is now manifesting on the new one because it has buggy support for a hypercall that was added between the two versions. How are you going to fix this issue, and get your customer up and running again? Of course you'd like your customer to upgrade their OS, but they can't even boot it to do that. You really need a solution that can restore the old VM environment, at least temporarily, for that customer.
> 
> Boot the guest on a not-yet-upgraded host again, to update its kernel?

You are making the assumption that the customer would have the choice to 
target a specific version of Xen. This may be undesirable for a cloud 
provider, as suddenly your customer may want to stick to the old version 
of Xen.

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:46:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:46:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44623.79965 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9Xp-000621-HD; Fri, 04 Dec 2020 11:46:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44623.79965; Fri, 04 Dec 2020 11:46:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9Xp-00061u-Cq; Fri, 04 Dec 2020 11:46:37 +0000
Received: by outflank-mailman (input) for mailman id 44623;
 Fri, 04 Dec 2020 11:46:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=neyb=FI=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kl9Xo-00061o-40
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:46:36 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.22])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6a03de6a-97f9-4014-a523-00ca7aa6cad0;
 Fri, 04 Dec 2020 11:46:35 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.4 DYNA|AUTH)
 with ESMTPSA id 60a649wB4BkVehO
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Fri, 4 Dec 2020 12:46:31 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6a03de6a-97f9-4014-a523-00ca7aa6cad0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1607082394;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=UYWOVuXTVQU+lMoeG4W1DhXH+0cEFiffhkNGmz61YEE=;
	b=h7nWQg/qDUSGZoQKzXI7JMwk/Sl9DShffIBfNTHPLHW8SPWgXYKWgMuCzrWjPq/GAB
	2zRhyOOCg8Bc25nhynuxo29ynBqTJFy3mOxpLr+l0bHBJI0jwxdfBllmrhAUlDhz32lw
	JuOPORIwzkeVALLD0d4+jtzs30j9Ihx4PVun0VXVh+0up6A12UP2pIe4IZTG9P7S8BeR
	jnQkiLs7sWSFGvgydkmVeLd0M2cEH7bestitoPVG4IOrQoxcdUbmo2v8NjaqZ9i5mzLe
	kfeLEfmmTcjXt2zdWKFeEsGBT9OluQEclLLQF0MAlMUopMjcNeNQ2dj++i9irhUUg5Wf
	EjJA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDXdoX8l8pYAcz5OTW+uX"
X-RZG-CLASS-ID: mo00
Date: Fri, 4 Dec 2020 12:46:20 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Wei Liu <wl@xen.org>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>
Subject: Re: [PATCH v2] tools/hotplug: allow tuning of xenwatchdogd
 arguments
Message-ID: <20201204124620.5ed9f6a0.olaf@aepfle.de>
In-Reply-To: <20201204105315.avponbzbotrabf4c@liuwe-devbox-debian-v2>
References: <20201203063436.4503-1-olaf@aepfle.de>
	<20201204105315.avponbzbotrabf4c@liuwe-devbox-debian-v2>
X-Mailer: Claws Mail 2020.08.19 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 boundary="Sig_/hcbqnFuPMNexsyyn7XCds9E"; protocol="application/pgp-signature"

--Sig_/hcbqnFuPMNexsyyn7XCds9E
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Fri, 4 Dec 2020 10:53:15 +0000,
Wei Liu <wl@xen.org> wrote:

> Did you accidentally swap 15 and 30 in XENWATCHDOGD_ARGS above?

This is indeed a mistake. Thanks for spotting.

Olaf

--Sig_/hcbqnFuPMNexsyyn7XCds9E
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAl/KIYwACgkQ86SN7mm1
DoAZsw/9FfBYCrm22IjEpdE3ab0+/8tkE4bnYI4GvLmtGFLggR+b/8mRByCUlJZe
7fnxZQ4pJZs1xD8O76h137Uka5A9vTKNZj03bKIZ/2ZbmAyWuwwbqV7JBLoZUnd1
OgPM/rzNk0l2TSTa51hnoN4k9FGbaVzfYotb2ujWCs1uuJZEV5JSg7TFSaCgiXtk
xjtuIKVFtVcZOFtAkTxpSUpFc94EVy+gJhYq8d7ceZb0XxQMvg5INLPCwh5utmnJ
KZ/B9Ihf+9cpQYRKMqpWoQWB8tHMaNPNHuBV5KGKe8dcv4IoeeLm8TNS6Yw8bBZg
DGGDgnmIrX9VjBVK0mS7JOQ8rJ0/NJHuHDmAfJi2nL5GWvngqmx5FnR6WnCInsmG
HqbLziqfyQRTPOjXpdWwOLUe5LX/yf3AvY3AoB27xnUZGebVat3VS9tvTCsIPKZZ
xbsz9EO+ASlKBNJJp4dPSK+q6JoNntB0KnRrY46zw++GNo5O9Tb7tVnjioLjoaJI
ZosPDbLKHX8GJTPlrReCu1wh2TrbwkAUGAegesf6xecAXfpFyt+MO1ZVXTZupwpL
ezD16/Oji0JcJJqq1N4RvWGV+M9pF89beKSKhMC+qvhvBfpjPyieSzQzOz1M2Sb6
1EyQVLA7tHIpn+FUOEbAEkW8S47rteHSoDo+nrxu1xp9AYGBm3k=
=+xAV
-----END PGP SIGNATURE-----

--Sig_/hcbqnFuPMNexsyyn7XCds9E--


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:47:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:47:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44629.79977 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9Yw-0006AE-RF; Fri, 04 Dec 2020 11:47:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44629.79977; Fri, 04 Dec 2020 11:47:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9Yw-0006A7-O1; Fri, 04 Dec 2020 11:47:46 +0000
Received: by outflank-mailman (input) for mailman id 44629;
 Fri, 04 Dec 2020 11:47:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kl9Yv-00069y-4R; Fri, 04 Dec 2020 11:47:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kl9Yv-0000J2-1w; Fri, 04 Dec 2020 11:47:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kl9Yu-0000x8-Qf; Fri, 04 Dec 2020 11:47:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kl9Yu-0001aE-Q9; Fri, 04 Dec 2020 11:47:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7Y3ovhtYrYFlXqK4yDQ6vI7wsedMpQ6vWGNKB46xuuU=; b=uSBSAFQWVN1sWFmZIsZD+kl56M
	XOlH+Zj49XbbsAmGhnCoQrtlA4coGahfdTahN5vyNdq38wCspO6SB9D3jKCX33dw97AzdmKHAfYXY
	t5G7GxMEBmGqjY2vk8jGiI4AIHqYsSW6qZRFi5LWJYNJ2dddxRR4HhoB37C0KvoggrWE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157194-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157194: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=31e8a47b62a4f3dc45d8f9bbf3529a188e867a87
X-Osstest-Versions-That:
    ovmf=6af76adbbfccd31f4f8753fb0ddbbd9f4372f572
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 04 Dec 2020 11:47:44 +0000

flight 157194 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157194/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 31e8a47b62a4f3dc45d8f9bbf3529a188e867a87
baseline version:
 ovmf                 6af76adbbfccd31f4f8753fb0ddbbd9f4372f572

Last test of basis   157191  2020-12-04 01:39:45 Z    0 days
Testing same since   157194  2020-12-04 04:16:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Vitaly Cheptsov <cheptsov@ispras.ru>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   6af76adbbf..31e8a47b62  31e8a47b62a4f3dc45d8f9bbf3529a188e867a87 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:48:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:48:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44633.79992 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9ZE-0006Fa-4X; Fri, 04 Dec 2020 11:48:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44633.79992; Fri, 04 Dec 2020 11:48:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9ZE-0006FS-1D; Fri, 04 Dec 2020 11:48:04 +0000
Received: by outflank-mailman (input) for mailman id 44633;
 Fri, 04 Dec 2020 11:48:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c9tS=FI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kl9ZB-0006F4-Ud
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:48:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c2b01667-3aed-46b2-8eaa-97d939b7021a;
 Fri, 04 Dec 2020 11:48:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F2AA0AC9A;
 Fri,  4 Dec 2020 11:47:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c2b01667-3aed-46b2-8eaa-97d939b7021a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607082480; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kuaCdU3/SANic9kCA6C0IYdinVYmrdvcgidim9WiC+k=;
	b=ftHmDNIh0IEJwtUqUBdkGBYqBo/cuGVp1u9MNJqkHPOEUW8tHvcfxGgZc/TTGn4iiTLKnR
	ocoCjEUwupFHsCj12focqfbjBPf40dmn8Ecmk4k2pEbvxyT32VXDmqh91RnoUgP0Ig6D3R
	Frzh8SwJElq4jaN1oxBUpp+ILfCONMM=
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <lengyelt@ainfosec.com>,
 Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d821c715-966a-b48b-a877-c5dac36822f0@suse.com>
 <17c90493-b438-fbc1-ca10-3bc4d89c4e5e@xen.org>
 <7a768bcd-80c1-d193-8796-7fb6720fa22a@suse.com>
 <1a8250f5-ea49-ac3a-e992-be7ec40deba9@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <269f9a2d-7a8d-cba2-801f-6d3b12f9455f@suse.com>
Date: Fri, 4 Dec 2020 12:48:00 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <1a8250f5-ea49-ac3a-e992-be7ec40deba9@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04.12.2020 12:28, Julien Grall wrote:
> Hi Jan,
> 
> On 03/12/2020 10:09, Jan Beulich wrote:
>> On 02.12.2020 22:10, Julien Grall wrote:
>>> On 23/11/2020 13:30, Jan Beulich wrote:
>>>> While there don't look to be any problems with this right now, the lock
>>>> order implications from holding the lock can be very difficult to follow
>>>> (and may be easy to violate unknowingly). The present callbacks don't
>>>> (and no such callback should) have any need for the lock to be held.
>>>>
>>>> However, vm_event_disable() frees the structures used by respective
>>>> callbacks and isn't otherwise synchronized with invocations of these
>>>> callbacks, so maintain a count of in-progress calls, for evtchn_close()
>>>> to wait to drop to zero before freeing the port (and dropping the lock).
>>>
>>> AFAICT, this callback is not the only place where the synchronization is
>>> missing in the VM event code.
>>>
>>> For instance, vm_event_put_request() can also race against
>>> vm_event_disable().
>>>
>>> So shouldn't we handle this issue properly in VM event?
>>
>> I suppose that's a question to the VM event folks rather than me?
> 
> Yes. From my understanding of Tamas's e-mail, they are relying on the 
> monitoring software to do the right thing.
> 
> I will refrain from commenting on this approach. However, given the race 
> is much wider than the event channel, I would recommend not adding more 
> code in the event channel to deal with such a problem.
> 
> Instead, this should be fixed in the VM event code when someone has time 
> to harden the subsystem.

Are you effectively saying I should now undo the addition of the
refcounting, which was added in response to feedback from you?
Or else, what exactly am I to take from your reply?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:51:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:51:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44649.80006 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9ck-0007NQ-Nc; Fri, 04 Dec 2020 11:51:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44649.80006; Fri, 04 Dec 2020 11:51:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9ck-0007NJ-Kl; Fri, 04 Dec 2020 11:51:42 +0000
Received: by outflank-mailman (input) for mailman id 44649;
 Fri, 04 Dec 2020 11:51:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kl9ck-0007ND-3I
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:51:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kl9ci-0000Pl-BI; Fri, 04 Dec 2020 11:51:40 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kl9ch-0005D2-UC; Fri, 04 Dec 2020 11:51:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=1er+zurBPy0ur3esjX4CCRQUfXDV4JwhhAL7uyRCjd8=; b=TmFOgOSJa+vVyCjtOSPeEIdZW3
	zoeuzomOOZ6Aq4NniXE7Ri6UoMfIs00afZSjR0dJVhzz/8Kkv1mDSUaaIXBqKq98/3IN1nEJ27G6E
	6j5tmkuFrDQjeYukmGnRz8oJ4zBepBdJ/WIox4zkPBSMdF1H5iY+ZJ/3UX9KzG4NA4Yo=;
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <lengyelt@ainfosec.com>,
 Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d821c715-966a-b48b-a877-c5dac36822f0@suse.com>
 <17c90493-b438-fbc1-ca10-3bc4d89c4e5e@xen.org>
 <7a768bcd-80c1-d193-8796-7fb6720fa22a@suse.com>
 <1a8250f5-ea49-ac3a-e992-be7ec40deba9@xen.org>
 <269f9a2d-7a8d-cba2-801f-6d3b12f9455f@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <02a2b77f-27a9-b1b6-1acf-1f136cffdf30@xen.org>
Date: Fri, 4 Dec 2020 11:51:37 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <269f9a2d-7a8d-cba2-801f-6d3b12f9455f@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 04/12/2020 11:48, Jan Beulich wrote:
> On 04.12.2020 12:28, Julien Grall wrote:
>> Hi Jan,
>>
>> On 03/12/2020 10:09, Jan Beulich wrote:
>>> On 02.12.2020 22:10, Julien Grall wrote:
>>>> On 23/11/2020 13:30, Jan Beulich wrote:
>>>>> While there don't look to be any problems with this right now, the lock
>>>>> order implications from holding the lock can be very difficult to follow
>>>>> (and may be easy to violate unknowingly). The present callbacks don't
>>>>> (and no such callback should) have any need for the lock to be held.
>>>>>
>>>>> However, vm_event_disable() frees the structures used by respective
>>>>> callbacks and isn't otherwise synchronized with invocations of these
>>>>> callbacks, so maintain a count of in-progress calls, for evtchn_close()
>>>>> to wait to drop to zero before freeing the port (and dropping the lock).
>>>>
>>>> AFAICT, this callback is not the only place where the synchronization is
>>>> missing in the VM event code.
>>>>
>>>> For instance, vm_event_put_request() can also race against
>>>> vm_event_disable().
>>>>
>>>> So shouldn't we handle this issue properly in VM event?
>>>
>>> I suppose that's a question to the VM event folks rather than me?
>>
>> Yes. From my understanding of Tamas's e-mail, they are relying on the
>> monitoring software to do the right thing.
>>
>> I will refrain from commenting on this approach. However, given the race
>> is much wider than the event channel, I would recommend not adding more
>> code in the event channel to deal with such a problem.
>>
>> Instead, this should be fixed in the VM event code when someone has time
>> to harden the subsystem.
> 
> Are you effectively saying I should now undo the addition of the
> refcounting, which was added in response to feedback from you?

Please point out where I made the request to use the refcounting...

I pointed out there was an issue with the VM event code. This was later 
analysed as a wider issue. The VM event folks don't seem to be very 
concerned about the race, so I don't see a reason to try to fix it in the 
event channel code.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:52:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:52:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44653.80018 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9dO-0007Sz-1o; Fri, 04 Dec 2020 11:52:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44653.80018; Fri, 04 Dec 2020 11:52:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9dN-0007Ss-Us; Fri, 04 Dec 2020 11:52:21 +0000
Received: by outflank-mailman (input) for mailman id 44653;
 Fri, 04 Dec 2020 11:52:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WR05=FI=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kl9dN-0007Sn-6r
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:52:21 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a5ec47ad-7111-4bfe-afd6-84f222be67eb;
 Fri, 04 Dec 2020 11:52:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a5ec47ad-7111-4bfe-afd6-84f222be67eb
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1607082739;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=r/PZflQfumI5dXunGT+rF7XvlJCkEcCqYC/LE8KWSj4=;
  b=cFSwSv+1KjW0vGTVFFy5rP747rf23xrswOBc9v9m6myZH9RKuqoZ7OnY
   pXvuwEkYbebr9aE0GHF8qfRyKJiNK63p/wToIS71thWMkD7ib40yM5X00
   iDGtPKLLteQde0zYkES/JyIUTzIgLMiU6Kgw99QK/EWzu8uUxt9iK/NHn
   Q=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 6BR2qO85Vfv3y4MYpndoGx4/bpc/s5AaYSl91/7OC8eYAvxgXxz36Up5AvJclPKZ5HPYP4m0xx
 mq42h7uBlh7HzDm8/4kU/zDUqK9PslfYuisyDPz/q69RAe24ZL1jG0Giq/HeqZHm4XJiXauR8U
 bYmoZbiiitT3bChsE0hAVxVeo/s5QsVENpR0csv7+8kvH8Wt0SOq+0m4LyGeRAQW35ZFFudzP0
 +rANQ3yDpOQch3Y7Y5QB+YLwbN5FcUhKJHBshZcdw3PzSR8o2KxHRt1xLhv6thxQtFkksrfr1I
 0Qo=
X-SBRS: 5.1
X-MesageID: 32872585
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,392,1599537600"; 
   d="scan'208";a="32872585"
Subject: Re: [PATCH v5 1/4] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_evtchn_fifo, ...
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>,
	<paul@xen.org>
CC: 'Paul Durrant' <pdurrant@amazon.com>, 'Eslam Elnikety'
	<elnikety@amazon.com>, 'Ian Jackson' <iwj@xenproject.org>, 'Wei Liu'
	<wl@xen.org>, 'Anthony PERARD' <anthony.perard@citrix.com>, 'George Dunlap'
	<george.dunlap@citrix.com>, 'Stefano Stabellini' <sstabellini@kernel.org>,
	'Christian Lindig' <christian.lindig@citrix.com>, 'David Scott'
	<dave@recoil.org>, 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
	=?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
	<xen-devel@lists.xenproject.org>
References: <20201203124159.3688-1-paul@xen.org>
 <20201203124159.3688-2-paul@xen.org>
 <fea91a65-1d7c-cd46-81a2-9a6bcb690ed1@suse.com>
 <00ee01d6c98b$507af1c0$f170d540$@xen.org>
 <8a4a2027-0df3-aee2-537a-3d2814b329ec@suse.com>
 <00f601d6c996$ce3908d0$6aab1a70$@xen.org>
 <946280c7-c7f7-c760-c0d3-db91e6cde68a@suse.com>
 <011201d6ca16$ae14ac50$0a3e04f0$@xen.org>
 <4fb9fb4c-5849-25f1-ff72-ba3a046d3fd8@suse.com>
 <df1df316-9512-7b0c-fde1-aa4fc60ac70b@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <5de9f051-4071-4e09-528c-c1fb8345dc25@citrix.com>
Date: Fri, 4 Dec 2020 11:52:11 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <df1df316-9512-7b0c-fde1-aa4fc60ac70b@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 04/12/2020 11:45, Julien Grall wrote:
> Hi,
>
> I haven't looked at the series yet. Just adding some thoughts on why
> one would want such an option.
>
> On 04/12/2020 09:43, Jan Beulich wrote:
>> On 04.12.2020 09:22, Paul Durrant wrote:
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>> Sent: 04 December 2020 07:53
>>>>
>>>> On 03.12.2020 18:07, Paul Durrant wrote:
>>>>>> From: Jan Beulich <jbeulich@suse.com>
>>>>>> Sent: 03 December 2020 15:57
>>>>>>
>>>>>> ... this sounds to me more like workarounds for buggy guests than
>>>>>> functionality the hypervisor _needs_ to have. (I can appreciate
>>>>>> the specific case here for the specific scenario you provide as
>>>>>> an exception.)
>>>>>
>>>>> If we want to have a hypervisor that can be used in a cloud
>>>>> environment
>>>>> then Xen absolutely needs this capability.
>>>>
>>>> As per above you can conclude that I'm still struggling to see the
>>>> "why" part here.
>>>>
>>>
>>> Imagine you are a customer. You boot your OS and everything is just
>>> fine... you run your workload and all is good. You then shut down
>>> your VM and re-start it. Now it starts to crash. Who are you going
>>> to blame? You did nothing to your OS or application s/w, so you are
>>> going to blame the cloud provider of course.
>>
>> That's a situation OSes are in all the time. Buggy applications may
>> stop working on newer OS versions. It's still the application that's
>> in need of updating then. I guess OSes may choose to work around
>> some very common applications' bugs, but I'd then wonder on what
>> basis "very common" gets established. I dislike the underlying
>> asymmetry / inconsistency (if not unfairness) of such a model,
>> despite seeing that there may be business reasons leading people to
>> think they want something like this.
>
> The discussion seems to be geared towards buggy guests so far. However,
> this is not the only reason that one may want to avoid exposing some
> features:
>
>    1) From the recent security issues (such as XSA-343), a knob to
> disable FIFO would be quite beneficial for vendors that don't need
> the feature.
>
>    2) Fleet management purpose. You may have a fleet with multiple
> versions of Xen. You don't want your customer to start relying on
> features that may not be available on all the hosts; otherwise it
> complicates the guest placement.
>
> FAOD, I am sure there might be other features that need to be
> disabled. But we have to start somewhere :).

Absolutely top of the list, importance-wise, is so we can test different
configurations without needing to rebuild the hypervisor (and, to a
lesser extent, without having to reboot).

It is a mistake that events/grants/etc were ever available unilaterally
in HVM guests.  This is definitely a step in the right direction (but I
thought it would be too rude to ask Paul to make all of those CDF flags
at once).

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:54:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:54:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44660.80033 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9fa-0007fD-K5; Fri, 04 Dec 2020 11:54:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44660.80033; Fri, 04 Dec 2020 11:54:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9fa-0007f6-Gi; Fri, 04 Dec 2020 11:54:38 +0000
Received: by outflank-mailman (input) for mailman id 44660;
 Fri, 04 Dec 2020 11:54:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c9tS=FI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kl9fZ-0007f0-Ai
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:54:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3e1d7cb1-8e79-44c0-846d-7b058ff3a335;
 Fri, 04 Dec 2020 11:54:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 96536AC9A;
 Fri,  4 Dec 2020 11:54:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e1d7cb1-8e79-44c0-846d-7b058ff3a335
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607082875; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=CmUpg1t4kusv1XlB3ff6JKDGCbzKYipbSB7YbiJmr3I=;
	b=kOK0ctoMgloaHLwnKH6KRgNBfppKiotH9QmUDM7eIGJ1hQmZHHe10vfyvn9We+BLIaKzZ2
	NKNis2M8Lp7HITqLfsj5fZfULDH0U5NcUqm0nVLGNhE/U5Zil13Oar8cwejiHb997DtH/s
	qI0ZxilquskuqmxZcWpzEaSb7SNt0q8=
Subject: Re: [PATCH v2 15/17] xen/cpupool: add cpupool directories
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Dario Faggioli <dfaggioli@suse.com>,
 xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-16-jgross@suse.com>
 <e14fa4a4-3a3e-ceac-af38-8561baf58aa8@suse.com>
 <72e2300c-6367-5469-d7fd-767dd411dcb8@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <02ac006e-ac50-ef3b-e7f4-587dfadc976c@suse.com>
Date: Fri, 4 Dec 2020 12:54:36 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <72e2300c-6367-5469-d7fd-767dd411dcb8@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 04.12.2020 12:08, Jürgen Groß wrote:
> On 04.12.20 10:10, Jan Beulich wrote:
>> On 01.12.2020 09:21, Juergen Gross wrote:
>>> +static struct hypfs_funcs cpupool_dir_funcs = {
>>
>> Yet another missing const?
> 
> Already fixed.
> 
>>
>>> +    .enter = cpupool_dir_enter,
>>> +    .exit = cpupool_dir_exit,
>>> +    .read = cpupool_dir_read,
>>> +    .write = hypfs_write_deny,
>>> +    .getsize = cpupool_dir_getsize,
>>> +    .findentry = cpupool_dir_findentry,
>>> +};
>>> +
>>> +static HYPFS_VARDIR_INIT(cpupool_dir, "cpupool", &cpupool_dir_funcs);
>>
>> Why VARDIR? This isn't a template, is it? Or does VARDIR really
>> serve multiple purposes?
> 
> Basically it just takes an additional parameter for the function vector.
> Maybe I should rename it to HYPFS_DIR_INIT_FUNC()?

Maybe. Depends on what exactly the VAR is meant to stand for.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 11:55:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 11:55:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44668.80050 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9gG-0007mp-VH; Fri, 04 Dec 2020 11:55:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44668.80050; Fri, 04 Dec 2020 11:55:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9gG-0007mi-Rm; Fri, 04 Dec 2020 11:55:20 +0000
Received: by outflank-mailman (input) for mailman id 44668;
 Fri, 04 Dec 2020 11:55:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OO73=FI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kl9gG-0007mb-2I
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 11:55:20 +0000
Received: from mail-wr1-f45.google.com (unknown [209.85.221.45])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 690f2fef-e13b-49eb-85d6-141d9de4af26;
 Fri, 04 Dec 2020 11:55:18 +0000 (UTC)
Received: by mail-wr1-f45.google.com with SMTP id l1so5034027wrb.9
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 03:55:18 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id b11sm2333581wrs.84.2020.12.04.03.55.17
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 04 Dec 2020 03:55:17 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 690f2fef-e13b-49eb-85d6-141d9de4af26
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=/mCO50+bvu1CrxUAld/ZO7Qod69A9jWuzr0dqqvMZIg=;
        b=nki+doggtYuY5bTKF3+MvMednsyYyU3LxSazNgvYc8F3CAMRnr5yIPR0/3TvzY6HfB
         pmi7JCdQRgXmEcQb22NMd5+QGecDhw/6vdNqQNYg/YDR2v+sSuf//2JCcIAWAto8JCd9
         glDcW77FLLhDA2QMas61ikklEn0LNS2TmDTGnOMhpiYBWfIOwNhfPqri/6ENtZBllmnB
         oUlLQ1H+fBybx05OR/nVCqPB36n0YqJeNO1PsNIWG0e7On4amQj2xXxgADTvXjz7A3Fq
         2fWIyLI5XrsdBDc+wybSNOY0slbJQTlT4qpwahORXAQsK94kjDxm4vk4XojMo7PLCu9Y
         MVTg==
X-Gm-Message-State: AOAM533B4e9O5CS1pFacRHxFi5cImUMC/oY91dpUkewfZxSFqmA0X3kt
	xr+yvnSIlUgZAlBQb4ryn3A=
X-Google-Smtp-Source: ABdhPJygDxH3tOgP5qiByJCR9IrSYtATVTsns0b2wn/wCWhdiJCfzIyQZFlNFNZLYpC4GcGTjBPwSg==
X-Received: by 2002:adf:dec1:: with SMTP id i1mr4421482wrn.129.1607082918135;
        Fri, 04 Dec 2020 03:55:18 -0800 (PST)
Date: Fri, 4 Dec 2020 11:55:16 +0000
From: Wei Liu <wl@xen.org>
To: Olaf Hering <olaf@aepfle.de>
Cc: Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org,
	Ian Jackson <iwj@xenproject.org>
Subject: Re: [PATCH v2] tools/hotplug: allow tuning of xenwatchdogd arguments
Message-ID: <20201204115516.m5p3r25erp2fteg3@liuwe-devbox-debian-v2>
References: <20201203063436.4503-1-olaf@aepfle.de>
 <20201204105315.avponbzbotrabf4c@liuwe-devbox-debian-v2>
 <20201204124620.5ed9f6a0.olaf@aepfle.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201204124620.5ed9f6a0.olaf@aepfle.de>
User-Agent: NeoMutt/20180716

On Fri, Dec 04, 2020 at 12:46:20PM +0100, Olaf Hering wrote:
> Am Fri, 4 Dec 2020 10:53:15 +0000
> schrieb Wei Liu <wl@xen.org>:
> 
> > Did you accidentally swap 15 and 30 in XENWATCHDOGD_ARGS above?
> 
> This is indeed a mistake. Thanks for spotting.

Fixed and pushed. Thanks.

Wei.


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 12:01:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 12:01:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44680.80062 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9ma-0000Yo-3s; Fri, 04 Dec 2020 12:01:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44680.80062; Fri, 04 Dec 2020 12:01:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9ma-0000Yh-0X; Fri, 04 Dec 2020 12:01:52 +0000
Received: by outflank-mailman (input) for mailman id 44680;
 Fri, 04 Dec 2020 12:01:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c9tS=FI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kl9mY-0000YQ-89
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 12:01:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3e10f0a0-943c-42b9-8b6c-8c289679afe3;
 Fri, 04 Dec 2020 12:01:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BBF9EAC9A;
 Fri,  4 Dec 2020 12:01:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e10f0a0-943c-42b9-8b6c-8c289679afe3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607083303; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+iBiIweEhkjMkPbG7wx2QQ/s/PvHP3NOzVlat4SbxsQ=;
	b=kuW9m5yRQOjaIpSzmt5RsCTQ2cI1bueMsnGZzraOXS0++8oKLeLHM/3rir6yUu0dcokIgK
	vQslLO0zF9Vz9j4wndO+JYUyubFwXV+bxdvozrCyDPHxkUzw6iAcdA7EeEN26LuMR5R9y0
	Vs/+eYMj3jHiY8CRaxlRCZJTOdwpTZE=
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <lengyelt@ainfosec.com>,
 Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d821c715-966a-b48b-a877-c5dac36822f0@suse.com>
 <17c90493-b438-fbc1-ca10-3bc4d89c4e5e@xen.org>
 <7a768bcd-80c1-d193-8796-7fb6720fa22a@suse.com>
 <1a8250f5-ea49-ac3a-e992-be7ec40deba9@xen.org>
 <269f9a2d-7a8d-cba2-801f-6d3b12f9455f@suse.com>
 <02a2b77f-27a9-b1b6-1acf-1f136cffdf30@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <48395363-ea47-9139-011e-233d92581a71@suse.com>
Date: Fri, 4 Dec 2020 13:01:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <02a2b77f-27a9-b1b6-1acf-1f136cffdf30@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04.12.2020 12:51, Julien Grall wrote:
> 
> 
> On 04/12/2020 11:48, Jan Beulich wrote:
>> On 04.12.2020 12:28, Julien Grall wrote:
>>> Hi Jan,
>>>
>>> On 03/12/2020 10:09, Jan Beulich wrote:
>>>> On 02.12.2020 22:10, Julien Grall wrote:
>>>>> On 23/11/2020 13:30, Jan Beulich wrote:
>>>>>> While there don't look to be any problems with this right now, the lock
>>>>>> order implications from holding the lock can be very difficult to follow
>>>>>> (and may be easy to violate unknowingly). The present callbacks don't
>>>>>> (and no such callback should) have any need for the lock to be held.
>>>>>>
>>>>>> However, vm_event_disable() frees the structures used by respective
>>>>>> callbacks and isn't otherwise synchronized with invocations of these
>>>>>> callbacks, so maintain a count of in-progress calls, for evtchn_close()
>>>>>> to wait to drop to zero before freeing the port (and dropping the lock).
>>>>>
>>>>> AFAICT, this callback is not the only place where the synchronization is
>>>>> missing in the VM event code.
>>>>>
>>>>> For instance, vm_event_put_request() can also race against
>>>>> vm_event_disable().
>>>>>
>>>>> So shouldn't we handle this issue properly in VM event?
>>>>
>>>> I suppose that's a question to the VM event folks rather than me?
>>>
>>> Yes. From my understanding of Tamas's e-mail, they are relying on the
>>> monitoring software to do the right thing.
>>>
>>> I will refrain from commenting on this approach. However, given the race is
>>> much wider than the event channel, I would recommend not adding more
>>> code in the event channel to deal with such a problem.
>>>
>>> Instead, this should be fixed in the VM event code when someone has time
>>> to harden the subsystem.
>>
>> Are you effectively saying I should now undo the addition of the
>> refcounting, which was added in response to feedback from you?
> 
> Please point out where I made the request to use the refcounting...

You didn't ask for this directly, sure, but ...

> I pointed out there was an issue with the VM event code.

... this has ultimately led to the decision to use refcounting
(iirc there was one alternative that I had proposed, besides
the option of doing nothing).

> This was later 
> analysed as a wider issue. The VM event folks don't seem to be very 
> concerned about the race, so I don't see a reason to try to fix it in the 
> event channel code.

And you won't need the refcount for vpl011 then? I can certainly
drop it again, but it feels odd to go back to an earlier version
under the circumstances ...

Jan


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 12:08:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 12:08:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44689.80074 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9se-0000nO-Q1; Fri, 04 Dec 2020 12:08:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44689.80074; Fri, 04 Dec 2020 12:08:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kl9se-0000nH-MT; Fri, 04 Dec 2020 12:08:08 +0000
Received: by outflank-mailman (input) for mailman id 44689;
 Fri, 04 Dec 2020 12:08:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mnCO=FI=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1kl9sd-0000nC-13
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 12:08:07 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c8149ed8-4223-400a-a05b-17597fb4bd18;
 Fri, 04 Dec 2020 12:08:06 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 8315C67373; Fri,  4 Dec 2020 13:08:03 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c8149ed8-4223-400a-a05b-17597fb4bd18
Date: Fri, 4 Dec 2020 13:08:03 +0100
From: Christoph Hellwig <hch@lst.de>
To: Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?= <marmarek@invisiblethingslab.com>
Cc: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Keith Busch <kbusch@kernel.org>, Jens Axboe <axboe@fb.com>,
	Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>,
	linux-nvme@lists.infradead.org
Subject: Re: GPF on 0xdead000000000100 in nvme_map_data - Linux 5.9.9
Message-ID: <20201204120803.GA20727@lst.de>
References: <20201129035639.GW2532@mail-itl> <20201130164010.GA23494@redsun51.ssa.fujisawa.hgst.com> <20201202000642.GJ201140@mail-itl> <20201204110847.GU201140@mail-itl>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201204110847.GU201140@mail-itl>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Fri, Dec 04, 2020 at 12:08:47PM +0100, Marek Marczykowski-Górecki wrote:
> culprit: 
> 
> commit 9e2369c06c8a181478039258a4598c1ddd2cadfa
> Author: Roger Pau Monne <roger.pau@citrix.com>
> Date:   Tue Sep 1 10:33:26 2020 +0200
> 
>     xen: add helpers to allocate unpopulated memory
>     
> I'm adding relevant people and xen-devel to the thread.
> For completeness, here is the original crash message:

That commit definitely adds a new ZONE_DEVICE user, so it does look
related.  But you are not running on Xen, are you?


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 12:16:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 12:16:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44695.80086 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klA0X-0001mj-LN; Fri, 04 Dec 2020 12:16:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44695.80086; Fri, 04 Dec 2020 12:16:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klA0X-0001mc-I7; Fri, 04 Dec 2020 12:16:17 +0000
Received: by outflank-mailman (input) for mailman id 44695;
 Fri, 04 Dec 2020 12:16:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2IQI=FI=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1klA0W-0001mX-DB
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 12:16:16 +0000
Received: from mail-wm1-x334.google.com (unknown [2a00:1450:4864:20::334])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dbfeecc8-8839-448b-bb13-8e61652bbe5f;
 Fri, 04 Dec 2020 12:16:15 +0000 (UTC)
Received: by mail-wm1-x334.google.com with SMTP id c198so5391282wmd.0
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 04:16:15 -0800 (PST)
Received: from CBGR90WXYV0 (54-240-197-233.amazon.com. [54.240.197.233])
 by smtp.gmail.com with ESMTPSA id m9sm3370986wrx.59.2020.12.04.04.16.13
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 04 Dec 2020 04:16:13 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dbfeecc8-8839-448b-bb13-8e61652bbe5f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=90kwJhyHhrltbS4wA5286NKEF2pbyTs8l5EFDdu2h84=;
        b=bebDAECVO0BLk2wgoFCbNrti9o+36A6/IJRwkGRKbWuf+TMdhZwX5qpnJSLKalIgvd
         QScn2aoqXjRG0VAS6CUYBXkICAwHssY0QYw1WzHin8tH6f2hiwtfnzDgRk7xtbCrrymm
         +RQotVgMWPsX3Q5xX37KcXRiFiOVfOaW9q6d8ba4bMXz7RwLud7SLzWrtqIlGYSAFcQ7
         vFW6gvetDfDIkaHMy6dN/gaKiWzWkvztKpeVcNA0O9sd2ahegGqDWI2cfJwY0/XNi2TV
         XHZSvUdR4piCo8BH2eczjnGYjj7fPDprFHwi3AjDUfU4ke85yhU5zwDwOOqF3mh1UcH0
         TEug==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=90kwJhyHhrltbS4wA5286NKEF2pbyTs8l5EFDdu2h84=;
        b=KfIxteonEwnYFTnGYcup2VJ2v1WGsFDWY/VpH5190zIG/3F31LaBcwEZTdHppFaW0a
         smCBB7/cNsyf5DTyRlANNeAVlbpBp0V6u/gxp3FnKR8tSIJhAMLdqSSLPuyfGwPvDWWI
         ctsoswIT66tHoeR+wxMk58uNocjVj4vs3+nJp8nYw+nsPsztcmVfkYafXbHSwod8N/21
         qcz1oU6Rmso+4DQmtBtHqzoknzI25Pg0IdvCOOV1500qx/lv/F3LP+KlO6Alu6MgHqfP
         o7mxohEsNE1irTcZui8mEaG8o6/MRo7Wlt7GjhNfqO0Oq2qhCXQiMM1Y6Ck901yznxbq
         3ssA==
X-Gm-Message-State: AOAM530TsLv0rHNf1u1ShP4kvHM1oST7nhx/kEDh3+tlgXpT5TsIv/vp
	2NC69HSXBbEmy9eNrvHNaZg=
X-Google-Smtp-Source: ABdhPJyvTW/zZJsFGyOb+MrWg7zMIjyZjOQ7xGKDml7G7npyPLpKJlmRnS/uQCb3T+nPva6LAaieoQ==
X-Received: by 2002:a1c:9695:: with SMTP id y143mr3796419wmd.70.1607084174405;
        Fri, 04 Dec 2020 04:16:14 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Wei Liu'" <wl@xen.org>
Cc: <xen-devel@lists.xenproject.org>,
	"'Paul Durrant'" <pdurrant@amazon.com>,
	"'Christian Lindig'" <christian.lindig@citrix.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'David Scott'" <dave@recoil.org>,
	"'Anthony PERARD'" <anthony.perard@citrix.com>
References: <20201203142534.4017-1-paul@xen.org> <20201203142534.4017-20-paul@xen.org> <20201204113535.kogsoqqvfykbg32x@liuwe-devbox-debian-v2>
In-Reply-To: <20201204113535.kogsoqqvfykbg32x@liuwe-devbox-debian-v2>
Subject: RE: [PATCH v5 19/23] libxl: modify libxl_device_pci_assignable_add/remove/list/list_free()...
Date: Fri, 4 Dec 2020 12:16:12 -0000
Message-ID: <014f01d6ca37$42317d80$c6947880$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQLNnXLs2pPLw4H6lzH6w2mINWFu0wLPHQcXAjbnA/Kn0QLrwA==

> -----Original Message-----
> From: Wei Liu <wl@xen.org>
> Sent: 04 December 2020 11:36
> To: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org; Paul Durrant <pdurrant@amazon.com>; Christian Lindig
> <christian.lindig@citrix.com>; Ian Jackson <iwj@xenproject.org>; Wei Liu <wl@xen.org>; David Scott
> <dave@recoil.org>; Anthony PERARD <anthony.perard@citrix.com>
> Subject: Re: [PATCH v5 19/23] libxl: modify libxl_device_pci_assignable_add/remove/list/list_free()...
> 
> On Thu, Dec 03, 2020 at 02:25:30PM +0000, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > ... to use 'libxl_pci_bdf' rather than 'libxl_device_pci'.
> >
> > This patch modifies the API and callers accordingly. It also modifies
> > several internal functions in libxl_pci.c that support the API to also use
> > 'libxl_pci_bdf'.
> >
> > NOTE: The OCaml bindings are adjusted to accommodate the interface change. It
> >       should therefore not affect compatibility with OCaml-based utilities.
> >
> > Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> > Acked-by: Christian Lindig <christian.lindig@citrix.com>
> 
> > ---
> > Cc: Ian Jackson <iwj@xenproject.org>
> > Cc: Wei Liu <wl@xen.org>
> > Cc: David Scott <dave@recoil.org>
> > Cc: Anthony PERARD <anthony.perard@citrix.com>
> > ---
> >  tools/include/libxl.h                |  15 +-
> >  tools/libs/light/libxl_pci.c         | 213 +++++++++++++++------------
> >  tools/ocaml/libs/xl/xenlight_stubs.c |  15 +-
> >  tools/xl/xl_pci.c                    |  32 ++--
> >  4 files changed, 156 insertions(+), 119 deletions(-)
> >
> > diff --git a/tools/include/libxl.h b/tools/include/libxl.h
> > index 5edacccbd1da..5703fdf367c5 100644
> > --- a/tools/include/libxl.h
> > +++ b/tools/include/libxl.h
> > @@ -469,6 +469,13 @@
> >   */
> >  #define LIBXL_HAVE_PCI_BDF 1
> >
> > +/*
> > + * LIBXL_HAVE_PCI_ASSIGNABLE_BDF indicates that the
> > + * libxl_device_pci_assignable_add/remove/list/list_free() functions all
> > + * use the 'libxl_pci_bdf' type rather than 'libxl_device_pci' type.
> > + */
> > +#define LIBXL_HAVE_PCI_ASSIGNABLE_BDF 1
> > +
> >  /*
> >   * libxl ABI compatibility
> >   *
> > @@ -2378,10 +2385,10 @@ int libxl_device_events_handler(libxl_ctx *ctx,
> >   * added or is not bound, the functions will emit a warning but return
> >   * SUCCESS.
> >   */
> > -int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
> > -int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
> > -libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
> > -void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num);
> > +int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf, int rebind);
> > +int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf, int rebind);
> > +libxl_pci_bdf *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
> > +void libxl_device_pci_assignable_list_free(libxl_pci_bdf *list, int num);
> 
> Given these APIs are visible to external callers, you will need to
> provide fallbacks for the old APIs.
> 

Ok, I'll name the new functions something like 'libxl_pci_bdf_assignable_add/remove' etc. and provide compat shims.

  Paul

> Wei.



From xen-devel-bounces@lists.xenproject.org Fri Dec 04 12:21:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 12:21:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44702.80098 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klA5D-0002iB-8I; Fri, 04 Dec 2020 12:21:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44702.80098; Fri, 04 Dec 2020 12:21:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klA5D-0002i4-4k; Fri, 04 Dec 2020 12:21:07 +0000
Received: by outflank-mailman (input) for mailman id 44702;
 Fri, 04 Dec 2020 12:21:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TCoV=FI=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1klA5B-0002hz-Iu
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 12:21:05 +0000
Received: from wout2-smtp.messagingengine.com (unknown [64.147.123.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 620de4c1-82bf-405d-ae61-514d1f481482;
 Fri, 04 Dec 2020 12:21:03 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.west.internal (Postfix) with ESMTP id 7D48F378;
 Fri,  4 Dec 2020 07:21:01 -0500 (EST)
Received: from mailfrontend2 ([10.202.2.163])
 by compute3.internal (MEProxy); Fri, 04 Dec 2020 07:21:02 -0500
Received: from mail-itl (ip5b40aa59.dynamic.kabel-deutschland.de
 [91.64.170.89])
 by mail.messagingengine.com (Postfix) with ESMTPA id F371E1080059;
 Fri,  4 Dec 2020 07:20:57 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 620de4c1-82bf-405d-ae61-514d1f481482
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; bh=DCUEX7
	nYunDYFqZ66Iy4I5yxcBSzLhwsOOKi4sQNbxE=; b=jkuCdY3OXTe4z6HTv8gsMK
	THIBhv0L3s3BJ4huVkWib64+kuNu8WHLfpjA0c+/LqoJcYj8kqr1ucQrbPZDKIxU
	Bb+2rMa79seYtfrVKbTLx99//zM00bAiaqvffwGjri9ErlN2e0uUv8JqlNXJt/k7
	/TicAtQY+/pASlRra/Rf8gA5ZH/H3D/J20HiVsrEngoZzbby1B8u6VdrH+cfnzvc
	KAY0eN0nUC92VHywjbcRoq8DLlpaPTgcYjGk5HgN2gB+iNBqk8GrZKaj5dwhPIwG
	66iD1UeCqagDxaQ1DurXfXyFPlo6p/LLl9gKXEl3prXBp/+3WtmYxysfT2HF8mkg
	==
X-ME-Sender: <xms:qynKXwSjrZx3CdVygWImNiUIt-H-1Ix-BMYv95GxLB56kORMrG9F6g>
    <xme:qynKX9wcCr6HqR-LcjbyfJZ3cO2prBEEO2IsewewuNErBOe-o5zu7sJ3ODY-r2GNj
    09eSZmCWVMxWg>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedrudeikedgfeekucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvffukfhfgggtuggjsehgtderredttdejnecuhfhrohhmpeforghrvghk
    ucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesihhnvh
    hishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeetveff
    iefghfekhffggeeffffhgeevieektedthfehveeiheeiiedtudegfeetffenucfkpheple
    durdeigedrudejtddrkeelnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehm
    rghilhhfrhhomhepmhgrrhhmrghrvghksehinhhvihhsihgslhgvthhhihhnghhslhgrsg
    drtghomh
X-ME-Proxy: <xmx:qynKX91JbFohpmDykhajFlzyB76jBqEP8gkQAto8mwKVZ-yXuapycA>
    <xmx:qynKX0BtF81arDY7PNYsEhYxzS3CHV_fVY3rVjxGSXqzT5--J3j6tg>
    <xmx:qynKX5jIZNJH9z_bWhR6LSEZco6vjtZbk0bhU2GQuPT10QLMkSLdpA>
    <xmx:rSnKX5ciGKP_FiNkCvScwmc82ipCUF4NANsxLL3CxUikpTnih9X_kQ>
Date: Fri, 4 Dec 2020 13:20:54 +0100
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Roger Pau Monné <roger.pau@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Keith Busch <kbusch@kernel.org>, Jens Axboe <axboe@fb.com>,
	Sagi Grimberg <sagi@grimberg.me>, linux-nvme@lists.infradead.org
Subject: Re: GPF on 0xdead000000000100 in nvme_map_data - Linux 5.9.9
Message-ID: <20201204122054.GV201140@mail-itl>
References: <20201129035639.GW2532@mail-itl>
 <20201130164010.GA23494@redsun51.ssa.fujisawa.hgst.com>
 <20201202000642.GJ201140@mail-itl>
 <20201204110847.GU201140@mail-itl>
 <20201204120803.GA20727@lst.de>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="0vPiNU+Cgjd3emdO"
Content-Disposition: inline
In-Reply-To: <20201204120803.GA20727@lst.de>


--0vPiNU+Cgjd3emdO
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
Subject: Re: GPF on 0xdead000000000100 in nvme_map_data - Linux 5.9.9

On Fri, Dec 04, 2020 at 01:08:03PM +0100, Christoph Hellwig wrote:
> On Fri, Dec 04, 2020 at 12:08:47PM +0100, Marek Marczykowski-Górecki wrote:
> > culprit:
> >
> > commit 9e2369c06c8a181478039258a4598c1ddd2cadfa
> > Author: Roger Pau Monne <roger.pau@citrix.com>
> > Date:   Tue Sep 1 10:33:26 2020 +0200
> >
> >     xen: add helpers to allocate unpopulated memory
> >
> > I'm adding relevant people and xen-devel to the thread.
> > For completeness, here is the original crash message:
>
> That commit definitively adds a new ZONE_DEVICE user, so it does look
> related.  But you are not running on Xen, are you?

I am. It is Xen dom0.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

--0vPiNU+Cgjd3emdO
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAl/KKacACgkQ24/THMrX
1yyQigf/Ummc+PwREciQfLShPK41N+ndzLA5XCtY+J8j4jCaXFLTiLe0qmJ+dbio
TiB1PJ4wUlJ5A6OA2nE69SG+xBCSeBkO/uaMOTNdkqjTMadsTbgZwC8eNOLNqmmp
LHN8g6+Z+nT1A1v5vpNdHb9mFb7kCSNuTlqMMbUSGspw9LDPXGhpHB/r+6AZSS4v
DXXb+ZKK82wI01M7lLDzpBtAYNRI5EOoRNK3RC5DlBLjd0ZYzOAtckZ6TKPSm22K
QMo1CWOuCeE6so2nH3aIcEiHI076PVSy0WqCmjskFb3zxZ5YW4VL7695mg7Vhr/E
HU6X5ein1vG+FyxHWtI1jMNQpGk1Sg==
=6oGg
-----END PGP SIGNATURE-----

--0vPiNU+Cgjd3emdO--


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 12:54:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 12:54:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44728.80162 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klAbc-0006TB-Ih; Fri, 04 Dec 2020 12:54:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44728.80162; Fri, 04 Dec 2020 12:54:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klAbc-0006T4-FC; Fri, 04 Dec 2020 12:54:36 +0000
Received: by outflank-mailman (input) for mailman id 44728;
 Fri, 04 Dec 2020 12:53:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8Ck8=FI=eltan.com=wvervoorn@srs-us1.protection.inumbo.net>)
 id 1klAa5-0006Pg-1G
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 12:53:02 +0000
Received: from mailfilter03-out40.webhostingserver.nl (unknown [195.211.72.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d887c7e6-7688-473d-a48e-6d2a4b047e1e;
 Fri, 04 Dec 2020 12:52:55 +0000 (UTC)
Received: from s219.webhostingserver.nl (unknown [195.211.72.6])
 by mailfilter03.webhostingserver.nl (Halon) with ESMTPSA
 id 9f3e2b36-362f-11eb-bfeb-001a4a4cb9a5;
 Fri, 04 Dec 2020 13:52:52 +0100 (CET)
Received: from 84-85-114-86.fixed.kpn.net ([84.85.114.86]
 helo=Eltsrv03.Eltan.local)
 by s219.webhostingserver.nl with esmtpa (Exim 4.93.0.4)
 (envelope-from <wvervoorn@eltan.com>)
 id 1klAZw-008vUv-HL; Fri, 04 Dec 2020 13:52:52 +0100
Received: from Eltsrv03.Eltan.local (192.168.100.3) by Eltsrv03.Eltan.local
 (192.168.100.3) with Microsoft SMTP Server (TLS) id 15.0.847.32; Fri, 4 Dec
 2020 13:52:17 +0100
Received: from Eltsrv03.Eltan.local ([fe80::24e7:1cc6:a76a:a3a8]) by
 Eltsrv03.Eltan.local ([fe80::24e7:1cc6:a76a:a3a8%12]) with mapi id
 15.00.0847.040; Fri, 4 Dec 2020 13:52:17 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d887c7e6-7688-473d-a48e-6d2a4b047e1e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=eltan.com; s=whs1;
	h=mime-version:content-transfer-encoding:content-type:in-reply-to:references:
	 message-id:date:subject:cc:to:from:from;
	bh=IsgwRvW85E5SBp0vynDgOoFtfedakcxXs8UmfqTBX74=;
	b=N4FhGCZnMbeERmDcdg5d3j7UCPkBM/7rkoattgo1kGQtz6cd/vhlIYBW9eW2gS81F1+KHwd1dvU/n
	 bs5LZB4z9BckKSRvL2hr6W0ByWnBjSg+QnRS/CskldPK0DBZKRZdNo/pUzf89/3wcGBkG3mqaU//b6
	 8EJFzXOoZKA5FxDqd0gdZJ3yjWCL7qEyaR5isWIR9U3PYtF5VIBTjbGQl0yCkruWbYzG41Q1FfUTtQ
	 jPTTcddU/ErkMwScPlK+QzY9Pr9neMlFfAMfidnRAteB2z1OYTm6cmVtELmzVVlhYuKq9r3EKl/4+C
	 STxZwTQbP5valpbaI6HCLAC2t2mLvaQ==
X-Halon-ID: 9f3e2b36-362f-11eb-bfeb-001a4a4cb9a5
From: Wim Vervoorn <wvervoorn@eltan.com>
To: The development of GNU GRUB <grub-devel@gnu.org>, Daniel Kiper
	<daniel.kiper@oracle.com>
CC: Coreboot <coreboot@coreboot.org>, LKML <linux-kernel@vger.kernel.org>,
	"systemd-devel@lists.freedesktop.org" <systemd-devel@lists.freedesktop.org>,
	"trenchboot-devel@googlegroups.com" <trenchboot-devel@googlegroups.com>,
	U-Boot Mailing List <u-boot@lists.denx.de>, "x86@kernel.org"
	<x86@kernel.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "alecb@umass.edu" <alecb@umass.edu>,
	"alexander.burmashev@oracle.com" <alexander.burmashev@oracle.com>,
	"allen.cryptic@gmail.com" <allen.cryptic@gmail.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"ard.biesheuvel@linaro.org" <ard.biesheuvel@linaro.org>, "btrotter@gmail.com"
	<btrotter@gmail.com>, "dpsmith@apertussolutions.com"
	<dpsmith@apertussolutions.com>, "eric.devolder@oracle.com"
	<eric.devolder@oracle.com>, "eric.snowberg@oracle.com"
	<eric.snowberg@oracle.com>, "hpa@zytor.com" <hpa@zytor.com>,
	"hun@n-dimensional.de" <hun@n-dimensional.de>, "javierm@redhat.com"
	<javierm@redhat.com>, "joao.m.martins@oracle.com"
	<joao.m.martins@oracle.com>, "kanth.ghatraju@oracle.com"
	<kanth.ghatraju@oracle.com>, "konrad.wilk@oracle.com"
	<konrad.wilk@oracle.com>, "krystian.hebel@3mdeb.com"
	<krystian.hebel@3mdeb.com>, "leif@nuviainc.com" <leif@nuviainc.com>,
	"lukasz.hawrylko@intel.com" <lukasz.hawrylko@intel.com>,
	"luto@amacapital.net" <luto@amacapital.net>, "michal.zygowski@3mdeb.com"
	<michal.zygowski@3mdeb.com>, "mjg59@google.com" <mjg59@google.com>,
	"mtottenh@akamai.com" <mtottenh@akamai.com>, Vladimir 'phcoder' Serbinenko
	<phcoder@gmail.com>, "piotr.krol@3mdeb.com" <piotr.krol@3mdeb.com>,
	"pjones@redhat.com" <pjones@redhat.com>, Paul Menzel <pmenzel@molgen.mpg.de>,
	"roger.pau@citrix.com" <roger.pau@citrix.com>, "ross.philipson@oracle.com"
	<ross.philipson@oracle.com>, "tyhicks@linux.microsoft.com"
	<tyhicks@linux.microsoft.com>, Heinrich Schuchardt <xypron.glpk@gmx.de>
Subject: RE: [SPECIFICATION RFC] The firmware and bootloader log
 specification
Thread-Topic: [SPECIFICATION RFC] The firmware and bootloader log
 specification
Thread-Index: AQHWyRJEB7rkC+S91ESmRf+zunoIxKnm5awg
Date: Fri, 4 Dec 2020 12:52:17 +0000
Message-ID: <6c1e79be210549949c30253a6cfcafc1@Eltsrv03.Eltan.local>
References: <20201113235242.k6fzlwmwm2xqhqsi@tomti.i.net-space.pl>
 <CAODwPW9dxvMfXY=92pJNGazgYqcynAk72EkzOcmF7JZXhHTwSQ@mail.gmail.com>
In-Reply-To: <CAODwPW9dxvMfXY=92pJNGazgYqcynAk72EkzOcmF7JZXhHTwSQ@mail.gmail.com>
Accept-Language: nl-NL, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-originating-ip: [192.168.100.108]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-Antivirus-Scanner: Clean mail though you should still use an Antivirus

Hello Julius,

I agree with you. Using an existing standard is better than inventing a
new one in this case. I think using the coreboot logging is a good idea
as there is indeed a lot of support already available and it is
lightweight and simple.

Best Regards,
Wim Vervoorn

Eltan B.V.
Ambachtstraat 23
5481 SM Schijndel
The Netherlands

T : +31-(0)73-594 46 64
E : wvervoorn@eltan.com
W : http://www.eltan.com


"This message contains confidential information. Unless you are the
intended recipient of this message, any use of this message is strictly
prohibited. If you have received this message in error, please
immediately notify the sender by telephone +31-(0)73-5944664 or reply
email, and immediately delete this message and all copies."


-----Original Message-----
From: Grub-devel [mailto:grub-devel-bounces+wvervoorn=eltan.com@gnu.org] On Behalf Of Julius Werner
Sent: Thursday, December 3, 2020 2:18 AM
To: Daniel Kiper <daniel.kiper@oracle.com>
Cc: Coreboot <coreboot@coreboot.org>; The development of GRUB 2
 <grub-devel@gnu.org>; LKML <linux-kernel@vger.kernel.org>;
 systemd-devel@lists.freedesktop.org; trenchboot-devel@googlegroups.com;
 U-Boot Mailing List <u-boot@lists.denx.de>; x86@kernel.org;
 xen-devel@lists.xenproject.org; alecb@umass.edu;
 alexander.burmashev@oracle.com; allen.cryptic@gmail.com;
 andrew.cooper3@citrix.com; ard.biesheuvel@linaro.org;
 btrotter@gmail.com; dpsmith@apertussolutions.com;
 eric.devolder@oracle.com; eric.snowberg@oracle.com; hpa@zytor.com;
 hun@n-dimensional.de; javierm@redhat.com; joao.m.martins@oracle.com;
 kanth.ghatraju@oracle.com; konrad.wilk@oracle.com;
 krystian.hebel@3mdeb.com; leif@nuviainc.com;
 lukasz.hawrylko@intel.com; luto@amacapital.net;
 michal.zygowski@3mdeb.com; mjg59@google.com; mtottenh@akamai.com;
 Vladimir 'phcoder' Serbinenko <phcoder@gmail.com>;
 piotr.krol@3mdeb.com; pjones@redhat.com;
 Paul Menzel <pmenzel@molgen.mpg.de>; roger.pau@citrix.com;
 ross.philipson@oracle.com; tyhicks@linux.microsoft.com;
 Heinrich Schuchardt <xypron.glpk@gmx.de>
Subject: Re: [SPECIFICATION RFC] The firmware and bootloader log specification

Standardizing in-memory logging sounds like an interesting idea,
especially with regards to components that can run on top of different
firmware stacks (things like GRUB or TF-A). But I would be a bit wary
of creating a "new standard to rule them all" and then expecting all
projects to switch what they have over to that. I think we all know
https://xkcd.com/927/.

Have you looked around and evaluated existing solutions that already
have some proliferation first? I think it would be much easier to
convince people to standardize on something that already has existing
users and drivers available in multiple projects.

In coreboot we're using a very simple character ring buffer that only
has two 4-byte header fields: total size and cursor (i.e. current
position where you would write the next character). The top 4 bits of
the cursor field are reserved for flags, one of which is the "overflow"
flag that tells you whether the ring-buffer already overflowed or not
(so readers know whether to print the whole ring buffer, or only from
the start to the current cursor). We try to dimension the buffers so
they don't overflow on a single boot, but this approach allows us to
log multiple boots on platforms that can persist memory across reboots,
which sometimes comes in handy.

The disadvantages of that approach compared to your proposal are lack
of some features, like the facilities field (although one can still
just print a tag like "<0>" or "<4>" behind each newline) or timestamps
(coreboot instead has separate timestamp logging). But I think a really
big advantage is size: in early firmware environments before DDR
training, space is often very precious and we struggle to find more
than a couple of kilobytes for the log buffer. If I look at the
structure you proposed, that's already 24 bytes of overhead per
individual message. If we were hooking that up to our normal printk()
facility in coreboot (such that each invocation creates a new message
header), that would probably waste about a third of the whole log
buffer on overhead. I think a complicated, syslog-style logging
structure that stores individual message blocks instead of a continuous
character string isn't really suitable for firmware logging.

FWIW the coreboot console has existing support in Linux
(https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/firmware/google/memconsole-coreboot.c),
SeaBIOS (https://github.com/coreboot/seabios/blob/master/src/fw/coreboot.c#L219),
TF-A (https://github.com/ARM-software/arm-trusted-firmware/blob/master/drivers/coreboot/cbmem_console/aarch64/cbmem_console.S),
GRUB (https://git.savannah.gnu.org/cgit/grub.git/tree/grub-core/term/i386/coreboot/cbmemc.c),
U-Boot (https://github.com/u-boot/u-boot/blob/master/drivers/misc/cbmem_console.c)
and probably a couple of others I'm not aware of. And the code to add
support (especially when only appending) is so simple that it just
takes a couple of lines to implement (binary code size to implement the
format is also always a concern for firmware environments).

On Wed, Nov 18, 2020 at 7:04 AM Heinrich Schuchardt <xypron.glpk@gmx.de> wrote:
>
> On 14.11.20 00:52, Daniel Kiper wrote:
> > Hey,
> >
> > This is next attempt to create firmware and bootloader log specification.
> > Due to high interest among industry it is an extension to the
> > initial bootloader log only specification. It takes into the account
> > most of the comments which I got up until now.
> >
> > The goal is to pass all logs produced by various boot components to
> > the running OS. The OS kernel should expose these logs to the user
> > space and/or process them internally if needed. The content of these
> > logs should be human readable. However, they should also contain the
> > information which allows admins to do e.g. boot time analysis.
> >
> > The log specification should be as much as possible platform
> > agnostic and self contained. The final version of this spec should
> > be merged into existing specifications, e.g. UEFI, ACPI, Multiboot2,
> > or be a standalone spec, e.g. as a part of OASIS Standards. The
> > former seems better but is not perfect too...
> >
> > Here is the description (pseudocode) of the structures which will be
> > used to store the log data.
>
> Hello Daniel,
>
> thanks for your suggestion which makes good sense to me.
>
> Why can't we simply use the message format defined in "The Syslog
> Protocol", https://tools.ietf.org/html/rfc5424?
>
> >
> >   struct bf_log
> >   {
> >     uint32_t   version;
> >     char       producer[64];
> >     uint64_t   flags;
> >     uint64_t   next_bf_log_addr;
> >     uint32_t   next_msg_off;
> >     bf_log_msg msgs[];
>
> As bf_log_msg does not have a defined length msgs[] cannot be an array.
>
> >   }
> >
> >   struct bf_log_msg
> >   {
> >     uint32_t size;
> >     uint64_t ts_nsec;
> >     uint32_t level;
> >     uint32_t facility;
> >     uint32_t msg_off;
> >     char     strings[];
> >   }
> >
> > The members of struct bf_log:
> >   - version: the firmware and bootloader log format version number, 1 for now,
> >   - producer: the producer/firmware/bootloader/... type; the length
> >     allows ASCII UUID storage if somebody needs that functionality,
> >   - flags: it can be used to store information about log state, e.g.
> >     it was truncated or not (does it make sense to have an information
> >     about the number of lost messages?),
> >   - next_bf_log_addr: address of next bf_log struct; none if zero (I think
> >     newer spec versions should not change anything in first 5 bf_log members;
> >     this way older log parsers will be able to traverse/copy all logs regardless
> >     of version used in one log or another),
> >   - next_msg_off: the offset, in bytes, from the beginning of the bf_log struct,
> >     of the next byte after the last log message in the msgs[]; i.e. the offset
> >     of the next available log message slot; it is equal to the total size of
> >     the log buffer including the bf_log struct,
>
> Why would you need an offset to first unused byte?
>
> We possibly have multiple producers of messages:
>
> - TF-A
> - U-Boot
> - iPXE
> - GRUB
>
> What we need is the offset to the next struct bf_log.
>
> >   - msgs: the array of log messages,
> >   - should we add CRC or hash or signatures here?
> >
> > The members of struct bf_log_msg:
> >   - size: total size of bf_log_msg struct,
> >   - ts_nsec: timestamp expressed in nanoseconds starting from 0,
>
> Would each message producer start from 0?
>
> Shouldn't we use the time from the hardware RTC if it is available via
> boot service GetTime()?
>
> >   - level: similar to syslog meaning; can be used to differentiate normal messages
> >     from debug messages; the exact interpretation depends on the current producer
> >     type specified in the bf_log.producer,
> >   - facility: similar to syslog meaning; can be used to differentiate the sources of
> >     the messages, e.g. message produced by networking module; the exact interpretation
> >     depends on the current producer type specified in the bf_log.producer,
> >   - msg_off: the log message offset in strings[],
>
> What is this field good for? Why don't you start the string at
> strings[0]?
> What would be useful would be the offset to the next bf_log_msg.
>
> >   - strings[0]: the beginning of log message type, similar to the facility member but
> >     NUL terminated string instead of integer; this will be used by, e.g., the GRUB2
> >     for messages printed using grub_dprintf(),
> >   - strings[msg_off]: the beginning of log message, NUL terminated string.
>
>
> Why strings in plural? Do you want to put multiple strings into
> 'strings'? What identifies the last string?
>
>
> >
> > Note: The producers are free to use/ignore any given set of level, facility and/or
> >       log type members. Though the usage of these members has to be clearly defined.
> >       Ignored integer members should be set to 0. Ignored log message type should
> >       contain an empty NUL terminated string. The log message is mandatory but can
> >       be an empty NUL terminated string.
> >
> > There is still not fully solved problem how the logs should be presented to the OS.
> > On the UEFI platforms we can use config tables to do that. Then
> > probably bf_log.next_bf_log_addr should not be used.
>
> Why? How would you otherwise find the entries of the next producer in
> the configuration table? What I am missing is a GUID for the
> configuration table.
>
> > On the ACPI and Device Tree platforms we can use these mechanisms to
> > present the logs to the OSes. The situation gets more
>
> I do not understand this.
>
> UEFI implementations use either of ACPI and device-trees and support
> configuration tables. Why do you want to use some other binding?
>
> Best regards
>
> Heinrich
>
> > difficult if neither of these mechanisms are present. However, maybe
> > we should not bother too much about that because probably these
> > platforms getting less and less common.
> >
> > Anyway, I am aware that this is not specification per se. The goal
> > of this email is to continue the discussion about the idea of the
> > firmware and bootloader log and to find out where the final
> > specification should land. Of course taking into the account assumptions made above.
> >
> > You can find previous discussions about related topics at [1], [2] and [3].
> >
> > Additionally, I am going to present this during GRUB mini-summit
> > session on Tuesday, 17th of November at 15:45 UTC. So, if you want
> > to discuss the log design please join us. You can find more details here [4].
> >
> > Daniel
> >
> > [1]
> > https://lists.gnu.org/archive/html/grub-devel/2019-10/msg00107.html
> > [2]
> > https://lists.gnu.org/archive/html/grub-devel/2019-11/msg00079.html
> > [3]
> > https://lists.gnu.org/archive/html/grub-devel/2020-05/msg00223.html
> > [4] https://twitter.com/3mdeb_com/status/1327278804100931587
> >
>

_______________________________________________
Grub-devel mailing list
Grub-devel@gnu.org
https://lists.gnu.org/mailman/listinfo/grub-devel


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 13:07:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 13:07:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44741.80174 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klAo3-0007eh-TZ; Fri, 04 Dec 2020 13:07:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44741.80174; Fri, 04 Dec 2020 13:07:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klAo3-0007ea-QV; Fri, 04 Dec 2020 13:07:27 +0000
Received: by outflank-mailman (input) for mailman id 44741;
 Fri, 04 Dec 2020 13:07:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=320I=FI=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1klAo2-0007eV-4X
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 13:07:26 +0000
Received: from mail-qk1-x744.google.com (unknown [2607:f8b0:4864:20::744])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9f814601-9550-400a-ac8f-3f31a960ff25;
 Fri, 04 Dec 2020 13:07:25 +0000 (UTC)
Received: by mail-qk1-x744.google.com with SMTP id h20so5272912qkk.4
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 05:07:25 -0800 (PST)
Received: from six (cpe-67-241-56-252.twcny.res.rr.com. [67.241.56.252])
 by smtp.gmail.com with ESMTPSA id 68sm4814263qkf.97.2020.12.04.05.07.23
 (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256);
 Fri, 04 Dec 2020 05:07:24 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f814601-9550-400a-ac8f-3f31a960ff25
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to:user-agent;
        bh=Om5PIIN3PNPeYoKgw0/0KDEjx2f5tEo3ReCnrcbyAKc=;
        b=WnlJ97K749Fk0/7lD8trjUx2W55WV2ZmtHT0C8NdDKs2TPvwC+Q35caOkmb4XVrEwA
         DOP8WCk9EX6c5GAunriGst0xhQgyVHuLMaa/G1POxyTFRT4sU6ncZ/C+3Axvp49vbCa/
         n2H5EYzM4sHbhB+fLKGYsUr+FQ+8NVveKVOukE40Zp8MdzB5UjVk4WRVYdSE8OG5AKSb
         Y5FP/qdPGLlCjJZoJYLQWWCKvJvhVVV+kVOhy6Pdf+Ryqo4Akg0y0Q8SUmfwSiwCUc1J
         BBNu9+jMuJGgLwtJSsgqdxEMd7EPunBhqnQ3ezPfNS8cE7J9k+fIkB5cpGHXczK1aJPs
         JVEg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=Om5PIIN3PNPeYoKgw0/0KDEjx2f5tEo3ReCnrcbyAKc=;
        b=Y6AKZu3oWVWrIkjUx+UoaLvqaW5TdlMCuyxxseb8asdQ/6PPsDeylZ0O2FWdFld4Qu
         hRTIAhom+PLMh8+v5fCyu+UdKZmRC8fs3yuTehSItiA9TBy4KminhQjnprfL21nEyz6m
         LUpxclZ1Ub6ZiFcbrdDt+PZba5WAT2PPfuVCG2g+14tn+F8kCxwiT5M/QMM2VfAEfh8a
         hF6UxR7AwvasSa2e9Kq9cjnrKN0NYvfiXGGfeTEofC3FIBzyWkX25zANOu7/Akma6K3k
         76jf225JLojCmrl9c9PsfM8TKSPS+oCf1RKHN+sHR+ScLTBYodUYyxuAGURmv5g0yrzK
         cnNw==
X-Gm-Message-State: AOAM530j95uvUQaOpYrm8cGfrmZD1+RbeKuGTzdm2ispus/29TgB3ob0
	6e2uDwTIoQHet0/tIcX2FnQ=
X-Google-Smtp-Source: ABdhPJzQvTAaOOFNkDmort9Z2pDUXBD56w8kvmeq2lzK4mB5PvUpaA50RN6hA5AybwVHyG2PruDhRg==
X-Received: by 2002:a37:655:: with SMTP id 82mr8744916qkg.374.1607087244978;
        Fri, 04 Dec 2020 05:07:24 -0800 (PST)
Date: Fri, 4 Dec 2020 08:07:20 -0500
From: Nick Rosbrook <rosbrookn@gmail.com>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Nick Rosbrook <rosbrookn@ainfosec.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v5 17/23] libxl: introduce 'libxl_pci_bdf' in the idl...
Message-ID: <20201204130720.GA20110@six>
References: <20201203142534.4017-1-paul@xen.org>
 <20201203142534.4017-18-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201203142534.4017-18-paul@xen.org>
User-Agent: Mutt/1.9.4 (2018-02-28)

On Thu, Dec 03, 2020 at 02:25:28PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> ... and use in 'libxl_device_pci'
> 
> This patch is preparatory work for restricting the type passed to functions
> that only require BDF information, rather than passing a 'libxl_device_pci'
> structure which is only partially filled. In this patch only the minimal
> mechanical changes necessary to deal with the structural changes are made.
> Subsequent patches will adjust the code to make better use of the new type.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

The Go binding re-generation looks fine to me.

Acked-by: Nick Rosbrook <rosbrookn@ainfosec.com>


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 13:08:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 13:08:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44746.80185 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klApQ-0007nr-8G; Fri, 04 Dec 2020 13:08:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44746.80185; Fri, 04 Dec 2020 13:08:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klApQ-0007nj-5G; Fri, 04 Dec 2020 13:08:52 +0000
Received: by outflank-mailman (input) for mailman id 44746;
 Fri, 04 Dec 2020 13:08:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov5/=FI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1klApO-0007nd-GR
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 13:08:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 58737cbc-61c9-4ca7-b6f8-55c38b937905;
 Fri, 04 Dec 2020 13:08:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 985D0AC9A;
 Fri,  4 Dec 2020 13:08:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58737cbc-61c9-4ca7-b6f8-55c38b937905
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607087328; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=QG1t6H4LuO7yyYllIoahT4cqZrKS3yGt0avt04oGB2U=;
	b=O2voHMi1byHV69MAQAs1zhO2fFzKLUzdjDboQ17JnZJoMLxajD2yVpURWtzhO5N+6zK603
	tiEyN81aQdaATK6Pg/v6wiYlqw7Yvtjuxleb5W9Hx26qjuT5g3J3x3GBR5QCoA+03xRr/4
	MQRaZdHdaMWV2bqv+TMyirGbZgfOjeA=
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-15-jgross@suse.com>
 <369bcb0b-5554-8976-d3fe-5066b3d7cdce@suse.com>
 <774ca9f3-3bbe-817f-5ecb-76054aa619f5@suse.com>
 <f81a011d-101c-29e7-cba2-0b52506cc027@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v2 14/17] xen/hypfs: add support for id-based dynamic
 directories
Message-ID: <181448f7-fffb-ee5d-b420-40500bdb608d@suse.com>
Date: Fri, 4 Dec 2020 14:08:47 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <f81a011d-101c-29e7-cba2-0b52506cc027@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="lCIpEXYcdbo2k1dkeB1pVwZQuauXNeZew"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--lCIpEXYcdbo2k1dkeB1pVwZQuauXNeZew
Content-Type: multipart/mixed; boundary="1py2jHGkOkJFRyKDxi109fP3ISlgPf5Yb";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <181448f7-fffb-ee5d-b420-40500bdb608d@suse.com>
Subject: Re: [PATCH v2 14/17] xen/hypfs: add support for id-based dynamic
 directories
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-15-jgross@suse.com>
 <369bcb0b-5554-8976-d3fe-5066b3d7cdce@suse.com>
 <774ca9f3-3bbe-817f-5ecb-76054aa619f5@suse.com>
 <f81a011d-101c-29e7-cba2-0b52506cc027@suse.com>
In-Reply-To: <f81a011d-101c-29e7-cba2-0b52506cc027@suse.com>

--1py2jHGkOkJFRyKDxi109fP3ISlgPf5Yb
Content-Type: multipart/mixed;
 boundary="------------D5A47AE5AE2EB7D5E2E35E37"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------D5A47AE5AE2EB7D5E2E35E37
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 04.12.20 10:16, Jan Beulich wrote:
> On 04.12.2020 09:52, Jürgen Groß wrote:
>> On 03.12.20 16:44, Jan Beulich wrote:
>>> On 01.12.2020 09:21, Juergen Gross wrote:
>>>> --- a/xen/common/hypfs.c
>>>> +++ b/xen/common/hypfs.c
>>>> @@ -355,6 +355,81 @@ unsigned int hypfs_getsize(const struct hypfs_entry *entry)
>>>>        return entry->size;
>>>>    }
>>>>   
>>>> +int hypfs_read_dyndir_id_entry(const struct hypfs_entry_dir *template,
>>>> +                               unsigned int id, bool is_last,
>>>> +                               XEN_GUEST_HANDLE_PARAM(void) *uaddr)
>>>> +{
>>>> +    struct xen_hypfs_dirlistentry direntry;
>>>> +    char name[HYPFS_DYNDIR_ID_NAMELEN];
>>>> +    unsigned int e_namelen, e_len;
>>>> +
>>>> +    e_namelen = snprintf(name, sizeof(name), template->e.name, id);
>>>> +    e_len = DIRENTRY_SIZE(e_namelen);
>>>> +    direntry.e.pad = 0;
>>>> +    direntry.e.type = template->e.type;
>>>> +    direntry.e.encoding = template->e.encoding;
>>>> +    direntry.e.content_len = template->e.funcs->getsize(&template->e);
>>>> +    direntry.e.max_write_len = template->e.max_size;
>>>> +    direntry.off_next = is_last ? 0 : e_len;
>>>> +    if ( copy_to_guest(*uaddr, &direntry, 1) )
>>>> +        return -EFAULT;
>>>> +    if ( copy_to_guest_offset(*uaddr, DIRENTRY_NAME_OFF, name,
>>>> +                              e_namelen + 1) )
>>>> +        return -EFAULT;
>>>> +
>>>> +    guest_handle_add_offset(*uaddr, e_len);
>>>> +
>>>> +    return 0;
>>>> +}
>>>> +
>>>> +static struct hypfs_entry *hypfs_dyndir_findentry(
>>>> +    const struct hypfs_entry_dir *dir, const char *name, unsigned int name_len)
>>>> +{
>>>> +    const struct hypfs_dyndir_id *data;
>>>> +
>>>> +    data = hypfs_get_dyndata();
>>>> +
>>>> +    /* Use template with original findentry function. */
>>>> +    return data->template->e.funcs->findentry(data->template, name, name_len);
>>>> +}
>>>> +
>>>> +static int hypfs_read_dyndir(const struct hypfs_entry *entry,
>>>> +                             XEN_GUEST_HANDLE_PARAM(void) uaddr)
>>>> +{
>>>> +    const struct hypfs_dyndir_id *data;
>>>> +
>>>> +    data = hypfs_get_dyndata();
>>>> +
>>>> +    /* Use template with original read function. */
>>>> +    return data->template->e.funcs->read(&data->template->e, uaddr);
>>>> +}
>>>> +
>>>> +struct hypfs_entry *hypfs_gen_dyndir_entry_id(
>>>> +    const struct hypfs_entry_dir *template, unsigned int id)
>>>> +{
>>>> +    struct hypfs_dyndir_id *data;
>>>> +
>>>> +    data = hypfs_get_dyndata();
>>>> +
>>>> +    data->template = template;
>>>> +    data->id = id;
>>>> +    snprintf(data->name, sizeof(data->name), template->e.name, id);
>>>> +    data->dir = *template;
>>>> +    data->dir.e.name = data->name;
>>>
>>> I'm somewhat puzzled, if not confused, by the apparent redundancy
>>> of this name generation with that in hypfs_read_dyndir_id_entry().
>>> Wasn't the idea to be able to use generic functions on these
>>> generated entries?
>>
>> I can add a macro replacing the double snprintf().
> 
> That wasn't my point. I'm concerned about there being two name-generation
> sites in the first place. Is this perhaps simply a form of
> optimization, avoiding a call from hypfs_read_dyndir_id_entry() to
> hypfs_gen_dyndir_entry_id() (but risking the two going out of sync)?

Be aware that hypfs_read_dyndir_id_entry() is generating a struct
xen_hypfs_dirlistentry, which is different from the internal
representation of the data produced by hypfs_gen_dyndir_entry_id().

So the main common part is the name generation. What else would you
want, apart from making it common via e.g. a macro? Having
hypfs_read_dyndir_id_entry() call hypfs_gen_dyndir_entry_id() would
just be a more general approach in which all of the data produced by
hypfs_gen_dyndir_entry_id(), other than the generated name, would be
dropped or copied around a second time.


Juergen

--------------D5A47AE5AE2EB7D5E2E35E37
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------D5A47AE5AE2EB7D5E2E35E37--

--1py2jHGkOkJFRyKDxi109fP3ISlgPf5Yb--

--lCIpEXYcdbo2k1dkeB1pVwZQuauXNeZew
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/KNN8FAwAAAAAACgkQsN6d1ii/Ey8K
zAf/fUCIp8Ek6K4pNdzwzydfMUQAWebuq/kK4hZPqZgORrDti4nxuOMOhmLEW7cXw6FkwG9gDoEA
B2QkSWtr5cApYQGX/gmOUbiBS4jUfWv6uLpFXxeLHuO/xniKZmofYOOj+KAgML6bHsLTpeEmT5Lp
+76ddW1r1j5jgfQGtzisfGHGtu2HyEKsSqDEeoqBgs8Uq3ZuBYE6uSkNH45nVHtK6DgvvSZdGhpc
pOyYg9aBmu1bqW2wtQf5QUQdWHZxCs1qnFkcvS09qUfF8sUP8wzbyjgSwz0uRdF41qqZ+qoLYKe8
FhBGz74wbX1XUfLB3v4Gj0rv100vm8It3bseVjZcjw==
=qniA
-----END PGP SIGNATURE-----

--lCIpEXYcdbo2k1dkeB1pVwZQuauXNeZew--


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 13:26:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 13:26:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44756.80198 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klB6S-0001Tx-Qb; Fri, 04 Dec 2020 13:26:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44756.80198; Fri, 04 Dec 2020 13:26:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klB6S-0001Tq-ND; Fri, 04 Dec 2020 13:26:28 +0000
Received: by outflank-mailman (input) for mailman id 44756;
 Fri, 04 Dec 2020 13:23:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9xwy=FI=molgen.mpg.de=pmenzel@srs-us1.protection.inumbo.net>)
 id 1klB3Z-0001Lk-MV
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 13:23:29 +0000
Received: from mx1.molgen.mpg.de (unknown [141.14.17.11])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2f49b2dd-7298-4057-a547-3d05f97ee422;
 Fri, 04 Dec 2020 13:23:26 +0000 (UTC)
Received: from [192.168.0.2] (ip5f5af1d8.dynamic.kabel-deutschland.de
 [95.90.241.216])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested) (Authenticated sender: pmenzel)
 by mx.molgen.mpg.de (Postfix) with ESMTPSA id 1065B2064784C;
 Fri,  4 Dec 2020 14:23:24 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2f49b2dd-7298-4057-a547-3d05f97ee422
Subject: Re: [SPECIFICATION RFC] The firmware and bootloader log specification
To: Wim Vervoorn <wvervoorn@eltan.com>,
 The development of GNU GRUB <grub-devel@gnu.org>,
 Daniel Kiper <daniel.kiper@oracle.com>
Cc: coreboot@coreboot.org, LKML <linux-kernel@vger.kernel.org>,
 systemd-devel@lists.freedesktop.org, trenchboot-devel@googlegroups.com,
 U-Boot Mailing List <u-boot@lists.denx.de>, x86@kernel.org,
 xen-devel@lists.xenproject.org, alecb@umass.edu,
 alexander.burmashev@oracle.com, allen.cryptic@gmail.com,
 andrew.cooper3@citrix.com, ard.biesheuvel@linaro.org,
 "btrotter@gmail.com" <btrotter@gmail.com>, dpsmith@apertussolutions.com,
 eric.devolder@oracle.com, eric.snowberg@oracle.com, hpa@zytor.com,
 hun@n-dimensional.de, javierm@redhat.com, joao.m.martins@oracle.com,
 kanth.ghatraju@oracle.com, konrad.wilk@oracle.com, krystian.hebel@3mdeb.com,
 leif@nuviainc.com, lukasz.hawrylko@intel.com, luto@amacapital.net,
 michal.zygowski@3mdeb.com, Matthew Garrett <mjg59@google.com>,
 mtottenh@akamai.com, Vladimir 'phcoder' Serbinenko <phcoder@gmail.com>,
 piotr.krol@3mdeb.com, pjones@redhat.com, roger.pau@citrix.com,
 ross.philipson@oracle.com, tyhicks@linux.microsoft.com,
 Heinrich Schuchardt <xypron.glpk@gmx.de>
References: <20201113235242.k6fzlwmwm2xqhqsi@tomti.i.net-space.pl>
 <CAODwPW9dxvMfXY=92pJNGazgYqcynAk72EkzOcmF7JZXhHTwSQ@mail.gmail.com>
 <6c1e79be210549949c30253a6cfcafc1@Eltsrv03.Eltan.local>
From: Paul Menzel <pmenzel@molgen.mpg.de>
Message-ID: <9b614471-0395-88a5-1347-66417797e39d@molgen.mpg.de>
Date: Fri, 4 Dec 2020 14:23:23 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <6c1e79be210549949c30253a6cfcafc1@Eltsrv03.Eltan.local>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

Dear Wim, dear Daniel,


First, thank you for including all parties in the discussion.

On 04.12.20 at 13:52, Wim Vervoorn wrote:

> I agree with you. Using an existing standard is better than inventing
> a new one in this case. I think using the coreboot logging is a good
> idea as there is indeed a lot of support already available and it is
> lightweight and simple.
In my opinion, coreboot’s format is lacking in that it does not record a 
timestamp, and the log level is not stored as metadata but (in 
coreboot) only used to decide whether or not to print the message.

I agree with you that an existing standard should be used, and in my 
opinion it should be the Linux kernel message format. That is the most 
widely supported, and existing tools could then also work with 
pre-Linux messages.

Sean Hudson from Mentor Graphics presented that idea at Embedded Linux 
Conference Europe 2016 [1]. I have no idea if anything came out of that 
effort. (Unfortunately, I couldn’t find an email address. Does somebody 
have contacts at Mentor to find out how to reach him?)


Kind regards,

Paul


[1]: 
http://events17.linuxfoundation.org/sites/events/files/slides/2016-10-12%20-%20ELCE%20-%20Shared%20Logging%20-%20Part%20Deux.pdf


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 13:31:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 13:31:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44763.80210 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klBB6-0002Wb-D4; Fri, 04 Dec 2020 13:31:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44763.80210; Fri, 04 Dec 2020 13:31:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klBB6-0002WU-9y; Fri, 04 Dec 2020 13:31:16 +0000
Received: by outflank-mailman (input) for mailman id 44763;
 Fri, 04 Dec 2020 13:31:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klBB5-0002WM-8k; Fri, 04 Dec 2020 13:31:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klBB5-0002mo-2K; Fri, 04 Dec 2020 13:31:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klBB4-0006ki-O4; Fri, 04 Dec 2020 13:31:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1klBB4-0006sX-NH; Fri, 04 Dec 2020 13:31:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=AKa5zqoL1DsI+93bsrdX0CX/oTO6mtBFHA0z59p/jnQ=; b=x7V7kOCSRkfZN5Ae/vQ+meq2Uc
	oskzjoTqxg+7JDswN9ZaX+z35PjI2F+dtbG7vIktig4TOzCEAzcNapErKvXAhdypBG5A4ZGMkhV8X
	8SnB87K+28kldEMsXNBFzfChO8FGBSt3KWQoLx3YUiI5vrb0aEvEfd5qDEVGJ5FU0yK0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157192-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157192: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=aec46884784c2494a30221da775d4ac2c43a4d42
X-Osstest-Versions-That:
    xen=aec46884784c2494a30221da775d4ac2c43a4d42
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 04 Dec 2020 13:31:14 +0000

flight 157192 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157192/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10       fail  like 157182
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157182
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157182
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157182
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157182
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157182
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157182
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157182
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157182
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157182
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157182
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157182
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  aec46884784c2494a30221da775d4ac2c43a4d42
baseline version:
 xen                  aec46884784c2494a30221da775d4ac2c43a4d42

Last test of basis   157192  2020-12-04 01:51:29 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Fri Dec 04 13:35:30 2020
Date: Fri, 4 Dec 2020 13:35:20 +0000
From: Wei Liu <wl@xen.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2] tools/libs/ctrl: fix dumping of ballooned guest
Message-ID: <20201204133520.57v2jjrhdzesaj5a@liuwe-devbox-debian-v2>
References: <20201111100143.13820-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201111100143.13820-1-jgross@suse.com>
User-Agent: NeoMutt/20180716

On Wed, Nov 11, 2020 at 11:01:43AM +0100, Juergen Gross wrote:
> Today a guest with memory < maxmem often can't be dumped via
> xl dump-core without triggering an error message:
> 
> xc: info: exceeded nr_pages (262144) losing pages
> 
> If the last page of the guest isn't allocated, the loop in
> xc_domain_dumpcore_via_callback() will always emit this message,
> because the number of already dumped pages is tested before the next
> page is checked for validity.
> 
> The guest's p2m_size might be lower than expected, so it should be
> checked in order to avoid reading past the end of the p2m.
> 
> The guest might use high bits in p2m entries to flag special cases like
> foreign mappings. Entries with an MFN larger than the highest MFN of
> the host should be skipped.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked + applied.


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 14:38:11 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157204-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157204: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=97e2b622d1f32ba35194dbca104c3bf918bf3271
X-Osstest-Versions-That:
    ovmf=31e8a47b62a4f3dc45d8f9bbf3529a188e867a87
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 04 Dec 2020 14:37:50 +0000

flight 157204 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157204/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 97e2b622d1f32ba35194dbca104c3bf918bf3271
baseline version:
 ovmf                 31e8a47b62a4f3dc45d8f9bbf3529a188e867a87

Last test of basis   157194  2020-12-04 04:16:55 Z    0 days
Testing same since   157204  2020-12-04 12:10:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  wenyi xie <xiewenyi2=huawei.com@groups.io>
  Wenyi Xie <xiewenyi2@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   31e8a47b62..97e2b622d1  97e2b622d1f32ba35194dbca104c3bf918bf3271 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 14:39:10 2020
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Oleksandr Andrushchenko <andr2000@gmail.com>,
        "Rahul.Singh@arm.com"
	<Rahul.Singh@arm.com>,
        "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
        "julien.grall@arm.com" <julien.grall@arm.com>,
        "sstabellini@kernel.org"
	<sstabellini@kernel.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>,
        "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
        =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
Thread-Topic: [PATCH 06/10] vpci: Make every domain handle its own BARs
Thread-Index: 
 AQHWtpbzyJg77SbpvkSRWLKjWy/3PanEQmgAgAA8YoCAABlOgIABCD8AgABBOQCAAAXIAIAAAS0AgAADNgCAAAlOAIAAEnYAgAAcfoCAAAKRAIAAAYQAgAABxICAAAHUgIAg/YsA
Date: Fri, 4 Dec 2020 14:38:56 +0000
Message-ID: <802e20d8-82b6-5755-e6e5-aadb07585a32@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-7-andr2000@gmail.com>
 <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
 <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
 <20201112144643.iyy5b34qyz5zi7mc@Air-de-Roger>
 <1fe15b9a-6f5d-1209-8ff5-af7c4fc0d637@epam.com>
 <b4697fbe-6896-ed64-409d-85620c08904a@suse.com>
 <3d6e5aab-ff89-7859-09c6-5ecb0c052511@epam.com>
 <1c88fef1-8558-fde1-02c7-8a68f6ecf312@suse.com>
 <67fd5df7-2ad2-08e5-294e-b769429164f0@epam.com>
 <03e23a66-619f-e846-cf61-a33ca5d9f0b4@suse.com>
 <b151e6d2-5480-d201-432a-bece208a1fd9@epam.com>
 <c58c1393-381a-d995-6e41-fa3251f67bd7@suse.com>
 <8fc22774-7380-2de1-9c30-6649a79fdfe1@epam.com>
 <46c75ee1-758c-8a42-d8d3-8d42cce3240a@suse.com>
 <66cb04c5-5e98-6a4d-da88-230b2dbc3d98@epam.com>
 <04059ce3-7009-9e1e-8ba2-398cc993d81b@suse.com>
In-Reply-To: <04059ce3-7009-9e1e-8ba2-398cc993d81b@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: efec32e3-2a89-48b4-2859-08d8986254d4
x-ms-traffictypediagnostic: AM4PR0301MB2195:
x-microsoft-antispam-prvs: 
 <AM4PR0301MB2195D5D734068C47D3BAB5F0E7F10@AM4PR0301MB2195.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <E232E532DE80BE4D8D33B6C18CA575A6@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: efec32e3-2a89-48b4-2859-08d8986254d4
X-MS-Exchange-CrossTenant-originalarrivaltime: 04 Dec 2020 14:38:56.9575
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: kIozqT/9MuiyHj6VrU/E8cKamue0iCN6d1xBBgbRg1Bk5E+Zq0hoEcKeMZdfG+3Is0pVUzn8eiVN8Qr0OMJL6aU9ezSOKCD/q/9buZ3WAfxjvulg1d+ChNdvDWDVURCY
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR0301MB2195
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-12-04_05:2020-12-04,2020-12-04 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 suspectscore=0
 priorityscore=1501 malwarescore=0 mlxscore=0 mlxlogscore=540 bulkscore=0
 clxscore=1011 phishscore=0 lowpriorityscore=0 spamscore=0 adultscore=0
 impostorscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2012040084

Hi, Jan!

On 11/13/20 4:51 PM, Jan Beulich wrote:
> On 13.11.2020 15:44, Oleksandr Andrushchenko wrote:
>> On 11/13/20 4:38 PM, Jan Beulich wrote:
>>> On 13.11.2020 15:32, Oleksandr Andrushchenko wrote:
>>>> On 11/13/20 4:23 PM, Jan Beulich wrote:
>>>>> Earlier on I didn't say you should get this to work, only
>>>>> that I think the general logic around what you add shouldn't make
>>>>> things more arch specific than they really should be. That said,
>>>>> something similar to the above should still be doable on x86,
>>>>> utilizing struct pci_seg's bus2bridge[]. There ought to be
>>>>> DEV_TYPE_PCI_HOST_BRIDGE entries there, albeit a number of them
>>>>> (provided by the CPUs themselves rather than the chipset) aren't
>>>>> really host bridges for the purposes you're after.
>>>> You mean I can still use DEV_TYPE_PCI_HOST_BRIDGE as a marker
>>>> while trying to detect what I need?
>>> I'm afraid I don't understand what marker you're thinking about
>>> here.
>> I mean that when I go over bus2bridge entries, should I only work with
>> those which have DEV_TYPE_PCI_HOST_BRIDGE?
> Well, if you're after host bridges - yes.
>
> Jan

So, I started looking at bus2bridge and whether it can be used for both
x86 and (possibly) ARM, and I have the impression that its original
purpose was to identify the devices which the x86 IOMMU should cover:
e.g. looking at the find_upstream_bridge users, they are the x86 IOMMU
and the VGA driver.

I tried to play with this on ARM, and for the HW platform I have and for
QEMU I got 0 entries in bus2bridge...

This is because of how xen/drivers/passthrough/pci.c:alloc_pdev is
implemented (which lives in the common code, BTW, but seems to be x86
specific): it skips setting up bus2bridge entries for the bridges I
have. These are DEV_TYPE_PCIe_BRIDGE and DEV_TYPE_PCI_HOST_BRIDGE. So
the assumption I made before, that I can go over bus2bridge and filter
out the "owner" or parent bridge for a given seg:bus, doesn't seem to
hold now.

Even if I find the parent bridge with
xen/drivers/passthrough/pci.c:find_upstream_bridge, I am not sure I can
get any further in detecting which domain owns this bridge: the whole
idea in the x86 code is that Domain-0 is the only possible one here (you
mentioned that). So, I wonder if we can still live with
is_hardware_domain for now on x86?

Thank you in advance,

Oleksandr


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 15:09:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 15:09:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44881.80300 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klChp-0004G8-19; Fri, 04 Dec 2020 15:09:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44881.80300; Fri, 04 Dec 2020 15:09:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klCho-0004G1-Tr; Fri, 04 Dec 2020 15:09:08 +0000
Received: by outflank-mailman (input) for mailman id 44881;
 Fri, 04 Dec 2020 15:09:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1klChn-0004Fw-GD
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 15:09:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1klChl-0004x8-D9; Fri, 04 Dec 2020 15:09:05 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1klChl-0004uC-2P; Fri, 04 Dec 2020 15:09:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Qt946/pZlmGRGP+S0I4pKTAzUdL3v8DCmAnKyp0CyBI=; b=YprXC9FV0wJqOnjWnslpaK5bXl
	xonwaHZqJKfSTvzEqVKHS7c4Wep1JAxnPmzphiNtyETTal9CR6alwuME2+oWymYXIVn0Aqd4pYnHB
	QC99RGoTAbrqsDh0klsC73ZiTESNZEl61vHTAXemQA2MOEioGcyNe64HlgdyQO78IbAU=;
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <lengyelt@ainfosec.com>,
 Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d821c715-966a-b48b-a877-c5dac36822f0@suse.com>
 <17c90493-b438-fbc1-ca10-3bc4d89c4e5e@xen.org>
 <7a768bcd-80c1-d193-8796-7fb6720fa22a@suse.com>
 <1a8250f5-ea49-ac3a-e992-be7ec40deba9@xen.org>
 <269f9a2d-7a8d-cba2-801f-6d3b12f9455f@suse.com>
 <02a2b77f-27a9-b1b6-1acf-1f136cffdf30@xen.org>
 <48395363-ea47-9139-011e-233d92581a71@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <2edfc711-d8d9-4854-94a2-2d9e4d9902ec@xen.org>
Date: Fri, 4 Dec 2020 15:09:02 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <48395363-ea47-9139-011e-233d92581a71@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 04/12/2020 12:01, Jan Beulich wrote:
> On 04.12.2020 12:51, Julien Grall wrote:
>>
>>
>> On 04/12/2020 11:48, Jan Beulich wrote:
>>> On 04.12.2020 12:28, Julien Grall wrote:
>>>> Hi Jan,
>>>>
>>>> On 03/12/2020 10:09, Jan Beulich wrote:
>>>>> On 02.12.2020 22:10, Julien Grall wrote:
>>>>>> On 23/11/2020 13:30, Jan Beulich wrote:
>>>>>>> While there don't look to be any problems with this right now, the lock
>>>>>>> order implications from holding the lock can be very difficult to follow
>>>>>>> (and may be easy to violate unknowingly). The present callbacks don't
>>>>>>> (and no such callback should) have any need for the lock to be held.
>>>>>>>
>>>>>>> However, vm_event_disable() frees the structures used by respective
>>>>>>> callbacks and isn't otherwise synchronized with invocations of these
>>>>>>> callbacks, so maintain a count of in-progress calls, for evtchn_close()
>>>>>>> to wait to drop to zero before freeing the port (and dropping the lock).
>>>>>>
>>>>>> AFAICT, this callback is not the only place where the synchronization is
>>>>>> missing in the VM event code.
>>>>>>
>>>>>> For instance, vm_event_put_request() can also race against
>>>>>> vm_event_disable().
>>>>>>
>>>>>> So shouldn't we handle this issue properly in VM event?
>>>>>
>>>>> I suppose that's a question to the VM event folks rather than me?
>>>>
>>>> Yes. From my understanding of Tamas's e-mail, they are relying on the
>>>> monitoring software to do the right thing.
>>>>
>>>> I will refrain from commenting on this approach. However, given the
>>>> race is much wider than the event channel, I would recommend not
>>>> adding more code in the event channel to deal with such a problem.
>>>>
>>>> Instead, this should be fixed in the VM event code when someone has time
>>>> to harden the subsystem.
>>>
>>> Are you effectively saying I should now undo the addition of the
>>> refcounting, which was added in response to feedback from you?
>>
>> Please point out where I made the request to use the refcounting...
> 
> You didn't ask for this directly, sure, but ...
> 
>> I pointed out there was an issue with the VM event code.
> 
> ... this has ultimately led to the decision to use refcounting
> (iirc there was one alternative that I had proposed, besides
> the option of doing nothing).

One other option that was discussed (maybe only on security@xen.org) is 
to move the spinlock outside of the structure so it is always allocated.

> 
>> This was later
>> analysed as a wider issue. The VM event folks don't seem to be very
>> concerned about the race, so I don't see a reason to try to fix it in
>> the event channel code.
> 
> And you won't need the refcount for vpl011 then?

I don't believe we need it for the vpl011, as the spinlock protecting 
the code should always be allocated. The problem today is that the lock 
is initialized too late.

> I can certainly
> drop it again, but it feels odd to go back to an earlier version
> under the circumstances ...

The code introduced doesn't look necessary outside of the VM event code.
So I think it would be wrong to merge it if it is just papering over a 
bigger problem.

Cheers,



> 
> Jan
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 15:22:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 15:22:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44892.80312 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klCuT-0006IB-81; Fri, 04 Dec 2020 15:22:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44892.80312; Fri, 04 Dec 2020 15:22:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klCuT-0006I4-3x; Fri, 04 Dec 2020 15:22:13 +0000
Received: by outflank-mailman (input) for mailman id 44892;
 Fri, 04 Dec 2020 15:22:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VOLy=FI=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1klCuR-0006Hz-RP
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 15:22:11 +0000
Received: from mail-wr1-x444.google.com (unknown [2a00:1450:4864:20::444])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4db20e5d-84bf-4a5f-841e-8e079aedb096;
 Fri, 04 Dec 2020 15:22:11 +0000 (UTC)
Received: by mail-wr1-x444.google.com with SMTP id k14so5704547wrn.1
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 07:22:10 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4db20e5d-84bf-4a5f-841e-8e079aedb096
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=QUeOFftGxRLW9PKuJNKArkPnQ1h/QDbCkiM1yo1Bi10=;
        b=I0IIGRq9limyEPu5zReHeh4uQmINXn8c79MoYumI3/PVqt33ELgb5pQJq6WbiHQ3uq
         vTNBWPbQvyiOurs4JbaYMe60lqVMtm/o+NStYwA9jdSgHii272MYdZK2TJKg4TuyJdo5
         VaYJl9fIk0G7DVFT6KPtJvgM8CKOkbEvk7ne4ipF3R6e1whJw8sfUMYGZcq0pblA9nt0
         79jQZYSALJopVyn9gFBlbM6QrcbTCy3DpaXVeYnnmXK9XrzW1EACrCaiXNJwuOGaqzir
         7YZPkJNgCTYqPSzTNaEjlz41c4fvUdotBm2VBbaHHzg0BFEpyu3xzG+f870SKp9abLU0
         6nhA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=QUeOFftGxRLW9PKuJNKArkPnQ1h/QDbCkiM1yo1Bi10=;
        b=j1Wii2Hc+Rg949YC8ztsAQULxzrdtpF6XsUEi+E+WkYCy/DD8r7c2sLsiLrYcSyaKs
         5K4P+xi5Vf4kw3z2AuO45P5XYkbkaY4nsBFFaeqIgnPm/JQjb2JkZ0W2Xbf5yBaZ5ufS
         JVdBinJtnPFaJ3P4hJrHBFe1uHIQIIgXyOW9O/rn9E4niyfPvrDYu47Z46ky+TDchg0n
         V6hjpBo57uT4dYjKfmwPYzS/l2Yqw5NphA1KBsAr4QK5/rAXXFj0aTutusWUtiAlhyfO
         jStnoyz7wyFLFDimL8FN4cU7qgb2zFKccrhItNyxlXzH6Zj9pN3o5PwfNdrm5TjUTLxW
         p3hA==
X-Gm-Message-State: AOAM532IGx4Om/486Pe1yYz1JYAKesFkKKjOrAxKR3R3FAsxbF0WliKX
	Fjl3oXogWy2nN1C96F5imjmD11DAoCxdB4EPyC4=
X-Google-Smtp-Source: ABdhPJwNDiMyt7HazIIMyu5+0kK9tqgCNzVO1Du9W4h0ZaWZwzYOXC1KzBwtfuhxYDqfBEcLzkmJZk12AFjdF+hLtwY=
X-Received: by 2002:adf:e3cf:: with SMTP id k15mr5432865wrm.259.1607095330018;
 Fri, 04 Dec 2020 07:22:10 -0800 (PST)
MIME-Version: 1.0
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d821c715-966a-b48b-a877-c5dac36822f0@suse.com> <17c90493-b438-fbc1-ca10-3bc4d89c4e5e@xen.org>
 <7a768bcd-80c1-d193-8796-7fb6720fa22a@suse.com> <1a8250f5-ea49-ac3a-e992-be7ec40deba9@xen.org>
In-Reply-To: <1a8250f5-ea49-ac3a-e992-be7ec40deba9@xen.org>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Fri, 4 Dec 2020 10:21:34 -0500
Message-ID: <CABfawhkQcUD4f62zpg0cyrdQgG82XtpYRZZ_-50hjagooT530A@mail.gmail.com>
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Julien Grall <julien@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Tamas K Lengyel <lengyelt@ainfosec.com>, 
	Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>, Alexandru Isaila <aisaila@bitdefender.com>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Fri, Dec 4, 2020 at 6:29 AM Julien Grall <julien@xen.org> wrote:
>
> Hi Jan,
>
> On 03/12/2020 10:09, Jan Beulich wrote:
> > On 02.12.2020 22:10, Julien Grall wrote:
> >> On 23/11/2020 13:30, Jan Beulich wrote:
> >>> While there don't look to be any problems with this right now, the lock
> >>> order implications from holding the lock can be very difficult to follow
> >>> (and may be easy to violate unknowingly). The present callbacks don't
> >>> (and no such callback should) have any need for the lock to be held.
> >>>
> >>> However, vm_event_disable() frees the structures used by respective
> >>> callbacks and isn't otherwise synchronized with invocations of these
> >>> callbacks, so maintain a count of in-progress calls, for evtchn_close()
> >>> to wait to drop to zero before freeing the port (and dropping the lock).
> >>
> >> AFAICT, this callback is not the only place where the synchronization is
> >> missing in the VM event code.
> >>
> >> For instance, vm_event_put_request() can also race against
> >> vm_event_disable().
> >>
> >> So shouldn't we handle this issue properly in VM event?
> >
> > I suppose that's a question to the VM event folks rather than me?
>
> Yes. From my understanding of Tamas's e-mail, they are relying on the
> monitoring software to do the right thing.
>
> I will refrain from commenting on this approach. However, given the
> race is much wider than the event channel, I would recommend not adding
> more code in the event channel to deal with such a problem.
>
> Instead, this should be fixed in the VM event code when someone has time
> to harden the subsystem.

I double-checked, and the disable route is actually more robust: we
don't just rely on the toolstack doing the right thing. The domain
gets paused before any call to vm_event_disable. So I don't think
there is really a race condition here.

Tamas


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 15:29:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 15:29:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44902.80324 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klD18-0006Y7-3Q; Fri, 04 Dec 2020 15:29:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44902.80324; Fri, 04 Dec 2020 15:29:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klD17-0006Y0-Vm; Fri, 04 Dec 2020 15:29:05 +0000
Received: by outflank-mailman (input) for mailman id 44902;
 Fri, 04 Dec 2020 15:29:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1klD16-0006Xv-Ld
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 15:29:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1klD14-0005Lq-St; Fri, 04 Dec 2020 15:29:02 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1klD14-0006dl-IH; Fri, 04 Dec 2020 15:29:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=xv+O3Dg9lMSF5rvurll0ZdkFckixVgK3H+VKj0h2FEU=; b=QT+l22IkuGn5fajKqG1whz4c2b
	fxgPLxt2Tq/4uQGw24HQ9a518q7LFOLv9ahi1brrhtCbOKnmJv7SqSDTeH8/eg9YRi/o3yAluYpLo
	fXSjcn6Dgh3mCSQsLSdtEZoYWggHnSDod3hSblJgPx43f7aQ6ZZeRQ7JLklODe8iuVd4=;
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <lengyelt@ainfosec.com>,
 Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d821c715-966a-b48b-a877-c5dac36822f0@suse.com>
 <17c90493-b438-fbc1-ca10-3bc4d89c4e5e@xen.org>
 <7a768bcd-80c1-d193-8796-7fb6720fa22a@suse.com>
 <1a8250f5-ea49-ac3a-e992-be7ec40deba9@xen.org>
 <CABfawhkQcUD4f62zpg0cyrdQgG82XtpYRZZ_-50hjagooT530A@mail.gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <5862eb24-d894-455a-13ac-61af54f949e7@xen.org>
Date: Fri, 4 Dec 2020 15:29:00 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <CABfawhkQcUD4f62zpg0cyrdQgG82XtpYRZZ_-50hjagooT530A@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 04/12/2020 15:21, Tamas K Lengyel wrote:
> On Fri, Dec 4, 2020 at 6:29 AM Julien Grall <julien@xen.org> wrote:
>>
>> Hi Jan,
>>
>> On 03/12/2020 10:09, Jan Beulich wrote:
>>> On 02.12.2020 22:10, Julien Grall wrote:
>>>> On 23/11/2020 13:30, Jan Beulich wrote:
>>>>> While there don't look to be any problems with this right now, the lock
>>>>> order implications from holding the lock can be very difficult to follow
>>>>> (and may be easy to violate unknowingly). The present callbacks don't
>>>>> (and no such callback should) have any need for the lock to be held.
>>>>>
>>>>> However, vm_event_disable() frees the structures used by respective
>>>>> callbacks and isn't otherwise synchronized with invocations of these
>>>>> callbacks, so maintain a count of in-progress calls, for evtchn_close()
>>>>> to wait to drop to zero before freeing the port (and dropping the lock).
>>>>
>>>> AFAICT, this callback is not the only place where the synchronization is
>>>> missing in the VM event code.
>>>>
>>>> For instance, vm_event_put_request() can also race against
>>>> vm_event_disable().
>>>>
>>>> So shouldn't we handle this issue properly in VM event?
>>>
>>> I suppose that's a question to the VM event folks rather than me?
>>
>> Yes. From my understanding of Tamas's e-mail, they are relying on the
>> monitoring software to do the right thing.
>>
>> I will refrain from commenting on this approach. However, given the
>> race is much wider than the event channel, I would recommend not
>> adding more code in the event channel to deal with such a problem.
>>
>> Instead, this should be fixed in the VM event code when someone has time
>> to harden the subsystem.
> 
> I double-checked, and the disable route is actually more robust: we
> don't just rely on the toolstack doing the right thing. The domain
> gets paused before any call to vm_event_disable. So I don't think
> there is really a race condition here.

The code will *only* pause the monitored domain. I can see two issues:
    1) The toolstack is still sending events while destroy is happening. 
This is the race discussed here.
    2) The implementation of vm_event_put_request() suggests that it can 
be called with a non-current domain.

I don't see how just pausing the monitored domain is enough here.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 15:37:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 15:37:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44910.80336 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klD8n-0007d4-TM; Fri, 04 Dec 2020 15:37:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44910.80336; Fri, 04 Dec 2020 15:37:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klD8n-0007cx-QG; Fri, 04 Dec 2020 15:37:01 +0000
Received: by outflank-mailman (input) for mailman id 44910;
 Fri, 04 Dec 2020 15:37:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klD8m-0007cp-EN; Fri, 04 Dec 2020 15:37:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klD8m-0005WM-81; Fri, 04 Dec 2020 15:37:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klD8l-0003mt-7L; Fri, 04 Dec 2020 15:36:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1klD8l-0001Zk-6s; Fri, 04 Dec 2020 15:36:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bQ80goojSaVaH5aTomU2viHlI/CjwN1KTCQJPWvkoME=; b=Jz407atRroMQJcDstAB+1XejJ0
	ifIT8nA4aaT1X3x4ZZbt1e/o3K96K21iGGX6QlI1vnZjyQJxxRpymIpPA1MQ7k7dNytxPwjb82GCa
	rMnMvEnfGw7n66M3CHig6qW9bMOI3nOg0WQuPz/hDQscLIjKjjERzUHn9xv03GBse+O4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157203-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157203: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0fb6dbfaa859aae0e51a82a8d5e213bcc64b85f1
X-Osstest-Versions-That:
    xen=be3755af37263833cb3b1c6b1f2ba219bdf97ec3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 04 Dec 2020 15:36:59 +0000

flight 157203 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157203/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  0fb6dbfaa859aae0e51a82a8d5e213bcc64b85f1
baseline version:
 xen                  be3755af37263833cb3b1c6b1f2ba219bdf97ec3

Last test of basis   157200  2020-12-04 08:00:27 Z    0 days
Testing same since   157203  2020-12-04 12:01:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Olaf Hering <olaf@aepfle.de>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   be3755af37..0fb6dbfaa8  0fb6dbfaa859aae0e51a82a8d5e213bcc64b85f1 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 15:53:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 15:53:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44920.80350 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klDOD-0001Hq-9i; Fri, 04 Dec 2020 15:52:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44920.80350; Fri, 04 Dec 2020 15:52:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klDOD-0001Hj-6l; Fri, 04 Dec 2020 15:52:57 +0000
Received: by outflank-mailman (input) for mailman id 44920;
 Fri, 04 Dec 2020 15:52:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6t4/=FI=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1klDOB-0001He-BK
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 15:52:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b5822dfd-3a27-4a7d-9a59-66f755a37ba2;
 Fri, 04 Dec 2020 15:52:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 96BD9ACF1;
 Fri,  4 Dec 2020 15:52:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b5822dfd-3a27-4a7d-9a59-66f755a37ba2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607097173; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=/s7w6felNN5aUqALA+mgH/FY6qkLlK/W/jn3uFNlDrI=;
	b=czeW9Nar3zrUjSW/SsHTzTEmHHKypnFST72Ec/zASskOh3eG1b4Pl9cvZwAuhOWmYlIeYo
	LxQHO6/yluo/9oqnIIgeycl7S0+bClcMYzwCG6Ok9d0FJcSSSNlQHP4TeJGjgx35icyO9f
	L5mnUVYZBHHc/9tdi1p2DbXzz5ZC9Z8=
Message-ID: <c2753a9853a1dccc159e0b34db0cdf0a364d2206.camel@suse.com>
Subject: Re: [PATCH v2 04/17] xen/cpupool: switch cpupool id to unsigned
From: Dario Faggioli <dfaggioli@suse.com>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@citrix.com>, Andrew Cooper
	 <andrew.cooper3@citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	 <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Date: Fri, 04 Dec 2020 16:52:52 +0100
In-Reply-To: <20201201082128.15239-5-jgross@suse.com>
References: <20201201082128.15239-1-jgross@suse.com>
	 <20201201082128.15239-5-jgross@suse.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-G1APAdAHLbOB5//xki2j"
User-Agent: Evolution 3.38.2 (by Flathub.org) 
MIME-Version: 1.0


--=-G1APAdAHLbOB5//xki2j
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, 2020-12-01 at 09:21 +0100, Juergen Gross wrote:
> The cpupool id is an unsigned value in the public interface header,
> so there is no reason why it is a signed value in struct cpupool.
>
> Switch it to unsigned int.
>
I think we can add:

"No functional change intended"

> Signed-off-by: Juergen Gross <jgross@suse.com>
>
IAC:

Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-G1APAdAHLbOB5//xki2j
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl/KW1QACgkQFkJ4iaW4
c+6KbA/+MwD+ALsymyz5iKB+FV4FJuj7a3zN5crQmKJUEVDqJQj3T9bP083PsQNy
6g8SB4GaVYh0+J/DUl4BvEzYLiF9yDfl171gtR/AYMrTYHcALCseyRRwL69XRDC8
h8GG97v7tyaNXIFdZ38fEDtOkW6TPnUcD0EeQSS4YW0v1I6YSuRhTPwDEORNB0i4
ekkUrbSRZUzsdEE/gEwHh3qB0voDRWq5jRMjjWzZRVR1AUyQXqsp+Dfkzs3zrsSW
oT2CDyHqXBf4nYclEthzTB3Xa9jNB9yicY4fFdM86L7etJZtc+aZ+K1TE4e0ZIeh
6KJQK6o/YI5OrTqGE3CVB7BSOM9t7m4OF7uclWpOXob4UdVh95TE4XBrhozJ0gFN
TRhE7Vy4aVIWofFuPSOrK78IPRk8+EykLG8EB7ZmaJ2XkQLRwQxaM9NEUqpQvgPK
jumNrxjiS8JXXx+GV+wIJXZgZ313+SpIy3JoxmReojnj+gjq1f7wTu0cee4RelSi
1XZaBMz+Zlq0kLh93DJGkN+woQ+xMgqEpFoj83RLG1xftcFYxckhLA+0tvJaR0lj
zA3uariOhSnsJhU+gpI0wTUEeSpjoZ0ys7HpdlJnOj3ThN4fshyuK/nzbsSDr27X
l7PKgPZDnSuVRXKd6YIwidoU0n9Mco+MSBl682kahKFr9c2bULk=
=o/Gb
-----END PGP SIGNATURE-----

--=-G1APAdAHLbOB5//xki2j--



From xen-devel-bounces@lists.xenproject.org Fri Dec 04 16:13:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 16:13:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44929.80363 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klDi2-0003zU-VO; Fri, 04 Dec 2020 16:13:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44929.80363; Fri, 04 Dec 2020 16:13:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klDi2-0003zN-SB; Fri, 04 Dec 2020 16:13:26 +0000
Received: by outflank-mailman (input) for mailman id 44929;
 Fri, 04 Dec 2020 16:13:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6t4/=FI=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1klDi2-0003zI-3Q
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 16:13:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f2400642-fd5d-48c1-a86d-3a76832761c1;
 Fri, 04 Dec 2020 16:13:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6AB46AC9A;
 Fri,  4 Dec 2020 16:13:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f2400642-fd5d-48c1-a86d-3a76832761c1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607098403; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=egCyZgLYx3Tnyw2SefDaCVUfzeMrdZiWee7v5HfqyzQ=;
	b=MWQhxv3+R0Df10vNILBoIACHS2uyTSouIOOr+msMH1hQpiyf0Ouyd0lwbDajvKgpRYvLBp
	g1SwxnuAiDkZzHIsy8BP+/uH1oks22b3UeRmvu+sIc1AiZgQTs5Ef7VuTHTFY1msppw5r0
	j5G3H9mt7/Imz4vopnzkL89BkyOYrpw=
Message-ID: <fe0b6924122aa9dff2095403738f111750cc815a.camel@suse.com>
Subject: Re: [PATCH v2 05/17] xen/cpupool: switch cpupool list to normal
 list interface
From: Dario Faggioli <dfaggioli@suse.com>
To: =?ISO-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>, Jan Beulich
	 <jbeulich@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Date: Fri, 04 Dec 2020 17:13:21 +0100
In-Reply-To: <a812d9a9-a701-bb58-01bf-9375ad4feb50@suse.com>
References: <20201201082128.15239-1-jgross@suse.com>
	 <20201201082128.15239-6-jgross@suse.com>
	 <54301d8c-2d69-3206-6c42-d2638b7e7aa3@suse.com>
	 <a812d9a9-a701-bb58-01bf-9375ad4feb50@suse.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-eaXWRjy1PlWl36Kv9g80"
User-Agent: Evolution 3.38.2 (by Flathub.org) 
MIME-Version: 1.0


--=-eaXWRjy1PlWl36Kv9g80
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, 2020-12-01 at 10:18 +0100, Jürgen Groß wrote:
> On 01.12.20 10:12, Jan Beulich wrote:
> > What guarantees that you managed to find an unused ID, other
> > than at current CPU speeds it taking too long to create 4
> > billion pools? Since you're doing this under lock, wouldn't
> > it help anyway to have a global helper variable pointing at
> > the lowest pool followed by an unused ID?
>
> An admin doing that would be quite crazy and wouldn't deserve better.
>
> For being usable a cpupool needs to have a cpu assigned to it. And I
> don't think we are coming even close to 4 billion supported cpus. :-)
>
> Yes, it would be possible to create 4 billion empty cpupools, but for
> what purpose? There are simpler ways to make the system unusable with
> dom0 root access.
>
Yes, I agree. I don't think it's worth going through too much effort
for trying to deal with that.

What I'd do is:
 - add a comment here, explaining quickly exactly this fact, i.e.,
   that it's not that we've forgotten to deal with this and it's all
   on purpose. Actually, it can be either a comment here or it can be
   mentioned in the changelog. I'm fine either way
 - if we're concerned about someone doing:
     for i=1...N { xl cpupool-create foo bar }
   with N ending up being some giant number, e.g., by mistake, I don't
   think it's unreasonable to come up with a high enough (but
   certainly not in the billions!) MAX_CPUPOOLS, and stop creating new
   ones when we reach that level.

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-eaXWRjy1PlWl36Kv9g80
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl/KYCEACgkQFkJ4iaW4
c+4qvQ//bmq57+Rg88bsY3hi0xINfUp53ri8kl/5aY3cyMtbsYZ1f2THdAW7Tf8m
no4lRnUus/EvkbvkaPnEJqMS6e0ngNgNmLIKgA+VWUnP/yb/jI5G/vpeVW6IR3OR
kDcEcEkxLIXOGobAK/LzKtzUj8hvkBva+C4QHs/lzwD+Kx/HWiwcZb1mNM0WKr2v
C94yQUxeL78sVu8eQVbakjGrReDN1WmlMh5mxJhAh66N8Peeeq8SvrEInKfTyxBu
bNaDP6Ek+S+hZCvJ7JZmzQPk4YXDkz0q8IHUR/B+SLhUeKsw6fvW7aw1rsLbLlwF
cBhjbpUMezw4Vidt0AHkaPHFy6lAiEg5eqh7SXzveEhpAW8Myx8qddfQe+tlrefv
YAM5BUijILZfdVjCNyfrggIUE6SN12sCB4g53csj9RVf8ntpyAoqLAosybdV//MK
wqZAkOhnepEyvSgNRN7HLwgdDd0sCIe+qW/pdhtAJKxfAReWFQmI0nXshFuew2ss
kSEAmJIoyx2Bczl+XhHRUn/NpZhB6I0mAg84FHExbw2z9SMFeEW/6KWobFLmq187
HFkoZn7qJdensuyUisKt/BZwYU1V/E2PikTQg/sgx5sbBgz2mSFtQc5Sgn5Hwa74
WE7jo6MfAmowo0NK8aApQYpOwVOVw1OkgjsEQIWZx2TTD/1zzvs=
=QNI8
-----END PGP SIGNATURE-----

--=-eaXWRjy1PlWl36Kv9g80--



From xen-devel-bounces@lists.xenproject.org Fri Dec 04 16:16:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 16:16:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44932.80375 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klDkm-00048J-En; Fri, 04 Dec 2020 16:16:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44932.80375; Fri, 04 Dec 2020 16:16:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klDkm-00048C-Ac; Fri, 04 Dec 2020 16:16:16 +0000
Received: by outflank-mailman (input) for mailman id 44932;
 Fri, 04 Dec 2020 16:16:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov5/=FI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1klDkk-000487-O7
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 16:16:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c8b1828c-dd67-46ec-a396-4132fbe838bf;
 Fri, 04 Dec 2020 16:16:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B9BB7ACC4;
 Fri,  4 Dec 2020 16:16:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c8b1828c-dd67-46ec-a396-4132fbe838bf
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607098572; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=8wHFCJcXLxaRCE//c8KZw/xWEWSTTcGqjK8EPcNdN4s=;
	b=hxvmS2q6N3Nl4P/4aNpN+Np+nJM7kvSudLfdsp1+kBLKZw0HT6BQ0wgEZ6tvzBPW6WHNw4
	PobokZctW6TEOQB6ZEK3RpBP3nZBSEBXhlXAWcbph1TO6ie92yB1XEv0679P9AEqvStUp2
	sMPU2HKyU5TeQdhQKEo8MGLMSUjlrWs=
Subject: Re: [PATCH v2 05/17] xen/cpupool: switch cpupool list to normal list
 interface
To: Dario Faggioli <dfaggioli@suse.com>, Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-6-jgross@suse.com>
 <54301d8c-2d69-3206-6c42-d2638b7e7aa3@suse.com>
 <a812d9a9-a701-bb58-01bf-9375ad4feb50@suse.com>
 <fe0b6924122aa9dff2095403738f111750cc815a.camel@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <76e7d00f-b049-97fc-f9e2-ade76b10a93e@suse.com>
Date: Fri, 4 Dec 2020 17:16:12 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <fe0b6924122aa9dff2095403738f111750cc815a.camel@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="Na8waiugEdiEkwE6AMBr4JLGmOuQAJmic"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--Na8waiugEdiEkwE6AMBr4JLGmOuQAJmic
Content-Type: multipart/mixed; boundary="zZ5RQvfaEdQoknt99G5x3RkM1YvBQuudI";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Dario Faggioli <dfaggioli@suse.com>, Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Message-ID: <76e7d00f-b049-97fc-f9e2-ade76b10a93e@suse.com>
Subject: Re: [PATCH v2 05/17] xen/cpupool: switch cpupool list to normal list
 interface
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-6-jgross@suse.com>
 <54301d8c-2d69-3206-6c42-d2638b7e7aa3@suse.com>
 <a812d9a9-a701-bb58-01bf-9375ad4feb50@suse.com>
 <fe0b6924122aa9dff2095403738f111750cc815a.camel@suse.com>
In-Reply-To: <fe0b6924122aa9dff2095403738f111750cc815a.camel@suse.com>

--zZ5RQvfaEdQoknt99G5x3RkM1YvBQuudI
Content-Type: multipart/mixed;
 boundary="------------38956755D0F484510DE165D9"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------38956755D0F484510DE165D9
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 04.12.20 17:13, Dario Faggioli wrote:
> On Tue, 2020-12-01 at 10:18 +0100, Jürgen Groß wrote:
>> On 01.12.20 10:12, Jan Beulich wrote:
>>> What guarantees that you managed to find an unused ID, other
>>> than at current CPU speeds it taking too long to create 4
>>> billion pools? Since you're doing this under lock, wouldn't
>>> it help anyway to have a global helper variable pointing at
>>> the lowest pool followed by an unused ID?
>>
>> An admin doing that would be quite crazy and wouldn't deserve better.
>>
>> For being usable a cpupool needs to have a cpu assigned to it. And I
>> don't think we are coming even close to 4 billion supported cpus. :-)
>>
>> Yes, it would be possible to create 4 billion empty cpupools, but for
>> what purpose? There are simpler ways to make the system unusable with
>> dom0 root access.
>>
> Yes, I agree. I don't think it's worth going through too much effort
> for trying to deal with that.
>
> What I'd do is:
>   - add a comment here, explaining quickly exactly this fact, i.e.,
>     that it's not that we've forgotten to deal with this and it's all
>     on purpose. Actually, it can be either a comment here or it can be
>     mentioned in the changelog. I'm fine either way
>   - if we're concerned about someone doing:
>       for i=1...N { xl cpupool-create foo bar }
>     with N ending up being some giant number, e.g., by mistake, I don't
>     think it's unreasonable to come up with a high enough (but
>     certainly not in the billions!) MAX_CPUPOOLS, and stop creating new
>     ones when we reach that level.

Do you agree that this could be another patch?

I'm not introducing that (theoretical) problem here.


Juergen

--------------38956755D0F484510DE165D9
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------38956755D0F484510DE165D9--

--zZ5RQvfaEdQoknt99G5x3RkM1YvBQuudI--

--Na8waiugEdiEkwE6AMBr4JLGmOuQAJmic
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/KYMwFAwAAAAAACgkQsN6d1ii/Ey+M
Pwf/fithXX+j1WDC8LgFkxUA74j8tEbR21GoRHgpvFB60RrVHUkhCJAPz1IRBLSLDWr7EUgWX2X4
aJda8afRVjwioryzrCbr+q6hyrPwjJ5INWO7ZyNB8yOsK71paVCHeFxeItsMsNeFMMhteNcwzmGh
xifQh7derOa7syUbJn8Zh9cd1sMjUoc4LCL3csoEwMvAeGJWXCEtWDGZfS+72BL301exP8Qd8vc9
hf60brsA4wcqrRV2P7bNPahGVn+LCpuCqqpGFrHcj7aa7Lhh/bg06FNntNoEMlACgbf8uDnk9igJ
6HjjJzKI9Q9OldiEspvbe4YLiMq13yUruDREPpPB3g==
=Lxw8
-----END PGP SIGNATURE-----

--Na8waiugEdiEkwE6AMBr4JLGmOuQAJmic--


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 16:25:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 16:25:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44943.80387 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klDtY-0005ID-E8; Fri, 04 Dec 2020 16:25:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44943.80387; Fri, 04 Dec 2020 16:25:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klDtY-0005I6-Ac; Fri, 04 Dec 2020 16:25:20 +0000
Received: by outflank-mailman (input) for mailman id 44943;
 Fri, 04 Dec 2020 16:25:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6t4/=FI=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1klDtX-0005I1-42
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 16:25:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0d7b34b8-e71c-46b7-b9b8-76d04880ed6a;
 Fri, 04 Dec 2020 16:25:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 69512AC9A;
 Fri,  4 Dec 2020 16:25:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0d7b34b8-e71c-46b7-b9b8-76d04880ed6a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607099117; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=wCzY4tbIDpH0VVSO04WM5bndyi7WKIj6iiHn8DqcabU=;
	b=tOgvqskCaj3dWuLFXRr/D9VhuuzltZTHsNskMEoPPZKrOM13thR+ng45ha2wkM/1pRUBpi
	W6l9pepcYHAgg2312Y/csXigj1onHSVD4qJbRmhswbE+4BRtQ/IC4iyHQVY1Stm8J9iRBW
	CCD+5fqt3NymCfcgzYA0aeXssnT8aLE=
Message-ID: <967454f248c521e51b0a1be27b7d38fe243ce54e.camel@suse.com>
Subject: Re: [PATCH v2 05/17] xen/cpupool: switch cpupool list to normal
 list interface
From: Dario Faggioli <dfaggioli@suse.com>
To: =?ISO-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>, Jan Beulich
	 <jbeulich@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Date: Fri, 04 Dec 2020 17:25:16 +0100
In-Reply-To: <76e7d00f-b049-97fc-f9e2-ade76b10a93e@suse.com>
References: <20201201082128.15239-1-jgross@suse.com>
	 <20201201082128.15239-6-jgross@suse.com>
	 <54301d8c-2d69-3206-6c42-d2638b7e7aa3@suse.com>
	 <a812d9a9-a701-bb58-01bf-9375ad4feb50@suse.com>
	 <fe0b6924122aa9dff2095403738f111750cc815a.camel@suse.com>
	 <76e7d00f-b049-97fc-f9e2-ade76b10a93e@suse.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-tkkM7bhpV4NNs/gEUX5h"
User-Agent: Evolution 3.38.2 (by Flathub.org) 
MIME-Version: 1.0


--=-tkkM7bhpV4NNs/gEUX5h
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2020-12-04 at 17:16 +0100, Jürgen Groß wrote:
> On 04.12.20 17:13, Dario Faggioli wrote:
> >
> >
> > What I'd do is:
> >   - add a comment here, explaining quickly exactly this fact, i.e.,
> >     that it's not that we've forgotten to deal with this and it's
> > all
> >     on purpose. Actually, it can be either a comment here or it can
> > be
> >     mentioned in the changelog. I'm fine either way
> >   - if we're concerned about someone doing:
> >       for i=1...N { xl cpupool-create foo bar }
> >     with N ending up being some giant number, e.g., by mistake, I
> > don't
> >     think it's unreasonable to come up with a high enough (but
> >     certainly not in the billions!) MAX_CPUPOOLS, and stop creating
> > new
> >     ones when we reach that level.
>
> Do you agree that this could be another patch?
>
Ah, yes, sorry, got carried away and forgot to mention that!

Of course it should be in another patch... But indeed I should have
stated that clearly.

So, trying to do better this time round:
- the comment can/should be added as part of this patch. But I'm
  now much more convinced that a quick mention in the changelog
  (still of this patch) is actually better;
- any "solution" (Jan's or MAX_CPUPOOLS) should go in its own patch.

> I'm not introducing that (theoretical) problem here.
>
Indeed.

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-tkkM7bhpV4NNs/gEUX5h
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl/KYuwACgkQFkJ4iaW4
c+61yhAAiFAyIW7eZ3o7XapgC2t8iDJzOpKwvbOIfnpPXHqeIR6tOpJVxtPSL36E
6RVNZ7DiilBizjF3aJ2WBKRsDpldfePBciUnDqcMkFLUDhqtDC/GBh2flSzB8twl
PcfQB15sKhYAKkDN1UNad05CfVDT7jmuDZfB0iTOf78bHKS5YMPcOFUJweeCYUCe
R47D8aIKqb+jdLg2rluKAt3gt+nmXesBUyGrV7DUMRoEd2ZuaCX8c++AXut0f7sd
E4z+X1EOaqnzY8dQYRJeIXvl6ZJsDTFVR2hkIfBJnXfpo9nJ1i7xnWIn5tPvJDxp
W6ihe8MUeRCcUN7CkxnH6exKQhoAjsKNOiynLPDbiY5IicvPgAUjHrY8srbgxZJ6
BAElKkdZot4oGfi7+++vKhAQ9HWX6JU8fHw30PdJCZzr6vQ5B1SHEbCo5eD4cviF
FgiHMjd6yTmEQgkE9uPIB/1P+LfEzvzjC065RnqDHD33PX7Oyjz92zUPYuTD72eu
nA5NZr8k/daCDjkKGX/tvu2t4+Plt1g/+Hv2JiCBDgxdrXJXi9FbzCv1Zhnu+ibr
PuoFSMUpnHOuDrMJiFCxcv3ohk0iNBW+Pp2YIvNZ0HL3xlcSHxVGsBSofR7Oz1lH
DNafHI0kTp8zYCBumlj9MDYzcrpJao438fbTHzsmevRMG1uQ8bI=
=hDDv
-----END PGP SIGNATURE-----

--=-tkkM7bhpV4NNs/gEUX5h--



From xen-devel-bounces@lists.xenproject.org Fri Dec 04 16:29:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 16:29:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44950.80402 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klDxz-0005VW-3E; Fri, 04 Dec 2020 16:29:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44950.80402; Fri, 04 Dec 2020 16:29:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klDxz-0005VP-03; Fri, 04 Dec 2020 16:29:55 +0000
Received: by outflank-mailman (input) for mailman id 44950;
 Fri, 04 Dec 2020 16:29:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6t4/=FI=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1klDxx-0005VJ-Oo
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 16:29:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9b67a80c-78cf-4d5d-841d-a0c0931c7e4c;
 Fri, 04 Dec 2020 16:29:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 837E9ACC1;
 Fri,  4 Dec 2020 16:29:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9b67a80c-78cf-4d5d-841d-a0c0931c7e4c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607099391; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=SL3mwkZxHtAUx3sawZKWGup9Vk1PcbISsUT/Xs/r90c=;
	b=JcUTC05PV1ie2Y2ULY9lG3A24V4VJcVjejvd2uV+LeflyvfUrk1XS9VyFs0VPcKpnIgaCP
	v8769GqUr2FEgHfKyzCVUFzUfuU/HHYmkp2LR6H/Nwc0w6Y77HdYJ2RTaknLHJXmEHCUTf
	oWvUVRoYjJYYeDIvLjvVQaSlkFkarD0=
Message-ID: <53c1aa85b4421d90b14f4345fb5cdf77a514a877.camel@suse.com>
Subject: Re: [PATCH v2 06/17] xen/cpupool: use ERR_PTR() for returning error
 cause from cpupool_create()
From: Dario Faggioli <dfaggioli@suse.com>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@citrix.com>
Date: Fri, 04 Dec 2020 17:29:50 +0100
In-Reply-To: <20201201082128.15239-7-jgross@suse.com>
References: <20201201082128.15239-1-jgross@suse.com>
	 <20201201082128.15239-7-jgross@suse.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-GPWP1MuQ5tDtAxoJkZJ2"
User-Agent: Evolution 3.38.2 (by Flathub.org) 
MIME-Version: 1.0


--=-GPWP1MuQ5tDtAxoJkZJ2
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, 2020-12-01 at 09:21 +0100, Juergen Gross wrote:
> Instead of a pointer to an error variable as parameter just use
> ERR_PTR() to return the cause of an error in cpupool_create().
>=20
Yes... And thanks!

Happy to see more of this happening and more of the ad-hoc error
handling going away.
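For readers unfamiliar with the idiom being adopted: ERR_PTR() encodes a negative errno value directly in the returned pointer, so a constructor no longer needs a separate "int *err" out-parameter. A minimal, self-contained sketch of the pattern (simplified from the Linux/Xen style helpers; `cpupool_create_sketch` is a hypothetical stand-in, not the actual Xen code):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

#define MAX_ERRNO 4095

/* Encode a small negative errno value as a pointer in the top page. */
static inline void *ERR_PTR(long error)
{
    return (void *)(intptr_t)error;
}

/* Recover the errno value from an encoded pointer. */
static inline long PTR_ERR(const void *ptr)
{
    return (long)(intptr_t)ptr;
}

/* True iff the pointer is actually an encoded errno value. */
static inline int IS_ERR(const void *ptr)
{
    return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

struct cpupool {
    int id;
};

/* The constructor returns either a valid object or an encoded error
 * cause -- no "int *err" out-parameter needed. */
static struct cpupool *cpupool_create_sketch(int id)
{
    struct cpupool *c;

    if ( id < 0 )
        return ERR_PTR(-EINVAL);

    c = calloc(1, sizeof(*c));
    if ( !c )
        return ERR_PTR(-ENOMEM);

    c->id = id;
    return c;
}
```

Callers then test the result with IS_ERR() and, on failure, extract the cause with PTR_ERR(), which is what lets the ad-hoc error plumbing go away.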

> This propagates to scheduler_alloc(), too.
>=20
> Signed-off-by: Juergen Gross <jgross@suse.com>
>
Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

Regards
--=20
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-GPWP1MuQ5tDtAxoJkZJ2
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl/KY/4ACgkQFkJ4iaW4
c+5bDBAAq5/G7A3RbNQC4aZGOP10aor8sxDTEj17/+1ooAvQbU07DPQKIZhbGoT2
hjI7qnMB8gBStILWaV0HpYNsRj28Xu1zXZDjrVgtwa7z82y8hc5Yqf14eMANmslE
YxeVFhDlbZ+orUxz5JoCGNfOSZLP5Ak1OeG4dpK40KmunbToqiwHe55k3NGuTU5n
NbznfjMdKrWnRcJJKcOQ629xge6Bkd98dx3M/Wv0JL3VNCsVx3xWY8P+DYAysKrZ
hEy47z6oc9AGUCq4AnAisXwAIrfdHeL/VlwKteGb8i2iS0ZAxzc3q1ZeoNr8I0Mx
kftWJ159E2E72yBquIfEMj9xB8fpBqDpqLx/uUyGeVF4O4nYO2WUcjUf9hAV9E41
FI88xJga3HMp+Yse+VL2H5wQ3vJPmSsKLuv53C2bMD0vtuC678ZmnJYT9hSAidv4
C7ggl66erCQwQVV9HFx1DIkkuNpZ1fK0xigTJbgQOzfAyjFgB7XSyEMhveh34MJZ
zB8AIXERvwaVchj6pMkxOKEn1eQtUKJ1BRQRPv3PgrcVjxO9RJuqbgq0T6XvwoCo
F+GcMJV5gdE5/4bMwwubkUjFJ35JlkzxInZuDSJ7ljiEdEk79fB+voEhYApXB8Oh
nFH2Ffcaml7XVT5JXFyMGbcTMqGyzOxrUo1D+Jd8P2RpL6I8WME=
=n+ut
-----END PGP SIGNATURE-----

--=-GPWP1MuQ5tDtAxoJkZJ2--



From xen-devel-bounces@lists.xenproject.org Fri Dec 04 16:56:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 16:56:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44962.80420 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klENs-0000CX-Do; Fri, 04 Dec 2020 16:56:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44962.80420; Fri, 04 Dec 2020 16:56:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klENs-0000CQ-Ag; Fri, 04 Dec 2020 16:56:40 +0000
Received: by outflank-mailman (input) for mailman id 44962;
 Fri, 04 Dec 2020 16:56:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6t4/=FI=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1klENr-0000CL-1a
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 16:56:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cdd6546f-b220-449a-ac11-e220d4a7b2be;
 Fri, 04 Dec 2020 16:56:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4DB09ACBA;
 Fri,  4 Dec 2020 16:56:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cdd6546f-b220-449a-ac11-e220d4a7b2be
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607100997; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=cRdRUxehcl4ZfniJoAzlXcTcyM8p7bZ7J7D2+BOTM3A=;
	b=fjiYY2loQBSO+gUsGktexv/ZTlU/l1Pn3gbmQ1B66ONRV3ZzFRhhxfax/coSgZZM/mB6Gy
	2pIq4OFXHPDzwBHZznfsKc399pMaPIQ8EITLQABxZriRp0i9wEvzDIGK8ap778eLfvuGuX
	/kn5mT0z0zJeiAbQCnxUJWZ2AwbOwM4=
Message-ID: <fcb451af99564c595e5bd8d3a13aa9c2d39f670b.camel@suse.com>
Subject: Re: [PATCH v2 05/17] xen/cpupool: switch cpupool list to normal
 list interface
From: Dario Faggioli <dfaggioli@suse.com>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@citrix.com>
Date: Fri, 04 Dec 2020 17:56:36 +0100
In-Reply-To: <20201201082128.15239-6-jgross@suse.com>
References: <20201201082128.15239-1-jgross@suse.com>
	 <20201201082128.15239-6-jgross@suse.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-Y4odgjyWSGvy9y5OHjc7"
User-Agent: Evolution 3.38.2 (by Flathub.org) 
MIME-Version: 1.0


--=-Y4odgjyWSGvy9y5OHjc7
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, 2020-12-01 at 09:21 +0100, Juergen Gross wrote:
> Instead of open coding something like a linked list just use the
> available functionality from list.h.
>=20
Yep, much better.

> While adding the required new include to private.h sort the includes.
>=20
> Signed-off-by: Juergen Gross <jgross@suse.com>
>
Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

We've already discussed the issue of creating too many cpupools later in
the thread. If, as stated there, a comment or (much better, IMO) a
mention of that in the changelog is added, the R-o-b still stands.
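For context, the list.h interface the patch switches to is the embedded-node doubly linked list familiar from the Linux kernel: each cpupool embeds a list_head and container_of() recovers the enclosing structure during iteration. A minimal, self-contained sketch of that pattern (simplified; not the actual Xen list.h or cpupool code):

```c
#include <assert.h>
#include <stddef.h>

struct list_head {
    struct list_head *next, *prev;
};

static inline void INIT_LIST_HEAD(struct list_head *head)
{
    head->next = head;
    head->prev = head;
}

/* Insert item just before head, i.e. at the tail of the circular list. */
static inline void list_add_tail(struct list_head *item,
                                 struct list_head *head)
{
    item->prev = head->prev;
    item->next = head;
    head->prev->next = item;
    head->prev = item;
}

/* Recover the enclosing structure from a pointer to its embedded node. */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct cpupool {
    int id;
    struct list_head list;   /* embedded node replaces a hand-rolled "next" */
};
```

Iteration is then a loop from head->next back to the head sentinel, with container_of() turning each node back into a struct cpupool -- no open-coded next-pointer bookkeeping required.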

Regards
--=20
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-Y4odgjyWSGvy9y5OHjc7
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl/KakQACgkQFkJ4iaW4
c+5UIxAA6ZloExN3e4Iq9ew0cL/CEPKQ8fom44a30YnsXmStU2ARoLPlt2qJ+QwV
4by4e1YYvwWDBKJXIWA0TlLzAZodXcPPJ/N9GcGUQuvZw8829psyfLfc6e+MAH+T
Hj2zKpbJG8/PbReRG3hh8HUq0U+pAZ1xfdIXyR2sLsfSj4sUOv2efTl/6LiVVp42
2joGpLFmqvlbfV0TpvpvEeyLFmkwtQn/LjN/oBUWCUEM7GcI7EOtEgH/cvGlckXr
LiuKGb3q0Qd0x44hpWv45kzjz1rB8R636LrqQsz79bs1Bej2grV4oK/QI94FKWoU
mlHRwFddQvm2LWkfuMPkEgIAqRjdluBonGPz9dfKj2sI29bb6rgzjEPl6sDSwSRZ
Cdvg+JsX4tpxGuwbyAHN2gew1/ruaemj4fG+I2O2K7VM520IjDtQasJvNvDoLgX8
F84wvgpBfF1W0ugsr7kby7Lh///BzSgbpk1KLEGZy140SBwjgrgfwagLW/3fW57N
qQKtg2ecCLBr7pMm7FryR4CaMMlDbWxb7CO0MUrw+x1+bGa65mQQPNunNYzYmmke
XwwCMbCLtSmoUlrVfR+qfrS4YWIatjFnlb6vbqBQoxxHi65LwJp0gXRpU5T1VoiB
YoPb0XHUKqGAg0mgXkNNEsMdiwLaai8a3Z+DJ/hpY86uvQJwVEI=
=/d0r
-----END PGP SIGNATURE-----

--=-Y4odgjyWSGvy9y5OHjc7--



From xen-devel-bounces@lists.xenproject.org Fri Dec 04 17:41:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 17:41:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44978.80431 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klF4p-0005SX-UF; Fri, 04 Dec 2020 17:41:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44978.80431; Fri, 04 Dec 2020 17:41:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klF4p-0005SQ-RG; Fri, 04 Dec 2020 17:41:03 +0000
Received: by outflank-mailman (input) for mailman id 44978;
 Fri, 04 Dec 2020 17:41:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gNFP=FI=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1klF4o-0005SL-Io
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 17:41:02 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 26e5b9c1-2b49-481a-8dfe-bc3d0714334a;
 Fri, 04 Dec 2020 17:41:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26e5b9c1-2b49-481a-8dfe-bc3d0714334a
Date: Fri, 4 Dec 2020 09:41:00 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607103661;
	bh=soSQUAHUNCHxwjkh9H4HP2n+rYHdvJAaJr0J2mAgohU=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=hrAMXRUiyz4L+3KRwO91Xg+znFWUDkJpUWurqK4DOudjjZXuG6rXczqCT+eTc2ytn
	 ndJ+nXqiADAsczR7SmxPGL31l/p9kdvtiEkb+1EMwiFKSdt4HronBylar7UrQZhnFq
	 IJKpMOGmg85st1oAawBLbsvH4Ln0TLNEVixfvb3TZaTIoi7u6q3t0JBh7fcLS0bdfO
	 zXweQ3RD1C6Ejbs1KeyqdpNQVI23ENGcAFzxgiU66xsT/PAy9o83WmRbHqO8ZQNCFq
	 XnAYXkNAec+56GnzPS2gHTh5wUrszGy+1ckNj++27E7xuzBtsFBib4hAjyVdwEtZ7I
	 krXimMCncX1Zw==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Andrew Cooper <andrew.cooper3@citrix.com>
cc: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>, 
    paul@xen.org, 'Paul Durrant' <pdurrant@amazon.com>, 
    'Eslam Elnikety' <elnikety@amazon.com>, 'Ian Jackson' <iwj@xenproject.org>, 
    'Wei Liu' <wl@xen.org>, 'Anthony PERARD' <anthony.perard@citrix.com>, 
    'George Dunlap' <george.dunlap@citrix.com>, 
    'Stefano Stabellini' <sstabellini@kernel.org>, 
    'Christian Lindig' <christian.lindig@citrix.com>, 
    'David Scott' <dave@recoil.org>, 
    'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>, 
    =?UTF-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>, 
    xen-devel@lists.xenproject.org
Subject: Re: [PATCH v5 1/4] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_evtchn_fifo, ...
In-Reply-To: <5de9f051-4071-4e09-528c-c1fb8345dc25@citrix.com>
Message-ID: <alpine.DEB.2.21.2012040940160.32240@sstabellini-ThinkPad-T480s>
References: <20201203124159.3688-1-paul@xen.org> <20201203124159.3688-2-paul@xen.org> <fea91a65-1d7c-cd46-81a2-9a6bcb690ed1@suse.com> <00ee01d6c98b$507af1c0$f170d540$@xen.org> <8a4a2027-0df3-aee2-537a-3d2814b329ec@suse.com> <00f601d6c996$ce3908d0$6aab1a70$@xen.org>
 <946280c7-c7f7-c760-c0d3-db91e6cde68a@suse.com> <011201d6ca16$ae14ac50$0a3e04f0$@xen.org> <4fb9fb4c-5849-25f1-ff72-ba3a046d3fd8@suse.com> <df1df316-9512-7b0c-fde1-aa4fc60ac70b@xen.org> <5de9f051-4071-4e09-528c-c1fb8345dc25@citrix.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1843504570-1607103661=:32240"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1843504570-1607103661=:32240
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Fri, 4 Dec 2020, Andrew Cooper wrote:
> On 04/12/2020 11:45, Julien Grall wrote:
> > Hi,
> >
> > I haven't looked at the series yet. Just adding some thoughts on why
> > one would want such option.
> >
> > On 04/12/2020 09:43, Jan Beulich wrote:
> >> On 04.12.2020 09:22, Paul Durrant wrote:
> >>>> From: Jan Beulich <jbeulich@suse.com>
> >>>> Sent: 04 December 2020 07:53
> >>>>
> >>>> On 03.12.2020 18:07, Paul Durrant wrote:
> >>>>>> From: Jan Beulich <jbeulich@suse.com>
> >>>>>> Sent: 03 December 2020 15:57
> >>>>>>
> >>>>>> ... this sounds to me more like workarounds for buggy guests than
> >>>>>> functionality the hypervisor _needs_ to have. (I can appreciate
> >>>>>> the specific case here for the specific scenario you provide as
> >>>>>> an exception.)
> >>>>>
> >>>>> If we want to have a hypervisor that can be used in a cloud
> >>>>> environment
> >>>>> then Xen absolutely needs this capability.
> >>>>
> >>>> As per above you can conclude that I'm still struggling to see the
> >>>> "why" part here.
> >>>>
> >>>
> >>> Imagine you are a customer. You boot your OS and everything is just
> >>> fine... you run your workload and all is good. You then shut down
> >>> your VM and re-start it. Now it starts to crash. Who are you going
> >>> to blame? You did nothing to your OS or application s/w, so you are
> >>> going to blame the cloud provider of course.
> >>
> >> That's a situation OSes are in all the time. Buggy applications may
> >> stop working on newer OS versions. It's still the application that's
> >> in need of updating then. I guess OSes may choose to work around
> >> some very common applications' bugs, but I'd then wonder on what
> >> basis "very common" gets established. I dislike the underlying
> >> asymmetry / inconsistency (if not unfairness) of such a model,
> >> despite seeing that there may be business reasons leading people to
> >> think they want something like this.
> >
> > The discussion seems to be geared towards buggy guests so far. However,
> > this is not the only reason that one may want to avoid exposing some
> > features:
> >
> >    1) From the recent security issues (such as XSA-343), a knob to
> > disable FIFO would be quite beneficial for vendors that don't need
> > the feature.
> >
> >    2) Fleet management purpose. You may have a fleet with multiple
> > versions of Xen. You don't want your customer to start relying on
> > features that may not be available on all the hosts otherwise it
> > complicates the guest placement.
> >
> > FAOD, I am sure there might be other features that need to be
> > disabled. But we have to start somewhere :).
> 
> Absolutely top of the list, importance-wise, is so we can test different
> configurations, without needing to rebuild the hypervisor (and to a
> lesser extent, without having to reboot).
> 
> It is a mistake that events/grants/etc were ever available unilaterally
> in HVM guests.  This is definitely a step in the right direction (but I
> thought it would be too rude to ask Paul to make all of those CDF flags
> at once).

+1

For FuSa we'll need to be able to disable them at some point soon.
--8323329-1843504570-1607103661=:32240--


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 17:42:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 17:42:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44983.80444 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klF66-0005ZN-A2; Fri, 04 Dec 2020 17:42:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44983.80444; Fri, 04 Dec 2020 17:42:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klF66-0005ZG-6P; Fri, 04 Dec 2020 17:42:22 +0000
Received: by outflank-mailman (input) for mailman id 44983;
 Fri, 04 Dec 2020 17:42:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klF64-0005Z5-UR; Fri, 04 Dec 2020 17:42:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klF64-0000Eb-PB; Fri, 04 Dec 2020 17:42:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klF64-0000JH-Fq; Fri, 04 Dec 2020 17:42:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1klF64-0006TG-FJ; Fri, 04 Dec 2020 17:42:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=IHcP3V3wyJZBR9mYAjJwXnCS+RnLBjSIKKEhqjh5QYw=; b=fzxTdZAstpRhMRIFoMdCjCO367
	QRBNEwuvdwOrDluMvf7fEFlO9rp2RGIJpEPVTmCaNplxivtUvUXNWJ62xI0b30JKeIMGIFxEXrpUT
	KPatfMqUp9/kRBIxiV5XJd9cFDuI6ix7juh31jOT24A4KEp1zfqRRM/mArMd/6qTvU9A=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157198-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157198: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 04 Dec 2020 17:42:20 +0000

flight 157198 qemu-mainline real [real]
flight 157207 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157198/
http://logs.test-lab.xenproject.org/osstest/logs/157207/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  106 days
Failing since        152659  2020-08-21 14:07:39 Z  105 days  219 attempts
Testing same since   157142  2020-12-01 20:39:57 Z    2 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69355 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 17:45:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 17:45:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.44991.80459 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klF8q-0005mK-0Z; Fri, 04 Dec 2020 17:45:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 44991.80459; Fri, 04 Dec 2020 17:45:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klF8p-0005mD-Sz; Fri, 04 Dec 2020 17:45:11 +0000
Received: by outflank-mailman (input) for mailman id 44991;
 Fri, 04 Dec 2020 17:45:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WR05=FI=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1klF8o-0005m8-N7
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 17:45:10 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b76e0de4-8c5f-4e5a-9747-ace8915f83b0;
 Fri, 04 Dec 2020 17:45:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b76e0de4-8c5f-4e5a-9747-ace8915f83b0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1607103909;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=OBBGy0WLqKjT3nvC12cDtX+iMRfhvEvkV/BunmaHmXs=;
  b=N/ZyKn400D5qbgflR4ecqWQZxuNBTFBl/3dzOWjg0rVW31q4zK0UlJGP
   jaAXuJUrxMS+irXMFTarPifwv8WkxRhSURSqx4ks+bLIBO6B31NUeBCtM
   DErVmuVY2ECGmdNxJEEKIrQ6g404sAAmLMIXxrhX8uO30zHiHPX9hb+kI
   Q=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: rsTt0SrV/L0VbMoEC1QBaTjtHWTfhbFKTXSapBcp3SzNNvCyGkaFW/sV3UGGIn8iTfN3i98Vlz
 dM23wto49vc59CSEVCJOIkVPNXHeNdjyBpqBTeS6SDjDOA8Rj9/2WHhYK+r+96qf1RNmPT+E+/
 phR3QKxHNoXbsWuweVn26unYuQZ39a43vnX4sZQYYLIsg8oGihWWKD5wYVzd1CmsuQwl3cbAvN
 wgwWWmJ1irJu0pM/H+s/UyLGdsTBxym437AgW0O4a6TTmyo/WurNIJiIgHn+ni2VlJvWOnVLaG
 mHs=
X-SBRS: 5.1
X-MesageID: 32902315
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,393,1599537600"; 
   d="scan'208";a="32902315"
Subject: Re: [PATCH v5 1/4] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_evtchn_fifo, ...
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>,
	<paul@xen.org>, 'Paul Durrant' <pdurrant@amazon.com>, 'Eslam Elnikety'
	<elnikety@amazon.com>, 'Ian Jackson' <iwj@xenproject.org>, 'Wei Liu'
	<wl@xen.org>, 'Anthony PERARD' <anthony.perard@citrix.com>, 'George Dunlap'
	<george.dunlap@citrix.com>, 'Christian Lindig' <christian.lindig@citrix.com>,
	'David Scott' <dave@recoil.org>, 'Volodymyr Babchuk'
	<Volodymyr_Babchuk@epam.com>, =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?=
	<roger.pau@citrix.com>, <xen-devel@lists.xenproject.org>
References: <20201203124159.3688-1-paul@xen.org>
 <20201203124159.3688-2-paul@xen.org>
 <fea91a65-1d7c-cd46-81a2-9a6bcb690ed1@suse.com>
 <00ee01d6c98b$507af1c0$f170d540$@xen.org>
 <8a4a2027-0df3-aee2-537a-3d2814b329ec@suse.com>
 <00f601d6c996$ce3908d0$6aab1a70$@xen.org>
 <946280c7-c7f7-c760-c0d3-db91e6cde68a@suse.com>
 <011201d6ca16$ae14ac50$0a3e04f0$@xen.org>
 <4fb9fb4c-5849-25f1-ff72-ba3a046d3fd8@suse.com>
 <df1df316-9512-7b0c-fde1-aa4fc60ac70b@xen.org>
 <5de9f051-4071-4e09-528c-c1fb8345dc25@citrix.com>
 <alpine.DEB.2.21.2012040940160.32240@sstabellini-ThinkPad-T480s>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <7184a2de-f711-9683-3db6-7b880def022d@citrix.com>
Date: Fri, 4 Dec 2020 17:45:00 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2012040940160.32240@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 04/12/2020 17:41, Stefano Stabellini wrote:
>>> FAOD, I am sure there might be other features that need to be
>>> disabled. But we have to start somewhere :).
>> Absolutely top of the list, importance wise, is so we can test different
>> configurations, without needing to rebuild the hypervisor (and to a
>> lesser extent, without having to reboot).
>>
>> It is a mistake that events/grants/etc were ever available unilaterally
>> in HVM guests.  This is definitely a step in the right direction (but I
>> thought it would be too rude to ask Paul to make all of those CDF flags
>> at once).
> +1
>
> For FuSa we'll need to be able to disable them at some point soon.

FWIW, I have a proper plan for this stuff, which starts alongside the
fixed toolstack ABI, and will cover all aspects of optional
functionality in a domain.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 18:13:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 18:13:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45001.80470 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klFZk-0000Yk-95; Fri, 04 Dec 2020 18:13:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45001.80470; Fri, 04 Dec 2020 18:13:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klFZk-0000Yd-5p; Fri, 04 Dec 2020 18:13:00 +0000
Received: by outflank-mailman (input) for mailman id 45001;
 Fri, 04 Dec 2020 18:12:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klFZi-0000YU-7y; Fri, 04 Dec 2020 18:12:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klFZh-0000wy-Sl; Fri, 04 Dec 2020 18:12:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klFZh-0002au-IJ; Fri, 04 Dec 2020 18:12:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1klFZh-0000UN-Ho; Fri, 04 Dec 2020 18:12:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=h57diYszztX9jj86n7JapQMyfXHBa/cdla38MExp588=; b=qQjrYxqAWfGMHCqaaZ3bT7D9zf
	tlV4J2/f0G+h5/54PrcvKFkmR/Il9wENLOwbrLImVjYVMpmqLmjd09qxc+FfrZE8EzeLEwfylZgvG
	kv2AqZFOrN2FB647bnuNq10akf4Awf60m+AzUyL9u+CKrEx9r0JpYuuzcqBaM9vAeVQg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157206-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157206: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5e666356a9d55fbd9eb5b8506088aa760e107b5b
X-Osstest-Versions-That:
    xen=0fb6dbfaa859aae0e51a82a8d5e213bcc64b85f1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 04 Dec 2020 18:12:57 +0000

flight 157206 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157206/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5e666356a9d55fbd9eb5b8506088aa760e107b5b
baseline version:
 xen                  0fb6dbfaa859aae0e51a82a8d5e213bcc64b85f1

Last test of basis   157203  2020-12-04 12:01:25 Z    0 days
Testing same since   157206  2020-12-04 16:02:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Paul Durrant <pdurrant@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0fb6dbfaa8..5e666356a9  5e666356a9d55fbd9eb5b8506088aa760e107b5b -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 18:33:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 18:33:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45018.80486 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klFts-0002ll-07; Fri, 04 Dec 2020 18:33:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45018.80486; Fri, 04 Dec 2020 18:33:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klFtr-0002le-Sw; Fri, 04 Dec 2020 18:33:47 +0000
Received: by outflank-mailman (input) for mailman id 45018;
 Fri, 04 Dec 2020 18:33:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Kc/5=FI=amazon.co.uk=prvs=6003cbb93=pdurrant@srs-us1.protection.inumbo.net>)
 id 1klFtq-0002lZ-H9
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 18:33:46 +0000
Received: from smtp-fw-2101.amazon.com (unknown [72.21.196.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6198d457-bf0f-4862-8914-c5112ad2a167;
 Fri, 04 Dec 2020 18:33:45 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-1d-74cf8b49.us-east-1.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-2101.iad2.amazon.com with ESMTP;
 04 Dec 2020 18:33:39 +0000
Received: from EX13D03EUC002.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan2.iad.amazon.com [10.40.163.34])
 by email-inbound-relay-1d-74cf8b49.us-east-1.amazon.com (Postfix) with ESMTPS
 id 80E28C05B9; Fri,  4 Dec 2020 18:33:35 +0000 (UTC)
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D03EUC002.ant.amazon.com (10.43.164.60) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 4 Dec 2020 18:33:34 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Fri, 4 Dec 2020 18:33:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6198d457-bf0f-4862-8914-c5112ad2a167
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
  s=amazon201209; t=1607106825; x=1638642825;
  h=from:to:cc:date:message-id:references:in-reply-to:
   content-transfer-encoding:mime-version:subject;
  bh=nTlTxptuem5odOY+jcWjS5Pof00CGGujTgfBoho2voo=;
  b=HufR3j87fRE9mNMF2tnARK72q9LoYqWEQncqUJk0Crc2Gn0fnAD9Qm1v
   ngc3uw+Of41RSEbutKqgzOTevuQQvdrDlCuO1EMTPLCUkDh5F21gySmQX
   yrkRM5ohXELHDbkOf/M62X/WG9C3/aDz2OXzAfUiF2sVNyXSb13HqkkFH
   U=;
X-IronPort-AV: E=Sophos;i="5.78,393,1599523200"; 
   d="scan'208";a="67403274"
Subject: RE: [PATCH v5 1/4] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_evtchn_fifo, ...
Thread-Topic: [PATCH v5 1/4] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_evtchn_fifo, ...
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>,
	"paul@xen.org" <paul@xen.org>, "Elnikety, Eslam" <elnikety@amazon.com>, "'Ian
 Jackson'" <iwj@xenproject.org>, 'Wei Liu' <wl@xen.org>, 'Anthony PERARD'
	<anthony.perard@citrix.com>, 'George Dunlap' <george.dunlap@citrix.com>,
	'Christian Lindig' <christian.lindig@citrix.com>, 'David Scott'
	<dave@recoil.org>, 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
	=?utf-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Thread-Index: AQJzrCHAX/gquRkO4g65wfjgQBytUAJ7xs67Aj6rtzSoheWkIIAACFOAgAATywCAAPdoAIAACFeAgAAWa4CAACI7AIAAAc2AgABhdgCAAAEeAIAADUEw
Date: Fri, 4 Dec 2020 18:33:34 +0000
Message-ID: <19beb5b5a651415c83dfbdaa533e7bed@EX13D32EUC003.ant.amazon.com>
References: <20201203124159.3688-1-paul@xen.org>
 <20201203124159.3688-2-paul@xen.org>
 <fea91a65-1d7c-cd46-81a2-9a6bcb690ed1@suse.com>
 <00ee01d6c98b$507af1c0$f170d540$@xen.org>
 <8a4a2027-0df3-aee2-537a-3d2814b329ec@suse.com>
 <00f601d6c996$ce3908d0$6aab1a70$@xen.org>
 <946280c7-c7f7-c760-c0d3-db91e6cde68a@suse.com>
 <011201d6ca16$ae14ac50$0a3e04f0$@xen.org>
 <4fb9fb4c-5849-25f1-ff72-ba3a046d3fd8@suse.com>
 <df1df316-9512-7b0c-fde1-aa4fc60ac70b@xen.org>
 <5de9f051-4071-4e09-528c-c1fb8345dc25@citrix.com>
 <alpine.DEB.2.21.2012040940160.32240@sstabellini-ThinkPad-T480s>
 <7184a2de-f711-9683-3db6-7b880def022d@citrix.com>
In-Reply-To: <7184a2de-f711-9683-3db6-7b880def022d@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.164.90]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
Precedence: Bulk

PiAtLS0tLU9yaWdpbmFsIE1lc3NhZ2UtLS0tLQ0KPiBGcm9tOiBBbmRyZXcgQ29vcGVyIDxhbmRy
ZXcuY29vcGVyM0BjaXRyaXguY29tPg0KPiBTZW50OiAwNCBEZWNlbWJlciAyMDIwIDE3OjQ1DQo+
IFRvOiBTdGVmYW5vIFN0YWJlbGxpbmkgPHNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+DQo+IENjOiBK
dWxpZW4gR3JhbGwgPGp1bGllbkB4ZW4ub3JnPjsgSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2Uu
Y29tPjsgcGF1bEB4ZW4ub3JnOyBEdXJyYW50LCBQYXVsDQo+IDxwZHVycmFudEBhbWF6b24uY28u
dWs+OyBFbG5pa2V0eSwgRXNsYW0gPGVsbmlrZXR5QGFtYXpvbi5jb20+OyAnSWFuIEphY2tzb24n
IDxpd2pAeGVucHJvamVjdC5vcmc+Ow0KPiAnV2VpIExpdScgPHdsQHhlbi5vcmc+OyAnQW50aG9u
eSBQRVJBUkQnIDxhbnRob255LnBlcmFyZEBjaXRyaXguY29tPjsgJ0dlb3JnZSBEdW5sYXAnDQo+
IDxnZW9yZ2UuZHVubGFwQGNpdHJpeC5jb20+OyAnQ2hyaXN0aWFuIExpbmRpZycgPGNocmlzdGlh
bi5saW5kaWdAY2l0cml4LmNvbT47ICdEYXZpZCBTY290dCcNCj4gPGRhdmVAcmVjb2lsLm9yZz47
ICdWb2xvZHlteXIgQmFiY2h1aycgPFZvbG9keW15cl9CYWJjaHVrQGVwYW0uY29tPjsgJ1JvZ2Vy
IFBhdSBNb25uw6knDQo+IDxyb2dlci5wYXVAY2l0cml4LmNvbT47IHhlbi1kZXZlbEBsaXN0cy54
ZW5wcm9qZWN0Lm9yZw0KPiBTdWJqZWN0OiBSRTogW0VYVEVSTkFMXSBbUEFUQ0ggdjUgMS80XSBk
b21jdGw6IGludHJvZHVjZSBhIG5ldyBkb21haW4gY3JlYXRlIGZsYWcsDQo+IFhFTl9ET01DVExf
Q0RGX2V2dGNobl9maWZvLCAuLi4NCj4gDQo+IENBVVRJT046IFRoaXMgZW1haWwgb3JpZ2luYXRl
ZCBmcm9tIG91dHNpZGUgb2YgdGhlIG9yZ2FuaXphdGlvbi4gRG8gbm90IGNsaWNrIGxpbmtzIG9y
IG9wZW4NCj4gYXR0YWNobWVudHMgdW5sZXNzIHlvdSBjYW4gY29uZmlybSB0aGUgc2VuZGVyIGFu
ZCBrbm93IHRoZSBjb250ZW50IGlzIHNhZmUuDQo+IA0KPiANCj4gDQo+IE9uIDA0LzEyLzIwMjAg
MTc6NDEsIFN0ZWZhbm8gU3RhYmVsbGluaSB3cm90ZToNCj4gPj4+IEZBT0QsIEkgYW0gc3VyZSB0
aGVyZSBtaWdodCBiZSBvdGhlciBmZWF0dXJlcyB0aGF0IG5lZWQgdG8gYmUNCj4gPj4+IGRpc2Fi
bGVkLiBCdXQgd2UgaGF2ZSB0byBzdGFydCBzb21ld2hlcmUgOikuDQo+ID4+IEFic29sdXRlbHkg
dG9wIG9mIHRoZSBsaXN0LCBpbXBvcnRhbmNlIHdpc2UsIGlzIHNvIHdlIGNhbiB0ZXN0IGRpZmZl
cmVudA0KPiA+PiBjb25maWd1cmF0aW9ucywgd2l0aG91dCBuZWVkaW5nIHRvIHJlYnVpbGQgdGhl
IGh5cGVydmlzb3IgKGFuZCB0byBhDQo+ID4+IGxlc3NlciBleHRlbnQsIHdpdGhvdXQgaGF2aW5n
IHRvIHJlYm9vdCkuDQo+ID4+DQo+ID4+IEl0IGlzIGEgbWlzdGFrZSB0aGF0IGV2ZW50cy9ncmFu
dHMvZXRjIHdlcmUgZXZlciBhdmFpbGFibGUgdW5pbGF0ZXJhbGx5DQo+ID4+IGluIEhWTSBndWVz
dHMuICBUaGlzIGlzIGRlZmluaXRlbHkgYSBzdGVwIGluIHRoZSByaWdodCBkaXJlY3Rpb24gKGJ1
dCBJDQo+ID4+IHRob3VnaHQgaXQgd291bGQgYmUgdG9vIHJ1ZGUgdG8gYXNrIFBhdWwgdG8gbWFr
ZSBhbGwgb2YgdGhvc2UgQ0RGIGZsYWdzDQo+ID4+IGF0IG9uY2UpLg0KPiA+ICsxDQo+ID4NCj4g
PiBGb3IgRnVTYSB3ZSdsbCBuZWVkIHRvIGJlIGFibGUgdG8gZGlzYWJsZSB0aGVtIGF0IHNvbWUg
cG9pbnQgc29vbi4NCj4gDQo+IEZXSVcsIEkgaGF2ZSBhIHByb3BlciBwbGFuIGZvciB0aGlzIHN0
dWZmLCB3aGljaCBzdGFydCBhbG9uZ3NpZGUgdGhlDQo+IGZpeGVkIHRvb2xzdGFjayBBQkksIGFu
ZCB3aWxsIGNvdmVyIGFsbCBhc3BlY3RzIG9mIG9wdGlvbmFsDQo+IGZ1bmN0aW9uYWxpdHkgaW4g
YSBkb21haW4uDQo+IA0KDQpPSy4gQ2FuIHdlIGxpdmUgd2l0aCB0aGlzIHNlcmllcyBhcyBpdCBz
dGFuZHMgdW50aWwgdGhhdCBwb2ludD8gVGhlcmUgaXMgc29tZSB1cmdlbmN5IHRvIGdldCBhdCBs
ZWFzdCB0aGVzZSB0d28gdGhpbmdzIGZpeGVkLg0KDQogIFBhdWwNCg0KPiB+QW5kcmV3DQo=


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 19:16:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 19:16:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45028.80504 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klGYS-00073E-6O; Fri, 04 Dec 2020 19:15:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45028.80504; Fri, 04 Dec 2020 19:15:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klGYS-000737-1K; Fri, 04 Dec 2020 19:15:44 +0000
Received: by outflank-mailman (input) for mailman id 45028;
 Fri, 04 Dec 2020 19:15:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VOLy=FI=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1klGYQ-000732-EK
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 19:15:42 +0000
Received: from mail-wr1-x443.google.com (unknown [2a00:1450:4864:20::443])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7a54f2da-c03e-4546-b076-9921267ff6a5;
 Fri, 04 Dec 2020 19:15:41 +0000 (UTC)
Received: by mail-wr1-x443.google.com with SMTP id u12so6401116wrt.0
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 11:15:41 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a54f2da-c03e-4546-b076-9921267ff6a5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=BInFkaBTq7RoiD4QqVvt4DOGstC1qQ0b1UfsOmig1cs=;
        b=LdvavkxtQ5JGuVRfBnEdHRXwjjLu43bygmUViNH2uAv46JPpp2MdcwBgal6IZkxWlN
         mpb9UMFMltpXzM5991itulvqZWi4Vamlx1hy5nS5gp930Nv5zi4FywOAQRyv4OPE6t/9
         WP8uXpZPX9o/PKaPi4QN9fa/pTMf+4sESbe6MYqZeaARqAFvP911KbGwIll9cJqmRc8t
         pP33aELd/4zgOGi2i95uu8+KpPAm/4KL8lpp+aiXihfjh4kRbYcan8Yp27F/3F7OIH7I
         oB4HRVJnPG+RFMONdHGtFhR5BvXh9kJi9uH0/OnD733u1kp785PHz8J6FC7qGphXvd0n
         QbYQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=BInFkaBTq7RoiD4QqVvt4DOGstC1qQ0b1UfsOmig1cs=;
        b=kyLFM00lcpmToRMLbqTm11UrAwu1eg45FLkPNFb+X1j2n2aC8/bJ0qoQbduQn7vzAU
         oIQF6qjmjnC6vrDQ6MnpUYS9PEZN7UmGgXzGeJJvRwKS0bNNzZ02CoJId7CvwLMcFxLQ
         4+59BZlfKklZcic6URX+7HQuO5xmBTrOkcwkzKib/wZV+WyVcwN3Jbf61qvqt6TE4KWj
         eoGfkOLMhAtfWUoV6NYYFL6BrEotbmqXOXiKs7MASIcAnRHXiU+9O37O4JwUE1FnBs4t
         OL5JskpWLt5XLHg4L9KzYvbP3FpzAbyd7GQJucoroy2Qo3VYA55UMCjoIZmm4HmHstFf
         wCDg==
X-Gm-Message-State: AOAM530+mIDcb0eLMMsLDPAfoKQ6ZrlOx94S9SAYcWgzLtIeEI1KO/U+
	qDVTF8esxTfVTgIBrDv0nbevl9nI0cqK1CoeJYg=
X-Google-Smtp-Source: ABdhPJxGfACeZ+AiB9ITaXrMsVNBAESh8C5YWoCwkr+X3R9NPqoG13ppIGczSR1/wGtsorDbnxUMavp4WNbcvB/d0fk=
X-Received: by 2002:a5d:68ce:: with SMTP id p14mr3863673wrw.386.1607109340552;
 Fri, 04 Dec 2020 11:15:40 -0800 (PST)
MIME-Version: 1.0
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d821c715-966a-b48b-a877-c5dac36822f0@suse.com> <17c90493-b438-fbc1-ca10-3bc4d89c4e5e@xen.org>
 <7a768bcd-80c1-d193-8796-7fb6720fa22a@suse.com> <1a8250f5-ea49-ac3a-e992-be7ec40deba9@xen.org>
 <CABfawhkQcUD4f62zpg0cyrdQgG82XtpYRZZ_-50hjagooT530A@mail.gmail.com> <5862eb24-d894-455a-13ac-61af54f949e7@xen.org>
In-Reply-To: <5862eb24-d894-455a-13ac-61af54f949e7@xen.org>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Fri, 4 Dec 2020 14:15:05 -0500
Message-ID: <CABfawhkWQiOhLL8f3NzoWbeuag-f+YOOK0i_LJzZq5Yvoh=oHQ@mail.gmail.com>
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Julien Grall <julien@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Tamas K Lengyel <lengyelt@ainfosec.com>, 
	Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>, Alexandru Isaila <aisaila@bitdefender.com>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Fri, Dec 4, 2020 at 10:29 AM Julien Grall <julien@xen.org> wrote:
>
>
>
> On 04/12/2020 15:21, Tamas K Lengyel wrote:
> > On Fri, Dec 4, 2020 at 6:29 AM Julien Grall <julien@xen.org> wrote:
> >>
> >> Hi Jan,
> >>
> >> On 03/12/2020 10:09, Jan Beulich wrote:
> >>> On 02.12.2020 22:10, Julien Grall wrote:
> >>>> On 23/11/2020 13:30, Jan Beulich wrote:
> >>>>> While there don't look to be any problems with this right now, the lock
> >>>>> order implications from holding the lock can be very difficult to follow
> >>>>> (and may be easy to violate unknowingly). The present callbacks don't
> >>>>> (and no such callback should) have any need for the lock to be held.
> >>>>>
> >>>>> However, vm_event_disable() frees the structures used by respective
> >>>>> callbacks and isn't otherwise synchronized with invocations of these
> >>>>> callbacks, so maintain a count of in-progress calls, for evtchn_close()
> >>>>> to wait to drop to zero before freeing the port (and dropping the lock).
> >>>>
> >>>> AFAICT, this callback is not the only place where the synchronization is
> >>>> missing in the VM event code.
> >>>>
> >>>> For instance, vm_event_put_request() can also race against
> >>>> vm_event_disable().
> >>>>
> >>>> So shouldn't we handle this issue properly in VM event?
> >>>
> >>> I suppose that's a question to the VM event folks rather than me?
> >>
> >> Yes. From my understanding of Tamas's e-mail, they are relying on the
> >> monitoring software to do the right thing.
> >>
> >> I will refrain from commenting on this approach. However, given that the
> >> race is much wider than the event channel, I would recommend not adding
> >> more code in the event channel to deal with such a problem.
> >>
> >> Instead, this should be fixed in the VM event code when someone has time
> >> to harden the subsystem.
> >
> > I double-checked and the disable route is actually more robust, we
> > don't just rely on the toolstack doing the right thing. The domain
> > gets paused before any calls to vm_event_disable. So I don't think
> > there is really a race-condition here.
>
> The code will *only* pause the monitored domain. I can see two issues:
>     1) The toolstack is still sending events while destroy is happening.
> This is the race discussed here.
>     2) The implementation of vm_event_put_request() suggests that it can be
> called with a non-current domain.
>
> I don't see how just pausing the monitored domain is enough here.

Requests only get generated by the monitored domain. So if the domain
is not running, you won't get more of them. The toolstack can only send
replies.

Tamas
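The in-progress-call counting that the patch description above outlines (bump a counter around each callback invocation while the per-channel lock is dropped, and have the close path wait for the count to reach zero before freeing the port) can be sketched roughly as follows. This is a simplified, hypothetical illustration, not the actual Xen code; the struct and function names are invented for the sketch:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <sched.h>

/* Hypothetical sketch of the scheme described in the patch: a counter of
 * in-progress consumer callbacks, so that closing the port can wait for
 * all running callbacks to drain before freeing its resources. */
struct port {
    atomic_int active_calls;            /* callbacks currently executing */
    bool closing;                       /* set once close has started */
    void (*xen_consumer)(struct port *p);
};

static void invoke_callback(struct port *p)
{
    /* Announce the call before dropping the per-channel lock (not shown),
     * so the close path knows a callback is in flight. */
    atomic_fetch_add(&p->active_calls, 1);
    p->xen_consumer(p);                 /* runs WITHOUT the lock held */
    atomic_fetch_sub(&p->active_calls, 1);
}

static void close_port(struct port *p)
{
    p->closing = true;
    /* Wait for in-progress callbacks to drain before freeing the port. */
    while (atomic_load(&p->active_calls) > 0)
        sched_yield();
    /* now safe to free the port's resources */
}
```

The point of the counter is that it decouples the callback's lifetime from the per-channel lock: callbacks never run under the lock, yet the close path still gets a guarantee that none are mid-flight when the structures are freed.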


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 19:22:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 19:22:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45035.80516 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klGf1-00087k-TC; Fri, 04 Dec 2020 19:22:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45035.80516; Fri, 04 Dec 2020 19:22:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klGf1-00087d-Pn; Fri, 04 Dec 2020 19:22:31 +0000
Received: by outflank-mailman (input) for mailman id 45035;
 Fri, 04 Dec 2020 19:22:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bFnI=FI=gmail.com=julien.grall.oss@srs-us1.protection.inumbo.net>)
 id 1klGf0-00087Y-Hm
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 19:22:30 +0000
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d24ba10b-1fc0-4efe-8960-61b4f6b10542;
 Fri, 04 Dec 2020 19:22:29 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id k14so6396526wrn.1
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 11:22:29 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d24ba10b-1fc0-4efe-8960-61b4f6b10542
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=U2r/GetwmXYgHPqz3uMBX3bV+rng2hbj+1NDhSnegOk=;
        b=BkPXYCGj8BrbumW/i/cfLOlPlRpXEAfc+g9HVmv5dw+vpVEQcn5Pa+7CFSiHZcAQ5J
         UnKJUlw0bkv3np1kUrdvzSgL/RfOwPSavL0PldrFvQT2LxfRaX7BcqYuXV0zhBcNxs2D
         HpsdQzrT88T7yR0FSaATtj8w1ok3fqf8AKAcTTwO3yhxW5Z99lw4C4b05BLBWJvNMWJ9
         ZxAgu+wWgnQo58jaj7xFvf/KK1RG8oDx8JkNWioijsAtvGP+ev79E6HaxSOrXmjx/IgG
         5SOtwMfqjH67BIT2JL9bLIRhopCKsP6ZmUg+JqQVSsprSFup+oupsPMutejNDkv0FmVA
         7pzA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=U2r/GetwmXYgHPqz3uMBX3bV+rng2hbj+1NDhSnegOk=;
        b=h7mqt5ZTRaJKlIXMsCq0YpfrRU9i0iA5M5vYNe18hSnUJ1TRV2MKT+7dcV67UTg0NX
         EHfD5/AbRh5M5zSyYKUSKN4cSxIgOok8IHlXKhXGK3NLO6kX/lP6vCDV+9EhgZsAh99D
         YC5LGxDhPnWbtxG5SRpVGbcIFyVa9SZx1vzaStRcwUe7iLt1tuyy2NsUvW/ldzl07uSA
         lBZsOta1H2nTQ68OrMlQwu6NpW1P7LoxuXoR7rhQmkFckcFsroHHUWjO13+iTXp5CP47
         sU88WJQqJPY5m57sM4qHa8xhkA7AgOS3bU0FJOsqOFdaeZPdS1YIfspADU8zNiql8FVi
         h6aQ==
X-Gm-Message-State: AOAM532JH68Qg5oKzcNtI14IBCKU2ZNarlzUgP5qKMsJOwUCAMYaAs0l
	vbAsfrOF3WeJ7POvSrxLX3Oya1IyzU4m2MAAbVw=
X-Google-Smtp-Source: ABdhPJxlFwBeu2JJ6+d7r5NEFIMvlPxH6ytkM4SCA8GWAmdkJAL2Isuy4IR1G4KDNDSMhEzUBQVsbl5p+3pezL3vXuU=
X-Received: by 2002:a5d:51c2:: with SMTP id n2mr6684408wrv.326.1607109748651;
 Fri, 04 Dec 2020 11:22:28 -0800 (PST)
MIME-Version: 1.0
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d821c715-966a-b48b-a877-c5dac36822f0@suse.com> <17c90493-b438-fbc1-ca10-3bc4d89c4e5e@xen.org>
 <7a768bcd-80c1-d193-8796-7fb6720fa22a@suse.com> <1a8250f5-ea49-ac3a-e992-be7ec40deba9@xen.org>
 <CABfawhkQcUD4f62zpg0cyrdQgG82XtpYRZZ_-50hjagooT530A@mail.gmail.com>
 <5862eb24-d894-455a-13ac-61af54f949e7@xen.org> <CABfawhkWQiOhLL8f3NzoWbeuag-f+YOOK0i_LJzZq5Yvoh=oHQ@mail.gmail.com>
In-Reply-To: <CABfawhkWQiOhLL8f3NzoWbeuag-f+YOOK0i_LJzZq5Yvoh=oHQ@mail.gmail.com>
From: Julien Grall <julien.grall.oss@gmail.com>
Date: Fri, 4 Dec 2020 19:22:17 +0000
Message-ID: <CAJ=z9a2yEsvUcu8c=pjv5ymLgLHebZCJcTh7c+yeW44J6jDgWw@mail.gmail.com>
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Tamas K Lengyel <lengyelt@ainfosec.com>, 
	Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>, Alexandru Isaila <aisaila@bitdefender.com>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Fri, 4 Dec 2020 at 19:15, Tamas K Lengyel <tamas.k.lengyel@gmail.com> wrote:
>
> On Fri, Dec 4, 2020 at 10:29 AM Julien Grall <julien@xen.org> wrote:
> >
> >
> >
> > On 04/12/2020 15:21, Tamas K Lengyel wrote:
> > > On Fri, Dec 4, 2020 at 6:29 AM Julien Grall <julien@xen.org> wrote:
> > >>
> > >> Hi Jan,
> > >>
> > >> On 03/12/2020 10:09, Jan Beulich wrote:
> > >>> On 02.12.2020 22:10, Julien Grall wrote:
> > >>>> On 23/11/2020 13:30, Jan Beulich wrote:
> > >>>>> While there don't look to be any problems with this right now, the lock
> > >>>>> order implications from holding the lock can be very difficult to follow
> > >>>>> (and may be easy to violate unknowingly). The present callbacks don't
> > >>>>> (and no such callback should) have any need for the lock to be held.
> > >>>>>
> > >>>>> However, vm_event_disable() frees the structures used by respective
> > >>>>> callbacks and isn't otherwise synchronized with invocations of these
> > >>>>> callbacks, so maintain a count of in-progress calls, for evtchn_close()
> > >>>>> to wait to drop to zero before freeing the port (and dropping the lock).
> > >>>>
> > >>>> AFAICT, this callback is not the only place where the synchronization is
> > >>>> missing in the VM event code.
> > >>>>
> > >>>> For instance, vm_event_put_request() can also race against
> > >>>> vm_event_disable().
> > >>>>
> > >>>> So shouldn't we handle this issue properly in VM event?
> > >>>
> > >>> I suppose that's a question to the VM event folks rather than me?
> > >>
> > >> Yes. From my understanding of Tamas's e-mail, they are relying on the
> > >> monitoring software to do the right thing.
> > >>
> > >> I will refrain from commenting on this approach. However, given that the
> > >> race is much wider than the event channel, I would recommend not adding
> > >> more code in the event channel to deal with such a problem.
> > >>
> > >> Instead, this should be fixed in the VM event code when someone has time
> > >> to harden the subsystem.
> > >
> > > I double-checked and the disable route is actually more robust, we
> > > don't just rely on the toolstack doing the right thing. The domain
> > > gets paused before any calls to vm_event_disable. So I don't think
> > > there is really a race-condition here.
> >
> > The code will *only* pause the monitored domain. I can see two issues:
> >     1) The toolstack is still sending events while destroy is happening.
> > This is the race discussed here.
> >     2) The implementation of vm_event_put_request() suggests that it can be
> > called with a non-current domain.
> >
> > I don't see how just pausing the monitored domain is enough here.
>
> Requests only get generated by the monitored domain.

If that's the case, then why is vm_event_put_request() able to
deal with a non-current domain?

I understand that you are possibly trusting whoever may call it, but this
looks quite fragile.

Cheers,



From xen-devel-bounces@lists.xenproject.org Fri Dec 04 21:24:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 21:24:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45072.80532 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klIYJ-0003qN-2y; Fri, 04 Dec 2020 21:23:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45072.80532; Fri, 04 Dec 2020 21:23:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klIYI-0003qG-Vn; Fri, 04 Dec 2020 21:23:42 +0000
Received: by outflank-mailman (input) for mailman id 45072;
 Fri, 04 Dec 2020 21:23:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VOLy=FI=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1klIYH-0003qB-7Y
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 21:23:41 +0000
Received: from mail-wm1-x343.google.com (unknown [2a00:1450:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e36ca147-12da-4a4c-876a-6d8f8ec145fd;
 Fri, 04 Dec 2020 21:23:40 +0000 (UTC)
Received: by mail-wm1-x343.google.com with SMTP id v14so6610265wml.1
 for <xen-devel@lists.xenproject.org>; Fri, 04 Dec 2020 13:23:40 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e36ca147-12da-4a4c-876a-6d8f8ec145fd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=xBkisH7qkt4tTks6hPY848K8Xv8nx74W1YCkO3MSs78=;
        b=FrwZUo3xb4GJ2XCMYiKx6j+u9fsesFpOeFfg9mWR8Y/MbbuFtHYhGweL8FRRJpQ1Fu
         TT05l0v4kSoDWxEHBTGxu+G0/NsWZgChvthEMQB5qo+PD1qegkVo30vTzBfaEaCkduqf
         TxMoffSc98r4Byl479ghTPk7V6RANs34jEoDRDYEDQ10Kr+BolyI2kWJKNNsN12v4TNz
         x6dg2kceyG/he3EH2Mt5JSeQldC6RDKfOlJm9iE4WWmrtqADDGbo9Nfj39Sps8wz5+5r
         QdaQC/MDc6lJPHZLDZfWRQN02maR3qOhkTHt8sUggzWiaCWl6jN/GyY5BUnsA8hSVfo2
         I/PQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=xBkisH7qkt4tTks6hPY848K8Xv8nx74W1YCkO3MSs78=;
        b=O4OQEu/NOFNMqHXDqKifh7WIuDKT0D9mnQ18AVMSZFCUOe0nEefY9lS7siSGy2Lq+h
         jiL6B46S7bthTZv4dHQE87fvdzHwU9tTkwlT/7W+nDuKv5tROoJ31KOoepdbULbk2agx
         2WtgS2tXb/yGkCyVExVkYqk4xe8ejXqW2sbtgkkbXUQNZat3rVi8dsSSCLbSt4R/wlI4
         J7iJ31AayNohBBOnIS6hrftQb4wXenadq2Von8PqCzditIiiX/lm9yEJeFFugmfaCYuz
         Cc8rQ7rddRPyEvZuxBedLKq3HzWoa+5hTbjbpo5pYWhIpsNAuv0DwNThUowvoN911Cu/
         nlSg==
X-Gm-Message-State: AOAM533DjW9uwioZmcDMjfd01dd+CIb4K4D9FVZ3kJySzFJHWsCMQ8lz
	VWgcmBeQJ4J6E6NZsZe7MxdtkXKqW7/Chafutqg=
X-Google-Smtp-Source: ABdhPJyDago5jgo8lT/b6tX8BLqNd48NQMnJrZtGH1jlRB2oYa35oUPd0fjyJN66ZAOSGkjfWsCl1yskmo0vd2+EJp4=
X-Received: by 2002:a1c:4843:: with SMTP id v64mr6328377wma.186.1607117019180;
 Fri, 04 Dec 2020 13:23:39 -0800 (PST)
MIME-Version: 1.0
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d821c715-966a-b48b-a877-c5dac36822f0@suse.com> <17c90493-b438-fbc1-ca10-3bc4d89c4e5e@xen.org>
 <7a768bcd-80c1-d193-8796-7fb6720fa22a@suse.com> <1a8250f5-ea49-ac3a-e992-be7ec40deba9@xen.org>
 <CABfawhkQcUD4f62zpg0cyrdQgG82XtpYRZZ_-50hjagooT530A@mail.gmail.com>
 <5862eb24-d894-455a-13ac-61af54f949e7@xen.org> <CABfawhkWQiOhLL8f3NzoWbeuag-f+YOOK0i_LJzZq5Yvoh=oHQ@mail.gmail.com>
 <CAJ=z9a2yEsvUcu8c=pjv5ymLgLHebZCJcTh7c+yeW44J6jDgWw@mail.gmail.com>
In-Reply-To: <CAJ=z9a2yEsvUcu8c=pjv5ymLgLHebZCJcTh7c+yeW44J6jDgWw@mail.gmail.com>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Fri, 4 Dec 2020 16:23:03 -0500
Message-ID: <CABfawhmqk8aOEe4RMUtTjq_jgSCuGrL5vpuNdRBPNmmxRnfxFg@mail.gmail.com>
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Julien Grall <julien.grall.oss@gmail.com>
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Tamas K Lengyel <lengyelt@ainfosec.com>, 
	Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>, Alexandru Isaila <aisaila@bitdefender.com>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Fri, Dec 4, 2020 at 2:22 PM Julien Grall <julien.grall.oss@gmail.com> wrote:
>
> On Fri, 4 Dec 2020 at 19:15, Tamas K Lengyel <tamas.k.lengyel@gmail.com> wrote:
> >
> > On Fri, Dec 4, 2020 at 10:29 AM Julien Grall <julien@xen.org> wrote:
> > >
> > >
> > >
> > > On 04/12/2020 15:21, Tamas K Lengyel wrote:
> > > > On Fri, Dec 4, 2020 at 6:29 AM Julien Grall <julien@xen.org> wrote:
> > > >>
> > > >> Hi Jan,
> > > >>
> > > >> On 03/12/2020 10:09, Jan Beulich wrote:
> > > >>> On 02.12.2020 22:10, Julien Grall wrote:
> > > >>>> On 23/11/2020 13:30, Jan Beulich wrote:
> > > >>>>> While there don't look to be any problems with this right now, the lock
> > > >>>>> order implications from holding the lock can be very difficult to follow
> > > >>>>> (and may be easy to violate unknowingly). The present callbacks don't
> > > >>>>> (and no such callback should) have any need for the lock to be held.
> > > >>>>>
> > > >>>>> However, vm_event_disable() frees the structures used by respective
> > > >>>>> callbacks and isn't otherwise synchronized with invocations of these
> > > >>>>> callbacks, so maintain a count of in-progress calls, for evtchn_close()
> > > >>>>> to wait to drop to zero before freeing the port (and dropping the lock).
> > > >>>>
> > > >>>> AFAICT, this callback is not the only place where the synchronization is
> > > >>>> missing in the VM event code.
> > > >>>>
> > > >>>> For instance, vm_event_put_request() can also race against
> > > >>>> vm_event_disable().
> > > >>>>
> > > >>>> So shouldn't we handle this issue properly in VM event?
> > > >>>
> > > >>> I suppose that's a question to the VM event folks rather than me?
> > > >>
> > > >> Yes. From my understanding of Tamas's e-mail, they are relying on the
> > > >> monitoring software to do the right thing.
> > > >>
> > > >> I will refrain from commenting on this approach. However, given that the
> > > >> race is much wider than the event channel, I would recommend not adding
> > > >> more code in the event channel to deal with such a problem.
> > > >>
> > > >> Instead, this should be fixed in the VM event code when someone has time
> > > >> to harden the subsystem.
> > > >
> > > > I double-checked and the disable route is actually more robust, we
> > > > don't just rely on the toolstack doing the right thing. The domain
> > > > gets paused before any calls to vm_event_disable. So I don't think
> > > > there is really a race-condition here.
> > >
> > > The code will *only* pause the monitored domain. I can see two issues:
> > >     1) The toolstack is still sending events while destroy is happening.
> > > This is the race discussed here.
> > >     2) The implementation of vm_event_put_request() suggests that it can be
> > > called with a non-current domain.
> > >
> > > I don't see how just pausing the monitored domain is enough here.
> >
> > Requests only get generated by the monitored domain.
>
> If that's the case, then why is vm_event_put_request() able to
> deal with a non-current domain?
>
> I understand that you are possibly trusting whoever may call it, but this
> looks quite fragile.

I didn't write the system. You probably want to ask that question of
the original author.

Tamas


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 21:40:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 21:40:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45083.80550 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klIoo-0005tZ-Ns; Fri, 04 Dec 2020 21:40:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45083.80550; Fri, 04 Dec 2020 21:40:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klIoo-0005tS-KL; Fri, 04 Dec 2020 21:40:46 +0000
Received: by outflank-mailman (input) for mailman id 45083;
 Fri, 04 Dec 2020 21:40:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gNFP=FI=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1klIon-0005mT-V0
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 21:40:46 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d5530d3c-dab1-4a5a-bc96-6f0f942cccb7;
 Fri, 04 Dec 2020 21:40:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d5530d3c-dab1-4a5a-bc96-6f0f942cccb7
Date: Fri, 4 Dec 2020 13:40:42 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607118044;
	bh=+BjprIbHx8mxLbZVVC1kwi0qaPAi0CFltpmHTGpjwpU=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=NzRsMuaFH6TclnM/mK2Q4H6zZTQtK+G8Qrt2XkCns8qBqorpvH3VHkdI6XWDg1W7O
	 +Gn6t2ZI+E1nclJTaHmU/tAqxRsvfRwimzs32ujxRb8Vds+8yqon/itVSkb4AYo0bM
	 fbzHvKDWdCqaEDW+Ok1ATw8A1G7nSswliLtthx7b4XYsMOUL2TGO/5UfhIcn/AHc2e
	 fFEYyK/eceQnHr0yihN6WJdfCko7AVIEFJ13dDTVmaU0j5Ti80EwkWf6LfH5n3Fysu
	 rqaDW0gXiRskZj84tXK2Kpy2gMWBXvun/Bvp6lpHaHV0jg+0GQ/o4X8Fq66uBUA50J
	 XSYeS0pVtDs0w==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Wei Liu <wl@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, andrew.cooper3@citrix.com, 
    cardoe@cardoe.com, xen-devel@lists.xenproject.org, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v3 01/12] automation: add a QEMU aarch64 smoke test
In-Reply-To: <20201204104039.44diltm2gg4twpxn@liuwe-devbox-debian-v2>
Message-ID: <alpine.DEB.2.21.2012041335110.32240@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s> <20201125042745.31986-1-sstabellini@kernel.org> <20201204104039.44diltm2gg4twpxn@liuwe-devbox-debian-v2>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 4 Dec 2020, Wei Liu wrote:
> On Tue, Nov 24, 2020 at 08:27:34PM -0800, Stefano Stabellini wrote:
> > Use QEMU to start Xen (just the hypervisor) up until it stops because
> > there is no dom0 kernel to boot.
> > 
> > It is based on the existing build job unstable-arm64v8.
> > 
> > Also use make -j$(nproc) to build Xen.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > ---
> > Changes in v2:
> > - fix x86_32 build
> > ---
> >  automation/gitlab-ci/test.yaml         | 22 ++++++++++++++++++
> >  automation/scripts/build               |  6 ++---
> >  automation/scripts/qemu-smoke-arm64.sh | 32 ++++++++++++++++++++++++++
> >  3 files changed, 57 insertions(+), 3 deletions(-)
> >  create mode 100755 automation/scripts/qemu-smoke-arm64.sh
> > 
> > diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
> > index 793feafe8b..35346e3f6e 100644
> > --- a/automation/gitlab-ci/test.yaml
> > +++ b/automation/gitlab-ci/test.yaml
> > @@ -22,6 +22,28 @@ build-each-commit-gcc:
> >      - /^coverity-tested\/.*/
> >      - /^stable-.*/
> >  
> > +qemu-smoke-arm64-gcc:
> > +  stage: test
> > +  image: registry.gitlab.com/xen-project/xen/${CONTAINER}
> > +  variables:
> > +    CONTAINER: debian:unstable-arm64v8
> > +  script:
> > +    - ./automation/scripts/qemu-smoke-arm64.sh 2>&1 | tee qemu-smoke-arm64.log
> > +  dependencies:
> > +    - debian-unstable-gcc-arm64
> > +  artifacts:
> > +    paths:
> > +      - smoke.serial
> > +      - '*.log'
> > +    when: always
> > +  tags:
> > +    - arm64
> > +  except:
> > +    - master
> > +    - smoke
> > +    - /^coverity-tested\/.*/
> > +    - /^stable-.*/
> > +
> >  qemu-smoke-x86-64-gcc:
> >    stage: test
> >    image: registry.gitlab.com/xen-project/xen/${CONTAINER}
> > diff --git a/automation/scripts/build b/automation/scripts/build
> > index 0cd0f3971d..7038e5eb50 100755
> > --- a/automation/scripts/build
> > +++ b/automation/scripts/build
> > @@ -10,9 +10,9 @@ cc-ver()
> >  
> >  # random config or default config
> >  if [[ "${RANDCONFIG}" == "y" ]]; then
> > -    make -C xen KCONFIG_ALLCONFIG=tools/kconfig/allrandom.config randconfig
> > +    make -j$(nproc) -C xen KCONFIG_ALLCONFIG=tools/kconfig/allrandom.config randconfig
> >  else
> > -    make -C xen defconfig
> > +    make -j$(nproc) -C xen defconfig
> >  fi
> >  
> >  # build up our configure options
> > @@ -45,7 +45,7 @@ make -j$(nproc) dist
> >  # Extract artifacts to avoid getting rewritten by customised builds
> >  cp xen/.config xen-config
> >  mkdir binaries
> > -if [[ "${XEN_TARGET_ARCH}" == "x86_64" ]]; then
> > +if [[ "${XEN_TARGET_ARCH}" != "x86_32" ]]; then
> >      cp xen/xen binaries/xen
> >  fi
> >  
> > diff --git a/automation/scripts/qemu-smoke-arm64.sh b/automation/scripts/qemu-smoke-arm64.sh
> > new file mode 100755
> > index 0000000000..a7efbf8b6f
> > --- /dev/null
> > +++ b/automation/scripts/qemu-smoke-arm64.sh
> > @@ -0,0 +1,32 @@
> > +#!/bin/bash
> > +
> > +set -ex
> > +
> > +# Install QEMU
> > +export DEBIAN_FRONTEND=noninteractive
> > +apt-get -qy update
> > +apt-get -qy install --no-install-recommends qemu-system-aarch64 \
> > +                                            u-boot-qemu
> > +
> > +# XXX Silly workaround to get the following QEMU command to work
> > +cp /usr/share/qemu/pvh.bin /usr/share/qemu/efi-virtio.rom
> 
> Can you explain a bit more why this workaround works at all?
> 
> Not a blocking comment, but this will help other people who try to
> modify this script.

Yeah: the following QEMU command just after the copy is:

  qemu-system-aarch64 \
     -machine virtualization=true \
     -cpu cortex-a57 -machine type=virt \
     -m 512 -display none \
     -machine dumpdtb=binaries/virt-gicv3.dtb

The purpose of this command is just to generate the DTB for the
platform; see the "dumpdtb" option.

This version of QEMU refuses to do that unless it can load
"efi-virtio.rom", which doesn't make any sense because:
- we are not running anything here, only generating a DTB, so no ROMs
  should be needed
- below, when we actually start QEMU to do emulation with the same
  options, "efi-virtio.rom" is not needed
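In script form, the workaround amounts to giving QEMU *any* file under
the ROM name it insists on; a rough sketch (using a temp directory
instead of /usr/share/qemu, purely for illustration):

```shell
# Illustrative sketch of the workaround, not the exact script: the
# dumpdtb run only checks that "efi-virtio.rom" can be opened, so any
# existing file of that name satisfies the lookup -- its contents are
# never executed.
romdir=$(mktemp -d)

# Stand-in for /usr/share/qemu/pvh.bin; the content does not matter here.
printf 'placeholder' > "$romdir/pvh.bin"

if [ ! -f "$romdir/efi-virtio.rom" ]; then
    cp "$romdir/pvh.bin" "$romdir/efi-virtio.rom"
fi

ls "$romdir"
```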


I can expand a bit more on the comment, maybe:

# XXX Silly workaround to get the following QEMU command to work
# QEMU looks for "efi-virtio.rom" even if it is unneeded


Thank you for the ack on the series, by the way. If you are OK with it, I
am going to wait a couple of days for further comments, and if there
aren't any I'll commit the series, folding this change in on commit.


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 23:31:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 23:31:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45100.80562 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klKXN-0000VR-So; Fri, 04 Dec 2020 23:30:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45100.80562; Fri, 04 Dec 2020 23:30:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klKXN-0000VK-Os; Fri, 04 Dec 2020 23:30:53 +0000
Received: by outflank-mailman (input) for mailman id 45100;
 Fri, 04 Dec 2020 23:30:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klKXM-0000VC-6w; Fri, 04 Dec 2020 23:30:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klKXL-0007T6-TD; Fri, 04 Dec 2020 23:30:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klKXL-000268-I8; Fri, 04 Dec 2020 23:30:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1klKXL-0008ET-Hg; Fri, 04 Dec 2020 23:30:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7n374Wk6bvu1lTA0hrLNtHCrvZ0phFEs6YbLWk9CeV0=; b=1QhNNdjnOF1xgSZ/ZTBqEiW8sG
	LLJAmO1qbH6v7keDDiXUY2MeWiSgMZUtpgVleaOwQpN0V/v5dDTauv0GP4QqJQGCfu49s6UMN9DjT
	sFHLydEJVHDY0T8R7sNWFjZUkkzKDnSkg4HNg61B867INAgV1nbVJ7XIRc8S8/ow8i5s=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157199-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157199: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=bbe2ba04c5a92a49db8a42c850a5a2f6481e47eb
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 04 Dec 2020 23:30:51 +0000

flight 157199 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157199/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-xsm      11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                bbe2ba04c5a92a49db8a42c850a5a2f6481e47eb
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  126 days
Failing since        152366  2020-08-01 20:49:34 Z  125 days  212 attempts
Testing same since   157199  2020-12-04 07:52:18 Z    0 days    1 attempts

------------------------------------------------------------
3629 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 696543 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 23:52:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 23:52:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45111.80583 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klKsZ-0002YP-5K; Fri, 04 Dec 2020 23:52:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45111.80583; Fri, 04 Dec 2020 23:52:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klKsZ-0002YI-1l; Fri, 04 Dec 2020 23:52:47 +0000
Received: by outflank-mailman (input) for mailman id 45111;
 Fri, 04 Dec 2020 23:52:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gNFP=FI=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1klKsY-0002YD-0D
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 23:52:46 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e7f734f3-864b-42ce-b86e-f95466562de2;
 Fri, 04 Dec 2020 23:52:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7f734f3-864b-42ce-b86e-f95466562de2
Date: Fri, 4 Dec 2020 15:52:43 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607125964;
	bh=YYyU6PwrDiQTEYcWMMLQfrhbgp4/5WOqB7PcoZaU+jM=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=l48xANx5IC8noSkTBcMCOxBrHjWDmuz9tkL5BHLxh7uW37ghXpF+9xpxMobDymnUZ
	 JiLLOxnTiBKVEdguyZyu9jx3ja999zvvbaTAF965Nf5z9NO2AeZ5XPotncQLDMjg8s
	 tFVHSIf8/X2nv0MZpmOjuUnkOI8ahuDto60bCG5HT7AGmIwqz6XzSrS/nltsmoj06S
	 3Qe2RHnQADhE1QN9LI+ZlYCINzjwoFH3SXndM06kjU8arbLKsDbemTR/0cHZii8tth
	 Uk/p+V/CxANa14weEwM2Ci2f2JrzuWLpUxUpgfs3IP7Camt5G1lEjg+G221RlTM5vG
	 wKx5YC/dBE4Ng==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <bertrand.marquis@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 1/7] xen/arm: Add ID registers and complete cpufinfo
In-Reply-To: <97efd89cccdffc2a7fd987ac8156f5eea191fd3f.1606742184.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.21.2012041546340.32240@sstabellini-ThinkPad-T480s>
References: <cover.1606742184.git.bertrand.marquis@arm.com> <97efd89cccdffc2a7fd987ac8156f5eea191fd3f.1606742184.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

There is a typo in the subject line: "cpufinfo" should be "cpuinfo".


On Mon, 30 Nov 2020, Bertrand Marquis wrote:
> Add definition and entries in cpuinfo for ID registers introduced in
> newer Arm Architecture reference manual:
> - ID_PFR2: processor feature register 2
> - ID_DFR1: debug feature register 1
> - ID_MMFR4 and ID_MMFR5: Memory model feature registers 4 and 5
> - ID_ISAR6: ISA Feature register 6
> Add more bitfield definitions in PFR fields of cpuinfo.
> Add MVFR2 register definition for aarch32.
> Add mvfr values in cpuinfo.
> Add some register definitions for arm64 in sysregs as some are not
> always known by compilers.
> Initialize the new values added in cpuinfo in identify_cpu during init.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

I realize I am using an old compiler but I am getting a build error:

/local/repos/gcc-linaro-5.3.1-2016.05-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-gcc -MMD -MP -MF ./.cpufeature.o.d  -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -O1 -fno-omit-frame-pointer -nostdinc -fno-builtin -fno-common -Werror -Wredundant-decls -Wno-pointer-arith -Wvla -pipe -D__XEN__ -include /local/repos/xen-upstream/xen/include/xen/config.h -Wa,--strip-local-absolute -g -mcpu=generic -mgeneral-regs-only   -I/local/repos/xen-upstream/xen/include -fno-stack-protector -fno-exceptions -fno-asynchronous-unwind-tables -Wnested-externs '-D__OBJECT_FILE__="cpufeature.o"'  -c cpufeature.c -o cpufeature.o
{standard input}: Assembler messages:
{standard input}:634: Error: unknown or missing system register name at operand 2 -- `mrs x1,ID_MMFR4_EL1'

If I remove the line:

  c->mm32.bits[4]  = READ_SYSREG32(ID_MMFR4_EL1);


it builds just fine



> ---
> Changes in V2:
>   Fix dbg32 table size and add proper initialisation of the second entry
>   of the table by reading ID_DFR1 register.
> ---
>  xen/arch/arm/cpufeature.c           | 17 ++++++++
>  xen/include/asm-arm/arm64/sysregs.h | 25 ++++++++++++
>  xen/include/asm-arm/cpregs.h        | 11 +++++
>  xen/include/asm-arm/cpufeature.h    | 63 ++++++++++++++++++++++++-----
>  4 files changed, 107 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
> index 44126dbf07..204be9b084 100644
> --- a/xen/arch/arm/cpufeature.c
> +++ b/xen/arch/arm/cpufeature.c
> @@ -114,15 +114,20 @@ void identify_cpu(struct cpuinfo_arm *c)
>  
>          c->mm64.bits[0]  = READ_SYSREG64(ID_AA64MMFR0_EL1);
>          c->mm64.bits[1]  = READ_SYSREG64(ID_AA64MMFR1_EL1);
> +        c->mm64.bits[2]  = READ_SYSREG64(ID_AA64MMFR2_EL1);
>  
>          c->isa64.bits[0] = READ_SYSREG64(ID_AA64ISAR0_EL1);
>          c->isa64.bits[1] = READ_SYSREG64(ID_AA64ISAR1_EL1);
> +
> +        c->zfr64.bits[0] = READ_SYSREG64(ID_AA64ZFR0_EL1);
>  #endif
>  
>          c->pfr32.bits[0] = READ_SYSREG32(ID_PFR0_EL1);
>          c->pfr32.bits[1] = READ_SYSREG32(ID_PFR1_EL1);
> +        c->pfr32.bits[2] = READ_SYSREG32(ID_PFR2_EL1);
>  
>          c->dbg32.bits[0] = READ_SYSREG32(ID_DFR0_EL1);
> +        c->dbg32.bits[1] = READ_SYSREG32(ID_DFR1_EL1);
>  
>          c->aux32.bits[0] = READ_SYSREG32(ID_AFR0_EL1);
>  
> @@ -130,6 +135,8 @@ void identify_cpu(struct cpuinfo_arm *c)
>          c->mm32.bits[1]  = READ_SYSREG32(ID_MMFR1_EL1);
>          c->mm32.bits[2]  = READ_SYSREG32(ID_MMFR2_EL1);
>          c->mm32.bits[3]  = READ_SYSREG32(ID_MMFR3_EL1);
> +        c->mm32.bits[4]  = READ_SYSREG32(ID_MMFR4_EL1);
> +        c->mm32.bits[5]  = READ_SYSREG32(ID_MMFR5_EL1);
>  
>          c->isa32.bits[0] = READ_SYSREG32(ID_ISAR0_EL1);
>          c->isa32.bits[1] = READ_SYSREG32(ID_ISAR1_EL1);
> @@ -137,6 +144,16 @@ void identify_cpu(struct cpuinfo_arm *c)
>          c->isa32.bits[3] = READ_SYSREG32(ID_ISAR3_EL1);
>          c->isa32.bits[4] = READ_SYSREG32(ID_ISAR4_EL1);
>          c->isa32.bits[5] = READ_SYSREG32(ID_ISAR5_EL1);
> +        c->isa32.bits[6] = READ_SYSREG32(ID_ISAR6_EL1);
> +
> +#ifdef CONFIG_ARM_64
> +        c->mvfr.bits[0] = READ_SYSREG64(MVFR0_EL1);
> +        c->mvfr.bits[1] = READ_SYSREG64(MVFR1_EL1);
> +        c->mvfr.bits[2] = READ_SYSREG64(MVFR2_EL1);
> +#else
> +        c->mvfr.bits[0] = READ_CP32(MVFR0);
> +        c->mvfr.bits[1] = READ_CP32(MVFR1);
> +#endif
>  }
>  
>  /*
> diff --git a/xen/include/asm-arm/arm64/sysregs.h b/xen/include/asm-arm/arm64/sysregs.h
> index c60029d38f..5abbeda3fd 100644
> --- a/xen/include/asm-arm/arm64/sysregs.h
> +++ b/xen/include/asm-arm/arm64/sysregs.h
> @@ -57,6 +57,31 @@
>  #define ICH_AP1R2_EL2             __AP1Rx_EL2(2)
>  #define ICH_AP1R3_EL2             __AP1Rx_EL2(3)
>  
> +/*
> + * Define ID coprocessor registers if they are not
> + * already defined by the compiler.
> + *
> + * Values picked from linux kernel
> + */
> +#ifndef ID_AA64MMFR2_EL1
> +#define ID_AA64MMFR2_EL1            S3_0_C0_C7_2
> +#endif
> +#ifndef ID_PFR2_EL1
> +#define ID_PFR2_EL1                 S3_0_C0_C3_4
> +#endif
> +#ifndef ID_MMFR5_EL1
> +#define ID_MMFR5_EL1                S3_0_C0_C3_6
> +#endif
> +#ifndef ID_ISAR6_EL1
> +#define ID_ISAR6_EL1                S3_0_C0_C2_7
> +#endif
> +#ifndef ID_AA64ZFR0_EL1
> +#define ID_AA64ZFR0_EL1             S3_0_C0_C4_4
> +#endif
> +#ifndef ID_DFR1_EL1
> +#define ID_DFR1_EL1                 S3_0_C0_C3_5
> +#endif
> +
>  /* Access to system registers */
>  
>  #define READ_SYSREG32(name) ((uint32_t)READ_SYSREG64(name))
> diff --git a/xen/include/asm-arm/cpregs.h b/xen/include/asm-arm/cpregs.h
> index 8fd344146e..58be898891 100644
> --- a/xen/include/asm-arm/cpregs.h
> +++ b/xen/include/asm-arm/cpregs.h
> @@ -63,6 +63,7 @@
>  #define FPSID           p10,7,c0,c0,0   /* Floating-Point System ID Register */
>  #define FPSCR           p10,7,c1,c0,0   /* Floating-Point Status and Control Register */
>  #define MVFR0           p10,7,c7,c0,0   /* Media and VFP Feature Register 0 */
> +#define MVFR1           p10,7,c6,c0,0   /* Media and VFP Feature Register 1 */
>  #define FPEXC           p10,7,c8,c0,0   /* Floating-Point Exception Control Register */
>  #define FPINST          p10,7,c9,c0,0   /* Floating-Point Instruction Register */
>  #define FPINST2         p10,7,c10,c0,0  /* Floating-point Instruction Register 2 */
> @@ -108,18 +109,23 @@
>  #define MPIDR           p15,0,c0,c0,5   /* Multiprocessor Affinity Register */
>  #define ID_PFR0         p15,0,c0,c1,0   /* Processor Feature Register 0 */
>  #define ID_PFR1         p15,0,c0,c1,1   /* Processor Feature Register 1 */
> +#define ID_PFR2         p15,0,c0,c3,4   /* Processor Feature Register 2 */
>  #define ID_DFR0         p15,0,c0,c1,2   /* Debug Feature Register 0 */
> +#define ID_DFR1         p15,0,c0,c3,5   /* Debug Feature Register 1 */
>  #define ID_AFR0         p15,0,c0,c1,3   /* Auxiliary Feature Register 0 */
>  #define ID_MMFR0        p15,0,c0,c1,4   /* Memory Model Feature Register 0 */
>  #define ID_MMFR1        p15,0,c0,c1,5   /* Memory Model Feature Register 1 */
>  #define ID_MMFR2        p15,0,c0,c1,6   /* Memory Model Feature Register 2 */
>  #define ID_MMFR3        p15,0,c0,c1,7   /* Memory Model Feature Register 3 */
> +#define ID_MMFR4        p15,0,c0,c2,6   /* Memory Model Feature Register 4 */
> +#define ID_MMFR5        p15,0,c0,c3,6   /* Memory Model Feature Register 5 */
>  #define ID_ISAR0        p15,0,c0,c2,0   /* ISA Feature Register 0 */
>  #define ID_ISAR1        p15,0,c0,c2,1   /* ISA Feature Register 1 */
>  #define ID_ISAR2        p15,0,c0,c2,2   /* ISA Feature Register 2 */
>  #define ID_ISAR3        p15,0,c0,c2,3   /* ISA Feature Register 3 */
>  #define ID_ISAR4        p15,0,c0,c2,4   /* ISA Feature Register 4 */
>  #define ID_ISAR5        p15,0,c0,c2,5   /* ISA Feature Register 5 */
> +#define ID_ISAR6        p15,0,c0,c2,7   /* ISA Feature Register 6 */
>  #define CCSIDR          p15,1,c0,c0,0   /* Cache Size ID Registers */
>  #define CLIDR           p15,1,c0,c0,1   /* Cache Level ID Register */
>  #define CSSELR          p15,2,c0,c0,0   /* Cache Size Selection Register */
> @@ -312,18 +318,23 @@
>  #define HSTR_EL2                HSTR
>  #define ID_AFR0_EL1             ID_AFR0
>  #define ID_DFR0_EL1             ID_DFR0
> +#define ID_DFR1_EL1             ID_DFR1
>  #define ID_ISAR0_EL1            ID_ISAR0
>  #define ID_ISAR1_EL1            ID_ISAR1
>  #define ID_ISAR2_EL1            ID_ISAR2
>  #define ID_ISAR3_EL1            ID_ISAR3
>  #define ID_ISAR4_EL1            ID_ISAR4
>  #define ID_ISAR5_EL1            ID_ISAR5
> +#define ID_ISAR6_EL1            ID_ISAR6
>  #define ID_MMFR0_EL1            ID_MMFR0
>  #define ID_MMFR1_EL1            ID_MMFR1
>  #define ID_MMFR2_EL1            ID_MMFR2
>  #define ID_MMFR3_EL1            ID_MMFR3
> +#define ID_MMFR4_EL1            ID_MMFR4
> +#define ID_MMFR5_EL1            ID_MMFR5
>  #define ID_PFR0_EL1             ID_PFR0
>  #define ID_PFR1_EL1             ID_PFR1
> +#define ID_PFR2_EL1             ID_PFR2
>  #define IFSR32_EL2              IFSR
>  #define MDCR_EL2                HDCR
>  #define MIDR_EL1                MIDR
> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
> index c7b5052992..64354c3f19 100644
> --- a/xen/include/asm-arm/cpufeature.h
> +++ b/xen/include/asm-arm/cpufeature.h
> @@ -148,6 +148,7 @@ struct cpuinfo_arm {
>      union {
>          uint64_t bits[2];
>          struct {
> +            /* PFR0 */
>              unsigned long el0:4;
>              unsigned long el1:4;
>              unsigned long el2:4;
> @@ -155,9 +156,23 @@ struct cpuinfo_arm {
>              unsigned long fp:4;   /* Floating Point */
>              unsigned long simd:4; /* Advanced SIMD */
>              unsigned long gic:4;  /* GIC support */
> -            unsigned long __res0:28;
> +            unsigned long ras:4;
> +            unsigned long sve:4;
> +            unsigned long sel2:4;
> +            unsigned long mpam:4;
> +            unsigned long amu:4;
> +            unsigned long dit:4;
> +            unsigned long __res0:4;
>              unsigned long csv2:4;
> -            unsigned long __res1:4;
> +            unsigned long cvs3:4;
> +
> +            /* PFR1 */
> +            unsigned long bt:4;
> +            unsigned long ssbs:4;
> +            unsigned long mte:4;
> +            unsigned long ras_frac:4;
> +            unsigned long mpam_frac:4;
> +            unsigned long __res1:44;
>          };
>      } pfr64;
>  
> @@ -170,7 +185,7 @@ struct cpuinfo_arm {
>      } aux64;
>  
>      union {
> -        uint64_t bits[2];
> +        uint64_t bits[3];
>          struct {
>              unsigned long pa_range:4;
>              unsigned long asid_bits:4;
> @@ -190,6 +205,8 @@ struct cpuinfo_arm {
>              unsigned long pan:4;
>              unsigned long __res1:8;
>              unsigned long __res2:32;
> +
> +            unsigned long __res3:64;
>          };
>      } mm64;
>  
> @@ -197,6 +214,10 @@ struct cpuinfo_arm {
>          uint64_t bits[2];
>      } isa64;
>  
> +    struct {
> +        uint64_t bits[1];
> +    } zfr64;
> +
>  #endif
>  
>      /*
> @@ -204,25 +225,38 @@ struct cpuinfo_arm {
>       * when running in 32-bit mode.
>       */
>      union {
> -        uint32_t bits[2];
> +        uint32_t bits[3];
>          struct {
> +            /* PFR0 */
>              unsigned long arm:4;
>              unsigned long thumb:4;
>              unsigned long jazelle:4;
>              unsigned long thumbee:4;
> -            unsigned long __res0:16;
> +            unsigned long csv2:4;
> +            unsigned long amu:4;
> +            unsigned long dit:4;
> +            unsigned long ras:4;
>  
> +            /* PFR1 */
>              unsigned long progmodel:4;
>              unsigned long security:4;
>              unsigned long mprofile:4;
>              unsigned long virt:4;
>              unsigned long gentimer:4;
> -            unsigned long __res1:12;
> +            unsigned long sec_frac:4;
> +            unsigned long virt_frac:4;
> +            unsigned long gic:4;
> +
> +            /* PFR2 */
> +            unsigned long csv3:4;
> +            unsigned long ssbs:4;
> +            unsigned long ras_frac:4;
> +            unsigned long __res2:20;
>          };
>      } pfr32;
>  
>      struct {
> -        uint32_t bits[1];
> +        uint32_t bits[2];
>      } dbg32;
>  
>      struct {
> @@ -230,12 +264,23 @@ struct cpuinfo_arm {
>      } aux32;
>  
>      struct {
> -        uint32_t bits[4];
> +        uint32_t bits[6];
>      } mm32;
>  
>      struct {
> -        uint32_t bits[6];
> +        uint32_t bits[7];
>      } isa32;
> +
> +#ifdef CONFIG_ARM_64
> +    struct {
> +        uint64_t bits[3];
> +    } mvfr;
> +#else
> +    /* Only MVFR0 and MVFR1 exist on armv7 */
> +    struct {
> +        uint32_t bits[2];
> +    } mvfr;
> +#endif
>  };
>  
>  extern struct cpuinfo_arm boot_cpu_data;
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 23:53:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 23:53:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45116.80595 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klKtG-0002gC-ES; Fri, 04 Dec 2020 23:53:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45116.80595; Fri, 04 Dec 2020 23:53:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klKtG-0002g5-BU; Fri, 04 Dec 2020 23:53:30 +0000
Received: by outflank-mailman (input) for mailman id 45116;
 Fri, 04 Dec 2020 23:53:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WR05=FI=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1klKtF-0002fz-9x
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 23:53:29 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6bebb1a3-9dd5-46f3-b3af-da086547ca40;
 Fri, 04 Dec 2020 23:53:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6bebb1a3-9dd5-46f3-b3af-da086547ca40
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1607126008;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=XWm9eAAQQ9rWakVKjBaxzrEdqMCR0yQ24/rfWLPC13s=;
  b=BE3QBf/vh/iqYT0607LXMGsKsNm0pyI9AfqtQQskknsq/j3voNfKX6ef
   QRJG8+IC2JMkJ1xQM6qLKCMP3OoT4B0ovHPHXfqtDm0joZ7tuWNyWJzFa
   tiz3sTVEQYRQNEZqtunQwbZdW25HA2flKYv9J1oxXjQ9zWsb6lj8dhdkR
   Y=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Kr6p4B4Tg8SnCWUjVp3uCMfjkdSA6hGWWNfrhsvm3O3kq78gY5qtrOauAvWXZx2RrNR/XsORq9
 Jr1Zw/Jh6wsdmjstdmLw2JktekReRxzMTbBy7Ura4Y6c0jrgcbJO+I+fJtlvNyPJkzmYoaT0mY
 +3YNPxj67OZpNUp2kzytiftDKvzVsnizq4Cwp78JNSSdgpXk03CshGQfLkgYXhe8xqCPQQp2Be
 Nz3uEh+UZcJWM5FH9kVcxCbC9iLXM5RMaRWaNo4sH5sjvNhZ40QcHdf8WcDXJRyqcCTGRQBRB2
 arA=
X-SBRS: 5.1
X-MesageID: 32816974
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,394,1599537600"; 
   d="scan'208";a="32816974"
Subject: Re: [PATCH v2 00/17] xen: support per-cpupool scheduling granularity
To: Juergen Gross <jgross@suse.com>, <xen-devel@lists.xenproject.org>
CC: George Dunlap <george.dunlap@citrix.com>, Dario Faggioli
	<dfaggioli@suse.com>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201201082128.15239-1-jgross@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <a12de6ea-584c-49ca-3a09-f94b65933a62@citrix.com>
Date: Fri, 4 Dec 2020 23:53:21 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201201082128.15239-1-jgross@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 01/12/2020 08:21, Juergen Gross wrote:
> Support scheduling granularity per cpupool. Setting the granularity is
> done via hypfs, which needed to gain dynamical entries for that
> purpose.
>
> Apart from the hypfs related additional functionality the main change
> for cpupools was the support for moving a domain to a new granularity,
> as this requires to modify the scheduling unit/vcpu relationship.
>
> I have tried to do the hypfs modifications in a rather generic way in
> order to be able to use the same infrastructure in other cases, too
> (e.g. for per-domain entries).
>
> The complete series has been tested by creating cpupools with different
> granularities and moving busy and idle domains between those.
>
> Changes in V2:
> - Added several new patches, especially for some further cleanups in
>   cpupool.c.
> - Completely reworked the locking scheme with dynamical directories:
>   locking of resources (cpupools in this series) is now done via new
>   callbacks which are called when traversing the hypfs tree. This
>   removes the need to add locking to each hypfs related cpupool
>   function and it ensures data integrity across multiple callbacks.
> - Reordered the first few patches in order to have already acked
>   patches in pure cleanup patches first.
> - Addressed several comments.
>
> Juergen Gross (17):
>   xen/cpupool: add cpu to sched_res_mask when removing it from cpupool
>   xen/cpupool: add missing bits for per-cpupool scheduling granularity
>   xen/cpupool: sort included headers in cpupool.c
>   xen/cpupool: switch cpupool id to unsigned
>   xen/cpupool: switch cpupool list to normal list interface
>   xen/cpupool: use ERR_PTR() for returning error cause from
>     cpupool_create()
>   xen/cpupool: support moving domain between cpupools with different
>     granularity
>   docs: fix hypfs path documentation
>   xen/hypfs: move per-node function pointers into a dedicated struct
>   xen/hypfs: pass real failure reason up from hypfs_get_entry()
>   xen/hypfs: add getsize() and findentry() callbacks to hypfs_funcs
>   xen/hypfs: add new enter() and exit() per node callbacks
>   xen/hypfs: support dynamic hypfs nodes
>   xen/hypfs: add support for id-based dynamic directories
>   xen/cpupool: add cpupool directories
>   xen/cpupool: add scheduling granularity entry to cpupool entries
>   xen/cpupool: make per-cpupool sched-gran hypfs node writable

Gitlab CI is fairly (but not completely) reliably hitting a failure in
ARM randconfig against this series only.

https://gitlab.com/xen-project/patchew/xen/-/pipelines/225445864 is one
example.

Error is:

cpupool.c:102:12: error: 'sched_gran_get' defined but not used
[-Werror=unused-function]
  102 | static int sched_gran_get(const char *str, enum sched_gran *mode)
      |            ^~~~~~~~~~~~~~



Weirdly, there is a second diagnostic showing up which appears to be
unrelated and non-fatal, but is concerning nonetheless:

mem_access.c: In function 'p2m_mem_access_check':
mem_access.c:227:6: note: parameter passing for argument of type 'const
struct npfec' changed in GCC 9.1
  227 | bool p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct
npfec npfec)
      |      ^~~~~~~~~~~~~~~~~~~~

It appears to be related to
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=88469 and is letting us
know that the ABI changed.  However, Xen is an embedded project with no
external linkage, so we can probably compile with -Wno-psabi and be done
with it.

~Andrew, in lieu of a real CI robot.


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 23:54:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 23:54:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45121.80606 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klKu1-0002nr-OT; Fri, 04 Dec 2020 23:54:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45121.80606; Fri, 04 Dec 2020 23:54:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klKu1-0002nk-LM; Fri, 04 Dec 2020 23:54:17 +0000
Received: by outflank-mailman (input) for mailman id 45121;
 Fri, 04 Dec 2020 23:54:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gNFP=FI=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1klKu0-0002nb-8R
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 23:54:16 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 38e12b43-c637-48a7-b068-6383b2eff31a;
 Fri, 04 Dec 2020 23:54:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 38e12b43-c637-48a7-b068-6383b2eff31a
Date: Fri, 4 Dec 2020 15:54:14 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607126054;
	bh=6qbAfoHGBHocQsedYQyTCKTMdHTb6QYUplQTl2cKNi4=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=ZhH4VYEGTnpZguj/9y6T2lOccw9QjMm06TdTOHsTHmDrxQo+p0eq9Xm71QCZZJm1y
	 10cmg8nsvgjl+bW3wfY/Nw61SyRuC8GW2mVaYgk7+2qieZm8zz3PdWqW0L46CuBGQD
	 3dGU6So6h+KnBefLTNihMZahQdS9+mBh2J9EZXOM0tyZkl7wo82zox8kXub5k40HHw
	 cFCHKCoOFZyBlLUwBnk9c4dplxp1NZ0UUBTG6pOmHG6LCw47NOcqwmuSrdGAcBW9vj
	 xrnwz2VwocEChqCX5sMiV2Hq+GrnVscrTlUluIbrKYOxpaPdAN2d4KEZlAJIvRkgRv
	 tZQsLHbJ8FRAw==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
cc: Bertrand Marquis <bertrand.marquis@arm.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 2/7] xen/arm: Add arm64 ID registers definitions
In-Reply-To: <87zh2y7gm0.fsf@epam.com>
Message-ID: <alpine.DEB.2.21.2012041553240.32240@sstabellini-ThinkPad-T480s>
References: <cover.1606742184.git.bertrand.marquis@arm.com> <83f4e52dce23d2e83f6118e5ecb3cef22112f9e9.1606742184.git.bertrand.marquis@arm.com> <87zh2y7gm0.fsf@epam.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 30 Nov 2020, Volodymyr Babchuk wrote:
> Bertrand Marquis writes:
> 
> > Add coprocessor registers definitions for all ID registers trapped
> > through the TID3 bit of HSR.
> > Those are the ones that will be emulated in Xen to only publish to guests
> > the features that are supported by Xen and that are accessible to
> > guests.
> >
> > Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> Reviewed-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> > ---
> > Changes in V2: rebase
> > ---
> >  xen/include/asm-arm/arm64/hsr.h | 37 +++++++++++++++++++++++++++++++++
> >  1 file changed, 37 insertions(+)
> >
> > diff --git a/xen/include/asm-arm/arm64/hsr.h b/xen/include/asm-arm/arm64/hsr.h
> > index ca931dd2fe..e691d41c17 100644
> > --- a/xen/include/asm-arm/arm64/hsr.h
> > +++ b/xen/include/asm-arm/arm64/hsr.h
> > @@ -110,6 +110,43 @@
> >  #define HSR_SYSREG_CNTP_CTL_EL0   HSR_SYSREG(3,3,c14,c2,1)
> >  #define HSR_SYSREG_CNTP_CVAL_EL0  HSR_SYSREG(3,3,c14,c2,2)
> >  
> > +/* Those registers are used when HCR_EL2.TID3 is set */
> > +#define HSR_SYSREG_ID_PFR0_EL1    HSR_SYSREG(3,0,c0,c1,0)
> > +#define HSR_SYSREG_ID_PFR1_EL1    HSR_SYSREG(3,0,c0,c1,1)
> > +#define HSR_SYSREG_ID_PFR2_EL1    HSR_SYSREG(3,0,c0,c3,4)
> > +#define HSR_SYSREG_ID_DFR0_EL1    HSR_SYSREG(3,0,c0,c1,2)
> > +#define HSR_SYSREG_ID_DFR1_EL1    HSR_SYSREG(3,0,c0,c3,5)
> > +#define HSR_SYSREG_ID_AFR0_EL1    HSR_SYSREG(3,0,c0,c1,3)
> > +#define HSR_SYSREG_ID_MMFR0_EL1   HSR_SYSREG(3,0,c0,c1,4)
> > +#define HSR_SYSREG_ID_MMFR1_EL1   HSR_SYSREG(3,0,c0,c1,5)
> > +#define HSR_SYSREG_ID_MMFR2_EL1   HSR_SYSREG(3,0,c0,c1,6)
> > +#define HSR_SYSREG_ID_MMFR3_EL1   HSR_SYSREG(3,0,c0,c1,7)
> > +#define HSR_SYSREG_ID_MMFR4_EL1   HSR_SYSREG(3,0,c0,c2,6)
> > +#define HSR_SYSREG_ID_MMFR5_EL1   HSR_SYSREG(3,0,c0,c3,6)
> > +#define HSR_SYSREG_ID_ISAR0_EL1   HSR_SYSREG(3,0,c0,c2,0)
> > +#define HSR_SYSREG_ID_ISAR1_EL1   HSR_SYSREG(3,0,c0,c2,1)
> > +#define HSR_SYSREG_ID_ISAR2_EL1   HSR_SYSREG(3,0,c0,c2,2)
> > +#define HSR_SYSREG_ID_ISAR3_EL1   HSR_SYSREG(3,0,c0,c2,3)
> > +#define HSR_SYSREG_ID_ISAR4_EL1   HSR_SYSREG(3,0,c0,c2,4)
> > +#define HSR_SYSREG_ID_ISAR5_EL1   HSR_SYSREG(3,0,c0,c2,5)
> > +#define HSR_SYSREG_ID_ISAR6_EL1   HSR_SYSREG(3,0,c0,c2,7)
> > +#define HSR_SYSREG_MVFR0_EL1      HSR_SYSREG(3,0,c0,c3,0)
> > +#define HSR_SYSREG_MVFR1_EL1      HSR_SYSREG(3,0,c0,c3,1)
> > +#define HSR_SYSREG_MVFR2_EL1      HSR_SYSREG(3,0,c0,c3,2)
> > +
> > +#define HSR_SYSREG_ID_AA64PFR0_EL1   HSR_SYSREG(3,0,c0,c4,0)
> > +#define HSR_SYSREG_ID_AA64PFR1_EL1   HSR_SYSREG(3,0,c0,c4,1)
> > +#define HSR_SYSREG_ID_AA64DFR0_EL1   HSR_SYSREG(3,0,c0,c5,0)
> > +#define HSR_SYSREG_ID_AA64DFR1_EL1   HSR_SYSREG(3,0,c0,c5,1)
> > +#define HSR_SYSREG_ID_AA64ISAR0_EL1  HSR_SYSREG(3,0,c0,c6,0)
> > +#define HSR_SYSREG_ID_AA64ISAR1_EL1  HSR_SYSREG(3,0,c0,c6,1)
> > +#define HSR_SYSREG_ID_AA64MMFR0_EL1  HSR_SYSREG(3,0,c0,c7,0)
> > +#define HSR_SYSREG_ID_AA64MMFR1_EL1  HSR_SYSREG(3,0,c0,c7,1)
> > +#define HSR_SYSREG_ID_AA64MMFR2_EL1  HSR_SYSREG(3,0,c0,c7,2)
> > +#define HSR_SYSREG_ID_AA64AFR0_EL1   HSR_SYSREG(3,0,c0,c5,4)
> > +#define HSR_SYSREG_ID_AA64AFR1_EL1   HSR_SYSREG(3,0,c0,c5,5)
> > +#define HSR_SYSREG_ID_AA64ZFR0_EL1   HSR_SYSREG(3,0,c0,c4,4)
> > +
> >  #endif /* __ASM_ARM_ARM64_HSR_H */
> >  
> >  /*
> 
> 
> -- 
> Volodymyr Babchuk at EPAM


From xen-devel-bounces@lists.xenproject.org Fri Dec 04 23:57:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 04 Dec 2020 23:57:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45129.80618 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klKxI-00038z-6I; Fri, 04 Dec 2020 23:57:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45129.80618; Fri, 04 Dec 2020 23:57:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klKxI-00038s-3H; Fri, 04 Dec 2020 23:57:40 +0000
Received: by outflank-mailman (input) for mailman id 45129;
 Fri, 04 Dec 2020 23:57:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gNFP=FI=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1klKxG-00038n-T1
 for xen-devel@lists.xenproject.org; Fri, 04 Dec 2020 23:57:38 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 48ed7307-cfde-4bf3-b860-1304d03df2a9;
 Fri, 04 Dec 2020 23:57:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 48ed7307-cfde-4bf3-b860-1304d03df2a9
Date: Fri, 4 Dec 2020 15:57:36 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607126257;
	bh=5h37HYw38QL1InbByuCWEuuiG/Y2OGL9FZe2XfEIETo=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=KZ0NhyrsVXbfPNDqjW0roWv56n5aXd3VB93jD5XQbURqEi7v88xn25EroTTWkKq7h
	 KfJQaBaRBlKtEcxsjz3FNcs8nPnPHt0eJnUdciqlg0W52XypwiDSxbgTJhNJL9N2UC
	 25B7fwuZV4fG5CdmzBbX+lEr92iPyZHBPlSr6uSF8lcoxTUQ4BdhlP7yHKD+oDt1W+
	 wrICrUMTzoROy4zshk6la5MYGmUQoPjaxY5oEnfJ63/gXRlEmVlfH2xWWtAicVeRZP
	 QHZtgFRSkmZW5t84vUTw/DzE7kiprmnIX9BJQworvf8wYWXOX5AXws3BW5gWRLDIT3
	 dAorYU4rvNVoQ==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <bertrand.marquis@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 3/7] xen/arm: create a cpuinfo structure for guest
In-Reply-To: <539cc9c817a80e35a2532dba5bc01e9b2533ff56.1606742184.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.21.2012041531420.32240@sstabellini-ThinkPad-T480s>
References: <cover.1606742184.git.bertrand.marquis@arm.com> <539cc9c817a80e35a2532dba5bc01e9b2533ff56.1606742184.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 30 Nov 2020, Bertrand Marquis wrote:
> Create a cpuinfo structure for guest and mask into it the features that
> we do not support in Xen or that we do not want to publish to guests.
> 
> Modify some values in the cpuinfo structure for guests to mask some
> features which we do not want to allow to guests (like AMU) or we do not
> support (like SVE).

The first two sentences seem to say the same thing in two different
ways.


> The code is trying to group together registers modifications for the
> same feature to be able in the long term to easily enable/disable a
> feature depending on user parameters or add other registers modification
> in the same place (like enabling/disabling HCR bits).
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
> Changes in V2: rebase
> ---
>  xen/arch/arm/cpufeature.c        | 51 ++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/cpufeature.h |  2 ++
>  2 files changed, 53 insertions(+)
> 
> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
> index 204be9b084..309941ff37 100644
> --- a/xen/arch/arm/cpufeature.c
> +++ b/xen/arch/arm/cpufeature.c
> @@ -24,6 +24,8 @@
>  
>  DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
>  
> +struct cpuinfo_arm __read_mostly guest_cpuinfo;
> +
>  void update_cpu_capabilities(const struct arm_cpu_capabilities *caps,
>                               const char *info)
>  {
> @@ -156,6 +158,55 @@ void identify_cpu(struct cpuinfo_arm *c)
>  #endif
>  }
>  
> +/*
> + * This function creates a cpuinfo structure with values modified to mask
> + * all CPU features that should not be published to guests.
> + * The created structure is then used to provide ID register values to guests.
> + */
> +static int __init create_guest_cpuinfo(void)
> +{
> +    /*
> +     * TODO: The code is currently using only the features detected on the boot
> +     * core. In the long term we should try to compute values containing only
> +     * features supported by all cores.
> +     */
> +    identify_cpu(&guest_cpuinfo);

Given that we already have boot_cpu_data and current_cpu_data, which
should be already initialized at this point, we could simply:

  guest_cpuinfo = current_cpu_data;

or

  guest_cpuinfo = boot_cpu_data;

?


> +#ifdef CONFIG_ARM_64
> +    /* Disable MPAM as Xen does not support it */
> +    guest_cpuinfo.pfr64.mpam = 0;
> +    guest_cpuinfo.pfr64.mpam_frac = 0;
> +
> +    /* Disable SVE as Xen does not support it */
> +    guest_cpuinfo.pfr64.sve = 0;
> +    guest_cpuinfo.zfr64.bits[0] = 0;
> +
> +    /* Disable MTE as Xen does not support it */
> +    guest_cpuinfo.pfr64.mte = 0;
> +#endif
> +
> +    /* Disable AMU */
> +#ifdef CONFIG_ARM_64
> +    guest_cpuinfo.pfr64.amu = 0;
> +#endif
> +    guest_cpuinfo.pfr32.amu = 0;
> +
> +    /* Disable RAS as Xen does not support it */
> +#ifdef CONFIG_ARM_64
> +    guest_cpuinfo.pfr64.ras = 0;
> +    guest_cpuinfo.pfr64.ras_frac = 0;
> +#endif
> +    guest_cpuinfo.pfr32.ras = 0;
> +    guest_cpuinfo.pfr32.ras_frac = 0;
> +
> +    return 0;
> +}
> +/*
> + * This function needs to be run after all secondary CPUs are started in
> + * order to have cpuinfo structures for all cores.
> + */
> +__initcall(create_guest_cpuinfo);
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
> index 64354c3f19..0ab6dd42a0 100644
> --- a/xen/include/asm-arm/cpufeature.h
> +++ b/xen/include/asm-arm/cpufeature.h
> @@ -290,6 +290,8 @@ extern void identify_cpu(struct cpuinfo_arm *);
>  extern struct cpuinfo_arm cpu_data[];
>  #define current_cpu_data cpu_data[smp_processor_id()]
>  
> +extern struct cpuinfo_arm guest_cpuinfo;
> +
>  #endif /* __ASSEMBLY__ */
>  
>  #endif
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Sat Dec 05 00:19:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Dec 2020 00:19:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45137.80631 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klLIH-0005y4-E8; Sat, 05 Dec 2020 00:19:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45137.80631; Sat, 05 Dec 2020 00:19:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klLIH-0005xx-9Z; Sat, 05 Dec 2020 00:19:21 +0000
Received: by outflank-mailman (input) for mailman id 45137;
 Sat, 05 Dec 2020 00:19:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2iti=FJ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1klLIF-0005xs-KZ
 for xen-devel@lists.xenproject.org; Sat, 05 Dec 2020 00:19:19 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d9b0eeeb-b21e-4960-9b8b-198204a6c2ee;
 Sat, 05 Dec 2020 00:19:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d9b0eeeb-b21e-4960-9b8b-198204a6c2ee
Date: Fri, 4 Dec 2020 16:19:17 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607127558;
	bh=5IcshQBi5KsJRE/oFzfLJbrrl1vS9oE/5TgIXxD8o6I=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=FpWRNc6+h5MKQg81DNWEpioOArimv0Zo63n0xY7CYbE6/ngNaPZMknIdPDsTe9kOo
	 5m+XZTjFYqBvEN1cEvZPs+9xnr936/N4dau8t4m0fNrCbGIsFq5fNQXOqVrLs3Scmy
	 pfEyyt6wEm3s1LD4Xu/PET674XmeotzQYWiglflG8K/WJNCccI6UYytWY/Yi4NfftW
	 +IULPiJwBBKpCyn2NvhgcU/okAwuucjetx4ifTvBuuBoPBA4xohAUPWRf0INKlT/dM
	 7x6QVjRbe0FKz6ez6Hh12Q7XihfrcCVkNdoYs2T3jM1YNvdFHFUElpp8jMFLME6N5v
	 +zs25j8zuQKxw==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <bertrand.marquis@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 4/7] xen/arm: Add handler for ID registers on arm64
In-Reply-To: <6db611491b25591829b9408267bd9bd50e266fe2.1606742184.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.21.2012041617510.32240@sstabellini-ThinkPad-T480s>
References: <cover.1606742184.git.bertrand.marquis@arm.com> <6db611491b25591829b9408267bd9bd50e266fe2.1606742184.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 30 Nov 2020, Bertrand Marquis wrote:
> Add vsysreg emulation for registers trapped when the TID3 bit is
> activated in HCR_EL2.
> The emulation returns the value stored in the guest_cpuinfo structure
> for most registers, and the hardware value for registers not stored in
> the structure (those are mostly registers existing only as a provision
> for future feature use and which have no definition right now).
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
> Changes in V2: rebase
> ---
>  xen/arch/arm/arm64/vsysreg.c | 49 ++++++++++++++++++++++++++++++++++++
>  1 file changed, 49 insertions(+)
> 
> diff --git a/xen/arch/arm/arm64/vsysreg.c b/xen/arch/arm/arm64/vsysreg.c
> index 8a85507d9d..970ef51603 100644
> --- a/xen/arch/arm/arm64/vsysreg.c
> +++ b/xen/arch/arm/arm64/vsysreg.c
> @@ -69,6 +69,14 @@ TVM_REG(CONTEXTIDR_EL1)
>          break;                                                          \
>      }
>  
> +/* Macro to easily generate a case for ID register emulation */
> +#define GENERATE_TID3_INFO(reg,field,offset)                            \

In addition to Volodymyr's comment, this should be for code style:

  GENERATE_TID3_INFO(reg, field, offset)


> +    case HSR_SYSREG_##reg:                                              \
> +    {                                                                   \
> +        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr,   \
> +                          1, guest_cpuinfo.field.bits[offset]);         \
> +    }
> +
>  void do_sysreg(struct cpu_user_regs *regs,
>                 const union hsr hsr)
>  {
> @@ -259,6 +267,47 @@ void do_sysreg(struct cpu_user_regs *regs,
>           */
>          return handle_raz_wi(regs, regidx, hsr.sysreg.read, hsr, 1);
>  
> +    /*
> +     * HCR_EL2.TID3
> +     *
> +     * This traps most of the identification registers used by a guest
> +     * to identify the processor features.
> +     */
> +    GENERATE_TID3_INFO(ID_PFR0_EL1, pfr32, 0)
> +    GENERATE_TID3_INFO(ID_PFR1_EL1, pfr32, 1)
> +    GENERATE_TID3_INFO(ID_PFR2_EL1, pfr32, 2)
> +    GENERATE_TID3_INFO(ID_DFR0_EL1, dbg32, 0)
> +    GENERATE_TID3_INFO(ID_DFR1_EL1, dbg32, 1)
> +    GENERATE_TID3_INFO(ID_AFR0_EL1, aux32, 0)
> +    GENERATE_TID3_INFO(ID_MMFR0_EL1, mm32, 0)
> +    GENERATE_TID3_INFO(ID_MMFR1_EL1, mm32, 1)
> +    GENERATE_TID3_INFO(ID_MMFR2_EL1, mm32, 2)
> +    GENERATE_TID3_INFO(ID_MMFR3_EL1, mm32, 3)
> +    GENERATE_TID3_INFO(ID_MMFR4_EL1, mm32, 4)
> +    GENERATE_TID3_INFO(ID_MMFR5_EL1, mm32, 5)
> +    GENERATE_TID3_INFO(ID_ISAR0_EL1, isa32, 0)
> +    GENERATE_TID3_INFO(ID_ISAR1_EL1, isa32, 1)
> +    GENERATE_TID3_INFO(ID_ISAR2_EL1, isa32, 2)
> +    GENERATE_TID3_INFO(ID_ISAR3_EL1, isa32, 3)
> +    GENERATE_TID3_INFO(ID_ISAR4_EL1, isa32, 4)
> +    GENERATE_TID3_INFO(ID_ISAR5_EL1, isa32, 5)
> +    GENERATE_TID3_INFO(ID_ISAR6_EL1, isa32, 6)
> +    GENERATE_TID3_INFO(MVFR0_EL1, mvfr, 0)
> +    GENERATE_TID3_INFO(MVFR1_EL1, mvfr, 1)
> +    GENERATE_TID3_INFO(MVFR2_EL1, mvfr, 2)
> +    GENERATE_TID3_INFO(ID_AA64PFR0_EL1, pfr64, 0)
> +    GENERATE_TID3_INFO(ID_AA64PFR1_EL1, pfr64, 1)
> +    GENERATE_TID3_INFO(ID_AA64DFR0_EL1, dbg64, 0)
> +    GENERATE_TID3_INFO(ID_AA64DFR1_EL1, dbg64, 1)
> +    GENERATE_TID3_INFO(ID_AA64ISAR0_EL1, isa64, 0)
> +    GENERATE_TID3_INFO(ID_AA64ISAR1_EL1, isa64, 1)
> +    GENERATE_TID3_INFO(ID_AA64MMFR0_EL1, mm64, 0)
> +    GENERATE_TID3_INFO(ID_AA64MMFR1_EL1, mm64, 1)
> +    GENERATE_TID3_INFO(ID_AA64MMFR2_EL1, mm64, 2)
> +    GENERATE_TID3_INFO(ID_AA64AFR0_EL1, aux64, 0)
> +    GENERATE_TID3_INFO(ID_AA64AFR1_EL1, aux64, 1)
> +    GENERATE_TID3_INFO(ID_AA64ZFR0_EL1, zfr64, 0)
> +
>      /*
>       * HCR_EL2.TIDCP
>       *
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Sat Dec 05 00:37:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Dec 2020 00:37:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45147.80643 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klLZO-00084P-U1; Sat, 05 Dec 2020 00:37:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45147.80643; Sat, 05 Dec 2020 00:37:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klLZO-00084I-QW; Sat, 05 Dec 2020 00:37:02 +0000
Received: by outflank-mailman (input) for mailman id 45147;
 Sat, 05 Dec 2020 00:37:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2iti=FJ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1klLZO-00084D-8K
 for xen-devel@lists.xenproject.org; Sat, 05 Dec 2020 00:37:02 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 59733e19-0468-4028-8d44-c1f968e60e5b;
 Sat, 05 Dec 2020 00:37:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 59733e19-0468-4028-8d44-c1f968e60e5b
Date: Fri, 4 Dec 2020 16:36:59 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607128620;
	bh=yma7yq/yKCuI+pjFvNwOtQ17RgOO/1A/Xl2Ety3bIsQ=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=T5HBmsUzCbCKSu/uyQqcx8KeKgurIwq9LaXufHrrLzzKW5QZEOvz3P4HwpkvHuN/K
	 ZygOloC/XJFCke5dsLpqkekTjbHRlToYgEPg9S6zEk06VoGgH8WHsV/o7/23J3ypRw
	 i9ZBXoyHzAZjd57hbcjtZOEXXGtKIkRSq8rPdOAQGG1j21jd4VuI8s4dqaMzeFh7eb
	 ZQdRW77MD0GZWTxwbl7UYSs22GN2tQ/BT/5WxTwaQnEABuz5drM/8g9Knze1wKdIuH
	 6Zkz6tMx1KEKMpgE9EaopOqqywNwEeKvSrGCnjK+ZxhZNlIoc52ZcpQDFTwnxZdUKA
	 BcjIUgWzMzT6A==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 5/7] xen/arm: Add handler for cp15 ID registers
In-Reply-To: <2F7BDAAC-4253-4D92-A995-12AA1361AE35@arm.com>
Message-ID: <alpine.DEB.2.21.2012041636340.32240@sstabellini-ThinkPad-T480s>
References: <cover.1606742184.git.bertrand.marquis@arm.com> <86c96cd3895bf968f94010c0f4ee8dce7f0338e8.1606742184.git.bertrand.marquis@arm.com> <87lfei7fj5.fsf@epam.com> <AB32AAFF-DD1D-4B13-ABC0-06F460E95E1C@arm.com> <87sg8p687j.fsf@epam.com>
 <87243486-2A58-4497-B566-5FDE4158D18E@arm.com> <87h7p55uwj.fsf@epam.com> <80D814EA-B0FC-4975-BB08-4D7DAE8C8B56@arm.com> <2F7BDAAC-4253-4D92-A995-12AA1361AE35@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-128729673-1607128620=:32240"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-128729673-1607128620=:32240
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Wed, 2 Dec 2020, Bertrand Marquis wrote:
> > On 2 Dec 2020, at 11:12, Bertrand Marquis <bertrand.marquis@arm.com> wrote:
> > 
> > HI Volodymyr,
> > 
> >> On 1 Dec 2020, at 16:54, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> wrote:
> >> 
> >> 
> >> Hi,
> >> 
> >> Bertrand Marquis writes:
> >> 
> >>> Hi Volodymyr,
> >>> 
> >>>> On 1 Dec 2020, at 12:07, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> wrote:
> >>>> 
> >>>> 
> >>>> Hi,
> >>>> 
> >>>> Bertrand Marquis writes:
> >>>> 
> >>>>> Hi,
> >>>>> 
> >>>>>> On 30 Nov 2020, at 20:31, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> wrote:
> >>>>>> 
> >>>>>> 
> >>>>>> Bertrand Marquis writes:
> >>>>>> 
> >>>>>>> Add support for emulation of cp15 based ID registers (on arm32 or when
> >>>>>>> running a 32bit guest on arm64).
> >>>>>>> The handlers return the values stored in the guest_cpuinfo
> >>>>>>> structure.
> >>>>>>> In the current state the MVFR registers are not supported.
> >>>>>> 
> >>>>>> It is unclear what will happen with registers that are not covered by
> >>>>>> the guest_cpuinfo structure. According to the ARM ARM, it is
> >>>>>> implementation defined whether such accesses will be trapped. On the
> >>>>>> other hand, there are many registers which are RAZ. So, a well-behaved
> >>>>>> guest can try to read one of those registers and will get an undefined
> >>>>>> instruction exception instead of just reading all zeroes.
> >>>>> 
> >>>>> This is true at this point in the patch series, but it is solved by the
> >>>>> next patch, which adds proper handling of those registers (add CP10
> >>>>> exception support), at least for the MVFR ones.
> >>>>> 
> >>>>> From the ARM ARM point of view, I think I did handle all the registers
> >>>>> listed. If you think some are missing please point me to them, as I do
> >>>>> not completely understand what the “registers not covered” are, unless
> >>>>> you mean the MVFR ones.
> >>>> 
> >>>> Well, I may be wrong for the aarch32 case, but for aarch64, there are a
> >>>> number of reserved registers in the ID range. Those registers should read
> >>>> as zero. You can find them in the section "C5.1.6 op0==0b11, Moves to and
> >>>> from non-debug System registers and Special-purpose registers" of ARM
> >>>> DDI 0487B.a. Check out "Table C5-6 System instruction encodings for
> >>>> non-Debug System register accesses".
> >>> 
> >>> The point of the series is to handle all registers trapped due to the TID3
> >>> bit in HCR_EL2.
> >>> 
> >>> And I think I handled all of them, but I might be wrong.
> >>> 
> >>> Handling all registers for op0==0b11 would cover a lot more things.
> >>> This can be done of course, but it was not the point of this series.
> >>> 
> >>> The listing in the HCR_EL2 documentation is pretty complete and if I missed
> >>> any register there please tell me, but I do not understand from the
> >>> documentation that all registers with op0 == 3 are trapped by TID3.
> >> 
> >> My concern is that the same code may observe different effects when
> >> running in baremetal mode and as a VM.
> >> 
> >> For example, suppose we are trying to run a newer version of a kernel that
> >> supports some hypothetical ARMv8.9, and it tries to read a new ID
> >> register which is absent in ARMv8.2. There are several possible cases:
> >> 
> >> 0. It runs as baremetal code on a compatible architecture. So it just
> >> accesses this register and all is fine.
> >> 
> >> 1. It runs as baremetal code on an older ARMv8 architecture. The current
> >> reference manual states that those registers should read as zero, so
> >> all is fine as well.
> >> 
> >> 2. It runs as a VM on an older architecture. It is IMPLEMENTATION DEFINED
> >> whether HCR_EL2.TID3 will cause traps on accesses to unassigned registers:
> >> 
> >> 2a. Platform does not cause traps and software reads zeros directly from
> >> real registers. This is a good outcome.
> >> 
> >> 2b. The platform causes a trap, and your code injects the undefined
> >> instruction exception. This is the case that bothers me. Well-written
> >> code that tries to be compatible with different versions of the
> >> architecture will fail there.
> >> 
> >> 3. It runs as a VM on a newer architecture. I can only speculate there,
> >> but I think that the list of registers trapped by HCR_EL2.TID3 will be
> >> extended when new features are added. At least, this does not contradict
> >> the current IMPLEMENTATION DEFINED constraint. In this case, the hypervisor
> >> will inject an exception when the guest tries to access a valid register.
> >> 
> >> 
> >> So, in my opinion and to be compatible with the reference manual, we
> >> should allow guests to read "Reserved, RAZ" registers.
> > 
> > Thanks for the very detailed explanation, I now better understand what you
> > mean here.
> > 
> > I will try to check whether we could return RAZ for “other” op0==3 registers
> > and what should be done on cp15 registers to have something equivalent.
> > 
> 
> In fact I need to add handling for the other registers mentioned by the TID3
> description in the Armv8 architecture manual:
> "This field traps all MRS accesses to registers in the following range that are not
> already mentioned in this field description: Op0 == 3, op1 == 0, CRn == c0,
>  CRm == {c1-c7}, op2 == {0-7}.”
> "This field traps all MRC accesses to encodings in the following range that are not
> already mentioned in this field description: coproc == p15, opc1 == 0, CRn == c0,
> CRm == {c2-c7}, opc2 == {0-7}.”
> 
> I will check how I can do that.
> 
> Thanks a lot for the review.

Well spotted Volodymyr!
--8323329-128729673-1607128620=:32240--


From xen-devel-bounces@lists.xenproject.org Sat Dec 05 01:06:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Dec 2020 01:06:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45153.80655 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klM21-0007Qp-84; Sat, 05 Dec 2020 01:06:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45153.80655; Sat, 05 Dec 2020 01:06:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klM21-0007Qi-4f; Sat, 05 Dec 2020 01:06:37 +0000
Received: by outflank-mailman (input) for mailman id 45153;
 Sat, 05 Dec 2020 01:06:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2iti=FJ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1klM1z-0007Qd-Qj
 for xen-devel@lists.xenproject.org; Sat, 05 Dec 2020 01:06:35 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 389a5772-acc8-479c-9534-81766603aa14;
 Sat, 05 Dec 2020 01:06:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 389a5772-acc8-479c-9534-81766603aa14
Date: Fri, 4 Dec 2020 17:06:33 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607130394;
	bh=iBU7bw6in51y2nRtJ6NGup3X29A4Q5AMNEUA6SQ1rF0=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=gO9uRt33cRAOwx8o/TFxtWlF5zTCHMd6kX1CDAw+buDbHWDvIvhY6dbPpzU0S6uS/
	 kBWCM0nCPwcx8MbH5V0h4U1AZfQEAwBj9Pyde0ZQB8AfQE/NdxOK+Lkx4upVvo+D/2
	 hUxfpnk2puBXZH/G3pTLvSvAcTiLOZ/zytzy3gJXXmx2NE4YeelSjTq5YGu9Yw5lt7
	 2ENGJgkfzYsfbsj9GohHx38TPIk+xhMKnc+usrt7n8qxXOCPaboQg6OaMUz9tJnvUD
	 oiV/BM18PTdE+/EoPQrCl3kyz8wX5FjPJmzFOI/pvGhThNxIafELtXAqxxN6VBW3hP
	 A1VDCChxdbbuw==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: Juergen Gross <jgross@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    xen-devel@lists.xenproject.org
Subject: Re: [PATCH v3] xen: add support for automatic debug key actions in
 case of crash
In-Reply-To: <22190c77-eb35-5b72-7d72-34800c3f052f@suse.com>
Message-ID: <alpine.DEB.2.21.2012041706090.32240@sstabellini-ThinkPad-T480s>
References: <20201126080340.6154-1-jgross@suse.com> <22190c77-eb35-5b72-7d72-34800c3f052f@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 26 Nov 2020, Jan Beulich wrote:
> On 26.11.2020 09:03, Juergen Gross wrote:
> > When the host crashes it would sometimes be nice to have additional
> > debug data available which could be produced via debug keys, but
> > halting the server for manual intervention might be impossible due to
> > the need to reboot/kexec rather sooner than later.
> > 
> > Add support for automatic debug key actions in case of crashes which
> > can be activated via boot- or runtime-parameter.
> > 
> > Depending on the type of crash the desired data might be different, so
> > support different settings for the possible types of crashes.
> > 
> > The parameter is "crash-debug" with the following syntax:
> > 
> >   crash-debug-<type>=<string>
> > 
> > with <type> being one of:
> > 
> >   panic, hwdom, watchdog, kexeccmd, debugkey
> > 
> > and <string> a sequence of debug key characters with '+' having the
> > special semantics of a 10 millisecond pause.
> > 
> > So "crash-debug-watchdog=0+0qr" would result in special output in case
> > of watchdog triggered crash (dom0 state, 10 ms pause, dom0 state,
> > domain info, run queues).
> > 
> > Signed-off-by: Juergen Gross <jgross@suse.com>
> > ---
> > V2:
> > - switched special character '.' to '+' (Jan Beulich)
> > - 10 ms instead of 1 s pause (Jan Beulich)
> > - added more text to the boot parameter description (Jan Beulich)
> > 
> > V3:
> > - added const (Jan Beulich)
> > - thorough test of crash reason parameter (Jan Beulich)
> > - kexeccmd case should depend on CONFIG_KEXEC (Jan Beulich)
> > - added dummy get_irq_regs() helper on Arm
> > 
> > Signed-off-by: Juergen Gross <jgross@suse.com>
> 
> Except for the Arm aspect, where I'm not sure using
> guest_cpu_user_regs() is correct in all cases,
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 

for the trivial ARM bit:

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


From xen-devel-bounces@lists.xenproject.org Sat Dec 05 01:34:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Dec 2020 01:34:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45163.80667 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klMSu-0002A8-Fs; Sat, 05 Dec 2020 01:34:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45163.80667; Sat, 05 Dec 2020 01:34:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klMSu-0002A0-9J; Sat, 05 Dec 2020 01:34:24 +0000
Received: by outflank-mailman (input) for mailman id 45163;
 Sat, 05 Dec 2020 01:34:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2iti=FJ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1klMSt-00029v-2k
 for xen-devel@lists.xenproject.org; Sat, 05 Dec 2020 01:34:23 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id efa21118-354d-4e9d-ae85-f980706a30f7;
 Sat, 05 Dec 2020 01:34:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: efa21118-354d-4e9d-ae85-f980706a30f7
Date: Fri, 4 Dec 2020 17:34:20 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607132061;
	bh=Lt/C4lcQ7J2En/RDWQMghfeo3pvnUFnGoPzH5ApaEXI=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=gEE7kXyG7cEHOPuYT1SjhsAUaIEIjLCuhSbFBxAS3rW4lJXO0BVAZGw0xbQPqwcuN
	 urPsFnCKIZILkdb+G3cqtVcCmLPr/I8beaUOT6yNqPP/AC4tNWWNEqZEBwfWeU2XFg
	 DBerUZRFQouIxUpfMtv5Yac7xILwRVOKsSHCfs5daK2lMhUXAzMBPbMU6TwbX3rnMX
	 pYwEdXyXhF6yinw1SeUZGfaftmVWwvLOT12cUlWuzHsIbhN7yzIKsUypftCZu6ggJI
	 YWR7HXau8nlaq3NrIMj9HdhuODYLkRlm9SDIC1ZMhI1nWafGpIK0kqhkzzSyoKHKHm
	 uvgW9ujX/3Ozw==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: "Durrant, Paul" <pdurrant@amazon.co.uk>
cc: Andrew Cooper <andrew.cooper3@citrix.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Jan Beulich <jbeulich@suse.com>, "paul@xen.org" <paul@xen.org>, 
    "Elnikety, Eslam" <elnikety@amazon.com>, 
    'Ian Jackson' <iwj@xenproject.org>, 'Wei Liu' <wl@xen.org>, 
    'Anthony PERARD' <anthony.perard@citrix.com>, 
    'George Dunlap' <george.dunlap@citrix.com>, 
    'Christian Lindig' <christian.lindig@citrix.com>, 
    'David Scott' <dave@recoil.org>, 
    'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>, 
    =?UTF-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v5 1/4] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_evtchn_fifo, ...
In-Reply-To: <19beb5b5a651415c83dfbdaa533e7bed@EX13D32EUC003.ant.amazon.com>
Message-ID: <alpine.DEB.2.21.2012041731540.32240@sstabellini-ThinkPad-T480s>
References: <20201203124159.3688-1-paul@xen.org> <20201203124159.3688-2-paul@xen.org> <fea91a65-1d7c-cd46-81a2-9a6bcb690ed1@suse.com> <00ee01d6c98b$507af1c0$f170d540$@xen.org> <8a4a2027-0df3-aee2-537a-3d2814b329ec@suse.com> <00f601d6c996$ce3908d0$6aab1a70$@xen.org>
 <946280c7-c7f7-c760-c0d3-db91e6cde68a@suse.com> <011201d6ca16$ae14ac50$0a3e04f0$@xen.org> <4fb9fb4c-5849-25f1-ff72-ba3a046d3fd8@suse.com> <df1df316-9512-7b0c-fde1-aa4fc60ac70b@xen.org> <5de9f051-4071-4e09-528c-c1fb8345dc25@citrix.com>
 <alpine.DEB.2.21.2012040940160.32240@sstabellini-ThinkPad-T480s> <7184a2de-f711-9683-3db6-7b880def022d@citrix.com> <19beb5b5a651415c83dfbdaa533e7bed@EX13D32EUC003.ant.amazon.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1399719437-1607132013=:32240"
Content-ID: <alpine.DEB.2.21.2012041733430.32240@sstabellini-ThinkPad-T480s>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1399719437-1607132013=:32240
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2012041733431.32240@sstabellini-ThinkPad-T480s>

On Fri, 4 Dec 2020, Durrant, Paul wrote:
> > -----Original Message-----
> > From: Andrew Cooper <andrew.cooper3@citrix.com>
> > Sent: 04 December 2020 17:45
> > To: Stefano Stabellini <sstabellini@kernel.org>
> > Cc: Julien Grall <julien@xen.org>; Jan Beulich <jbeulich@suse.com>; paul@xen.org; Durrant, Paul
> > <pdurrant@amazon.co.uk>; Elnikety, Eslam <elnikety@amazon.com>; 'Ian Jackson' <iwj@xenproject.org>;
> > 'Wei Liu' <wl@xen.org>; 'Anthony PERARD' <anthony.perard@citrix.com>; 'George Dunlap'
> > <george.dunlap@citrix.com>; 'Christian Lindig' <christian.lindig@citrix.com>; 'David Scott'
> > <dave@recoil.org>; 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>; 'Roger Pau Monné'
> > <roger.pau@citrix.com>; xen-devel@lists.xenproject.org
> > Subject: RE: [EXTERNAL] [PATCH v5 1/4] domctl: introduce a new domain create flag,
> > XEN_DOMCTL_CDF_evtchn_fifo, ...
> > 
> > CAUTION: This email originated from outside of the organization. Do not click links or open
> > attachments unless you can confirm the sender and know the content is safe.
> > 
> > 
> > 
> > On 04/12/2020 17:41, Stefano Stabellini wrote:
> > >>> FAOD, I am sure there might be other features that need to be
> > >>> disabled. But we have to start somewhere :).
> > >> Absolutely top of the list, importance wise, is so we can test different
> > >> configurations, without needing to rebuild the hypervisor (and to a
> > >> lesser extent, without having to reboot).
> > >>
> > >> It is a mistake that events/grants/etc were ever available unilaterally
> > >> in HVM guests.  This is definitely a step in the right direction (but I
> > >> thought it would be too rude to ask Paul to make all of those CDF flags
> > >> at once).
> > > +1
> > >
> > > For FuSa we'll need to be able to disable them at some point soon.
> > 
> > FWIW, I have a proper plan for this stuff, which starts alongside the
> > fixed toolstack ABI, and will cover all aspects of optional
> > functionality in a domain.
> > 
> 
> OK. Can we live with this series as it stands until that point? There is some urgency to get at least these two things fixed.

I am happy to take things one step at a time, and this is a good step
forward.
--8323329-1399719437-1607132013=:32240--


From xen-devel-bounces@lists.xenproject.org Sat Dec 05 05:17:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Dec 2020 05:17:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45188.80685 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klPw6-0000mQ-Db; Sat, 05 Dec 2020 05:16:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45188.80685; Sat, 05 Dec 2020 05:16:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klPw6-0000mJ-AA; Sat, 05 Dec 2020 05:16:46 +0000
Received: by outflank-mailman (input) for mailman id 45188;
 Sat, 05 Dec 2020 05:16:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klPw5-0000m6-0I; Sat, 05 Dec 2020 05:16:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klPw4-0003Vz-SD; Sat, 05 Dec 2020 05:16:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klPw4-0006vt-HA; Sat, 05 Dec 2020 05:16:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1klPw4-0005IN-Gh; Sat, 05 Dec 2020 05:16:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pnExG2D1+grtKOWLo5B4MZIlCiB0wnwOEla/itdqaCc=; b=X2TfuEfS6XhyJVIlsyflefc74x
	y298k/UM2gdjKCDL/sh97qsEegOFThzTs7/Rm40GFNAUEkFaWdXVB+kNXLJf0Wtcp7qIr6QLGp0C3
	ZWSRyecSVIAs6W0JAElGA752rYokZ00G0j6xM4SxBLv5gkCGH7ZeB/kKzb2R0/insnNw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157214-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157214: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=265eabc905eaa38b7c6deb3fedb83fe6d37e9b11
X-Osstest-Versions-That:
    ovmf=97e2b622d1f32ba35194dbca104c3bf918bf3271
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 05 Dec 2020 05:16:44 +0000

flight 157214 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157214/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 265eabc905eaa38b7c6deb3fedb83fe6d37e9b11
baseline version:
 ovmf                 97e2b622d1f32ba35194dbca104c3bf918bf3271

Last test of basis   157204  2020-12-04 12:10:48 Z    0 days
Testing same since   157214  2020-12-05 01:55:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Divneil Rai Wadhawan <divneil.r.wadhawan@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   97e2b622d1..265eabc905  265eabc905eaa38b7c6deb3fedb83fe6d37e9b11 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Dec 05 06:52:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Dec 2020 06:52:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45211.80699 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klRQE-0002jv-KT; Sat, 05 Dec 2020 06:51:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45211.80699; Sat, 05 Dec 2020 06:51:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klRQE-0002jo-H9; Sat, 05 Dec 2020 06:51:58 +0000
Received: by outflank-mailman (input) for mailman id 45211;
 Sat, 05 Dec 2020 06:51:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klRQC-0002jg-MU; Sat, 05 Dec 2020 06:51:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klRQC-0005T7-Cd; Sat, 05 Dec 2020 06:51:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klRQC-0001hO-4J; Sat, 05 Dec 2020 06:51:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1klRQC-0003A3-3l; Sat, 05 Dec 2020 06:51:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vakQb4D6Vcqtrh9JUaEIcD9+2Um4GoqoO3Bbpy/OqeQ=; b=vgU9h/2C8+bmJ8UNxcZtyRlpGx
	ezQKCCKhDA4VU9SX65Zz12Gp4WXnSNEJi/Q6CK55KwzpbzaxagK1JvUOiIlMFx45h6FDCamhOBoWk
	5FJ2eG+S7CgnMg4kBLgkDmIs/uVfPdsQ+Bxn2HZxjazXd06PulO2AGRpy0SRUmFqHEE8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157205-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157205: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-vhd:guest-start:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=be3755af37263833cb3b1c6b1f2ba219bdf97ec3
X-Osstest-Versions-That:
    xen=aec46884784c2494a30221da775d4ac2c43a4d42
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 05 Dec 2020 06:51:56 +0000

flight 157205 xen-unstable real [real]
flight 157217 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157205/
http://logs.test-lab.xenproject.org/osstest/logs/157217/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-vhd      13 guest-start         fail pass in 157217-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-vhd     14 migrate-support-check fail in 157217 never pass
 test-armhf-armhf-xl-vhd 15 saverestore-support-check fail in 157217 never pass
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10       fail  like 157192
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157192
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157192
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157192
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157192
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157192
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157192
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157192
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157192
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157192
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157192
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157192
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  be3755af37263833cb3b1c6b1f2ba219bdf97ec3
baseline version:
 xen                  aec46884784c2494a30221da775d4ac2c43a4d42

Last test of basis   157192  2020-12-04 01:51:29 Z    1 days
Testing same since   157205  2020-12-04 13:38:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Diederik de Haas <didi.debian@cknow.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   aec4688478..be3755af37  be3755af37263833cb3b1c6b1f2ba219bdf97ec3 -> master


From xen-devel-bounces@lists.xenproject.org Sat Dec 05 07:16:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Dec 2020 07:16:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45225.80735 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klRoH-0005A5-0U; Sat, 05 Dec 2020 07:16:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45225.80735; Sat, 05 Dec 2020 07:16:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klRoG-00059y-Sx; Sat, 05 Dec 2020 07:16:48 +0000
Received: by outflank-mailman (input) for mailman id 45225;
 Sat, 05 Dec 2020 07:16:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klRoG-00059q-Dg; Sat, 05 Dec 2020 07:16:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klRoG-0005zo-5D; Sat, 05 Dec 2020 07:16:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klRoF-0002I1-QJ; Sat, 05 Dec 2020 07:16:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1klRoF-0002N0-Pq; Sat, 05 Dec 2020 07:16:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=xq2CoDoODGlXW1kUc1rtNQTh5UZ2RGnqnr0GoGP8EDY=; b=G3Riyo3E2s2pCW+IiMAjVwsHLN
	GXtrKswLWQIkr9tSI5VdvzOnZ3TJ4Y3X4fMmvMbYl9jgPIK5mJRz6H6X4efrp69SFzHm6sDzEkT9D
	tAY1ZIxjacpcK/0SOsLZqAGQh6vUL1WbG0N5H7rKj5TkjrUaNvIfzeylEQkG5YEjCM5Y=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157216-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157216: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=4523be1ed7f420dabdf42bae2dc33e13aa46b4e4
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 05 Dec 2020 07:16:47 +0000

flight 157216 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157216/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              4523be1ed7f420dabdf42bae2dc33e13aa46b4e4
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  148 days
Failing since        151818  2020-07-11 04:18:52 Z  147 days  142 attempts
Testing same since   157216  2020-12-05 04:19:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 31452 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Dec 05 07:41:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Dec 2020 07:41:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45240.80756 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klSC2-0008JZ-Gz; Sat, 05 Dec 2020 07:41:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45240.80756; Sat, 05 Dec 2020 07:41:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klSC2-0008JS-DO; Sat, 05 Dec 2020 07:41:22 +0000
Received: by outflank-mailman (input) for mailman id 45240;
 Sat, 05 Dec 2020 07:41:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nC0X=FJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1klSC1-0008JN-84
 for xen-devel@lists.xenproject.org; Sat, 05 Dec 2020 07:41:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b7a6ad24-f473-4425-8d68-be403e4c5339;
 Sat, 05 Dec 2020 07:41:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4BAA6AB63;
 Sat,  5 Dec 2020 07:41:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7a6ad24-f473-4425-8d68-be403e4c5339
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607154079; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=LXQhJTON3yW+DHRMhOGOJwrdrbXm044P08RZ1yd5BR8=;
	b=NZpGzBbQYU3PLJVqQNYA0ra0wspHSoTfE7imUyzfxxqQHq0k58GriI519p+q2iLySSm0y2
	L4OxFLyEEfOPNGlcU230/rYTiko7C2ax2cmCjkYwBDgsLKWy+KtqjUjiXlaGCKyaRAzFoT
	/77Kyeo+AcO4ip0boHqCiR7dZeiLMh8=
Subject: Re: [PATCH v2 00/17] xen: support per-cpupool scheduling granularity
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201201082128.15239-1-jgross@suse.com>
 <a12de6ea-584c-49ca-3a09-f94b65933a62@citrix.com>
From: Jürgen Groß <jgross@suse.com>
Message-ID: <9a09bc6f-ab4b-4792-98f1-b3b811c14649@suse.com>
Date: Sat, 5 Dec 2020 08:41:18 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <a12de6ea-584c-49ca-3a09-f94b65933a62@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="YjpNXze1rb8tauUlInMiNPPVD8rEjefN7"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--YjpNXze1rb8tauUlInMiNPPVD8rEjefN7
Content-Type: multipart/mixed; boundary="lW5klSyhbMU2ukONhkXu6RIv80SCpfv28";
 protected-headers="v1"
From: Jürgen Groß <jgross@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Message-ID: <9a09bc6f-ab4b-4792-98f1-b3b811c14649@suse.com>
Subject: Re: [PATCH v2 00/17] xen: support per-cpupool scheduling granularity
References: <20201201082128.15239-1-jgross@suse.com>
 <a12de6ea-584c-49ca-3a09-f94b65933a62@citrix.com>
In-Reply-To: <a12de6ea-584c-49ca-3a09-f94b65933a62@citrix.com>

--lW5klSyhbMU2ukONhkXu6RIv80SCpfv28
Content-Type: multipart/mixed;
 boundary="------------8EBDA8785DD754ABC9BA4365"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------8EBDA8785DD754ABC9BA4365
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 05.12.20 00:53, Andrew Cooper wrote:
> On 01/12/2020 08:21, Juergen Gross wrote:
>> Support scheduling granularity per cpupool. Setting the granularity is
>> done via hypfs, which needed to gain dynamical entries for that
>> purpose.
>>
>> Apart from the hypfs related additional functionality the main change
>> for cpupools was the support for moving a domain to a new granularity,
>> as this requires to modify the scheduling unit/vcpu relationship.
>>
>> I have tried to do the hypfs modifications in a rather generic way in
>> order to be able to use the same infrastructure in other cases, too
>> (e.g. for per-domain entries).
>>
>> The complete series has been tested by creating cpupools with different
>> granularities and moving busy and idle domains between those.
>>
>> Changes in V2:
>> - Added several new patches, especially for some further cleanups in
>>    cpupool.c.
>> - Completely reworked the locking scheme with dynamical directories:
>>    locking of resources (cpupools in this series) is now done via new
>>    callbacks which are called when traversing the hypfs tree. This
>>    removes the need to add locking to each hypfs related cpupool
>>    function and it ensures data integrity across multiple callbacks.
>> - Reordered the first few patches in order to have already acked
>>    patches in pure cleanup patches first.
>> - Addressed several comments.
>>
>> Juergen Gross (17):
>>    xen/cpupool: add cpu to sched_res_mask when removing it from cpupool
>>    xen/cpupool: add missing bits for per-cpupool scheduling granularity
>>    xen/cpupool: sort included headers in cpupool.c
>>    xen/cpupool: switch cpupool id to unsigned
>>    xen/cpupool: switch cpupool list to normal list interface
>>    xen/cpupool: use ERR_PTR() for returning error cause from
>>      cpupool_create()
>>    xen/cpupool: support moving domain between cpupools with different
>>      granularity
>>    docs: fix hypfs path documentation
>>    xen/hypfs: move per-node function pointers into a dedicated struct
>>    xen/hypfs: pass real failure reason up from hypfs_get_entry()
>>    xen/hypfs: add getsize() and findentry() callbacks to hypfs_funcs
>>    xen/hypfs: add new enter() and exit() per node callbacks
>>    xen/hypfs: support dynamic hypfs nodes
>>    xen/hypfs: add support for id-based dynamic directories
>>    xen/cpupool: add cpupool directories
>>    xen/cpupool: add scheduling granularity entry to cpupool entries
>>    xen/cpupool: make per-cpupool sched-gran hypfs node writable
>
> Gitlab CI is fairly (but not completely) reliably hitting a failure in
> ARM randconfig against this series only.
>
> https://gitlab.com/xen-project/patchew/xen/-/pipelines/225445864 is one
> example.
>
> Error is:
>
> cpupool.c:102:12: error: 'sched_gran_get' defined but not used
> [-Werror=unused-function]
>   102 | static int sched_gran_get(const char *str, enum sched_gran *mode)
>       |            ^~~~~~~~~~~~~~
>

Ah, this is without CONFIG_HYPFS.

Will fix.
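[For readers following the thread: the error above is the classic pattern
where a static helper's only caller is compiled out by a Kconfig option,
leaving the definition unreferenced. A minimal, self-contained sketch of
the pattern and the usual guard follows; the type and names merely mirror
the error message and are illustrative, not the actual Xen sources.]

```c
#include <assert.h>
#include <string.h>

/* Illustrative stand-in for the Xen enum; NOT the real hypervisor code. */
enum sched_gran { SCHED_GRAN_cpu, SCHED_GRAN_core, SCHED_GRAN_socket };

#define CONFIG_HYPFS 1  /* pretend the relevant Kconfig option is enabled */

#ifdef CONFIG_HYPFS
/* A static helper is fine as long as at least one caller is compiled in.
 * If its only caller (here, a hypothetical hypfs write handler) sits
 * behind CONFIG_HYPFS while the helper itself does not, a !CONFIG_HYPFS
 * randconfig build trips -Werror=unused-function -- hence guarding the
 * definition with the same #ifdef as its caller. */
static int sched_gran_get(const char *str, enum sched_gran *mode)
{
    if (!strcmp(str, "cpu"))
        *mode = SCHED_GRAN_cpu;
    else if (!strcmp(str, "core"))
        *mode = SCHED_GRAN_core;
    else if (!strcmp(str, "socket"))
        *mode = SCHED_GRAN_socket;
    else
        return -1;  /* the real code would return -EINVAL */
    return 0;
}
#endif /* CONFIG_HYPFS */
```

[An alternative to guarding the definition is marking the function with a
`__maybe_unused`-style attribute where the codebase provides one, though
the #ifdef keeps the function out of the !CONFIG_HYPFS binary entirely.]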


Juergen

--------------8EBDA8785DD754ABC9BA4365
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------8EBDA8785DD754ABC9BA4365--

--lW5klSyhbMU2ukONhkXu6RIv80SCpfv28--

--YjpNXze1rb8tauUlInMiNPPVD8rEjefN7
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/LOZ4FAwAAAAAACgkQsN6d1ii/Ey/V
agf+PWFy8YLg2pXrX47ZCELzxhnyBndFQMxSawRoYFLjY+GarBlNSPT2+9ShIjIFMRfea5uuJiHE
YyaLkqycVqd7xAL6aXlxba0uqFGZTdtNlMzh3BkS3uD6bKleAd3/S6vwzL5gTHnkqcnGhPvnt6TW
MXAvOCh9XMpZEb4j6IXjW55tSbfviWMtD3i0oQ2DlLl5gOCEmU+R0FRx9QkdKemllAGxDb2Ym6IN
zdxoyFfYmxEx48gKpfyj3iKhZY81H98iHCD9ZdJULou/OYdcGcXXoBbMvAMeU5uz7Vn8NCp1QMbF
7DI26kV+MJ/ePBlch6AzQt9BhupcIQzz25vSHwieyw==
=uQcD
-----END PGP SIGNATURE-----

--YjpNXze1rb8tauUlInMiNPPVD8rEjefN7--


From xen-devel-bounces@lists.xenproject.org Sat Dec 05 07:44:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Dec 2020 07:44:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45246.80767 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klSF1-0008UH-VV; Sat, 05 Dec 2020 07:44:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45246.80767; Sat, 05 Dec 2020 07:44:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klSF1-0008UA-SF; Sat, 05 Dec 2020 07:44:27 +0000
Received: by outflank-mailman (input) for mailman id 45246;
 Sat, 05 Dec 2020 07:44:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klSF0-0008U1-Et; Sat, 05 Dec 2020 07:44:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klSF0-0006Ww-71; Sat, 05 Dec 2020 07:44:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klSEz-0002w7-VH; Sat, 05 Dec 2020 07:44:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1klSEz-0000oh-Um; Sat, 05 Dec 2020 07:44:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cm2aUcwYL/LkW5PtkSXM/1dBVbOaWKBNuBVej2Afetg=; b=MBYHxhfe3ZX+nWoo/wXRZ3saRD
	bJI5aLH1q8fQuurUJYEQr3EbkGuguUAxMl2UlLt5FXa1/jasn84W4gdkZf5hU86EyJTmRrEXScOPc
	XosT1Nfd8Bqc0To1qD4p4+rLm1W0mMe2cuoQ1B0xV1PuX7jplcE6kXt1ufFtoUGuioD4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157213-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157213: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=b3298500b23f0b53a8d81e0d5ad98a29db71f4f0
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 05 Dec 2020 07:44:25 +0000

flight 157213 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157213/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 17 guest-localmigrate      fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl          11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-xsm      11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                b3298500b23f0b53a8d81e0d5ad98a29db71f4f0
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  126 days
Failing since        152366  2020-08-01 20:49:34 Z  125 days  213 attempts
Testing same since   157213  2020-12-04 23:40:49 Z    0 days    1 attempts

------------------------------------------------------------
3639 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 697696 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Dec 05 08:16:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Dec 2020 08:16:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45262.80782 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klSk7-0003sb-SU; Sat, 05 Dec 2020 08:16:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45262.80782; Sat, 05 Dec 2020 08:16:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klSk7-0003sU-PF; Sat, 05 Dec 2020 08:16:35 +0000
Received: by outflank-mailman (input) for mailman id 45262;
 Sat, 05 Dec 2020 08:16:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klSk6-0003sM-Ik; Sat, 05 Dec 2020 08:16:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klSk6-0007hb-A4; Sat, 05 Dec 2020 08:16:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klSk6-00047t-0O; Sat, 05 Dec 2020 08:16:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1klSk5-000294-W8; Sat, 05 Dec 2020 08:16:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=GV9UGvSkv2YINrRBO7xpIXd8bxQekWHwJcEc3CJOgTA=; b=5SURIotHvj3eV/bDti941Eu48Q
	3nN4KyqZjtNN4Tts6YT/Qtqq23O6eSI7DO4sLzdT43K6WZybw65i9btSKeUyvv9Q1vx8OzYj+wrLc
	AR06JTzHuqH8ItZJooXrPF7WTRsYgQoqb7o6GGg+aejtEXM1eUOg7nb/ezqmKYB4N/Y4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157209-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157209: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 05 Dec 2020 08:16:33 +0000

flight 157209 qemu-mainline real [real]
flight 157219 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157209/
http://logs.test-lab.xenproject.org/osstest/logs/157219/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  106 days
Failing since        152659  2020-08-21 14:07:39 Z  105 days  220 attempts
Testing same since   157142  2020-12-01 20:39:57 Z    3 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69355 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Dec 05 08:28:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Dec 2020 08:28:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45274.80798 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klSw2-000567-37; Sat, 05 Dec 2020 08:28:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45274.80798; Sat, 05 Dec 2020 08:28:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klSw1-000560-Vu; Sat, 05 Dec 2020 08:28:53 +0000
Received: by outflank-mailman (input) for mailman id 45274;
 Sat, 05 Dec 2020 08:28:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=I8Sz=FJ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1klSw0-00055v-G3
 for xen-devel@lists.xenproject.org; Sat, 05 Dec 2020 08:28:52 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ef5f5c75-57fa-42e4-9477-e431c5981b23;
 Sat, 05 Dec 2020 08:28:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef5f5c75-57fa-42e4-9477-e431c5981b23
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1607156931;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=nAbXPK3nVMptgUPtprCki/nA5ZKt7Jm7DI5lfY9ePPc=;
  b=DMAtZpau7j3rRHHVjUcs9JAn82ZEfjA9IMK5md313UvDP3OgzBvZ2QwM
   Tk2LkDHnBl6rQzWHmrQmjO+PfDpXd1jQVBiu6wHFXrX9vgNlvSvHBFG2B
   wBQ9dASyNWkZUzhvq0XE4k6RmPM6Pi5vUV07KDsEmHMoIWD+A1CkzFmxn
   s=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: eYwh+PWpzrUCxzctJsbE44MZ9g5neO0rDgE1/MapUWDDyJjJcDkADFBY/5UG205Xb4kbVvYDjz
 I4rgfs3td0I39Gu63LuRMDxRLGC9Fvev2q4sXWTiJRy4z2SesIces7TBxE7RPXc1WYWrwuwrrD
 Ez2FUj2wHJUiWHXQEKMGiTqHUYVMqEeKzWCM7QCr9KJZZ8SmnY3A8ymZ+UiqVdS5IS5yJhtwJW
 6fWQacRIO86MuS6sX/YCH5u6aacbQ5g3Q1HpGlO8o7bkloMdAKYv45pkdL6eicwgmCxTTsnx/n
 UAU=
X-SBRS: 5.1
X-MesageID: 32933548
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,395,1599537600"; 
   d="scan'208";a="32933548"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dAorNJtTrtqjtdkZq4+abiINL3eQcFnRaNHliSUIyD0u253/8ViKICtLaV9W6u5ZB6qRne8o31Ouk5pu/XRx1p+yvF0xaUGqDWspHo9bbNnf8TM7+K8Sviu1aNAds9nAfkx5NkkFX4rammP7wHSVvSdfj4X+aF5zjqV1jt9zZ8Ib9HZmdjeD89IRJIJYfYX6nYeZEkGodmpOrxoZ4IEZq9HqBlzYXehu5dJbvO06Y3L5h6txh31mUPebz4a0CBTNp3nax6no03Ob/PJqMVrAB3atmwaHdM/Tl0+QYiwf/w5pfjQ2GleAOr5INO/YAWJEtzd+njVSVY4oh9sHwxx+TA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=N9fNEVGfoh0fiailX7qD0wLGaytrgfccSnKl9WE8ni4=;
 b=bWgHInKvCVxIruWrPqfIrYti3b8dj9kz40Pve1Qlagp6GmsDOpws7XDr51VNfHvMftQ93KdfMMhVEKlxshQ3JC4d+p6mRKQneT4VFA1HSp6zhBP0y9oNsvEJFjKEfcFlHJH/De/4ZlLRXram1wIuVXQufv+DT0jLTRtgwMwXCzpSGVTwY3pvm8iFmIxrYQdk2xkTXy+GNkKedTSKhob7zwhwzcXPjR8wTvvyzQySLV6R7NCCtb+IETJshZgmucfo/WVYzol9MNfkMwf6l8utiAZmu9vDSm8aJwU9ThyQxvBBUJ9OONc8fY0UoUT5BAKK316NfJVEb5W64frljFEUqw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=N9fNEVGfoh0fiailX7qD0wLGaytrgfccSnKl9WE8ni4=;
 b=IxddIOrRAaW8F1o20ekjH5EGaj0VX0ugJZ+1o+P5tReFtrvufpQYuVOM0QnmjOB0PrFsNtERxKrQjPCvcYOFMoI7JYGCmXfqbcGswsIgo8LW/LC0zV1MqBgeS4MYEWhyen2OfEj8z9OvCcS634rbfegYtPEkXVihwQ+T8zIDx48=
Date: Sat, 5 Dec 2020 09:28:39 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?=
	<marmarek@invisiblethingslab.com>
CC: Christoph Hellwig <hch@lst.de>, Juergen Gross <jgross@suse.com>, xen-devel
	<xen-devel@lists.xenproject.org>, Keith Busch <kbusch@kernel.org>, Jens Axboe
	<axboe@fb.com>, Sagi Grimberg <sagi@grimberg.me>,
	<linux-nvme@lists.infradead.org>
Subject: Re: GPF on 0xdead000000000100 in nvme_map_data - Linux 5.9.9
Message-ID: <20201205082839.ts3ju6yta46cgwjn@Air-de-Roger>
References: <20201129035639.GW2532@mail-itl>
 <20201130164010.GA23494@redsun51.ssa.fujisawa.hgst.com>
 <20201202000642.GJ201140@mail-itl> <20201204110847.GU201140@mail-itl>
 <20201204120803.GA20727@lst.de> <20201204122054.GV201140@mail-itl>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201204122054.GV201140@mail-itl>
X-ClientProxiedBy: LNXP265CA0025.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:5c::13) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b327aea6-b25f-4ab1-8c6d-08d898f7c835
X-MS-TrafficTypeDiagnostic: DM5PR03MB2633:
X-Microsoft-Antispam-PRVS: <DM5PR03MB263394985F4D59930BE1101A8FF00@DM5PR03MB2633.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: ZUyZ7OhZmf1Ez15i9i1H0YxEBaMq+v56qCt/shMttn1Wc1b37H7G1VswrDl8ED6599a8My+NrYZuKvFtjnBiya45Z6x/Y1YXRLmDpgPyZit5PGsrYlnWvrzX+rUWjlq4jmCvFw8el22yo8ENj6xYJsO6uh0HQRgqvp4YMkVx/HhElACDjMpaivFEJhD1hZNJb//RpS2R5KUGcFs5mdl2kRhG9Kq3jE5HtPZEtvX9Us+Ye9omvSx+tkodqTzzE9l5kbmxyC3xblRrwVvNs0Ss+gEEVW1JKh2qjRnPkVuH9k3sIrZPRYB0QSBsIm11miFiHZdg7Pd+C0owQ2Vc1dU4zQ==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(136003)(376002)(396003)(39860400002)(366004)(346002)(1076003)(83380400001)(6486002)(66574015)(316002)(6496006)(86362001)(66476007)(66946007)(6666004)(85182001)(5660300002)(186003)(16526019)(66556008)(478600001)(26005)(9686003)(6916009)(8936002)(33716001)(4326008)(956004)(2906002)(54906003)(8676002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?WGZYZFR5SWNMdXM4eW9TSkpVZEYyVHhrQXdBdi92RHU0UTk0V3BRdzZnT1ov?=
 =?utf-8?B?b1BCcUNzU24xM2M3dFhSbURqMVNjbHZ2cERVaHd4ci9wbStaWEl1RjhIQkZx?=
 =?utf-8?B?bTQwamlZS0pFRjAxOWV0Tm1UMURyODVnR1E3dEt4NkxWRFNkeUc2OFpKMmx6?=
 =?utf-8?B?Q1U3YmFEL05zVExYK0Nsei9PeFk0eDQxNkhNUjFmWWIvczNXNE9OQTJrYkxv?=
 =?utf-8?B?ZXdQdHlpVTYxeG0zNTFRMVczWEJHTGpZVjc4cXJGSGRIeWFDNktsRzVKaTEx?=
 =?utf-8?B?NHFQZkY2MXV5SXVEQXAyamJNYXRmc2VwZExBMWRBUHIxMGlEM2tZQ3Fudzgw?=
 =?utf-8?B?RzR1NU5EME8rZzNoY3B0d0JNR2o2SkNFdUExTWt6a3Nhc1BuTWFUdlpFalpw?=
 =?utf-8?B?TjB2WUZEelFJL3JjUnVsc0pEYTc1Q01HM3V3NGkwcUlxSzJlbmZRNnVkQ1Y1?=
 =?utf-8?B?N1lML2dEVW5KemJBWEFDTjdIZFlRdmhkMGsyMjd5dFFqeDMzNXI0RkdWZnUx?=
 =?utf-8?B?OHNTOHJoaTQvL2Fqd3djWXZ2OUFULy9QcHV3KzNzUmtLZU9FS2JzQ3dNMGJF?=
 =?utf-8?B?Q2NUdGJWa0RJSHVwa3o1b0ovSjdOK3dGcG1IZXZnNXZLNWpyNFpwbW5qZjZ6?=
 =?utf-8?B?cmI4Mzhya2Z5L1p6ZG0wZTkzWGJtdGppd0FjQkFiK2RSZzZDcFhKdWgrMDNT?=
 =?utf-8?B?YUZnQXdUNVJId1RLeENFdnVFSWZKUU9iajRqS2kvMkl1SVFJTVFobTJSSmI4?=
 =?utf-8?B?VGcwcW9uR1draGl2VFY3MHRudFJkWVNGSjVrUGtVdkhvcmszTWhTcDBzQW84?=
 =?utf-8?B?MkpmT3pBcjlMNHVPcXBFNFdabWxTZDBhSU5hVllNeWRHNm1iTDhETEg1Y1pH?=
 =?utf-8?B?cHdKR2FWOHF6aXUzYTYxYWtPYTRwWHBPS1cwdmtVbm1OOSt0cUdBTTlTSzVp?=
 =?utf-8?B?NFFiOFIyM3JPSlFIY1BZdTI1VVBMUkllMG5aWVY0S295bzR3dzRQR21PQngz?=
 =?utf-8?B?dnIwUmxNbmhmcHNMRGU0VEhYRkhuU3krSy9TN0FERys3Y01jYmxVL3hXTUN3?=
 =?utf-8?B?MHhRZEZQOFp1MkxiaXFCNk5aT2xCUzVIRitaaTdnRHhEUkRDN25KcTdNNUZz?=
 =?utf-8?B?NEdEWU9nZWNmOVd1YWJsNHFyL1BiS2l2TTRIYlhrdys1VDRhVHptbVBBNW4y?=
 =?utf-8?B?WlkrQ0JwbEw5VGRacG00TGJQanJ3YVYzMjIrbGFpUFFJUWQvMGdtbkhZVVJN?=
 =?utf-8?B?bVNwd0U4R0JKdzFrR2dJQncwV0dVNHpFWTQyK0cxRFZoS1FhRTZpbi9ET1pC?=
 =?utf-8?Q?l/QfaI4voi81817yUS0zRk+2yWib5eDgj+?=
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Dec 2020 08:28:45.4539
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: b327aea6-b25f-4ab1-8c6d-08d898f7c835
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: A+wSMUz9CuE3zSFwWjzL3+mNlH0UNTchnLuVlNmIT8/tTrzLFGNNeUPxwrpVGfLULsJbWz8tINIqR/j220k+Mw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB2633
X-OriginatorOrg: citrix.com

On Fri, Dec 04, 2020 at 01:20:54PM +0100, Marek Marczykowski-Górecki wrote:
> On Fri, Dec 04, 2020 at 01:08:03PM +0100, Christoph Hellwig wrote:
> > On Fri, Dec 04, 2020 at 12:08:47PM +0100, Marek Marczykowski-Górecki wrote:
> > > culprit: 
> > > 
> > > commit 9e2369c06c8a181478039258a4598c1ddd2cadfa
> > > Author: Roger Pau Monne <roger.pau@citrix.com>
> > > Date:   Tue Sep 1 10:33:26 2020 +0200
> > > 
> > >     xen: add helpers to allocate unpopulated memory
> > >     
> > > I'm adding relevant people and xen-devel to the thread.
> > > For completeness, here is the original crash message:
> > 
> > That commit definitively adds a new ZONE_DEVICE user, so it does look
> > related.  But you are not running on Xen, are you?
> 
> I am. It is Xen dom0.

I'm afraid I'm on leave and won't be able to look into this until the
beginning of January. My guess is that it's some kind of bad
interaction between the blkback and NVMe drivers, both using ZONE_DEVICE.

Maybe the best option is to revert this change, and I will look into it
when I get back, unless someone is willing to debug this further.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Sat Dec 05 16:31:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Dec 2020 16:31:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45414.80828 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klaSL-000589-Mb; Sat, 05 Dec 2020 16:30:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45414.80828; Sat, 05 Dec 2020 16:30:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klaSL-000580-E7; Sat, 05 Dec 2020 16:30:45 +0000
Received: by outflank-mailman (input) for mailman id 45414;
 Sat, 05 Dec 2020 16:30:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klaSK-00057s-HW; Sat, 05 Dec 2020 16:30:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klaSK-0001Pl-9b; Sat, 05 Dec 2020 16:30:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klaSK-0005AO-0O; Sat, 05 Dec 2020 16:30:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1klaSJ-0001Ys-W8; Sat, 05 Dec 2020 16:30:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MkwlFEz+GD/8rKVht9cQq1OHiTsdDF4hTjqUa97Qcjc=; b=o+4k8AHmgHTdLFJvc8OeZuyQDg
	SoZo2B4pmWA8DO5cHvNy5S0D60klpaATpSHzJ/ypR7ZenBxGp6+KQSTm+1r6pFNtMm9tmAiJ7cez/
	zl5rw0IT2BT6K1L81Cu6r0RNSjEyEcJ2whg4ae+jdqaGI/CwQoyvxaHViWOpq/lEjvnI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157218-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157218: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-arm64-arm64-libvirt-xsm:guest-start.2:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5e666356a9d55fbd9eb5b8506088aa760e107b5b
X-Osstest-Versions-That:
    xen=be3755af37263833cb3b1c6b1f2ba219bdf97ec3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 05 Dec 2020 16:30:43 +0000

flight 157218 xen-unstable real [real]
flight 157225 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157218/
http://logs.test-lab.xenproject.org/osstest/logs/157225/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-libvirt-xsm 19 guest-start.2       fail pass in 157225-retest
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 157225-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10       fail  like 157205
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157205
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157205
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157205
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157205
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157205
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157205
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157205
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157205
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157205
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157205
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157205
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  5e666356a9d55fbd9eb5b8506088aa760e107b5b
baseline version:
 xen                  be3755af37263833cb3b1c6b1f2ba219bdf97ec3

Last test of basis   157205  2020-12-04 13:38:27 Z    1 days
Testing same since   157218  2020-12-05 06:54:42 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <pdurrant@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   be3755af37..5e666356a9  5e666356a9d55fbd9eb5b8506088aa760e107b5b -> master


From xen-devel-bounces@lists.xenproject.org Sat Dec 05 18:28:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 05 Dec 2020 18:28:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45444.80903 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klcHl-00082i-PH; Sat, 05 Dec 2020 18:27:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45444.80903; Sat, 05 Dec 2020 18:27:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klcHl-00082b-Kw; Sat, 05 Dec 2020 18:27:57 +0000
Received: by outflank-mailman (input) for mailman id 45444;
 Sat, 05 Dec 2020 18:27:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klcHk-00082T-MI; Sat, 05 Dec 2020 18:27:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klcHk-0003sV-BC; Sat, 05 Dec 2020 18:27:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klcHj-0001aj-VZ; Sat, 05 Dec 2020 18:27:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1klcHj-0005pV-V4; Sat, 05 Dec 2020 18:27:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FJOPypbMNhqD0Yes/dOFI5WiH+33MgUCBOy/Ma9FDG0=; b=LBLK1iaWpGXVlr89bfCRkS4y7G
	lpIZRjyac9zD1yi046vuoth0pQzmsolkvd4xQBFnFHxqNJxw67TA6DuleSe6WSt4PzZkvD1uh75hk
	yS7wcX2eMNgPUs9JnudXSV+1nagOhIGrM4f3+sKurLXEh2VbA6NzW4A29EMRmLHI4HjU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157221-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157221: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate/x10:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=b3298500b23f0b53a8d81e0d5ad98a29db71f4f0
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 05 Dec 2020 18:27:55 +0000

flight 157221 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157221/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 19 guest-localmigrate/x10  fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-amd64-pvgrub 17 guest-localmigrate fail in 157213 pass in 157221
 test-arm64-arm64-xl-credit2   8 xen-boot         fail in 157213 pass in 157221
 test-arm64-arm64-libvirt-xsm  8 xen-boot                   fail pass in 157213
 test-arm64-arm64-xl           8 xen-boot                   fail pass in 157213
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 157213

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl   11 leak-check/basis(11) fail in 157213 blocked in 152332
 test-arm64-arm64-xl-xsm 11 leak-check/basis(11) fail in 157213 blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11) fail in 157213 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                b3298500b23f0b53a8d81e0d5ad98a29db71f4f0
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  126 days
Failing since        152366  2020-08-01 20:49:34 Z  125 days  214 attempts
Testing same since   157213  2020-12-04 23:40:49 Z    0 days    2 attempts

------------------------------------------------------------
3639 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 697696 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 06 00:44:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Dec 2020 00:44:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45493.80930 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kliAH-00024V-JR; Sun, 06 Dec 2020 00:44:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45493.80930; Sun, 06 Dec 2020 00:44:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kliAH-00024O-G9; Sun, 06 Dec 2020 00:44:37 +0000
Received: by outflank-mailman (input) for mailman id 45493;
 Sun, 06 Dec 2020 00:44:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kliAG-00024G-8o; Sun, 06 Dec 2020 00:44:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kliAF-0003eO-VB; Sun, 06 Dec 2020 00:44:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kliAF-0007b1-Kj; Sun, 06 Dec 2020 00:44:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kliAF-0004sP-KJ; Sun, 06 Dec 2020 00:44:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=muhyZW3RcuBZT/VW/14842XCTznuo0GC7IZz4c866SQ=; b=VbvviVwIbIUIXFiYirUJHYdxD8
	kS0g9FAtrhXWNCBhV4Aml9nftuNRxeNCcwm1SIVG975mH+T9YKyiknseZZIO8+5a+DKhwjDToM01Q
	UrizT6voIGhoucZU+psqUxy/p8tblSDYa8m2EHJbZg0moZoNiaNeMvoEf4MuNQ9PDRQ4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157222-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157222: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 06 Dec 2020 00:44:35 +0000

flight 157222 qemu-mainline real [real]
flight 157230 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157222/
http://logs.test-lab.xenproject.org/osstest/logs/157230/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  107 days
Failing since        152659  2020-08-21 14:07:39 Z  106 days  221 attempts
Testing same since   157142  2020-12-01 20:39:57 Z    4 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69355 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 06 02:15:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Dec 2020 02:15:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45513.80951 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kljaB-0007pH-QA; Sun, 06 Dec 2020 02:15:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45513.80951; Sun, 06 Dec 2020 02:15:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kljaB-0007p8-It; Sun, 06 Dec 2020 02:15:27 +0000
Received: by outflank-mailman (input) for mailman id 45513;
 Sun, 06 Dec 2020 02:15:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DarT=FK=gmail.com=bmeng.cn@srs-us1.protection.inumbo.net>)
 id 1klja9-0007p3-Sh
 for xen-devel@lists.xenproject.org; Sun, 06 Dec 2020 02:15:25 +0000
Received: from mail-qk1-x741.google.com (unknown [2607:f8b0:4864:20::741])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5e746601-17b7-484c-8234-ab72625db7dd;
 Sun, 06 Dec 2020 02:15:24 +0000 (UTC)
Received: by mail-qk1-x741.google.com with SMTP id q5so9396699qkc.12
 for <xen-devel@lists.xenproject.org>; Sat, 05 Dec 2020 18:15:24 -0800 (PST)
Received: from pek-vx-bsp2.wrs.com (unknown-124-94.windriver.com.
 [147.11.124.94])
 by smtp.gmail.com with ESMTPSA id g18sm9296306qtv.79.2020.12.05.18.15.17
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Sat, 05 Dec 2020 18:15:23 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e746601-17b7-484c-8234-ab72625db7dd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=su1F/v+5+aB1M38riLgjKG95V1Mh9HGEBldHjqNVRUM=;
        b=TdyDkqCWGblhOuKZpsO1dxV+LKOBRkwiBZTIHPq66a93eKkXM6ZRIb9GfI9Mla98vS
         DcmmN6P2DKo7qJ4WQUHVrmrEJ4C+dbQ0dsjQdMwuy03rhYFqtKqDNARJ3hmpbTeIDBSp
         RybTmnKTqhKGY8fsHdCLeesBkfvlM7iKsk19gYn4WW6de1mgh0zHvilrElLfXytHyklJ
         DyeAThJ/hEsft2Be5e3dZPNVw+BAxur2CFGh5Ohl16mMzYn/rbCGj9I8bF8vZFEUjFXK
         Lg1US5va1GpD9DOapI+KePYY0OJwqp1fCDFFucXsZZSFc13y0rW7y0YcmH4mjtjibHkF
         uVrg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=su1F/v+5+aB1M38riLgjKG95V1Mh9HGEBldHjqNVRUM=;
        b=ah6Jo2k3qAF9LfzWc1RF+XRHEaZv/3w8OSQ+YYPejh6uYpNKk8oLLe0tnCBg6Ery4w
         n5fP8gk2H5zMsQKztgACWtm6sKZL8w67f2dIBSyuMEsXIT8E3gEpvSLBMy9pSOcRC7mu
         7yrhR3ud0qsqfu2oRMmafnckpmG107UCJV5L8WbuoXDSlawWamgnzMdc3LE4lZBqjBd5
         DmaRm5MygdhvywJIeBNouxYtF5LAzB/2L8LGHYdjP7ed0B44TJt3eY/FLJO5rEvWZ457
         LlJ6Ee0abDiveLgN4F9i1y3EHF+4HFnuPQGAF7Y+EJyd1zAQs7ofedIZ8RgrSLo0OhVO
         33CQ==
X-Gm-Message-State: AOAM532d9/GxILQVBw9QfCJ/UMKJ/iO0p/wiQzpl/Y53ih9SXl8hPofF
	WkBqm+atN4joU69wWQddF6g=
X-Google-Smtp-Source: ABdhPJz6I806bbxlzJbwwb4kQ3YSWW06ZHnPeCzANSQUQT99JyJR1eK9gpq6LyJ0IUUONb6beAYzeA==
X-Received: by 2002:a37:8943:: with SMTP id l64mr16832588qkd.212.1607220924201;
        Sat, 05 Dec 2020 18:15:24 -0800 (PST)
From: Bin Meng <bmeng.cn@gmail.com>
To: qemu-devel@nongnu.org
Cc: Bin Meng <bin.meng@windriver.com>,
	Alistair Francis <alistair@alistair23.me>,
	Andrew Jeffery <andrew@aj.id.au>,
	Anthony Perard <anthony.perard@citrix.com>,
	Beniamino Galvani <b.galvani@gmail.com>,
	=?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
	David Gibson <david@gibson.dropbear.id.au>,
	"Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
	Jason Wang <jasowang@redhat.com>,
	Joel Stanley <joel@jms.id.au>,
	Li Zhijian <lizhijian@cn.fujitsu.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Paul Durrant <paul@xen.org>,
	Peter Chubb <peter.chubb@nicta.com.au>,
	Peter Maydell <peter.maydell@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Zhang Chen <chen.zhang@intel.com>,
	qemu-arm@nongnu.org,
	qemu-ppc@nongnu.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH 3/3] net: checksum: Introduce fine control over checksum type
Date: Sun,  6 Dec 2020 10:14:07 +0800
Message-Id: <1607220847-24096-3-git-send-email-bmeng.cn@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1607220847-24096-1-git-send-email-bmeng.cn@gmail.com>
References: <1607220847-24096-1-git-send-email-bmeng.cn@gmail.com>

From: Bin Meng <bin.meng@windriver.com>

At present net_checksum_calculate() blindly calculates all types of
checksums (IP, TCP, UDP). Some NICs may have a per-type setting in
their BDs to control which checksum should be offloaded. To support
such hardware behavior, introduce a 'csum_flag' parameter to the
net_checksum_calculate() API to allow fine control over which type
of checksum is calculated.

Existing users of this API are updated accordingly.

Signed-off-by: Bin Meng <bin.meng@windriver.com>

---

 hw/net/allwinner-sun8i-emac.c |  2 +-
 hw/net/cadence_gem.c          |  2 +-
 hw/net/fsl_etsec/rings.c      |  8 +++-----
 hw/net/ftgmac100.c            | 10 +++++++++-
 hw/net/imx_fec.c              | 15 +++------------
 hw/net/virtio-net.c           |  2 +-
 hw/net/xen_nic.c              |  2 +-
 include/net/checksum.h        |  7 ++++++-
 net/checksum.c                | 18 ++++++++++++++----
 net/filter-rewriter.c         |  4 ++--
 10 files changed, 41 insertions(+), 29 deletions(-)

diff --git a/hw/net/allwinner-sun8i-emac.c b/hw/net/allwinner-sun8i-emac.c
index 38d3285..0427689 100644
--- a/hw/net/allwinner-sun8i-emac.c
+++ b/hw/net/allwinner-sun8i-emac.c
@@ -514,7 +514,7 @@ static void allwinner_sun8i_emac_transmit(AwSun8iEmacState *s)
         /* After the last descriptor, send the packet */
         if (desc.status2 & TX_DESC_STATUS2_LAST_DESC) {
             if (desc.status2 & TX_DESC_STATUS2_CHECKSUM_MASK) {
-                net_checksum_calculate(packet_buf, packet_bytes);
+                net_checksum_calculate(packet_buf, packet_bytes, CSUM_ALL);
             }
 
             qemu_send_packet(nc, packet_buf, packet_bytes);
diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
index 7a53469..9a4474a 100644
--- a/hw/net/cadence_gem.c
+++ b/hw/net/cadence_gem.c
@@ -1266,7 +1266,7 @@ static void gem_transmit(CadenceGEMState *s)
 
                 /* Is checksum offload enabled? */
                 if (s->regs[GEM_DMACFG] & GEM_DMACFG_TXCSUM_OFFL) {
-                    net_checksum_calculate(s->tx_packet, total_bytes);
+                    net_checksum_calculate(s->tx_packet, total_bytes, CSUM_ALL);
                 }
 
                 /* Update MAC statistics */
diff --git a/hw/net/fsl_etsec/rings.c b/hw/net/fsl_etsec/rings.c
index 628648a..2b0afee 100644
--- a/hw/net/fsl_etsec/rings.c
+++ b/hw/net/fsl_etsec/rings.c
@@ -186,10 +186,8 @@ static void process_tx_fcb(eTSEC *etsec)
 
     /* if packet is IP4 and IP checksum is requested */
     if (flags & FCB_TX_IP && flags & FCB_TX_CIP) {
-        /* do IP4 checksum (TODO This function does TCP/UDP checksum
-         * but not sure if it also does IP4 checksum.) */
         net_checksum_calculate(etsec->tx_buffer + 8,
-                etsec->tx_buffer_len - 8);
+                etsec->tx_buffer_len - 8, CSUM_IP);
     }
     /* TODO Check the correct usage of the PHCS field of the FCB in case the NPH
      * flag is on */
@@ -203,7 +201,7 @@ static void process_tx_fcb(eTSEC *etsec)
                 /* do UDP checksum */
 
                 net_checksum_calculate(etsec->tx_buffer + 8,
-                        etsec->tx_buffer_len - 8);
+                        etsec->tx_buffer_len - 8, CSUM_UDP);
             } else {
                 /* set checksum field to 0 */
                 l4_header[6] = 0;
@@ -212,7 +210,7 @@ static void process_tx_fcb(eTSEC *etsec)
         } else if (flags & FCB_TX_CTU) { /* if TCP and checksum is requested */
             /* do TCP checksum */
             net_checksum_calculate(etsec->tx_buffer + 8,
-                                   etsec->tx_buffer_len - 8);
+                                   etsec->tx_buffer_len - 8, CSUM_TCP);
         }
     }
 }
diff --git a/hw/net/ftgmac100.c b/hw/net/ftgmac100.c
index 782ff19..fbae1f1 100644
--- a/hw/net/ftgmac100.c
+++ b/hw/net/ftgmac100.c
@@ -573,7 +573,15 @@ static void ftgmac100_do_tx(FTGMAC100State *s, uint32_t tx_ring,
             }
 
             if (flags & FTGMAC100_TXDES1_IP_CHKSUM) {
-                net_checksum_calculate(s->frame, frame_size);
+                /*
+                 * TODO:
+                 * FTGMAC100_TXDES1_IP_CHKSUM seems to be only for the IP
+                 * checksum; however, the previous net_checksum_calculate()
+                 * did not calculate the IP checksum at all. Pass CSUM_ALL
+                 * for now, until someone familiar with this MAC figures out
+                 * what should properly be done for TCP/UDP checksum offload.
+                 */
+                net_checksum_calculate(s->frame, frame_size, CSUM_ALL);
             }
             /* Last buffer in frame.  */
             qemu_send_packet(qemu_get_queue(s->nic), s->frame, frame_size);
diff --git a/hw/net/imx_fec.c b/hw/net/imx_fec.c
index 2c14804..45f96fa 100644
--- a/hw/net/imx_fec.c
+++ b/hw/net/imx_fec.c
@@ -562,20 +562,11 @@ static void imx_enet_do_tx(IMXFECState *s, uint32_t index)
         frame_size += len;
         if (bd.flags & ENET_BD_L) {
             if (bd.option & ENET_BD_PINS) {
-                struct ip_header *ip_hd = PKT_GET_IP_HDR(s->frame);
-                if (IP_HEADER_VERSION(ip_hd) == 4) {
-                    net_checksum_calculate(s->frame, frame_size);
-                }
+                net_checksum_calculate(s->frame, frame_size,
+                                       CSUM_TCP | CSUM_UDP);
             }
             if (bd.option & ENET_BD_IINS) {
-                struct ip_header *ip_hd = PKT_GET_IP_HDR(s->frame);
-                /* We compute checksum only for IPv4 frames */
-                if (IP_HEADER_VERSION(ip_hd) == 4) {
-                    uint16_t csum;
-                    ip_hd->ip_sum = 0;
-                    csum = net_raw_checksum((uint8_t *)ip_hd, sizeof(*ip_hd));
-                    ip_hd->ip_sum = cpu_to_be16(csum);
-                }
+                net_checksum_calculate(s->frame, frame_size, CSUM_IP);
             }
             /* Last buffer in frame.  */
 
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 9179013..77e9ded 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -1482,7 +1482,7 @@ static void work_around_broken_dhclient(struct virtio_net_hdr *hdr,
         (buf[12] == 0x08 && buf[13] == 0x00) && /* ethertype == IPv4 */
         (buf[23] == 17) && /* ip.protocol == UDP */
         (buf[34] == 0 && buf[35] == 67)) { /* udp.srcport == bootps */
-        net_checksum_calculate(buf, size);
+        net_checksum_calculate(buf, size, CSUM_UDP);
         hdr->flags &= ~VIRTIO_NET_HDR_F_NEEDS_CSUM;
     }
 }
diff --git a/hw/net/xen_nic.c b/hw/net/xen_nic.c
index 00a7fdf..5c815b4 100644
--- a/hw/net/xen_nic.c
+++ b/hw/net/xen_nic.c
@@ -174,7 +174,7 @@ static void net_tx_packets(struct XenNetDev *netdev)
                     tmpbuf = g_malloc(XC_PAGE_SIZE);
                 }
                 memcpy(tmpbuf, page + txreq.offset, txreq.size);
-                net_checksum_calculate(tmpbuf, txreq.size);
+                net_checksum_calculate(tmpbuf, txreq.size, CSUM_ALL);
                 qemu_send_packet(qemu_get_queue(netdev->nic), tmpbuf,
                                  txreq.size);
             } else {
diff --git a/include/net/checksum.h b/include/net/checksum.h
index 05a0d27..7dec37e 100644
--- a/include/net/checksum.h
+++ b/include/net/checksum.h
@@ -21,11 +21,16 @@
 #include "qemu/bswap.h"
 struct iovec;
 
+#define CSUM_IP     0x01
+#define CSUM_TCP    0x02
+#define CSUM_UDP    0x04
+#define CSUM_ALL    (CSUM_IP | CSUM_TCP | CSUM_UDP)
+
 uint32_t net_checksum_add_cont(int len, uint8_t *buf, int seq);
 uint16_t net_checksum_finish(uint32_t sum);
 uint16_t net_checksum_tcpudp(uint16_t length, uint16_t proto,
                              uint8_t *addrs, uint8_t *buf);
-void net_checksum_calculate(uint8_t *data, int length);
+void net_checksum_calculate(uint8_t *data, int length, int csum_flag);
 
 static inline uint32_t
 net_checksum_add(int len, uint8_t *buf)
diff --git a/net/checksum.c b/net/checksum.c
index dabd290..70f4eae 100644
--- a/net/checksum.c
+++ b/net/checksum.c
@@ -57,7 +57,7 @@ uint16_t net_checksum_tcpudp(uint16_t length, uint16_t proto,
     return net_checksum_finish(sum);
 }
 
-void net_checksum_calculate(uint8_t *data, int length)
+void net_checksum_calculate(uint8_t *data, int length, int csum_flag)
 {
     int mac_hdr_len, ip_len;
     struct ip_header *ip;
@@ -108,9 +108,11 @@ void net_checksum_calculate(uint8_t *data, int length)
     }
 
     /* Calculate IP checksum */
-    stw_he_p(&ip->ip_sum, 0);
-    csum = net_raw_checksum((uint8_t *)ip, IP_HDR_GET_LEN(ip));
-    stw_be_p(&ip->ip_sum, csum);
+    if (csum_flag & CSUM_IP) {
+        stw_he_p(&ip->ip_sum, 0);
+        csum = net_raw_checksum((uint8_t *)ip, IP_HDR_GET_LEN(ip));
+        stw_be_p(&ip->ip_sum, csum);
+    }
 
     if (IP4_IS_FRAGMENT(ip)) {
         return; /* a fragmented IP packet */
@@ -128,6 +130,10 @@ void net_checksum_calculate(uint8_t *data, int length)
     switch (ip->ip_p) {
     case IP_PROTO_TCP:
     {
+        if (!(csum_flag & CSUM_TCP)) {
+            return;
+        }
+
         tcp_header *tcp = (tcp_header *)(ip + 1);
 
         if (ip_len < sizeof(tcp_header)) {
@@ -148,6 +154,10 @@ void net_checksum_calculate(uint8_t *data, int length)
     }
     case IP_PROTO_UDP:
     {
+        if (!(csum_flag & CSUM_UDP)) {
+            return;
+        }
+
         udp_header *udp = (udp_header *)(ip + 1);
 
         if (ip_len < sizeof(udp_header)) {
diff --git a/net/filter-rewriter.c b/net/filter-rewriter.c
index e063a81..80caac5 100644
--- a/net/filter-rewriter.c
+++ b/net/filter-rewriter.c
@@ -114,7 +114,7 @@ static int handle_primary_tcp_pkt(RewriterState *rf,
             tcp_pkt->th_ack = htonl(ntohl(tcp_pkt->th_ack) + conn->offset);
 
             net_checksum_calculate((uint8_t *)pkt->data + pkt->vnet_hdr_len,
-                                   pkt->size - pkt->vnet_hdr_len);
+                                   pkt->size - pkt->vnet_hdr_len, CSUM_TCP);
         }
 
         /*
@@ -216,7 +216,7 @@ static int handle_secondary_tcp_pkt(RewriterState *rf,
             tcp_pkt->th_seq = htonl(ntohl(tcp_pkt->th_seq) - conn->offset);
 
             net_checksum_calculate((uint8_t *)pkt->data + pkt->vnet_hdr_len,
-                                   pkt->size - pkt->vnet_hdr_len);
+                                   pkt->size - pkt->vnet_hdr_len, CSUM_TCP);
         }
     }
 
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Sun Dec 06 03:23:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Dec 2020 03:23:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45521.80963 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klkdu-0006Xr-2v; Sun, 06 Dec 2020 03:23:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45521.80963; Sun, 06 Dec 2020 03:23:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klkdt-0006Xj-SQ; Sun, 06 Dec 2020 03:23:21 +0000
Received: by outflank-mailman (input) for mailman id 45521;
 Sun, 06 Dec 2020 03:23:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klkds-0006XZ-MG; Sun, 06 Dec 2020 03:23:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klkdr-00031s-Mx; Sun, 06 Dec 2020 03:23:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klkdr-0004KK-4Q; Sun, 06 Dec 2020 03:23:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1klkdr-0005bA-3p; Sun, 06 Dec 2020 03:23:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=xWzCszBOp7krccMUmPBsiE/qQZ8DEKahehFSbax1n2M=; b=rjnslZkq6cSwtW+gVfXBrfSgby
	uP1rxXIZOrA8qA+lQHyE58pwmbpk0Q48rXtw4m7S/UQ8PZ4y5X8JEN9vzEwh/fq5u76xGqbSRyGUu
	SW5Tsg8Uf0xN9+4CT2nsgG3DEMy7zyHWMOiwvvVBIfyzxLNZsVX1f0puQXKk/fTmVm68=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157228-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157228: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=b3298500b23f0b53a8d81e0d5ad98a29db71f4f0
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 06 Dec 2020 03:23:19 +0000

flight 157228 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157228/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu  fail in 157213 REGR. vs. 152332
 test-arm64-arm64-xl-credit2 10 host-ping-check-xen fail in 157221 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-amd64-pvgrub 17 guest-localmigrate fail in 157213 pass in 157228
 test-arm64-arm64-libvirt-xsm  8 xen-boot         fail in 157221 pass in 157228
 test-amd64-amd64-amd64-pvgrub 19 guest-localmigrate/x10 fail in 157221 pass in 157228
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen        fail pass in 157213
 test-arm64-arm64-xl           8 xen-boot                   fail pass in 157213
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 157213
 test-arm64-arm64-examine      8 reboot                     fail pass in 157221
 test-arm64-arm64-xl-credit2   8 xen-boot                   fail pass in 157221
 test-arm64-arm64-xl-seattle   8 xen-boot                   fail pass in 157221

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl   11 leak-check/basis(11) fail in 157213 blocked in 152332
 test-arm64-arm64-xl-xsm 11 leak-check/basis(11) fail in 157213 blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11) fail in 157213 blocked in 152332
 test-arm64-arm64-xl-seattle 11 leak-check/basis(11) fail in 157213 blocked in 152332
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 157213 like 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                b3298500b23f0b53a8d81e0d5ad98a29db71f4f0
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  127 days
Failing since        152366  2020-08-01 20:49:34 Z  126 days  215 attempts
Testing same since   157213  2020-12-04 23:40:49 Z    1 day     3 attempts

------------------------------------------------------------
3639 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 697696 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 06 08:09:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Dec 2020 08:09:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45545.80984 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klp6L-00034l-R7; Sun, 06 Dec 2020 08:09:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45545.80984; Sun, 06 Dec 2020 08:09:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klp6L-00034e-OB; Sun, 06 Dec 2020 08:09:01 +0000
Received: by outflank-mailman (input) for mailman id 45545;
 Sun, 06 Dec 2020 08:09:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klp6K-00034W-Ue; Sun, 06 Dec 2020 08:09:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klp6K-0004BE-KS; Sun, 06 Dec 2020 08:09:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klp6K-0004Xu-Bl; Sun, 06 Dec 2020 08:09:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1klp6K-0001Bn-B3; Sun, 06 Dec 2020 08:09:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vudjfJoe7yc+Y3K1CNNnQtZHGBaSYMtJmtU0GUxXFhI=; b=19bjAESwqlK8KuzsvrpCo8lx1t
	tkrE3NwvMzoLugvGKfw44xZ39AnjphQADx1zqTh+Vk3K3wqtgjCxbpix65qW9coscjsB4QawFu2G6
	vdoh3OqGCGh8ErlENnIptrxpHwLNC6hK0kxut72AA+wWjUajFWgCpoaJG34mwQibBldk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157235-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157235: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=4523be1ed7f420dabdf42bae2dc33e13aa46b4e4
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 06 Dec 2020 08:09:00 +0000

flight 157235 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157235/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              4523be1ed7f420dabdf42bae2dc33e13aa46b4e4
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  149 days
Failing since        151818  2020-07-11 04:18:52 Z  148 days  143 attempts
Testing same since   157216  2020-12-05 04:19:10 Z    1 day     2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 31452 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 06 09:35:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Dec 2020 09:35:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45572.81005 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klqRk-000300-PV; Sun, 06 Dec 2020 09:35:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45572.81005; Sun, 06 Dec 2020 09:35:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klqRk-0002zt-Km; Sun, 06 Dec 2020 09:35:12 +0000
Received: by outflank-mailman (input) for mailman id 45572;
 Sun, 06 Dec 2020 09:35:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klqRj-0002zl-Ik; Sun, 06 Dec 2020 09:35:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klqRj-0005sl-9X; Sun, 06 Dec 2020 09:35:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klqRi-0000vp-VG; Sun, 06 Dec 2020 09:35:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1klqRi-0006xx-Uo; Sun, 06 Dec 2020 09:35:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ebDm8VcNjJ1I7oEtNi7iX9AV0Ga5gCaOlq/Wvtj/CpQ=; b=M+Av5+OJdQCw+4AeilVWiRgP1u
	hxZvVlHL4GdtMjWr+v0jDN3nuDsUwOPpzHMZFxr/RnD7Y9AZbfFaLWIG67Lb3sQJ2y7+0NGUCkx1j
	eWsS1ZkotDvCguubh4aYsVguUW3c447hJJY7j0s9CDlXHncqrBsqOd8drCKNCD8AqqWM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157231-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157231: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 06 Dec 2020 09:35:10 +0000

flight 157231 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157231/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10     fail pass in 157222
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 157222

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  108 days
Failing since        152659  2020-08-21 14:07:39 Z  106 days  222 attempts
Testing same since   157142  2020-12-01 20:39:57 Z    4 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69355 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 06 09:59:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Dec 2020 09:59:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45585.81020 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klqpR-00050z-Ot; Sun, 06 Dec 2020 09:59:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45585.81020; Sun, 06 Dec 2020 09:59:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klqpR-00050s-Lk; Sun, 06 Dec 2020 09:59:41 +0000
Received: by outflank-mailman (input) for mailman id 45585;
 Sun, 06 Dec 2020 09:59:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klqpQ-00050k-2f; Sun, 06 Dec 2020 09:59:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klqpP-0006Lc-SL; Sun, 06 Dec 2020 09:59:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klqpP-000293-Ja; Sun, 06 Dec 2020 09:59:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1klqpP-0004y4-J4; Sun, 06 Dec 2020 09:59:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5EUVDposGf+QDwZ5VqtoQ3AiD/9bT30ZsIILmo4j+/A=; b=CmOSqmKj22XFuVQmyV6ommtvmQ
	vH2OvL06uw52Cif7yO9ncARm08ghBxdeTR0LqSPTxOPFwzcCOZfxJlZo0Ult5KBCobucqdflWpmy5
	eaYIr2TorgqiZhqMvLwWgttfWOq2pND8RJ3tRZ6wsckFhtPBV4WQuSLNR4QhbBlGCjD8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157238-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 157238: all pass - PUSHED
X-Osstest-Versions-This:
    xen=5e666356a9d55fbd9eb5b8506088aa760e107b5b
X-Osstest-Versions-That:
    xen=3ae469af8e680df31eecd0a2ac6a83b58ad7ce53
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 06 Dec 2020 09:59:39 +0000

flight 157238 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157238/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  5e666356a9d55fbd9eb5b8506088aa760e107b5b
baseline version:
 xen                  3ae469af8e680df31eecd0a2ac6a83b58ad7ce53

Last test of basis   157155  2020-12-02 09:19:29 Z    4 days
Testing same since   157238  2020-12-06 09:18:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Diederik de Haas <didi.debian@cknow.org>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <pdurrant@amazon.com>
  Rahul Singh <rahul.singh@arm.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3ae469af8e..5e666356a9  5e666356a9d55fbd9eb5b8506088aa760e107b5b -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun Dec 06 11:16:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Dec 2020 11:16:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45604.81040 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kls1P-00043a-QH; Sun, 06 Dec 2020 11:16:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45604.81040; Sun, 06 Dec 2020 11:16:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kls1P-00043T-NE; Sun, 06 Dec 2020 11:16:07 +0000
Received: by outflank-mailman (input) for mailman id 45604;
 Sun, 06 Dec 2020 11:16:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=E5iC=FK=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kls1N-00043O-Vg
 for xen-devel@lists.xenproject.org; Sun, 06 Dec 2020 11:16:06 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id df1e5d65-e99d-4408-bcb3-e87c7dbf1ce9;
 Sun, 06 Dec 2020 11:16:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df1e5d65-e99d-4408-bcb3-e87c7dbf1ce9
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 32631746
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,397,1599537600"; 
   d="scan'208";a="32631746"
Date: Sun, 6 Dec 2020 12:15:48 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Manuel Bouyer <bouyer@antioche.eu.org>, <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] vpci/msix: exit early if MSI-X is disabled
Message-ID: <20201206111548.nzefo2fx6bvspuj5@Air-de-Roger>
References: <20201201174014.27878-1-roger.pau@citrix.com>
 <dfc96aa9-c39f-177c-c8f8-af18b80804de@suse.com>
 <cdb2a1ae-9ee7-6661-b69f-d2faacef2c12@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <cdb2a1ae-9ee7-6661-b69f-d2faacef2c12@suse.com>
MIME-Version: 1.0

Sorry, slightly sleep deprived, hope the reply below makes sense.

On Thu, Dec 03, 2020 at 02:40:28PM +0100, Jan Beulich wrote:
> On 02.12.2020 09:38, Jan Beulich wrote:
> > On 01.12.2020 18:40, Roger Pau Monne wrote:
> >> --- a/xen/drivers/vpci/msix.c
> >> +++ b/xen/drivers/vpci/msix.c
> >> @@ -357,7 +357,11 @@ static int msix_write(struct vcpu *v, unsigned long addr, unsigned int len,
> >>           * so that it picks the new state.
> >>           */
> >>          entry->masked = new_masked;
> >> -        if ( !new_masked && msix->enabled && !msix->masked && entry->updated )
> >> +
> >> +        if ( !msix->enabled )
> >> +            break;
> >> +
> >> +        if ( !new_masked && !msix->masked && entry->updated )
> >>          {
> >>              /*
> >>               * If MSI-X is enabled, the function mask is not active, the entry
> > 
> > What about a "disabled" -> "enabled-but-masked" transition? This,
> > afaict, similarly won't trigger setting up of entries from
> > control_write(), and hence I'd expect the ASSERT() to similarly
> > trigger when subsequently an entry's mask bit gets altered.

This would only happen if the user hasn't written to the entry address
or data fields since initialization; otherwise the updated field would
be set, and the entry would then be properly set up when its mask bit
is cleared via PCI_MSIX_ENTRY_VECTOR_CTRL_OFFSET.

> > I'd also be fine making this further adjustment, if you agree,
> > but the one thing I haven't been able to fully convince myself of
> > is that there's then still no need to set ->updated to true.
> 
> I've taken another look. I think setting ->updated (or something
> equivalent) is needed in that case, in order to not lose the
> setting of the entry mask bit. However, this would only defer the
> problem to control_write(): This would now need to call
> vpci_msix_arch_mask_entry() under suitable conditions, but avoid
> calling it when the entry is disabled or was never set up.

If the entry is masked, control_write won't call update_entry, leaving
the entry's updated bit as-is and thus deferring the call to
update_entry until a later write to PCI_MSIX_ENTRY_VECTOR_CTRL_OFFSET.
I think this is all fine.

> No
> matter whether making the setting of ->updated conditional, or
> adding a conditional call in update_entry(), we'd need to
> evaluate whether the entry is currently disabled. Imo, instead of
> introducing a new arch hook for this, it's easier to make
> vpci_msix_arch_mask_entry() tolerate getting called on a disabled
> entry. Below my proposed alternative change.

I think just setting the updated bit for all entries at initialization
would solve this, as this would then force a call to update_entry when
an entry is unmasked (by writes to PCI_MSIX_ENTRY_VECTOR_CTRL_OFFSET).

In any case, the assert in vpci_msix_arch_mask_entry is a logic check;
IIRC, calling it with an invalid pirq will just make the function a
no-op, as domain_spin_lock_irq_desc will return NULL.

> 
> While writing the description I started wondering why we require
> address or data fields to have got written before the first
> unmask. I don't think the hardware imposes such a requirement;
> zeros would be used instead, whatever this means. Let's not
> forget that it's only the primary purpose of MSI/MSI-X to
> trigger interrupts. Forcing the writes to go elsewhere in
> memory is not forbidden from all I know, and could be used by a
> driver. IOW I think ->updated should start out as set to true.
> But of course vpci_msi_update() then would need to check the
> upper address bits and avoid setting up an interrupt if they're
> not 0xfee. And further arrangements would be needed to have the
> guest requested write actually get carried out correctly.

Seems correct, although adding such logic would complicate the code
and expand the attack surface. IMO we shouldn't implement this unless
we know there's a real use case for it.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Sun Dec 06 11:51:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Dec 2020 11:51:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45615.81053 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klsZ8-0007jo-Kx; Sun, 06 Dec 2020 11:50:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45615.81053; Sun, 06 Dec 2020 11:50:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klsZ8-0007jh-Hf; Sun, 06 Dec 2020 11:50:58 +0000
Received: by outflank-mailman (input) for mailman id 45615;
 Sun, 06 Dec 2020 11:50:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PoKi=FK=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1klsZ7-0007jc-1P
 for xen-devel@lists.xenproject.org; Sun, 06 Dec 2020 11:50:57 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id c1b59f2b-76c4-45b2-9d98-fb1505625120;
 Sun, 06 Dec 2020 11:50:56 +0000 (UTC)
Received: from mail-ed1-f71.google.com (mail-ed1-f71.google.com
 [209.85.208.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-547-BMajkwIfPnmK6VH6JJ9Bew-1; Sun, 06 Dec 2020 06:50:52 -0500
Received: by mail-ed1-f71.google.com with SMTP id ca7so4521800edb.12
 for <xen-devel@lists.xenproject.org>; Sun, 06 Dec 2020 03:50:51 -0800 (PST)
Received: from [192.168.1.36] (111.red-88-21-205.staticip.rima-tde.net.
 [88.21.205.111])
 by smtp.gmail.com with ESMTPSA id m7sm7840045ejr.119.2020.12.06.03.50.48
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sun, 06 Dec 2020 03:50:49 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c1b59f2b-76c4-45b2-9d98-fb1505625120
Subject: Re: [PATCH 3/3] net: checksum: Introduce fine control over checksum
 type
To: Bin Meng <bmeng.cn@gmail.com>, qemu-devel@nongnu.org
Cc: Peter Maydell <peter.maydell@linaro.org>,
 Alistair Francis <alistair@alistair23.me>, Paul Durrant <paul@xen.org>,
 Li Zhijian <lizhijian@cn.fujitsu.com>, "Michael S. Tsirkin"
 <mst@redhat.com>, Andrew Jeffery <andrew@aj.id.au>,
 Jason Wang <jasowang@redhat.com>, Bin Meng <bin.meng@windriver.com>,
 Joel Stanley <joel@jms.id.au>, Beniamino Galvani <b.galvani@gmail.com>,
 Zhang Chen <chen.zhang@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Peter Chubb <peter.chubb@nicta.com.au>, =?UTF-8?Q?C=c3=a9dric_Le_Goater?=
 <clg@kaod.org>, qemu-arm@nongnu.org, xen-devel@lists.xenproject.org,
 Anthony Perard <anthony.perard@citrix.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>
References: <1607220847-24096-1-git-send-email-bmeng.cn@gmail.com>
 <1607220847-24096-3-git-send-email-bmeng.cn@gmail.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <9f8cdb69-7b74-4034-223f-bfa62cb4e440@redhat.com>
Date: Sun, 6 Dec 2020 12:50:47 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <1607220847-24096-3-git-send-email-bmeng.cn@gmail.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Hi Bin,

On 12/6/20 3:14 AM, Bin Meng wrote:
> From: Bin Meng <bin.meng@windriver.com>
> 
> At present net_checksum_calculate() blindly calculates all types of
> checksums (IP, TCP, UDP). Some NICs may have a per type setting in
> their BDs to control what checksum should be offloaded. To support
> such hardware behavior, introduce a 'csum_flag' parameter to the
> net_checksum_calculate() API to allow fine control over what type
> checksum is calculated.
> 
> Existing users of this API are updated accordingly.
> 
> Signed-off-by: Bin Meng <bin.meng@windriver.com>
> 
> ---
> 
>  hw/net/allwinner-sun8i-emac.c |  2 +-
>  hw/net/cadence_gem.c          |  2 +-
>  hw/net/fsl_etsec/rings.c      |  8 +++-----
>  hw/net/ftgmac100.c            | 10 +++++++++-
>  hw/net/imx_fec.c              | 15 +++------------
>  hw/net/virtio-net.c           |  2 +-
>  hw/net/xen_nic.c              |  2 +-
>  include/net/checksum.h        |  7 ++++++-

When sending such an API refactor, the patch is easier to
review if you set up the scripts/git.orderfile config.

>  net/checksum.c                | 18 ++++++++++++++----
>  net/filter-rewriter.c         |  4 ++--
>  10 files changed, 41 insertions(+), 29 deletions(-)
...
> diff --git a/include/net/checksum.h b/include/net/checksum.h
> index 05a0d27..7dec37e 100644
> --- a/include/net/checksum.h
> +++ b/include/net/checksum.h
> @@ -21,11 +21,16 @@
>  #include "qemu/bswap.h"
>  struct iovec;
>  
> +#define CSUM_IP     0x01

IMO this is IP_HEADER,

> +#define CSUM_TCP    0x02
> +#define CSUM_UDP    0x04

and these IP_PAYLOAD, regardless of the payload protocol.

> +#define CSUM_ALL    (CSUM_IP | CSUM_TCP | CSUM_UDP)

Maybe CSUM_HEADER / CSUM_PAYLOAD / CSUM_FULL (aka RAW?).

> +
>  uint32_t net_checksum_add_cont(int len, uint8_t *buf, int seq);
>  uint16_t net_checksum_finish(uint32_t sum);
>  uint16_t net_checksum_tcpudp(uint16_t length, uint16_t proto,
>                               uint8_t *addrs, uint8_t *buf);
> -void net_checksum_calculate(uint8_t *data, int length);
> +void net_checksum_calculate(uint8_t *data, int length, int csum_flag);
>  
>  static inline uint32_t
>  net_checksum_add(int len, uint8_t *buf)
> diff --git a/net/checksum.c b/net/checksum.c
> index dabd290..70f4eae 100644
> --- a/net/checksum.c
> +++ b/net/checksum.c
> @@ -57,7 +57,7 @@ uint16_t net_checksum_tcpudp(uint16_t length, uint16_t proto,
>      return net_checksum_finish(sum);
>  }
>  
> -void net_checksum_calculate(uint8_t *data, int length)
> +void net_checksum_calculate(uint8_t *data, int length, int csum_flag)
>  {
>      int mac_hdr_len, ip_len;
>      struct ip_header *ip;
> @@ -108,9 +108,11 @@ void net_checksum_calculate(uint8_t *data, int length)
>      }
>  
>      /* Calculate IP checksum */
> -    stw_he_p(&ip->ip_sum, 0);
> -    csum = net_raw_checksum((uint8_t *)ip, IP_HDR_GET_LEN(ip));
> -    stw_be_p(&ip->ip_sum, csum);
> +    if (csum_flag & CSUM_IP) {
> +        stw_he_p(&ip->ip_sum, 0);
> +        csum = net_raw_checksum((uint8_t *)ip, IP_HDR_GET_LEN(ip));
> +        stw_be_p(&ip->ip_sum, csum);
> +    }
>  
>      if (IP4_IS_FRAGMENT(ip)) {
>          return; /* a fragmented IP packet */
> @@ -128,6 +130,10 @@ void net_checksum_calculate(uint8_t *data, int length)
>      switch (ip->ip_p) {
>      case IP_PROTO_TCP:
>      {
> +        if (!(csum_flag & CSUM_TCP)) {
> +            return;
> +        }
> +
>          tcp_header *tcp = (tcp_header *)(ip + 1);
>  
>          if (ip_len < sizeof(tcp_header)) {
> @@ -148,6 +154,10 @@ void net_checksum_calculate(uint8_t *data, int length)
>      }
>      case IP_PROTO_UDP:
>      {
> +        if (!(csum_flag & CSUM_UDP)) {
> +            return;
> +        }
> +
>          udp_header *udp = (udp_header *)(ip + 1);
>  
>          if (ip_len < sizeof(udp_header)) {
...



From xen-devel-bounces@lists.xenproject.org Sun Dec 06 12:08:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Dec 2020 12:08:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45625.81064 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klspe-0000Ts-A7; Sun, 06 Dec 2020 12:08:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45625.81064; Sun, 06 Dec 2020 12:08:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klspe-0000Tl-72; Sun, 06 Dec 2020 12:08:02 +0000
Received: by outflank-mailman (input) for mailman id 45625;
 Sun, 06 Dec 2020 12:08:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klspc-0000Td-Na; Sun, 06 Dec 2020 12:08:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klspc-0000bX-FA; Sun, 06 Dec 2020 12:08:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klspc-0007fH-3P; Sun, 06 Dec 2020 12:08:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1klspc-00067K-2s; Sun, 06 Dec 2020 12:08:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157233-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157233: tolerable FAIL
X-Osstest-Versions-This:
    xen=5e666356a9d55fbd9eb5b8506088aa760e107b5b
X-Osstest-Versions-That:
    xen=5e666356a9d55fbd9eb5b8506088aa760e107b5b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 06 Dec 2020 12:08:00 +0000

flight 157233 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157233/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157218
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157218
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157218
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157218
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157218
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157218
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157218
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157218
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157218
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157218
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157218
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  5e666356a9d55fbd9eb5b8506088aa760e107b5b
baseline version:
 xen                  5e666356a9d55fbd9eb5b8506088aa760e107b5b

Last test of basis   157233  2020-12-06 01:51:51 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Dec 06 12:45:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Dec 2020 12:45:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45638.81079 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kltPK-0004CP-97; Sun, 06 Dec 2020 12:44:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45638.81079; Sun, 06 Dec 2020 12:44:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kltPK-0004CI-6G; Sun, 06 Dec 2020 12:44:54 +0000
Received: by outflank-mailman (input) for mailman id 45638;
 Sun, 06 Dec 2020 12:44:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DarT=FK=gmail.com=bmeng.cn@srs-us1.protection.inumbo.net>)
 id 1kltPI-0004CD-Rl
 for xen-devel@lists.xenproject.org; Sun, 06 Dec 2020 12:44:52 +0000
Received: from mail-yb1-xb44.google.com (unknown [2607:f8b0:4864:20::b44])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f13a4457-78ea-4c88-ac4f-bbb7250fefb5;
 Sun, 06 Dec 2020 12:44:51 +0000 (UTC)
Received: by mail-yb1-xb44.google.com with SMTP id g15so10265298ybq.6
 for <xen-devel@lists.xenproject.org>; Sun, 06 Dec 2020 04:44:51 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f13a4457-78ea-4c88-ac4f-bbb7250fefb5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=K+a8GLNhS5kz1ayNB+wyvyeVq/+HoG6SYkYYVIKVY/c=;
        b=aMeOiHqoWIlEFCaxZAxK537taWLP3YfLwHy9hr55Tv3/XfK/C+gOgMr0zIm7IYgTrS
         DuaGXSFleXbBusET2TL2Gr9LYyyVL3MMXAZbeJO65i0daGzKnCM8/QjGPb/iLClvi2Xt
         dk05SPvKyoiefbvMrkObf6y+C5DmP2FfFTiyBZ6D64TLDhjV+cYcdT6SoaspwmMaURqB
         sejpNooNuAB6Pz53TrqPJO3tPLB7qpIqxlt9ecd8FZupF+l8q2Fm67QvOm0TDMcNI8vd
         okm0GGeT210P/4BCUkbdztGT6oxXeUvxAI53QvTIwtFW7DEufjgUu8wF4758je3Tf5Vp
         w5Cg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=K+a8GLNhS5kz1ayNB+wyvyeVq/+HoG6SYkYYVIKVY/c=;
        b=StkBkiJ5MTJ5F2iXqzH9IJrdkQjwhUNZJkH7oxZOVbEVfX8qCFNDy1yIpREyoW/X06
         LegLXiF9p6Bn4NuuAY4JcfPRlCMTuHgQplSEDy5XSBupHd8cu+k1kNuHG5JINXqkDGJ2
         vlgl0owogTIznG2d5RNJKYbbQyaD1k11g7XfeiQiijULfTQ1yupwK6+GoDPCYiitY9Qa
         aVW+JEtqK+X1GuOVjBMWQYiW+5oCePppIMHwYsiBcxUj1yVZVJJPjopJsyA4pzqt9nA+
         iagww9AnP9W4CRvzS32SDf1Rui6lTfkzeZST53o2Wyn4Nw3NiZfTPTg0lByHbfeCIgHm
         CSTg==
X-Gm-Message-State: AOAM5337IuaxazK8mHpam3/UKEcVBCFklccfF+p2sk/DUmOfeGG96JFb
	8B/OsRZYFdxzBtDc03KIOLV2zb5lYEGdPHBy1BM=
X-Google-Smtp-Source: ABdhPJz5qrQQkWkiNP2bKMjxspW0551AxJy5859jYAV3SaADS0hWbnmO3MX4UZumVANKlo7EctyNfYKIh+He5KdL6x0=
X-Received: by 2002:a25:bb50:: with SMTP id b16mr18111943ybk.152.1607258691399;
 Sun, 06 Dec 2020 04:44:51 -0800 (PST)
MIME-Version: 1.0
References: <1607220847-24096-1-git-send-email-bmeng.cn@gmail.com>
 <1607220847-24096-3-git-send-email-bmeng.cn@gmail.com> <9f8cdb69-7b74-4034-223f-bfa62cb4e440@redhat.com>
In-Reply-To: <9f8cdb69-7b74-4034-223f-bfa62cb4e440@redhat.com>
From: Bin Meng <bmeng.cn@gmail.com>
Date: Sun, 6 Dec 2020 20:44:40 +0800
Message-ID: <CAEUhbmUBeUzjPG=2-WF=UnpMnVkH3qT0FkXpgBhP==yt53UfBg@mail.gmail.com>
Subject: Re: [PATCH 3/3] net: checksum: Introduce fine control over checksum type
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>
Cc: "qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>, Peter Maydell <peter.maydell@linaro.org>, 
	Alistair Francis <alistair@alistair23.me>, Paul Durrant <paul@xen.org>, 
	Li Zhijian <lizhijian@cn.fujitsu.com>, "Michael S. Tsirkin" <mst@redhat.com>, 
	Andrew Jeffery <andrew@aj.id.au>, Jason Wang <jasowang@redhat.com>, Bin Meng <bin.meng@windriver.com>, 
	Joel Stanley <joel@jms.id.au>, Beniamino Galvani <b.galvani@gmail.com>, Zhang Chen <chen.zhang@intel.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Peter Chubb <peter.chubb@nicta.com.au>, 
	=?UTF-8?Q?C=C3=A9dric_Le_Goater?= <clg@kaod.org>, 
	qemu-arm <qemu-arm@nongnu.org>, xen-devel@lists.xenproject.org, 
	Anthony Perard <anthony.perard@citrix.com>, "Edgar E. Iglesias" <edgar.iglesias@gmail.com>, 
	qemu-ppc@nongnu.org, David Gibson <david@gibson.dropbear.id.au>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Philippe,

On Sun, Dec 6, 2020 at 7:50 PM Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
>
> Hi Bin,
>
> On 12/6/20 3:14 AM, Bin Meng wrote:
> > From: Bin Meng <bin.meng@windriver.com>
> >
> > At present net_checksum_calculate() blindly calculates all types of
> > checksums (IP, TCP, UDP). Some NICs may have a per type setting in
> > their BDs to control what checksum should be offloaded. To support
> > such hardware behavior, introduce a 'csum_flag' parameter to the
> > net_checksum_calculate() API to allow fine control over what type
> > checksum is calculated.
> >
> > Existing users of this API are updated accordingly.
> >
> > Signed-off-by: Bin Meng <bin.meng@windriver.com>
> >
> > ---
> >
> >  hw/net/allwinner-sun8i-emac.c |  2 +-
> >  hw/net/cadence_gem.c          |  2 +-
> >  hw/net/fsl_etsec/rings.c      |  8 +++-----
> >  hw/net/ftgmac100.c            | 10 +++++++++-
> >  hw/net/imx_fec.c              | 15 +++------------
> >  hw/net/virtio-net.c           |  2 +-
> >  hw/net/xen_nic.c              |  2 +-
> >  include/net/checksum.h        |  7 ++++++-
>
> When sending such an API refactor, the patch is easier to
> review if you set up the scripts/git.orderfile config.

Sure. I thought I had done this before, but apparently not on the
machine this series was generated on :)

>
> >  net/checksum.c                | 18 ++++++++++++++----
> >  net/filter-rewriter.c         |  4 ++--
> >  10 files changed, 41 insertions(+), 29 deletions(-)
> ...
> > diff --git a/include/net/checksum.h b/include/net/checksum.h
> > index 05a0d27..7dec37e 100644
> > --- a/include/net/checksum.h
> > +++ b/include/net/checksum.h
> > @@ -21,11 +21,16 @@
> >  #include "qemu/bswap.h"
> >  struct iovec;
> >
> > +#define CSUM_IP     0x01
>
> IMO this is IP_HEADER,

Yes, but I believe no one will misread it, no?

>
> > +#define CSUM_TCP    0x02
> > +#define CSUM_UDP    0x04
>
> and these IP_PAYLOAD, regardless of the payload protocol.

We have to distinguish TCP and UDP.

>
> > +#define CSUM_ALL    (CSUM_IP | CSUM_TCP | CSUM_UDP)
>
> Maybe CSUM_HEADER / CSUM_PAYLOAD / CSUM_FULL (aka RAW?).
>
> > +
> >  uint32_t net_checksum_add_cont(int len, uint8_t *buf, int seq);
> >  uint16_t net_checksum_finish(uint32_t sum);
> >  uint16_t net_checksum_tcpudp(uint16_t length, uint16_t proto,
> >                               uint8_t *addrs, uint8_t *buf);
> > -void net_checksum_calculate(uint8_t *data, int length);
> > +void net_checksum_calculate(uint8_t *data, int length, int csum_flag);
> >
> >  static inline uint32_t
> >  net_checksum_add(int len, uint8_t *buf)
> > diff --git a/net/checksum.c b/net/checksum.c
> > index dabd290..70f4eae 100644
> > --- a/net/checksum.c
> > +++ b/net/checksum.c
> > @@ -57,7 +57,7 @@ uint16_t net_checksum_tcpudp(uint16_t length, uint16_t proto,
> >      return net_checksum_finish(sum);
> >  }
> >
> > -void net_checksum_calculate(uint8_t *data, int length)
> > +void net_checksum_calculate(uint8_t *data, int length, int csum_flag)
> >  {
> >      int mac_hdr_len, ip_len;
> >      struct ip_header *ip;
> > @@ -108,9 +108,11 @@ void net_checksum_calculate(uint8_t *data, int length)
> >      }
> >
> >      /* Calculate IP checksum */
> > -    stw_he_p(&ip->ip_sum, 0);
> > -    csum = net_raw_checksum((uint8_t *)ip, IP_HDR_GET_LEN(ip));
> > -    stw_be_p(&ip->ip_sum, csum);
> > +    if (csum_flag & CSUM_IP) {
> > +        stw_he_p(&ip->ip_sum, 0);
> > +        csum = net_raw_checksum((uint8_t *)ip, IP_HDR_GET_LEN(ip));
> > +        stw_be_p(&ip->ip_sum, csum);
> > +    }
> >
> >      if (IP4_IS_FRAGMENT(ip)) {
> >          return; /* a fragmented IP packet */
> > @@ -128,6 +130,10 @@ void net_checksum_calculate(uint8_t *data, int length)
> >      switch (ip->ip_p) {
> >      case IP_PROTO_TCP:
> >      {
> > +        if (!(csum_flag & CSUM_TCP)) {
> > +            return;
> > +        }
> > +
> > +        tcp_header *tcp = (tcp_header *)(ip + 1);
> >
> >          if (ip_len < sizeof(tcp_header)) {
> > @@ -148,6 +154,10 @@ void net_checksum_calculate(uint8_t *data, int length)
> >      }
> >      case IP_PROTO_UDP:
> >      {
> > +        if (!(csum_flag & CSUM_UDP)) {
> > +            return;
> > +        }
> > +
> > +        udp_header *udp = (udp_header *)(ip + 1);
> >
> >          if (ip_len < sizeof(udp_header)) {
> ...

Regards,
Bin


From xen-devel-bounces@lists.xenproject.org Sun Dec 06 16:23:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Dec 2020 16:23:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45683.81103 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klwon-000894-S0; Sun, 06 Dec 2020 16:23:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45683.81103; Sun, 06 Dec 2020 16:23:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klwon-00088x-Og; Sun, 06 Dec 2020 16:23:25 +0000
Received: by outflank-mailman (input) for mailman id 45683;
 Sun, 06 Dec 2020 16:23:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klwom-00088p-ST; Sun, 06 Dec 2020 16:23:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klwom-0006H0-LX; Sun, 06 Dec 2020 16:23:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1klwom-0001Uz-E5; Sun, 06 Dec 2020 16:23:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1klwom-0003KI-Dd; Sun, 06 Dec 2020 16:23:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ACcTVs/Ppjp6J9FgPcjuI67LnH0PucJjdAGgfBwsn0w=; b=zFE72nDasHRLyPuzYCD8jj21Ij
	7T5yTsDUzI778hZpheMeMzGCns0iXVIutXjd6OaDMx6TQp8x07524XKO43j/G6li0l9CQAw1r1Z/p
	dV7jUI8rQHoLoCG386kG60ZMN084/nVYwu/bLcBGcndj7KleB/9yYB5KdrSAKXvr6uo8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157234-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157234: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=7059c2c00a2196865c2139083cbef47cd18109b6
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 06 Dec 2020 16:23:24 +0000

flight 157234 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157234/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl          11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                7059c2c00a2196865c2139083cbef47cd18109b6
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  127 days
Failing since        152366  2020-08-01 20:49:34 Z  126 days  216 attempts
Testing same since   157234  2020-12-06 03:27:14 Z    0 days    1 attempts

------------------------------------------------------------
3642 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 698500 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 06 16:47:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Dec 2020 16:47:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45705.81119 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klxCN-0001jE-0x; Sun, 06 Dec 2020 16:47:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45705.81119; Sun, 06 Dec 2020 16:47:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klxCM-0001j7-TE; Sun, 06 Dec 2020 16:47:46 +0000
Received: by outflank-mailman (input) for mailman id 45705;
 Sun, 06 Dec 2020 16:47:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CnKb=FK=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1klxCL-0001j2-TS
 for xen-devel@lists.xenproject.org; Sun, 06 Dec 2020 16:47:46 +0000
Received: from mail-lf1-x12a.google.com (unknown [2a00:1450:4864:20::12a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a62cf928-3956-4ded-9bbf-f5dcc643e8d3;
 Sun, 06 Dec 2020 16:47:44 +0000 (UTC)
Received: by mail-lf1-x12a.google.com with SMTP id a9so14587019lfh.2
 for <xen-devel@lists.xenproject.org>; Sun, 06 Dec 2020 08:47:44 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a62cf928-3956-4ded-9bbf-f5dcc643e8d3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=RW0lDqSJK11o5Hdo0ZTtiX38zMds7s0ivaE30jYDc08=;
        b=fyWAecBmSWqbeKTDqjVW+CVkNXAQmZ410oGJ+xz0B3svfmdqi9ApYJY14RYlLULq1X
         NQfKYWKNfPcKgC+hJNLBviDdi4gBw6UW0zBYYEzqUyiPbgT5G5L/KO8GooMOlf8fcq8V
         t75RXhiKmf+bn3sH0AYBYtKLi3q2jhRa/RQpy5AYfan7yvFFFKQqrH8wwvd+Vy6atRaf
         is1yTY0QYFtQXGVSI7EWg9vnv6exSx3VEEew5PE68sbe+WTEaviOQmbo8QaJLc2eaK7y
         HuNPjjSrWCxhDwki9gHeW6OEwhG/glGZEokkXpBsTQWHYr1m1cgxf2uOFGFRVK13vvp0
         MUng==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=RW0lDqSJK11o5Hdo0ZTtiX38zMds7s0ivaE30jYDc08=;
        b=oCBAb40Vg6Lso/3nZnkEd7H5YaphMpcf+FZnD6ckqIILQagHXZYQy7kQJlmzz+5CLw
         HRrmBsSjujznmjPT5ekERYELabykL0V4aBoxQeNVyht5hJSHS5GrT4Cl7UYpBuLXsZ9R
         rojK621JOOmdK5U4BxqGCHR3LylqLqGMSvWW2nEUH67FDOO7k7WwKKByZk/uu2LYR4nL
         H583png105YXMZrcXUABStk5r6OGHB8hZBkBYFeLHla/9ii0C0RUM+RaXVth8FweeXoK
         Zd3Pe57Af/WXkamXcWJEwoblCmo5AwsKasV/cV57RxnOwO1U/FPM3zur+lTWhQcT4FzO
         j7CQ==
X-Gm-Message-State: AOAM531NSt81PWouUv1LuTZoR52Wy1+ld7WOgLQ14eT/B9TOAUAolV5M
	XwlQTyIyZZsOSt+kWQm1xLIpj55z4Pye6Wyvxkc=
X-Google-Smtp-Source: ABdhPJw9vQiBpQRjhN/8CX+xAjLRplV+JVJIeCtYqBQQJnmgOmlExZLWBZV8I1M5wikJWFGCCY4w8C0bQqI+B6jfbXY=
X-Received: by 2002:a19:3c5:: with SMTP id 188mr6378736lfd.202.1607273263606;
 Sun, 06 Dec 2020 08:47:43 -0800 (PST)
MIME-Version: 1.0
References: <20201129035639.GW2532@mail-itl> <20201130164010.GA23494@redsun51.ssa.fujisawa.hgst.com>
 <20201202000642.GJ201140@mail-itl> <20201204110847.GU201140@mail-itl>
 <20201204120803.GA20727@lst.de> <20201204122054.GV201140@mail-itl> <20201205082839.ts3ju6yta46cgwjn@Air-de-Roger>
In-Reply-To: <20201205082839.ts3ju6yta46cgwjn@Air-de-Roger>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Sun, 6 Dec 2020 11:47:32 -0500
Message-ID: <CAKf6xpvdD-XJoRO91B+Lwc=0Sb6Luw2X8Y9sH_MQsAWhZmj+hw@mail.gmail.com>
Subject: Re: GPF on 0xdead000000000100 in nvme_map_data - Linux 5.9.9
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: =?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?= <marmarek@invisiblethingslab.com>, 
	Christoph Hellwig <hch@lst.de>, Juergen Gross <jgross@suse.com>, xen-devel <xen-devel@lists.xenproject.org>, 
	Keith Busch <kbusch@kernel.org>, Jens Axboe <axboe@fb.com>, Sagi Grimberg <sagi@grimberg.me>, 
	linux-nvme@lists.infradead.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Sat, Dec 5, 2020 at 3:29 AM Roger Pau Monné <roger.pau@citrix.com> wrote:
>
> > On Fri, Dec 04, 2020 at 01:20:54PM +0100, Marek Marczykowski-Górecki wrote:
> > On Fri, Dec 04, 2020 at 01:08:03PM +0100, Christoph Hellwig wrote:
> > > On Fri, Dec 04, 2020 at 12:08:47PM +0100, Marek Marczykowski-Górecki wrote:
> > > > culprit:
> > > >
> > > > commit 9e2369c06c8a181478039258a4598c1ddd2cadfa
> > > > Author: Roger Pau Monne <roger.pau@citrix.com>
> > > > Date:   Tue Sep 1 10:33:26 2020 +0200
> > > >
> > > >     xen: add helpers to allocate unpopulated memory
> > > >
> > > > I'm adding relevant people and xen-devel to the thread.
> > > > For completeness, here is the original crash message:
> > >
> > > That commit definitively adds a new ZONE_DEVICE user, so it does look
> > > related.  But you are not running on Xen, are you?
> >
> > I am. It is Xen dom0.
>
> I'm afraid I'm on leave and won't be able to look into this until the
> beginning of January. I would guess it's some kind of bad
> interaction between blkback and NVMe drivers both using ZONE_DEVICE?
>
> Maybe the best is to revert this change and I will look into it when
> I get back, unless someone is willing to debug this further.

Looking at commit 9e2369c06c8a and xen-blkback's put_free_pages(), they
both use page->lru, which is part of the anonymous union shared with
*pgmap.  That matches Marek's suspicion that the ZONE_DEVICE memory is
being used as if it were ZONE_NORMAL.

memmap_init_zone_device() says:
* ZONE_DEVICE pages union ->lru with a ->pgmap back pointer
* and zone_device_data.  It is a bug if a ZONE_DEVICE page is
* ever freed or placed on a driver-private list.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Sun Dec 06 17:24:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Dec 2020 17:24:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45717.81131 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klxlh-0005SP-SY; Sun, 06 Dec 2020 17:24:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45717.81131; Sun, 06 Dec 2020 17:24:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klxlh-0005SI-PC; Sun, 06 Dec 2020 17:24:17 +0000
Received: by outflank-mailman (input) for mailman id 45717;
 Sun, 06 Dec 2020 17:24:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SsZU=FK=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1klxlg-0005SD-HN
 for xen-devel@lists.xenproject.org; Sun, 06 Dec 2020 17:24:16 +0000
Received: from wnew4-smtp.messagingengine.com (unknown [64.147.123.18])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e19939fb-7343-4d6b-afa2-00bacb6c5744;
 Sun, 06 Dec 2020 17:24:14 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailnew.west.internal (Postfix) with ESMTP id A2CAB837;
 Sun,  6 Dec 2020 12:24:12 -0500 (EST)
Received: from mailfrontend2 ([10.202.2.163])
 by compute3.internal (MEProxy); Sun, 06 Dec 2020 12:24:13 -0500
Received: from localhost.localdomain (ip5b40aa59.dynamic.kabel-deutschland.de
 [91.64.170.89])
 by mail.messagingengine.com (Postfix) with ESMTPA id 81FE2108005B;
 Sun,  6 Dec 2020 12:24:07 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e19939fb-7343-4d6b-afa2-00bacb6c5744
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-transfer-encoding:content-type
	:date:from:message-id:mime-version:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; bh=JItcwb
	qleisdzutjtb6IffdEYYvbyCRFRbpwZ3ZqjOk=; b=c+VWyhEmYg/NEIse3CgHCB
	hzKRwBBZxsq8QvB9uulVUVTBEp6AzNDBvkDABqbaqSTwSlI310tiSnSYpR3tsWQy
	l9K/Id6W7nC/VR5tjG15yotcpk3oU7avgHuoP9T9R5TreLBvmjzs1CZodPAwa79G
	SPfjf0eIZ2UD5BFitmIl5yt9iNlvSekmwiJ6DHVMIjvIkeaz4o4nhNg+ks6aHZhT
	6MMdq5l0kflG2xVXpotAr7dD2hVSduWy5aJnObLEDdnl5diAzZecp/krH6MF9+kq
	JQ7pHyi3zJQaJzm32aXz7mjVs/Bp3ZxkXmfmMYcR3zIQy8POK9CD2/LH+x+POjlQ
	==
X-ME-Sender: <xms:uRPNX7U6w-bHOtfmjgHQwGZT9pP7n8E-EIsggLLqGJ0icKd8d9r2KA>
    <xme:uRPNXzm5sATZO6h6F2BbJe6BIq60U9Bagi8NhkhE1cX9IJnk4G_234PwabaWy6LkM
    SFwMwhZCfG3GQ>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedrudejvddguddtudcutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpefhvffufffkofggtghogfesthekredtredtjeenucfhrhhomhepofgrrhgv
    khcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinh
    hvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhephefh
    feetueelvddvtedttdevieeluedtvedtfeejieelhedutdeuheduieejgfegnecuffhomh
    grihhnpehkvghrnhgvlhdrohhrghenucfkphepledurdeigedrudejtddrkeelnecuvehl
    uhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepmhgrrhhmrghrvg
    hksehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:uRPNX3ZrlOs3WgwCcd3ozP2QYvO35hombQMDFhsnUpx3ZmcZn5yT7w>
    <xmx:uRPNX2UxaVDfZeXEtMGqgJ-42RexzbLXGvjua9CHGBgixkAd5Pxjjw>
    <xmx:uRPNX1mKihq3nu_4A5bJfRz3II3CLERfZ_1yebF43JFPED1pDmyMSA>
    <xmx:vBPNX7eIkEMfUc5lvOk3n_QeNAgXlimtA_Wv4u50zDK_al08hxCUGYFfpZo>
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	stable@vger.kernel.org,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	David Airlie <airlied@linux.ie>,
	Daniel Vetter <daniel@ffwll.ch>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Simon Leiner <simon@leiner.me>,
	Yan Yankovskyi <yyankovskyi@gmail.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	Dan Carpenter <dan.carpenter@oracle.com>,
	linux-kernel@vger.kernel.org (open list),
	dri-devel@lists.freedesktop.org (open list:DRM DRIVERS FOR XEN)
Subject: [PATCH] Revert "xen: add helpers to allocate unpopulated memory"
Date: Sun,  6 Dec 2020 18:22:36 +0100
Message-Id: <20201206172242.1249689-1-marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.25.4
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Organization: Invisible Things Lab
Content-Transfer-Encoding: 8bit

This reverts commit 9e2369c06c8a181478039258a4598c1ddd2cadfa.

On a Xen PV dom0 with an NVMe disk, this makes dom0 crash when starting
a domain. This looks like some bad interaction between xen-blkback and
the NVMe driver, both using ZONE_DEVICE. Since the author is on leave now,
revert the change until a proper solution is developed.

The specific crash message is:

    general protection fault, probably for non-canonical address 0xdead000000000100: 0000 [#1] SMP NOPTI
    CPU: 1 PID: 134 Comm: kworker/u12:2 Not tainted 5.9.9-1.qubes.x86_64 #1
    Hardware name: LENOVO 20M9CTO1WW/20M9CTO1WW, BIOS N2CET50W (1.33 ) 01/15/2020
    Workqueue: dm-thin do_worker [dm_thin_pool]
    RIP: e030:nvme_map_data+0x300/0x3a0 [nvme]
    Code: b8 fe ff ff e9 a8 fe ff ff 4c 8b 56 68 8b 5e 70 8b 76 74 49 8b 02 48 c1 e8 33 83 e0 07 83 f8 04 0f 85 f2 fe ff ff 49 8b 42 08 <83> b8 d0 00 00 00 04 0f 85 e1 fe ff ff e9 38 fd ff ff 8b 55 70 be
    RSP: e02b:ffffc900010e7ad8 EFLAGS: 00010246
    RAX: dead000000000100 RBX: 0000000000001000 RCX: ffff8881a58f5000
    RDX: 0000000000001000 RSI: 0000000000000000 RDI: ffff8881a679e000
    RBP: ffff8881a5ef4c80 R08: ffff8881a5ef4c80 R09: 0000000000000002
    R10: ffffea0003dfff40 R11: 0000000000000008 R12: ffff8881a679e000
    R13: ffffc900010e7b20 R14: ffff8881a70b5980 R15: ffff8881a679e000
    FS:  0000000000000000(0000) GS:ffff8881b5440000(0000) knlGS:0000000000000000
    CS:  e030 DS: 0000 ES: 0000 CR0: 0000000080050033
    CR2: 0000000001d64408 CR3: 00000001aa2c0000 CR4: 0000000000050660
    Call Trace:
     nvme_queue_rq+0xa7/0x1a0 [nvme]
     __blk_mq_try_issue_directly+0x11d/0x1e0
     ? add_wait_queue_exclusive+0x70/0x70
     blk_mq_try_issue_directly+0x35/0xc0
     blk_mq_submit_bio+0x58f/0x660
     __submit_bio_noacct+0x300/0x330
     process_shared_bio+0x126/0x1b0 [dm_thin_pool]
     process_cell+0x226/0x280 [dm_thin_pool]
     process_thin_deferred_cells+0x185/0x320 [dm_thin_pool]
     process_deferred_bios+0xa4/0x2a0 [dm_thin_pool]
     do_worker+0xcc/0x130 [dm_thin_pool]
     process_one_work+0x1b4/0x370
     worker_thread+0x4c/0x310
     ? process_one_work+0x370/0x370
     kthread+0x11b/0x140
     ? __kthread_bind_mask+0x60/0x60
     ret_from_fork+0x22/0x30
    Modules linked in: loop snd_seq_dummy snd_hrtimer nf_tables nfnetlink vfat fat snd_sof_pci snd_sof_intel_byt snd_sof_intel_ipc snd_sof_intel_hda_common snd_soc_hdac_hda snd_sof_xtensa_dsp snd_sof_intel_hda snd_sof snd_soc_skl snd_soc_sst_
    ipc snd_soc_sst_dsp snd_hda_ext_core snd_soc_acpi_intel_match snd_soc_acpi snd_soc_core snd_compress ac97_bus snd_pcm_dmaengine elan_i2c snd_hda_codec_hdmi mei_hdcp iTCO_wdt intel_powerclamp intel_pmc_bxt ee1004 intel_rapl_msr iTCO_vendor
    _support joydev pcspkr intel_wmi_thunderbolt wmi_bmof thunderbolt ucsi_acpi idma64 typec_ucsi snd_hda_codec_realtek typec snd_hda_codec_generic snd_hda_intel snd_intel_dspcfg snd_hda_codec thinkpad_acpi snd_hda_core ledtrig_audio int3403_
    thermal snd_hwdep snd_seq snd_seq_device snd_pcm iwlwifi snd_timer processor_thermal_device mei_me cfg80211 intel_rapl_common snd e1000e mei int3400_thermal int340x_thermal_zone i2c_i801 acpi_thermal_rel soundcore intel_soc_dts_iosf i2c_s
    mbus rfkill intel_pch_thermal xenfs
     ip_tables dm_thin_pool dm_persistent_data dm_bio_prison dm_crypt nouveau rtsx_pci_sdmmc mmc_core mxm_wmi crct10dif_pclmul ttm crc32_pclmul crc32c_intel i915 ghash_clmulni_intel i2c_algo_bit serio_raw nvme drm_kms_helper cec xhci_pci nvme
    _core rtsx_pci xhci_pci_renesas drm xhci_hcd wmi video pinctrl_cannonlake pinctrl_intel xen_privcmd xen_pciback xen_blkback xen_gntalloc xen_gntdev xen_evtchn uinput
    ---[ end trace f8d47e4aa6724df4 ]---
    RIP: e030:nvme_map_data+0x300/0x3a0 [nvme]
    Code: b8 fe ff ff e9 a8 fe ff ff 4c 8b 56 68 8b 5e 70 8b 76 74 49 8b 02 48 c1 e8 33 83 e0 07 83 f8 04 0f 85 f2 fe ff ff 49 8b 42 08 <83> b8 d0 00 00 00 04 0f 85 e1 fe ff ff e9 38 fd ff ff 8b 55 70 be
    RSP: e02b:ffffc900010e7ad8 EFLAGS: 00010246
    RAX: dead000000000100 RBX: 0000000000001000 RCX: ffff8881a58f5000
    RDX: 0000000000001000 RSI: 0000000000000000 RDI: ffff8881a679e000
    RBP: ffff8881a5ef4c80 R08: ffff8881a5ef4c80 R09: 0000000000000002
    R10: ffffea0003dfff40 R11: 0000000000000008 R12: ffff8881a679e000
    R13: ffffc900010e7b20 R14: ffff8881a70b5980 R15: ffff8881a679e000
    FS:  0000000000000000(0000) GS:ffff8881b5440000(0000) knlGS:0000000000000000
    CS:  e030 DS: 0000 ES: 0000 CR0: 0000000080050033
    CR2: 0000000001d64408 CR3: 00000001aa2c0000 CR4: 0000000000050660
    Kernel panic - not syncing: Fatal exception
    Kernel Offset: disabled

Discussion at https://lore.kernel.org/xen-devel/20201205082839.ts3ju6yta46cgwjn@Air-de-Roger/T

Cc: stable@vger.kernel.org #v5.9+
(for 5.9 it's easier to revert the original commit directly)
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 drivers/gpu/drm/xen/xen_drm_front_gem.c |   9 +-
 drivers/xen/Kconfig                     |  10 --
 drivers/xen/Makefile                    |   1 -
 drivers/xen/balloon.c                   |   4 +-
 drivers/xen/grant-table.c               |   4 +-
 drivers/xen/privcmd.c                   |   4 +-
 drivers/xen/unpopulated-alloc.c         | 200 ------------------------
 drivers/xen/xenbus/xenbus_client.c      |   6 +-
 drivers/xen/xlate_mmu.c                 |   4 +-
 include/xen/xen.h                       |   9 --
 10 files changed, 15 insertions(+), 236 deletions(-)
 delete mode 100644 drivers/xen/unpopulated-alloc.c

diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index 2f464ef2d53e..90945344daae 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -18,7 +18,6 @@
 #include <drm/drm_probe_helper.h>
 
 #include <xen/balloon.h>
-#include <xen/xen.h>
 
 #include "xen_drm_front.h"
 #include "xen_drm_front_gem.h"
@@ -100,8 +99,8 @@ static struct xen_gem_object *gem_create(struct drm_device *dev, size_t size)
 		 * allocate ballooned pages which will be used to map
 		 * grant references provided by the backend
 		 */
-		ret = xen_alloc_unpopulated_pages(xen_obj->num_pages,
-					          xen_obj->pages);
+		ret = alloc_xenballooned_pages(xen_obj->num_pages,
+					       xen_obj->pages);
 		if (ret < 0) {
 			DRM_ERROR("Cannot allocate %zu ballooned pages: %d\n",
 				  xen_obj->num_pages, ret);
@@ -153,8 +152,8 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj)
 	} else {
 		if (xen_obj->pages) {
 			if (xen_obj->be_alloc) {
-				xen_free_unpopulated_pages(xen_obj->num_pages,
-							   xen_obj->pages);
+				free_xenballooned_pages(xen_obj->num_pages,
+							xen_obj->pages);
 				gem_free_pages_array(xen_obj);
 			} else {
 				drm_gem_put_pages(&xen_obj->base,
diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index 41645fe6ad48..ea6c1e7e3e42 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -325,14 +325,4 @@ config XEN_HAVE_VPMU
 config XEN_FRONT_PGDIR_SHBUF
 	tristate
 
-config XEN_UNPOPULATED_ALLOC
-	bool "Use unpopulated memory ranges for guest mappings"
-	depends on X86 && ZONE_DEVICE
-	default XEN_BACKEND || XEN_GNTDEV || XEN_DOM0
-	help
-	  Use unpopulated memory ranges in order to create mappings for guest
-	  memory regions, including grant maps and foreign pages. This avoids
-	  having to balloon out RAM regions in order to obtain physical memory
-	  space to create such mappings.
-
 endmenu
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index babdca808861..c25c9a699b48 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -41,4 +41,3 @@ xen-gntdev-$(CONFIG_XEN_GNTDEV_DMABUF)	+= gntdev-dmabuf.o
 xen-gntalloc-y				:= gntalloc.o
 xen-privcmd-y				:= privcmd.o privcmd-buf.o
 obj-$(CONFIG_XEN_FRONT_PGDIR_SHBUF)	+= xen-front-pgdir-shbuf.o
-obj-$(CONFIG_XEN_UNPOPULATED_ALLOC)	+= unpopulated-alloc.o
diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index b57b2067ecbf..12d3a95bfdb4 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -653,7 +653,7 @@ void free_xenballooned_pages(int nr_pages, struct page **pages)
 }
 EXPORT_SYMBOL(free_xenballooned_pages);
 
-#if defined(CONFIG_XEN_PV) && !defined(CONFIG_XEN_UNPOPULATED_ALLOC)
+#ifdef CONFIG_XEN_PV
 static void __init balloon_add_region(unsigned long start_pfn,
 				      unsigned long pages)
 {
@@ -707,7 +707,7 @@ static int __init balloon_init(void)
 	register_sysctl_table(xen_root);
 #endif
 
-#if defined(CONFIG_XEN_PV) && !defined(CONFIG_XEN_UNPOPULATED_ALLOC)
+#ifdef CONFIG_XEN_PV
 	{
 		int i;
 
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 523dcdf39cc9..8d06bf1cc347 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -801,7 +801,7 @@ int gnttab_alloc_pages(int nr_pages, struct page **pages)
 {
 	int ret;
 
-	ret = xen_alloc_unpopulated_pages(nr_pages, pages);
+	ret = alloc_xenballooned_pages(nr_pages, pages);
 	if (ret < 0)
 		return ret;
 
@@ -836,7 +836,7 @@ EXPORT_SYMBOL_GPL(gnttab_pages_clear_private);
 void gnttab_free_pages(int nr_pages, struct page **pages)
 {
 	gnttab_pages_clear_private(nr_pages, pages);
-	xen_free_unpopulated_pages(nr_pages, pages);
+	free_xenballooned_pages(nr_pages, pages);
 }
 EXPORT_SYMBOL_GPL(gnttab_free_pages);
 
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index b0c73c58f987..63abe6c3642b 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -424,7 +424,7 @@ static int alloc_empty_pages(struct vm_area_struct *vma, int numpgs)
 	if (pages == NULL)
 		return -ENOMEM;
 
-	rc = xen_alloc_unpopulated_pages(numpgs, pages);
+	rc = alloc_xenballooned_pages(numpgs, pages);
 	if (rc != 0) {
 		pr_warn("%s Could not alloc %d pfns rc:%d\n", __func__,
 			numpgs, rc);
@@ -895,7 +895,7 @@ static void privcmd_close(struct vm_area_struct *vma)
 
 	rc = xen_unmap_domain_gfn_range(vma, numgfns, pages);
 	if (rc == 0)
-		xen_free_unpopulated_pages(numpgs, pages);
+		free_xenballooned_pages(numpgs, pages);
 	else
 		pr_crit("unable to unmap MFN range: leaking %d pages. rc=%d\n",
 			numpgs, rc);
diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-alloc.c
deleted file mode 100644
index 8c512ea550bb..000000000000
--- a/drivers/xen/unpopulated-alloc.c
+++ /dev/null
@@ -1,200 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-#include <linux/errno.h>
-#include <linux/gfp.h>
-#include <linux/kernel.h>
-#include <linux/mm.h>
-#include <linux/memremap.h>
-#include <linux/slab.h>
-
-#include <asm/page.h>
-
-#include <xen/page.h>
-#include <xen/xen.h>
-
-static DEFINE_MUTEX(list_lock);
-static LIST_HEAD(page_list);
-static unsigned int list_count;
-
-static int fill_list(unsigned int nr_pages)
-{
-	struct dev_pagemap *pgmap;
-	struct resource *res;
-	void *vaddr;
-	unsigned int i, alloc_pages = round_up(nr_pages, PAGES_PER_SECTION);
-	int ret = -ENOMEM;
-
-	res = kzalloc(sizeof(*res), GFP_KERNEL);
-	if (!res)
-		return -ENOMEM;
-
-	pgmap = kzalloc(sizeof(*pgmap), GFP_KERNEL);
-	if (!pgmap)
-		goto err_pgmap;
-
-	pgmap->type = MEMORY_DEVICE_GENERIC;
-	res->name = "Xen scratch";
-	res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
-
-	ret = allocate_resource(&iomem_resource, res,
-				alloc_pages * PAGE_SIZE, 0, -1,
-				PAGES_PER_SECTION * PAGE_SIZE, NULL, NULL);
-	if (ret < 0) {
-		pr_err("Cannot allocate new IOMEM resource\n");
-		goto err_resource;
-	}
-
-	pgmap->range = (struct range) {
-		.start = res->start,
-		.end = res->end,
-	};
-	pgmap->nr_range = 1;
-	pgmap->owner = res;
-
-#ifdef CONFIG_XEN_HAVE_PVMMU
-        /*
-         * memremap will build page tables for the new memory so
-         * the p2m must contain invalid entries so the correct
-         * non-present PTEs will be written.
-         *
-         * If a failure occurs, the original (identity) p2m entries
-         * are not restored since this region is now known not to
-         * conflict with any devices.
-         */
-	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-		xen_pfn_t pfn = PFN_DOWN(res->start);
-
-		for (i = 0; i < alloc_pages; i++) {
-			if (!set_phys_to_machine(pfn + i, INVALID_P2M_ENTRY)) {
-				pr_warn("set_phys_to_machine() failed, no memory added\n");
-				ret = -ENOMEM;
-				goto err_memremap;
-			}
-                }
-	}
-#endif
-
-	vaddr = memremap_pages(pgmap, NUMA_NO_NODE);
-	if (IS_ERR(vaddr)) {
-		pr_err("Cannot remap memory range\n");
-		ret = PTR_ERR(vaddr);
-		goto err_memremap;
-	}
-
-	for (i = 0; i < alloc_pages; i++) {
-		struct page *pg = virt_to_page(vaddr + PAGE_SIZE * i);
-
-		BUG_ON(!virt_addr_valid(vaddr + PAGE_SIZE * i));
-		list_add(&pg->lru, &page_list);
-		list_count++;
-	}
-
-	return 0;
-
-err_memremap:
-	release_resource(res);
-err_resource:
-	kfree(pgmap);
-err_pgmap:
-	kfree(res);
-	return ret;
-}
-
-/**
- * xen_alloc_unpopulated_pages - alloc unpopulated pages
- * @nr_pages: Number of pages
- * @pages: pages returned
- * @return 0 on success, error otherwise
- */
-int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
-{
-	unsigned int i;
-	int ret = 0;
-
-	mutex_lock(&list_lock);
-	if (list_count < nr_pages) {
-		ret = fill_list(nr_pages - list_count);
-		if (ret)
-			goto out;
-	}
-
-	for (i = 0; i < nr_pages; i++) {
-		struct page *pg = list_first_entry_or_null(&page_list,
-							   struct page,
-							   lru);
-
-		BUG_ON(!pg);
-		list_del(&pg->lru);
-		list_count--;
-		pages[i] = pg;
-
-#ifdef CONFIG_XEN_HAVE_PVMMU
-		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-			ret = xen_alloc_p2m_entry(page_to_pfn(pg));
-			if (ret < 0) {
-				unsigned int j;
-
-				for (j = 0; j <= i; j++) {
-					list_add(&pages[j]->lru, &page_list);
-					list_count++;
-				}
-				goto out;
-			}
-		}
-#endif
-	}
-
-out:
-	mutex_unlock(&list_lock);
-	return ret;
-}
-EXPORT_SYMBOL(xen_alloc_unpopulated_pages);
-
-/**
- * xen_free_unpopulated_pages - return unpopulated pages
- * @nr_pages: Number of pages
- * @pages: pages to return
- */
-void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages)
-{
-	unsigned int i;
-
-	mutex_lock(&list_lock);
-	for (i = 0; i < nr_pages; i++) {
-		list_add(&pages[i]->lru, &page_list);
-		list_count++;
-	}
-	mutex_unlock(&list_lock);
-}
-EXPORT_SYMBOL(xen_free_unpopulated_pages);
-
-#ifdef CONFIG_XEN_PV
-static int __init init(void)
-{
-	unsigned int i;
-
-	if (!xen_domain())
-		return -ENODEV;
-
-	if (!xen_pv_domain())
-		return 0;
-
-	/*
-	 * Initialize with pages from the extra memory regions (see
-	 * arch/x86/xen/setup.c).
-	 */
-	for (i = 0; i < XEN_EXTRA_MEM_MAX_REGIONS; i++) {
-		unsigned int j;
-
-		for (j = 0; j < xen_extra_mem[i].n_pfns; j++) {
-			struct page *pg =
-				pfn_to_page(xen_extra_mem[i].start_pfn + j);
-
-			list_add(&pg->lru, &page_list);
-			list_count++;
-		}
-	}
-
-	return 0;
-}
-subsys_initcall(init);
-#endif
diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
index fd80e318b99c..ef8b6ea8ecca 100644
--- a/drivers/xen/xenbus/xenbus_client.c
+++ b/drivers/xen/xenbus/xenbus_client.c
@@ -618,7 +618,7 @@ static int xenbus_map_ring_hvm(struct xenbus_device *dev,
 	bool leaked = false;
 	unsigned int nr_pages = XENBUS_PAGES(nr_grefs);
 
-	err = xen_alloc_unpopulated_pages(nr_pages, node->hvm.pages);
+	err = alloc_xenballooned_pages(nr_pages, node->hvm.pages);
 	if (err)
 		goto out_err;
 
@@ -659,7 +659,7 @@ static int xenbus_map_ring_hvm(struct xenbus_device *dev,
 			 addr, nr_pages);
  out_free_ballooned_pages:
 	if (!leaked)
-		xen_free_unpopulated_pages(nr_pages, node->hvm.pages);
+		free_xenballooned_pages(nr_pages, node->hvm.pages);
  out_err:
 	return err;
 }
@@ -860,7 +860,7 @@ static int xenbus_unmap_ring_hvm(struct xenbus_device *dev, void *vaddr)
 			       info.addrs);
 	if (!rv) {
 		vunmap(vaddr);
-		xen_free_unpopulated_pages(nr_pages, node->hvm.pages);
+		free_xenballooned_pages(nr_pages, node->hvm.pages);
 	}
 	else
 		WARN(1, "Leaking %p, size %u page(s)\n", vaddr, nr_pages);
diff --git a/drivers/xen/xlate_mmu.c b/drivers/xen/xlate_mmu.c
index 34742c6e189e..7b1077f0abcb 100644
--- a/drivers/xen/xlate_mmu.c
+++ b/drivers/xen/xlate_mmu.c
@@ -232,7 +232,7 @@ int __init xen_xlate_map_ballooned_pages(xen_pfn_t **gfns, void **virt,
 		kfree(pages);
 		return -ENOMEM;
 	}
-	rc = xen_alloc_unpopulated_pages(nr_pages, pages);
+	rc = alloc_xenballooned_pages(nr_pages, pages);
 	if (rc) {
 		pr_warn("%s Couldn't balloon alloc %ld pages rc:%d\n", __func__,
 			nr_pages, rc);
@@ -249,7 +249,7 @@ int __init xen_xlate_map_ballooned_pages(xen_pfn_t **gfns, void **virt,
 	if (!vaddr) {
 		pr_warn("%s Couldn't map %ld pages rc:%d\n", __func__,
 			nr_pages, rc);
-		xen_free_unpopulated_pages(nr_pages, pages);
+		free_xenballooned_pages(nr_pages, pages);
 		kfree(pages);
 		kfree(pfns);
 		return -ENOMEM;
diff --git a/include/xen/xen.h b/include/xen/xen.h
index 43efba045acc..19a72f591e2b 100644
--- a/include/xen/xen.h
+++ b/include/xen/xen.h
@@ -52,13 +52,4 @@ bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
 extern u64 xen_saved_max_mem_size;
 #endif
 
-#ifdef CONFIG_XEN_UNPOPULATED_ALLOC
-int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages);
-void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages);
-#else
-#define xen_alloc_unpopulated_pages alloc_xenballooned_pages
-#define xen_free_unpopulated_pages free_xenballooned_pages
-#include <xen/balloon.h>
-#endif
-
 #endif	/* _XEN_XEN_H */
-- 
2.25.4



From xen-devel-bounces@lists.xenproject.org Sun Dec 06 18:05:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Dec 2020 18:05:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45732.81143 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klyPa-0000qG-3p; Sun, 06 Dec 2020 18:05:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45732.81143; Sun, 06 Dec 2020 18:05:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klyPZ-0000q9-WD; Sun, 06 Dec 2020 18:05:30 +0000
Received: by outflank-mailman (input) for mailman id 45732;
 Sun, 06 Dec 2020 18:05:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hZbG=FK=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1klyPY-0000q4-DF
 for xen-devel@lists.xenproject.org; Sun, 06 Dec 2020 18:05:28 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b2ff1f83-c7cb-4d53-89fa-cd580fa1c340;
 Sun, 06 Dec 2020 18:05:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b2ff1f83-c7cb-4d53-89fa-cd580fa1c340
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1607277926;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=hPEg/xcTyqwf1mlGpj4j/c1lKeAMXPZhJ+mEGlPUrac=;
  b=POgSskPi3pWHvJYa1YQrjisfGMYSmd618LEbIN4kP5ondjC0bcduSiRN
   M90UEVfXASA7CnXHTSGDhYDDXU1Nu9Mi9hrGOx3CAH6iMigD1E1gMetcD
   d9HtuS2OOMVs+uKdQCx2oNbv/Mkdq40dLBLuExMVbfw+oLgyIZBZFh8Qo
   8=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: tv/Uab4Qve+f99T5GDTW5BPOIqiy8je4JaAvyItLKm4NppSbNzyvUXGRS7TtSbsCI9zNztqPZR
 xBq0bqR7FW5emIgLGC4GfpSYceSF8yEl5XG9IZAd079sHA/tytbLxKYChZ/wBnCEXTlhcCFyol
 BpWdztvJayZ/DCIZNyTWs1skxWGEfZGWGokPBxfwInRl5coz+ssk2CvqadfrVU61uIQPs+0WPu
 wm/hWkB3QVti22pKPLrbp+hrypa6ogH8hLp1tPr7q/DCdNtoy+QGN9VwLnfsThmU7HQXe4Fc3u
 UiM=
X-SBRS: 5.1
X-MesageID: 32638347
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,397,1599537600"; 
   d="scan'208";a="32638347"
From: Igor Druzhinin <igor.druzhinin@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: <andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>,
	<iwj@xenproject.org>, <jbeulich@suse.com>, <julien@xen.org>,
	<sstabellini@kernel.org>, <wl@xen.org>, <roger.pau@citrix.com>, "Igor
 Druzhinin" <igor.druzhinin@citrix.com>
Subject: [PATCH v3 2/2] x86/IRQ: allocate guest array of max size only for shareable IRQs
Date: Sun, 6 Dec 2020 17:43:07 +0000
Message-ID: <1607276587-19231-2-git-send-email-igor.druzhinin@citrix.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1607276587-19231-1-git-send-email-igor.druzhinin@citrix.com>
References: <1607276587-19231-1-git-send-email-igor.druzhinin@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

... and increase default "irq-max-guests" to 32.

There is no need for an array larger than 1 for non-shareable IRQs, and
allocating the full-size array for every IRQ might hurt scalability when
high "irq-max-guests" values are used - every IRQ in the system, including
MSIs, would otherwise be supplied with an array of that size.

Since higher "irq-max-guests" values are now less costly, bump the default
to 32. That should give more headroom for future systems.

Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
---

New in v2.
Based on Jan's suggestion.

Changes in v3:
- almost none since I prefer the clarity of the code as is

---
 docs/misc/xen-command-line.pandoc | 2 +-
 xen/arch/x86/irq.c                | 7 ++++---
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 53e676b..f7db2b6 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1644,7 +1644,7 @@ This option is ignored in **pv-shim** mode.
 ### irq-max-guests (x86)
 > `= <integer>`
 
-> Default: `16`
+> Default: `32`
 
 Maximum number of guests any individual IRQ could be shared between,
 i.e. a limit on the number of guests it is possible to start each having
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 4fd0578..a088818 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -440,7 +440,7 @@ int __init init_irq_data(void)
         irq_to_desc(irq)->irq = irq;
 
     if ( !irq_max_guests )
-        irq_max_guests = 16;
+        irq_max_guests = 32;
 
 #ifdef CONFIG_PV
     /* Never allocate the hypercall vector or Linux/BSD fast-trap vector. */
@@ -1540,6 +1540,7 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
     unsigned int        irq;
     struct irq_desc         *desc;
     irq_guest_action_t *action, *newaction = NULL;
+    unsigned char       max_nr_guests = will_share ? irq_max_guests : 1;
     int                 rc = 0;
 
     WARN_ON(!spin_is_locked(&v->domain->event_lock));
@@ -1571,7 +1572,7 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
         {
             spin_unlock_irq(&desc->lock);
             if ( (newaction = xmalloc_flex_struct(irq_guest_action_t, guest,
-                                                  irq_max_guests)) != NULL &&
+                                                  max_nr_guests)) != NULL &&
                  zalloc_cpumask_var(&newaction->cpu_eoi_map) )
                 goto retry;
             xfree(newaction);
@@ -1640,7 +1641,7 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
         goto retry;
     }
 
-    if ( action->nr_guests == irq_max_guests )
+    if ( action->nr_guests >= max_nr_guests )
     {
         printk(XENLOG_G_INFO
                "Cannot bind IRQ%d to dom%pd: already at max share %u ",
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Sun Dec 06 18:07:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Dec 2020 18:07:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45737.81155 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klyRy-0000zx-KE; Sun, 06 Dec 2020 18:07:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45737.81155; Sun, 06 Dec 2020 18:07:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klyRy-0000zq-Gm; Sun, 06 Dec 2020 18:07:58 +0000
Received: by outflank-mailman (input) for mailman id 45737;
 Sun, 06 Dec 2020 18:07:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hZbG=FK=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1klyRw-0000zg-Gh
 for xen-devel@lists.xenproject.org; Sun, 06 Dec 2020 18:07:56 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 946a5f03-3572-4cb1-900d-f03293bcd297;
 Sun, 06 Dec 2020 18:07:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 946a5f03-3572-4cb1-900d-f03293bcd297
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1607278074;
  h=from:to:cc:subject:date:message-id:mime-version;
  bh=MQBlZ1unAHl3xhYSGHiFx1nWofa1ttfc/pz0nrz8Db0=;
  b=Fmg3tmX0n1QiDLMXdFUHqFO9XgQiiFvYJLiVYaVYW/f851RZjdtq+eQY
   d+a1rDvdsVjbuw0AYKQQ2OA+CkMSZLYk6fX5ejPzJLzHZKnw5enCk+pF0
   w5QEpfBR0yU7pnWWN1GVo/hzk66R5pTLcByZhfKBbOhYoWDLtd+4xPnx2
   4=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: JuzEmUbSliWrhIawRwrK/2vFOaFvg5JFBZDXYUxf66XViuTVgotwRByk/7ZzwKhvA9oNG1szJ8
 8B0i/V812EmqnZiqxKzCmFjWpiTj9LohTm3Nxs448O124n4zXk95gx0WNaoj/omA2j7yVPf5I5
 EQCFgu3scYXcNMdSKi9Zn8KAvmmCOu4CfNwghZx7WvA3EREvmYavy1gS3yh57sXhAlaRWcY2xM
 AzCNIegbylTG+wev7P4dtW1U2+zdaFdp8cFVNYTd7QXkBDVt6zWzaaaAu2rUvDle3XXgrdaZNo
 s9E=
X-SBRS: 5.1
X-MesageID: 32966870
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,397,1599537600"; 
   d="scan'208";a="32966870"
From: Igor Druzhinin <igor.druzhinin@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: <andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>,
	<iwj@xenproject.org>, <jbeulich@suse.com>, <julien@xen.org>,
	<sstabellini@kernel.org>, <wl@xen.org>, <roger.pau@citrix.com>, "Igor
 Druzhinin" <igor.druzhinin@citrix.com>
Subject: [PATCH v3 1/2] x86/IRQ: make max number of guests for a shared IRQ configurable
Date: Sun, 6 Dec 2020 17:43:06 +0000
Message-ID: <1607276587-19231-1-git-send-email-igor.druzhinin@citrix.com>
X-Mailer: git-send-email 2.7.4
MIME-Version: 1.0
Content-Type: text/plain

... and increase the default to 16.

The current limit of 7 is too restrictive for modern systems, where one GSI
could be shared by potentially many PCI INTx sources, each corresponding to
a device passed through to its own guest. Some systems do not apply due
diligence in swizzling INTx links, e.g. when INTA is declared as the
interrupt pin for the majority of PCI devices behind a single router,
resulting in overuse of a GSI.

Introduce a new command line option to configure that limit and dynamically
allocate an array of the necessary size. Set the default to 16 for now;
that is higher than 7 and could be raised further later if necessary.

Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
---

Changes in v2:
- introduced a command line option as suggested
- set initial default limit to 16

Changes in v3:
- changed option name to use '-' instead of '_'
- used uchar instead of uint to utilize integer_param overflow handling logic
- used xmalloc_flex_struct
- restructured printk as suggested

---
 docs/misc/xen-command-line.pandoc | 10 ++++++++++
 xen/arch/x86/irq.c                | 22 +++++++++++++++-------
 2 files changed, 25 insertions(+), 7 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index b4a0d60..53e676b 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1641,6 +1641,16 @@ This option is ignored in **pv-shim** mode.
 ### nr_irqs (x86)
 > `= <integer>`
 
+### irq-max-guests (x86)
+> `= <integer>`
+
+> Default: `16`
+
+Maximum number of guests any individual IRQ could be shared between,
+i.e. a limit on the number of guests it is possible to start each having
+assigned a device sharing a common interrupt line.  Accepts values between
+1 and 255.
+
 ### numa (x86)
 > `= on | off | fake=<integer> | noacpi`
 
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 8d1f9a9..4fd0578 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -42,6 +42,10 @@ integer_param("nr_irqs", nr_irqs);
 int __read_mostly opt_irq_vector_map = OPT_IRQ_VECTOR_MAP_DEFAULT;
 custom_param("irq_vector_map", parse_irq_vector_map_param);
 
+/* Max number of guests IRQ could be shared with */
+static unsigned char __read_mostly irq_max_guests;
+integer_param("irq-max-guests", irq_max_guests);
+
 vmask_t global_used_vector_map;
 
 struct irq_desc __read_mostly *irq_desc = NULL;
@@ -435,6 +439,9 @@ int __init init_irq_data(void)
     for ( ; irq < nr_irqs; irq++ )
         irq_to_desc(irq)->irq = irq;
 
+    if ( !irq_max_guests )
+        irq_max_guests = 16;
+
 #ifdef CONFIG_PV
     /* Never allocate the hypercall vector or Linux/BSD fast-trap vector. */
     set_bit(LEGACY_SYSCALL_VECTOR, used_vectors);
@@ -1028,7 +1035,6 @@ int __init setup_irq(unsigned int irq, unsigned int irqflags,
  * HANDLING OF GUEST-BOUND PHYSICAL IRQS
  */
 
-#define IRQ_MAX_GUESTS 7
 typedef struct {
     u8 nr_guests;
     u8 in_flight;
@@ -1039,7 +1045,7 @@ typedef struct {
 #define ACKTYPE_EOI    2     /* EOI on the CPU that was interrupted  */
     cpumask_var_t cpu_eoi_map; /* CPUs that need to EOI this interrupt */
     struct timer eoi_timer;
-    struct domain *guest[IRQ_MAX_GUESTS];
+    struct domain *guest[];
 } irq_guest_action_t;
 
 /*
@@ -1564,7 +1570,8 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
         if ( newaction == NULL )
         {
             spin_unlock_irq(&desc->lock);
-            if ( (newaction = xmalloc(irq_guest_action_t)) != NULL &&
+            if ( (newaction = xmalloc_flex_struct(irq_guest_action_t, guest,
+                                                  irq_max_guests)) != NULL &&
                  zalloc_cpumask_var(&newaction->cpu_eoi_map) )
                 goto retry;
             xfree(newaction);
@@ -1633,11 +1640,12 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
         goto retry;
     }
 
-    if ( action->nr_guests == IRQ_MAX_GUESTS )
+    if ( action->nr_guests == irq_max_guests )
     {
-        printk(XENLOG_G_INFO "Cannot bind IRQ%d to dom%d. "
-               "Already at max share.\n",
-               pirq->pirq, v->domain->domain_id);
+        printk(XENLOG_G_INFO
+               "Cannot bind IRQ%d to dom%pd: already at max share %u ",
+               pirq->pirq, v->domain, irq_max_guests);
+        printk("(increase with irq-max-guests= option)\n");
         rc = -EBUSY;
         goto unlock_out;
     }
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Sun Dec 06 18:55:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Dec 2020 18:55:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45754.81172 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klzBp-0005bO-FN; Sun, 06 Dec 2020 18:55:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45754.81172; Sun, 06 Dec 2020 18:55:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klzBp-0005bH-CH; Sun, 06 Dec 2020 18:55:21 +0000
Received: by outflank-mailman (input) for mailman id 45754;
 Sun, 06 Dec 2020 18:55:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PoKi=FK=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1klzBn-0005bC-Sq
 for xen-devel@lists.xenproject.org; Sun, 06 Dec 2020 18:55:20 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 313033da-c4fb-4255-ab08-62eeb1e8d5fd;
 Sun, 06 Dec 2020 18:55:16 +0000 (UTC)
Received: from mail-wm1-f71.google.com (mail-wm1-f71.google.com
 [209.85.128.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-580-mF2-PuzSM4G6JWuqSeM5Ag-1; Sun, 06 Dec 2020 13:55:13 -0500
Received: by mail-wm1-f71.google.com with SMTP id o203so3252682wmo.3
 for <xen-devel@lists.xenproject.org>; Sun, 06 Dec 2020 10:55:13 -0800 (PST)
Received: from localhost.localdomain (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id a62sm4051738wmh.40.2020.12.06.10.55.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 06 Dec 2020 10:55:10 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 313033da-c4fb-4255-ab08-62eeb1e8d5fd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607280916;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=oz3755FMu8d1x8s9I95QQZsr6H/EfAu6k1jRrCig21A=;
	b=XyghoQpJOR7foFreAGe7UDE001khSSSlAROjw6uvCKkckAFGGEcV5IPsSmM7gGJmntHc6M
	2Sf9i2HM5letzyg2Zn0pLuaQGVKTVfL+zG5Ds6wGSy8h2+GR/WWanJ1kkbf4b2igTxKCV3
	tzHo0duse6WOtB4RdKMpiUNeatJ4BYU=
X-MC-Unique: mF2-PuzSM4G6JWuqSeM5Ag-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=5L9zYVEk7UDV9e8J6jUWmDLNneFBSRjTDjE9eeZGMJ4=;
        b=lsA7Zn4glSovlTTSegGdRcOaJUw+RxxcSWf+oMEkweHDGZ3BQDvmC7NIM1IvX7h4iO
         Fq1qOki4N1fdaGZgZ2UwOXCcA7vmeRwC8e7LRAr69vkV/aVGEcLOX3CAKTxjDooZ/wpa
         mK3xOyzDFmrv3O/iSg6PnxFEDZUQClKZ2o9FSqFxlw3vW4/3WVNo0uqM3ZCHrA2BJyod
         v5vBjF02maABIF5Y9lh8q0FU0/QzqfLMQnhm9su6O9VxaWxw31Z8C8872OwbiNa4MUmD
         9Wv2n0oXgsXHpa59/Ytk4OuPD/Iln6YIseYyikUpdjeJtb3Quf40+5sj/+9xf+9FvnjL
         IIeA==
X-Gm-Message-State: AOAM532AlsoHGrXNRdT7HHvMwVn8Ur1bkJDjzXbNM851vJgaJIZwujwq
	0Xs1nbKj5HvwuawzxpbMblLqKIaVvpb7ZqIxNQdq6zCMvKOLFKVRM1pt7xcZfmnoR0Ujv7s5p31
	66wJOeZzbOu+Rx9cJvTRQp2ZXRIE=
X-Received: by 2002:a1c:6056:: with SMTP id u83mr14528865wmb.90.1607280911972;
        Sun, 06 Dec 2020 10:55:11 -0800 (PST)
X-Google-Smtp-Source: ABdhPJyCSZGi5rbiGoVRoGKRNGTcTbBvyODVVSoiL6GLaxlOiqbwRFYQlZdlM1E+jjVm6ac4G5YFvw==
X-Received: by 2002:a1c:6056:: with SMTP id u83mr14528834wmb.90.1607280911735;
        Sun, 06 Dec 2020 10:55:11 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	Wainer dos Santos Moschetta <wainersm@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	Willian Rampazzo <wrampazz@redhat.com>,
	Paul Durrant <paul@xen.org>,
	Huacai Chen <chenhc@lemote.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Claudio Fontana <cfontana@suse.de>,
	Halil Pasic <pasic@linux.ibm.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	David Gibson <david@gibson.dropbear.id.au>,
	Thomas Huth <thuth@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	qemu-s390x@nongnu.org,
	Aurelien Jarno <aurelien@aurel32.net>,
	qemu-arm@nongnu.org
Subject: [PATCH 0/8] gitlab-ci: Add accelerator-specific Linux jobs
Date: Sun,  6 Dec 2020 19:55:00 +0100
Message-Id: <20201206185508.3545711-1-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

Hi,

I was accustomed to using Travis-CI for testing KVM builds on s390x/ppc
with the Travis-CI jobs.

During October Travis-CI became unusable for me (extremely slow,
see [1]). Then my free Travis account got updated to the new
"10K credit minutes allotment" [2], which I burned without reading
the notification email in time (I'd have burned them eventually anyway).

Today Travis-CI is pointless to me. While I could pay to run my
QEMU jobs, I don't think it is fair for an Open Source project to
ask its forks to pay for a service.

As we want forks to run some CI before contributing patches, and=0D
we have cross-build Docker images available for Linux hosts, I=0D
added some cross KVM/Xen build jobs to Gitlab-CI.=0D
=0D
Cross-building doesn't have the same coverage as native building,=0D
as we can not run the tests. But this is still useful to get link=0D
failures.=0D
=0D
Each job is added in its own YAML file, so it is easier to notify=0D
subsystem maintainers in case of troubles.=0D
=0D
Resulting pipeline:=0D
https://gitlab.com/philmd/qemu/-/pipelines/225948077=0D
=0D
Regards,=0D
=0D
Phil.=0D
=0D
[1] https://travis-ci.community/t/build-delays-for-open-source-project/1027=
2=0D
[2] https://blog.travis-ci.com/2020-11-02-travis-ci-new-billing=0D
=0D
Philippe Mathieu-Daudé (8):
  gitlab-ci: Replace YAML anchors by extends (cross_system_build_job)
  gitlab-ci: Introduce 'cross_accel_build_job' template
  gitlab-ci: Add KVM X86 cross-build jobs
  gitlab-ci: Add KVM ARM cross-build jobs
  gitlab-ci: Add KVM s390x cross-build jobs
  gitlab-ci: Add KVM PPC cross-build jobs
  gitlab-ci: Add KVM MIPS cross-build jobs
  gitlab-ci: Add Xen cross-build jobs

 .gitlab-ci.d/crossbuilds-kvm-arm.yml   |  5 +++
 .gitlab-ci.d/crossbuilds-kvm-mips.yml  |  5 +++
 .gitlab-ci.d/crossbuilds-kvm-ppc.yml   |  5 +++
 .gitlab-ci.d/crossbuilds-kvm-s390x.yml |  6 +++
 .gitlab-ci.d/crossbuilds-kvm-x86.yml   |  6 +++
 .gitlab-ci.d/crossbuilds-xen.yml       | 14 +++++++
 .gitlab-ci.d/crossbuilds.yml           | 52 ++++++++++++++++----------
 .gitlab-ci.yml                         |  6 +++
 MAINTAINERS                            |  6 +++
 9 files changed, 85 insertions(+), 20 deletions(-)
 create mode 100644 .gitlab-ci.d/crossbuilds-kvm-arm.yml
 create mode 100644 .gitlab-ci.d/crossbuilds-kvm-mips.yml
 create mode 100644 .gitlab-ci.d/crossbuilds-kvm-ppc.yml
 create mode 100644 .gitlab-ci.d/crossbuilds-kvm-s390x.yml
 create mode 100644 .gitlab-ci.d/crossbuilds-kvm-x86.yml
 create mode 100644 .gitlab-ci.d/crossbuilds-xen.yml

-- 
2.26.2




From xen-devel-bounces@lists.xenproject.org Sun Dec 06 18:55:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Dec 2020 18:55:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45755.81185 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klzBq-0005cL-OS; Sun, 06 Dec 2020 18:55:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45755.81185; Sun, 06 Dec 2020 18:55:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klzBq-0005cE-Kz; Sun, 06 Dec 2020 18:55:22 +0000
Received: by outflank-mailman (input) for mailman id 45755;
 Sun, 06 Dec 2020 18:55:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PoKi=FK=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1klzBp-0005bW-J3
 for xen-devel@lists.xenproject.org; Sun, 06 Dec 2020 18:55:21 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id ff6edc66-9128-4fcf-9263-2bed66b4cdd6;
 Sun, 06 Dec 2020 18:55:20 +0000 (UTC)
Received: from mail-wm1-f71.google.com (mail-wm1-f71.google.com
 [209.85.128.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-431-YPwrwj12MxSAPXxn5W2S1g-1; Sun, 06 Dec 2020 13:55:18 -0500
Received: by mail-wm1-f71.google.com with SMTP id a134so3245209wmd.8
 for <xen-devel@lists.xenproject.org>; Sun, 06 Dec 2020 10:55:18 -0800 (PST)
Received: from localhost.localdomain (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id v7sm11353163wma.26.2020.12.06.10.55.15
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 06 Dec 2020 10:55:16 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ff6edc66-9128-4fcf-9263-2bed66b4cdd6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607280920;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=J0caE/rg850u7bwr+Q+jKihpcnbP+o9Z+fIMhlSMarg=;
	b=P4Z0ATzxM3uQDdwrsK6NPVownN6Vh+jBf4n3ODAK1USO/vJkWtEq1fS3XVAStP9lHUi67h
	DXQhgJabcFgtRXmBgqWLIvkq4K7qDiYSc9Ir4b2KnA2+74GOqu4ghOJFX4Ov6GhvlNy0mq
	EOgAjr9IyO3oJOxUL+q/I8fiNahWczg=
X-MC-Unique: YPwrwj12MxSAPXxn5W2S1g-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=J0caE/rg850u7bwr+Q+jKihpcnbP+o9Z+fIMhlSMarg=;
        b=ZGstSD7YyPdWZ/VfsSAxsPKxuZG1rtsA+6Fi+v1almZpxYYZUHfQJvX6mUnPh5AUR2
         +uqBdOM9zwUcoDShmNbOf9XoU++eRWRkje/xvk5OIE5AzPEH1CMxVW9KqmhHFkAwrfwE
         jO0pOQ4FGk9K784gsQ+isSTfR1mR3IkrdNEqdXXTtnuS09s/PAyzaiNvcnMnZeQirX1y
         5+sXc1HGg6dza7hVxXEOIGt7GZyfC5GBR7oJEwPobZrGc7QIhpz366xtxZBNPc/O+zfl
         bATIX8+NE/9W6hsDmhmOM0wjlpMLeeF2ckgwxb04mkiYSMfxwqvyNOmzPDUr5Dn+/W38
         KfZQ==
X-Gm-Message-State: AOAM532AYSGe16zKXYzI3zbhGOM3zSbSYH/eNq0ZYN8HUkUerrsJxfeH
	C96i9x3G0y4gZeDMNax3XnrjmCnNnpDuVuUR4CxWVSe3xt3OXOCNXbLaLjhK/yUyPH+rsYB3jzz
	ToB/7JxdIRzpPuL7DFPMCCtjVHq8=
X-Received: by 2002:a5d:6447:: with SMTP id d7mr15842847wrw.96.1607280917541;
        Sun, 06 Dec 2020 10:55:17 -0800 (PST)
X-Google-Smtp-Source: ABdhPJy5Mv3+kYooSf15r+r2aJLz/Qnqkoua9wFQmOH5L1QMkE6WgvTm77oljoy/QdNzxy3aicjEtQ==
X-Received: by 2002:a5d:6447:: with SMTP id d7mr15842818wrw.96.1607280917387;
        Sun, 06 Dec 2020 10:55:17 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	Wainer dos Santos Moschetta <wainersm@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	Willian Rampazzo <wrampazz@redhat.com>,
	Paul Durrant <paul@xen.org>,
	Huacai Chen <chenhc@lemote.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Claudio Fontana <cfontana@suse.de>,
	Halil Pasic <pasic@linux.ibm.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	David Gibson <david@gibson.dropbear.id.au>,
	Thomas Huth <thuth@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	qemu-s390x@nongnu.org,
	Aurelien Jarno <aurelien@aurel32.net>,
	qemu-arm@nongnu.org
Subject: [PATCH 1/8] gitlab-ci: Replace YAML anchors by extends (cross_system_build_job)
Date: Sun,  6 Dec 2020 19:55:01 +0100
Message-Id: <20201206185508.3545711-2-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201206185508.3545711-1-philmd@redhat.com>
References: <20201206185508.3545711-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

'extends' is an alternative to using YAML anchors
and is a little more flexible and readable. See:
https://docs.gitlab.com/ee/ci/yaml/#extends

More importantly, it allows splitting jobs across multiple
YAML files, which anchors cannot do.
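For illustration (not part of the diff below, and using made-up job and
image names), the two mechanisms compare as follows:

```yaml
# Before: a YAML anchor, only resolvable within the same file.
.build_template: &build_definition
  stage: build
  script:
    - make

job-with-anchor:
  <<: *build_definition      # merge the anchored keys in place
  variables:
    IMAGE: example-image

# After: 'extends' looks the hidden job up by name, so it also
# works when the template lives in another included file.
.build_job:
  stage: build
  script:
    - make

job-with-extends:
  extends: .build_job
  variables:
    IMAGE: example-image
```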

Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 .gitlab-ci.d/crossbuilds.yml | 40 ++++++++++++++++++------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
index 03ebfabb3fa..099949aaef3 100644
--- a/.gitlab-ci.d/crossbuilds.yml
+++ b/.gitlab-ci.d/crossbuilds.yml
@@ -1,5 +1,5 @@
 
-.cross_system_build_job_template: &cross_system_build_job_definition
+.cross_system_build_job:
   stage: build
   image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
   timeout: 80m
@@ -13,7 +13,7 @@
           xtensa-softmmu"
     - make -j$(expr $(nproc) + 1) all check-build
 
-.cross_user_build_job_template: &cross_user_build_job_definition
+.cross_user_build_job:
   stage: build
   image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
   script:
@@ -24,91 +24,91 @@
     - make -j$(expr $(nproc) + 1) all check-build
 
 cross-armel-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: debian-armel-cross
 
 cross-armel-user:
-  <<: *cross_user_build_job_definition
+  extends: .cross_user_build_job
   variables:
     IMAGE: debian-armel-cross
 
 cross-armhf-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: debian-armhf-cross
 
 cross-armhf-user:
-  <<: *cross_user_build_job_definition
+  extends: .cross_user_build_job
   variables:
     IMAGE: debian-armhf-cross
 
 cross-arm64-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: debian-arm64-cross
 
 cross-arm64-user:
-  <<: *cross_user_build_job_definition
+  extends: .cross_user_build_job
   variables:
     IMAGE: debian-arm64-cross
 
 cross-mips-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: debian-mips-cross
 
 cross-mips-user:
-  <<: *cross_user_build_job_definition
+  extends: .cross_user_build_job
   variables:
     IMAGE: debian-mips-cross
 
 cross-mipsel-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: debian-mipsel-cross
 
 cross-mipsel-user:
-  <<: *cross_user_build_job_definition
+  extends: .cross_user_build_job
   variables:
     IMAGE: debian-mipsel-cross
 
 cross-mips64el-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: debian-mips64el-cross
 
 cross-mips64el-user:
-  <<: *cross_user_build_job_definition
+  extends: .cross_user_build_job
   variables:
     IMAGE: debian-mips64el-cross
 
 cross-ppc64el-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: debian-ppc64el-cross
 
 cross-ppc64el-user:
-  <<: *cross_user_build_job_definition
+  extends: .cross_user_build_job
   variables:
     IMAGE: debian-ppc64el-cross
 
 cross-s390x-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: debian-s390x-cross
 
 cross-s390x-user:
-  <<: *cross_user_build_job_definition
+  extends: .cross_user_build_job
   variables:
     IMAGE: debian-s390x-cross
 
 cross-win32-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: fedora-win32-cross
 
 cross-win64-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: fedora-win64-cross
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Sun Dec 06 18:55:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Dec 2020 18:55:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45756.81197 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klzBw-0005f1-1T; Sun, 06 Dec 2020 18:55:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45756.81197; Sun, 06 Dec 2020 18:55:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klzBv-0005et-T3; Sun, 06 Dec 2020 18:55:27 +0000
Received: by outflank-mailman (input) for mailman id 45756;
 Sun, 06 Dec 2020 18:55:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PoKi=FK=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1klzBu-0005bW-L3
 for xen-devel@lists.xenproject.org; Sun, 06 Dec 2020 18:55:26 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 9e4ae0b5-f640-46e1-9212-963813a10a1a;
 Sun, 06 Dec 2020 18:55:26 +0000 (UTC)
Received: from mail-wm1-f71.google.com (mail-wm1-f71.google.com
 [209.85.128.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-151-k4hPAIXZONqWfVhOYJ2A7A-1; Sun, 06 Dec 2020 13:55:24 -0500
Received: by mail-wm1-f71.google.com with SMTP id z12so4305120wmf.9
 for <xen-devel@lists.xenproject.org>; Sun, 06 Dec 2020 10:55:23 -0800 (PST)
Received: from localhost.localdomain (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id m8sm11324488wmc.27.2020.12.06.10.55.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 06 Dec 2020 10:55:22 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e4ae0b5-f640-46e1-9212-963813a10a1a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607280925;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=r0Lqr22l7ATKhofbc7EdyOSapxnrV+ynGfxlD6q9jNk=;
	b=SSnMWYCsfN1oznTMN5e15j3msdb78/4wuJHDZJ3IjSJEqQglO8DyMUIEBmhwVjg/Epa5tu
	6qhnX6SOH82ORSDjgM9XAZkp1VinVgATzFdXRr+OEq6XSgAhS1CuCCzhgYu4jgniL6t3Vy
	6vPfZEEvJ0lreixQMVUmvjlz1T0R7Fc=
X-MC-Unique: k4hPAIXZONqWfVhOYJ2A7A-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=r0Lqr22l7ATKhofbc7EdyOSapxnrV+ynGfxlD6q9jNk=;
        b=QR/y8tPSng0WSDfnSMIkbg4AdA06u61Lm85CdOvA6fhXZ4BXuvK3GOlGgE/Ppdh42n
         iJORxAKPKeiD3xpZS5+srmTIfgZP+2EhQd5gSYPVAcDdYB9qK/gQLCCJ5TrsM2S4FTtO
         NSinYqZgx1JZnriB2V8PePHCzHjYX3vFh00GwzaKOqLfbjLlxtTUQoVV7bWoirG7hclX
         v7UjTYgawLWYUOaLci3V2Lw9ZNLHIrEWFUnEq+r7UWhBMEKUxZalZYOhB4is+8y0yKJa
         wAvReuhU5ok0i7rgO/P/CJXGJnMMi8q39t5wg5Nl2AiTHy4A0vSwPt5QntFx9TRC2y6W
         0noA==
X-Gm-Message-State: AOAM533ZbU1aRWkCHdWBXOY93P7FW+sRamOVB14RR/sAYoBf+2gpqou0
	CwmMCFkehMlcuq0glCJ/6LgXAg/5PVwhusxbAa+NEnV2CG/gZwkVj8Ksa4pTZOrb/UUI8xw1E20
	amPRqMdVCMxMT6/BKddaE4djc0a4=
X-Received: by 2002:a1c:e084:: with SMTP id x126mr14748217wmg.109.1607280923046;
        Sun, 06 Dec 2020 10:55:23 -0800 (PST)
X-Google-Smtp-Source: ABdhPJzUdCKKWJAWPI622MaP/oFwVO6CryrIW/lu5gCYVi0AVbbug+7fy7NlRBDMxgrTLJyAtgWuOA==
X-Received: by 2002:a1c:e084:: with SMTP id x126mr14748201wmg.109.1607280922898;
        Sun, 06 Dec 2020 10:55:22 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	Wainer dos Santos Moschetta <wainersm@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	Willian Rampazzo <wrampazz@redhat.com>,
	Paul Durrant <paul@xen.org>,
	Huacai Chen <chenhc@lemote.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Claudio Fontana <cfontana@suse.de>,
	Halil Pasic <pasic@linux.ibm.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	David Gibson <david@gibson.dropbear.id.au>,
	Thomas Huth <thuth@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	qemu-s390x@nongnu.org,
	Aurelien Jarno <aurelien@aurel32.net>,
	qemu-arm@nongnu.org
Subject: [PATCH 2/8] gitlab-ci: Introduce 'cross_accel_build_job' template
Date: Sun,  6 Dec 2020 19:55:02 +0100
Message-Id: <20201206185508.3545711-3-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201206185508.3545711-1-philmd@redhat.com>
References: <20201206185508.3545711-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Introduce a job template for cross-building accelerator-specific
jobs (enabling one specific accelerator and disabling the others).
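For illustration, later patches in this series use the template like
this (example taken from the KVM ARM patch of this series):

```yaml
# ACCEL is not set here, so the template's --enable-${ACCEL:-kvm}
# defaults to enabling KVM.
cross-arm64-kvm:
  extends: .cross_accel_build_job
  variables:
    IMAGE: debian-arm64-cross
    TARGETS: aarch64-softmmu
```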

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 .gitlab-ci.d/crossbuilds.yml | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
index 099949aaef3..be63b209c5b 100644
--- a/.gitlab-ci.d/crossbuilds.yml
+++ b/.gitlab-ci.d/crossbuilds.yml
@@ -13,6 +13,18 @@
           xtensa-softmmu"
     - make -j$(expr $(nproc) + 1) all check-build
 
+.cross_accel_build_job:
+  stage: build
+  image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
+  timeout: 30m
+  script:
+    - mkdir build
+    - cd build
+    - PKG_CONFIG_PATH=$PKG_CONFIG_PATH
+      ../configure --enable-werror $QEMU_CONFIGURE_OPTS --disable-tools
+        --enable-${ACCEL:-kvm} --target-list="$TARGETS" $ACCEL_CONFIGURE_OPTS
+    - make -j$(expr $(nproc) + 1) all check-build
+
 .cross_user_build_job:
   stage: build
   image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Sun Dec 06 18:55:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Dec 2020 18:55:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45757.81209 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klzC2-0005jh-A2; Sun, 06 Dec 2020 18:55:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45757.81209; Sun, 06 Dec 2020 18:55:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klzC2-0005jY-65; Sun, 06 Dec 2020 18:55:34 +0000
Received: by outflank-mailman (input) for mailman id 45757;
 Sun, 06 Dec 2020 18:55:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PoKi=FK=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1klzC0-0005ic-Lj
 for xen-devel@lists.xenproject.org; Sun, 06 Dec 2020 18:55:32 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id fa5d5863-28a5-4171-8c2d-6245d167b2af;
 Sun, 06 Dec 2020 18:55:31 +0000 (UTC)
Received: from mail-wm1-f71.google.com (mail-wm1-f71.google.com
 [209.85.128.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-92-zS-F1DX3OMGr4O9orLkbUg-1; Sun, 06 Dec 2020 13:55:30 -0500
Received: by mail-wm1-f71.google.com with SMTP id r5so4360870wma.2
 for <xen-devel@lists.xenproject.org>; Sun, 06 Dec 2020 10:55:29 -0800 (PST)
Received: from localhost.localdomain (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id i5sm12530329wrw.45.2020.12.06.10.55.26
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 06 Dec 2020 10:55:28 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa5d5863-28a5-4171-8c2d-6245d167b2af
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607280931;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zj+VAsR2ZoGWNnB8pF1RFhJWmR0T2bfs3Brc/xrE4Jk=;
	b=eOjZ2gvY9aHksYNUJDp52FT47A5P6K4RubxOFKoEd9HpdjDLpHaWtEioazmo8xDuzBtTI+
	Yr1kKKbRgBovNZ4Qfi8Zd0MoI94kziJAyqLrHfDD1jrNKm9zs8d/k/kUpOx7zwlMr+MK12
	Je077ACTFDNQmt35UwBDGJaLxibUXQA=
X-MC-Unique: zS-F1DX3OMGr4O9orLkbUg-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=zj+VAsR2ZoGWNnB8pF1RFhJWmR0T2bfs3Brc/xrE4Jk=;
        b=XyI+UkARtPK1QFUZcgLv5Q/89csM6Y4NGgwq3gG8sYto/yyuq7gv4DYeRe0lp1zIS6
         pOfFnH5QtI8jlEkgirecbW2p38HJPZnyeJ5Y4ySbLjtXZOoeKDLsErAF+kH9guHayJ9J
         2RHv+OTBKQkVyp/aFruJpHHESlMtGYGTmgPFr1ZGd3juXRG15eqxRJptGdaL0EuMX4tx
         bIpJyUuxxSgleDd1DgAuNEGTyaj7sv6mtgKd7yPY0Mkt//LavVJgUSxu5Cdr0g+OfL7v
         UD/wGndrnZ482a9Jl5iHlczrdRh0PA++e5mS6vz9HoQ2LU0F9v0enJ1RBxQDcJHPa/Nk
         MUBw==
X-Gm-Message-State: AOAM533fq4CrCt7ISCguoM9cNX0cs0epbapd5A8fCvIRNxbUtyDPQl1A
	bWPjlQ8TI/iGYuAKMTeYBi3X3RNBM+yR0zYC88ASu6Cpr+xtAagLhqVIhnedc+sQNqcafglkFkk
	GdKnap0W403jkM7ygEuWbmXiJtuc=
X-Received: by 2002:a1c:a501:: with SMTP id o1mr9847440wme.44.1607280928821;
        Sun, 06 Dec 2020 10:55:28 -0800 (PST)
X-Google-Smtp-Source: ABdhPJyjfkR9jOQCSNmbsQLisARRNPGgWAcGrUN24/vbSdXT+mIj5Zt7/jygW8L9ukGbdalO0fSF2A==
X-Received: by 2002:a1c:a501:: with SMTP id o1mr9847417wme.44.1607280928628;
        Sun, 06 Dec 2020 10:55:28 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	Wainer dos Santos Moschetta <wainersm@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	Willian Rampazzo <wrampazz@redhat.com>,
	Paul Durrant <paul@xen.org>,
	Huacai Chen <chenhc@lemote.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Claudio Fontana <cfontana@suse.de>,
	Halil Pasic <pasic@linux.ibm.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	David Gibson <david@gibson.dropbear.id.au>,
	Thomas Huth <thuth@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	qemu-s390x@nongnu.org,
	Aurelien Jarno <aurelien@aurel32.net>,
	qemu-arm@nongnu.org
Subject: [PATCH 3/8] gitlab-ci: Add KVM X86 cross-build jobs
Date: Sun,  6 Dec 2020 19:55:03 +0100
Message-Id: <20201206185508.3545711-4-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201206185508.3545711-1-philmd@redhat.com>
References: <20201206185508.3545711-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Cross-build the x86 targets with only the KVM accelerator enabled.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 .gitlab-ci.d/crossbuilds-kvm-x86.yml | 6 ++++++
 .gitlab-ci.yml                       | 1 +
 MAINTAINERS                          | 1 +
 3 files changed, 8 insertions(+)
 create mode 100644 .gitlab-ci.d/crossbuilds-kvm-x86.yml

diff --git a/.gitlab-ci.d/crossbuilds-kvm-x86.yml b/.gitlab-ci.d/crossbuilds-kvm-x86.yml
new file mode 100644
index 00000000000..9719a19d143
--- /dev/null
+++ b/.gitlab-ci.d/crossbuilds-kvm-x86.yml
@@ -0,0 +1,6 @@
+cross-amd64-kvm:
+  extends: .cross_accel_build_job
+  variables:
+    IMAGE: debian-amd64-cross
+    TARGETS: i386-softmmu,x86_64-softmmu
+    ACCEL_CONFIGURE_OPTS: --disable-tcg
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index d0173e82b16..cdfa1f82a3d 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -12,6 +12,7 @@ include:
   - local: '/.gitlab-ci.d/opensbi.yml'
   - local: '/.gitlab-ci.d/containers.yml'
   - local: '/.gitlab-ci.d/crossbuilds.yml'
+  - local: '/.gitlab-ci.d/crossbuilds-kvm-x86.yml'
 
 .native_build_job_template: &native_build_job_definition
   stage: build
diff --git a/MAINTAINERS b/MAINTAINERS
index 68bc160f41b..8d7e2fdb7e2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -427,6 +427,7 @@ L: kvm@vger.kernel.org
 S: Supported
 F: target/i386/kvm.c
 F: scripts/kvm/vmxcap
+F: .gitlab-ci.d/crossbuilds-kvm-x86.yml
 
 Guest CPU Cores (other accelerators)
 ------------------------------------
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Sun Dec 06 18:55:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Dec 2020 18:55:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45760.81221 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klzCB-0005sJ-0U; Sun, 06 Dec 2020 18:55:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45760.81221; Sun, 06 Dec 2020 18:55:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klzCA-0005s3-Pd; Sun, 06 Dec 2020 18:55:42 +0000
Received: by outflank-mailman (input) for mailman id 45760;
 Sun, 06 Dec 2020 18:55:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PoKi=FK=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1klzC9-0005oU-Hi
 for xen-devel@lists.xenproject.org; Sun, 06 Dec 2020 18:55:41 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 30518ff7-d9df-4962-8512-180c8e08cb37;
 Sun, 06 Dec 2020 18:55:39 +0000 (UTC)
Received: from mail-wm1-f70.google.com (mail-wm1-f70.google.com
 [209.85.128.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-360-ZVaRvVsKMoSpEp0xxrxiWQ-1; Sun, 06 Dec 2020 13:55:35 -0500
Received: by mail-wm1-f70.google.com with SMTP id y187so4310699wmy.3
 for <xen-devel@lists.xenproject.org>; Sun, 06 Dec 2020 10:55:35 -0800 (PST)
Received: from localhost.localdomain (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id l8sm12023533wmf.35.2020.12.06.10.55.32
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 06 Dec 2020 10:55:33 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 30518ff7-d9df-4962-8512-180c8e08cb37
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607280939;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Vxt3MEZXS260piBfvrRF2LPibTg2rLk2j2FgrTceEtE=;
	b=DdpqVZOKqK5n6md7/XluGVO0qo3tgrqmhdCP6EQBuIEVX4hvoXYVRuB6KrovLjArqrOdhh
	2LQcGTDxkLCmw4+LfOHEnqY1NB5T55N006XQMI2ioYBIdxuUaVgK+1TvYDpM/t96FVVUoa
	RZhtJa/Hpvng6lpVSdlOtw/+1N0A37U=
X-MC-Unique: ZVaRvVsKMoSpEp0xxrxiWQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=Vxt3MEZXS260piBfvrRF2LPibTg2rLk2j2FgrTceEtE=;
        b=LAetrLfOkWlkLyx5Z1u8QvqV4yQcW/0+cjKvpSSXnjtvSSdbAVYW2v8vQwAVAzfDTa
         niwX1Cy4LLQRQ7EWQomejpCdMHi3TDLE07KtS9nDU1gUcC84FGBKkhgiOZnkwE9HhSiN
         YJECvFDP+AtFqYLJU6HxoagsEsSWt/4/DH73GzbQ53eZyfFQfQKLiQ2hsYgmTgM7AoYs
         vMZYPbXh3cO43hOw5ET+lhhTe6qBD2c/0XQVo5fSyOG0LoH2qN5wTQCKC+dhiz3rHdkb
         WW8nDjiLQZXld3cHkgb5Xk1o+BcgdiSQEeBBIud+rTpc6rAaZD5z5B0fJtF84GhWxkFw
         4Rwg==
X-Gm-Message-State: AOAM530olqlQbk59stD0KCkgCzjxk2yJ+6XFv44R8YDhA/A2Vp59itKj
	Vjc8Twujwflt67uOiSnY4O/Dzo/gAnqHN+b2xeDxQ2ORVy6rVHsY+PBFHYt/RgbZt1JZ7P/jAxE
	yOPD2KAQL9qQaOlQCOsGh32Pq/bM=
X-Received: by 2002:a1c:48d:: with SMTP id 135mr14975797wme.147.1607280934466;
        Sun, 06 Dec 2020 10:55:34 -0800 (PST)
X-Google-Smtp-Source: ABdhPJx4k4/YkSarxB89UuEp4wGBYMDt9qPp414ZW0Hy6JgLmQ1zgyBWhlkGZseJ8x91ebcKHRE8vw==
X-Received: by 2002:a1c:48d:: with SMTP id 135mr14975772wme.147.1607280934329;
        Sun, 06 Dec 2020 10:55:34 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	Wainer dos Santos Moschetta <wainersm@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	Willian Rampazzo <wrampazz@redhat.com>,
	Paul Durrant <paul@xen.org>,
	Huacai Chen <chenhc@lemote.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Claudio Fontana <cfontana@suse.de>,
	Halil Pasic <pasic@linux.ibm.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	David Gibson <david@gibson.dropbear.id.au>,
	Thomas Huth <thuth@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	qemu-s390x@nongnu.org,
	Aurelien Jarno <aurelien@aurel32.net>,
	qemu-arm@nongnu.org
Subject: [PATCH 4/8] gitlab-ci: Add KVM ARM cross-build jobs
Date: Sun,  6 Dec 2020 19:55:04 +0100
Message-Id: <20201206185508.3545711-5-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201206185508.3545711-1-philmd@redhat.com>
References: <20201206185508.3545711-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Cross-build ARM aarch64 target with KVM and TCG accelerators enabled.
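
For reference, the job is expected to end up driving a configure step roughly like the sketch below. This is a hedged reconstruction: the actual command lives in the shared .cross_accel_build_job template (not part of this patch), and the --cross-prefix value is an assumption.

```shell
# Hypothetical sketch of what the cross-arm64-kvm job runs inside the
# debian-arm64-cross image. TARGETS mirrors the job variables below;
# the remaining flags are assumptions about the shared template.
TARGETS="aarch64-softmmu"
ACCEL_CONFIGURE_OPTS=""   # empty: both KVM and TCG stay enabled
CONFIGURE_CMD="../configure --cross-prefix=aarch64-linux-gnu- --target-list=${TARGETS} ${ACCEL_CONFIGURE_OPTS}"
echo "${CONFIGURE_CMD}"
```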

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
Later this job will build KVM-only.
---
 .gitlab-ci.d/crossbuilds-kvm-arm.yml | 5 +++++
 .gitlab-ci.yml                       | 1 +
 MAINTAINERS                          | 1 +
 3 files changed, 7 insertions(+)
 create mode 100644 .gitlab-ci.d/crossbuilds-kvm-arm.yml

diff --git a/.gitlab-ci.d/crossbuilds-kvm-arm.yml b/.gitlab-ci.d/crossbuilds-kvm-arm.yml
new file mode 100644
index 00000000000..c74c6fdc9fb
--- /dev/null
+++ b/.gitlab-ci.d/crossbuilds-kvm-arm.yml
@@ -0,0 +1,5 @@
+cross-arm64-kvm:
+  extends: .cross_accel_build_job
+  variables:
+    IMAGE: debian-arm64-cross
+    TARGETS: aarch64-softmmu
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index cdfa1f82a3d..573afceb3c7 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -13,6 +13,7 @@ include:
   - local: '/.gitlab-ci.d/containers.yml'
   - local: '/.gitlab-ci.d/crossbuilds.yml'
   - local: '/.gitlab-ci.d/crossbuilds-kvm-x86.yml'
+  - local: '/.gitlab-ci.d/crossbuilds-kvm-arm.yml'
 
 .native_build_job_template: &native_build_job_definition
   stage: build
diff --git a/MAINTAINERS b/MAINTAINERS
index 8d7e2fdb7e2..40271eba592 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -386,6 +386,7 @@ M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
 S: Maintained
 F: target/arm/kvm.c
+F: .gitlab-ci.d/crossbuilds-kvm-arm.yml
 
 MIPS KVM CPUs
 M: Huacai Chen <chenhc@lemote.com>
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Sun Dec 06 18:55:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Dec 2020 18:55:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45767.81233 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klzCG-0005y2-A1; Sun, 06 Dec 2020 18:55:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45767.81233; Sun, 06 Dec 2020 18:55:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klzCG-0005xu-5p; Sun, 06 Dec 2020 18:55:48 +0000
Received: by outflank-mailman (input) for mailman id 45767;
 Sun, 06 Dec 2020 18:55:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PoKi=FK=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1klzCE-0005oU-I0
 for xen-devel@lists.xenproject.org; Sun, 06 Dec 2020 18:55:46 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id fb91a125-8d6a-4446-baef-15f079abae10;
 Sun, 06 Dec 2020 18:55:45 +0000 (UTC)
Received: from mail-wm1-f69.google.com (mail-wm1-f69.google.com
 [209.85.128.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-160-CWcxd-CXO6-CHcSTaQhL6g-1; Sun, 06 Dec 2020 13:55:42 -0500
Received: by mail-wm1-f69.google.com with SMTP id k126so1497414wmb.0
 for <xen-devel@lists.xenproject.org>; Sun, 06 Dec 2020 10:55:42 -0800 (PST)
Received: from localhost.localdomain (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id l1sm5951733wrq.64.2020.12.06.10.55.38
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 06 Dec 2020 10:55:39 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fb91a125-8d6a-4446-baef-15f079abae10
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607280945;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ICTgDX+4v4WDebhKXLX3qGQtibzA3mRvScrd6C7n6Ec=;
	b=aRApqZ4PbsbUtaZMEJs3B1XVmD/Ebgu2jnyneijYMZ4hMVCKlArNDBR80AO+1dVGh7VBH/
	09YDmKab6pg4m8tC3hBnDGm2kIwzHg88fdeMTrjEqJJr6NSmz2yECvWgMLgJ2bIKjq8Th2
	5GOZCzm3Ex9WAj7lgc6+6F5ZVoSH67U=
X-MC-Unique: CWcxd-CXO6-CHcSTaQhL6g-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=ICTgDX+4v4WDebhKXLX3qGQtibzA3mRvScrd6C7n6Ec=;
        b=SyMZmfnTiHe3k2vfVJPwsDoU5jGTpq5InTaM4tCGe5+0a/qp4XrSzmgwGvfePGvcbX
         SaK/pch/zDXPnmcN9gCHbS84sLBpcKvqBZfKQKWc9Ahe7xwJgUeDBLL3Brvg0yi1ET1O
         pvlqZOaGpgVHMByz8uKYYZymkd5nTMp8sqjb23bpd0tR4xg5ojNhSCtyWfDym8L0MPNH
         9A3UIN3dGevLnigKakCcnFI0tXrY0rO6XlErFCeSsu5jvtixsE9t9Mq35XVsC37bJ8l3
         0z4TONPo9wTkuzcBb9E7Bc7nda7TULpiaGogwS7oHmY07I7i1/edYNpZgHTKhqifJgGh
         p4KQ==
X-Gm-Message-State: AOAM533wdR+2oZN8p0nw+ooY3F4u0OBMLnx/qV1hQdY0G0iym9WvVz6f
	jV3Zuv5oQAyfKmw1wKesqp8u7C6sfQctLgUfhG+P36F0umDFkoZyuFJHtIYw/jlxZL0sQARmaVK
	rJNiI/SPjLYJ3PuzZg4J62iWiKWo=
X-Received: by 2002:a1c:9d8b:: with SMTP id g133mr14902023wme.189.1607280940117;
        Sun, 06 Dec 2020 10:55:40 -0800 (PST)
X-Google-Smtp-Source: ABdhPJwS0gmyZIlDu9x3zdPI4XoBOwxS0bhZqQbF0DZUVSQQ25mAYsGy/bDy2iLLg12EuoAvmHTkDA==
X-Received: by 2002:a1c:9d8b:: with SMTP id g133mr14902001wme.189.1607280939914;
        Sun, 06 Dec 2020 10:55:39 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	Wainer dos Santos Moschetta <wainersm@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	Willian Rampazzo <wrampazz@redhat.com>,
	Paul Durrant <paul@xen.org>,
	Huacai Chen <chenhc@lemote.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Claudio Fontana <cfontana@suse.de>,
	Halil Pasic <pasic@linux.ibm.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	David Gibson <david@gibson.dropbear.id.au>,
	Thomas Huth <thuth@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	qemu-s390x@nongnu.org,
	Aurelien Jarno <aurelien@aurel32.net>,
	qemu-arm@nongnu.org
Subject: [PATCH 5/8] gitlab-ci: Add KVM s390x cross-build jobs
Date: Sun,  6 Dec 2020 19:55:05 +0100
Message-Id: <20201206185508.3545711-6-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201206185508.3545711-1-philmd@redhat.com>
References: <20201206185508.3545711-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Cross-build s390x target with only the KVM accelerator enabled.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 .gitlab-ci.d/crossbuilds-kvm-s390x.yml | 6 ++++++
 .gitlab-ci.yml                         | 1 +
 MAINTAINERS                            | 1 +
 3 files changed, 8 insertions(+)
 create mode 100644 .gitlab-ci.d/crossbuilds-kvm-s390x.yml

diff --git a/.gitlab-ci.d/crossbuilds-kvm-s390x.yml b/.gitlab-ci.d/crossbuilds-kvm-s390x.yml
new file mode 100644
index 00000000000..1731af62056
--- /dev/null
+++ b/.gitlab-ci.d/crossbuilds-kvm-s390x.yml
@@ -0,0 +1,6 @@
+cross-s390x-kvm:
+  extends: .cross_accel_build_job
+  variables:
+    IMAGE: debian-s390x-cross
+    TARGETS: s390x-softmmu
+    ACCEL_CONFIGURE_OPTS: --disable-tcg
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 573afceb3c7..a69619d7319 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -14,6 +14,7 @@ include:
   - local: '/.gitlab-ci.d/crossbuilds.yml'
   - local: '/.gitlab-ci.d/crossbuilds-kvm-x86.yml'
   - local: '/.gitlab-ci.d/crossbuilds-kvm-arm.yml'
+  - local: '/.gitlab-ci.d/crossbuilds-kvm-s390x.yml'
 
 .native_build_job_template: &native_build_job_definition
   stage: build
diff --git a/MAINTAINERS b/MAINTAINERS
index 40271eba592..d41401f6683 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -417,6 +417,7 @@ F: hw/intc/s390_flic.c
 F: hw/intc/s390_flic_kvm.c
 F: include/hw/s390x/s390_flic.h
 F: gdb-xml/s390*.xml
+F: .gitlab-ci.d/crossbuilds-kvm-s390x.yml
 T: git https://github.com/cohuck/qemu.git s390-next
 T: git https://github.com/borntraeger/qemu.git s390-next
 L: qemu-s390x@nongnu.org
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Sun Dec 06 18:55:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Dec 2020 18:55:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45772.81245 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klzCM-00063h-Ke; Sun, 06 Dec 2020 18:55:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45772.81245; Sun, 06 Dec 2020 18:55:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klzCM-00063Z-Ft; Sun, 06 Dec 2020 18:55:54 +0000
Received: by outflank-mailman (input) for mailman id 45772;
 Sun, 06 Dec 2020 18:55:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PoKi=FK=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1klzCL-0005yj-Jt
 for xen-devel@lists.xenproject.org; Sun, 06 Dec 2020 18:55:53 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 587b9a81-94fb-43f6-80db-23a2e2df4101;
 Sun, 06 Dec 2020 18:55:49 +0000 (UTC)
Received: from mail-wm1-f71.google.com (mail-wm1-f71.google.com
 [209.85.128.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-201-tG4mia0tNiavDU8zWE1oAQ-1; Sun, 06 Dec 2020 13:55:47 -0500
Received: by mail-wm1-f71.google.com with SMTP id f12so4295042wmf.6
 for <xen-devel@lists.xenproject.org>; Sun, 06 Dec 2020 10:55:47 -0800 (PST)
Received: from localhost.localdomain (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id f16sm10763171wmh.7.2020.12.06.10.55.43
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 06 Dec 2020 10:55:45 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 587b9a81-94fb-43f6-80db-23a2e2df4101
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607280949;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=OgnINeMOsSyDZG4XwDK8Xw/XSsfPDxGpQ5JDhRmZjUY=;
	b=CAopMyajJehc2dVd8qLO5n8LE/KYgBAfziz5CA88UJW6ioMSPkXrl1VFuwKm/r5a0YdvPR
	M8IhMv57l+y1tRq7ZMVWwu8Hmj0StLDsoGe+QwlSVLGQD4yGQPy4zOj81jUPw5sC98y0Aa
	cRIe7xn2ZMqY8YX1PxBOUUxgkr1DS+w=
X-MC-Unique: tG4mia0tNiavDU8zWE1oAQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=OgnINeMOsSyDZG4XwDK8Xw/XSsfPDxGpQ5JDhRmZjUY=;
        b=dc0GMX/QLqsjVcFB+tlt9wYdt47SmNdEM+3i3MJd6j9AFxoarTKmwIyRHHH2J4Cj7K
         IofTP5cUqpspa/3VojQyQfPvP6B71N3jpc14Z1hT28V5w4GRt4NULfUTDPpg/PGyZ5B2
         xaxRu7mweBB5DP5zdAvhLvfdZVI9KMWXE3pyQF/sXkIvgHdwqL8RBaPzEmOc0YMaoONm
         HmeAMHnRAUtRrR92QZi1xCd6Ea4R8cfnfOzJSiuG9wW1L2H4IIyHB1kl9livBR801Ge5
         dxl+zbMRLWoeRYiT9qHf+/14k75hhecKU+MGX6XiHspAFEpB4aSqNMSxjE1FLWsdbYLm
         Clow==
X-Gm-Message-State: AOAM531NiuM3VhT/55WNSPl9xDGvogfjR+oKe8aNgZX+L9rRczuW7B0H
	8FMOTRH+fQHYZNoSucDgrFOiRnoP00OZ+hM5hfTLC1CvX4BAjcCWkOTU+yNqDEzMayA2OFFcsMC
	N+VG/a9sJ1f3u/0SUiuxfVBG0LsY=
X-Received: by 2002:a1c:27c4:: with SMTP id n187mr14572187wmn.157.1607280946015;
        Sun, 06 Dec 2020 10:55:46 -0800 (PST)
X-Google-Smtp-Source: ABdhPJzP5ckHKkPnoBNFErvKN6cx0NmMNVau5K2sPqOp9OxzOmSHemiybPRD6ErRhCPAEZm1F015Rw==
X-Received: by 2002:a1c:27c4:: with SMTP id n187mr14572172wmn.157.1607280945820;
        Sun, 06 Dec 2020 10:55:45 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	Wainer dos Santos Moschetta <wainersm@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	Willian Rampazzo <wrampazz@redhat.com>,
	Paul Durrant <paul@xen.org>,
	Huacai Chen <chenhc@lemote.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Claudio Fontana <cfontana@suse.de>,
	Halil Pasic <pasic@linux.ibm.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	David Gibson <david@gibson.dropbear.id.au>,
	Thomas Huth <thuth@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	qemu-s390x@nongnu.org,
	Aurelien Jarno <aurelien@aurel32.net>,
	qemu-arm@nongnu.org
Subject: [PATCH 6/8] gitlab-ci: Add KVM PPC cross-build jobs
Date: Sun,  6 Dec 2020 19:55:06 +0100
Message-Id: <20201206185508.3545711-7-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201206185508.3545711-1-philmd@redhat.com>
References: <20201206185508.3545711-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Cross-build PPC target with KVM and TCG accelerators enabled.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
Later this job will build KVM-only.
---
 .gitlab-ci.d/crossbuilds-kvm-ppc.yml | 5 +++++
 .gitlab-ci.yml                       | 1 +
 MAINTAINERS                          | 1 +
 3 files changed, 7 insertions(+)
 create mode 100644 .gitlab-ci.d/crossbuilds-kvm-ppc.yml

diff --git a/.gitlab-ci.d/crossbuilds-kvm-ppc.yml b/.gitlab-ci.d/crossbuilds-kvm-ppc.yml
new file mode 100644
index 00000000000..9df8bcf5a73
--- /dev/null
+++ b/.gitlab-ci.d/crossbuilds-kvm-ppc.yml
@@ -0,0 +1,5 @@
+cross-ppc64el-kvm:
+  extends: .cross_accel_build_job
+  variables:
+    IMAGE: debian-ppc64el-cross
+    TARGETS: ppc64-softmmu
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index a69619d7319..024624908e8 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -15,6 +15,7 @@ include:
   - local: '/.gitlab-ci.d/crossbuilds-kvm-x86.yml'
   - local: '/.gitlab-ci.d/crossbuilds-kvm-arm.yml'
   - local: '/.gitlab-ci.d/crossbuilds-kvm-s390x.yml'
+  - local: '/.gitlab-ci.d/crossbuilds-kvm-ppc.yml'
 
 .native_build_job_template: &native_build_job_definition
   stage: build
diff --git a/MAINTAINERS b/MAINTAINERS
index d41401f6683..c7766782174 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -397,6 +397,7 @@ PPC KVM CPUs
 M: David Gibson <david@gibson.dropbear.id.au>
 S: Maintained
 F: target/ppc/kvm.c
+F: .gitlab-ci.d/crossbuilds-kvm-ppc.yml
 
 S390 KVM CPUs
 M: Halil Pasic <pasic@linux.ibm.com>
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Sun Dec 06 18:56:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Dec 2020 18:56:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45781.81257 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klzCW-0006DQ-W1; Sun, 06 Dec 2020 18:56:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45781.81257; Sun, 06 Dec 2020 18:56:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klzCW-0006DJ-RX; Sun, 06 Dec 2020 18:56:04 +0000
Received: by outflank-mailman (input) for mailman id 45781;
 Sun, 06 Dec 2020 18:56:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PoKi=FK=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1klzCV-0005yj-JT
 for xen-devel@lists.xenproject.org; Sun, 06 Dec 2020 18:56:03 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id c4d38cca-8728-41d0-b9be-15c84107e0be;
 Sun, 06 Dec 2020 18:55:54 +0000 (UTC)
Received: from mail-wm1-f72.google.com (mail-wm1-f72.google.com
 [209.85.128.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-127-mFqtb2s5PCGrHFEL6fSgeQ-1; Sun, 06 Dec 2020 13:55:53 -0500
Received: by mail-wm1-f72.google.com with SMTP id b184so3257895wmh.6
 for <xen-devel@lists.xenproject.org>; Sun, 06 Dec 2020 10:55:52 -0800 (PST)
Received: from localhost.localdomain (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id v20sm10922213wml.34.2020.12.06.10.55.49
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 06 Dec 2020 10:55:50 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c4d38cca-8728-41d0-b9be-15c84107e0be
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607280954;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=i1vxVlVvkT3sP8apHjKZ3bpOmMfnVbEari6EJ9mVBVw=;
	b=iGDNmebkOhWUfG8LT03wEQ5zmhbG2UMW+jLtEA5KbVdcKuV/FX5Pmhah+79/YCrPlcej+A
	nyYbYWIjiCjxpDVZbVe/jODP3vP5UuBR/ftWqpT4iKdQf+r8vbFSGASNmOH0KJStUboXyT
	5LxU36mGwZYuwj4Pymf9XPf1IjxMaks=
X-MC-Unique: mFqtb2s5PCGrHFEL6fSgeQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=i1vxVlVvkT3sP8apHjKZ3bpOmMfnVbEari6EJ9mVBVw=;
        b=kz+wODeI075SOo5Kz8jcCYKmxc0smy2GobjYNuYEkRjbXlzocJrZFVEbcbFKT9z/hu
         yyOEwwlJYzDvrRKx1RaIo9j8gHi5v0vG3W+0X/oHm+rZ8R3cPemltGmLiA2/D9Rdbrts
         VUasU5dIulf0ZZ18ki7Ikypb+yMEapBYqwHUPba0YpSiMIfOnCPsiWTt5IeLgHQ9HQBS
         UCx2RLBedIPhbN/BlG+Fa4j4UDHg4oq9qIA5v1L0D6p/YoSQt3n9+/vVknewlBAuv6a7
         5pV74gaL2NRzmUnTLAfDdRJuxHeYxHwgJuJXJqlEIKO5vWZbjlDFMinNMGEnrrDu3JlN
         PCDw==
X-Gm-Message-State: AOAM5327NESWzZfAJSj8OaoM88P4kFC2ej/VkhFJ9jQdI7WHIY3Hkmc4
	/rIiPEJlYqMys80V8Z8zR2IqE6xmQF1GZ5TjOzg7jRF9dk1MzBYN/9lPKwNN3ff2TfpU2XAfFnn
	/Xdx+bexkfyNo7KE374c89dJxzCg=
X-Received: by 2002:adf:9b9b:: with SMTP id d27mr13324076wrc.125.1607280951436;
        Sun, 06 Dec 2020 10:55:51 -0800 (PST)
X-Google-Smtp-Source: ABdhPJyjk3rMrzMdcRMBa9qbGydj+XoVJdnBoRtt4fTfXKTXx014IQZfLKKyL1tpFnUZLU9IFuOQ5Q==
X-Received: by 2002:adf:9b9b:: with SMTP id d27mr13324062wrc.125.1607280951287;
        Sun, 06 Dec 2020 10:55:51 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	Wainer dos Santos Moschetta <wainersm@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	Willian Rampazzo <wrampazz@redhat.com>,
	Paul Durrant <paul@xen.org>,
	Huacai Chen <chenhc@lemote.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Claudio Fontana <cfontana@suse.de>,
	Halil Pasic <pasic@linux.ibm.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	David Gibson <david@gibson.dropbear.id.au>,
	Thomas Huth <thuth@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	qemu-s390x@nongnu.org,
	Aurelien Jarno <aurelien@aurel32.net>,
	qemu-arm@nongnu.org
Subject: [PATCH 7/8] gitlab-ci: Add KVM MIPS cross-build jobs
Date: Sun,  6 Dec 2020 19:55:07 +0100
Message-Id: <20201206185508.3545711-8-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201206185508.3545711-1-philmd@redhat.com>
References: <20201206185508.3545711-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Cross-build MIPS target with KVM and TCG accelerators enabled.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
Later we'll build KVM-only.
---
 .gitlab-ci.d/crossbuilds-kvm-mips.yml | 5 +++++
 .gitlab-ci.yml                        | 1 +
 MAINTAINERS                           | 1 +
 3 files changed, 7 insertions(+)
 create mode 100644 .gitlab-ci.d/crossbuilds-kvm-mips.yml

diff --git a/.gitlab-ci.d/crossbuilds-kvm-mips.yml b/.gitlab-ci.d/crossbuilds-kvm-mips.yml
new file mode 100644
index 00000000000..81eeeb315bb
--- /dev/null
+++ b/.gitlab-ci.d/crossbuilds-kvm-mips.yml
@@ -0,0 +1,5 @@
+cross-mips64el-kvm:
+  extends: .cross_accel_build_job
+  variables:
+    IMAGE: debian-mips64el-cross
+    TARGETS: mips64el-softmmu
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 024624908e8..5f607fc7b48 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -16,6 +16,7 @@ include:
   - local: '/.gitlab-ci.d/crossbuilds-kvm-arm.yml'
   - local: '/.gitlab-ci.d/crossbuilds-kvm-s390x.yml'
   - local: '/.gitlab-ci.d/crossbuilds-kvm-ppc.yml'
+  - local: '/.gitlab-ci.d/crossbuilds-kvm-mips.yml'
 
 .native_build_job_template: &native_build_job_definition
   stage: build
diff --git a/MAINTAINERS b/MAINTAINERS
index c7766782174..5f26626a512 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -392,6 +392,7 @@ MIPS KVM CPUs
 M: Huacai Chen <chenhc@lemote.com>
 S: Odd Fixes
 F: target/mips/kvm.c
+F: .gitlab-ci.d/crossbuilds-kvm-mips.yml
 
 PPC KVM CPUs
 M: David Gibson <david@gibson.dropbear.id.au>
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Sun Dec 06 18:56:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Dec 2020 18:56:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45785.81269 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klzCc-0006Hl-AI; Sun, 06 Dec 2020 18:56:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45785.81269; Sun, 06 Dec 2020 18:56:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klzCc-0006He-6E; Sun, 06 Dec 2020 18:56:10 +0000
Received: by outflank-mailman (input) for mailman id 45785;
 Sun, 06 Dec 2020 18:56:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PoKi=FK=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1klzCa-0005yj-Jh
 for xen-devel@lists.xenproject.org; Sun, 06 Dec 2020 18:56:08 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 2994972a-f584-4d7d-b87e-7a7d6cf282f7;
 Sun, 06 Dec 2020 18:56:02 +0000 (UTC)
Received: from mail-wm1-f70.google.com (mail-wm1-f70.google.com
 [209.85.128.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-132-57U9OyrwNvGeEKNUk8K1oQ-1; Sun, 06 Dec 2020 13:55:58 -0500
Received: by mail-wm1-f70.google.com with SMTP id b184so3257956wmh.6
 for <xen-devel@lists.xenproject.org>; Sun, 06 Dec 2020 10:55:58 -0800 (PST)
Received: from localhost.localdomain (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id f199sm10894749wme.15.2020.12.06.10.55.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 06 Dec 2020 10:55:56 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2994972a-f584-4d7d-b87e-7a7d6cf282f7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607280962;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TJv54j6i8iOuNb5cgwW4vcH7LWLUemKzrpTz+i6TyMo=;
	b=Tw6pHfaCsc5VycCo9AAHFWomLhs28bDl1BQgN7cHMVqoWf5UUd181PX7ciJ2WamgGNaV3o
	QMx3ZKuXovxOynfmuzDctdxonpsfdftoTf2q0uu1skeaD8Wm2/IgqyiXAkhox/8MFHU42f
	VK1U7E9nIwJ1UxH1gfbuqWrjrmUAMHs=
X-MC-Unique: 57U9OyrwNvGeEKNUk8K1oQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=TJv54j6i8iOuNb5cgwW4vcH7LWLUemKzrpTz+i6TyMo=;
        b=NGVyVlsAtkG/ZUKuoEpMrR5NyfJ5VtD82Cr8tM8Kun0ZpmQwx2gwb727kRWCu3Qkof
         B7iLrqZg8H1gsk+JPedEguXQ7aeK9Wm+PhbtereNG32NLYlk8vAoai31F58yiJawMMV0
         MwyX7e1JSf3dDMsjO/6u6PwL95ckp9Qkeqt9O4mkb2AybswAdZNqtjA9cVJHbBLA63Dy
         xRA0BSHi3BVIJh6hwQWSQOGgetFoTWrJO5Iwj8QSVRP8jwqh6CuAb5YWjbYxO0nH3w05
         fOCN/vtTPclyM9f5JI5kyZ6UeDJ6rPG1m9+Z/5GhugQ21ibDX8MP6I3JqxDpi7R6NSJH
         u5tg==
X-Gm-Message-State: AOAM531RiMPrr2Vrew7AnTCidpE30+/9xZ/5Gf/tNQK0Os+u6S9VQ5Vp
	TySs82WkwnrwI+EQM2AmVIMULUvq4rZliJv22uePxV/7QrwsjsKrpteIPeX/55uXNms+fiFvfr4
	IKzSFhxGVNtK5cd0U7NNsZrO2JKQ=
X-Received: by 2002:a5d:540f:: with SMTP id g15mr7837207wrv.397.1607280957359;
        Sun, 06 Dec 2020 10:55:57 -0800 (PST)
X-Google-Smtp-Source: ABdhPJxaQWa5KUxTpGy2U2kfc8S05U9x2ntHy5AYmvHAQPO8HICDiiXk9/Erff4S29pCdX8ZlczzjA==
X-Received: by 2002:a5d:540f:: with SMTP id g15mr7837200wrv.397.1607280957219;
        Sun, 06 Dec 2020 10:55:57 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	Wainer dos Santos Moschetta <wainersm@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	Willian Rampazzo <wrampazz@redhat.com>,
	Paul Durrant <paul@xen.org>,
	Huacai Chen <chenhc@lemote.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Claudio Fontana <cfontana@suse.de>,
	Halil Pasic <pasic@linux.ibm.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	David Gibson <david@gibson.dropbear.id.au>,
	Thomas Huth <thuth@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	qemu-s390x@nongnu.org,
	Aurelien Jarno <aurelien@aurel32.net>,
	qemu-arm@nongnu.org
Subject: [PATCH 8/8] gitlab-ci: Add Xen cross-build jobs
Date: Sun,  6 Dec 2020 19:55:08 +0100
Message-Id: <20201206185508.3545711-9-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201206185508.3545711-1-philmd@redhat.com>
References: <20201206185508.3545711-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Cross-build ARM and X86 targets with only Xen accelerator enabled.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 .gitlab-ci.d/crossbuilds-xen.yml | 14 ++++++++++++++
 .gitlab-ci.yml                   |  1 +
 MAINTAINERS                      |  1 +
 3 files changed, 16 insertions(+)
 create mode 100644 .gitlab-ci.d/crossbuilds-xen.yml

diff --git a/.gitlab-ci.d/crossbuilds-xen.yml b/.gitlab-ci.d/crossbuilds-xen.yml
new file mode 100644
index 00000000000..9c4def4feeb
--- /dev/null
+++ b/.gitlab-ci.d/crossbuilds-xen.yml
@@ -0,0 +1,14 @@
+cross-amd64-xen:
+  extends: .cross_accel_build_job
+  variables:
+    IMAGE: debian-amd64-cross
+    ACCEL: xen
+    TARGETS: i386-softmmu,x86_64-softmmu
+    ACCEL_CONFIGURE_OPTS: --disable-tcg --disable-kvm
+
+cross-arm64-xen:
+  extends: .cross_accel_build_job
+  variables:
+    IMAGE: debian-arm64-cross
+    ACCEL: xen
+    TARGETS: aarch64-softmmu
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 5f607fc7b48..9765c2199f7 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -17,6 +17,7 @@ include:
   - local: '/.gitlab-ci.d/crossbuilds-kvm-s390x.yml'
   - local: '/.gitlab-ci.d/crossbuilds-kvm-ppc.yml'
   - local: '/.gitlab-ci.d/crossbuilds-kvm-mips.yml'
+  - local: '/.gitlab-ci.d/crossbuilds-xen.yml'
 
 .native_build_job_template: &native_build_job_definition
   stage: build
diff --git a/MAINTAINERS b/MAINTAINERS
index 5f26626a512..1581e120629 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -488,6 +488,7 @@ F: include/hw/xen/
 F: include/sysemu/xen.h
 F: include/sysemu/xen-mapcache.h
 F: stubs/xen-hw-stub.c
+F: .gitlab-ci.d/crossbuilds-xen.yml
 
 Guest CPU Cores (HAXM)
 ---------------------
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Sun Dec 06 19:24:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Dec 2020 19:24:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45810.81280 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klzdW-0000xQ-Pe; Sun, 06 Dec 2020 19:23:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45810.81280; Sun, 06 Dec 2020 19:23:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1klzdW-0000xJ-Mn; Sun, 06 Dec 2020 19:23:58 +0000
Received: by outflank-mailman (input) for mailman id 45810;
 Sun, 06 Dec 2020 19:23:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Xrq3=FK=suse.de=cfontana@srs-us1.protection.inumbo.net>)
 id 1klzdW-0000wn-1H
 for xen-devel@lists.xenproject.org; Sun, 06 Dec 2020 19:23:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b6992802-fc26-4c5b-954b-4ebe25738c46;
 Sun, 06 Dec 2020 19:23:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 63E26AC90;
 Sun,  6 Dec 2020 19:23:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b6992802-fc26-4c5b-954b-4ebe25738c46
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 2/8] gitlab-ci: Introduce 'cross_accel_build_job' template
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Cc: qemu-devel@nongnu.org, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Willian Rampazzo
 <wrampazz@redhat.com>, Paul Durrant <paul@xen.org>,
 Huacai Chen <chenhc@lemote.com>, Anthony Perard <anthony.perard@citrix.com>,
 Marcelo Tosatti <mtosatti@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Halil Pasic <pasic@linux.ibm.com>, Peter Maydell <peter.maydell@linaro.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Cornelia Huck <cohuck@redhat.com>, David Gibson
 <david@gibson.dropbear.id.au>, Thomas Huth <thuth@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, qemu-s390x@nongnu.org,
 Aurelien Jarno <aurelien@aurel32.net>, qemu-arm@nongnu.org
References: <20201206185508.3545711-1-philmd@redhat.com>
 <20201206185508.3545711-3-philmd@redhat.com>
From: Claudio Fontana <cfontana@suse.de>
Message-ID: <1691b11e-dd40-8a15-6a34-d5e817f95027@suse.de>
Date: Sun, 6 Dec 2020 20:23:53 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201206185508.3545711-3-philmd@redhat.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12/6/20 7:55 PM, Philippe Mathieu-Daudé wrote:
> Introduce a job template to cross-build accelerator specific
> jobs (enable a specific accelerator, disabling the others).
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  .gitlab-ci.d/crossbuilds.yml | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
> 
> diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
> index 099949aaef3..be63b209c5b 100644
> --- a/.gitlab-ci.d/crossbuilds.yml
> +++ b/.gitlab-ci.d/crossbuilds.yml
> @@ -13,6 +13,18 @@
>            xtensa-softmmu"
>      - make -j$(expr $(nproc) + 1) all check-build
>  
> +.cross_accel_build_job:
> +  stage: build
> +  image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
> +  timeout: 30m
> +  script:
> +    - mkdir build
> +    - cd build
> +    - PKG_CONFIG_PATH=$PKG_CONFIG_PATH
> +      ../configure --enable-werror $QEMU_CONFIGURE_OPTS --disable-tools
> +        --enable-${ACCEL:-kvm} --target-list="$TARGETS" $ACCEL_CONFIGURE_OPTS
> +    - make -j$(expr $(nproc) + 1) all check-build
> +
>  .cross_user_build_job:
>    stage: build
>    image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
> 

Hi Philippe,

probably I just don't understand how this works, but
where is the "disabling the others" part?

I see the --enable-${ACCEL:-kvm}, but I would expect some --disable-XXX?

I am probably just missing something...

Thanks,

Ciao,

Claudio
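
[Editorial note: for readers following this thread, `${ACCEL:-kvm}` is plain POSIX
parameter expansion — it substitutes `kvm` only when `ACCEL` is unset or empty.
The "disabling the others" part does not come from the template itself; it arrives
via `$ACCEL_CONFIGURE_OPTS`, as patch 8/8 shows with
`ACCEL_CONFIGURE_OPTS: --disable-tcg --disable-kvm`. A minimal sketch, assuming
only standard shell behavior:]

```shell
#!/bin/sh
# Default case: ACCEL unset, the template enables KVM.
unset ACCEL
default_flag="--enable-${ACCEL:-kvm}"
echo "$default_flag"    # --enable-kvm

# Xen job case: ACCEL and the extra disable flags come from job variables.
ACCEL=xen
ACCEL_CONFIGURE_OPTS="--disable-tcg --disable-kvm"
xen_flags="--enable-${ACCEL:-kvm} ${ACCEL_CONFIGURE_OPTS}"
echo "$xen_flags"       # --enable-xen --disable-tcg --disable-kvm
```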


From xen-devel-bounces@lists.xenproject.org Sun Dec 06 20:07:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Dec 2020 20:07:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45817.81293 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1km0JA-0004mW-46; Sun, 06 Dec 2020 20:07:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45817.81293; Sun, 06 Dec 2020 20:07:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1km0JA-0004mP-0v; Sun, 06 Dec 2020 20:07:00 +0000
Received: by outflank-mailman (input) for mailman id 45817;
 Sun, 06 Dec 2020 20:06:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1km0J8-0004ld-VH; Sun, 06 Dec 2020 20:06:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1km0J8-0002Xg-OR; Sun, 06 Dec 2020 20:06:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1km0J8-0001mP-ER; Sun, 06 Dec 2020 20:06:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1km0J8-0004s4-Du; Sun, 06 Dec 2020 20:06:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/EW3SJUBOKHPwjK+neM6xyZJCvc4HoEq14VHN/YibgQ=; b=mko+H6q0eaEmxEQnDFt0U1xElj
	fy1oqhe2e3D1kDm+6vy8ktX0XkSaagpt3r09vB3Vcp1oG0w8broW00YYErngl/scaAqPk5Xg4lit0
	KfGd3+1cn8dTAbtiXzVKrdgiYlG4KlhgkajEz+39TcWXkaxLuGGN+tJE6k1RCEgIbPgo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157239-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157239: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    qemu-mainline:test-armhf-armhf-libvirt:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 06 Dec 2020 20:06:58 +0000

flight 157239 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157239/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds 20 guest-localmigrate/x10 fail in 157231 pass in 157239
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail in 157231 pass in 157239
 test-armhf-armhf-libvirt     18 guest-start/debian.repeat  fail pass in 157231

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  108 days
Failing since        152659  2020-08-21 14:07:39 Z  107 days  223 attempts
Testing same since   157142  2020-12-01 20:39:57 Z    4 days    9 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69355 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 06 23:45:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 06 Dec 2020 23:45:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45837.81320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1km3i0-0007yH-Ni; Sun, 06 Dec 2020 23:44:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45837.81320; Sun, 06 Dec 2020 23:44:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1km3i0-0007yA-KI; Sun, 06 Dec 2020 23:44:52 +0000
Received: by outflank-mailman (input) for mailman id 45837;
 Sun, 06 Dec 2020 23:44:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PoKi=FK=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1km3hz-0007y5-2f
 for xen-devel@lists.xenproject.org; Sun, 06 Dec 2020 23:44:51 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 86efd9e6-d144-4e40-bad3-9d421923a4f6;
 Sun, 06 Dec 2020 23:44:50 +0000 (UTC)
Received: from mail-wm1-f72.google.com (mail-wm1-f72.google.com
 [209.85.128.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-365-Y6oypYtdN92pw7l9SYZWag-1; Sun, 06 Dec 2020 18:44:43 -0500
Received: by mail-wm1-f72.google.com with SMTP id a205so3439281wme.9
 for <xen-devel@lists.xenproject.org>; Sun, 06 Dec 2020 15:44:43 -0800 (PST)
Received: from [192.168.1.36] (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id c9sm12697020wrp.73.2020.12.06.15.44.40
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sun, 06 Dec 2020 15:44:41 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 86efd9e6-d144-4e40-bad3-9d421923a4f6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607298289;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=EVBInixgmn136ovV1rZRYaD9AJ24IiI0pMFI4QniZVU=;
	b=Z5MFfdttJmL+jS/jrQNGh3AIV+Nz4WGQqI5HZXxkK1h8Rw5wzhW2ehZFbGOoOKghuIE3Vp
	Pf3x1QUwP/ZKUR0LtttTIB9ESxVHTTyiSgBzu484xjQca2RK0bONnukhxroL9W+1Rl20s0
	wun9XtedSnPh8pyMCAXQOk6jhOgiBkg=
X-MC-Unique: Y6oypYtdN92pw7l9SYZWag-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=EVBInixgmn136ovV1rZRYaD9AJ24IiI0pMFI4QniZVU=;
        b=LnIfSl4+wqQdE35KKogUEgXmxflOdP5PFggE7wo9Tb04HtRpLq5dbmryl8oMtZcqmx
         ZxVyZprWKABI77smFfKDfKGEPTNWaYZvpMCJexRp+DgLN3an5l+fFlEnE9MMStdyfuAf
         TFRxmmyWhdJF/qUctBuZbiMJhPu+sqx0Ez1Gf4FFXdqX22vOCUg2nQHP1WT8PcqfTyai
         NoALKGSgEUnoZgSQAsb4WWHzm6M8+MUlQQsKVmHxSztSzQ8oiT3Rf7ukmnW+V/w3YnxC
         Xdgc0FE0osm95v/cvnJbPu3pgTXeusEZQVh0K2mBpv2LtYujAp06ZwfGKxoP0e8ozjUk
         gdMw==
X-Gm-Message-State: AOAM5312AlcZO77VNxpJjvwHG/8tB00eXoJQ5KnXqIyWW/ioBLLe2XvA
	zkNl8f9ZX59XFniOvbXb2PwJp5ej3mAuFqYxr+BLbPFD2J5z/S3oXcAN95zPPunZOnUPm2lgqQY
	AWDyw6rOwmIXHE8D101e3stW1yLU=
X-Received: by 2002:a7b:c303:: with SMTP id k3mr15698491wmj.21.1607298282180;
        Sun, 06 Dec 2020 15:44:42 -0800 (PST)
X-Google-Smtp-Source: ABdhPJz9Ly/Z3VYUD9ldB9eMDBoFXsydwmOnqU35i1cTdXNWpuYJCJR1JajcbqGx6vrhzpycS4B9gw==
X-Received: by 2002:a7b:c303:: with SMTP id k3mr15698481wmj.21.1607298282031;
        Sun, 06 Dec 2020 15:44:42 -0800 (PST)
Subject: Re: [PATCH 2/8] gitlab-ci: Introduce 'cross_accel_build_job' template
To: Claudio Fontana <cfontana@suse.de>
Cc: qemu-devel@nongnu.org, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Willian Rampazzo
 <wrampazz@redhat.com>, Paul Durrant <paul@xen.org>,
 Huacai Chen <chenhc@lemote.com>, Anthony Perard <anthony.perard@citrix.com>,
 Marcelo Tosatti <mtosatti@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Halil Pasic <pasic@linux.ibm.com>, Peter Maydell <peter.maydell@linaro.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Cornelia Huck <cohuck@redhat.com>, David Gibson
 <david@gibson.dropbear.id.au>, Thomas Huth <thuth@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, qemu-s390x@nongnu.org,
 Aurelien Jarno <aurelien@aurel32.net>, qemu-arm@nongnu.org
References: <20201206185508.3545711-1-philmd@redhat.com>
 <20201206185508.3545711-3-philmd@redhat.com>
 <1691b11e-dd40-8a15-6a34-d5e817f95027@suse.de>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <c3b42add-6586-8723-ab81-4fdd660277fc@redhat.com>
Date: Mon, 7 Dec 2020 00:44:39 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <1691b11e-dd40-8a15-6a34-d5e817f95027@suse.de>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12/6/20 8:23 PM, Claudio Fontana wrote:
> On 12/6/20 7:55 PM, Philippe Mathieu-Daudé wrote:
>> Introduce a job template to cross-build accelerator specific
>> jobs (enable a specific accelerator, disabling the others).
>>
>> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>> ---
>>  .gitlab-ci.d/crossbuilds.yml | 12 ++++++++++++
>>  1 file changed, 12 insertions(+)
>>
>> diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
>> index 099949aaef3..be63b209c5b 100644
>> --- a/.gitlab-ci.d/crossbuilds.yml
>> +++ b/.gitlab-ci.d/crossbuilds.yml
>> @@ -13,6 +13,18 @@
>>            xtensa-softmmu"
>>      - make -j$(expr $(nproc) + 1) all check-build
>>  
>> +.cross_accel_build_job:
>> +  stage: build
>> +  image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
>> +  timeout: 30m
>> +  script:
>> +    - mkdir build
>> +    - cd build
>> +    - PKG_CONFIG_PATH=$PKG_CONFIG_PATH
>> +      ../configure --enable-werror $QEMU_CONFIGURE_OPTS --disable-tools
>> +        --enable-${ACCEL:-kvm} --target-list="$TARGETS" $ACCEL_CONFIGURE_OPTS
>> +    - make -j$(expr $(nproc) + 1) all check-build
>> +
>>  .cross_user_build_job:
>>    stage: build
>>    image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
>>
> 
> Hi Philippe,
> 
> probably I just don't understand how this works, but
> where is the "disabling the others" part?

Sorry, I forgot to document $ACCEL_CONFIGURE_OPTS, which
can be used to amend the configure options. The x86 and s390x jobs
(the only ones buildable without TCG, AFAIK) use:

    ACCEL_CONFIGURE_OPTS: --disable-tcg

> 
> I see the --enable-${ACCEL:-kvm}, but I would expect some --disable-XXX ?
> 
> I am probably just missing something..

The goal of this series is not to test --disable-tcg, but
to test --enable-kvm when you don't have access to the host
architecture. I see testing --disable-tcg as a bonus :)
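
For illustration, a concrete job extending the template could look
like the sketch below. The job name, IMAGE value, and TARGETS here
are made-up examples for this reply, not taken from the series
itself:

```yaml
# Hypothetical job extending .cross_accel_build_job; the job name,
# image, and target list are illustrative only.
cross-s390x-kvm-only:
  extends: .cross_accel_build_job
  variables:
    IMAGE: debian-s390x-cross
    TARGETS: s390x-softmmu
    # --enable-${ACCEL:-kvm} defaults to kvm, so ACCEL can be omitted;
    # ACCEL_CONFIGURE_OPTS carries the "disable the others" part.
    ACCEL_CONFIGURE_OPTS: --disable-tcg
```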

> 
> Thanks,
> 
> Ciao,
> 
> Claudio
> 



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 01:02:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 01:02:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45846.81333 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1km4uf-0002xS-Rv; Mon, 07 Dec 2020 01:02:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45846.81333; Mon, 07 Dec 2020 01:02:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1km4uf-0002wi-M9; Mon, 07 Dec 2020 01:02:01 +0000
Received: by outflank-mailman (input) for mailman id 45846;
 Mon, 07 Dec 2020 01:02:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1km4ue-0002Bp-Ln; Mon, 07 Dec 2020 01:02:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1km4ud-0006jS-Iy; Mon, 07 Dec 2020 01:01:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1km4ud-0005dy-7E; Mon, 07 Dec 2020 01:01:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1km4ud-0004wK-6i; Mon, 07 Dec 2020 01:01:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3YkKUZaswRbo4bnSjCbP+iODXkR7QUrGDE0FocFmbgc=; b=1pYnHyusI9IQ9GepWq7oOKZO6t
	93K2XNKAsz2N61gsrx+Hg1lqeF671YJMLrOjAjYsMkm5x1p6ZSo/AiTbl2p2r49sa8SeYIFmjjEwD
	NcAjHnqb6QtC6ubhvPqgfj3qyJ3gbwmMeXixRCB4DRN7ku1zdALTvpg/y9zWbml48FXk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157243-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157243: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=7059c2c00a2196865c2139083cbef47cd18109b6
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 07 Dec 2020 01:01:59 +0000

flight 157243 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157243/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm 10 host-ping-check-xen fail in 157234 REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen fail in 157234 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit2   8 xen-boot         fail in 157234 pass in 157243
 test-arm64-arm64-libvirt-xsm  8 xen-boot                   fail pass in 157234
 test-arm64-arm64-xl-credit1   8 xen-boot                   fail pass in 157234
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 157234
 test-armhf-armhf-libvirt     18 guest-start/debian.repeat  fail pass in 157234

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl          11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit1 11 leak-check/basis(11) fail in 157234 blocked in 152332
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 157234 like 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                7059c2c00a2196865c2139083cbef47cd18109b6
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  128 days
Failing since        152366  2020-08-01 20:49:34 Z  127 days  217 attempts
Testing same since   157234  2020-12-06 03:27:14 Z    0 days    2 attempts

------------------------------------------------------------
3642 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 698500 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 02:45:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 02:45:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45858.81353 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1km6Wh-00006Y-Uq; Mon, 07 Dec 2020 02:45:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45858.81353; Mon, 07 Dec 2020 02:45:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1km6Wh-00006P-O4; Mon, 07 Dec 2020 02:45:23 +0000
Received: by outflank-mailman (input) for mailman id 45858;
 Mon, 07 Dec 2020 02:45:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZQzx=FL=ozlabs.org=dgibson@srs-us1.protection.inumbo.net>)
 id 1km6Wf-00006K-UX
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 02:45:22 +0000
Received: from ozlabs.org (unknown [203.11.71.1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3bb06536-42c9-4db5-a85f-dd97a89bae40;
 Mon, 07 Dec 2020 02:45:17 +0000 (UTC)
Received: by ozlabs.org (Postfix, from userid 1007)
 id 4Cq72d3llFz9sVx; Mon,  7 Dec 2020 13:45:13 +1100 (AEDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3bb06536-42c9-4db5-a85f-dd97a89bae40
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
	d=gibson.dropbear.id.au; s=201602; t=1607309113;
	bh=TC0GOGaKbPAwP2EkbRQ9BPtcqZhEMWFvPmP9ecxIISo=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=NUcv192BLmiuo8zL9otdtQLt0UfmEo9EO574qHkkb6rjQT6zTr7LJVVf2p3ucpts5
	 7cCe77CeGg6dbknawRx9okhPJam/pPRWj1UOXXKDU9Ofz0d7yY7SUKIGVn6xqTuM1f
	 BBPukzLSjLblB+Cwg8oG8n6FG6BUR8UK014fAN+4=
Date: Mon, 7 Dec 2020 12:38:38 +1100
From: David Gibson <david@gibson.dropbear.id.au>
To: Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@redhat.com>
Cc: qemu-devel@nongnu.org,
	Alex =?iso-8859-1?Q?Benn=E9e?= <alex.bennee@linaro.org>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
	Wainer dos Santos Moschetta <wainersm@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	Willian Rampazzo <wrampazz@redhat.com>, Paul Durrant <paul@xen.org>,
	Huacai Chen <chenhc@lemote.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Claudio Fontana <cfontana@suse.de>,
	Halil Pasic <pasic@linux.ibm.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>, Thomas Huth <thuth@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>, qemu-s390x@nongnu.org,
	Aurelien Jarno <aurelien@aurel32.net>, qemu-arm@nongnu.org
Subject: Re: [PATCH 6/8] gitlab-ci: Add KVM PPC cross-build jobs
Message-ID: <20201207013838.GA2555@yekko.fritz.box>
References: <20201206185508.3545711-1-philmd@redhat.com>
 <20201206185508.3545711-7-philmd@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="liOOAslEiF7prFVr"
Content-Disposition: inline
In-Reply-To: <20201206185508.3545711-7-philmd@redhat.com>


--liOOAslEiF7prFVr
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Sun, Dec 06, 2020 at 07:55:06PM +0100, Philippe Mathieu-Daudé wrote:
> Cross-build PPC target with KVM and TCG accelerators enabled.
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
> Later this job will build KVM-only.
> ---
>  .gitlab-ci.d/crossbuilds-kvm-ppc.yml | 5 +++++
>  .gitlab-ci.yml                       | 1 +
>  MAINTAINERS                          | 1 +
>  3 files changed, 7 insertions(+)
>  create mode 100644 .gitlab-ci.d/crossbuilds-kvm-ppc.yml

Acked-by: David Gibson <david@gibson.dropbear.id.au>

> diff --git a/.gitlab-ci.d/crossbuilds-kvm-ppc.yml b/.gitlab-ci.d/crossbuilds-kvm-ppc.yml
> new file mode 100644
> index 00000000000..9df8bcf5a73
> --- /dev/null
> +++ b/.gitlab-ci.d/crossbuilds-kvm-ppc.yml
> @@ -0,0 +1,5 @@
> +cross-ppc64el-kvm:
> +  extends: .cross_accel_build_job
> +  variables:
> +    IMAGE: debian-ppc64el-cross
> +    TARGETS: ppc64-softmmu
> diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
> index a69619d7319..024624908e8 100644
> --- a/.gitlab-ci.yml
> +++ b/.gitlab-ci.yml
> @@ -15,6 +15,7 @@ include:
>    - local: '/.gitlab-ci.d/crossbuilds-kvm-x86.yml'
>    - local: '/.gitlab-ci.d/crossbuilds-kvm-arm.yml'
>    - local: '/.gitlab-ci.d/crossbuilds-kvm-s390x.yml'
> +  - local: '/.gitlab-ci.d/crossbuilds-kvm-ppc.yml'
> 
>  .native_build_job_template: &native_build_job_definition
>    stage: build
> diff --git a/MAINTAINERS b/MAINTAINERS
> index d41401f6683..c7766782174 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -397,6 +397,7 @@ PPC KVM CPUs
>  M: David Gibson <david@gibson.dropbear.id.au>
>  S: Maintained
>  F: target/ppc/kvm.c
> +F: .gitlab-ci.d/crossbuilds-kvm-ppc.yml
> 
>  S390 KVM CPUs
>  M: Halil Pasic <pasic@linux.ibm.com>

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson


--liOOAslEiF7prFVr--


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 05:11:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 05:11:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45870.81371 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1km8o5-0006Id-F5; Mon, 07 Dec 2020 05:11:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45870.81371; Mon, 07 Dec 2020 05:11:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1km8o5-0006IW-Bj; Mon, 07 Dec 2020 05:11:29 +0000
Received: by outflank-mailman (input) for mailman id 45870;
 Mon, 07 Dec 2020 05:11:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=AUZZ=FL=redhat.com=thuth@srs-us1.protection.inumbo.net>)
 id 1km8o4-0006IR-5U
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 05:11:28 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 5f7eb3f1-d283-4f1f-ac5f-419239275a3c;
 Mon, 07 Dec 2020 05:11:26 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-508-v49AIyVENfWDHZqo9wU_CQ-1; Mon, 07 Dec 2020 00:11:24 -0500
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com
 [10.5.11.15])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 5CB29800D55;
 Mon,  7 Dec 2020 05:11:22 +0000 (UTC)
Received: from thuth.remote.csb (ovpn-112-85.ams2.redhat.com [10.36.112.85])
 by smtp.corp.redhat.com (Postfix) with ESMTP id C9D315D6AB;
 Mon,  7 Dec 2020 05:11:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f7eb3f1-d283-4f1f-ac5f-419239275a3c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607317886;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Qab4deHbU/VN7oqCq/ddeCw/xxFnxVTtrXJHAdanJ30=;
	b=Wd2KUSwnCLr0XfStrF0+coXglJjXzI4yua/+Gv+o/8GdKmFNkomTSQG8Nqwj0Me2XwAM0M
	YwCJ2w2COGhRJ8jGWgheBtEn1/bi+Xwqxj+difq0eSzhVDmCy86Ncpp31sc93amy2oDZ/q
	nuyZLbMA25pr7OZF5koYPo6hZeEL83c=
X-MC-Unique: v49AIyVENfWDHZqo9wU_CQ-1
Subject: Re: [PATCH 1/8] gitlab-ci: Replace YAML anchors by extends
 (cross_system_build_job)
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 qemu-devel@nongnu.org
Cc: =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Willian Rampazzo
 <wrampazz@redhat.com>, Paul Durrant <paul@xen.org>,
 Huacai Chen <chenhc@lemote.com>, Marcelo Tosatti <mtosatti@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Claudio Fontana <cfontana@suse.de>, Halil Pasic <pasic@linux.ibm.com>,
 Peter Maydell <peter.maydell@linaro.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Cornelia Huck <cohuck@redhat.com>, David Gibson
 <david@gibson.dropbear.id.au>, Paolo Bonzini <pbonzini@redhat.com>,
 qemu-s390x@nongnu.org, Aurelien Jarno <aurelien@aurel32.net>,
 qemu-arm@nongnu.org
References: <20201206185508.3545711-1-philmd@redhat.com>
 <20201206185508.3545711-2-philmd@redhat.com>
From: Thomas Huth <thuth@redhat.com>
Message-ID: <e5494ad3-a67e-0013-b48f-0fa82d67c397@redhat.com>
Date: Mon, 7 Dec 2020 06:11:08 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <20201206185508.3545711-2-philmd@redhat.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15

On 06/12/2020 19.55, Philippe Mathieu-Daudé wrote:
> 'extends' is an alternative to using YAML anchors
> and is a little more flexible and readable. See:
> https://docs.gitlab.com/ee/ci/yaml/#extends
> 
> More importantly, it allows splitting YAML jobs into separate files.
> 
> Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  .gitlab-ci.d/crossbuilds.yml | 40 ++++++++++++++++++------------------
>  1 file changed, 20 insertions(+), 20 deletions(-)

Reviewed-by: Thomas Huth <thuth@redhat.com>
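[Editor's note: the anchor-to-`extends` conversion discussed in this patch can be sketched as below. The job name, image path, and template name are illustrative, based on the general shape of QEMU's CI at the time, not the exact contents of the patch.]

```yaml
# Before: a YAML anchor whose body is merged into each job with '<<'.
.cross_system_build_job_template: &cross_system_build_job_definition
  stage: build
  image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest

cross-armel-system:
  <<: *cross_system_build_job_definition
  variables:
    IMAGE: debian-armel-cross

# After: the same job expressed with 'extends'. GitLab merges the keys
# of the hidden '.cross_system_build_job' into the child job, and jobs
# can extend templates defined in other included files, which anchors
# cannot do.
.cross_system_build_job:
  stage: build
  image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest

cross-armel-system:
  extends: .cross_system_build_job
  variables:
    IMAGE: debian-armel-cross
```

The second form is what makes the per-accelerator split in the rest of the series possible: `extends` resolves across `include:`d files, while `*anchor` references only work within a single YAML file.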



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 05:20:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 05:20:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45876.81383 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1km8xB-0007IU-7D; Mon, 07 Dec 2020 05:20:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45876.81383; Mon, 07 Dec 2020 05:20:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1km8xB-0007IN-3u; Mon, 07 Dec 2020 05:20:53 +0000
Received: by outflank-mailman (input) for mailman id 45876;
 Mon, 07 Dec 2020 05:20:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=AUZZ=FL=redhat.com=thuth@srs-us1.protection.inumbo.net>)
 id 1km8xA-0007II-Jw
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 05:20:52 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 5d735198-668e-45ab-ad92-3a854e251e08;
 Mon, 07 Dec 2020 05:20:51 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-383-xoOH7ohQMkiOcx7_FPWO6A-1; Mon, 07 Dec 2020 00:20:49 -0500
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com
 [10.5.11.15])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 5DEF91005504;
 Mon,  7 Dec 2020 05:20:46 +0000 (UTC)
Received: from thuth.remote.csb (ovpn-112-85.ams2.redhat.com [10.36.112.85])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 1CF885D6AB;
 Mon,  7 Dec 2020 05:20:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5d735198-668e-45ab-ad92-3a854e251e08
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607318451;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=5i5jUUW575PHEv2nQ9iBjazPdMewihveiWGYT6cOBCc=;
	b=HfV4Ww8yr14fKlE2T0NMaxsH+ItBz57KalYIE4r5FWSAdRsIVBxFK/YqYpHhKqmfEm2Xra
	mQn9UtDHoUVnVJMyRjVeGBbB5uOjNz6wM1w8d9pTnXPJtHuigC7DpdMtGPr/5BAeiVopgY
	QQFS62rHPVJ7fxvj1PW6Ct5PGtQhQVo=
X-MC-Unique: xoOH7ohQMkiOcx7_FPWO6A-1
Subject: Re: [PATCH 3/8] gitlab-ci: Add KVM X86 cross-build jobs
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 qemu-devel@nongnu.org
Cc: =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Willian Rampazzo
 <wrampazz@redhat.com>, Paul Durrant <paul@xen.org>,
 Huacai Chen <chenhc@lemote.com>, Anthony Perard <anthony.perard@citrix.com>,
 Marcelo Tosatti <mtosatti@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Claudio Fontana <cfontana@suse.de>, Halil Pasic <pasic@linux.ibm.com>,
 Peter Maydell <peter.maydell@linaro.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Cornelia Huck <cohuck@redhat.com>, David Gibson
 <david@gibson.dropbear.id.au>, Paolo Bonzini <pbonzini@redhat.com>,
 qemu-s390x@nongnu.org, Aurelien Jarno <aurelien@aurel32.net>,
 qemu-arm@nongnu.org
References: <20201206185508.3545711-1-philmd@redhat.com>
 <20201206185508.3545711-4-philmd@redhat.com>
From: Thomas Huth <thuth@redhat.com>
Message-ID: <1048bbc0-7124-3564-4219-aa32ed11a35b@redhat.com>
Date: Mon, 7 Dec 2020 06:20:34 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <20201206185508.3545711-4-philmd@redhat.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15

On 06/12/2020 19.55, Philippe Mathieu-Daudé wrote:
> Cross-build x86 target with only KVM accelerator enabled.
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  .gitlab-ci.d/crossbuilds-kvm-x86.yml | 6 ++++++
>  .gitlab-ci.yml                       | 1 +
>  MAINTAINERS                          | 1 +
>  3 files changed, 8 insertions(+)
>  create mode 100644 .gitlab-ci.d/crossbuilds-kvm-x86.yml

We already have a job that tests with KVM enabled and TCG disabled in the
main .gitlab-ci.yml file, the "build-tcg-disabled" job. So I don't quite see
the point in adding yet another job that does pretty much the same thing.
Did I miss something?

 Thomas
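[Editor's note: the distinction at issue is native vs cross-compiled builds. A rough sketch of the two job shapes follows; the "build-tcg-disabled" name comes from the email and ".cross_accel_build_job" from the patch series, but the configure flags, image, and target names here are illustrative assumptions, not verbatim CI contents.]

```yaml
# Existing native job: builds on the host architecture with TCG
# compiled out, so only KVM is available as an accelerator.
build-tcg-disabled:
  stage: build
  script:
    - ../configure --disable-tcg

# Job added by this series: the same KVM-only configuration, but
# cross-compiled inside a foreign-architecture toolchain container
# and restricted to a single softmmu target.
cross-i386-kvm:
  extends: .cross_accel_build_job
  variables:
    IMAGE: fedora-i386-cross
    TARGETS: i386-softmmu
```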



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 05:35:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 05:35:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45882.81395 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1km9BV-0008Nw-HH; Mon, 07 Dec 2020 05:35:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45882.81395; Mon, 07 Dec 2020 05:35:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1km9BV-0008Np-DF; Mon, 07 Dec 2020 05:35:41 +0000
Received: by outflank-mailman (input) for mailman id 45882;
 Mon, 07 Dec 2020 05:35:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DX/D=FL=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1km9BT-0008Nk-Rj
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 05:35:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ffb1b9f1-9769-46f7-af0b-6de233998809;
 Mon, 07 Dec 2020 05:35:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6B325AC9A;
 Mon,  7 Dec 2020 05:35:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ffb1b9f1-9769-46f7-af0b-6de233998809
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607319337; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=SlqEwKkshET0nQ8kpRs2olYJF5K/bt1gdgtQI5Gp+F4=;
	b=nqYBqpXfgqRr+33q10Krux50tmOacjuAucpK22uUjYS0RL+dB6isCfmKuww+SPDYdXwD8C
	0of14QowTzPaWvK2t210QzCVEpfAEcfqwTBEDab27UQyEVxF2WhSKaqmqybhiHBmsnmBs8
	cEmS2KOsG7t94BJqwz/aN2wIPLrZ2zM=
Subject: Re: [PATCH] Revert "xen: add helpers to allocate unpopulated memory"
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, xen-devel@lists.xenproject.org
Cc: stable@vger.kernel.org,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Simon Leiner <simon@leiner.me>,
 Yan Yankovskyi <yyankovskyi@gmail.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Dan Carpenter <dan.carpenter@oracle.com>,
 open list <linux-kernel@vger.kernel.org>,
 "open list:DRM DRIVERS FOR XEN" <dri-devel@lists.freedesktop.org>
References: <20201206172242.1249689-1-marmarek@invisiblethingslab.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <e04a91c2-3e2e-7052-14fc-9915f9cf6589@suse.com>
Date: Mon, 7 Dec 2020 06:35:35 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201206172242.1249689-1-marmarek@invisiblethingslab.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="6ekrWC7Imvf7QM0ISrUV82GBc9sxpxUVx"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--6ekrWC7Imvf7QM0ISrUV82GBc9sxpxUVx
Content-Type: multipart/mixed; boundary="eZcwrVuZPjphx6y9vxccT9GtWokI6VlsT";
 protected-headers="v1"

--eZcwrVuZPjphx6y9vxccT9GtWokI6VlsT
Content-Type: multipart/mixed;
 boundary="------------162D6435A12428CB18F39909"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------162D6435A12428CB18F39909
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 06.12.20 18:22, Marek Marczykowski-Górecki wrote:
> This reverts commit 9e2369c06c8a181478039258a4598c1ddd2cadfa.
> 
> On a Xen PV dom0, with NVME disk, this makes the dom0 crash when starting
> a domain. This looks like some bad interaction between xen-blkback and

xen-scsiback has the same use pattern.

> NVME driver, both using ZONE_DEVICE. Since the author is on leave now,
> revert the change until proper solution is developed.
>=20
> The specific crash message is:
>=20
>      general protection fault, probably for non-canonical address 0xdea=
d000000000100: 0000 [#1] SMP NOPTI
>      CPU: 1 PID: 134 Comm: kworker/u12:2 Not tainted 5.9.9-1.qubes.x86_=
64 #1
>      Hardware name: LENOVO 20M9CTO1WW/20M9CTO1WW, BIOS N2CET50W (1.33 )=
 01/15/2020
>      Workqueue: dm-thin do_worker [dm_thin_pool]
>      RIP: e030:nvme_map_data+0x300/0x3a0 [nvme]
>      Code: b8 fe ff ff e9 a8 fe ff ff 4c 8b 56 68 8b 5e 70 8b 76 74 49 =
8b 02 48 c1 e8 33 83 e0 07 83 f8 04 0f 85 f2 fe ff ff 49 8b 42 08 <83> b8=
 d0 00 00 00 04 0f 85 e1 fe ff ff e9 38 fd ff ff 8b 55 70 be
>      RSP: e02b:ffffc900010e7ad8 EFLAGS: 00010246
>      RAX: dead000000000100 RBX: 0000000000001000 RCX: ffff8881a58f5000
>      RDX: 0000000000001000 RSI: 0000000000000000 RDI: ffff8881a679e000
>      RBP: ffff8881a5ef4c80 R08: ffff8881a5ef4c80 R09: 0000000000000002
>      R10: ffffea0003dfff40 R11: 0000000000000008 R12: ffff8881a679e000
>      R13: ffffc900010e7b20 R14: ffff8881a70b5980 R15: ffff8881a679e000
>      FS:  0000000000000000(0000) GS:ffff8881b5440000(0000) knlGS:0000000000000000
>      CS:  e030 DS: 0000 ES: 0000 CR0: 0000000080050033
>      CR2: 0000000001d64408 CR3: 00000001aa2c0000 CR4: 0000000000050660
>      Call Trace:
>       nvme_queue_rq+0xa7/0x1a0 [nvme]
>       __blk_mq_try_issue_directly+0x11d/0x1e0
>       ? add_wait_queue_exclusive+0x70/0x70
>       blk_mq_try_issue_directly+0x35/0xc0
>       blk_mq_submit_bio+0x58f/0x660
>       __submit_bio_noacct+0x300/0x330
>       process_shared_bio+0x126/0x1b0 [dm_thin_pool]
>       process_cell+0x226/0x280 [dm_thin_pool]
>       process_thin_deferred_cells+0x185/0x320 [dm_thin_pool]
>       process_deferred_bios+0xa4/0x2a0 [dm_thin_pool]
>       do_worker+0xcc/0x130 [dm_thin_pool]
>       process_one_work+0x1b4/0x370
>       worker_thread+0x4c/0x310
>       ? process_one_work+0x370/0x370
>       kthread+0x11b/0x140
>       ? __kthread_bind_mask+0x60/0x60
>       ret_from_fork+0x22/0x30
>      Modules linked in: loop snd_seq_dummy snd_hrtimer nf_tables nfnetlink vfat fat snd_sof_pci snd_sof_intel_byt snd_sof_intel_ipc snd_sof_intel_hda_common snd_soc_hdac_hda snd_sof_xtensa_dsp snd_sof_intel_hda snd_sof snd_soc_skl snd_soc_sst_
>      ipc snd_soc_sst_dsp snd_hda_ext_core snd_soc_acpi_intel_match snd_soc_acpi snd_soc_core snd_compress ac97_bus snd_pcm_dmaengine elan_i2c snd_hda_codec_hdmi mei_hdcp iTCO_wdt intel_powerclamp intel_pmc_bxt ee1004 intel_rapl_msr iTCO_vendor
>      _support joydev pcspkr intel_wmi_thunderbolt wmi_bmof thunderbolt ucsi_acpi idma64 typec_ucsi snd_hda_codec_realtek typec snd_hda_codec_generic snd_hda_intel snd_intel_dspcfg snd_hda_codec thinkpad_acpi snd_hda_core ledtrig_audio int3403_
>      thermal snd_hwdep snd_seq snd_seq_device snd_pcm iwlwifi snd_timer processor_thermal_device mei_me cfg80211 intel_rapl_common snd e1000e mei int3400_thermal int340x_thermal_zone i2c_i801 acpi_thermal_rel soundcore intel_soc_dts_iosf i2c_s
>      mbus rfkill intel_pch_thermal xenfs
>       ip_tables dm_thin_pool dm_persistent_data dm_bio_prison dm_crypt nouveau rtsx_pci_sdmmc mmc_core mxm_wmi crct10dif_pclmul ttm crc32_pclmul crc32c_intel i915 ghash_clmulni_intel i2c_algo_bit serio_raw nvme drm_kms_helper cec xhci_pci nvme
>      _core rtsx_pci xhci_pci_renesas drm xhci_hcd wmi video pinctrl_cannonlake pinctrl_intel xen_privcmd xen_pciback xen_blkback xen_gntalloc xen_gntdev xen_evtchn uinput
>      ---[ end trace f8d47e4aa6724df4 ]---
>      RIP: e030:nvme_map_data+0x300/0x3a0 [nvme]
>      Code: b8 fe ff ff e9 a8 fe ff ff 4c 8b 56 68 8b 5e 70 8b 76 74 49 8b 02 48 c1 e8 33 83 e0 07 83 f8 04 0f 85 f2 fe ff ff 49 8b 42 08 <83> b8 d0 00 00 00 04 0f 85 e1 fe ff ff e9 38 fd ff ff 8b 55 70 be
>      RSP: e02b:ffffc900010e7ad8 EFLAGS: 00010246
>      RAX: dead000000000100 RBX: 0000000000001000 RCX: ffff8881a58f5000
>      RDX: 0000000000001000 RSI: 0000000000000000 RDI: ffff8881a679e000
>      RBP: ffff8881a5ef4c80 R08: ffff8881a5ef4c80 R09: 0000000000000002
>      R10: ffffea0003dfff40 R11: 0000000000000008 R12: ffff8881a679e000
>      R13: ffffc900010e7b20 R14: ffff8881a70b5980 R15: ffff8881a679e000
>      FS:  0000000000000000(0000) GS:ffff8881b5440000(0000) knlGS:0000000000000000
>      CS:  e030 DS: 0000 ES: 0000 CR0: 0000000080050033
>      CR2: 0000000001d64408 CR3: 00000001aa2c0000 CR4: 0000000000050660
>      Kernel panic - not syncing: Fatal exception
>      Kernel Offset: disabled
>
> Discussion at https://lore.kernel.org/xen-devel/20201205082839.ts3ju6yta46cgwjn@Air-de-Roger/T
>=20
> Cc: stable@vger.kernel.org #v5.9+
> (for 5.9 it's easier to revert the original commit directly)
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Acked-by: Juergen Gross <jgross@suse.com>


Juergen

--------------162D6435A12428CB18F39909--

--eZcwrVuZPjphx6y9vxccT9GtWokI6VlsT--


--6ekrWC7Imvf7QM0ISrUV82GBc9sxpxUVx--


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 05:36:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 05:36:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45886.81407 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1km9CL-0008UJ-Ql; Mon, 07 Dec 2020 05:36:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45886.81407; Mon, 07 Dec 2020 05:36:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1km9CL-0008UC-Mw; Mon, 07 Dec 2020 05:36:33 +0000
Received: by outflank-mailman (input) for mailman id 45886;
 Mon, 07 Dec 2020 05:36:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1km9CK-0008U2-Ot; Mon, 07 Dec 2020 05:36:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1km9CK-00076F-Ei; Mon, 07 Dec 2020 05:36:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1km9CK-0002ZF-2r; Mon, 07 Dec 2020 05:36:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1km9CK-0007XD-2I; Mon, 07 Dec 2020 05:36:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0ghqm+vA+QGb0qm9+4nvbsTVyZpMy/jtopZhxtxIoSE=; b=TGBS3qCUuw/JFEPndXyedbvA2A
	Ogcx4giXnzOyccTFvTP/ekJ8xUcSCmR/wGrzgFQk3DzJjIp5m2ob4XYY6e5CFBd1pYspg3T3o6Zrw
	auJAMWdQAZJmb7skaJhr1kXVMp828+xEsBmVyyKJqyR8KopIGG/7ytrZjhRBHg3uI7GU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157245-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157245: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-localmigrate/x10:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 07 Dec 2020 05:36:32 +0000

flight 157245 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157245/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt 18 guest-start/debian.repeat fail in 157239 pass in 157245
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 18 guest-localmigrate/x10 fail pass in 157239

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 157239 like 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  108 days
Failing since        152659  2020-08-21 14:07:39 Z  107 days  224 attempts
Testing same since   157142  2020-12-01 20:39:57 Z    5 days   10 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69355 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 05:41:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 05:41:56 +0000
Subject: Re: [PATCH 4/8] gitlab-ci: Add KVM ARM cross-build jobs
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 qemu-devel@nongnu.org
Cc: =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Willian Rampazzo
 <wrampazz@redhat.com>, Paul Durrant <paul@xen.org>,
 Huacai Chen <chenhc@lemote.com>, Anthony Perard <anthony.perard@citrix.com>,
 Marcelo Tosatti <mtosatti@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Claudio Fontana <cfontana@suse.de>, Halil Pasic <pasic@linux.ibm.com>,
 Peter Maydell <peter.maydell@linaro.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Cornelia Huck <cohuck@redhat.com>, David Gibson
 <david@gibson.dropbear.id.au>, Paolo Bonzini <pbonzini@redhat.com>,
 qemu-s390x@nongnu.org, Aurelien Jarno <aurelien@aurel32.net>,
 qemu-arm@nongnu.org
References: <20201206185508.3545711-1-philmd@redhat.com>
 <20201206185508.3545711-5-philmd@redhat.com>
From: Thomas Huth <thuth@redhat.com>
Message-ID: <2a75c6ea-013d-896e-8478-2312957d3ed2@redhat.com>
Date: Mon, 7 Dec 2020 06:41:31 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <20201206185508.3545711-5-philmd@redhat.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 06/12/2020 19.55, Philippe Mathieu-Daudé wrote:
> Cross-build ARM aarch64 target with KVM and TCG accelerators enabled.
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
> later this job will build KVM-only.
> ---
>  .gitlab-ci.d/crossbuilds-kvm-arm.yml | 5 +++++
>  .gitlab-ci.yml                       | 1 +
>  MAINTAINERS                          | 1 +
>  3 files changed, 7 insertions(+)
>  create mode 100644 .gitlab-ci.d/crossbuilds-kvm-arm.yml
> 
> diff --git a/.gitlab-ci.d/crossbuilds-kvm-arm.yml b/.gitlab-ci.d/crossbuilds-kvm-arm.yml
> new file mode 100644
> index 00000000000..c74c6fdc9fb
> --- /dev/null
> +++ b/.gitlab-ci.d/crossbuilds-kvm-arm.yml
> @@ -0,0 +1,5 @@
> +cross-arm64-kvm:
> +  extends: .cross_accel_build_job
> +  variables:
> +    IMAGE: debian-arm64-cross
> +    TARGETS: aarch64-softmmu

Now that's a little bit surprising; I had expected that the KVM code was
already compiled by the "cross-arm64-system" job ... but looking at the
output of a corresponding pipeline, it says "KVM support: NO", see e.g.:

https://gitlab.com/qemu-project/qemu/-/jobs/883985039#L298

What's going wrong there? ... ah, well, it's because of the
"--target-list-exclude=aarch64-softmmu" in the template :-(
That was stupid. So instead of adding a new job, could you please simply
replace the aarch64-softmmu there with arm-softmmu?
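
Concretely, the suggestion is a one-word change to the exclude option in the
existing system-build template rather than a new job. A hypothetical sketch
(the real template lives in .gitlab-ci.d/crossbuilds.yml; the surrounding key
names here are assumptions, not the actual file contents):

```yaml
# Sketch only: key names are assumptions, not the actual template.
cross-arm64-system:
  extends: .cross_system_build_job
  variables:
    IMAGE: debian-arm64-cross
    # Excluding arm-softmmu instead of aarch64-softmmu lets this job
    # compile the aarch64 target, which is the one with KVM support.
    EXTRA_CONFIGURE_ARGS: --target-list-exclude=arm-softmmu
```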

 Thanks,
  Thomas



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 05:46:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 05:46:22 +0000
Subject: Re: [PATCH 5/8] gitlab-ci: Add KVM s390x cross-build jobs
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 qemu-devel@nongnu.org
Cc: =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Willian Rampazzo
 <wrampazz@redhat.com>, Paul Durrant <paul@xen.org>,
 Huacai Chen <chenhc@lemote.com>, Anthony Perard <anthony.perard@citrix.com>,
 Marcelo Tosatti <mtosatti@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Claudio Fontana <cfontana@suse.de>, Halil Pasic <pasic@linux.ibm.com>,
 Peter Maydell <peter.maydell@linaro.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Cornelia Huck <cohuck@redhat.com>, David Gibson
 <david@gibson.dropbear.id.au>, Paolo Bonzini <pbonzini@redhat.com>,
 qemu-s390x@nongnu.org, Aurelien Jarno <aurelien@aurel32.net>
References: <20201206185508.3545711-1-philmd@redhat.com>
 <20201206185508.3545711-6-philmd@redhat.com>
From: Thomas Huth <thuth@redhat.com>
Message-ID: <66d4d0ab-2bb5-1284-b08a-43c6c30f30dc@redhat.com>
Date: Mon, 7 Dec 2020 06:46:01 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <20201206185508.3545711-6-philmd@redhat.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 06/12/2020 19.55, Philippe Mathieu-Daudé wrote:
> Cross-build s390x target with only KVM accelerator enabled.
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  .gitlab-ci.d/crossbuilds-kvm-s390x.yml | 6 ++++++
>  .gitlab-ci.yml                         | 1 +
>  MAINTAINERS                            | 1 +
>  3 files changed, 8 insertions(+)
>  create mode 100644 .gitlab-ci.d/crossbuilds-kvm-s390x.yml
> 
> diff --git a/.gitlab-ci.d/crossbuilds-kvm-s390x.yml b/.gitlab-ci.d/crossbuilds-kvm-s390x.yml
> new file mode 100644
> index 00000000000..1731af62056
> --- /dev/null
> +++ b/.gitlab-ci.d/crossbuilds-kvm-s390x.yml
> @@ -0,0 +1,6 @@
> +cross-s390x-kvm:
> +  extends: .cross_accel_build_job
> +  variables:
> +    IMAGE: debian-s390x-cross
> +    TARGETS: s390x-softmmu
> +    ACCEL_CONFIGURE_OPTS: --disable-tcg
> diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
> index 573afceb3c7..a69619d7319 100644
> --- a/.gitlab-ci.yml
> +++ b/.gitlab-ci.yml
> @@ -14,6 +14,7 @@ include:
>    - local: '/.gitlab-ci.d/crossbuilds.yml'
>    - local: '/.gitlab-ci.d/crossbuilds-kvm-x86.yml'
>    - local: '/.gitlab-ci.d/crossbuilds-kvm-arm.yml'
> +  - local: '/.gitlab-ci.d/crossbuilds-kvm-s390x.yml'

The KVM code is already covered by the "cross-s390x-system" job, but an
additional compilation test with --disable-tcg makes sense here. I'd then
rather name it "cross-s390x-no-tcg" or so instead of "cross-s390x-kvm".

And while you're at it, I'd maybe rather name the new file just
crossbuilds-s390x.yml and also move the other s390x-related jobs into it?
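
Putting both suggestions together, the job from the patch would end up looking
something like this sketch (the variables are the ones from the quoted diff;
only the job name and file location change):

```yaml
# In the proposed .gitlab-ci.d/crossbuilds-s390x.yml
cross-s390x-no-tcg:
  extends: .cross_accel_build_job
  variables:
    IMAGE: debian-s390x-cross
    TARGETS: s390x-softmmu
    # With TCG disabled, KVM is the only accelerator left to compile.
    ACCEL_CONFIGURE_OPTS: --disable-tcg
```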

 Thomas



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 05:57:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 05:57:57 +0000
Subject: Re: [PATCH 6/8] gitlab-ci: Add KVM PPC cross-build jobs
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 qemu-devel@nongnu.org
Cc: =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Willian Rampazzo
 <wrampazz@redhat.com>, Paul Durrant <paul@xen.org>,
 Huacai Chen <chenhc@lemote.com>, Anthony Perard <anthony.perard@citrix.com>,
 Marcelo Tosatti <mtosatti@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Claudio Fontana <cfontana@suse.de>, Halil Pasic <pasic@linux.ibm.com>,
 Peter Maydell <peter.maydell@linaro.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Cornelia Huck <cohuck@redhat.com>, David Gibson
 <david@gibson.dropbear.id.au>, Paolo Bonzini <pbonzini@redhat.com>,
 qemu-s390x@nongnu.org, Aurelien Jarno <aurelien@aurel32.net>,
 qemu-arm@nongnu.org
References: <20201206185508.3545711-1-philmd@redhat.com>
 <20201206185508.3545711-7-philmd@redhat.com>
From: Thomas Huth <thuth@redhat.com>
Message-ID: <ffafcb2d-c32e-95ef-82c7-20bf5c366df7@redhat.com>
Date: Mon, 7 Dec 2020 06:57:31 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <20201206185508.3545711-7-philmd@redhat.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 06/12/2020 19.55, Philippe Mathieu-Daudé wrote:
> Cross-build PPC target with KVM and TCG accelerators enabled.
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
> later this job will build KVM-only.
> ---
>  .gitlab-ci.d/crossbuilds-kvm-ppc.yml | 5 +++++
>  .gitlab-ci.yml                       | 1 +
>  MAINTAINERS                          | 1 +
>  3 files changed, 7 insertions(+)
>  create mode 100644 .gitlab-ci.d/crossbuilds-kvm-ppc.yml
> 
> diff --git a/.gitlab-ci.d/crossbuilds-kvm-ppc.yml b/.gitlab-ci.d/crossbuilds-kvm-ppc.yml
> new file mode 100644
> index 00000000000..9df8bcf5a73
> --- /dev/null
> +++ b/.gitlab-ci.d/crossbuilds-kvm-ppc.yml
> @@ -0,0 +1,5 @@
> +cross-ppc64el-kvm:
> +  extends: .cross_accel_build_job
> +  variables:
> +    IMAGE: debian-ppc64el-cross
> +    TARGETS: ppc64-softmmu

Compilation of the ppc KVM code should already be covered by the
cross-ppc64el-system job, see e.g.:

https://gitlab.com/qemu-project/qemu/-/jobs/883985074#L297

Thus there is no need to add a new job for this here. It might be a good
idea to remove ppc64-softmmu from the exclude list in crossbuilds.yml,
though, so that we also check the 64-bit builds and not only the 32-bit ones.
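
A sketch of that alternative (the variable name and surrounding keys in
crossbuilds.yml are assumptions; only dropping ppc64-softmmu from the exclude
list is the actual suggestion):

```yaml
# Hypothetical fragment of .gitlab-ci.d/crossbuilds.yml
cross-ppc64el-system:
  extends: .cross_system_build_job
  variables:
    IMAGE: debian-ppc64el-cross
    # With ppc64-softmmu removed from the exclude list, this job also
    # compile-tests the 64-bit target (including its KVM code).
```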

 Thomas



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 05:59:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 05:59:14 +0000
Subject: Re: [PATCH 7/8] gitlab-ci: Add KVM MIPS cross-build jobs
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 qemu-devel@nongnu.org
Cc: =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Willian Rampazzo
 <wrampazz@redhat.com>, Paul Durrant <paul@xen.org>,
 Huacai Chen <chenhc@lemote.com>, Anthony Perard <anthony.perard@citrix.com>,
 Marcelo Tosatti <mtosatti@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Claudio Fontana <cfontana@suse.de>, Halil Pasic <pasic@linux.ibm.com>,
 Peter Maydell <peter.maydell@linaro.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Cornelia Huck <cohuck@redhat.com>, David Gibson
 <david@gibson.dropbear.id.au>, Paolo Bonzini <pbonzini@redhat.com>,
 qemu-s390x@nongnu.org, Aurelien Jarno <aurelien@aurel32.net>,
 qemu-arm@nongnu.org
References: <20201206185508.3545711-1-philmd@redhat.com>
 <20201206185508.3545711-8-philmd@redhat.com>
From: Thomas Huth <thuth@redhat.com>
Message-ID: <112e7a72-1269-2df5-e573-74963db7396a@redhat.com>
Date: Mon, 7 Dec 2020 06:58:55 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <20201206185508.3545711-8-philmd@redhat.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 06/12/2020 19.55, Philippe Mathieu-Daudé wrote:
> Cross-build mips target with KVM and TCG accelerators enabled.
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
> later we'll build KVM-only.
> ---
>  .gitlab-ci.d/crossbuilds-kvm-mips.yml | 5 +++++
>  .gitlab-ci.yml                        | 1 +
>  MAINTAINERS                           | 1 +
>  3 files changed, 7 insertions(+)
>  create mode 100644 .gitlab-ci.d/crossbuilds-kvm-mips.yml
> 
> diff --git a/.gitlab-ci.d/crossbuilds-kvm-mips.yml b/.gitlab-ci.d/crossbuilds-kvm-mips.yml
> new file mode 100644
> index 00000000000..81eeeb315bb
> --- /dev/null
> +++ b/.gitlab-ci.d/crossbuilds-kvm-mips.yml
> @@ -0,0 +1,5 @@
> +cross-mips64el-kvm:
> +  extends: .cross_accel_build_job
> +  variables:
> +    IMAGE: debian-mips64el-cross
> +    TARGETS: mips64el-softmmu

That's already covered, see:

https://gitlab.com/qemu-project/qemu/-/jobs/883985068#L296

 Thomas



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 07:55:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 07:55:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45926.81470 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmBM7-00052j-2J; Mon, 07 Dec 2020 07:54:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45926.81470; Mon, 07 Dec 2020 07:54:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmBM6-00052c-V5; Mon, 07 Dec 2020 07:54:46 +0000
Received: by outflank-mailman (input) for mailman id 45926;
 Mon, 07 Dec 2020 07:54:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmBM5-00052X-Ar
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 07:54:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ebc1c104-9c37-4d43-93fd-6dc86b92bdf8;
 Mon, 07 Dec 2020 07:54:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 36D7DACC4;
 Mon,  7 Dec 2020 07:54:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ebc1c104-9c37-4d43-93fd-6dc86b92bdf8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607327680; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=dBW+RZrThsYzUdoYBFqueVw1sHDq20n8t5u2fozEwB8=;
	b=GR/EBMuhpdyFpNxUrwke6Li2u7hnyLdqDIIuPMAAug71kBgxebvxWUCVByFvhbmw/xjpba
	OIlxbRAFX9v1K80823wwf/3BI0hAffIu3Xw7+oW5mvEbciVPGc8ltdkRCWBJ4nxqls8u9D
	f9GhtcT5zi9YzzfeGfDukTf2KIRcXlU=
Subject: Re: [PATCH v2 14/17] xen/hypfs: add support for id-based dynamic
 directories
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-15-jgross@suse.com>
 <369bcb0b-5554-8976-d3fe-5066b3d7cdce@suse.com>
 <774ca9f3-3bbe-817f-5ecb-76054aa619f5@suse.com>
 <f81a011d-101c-29e7-cba2-0b52506cc027@suse.com>
 <181448f7-fffb-ee5d-b420-40500bdb608d@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8094a301-6334-2504-cf2a-87629098f8ed@suse.com>
Date: Mon, 7 Dec 2020 08:54:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <181448f7-fffb-ee5d-b420-40500bdb608d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 04.12.2020 14:08, Jürgen Groß wrote:
> On 04.12.20 10:16, Jan Beulich wrote:
>> On 04.12.2020 09:52, Jürgen Groß wrote:
>>> On 03.12.20 16:44, Jan Beulich wrote:
>>>> On 01.12.2020 09:21, Juergen Gross wrote:
>>>>> --- a/xen/common/hypfs.c
>>>>> +++ b/xen/common/hypfs.c
>>>>> @@ -355,6 +355,81 @@ unsigned int hypfs_getsize(const struct hypfs_entry *entry)
>>>>>        return entry->size;
>>>>>    }
>>>>>    
>>>>> +int hypfs_read_dyndir_id_entry(const struct hypfs_entry_dir *template,
>>>>> +                               unsigned int id, bool is_last,
>>>>> +                               XEN_GUEST_HANDLE_PARAM(void) *uaddr)
>>>>> +{
>>>>> +    struct xen_hypfs_dirlistentry direntry;
>>>>> +    char name[HYPFS_DYNDIR_ID_NAMELEN];
>>>>> +    unsigned int e_namelen, e_len;
>>>>> +
>>>>> +    e_namelen = snprintf(name, sizeof(name), template->e.name, id);
>>>>> +    e_len = DIRENTRY_SIZE(e_namelen);
>>>>> +    direntry.e.pad = 0;
>>>>> +    direntry.e.type = template->e.type;
>>>>> +    direntry.e.encoding = template->e.encoding;
>>>>> +    direntry.e.content_len = template->e.funcs->getsize(&template->e);
>>>>> +    direntry.e.max_write_len = template->e.max_size;
>>>>> +    direntry.off_next = is_last ? 0 : e_len;
>>>>> +    if ( copy_to_guest(*uaddr, &direntry, 1) )
>>>>> +        return -EFAULT;
>>>>> +    if ( copy_to_guest_offset(*uaddr, DIRENTRY_NAME_OFF, name,
>>>>> +                              e_namelen + 1) )
>>>>> +        return -EFAULT;
>>>>> +
>>>>> +    guest_handle_add_offset(*uaddr, e_len);
>>>>> +
>>>>> +    return 0;
>>>>> +}
>>>>> +
>>>>> +static struct hypfs_entry *hypfs_dyndir_findentry(
>>>>> +    const struct hypfs_entry_dir *dir, const char *name, unsigned int name_len)
>>>>> +{
>>>>> +    const struct hypfs_dyndir_id *data;
>>>>> +
>>>>> +    data = hypfs_get_dyndata();
>>>>> +
>>>>> +    /* Use template with original findentry function. */
>>>>> +    return data->template->e.funcs->findentry(data->template, name, name_len);
>>>>> +}
>>>>> +
>>>>> +static int hypfs_read_dyndir(const struct hypfs_entry *entry,
>>>>> +                             XEN_GUEST_HANDLE_PARAM(void) uaddr)
>>>>> +{
>>>>> +    const struct hypfs_dyndir_id *data;
>>>>> +
>>>>> +    data = hypfs_get_dyndata();
>>>>> +
>>>>> +    /* Use template with original read function. */
>>>>> +    return data->template->e.funcs->read(&data->template->e, uaddr);
>>>>> +}
>>>>> +
>>>>> +struct hypfs_entry *hypfs_gen_dyndir_entry_id(
>>>>> +    const struct hypfs_entry_dir *template, unsigned int id)
>>>>> +{
>>>>> +    struct hypfs_dyndir_id *data;
>>>>> +
>>>>> +    data = hypfs_get_dyndata();
>>>>> +
>>>>> +    data->template = template;
>>>>> +    data->id = id;
>>>>> +    snprintf(data->name, sizeof(data->name), template->e.name, id);
>>>>> +    data->dir = *template;
>>>>> +    data->dir.e.name = data->name;
>>>>
>>>> I'm somewhat puzzled, if not confused, by the apparent redundancy
>>>> of this name generation with that in hypfs_read_dyndir_id_entry().
>>>> Wasn't the idea to be able to use generic functions on these
>>>> generated entries?
>>>
>>> I can add a macro replacing the double snprintf().
>>
>> That wasn't my point. I'm concerned about there being two name generation
>> sites in the first place. Is this perhaps simply some form of
>> optimization, avoiding having hypfs_read_dyndir_id_entry() call
>> hypfs_gen_dyndir_entry_id() (but risking the two going out of sync)?
> 
> Be aware that hypfs_read_dyndir_id_entry() is generating a struct
> xen_hypfs_dirlistentry, which is different from the internal
> representation of the data produced by hypfs_gen_dyndir_entry_id().
> 
> So the main common part is the name generation. What else would you
> want apart from making it common via e.g. a macro? Letting
> hypfs_read_dyndir_id_entry() call hypfs_gen_dyndir_entry_id() would
> just be a more general approach, with all the data of
> hypfs_gen_dyndir_entry_id() except the generated name either dropped
> or copied around a second time.

IOW just an optimization, as I was assuming. Whether you macroize the
name generation I'd like to leave up to you. But could you please add
comments on both sides noting the parts which need to remain in sync?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 08:02:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 08:02:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45938.81482 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmBTU-0006Zt-5n; Mon, 07 Dec 2020 08:02:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45938.81482; Mon, 07 Dec 2020 08:02:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmBTU-0006Zm-2U; Mon, 07 Dec 2020 08:02:24 +0000
Received: by outflank-mailman (input) for mailman id 45938;
 Mon, 07 Dec 2020 08:02:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmBTS-0006Zh-Ne
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 08:02:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d7debd77-597b-4bb4-93cf-dbf0d510bb86;
 Mon, 07 Dec 2020 08:02:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 096ACAC9A;
 Mon,  7 Dec 2020 08:02:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d7debd77-597b-4bb4-93cf-dbf0d510bb86
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607328140; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=8jDy2kB4A2gHHrNfM6JAX0OgtL3+GUlngGaCrsMy/xo=;
	b=YpUOcMadl/5Tzer610sIxp88uh0eVS75GdP8TKuFJzQxFJDskZAOYyAqE3J7TBaJ5dnZtK
	or7hu1WhKnZQ0MjwxOdal2aEmalLMawu4JScm5KLMBfaPvneTgMvaR+09jeMqwz0XodGJi
	W2iz37KzUDmSmVHS6Ief3IxLnnhYYgA=
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <lengyelt@ainfosec.com>,
 Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d821c715-966a-b48b-a877-c5dac36822f0@suse.com>
 <17c90493-b438-fbc1-ca10-3bc4d89c4e5e@xen.org>
 <7a768bcd-80c1-d193-8796-7fb6720fa22a@suse.com>
 <1a8250f5-ea49-ac3a-e992-be7ec40deba9@xen.org>
 <269f9a2d-7a8d-cba2-801f-6d3b12f9455f@suse.com>
 <02a2b77f-27a9-b1b6-1acf-1f136cffdf30@xen.org>
 <48395363-ea47-9139-011e-233d92581a71@suse.com>
 <2edfc711-d8d9-4854-94a2-2d9e4d9902ec@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <381cbc5b-29e8-d84d-0b7c-e84de82bc1a4@suse.com>
Date: Mon, 7 Dec 2020 09:02:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <2edfc711-d8d9-4854-94a2-2d9e4d9902ec@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04.12.2020 16:09, Julien Grall wrote:
> On 04/12/2020 12:01, Jan Beulich wrote:
>> On 04.12.2020 12:51, Julien Grall wrote:
>>> On 04/12/2020 11:48, Jan Beulich wrote:
>>>> On 04.12.2020 12:28, Julien Grall wrote:
>>>>> On 03/12/2020 10:09, Jan Beulich wrote:
>>>>>> On 02.12.2020 22:10, Julien Grall wrote:
>>>>>>> So shouldn't we handle this issue properly in VM event?
>>>>>>
>>>>>> I suppose that's a question to the VM event folks rather than me?
>>>>>
>>>>> Yes. From my understanding of Tamas's e-mail, they are relying on the
>>>>> monitoring software to do the right thing.
>>>>>
>>>>> I will refrain from commenting on this approach. However, given that the
>>>>> race is much wider than the event channel, I would recommend not adding
>>>>> more code in the event channel to deal with such a problem.
>>>>>
>>>>> Instead, this should be fixed in the VM event code when someone has time
>>>>> to harden the subsystem.
>>>>
>>>> Are you effectively saying I should now undo the addition of the
>>>> refcounting, which was added in response to feedback from you?
>>>
>>> Please point out where I made the request to use the refcounting...
>>
>> You didn't ask for this directly, sure, but ...
>>
>>> I pointed out there was an issue with the VM event code.
>>
>> ... this has ultimately led to the decision to use refcounting
>> (iirc there was one alternative that I had proposed, besides
>> the option of doing nothing).
> 
> One other option that was discussed (maybe only on security@xen.org) is 
> to move the spinlock outside of the structure so it is always allocated.

Oh, right - I forgot about that one, because that's not something I would
ever have taken on actually carrying out.

>>> This was later
>>> analysed as a wider issue. The VM event folks don't seem to be very
>>> concerned about the race, so I don't see a reason to try to fix it in the
>>> event channel code.
>>
>> And you won't need the refcount for vpl011 then?
> 
> I don't believe we need it for the vpl011 as the spin lock protecting 
> the code should always be allocated. The problem today is the lock is 
> initialized too late.
> 
>> I can certainly
>> drop it again, but it feels odd to go back to an earlier version
>> under the circumstances ...
> 
> The code introduced doesn't look necessary outside of the VM event code.
> So I think it would be wrong to merge it if it is just papering over a 
> bigger problem.

So to translate this to a clear course of action: You want me to
go back to the earlier version by dropping the refcounting again?
(I don't view this as "papering over" btw, but a tiny step towards
a solution.)

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 08:19:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 08:19:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45944.81494 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmBjv-0007ja-PK; Mon, 07 Dec 2020 08:19:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45944.81494; Mon, 07 Dec 2020 08:19:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmBjv-0007jT-MA; Mon, 07 Dec 2020 08:19:23 +0000
Received: by outflank-mailman (input) for mailman id 45944;
 Mon, 07 Dec 2020 08:19:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmBju-0007jO-8d
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 08:19:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id be6ed0e2-1872-4d5b-8328-d42a78ee3889;
 Mon, 07 Dec 2020 08:19:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 16AD2AC90;
 Mon,  7 Dec 2020 08:19:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: be6ed0e2-1872-4d5b-8328-d42a78ee3889
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607329157; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wMkmq+g/A3EBOnLccszRtxejoRb5qd6/17yIFQ9MaKM=;
	b=bfdRrQK+Fm0Xhc+bApwgaiBWEUZ8sSRVW/x3R2bpurD3Ylz50bu01A0zb/QOKVMKYJWt4I
	ulbUEa1E3vGY53PofUGLkks9Q8wT4qeWxh75Lb+7Qk9A0qTfOFqkTVaYfD5qaxOc1AA6lD
	Nv0dpweDIndNaEhAgqinitpMYJo2dX4=
Subject: Re: [PATCH] vpci/msix: exit early if MSI-X is disabled
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Manuel Bouyer <bouyer@antioche.eu.org>, xen-devel@lists.xenproject.org
References: <20201201174014.27878-1-roger.pau@citrix.com>
 <dfc96aa9-c39f-177c-c8f8-af18b80804de@suse.com>
 <cdb2a1ae-9ee7-6661-b69f-d2faacef2c12@suse.com>
 <20201206111548.nzefo2fx6bvspuj5@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <72ff2db0-cf18-a122-bf6d-ae792887c5f7@suse.com>
Date: Mon, 7 Dec 2020 09:19:19 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201206111548.nzefo2fx6bvspuj5@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 06.12.2020 12:15, Roger Pau Monné wrote:
> On Thu, Dec 03, 2020 at 02:40:28PM +0100, Jan Beulich wrote:
>> On 02.12.2020 09:38, Jan Beulich wrote:
>>> On 01.12.2020 18:40, Roger Pau Monne wrote:
>>>> --- a/xen/drivers/vpci/msix.c
>>>> +++ b/xen/drivers/vpci/msix.c
>>>> @@ -357,7 +357,11 @@ static int msix_write(struct vcpu *v, unsigned long addr, unsigned int len,
>>>>           * so that it picks the new state.
>>>>           */
>>>>          entry->masked = new_masked;
>>>> -        if ( !new_masked && msix->enabled && !msix->masked && entry->updated )
>>>> +
>>>> +        if ( !msix->enabled )
>>>> +            break;
>>>> +
>>>> +        if ( !new_masked && !msix->masked && entry->updated )
>>>>          {
>>>>              /*
>>>>               * If MSI-X is enabled, the function mask is not active, the entry
>>>
>>> What about a "disabled" -> "enabled-but-masked" transition? This,
>>> afaict, similarly won't trigger setting up of entries from
>>> control_write(), and hence I'd expect the ASSERT() to similarly
>>> trigger when subsequently an entry's mask bit gets altered.
> 
> This would only happen if the user hasn't written to the entry address
> or data fields since initialization; otherwise the updated field would be
> set, and then, when the entry mask bit is cleared via
> PCI_MSIX_ENTRY_VECTOR_CTRL_OFFSET, the entry will be properly set up.

Right, but how does that differ from writes happening here when
!msix->enabled? All I'm saying is that all possible cases leading
to the "else" of this "if" need to be equally considered. Hence
my alternative patch.

>>> I'd also be fine making this further adjustment, if you agree,
>>> but the one thing I haven't been able to fully convince myself of
>>> is that there's then still no need to set ->updated to true.
>>
>> I've taken another look. I think setting ->updated (or something
>> equivalent) is needed in that case, in order to not lose the
>> setting of the entry mask bit. However, this would only defer the
>> problem to control_write(): This would now need to call
>> vpci_msix_arch_mask_entry() under suitable conditions, but avoid
>> calling it when the entry is disabled or was never set up.
> 
> If the entry is masked, control_write won't call update_entry, leaving
> the entry's updated bit as-is and thus deferring the call to update_entry
> to further writes to PCI_MSIX_ENTRY_VECTOR_CTRL_OFFSET. I think this
> is all fine.

It is under the assumption that msix_write() behaves correctly.
What I was saying is that there might appear to be a need to
set ->updated in msix_write() (to make sure the mask bit change
won't get lost), at which point the logic in control_write()
would need adjustment. Which I find undesirable.

>> No
>> matter whether making the setting of ->updated conditional, or
>> adding a conditional call in update_entry(), we'd need to
>> evaluate whether the entry is currently disabled. Imo, instead of
>> introducing a new arch hook for this, it's easier to make
>> vpci_msix_arch_mask_entry() tolerate getting called on a disabled
>> entry. Below my proposed alternative change.
> 
> I think just setting the updated bit for all entries at initialization
> would solve this, as this would then force a call to update_entry when
> an entry is unmasked (by writes to PCI_MSIX_ENTRY_VECTOR_CTRL_OFFSET).

I don't see this being a solution - we'd still end up calling
vpci_msix_arch_mask_entry() when !msix->enabled or msix->masked.

> In any case, the assert in vpci_msix_arch_mask_entry is a logic check;
> IIRC, calling it with an invalid pirq will just result in the function
> being a no-op, as domain_spin_lock_irq_desc will return NULL.

Ah yes, pirq_info() would return NULL here. Should I read this as a
suggestion to simply drop the ASSERT(), instead of replacing it with
an if()? It would feel slightly more robust to me to keep the if().

>> While writing the description I started wondering why we require the
>> address or data fields to have been written before the first
>> unmask. I don't think the hardware imposes such a requirement;
>> zeros would be used instead, whatever that means. Let's not
>> forget that triggering interrupts is only the primary purpose of
>> MSI/MSI-X. Forcing the writes to go elsewhere in memory is not
>> forbidden as far as I know, and could be used by a
>> driver. IOW I think ->updated should start out as set to true.
>> But of course vpci_msi_update() then would need to check the
>> upper address bits and avoid setting up an interrupt if they're
>> not 0xfee. And further arrangements would be needed to have the
>> guest requested write actually get carried out correctly.
> 
> Seems correct, albeit adding such logic would complicate the code and
> expand the attack surface. IMO we shouldn't implement this unless we
> know there's a real use case for it.

I wasn't meaning to suggest we implement any of this without need.
I was, however, thinking we ought to at least check the high 12
address bits, and avoid trying to interpret the low 20 ones if
they don't match. Let me add a patch to this effect to the small
series that I've already accumulated anyway.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 08:34:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 08:34:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45954.81512 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmBy5-0001Ap-9w; Mon, 07 Dec 2020 08:34:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45954.81512; Mon, 07 Dec 2020 08:34:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmBy5-0001Ai-6O; Mon, 07 Dec 2020 08:34:01 +0000
Received: by outflank-mailman (input) for mailman id 45954;
 Mon, 07 Dec 2020 08:33:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ifa4=FL=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kmBy3-0001Ad-Mm
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 08:33:59 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7e1b::610])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c3e3fff5-5b2e-4520-ac8e-b56472026524;
 Mon, 07 Dec 2020 08:33:56 +0000 (UTC)
Received: from AM3PR07CA0066.eurprd07.prod.outlook.com (2603:10a6:207:4::24)
 by DB6PR0801MB1749.eurprd08.prod.outlook.com (2603:10a6:4:3b::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.17; Mon, 7 Dec
 2020 08:33:53 +0000
Received: from AM5EUR03FT006.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:207:4:cafe::ca) by AM3PR07CA0066.outlook.office365.com
 (2603:10a6:207:4::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.7 via Frontend
 Transport; Mon, 7 Dec 2020 08:33:53 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT006.mail.protection.outlook.com (10.152.16.122) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3632.17 via Frontend Transport; Mon, 7 Dec 2020 08:33:53 +0000
Received: ("Tessian outbound 6ec21dac9dd3:v71");
 Mon, 07 Dec 2020 08:33:53 +0000
Received: from 2e10e5fbd7a0.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 D1F5785B-FCC1-44D0-AAF8-1ACE814D98F4.1; 
 Mon, 07 Dec 2020 08:33:37 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 2e10e5fbd7a0.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 07 Dec 2020 08:33:37 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DB7PR08MB3611.eurprd08.prod.outlook.com (2603:10a6:10:4d::26) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.23; Mon, 7 Dec
 2020 08:33:36 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::11cb:318b:f0a0:f125]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::11cb:318b:f0a0:f125%5]) with mapi id 15.20.3632.022; Mon, 7 Dec 2020
 08:33:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3e3fff5-5b2e-4520-ac8e-b56472026524
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5x7ZIarv5NJNQsVwJ/NCQ2f8jE3i9jzD+g2YJKYMKlQ=;
 b=HJ+lOU6I9GcCYItnJ++6GtxEagPX7x5OLQFPzI2etUX+vI2wd+Wwr6HTedutpWdH3nwjz7Y5KkmgNxdCHtB+jg7brT6oML6+bKgBE7i3ZU8yEyhzVoB+efd9lsp93qnUggOHxVeu9QV434EFuQKQW1rYx00IYzPlnck6Q7ltaFk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: d5d28cbf74b27c3f
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EPE2tInPFDhAMzHAkblyhFJ27/Tse3AkrqIw6kTCZ+sO5JOrnfpi0hlSNQe5N42t3no4G4Tbc6QarrX1YXpRsDF6PrVH2qHym877wVC4ijJd87xP+EZZgAupT/pGtziWh/J3fo+zkVpvm12qd4qzLR0XkaH3H1VltMfcNijwl/vaICgotOgU72n1bYa7WU1AtyG3T2fpjeRXxKiwlEqV2e76IvXes2dY8nxOKW5j6MiTsYQdJQM3qlcdEJ6mF4vJ2Cvi6xQJshSb2aQtreKoa7DVcyaFWm2+4RAOYwIG5dv1P6WEIg2BDNqM+v13lFnry87DStaOulB+FAzyoTx5bw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5x7ZIarv5NJNQsVwJ/NCQ2f8jE3i9jzD+g2YJKYMKlQ=;
 b=WLCHfmKyfFIceH0Tfn/WdjtAnC0U32MSfEhJ/gD0p7Xr5pa+rnHTP8/sfxq4oU5UFOtZycTGTUquTJIGo/uvpjuF+D34h6JjIoYBINUigzw1yEEghRepH8xH7gwkcY3F1dDZ+n3rKkglPMc7S99AQCZ7a3vrXCCYO7AFKgOzpVT+9b3fubLEEyrl4wNML8vRxulPFctVONyXNUZY9l+OmydyGvBkQtHgW2z0Nb2Z5jqk0BBzBxL9mmhI+m6aIL7LOm3zIle2uPFc8CRGFw83C3ncVSVPZuQydw/UZbARk1KYwcoU4PhTBDwJt0pT0JDR7XN22Pr+TMgUnjpqmJ5OMQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5x7ZIarv5NJNQsVwJ/NCQ2f8jE3i9jzD+g2YJKYMKlQ=;
 b=HJ+lOU6I9GcCYItnJ++6GtxEagPX7x5OLQFPzI2etUX+vI2wd+Wwr6HTedutpWdH3nwjz7Y5KkmgNxdCHtB+jg7brT6oML6+bKgBE7i3ZU8yEyhzVoB+efd9lsp93qnUggOHxVeu9QV434EFuQKQW1rYx00IYzPlnck6Q7ltaFk=
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Andrew Cooper <andrew.cooper3@citrix.com>, George
 Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan
 Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>, Paul Durrant
	<paul@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 8/8] xen/arm: Add support for SMMUv3 driver
Thread-Topic: [PATCH v2 8/8] xen/arm: Add support for SMMUv3 driver
Thread-Index:
 AQHWxBX/ttp8YMNqT0KunkbXuSldmKnjI/iAgADpk4CAAL+AgIAAr2CAgABFBACABZ2zgA==
Date: Mon, 7 Dec 2020 08:33:36 +0000
Message-ID: <15627C56-2A19-4392-AE7E-4DFE0B377354@arm.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <de2101687020d18172a2b153f8977a5116d0cd66.1606406359.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2012011749550.1100@sstabellini-ThinkPad-T480s>
 <1912278a-13f4-885d-d1ca-cc130718d064@xen.org>
 <alpine.DEB.2.21.2012021958020.30425@sstabellini-ThinkPad-T480s>
 <BD247B69-7201-41E2-8687-49924B7396CA@arm.com>
 <alpine.DEB.2.21.2012031045420.32240@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2012031045420.32240@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0

Hello Stefano,

> On 3 Dec 2020, at 6:47 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Thu, 3 Dec 2020, Rahul Singh wrote:
>>> On 3 Dec 2020, at 4:13 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>> On Wed, 2 Dec 2020, Julien Grall wrote:
>>>> On 02/12/2020 02:51, Stefano Stabellini wrote:
>>>>> On Thu, 26 Nov 2020, Rahul Singh wrote:
>>>>>> +/* Alias to Xen device tree helpers */
>>>>>> +#define device_node dt_device_node
>>>>>> +#define of_phandle_args dt_phandle_args
>>>>>> +#define of_device_id dt_device_match
>>>>>> +#define of_match_node dt_match_node
>>>>>> +#define of_property_read_u32(np, pname, out) (!dt_property_read_u32(np,
>>>>>> pname, out))
>>>>>> +#define of_property_read_bool dt_property_read_bool
>>>>>> +#define of_parse_phandle_with_args dt_parse_phandle_with_args
>>>>>
>>>>> Given all the changes to the file by the previous patches we are
>>>>> basically fully (or almost fully) adapting this code to Xen.
>>>>>
>>>>> So at that point I wonder if we should just as well make these changes
>>>>> (e.g. s/of_phandle_args/dt_phandle_args/g) to the code too.
>>>>
>>>> I have already accepted the fact that keeping Linux code as-is is nearly
>>>> impossible without many workarounds :). The benefits tend to be limited
>>>> too, as we noticed for the SMMU driver.
>>>>
>>>> I would like to point out that this may make it quite difficult (if not
>>>> impossible) to revert the previous patches which removed support for some
>>>> features (e.g. atomic, MSI, ATS).
>>>>
>>>> If we are going to adapt the code to Xen (I'd like to keep the Linux code
>>>> style though), then I think we should consider keeping code that may be
>>>> useful in the near future (at least MSI, ATS).
>>>
>>> (I am fine with keeping the Linux code style.)
>>>
>>> We could try to keep the code as similar to Linux as possible. This
>>> didn't work out in the past.
>>>
>>> Otherwise, we could fully adapt the driver to Xen. If we fully adapt the
>>> driver to Xen (code style aside) it is better to be consistent and also
>>> do substitutions like s/of_phandle_args/dt_phandle_args/g. Then the
>>> policy becomes clear: the code comes from Linux but it is 100% adapted
>>> to Xen.
>>>
>>>
>>> Now the question about what to do about the MSI and ATS code is a good
>>> one. We know that we are going to want that code at some point in the
>>> next 2 years. Like you wrote, if we fully adapt the code to Xen and
>>> remove the MSI and ATS code, then it is going to be harder to add it back.
>>>
>>> So maybe keeping the MSI and ATS code for now, even if it cannot work,
>>> would be better. I think this strategy works well if the MSI and ATS
>>> code can be disabled easily, i.e. with a couple of lines of code in the
>>> init function rather than #ifdefs everywhere. It doesn't work well if we
>>> have to add #ifdefs everywhere.
>>>
>>> It looks like MSI could be disabled by adding a couple of lines to
>>> arm_smmu_setup_msis.
>>>
>>> Similarly, ATS seems to be easy to disable by forcing ats_enabled to
>>> false.
>>>
>>> So yes, this looks like a good way forward. Rahul, what do you think?
>>
>>
>> I am OK with keeping the PCI ATS and MSI functionality in the code.
>> As per the discussion, the next version of the patch will include the
>> modifications below. Please let me know if there are any overall
>> suggestions that should be added in the next version.
>>
>> 1. Keep the PCI ATS and MSI functionality code.
>
> I'll repeat one point I wrote above that I think is important: please
> try to disable ATS and MSI without invasive changes, ideally with just a
> couple of lines to force-disable each feature.

Yes, I will disable the features.
>
>
>> 2. Make the code Xen-compatible (remove the Linux compatibility functions).
>> 3. Keep the Linux coding style for code imported from Linux.
>> 4. Fix all comments.
>
> Sounds good.
>
>
>> I have one query: what coding style should be used for new code added to
>> make the driver work on Xen?
>
> We try to keep the same code style for the entirety of a source file. In
> this case, the whole driver should use Linux code style (both imported
> code and new code).

Ok.

Regards,
Rahul


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 08:48:17 2020
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
Cc: Oleksandr Andrushchenko <andr2000@gmail.com>,
 "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
 "julien.grall@arm.com" <julien.grall@arm.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-7-andr2000@gmail.com>
 <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
 <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
 <20201112144643.iyy5b34qyz5zi7mc@Air-de-Roger>
 <1fe15b9a-6f5d-1209-8ff5-af7c4fc0d637@epam.com>
 <b4697fbe-6896-ed64-409d-85620c08904a@suse.com>
 <3d6e5aab-ff89-7859-09c6-5ecb0c052511@epam.com>
 <1c88fef1-8558-fde1-02c7-8a68f6ecf312@suse.com>
 <67fd5df7-2ad2-08e5-294e-b769429164f0@epam.com>
 <03e23a66-619f-e846-cf61-a33ca5d9f0b4@suse.com>
 <b151e6d2-5480-d201-432a-bece208a1fd9@epam.com>
 <c58c1393-381a-d995-6e41-fa3251f67bd7@suse.com>
 <8fc22774-7380-2de1-9c30-6649a79fdfe1@epam.com>
 <46c75ee1-758c-8a42-d8d3-8d42cce3240a@suse.com>
 <66cb04c5-5e98-6a4d-da88-230b2dbc3d98@epam.com>
 <04059ce3-7009-9e1e-8ba2-398cc993d81b@suse.com>
 <802e20d8-82b6-5755-e6e5-aadb07585a32@epam.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b631c122-554c-e26e-4fa9-56809dd5569a@suse.com>
Date: Mon, 7 Dec 2020 09:48:07 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <802e20d8-82b6-5755-e6e5-aadb07585a32@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04.12.2020 15:38, Oleksandr Andrushchenko wrote:
> On 11/13/20 4:51 PM, Jan Beulich wrote:
>> On 13.11.2020 15:44, Oleksandr Andrushchenko wrote:
>>> On 11/13/20 4:38 PM, Jan Beulich wrote:
>>>> On 13.11.2020 15:32, Oleksandr Andrushchenko wrote:
>>>>> On 11/13/20 4:23 PM, Jan Beulich wrote:
>>>>>>     Earlier on I didn't say you should get this to work, only
>>>>>> that I think the general logic around what you add shouldn't make
>>>>>> things more arch specific than they really should be. That said,
>>>>>> something similar to the above should still be doable on x86,
>>>>>> utilizing struct pci_seg's bus2bridge[]. There ought to be
>>>>>> DEV_TYPE_PCI_HOST_BRIDGE entries there, albeit a number of them
>>>>>> (provided by the CPUs themselves rather than the chipset) aren't
>>>>>> really host bridges for the purposes you're after.
>>>>> You mean I can still use DEV_TYPE_PCI_HOST_BRIDGE as a marker
>>>>>
>>>>> while trying to detect what I need?
>>>> I'm afraid I don't understand what marker you're thinking about
>>>> here.
>>> I mean that when I go over bus2bridge entries, should I only work with
>>>
>>> those who have DEV_TYPE_PCI_HOST_BRIDGE?
>> Well, if you're after host bridges - yes.
>>
>> Jan
> 
> So, I started looking at bus2bridge and whether it can be used for both x86
> (and possibly Arm), and I have the impression that its original purpose was
> to identify the devices which the x86 IOMMU should cover: e.g. the users of
> find_upstream_bridge are the x86 IOMMU and the VGA driver.
> 
> I tried to play with this on Arm, and for the HW platform I have and for
> QEMU I got 0 entries in bus2bridge. This is because of how
> xen/drivers/passthrough/pci.c:alloc_pdev is implemented (which lives in the
> common code, BTW, but seems to be x86-specific): it skips setting up
> bus2bridge entries for the bridges I have.

I'm curious to learn what's x86-specific here. I also can't deduce
why bus2bridge setup would be skipped.

> These are DEV_TYPE_PCIe_BRIDGE and DEV_TYPE_PCI_HOST_BRIDGE. So, the
> assumption I made before, that I can go over bus2bridge and filter out the
> "owner" or parent bridge for a given seg:bus, doesn't seem to hold now.
> 
> Even if I find the parent bridge with
> xen/drivers/passthrough/pci.c:find_upstream_bridge, I am not sure I can get
> any further in detecting which x86 domain owns this bridge: the whole idea
> in the x86 code is that Domain-0 is the only possible owner here (you
> mentioned that).

Right - your attempt to find the owner should _right now_ result in
getting back Dom0, on x86. But there shouldn't be any such assumption
in the new consumer of this data that you mean to introduce. I.e. I
only did suggest this to avoid ...

> So, I wonder if we can still live with
> is_hardware_domain for now on x86?

... hard-coding information which can be properly established /
retrieved.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 08:53:55 2020
Subject: Re: GPF on 0xdead000000000100 in nvme_map_data - Linux 5.9.9
To: Jason Andryuk <jandryuk@gmail.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
Cc: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, Christoph Hellwig <hch@lst.de>,
 xen-devel <xen-devel@lists.xenproject.org>, Keith Busch <kbusch@kernel.org>,
 Jens Axboe <axboe@fb.com>, Sagi Grimberg <sagi@grimberg.me>,
 linux-nvme@lists.infradead.org
References: <20201129035639.GW2532@mail-itl>
 <20201130164010.GA23494@redsun51.ssa.fujisawa.hgst.com>
 <20201202000642.GJ201140@mail-itl> <20201204110847.GU201140@mail-itl>
 <20201204120803.GA20727@lst.de> <20201204122054.GV201140@mail-itl>
 <20201205082839.ts3ju6yta46cgwjn@Air-de-Roger>
 <CAKf6xpvdD-XJoRO91B+Lwc=0Sb6Luw2X8Y9sH_MQsAWhZmj+hw@mail.gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <7de66323-8a1f-fe95-c9d2-d2a5b1318d2f@suse.com>
Date: Mon, 7 Dec 2020 09:53:43 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <CAKf6xpvdD-XJoRO91B+Lwc=0Sb6Luw2X8Y9sH_MQsAWhZmj+hw@mail.gmail.com>
Content-Type: text/plain; charset=utf-8

Marek,

On 06.12.20 17:47, Jason Andryuk wrote:
> On Sat, Dec 5, 2020 at 3:29 AM Roger Pau Monné <roger.pau@citrix.com> wrote:
>>
>>> On Fri, Dec 04, 2020 at 01:20:54PM +0100, Marek Marczykowski-Górecki wrote:
>>> On Fri, Dec 04, 2020 at 01:08:03PM +0100, Christoph Hellwig wrote:
>>>> On Fri, Dec 04, 2020 at 12:08:47PM +0100, Marek Marczykowski-Górecki wrote:
>>>>> culprit:
>>>>>
>>>>> commit 9e2369c06c8a181478039258a4598c1ddd2cadfa
>>>>> Author: Roger Pau Monne <roger.pau@citrix.com>
>>>>> Date:   Tue Sep 1 10:33:26 2020 +0200
>>>>>
>>>>>      xen: add helpers to allocate unpopulated memory
>>>>>
>>>>> I'm adding relevant people and xen-devel to the thread.
>>>>> For completeness, here is the original crash message:
>>>>
>>>> That commit definitely adds a new ZONE_DEVICE user, so it does look
>>>> related.  But you are not running on Xen, are you?
>>>
>>> I am. It is Xen dom0.
>>
>> I'm afraid I'm on leave and won't be able to look into this until the
>> beginning of January. I would guess it's some kind of bad
>> interaction between blkback and NVMe drivers both using ZONE_DEVICE?
>>
>> Maybe the best is to revert this change and I will look into it when
>> I get back, unless someone is willing to debug this further.
>
> Looking at commit 9e2369c06c8a and xen-blkback's put_free_pages(), they
> both use page->lru, which is part of the anonymous union shared with
> *pgmap.  That matches Marek's suspicion that the ZONE_DEVICE memory is
> being used as ZONE_NORMAL.
>
> memmap_init_zone_device() says:
> * ZONE_DEVICE pages union ->lru with a ->pgmap back pointer
> * and zone_device_data.  It is a bug if a ZONE_DEVICE page is
> * ever freed or placed on a driver-private list.

Could you test whether the two attached patches help?

They are only compile-tested.


Juergen

[Attachment: 0001-xen-add-helpers-for-caching-grant-mapping-pages.patch]

From 4f6ad98ce5fd457fd12e6617b0bc2a8f82fbce4d Mon Sep 17 00:00:00 2001
From: Juergen Gross <jgross@suse.com>
Date: Mon, 7 Dec 2020 08:31:22 +0100
Subject: [PATCH 1/2] xen: add helpers for caching grant mapping pages

Instead of having similar helpers in multiple backend drivers use
common helpers for caching pages allocated via gnttab_alloc_pages().

Make use of those helpers in blkback and scsiback.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/block/xen-blkback/blkback.c | 89 ++++++-----------------------
 drivers/block/xen-blkback/common.h  |  4 +-
 drivers/block/xen-blkback/xenbus.c  |  6 +-
 drivers/xen/grant-table.c           | 72 +++++++++++++++++++++++
 drivers/xen/xen-scsiback.c          | 60 ++++---------------
 include/xen/grant_table.h           | 13 +++++
 6 files changed, 116 insertions(+), 128 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 501e9dacfff9..9ebf53903d7b 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -132,73 +132,12 @@ module_param(log_stats, int, 0644);
 
 #define BLKBACK_INVALID_HANDLE (~0)
 
-/* Number of free pages to remove on each call to gnttab_free_pages */
-#define NUM_BATCH_FREE_PAGES 10
-
 static inline bool persistent_gnt_timeout(struct persistent_gnt *persistent_gnt)
 {
 	return pgrant_timeout && (jiffies - persistent_gnt->last_used >=
 			HZ * pgrant_timeout);
 }
 
-static inline int get_free_page(struct xen_blkif_ring *ring, struct page **page)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&ring->free_pages_lock, flags);
-	if (list_empty(&ring->free_pages)) {
-		BUG_ON(ring->free_pages_num != 0);
-		spin_unlock_irqrestore(&ring->free_pages_lock, flags);
-		return gnttab_alloc_pages(1, page);
-	}
-	BUG_ON(ring->free_pages_num == 0);
-	page[0] = list_first_entry(&ring->free_pages, struct page, lru);
-	list_del(&page[0]->lru);
-	ring->free_pages_num--;
-	spin_unlock_irqrestore(&ring->free_pages_lock, flags);
-
-	return 0;
-}
-
-static inline void put_free_pages(struct xen_blkif_ring *ring, struct page **page,
-                                  int num)
-{
-	unsigned long flags;
-	int i;
-
-	spin_lock_irqsave(&ring->free_pages_lock, flags);
-	for (i = 0; i < num; i++)
-		list_add(&page[i]->lru, &ring->free_pages);
-	ring->free_pages_num += num;
-	spin_unlock_irqrestore(&ring->free_pages_lock, flags);
-}
-
-static inline void shrink_free_pagepool(struct xen_blkif_ring *ring, int num)
-{
-	/* Remove requested pages in batches of NUM_BATCH_FREE_PAGES */
-	struct page *page[NUM_BATCH_FREE_PAGES];
-	unsigned int num_pages = 0;
-	unsigned long flags;
-
-	spin_lock_irqsave(&ring->free_pages_lock, flags);
-	while (ring->free_pages_num > num) {
-		BUG_ON(list_empty(&ring->free_pages));
-		page[num_pages] = list_first_entry(&ring->free_pages,
-		                                   struct page, lru);
-		list_del(&page[num_pages]->lru);
-		ring->free_pages_num--;
-		if (++num_pages == NUM_BATCH_FREE_PAGES) {
-			spin_unlock_irqrestore(&ring->free_pages_lock, flags);
-			gnttab_free_pages(num_pages, page);
-			spin_lock_irqsave(&ring->free_pages_lock, flags);
-			num_pages = 0;
-		}
-	}
-	spin_unlock_irqrestore(&ring->free_pages_lock, flags);
-	if (num_pages != 0)
-		gnttab_free_pages(num_pages, page);
-}
-
 #define vaddr(page) ((unsigned long)pfn_to_kaddr(page_to_pfn(page)))
 
 static int do_block_io_op(struct xen_blkif_ring *ring, unsigned int *eoi_flags);
@@ -331,7 +270,8 @@ static void free_persistent_gnts(struct xen_blkif_ring *ring, struct rb_root *ro
 			unmap_data.count = segs_to_unmap;
 			BUG_ON(gnttab_unmap_refs_sync(&unmap_data));
 
-			put_free_pages(ring, pages, segs_to_unmap);
+			gnttab_page_cache_put(&ring->free_pages, pages,
+					      segs_to_unmap);
 			segs_to_unmap = 0;
 		}
 
@@ -371,7 +311,8 @@ void xen_blkbk_unmap_purged_grants(struct work_struct *work)
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
 			unmap_data.count = segs_to_unmap;
 			BUG_ON(gnttab_unmap_refs_sync(&unmap_data));
-			put_free_pages(ring, pages, segs_to_unmap);
+			gnttab_page_cache_put(&ring->free_pages, pages,
+					      segs_to_unmap);
 			segs_to_unmap = 0;
 		}
 		kfree(persistent_gnt);
@@ -379,7 +320,7 @@ void xen_blkbk_unmap_purged_grants(struct work_struct *work)
 	if (segs_to_unmap > 0) {
 		unmap_data.count = segs_to_unmap;
 		BUG_ON(gnttab_unmap_refs_sync(&unmap_data));
-		put_free_pages(ring, pages, segs_to_unmap);
+		gnttab_page_cache_put(&ring->free_pages, pages, segs_to_unmap);
 	}
 }
 
@@ -664,9 +605,10 @@ int xen_blkif_schedule(void *arg)
 
 		/* Shrink the free pages pool if it is too large. */
 		if (time_before(jiffies, blkif->buffer_squeeze_end))
-			shrink_free_pagepool(ring, 0);
+			gnttab_page_cache_shrink(&ring->free_pages, 0);
 		else
-			shrink_free_pagepool(ring, max_buffer_pages);
+			gnttab_page_cache_shrink(&ring->free_pages,
+						 max_buffer_pages);
 
 		if (log_stats && time_after(jiffies, ring->st_print))
 			print_stats(ring);
@@ -697,7 +639,7 @@ void xen_blkbk_free_caches(struct xen_blkif_ring *ring)
 	ring->persistent_gnt_c = 0;
 
 	/* Since we are shutting down remove all pages from the buffer */
-	shrink_free_pagepool(ring, 0 /* All */);
+	gnttab_page_cache_shrink(&ring->free_pages, 0 /* All */);
 }
 
 static unsigned int xen_blkbk_unmap_prepare(
@@ -736,7 +678,7 @@ static void xen_blkbk_unmap_and_respond_callback(int result, struct gntab_unmap_
 	   but is this the best way to deal with this? */
 	BUG_ON(result);
 
-	put_free_pages(ring, data->pages, data->count);
+	gnttab_page_cache_put(&ring->free_pages, data->pages, data->count);
 	make_response(ring, pending_req->id,
 		      pending_req->operation, pending_req->status);
 	free_req(ring, pending_req);
@@ -803,7 +745,8 @@ static void xen_blkbk_unmap(struct xen_blkif_ring *ring,
 		if (invcount) {
 			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
 			BUG_ON(ret);
-			put_free_pages(ring, unmap_pages, invcount);
+			gnttab_page_cache_put(&ring->free_pages, unmap_pages,
+					      invcount);
 		}
 		pages += batch;
 		num -= batch;
@@ -850,7 +793,8 @@ static int xen_blkbk_map(struct xen_blkif_ring *ring,
 			pages[i]->page = persistent_gnt->page;
 			pages[i]->persistent_gnt = persistent_gnt;
 		} else {
-			if (get_free_page(ring, &pages[i]->page))
+			if (gnttab_page_cache_get(&ring->free_pages,
+						  &pages[i]->page))
 				goto out_of_memory;
 			addr = vaddr(pages[i]->page);
 			pages_to_gnt[segs_to_map] = pages[i]->page;
@@ -883,7 +827,8 @@ static int xen_blkbk_map(struct xen_blkif_ring *ring,
 			BUG_ON(new_map_idx >= segs_to_map);
 			if (unlikely(map[new_map_idx].status != 0)) {
 				pr_debug("invalid buffer -- could not remap it\n");
-				put_free_pages(ring, &pages[seg_idx]->page, 1);
+				gnttab_page_cache_put(&ring->free_pages,
+						      &pages[seg_idx]->page, 1);
 				pages[seg_idx]->handle = BLKBACK_INVALID_HANDLE;
 				ret |= 1;
 				goto next;
@@ -944,7 +889,7 @@ static int xen_blkbk_map(struct xen_blkif_ring *ring,
 
 out_of_memory:
 	pr_alert("%s: out of memory\n", __func__);
-	put_free_pages(ring, pages_to_gnt, segs_to_map);
+	gnttab_page_cache_put(&ring->free_pages, pages_to_gnt, segs_to_map);
 	for (i = last_map; i < num; i++)
 		pages[i]->handle = BLKBACK_INVALID_HANDLE;
 	return -ENOMEM;
diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkba=
ck/common.h
index c6ea5d38c509..a1b9df2c4ef1 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -288,9 +288,7 @@ struct xen_blkif_ring {
 	struct work_struct	persistent_purge_work;
=20
 	/* Buffer of free pages to map grant refs. */
-	spinlock_t		free_pages_lock;
-	int			free_pages_num;
-	struct list_head	free_pages;
+	struct gnttab_page_cache free_pages;
=20
 	struct work_struct	free_work;
 	/* Thread shutdown wait queue. */
diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkba=
ck/xenbus.c
index f5705569e2a7..76912c584a76 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -144,8 +144,7 @@ static int xen_blkif_alloc_rings(struct xen_blkif *bl=
kif)
 		INIT_LIST_HEAD(&ring->pending_free);
 		INIT_LIST_HEAD(&ring->persistent_purge_list);
 		INIT_WORK(&ring->persistent_purge_work, xen_blkbk_unmap_purged_grants)=
;
-		spin_lock_init(&ring->free_pages_lock);
-		INIT_LIST_HEAD(&ring->free_pages);
+		gnttab_page_cache_init(&ring->free_pages);
=20
 		spin_lock_init(&ring->pending_free_lock);
 		init_waitqueue_head(&ring->pending_free_wq);
@@ -317,8 +316,7 @@ static int xen_blkif_disconnect(struct xen_blkif *blk=
if)
 		BUG_ON(atomic_read(&ring->persistent_gnt_in_use) !=3D 0);
 		BUG_ON(!list_empty(&ring->persistent_purge_list));
 		BUG_ON(!RB_EMPTY_ROOT(&ring->persistent_gnts));
-		BUG_ON(!list_empty(&ring->free_pages));
-		BUG_ON(ring->free_pages_num !=3D 0);
+		BUG_ON(ring->free_pages.num_pages !=3D 0);
 		BUG_ON(ring->persistent_gnt_c !=3D 0);
 		WARN_ON(i !=3D (XEN_BLKIF_REQS_PER_PAGE * blkif->nr_ring_pages));
 		ring->active =3D false;
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 523dcdf39cc9..e2e42912f241 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -813,6 +813,78 @@ int gnttab_alloc_pages(int nr_pages, struct page **p=
ages)
 }
 EXPORT_SYMBOL_GPL(gnttab_alloc_pages);
=20
+void gnttab_page_cache_init(struct gnttab_page_cache *cache)
+{
+	spin_lock_init(&cache->lock);
+	INIT_LIST_HEAD(&cache->pages);
+	cache->num_pages =3D 0;
+}
+EXPORT_SYMBOL_GPL(gnttab_page_cache_init);
+
+int gnttab_page_cache_get(struct gnttab_page_cache *cache, struct page *=
*page)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&cache->lock, flags);
+
+	if (list_empty(&cache->pages)) {
+		spin_unlock_irqrestore(&cache->lock, flags);
+		return gnttab_alloc_pages(1, page);
+	}
+
+	page[0] =3D list_first_entry(&cache->pages, struct page, lru);
+	list_del(&page[0]->lru);
+	cache->num_pages--;
+
+	spin_unlock_irqrestore(&cache->lock, flags);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(gnttab_page_cache_get);
+
+void gnttab_page_cache_put(struct gnttab_page_cache *cache, struct page =
**page,
+			   unsigned int num)
+{
+	unsigned long flags;
+	unsigned int i;
+
+	spin_lock_irqsave(&cache->lock, flags);
+
+	for (i =3D 0; i < num; i++)
+		list_add(&page[i]->lru, &cache->pages);
+	cache->num_pages +=3D num;
+
+	spin_unlock_irqrestore(&cache->lock, flags);
+}
+EXPORT_SYMBOL_GPL(gnttab_page_cache_put);
+
+void gnttab_page_cache_shrink(struct gnttab_page_cache *cache, unsigned =
int num)
+{
+	struct page *page[10];
+	unsigned int i =3D 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&cache->lock, flags);
+
+	while (cache->num_pages > num) {
+		page[i] =3D list_first_entry(&cache->pages, struct page, lru);
+		list_del(&page[i]->lru);
+		cache->num_pages--;
+		if (++i =3D=3D ARRAY_SIZE(page)) {
+			spin_unlock_irqrestore(&cache->lock, flags);
+			gnttab_free_pages(i, page);
+			i =3D 0;
+			spin_lock_irqsave(&cache->lock, flags);
+		}
+	}
+
+	spin_unlock_irqrestore(&cache->lock, flags);
+
+	if (i !=3D 0)
+		gnttab_free_pages(i, page);
+}
+EXPORT_SYMBOL_GPL(gnttab_page_cache_shrink);
+
 void gnttab_pages_clear_private(int nr_pages, struct page **pages)
 {
 	int i;
diff --git a/drivers/xen/xen-scsiback.c b/drivers/xen/xen-scsiback.c
index 4acc4e899600..862162dca33c 100644
--- a/drivers/xen/xen-scsiback.c
+++ b/drivers/xen/xen-scsiback.c
@@ -99,6 +99,8 @@ struct vscsibk_info {
 	struct list_head v2p_entry_lists;
=20
 	wait_queue_head_t waiting_to_free;
+
+	struct gnttab_page_cache free_pages;
 };
=20
 /* theoretical maximum of grants for one request */
@@ -188,10 +190,6 @@ module_param_named(max_buffer_pages, scsiback_max_bu=
ffer_pages, int, 0644);
 MODULE_PARM_DESC(max_buffer_pages,
 "Maximum number of free pages to keep in backend buffer");
=20
-static DEFINE_SPINLOCK(free_pages_lock);
-static int free_pages_num;
-static LIST_HEAD(scsiback_free_pages);
-
 /* Global spinlock to protect scsiback TPG list */
 static DEFINE_MUTEX(scsiback_mutex);
 static LIST_HEAD(scsiback_list);
@@ -207,41 +205,6 @@ static void scsiback_put(struct vscsibk_info *info)
 		wake_up(&info->waiting_to_free);
 }
=20
-static void put_free_pages(struct page **page, int num)
-{
-	unsigned long flags;
-	int i =3D free_pages_num + num, n =3D num;
-
-	if (num =3D=3D 0)
-		return;
-	if (i > scsiback_max_buffer_pages) {
-		n =3D min(num, i - scsiback_max_buffer_pages);
-		gnttab_free_pages(n, page + num - n);
-		n =3D num - n;
-	}
-	spin_lock_irqsave(&free_pages_lock, flags);
-	for (i =3D 0; i < n; i++)
-		list_add(&page[i]->lru, &scsiback_free_pages);
-	free_pages_num +=3D n;
-	spin_unlock_irqrestore(&free_pages_lock, flags);
-}
-
-static int get_free_page(struct page **page)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&free_pages_lock, flags);
-	if (list_empty(&scsiback_free_pages)) {
-		spin_unlock_irqrestore(&free_pages_lock, flags);
-		return gnttab_alloc_pages(1, page);
-	}
-	page[0] =3D list_first_entry(&scsiback_free_pages, struct page, lru);
-	list_del(&page[0]->lru);
-	free_pages_num--;
-	spin_unlock_irqrestore(&free_pages_lock, flags);
-	return 0;
-}
-
 static unsigned long vaddr_page(struct page *page)
 {
 	unsigned long pfn =3D page_to_pfn(page);
@@ -302,7 +265,8 @@ static void scsiback_fast_flush_area(struct vscsibk_p=
end *req)
 		BUG_ON(err);
 	}
=20
-	put_free_pages(req->pages, req->n_grants);
+	gnttab_page_cache_put(&req->info->free_pages, req->pages,
+			      req->n_grants);
 	req->n_grants =3D 0;
 }
=20
@@ -445,8 +409,8 @@ static int scsiback_gnttab_data_map_list(struct vscsi=
bk_pend *pending_req,
 	struct vscsibk_info *info =3D pending_req->info;
=20
 	for (i =3D 0; i < cnt; i++) {
-		if (get_free_page(pg + mapcount)) {
-			put_free_pages(pg, mapcount);
+		if (gnttab_page_cache_get(&info->free_pages, pg + mapcount)) {
+			gnttab_page_cache_put(&info->free_pages, pg, mapcount);
 			pr_err("no grant page\n");
 			return -ENOMEM;
 		}
@@ -796,6 +760,8 @@ static int scsiback_do_cmd_fn(struct vscsibk_info *in=
fo,
 		cond_resched();
 	}
=20
+	gnttab_page_cache_shrink(&info->free_pages, scsiback_max_buffer_pages);=

+
 	RING_FINAL_CHECK_FOR_REQUESTS(&info->ring, more_to_do);
 	return more_to_do;
 }
@@ -1233,6 +1199,8 @@ static int scsiback_remove(struct xenbus_device *de=
v)
=20
 	scsiback_release_translation_entry(info);
=20
+	gnttab_page_cache_shrink(&info->free_pages, 0);
+
 	dev_set_drvdata(&dev->dev, NULL);
=20
 	return 0;
@@ -1263,6 +1231,7 @@ static int scsiback_probe(struct xenbus_device *dev=
,
 	info->irq =3D 0;
 	INIT_LIST_HEAD(&info->v2p_entry_lists);
 	spin_lock_init(&info->v2p_lock);
+	gnttab_page_cache_init(&info->free_pages);
=20
 	err =3D xenbus_printf(XBT_NIL, dev->nodename, "feature-sg-grant", "%u",=

 			    SG_ALL);
@@ -1879,13 +1848,6 @@ static int __init scsiback_init(void)
=20
 static void __exit scsiback_exit(void)
 {
-	struct page *page;
-
-	while (free_pages_num) {
-		if (get_free_page(&page))
-			BUG();
-		gnttab_free_pages(1, &page);
-	}
 	target_unregister_template(&scsiback_ops);
 	xenbus_unregister_driver(&scsiback_driver);
 }
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 9bc5bc07d4d3..c6ef8ffc1a09 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -198,6 +198,19 @@ void gnttab_free_auto_xlat_frames(void);
 int gnttab_alloc_pages(int nr_pages, struct page **pages);
 void gnttab_free_pages(int nr_pages, struct page **pages);
=20
+struct gnttab_page_cache {
+	spinlock_t		lock;
+	struct list_head	pages;
+	unsigned int		num_pages;
+};
+
+void gnttab_page_cache_init(struct gnttab_page_cache *cache);
+int gnttab_page_cache_get(struct gnttab_page_cache *cache, struct page *=
*page);
+void gnttab_page_cache_put(struct gnttab_page_cache *cache, struct page =
**page,
+			   unsigned int num);
+void gnttab_page_cache_shrink(struct gnttab_page_cache *cache,
+			      unsigned int num);
+
 #ifdef CONFIG_XEN_GRANT_DMA_ALLOC
 struct gnttab_dma_alloc_args {
 	/* Device for which DMA memory will be/was allocated. */
--=20
2.26.2
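
[Editor's note] The lifecycle of the gnttab_page_cache introduced by the patch above (init, get/put on the I/O path, shrink on idle or teardown) can be mimicked with a stand-alone user-space sketch. All names and the fixed-size array here are illustrative only; the kernel version chains struct page entries on a list_head under a spinlock and falls back to gnttab_alloc_pages():

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical user-space analogue of gnttab_page_cache. */
#define CACHE_CAP 64

struct page_cache {
	void *items[CACHE_CAP];
	unsigned int num;		/* like gnttab_page_cache.num_pages */
};

static void cache_init(struct page_cache *c)
{
	c->num = 0;
}

/* Reuse a cached "page" if one exists, otherwise allocate a fresh one. */
static void *cache_get(struct page_cache *c)
{
	if (c->num == 0)
		return malloc(4096);
	return c->items[--c->num];
}

/* Return a page to the cache, freeing it if the cache is full. */
static void cache_put(struct page_cache *c, void *p)
{
	if (c->num < CACHE_CAP)
		c->items[c->num++] = p;
	else
		free(p);
}

/* Free cached pages until at most 'target' remain. */
static void cache_shrink(struct page_cache *c, unsigned int target)
{
	while (c->num > target)
		free(c->items[--c->num]);
}
```

The kernel version differs mainly in that every operation runs under cache->lock with interrupts disabled, and shrinking batches frees to avoid holding the lock across gnttab_free_pages().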


--------------8A9C09CAAF7010E62E1E799D
Content-Type: text/x-patch; charset=UTF-8;
 name="0002-xen-don-t-use-page-lru-for-ZONE_DEVICE-memory.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="0002-xen-don-t-use-page-lru-for-ZONE_DEVICE-memory.patch"

From 061fee2e0b4cb7dc7deb07980fca8afa6349358b Mon Sep 17 00:00:00 2001
From: Juergen Gross <jgross@suse.com>
Date: Mon, 7 Dec 2020 09:36:14 +0100
Subject: [PATCH 2/2] xen: don't use page->lru for ZONE_DEVICE memory

Commit 9e2369c06c8a18 ("xen: add helpers to allocate unpopulated
memory") introduced usage of ZONE_DEVICE memory for foreign memory
mappings.

Unfortunately this collides with using page->lru for Xen backend
private page caches.

Fix that by using page->zone_device_data instead.
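
[Editor's note] The collision can be pictured with a simplified, hypothetical sketch of the aliasing inside struct page (the real definition lives in include/linux/mm_types.h and is more involved):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative layout only: ->lru shares storage with the ZONE_DEVICE
 * fields, so putting such a page on a private list via ->lru clobbers
 * the ->pgmap back pointer. */
struct list_head_sketch {
	struct list_head_sketch *next, *prev;
};

struct page_sketch {
	union {
		struct list_head_sketch lru;	/* pagecache / private lists */
		struct {			/* ZONE_DEVICE pages */
			void *pgmap;
			void *zone_device_data;
		};
	};
};
```

Because lru.next overlays the pgmap back pointer, list insertions through ->lru corrupt it; ->zone_device_data, by contrast, is reserved for driver use on ZONE_DEVICE pages, so chaining cached pages through it is safe.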

Fixes: 9e2369c06c8a18 ("xen: add helpers to allocate unpopulated memory")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/grant-table.c | 65 ++++++++++++++++++++++++++++++++++-----
 include/xen/grant_table.h |  4 +++
 2 files changed, 62 insertions(+), 7 deletions(-)

diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index e2e42912f241..ddb38a3d7680 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -813,10 +813,63 @@ int gnttab_alloc_pages(int nr_pages, struct page **pages)
 }
 EXPORT_SYMBOL_GPL(gnttab_alloc_pages);
 
+#ifdef CONFIG_XEN_UNPOPULATED_ALLOC
+static inline void cache_init(struct gnttab_page_cache *cache)
+{
+	cache->pages = NULL;
+}
+
+static inline bool cache_empty(struct gnttab_page_cache *cache)
+{
+	return !cache->pages;
+}
+
+static inline struct page *cache_deq(struct gnttab_page_cache *cache)
+{
+	struct page *page;
+
+	page = cache->pages;
+	cache->pages = page->zone_device_data;
+
+	return page;
+}
+
+static inline void cache_enq(struct gnttab_page_cache *cache, struct page *page)
+{
+	page->zone_device_data = cache->pages;
+	cache->pages = page;
+}
+#else
+static inline void cache_init(struct gnttab_page_cache *cache)
+{
+	INIT_LIST_HEAD(&cache->pages);
+}
+
+static inline bool cache_empty(struct gnttab_page_cache *cache)
+{
+	return list_empty(&cache->pages);
+}
+
+static inline struct page *cache_deq(struct gnttab_page_cache *cache)
+{
+	struct page *page;
+
+	page = list_first_entry(&cache->pages, struct page, lru);
+	list_del(&page->lru);
+
+	return page;
+}
+
+static inline void cache_enq(struct gnttab_page_cache *cache, struct page *page)
+{
+	list_add(&page->lru, &cache->pages);
+}
+#endif
+
 void gnttab_page_cache_init(struct gnttab_page_cache *cache)
 {
 	spin_lock_init(&cache->lock);
-	INIT_LIST_HEAD(&cache->pages);
+	cache_init(cache);
 	cache->num_pages = 0;
 }
 EXPORT_SYMBOL_GPL(gnttab_page_cache_init);
@@ -827,13 +880,12 @@ int gnttab_page_cache_get(struct gnttab_page_cache *cache, struct page **page)
 
 	spin_lock_irqsave(&cache->lock, flags);
 
-	if (list_empty(&cache->pages)) {
+	if (cache_empty(cache)) {
 		spin_unlock_irqrestore(&cache->lock, flags);
 		return gnttab_alloc_pages(1, page);
 	}
 
-	page[0] = list_first_entry(&cache->pages, struct page, lru);
-	list_del(&page[0]->lru);
+	page[0] = cache_deq(cache);
 	cache->num_pages--;
 
 	spin_unlock_irqrestore(&cache->lock, flags);
@@ -851,7 +903,7 @@ void gnttab_page_cache_put(struct gnttab_page_cache *cache, struct page **page,
 	spin_lock_irqsave(&cache->lock, flags);
 
 	for (i = 0; i < num; i++)
-		list_add(&page[i]->lru, &cache->pages);
+		cache_enq(cache, page[i]);
 	cache->num_pages += num;
 
 	spin_unlock_irqrestore(&cache->lock, flags);
@@ -867,8 +919,7 @@ void gnttab_page_cache_shrink(struct gnttab_page_cache *cache, unsigned int num)
 	spin_lock_irqsave(&cache->lock, flags);
 
 	while (cache->num_pages > num) {
-		page[i] = list_first_entry(&cache->pages, struct page, lru);
-		list_del(&page[i]->lru);
+		page[i] = cache_deq(cache);
 		cache->num_pages--;
 		if (++i == ARRAY_SIZE(page)) {
 			spin_unlock_irqrestore(&cache->lock, flags);
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index c6ef8ffc1a09..b9c937b3a149 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -200,7 +200,11 @@ void gnttab_free_pages(int nr_pages, struct page **pages);
 
 struct gnttab_page_cache {
 	spinlock_t		lock;
+#ifdef CONFIG_XEN_UNPOPULATED_ALLOC
+	struct page		*pages;
+#else
 	struct list_head	pages;
+#endif
 	unsigned int		num_pages;
 };
 
-- 
2.26.2


--------------8A9C09CAAF7010E62E1E799D
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------8A9C09CAAF7010E62E1E799D--

--dj6QMlgeBtfrRkv9HmCRuQeYiLeVMFEpv--

--DjInZuhMnyFoVAwIkWsZZLcVQw8Nn1gxE
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/N7ZcFAwAAAAAACgkQsN6d1ii/Ey+7
pgf/TTJ0ULCjkRR/cCuSNRg9xycepvgqT7Rven1u6ewgKK2WDrcJlXtEIWRr43R8DAgQNyB67Ztb
azE/dcMa5KfmxQdKtwIDV3m7t60YLe4lVCRLlMB5UjNknbscAxa3/wtcqfwk7qErfvp9OAWXi41B
CRWacZ7/KsO5eCHvqiSTVrNgDjHZF5BRb2dSwWtivDhkHuegN6aaqwB/FgE1ds/HfT1XJe6y2ft4
yweThV1fzdg87bu8IM+HovxEAYcQT2949DoEfmzoDtHxGSgtl4WrzoO61FTOmQnVKSn9t2Ux12cc
gLJXwC8lc9bOa6Bkymlt9YSjnTrRJjAbZw30bWsjXA==
=JpuS
-----END PGP SIGNATURE-----

--DjInZuhMnyFoVAwIkWsZZLcVQw8Nn1gxE--


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 09:00:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 09:00:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45976.81548 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmCNY-0004An-4X; Mon, 07 Dec 2020 09:00:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45976.81548; Mon, 07 Dec 2020 09:00:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmCNY-0004Ag-1P; Mon, 07 Dec 2020 09:00:20 +0000
Received: by outflank-mailman (input) for mailman id 45976;
 Mon, 07 Dec 2020 09:00:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmCNX-0004AA-BT
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 09:00:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4de9675a-d5a0-4de0-9deb-86f3f75ce675;
 Mon, 07 Dec 2020 09:00:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1A255AD4A;
 Mon,  7 Dec 2020 09:00:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4de9675a-d5a0-4de0-9deb-86f3f75ce675
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607331616; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6ZOjbW6hH7FMjrZaBIUvpayIw0NyC/l8Xhb1PdwmQAs=;
	b=cJQihNdzr+Y0H9CHTAojeM3LB9QnTKF0lUzTZV6uAxCEKtWKKR8uCuJNN77ySUj05LhjtM
	n3zgbYkclsZ3rtn1uMyI7a+f9gIxAVrCfEtG2MrHPGnUoGfOlMMoup060s1sk/Z8gV+mzX
	z8Kb71LETJwZH4C64jXxqQjgOg3v+2k=
Subject: Re: [PATCH v2 00/17] xen: support per-cpupool scheduling granularity
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org,
 Juergen Gross <jgross@suse.com>
References: <20201201082128.15239-1-jgross@suse.com>
 <a12de6ea-584c-49ca-3a09-f94b65933a62@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c1fb98de-6925-aefa-e06a-5df4369acc3a@suse.com>
Date: Mon, 7 Dec 2020 10:00:18 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <a12de6ea-584c-49ca-3a09-f94b65933a62@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 05.12.2020 00:53, Andrew Cooper wrote:
> Weirdly, there is a second diagnostic showing up which appears to be
> unrelated and non-fatal, but concerning nonetheless.
> 
> mem_access.c: In function 'p2m_mem_access_check':
> mem_access.c:227:6: note: parameter passing for argument of type 'const
> struct npfec' changed in GCC 9.1
>   227 | bool p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct
> npfec npfec)
>       |      ^~~~~~~~~~~~~~~~~~~~
> 
> It appears to be related to
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=88469 and is letting us
> know that the ABI changed.  However, Xen is an embedded project with no
> external linkage, so we can probably compile with -Wno-psabi and be done
> with it.

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91710

I have no idea why the fix suggested there hasn't made it into the
code base yet - iirc I had taken it verbatim and it got rid of the
problem in my builds of the compiler.

I don't, btw, think us being "embedded" is an excuse to suppress
the warning. If there really was a code generation difference here
(and not just a false positive warning), an incremental build
across a switch between older and newer gcc would then be broken.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 09:02:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 09:02:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.45982.81559 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmCPw-0004Nn-Ie; Mon, 07 Dec 2020 09:02:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 45982.81559; Mon, 07 Dec 2020 09:02:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmCPw-0004Ng-Eq; Mon, 07 Dec 2020 09:02:48 +0000
Received: by outflank-mailman (input) for mailman id 45982;
 Mon, 07 Dec 2020 09:02:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DX/D=FL=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kmCPv-0004Na-Iu
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 09:02:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e360266b-0d24-407e-82cb-b7ea7d5ecfcf;
 Mon, 07 Dec 2020 09:02:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E04CFAD3F;
 Mon,  7 Dec 2020 09:02:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e360266b-0d24-407e-82cb-b7ea7d5ecfcf
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607331762; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=eoaV1NPwHD6WI0HBk9eoj2MU5zFUiUkyvNZPFYk2DUU=;
	b=vDgK3oIKGtSTrjDeI/K1u8SmaXMAe0BfFkDeCJoG7jWb/V4TW5JLyN5TdMZOPZQ++tBTDl
	8HcjNVeqGz58HAGhrBwGMKGjxXuCMA7DhnhVlUS6/L5l9LBuJStAC5OTWJ3O0jIZS91+mj
	Ts7BZSD2/MNhNX+bsfjLKqlOH5uv7AE=
Subject: Re: GPF on 0xdead000000000100 in nvme_map_data - Linux 5.9.9
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jason Andryuk <jandryuk@gmail.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
Cc: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, Christoph Hellwig <hch@lst.de>,
 xen-devel <xen-devel@lists.xenproject.org>, Keith Busch <kbusch@kernel.org>,
 Jens Axboe <axboe@fb.com>, Sagi Grimberg <sagi@grimberg.me>,
 linux-nvme@lists.infradead.org
References: <20201129035639.GW2532@mail-itl>
 <20201130164010.GA23494@redsun51.ssa.fujisawa.hgst.com>
 <20201202000642.GJ201140@mail-itl> <20201204110847.GU201140@mail-itl>
 <20201204120803.GA20727@lst.de> <20201204122054.GV201140@mail-itl>
 <20201205082839.ts3ju6yta46cgwjn@Air-de-Roger>
 <CAKf6xpvdD-XJoRO91B+Lwc=0Sb6Luw2X8Y9sH_MQsAWhZmj+hw@mail.gmail.com>
 <7de66323-8a1f-fe95-c9d2-d2a5b1318d2f@suse.com>
Message-ID: <5150c139-b73d-ecdb-b19b-8755d5ecf631@suse.com>
Date: Mon, 7 Dec 2020 10:02:41 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <7de66323-8a1f-fe95-c9d2-d2a5b1318d2f@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="1IWo2WNrVXqSxZflEHhPUnV4N9cjri20Z"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--1IWo2WNrVXqSxZflEHhPUnV4N9cjri20Z
Content-Type: multipart/mixed; boundary="Q2RT5Cd8uQaPQLruXibxaHlujkLkxbzKd";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jason Andryuk <jandryuk@gmail.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
Cc: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, Christoph Hellwig <hch@lst.de>,
 xen-devel <xen-devel@lists.xenproject.org>, Keith Busch <kbusch@kernel.org>,
 Jens Axboe <axboe@fb.com>, Sagi Grimberg <sagi@grimberg.me>,
 linux-nvme@lists.infradead.org
Message-ID: <5150c139-b73d-ecdb-b19b-8755d5ecf631@suse.com>
Subject: Re: GPF on 0xdead000000000100 in nvme_map_data - Linux 5.9.9
References: <20201129035639.GW2532@mail-itl>
 <20201130164010.GA23494@redsun51.ssa.fujisawa.hgst.com>
 <20201202000642.GJ201140@mail-itl> <20201204110847.GU201140@mail-itl>
 <20201204120803.GA20727@lst.de> <20201204122054.GV201140@mail-itl>
 <20201205082839.ts3ju6yta46cgwjn@Air-de-Roger>
 <CAKf6xpvdD-XJoRO91B+Lwc=0Sb6Luw2X8Y9sH_MQsAWhZmj+hw@mail.gmail.com>
 <7de66323-8a1f-fe95-c9d2-d2a5b1318d2f@suse.com>
In-Reply-To: <7de66323-8a1f-fe95-c9d2-d2a5b1318d2f@suse.com>

--Q2RT5Cd8uQaPQLruXibxaHlujkLkxbzKd
Content-Type: multipart/mixed;
 boundary="------------8E61B34936BA4E7DF6D6DC5C"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------8E61B34936BA4E7DF6D6DC5C
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 07.12.20 09:53, Jürgen Groß wrote:
> Marek,
> 
> On 06.12.20 17:47, Jason Andryuk wrote:
>> On Sat, Dec 5, 2020 at 3:29 AM Roger Pau Monné <roger.pau@citrix.com>
>> wrote:
>>>
>>> On Fri, Dec 04, 2020 at 01:20:54PM +0100, Marek Marczykowski-Górecki
>>> wrote:
>>>> On Fri, Dec 04, 2020 at 01:08:03PM +0100, Christoph Hellwig wrote:
>>>>> On Fri, Dec 04, 2020 at 12:08:47PM +0100, Marek
>>>>> Marczykowski-Górecki wrote:
>>>>>> culprit:
>>>>>>
>>>>>> commit 9e2369c06c8a181478039258a4598c1ddd2cadfa
>>>>>> Author: Roger Pau Monne <roger.pau@citrix.com>
>>>>>> Date:   Tue Sep 1 10:33:26 2020 +0200
>>>>>>
>>>>>>     xen: add helpers to allocate unpopulated memory
>>>>>>
>>>>>> I'm adding relevant people and xen-devel to the thread.
>>>>>> For completeness, here is the original crash message:
>>>>>
>>>>> That commit definitively adds a new ZONE_DEVICE user, so it does look
>>>>> related.  But you are not running on Xen, are you?
>>>>
>>>> I am. It is Xen dom0.
>>>
>>> I'm afraid I'm on leave and won't be able to look into this until the
>>> beginning of January. I would guess it's some kind of bad
>>> interaction between blkback and NVMe drivers both using ZONE_DEVICE?
>>>
>>> Maybe the best is to revert this change and I will look into it when
>>> I get back, unless someone is willing to debug this further.
>>
>> Looking at commit 9e2369c06c8a and xen-blkback put_free_pages(), they
>> both use page->lru which is part of the anonymous union shared with
>> *pgmap.  That matches Marek's suspicion that the ZONE_DEVICE memory is
>> being used as ZONE_NORMAL.
>>
>> memmap_init_zone_device() says:
>> * ZONE_DEVICE pages union ->lru with a ->pgmap back pointer
>> * and zone_device_data.  It is a bug if a ZONE_DEVICE page is
>> * ever freed or placed on a driver-private list.
> 
> Could you test whether the two attached patches are helping?
> 
> Only compile tested.

Oh, sorry, one patch missing.

I need to modify drivers/xen/unpopulated-alloc.c, too.


Juergen

--------------8E61B34936BA4E7DF6D6DC5C--

--Q2RT5Cd8uQaPQLruXibxaHlujkLkxbzKd--

--1IWo2WNrVXqSxZflEHhPUnV4N9cjri20Z
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/N77EFAwAAAAAACgkQsN6d1ii/Ey8m
JAf9FhNQjBjAf0GFH5mKSCIdjE7daoeFdgRb2sju6KdRNHFhdAMNQqfxLCZM50N6ie90GsNYB+wO
aeK8diqMJfCJNtABA0JstQi6Lh9kMfWfMRTjoTK9U+O+2l+ziWYfiRhwEjpfXs+LZYkhzioqk8NB
RRnb9CzR6HirKDAbbbQxl50maMhvBY7bb1uLH6AtxalmDDolJU3jht25O7uf32ax2cYqlzhMM6hY
YQfjJduuU4zVCxufXkJUEe8jCm+QF24g2W9lxd/Tg6aEImdAXyU5yq+nJFTVyosEpUUVIjRkCcdd
buONYjMt7D2RVXfByX0SKSWQ6SjjEEfjm+8UqJJYLg==
=SUS1
-----END PGP SIGNATURE-----

--1IWo2WNrVXqSxZflEHhPUnV4N9cjri20Z--


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 09:11:26 2020
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Oleksandr Andrushchenko <andr2000@gmail.com>,
        "Rahul.Singh@arm.com"
	<Rahul.Singh@arm.com>,
        "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
        "julien.grall@arm.com" <julien.grall@arm.com>,
        "sstabellini@kernel.org"
	<sstabellini@kernel.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>,
        "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
        =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
Thread-Topic: [PATCH 06/10] vpci: Make every domain handle its own BARs
Date: Mon, 7 Dec 2020 09:11:03 +0000
Message-ID: <8913ce50-1b51-36f6-36b6-7e09d9553df9@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-7-andr2000@gmail.com>
 <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
 <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
 <20201112144643.iyy5b34qyz5zi7mc@Air-de-Roger>
 <1fe15b9a-6f5d-1209-8ff5-af7c4fc0d637@epam.com>
 <b4697fbe-6896-ed64-409d-85620c08904a@suse.com>
 <3d6e5aab-ff89-7859-09c6-5ecb0c052511@epam.com>
 <1c88fef1-8558-fde1-02c7-8a68f6ecf312@suse.com>
 <67fd5df7-2ad2-08e5-294e-b769429164f0@epam.com>
 <03e23a66-619f-e846-cf61-a33ca5d9f0b4@suse.com>
 <b151e6d2-5480-d201-432a-bece208a1fd9@epam.com>
 <c58c1393-381a-d995-6e41-fa3251f67bd7@suse.com>
 <8fc22774-7380-2de1-9c30-6649a79fdfe1@epam.com>
 <46c75ee1-758c-8a42-d8d3-8d42cce3240a@suse.com>
 <66cb04c5-5e98-6a4d-da88-230b2dbc3d98@epam.com>
 <04059ce3-7009-9e1e-8ba2-398cc993d81b@suse.com>
 <802e20d8-82b6-5755-e6e5-aadb07585a32@epam.com>
 <b631c122-554c-e26e-4fa9-56809dd5569a@suse.com>
In-Reply-To: <b631c122-554c-e26e-4fa9-56809dd5569a@suse.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 12/7/20 10:48 AM, Jan Beulich wrote:
> On 04.12.2020 15:38, Oleksandr Andrushchenko wrote:
>> On 11/13/20 4:51 PM, Jan Beulich wrote:
>>> On 13.11.2020 15:44, Oleksandr Andrushchenko wrote:
>>>> On 11/13/20 4:38 PM, Jan Beulich wrote:
>>>>> On 13.11.2020 15:32, Oleksandr Andrushchenko wrote:
>>>>>> On 11/13/20 4:23 PM, Jan Beulich wrote:
>>>>>>>      Earlier on I didn't say you should get this to work, only
>>>>>>> that I think the general logic around what you add shouldn't make
>>>>>>> things more arch specific than they really should be. That said,
>>>>>>> something similar to the above should still be doable on x86,
>>>>>>> utilizing struct pci_seg's bus2bridge[]. There ought to be
>>>>>>> DEV_TYPE_PCI_HOST_BRIDGE entries there, albeit a number of them
>>>>>>> (provided by the CPUs themselves rather than the chipset) aren't
>>>>>>> really host bridges for the purposes you're after.
>>>>>> You mean I can still use DEV_TYPE_PCI_HOST_BRIDGE as a marker
>>>>>>
>>>>>> while trying to detect what I need?
>>>>> I'm afraid I don't understand what marker you're thinking about
>>>>> here.
>>>> I mean that when I go over bus2bridge entries, should I only work with
>>>>
>>>> those that have DEV_TYPE_PCI_HOST_BRIDGE?
>>> Well, if you're after host bridges - yes.
>>>
>>> Jan
>> So, I started looking at bus2bridge and whether it can be used for both x86 (and possibly ARM), and I have an
>>
>> impression that the original purpose of this was to identify the devices which the x86 IOMMU should
>>
>> cover: e.g. I am looking at the find_upstream_bridge users and they are the x86 IOMMU and the VGA driver.
>>
>> I tried to play with this on ARM, and for the HW platform I have and QEMU I got 0 entries in bus2bridge...
>>
>> This is because of how xen/drivers/passthrough/pci.c:alloc_pdev is implemented (which lives in the
>>
>> common code BTW, but seems to be x86 specific): so, it skips setting up bus2bridge entries for the bridges I have.
> I'm curious to learn what's x86-specific here. I also can't deduce
> why bus2bridge setup would be skipped.

So, for example:

commit 0af438757d455f8eb6b5a6ae9a990ae245f230fd
Author: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Date:   Fri Sep 27 10:11:49 2013 +0200

    AMD IOMMU: fix Dom0 device setup failure for host bridges

    The host bridge device (i.e. 0x18 for AMD) does not require IOMMU, and
    therefore is not included in the IVRS. The current logic tries to map
    all PCI devices to an IOMMU. In this case, "xl dmesg" shows the
    following message on AMD sytem.

    (XEN) setup 0000:00:18.0 for d0 failed (-19)
    (XEN) setup 0000:00:18.1 for d0 failed (-19)
    (XEN) setup 0000:00:18.2 for d0 failed (-19)
    (XEN) setup 0000:00:18.3 for d0 failed (-19)
    (XEN) setup 0000:00:18.4 for d0 failed (-19)
    (XEN) setup 0000:00:18.5 for d0 failed (-19)

    This patch adds a new device type (i.e. DEV_TYPE_PCI_HOST_BRIDGE) which
    corresponds to PCI class code 0x06 and sub-class 0x00. Then, it uses
    this new type to filter when trying to map device to IOMMU.

One of my test systems has DEV_TYPE_PCI_HOST_BRIDGE, so bus2bridge setup is ignored.

>
>> These are DEV_TYPE_PCIe_BRIDGE and DEV_TYPE_PCI_HOST_BRIDGE. So, the assumption I've made before
>>
>> that I can go over bus2bridge and filter out the "owner" or parent bridge for a given seg:bus doesn't
>>
>> seem to be possible now.
>>
>> Even if I find the parent bridge with xen/drivers/passthrough/pci.c:find_upstream_bridge I am not sure
>>
>> I can get any further in detecting which x86 domain owns this bridge: the whole idea in the x86 code is
>>
>> that Domain-0 is the only possible one here (you mentioned that).
> Right - your attempt to find the owner should _right now_ result in
> getting back Dom0, on x86. But there shouldn't be any such assumption
> in the new consumer of this data that you mean to introduce. I.e. I
> only did suggest this to avoid ...
>
>> So, I doubt if we can still live with
>>
>> is_hardware_domain for now for x86?
> ... hard-coding information which can be properly established /
> retrieved.
>
> Jan
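For reference, the bus2bridge filtering discussed in this thread can be sketched as below. This is a simplified, self-contained model: the entry layout and helper are illustrative stand-ins, not Xen's actual struct pci_seg / bus2bridge[] definitions from xen/drivers/passthrough/pci.c.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified, illustrative model of a per-segment bus2bridge[] map.
 * The real data lives in struct pci_seg in xen/drivers/passthrough/pci.c;
 * the layout below is a stand-in for discussion only. */
enum dev_type {
    DEV_TYPE_PCIe_BRIDGE,
    DEV_TYPE_PCI_HOST_BRIDGE,
    DEV_TYPE_PCI,
};

struct bridge_entry {
    uint8_t bus;        /* bus number this entry describes */
    enum dev_type type; /* derived from PCI class/sub-class codes */
    int populated;      /* set when alloc_pdev() filled the entry in */
};

/* Walk the map and count populated entries carrying the
 * DEV_TYPE_PCI_HOST_BRIDGE "marker" discussed above. */
static size_t count_host_bridges(const struct bridge_entry *map, size_t n)
{
    size_t found = 0;

    for ( size_t i = 0; i < n; i++ )
        if ( map[i].populated && map[i].type == DEV_TYPE_PCI_HOST_BRIDGE )
            found++;

    return found;
}
```

This also shows why the ARM experiment above found 0 entries: if alloc_pdev() never populates the entries for a platform's bridges, a filter like this has nothing to match.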


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 09:17:16 2020
Subject: Re: [PATCH v5 1/4] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_evtchn_fifo, ...
To: Julien Grall <julien@xen.org>
Cc: 'Paul Durrant' <pdurrant@amazon.com>,
 'Eslam Elnikety' <elnikety@amazon.com>, 'Ian Jackson' <iwj@xenproject.org>,
 'Wei Liu' <wl@xen.org>, 'Anthony PERARD' <anthony.perard@citrix.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Christian Lindig' <christian.lindig@citrix.com>,
 'David Scott' <dave@recoil.org>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org, paul@xen.org
References: <20201203124159.3688-1-paul@xen.org>
 <20201203124159.3688-2-paul@xen.org>
 <fea91a65-1d7c-cd46-81a2-9a6bcb690ed1@suse.com>
 <00ee01d6c98b$507af1c0$f170d540$@xen.org>
 <8a4a2027-0df3-aee2-537a-3d2814b329ec@suse.com>
 <00f601d6c996$ce3908d0$6aab1a70$@xen.org>
 <946280c7-c7f7-c760-c0d3-db91e6cde68a@suse.com>
 <011201d6ca16$ae14ac50$0a3e04f0$@xen.org>
 <4fb9fb4c-5849-25f1-ff72-ba3a046d3fd8@suse.com>
 <df1df316-9512-7b0c-fde1-aa4fc60ac70b@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c5537493-1a6f-cdc1-27dc-a34060e7efc5@suse.com>
Date: Mon, 7 Dec 2020 10:17:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <df1df316-9512-7b0c-fde1-aa4fc60ac70b@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04.12.2020 12:45, Julien Grall wrote:
> Hi,
> 
> I haven't looked at the series yet. Just adding some thoughts on why one 
> would want such an option.
> 
> On 04/12/2020 09:43, Jan Beulich wrote:
>> On 04.12.2020 09:22, Paul Durrant wrote:
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>> Sent: 04 December 2020 07:53
>>>>
>>>> On 03.12.2020 18:07, Paul Durrant wrote:
>>>>>> From: Jan Beulich <jbeulich@suse.com>
>>>>>> Sent: 03 December 2020 15:57
>>>>>>
>>>>>> ... this sounds to me more like workarounds for buggy guests than
>>>>>> functionality the hypervisor _needs_ to have. (I can appreciate
>>>>>> the specific case here for the specific scenario you provide as
>>>>>> an exception.)
>>>>>
>>>>> If we want to have a hypervisor that can be used in a cloud environment
>>>>> then Xen absolutely needs this capability.
>>>>
>>>> As per above you can conclude that I'm still struggling to see the
>>>> "why" part here.
>>>>
>>>
>>> Imagine you are a customer. You boot your OS and everything is just fine... you run your workload and all is good. You then shut down your VM and re-start it. Now it starts to crash. Who are you going to blame? You did nothing to your OS or application s/w, so you are going to blame the cloud provider of course.
>>
>> That's a situation OSes are in all the time. Buggy applications may
>> stop working on newer OS versions. It's still the application that's
>> in need of updating then. I guess OSes may choose to work around
>> some very common applications' bugs, but I'd then wonder on what
>> basis "very common" gets established. I dislike the underlying
>> asymmetry / inconsistency (if not unfairness) of such a model,
>> despite seeing that there may be business reasons leading people to
>> think they want something like this.
> 
> The discussion seems to be geared towards buggy guests so far. However, 
> this is not the only reason that one may want to avoid exposing some 
> features:
> 
>     1) From the recent security issues (such as XSA-343), a knob to 
> disable FIFO would be quite beneficial for vendors that don't need the 
> feature.

Except that this wouldn't have been suitable as a during-embargo
mitigation, since it is observable by guests.

>     2) Fleet management purpose. You may have a fleet with multiple 
> versions of Xen. You don't want your customer to start relying on 
> features that may not be available on all the hosts, otherwise it 
> complicates the guest placement.

Guests incapable of running on older Xen are a problem in this regard
anyway, aren't they? And if they are capable, I don't see what
you're referring to.

> FAOD, I am sure there might be other features that need to be disabled. 
> But we have to start somewhere :).

If there is such a need, then yes, sure. But shouldn't we at least
gain rough agreement on what the future is going to look like with
this? IOW have in hands some at least roughly agreed criteria by
which we could decide which new ABI additions will need some form
of override going forward (also allowing to judge which prior
additions may want to gain overrides in a retroactive fashion, or
in fact should have had ones from the beginning)?
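As a rough illustration of the kind of override under discussion, a create-time flag gating the FIFO event-channel ABI might look like the sketch below. The struct and helper are hypothetical; only the flag name mirrors the one in the patch subject, and the bit position chosen here is made up.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch only: how a create-time flag could gate an ABI
 * feature such as FIFO event channels. The struct and helper are
 * hypothetical, not Xen's actual domctl interface. */
#define XEN_DOMCTL_CDF_evtchn_fifo (1u << 5) /* bit position is made up */

struct domain_cfg {
    uint32_t create_flags; /* fixed at domain creation, immutable after */
};

/* A guest's attempt to switch to the FIFO event-channel ABI would be
 * refused unless the flag was set when the domain was created. */
static bool evtchn_fifo_allowed(const struct domain_cfg *d)
{
    return d->create_flags & XEN_DOMCTL_CDF_evtchn_fifo;
}
```

Deciding this once at domain creation, rather than at hypercall time, is what makes the feature set stable from the guest's point of view across host upgrades — the fleet-management concern raised above.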

>>> Now imagine you are the cloud provider, running Xen. What you did was start to upgrade your hosts from an older version of Xen to a newer version of Xen, to pick up various bug fixes and make sure you are running a version that is within the security support envelope. You identify that your customer's problem is a bug in their OS that was latent on the old version of the hypervisor but is now manifesting on the new one because it has buggy support for a hypercall that was added between the two versions. How are you going to fix this issue, and get your customer up and running again? Of course you'd like your customer to upgrade their OS, but they can't even boot it to do that. You really need a solution that can restore the old VM environment, at least temporarily, for that customer.
>>
>> Boot the guest on a not-yet-upgraded host again, to update its kernel?
> 
> You are making the assumption that the customer would have the choice to 
> target a specific version of Xen. This may be undesirable for a cloud 
> provider as suddenly your customer may want to stick to the old version 
> of Xen.

I've gone from you saying "You really need a solution that can restore
the old VM environment, at least temporarily, for that customer." The
"temporarily" to me implies that it is at least an option to tie a
certain guest to a certain Xen version for in-guest upgrading purposes.
If the deal with the customer doesn't include running on a certain Xen
version, I don't see how this could have non-temporary consequences for
the cloud provider.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 09:28:35 2020
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
Cc: Oleksandr Andrushchenko <andr2000@gmail.com>,
 "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
 "julien.grall@arm.com" <julien.grall@arm.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-7-andr2000@gmail.com>
 <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
 <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
 <20201112144643.iyy5b34qyz5zi7mc@Air-de-Roger>
 <1fe15b9a-6f5d-1209-8ff5-af7c4fc0d637@epam.com>
 <b4697fbe-6896-ed64-409d-85620c08904a@suse.com>
 <3d6e5aab-ff89-7859-09c6-5ecb0c052511@epam.com>
 <1c88fef1-8558-fde1-02c7-8a68f6ecf312@suse.com>
 <67fd5df7-2ad2-08e5-294e-b769429164f0@epam.com>
 <03e23a66-619f-e846-cf61-a33ca5d9f0b4@suse.com>
 <b151e6d2-5480-d201-432a-bece208a1fd9@epam.com>
 <c58c1393-381a-d995-6e41-fa3251f67bd7@suse.com>
 <8fc22774-7380-2de1-9c30-6649a79fdfe1@epam.com>
 <46c75ee1-758c-8a42-d8d3-8d42cce3240a@suse.com>
 <66cb04c5-5e98-6a4d-da88-230b2dbc3d98@epam.com>
 <04059ce3-7009-9e1e-8ba2-398cc993d81b@suse.com>
 <802e20d8-82b6-5755-e6e5-aadb07585a32@epam.com>
 <b631c122-554c-e26e-4fa9-56809dd5569a@suse.com>
 <8913ce50-1b51-36f6-36b6-7e09d9553df9@epam.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <eed78fed-159f-3ee3-5eec-9384e52406bf@suse.com>
Date: Mon, 7 Dec 2020 10:28:25 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <8913ce50-1b51-36f6-36b6-7e09d9553df9@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 07.12.2020 10:11, Oleksandr Andrushchenko wrote:
> On 12/7/20 10:48 AM, Jan Beulich wrote:
>> On 04.12.2020 15:38, Oleksandr Andrushchenko wrote:
>>> On 11/13/20 4:51 PM, Jan Beulich wrote:
>>>> On 13.11.2020 15:44, Oleksandr Andrushchenko wrote:
>>>>> On 11/13/20 4:38 PM, Jan Beulich wrote:
>>>>>> On 13.11.2020 15:32, Oleksandr Andrushchenko wrote:
>>>>>>> On 11/13/20 4:23 PM, Jan Beulich wrote:
>>>>>>>>      Earlier on I didn't say you should get this to work, only
>>>>>>>> that I think the general logic around what you add shouldn't make
>>>>>>>> things more arch specific than they really should be. That said,
>>>>>>>> something similar to the above should still be doable on x86,
>>>>>>>> utilizing struct pci_seg's bus2bridge[]. There ought to be
>>>>>>>> DEV_TYPE_PCI_HOST_BRIDGE entries there, albeit a number of them
>>>>>>>> (provided by the CPUs themselves rather than the chipset) aren't
>>>>>>>> really host bridges for the purposes you're after.
>>>>>>> You mean I can still use DEV_TYPE_PCI_HOST_BRIDGE as a marker
>>>>>>> while trying to detect what I need?
>>>>>> I'm afraid I don't understand what marker you're thinking about
>>>>>> here.
>>>>> I mean that when I go over bus2bridge entries, should I only work with
>>>>> those that have DEV_TYPE_PCI_HOST_BRIDGE?
>>>> Well, if you're after host bridges - yes.
>>>>
>>>> Jan
>>> So, I started looking at bus2bridge and whether it can be used for both x86 (and possibly ARM), and I have the
>>> impression that its original purpose was to identify the devices which the x86 IOMMU should
>>> cover: e.g. looking at the find_upstream_bridge users, they are the x86 IOMMU and the VGA driver.
>>> I tried to play with this on ARM, and for the HW platform I have and for QEMU I got 0 entries in bus2bridge...
>>> This is because of how xen/drivers/passthrough/pci.c:alloc_pdev is implemented (which lives in the
>>> common code, BTW, but seems to be x86-specific): it skips setting up bus2bridge entries for the bridges I have.
>> I'm curious to learn what's x86-specific here. I also can't deduce
>> why bus2bridge setup would be skipped.
> 
> So, for example:
> 
> commit 0af438757d455f8eb6b5a6ae9a990ae245f230fd
> Author: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> Date:   Fri Sep 27 10:11:49 2013 +0200
> 
>      AMD IOMMU: fix Dom0 device setup failure for host bridges
> 
>      The host bridge device (i.e. 0x18 for AMD) does not require IOMMU, and
>      therefore is not included in the IVRS. The current logic tries to map
>      all PCI devices to an IOMMU. In this case, "xl dmesg" shows the
>      following message on an AMD system.
> 
>      (XEN) setup 0000:00:18.0 for d0 failed (-19)
>      (XEN) setup 0000:00:18.1 for d0 failed (-19)
>      (XEN) setup 0000:00:18.2 for d0 failed (-19)
>      (XEN) setup 0000:00:18.3 for d0 failed (-19)
>      (XEN) setup 0000:00:18.4 for d0 failed (-19)
>      (XEN) setup 0000:00:18.5 for d0 failed (-19)
> 
>      This patch adds a new device type (i.e. DEV_TYPE_PCI_HOST_BRIDGE) which
>      corresponds to PCI class code 0x06 and sub-class 0x00. Then, it uses
>      this new type to filter when trying to map device to IOMMU.
> 
> One of my test systems has a DEV_TYPE_PCI_HOST_BRIDGE device, so bus2bridge setup is ignored for it

If there's data to be sensibly recorded for host bridges, I don't
see why the function couldn't be updated. I don't view this as
x86-specific; it may just be that on x86 we have no present use
for such data. It may in turn be the case that then x86-specific
call sites consuming this data need updating to not be misled by
the change in what data gets recorded. But that's still all within
the scope of bringing intended-to-be-arch-independent code closer
to actually being arch-independent.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 09:37:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 09:37:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46012.81608 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmCxg-0007ge-4A; Mon, 07 Dec 2020 09:37:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46012.81608; Mon, 07 Dec 2020 09:37:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmCxf-0007gX-WC; Mon, 07 Dec 2020 09:37:40 +0000
Received: by outflank-mailman (input) for mailman id 46012;
 Mon, 07 Dec 2020 09:37:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FYzf=FL=epam.com=prvs=0610b02f0e=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kmCxe-0007gS-P1
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 09:37:38 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4f98894c-6622-4160-a392-5e4eb229037b;
 Mon, 07 Dec 2020 09:37:38 +0000 (UTC)
Received: from pps.filterd (m0174681.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 0B79XZdO005550; Mon, 7 Dec 2020 09:37:32 GMT
Received: from eur01-db5-obe.outbound.protection.outlook.com
 (mail-db5eur01lp2059.outbound.protection.outlook.com [104.47.2.59])
 by mx0b-0039f301.pphosted.com with ESMTP id 3583g5k1gr-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Mon, 07 Dec 2020 09:37:32 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM4PR0301MB2196.eurprd03.prod.outlook.com (2603:10a6:200:4d::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.17; Mon, 7 Dec
 2020 09:37:29 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%9]) with mapi id 15.20.3632.018; Mon, 7 Dec 2020
 09:37:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4f98894c-6622-4160-a392-5e4eb229037b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UN38E0pZUJB/PFsHt/oCnfnjh8lfjWFR6xMNHgA2CK63qFtiUSG4ezzxz+6ngm9k/FC8BwnU5FwEAqbpKIA1CqwDMp2eLf7eiAq+SMqpddmYzinfQptU54aftzyZwLrqRp3AKlDSSj8nhQFeWOQS1CFaxBEAXn/kWlTk+oBCIvM/8QitWZTa91jfFvxlx5x+NmzciHSkLOVNFAMOj9UN8884YDfJ2ZNY/cRatm3VzgmuXrHMYk9D5SUx7D+uAZ/Tyh65NfYHG5+ob3oi3yTRWYxyXTXlh+vBUftCOtaDMA/A6sPD5kU28nsjzlvpX7tWkHOUq8xeGUxF6+P5hwT1HQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lTLcE0GIJgeLOO7/j6fHvOiyBigzz/3CQWCEoJOdosw=;
 b=jSb0RwsvYo8ARbnsKzmuZPoWuOzjYeNDrrWQcPPHSjEsbbZC6Qk4XM5SKSTA1oY/HpOZ8hs5Cr7jvbwxM8yvxj7OTqp25yTZ207BVvdkk5DwyNkADQBFVq2gtXU7xhy+EG+Q6F8EOT/Pcw9XzFDzp2NhaajPUqTDh+6MdQrOohIS7g0/kAAu/8KYt1YP4hfOWTKsjaRfs6Woo9XGhGoLtXeq+ViXK744jCwfxzl0BQ7Gqt5jhzQ1/TMi+PlY9S+YxGQ45phs/LL1OvELO+4T886sfI7WYj4t+yrd58kL801XaEKfRliPOPyM3U/N83az3eHW4t1qG+EbwvOAapEjHw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lTLcE0GIJgeLOO7/j6fHvOiyBigzz/3CQWCEoJOdosw=;
 b=RTQ0rXwCfFWrUy3NiZoguApSyUdkjy5o080ym8VNaN7z6nPUMchMLp2EqBEpkwfy3t+LD2AIGQjd5oQChc6R4Gbbh2ojMTzN5zw2F+04Sevc3a0A5SI0Sr3uD0rfasKu9gfXrueVJPkcse8xIuJTCASEFkATp5AqxhGVxMnpcjS2XRGdHsZpujT5qFcbNRqA/sskRagAwhQyd+zl4P6ib7hYx1poG1I+ue+C4JQFYVbcWzhZW+HTWTcTGA8FAtcp28Tglq95YWi6Ty46Q3L+WzWxdT1zNmSr2ONKoxihPJYZ0jCypgr/xqLWBwxMYMOdXyKBZf+vh+GmRirNOJTdng==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Oleksandr Andrushchenko <andr2000@gmail.com>,
        "Rahul.Singh@arm.com"
	<Rahul.Singh@arm.com>,
        "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
        "julien.grall@arm.com" <julien.grall@arm.com>,
        "sstabellini@kernel.org"
	<sstabellini@kernel.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>,
        "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
        =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
Thread-Topic: [PATCH 06/10] vpci: Make every domain handle its own BARs
Thread-Index: 
 AQHWtpbzyJg77SbpvkSRWLKjWy/3PanEQmgAgAA8YoCAABlOgIABCD8AgABBOQCAAAXIAIAAAS0AgAADNgCAAAlOAIAAEnYAgAAcfoCAAAKRAIAAAYQAgAABxICAAAHUgIAg/YsAgARU+oCAAAZpgIAABNqAgAAChwA=
Date: Mon, 7 Dec 2020 09:37:29 +0000
Message-ID: <d04bf141-d263-edfd-2110-f52bcca19411@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
 <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
 <20201112144643.iyy5b34qyz5zi7mc@Air-de-Roger>
 <1fe15b9a-6f5d-1209-8ff5-af7c4fc0d637@epam.com>
 <b4697fbe-6896-ed64-409d-85620c08904a@suse.com>
 <3d6e5aab-ff89-7859-09c6-5ecb0c052511@epam.com>
 <1c88fef1-8558-fde1-02c7-8a68f6ecf312@suse.com>
 <67fd5df7-2ad2-08e5-294e-b769429164f0@epam.com>
 <03e23a66-619f-e846-cf61-a33ca5d9f0b4@suse.com>
 <b151e6d2-5480-d201-432a-bece208a1fd9@epam.com>
 <c58c1393-381a-d995-6e41-fa3251f67bd7@suse.com>
 <8fc22774-7380-2de1-9c30-6649a79fdfe1@epam.com>
 <46c75ee1-758c-8a42-d8d3-8d42cce3240a@suse.com>
 <66cb04c5-5e98-6a4d-da88-230b2dbc3d98@epam.com>
 <04059ce3-7009-9e1e-8ba2-398cc993d81b@suse.com>
 <802e20d8-82b6-5755-e6e5-aadb07585a32@epam.com>
 <b631c122-554c-e26e-4fa9-56809dd5569a@suse.com>
 <8913ce50-1b51-36f6-36b6-7e09d9553df9@epam.com>
 <eed78fed-159f-3ee3-5eec-9384e52406bf@suse.com>
In-Reply-To: <eed78fed-159f-3ee3-5eec-9384e52406bf@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: daf97b6e-1dfa-4134-9b00-08d89a93b6ee
x-ms-traffictypediagnostic: AM4PR0301MB2196:
x-microsoft-antispam-prvs: 
 <AM4PR0301MB2196C5D97C115522863FA84EE7CE0@AM4PR0301MB2196.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8882;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <21A53457A0BF0E4BB7A504009473532F@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: daf97b6e-1dfa-4134-9b00-08d89a93b6ee
X-MS-Exchange-CrossTenant-originalarrivaltime: 07 Dec 2020 09:37:29.2089
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR0301MB2196
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-12-07_08:2020-12-04,2020-12-07 signatures=0

On 12/7/20 11:28 AM, Jan Beulich wrote:
> On 07.12.2020 10:11, Oleksandr Andrushchenko wrote:
>> On 12/7/20 10:48 AM, Jan Beulich wrote:
>>> On 04.12.2020 15:38, Oleksandr Andrushchenko wrote:
>>>> On 11/13/20 4:51 PM, Jan Beulich wrote:
>>>>> On 13.11.2020 15:44, Oleksandr Andrushchenko wrote:
>>>>>> On 11/13/20 4:38 PM, Jan Beulich wrote:
>>>>>>> On 13.11.2020 15:32, Oleksandr Andrushchenko wrote:
>>>>>>>> On 11/13/20 4:23 PM, Jan Beulich wrote:
>>>>>>>>>       Earlier on I didn't say you should get this to work, only
>>>>>>>>> that I think the general logic around what you add shouldn't make
>>>>>>>>> things more arch specific than they really should be. That said,
>>>>>>>>> something similar to the above should still be doable on x86,
>>>>>>>>> utilizing struct pci_seg's bus2bridge[]. There ought to be
>>>>>>>>> DEV_TYPE_PCI_HOST_BRIDGE entries there, albeit a number of them
>>>>>>>>> (provided by the CPUs themselves rather than the chipset) aren't
>>>>>>>>> really host bridges for the purposes you're after.
>>>>>>>> You mean I can still use DEV_TYPE_PCI_HOST_BRIDGE as a marker
>>>>>>>> while trying to detect what I need?
>>>>>>> I'm afraid I don't understand what marker you're thinking about
>>>>>>> here.
>>>>>> I mean that when I go over bus2bridge entries, should I only work with
>>>>>> those who have DEV_TYPE_PCI_HOST_BRIDGE?
>>>>> Well, if you're after host bridges - yes.
>>>>>
>>>>> Jan
>>>> So, I started looking at the bus2bridge and if it can be used for both x86 (and possible ARM) and I have an
>>>> impression that the original purpose of this was to identify the devices which x86 IOMMU should
>>>> cover: e.g. I am looking at the find_upstream_bridge users and they are x86 IOMMU and VGA driver.
>>>> I tried to play with this on ARM, and for the HW platform I have and QEMU I got 0 entries in bus2bridge...
>>>> This is because of how xen/drivers/passthrough/pci.c:alloc_pdev is implemented (which lives in the
>>>> common code BTW, but seems to be x86 specific): so, it skips setting up bus2bridge entries for the bridges I have.
>>> I'm curious to learn what's x86-specific here. I also can't deduce
>>> why bus2bridge setup would be skipped.
>> So, for example:
>>
>> commit 0af438757d455f8eb6b5a6ae9a990ae245f230fd
>> Author: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
>> Date:   Fri Sep 27 10:11:49 2013 +0200
>>
>>      AMD IOMMU: fix Dom0 device setup failure for host bridges
>>
>>      The host bridge device (i.e. 0x18 for AMD) does not require IOMMU, and
>>      therefore is not included in the IVRS. The current logic tries to map
>>      all PCI devices to an IOMMU. In this case, "xl dmesg" shows the
>>      following message on AMD sytem.
>>
>>      (XEN) setup 0000:00:18.0 for d0 failed (-19)
>>      (XEN) setup 0000:00:18.1 for d0 failed (-19)
>>      (XEN) setup 0000:00:18.2 for d0 failed (-19)
>>      (XEN) setup 0000:00:18.3 for d0 failed (-19)
>>      (XEN) setup 0000:00:18.4 for d0 failed (-19)
>>      (XEN) setup 0000:00:18.5 for d0 failed (-19)
>>
>>      This patch adds a new device type (i.e. DEV_TYPE_PCI_HOST_BRIDGE) which
>>      corresponds to PCI class code 0x06 and sub-class 0x00. Then, it uses
>>      this new type to filter when trying to map device to IOMMU.
>>
>> One of my test systems has DEV_TYPE_PCI_HOST_BRIDGE, so bus2brdige setup is ignored
> If there's data to be sensibly recorded for host bridges, I don't
> see why the function couldn't be updated. I don't view this as
> x86-specific; it may just be that on x86 we have no present use
> for such data. It may in turn be the case that then x86-specific
> call sites consuming this data need updating to not be mislead by
> the change in what data gets recorded. But that's still all within
> the scope of bringing intended-to-be-arch-independent code closer
> to actually being arch-independent.

Well, the patch itself made me think that this is a workaround for x86
which made DEV_TYPE_PCI_HOST_BRIDGE a special case and it relies on that.

So, please correct me if I'm wrong here, but in order to make it really generic
I would need to introduce some specific knowledge for x86 about such a device
and make the IOMMU code rely on that instead of bus2bridge.

>
> Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 09:43:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 09:43:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46022.81626 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmD3X-0000HD-2N; Mon, 07 Dec 2020 09:43:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46022.81626; Mon, 07 Dec 2020 09:43:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmD3W-0000H6-VC; Mon, 07 Dec 2020 09:43:42 +0000
Received: by outflank-mailman (input) for mailman id 46022;
 Mon, 07 Dec 2020 09:43:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmD3V-0000Gz-00
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 09:43:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e0fd3c4c-a6f4-4abd-b89b-63d584135db3;
 Mon, 07 Dec 2020 09:43:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D6088AC55;
 Mon,  7 Dec 2020 09:43:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0fd3c4c-a6f4-4abd-b89b-63d584135db3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607334219; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=voCy+AsKpL7inwILDFJHM4hQ45aLTGWqmoLq9FWqy1I=;
	b=tzturytpDLVswWM6Ta1vWypw/M1u4DsLnd6C2f5yQiMaeGi5qvqp0n1CqdlZesHa55ISxI
	JLYL+BRipeOYL3JemUq7hRcdWsMxJVUsg50VrBHhVWxR9ojCuB5quBOSf2FTx02+/Hx0EQ
	FUbEbHQH1Ujb5bgpr0dTBQCxLZJynoo=
Subject: Re: [PATCH v3 1/2] x86/IRQ: make max number of guests for a shared
 IRQ configurable
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Cc: andrew.cooper3@citrix.com, george.dunlap@citrix.com, iwj@xenproject.org,
 julien@xen.org, sstabellini@kernel.org, wl@xen.org, roger.pau@citrix.com,
 xen-devel@lists.xenproject.org
References: <1607276587-19231-1-git-send-email-igor.druzhinin@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <dc023b15-9e28-322c-aa86-165459e65d77@suse.com>
Date: Mon, 7 Dec 2020 10:43:41 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <1607276587-19231-1-git-send-email-igor.druzhinin@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 06.12.2020 18:43, Igor Druzhinin wrote:
> ... and increase the default to 16.
> 
> The current limit of 7 is too restrictive for modern systems where one GSI
> could be shared by potentially many PCI INTx sources, each of which
> corresponds to a device passed through to its own guest. Some systems do not
> apply due diligence in swizzling INTx links when e.g. INTA is declared as the
> interrupt pin for the majority of PCI devices behind a single router,
> resulting in overuse of a GSI.
> 
> Introduce a new command line option to configure that limit and dynamically
> allocate an array of the necessary size. Set the default now to 16, which
> is higher than 7 but could later be increased further if necessary.
> 
> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> ---
> 
> Changes in v2:
> - introduced a command line option as suggested
> - set initial default limit to 16
> 
> Changes in v3:
> - changed the option name to use - instead of _
> - used uchar instead of uint to utilize integer_param overflow handling logic

Which now means rather than saturating at UINT8_MAX you'll get
-EOVERFLOW. Just as a remark, not as a strict request to change
anything.

> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -42,6 +42,10 @@ integer_param("nr_irqs", nr_irqs);
>  int __read_mostly opt_irq_vector_map = OPT_IRQ_VECTOR_MAP_DEFAULT;
>  custom_param("irq_vector_map", parse_irq_vector_map_param);
>  
> +/* Max number of guests IRQ could be shared with */
> +static unsigned char __read_mostly irq_max_guests;
> +integer_param("irq-max-guests", irq_max_guests);

There's an implied assumption now that sizeof(irq_max_guests)
<= sizeof_field(irq_guest_action_t, nr_guests). Sadly a
respective BUILD_BUG_ON() can't ...

> @@ -435,6 +439,9 @@ int __init init_irq_data(void)
>      for ( ; irq < nr_irqs; irq++ )
>          irq_to_desc(irq)->irq = irq;
>  
> +    if ( !irq_max_guests )
> +        irq_max_guests = 16;

... go here, because the type gets defined only ...

> @@ -1028,7 +1035,6 @@ int __init setup_irq(unsigned int irq, unsigned int irqflags,
>   * HANDLING OF GUEST-BOUND PHYSICAL IRQS
>   */
>  
> -#define IRQ_MAX_GUESTS 7
>  typedef struct {
>      u8 nr_guests;
>      u8 in_flight;
> @@ -1039,7 +1045,7 @@ typedef struct {
>  #define ACKTYPE_EOI    2     /* EOI on the CPU that was interrupted  */
>      cpumask_var_t cpu_eoi_map; /* CPUs that need to EOI this interrupt */
>      struct timer eoi_timer;
> -    struct domain *guest[IRQ_MAX_GUESTS];
> +    struct domain *guest[];
>  } irq_guest_action_t;

... here. The only later __init function is setup_dump_irqs(), so
it could be put there or in a new build_assertions() one.

> @@ -1633,11 +1640,12 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
>          goto retry;
>      }
>  
> -    if ( action->nr_guests == IRQ_MAX_GUESTS )
> +    if ( action->nr_guests == irq_max_guests )
>      {
> -        printk(XENLOG_G_INFO "Cannot bind IRQ%d to dom%d. "
> -               "Already at max share.\n",
> -               pirq->pirq, v->domain->domain_id);
> +        printk(XENLOG_G_INFO
> +               "Cannot bind IRQ%d to dom%pd: already at max share %u ",
> +               pirq->pirq, v->domain, irq_max_guests);
> +        printk("(increase with irq-max-guests= option)\n");

Now two separate printk()s are definitely worse. Putting the
part of the format string inside the parentheses on a separate line
would still be better (and perhaps a sensible compromise with the
desire for grep-ability).

With suitable adjustments, which I'd be okay making while committing
as long as you agree,
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 09:44:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 09:44:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46027.81637 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmD4J-0000Nf-BZ; Mon, 07 Dec 2020 09:44:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46027.81637; Mon, 07 Dec 2020 09:44:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmD4J-0000NY-8h; Mon, 07 Dec 2020 09:44:31 +0000
Received: by outflank-mailman (input) for mailman id 46027;
 Mon, 07 Dec 2020 09:44:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmD4I-0000NQ-1v
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 09:44:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f9c44470-4c7b-43ff-be70-1608ea799ae6;
 Mon, 07 Dec 2020 09:44:29 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 96E69AC9A;
 Mon,  7 Dec 2020 09:44:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f9c44470-4c7b-43ff-be70-1608ea799ae6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607334268; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=KJohPHIbx1B9VcXIxmFimYdjluC9aftvHc+hYyertfo=;
	b=MJ7W3YYiAYd0OAT34njH6BGbOchvkTws8/N5ubU/3BX7cul4fL0XE3OWKZtQjrEUoNJv2i
	gnuZN0mjPQyMqBbDeZQrdBA60Xk/0DsQe/GKLV7XAYNfuG0CluRBkmNm2QAPVKI6Ci2Arv
	/16FlTGwg55rElvVhsnoSUi0Rb7YoG4=
Subject: Re: [PATCH v3 2/2] x86/IRQ: allocate guest array of max size only for
 shareable IRQs
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Cc: andrew.cooper3@citrix.com, george.dunlap@citrix.com, iwj@xenproject.org,
 julien@xen.org, sstabellini@kernel.org, wl@xen.org, roger.pau@citrix.com,
 xen-devel@lists.xenproject.org
References: <1607276587-19231-1-git-send-email-igor.druzhinin@citrix.com>
 <1607276587-19231-2-git-send-email-igor.druzhinin@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8dc7ed4d-7c3e-0fbf-92c9-cbe39c3e5f3d@suse.com>
Date: Mon, 7 Dec 2020 10:44:31 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <1607276587-19231-2-git-send-email-igor.druzhinin@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 06.12.2020 18:43, Igor Druzhinin wrote:
> ... and increase default "irq-max-guests" to 32.
> 
> It's not necessary to have an array sized larger than 1 for non-shareable
> IRQs, and doing so might impact scalability when high "irq-max-guests"
> values are used - every IRQ in the system, including MSIs, would be
> supplied with an array of that size.
> 
> Since higher "irq-max-guests" values are now less impactful, bump the
> default to 32. That should give more headroom for future systems.
> 
> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 09:50:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 09:50:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46034.81649 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmD9h-00018f-2M; Mon, 07 Dec 2020 09:50:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46034.81649; Mon, 07 Dec 2020 09:50:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmD9g-00017u-UE; Mon, 07 Dec 2020 09:50:04 +0000
Received: by outflank-mailman (input) for mailman id 46034;
 Mon, 07 Dec 2020 09:50:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmD9f-0000u0-Rl
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 09:50:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 366e6046-21a8-4da8-acd3-ea28002051cd;
 Mon, 07 Dec 2020 09:50:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 30BF2AD6B;
 Mon,  7 Dec 2020 09:50:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 366e6046-21a8-4da8-acd3-ea28002051cd
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607334602; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=NX/6LZuF7Yu6jUZxb47a+vb6f3XXEKbg8SU3fM/qL14=;
	b=m36nZmwJ+p/YMJyARv77VcWQAohJhf/JdpiWs6aXDUTCQfAGDqnFWSDW8LvjnIWQdDeVz3
	Mmrl05lvQwCDOy5jHTebeuWbDB+AFZVHkEA22VxvVH13rAiwtDPiM3tvMmqIGqEq4Ho7ii
	+UpcU8xPljGDnP0XlLisn/PJDePGhdY=
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
Cc: Oleksandr Andrushchenko <andr2000@gmail.com>,
 "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
 "julien.grall@arm.com" <julien.grall@arm.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
 <20201112144643.iyy5b34qyz5zi7mc@Air-de-Roger>
 <1fe15b9a-6f5d-1209-8ff5-af7c4fc0d637@epam.com>
 <b4697fbe-6896-ed64-409d-85620c08904a@suse.com>
 <3d6e5aab-ff89-7859-09c6-5ecb0c052511@epam.com>
 <1c88fef1-8558-fde1-02c7-8a68f6ecf312@suse.com>
 <67fd5df7-2ad2-08e5-294e-b769429164f0@epam.com>
 <03e23a66-619f-e846-cf61-a33ca5d9f0b4@suse.com>
 <b151e6d2-5480-d201-432a-bece208a1fd9@epam.com>
 <c58c1393-381a-d995-6e41-fa3251f67bd7@suse.com>
 <8fc22774-7380-2de1-9c30-6649a79fdfe1@epam.com>
 <46c75ee1-758c-8a42-d8d3-8d42cce3240a@suse.com>
 <66cb04c5-5e98-6a4d-da88-230b2dbc3d98@epam.com>
 <04059ce3-7009-9e1e-8ba2-398cc993d81b@suse.com>
 <802e20d8-82b6-5755-e6e5-aadb07585a32@epam.com>
 <b631c122-554c-e26e-4fa9-56809dd5569a@suse.com>
 <8913ce50-1b51-36f6-36b6-7e09d9553df9@epam.com>
 <eed78fed-159f-3ee3-5eec-9384e52406bf@suse.com>
 <d04bf141-d263-edfd-2110-f52bcca19411@epam.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <06497635-b7bc-b1d6-0503-32d3b808a643@suse.com>
Date: Mon, 7 Dec 2020 10:50:04 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <d04bf141-d263-edfd-2110-f52bcca19411@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 07.12.2020 10:37, Oleksandr Andrushchenko wrote:
> On 12/7/20 11:28 AM, Jan Beulich wrote:
>> On 07.12.2020 10:11, Oleksandr Andrushchenko wrote:
>>> On 12/7/20 10:48 AM, Jan Beulich wrote:
>>>> On 04.12.2020 15:38, Oleksandr Andrushchenko wrote:
>>>>> So, I started looking at bus2bridge and whether it can be used for
>>>>> both x86 (and possibly ARM), and I have the impression that its
>>>>> original purpose was to identify the devices which the x86 IOMMU
>>>>> should cover: e.g. the find_upstream_bridge users are the x86 IOMMU
>>>>> and the VGA driver.
>>>>>
>>>>> I tried to play with this on ARM, and for the HW platform I have and
>>>>> QEMU I got 0 entries in bus2bridge... This is because of how
>>>>> xen/drivers/passthrough/pci.c:alloc_pdev is implemented (which lives
>>>>> in the common code BTW, but seems to be x86-specific): it skips
>>>>> setting up bus2bridge entries for the bridges I have.
>>>> I'm curious to learn what's x86-specific here. I also can't deduce
>>>> why bus2bridge setup would be skipped.
>>> So, for example:
>>>
>>> commit 0af438757d455f8eb6b5a6ae9a990ae245f230fd
>>> Author: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
>>> Date:   Fri Sep 27 10:11:49 2013 +0200
>>>
>>>       AMD IOMMU: fix Dom0 device setup failure for host bridges
>>>
>>>       The host bridge device (i.e. 0x18 for AMD) does not require IOMMU, and
>>>       therefore is not included in the IVRS. The current logic tries to map
>>>       all PCI devices to an IOMMU. In this case, "xl dmesg" shows the
>>>       following message on AMD system.
>>>
>>>       (XEN) setup 0000:00:18.0 for d0 failed (-19)
>>>       (XEN) setup 0000:00:18.1 for d0 failed (-19)
>>>       (XEN) setup 0000:00:18.2 for d0 failed (-19)
>>>       (XEN) setup 0000:00:18.3 for d0 failed (-19)
>>>       (XEN) setup 0000:00:18.4 for d0 failed (-19)
>>>       (XEN) setup 0000:00:18.5 for d0 failed (-19)
>>>
>>>       This patch adds a new device type (i.e. DEV_TYPE_PCI_HOST_BRIDGE) which
>>>       corresponds to PCI class code 0x06 and sub-class 0x00. Then, it uses
>>>       this new type to filter when trying to map device to IOMMU.
>>>
>>> One of my test systems has DEV_TYPE_PCI_HOST_BRIDGE, so bus2bridge setup is ignored
>> If there's data to be sensibly recorded for host bridges, I don't
>> see why the function couldn't be updated. I don't view this as
>> x86-specific; it may just be that on x86 we have no present use
>> for such data. It may in turn be the case that then x86-specific
>> call sites consuming this data need updating to not be mislead by
>> the change in what data gets recorded. But that's still all within
>> the scope of bringing intended-to-be-arch-independent code closer
>> to actually being arch-independent.
> 
> Well, the patch itself made me think that this is a workaround for x86,
> which made DEV_TYPE_PCI_HOST_BRIDGE a special case and relies on that.
> 
> So, please correct me if I'm wrong here, but in order to make it really
> generic I would need to introduce some x86-specific knowledge about such
> a device and make the IOMMU code rely on that instead of bus2bridge.

I'm afraid this is too vague for me to respond with a clear "yes" or
"no". In particular I don't see the need for special-casing that type,
not least because it's not clear to me what data you would suggest
recording for it (or, more precisely, where you'd obtain the data to be
recorded - the device's config space doesn't tell you the bus range
covered by the bridge, afaict).

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 09:52:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 09:52:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46040.81662 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDCD-0001W1-Ft; Mon, 07 Dec 2020 09:52:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46040.81662; Mon, 07 Dec 2020 09:52:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDCD-0001Vu-CI; Mon, 07 Dec 2020 09:52:41 +0000
Received: by outflank-mailman (input) for mailman id 46040;
 Mon, 07 Dec 2020 09:52:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmDCB-0001Vm-Ov; Mon, 07 Dec 2020 09:52:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmDCB-0004XI-GS; Mon, 07 Dec 2020 09:52:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmDCB-0008LO-8C; Mon, 07 Dec 2020 09:52:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kmDCB-0006ai-7i; Mon, 07 Dec 2020 09:52:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PlOrPG3N6cqVCllnxaSbA/tgN1x1P+Jky3I2Szpqbus=; b=2PF2b+VnytsyG0YYzx1hsLQSrW
	LGu63uFwI03hxlmtl9EuV/woVQ3X+G3EkubIVdtWPvJYMfN1X+1hH/rd6B+KeX6GX/6fB/99wRLl2
	+2Z7rl/Kc3XkIH/k/aCNd+LxQQYVnMRtEFxLo668JkZoL6Rz6s1EStEltDGIrlq81+ng=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157251-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157251: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=4523be1ed7f420dabdf42bae2dc33e13aa46b4e4
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 07 Dec 2020 09:52:39 +0000

flight 157251 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157251/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              4523be1ed7f420dabdf42bae2dc33e13aa46b4e4
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  150 days
Failing since        151818  2020-07-11 04:18:52 Z  149 days  144 attempts
Testing same since   157216  2020-12-05 04:19:10 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu<tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 31452 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 09:58:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 09:58:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46050.81677 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDHi-0001l9-4t; Mon, 07 Dec 2020 09:58:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46050.81677; Mon, 07 Dec 2020 09:58:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDHi-0001l2-1p; Mon, 07 Dec 2020 09:58:22 +0000
Received: by outflank-mailman (input) for mailman id 46050;
 Mon, 07 Dec 2020 09:58:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmDHg-0001kW-TY
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 09:58:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f36f4ce9-95d2-4831-ac80-bcad4e6e2eac;
 Mon, 07 Dec 2020 09:58:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 431AFAC90;
 Mon,  7 Dec 2020 09:58:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f36f4ce9-95d2-4831-ac80-bcad4e6e2eac
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607335099; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RzAHNozzVBGVrFrACWfsc9DLFMUyjue6sLOqfcHKseA=;
	b=i0J+8xD2GwOueg+7yGV5xvlxdXdgEc2+T1qk15KQdhJkT7IrTlv1lMATC84bDJTX4quEum
	QeZRShy1HSOsDpspxpdhF/6HwjdoR51oeuuy5xBEi5D21vGbO/KM9w7FTeaXYMJFyqz0xx
	LOSqH8l44VRXvZOJds4cTeA5dTSVwZU=
Subject: Re: [PATCH v2 04/17] xen/cpupool: switch cpupool id to unsigned
To: Dario Faggioli <dfaggioli@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>,
 xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-5-jgross@suse.com>
 <c2753a9853a1dccc159e0b34db0cdf0a364d2206.camel@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <08e50e63-6146-fff2-01b1-d3d212b56092@suse.com>
Date: Mon, 7 Dec 2020 10:58:21 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <c2753a9853a1dccc159e0b34db0cdf0a364d2206.camel@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04.12.2020 16:52, Dario Faggioli wrote:
> On Tue, 2020-12-01 at 09:21 +0100, Juergen Gross wrote:
>> The cpupool id is an unsigned value in the public interface header,
>> so
>> there is no reason why it is a signed value in struct cpupool.
>>
>> Switch it to unsigned int.
>>
> I think we can add:
> 
> "No functional change intended"
> 
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>
> IAC:
> 
> Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

FAOD this applies without any further changes, i.e. not even my
suggestion of defining CPUPOOLID_NONE to XEN_SYSCTL_CPUPOOL_PAR_ANY,
or - not said explicitly in the earlier reply - at least avoiding the
open-coding of UINT_MAX?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 09:59:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 09:59:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46056.81689 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDIq-0001t7-KQ; Mon, 07 Dec 2020 09:59:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46056.81689; Mon, 07 Dec 2020 09:59:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDIq-0001t0-HT; Mon, 07 Dec 2020 09:59:32 +0000
Received: by outflank-mailman (input) for mailman id 46056;
 Mon, 07 Dec 2020 09:59:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmDIp-0001su-GM
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 09:59:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 89fa7183-07bb-4977-b10b-62d5321293f1;
 Mon, 07 Dec 2020 09:59:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1DFCFAD6B;
 Mon,  7 Dec 2020 09:59:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 89fa7183-07bb-4977-b10b-62d5321293f1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607335170; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Bdqn2agfR32U8qSo2/yxfyc/n46s4Yiha5+sGmTNiQg=;
	b=YYTseKZZSegTUyX5UUTIkzLAMUDYNyzz0lk+DLtFXNOboaY05Ayj4Yp4oK6X1RxQF/X5QV
	BWtEB+6Z/V6yD6jSsA5CWcPpKZW4/GyLbzy5j9i/lA0nSstwn69+8F8pb/JjnETQHoYYub
	PSNukv802wbcxDQOCbT/9jaGyOBjHjU=
Subject: Re: [PATCH v2 04/17] xen/cpupool: switch cpupool id to unsigned
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-5-jgross@suse.com>
 <a0bac022-fe6e-aae6-6d07-6a2b9bc492b3@suse.com>
 <eed1baac-a6eb-f10b-7272-742c08f5124e@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <eb2e47aa-6c82-f3dd-83e7-b1853816c41c@suse.com>
Date: Mon, 7 Dec 2020 10:59:32 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <eed1baac-a6eb-f10b-7272-742c08f5124e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 01.12.2020 10:01, Jürgen Groß wrote:
> On 01.12.20 09:55, Jan Beulich wrote:
>> On 01.12.2020 09:21, Juergen Gross wrote:
>>> --- a/xen/common/sched/private.h
>>> +++ b/xen/common/sched/private.h
>>> @@ -505,8 +505,8 @@ static inline void sched_unit_unpause(const struct sched_unit *unit)
>>>   
>>>   struct cpupool
>>>   {
>>> -    int              cpupool_id;
>>> -#define CPUPOOLID_NONE    (-1)
>>> +    unsigned int     cpupool_id;
>>> +#define CPUPOOLID_NONE    (~0U)
>>
>> How about using XEN_SYSCTL_CPUPOOL_PAR_ANY here? Furthermore,
>> together with the remark above, I think you also want to consider
>> the case of sizeof(unsigned int) > sizeof(uint32_t).
> 
> With patch 5 this should be completely fine.

I don't think so, as there will still be CPUPOOLID_NONE !=
XEN_SYSCTL_CPUPOOL_PAR_ANY in the mentioned case.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:00:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:00:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46060.81700 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDJW-0002jl-Tq; Mon, 07 Dec 2020 10:00:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46060.81700; Mon, 07 Dec 2020 10:00:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDJW-0002jd-Qx; Mon, 07 Dec 2020 10:00:14 +0000
Received: by outflank-mailman (input) for mailman id 46060;
 Mon, 07 Dec 2020 10:00:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Pxd=FL=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kmDJV-0002jW-Jg
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 10:00:13 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 1b609c12-ef8c-417f-beaa-db91f0bbba59;
 Mon, 07 Dec 2020 10:00:12 +0000 (UTC)
Received: from mail-wm1-f70.google.com (mail-wm1-f70.google.com
 [209.85.128.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-151-v7qOpCU5MDqtg0zvfGEfJg-1; Mon, 07 Dec 2020 05:00:09 -0500
Received: by mail-wm1-f70.google.com with SMTP id k126so2196839wmb.0
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 02:00:09 -0800 (PST)
Received: from [192.168.1.36] (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id c1sm13112995wml.8.2020.12.07.02.00.05
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 07 Dec 2020 02:00:06 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b609c12-ef8c-417f-beaa-db91f0bbba59
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607335212;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jkvd7LrgfbA/3UaWz+BLuo3xobAs3RImgpzqIL+38Gs=;
	b=QfJL7SrLO/7J5nZCqRP99ADVzZgu9TrDtsm2xGg0/N/fDS4ZCMTZO3bcceBzx9R9tY32lE
	uMRSeGXM4ToCfnx8EMiLG0iMBqE94IRNy2X+oLzvhAPGfrjfcjreZY7+PuUr16R6yI3EB/
	7wonqoTO6kYgQdCW5DvBE7BeULRYVn8=
X-MC-Unique: v7qOpCU5MDqtg0zvfGEfJg-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=jkvd7LrgfbA/3UaWz+BLuo3xobAs3RImgpzqIL+38Gs=;
        b=K7NscqqPplt+vPr1JHRdKWcgobFr0V+oA0mYYXUe/aWrRNNBdNtXJlIO6wFyMmMKQr
         /F21lwq+I55hFuLk2fvYZNrFCCAcKURtZ42nQNK9E5Nv6ZVIED5YI6nl8gp3k3yfrEmU
         Qpkp90gRLHnToqzDuDAkvAaCMZ3Bh3Ea6Y7mREL4xkXu/AuGEHtasO3cpV/0oSU1m7fb
         eAzX3u8Mgd5Qew/pxlJD/bFi3XPF4WJZzwsNLx/v/vbClMQm8F3jHmbMwaGuILarJcsT
         RFt//9PqMvn1rVbwiEr6QlYKQ+FIbUkYJgzj4LPBqHGEIKW1WKXqEiJ20v5LX5Cse+MV
         ejTA==
X-Gm-Message-State: AOAM532GvGdt2BklQEu3kuRzAWxOTY2bbUAFtXW+HApDyvLh7eEMoRoU
	aN4FNJpLT3D5TLAuOkgDvaKJtj9TbxmapxQjiwGXFrur2j5N0peI4PWtEutvhEElQhFEFFjqnbT
	HXwgu3PA4L8zDf1scUW/xbqtYg+4=
X-Received: by 2002:a1c:6208:: with SMTP id w8mr17496910wmb.96.1607335207833;
        Mon, 07 Dec 2020 02:00:07 -0800 (PST)
X-Google-Smtp-Source: ABdhPJw3EYRHSI+t9NVpsM0Mrbn468kurkRbJwfsRKh9URZe4lUy/zhC9c0UawCuc4XOtBW1JeEjjA==
X-Received: by 2002:a1c:6208:: with SMTP id w8mr17496881wmb.96.1607335207639;
        Mon, 07 Dec 2020 02:00:07 -0800 (PST)
Subject: Re: [PATCH 5/8] gitlab-ci: Add KVM s390x cross-build jobs
To: Thomas Huth <thuth@redhat.com>, qemu-devel@nongnu.org
Cc: =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Willian Rampazzo
 <wrampazz@redhat.com>, Paul Durrant <paul@xen.org>,
 Huacai Chen <chenhc@lemote.com>, Anthony Perard <anthony.perard@citrix.com>,
 Marcelo Tosatti <mtosatti@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Claudio Fontana <cfontana@suse.de>, Halil Pasic <pasic@linux.ibm.com>,
 Peter Maydell <peter.maydell@linaro.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Cornelia Huck <cohuck@redhat.com>, David Gibson
 <david@gibson.dropbear.id.au>, Paolo Bonzini <pbonzini@redhat.com>,
 qemu-s390x@nongnu.org, Aurelien Jarno <aurelien@aurel32.net>
References: <20201206185508.3545711-1-philmd@redhat.com>
 <20201206185508.3545711-6-philmd@redhat.com>
 <66d4d0ab-2bb5-1284-b08a-43c6c30f30dc@redhat.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <2352c04c-829e-ea1d-0894-15fc1d06697a@redhat.com>
Date: Mon, 7 Dec 2020 11:00:05 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <66d4d0ab-2bb5-1284-b08a-43c6c30f30dc@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12/7/20 6:46 AM, Thomas Huth wrote:
> On 06/12/2020 19.55, Philippe Mathieu-Daudé wrote:
>> Cross-build s390x target with only KVM accelerator enabled.
>>
>> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>> ---
>>  .gitlab-ci.d/crossbuilds-kvm-s390x.yml | 6 ++++++
>>  .gitlab-ci.yml                         | 1 +
>>  MAINTAINERS                            | 1 +
>>  3 files changed, 8 insertions(+)
>>  create mode 100644 .gitlab-ci.d/crossbuilds-kvm-s390x.yml
>>
>> diff --git a/.gitlab-ci.d/crossbuilds-kvm-s390x.yml b/.gitlab-ci.d/crossbuilds-kvm-s390x.yml
>> new file mode 100644
>> index 00000000000..1731af62056
>> --- /dev/null
>> +++ b/.gitlab-ci.d/crossbuilds-kvm-s390x.yml
>> @@ -0,0 +1,6 @@
>> +cross-s390x-kvm:
>> +  extends: .cross_accel_build_job
>> +  variables:
>> +    IMAGE: debian-s390x-cross
>> +    TARGETS: s390x-softmmu
>> +    ACCEL_CONFIGURE_OPTS: --disable-tcg
>> diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
>> index 573afceb3c7..a69619d7319 100644
>> --- a/.gitlab-ci.yml
>> +++ b/.gitlab-ci.yml
>> @@ -14,6 +14,7 @@ include:
>>    - local: '/.gitlab-ci.d/crossbuilds.yml'
>>    - local: '/.gitlab-ci.d/crossbuilds-kvm-x86.yml'
>>    - local: '/.gitlab-ci.d/crossbuilds-kvm-arm.yml'
>> +  - local: '/.gitlab-ci.d/crossbuilds-kvm-s390x.yml'
> 
> KVM code is already covered by the "cross-s390x-system" job, but an
> additional compilation test with --disable-tcg makes sense here. I'd then
> rather name it "cross-s390x-no-tcg" or so instead of "cross-s390x-kvm".

As you wish. What I want is to let GitLab users build the equivalent
of the "[s390x] Clang (disable-tcg)" job from Travis.

I keep using the GCC toolchain because managing job coverage
duplication is an unresolved problem.

> 
> And while you're at it, I'd maybe rather name the new file just
> crossbuilds-s390x.yml and also move the other s390x related jobs into it?

OK will do.
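For reference, the rename would amount to something like this (a sketch only; the file and job names follow Thomas's proposal and are not yet in the tree):

```yaml
# .gitlab-ci.d/crossbuilds-s390x.yml (proposed file name)
cross-s390x-no-tcg:
  extends: .cross_accel_build_job
  variables:
    IMAGE: debian-s390x-cross
    TARGETS: s390x-softmmu
    ACCEL_CONFIGURE_OPTS: --disable-tcg
```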

Thanks,

Phil.



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:05:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:05:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46070.81717 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDO1-00033X-Jw; Mon, 07 Dec 2020 10:04:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46070.81717; Mon, 07 Dec 2020 10:04:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDO1-00033Q-Go; Mon, 07 Dec 2020 10:04:53 +0000
Received: by outflank-mailman (input) for mailman id 46070;
 Mon, 07 Dec 2020 10:04:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Pxd=FL=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kmDNz-00033J-Qa
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 10:04:51 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 25bbe895-5439-4e3a-aff6-576325c446f5;
 Mon, 07 Dec 2020 10:04:51 +0000 (UTC)
Received: from mail-wm1-f72.google.com (mail-wm1-f72.google.com
 [209.85.128.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-378-LGK1Er0yN3u4y-rbsqZ6IQ-1; Mon, 07 Dec 2020 05:04:49 -0500
Received: by mail-wm1-f72.google.com with SMTP id l5so3984344wmi.4
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 02:04:48 -0800 (PST)
Received: from [192.168.1.36] (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id e17sm4403139wrw.84.2020.12.07.02.04.45
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 07 Dec 2020 02:04:46 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 25bbe895-5439-4e3a-aff6-576325c446f5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607335490;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=NYGMZWwBeIh3fFPx69mvYjTdx1z088fxfHeToI+lsX8=;
	b=IUj/WNf5h8IqWTIOYg2z84Hv9lQBHEyKKAIlxidutKotmmyqXc3FpcRUuwERstCHy0B0tT
	kf9Z+L1alyvdKNdvuqCwkaIXbeuvz7wF3DLBEs2HwE8c6/p5vABFyblj54irnClcuEHY6I
	I/KvIc6erd2CSwBb2aGFbDlmayJoaX4=
X-MC-Unique: LGK1Er0yN3u4y-rbsqZ6IQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=NYGMZWwBeIh3fFPx69mvYjTdx1z088fxfHeToI+lsX8=;
        b=TVCgo9EJ/PMLQwG1DmeJ+bDUSQQ4If7OsH4sUUAMdWBZbOOiHma28N1SkXA1EXJWz8
         /zg1NQt6TwDEu8LbYvPY0f3DUDQui7YAKo8vMxpb5YB5EcGLEUhLW3+b1nCRC2/NeQH5
         zaHB7sEme2AYU6IuAGbgSq4OkkoO53tWrhZXkj+wizvsWA+9QAw3PFMgGdl6SW+/itVs
         naaqUZhFDAW/m4x2XCbpywAy6xqi8iA87nHmTB78/3nT79YvBP54QHv4LCZSJU0VD+1g
         +NP1b2lMDubos5taDWas0sr9L0Ee2AgPbMtxcFGU09cf6Z/RZiz1KF8Fh5byXmIGbwZ0
         dFlw==
X-Gm-Message-State: AOAM531P7FdhAre+WHtw7XTldUiYVv16KqkhcLpFDK6hOxZCwPklFkaX
	7oYoqRFWmusXNgIqBZ+xO/KMByu++0bp1N7mIFQzJOtuEqbPp5txM60Y7D8B1OqPDKebKthFDYU
	vVNljD9tdu590DxX2T+j45VoxCDQ=
X-Received: by 2002:a1c:4604:: with SMTP id t4mr16984339wma.17.1607335487736;
        Mon, 07 Dec 2020 02:04:47 -0800 (PST)
X-Google-Smtp-Source: ABdhPJxh2IvFXXS0Sn+Z6xWooK0ZB7J3J/ucSwA/Wd5Mn37PSJsH8VDS/ZbIW+jBzvFF1CRvg0hS8w==
X-Received: by 2002:a1c:4604:: with SMTP id t4mr16984316wma.17.1607335487585;
        Mon, 07 Dec 2020 02:04:47 -0800 (PST)
Subject: Re: [PATCH 3/8] gitlab-ci: Add KVM X86 cross-build jobs
To: Thomas Huth <thuth@redhat.com>, qemu-devel@nongnu.org
Cc: =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Willian Rampazzo
 <wrampazz@redhat.com>, Paul Durrant <paul@xen.org>,
 Huacai Chen <chenhc@lemote.com>, Anthony Perard <anthony.perard@citrix.com>,
 Marcelo Tosatti <mtosatti@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Claudio Fontana <cfontana@suse.de>, Halil Pasic <pasic@linux.ibm.com>,
 Peter Maydell <peter.maydell@linaro.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Cornelia Huck <cohuck@redhat.com>, David Gibson
 <david@gibson.dropbear.id.au>, Paolo Bonzini <pbonzini@redhat.com>,
 qemu-s390x@nongnu.org, Aurelien Jarno <aurelien@aurel32.net>,
 qemu-arm@nongnu.org
References: <20201206185508.3545711-1-philmd@redhat.com>
 <20201206185508.3545711-4-philmd@redhat.com>
 <1048bbc0-7124-3564-4219-aa32ed11a35b@redhat.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <951882fd-fae0-2dec-5a81-d72adf139511@redhat.com>
Date: Mon, 7 Dec 2020 11:04:45 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <1048bbc0-7124-3564-4219-aa32ed11a35b@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12/7/20 6:20 AM, Thomas Huth wrote:
> On 06/12/2020 19.55, Philippe Mathieu-Daudé wrote:
>> Cross-build x86 target with only KVM accelerator enabled.
>>
>> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>> ---
>>  .gitlab-ci.d/crossbuilds-kvm-x86.yml | 6 ++++++
>>  .gitlab-ci.yml                       | 1 +
>>  MAINTAINERS                          | 1 +
>>  3 files changed, 8 insertions(+)
>>  create mode 100644 .gitlab-ci.d/crossbuilds-kvm-x86.yml
> 
> We already have a job that tests with KVM enabled and TCG disabled in the
> main .gitlab-ci.yml file, the "build-tcg-disabled" job. So I don't quite see
> the point in adding yet another job that does pretty much the same? Did I
> miss something?

I missed that it was x86-specific myself.

> 
>  Thomas
> 



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:05:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:05:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46071.81728 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDO3-00034r-Sk; Mon, 07 Dec 2020 10:04:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46071.81728; Mon, 07 Dec 2020 10:04:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDO3-00034k-PM; Mon, 07 Dec 2020 10:04:55 +0000
Received: by outflank-mailman (input) for mailman id 46071;
 Mon, 07 Dec 2020 10:04:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kmDO1-000343-U2
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 10:04:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmDO0-0004sb-Ef; Mon, 07 Dec 2020 10:04:52 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmDO0-00022p-25; Mon, 07 Dec 2020 10:04:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=5lkBITQyaqDvEMtZseFQU6Mb1AE2nUUbIGAmWCYF1+A=; b=tSUddrKiW4IAEcDSHtKbUm2w6a
	2SmzPPqciacdnb0anNJS7niUsgM2PsY/B7prxflyy7jJZJMbLoCEdKLXTQquH1VlSD/B2y51GSJQG
	fgNsRktPVTIoT8KElbTu/UNsXNEtMXHZQb2liVYbLzB8RsvV7d/43LxBz3Icexv9sct0=;
Subject: Re: [PATCH v5 1/4] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_evtchn_fifo, ...
To: Jan Beulich <jbeulich@suse.com>
Cc: 'Paul Durrant' <pdurrant@amazon.com>,
 'Eslam Elnikety' <elnikety@amazon.com>, 'Ian Jackson' <iwj@xenproject.org>,
 'Wei Liu' <wl@xen.org>, 'Anthony PERARD' <anthony.perard@citrix.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Christian Lindig' <christian.lindig@citrix.com>,
 'David Scott' <dave@recoil.org>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org, paul@xen.org
References: <20201203124159.3688-1-paul@xen.org>
 <20201203124159.3688-2-paul@xen.org>
 <fea91a65-1d7c-cd46-81a2-9a6bcb690ed1@suse.com>
 <00ee01d6c98b$507af1c0$f170d540$@xen.org>
 <8a4a2027-0df3-aee2-537a-3d2814b329ec@suse.com>
 <00f601d6c996$ce3908d0$6aab1a70$@xen.org>
 <946280c7-c7f7-c760-c0d3-db91e6cde68a@suse.com>
 <011201d6ca16$ae14ac50$0a3e04f0$@xen.org>
 <4fb9fb4c-5849-25f1-ff72-ba3a046d3fd8@suse.com>
 <df1df316-9512-7b0c-fde1-aa4fc60ac70b@xen.org>
 <c5537493-1a6f-cdc1-27dc-a34060e7efc5@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <63af7714-9c03-35b6-99a1-795b678b8032@xen.org>
Date: Mon, 7 Dec 2020 10:04:48 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <c5537493-1a6f-cdc1-27dc-a34060e7efc5@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 07/12/2020 09:17, Jan Beulich wrote:
> On 04.12.2020 12:45, Julien Grall wrote:
>>      1) From the recent security issues (such as XSA-343), a knob to
>> disable FIFO would be quite beneficials for vendors that don't need the
>> feature.
> 
> Except that this wouldn't have been suitable as a during-embargo
> mitigation, for its observability by guests.

I think you are missing my point here. If that knob had been available
before the security event, a vendor may already have decided to use it
to disable the affected feature.

This vendor would then not have needed to spend time hotpatching or
rebooting its entire fleet.

>>      2) Fleet management purpose. You may have a fleet with multiple
>> versions of Xen. You don't want your customer to start relying on
>> features that may not be available on all the hosts otherwise it
>> complicates the guest placement.
> 
> Guests incapable to run on older Xen are a problem in this regard
> anyway, aren't they? And if they are capable, I don't see what
> you're referring to.

It is not about guests that cannot run on older Xen. It is more about
allowing a guest to use a feature that is not yet widely available in
the fleet (you don't update all the hosts in a single night...).

Imagine the guest owner really wants to use feature A, which is only
available on the new Xen version. The owner may have noticed the
feature on an existing running guest and would like to create a new
guest that uses it.

It might be that there is no capacity available on the new Xen
version, so the guest may have to start on a host running the older
one.

I can assure you that the owner will contact the cloud provider's
customer service to ask why the feature is not available on the new
guest.

With a knob available, a cloud provider has more flexibility as to
when the feature can be exposed.

>> FAOD, I am sure there might be other features that need to be disabled.
>> But we have to start somewhere :).
> 
> If there is such a need, then yes, sure. But shouldn't we at least
> gain rough agreement on how the future is going to look like with
> this? IOW have in hands some at least roughly agreed criteria by
> which we could decide which new ABI additions will need some form
> of override going forward (also allowing to judge which prior
> additions may want to gain overrides in a retroactive fashion, or
> in fact should have had ones from the beginning)?

I think the answer is quite straightforward: anything exposed to the
non-privileged (I include stubdomain) guest should have a knob to
disable it.

> 
>>>> Now imagine you are the cloud provider, running Xen. What you did was start to upgrade your hosts from an older version of Xen to a newer version of Xen, to pick up various bug fixes and make sure you are running a version that is within the security support envelope. You identify that your customer's problem is a bug in their OS that was latent on the old version of the hypervisor but is now manifesting on the new one because it has buggy support for a hypercall that was added between the two versions. How are you going to fix this issue, and get your customer up and running again? Of course you'd like your customer to upgrade their OS, but they can't even boot it to do that. You really need a solution that can restore the old VM environment, at least temporarily, for that customer.
>>>
>>> Boot the guest on a not-yet-upgraded host again, to update its kernel?
>>
>> You are making the assumption that the customer would have the choice to
>> target a specific versions of Xen. This may be undesirable for a cloud
>> provider as suddenly your customer may want to stick on the old version
>> of Xen.
> 
> I've gone from you saying "You really need a solution that can restore
> the old VM environment, at least temporarily, for that customer." The
> "temporarily" to me implies that it is at least an option to tie a
> certain guest to a certain Xen version for in-guest upgrading purposes.
>
> If the deal with the customer doesn't include running on a certain Xen
> version, I don't see how this could have non-temporary consequences to
> the cloud provider.

I think by "you", you mean Paul and not me?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:07:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:07:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46082.81740 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDQf-0003K2-9j; Mon, 07 Dec 2020 10:07:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46082.81740; Mon, 07 Dec 2020 10:07:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDQf-0003Jv-6v; Mon, 07 Dec 2020 10:07:37 +0000
Received: by outflank-mailman (input) for mailman id 46082;
 Mon, 07 Dec 2020 10:07:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kmDQd-0003Jq-EJ
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 10:07:35 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmDQc-0004xP-2A; Mon, 07 Dec 2020 10:07:34 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmDQb-0002Eo-Ni; Mon, 07 Dec 2020 10:07:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:References:Cc:To:From:Subject;
	bh=ouQvCwF6cPs+TVYkUwmeUGfuDacOBBcAOCv0Ja9PmJk=; b=Vw2Ab25EEN6JIVzGOe+9YfIrNv
	fvSIll7/O7p3p+CKO6OzY4ucLbxZ/VtfiePKpgqEyPBasAQwV+Vjr58Q9bcmjwqGRw8ZsscJMh1mE
	5DTV/kosB7nLTM1aeVoNZfylY2mgDh2sBQ8Rc0Qsq4VrHLsU0kD5Wrr5O+in7D8xnPb0=;
Subject: Re: [PATCH v5 1/4] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_evtchn_fifo, ...
From: Julien Grall <julien@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: 'Paul Durrant' <pdurrant@amazon.com>,
 'Eslam Elnikety' <elnikety@amazon.com>, 'Ian Jackson' <iwj@xenproject.org>,
 'Wei Liu' <wl@xen.org>, 'Anthony PERARD' <anthony.perard@citrix.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Christian Lindig' <christian.lindig@citrix.com>,
 'David Scott' <dave@recoil.org>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org, paul@xen.org
References: <20201203124159.3688-1-paul@xen.org>
 <20201203124159.3688-2-paul@xen.org>
 <fea91a65-1d7c-cd46-81a2-9a6bcb690ed1@suse.com>
 <00ee01d6c98b$507af1c0$f170d540$@xen.org>
 <8a4a2027-0df3-aee2-537a-3d2814b329ec@suse.com>
 <00f601d6c996$ce3908d0$6aab1a70$@xen.org>
 <946280c7-c7f7-c760-c0d3-db91e6cde68a@suse.com>
 <011201d6ca16$ae14ac50$0a3e04f0$@xen.org>
 <4fb9fb4c-5849-25f1-ff72-ba3a046d3fd8@suse.com>
 <df1df316-9512-7b0c-fde1-aa4fc60ac70b@xen.org>
 <c5537493-1a6f-cdc1-27dc-a34060e7efc5@suse.com>
 <63af7714-9c03-35b6-99a1-795b678b8032@xen.org>
Message-ID: <19308701-04ca-46dc-289c-143679b16c7b@xen.org>
Date: Mon, 7 Dec 2020 10:07:30 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <63af7714-9c03-35b6-99a1-795b678b8032@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 07/12/2020 10:04, Julien Grall wrote:
> Hi Jan,
> 
> On 07/12/2020 09:17, Jan Beulich wrote:
>> On 04.12.2020 12:45, Julien Grall wrote:
>>>      1) From the recent security issues (such as XSA-343), a knob to
>>> disable FIFO would be quite beneficials for vendors that don't need the
>>> feature.
>>
>> Except that this wouldn't have been suitable as a during-embargo
>> mitigation, for its observability by guests.
> 
> I think you are missing my point here. If that knob was available before 
> the security event, a vendor may have already decided to use it to 
> disable feature affected.
> 
> This vendor would not have needed to spend time to either hotpatch or 
> reboot all its fleet.
> 
>>>      2) Fleet management purpose. You may have a fleet with multiple
>>> versions of Xen. You don't want your customer to start relying on
>>> features that may not be available on all the hosts otherwise it
>>> complicates the guest placement.
>>
>> Guests incapable to run on older Xen are a problem in this regard
>> anyway, aren't they? And if they are capable, I don't see what
>> you're referring to.
> 
> It is not about guest that cannot run on older Xen. It is more about 
> allowing a guest to use a feature that is not yet widely available in 
> the fleet (you don't update all the hosts in a night...).
> 
> Imagine the guest owner really wants to use feature A that is only 
> available on new Xen version. The owner may have noticed the feature on 
> an existing running guest and would like to create a new guest that use 
> the feature.
> 
> It might be possible that there are no capacity available on the new Xen 
> version. So the guest may start on an older capacity.
> 
> I can assure you that the owner will contact the customer service of the 
> cloud provider to ask why the feature is not available on the new guest.
> 
> With a knob available, a cloud provider has more flexibility to when the 
> feature can be exposed.
> 
>>> FAOD, I am sure there might be other features that need to be disabled.
>>> But we have to start somewhere :).
>>
>> If there is such a need, then yes, sure. But shouldn't we at least
>> gain rough agreement on what the future is going to look like with
>> this? IOW, have in hand some at least roughly agreed criteria by
>> which we could decide which new ABI additions will need some form
>> of override going forward (also allowing us to judge which prior
>> additions may want to gain overrides retroactively, or in fact
>> should have had them from the beginning)?
> 
> I think the answer is quite straightforward: anything exposed to the 
> non-privileged guest (I exclude stubdomains) should have a knob to 
> disable it.
> 
>>
>>>>> Now imagine you are the cloud provider, running Xen. What you did 
>>>>> was start to upgrade your hosts from an older version of Xen to a 
>>>>> newer version of Xen, to pick up various bug fixes and make sure 
>>>>> you are running a version that is within the security support 
>>>>> envelope. You identify that your customer's problem is a bug in 
>>>>> their OS that was latent on the old version of the hypervisor but 
>>>>> is now manifesting on the new one because it has buggy support for 
>>>>> a hypercall that was added between the two versions. How are you 
>>>>> going to fix this issue, and get your customer up and running 
>>>>> again? Of course you'd like your customer to upgrade their OS, but 
>>>>> they can't even boot it to do that. You really need a solution that 
>>>>> can restore the old VM environment, at least temporarily, for that 
>>>>> customer.
>>>>
>>>> Boot the guest on a not-yet-upgraded host again, to update its kernel?
>>>
>>> You are making the assumption that the customer would have the choice to
>>> target a specific version of Xen. This may be undesirable for a cloud
>>> provider, as suddenly your customer may want to stick to the old version
>>> of Xen.
>>
>> I was going by your saying "You really need a solution that can restore
>> the old VM environment, at least temporarily, for that customer." The
>> "temporarily" to me implies that it is at least an option to tie a
>> certain guest to a certain Xen version for in-guest upgrading purposes.
>>
>> If the deal with the customer doesn't include running on a certain Xen
>> version, I don't see how this could have non-temporary consequences for
>> the cloud provider.
> 
> I think by "you", you mean Paul and not me?
> 
> Cheers,
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:11:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:11:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46091.81753 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDUW-0004Hk-VH; Mon, 07 Dec 2020 10:11:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46091.81753; Mon, 07 Dec 2020 10:11:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDUW-0004Hd-S7; Mon, 07 Dec 2020 10:11:36 +0000
Received: by outflank-mailman (input) for mailman id 46091;
 Mon, 07 Dec 2020 10:11:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmDUV-0004HY-Ht
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 10:11:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ae47d2db-2f2b-4507-ab9b-5efd0c6e01d6;
 Mon, 07 Dec 2020 10:11:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CCBA1AC9A;
 Mon,  7 Dec 2020 10:11:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ae47d2db-2f2b-4507-ab9b-5efd0c6e01d6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607335894; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=SCcClU7oa5vHgsCWal0Ai4WzKWWgRhj5qtSOOVmum70=;
	b=J50JUJjlMIypg5SbnxBxD6BzBte304BGWpHKEszDdFN8vv+/C59Wq3yMTVciRryw8rmJOy
	3tZ9GcfJ+Gka0zV1FRZEl/GuxyJ+GwLJ3glUCB/2vlG50kQnpFRu7NDkoNejwhR/P4sPVS
	cRdRTL/wi/1ZLFbnt6r1FiCD/KYbbKI=
Subject: Re: [PATCH v2] x86/vmap: handle superpages in vmap_to_mfn()
To: Hongyan Xia <hx242@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <4a69a1177f9496ad0e3ea77e9b1d5b802bf83b60.1606994506.git.hongyxia@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8a3c4749-4275-b632-b3fa-073447acd352@suse.com>
Date: Mon, 7 Dec 2020 11:11:36 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <4a69a1177f9496ad0e3ea77e9b1d5b802bf83b60.1606994506.git.hongyxia@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.12.2020 12:21, Hongyan Xia wrote:
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -5194,6 +5194,60 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
>          }                                          \
>      } while ( false )
>  
> +/* Translate mapped Xen address to MFN. */
> +mfn_t xen_map_to_mfn(unsigned long va)
> +{
> +#define CHECK_MAPPED(cond_)     \
> +    if ( !(cond_) )             \
> +    {                           \
> +        ASSERT_UNREACHABLE();   \
> +        ret = INVALID_MFN;      \
> +        goto out;               \
> +    }                           \

This should be coded such that use sites ...

> +    bool locking = system_state > SYS_STATE_boot;
> +    unsigned int l2_offset = l2_table_offset(va);
> +    unsigned int l1_offset = l1_table_offset(va);
> +    const l3_pgentry_t *pl3e = virt_to_xen_l3e(va);
> +    const l2_pgentry_t *pl2e = NULL;
> +    const l1_pgentry_t *pl1e = NULL;
> +    struct page_info *l3page;
> +    mfn_t ret;
> +
> +    L3T_INIT(l3page);
> +    CHECK_MAPPED(pl3e)
> +    l3page = virt_to_page(pl3e);
> +    L3T_LOCK(l3page);
> +
> +    CHECK_MAPPED(l3e_get_flags(*pl3e) & _PAGE_PRESENT)

... will properly require a statement-ending semicolon. With,
additionally, the trailing underscore dropped from the macro's
parameter name:
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Or wait,

> --- a/xen/include/asm-x86/mm.h
> +++ b/xen/include/asm-x86/mm.h
> @@ -578,6 +578,7 @@ mfn_t alloc_xen_pagetable_new(void);
>  void free_xen_pagetable_new(mfn_t mfn);
>  
>  l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
> +mfn_t xen_map_to_mfn(unsigned long va);

This is now a proper companion of map_page_to_xen(), and hence
imo ought to be declared next to that one rather than here.
Ultimately Arm may also need to gain an implementation.

> --- a/xen/include/asm-x86/page.h
> +++ b/xen/include/asm-x86/page.h
> @@ -291,7 +291,7 @@ void copy_page_sse2(void *, const void *);
>  #define pfn_to_paddr(pfn)   __pfn_to_paddr(pfn)
>  #define paddr_to_pfn(pa)    __paddr_to_pfn(pa)
>  #define paddr_to_pdx(pa)    pfn_to_pdx(paddr_to_pfn(pa))
> -#define vmap_to_mfn(va)     l1e_get_mfn(*virt_to_xen_l1e((unsigned long)(va)))
> +#define vmap_to_mfn(va)     xen_map_to_mfn((unsigned long)va)

You've lost parentheses around va.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:14:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:14:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46097.81764 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDWr-0004Rd-Cx; Mon, 07 Dec 2020 10:14:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46097.81764; Mon, 07 Dec 2020 10:14:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDWr-0004RW-9j; Mon, 07 Dec 2020 10:14:01 +0000
Received: by outflank-mailman (input) for mailman id 46097;
 Mon, 07 Dec 2020 10:14:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmDWq-0004RO-35; Mon, 07 Dec 2020 10:14:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmDWp-00054s-Oj; Mon, 07 Dec 2020 10:13:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmDWp-0001K4-FS; Mon, 07 Dec 2020 10:13:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kmDWp-0007k4-F1; Mon, 07 Dec 2020 10:13:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CNaEc4NUwhdq9nASK8lz1cf+TwclsPDmPIp3x+hJZng=; b=NKmduwTdRNe0XdcXzeR9XJnPXA
	W4lG2/VB63xfwGHR9VK9uZ8eFDUZRzAp5/p/FShLuDaErmLaJbILMmOl33l5YQ7xf/XC1is0zBOV7
	+it8a1IpDDRWNoFXOpwz39LK+SNe6tURJuWf7vCvke09dzRS4lgUAFgNPHWdGmDGdq3w=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157248-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157248: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=0477e92881850d44910a7e94fc2c46f96faa131f
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 07 Dec 2020 10:13:59 +0000

flight 157248 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157248/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl          12 debian-install           fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                0477e92881850d44910a7e94fc2c46f96faa131f
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  128 days
Failing since        152366  2020-08-01 20:49:34 Z  127 days  218 attempts
Testing same since   157248  2020-12-07 01:10:12 Z    0 days    1 attempts

------------------------------------------------------------
3652 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 700670 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:15:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:15:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46104.81780 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDYE-0004a8-TD; Mon, 07 Dec 2020 10:15:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46104.81780; Mon, 07 Dec 2020 10:15:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDYE-0004a1-OC; Mon, 07 Dec 2020 10:15:26 +0000
Received: by outflank-mailman (input) for mailman id 46104;
 Mon, 07 Dec 2020 10:15:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmDYD-0004Zv-7o
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 10:15:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0a5a6bde-27c8-463a-bf4d-459b1adea722;
 Mon, 07 Dec 2020 10:15:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A99EEAC55;
 Mon,  7 Dec 2020 10:15:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0a5a6bde-27c8-463a-bf4d-459b1adea722
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607336123; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XS6kqLRvxf+KBioev+tUZsKw3t9Cq5o1WWozwu/1000=;
	b=awJ5kDavviU3ognfxNE7hNgrAMy5ap7qB2CDbwDnguPx82oBrlZpvdt7jK364LkcPxGNs5
	ZpFkiQBFhCOlULu+KMclEuo5zXjiEr+flpPz56WL4mmE/FM9D2XZW9pRh+JP1mkAg5nH36
	wZzffaSol5KzA8WvfEz/AHf6iXU52MU=
Subject: Re: [PATCH v5 1/4] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_evtchn_fifo, ...
To: Julien Grall <julien@xen.org>
Cc: 'Paul Durrant' <pdurrant@amazon.com>,
 'Eslam Elnikety' <elnikety@amazon.com>, 'Ian Jackson' <iwj@xenproject.org>,
 'Wei Liu' <wl@xen.org>, 'Anthony PERARD' <anthony.perard@citrix.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Christian Lindig' <christian.lindig@citrix.com>,
 'David Scott' <dave@recoil.org>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org, paul@xen.org
References: <20201203124159.3688-1-paul@xen.org>
 <20201203124159.3688-2-paul@xen.org>
 <fea91a65-1d7c-cd46-81a2-9a6bcb690ed1@suse.com>
 <00ee01d6c98b$507af1c0$f170d540$@xen.org>
 <8a4a2027-0df3-aee2-537a-3d2814b329ec@suse.com>
 <00f601d6c996$ce3908d0$6aab1a70$@xen.org>
 <946280c7-c7f7-c760-c0d3-db91e6cde68a@suse.com>
 <011201d6ca16$ae14ac50$0a3e04f0$@xen.org>
 <4fb9fb4c-5849-25f1-ff72-ba3a046d3fd8@suse.com>
 <df1df316-9512-7b0c-fde1-aa4fc60ac70b@xen.org>
 <c5537493-1a6f-cdc1-27dc-a34060e7efc5@suse.com>
 <63af7714-9c03-35b6-99a1-795b678b8032@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <93d4ff1c-9f8a-c318-50f8-add2820059d7@suse.com>
Date: Mon, 7 Dec 2020 11:15:25 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <63af7714-9c03-35b6-99a1-795b678b8032@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 07.12.2020 11:04, Julien Grall wrote:
> On 07/12/2020 09:17, Jan Beulich wrote:
>> On 04.12.2020 12:45, Julien Grall wrote:
>>> You are making the assumption that the customer would have the choice to
>>> target a specific version of Xen. This may be undesirable for a cloud
>>> provider as suddenly your customer may want to stick to the old version
>>> of Xen.
>>
>> I've gone from you saying "You really need a solution that can restore
>> the old VM environment, at least temporarily, for that customer." The
>> "temporarily" to me implies that it is at least an option to tie a
>> certain guest to a certain Xen version for in-guest upgrading purposes.
>  >
>> If the deal with the customer doesn't include running on a certain Xen
>> version, I don't see how this could have non-temporary consequences to
>> the cloud provider.
> 
> I think by "you", you mean Paul and not me?

Oh, right, I didn't pay attention to who wrote that text. Sorry.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:23:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:23:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46112.81792 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDgG-0005ch-Sh; Mon, 07 Dec 2020 10:23:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46112.81792; Mon, 07 Dec 2020 10:23:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDgG-0005ca-Ny; Mon, 07 Dec 2020 10:23:44 +0000
Received: by outflank-mailman (input) for mailman id 46112;
 Mon, 07 Dec 2020 10:23:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmDgF-0005cU-So
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 10:23:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 179f1264-87fb-4104-b38d-51b17b54ac20;
 Mon, 07 Dec 2020 10:23:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3E444AC90;
 Mon,  7 Dec 2020 10:23:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 179f1264-87fb-4104-b38d-51b17b54ac20
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607336622; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=xQnepCiatrE8sJzTWk61do2J/pPpjtZ2Ei47QgkZUTM=;
	b=KnPyb8/oUA3AGG3RmXwfs2LZ/TfWPyU6an5SHK7SxPjNLmkAC/InvtijVoEvi+1CmBEpai
	OmLlsDv3PqTesY9Cg3ODnsgmAjut/y+l/KLQ3OtNY0gYUeHIYPcEBE4glx693aK2RwnLlf
	3rgOB1w1yQUVSwioZgrERK1NOEkl5Ao=
Subject: Re: [PATCH v2 7/8] lib: move bsearch code
To: Julien Grall <julien@xen.org>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
 <87a20884-5a76-a664-dcc9-bd4becee40b3@suse.com>
 <44ffc041-cacd-468e-a835-f5b2048bb201@xen.org>
 <2cf3a90d-f463-41f8-f861-6ef00279b204@suse.com>
 <2419eccf-c696-6aa1-ada4-0f7bd6bc5657@xen.org>
 <77534dc3-bdd6-f884-99e3-90dc9b02a81f@citrix.com>
 <59a4e1c1-ea39-1846-92ae-92560db4b1fb@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e0782c3b-9958-3792-eab9-d3fd6708225f@suse.com>
Date: Mon, 7 Dec 2020 11:23:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <59a4e1c1-ea39-1846-92ae-92560db4b1fb@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 24.11.2020 17:57, Julien Grall wrote:
> On 24/11/2020 00:40, Andrew Cooper wrote:
>> On a totally separate point, I wonder if we'd be better off compiling
>> with -fgnu89-inline because I can't see any case where we'd want the C99
>> inline semantics anywhere in Xen.
> 
> This was one of my points above. It feels that if we want to use the
> behavior in Xen, then it should be everywhere rather than just this helper.

I'll be committing the series up to patch 6 in a minute. It remains
unclear to me whether your responses on this sub-thread are meant
to be an objection, or just a comment. Andrew gave his R-b despite
this separate consideration, and I now also have an ack from Wei
for the entire series. Please clarify.

Or actually I only thought I could commit a fair initial part of
the series - I'm still lacking Arm-side acks for patches 2 and 3
here.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:23:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:23:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46113.81804 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDgM-0005ea-3T; Mon, 07 Dec 2020 10:23:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46113.81804; Mon, 07 Dec 2020 10:23:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDgM-0005eR-0A; Mon, 07 Dec 2020 10:23:50 +0000
Received: by outflank-mailman (input) for mailman id 46113;
 Mon, 07 Dec 2020 10:23:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1Acm=FL=redhat.com=berrange@srs-us1.protection.inumbo.net>)
 id 1kmDgK-0005cU-Mq
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 10:23:48 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 9180c6ca-a37b-458b-ad0f-303c5a2da8bd;
 Mon, 07 Dec 2020 10:23:46 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-451-W4NZmy85M4282Y90Lqh5rw-1; Mon, 07 Dec 2020 05:23:32 -0500
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com
 [10.5.11.11])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 7432C801AE8;
 Mon,  7 Dec 2020 10:23:29 +0000 (UTC)
Received: from redhat.com (ovpn-113-137.ams2.redhat.com [10.36.113.137])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 435C360BD8;
 Mon,  7 Dec 2020 10:23:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9180c6ca-a37b-458b-ad0f-303c5a2da8bd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607336626;
	h=from:from:reply-to:reply-to:subject:subject:date:date:
	 message-id:message-id:to:to:cc:cc:mime-version:mime-version:
	 content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0qgjftJvMWB28i7vhzEDVbkT48Qo4f02gO6Q9KnMTyo=;
	b=cGIxO2JaYQZ1vArjne8aOanS0fgGYaiYtDCS+A+HJr+skYSu8u34pX/TAz2mQgDovDLirx
	Hl7PrG3dX00XIRC19PBjr/LKZht0GnnhSJniXRnjBg3yxN/qWcVmfs87wXB9TF3Q96Hx0y
	/lNeCFKSiGMssqDBZuYg9W6CHWacZLw=
X-MC-Unique: W4NZmy85M4282Y90Lqh5rw-1
Date: Mon, 7 Dec 2020 10:23:16 +0000
From: Daniel =?utf-8?B?UC4gQmVycmFuZ8Op?= <berrange@redhat.com>
To: Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Cc: qemu-devel@nongnu.org, Peter Maydell <peter.maydell@linaro.org>,
	Marcelo Tosatti <mtosatti@redhat.com>, kvm@vger.kernel.org,
	Paul Durrant <paul@xen.org>, Thomas Huth <thuth@redhat.com>,
	Willian Rampazzo <wrampazz@redhat.com>,
	Huacai Chen <chenhc@lemote.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wainer dos Santos Moschetta <wainersm@redhat.com>,
	Halil Pasic <pasic@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Claudio Fontana <cfontana@suse.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	qemu-s390x@nongnu.org, qemu-arm@nongnu.org,
	Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>,
	David Gibson <david@gibson.dropbear.id.au>,
	Cornelia Huck <cohuck@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aurelien Jarno <aurelien@aurel32.net>
Subject: Re: [PATCH 0/8] gitlab-ci: Add accelerator-specific Linux jobs
Message-ID: <20201207102316.GF3102898@redhat.com>
Reply-To: Daniel =?utf-8?B?UC4gQmVycmFuZ8Op?= <berrange@redhat.com>
References: <20201206185508.3545711-1-philmd@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201206185508.3545711-1-philmd@redhat.com>
User-Agent: Mutt/1.14.6 (2020-07-11)
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11

On Sun, Dec 06, 2020 at 07:55:00PM +0100, Philippe Mathieu-Daudé wrote:
> Hi,
> 
> I was accustomed to using Travis-CI for testing KVM builds on s390x/ppc.
> 
> In October, Travis-CI became unusable for me (extremely slow,
> see [1]). Then my free Travis account was moved to the new
> "10K credit minutes allotment" [2], which I burned through without
> reading the notification email in time (I'd have burned them
> eventually anyway).
> 
> Today Travis-CI is pointless to me. While I could pay to run my
> QEMU jobs, I don't think it is fair for an Open Source project to
> ask its forks to pay for a service.
> 
> As we want forks to run some CI before contributing patches, and
> we have cross-build Docker images available for Linux hosts, I
> added some cross KVM/Xen build jobs to Gitlab-CI.
> 
> Cross-building doesn't have the same coverage as native building,
> as we cannot run the tests. But it is still useful for catching link
> failures.
> 
> Each job is added in its own YAML file, so it is easier to notify
> subsystem maintainers in case of trouble.
> 
> Resulting pipeline:
> https://gitlab.com/philmd/qemu/-/pipelines/225948077
> 
> Regards,
> 
> Phil.
> 
> [1] https://travis-ci.community/t/build-delays-for-open-source-project/10272
> [2] https://blog.travis-ci.com/2020-11-02-travis-ci-new-billing
> 
> Philippe Mathieu-Daudé (8):
>   gitlab-ci: Replace YAML anchors by extends (cross_system_build_job)
>   gitlab-ci: Introduce 'cross_accel_build_job' template
>   gitlab-ci: Add KVM X86 cross-build jobs
>   gitlab-ci: Add KVM ARM cross-build jobs
>   gitlab-ci: Add KVM s390x cross-build jobs
>   gitlab-ci: Add KVM PPC cross-build jobs
>   gitlab-ci: Add KVM MIPS cross-build jobs
>   gitlab-ci: Add Xen cross-build jobs
> 
>  .gitlab-ci.d/crossbuilds-kvm-arm.yml   |  5 +++
>  .gitlab-ci.d/crossbuilds-kvm-mips.yml  |  5 +++
>  .gitlab-ci.d/crossbuilds-kvm-ppc.yml   |  5 +++
>  .gitlab-ci.d/crossbuilds-kvm-s390x.yml |  6 +++
>  .gitlab-ci.d/crossbuilds-kvm-x86.yml   |  6 +++
>  .gitlab-ci.d/crossbuilds-xen.yml       | 14 +++++++

Adding so many different files here is crazy IMHO; they should
all be under the same GitLab CI maintainers, not the respective
arch maintainers. The MAINTAINERS file says who is responsible
for the contents of the .yml file, not who is responsible for making
sure KVM works on that arch.

>  .gitlab-ci.d/crossbuilds.yml           | 52 ++++++++++++++++----------
>  .gitlab-ci.yml                         |  6 +++
>  MAINTAINERS                            |  6 +++
>  9 files changed, 85 insertions(+), 20 deletions(-)
>  create mode 100644 .gitlab-ci.d/crossbuilds-kvm-arm.yml
>  create mode 100644 .gitlab-ci.d/crossbuilds-kvm-mips.yml
>  create mode 100644 .gitlab-ci.d/crossbuilds-kvm-ppc.yml
>  create mode 100644 .gitlab-ci.d/crossbuilds-kvm-s390x.yml
>  create mode 100644 .gitlab-ci.d/crossbuilds-kvm-x86.yml
>  create mode 100644 .gitlab-ci.d/crossbuilds-xen.yml

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:24:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:24:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46122.81816 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDgt-0005oL-Cb; Mon, 07 Dec 2020 10:24:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46122.81816; Mon, 07 Dec 2020 10:24:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDgt-0005oE-9Q; Mon, 07 Dec 2020 10:24:23 +0000
Received: by outflank-mailman (input) for mailman id 46122;
 Mon, 07 Dec 2020 10:24:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=og4e=FL=amazon.co.uk=prvs=60380b542=pdurrant@srs-us1.protection.inumbo.net>)
 id 1kmDgs-0005mY-69
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 10:24:22 +0000
Received: from smtp-fw-6002.amazon.com (unknown [52.95.49.90])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 95db8a9f-74d6-47ff-8392-343e496d73c4;
 Mon, 07 Dec 2020 10:24:16 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-2b-8cc5d68b.us-west-2.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-6002.iad6.amazon.com with ESMTP;
 07 Dec 2020 10:24:08 +0000
Received: from EX13D03EUC004.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan3.pdx.amazon.com [10.236.137.198])
 by email-inbound-relay-2b-8cc5d68b.us-west-2.amazon.com (Postfix) with ESMTPS
 id EAFE0A1822; Mon,  7 Dec 2020 10:23:57 +0000 (UTC)
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D03EUC004.ant.amazon.com (10.43.164.33) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Mon, 7 Dec 2020 10:23:56 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Mon, 7 Dec 2020 10:23:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 95db8a9f-74d6-47ff-8392-343e496d73c4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
  s=amazon201209; t=1607336657; x=1638872657;
  h=from:to:cc:date:message-id:references:in-reply-to:
   content-transfer-encoding:mime-version:subject;
  bh=U0bim6zkzxMacxlqZrtth4BZTZBryVs9GrfbNQCSqqs=;
  b=BEStf6gC7bTWmcUfks64ZRKP01yFKGFOPDFyFXe3mLTLhfny4JnPNOqt
   W6/173mOBaR2wyd4O3rox32/96EWeLB0UY6WeUBIbKwila5Vds9qA0O43
   kE6bzpKgimn/aUjLVDghW4nGwqWs7gnTG6lUc8EZ2W/S0qGcFd02K6RLy
   k=;
X-IronPort-AV: E=Sophos;i="5.78,399,1599523200"; 
   d="scan'208";a="69530302"
Subject: RE: [PATCH v5 1/4] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_evtchn_fifo, ...
Thread-Topic: [PATCH v5 1/4] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_evtchn_fifo, ...
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>
CC: "Elnikety, Eslam" <elnikety@amazon.com>, 'Ian Jackson'
	<iwj@xenproject.org>, 'Wei Liu' <wl@xen.org>, 'Anthony PERARD'
	<anthony.perard@citrix.com>, 'Andrew Cooper' <andrew.cooper3@citrix.com>,
	'George Dunlap' <george.dunlap@citrix.com>, 'Stefano Stabellini'
	<sstabellini@kernel.org>, 'Christian Lindig' <christian.lindig@citrix.com>,
	'David Scott' <dave@recoil.org>, 'Volodymyr Babchuk'
	<Volodymyr_Babchuk@epam.com>, =?utf-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?=
	<roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "paul@xen.org" <paul@xen.org>
Thread-Index: AQJzrCHAX/gquRkO4g65wfjgQBytUAJ7xs67Aj6rtzSoheWkIIAACFOAgAATywCAAPdoAIAACFeAgAAWa4CAACI7AIAEjX2AgAANTgCAAAL4gIAAALmQ
Date: Mon, 7 Dec 2020 10:23:56 +0000
Message-ID: <bba9cef29617481e8f88f9117e55ce76@EX13D32EUC003.ant.amazon.com>
References: <20201203124159.3688-1-paul@xen.org>
 <20201203124159.3688-2-paul@xen.org>
 <fea91a65-1d7c-cd46-81a2-9a6bcb690ed1@suse.com>
 <00ee01d6c98b$507af1c0$f170d540$@xen.org>
 <8a4a2027-0df3-aee2-537a-3d2814b329ec@suse.com>
 <00f601d6c996$ce3908d0$6aab1a70$@xen.org>
 <946280c7-c7f7-c760-c0d3-db91e6cde68a@suse.com>
 <011201d6ca16$ae14ac50$0a3e04f0$@xen.org>
 <4fb9fb4c-5849-25f1-ff72-ba3a046d3fd8@suse.com>
 <df1df316-9512-7b0c-fde1-aa4fc60ac70b@xen.org>
 <c5537493-1a6f-cdc1-27dc-a34060e7efc5@suse.com>
 <63af7714-9c03-35b6-99a1-795b678b8032@xen.org>
 <93d4ff1c-9f8a-c318-50f8-add2820059d7@suse.com>
In-Reply-To: <93d4ff1c-9f8a-c318-50f8-add2820059d7@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.165.145]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Precedence: Bulk

> -----Original Message-----
[snip]
> >> I've gone from you saying "You really need a solution that can restore
> >> the old VM environment, at least temporarily, for that customer." The
> >> "temporarily" to me implies that it is at least an option to tie a
> >> certain guest to a certain Xen version for in-guest upgrading purposes.
> >  >

Not necessarily.

> >> If the deal with the customer doesn't include running on a certain Xen
> >> version, I don't see how this could have non-temporary consequences to
> >> the cloud provider.
> >
> > I think by "you", you mean Paul and not me?
> 
> Oh, right, I didn't pay attention to who wrote that text. Sorry.
> 

By temporary I mean that we may want to time-limit turning off a certain
part of the ABI because, whilst it is problematic for some customers, it
could (and is likely to) have measurable benefits to others. Thus you
keep the feature off only until any customers running OSes that have
problems have upgraded their installations.

  Paul


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:25:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:25:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46127.81828 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDhx-0005xi-Nd; Mon, 07 Dec 2020 10:25:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46127.81828; Mon, 07 Dec 2020 10:25:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDhx-0005xb-KV; Mon, 07 Dec 2020 10:25:29 +0000
Received: by outflank-mailman (input) for mailman id 46127;
 Mon, 07 Dec 2020 10:25:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1Acm=FL=redhat.com=berrange@srs-us1.protection.inumbo.net>)
 id 1kmDhw-0005xU-B0
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 10:25:28 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 81d14cd7-cad1-4b66-b4b9-44ece13be304;
 Mon, 07 Dec 2020 10:25:25 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-11-VhlrEtJON5GtTXd0WawECg-1; Mon, 07 Dec 2020 05:25:23 -0500
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com
 [10.5.11.12])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id ADC311005513;
 Mon,  7 Dec 2020 10:25:20 +0000 (UTC)
Received: from redhat.com (ovpn-113-137.ams2.redhat.com [10.36.113.137])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 6921C60BE2;
 Mon,  7 Dec 2020 10:25:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 81d14cd7-cad1-4b66-b4b9-44ece13be304
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607336725;
	h=from:from:reply-to:reply-to:subject:subject:date:date:
	 message-id:message-id:to:to:cc:cc:mime-version:mime-version:
	 content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=lgvE6LyH4jrE6pQS01wyiRU9NGcs0HDZI4+lkUVm7vc=;
	b=HEhpMnQ+XLIrHO4FWkVVCTFIgHBRpY+s0JhdVTs64XOAWq0nEdzU4lEdnoz/nQ+q6fCaSQ
	kNVnigQsYBNIz5UAsXCTWaYWDu5YbKZR9LiolZIy8ov4uDKzVqwF30ERJxECHST+FsFhzJ
	3d1H1DPIBgLpflcCCgvpzGJ7oKPXgBg=
X-MC-Unique: VhlrEtJON5GtTXd0WawECg-1
Date: Mon, 7 Dec 2020 10:25:08 +0000
From: Daniel =?utf-8?B?UC4gQmVycmFuZ8Op?= <berrange@redhat.com>
To: Thomas Huth <thuth@redhat.com>
Cc: Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	qemu-devel@nongnu.org, Peter Maydell <peter.maydell@linaro.org>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	Stefano Stabellini <sstabellini@kernel.org>, kvm@vger.kernel.org,
	Paul Durrant <paul@xen.org>, Cornelia Huck <cohuck@redhat.com>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Wainer dos Santos Moschetta <wainersm@redhat.com>,
	Halil Pasic <pasic@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	qemu-s390x@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>,
	Willian Rampazzo <wrampazz@redhat.com>,
	Huacai Chen <chenhc@lemote.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org,
	David Gibson <david@gibson.dropbear.id.au>,
	Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Aurelien Jarno <aurelien@aurel32.net>,
	Claudio Fontana <cfontana@suse.de>
Subject: Re: [PATCH 5/8] gitlab-ci: Add KVM s390x cross-build jobs
Message-ID: <20201207102450.GG3102898@redhat.com>
Reply-To: Daniel =?utf-8?B?UC4gQmVycmFuZ8Op?= <berrange@redhat.com>
References: <20201206185508.3545711-1-philmd@redhat.com>
 <20201206185508.3545711-6-philmd@redhat.com>
 <66d4d0ab-2bb5-1284-b08a-43c6c30f30dc@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <66d4d0ab-2bb5-1284-b08a-43c6c30f30dc@redhat.com>
User-Agent: Mutt/1.14.6 (2020-07-11)
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12

On Mon, Dec 07, 2020 at 06:46:01AM +0100, Thomas Huth wrote:
> On 06/12/2020 19.55, Philippe Mathieu-Daudé wrote:
> > Cross-build s390x target with only KVM accelerator enabled.
> > 
> > Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > ---
> >  .gitlab-ci.d/crossbuilds-kvm-s390x.yml | 6 ++++++
> >  .gitlab-ci.yml                         | 1 +
> >  MAINTAINERS                            | 1 +
> >  3 files changed, 8 insertions(+)
> >  create mode 100644 .gitlab-ci.d/crossbuilds-kvm-s390x.yml
> > 
> > diff --git a/.gitlab-ci.d/crossbuilds-kvm-s390x.yml b/.gitlab-ci.d/crossbuilds-kvm-s390x.yml
> > new file mode 100644
> > index 00000000000..1731af62056
> > --- /dev/null
> > +++ b/.gitlab-ci.d/crossbuilds-kvm-s390x.yml
> > @@ -0,0 +1,6 @@
> > +cross-s390x-kvm:
> > +  extends: .cross_accel_build_job
> > +  variables:
> > +    IMAGE: debian-s390x-cross
> > +    TARGETS: s390x-softmmu
> > +    ACCEL_CONFIGURE_OPTS: --disable-tcg
> > diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
> > index 573afceb3c7..a69619d7319 100644
> > --- a/.gitlab-ci.yml
> > +++ b/.gitlab-ci.yml
> > @@ -14,6 +14,7 @@ include:
> >    - local: '/.gitlab-ci.d/crossbuilds.yml'
> >    - local: '/.gitlab-ci.d/crossbuilds-kvm-x86.yml'
> >    - local: '/.gitlab-ci.d/crossbuilds-kvm-arm.yml'
> > +  - local: '/.gitlab-ci.d/crossbuilds-kvm-s390x.yml'
> 
> KVM code is already covered by the "cross-s390x-system" job, but an
> additional compilation test with --disable-tcg makes sense here. I'd then
> rather name it "cross-s390x-no-tcg" or so instead of "cross-s390x-kvm".
> 
> And while you're at it, I'd maybe rather name the new file just
> crossbuilds-s390x.yml and also move the other s390x related jobs into it?

I don't think we really should split it up so much - just put these
jobs in the existing crossbuilds.yml file.
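
(Purely illustrative — a sketch of how the job from the patch might look if folded into the existing crossbuilds.yml with the rename Thomas suggested; the job body is taken verbatim from the patch above, only the name and file placement are assumed:)

```yaml
# Sketch: the patch's job, renamed per review and placed directly in
# .gitlab-ci.d/crossbuilds.yml rather than in a new per-target file.
cross-s390x-no-tcg:
  extends: .cross_accel_build_job
  variables:
    IMAGE: debian-s390x-cross
    TARGETS: s390x-softmmu
    ACCEL_CONFIGURE_OPTS: --disable-tcg
```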


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:26:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:26:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46132.81840 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDif-00064Q-1L; Mon, 07 Dec 2020 10:26:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46132.81840; Mon, 07 Dec 2020 10:26:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDie-00064J-U6; Mon, 07 Dec 2020 10:26:12 +0000
Received: by outflank-mailman (input) for mailman id 46132;
 Mon, 07 Dec 2020 10:26:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Pxd=FL=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kmDid-00064C-Mb
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 10:26:11 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 143ed5d6-6661-467c-a47c-e1d9544f22b0;
 Mon, 07 Dec 2020 10:26:11 +0000 (UTC)
Received: from mail-wr1-f69.google.com (mail-wr1-f69.google.com
 [209.85.221.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-522-TlDxTjxgM06wyNfpVlqRIA-1; Mon, 07 Dec 2020 05:26:09 -0500
Received: by mail-wr1-f69.google.com with SMTP id v1so4691307wri.16
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 02:26:09 -0800 (PST)
Received: from [192.168.1.36] (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id n189sm13572019wmf.20.2020.12.07.02.26.05
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 07 Dec 2020 02:26:06 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 143ed5d6-6661-467c-a47c-e1d9544f22b0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607336770;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=J/MxqDMH4iax6gZAnLZgFNJcpANG56pQVPbXFBf8CsE=;
	b=NU6mvteMJw+bK8pnh+4Cdk/dLukYNzRI8kcUcW+VfdmZopyECIqiG2UUUuEDwC+FerGtMZ
	AEzbjXoxI0B6rLKIKFGZu4i+S3znekBniHKjb0Wy6N93vOMPsDof/nRs/9cjGWc87YZdYq
	Li02V/laYJiZNcZ/ZADbroNmTbXQ9dQ=
X-MC-Unique: TlDxTjxgM06wyNfpVlqRIA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:from:to:cc:references:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=J/MxqDMH4iax6gZAnLZgFNJcpANG56pQVPbXFBf8CsE=;
        b=Xwbmew0WiW2XlfKKdBgjOsS0ep5yNACboY6mixuXvZR3cFglcLjN6QiOCGLkT3JwgO
         dW0KrkTaZ+Njn9OVX55CGCQ2RrqjNAtdcI1Pte5B4uTGWKNzELwAO12Gb31dpJ4r6U7w
         +lB1T48DzvcQurQk1PLjUyJA7J0nbpQ7NA9Ox0sIJPWjydby/MPtTjv1HV53DGF6kPoY
         byWcv2QojuZYwUJc3Cajqs62eirDdkR5FPCKIHSS9LLfyCg21Z+Qn6ZIzf+Zd7qtM1Sd
         CJiQPx3MiSUObcJaN9+w5sz9nCVr9+/4CZdjISo7wiqBIHj/iLQJQDYJisE+vtYKd1dI
         OTNQ==
X-Gm-Message-State: AOAM532mAGo4jIvGmmyI4CTFwj/16Ttlj34IhIMTDJPQxQguGGH3ktI7
	4N9DOgWNep6SRP+LanQBgzISLpc1X9wZMFD4HnO+Syg1KpFSP15XSGJnGOpSacOyPKAM6qQORkX
	Ul+nNl5rdTYmjISAW7KESKlUL/d0=
X-Received: by 2002:adf:f7c7:: with SMTP id a7mr18479912wrq.347.1607336767411;
        Mon, 07 Dec 2020 02:26:07 -0800 (PST)
X-Google-Smtp-Source: ABdhPJwhcAghl9OpXDUnMnlNasl3UZRYONq9ISpQW3iUKTZZ4vH7cH2+LswiXSeNsBKrig5E8lZLoA==
X-Received: by 2002:adf:f7c7:: with SMTP id a7mr18479882wrq.347.1607336767232;
        Mon, 07 Dec 2020 02:26:07 -0800 (PST)
Subject: Re: [PATCH 5/8] gitlab-ci: Add KVM s390x cross-build jobs
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
To: Thomas Huth <thuth@redhat.com>, qemu-devel@nongnu.org
Cc: =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Willian Rampazzo
 <wrampazz@redhat.com>, Paul Durrant <paul@xen.org>,
 Huacai Chen <chenhc@lemote.com>, Anthony Perard <anthony.perard@citrix.com>,
 Marcelo Tosatti <mtosatti@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Claudio Fontana <cfontana@suse.de>, Halil Pasic <pasic@linux.ibm.com>,
 Peter Maydell <peter.maydell@linaro.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Cornelia Huck <cohuck@redhat.com>, David Gibson
 <david@gibson.dropbear.id.au>, Paolo Bonzini <pbonzini@redhat.com>,
 qemu-s390x@nongnu.org, Aurelien Jarno <aurelien@aurel32.net>
References: <20201206185508.3545711-1-philmd@redhat.com>
 <20201206185508.3545711-6-philmd@redhat.com>
 <66d4d0ab-2bb5-1284-b08a-43c6c30f30dc@redhat.com>
 <2352c04c-829e-ea1d-0894-15fc1d06697a@redhat.com>
Message-ID: <cd5d00b1-999a-fbb3-204e-a759a9e2c3ec@redhat.com>
Date: Mon, 7 Dec 2020 11:26:04 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <2352c04c-829e-ea1d-0894-15fc1d06697a@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12/7/20 11:00 AM, Philippe Mathieu-Daudé wrote:
> On 12/7/20 6:46 AM, Thomas Huth wrote:
>> On 06/12/2020 19.55, Philippe Mathieu-Daudé wrote:
>>> Cross-build s390x target with only KVM accelerator enabled.
>>>
>>> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>>> ---
>>>  .gitlab-ci.d/crossbuilds-kvm-s390x.yml | 6 ++++++
>>>  .gitlab-ci.yml                         | 1 +
>>>  MAINTAINERS                            | 1 +
>>>  3 files changed, 8 insertions(+)
>>>  create mode 100644 .gitlab-ci.d/crossbuilds-kvm-s390x.yml
>>>
>>> diff --git a/.gitlab-ci.d/crossbuilds-kvm-s390x.yml b/.gitlab-ci.d/crossbuilds-kvm-s390x.yml
>>> new file mode 100644
>>> index 00000000000..1731af62056
>>> --- /dev/null
>>> +++ b/.gitlab-ci.d/crossbuilds-kvm-s390x.yml
>>> @@ -0,0 +1,6 @@
>>> +cross-s390x-kvm:
>>> +  extends: .cross_accel_build_job
>>> +  variables:
>>> +    IMAGE: debian-s390x-cross
>>> +    TARGETS: s390x-softmmu
>>> +    ACCEL_CONFIGURE_OPTS: --disable-tcg
>>> diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
>>> index 573afceb3c7..a69619d7319 100644
>>> --- a/.gitlab-ci.yml
>>> +++ b/.gitlab-ci.yml
>>> @@ -14,6 +14,7 @@ include:
>>>    - local: '/.gitlab-ci.d/crossbuilds.yml'
>>>    - local: '/.gitlab-ci.d/crossbuilds-kvm-x86.yml'
>>>    - local: '/.gitlab-ci.d/crossbuilds-kvm-arm.yml'
>>> +  - local: '/.gitlab-ci.d/crossbuilds-kvm-s390x.yml'
>>
>> KVM code is already covered by the "cross-s390x-system" job, but an
>> additional compilation test with --disable-tcg makes sense here. I'd then
>> rather name it "cross-s390x-no-tcg" or so instead of "cross-s390x-kvm".

What other accelerators are available on 390?

> 
> As you wish. What I want is to let GitLab users build the
> equivalent of the "[s390x] Clang (disable-tcg)" job from Travis.
> 
> I keep using the GCC toolchain because managing job-coverage
> duplication is an unresolved problem.
> 
>>
>> And while you're at it, I'd maybe rather name the new file just
>> crossbuilds-s390x.yml and also move the other s390x related jobs into it?
> 
> OK will do.
> 
> Thanks,
> 
> Phil.
> 



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:27:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:27:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46140.81852 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDjW-0006Dh-E8; Mon, 07 Dec 2020 10:27:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46140.81852; Mon, 07 Dec 2020 10:27:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDjW-0006Da-AT; Mon, 07 Dec 2020 10:27:06 +0000
Received: by outflank-mailman (input) for mailman id 46140;
 Mon, 07 Dec 2020 10:27:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Pxd=FL=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kmDjV-0006DU-6c
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 10:27:05 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 2b590d77-d9e3-43e8-bc56-775eefe3a34a;
 Mon, 07 Dec 2020 10:27:04 +0000 (UTC)
Received: from mail-wm1-f69.google.com (mail-wm1-f69.google.com
 [209.85.128.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-408-WbqAKidON5SD0AIiRoHe1Q-1; Mon, 07 Dec 2020 05:27:02 -0500
Received: by mail-wm1-f69.google.com with SMTP id b184so3980816wmh.6
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 02:27:02 -0800 (PST)
Received: from [192.168.1.36] (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id o203sm14297391wmb.0.2020.12.07.02.26.59
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 07 Dec 2020 02:27:00 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b590d77-d9e3-43e8-bc56-775eefe3a34a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607336824;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=MYK9SCRGoM7soozgWZPnpW4pQBMVhd9wGe6LkNWKsP0=;
	b=W4qQL0pWHNkRDJrBvw2tHO0v3F8KmAeSNN8x1T7xcYd0ypEmBX0EFc4QMcLUu+n15XHRlp
	xK+VInoiB3qnCQOmfYmIfnBtjfRb7f0iO1MEZc09fy1H1MI+utq8ZHaeT20hQNVtX+hr1B
	9HtN2Ysq85kXjIEf6DJdMmc4zfBOjwg=
X-MC-Unique: WbqAKidON5SD0AIiRoHe1Q-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=MYK9SCRGoM7soozgWZPnpW4pQBMVhd9wGe6LkNWKsP0=;
        b=M+mSOT3JwPd9W2QUD+zhYdV5zcopREGxhtB4RsQpw7JyABDRW4vyox/6iNsNpI+n3c
         jq01T5NpVqeHbqg2DOu/J/MKcU/uT0tEN8RBqC2Bj42h9Bu2Jepgakh9b1wVuxBCr8AL
         dQIgbdsKi+4MqzP24H3P5iU4kH3Pn8Kldpl4kvbPvTQWpibeUUCmtP7GLSDneIz9fAzL
         /YJN40X6LgbDGFSkiFOuVtv/Yu/1kz6K/6L7LsGeZP+bjLfTk2iYsg8qc+3hFo9F76Q6
         gHVFxlzcz0RVoWN2a7qsRoIGcPRsLwBbEup86e9AKy+vRfpMWEKKES7ESOGXRUYWeGhC
         Fo+w==
X-Gm-Message-State: AOAM530w8xWcFRx9F8kj2rl1excBhAh2hUySg9KG0Cu8kH8xvgYqBdQh
	FdrpSxT4hFeCgB3++qJh44Cc1F5M0UODw6kr7v6t+IgUnbHcWfz6Y/J28AL2P6PhqQ3rVhR3JfJ
	gbPdQFQo3xFdNjOD/HSD/Ki3GV3g=
X-Received: by 2002:a5d:5505:: with SMTP id b5mr18809762wrv.410.1607336821236;
        Mon, 07 Dec 2020 02:27:01 -0800 (PST)
X-Google-Smtp-Source: ABdhPJwpxS69KnKku3sO4CNJaJdpGJkosGMLT7HJYybqboCJTmJrWpUYm5nXWA06aq36q3aIAua9Hw==
X-Received: by 2002:a5d:5505:: with SMTP id b5mr18809725wrv.410.1607336821045;
        Mon, 07 Dec 2020 02:27:01 -0800 (PST)
Subject: Re: [PATCH 5/8] gitlab-ci: Add KVM s390x cross-build jobs
To: =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Thomas Huth <thuth@redhat.com>
Cc: qemu-devel@nongnu.org, Peter Maydell <peter.maydell@linaro.org>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Stefano Stabellini <sstabellini@kernel.org>, kvm@vger.kernel.org,
 Paul Durrant <paul@xen.org>, Cornelia Huck <cohuck@redhat.com>,
 Marcelo Tosatti <mtosatti@redhat.com>,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>, qemu-s390x@nongnu.org,
 Paolo Bonzini <pbonzini@redhat.com>, Willian Rampazzo <wrampazz@redhat.com>,
 Huacai Chen <chenhc@lemote.com>, Anthony Perard <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org, David Gibson <david@gibson.dropbear.id.au>,
 =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Claudio Fontana <cfontana@suse.de>
References: <20201206185508.3545711-1-philmd@redhat.com>
 <20201206185508.3545711-6-philmd@redhat.com>
 <66d4d0ab-2bb5-1284-b08a-43c6c30f30dc@redhat.com>
 <20201207102450.GG3102898@redhat.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <9233fe7f-8d56-e1ad-b67e-40b3ce5fcabb@redhat.com>
Date: Mon, 7 Dec 2020 11:26:58 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201207102450.GG3102898@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12/7/20 11:25 AM, Daniel P. Berrangé wrote:
> On Mon, Dec 07, 2020 at 06:46:01AM +0100, Thomas Huth wrote:
>> On 06/12/2020 19.55, Philippe Mathieu-Daudé wrote:
>>> Cross-build s390x target with only KVM accelerator enabled.
>>>
>>> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>>> ---
>>>  .gitlab-ci.d/crossbuilds-kvm-s390x.yml | 6 ++++++
>>>  .gitlab-ci.yml                         | 1 +
>>>  MAINTAINERS                            | 1 +
>>>  3 files changed, 8 insertions(+)
>>>  create mode 100644 .gitlab-ci.d/crossbuilds-kvm-s390x.yml
>>>
>>> diff --git a/.gitlab-ci.d/crossbuilds-kvm-s390x.yml b/.gitlab-ci.d/crossbuilds-kvm-s390x.yml
>>> new file mode 100644
>>> index 00000000000..1731af62056
>>> --- /dev/null
>>> +++ b/.gitlab-ci.d/crossbuilds-kvm-s390x.yml
>>> @@ -0,0 +1,6 @@
>>> +cross-s390x-kvm:
>>> +  extends: .cross_accel_build_job
>>> +  variables:
>>> +    IMAGE: debian-s390x-cross
>>> +    TARGETS: s390x-softmmu
>>> +    ACCEL_CONFIGURE_OPTS: --disable-tcg
>>> diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
>>> index 573afceb3c7..a69619d7319 100644
>>> --- a/.gitlab-ci.yml
>>> +++ b/.gitlab-ci.yml
>>> @@ -14,6 +14,7 @@ include:
>>>    - local: '/.gitlab-ci.d/crossbuilds.yml'
>>>    - local: '/.gitlab-ci.d/crossbuilds-kvm-x86.yml'
>>>    - local: '/.gitlab-ci.d/crossbuilds-kvm-arm.yml'
>>> +  - local: '/.gitlab-ci.d/crossbuilds-kvm-s390x.yml'
>>
>> KVM code is already covered by the "cross-s390x-system" job, but an
>> additional compilation test with --disable-tcg makes sense here. I'd then
>> rather name it "cross-s390x-no-tcg" or so instead of "cross-s390x-kvm".
>>
>> And while you're at it, I'd maybe rather name the new file just
>> crossbuilds-s390x.yml and also move the other s390x related jobs into it?
> 
> I don't think we really should split it up so much - just put these
> jobs in the existing crossbuilds.yml file.

Don't we want to leverage the MAINTAINERS file?



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:33:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:33:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46149.81863 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDpi-0007GM-4Z; Mon, 07 Dec 2020 10:33:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46149.81863; Mon, 07 Dec 2020 10:33:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDpi-0007GF-1N; Mon, 07 Dec 2020 10:33:30 +0000
Received: by outflank-mailman (input) for mailman id 46149;
 Mon, 07 Dec 2020 10:33:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=AUZZ=FL=redhat.com=thuth@srs-us1.protection.inumbo.net>)
 id 1kmDpg-0007G8-DK
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 10:33:28 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id fd5db7fb-a7b0-4805-a0ad-2f268959e0dc;
 Mon, 07 Dec 2020 10:33:27 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-568-C6u_WiEVOkC9GtpqMX5LSw-1; Mon, 07 Dec 2020 05:33:25 -0500
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com
 [10.5.11.11])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 673C1858182;
 Mon,  7 Dec 2020 10:33:23 +0000 (UTC)
Received: from thuth.remote.csb (ovpn-112-85.ams2.redhat.com [10.36.112.85])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 281EF60BD8;
 Mon,  7 Dec 2020 10:33:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fd5db7fb-a7b0-4805-a0ad-2f268959e0dc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607337207;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TBOpJWU+NgQXJI1Pmo6+cAgOqxHfFZ8sskstH2Y6TmU=;
	b=ThOZN/nl+YziXoG0BqiTHXpKdmdJby1zYvkNTVbq0WgH40lv3AKJXLTqbeLeZNGluhNSQM
	rnV2ZauH41KjzQKcnmWlVejsAU69UPlZKYvhL+UhHvhBRbVxkQs+BfVoX5HgXO+//0GBsw
	ugM/Ad4l0d+jdOQPMroY6yFjZS82RJg=
X-MC-Unique: C6u_WiEVOkC9GtpqMX5LSw-1
Subject: Re: [PATCH 5/8] gitlab-ci: Add KVM s390x cross-build jobs
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 qemu-devel@nongnu.org
Cc: =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Willian Rampazzo
 <wrampazz@redhat.com>, Paul Durrant <paul@xen.org>,
 Huacai Chen <chenhc@lemote.com>, Anthony Perard <anthony.perard@citrix.com>,
 Marcelo Tosatti <mtosatti@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Claudio Fontana <cfontana@suse.de>, Halil Pasic <pasic@linux.ibm.com>,
 Peter Maydell <peter.maydell@linaro.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Cornelia Huck <cohuck@redhat.com>, David Gibson
 <david@gibson.dropbear.id.au>, Paolo Bonzini <pbonzini@redhat.com>,
 qemu-s390x@nongnu.org, Aurelien Jarno <aurelien@aurel32.net>
References: <20201206185508.3545711-1-philmd@redhat.com>
 <20201206185508.3545711-6-philmd@redhat.com>
 <66d4d0ab-2bb5-1284-b08a-43c6c30f30dc@redhat.com>
 <2352c04c-829e-ea1d-0894-15fc1d06697a@redhat.com>
 <cd5d00b1-999a-fbb3-204e-a759a9e2c3ec@redhat.com>
From: Thomas Huth <thuth@redhat.com>
Message-ID: <0447129c-e6c9-71f6-1786-b4e8689b8214@redhat.com>
Date: Mon, 7 Dec 2020 11:33:12 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <cd5d00b1-999a-fbb3-204e-a759a9e2c3ec@redhat.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11

On 07/12/2020 11.26, Philippe Mathieu-Daudé wrote:
> On 12/7/20 11:00 AM, Philippe Mathieu-Daudé wrote:
>> On 12/7/20 6:46 AM, Thomas Huth wrote:
>>> On 06/12/2020 19.55, Philippe Mathieu-Daudé wrote:
>>>> Cross-build s390x target with only KVM accelerator enabled.
>>>>
>>>> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>>>> ---
>>>>  .gitlab-ci.d/crossbuilds-kvm-s390x.yml | 6 ++++++
>>>>  .gitlab-ci.yml                         | 1 +
>>>>  MAINTAINERS                            | 1 +
>>>>  3 files changed, 8 insertions(+)
>>>>  create mode 100644 .gitlab-ci.d/crossbuilds-kvm-s390x.yml
>>>>
>>>> diff --git a/.gitlab-ci.d/crossbuilds-kvm-s390x.yml b/.gitlab-ci.d/crossbuilds-kvm-s390x.yml
>>>> new file mode 100644
>>>> index 00000000000..1731af62056
>>>> --- /dev/null
>>>> +++ b/.gitlab-ci.d/crossbuilds-kvm-s390x.yml
>>>> @@ -0,0 +1,6 @@
>>>> +cross-s390x-kvm:
>>>> +  extends: .cross_accel_build_job
>>>> +  variables:
>>>> +    IMAGE: debian-s390x-cross
>>>> +    TARGETS: s390x-softmmu
>>>> +    ACCEL_CONFIGURE_OPTS: --disable-tcg
>>>> diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
>>>> index 573afceb3c7..a69619d7319 100644
>>>> --- a/.gitlab-ci.yml
>>>> +++ b/.gitlab-ci.yml
>>>> @@ -14,6 +14,7 @@ include:
>>>>    - local: '/.gitlab-ci.d/crossbuilds.yml'
>>>>    - local: '/.gitlab-ci.d/crossbuilds-kvm-x86.yml'
>>>>    - local: '/.gitlab-ci.d/crossbuilds-kvm-arm.yml'
>>>> +  - local: '/.gitlab-ci.d/crossbuilds-kvm-s390x.yml'
>>>
>>> KVM code is already covered by the "cross-s390x-system" job, but an
>>> additional compilation test with --disable-tcg makes sense here. I'd then
>>> rather name it "cross-s390x-no-tcg" or so instead of "cross-s390x-kvm".
> 
> What other accelerators are available on 390?

It's only TCG and KVM.

 Thomas




From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:34:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:34:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46154.81875 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDr3-0007NL-FH; Mon, 07 Dec 2020 10:34:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46154.81875; Mon, 07 Dec 2020 10:34:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDr3-0007NE-CQ; Mon, 07 Dec 2020 10:34:53 +0000
Received: by outflank-mailman (input) for mailman id 46154;
 Mon, 07 Dec 2020 10:34:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1Acm=FL=redhat.com=berrange@srs-us1.protection.inumbo.net>)
 id 1kmDr1-0007N8-OG
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 10:34:51 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 1640bb7e-3949-4394-bc61-adb209adf956;
 Mon, 07 Dec 2020 10:34:51 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-585-wgEpu-mVPbGqhVGeG4IaJA-1; Mon, 07 Dec 2020 05:34:49 -0500
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com
 [10.5.11.12])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id C9069190B2A2;
 Mon,  7 Dec 2020 10:34:46 +0000 (UTC)
Received: from redhat.com (ovpn-113-137.ams2.redhat.com [10.36.113.137])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id F26E760C62;
 Mon,  7 Dec 2020 10:34:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1640bb7e-3949-4394-bc61-adb209adf956
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607337290;
	h=from:from:reply-to:reply-to:subject:subject:date:date:
	 message-id:message-id:to:to:cc:cc:mime-version:mime-version:
	 content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=rHJ/6LZZ/l4zPSaAfktxXxObRN4BFZ04xcSZNsVRSp0=;
	b=FmsYVwxki2MY2x3U6b8nIYm0w/ZkwW7csemX64pQ/GotLfHXpl+wS1/nI34zFoz84uFjCZ
	c9KK0nQoJ7enNzIlg3dUkH7bLrdgJNlgn71TC1+jy7RL6ljSk2xrYg6/5gSVcxScr1IZsO
	ssLHsJz5pyzd0k/9WYP8YoNP5et6da4=
X-MC-Unique: wgEpu-mVPbGqhVGeG4IaJA-1
Date: Mon, 7 Dec 2020 10:34:30 +0000
From: Daniel =?utf-8?B?UC4gQmVycmFuZ8Op?= <berrange@redhat.com>
To: Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Cc: Thomas Huth <thuth@redhat.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
	Paul Durrant <paul@xen.org>, Cornelia Huck <cohuck@redhat.com>,
	qemu-devel@nongnu.org,
	Wainer dos Santos Moschetta <wainersm@redhat.com>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Halil Pasic <pasic@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	qemu-s390x@nongnu.org, Claudio Fontana <cfontana@suse.de>,
	Willian Rampazzo <wrampazz@redhat.com>,
	Huacai Chen <chenhc@lemote.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Aurelien Jarno <aurelien@aurel32.net>,
	David Gibson <david@gibson.dropbear.id.au>
Subject: Re: [PATCH 5/8] gitlab-ci: Add KVM s390x cross-build jobs
Message-ID: <20201207103430.GI3102898@redhat.com>
Reply-To: Daniel =?utf-8?B?UC4gQmVycmFuZ8Op?= <berrange@redhat.com>
References: <20201206185508.3545711-1-philmd@redhat.com>
 <20201206185508.3545711-6-philmd@redhat.com>
 <66d4d0ab-2bb5-1284-b08a-43c6c30f30dc@redhat.com>
 <20201207102450.GG3102898@redhat.com>
 <9233fe7f-8d56-e1ad-b67e-40b3ce5fcabb@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <9233fe7f-8d56-e1ad-b67e-40b3ce5fcabb@redhat.com>
User-Agent: Mutt/1.14.6 (2020-07-11)
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12

On Mon, Dec 07, 2020 at 11:26:58AM +0100, Philippe Mathieu-Daudé wrote:
> On 12/7/20 11:25 AM, Daniel P. Berrangé wrote:
> > On Mon, Dec 07, 2020 at 06:46:01AM +0100, Thomas Huth wrote:
> >> On 06/12/2020 19.55, Philippe Mathieu-Daudé wrote:
> >>> Cross-build s390x target with only KVM accelerator enabled.
> >>>
> >>> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> >>> ---
> >>>  .gitlab-ci.d/crossbuilds-kvm-s390x.yml | 6 ++++++
> >>>  .gitlab-ci.yml                         | 1 +
> >>>  MAINTAINERS                            | 1 +
> >>>  3 files changed, 8 insertions(+)
> >>>  create mode 100644 .gitlab-ci.d/crossbuilds-kvm-s390x.yml
> >>>
> >>> diff --git a/.gitlab-ci.d/crossbuilds-kvm-s390x.yml b/.gitlab-ci.d/crossbuilds-kvm-s390x.yml
> >>> new file mode 100644
> >>> index 00000000000..1731af62056
> >>> --- /dev/null
> >>> +++ b/.gitlab-ci.d/crossbuilds-kvm-s390x.yml
> >>> @@ -0,0 +1,6 @@
> >>> +cross-s390x-kvm:
> >>> +  extends: .cross_accel_build_job
> >>> +  variables:
> >>> +    IMAGE: debian-s390x-cross
> >>> +    TARGETS: s390x-softmmu
> >>> +    ACCEL_CONFIGURE_OPTS: --disable-tcg
> >>> diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
> >>> index 573afceb3c7..a69619d7319 100644
> >>> --- a/.gitlab-ci.yml
> >>> +++ b/.gitlab-ci.yml
> >>> @@ -14,6 +14,7 @@ include:
> >>>    - local: '/.gitlab-ci.d/crossbuilds.yml'
> >>>    - local: '/.gitlab-ci.d/crossbuilds-kvm-x86.yml'
> >>>    - local: '/.gitlab-ci.d/crossbuilds-kvm-arm.yml'
> >>> +  - local: '/.gitlab-ci.d/crossbuilds-kvm-s390x.yml'
> >>
> >> KVM code is already covered by the "cross-s390x-system" job, but an
> >> additional compilation test with --disable-tcg makes sense here. I'd then
> >> rather name it "cross-s390x-no-tcg" or so instead of "cross-s390x-kvm".
> >>
> >> And while you're at it, I'd maybe rather name the new file just
> >> crossbuilds-s390x.yml and also move the other s390x related jobs into it?
> > 
> > I don't think we really should split it up so much - just put these
> > jobs in the existing crossbuilds.yml file.
> 
> Don't we want to leverage MAINTAINERS file?

As mentioned in the cover letter, I think this is mis-using the MAINTAINERS
file to try to represent something different.

The MAINTAINERS file says who is responsible for the contents of the .yml
file, which is the CI maintainers, because we want a consistent gitlab
configuration as a whole, not everyone doing their own thing.

MAINTAINERS doesn't say who is responsible for making sure the actual
jobs that run are passing, which is potentially a completely different
person. If we want to track that, the MAINTAINERS file is not the place
for it.
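
(For illustration — the general shape of a MAINTAINERS stanza of the kind
being debated; the section title and maintainer below are hypothetical,
only the `M:`/`S:`/`F:` tag format mirrors the real file:)

```
GitLab CI testing
M: A CI Maintainer <ci@example.org>
S: Maintained
F: .gitlab-ci.yml
F: .gitlab-ci.d/
```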

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:35:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:35:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46159.81888 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDrS-0007T5-Or; Mon, 07 Dec 2020 10:35:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46159.81888; Mon, 07 Dec 2020 10:35:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDrS-0007Sy-L5; Mon, 07 Dec 2020 10:35:18 +0000
Received: by outflank-mailman (input) for mailman id 46159;
 Mon, 07 Dec 2020 10:35:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmDrR-0007Sp-U4
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 10:35:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4c67d606-679e-49ff-b879-b791ee6f9f7d;
 Mon, 07 Dec 2020 10:35:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 74024ADE1;
 Mon,  7 Dec 2020 10:35:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4c67d606-679e-49ff-b879-b791ee6f9f7d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607337316; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=yXgzTx6AQT9GvEzuqCmUll4WyIQxBs96NAkOT8AKjf4=;
	b=r9pG1YNYaIH/+UsSQIkylG1hOutVdgW2BE5pMpZWoXouVilfrVoWQnKNHgMS6rJNdEU1v/
	praBKosFN1vZT3L88rAr2TDl1aMqR0GaqQqeDWsMECCEDjbDk2vISIfC7DrWWY4hcqjJ7U
	z6PDGAzFDOyCwJhFWuaoZatyb4Kx6rk=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/5] x86/vPCI: MSI/MSI-X related adjustments
Message-ID: <f93efb14-f088-ca84-7d0a-f1b53ff6316c@suse.com>
Date: Mon, 7 Dec 2020 11:35:18 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

1: tolerate (un)masking a disabled MSI-X entry
2: check address in vpci_msi_update()
3: vPCI/MSI-X: fold clearing of entry->updated
4: vPCI/MSI-X: make use of xzalloc_flex_struct()
5: vPCI/MSI-X: tidy init_msix()

Jan
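For context on patch 4: xzalloc_flex_struct() is Xen's helper for allocating a
structure with a trailing flexible array member in a single zero-filled
allocation. The following is only a standalone sketch of that pattern with
hypothetical names (xzalloc_flex, struct msix_table); it is not Xen's actual
macro, which is defined in Xen's xmalloc headers and differs in detail:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/*
 * Illustrative analogue of a "flex struct" allocator in the spirit of
 * Xen's xzalloc_flex_struct(): one zeroed allocation sized so the
 * flexible array member at the end has room for nr entries.
 * offsetof(type, field[nr]) yields the header size plus nr elements.
 */
#define xzalloc_flex(type, field, nr) \
    ((type *)calloc(1, offsetof(type, field[nr])))

struct msix_table {
    unsigned int max_entries;
    unsigned long entries[];    /* flexible array member */
};
```

A caller then gets the header and the array in one zeroed block, e.g.
`struct msix_table *t = xzalloc_flex(struct msix_table, entries, 8);`,
instead of computing the size by hand at every call site.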


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:36:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:36:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46165.81900 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDsl-0007ck-38; Mon, 07 Dec 2020 10:36:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46165.81900; Mon, 07 Dec 2020 10:36:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDsl-0007cd-07; Mon, 07 Dec 2020 10:36:39 +0000
Received: by outflank-mailman (input) for mailman id 46165;
 Mon, 07 Dec 2020 10:36:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmDsj-0007cW-Hd
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 10:36:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0c91e369-21c9-4ef9-889b-9149c4a43501;
 Mon, 07 Dec 2020 10:36:37 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 462CCAC90;
 Mon,  7 Dec 2020 10:36:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0c91e369-21c9-4ef9-889b-9149c4a43501
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607337396; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=bHB5r6OeiC4r+XsAY4TkW/gaxfa4JhjmUmX1cIlsY/M=;
	b=KSy3ouoalq6V8ozWVhV/B2ffu6NX7EhxXXSXAcMZc4Moma2FBMbxzcnC5f1tht5UFPnRah
	FYgqgclkislV+At0LJQJW22GZ2jBCfxI1P2hA4G6HmX3LpcPPYJyOqpfqdpc572pAGkcZx
	hxOLWhJ9G6FPl2I1g+PK1usJVgH/R8Q=
Subject: [PATCH 1/5] x86/vPCI: tolerate (un)masking a disabled MSI-X entry
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Manuel Bouyer <bouyer@antioche.eu.org>
References: <f93efb14-f088-ca84-7d0a-f1b53ff6316c@suse.com>
Message-ID: <fef14892-f21d-e304-d9b1-7484e0ea3415@suse.com>
Date: Mon, 7 Dec 2020 11:36:38 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <f93efb14-f088-ca84-7d0a-f1b53ff6316c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

None of the four conditions that can cause vpci_msix_arch_mask_entry()
to get called (there's just a single call site) is impossible or illegal
before an entry has actually been set up:
- the entry may remain masked (in this case, however, a prior masked ->
  unmasked transition would already not have worked),
- MSI-X may not be enabled,
- the global mask bit may be set,
- the entry may not otherwise have been updated.
Hence the function asserting that the entry was previously set up was
simply wrong. Since the caller tracks the masked state (and setting up
of an entry would only be effected when that software bit is clear),
it's okay to skip both masking and unmasking requests in this case.

Fixes: d6281be9d0145 ('vpci/msix: add MSI-X handlers')
Reported-by: Manuel Bouyer <bouyer@antioche.eu.org>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
This is a presumed alternative to Roger's "vpci/msix: exit early if
MSI-X is disabled or globally masked".

--- a/xen/arch/x86/hvm/vmsi.c
+++ b/xen/arch/x86/hvm/vmsi.c
@@ -840,8 +840,8 @@ void vpci_msi_arch_print(const struct vp
 void vpci_msix_arch_mask_entry(struct vpci_msix_entry *entry,
                                const struct pci_dev *pdev, bool mask)
 {
-    ASSERT(entry->arch.pirq != INVALID_PIRQ);
-    vpci_mask_pirq(pdev->domain, entry->arch.pirq, mask);
+    if ( entry->arch.pirq != INVALID_PIRQ )
+        vpci_mask_pirq(pdev->domain, entry->arch.pirq, mask);
 }
 
 int vpci_msix_arch_enable_entry(struct vpci_msix_entry *entry,



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:37:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:37:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46170.81912 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDtE-0007iX-Cx; Mon, 07 Dec 2020 10:37:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46170.81912; Mon, 07 Dec 2020 10:37:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDtE-0007iQ-9D; Mon, 07 Dec 2020 10:37:08 +0000
Received: by outflank-mailman (input) for mailman id 46170;
 Mon, 07 Dec 2020 10:37:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ifa4=FL=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kmDtC-0007iK-OZ
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 10:37:06 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.7.54]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7233b48a-67d6-4c73-83d4-5b8d925b0108;
 Mon, 07 Dec 2020 10:37:04 +0000 (UTC)
Received: from DB6PR07CA0177.eurprd07.prod.outlook.com (2603:10a6:6:43::31) by
 PR3PR08MB5675.eurprd08.prod.outlook.com (2603:10a6:102:8a::19) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3632.18; Mon, 7 Dec 2020 10:37:02 +0000
Received: from DB5EUR03FT019.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:43:cafe::8c) by DB6PR07CA0177.outlook.office365.com
 (2603:10a6:6:43::31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.7 via Frontend
 Transport; Mon, 7 Dec 2020 10:37:02 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT019.mail.protection.outlook.com (10.152.20.163) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3632.17 via Frontend Transport; Mon, 7 Dec 2020 10:37:02 +0000
Received: ("Tessian outbound 39646a0fd094:v71");
 Mon, 07 Dec 2020 10:37:02 +0000
Received: from 021e73208d6b.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 389BC2F4-B07C-4BEA-A2C3-3FB4D6840A4D.1; 
 Mon, 07 Dec 2020 10:36:46 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 021e73208d6b.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 07 Dec 2020 10:36:46 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DBAPR08MB5814.eurprd08.prod.outlook.com (2603:10a6:10:1b1::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.17; Mon, 7 Dec
 2020 10:36:45 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::11cb:318b:f0a0:f125]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::11cb:318b:f0a0:f125%5]) with mapi id 15.20.3632.022; Mon, 7 Dec 2020
 10:36:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7233b48a-67d6-4c73-83d4-5b8d925b0108
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fCbaSDReFn8EINw+zrjSHAWQGc+nbO0vgMK186U5MyU=;
 b=QJ/jfQyH66g31vxDPgvv9xrqpT+uNHZM9+d1qqlJWKzdjCPYNzMCV8nII6P2Va8s12rJoWRnSWsbp7pLmV70ryiclx8mZDwqXOCT1rMwAvb05bqqZUMEdtTqWOFJKL6p67FSK3QIczhS7XOtmLxxjjP6w38KN9Ap1kmK7lyqPLg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 51c2f8e88da09d05
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aDaR5G8mrhS4B7aYmgSNfN7VQ5YAYwHLylCo8Jh5u2/1OTXuRPTzfGoxJwpWJlwSxNAbRqtuIuOVIQ7Ge0poEZi0L9m39d86wt8Kutu9feCK6lERkK06L3+MPCds+RvaZ7pdZRbNPH1TwSy76HDoXebrPIZ3XIxpt6ta5oX+d29nxjgAbeCK4fyt5uT5QCd03+WY15h+ca7qhgAh/bfMjI4gjp/HAj2lXWaX2M4UsEM6c9MFaloW7cOz8JhnUsvT3pq61hbRTRTCVDkZBvG0AcedVD0B4oE4RK+2LzbBFMzrBwCWZhHSlXDeyLJaoOFOs8QxaDkPK7O4A4GHM0YQdA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fCbaSDReFn8EINw+zrjSHAWQGc+nbO0vgMK186U5MyU=;
 b=VvJJPO1ysGCDs4OMq10A0HzHcmVUymAhY+sTirIP6ai5AOW3bXereOc+RY/veCuE9aYb30wunClEBJSlkpBRv+hDK3Mj/NfCqXZdSW4ZWC5na4mQADufviHhXCS9k34/6ijmztocc2uSKw4v/An41e4MMpNxVZMWIBFWbCky+eadXQFBCc0qyBVrjvrqzvQiLtuXKMLtjLbSwQZd+VUAsV9T4EXQF16PcEXyXW0aU8tpfKlmDHtww1gQX9BIsWm6pENkyNbRsJHZerFSJ7OevoVLKaAIexLeFPycq5Undjp4vOCMLxVS1EJPiUCXdw4gRK1nwsBUmNiMRk8/ysQSZA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fCbaSDReFn8EINw+zrjSHAWQGc+nbO0vgMK186U5MyU=;
 b=QJ/jfQyH66g31vxDPgvv9xrqpT+uNHZM9+d1qqlJWKzdjCPYNzMCV8nII6P2Va8s12rJoWRnSWsbp7pLmV70ryiclx8mZDwqXOCT1rMwAvb05bqqZUMEdtTqWOFJKL6p67FSK3QIczhS7XOtmLxxjjP6w38KN9Ap1kmK7lyqPLg=
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 7/8] xen/arm: Remove Linux specific code that is not
 usable in XEN
Thread-Topic: [PATCH v2 7/8] xen/arm: Remove Linux specific code that is not
 usable in XEN
Thread-Index: AQHWxBX830xFguE4YkK2klATz/ucYKnj64aAgAGO7QCAATalAIAE0HIA
Date: Mon, 7 Dec 2020 10:36:45 +0000
Message-ID: <0B43914F-93E1-4860-BA45-7E08F817818C@arm.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <1d9da8ed4845aeb9e86a5ce6750b811bd7e2020e.1606406359.git.rahul.singh@arm.com>
 <cd74f2a7-7836-ef90-9cd8-857068adb0f5@xen.org>
 <51C0C24A-3CE6-48A3-85F5-14F010409DC3@arm.com>
 <b87e9293-77bb-2c43-63d0-8d54d5fc9a7e@xen.org>
In-Reply-To: <b87e9293-77bb-2c43-63d0-8d54d5fc9a7e@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 32120524-ecb2-44bc-4d98-08d89a9c08ba
x-ms-traffictypediagnostic: DBAPR08MB5814:|PR3PR08MB5675:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<PR3PR08MB56759148213D8959E62E0217FCCE0@PR3PR08MB5675.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <9FDFCC7C9A9BF2478E80446D9E43E15D@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5814
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT019.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	19419e32-3081-4f35-27bb-08d89a9bfe5c
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Dec 2020 10:37:02.4868
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 32120524-ecb2-44bc-4d98-08d89a9c08ba
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT019.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR08MB5675

Hello Julien,

> On 4 Dec 2020, at 9:05 am, Julien Grall <julien@xen.org> wrote:
> 
> Hi Rahul,
> 
> On 03/12/2020 14:33, Rahul Singh wrote:
>>> On 2 Dec 2020, at 2:45 pm, Julien Grall <julien@xen.org> wrote:
>>>> -
>>>> -static struct iommu_device *arm_smmu_probe_device(struct device *dev)
>>>> -{
>>> 
>>> Most of the code here looks useful to Xen. I think you want to keep the code and re-use it afterwards.
>> Ok. I removed the code here and added the XEN compatible code to add devices in next patch.
>> I will keep it in this patch and will modifying the code to make XEN compatible.
> 
> In general, it is prefer if the code the code rather than dropping in patch A and then add it again differently patch B. This makes easier to check that the code outcome of the function is mostly the same.
> 

Ok.

>>>> -static struct iommu_ops arm_smmu_ops = {
>>>> -	.capable		= arm_smmu_capable,
>>>> -	.domain_alloc		= arm_smmu_domain_alloc,
>>>> -	.domain_free		= arm_smmu_domain_free,
>>>> -	.attach_dev		= arm_smmu_attach_dev,
>>>> -	.map			= arm_smmu_map,
>>>> -	.unmap			= arm_smmu_unmap,
>>>> -	.flush_iotlb_all	= arm_smmu_flush_iotlb_all,
>>>> -	.iotlb_sync		= arm_smmu_iotlb_sync,
>>>> -	.iova_to_phys		= arm_smmu_iova_to_phys,
>>>> -	.probe_device		= arm_smmu_probe_device,
>>>> -	.release_device		= arm_smmu_release_device,
>>>> -	.device_group		= arm_smmu_device_group,
>>>> -	.domain_get_attr	= arm_smmu_domain_get_attr,
>>>> -	.domain_set_attr	= arm_smmu_domain_set_attr,
>>>> -	.of_xlate		= arm_smmu_of_xlate,
>>>> -	.get_resv_regions	= arm_smmu_get_resv_regions,
>>>> -	.put_resv_regions	= generic_iommu_put_resv_regions,
>>>> -	.pgsize_bitmap		= -1UL, /* Restricted during device attach */
>>>> -};
>>>> -
>>>>  /* Probing and initialisation functions */
>>>>  static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
>>>>  				   struct arm_smmu_queue *q,
>>>> @@ -2406,7 +2032,6 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>>>>  	switch (FIELD_GET(IDR0_STALL_MODEL, reg)) {
>>>>  	case IDR0_STALL_MODEL_FORCE:
>>>>  		smmu->features |= ARM_SMMU_FEAT_STALL_FORCE;
>>>> -		fallthrough;
>>> 
>>> We should keep all the fallthrough documented. So I think we want to introduce the fallthrough in Xen as well.
>> Ok I will keep fallthrough documented in this patch.
>> fallthrough implementation in XEN should be another patch. I am not sure when we can implement but we will try to implement.
> 
> Yes, I didn't ask to implement "fallthrough" in this patch, but instead as a pre-requirement patch.
> 
> I would implement it in include/xen/compiler.h.

Ok. I will implement it and share the patch for "__attribute__((__fallthrough__))", but for SMMUv3 is it ok if we proceed with "/* fallthrough */" for now?

As the "fallthrough" implementation is common to all architectures, it will require review from all stakeholders, as per my understanding. I don't want to block SMMUv3 because of this.

Regards,
Rahul

> 
> Cheers,
> 
> -- 
> Julien Grall
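The pre-requisite discussed in this thread (a "fallthrough" pseudo-keyword for
include/xen/compiler.h) could be sketched roughly as below. This is only an
illustrative, standalone sketch under stated assumptions: the macro name, the
__has_attribute probe, and the demo function count_from() are not Xen's actual
implementation.

```c
#include <assert.h>

/*
 * Sketch: map "fallthrough" to the statement attribute where the
 * compiler supports it, and to a harmless null statement plus the
 * conventional comment otherwise.
 */
#ifndef __has_attribute
# define __has_attribute(x) 0
#endif

#if __has_attribute(__fallthrough__)
# define fallthrough __attribute__((__fallthrough__))
#else
# define fallthrough do {} while (0) /* fallthrough */
#endif

/* Demo: a deliberate fall-through that the annotation documents. */
int count_from(int x)
{
    int n = 0;

    switch (x) {
    case 0:
        n++;
        fallthrough;
    case 1:
        n++;
        break;
    default:
        break;
    }

    return n;
}
```

Either expansion is a valid statement where `fallthrough;` appears, so code
annotated this way keeps compiling on compilers without the attribute while
silencing -Wimplicit-fallthrough on those that have it.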


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:37:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:37:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46171.81924 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDtJ-0007lb-R9; Mon, 07 Dec 2020 10:37:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46171.81924; Mon, 07 Dec 2020 10:37:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDtJ-0007lU-NR; Mon, 07 Dec 2020 10:37:13 +0000
Received: by outflank-mailman (input) for mailman id 46171;
 Mon, 07 Dec 2020 10:37:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Pxd=FL=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kmDtH-0007iK-NC
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 10:37:11 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id bda01068-3406-4d79-b940-019ae6d30fe1;
 Mon, 07 Dec 2020 10:37:10 +0000 (UTC)
Received: from mail-wm1-f72.google.com (mail-wm1-f72.google.com
 [209.85.128.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-368-khtG2to-OEi-9ORw3u8h7w-1; Mon, 07 Dec 2020 05:37:08 -0500
Received: by mail-wm1-f72.google.com with SMTP id k126so2231392wmb.0
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 02:37:08 -0800 (PST)
Received: from [192.168.1.36] (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id z140sm14292218wmc.30.2020.12.07.02.37.04
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 07 Dec 2020 02:37:05 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bda01068-3406-4d79-b940-019ae6d30fe1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607337430;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+MUpNM1ZaTlO4T3gwbQyUi+bhaJnWEMg2E4uaX6U69A=;
	b=MCLjHeHGavNg07I8a8SyRx0C8gPQX8fZmiDsXuF3OYMUILrOiG2kH4m5tkJX5OfhCsuEJE
	oo9ZcvDwz+0jYqQocdCUbIlDBJWVpha3IQgRfluVM/gWXIxqoH24meDqzSBSaQbwbjyAym
	1TvAZfKdSVIJvIVf/mYiOml8o9tKpBk=
X-MC-Unique: khtG2to-OEi-9ORw3u8h7w-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=+MUpNM1ZaTlO4T3gwbQyUi+bhaJnWEMg2E4uaX6U69A=;
        b=bkYJ3vSt/KVmXD4Mi9KhcfoMCwJsaMsg8GzzC+T1Q7di97KCXUzatpER60JLJQqvC8
         4Rz+sqKyepDn3n4SptkLg6oYk9camsoSoDNa15cgxb0CyTGmXrNaGlrLA6YMW/IlpTwv
         h5f2ZgSqhp4RdmSRTVv0R01Ho78eoYKe+C6hbxO+TwaZSuSS7xpmqOWQW0GHGJ6PRF74
         W7DCRmfqxontmYhIvErLZSnKdHoBJxB6jAUi3PX7sFw7tgGNmCMPz989PKofHyoUSLGj
         As/B23X9y3yPIO824z9e6icoWfAwB1bmJW1FVd8J3MwRNaZxOu8KisVs9tJ/g/biandF
         QTww==
X-Gm-Message-State: AOAM5322J+NhiMu6IqijXB4TQdx0D7TkvkFEqqDL7zCaz3wH4hvkR9gC
	3gy9SYU2s+v2yb93NPGtK6poJqjDhRUhz3AKbgtRwnMMl6ZzmGjO4tRZBY6wY1ivUOLTFNSumd0
	IZG5adRahpKOLgWwJVZcPXvOplJI=
X-Received: by 2002:a05:600c:2:: with SMTP id g2mr17382126wmc.156.1607337426666;
        Mon, 07 Dec 2020 02:37:06 -0800 (PST)
X-Google-Smtp-Source: ABdhPJwwRtV73QFCV2ROpZ9fzFZtsHpDjPVdEFc+3zGnA5/nvOZAdgvHLzYUDyvlwzcMdbtuT+HBXg==
X-Received: by 2002:a05:600c:2:: with SMTP id g2mr17382089wmc.156.1607337426480;
        Mon, 07 Dec 2020 02:37:06 -0800 (PST)
Subject: Re: [PATCH 5/8] gitlab-ci: Add KVM s390x cross-build jobs
To: Thomas Huth <thuth@redhat.com>, qemu-devel@nongnu.org
Cc: =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Willian Rampazzo
 <wrampazz@redhat.com>, Paul Durrant <paul@xen.org>,
 Huacai Chen <chenhc@lemote.com>, Anthony Perard <anthony.perard@citrix.com>,
 Marcelo Tosatti <mtosatti@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Claudio Fontana <cfontana@suse.de>, Halil Pasic <pasic@linux.ibm.com>,
 Peter Maydell <peter.maydell@linaro.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Cornelia Huck <cohuck@redhat.com>, David Gibson
 <david@gibson.dropbear.id.au>, Paolo Bonzini <pbonzini@redhat.com>,
 qemu-s390x@nongnu.org, Aurelien Jarno <aurelien@aurel32.net>
References: <20201206185508.3545711-1-philmd@redhat.com>
 <20201206185508.3545711-6-philmd@redhat.com>
 <66d4d0ab-2bb5-1284-b08a-43c6c30f30dc@redhat.com>
 <2352c04c-829e-ea1d-0894-15fc1d06697a@redhat.com>
 <cd5d00b1-999a-fbb3-204e-a759a9e2c3ec@redhat.com>
 <0447129c-e6c9-71f6-1786-b4e8689b8214@redhat.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <b0ea4a2f-c79e-9d8f-86a5-eb6f53bf5067@redhat.com>
Date: Mon, 7 Dec 2020 11:37:04 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <0447129c-e6c9-71f6-1786-b4e8689b8214@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12/7/20 11:33 AM, Thomas Huth wrote:
> On 07/12/2020 11.26, Philippe Mathieu-Daudé wrote:
>> On 12/7/20 11:00 AM, Philippe Mathieu-Daudé wrote:
>>> On 12/7/20 6:46 AM, Thomas Huth wrote:
>>>> On 06/12/2020 19.55, Philippe Mathieu-Daudé wrote:
>>>>> Cross-build s390x target with only KVM accelerator enabled.
>>>>>
>>>>> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>>>>> ---
>>>>>  .gitlab-ci.d/crossbuilds-kvm-s390x.yml | 6 ++++++
>>>>>  .gitlab-ci.yml                         | 1 +
>>>>>  MAINTAINERS                            | 1 +
>>>>>  3 files changed, 8 insertions(+)
>>>>>  create mode 100644 .gitlab-ci.d/crossbuilds-kvm-s390x.yml
>>>>>
>>>>> diff --git a/.gitlab-ci.d/crossbuilds-kvm-s390x.yml b/.gitlab-ci.d/crossbuilds-kvm-s390x.yml
>>>>> new file mode 100644
>>>>> index 00000000000..1731af62056
>>>>> --- /dev/null
>>>>> +++ b/.gitlab-ci.d/crossbuilds-kvm-s390x.yml
>>>>> @@ -0,0 +1,6 @@
>>>>> +cross-s390x-kvm:
>>>>> +  extends: .cross_accel_build_job
>>>>> +  variables:
>>>>> +    IMAGE: debian-s390x-cross
>>>>> +    TARGETS: s390x-softmmu
>>>>> +    ACCEL_CONFIGURE_OPTS: --disable-tcg
>>>>> diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
>>>>> index 573afceb3c7..a69619d7319 100644
>>>>> --- a/.gitlab-ci.yml
>>>>> +++ b/.gitlab-ci.yml
>>>>> @@ -14,6 +14,7 @@ include:
>>>>>    - local: '/.gitlab-ci.d/crossbuilds.yml'
>>>>>    - local: '/.gitlab-ci.d/crossbuilds-kvm-x86.yml'
>>>>>    - local: '/.gitlab-ci.d/crossbuilds-kvm-arm.yml'
>>>>> +  - local: '/.gitlab-ci.d/crossbuilds-kvm-s390x.yml'
>>>>
>>>> KVM code is already covered by the "cross-s390x-system" job, but an
>>>> additional compilation test with --disable-tcg makes sense here. I'd then
>>>> rather name it "cross-s390x-no-tcg" or so instead of "cross-s390x-kvm".
>>
>> What other accelerators are available on 390?
> 
> It's only TCG and KVM.

Easy, so no-tcg = kvm :)



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:37:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:37:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46172.81936 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDtS-0007rC-4J; Mon, 07 Dec 2020 10:37:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46172.81936; Mon, 07 Dec 2020 10:37:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDtS-0007r4-0j; Mon, 07 Dec 2020 10:37:22 +0000
Received: by outflank-mailman (input) for mailman id 46172;
 Mon, 07 Dec 2020 10:37:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmDtQ-0007qN-Q4
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 10:37:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e0abe833-55cf-4f67-bf60-d9b05dc478ed;
 Mon, 07 Dec 2020 10:37:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7AEF0AC55;
 Mon,  7 Dec 2020 10:37:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0abe833-55cf-4f67-bf60-d9b05dc478ed
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607337439; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=LnzNSEW5ph9csjSAH+eysCLiqYJb/8OEeRZ7MYumjZ0=;
	b=IC6rV8RbaAaBrHEHy9JPo9OWKLCS6sVs9vOqH+/rh6M7nXvuivFFoNJL061geOyIIDNlM/
	FIty9Jhyxff6m0N2UAanIA1dfsc6pDJxWowO8LU6PKAncapGiAIs56mH9Ii4TF1oOqxYWo
	liO1t3pBa6GOUOjVGG9Okd30d3aOWDQ=
Subject: [PATCH 2/5] x86/vPCI: check address in vpci_msi_update()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <f93efb14-f088-ca84-7d0a-f1b53ff6316c@suse.com>
Message-ID: <c5bec6bd-b3cb-dc4c-0435-5154956cc4dd@suse.com>
Date: Mon, 7 Dec 2020 11:37:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <f93efb14-f088-ca84-7d0a-f1b53ff6316c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

If the upper address bits don't match the interrupt delivery address
space window, entirely different behavior would need to be implemented.
Refuse such requests for the time being.

Replace adjacent hard tabs while introducing MSI_ADDR_BASE_MASK.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/vmsi.c
+++ b/xen/arch/x86/hvm/vmsi.c
@@ -682,6 +682,13 @@ static int vpci_msi_update(const struct
 
     ASSERT(pcidevs_locked());
 
+    if ( (address & MSI_ADDR_BASE_MASK) != MSI_ADDR_HEADER )
+    {
+        gdprintk(XENLOG_ERR, "%pp: PIRQ %u: unsupported address %lx\n",
+                 &pdev->sbdf, pirq, address);
+        return -EOPNOTSUPP;
+    }
+
     for ( i = 0; i < vectors; i++ )
     {
         uint8_t vector = MASK_EXTR(data, MSI_DATA_VECTOR_MASK);
--- a/xen/include/asm-x86/msi.h
+++ b/xen/include/asm-x86/msi.h
@@ -36,8 +36,9 @@
  * Shift/mask fields for msi address
  */
 
-#define MSI_ADDR_BASE_HI	    	0
-#define MSI_ADDR_BASE_LO	    	0xfee00000
+#define MSI_ADDR_BASE_HI            0
+#define MSI_ADDR_BASE_LO            0xfee00000
+#define MSI_ADDR_BASE_MASK          (~0xfffff)
 #define MSI_ADDR_HEADER             MSI_ADDR_BASE_LO
 
 #define MSI_ADDR_DESTMODE_SHIFT     2



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:37:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:37:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46186.81948 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDu1-00083D-FC; Mon, 07 Dec 2020 10:37:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46186.81948; Mon, 07 Dec 2020 10:37:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDu1-000836-B7; Mon, 07 Dec 2020 10:37:57 +0000
Received: by outflank-mailman (input) for mailman id 46186;
 Mon, 07 Dec 2020 10:37:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmDu0-00080q-M8
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 10:37:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 061eb0b8-8fab-4991-aee8-8e903c804c7f;
 Mon, 07 Dec 2020 10:37:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4D456AC90;
 Mon,  7 Dec 2020 10:37:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 061eb0b8-8fab-4991-aee8-8e903c804c7f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607337469; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=OC3OfGR5DZdR8EuXTTcO5WQ7g/MmFBAWeu7zrUpaC9w=;
	b=rwtn5BKYsgGh3qL2FyJhV5bplsEtPXJJOPqBDwjAUNCdE83OvlQpqrow8LetIGNuOlD1Na
	qnRsW96IKjEdVVks81tubhuuv8D/XSx8nXcRNRJgOeQzocxLqo4L571aCYVRVJMgvJa+2l
	sfXFfAPjw7KnpelNOsuWlytaxQhdGn4=
Subject: [PATCH 3/5] vPCI/MSI-X: fold clearing of entry->updated
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <f93efb14-f088-ca84-7d0a-f1b53ff6316c@suse.com>
Message-ID: <c27c1c50-2d98-1796-f0e5-8fbae9f50045@suse.com>
Date: Mon, 7 Dec 2020 11:37:51 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <f93efb14-f088-ca84-7d0a-f1b53ff6316c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Both call sites clear the flag after a successful call to
update_entry(). This can be simplified by moving the clearing into the
function, onto its success path.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
As a result, it becomes clear that the return value of the function is
of no interest to either of the callers. I'm not sure whether ditching
it is the right thing to do, or whether this rather hints at some
problem.

--- a/xen/drivers/vpci/msix.c
+++ b/xen/drivers/vpci/msix.c
@@ -64,6 +64,8 @@ static int update_entry(struct vpci_msix
         return rc;
     }
 
+    entry->updated = false;
+
     return 0;
 }
 
@@ -92,13 +94,8 @@ static void control_write(const struct p
     if ( new_enabled && !new_masked && (!msix->enabled || msix->masked) )
     {
         for ( i = 0; i < msix->max_entries; i++ )
-        {
-            if ( msix->entries[i].masked || !msix->entries[i].updated ||
-                 update_entry(&msix->entries[i], pdev, i) )
-                continue;
-
-            msix->entries[i].updated = false;
-        }
+            if ( !msix->entries[i].masked && msix->entries[i].updated )
+                update_entry(&msix->entries[i], pdev, i);
     }
     else if ( !new_enabled && msix->enabled )
     {
@@ -365,10 +362,7 @@ static int msix_write(struct vcpu *v, un
              * data fields Xen needs to disable and enable the entry in order
              * to pick up the changes.
              */
-            if ( update_entry(entry, pdev, vmsix_entry_nr(msix, entry)) )
-                break;
-
-            entry->updated = false;
+            update_entry(entry, pdev, vmsix_entry_nr(msix, entry));
         }
         else
             vpci_msix_arch_mask_entry(entry, pdev, entry->masked);



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:38:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:38:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46192.81960 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDuO-0008CG-O8; Mon, 07 Dec 2020 10:38:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46192.81960; Mon, 07 Dec 2020 10:38:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDuO-0008C8-KR; Mon, 07 Dec 2020 10:38:20 +0000
Received: by outflank-mailman (input) for mailman id 46192;
 Mon, 07 Dec 2020 10:38:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmDuN-0008Bx-Oe
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 10:38:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2c4508fc-0767-4faf-be4c-2385ea740217;
 Mon, 07 Dec 2020 10:38:19 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 76A91AC90;
 Mon,  7 Dec 2020 10:38:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c4508fc-0767-4faf-be4c-2385ea740217
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607337498; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=4AS8IwPtcpvu14uVMpQu9DCQyxF2L6BDwPNijCmQt0k=;
	b=ibN8vOXpYtPtStZKnBzSRx7tU2UOddRdJILoOX3eyZRgzT0S5uXfTW82FfutI954Hz/G/0
	qck5vTZgEVO0IPXNRP2137bnX89jTpb4cNA2IrqUUmgZttsz7fQVtohhBZM9Tna4kGXLWM
	9/aEmkKX502Wg4Rtvdn6E5T1wYlVV/Q=
Subject: [PATCH 4/5] vPCI/MSI-X: make use of xzalloc_flex_struct()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <f93efb14-f088-ca84-7d0a-f1b53ff6316c@suse.com>
Message-ID: <062e84e0-0e19-001e-df65-b06318cc5925@suse.com>
Date: Mon, 7 Dec 2020 11:38:21 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <f93efb14-f088-ca84-7d0a-f1b53ff6316c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

... instead of effectively open-coding it in a type-unsafe way.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/drivers/vpci/msix.c
+++ b/xen/drivers/vpci/msix.c
@@ -23,8 +23,6 @@
 #include <asm/msi.h>
 #include <asm/p2m.h>
 
-#define VMSIX_SIZE(num) offsetof(struct vpci_msix, entries[num])
-
 #define VMSIX_ADDR_IN_RANGE(addr, vpci, nr)                               \
     ((addr) >= vmsix_table_addr(vpci, nr) &&                              \
      (addr) < vmsix_table_addr(vpci, nr) + vmsix_table_size(vpci, nr))
@@ -449,7 +447,8 @@ static int init_msix(struct pci_dev *pde
 
     max_entries = msix_table_size(control);
 
-    pdev->vpci->msix = xzalloc_bytes(VMSIX_SIZE(max_entries));
+    pdev->vpci->msix = xzalloc_flex_struct(struct vpci_msix, entries,
+                                           max_entries);
     if ( !pdev->vpci->msix )
         return -ENOMEM;
 



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:38:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:38:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46196.81972 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDum-0008Jw-0i; Mon, 07 Dec 2020 10:38:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46196.81972; Mon, 07 Dec 2020 10:38:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmDul-0008Jp-TJ; Mon, 07 Dec 2020 10:38:43 +0000
Received: by outflank-mailman (input) for mailman id 46196;
 Mon, 07 Dec 2020 10:38:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmDuk-0008JN-0g
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 10:38:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 36e9e0d0-0900-4a11-ba07-6623289f4d0a;
 Mon, 07 Dec 2020 10:38:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F00DAAC90;
 Mon,  7 Dec 2020 10:38:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 36e9e0d0-0900-4a11-ba07-6623289f4d0a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607337520; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=eMaRwYz5dvWjnaq+mpxQL4O/edeAvh/v0+ETeYdF/Bs=;
	b=k52pv2vR8IpMuzkQozOyWQFM7qLtIAIPtOoBi3rDcDplqXnRKOu3hn3IUS7BWcV3Y5QNQJ
	+XyDanG+NWa9lyAcf5knGIkZieajrFi+GZ4O+/8YMe8ceApkfWnfEjanKygngouvO1knNn
	fFL1D8wVpoPsrWFCQWYv5vrM+yFmMNI=
Subject: [PATCH 5/5] vPCI/MSI-X: tidy init_msix()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <f93efb14-f088-ca84-7d0a-f1b53ff6316c@suse.com>
Message-ID: <e21e4936-f356-8c8e-845d-d60880a58ed4@suse.com>
Date: Mon, 7 Dec 2020 11:38:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <f93efb14-f088-ca84-7d0a-f1b53ff6316c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

First of all, introduce a local variable for the struct that is to be
allocated. The compiler can't CSE all the occurrences (I'm observing 80
bytes of code saved with gcc 10). Additionally, while the caller can
cope and there was no memory leak, globally "announce" the struct only
once it has been fully initialized. This also removes the function's
dependency on the caller cleaning up after it in case of an error.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I was heavily tempted to also move up the call to vpci_add_register(),
such that there would be no pointless init done in case of an error
coming back from there.

--- a/xen/drivers/vpci/msix.c
+++ b/xen/drivers/vpci/msix.c
@@ -436,6 +436,7 @@ static int init_msix(struct pci_dev *pde
     uint8_t slot = PCI_SLOT(pdev->devfn), func = PCI_FUNC(pdev->devfn);
     unsigned int msix_offset, i, max_entries;
     uint16_t control;
+    struct vpci_msix *msix;
     int rc;
 
     msix_offset = pci_find_cap_offset(pdev->seg, pdev->bus, slot, func,
@@ -447,34 +448,37 @@ static int init_msix(struct pci_dev *pde
 
     max_entries = msix_table_size(control);
 
-    pdev->vpci->msix = xzalloc_flex_struct(struct vpci_msix, entries,
-                                           max_entries);
-    if ( !pdev->vpci->msix )
+    msix = xzalloc_flex_struct(struct vpci_msix, entries, max_entries);
+    if ( !msix )
         return -ENOMEM;
 
-    pdev->vpci->msix->max_entries = max_entries;
-    pdev->vpci->msix->pdev = pdev;
+    msix->max_entries = max_entries;
+    msix->pdev = pdev;
 
-    pdev->vpci->msix->tables[VPCI_MSIX_TABLE] =
+    msix->tables[VPCI_MSIX_TABLE] =
         pci_conf_read32(pdev->sbdf, msix_table_offset_reg(msix_offset));
-    pdev->vpci->msix->tables[VPCI_MSIX_PBA] =
+    msix->tables[VPCI_MSIX_PBA] =
         pci_conf_read32(pdev->sbdf, msix_pba_offset_reg(msix_offset));
 
-    for ( i = 0; i < pdev->vpci->msix->max_entries; i++)
+    for ( i = 0; i < msix->max_entries; i++)
     {
-        pdev->vpci->msix->entries[i].masked = true;
-        vpci_msix_arch_init_entry(&pdev->vpci->msix->entries[i]);
+        msix->entries[i].masked = true;
+        vpci_msix_arch_init_entry(&msix->entries[i]);
     }
 
     rc = vpci_add_register(pdev->vpci, control_read, control_write,
-                           msix_control_reg(msix_offset), 2, pdev->vpci->msix);
+                           msix_control_reg(msix_offset), 2, msix);
     if ( rc )
+    {
+        xfree(msix);
         return rc;
+    }
 
     if ( list_empty(&d->arch.hvm.msix_tables) )
         register_mmio_handler(d, &vpci_msix_table_ops);
 
-    list_add(&pdev->vpci->msix->next, &d->arch.hvm.msix_tables);
+    pdev->vpci->msix = msix;
+    list_add(&msix->next, &d->arch.hvm.msix_tables);
 
     return 0;
 }



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:49:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:49:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46208.81984 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmE4t-00010h-0Y; Mon, 07 Dec 2020 10:49:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46208.81984; Mon, 07 Dec 2020 10:49:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmE4s-00010a-Tt; Mon, 07 Dec 2020 10:49:10 +0000
Received: by outflank-mailman (input) for mailman id 46208;
 Mon, 07 Dec 2020 10:49:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmE4s-00010V-2l
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 10:49:10 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 814d90b4-8668-42ce-b61a-04ac8e2d1836;
 Mon, 07 Dec 2020 10:49:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 60659AC9A;
 Mon,  7 Dec 2020 10:49:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 814d90b4-8668-42ce-b61a-04ac8e2d1836
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607338148; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=q6U3UDuJapJmS5xECijJLyHuigwT4okcrlFsn/wiRJY=;
	b=ju1MuFtVenTC/NSK++/36htzpKe5oG2ylbNcjXqcSNWqyFrCVXpCTB2RX0MOlY7nXoGXSi
	LMQujVtxoJOiD5SMTqgMga8B1aPZzRQ5IVrU4Lj+Zq21Ij25ZfY/O2eVg+wSYUaO0aHfun
	b1nYnoR2DChS55nMDHSrxt2ahsxGiBQ=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/vIO-APIC: make use of xmalloc_flex_struct()
Message-ID: <a5a2e6e9-7bfa-75e8-7890-5d102b09835f@suse.com>
Date: Mon, 7 Dec 2020 11:49:10 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

... instead of effectively open-coding it in a type-unsafe way. Drop
hvm_vioapic_size() altogether, folding the other use in a memset()
invocation into the subsequent loop.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -622,9 +622,9 @@ void vioapic_reset(struct domain *d)
         unsigned int nr_pins = vioapic->nr_pins, base_gsi = vioapic->base_gsi;
         unsigned int pin;
 
-        memset(vioapic, 0, hvm_vioapic_size(nr_pins));
+        memset(vioapic, 0, offsetof(typeof(*vioapic), redirtbl));
         for ( pin = 0; pin < nr_pins; pin++ )
-            vioapic->redirtbl[pin].fields.mask = 1;
+            vioapic->redirtbl[pin] = (union vioapic_redir_entry){ .fields.mask = 1 };
 
         if ( !is_hardware_domain(d) )
         {
@@ -685,7 +685,8 @@ int vioapic_init(struct domain *d)
         }
 
         if ( (domain_vioapic(d, i) =
-              xmalloc_bytes(hvm_vioapic_size(nr_pins))) == NULL )
+              xmalloc_flex_struct(struct hvm_vioapic, redirtbl,
+                                  nr_pins)) == NULL )
         {
             vioapic_free(d, nr_vioapics);
             return -ENOMEM;
--- a/xen/include/asm-x86/hvm/vioapic.h
+++ b/xen/include/asm-x86/hvm/vioapic.h
@@ -56,7 +56,6 @@ struct hvm_vioapic {
     };
 };
 
-#define hvm_vioapic_size(cnt) offsetof(struct hvm_vioapic, redirtbl[cnt])
 #define domain_vioapic(d, i) ((d)->arch.hvm.vioapic[i])
 #define vioapic_domain(v) ((v)->domain)
 


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:50:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:50:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46213.81996 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmE6J-0001oV-CV; Mon, 07 Dec 2020 10:50:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46213.81996; Mon, 07 Dec 2020 10:50:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmE6J-0001oO-90; Mon, 07 Dec 2020 10:50:39 +0000
Received: by outflank-mailman (input) for mailman id 46213;
 Mon, 07 Dec 2020 10:50:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmE6H-0001oH-Rc
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 10:50:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c265d1a1-d027-4f5a-b340-214ae9c34add;
 Mon, 07 Dec 2020 10:50:37 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 620AFAC90;
 Mon,  7 Dec 2020 10:50:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c265d1a1-d027-4f5a-b340-214ae9c34add
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607338236; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=emEnPp+Epqcvw33KMvAizqtN7vgoaG6rdwXlXDb0rKA=;
	b=X7qU6aqVKiA17Mkt2LASL3UFwibZ4A/6qUYU6LTAOcc2reZUYdyCMFeWEDtujIiIA4vZ8o
	ym03EcN+/yzIeblqiATAdRLxGpGsH7G8JS+c/fmrrH8HTkSCQlcTnC5cWraw0/eDFtYGpv
	anrHjoENMJkzoLfx53aS3Xa0zxyF7r4=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/vPMU: make use of xmalloc_flex_struct()
Message-ID: <89357ce5-6a2b-9abf-0655-6bebced091bd@suse.com>
Date: Mon, 7 Dec 2020 11:50:39 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

... instead of effectively open-coding it in a type-unsafe way. Drop the
regs_sz variable altogether, replacing other uses suitably.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/cpu/vpmu_amd.c
+++ b/xen/arch/x86/cpu/vpmu_amd.c
@@ -44,9 +44,6 @@ static const u32 __read_mostly *counters
 static const u32 __read_mostly *ctrls;
 static bool_t __read_mostly k7_counters_mirrored;
 
-/* Total size of PMU registers block (copied to/from PV(H) guest) */
-static unsigned int __read_mostly regs_sz;
-
 #define F10H_NUM_COUNTERS   4
 #define F15H_NUM_COUNTERS   6
 #define MAX_NUM_COUNTERS    F15H_NUM_COUNTERS
@@ -156,12 +153,9 @@ static inline u32 get_fam15h_addr(u32 ad
 
 static void amd_vpmu_init_regs(struct xen_pmu_amd_ctxt *ctxt)
 {
-    unsigned i;
-    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-
-    memset(&ctxt->regs[0], 0, regs_sz);
-    for ( i = 0; i < num_counters; i++ )
-        ctrl_regs[i] = ctrl_rsvd[i];
+    memset(&ctxt->regs[0], 0, num_counters * sizeof(ctxt->regs[0]));
+    memcpy(&ctxt->regs[num_counters], &ctrl_rsvd[0],
+           num_counters * sizeof(ctxt->regs[0]));
 }
 
 static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
@@ -242,7 +236,8 @@ static int amd_vpmu_load(struct vcpu *v,
         ctxt = vpmu->context;
         ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
 
-        memcpy(&ctxt->regs[0], &guest_ctxt->regs[0], regs_sz);
+        memcpy(&ctxt->regs[0], &guest_ctxt->regs[0],
+               2 * num_counters * sizeof(ctxt->regs[0]));
 
         for ( i = 0; i < num_counters; i++ )
         {
@@ -316,7 +311,8 @@ static int amd_vpmu_save(struct vcpu *v,
         ASSERT(!has_vlapic(v->domain));
         ctxt = vpmu->context;
         guest_ctxt = &vpmu->xenpmu_data->pmu.c.amd;
-        memcpy(&guest_ctxt->regs[0], &ctxt->regs[0], regs_sz);
+        memcpy(&guest_ctxt->regs[0], &ctxt->regs[0],
+               2 * num_counters * sizeof(ctxt->regs[0]));
     }
 
     return 1;
@@ -508,7 +504,8 @@ int svm_vpmu_initialise(struct vcpu *v)
     if ( !counters )
         return -EINVAL;
 
-    ctxt = xmalloc_bytes(sizeof(*ctxt) + regs_sz);
+    ctxt = xmalloc_flex_struct(struct xen_pmu_amd_ctxt, regs,
+                               2 * num_counters);
     if ( !ctxt )
     {
         printk(XENLOG_G_WARNING "Insufficient memory for PMU, "
@@ -565,8 +562,6 @@ static int __init common_init(void)
         ctrl_rsvd[i] &= CTRL_RSVD_MASK;
     }
 
-    regs_sz = 2 * sizeof(uint64_t) * num_counters;
-
     return 0;
 }
 


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:55:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:55:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46222.82007 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEAd-00028v-6W; Mon, 07 Dec 2020 10:55:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46222.82007; Mon, 07 Dec 2020 10:55:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEAd-00028o-3Q; Mon, 07 Dec 2020 10:55:07 +0000
Received: by outflank-mailman (input) for mailman id 46222;
 Mon, 07 Dec 2020 10:55:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kmEAb-00028e-4y
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 10:55:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmEAZ-0005xY-NY; Mon, 07 Dec 2020 10:55:03 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmEAZ-0005dh-Ek; Mon, 07 Dec 2020 10:55:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=NnEbqKOdR082NW0t9XwGlGICAQ9n8pOelvEy3ARGAM4=; b=EnlJN2dOBtNox/O5f0saZ6evoi
	qlzgjMIoZ4GNzKJAFBOfqNWlg9siTPnNIJwaub1ezgryKihFJcAISY/TrPcHYgQtVb9PuaRpP37J0
	xrC04R3N9JuvjUF7qpYPJfWy++g7UoSZcKZ/4Df7aXs3bRPKSNrEEoEOSB6zxMWiFrkE=;
Subject: Re: [PATCH v2 7/8] xen/arm: Remove Linux specific code that is not
 usable in XEN
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <1d9da8ed4845aeb9e86a5ce6750b811bd7e2020e.1606406359.git.rahul.singh@arm.com>
 <cd74f2a7-7836-ef90-9cd8-857068adb0f5@xen.org>
 <51C0C24A-3CE6-48A3-85F5-14F010409DC3@arm.com>
 <b87e9293-77bb-2c43-63d0-8d54d5fc9a7e@xen.org>
 <0B43914F-93E1-4860-BA45-7E08F817818C@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <72c29861-4e50-dd7d-9986-a84000ece1dd@xen.org>
Date: Mon, 7 Dec 2020 10:55:01 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <0B43914F-93E1-4860-BA45-7E08F817818C@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 07/12/2020 10:36, Rahul Singh wrote:
>> I would implement it in include/xen/compiler.h.
> 
> Ok. I will implement and share the patch for “__attribute__((__fallthrough__))”, but for SMMUv3 is it ok if we proceed with “/* fallthrough */”?
The first approach should always be to work with the community to 
introduce such functionality tree-wide. Note that I am not suggesting to 
replace existing "/* fallthrough */" with "fallthrough".

If it is taking too long or the task is significant, then we can discuss 
ways to work around it temporarily.

I don't believe we are at that stage yet here.

> As the “fallthrough” implementation is common to all architectures, it requires review from all stakeholders, as per my understanding. I don’t want to block SMMUv3 because of this.

It only requires review from "THE REST" maintainers.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:55:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:55:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46223.82020 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEAe-00029y-FE; Mon, 07 Dec 2020 10:55:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46223.82020; Mon, 07 Dec 2020 10:55:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEAe-00029o-BF; Mon, 07 Dec 2020 10:55:08 +0000
Received: by outflank-mailman (input) for mailman id 46223;
 Mon, 07 Dec 2020 10:55:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DX/D=FL=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kmEAc-00028j-Rk
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 10:55:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8d832024-4040-4366-9066-0d889ee49bd3;
 Mon, 07 Dec 2020 10:55:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2E5ADAC9A;
 Mon,  7 Dec 2020 10:55:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8d832024-4040-4366-9066-0d889ee49bd3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607338503; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=KJpiuJgBhnj1MwTMzvIUckDr0A6Dg2EjphA7qkRPDD8=;
	b=cGoGdCdbajbEroTm16m9oZZVEOzA8TnXvHO8siqI8JVNxgr7QVnWzNddA/2I8SSO0uxwz+
	wyzv1cRMdei4bS7eh4ZEihT/OCNKqpW6XYeXbyoLc0v6og/wUz5yx/sZlD83sv1GtB/KlS
	mC/YSDw2oIsYl/GK9i+BjRr+tc+uAH8=
Subject: Re: GPF on 0xdead000000000100 in nvme_map_data - Linux 5.9.9
To: Jason Andryuk <jandryuk@gmail.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>
Cc: Christoph Hellwig <hch@lst.de>, xen-devel
 <xen-devel@lists.xenproject.org>, Keith Busch <kbusch@kernel.org>,
 Jens Axboe <axboe@fb.com>, Sagi Grimberg <sagi@grimberg.me>,
 linux-nvme@lists.infradead.org
References: <20201129035639.GW2532@mail-itl>
 <20201130164010.GA23494@redsun51.ssa.fujisawa.hgst.com>
 <20201202000642.GJ201140@mail-itl> <20201204110847.GU201140@mail-itl>
 <20201204120803.GA20727@lst.de> <20201204122054.GV201140@mail-itl>
 <20201205082839.ts3ju6yta46cgwjn@Air-de-Roger>
 <CAKf6xpvdD-XJoRO91B+Lwc=0Sb6Luw2X8Y9sH_MQsAWhZmj+hw@mail.gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <293433c5-d23b-63e7-d607-9d24f06c46b4@suse.com>
Date: Mon, 7 Dec 2020 11:55:01 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <CAKf6xpvdD-XJoRO91B+Lwc=0Sb6Luw2X8Y9sH_MQsAWhZmj+hw@mail.gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="f4lkDjWkzdTPnDP7lkZZv8N8sBAAohmmJ"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--f4lkDjWkzdTPnDP7lkZZv8N8sBAAohmmJ
Content-Type: multipart/mixed; boundary="wFUe5hxoxCUoTdipeFju8EQv0wbXJQE3P";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jason Andryuk <jandryuk@gmail.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>
Cc: Christoph Hellwig <hch@lst.de>, xen-devel
 <xen-devel@lists.xenproject.org>, Keith Busch <kbusch@kernel.org>,
 Jens Axboe <axboe@fb.com>, Sagi Grimberg <sagi@grimberg.me>,
 linux-nvme@lists.infradead.org
Message-ID: <293433c5-d23b-63e7-d607-9d24f06c46b4@suse.com>
Subject: Re: GPF on 0xdead000000000100 in nvme_map_data - Linux 5.9.9
References: <20201129035639.GW2532@mail-itl>
 <20201130164010.GA23494@redsun51.ssa.fujisawa.hgst.com>
 <20201202000642.GJ201140@mail-itl> <20201204110847.GU201140@mail-itl>
 <20201204120803.GA20727@lst.de> <20201204122054.GV201140@mail-itl>
 <20201205082839.ts3ju6yta46cgwjn@Air-de-Roger>
 <CAKf6xpvdD-XJoRO91B+Lwc=0Sb6Luw2X8Y9sH_MQsAWhZmj+hw@mail.gmail.com>
In-Reply-To: <CAKf6xpvdD-XJoRO91B+Lwc=0Sb6Luw2X8Y9sH_MQsAWhZmj+hw@mail.gmail.com>

--wFUe5hxoxCUoTdipeFju8EQv0wbXJQE3P
Content-Type: multipart/mixed;
 boundary="------------87D70290072DEFFA48C976C4"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------87D70290072DEFFA48C976C4
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

Marek,

On 06.12.20 17:47, Jason Andryuk wrote:
> On Sat, Dec 5, 2020 at 3:29 AM Roger Pau Monn=C3=A9 <roger.pau@citrix.c=
om> wrote:
>>
>> On Fri, Dec 04, 2020 at 01:20:54PM +0100, Marek Marczykowski-G=C3=B3re=
cki wrote:
>>> On Fri, Dec 04, 2020 at 01:08:03PM +0100, Christoph Hellwig wrote:
>>>> On Fri, Dec 04, 2020 at 12:08:47PM +0100, Marek Marczykowski-G=C3=B3=
recki wrote:
>>>>> culprit:
>>>>>
>>>>> commit 9e2369c06c8a181478039258a4598c1ddd2cadfa
>>>>> Author: Roger Pau Monne <roger.pau@citrix.com>
>>>>> Date:   Tue Sep 1 10:33:26 2020 +0200
>>>>>
>>>>>      xen: add helpers to allocate unpopulated memory
>>>>>
>>>>> I'm adding relevant people and xen-devel to the thread.
>>>>> For completeness, here is the original crash message:
>>>>
>>>> That commit definitively adds a new ZONE_DEVICE user, so it does loo=
k
>>>> related.  But you are not running on Xen, are you?
>>>
>>> I am. It is Xen dom0.
>>
>> I'm afraid I'm on leave and won't be able to look into this until the
>> beginning of January. I would guess it's some kind of bad
>> interaction between blkback and NVMe drivers both using ZONE_DEVICE?
>>
>> Maybe the best is to revert this change and I will look into it when
>> I get back, unless someone is willing to debug this further.
>=20
> Looking at commit 9e2369c06c8a and xen-blkback put_free_pages() , they
> both use page->lru which is part of the anonymous union shared with
> *pgmap.  That matches Marek's suspicion that the ZONE_DEVICE memory is
> being used as ZONE_NORMAL.
>=20
> memmap_init_zone_device() says:
> * ZONE_DEVICE pages union ->lru with a ->pgmap back pointer
> * and zone_device_data.  It is a bug if a ZONE_DEVICE page is
> * ever freed or placed on a driver-private list.
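
The collision Jason quotes can be sketched in miniature as follows. This is an illustrative model, not the real `struct page` layout (names `fake_page` and `link_page` are invented): because `->lru` shares storage with `->pgmap` through an anonymous union, linking a ZONE_DEVICE page onto a driver-private list silently overwrites its pgmap back-pointer.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal doubly-linked list head, as in the kernel. */
struct list_head { struct list_head *next, *prev; };

/* Toy model of the relevant part of struct page: lru overlays pgmap
 * and zone_device_data via an anonymous union (C11). */
struct fake_page {
    union {
        struct list_head lru;       /* used by page lists (ZONE_NORMAL) */
        struct {
            void *pgmap;            /* ZONE_DEVICE back-pointer */
            void *zone_device_data; /* free for driver use */
        };
    };
};

/* Putting the page on a driver-private list clobbers pgmap, because
 * lru.next occupies the same bytes. */
void link_page(struct fake_page *pg, struct list_head *head)
{
    pg->lru.next = head;
    pg->lru.prev = head;
}
```

This is exactly why Juergen's second patch below switches the backend page caches to `page->zone_device_data` for the ZONE_DEVICE case: that field is explicitly reserved for driver use, whereas `->lru` is not.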

Second try, now even tested to work on a test system (without NVMe).

Juergen

--------------87D70290072DEFFA48C976C4
Content-Type: text/x-patch; charset=UTF-8;
 name="0001-xen-add-helpers-for-caching-grant-mapping-pages.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename*0="0001-xen-add-helpers-for-caching-grant-mapping-pages.patch"

=46rom 4f6ad98ce5fd457fd12e6617b0bc2a8f82fbce4d Mon Sep 17 00:00:00 2001
From: Juergen Gross <jgross@suse.com>
Date: Mon, 7 Dec 2020 08:31:22 +0100
Subject: [PATCH 1/2] xen: add helpers for caching grant mapping pages

Instead of having similar helpers in multiple backend drivers use
common helpers for caching pages allocated via gnttab_alloc_pages().

Make use of those helpers in blkback and scsiback.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/block/xen-blkback/blkback.c | 89 ++++++-----------------------
 drivers/block/xen-blkback/common.h  |  4 +-
 drivers/block/xen-blkback/xenbus.c  |  6 +-
 drivers/xen/grant-table.c           | 72 +++++++++++++++++++++++
 drivers/xen/xen-scsiback.c          | 60 ++++---------------
 include/xen/grant_table.h           | 13 +++++
 6 files changed, 116 insertions(+), 128 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkb=
ack/blkback.c
index 501e9dacfff9..9ebf53903d7b 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -132,73 +132,12 @@ module_param(log_stats, int, 0644);
=20
 #define BLKBACK_INVALID_HANDLE (~0)
=20
-/* Number of free pages to remove on each call to gnttab_free_pages */
-#define NUM_BATCH_FREE_PAGES 10
-
 static inline bool persistent_gnt_timeout(struct persistent_gnt *persist=
ent_gnt)
 {
 	return pgrant_timeout && (jiffies - persistent_gnt->last_used >=3D
 			HZ * pgrant_timeout);
 }
=20
-static inline int get_free_page(struct xen_blkif_ring *ring, struct page=
 **page)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&ring->free_pages_lock, flags);
-	if (list_empty(&ring->free_pages)) {
-		BUG_ON(ring->free_pages_num !=3D 0);
-		spin_unlock_irqrestore(&ring->free_pages_lock, flags);
-		return gnttab_alloc_pages(1, page);
-	}
-	BUG_ON(ring->free_pages_num =3D=3D 0);
-	page[0] =3D list_first_entry(&ring->free_pages, struct page, lru);
-	list_del(&page[0]->lru);
-	ring->free_pages_num--;
-	spin_unlock_irqrestore(&ring->free_pages_lock, flags);
-
-	return 0;
-}
-
-static inline void put_free_pages(struct xen_blkif_ring *ring, struct pa=
ge **page,
-                                  int num)
-{
-	unsigned long flags;
-	int i;
-
-	spin_lock_irqsave(&ring->free_pages_lock, flags);
-	for (i =3D 0; i < num; i++)
-		list_add(&page[i]->lru, &ring->free_pages);
-	ring->free_pages_num +=3D num;
-	spin_unlock_irqrestore(&ring->free_pages_lock, flags);
-}
-
-static inline void shrink_free_pagepool(struct xen_blkif_ring *ring, int=
 num)
-{
-	/* Remove requested pages in batches of NUM_BATCH_FREE_PAGES */
-	struct page *page[NUM_BATCH_FREE_PAGES];
-	unsigned int num_pages =3D 0;
-	unsigned long flags;
-
-	spin_lock_irqsave(&ring->free_pages_lock, flags);
-	while (ring->free_pages_num > num) {
-		BUG_ON(list_empty(&ring->free_pages));
-		page[num_pages] =3D list_first_entry(&ring->free_pages,
-		                                   struct page, lru);
-		list_del(&page[num_pages]->lru);
-		ring->free_pages_num--;
-		if (++num_pages =3D=3D NUM_BATCH_FREE_PAGES) {
-			spin_unlock_irqrestore(&ring->free_pages_lock, flags);
-			gnttab_free_pages(num_pages, page);
-			spin_lock_irqsave(&ring->free_pages_lock, flags);
-			num_pages =3D 0;
-		}
-	}
-	spin_unlock_irqrestore(&ring->free_pages_lock, flags);
-	if (num_pages !=3D 0)
-		gnttab_free_pages(num_pages, page);
-}
-
 #define vaddr(page) ((unsigned long)pfn_to_kaddr(page_to_pfn(page)))
=20
 static int do_block_io_op(struct xen_blkif_ring *ring, unsigned int *eoi=
_flags);
@@ -331,7 +270,8 @@ static void free_persistent_gnts(struct xen_blkif_rin=
g *ring, struct rb_root *ro
 			unmap_data.count =3D segs_to_unmap;
 			BUG_ON(gnttab_unmap_refs_sync(&unmap_data));
=20
-			put_free_pages(ring, pages, segs_to_unmap);
+			gnttab_page_cache_put(&ring->free_pages, pages,
+					      segs_to_unmap);
 			segs_to_unmap =3D 0;
 		}
=20
@@ -371,7 +311,8 @@ void xen_blkbk_unmap_purged_grants(struct work_struct=
 *work)
 		if (++segs_to_unmap =3D=3D BLKIF_MAX_SEGMENTS_PER_REQUEST) {
 			unmap_data.count =3D segs_to_unmap;
 			BUG_ON(gnttab_unmap_refs_sync(&unmap_data));
-			put_free_pages(ring, pages, segs_to_unmap);
+			gnttab_page_cache_put(&ring->free_pages, pages,
+					      segs_to_unmap);
 			segs_to_unmap =3D 0;
 		}
 		kfree(persistent_gnt);
@@ -379,7 +320,7 @@ void xen_blkbk_unmap_purged_grants(struct work_struct=
 *work)
 	if (segs_to_unmap > 0) {
 		unmap_data.count =3D segs_to_unmap;
 		BUG_ON(gnttab_unmap_refs_sync(&unmap_data));
-		put_free_pages(ring, pages, segs_to_unmap);
+		gnttab_page_cache_put(&ring->free_pages, pages, segs_to_unmap);
 	}
 }
=20
@@ -664,9 +605,10 @@ int xen_blkif_schedule(void *arg)
=20
 		/* Shrink the free pages pool if it is too large. */
 		if (time_before(jiffies, blkif->buffer_squeeze_end))
-			shrink_free_pagepool(ring, 0);
+			gnttab_page_cache_shrink(&ring->free_pages, 0);
 		else
-			shrink_free_pagepool(ring, max_buffer_pages);
+			gnttab_page_cache_shrink(&ring->free_pages,
+						 max_buffer_pages);
=20
 		if (log_stats && time_after(jiffies, ring->st_print))
 			print_stats(ring);
@@ -697,7 +639,7 @@ void xen_blkbk_free_caches(struct xen_blkif_ring *rin=
g)
 	ring->persistent_gnt_c =3D 0;
=20
 	/* Since we are shutting down remove all pages from the buffer */
-	shrink_free_pagepool(ring, 0 /* All */);
+	gnttab_page_cache_shrink(&ring->free_pages, 0 /* All */);
 }
=20
 static unsigned int xen_blkbk_unmap_prepare(
@@ -736,7 +678,7 @@ static void xen_blkbk_unmap_and_respond_callback(int =
result, struct gntab_unmap_
 	   but is this the best way to deal with this? */
 	BUG_ON(result);
=20
-	put_free_pages(ring, data->pages, data->count);
+	gnttab_page_cache_put(&ring->free_pages, data->pages, data->count);
 	make_response(ring, pending_req->id,
 		      pending_req->operation, pending_req->status);
 	free_req(ring, pending_req);
@@ -803,7 +745,8 @@ static void xen_blkbk_unmap(struct xen_blkif_ring *ri=
ng,
 		if (invcount) {
 			ret =3D gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
 			BUG_ON(ret);
-			put_free_pages(ring, unmap_pages, invcount);
+			gnttab_page_cache_put(&ring->free_pages, unmap_pages,
+					      invcount);
 		}
 		pages +=3D batch;
 		num -=3D batch;
@@ -850,7 +793,8 @@ static int xen_blkbk_map(struct xen_blkif_ring *ring,=

 			pages[i]->page =3D persistent_gnt->page;
 			pages[i]->persistent_gnt =3D persistent_gnt;
 		} else {
-			if (get_free_page(ring, &pages[i]->page))
+			if (gnttab_page_cache_get(&ring->free_pages,
+						  &pages[i]->page))
 				goto out_of_memory;
 			addr =3D vaddr(pages[i]->page);
 			pages_to_gnt[segs_to_map] =3D pages[i]->page;
@@ -883,7 +827,8 @@ static int xen_blkbk_map(struct xen_blkif_ring *ring,=

 			BUG_ON(new_map_idx >=3D segs_to_map);
 			if (unlikely(map[new_map_idx].status !=3D 0)) {
 				pr_debug("invalid buffer -- could not remap it\n");
-				put_free_pages(ring, &pages[seg_idx]->page, 1);
+				gnttab_page_cache_put(&ring->free_pages,
+						      &pages[seg_idx]->page, 1);
 				pages[seg_idx]->handle =3D BLKBACK_INVALID_HANDLE;
 				ret |=3D 1;
 				goto next;
@@ -944,7 +889,7 @@ static int xen_blkbk_map(struct xen_blkif_ring *ring,=

=20
 out_of_memory:
 	pr_alert("%s: out of memory\n", __func__);
-	put_free_pages(ring, pages_to_gnt, segs_to_map);
+	gnttab_page_cache_put(&ring->free_pages, pages_to_gnt, segs_to_map);
 	for (i =3D last_map; i < num; i++)
 		pages[i]->handle =3D BLKBACK_INVALID_HANDLE;
 	return -ENOMEM;
diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkba=
ck/common.h
index c6ea5d38c509..a1b9df2c4ef1 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -288,9 +288,7 @@ struct xen_blkif_ring {
 	struct work_struct	persistent_purge_work;
=20
 	/* Buffer of free pages to map grant refs. */
-	spinlock_t		free_pages_lock;
-	int			free_pages_num;
-	struct list_head	free_pages;
+	struct gnttab_page_cache free_pages;
=20
 	struct work_struct	free_work;
 	/* Thread shutdown wait queue. */
diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkba=
ck/xenbus.c
index f5705569e2a7..76912c584a76 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -144,8 +144,7 @@ static int xen_blkif_alloc_rings(struct xen_blkif *bl=
kif)
 		INIT_LIST_HEAD(&ring->pending_free);
 		INIT_LIST_HEAD(&ring->persistent_purge_list);
 		INIT_WORK(&ring->persistent_purge_work, xen_blkbk_unmap_purged_grants)=
;
-		spin_lock_init(&ring->free_pages_lock);
-		INIT_LIST_HEAD(&ring->free_pages);
+		gnttab_page_cache_init(&ring->free_pages);
=20
 		spin_lock_init(&ring->pending_free_lock);
 		init_waitqueue_head(&ring->pending_free_wq);
@@ -317,8 +316,7 @@ static int xen_blkif_disconnect(struct xen_blkif *blk=
if)
 		BUG_ON(atomic_read(&ring->persistent_gnt_in_use) !=3D 0);
 		BUG_ON(!list_empty(&ring->persistent_purge_list));
 		BUG_ON(!RB_EMPTY_ROOT(&ring->persistent_gnts));
-		BUG_ON(!list_empty(&ring->free_pages));
-		BUG_ON(ring->free_pages_num !=3D 0);
+		BUG_ON(ring->free_pages.num_pages !=3D 0);
 		BUG_ON(ring->persistent_gnt_c !=3D 0);
 		WARN_ON(i !=3D (XEN_BLKIF_REQS_PER_PAGE * blkif->nr_ring_pages));
 		ring->active =3D false;
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 523dcdf39cc9..e2e42912f241 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -813,6 +813,78 @@ int gnttab_alloc_pages(int nr_pages, struct page **p=
ages)
 }
 EXPORT_SYMBOL_GPL(gnttab_alloc_pages);
=20
+void gnttab_page_cache_init(struct gnttab_page_cache *cache)
+{
+	spin_lock_init(&cache->lock);
+	INIT_LIST_HEAD(&cache->pages);
+	cache->num_pages =3D 0;
+}
+EXPORT_SYMBOL_GPL(gnttab_page_cache_init);
+
+int gnttab_page_cache_get(struct gnttab_page_cache *cache, struct page *=
*page)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&cache->lock, flags);
+
+	if (list_empty(&cache->pages)) {
+		spin_unlock_irqrestore(&cache->lock, flags);
+		return gnttab_alloc_pages(1, page);
+	}
+
+	page[0] =3D list_first_entry(&cache->pages, struct page, lru);
+	list_del(&page[0]->lru);
+	cache->num_pages--;
+
+	spin_unlock_irqrestore(&cache->lock, flags);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(gnttab_page_cache_get);
+
+void gnttab_page_cache_put(struct gnttab_page_cache *cache, struct page =
**page,
+			   unsigned int num)
+{
+	unsigned long flags;
+	unsigned int i;
+
+	spin_lock_irqsave(&cache->lock, flags);
+
+	for (i =3D 0; i < num; i++)
+		list_add(&page[i]->lru, &cache->pages);
+	cache->num_pages +=3D num;
+
+	spin_unlock_irqrestore(&cache->lock, flags);
+}
+EXPORT_SYMBOL_GPL(gnttab_page_cache_put);
+
+void gnttab_page_cache_shrink(struct gnttab_page_cache *cache, unsigned =
int num)
+{
+	struct page *page[10];
+	unsigned int i =3D 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&cache->lock, flags);
+
+	while (cache->num_pages > num) {
+		page[i] =3D list_first_entry(&cache->pages, struct page, lru);
+		list_del(&page[i]->lru);
+		cache->num_pages--;
+		if (++i =3D=3D ARRAY_SIZE(page)) {
+			spin_unlock_irqrestore(&cache->lock, flags);
+			gnttab_free_pages(i, page);
+			i =3D 0;
+			spin_lock_irqsave(&cache->lock, flags);
+		}
+	}
+
+	spin_unlock_irqrestore(&cache->lock, flags);
+
+	if (i !=3D 0)
+		gnttab_free_pages(i, page);
+}
+EXPORT_SYMBOL_GPL(gnttab_page_cache_shrink);
+
 void gnttab_pages_clear_private(int nr_pages, struct page **pages)
 {
 	int i;
diff --git a/drivers/xen/xen-scsiback.c b/drivers/xen/xen-scsiback.c
index 4acc4e899600..862162dca33c 100644
--- a/drivers/xen/xen-scsiback.c
+++ b/drivers/xen/xen-scsiback.c
@@ -99,6 +99,8 @@ struct vscsibk_info {
 	struct list_head v2p_entry_lists;
=20
 	wait_queue_head_t waiting_to_free;
+
+	struct gnttab_page_cache free_pages;
 };
=20
 /* theoretical maximum of grants for one request */
@@ -188,10 +190,6 @@ module_param_named(max_buffer_pages, scsiback_max_bu=
ffer_pages, int, 0644);
 MODULE_PARM_DESC(max_buffer_pages,
 "Maximum number of free pages to keep in backend buffer");
=20
-static DEFINE_SPINLOCK(free_pages_lock);
-static int free_pages_num;
-static LIST_HEAD(scsiback_free_pages);
-
 /* Global spinlock to protect scsiback TPG list */
 static DEFINE_MUTEX(scsiback_mutex);
 static LIST_HEAD(scsiback_list);
@@ -207,41 +205,6 @@ static void scsiback_put(struct vscsibk_info *info)
 		wake_up(&info->waiting_to_free);
 }
=20
-static void put_free_pages(struct page **page, int num)
-{
-	unsigned long flags;
-	int i =3D free_pages_num + num, n =3D num;
-
-	if (num =3D=3D 0)
-		return;
-	if (i > scsiback_max_buffer_pages) {
-		n =3D min(num, i - scsiback_max_buffer_pages);
-		gnttab_free_pages(n, page + num - n);
-		n =3D num - n;
-	}
-	spin_lock_irqsave(&free_pages_lock, flags);
-	for (i =3D 0; i < n; i++)
-		list_add(&page[i]->lru, &scsiback_free_pages);
-	free_pages_num +=3D n;
-	spin_unlock_irqrestore(&free_pages_lock, flags);
-}
-
-static int get_free_page(struct page **page)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&free_pages_lock, flags);
-	if (list_empty(&scsiback_free_pages)) {
-		spin_unlock_irqrestore(&free_pages_lock, flags);
-		return gnttab_alloc_pages(1, page);
-	}
-	page[0] =3D list_first_entry(&scsiback_free_pages, struct page, lru);
-	list_del(&page[0]->lru);
-	free_pages_num--;
-	spin_unlock_irqrestore(&free_pages_lock, flags);
-	return 0;
-}
-
 static unsigned long vaddr_page(struct page *page)
 {
 	unsigned long pfn =3D page_to_pfn(page);
@@ -302,7 +265,8 @@ static void scsiback_fast_flush_area(struct vscsibk_p=
end *req)
 		BUG_ON(err);
 	}
=20
-	put_free_pages(req->pages, req->n_grants);
+	gnttab_page_cache_put(&req->info->free_pages, req->pages,
+			      req->n_grants);
 	req->n_grants =3D 0;
 }
=20
@@ -445,8 +409,8 @@ static int scsiback_gnttab_data_map_list(struct vscsi=
bk_pend *pending_req,
 	struct vscsibk_info *info =3D pending_req->info;
=20
 	for (i =3D 0; i < cnt; i++) {
-		if (get_free_page(pg + mapcount)) {
-			put_free_pages(pg, mapcount);
+		if (gnttab_page_cache_get(&info->free_pages, pg + mapcount)) {
+			gnttab_page_cache_put(&info->free_pages, pg, mapcount);
 			pr_err("no grant page\n");
 			return -ENOMEM;
 		}
@@ -796,6 +760,8 @@ static int scsiback_do_cmd_fn(struct vscsibk_info *in=
fo,
 		cond_resched();
 	}
=20
+	gnttab_page_cache_shrink(&info->free_pages, scsiback_max_buffer_pages);=

+
 	RING_FINAL_CHECK_FOR_REQUESTS(&info->ring, more_to_do);
 	return more_to_do;
 }
@@ -1233,6 +1199,8 @@ static int scsiback_remove(struct xenbus_device *de=
v)
=20
 	scsiback_release_translation_entry(info);
=20
+	gnttab_page_cache_shrink(&info->free_pages, 0);
+
 	dev_set_drvdata(&dev->dev, NULL);
=20
 	return 0;
@@ -1263,6 +1231,7 @@ static int scsiback_probe(struct xenbus_device *dev=
,
 	info->irq =3D 0;
 	INIT_LIST_HEAD(&info->v2p_entry_lists);
 	spin_lock_init(&info->v2p_lock);
+	gnttab_page_cache_init(&info->free_pages);
=20
 	err =3D xenbus_printf(XBT_NIL, dev->nodename, "feature-sg-grant", "%u",=

 			    SG_ALL);
@@ -1879,13 +1848,6 @@ static int __init scsiback_init(void)
=20
 static void __exit scsiback_exit(void)
 {
-	struct page *page;
-
-	while (free_pages_num) {
-		if (get_free_page(&page))
-			BUG();
-		gnttab_free_pages(1, &page);
-	}
 	target_unregister_template(&scsiback_ops);
 	xenbus_unregister_driver(&scsiback_driver);
 }
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 9bc5bc07d4d3..c6ef8ffc1a09 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -198,6 +198,19 @@ void gnttab_free_auto_xlat_frames(void);
 int gnttab_alloc_pages(int nr_pages, struct page **pages);
 void gnttab_free_pages(int nr_pages, struct page **pages);
=20
+struct gnttab_page_cache {
+	spinlock_t		lock;
+	struct list_head	pages;
+	unsigned int		num_pages;
+};
+
+void gnttab_page_cache_init(struct gnttab_page_cache *cache);
+int gnttab_page_cache_get(struct gnttab_page_cache *cache, struct page *=
*page);
+void gnttab_page_cache_put(struct gnttab_page_cache *cache, struct page =
**page,
+			   unsigned int num);
+void gnttab_page_cache_shrink(struct gnttab_page_cache *cache,
+			      unsigned int num);
+
 #ifdef CONFIG_XEN_GRANT_DMA_ALLOC
 struct gnttab_dma_alloc_args {
 	/* Device for which DMA memory will be/was allocated. */
--=20
2.26.2


--------------87D70290072DEFFA48C976C4
Content-Type: text/x-patch; charset=UTF-8;
 name="0002-xen-don-t-use-page-lru-for-ZONE_DEVICE-memory.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="0002-xen-don-t-use-page-lru-for-ZONE_DEVICE-memory.patch"

=46rom 5ecf68877ed7ff4c7a96464b82eb84cc34d6d3f3 Mon Sep 17 00:00:00 2001
From: Juergen Gross <jgross@suse.com>
Date: Mon, 7 Dec 2020 09:36:14 +0100
Subject: [PATCH 2/2] xen: don't use page->lru for ZONE_DEVICE memory

Commit 9e2369c06c8a18 ("xen: add helpers to allocate unpopulated
memory") introduced usage of ZONE_DEVICE memory for foreign memory
mappings.

Unfortunately this collides with using page->lru for Xen backend
private page caches.

Fix that by using page->zone_device_data instead.

Fixes: 9e2369c06c8a18 ("xen: add helpers to allocate unpopulated memory")=

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/grant-table.c       | 65 +++++++++++++++++++++++++++++----
 drivers/xen/unpopulated-alloc.c | 20 +++++-----
 include/xen/grant_table.h       |  4 ++
 3 files changed, 73 insertions(+), 16 deletions(-)

diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index e2e42912f241..ddb38a3d7680 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -813,10 +813,63 @@ int gnttab_alloc_pages(int nr_pages, struct page **pages)
 }
 EXPORT_SYMBOL_GPL(gnttab_alloc_pages);
 
+#ifdef CONFIG_XEN_UNPOPULATED_ALLOC
+static inline void cache_init(struct gnttab_page_cache *cache)
+{
+	cache->pages = NULL;
+}
+
+static inline bool cache_empty(struct gnttab_page_cache *cache)
+{
+	return !cache->pages;
+}
+
+static inline struct page *cache_deq(struct gnttab_page_cache *cache)
+{
+	struct page *page;
+
+	page = cache->pages;
+	cache->pages = page->zone_device_data;
+
+	return page;
+}
+
+static inline void cache_enq(struct gnttab_page_cache *cache, struct page *page)
+{
+	page->zone_device_data = cache->pages;
+	cache->pages = page;
+}
+#else
+static inline void cache_init(struct gnttab_page_cache *cache)
+{
+	INIT_LIST_HEAD(&cache->pages);
+}
+
+static inline bool cache_empty(struct gnttab_page_cache *cache)
+{
+	return list_empty(&cache->pages);
+}
+
+static inline struct page *cache_deq(struct gnttab_page_cache *cache)
+{
+	struct page *page;
+
+	page = list_first_entry(&cache->pages, struct page, lru);
+	list_del(&page->lru);
+
+	return page;
+}
+
+static inline void cache_enq(struct gnttab_page_cache *cache, struct page *page)
+{
+	list_add(&page->lru, &cache->pages);
+}
+#endif
+
 void gnttab_page_cache_init(struct gnttab_page_cache *cache)
 {
 	spin_lock_init(&cache->lock);
-	INIT_LIST_HEAD(&cache->pages);
+	cache_init(cache);
 	cache->num_pages = 0;
 }
 EXPORT_SYMBOL_GPL(gnttab_page_cache_init);
@@ -827,13 +880,12 @@ int gnttab_page_cache_get(struct gnttab_page_cache *cache, struct page **page)
 
 	spin_lock_irqsave(&cache->lock, flags);
 
-	if (list_empty(&cache->pages)) {
+	if (cache_empty(cache)) {
 		spin_unlock_irqrestore(&cache->lock, flags);
 		return gnttab_alloc_pages(1, page);
 	}
 
-	page[0] = list_first_entry(&cache->pages, struct page, lru);
-	list_del(&page[0]->lru);
+	page[0] = cache_deq(cache);
 	cache->num_pages--;
 
 	spin_unlock_irqrestore(&cache->lock, flags);
@@ -851,7 +903,7 @@ void gnttab_page_cache_put(struct gnttab_page_cache *cache, struct page **page,
 	spin_lock_irqsave(&cache->lock, flags);
 
 	for (i = 0; i < num; i++)
-		list_add(&page[i]->lru, &cache->pages);
+		cache_enq(cache, page[i]);
 	cache->num_pages += num;
 
 	spin_unlock_irqrestore(&cache->lock, flags);
@@ -867,8 +919,7 @@ void gnttab_page_cache_shrink(struct gnttab_page_cache *cache, unsigned int num)
 	spin_lock_irqsave(&cache->lock, flags);
 
 	while (cache->num_pages > num) {
-		page[i] = list_first_entry(&cache->pages, struct page, lru);
-		list_del(&page[i]->lru);
+		page[i] = cache_deq(cache);
 		cache->num_pages--;
 		if (++i == ARRAY_SIZE(page)) {
 			spin_unlock_irqrestore(&cache->lock, flags);
diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-al=
loc.c
index 8c512ea550bb..7762c1bb23cb 100644
--- a/drivers/xen/unpopulated-alloc.c
+++ b/drivers/xen/unpopulated-alloc.c
@@ -12,7 +12,7 @@
 #include <xen/xen.h>
=20
 static DEFINE_MUTEX(list_lock);
-static LIST_HEAD(page_list);
+static struct page *page_list;
 static unsigned int list_count;
=20
 static int fill_list(unsigned int nr_pages)
@@ -84,7 +84,8 @@ static int fill_list(unsigned int nr_pages)
 		struct page *pg =3D virt_to_page(vaddr + PAGE_SIZE * i);
=20
 		BUG_ON(!virt_addr_valid(vaddr + PAGE_SIZE * i));
-		list_add(&pg->lru, &page_list);
+		pg->zone_device_data =3D page_list;
+		page_list =3D pg;
 		list_count++;
 	}
=20
@@ -118,12 +119,10 @@ int xen_alloc_unpopulated_pages(unsigned int nr_pag=
es, struct page **pages)
 	}
=20
 	for (i =3D 0; i < nr_pages; i++) {
-		struct page *pg =3D list_first_entry_or_null(&page_list,
-							   struct page,
-							   lru);
+		struct page *pg =3D page_list;
=20
 		BUG_ON(!pg);
-		list_del(&pg->lru);
+		page_list =3D pg->zone_device_data;
 		list_count--;
 		pages[i] =3D pg;
=20
@@ -134,7 +133,8 @@ int xen_alloc_unpopulated_pages(unsigned int nr_pages=
, struct page **pages)
 				unsigned int j;
=20
 				for (j =3D 0; j <=3D i; j++) {
-					list_add(&pages[j]->lru, &page_list);
+					pages[j]->zone_device_data =3D page_list;
+					page_list =3D pages[j];
 					list_count++;
 				}
 				goto out;
@@ -160,7 +160,8 @@ void xen_free_unpopulated_pages(unsigned int nr_pages=
, struct page **pages)
=20
 	mutex_lock(&list_lock);
 	for (i =3D 0; i < nr_pages; i++) {
-		list_add(&pages[i]->lru, &page_list);
+		pages[i]->zone_device_data =3D page_list;
+		page_list =3D pages[i];
 		list_count++;
 	}
 	mutex_unlock(&list_lock);
@@ -189,7 +190,8 @@ static int __init init(void)
 			struct page *pg =3D
 				pfn_to_page(xen_extra_mem[i].start_pfn + j);
=20
-			list_add(&pg->lru, &page_list);
+			pg->zone_device_data =3D page_list;
+			page_list =3D pg;
 			list_count++;
 		}
 	}
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index c6ef8ffc1a09..b9c937b3a149 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -200,7 +200,11 @@ void gnttab_free_pages(int nr_pages, struct page **pages);
 
 struct gnttab_page_cache {
 	spinlock_t		lock;
+#ifdef CONFIG_XEN_UNPOPULATED_ALLOC
+	struct page		*pages;
+#else
 	struct list_head	pages;
+#endif
 	unsigned int		num_pages;
 };
 
-- 
2.26.2
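The core data-structure change in the patch above, replacing the doubly-linked list_head chain with a singly-linked list threaded through one spare pointer per page, can be modelled in a few lines of standalone C. The struct and function names below are illustrative stand-ins, not the kernel definitions:

```c
/* Standalone model of a singly-linked page cache: instead of a struct
 * list_head embedded in each page, the cache threads pages through one
 * spare pointer (zone_device_data in the kernel).  All names here are
 * illustrative. */
#include <stddef.h>

struct fake_page {
	void *zone_device_data;		/* reused as the "next" link while cached */
};

struct page_cache {
	struct fake_page *pages;	/* LIFO head, NULL when empty */
};

static void cache_init(struct page_cache *cache)
{
	cache->pages = NULL;
}

static int cache_empty(const struct page_cache *cache)
{
	return cache->pages == NULL;
}

static void cache_enq(struct page_cache *cache, struct fake_page *page)
{
	/* Push at the head: the page's spare pointer links to the old head. */
	page->zone_device_data = cache->pages;
	cache->pages = page;
}

static struct fake_page *cache_deq(struct page_cache *cache)
{
	/* Pop from the head; caller must ensure the cache is non-empty. */
	struct fake_page *page = cache->pages;

	cache->pages = page->zone_device_data;
	return page;
}
```

Because pages are pushed and popped at the head, the cache behaves as a LIFO stack; a single pointer per page suffices, at the cost of the O(1) arbitrary removal that list_del() would provide.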




From xen-devel-bounces@lists.xenproject.org Mon Dec 07 10:58:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 10:58:56 +0000
Subject: Re: [PATCH] x86/vIO-APIC: make use of xmalloc_flex_struct()
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <a5a2e6e9-7bfa-75e8-7890-5d102b09835f@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <51bc3c96-d45b-359d-2e30-f0254ca6d229@citrix.com>
Date: Mon, 7 Dec 2020 10:58:47 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <a5a2e6e9-7bfa-75e8-7890-5d102b09835f@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB

On 07/12/2020 10:49, Jan Beulich wrote:
> ... instead of effectively open-coding it in a type-unsafe way. Drop
> hvm_vioapic_size() altogether, folding the other use in a memset()
> invocation into the subsequent loop.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 11:00:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 11:00:42 +0000
Subject: Re: [PATCH] x86/vPMU: make use of xmalloc_flex_struct()
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <89357ce5-6a2b-9abf-0655-6bebced091bd@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <d62d0948-61a1-aa10-91a9-6f8d5033fc77@citrix.com>
Date: Mon, 7 Dec 2020 11:00:27 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <89357ce5-6a2b-9abf-0655-6bebced091bd@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB

On 07/12/2020 10:50, Jan Beulich wrote:
> ... instead of effectively open-coding it in a type-unsafe way. Drop the
> regs_sz variable altogether, replacing other uses suitably.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 11:13:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 11:13:52 +0000
Subject: Re: [PATCH V3 01/23] x86/ioreq: Prepare IOREQ feature for making it
 common
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-2-git-send-email-olekstysh@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <51a5c06f-e6ce-c651-2fd2-352aaa591fb1@suse.com>
Date: Mon, 7 Dec 2020 12:13:35 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <1606732298-22107-2-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -17,15 +17,15 @@
>   */
>  
>  #include <xen/ctype.h>
> +#include <xen/domain.h>
> +#include <xen/event.h>
>  #include <xen/init.h>
> +#include <xen/irq.h>
>  #include <xen/lib.h>
> -#include <xen/trace.h>
> +#include <xen/paging.h>
>  #include <xen/sched.h>
> -#include <xen/irq.h>
>  #include <xen/softirq.h>
> -#include <xen/domain.h>
> -#include <xen/event.h>
> -#include <xen/paging.h>
> +#include <xen/trace.h>
>  #include <xen/vpci.h>

Seeing this consolidation (thanks!), have you been able to figure
out what xen/ctype.h is needed for here? It looks to me as if it
could be dropped at the same time.

> @@ -601,7 +610,7 @@ static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s)
>      return rc;
>  }
>  
> -static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
> +void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
>  {
>      hvm_unmap_ioreq_gfn(s, true);
>      hvm_unmap_ioreq_gfn(s, false);

How is this now different from ...

> @@ -674,6 +683,12 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
>      return rc;
>  }
>  
> +void arch_ioreq_server_enable(struct hvm_ioreq_server *s)
> +{
> +    hvm_remove_ioreq_gfn(s, false);
> +    hvm_remove_ioreq_gfn(s, true);
> +}

... this? Imo if at all possible there should be no such duplication
(i.e. at least have this one simply call the earlier one).

> @@ -1080,6 +1105,24 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
>      return rc;
>  }
>  
> +/* Called with ioreq_server lock held */
> +int arch_ioreq_server_map_mem_type(struct domain *d,
> +                                   struct hvm_ioreq_server *s,
> +                                   uint32_t flags)
> +{
> +    int rc = p2m_set_ioreq_server(d, flags, s);
> +
> +    if ( rc == 0 && flags == 0 )
> +    {
> +        const struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +
> +        if ( read_atomic(&p2m->ioreq.entry_count) )
> +            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
> +    }
> +
> +    return rc;
> +}
> +
>  /*
>   * Map or unmap an ioreq server to specific memory type. For now, only
>   * HVMMEM_ioreq_server is supported, and in the future new types can be
> @@ -1112,19 +1155,11 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
>      if ( s->emulator != current->domain )
>          goto out;
>  
> -    rc = p2m_set_ioreq_server(d, flags, s);
> +    rc = arch_ioreq_server_map_mem_type(d, s, flags);
>  
>   out:
>      spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>  
> -    if ( rc == 0 && flags == 0 )
> -    {
> -        struct p2m_domain *p2m = p2m_get_hostp2m(d);
> -
> -        if ( read_atomic(&p2m->ioreq.entry_count) )
> -            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
> -    }
> -
>      return rc;
>  }

While you mention this change in the description, I'm still
missing justification as to why the change is safe to make. I
continue to think p2m_change_entry_type_global() would better
not be called inside the locked region, if at all possible.
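The locking concern raised here follows a common pattern: record, while holding the lock, whether follow-up work is needed, and perform the heavyweight call only after dropping the lock. A generic sketch of that pattern, with illustrative names rather than the actual Xen code:

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int entry_count = 1;	/* stand-in for p2m->ioreq.entry_count */

/* Stand-in for an expensive operation that should not run under 'lock'. */
static int change_entry_type_global(void)
{
	return 0;
}

static int map_mem_type(unsigned int flags)
{
	bool do_global_change;
	int rc = 0;

	pthread_mutex_lock(&lock);
	/* Under the lock, only note whether follow-up work is needed. */
	do_global_change = (rc == 0 && flags == 0 && entry_count != 0);
	pthread_mutex_unlock(&lock);

	/* The heavyweight call then happens outside the locked region. */
	if (do_global_change)
		rc = change_entry_type_global();

	return rc;
}
```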

> @@ -1239,33 +1279,28 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
>      spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>  }
>  
> -struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
> -                                                 ioreq_t *p)
> +int arch_ioreq_server_get_type_addr(const struct domain *d,
> +                                    const ioreq_t *p,
> +                                    uint8_t *type,
> +                                    uint64_t *addr)
>  {
> -    struct hvm_ioreq_server *s;
> -    uint32_t cf8;
> -    uint8_t type;
> -    uint64_t addr;
> -    unsigned int id;
> +    unsigned int cf8 = d->arch.hvm.pci_cf8;
>  
>      if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
> -        return NULL;
> -
> -    cf8 = d->arch.hvm.pci_cf8;
> +        return -EINVAL;

The caller cares about only a boolean. Either make the function
return bool, or (imo better, but others may like this less) have
it return "type" instead of using indirection, using e.g.
negative values to identify errors (which then could still be
errno ones if you wish).
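The suggested alternative calling convention, returning the type directly and reserving negative values for errors, would look roughly as follows. The signature and constants are hypothetical, shown only to illustrate the shape of the API:

```c
#include <errno.h>
#include <stdint.h>

#define IOREQ_TYPE_PIO   0
#define IOREQ_TYPE_COPY  1

/* Returns the ioreq type (>= 0) on success, or a negative errno value,
 * instead of passing the type back through a pointer argument. */
static int get_type_addr(unsigned int req_type, uint64_t req_addr,
			 uint64_t *addr)
{
	if (req_type != IOREQ_TYPE_COPY && req_type != IOREQ_TYPE_PIO)
		return -EINVAL;

	*addr = req_addr;
	return (int)req_type;
}
```

The caller can then test `if ( rc < 0 )` for failure and otherwise use the return value as the type, avoiding one output parameter.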

> --- a/xen/include/asm-x86/hvm/ioreq.h
> +++ b/xen/include/asm-x86/hvm/ioreq.h
> @@ -19,6 +19,25 @@
>  #ifndef __ASM_X86_HVM_IOREQ_H__
>  #define __ASM_X86_HVM_IOREQ_H__
>  
> +#define HANDLE_BUFIOREQ(s) \
> +    ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
> +
> +bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion);
> +int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s);
> +void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s);
> +void arch_ioreq_server_enable(struct hvm_ioreq_server *s);
> +void arch_ioreq_server_disable(struct hvm_ioreq_server *s);
> +void arch_ioreq_server_destroy(struct hvm_ioreq_server *s);
> +int arch_ioreq_server_map_mem_type(struct domain *d,
> +                                   struct hvm_ioreq_server *s,
> +                                   uint32_t flags);
> +bool arch_ioreq_server_destroy_all(struct domain *d);
> +int arch_ioreq_server_get_type_addr(const struct domain *d,
> +                                    const ioreq_t *p,
> +                                    uint8_t *type,
> +                                    uint64_t *addr);
> +void arch_ioreq_domain_init(struct domain *d);
> +
>  bool hvm_io_pending(struct vcpu *v);
>  bool handle_hvm_io_completion(struct vcpu *v);
>  bool is_ioreq_server_page(struct domain *d, const struct page_info *page);

What's the plan here? Introduce them into the x86 header just
to later move the entire block into the common one? Wouldn't
it make sense to introduce the common header here right away?
Or do you expect to convert some of the simpler ones to inline
functions later on?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 11:14:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 11:14:41 +0000
Subject: Re: [PATCH 5/8] gitlab-ci: Add KVM s390x cross-build jobs
To: =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>
Cc: Thomas Huth <thuth@redhat.com>, Peter Maydell <peter.maydell@linaro.org>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 kvm@vger.kernel.org, Paul Durrant <paul@xen.org>,
 Cornelia Huck <cohuck@redhat.com>, qemu-devel@nongnu.org,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 Marcelo Tosatti <mtosatti@redhat.com>, Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>, qemu-s390x@nongnu.org,
 Claudio Fontana <cfontana@suse.de>, Willian Rampazzo <wrampazz@redhat.com>,
 Huacai Chen <chenhc@lemote.com>, Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, Aurelien Jarno <aurelien@aurel32.net>,
 David Gibson <david@gibson.dropbear.id.au>
References: <20201206185508.3545711-1-philmd@redhat.com>
 <20201206185508.3545711-6-philmd@redhat.com>
 <66d4d0ab-2bb5-1284-b08a-43c6c30f30dc@redhat.com>
 <20201207102450.GG3102898@redhat.com>
 <9233fe7f-8d56-e1ad-b67e-40b3ce5fcabb@redhat.com>
 <20201207103430.GI3102898@redhat.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <3bb741d3-aaf7-8d73-1990-efc01e3e8977@redhat.com>
Date: Mon, 7 Dec 2020 12:14:33 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201207103430.GI3102898@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12/7/20 11:34 AM, Daniel P. Berrangé wrote:
> On Mon, Dec 07, 2020 at 11:26:58AM +0100, Philippe Mathieu-Daudé wrote:
>> On 12/7/20 11:25 AM, Daniel P. Berrangé wrote:
>>> On Mon, Dec 07, 2020 at 06:46:01AM +0100, Thomas Huth wrote:
>>>> On 06/12/2020 19.55, Philippe Mathieu-Daudé wrote:
>>>>> Cross-build s390x target with only KVM accelerator enabled.
>>>>>
>>>>> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>>>>> ---
>>>>>  .gitlab-ci.d/crossbuilds-kvm-s390x.yml | 6 ++++++
>>>>>  .gitlab-ci.yml                         | 1 +
>>>>>  MAINTAINERS                            | 1 +
>>>>>  3 files changed, 8 insertions(+)
>>>>>  create mode 100644 .gitlab-ci.d/crossbuilds-kvm-s390x.yml
>>>>>
>>>>> diff --git a/.gitlab-ci.d/crossbuilds-kvm-s390x.yml b/.gitlab-ci.d/crossbuilds-kvm-s390x.yml
>>>>> new file mode 100644
>>>>> index 00000000000..1731af62056
>>>>> --- /dev/null
>>>>> +++ b/.gitlab-ci.d/crossbuilds-kvm-s390x.yml
>>>>> @@ -0,0 +1,6 @@
>>>>> +cross-s390x-kvm:
>>>>> +  extends: .cross_accel_build_job
>>>>> +  variables:
>>>>> +    IMAGE: debian-s390x-cross
>>>>> +    TARGETS: s390x-softmmu
>>>>> +    ACCEL_CONFIGURE_OPTS: --disable-tcg
>>>>> diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
>>>>> index 573afceb3c7..a69619d7319 100644
>>>>> --- a/.gitlab-ci.yml
>>>>> +++ b/.gitlab-ci.yml
>>>>> @@ -14,6 +14,7 @@ include:
>>>>>    - local: '/.gitlab-ci.d/crossbuilds.yml'
>>>>>    - local: '/.gitlab-ci.d/crossbuilds-kvm-x86.yml'
>>>>>    - local: '/.gitlab-ci.d/crossbuilds-kvm-arm.yml'
>>>>> +  - local: '/.gitlab-ci.d/crossbuilds-kvm-s390x.yml'
>>>>
>>>> KVM code is already covered by the "cross-s390x-system" job, but an
>>>> additional compilation test with --disable-tcg makes sense here. I'd then
>>>> rather name it "cross-s390x-no-tcg" or so instead of "cross-s390x-kvm".
>>>>
>>>> And while you're at it, I'd maybe rather name the new file just
>>>> crossbuilds-s390x.yml and also move the other s390x related jobs into it?
>>>
>>> I don't think we really should split it up so much - just put these
>>> jobs in the existing crossbuilds.yml file.
>>
>> Don't we want to leverage MAINTAINERS file?
> 
> As mentioned in the cover letter, I think this is mis-using the MAINTAINERS
> file to try to represent something different.
> 
> The MAINTAINERS file says who is responsible for the contents of the .yml
> file, which is the CI maintainers, because we want a consistent gitlab
> configuration as a whole, not everyone doing their own thing.
> 
> MAINTAINERS doesn't say who is responsible for making sure the actual
> jobs that run are passing, which is potentially a completely different
> person. If we want to track that, the MAINTAINERS file is not the place.

Thanks, I was expecting subsystem maintainers to look after the
CI jobs, but you made it clear that CI should be overseen by a
different set of people. I understand it is better to have no
maintainer than an incorrect one.



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 11:19:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 11:19:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46258.82080 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEXn-0004oa-N9; Mon, 07 Dec 2020 11:19:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46258.82080; Mon, 07 Dec 2020 11:19:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEXn-0004oT-Jm; Mon, 07 Dec 2020 11:19:03 +0000
Received: by outflank-mailman (input) for mailman id 46258;
 Mon, 07 Dec 2020 11:19:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmEXm-0004oN-4N
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 11:19:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id acbb9e91-c40e-4b5e-a515-1d342ff0842a;
 Mon, 07 Dec 2020 11:19:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D0D6FAC9A;
 Mon,  7 Dec 2020 11:18:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: acbb9e91-c40e-4b5e-a515-1d342ff0842a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607339939; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vKOtYRaxp5uzQ4H36SeAPcpXGVYMTxCAX57B+h/k7Nc=;
	b=Gqb6qVt4Ga9/aaj85RZ6gaw/YuL2ttuIuKDUlqN8yh+scVmDZb5bIDL5wG52X9BBSLUWYF
	OUH+ZZQO7WQXwhh2KxPme92TeotcZcyyojwL+INsdljivwvxiKLdGa4FpwT+hbz2pFqdqI
	rX9YxiXe+4rj+d64/xJbK2lMZaI3f2Y=
Subject: Re: [PATCH V3 02/23] x86/ioreq: Add IOREQ_STATUS_* #define-s and
 update code for moving
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-3-git-send-email-olekstysh@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <78632ed3-e4d4-7b4c-fd0a-504b6b26e78a@suse.com>
Date: Mon, 7 Dec 2020 12:19:02 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <1606732298-22107-3-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
> --- a/xen/include/asm-x86/hvm/ioreq.h
> +++ b/xen/include/asm-x86/hvm/ioreq.h
> @@ -74,6 +74,10 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
>  
>  void hvm_ioreq_init(struct domain *d);
>  
> +#define IOREQ_STATUS_HANDLED     X86EMUL_OKAY
> +#define IOREQ_STATUS_UNHANDLED   X86EMUL_UNHANDLEABLE
> +#define IOREQ_STATUS_RETRY       X86EMUL_RETRY

This correlation may not be altered. I think a comment is needed
to this effect, to avoid someone trying to subsequently fold the
x86 and (to be introduced) Arm ones. With that
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 11:24:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 11:24:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46265.82091 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEcg-0005oe-9w; Mon, 07 Dec 2020 11:24:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46265.82091; Mon, 07 Dec 2020 11:24:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEcg-0005oX-6z; Mon, 07 Dec 2020 11:24:06 +0000
Received: by outflank-mailman (input) for mailman id 46265;
 Mon, 07 Dec 2020 11:24:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Pxd=FL=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kmEce-0005oS-MA
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 11:24:04 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id cf2a39eb-1a84-4fd6-bc8a-e6a09744fb3d;
 Mon, 07 Dec 2020 11:24:02 +0000 (UTC)
Received: from mail-wm1-f69.google.com (mail-wm1-f69.google.com
 [209.85.128.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-19-07nhtAX6OiqA2xk8r6b4_w-1; Mon, 07 Dec 2020 06:23:58 -0500
Received: by mail-wm1-f69.google.com with SMTP id r1so5237120wmn.8
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 03:23:58 -0800 (PST)
Received: from localhost.localdomain (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id 34sm14514869wrh.78.2020.12.07.03.23.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 07 Dec 2020 03:23:56 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cf2a39eb-1a84-4fd6-bc8a-e6a09744fb3d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607340242;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=KjLEJ26NwV7wRGptfySD4BLn63K7qvIHtT9dhMtihSI=;
	b=gkhud3T3SA5wng8wn+8iEfaMrh3mHGLoBUvYpEfh8z8x4mGaaeuawqcaIKVM5ZebRKy2jn
	JnCB1m1yQr70XCOK6vOzUuv1udJpgIVtMTZq7CM7MkJim1O9PhXbw3UC7st2PfPRs7edvJ
	fIImWTIJumaq8nCG3MAcHPuE7laRGrE=
X-MC-Unique: 07nhtAX6OiqA2xk8r6b4_w-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=auivzVy5y6QOjO5B6onFwW2/1Tqt9pcw8PvGSGN2N6I=;
        b=XJwWFPaObSimdkcEEaCbfZuCTyyuu5MJejInuyQMnnVcuYT7RXgPRrPd4TxOnUJm41
         K5KKV0+iyA3QeRCN7D/2rZGbzlB10dirTeVxbD3BD9aF6lKaCqgyg9lR2Il5MHh6Aksz
         Ej2jPr0zGmLitST3v4Qnm3HrVDwLgxJoj67Bb0X0QDyv2d66wbCVAxrL3Nh66Dp5RBYN
         W9tnFpVA5MpOPOZPTnbHTG8iFkRYtRmD391DGfCp2YO7xZcDSmOG1nQb98+fWpkBKZDD
         n3KP80KjK1sGs/hsoJcGUO2caJzLcy0Aqyq0E7eOGjvUCNxrXMvkgqPFG0Q4yfNaEYEQ
         8Uwg==
X-Gm-Message-State: AOAM531+PAUWgln9BhOyHPxXtdvoIcS8OL9dQkaURU+xkbQz4keAJgba
	UjUWoEH0PK0qVi61iKgHgwdVgYiLbai6QPgn/ST+aw3bJi1p/uWWQ3Sp7cZtWy8BHkQOJvmT5Qo
	frrxqWtqByKX/bNXqR+YzlFoSbnA=
X-Received: by 2002:a1c:750f:: with SMTP id o15mr17874286wmc.144.1607340237196;
        Mon, 07 Dec 2020 03:23:57 -0800 (PST)
X-Google-Smtp-Source: ABdhPJxtEVzeEC7jhHuC2uJxrowF2KeO/P6RnnWe7mlJ0qlo0yRhGBZE6N17mpPfK6BoXPRNgR263w==
X-Received: by 2002:a1c:750f:: with SMTP id o15mr17874250wmc.144.1607340236966;
        Mon, 07 Dec 2020 03:23:56 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Halil Pasic <pasic@linux.ibm.com>,
	Thomas Huth <thuth@redhat.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Claudio Fontana <cfontana@suse.de>,
	Willian Rampazzo <wrampazz@redhat.com>,
	qemu-s390x@nongnu.org,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Wainer dos Santos Moschetta <wainersm@redhat.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	kvm@vger.kernel.org,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 0/5] gitlab-ci: Add accelerator-specific Linux jobs
Date: Mon,  7 Dec 2020 12:23:48 +0100
Message-Id: <20201207112353.3814480-1-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

Since v1:
- Documented cross_accel_build_job template (Claudio)
- Only add a new job for s390x (Thomas)
- Do not add an entry to MAINTAINERS (Daniel)
- Document that the 'build-tcg-disabled' job is X86 + KVM
- Drop the patches with negative review feedback

Hi,

I was accustomed to using Travis-CI jobs to test KVM builds on
s390x/ppc.

During October Travis-CI became unusable for me (extremely slow,
see [1]). Then my free Travis account was moved to the new
"10K credit minutes allotment" [2], which I burned without reading
the notification email in time (I'd have burned them eventually
anyway).

Today Travis-CI is effectively useless to me. While I could pay to
run my QEMU jobs, I don't think it is fair for an Open Source project
to ask the owners of its forks to pay for a service.

As we want forks to run some CI before patches are contributed, and
we have cross-build Docker images available for Linux hosts, I
added some cross KVM/Xen build jobs to GitLab CI.

Cross-building doesn't have the same coverage as native building,
as we cannot run the tests, but it is still useful for catching
link failures.

Resulting pipeline:
https://gitlab.com/philmd/qemu/-/pipelines/225948077

Regards,

Phil.

[1] https://travis-ci.community/t/build-delays-for-open-source-project/10272
[2] https://blog.travis-ci.com/2020-11-02-travis-ci-new-billing

Philippe Mathieu-Daudé (5):
  gitlab-ci: Document 'build-tcg-disabled' is a KVM X86 job
  gitlab-ci: Replace YAML anchors by extends (cross_system_build_job)
  gitlab-ci: Introduce 'cross_accel_build_job' template
  gitlab-ci: Add KVM s390x cross-build jobs
  gitlab-ci: Add Xen cross-build jobs

 .gitlab-ci.d/crossbuilds.yml | 80 ++++++++++++++++++++++++++----------
 .gitlab-ci.yml               |  5 +++
 2 files changed, 64 insertions(+), 21 deletions(-)

-- 
2.26.2

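For reference, the accelerator build jobs this cover letter describes all follow one pattern. The sketch below combines the 'cross_accel_build_job' template (its body is inferred from the cross_system_build_job shown in patch 2/5, so treat it as an approximation, not the exact template from patch 3/5) with the s390x KVM job as posted in patch 5/8 of the v1 thread quoted earlier:

```yaml
# Inferred template (approximate): build only $TARGETS with a single
# accelerator, selected via $ACCEL_CONFIGURE_OPTS.
.cross_accel_build_job:
  stage: build
  image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
  script:
    - mkdir build
    - cd build
    - ../configure --enable-werror $ACCEL_CONFIGURE_OPTS
        --target-list="$TARGETS"
    - make -j$(expr $(nproc) + 1) all

# Job as posted in patch 5/8 of v1 (quoted earlier in this thread).
cross-s390x-kvm:
  extends: .cross_accel_build_job
  variables:
    IMAGE: debian-s390x-cross
    TARGETS: s390x-softmmu
    ACCEL_CONFIGURE_OPTS: --disable-tcg
```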



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 11:24:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 11:24:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46266.82104 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEcl-0005qm-MU; Mon, 07 Dec 2020 11:24:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46266.82104; Mon, 07 Dec 2020 11:24:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEcl-0005qf-Ja; Mon, 07 Dec 2020 11:24:11 +0000
Received: by outflank-mailman (input) for mailman id 46266;
 Mon, 07 Dec 2020 11:24:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Pxd=FL=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kmEcj-0005oS-Jx
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 11:24:09 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 45bbaa15-1bce-4220-9995-bfb0d29a3ca6;
 Mon, 07 Dec 2020 11:24:05 +0000 (UTC)
Received: from mail-wr1-f70.google.com (mail-wr1-f70.google.com
 [209.85.221.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-262-RUVXkkTgOOeyigoxdyCdaQ-1; Mon, 07 Dec 2020 06:24:03 -0500
Received: by mail-wr1-f70.google.com with SMTP id j5so2673264wro.12
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 03:24:03 -0800 (PST)
Received: from localhost.localdomain (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id k11sm13362266wmj.42.2020.12.07.03.24.00
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 07 Dec 2020 03:24:01 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 45bbaa15-1bce-4220-9995-bfb0d29a3ca6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607340245;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=U6QCRjxKvIwhYoV/41s2s2+6dI4wCbnqhxRPlqlth8U=;
	b=EfzQpQ/kqV49/D6sGFTAsOqsOLFZiRI2HKbkO+YuCnpL5UmPxJVmcpQyIpWrAOg2QMSpnF
	1/k90VajmPa2g2TPI5nIS809Km5vRJ5nnV5BwbaaBjyoWGJEq+73dWNZKyL9fWN6Z3RKGw
	dkU9z6fFhaV/DULnOwAUXui7lob1Ubg=
X-MC-Unique: RUVXkkTgOOeyigoxdyCdaQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=U6QCRjxKvIwhYoV/41s2s2+6dI4wCbnqhxRPlqlth8U=;
        b=kXpfBmBZ1l4yCLwL2/JgGNV4fi/Dz408TViGA+hLTx+QmIWBYVnxz+0r10ZA1x9L46
         3wFpymw38B1MC4rHUIrQvioYK7JBzUdUT6RLuRqdVbsldpALsRxB/dYYFVjKFWA/bVU9
         fE67PKq/cijKTJb1rIveBSDsbPmzwzUg0qb3V8Dou0gvuIMYoR4TLiq57K9urRflafLn
         hEH2B/L+yOO+skYtGaTUZ2WgcwDRg5K47/iqB90PIpbUSo1wqcLuS4B32iZE/wmxC1sn
         NeRMd7J283Wg4YCVcOI++qNb0NP+ZewOisgieXij9fHCiXRffG2aKt/nnF1+wBH38dIT
         cEdg==
X-Gm-Message-State: AOAM53253XMOitZJRDBENSjPFj9bUU940PdTMGd0Xi0olC7J+L4vA6wM
	b6VuIDmK9UJuah0VxBWcSMigrrFMubq/eeB42qKOBj4639zgUBllRJssqwf60hwGKVmBfeH5Tkt
	G9DdCtysNHSSFlwiDO1YiRHhWOU4=
X-Received: by 2002:a1c:4907:: with SMTP id w7mr10183699wma.175.1607340242504;
        Mon, 07 Dec 2020 03:24:02 -0800 (PST)
X-Google-Smtp-Source: ABdhPJyUOPOj91GL2D5bWSF19z1crpAKZX/6htMwNjDW7CLNGlfCM/ktiYiE4SKzGGKVP6p9Vp9lvg==
X-Received: by 2002:a1c:4907:: with SMTP id w7mr10183684wma.175.1607340242361;
        Mon, 07 Dec 2020 03:24:02 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Halil Pasic <pasic@linux.ibm.com>,
	Thomas Huth <thuth@redhat.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Claudio Fontana <cfontana@suse.de>,
	Willian Rampazzo <wrampazz@redhat.com>,
	qemu-s390x@nongnu.org,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Wainer dos Santos Moschetta <wainersm@redhat.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	kvm@vger.kernel.org,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 1/5] gitlab-ci: Document 'build-tcg-disabled' is a KVM X86 job
Date: Mon,  7 Dec 2020 12:23:49 +0100
Message-Id: <20201207112353.3814480-2-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201207112353.3814480-1-philmd@redhat.com>
References: <20201207112353.3814480-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Document what this job covers (building X86 targets with
KVM as the only accelerator available).

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 .gitlab-ci.yml | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index d0173e82b16..ee31b1020fe 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -220,6 +220,11 @@ build-disabled:
       s390x-softmmu i386-linux-user
     MAKE_CHECK_ARGS: check-qtest SPEED=slow
 
+# This job explicitly disables TCG (--disable-tcg); KVM is detected by
+# the configure script. The container doesn't provide the Xen headers,
+# so the Xen accelerator is not detected / selected. As a result the
+# job builds the i386-softmmu and x86_64-softmmu targets with KVM as
+# the only accelerator available.
 build-tcg-disabled:
   <<: *native_build_job_definition
   variables:
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 11:24:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 11:24:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46267.82116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEcq-0005uG-0W; Mon, 07 Dec 2020 11:24:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46267.82116; Mon, 07 Dec 2020 11:24:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEcp-0005u8-Sw; Mon, 07 Dec 2020 11:24:15 +0000
Received: by outflank-mailman (input) for mailman id 46267;
 Mon, 07 Dec 2020 11:24:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Pxd=FL=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kmEco-0005oS-J2
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 11:24:14 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 51e58bfd-e4f1-4d31-9b64-c5e4122bf826;
 Mon, 07 Dec 2020 11:24:10 +0000 (UTC)
Received: from mail-wm1-f69.google.com (mail-wm1-f69.google.com
 [209.85.128.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-333-43a5KNFjPiKCnQVaKx5IfQ-1; Mon, 07 Dec 2020 06:24:09 -0500
Received: by mail-wm1-f69.google.com with SMTP id a130so8911868wmf.0
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 03:24:08 -0800 (PST)
Received: from localhost.localdomain (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id m81sm13320148wmf.29.2020.12.07.03.24.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 07 Dec 2020 03:24:07 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 51e58bfd-e4f1-4d31-9b64-c5e4122bf826
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607340250;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qWT6ZPuFzyJ7mCL37uXEPVzuEPlsDzCAhcDUmllvAYo=;
	b=Y4k6VTzG9Lbv4K30ansu4EIhpmbDWkasotoChAJCX8eVb9e4AcBVXNkCYVoxEpmH36NnjV
	m3Omt51AmrtV31vtvduYVdrl/e6YrtjH4UtAcSSL503fw9lMw+msHfNVY8h57Mkbcy1itj
	Cc3CKaeefQv18/PMMbu27XRKoEWGL50=
X-MC-Unique: 43a5KNFjPiKCnQVaKx5IfQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=qWT6ZPuFzyJ7mCL37uXEPVzuEPlsDzCAhcDUmllvAYo=;
        b=nzkiglJUvYdtzDsJHkeZsP84BSbPstpQpnw0CDsrWctB0m8UN0xjStUS3PxVSHl2uT
         A8goUny20r/u1IW8QBeSkAa7lxxNYazKPs4AYBciTs+VsT5zmsMnRnm+FiuHzQN0iCcn
         n8EorjQBaHeUIvG5Asn6baFKe47rzensv0vzFuHbfqdhlLmf0xpW9qOpoCIvTuK6dfXE
         PVKFMnADXzxsG3jYr0WXMQWZfv1qq+aj1qbtdkdGftl2EkZY7Mn6EOP5o/R5CEaI0Opk
         cec4oFP200RZmC7wb1pwVm0yp64OvV4Hp4xLE3d21me1rkxAL4EG0SnjfUFclu7oIbLH
         GNoA==
X-Gm-Message-State: AOAM532TE14OMijgXRoCwauXjpKTmfT6+DMaunl4dmFMq86FS7xL5Pnh
	UbRHbZ1S4FYd4rvNgYun8QcycCm08EjwQddHnJc/YE9owBsIWvKlyFRVxMnImsd5b/xKJXu9ba3
	Wsuzu7spHT5VCgBmaisTveb76nKc=
X-Received: by 2002:a1c:5585:: with SMTP id j127mr18475753wmb.169.1607340247873;
        Mon, 07 Dec 2020 03:24:07 -0800 (PST)
X-Google-Smtp-Source: ABdhPJyxIRAJGBFvdVDfI+WpRVMoiiAV8K3/VaE7nYgOrcybS4c3NMHXNtf5dyFYLhSziHnu/8RZlw==
X-Received: by 2002:a1c:5585:: with SMTP id j127mr18475737wmb.169.1607340247750;
        Mon, 07 Dec 2020 03:24:07 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Halil Pasic <pasic@linux.ibm.com>,
	Thomas Huth <thuth@redhat.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Claudio Fontana <cfontana@suse.de>,
	Willian Rampazzo <wrampazz@redhat.com>,
	qemu-s390x@nongnu.org,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Wainer dos Santos Moschetta <wainersm@redhat.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	kvm@vger.kernel.org,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 2/5] gitlab-ci: Replace YAML anchors by extends (cross_system_build_job)
Date: Mon,  7 Dec 2020 12:23:50 +0100
Message-Id: <20201207112353.3814480-3-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201207112353.3814480-1-philmd@redhat.com>
References: <20201207112353.3814480-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

'extends' is an alternative to using YAML anchors
and is a little more flexible and readable. See:
https://docs.gitlab.com/ee/ci/yaml/#extends

More importantly, it allows splitting job definitions across
included files, which plain YAML anchors cannot do.

Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 .gitlab-ci.d/crossbuilds.yml | 40 ++++++++++++++++++------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
index 03ebfabb3fa..099949aaef3 100644
--- a/.gitlab-ci.d/crossbuilds.yml
+++ b/.gitlab-ci.d/crossbuilds.yml
@@ -1,5 +1,5 @@
 
-.cross_system_build_job_template: &cross_system_build_job_definition
+.cross_system_build_job:
   stage: build
   image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
   timeout: 80m
@@ -13,7 +13,7 @@
           xtensa-softmmu"
     - make -j$(expr $(nproc) + 1) all check-build
 
-.cross_user_build_job_template: &cross_user_build_job_definition
+.cross_user_build_job:
   stage: build
   image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
   script:
@@ -24,91 +24,91 @@
     - make -j$(expr $(nproc) + 1) all check-build
 
 cross-armel-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: debian-armel-cross
 
 cross-armel-user:
-  <<: *cross_user_build_job_definition
+  extends: .cross_user_build_job
   variables:
     IMAGE: debian-armel-cross
 
 cross-armhf-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: debian-armhf-cross
 
 cross-armhf-user:
-  <<: *cross_user_build_job_definition
+  extends: .cross_user_build_job
   variables:
     IMAGE: debian-armhf-cross
 
 cross-arm64-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: debian-arm64-cross
 
 cross-arm64-user:
-  <<: *cross_user_build_job_definition
+  extends: .cross_user_build_job
   variables:
     IMAGE: debian-arm64-cross
 
 cross-mips-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: debian-mips-cross
 
 cross-mips-user:
-  <<: *cross_user_build_job_definition
+  extends: .cross_user_build_job
   variables:
     IMAGE: debian-mips-cross
 
 cross-mipsel-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: debian-mipsel-cross
 
 cross-mipsel-user:
-  <<: *cross_user_build_job_definition
+  extends: .cross_user_build_job
   variables:
     IMAGE: debian-mipsel-cross
 
 cross-mips64el-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: debian-mips64el-cross
 
 cross-mips64el-user:
-  <<: *cross_user_build_job_definition
+  extends: .cross_user_build_job
   variables:
     IMAGE: debian-mips64el-cross
 
 cross-ppc64el-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: debian-ppc64el-cross
 
 cross-ppc64el-user:
-  <<: *cross_user_build_job_definition
+  extends: .cross_user_build_job
   variables:
     IMAGE: debian-ppc64el-cross
 
 cross-s390x-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: debian-s390x-cross
 
 cross-s390x-user:
-  <<: *cross_user_build_job_definition
+  extends: .cross_user_build_job
   variables:
     IMAGE: debian-s390x-cross
 
 cross-win32-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: fedora-win32-cross
 
 cross-win64-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: fedora-win64-cross
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 11:24:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 11:24:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46268.82128 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEcv-0005zI-Bo; Mon, 07 Dec 2020 11:24:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46268.82128; Mon, 07 Dec 2020 11:24:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEcv-0005zB-6i; Mon, 07 Dec 2020 11:24:21 +0000
Received: by outflank-mailman (input) for mailman id 46268;
 Mon, 07 Dec 2020 11:24:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Pxd=FL=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kmEct-0005oS-JE
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 11:24:19 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id a50f0ea1-a04e-47c8-ad9c-93dc79228f2f;
 Mon, 07 Dec 2020 11:24:16 +0000 (UTC)
Received: from mail-wm1-f71.google.com (mail-wm1-f71.google.com
 [209.85.128.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-517-Ry8Whu46NJif-OwNPCDUhQ-1; Mon, 07 Dec 2020 06:24:14 -0500
Received: by mail-wm1-f71.google.com with SMTP id v5so5253269wmj.0
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 03:24:14 -0800 (PST)
Received: from localhost.localdomain (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id h15sm9685217wru.4.2020.12.07.03.24.11
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 07 Dec 2020 03:24:12 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a50f0ea1-a04e-47c8-ad9c-93dc79228f2f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607340256;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=g8YbOmB7dREvSr94Ah+ZfiHf5NH/XpXwyjPJvESlyyU=;
	b=Ys9lvdx6okvnYPvX2CLGyGO4uqbxWwdcKZ5cmIBPcmU+TCiblaFT65V/FGHBnInSAVSWqC
	I1DNMjyqf6Yu8kQd4Aulr+Uzn01B6pJ3DxoKw5cubBkfc1DK3iAKZP2Xr/rbzRFfVFAONJ
	Mah/fXjxdTo8VYfCT88i5AokEvo2Hb4=
X-MC-Unique: Ry8Whu46NJif-OwNPCDUhQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=g8YbOmB7dREvSr94Ah+ZfiHf5NH/XpXwyjPJvESlyyU=;
        b=sprac8N4ZK51ObGwUhlvURbCddF+WKp4EpHT0HBNBAVQRXYHRslGDRJHYswdvJTib2
         HAHZNIKZhyMU0FMiUaf3D/WILsvLIt5FlUdHsTMgw84/FWuzPeqllzaaX8iEcYDXmurM
         g9zYlz5lRePMYB1wkqgrsc1eCi0aRtIw5Al+YPImOgoFaEhh9Ay2t9mqVTdCnjAiWOa4
         A6bsNGD2TayYVqk9MDbehppjZJfKXr0t3BAcjhr4F6Wgka5mHXFjLPVatF/WTXQU1Oia
         HgECTF1n+FSjzT4edydH9SaV5li1GvBgLgz4U8ZIMJItEmuVmjQdaKMYp/3bxwhA6I4v
         Ph0A==
X-Gm-Message-State: AOAM532f+S7cpobr/Q1GNI+Hz/AXqQZ/imcdrZtVxqodMY6YfkSGMJfK
	FMRqu46WU16qKKiu92q4W7P2afX1f7myI9u9SSHOvF8D/48Hip4RcYEpvQwOipXHF1tXZnOI84q
	W2Oj52GnuhLp4mBTBNFSIetzmn9g=
X-Received: by 2002:a5d:6805:: with SMTP id w5mr19619040wru.266.1607340253257;
        Mon, 07 Dec 2020 03:24:13 -0800 (PST)
X-Google-Smtp-Source: ABdhPJxRhB+8o3YIMMi0wDgCqDhu5Xr6LYssbvv4B7yWBshnzqnJb7A4bMr+1Oelyf2ZqMu7nwNEpg==
X-Received: by 2002:a5d:6805:: with SMTP id w5mr19619019wru.266.1607340253057;
        Mon, 07 Dec 2020 03:24:13 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Halil Pasic <pasic@linux.ibm.com>,
	Thomas Huth <thuth@redhat.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Claudio Fontana <cfontana@suse.de>,
	Willian Rampazzo <wrampazz@redhat.com>,
	qemu-s390x@nongnu.org,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Wainer dos Santos Moschetta <wainersm@redhat.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	kvm@vger.kernel.org,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 3/5] gitlab-ci: Introduce 'cross_accel_build_job' template
Date: Mon,  7 Dec 2020 12:23:51 +0100
Message-Id: <20201207112353.3814480-4-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201207112353.3814480-1-philmd@redhat.com>
References: <20201207112353.3814480-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Introduce a job template to cross-build accelerator specific
jobs (enable a specific accelerator, disabling the others).

The specific accelerator is selected via the $ACCEL environment
variable (defaulting to KVM).

Extra options, such as disabling other accelerators, are passed
via the $ACCEL_CONFIGURE_OPTS environment variable.
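For illustration (not part of the patch itself), the template's `--enable-${ACCEL:-kvm}` expansion selects the accelerator like this shell sketch:

```shell
# Sketch of how ${ACCEL:-kvm} picks the configure flag.
# When ACCEL is unset, the flag defaults to --enable-kvm.
unset ACCEL
flag="--enable-${ACCEL:-kvm}"
echo "$flag"    # --enable-kvm

# When a job sets ACCEL (e.g. in its variables: block), that value wins.
ACCEL=xen
flag="--enable-${ACCEL:-kvm}"
echo "$flag"    # --enable-xen
```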

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 .gitlab-ci.d/crossbuilds.yml | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
index 099949aaef3..d8685ade376 100644
--- a/.gitlab-ci.d/crossbuilds.yml
+++ b/.gitlab-ci.d/crossbuilds.yml
@@ -13,6 +13,23 @@
           xtensa-softmmu"
     - make -j$(expr $(nproc) + 1) all check-build
 
+# Job to cross-build specific accelerators.
+#
+# Set the $ACCEL variable to select the specific accelerator (defaults to
+# KVM), and set extra options (such as disabling other accelerators) via the
+# $ACCEL_CONFIGURE_OPTS variable.
+.cross_accel_build_job:
+  stage: build
+  image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
+  timeout: 30m
+  script:
+    - mkdir build
+    - cd build
+    - PKG_CONFIG_PATH=$PKG_CONFIG_PATH
+      ../configure --enable-werror $QEMU_CONFIGURE_OPTS --disable-tools
+        --enable-${ACCEL:-kvm} --target-list="$TARGETS" $ACCEL_CONFIGURE_OPTS
+    - make -j$(expr $(nproc) + 1) all check-build
+
 .cross_user_build_job:
   stage: build
   image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 11:24:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 11:24:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46271.82140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEcz-00064o-LC; Mon, 07 Dec 2020 11:24:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46271.82140; Mon, 07 Dec 2020 11:24:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEcz-00064d-GT; Mon, 07 Dec 2020 11:24:25 +0000
Received: by outflank-mailman (input) for mailman id 46271;
 Mon, 07 Dec 2020 11:24:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Pxd=FL=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kmEcy-00063G-2S
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 11:24:24 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 19013f29-cbe6-4e0a-a20f-9bf3558a909f;
 Mon, 07 Dec 2020 11:24:23 +0000 (UTC)
Received: from mail-wm1-f71.google.com (mail-wm1-f71.google.com
 [209.85.128.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-504-yvgGKGDEN7S1A3FygGVl0A-1; Mon, 07 Dec 2020 06:24:19 -0500
Received: by mail-wm1-f71.google.com with SMTP id h68so5246059wme.5
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 03:24:19 -0800 (PST)
Received: from localhost.localdomain (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id e16sm4243619wra.94.2020.12.07.03.24.16
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 07 Dec 2020 03:24:17 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 19013f29-cbe6-4e0a-a20f-9bf3558a909f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607340263;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=MSG4LDxzGyW4ZC1IxX3fWcY4XCf9aUhCu+WzuAKC2lo=;
	b=VmRjGieL6BLT+YxmQ3uHu3d1jVIP0E89ZI3smZ+HHyXvy3NGHxM11k/8buwE0r7qdtCxwy
	hahiX81aRo3vBfCM7ZddmqWXhLJ7LcKqXP1nLCxGhAvSV8yQzi2QAruFIw6DmzlaBT1vEm
	ucOat+CQ18N5NRSAHzlKcLp7InCaND4=
X-MC-Unique: yvgGKGDEN7S1A3FygGVl0A-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=MSG4LDxzGyW4ZC1IxX3fWcY4XCf9aUhCu+WzuAKC2lo=;
        b=sQbneVg7P1I/5u903Aw5aVtZfwk1zYhT1vAftZ5r4ZHux8REM4wcxDi3f9LvJq8BmM
         8ZdyxUWb0o01acf4cDnBy9W8MzHBKp7CbVdxMU1hBI+aHt5JHe8W5rEgoF8SZIu5Swvw
         JdS0gsHVKTaA+ADWFQDvNfgX+7TU1wnaQFxpHfe5a/F2DWKmfVXvSm7y9rvX30/Bm1ks
         AJ8OvjnzcgTgXgLXRbL2TpYZxlwQ01LSjuP1KKyvoI+nrnE2DAbLESWPKsycW06hZdOO
         f/EfVzmcO30tkHqQDtR6PNJPwk7xDBY9990W8Z/02CIS2A+oI8aWXIMT1b9+fbqv31TH
         0CaQ==
X-Gm-Message-State: AOAM530cUId3HjE+LGFN1guE3TDDR3EtnTcDwY5cDvWcXwibo8HJWXrb
	xJnGIQRqehj9JgyM+fXCe/MkIm7mlei3C0yfVAngZDOHjDrGoFClRst4M2h9/wn1zr+TFThfojo
	tFtZiN3SuxVZJ4rZpwXLw4gUtfww=
X-Received: by 2002:adf:e444:: with SMTP id t4mr19219659wrm.152.1607340258415;
        Mon, 07 Dec 2020 03:24:18 -0800 (PST)
X-Google-Smtp-Source: ABdhPJyoNIgHdpMpGp2bUcwaH+FdRRx2uvKYslTLHSrs8jOZKgCeWvKJMTuuRmK450fuWJjSTjvKmw==
X-Received: by 2002:adf:e444:: with SMTP id t4mr19219640wrm.152.1607340258267;
        Mon, 07 Dec 2020 03:24:18 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Halil Pasic <pasic@linux.ibm.com>,
	Thomas Huth <thuth@redhat.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Claudio Fontana <cfontana@suse.de>,
	Willian Rampazzo <wrampazz@redhat.com>,
	qemu-s390x@nongnu.org,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Wainer dos Santos Moschetta <wainersm@redhat.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	kvm@vger.kernel.org,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 4/5] gitlab-ci: Add KVM s390x cross-build jobs
Date: Mon,  7 Dec 2020 12:23:52 +0100
Message-Id: <20201207112353.3814480-5-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201207112353.3814480-1-philmd@redhat.com>
References: <20201207112353.3814480-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Cross-build the s390x target with only the KVM accelerator enabled.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 .gitlab-ci.d/crossbuilds.yml | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
index d8685ade376..7a94a66b4b3 100644
--- a/.gitlab-ci.d/crossbuilds.yml
+++ b/.gitlab-ci.d/crossbuilds.yml
@@ -1,4 +1,3 @@
-
 .cross_system_build_job:
   stage: build
   image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
@@ -120,6 +119,13 @@ cross-s390x-user:
   variables:
     IMAGE: debian-s390x-cross
 
+cross-s390x-kvm:
+  extends: .cross_accel_build_job
+  variables:
+    IMAGE: debian-s390x-cross
+    TARGETS: s390x-softmmu
+    ACCEL_CONFIGURE_OPTS: --disable-tcg
+
 cross-win32-system:
   extends: .cross_system_build_job
   variables:
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 11:24:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 11:24:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46277.82152 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEd5-0006B5-0E; Mon, 07 Dec 2020 11:24:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46277.82152; Mon, 07 Dec 2020 11:24:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEd4-0006At-RX; Mon, 07 Dec 2020 11:24:30 +0000
Received: by outflank-mailman (input) for mailman id 46277;
 Mon, 07 Dec 2020 11:24:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Pxd=FL=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kmEd3-0005oS-JV
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 11:24:29 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id e2d823ee-3c71-4da3-a7b3-9cdf9c83556e;
 Mon, 07 Dec 2020 11:24:26 +0000 (UTC)
Received: from mail-wm1-f71.google.com (mail-wm1-f71.google.com
 [209.85.128.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-154-_YrgJOFaN8GC0G1rT2DI5g-1; Mon, 07 Dec 2020 06:24:24 -0500
Received: by mail-wm1-f71.google.com with SMTP id b184so4036794wmh.6
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 03:24:24 -0800 (PST)
Received: from localhost.localdomain (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id m4sm8219391wrw.16.2020.12.07.03.24.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 07 Dec 2020 03:24:22 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e2d823ee-3c71-4da3-a7b3-9cdf9c83556e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607340266;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=SEo8ObuiWti6ikwYfSkzOi/pKrNqwxWSbgMNYO8oMYs=;
	b=Px2khBKQl+ncOkHrje3ZxzsjkwomljsUShZRcq9Fo1SQfN7EgwSfUwHnY5OumVqUrfDRM9
	B5g+BklnwkfRvVYvbUww84w949hzAPXAVxjAQHcER1VYEF1jC0YqhxnE7jP3fQ970ji3XF
	+YouDcRH6BGztwqRc6l+b/IwLfMZeL0=
X-MC-Unique: _YrgJOFaN8GC0G1rT2DI5g-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=SEo8ObuiWti6ikwYfSkzOi/pKrNqwxWSbgMNYO8oMYs=;
        b=irZKeEWh9a98FX+jRIhaZ4ERrwQJUoxKKR7w+t+4Lq/B9iKw6EMjshghl7qkWARkF4
         4TppZwQ22QksKQZqXScc0akVQbUmCwy46G03IPS9AyLoANcO43kDtJBZA+/B7p7bRx/g
         4uVvC2dIjkE86XK+u04oK/SwKaRTRgG66+PA+qLCKH4gO1BAeMCKk9UoY5ZdUoP9OZI3
         ko2r/vM3Ckc85RZtazbD1613wxh8JaICyHjX2GiGIN73ryAFpeGkFLfQYwacozRuPAhL
         xqt+CsjOqP575dxMAhUbHzEY5Ecq2hDFd+XdWsIoaAxjsbommM4drbuZax6gVn8HPpFH
         uhRg==
X-Gm-Message-State: AOAM532WT8QE+lobr3/BxKsOjFl4mI6ndChPFvqpufjA85Axr0n7Elv7
	qgR1yOK6bv8NbRQKs/lS3yV2R2tEWMpDOBY08s5u3XIxc/7aKlnUlu19BS5bRFZpRwYpM7TnwUb
	D+Dz3+rpBJADQiuCm+/O8odg5R7Q=
X-Received: by 2002:a7b:c1d7:: with SMTP id a23mr17761614wmj.62.1607340263562;
        Mon, 07 Dec 2020 03:24:23 -0800 (PST)
X-Google-Smtp-Source: ABdhPJzDr6itnK8hiEddAxr/Psf+fnJpJGiFVhvH4Vq5j5+wSxG6IJIVGf7oSHkC7/zpJDDY93Vnkg==
X-Received: by 2002:a7b:c1d7:: with SMTP id a23mr17761592wmj.62.1607340263423;
        Mon, 07 Dec 2020 03:24:23 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Halil Pasic <pasic@linux.ibm.com>,
	Thomas Huth <thuth@redhat.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Claudio Fontana <cfontana@suse.de>,
	Willian Rampazzo <wrampazz@redhat.com>,
	qemu-s390x@nongnu.org,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Wainer dos Santos Moschetta <wainersm@redhat.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	kvm@vger.kernel.org,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 5/5] gitlab-ci: Add Xen cross-build jobs
Date: Mon,  7 Dec 2020 12:23:53 +0100
Message-Id: <20201207112353.3814480-6-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201207112353.3814480-1-philmd@redhat.com>
References: <20201207112353.3814480-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Cross-build the ARM and x86 targets with only the Xen accelerator enabled.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 .gitlab-ci.d/crossbuilds.yml | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
index 7a94a66b4b3..31f10f1e145 100644
--- a/.gitlab-ci.d/crossbuilds.yml
+++ b/.gitlab-ci.d/crossbuilds.yml
@@ -135,3 +135,18 @@ cross-win64-system:
   extends: .cross_system_build_job
   variables:
     IMAGE: fedora-win64-cross
+
+cross-amd64-xen:
+  extends: .cross_accel_build_job
+  variables:
+    IMAGE: debian-amd64-cross
+    ACCEL: xen
+    TARGETS: i386-softmmu,x86_64-softmmu
+    ACCEL_CONFIGURE_OPTS: --disable-tcg --disable-kvm
+
+cross-arm64-xen:
+  extends: .cross_accel_build_job
+  variables:
+    IMAGE: debian-arm64-cross
+    ACCEL: xen
+    TARGETS: aarch64-softmmu
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 11:26:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 11:26:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46294.82164 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEel-0006WV-LK; Mon, 07 Dec 2020 11:26:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46294.82164; Mon, 07 Dec 2020 11:26:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEel-0006WO-Hi; Mon, 07 Dec 2020 11:26:15 +0000
Received: by outflank-mailman (input) for mailman id 46294;
 Mon, 07 Dec 2020 11:26:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Pxd=FL=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kmEej-0006WF-Vz
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 11:26:14 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 4651d31b-360f-45e2-84e5-ad7c3e5b3496;
 Mon, 07 Dec 2020 11:26:12 +0000 (UTC)
Received: from mail-wm1-f70.google.com (mail-wm1-f70.google.com
 [209.85.128.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-471-DjEn5cfAOISj1qsmacCfkA-1; Mon, 07 Dec 2020 06:26:10 -0500
Received: by mail-wm1-f70.google.com with SMTP id u123so4022587wmu.5
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 03:26:10 -0800 (PST)
Received: from [192.168.1.36] (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id s4sm14921932wra.91.2020.12.07.03.26.06
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 07 Dec 2020 03:26:07 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4651d31b-360f-45e2-84e5-ad7c3e5b3496
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607340372;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=E8SnAm7lBjFJY33tSzEd+s5+QBCNZ6+uVzsTHaYTIOw=;
	b=YKL1nPQ9xqIt5sftPOA/a5eihZDdOLy/ceA/ASnm7PXtVyUlWFwaT+KHLGAMlinrB/sZY+
	1pyEJoHNy4mRoxskP8aYLwHi32bJHsYKJdwXVo7IQ34A+F8vyOVyL6xWzjOa/oopmLfQkx
	W1keZZE94yhKzkOfRA1Km0o+N69u3n4=
X-MC-Unique: DjEn5cfAOISj1qsmacCfkA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:from:to:cc:references:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=E8SnAm7lBjFJY33tSzEd+s5+QBCNZ6+uVzsTHaYTIOw=;
        b=iY0uY+/c9PaxzcQBUziRZuxnX0/81Bi3z6X3GHAUfeqRHSNEjrNzJwHftDYFPI+BDA
         mxPgfYNHDYyS/yEIXIhODMp8sQvdqE9CHJ9vXL/nN477DWxZZYa3AV5RcCcPvuG5J49v
         OwBuZCOcwKK+7Bw8773UMGIaMvwIrUIjRVLOkkJxX8o7nJLzmXqKeyECyi76NxnOHo12
         2KsnnMpaNoSazjrkdAFDI6AKZafGZS6fwLJzMmcOAu5e8ilZTs2Vo8NoNOkNrLPhWegS
         UcDBZICJz/Vhe7gQI0Yo3Pf2+KA++zWuzCkwA/FRDR8QYcDy/q1If+CsvgkCwpaTh4i3
         CWBw==
X-Gm-Message-State: AOAM531ao18fKxxJKxbRyBbcaT6OzQwyrd4lGuxfGxyiq9ADGyoPkOEA
	8Iv2owSUR1PxCXJGEcgUDBuMZD4jDIyHOkECwUA0Kadq7RJzs9xi3tcvtNzCel9YLe2p5acm5Ps
	mSMSBbev/NhCTVHlEG4cuV/YQgsI=
X-Received: by 2002:a5d:4c4e:: with SMTP id n14mr9412263wrt.209.1607340368499;
        Mon, 07 Dec 2020 03:26:08 -0800 (PST)
X-Google-Smtp-Source: ABdhPJwcLG0ly5fRNXTnumnXcvmnqbciuOeraAOT2K0u9vBpVFo7cpgUmtymDQM92D6CgbpOtloTWw==
X-Received: by 2002:a5d:4c4e:: with SMTP id n14mr9412242wrt.209.1607340368301;
        Mon, 07 Dec 2020 03:26:08 -0800 (PST)
Subject: Re: [PATCH 5/8] gitlab-ci: Add KVM s390x cross-build jobs
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
To: =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>
Cc: Thomas Huth <thuth@redhat.com>, Peter Maydell <peter.maydell@linaro.org>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 kvm@vger.kernel.org, Paul Durrant <paul@xen.org>,
 Cornelia Huck <cohuck@redhat.com>, qemu-devel@nongnu.org,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 Marcelo Tosatti <mtosatti@redhat.com>, Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>, qemu-s390x@nongnu.org,
 Claudio Fontana <cfontana@suse.de>, Willian Rampazzo <wrampazz@redhat.com>,
 Huacai Chen <chenhc@lemote.com>, Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, Aurelien Jarno <aurelien@aurel32.net>,
 David Gibson <david@gibson.dropbear.id.au>
References: <20201206185508.3545711-1-philmd@redhat.com>
 <20201206185508.3545711-6-philmd@redhat.com>
 <66d4d0ab-2bb5-1284-b08a-43c6c30f30dc@redhat.com>
 <20201207102450.GG3102898@redhat.com>
 <9233fe7f-8d56-e1ad-b67e-40b3ce5fcabb@redhat.com>
 <20201207103430.GI3102898@redhat.com>
 <3bb741d3-aaf7-8d73-1990-efc01e3e8977@redhat.com>
Message-ID: <6c7ef8e8-f2ab-9388-0058-4740bdcfd69a@redhat.com>
Date: Mon, 7 Dec 2020 12:26:06 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <3bb741d3-aaf7-8d73-1990-efc01e3e8977@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12/7/20 12:14 PM, Philippe Mathieu-Daudé wrote:
> On 12/7/20 11:34 AM, Daniel P. Berrangé wrote:
>> On Mon, Dec 07, 2020 at 11:26:58AM +0100, Philippe Mathieu-Daudé wrote:
>>> On 12/7/20 11:25 AM, Daniel P. Berrangé wrote:
>>>> On Mon, Dec 07, 2020 at 06:46:01AM +0100, Thomas Huth wrote:
>>>>> On 06/12/2020 19.55, Philippe Mathieu-Daudé wrote:
>>>>>> Cross-build s390x target with only KVM accelerator enabled.
>>>>>>
>>>>>> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>>>>>> ---
>>>>>>  .gitlab-ci.d/crossbuilds-kvm-s390x.yml | 6 ++++++
>>>>>>  .gitlab-ci.yml                         | 1 +
>>>>>>  MAINTAINERS                            | 1 +
>>>>>>  3 files changed, 8 insertions(+)
>>>>>>  create mode 100644 .gitlab-ci.d/crossbuilds-kvm-s390x.yml
>>>>>>
>>>>>> diff --git a/.gitlab-ci.d/crossbuilds-kvm-s390x.yml b/.gitlab-ci.d/crossbuilds-kvm-s390x.yml
>>>>>> new file mode 100644
>>>>>> index 00000000000..1731af62056
>>>>>> --- /dev/null
>>>>>> +++ b/.gitlab-ci.d/crossbuilds-kvm-s390x.yml
>>>>>> @@ -0,0 +1,6 @@
>>>>>> +cross-s390x-kvm:
>>>>>> +  extends: .cross_accel_build_job
>>>>>> +  variables:
>>>>>> +    IMAGE: debian-s390x-cross
>>>>>> +    TARGETS: s390x-softmmu
>>>>>> +    ACCEL_CONFIGURE_OPTS: --disable-tcg
>>>>>> diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
>>>>>> index 573afceb3c7..a69619d7319 100644
>>>>>> --- a/.gitlab-ci.yml
>>>>>> +++ b/.gitlab-ci.yml
>>>>>> @@ -14,6 +14,7 @@ include:
>>>>>>    - local: '/.gitlab-ci.d/crossbuilds.yml'
>>>>>>    - local: '/.gitlab-ci.d/crossbuilds-kvm-x86.yml'
>>>>>>    - local: '/.gitlab-ci.d/crossbuilds-kvm-arm.yml'
>>>>>> +  - local: '/.gitlab-ci.d/crossbuilds-kvm-s390x.yml'
>>>>>
>>>>> KVM code is already covered by the "cross-s390x-system" job, but an
>>>>> additional compilation test with --disable-tcg makes sense here. I'd then
>>>>> rather name it "cross-s390x-no-tcg" or so instead of "cross-s390x-kvm".
>>>>>
>>>>> And while you're at it, I'd maybe rather name the new file just
>>>>> crossbuilds-s390x.yml and also move the other s390x related jobs into it?
>>>>
>>>> I don't think we really should split it up so much - just put these
>>>> jobs in the existing crosbuilds.yml file.
>>>
>>> Don't we want to leverage MAINTAINERS file?
>>
>> As mentioned in the cover letter, I think this is mis-using the MAINTAINERS
>> file to try to represent something different.
>>
>> The MAINTAINERS file says who is responsible for the contents of the .yml
>> file, which is the CI maintainers, because we want a consistent gitlab
>> configuration as a whole, not everyone doing their own thing.
>>
>> MAINTAINERS doesn't say who is responsible for making sure the actual
>> jobs that run are passing, which is potentially a completely different
>> person. If we want to track that, it is not the MAINTAINERS file.
> 
> Thanks, I was expecting subsystem maintainers to look after the
> CI jobs, but you made it clear that the people who look after CI
> may be different persons entirely. I understand it is better to
> have no maintainer than an incorrect one.

MAINTAINERS and scripts/get_maintainer.pl don't scale well with
YAML / JSON... While these files are maintained by the GitLab
subsystem maintainers, how can we add job-specific reviewers?
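
The job rename suggested earlier in the thread would look roughly like
this (a sketch only -- the final job name and file placement were still
under discussion at this point):

```yaml
# .gitlab-ci.d/crossbuilds-s390x.yml (hypothetical file name)
cross-s390x-no-tcg:
  extends: .cross_accel_build_job
  variables:
    IMAGE: debian-s390x-cross
    TARGETS: s390x-softmmu
    ACCEL_CONFIGURE_OPTS: --disable-tcg
```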



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 11:27:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 11:27:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46305.82176 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEfw-0006gI-W4; Mon, 07 Dec 2020 11:27:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46305.82176; Mon, 07 Dec 2020 11:27:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEfw-0006gB-Sl; Mon, 07 Dec 2020 11:27:28 +0000
Received: by outflank-mailman (input) for mailman id 46305;
 Mon, 07 Dec 2020 11:27:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmEfv-0006g5-4F
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 11:27:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4694b318-a931-48ba-824c-6c756e32d884;
 Mon, 07 Dec 2020 11:27:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 62C4DAD1E;
 Mon,  7 Dec 2020 11:27:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4694b318-a931-48ba-824c-6c756e32d884
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607340445; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hZw5fcK9gaw18iUcT69z6KP29lN5Uo1I+aCuJSd5Cvs=;
	b=HTOA0sssIbBQBlXSiwlgIFhH4qg6A0OKoXS/RfdPD0TSpin9ViNOm07DeI2El9j9NIFypO
	XNKA68LbhIaYsl0dv+6vvOtEvi+ib3WNTIcmCzYLrKX2Lt2PCS6vF4AZ4WhQ7i5+g5KW/0
	N1fmF5sfFCF9Pr6ApEJFIzjWbznl57o=
Subject: Re: [PATCH V3 03/23] x86/ioreq: Provide out-of-line wrapper for the
 handle_mmio()
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-4-git-send-email-olekstysh@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a12313a8-4e74-02a0-f849-72c6ed9b6161@suse.com>
Date: Mon, 7 Dec 2020 12:27:27 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <1606732298-22107-4-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -36,6 +36,11 @@
>  #include <public/hvm/ioreq.h>
>  #include <public/hvm/params.h>
>  
> +bool ioreq_complete_mmio(void)
> +{
> +    return handle_mmio();
> +}

As indicated before I don't like out-of-line functions like this
one; I think a #define would be quite fine here, but Paul as the
maintainer thinks differently. So be it. However, shouldn't this
function be named arch_ioreq_complete_mmio() according to the
new naming model, and then ...

> --- a/xen/include/asm-x86/hvm/ioreq.h
> +++ b/xen/include/asm-x86/hvm/ioreq.h
> @@ -74,6 +74,8 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
>  
>  void hvm_ioreq_init(struct domain *d);
>  
> +bool ioreq_complete_mmio(void);

... get declared next to the other arch_*() hooks? With this
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan
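
The rename Jan asks for can be sketched as below. This is a minimal
compilable illustration only: the stub handle_mmio() stands in for
Xen's real x86 MMIO emulation path, and the wrapper simply adopts the
arch_*() naming model discussed above.

```c
#include <stdbool.h>

/* Stub standing in for x86's handle_mmio(); the real one lives in
 * Xen's HVM emulation code. */
static bool handle_mmio(void)
{
    return true;
}

/* Out-of-line wrapper following the arch_*() naming model, so it can
 * be declared next to the other arch_*() hooks in the ioreq header. */
bool arch_ioreq_complete_mmio(void)
{
    return handle_mmio();
}
```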


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 11:28:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 11:28:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46310.82187 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEgk-0006oQ-9p; Mon, 07 Dec 2020 11:28:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46310.82187; Mon, 07 Dec 2020 11:28:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEgk-0006oJ-6m; Mon, 07 Dec 2020 11:28:18 +0000
Received: by outflank-mailman (input) for mailman id 46310;
 Mon, 07 Dec 2020 11:28:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GkbA=FL=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1kmEgi-0006oA-5C
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 11:28:16 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 72399a42-644c-495a-aed7-9e6550c629e6;
 Mon, 07 Dec 2020 11:28:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 72399a42-644c-495a-aed7-9e6550c629e6
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1607340495;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=+5kNau3vpeKkW10kWNfB4ErUzG8WO59WZ3vmdessr3M=;
  b=Yd2HwVGGaC11XbSBBG8puXVy/PxWMlOgWXjzyD1KKyiDUWEW8C2ihJWl
   hG1wuSwypUwJ0KmCJZkNeD/65WVqvgfxvbObWqSMRkbXTEYceKlRaOO9w
   VxASb5YhWc9Z/K0kvWDwMnXdwfEmgRPO/yRPJ4zMS+NTfu5QcVU512oGA
   8=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 9XZo+aEforIyXbvF0kaOBYHCESBRxC+4lfK1RnxK0tNfj/K8u/uwmkPyBAH1p9BdjOax5kaD4P
 331GquH4Rn75U/isjEeW52BroaiymlP80CyyHIcErWQ0dKIqzhTaS/z5s7vB+P4bPUaC8sctRx
 8G6AwhzqfUGdU7UL5HI7/oeAIzXIfIMZ4tJO8Zmrm/CYlkyDoz3Ua4ot3UCWB3O1cLK6xIUm5R
 fT792wqD9+n1FbQ6TlB0npPIRwkpAkNL+9wvfOmttcoBphn/67gG5fnzRsdrXtRE2Uv/dvXl02
 ufQ=
X-SBRS: 5.1
X-MesageID: 32665833
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,399,1599537600"; 
   d="scan'208";a="32665833"
Subject: Re: [PATCH v3 1/2] x86/IRQ: make max number of guests for a shared
 IRQ configurable
To: Jan Beulich <jbeulich@suse.com>
CC: <andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>,
	<iwj@xenproject.org>, <julien@xen.org>, <sstabellini@kernel.org>,
	<wl@xen.org>, <roger.pau@citrix.com>, <xen-devel@lists.xenproject.org>
References: <1607276587-19231-1-git-send-email-igor.druzhinin@citrix.com>
 <dc023b15-9e28-322c-aa86-165459e65d77@suse.com>
From: Igor Druzhinin <igor.druzhinin@citrix.com>
Message-ID: <7b5c1a4e-fef7-3534-c717-335c025ea6b6@citrix.com>
Date: Mon, 7 Dec 2020 11:28:06 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <dc023b15-9e28-322c-aa86-165459e65d77@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 07/12/2020 09:43, Jan Beulich wrote:
> On 06.12.2020 18:43, Igor Druzhinin wrote:
>> @@ -1633,11 +1640,12 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
>>          goto retry;
>>      }
>>  
>> -    if ( action->nr_guests == IRQ_MAX_GUESTS )
>> +    if ( action->nr_guests == irq_max_guests )
>>      {
>> -        printk(XENLOG_G_INFO "Cannot bind IRQ%d to dom%d. "
>> -               "Already at max share.\n",
>> -               pirq->pirq, v->domain->domain_id);
>> +        printk(XENLOG_G_INFO
>> +               "Cannot bind IRQ%d to dom%pd: already at max share %u ",

I noticed it just now, but could you also remove the stray "dom" left in this line while committing?

>> +               pirq->pirq, v->domain, irq_max_guests);
>> +        printk("(increase with irq-max-guests= option)\n");
> 
> Now two separate printk()s are definitely worse. Then putting the
> part of the format string inside the parentheses on a separate line
> would still be better (and perhaps a sensible compromise with the
> grep-ability desire).

Now I'm confused, because you asked me not to split the format string across
lines, which wouldn't be possible without splitting the printk()s. I didn't
really want to drop anything informative.

> With suitable adjustments, which I'd be okay making while committing
> as long as you agree,

Yes, do with it whatever you see fit.

Igor
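
For reference, the single-printk() compromise Jan describes -- one call,
with the format string broken so the parenthesised hint sits on its own
source line -- could look like the sketch below. snprintf() stands in
for Xen's printk() so the result can be checked; XENLOG_G_INFO and the
%pd domain format are Xen-specific and are simplified here.

```c
#include <stdio.h>
#include <string.h>

/* One format call: the main message stays on a single source line for
 * grep-ability, and only the parenthesised hint moves to a second line. */
static int format_bind_error(char *buf, size_t len,
                             int pirq, int domid, unsigned int max_guests)
{
    return snprintf(buf, len,
                    "Cannot bind IRQ%d to d%d: already at max share %u "
                    "(increase with irq-max-guests= option)\n",
                    pirq, domid, max_guests);
}
```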


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 11:33:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 11:33:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46323.82200 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmElZ-0007p2-TU; Mon, 07 Dec 2020 11:33:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46323.82200; Mon, 07 Dec 2020 11:33:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmElZ-0007ov-Ps; Mon, 07 Dec 2020 11:33:17 +0000
Received: by outflank-mailman (input) for mailman id 46323;
 Mon, 07 Dec 2020 11:33:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=AUZZ=FL=redhat.com=thuth@srs-us1.protection.inumbo.net>)
 id 1kmElZ-0007oq-Cs
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 11:33:17 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id ad371ca6-b359-42c0-a1ca-2960136a0a51;
 Mon, 07 Dec 2020 11:33:16 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-272-p6QMBE0tMBi98nP0Oz0kPQ-1; Mon, 07 Dec 2020 06:33:15 -0500
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com
 [10.5.11.12])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 73F07180A092;
 Mon,  7 Dec 2020 11:33:13 +0000 (UTC)
Received: from thuth.remote.csb (ovpn-112-85.ams2.redhat.com [10.36.112.85])
 by smtp.corp.redhat.com (Postfix) with ESMTP id CE8A660C13;
 Mon,  7 Dec 2020 11:32:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ad371ca6-b359-42c0-a1ca-2960136a0a51
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607340796;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7xdBCwIggNRrC/nj8pC9pamACvDBeORdRC9fwQeI4Ss=;
	b=DO31zYnL1cy7+leYgIz733Wp0lKvHUc1M0GKtEB8Ywrn15LM6VXLzF5X5FdsRnfcA7/SW7
	8E/X6TaSgL5GrTw4eUB+NB2al2HWra7/U5hXh/YDbYgsfiuf0Dw06Cuc+XnRJL86geKtlB
	bniUkTsO/CJbqmskapQTNTcQ7Eyx48Q=
X-MC-Unique: p6QMBE0tMBi98nP0Oz0kPQ-1
Subject: Re: [PATCH v2 1/5] gitlab-ci: Document 'build-tcg-disabled' is a KVM
 X86 job
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 qemu-devel@nongnu.org
Cc: Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Cornelia Huck <cohuck@redhat.com>, Claudio Fontana <cfontana@suse.de>,
 Willian Rampazzo <wrampazz@redhat.com>, qemu-s390x@nongnu.org,
 Anthony Perard <anthony.perard@citrix.com>,
 =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>,
 Marcelo Tosatti <mtosatti@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 kvm@vger.kernel.org, Stefano Stabellini <sstabellini@kernel.org>
References: <20201207112353.3814480-1-philmd@redhat.com>
 <20201207112353.3814480-2-philmd@redhat.com>
From: Thomas Huth <thuth@redhat.com>
Message-ID: <0a146451-04de-e29c-1e6e-91f2162306ee@redhat.com>
Date: Mon, 7 Dec 2020 12:32:58 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <20201207112353.3814480-2-philmd@redhat.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12

On 07/12/2020 12.23, Philippe Mathieu-Daudé wrote:
> Document what this job covers (building X86 targets with
> KVM being the single accelerator available).
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  .gitlab-ci.yml | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
> index d0173e82b16..ee31b1020fe 100644
> --- a/.gitlab-ci.yml
> +++ b/.gitlab-ci.yml
> @@ -220,6 +220,11 @@ build-disabled:
>        s390x-softmmu i386-linux-user
>      MAKE_CHECK_ARGS: check-qtest SPEED=slow
>  
> +# This job explicitly disables TCG (--disable-tcg); KVM is detected by
> +# the configure script. The container doesn't contain the Xen headers,
> +# so the Xen accelerator is not detected / selected. As a result it
> +# builds i386-softmmu and x86_64-softmmu with KVM being the single
> +# accelerator available.
>  build-tcg-disabled:
>    <<: *native_build_job_definition
>    variables:
> 

Reviewed-by: Thomas Huth <thuth@redhat.com>



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 11:35:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 11:35:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46328.82212 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEnh-0007xl-9G; Mon, 07 Dec 2020 11:35:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46328.82212; Mon, 07 Dec 2020 11:35:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEnh-0007xe-5s; Mon, 07 Dec 2020 11:35:29 +0000
Received: by outflank-mailman (input) for mailman id 46328;
 Mon, 07 Dec 2020 11:35:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmEng-0007xZ-34
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 11:35:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 96f2ee80-a521-4c74-b220-62957361df63;
 Mon, 07 Dec 2020 11:35:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6320FAC90;
 Mon,  7 Dec 2020 11:35:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 96f2ee80-a521-4c74-b220-62957361df63
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607340926; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XBDHXALC2M4fuyp8zCjOLP5BxwUfsPXvFMOfCYZK1Q4=;
	b=KugIuvkRAiE+yQWQjtTiHom7tVM8hldeg7DEf0icu4CYnl8onuiH4a/jkBn1RniFUYhTun
	aS/QgKFWesBt2SMyozYGUUPWtWtX51tMT9DeNiKzhrKlyqeRwC8hgtWn7UJZ30NrQwMCv+
	pT/n7hhidqPT66OoY+awkV5Dz0ZU1Y4=
Subject: Re: [PATCH V3 10/23] xen/mm: Make x86's XENMEM_resource_ioreq_server
 handling common
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Julien Grall <julien.grall@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-11-git-send-email-olekstysh@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4f9a68ad-c663-d7a1-9194-4ad28958b077@suse.com>
Date: Mon, 7 Dec 2020 12:35:28 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <1606732298-22107-11-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -4699,50 +4699,6 @@ int xenmem_add_to_physmap_one(
>      return rc;
>  }
>  
> -int arch_acquire_resource(struct domain *d, unsigned int type,
> -                          unsigned int id, unsigned long frame,
> -                          unsigned int nr_frames, xen_pfn_t mfn_list[])
> -{
> -    int rc;
> -
> -    switch ( type )
> -    {
> -#ifdef CONFIG_HVM
> -    case XENMEM_resource_ioreq_server:
> -    {
> -        ioservid_t ioservid = id;
> -        unsigned int i;
> -
> -        rc = -EINVAL;
> -        if ( !is_hvm_domain(d) )
> -            break;
> -
> -        if ( id != (unsigned int)ioservid )
> -            break;
> -
> -        rc = 0;
> -        for ( i = 0; i < nr_frames; i++ )
> -        {
> -            mfn_t mfn;
> -
> -            rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn);
> -            if ( rc )
> -                break;
> -
> -            mfn_list[i] = mfn_x(mfn);
> -        }
> -        break;
> -    }
> -#endif
> -
> -    default:
> -        rc = -EOPNOTSUPP;
> -        break;
> -    }
> -
> -    return rc;
> -}

Can't this be accompanied by removal of the xen/ioreq.h inclusion?
(I'm only looking at patch 4 right now, but the renaming there made
the soon-to-be-unnecessary #include quite apparent.)

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 11:38:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 11:38:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46336.82227 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEqO-00088m-Qw; Mon, 07 Dec 2020 11:38:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46336.82227; Mon, 07 Dec 2020 11:38:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEqO-00088f-NF; Mon, 07 Dec 2020 11:38:16 +0000
Received: by outflank-mailman (input) for mailman id 46336;
 Mon, 07 Dec 2020 11:38:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=AUZZ=FL=redhat.com=thuth@srs-us1.protection.inumbo.net>)
 id 1kmEqN-00088a-5J
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 11:38:15 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id cbe50e0a-31fb-4b73-96c4-b1d970159f10;
 Mon, 07 Dec 2020 11:38:14 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-14-sho2lGVVNdOBmTUquj-bNA-1; Mon, 07 Dec 2020 06:38:13 -0500
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com
 [10.5.11.11])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id D013B180A08A;
 Mon,  7 Dec 2020 11:38:10 +0000 (UTC)
Received: from thuth.remote.csb (ovpn-112-85.ams2.redhat.com [10.36.112.85])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 195BC60BD8;
 Mon,  7 Dec 2020 11:38:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cbe50e0a-31fb-4b73-96c4-b1d970159f10
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607341094;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=muAFbZHcdnlA540NVtllekXZu37z+fEhka8P0LY3Jqw=;
	b=Levx0tKtdR2is0Sa1SwqQws+LdMGx3xmYaaLxkpgpHCdjSQ7g1jWGNKP0bycmf3WG4ptPk
	teTjg2w6RpVeJJg+5VHCfyFBkpiyfU6UcAkTXbbr4Ucv5sqJWB+dKyZp2x1JH7+QZHOPMj
	R2PoiMB8sE+VaA+SNLuuJALGpflKRKk=
X-MC-Unique: sho2lGVVNdOBmTUquj-bNA-1
Subject: Re: [PATCH v2 3/5] gitlab-ci: Introduce 'cross_accel_build_job'
 template
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 qemu-devel@nongnu.org
Cc: Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Cornelia Huck <cohuck@redhat.com>, Claudio Fontana <cfontana@suse.de>,
 Willian Rampazzo <wrampazz@redhat.com>, qemu-s390x@nongnu.org,
 Anthony Perard <anthony.perard@citrix.com>,
 =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>,
 Marcelo Tosatti <mtosatti@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 kvm@vger.kernel.org, Stefano Stabellini <sstabellini@kernel.org>
References: <20201207112353.3814480-1-philmd@redhat.com>
 <20201207112353.3814480-4-philmd@redhat.com>
From: Thomas Huth <thuth@redhat.com>
Message-ID: <93d186c0-feea-8e47-2a03-5276fb898bff@redhat.com>
Date: Mon, 7 Dec 2020 12:37:59 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <20201207112353.3814480-4-philmd@redhat.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11

On 07/12/2020 12.23, Philippe Mathieu-Daudé wrote:
> Introduce a job template to cross-build accelerator-specific
> jobs (enabling a specific accelerator, disabling the others).
> 
> The specific accelerator is selected by the $ACCEL environment
> variable (defaults to KVM).
> 
> Extra options such as disabling other accelerators are passed
> via the $ACCEL_CONFIGURE_OPTS environment variable.
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  .gitlab-ci.d/crossbuilds.yml | 17 +++++++++++++++++
>  1 file changed, 17 insertions(+)
> 
> diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
> index 099949aaef3..d8685ade376 100644
> --- a/.gitlab-ci.d/crossbuilds.yml
> +++ b/.gitlab-ci.d/crossbuilds.yml
> @@ -13,6 +13,23 @@
>            xtensa-softmmu"
>      - make -j$(expr $(nproc) + 1) all check-build
>  
> +# Job to cross-build specific accelerators.
> +#
> +# Set the $ACCEL variable to select the specific accelerator (defaults to
> +# KVM), and set extra options (such as disabling other accelerators) via
> +# the $ACCEL_CONFIGURE_OPTS variable.
> +.cross_accel_build_job:
> +  stage: build
> +  image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
> +  timeout: 30m
> +  script:
> +    - mkdir build
> +    - cd build
> +    - PKG_CONFIG_PATH=$PKG_CONFIG_PATH
> +      ../configure --enable-werror $QEMU_CONFIGURE_OPTS --disable-tools
> +        --enable-${ACCEL:-kvm} --target-list="$TARGETS" $ACCEL_CONFIGURE_OPTS
> +    - make -j$(expr $(nproc) + 1) all check-build
> +
>  .cross_user_build_job:
>    stage: build
>    image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest

I wonder whether we could also simply use the .cross_user_build_job - e.g.
by adding a $EXTRA_CONFIGURE_OPTS variable in the "../configure ..." line so
that the accel-jobs could use that for their --enable... and --disable...
settings?

Anyway, I've got no strong opinion on that one, and I'm also fine if we add
this new template, so:

Reviewed-by: Thomas Huth <thuth@redhat.com>
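
Thomas's alternative could be sketched as below (hypothetical --
$EXTRA_CONFIGURE_OPTS does not exist in the tree at this point, and the
template body is abridged):

```yaml
.cross_user_build_job:
  stage: build
  image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
  script:
    - mkdir build && cd build
    - ../configure --enable-werror $QEMU_CONFIGURE_OPTS
        --target-list="$TARGETS" $EXTRA_CONFIGURE_OPTS
    - make -j$(expr $(nproc) + 1) all check-build

# An accelerator job would then only set variables:
cross-s390x-kvm:
  extends: .cross_user_build_job
  variables:
    IMAGE: debian-s390x-cross
    TARGETS: s390x-softmmu
    EXTRA_CONFIGURE_OPTS: --enable-kvm --disable-tcg
```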



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 11:41:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 11:41:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46342.82239 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEt4-0000ex-DU; Mon, 07 Dec 2020 11:41:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46342.82239; Mon, 07 Dec 2020 11:41:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEt4-0000eq-AL; Mon, 07 Dec 2020 11:41:02 +0000
Received: by outflank-mailman (input) for mailman id 46342;
 Mon, 07 Dec 2020 11:41:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmEt2-0000eg-Eu
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 11:41:00 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 03f2969b-d11b-4ea5-a8da-12da4c22efa7;
 Mon, 07 Dec 2020 11:40:59 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9AD65AC90;
 Mon,  7 Dec 2020 11:40:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 03f2969b-d11b-4ea5-a8da-12da4c22efa7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607341258; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ktigrgzOccHxvbusY6yLdeGC0ejiIpTZQxtFb509trE=;
	b=pyWbUnb8Lz9Z0DiRgRroPSX6T0WKb/XJ3LiuaPgure2wECRbH5bePBC/z3fKLEy/WmVvJb
	5DXYyUqRX/LqNom+oWlKGWQKN4260RBPO8T445kPEa+94l8VcztrAFK6XHCdzSebZ1QMgl
	9A5+DNoSRYErcsgfebTsfoM8D9Zi86A=
Subject: Re: [PATCH V3 04/23] xen/ioreq: Make x86's IOREQ feature common
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Paul Durrant <paul@xen.org>, Tim Deegan
 <tim@xen.org>, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-5-git-send-email-olekstysh@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d1fdebe9-3355-fece-e9dc-e6a7acc180e7@suse.com>
Date: Mon, 7 Dec 2020 12:41:00 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <1606732298-22107-5-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
> --- a/xen/include/asm-x86/hvm/ioreq.h
> +++ b/xen/include/asm-x86/hvm/ioreq.h
> @@ -19,8 +19,7 @@
>  #ifndef __ASM_X86_HVM_IOREQ_H__
>  #define __ASM_X86_HVM_IOREQ_H__
>  
> -#define HANDLE_BUFIOREQ(s) \
> -    ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
> +#include <xen/ioreq.h>

Is there a strict need to do it this way round? Usually the common
header would include the arch one ...

> @@ -38,42 +37,6 @@ int arch_ioreq_server_get_type_addr(const struct domain *d,
>                                      uint64_t *addr);
>  void arch_ioreq_domain_init(struct domain *d);

As already mentioned in an earlier reply: What about these? They
shouldn't get declared once per arch. If anything, ones that
want to be inline functions can / should remain in the per-arch
header.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 11:41:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 11:41:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46343.82251 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEtE-0000iZ-N1; Mon, 07 Dec 2020 11:41:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46343.82251; Mon, 07 Dec 2020 11:41:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEtE-0000iR-Ii; Mon, 07 Dec 2020 11:41:12 +0000
Received: by outflank-mailman (input) for mailman id 46343;
 Mon, 07 Dec 2020 11:41:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=AUZZ=FL=redhat.com=thuth@srs-us1.protection.inumbo.net>)
 id 1kmEtD-0000i1-Fs
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 11:41:11 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id e61e124e-7348-4e8c-8f41-671d1a0d8443;
 Mon, 07 Dec 2020 11:41:11 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-286-A-HOIe1IPsKJsI0zvd0r6w-1; Mon, 07 Dec 2020 06:41:09 -0500
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com
 [10.5.11.14])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id A9CE0107ACE8;
 Mon,  7 Dec 2020 11:41:07 +0000 (UTC)
Received: from thuth.remote.csb (ovpn-112-85.ams2.redhat.com [10.36.112.85])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 231B85D9DC;
 Mon,  7 Dec 2020 11:40:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e61e124e-7348-4e8c-8f41-671d1a0d8443
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607341270;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=CoKqcZfYBnXMQMOGUtw8RaWj33ktvmZ2VgJzuFiWcTA=;
	b=Z/A3nsSdLNrceb8kWBoVnTq3nU9PlVlMYd69wtRUwZLju1cWAClOrFBIBpxdgD7nm+47xJ
	1HgxmnwmEilFt1ZGuYIny5f0OwhkbWNITFzLSKKrSYkwEg3Dn4ZMNT0QfJnlIANqvdmPSa
	t0Q5FCQy9qR0ZITHGJnWfxcXyhkZCvc=
X-MC-Unique: A-HOIe1IPsKJsI0zvd0r6w-1
Subject: Re: [PATCH v2 4/5] gitlab-ci: Add KVM s390x cross-build jobs
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 qemu-devel@nongnu.org
Cc: Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Cornelia Huck <cohuck@redhat.com>, Claudio Fontana <cfontana@suse.de>,
 Willian Rampazzo <wrampazz@redhat.com>, qemu-s390x@nongnu.org,
 Anthony Perard <anthony.perard@citrix.com>,
 =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>,
 Marcelo Tosatti <mtosatti@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 kvm@vger.kernel.org, Stefano Stabellini <sstabellini@kernel.org>
References: <20201207112353.3814480-1-philmd@redhat.com>
 <20201207112353.3814480-5-philmd@redhat.com>
From: Thomas Huth <thuth@redhat.com>
Message-ID: <0a0c2002-64e1-0a6d-d520-144b70f2590a@redhat.com>
Date: Mon, 7 Dec 2020 12:40:52 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <20201207112353.3814480-5-philmd@redhat.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14

On 07/12/2020 12.23, Philippe Mathieu-Daudé wrote:
> Cross-build s390x target with only KVM accelerator enabled.
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  .gitlab-ci.d/crossbuilds.yml | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
> index d8685ade376..7a94a66b4b3 100644
> --- a/.gitlab-ci.d/crossbuilds.yml
> +++ b/.gitlab-ci.d/crossbuilds.yml
> @@ -1,4 +1,3 @@
> -
>  .cross_system_build_job:
>    stage: build
>    image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
> @@ -120,6 +119,13 @@ cross-s390x-user:
>    variables:
>      IMAGE: debian-s390x-cross
>  
> +cross-s390x-kvm:

I'd still prefer "-no-tcg" or maybe "-kvm-only" ... but that's just a matter
of taste, so:

Reviewed-by: Thomas Huth <thuth@redhat.com>



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 11:44:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 11:44:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46356.82265 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEwf-0000yC-8S; Mon, 07 Dec 2020 11:44:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46356.82265; Mon, 07 Dec 2020 11:44:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEwf-0000y5-5N; Mon, 07 Dec 2020 11:44:45 +0000
Received: by outflank-mailman (input) for mailman id 46356;
 Mon, 07 Dec 2020 11:44:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmEwe-0000y0-Cq
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 11:44:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6c49290a-7c2e-4553-bfcd-80fa9f212a94;
 Mon, 07 Dec 2020 11:44:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 33570AD1E;
 Mon,  7 Dec 2020 11:44:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c49290a-7c2e-4553-bfcd-80fa9f212a94
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607341482; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=X7gxFAF+E4uaedP2zYxSsqrZhMulAqRne8iVEz2bgZY=;
	b=CtTV8RCxk3tggl/lac+/3m7f1OH3h8jTcezcHMI7mtxx+olPOerv+IetTCINc9YyZlp7Cp
	s0r5vlkzTVwuxMzTK5sptgzjp4utuuc+mCINlFzvCGvIwJLtmISenjsBS6vyhKplGIDW5R
	9tcEbfJiuTy3wsSzSJxmJx8NTt3RwxU=
Subject: Re: [PATCH v3 1/2] x86/IRQ: make max number of guests for a shared
 IRQ configurable
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Cc: andrew.cooper3@citrix.com, george.dunlap@citrix.com, iwj@xenproject.org,
 julien@xen.org, sstabellini@kernel.org, wl@xen.org, roger.pau@citrix.com,
 xen-devel@lists.xenproject.org
References: <1607276587-19231-1-git-send-email-igor.druzhinin@citrix.com>
 <dc023b15-9e28-322c-aa86-165459e65d77@suse.com>
 <7b5c1a4e-fef7-3534-c717-335c025ea6b6@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <44a36706-bfc6-062b-49c8-06dfe999b164@suse.com>
Date: Mon, 7 Dec 2020 12:44:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <7b5c1a4e-fef7-3534-c717-335c025ea6b6@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 07.12.2020 12:28, Igor Druzhinin wrote:
> On 07/12/2020 09:43, Jan Beulich wrote:
>> On 06.12.2020 18:43, Igor Druzhinin wrote:
>>> @@ -1633,11 +1640,12 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
>>>          goto retry;
>>>      }
>>>  
>>> -    if ( action->nr_guests == IRQ_MAX_GUESTS )
>>> +    if ( action->nr_guests == irq_max_guests )
>>>      {
>>> -        printk(XENLOG_G_INFO "Cannot bind IRQ%d to dom%d. "
>>> -               "Already at max share.\n",
>>> -               pirq->pirq, v->domain->domain_id);
>>> +        printk(XENLOG_G_INFO
>>> +               "Cannot bind IRQ%d to dom%pd: already at max share %u ",
> 
> I noticed it just now, but could you also remove the stray "dom" left in this line while committing.

Oh, sure.

>>> +               pirq->pirq, v->domain, irq_max_guests);
>>> +        printk("(increase with irq-max-guests= option)\n");
>>
>> Now two separate printk()s are definitely worse. Then putting the
>> part of the format string inside the parentheses on a separate line
>> would still be better (and perhaps a sensible compromise with the
>> grep-ability desire).
> 
> Now I'm confused, because you asked me not to split the format string between lines, which
> wouldn't be possible without splitting the printk()s. I didn't really want to drop anything
> informative.

"Not splitting" really was meant in the sense of the words: No
splitting at all. Even less so across multiple printk()-s. But
since the line would get really long, I can live with the
outlined compromise.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 11:47:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 11:47:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46361.82277 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEz7-00016Z-Mx; Mon, 07 Dec 2020 11:47:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46361.82277; Mon, 07 Dec 2020 11:47:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEz7-00016S-Jq; Mon, 07 Dec 2020 11:47:17 +0000
Received: by outflank-mailman (input) for mailman id 46361;
 Mon, 07 Dec 2020 11:47:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmEz5-00016N-Qg
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 11:47:15 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ab0c550b-b7da-4a25-a2ce-254e9dea003e;
 Mon, 07 Dec 2020 11:47:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3DEEBAC9A;
 Mon,  7 Dec 2020 11:47:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ab0c550b-b7da-4a25-a2ce-254e9dea003e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607341634; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Kg3gDdT3ADiFHvwSa2dcT46uq2ybzUoNndm7KF8yLig=;
	b=Ew6afyminPVSFaXdqYWqFVp29Mp0GC36fx3IVefCbxJ7xzf9EcF/im5UeTwNK5Ui9qifHR
	Fmw6lQ0pZTDlwRYoar/vKaVG+8WLGqpbl05cwy9RVuAM3DoYRA4bSTumCgluEjyZlrWSYf
	4gwNijtNR8WWtIaPMxop9dJx/jDTAbc=
Subject: Re: [PATCH V3 05/23] xen/ioreq: Make x86's
 hvm_ioreq_needs_completion() common
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-6-git-send-email-olekstysh@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <033af196-1338-54a6-1ec3-416df27337fa@suse.com>
Date: Mon, 7 Dec 2020 12:47:16 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <1606732298-22107-6-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
> --- a/xen/include/xen/ioreq.h
> +++ b/xen/include/xen/ioreq.h
> @@ -21,6 +21,13 @@
>  
>  #include <xen/sched.h>
>  
> +static inline bool ioreq_needs_completion(const ioreq_t *ioreq)
> +{
> +    return ioreq->state == STATE_IOREQ_READY &&
> +           !ioreq->data_is_ptr &&
> +           (ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE);
> +}
> +
>  #define HANDLE_BUFIOREQ(s) \
>      ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)

Personally, I would have suggested keeping the #define first, but
I see you've already got Paul's R-b. Applicable parts
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 11:48:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 11:48:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46367.82290 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEzt-0001DV-0o; Mon, 07 Dec 2020 11:48:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46367.82290; Mon, 07 Dec 2020 11:48:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmEzs-0001DO-Te; Mon, 07 Dec 2020 11:48:04 +0000
Received: by outflank-mailman (input) for mailman id 46367;
 Mon, 07 Dec 2020 11:48:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmEzr-0001DH-Do
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 11:48:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 53f29c0c-75bc-44b6-b5ca-23d7d9cb859b;
 Mon, 07 Dec 2020 11:48:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 06764AC90;
 Mon,  7 Dec 2020 11:48:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 53f29c0c-75bc-44b6-b5ca-23d7d9cb859b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607341682; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=J0q/IP+BAA2HuYyl/GmCCSGXKldok9mJFemNRzg99R4=;
	b=a2j0CFx9af7GLJhLT6inG/1tAUYPD4ZHz/njxIfUR9+BFLO0E8BHkmKFy94WP/yBS9p9xp
	VH0r1nkjP1NvZPEy5kIIIUpkw2GvOq5GlfS3G7U5ufWab27gXbSsOL+3xzrRXgDtHYhSKm
	TNlPN3jiWuUDq6nEwgx1FESlt6Qrvh8=
Subject: Re: [PATCH V3 06/23] xen/ioreq: Make x86's
 hvm_mmio_first(last)_byte() common
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-7-git-send-email-olekstysh@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <dde59170-d298-1581-c161-4ad5ff156dd1@suse.com>
Date: Mon, 7 Dec 2020 12:48:04 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <1606732298-22107-7-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> The IOREQ is a common feature now and these helpers will be used
> on Arm as is. Move them to xen/ioreq.h and replace "hvm" prefixes
> with "ioreq".
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> Reviewed-by: Paul Durrant <paul@xen.org>

Applicable parts
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 11:48:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 11:48:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46368.82302 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmF04-0001KN-AM; Mon, 07 Dec 2020 11:48:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46368.82302; Mon, 07 Dec 2020 11:48:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmF04-0001KF-5f; Mon, 07 Dec 2020 11:48:16 +0000
Received: by outflank-mailman (input) for mailman id 46368;
 Mon, 07 Dec 2020 11:48:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zJBf=FL=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1kmF03-0001Jq-6G
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 11:48:15 +0000
Received: from wout5-smtp.messagingengine.com (unknown [64.147.123.21])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4f7c5894-c8c1-4eaf-bdef-18d1582162ba;
 Mon, 07 Dec 2020 11:48:14 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.west.internal (Postfix) with ESMTP id C16FFC0C;
 Mon,  7 Dec 2020 06:48:12 -0500 (EST)
Received: from mailfrontend1 ([10.202.2.162])
 by compute3.internal (MEProxy); Mon, 07 Dec 2020 06:48:13 -0500
Received: from mail-itl (ip5b40aa59.dynamic.kabel-deutschland.de
 [91.64.170.89])
 by mail.messagingengine.com (Postfix) with ESMTPA id A0883240064;
 Mon,  7 Dec 2020 06:48:09 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4f7c5894-c8c1-4eaf-bdef-18d1582162ba
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; bh=VhnFKp
	DiphrJTeyL0PhwYlSNQ41mGFxTA7aWgbPmEDc=; b=kYstMaR93WoKQpgs1xGB1v
	u9LyJQqTLVLU3CNfwVWrGK0/P5P2OWzVOE7+kNyYJBJZKCSHhKsd4xEcIeABFYRv
	GLg+tV7hcuq2U5bNKIhlSv8bR57waWCBUljxbf5ZJSkDiMdhvhkaugxXios9HoG8
	KcwIWkrXjs+kfg4g0r0oi2W9fi8PvI0lzhNbciV4HbFRtmg2eskh6hqbDAwYn6Mz
	+C2BsqBXN1Qi29BDAqSZ/miKK2kpFu1zCmz0zy73kqd467jn5D04DYcqdjd5bdbP
	U5I6GswpXlxQCBZ6bQDrKvCDCKbzyt49jMxuTfMY5JSxqvABVH9G8aK2MNbncaAA
	==
X-ME-Sender: <xms:ehbOX7GMM_WOYhUwXBXIedB4CMTTax14bSu6rlninFTWUiv4c3ejWw>
    <xme:ehbOX4UJfDBuPgJHJL5mmYgs6VO3K5mSPC16niKZ9YuMWsEICT37QN0JP43GzVBap
    FyMNF8heAIINg>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedrudejgedgfedvucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvffukfhfgggtuggjsehgtderredttdejnecuhfhrohhmpeforghrvghk
    ucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesihhnvh
    hishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeetveff
    iefghfekhffggeeffffhgeevieektedthfehveeiheeiiedtudegfeetffenucfkpheple
    durdeigedrudejtddrkeelnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehm
    rghilhhfrhhomhepmhgrrhhmrghrvghksehinhhvihhsihgslhgvthhhihhnghhslhgrsg
    drtghomh
X-ME-Proxy: <xmx:ehbOX9LjWSlIXbg_zSEP4oUbl4kBVRv3lD8KzYXNRjih85obAG7n9A>
    <xmx:ehbOX5EYcMT16r5_MhWenfCab1gvXZt4ddGaU8w9Z4p5WUWEXitzJA>
    <xmx:ehbOXxWAvPlGL7he-Qn1OQCwUcMq0mUN3TMCg_8zO0nahZT0YnZD8g>
    <xmx:fBbOX_f0ub5Da5BwZDC4pZhWM4hB2bMrjEycHvneSb-T-eQ-Q3vYmg>
Date: Mon, 7 Dec 2020 12:48:05 +0100
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Christoph Hellwig <hch@lst.de>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Keith Busch <kbusch@kernel.org>, Jens Axboe <axboe@fb.com>,
	Sagi Grimberg <sagi@grimberg.me>, linux-nvme@lists.infradead.org
Subject: Re: GPF on 0xdead000000000100 in nvme_map_data - Linux 5.9.9
Message-ID: <20201207114805.GF1244@mail-itl>
References: <20201129035639.GW2532@mail-itl>
 <20201130164010.GA23494@redsun51.ssa.fujisawa.hgst.com>
 <20201202000642.GJ201140@mail-itl>
 <20201204110847.GU201140@mail-itl>
 <20201204120803.GA20727@lst.de>
 <20201204122054.GV201140@mail-itl>
 <20201205082839.ts3ju6yta46cgwjn@Air-de-Roger>
 <CAKf6xpvdD-XJoRO91B+Lwc=0Sb6Luw2X8Y9sH_MQsAWhZmj+hw@mail.gmail.com>
 <293433c5-d23b-63e7-d607-9d24f06c46b4@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="54ZiyWcDhi/7bWb8"
Content-Disposition: inline
In-Reply-To: <293433c5-d23b-63e7-d607-9d24f06c46b4@suse.com>


--54ZiyWcDhi/7bWb8
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Subject: Re: GPF on 0xdead000000000100 in nvme_map_data - Linux 5.9.9

On Mon, Dec 07, 2020 at 11:55:01AM +0100, Jürgen Groß wrote:
> Marek,
>
> On 06.12.20 17:47, Jason Andryuk wrote:
> > On Sat, Dec 5, 2020 at 3:29 AM Roger Pau Monné <roger.pau@citrix.com> wrote:
> > >
> > > On Fri, Dec 04, 2020 at 01:20:54PM +0100, Marek Marczykowski-Górecki wrote:
> > > > On Fri, Dec 04, 2020 at 01:08:03PM +0100, Christoph Hellwig wrote:
> > > > > On Fri, Dec 04, 2020 at 12:08:47PM +0100, Marek Marczykowski-Górecki wrote:
> > > > > > culprit:
> > > > > >
> > > > > > commit 9e2369c06c8a181478039258a4598c1ddd2cadfa
> > > > > > Author: Roger Pau Monne <roger.pau@citrix.com>
> > > > > > Date:   Tue Sep 1 10:33:26 2020 +0200
> > > > > >
> > > > > >      xen: add helpers to allocate unpopulated memory
> > > > > >
> > > > > > I'm adding relevant people and xen-devel to the thread.
> > > > > > For completeness, here is the original crash message:
> > > > >
> > > > > That commit definitively adds a new ZONE_DEVICE user, so it does look
> > > > > related.  But you are not running on Xen, are you?
> > > >
> > > > I am. It is Xen dom0.
> > >
> > > I'm afraid I'm on leave and won't be able to look into this until the
> > > beginning of January. I would guess it's some kind of bad
> > > interaction between blkback and NVMe drivers both using ZONE_DEVICE?
> > >
> > > Maybe the best is to revert this change and I will look into it when
> > > I get back, unless someone is willing to debug this further.
> >
> > Looking at commit 9e2369c06c8a and xen-blkback put_free_pages(), they
> > both use page->lru which is part of the anonymous union shared with
> > *pgmap.  That matches Marek's suspicion that the ZONE_DEVICE memory is
> > being used as ZONE_NORMAL.
> >
> > memmap_init_zone_device() says:
> > * ZONE_DEVICE pages union ->lru with a ->pgmap back pointer
> > * and zone_device_data.  It is a bug if a ZONE_DEVICE page is
> > * ever freed or placed on a driver-private list.
>
> Second try, now even tested to work on a test system (without NVMe).

It doesn't work for me:

[  526.023340] xen-blkback: backend/vbd/1/51712: using 2 queues, protocol 1 (x86_64-abi) persistent grants
[  526.030550] xen-blkback: backend/vbd/1/51728: using 2 queues, protocol 1 (x86_64-abi) persistent grants
[  526.034810] BUG: kernel NULL pointer dereference, address: 0000000000000010
[  526.034841] #PF: supervisor read access in kernel mode
[  526.034857] #PF: error_code(0x0000) - not-present page
[  526.034875] PGD 105428067 P4D 105428067 PUD 105b92067 PMD 0
[  526.034896] Oops: 0000 [#1] SMP NOPTI
[  526.034909] CPU: 3 PID: 4007 Comm: 1.xvda-0 Tainted: G        W         5.10.0-rc6-1.qubes.x86_64+ #108
[  526.034933] Hardware name: LENOVO 20M9CTO1WW/20M9CTO1WW, BIOS N2CET50W (1.33 ) 01/15/2020
[  526.034974] RIP: e030:gnttab_page_cache_get+0x32/0x60
[  526.034990] Code: 89 f4 55 48 89 fd e8 4d e3 80 00 48 83 7d 08 00 48 89 c6 74 15 48 89 ef e8 5b e0 80 00 4c 89 e6 5d bf 01 00 00 00 41 5c eb 8e <48> 8b 04 25 10 00 00 00 48 89 ef 48 89 45 08 49 c7 04 24 00 00 00
[  526.035035] RSP: e02b:ffffc90003e27a40 EFLAGS: 00010046
[  526.035052] RAX: 0000000000000200 RBX: 0000000000000001 RCX: 0000000000000000
[  526.035072] RDX: 0000000000000001 RSI: 0000000000000200 RDI: ffff888104275518
[  526.035092] RBP: ffff888104275518 R08: 0000000000000000 R09: 0000000000000000
[  526.035113] R10: ffff888104275400 R11: 0000000000000000 R12: ffff888109b5d3a0
[  526.035133] R13: 0000000000000000 R14: 0000000000000000 R15: ffff888104275400
[  526.035159] FS:  0000000000000000(0000) GS:ffff8881b54c0000(0000) knlGS:0000000000000000
[  526.035194] CS:  10000e030 DS: 0000 ES: 0000 CR0: 0000000080050033
[  526.035214] CR2: 0000000000000010 CR3: 0000000103b5a000 CR4: 0000000000050660
[  526.035239] Call Trace:
[  526.035253]  xen_blkbk_map+0x131/0x5a0
[  526.035268]  dispatch_rw_block_io+0x42a/0x9c0
[  526.035284]  ? xen_mc_flush+0xcb/0x190
[  526.035298]  __do_block_io_op+0x314/0x630
[  526.035312]  xen_blkif_schedule+0x182/0x790
[  526.035327]  ? finish_wait+0x80/0x80
[  526.035340]  ? xen_blkif_be_int+0x30/0x30
[  526.035355]  kthread+0xfe/0x140
[  526.035371]  ? kthread_park+0x90/0x90
[  526.035385]  ret_from_fork+0x22/0x30
[  526.035398] Modules linked in:
[  526.035410] CR2: 0000000000000010
[  526.035440] ---[ end trace 431ea72658d96c9d ]---
[  526.176390] RIP: e030:gnttab_page_cache_get+0x32/0x60
[  526.176460] Code: 89 f4 55 48 89 fd e8 4d e3 80 00 48 83 7d 08 00 48 89 c6 74 15 48 89 ef e8 5b e0 80 00 4c 89 e6 5d bf 01 00 00 00 41 5c eb 8e <48> 8b 04 25 10 00 00 00 48 89 ef 48 89 45 08 49 c7 04 24 00 00 00
[  526.250734] RSP: e02b:ffffc90003e27a40 EFLAGS: 00010046
[  526.250751] RAX: 0000000000000200 RBX: 0000000000000001 RCX: 0000000000000000
[  526.250771] RDX: 0000000000000001 RSI: 0000000000000200 RDI: ffff888104275518
[  526.250790] RBP: ffff888104275518 R08: 0000000000000000 R09: 0000000000000000
[  526.250808] R10: ffff888104275400 R11: 0000000000000000 R12: ffff888109b5d3a0
[  526.250827] R13: 0000000000000000 R14: 0000000000000000 R15: ffff888104275400
[  526.250863] FS:  0000000000000000(0000) GS:ffff8881b54c0000(0000) knlGS:0000000000000000
[  526.250884] CS:  10000e030 DS: 0000 ES: 0000 CR0: 0000000080050033
[  526.250901] CR2: 0000000000000010 CR3: 0000000103b5a000 CR4: 0000000000050660
[  526.250924] Kernel panic - not syncing: Fatal exception
[  526.250972] Kernel Offset: disabled


This is 7059c2c00a2196865c2139083cbef47cd18109b6 with your patches on
top.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 11:51:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 11:51:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46381.82314 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmF3B-0002M5-TT; Mon, 07 Dec 2020 11:51:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46381.82314; Mon, 07 Dec 2020 11:51:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmF3B-0002Ly-QA; Mon, 07 Dec 2020 11:51:29 +0000
Received: by outflank-mailman (input) for mailman id 46381;
 Mon, 07 Dec 2020 11:51:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=AUZZ=FL=redhat.com=thuth@srs-us1.protection.inumbo.net>)
 id 1kmF3A-0002Lt-GN
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 11:51:28 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 7963aaa1-8ff7-428a-8ea3-cecbea150a4e;
 Mon, 07 Dec 2020 11:51:27 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-477-OLykk5vMNfObUY5WAyj2jg-1; Mon, 07 Dec 2020 06:51:26 -0500
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com
 [10.5.11.14])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id DC390107ACE4;
 Mon,  7 Dec 2020 11:51:23 +0000 (UTC)
Received: from thuth.remote.csb (ovpn-112-85.ams2.redhat.com [10.36.112.85])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 0AE7C5D9E2;
 Mon,  7 Dec 2020 11:51:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7963aaa1-8ff7-428a-8ea3-cecbea150a4e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607341887;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=fZbtPO9TKtQkBSC6vY2I2aKZfTmFL8HdRvz1iHz5xEM=;
	b=eVEM8LyDseCwM9pvEbpuc38FEVLFWjlMxKpiyT2PAkPq942m+CAwUOV5l4WHZarMl+XiXJ
	Pe/OUfQ6PFmr/6FJvyfhYTIGclzMr8aASJRGiTR1YwloxBeDbOhnVVn7S2ylGWm+Yb51bW
	yapr8yN4+kU+un5w8DX1/jD8yC66i3U=
X-MC-Unique: OLykk5vMNfObUY5WAyj2jg-1
Subject: Re: [PATCH v2 5/5] gitlab-ci: Add Xen cross-build jobs
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 qemu-devel@nongnu.org
Cc: Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Cornelia Huck <cohuck@redhat.com>, Claudio Fontana <cfontana@suse.de>,
 Willian Rampazzo <wrampazz@redhat.com>, qemu-s390x@nongnu.org,
 Anthony Perard <anthony.perard@citrix.com>,
 =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>,
 Marcelo Tosatti <mtosatti@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 kvm@vger.kernel.org, Stefano Stabellini <sstabellini@kernel.org>
References: <20201207112353.3814480-1-philmd@redhat.com>
 <20201207112353.3814480-6-philmd@redhat.com>
From: Thomas Huth <thuth@redhat.com>
Message-ID: <9bfd1ed4-baa2-ece8-5b96-ec8fc7a8c547@redhat.com>
Date: Mon, 7 Dec 2020 12:51:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <20201207112353.3814480-6-philmd@redhat.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14

On 07/12/2020 12.23, Philippe Mathieu-Daudé wrote:
> Cross-build ARM and X86 targets with only Xen accelerator enabled.
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  .gitlab-ci.d/crossbuilds.yml | 15 +++++++++++++++
>  1 file changed, 15 insertions(+)
> 
> diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
> index 7a94a66b4b3..31f10f1e145 100644
> --- a/.gitlab-ci.d/crossbuilds.yml
> +++ b/.gitlab-ci.d/crossbuilds.yml
> @@ -135,3 +135,18 @@ cross-win64-system:
>    extends: .cross_system_build_job
>    variables:
>      IMAGE: fedora-win64-cross
> +
> +cross-amd64-xen:
> +  extends: .cross_accel_build_job
> +  variables:
> +    IMAGE: debian-amd64-cross
> +    ACCEL: xen
> +    TARGETS: i386-softmmu,x86_64-softmmu
> +    ACCEL_CONFIGURE_OPTS: --disable-tcg --disable-kvm
> +
> +cross-arm64-xen:
> +  extends: .cross_accel_build_job
> +  variables:
> +    IMAGE: debian-arm64-cross
> +    ACCEL: xen
> +    TARGETS: aarch64-softmmu
Could you please simply replace aarch64-softmmu by arm-softmmu in the
target-list-exclude statement in this file instead of adding a new job for
arm64? That should have the same results and will spare us one job...
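To illustrate, Thomas's suggestion would look roughly like the sketch below. This is hypothetical: the exact job and variable names in `.gitlab-ci.d/crossbuilds.yml` (here `cross-arm64-system` and `CROSS_SKIP_TARGETS`) are placeholders for whatever the file actually uses for its target-exclude list.

```yaml
# Hypothetical sketch, not the real file contents: rather than adding a
# separate cross-arm64-xen job, stop excluding aarch64-softmmu (which
# carries the Xen code) from the existing arm64 cross build and exclude
# arm-softmmu instead.
cross-arm64-system:
  extends: .cross_system_build_job
  variables:
    IMAGE: debian-arm64-cross
    # was: ...,aarch64-softmmu,...
    CROSS_SKIP_TARGETS: arm-softmmu
```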

 Thanks,
  Thomas



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 11:54:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 11:54:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46386.82326 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmF5j-0002W1-B4; Mon, 07 Dec 2020 11:54:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46386.82326; Mon, 07 Dec 2020 11:54:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmF5j-0002Vu-7u; Mon, 07 Dec 2020 11:54:07 +0000
Received: by outflank-mailman (input) for mailman id 46386;
 Mon, 07 Dec 2020 11:54:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>) id 1kmF5i-0002Vp-4x
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 11:54:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1kmF5f-0007HI-VQ; Mon, 07 Dec 2020 11:54:03 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1kmF5f-0001WL-K5; Mon, 07 Dec 2020 11:54:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Mime-Version:Content-Type:
	References:In-Reply-To:Date:Cc:To:From:Subject:Message-ID;
	bh=ikBlnqEAXg+Yf0+YTnPF983S0uwctkYma+4j5+0nY+0=; b=yHdKlqJBTd6exfWZBmOSyXpejH
	atCFfiPtIaTSopTrJTt5HnirOfTG++XAMtrJsWf63qgFn7wE6lPU1z7gz0TfRlZZn3LaOFyWnfGmz
	wf8Ag5AZrYhtJavhcPk85uWJOQAtFhOKSBFt9GJxQB5rTox+sAR7MkKUGVtpze8bo66s=;
Message-ID: <8ba50b6c69ca5c61da8e01faabd9eadc020a49b2.camel@xen.org>
Subject: Re: [PATCH v2] x86/vmap: handle superpages in vmap_to_mfn()
From: Hongyan Xia <hx242@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Roger Pau
 =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
Date: Mon, 07 Dec 2020 11:54:04 +0000
In-Reply-To: <8a3c4749-4275-b632-b3fa-073447acd352@suse.com>
References: 
	<4a69a1177f9496ad0e3ea77e9b1d5b802bf83b60.1606994506.git.hongyxia@amazon.com>
	 <8a3c4749-4275-b632-b3fa-073447acd352@suse.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit

On Mon, 2020-12-07 at 11:11 +0100, Jan Beulich wrote:
> On 03.12.2020 12:21, Hongyan Xia wrote:
> > --- a/xen/arch/x86/mm.c
> > +++ b/xen/arch/x86/mm.c
> > @@ -5194,6 +5194,60 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long
> > v)
> >          }                                          \
> >      } while ( false )
> >  
> > +/* Translate mapped Xen address to MFN. */
> > +mfn_t xen_map_to_mfn(unsigned long va)
> > +{
> > +#define CHECK_MAPPED(cond_)     \
> > +    if ( !(cond_) )             \
> > +    {                           \
> > +        ASSERT_UNREACHABLE();   \
> > +        ret = INVALID_MFN;      \
> > +        goto out;               \
> > +    }                           \
> 
> This should be coded such that use sites ...
> 
> > +    bool locking = system_state > SYS_STATE_boot;
> > +    unsigned int l2_offset = l2_table_offset(va);
> > +    unsigned int l1_offset = l1_table_offset(va);
> > +    const l3_pgentry_t *pl3e = virt_to_xen_l3e(va);
> > +    const l2_pgentry_t *pl2e = NULL;
> > +    const l1_pgentry_t *pl1e = NULL;
> > +    struct page_info *l3page;
> > +    mfn_t ret;
> > +
> > +    L3T_INIT(l3page);
> > +    CHECK_MAPPED(pl3e)
> > +    l3page = virt_to_page(pl3e);
> > +    L3T_LOCK(l3page);
> > +
> > +    CHECK_MAPPED(l3e_get_flags(*pl3e) & _PAGE_PRESENT)
> 
> ... will properly require a statement-ending semicolon. With
> additionally the trailing underscore dropped from the macro's
> parameter name

The immediate solution that came to mind is a do-while construct. Would
you be happy with that?

> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 

Thanks.

> Or wait,
> 
> > --- a/xen/include/asm-x86/mm.h
> > +++ b/xen/include/asm-x86/mm.h
> > @@ -578,6 +578,7 @@ mfn_t alloc_xen_pagetable_new(void);
> >  void free_xen_pagetable_new(mfn_t mfn);
> >  
> >  l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
> > +mfn_t xen_map_to_mfn(unsigned long va);
> 
> This is now a pretty proper companion of map_page_to_xen(), and
> hence imo ought to be declared next to that one rather than here.
> Ultimately Arm may also need to gain an implementation.

Since map_pages_to_xen() is in the common header, are we okay with
having the declaration but not an implementation on the Arm side in
this patch? Or do we also want to introduce the Arm implementation in
this patch?

> > --- a/xen/include/asm-x86/page.h
> > +++ b/xen/include/asm-x86/page.h
> > @@ -291,7 +291,7 @@ void copy_page_sse2(void *, const void *);
> >  #define pfn_to_paddr(pfn)   __pfn_to_paddr(pfn)
> >  #define paddr_to_pfn(pa)    __paddr_to_pfn(pa)
> >  #define paddr_to_pdx(pa)    pfn_to_pdx(paddr_to_pfn(pa))
> > -#define vmap_to_mfn(va)     l1e_get_mfn(*virt_to_xen_l1e((unsigned
> > long)(va)))
> > +#define vmap_to_mfn(va)     xen_map_to_mfn((unsigned long)va)
> 
> You've lost parentheses around va.

Will fix.

Hongyan



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 11:54:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 11:54:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46393.82338 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmF6Y-0002cq-LG; Mon, 07 Dec 2020 11:54:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46393.82338; Mon, 07 Dec 2020 11:54:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmF6Y-0002cj-II; Mon, 07 Dec 2020 11:54:58 +0000
Received: by outflank-mailman (input) for mailman id 46393;
 Mon, 07 Dec 2020 11:54:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmF6W-0002cb-Qj
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 11:54:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 72161036-7c26-4c14-b7d0-c371f3eb4ec0;
 Mon, 07 Dec 2020 11:54:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 66BDAAC90;
 Mon,  7 Dec 2020 11:54:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 72161036-7c26-4c14-b7d0-c371f3eb4ec0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607342094; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vHIlWrDJ6kyj51TcNfinMHIsaXaGP/TQ3AKqTAQs1x0=;
	b=Wcg7ztqfRnN/HV2ufgIeRCrczSHq/S9JGYyL7hvDEBz5m6bJdCclv5oNkQjqegwjGeBvFj
	5nxOsXQFH0cp4kNeSWR9z0xXksD/dLhIpe4uRu8SHgjDKxj64UgjFOTgWTIe8dWmpO9+Jk
	TTf7gWPZbT8GiVXuHpDkygRFan2Nv5Q=
Subject: Re: [PATCH V3 07/23] xen/ioreq: Make x86's
 hvm_ioreq_(page/vcpu/server) structs common
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-8-git-send-email-olekstysh@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <36b5640e-8d37-ed1a-f675-59e1a5d74ff7@suse.com>
Date: Mon, 7 Dec 2020 12:54:56 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <1606732298-22107-8-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> The IOREQ is a common feature now and these structs will be used
> on Arm as is. Move them to xen/ioreq.h and remove "hvm" prefixes.
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Applicable parts
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 11:58:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 11:58:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46402.82350 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmF9T-0002nS-4F; Mon, 07 Dec 2020 11:57:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46402.82350; Mon, 07 Dec 2020 11:57:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmF9T-0002nL-0p; Mon, 07 Dec 2020 11:57:59 +0000
Received: by outflank-mailman (input) for mailman id 46402;
 Mon, 07 Dec 2020 11:57:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmF9S-0002nG-95
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 11:57:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6ac48130-bd60-4dbd-b78f-e68f0bd70734;
 Mon, 07 Dec 2020 11:57:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id ED84DAC9A;
 Mon,  7 Dec 2020 11:57:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6ac48130-bd60-4dbd-b78f-e68f0bd70734
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607342275; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XfLKhrz8fJ/V7bJ03ccgql8oqt88qgxVWroOMg1p8Uo=;
	b=X5psbxUb9ovR72AZKEm1xPicqqxBX+0kuCaRsQoRBimu5L9z8tiHc8ly3smiFjFP9tKY6R
	6QgW5JwtY5gBKezy7KM/dQKa6yurCkx0eWVt89MQtYvrPEg8gL19btRE5mqO3qwqmKQefl
	cylL4oEQerjMDwAkcEm3GDkzki8twu8=
Subject: Re: [PATCH v2] x86/vmap: handle superpages in vmap_to_mfn()
To: Hongyan Xia <hx242@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <4a69a1177f9496ad0e3ea77e9b1d5b802bf83b60.1606994506.git.hongyxia@amazon.com>
 <8a3c4749-4275-b632-b3fa-073447acd352@suse.com>
 <8ba50b6c69ca5c61da8e01faabd9eadc020a49b2.camel@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <eeb1bef1-fc57-2f73-735c-cd88e48d6d1e@suse.com>
Date: Mon, 7 Dec 2020 12:57:57 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <8ba50b6c69ca5c61da8e01faabd9eadc020a49b2.camel@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 07.12.2020 12:54, Hongyan Xia wrote:
> On Mon, 2020-12-07 at 11:11 +0100, Jan Beulich wrote:
>> On 03.12.2020 12:21, Hongyan Xia wrote:
>>> --- a/xen/arch/x86/mm.c
>>> +++ b/xen/arch/x86/mm.c
>>> @@ -5194,6 +5194,60 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long
>>> v)
>>>          }                                          \
>>>      } while ( false )
>>>  
>>> +/* Translate mapped Xen address to MFN. */
>>> +mfn_t xen_map_to_mfn(unsigned long va)
>>> +{
>>> +#define CHECK_MAPPED(cond_)     \
>>> +    if ( !(cond_) )             \
>>> +    {                           \
>>> +        ASSERT_UNREACHABLE();   \
>>> +        ret = INVALID_MFN;      \
>>> +        goto out;               \
>>> +    }                           \
>>
>> This should be coded such that use sites ...
>>
>>> +    bool locking = system_state > SYS_STATE_boot;
>>> +    unsigned int l2_offset = l2_table_offset(va);
>>> +    unsigned int l1_offset = l1_table_offset(va);
>>> +    const l3_pgentry_t *pl3e = virt_to_xen_l3e(va);
>>> +    const l2_pgentry_t *pl2e = NULL;
>>> +    const l1_pgentry_t *pl1e = NULL;
>>> +    struct page_info *l3page;
>>> +    mfn_t ret;
>>> +
>>> +    L3T_INIT(l3page);
>>> +    CHECK_MAPPED(pl3e)
>>> +    l3page = virt_to_page(pl3e);
>>> +    L3T_LOCK(l3page);
>>> +
>>> +    CHECK_MAPPED(l3e_get_flags(*pl3e) & _PAGE_PRESENT)
>>
>> ... will properly require a statement-ending semicolon. With
>> additionally the trailing underscore dropped from the macro's
>> parameter name
> 
> The immediate solution that came to mind is a do-while construct. Would
> you be happy with that?

Sure.

>>> --- a/xen/include/asm-x86/mm.h
>>> +++ b/xen/include/asm-x86/mm.h
>>> @@ -578,6 +578,7 @@ mfn_t alloc_xen_pagetable_new(void);
>>>  void free_xen_pagetable_new(mfn_t mfn);
>>>  
>>>  l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
>>> +mfn_t xen_map_to_mfn(unsigned long va);
>>
>> This is now a pretty proper companion of map_page_to_xen(), and
>> hence imo ought to be declared next to that one rather than here.
>> Ultimately Arm may also need to gain an implementation.
> 
> Since map_pages_to_xen() is in the common header, are we okay with
> having the declaration but not an implementation on the Arm side in
> this patch? Or do we also want to introduce the Arm implementation in
> this patch?

Just a declaration is fine imo. If a use in common code appears,
it'll still be noticeable at link time that Arm will need to
have an implementation added.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 12:00:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 12:00:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46408.82361 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmFBk-0003fN-LA; Mon, 07 Dec 2020 12:00:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46408.82361; Mon, 07 Dec 2020 12:00:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmFBk-0003fG-I2; Mon, 07 Dec 2020 12:00:20 +0000
Received: by outflank-mailman (input) for mailman id 46408;
 Mon, 07 Dec 2020 12:00:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DX/D=FL=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kmFBi-0003f8-U1
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 12:00:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 52b3c53e-946a-4105-bff2-176e0b4876eb;
 Mon, 07 Dec 2020 12:00:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 828BFAC9A;
 Mon,  7 Dec 2020 12:00:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 52b3c53e-946a-4105-bff2-176e0b4876eb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607342415; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=VqkJ4iRLVFbRqVN5Asdmg09twR7LTnmi9jHb0xoUD/4=;
	b=Oy5bpr9vMIDiWkcbzDSp6o9bf4LwGeR4PCXymIgjwAbraNtMXpT0/600O469EmDztQib+S
	nPqWrYB46ZZSbBSBat7QaYzL5VOGuZaxoINpCdHI3wavojFEgdmJSeibnm2gOQi+jDHPEk
	ZJv1jW0pqqVqjBPOdNf5gvDM2mdk2v8=
Subject: Re: GPF on 0xdead000000000100 in nvme_map_data - Linux 5.9.9
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Cc: Jason Andryuk <jandryuk@gmail.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Christoph Hellwig <hch@lst.de>,
 xen-devel <xen-devel@lists.xenproject.org>, Keith Busch <kbusch@kernel.org>,
 Jens Axboe <axboe@fb.com>, Sagi Grimberg <sagi@grimberg.me>,
 linux-nvme@lists.infradead.org
References: <20201129035639.GW2532@mail-itl>
 <20201130164010.GA23494@redsun51.ssa.fujisawa.hgst.com>
 <20201202000642.GJ201140@mail-itl> <20201204110847.GU201140@mail-itl>
 <20201204120803.GA20727@lst.de> <20201204122054.GV201140@mail-itl>
 <20201205082839.ts3ju6yta46cgwjn@Air-de-Roger>
 <CAKf6xpvdD-XJoRO91B+Lwc=0Sb6Luw2X8Y9sH_MQsAWhZmj+hw@mail.gmail.com>
 <293433c5-d23b-63e7-d607-9d24f06c46b4@suse.com>
 <20201207114805.GF1244@mail-itl>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <9bf64b27-51e8-a734-e15e-8da6d2eda736@suse.com>
Date: Mon, 7 Dec 2020 13:00:14 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201207114805.GF1244@mail-itl>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="8qFesHj97LXhvd5tc9EEAUmuBrgfM3SxA"


On 07.12.20 12:48, Marek Marczykowski-Górecki wrote:
> On Mon, Dec 07, 2020 at 11:55:01AM +0100, Jürgen Groß wrote:
>> Marek,
>>
>> On 06.12.20 17:47, Jason Andryuk wrote:
>>> On Sat, Dec 5, 2020 at 3:29 AM Roger Pau Monné <roger.pau@citrix.com> wrote:
>>>>
>>>> On Fri, Dec 04, 2020 at 01:20:54PM +0100, Marek Marczykowski-Górecki wrote:
>>>>> On Fri, Dec 04, 2020 at 01:08:03PM +0100, Christoph Hellwig wrote:
>>>>>> On Fri, Dec 04, 2020 at 12:08:47PM +0100, Marek Marczykowski-Górecki wrote:
>>>>>>> culprit:
>>>>>>>
>>>>>>> commit 9e2369c06c8a181478039258a4598c1ddd2cadfa
>>>>>>> Author: Roger Pau Monne <roger.pau@citrix.com>
>>>>>>> Date:   Tue Sep 1 10:33:26 2020 +0200
>>>>>>>
>>>>>>>       xen: add helpers to allocate unpopulated memory
>>>>>>>
>>>>>>> I'm adding relevant people and xen-devel to the thread.
>>>>>>> For completeness, here is the original crash message:
>>>>>>
>>>>>> That commit definitively adds a new ZONE_DEVICE user, so it does look
>>>>>> related.  But you are not running on Xen, are you?
>>>>>
>>>>> I am. It is Xen dom0.
>>>>
>>>> I'm afraid I'm on leave and won't be able to look into this until the
>>>> beginning of January. I would guess it's some kind of bad
>>>> interaction between blkback and NVMe drivers both using ZONE_DEVICE?
>>>>
>>>> Maybe the best is to revert this change and I will look into it when
>>>> I get back, unless someone is willing to debug this further.
>>>
>>> Looking at commit 9e2369c06c8a and xen-blkback put_free_pages(), they
>>> both use page->lru which is part of the anonymous union shared with
>>> *pgmap.  That matches Marek's suspicion that the ZONE_DEVICE memory is
>>> being used as ZONE_NORMAL.
>>>
>>> memmap_init_zone_device() says:
>>> * ZONE_DEVICE pages union ->lru with a ->pgmap back pointer
>>> * and zone_device_data.  It is a bug if a ZONE_DEVICE page is
>>> * ever freed or placed on a driver-private list.
>>
>> Second try, now even tested to work on a test system (without NVMe).
> 
> It doesn't work for me:
> 
> [  526.023340] xen-blkback: backend/vbd/1/51712: using 2 queues, protocol 1 (x86_64-abi) persistent grants
> [  526.030550] xen-blkback: backend/vbd/1/51728: using 2 queues, protocol 1 (x86_64-abi) persistent grants
> [  526.034810] BUG: kernel NULL pointer dereference, address: 0000000000000010

Oh, indeed. Silly bug. My test was with qdisk as backend :-(

3rd try...


Juergen

--------------DF7FA127B9BAC6B5AF04F56B
Content-Type: text/x-patch; charset=UTF-8;
 name="0001-xen-add-helpers-for-caching-grant-mapping-pages.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename*0="0001-xen-add-helpers-for-caching-grant-mapping-pages.patch"

From 4f6ad98ce5fd457fd12e6617b0bc2a8f82fbce4d Mon Sep 17 00:00:00 2001
From: Juergen Gross <jgross@suse.com>
Date: Mon, 7 Dec 2020 08:31:22 +0100
Subject: [PATCH 1/2] xen: add helpers for caching grant mapping pages

Instead of having similar helpers in multiple backend drivers use
common helpers for caching pages allocated via gnttab_alloc_pages().

Make use of those helpers in blkback and scsiback.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/block/xen-blkback/blkback.c | 89 ++++++-----------------------
 drivers/block/xen-blkback/common.h  |  4 +-
 drivers/block/xen-blkback/xenbus.c  |  6 +-
 drivers/xen/grant-table.c           | 72 +++++++++++++++++++++++
 drivers/xen/xen-scsiback.c          | 60 ++++---------------
 include/xen/grant_table.h           | 13 +++++
 6 files changed, 116 insertions(+), 128 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 501e9dacfff9..9ebf53903d7b 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -132,73 +132,12 @@ module_param(log_stats, int, 0644);
 
 #define BLKBACK_INVALID_HANDLE (~0)
 
-/* Number of free pages to remove on each call to gnttab_free_pages */
-#define NUM_BATCH_FREE_PAGES 10
-
 static inline bool persistent_gnt_timeout(struct persistent_gnt *persistent_gnt)
 {
 	return pgrant_timeout && (jiffies - persistent_gnt->last_used >=
 			HZ * pgrant_timeout);
 }
 
-static inline int get_free_page(struct xen_blkif_ring *ring, struct page **page)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&ring->free_pages_lock, flags);
-	if (list_empty(&ring->free_pages)) {
-		BUG_ON(ring->free_pages_num != 0);
-		spin_unlock_irqrestore(&ring->free_pages_lock, flags);
-		return gnttab_alloc_pages(1, page);
-	}
-	BUG_ON(ring->free_pages_num == 0);
-	page[0] = list_first_entry(&ring->free_pages, struct page, lru);
-	list_del(&page[0]->lru);
-	ring->free_pages_num--;
-	spin_unlock_irqrestore(&ring->free_pages_lock, flags);
-
-	return 0;
-}
-
-static inline void put_free_pages(struct xen_blkif_ring *ring, struct page **page,
-                                  int num)
-{
-	unsigned long flags;
-	int i;
-
-	spin_lock_irqsave(&ring->free_pages_lock, flags);
-	for (i = 0; i < num; i++)
-		list_add(&page[i]->lru, &ring->free_pages);
-	ring->free_pages_num += num;
-	spin_unlock_irqrestore(&ring->free_pages_lock, flags);
-}
-
-static inline void shrink_free_pagepool(struct xen_blkif_ring *ring, int num)
-{
-	/* Remove requested pages in batches of NUM_BATCH_FREE_PAGES */
-	struct page *page[NUM_BATCH_FREE_PAGES];
-	unsigned int num_pages = 0;
-	unsigned long flags;
-
-	spin_lock_irqsave(&ring->free_pages_lock, flags);
-	while (ring->free_pages_num > num) {
-		BUG_ON(list_empty(&ring->free_pages));
-		page[num_pages] = list_first_entry(&ring->free_pages,
-		                                   struct page, lru);
-		list_del(&page[num_pages]->lru);
-		ring->free_pages_num--;
-		if (++num_pages == NUM_BATCH_FREE_PAGES) {
-			spin_unlock_irqrestore(&ring->free_pages_lock, flags);
-			gnttab_free_pages(num_pages, page);
-			spin_lock_irqsave(&ring->free_pages_lock, flags);
-			num_pages = 0;
-		}
-	}
-	spin_unlock_irqrestore(&ring->free_pages_lock, flags);
-	if (num_pages != 0)
-		gnttab_free_pages(num_pages, page);
-}
-
 #define vaddr(page) ((unsigned long)pfn_to_kaddr(page_to_pfn(page)))
 
 static int do_block_io_op(struct xen_blkif_ring *ring, unsigned int *eoi_flags);
@@ -331,7 +270,8 @@ static void free_persistent_gnts(struct xen_blkif_ring *ring, struct rb_root *ro
 			unmap_data.count = segs_to_unmap;
 			BUG_ON(gnttab_unmap_refs_sync(&unmap_data));
 
-			put_free_pages(ring, pages, segs_to_unmap);
+			gnttab_page_cache_put(&ring->free_pages, pages,
+					      segs_to_unmap);
 			segs_to_unmap = 0;
 		}
 
@@ -371,7 +311,8 @@ void xen_blkbk_unmap_purged_grants(struct work_struct *work)
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
 			unmap_data.count = segs_to_unmap;
 			BUG_ON(gnttab_unmap_refs_sync(&unmap_data));
-			put_free_pages(ring, pages, segs_to_unmap);
+			gnttab_page_cache_put(&ring->free_pages, pages,
+					      segs_to_unmap);
 			segs_to_unmap = 0;
 		}
 		kfree(persistent_gnt);
@@ -379,7 +320,7 @@ void xen_blkbk_unmap_purged_grants(struct work_struct *work)
 	if (segs_to_unmap > 0) {
 		unmap_data.count = segs_to_unmap;
 		BUG_ON(gnttab_unmap_refs_sync(&unmap_data));
-		put_free_pages(ring, pages, segs_to_unmap);
+		gnttab_page_cache_put(&ring->free_pages, pages, segs_to_unmap);
 	}
 }
 
@@ -664,9 +605,10 @@ int xen_blkif_schedule(void *arg)
 
 		/* Shrink the free pages pool if it is too large. */
 		if (time_before(jiffies, blkif->buffer_squeeze_end))
-			shrink_free_pagepool(ring, 0);
+			gnttab_page_cache_shrink(&ring->free_pages, 0);
 		else
-			shrink_free_pagepool(ring, max_buffer_pages);
+			gnttab_page_cache_shrink(&ring->free_pages,
+						 max_buffer_pages);
 
 		if (log_stats && time_after(jiffies, ring->st_print))
 			print_stats(ring);
@@ -697,7 +639,7 @@ void xen_blkbk_free_caches(struct xen_blkif_ring *ring)
 	ring->persistent_gnt_c = 0;
 
 	/* Since we are shutting down remove all pages from the buffer */
-	shrink_free_pagepool(ring, 0 /* All */);
+	gnttab_page_cache_shrink(&ring->free_pages, 0 /* All */);
 }
 
 static unsigned int xen_blkbk_unmap_prepare(
@@ -736,7 +678,7 @@ static void xen_blkbk_unmap_and_respond_callback(int result, struct gntab_unmap_
 	   but is this the best way to deal with this? */
 	BUG_ON(result);
 
-	put_free_pages(ring, data->pages, data->count);
+	gnttab_page_cache_put(&ring->free_pages, data->pages, data->count);
 	make_response(ring, pending_req->id,
 		      pending_req->operation, pending_req->status);
 	free_req(ring, pending_req);
@@ -803,7 +745,8 @@ static void xen_blkbk_unmap(struct xen_blkif_ring *ring,
 		if (invcount) {
 			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
 			BUG_ON(ret);
-			put_free_pages(ring, unmap_pages, invcount);
+			gnttab_page_cache_put(&ring->free_pages, unmap_pages,
+					      invcount);
 		}
 		pages += batch;
 		num -= batch;
@@ -850,7 +793,8 @@ static int xen_blkbk_map(struct xen_blkif_ring *ring,
 			pages[i]->page = persistent_gnt->page;
 			pages[i]->persistent_gnt = persistent_gnt;
 		} else {
-			if (get_free_page(ring, &pages[i]->page))
+			if (gnttab_page_cache_get(&ring->free_pages,
+						  &pages[i]->page))
 				goto out_of_memory;
 			addr = vaddr(pages[i]->page);
 			pages_to_gnt[segs_to_map] = pages[i]->page;
@@ -883,7 +827,8 @@ static int xen_blkbk_map(struct xen_blkif_ring *ring,
 			BUG_ON(new_map_idx >= segs_to_map);
 			if (unlikely(map[new_map_idx].status != 0)) {
 				pr_debug("invalid buffer -- could not remap it\n");
-				put_free_pages(ring, &pages[seg_idx]->page, 1);
+				gnttab_page_cache_put(&ring->free_pages,
+						      &pages[seg_idx]->page, 1);
 				pages[seg_idx]->handle = BLKBACK_INVALID_HANDLE;
 				ret |= 1;
 				goto next;
@@ -944,7 +889,7 @@ static int xen_blkbk_map(struct xen_blkif_ring *ring,
 
 out_of_memory:
 	pr_alert("%s: out of memory\n", __func__);
-	put_free_pages(ring, pages_to_gnt, segs_to_map);
+	gnttab_page_cache_put(&ring->free_pages, pages_to_gnt, segs_to_map);
 	for (i = last_map; i < num; i++)
 		pages[i]->handle = BLKBACK_INVALID_HANDLE;
 	return -ENOMEM;
diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index c6ea5d38c509..a1b9df2c4ef1 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -288,9 +288,7 @@ struct xen_blkif_ring {
 	struct work_struct	persistent_purge_work;
 
 	/* Buffer of free pages to map grant refs. */
-	spinlock_t		free_pages_lock;
-	int			free_pages_num;
-	struct list_head	free_pages;
+	struct gnttab_page_cache free_pages;
 
 	struct work_struct	free_work;
 	/* Thread shutdown wait queue. */
diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index f5705569e2a7..76912c584a76 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -144,8 +144,7 @@ static int xen_blkif_alloc_rings(struct xen_blkif *blkif)
 		INIT_LIST_HEAD(&ring->pending_free);
 		INIT_LIST_HEAD(&ring->persistent_purge_list);
 		INIT_WORK(&ring->persistent_purge_work, xen_blkbk_unmap_purged_grants);
-		spin_lock_init(&ring->free_pages_lock);
-		INIT_LIST_HEAD(&ring->free_pages);
+		gnttab_page_cache_init(&ring->free_pages);
 
 		spin_lock_init(&ring->pending_free_lock);
 		init_waitqueue_head(&ring->pending_free_wq);
@@ -317,8 +316,7 @@ static int xen_blkif_disconnect(struct xen_blkif *blkif)
 		BUG_ON(atomic_read(&ring->persistent_gnt_in_use) != 0);
 		BUG_ON(!list_empty(&ring->persistent_purge_list));
 		BUG_ON(!RB_EMPTY_ROOT(&ring->persistent_gnts));
-		BUG_ON(!list_empty(&ring->free_pages));
-		BUG_ON(ring->free_pages_num != 0);
+		BUG_ON(ring->free_pages.num_pages != 0);
 		BUG_ON(ring->persistent_gnt_c != 0);
 		WARN_ON(i != (XEN_BLKIF_REQS_PER_PAGE * blkif->nr_ring_pages));
 		ring->active = false;
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 523dcdf39cc9..e2e42912f241 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -813,6 +813,78 @@ int gnttab_alloc_pages(int nr_pages, struct page **pages)
 }
 EXPORT_SYMBOL_GPL(gnttab_alloc_pages);
 
+void gnttab_page_cache_init(struct gnttab_page_cache *cache)
+{
+	spin_lock_init(&cache->lock);
+	INIT_LIST_HEAD(&cache->pages);
+	cache->num_pages = 0;
+}
+EXPORT_SYMBOL_GPL(gnttab_page_cache_init);
+
+int gnttab_page_cache_get(struct gnttab_page_cache *cache, struct page **page)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&cache->lock, flags);
+
+	if (list_empty(&cache->pages)) {
+		spin_unlock_irqrestore(&cache->lock, flags);
+		return gnttab_alloc_pages(1, page);
+	}
+
+	page[0] = list_first_entry(&cache->pages, struct page, lru);
+	list_del(&page[0]->lru);
+	cache->num_pages--;
+
+	spin_unlock_irqrestore(&cache->lock, flags);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(gnttab_page_cache_get);
+
+void gnttab_page_cache_put(struct gnttab_page_cache *cache, struct page **page,
+			   unsigned int num)
+{
+	unsigned long flags;
+	unsigned int i;
+
+	spin_lock_irqsave(&cache->lock, flags);
+
+	for (i = 0; i < num; i++)
+		list_add(&page[i]->lru, &cache->pages);
+	cache->num_pages += num;
+
+	spin_unlock_irqrestore(&cache->lock, flags);
+}
+EXPORT_SYMBOL_GPL(gnttab_page_cache_put);
+
+void gnttab_page_cache_shrink(struct gnttab_page_cache *cache, unsigned int num)
+{
+	struct page *page[10];
+	unsigned int i = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&cache->lock, flags);
+
+	while (cache->num_pages > num) {
+		page[i] = list_first_entry(&cache->pages, struct page, lru);
+		list_del(&page[i]->lru);
+		cache->num_pages--;
+		if (++i == ARRAY_SIZE(page)) {
+			spin_unlock_irqrestore(&cache->lock, flags);
+			gnttab_free_pages(i, page);
+			i = 0;
+			spin_lock_irqsave(&cache->lock, flags);
+		}
+	}
+
+	spin_unlock_irqrestore(&cache->lock, flags);
+
+	if (i != 0)
+		gnttab_free_pages(i, page);
+}
+EXPORT_SYMBOL_GPL(gnttab_page_cache_shrink);
+
 void gnttab_pages_clear_private(int nr_pages, struct page **pages)
 {
 	int i;
diff --git a/drivers/xen/xen-scsiback.c b/drivers/xen/xen-scsiback.c
index 4acc4e899600..862162dca33c 100644
--- a/drivers/xen/xen-scsiback.c
+++ b/drivers/xen/xen-scsiback.c
@@ -99,6 +99,8 @@ struct vscsibk_info {
 	struct list_head v2p_entry_lists;
 
 	wait_queue_head_t waiting_to_free;
+
+	struct gnttab_page_cache free_pages;
 };
 
 /* theoretical maximum of grants for one request */
@@ -188,10 +190,6 @@ module_param_named(max_buffer_pages, scsiback_max_buffer_pages, int, 0644);
 MODULE_PARM_DESC(max_buffer_pages,
 "Maximum number of free pages to keep in backend buffer");
 
-static DEFINE_SPINLOCK(free_pages_lock);
-static int free_pages_num;
-static LIST_HEAD(scsiback_free_pages);
-
 /* Global spinlock to protect scsiback TPG list */
 static DEFINE_MUTEX(scsiback_mutex);
 static LIST_HEAD(scsiback_list);
@@ -207,41 +205,6 @@ static void scsiback_put(struct vscsibk_info *info)
 		wake_up(&info->waiting_to_free);
 }
 
-static void put_free_pages(struct page **page, int num)
-{
-	unsigned long flags;
-	int i = free_pages_num + num, n = num;
-
-	if (num == 0)
-		return;
-	if (i > scsiback_max_buffer_pages) {
-		n = min(num, i - scsiback_max_buffer_pages);
-		gnttab_free_pages(n, page + num - n);
-		n = num - n;
-	}
-	spin_lock_irqsave(&free_pages_lock, flags);
-	for (i = 0; i < n; i++)
-		list_add(&page[i]->lru, &scsiback_free_pages);
-	free_pages_num += n;
-	spin_unlock_irqrestore(&free_pages_lock, flags);
-}
-
-static int get_free_page(struct page **page)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&free_pages_lock, flags);
-	if (list_empty(&scsiback_free_pages)) {
-		spin_unlock_irqrestore(&free_pages_lock, flags);
-		return gnttab_alloc_pages(1, page);
-	}
-	page[0] = list_first_entry(&scsiback_free_pages, struct page, lru);
-	list_del(&page[0]->lru);
-	free_pages_num--;
-	spin_unlock_irqrestore(&free_pages_lock, flags);
-	return 0;
-}
-
 static unsigned long vaddr_page(struct page *page)
 {
 	unsigned long pfn = page_to_pfn(page);
@@ -302,7 +265,8 @@ static void scsiback_fast_flush_area(struct vscsibk_pend *req)
 		BUG_ON(err);
 	}
 
-	put_free_pages(req->pages, req->n_grants);
+	gnttab_page_cache_put(&req->info->free_pages, req->pages,
+			      req->n_grants);
 	req->n_grants = 0;
 }
 
@@ -445,8 +409,8 @@ static int scsiback_gnttab_data_map_list(struct vscsibk_pend *pending_req,
 	struct vscsibk_info *info = pending_req->info;
 
 	for (i = 0; i < cnt; i++) {
-		if (get_free_page(pg + mapcount)) {
-			put_free_pages(pg, mapcount);
+		if (gnttab_page_cache_get(&info->free_pages, pg + mapcount)) {
+			gnttab_page_cache_put(&info->free_pages, pg, mapcount);
 			pr_err("no grant page\n");
 			return -ENOMEM;
 		}
@@ -796,6 +760,8 @@ static int scsiback_do_cmd_fn(struct vscsibk_info *info,
 		cond_resched();
 	}
 
+	gnttab_page_cache_shrink(&info->free_pages, scsiback_max_buffer_pages);
+
 	RING_FINAL_CHECK_FOR_REQUESTS(&info->ring, more_to_do);
 	return more_to_do;
 }
@@ -1233,6 +1199,8 @@ static int scsiback_remove(struct xenbus_device *dev)
 
 	scsiback_release_translation_entry(info);
 
+	gnttab_page_cache_shrink(&info->free_pages, 0);
+
 	dev_set_drvdata(&dev->dev, NULL);
 
 	return 0;
@@ -1263,6 +1231,7 @@ static int scsiback_probe(struct xenbus_device *dev,
 	info->irq = 0;
 	INIT_LIST_HEAD(&info->v2p_entry_lists);
 	spin_lock_init(&info->v2p_lock);
+	gnttab_page_cache_init(&info->free_pages);
 
 	err = xenbus_printf(XBT_NIL, dev->nodename, "feature-sg-grant", "%u",
 			    SG_ALL);
@@ -1879,13 +1848,6 @@ static int __init scsiback_init(void)
 
 static void __exit scsiback_exit(void)
 {
-	struct page *page;
-
-	while (free_pages_num) {
-		if (get_free_page(&page))
-			BUG();
-		gnttab_free_pages(1, &page);
-	}
 	target_unregister_template(&scsiback_ops);
 	xenbus_unregister_driver(&scsiback_driver);
 }
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 9bc5bc07d4d3..c6ef8ffc1a09 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -198,6 +198,19 @@ void gnttab_free_auto_xlat_frames(void);
 int gnttab_alloc_pages(int nr_pages, struct page **pages);
 void gnttab_free_pages(int nr_pages, struct page **pages);
 
+struct gnttab_page_cache {
+	spinlock_t		lock;
+	struct list_head	pages;
+	unsigned int		num_pages;
+};
+
+void gnttab_page_cache_init(struct gnttab_page_cache *cache);
+int gnttab_page_cache_get(struct gnttab_page_cache *cache, struct page **page);
+void gnttab_page_cache_put(struct gnttab_page_cache *cache, struct page **page,
+			   unsigned int num);
+void gnttab_page_cache_shrink(struct gnttab_page_cache *cache,
+			      unsigned int num);
+
 #ifdef CONFIG_XEN_GRANT_DMA_ALLOC
 struct gnttab_dma_alloc_args {
 	/* Device for which DMA memory will be/was allocated. */
-- 
2.26.2


--------------DF7FA127B9BAC6B5AF04F56B
Content-Type: text/x-patch; charset=UTF-8;
 name="0002-xen-don-t-use-page-lru-for-ZONE_DEVICE-memory.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="0002-xen-don-t-use-page-lru-for-ZONE_DEVICE-memory.patch"

From bf6d138b2be7e3195d952dd3269efecc097f1e61 Mon Sep 17 00:00:00 2001
From: Juergen Gross <jgross@suse.com>
Date: Mon, 7 Dec 2020 09:36:14 +0100
Subject: [PATCH 2/2] xen: don't use page->lru for ZONE_DEVICE memory

Commit 9e2369c06c8a18 ("xen: add helpers to allocate unpopulated
memory") introduced usage of ZONE_DEVICE memory for foreign memory
mappings.

Unfortunately this collides with using page->lru for Xen backend
private page caches.

Fix that by using page->zone_device_data instead.

Fixes: 9e2369c06c8a18 ("xen: add helpers to allocate unpopulated memory")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/grant-table.c       | 65 +++++++++++++++++++++++++++++----
 drivers/xen/unpopulated-alloc.c | 20 +++++-----
 include/xen/grant_table.h       |  4 ++
 3 files changed, 73 insertions(+), 16 deletions(-)

diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index e2e42912f241..696663a439fe 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -813,10 +813,63 @@ int gnttab_alloc_pages(int nr_pages, struct page **pages)
 }
 EXPORT_SYMBOL_GPL(gnttab_alloc_pages);
 
+#ifdef CONFIG_XEN_UNPOPULATED_ALLOC
+static inline void cache_init(struct gnttab_page_cache *cache)
+{
+	cache->pages = NULL;
+}
+
+static inline bool cache_empty(struct gnttab_page_cache *cache)
+{
+	return !cache->pages;
+}
+
+static inline struct page *cache_deq(struct gnttab_page_cache *cache)
+{
+	struct page *page;
+
+	page = cache->pages;
+	cache->pages = page->zone_device_data;
+
+	return page;
+}
+
+static inline void cache_enq(struct gnttab_page_cache *cache, struct page *page)
+{
+	page->zone_device_data = cache->pages;
+	cache->pages = page;
+}
+#else
+static inline void cache_init(struct gnttab_page_cache *cache)
+{
+	INIT_LIST_HEAD(&cache->pages);
+}
+
+static inline bool cache_empty(struct gnttab_page_cache *cache)
+{
+	return list_empty(&cache->pages);
+}
+
+static inline struct page *cache_deq(struct gnttab_page_cache *cache)
+{
+	struct page *page;
+
+	page = list_first_entry(&cache->pages, struct page, lru);
+	list_del(&page->lru);
+
+	return page;
+}
+
+static inline void cache_enq(struct gnttab_page_cache *cache, struct page *page)
+{
+	list_add(&page->lru, &cache->pages);
+}
+#endif
+
 void gnttab_page_cache_init(struct gnttab_page_cache *cache)
 {
 	spin_lock_init(&cache->lock);
-	INIT_LIST_HEAD(&cache->pages);
+	cache_init(cache);
 	cache->num_pages = 0;
 }
 EXPORT_SYMBOL_GPL(gnttab_page_cache_init);
@@ -827,13 +880,12 @@ int gnttab_page_cache_get(struct gnttab_page_cache *cache, struct page **page)
 
 	spin_lock_irqsave(&cache->lock, flags);
 
-	if (list_empty(&cache->pages)) {
+	if (cache_empty(cache)) {
 		spin_unlock_irqrestore(&cache->lock, flags);
 		return gnttab_alloc_pages(1, page);
 	}
 
-	page[0] = list_first_entry(&cache->pages, struct page, lru);
-	list_del(&page[0]->lru);
+	page[0] = cache_deq(cache);
 	cache->num_pages--;
 
 	spin_unlock_irqrestore(&cache->lock, flags);
@@ -851,7 +903,7 @@ void gnttab_page_cache_put(struct gnttab_page_cache *cache, struct page **page,
 	spin_lock_irqsave(&cache->lock, flags);
 
 	for (i = 0; i < num; i++)
-		list_add(&page[i]->lru, &cache->pages);
+		cache_enq(cache, page[i]);
 	cache->num_pages += num;
 
 	spin_unlock_irqrestore(&cache->lock, flags);
@@ -867,8 +919,7 @@ void gnttab_page_cache_shrink(struct gnttab_page_cache *cache, unsigned int num)
 	spin_lock_irqsave(&cache->lock, flags);
 
 	while (cache->num_pages > num) {
-		page[i] = list_first_entry(&cache->pages, struct page, lru);
-		list_del(&page[i]->lru);
+		page[i] = cache_deq(cache);
 		cache->num_pages--;
 		if (++i == ARRAY_SIZE(page)) {
 			spin_unlock_irqrestore(&cache->lock, flags);
diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-alloc.c
index 8c512ea550bb..7762c1bb23cb 100644
--- a/drivers/xen/unpopulated-alloc.c
+++ b/drivers/xen/unpopulated-alloc.c
@@ -12,7 +12,7 @@
 #include <xen/xen.h>
 
 static DEFINE_MUTEX(list_lock);
-static LIST_HEAD(page_list);
+static struct page *page_list;
 static unsigned int list_count;
 
 static int fill_list(unsigned int nr_pages)
@@ -84,7 +84,8 @@ static int fill_list(unsigned int nr_pages)
 		struct page *pg = virt_to_page(vaddr + PAGE_SIZE * i);
 
 		BUG_ON(!virt_addr_valid(vaddr + PAGE_SIZE * i));
-		list_add(&pg->lru, &page_list);
+		pg->zone_device_data = page_list;
+		page_list = pg;
 		list_count++;
 	}
 
@@ -118,12 +119,10 @@ int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
 	}
 
 	for (i = 0; i < nr_pages; i++) {
-		struct page *pg = list_first_entry_or_null(&page_list,
-							   struct page,
-							   lru);
+		struct page *pg = page_list;
 
 		BUG_ON(!pg);
-		list_del(&pg->lru);
+		page_list = pg->zone_device_data;
 		list_count--;
 		pages[i] = pg;
 
@@ -134,7 +133,8 @@ int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
 				unsigned int j;
 
 				for (j = 0; j <= i; j++) {
-					list_add(&pages[j]->lru, &page_list);
+					pages[j]->zone_device_data = page_list;
+					page_list = pages[j];
 					list_count++;
 				}
 				goto out;
@@ -160,7 +160,8 @@ void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages)
 
 	mutex_lock(&list_lock);
 	for (i = 0; i < nr_pages; i++) {
-		list_add(&pages[i]->lru, &page_list);
+		pages[i]->zone_device_data = page_list;
+		page_list = pages[i];
 		list_count++;
 	}
 	mutex_unlock(&list_lock);
@@ -189,7 +190,8 @@ static int __init init(void)
 			struct page *pg =
 				pfn_to_page(xen_extra_mem[i].start_pfn + j);
 
-			list_add(&pg->lru, &page_list);
+			pg->zone_device_data = page_list;
+			page_list = pg;
 			list_count++;
 		}
 	}
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index c6ef8ffc1a09..b9c937b3a149 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -200,7 +200,11 @@ void gnttab_free_pages(int nr_pages, struct page **pages);
 
 struct gnttab_page_cache {
	spinlock_t		lock;
+#ifdef CONFIG_XEN_UNPOPULATED_ALLOC
+	struct page		*pages;
+#else
 	struct list_head	pages;
+#endif
 	unsigned int		num_pages;
 };
 
-- 
2.26.2


--------------DF7FA127B9BAC6B5AF04F56B--

--czwcAqEXdbulx7YApJA2Gxdnqhaqj6WTq--

--8qFesHj97LXhvd5tc9EEAUmuBrgfM3SxA

--8qFesHj97LXhvd5tc9EEAUmuBrgfM3SxA--


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 12:04:34 2020
Subject: Re: [PATCH V3 08/23] xen/ioreq: Move x86's ioreq_server to struct
 domain
To: Oleksandr Tyshchenko <olekstysh@gmail.com>, Paul Durrant <paul@xen.org>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Roger Pau Monné <roger.pau@citrix.com>,
 Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-9-git-send-email-olekstysh@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5b5011b7-5b8f-79cd-d8dc-c276ba1f9e37@suse.com>
Date: Mon, 7 Dec 2020 13:04:32 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <1606732298-22107-9-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> The IOREQ is a common feature now and this struct will be used
> on Arm as is. Move it to common struct domain. This also
> significantly reduces the layering violation in the common code
> (*arch.hvm* usage).
> 
> We don't move ioreq_gfn since it is not used in the common code
> (the "legacy" mechanism is x86 specific).
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Applicable parts
Acked-by: Jan Beulich <jbeulich@suse.com>
yet with a question, but maybe more to Paul than to you:

> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
> @@ -63,8 +63,6 @@ struct hvm_pi_ops {
>      void (*vcpu_block)(struct vcpu *);
>  };
>  
> -#define MAX_NR_IOREQ_SERVERS 8
> -
>  struct hvm_domain {
>      /* Guest page range used for non-default ioreq servers */
>      struct {
> @@ -73,12 +71,6 @@ struct hvm_domain {
>          unsigned long legacy_mask; /* indexed by HVM param number */
>      } ioreq_gfn;
>  
> -    /* Lock protects all other values in the sub-struct and the default */
> -    struct {
> -        spinlock_t              lock;
> -        struct ioreq_server *server[MAX_NR_IOREQ_SERVERS];
> -    } ioreq_server;
> -
>      /* Cached CF8 for guest PCI config cycles */
>      uint32_t                pci_cf8;
>  
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -316,6 +316,8 @@ struct sched_unit {
>  
>  struct evtchn_port_ops;
>  
> +#define MAX_NR_IOREQ_SERVERS 8
> +
>  struct domain
>  {
>      domid_t          domain_id;
> @@ -523,6 +525,14 @@ struct domain
>      /* Argo interdomain communication support */
>      struct argo_domain *argo;
>  #endif
> +
> +#ifdef CONFIG_IOREQ_SERVER
> +    /* Lock protects all other values in the sub-struct and the default */
> +    struct {
> +        spinlock_t              lock;
> +        struct ioreq_server     *server[MAX_NR_IOREQ_SERVERS];
> +    } ioreq_server;
> +#endif

The comment gets merely moved, but what "default" does it talk about?
Is this a stale part which would better be dropped at this occasion?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 12:09:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>

Return-path: <xen-devel-bounces@lists.xenproject.org>
Delivery-date: Mon, 07 Dec 2020 12:09:07 +0000
Subject: Re: [PATCH V3 09/23] xen/dm: Make x86's DM feature common
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Julien Grall <julien.grall@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-10-git-send-email-olekstysh@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <00c3df9f-760d-bb3d-d1d6-7c7df7f0c17c@suse.com>
Date: Mon, 7 Dec 2020 13:08:58 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <1606732298-22107-10-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> As a lot of x86 code can be re-used on Arm later on, this patch
> splits device model support into common and arch-specific parts.
> 
> The common DM feature is supposed to be built with the IOREQ_SERVER
> option enabled (as well as the IOREQ feature), which is selected by
> x86's HVM config for now.
> 
> Also update the XSM code a bit to let the DM op be used on Arm.
> 
> This support is going to be used on Arm to be able to run a device
> emulator outside of the Xen hypervisor.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> ---
> Please note, this is a split/cleanup/hardening of Julien's PoC:
> "Add support for Guest IO forwarding to a device emulator"
> 
> Changes RFC -> V1:
>    - update XSM, related changes were pulled from:
>      [RFC PATCH V1 04/12] xen/arm: Introduce arch specific bits for IOREQ/DM features
> 
> Changes V1 -> V2:
>    - update the author of a patch
>    - update patch description
>    - introduce xen/dm.h and move definitions here
> 
> Changes V2 -> V3:
>    - no changes

And my concern regarding the common vs arch nesting also hasn't
changed.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 12:11:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Delivery-date: Mon, 07 Dec 2020 12:11:55 +0000
Subject: Re: [PATCH V3 10/23] xen/mm: Make x86's XENMEM_resource_ioreq_server
 handling common
From: Jan Beulich <jbeulich@suse.com>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Julien Grall <julien.grall@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-11-git-send-email-olekstysh@gmail.com>
 <4f9a68ad-c663-d7a1-9194-4ad28958b077@suse.com>
Message-ID: <39ee3665-48f2-334d-e7a0-2e1a17bccd23@suse.com>
Date: Mon, 7 Dec 2020 13:11:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <4f9a68ad-c663-d7a1-9194-4ad28958b077@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 07.12.2020 12:35, Jan Beulich wrote:
> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -4699,50 +4699,6 @@ int xenmem_add_to_physmap_one(
>>      return rc;
>>  }
>>  
>> -int arch_acquire_resource(struct domain *d, unsigned int type,
>> -                          unsigned int id, unsigned long frame,
>> -                          unsigned int nr_frames, xen_pfn_t mfn_list[])
>> -{
>> -    int rc;
>> -
>> -    switch ( type )
>> -    {
>> -#ifdef CONFIG_HVM
>> -    case XENMEM_resource_ioreq_server:
>> -    {
>> -        ioservid_t ioservid = id;
>> -        unsigned int i;
>> -
>> -        rc = -EINVAL;
>> -        if ( !is_hvm_domain(d) )
>> -            break;
>> -
>> -        if ( id != (unsigned int)ioservid )
>> -            break;
>> -
>> -        rc = 0;
>> -        for ( i = 0; i < nr_frames; i++ )
>> -        {
>> -            mfn_t mfn;
>> -
>> -            rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn);
>> -            if ( rc )
>> -                break;
>> -
>> -            mfn_list[i] = mfn_x(mfn);
>> -        }
>> -        break;
>> -    }
>> -#endif
>> -
>> -    default:
>> -        rc = -EOPNOTSUPP;
>> -        break;
>> -    }
>> -
>> -    return rc;
>> -}
> 
> Can't this be accompanied by removal of the xen/ioreq.h inclusion?
> (I'm only looking at patch 4 right now, but the renaming there made
> the soon to be unnecessary #include quite apparent.)

And then, now that I've looked at this patch as a whole,
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 12:12:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Delivery-date: Mon, 07 Dec 2020 12:12:53 +0000
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>,
	"'Oleksandr Tyshchenko'" <olekstysh@gmail.com>
Cc: "'Oleksandr Tyshchenko'" <oleksandr_tyshchenko@epam.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Wei Liu'" <wl@xen.org>,
	=?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	"'Julien Grall'" <julien.grall@arm.com>,
	<xen-devel@lists.xenproject.org>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com> <1606732298-22107-9-git-send-email-olekstysh@gmail.com> <5b5011b7-5b8f-79cd-d8dc-c276ba1f9e37@suse.com>
In-Reply-To: <5b5011b7-5b8f-79cd-d8dc-c276ba1f9e37@suse.com>
Subject: RE: [PATCH V3 08/23] xen/ioreq: Move x86's ioreq_server to struct domain
Date: Mon, 7 Dec 2020 12:12:47 -0000
Message-ID: <0d0601d6cc92$47392ff0$d5ab8fd0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQKk0D4Qme59XF0a0h96d36zIOxDhQJGmzfoAf5XvzyoLVwsAA==

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 07 December 2020 12:05
> To: Oleksandr Tyshchenko <olekstysh@gmail.com>; Paul Durrant <paul@xen.org>
> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Andrew Cooper <andrew.cooper3@citrix.com>;
> George Dunlap <george.dunlap@citrix.com>; Ian Jackson <iwj@xenproject.org>; Julien Grall
> <julien@xen.org>; Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>; Roger Pau Monné
> <roger.pau@citrix.com>; Julien Grall <julien.grall@arm.com>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH V3 08/23] xen/ioreq: Move x86's ioreq_server to struct domain
> 
> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
> > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> >
> > The IOREQ is a common feature now and this struct will be used
> > on Arm as is. Move it to common struct domain. This also
> > significantly reduces the layering violation in the common code
> > (*arch.hvm* usage).
> >
> > We don't move ioreq_gfn since it is not used in the common code
> > (the "legacy" mechanism is x86 specific).
> >
> > Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> Applicable parts
> Acked-by: Jan Beulich <jbeulich@suse.com>
> yet with a question, but maybe more to Paul than to you:
> 
> > --- a/xen/include/asm-x86/hvm/domain.h
> > +++ b/xen/include/asm-x86/hvm/domain.h
> > @@ -63,8 +63,6 @@ struct hvm_pi_ops {
> >      void (*vcpu_block)(struct vcpu *);
> >  };
> >
> > -#define MAX_NR_IOREQ_SERVERS 8
> > -
> >  struct hvm_domain {
> >      /* Guest page range used for non-default ioreq servers */
> >      struct {
> > @@ -73,12 +71,6 @@ struct hvm_domain {
> >          unsigned long legacy_mask; /* indexed by HVM param number */
> >      } ioreq_gfn;
> >
> > -    /* Lock protects all other values in the sub-struct and the default */
> > -    struct {
> > -        spinlock_t              lock;
> > -        struct ioreq_server *server[MAX_NR_IOREQ_SERVERS];
> > -    } ioreq_server;
> > -
> >      /* Cached CF8 for guest PCI config cycles */
> >      uint32_t                pci_cf8;
> >
> > --- a/xen/include/xen/sched.h
> > +++ b/xen/include/xen/sched.h
> > @@ -316,6 +316,8 @@ struct sched_unit {
> >
> >  struct evtchn_port_ops;
> >
> > +#define MAX_NR_IOREQ_SERVERS 8
> > +
> >  struct domain
> >  {
> >      domid_t          domain_id;
> > @@ -523,6 +525,14 @@ struct domain
> >      /* Argo interdomain communication support */
> >      struct argo_domain *argo;
> >  #endif
> > +
> > +#ifdef CONFIG_IOREQ_SERVER
> > +    /* Lock protects all other values in the sub-struct and the default */
> > +    struct {
> > +        spinlock_t              lock;
> > +        struct ioreq_server     *server[MAX_NR_IOREQ_SERVERS];
> > +    } ioreq_server;
> > +#endif
> 
> The comment gets merely moved, but what "default" does it talk about?
> Is this a stale part which would better be dropped at this occasion?
> 

Yes, I think that is a stale part of the comment from the days of the
default ioreq server. It can be dropped.

  Paul

> Jan



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 12:13:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Delivery-date: Mon, 07 Dec 2020 12:13:04 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Paul Durrant
	<paul@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 8/8] xen/arm: Add support for SMMUv3 driver
Thread-Topic: [PATCH v2 8/8] xen/arm: Add support for SMMUv3 driver
Thread-Index: AQHWxBX/ttp8YMNqT0KunkbXuSldmKnkBpwAgAeVnoA=
Date: Mon, 7 Dec 2020 12:12:16 +0000
Message-ID: <9C890E87-D438-4232-8647-8EC64FF32C42@arm.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <de2101687020d18172a2b153f8977a5116d0cd66.1606406359.git.rahul.singh@arm.com>
 <a67bb114-a4a9-651a-338b-123b350ac4b3@xen.org>
In-Reply-To: <a67bb114-a4a9-651a-338b-123b350ac4b3@xen.org>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-ID: <049A7E0A256E6348AC53EAC47447CAA6@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB4956
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT014.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	621f1235-d034-42a8-0b27-08d89aa95661
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Dec 2020 12:12:57.0113
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 1174b6be-4b3a-454c-9d9b-08d89aa96ec9
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT014.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5591

Hello Julien,

> On 2 Dec 2020, at 4:22 pm, Julien Grall <julien@xen.org> wrote:
> 
> Hi Rahul,
> 
> On 26/11/2020 17:02, Rahul Singh wrote:
>> Add support for ARM architected SMMUv3 implementation. It is based on
>> the Linux SMMUv3 driver.
>> Major differences with regard to Linux driver are as follows:
>> 1. Only Stage-2 translation is supported as compared to the Linux driver
>>    that supports both Stage-1 and Stage-2 translations.
>> 2. Use P2M  page table instead of creating one as SMMUv3 has the
>>    capability to share the page tables with the CPU.
>> 3. Tasklets are used in place of threaded IRQ's in Linux for event queue
>>    and priority queue IRQ handling.
> 
> On the previous version, we discussed that using tasklets is not a suitable replacement for threaded IRQs. What's the plan to address it?

As suggested by you in earlier discussions, I will try to use the timer to schedule the work and will share the findings.
> 
>> 4. Latest version of the Linux SMMUv3 code implements the commands queue
>>    access functions based on atomic operations implemented in Linux.
>>    Atomic functions used by the commands queue access functions are not
>>    implemented in XEN therefore we decided to port the earlier version
>>    of the code. Once the proper atomic operations will be available in
>>    XEN the driver can be updated.
>> 5. Driver is currently supported as Tech Preview.
>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>> ---
>>  MAINTAINERS                           |   6 +
>>  SUPPORT.md                            |   1 +
>>  xen/drivers/passthrough/Kconfig       |  10 +
>>  xen/drivers/passthrough/arm/Makefile  |   1 +
>>  xen/drivers/passthrough/arm/smmu-v3.c | 986 +++++++++++++++++++++++-----
>>  5 files changed, 814 insertions(+), 190 deletions(-)
>> diff --git a/MAINTAINERS b/MAINTAINERS
>> index dab38a6a14..1d63489eec 100644
>> --- a/MAINTAINERS
>> +++ b/MAINTAINERS
>> @@ -249,6 +249,12 @@ F:	xen/include/asm-arm/
>>  F:	xen/include/public/arch-arm/
>>  F:	xen/include/public/arch-arm.h
>>  +ARM SMMUv3
>> +M:	Bertrand Marquis <bertrand.marquis@arm.com>
>> +M:	Rahul Singh <rahul.singh@arm.com>
>> +S:	Supported
>> +F:	xen/drivers/passthrough/arm/smmu-v3.c
>> +
>>  Change Log
>>  M:	Paul Durrant <paul@xen.org>
>>  R:	Community Manager <community.manager@xenproject.org>
>> diff --git a/SUPPORT.md b/SUPPORT.md
>> index ab02aca5f4..e402c7202d 100644
>> --- a/SUPPORT.md
>> +++ b/SUPPORT.md
>> @@ -68,6 +68,7 @@ For the Cortex A57 r0p0 - r1p1, see Errata 832075.
>>      Status, ARM SMMUv1: Supported, not security supported
>>      Status, ARM SMMUv2: Supported, not security supported
>>      Status, Renesas IPMMU-VMSA: Supported, not security supported
>> +    Status, ARM SMMUv3: Tech Preview
> 
> Please move this right after "ARM SMMUv2”.

Ok.

> 
>>    ### ARM/GICv3 ITS
>>  diff --git a/xen/drivers/passthrough/Kconfig b/xen/drivers/passthrough/Kconfig
>> index 0036007ec4..5b71c59f47 100644
>> --- a/xen/drivers/passthrough/Kconfig
>> +++ b/xen/drivers/passthrough/Kconfig
>> @@ -13,6 +13,16 @@ config ARM_SMMU
>>  	  Say Y here if your SoC includes an IOMMU device implementing the
>>  	  ARM SMMU architecture.
>>  +config ARM_SMMU_V3
>> +	bool "ARM Ltd. System MMU Version 3 (SMMUv3) Support" if EXPERT
>> +	depends on ARM_64
>> +	---help---
>> +	 Support for implementations of the ARM System MMU architecture
>> +	 version 3.
>> +
>> +	 Say Y here if your system includes an IOMMU device implementing
>> +	 the ARM SMMUv3 architecture.
>> +
>>  config IPMMU_VMSA
>>  	bool "Renesas IPMMU-VMSA found in R-Car Gen3 SoCs"
>>  	depends on ARM_64
>> diff --git a/xen/drivers/passthrough/arm/Makefile b/xen/drivers/passthrough/arm/Makefile
>> index fcd918ea3e..c5fb3b58a5 100644
>> --- a/xen/drivers/passthrough/arm/Makefile
>> +++ b/xen/drivers/passthrough/arm/Makefile
>> @@ -1,3 +1,4 @@
>>  obj-y += iommu.o iommu_helpers.o iommu_fwspec.o
>>  obj-$(CONFIG_ARM_SMMU) += smmu.o
>>  obj-$(CONFIG_IPMMU_VMSA) += ipmmu-vmsa.o
>> +obj-$(CONFIG_ARM_SMMU_V3) += smmu-v3.o
>> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
>> index 55d1cba194..8f2337e7f2 100644
>> --- a/xen/drivers/passthrough/arm/smmu-v3.c
>> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
>> @@ -2,36 +2,280 @@
>>  /*
>>   * IOMMU API for ARM architected SMMUv3 implementations.
>>   *
>> - * Copyright (C) 2015 ARM Limited
>> + * Based on Linux's SMMUv3 driver:
>> + *    drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
>> + *    commit: 951cbbc386ff01b50da4f46387e994e81d9ab431
>> + * and Xen's SMMU driver:
>> + *    xen/drivers/passthrough/arm/smmu.c
> 
> I would suggest to list the major differences here as well.

Ok.
> 
>>   *
>> - * Author: Will Deacon <will.deacon@arm.com>
>> + * Copyright (C) 2015 ARM Limited Will Deacon <will.deacon@arm.com>
> 
> Why did you merge the Author and copyright line?

I will fix this in the next version.

>>   *
>> - * This driver is powered by bad coffee and bombay mix.
>> + * Copyright (C) 2020 Arm Ltd.
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>> + *
>> + */
>> +
>> +#include <xen/acpi.h>
>> +#include <xen/config.h>
>> +#include <xen/delay.h>
>> +#include <xen/errno.h>
>> +#include <xen/err.h>
>> +#include <xen/irq.h>
>> +#include <xen/lib.h>
>> +#include <xen/list.h>
>> +#include <xen/mm.h>
>> +#include <xen/rbtree.h>
>> +#include <xen/sched.h>
>> +#include <xen/sizes.h>
>> +#include <xen/vmap.h>
>> +#include <asm/atomic.h>
>> +#include <asm/device.h>
>> +#include <asm/io.h>
>> +#include <asm/platform.h>
>> +#include <asm/iommu_fwspec.h>
> 
> All the headers seem to be alphabetically ordered but this one.

Ack.
> 
>> +
>> +/* Linux compatibility functions. */
> 
> Some of the helpers here seem to be similar to the SMMU driver. Can we have an header that can be shared between the two?

As we agreed, I will make the driver XEN compatible and remove the compatibility functions from here. If there is any helper that is common, I will create a common header file.
> 
>> +typedef paddr_t dma_addr_t;
>> +typedef unsigned int gfp_t;
>> +
>> +#define platform_device device
>> +
>> +#define GFP_KERNEL 0
>> +
>> +/* Alias to Xen device tree helpers */
>> +#define device_node dt_device_node
>> +#define of_phandle_args dt_phandle_args
>> +#define of_device_id dt_device_match
>> +#define of_match_node dt_match_node
>> +#define of_property_read_u32(np, pname, out) (!dt_property_read_u32(np, pname, out))
>> +#define of_property_read_bool dt_property_read_bool
>> +#define of_parse_phandle_with_args dt_parse_phandle_with_args
>> +
>> +/* Alias to Xen lock functions */
>> +#define mutex spinlock
>> +#define mutex_init spin_lock_init
>> +#define mutex_lock spin_lock
>> +#define mutex_unlock spin_unlock
> 
> Hmm... mutex are not spinlock. Can you explain why this is fine to switch to spinlock?

Yes, mutexes are not spinlocks. As mutex is not implemented in XEN, I thought of using spinlock in place of mutex, as this is the only locking mechanism available in XEN.
Let me know if there is another blocking lock available in XEN; I will check if we can use that.

> 
>> +
>> +/* Alias to Xen time functions */
>> +#define ktime_t s_time_t
>> +#define ktime_get()             (NOW())
>> +#define ktime_add_us(t,i)       (t + MICROSECS(i))
>> +#define ktime_compare(t,i)      (t > (i))
>> +
>> +/* Alias to Xen allocation helpers */
>> +#define kzalloc(size, flags)    _xzalloc(size, sizeof(void *))
>> +#define kfree xfree
>> +#define devm_kzalloc(dev, size, flags)  _xzalloc(size, sizeof(void *))
>> +
>> +/* Device logger functions */
>> +#define dev_name(dev) dt_node_full_name(dev->of_node)
>> +#define dev_dbg(dev, fmt, ...)      \
>> +    printk(XENLOG_DEBUG "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
>> +#define dev_notice(dev, fmt, ...)   \
>> +    printk(XENLOG_INFO "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
>> +#define dev_warn(dev, fmt, ...)     \
>> +    printk(XENLOG_WARNING "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
>> +#define dev_err(dev, fmt, ...)      \
>> +    printk(XENLOG_ERR "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
>> +#define dev_info(dev, fmt, ...)     \
>> +    printk(XENLOG_INFO "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
>> +#define dev_err_ratelimited(dev, fmt, ...)      \
>> +    printk(XENLOG_ERR "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
>> +
>> +/*
>> + * Periodically poll an address and wait between reads in us until a
>> + * condition is met or a timeout occurs.
>> + */
>> +#define readx_poll_timeout(op, addr, val, cond, sleep_us, timeout_us) \
>> +({ \
>> +     s_time_t deadline = NOW() + MICROSECS(timeout_us); \
>> +     for (;;) { \
>> +        (val) = op(addr); \
>> +        if (cond) \
>> +            break; \
>> +        if (NOW() > deadline) { \
>> +            (val) = op(addr); \
>> +            break; \
>> +        } \
>> +        udelay(sleep_us); \
>> +     } \
>> +     (cond) ? 0 : -ETIMEDOUT; \
>> +})
>> +
>> +#define readl_relaxed_poll_timeout(addr, val, cond, delay_us, timeout_us) \
>> +    readx_poll_timeout(readl_relaxed, addr, val, cond, delay_us, timeout_us)
>> +
>> +#define FIELD_PREP(_mask, _val)         \
>> +    (((typeof(_mask))(_val) << (__builtin_ffsll(_mask) - 1)) & (_mask))
>> +
>> +#define FIELD_GET(_mask, _reg)          \
>> +    (typeof(_mask))(((_reg) & (_mask)) >> (__builtin_ffsll(_mask) - 1))
>> +
>> +#define WRITE_ONCE(x, val)                 \
>> +do {                                       \
>> +    *(volatile typeof(x) *)&(x) = (val);   \
>> +} while (0)
> 
> Please implement it with write_atomic() or ACCESS_ONCE().

OK.
> 
IFhlbjogU3R1YiBvdXQgRE1BIGRvbWFpbiByZWxhdGVkIGZ1bmN0aW9ucyAqLw0KPj4gKyNkZWZp
bmUgaW9tbXVfZ2V0X2RtYV9jb29raWUoZG9tKSAwDQo+PiArI2RlZmluZSBpb21tdV9wdXRfZG1h
X2Nvb2tpZShkb20pDQo+PiArDQo+PiArLyoNCj4+ICsgKiBIZWxwZXJzIGZvciBETUEgYWxsb2Nh
dGlvbi4gSnVzdCB0aGUgZnVuY3Rpb24gbmFtZSBpcyByZXVzZWQgZm9yDQo+PiArICogcG9ydGlu
ZyBjb2RlLCB0aGVzZSBhbGxvY2F0aW9uIGFyZSBub3QgbWFuYWdlZCBhbGxvY2F0aW9ucw0KPj4g
ICAqLw0KPj4gK3N0YXRpYyB2b2lkICpkbWFtX2FsbG9jX2NvaGVyZW50KHN0cnVjdCBkZXZpY2Ug
KmRldiwgc2l6ZV90IHNpemUsDQo+PiArICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
cGFkZHJfdCAqZG1hX2hhbmRsZSwgZ2ZwX3QgZ2ZwKQ0KPj4gK3sNCj4+ICsgICAgdm9pZCAqdmFk
ZHI7DQo+PiArICAgIHVuc2lnbmVkIGxvbmcgYWxpZ25tZW50ID0gc2l6ZTsNCj4+ICsNCj4+ICsg
ICAgLyoNCj4+ICsgICAgICogX3h6YWxsb2MgcmVxdWlyZXMgdGhhdCB0aGUgKGFsaWduICYgKGFs
aWduIC0xKSkgPSAwLiBNb3N0IG9mIHRoZQ0KPj4gKyAgICAgKiBhbGxvY2F0aW9ucyBpbiBTTU1V
IGNvZGUgc2hvdWxkIHNlbmQgdGhlIHJpZ2h0IHZhbHVlIGZvciBzaXplLiBJbg0KPj4gKyAgICAg
KiBjYXNlIHRoaXMgaXMgbm90IHRydWUgcHJpbnQgYSB3YXJuaW5nIGFuZCBhbGlnbiB0byB0aGUg
c2l6ZSBvZiBhDQo+PiArICAgICAqICh2b2lkICopDQo+PiArICAgICAqLw0KPj4gKyAgICBpZiAo
IHNpemUgJiAoc2l6ZSAtIDEpICkNCj4gDQo+IFdlIHNob3VsZCB1c2UgdGhlIHNhbWUgY29kaW5n
IHN0eWxlIHdpdGhpbiB0aGUgZmlsZS4gQXMgdGhlIGZpbGUgaXMgaW1wb3J0ZWQgZnJvbSBMaW51
eCwgbmV3IGNvZGUgc2hvdWxkIGZvbGxvdyBMaW51eCBjb2Rpbmcgc3R5bGUuDQoNCk9rLg0KPiAN
Cj4+ICsgICAgew0KPj4gKyAgICAgICAgcHJpbnRrKFhFTkxPR19XQVJOSU5HICJTTU1VdjM6IEZp
eGluZyBhbGlnbm1lbnQgZm9yIHRoZSBETUEgYnVmZmVyXG4iKTsNCj4+ICsgICAgICAgIGFsaWdu
bWVudCA9IHNpemVvZih2b2lkICopOw0KPj4gKyAgICB9DQo+PiArDQo+PiArICAgIHZhZGRyID0g
X3h6YWxsb2Moc2l6ZSwgYWxpZ25tZW50KTsNCj4+ICsgICAgaWYgKCAhdmFkZHIgKQ0KPj4gKyAg
ICB7DQo+PiArICAgICAgICBwcmludGsoWEVOTE9HX0VSUiAiU01NVXYzOiBETUEgYWxsb2NhdGlv
biBmYWlsZWRcbiIpOw0KPj4gKyAgICAgICAgcmV0dXJuIE5VTEw7DQo+PiArICAgIH0NCj4+ICsN
Cj4+ICsgICAgKmRtYV9oYW5kbGUgPSB2aXJ0X3RvX21hZGRyKHZhZGRyKTsNCj4+ICsNCj4+ICsg
ICAgcmV0dXJuIHZhZGRyOw0KPj4gK30NCj4+ICsNCj4+ICsvKiBYZW46IFR5cGUgZGVmaW5pdGlv
>> +
>> +/* Xen: Type definitions for iommu_domain */
>> +#define IOMMU_DOMAIN_UNMANAGED 0
>> +#define IOMMU_DOMAIN_DMA 1
>> +#define IOMMU_DOMAIN_IDENTITY 2
>> +
>> +/* Xen specific code. */
>> +struct iommu_domain {
>> +    /* Runtime SMMU configuration for this iommu_domain */
>> +    atomic_t ref;
>> +    /*
>> +     * Used to link iommu_domain contexts for a same domain.
>> +     * There is at least one per-SMMU to used by the domain.
>> +     */
>> +    struct list_head    list;
>> +};
>> +
>> +/* Describes information required for a Xen domain */
>> +struct arm_smmu_xen_domain {
>> +    spinlock_t      lock;
>> +
>> +    /* List of iommu domains associated to this domain */
>> +    struct list_head    contexts;
>> +};
>> +
>> +/*
>> + * Information about each device stored in dev->archdata.iommu
>> + * The dev->archdata.iommu stores the iommu_domain (runtime configuration of
>> + * the SMMU).
>> + */
>> +struct arm_smmu_xen_device {
>> +    struct iommu_domain *domain;
>> +};
>> +
>> +/* Keep a list of devices associated with this driver */
>> +static DEFINE_SPINLOCK(arm_smmu_devices_lock);
>> +static LIST_HEAD(arm_smmu_devices);
>> +
>> +
>> +static inline void *dev_iommu_priv_get(struct device *dev)
>> +{
>> +    struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
>> +
>> +    return fwspec && fwspec->iommu_priv ? fwspec->iommu_priv : NULL;
>> +}
>> +
>> +static inline void dev_iommu_priv_set(struct device *dev, void *priv)
>> +{
>> +    struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
>> +
>> +    fwspec->iommu_priv = priv;
>> +}
>> +
>> +int dt_property_match_string(const struct dt_device_node *np,
>> +                             const char *propname, const char *string)
> 
> I think this should be implemented in device_tree.c

Ok, I will move this to the device_tree.c file.
> 
>> +{
>> +    const struct dt_property *dtprop = dt_find_property(np, propname, NULL);
>> +    size_t l;
>> +    int i;
>> +    const char *p, *end;
>> +
>> +    if ( !dtprop )
>> +        return -EINVAL;
>> +
>> +    if ( !dtprop->value )
>> +        return -ENODATA;
>> +
>> +    p = dtprop->value;
>> +    end = p + dtprop->length;
>> +
>> +    for ( i = 0; p < end; i++, p += l )
>> +    {
>> +        l = strnlen(p, end - p) + 1;
>> +
>> +        if ( p + l > end )
>> +            return -EILSEQ;
>> +
>> +        if ( strcmp(string, p) == 0 )
>> +            return i; /* Found it; return index */
>> +    }
>> +
>> +    return -ENODATA;
>> +}
>> +
>> +static int platform_get_irq_byname_optional(struct device *dev,
>> +                                              const char *name)
>> +{
>> +    int index, ret;
>> +    struct dt_device_node *np  = dev_to_dt(dev);
>> +
>> +    if ( unlikely(!name) )
>> +        return -EINVAL;
>> +
>> +    index = dt_property_match_string(np, "interrupt-names", name);
>> +    if ( index < 0 )
>> +    {
>> +        dev_info(dev, "IRQ %s not found\n", name);
>> +        return index;
>> +    }
>>  -#include <linux/acpi.h>
>> -#include <linux/acpi_iort.h>
>> -#include <linux/bitfield.h>
>> -#include <linux/bitops.h>
>> -#include <linux/crash_dump.h>
>> -#include <linux/delay.h>
>> -#include <linux/dma-iommu.h>
>> -#include <linux/err.h>
>> -#include <linux/interrupt.h>
>> -#include <linux/io-pgtable.h>
>> -#include <linux/iommu.h>
>> -#include <linux/iopoll.h>
>> -#include <linux/module.h>
>> -#include <linux/msi.h>
>> -#include <linux/of.h>
>> -#include <linux/of_address.h>
>> -#include <linux/of_iommu.h>
>> -#include <linux/of_platform.h>
>> -#include <linux/pci.h>
>> -#include <linux/pci-ats.h>
>> -#include <linux/platform_device.h>
>> -
>> -#include <linux/amba/bus.h>
>> +    ret = platform_get_irq(np, index);
>> +    if ( ret < 0 )
>> +    {
>> +        dev_err(dev, "failed to get irq index %d\n", index);
>> +        return -ENODEV;
>> +    }
>> +
>> +    return ret;
>> +}
>> +
>> +/* Start of Linux SMMUv3 code */
>>    /* MMIO registers */
>>  #define ARM_SMMU_IDR0			0x0
>> @@ -507,6 +751,7 @@ struct arm_smmu_s2_cfg {
>>  	u16				vmid;
>>  	u64				vttbr;
>>  	u64				vtcr;
>> +	struct domain		*domain;
>>  };
>>    struct arm_smmu_strtab_cfg {
>> @@ -567,8 +812,13 @@ struct arm_smmu_device {
>>   	struct arm_smmu_strtab_cfg	strtab_cfg;
>> -	/* IOMMU core code handle */
>> -	struct iommu_device		iommu;
>> +	/* Need to keep a list of SMMU devices */
>> +	struct list_head		devices;
>> +
>> +	/* Tasklets for handling evts/faults and pci page request IRQs*/
>> +	struct tasklet		evtq_irq_tasklet;
>> +	struct tasklet		priq_irq_tasklet;
>> +	struct tasklet		combined_irq_tasklet;
>>  };
>>    /* SMMU private data for each master */
>> @@ -1110,7 +1360,7 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
>>  }
>>    /* IRQ and event handlers */
>> -static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
>> +static void arm_smmu_evtq_thread(void *dev)
>>  {
>>  	int i;
>>  	struct arm_smmu_device *smmu = dev;
>> @@ -1140,7 +1390,6 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
>>  	/* Sync our overflow flag, as we believe we're up to speed */
>>  	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
>>  		    Q_IDX(llq, llq->cons);
>> -	return IRQ_HANDLED;
>>  }
>>    static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
>> @@ -1181,7 +1430,7 @@ static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
>>  	}
>>  }
>> -static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
>> +static void arm_smmu_priq_thread(void *dev)
>>  {
>>  	struct arm_smmu_device *smmu = dev;
>>  	struct arm_smmu_queue *q = &smmu->priq.q;
>> @@ -1200,12 +1449,12 @@ static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
>>  	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
>>  		      Q_IDX(llq, llq->cons);
>>  	queue_sync_cons_out(q);
>> -	return IRQ_HANDLED;
>>  }
>>    static int arm_smmu_device_disable(struct arm_smmu_device *smmu);
>>  -static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
>> +static void arm_smmu_gerror_handler(int irq, void *dev,
>> +				struct cpu_user_regs *regs)
>>  {
>>  	u32 gerror, gerrorn, active;
>>  	struct arm_smmu_device *smmu = dev;
>> @@ -1215,7 +1464,7 @@ static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
>>    	active = gerror ^ gerrorn;
>>  	if (!(active & GERROR_ERR_MASK))
>> -		return IRQ_NONE; /* No errors pending */
>> +		return; /* No errors pending */
>>    	dev_warn(smmu->dev,
>>  		 "unexpected global error reported (0x%08x), this could be serious\n",
>> @@ -1248,26 +1497,42 @@ static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
>>  		arm_smmu_cmdq_skip_err(smmu);
>>    	writel(gerror, smmu->base + ARM_SMMU_GERRORN);
>> -	return IRQ_HANDLED;
>>  }
>>  -static irqreturn_t arm_smmu_combined_irq_thread(int irq, void *dev)
>> +static void arm_smmu_combined_irq_handler(int irq, void *dev,
>> +				struct cpu_user_regs *regs)
>> +{
>> +	struct arm_smmu_device *smmu = (struct arm_smmu_device *)dev;
> 
> The cast is not necessary.

Ok.
> 
>> +
>> +	arm_smmu_gerror_handler(irq, dev, regs);
>> +
>> +	tasklet_schedule(&(smmu->combined_irq_tasklet));
>> +}
>> +
>> +static void arm_smmu_combined_irq_thread(void *dev)
>>  {
>>  	struct arm_smmu_device *smmu = dev;
>> -	arm_smmu_evtq_thread(irq, dev);
>> +	arm_smmu_evtq_thread(dev);
>>  	if (smmu->features & ARM_SMMU_FEAT_PRI)
>> -		arm_smmu_priq_thread(irq, dev);
>> -
>> -	return IRQ_HANDLED;
>> +		arm_smmu_priq_thread(dev);
>>  }
>>  -static irqreturn_t arm_smmu_combined_irq_handler(int irq, void *dev)
>> +static void arm_smmu_evtq_irq_tasklet(int irq, void *dev,
>> +				struct cpu_user_regs *regs)
>>  {
>> -	arm_smmu_gerror_handler(irq, dev);
>> -	return IRQ_WAKE_THREAD;
>> +	struct arm_smmu_device *smmu = (struct arm_smmu_device *)dev;
> 
> Ditto.
> 
>> +
>> +	tasklet_schedule(&(smmu->evtq_irq_tasklet));
>>  }
>>  +static void arm_smmu_priq_irq_tasklet(int irq, void *dev,
>> +				struct cpu_user_regs *regs)
>> +{
>> +	struct arm_smmu_device *smmu = (struct arm_smmu_device *)dev;
> 
> Ditto.
> 
>> +
>> +	tasklet_schedule(&(smmu->priq_irq_tasklet));
>> +}
>>    /* IO_PGTABLE API */
>>  static void arm_smmu_tlb_inv_context(void *cookie)
>> @@ -1354,27 +1619,69 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
>>  }
>>    static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
>> -				       struct arm_smmu_master *master,
>> -				       struct io_pgtable_cfg *pgtbl_cfg)
>> +				       struct arm_smmu_master *master)
>>  {
>>  	int vmid;
>> +	u64 reg;
> 
> I think uint32_t is sufficient here.

Ok.
> 
>>  	struct arm_smmu_device *smmu = smmu_domain->smmu;
>>  	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
>>  +	/* VTCR */
>> +	reg = VTCR_RES1 | VTCR_SH0_IS | VTCR_IRGN0_WBWA | VTCR_ORGN0_WBWA;
> 
> VTCR_RES1 will set bit 31 to 1. However, from the spec it looks like the equivalent bit in the entry (see 5.2 in ARM IHI 0070C.a) will be RES0.

Yes, I will fix this.

> 
>> +
>> +	switch (PAGE_SIZE) {
>> +	case SZ_4K:
>> +		reg |= VTCR_TG0_4K;
>> +		break;
>> +	case SZ_16K:
>> +		reg |= VTCR_TG0_16K;
>> +		break;
>> +	case SZ_64K:
>> +		reg |= VTCR_TG0_4K;
>> +		break;
>> +	}
> 
> I would just handle 4K here and add a BUILD_BUG_ON(PAGE_SIZE != SZ_4K).

Ok.
> 
>> + > +	switch (smmu->oas) {
> 
> AFAICT smmu->oas and ...
> 
>> +	case 32:
>> +		reg |= VTCR_PS(_AC(0x0,ULL));
>> +		break;
>> +	case 36:
>> +		reg |= VTCR_PS(_AC(0x1,ULL));
>> +		break;
>> +	case 40:
>> +		reg |= VTCR_PS(_AC(0x2,ULL));
>> +		break;
>> +	case 42:
>> +		reg |= VTCR_PS(_AC(0x3,ULL));
>> +		break;
>> +	case 44:
>> +		reg |= VTCR_PS(_AC(0x4,ULL));
>> +		break;
>> +		case 48:
>> +		reg |= VTCR_PS(_AC(0x5,ULL));
>> +		break;
>> +	case 52:
>> +		reg |= VTCR_PS(_AC(0x6,ULL));
>> +		break;
>> +	}
>> +
>> +	reg |= VTCR_T0SZ(64ULL - smmu->ias);
> 
> ... are directly taken from the SMMU configuration. However, as we share the P2M, we need to make sure the value match what the CPU is using.

Ok, let me double check this again. When I tested the SMMUv3 driver on the N1SDP board, the values were observed to be the same.
> 
> For the IAS, you will want to use p2m_ipa_bits and for the output, we will want to cap to PADDR_BITS.
DQo+IA0KPj4gKwlyZWcgfD0gVlRDUl9TTDAoMHgyKTsNCj4gDQo+IFNpbWlsYXIgdG8gYWJvdmUs
IHRoZSBzdGFydGluZyBsZXZlbCB3aWxsIGRlcGVuZCBvbiBob3cgdGhlIFAyTSB3YXMgY29uZmln
dXJlZC4NCj4gDQo+PiArCXJlZyB8PSBWVENSX1ZTOw0KPiANCj4gQUZBSUNULCB0aGUgYml0IDE3
OSAoYml0IDE5IGluIHRoZSB3b3JkKSBpcyBpbmRpY2F0aW5nIHdoZXRoZXIgQUFyY2gzMiBvciBB
QXJjaDY0IHRyYW5zbGF0aW9uIHRhYmxlIGlzIHVzZWQuIEhvd2V2ZXIsIGJpdCAxOSBpbiBWVENS
X0VMMiBpbmRpY2F0ZXMgd2hldGhlciB3ZSBhcmUgdXNpbmcgOC1iaXQgb3IgMTYtYml0IFZNSUQu
DQoNCk9rIGxldCBtZSBkb3VibGUgY2hlY2sgYWxsIHRoZSBjb25maWd1cmF0aW9uIGZvciBTdHJl
YW0gVGFibGUgRW50cnkuDQoNCj4gDQo+PiArDQo+PiArCWNmZy0+dnRjciAgID0gcmVnOw0KPiAN
Cj4gSXQgd291bGQgYmUgYmV0dGVyIHRvIGluaXRpYWxpemUgdnRjciBleGFjdGx5IGF0IHRoZSBz
YW1lIHBsYWNlIGFzIExpbnV4IGRvZXMuIFRoaXMgd291bGQgbWFrZSBlYXNpZXIgdG8gbWF0Y2gg
dGhlIGNvZGUuDQoNCk9rLiANCj4+ICsNCj4+ICAJdm1pZCA9IGFybV9zbW11X2JpdG1hcF9hbGxv
YyhzbW11LT52bWlkX21hcCwgc21tdS0+dm1pZF9iaXRzKTsNCj4+ICAJaWYgKHZtaWQgPCAwKQ0K
Pj4gIAkJcmV0dXJuIHZtaWQ7DQo+PiArCWNmZy0+dm1pZCAgPSAodTE2KXZtaWQ7DQo+PiArDQo+
PiArCWNmZy0+dnR0YnIgID0gcGFnZV90b19tYWRkcihjZmctPmRvbWFpbi0+YXJjaC5wMm0ucm9v
dCk7DQo+PiArDQo+PiArCXByaW50ayhYRU5MT0dfREVCVUcNCj4+ICsJCSAgICJTTU1VdjM6IGQl
dTogdm1pZCAweCV4IHZ0Y3IgMHglIlBSSXBhZGRyIiBwMm1hZGRyIDB4JSJQUklwYWRkciJcbiIs
DQo+PiArCQkgICBjZmctPmRvbWFpbi0+ZG9tYWluX2lkLCBjZmctPnZtaWQsIGNmZy0+dnRjciwg
Y2ZnLT52dHRicik7DQo+PiAgLQl2dGNyID0gJnBndGJsX2NmZy0+YXJtX2xwYWVfczJfY2ZnLnZ0
Y3I7DQo+PiAtCWNmZy0+dm1pZAk9ICh1MTYpdm1pZDsNCj4+IC0JY2ZnLT52dHRicgk9IHBndGJs
X2NmZy0+YXJtX2xwYWVfczJfY2ZnLnZ0dGJyOw0KPj4gLQljZmctPnZ0Y3IJPSBGSUVMRF9QUkVQ
KFNUUlRBQl9TVEVfMl9WVENSX1MyVDBTWiwgdnRjci0+dHN6KSB8DQo+PiAtCQkJICBGSUVMRF9Q
UkVQKFNUUlRBQl9TVEVfMl9WVENSX1MyU0wwLCB2dGNyLT5zbCkgfA0KPj4gLQkJCSAgRklFTERf
UFJFUChTVFJUQUJfU1RFXzJfVlRDUl9TMklSMCwgdnRjci0+aXJnbikgfA0KPj4gLQkJCSAgRklF
TERfUFJFUChTVFJUQUJfU1RFXzJfVlRDUl9TMk9SMCwgdnRjci0+b3JnbikgfA0KPj4gLQkJCSAg
RklFTERfUFJFUChTVFJUQUJfU1RFXzJfVlRDUl9TMlNIMCwgdnRjci0+c2gpIHwNCj4+IC0JCQkg
IEZJRUxEX1BSRVAoU1RSVEFCX1NURV8yX1ZUQ1JfUzJURywgdnRjci0+dGcpIHwNCj4+IC0JCQkg
IEZJRUxEX1BSRVAoU1RSVEFCX1NURV8yX1ZUQ1JfUzJQUywgdnRjci0+cHMpOw0KPj4gIAlyZXR1
cm4gMDsNCj4+ICB9DQo+PiAgQEAgLTEzODIsMjggKzE2ODksMTIgQEAgc3RhdGljIGludCBhcm1f
c21tdV9kb21haW5fZmluYWxpc2Uoc3RydWN0IGlvbW11X2RvbWFpbiAqZG9tYWluLA0KPj4gIAkJ
CQkgICAgc3RydWN0IGFybV9zbW11X21hc3RlciAqbWFzdGVyKQ0KPj4gIHsNCj4+ICAJaW50IHJl
dDsNCj4+IC0JdW5zaWduZWQgbG9uZyBpYXMsIG9hczsNCj4+IC0JaW50ICgqZmluYWxpc2Vfc3Rh
Z2VfZm4pKHN0cnVjdCBhcm1fc21tdV9kb21haW4gKiwNCj4+IC0JCQkJIHN0cnVjdCBhcm1fc21t
dV9tYXN0ZXIgKiwNCj4+IC0JCQkJIHN0cnVjdCBpb19wZ3RhYmxlX2NmZyAqKTsNCj4+ICAJc3Ry
dWN0IGFybV9zbW11X2RvbWFpbiAqc21tdV9kb21haW4gPSB0b19zbW11X2RvbWFpbihkb21haW4p
Ow0KPj4gLQlzdHJ1Y3QgYXJtX3NtbXVfZGV2aWNlICpzbW11ID0gc21tdV9kb21haW4tPnNtbXU7
DQo+PiAgICAJLyogUmVzdHJpY3QgdGhlIHN0YWdlIHRvIHdoYXQgd2UgY2FuIGFjdHVhbGx5IHN1
cHBvcnQgKi8NCj4+ICAJc21tdV9kb21haW4tPnN0YWdlID0gQVJNX1NNTVVfRE9NQUlOX1MyOw0K
Pj4gIC0Jc3dpdGNoIChzbW11X2RvbWFpbi0+c3RhZ2UpIHsNCj4+IC0JY2FzZSBBUk1fU01NVV9E
T01BSU5fTkVTVEVEOg0KPj4gLQljYXNlIEFSTV9TTU1VX0RPTUFJTl9TMjoNCj4+IC0JCWlhcyA9
IHNtbXUtPmlhczsNCj4+IC0JCW9hcyA9IHNtbXUtPm9hczsNCj4+IC0JCWZpbmFsaXNlX3N0YWdl
X2ZuID0gYXJtX3NtbXVfZG9tYWluX2ZpbmFsaXNlX3MyOw0KPj4gLQkJYnJlYWs7DQo+PiAtCWRl
ZmF1bHQ6DQo+PiAtCQlyZXR1cm4gLUVJTlZBTDsNCj4+IC0JfQ0KPj4gLQ0KPj4gLQlyZXQgPSBm
aW5hbGlzZV9zdGFnZV9mbihzbW11X2RvbWFpbiwgbWFzdGVyLCAmcGd0YmxfY2ZnKTsNCj4gDQo+
IEl0IGlzIG5vdCBlbnRpcmVseSBjbGVhciB3aHkgdGhpcyBjb2RlIGlzIHJlbW92ZWQgaGVyZSBh
bmQgbm90IHRoZSBwcmV2aW91cyBwYXRjaD8NCg0KSSB3aWxsIHJlbW92ZWQgdGhpcyBjb2RlIGlu
IHByZXZpb3VzIHBhdGNoLiBTb21laG93IEkgbWlzc2VkLg0KPiANCj4+ICsJcmV0ID0gYXJtX3Nt
bXVfZG9tYWluX2ZpbmFsaXNlX3MyKHNtbXVfZG9tYWluLCBtYXN0ZXIpOw0KPj4gIAlpZiAocmV0
IDwgMCkgew0KPj4gIAkJcmV0dXJuIHJldDsNCj4+ICAJfQ0KPj4gQEAgLTE1NTMsNyArMTg0NCw4
IEBAIHN0YXRpYyBpbnQgYXJtX3NtbXVfaW5pdF9vbmVfcXVldWUoc3RydWN0IGFybV9zbW11X2Rl
dmljZSAqc21tdSwNCj4+ICAJCXJldHVybiAtRU5PTUVNOw0KPj4gIAl9DQo+PiAgLQlpZiAoIVdB
Uk5fT04ocS0+YmFzZV9kbWEgJiAocXN6IC0gMSkpKSB7DQo+PiArCVdBUk5fT04ocS0+YmFzZV9k
bWEgJiAocXN6IC0gMSkpOw0KPiANCj4gVGhpcyBpcyBhIGNhbGwgdG8gcmV3b3JrIGhvdyBvdXIg
V0FSTl9PTigpIGluIFhlbi4NCg0KWWVzLg0KPiANCj4+ICsJaWYgKHVubGlrZWx5KHEtPmJhc2Vf
ZG1hICYgKHFzeiAtIDEpKSkgew0KPj4gIAkJZGV2X2luZm8oc21tdS0+ZGV2LCAiYWxsb2NhdGVk
ICV1IGVudHJpZXMgZm9yICVzXG4iLA0KPj4gIAkJCSAxIDw8IHEtPmxscS5tYXhfbl9zaGlmdCwg
bmFtZSk7DQo+PiAgCX0NCj4+IEBAIC0xNzU4LDkgKzIwNTAsNyBAQCBzdGF0aWMgdm9pZCBhcm1f
c21tdV9zZXR1cF91bmlxdWVfaXJxcyhzdHJ1Y3QgYXJtX3NtbXVfZGV2aWNlICpzbW11KQ0KPj4g
IAkvKiBSZXF1ZXN0IGludGVycnVwdCBsaW5lcyAqLw0KPj4gIAlpcnEgPSBzbW11LT5ldnRxLnEu
aXJxOw0KPj4gIAlpZiAoaXJxKSB7DQo+PiAtCQlyZXQgPSBkZXZtX3JlcXVlc3RfdGhyZWFkZWRf
aXJxKHNtbXUtPmRldiwgaXJxLCBOVUxMLA0KPj4gLQkJCQkJCWFybV9zbW11X2V2dHFfdGhyZWFk
LA0KPj4gLQkJCQkJCUlSUUZfT05FU0hPVCwNCj4+ICsJCXJldCA9IHJlcXVlc3RfaXJxKGlycSwg
MCwgYXJtX3NtbXVfZXZ0cV9pcnFfdGFza2xldCwNCj4+ICAJCQkJCQkiYXJtLXNtbXUtdjMtZXZ0
cSIsIHNtbXUpOw0KPj4gIAkJaWYgKHJldCA8IDApDQo+PiAgCQkJZGV2X3dhcm4oc21tdS0+ZGV2
LCAiZmFpbGVkIHRvIGVuYWJsZSBldnRxIGlycVxuIik7DQo+PiBAQCAtMTc3MCw4ICsyMDYwLDgg
QEAgc3RhdGljIHZvaWQgYXJtX3NtbXVfc2V0dXBfdW5pcXVlX2lycXMoc3RydWN0IGFybV9zbW11
X2RldmljZSAqc21tdSkNCj4+ICAgIAlpcnEgPSBzbW11LT5nZXJyX2lycTsNCj4+ICAJaWYgKGly
cSkgew0KPj4gLQkJcmV0ID0gZGV2bV9yZXF1ZXN0X2lycShzbW11LT5kZXYsIGlycSwgYXJtX3Nt
bXVfZ2Vycm9yX2hhbmRsZXIsDQo+PiAtCQkJCSAgICAgICAwLCAiYXJtLXNtbXUtdjMtZ2Vycm9y
Iiwgc21tdSk7DQo+PiArCQlyZXQgPSByZXF1ZXN0X2lycShpcnEsIDAsIGFybV9zbW11X2dlcnJv
cl9oYW5kbGVyLA0KPj4gKwkJCQkJCSJhcm0tc21tdS12My1nZXJyb3IiLCBzbW11KTsNCj4+ICAJ
CWlmIChyZXQgPCAwKQ0KPj4gIAkJCWRldl93YXJuKHNtbXUtPmRldiwgImZhaWxlZCB0byBlbmFi
bGUgZ2Vycm9yIGlycVxuIik7DQo+PiAgCX0gZWxzZSB7DQo+PiBAQCAtMTc4MSwxMSArMjA3MSw4
IEBAIHN0YXRpYyB2b2lkIGFybV9zbW11X3NldHVwX3VuaXF1ZV9pcnFzKHN0cnVjdCBhcm1fc21t
dV9kZXZpY2UgKnNtbXUpDQo+PiAgCWlmIChzbW11LT5mZWF0dXJlcyAmIEFSTV9TTU1VX0ZFQVRf
UFJJKSB7DQo+PiAgCQlpcnEgPSBzbW11LT5wcmlxLnEuaXJxOw0KPj4gIAkJaWYgKGlycSkgew0K
Pj4gLQkJCXJldCA9IGRldm1fcmVxdWVzdF90aHJlYWRlZF9pcnEoc21tdS0+ZGV2LCBpcnEsIE5V
TEwsDQo+PiAtCQkJCQkJCWFybV9zbW11X3ByaXFfdGhyZWFkLA0KPj4gLQkJCQkJCQlJUlFGX09O
RVNIT1QsDQo+PiAtCQkJCQkJCSJhcm0tc21tdS12My1wcmlxIiwNCj4+IC0JCQkJCQkJc21tdSk7
DQo+PiArCQkJcmV0ID0gcmVxdWVzdF9pcnEoaXJxLCAwLCBhcm1fc21tdV9wcmlxX2lycV90YXNr
bGV0LA0KPj4gKwkJCQkJCQkiYXJtLXNtbXUtdjMtcHJpcSIsIHNtbXUpOw0KPj4gIAkJCWlmIChy
ZXQgPCAwKQ0KPj4gIAkJCQlkZXZfd2FybihzbW11LT5kZXYsDQo+PiAgCQkJCQkgImZhaWxlZCB0
byBlbmFibGUgcHJpcSBpcnFcbiIpOw0KPj4gQEAgLTE4MTQsMTEgKzIxMDEsOCBAQCBzdGF0aWMg
aW50IGFybV9zbW11X3NldHVwX2lycXMoc3RydWN0IGFybV9zbW11X2RldmljZSAqc21tdSkNCj4+
ICAJCSAqIENhdml1bSBUaHVuZGVyWDIgaW1wbGVtZW50YXRpb24gZG9lc24ndCBzdXBwb3J0IHVu
aXF1ZSBpcnENCj4+ICAJCSAqIGxpbmVzLiBVc2UgYSBzaW5nbGUgaXJxIGxpbmUgZm9yIGFsbCB0
aGUgU01NVXYzIGludGVycnVwdHMuDQo+PiAgCQkgKi8NCj4+IC0JCXJldCA9IGRldm1fcmVxdWVz
dF90aHJlYWRlZF9pcnEoc21tdS0+ZGV2LCBpcnEsDQo+PiAtCQkJCQlhcm1fc21tdV9jb21iaW5l
ZF9pcnFfaGFuZGxlciwNCj4+IC0JCQkJCWFybV9zbW11X2NvbWJpbmVkX2lycV90aHJlYWQsDQo+
PiAtCQkJCQlJUlFGX09ORVNIT1QsDQo+PiAtCQkJCQkiYXJtLXNtbXUtdjMtY29tYmluZWQtaXJx
Iiwgc21tdSk7DQo+PiArCQlyZXQgPSByZXF1ZXN0X2lycShpcnEsIDAsIGFybV9zbW11X2NvbWJp
bmVkX2lycV9oYW5kbGVyLA0KPj4gKwkJCQkJCSJhcm0tc21tdS12My1jb21iaW5lZC1pcnEiLCBz
bW11KTsNCj4+ICAJCWlmIChyZXQgPCAwKQ0KPj4gIAkJCWRldl93YXJuKHNtbXUtPmRldiwgImZh
aWxlZCB0byBlbmFibGUgY29tYmluZWQgaXJxXG4iKTsNCj4+ICAJfSBlbHNlDQo+PiBAQCAtMTg1
Nyw3ICsyMTQxLDcgQEAgc3RhdGljIGludCBhcm1fc21tdV9kZXZpY2VfcmVzZXQoc3RydWN0IGFy
bV9zbW11X2RldmljZSAqc21tdSwgYm9vbCBieXBhc3MpDQo+PiAgCXJlZyA9IHJlYWRsX3JlbGF4
ZWQoc21tdS0+YmFzZSArIEFSTV9TTU1VX0NSMCk7DQo+PiAgCWlmIChyZWcgJiBDUjBfU01NVUVO
KSB7DQo+PiAgCQlkZXZfd2FybihzbW11LT5kZXYsICJTTU1VIGN1cnJlbnRseSBlbmFibGVkISBS
ZXNldHRpbmcuLi5cbiIpOw0KPj4gLQkJV0FSTl9PTihpc19rZHVtcF9rZXJuZWwoKSAmJiAhZGlz
YWJsZV9ieXBhc3MpOw0KPj4gKwkJV0FSTl9PTighZGlzYWJsZV9ieXBhc3MpOw0KPj4gIAkJYXJt
X3NtbXVfdXBkYXRlX2dicGEoc21tdSwgR0JQQV9BQk9SVCwgMCk7DQo+PiAgCX0NCj4+ICBAQCAt
MTk1Miw4ICsyMjM2LDExIEBAIHN0YXRpYyBpbnQgYXJtX3NtbXVfZGV2aWNlX3Jlc2V0KHN0cnVj
dCBhcm1fc21tdV9kZXZpY2UgKnNtbXUsIGJvb2wgYnlwYXNzKQ0KPj4gIAkJcmV0dXJuIHJldDsN
Cj4+ICAJfQ0KPj4gIC0JaWYgKGlzX2tkdW1wX2tlcm5lbCgpKQ0KPj4gLQkJZW5hYmxlcyAmPSB+
KENSMF9FVlRRRU4gfCBDUjBfUFJJUUVOKTsNCj4+ICsJLyogSW5pdGlhbGl6ZSB0YXNrbGV0cyBm
b3IgdGhyZWFkZWQgSVJRcyovDQo+PiArCXRhc2tsZXRfaW5pdCgmc21tdS0+ZXZ0cV9pcnFfdGFz
a2xldCwgYXJtX3NtbXVfZXZ0cV90aHJlYWQsIHNtbXUpOw0KPj4gKwl0YXNrbGV0X2luaXQoJnNt
bXUtPnByaXFfaXJxX3Rhc2tsZXQsIGFybV9zbW11X3ByaXFfdGhyZWFkLCBzbW11KTsNCj4+ICsJ
dGFza2xldF9pbml0KCZzbW11LT5jb21iaW5lZF9pcnFfdGFza2xldCwgYXJtX3NtbXVfY29tYmlu
ZWRfaXJxX3RocmVhZCwNCj4+ICsJCQkJIHNtbXUpOw0KPj4gICAgCS8qIEVuYWJsZSB0aGUgU01N
VSBpbnRlcmZhY2UsIG9yIGVuc3VyZSBieXBhc3MgKi8NCj4+ICAJaWYgKCFieXBhc3MgfHwgZGlz
YWJsZV9ieXBhc3MpIHsNCj4+IEBAIC0yMTk1LDcgKzI0ODIsNyBAQCBzdGF0aWMgaW5saW5lIGlu
dCBhcm1fc21tdV9kZXZpY2VfYWNwaV9wcm9iZShzdHJ1Y3QgcGxhdGZvcm1fZGV2aWNlICpwZGV2
LA0KPj4gIHN0YXRpYyBpbnQgYXJtX3NtbXVfZGV2aWNlX2R0X3Byb2JlKHN0cnVjdCBwbGF0Zm9y
bV9kZXZpY2UgKnBkZXYsDQo+PiAgCQkJCSAgICBzdHJ1Y3QgYXJtX3NtbXVfZGV2aWNlICpzbW11
KQ0KPj4gIHsNCj4+IC0Jc3RydWN0IGRldmljZSAqZGV2ID0gJnBkZXYtPmRldjsNCj4+ICsJc3Ry
dWN0IGRldmljZSAqZGV2ID0gcGRldjsNCj4+ICAJdTMyIGNlbGxzOw0KPj4gIAlpbnQgcmV0ID0g
LUVJTlZBTDsNCj4+ICBAQCAtMjIxOSwxMzAgKzI1MDYsNDQ5IEBAIHN0YXRpYyB1bnNpZ25lZCBs
b25nIGFybV9zbW11X3Jlc291cmNlX3NpemUoc3RydWN0IGFybV9zbW11X2RldmljZSAqc21tdSkN
Cj4+ICAJCXJldHVybiBTWl8xMjhLOw0KPj4gIH0NCj4+ICArLyogU3RhcnQgb2YgWGVuIHNwZWNp
ZmljIGNvZGUuICovDQo+PiAgc3RhdGljIGludCBhcm1fc21tdV9kZXZpY2VfcHJvYmUoc3RydWN0
IHBsYXRmb3JtX2RldmljZSAqcGRldikNCj4+ICB7DQo+PiAtCWludCBpcnEsIHJldDsNCj4+IC0J
c3RydWN0IHJlc291cmNlICpyZXM7DQo+PiAtCXJlc291cmNlX3NpemVfdCBpb2FkZHI7DQo+PiAt
CXN0cnVjdCBhcm1fc21tdV9kZXZpY2UgKnNtbXU7DQo+PiAtCXN0cnVjdCBkZXZpY2UgKmRldiA9
ICZwZGV2LT5kZXY7DQo+PiAtCWJvb2wgYnlwYXNzOw0KPj4gKyAgICBpbnQgaXJxLCByZXQ7DQo+
PiArICAgIHBhZGRyX3QgaW9hZGRyLCBpb3NpemU7DQo+PiArICAgIHN0cnVjdCBhcm1fc21tdV9k
ZXZpY2UgKnNtbXU7DQo+PiArICAgIGJvb2wgYnlwYXNzOw0KPj4gKw0KPj4gKyAgICBzbW11ID0g
ZGV2bV9remFsbG9jKGRldiwgc2l6ZW9mKCpzbW11KSwgR0ZQX0tFUk5FTCk7DQo+PiArICAgIGlm
ICggIXNtbXUgKQ0KPj4gKyAgICB7DQo+PiArICAgICAgICBkZXZfZXJyKHBkZXYsICJmYWlsZWQg
dG8gYWxsb2NhdGUgYXJtX3NtbXVfZGV2aWNlXG4iKTsNCj4+ICsgICAgICAgIHJldHVybiAtRU5P
TUVNOw0KPj4gKyAgICB9DQo+PiArICAgIHNtbXUtPmRldiA9IHBkZXY7DQo+PiArDQo+PiArICAg
IGlmICggcGRldi0+b2Zfbm9kZSApDQo+PiArICAgIHsNCj4+ICsgICAgICAgIHJldCA9IGFybV9z
bW11X2RldmljZV9kdF9wcm9iZShwZGV2LCBzbW11KTsNCj4+ICsgICAgfSBlbHNlDQo+PiArICAg
IHsNCj4+ICsgICAgICAgIHJldCA9IGFybV9zbW11X2RldmljZV9hY3BpX3Byb2JlKHBkZXYsIHNt
bXUpOw0KPj4gKyAgICAgICAgaWYgKCByZXQgPT0gLUVOT0RFViApDQo+PiArICAgICAgICAgICAg
cmV0dXJuIHJldDsNCj4+ICsgICAgfQ0KPj4gKw0KPj4gKyAgICAvKiBTZXQgYnlwYXNzIG1vZGUg
YWNjb3JkaW5nIHRvIGZpcm13YXJlIHByb2JpbmcgcmVzdWx0ICovDQo+PiArICAgIGJ5cGFzcyA9
ICEhcmV0Ow0KPiANCj4gQUZBSUNULCBieXBhc3Mgd291bGQgYmUgc2V0IGlmIHRoZSBkZXZpY2Ut
dHJlZSBpcyBidWdneS4gRm9yIFhlbiwgSSB0aGluayBpdCB3b3VsZCBiZSBzYW5lciB0byBub3Qg
Y29udGludWUgYXMgdGhpcyB3b3VsZCBicmVhayBpc29sYXRpb24uDQoNCk9rIC4gSSB3aWxsIG5v
dCBjb25maWd1cmUgdGhlIFNNTVUgaW4gYnlwYXNzIG1vZGUgaWYgZGV2aWNlLXRyZWUgcHJvYmUg
aXMgZmFpbGVkLiBJIHdpbGwgcmVwb3J0IGVycm9yIHRvIHRoZSB1c2VyLg0KDQo+IA0KPj4gKw0K
Pj4gKyAgICAvKiBCYXNlIGFkZHJlc3MgKi8NCj4+ICsgICAgcmV0ID0gZHRfZGV2aWNlX2dldF9h
ZGRyZXNzKGRldl90b19kdChwZGV2KSwgMCwgJmlvYWRkciwgJmlvc2l6ZSk7DQo+PiArICAgIGlm
KCByZXQgKQ0KPj4gKyAgICAgICAgcmV0dXJuIC1FTk9ERVY7DQo+PiArDQo+PiArICAgIGlmICgg
aW9zaXplIDwgYXJtX3NtbXVfcmVzb3VyY2Vfc2l6ZShzbW11KSApDQo+PiArICAgIHsNCj4+ICsg
ICAgICAgIGRldl9lcnIocGRldiwgIk1NSU8gcmVnaW9uIHRvbyBzbWFsbCAoJWx4KVxuIiwgaW9z
aXplKTsNCj4+ICsgICAgICAgIHJldHVybiAtRUlOVkFMOw0KPj4gKyAgICB9DQo+PiArDQo+PiAr
ICAgIC8qDQo+PiArICAgICAqIERvbid0IG1hcCB0aGUgSU1QTEVNRU5UQVRJT04gREVGSU5FRCBy
ZWdpb25zLCBzaW5jZSB0aGV5IG1heSBjb250YWluDQo+PiArICAgICAqIHRoZSBQTUNHIHJlZ2lz
dGVycyB3aGljaCBhcmUgcmVzZXJ2ZWQgYnkgdGhlIFBNVSBkcml2ZXIuDQo+PiArICAgICAqLw0K
PiANCj4gVGhpcyBjb21tZW50IGRvZXNuJ3Qgc2VlbSB0byBtYXRjaCB0aGUgY29kZS4gRGlkIHlv
dSBpbnRlbmQgdG8uLi4NCg0KWWVzIEkgd2lsbCBmaXggdGhlIGNvZGUgYW5kIHdpbGwgdXNlIEFS
TV9TTU1VX1JFR19TWiBhcyBzaXplLg0KPiANCj4gDQo+IA0KPj4gKyAgICBzbW11LT5iYXNlID0g
aW9yZW1hcF9ub2NhY2hlKGlvYWRkciwgaW9zaXplKTsNCj4gDQo+IC4uLiB1c2UgQVJNX1NNTVVf
UkVHX1NaIHdoaWNoIHdvdWxkIG9ubHkgbWFwIHRoZSBuZWNlc3NhcnkgcmVnaW9uPw0KDQpZZXMg
SSB3aWxsIHVzZSBBUk1fU01NVV9SRUdfU1ogLg0KPiANCj4gDQo+PiArICAgIGlmICggSVNfRVJS
KHNtbXUtPmJhc2UpICkNCj4+ICsgICAgICAgIHJldHVybiBQVFJfRVJSKHNtbXUtPmJhc2UpOw0K
Pj4gKw0KPj4gKyAgICBpZiAoIGlvc2l6ZSA+IFNaXzY0SyApDQo+PiArICAgIHsNCj4+ICsgICAg
ICAgIHNtbXUtPnBhZ2UxID0gaW9yZW1hcF9ub2NhY2hlKGlvYWRkciArIFNaXzY0SywgQVJNX1NN
TVVfUkVHX1NaKTsNCj4+ICsgICAgICAgIGlmICggSVNfRVJSKHNtbXUtPnBhZ2UxKSApDQo+PiAr
ICAgICAgICAgICAgcmV0dXJuIFBUUl9FUlIoc21tdS0+cGFnZTEpOw0KPj4gKyAgICB9DQo+PiAr
ICAgIGVsc2UNCj4+ICsgICAgew0KPj4gKyAgICAgICAgc21tdS0+cGFnZTEgPSBzbW11LT5iYXNl
Ow0KPj4gKyAgICB9DQo+PiArDQo+PiArICAgIC8qIEludGVycnVwdCBsaW5lcyAqLw0KPj4gKw0K
Pj4gKyAgICBpcnEgPSBwbGF0Zm9ybV9nZXRfaXJxX2J5bmFtZV9vcHRpb25hbChwZGV2LCAiY29t
YmluZWQiKTsNCj4+ICsgICAgaWYgKCBpcnEgPiAwICkNCj4+ICsgICAgICAgIHNtbXUtPmNvbWJp
bmVkX2lycSA9IGlycTsNCj4+ICsgICAgZWxzZQ0KPj4gKyAgICB7DQo+PiArICAgICAgICBpcnEg
PSBwbGF0Zm9ybV9nZXRfaXJxX2J5bmFtZV9vcHRpb25hbChwZGV2LCAiZXZlbnRxIik7DQo+PiAr
ICAgICAgICBpZiAoIGlycSA+IDAgKQ0KPj4gKyAgICAgICAgICAgIHNtbXUtPmV2dHEucS5pcnEg
PSBpcnE7DQo+PiArDQo+PiArICAgICAgICBpcnEgPSBwbGF0Zm9ybV9nZXRfaXJxX2J5bmFtZV9v
cHRpb25hbChwZGV2LCAicHJpcSIpOw0KPj4gKyAgICAgICAgaWYgKCBpcnEgPiAwICkNCj4+ICsg
ICAgICAgICAgICBzbW11LT5wcmlxLnEuaXJxID0gaXJxOw0KPj4gKw0KPj4gKyAgICAgICAgaXJx
ID0gcGxhdGZvcm1fZ2V0X2lycV9ieW5hbWVfb3B0aW9uYWwocGRldiwgImdlcnJvciIpOw0KPj4g
KyAgICAgICAgaWYgKCBpcnEgPiAwICkNCj4+ICsgICAgICAgICAgICBzbW11LT5nZXJyX2lycSA9
IGlycTsNCj4+ICsgICAgfQ0KPj4gKyAgICAvKiBQcm9iZSB0aGUgaC93ICovDQo+PiArICAgIHJl
dCA9IGFybV9zbW11X2RldmljZV9od19wcm9iZShzbW11KTsNCj4+ICsgICAgaWYgKCByZXQgKQ0K
Pj4gKyAgICAgICAgcmV0dXJuIHJldDsNCj4+ICsNCj4+ICsgICAgLyogSW5pdGlhbGlzZSBpbi1t
ZW1vcnkgZGF0YSBzdHJ1Y3R1cmVzICovDQo+PiArICAgIHJldCA9IGFybV9zbW11X2luaXRfc3Ry
dWN0dXJlcyhzbW11KTsNCj4+ICsgICAgaWYgKCByZXQgKQ0KPj4gKyAgICAgICAgcmV0dXJuIHJl
dDsNCj4+ICsNCj4+ICsgICAgLyogUmVzZXQgdGhlIGRldmljZSAqLw0KPj4gKyAgICByZXQgPSBh
cm1fc21tdV9kZXZpY2VfcmVzZXQoc21tdSwgYnlwYXNzKTsNCj4+ICsgICAgaWYgKCByZXQgKQ0K
Pj4gKyAgICAgICAgcmV0dXJuIHJldDsNCj4+ICsNCj4+ICsgICAgLyoNCj4+ICsgICAgICogS2Vl
cCBhIGxpc3Qgb2YgYWxsIHByb2JlZCBkZXZpY2VzLiBUaGlzIHdpbGwgYmUgdXNlZCB0byBxdWVy
eQ0KPj4gKyAgICAgKiB0aGUgc21tdSBkZXZpY2VzIGJhc2VkIG9uIHRoZSBmd25vZGUuDQo+PiAr
ICAgICAqLw0KPj4gKyAgICBJTklUX0xJU1RfSEVBRCgmc21tdS0+ZGV2aWNlcyk7DQo+PiArDQo+
PiArICAgIHNwaW5fbG9jaygmYXJtX3NtbXVfZGV2aWNlc19sb2NrKTsNCj4+ICsgICAgbGlzdF9h
ZGQoJnNtbXUtPmRldmljZXMsICZhcm1fc21tdV9kZXZpY2VzKTsNCj4+ICsgICAgc3Bpbl91bmxv
Y2soJmFybV9zbW11X2RldmljZXNfbG9jayk7DQo+PiArDQo+PiArICAgIHJldHVybiAwOw0KPj4g
K30NCj4+ICsNCj4+ICtzdGF0aWMgaW50IF9fbXVzdF9jaGVjayBhcm1fc21tdV9pb3RsYl9mbHVz
aF9hbGwoc3RydWN0IGRvbWFpbiAqZCkNCj4+ICt7DQo+PiArICAgIHN0cnVjdCBhcm1fc21tdV94
ZW5fZG9tYWluICp4ZW5fZG9tYWluID0gZG9tX2lvbW11KGQpLT5hcmNoLnByaXY7DQo+PiArICAg
IHN0cnVjdCBpb21tdV9kb21haW4gKmlvX2RvbWFpbjsNCj4+ICsNCj4+ICsgICAgc3Bpbl9sb2Nr
KCZ4ZW5fZG9tYWluLT5sb2NrKTsNCj4+ICsNCj4+ICsgICAgbGlzdF9mb3JfZWFjaF9lbnRyeSgg
aW9fZG9tYWluLCAmeGVuX2RvbWFpbi0+Y29udGV4dHMsIGxpc3QgKQ0KPj4gKyAgICB7DQo+PiAr
ICAgICAgICAvKg0KPj4gKyAgICAgICAgICogT25seSBpbnZhbGlkYXRlIHRoZSBjb250ZXh0IHdo
ZW4gU01NVSBpcyBwcmVzZW50Lg0KPj4gKyAgICAgICAgICogVGhpcyBpcyBiZWNhdXNlIHRoZSBj
b250ZXh0IGluaXRpYWxpemF0aW9uIGlzIGRlbGF5ZWQNCj4+ICsgICAgICAgICAqIHVudGlsIGEg
bWFzdGVyIGhhcyBiZWVuIGFkZGVkLg0KPj4gKyAgICAgICAgICovDQo+PiArICAgICAgICBpZiAo
IHVubGlrZWx5KCFBQ0NFU1NfT05DRSh0b19zbW11X2RvbWFpbihpb19kb21haW4pLT5zbW11KSkg
KQ0KPj4gKyAgICAgICAgICAgIGNvbnRpbnVlOw0KPj4gKw0KPj4gKyAgICAgICAgYXJtX3NtbXVf
dGxiX2ludl9jb250ZXh0KHRvX3NtbXVfZG9tYWluKGlvX2RvbWFpbikpOw0KPj4gKyAgICB9DQo+
PiAgLQlzbW11ID0gZGV2bV9remFsbG9jKGRldiwgc2l6ZW9mKCpzbW11KSwgR0ZQX0tFUk5FTCk7
DQo+PiAtCWlmICghc21tdSkgew0KPj4gLQkJZGV2X2VycihkZXYsICJmYWlsZWQgdG8gYWxsb2Nh
dGUgYXJtX3NtbXVfZGV2aWNlXG4iKTsNCj4+IC0JCXJldHVybiAtRU5PTUVNOw0KPj4gLQl9DQo+
PiAtCXNtbXUtPmRldiA9IGRldjsNCj4+ICsgICAgc3Bpbl91bmxvY2soJnhlbl9kb21haW4tPmxv
Y2spOw0KPj4gKw0KPj4gKyAgICByZXR1cm4gMDsNCj4+ICt9DQo+PiArDQo+PiArc3RhdGljIGlu
dCBfX211c3RfY2hlY2sgYXJtX3NtbXVfaW90bGJfZmx1c2goc3RydWN0IGRvbWFpbiAqZCwgZGZu
X3QgZGZuLA0KPj4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IHVuc2lnbmVkIGxvbmcgcGFnZV9jb3VudCwNCj4+ICsgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICB1bnNpZ25lZCBpbnQgZmx1c2hfZmxhZ3MpDQo+PiArew0KPj4g
KyAgICByZXR1cm4gYXJtX3NtbXVfaW90bGJfZmx1c2hfYWxsKGQpOw0KPj4gK30NCj4+ICAtCWlm
IChkZXYtPm9mX25vZGUpIHsNCj4+IC0JCXJldCA9IGFybV9zbW11X2RldmljZV9kdF9wcm9iZShw
ZGV2LCBzbW11KTsNCj4+IC0JfSBlbHNlIHsNCj4+IC0JCXJldCA9IGFybV9zbW11X2RldmljZV9h
Y3BpX3Byb2JlKHBkZXYsIHNtbXUpOw0KPj4gLQkJaWYgKHJldCA9PSAtRU5PREVWKQ0KPj4gLQkJ
CXJldHVybiByZXQ7DQo+PiAtCX0NCj4+ICtzdGF0aWMgc3RydWN0IGFybV9zbW11X2RldmljZSAq
YXJtX3NtbXVfZ2V0X2J5X2RldihzdHJ1Y3QgZGV2aWNlICpkZXYpDQo+PiArew0KPj4gKyAgICBz
dHJ1Y3QgYXJtX3NtbXVfZGV2aWNlICpzbW11ID0gTlVMTDsNCj4+ICAtCS8qIFNldCBieXBhc3Mg
bW9kZSBhY2NvcmRpbmcgdG8gZmlybXdhcmUgcHJvYmluZyByZXN1bHQgKi8NCj4+IC0JYnlwYXNz
ID0gISFyZXQ7DQo+PiArICAgIHNwaW5fbG9jaygmYXJtX3NtbXVfZGV2aWNlc19sb2NrKTsNCj4+
ICsgICAgbGlzdF9mb3JfZWFjaF9lbnRyeSggc21tdSwgJmFybV9zbW11X2RldmljZXMsIGRldmlj
ZXMgKQ0KPj4gKyAgICB7DQo+PiArICAgICAgICBpZiAoIHNtbXUtPmRldiAgPT0gZGV2ICkNCj4+
ICsgICAgICAgIHsNCj4+ICsgICAgICAgICAgICBzcGluX3VubG9jaygmYXJtX3NtbXVfZGV2aWNl
c19sb2NrKTsNCj4+ICsgICAgICAgICAgICByZXR1cm4gc21tdTsNCj4+ICsgICAgICAgIH0NCj4+
ICsgICAgfQ0KPj4gKyAgICBzcGluX3VubG9jaygmYXJtX3NtbXVfZGV2aWNlc19sb2NrKTsNCj4+
ICAtCS8qIEJhc2UgYWRkcmVzcyAqLw0KPj4gLQlyZXMgPSBwbGF0Zm9ybV9nZXRfcmVzb3VyY2Uo
cGRldiwgSU9SRVNPVVJDRV9NRU0sIDApOw0KPj4gLQlpZiAocmVzb3VyY2Vfc2l6ZShyZXMpIDwg
YXJtX3NtbXVfcmVzb3VyY2Vfc2l6ZShzbW11KSkgew0KPj4gLQkJZGV2X2VycihkZXYsICJNTUlP
IHJlZ2lvbiB0b28gc21hbGwgKCVwcilcbiIsIHJlcyk7DQo+PiAtCQlyZXR1cm4gLUVJTlZBTDsN
Cj4+IC0JfQ0KPj4gLQlpb2FkZHIgPSByZXMtPnN0YXJ0Ow0KPiANCj4gVGhlIGNvZGUgcmVtb3Zl
ZCBpcyBiYXNpY2FsbHkgdGhlIHNhbWUgYXMgdGhlIG9uZSB5b3UgYWRkZWQgZXhjZXB0IHRoZSBj
b2Rpbmcgc3R5bGUgY2hhbmdlLiBUaGlzIHBhdGNoIGlzIGFscmVhZHkgcXVpdGUgbG9uZyB0byBy
ZXZpZXcsIHNvIGNhbiB3ZSBwbGVhc2Uga2VlcCB0aGUgY2hhbmdlIHRvIHRoZSBzdHJpY3QgbWlu
aW11bT8NCj4gDQo+IElmIHlvdSB3YW50IHRvIGRvIHRvIGNsZWFuLXVwIHRoZW4gdGhleSBzaG91
bGQgYmUgZG9uZSBiZWZvcmUvYWZ0ZXIuDQoNCk9rLg0KPiANCj4+ICsgICAgcmV0dXJuIE5VTEw7
DQo+PiArfQ0KPj4gIC0JLyoNCj4+IC0JICogRG9uJ3QgbWFwIHRoZSBJTVBMRU1FTlRBVElPTiBE
RUZJTkVEIHJlZ2lvbnMsIHNpbmNlIHRoZXkgbWF5IGNvbnRhaW4NCj4+IC0JICogdGhlIFBNQ0cg
cmVnaXN0ZXJzIHdoaWNoIGFyZSByZXNlcnZlZCBieSB0aGUgUE1VIGRyaXZlci4NCj4+IC0JICov
DQo+PiAtCXNtbXUtPmJhc2UgPSBhcm1fc21tdV9pb3JlbWFwKGRldiwgaW9hZGRyLCBBUk1fU01N
VV9SRUdfU1opOw0KPj4gLQlpZiAoSVNfRVJSKHNtbXUtPmJhc2UpKQ0KPj4gLQkJcmV0dXJuIFBU
Ul9FUlIoc21tdS0+YmFzZSk7DQo+PiAtDQo+PiAtCWlmIChhcm1fc21tdV9yZXNvdXJjZV9zaXpl
KHNtbXUpID4gU1pfNjRLKSB7DQo+PiAtCQlzbW11LT5wYWdlMSA9IGFybV9zbW11X2lvcmVtYXAo
ZGV2LCBpb2FkZHIgKyBTWl82NEssDQo+PiAtCQkJCQkgICAgICAgQVJNX1NNTVVfUkVHX1NaKTsN
Cj4+IC0JCWlmIChJU19FUlIoc21tdS0+cGFnZTEpKQ0KPj4gLQkJCXJldHVybiBQVFJfRVJSKHNt
bXUtPnBhZ2UxKTsNCj4+IC0JfSBlbHNlIHsNCj4+IC0JCXNtbXUtPnBhZ2UxID0gc21tdS0+YmFz
ZTsNCj4+IC0JfQ0KPj4gKy8qIFByb2JpbmcgYW5kIGluaXRpYWxpc2F0aW9uIGZ1bmN0aW9ucyAq
Lw0KPj4gK3N0YXRpYyBzdHJ1Y3QgaW9tbXVfZG9tYWluICphcm1fc21tdV9nZXRfZG9tYWluKHN0
cnVjdCBkb21haW4gKmQsDQo+PiArICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgc3RydWN0IGRldmljZSAqZGV2KQ0KPj4gK3sNCj4+ICsgICAgc3RydWN0IGlv
bW11X2RvbWFpbiAqaW9fZG9tYWluOw0KPj4gKyAgICBzdHJ1Y3QgYXJtX3NtbXVfZG9tYWluICpz
bW11X2RvbWFpbjsNCj4+ICsgICAgc3RydWN0IGlvbW11X2Z3c3BlYyAqZndzcGVjID0gZGV2X2lv
bW11X2Z3c3BlY19nZXQoZGV2KTsNCj4+ICsgICAgc3RydWN0IGFybV9zbW11X3hlbl9kb21haW4g
Knhlbl9kb21haW4gPSBkb21faW9tbXUoZCktPmFyY2gucHJpdjsNCj4+ICsgICAgc3RydWN0IGFy
bV9zbW11X2RldmljZSAqc21tdSA9IGFybV9zbW11X2dldF9ieV9kZXYoZndzcGVjLT5pb21tdV9k
ZXYpOw0KPj4gKw0KPj4gKyAgICBpZiAoICFzbW11ICkNCj4+ICsgICAgICAgIHJldHVybiBOVUxM
Ow0KPj4gIC0JLyogSW50ZXJydXB0IGxpbmVzICovDQo+PiArICAgIC8qDQo+PiArICAgICAqIExv
b3AgdGhyb3VnaCB0aGUgJnhlbl9kb21haW4tPmNvbnRleHRzIHRvIGxvY2F0ZSBhIGNvbnRleHQN
Cj4+ICsgICAgICogYXNzaWduZWQgdG8gdGhpcyBTTU1VDQo+PiArICAgICAqLw0KPj4gKyAgICBs
aXN0X2Zvcl9lYWNoX2VudHJ5KCBpb19kb21haW4sICZ4ZW5fZG9tYWluLT5jb250ZXh0cywgbGlz
dCApDQo+PiArICAgIHsNCj4+ICsgICAgICAgIHNtbXVfZG9tYWluID0gdG9fc21tdV9kb21haW4o
aW9fZG9tYWluKTsNCj4+ICsgICAgICAgIGlmICggc21tdV9kb21haW4tPnNtbXUgPT0gc21tdSAp
DQo+PiArICAgICAgICAgICAgcmV0dXJuIGlvX2RvbWFpbjsNCj4+ICsgICAgfQ0KPj4gIC0JaXJx
ID0gcGxhdGZvcm1fZ2V0X2lycV9ieW5hbWVfb3B0aW9uYWwocGRldiwgImNvbWJpbmVkIik7DQo+
PiAtCWlmIChpcnEgPiAwKQ0KPj4gLQkJc21tdS0+Y29tYmluZWRfaXJxID0gaXJxOw0KPj4gLQll
bHNlIHsNCj4+IC0JCWlycSA9IHBsYXRmb3JtX2dldF9pcnFfYnluYW1lX29wdGlvbmFsKHBkZXYs
ICJldmVudHEiKTsNCj4+IC0JCWlmIChpcnEgPiAwKQ0KPj4gLQkJCXNtbXUtPmV2dHEucS5pcnEg
PSBpcnE7DQo+PiArICAgIHJldHVybiBOVUxMOw0KPj4gK30NCj4+ICAtCQlpcnEgPSBwbGF0Zm9y
bV9nZXRfaXJxX2J5bmFtZV9vcHRpb25hbChwZGV2LCAicHJpcSIpOw0KPj4gLQkJaWYgKGlycSA+
IDApDQo+PiAtCQkJc21tdS0+cHJpcS5xLmlycSA9IGlycTsNCj4+ICtzdGF0aWMgdm9pZCBhcm1f
c21tdV9kZXN0cm95X2lvbW11X2RvbWFpbihzdHJ1Y3QgaW9tbXVfZG9tYWluICppb19kb21haW4p
DQo+PiArew0KPj4gKyAgICBsaXN0X2RlbCgmaW9fZG9tYWluLT5saXN0KTsNCj4+ICsgICAgYXJt
X3NtbXVfZG9tYWluX2ZyZWUoaW9fZG9tYWluKTsNCj4+ICt9DQo+PiAgLQkJaXJxID0gcGxhdGZv
cm1fZ2V0X2lycV9ieW5hbWVfb3B0aW9uYWwocGRldiwgImdlcnJvciIpOw0KPj4gLQkJaWYgKGly
cSA+IDApDQo+PiAtCQkJc21tdS0+Z2Vycl9pcnEgPSBpcnE7DQo+PiAtCX0NCj4+IC0JLyogUHJv
YmUgdGhlIGgvdyAqLw0KPj4gLQlyZXQgPSBhcm1fc21tdV9kZXZpY2VfaHdfcHJvYmUoc21tdSk7
DQo+PiAtCWlmIChyZXQpDQo+PiAtCQlyZXR1cm4gcmV0Ow0KPj4gK3N0YXRpYyBpbnQgYXJtX3Nt
bXVfYXNzaWduX2RldihzdHJ1Y3QgZG9tYWluICpkLCB1OCBkZXZmbiwNCj4+ICsgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgc3RydWN0IGRldmljZSAqZGV2LCB1MzIgZmxhZykNCj4+ICt7
DQo+PiArICAgIGludCByZXQgPSAwOw0KPj4gKyAgICBzdHJ1Y3QgaW9tbXVfZG9tYWluICppb19k
b21haW47DQo+PiArICAgIHN0cnVjdCBhcm1fc21tdV9kb21haW4gKnNtbXVfZG9tYWluOw0KPj4g
KyAgICBzdHJ1Y3QgYXJtX3NtbXVfeGVuX2RvbWFpbiAqeGVuX2RvbWFpbiA9IGRvbV9pb21tdShk
KS0+YXJjaC5wcml2Ow0KPj4gIC0JLyogSW5pdGlhbGlzZSBpbi1tZW1vcnkgZGF0YSBzdHJ1Y3R1
cmVzICovDQo+PiAtCXJldCA9IGFybV9zbW11X2luaXRfc3RydWN0dXJlcyhzbW11KTsNCj4+IC0J
aWYgKHJldCkNCj4+IC0JCXJldHVybiByZXQ7DQo+PiArICAgIGlmICggIWRldi0+YXJjaGRhdGEu
aW9tbXUgKQ0KPj4gKyAgICB7DQo+PiArICAgICAgICBkZXYtPmFyY2hkYXRhLmlvbW11ID0geHph
bGxvYyhzdHJ1Y3QgYXJtX3NtbXVfeGVuX2RldmljZSk7DQo+PiArICAgICAgICBpZiAoICFkZXYt
PmFyY2hkYXRhLmlvbW11ICkNCj4+ICsgICAgICAgICAgICByZXR1cm4gLUVOT01FTTsNCj4+ICsg
ICAgfQ0KPj4gIC0JLyogUmVjb3JkIG91ciBwcml2YXRlIGRldmljZSBzdHJ1Y3R1cmUgKi8NCj4+
IC0JcGxhdGZvcm1fc2V0X2RydmRhdGEocGRldiwgc21tdSk7DQo+PiArICAgIHNwaW5fbG9jaygm
eGVuX2RvbWFpbi0+bG9jayk7DQo+PiAgLQkvKiBSZXNldCB0aGUgZGV2aWNlICovDQo+PiAtCXJl
dCA9IGFybV9zbW11X2RldmljZV9yZXNldChzbW11LCBieXBhc3MpOw0KPj4gLQlpZiAocmV0KQ0K
Pj4gLQkJcmV0dXJuIHJldDsNCj4+ICsgICAgLyoNCj4+ICsgICAgICogQ2hlY2sgdG8gc2VlIGlm
IGFuIGlvbW11X2RvbWFpbiBhbHJlYWR5IGV4aXN0cyBmb3IgdGhpcyB4ZW4gZG9tYWluDQo+PiAr
ICAgICAqIHVuZGVyIHRoZSBzYW1lIFNNTVUNCj4+ICsgICAgICovDQo+PiArICAgIGlvX2RvbWFp
biA9IGFybV9zbW11X2dldF9kb21haW4oZCwgZGV2KTsNCj4+ICsgICAgaWYgKCAhaW9fZG9tYWlu
ICkNCj4+ICsgICAgew0KPj4gKyAgICAgICAgaW9fZG9tYWluID0gYXJtX3NtbXVfZG9tYWluX2Fs
bG9jKElPTU1VX0RPTUFJTl9ETUEpOw0KPj4gKyAgICAgICAgaWYgKCAhaW9fZG9tYWluICkNCj4+
ICsgICAgICAgIHsNCj4+ICsgICAgICAgICAgICByZXQgPSAtRU5PTUVNOw0KPj4gKyAgICAgICAg
ICAgIGdvdG8gb3V0Ow0KPj4gKyAgICAgICAgfQ0KPj4gIC0JLyogQW5kIHdlJ3JlIHVwLiBHbyBn
byBnbyEgKi8NCj4+IC0JcmV0ID0gaW9tbXVfZGV2aWNlX3N5c2ZzX2FkZCgmc21tdS0+aW9tbXUs
IGRldiwgTlVMTCwNCj4+IC0JCQkJICAgICAic21tdTMuJXBhIiwgJmlvYWRkcik7DQo+PiAtCWlm
IChyZXQpDQo+PiAtCQlyZXR1cm4gcmV0Ow0KPj4gKyAgICAgICAgc21tdV9kb21haW4gPSB0b19z
bW11X2RvbWFpbihpb19kb21haW4pOw0KPj4gKyAgICAgICAgc21tdV9kb21haW4tPnMyX2NmZy5k
b21haW4gPSBkOw0KPj4gIC0JaW9tbXVfZGV2aWNlX3NldF9vcHMoJnNtbXUtPmlvbW11LCAmYXJt
X3NtbXVfb3BzKTsNCj4+IC0JaW9tbXVfZGV2aWNlX3NldF9md25vZGUoJnNtbXUtPmlvbW11LCBk
ZXYtPmZ3bm9kZSk7DQo+PiArICAgICAgICAvKiBDaGFpbiB0aGUgbmV3IGNvbnRleHQgdG8gdGhl
IGRvbWFpbiAqLw0KPj4gKyAgICAgICAgbGlzdF9hZGQoJmlvX2RvbWFpbi0+bGlzdCwgJnhlbl9k
b21haW4tPmNvbnRleHRzKTsNCj4+ICAtCXJldCA9IGlvbW11X2RldmljZV9yZWdpc3Rlcigmc21t
dS0+aW9tbXUpOw0KPj4gLQlpZiAocmV0KSB7DQo+PiAtCQlkZXZfZXJyKGRldiwgIkZhaWxlZCB0
byByZWdpc3RlciBpb21tdVxuIik7DQo+PiAtCQlyZXR1cm4gcmV0Ow0KPj4gLQl9DQo+PiArICAg
IH0NCj4+ICsNCj4+ICsgICAgcmV0ID0gYXJtX3NtbXVfYXR0YWNoX2Rldihpb19kb21haW4sIGRl
dik7DQo+PiArICAgIGlmICggcmV0ICkNCj4+ICsgICAgew0KPj4gKyAgICAgICAgaWYgKCBpb19k
b21haW4tPnJlZi5jb3VudGVyID09IDAgKQ0KPj4gKyAgICAgICAgICAgIGFybV9zbW11X2Rlc3Ry
b3lfaW9tbXVfZG9tYWluKGlvX2RvbWFpbik7DQo+PiArICAgIH0NCj4+ICsgICAgZWxzZQ0KPj4g
KyAgICB7DQo+PiArICAgICAgICBhdG9taWNfaW5jKCZpb19kb21haW4tPnJlZik7DQo+PiArICAg
IH0NCj4+ICAtCXJldHVybiBhcm1fc21tdV9zZXRfYnVzX29wcygmYXJtX3NtbXVfb3BzKTsNCj4+
ICtvdXQ6DQo+PiArICAgIHNwaW5fdW5sb2NrKCZ4ZW5fZG9tYWluLT5sb2NrKTsNCj4+ICsgICAg
cmV0dXJuIHJldDsNCj4+ICB9DQo+PiAgLXN0YXRpYyBpbnQgYXJtX3NtbXVfZGV2aWNlX3JlbW92
ZShzdHJ1Y3QgcGxhdGZvcm1fZGV2aWNlICpwZGV2KQ0KPj4gK3N0YXRpYyBpbnQgYXJtX3NtbXVf
ZGVhc3NpZ25fZGV2KHN0cnVjdCBkb21haW4gKmQsIHN0cnVjdCBkZXZpY2UgKmRldikNCj4+ICB7
DQo+PiAtCXN0cnVjdCBhcm1fc21tdV9kZXZpY2UgKnNtbXUgPSBwbGF0Zm9ybV9nZXRfZHJ2ZGF0
YShwZGV2KTsNCj4+ICsgICAgc3RydWN0IGlvbW11X2RvbWFpbiAqaW9fZG9tYWluID0gYXJtX3Nt
bXVfZ2V0X2RvbWFpbihkLCBkZXYpOw0KPj4gKyAgICBzdHJ1Y3QgYXJtX3NtbXVfeGVuX2RvbWFp
biAqeGVuX2RvbWFpbiA9IGRvbV9pb21tdShkKS0+YXJjaC5wcml2Ow0KPj4gKyAgICBzdHJ1Y3Qg
YXJtX3NtbXVfZG9tYWluICphcm1fc21tdSA9IHRvX3NtbXVfZG9tYWluKGlvX2RvbWFpbik7DQo+
PiArICAgIHN0cnVjdCBhcm1fc21tdV9tYXN0ZXIgKm1hc3RlciA9IGRldl9pb21tdV9wcml2X2dl
dChkZXYpOw0KPj4gIC0JYXJtX3NtbXVfc2V0X2J1c19vcHMoTlVMTCk7DQo+PiAtCWlvbW11X2Rl
dmljZV91bnJlZ2lzdGVyKCZzbW11LT5pb21tdSk7DQo+PiAtCWlvbW11X2RldmljZV9zeXNmc19y
ZW1vdmUoJnNtbXUtPmlvbW11KTsNCj4+IC0JYXJtX3NtbXVfZGV2aWNlX2Rpc2FibGUoc21tdSk7
DQo+PiArICAgIGlmICggIWFybV9zbW11IHx8IGFybV9zbW11LT5zMl9jZmcuZG9tYWluICE9IGQg
KQ0KPj4gKyAgICB7DQo+PiArICAgICAgICBkZXZfZXJyKGRldiwgIiBub3QgYXR0YWNoZWQgdG8g
ZG9tYWluICVkXG4iLCBkLT5kb21haW5faWQpOw0KPj4gKyAgICAgICAgcmV0dXJuIC1FU1JDSDsN
Cj4+ICsgICAgfQ0KPj4gIC0JcmV0dXJuIDA7DQo+PiArICAgIHNwaW5fbG9jaygmeGVuX2RvbWFp
bi0+bG9jayk7DQo+PiArDQo+PiArICAgIGFybV9zbW11X2RldGFjaF9kZXYobWFzdGVyKTsNCj4+
ICsgICAgYXRvbWljX2RlYygmaW9fZG9tYWluLT5yZWYpOw0KPj4gKw0KPj4gKyAgICBpZiAoIGlv
X2RvbWFpbi0+cmVmLmNvdW50ZXIgPT0gMCApDQo+PiArICAgICAgICBhcm1fc21tdV9kZXN0cm95
X2lvbW11X2RvbWFpbihpb19kb21haW4pOw0KPj4gKw0KPj4gKyAgICBzcGluX3VubG9jaygmeGVu
X2RvbWFpbi0+bG9jayk7DQo+PiArDQo+PiArICAgIHJldHVybiAwOw0KPj4gK30NCj4+ICsNCj4+
ICtzdGF0aWMgaW50IGFybV9zbW11X3JlYXNzaWduX2RldihzdHJ1Y3QgZG9tYWluICpzLCBzdHJ1
Y3QgZG9tYWluICp0LA0KPj4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHU4IGRl
dmZuLCAgc3RydWN0IGRldmljZSAqZGV2KQ0KPj4gK3sNCj4+ICsgICAgaW50IHJldCA9IDA7DQo+
PiArDQo+PiArICAgIC8qIERvbid0IGFsbG93IHJlbWFwcGluZyBvbiBvdGhlciBkb21haW4gdGhh
biBod2RvbSAqLw0KPj4gKyAgICBpZiAoIHQgJiYgdCAhPSBoYXJkd2FyZV9kb21haW4gKQ0KPj4g
KyAgICAgICAgcmV0dXJuIC1FUEVSTTsNCj4+ICsNCj4+ICsgICAgaWYgKCB0ID09IHMgKQ0KPj4g
KyAgICAgICAgcmV0dXJuIDA7DQo+PiArDQo+PiArICAgIHJldCA9IGFybV9zbW11X2RlYXNzaWdu
X2RldihzLCBkZXYpOw0KPj4gKyAgICBpZiAoIHJldCApDQo+PiArICAgICAgICByZXR1cm4gcmV0
Ow0KPj4gKw0KPj4gKyAgICBpZiAoIHQgKQ0KPj4gKyAgICB7DQo+PiArICAgICAgICAvKiBObyBm
bGFncyBhcmUgZGVmaW5lZCBmb3IgQVJNLiAqLw0KPj4gKyAgICAgICAgcmV0ID0gYXJtX3NtbXVf
YXNzaWduX2Rldih0LCBkZXZmbiwgZGV2LCAwKTsNCj4+ICsgICAgICAgIGlmICggcmV0ICkNCj4+
ICsgICAgICAgICAgICByZXR1cm4gcmV0Ow0KPj4gKyAgICB9DQo+PiArDQo+PiArICAgIHJldHVy
biAwOw0KPj4gK30NCj4+ICsNCj4+ICtzdGF0aWMgaW50IGFybV9zbW11X2lvbW11X3hlbl9kb21h
aW5faW5pdChzdHJ1Y3QgZG9tYWluICpkKQ0KPj4gK3sNCj4+ICsgICAgc3RydWN0IGFybV9zbW11
X3hlbl9kb21haW4gKnhlbl9kb21haW47DQo+PiArDQo+PiArICAgIHhlbl9kb21haW4gPSB4emFs
bG9jKHN0cnVjdCBhcm1fc21tdV94ZW5fZG9tYWluKTsNCj4+ICsgICAgaWYgKCAheGVuX2RvbWFp
biApDQo+PiArICAgICAgICByZXR1cm4gLUVOT01FTTsNCj4+ICsNCj4+ICsgICAgc3Bpbl9sb2Nr
X2luaXQoJnhlbl9kb21haW4tPmxvY2spOw0KPj4gKyAgICBJTklUX0xJU1RfSEVBRCgmeGVuX2Rv
bWFpbi0+Y29udGV4dHMpOw0KPj4gKw0KPj4gKyAgICBkb21faW9tbXUoZCktPmFyY2gucHJpdiA9
IHhlbl9kb21haW47DQo+PiArDQo+PiArICAgIHJldHVybiAwOw0KPj4gK30NCj4+ICsNCj4+ICtz
dGF0aWMgdm9pZCBfX2h3ZG9tX2luaXQgYXJtX3NtbXVfaW9tbXVfaHdkb21faW5pdChzdHJ1Y3Qg
ZG9tYWluICpkKQ0KPj4gK3sNCj4+ICt9DQo+PiArDQo+PiArc3RhdGljIHZvaWQgYXJtX3NtbXVf
aW9tbXVfeGVuX2RvbWFpbl90ZWFyZG93bihzdHJ1Y3QgZG9tYWluICpkKQ0KPj4gK3sNCj4+ICsg
ICAgc3RydWN0IGFybV9zbW11X3hlbl9kb21haW4gKnhlbl9kb21haW4gPSBkb21faW9tbXUoZCkt
PmFyY2gucHJpdjsNCj4+ICsNCj4+ICsgICAgQVNTRVJUKGxpc3RfZW1wdHkoJnhlbl9kb21haW4t
PmNvbnRleHRzKSk7DQo+PiArICAgIHhmcmVlKHhlbl9kb21haW4pOw0KPj4gK30NCj4+ICsNCj4+
ICtzdGF0aWMgaW50IGFybV9zbW11X2R0X3hsYXRlKHN0cnVjdCBkZXZpY2UgKmRldiwNCj4+ICsg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIGNvbnN0IHN0cnVjdCBkdF9waGFuZGxlX2FyZ3Mg
KmFyZ3MpDQo+PiArew0KPj4gKyAgICBpbnQgcmV0Ow0KPj4gKyAgICBzdHJ1Y3QgaW9tbXVfZndz
cGVjICpmd3NwZWMgPSBkZXZfaW9tbXVfZndzcGVjX2dldChkZXYpOw0KPj4gKw0KPj4gKyAgICBy
ZXQgPSBpb21tdV9md3NwZWNfYWRkX2lkcyhkZXYsIGFyZ3MtPmFyZ3MsIDEpOw0KPj4gKyAgICBp
ZiAoIHJldCApDQo+PiArICAgICAgICByZXR1cm4gcmV0Ow0KPj4gKw0KPj4gKyAgICBpZiAoIGR0
X2RldmljZV9pc19wcm90ZWN0ZWQoZGV2X3RvX2R0KGRldikpICkNCj4+ICsgICAgew0KPj4gKyAg
ICAgICAgZGV2X2VycihkZXYsICJBbHJlYWR5IGFkZGVkIHRvIFNNTVV2M1xuIik7DQo+PiArICAg
ICAgICByZXR1cm4gLUVFWElTVDsNCj4+ICsgICAgfQ0KPj4gKw0KPj4gKyAgICAvKiBMZXQgWGVu
IGtub3cgdGhhdCB0aGUgbWFzdGVyIGRldmljZSBpcyBwcm90ZWN0ZWQgYnkgYW4gSU9NTVUuICov
DQo+PiArICAgIGR0X2RldmljZV9zZXRfcHJvdGVjdGVkKGRldl90b19kdChkZXYpKTsNCj4+ICsN
Cj4+ICsgICAgZGV2X2luZm8oZGV2LCAiQWRkZWQgbWFzdGVyIGRldmljZSAoU01NVXYzICVzIFN0
cmVhbUlkcyAldSlcbiIsDQo+PiArICAgICAgICAgICAgZGV2X25hbWUoZndzcGVjLT5pb21tdV9k
ZXYpLCBmd3NwZWMtPm51bV9pZHMpOw0KPj4gKw0KPj4gKyAgICByZXR1cm4gMDsNCj4+ICB9DQo+
PiAgLXN0YXRpYyB2b2lkIGFybV9zbW11X2RldmljZV9zaHV0ZG93bihzdHJ1Y3QgcGxhdGZvcm1f
ZGV2aWNlICpwZGV2KQ0KPiANCj4gSSB0aGluayB0aGlzIGZ1bmN0aW9uIHNob3VsZCBoYXZlIGJl
ZW4gZHJvcHBlZCBpbiB0aGUgcHJldmlvdXMgcGF0Y2guDQoNCk9LLiANCj4gDQo+PiArc3RhdGlj
IGludCBhcm1fc21tdV9hZGRfZGV2aWNlKHU4IGRldmZuLCBzdHJ1Y3QgZGV2aWNlICpkZXYpDQo+
PiAgew0KPj4gLQlhcm1fc21tdV9kZXZpY2VfcmVtb3ZlKHBkZXYpOw0KPj4gKyAgICBpbnQgaSwg
cmV0Ow0KPj4gKyAgICBzdHJ1Y3QgYXJtX3NtbXVfZGV2aWNlICpzbW11Ow0KPj4gKyAgICBzdHJ1
Y3QgYXJtX3NtbXVfbWFzdGVyICptYXN0ZXI7DQo+PiArICAgIHN0cnVjdCBpb21tdV9md3NwZWMg
KmZ3c3BlYzsNCj4+ICsNCj4+ICsgICAgZndzcGVjID0gZGV2X2lvbW11X2Z3c3BlY19nZXQoZGV2
KTsNCj4+ICsgICAgaWYgKCAhZndzcGVjICkNCj4+ICsgICAgICAgIHJldHVybiAtRU5PREVWOw0K
Pj4gKw0KPj4gKyAgICBzbW11ID0gYXJtX3NtbXVfZ2V0X2J5X2Rldihmd3NwZWMtPmlvbW11X2Rl
v);
>> +    if ( !smmu )
>> +        return -ENODEV;
>> +
>> +    master = xzalloc(struct arm_smmu_master);
>> +    if ( !master )
>> +        return -ENOMEM;
>> +
>> +    master->dev = dev;
>> +    master->smmu = smmu;
>> +    master->sids = fwspec->ids;
>> +    master->num_sids = fwspec->num_ids;
>> +
>> +    dev_iommu_priv_set(dev, master);
>> +
>> +    /* Check the SIDs are in range of the SMMU and our stream table */
>> +    for ( i = 0; i < master->num_sids; i++ )
>> +    {
>> +        u32 sid = master->sids[i];
>> +
>> +        if ( !arm_smmu_sid_in_range(smmu, sid) )
>> +        {
>> +            ret = -ERANGE;
>> +            goto err_free_master;
>> +        }
>> +
>> +        /* Ensure l2 strtab is initialised */
>> +        if ( smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB )
>> +        {
>> +            ret = arm_smmu_init_l2_strtab(smmu, sid);
>> +            if ( ret )
>> +                goto err_free_master;
>> +        }
>> +    }
>> +
>> +    return 0;
>> +
>> +err_free_master:
>> +    xfree(master);
>> +    dev_iommu_priv_set(dev, NULL);
>> +    return ret;
>>  }
>>  -static const struct of_device_id arm_smmu_of_match[] = {
>> -	{ .compatible = "arm,smmu-v3", },
>> -	{ },
>> +static const struct iommu_ops arm_smmu_iommu_ops = {
>> +    .init = arm_smmu_iommu_xen_domain_init,
>> +    .hwdom_init = arm_smmu_iommu_hwdom_init,
>> +    .teardown = arm_smmu_iommu_xen_domain_teardown,
>> +    .iotlb_flush = arm_smmu_iotlb_flush,
>> +    .iotlb_flush_all = arm_smmu_iotlb_flush_all,
>> +    .assign_device = arm_smmu_assign_dev,
>> +    .reassign_device = arm_smmu_reassign_dev,
>> +    .map_page = arm_iommu_map_page,
>> +    .unmap_page = arm_iommu_unmap_page,
>> +    .dt_xlate = arm_smmu_dt_xlate,
>> +    .add_device = arm_smmu_add_device,
>> +};
>> +
>> +static const struct dt_device_match arm_smmu_of_match[] = {
>> +    { .compatible = "arm,smmu-v3", },
>> +    { },
>>  };
>> +
>> +static __init int arm_smmu_dt_init(struct dt_device_node *dev,
>> +                                   const void *data)
>> +{
>> +    int rc;
>> +
>> +    /*
>> +     * Even if the device can't be initialized, we don't want to
>> +     * give the SMMU device to dom0.
>> +     */
>> +    dt_device_set_used_by(dev, DOMID_XEN);
>> +
>> +    rc = arm_smmu_device_probe(dt_to_dev(dev));
>> +    if ( rc )
>> +        return rc;
>> +
>> +    iommu_set_ops(&arm_smmu_iommu_ops);
>> +    return 0;
>> +}
>> +
>> +DT_DEVICE_START(smmuv3, "ARM SMMU V3", DEVICE_IOMMU)
>> +    .dt_match = arm_smmu_of_match,
>> +    .init = arm_smmu_dt_init,
>> +DT_DEVICE_END
> 
> Cheers,
> 
> -- 
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 12:32:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 12:32:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46453.82434 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmFgf-0007R0-MD; Mon, 07 Dec 2020 12:32:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46453.82434; Mon, 07 Dec 2020 12:32:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmFgf-0007Qt-J9; Mon, 07 Dec 2020 12:32:17 +0000
Received: by outflank-mailman (input) for mailman id 46453;
 Mon, 07 Dec 2020 12:32:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmFgd-0007Qo-MS
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 12:32:15 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d8455dca-2113-4c98-bda9-2ed1c283df70;
 Mon, 07 Dec 2020 12:32:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6E8E2AB63;
 Mon,  7 Dec 2020 12:32:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d8455dca-2113-4c98-bda9-2ed1c283df70
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607344332; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Yeb2J/POhdCK8JJbUDoyJ67Va85k6Y65cXrZDPBUImQ=;
	b=EuITZBfFFnnuVoXk0qExFvbzmlzl0IIK2wqqpd+1pfCM0qiP05+nB3WhtPlyHVa/Bi6z9C
	7HrInX5p+feEZ8631ZGKrRP5EvOSWyDXbMe7Ns+L0xLTphoa4x0ypQOaCnaMMDTC48TJRf
	eMcs/+kI0tCgO0+ClKjvia7F+egvqu4=
Subject: Re: [PATCH V3 11/23] xen/ioreq: Move x86's io_completion/io_req
 fields to struct vcpu
To: Oleksandr Tyshchenko <olekstysh@gmail.com>, Paul Durrant <paul@xen.org>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-12-git-send-email-olekstysh@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <742899b6-964b-be75-affc-31342c07133a@suse.com>
Date: Mon, 7 Dec 2020 13:32:14 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <1606732298-22107-12-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -142,8 +142,8 @@ void hvmemul_cancel(struct vcpu *v)
>  {
>      struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
>  
> -    vio->io_req.state = STATE_IOREQ_NONE;
> -    vio->io_completion = HVMIO_no_completion;
> +    v->io.req.state = STATE_IOREQ_NONE;
> +    v->io.completion = IO_no_completion;
>      vio->mmio_cache_count = 0;
>      vio->mmio_insn_bytes = 0;
>      vio->mmio_access = (struct npfec){};
> @@ -159,7 +159,7 @@ static int hvmemul_do_io(
>  {
>      struct vcpu *curr = current;
>      struct domain *currd = curr->domain;
> -    struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io;
> +    struct vcpu_io *vio = &curr->io;

Taking just these two hunks: "vio" would now stand for two entirely
different things. I realize the name is applicable to both, but I
wonder if such naming isn't going to risk confusion. Despite being
relatively familiar with the involved code, I've been repeatedly
unsure what exactly "vio" covers, and needed to go back to the
header. So together with the name possible adjustment mentioned
further down, maybe "vcpu_io" also wants its name changed, such that
the variable then also could sensibly be named (slightly)
differently? struct vcpu_io_state maybe? Or alternatively rename
variables of type struct hvm_vcpu_io * to hvio or hio? Otoh the
savings aren't very big for just ->io, so maybe better to stick to
the prior name with the prior type, and not introduce local
variables at all for the new field, like you already have it in the
former case?

> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -145,6 +145,21 @@ void evtchn_destroy_final(struct domain *d); /* from complete_domain_destroy */
>  
>  struct waitqueue_vcpu;
>  
> +enum io_completion {
> +    IO_no_completion,
> +    IO_mmio_completion,
> +    IO_pio_completion,
> +#ifdef CONFIG_X86
> +    IO_realmode_completion,
> +#endif
> +};

I'm not entirely happy with io_ / IO_ here - they seem a little
too generic. How about ioreq_ / IOREQ_ respectively?

> +struct vcpu_io {
> +    /* I/O request in flight to device model. */
> +    enum io_completion   completion;
> +    ioreq_t              req;
> +};
> +
>  struct vcpu
>  {
>      int              vcpu_id;
> @@ -256,6 +271,10 @@ struct vcpu
>      struct vpci_vcpu vpci;
>  
>      struct arch_vcpu arch;
> +
> +#ifdef CONFIG_IOREQ_SERVER
> +    struct vcpu_io io;
> +#endif
>  };

I don't have a good solution in mind, and I'm also not meaning to
necessarily request a change here, but I'd like to point out that
this does away (for this part of it only, of course) with the
overlaying of the PV and HVM sub-structs on x86. As long as the
HVM part is the far bigger one, that's not a problem, but I wanted
to mention the aspect nevertheless.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 12:45:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 12:45:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46460.82445 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmFt3-00007s-Sb; Mon, 07 Dec 2020 12:45:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46460.82445; Mon, 07 Dec 2020 12:45:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmFt3-00007l-P7; Mon, 07 Dec 2020 12:45:05 +0000
Received: by outflank-mailman (input) for mailman id 46460;
 Mon, 07 Dec 2020 12:45:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmFt2-00007g-MM
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 12:45:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 70f7a8a4-0377-4ee4-8bd5-1eec8c17939e;
 Mon, 07 Dec 2020 12:45:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BE61FAC90;
 Mon,  7 Dec 2020 12:45:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 70f7a8a4-0377-4ee4-8bd5-1eec8c17939e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607345102; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=d9GEG4c+vI432TN9cI9m52CDJgB9wR91u8aHGsQQ1sk=;
	b=XjNBfwjORsI6I86tGwkG9Ju4aW2VtAw8OlDheCjR/WwNiu51qgybUBzBHVIIcRcMCRGTdq
	y16PPwlmA3740ZsEMyK9hqFqonJb55G9h3UHUNGThZHaTUPabSVFhxsxFdUeUcxgVKQjsH
	TnUJNC0VB5szqZGgx/sj01DjJw/dsLE=
Subject: Re: [PATCH V3 12/23] xen/ioreq: Remove "hvm" prefixes from involved
 function names
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-13-git-send-email-olekstysh@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8b185227-3eae-de79-3af9-db39fed44459@suse.com>
Date: Mon, 7 Dec 2020 13:45:04 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <1606732298-22107-13-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
> @@ -301,8 +301,8 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
>      return found;
>  }
>  
> -static void hvm_update_ioreq_evtchn(struct ioreq_server *s,
> -                                    struct ioreq_vcpu *sv)
> +static void ioreq_update_evtchn(struct ioreq_server *s,
> +                                struct ioreq_vcpu *sv)
>  {
>      ASSERT(spin_is_locked(&s->lock));

This looks to be an ioreq server function, which hence wants to be
named ioreq_server_update_evtchn()? Then

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 12:46:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 12:46:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46465.82458 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmFuF-0000FF-6N; Mon, 07 Dec 2020 12:46:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46465.82458; Mon, 07 Dec 2020 12:46:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmFuF-0000F8-3D; Mon, 07 Dec 2020 12:46:19 +0000
Received: by outflank-mailman (input) for mailman id 46465;
 Mon, 07 Dec 2020 12:46:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Pxd=FL=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kmFuD-0000Ex-Dx
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 12:46:17 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 9e22e6fb-e732-431f-908c-bb4142b3648a;
 Mon, 07 Dec 2020 12:46:15 +0000 (UTC)
Received: from mail-wm1-f69.google.com (mail-wm1-f69.google.com
 [209.85.128.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-209-piQwEYgEMQy9frD-eLZ2gA-1; Mon, 07 Dec 2020 07:46:12 -0500
Received: by mail-wm1-f69.google.com with SMTP id y187so5283898wmy.3
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 04:46:11 -0800 (PST)
Received: from [192.168.1.36] (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id h20sm13745748wmb.29.2020.12.07.04.46.09
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 07 Dec 2020 04:46:10 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e22e6fb-e732-431f-908c-bb4142b3648a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607345175;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=tyOz5Tc9PqhiNlZWIQPWc0O+She1E1/qQZR6bIJl32A=;
	b=Ui8BMwfzJ3RyW2pEfnxZIMBbuLB7BBDx9oySOKcgaK8/2GpT2S5qraUDQRYH4sJK15dYMj
	xAV6t5rl71fjUOf4D+5OE0AjiZy6SWzmXI0HeBBAPpKWyGk+7nqo53m5elp2Vw7+AcPRMG
	8vowCyEHcq5xgF46fB2hElcnth0I7Es=
X-MC-Unique: piQwEYgEMQy9frD-eLZ2gA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=tyOz5Tc9PqhiNlZWIQPWc0O+She1E1/qQZR6bIJl32A=;
        b=Sb0RdwAuww2IoC9hb0Vdqt53zPn44X6z0Q7FSwAYwvgq/WUs9p4gkiyHp/88v5N4sk
         RVKs0OvH05IdXllCL95Ex12BAqhD05Fk/Ya5b0ngOXx72FyYhReDj5wUI4jv2FTAC9RW
         o3uBXZz1PLTEvrRgVPQW3Z+wkvTc9SWNGD2B2NKbaDtMXUh6KfN4fJea8dNUghkW4Cql
         6Roh2ZHRMFwxitLXLcOfYloT8dVZt8QZE8A38+VNAebjFde9oY0HtXoKO8O5D+kjt8VU
         w1xAUEyu3FU1HfDZJ/ZzRlRZAIUGHvpqAgNTZ5X9vmsNSmgw4ojKMGALsFnxtzK1GSVm
         IYcA==
X-Gm-Message-State: AOAM533EGZzjruNlvtg63ePU3OCAvTVnFX61wYbaVNPxkfP+vMSo22j2
	I5HGTEql42RBjKr6kj2KOaRen5qpHzIJ61PSK3qUSnxprJZdj04sONcA7W2sE81j6GmGHJZg1i/
	O411S+ShRanRk3L47RUo9S/FSZfs=
X-Received: by 2002:a1c:9d8b:: with SMTP id g133mr18420826wme.189.1607345170874;
        Mon, 07 Dec 2020 04:46:10 -0800 (PST)
X-Google-Smtp-Source: ABdhPJxp1cQ0YqyVCzSrPnn6i9/5M4B45ElmdNAkkARQoGv3NPSs4feaDlLXpquqrHa2sqmKk3jUew==
X-Received: by 2002:a1c:9d8b:: with SMTP id g133mr18420802wme.189.1607345170698;
        Mon, 07 Dec 2020 04:46:10 -0800 (PST)
Subject: Re: [PATCH v2 5/5] gitlab-ci: Add Xen cross-build jobs
To: Thomas Huth <thuth@redhat.com>, qemu-devel@nongnu.org
Cc: Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Cornelia Huck <cohuck@redhat.com>, Claudio Fontana <cfontana@suse.de>,
 Willian Rampazzo <wrampazz@redhat.com>, qemu-s390x@nongnu.org,
 Anthony Perard <anthony.perard@citrix.com>,
 =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>,
 Marcelo Tosatti <mtosatti@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 kvm@vger.kernel.org, Stefano Stabellini <sstabellini@kernel.org>
References: <20201207112353.3814480-1-philmd@redhat.com>
 <20201207112353.3814480-6-philmd@redhat.com>
 <9bfd1ed4-baa2-ece8-5b96-ec8fc7a8c547@redhat.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <c335d1f5-e8cb-a9c2-9718-822dc0248fda@redhat.com>
Date: Mon, 7 Dec 2020 13:46:08 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <9bfd1ed4-baa2-ece8-5b96-ec8fc7a8c547@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12/7/20 12:51 PM, Thomas Huth wrote:
> On 07/12/2020 12.23, Philippe Mathieu-Daudé wrote:
>> Cross-build ARM and X86 targets with only Xen accelerator enabled.
>>
>> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>> ---
>>  .gitlab-ci.d/crossbuilds.yml | 15 +++++++++++++++
>>  1 file changed, 15 insertions(+)
>>
>> diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
>> index 7a94a66b4b3..31f10f1e145 100644
>> --- a/.gitlab-ci.d/crossbuilds.yml
>> +++ b/.gitlab-ci.d/crossbuilds.yml
>> @@ -135,3 +135,18 @@ cross-win64-system:
>>    extends: .cross_system_build_job
>>    variables:
>>      IMAGE: fedora-win64-cross
>> +
>> +cross-amd64-xen:
>> +  extends: .cross_accel_build_job
>> +  variables:
>> +    IMAGE: debian-amd64-cross
>> +    ACCEL: xen
>> +    TARGETS: i386-softmmu,x86_64-softmmu
>> +    ACCEL_CONFIGURE_OPTS: --disable-tcg --disable-kvm
>> +
>> +cross-arm64-xen:
>> +  extends: .cross_accel_build_job
>> +  variables:
>> +    IMAGE: debian-arm64-cross
>> +    ACCEL: xen
>> +    TARGETS: aarch64-softmmu
> Could you please simply replace aarch64-softmmu by arm-softmmu in the
> target-list-exclude statement in this file instead of adding a new job for
> arm64? That should have the same results and will spare us one job...

Ah, I now see my mistake: this is not the job I wanted to add; I
probably messed up during rebase. I'll respin with the proper
xen-only config.

> 
>  Thanks,
>   Thomas
> 



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 12:48:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 12:48:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46472.82470 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmFwK-0000RF-LH; Mon, 07 Dec 2020 12:48:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46472.82470; Mon, 07 Dec 2020 12:48:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmFwK-0000R8-GP; Mon, 07 Dec 2020 12:48:28 +0000
Received: by outflank-mailman (input) for mailman id 46472;
 Mon, 07 Dec 2020 12:48:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmFwJ-0000Qv-5p; Mon, 07 Dec 2020 12:48:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmFwI-0008SK-VB; Mon, 07 Dec 2020 12:48:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmFwI-0000C7-MJ; Mon, 07 Dec 2020 12:48:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kmFwI-0002cQ-Lr; Mon, 07 Dec 2020 12:48:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bRDhxJ4FvpwY4KVM80EDmTD1wA/kAQwY4H58mqhDkHE=; b=3CSkwxf5RtgzTxMoE65bRsUKz7
	kA3Z/5/Ts2D3J6+Q1rAxVdP0DsIDLYAO+N25SkEGba7FXaiqFgpU9sFrmJdNpepJorjqbKCLaP4WZ
	LQkGfoWzTLJ0OdSWzzHedP1UbRaJnes86uS2Oo7GeyPLZsjbdQMJtzjOP+t0lNa4wAOo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157255-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157255: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=4b69fab6e20a98f56acd3c717bd53812950fe5b5
X-Osstest-Versions-That:
    ovmf=265eabc905eaa38b7c6deb3fedb83fe6d37e9b11
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 07 Dec 2020 12:48:26 +0000

flight 157255 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157255/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 4b69fab6e20a98f56acd3c717bd53812950fe5b5
baseline version:
 ovmf                 265eabc905eaa38b7c6deb3fedb83fe6d37e9b11

Last test of basis   157214  2020-12-05 01:55:44 Z    2 days
Testing same since   157255  2020-12-07 08:39:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Ray Ni <ray.ni@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   265eabc905..4b69fab6e2  4b69fab6e20a98f56acd3c717bd53812950fe5b5 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 12:49:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 12:49:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46479.82485 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmFxi-0000ZC-VO; Mon, 07 Dec 2020 12:49:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46479.82485; Mon, 07 Dec 2020 12:49:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmFxi-0000Z5-SK; Mon, 07 Dec 2020 12:49:54 +0000
Received: by outflank-mailman (input) for mailman id 46479;
 Mon, 07 Dec 2020 12:49:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>) id 1kmFxh-0000Z0-3G
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 12:49:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1kmFxg-0008Tk-Ei; Mon, 07 Dec 2020 12:49:52 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1kmFxg-0005Dh-2I; Mon, 07 Dec 2020 12:49:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=avA3/AgNUAj6DyXzZAF5RRWLTIEI3aIs4sNfUQvf0+4=; b=DpXQsAghdggFCEbbBGYvoNxUqo
	f9PZ9/smDhPZktVdtNyT0B4bByTZTzsSbppL1rXaCNSdAkN82AXumWNn2mI9xWDl95/09S49/Bv24
	Zv6zaMS5zAjFQJlPZqkWmsvxUiQ4hf5JmdTBsd8xzy5Rxx9x0BSdxgSd/aTBHxgSOc20=;
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v3] x86/vmap: handle superpages in vmap_to_mfn()
Date: Mon,  7 Dec 2020 12:49:33 +0000
Message-Id: <8e7a12064c68a1743dd3bd8c38feae2abea24071.1607345364.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1

From: Hongyan Xia <hongyxia@amazon.com>

There is simply no guarantee that vmap won't return superpages to the
caller. It can happen if the list of MFNs is contiguous, or if we simply
have a large granularity. Although rare, if such things do happen, we
will hit BUG_ON() and crash.

Introduce xen_map_to_mfn() to translate any mapped Xen address to an MFN
regardless of page size, and wrap vmap_to_mfn() around it.

Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changed in v3:
- switch to do-while.
- move the declaration close to map_pages_to_xen().
- add missing parentheses to vmap_to_mfn().

Changed in v2:
- const pl*e
- introduce xen_map_to_mfn().
- goto to a single exit path.
- ASSERT_UNREACHABLE instead of ASSERT.
---
 xen/arch/x86/mm.c          | 56 ++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/page.h |  2 +-
 xen/include/xen/mm.h       |  1 +
 3 files changed, 58 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 5a50339284c7..723cc1070f16 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5194,6 +5194,62 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
         }                                          \
     } while ( false )
 
+/* Translate mapped Xen address to MFN. */
+mfn_t xen_map_to_mfn(unsigned long va)
+{
+#define CHECK_MAPPED(cond)          \
+    do {                            \
+        if ( !(cond) )              \
+        {                           \
+            ASSERT_UNREACHABLE();   \
+            ret = INVALID_MFN;      \
+            goto out;               \
+        }                           \
+    } while ( false )
+
+    bool locking = system_state > SYS_STATE_boot;
+    unsigned int l2_offset = l2_table_offset(va);
+    unsigned int l1_offset = l1_table_offset(va);
+    const l3_pgentry_t *pl3e = virt_to_xen_l3e(va);
+    const l2_pgentry_t *pl2e = NULL;
+    const l1_pgentry_t *pl1e = NULL;
+    struct page_info *l3page;
+    mfn_t ret;
+
+    L3T_INIT(l3page);
+    CHECK_MAPPED(pl3e);
+    l3page = virt_to_page(pl3e);
+    L3T_LOCK(l3page);
+
+    CHECK_MAPPED(l3e_get_flags(*pl3e) & _PAGE_PRESENT);
+    if ( l3e_get_flags(*pl3e) & _PAGE_PSE )
+    {
+        ret = mfn_add(l3e_get_mfn(*pl3e),
+                      (l2_offset << PAGETABLE_ORDER) + l1_offset);
+        goto out;
+    }
+
+    pl2e = map_l2t_from_l3e(*pl3e) + l2_offset;
+    CHECK_MAPPED(l2e_get_flags(*pl2e) & _PAGE_PRESENT);
+    if ( l2e_get_flags(*pl2e) & _PAGE_PSE )
+    {
+        ret = mfn_add(l2e_get_mfn(*pl2e), l1_offset);
+        goto out;
+    }
+
+    pl1e = map_l1t_from_l2e(*pl2e) + l1_offset;
+    CHECK_MAPPED(l1e_get_flags(*pl1e) & _PAGE_PRESENT);
+    ret = l1e_get_mfn(*pl1e);
+
+#undef CHECK_MAPPED
+ out:
+    L3T_UNLOCK(l3page);
+    unmap_domain_page(pl1e);
+    unmap_domain_page(pl2e);
+    unmap_domain_page(pl3e);
+    return ret;
+}
+
 int map_pages_to_xen(
     unsigned long virt,
     mfn_t mfn,
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index 7a771baf7cb3..082c14a66226 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -291,7 +291,7 @@ void copy_page_sse2(void *, const void *);
 #define pfn_to_paddr(pfn)   __pfn_to_paddr(pfn)
 #define paddr_to_pfn(pa)    __paddr_to_pfn(pa)
 #define paddr_to_pdx(pa)    pfn_to_pdx(paddr_to_pfn(pa))
-#define vmap_to_mfn(va)     l1e_get_mfn(*virt_to_xen_l1e((unsigned long)(va)))
+#define vmap_to_mfn(va)     xen_map_to_mfn((unsigned long)(va))
 #define vmap_to_page(va)    mfn_to_page(vmap_to_mfn(va))
 
 #endif /* !defined(__ASSEMBLY__) */
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index f7975b2df00b..1475f352e411 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -175,6 +175,7 @@ bool scrub_free_pages(void);
 } while ( false )
 #define FREE_XENHEAP_PAGE(p) FREE_XENHEAP_PAGES(p, 0)
 
+mfn_t xen_map_to_mfn(unsigned long va);
 /* Map machine page range in Xen virtual address space. */
 int map_pages_to_xen(
     unsigned long virt,
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 12:54:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 12:54:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46491.82497 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmG1b-0001aN-Gk; Mon, 07 Dec 2020 12:53:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46491.82497; Mon, 07 Dec 2020 12:53:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmG1b-0001aG-DW; Mon, 07 Dec 2020 12:53:55 +0000
Received: by outflank-mailman (input) for mailman id 46491;
 Mon, 07 Dec 2020 12:53:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Pxd=FL=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kmG1a-0001aB-Fa
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 12:53:54 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id b42b3295-87c7-498b-9236-07f88b8b9867;
 Mon, 07 Dec 2020 12:53:53 +0000 (UTC)
Received: from mail-wr1-f70.google.com (mail-wr1-f70.google.com
 [209.85.221.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-224-8Ov9-2QBPW6suenxyv9owQ-1; Mon, 07 Dec 2020 07:53:52 -0500
Received: by mail-wr1-f70.google.com with SMTP id r11so2891016wrs.23
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 04:53:51 -0800 (PST)
Received: from [192.168.1.36] (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id h15sm15059392wrw.15.2020.12.07.04.53.49
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 07 Dec 2020 04:53:50 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b42b3295-87c7-498b-9236-07f88b8b9867
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607345633;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=8jQXhZcYJDxQ+pyrZJ9955D0Y/b7j+Ci1hyg9wiMJqU=;
	b=B5mNN6ssD+k8xdQceVG3A/oXt1xDCPvwExE0Jo6i1pil8ftFR8M/oq/lmJWRyRInCsV7L7
	ICPVXXqSqRbPvi8nbyB5seKrq0wXUGWiU+8xSsm+DMHdHK/IQcv9zI7wHqivS1wnK00oKX
	lyacLPihPVns0fG0bueqa6bE5hqWXow=
X-MC-Unique: 8Ov9-2QBPW6suenxyv9owQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=8jQXhZcYJDxQ+pyrZJ9955D0Y/b7j+Ci1hyg9wiMJqU=;
        b=NU9MxdvQB3wZXGCFUxRb818d2kKO3/oAKvMXLYBCq3YWWn2HGpEo62yghSxnVhOUkt
         M9Tg9ngTlMOHBwoclVnTsTOjLAwaiqiYXDhefnw50GcGp/zUukxd72rLyO4ZTNVCqSHG
         FYDkTOUnegjsp/rkcVcdo36y/ZDJCODsdyW99aQ7Ivl3iku5IsjnR/tr4YpYg3gWQVvm
         0B52tAum8xPWge2b9E1ziBx7EGgOSD5Lx9Z+5d74q9rJTmYWHJyI2Zy8eBk3inOqlsSf
         amrcyqSVbLRlHyklBzOzS/Iij+AyYTL50e/ECiOw6+lgYBU5dH17lV9zBtnKY4BT3+zw
         ndQQ==
X-Gm-Message-State: AOAM533vt8AqhjGuC9ubw5egJ9WaRnhlfaPSrQ3b11ZX6S1i4y8PZSca
	6gwOIfL+rbJ6QIXZay+5mUDwozFsZCd5y5/rHZTiP8zJINRGQBGH0gva5frgk9fp9XXutCjbYr9
	602ogVmibi4oFTk2ATMXMVwPKx/Q=
X-Received: by 2002:adf:e547:: with SMTP id z7mr16664444wrm.283.1607345630922;
        Mon, 07 Dec 2020 04:53:50 -0800 (PST)
X-Google-Smtp-Source: ABdhPJwL3U84FFw+0qFmC+nPsQaUjHyjhjpgeVy6OSntbXqxnSM9myKiFBtd0teG8BcdiMc8S14tKw==
X-Received: by 2002:adf:e547:: with SMTP id z7mr16664424wrm.283.1607345630765;
        Mon, 07 Dec 2020 04:53:50 -0800 (PST)
Subject: Re: [PATCH v2 3/5] gitlab-ci: Introduce 'cross_accel_build_job'
 template
To: Thomas Huth <thuth@redhat.com>, qemu-devel@nongnu.org
Cc: Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Cornelia Huck <cohuck@redhat.com>, Claudio Fontana <cfontana@suse.de>,
 Willian Rampazzo <wrampazz@redhat.com>, qemu-s390x@nongnu.org,
 Anthony Perard <anthony.perard@citrix.com>,
 =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>,
 Marcelo Tosatti <mtosatti@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 kvm@vger.kernel.org, Stefano Stabellini <sstabellini@kernel.org>
References: <20201207112353.3814480-1-philmd@redhat.com>
 <20201207112353.3814480-4-philmd@redhat.com>
 <93d186c0-feea-8e47-2a03-5276fb898bff@redhat.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <39d13e9c-1fb4-2fa9-6cf4-01086ad920aa@redhat.com>
Date: Mon, 7 Dec 2020 13:53:48 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <93d186c0-feea-8e47-2a03-5276fb898bff@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12/7/20 12:37 PM, Thomas Huth wrote:
> On 07/12/2020 12.23, Philippe Mathieu-Daudé wrote:
>> Introduce a job template to cross-build accelerator specific
>> jobs (enable a specific accelerator, disabling the others).
>>
>> The specific accelerator is selected by the $ACCEL environment
>> variable (defaults to KVM).
>>
>> Extra options, such as disabling other accelerators, are passed
>> via the $ACCEL_CONFIGURE_OPTS environment variable.
>>
>> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>> ---
>>  .gitlab-ci.d/crossbuilds.yml | 17 +++++++++++++++++
>>  1 file changed, 17 insertions(+)
>>
>> diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
>> index 099949aaef3..d8685ade376 100644
>> --- a/.gitlab-ci.d/crossbuilds.yml
>> +++ b/.gitlab-ci.d/crossbuilds.yml
>> @@ -13,6 +13,23 @@
>>            xtensa-softmmu"
>>      - make -j$(expr $(nproc) + 1) all check-build
>>  
>> +# Job to cross-build specific accelerators.
>> +#
>> +# Set the $ACCEL variable to select the specific accelerator (default to
>> +# KVM), and set extra options (such disabling other accelerators) via the
>> +# $ACCEL_CONFIGURE_OPTS variable.
>> +.cross_accel_build_job:
>> +  stage: build
>> +  image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
>> +  timeout: 30m
>> +  script:
>> +    - mkdir build
>> +    - cd build
>> +    - PKG_CONFIG_PATH=$PKG_CONFIG_PATH
>> +      ../configure --enable-werror $QEMU_CONFIGURE_OPTS --disable-tools
>> +        --enable-${ACCEL:-kvm} --target-list="$TARGETS" $ACCEL_CONFIGURE_OPTS
>> +    - make -j$(expr $(nproc) + 1) all check-build
>> +
>>  .cross_user_build_job:
>>    stage: build
>>    image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
> 
> I wonder whether we could also simply use the .cross_user_build_job - e.g.
> by adding a $EXTRA_CONFIGURE_OPTS variable in the "../configure ..." line so
> that the accel-jobs could use that for their --enable... and --disable...
> settings?

Well, cross_user_build_job builds tools (I'm not sure that's desired).

> Anyway, I've got no strong opinion on that one, and I'm also fine if we add
> this new template, so:
> 
> Reviewed-by: Thomas Huth <thuth@redhat.com>

Thanks, we can improve on top.
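[Editorial note: the `--enable-${ACCEL:-kvm}` construct in the template above relies on standard POSIX parameter expansion: `${ACCEL:-kvm}` expands to the value of ACCEL when it is set and non-empty, and to the literal `kvm` otherwise. A quick illustration:]

```shell
#!/bin/sh
# ${VAR:-default} picks the default when VAR is unset OR empty.

unset ACCEL
echo "--enable-${ACCEL:-kvm}"   # unset -> --enable-kvm

ACCEL=""
echo "--enable-${ACCEL:-kvm}"   # empty counts as unset with the :- form

ACCEL=tcg
echo "--enable-${ACCEL:-kvm}"   # explicit value wins -> --enable-tcg
```

[This is why a job that leaves $ACCEL undefined gets a KVM build by default.]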



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 13:00:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 13:00:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46498.82508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmG7k-0002Vb-C1; Mon, 07 Dec 2020 13:00:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46498.82508; Mon, 07 Dec 2020 13:00:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmG7k-0002VU-8v; Mon, 07 Dec 2020 13:00:16 +0000
Received: by outflank-mailman (input) for mailman id 46498;
 Mon, 07 Dec 2020 13:00:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zJBf=FL=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1kmG7i-0002VP-NV
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 13:00:15 +0000
Received: from wout5-smtp.messagingengine.com (unknown [64.147.123.21])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 094fb9b1-0635-4a57-ac0c-20852b50e314;
 Mon, 07 Dec 2020 13:00:13 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.west.internal (Postfix) with ESMTP id 0FFF8E91;
 Mon,  7 Dec 2020 08:00:11 -0500 (EST)
Received: from mailfrontend2 ([10.202.2.163])
 by compute3.internal (MEProxy); Mon, 07 Dec 2020 08:00:12 -0500
Received: from mail-itl (ip5b40aa59.dynamic.kabel-deutschland.de
 [91.64.170.89])
 by mail.messagingengine.com (Postfix) with ESMTPA id 45DE01080057;
 Mon,  7 Dec 2020 08:00:09 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 094fb9b1-0635-4a57-ac0c-20852b50e314
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; bh=PR9uLY
	7bMTXb3ccEUWjKah2caVoXSIMVfxwaCLvgNBE=; b=Dp3qhOp8E8VI1eDEaGBn2P
	0bmQe+ei+wquG+5rF25NZ2wLqEeTxBQUC/O5SpPyPycEb86TdfJPGufeqIu/zFzp
	tSi934lnHI/jfZg2jVgVt/LFdwSXEPLwbrVVULh2COQlnB8bIbJ6z8x8bpJ53m/X
	w4wwIDcAIILYqC+sWG0dJDtOv72NHnLnPHO3M8XEkEqw59bGe6g+qsiu5GIbSZnQ
	yxGr1h5QkmXcuelqEjZPVkTwgAwQ/JzwgWWCN1dHzaGN4ul9Y6UayzyPqrAJIOsA
	BtHk2BUWfttB8lq/oFONiBU/IHm0KnUjhggeoYHjy51/pDor+v8gJnvH0WXR9i9g
	==
X-ME-Sender: <xms:WifOXypM9_2YDi1EacapM8qALGbTEEgo-ZXGAX_-pLjU5ZLPyBeHcQ>
    <xme:WifOX9nn_YZKCC3BkHiCH7YEPlG_1yCe7YKl6-LhrVjyPu-LZcIcO86UpyuzMeF6a
    RmocR4N3VnDKQ>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedrudejgedggeeiucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvffukfhfgggtuggjsehgtderredttdejnecuhfhrohhmpeforghrvghk
    ucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesihhnvh
    hishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeetveff
    iefghfekhffggeeffffhgeevieektedthfehveeiheeiiedtudegfeetffenucfkpheple
    durdeigedrudejtddrkeelnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehm
    rghilhhfrhhomhepmhgrrhhmrghrvghksehinhhvihhsihgslhgvthhhihhnghhslhgrsg
    drtghomh
X-ME-Proxy: <xmx:WifOXy3LXfdYwi_ZVz7D9Gt7DJF2RaCYQHtQdFa7HrBnxbQHra8eng>
    <xmx:WifOX6R-NBPC_jIS9eEfXz0BZASSVT6H03dNWw5wkNxNioat3liiLQ>
    <xmx:WifOX7xBCS_56QHqiAgk3BEp-2gGEuYdltM3tCwzu7J-p5uHDt0YYA>
    <xmx:WyfOX-8NTQfrDfHgAaHM44yN-_X7_bB9PMdIK-jlBwyWqIHrShR9Gw>
Date: Mon, 7 Dec 2020 14:00:04 +0100
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Christoph Hellwig <hch@lst.de>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Keith Busch <kbusch@kernel.org>, Jens Axboe <axboe@fb.com>,
	Sagi Grimberg <sagi@grimberg.me>, linux-nvme@lists.infradead.org
Subject: Re: GPF on 0xdead000000000100 in nvme_map_data - Linux 5.9.9
Message-ID: <20201207130004.GG1244@mail-itl>
References: <20201130164010.GA23494@redsun51.ssa.fujisawa.hgst.com>
 <20201202000642.GJ201140@mail-itl>
 <20201204110847.GU201140@mail-itl>
 <20201204120803.GA20727@lst.de>
 <20201204122054.GV201140@mail-itl>
 <20201205082839.ts3ju6yta46cgwjn@Air-de-Roger>
 <CAKf6xpvdD-XJoRO91B+Lwc=0Sb6Luw2X8Y9sH_MQsAWhZmj+hw@mail.gmail.com>
 <293433c5-d23b-63e7-d607-9d24f06c46b4@suse.com>
 <20201207114805.GF1244@mail-itl>
 <9bf64b27-51e8-a734-e15e-8da6d2eda736@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="Tu8ztk+XgTAiG9Id"
Content-Disposition: inline
In-Reply-To: <9bf64b27-51e8-a734-e15e-8da6d2eda736@suse.com>


--Tu8ztk+XgTAiG9Id
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Subject: Re: GPF on 0xdead000000000100 in nvme_map_data - Linux 5.9.9

On Mon, Dec 07, 2020 at 01:00:14PM +0100, Jürgen Groß wrote:
> On 07.12.20 12:48, Marek Marczykowski-Górecki wrote:
> > On Mon, Dec 07, 2020 at 11:55:01AM +0100, Jürgen Groß wrote:
> > > Marek,
> > >
> > > On 06.12.20 17:47, Jason Andryuk wrote:
> > > > On Sat, Dec 5, 2020 at 3:29 AM Roger Pau Monné <roger.pau@citrix.com> wrote:
> > > > >
> > > > > On Fri, Dec 04, 2020 at 01:20:54PM +0100, Marek Marczykowski-Górecki wrote:
> > > > > > On Fri, Dec 04, 2020 at 01:08:03PM +0100, Christoph Hellwig wrote:
> > > > > > > On Fri, Dec 04, 2020 at 12:08:47PM +0100, Marek Marczykowski-Górecki wrote:
> > > > > > > > culprit:
> > > > > > > >
> > > > > > > > commit 9e2369c06c8a181478039258a4598c1ddd2cadfa
> > > > > > > > Author: Roger Pau Monne <roger.pau@citrix.com>
> > > > > > > > Date:   Tue Sep 1 10:33:26 2020 +0200
> > > > > > > >
> > > > > > > >       xen: add helpers to allocate unpopulated memory
> > > > > > > >
> > > > > > > > I'm adding relevant people and xen-devel to the thread.
> > > > > > > > For completeness, here is the original crash message:
> > > > > > >
> > > > > > > That commit definitively adds a new ZONE_DEVICE user, so it does look
> > > > > > > related.  But you are not running on Xen, are you?
> > > > > >
> > > > > > I am. It is Xen dom0.
> > > > >
> > > > > I'm afraid I'm on leave and won't be able to look into this until the
> > > > > beginning of January. I would guess it's some kind of bad
> > > > > interaction between blkback and NVMe drivers both using ZONE_DEVICE?
> > > > >
> > > > > Maybe the best is to revert this change and I will look into it when
> > > > > I get back, unless someone is willing to debug this further.
> > > >
> > > > Looking at commit 9e2369c06c8a and xen-blkback put_free_pages() , they
> > > > both use page->lru which is part of the anonymous union shared with
> > > > *pgmap.  That matches Marek's suspicion that the ZONE_DEVICE memory is
> > > > being used as ZONE_NORMAL.
> > > >
> > > > memmap_init_zone_device() says:
> > > > * ZONE_DEVICE pages union ->lru with a ->pgmap back pointer
> > > > * and zone_device_data.  It is a bug if a ZONE_DEVICE page is
> > > > * ever freed or placed on a driver-private list.
> > >
> > > Second try, now even tested to work on a test system (without NVMe).
> >
> > It doesn't work for me:
> >
> > [  526.023340] xen-blkback: backend/vbd/1/51712: using 2 queues, protocol 1 (x86_64-abi) persistent grants
> > [  526.030550] xen-blkback: backend/vbd/1/51728: using 2 queues, protocol 1 (x86_64-abi) persistent grants
> > [  526.034810] BUG: kernel NULL pointer dereference, address: 0000000000000010
>
> Oh, indeed. Silly bug. My test was with qdisk as backend :-(
>
> 3rd try...

Now it works :)

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

--Tu8ztk+XgTAiG9Id
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAl/OJ1UACgkQ24/THMrX
1ywkjAf+Mf2G0ndFf12si0XOBlJIxCtaJF+I+52yefMe26dEIGqgetpMs6U5rfec
9mX2MR0B+UP1PApuXI72PKcGbefvtvD3k5wrB2xYYsEPp5AbE01pbL6X0ruIkHGH
VqfoT2rv1Qay8fGSyiyG+FGGVl4jRSHKeLe3CHkdmj0zly9Id5WWND/pSLPS7czJ
/ieBAxTIYjBvKYpJi8kgxfUdYQTAeaYNbUMSnkyxmpP+VeTDWg5r+mCGAGt25KAk
J5u9btEgWSsPylQp3+qjVvqahEdulTcNzVi/B4TdPoJcmVBuQ1WpE27VKyrkZboY
Bj/c8yLu5uIhrc7McUnGumZe4iW+bg==
=lL4Q
-----END PGP SIGNATURE-----

--Tu8ztk+XgTAiG9Id--


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 13:03:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 13:03:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46507.82520 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmGBC-0002ps-U1; Mon, 07 Dec 2020 13:03:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46507.82520; Mon, 07 Dec 2020 13:03:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmGBC-0002pk-Qy; Mon, 07 Dec 2020 13:03:50 +0000
Received: by outflank-mailman (input) for mailman id 46507;
 Mon, 07 Dec 2020 13:03:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rZzU=FL=arm.com=wei.chen@srs-us1.protection.inumbo.net>)
 id 1kmGBB-0002pf-8C
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 13:03:49 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.52]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c698abe1-8912-44df-8fc3-acd658299e34;
 Mon, 07 Dec 2020 13:03:46 +0000 (UTC)
Received: from AM5PR0201CA0014.eurprd02.prod.outlook.com
 (2603:10a6:203:3d::24) by VI1PR08MB5470.eurprd08.prod.outlook.com
 (2603:10a6:803:136::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.20; Mon, 7 Dec
 2020 13:03:41 +0000
Received: from AM5EUR03FT046.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:3d:cafe::d4) by AM5PR0201CA0014.outlook.office365.com
 (2603:10a6:203:3d::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.17 via Frontend
 Transport; Mon, 7 Dec 2020 13:03:41 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT046.mail.protection.outlook.com (10.152.16.164) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3632.17 via Frontend Transport; Mon, 7 Dec 2020 13:03:41 +0000
Received: ("Tessian outbound fc5cc0046d61:v71");
 Mon, 07 Dec 2020 13:03:41 +0000
Received: from 9126baf25f32.3
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 1C3708C4-2081-4DC9-9CD4-39A44DB7212E.1; 
 Mon, 07 Dec 2020 13:03:36 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 9126baf25f32.3
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 07 Dec 2020 13:03:36 +0000
Received: from AM0PR08MB3747.eurprd08.prod.outlook.com (2603:10a6:208:105::24)
 by AM4PR08MB2770.eurprd08.prod.outlook.com (2603:10a6:205:d::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.23; Mon, 7 Dec
 2020 13:03:32 +0000
Received: from AM0PR08MB3747.eurprd08.prod.outlook.com
 ([fe80::61dc:45c4:eef0:c88f]) by AM0PR08MB3747.eurprd08.prod.outlook.com
 ([fe80::61dc:45c4:eef0:c88f%6]) with mapi id 15.20.3632.023; Mon, 7 Dec 2020
 13:03:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c698abe1-8912-44df-8fc3-acd658299e34
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=v1yhUkHoQlw9AQL3mJTAjNxkZ8PKkfnzpGqSJFQ7M3o=;
 b=FWL7cW7hS4OCIfGAPozi09oHyvkH70anb0go7FWQE3TinO0UMsOmTfv139g9A8/2Af44L6crV01gGOpxQ+3nF+aw4QIBdN4eT5m/1NBERh6tjET2I/nutCtNO/+sCCK4el+ayumYo1K7LY8DuH5AMPvK/3DTeBOV0LhjAtFHX6c=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=f9K3wwTl3hWuF3v+jlXobCvkt/cTh/LkOvQElw+VRrNwimdY9hM7u3Kr3ogTuzhnzw6cIUhEgrmrZ8644WXetP9sWiXMg7HdUJGvtdUPKpc/MNplxVWuxRrqSYr/1RVN7Ftuc/cO4WHyWd7RMxtaOqJjUHkzeHA8Tcu/QeHhUuvKdpzIl2d69AjT/mCmPtxyhcg8bJy64TLHwUvmD83JaBVX9HByq/Ld+NgVl/ep3WXgLudlYbVoKqdl29HXClf3qMi8LaDGzvwhXVs56UxX7Es1Q5RHkGpgmEfWmgAVz6DzhGVfUzlLuQ1oviT+dhUOS+upjulHEhKSpNuCCeJiDQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=v1yhUkHoQlw9AQL3mJTAjNxkZ8PKkfnzpGqSJFQ7M3o=;
 b=cvQWQhRne9VSH5IV6TB90B9PGSZkGIqb9blrsZEQJGnQ+71YJfHvccKLzcactBtBfz/jK4jFjB37LRaBkQ8G5noiq2AvRcmGqchvsshOU/iZFPvcJidpa4XVYC2ptHXgmeOi6JAM3YsZZ3oWLqsQyolfqOMoHJLc5w9mOc7FuwF6Fg0NvkosHV85Hr6eFwQy5MNkqNfCqqNrveXFcuB1WjcEuInVcOyx82dgozgvVlqLlt7TuJybPGMClNSWCoLZQKje0Xa7VaAFMGVqrcwQSfRH0c2SXjvH3L2q+34/jEr1i7rufKhhHOjcVj3oPfhgOgcPcoUUm2oSBevJ3lw2Rw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=v1yhUkHoQlw9AQL3mJTAjNxkZ8PKkfnzpGqSJFQ7M3o=;
 b=FWL7cW7hS4OCIfGAPozi09oHyvkH70anb0go7FWQE3TinO0UMsOmTfv139g9A8/2Af44L6crV01gGOpxQ+3nF+aw4QIBdN4eT5m/1NBERh6tjET2I/nutCtNO/+sCCK4el+ayumYo1K7LY8DuH5AMPvK/3DTeBOV0LhjAtFHX6c=
From: Wei Chen <Wei.Chen@arm.com>
To: Oleksandr <olekstysh@gmail.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Paul Durrant
	<paul@xen.org>, Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Julien Grall
	<Julien.Grall@arm.com>, George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Tim Deegan <tim@xen.org>, Daniel De Graaf
	<dgdegra@tycho.nsa.gov>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Jun
 Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>, Anthony
 PERARD <anthony.perard@citrix.com>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Kaly Xin <Kaly.Xin@arm.com>, Artem Mygaiev
	<joculator@gmail.com>, =?utf-8?B?QWxleCBCZW5uw6ll?= <alex.bennee@linaro.org>
Subject: RE: [PATCH V3 00/23] IOREQ feature (+ virtio-mmio) on Arm
Thread-Topic: [PATCH V3 00/23] IOREQ feature (+ virtio-mmio) on Arm
Thread-Index: AQHWxwQnSZ1vYGMEHEmWeMwlPW4ML6ngiEAAgAsbpMA=
Date: Mon, 7 Dec 2020 13:03:31 +0000
Message-ID:
 <AM0PR08MB3747E17FB0F59A85CA72AEDA9ECE0@AM0PR08MB3747.eurprd08.prod.outlook.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <66df4a0b-166a-81c3-9237-854649c832f9@gmail.com>
In-Reply-To: <66df4a0b-166a-81c3-9237-854649c832f9@gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: ECB3875155E27E4A9FC21E115B194115.0
x-checkrecipientchecked: true
Authentication-Results-Original: gmail.com; dkim=none (message not signed)
 header.d=none;gmail.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [218.82.176.105]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 9f35903f-a95b-44ce-861c-08d89ab08567
x-ms-traffictypediagnostic: AM4PR08MB2770:|VI1PR08MB5470:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB5470AF543A4A8F42FEDBACB49ECE0@VI1PR08MB5470.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 46NsZZdexxjJFPg87SN/Km5oqHpqPATP6qtS0MiYXZQcpIuS6D21Sw/4SG5ourmHZnBMI+UZv+eItBDvpnd3Zd5mbvRj1GNhb9DL5zs/NTCXimtZzTHldNB6FaeUvrOyvZfCxapUceJHgOPrO4uyRYR+vQisah2MJldBUl4m3sUspeHV0QT5XiLgSNP3nUuP6gbRijGwb3H6haHx7bEU1hvkpnwjvy+GH8xlr0Lou114L+dqhOhlsSI3GkbyJFJR7iGMNN/f3O5axBf8VhzW2Kgbu/JoLWHzItpTCnJk0K2b2wDJ3pNtJV40my5DTEg8XqHMABxu57TjuvRNbgXFnC324WRMyGhSCBiKGtjcMp9r71cBL/bhHmxEKPzu8ihOICaliDCvTrRjaBo46ntzxQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB3747.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(366004)(396003)(376002)(346002)(136003)(86362001)(30864003)(66574015)(66476007)(478600001)(64756008)(71200400001)(26005)(52536014)(966005)(66446008)(5660300002)(186003)(54906003)(76116006)(4326008)(2906002)(53546011)(110136005)(66556008)(9686003)(8936002)(55016002)(8676002)(316002)(6506007)(83380400001)(33656002)(66946007)(7696005)(7416002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?utf-8?B?THZBYnppL3FNeGJac3pMOTZXL3FtemlGQ1k4MXM0SFQ3bUdDaExtaVBvWjVU?=
 =?utf-8?B?eHBjSkR0MjB3Ti9tNDlqS1YxaUxNSFhPTXM1ZTZwaWJna3Q3Rm9vTlMxOHhx?=
 =?utf-8?B?YlNKdDlGL1RzbzdzRkZBL2h0NXVyUHlXSDNuTi9OT3cyRW9vT2VkSnJNQ0dW?=
 =?utf-8?B?eHRvRFpQbHZ3ZGZUYjlzMDEvd0lhZHp5d0RkdFJpYWhNRzNBNWV6QmMzeGZm?=
 =?utf-8?B?TzBENm9FOEdVTjJUSUg2dTRsaldqSlIxcWR3ak16U3o0VXVUT1lIczNTdVRB?=
 =?utf-8?B?YVRmTEx2U3R2YWUramd6ay9GVkNIT2dMY01VN3RuMFVYQ0dRY0tEdTVSb3g3?=
 =?utf-8?B?YnVFU1BmOGdyZGpTV0cxa3pXWFNDbG9xWFI3L3JjejBOTm5ob2JlUGM1cTdp?=
 =?utf-8?B?QW1LMmx6K09EMEhMZk4yc2hIdXAza3F3Tkkzelpjemg4VGZ4SktnSFJCcm04?=
 =?utf-8?B?VVBhWHJWQldkU2EzbWJOT253d0FpdWh3TWZFQytWNXpVaWFQRCt6TXlVVkdz?=
 =?utf-8?B?akwvajZ5SEJzT2FzMkR0RjVaemgyWG5pNXpqbHE3SXltUjBVQ0JGSjBmekJj?=
 =?utf-8?B?NXI5eVlzNGFpVU4rMm5lUkVtVVJ0Q1hCanYxTXZNT2RuSytYckZET3hrUWlh?=
 =?utf-8?B?UWRDNWpMWVNiMkpWckF6NGE1NzRLWU94U3pyc2ZhRzVQdGdsNjFpUC9CRGRY?=
 =?utf-8?B?eUlEM1dlTlVrQlFyZkxyWGFYTzlkbTB6MWVuOWtQL0ZzbmoxcWxpQVpWdFBB?=
 =?utf-8?B?VVBqemVVK00zMVJVVndHMEE1dmVObWJwa20vaERVL2tBdWZFZCtCVEhwZ1Vk?=
 =?utf-8?B?SUdzUmdnT0VGSzFNUmJZaEhBaGNIMHdPRjlGUkZsTGlJQTZlbmNRSHhKN3Y0?=
 =?utf-8?B?UTJCTzd5cFFMSGlWKzMvc3ZmQWd4dUJrdG4zYVFQV01Ud085UGhjUHdZdUNt?=
 =?utf-8?B?YU5VOTQ1V1hjdVpETFFqZG9SbG5FdXZIdjZFcmliOGRTeWdCTVZzZnVCWk5J?=
 =?utf-8?B?dWZTQ3VJUkVMZjhBdW1Mc2JSbTFyQzhoSFdITy9DL05YSkR6bVZTNlA0aWhm?=
 =?utf-8?B?NUFleVREdTJtcS9BSkRwaDU3YURJZ0hUOFVncnpvOW1vU0NsMCttVCttTURi?=
 =?utf-8?B?OWpMaksrOG9nbFo4a25nQ0NzeTQrSlREWDJTNnVXZTN0bFVyak9LSGtZUDZN?=
 =?utf-8?B?bGU0TjZNNTZ6a3d3Ui9MMno5KzZmK1EwRVM3aVdwWWN1UG55RHNycU9YT3Y2?=
 =?utf-8?B?dVlMRmcvU3pMVFN2NVE0aXQwWStzQWVxNHJYMUN5c0pCTzB0N2NSc3k0bmpF?=
 =?utf-8?Q?7XgxgL/UtEO8g=3D?=
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR08MB2770
Original-Authentication-Results: gmail.com; dkim=none (message not signed)
 header.d=none;gmail.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT046.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	4758628e-4283-44ba-49e3-08d89ab07fa7
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	PPuXaUSK/jYsISM2cr4zXmaKGoFtv1pTPeEROcJg1S8RQbQrKcRB3psvw/iNcn4Z486h7RcyqJ2GouD3t8BJUoJ8khyt/+L+SP7Hn/eQurX00HSQwhxYonXyLptkx2GabVS+M2e33EWRVLaG6pCcDWEqFzAAQXiW29LZ6+0igi+5n4Y3t6ougBfUgG1cyjFIaxTdusUn3lURNqT4TA5N1OAZAa1lTj7WoLzNiRUWz+VfIYvgxFidbyxfADUIBQ9fWSpR2bMtdb101w5hpRzWSbXJUI+44eDsO4xCZPTgajNBPhaLp/PLcQhv+dash4mQYZvSuV6gDvUZzw6TQiPQw+qmx2+vuLX+ZN9aumhvYfUu7cBu0y6mZvIWBU4jCZS7rysGPPIep5zxET7F8z/W6YJfT0OvRbstb7FJKKZScmO9jESUg+2+mdzMZGheYgU7Ny9bZ8Fgj2z1rufrrL+vEco9gCAgjKQyGoEsAfFMn1Y=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(136003)(39860400002)(396003)(376002)(46966005)(83380400001)(6506007)(5660300002)(82740400003)(70586007)(66574015)(110136005)(33656002)(9686003)(7696005)(55016002)(4326008)(478600001)(53546011)(82310400003)(30864003)(70206006)(966005)(86362001)(54906003)(52536014)(107886003)(47076004)(36906005)(8936002)(26005)(2906002)(8676002)(186003)(81166007)(356005)(316002)(336012);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Dec 2020 13:03:41.4992
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9f35903f-a95b-44ce-861c-08d89ab08567
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT046.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB5470

Hi Oleksandr,

I have tested v3. It works well with the latest virtio-backend service[1].
[1] https://github.com/xen-troops/virtio-disk/commits/ioreq_ml1

Tested-by: Wei Chen <Wei.Chen@arm.com>

Regards,
Wei Chen

> -----Original Message-----
> From: Oleksandr <olekstysh@gmail.com>
> Sent: 30 November 2020 19:23
> To: xen-devel@lists.xenproject.org
> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Paul Durrant
> <paul@xen.org>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>;
> Wei Liu <wl@xen.org>; Julien Grall <Julien.Grall@arm.com>; George Dunlap
> <george.dunlap@citrix.com>; Ian Jackson <iwj@xenproject.org>; Julien Grall
> <julien@xen.org>; Stefano Stabellini <sstabellini@kernel.org>; Tim Deegan
> <tim@xen.org>; Daniel De Graaf <dgdegra@tycho.nsa.gov>; Volodymyr
> Babchuk <Volodymyr_Babchuk@epam.com>; Jun Nakajima
> <jun.nakajima@intel.com>; Kevin Tian <kevin.tian@intel.com>; Anthony
> PERARD <anthony.perard@citrix.com>; Bertrand Marquis
> <Bertrand.Marquis@arm.com>; Wei Chen <Wei.Chen@arm.com>; Kaly Xin
> <Kaly.Xin@arm.com>; Artem Mygaiev <joculator@gmail.com>; Alex Bennée
> <alex.bennee@linaro.org>
> Subject: Re: [PATCH V3 00/23] IOREQ feature (+ virtio-mmio) on Arm
> 
> 
> On 30.11.20 12:31, Oleksandr Tyshchenko wrote:
> > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> Hello all.
> 
> Added missing subject line. I am sorry for the inconvenience.
> 
> 
> >
> >
> > Date: Sat, 28 Nov 2020 22:33:51 +0200
> > Subject: [PATCH V3 00/23] IOREQ feature (+ virtio-mmio) on Arm
> > MIME-Version: 1.0
> > Content-Type: text/plain; charset=UTF-8
> > Content-Transfer-Encoding: 8bit
> >
> > Hello all.
> >
> > The purpose of this patch series is to add IOREQ/DM support to Xen on Arm.
> > You can find an initial discussion at [1] and the RFC/V1/V2 series at
> > [2]/[3]/[4].
> > Xen on Arm requires some implementation to forward guest MMIO accesses to
> > a device model in order to implement a virtio-mmio backend or even a
> > mediator outside of the hypervisor.
> > As Xen on x86 already contains the required support, this series tries to
> > make it common and introduces Arm specific bits plus some new
> > functionality. The patch series is based on Julien's PoC "xen/arm: Add
> > support for Guest IO forwarding to a device emulator".
> > Besides splitting the existing IOREQ/DM support and introducing the Arm
> > side, the series also includes virtio-mmio related changes (last 2 patches
> > for the toolstack) so that reviewers can see how the whole picture could
> > look.
> >
> > According to the initial discussion there are a few open questions/concerns
> > regarding security and performance of the VirtIO solution:
> > 1. virtio-mmio vs virtio-pci, SPI vs MSI; different use-cases require
> >    different transports...
> > 2. the virtio backend is able to access all guest memory, so some kind of
> >    protection is needed: 'virtio-iommu in Xen' vs 'pre-shared-memory &
> >    memcpys in guest'
> > 3. the interface between the toolstack and an 'out-of-qemu' virtio
> >    backend; avoid using Xenstore in the virtio backend if possible.
> > 4. a lot of 'foreign mapping' could lead to memory exhaustion; Julien
> >    has some ideas regarding that.
> >
> > Looks like all of them are valid and worth considering, but the first
> > thing we need on Arm is a mechanism to forward guest IO to a device
> > emulator, so let's focus on that in the first place.
> >
> > ***
> >
> > There are a lot of changes since the RFC series: almost all TODOs were
> > resolved on Arm, the Arm code was improved and hardened, the common
> > IOREQ/DM code became really arch-agnostic (without HVM-isms), the "legacy"
> > mechanism of mapping magic pages for the IOREQ servers was left x86
> > specific, etc. But one TODO still remains, which is "PIO handling" on Arm.
> > The "PIO handling" TODO is expected to be left unaddressed for the current
> > series. It is not a big issue for now while Xen doesn't have support for
> > vPCI on Arm. On Arm64 PIO accesses are only used for the PCI IO BAR and we
> > would probably want to expose them to the emulator as PIO accesses to make
> > a DM completely arch-agnostic. So "PIO handling" should be implemented
> > when we add support for vPCI.
> >
> > I left the interface untouched in the following patch
> > "xen/dm: Introduce xendevicemodel_set_irq_level DM op"
> > since there is still an open discussion about what interface to use/what
> > information to pass to the hypervisor.
> >
> > There is a patch under review that this series depends on:
> > https://patchwork.kernel.org/patch/11816689
> >
> > Please note that the IOREQ feature is disabled by default on Arm within
> > the current series.
> >
> > ***
> >
> > Patch series [5] was rebased on the recent "staging" branch
> > (181f2c2 evtchn: double per-channel locking can't hit identical channels)
> > and tested on a Renesas Salvator-X board + H3 ES3.0 SoC (Arm64) with the
> > virtio-mmio disk backend [6] running in a driver domain and an unmodified
> > Linux guest running on the existing virtio-blk driver (frontend). No
> > issues were observed. Guest domain 'reboot/destroy' use-cases work
> > properly. The patch series was only build-tested on x86.
> >
> > Please note, the build-test passed for the following modes:
> > 1. x86: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y (default)
> > 2. x86: #CONFIG_HVM is not set / #CONFIG_IOREQ_SERVER is not set
> > 3. Arm64: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y
> > 4. Arm64: CONFIG_HVM=y / #CONFIG_IOREQ_SERVER is not set  (default)
> > 5. Arm32: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y
> > 6. Arm32: CONFIG_HVM=y / #CONFIG_IOREQ_SERVER is not set  (default)
> >
> > ***
> >
> > Any feedback/help would be highly appreciated.
> >
> > [1] https://lists.xenproject.org/archives/html/xen-devel/2020-07/msg00825.html
> > [2] https://lists.xenproject.org/archives/html/xen-devel/2020-08/msg00071.html
> > [3] https://lists.xenproject.org/archives/html/xen-devel/2020-09/msg00732.html
> > [4] https://lists.xenproject.org/archives/html/xen-devel/2020-10/msg01077.html
> > [5] https://github.com/otyshchenko1/xen/commits/ioreq_4.14_ml4
> > [6] https://github.com/xen-troops/virtio-disk/commits/ioreq_ml1
> >
> > Julien Grall (5):
> >    xen/dm: Make x86's DM feature common
> >    xen/mm: Make x86's XENMEM_resource_ioreq_server handling common
> >    arm/ioreq: Introduce arch specific bits for IOREQ/DM features
> >    xen/dm: Introduce xendevicemodel_set_irq_level DM op
> >    libxl: Introduce basic virtio-mmio support on Arm
> >
> > Oleksandr Tyshchenko (18):
> >    x86/ioreq: Prepare IOREQ feature for making it common
> >    x86/ioreq: Add IOREQ_STATUS_* #define-s and update code for moving
> >    x86/ioreq: Provide out-of-line wrapper for the handle_mmio()
> >    xen/ioreq: Make x86's IOREQ feature common
> >    xen/ioreq: Make x86's hvm_ioreq_needs_completion() common
> >    xen/ioreq: Make x86's hvm_mmio_first(last)_byte() common
> >    xen/ioreq: Make x86's hvm_ioreq_(page/vcpu/server) structs common
> >    xen/ioreq: Move x86's ioreq_server to struct domain
> >    xen/ioreq: Move x86's io_completion/io_req fields to struct vcpu
> >    xen/ioreq: Remove "hvm" prefixes from involved function names
> >    xen/ioreq: Use guest_cmpxchg64() instead of cmpxchg()
> >    xen/arm: Stick around in leave_hypervisor_to_guest until I/O has
> >      completed
> >    xen/mm: Handle properly reference in set_foreign_p2m_entry() on Arm
> >    xen/ioreq: Introduce domain_has_ioreq_server()
> >    xen/arm: io: Abstract sign-extension
> >    xen/ioreq: Make x86's send_invalidate_req() common
> >    xen/arm: Add mapcache invalidation handling
> >    [RFC] libxl: Add support for virtio-disk configuration
> >
> >   MAINTAINERS                                  |    8 +-
> >   tools/include/xendevicemodel.h               |    4 +
> >   tools/libs/devicemodel/core.c                |   18 +
> >   tools/libs/devicemodel/libxendevicemodel.map |    1 +
> >   tools/libs/light/Makefile                    |    1 +
> >   tools/libs/light/libxl_arm.c                 |   94 +-
> >   tools/libs/light/libxl_create.c              |    1 +
> >   tools/libs/light/libxl_internal.h            |    1 +
> >   tools/libs/light/libxl_types.idl             |   16 +
> >   tools/libs/light/libxl_types_internal.idl    |    1 +
> >   tools/libs/light/libxl_virtio_disk.c         |  109 +++
> >   tools/xl/Makefile                            |    2 +-
> >   tools/xl/xl.h                                |    3 +
> >   tools/xl/xl_cmdtable.c                       |   15 +
> >   tools/xl/xl_parse.c                          |  116 +++
> >   tools/xl/xl_virtio_disk.c                    |   46 +
> >   xen/arch/arm/Makefile                        |    2 +
> >   xen/arch/arm/dm.c                            |   89 ++
> >   xen/arch/arm/domain.c                        |    9 +
> >   xen/arch/arm/hvm.c                           |    4 +
> >   xen/arch/arm/io.c                            |   29 +-
> >   xen/arch/arm/ioreq.c                         |  126 +++
> >   xen/arch/arm/p2m.c                           |   48 +-
> >   xen/arch/arm/traps.c                         |   58 +-
> >   xen/arch/x86/Kconfig                         |    1 +
> >   xen/arch/x86/hvm/dm.c                        |  295 +-----
> >   xen/arch/x86/hvm/emulate.c                   |   80 +-
> >   xen/arch/x86/hvm/hvm.c                       |   12 +-
> >   xen/arch/x86/hvm/hypercall.c                 |    9 +-
> >   xen/arch/x86/hvm/intercept.c                 |    5 +-
> >   xen/arch/x86/hvm/io.c                        |   26 +-
> >   xen/arch/x86/hvm/ioreq.c                     | 1357 ++------------------------
> >   xen/arch/x86/hvm/stdvga.c                    |   10 +-
> >   xen/arch/x86/hvm/svm/nestedsvm.c             |    2 +-
> >   xen/arch/x86/hvm/vmx/realmode.c              |    6 +-
> >   xen/arch/x86/hvm/vmx/vvmx.c                  |    2 +-
> >   xen/arch/x86/mm.c                            |   46 +-
> >   xen/arch/x86/mm/p2m.c                        |   13 +-
> >   xen/arch/x86/mm/shadow/common.c              |    2 +-
> >   xen/common/Kconfig                           |    3 +
> >   xen/common/Makefile                          |    2 +
> >   xen/common/dm.c                              |  292 ++++++
> >   xen/common/ioreq.c                           | 1307 +++++++++++++++++++++++++
> >   xen/common/memory.c                          |   73 +-
> >   xen/include/asm-arm/domain.h                 |    3 +
> >   xen/include/asm-arm/hvm/ioreq.h              |  139 +++
> >   xen/include/asm-arm/mm.h                     |    8 -
> >   xen/include/asm-arm/mmio.h                   |    1 +
> >   xen/include/asm-arm/p2m.h                    |   19 +-
> >   xen/include/asm-arm/traps.h                  |   24 +
> >   xen/include/asm-x86/hvm/domain.h             |   43 -
> >   xen/include/asm-x86/hvm/emulate.h            |    2 +-
> >   xen/include/asm-x86/hvm/io.h                 |   17 -
> >   xen/include/asm-x86/hvm/ioreq.h              |   58 +-
> >   xen/include/asm-x86/hvm/vcpu.h               |   18 -
> >   xen/include/asm-x86/mm.h                     |    4 -
> >   xen/include/asm-x86/p2m.h                    |   24 +-
> >   xen/include/public/arch-arm.h                |    5 +
> >   xen/include/public/hvm/dm_op.h               |   16 +
> >   xen/include/xen/dm.h                         |   44 +
> >   xen/include/xen/ioreq.h                      |  146 +++
> >   xen/include/xen/p2m-common.h                 |    4 +
> >   xen/include/xen/sched.h                      |   32 +
> >   xen/include/xsm/dummy.h                      |    4 +-
> >   xen/include/xsm/xsm.h                        |    6 +-
> >   xen/xsm/dummy.c                              |    2 +-
> >   xen/xsm/flask/hooks.c                        |    5 +-
> >   67 files changed, 3084 insertions(+), 1884 deletions(-)
> >   create mode 100644 tools/libs/light/libxl_virtio_disk.c
> >   create mode 100644 tools/xl/xl_virtio_disk.c
> >   create mode 100644 xen/arch/arm/dm.c
> >   create mode 100644 xen/arch/arm/ioreq.c
> >   create mode 100644 xen/common/dm.c
> >   create mode 100644 xen/common/ioreq.c
> >   create mode 100644 xen/include/asm-arm/hvm/ioreq.h
> >   create mode 100644 xen/include/xen/dm.h
> >   create mode 100644 xen/include/xen/ioreq.h
> >
> --
> Regards,
> 
> Oleksandr Tyshchenko


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 13:08:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 13:08:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46513.82532 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmGG5-000336-MA; Mon, 07 Dec 2020 13:08:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46513.82532; Mon, 07 Dec 2020 13:08:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmGG5-00032z-Iy; Mon, 07 Dec 2020 13:08:53 +0000
Received: by outflank-mailman (input) for mailman id 46513;
 Mon, 07 Dec 2020 13:08:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmGG4-00032r-MN; Mon, 07 Dec 2020 13:08:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmGG4-0000Tk-DY; Mon, 07 Dec 2020 13:08:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmGG4-0000ur-6m; Mon, 07 Dec 2020 13:08:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kmGG4-00022p-6C; Mon, 07 Dec 2020 13:08:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=I44cwFhP3UAltjlqnhrPrcBPMi5X4LUnfNW67uu1KNI=; b=jmVItaJlqr1ZA37aS8y2qdYr8C
	inHQNpmm19hc6pj0BX8nWGQWZNa02gIP/pUgoKqNgFF+z0H1e8D2K1PDo+ajtlofo81rJtfFeAAKO
	KkVrCsp9Eq4HgJusHixhFqlNVU2W+LV2pbmkbG5o5rdimENSBDut0XXqqAMsI2fqb/Ug=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157249-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157249: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5e666356a9d55fbd9eb5b8506088aa760e107b5b
X-Osstest-Versions-That:
    xen=5e666356a9d55fbd9eb5b8506088aa760e107b5b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 07 Dec 2020 13:08:52 +0000

flight 157249 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157249/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 157233

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157233
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157233
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157233
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157233
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157233
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157233
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157233
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157233
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157233
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157233
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157233
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  5e666356a9d55fbd9eb5b8506088aa760e107b5b
baseline version:
 xen                  5e666356a9d55fbd9eb5b8506088aa760e107b5b

Last test of basis   157249  2020-12-07 01:51:52 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 13:14:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 13:14:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46521.82548 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmGLC-00043j-AW; Mon, 07 Dec 2020 13:14:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46521.82548; Mon, 07 Dec 2020 13:14:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmGLC-00043c-6g; Mon, 07 Dec 2020 13:14:10 +0000
Received: by outflank-mailman (input) for mailman id 46521;
 Mon, 07 Dec 2020 13:14:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmGLB-00043X-05
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 13:14:09 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 873f223c-2466-4295-b409-3b3d045abf8c;
 Mon, 07 Dec 2020 13:14:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CA082B230;
 Mon,  7 Dec 2020 13:14:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 873f223c-2466-4295-b409-3b3d045abf8c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607346846; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ozoHlERTnRf1v54W16lmh/TxgYpprdxVJriZr/tiJeQ=;
	b=mtEMjVnMiYVAZMsC35ssNYGcYZGq4YLp7Z9E5UmPhCWOveoyQa8kvNLkmmWSk9fpznECFH
	yCP8Lodc8u/Lf4P5krI2lXfup0Y08ERrsemYvPKIe6sL+dtwgKlzsGo0JYZ+0iwsk4H+XS
	NbYjp+8ovBdCvg9mrY+NqEX8P6dcUwU=
Subject: Re: [PATCH v3] x86/vmap: handle superpages in vmap_to_mfn()
To: Hongyan Xia <hx242@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <8e7a12064c68a1743dd3bd8c38feae2abea24071.1607345364.git.hongyxia@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8cccd3c1-04a3-3fcf-df6f-d33fb27dfe4e@suse.com>
Date: Mon, 7 Dec 2020 14:14:07 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <8e7a12064c68a1743dd3bd8c38feae2abea24071.1607345364.git.hongyxia@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 07.12.2020 13:49, Hongyan Xia wrote:
> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -175,6 +175,7 @@ bool scrub_free_pages(void);
>  } while ( false )
>  #define FREE_XENHEAP_PAGE(p) FREE_XENHEAP_PAGES(p, 0)
>  
> +mfn_t xen_map_to_mfn(unsigned long va);
>  /* Map machine page range in Xen virtual address space. */
>  int map_pages_to_xen(
>      unsigned long virt,
> 

This really would have been better placed below the sibling
functions and then, following suit, given a one-line comment. Will
adjust while committing; sorry to be picky.
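For illustration, the layout being asked for might look like the sketch
below. Note this is not the committed Xen code: mfn_t, the simplified
map_pages_to_xen signature, and the stub bodies are stand-ins added only
so the fragment is self-contained.

```c
/* Sketch only: the new prototype placed below its sibling declaration,
 * each with a one-line comment, as suggested for xen/include/xen/mm.h.
 * mfn_t and the map_pages_to_xen signature are simplified stand-ins. */
typedef struct { unsigned long m; } mfn_t;

/* Map machine page range in Xen virtual address space. */
int map_pages_to_xen(unsigned long virt, mfn_t mfn,
                     unsigned long nr_mfns, unsigned int flags)
{
    (void)virt; (void)mfn; (void)nr_mfns; (void)flags;
    return 0;  /* stub body so the sketch compiles on its own */
}

/* Find the MFN that a Xen virtual address maps to. */
mfn_t xen_map_to_mfn(unsigned long va)
{
    mfn_t ret = { va >> 12 };  /* illustrative 4k-page frame number */
    return ret;
}
```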

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 13:15:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 13:15:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46525.82559 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmGME-0004AZ-K9; Mon, 07 Dec 2020 13:15:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46525.82559; Mon, 07 Dec 2020 13:15:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmGME-0004AS-H2; Mon, 07 Dec 2020 13:15:14 +0000
Received: by outflank-mailman (input) for mailman id 46525;
 Mon, 07 Dec 2020 13:15:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Pxd=FL=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kmGMC-0004AL-Th
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 13:15:12 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 5b4fc8b5-c61d-4d6f-8bcf-4473de81d46a;
 Mon, 07 Dec 2020 13:15:11 +0000 (UTC)
Received: from mail-wm1-f72.google.com (mail-wm1-f72.google.com
 [209.85.128.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-78-06Pe1iUoMrCNM2-EithplQ-1; Mon, 07 Dec 2020 08:15:08 -0500
Received: by mail-wm1-f72.google.com with SMTP id d16so2487027wmd.1
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 05:15:08 -0800 (PST)
Received: from localhost.localdomain (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id 94sm6169289wrq.22.2020.12.07.05.15.05
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 07 Dec 2020 05:15:06 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5b4fc8b5-c61d-4d6f-8bcf-4473de81d46a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607346911;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=P8E9r6EVqPEiRqlaumtn8A3AH4erPVBibEbVofdGhF4=;
	b=NEYMsYEi4idYlSBrCyGNxo99DfcV6Q3/XyNohssFsdkbjF/z+m+p8lHhD9FMRn0dKhzdxJ
	bLt5lKP3hgEW+s0krQNA+THfTZWZyklROi1ahT3WK0wcVEx42W2hl89NGuqsS4Iu1neGiB
	TBBcAnDavQTqnleW8NV+GYAhiYRhhwU=
X-MC-Unique: 06Pe1iUoMrCNM2-EithplQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=D+Xzt47T3O6uanVPO2OPfmJ/SH9QMRtVuJ5cnLL0G9o=;
        b=iqoS9Ezbsngyuqy9fM+iTo1tUk8tT9k6DzNASaWa0Y/ZhyAoVMIxCgOQTLilpCk8pH
         ekv1aa1f94/j9U/tIPFmcneYQnJFQP6cO/nXXRXBXmyF6dMQ5jI0vJZFOiqquwIQ0ZGU
         +q8A/FzZt+VJwTa6HSQe4jpCqeXf5e51toQKHh3S78pOy7qWp7maIt5rkI3wIwwuDKj9
         DVO3BNM6cSljPz6XekFc6iKQyPjtnp2mVRKYa+gEuaFMTzZYh6N4pzDpFcsVj7DJgak4
         Razx+9eN7YhxEOBYZG+H7IzE1htra15Da2oxi7mWWIUlrnyVRMNrG8lBgzMO1WwETYSZ
         uF7Q==
X-Gm-Message-State: AOAM532dUcoCaCMjPxBzqXRsv7iLcqHnT5EG81OjxqdBtAO6p5qA3czv
	FIMiMkKDt3qsHTpsTsrESK0NqK+67Zy5NIaf8Ze/7REewgi5wWwhol2su3P08bDxiKs72MJ0AF+
	HKr9vCy7R2fShHz7Hht5okX+qGn4=
X-Received: by 2002:a5d:4d88:: with SMTP id b8mr19446883wru.134.1607346907513;
        Mon, 07 Dec 2020 05:15:07 -0800 (PST)
X-Google-Smtp-Source: ABdhPJzpnUya1Cq26y+zerYd2xTIJ5RvuSXM0q1BxVsK+34xdINKnlF/ke5kWAF9RdCASl4P3rK0fg==
X-Received: by 2002:a5d:4d88:: with SMTP id b8mr19446856wru.134.1607346907342;
        Mon, 07 Dec 2020 05:15:07 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Thomas Huth <thuth@redhat.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	kvm@vger.kernel.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	qemu-s390x@nongnu.org,
	Halil Pasic <pasic@linux.ibm.com>,
	Willian Rampazzo <wrampazz@redhat.com>,
	Paul Durrant <paul@xen.org>,
	Cornelia Huck <cohuck@redhat.com>,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Claudio Fontana <cfontana@suse.de>,
	Wainer dos Santos Moschetta <wainersm@redhat.com>
Subject: [PATCH v3 0/5] gitlab-ci: Add accelerator-specific Linux jobs
Date: Mon,  7 Dec 2020 14:14:58 +0100
Message-Id: <20201207131503.3858889-1-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

Since v2:
- Fixed ARM Xen job
- Renamed jobs with -$accel trailer (Thomas)

Since v1:
- Documented cross_accel_build_job template (Claudio)
- Only add new job for s390x (Thomas)
- Do not add entry to MAINTAINERS (Daniel)
- Document 'build-tcg-disabled' job is X86 + KVM
- Drop the patches with negative review feedback

Hi,

I was accustomed to using Travis-CI for testing KVM builds on
s390x/ppc with the Travis-CI jobs.

During October Travis-CI became unusable for me (extremely slow,
see [1]). Then my free Travis account got updated to the new
"10K credit minutes allotment" [2], which I burned without reading
the notification email in time (I'd have burned them eventually anyway).

Today Travis-CI is pointless to me. While I could pay to run my
QEMU jobs, I don't think it is fair for an Open Source project to
ask its forks to pay for a service.

As we want forks to run some CI before contributing patches, and
we have cross-build Docker images available for Linux hosts, I
added some cross KVM/Xen build jobs to Gitlab-CI.

Cross-building doesn't have the same coverage as native building,
since we cannot run the tests, but it is still useful for catching
link failures.

Resulting pipeline:
https://gitlab.com/philmd/qemu/-/pipelines/226240415

Regards,

Phil.

[1] https://travis-ci.community/t/build-delays-for-open-source-project/10272
[2] https://blog.travis-ci.com/2020-11-02-travis-ci-new-billing

Philippe Mathieu-Daudé (5):
  gitlab-ci: Document 'build-tcg-disabled' is a KVM X86 job
  gitlab-ci: Replace YAML anchors by extends (cross_system_build_job)
  gitlab-ci: Introduce 'cross_accel_build_job' template
  gitlab-ci: Add KVM s390x cross-build jobs
  gitlab-ci: Add Xen cross-build jobs

 .gitlab-ci.d/crossbuilds.yml | 78 ++++++++++++++++++++++++++----------
 .gitlab-ci.yml               |  5 +++
 2 files changed, 62 insertions(+), 21 deletions(-)

-- 
2.26.2




From xen-devel-bounces@lists.xenproject.org Mon Dec 07 13:15:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 13:15:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46526.82572 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmGMJ-0004DD-15; Mon, 07 Dec 2020 13:15:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46526.82572; Mon, 07 Dec 2020 13:15:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmGMI-0004D4-UJ; Mon, 07 Dec 2020 13:15:18 +0000
Received: by outflank-mailman (input) for mailman id 46526;
 Mon, 07 Dec 2020 13:15:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Pxd=FL=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kmGMH-0004AL-Nr
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 13:15:17 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id ce195ad8-dc95-405d-a39b-eba0d1d404d3;
 Mon, 07 Dec 2020 13:15:15 +0000 (UTC)
Received: from mail-wm1-f70.google.com (mail-wm1-f70.google.com
 [209.85.128.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-462-3jHvaCrGM4ix7YMFTP9OAA-1; Mon, 07 Dec 2020 08:15:14 -0500
Received: by mail-wm1-f70.google.com with SMTP id g198so4096852wme.7
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 05:15:13 -0800 (PST)
Received: from localhost.localdomain (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id k18sm2265572wrd.45.2020.12.07.05.15.11
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 07 Dec 2020 05:15:11 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce195ad8-dc95-405d-a39b-eba0d1d404d3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607346915;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=DUD443fjC7TlYCq2Zs7qkYPHzlFo7vsq2RAf28zHG0Y=;
	b=b49Y+4DyR6uXgzYr3drfs9eM2NaovpAiJryrok+zE1zpgP+0GZJlY/9Apr+vIgcT8Xveo7
	jbAkDr/2jZGDMDIb95XFKAd+21AM+VaEWpYUvgc8DOa8nclMXIV3lg9Dzf2dkKPa5In1yf
	QnNEFAFVp14994KiswEMc6UsNe7OdU0=
X-MC-Unique: 3jHvaCrGM4ix7YMFTP9OAA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=DUD443fjC7TlYCq2Zs7qkYPHzlFo7vsq2RAf28zHG0Y=;
        b=FgcJ+ktQ2c9T7D/4Pmw0aIg71534K1f3viN1MjIDLKTLeBSRcEwzqjtX857mIHZiQ0
         kJL3k4XYy4mlaHZuhWuWDzQ6ORYtP1/YRFmSGn8zXHbMh3XpvzzkSv5fHX5R5Z3N/Men
         aWhVdnXlBtne/6skXyYDIeQMOv9KWfZoIY+pcGXmTMwGmO0KPDRcbZYxp6/OpHvh48y7
         8F4NGJ7gcXogGfqfd7AP9JZCf/qHViLnVBuvfCk56dYV47Uzu7WxkukaRpd0cEmg4QOB
         CYmQowiRy33pQqddSWkeScwy9BC0OwSc8VjhPgyvr/oH13KU0Iz/g81Dxd/IXHioccAK
         EhiA==
X-Gm-Message-State: AOAM533AR709HvD9jAJT7ZVz7MvuhCNf4KwW/S6bDNCpFIPSn+X8LEhf
	hq+YPnz2Aw+MvdS6F71ouutp/xfv9a4Pnu/eXP5Wpdwt10XP4nYHTwrPIKFN3n/KU/wAwWb7qg2
	LK9TewN05c0HjATJIJlt9WL2zzzg=
X-Received: by 2002:a1c:c2d4:: with SMTP id s203mr18470517wmf.58.1607346912730;
        Mon, 07 Dec 2020 05:15:12 -0800 (PST)
X-Google-Smtp-Source: ABdhPJx0KSXp7DP6CDis0L6tP1I2Gzz9cpg1u4ccuV+5/oiKxMdma/cepl4uO8AaAxQi4CtlwH61VQ==
X-Received: by 2002:a1c:c2d4:: with SMTP id s203mr18470485wmf.58.1607346912558;
        Mon, 07 Dec 2020 05:15:12 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Thomas Huth <thuth@redhat.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	kvm@vger.kernel.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	qemu-s390x@nongnu.org,
	Halil Pasic <pasic@linux.ibm.com>,
	Willian Rampazzo <wrampazz@redhat.com>,
	Paul Durrant <paul@xen.org>,
	Cornelia Huck <cohuck@redhat.com>,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Claudio Fontana <cfontana@suse.de>,
	Wainer dos Santos Moschetta <wainersm@redhat.com>
Subject: [PATCH v3 1/5] gitlab-ci: Document 'build-tcg-disabled' is a KVM X86 job
Date: Mon,  7 Dec 2020 14:14:59 +0100
Message-Id: <20201207131503.3858889-2-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201207131503.3858889-1-philmd@redhat.com>
References: <20201207131503.3858889-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Document what this job covers (building X86 targets with
KVM as the single accelerator available).

Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 .gitlab-ci.yml | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index d0173e82b16..ee31b1020fe 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -220,6 +220,11 @@ build-disabled:
       s390x-softmmu i386-linux-user
     MAKE_CHECK_ARGS: check-qtest SPEED=slow
 
> +# This job explicitly disables TCG (--disable-tcg); KVM is detected by
> +# the configure script. The container doesn't contain the Xen headers,
> +# so the Xen accelerator is not detected / selected. As a result it
> +# builds i386-softmmu and x86_64-softmmu with KVM as the single
> +# accelerator available.
 build-tcg-disabled:
   <<: *native_build_job_definition
   variables:
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 13:15:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 13:15:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46527.82584 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmGMO-0004HF-Aq; Mon, 07 Dec 2020 13:15:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46527.82584; Mon, 07 Dec 2020 13:15:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmGMO-0004H6-73; Mon, 07 Dec 2020 13:15:24 +0000
Received: by outflank-mailman (input) for mailman id 46527;
 Mon, 07 Dec 2020 13:15:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Pxd=FL=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kmGMN-0004Gw-SN
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 13:15:23 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id ba159547-0c59-4bb1-94fa-01242ce554b0;
 Mon, 07 Dec 2020 13:15:22 +0000 (UTC)
Received: from mail-wr1-f72.google.com (mail-wr1-f72.google.com
 [209.85.221.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-382-LCQifpwQOdKqIiK1epI1yg-1; Mon, 07 Dec 2020 08:15:19 -0500
Received: by mail-wr1-f72.google.com with SMTP id x16so4822507wrm.20
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 05:15:18 -0800 (PST)
Received: from localhost.localdomain (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id 65sm12953670wri.95.2020.12.07.05.15.16
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 07 Dec 2020 05:15:17 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ba159547-0c59-4bb1-94fa-01242ce554b0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607346922;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qWT6ZPuFzyJ7mCL37uXEPVzuEPlsDzCAhcDUmllvAYo=;
	b=HtCFp4DEkTji63ssPhKhH7q9z/FrdSdOs4FF/I9b2CsJuJZEVQnrt63hwIcUDdzgeeD3ST
	qHxdHi9n3HQLhXuLeNXuJkp5LEAtfoDWNBRz/9xrf7G36dGDYL1xm52o6mJ09DMLoyRaiz
	5iEQ/we62MhgE8vY+LVyN1GwBdrfIaI=
X-MC-Unique: LCQifpwQOdKqIiK1epI1yg-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=qWT6ZPuFzyJ7mCL37uXEPVzuEPlsDzCAhcDUmllvAYo=;
        b=cG+je97L+7kUtDxVD1VSWSyBmeMyOPXu4QxH4XgMRM/np+E3JFKkphK/O8CvnP4Z/9
         ROw2mPA4UvCZVR1xt0CIV+QKycH8UjYhS2xOqiKmo+/pEsSTZaHED+3H9a4ac+lYQvwU
         2jkaj3Y9TSJNs0HtOygAq0lmdVpCCGAeZdyC40W42MQvX0gRPa6nEcBTAWU4pnMJSGF3
         fUvrex+Suu9Zkb1FP27Y/spkGc1ceFjgNaqBmaYuWdDMRvsrwaootIyD0g88tP+HkpzL
         Sp9Y1BgMcnm6+iiOpZZIAzfJRWG/uWsj140i+xK4pyzL9+wEGm/UBtFTleXd57ACLbR8
         +QyQ==
X-Gm-Message-State: AOAM5304Hhhiadcl9/FylnVBADKBV46gyRBLlr4Dvf9e+A5FKZJmVVfT
	iwgrzyULDylmtOJPZwK8uAnYixRNkfneD/Kz9i2RqQltBQU2A/VeYbnxGNQGqjHARWpITPmfg25
	EGzqoBpbGZ8FXc98Do3exbNSOw2Y=
X-Received: by 2002:a1c:61c3:: with SMTP id v186mr18343496wmb.146.1607346917874;
        Mon, 07 Dec 2020 05:15:17 -0800 (PST)
X-Google-Smtp-Source: ABdhPJz5+KW+PFE10MP9jLpogYR1x1fZIHiIwppZYhRhxY3siG36Pzyg4Q4bF1rYwvWWqwdmc6Rj+g==
X-Received: by 2002:a1c:61c3:: with SMTP id v186mr18343465wmb.146.1607346917703;
        Mon, 07 Dec 2020 05:15:17 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Thomas Huth <thuth@redhat.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	kvm@vger.kernel.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	qemu-s390x@nongnu.org,
	Halil Pasic <pasic@linux.ibm.com>,
	Willian Rampazzo <wrampazz@redhat.com>,
	Paul Durrant <paul@xen.org>,
	Cornelia Huck <cohuck@redhat.com>,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Claudio Fontana <cfontana@suse.de>,
	Wainer dos Santos Moschetta <wainersm@redhat.com>
Subject: [PATCH v3 2/5] gitlab-ci: Replace YAML anchors by extends (cross_system_build_job)
Date: Mon,  7 Dec 2020 14:15:00 +0100
Message-Id: <20201207131503.3858889-3-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201207131503.3858889-1-philmd@redhat.com>
References: <20201207131503.3858889-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

'extends' is an alternative to using YAML anchors
and is a little more flexible and readable. See:
https://docs.gitlab.com/ee/ci/yaml/#extends

More importantly, it allows splitting up YAML jobs.
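
As a minimal sketch of the difference (job and image names here are made up, not the actual QEMU jobs), both forms produce the same merged job configuration:

```yaml
# Anchor form: the merge key copies the template's mapping in place.
.build_template: &build_definition
  stage: build

anchor-job:
  <<: *build_definition
  variables:
    IMAGE: example-image

# extends form: reference the hidden ('.'-prefixed) job by name.
extends-job:
  extends: .build_template
  variables:
    IMAGE: example-image
```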

Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 .gitlab-ci.d/crossbuilds.yml | 40 ++++++++++++++++++------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
index 03ebfabb3fa..099949aaef3 100644
--- a/.gitlab-ci.d/crossbuilds.yml
+++ b/.gitlab-ci.d/crossbuilds.yml
@@ -1,5 +1,5 @@
 
-.cross_system_build_job_template: &cross_system_build_job_definition
+.cross_system_build_job:
   stage: build
   image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
   timeout: 80m
@@ -13,7 +13,7 @@
           xtensa-softmmu"
     - make -j$(expr $(nproc) + 1) all check-build
 
-.cross_user_build_job_template: &cross_user_build_job_definition
+.cross_user_build_job:
   stage: build
   image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
   script:
@@ -24,91 +24,91 @@
     - make -j$(expr $(nproc) + 1) all check-build
 
 cross-armel-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: debian-armel-cross
 
 cross-armel-user:
-  <<: *cross_user_build_job_definition
+  extends: .cross_user_build_job
   variables:
     IMAGE: debian-armel-cross
 
 cross-armhf-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: debian-armhf-cross
 
 cross-armhf-user:
-  <<: *cross_user_build_job_definition
+  extends: .cross_user_build_job
   variables:
     IMAGE: debian-armhf-cross
 
 cross-arm64-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: debian-arm64-cross
 
 cross-arm64-user:
-  <<: *cross_user_build_job_definition
+  extends: .cross_user_build_job
   variables:
     IMAGE: debian-arm64-cross
 
 cross-mips-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: debian-mips-cross
 
 cross-mips-user:
-  <<: *cross_user_build_job_definition
+  extends: .cross_user_build_job
   variables:
     IMAGE: debian-mips-cross
 
 cross-mipsel-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: debian-mipsel-cross
 
 cross-mipsel-user:
-  <<: *cross_user_build_job_definition
+  extends: .cross_user_build_job
   variables:
     IMAGE: debian-mipsel-cross
 
 cross-mips64el-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: debian-mips64el-cross
 
 cross-mips64el-user:
-  <<: *cross_user_build_job_definition
+  extends: .cross_user_build_job
   variables:
     IMAGE: debian-mips64el-cross
 
 cross-ppc64el-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: debian-ppc64el-cross
 
 cross-ppc64el-user:
-  <<: *cross_user_build_job_definition
+  extends: .cross_user_build_job
   variables:
     IMAGE: debian-ppc64el-cross
 
 cross-s390x-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: debian-s390x-cross
 
 cross-s390x-user:
-  <<: *cross_user_build_job_definition
+  extends: .cross_user_build_job
   variables:
     IMAGE: debian-s390x-cross
 
 cross-win32-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: fedora-win32-cross
 
 cross-win64-system:
-  <<: *cross_system_build_job_definition
+  extends: .cross_system_build_job
   variables:
     IMAGE: fedora-win64-cross
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 13:15:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 13:15:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46528.82595 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmGMR-0004Lb-La; Mon, 07 Dec 2020 13:15:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46528.82595; Mon, 07 Dec 2020 13:15:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmGMR-0004LQ-HU; Mon, 07 Dec 2020 13:15:27 +0000
Received: by outflank-mailman (input) for mailman id 46528;
 Mon, 07 Dec 2020 13:15:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Pxd=FL=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kmGMQ-0004Kw-S5
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 13:15:26 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 05d4123c-6f27-4e73-9918-471cb0d4d66d;
 Mon, 07 Dec 2020 13:15:26 +0000 (UTC)
Received: from mail-wm1-f71.google.com (mail-wm1-f71.google.com
 [209.85.128.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-153-yXxLE-j9PEW_K9JRFbeE9A-1; Mon, 07 Dec 2020 08:15:24 -0500
Received: by mail-wm1-f71.google.com with SMTP id r1so5333112wmn.8
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 05:15:23 -0800 (PST)
Received: from localhost.localdomain (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id b3sm14942829wrp.57.2020.12.07.05.15.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 07 Dec 2020 05:15:22 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 05d4123c-6f27-4e73-9918-471cb0d4d66d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607346926;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=LrpyM62Xrmp0dmj6zHS9IbusrVswsePAgvUQq8W1ha4=;
	b=iYVyhUlO7Zg+2vqUm7Wcdm4bsG18bLtBV4hSs+wqhvKK6vb9+sutl90vA4N8eMqEZi4iUC
	NsK9NTsM+q9TxAnVPZ3XS3syAmz13qNdVHLg4ikUy4q5IUeFiO3w2zJPiE5IP2iHbJfE+x
	QiF4zHRa+QkUI/H9qRAfFzIvfzaeRCc=
X-MC-Unique: yXxLE-j9PEW_K9JRFbeE9A-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=LrpyM62Xrmp0dmj6zHS9IbusrVswsePAgvUQq8W1ha4=;
        b=WMxefesltkcxcYVz6hF+2jd85EvOYEod6DRS0/jQ2ywyqXZdYeOJ9uglUeHFqlpOUG
         JZyfyIVq4eh7eK2dJb1WcN2dcr5BIBwb9dhJe4n9P8NbrKKxxe/ur+Gn8/D0RaGAzBiU
         otZuwZeJiH/ClNqgOxNP6gMIWplFpUghI1MvVrthf5TtG+FdbMH2Z6fdOR924f1sUn46
         hmVp1d9qc0qWk5gjC7+tH9MG5oeCSNGJCnSqsmO3QIgnb5wM481a/IW0os28R4pXbnp9
         mBcE6/I42GICCFNl5nbS0DG1KA2D4lbFMMBO0B5Xauo4JYanFOUiHyTq0rKfqtMRc7Eo
         BYmA==
X-Gm-Message-State: AOAM5311QqzjLk4M1Cp4YT7XI+BQ3etMJNMKdFQXgOW81MPD9c3jSFVW
	YtMHbdM6a41FtJBbj21H+hMxTU46nPjktWE3zjDswL2k13bd5KKhDXxUIs1jUwUonVf27DaAQky
	8931faT1dVwvdJlgMfixJJS6u7Fk=
X-Received: by 2002:adf:eec6:: with SMTP id a6mr7411041wrp.239.1607346923077;
        Mon, 07 Dec 2020 05:15:23 -0800 (PST)
X-Google-Smtp-Source: ABdhPJwrSsP/k6tbNGXTd5ADaSfYhwBX1LosPwHROoqvLj3ywUi8jkYDiFzi1eqHTc0kTww1719V1w==
X-Received: by 2002:adf:eec6:: with SMTP id a6mr7411023wrp.239.1607346922931;
        Mon, 07 Dec 2020 05:15:22 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Thomas Huth <thuth@redhat.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	kvm@vger.kernel.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	qemu-s390x@nongnu.org,
	Halil Pasic <pasic@linux.ibm.com>,
	Willian Rampazzo <wrampazz@redhat.com>,
	Paul Durrant <paul@xen.org>,
	Cornelia Huck <cohuck@redhat.com>,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Claudio Fontana <cfontana@suse.de>,
	Wainer dos Santos Moschetta <wainersm@redhat.com>
Subject: [PATCH v3 3/5] gitlab-ci: Introduce 'cross_accel_build_job' template
Date: Mon,  7 Dec 2020 14:15:01 +0100
Message-Id: <20201207131503.3858889-4-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201207131503.3858889-1-philmd@redhat.com>
References: <20201207131503.3858889-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Introduce a job template to cross-build accelerator-specific
jobs (enabling one specific accelerator and disabling the others).

The accelerator is selected via the $ACCEL environment
variable (defaulting to KVM).

Extra options, such as disabling the other accelerators, are passed
via the $ACCEL_CONFIGURE_OPTS environment variable.
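
The default selection relies on shell parameter expansion in the template's configure line. A plain-shell sketch (only the variable names come from the patch; the option values are illustrative):

```shell
# Sketch of how the template's configure flags resolve.
# Variable names are from the patch; values here are illustrative.
unset ACCEL
ACCEL_CONFIGURE_OPTS="--disable-tcg"

# With ACCEL unset, ${ACCEL:-kvm} falls back to "kvm":
echo "--enable-${ACCEL:-kvm} ${ACCEL_CONFIGURE_OPTS}"
# → --enable-kvm --disable-tcg

# A job that sets ACCEL (as the Xen jobs later do) overrides the default:
ACCEL=xen
echo "--enable-${ACCEL:-kvm} ${ACCEL_CONFIGURE_OPTS}"
# → --enable-xen --disable-tcg
```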

Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 .gitlab-ci.d/crossbuilds.yml | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
index 099949aaef3..b59516301f4 100644
--- a/.gitlab-ci.d/crossbuilds.yml
+++ b/.gitlab-ci.d/crossbuilds.yml
@@ -13,6 +13,23 @@
           xtensa-softmmu"
     - make -j$(expr $(nproc) + 1) all check-build
 
+# Job to cross-build specific accelerators.
+#
+# Set the $ACCEL variable to select the specific accelerator (defaults to
+# KVM), and set extra options (such as disabling other accelerators) via the
+# $ACCEL_CONFIGURE_OPTS variable.
+.cross_accel_build_job:
+  stage: build
+  image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
+  timeout: 30m
+  script:
+    - mkdir build
+    - cd build
+    - PKG_CONFIG_PATH=$PKG_CONFIG_PATH
+      ../configure --enable-werror $QEMU_CONFIGURE_OPTS --disable-tools
+        --enable-${ACCEL:-kvm} $ACCEL_CONFIGURE_OPTS
+    - make -j$(expr $(nproc) + 1) all check-build
+
 .cross_user_build_job:
   stage: build
   image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 13:15:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 13:15:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46531.82608 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmGMY-0004Rq-0l; Mon, 07 Dec 2020 13:15:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46531.82608; Mon, 07 Dec 2020 13:15:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmGMX-0004Rh-Sd; Mon, 07 Dec 2020 13:15:33 +0000
Received: by outflank-mailman (input) for mailman id 46531;
 Mon, 07 Dec 2020 13:15:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Pxd=FL=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kmGMW-0004QJ-2S
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 13:15:32 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 42c34097-2ba9-4419-97ca-04ee1ead0566;
 Mon, 07 Dec 2020 13:15:31 +0000 (UTC)
Received: from mail-wr1-f69.google.com (mail-wr1-f69.google.com
 [209.85.221.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-42-BYSBmJVWO9ulczKMftfZcQ-1; Mon, 07 Dec 2020 08:15:29 -0500
Received: by mail-wr1-f69.google.com with SMTP id n13so4827292wrs.10
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 05:15:29 -0800 (PST)
Received: from localhost.localdomain (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id a62sm7862008wmh.40.2020.12.07.05.15.26
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 07 Dec 2020 05:15:27 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 42c34097-2ba9-4419-97ca-04ee1ead0566
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607346931;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=R0x/t12SaUJO2Fk00rNBDpsAQee1BNuzclT5nFRV5W4=;
	b=hDDXTieIHZZkaFqpDDrWrlZF4vH4hW1anY0uUJPiDJLlRhZlUrJfGTujnJ7XbuJuSCRgbP
	GD4G7eMbbacRvu4jCbkHIr+ZMnjt/6lM0uRrFQYNwcI/ZDybSfyUmn8pEYMmJO11Siux9R
	6Qvy+YZDT7KkRDILDe5EsOTepGpRv5U=
X-MC-Unique: BYSBmJVWO9ulczKMftfZcQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=R0x/t12SaUJO2Fk00rNBDpsAQee1BNuzclT5nFRV5W4=;
        b=G2V19B6kaawBZl7z+NmSRDvUqhrowvhaXNB/LBNgL6P5nD7RQfgEdheiBpRhltetR9
         QHhV0ID559vMid2mtnFkn5q/M5+6v6AFR4WO2pjdq4LFB/k7Pm8sya2P4H/ChCV7n0FD
         Ant8cDw8424FBbCrHqvEuaf0DSGVj2QwpzMvSLgIfX9NPbJua/O0QB8D9SRLfaxF+EOl
         i3iHN9l18DIRAcVb5qx1lHFVR+Vksqg5NT7q+pQCBJ8JCLNzJmkoCA0oB6Oxy12afjgA
         21mmVKTcX4zyA0X51+x/h2h5jV5TnyEilhcu6tHxsFWKp8rqlhXaCTPAVzL4Ky6zfBxY
         zxNw==
X-Gm-Message-State: AOAM530c9dYiN86WKpoekTDwuSFjhPm1o0SCdJvpjNi220znBsML8gCm
	dvAPFTZbYNwHi+yfOIwFIMTeYzkUNuO/zGce+yh2Gvjxu78jyC34LIzHsEPXdxowZI0A4RtrN1u
	unCuzGxZySRa9lkgFkrFGjcc1t0g=
X-Received: by 2002:adf:fdcc:: with SMTP id i12mr1638513wrs.317.1607346928611;
        Mon, 07 Dec 2020 05:15:28 -0800 (PST)
X-Google-Smtp-Source: ABdhPJx/E42Mjl9lrV3uJy+mcMKaCC0ghVf2euOZZfgnjWsjT11kOCMnAwQ1c7Ca2DexAj4PkQkYFw==
X-Received: by 2002:adf:fdcc:: with SMTP id i12mr1638484wrs.317.1607346928431;
        Mon, 07 Dec 2020 05:15:28 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Thomas Huth <thuth@redhat.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	kvm@vger.kernel.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	qemu-s390x@nongnu.org,
	Halil Pasic <pasic@linux.ibm.com>,
	Willian Rampazzo <wrampazz@redhat.com>,
	Paul Durrant <paul@xen.org>,
	Cornelia Huck <cohuck@redhat.com>,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Claudio Fontana <cfontana@suse.de>,
	Wainer dos Santos Moschetta <wainersm@redhat.com>
Subject: [PATCH v3 4/5] gitlab-ci: Add KVM s390x cross-build jobs
Date: Mon,  7 Dec 2020 14:15:02 +0100
Message-Id: <20201207131503.3858889-5-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201207131503.3858889-1-philmd@redhat.com>
References: <20201207131503.3858889-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Cross-build the s390x target with only the KVM accelerator enabled.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 .gitlab-ci.d/crossbuilds.yml | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
index b59516301f4..51896bbc9fb 100644
--- a/.gitlab-ci.d/crossbuilds.yml
+++ b/.gitlab-ci.d/crossbuilds.yml
@@ -1,4 +1,3 @@
-
 .cross_system_build_job:
   stage: build
   image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:latest
@@ -120,6 +119,12 @@ cross-s390x-user:
   variables:
     IMAGE: debian-s390x-cross
 
+cross-s390x-kvm-only:
+  extends: .cross_accel_build_job
+  variables:
+    IMAGE: debian-s390x-cross
+    ACCEL_CONFIGURE_OPTS: --disable-tcg
+
 cross-win32-system:
   extends: .cross_system_build_job
   variables:
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 13:15:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 13:15:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46534.82620 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmGMe-0004Y1-A0; Mon, 07 Dec 2020 13:15:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46534.82620; Mon, 07 Dec 2020 13:15:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmGMe-0004Xr-6F; Mon, 07 Dec 2020 13:15:40 +0000
Received: by outflank-mailman (input) for mailman id 46534;
 Mon, 07 Dec 2020 13:15:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Pxd=FL=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kmGMc-0004Wp-DA
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 13:15:38 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 63393922-353a-4e65-bb1e-5e7fdb9f805a;
 Mon, 07 Dec 2020 13:15:37 +0000 (UTC)
Received: from mail-wr1-f72.google.com (mail-wr1-f72.google.com
 [209.85.221.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-561-4cPFA0F1OAGALI0DNt7TYw-1; Mon, 07 Dec 2020 08:15:35 -0500
Received: by mail-wr1-f72.google.com with SMTP id n13so4827371wrs.10
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 05:15:35 -0800 (PST)
Received: from localhost.localdomain (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id c190sm14567845wme.19.2020.12.07.05.15.32
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 07 Dec 2020 05:15:33 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 63393922-353a-4e65-bb1e-5e7fdb9f805a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607346937;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6EJ4iyK07Ty3dzRyIkzOuiJNHq5RuTRyKsBVarixwBA=;
	b=ZL/h83rGNOMGpjTRKxZRwckMp5/hshrf+qb5BGg/Io6IiMEt6z8MBeTBZXayDbkoru0Qwc
	uKE83oTv3hoeoEdcGXKYKw0ozzafexk+3XS5Iz4xlFTSlQXJz56vj33ygsHjQVkRwtlv+m
	n/KMhdIM4tzaIeqfSm1MWGH15C+kn3U=
X-MC-Unique: 4cPFA0F1OAGALI0DNt7TYw-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=6EJ4iyK07Ty3dzRyIkzOuiJNHq5RuTRyKsBVarixwBA=;
        b=oYZK2owXKJxg2CEFDxOEWsy1S+cMRQCWKc05E9WESXHXSuYWq6Qs46WMKjh/rV6QR3
         qnrrMtyuVz0/UiFUF4BnAPkjwh2/e+K+jLc37lNWEzU+FkAa8SoxsUSb+5lIm9bz8GER
         xYqwFyB/JFBs6umalRGOXGPe9eqaC7GzGkqmM2ySJBD9YUR4qWRulzGrp14wMyNejANM
         wDldif0zTwbRxfbOSwi4BcpyfWL4NpyTN3dfh/2fYLeIUPlh4q5Hw/0Y5JWDwjgTIE7i
         WoUh8cbRvwYFwpk8bPcqVd2QZf08sGbyuRqyuyDbZNmHRtspDg/LzA7GCFW/bZNMCFaR
         7t/w==
X-Gm-Message-State: AOAM5327Lk2BhYz+XQL2xnIh2w9YOq8VeQYMn7PAcVJiz8Y2MsAP3eom
	LqVENJ372aMNi4VMbfLvpmZPDVbVojf+Akp8XyKPzWJHwqtif/qJck5+46KzYErBNrjW4OJp1Xy
	ikFi6d4mPIGBEhFX2K8wCJDocNhY=
X-Received: by 2002:a5d:4c49:: with SMTP id n9mr20011860wrt.30.1607346934123;
        Mon, 07 Dec 2020 05:15:34 -0800 (PST)
X-Google-Smtp-Source: ABdhPJx9XTTmfTlfqcd6ffk3O9VyZaaUJTIkcE7tELrfEhCU+PvMWzkv9xF2oA2F54WLRjz0+k84yQ==
X-Received: by 2002:a5d:4c49:: with SMTP id n9mr20011839wrt.30.1607346933986;
        Mon, 07 Dec 2020 05:15:33 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Thomas Huth <thuth@redhat.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	kvm@vger.kernel.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	qemu-s390x@nongnu.org,
	Halil Pasic <pasic@linux.ibm.com>,
	Willian Rampazzo <wrampazz@redhat.com>,
	Paul Durrant <paul@xen.org>,
	Cornelia Huck <cohuck@redhat.com>,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Claudio Fontana <cfontana@suse.de>,
	Wainer dos Santos Moschetta <wainersm@redhat.com>
Subject: [PATCH v3 5/5] gitlab-ci: Add Xen cross-build jobs
Date: Mon,  7 Dec 2020 14:15:03 +0100
Message-Id: <20201207131503.3858889-6-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201207131503.3858889-1-philmd@redhat.com>
References: <20201207131503.3858889-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Cross-build the ARM and x86 targets with only the Xen accelerator enabled.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 .gitlab-ci.d/crossbuilds.yml | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
index 51896bbc9fb..bd6473a75a7 100644
--- a/.gitlab-ci.d/crossbuilds.yml
+++ b/.gitlab-ci.d/crossbuilds.yml
@@ -134,3 +134,17 @@ cross-win64-system:
   extends: .cross_system_build_job
   variables:
     IMAGE: fedora-win64-cross
+
+cross-amd64-xen-only:
+  extends: .cross_accel_build_job
+  variables:
+    IMAGE: debian-amd64-cross
+    ACCEL: xen
+    ACCEL_CONFIGURE_OPTS: --disable-tcg --disable-kvm
+
+cross-arm64-xen-only:
+  extends: .cross_accel_build_job
+  variables:
+    IMAGE: debian-arm64-cross
+    ACCEL: xen
+    ACCEL_CONFIGURE_OPTS: --disable-tcg --disable-kvm
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 13:30:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 13:30:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46568.82656 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmGb7-0006hi-DL; Mon, 07 Dec 2020 13:30:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46568.82656; Mon, 07 Dec 2020 13:30:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmGb7-0006hX-AA; Mon, 07 Dec 2020 13:30:37 +0000
Received: by outflank-mailman (input) for mailman id 46568;
 Mon, 07 Dec 2020 13:30:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DX/D=FL=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kmGb5-0006dg-Or
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 13:30:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c23c5b8c-3fee-403d-b56a-747c45366e30;
 Mon, 07 Dec 2020 13:30:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3A2F3ACF4;
 Mon,  7 Dec 2020 13:30:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c23c5b8c-3fee-403d-b56a-747c45366e30
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607347829; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ZO5teaKU4bY6WAv2ur3lcADxCg7ltO2UkLJkOCEfo4M=;
	b=C/6Y806l2Ayi/SLM4GU4bpZhT0TniNWrDQy+PscKv/RT+RDSTxtACI4B+6F0XNJ0+p8/86
	RR/CQZbk9JT4Y5CwF6Yp8BSOKKx/CByoCxdVrb33S28gpnmq9Ye076aTOQ9FVhqfFMcRYg
	kMUZobzs8B7FXpD/SRkBNiQOgUYtqlw=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-scsi@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Jens Axboe <axboe@kernel.dk>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 1/2] xen: add helpers for caching grant mapping pages
Date: Mon,  7 Dec 2020 14:30:23 +0100
Message-Id: <20201207133024.16621-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201207133024.16621-1-jgross@suse.com>
References: <20201207133024.16621-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of having similar helpers in multiple backend drivers, use
common helpers for caching pages allocated via gnttab_alloc_pages().

Make use of those helpers in blkback and scsiback.
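
The pattern being centralized is a free-list that hands out cached pages and falls back to fresh allocation when empty. A user-space sketch of that pattern (the names `cache_get`/`cache_put`/`cache_shrink` are illustrative stand-ins for the new `gnttab_page_cache_*` helpers; the kernel versions additionally take a spinlock and use `struct page` lists):

```c
/* User-space model of the page-cache pattern this patch centralizes.
 * Names are illustrative; real helpers are gnttab_page_cache_get/put/shrink,
 * which also hold a spinlock around list manipulation. */
#include <stdlib.h>

struct page { struct page *next; };

struct page_cache {
    struct page *free_list;  /* cached, ready-to-reuse pages */
    unsigned int num;        /* number of pages currently cached */
};

/* Take a page from the cache, or allocate a fresh one if the cache is
 * empty (stand-in for the gnttab_alloc_pages() fallback). */
static struct page *cache_get(struct page_cache *c)
{
    struct page *p;

    if (!c->free_list)
        return malloc(sizeof(*p));  /* fallback allocation */
    p = c->free_list;
    c->free_list = p->next;
    c->num--;
    return p;
}

/* Return a page to the cache instead of freeing it. */
static void cache_put(struct page_cache *c, struct page *p)
{
    p->next = c->free_list;
    c->free_list = p;
    c->num++;
}

/* Free cached pages until at most 'keep' remain (keep == 0 empties it,
 * as blkback does on shutdown). */
static void cache_shrink(struct page_cache *c, unsigned int keep)
{
    while (c->num > keep) {
        struct page *p = c->free_list;

        c->free_list = p->next;
        c->num--;
        free(p);
    }
}
```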

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/block/xen-blkback/blkback.c | 89 ++++++-----------------------
 drivers/block/xen-blkback/common.h  |  4 +-
 drivers/block/xen-blkback/xenbus.c  |  6 +-
 drivers/xen/grant-table.c           | 72 +++++++++++++++++++++++
 drivers/xen/xen-scsiback.c          | 60 ++++---------------
 include/xen/grant_table.h           | 13 +++++
 6 files changed, 116 insertions(+), 128 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 501e9dacfff9..9ebf53903d7b 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -132,73 +132,12 @@ module_param(log_stats, int, 0644);
 
 #define BLKBACK_INVALID_HANDLE (~0)
 
-/* Number of free pages to remove on each call to gnttab_free_pages */
-#define NUM_BATCH_FREE_PAGES 10
-
 static inline bool persistent_gnt_timeout(struct persistent_gnt *persistent_gnt)
 {
 	return pgrant_timeout && (jiffies - persistent_gnt->last_used >=
 			HZ * pgrant_timeout);
 }
 
-static inline int get_free_page(struct xen_blkif_ring *ring, struct page **page)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&ring->free_pages_lock, flags);
-	if (list_empty(&ring->free_pages)) {
-		BUG_ON(ring->free_pages_num != 0);
-		spin_unlock_irqrestore(&ring->free_pages_lock, flags);
-		return gnttab_alloc_pages(1, page);
-	}
-	BUG_ON(ring->free_pages_num == 0);
-	page[0] = list_first_entry(&ring->free_pages, struct page, lru);
-	list_del(&page[0]->lru);
-	ring->free_pages_num--;
-	spin_unlock_irqrestore(&ring->free_pages_lock, flags);
-
-	return 0;
-}
-
-static inline void put_free_pages(struct xen_blkif_ring *ring, struct page **page,
-                                  int num)
-{
-	unsigned long flags;
-	int i;
-
-	spin_lock_irqsave(&ring->free_pages_lock, flags);
-	for (i = 0; i < num; i++)
-		list_add(&page[i]->lru, &ring->free_pages);
-	ring->free_pages_num += num;
-	spin_unlock_irqrestore(&ring->free_pages_lock, flags);
-}
-
-static inline void shrink_free_pagepool(struct xen_blkif_ring *ring, int num)
-{
-	/* Remove requested pages in batches of NUM_BATCH_FREE_PAGES */
-	struct page *page[NUM_BATCH_FREE_PAGES];
-	unsigned int num_pages = 0;
-	unsigned long flags;
-
-	spin_lock_irqsave(&ring->free_pages_lock, flags);
-	while (ring->free_pages_num > num) {
-		BUG_ON(list_empty(&ring->free_pages));
-		page[num_pages] = list_first_entry(&ring->free_pages,
-		                                   struct page, lru);
-		list_del(&page[num_pages]->lru);
-		ring->free_pages_num--;
-		if (++num_pages == NUM_BATCH_FREE_PAGES) {
-			spin_unlock_irqrestore(&ring->free_pages_lock, flags);
-			gnttab_free_pages(num_pages, page);
-			spin_lock_irqsave(&ring->free_pages_lock, flags);
-			num_pages = 0;
-		}
-	}
-	spin_unlock_irqrestore(&ring->free_pages_lock, flags);
-	if (num_pages != 0)
-		gnttab_free_pages(num_pages, page);
-}
-
 #define vaddr(page) ((unsigned long)pfn_to_kaddr(page_to_pfn(page)))
 
 static int do_block_io_op(struct xen_blkif_ring *ring, unsigned int *eoi_flags);
@@ -331,7 +270,8 @@ static void free_persistent_gnts(struct xen_blkif_ring *ring, struct rb_root *ro
 			unmap_data.count = segs_to_unmap;
 			BUG_ON(gnttab_unmap_refs_sync(&unmap_data));
 
-			put_free_pages(ring, pages, segs_to_unmap);
+			gnttab_page_cache_put(&ring->free_pages, pages,
+					      segs_to_unmap);
 			segs_to_unmap = 0;
 		}
 
@@ -371,7 +311,8 @@ void xen_blkbk_unmap_purged_grants(struct work_struct *work)
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
 			unmap_data.count = segs_to_unmap;
 			BUG_ON(gnttab_unmap_refs_sync(&unmap_data));
-			put_free_pages(ring, pages, segs_to_unmap);
+			gnttab_page_cache_put(&ring->free_pages, pages,
+					      segs_to_unmap);
 			segs_to_unmap = 0;
 		}
 		kfree(persistent_gnt);
@@ -379,7 +320,7 @@ void xen_blkbk_unmap_purged_grants(struct work_struct *work)
 	if (segs_to_unmap > 0) {
 		unmap_data.count = segs_to_unmap;
 		BUG_ON(gnttab_unmap_refs_sync(&unmap_data));
-		put_free_pages(ring, pages, segs_to_unmap);
+		gnttab_page_cache_put(&ring->free_pages, pages, segs_to_unmap);
 	}
 }
 
@@ -664,9 +605,10 @@ int xen_blkif_schedule(void *arg)
 
 		/* Shrink the free pages pool if it is too large. */
 		if (time_before(jiffies, blkif->buffer_squeeze_end))
-			shrink_free_pagepool(ring, 0);
+			gnttab_page_cache_shrink(&ring->free_pages, 0);
 		else
-			shrink_free_pagepool(ring, max_buffer_pages);
+			gnttab_page_cache_shrink(&ring->free_pages,
+						 max_buffer_pages);
 
 		if (log_stats && time_after(jiffies, ring->st_print))
 			print_stats(ring);
@@ -697,7 +639,7 @@ void xen_blkbk_free_caches(struct xen_blkif_ring *ring)
 	ring->persistent_gnt_c = 0;
 
 	/* Since we are shutting down remove all pages from the buffer */
-	shrink_free_pagepool(ring, 0 /* All */);
+	gnttab_page_cache_shrink(&ring->free_pages, 0 /* All */);
 }
 
 static unsigned int xen_blkbk_unmap_prepare(
@@ -736,7 +678,7 @@ static void xen_blkbk_unmap_and_respond_callback(int result, struct gntab_unmap_
 	   but is this the best way to deal with this? */
 	BUG_ON(result);
 
-	put_free_pages(ring, data->pages, data->count);
+	gnttab_page_cache_put(&ring->free_pages, data->pages, data->count);
 	make_response(ring, pending_req->id,
 		      pending_req->operation, pending_req->status);
 	free_req(ring, pending_req);
@@ -803,7 +745,8 @@ static void xen_blkbk_unmap(struct xen_blkif_ring *ring,
 		if (invcount) {
 			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
 			BUG_ON(ret);
-			put_free_pages(ring, unmap_pages, invcount);
+			gnttab_page_cache_put(&ring->free_pages, unmap_pages,
+					      invcount);
 		}
 		pages += batch;
 		num -= batch;
@@ -850,7 +793,8 @@ static int xen_blkbk_map(struct xen_blkif_ring *ring,
 			pages[i]->page = persistent_gnt->page;
 			pages[i]->persistent_gnt = persistent_gnt;
 		} else {
-			if (get_free_page(ring, &pages[i]->page))
+			if (gnttab_page_cache_get(&ring->free_pages,
+						  &pages[i]->page))
 				goto out_of_memory;
 			addr = vaddr(pages[i]->page);
 			pages_to_gnt[segs_to_map] = pages[i]->page;
@@ -883,7 +827,8 @@ static int xen_blkbk_map(struct xen_blkif_ring *ring,
 			BUG_ON(new_map_idx >= segs_to_map);
 			if (unlikely(map[new_map_idx].status != 0)) {
 				pr_debug("invalid buffer -- could not remap it\n");
-				put_free_pages(ring, &pages[seg_idx]->page, 1);
+				gnttab_page_cache_put(&ring->free_pages,
+						      &pages[seg_idx]->page, 1);
 				pages[seg_idx]->handle = BLKBACK_INVALID_HANDLE;
 				ret |= 1;
 				goto next;
@@ -944,7 +889,7 @@ static int xen_blkbk_map(struct xen_blkif_ring *ring,
 
 out_of_memory:
 	pr_alert("%s: out of memory\n", __func__);
-	put_free_pages(ring, pages_to_gnt, segs_to_map);
+	gnttab_page_cache_put(&ring->free_pages, pages_to_gnt, segs_to_map);
 	for (i = last_map; i < num; i++)
 		pages[i]->handle = BLKBACK_INVALID_HANDLE;
 	return -ENOMEM;
diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index c6ea5d38c509..a1b9df2c4ef1 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -288,9 +288,7 @@ struct xen_blkif_ring {
 	struct work_struct	persistent_purge_work;
 
 	/* Buffer of free pages to map grant refs. */
-	spinlock_t		free_pages_lock;
-	int			free_pages_num;
-	struct list_head	free_pages;
+	struct gnttab_page_cache free_pages;
 
 	struct work_struct	free_work;
 	/* Thread shutdown wait queue. */
diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index f5705569e2a7..76912c584a76 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -144,8 +144,7 @@ static int xen_blkif_alloc_rings(struct xen_blkif *blkif)
 		INIT_LIST_HEAD(&ring->pending_free);
 		INIT_LIST_HEAD(&ring->persistent_purge_list);
 		INIT_WORK(&ring->persistent_purge_work, xen_blkbk_unmap_purged_grants);
-		spin_lock_init(&ring->free_pages_lock);
-		INIT_LIST_HEAD(&ring->free_pages);
+		gnttab_page_cache_init(&ring->free_pages);
 
 		spin_lock_init(&ring->pending_free_lock);
 		init_waitqueue_head(&ring->pending_free_wq);
@@ -317,8 +316,7 @@ static int xen_blkif_disconnect(struct xen_blkif *blkif)
 		BUG_ON(atomic_read(&ring->persistent_gnt_in_use) != 0);
 		BUG_ON(!list_empty(&ring->persistent_purge_list));
 		BUG_ON(!RB_EMPTY_ROOT(&ring->persistent_gnts));
-		BUG_ON(!list_empty(&ring->free_pages));
-		BUG_ON(ring->free_pages_num != 0);
+		BUG_ON(ring->free_pages.num_pages != 0);
 		BUG_ON(ring->persistent_gnt_c != 0);
 		WARN_ON(i != (XEN_BLKIF_REQS_PER_PAGE * blkif->nr_ring_pages));
 		ring->active = false;
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 523dcdf39cc9..e2e42912f241 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -813,6 +813,78 @@ int gnttab_alloc_pages(int nr_pages, struct page **pages)
 }
 EXPORT_SYMBOL_GPL(gnttab_alloc_pages);
 
+void gnttab_page_cache_init(struct gnttab_page_cache *cache)
+{
+	spin_lock_init(&cache->lock);
+	INIT_LIST_HEAD(&cache->pages);
+	cache->num_pages = 0;
+}
+EXPORT_SYMBOL_GPL(gnttab_page_cache_init);
+
+int gnttab_page_cache_get(struct gnttab_page_cache *cache, struct page **page)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&cache->lock, flags);
+
+	if (list_empty(&cache->pages)) {
+		spin_unlock_irqrestore(&cache->lock, flags);
+		return gnttab_alloc_pages(1, page);
+	}
+
+	page[0] = list_first_entry(&cache->pages, struct page, lru);
+	list_del(&page[0]->lru);
+	cache->num_pages--;
+
+	spin_unlock_irqrestore(&cache->lock, flags);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(gnttab_page_cache_get);
+
+void gnttab_page_cache_put(struct gnttab_page_cache *cache, struct page **page,
+			   unsigned int num)
+{
+	unsigned long flags;
+	unsigned int i;
+
+	spin_lock_irqsave(&cache->lock, flags);
+
+	for (i = 0; i < num; i++)
+		list_add(&page[i]->lru, &cache->pages);
+	cache->num_pages += num;
+
+	spin_unlock_irqrestore(&cache->lock, flags);
+}
+EXPORT_SYMBOL_GPL(gnttab_page_cache_put);
+
+void gnttab_page_cache_shrink(struct gnttab_page_cache *cache, unsigned int num)
+{
+	struct page *page[10];
+	unsigned int i = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&cache->lock, flags);
+
+	while (cache->num_pages > num) {
+		page[i] = list_first_entry(&cache->pages, struct page, lru);
+		list_del(&page[i]->lru);
+		cache->num_pages--;
+		if (++i == ARRAY_SIZE(page)) {
+			spin_unlock_irqrestore(&cache->lock, flags);
+			gnttab_free_pages(i, page);
+			i = 0;
+			spin_lock_irqsave(&cache->lock, flags);
+		}
+	}
+
+	spin_unlock_irqrestore(&cache->lock, flags);
+
+	if (i != 0)
+		gnttab_free_pages(i, page);
+}
+EXPORT_SYMBOL_GPL(gnttab_page_cache_shrink);
+
 void gnttab_pages_clear_private(int nr_pages, struct page **pages)
 {
 	int i;
diff --git a/drivers/xen/xen-scsiback.c b/drivers/xen/xen-scsiback.c
index 4acc4e899600..862162dca33c 100644
--- a/drivers/xen/xen-scsiback.c
+++ b/drivers/xen/xen-scsiback.c
@@ -99,6 +99,8 @@ struct vscsibk_info {
 	struct list_head v2p_entry_lists;
 
 	wait_queue_head_t waiting_to_free;
+
+	struct gnttab_page_cache free_pages;
 };
 
 /* theoretical maximum of grants for one request */
@@ -188,10 +190,6 @@ module_param_named(max_buffer_pages, scsiback_max_buffer_pages, int, 0644);
 MODULE_PARM_DESC(max_buffer_pages,
 "Maximum number of free pages to keep in backend buffer");
 
-static DEFINE_SPINLOCK(free_pages_lock);
-static int free_pages_num;
-static LIST_HEAD(scsiback_free_pages);
-
 /* Global spinlock to protect scsiback TPG list */
 static DEFINE_MUTEX(scsiback_mutex);
 static LIST_HEAD(scsiback_list);
@@ -207,41 +205,6 @@ static void scsiback_put(struct vscsibk_info *info)
 		wake_up(&info->waiting_to_free);
 }
 
-static void put_free_pages(struct page **page, int num)
-{
-	unsigned long flags;
-	int i = free_pages_num + num, n = num;
-
-	if (num == 0)
-		return;
-	if (i > scsiback_max_buffer_pages) {
-		n = min(num, i - scsiback_max_buffer_pages);
-		gnttab_free_pages(n, page + num - n);
-		n = num - n;
-	}
-	spin_lock_irqsave(&free_pages_lock, flags);
-	for (i = 0; i < n; i++)
-		list_add(&page[i]->lru, &scsiback_free_pages);
-	free_pages_num += n;
-	spin_unlock_irqrestore(&free_pages_lock, flags);
-}
-
-static int get_free_page(struct page **page)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&free_pages_lock, flags);
-	if (list_empty(&scsiback_free_pages)) {
-		spin_unlock_irqrestore(&free_pages_lock, flags);
-		return gnttab_alloc_pages(1, page);
-	}
-	page[0] = list_first_entry(&scsiback_free_pages, struct page, lru);
-	list_del(&page[0]->lru);
-	free_pages_num--;
-	spin_unlock_irqrestore(&free_pages_lock, flags);
-	return 0;
-}
-
 static unsigned long vaddr_page(struct page *page)
 {
 	unsigned long pfn = page_to_pfn(page);
@@ -302,7 +265,8 @@ static void scsiback_fast_flush_area(struct vscsibk_pend *req)
 		BUG_ON(err);
 	}
 
-	put_free_pages(req->pages, req->n_grants);
+	gnttab_page_cache_put(&req->info->free_pages, req->pages,
+			      req->n_grants);
 	req->n_grants = 0;
 }
 
@@ -445,8 +409,8 @@ static int scsiback_gnttab_data_map_list(struct vscsibk_pend *pending_req,
 	struct vscsibk_info *info = pending_req->info;
 
 	for (i = 0; i < cnt; i++) {
-		if (get_free_page(pg + mapcount)) {
-			put_free_pages(pg, mapcount);
+		if (gnttab_page_cache_get(&info->free_pages, pg + mapcount)) {
+			gnttab_page_cache_put(&info->free_pages, pg, mapcount);
 			pr_err("no grant page\n");
 			return -ENOMEM;
 		}
@@ -796,6 +760,8 @@ static int scsiback_do_cmd_fn(struct vscsibk_info *info,
 		cond_resched();
 	}
 
+	gnttab_page_cache_shrink(&info->free_pages, scsiback_max_buffer_pages);
+
 	RING_FINAL_CHECK_FOR_REQUESTS(&info->ring, more_to_do);
 	return more_to_do;
 }
@@ -1233,6 +1199,8 @@ static int scsiback_remove(struct xenbus_device *dev)
 
 	scsiback_release_translation_entry(info);
 
+	gnttab_page_cache_shrink(&info->free_pages, 0);
+
 	dev_set_drvdata(&dev->dev, NULL);
 
 	return 0;
@@ -1263,6 +1231,7 @@ static int scsiback_probe(struct xenbus_device *dev,
 	info->irq = 0;
 	INIT_LIST_HEAD(&info->v2p_entry_lists);
 	spin_lock_init(&info->v2p_lock);
+	gnttab_page_cache_init(&info->free_pages);
 
 	err = xenbus_printf(XBT_NIL, dev->nodename, "feature-sg-grant", "%u",
 			    SG_ALL);
@@ -1879,13 +1848,6 @@ static int __init scsiback_init(void)
 
 static void __exit scsiback_exit(void)
 {
-	struct page *page;
-
-	while (free_pages_num) {
-		if (get_free_page(&page))
-			BUG();
-		gnttab_free_pages(1, &page);
-	}
 	target_unregister_template(&scsiback_ops);
 	xenbus_unregister_driver(&scsiback_driver);
 }
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 9bc5bc07d4d3..c6ef8ffc1a09 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -198,6 +198,19 @@ void gnttab_free_auto_xlat_frames(void);
 int gnttab_alloc_pages(int nr_pages, struct page **pages);
 void gnttab_free_pages(int nr_pages, struct page **pages);
 
+struct gnttab_page_cache {
+	spinlock_t		lock;
+	struct list_head	pages;
+	unsigned int		num_pages;
+};
+
+void gnttab_page_cache_init(struct gnttab_page_cache *cache);
+int gnttab_page_cache_get(struct gnttab_page_cache *cache, struct page **page);
+void gnttab_page_cache_put(struct gnttab_page_cache *cache, struct page **page,
+			   unsigned int num);
+void gnttab_page_cache_shrink(struct gnttab_page_cache *cache,
+			      unsigned int num);
+
 #ifdef CONFIG_XEN_GRANT_DMA_ALLOC
 struct gnttab_dma_alloc_args {
 	/* Device for which DMA memory will be/was allocated. */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 13:30:54 2020
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-scsi@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Jens Axboe <axboe@kernel.dk>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 0/2] xen: fix using ZONE_DEVICE memory for foreign mappings
Date: Mon,  7 Dec 2020 14:30:22 +0100
Message-Id: <20201207133024.16621-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Fix an issue found in dom0 when running on a host with NVMe.

Juergen Gross (2):
  xen: add helpers for caching grant mapping pages
  xen: don't use page->lru for ZONE_DEVICE memory

 drivers/block/xen-blkback/blkback.c |  89 ++++----------------
 drivers/block/xen-blkback/common.h  |   4 +-
 drivers/block/xen-blkback/xenbus.c  |   6 +-
 drivers/xen/grant-table.c           | 123 ++++++++++++++++++++++++++++
 drivers/xen/unpopulated-alloc.c     |  20 +++--
 drivers/xen/xen-scsiback.c          |  60 +++-----------
 include/xen/grant_table.h           |  17 ++++
 7 files changed, 182 insertions(+), 137 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 13:30:54 2020
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 2/2] xen: don't use page->lru for ZONE_DEVICE memory
Date: Mon,  7 Dec 2020 14:30:24 +0100
Message-Id: <20201207133024.16621-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201207133024.16621-1-jgross@suse.com>
References: <20201207133024.16621-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Commit 9e2369c06c8a18 ("xen: add helpers to allocate unpopulated
memory") introduced usage of ZONE_DEVICE memory for foreign memory
mappings.

Unfortunately this collides with using page->lru for Xen backend
private page caches.

Fix that by using page->zone_device_data instead.

Fixes: 9e2369c06c8a18 ("xen: add helpers to allocate unpopulated memory")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/grant-table.c       | 65 +++++++++++++++++++++++++++++----
 drivers/xen/unpopulated-alloc.c | 20 +++++-----
 include/xen/grant_table.h       |  4 ++
 3 files changed, 73 insertions(+), 16 deletions(-)

diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index e2e42912f241..696663a439fe 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -813,10 +813,63 @@ int gnttab_alloc_pages(int nr_pages, struct page **pages)
 }
 EXPORT_SYMBOL_GPL(gnttab_alloc_pages);
 
+#ifdef CONFIG_XEN_UNPOPULATED_ALLOC
+static inline void cache_init(struct gnttab_page_cache *cache)
+{
+	cache->pages = NULL;
+}
+
+static inline bool cache_empty(struct gnttab_page_cache *cache)
+{
+	return !cache->pages;
+}
+
+static inline struct page *cache_deq(struct gnttab_page_cache *cache)
+{
+	struct page *page;
+
+	page = cache->pages;
+	cache->pages = page->zone_device_data;
+
+	return page;
+}
+
+static inline void cache_enq(struct gnttab_page_cache *cache, struct page *page)
+{
+	page->zone_device_data = cache->pages;
+	cache->pages = page;
+}
+#else
+static inline void cache_init(struct gnttab_page_cache *cache)
+{
+	INIT_LIST_HEAD(&cache->pages);
+}
+
+static inline bool cache_empty(struct gnttab_page_cache *cache)
+{
+	return list_empty(&cache->pages);
+}
+
+static inline struct page *cache_deq(struct gnttab_page_cache *cache)
+{
+	struct page *page;
+
+	page = list_first_entry(&cache->pages, struct page, lru);
+	list_del(&page->lru);
+
+	return page;
+}
+
+static inline void cache_enq(struct gnttab_page_cache *cache, struct page *page)
+{
+	list_add(&page->lru, &cache->pages);
+}
+#endif
+
 void gnttab_page_cache_init(struct gnttab_page_cache *cache)
 {
 	spin_lock_init(&cache->lock);
-	INIT_LIST_HEAD(&cache->pages);
+	cache_init(cache);
 	cache->num_pages = 0;
 }
 EXPORT_SYMBOL_GPL(gnttab_page_cache_init);
@@ -827,13 +880,12 @@ int gnttab_page_cache_get(struct gnttab_page_cache *cache, struct page **page)
 
 	spin_lock_irqsave(&cache->lock, flags);
 
-	if (list_empty(&cache->pages)) {
+	if (cache_empty(cache)) {
 		spin_unlock_irqrestore(&cache->lock, flags);
 		return gnttab_alloc_pages(1, page);
 	}
 
-	page[0] = list_first_entry(&cache->pages, struct page, lru);
-	list_del(&page[0]->lru);
+	page[0] = cache_deq(cache);
 	cache->num_pages--;
 
 	spin_unlock_irqrestore(&cache->lock, flags);
@@ -851,7 +903,7 @@ void gnttab_page_cache_put(struct gnttab_page_cache *cache, struct page **page,
 	spin_lock_irqsave(&cache->lock, flags);
 
 	for (i = 0; i < num; i++)
-		list_add(&page[i]->lru, &cache->pages);
+		cache_enq(cache, page[i]);
 	cache->num_pages += num;
 
 	spin_unlock_irqrestore(&cache->lock, flags);
@@ -867,8 +919,7 @@ void gnttab_page_cache_shrink(struct gnttab_page_cache *cache, unsigned int num)
 	spin_lock_irqsave(&cache->lock, flags);
 
 	while (cache->num_pages > num) {
-		page[i] = list_first_entry(&cache->pages, struct page, lru);
-		list_del(&page[i]->lru);
+		page[i] = cache_deq(cache);
 		cache->num_pages--;
 		if (++i == ARRAY_SIZE(page)) {
 			spin_unlock_irqrestore(&cache->lock, flags);
diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-alloc.c
index 8c512ea550bb..7762c1bb23cb 100644
--- a/drivers/xen/unpopulated-alloc.c
+++ b/drivers/xen/unpopulated-alloc.c
@@ -12,7 +12,7 @@
 #include <xen/xen.h>
 
 static DEFINE_MUTEX(list_lock);
-static LIST_HEAD(page_list);
+static struct page *page_list;
 static unsigned int list_count;
 
 static int fill_list(unsigned int nr_pages)
@@ -84,7 +84,8 @@ static int fill_list(unsigned int nr_pages)
 		struct page *pg = virt_to_page(vaddr + PAGE_SIZE * i);
 
 		BUG_ON(!virt_addr_valid(vaddr + PAGE_SIZE * i));
-		list_add(&pg->lru, &page_list);
+		pg->zone_device_data = page_list;
+		page_list = pg;
 		list_count++;
 	}
 
@@ -118,12 +119,10 @@ int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
 	}
 
 	for (i = 0; i < nr_pages; i++) {
-		struct page *pg = list_first_entry_or_null(&page_list,
-							   struct page,
-							   lru);
+		struct page *pg = page_list;
 
 		BUG_ON(!pg);
-		list_del(&pg->lru);
+		page_list = pg->zone_device_data;
 		list_count--;
 		pages[i] = pg;
 
@@ -134,7 +133,8 @@ int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
 				unsigned int j;
 
 				for (j = 0; j <= i; j++) {
-					list_add(&pages[j]->lru, &page_list);
+					pages[j]->zone_device_data = page_list;
+					page_list = pages[j];
 					list_count++;
 				}
 				goto out;
@@ -160,7 +160,8 @@ void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages)
 
 	mutex_lock(&list_lock);
 	for (i = 0; i < nr_pages; i++) {
-		list_add(&pages[i]->lru, &page_list);
+		pages[i]->zone_device_data = page_list;
+		page_list = pages[i];
 		list_count++;
 	}
 	mutex_unlock(&list_lock);
@@ -189,7 +190,8 @@ static int __init init(void)
 			struct page *pg =
 				pfn_to_page(xen_extra_mem[i].start_pfn + j);
 
-			list_add(&pg->lru, &page_list);
+			pg->zone_device_data = page_list;
+			page_list = pg;
 			list_count++;
 		}
 	}
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index c6ef8ffc1a09..b9c937b3a149 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -200,7 +200,11 @@ void gnttab_free_pages(int nr_pages, struct page **pages);
 
 struct gnttab_page_cache {
 	spinlock_t		lock;
+#ifdef CONFIG_XEN_UNPOPULATED_ALLOC
+	struct page		*pages;
+#else
 	struct list_head	pages;
+#endif
 	unsigned int		num_pages;
 };
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 13:41:43 2020
From: Benjamin Reis <benjamin.reis@vates.fr>
Subject: [Help Wanted] New toolstack Rust PoC
To: xen-devel@lists.xenproject.org
Message-Id: <1b86b418-71e1-3422-6531-00af746cf7a8@vates.fr>
Date: Mon, 07 Dec 2020 13:41:25 +0000
MIME-Version: 1.0
Content-Type: multipart/alternative; boundary="_av-VT238g60RITZZ1j9P4kUSA"

Hi all,

As you may know, we are working on a Rust PoC
(https://github.com/xcp-ng/xenopsd-ng/rust/) for the new Xen
toolstack. We have partial bindings for Xenctrl and Xenstore, and with
those we're able to pause/unpause/shutdown a guest; we're also close to
booting an XTF guest (https://github.com/xcp-ng/xenopsd-ng/pull/13).

We are new to Xen, so we're having a hard time finding what's wrong with
our current code, and we're looking for some help/guidance to keep the
PoC moving forward. We suspect that a hypercall fails silently when
setting the guest's HVM context. Would anyone be interested in helping
us debug/develop the PoC? We're currently tracking this issue here:
https://github.com/xcp-ng/xenopsd-ng/issues/17.

Hoping to hear from you soon,

Benjamin Reis
Vates SAS



--_av-VT238g60RITZZ1j9P4kUSA--



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 13:56:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 13:56:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46606.82692 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmGzg-0000xc-4N; Mon, 07 Dec 2020 13:56:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46606.82692; Mon, 07 Dec 2020 13:56:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmGzg-0000xV-1A; Mon, 07 Dec 2020 13:56:00 +0000
Received: by outflank-mailman (input) for mailman id 46606;
 Mon, 07 Dec 2020 13:55:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xVE2=FL=bounce.vates.fr=bounce-md_30504962.5fce346c.v1-0be095ed234e41ce8e97c3944e01e4fc@srs-us1.protection.inumbo.net>)
 id 1kmGzd-0000xQ-Tp
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 13:55:58 +0000
Received: from mail145-20.atl61.mandrillapp.com (unknown [198.2.145.20])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 28e1be9b-d2c1-467e-9070-ee819e48cf14;
 Mon, 07 Dec 2020 13:55:56 +0000 (UTC)
Received: from pmta06.mandrill.prod.atl01.rsglab.com (localhost [127.0.0.1])
 by mail145-20.atl61.mandrillapp.com (Mailchimp) with ESMTP id
 4CqPwX4hsszCf9LfK
 for <xen-devel@lists.xenproject.org>; Mon,  7 Dec 2020 13:55:56 +0000 (GMT)
Received: from [185.78.159.90] by mandrillapp.com id
 0be095ed234e41ce8e97c3944e01e4fc; Mon, 07 Dec 2020 13:55:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 28e1be9b-d2c1-467e-9070-ee819e48cf14
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.fr;
	s=mandrill; t=1607349356; i=benjamin.reis@vates.fr;
	bh=4VZaVyg+b6MyJupqgbO2pHhU3e8r4QkiNoU+FOFm/CE=;
	h=From:Subject:To:References:Message-Id:In-Reply-To:Date:
	 MIME-Version:Content-Type:Content-Transfer-Encoding;
	b=H6I9S0GtrhuuMuN98hzxCRLpucdRHmiGVv/GgE1MIllC2+Becx6aeoYRpwbj1zdZP
	 ffhzHN4fdFqUyGpcG7cibhtZbsB/kYx/2BARhDuPYRwdom3ZxdlefnTLNCXWe4LD+j
	 pyArnBwijWoEBd0F0B//bMy59rTCo+yWnDnmGWBU=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com; 
 i=@mandrillapp.com; q=dns/txt; s=mandrill; t=1607349356; h=From : 
 Subject : To : References : Message-Id : In-Reply-To : Date : 
 MIME-Version : Content-Type : Content-Transfer-Encoding : From : 
 Subject : Date : X-Mandrill-User : List-Unsubscribe; 
 bh=4VZaVyg+b6MyJupqgbO2pHhU3e8r4QkiNoU+FOFm/CE=; 
 b=n4gVD2ZmLcwz/HJkdI2IVYfXfU18PCNfU4oyC6F37lnp6qMqVCvsrgCp7+L+sOf4lqiNv0
 YAFQJJsM0r9wSzgCDvZt3MvOG6AEUHbiWNL8DGgJsblxuAFyAJSqp+ijn5k8yhdaiYyYgHne
 Pi92oR4CXlTYlNlwNrmba9LWybDb4=
From: Benjamin Reis <benjamin.reis@vates.fr>
Subject: Re: [Help Wanted] New toolstack Rust PoC
X-Virus-Scanned: amavisd-new at plam.fr
To: xen-devel@lists.xenproject.org
References: <1b86b418-71e1-3422-6531-00af746cf7a8@vates.fr>
Message-Id: <18094b9d-0e09-aed9-8fe0-f17070b55f39@vates.fr>
In-Reply-To: <1b86b418-71e1-3422-6531-00af746cf7a8@vates.fr>
X-Report-Abuse: Please forward a copy of this message, including all headers, to abuse@mandrill.com
X-Report-Abuse: You can also report abuse here: http://mandrillapp.com/contact/abuse?id=30504962.0be095ed234e41ce8e97c3944e01e4fc
X-Mandrill-User: md_30504962
Date: Mon, 07 Dec 2020 13:55:56 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit

Same mail as plain text; sorry for the inconvenience and the unreadable 
formatting of my previous one.

Hi all,

As you may know, we are working on a Rust PoC 
(https://github.com/xcp-ng/xenopsd-ng/rust/ 
<https://github.com/xcp-ng/xenopsd-ng/pull/13>) for the new Xen toolstack.

We have partial bindings for Xenctrl and Xenstore; with those we're 
able to pause/unpause/shutdown a guest, and we're close to booting an 
XTF guest (https://github.com/xcp-ng/xenopsd-ng/pull/13).

We are new to Xen and are having a hard time finding what's wrong with 
our current code, so we're looking for some help/guidance to keep the 
PoC moving forward.

We suspect that a hypercall fails silently when setting the 
guest's HVM context.
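For what it's worth, the set-context call does report failures through libxenctrl's return value and errno; a failure is only "silent" if those are discarded, which is easy to do accidentally through FFI bindings. A minimal C sketch of the round trip with every return checked (illustrative only, not the PoC's actual code; the function names are the real libxenctrl ones, the error handling is our own):

```c
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include <xenctrl.h>            /* libxenctrl; link with -lxenctrl */

/*
 * Sketch: read a domain's HVM context and write it straight back,
 * checking every return value so a failing hypercall cannot pass
 * unnoticed.
 */
static int roundtrip_hvm_context(xc_interface *xch, uint32_t domid)
{
    /* With a NULL buffer, getcontext reports the size needed. */
    int size = xc_domain_hvm_getcontext(xch, domid, NULL, 0);
    if ( size <= 0 )
    {
        fprintf(stderr, "getcontext(size) failed: %d (%s)\n",
                size, strerror(errno));
        return -1;
    }

    uint8_t *buf = malloc(size);
    if ( !buf )
        return -1;

    int ret = xc_domain_hvm_getcontext(xch, domid, buf, size);
    if ( ret < 0 )
        fprintf(stderr, "getcontext failed: %s\n", strerror(errno));
    else if ( (ret = xc_domain_hvm_setcontext(xch, domid, buf, size)) != 0 )
        fprintf(stderr, "setcontext failed: %s\n", strerror(errno));

    free(buf);
    return ret;
}
```

If the Rust bindings wrap these calls, making them return Result and propagating errno the same way should surface whatever is currently being swallowed.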

Would anyone be interested in helping us debug/develop the PoC?

We're currently tracking this issue here: 
https://github.com/xcp-ng/xenopsd-ng/issues/17.

Hoping to hear from you soon.

Benjamin Reis

Vates SAS


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 14:44:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 14:44:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46663.82733 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmHk9-0006Cb-Ck; Mon, 07 Dec 2020 14:44:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46663.82733; Mon, 07 Dec 2020 14:44:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmHk9-0006CU-9c; Mon, 07 Dec 2020 14:44:01 +0000
Received: by outflank-mailman (input) for mailman id 46663;
 Mon, 07 Dec 2020 14:43:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=AUZZ=FL=redhat.com=thuth@srs-us1.protection.inumbo.net>)
 id 1kmHk7-0006CP-NL
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 14:43:59 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 6714aaeb-98e6-478d-9651-a3b06d8ca960;
 Mon, 07 Dec 2020 14:43:58 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-142-SPQGlG4kPs6bEH4OCatr7A-1; Mon, 07 Dec 2020 09:43:56 -0500
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com
 [10.5.11.12])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 5EB25107ACE6;
 Mon,  7 Dec 2020 14:43:54 +0000 (UTC)
Received: from thuth.remote.csb (ovpn-112-85.ams2.redhat.com [10.36.112.85])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 0439760C0F;
 Mon,  7 Dec 2020 14:43:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6714aaeb-98e6-478d-9651-a3b06d8ca960
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607352238;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=yYikCPcIPVfkQPyOTA1er7+ojxS2PgeXAGCbiR51IuI=;
	b=bGKLa0MHvrZRLICou5EsDR8Q3ushej4hUwOpVkHDdpmIs5cDDMJG3N7qvvxYAFib0ya6Bo
	Rqh1cs4xu78ty3o1kuR/hsF3TGSpS11JVMrxcLVJxXE+ONfxQrnbahAphLWDJXkSL8OETA
	QAZpx9a1LoIT2SjF6RBJx9Jm6dgl3XE=
X-MC-Unique: SPQGlG4kPs6bEH4OCatr7A-1
Subject: Re: [PATCH v3 5/5] gitlab-ci: Add Xen cross-build jobs
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 qemu-devel@nongnu.org
Cc: Christian Borntraeger <borntraeger@de.ibm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Marcelo Tosatti <mtosatti@redhat.com>, kvm@vger.kernel.org,
 Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, qemu-s390x@nongnu.org,
 Halil Pasic <pasic@linux.ibm.com>, Willian Rampazzo <wrampazz@redhat.com>,
 Paul Durrant <paul@xen.org>, Cornelia Huck <cohuck@redhat.com>,
 xen-devel@lists.xenproject.org, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, Claudio Fontana <cfontana@suse.de>,
 Wainer dos Santos Moschetta <wainersm@redhat.com>
References: <20201207131503.3858889-1-philmd@redhat.com>
 <20201207131503.3858889-6-philmd@redhat.com>
From: Thomas Huth <thuth@redhat.com>
Message-ID: <f69be51c-1a16-8c5a-f46c-d621d905c9ca@redhat.com>
Date: Mon, 7 Dec 2020 15:43:44 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <20201207131503.3858889-6-philmd@redhat.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12

On 07/12/2020 14.15, Philippe Mathieu-Daudé wrote:
> Cross-build ARM and X86 targets with only Xen accelerator enabled.
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  .gitlab-ci.d/crossbuilds.yml | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
> 
> diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
> index 51896bbc9fb..bd6473a75a7 100644
> --- a/.gitlab-ci.d/crossbuilds.yml
> +++ b/.gitlab-ci.d/crossbuilds.yml
> @@ -134,3 +134,17 @@ cross-win64-system:
>    extends: .cross_system_build_job
>    variables:
>      IMAGE: fedora-win64-cross
> +
> +cross-amd64-xen-only:
> +  extends: .cross_accel_build_job
> +  variables:
> +    IMAGE: debian-amd64-cross
> +    ACCEL: xen
> +    ACCEL_CONFIGURE_OPTS: --disable-tcg --disable-kvm
> +
> +cross-arm64-xen-only:
> +  extends: .cross_accel_build_job
> +  variables:
> +    IMAGE: debian-arm64-cross
> +    ACCEL: xen
> +    ACCEL_CONFIGURE_OPTS: --disable-tcg --disable-kvm

Reviewed-by: Thomas Huth <thuth@redhat.com>




From xen-devel-bounces@lists.xenproject.org Mon Dec 07 14:49:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 14:49:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46670.82746 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmHp1-0006Ob-16; Mon, 07 Dec 2020 14:49:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46670.82746; Mon, 07 Dec 2020 14:49:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmHp0-0006OU-Tm; Mon, 07 Dec 2020 14:49:02 +0000
Received: by outflank-mailman (input) for mailman id 46670;
 Mon, 07 Dec 2020 14:49:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DX/D=FL=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kmHoz-0006OP-G2
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 14:49:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bbb6fdf3-c5e6-48ce-b202-d0b8f07a0dba;
 Mon, 07 Dec 2020 14:49:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9F1A2AB63;
 Mon,  7 Dec 2020 14:48:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bbb6fdf3-c5e6-48ce-b202-d0b8f07a0dba
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607352539; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=3jt2Zre32XJyVR5/Pd2ncuWrHMxYQyWnVOWainLoUmw=;
	b=HKD3kOYi1/vmaEM4LYBC/IAZjrHn06E+wBaweE7AI6lu/xNn0hJGKUij0ActByRFuM+Twk
	BaTWejFEnWYQuU9mvR4AdNjRuIghN20RP+BQHFgmUtvFohX/QWCgE6c7z/gd3g05ISiIvH
	Qu5l3OgYuHia1vfORI7NZC+Wgh52nY0=
Subject: Re: [PATCH v2 04/17] xen/cpupool: switch cpupool id to unsigned
To: Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-5-jgross@suse.com>
 <a0bac022-fe6e-aae6-6d07-6a2b9bc492b3@suse.com>
 <eed1baac-a6eb-f10b-7272-742c08f5124e@suse.com>
 <eb2e47aa-6c82-f3dd-83e7-b1853816c41c@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <d21da8c1-fdb9-6184-06f1-dce6ed683d92@suse.com>
Date: Mon, 7 Dec 2020 15:48:58 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <eb2e47aa-6c82-f3dd-83e7-b1853816c41c@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="TNumpRhGdK1eAkg13d4zxXx299HLpTVSk"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--TNumpRhGdK1eAkg13d4zxXx299HLpTVSk
Content-Type: multipart/mixed; boundary="eDS3yjH2os8VjBdjo7yqfRZloBPkgm1PL";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <d21da8c1-fdb9-6184-06f1-dce6ed683d92@suse.com>
Subject: Re: [PATCH v2 04/17] xen/cpupool: switch cpupool id to unsigned
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-5-jgross@suse.com>
 <a0bac022-fe6e-aae6-6d07-6a2b9bc492b3@suse.com>
 <eed1baac-a6eb-f10b-7272-742c08f5124e@suse.com>
 <eb2e47aa-6c82-f3dd-83e7-b1853816c41c@suse.com>
In-Reply-To: <eb2e47aa-6c82-f3dd-83e7-b1853816c41c@suse.com>

--eDS3yjH2os8VjBdjo7yqfRZloBPkgm1PL
Content-Type: multipart/mixed;
 boundary="------------FE7B507EBBDDF0E1B5BB60C5"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------FE7B507EBBDDF0E1B5BB60C5
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 07.12.20 10:59, Jan Beulich wrote:
> On 01.12.2020 10:01, Jürgen Groß wrote:
>> On 01.12.20 09:55, Jan Beulich wrote:
>>> On 01.12.2020 09:21, Juergen Gross wrote:
>>>> --- a/xen/common/sched/private.h
>>>> +++ b/xen/common/sched/private.h
>>>> @@ -505,8 +505,8 @@ static inline void sched_unit_unpause(const struct sched_unit *unit)
>>>>    
>>>>    struct cpupool
>>>>    {
>>>> -    int              cpupool_id;
>>>> -#define CPUPOOLID_NONE    (-1)
>>>> +    unsigned int     cpupool_id;
>>>> +#define CPUPOOLID_NONE    (~0U)
>>>
>>> How about using XEN_SYSCTL_CPUPOOL_PAR_ANY here? Furthermore,
>>> together with the remark above, I think you also want to consider
>>> the case of sizeof(unsigned int) > sizeof(uint32_t).
>>
>> With patch 5 this should be completely fine.
> 
> I don't think so, as there still will be CPUPOOLID_NONE !=
> XEN_SYSCTL_CPUPOOL_PAR_ANY in the mentioned case.

I don't see that being relevant, as we have in cpupool_do_sysctl():

poolid = (op->cpupool_id == XEN_SYSCTL_CPUPOOL_PAR_ANY) ?
             CPUPOOLID_NONE: op->cpupool_id;


Juergen

--------------FE7B507EBBDDF0E1B5BB60C5
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------FE7B507EBBDDF0E1B5BB60C5--

--eDS3yjH2os8VjBdjo7yqfRZloBPkgm1PL--

--TNumpRhGdK1eAkg13d4zxXx299HLpTVSk
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/OQNoFAwAAAAAACgkQsN6d1ii/Ey9n
vQf7BdYARAAWdD+qmWicbkyKvsjGl18p0rehtbTcHQa0/NQaRAeFQB+ow2QIIlsYZqgRv53WEn0A
KLnSemjcivqFC17gy7SnvCVTTRpWZ4n5S0z21Aq5eQ1FVl4fleeCqmmJwFI5e1jvVmrctXVwsCXz
2BQK799YeedZfMN4lgBqBtf7IeAYQHgWucst/zFUJ0GEJHn5jGyQKozGtd3aTnMEw0uDo7PPHnb5
x1TNYuWJK8E60UIeCcgwcNtDW6Qmp+BavA69OOix01qvacfpc/0GWRAnrNrYhpJt7J4z7ZDPdj+a
IV2yg/KlvVQP1noXfFbiYZzVRlsuFiX5dSUwb8F2+Q==
=SJA+
-----END PGP SIGNATURE-----

--TNumpRhGdK1eAkg13d4zxXx299HLpTVSk--


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 15:00:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 15:00:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46682.82758 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmHzp-0008CC-1T; Mon, 07 Dec 2020 15:00:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46682.82758; Mon, 07 Dec 2020 15:00:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmHzo-0008C5-Tt; Mon, 07 Dec 2020 15:00:12 +0000
Received: by outflank-mailman (input) for mailman id 46682;
 Mon, 07 Dec 2020 15:00:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmHzn-0008C0-TL
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 15:00:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d052f47e-e39b-4724-8db8-88a2179e342c;
 Mon, 07 Dec 2020 15:00:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 32128ACBD;
 Mon,  7 Dec 2020 15:00:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d052f47e-e39b-4724-8db8-88a2179e342c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607353210; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=UPfYqQ+Bzpt16+gJpfccy7YbR/QJRFCBohhl/OWjGb4=;
	b=fXitukg3x33dDcruenaOQ7JFDXDGx9svtHN3Psc/5Mn/OpJ2/QpWF9zPebbn05q56X3y6b
	ebaXHWKCYx6AU23gEzdor59j1UGcXDxUZQtO4U2R28bz4GdY2Oktj10efj/5M+6ruRe7I5
	EKTeRSZdyJ53Bw+tewk1B9pFXcYiYlM=
Subject: Re: [PATCH v2 04/17] xen/cpupool: switch cpupool id to unsigned
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-5-jgross@suse.com>
 <a0bac022-fe6e-aae6-6d07-6a2b9bc492b3@suse.com>
 <eed1baac-a6eb-f10b-7272-742c08f5124e@suse.com>
 <eb2e47aa-6c82-f3dd-83e7-b1853816c41c@suse.com>
 <d21da8c1-fdb9-6184-06f1-dce6ed683d92@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2efb6f26-ab82-7122-26c9-1db586d44bbb@suse.com>
Date: Mon, 7 Dec 2020 16:00:12 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <d21da8c1-fdb9-6184-06f1-dce6ed683d92@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 07.12.2020 15:48, Jürgen Groß wrote:
> On 07.12.20 10:59, Jan Beulich wrote:
>> On 01.12.2020 10:01, Jürgen Groß wrote:
>>> On 01.12.20 09:55, Jan Beulich wrote:
>>>> On 01.12.2020 09:21, Juergen Gross wrote:
>>>>> --- a/xen/common/sched/private.h
>>>>> +++ b/xen/common/sched/private.h
>>>>> @@ -505,8 +505,8 @@ static inline void sched_unit_unpause(const struct sched_unit *unit)
>>>>>    
>>>>>    struct cpupool
>>>>>    {
>>>>> -    int              cpupool_id;
>>>>> -#define CPUPOOLID_NONE    (-1)
>>>>> +    unsigned int     cpupool_id;
>>>>> +#define CPUPOOLID_NONE    (~0U)
>>>>
>>>> How about using XEN_SYSCTL_CPUPOOL_PAR_ANY here? Furthermore,
>>>> together with the remark above, I think you also want to consider
>>>> the case of sizeof(unsigned int) > sizeof(uint32_t).
>>>
>>> With patch 5 this should be completely fine.
>>
>> I don't think so, as there still will be CPUPOOLID_NONE !=
>> XEN_SYSCTL_CPUPOOL_PAR_ANY in the mentioned case.
> 
> I don't see that being relevant, as we have in cpupool_do_sysctl():
> 
> poolid = (op->cpupool_id == XEN_SYSCTL_CPUPOOL_PAR_ANY) ?
>              CPUPOOLID_NONE: op->cpupool_id;

Oh, sorry for the noise then. I forgot about this transformation.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 15:21:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 15:21:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46698.82769 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIKc-0001zg-Pm; Mon, 07 Dec 2020 15:21:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46698.82769; Mon, 07 Dec 2020 15:21:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIKc-0001zZ-MY; Mon, 07 Dec 2020 15:21:42 +0000
Received: by outflank-mailman (input) for mailman id 46698;
 Mon, 07 Dec 2020 15:21:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmIKb-0001zU-1u
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 15:21:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4a0de98d-3d99-451e-abbd-1e897680e968;
 Mon, 07 Dec 2020 15:21:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7C0E1AB63;
 Mon,  7 Dec 2020 15:21:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a0de98d-3d99-451e-abbd-1e897680e968
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607354499; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=4qsMwrQS7vfRwqSJw3t1sG9JFXYCXuoAuYCkKH3a51E=;
	b=QKeCgOo60R3nl5TYxtphUai48h7V7S/iBlIkmZYBqmEQ4bLJnAAx2ng7U1MU51fNs2OGzm
	JcK2lsI2id9FHzVETkzzGLdo9RDWEpvviyfuuiQPRgi1qWZg3hBUJoqh4Si4T+DOhbAZDB
	4D4dmHhnGtzyXHlxli/cgoaiSRpNvRs=
Subject: Re: [PATCH v2 04/17] xen/cpupool: switch cpupool id to unsigned
To: Dario Faggioli <dfaggioli@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>,
 xen-devel@lists.xenproject.org
References: <20201201082128.15239-1-jgross@suse.com>
 <20201201082128.15239-5-jgross@suse.com>
 <c2753a9853a1dccc159e0b34db0cdf0a364d2206.camel@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <45bd5958-5935-7cbf-ddb3-7fae9e2dfd4b@suse.com>
Date: Mon, 7 Dec 2020 16:21:41 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <c2753a9853a1dccc159e0b34db0cdf0a364d2206.camel@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04.12.2020 16:52, Dario Faggioli wrote:
> On Tue, 2020-12-01 at 09:21 +0100, Juergen Gross wrote:
>> The cpupool id is an unsigned value in the public interface header,
>> so
>> there is no reason why it is a signed value in struct cpupool.
>>
>> Switch it to unsigned int.
>>
> I think we can add:
> 
> "No functional change intended"

I've not added this - there is an intentional change at least for

        if ( (poolid != CPUPOOLID_NONE) && (last >= poolid) )

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 15:25:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 15:25:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46709.82793 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIOT-0002F3-Gg; Mon, 07 Dec 2020 15:25:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46709.82793; Mon, 07 Dec 2020 15:25:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIOT-0002Ew-D7; Mon, 07 Dec 2020 15:25:41 +0000
Received: by outflank-mailman (input) for mailman id 46709;
 Mon, 07 Dec 2020 15:25:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmIOS-0002Er-BQ
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 15:25:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 370156ee-ea5a-409a-8625-12cb07d49450;
 Mon, 07 Dec 2020 15:25:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D64DCAB63;
 Mon,  7 Dec 2020 15:25:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 370156ee-ea5a-409a-8625-12cb07d49450
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607354738; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=I6SoY8CvJ7zGy7t31o0iJZMwSuFc/r2C5JcqmfBX6Go=;
	b=Qtqdppx02gtRACsGXJlhE/wuAfABeE2V2rJpGKtiSYW0o57LmrLeX4ylFwrJrnsIrbJR4f
	fRBrns1DoVeAmtCDBzbyxRnhDi2hDxx87MSlR/rLpbD80xzFNrhfRu9fEIiSY0FZZZTjKZ
	rwTIUZNszT89aw22Up2r5YCFx0/pD8s=
Subject: Ping: [PATCH] libxenstat: avoid build race
To: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Olaf Hering <olaf@aepfle.de>, Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <273da46e-2a56-f53c-f137-f6fc411ad8db@suse.com>
 <6CDFEFFF-368E-467B-970A-4CDFA7978A2E@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f2b92ab9-5bc2-79f8-8c28-2cf5d17f49e2@suse.com>
Date: Mon, 7 Dec 2020 16:25:41 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <6CDFEFFF-368E-467B-970A-4CDFA7978A2E@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 18.11.2020 15:27, Bertrand Marquis wrote:
>> On 17 Nov 2020, at 09:42, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> Olaf reported observing
>>
>> xenstat_qmp.c:26:10: fatal error: _paths.h: No such file or directory
>> .../tools/libs/stat/../../../tools/Rules.mk:153: xenstat_qmp.opic] Error 1
>>
>> Obviously _paths.h, when included by any of the sources, needs to be
>> created in advance of compiling any of them, not just the non-PIC ones.
>>
>> Reported-by: Olaf Hering <olaf@aepfle.de>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Ping? (I guess this one is again simple enough that I should
time out waiting by the middle of the week.)

Jan

>> ---
>> A similar issue (at the time of the report) in the building of
>> libxenstore was addressed by Jürgen's 9af5e2b31b4e ("tools/libs/store:
>> don't use symbolic links for external files").
>>
>> --- a/tools/libs/stat/Makefile
>> +++ b/tools/libs/stat/Makefile
>> @@ -30,7 +30,7 @@ include $(XEN_ROOT)/tools/libs/libs.mk
>>
>> include $(XEN_ROOT)/tools/libs/libs.mk
>>
>> -$(LIB_OBJS): _paths.h
>> +$(LIB_OBJS) $(PIC_OBJS): _paths.h
>>
>> PYLIB=bindings/swig/python/_xenstat.so
>> PYMOD=bindings/swig/python/xenstat.py
>>
> 



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 15:28:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 15:28:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46714.82806 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIQk-0002OE-2w; Mon, 07 Dec 2020 15:28:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46714.82806; Mon, 07 Dec 2020 15:28:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIQj-0002O7-Vu; Mon, 07 Dec 2020 15:28:01 +0000
Received: by outflank-mailman (input) for mailman id 46714;
 Mon, 07 Dec 2020 15:28:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=exIY=FL=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kmIQi-0002O2-7R
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 15:28:00 +0000
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 34d11b94-77f7-4928-bfeb-5ed64f8d9eaf;
 Mon, 07 Dec 2020 15:27:58 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id z21so18633570lfe.12
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 07:27:58 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id g6sm2842794lfb.291.2020.12.07.07.27.56
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 07 Dec 2020 07:27:56 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34d11b94-77f7-4928-bfeb-5ed64f8d9eaf
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=VuZVxmAH9LGv61YV1IiPgab8KuYu84A8RgRMWmUMgpE=;
        b=Xpm7u3McNUiOO0/B78RfoNPIUR6LHeyv/KlTK6C4GpikJCo1Rh0Muyz6e8KEuKE4hw
         lp336CZK8fJqwKDAWae+Ts2AWXZlK6dm3IQmLvrt/BTuMO9Z8RUkINWZufw8hSccRVT0
         LBUhR0bJ6cSiz6K7ErNP53PtvLscYBF6UarwlRgFAzWAQlAnFUbeQveUYp8Mwq7ZD/kc
         3Ru8enh04av/ZmUo4zrGtXrQYigE84yszYFsqJ68joZ9Tocbg0XtQbOkHpeVCSNfP7j4
         nTrOXoSqrT6GJ0BU2GGf/OZ9VcdQ0gfCtJl63EYbW7vlbr4WXYHzs5siRktJsmeYryAb
         wHmQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=VuZVxmAH9LGv61YV1IiPgab8KuYu84A8RgRMWmUMgpE=;
        b=bkqDYAjbPBE92spiwiTkmPtsMho7WMCAjCWPEcNsXlenFkjiT+jdDpta21yPQBUlXE
         3UHgXv0cdxVJMuye8UKehNz73GNd649FSv5DpvTP5y7L6pT6fIgoyzinWsOQ5/qyB2ZT
         LaYczsF/TBb3/pcN92x5yjxfGrQgiBsI3vKf3VMgoz3nrTJ6XaeBrVmUgBKLhq5fDjPB
         L7RBj64x1jLTcr5ZjnL2DdXjdGQNFgLPiRMgAMpUdpfOgNTsU/ngF+RCuv14ziQ4RmGv
         F4lMPN06h5EGzf1RpLXOHAHFpqiwRpbRgolY6Tk32SWD/RzW11ljTu1ggWTbHrGDOdPy
         nTvw==
X-Gm-Message-State: AOAM531+37tcXKPbFgSwpwVqAMF5p0XFgUFAAJXyhDQ6xH0VJkEu+Gfd
	CBAL92sPqgivTkA2PRKvF0u0nNv9Zn2PdA==
X-Google-Smtp-Source: ABdhPJzv4QWMcyfgnwFRy1d1sTHxRRF5l+Qn/xXT0m+v8BNcAKUamC1hoa7/5b8p1cXi4h9t88dRNQ==
X-Received: by 2002:a19:4a10:: with SMTP id x16mr2776301lfa.150.1607354877396;
        Mon, 07 Dec 2020 07:27:57 -0800 (PST)
Subject: Re: [PATCH V3 01/23] x86/ioreq: Prepare IOREQ feature for making it
 common
To: Jan Beulich <jbeulich@suse.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-2-git-send-email-olekstysh@gmail.com>
 <51a5c06f-e6ce-c651-2fd2-352aaa591fb1@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <029c3dcc-fac2-5b65-703e-5d821335f2a0@gmail.com>
Date: Mon, 7 Dec 2020 17:27:50 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <51a5c06f-e6ce-c651-2fd2-352aaa591fb1@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 07.12.20 13:13, Jan Beulich wrote:

Hi Jan

> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>> --- a/xen/arch/x86/hvm/ioreq.c
>> +++ b/xen/arch/x86/hvm/ioreq.c
>> @@ -17,15 +17,15 @@
>>    */
>>   
>>   #include <xen/ctype.h>
>> +#include <xen/domain.h>
>> +#include <xen/event.h>
>>   #include <xen/init.h>
>> +#include <xen/irq.h>
>>   #include <xen/lib.h>
>> -#include <xen/trace.h>
>> +#include <xen/paging.h>
>>   #include <xen/sched.h>
>> -#include <xen/irq.h>
>>   #include <xen/softirq.h>
>> -#include <xen/domain.h>
>> -#include <xen/event.h>
>> -#include <xen/paging.h>
>> +#include <xen/trace.h>
>>   #include <xen/vpci.h>
> Seeing this consolidation (thanks!), have you been able to figure
> out what xen/ctype.h is needed for here? It looks to me as if it
> could be dropped at the same time.

Not yet, but will re-check.


>
>> @@ -601,7 +610,7 @@ static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s)
>>       return rc;
>>   }
>>   
>> -static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
>> +void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
>>   {
>>       hvm_unmap_ioreq_gfn(s, true);
>>       hvm_unmap_ioreq_gfn(s, false);
> How is this now different from ...
>
>> @@ -674,6 +683,12 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
>>       return rc;
>>   }
>>   
>> +void arch_ioreq_server_enable(struct hvm_ioreq_server *s)
>> +{
>> +    hvm_remove_ioreq_gfn(s, false);
>> +    hvm_remove_ioreq_gfn(s, true);
>> +}
> ... this? Imo if at all possible there should be no such duplication
> (i.e. at least have this one simply call the earlier one).

I'm afraid I don't see any duplication between the mentioned functions.
Would you mind explaining?


>
>> @@ -1080,6 +1105,24 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
>>       return rc;
>>   }
>>   
>> +/* Called with ioreq_server lock held */
>> +int arch_ioreq_server_map_mem_type(struct domain *d,
>> +                                   struct hvm_ioreq_server *s,
>> +                                   uint32_t flags)
>> +{
>> +    int rc = p2m_set_ioreq_server(d, flags, s);
>> +
>> +    if ( rc == 0 && flags == 0 )
>> +    {
>> +        const struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> +
>> +        if ( read_atomic(&p2m->ioreq.entry_count) )
>> +            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
>> +    }
>> +
>> +    return rc;
>> +}
>> +
>>   /*
>>    * Map or unmap an ioreq server to specific memory type. For now, only
>>    * HVMMEM_ioreq_server is supported, and in the future new types can be
>> @@ -1112,19 +1155,11 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
>>       if ( s->emulator != current->domain )
>>           goto out;
>>   
>> -    rc = p2m_set_ioreq_server(d, flags, s);
>> +    rc = arch_ioreq_server_map_mem_type(d, s, flags);
>>   
>>    out:
>>       spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>>   
>> -    if ( rc == 0 && flags == 0 )
>> -    {
>> -        struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> -
>> -        if ( read_atomic(&p2m->ioreq.entry_count) )
>> -            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
>> -    }
>> -
>>       return rc;
>>   }
> While you mention this change in the description, I'm still
> missing justification as to why the change is safe to make. I
> continue to think p2m_change_entry_type_global() would better
> not be called inside the locked region, if at all possible.
Well, I'm afraid I don't have a 100% justification that the change is
safe to make, but equally I don't see an obvious reason why it is not
safe (at least I didn't find a possible deadlock scenario while
investigating the code).
I raised a question earlier whether I could fold this check in, which
implied calling p2m_change_entry_type_global() with the ioreq_server
lock held.


If there is a concern with calling this inside the locked region
(which is unfortunately still unclear to me at the moment), I will try
to find another way to split hvm_map_mem_type_to_ioreq_server() without
this potentially unsafe change here *and* without exposing
p2m_change_entry_type_global() to the common code. Right now I don't
have any idea how this could be split other than introducing one more
hook here to deal with p2m_change_entry_type_global() (probably
arch_ioreq_server_map_mem_type_complete?), but I don't expect that to
be accepted.
I'd appreciate any ideas on this.
>
>> @@ -1239,33 +1279,28 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
>>       spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>>   }
>>   
>> -struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
>> -                                                 ioreq_t *p)
>> +int arch_ioreq_server_get_type_addr(const struct domain *d,
>> +                                    const ioreq_t *p,
>> +                                    uint8_t *type,
>> +                                    uint64_t *addr)
>>   {
>> -    struct hvm_ioreq_server *s;
>> -    uint32_t cf8;
>> -    uint8_t type;
>> -    uint64_t addr;
>> -    unsigned int id;
>> +    unsigned int cf8 = d->arch.hvm.pci_cf8;
>>   
>>       if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
>> -        return NULL;
>> -
>> -    cf8 = d->arch.hvm.pci_cf8;
>> +        return -EINVAL;
> The caller cares about only a boolean. Either make the function
> return bool, or (imo better, but others may like this less) have
> it return "type" instead of using indirection, using e.g.
> negative values to identify errors (which then could still be
> errno ones if you wish).

Makes sense. I will probably make the function return bool. Even if it
returned "type" we would still have an indirection for "addr".


>
>> --- a/xen/include/asm-x86/hvm/ioreq.h
>> +++ b/xen/include/asm-x86/hvm/ioreq.h
>> @@ -19,6 +19,25 @@
>>   #ifndef __ASM_X86_HVM_IOREQ_H__
>>   #define __ASM_X86_HVM_IOREQ_H__
>>   
>> +#define HANDLE_BUFIOREQ(s) \
>> +    ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
>> +
>> +bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion);
>> +int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s);
>> +void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s);
>> +void arch_ioreq_server_enable(struct hvm_ioreq_server *s);
>> +void arch_ioreq_server_disable(struct hvm_ioreq_server *s);
>> +void arch_ioreq_server_destroy(struct hvm_ioreq_server *s);
>> +int arch_ioreq_server_map_mem_type(struct domain *d,
>> +                                   struct hvm_ioreq_server *s,
>> +                                   uint32_t flags);
>> +bool arch_ioreq_server_destroy_all(struct domain *d);
>> +int arch_ioreq_server_get_type_addr(const struct domain *d,
>> +                                    const ioreq_t *p,
>> +                                    uint8_t *type,
>> +                                    uint64_t *addr);
>> +void arch_ioreq_domain_init(struct domain *d);
>> +
>>   bool hvm_io_pending(struct vcpu *v);
>>   bool handle_hvm_io_completion(struct vcpu *v);
>>   bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
> What's the plan here? Introduce them into the x86 header just
> to later move the entire block into the common one? Wouldn't
> it make sense to introduce the common header here right away?
> Or do you expect to convert some of the simpler ones to inline
> functions later on?
The former. The subsequent patch moves the entire block(s) from
here and from x86/hvm/ioreq.c to the common code in one go.
I thought it was a little bit odd to expose a header before exposing an
implementation to the common code. Another reason was to minimize the
places that need touching in the current patch.
After all, this is done within a single series and without breakage in
between. But if introducing the common header right away would make the
patch cleaner and more correct, I am absolutely OK with that and happy
to update the patch. Shall I?


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 15:28:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 15:28:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46715.82817 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIQs-0002Rq-BB; Mon, 07 Dec 2020 15:28:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46715.82817; Mon, 07 Dec 2020 15:28:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIQs-0002Re-87; Mon, 07 Dec 2020 15:28:10 +0000
Received: by outflank-mailman (input) for mailman id 46715;
 Mon, 07 Dec 2020 15:28:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmIQq-0002QU-Fn
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 15:28:08 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1e031b65-33ee-4af9-a51f-591451357b0d;
 Mon, 07 Dec 2020 15:28:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B6861ABE9;
 Mon,  7 Dec 2020 15:28:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1e031b65-33ee-4af9-a51f-591451357b0d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607354886; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JjN23h87VAmxj756ADXTtOqjPfVg45T7K9PwtwT1L+o=;
	b=WGYAfZnaY0j9dFXHlj3mNNmo4uGG5MytVOBM7BtHleS13j+HQOsNr7KH7cO18LXveGoKlY
	9aMTaK3EQK7gc+7qPbqnrHodlas8Bi7B6Vfwm7VATrZGiqrJl3dvyp9wxxsIg6PxSoev/9
	PfE6Q9ih+/IvCjLHkhGP2i/laqny9+Y=
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <lengyelt@ainfosec.com>,
 Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Tamas K Lengyel <tamas.k.lengyel@gmail.com>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d821c715-966a-b48b-a877-c5dac36822f0@suse.com>
 <17c90493-b438-fbc1-ca10-3bc4d89c4e5e@xen.org>
 <7a768bcd-80c1-d193-8796-7fb6720fa22a@suse.com>
 <1a8250f5-ea49-ac3a-e992-be7ec40deba9@xen.org>
 <CABfawhkQcUD4f62zpg0cyrdQgG82XtpYRZZ_-50hjagooT530A@mail.gmail.com>
 <5862eb24-d894-455a-13ac-61af54f949e7@xen.org>
 <CABfawhkWQiOhLL8f3NzoWbeuag-f+YOOK0i_LJzZq5Yvoh=oHQ@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <fd384990-376e-40f4-f0b8-1a889b3a0c51@suse.com>
Date: Mon, 7 Dec 2020 16:28:08 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <CABfawhkWQiOhLL8f3NzoWbeuag-f+YOOK0i_LJzZq5Yvoh=oHQ@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04.12.2020 20:15, Tamas K Lengyel wrote:
> On Fri, Dec 4, 2020 at 10:29 AM Julien Grall <julien@xen.org> wrote:
>> On 04/12/2020 15:21, Tamas K Lengyel wrote:
>>> On Fri, Dec 4, 2020 at 6:29 AM Julien Grall <julien@xen.org> wrote:
>>>> On 03/12/2020 10:09, Jan Beulich wrote:
>>>>> On 02.12.2020 22:10, Julien Grall wrote:
>>>>>> On 23/11/2020 13:30, Jan Beulich wrote:
>>>>>>> While there don't look to be any problems with this right now, the lock
>>>>>>> order implications from holding the lock can be very difficult to follow
>>>>>>> (and may be easy to violate unknowingly). The present callbacks don't
>>>>>>> (and no such callback should) have any need for the lock to be held.
>>>>>>>
>>>>>>> However, vm_event_disable() frees the structures used by respective
>>>>>>> callbacks and isn't otherwise synchronized with invocations of these
>>>>>>> callbacks, so maintain a count of in-progress calls, for evtchn_close()
>>>>>>> to wait to drop to zero before freeing the port (and dropping the lock).
>>>>>>
>>>>>> AFAICT, this callback is not the only place where the synchronization is
>>>>>> missing in the VM event code.
>>>>>>
>>>>>> For instance, vm_event_put_request() can also race against
>>>>>> vm_event_disable().
>>>>>>
>>>>>> So shouldn't we handle this issue properly in VM event?
>>>>>
>>>>> I suppose that's a question to the VM event folks rather than me?
>>>>
>>>> Yes. From my understanding of Tamas's e-mail, they are relying on the
>>>> monitoring software to do the right thing.
>>>>
>>>> I will refrain from commenting on this approach. However, given the
>>>> race is much wider than the event channel, I would recommend not
>>>> adding more code in the event channel to deal with such a problem.
>>>>
>>>> Instead, this should be fixed in the VM event code when someone has time
>>>> to harden the subsystem.
>>>
>>> I double-checked and the disable route is actually more robust; we
>>> don't just rely on the toolstack doing the right thing. The domain
>>> gets paused before any calls to vm_event_disable. So I don't think
>>> there is really a race condition here.
>>
>> The code will *only* pause the monitored domain. I can see two issues:
>>     1) The toolstack is still sending events while destroy is happening.
>> This is the race discussed here.
>>     2) The implementation of vm_event_put_request() suggests that it
>> can be called with a not-current domain.
>>
>> I don't see how just pausing the monitored domain is enough here.
> 
> Requests only get generated by the monitored domain. So if the domain
> is not running you won't get more of them. The toolstack can only send
> replies.

Julien,

does this change your view on the refcounting added by the patch
at the root of this sub-thread?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 15:28:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 15:28:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46716.82830 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIR4-0002Yb-M8; Mon, 07 Dec 2020 15:28:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46716.82830; Mon, 07 Dec 2020 15:28:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIR4-0002YU-IQ; Mon, 07 Dec 2020 15:28:22 +0000
Received: by outflank-mailman (input) for mailman id 46716;
 Mon, 07 Dec 2020 15:28:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>) id 1kmIR3-0002Y3-CH
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 15:28:21 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1kmIR3-0003PS-5q; Mon, 07 Dec 2020 15:28:21 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1kmIR2-00082U-Og; Mon, 07 Dec 2020 15:28:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Mime-Version:Content-Type:
	References:In-Reply-To:Date:Cc:To:From:Subject:Message-ID;
	bh=IIVArqVktZ4J/is/epXYf6uuXtAdhEZuJtiNizzZDP4=; b=0T9rxk/eybT3TFzNxETSqqujwc
	s4bRzM9ESsczoitZqVMv69P1T+X56wXFHehgFT4IwfmMEargOTIvAZYkEbmXWb1GQMGvuCoC7xjXq
	8sv+wy/n1ZR8YqSRgZVn8638GkIr6n7eohRT7Pil25e+FdOJ3lGMTbYV2u3WKeMNly74=;
Message-ID: <5a6560c6eeff1196f65e9856579b219f3b9c1ff7.camel@xen.org>
Subject: Re: [PATCH v8 03/15] x86/mm: rewrite virt_to_xen_l*e
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>,  Wei Liu <wl@xen.org>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Roger Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Date: Mon, 07 Dec 2020 15:28:20 +0000
In-Reply-To: <e7963f6d8cab8e4d5d4249b12a8175405d888bba.1595857947.git.hongyxia@amazon.com>
References: <cover.1595857947.git.hongyxia@amazon.com>
	 <e7963f6d8cab8e4d5d4249b12a8175405d888bba.1595857947.git.hongyxia@amazon.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit

On Mon, 2020-07-27 at 15:21 +0100, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> Rewrite those functions to use the new APIs. Modify its callers to
> unmap
> the pointer returned. Since alloc_xen_pagetable_new() is almost never
> useful unless accompanied by page clearing and a mapping, introduce a
> helper alloc_map_clear_xen_pt() for this sequence.
> 
> Note that the change of virt_to_xen_l1e() also requires vmap_to_mfn()
> to
> unmap the page, which requires domain_page.h header in vmap.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

I believe the vmap part can be removed since x86 now handles
superpages.

> @@ -5085,8 +5117,8 @@ int map_pages_to_xen(
>      unsigned int flags)
>  {
>      bool locking = system_state > SYS_STATE_boot;
> -    l3_pgentry_t *pl3e, ol3e;
> -    l2_pgentry_t *pl2e, ol2e;
> +    l3_pgentry_t *pl3e = NULL, ol3e;
> +    l2_pgentry_t *pl2e = NULL, ol2e;
>      l1_pgentry_t *pl1e, ol1e;
>      unsigned int  i;
>      int rc = -ENOMEM;
> @@ -5107,6 +5139,10 @@ int map_pages_to_xen(
>  
>      while ( nr_mfns != 0 )
>      {
> +        /* Clean up mappings mapped in the previous iteration. */
> +        UNMAP_DOMAIN_PAGE(pl3e);
> +        UNMAP_DOMAIN_PAGE(pl2e);
> +
>          pl3e = virt_to_xen_l3e(virt);
>  
>          if ( !pl3e )

While rebasing, I found another issue. XSA-345 now locks the L3 table
via L3T_LOCK(virt_to_page(pl3e)), but with this series we cannot call
virt_to_page() here, since pl3e is no longer a directmap address.

We could call domain_page_map_to_mfn() on pl3e to get the MFN back,
which should be fine since this function is rarely used outside of boot,
so the overhead should be low. We could also pass an mfn pointer in as
an additional argument, but do we want to change virt_to_xen_l[21]e()
in the same way for consistency (even though they don't need the MFN)?
I might also need to drop the R-b due to this non-trivial change.
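For the avoidance of doubt, the proposed sequence could look roughly
like this. Everything below (the types, domain_page_map_to_mfn()'s
behaviour, mfn_to_page(), l3t_lock()) is a stubbed stand-in so the
fragment is self-contained; the real signatures may differ:

```c
/* Sketch only: map the L3 table, recover its MFN from the mapping
 * (instead of virt_to_page(), which is invalid for a domheap mapping),
 * then lock the page that MFN names. All helpers are stubs. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t mfn_t;
struct page_info { bool l3t_locked; };

static struct page_info pages[16];

/* Stand-in: pretend bits 12+ of the VA encode the backing MFN. */
static mfn_t domain_page_map_to_mfn(const void *va)
{
    return ((uintptr_t)va >> 12) & 0xf;
}

static struct page_info *mfn_to_page(mfn_t mfn)
{
    return &pages[mfn];
}

static void l3t_lock(struct page_info *pg)
{
    pg->l3t_locked = true;
}

/* The sequence discussed above: lock the L3 table via its mapping. */
static mfn_t lock_l3_table(const void *pl3e)
{
    mfn_t mfn = domain_page_map_to_mfn(pl3e);

    l3t_lock(mfn_to_page(mfn));
    return mfn;
}
```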

Thoughts?

Hongyan



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 15:32:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 15:32:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46737.82842 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIUc-0003cs-7o; Mon, 07 Dec 2020 15:32:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46737.82842; Mon, 07 Dec 2020 15:32:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIUc-0003cl-4r; Mon, 07 Dec 2020 15:32:02 +0000
Received: by outflank-mailman (input) for mailman id 46737;
 Mon, 07 Dec 2020 15:32:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oN+h=FL=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kmIUa-0003cg-Ue
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 15:32:00 +0000
Received: from mail-wm1-f43.google.com (unknown [209.85.128.43])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7b288b6e-e77f-4fcb-9f1b-d16ab521d9ec;
 Mon, 07 Dec 2020 15:32:00 +0000 (UTC)
Received: by mail-wm1-f43.google.com with SMTP id a6so11809337wmc.2
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 07:32:00 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id m8sm14868395wmc.27.2020.12.07.07.31.57
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 07 Dec 2020 07:31:58 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7b288b6e-e77f-4fcb-9f1b-d16ab521d9ec
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=YBdYUAxEtU8X88CPXHzOic1oxoW8h4pv5pGl9fOdRuU=;
        b=rEVQphKYEY/zdArkXF0EKrDEPPg7T44SES4ek2irU0rMGfZd/xsMMkbUNNYnAPa/Wc
         iLuBld8D7p+V7EWJz9joz1F+MKFGJcvg1apffzEnZPWuW2X9JYy2AIZzMXMgA3lozSu6
         XJ3RZQnWrVPpTWrLNoz2MpjiAMVjQRd+Nyz4CkRUY+KdB7qZyUfmJO5yLlOBAq9TPRsc
         wmi5orF9GDsSCyth1dR6wr6AX3u/3kFXvZ+WHcq35f3dpyHHDyIA4GwMdcfHDzFZvYGZ
         eiW9ghMlyw3R3vcUgN7/9OtklvRheF0J6EjPHPy6QHq9oByzgzvvMjjBNHXqB028DSJK
         TduQ==
X-Gm-Message-State: AOAM531xy7ujyS4T58TnjR36pes1Mb/qnqqkht/cfdb3sZbIHvn3hMfW
	9NxI7y0bw82O3SLHPJ/2CHk=
X-Google-Smtp-Source: ABdhPJwoqYY+70wdFf4jeJGI9vENM16yvaSQTihIN3pxt71zU9sjwyvivqufcH2VA1Gj5McQpDgxYg==
X-Received: by 2002:a7b:c24b:: with SMTP id b11mr10360248wmj.168.1607355119285;
        Mon, 07 Dec 2020 07:31:59 -0800 (PST)
Date: Mon, 7 Dec 2020 15:31:56 +0000
From: Wei Liu <wl@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Olaf Hering <olaf@aepfle.de>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: Ping: [PATCH] libxenstat: avoid build race
Message-ID: <20201207153156.roxlvr5tcy7gkh2c@liuwe-devbox-debian-v2>
References: <273da46e-2a56-f53c-f137-f6fc411ad8db@suse.com>
 <6CDFEFFF-368E-467B-970A-4CDFA7978A2E@arm.com>
 <f2b92ab9-5bc2-79f8-8c28-2cf5d17f49e2@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <f2b92ab9-5bc2-79f8-8c28-2cf5d17f49e2@suse.com>
User-Agent: NeoMutt/20180716

On Mon, Dec 07, 2020 at 04:25:41PM +0100, Jan Beulich wrote:
> On 18.11.2020 15:27, Bertrand Marquis wrote:
> >> On 17 Nov 2020, at 09:42, Jan Beulich <jbeulich@suse.com> wrote:
> >>
> >> Olaf reported observing
> >>
> >> xenstat_qmp.c:26:10: fatal error: _paths.h: No such file or directory
> >> .../tools/libs/stat/../../../tools/Rules.mk:153: xenstat_qmp.opic] Error 1
> >>
> >> Obviously _paths.h, when included by any of the sources, needs to be
> >> created in advance of compiling any of them, not just the non-PIC ones.
> >>
> >> Reported-by: Olaf Hering <olaf@aepfle.de>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
> 
> Ping? (I guess this one is again simple enough that I should
> time out waiting by the middle of the week.)

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 15:33:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 15:33:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46741.82853 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIW6-0003lq-Kg; Mon, 07 Dec 2020 15:33:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46741.82853; Mon, 07 Dec 2020 15:33:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIW6-0003lj-Hc; Mon, 07 Dec 2020 15:33:34 +0000
Received: by outflank-mailman (input) for mailman id 46741;
 Mon, 07 Dec 2020 15:33:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oN+h=FL=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kmIW5-0003ld-50
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 15:33:33 +0000
Received: from mail-wm1-f65.google.com (unknown [209.85.128.65])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9c2e97f0-2e16-47dd-a0fb-bbd46a3922d5;
 Mon, 07 Dec 2020 15:33:32 +0000 (UTC)
Received: by mail-wm1-f65.google.com with SMTP id c198so11857988wmd.0
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 07:33:32 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id a18sm15233911wrr.20.2020.12.07.07.33.30
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 07 Dec 2020 07:33:31 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9c2e97f0-2e16-47dd-a0fb-bbd46a3922d5
Date: Mon, 7 Dec 2020 15:33:29 +0000
From: Wei Liu <wl@xen.org>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: andrew.cooper3@citrix.com, cardoe@cardoe.com, wl@xen.org,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 0/2] automation: arm64 dom0less smoke test
Message-ID: <20201207153329.fl4ovssvnhv3kwqc@liuwe-devbox-debian-v2>
References: <alpine.DEB.2.21.2011161927120.20906@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2011161927120.20906@sstabellini-ThinkPad-T480s>
User-Agent: NeoMutt/20180716

On Mon, Nov 16, 2020 at 08:21:18PM -0800, Stefano Stabellini wrote:
> Hi all,
> 
> This short series introduces a very simple Xen Dom0less smoke test based
> on qemu-system-aarch64 to be run as part of the gitlab CI-loop. It
> currently passes on staging.
> 
> Cheers,
> 
> Stefano
> 
> 
> Changes in v2:
> - use the Debian kernel for testing instead of building your own
> - fix x86_32 build
> 
> 
> Stefano Stabellini (2):
>       automation: add a QEMU aarch64 smoke test
>       automation: add dom0less to the QEMU aarch64 smoke test

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 15:37:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 15:37:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46751.82866 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIZh-0003x3-BS; Mon, 07 Dec 2020 15:37:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46751.82866; Mon, 07 Dec 2020 15:37:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIZh-0003ww-8H; Mon, 07 Dec 2020 15:37:17 +0000
Received: by outflank-mailman (input) for mailman id 46751;
 Mon, 07 Dec 2020 15:37:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=exIY=FL=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kmIZg-0003wr-NP
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 15:37:16 +0000
Received: from mail-lj1-x244.google.com (unknown [2a00:1450:4864:20::244])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 19782f3a-ab27-418f-87fe-ec93720d690b;
 Mon, 07 Dec 2020 15:37:16 +0000 (UTC)
Received: by mail-lj1-x244.google.com with SMTP id a1so14071705ljq.3
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 07:37:16 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id u8sm2931943lfb.133.2020.12.07.07.37.13
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 07 Dec 2020 07:37:14 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 19782f3a-ab27-418f-87fe-ec93720d690b
Subject: Re: [PATCH V3 02/23] x86/ioreq: Add IOREQ_STATUS_* #define-s and
 update code for moving
To: Jan Beulich <jbeulich@suse.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-3-git-send-email-olekstysh@gmail.com>
 <78632ed3-e4d4-7b4c-fd0a-504b6b26e78a@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <9d17171d-e858-3920-4f69-ca2e4584c20a@gmail.com>
Date: Mon, 7 Dec 2020 17:37:13 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <78632ed3-e4d4-7b4c-fd0a-504b6b26e78a@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 07.12.20 13:19, Jan Beulich wrote:

Hi Jan

> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>> --- a/xen/include/asm-x86/hvm/ioreq.h
>> +++ b/xen/include/asm-x86/hvm/ioreq.h
>> @@ -74,6 +74,10 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
>>   
>>   void hvm_ioreq_init(struct domain *d);
>>   
>> +#define IOREQ_STATUS_HANDLED     X86EMUL_OKAY
>> +#define IOREQ_STATUS_UNHANDLED   X86EMUL_UNHANDLEABLE
>> +#define IOREQ_STATUS_RETRY       X86EMUL_RETRY
> This correlation may not be altered. I think a comment is needed
> to this effect, to avoid someone trying to subsequently fold the
> x86 and (to be introduced) Arm ones.

ok, will add.
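For illustration, the requested comment could take roughly this shape.
The wording is my own suggestion; only the IOREQ_STATUS_* values come
from the quoted hunk, and the X86EMUL_* values are stand-ins here so the
fragment is self-contained:

```c
/* Sketch; X86EMUL_* values below are stand-ins for the real codes. */
#define X86EMUL_OKAY          0
#define X86EMUL_UNHANDLEABLE  1
#define X86EMUL_RETRY         2

/*
 * This 1:1 correlation with the X86EMUL_* values may not be altered:
 * common ioreq code relies on it, so do not try to fold these with the
 * Arm equivalents.
 */
#define IOREQ_STATUS_HANDLED     X86EMUL_OKAY
#define IOREQ_STATUS_UNHANDLED   X86EMUL_UNHANDLEABLE
#define IOREQ_STATUS_RETRY       X86EMUL_RETRY
```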


> With that
> Acked-by: Jan Beulich <jbeulich@suse.com>

Thank you.


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 15:39:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 15:39:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46756.82878 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIby-00047a-Qg; Mon, 07 Dec 2020 15:39:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46756.82878; Mon, 07 Dec 2020 15:39:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIby-00047T-Lo; Mon, 07 Dec 2020 15:39:38 +0000
Received: by outflank-mailman (input) for mailman id 46756;
 Mon, 07 Dec 2020 15:39:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=exIY=FL=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kmIbx-00047O-PM
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 15:39:37 +0000
Received: from mail-lf1-x144.google.com (unknown [2a00:1450:4864:20::144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a979d995-7a8b-4c4f-b09a-f4d4cc27bccc;
 Mon, 07 Dec 2020 15:39:36 +0000 (UTC)
Received: by mail-lf1-x144.google.com with SMTP id u19so18736448lfr.7
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 07:39:36 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id m16sm2901440lfa.57.2020.12.07.07.39.34
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 07 Dec 2020 07:39:35 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a979d995-7a8b-4c4f-b09a-f4d4cc27bccc
Subject: Re: [PATCH V3 03/23] x86/ioreq: Provide out-of-line wrapper for the
 handle_mmio()
To: Jan Beulich <jbeulich@suse.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-4-git-send-email-olekstysh@gmail.com>
 <a12313a8-4e74-02a0-f849-72c6ed9b6161@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <ec151ff1-f1e0-4978-69ef-29b62ab71aaf@gmail.com>
Date: Mon, 7 Dec 2020 17:39:34 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <a12313a8-4e74-02a0-f849-72c6ed9b6161@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 07.12.20 13:27, Jan Beulich wrote:

Hi Jan

> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>> --- a/xen/arch/x86/hvm/ioreq.c
>> +++ b/xen/arch/x86/hvm/ioreq.c
>> @@ -36,6 +36,11 @@
>>   #include <public/hvm/ioreq.h>
>>   #include <public/hvm/params.h>
>>   
>> +bool ioreq_complete_mmio(void)
>> +{
>> +    return handle_mmio();
>> +}
> As indicated before I don't like out-of-line functions like this
> one; I think a #define would be quite fine here, but Paul as the
> maintainer thinks differently. So be it. However, shouldn't this
> function be named arch_ioreq_complete_mmio() according to the
> new naming model, and then ...
>
>> --- a/xen/include/asm-x86/hvm/ioreq.h
>> +++ b/xen/include/asm-x86/hvm/ioreq.h
>> @@ -74,6 +74,8 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
>>   
>>   void hvm_ioreq_init(struct domain *d);
>>   
>> +bool ioreq_complete_mmio(void);
> ... get declared next to the other arch_*() hooks? With this

sounds reasonable, will update
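So the updated version might look something like this — handle_mmio()
is stubbed below purely so the sketch is self-contained, and the real
declaration would sit next to the other arch_*() hooks in the header:

```c
#include <stdbool.h>

/* Stub standing in for x86's handle_mmio(); not the real thing. */
static bool handle_mmio(void)
{
    return true;
}

/* Renamed per review to follow the new arch_*() naming model. */
bool arch_ioreq_complete_mmio(void)
{
    return handle_mmio();
}
```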


> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Thank you

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 15:40:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 15:40:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46762.82890 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIcw-0004to-3T; Mon, 07 Dec 2020 15:40:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46762.82890; Mon, 07 Dec 2020 15:40:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIcw-0004th-0L; Mon, 07 Dec 2020 15:40:38 +0000
Received: by outflank-mailman (input) for mailman id 46762;
 Mon, 07 Dec 2020 15:40:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oN+h=FL=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kmIcu-0004tb-DN
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 15:40:36 +0000
Received: from mail-wm1-f67.google.com (unknown [209.85.128.67])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5ee1d84f-0483-4d84-84e8-5ac2c563014f;
 Mon, 07 Dec 2020 15:40:35 +0000 (UTC)
Received: by mail-wm1-f67.google.com with SMTP id k10so11832922wmi.3
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 07:40:35 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id 90sm15771278wrl.60.2020.12.07.07.40.34
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 07 Dec 2020 07:40:34 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5ee1d84f-0483-4d84-84e8-5ac2c563014f
Date: Mon, 7 Dec 2020 15:40:33 +0000
From: Wei Liu <wl@xen.org>
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3] docs: fix documentation about default scheduler
Message-ID: <20201207154033.t3bexjrvx5vrv2n4@liuwe-devbox-debian-v2>
References: <20201117093258.26754-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201117093258.26754-1-roger.pau@citrix.com>
User-Agent: NeoMutt/20180716

On Tue, Nov 17, 2020 at 10:32:58AM +0100, Roger Pau Monne wrote:
> Fix the command line document to account for the default scheduler in
> Kconfig being credit2 now, and the fact that it's selectable at build
> time and thus different builds could end up with different default
> schedulers.
> 
> Fixes: dafd936dddbd ('Make credit2 the default scheduler')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Well, I have not seen explicit agreement on the text in the v1 thread,
and I don't have v2 in my inbox, but this patch is definitely an
improvement over the old text in that it names the correct scheduler. In
the interest of making progress:

Acked-by: Wei Liu <wl@xen.org>

Wei.


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 15:42:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 15:42:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46767.82901 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIel-0005Bt-F0; Mon, 07 Dec 2020 15:42:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46767.82901; Mon, 07 Dec 2020 15:42:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIel-0005Bm-C7; Mon, 07 Dec 2020 15:42:31 +0000
Received: by outflank-mailman (input) for mailman id 46767;
 Mon, 07 Dec 2020 15:42:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oN+h=FL=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kmIek-0005Bh-C7
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 15:42:30 +0000
Received: from mail-wm1-f68.google.com (unknown [209.85.128.68])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 897a60ae-c738-44bb-ab66-f26fd3337c03;
 Mon, 07 Dec 2020 15:42:29 +0000 (UTC)
Received: by mail-wm1-f68.google.com with SMTP id e25so14114184wme.0
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 07:42:29 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id z3sm10343264wrn.59.2020.12.07.07.42.27
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 07 Dec 2020 07:42:28 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 897a60ae-c738-44bb-ab66-f26fd3337c03
Date: Mon, 7 Dec 2020 15:42:26 +0000
From: Wei Liu <wl@xen.org>
To: Edwin =?utf-8?B?VMO2csO2aw==?= <edvin.torok@citrix.com>
Cc: xen-devel@lists.xenproject.org,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>
Subject: Re: [PATCH v1 3/4] Makefile: add build-tools-oxenstored
Message-ID: <20201207154226.b5sltvbugxm4psal@liuwe-devbox-debian-v2>
References: <cover.1605636799.git.edvin.torok@citrix.com>
 <516274ccf7ce5958251fa36b1bd63b3216937b8b.1605636800.git.edvin.torok@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <516274ccf7ce5958251fa36b1bd63b3216937b8b.1605636800.git.edvin.torok@citrix.com>
User-Agent: NeoMutt/20180716

On Tue, Nov 17, 2020 at 06:24:11PM +0000, Edwin Török wrote:
> As a convenience so that oxenstored patches can be compile-tested
> using upstream's build-system before submitting upstream.
> 
> Signed-off-by: Edwin Török <edvin.torok@citrix.com>

Acked-by: Wei Liu <wl@xen.org>

Seeing that there are still pending comments from Andrew, I won't commit
this series any time soon, despite Christian and Doug having acked this
series.

FAOD, Andrew, feel free to commit these patches once the comments are
addressed.

Wei.


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 15:46:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 15:46:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46775.82914 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIiJ-0005N8-Uu; Mon, 07 Dec 2020 15:46:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46775.82914; Mon, 07 Dec 2020 15:46:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIiJ-0005N1-S1; Mon, 07 Dec 2020 15:46:11 +0000
Received: by outflank-mailman (input) for mailman id 46775;
 Mon, 07 Dec 2020 15:46:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oN+h=FL=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kmIiH-0005Mw-Tf
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 15:46:09 +0000
Received: from mail-wm1-f53.google.com (unknown [209.85.128.53])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 14b4345b-a6f6-43ad-8ec0-a0b6dc3a8a44;
 Mon, 07 Dec 2020 15:46:09 +0000 (UTC)
Received: by mail-wm1-f53.google.com with SMTP id c198so11892149wmd.0
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 07:46:09 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id q1sm15426803wrj.8.2020.12.07.07.46.07
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 07 Dec 2020 07:46:07 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 14b4345b-a6f6-43ad-8ec0-a0b6dc3a8a44
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=gZqI8/dnzfZY7IWPMn40GMkW9Jdbsk89hKKk2ZqlQfc=;
        b=dbsuQAMHFllzAuGY5kXp4/PZb1quF2Q3zhCy42IpX5u2cxE7ermvqLWKS81kZ2aSdh
         liEx5yotj7PpyDe9ZvX0QzrR5f+BoME8JR963DX6RQwEHxQAXgIOR0MEXq0ezoBDoHUr
         GLS/EMdV+6VFbmdGvVr2EUoBDqJucVibKmgTYIyJOAlDa8YNWEs4eQJDskwL/NBS7WfZ
         1KyO16UILTJ3LftvjgYqm5Fegir/9hmKR68Wnn3dvQErUegLI4digNMZnKRU4XUmkdpm
         DXJvm8XDlwN9nx5lTzlXU9VjW/ZwiO1jS70TfUm4HAxa5Z1jvv87T1bC/ixHqEDYebkw
         B2zg==
X-Gm-Message-State: AOAM533K7bF6GUvckYcBJj1+mVZfKa6ZBWKi6MmYRjH6QE/w+bmouAzT
	/7wVYO5b0CStSVLx59hGuu4=
X-Google-Smtp-Source: ABdhPJzp8m4FwJnbkhMLbgL+XavhU7Nn8RozoEfM5ahyfGXcbvFUbeNjRq9BOyxI1apQpBRwNEvG2w==
X-Received: by 2002:a1c:a185:: with SMTP id k127mr20131908wme.23.1607355968383;
        Mon, 07 Dec 2020 07:46:08 -0800 (PST)
Date: Mon, 7 Dec 2020 15:46:06 +0000
From: Wei Liu <wl@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 0/2] ns16550: #ifdef-ary
Message-ID: <20201207154606.xd6ivom5jnzzdoui@liuwe-devbox-debian-v2>
References: <b74ba81a-da34-1e9a-9a15-f9dbb6005de8@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <b74ba81a-da34-1e9a-9a15-f9dbb6005de8@suse.com>
User-Agent: NeoMutt/20180716

On Thu, Nov 19, 2020 at 09:54:58AM +0100, Jan Beulich wrote:
> 1: "com<N>=" command line options are x86-specific
> 2: drop stray "#ifdef CONFIG_HAS_PCI"
> 

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 15:51:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 15:51:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46784.82925 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIn0-0006Nd-HO; Mon, 07 Dec 2020 15:51:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46784.82925; Mon, 07 Dec 2020 15:51:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIn0-0006NW-E9; Mon, 07 Dec 2020 15:51:02 +0000
Received: by outflank-mailman (input) for mailman id 46784;
 Mon, 07 Dec 2020 15:51:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oN+h=FL=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kmImz-0006NR-34
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 15:51:01 +0000
Received: from mail-wm1-f68.google.com (unknown [209.85.128.68])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9f1544ce-9ed7-456a-896c-4bee5f695aa0;
 Mon, 07 Dec 2020 15:51:00 +0000 (UTC)
Received: by mail-wm1-f68.google.com with SMTP id 3so14049897wmg.4
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 07:51:00 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id c9sm10284253wrn.65.2020.12.07.07.50.59
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 07 Dec 2020 07:50:59 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f1544ce-9ed7-456a-896c-4bee5f695aa0
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=CVSlm2qIah8jgTFZMtp/c4oNrq3s/9fEez8UnRCzsCo=;
        b=iquGDs8ZqrRvUHmvLY+uVES9Fo2EVdQplwhk+QbHCIsdk+26xVX5ErHZyBjjUKyFBh
         r1Tc7dWkomf9lMtba2SWEkNqG5ZupZqjPnbKGnQ0S37qVNhqUmK81ro5qvKetQA3RaOo
         +DW9Lme4FEgFqFOSw4fAOJ+1G32+Ao4uV3N91rv0bwrJO9z8wz385RqfSpQkAzrmnuWJ
         uPpI+fc/F3MuM8FIYAQX+m6RO5jjkI6QGUa9hVZZA43pnKSFJUPGJWHU1ZeRdTT8mk+T
         XRoygx2wiGdzkVzypHdRvZofHjOFqaeKz1YpuwlgRqEISYzLHBuzv7G+Q4wH9PxCoDkv
         mR0g==
X-Gm-Message-State: AOAM532Qs9380vOTHWfMsCuVxxv6l7rbXoT3zsos0rRV1bKwkuJfwh85
	AFt1ghZV90Jmc4R+LG56jZnAC7JH/w4=
X-Google-Smtp-Source: ABdhPJyybxFFv3N/RuqRuoTjg7SuQ7VVFfW9DJkZ9h8SdBARS4vPGtunrfMDE1JfLKYWEe40tswRYg==
X-Received: by 2002:a1c:4c14:: with SMTP id z20mr19229359wmf.149.1607356259675;
        Mon, 07 Dec 2020 07:50:59 -0800 (PST)
Date: Mon, 7 Dec 2020 15:50:57 +0000
From: Wei Liu <wl@xen.org>
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH v1] docs: remove stale README.incompatibilities
Message-ID: <20201207155057.fbjaz66auyxewrl6@liuwe-devbox-debian-v2>
References: <20200909105213.23112-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200909105213.23112-1-olaf@aepfle.de>
User-Agent: NeoMutt/20180716

On Wed, Sep 09, 2020 at 12:52:13PM +0200, Olaf Hering wrote:
> It mentions only stale and obsolete distributions.
> They have not been suitable for building current Xen for a couple of years.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked + applied.

Wei.


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 16:01:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 16:01:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46800.82950 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIxQ-00086S-Mq; Mon, 07 Dec 2020 16:01:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46800.82950; Mon, 07 Dec 2020 16:01:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmIxQ-00086L-Jt; Mon, 07 Dec 2020 16:01:48 +0000
Received: by outflank-mailman (input) for mailman id 46800;
 Mon, 07 Dec 2020 16:01:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmIxP-00086C-6y; Mon, 07 Dec 2020 16:01:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmIxP-0004hE-15; Mon, 07 Dec 2020 16:01:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmIxO-000162-OJ; Mon, 07 Dec 2020 16:01:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kmIxO-0002kM-Np; Mon, 07 Dec 2020 16:01:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=HwKoxa5wEPSDv+w6UrANFvwuyYFWthd1iwNnl6387YQ=; b=hb7TCU9qdwVzEIcnufGdLPUbdE
	XnDvl6JQ6LfNY+HIEjZ/w74EmaI6fJxssdRoAZBODisPJLvfee1TW3rv16gCzn8AVj6v2WtxC2+oW
	iaOb9xwmV9Movcl0Z6/ecHswwFPNDM7sO2c92v0Wn/vq7lQZW3IeI1YnOiWs5QM9E0y0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157253-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157253: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-localmigrate/x10:fail:heisenbug
    qemu-mainline:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 07 Dec 2020 16:01:46 +0000

flight 157253 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157253/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 18 guest-localmigrate/x10 fail in 157245 pass in 157253
 test-arm64-arm64-xl-credit2   8 xen-boot                   fail pass in 157245

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2 15 migrate-support-check fail in 157245 never pass
 test-arm64-arm64-xl-credit2 16 saverestore-support-check fail in 157245 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  109 days
Failing since        152659  2020-08-21 14:07:39 Z  108 days  225 attempts
Testing same since   157142  2020-12-01 20:39:57 Z    5 days   11 attempts

------------------------------------------------------------
People who touched revisions under test:
    Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69355 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 16:18:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 16:18:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46818.82968 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmJDS-0000vq-Aa; Mon, 07 Dec 2020 16:18:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46818.82968; Mon, 07 Dec 2020 16:18:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmJDS-0000vj-7Z; Mon, 07 Dec 2020 16:18:22 +0000
Received: by outflank-mailman (input) for mailman id 46818;
 Mon, 07 Dec 2020 16:18:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e0y+=FL=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kmJDQ-0000ve-8P
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 16:18:20 +0000
Received: from mail-wm1-x32c.google.com (unknown [2a00:1450:4864:20::32c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c7f94a67-4857-4deb-9252-1b8af9315cb6;
 Mon, 07 Dec 2020 16:18:19 +0000 (UTC)
Received: by mail-wm1-x32c.google.com with SMTP id g25so227067wmh.1
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 08:18:19 -0800 (PST)
Received: from CBGR90WXYV0 (host86-183-162-145.range86-183.btcentralplus.com.
 [86.183.162.145])
 by smtp.gmail.com with ESMTPSA id d13sm6289418wrn.90.2020.12.07.08.18.16
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 07 Dec 2020 08:18:17 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c7f94a67-4857-4deb-9252-1b8af9315cb6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=dhkLA5SQdwKofqUPxChTpwY9t7sFcnx3FYpG2NJ9IyI=;
        b=VgfYKCm/rZ0Wt0ILjg7IBNEj58HxqAM/BtVxgLVTi/d2heC0EYpJTJ6Pc3qAjVDyx/
         5K2U6OoWnnMknjcUS+U95Cwj9wgVKczhQtbL0CNWsapctSq7VluhTIoOBA0ngUu51mka
         0DcVPCLsvpk0kHzUwcAZ3tZ1bkhe6ZwB9ZR443+IMnQNH4k3LONMSxNVzadl4ot/mA6p
         UsU2/EeaKNVkWCwtCmXqGmpUeUnLzZ2iRWlKFS/p07a4SiBPfNBwZFH0WNnSpNogM6xO
         FtjWoRQ1lh0UwHJA2WsRc+KMZh71/cx1fTj1FjUIG1IQL3jV1LeDemDPAE70eTYm86hX
         bcVw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=dhkLA5SQdwKofqUPxChTpwY9t7sFcnx3FYpG2NJ9IyI=;
        b=SaZrp2GXuTjij0VuBwwl8F5N2AzaI4APC9BLhUEHCxsQphLmeadgGk+sW+vkQB4V+J
         ziY7TMjf/Hh6I08QH/8UdOU3GMU3vR7Qjt1YDHgq58oYsgX2bjvOIyMPCY9JGFAudCMI
         LdHLxuiZCkKrw7LPypkJp0z6brAT7ZDSG7ZXIyevb+FnKBgmbN96wAAgRcOqG19PRGw8
         dd5bOBM2gYPrtpdDZYeN73PKk2JhO/CCGTG9nm5LgTuEeZ9fsOt5rzD/aV85rrB80H3l
         3CXmcIO0PEl3WABGqiLDMkSB3RLgc/gjMNiwgLrs8sOE+a4xqmYyKOv4ZKwy9xGL1zYV
         /UEw==
X-Gm-Message-State: AOAM533tSQUDcyb4k9lG+4i1dmHrPG+qC8667j7WEd9RIlvNnVKU882Y
	hPlnfc3wdG3gSDYFqx5127w=
X-Google-Smtp-Source: ABdhPJx3km4y6Ptxbo1gRtmUp3RQWIpNpjW7YVghKYqRYc32VogPWecdNa/CIIhbTR9xlsBk6+cVag==
X-Received: by 2002:a1c:6056:: with SMTP id u83mr18884717wmb.90.1607357898220;
        Mon, 07 Dec 2020 08:18:18 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: <paul@xen.org>,
	"'Wei Liu'" <wl@xen.org>
Cc: <xen-devel@lists.xenproject.org>,
	"'Paul Durrant'" <pdurrant@amazon.com>,
	"'Oleksandr Andrushchenko'" <oleksandr_andrushchenko@epam.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Anthony PERARD'" <anthony.perard@citrix.com>
References: <20201203142534.4017-1-paul@xen.org> <20201203142534.4017-2-paul@xen.org> <20201204111326.5pxgqertdm3tk7y2@liuwe-devbox-debian-v2> <013d01d6ca2f$605fe7e0$211fb7a0$@xen.org> <20201204112141.wdwb54brb23x2bgs@liuwe-devbox-debian-v2> <014701d6ca2f$e414f260$ac3ed720$@xen.org>
In-Reply-To: <014701d6ca2f$e414f260$ac3ed720$@xen.org>
Subject: RE: [PATCH v5 01/23] xl / libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
Date: Mon, 7 Dec 2020 16:18:16 -0000
Message-ID: <0d2701d6ccb4$9251c3e0$b6f54ba0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQLNnXLs2pPLw4H6lzH6w2mINWFu0wFb+WecAj+9ygIB0uemdAExrIbdAi9lKOCnt67fAA==

> -----Original Message-----
[snip]
> > > >
> > > > This is going to break libxl callers because the name "pcidev" is
> > > > visible from the public header.
> > > >
> > > > I agree this is confusing and inconsistent, but we didn't go extra
> > > > length to maintain the inconsistency for no reason.
> > > >
> > > > If you really want to change it, I won't stand in the way. In fact, I'm
> > > > all for consistency. I think the flag you added should help alleviate
> > > > the fallout.
> > >
> > > Yes, I thought that was the idea... we can make API changes if we add a flag. I could see about
> > > adding shims to translate the names
> > > and keep the internal code clean.
> >
> > Yes if you can add some internal shims to handle it that would be
> > great. Otherwise you will need to at least fix libvirt.
> >
> 
> I think shims are safest. We don't know what other callers are lurking out there :-)
> 

Wei,

Looking at this again, the only mentions of 'pcidev' in the public header that I can see are argument names in function
prototypes, modified in the following hunks.

@@ -2307,15 +2314,15 @@ int libxl_device_pvcallsif_destroy(libxl_ctx *ctx, uint32_t domid,

 /* PCI Passthrough */
 int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
-                         libxl_device_pci *pcidev,
+                         libxl_device_pci *pci,
                          const libxl_asyncop_how *ao_how)
                          LIBXL_EXTERNAL_CALLERS_ONLY;
 int libxl_device_pci_remove(libxl_ctx *ctx, uint32_t domid,
-                            libxl_device_pci *pcidev,
+                            libxl_device_pci *pci,
                             const libxl_asyncop_how *ao_how)
                             LIBXL_EXTERNAL_CALLERS_ONLY;
 int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
-                             libxl_device_pci *pcidev,
+                             libxl_device_pci *pci,
                              const libxl_asyncop_how *ao_how)
                              LIBXL_EXTERNAL_CALLERS_ONLY;

@@ -2359,8 +2366,8 @@ int libxl_device_events_handler(libxl_ctx *ctx,
  * added or is not bound, the functions will emit a warning but return
  * SUCCESS.
  */
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pcidev, int rebind);
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pcidev, int rebind);
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
 libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);

 /* CPUID handling */

I can't see how renaming these will break anything. The type name (which is what I thought I'd changed) actually remains the same.
The main changes are in the libxl__device_type structure but AFAICT that is not publicly visible. Am I missing something?

  Paul



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 16:23:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 16:23:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46823.82981 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmJIG-0001wv-UI; Mon, 07 Dec 2020 16:23:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46823.82981; Mon, 07 Dec 2020 16:23:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmJIG-0001wo-RJ; Mon, 07 Dec 2020 16:23:20 +0000
Received: by outflank-mailman (input) for mailman id 46823;
 Mon, 07 Dec 2020 16:23:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=og4e=FL=amazon.co.uk=prvs=60380b542=pdurrant@srs-us1.protection.inumbo.net>)
 id 1kmJIG-0001wj-4H
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 16:23:20 +0000
Received: from smtp-fw-6001.amazon.com (unknown [52.95.48.154])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 76fd9cc1-e8bf-4255-8533-533e0c12770b;
 Mon, 07 Dec 2020 16:23:18 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-2b-4ff6265a.us-west-2.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-6001.iad6.amazon.com with ESMTP;
 07 Dec 2020 16:23:10 +0000
Received: from EX13D32EUC003.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan3.pdx.amazon.com [10.236.137.198])
 by email-inbound-relay-2b-4ff6265a.us-west-2.amazon.com (Postfix) with ESMTPS
 id ABFB1A18D5; Mon,  7 Dec 2020 16:23:09 +0000 (UTC)
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC003.ant.amazon.com (10.43.164.24) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Mon, 7 Dec 2020 16:23:08 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Mon, 7 Dec 2020 16:23:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 76fd9cc1-e8bf-4255-8533-533e0c12770b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
  s=amazon201209; t=1607358199; x=1638894199;
  h=from:to:cc:date:message-id:references:in-reply-to:
   content-transfer-encoding:mime-version:subject;
  bh=M5CQ43J06hArqDloXOWuQ8P6al+1L6FLc+2jHKf0F/o=;
  b=GUtV5HmWq/vHgMxUAcmfBLAbUfv+d42a9Ea2MN2q49FnUMLmoUpKuztQ
   eXIHPQFT9zaoCtv7+wIltjpiJpK55d9Gk4KBycWyuKjsENKEc9wHzZbu+
   R45cEvVxO1eCs3AjVvxpzEcB4qi9qCWk36qkT0NlpCRLlGzY9zvc7cJnm
   M=;
X-IronPort-AV: E=Sophos;i="5.78,400,1599523200"; 
   d="scan'208";a="71039094"
Subject: RE: [PATCH v5 01/23] xl / libxl: s/pcidev/pci and remove
 DEFINE_DEVICE_TYPE_STRUCT_X
Thread-Topic: [PATCH v5 01/23] xl / libxl: s/pcidev/pci and remove
 DEFINE_DEVICE_TYPE_STRUCT_X
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: "paul@xen.org" <paul@xen.org>, 'Wei Liu' <wl@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	'Oleksandr Andrushchenko' <oleksandr_andrushchenko@epam.com>, 'Ian Jackson'
	<iwj@xenproject.org>, 'Anthony PERARD' <anthony.perard@citrix.com>
Thread-Index: AQLNnXLs2pPLw4H6lzH6w2mINWFu0wFb+WecAj+9ygIB0uemdAExrIbdAi9lKOCnt67fAIAAAnZw
Date: Mon, 7 Dec 2020 16:23:08 +0000
Message-ID: <7300f5ebe6a6491e9aea02b7276dd9cf@EX13D32EUC003.ant.amazon.com>
References: <20201203142534.4017-1-paul@xen.org>
 <20201203142534.4017-2-paul@xen.org>
 <20201204111326.5pxgqertdm3tk7y2@liuwe-devbox-debian-v2>
 <013d01d6ca2f$605fe7e0$211fb7a0$@xen.org>
 <20201204112141.wdwb54brb23x2bgs@liuwe-devbox-debian-v2>
 <014701d6ca2f$e414f260$ac3ed720$@xen.org>
 <0d2701d6ccb4$9251c3e0$b6f54ba0$@xen.org>
In-Reply-To: <0d2701d6ccb4$9251c3e0$b6f54ba0$@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.166.209]
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Precedence: Bulk

> -----Original Message-----
> From: Paul Durrant <xadimgnik@gmail.com>
> Sent: 07 December 2020 16:18
> To: paul@xen.org; 'Wei Liu' <wl@xen.org>
> Cc: xen-devel@lists.xenproject.org; Durrant, Paul <pdurrant@amazon.co.uk>; 'Oleksandr Andrushchenko'
> <oleksandr_andrushchenko@epam.com>; 'Ian Jackson' <iwj@xenproject.org>; 'Anthony PERARD'
> <anthony.perard@citrix.com>
> Subject: RE: [EXTERNAL] [PATCH v5 01/23] xl / libxl: s/pcidev/pci and remove
> DEFINE_DEVICE_TYPE_STRUCT_X
>
> CAUTION: This email originated from outside of the organization. Do not click links or open
> attachments unless you can confirm the sender and know the content is safe.
>
>
>
> > -----Original Message-----
> [snip]
> > > > >
> > > > > This is going to break libxl callers because the name "pcidev" is
> > > > > visible from the public header.
> > > > >
> > > > > I agree this is confusing and inconsistent, but we didn't go extra
> > > > > length to maintain the inconsistency for no reason.
> > > > >
> > > > > If you really want to change it, I won't stand in the way. In fact, I'm
> > > > > all for consistency. I think the flag you added should help alleviate
> > > > > the fallout.
> > > >
> > > > Yes, I thought that was the idea... we can make API changes if we add a flag. I could see about
> > > adding shims to translate the names
> > > > and keep the internal code clean.
> > >
> > > Yes if you can add some internal shims to handle it that would be
> > > great. Otherwise you will need to at least fix libvirt.
> > >
> >
> > I think shims are safest. We don't know what other callers are lurking out there :-)
> >
>
> Wei,
>
> Looking at this again; the only mentions of 'pcidev' in the public header that I can see are in
> argument names in function
> prototypes, modified in the following hunks.
>
> @@ -2307,15 +2314,15 @@ int libxl_device_pvcallsif_destroy(libxl_ctx *ctx, uint32_t domid,
>
>  /* PCI Passthrough */
>  int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
> -                         libxl_device_pci *pcidev,
> +                         libxl_device_pci *pci,
>                           const libxl_asyncop_how *ao_how)
>                           LIBXL_EXTERNAL_CALLERS_ONLY;
>  int libxl_device_pci_remove(libxl_ctx *ctx, uint32_t domid,
> -                            libxl_device_pci *pcidev,
> +                            libxl_device_pci *pci,
>                              const libxl_asyncop_how *ao_how)
>                              LIBXL_EXTERNAL_CALLERS_ONLY;
>  int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
> -                             libxl_device_pci *pcidev,
> +                             libxl_device_pci *pci,
>                               const libxl_asyncop_how *ao_how)
>                               LIBXL_EXTERNAL_CALLERS_ONLY;
>
> @@ -2359,8 +2366,8 @@ int libxl_device_events_handler(libxl_ctx *ctx,
>   * added or is not bound, the functions will emit a warning but return
>   * SUCCESS.
>   */
> -int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pcidev, int rebind);
> -int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pcidev, int rebind);
> +int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
> +int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
>  libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
>
>  /* CPUID handling */
>
> I can't see how renaming these will break anything. The type name (which is what I thought I'd
> changed) actually remains the same.
> The main changes are in the libxl__device_type structure but AFAICT that is not publicly visible. Am I
> missing something?

Oh NM... I see the direct use of the domain_config field names lower down. I guess I can probably leave those names alone.

  Paul

>
>   Paul



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 16:29:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 16:29:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46830.82992 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmJNz-00029l-JZ; Mon, 07 Dec 2020 16:29:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46830.82992; Mon, 07 Dec 2020 16:29:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmJNz-00029e-Gb; Mon, 07 Dec 2020 16:29:15 +0000
Received: by outflank-mailman (input) for mailman id 46830;
 Mon, 07 Dec 2020 16:29:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3bhA=FL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmJNy-00029Z-R3
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 16:29:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9ef093d0-fa7b-49b4-ad86-2b5226dab19f;
 Mon, 07 Dec 2020 16:29:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0BE89ADCD;
 Mon,  7 Dec 2020 16:29:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ef093d0-fa7b-49b4-ad86-2b5226dab19f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607358552; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=cuDNFLNOZxKVULJwkXmi28LdVyrLgNpGSQ7VYcZCdKY=;
	b=Dp3GlhExFUYZATcyjAakOAH/uLdqBLto4qAuwOi7+G89+nlBYPzxcRS0dqK4JNGuk2l085
	t8lwRe7t5wFji7h5JTPB1IF8k+GMjJAyA1WrYO05/SqXLS/RUagUR/M675G9GkCyeO5Fbr
	hpPtQHRUjh0q3WP1lYPMg111VR02Jfw=
Subject: Re: [PATCH V3 01/23] x86/ioreq: Prepare IOREQ feature for making it
 common
To: Oleksandr <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-2-git-send-email-olekstysh@gmail.com>
 <51a5c06f-e6ce-c651-2fd2-352aaa591fb1@suse.com>
 <029c3dcc-fac2-5b65-703e-5d821335f2a0@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2d83a093-29d3-5870-0814-229cc7f1c04b@suse.com>
Date: Mon, 7 Dec 2020 17:29:14 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <029c3dcc-fac2-5b65-703e-5d821335f2a0@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 07.12.2020 16:27, Oleksandr wrote:
> On 07.12.20 13:13, Jan Beulich wrote:
>> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>>> @@ -601,7 +610,7 @@ static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s)
>>>       return rc;
>>>   }
>>>   
>>> -static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
>>> +void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
>>>   {
>>>       hvm_unmap_ioreq_gfn(s, true);
>>>       hvm_unmap_ioreq_gfn(s, false);
>> How is this now different from ...
>>
>>> @@ -674,6 +683,12 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
>>>       return rc;
>>>   }
>>>   
>>> +void arch_ioreq_server_enable(struct hvm_ioreq_server *s)
>>> +{
>>> +    hvm_remove_ioreq_gfn(s, false);
>>> +    hvm_remove_ioreq_gfn(s, true);
>>> +}
>> ... this? Imo if at all possible there should be no such duplication
>> (i.e. at least have this one simply call the earlier one).
> 
> I'm afraid I don't see any duplication between the mentioned functions.
> Would you mind explaining?

Ouch - somehow my eyes considered "unmap" == "remove". I'm sorry
for the noise.

>>> @@ -1080,6 +1105,24 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
>>>       return rc;
>>>   }
>>>   
>>> +/* Called with ioreq_server lock held */
>>> +int arch_ioreq_server_map_mem_type(struct domain *d,
>>> +                                   struct hvm_ioreq_server *s,
>>> +                                   uint32_t flags)
>>> +{
>>> +    int rc = p2m_set_ioreq_server(d, flags, s);
>>> +
>>> +    if ( rc == 0 && flags == 0 )
>>> +    {
>>> +        const struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>> +
>>> +        if ( read_atomic(&p2m->ioreq.entry_count) )
>>> +            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
>>> +    }
>>> +
>>> +    return rc;
>>> +}
>>> +
>>>   /*
>>>    * Map or unmap an ioreq server to specific memory type. For now, only
>>>    * HVMMEM_ioreq_server is supported, and in the future new types can be
>>> @@ -1112,19 +1155,11 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
>>>       if ( s->emulator != current->domain )
>>>           goto out;
>>>   
>>> -    rc = p2m_set_ioreq_server(d, flags, s);
>>> +    rc = arch_ioreq_server_map_mem_type(d, s, flags);
>>>   
>>>    out:
>>>       spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>>>   
>>> -    if ( rc == 0 && flags == 0 )
>>> -    {
>>> -        struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>> -
>>> -        if ( read_atomic(&p2m->ioreq.entry_count) )
>>> -            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
>>> -    }
>>> -
>>>       return rc;
>>>   }
>> While you mention this change in the description, I'm still
>> missing justification as to why the change is safe to make. I
>> continue to think p2m_change_entry_type_global() would better
>> not be called inside the locked region, if at all possible.
> Well, I'm afraid I don't have a 100% justification that the change is
> safe to make, nor do I see an obvious reason why it is not safe (at
> least I didn't find a possible deadlock scenario while investigating
> the code).
> I raised a question earlier about whether I could fold this check in,
> which implied calling p2m_change_entry_type_global() with the
> ioreq_server lock held.

I'm aware of the earlier discussion. But "didn't find" isn't good
enough in a case like this, and since it's likely hard to indeed
prove there's no deadlock possible, I think it's best to avoid
having to provide such a proof by avoiding the nesting.

> If there is a concern with calling this inside the locked region
> (unfortunately still unclear to me at the moment), I will try to find
> another way to split hvm_map_mem_type_to_ioreq_server() without a
> potentially unsafe change here *and* without exposing
> p2m_change_entry_type_global() to the common code. Right now, I don't
> have any idea how this could be split other than by introducing one
> more hook here to deal with p2m_change_entry_type_global() (probably
> arch_ioreq_server_map_mem_type_complete?), but I don't expect it to
> be accepted.
> I'd appreciate any ideas on that.

Is there a reason why the simplest solution (two independent
arch_*() calls) won't do? If so, what are the constraints?
Can the first one e.g. somehow indicate what needs to happen
after the lock was dropped? But the two calls look independent
right now, so I don't see any complicating factors.
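For illustration, the two-hook shape being suggested might look roughly like the following self-contained sketch. All types and names here are simplified stand-ins (mock_domain, mock_p2m, the boolean "lock"), and arch_ioreq_server_map_mem_type_completed is a hypothetical name for the second hook; the point is only that the global type change runs after the lock is dropped:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* All names below are illustrative stand-ins, not the real Xen types. */

struct mock_p2m {
    unsigned int ioreq_entry_count;  /* models p2m->ioreq.entry_count */
};

struct mock_domain {
    bool locked;                     /* models the ioreq_server lock */
    struct mock_p2m p2m;
    bool type_change_done;           /* set by the "completed" hook */
};

/* First hook: runs with the ioreq_server lock held. */
static int arch_ioreq_server_map_mem_type(struct mock_domain *d, uint32_t flags)
{
    assert(d->locked);               /* caller must hold the lock */
    /* ... p2m_set_ioreq_server(d, flags, s) would run here ... */
    (void)flags;
    return 0;
}

/* Second hook: runs after the lock is dropped, so the potentially
 * long-running global type change nests inside no ioreq lock. */
static void arch_ioreq_server_map_mem_type_completed(struct mock_domain *d,
                                                     uint32_t flags, int rc)
{
    assert(!d->locked);
    if ( rc == 0 && flags == 0 && d->p2m.ioreq_entry_count )
        d->type_change_done = true;  /* models p2m_change_entry_type_global() */
}

/* Caller mirroring hvm_map_mem_type_to_ioreq_server()'s shape. */
static int map_mem_type(struct mock_domain *d, uint32_t flags)
{
    int rc;

    d->locked = true;                        /* spin_lock_recursive() */
    rc = arch_ioreq_server_map_mem_type(d, flags);
    d->locked = false;                       /* spin_unlock_recursive() */

    arch_ioreq_server_map_mem_type_completed(d, flags, rc);

    return rc;
}
```

With this shape the first hook can also return (or record) an indication of what remains to be done, which the second hook consumes once the lock is no longer held.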

>>> --- a/xen/include/asm-x86/hvm/ioreq.h
>>> +++ b/xen/include/asm-x86/hvm/ioreq.h
>>> @@ -19,6 +19,25 @@
>>>   #ifndef __ASM_X86_HVM_IOREQ_H__
>>>   #define __ASM_X86_HVM_IOREQ_H__
>>>   
>>> +#define HANDLE_BUFIOREQ(s) \
>>> +    ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
>>> +
>>> +bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion);
>>> +int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s);
>>> +void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s);
>>> +void arch_ioreq_server_enable(struct hvm_ioreq_server *s);
>>> +void arch_ioreq_server_disable(struct hvm_ioreq_server *s);
>>> +void arch_ioreq_server_destroy(struct hvm_ioreq_server *s);
>>> +int arch_ioreq_server_map_mem_type(struct domain *d,
>>> +                                   struct hvm_ioreq_server *s,
>>> +                                   uint32_t flags);
>>> +bool arch_ioreq_server_destroy_all(struct domain *d);
>>> +int arch_ioreq_server_get_type_addr(const struct domain *d,
>>> +                                    const ioreq_t *p,
>>> +                                    uint8_t *type,
>>> +                                    uint64_t *addr);
>>> +void arch_ioreq_domain_init(struct domain *d);
>>> +
>>>   bool hvm_io_pending(struct vcpu *v);
>>>   bool handle_hvm_io_completion(struct vcpu *v);
>>>   bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
>> What's the plan here? Introduce them into the x86 header just
>> to later move the entire block into the common one? Wouldn't
>> it make sense to introduce the common header here right away?
>> Or do you expect to convert some of the simpler ones to inline
>> functions later on?
> The former. The subsequent patch moves the entire block(s) from here
> and from x86/hvm/ioreq.c to the common code in one go.

I think I saw it move the _other_ pieces there, and this block
left here. (FAOD my comment is about the arch_*() declarations
you add, not the patch context in view.)

> I thought it was a little bit odd to expose a header before exposing an 
> implementation to the common code. Another reason is to minimize the
> places that need touching by the current patch.

By exposing arch_*() declarations you don't give the impression
of exposing any "implementation". These are helpers the
implementation is to invoke; I'm fine with you moving the
declarations of the functions actually constituting this
component's external interface only once you also move the
implementation to common code.
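For illustration, the intended split might look roughly like the sketch below, condensed into a single translation unit; the file paths in the comments and the condensation are illustrative only, and the function bodies are stubs:

```c
#include <assert.h>

/* xen/include/xen/ioreq.h (common): the arch_*() hooks the common
 * implementation invokes are declared here. */
struct ioreq_server;                        /* opaque to common code */
int arch_ioreq_server_map_pages(struct ioreq_server *s);

/* xen/common/ioreq.c (common): the implementation calls the hook. */
int ioreq_server_map_pages(struct ioreq_server *s)
{
    return arch_ioreq_server_map_pages(s);
}

/* xen/arch/x86/hvm/ioreq.c (per-arch): x86 provides the hook body. */
int arch_ioreq_server_map_pages(struct ioreq_server *s)
{
    (void)s;
    return 0;   /* the real gfn-mapping logic would live here */
}
```

The arch_*() declarations form the interface the common code consumes, while the declarations of the externally visible functions move only together with their implementation.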

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 16:42:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 16:42:47 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157259-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157259: tolerable all pass - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 07 Dec 2020 16:42:41 +0000

flight 157259 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157259/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  3ec53aa79905edc3891b8267123d88a221553370
baseline version:
 xen                  5e666356a9d55fbd9eb5b8506088aa760e107b5b

Last test of basis   157206  2020-12-04 16:02:46 Z    3 days
Testing same since   157259  2020-12-07 14:01:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Hongyan Xia <hongyxia@amazon.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5e666356a9..3ec53aa799  3ec53aa79905edc3891b8267123d88a221553370 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 17:22:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 17:22:26 +0000
Subject: Re: [PATCH V3 01/23] x86/ioreq: Prepare IOREQ feature for making it
 common
To: Jan Beulich <jbeulich@suse.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-2-git-send-email-olekstysh@gmail.com>
 <51a5c06f-e6ce-c651-2fd2-352aaa591fb1@suse.com>
 <029c3dcc-fac2-5b65-703e-5d821335f2a0@gmail.com>
 <2d83a093-29d3-5870-0814-229cc7f1c04b@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <631684a0-2579-475e-54b3-50e4522b6788@gmail.com>
Date: Mon, 7 Dec 2020 19:21:57 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <2d83a093-29d3-5870-0814-229cc7f1c04b@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


Hi Jan


>>>> @@ -1080,6 +1105,24 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
>>>>        return rc;
>>>>    }
>>>>    
>>>> +/* Called with ioreq_server lock held */
>>>> +int arch_ioreq_server_map_mem_type(struct domain *d,
>>>> +                                   struct hvm_ioreq_server *s,
>>>> +                                   uint32_t flags)
>>>> +{
>>>> +    int rc = p2m_set_ioreq_server(d, flags, s);
>>>> +
>>>> +    if ( rc == 0 && flags == 0 )
>>>> +    {
>>>> +        const struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>>> +
>>>> +        if ( read_atomic(&p2m->ioreq.entry_count) )
>>>> +            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
>>>> +    }
>>>> +
>>>> +    return rc;
>>>> +}
>>>> +
>>>>    /*
>>>>     * Map or unmap an ioreq server to specific memory type. For now, only
>>>>     * HVMMEM_ioreq_server is supported, and in the future new types can be
>>>> @@ -1112,19 +1155,11 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
>>>>        if ( s->emulator != current->domain )
>>>>            goto out;
>>>>    
>>>> -    rc = p2m_set_ioreq_server(d, flags, s);
>>>> +    rc = arch_ioreq_server_map_mem_type(d, s, flags);
>>>>    
>>>>     out:
>>>>        spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>>>>    
>>>> -    if ( rc == 0 && flags == 0 )
>>>> -    {
>>>> -        struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>>> -
>>>> -        if ( read_atomic(&p2m->ioreq.entry_count) )
>>>> -            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
>>>> -    }
>>>> -
>>>>        return rc;
>>>>    }
>>> While you mention this change in the description, I'm still
>>> missing justification as to why the change is safe to make. I
>>> continue to think p2m_change_entry_type_global() would better
>>> not be called inside the locked region, if at all possible.
>> Well, I'm afraid I don't have a 100% justification that the change is
>> safe to make, nor do I see an obvious reason why it is not safe (at
>> least I didn't find a possible deadlock scenario while investigating
>> the code).
>> I raised a question earlier about whether I could fold this check in,
>> which implied calling p2m_change_entry_type_global() with the
>> ioreq_server lock held.
> I'm aware of the earlier discussion. But "didn't find" isn't good
> enough in a case like this, and since it's likely hard to indeed
> prove there's no deadlock possible, I think it's best to avoid
> having to provide such a proof by avoiding the nesting.

Agree here.


>
>> If there is a concern with calling this inside the locked region
>> (unfortunately still unclear to me at the moment), I will try to find
>> another way to split hvm_map_mem_type_to_ioreq_server() without a
>> potentially unsafe change here *and* without exposing
>> p2m_change_entry_type_global() to the common code. Right now, I don't
>> have any idea how this could be split other than by introducing one
>> more hook here to deal with p2m_change_entry_type_global() (probably
>> arch_ioreq_server_map_mem_type_complete?), but I don't expect it to
>> be accepted.
>> I'd appreciate any ideas on that.
> Is there a reason why the simplest solution (two independent
> arch_*() calls) won't do? If so, what are the constraints?

There is no reason.


> Can the first one e.g. somehow indicate what needs to happen
> after the lock was dropped?

I think, yes.


> But the two calls look independent
> right now, so I don't see any complicating factors.

OK, I will go the "two independent arch hooks" route then.

Thank you for the idea.


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 17:22:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 17:22:40 +0000
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <lengyelt@ainfosec.com>,
 Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d821c715-966a-b48b-a877-c5dac36822f0@suse.com>
 <17c90493-b438-fbc1-ca10-3bc4d89c4e5e@xen.org>
 <7a768bcd-80c1-d193-8796-7fb6720fa22a@suse.com>
 <1a8250f5-ea49-ac3a-e992-be7ec40deba9@xen.org>
 <269f9a2d-7a8d-cba2-801f-6d3b12f9455f@suse.com>
 <02a2b77f-27a9-b1b6-1acf-1f136cffdf30@xen.org>
 <48395363-ea47-9139-011e-233d92581a71@suse.com>
 <2edfc711-d8d9-4854-94a2-2d9e4d9902ec@xen.org>
 <381cbc5b-29e8-d84d-0b7c-e84de82bc1a4@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <90ace303-b0a9-7d83-098d-ec01c3b308ad@xen.org>
Date: Mon, 7 Dec 2020 17:22:34 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <381cbc5b-29e8-d84d-0b7c-e84de82bc1a4@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 07/12/2020 08:02, Jan Beulich wrote:
> On 04.12.2020 16:09, Julien Grall wrote:
>> On 04/12/2020 12:01, Jan Beulich wrote:
>>> On 04.12.2020 12:51, Julien Grall wrote:
>>>> On 04/12/2020 11:48, Jan Beulich wrote:
>>>>> On 04.12.2020 12:28, Julien Grall wrote:
>>>>>> On 03/12/2020 10:09, Jan Beulich wrote:
>>>>>>> On 02.12.2020 22:10, Julien Grall wrote:
>>>>>>>> So shouldn't we handle this issue properly in VM event?
>>>>>>>
>>>>>>> I suppose that's a question to the VM event folks rather than me?
>>>>>>
>>>>>> Yes. From my understanding of Tamas's e-mail, they are relying on the
>>>>>> monitoring software to do the right thing.
>>>>>>
>>>>>> I will refrain from commenting on this approach. However, given that
>>>>>> the race is much wider than the event channel, I would recommend not
>>>>>> adding more code in the event channel to deal with such a problem.
>>>>>>
>>>>>> Instead, this should be fixed in the VM event code when someone has time
>>>>>> to harden the subsystem.
>>>>>
>>>>> Are you effectively saying I should now undo the addition of the
>>>>> refcounting, which was added in response to feedback from you?
>>>>
>>>> Please point out where I made the request to use the refcounting...
>>>
>>> You didn't ask for this directly, sure, but ...
>>>
>>>> I pointed out there was an issue with the VM event code.
>>>
>>> ... this has ultimately led to the decision to use refcounting
>>> (iirc there was one alternative that I had proposed, besides
>>> the option of doing nothing).
>>
>> One other option that was discussed (maybe only on security@xen.org) is
>> to move the spinlock outside of the structure so it is always allocated.
> 
> Oh, right - forgot about that one, because that's nothing I would
> ever have taken on actually carrying out.
> 
>>>> This was later
>>>> analysed as a wider issue. The VM event folks don't seem to be very
>>>> concerned about the race, so I don't see a reason to try to fix it in
>>>> the event channel code.
>>>
>>> And you won't need the refcount for vpl011 then?
>>
>> I don't believe we need it for the vpl011 as the spin lock protecting
>> the code should always be allocated. The problem today is the lock is
>> initialized too late.
>>
>>> I can certainly
>>> drop it again, but it feels odd to go back to an earlier version
>>> under the circumstances ...
>>
>> The code introduced doesn't look necessary outside of the VM event code.
>> So I think it would be wrong to merge it if it is just papering over a
>> bigger problem.
> 
> So to translate this to a clear course of action: You want me to
> go back to the earlier version by dropping the refcounting again?

Yes.

> (I don't view this as "papering over" btw, but a tiny step towards
> a solution.)

This implies that the refcounting is part of the actual solution. I 
think you can solve it directly in the VM event code without touching 
the event channel code.

Furthermore, I see no point in adding code to the common code if the 
maintainers of the affected subsystem think their code is safe (I don't 
believe it is).
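For illustration, the "lock always allocated, initialised early" shape discussed above for vpl011 might look like this simplified sketch. All types and names here are stand-ins (mock_lock, vuart_state), not the real Xen structures; the point is that the lock lives in the always-present domain structure and is initialised at domain creation, so a callback can always take it safely even before the lazily allocated state exists:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-ins only; not the real Xen structures. */

struct mock_lock { bool initialised; };

static void mock_lock_init(struct mock_lock *l) { l->initialised = true; }

/* Lazily allocated emulation state (models the vpl011 payload). */
struct vuart_state { int data; };

struct mock_domain {
    /* The lock lives in the always-present domain struct, not in the
     * lazily allocated state, so it exists before any event can fire. */
    struct mock_lock vuart_lock;
    struct vuart_state *vuart;   /* may still be NULL early on */
};

static void domain_create(struct mock_domain *d)
{
    mock_lock_init(&d->vuart_lock);   /* initialised up front */
    d->vuart = NULL;                  /* state attached later */
}

/* An event callback can now always take the lock safely and simply
 * bail out if the state hasn't been attached yet. */
static bool vuart_event(struct mock_domain *d)
{
    assert(d->vuart_lock.initialised);
    if ( !d->vuart )
        return false;                 /* nothing to do yet */
    d->vuart->data++;
    return true;
}
```

With this layout no reference counting is needed for the lock itself, because its lifetime equals the domain's; only the late-initialisation bug remains to be fixed.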

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 17:25:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 17:25:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46871.83047 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmKFz-000056-P4; Mon, 07 Dec 2020 17:25:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46871.83047; Mon, 07 Dec 2020 17:25:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmKFz-00004v-Lf; Mon, 07 Dec 2020 17:25:03 +0000
Received: by outflank-mailman (input) for mailman id 46871;
 Mon, 07 Dec 2020 17:25:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gzin=FL=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kmKFy-0008WT-4f
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 17:25:02 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.2.85]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5bf7c91a-a2b3-40f0-8326-87d9c4e89b4f;
 Mon, 07 Dec 2020 17:25:00 +0000 (UTC)
Received: from DB6PR0802CA0032.eurprd08.prod.outlook.com (2603:10a6:4:a3::18)
 by VI1PR08MB2782.eurprd08.prod.outlook.com (2603:10a6:802:1c::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.23; Mon, 7 Dec
 2020 17:24:58 +0000
Received: from DB5EUR03FT064.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:a3:cafe::6d) by DB6PR0802CA0032.outlook.office365.com
 (2603:10a6:4:a3::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.17 via Frontend
 Transport; Mon, 7 Dec 2020 17:24:58 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT064.mail.protection.outlook.com (10.152.21.199) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3632.17 via Frontend Transport; Mon, 7 Dec 2020 17:24:58 +0000
Received: ("Tessian outbound 8b6e0bb22f1c:v71");
 Mon, 07 Dec 2020 17:24:58 +0000
Received: from 5f376cc0e6fb.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 5FD13C11-3B31-4C2E-87D8-D52004EAECE7.1; 
 Mon, 07 Dec 2020 17:24:20 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 5f376cc0e6fb.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 07 Dec 2020 17:24:20 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBAPR08MB5589.eurprd08.prod.outlook.com (2603:10a6:10:1a2::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.21; Mon, 7 Dec
 2020 17:24:18 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3632.023; Mon, 7 Dec 2020
 17:24:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5bf7c91a-a2b3-40f0-8326-87d9c4e89b4f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WlH+Sxi4vEnksM6T974PSKhfPj8eLEJTs61bgnI4m5U=;
 b=b6VWScZkLdA9c8gfA7c3HulsSNiqua2CdZyw2xL8b4dzUmlE+3HsjmrIrZTlmo/UUEPkF/wXnmk5sJiPROKHvzfUuACf/z4UT34qBPxFxgoItMjYpSsbud5k6z1yT2+c+2LR++UcUDZNbwqjGEsUEpH0jLCBvvIOCrwRYFnOifo=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 24997c4ae3059805
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Julien
 Grall <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 3/7] xen/arm: create a cpuinfo structure for guest
Thread-Topic: [PATCH v2 3/7] xen/arm: create a cpuinfo structure for guest
Thread-Index: AQHWxyRpRLPV3+rSX0iDOmZYGXBVV6nnpDEAgARJGoA=
Date: Mon, 7 Dec 2020 17:24:18 +0000
Message-ID: <F1C7605B-2E88-4A9E-A556-C529AF06E9E6@arm.com>
References: <cover.1606742184.git.bertrand.marquis@arm.com>
 <539cc9c817a80e35a2532dba5bc01e9b2533ff56.1606742184.git.bertrand.marquis@arm.com>
 <alpine.DEB.2.21.2012041531420.32240@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2012041531420.32240@sstabellini-ThinkPad-T480s>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 859d6174-7665-427b-4da4-08d89ad50559
x-ms-traffictypediagnostic: DBAPR08MB5589:|VI1PR08MB2782:
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB2782DC1555E38E0890E30AE69DCE0@VI1PR08MB2782.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <9BD0F064CC8593479E5244EFDC0E8239@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5589
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT064.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	e6316a74-19cc-49b8-147d-08d89ad4edde
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Dec 2020 17:24:58.1300
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 859d6174-7665-427b-4da4-08d89ad50559
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT064.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB2782

Hi Stefano,

> On 4 Dec 2020, at 23:57, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Mon, 30 Nov 2020, Bertrand Marquis wrote:
>> Create a cpuinfo structure for guest and mask into it the features that
>> we do not support in Xen or that we do not want to publish to guests.
>>
>> Modify some values in the cpuinfo structure for guests to mask some
>> features which we do not want to allow to guests (like AMU) or we do not
>> support (like SVE).
>
> The first two sentences seem to say the same thing in two different
> ways.
>
>
>> The code tries to group together register modifications for the
>> same feature, to make it possible in the long term to easily enable/disable
>> a feature depending on user parameters, or to add other register
>> modifications in the same place (like enabling/disabling HCR bits).
>>
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>> Changes in V2: rebase
>> ---
>> xen/arch/arm/cpufeature.c        | 51 ++++++++++++++++++++++++++++++++
>> xen/include/asm-arm/cpufeature.h |  2 ++
>> 2 files changed, 53 insertions(+)
>>
>> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
>> index 204be9b084..309941ff37 100644
>> --- a/xen/arch/arm/cpufeature.c
>> +++ b/xen/arch/arm/cpufeature.c
>> @@ -24,6 +24,8 @@
>>
>> DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
>>
>> +struct cpuinfo_arm __read_mostly guest_cpuinfo;
>> +
>> void update_cpu_capabilities(const struct arm_cpu_capabilities *caps,
>>                              const char *info)
>> {
>> @@ -156,6 +158,55 @@ void identify_cpu(struct cpuinfo_arm *c)
>> #endif
>> }
>>
>> +/*
>> + * This function creates a cpuinfo structure with values modified to mask
>> + * all CPU features that should not be published to guests.
>> + * The created structure is then used to provide ID register values to guests.
>> + */
>> +static int __init create_guest_cpuinfo(void)
>> +{
>> +    /*
>> +     * TODO: The code is currently using only the features detected on the boot
>> +     * core. In the long term we should try to compute values containing only
>> +     * features supported by all cores.
>> +     */
>> +    identify_cpu(&guest_cpuinfo);
>
> Given that we already have boot_cpu_data and current_cpu_data, which
> should already be initialized at this point, we could simply:
>
>  guest_cpuinfo = current_cpu_data;
>
> or
>
>  guest_cpuinfo = boot_cpu_data;
>
> ?

Ack, I will do that.

Cheers
Bertrand

>
>
>> +#ifdef CONFIG_ARM_64
>> +    /* Disable MPAM as Xen does not support it */
>> +    guest_cpuinfo.pfr64.mpam = 0;
>> +    guest_cpuinfo.pfr64.mpam_frac = 0;
>> +
>> +    /* Disable SVE as Xen does not support it */
>> +    guest_cpuinfo.pfr64.sve = 0;
>> +    guest_cpuinfo.zfr64.bits[0] = 0;
>> +
>> +    /* Disable MTE as Xen does not support it */
>> +    guest_cpuinfo.pfr64.mte = 0;
>> +#endif
>> +
>> +    /* Disable AMU */
>> +#ifdef CONFIG_ARM_64
>> +    guest_cpuinfo.pfr64.amu = 0;
>> +#endif
>> +    guest_cpuinfo.pfr32.amu = 0;
>> +
>> +    /* Disable RAS as Xen does not support it */
>> +#ifdef CONFIG_ARM_64
>> +    guest_cpuinfo.pfr64.ras = 0;
>> +    guest_cpuinfo.pfr64.ras_frac = 0;
>> +#endif
>> +    guest_cpuinfo.pfr32.ras = 0;
>> +    guest_cpuinfo.pfr32.ras_frac = 0;
>> +
>> +    return 0;
>> +}
>> +/*
>> + * This function needs to be run after all secondary CPUs are started, to
>> + * have cpuinfo structures for all cores.
>> + */
>> +__initcall(create_guest_cpuinfo);
>> +
>> /*
>>  * Local variables:
>>  * mode: C
>> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
>> index 64354c3f19..0ab6dd42a0 100644
>> --- a/xen/include/asm-arm/cpufeature.h
>> +++ b/xen/include/asm-arm/cpufeature.h
>> @@ -290,6 +290,8 @@ extern void identify_cpu(struct cpuinfo_arm *);
>> extern struct cpuinfo_arm cpu_data[];
>> #define current_cpu_data cpu_data[smp_processor_id()]
>>
>> +extern struct cpuinfo_arm guest_cpuinfo;
>> +
>> #endif /* __ASSEMBLY__ */
>>
>> #endif
>> -- 
>> 2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 17:30:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 17:30:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46881.83059 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmKLV-000103-Dj; Mon, 07 Dec 2020 17:30:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46881.83059; Mon, 07 Dec 2020 17:30:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmKLV-0000zv-Ai; Mon, 07 Dec 2020 17:30:45 +0000
Received: by outflank-mailman (input) for mailman id 46881;
 Mon, 07 Dec 2020 17:30:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kmKLT-0000zq-LQ
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 17:30:43 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmKLR-0006Uy-MA; Mon, 07 Dec 2020 17:30:41 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmKLR-0004M7-Ap; Mon, 07 Dec 2020 17:30:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=qYLK5Rbilo17L0PX4ORzWdQytoszAoKkLyoSTL8bCJg=; b=ZXiNXYpF2OJlYRyFi8Jrrwwh40
	aj4taQxfI4S7mT9vFF7g2hdUyhtY876jKqgGJrZRaTsW0NDV/XdIoCwysw+we8c0IM/glJKpvPWoM
	6Dkoa2Wt4L43zDr5rhKiz50Fp4Vp7DXYQYrwv3cqWfMrvGwQQb3qPSf95CothjzEqqJc=;
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <lengyelt@ainfosec.com>,
 Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Tamas K Lengyel <tamas.k.lengyel@gmail.com>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d821c715-966a-b48b-a877-c5dac36822f0@suse.com>
 <17c90493-b438-fbc1-ca10-3bc4d89c4e5e@xen.org>
 <7a768bcd-80c1-d193-8796-7fb6720fa22a@suse.com>
 <1a8250f5-ea49-ac3a-e992-be7ec40deba9@xen.org>
 <CABfawhkQcUD4f62zpg0cyrdQgG82XtpYRZZ_-50hjagooT530A@mail.gmail.com>
 <5862eb24-d894-455a-13ac-61af54f949e7@xen.org>
 <CABfawhkWQiOhLL8f3NzoWbeuag-f+YOOK0i_LJzZq5Yvoh=oHQ@mail.gmail.com>
 <fd384990-376e-40f4-f0b8-1a889b3a0c51@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <9ee6016a-d3b3-c847-4775-0e05c8578110@xen.org>
Date: Mon, 7 Dec 2020 17:30:38 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <fd384990-376e-40f4-f0b8-1a889b3a0c51@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 07/12/2020 15:28, Jan Beulich wrote:
> On 04.12.2020 20:15, Tamas K Lengyel wrote:
>> On Fri, Dec 4, 2020 at 10:29 AM Julien Grall <julien@xen.org> wrote:
>>> On 04/12/2020 15:21, Tamas K Lengyel wrote:
>>>> On Fri, Dec 4, 2020 at 6:29 AM Julien Grall <julien@xen.org> wrote:
>>>>> On 03/12/2020 10:09, Jan Beulich wrote:
>>>>>> On 02.12.2020 22:10, Julien Grall wrote:
>>>>>>> On 23/11/2020 13:30, Jan Beulich wrote:
>>>>>>>> While there don't look to be any problems with this right now, the lock
>>>>>>>> order implications from holding the lock can be very difficult to follow
>>>>>>>> (and may be easy to violate unknowingly). The present callbacks don't
>>>>>>>> (and no such callback should) have any need for the lock to be held.
>>>>>>>>
>>>>>>>> However, vm_event_disable() frees the structures used by respective
>>>>>>>> callbacks and isn't otherwise synchronized with invocations of these
>>>>>>>> callbacks, so maintain a count of in-progress calls, for evtchn_close()
>>>>>>>> to wait to drop to zero before freeing the port (and dropping the lock).
>>>>>>>
>>>>>>> AFAICT, this callback is not the only place where the synchronization is
>>>>>>> missing in the VM event code.
>>>>>>>
>>>>>>> For instance, vm_event_put_request() can also race against
>>>>>>> vm_event_disable().
>>>>>>>
>>>>>>> So shouldn't we handle this issue properly in VM event?
>>>>>>
>>>>>> I suppose that's a question to the VM event folks rather than me?
>>>>>
>>>>> Yes. From my understanding of Tamas's e-mail, they are relying on the
>>>>> monitoring software to do the right thing.
>>>>>
>>>>> I will refrain from commenting on this approach. However, given the race is
>>>>> much wider than the event channel, I would recommend not adding more
>>>>> code in the event channel to deal with such a problem.
>>>>>
>>>>> Instead, this should be fixed in the VM event code when someone has time
>>>>> to harden the subsystem.
>>>>
>>>> I double-checked and the disable route is actually more robust, we
>>>> don't just rely on the toolstack doing the right thing. The domain
>>>> gets paused before any calls to vm_event_disable. So I don't think
>>>> there is really a race-condition here.
>>>
>>> The code will *only* pause the monitored domain. I can see two issues:
>>>      1) The toolstack is still sending events while destroy is happening.
>>> This is the race discussed here.
>>>      2) The implementation of vm_event_put_request() suggests that it can be
>>> called with a non-current domain.
>>>
>>> I don't see how just pausing the monitored domain is enough here.
>>
>> Requests only get generated by the monitored domain. So if the domain
>> is not running you won't get more of them. The toolstack can only send
>> replies.
> 
> Julien,
> 
> does this change your view on the refcounting added by the patch
> at the root of this sub-thread?

I still think the code is at best fragile. One example I can find is:

   -> guest_remove_page()
     -> p2m_mem_paging_drop_page()
      -> vm_event_put_request()

guest_remove_page() is not always called on the current domain, so it is 
possible for vm_event_put_request() to happen on a foreign domain and 
therefore not be protected by the current hypercall.

Anyway, I don't think the refcounting should be part of the event 
channel code without any idea of how it would fit into fixing the VM event race.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 17:35:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 17:35:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46891.83076 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmKQH-0001Ow-8N; Mon, 07 Dec 2020 17:35:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46891.83076; Mon, 07 Dec 2020 17:35:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmKQH-0001Oo-4d; Mon, 07 Dec 2020 17:35:41 +0000
Received: by outflank-mailman (input) for mailman id 46891;
 Mon, 07 Dec 2020 17:35:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gzin=FL=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kmKQG-0001OY-ON
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 17:35:40 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe0c::624])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 90e580ff-daf2-4067-8aea-c3d93baefa92;
 Mon, 07 Dec 2020 17:35:39 +0000 (UTC)
Received: from DB6PR0501CA0033.eurprd05.prod.outlook.com (2603:10a6:4:67::19)
 by AM6PR08MB3174.eurprd08.prod.outlook.com (2603:10a6:209:45::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.19; Mon, 7 Dec
 2020 17:35:37 +0000
Received: from DB5EUR03FT020.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:67:cafe::ee) by DB6PR0501CA0033.outlook.office365.com
 (2603:10a6:4:67::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.17 via Frontend
 Transport; Mon, 7 Dec 2020 17:35:37 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT020.mail.protection.outlook.com (10.152.20.134) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3632.17 via Frontend Transport; Mon, 7 Dec 2020 17:35:36 +0000
Received: ("Tessian outbound fc5cc0046d61:v71");
 Mon, 07 Dec 2020 17:35:36 +0000
Received: from acb1dfc3750f.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 3E0E9D14-DB2C-42E6-86DB-B541866D81A2.1; 
 Mon, 07 Dec 2020 17:35:21 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id acb1dfc3750f.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 07 Dec 2020 17:35:21 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB6075.eurprd08.prod.outlook.com (2603:10a6:10:207::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.18; Mon, 7 Dec
 2020 17:35:20 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3632.023; Mon, 7 Dec 2020
 17:35:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 90e580ff-daf2-4067-8aea-c3d93baefa92
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ryUw8W+AuaX954A3E44HR98N9ygf8VNE0rvlGCFDbsw=;
 b=LfFToD/0Kbp/pRv517YAzvJyljI2EQvSr4hgxPRPjmPhLmOpiIfDcqXlsonkh4QyDBtLnRLZYN4x3vwsaPdmlgs+vGKOOHfaZpHelVaMHv7uqY1j4m+hRb+A1JLeZa+BwmvKxLjTA3MD1epYhT8Kq0fTUW5HFFS4QTW+xyBKvX4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 5e83f94dd51b0ad5
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 1/7] xen/arm: Add ID registers and complete cpufinfo
Thread-Topic: [PATCH v2 1/7] xen/arm: Add ID registers and complete cpufinfo
Thread-Index: AQHWxyRhg3JcFq2vMUqckLksQKUHEKnnotSAgARNjYA=
Date: Mon, 7 Dec 2020 17:35:20 +0000
Message-ID: <06C2A7AB-A28D-4BF0-9837-A15BD2759C1B@arm.com>
References: <cover.1606742184.git.bertrand.marquis@arm.com>
 <97efd89cccdffc2a7fd987ac8156f5eea191fd3f.1606742184.git.bertrand.marquis@arm.com>
 <alpine.DEB.2.21.2012041546340.32240@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2012041546340.32240@sstabellini-ThinkPad-T480s>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: d77406fc-339a-4031-7e79-08d89ad68219
x-ms-traffictypediagnostic: DBBPR08MB6075:|AM6PR08MB3174:
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB317478CBE4362894B413323A9DCE0@AM6PR08MB3174.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <0F75C1D391BB43449FD2335535FEF9AB@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6075
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT020.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	1926bb22-1d6d-48d8-8537-08d89ad67813
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Dec 2020 17:35:36.9182
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d77406fc-339a-4031-7e79-08d89ad68219
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT020.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3174

Hi Stefano,

> On 4 Dec 2020, at 23:52, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> There is a typo in the subject line

I will fix that in v3.

>
>
> On Mon, 30 Nov 2020, Bertrand Marquis wrote:
>> Add definitions and entries in cpuinfo for the ID registers introduced in
>> the newer Arm Architecture Reference Manual:
>> - ID_PFR2: Processor Feature Register 2
>> - ID_DFR1: Debug Feature Register 1
>> - ID_MMFR4 and ID_MMFR5: Memory Model Feature Registers 4 and 5
>> - ID_ISAR6: ISA Feature Register 6
>> Add more bitfield definitions in the PFR fields of cpuinfo.
>> Add the MVFR2 register definition for aarch32.
>> Add mvfr values in cpuinfo.
>> Add some register definitions for arm64 in sysregs as some are not
>> always known by compilers.
>> Initialize the new values added in cpuinfo in identify_cpu during init.
>>
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>
> I realize I am using an old compiler but I am getting a build error:
>
> /local/repos/gcc-linaro-5.3.1-2016.05-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-gcc -MMD -MP -MF ./.cpufeature.o.d  -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -O1 -fno-omit-frame-pointer -nostdinc -fno-builtin -fno-common -Werror -Wredundant-decls -Wno-pointer-arith -Wvla -pipe -D__XEN__ -include /local/repos/xen-upstream/xen/include/xen/config.h -Wa,--strip-local-absolute -g -mcpu=generic -mgeneral-regs-only   -I/local/repos/xen-upstream/xen/include -fno-stack-protector -fno-exceptions -fno-asynchronous-unwind-tables -Wnested-externs '-D__OBJECT_FILE__="cpufeature.o"'  -c cpufeature.c -o cpufeature.o
> {standard input}: Assembler messages:
> {standard input}:634: Error: unknown or missing system register name at operand 2 -- `mrs x1,ID_MMFR4_EL1'
>
> If I remove the line:
>
>  c->mm32.bits[4]  = READ_SYSREG32(ID_MMFR4_EL1);

In v3 I will add the MMFR4 definition in case it is not provided by the compiler.

Cheers
Bertrand


>
>
> it builds just fine
>
>
>
>> ---
>> Changes in V2:
>>  Fix dbg32 table size and add proper initialisation of the second entry
>>  of the table by reading ID_DFR1 register.
>> ---
>> xen/arch/arm/cpufeature.c           | 17 ++++++++
>> xen/include/asm-arm/arm64/sysregs.h | 25 ++++++++++++
>> xen/include/asm-arm/cpregs.h        | 11 +++++
>> xen/include/asm-arm/cpufeature.h    | 63 ++++++++++++++++++++++++-----
>> 4 files changed, 107 insertions(+), 9 deletions(-)
>>
>> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
>> index 44126dbf07..204be9b084 100644
>> --- a/xen/arch/arm/cpufeature.c
>> +++ b/xen/arch/arm/cpufeature.c
>> @@ -114,15 +114,20 @@ void identify_cpu(struct cpuinfo_arm *c)
>>
>>         c->mm64.bits[0]  = READ_SYSREG64(ID_AA64MMFR0_EL1);
>>         c->mm64.bits[1]  = READ_SYSREG64(ID_AA64MMFR1_EL1);
>> +        c->mm64.bits[2]  = READ_SYSREG64(ID_AA64MMFR2_EL1);
>>
>>         c->isa64.bits[0] = READ_SYSREG64(ID_AA64ISAR0_EL1);
>>         c->isa64.bits[1] = READ_SYSREG64(ID_AA64ISAR1_EL1);
>> +
>> +        c->zfr64.bits[0] = READ_SYSREG64(ID_AA64ZFR0_EL1);
>> #endif
>>
>>         c->pfr32.bits[0] = READ_SYSREG32(ID_PFR0_EL1);
>>         c->pfr32.bits[1] = READ_SYSREG32(ID_PFR1_EL1);
>> +        c->pfr32.bits[2] = READ_SYSREG32(ID_PFR2_EL1);
>>
>>         c->dbg32.bits[0] = READ_SYSREG32(ID_DFR0_EL1);
>> +        c->dbg32.bits[1] = READ_SYSREG32(ID_DFR1_EL1);
>>
>>         c->aux32.bits[0] = READ_SYSREG32(ID_AFR0_EL1);
>>
>> @@ -130,6 +135,8 @@ void identify_cpu(struct cpuinfo_arm *c)
>>         c->mm32.bits[1]  = READ_SYSREG32(ID_MMFR1_EL1);
>>         c->mm32.bits[2]  = READ_SYSREG32(ID_MMFR2_EL1);
>>         c->mm32.bits[3]  = READ_SYSREG32(ID_MMFR3_EL1);
>> +        c->mm32.bits[4]  = READ_SYSREG32(ID_MMFR4_EL1);
>> +        c->mm32.bits[5]  = READ_SYSREG32(ID_MMFR5_EL1);
>>
>>         c->isa32.bits[0] = READ_SYSREG32(ID_ISAR0_EL1);
>>         c->isa32.bits[1] = READ_SYSREG32(ID_ISAR1_EL1);
>> @@ -137,6 +144,16 @@ void identify_cpu(struct cpuinfo_arm *c)
>>         c->isa32.bits[3] = READ_SYSREG32(ID_ISAR3_EL1);
>>         c->isa32.bits[4] = READ_SYSREG32(ID_ISAR4_EL1);
>>         c->isa32.bits[5] = READ_SYSREG32(ID_ISAR5_EL1);
>> +        c->isa32.bits[6] = READ_SYSREG32(ID_ISAR6_EL1);
>> +
>> +#ifdef CONFIG_ARM_64
>> +        c->mvfr.bits[0] = READ_SYSREG64(MVFR0_EL1);
>> +        c->mvfr.bits[1] = READ_SYSREG64(MVFR1_EL1);
>> +        c->mvfr.bits[2] = READ_SYSREG64(MVFR2_EL1);
>> +#else
>> +        c->mvfr.bits[0] = READ_CP32(MVFR0);
>> +        c->mvfr.bits[1] = READ_CP32(MVFR1);
>> +#endif
>> }
>>
>> /*
>> diff --git a/xen/include/asm-arm/arm64/sysregs.h b/xen/include/asm-arm/arm64/sysregs.h
>> index c60029d38f..5abbeda3fd 100644
>> --- a/xen/include/asm-arm/arm64/sysregs.h
>> +++ b/xen/include/asm-arm/arm64/sysregs.h
>> @@ -57,6 +57,31 @@
>> #define ICH_AP1R2_EL2             __AP1Rx_EL2(2)
>> #define ICH_AP1R3_EL2             __AP1Rx_EL2(3)
>>
>> +/*
>> + * Define ID coprocessor registers if they are not
>> + * already defined by the compiler.
>> + *
>> + * Values taken from the Linux kernel.
>> + */
>> +#ifndef ID_AA64MMFR2_EL1
>> +#define ID_AA64MMFR2_EL1            S3_0_C0_C7_2
>> +#endif
>> +#ifndef ID_PFR2_EL1
>> +#define ID_PFR2_EL1                 S3_0_C0_C3_4
>> +#endif
>> +#ifndef ID_MMFR5_EL1
>> +#define ID_MMFR5_EL1                S3_0_C0_C3_6
>> +#endif
>> +#ifndef ID_ISAR6_EL1
>> +#define ID_ISAR6_EL1                S3_0_C0_C2_7
>> +#endif
>> +#ifndef ID_AA64ZFR0_EL1
>> +#define ID_AA64ZFR0_EL1             S3_0_C0_C4_4
>> +#endif
>> +#ifndef ID_DFR1_EL1
>> +#define ID_DFR1_EL1                 S3_0_C0_C3_5
>> +#endif
>> +
>> /* Access to system registers */
>>
>> #define READ_SYSREG32(name) ((uint32_t)READ_SYSREG64(name))
>> diff --git a/xen/include/asm-arm/cpregs.h b/xen/include/asm-arm/cpregs.h
>> index 8fd344146e..58be898891 100644
>> --- a/xen/include/asm-arm/cpregs.h
>> +++ b/xen/include/asm-arm/cpregs.h
>> @@ -63,6 +63,7 @@
>> #define FPSID           p10,7,c0,c0,0   /* Floating-Point System ID Register */
>> #define FPSCR           p10,7,c1,c0,0   /* Floating-Point Status and Control Register */
>> #define MVFR0           p10,7,c7,c0,0   /* Media and VFP Feature Register 0 */
>> +#define MVFR1           p10,7,c6,c0,0   /* Media and VFP Feature Register 1 */
>> #define FPEXC           p10,7,c8,c0,0   /* Floating-Point Exception Control Register */
>> #define FPINST          p10,7,c9,c0,0   /* Floating-Point Instruction Register */
>> #define FPINST2         p10,7,c10,c0,0  /* Floating-point Instruction Register 2 */
>> @@ -108,18 +109,23 @@
>> #define MPIDR           p15,0,c0,c0,5   /* Multiprocessor Affinity Register */
>> #define ID_PFR0         p15,0,c0,c1,0   /* Processor Feature Register 0 */
>> #define ID_PFR1         p15,0,c0,c1,1   /* Processor Feature Register 1 */
>> +#define ID_PFR2         p15,0,c0,c3,4   /* Processor Feature Register 2 */
>> #define ID_DFR0         p15,0,c0,c1,2   /* Debug Feature Register 0 */
>> +#define ID_DFR1         p15,0,c0,c3,5   /* Debug Feature Register 1 */
>> #define ID_AFR0         p15,0,c0,c1,3   /* Auxiliary Feature Register 0 */
>> #define ID_MMFR0        p15,0,c0,c1,4   /* Memory Model Feature Register 0 */
>> #define ID_MMFR1        p15,0,c0,c1,5   /* Memory Model Feature Register 1 */
>> #define ID_MMFR2        p15,0,c0,c1,6   /* Memory Model Feature Register 2 */
>> #define ID_MMFR3        p15,0,c0,c1,7   /* Memory Model Feature Register 3 */
>> +#define ID_MMFR4        p15,0,c0,c2,6   /* Memory Model Feature Register 4 */
>> +#define ID_MMFR5        p15,0,c0,c3,6   /* Memory Model Feature Register 5 */
>> #define ID_ISAR0        p15,0,c0,c2,0   /* ISA Feature Register 0 */
>> #define ID_ISAR1        p15,0,c0,c2,1   /* ISA Feature Register 1 */
>> #define ID_ISAR2        p15,0,c0,c2,2   /* ISA Feature Register 2 */
>> #define ID_ISAR3        p15,0,c0,c2,3   /* ISA Feature Register 3 */
>> #define ID_ISAR4        p15,0,c0,c2,4   /* ISA Feature Register 4 */
>> #define ID_ISAR5        p15,0,c0,c2,5   /* ISA Feature Register 5 */
>> +#define ID_ISAR6        p15,0,c0,c2,7   /* ISA Feature Register 6 */
>> #define CCSIDR          p15,1,c0,c0,0   /* Cache Size ID Registers */
>> #define CLIDR           p15,1,c0,c0,1   /* Cache Level ID Register */
>> #define CSSELR          p15,2,c0,c0,0   /* Cache Size Selection Register */
>> @@ -312,18 +318,23 @@
>> #define HSTR_EL2                HSTR
>> #define ID_AFR0_EL1             ID_AFR0
>> #define ID_DFR0_EL1             ID_DFR0
>> +#define ID_DFR1_EL1             ID_DFR1
>> #define ID_ISAR0_EL1            ID_ISAR0
>> #define ID_ISAR1_EL1            ID_ISAR1
>> #define ID_ISAR2_EL1            ID_ISAR2
>> #define ID_ISAR3_EL1            ID_ISAR3
>> #define ID_ISAR4_EL1            ID_ISAR4
>> #define ID_ISAR5_EL1            ID_ISAR5
>> +#define ID_ISAR6_EL1            ID_ISAR6
>> #define ID_MMFR0_EL1            ID_MMFR0
>> #define ID_MMFR1_EL1            ID_MMFR1
>> #define ID_MMFR2_EL1            ID_MMFR2
>> #define ID_MMFR3_EL1            ID_MMFR3
>> +#define ID_MMFR4_EL1            ID_MMFR4
>> +#define ID_MMFR5_EL1            ID_MMFR5
>> #define ID_PFR0_EL1             ID_PFR0
>> #define ID_PFR1_EL1             ID_PFR1
>> +#define ID_PFR2_EL1             ID_PFR2
>> #define IFSR32_EL2              IFSR
>> #define MDCR_EL2                HDCR
>> #define MIDR_EL1                MIDR
>> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
>> index c7b5052992..64354c3f19 100644
>> --- a/xen/include/asm-arm/cpufeature.h
>> +++ b/xen/include/asm-arm/cpufeature.h
>> @@ -148,6 +148,7 @@ struct cpuinfo_arm {
>>     union {
>>         uint64_t bits[2];
>>         struct {
>> +            /* PFR0 */
>>             unsigned long el0:4;
>>             unsigned long el1:4;
>>             unsigned long el2:4;
>> @@ -155,9 +156,23 @@ struct cpuinfo_arm {
>>             unsigned long fp:4;   /* Floating Point */
>>             unsigned long simd:4; /* Advanced SIMD */
>>             unsigned long gic:4;  /* GIC support */
>> -            unsigned long __res0:28;
>> +            unsigned long ras:4;
>> +            unsigned long sve:4;
>> +            unsigned long sel2:4;
>> +            unsigned long mpam:4;
>> +            unsigned long amu:4;
>> +            unsigned long dit:4;
>> +            unsigned long __res0:4;
>>             unsigned long csv2:4;
>> -            unsigned long __res1:4;
>> +            unsigned long csv3:4;
>> +
>> +            /* PFR1 */
>> +            unsigned long bt:4;
>> +            unsigned long ssbs:4;
>> +            unsigned long mte:4;
>> +            unsigned long ras_frac:4;
>> +            unsigned long mpam_frac:4;
>> +            unsigned long __res1:44;
>>         };
>>     } pfr64;
>>
>> @@ -170,7 +185,7 @@ struct cpuinfo_arm {
>>     } aux64;
>>
>>     union {
>> -        uint64_t bits[2];
>> +        uint64_t bits[3];
>>         struct {
>>             unsigned long pa_range:4;
>>             unsigned long asid_bits:4;
>> @@ -190,6 +205,8 @@ struct cpuinfo_arm {
>>             unsigned long pan:4;
>>             unsigned long __res1:8;
>>             unsigned long __res2:32;
>> +
>> +            unsigned long __res3:64;
>>         };
>>     } mm64;
>>
>> @@ -197,6 +214,10 @@ struct cpuinfo_arm {
>>         uint64_t bits[2];
>>     } isa64;
>>
>> +    struct {
>> +        uint64_t bits[1];
>> +    } zfr64;
>> +
>> #endif
>>
>>     /*
>> @@ -204,25 +225,38 @@ struct cpuinfo_arm {
>>      * when running in 32-bit mode.
>>      */
>>     union {
>> -        uint32_t bits[2];
>> +        uint32_t bits[3];
>>         struct {
>> +            /* PFR0 */
>>             unsigned long arm:4;
>>             unsigned long thumb:4;
>>             unsigned long jazelle:4;
>>             unsigned long thumbee:4;
>> -            unsigned long __res0:16;
>> +            unsigned long csv2:4;
>> +            unsigned long amu:4;
>> +            unsigned long dit:4;
>> +            unsigned long ras:4;
>>
>> +            /* PFR1 */
>>             unsigned long progmodel:4;
>>             unsigned long security:4;
>>             unsigned long mprofile:4;
>>             unsigned long virt:4;
>>             unsigned long gentimer:4;
>> -            unsigned long __res1:12;
>> +            unsigned long sec_frac:4;
>> +            unsigned long virt_frac:4;
>> +            unsigned long gic:4;
>> +
>> +            /* PFR2 */
>> +            unsigned long csv3:4;
>> +            unsigned long ssbs:4;
>> +            unsigned long ras_frac:4;
>> +            unsigned long __res2:20;
>>         };
>>     } pfr32;
>>
>>     struct {
>> -        uint32_t bits[1];
>> +        uint32_t bits[2];
>>     } dbg32;
>>
>>     struct {
>> @@ -230,12 +264,23 @@ struct cpuinfo_arm {
>>     } aux32;
>>
>>     struct {
>> -        uint32_t bits[4];
>> +        uint32_t bits[6];
>>     } mm32;
>>
>>     struct {
>> -        uint32_t bits[6];
>> +        uint32_t bits[7];
>>     } isa32;
>> +
>> +#ifdef CONFIG_ARM_64
>> +    struct {
>> +        uint64_t bits[3];
>> +    } mvfr;
>> +#else
>> +    /* Only MVFR0 and MVFR1 exist on armv7 */
>> +    struct {
>> +        uint32_t bits[2];
>> +    } mvfr;
>> +#endif
>> };
>>
>> extern struct cpuinfo_arm boot_cpu_data;
>> --
>> 2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 17:36:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 17:36:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46899.83088 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmKR5-0001VT-Iy; Mon, 07 Dec 2020 17:36:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46899.83088; Mon, 07 Dec 2020 17:36:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmKR5-0001VM-Er; Mon, 07 Dec 2020 17:36:31 +0000
Received: by outflank-mailman (input) for mailman id 46899;
 Mon, 07 Dec 2020 17:36:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/Ca2=FL=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1kmKR4-0001VC-6g
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 17:36:30 +0000
Received: from mail-wm1-x342.google.com (unknown [2a00:1450:4864:20::342])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 45ef6e1b-efe0-49d5-9720-17dcc621cc72;
 Mon, 07 Dec 2020 17:36:29 +0000 (UTC)
Received: by mail-wm1-x342.google.com with SMTP id v14so9270wml.1
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 09:36:29 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 45ef6e1b-efe0-49d5-9720-17dcc621cc72
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=44+jLF9ikTDp4rQKpwAdEvxyTfg13sgfo2WXJoKpSGs=;
        b=gy4FeTkIEwm9rPiKrFw5XITuv2Nn2ynlfPJDZzvFkYnTh+RI6/Eo6rFn0eXPp7Apow
         FZU4mYFJ6tNUbSxIWwObkGk3kjAu+UILfS46MoKS8yHn/gRatsgS1w3XA/e6kLvEfXqw
         zynWUzqsMJA8C2b9CduItN5y4SeI6snGctPcLgBkvVigWCVl6J+X6el5kQ+kJVV+9lqq
         af2iGW/G7MGFZJ6jiuLfXcIj1RLS98BGimIkzBarIxL6chhdUJNkPPRoCivGeHr6TQOl
         +4vXnET/zloKnXyIqyZ4/h2XuYzGOeoxrOxNqbBaueSYpM7yk+ZEraAhXGKOf6XwBcfS
         fuMg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=44+jLF9ikTDp4rQKpwAdEvxyTfg13sgfo2WXJoKpSGs=;
        b=mosWabCcII410HCJH8DJZHvkpl9OUMskTXtxdQ2r8oXiPNXcSSeB8pATiQRmT3LuAP
         0NzuiIAS6AF+ZyF9/148FpWbuZpjkv4edQ5YHK7sOXWZJ8rrI46z0GpfAkcZ1DzTDct4
         Dk+9NDvwxmgpQGIyzSxkGsOXOeS0O+/CKQrNzB2NKXUn1q9QynSCYw0aFdRhDWJYCJrU
         UqUON0uAuTDY7Z1OJhISkEUMIpcJW9LRtHtib1GmzxgsUlWotq799oYzdZHGVw/wZnUs
         UNZje3XeKY7jPrwOdVScIaRAojcPvcdqRlUXukRRToRi3q9hKyDD64rgmthkxjsw2GA8
         3VkQ==
X-Gm-Message-State: AOAM530NDbY4X2SSdDzI1b24G9rC0FshyCMcEEZX9+fVr5eKK+ebgWO0
	SSRuLhWAt7gFso/+h0rSK8djx7P0iQcqMlNXfV0=
X-Google-Smtp-Source: ABdhPJwrteo10U0BkrPMW2DDXr7730Ns+a+ona8pG2RkgNuTYPsEjpHyVLzV4606kvyfOtzBLinVnR7sN8FC+WTAl04=
X-Received: by 2002:a1c:4e0a:: with SMTP id g10mr19690029wmh.51.1607362588389;
 Mon, 07 Dec 2020 09:36:28 -0800 (PST)
MIME-Version: 1.0
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d821c715-966a-b48b-a877-c5dac36822f0@suse.com> <17c90493-b438-fbc1-ca10-3bc4d89c4e5e@xen.org>
 <7a768bcd-80c1-d193-8796-7fb6720fa22a@suse.com> <1a8250f5-ea49-ac3a-e992-be7ec40deba9@xen.org>
 <CABfawhkQcUD4f62zpg0cyrdQgG82XtpYRZZ_-50hjagooT530A@mail.gmail.com>
 <5862eb24-d894-455a-13ac-61af54f949e7@xen.org> <CABfawhkWQiOhLL8f3NzoWbeuag-f+YOOK0i_LJzZq5Yvoh=oHQ@mail.gmail.com>
 <fd384990-376e-40f4-f0b8-1a889b3a0c51@suse.com> <9ee6016a-d3b3-c847-4775-0e05c8578110@xen.org>
In-Reply-To: <9ee6016a-d3b3-c847-4775-0e05c8578110@xen.org>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Mon, 7 Dec 2020 12:35:52 -0500
Message-ID: <CABfawhkcHX+FSRRfYwUNd8DweW04=91sSg2PTWy7vjq_DXwMQg@mail.gmail.com>
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Julien Grall <julien@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Tamas K Lengyel <lengyelt@ainfosec.com>, 
	Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>, Alexandru Isaila <aisaila@bitdefender.com>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Mon, Dec 7, 2020 at 12:30 PM Julien Grall <julien@xen.org> wrote:
>
> Hi Jan,
>
> On 07/12/2020 15:28, Jan Beulich wrote:
> > On 04.12.2020 20:15, Tamas K Lengyel wrote:
> >> On Fri, Dec 4, 2020 at 10:29 AM Julien Grall <julien@xen.org> wrote:
> >>> On 04/12/2020 15:21, Tamas K Lengyel wrote:
> >>>> On Fri, Dec 4, 2020 at 6:29 AM Julien Grall <julien@xen.org> wrote:
> >>>>> On 03/12/2020 10:09, Jan Beulich wrote:
> >>>>>> On 02.12.2020 22:10, Julien Grall wrote:
> >>>>>>> On 23/11/2020 13:30, Jan Beulich wrote:
> >>>>>>>> While there don't look to be any problems with this right now, the lock
> >>>>>>>> order implications from holding the lock can be very difficult to follow
> >>>>>>>> (and may be easy to violate unknowingly). The present callbacks don't
> >>>>>>>> (and no such callback should) have any need for the lock to be held.
> >>>>>>>>
> >>>>>>>> However, vm_event_disable() frees the structures used by respective
> >>>>>>>> callbacks and isn't otherwise synchronized with invocations of these
> >>>>>>>> callbacks, so maintain a count of in-progress calls, for evtchn_close()
> >>>>>>>> to wait to drop to zero before freeing the port (and dropping the lock).
> >>>>>>>
> >>>>>>> AFAICT, this callback is not the only place where the synchronization is
> >>>>>>> missing in the VM event code.
> >>>>>>>
> >>>>>>> For instance, vm_event_put_request() can also race against
> >>>>>>> vm_event_disable().
> >>>>>>>
> >>>>>>> So shouldn't we handle this issue properly in VM event?
> >>>>>>
> >>>>>> I suppose that's a question to the VM event folks rather than me?
> >>>>>
> >>>>> Yes. From my understanding of Tamas's e-mail, they are relying on the
> >>>>> monitoring software to do the right thing.
> >>>>>
> >>>>> I will refrain to comment on this approach. However, given the race is
> >>>>> much wider than the event channel, I would recommend to not add more
> >>>>> code in the event channel to deal with such problem.
> >>>>>
> >>>>> Instead, this should be fixed in the VM event code when someone has time
> >>>>> to harden the subsystem.
> >>>>
> >>>> I double-checked and the disable route is actually more robust, we
> >>>> don't just rely on the toolstack doing the right thing. The domain
> >>>> gets paused before any calls to vm_event_disable. So I don't think
> >>>> there is really a race-condition here.
> >>>
> >>> The code will *only* pause the monitored domain. I can see two issues:
> >>>      1) The toolstack is still sending event while destroy is happening.
> >>> This is the race discussed here.
> >>>      2) The implement of vm_event_put_request() suggests that it can be
> >>> called with not-current domain.
> >>>
> >>> I don't see how just pausing the monitored domain is enough here.
> >>
> >> Requests only get generated by the monitored domain. So if the domain
> >> is not running you won't get more of them. The toolstack can only send
> >> replies.
> >
> > Julien,
> >
> > does this change your view on the refcounting added by the patch
> > at the root of this sub-thread?
>
> I still think the code is at best fragile. One example I can find is:
>
>    -> guest_remove_page()
>      -> p2m_mem_paging_drop_page()
>       -> vm_event_put_request()
>
> guest_remove_page() is not always called on the current domain. So there
> is a possibility for vm_event_put_request() to happen on a foreign domain,
> in which case it wouldn't be protected by the current hypercall.
>
> Anyway, I don't think the refcounting should be part of the event
> channel without any idea on how this would fit in fixing the VM event race.

If the problematic patterns only appear with mem_paging I would
suggest just removing the mem_paging code completely. It's been
abandoned for several years now.

Tamas


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 17:39:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 17:39:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46906.83100 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmKUM-0001iw-64; Mon, 07 Dec 2020 17:39:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46906.83100; Mon, 07 Dec 2020 17:39:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmKUM-0001ip-2h; Mon, 07 Dec 2020 17:39:54 +0000
Received: by outflank-mailman (input) for mailman id 46906;
 Mon, 07 Dec 2020 17:39:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kmKUK-0001ik-Jh
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 17:39:52 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmKUG-0006hS-Q9; Mon, 07 Dec 2020 17:39:48 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmKUG-00056Z-IS; Mon, 07 Dec 2020 17:39:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=pfHGBUr1s9nQwpTrKDSjpf3MZkvL4xZtowvLk9HwFcw=; b=hkGUgDt9dH51LNsi6lq2qf49x7
	pXUJgxTFPOBs7u8UkJ842Jab6IV3x1uDxxaeSQMVhqhTZeIrhn/Bab1KzZRfNTOID5vwX9hEpEZzK
	T0mAUgOE7S37h9hEC6dLs4gefBuMn7swcyisb/neIBLxdd+6uGJU/0gYT3Ggixco2WJE=;
Subject: Re: [PATCH v2 8/8] xen/arm: Add support for SMMUv3 driver
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <de2101687020d18172a2b153f8977a5116d0cd66.1606406359.git.rahul.singh@arm.com>
 <a67bb114-a4a9-651a-338b-123b350ac4b3@xen.org>
 <9C890E87-D438-4232-8647-8EC64FF32C42@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <bb6a710e-4a7a-5db2-fece-b5845e06d092@xen.org>
Date: Mon, 7 Dec 2020 17:39:45 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <9C890E87-D438-4232-8647-8EC64FF32C42@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 07/12/2020 12:12, Rahul Singh wrote:
>>> +typedef paddr_t dma_addr_t;
>>> +typedef unsigned int gfp_t;
>>> +
>>> +#define platform_device device
>>> +
>>> +#define GFP_KERNEL 0
>>> +
>>> +/* Alias to Xen device tree helpers */
>>> +#define device_node dt_device_node
>>> +#define of_phandle_args dt_phandle_args
>>> +#define of_device_id dt_device_match
>>> +#define of_match_node dt_match_node
>>> +#define of_property_read_u32(np, pname, out) (!dt_property_read_u32(np, pname, out))
>>> +#define of_property_read_bool dt_property_read_bool
>>> +#define of_parse_phandle_with_args dt_parse_phandle_with_args
>>> +
>>> +/* Alias to Xen lock functions */
>>> +#define mutex spinlock
>>> +#define mutex_init spin_lock_init
>>> +#define mutex_lock spin_lock
>>> +#define mutex_unlock spin_unlock
>>
>> Hmm... mutex are not spinlock. Can you explain why this is fine to switch to spinlock?
> 
> Yes, mutexes are not spinlocks. As mutex is not implemented in Xen, I thought of using spinlock in place of mutex, as this is the only locking mechanism available in Xen.
> Let me know if there is another blocking lock available in Xen; I will check if we can use that.

There is no blocking lock available in Xen so far. However, if Linux 
was using a mutex instead of a spinlock, it likely means the 
operations in the critical section can be long-running.

How did you come to the conclusion that using a spinlock in the SMMU 
driver would be fine?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 18:07:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 18:07:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46912.83111 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmKvO-0004sC-Ep; Mon, 07 Dec 2020 18:07:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46912.83111; Mon, 07 Dec 2020 18:07:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmKvO-0004s5-Bq; Mon, 07 Dec 2020 18:07:50 +0000
Received: by outflank-mailman (input) for mailman id 46912;
 Mon, 07 Dec 2020 18:07:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KVsS=FL=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kmKvM-0004s0-BN
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 18:07:48 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3ac7b0a4-287c-43b6-922c-4baa9742175d;
 Mon, 07 Dec 2020 18:07:47 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 0B7I7UMW033564
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Mon, 7 Dec 2020 13:07:36 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 0B7I7TJq033563;
 Mon, 7 Dec 2020 10:07:29 -0800 (PST) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3ac7b0a4-287c-43b6-922c-4baa9742175d
Message-Id: <202012071807.0B7I7TJq033563@m5p.com>
From: Elliott Mitchell <ehem+xen@m5p.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien@xen.org>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Date: Mon, 7 Dec 2020 07:36:11 -0800
Subject: [RFC PATCH] xen/arm: domain_build: Ignore empty memory bank
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

Previously, Xen stopped processing a Device Tree if an empty
(size == 0) memory bank was found.

Commit 5a37207df52066efefe419c677b089a654d37afc changed this behavior to
ignore such banks.  Unfortunately this means these empty nodes are
visible to code which accesses the device trees.  Have domain_build also
ignore these entries.

Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>
---

Looking at this, I think the problem is larger than this one site and
really needs a proper solution closer to the core of the device-tree
code.  Either all device-tree handling code needs to be audited to
ignore zero-size entries, or the core should take care of them so that
nothing outside of xen/common/device_tree.c ever sees them (except
perhaps to confirm such entries exist as flags).  Notably, this is the
*second* location where zero-size device-tree entries need to be
ignored; action might be worthwhile before a third is confirmed.
---
 xen/arch/arm/domain_build.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index e824ba34b0..0b83384bd3 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1405,6 +1405,11 @@ static int __init handle_device(struct domain *d, struct dt_device_node *dev,
     {
         struct map_range_data mr_data = { .d = d, .p2mt = p2mt };
         res = dt_device_get_address(dev, i, &addr, &size);
+
+        /* Some DTs may describe empty banks; ignore them. */
+        if ( !size )
+            continue;
+
         if ( res )
         {
             printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
-- 


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445





From xen-devel-bounces@lists.xenproject.org Mon Dec 07 18:43:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 18:43:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46925.83124 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmLTa-0000T4-Df; Mon, 07 Dec 2020 18:43:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46925.83124; Mon, 07 Dec 2020 18:43:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmLTa-0000Sd-8t; Mon, 07 Dec 2020 18:43:10 +0000
Received: by outflank-mailman (input) for mailman id 46925;
 Mon, 07 Dec 2020 18:43:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ifa4=FL=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kmLTY-0000RY-MP
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 18:43:08 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.1.79]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e018ee22-3bd7-4ffd-a0e8-f1891747e665;
 Mon, 07 Dec 2020 18:43:05 +0000 (UTC)
Received: from AM6P192CA0043.EURP192.PROD.OUTLOOK.COM (2603:10a6:209:82::20)
 by DB6PR0801MB2117.eurprd08.prod.outlook.com (2603:10a6:4:2e::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.22; Mon, 7 Dec
 2020 18:43:03 +0000
Received: from VE1EUR03FT011.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:82:cafe::a0) by AM6P192CA0043.outlook.office365.com
 (2603:10a6:209:82::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.17 via Frontend
 Transport; Mon, 7 Dec 2020 18:43:03 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT011.mail.protection.outlook.com (10.152.18.134) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3632.17 via Frontend Transport; Mon, 7 Dec 2020 18:43:02 +0000
Received: ("Tessian outbound 665ba7fbdfd9:v71");
 Mon, 07 Dec 2020 18:43:02 +0000
Received: from 7f1830ab4f86.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 928EE7B1-5115-45B7-87B2-0C3DA6117048.1; 
 Mon, 07 Dec 2020 18:42:46 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 7f1830ab4f86.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 07 Dec 2020 18:42:46 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DBBPR08MB6170.eurprd08.prod.outlook.com (2603:10a6:10:200::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.17; Mon, 7 Dec
 2020 18:42:40 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::11cb:318b:f0a0:f125]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::11cb:318b:f0a0:f125%5]) with mapi id 15.20.3632.022; Mon, 7 Dec 2020
 18:42:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e018ee22-3bd7-4ffd-a0e8-f1891747e665
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=p01CoVpdQE0NVlpZRY0LQOE/3ZsylSlgSX9tml/Dv8w=;
 b=xjJqkxzv6OuW85O7ntYHaQixfihVxrvIdeB+h1prY0XsbzvtnGQEx1CBvbM19VUsjk02phEIWrQCPhu8ad3ou+3jJQG2vvAPQrccTae3I6/Yjm4aPrusbuqiJUAdO2MMvUm3O+QxT219mLhYx34Z1VaOCXm+YJoefdEupYfKFdk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: c72496df9b7893e3
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IhPdYSXKcuQJqiaOY6FDzFGUssWwIqtZXZ4IIpSu9/XRx97vQHHvxy9/tSF0y4+dLq6TTZIH0ZZ2J9gX1CVVKDH6AkyLYGhDMANTGJSjlqm080dfr0jmEJpeVRh99Ac8djtlowBxy9e2e/VEuZkC8iIjyNpgiXiFPm2VSCYcJZ/1naV71XahJ4prxvl+Z5x1yiW6hOlf6q55dkXOjiz6wjo9pjfzyRvPLEczvfaZT1m8RqsH8MvaJGmR19VVS9T4IvgQT7FJLBx7pUnNsDwn9UfaWZg/2OjJURrQjyaLU7V1ZnuwQNM1Il2wsyr1QtbViLNBY0odWkMgVBnZB8W0RQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=p01CoVpdQE0NVlpZRY0LQOE/3ZsylSlgSX9tml/Dv8w=;
 b=RSdxeB3y+1ER6EoEtryQ5byhJqogbtgjInLvrBPKGH01XrZ92TrsTOBFCVeZWlKBRHp96UnTkAj2tYkDvO4aesPxsEbWcemRwrQ7+9XQqrW16kJHVlB1ORHrNVaMIkc1w8QIqKbZ8GL/09yJNEaxSIx4kvqxAhTRSq2sTpYHHCd76H4hv/b2rvrhoy+YhQdKrWEXAP8vRu1wTQAb5BLYBbN37PUzu0fpdwoo27aAQtFKwB7uJstQFuI5G5mdYTOeccrpihLEIo067Js2i/ZDUgJyFkBmH6z2uayylygq+nbb07zNQuOOZucXy+2VwMeefmCuxQrTN7eWQvUxn/owVg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Paul Durrant
	<paul@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 8/8] xen/arm: Add support for SMMUv3 driver
Thread-Topic: [PATCH v2 8/8] xen/arm: Add support for SMMUv3 driver
Thread-Index: AQHWxBX/ttp8YMNqT0KunkbXuSldmKnkBpwAgAeVnoCAAFuAgIAAEZOA
Date: Mon, 7 Dec 2020 18:42:40 +0000
Message-ID: <9F9A955B-815C-4771-9EC0-073E9CF3E995@arm.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <de2101687020d18172a2b153f8977a5116d0cd66.1606406359.git.rahul.singh@arm.com>
 <a67bb114-a4a9-651a-338b-123b350ac4b3@xen.org>
 <9C890E87-D438-4232-8647-8EC64FF32C42@arm.com>
 <bb6a710e-4a7a-5db2-fece-b5845e06d092@xen.org>
In-Reply-To: <bb6a710e-4a7a-5db2-fece-b5845e06d092@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 08a26eb5-1d23-426b-b655-08d89adfedb2
x-ms-traffictypediagnostic: DBBPR08MB6170:|DB6PR0801MB2117:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DB6PR0801MB21171EA7F300919C70902DEEFCCE0@DB6PR0801MB2117.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <3AEDA2C7E61E7841803E19D2EED68B6B@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6170
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	b709f1db-8a60-4158-0f0d-08d89adfe036
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Dec 2020 18:43:02.7962
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 08a26eb5-1d23-426b-b655-08d89adfedb2
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB2117

Hello Julien,

> On 7 Dec 2020, at 5:39 pm, Julien Grall <julien@xen.org> wrote:
>
>
>
> On 07/12/2020 12:12, Rahul Singh wrote:
>>>> +typedef paddr_t dma_addr_t;
>>>> +typedef unsigned int gfp_t;
>>>> +
>>>> +#define platform_device device
>>>> +
>>>> +#define GFP_KERNEL 0
>>>> +
>>>> +/* Alias to Xen device tree helpers */
>>>> +#define device_node dt_device_node
>>>> +#define of_phandle_args dt_phandle_args
>>>> +#define of_device_id dt_device_match
>>>> +#define of_match_node dt_match_node
>>>> +#define of_property_read_u32(np, pname, out) (!dt_property_read_u32(np, pname, out))
>>>> +#define of_property_read_bool dt_property_read_bool
>>>> +#define of_parse_phandle_with_args dt_parse_phandle_with_args
>>>> +
>>>> +/* Alias to Xen lock functions */
>>>> +#define mutex spinlock
>>>> +#define mutex_init spin_lock_init
>>>> +#define mutex_lock spin_lock
>>>> +#define mutex_unlock spin_unlock
>>>
>>> Hmm... mutexes are not spinlocks. Can you explain why it is fine to
>>> switch to spinlocks?
>> Yes, mutexes are not spinlocks. As mutexes are not implemented in Xen,
>> I thought of using spinlocks in their place, since that is the only
>> locking mechanism available in Xen.
>> Let me know if there is another blocking lock available in Xen; I will
>> check whether we can use that.
>
> There are no blocking locks available in Xen so far. However, if Linux
> were using a mutex instead of a spinlock, it likely means the operations
> in the critical section can be long-running.

Yes, you are right: Linux uses a mutex when attaching a device to the SMMU,
as this operation might take a long time.
>
> How did you come to the conclusion that using spinlocks in the SMMU
> driver would be fine?

The mutex is replaced by a spinlock in the SMMU driver when there is a
request to assign a device to the guest. As we are in user context at that
time, it is OK to use a spinlock.
As per my understanding, the only scenario in which a CPU will spin is when
the user requests, at the same time, to assign another device to the SMMU,
and I think that is very rare.

Please suggest how we can proceed on this.
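To make the trade-off concrete, here is a toy model of why a spinning lock burns a CPU where a mutex would put the caller to sleep. (Python, purely illustrative — Xen's spinlocks are C and interact with preemption and interrupts, which this sketch does not capture.)

```python
import threading

class ToySpinLock:
    """Illustrative spinlock: acquire() busy-waits (burns CPU) until the
    holder releases, instead of blocking the caller the way a mutex would."""
    def __init__(self):
        self._cas = threading.Lock()  # only used for an atomic test-and-set
        self._held = False

    def acquire(self):
        while True:
            with self._cas:
                if not self._held:
                    self._held = True
                    return
            # Spin: keep retrying; a contending CPU makes no progress here,
            # for as long as the current holder keeps the lock.

    def release(self):
        with self._cas:
            self._held = False

def contend(lock, counter, n):
    """Increment a shared counter n times under the lock."""
    for _ in range(n):
        lock.acquire()
        counter[0] += 1
        lock.release()

def run(n_threads=4, n_iter=1000):
    lock, counter = ToySpinLock(), [0]
    threads = [threading.Thread(target=contend, args=(lock, counter, n_iter))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter[0]
```

If device attach can take a long time (as the Linux driver's use of a mutex suggests), every contending CPU spends that whole time in the spin loop — which is the cost being weighed in this thread.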

Regards
Rahul
>
> Cheers,
>
> --
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 19:06:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 19:06:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46923.83142 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmLq9-0002eh-Gk; Mon, 07 Dec 2020 19:06:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46923.83142; Mon, 07 Dec 2020 19:06:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmLq9-0002ea-DJ; Mon, 07 Dec 2020 19:06:29 +0000
Received: by outflank-mailman (input) for mailman id 46923;
 Mon, 07 Dec 2020 18:31:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KVsS=FL=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kmLI1-0007r5-As
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 18:31:13 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 37e468e2-ea19-4f08-b2b6-22dcff629aba;
 Mon, 07 Dec 2020 18:31:11 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 0B7IV1BS033689
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Mon, 7 Dec 2020 13:31:07 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 0B7IV133033688;
 Mon, 7 Dec 2020 10:31:01 -0800 (PST) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 37e468e2-ea19-4f08-b2b6-22dcff629aba
Message-Id: <202012071831.0B7IV133033688@m5p.com>
From: Elliott Mitchell <ehem+xenn@m5p.com>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: "Marek Marczykowski-Górecki" <marmarek@invisiblethingslab.com>
Date: Thu, 1 Oct 2020 15:19:33 -0700
Subject: [RFC PATCH] tools/python: Correct extension filenames for Python 3
X-Spam-Status: No, score=2.1 required=10.0 tests=DATE_IN_PAST_96_XX,
	KHOP_HELO_FCRDNS autolearn=no autolearn_force=no version=3.4.4
X-Spam-Level: **
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

Between Python 2 and 3, distutils really looks like it took two steps
forward and then two steps backward.

First, it broke the linking step off from the compile step.  Thus CC and
LDSHARED were separated, and thus CFLAGS and LDFLAGS were separated.  A
substantial step forward.  Yet then CFLAGS was appended to LDFLAGS,
meaning LDSHARED needed to be able to accept CFLAGS as arguments, thus
effectively reuniting them.

Second, distutils now includes the host type in the object-file area and
the full host triplet in the shared-object name.  As such, a filesystem
can now be shared by hosts with distinct architectures.  Great for mixed
environments.  Yet the build machine's architecture/triplet is assumed to
be the runtime architecture, so we end up with a workaround something
like the one below.

The current state of this seems like a pretty awful hack.  I'm making
assumptions about the architecture that I won't bet on holding.  This
was really a quick hack for Pry Mar to assist with the current situation.

A proper solution is needed.  This feels like a proof-of-concept, but
needs refinement before ending up in any tree.
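The renaming the workaround performs can be shown in isolation. This sketch mirrors the table and the str.replace() call in the patch's overridden get_ext_filename(); the environment-variable plumbing (XEN_COMPILE_ARCH, XEN_TARGET_ARCH) is passed in explicitly here rather than read from os.getenv().

```python
# Same table as in the patch: map XEN_TARGET_ARCH values to the arch
# name expected in the target's host triplet.
ARCH_MAP = {
    'x86_64': 'amd64',
    'x86_32': 'i386',
    'arm64':  'aarch64',
    'arm32':  'armel',
}

def remap_ext_filename(name, compile_arch, target_arch):
    """Rewrite a distutils extension filename built on `compile_arch`
    (the XEN_COMPILE_ARCH value) so it carries the arch name mapped
    from `target_arch` (the XEN_TARGET_ARCH value) instead."""
    return name.replace(compile_arch, ARCH_MAP[target_arch])
```

This makes the fragility visible: it is a plain substring replacement, so it silently does nothing if the build arch string never appears in the filename, and it assumes the mapped name is what the runtime's import machinery will look for.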

---
 tools/pygrub/setup.py | 16 +++++++++++++++-
 tools/python/setup.py | 16 +++++++++++++++-
 2 files changed, 30 insertions(+), 2 deletions(-)

diff --git a/tools/pygrub/setup.py b/tools/pygrub/setup.py
index b8f1dc4590..737f97d679 100644
--- a/tools/pygrub/setup.py
+++ b/tools/pygrub/setup.py
@@ -7,6 +7,19 @@ extra_compile_args  = [ "-fno-strict-aliasing", "-Werror" ]
 
 XEN_ROOT = "../.."
 
+from distutils import command
+import distutils.command.build_ext
+class BuildExtArch(distutils.command.build_ext.build_ext):
+	arch_map = {
+		'x86_64':	'amd64',
+		'x86_32':	'i386',
+		'arm64':	'aarch64',
+		'arm32':	'armel',
+	}
+	def get_ext_filename(self, ext_name):
+		name = super().get_ext_filename(ext_name)
+		return name.replace(os.getenv("XEN_COMPILE_ARCH"), self.arch_map[os.getenv("XEN_TARGET_ARCH")])
+
 xenfsimage = Extension("xenfsimage",
     extra_compile_args = extra_compile_args,
     include_dirs = [ XEN_ROOT + "/tools/libfsimage/common/" ],
@@ -25,5 +38,6 @@ setup(name='pygrub',
       package_dir={'grub': 'src', 'fsimage': 'src'},
       scripts = ["src/pygrub"],
       packages=pkgs,
-      ext_modules = [ xenfsimage ]
+      ext_modules = [ xenfsimage ],
+      cmdclass = {'build_ext': BuildExtArch},
       )
diff --git a/tools/python/setup.py b/tools/python/setup.py
index 8c95db7769..4d761f9360 100644
--- a/tools/python/setup.py
+++ b/tools/python/setup.py
@@ -17,6 +17,19 @@ PATH_LIBXENCTRL = XEN_ROOT + "/tools/libs/ctrl"
 PATH_LIBXENGUEST = XEN_ROOT + "/tools/libs/guest"
 PATH_XENSTORE = XEN_ROOT + "/tools/libs/store"
 
+from distutils import command
+import distutils.command.build_ext
+class BuildExtArch(distutils.command.build_ext.build_ext):
+	arch_map = {
+		'x86_64':	'amd64',
+		'x86_32':	'i386',
+		'arm64':	'aarch64',
+		'arm32':	'armel',
+	}
+	def get_ext_filename(self, ext_name):
+		name = super().get_ext_filename(ext_name)
+		return name.replace(os.getenv("XEN_COMPILE_ARCH"), self.arch_map[os.getenv("XEN_TARGET_ARCH")])
+
 xc = Extension("xc",
                extra_compile_args = extra_compile_args,
                include_dirs       = [ PATH_XEN,
@@ -51,5 +64,6 @@ setup(name            = 'xen',
                          'xen.lowlevel',
                         ],
       ext_package = "xen.lowlevel",
-      ext_modules = modules
+      ext_modules = modules,
+      cmdclass = {'build_ext': BuildExtArch},
       )
-- 


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445





From xen-devel-bounces@lists.xenproject.org Mon Dec 07 19:13:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 19:13:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46948.83154 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmLwg-0003is-8o; Mon, 07 Dec 2020 19:13:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46948.83154; Mon, 07 Dec 2020 19:13:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmLwg-0003il-4q; Mon, 07 Dec 2020 19:13:14 +0000
Received: by outflank-mailman (input) for mailman id 46948;
 Mon, 07 Dec 2020 19:13:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e0y+=FL=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kmLwe-0003ig-87
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 19:13:12 +0000
Received: from mail-wr1-x434.google.com (unknown [2a00:1450:4864:20::434])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a03cd1b4-4776-483c-b591-2348ce19a3de;
 Mon, 07 Dec 2020 19:13:10 +0000 (UTC)
Received: by mail-wr1-x434.google.com with SMTP id i2so13881925wrs.4
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 11:13:10 -0800 (PST)
Received: from CBGR90WXYV0 (host86-183-162-145.range86-183.btcentralplus.com.
 [86.183.162.145])
 by smtp.gmail.com with ESMTPSA id u10sm195629wmd.43.2020.12.07.11.13.08
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 07 Dec 2020 11:13:09 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a03cd1b4-4776-483c-b591-2348ce19a3de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=mKImpMHefcGMjwuZymDE86va3HkD31kS1uQWZsITmds=;
        b=rfT/Ep6s4v0B1xbYfnA75do4KB6pJ0V7gW8HHP6HqNT4+mv6BVoIp8qJUmxkMdEpc7
         XqssnJYS2TtqWZJTIHyUHXEyPoZ9cQBrmCUxb4TMkU2JyK5ckegjowPLj6fjxRVvM8IX
         GAj1u7TvJBeXLrHSA355NYJXrAQiilhsfrBkEVyG53NEv6cKtrK0ABE4f6cbMl6B4PiI
         8Apz9wZIB0gAHHbS5VlsLCK0raQEgkEkr+b3JkmYWt7h7uhD4p8hCWsV5l/GAMUG+9Fo
         ghJjtwyFYmpHDJR6g1kn05Qg43sjiiXBmpAKZX7XUkWgYGyG8iZkzVKFS1kE/8Pk9LuX
         YZOQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=mKImpMHefcGMjwuZymDE86va3HkD31kS1uQWZsITmds=;
        b=nxniQ3knFQK5EWoqGGyR+P6jaMSMK6nSa9Nbw4G78ZQ8ysVxwX02T8uxWpBhrlolf7
         /8Keg3L6SdhkWQPa6NHEgArfctLCjRwoWbH33ePOeTNB9ha9i0SEf1hudTomBJCpL3u6
         uWhNkBnvvPQCst8LffzGaersvRU/7qrc/US5j9xOrbEsNMH/r4kCTqcEhSY7CKQxQeyN
         +59usKBwZWNOxfBiRpFFBVfmrhxTODsZidax5XYYfNMria1IlqN0bEE1ONQ8ZBAGZyK7
         X4K/S1KTJRgYK+XAnAX+DnKv4HepNC+gUvQtyEJX4cYRqWubs4P5FEDPUBRdw+AfL2so
         E4fg==
X-Gm-Message-State: AOAM533Ld5nR0031Oi7MoNUvd1YYPSFE+ffsTS9Mbu6fLZkcGi4FAq4Q
	iGnakDaTgELMYO+hfSZpKUE=
X-Google-Smtp-Source: ABdhPJxUNUMRg4txWxb/8/bsuceumcDwSZaTvlnQGXtEc1QkLnwk35GphjW3ed+ojEDRta9qcPAkjg==
X-Received: by 2002:adf:fa4a:: with SMTP id y10mr21526240wrr.122.1607368389799;
        Mon, 07 Dec 2020 11:13:09 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Wei Liu'" <wl@xen.org>
Cc: <xen-devel@lists.xenproject.org>,
	"'Paul Durrant'" <pdurrant@amazon.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Anthony PERARD'" <anthony.perard@citrix.com>
References: <20201203142534.4017-1-paul@xen.org> <20201203142534.4017-19-paul@xen.org> <20201204113413.iebyf2ldzq6rfpsg@liuwe-devbox-debian-v2>
In-Reply-To: <20201204113413.iebyf2ldzq6rfpsg@liuwe-devbox-debian-v2>
Subject: RE: [PATCH v5 18/23] libxlu: introduce xlu_pci_parse_spec_string()
Date: Mon, 7 Dec 2020 19:13:08 -0000
Message-ID: <0d2e01d6cccc$ffe23da0$ffa6b8e0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQLNnXLs2pPLw4H6lzH6w2mINWFu0wIe2fTUAq/cxhCn1+ivcA==

> -----Original Message-----
> From: Wei Liu <wl@xen.org>
> Sent: 04 December 2020 11:34
> To: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org; Paul Durrant <pdurrant@amazon.com>; Ian Jackson
> <iwj@xenproject.org>; Wei Liu <wl@xen.org>; Anthony PERARD <anthony.perard@citrix.com>
> Subject: Re: [PATCH v5 18/23] libxlu: introduce xlu_pci_parse_spec_string()
> 
> On Thu, Dec 03, 2020 at 02:25:29PM +0000, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > This patch largely re-writes the code to parse a PCI_SPEC_STRING and enters
> > it via the newly introduced function. The new parser also deals with 'bdf'
> > and 'vslot' as non-positional parameters, as per the documentation in
> > xl-pci-configuration(5).
> >
> > The existing xlu_pci_parse_bdf() function remains, but now strictly parses
> > BDF values. Some existing callers of xlu_pci_parse_bdf() are
> > modified to call xlu_pci_parse_spec_string() as per the documentation in xl(1).
> >
> > NOTE: Usage text in xl_cmdtable.c and error messages are also modified
> >       appropriately.
> >
> > Fixes: d25cc3ec93eb ("libxl: workaround gcc 10.2 maybe-uninitialized warning")
> 
> I don't think d25cc3ec93eb is buggy, so this tag is not needed.

It is. If you supply '*' for all functions then vfunc_mask is set but func
is not... so you then hit the assertion at the end of xlu_pci_parse_bdf().
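A minimal model of the case being described — the field names (func, vfunc_mask) come from the discussion above, but the parsing logic and mask values here are hypothetical, not libxlu's actual code. The point is that a '*' function wildcard must leave both the wildcard mask *and* func with defined values, so that no later consistency assertion trips over an unset func:

```python
def parse_func(func_str):
    """Hypothetical sketch of parsing the function part of a BDF:
    '*' means all functions, so set the wildcard mask AND give func a
    well-defined value (0 here), rather than leaving it unset."""
    if func_str == '*':
        return {'vfunc_mask': 0xff, 'func': 0}  # one bit per PCI function 0-7
    return {'vfunc_mask': 0, 'func': int(func_str, 16)}
```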

> 
> > Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> 
> Acked-by: Wei Liu <wl@xen.org>

Thanks,

  Paul



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 19:44:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 19:44:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46965.83165 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmMQW-0006pt-OP; Mon, 07 Dec 2020 19:44:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46965.83165; Mon, 07 Dec 2020 19:44:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmMQW-0006pm-Ka; Mon, 07 Dec 2020 19:44:04 +0000
Received: by outflank-mailman (input) for mailman id 46965;
 Mon, 07 Dec 2020 19:44:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=exIY=FL=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kmMQW-0006ph-6X
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 19:44:04 +0000
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7cc4651e-e410-4240-ab33-d5746eea059b;
 Mon, 07 Dec 2020 19:44:03 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id a9so19790928lfh.2
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 11:44:03 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id f27sm2948599lfq.188.2020.12.07.11.44.00
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 07 Dec 2020 11:44:01 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7cc4651e-e410-4240-ab33-d5746eea059b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=oOFrFJgwhDSVVW1x6xs5kZDbwNY0kgio3YF5vXX7VTM=;
        b=I2ddpwdVV+NDWK7Gan5jGmGFIhQX6QNOUxGzhqI430+C2nMKBTWAWWR6BGWuOLNnR2
         z0q/dSlF4DOWfGXhP3eah9oYaI1R2epnzmNrMTjuifdjvi04LlKkyrUslBffISdaS/bA
         KDJVTG8ulMfNKirnhHrvjofE6KHNWT5MVFGDrRm/MmjTysamtnyo4gZNsr9k9sEIJzwz
         aeoVmJSR6NXte5I2+kRm9o0WSXUb0QJNo54GrWz8PaiBBcNw80E6xBrs9Lr9HVlx2964
         29Ja7F0GRCVd7NSdbyBwVp4Ub0b0n078hOOaEHmABR3vxYPD51wnXCqIo6yB3QxMBCN9
         gJAQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=oOFrFJgwhDSVVW1x6xs5kZDbwNY0kgio3YF5vXX7VTM=;
        b=h0LZxttZDvMqez0gx4CHkzWxJDf+eQd8o1xjZYzyWhgHSNYodORsvmqv5FKtesoBu5
         PuN7w/9QVhsMvV6AltMQmXzuaxCmJQLkM+37LMOt19qTTkk0TTVUmoxtPo3mEQ7dzvFB
         2WZpUmC5EZwV6MvXmsd9rR6w9K6l8x3Iq+d76JQbMc+A41Frn7ecLCGDjpYimrfZVecJ
         GxlOgpLCIXFfkepYYMa2qxRyyC4tJ0GKl3yTbVRzTWGfAIWC/SpdGaD6JlfrtXYRMj4B
         HbDkQrYdnmrVbVcVxRyORwL89SwODbA2UYb17qf7FAeOW1QzS20GmFi5cmjcRoTy7nuR
         iUxA==
X-Gm-Message-State: AOAM533bSIKdGV9fAW+nWJXyMvMEZ7mSDm4j42BVXXVMUVxs9k+AMmZF
	7R9F5Qw6S573pY20iUSDd96yNbEoshpbaw==
X-Google-Smtp-Source: ABdhPJzsEluSllECrjvXudw0KliXpu5hqU4UmBOSoMXDdKDVaMuvwJICX8d/KCJWYq7V9hzYo1zq/Q==
X-Received: by 2002:a05:6512:3243:: with SMTP id c3mr8713331lfr.371.1607370241928;
        Mon, 07 Dec 2020 11:44:01 -0800 (PST)
Subject: Re: [PATCH V3 04/23] xen/ioreq: Make x86's IOREQ feature common
To: Jan Beulich <jbeulich@suse.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Paul Durrant <paul@xen.org>, Tim Deegan
 <tim@xen.org>, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-5-git-send-email-olekstysh@gmail.com>
 <d1fdebe9-3355-fece-e9dc-e6a7acc180e7@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <4a82d6f3-6b6c-566a-6ad0-36e22df323fa@gmail.com>
Date: Mon, 7 Dec 2020 21:43:55 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <d1fdebe9-3355-fece-e9dc-e6a7acc180e7@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 07.12.20 13:41, Jan Beulich wrote:

Hi Jan

> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>> --- a/xen/include/asm-x86/hvm/ioreq.h
>> +++ b/xen/include/asm-x86/hvm/ioreq.h
>> @@ -19,8 +19,7 @@
>>   #ifndef __ASM_X86_HVM_IOREQ_H__
>>   #define __ASM_X86_HVM_IOREQ_H__
>>   
>> -#define HANDLE_BUFIOREQ(s) \
>> -    ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
>> +#include <xen/ioreq.h>
> Is there a strict need to do it this way round? Usually the common
> header would include the arch one ...
The reason was to spare the bunch of x86 files (which have included
asm/hvm/ioreq.h so far) from the IOREQ interface moving location, and as
a result to limit the number of files which needed touching. If the
common rule is the other way around, I will follow it.
So I will change the common header to include the arch one. Or I may even
include the arch header only where it is required (common ioreq.c right
now, and Arm io.c in future).
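A minimal sketch of the conventional nesting being suggested (file names
from the patch; the contents shown are only the one macro under
discussion, everything else elided):

```c
/* xen/ioreq.h (common header): pulls in the arch-specific bits, so
 * common code and callers only ever include <xen/ioreq.h>. */
#ifndef __XEN_IOREQ_H__
#define __XEN_IOREQ_H__

#include <asm/hvm/ioreq.h>   /* arch header included underneath */

#define HANDLE_BUFIOREQ(s) \
    ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)

#endif /* __XEN_IOREQ_H__ */
```

With this direction the x86 files that used to include asm/hvm/ioreq.h
directly still need touching, which is the churn I was trying to avoid.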


>> @@ -38,42 +37,6 @@ int arch_ioreq_server_get_type_addr(const struct domain *d,
>>                                       uint64_t *addr);
>>   void arch_ioreq_domain_init(struct domain *d);
> As already mentioned in an earlier reply: What about these? They
> shouldn't get declared once per arch. If anything, ones that
> want to be inline functions can / should remain in the per-arch
> header.
I don't entirely get the suggestion. Is it to make the "simple" ones
inline? Why not; there are a few which probably want to be inline,
such as the following, for example:
- arch_ioreq_domain_init
- arch_ioreq_server_destroy
- arch_ioreq_server_destroy_all
- arch_ioreq_server_map_mem_type (probably)


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 19:52:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 19:52:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46975.83178 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmMYx-0007tn-KZ; Mon, 07 Dec 2020 19:52:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46975.83178; Mon, 07 Dec 2020 19:52:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmMYx-0007tg-HE; Mon, 07 Dec 2020 19:52:47 +0000
Received: by outflank-mailman (input) for mailman id 46975;
 Mon, 07 Dec 2020 19:52:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=exIY=FL=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kmMYw-0007tb-CS
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 19:52:46 +0000
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1d1e08ff-4389-4e5a-9dcd-69000875d004;
 Mon, 07 Dec 2020 19:52:45 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id y16so16404094ljk.1
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 11:52:45 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id q23sm569246lfo.278.2020.12.07.11.52.43
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 07 Dec 2020 11:52:43 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d1e08ff-4389-4e5a-9dcd-69000875d004
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=vWT48cVrnZI5gtATTM2YkZ0QmJVC2e6rTy/UJQbYmPY=;
        b=HnVupPQmXmQdpWXmW8R3X6DNGnYz7hxO0B7EtMdw8PzIAeW5W9jbOdc1+zm5h6wRHq
         C/UWP4IjCwZrJlrhU767mXc2JGh9Wmz9690k0g220T1VNl3qrfoADRmjnEQuXxBxAbIp
         yTHVzry7CB92bnnP7Pd6pkE+pZKjqr6uayyEjOxIyPOawFeDLhBquBJ3dYHq4B41pnOM
         kc4E+lou1oQ59k9/u/HgkqsnyUgCi+mf7PuMsvLBwVotqt2hzlxCRGRJIexfWiq5/onK
         +QRb8AnrUSGVOp4XLeJUzcY6HfWw0gaOpnenzzrH2Yp5QJSRJiQcKalsOmpYO1BBByLN
         z+DA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=vWT48cVrnZI5gtATTM2YkZ0QmJVC2e6rTy/UJQbYmPY=;
        b=bEk8IXqYzDwL8n29N3a8LRdP1u13HtIqnYrtvA9ZRi9NptAD4pgRt6JB088kc/CBbx
         e5a/9aiO5DwZB7E40Ji4tz8hsmtF4M4UnyTCFD8DiJk5AuqslDbJ/JfDYGbAHt3w4dmR
         nTPP0CwPk5uEpg3TtL/rVt6Ber7W5IYrYEigVgtxVsDmHvxSNX0tHcmP6gnjc0+uhTsb
         cipQKihZEMmFbBrgPKqA0U456RWUFhRF/tszoj6i7XtUQ3XlE8DKj915a089hPvziYN5
         GVB8kk61DGYs74Zm/u3MivjcrT8NpzVKLZT9I00Zv0uQLdpy7SOAxIyGfnCeGQWxxAdu
         ojCA==
X-Gm-Message-State: AOAM532Z6o074WtOIBzNIl2G/SM2k13zd65IA6Af4bQVAAytppPb3UgN
	O2sVlRreTlKo/t8xByWOgIY2NdtGNyvdZQ==
X-Google-Smtp-Source: ABdhPJxkWalchd4pid70bGfIyUozTCiDn8jUgASfLLKee/UqfhIJQyxy8HIL7AE068nRDHEHGtl3bg==
X-Received: by 2002:a05:651c:1033:: with SMTP id w19mr9535774ljm.55.1607370764153;
        Mon, 07 Dec 2020 11:52:44 -0800 (PST)
Subject: Re: [PATCH V3 08/23] xen/ioreq: Move x86's ioreq_server to struct
 domain
To: Jan Beulich <jbeulich@suse.com>
Cc: Paul Durrant <paul@xen.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-9-git-send-email-olekstysh@gmail.com>
 <5b5011b7-5b8f-79cd-d8dc-c276ba1f9e37@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <b04fc121-37bf-8dd1-4204-e6732b98afc3@gmail.com>
Date: Mon, 7 Dec 2020 21:52:42 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <5b5011b7-5b8f-79cd-d8dc-c276ba1f9e37@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 07.12.20 14:04, Jan Beulich wrote:

Hi Jan

> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> The IOREQ is a common feature now and this struct will be used
>> on Arm as is. Move it to common struct domain. This also
>> significantly reduces the layering violation in the common code
>> (*arch.hvm* usage).
>>
>> We don't move ioreq_gfn since it is not used in the common code
>> (the "legacy" mechanism is x86 specific).
>>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> Applicable parts
> Acked-by: Jan Beulich <jbeulich@suse.com>

Thank you.


> yet with a question, but maybe more to Paul than to you:
>
>> --- a/xen/include/asm-x86/hvm/domain.h
>> +++ b/xen/include/asm-x86/hvm/domain.h
>> @@ -63,8 +63,6 @@ struct hvm_pi_ops {
>>       void (*vcpu_block)(struct vcpu *);
>>   };
>>   
>> -#define MAX_NR_IOREQ_SERVERS 8
>> -
>>   struct hvm_domain {
>>       /* Guest page range used for non-default ioreq servers */
>>       struct {
>> @@ -73,12 +71,6 @@ struct hvm_domain {
>>           unsigned long legacy_mask; /* indexed by HVM param number */
>>       } ioreq_gfn;
>>   
>> -    /* Lock protects all other values in the sub-struct and the default */
>> -    struct {
>> -        spinlock_t              lock;
>> -        struct ioreq_server *server[MAX_NR_IOREQ_SERVERS];
>> -    } ioreq_server;
>> -
>>       /* Cached CF8 for guest PCI config cycles */
>>       uint32_t                pci_cf8;
>>   
>> --- a/xen/include/xen/sched.h
>> +++ b/xen/include/xen/sched.h
>> @@ -316,6 +316,8 @@ struct sched_unit {
>>   
>>   struct evtchn_port_ops;
>>   
>> +#define MAX_NR_IOREQ_SERVERS 8
>> +
>>   struct domain
>>   {
>>       domid_t          domain_id;
>> @@ -523,6 +525,14 @@ struct domain
>>       /* Argo interdomain communication support */
>>       struct argo_domain *argo;
>>   #endif
>> +
>> +#ifdef CONFIG_IOREQ_SERVER
>> +    /* Lock protects all other values in the sub-struct and the default */
>> +    struct {
>> +        spinlock_t              lock;
>> +        struct ioreq_server     *server[MAX_NR_IOREQ_SERVERS];
>> +    } ioreq_server;
>> +#endif
> The comment gets merely moved, but what "default" does it talk about?
> Is this a stale part which would better be dropped at this occasion?

I saw Paul's answer; I will drop the stale part.


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 20:24:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 20:24:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46985.83190 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmN37-0002gB-0x; Mon, 07 Dec 2020 20:23:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46985.83190; Mon, 07 Dec 2020 20:23:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmN36-0002g4-TJ; Mon, 07 Dec 2020 20:23:56 +0000
Received: by outflank-mailman (input) for mailman id 46985;
 Mon, 07 Dec 2020 20:23:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=exIY=FL=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kmN35-0002fz-8l
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 20:23:55 +0000
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1fbe4344-8812-452f-ab76-f557e40d9f4e;
 Mon, 07 Dec 2020 20:23:54 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id y22so1655568ljn.9
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 12:23:54 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id e26sm3048339lfj.21.2020.12.07.12.23.51
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 07 Dec 2020 12:23:51 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1fbe4344-8812-452f-ab76-f557e40d9f4e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=C1/GBRlh9ccv1IKSp+8HLv9M0+fWlJEJY01lCU5M4Zk=;
        b=Pd/qEYVRlcc5xxet2eemL5R5l8pRN5QyHcFl2UfNBs4LvPxgYVCyu7i9zYzmdM+QA8
         MWc94AkgJ+dFOGSjORphVIPG81J/1QzkrfRbd1hJAi3NbJT5oCH+HdRKHEXe/jpp+E2E
         O4pIag04zCq8eAV6aX3OlDSBfmpqc1xgmywKaDH3x1bubx9CZdOEzWQ8phERWnyOQ9E7
         mSU3G5fDCSCxuNSJ2glwz3W8owDwLgduSLM1jnsbXr0eIZ3lLIcRcav/u9NwTf/7VZNN
         OXMn76Mm4mDwd/3U6s5hN0KCtjFUmi8cOjsWrsI0ahV00ITNLQlrzz9GtnFStoRrF3eg
         UwNg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=C1/GBRlh9ccv1IKSp+8HLv9M0+fWlJEJY01lCU5M4Zk=;
        b=EElTXkYizhJEV8H67DhkC0vQmzWjd5be+egJoNTyjmZNOgg9YK+OD/CA/9ylRn/iZP
         R7/jlWwQtjHUlrCyMyND5U0uc0InHIJUKgj/2xC39T6T4kxMtn/t5ODlwUvZW88X8d3o
         6Ar0mT2gt567sqz2XmCUt9K2oDTmAOk7vyqgbsY6L6hWhZRQovKWo+yOwZFxaKQUNVV8
         LgrCOpskCuAQ7bfjEhWLMmcBkt/AVNFtWNpfiUI0+IjSjQLV9nbvH3ErzZhJz0lTOa/u
         RmFVFRgQvm36JR+vGN9kwVUr7QB3GAqlAu9J/x+ZXTpBqKyaisxWrDk59XdFPSJO/YRS
         PEoA==
X-Gm-Message-State: AOAM533HDHb3fW2w+5w3Odi25n55nL+8kNsKfzIzSBunjEpADut9O4I/
	m5GSHNo/l55EiNndIEiaiWTj7sr+rByL0Q==
X-Google-Smtp-Source: ABdhPJxftBqeGVHRAuKApX+C26T5++dOHXKDWOBBYWWpJg3g2a+AYkv5MOUwa9XxG6agbIHy9FNTfw==
X-Received: by 2002:a05:651c:503:: with SMTP id o3mr9022385ljp.253.1607372632707;
        Mon, 07 Dec 2020 12:23:52 -0800 (PST)
Subject: Re: [PATCH V3 09/23] xen/dm: Make x86's DM feature common
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien.grall@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-10-git-send-email-olekstysh@gmail.com>
 <00c3df9f-760d-bb3d-d1d6-7c7df7f0c17c@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <24191fca-78e7-3e6b-ff02-c06e8ae79f56@gmail.com>
Date: Mon, 7 Dec 2020 22:23:45 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <00c3df9f-760d-bb3d-d1d6-7c7df7f0c17c@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 07.12.20 14:08, Jan Beulich wrote:

Hi Jan.

> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>> From: Julien Grall <julien.grall@arm.com>
>>
>> As a lot of x86 code can be re-used on Arm later on, this patch
>> splits devicemodel support into common and arch specific parts.
>>
>> The common DM feature is supposed to be built with IOREQ_SERVER
>> option enabled (as well as the IOREQ feature), which is selected
>> for x86's config HVM for now.
>>
>> Also update XSM code a bit to let DM op be used on Arm.
>>
>> This support is going to be used on Arm to be able run device
>> emulator outside of Xen hypervisor.
>>
>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> ---
>> Please note, this is a split/cleanup/hardening of Julien's PoC:
>> "Add support for Guest IO forwarding to a device emulator"
>>
>> Changes RFC -> V1:
>>     - update XSM, related changes were pulled from:
>>       [RFC PATCH V1 04/12] xen/arm: Introduce arch specific bits for IOREQ/DM features
>>
>> Changes V1 -> V2:
>>     - update the author of a patch
>>     - update patch description
>>     - introduce xen/dm.h and move definitions here
>>
>> Changes V2 -> V3:
>>     - no changes
> And my concern regarding the common vs arch nesting also hasn't
> changed.


I am sorry, I might have misread your comment, but I failed to see any 
request for changes that was obvious to me.
I have just re-read the previous discussion...
So the question about considering doing it the other way around (making 
the top-level dm-op handling arch-specific
and having it call into e.g. ioreq_server_dm_op() for otherwise 
unhandled ops) is exactly the concern I should have addressed?

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 20:28:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 20:28:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.46993.83202 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmN78-0002qM-I1; Mon, 07 Dec 2020 20:28:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 46993.83202; Mon, 07 Dec 2020 20:28:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmN78-0002qF-Et; Mon, 07 Dec 2020 20:28:06 +0000
Received: by outflank-mailman (input) for mailman id 46993;
 Mon, 07 Dec 2020 20:28:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=exIY=FL=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kmN76-0002qA-T5
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 20:28:04 +0000
Received: from mail-lj1-x22b.google.com (unknown [2a00:1450:4864:20::22b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0c9ac47d-5a38-4fee-9d5c-be253467f0fc;
 Mon, 07 Dec 2020 20:28:04 +0000 (UTC)
Received: by mail-lj1-x22b.google.com with SMTP id f11so4275035ljm.8
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 12:28:04 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id z20sm3102567ljh.86.2020.12.07.12.28.01
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 07 Dec 2020 12:28:02 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0c9ac47d-5a38-4fee-9d5c-be253467f0fc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=71qlpQ3/OFvfHBS/kNYeBBOEh5LTxZRYxfVwm9MDmn8=;
        b=KnpLS7LLeNsCUY6gDiyVFj/qP4tI7z25+YaxkskJK/opwl2IKB7gxkLQLPhXbFagsz
         F+W+VDri0ZmoUwJMB3VX/w2O1nb+G8GAWTKDbt4PhJpXU7ywPM4YcQ3DTY8L80mTM0R0
         RZpzn3UFKL6SewkD2o4vSdrtJZJryAMMYN+O488z2q4NBdcRbuAqykxZNuXcVMH0ocmH
         2XVcl5kUgcLWZ/SFgRL0P5sduxdU8iJuteIT2f1J05m9yraEabyZ/UMv2nBokLI2bytc
         etRfeInwctpIx23bDZ1Cf48CJsHczDOPyWpLlik+wOy1tgBGL4ibxVjQxRGhcASET2HE
         0Crw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=71qlpQ3/OFvfHBS/kNYeBBOEh5LTxZRYxfVwm9MDmn8=;
        b=qQ3KCtF93gb75cW1yOJKBssC8s/KS0SrTLem1uw5JwYDnD3860NulBcMjg5/0XvwSi
         EEXS74mOUpDvENHHFPtoBpnpFyhPPaHm1szpKnZ+tfF0ry/7cbejE8kH5HFqkuffNzE0
         9sVRmkzZzaP4eoRvcJQEsixVSEJR/i7y6Fxo8vyiAoG43yCJqtdmvM9TOMgRY//vsj3o
         pO5iFie7Em6UBwI+GUJBA4gKFr9Br8MZpEzLThA9xVN4pRmMEJSGlKmxddgh+ck+6+oS
         3LKEyqDp/vqkteGuxEodndR2T27FUZP4VEwhIjH7qzLNJLJ9gprOVPqAJ+NKU5g+204+
         bZjg==
X-Gm-Message-State: AOAM532ELMGsqv0V02Ub3X03b2cwgMk3g0PxeOTAF6q6+O58JPnpVxPn
	NDmoB40uEoNMg2AZh4lkQLXJ5PqB619BBw==
X-Google-Smtp-Source: ABdhPJwEOfJpowfNuKAuPffb6VyiGPsgqcZ999KU8X4kBZyEWO6KramY9dmkoCFIEoyO+z9sGrontA==
X-Received: by 2002:a05:651c:1b6:: with SMTP id c22mr9687723ljn.365.1607372882977;
        Mon, 07 Dec 2020 12:28:02 -0800 (PST)
Subject: Re: [PATCH V3 12/23] xen/ioreq: Remove "hvm" prefixes from involved
 function names
To: Jan Beulich <jbeulich@suse.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-13-git-send-email-olekstysh@gmail.com>
 <8b185227-3eae-de79-3af9-db39fed44459@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <cb35481d-1e21-3223-90b8-28d5c55f197b@gmail.com>
Date: Mon, 7 Dec 2020 22:28:01 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <8b185227-3eae-de79-3af9-db39fed44459@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 07.12.20 14:45, Jan Beulich wrote:

Hi Jan

> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>> @@ -301,8 +301,8 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
>>       return found;
>>   }
>>   
>> -static void hvm_update_ioreq_evtchn(struct ioreq_server *s,
>> -                                    struct ioreq_vcpu *sv)
>> +static void ioreq_update_evtchn(struct ioreq_server *s,
>> +                                struct ioreq_vcpu *sv)
>>   {
>>       ASSERT(spin_is_locked(&s->lock));
> This looks to be an ioreq server function, which hence wants to be
> named ioreq_server_update_evtchn()? Then

I will rename it.


>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Thank you

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 20:37:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 20:37:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47000.83214 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmNFo-0003vy-F0; Mon, 07 Dec 2020 20:37:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47000.83214; Mon, 07 Dec 2020 20:37:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmNFo-0003vr-Bu; Mon, 07 Dec 2020 20:37:04 +0000
Received: by outflank-mailman (input) for mailman id 47000;
 Mon, 07 Dec 2020 20:37:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmNFm-0003vj-Qo; Mon, 07 Dec 2020 20:37:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmNFm-00024x-KK; Mon, 07 Dec 2020 20:37:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmNFm-0004fJ-9O; Mon, 07 Dec 2020 20:37:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kmNFm-0001FD-8k; Mon, 07 Dec 2020 20:37:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gH52y0nC5Xe+zankejbQO8PacLogtTPyHR7EwCpPVKU=; b=H6ZO00Cd4Yy3jwwZFMWrrDPdDM
	UpzibjmpYSZw4oXluh+Kp97ecuLFAyhp60Zso+aOoICWUaIWyjkAsvVr9m76VibfkiFytzuS6MXP0
	MAV/1xHdmQmbolUo7JFSFvGiZbmD68LgBW7iJ2a9BPAn2gLIEXUXdO6x7nksZIQ4z6PE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157262-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157262: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4b0e0db86194b5e9e18c9f2c10b3910f3394c56f
X-Osstest-Versions-That:
    xen=3ec53aa79905edc3891b8267123d88a221553370
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 07 Dec 2020 20:37:02 +0000

flight 157262 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157262/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  4b0e0db86194b5e9e18c9f2c10b3910f3394c56f
baseline version:
 xen                  3ec53aa79905edc3891b8267123d88a221553370

Last test of basis   157259  2020-12-07 14:01:30 Z    0 days
Testing same since   157262  2020-12-07 17:00:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  From: Juergen Gross <jgross@suse.com>
  Juergen Gross <jgross@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3ec53aa799..4b0e0db861  4b0e0db86194b5e9e18c9f2c10b3910f3394c56f -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 20:48:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 20:48:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47011.83229 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmNQl-00055C-GX; Mon, 07 Dec 2020 20:48:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47011.83229; Mon, 07 Dec 2020 20:48:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmNQl-000555-DS; Mon, 07 Dec 2020 20:48:23 +0000
Received: by outflank-mailman (input) for mailman id 47011;
 Mon, 07 Dec 2020 20:48:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mZuY=FL=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kmNQk-000550-8J
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 20:48:22 +0000
Received: from mail-lj1-x243.google.com (unknown [2a00:1450:4864:20::243])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4a823cb9-733c-42f6-9b9b-14471ac59f73;
 Mon, 07 Dec 2020 20:48:21 +0000 (UTC)
Received: by mail-lj1-x243.google.com with SMTP id y22so1732356ljn.9
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 12:48:21 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a823cb9-733c-42f6-9b9b-14471ac59f73
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=afeBzrAkz5BwizCHeAJgvDbsOemFkWApafPEH+e5coo=;
        b=MvYUbDa0VUimGq7DOGdUVu7dwP7LEF2c3QsAWimh87QwX6Nv62K9zLhFFVLn9qBZac
         SPFcpd1egvO5SmP1TWr17dth5GA8Xk9bwLt62acxeqFQkA7Ma/6fAj+4oKYPHChpgFAc
         6XXBxW93B3u11eAa8Sr/erm+EJuO63LFruir3fMCB/telhLRjG+3w+lzyEwcS0auKbme
         g7aVUrv10BGIV8btAjzhMMOUft24h7/jP4KOqscuqXZ6gcLw8+PftjN7G5T4aE7WfFs8
         JiCkYcIWL+nY1SOW0pH1lKeUDGsURRRIWavwwOR9ZiiHw/pQiCWHXayr1KMAs61R1dDC
         CczQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=afeBzrAkz5BwizCHeAJgvDbsOemFkWApafPEH+e5coo=;
        b=ZSValv/+zg1xHiqtWFL5Qfi6x6+T7oAmmypro/qkx3HJz804ZCcerH8enzqF0PENTX
         d3VCyovDUnIYJkNEhEpq0+ff0IlTaBShdN6hAFvCMY9afSNo2BVLxlXqUHwAno74mPv1
         g7ifioak6GJ4MntHeOFZkNeDzkZT//dd2aLD8sXpRzuE8l94OPyjqzZn4EF0Piq4HBk/
         yuySFW/O9QrNfxrEyqSlZOiZ3k0/5F/NpsUuYNbFNbh3pLsdYk8qn8emqpmEFGYqFBJW
         +ua341g/i49W7uZLOdPnTORAtG+LJWhw4vh6D9B+Z+Sc0c8f/M130L6RrGstgb2Kkdk8
         q2xg==
X-Gm-Message-State: AOAM5302IUzdykbHGcbrmZoRDYgdsTeRrobPOZnK75DrrDoiogYNBYLW
	3fZqAa+OIM9w7v6rtufHRcvEy63o5AC9K8DS0uo=
X-Google-Smtp-Source: ABdhPJwTkr+9lK2EQYVg4JAu6qvcY0ivTKIzhAQQRziBGRp+n2S9E83RzY48r1SabrKZFJZwphmIcgtUl5P2VwiF/sI=
X-Received: by 2002:a2e:9747:: with SMTP id f7mr3313388ljj.262.1607374100396;
 Mon, 07 Dec 2020 12:48:20 -0800 (PST)
MIME-Version: 1.0
References: <20201207133024.16621-1-jgross@suse.com> <20201207133024.16621-3-jgross@suse.com>
In-Reply-To: <20201207133024.16621-3-jgross@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Mon, 7 Dec 2020 15:48:09 -0500
Message-ID: <CAKf6xpuqdY=TctOjNsnTTexeBpkV+HMkOHFsAd4vxUudBpxizA@mail.gmail.com>
Subject: Re: [PATCH 2/2] xen: don't use page->lru for ZONE_DEVICE memory
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, 
	open list <linux-kernel@vger.kernel.org>, Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
	Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"

On Mon, Dec 7, 2020 at 8:30 AM Juergen Gross <jgross@suse.com> wrote:
>
> Commit 9e2369c06c8a18 ("xen: add helpers to allocate unpopulated
> memory") introduced usage of ZONE_DEVICE memory for foreign memory
> mappings.
>
> Unfortunately this collides with using page->lru for Xen backend
> private page caches.
>
> Fix that by using page->zone_device_data instead.
>
> Fixes: 9e2369c06c8a18 ("xen: add helpers to allocate unpopulated memory")
> Signed-off-by: Juergen Gross <jgross@suse.com>

Would it make sense to add BUG_ON(is_zone_device_page(page)) and the
opposite as appropriate to cache_enq?  Either way:
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 20:59:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 20:59:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47019.83240 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmNbj-0006EV-My; Mon, 07 Dec 2020 20:59:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47019.83240; Mon, 07 Dec 2020 20:59:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmNbj-0006EO-Jl; Mon, 07 Dec 2020 20:59:43 +0000
Received: by outflank-mailman (input) for mailman id 47019;
 Mon, 07 Dec 2020 20:59:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=exIY=FL=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kmNbi-0006EJ-EI
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 20:59:42 +0000
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 62d2cb66-1c0b-42ca-b705-361a12e6955d;
 Mon, 07 Dec 2020 20:59:41 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id f11so4706527ljn.2
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 12:59:41 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id m15sm2215097lji.130.2020.12.07.12.59.38
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 07 Dec 2020 12:59:39 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 62d2cb66-1c0b-42ca-b705-361a12e6955d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=oUiUZRqpkI5YFscM3PoZHzn1lds/nMS0/PGFFF8wK4E=;
        b=rHgkQbb+mwY/o+VhZkZGbQ1+9k6p551rkDrFJYks5J/Yk99AyuNCPBqE72b0ahEz/c
         OLbkyFE+8+rfiJlee4aTiO/6ZxiwmtryZnNhti2cHXX+JpXUt0Gq0qAapWMpaJHVznW+
         aZlJzfEu09di4dZhGfYMlyW3z8P7qYhcnmapgfsM1aKSrMJiZ9qzwFrtKNtC8T+oWBXh
         /rIEUL534xnkZ+jxEd3aTmRdm3sRgvWvvM9RASJuXfzCIQY6cvSgDBo1rr1QSUictqV+
         AmG+d8jKMA4JlvBhuWZX9fo1t5lTN2IlCZxgYD67FjIAaYAV2cCvGnzVF4ot0gz1u2JL
         Oscw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=oUiUZRqpkI5YFscM3PoZHzn1lds/nMS0/PGFFF8wK4E=;
        b=dTN+J5v1U0u5bVTFnY8XrrHnZX+nnT6zprgxR5Vg8wcOLo4qNPthz7s4Jb9+e2CkX4
         qHPkLEsxc4+c88v54GxmvwNriaoPIggzD7rZ/kjln6M5ihSYMx4R7n43ok1mCjlMuKC6
         DyNv8/BLohvCdxMfVJzNFKFc2XagLTBiSxwLdzc0vEprpzRJPschzrEWth2RNpGreARY
         jomY0cqHtMSknxeYoVzaFMKjtWKG2dw2R+1dAoiOgolqHejfBhkoHWzhnli6QZaYkSog
         eCvFAvMP+XjpNsprtrPdy0l67oZrl8OcFK8hfrMhAhl3kgyi1fMlPWpDqWxi9cdK3ct5
         CL9Q==
X-Gm-Message-State: AOAM532Idf1CMnBjeFT+91Grq1pNrR6CztnzcRKfCvPbxcqJk3kF9dsF
	SqI4g7T5y4Y4dmZ/kfq7a9Eaxv5CphwGTw==
X-Google-Smtp-Source: ABdhPJxeC6hdyBW9LinjpbvmeQ71a09hY3YeLRYmLYM6E7EXRtXTzzOUisCwCBGrfHQ9HlR1TrO49Q==
X-Received: by 2002:a2e:3807:: with SMTP id f7mr1480028lja.24.1607374779722;
        Mon, 07 Dec 2020 12:59:39 -0800 (PST)
Subject: Re: [PATCH V3 11/23] xen/ioreq: Move x86's io_completion/io_req
 fields to struct vcpu
To: Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-12-git-send-email-olekstysh@gmail.com>
 <742899b6-964b-be75-affc-31342c07133a@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <d7d867d3-b508-0c6c-191f-264e1e08bf39@gmail.com>
Date: Mon, 7 Dec 2020 22:59:33 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <742899b6-964b-be75-affc-31342c07133a@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 07.12.20 14:32, Jan Beulich wrote:

Hi Jan, Paul.

> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>> --- a/xen/arch/x86/hvm/emulate.c
>> +++ b/xen/arch/x86/hvm/emulate.c
>> @@ -142,8 +142,8 @@ void hvmemul_cancel(struct vcpu *v)
>>   {
>>       struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
>>   
>> -    vio->io_req.state = STATE_IOREQ_NONE;
>> -    vio->io_completion = HVMIO_no_completion;
>> +    v->io.req.state = STATE_IOREQ_NONE;
>> +    v->io.completion = IO_no_completion;
>>       vio->mmio_cache_count = 0;
>>       vio->mmio_insn_bytes = 0;
>>       vio->mmio_access = (struct npfec){};
>> @@ -159,7 +159,7 @@ static int hvmemul_do_io(
>>   {
>>       struct vcpu *curr = current;
>>       struct domain *currd = curr->domain;
>> -    struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io;
>> +    struct vcpu_io *vio = &curr->io;
> Taking just these two hunks: "vio" would now stand for two entirely
> different things. I realize the name is applicable to both, but I
> wonder if such naming isn't going to risk confusion. Despite being
> relatively familiar with the involved code, I've been repeatedly
> unsure what exactly "vio" covers, and needed to go back to the

  Good comment... I agree that with the naming scheme in the current
patch the code becomes a little confusing to read.


> header. So together with the name possible adjustment mentioned
> further down, maybe "vcpu_io" also wants its name changed, such that
> the variable then also could sensibly be named (slightly)
> differently? struct vcpu_io_state maybe? Or alternatively rename
> variables of type struct hvm_vcpu_io * to hvio or hio? Otoh the
> savings aren't very big for just ->io, so maybe better to stick to
> the prior name with the prior type, and not introduce local
> variables at all for the new field, like you already have it in the
> former case?
I would much prefer the last suggestion, i.e. "not introduce local
variables at all for the new field" (I admit I was thinking almost the
same, but didn't choose this direction).
But I am OK with any of the suggestions here. Paul, what do you think?


>
>> --- a/xen/include/xen/sched.h
>> +++ b/xen/include/xen/sched.h
>> @@ -145,6 +145,21 @@ void evtchn_destroy_final(struct domain *d); /* from complete_domain_destroy */
>>   
>>   struct waitqueue_vcpu;
>>   
>> +enum io_completion {
>> +    IO_no_completion,
>> +    IO_mmio_completion,
>> +    IO_pio_completion,
>> +#ifdef CONFIG_X86
>> +    IO_realmode_completion,
>> +#endif
>> +};
> I'm not entirely happy with io_ / IO_ here - they seem a little
> too generic. How about ioreq_ / IOREQ_ respectively?

I am OK with that, but would like to hear Paul's opinion on both questions.


>
>> +struct vcpu_io {
>> +    /* I/O request in flight to device model. */
>> +    enum io_completion   completion;
>> +    ioreq_t              req;
>> +};
>> +
>>   struct vcpu
>>   {
>>       int              vcpu_id;
>> @@ -256,6 +271,10 @@ struct vcpu
>>       struct vpci_vcpu vpci;
>>   
>>       struct arch_vcpu arch;
>> +
>> +#ifdef CONFIG_IOREQ_SERVER
>> +    struct vcpu_io io;
>> +#endif
>>   };
> I don't have a good solution in mind, and I'm also not meaning to
> necessarily request a change here, but I'd like to point out that
> this does away (for this part of it only, of course) with the
> overlaying of the PV and HVM sub-structs on x86. As long as the
> HVM part is the far bigger one, that's not a problem, but I wanted
> to mention the aspect nevertheless.
>
> Jan

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 21:03:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 21:03:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47024.83253 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmNfH-0007GK-9U; Mon, 07 Dec 2020 21:03:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47024.83253; Mon, 07 Dec 2020 21:03:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmNfH-0007GD-42; Mon, 07 Dec 2020 21:03:23 +0000
Received: by outflank-mailman (input) for mailman id 47024;
 Mon, 07 Dec 2020 21:03:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=exIY=FL=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kmNfF-0007G8-EP
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 21:03:21 +0000
Received: from mail-lf1-x131.google.com (unknown [2a00:1450:4864:20::131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7070f3a1-12cd-4f4a-a5dc-630d71c0be3a;
 Mon, 07 Dec 2020 21:03:20 +0000 (UTC)
Received: by mail-lf1-x131.google.com with SMTP id a8so5335278lfb.3
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 13:03:20 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id l7sm534930lja.15.2020.12.07.13.03.18
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 07 Dec 2020 13:03:19 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7070f3a1-12cd-4f4a-a5dc-630d71c0be3a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=9xUPJN4ARralP1aPENVZ3BLZkCJG+0W2+X4hLd9YekU=;
        b=vLRsPDCQAJbrZ8OjcyJEWKBWgz7CYBzNvv4lBFhMq+K29/24/T4w/19twUdtWAM3xo
         YOcHLvn1O1DMLlmAh5vNyDP3h+EMk9Wdw4+kRjmfFBOXPcM5woTQXF7D98KI6vIBu1nq
         8eMzxx7RcSeRIkvF5oxhyLZEIpymqBHnWDp8I6kihRe8PGOkz0bMk5yU5KCpNwtuDgpY
         v0P8jhCqjgPN+rZqpbWL5Dq0c/4wIE37zSs0DozxOQrM0EC4/XpKtV3HRS5uRSFOWWcG
         lfRt9w2w5TxRvP4DgBlMJAWfsqXuDtlNq/D4nSTfCmBcCUuxrxtrTuJoqenwCaTKiHgA
         RqOw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=9xUPJN4ARralP1aPENVZ3BLZkCJG+0W2+X4hLd9YekU=;
        b=WpIDQpg3BagkP5O5GSeeMZmFWqRQBElFi1IAJVlyXESjH5nllmNOz+5VSmsiXhOpGv
         anLc/ZemNKpRWb8MyJXygWSv++OjU4uvT8ZKu/rT6bZov81VrbDi9zxrj8g/Ay6PwxOO
         ay+YsO8DsE43r3O7vXfqjhIjgzap2MtCctmRlk6TsApe7itngXWyW8mD6/RM32UaLcYa
         uudJYof9EWey8J8izVbz6//tMlTB6otGCV4voeymuEUKB8rBqRRnVH4nP9TfwKv6ZFY+
         z41tNJLjw5G64RftJoLuTamiIgFMJZHyBMEBfj0l/rEOg4YSJtWxYxe15Mefnx/KOmKO
         uAKQ==
X-Gm-Message-State: AOAM530kafK4Sc/1ur5bjZKEPReBWwyLCNd4y1f2oxL1GTV1FW1DtlQy
	1WcOVDEhluMioaJm7kVwiyI=
X-Google-Smtp-Source: ABdhPJxlaxxqUxTKPajCcUsqXcrCc9JOL1RLRS+EVzHMeRcUPhhGqNEWQEJCAWA1nQ0b2j/hPbCigw==
X-Received: by 2002:a19:e8a:: with SMTP id 132mr5762415lfo.108.1607374999655;
        Mon, 07 Dec 2020 13:03:19 -0800 (PST)
Subject: Re: [PATCH V3 00/23] IOREQ feature (+ virtio-mmio) on Arm
To: Wei Chen <Wei.Chen@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, Jan Beulich <jbeulich@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <Julien.Grall@arm.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Tim Deegan <tim@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Kaly Xin <Kaly.Xin@arm.com>,
 Artem Mygaiev <joculator@gmail.com>, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <66df4a0b-166a-81c3-9237-854649c832f9@gmail.com>
 <AM0PR08MB3747E17FB0F59A85CA72AEDA9ECE0@AM0PR08MB3747.eurprd08.prod.outlook.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <76d418d9-a991-1de8-437e-cc950df5bdd5@gmail.com>
Date: Mon, 7 Dec 2020 23:03:17 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <AM0PR08MB3747E17FB0F59A85CA72AEDA9ECE0@AM0PR08MB3747.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 07.12.20 15:03, Wei Chen wrote:
> Hi Oleksandr,

Hi Wei


>
> I have tested v3. It works well with the latest virtio-backend service[1].
> [1] https://github.com/xen-troops/virtio-disk/commits/ioreq_ml1
>
> Tested-by: Wei Chen <Wei.Chen@arm.com>

Thank you very much for the testing!


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 21:06:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 21:06:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47031.83265 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmNiY-0007QF-NP; Mon, 07 Dec 2020 21:06:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47031.83265; Mon, 07 Dec 2020 21:06:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmNiY-0007Q8-KK; Mon, 07 Dec 2020 21:06:46 +0000
Received: by outflank-mailman (input) for mailman id 47031;
 Mon, 07 Dec 2020 21:06:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=exIY=FL=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kmNiX-0007Q3-1S
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 21:06:45 +0000
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 928be4e6-ef8b-419a-9634-78f08e2146e8;
 Mon, 07 Dec 2020 21:06:44 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id a1so15270569ljq.3
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 13:06:44 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id p8sm2972818lfk.109.2020.12.07.13.06.41
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 07 Dec 2020 13:06:41 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 928be4e6-ef8b-419a-9634-78f08e2146e8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=KxM8MOM4A2fB4/dU/u5BsCDi/uBpr79q7L1Yi7hk8Ls=;
        b=qfPh2rBjzmDia+1DAgan33i+ywUAzQwSNAkvAafXnXjimGrO9VHR97kcqQ1SgSAP04
         0MvCA0GK+7zbY6UqaRnjSw+Uy0s9LKj629ZUYHkg8uyzQZtRG9oszj320kz/C35Ue+56
         AwwEXvnrnLeXTwUBtV6IRNxtNERrQYPMDaHd1XWNdlbpoomIEuNy9sL0WU3nLXeHKn6b
         flPZg1DN0zH+arkGc9Gyn9e3kcrlMa6/NOZ+Zu4nrCgDM2MTWC5Kxj7L7izhMG81xtg8
         EXN4GujQ48siXiFLimcIGwQa2IlmcfBTIty2lcmJDUh5jY1XcYayy2wg55Ya5pJlRpy3
         5F2Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=KxM8MOM4A2fB4/dU/u5BsCDi/uBpr79q7L1Yi7hk8Ls=;
        b=EUBRGLzAaIr5s3oFLiwkcx0NwzNQBddaJYnWe1T1boN/20JAIF8pby7fM5tFMVgwbh
         7bJNcGFdl+gNryJ37yUrm8GNwc2I16MIaXYc567m2OqlTzpUbj6RBHsQF8r6JEKEv4JO
         PAYwS06bldOXQz358Q4RuXDh6B4DUVA0asfMszGnR+8SH9xe/K5l0YMhH1E0mZaoKkOA
         oX6WQ0NbWEWyHlwg2FUHP7DXMudKuzqUiIaQYc5ljliZj59uQFNSCSXD0whrVBLHQqmF
         dWYcs1sGMiiPp20WAlXanXQxuIkKOG51M3kp8aF/4IxpTNVvhCrLnqSRu8cVkUijIPYn
         JvIw==
X-Gm-Message-State: AOAM530k+vnw4kjE52K/Bz4IVc/LLta9Vq4U0uaPsjL7LfMDay096+Mm
	Fz2mh9JWaHCWDG0LzoDnTlrgV3R5WbzBCA==
X-Google-Smtp-Source: ABdhPJxmIIrI5WSLW8xVtTJIpCNZa3Qjq1yAURvQ3lIAwKCRYYB8y3pqTBWCiFmDGWQUM9jQGJ0FoQ==
X-Received: by 2002:a2e:8eda:: with SMTP id e26mr5201138ljl.272.1607375202906;
        Mon, 07 Dec 2020 13:06:42 -0800 (PST)
Subject: Re: [PATCH V3 10/23] xen/mm: Make x86's XENMEM_resource_ioreq_server
 handling common
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien.grall@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-11-git-send-email-olekstysh@gmail.com>
 <4f9a68ad-c663-d7a1-9194-4ad28958b077@suse.com>
 <39ee3665-48f2-334d-e7a0-2e1a17bccd23@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <a83ec541-53ad-491d-372d-3c3b182637a3@gmail.com>
Date: Mon, 7 Dec 2020 23:06:40 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <39ee3665-48f2-334d-e7a0-2e1a17bccd23@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 07.12.20 14:11, Jan Beulich wrote:

Hi Jan

> On 07.12.2020 12:35, Jan Beulich wrote:
>> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>>> --- a/xen/arch/x86/mm.c
>>> +++ b/xen/arch/x86/mm.c
>>> @@ -4699,50 +4699,6 @@ int xenmem_add_to_physmap_one(
>>>       return rc;
>>>   }
>>>   
>>> -int arch_acquire_resource(struct domain *d, unsigned int type,
>>> -                          unsigned int id, unsigned long frame,
>>> -                          unsigned int nr_frames, xen_pfn_t mfn_list[])
>>> -{
>>> -    int rc;
>>> -
>>> -    switch ( type )
>>> -    {
>>> -#ifdef CONFIG_HVM
>>> -    case XENMEM_resource_ioreq_server:
>>> -    {
>>> -        ioservid_t ioservid = id;
>>> -        unsigned int i;
>>> -
>>> -        rc = -EINVAL;
>>> -        if ( !is_hvm_domain(d) )
>>> -            break;
>>> -
>>> -        if ( id != (unsigned int)ioservid )
>>> -            break;
>>> -
>>> -        rc = 0;
>>> -        for ( i = 0; i < nr_frames; i++ )
>>> -        {
>>> -            mfn_t mfn;
>>> -
>>> -            rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn);
>>> -            if ( rc )
>>> -                break;
>>> -
>>> -            mfn_list[i] = mfn_x(mfn);
>>> -        }
>>> -        break;
>>> -    }
>>> -#endif
>>> -
>>> -    default:
>>> -        rc = -EOPNOTSUPP;
>>> -        break;
>>> -    }
>>> -
>>> -    return rc;
>>> -}
>> Can't this be accompanied by removal of the xen/ioreq.h inclusion?
>> (I'm only looking at patch 4 right now, but the renaming there made
>> the soon to be unnecessary #include quite apparent.)
> And then, now that I've looked at this patch as a whole,
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Great, thank you.

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Mon Dec 07 22:15:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 22:15:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47050.83283 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmOmM-00065z-5n; Mon, 07 Dec 2020 22:14:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47050.83283; Mon, 07 Dec 2020 22:14:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmOmM-00065s-2c; Mon, 07 Dec 2020 22:14:46 +0000
Received: by outflank-mailman (input) for mailman id 47050;
 Mon, 07 Dec 2020 22:14:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmOmK-00065k-JS; Mon, 07 Dec 2020 22:14:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmOmK-00045s-3R; Mon, 07 Dec 2020 22:14:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmOmJ-00031B-Pr; Mon, 07 Dec 2020 22:14:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kmOmJ-0004nT-PN; Mon, 07 Dec 2020 22:14:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0Ro1ksHXtdlBTOt3FZxcUXGrWPr2RblzS5XKpdrW/Gk=; b=xa8TTtDqO6oA9S/0W0htutL1aD
	Bsno2bXC4WCdDj9UBS/Y7FcsCBRtsVwqMgwXo+MccZHM4/g0rHoPXOLOcrzdNEuYM71EZMzhuHl4W
	S+w1XyqQhMk2FZdEjPmFbp5EOFW/pkd4DXhNr5zz3GviU9slZ4q+wjQTqNxslfBu+3Nc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157257-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157257: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=0477e92881850d44910a7e94fc2c46f96faa131f
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 07 Dec 2020 22:14:43 +0000

flight 157257 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157257/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm 10 host-ping-check-xen fail in 157248 REGR. vs. 152332
 test-arm64-arm64-xl          12 debian-install fail in 157248 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-examine      8 reboot           fail in 157248 pass in 157257
 test-arm64-arm64-xl-credit1   8 xen-boot         fail in 157248 pass in 157257
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 157248
 test-arm64-arm64-xl           8 xen-boot                   fail pass in 157248

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                0477e92881850d44910a7e94fc2c46f96faa131f
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  129 days
Failing since        152366  2020-08-01 20:49:34 Z  128 days  219 attempts
Testing same since   157248  2020-12-07 01:10:12 Z    0 days    2 attempts

------------------------------------------------------------
3652 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 700670 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 07 23:18:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 23:18:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47060.83298 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmPlT-0003oh-0y; Mon, 07 Dec 2020 23:17:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47060.83298; Mon, 07 Dec 2020 23:17:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmPlS-0003oa-U4; Mon, 07 Dec 2020 23:17:54 +0000
Received: by outflank-mailman (input) for mailman id 47060;
 Mon, 07 Dec 2020 23:17:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KVsS=FL=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kmPlR-0003oV-B5
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 23:17:53 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7992ddc3-6dcd-4f93-9877-0ca8d506d2fa;
 Mon, 07 Dec 2020 23:17:52 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 0B7NHdhB034943
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Mon, 7 Dec 2020 18:17:45 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 0B7NHcpJ034942;
 Mon, 7 Dec 2020 15:17:38 -0800 (PST) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7992ddc3-6dcd-4f93-9877-0ca8d506d2fa
Date: Mon, 7 Dec 2020 15:17:38 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Ian Jackson <iwj@xenproject.org>
Cc: xen-devel@lists.xenproject.org,
        Stefano Stabellini <sstabellini@kernel.org>,
        Julien Grall <julien@xen.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [RFC PATCH] xen/arm: domain_build: Ignore empty memory bank
Message-ID: <X864Ep+tA609svay@mattapan.m5p.com>
References: <202012071807.0B7I7TJq033563@m5p.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <202012071807.0B7I7TJq033563@m5p.com>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Mon, Dec 07, 2020 at 07:36:11AM -0800, Elliott Mitchell wrote:
> Commit 5a37207df52066efefe419c677b089a654d37afc changed this behavior to
> ignore such banks.  Unfortunately this means these empty nodes are
> visible to code which accesses the device trees.  Have domain_build also
> ignore these entries.

It is implicit, but I discovered that commit
7d2b21fd36c2a47799eed71c67bae7faa1ec4272 actually broke Xen for me.  As
such, I believe this should be backported to stable-4.14 as a bugfix.

Taking a second look, that error message is misleading.  The preliminary
fix I came up with was:  if ( !addr ) continue;

I had assumed the 0 it was reporting was an address of 0.  Perhaps
"Unable to retrieve address for index %u of %s\n" would be clearer?

(I opted for testing the size instead after finding the source of the bug,
having decided that duplicating the existing behaviour was better.)


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Mon Dec 07 23:52:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 23:52:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47066.83309 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmQJ6-0007nK-Qk; Mon, 07 Dec 2020 23:52:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47066.83309; Mon, 07 Dec 2020 23:52:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmQJ6-0007nD-Ne; Mon, 07 Dec 2020 23:52:40 +0000
Received: by outflank-mailman (input) for mailman id 47066;
 Mon, 07 Dec 2020 23:52:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=woqd=FL=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1kmQJ5-0007n8-Ux
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 23:52:39 +0000
Received: from aserp2130.oracle.com (unknown [141.146.126.79])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b7efd1cf-a501-4749-9ac1-ba7efbd121e4;
 Mon, 07 Dec 2020 23:52:35 +0000 (UTC)
Received: from pps.filterd (aserp2130.oracle.com [127.0.0.1])
 by aserp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0B7NjW2P099624;
 Mon, 7 Dec 2020 23:52:30 GMT
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by aserp2130.oracle.com with ESMTP id 357yqbraru-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Mon, 07 Dec 2020 23:52:30 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0B7Njg61075935;
 Mon, 7 Dec 2020 23:52:29 GMT
Received: from userv0121.oracle.com (userv0121.oracle.com [156.151.31.72])
 by userp3030.oracle.com with ESMTP id 358m4wyyqu-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 07 Dec 2020 23:52:29 +0000
Received: from abhmp0003.oracle.com (abhmp0003.oracle.com [141.146.116.9])
 by userv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 0B7NqQoa021467;
 Mon, 7 Dec 2020 23:52:27 GMT
Received: from [10.39.215.209] (/10.39.215.209)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Mon, 07 Dec 2020 15:52:26 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7efd1cf-a501-4749-9ac1-ba7efbd121e4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=6l5IUmCUhNxwZdcNx+0VSeMQockYDo+hmSJ8uIBSmJ0=;
 b=yhFLv1BpXZhVaoMR/lIwbxxGFWMJFI5dpefwnS7GqxqxmoN8bhr1V0HiNtDg/8+smFuX
 6ZyDiUt/NbK8q7czLUtKhuh0gFcvv2R73UccN7ji/vJhHRnzu6rtwezzK0ySy4iV02XF
 0utF42ht8wAg9N5TtW1DBcfJwl6rO2Lzdg5be4NiNLOBX71Cg9HQveOm0VTEykzBzFMR
 YQc5JeyMbWREu8hm172a8YJ5p8DJ2aTC1HZDGATmsDuzWihlrZMzSJaoTOM9ibKtkn1o
 Y+HV892IyPlMVLckLb431jW3r5znvZJK7Tv4Y2HZVlq3la5gJ6tg16ymzCAHi6YT8UyW dg== 
Subject: Re: [PATCH 1/2] xen: add helpers for caching grant mapping pages
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
        linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
        linux-scsi@vger.kernel.org
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
        =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
        Jens Axboe <axboe@kernel.dk>,
        Stefano Stabellini <sstabellini@kernel.org>
References: <20201207133024.16621-1-jgross@suse.com>
 <20201207133024.16621-2-jgross@suse.com>
From: boris.ostrovsky@oracle.com
Organization: Oracle Corporation
Message-ID: <67875212-a5e1-5ea3-6ecd-2cf878f3cd5d@oracle.com>
Date: Mon, 7 Dec 2020 18:52:24 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201207133024.16621-2-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9828 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0 spamscore=0 suspectscore=2
 bulkscore=0 malwarescore=0 phishscore=0 adultscore=0 mlxlogscore=999
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2012070157
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9828 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=2 mlxlogscore=999
 clxscore=1011 malwarescore=0 bulkscore=0 phishscore=0 adultscore=0
 spamscore=0 priorityscore=1501 mlxscore=0 lowpriorityscore=0
 impostorscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2012070157


On 12/7/20 8:30 AM, Juergen Gross wrote:
> Instead of having similar helpers in multiple backend drivers use
> common helpers for caching pages allocated via gnttab_alloc_pages().
>
> Make use of those helpers in blkback and scsiback.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>


> +
> +void gnttab_page_cache_shrink(struct gnttab_page_cache *cache, unsigned int num)
> +{
> +	struct page *page[10];
> +	unsigned int i = 0;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&cache->lock, flags);
> +
> +	while (cache->num_pages > num) {
> +		page[i] = list_first_entry(&cache->pages, struct page, lru);
> +		list_del(&page[i]->lru);
> +		cache->num_pages--;
> +		if (++i == ARRAY_SIZE(page)) {
> +			spin_unlock_irqrestore(&cache->lock, flags);
> +			gnttab_free_pages(i, page);
> +			i = 0;
> +			spin_lock_irqsave(&cache->lock, flags);
> +		}
> +	}
> +
> +	spin_unlock_irqrestore(&cache->lock, flags);
> +
> +	if (i != 0)
> +		gnttab_free_pages(i, page);
> +}


How about splitting the cache->pages list into two lists (one @num entries long and the other holding the rest) and then batch-freeing with gnttab_free_pages() on the list holding the surplus? Then you won't have to deal with dropping and re-taking the lock.


In fact, I am not even sure batching gains us much; I'd consider going one-by-one.


-boris




From xen-devel-bounces@lists.xenproject.org Mon Dec 07 23:56:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 07 Dec 2020 23:56:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47071.83321 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmQMX-0007yT-AI; Mon, 07 Dec 2020 23:56:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47071.83321; Mon, 07 Dec 2020 23:56:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmQMX-0007yM-7J; Mon, 07 Dec 2020 23:56:13 +0000
Received: by outflank-mailman (input) for mailman id 47071;
 Mon, 07 Dec 2020 23:56:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=woqd=FL=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1kmQMV-0007yG-Mi
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 23:56:11 +0000
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a202223b-363c-4ca7-806b-63e2b13b5d1e;
 Mon, 07 Dec 2020 23:56:11 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0B7Nrx76149590;
 Mon, 7 Dec 2020 23:56:08 GMT
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by userp2130.oracle.com with ESMTP id 3581mqr5wv-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Mon, 07 Dec 2020 23:56:08 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0B7NsbtL102334;
 Mon, 7 Dec 2020 23:56:08 GMT
Received: from aserv0122.oracle.com (aserv0122.oracle.com [141.146.126.236])
 by userp3020.oracle.com with ESMTP id 358kys1hxq-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 07 Dec 2020 23:56:08 +0000
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
 by aserv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 0B7Nu7Uh011934;
 Mon, 7 Dec 2020 23:56:07 GMT
Received: from [10.39.215.209] (/10.39.215.209)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Mon, 07 Dec 2020 15:56:06 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a202223b-363c-4ca7-806b-63e2b13b5d1e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=Tueq5whBtLsp2a0o8pSIRIag4QTwwNeexz4YhPhpU3k=;
 b=tz7Jyf3YdxhXAg9ydLFiyS7oTuMO3eqAm/va2SdIkiA/xvI9FlPMNxBEADONL2cDaoO+
 WgN2kTyW5KrKeNuY9WbK9gjdne+fgkvNCG2Sutywo4j1TYcKsDkmgbX6rk5Hm15CPQWc
 zaM41FT6Cv53uQ7QsguSbpuU/SSWgo22QAzJH8f4fJbY2+mwRePtu4JOnY1fitYO9CeU
 cCiZCxfzS8Ly5JPVd7KFPWnhqUVF73kmleOY+rDRFGRArSZkV1bvHaeA3PunB6oNEely
 kd0yIZptHVW1CLDDvidOiQyYRKex6IakyAmAsSHD9TODIkZHePWMnkdZtvRW+9bUlfX/ lw== 
Subject: Re: [PATCH 2/2] xen: don't use page->lru for ZONE_DEVICE memory
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
        linux-kernel@vger.kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>
References: <20201207133024.16621-1-jgross@suse.com>
 <20201207133024.16621-3-jgross@suse.com>
From: boris.ostrovsky@oracle.com
Organization: Oracle Corporation
Message-ID: <eea24527-15d8-7d54-9e82-7737f0e3cf70@oracle.com>
Date: Mon, 7 Dec 2020 18:56:05 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201207133024.16621-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9828 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0 spamscore=0 mlxscore=0
 malwarescore=0 suspectscore=0 mlxlogscore=999 bulkscore=0 phishscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2012070158
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9828 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0 mlxlogscore=999
 clxscore=1015 malwarescore=0 priorityscore=1501 adultscore=0
 lowpriorityscore=0 phishscore=0 spamscore=0 impostorscore=0 mlxscore=0
 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2012070158


On 12/7/20 8:30 AM, Juergen Gross wrote:


> --- a/drivers/xen/unpopulated-alloc.c
> +++ b/drivers/xen/unpopulated-alloc.c
> @@ -12,7 +12,7 @@
>  #include <xen/xen.h>
>  
>  static DEFINE_MUTEX(list_lock);
> -static LIST_HEAD(page_list);
> +static struct page *page_list;


next_page or next_allocated_page?


Either way


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>




From xen-devel-bounces@lists.xenproject.org Tue Dec 08 02:14:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 02:14:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47083.83343 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmSWQ-0005c0-0A; Tue, 08 Dec 2020 02:14:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47083.83343; Tue, 08 Dec 2020 02:14:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmSWP-0005bt-Sq; Tue, 08 Dec 2020 02:14:33 +0000
Received: by outflank-mailman (input) for mailman id 47083;
 Tue, 08 Dec 2020 02:14:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmSWP-0005bl-Ca; Tue, 08 Dec 2020 02:14:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmSWP-00014Y-43; Tue, 08 Dec 2020 02:14:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmSWO-0000AE-NY; Tue, 08 Dec 2020 02:14:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kmSWO-0008PR-N3; Tue, 08 Dec 2020 02:14:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jg5HkS8Nrir2mjIiSuAP45es/8bYmCbNohPngKd+l1U=; b=xsTYRaexmqyNkPWax5P5I7xi7h
	9ajtjkJnoCrCddz7kan+uBRYyhSdXYjW2XFVMhRevpWeKg2YDLF2rNikN7fyY9ef6X4S1aU5yNOPD
	ZE85JKscpOY0OopzaheDSREpcGS677Mem7B7sJaC6mBZ8QC0LvPV+rtOd74qMVZow0DU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157261-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157261: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-start/debianhvm.repeat:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 08 Dec 2020 02:14:32 +0000

flight 157261 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157261/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit2   8 xen-boot         fail in 157253 pass in 157261
 test-amd64-i386-xl-qemuu-ovmf-amd64 20 guest-start/debianhvm.repeat fail pass in 157253
 test-armhf-armhf-xl-rtds     14 guest-start                fail pass in 157253

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds    15 migrate-support-check fail in 157253 never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail in 157253 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  109 days
Failing since        152659  2020-08-21 14:07:39 Z  108 days  226 attempts
Testing same since   157142  2020-12-01 20:39:57 Z    6 days   12 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69355 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 03:31:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 03:31:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47093.83363 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmTid-0004bm-A6; Tue, 08 Dec 2020 03:31:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47093.83363; Tue, 08 Dec 2020 03:31:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmTid-0004bf-77; Tue, 08 Dec 2020 03:31:15 +0000
Received: by outflank-mailman (input) for mailman id 47093;
 Tue, 08 Dec 2020 03:31:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmTic-0004bX-O6; Tue, 08 Dec 2020 03:31:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmTic-0002av-GQ; Tue, 08 Dec 2020 03:31:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmTic-00034M-9L; Tue, 08 Dec 2020 03:31:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kmTic-0003H5-8o; Tue, 08 Dec 2020 03:31:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0LfjcTGXv6aEYBD4QgBwCIkLEYGUwDlNbXCwbBsntz0=; b=to9ayWI7NpLLSHuDQ2pm5nJVvA
	3tn4W8stRklHKeo+PJm2f/00KBb9+Qf7VqXyLohBuPMljuGccqRschx1bxn71oFm4Vc7YI+PTyyEJ
	emUFre5syJIH2avV3MmUmQ5/BaGnvynT2K9jc4UnhX5i8Nr+DMy3MRwoLqgiqG6bdUco=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157263-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157263: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=3ec53aa79905edc3891b8267123d88a221553370
X-Osstest-Versions-That:
    xen=5e666356a9d55fbd9eb5b8506088aa760e107b5b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 08 Dec 2020 03:31:14 +0000

flight 157263 xen-unstable real [real]
flight 157269 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157263/
http://logs.test-lab.xenproject.org/osstest/logs/157269/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 157249

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157249
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157249
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157249
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157249
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157249
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157249
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157249
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157249
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157249
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157249
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157249
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  3ec53aa79905edc3891b8267123d88a221553370
baseline version:
 xen                  5e666356a9d55fbd9eb5b8506088aa760e107b5b

Last test of basis   157249  2020-12-07 01:51:52 Z    1 days
Testing same since   157263  2020-12-07 17:06:33 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Hongyan Xia <hongyxia@amazon.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3ec53aa79905edc3891b8267123d88a221553370
Author: Hongyan Xia <hongyxia@amazon.com>
Date:   Mon Dec 7 14:54:44 2020 +0100

    x86/vmap: handle superpages in vmap_to_mfn()
    
    There is simply no guarantee that vmap won't return superpages to the
    caller. It can happen if the list of MFNs is contiguous, or we simply
    have a large granularity. Although rare, if such things do happen, we
    will simply hit BUG_ON() and crash.
    
    Introduce xen_map_to_mfn() to translate any mapped Xen address to an
    MFN regardless of page size, and wrap vmap_to_mfn() around it.
    
    Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
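
The problem the commit describes can be sketched in isolation: a translation
helper that only understands 4KiB leaf entries computes the wrong MFN (or hits
a BUG_ON()) when the covering entry is a superpage. The types, bit layout, and
names below are illustrative stand-ins, not Xen's actual page-table code.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define L2_SHIFT   21              /* 2MiB superpage */
#define PSE_BIT    (1u << 7)       /* "this entry maps a superpage" */

typedef struct {
    uint64_t mfn;                  /* frame number stored in the entry */
    unsigned int flags;
} pte_t;

/* Translate va given the L2 entry covering it and (for 4KiB mappings)
 * the L1 entry.  For a superpage the MFN is that of the 2MiB frame
 * plus the 4KiB-page offset within it; assuming a 4KiB mapping here
 * would yield a wrong answer. */
static uint64_t addr_to_mfn(uint64_t va, const pte_t *l2e, const pte_t *l1e)
{
    if (l2e->flags & PSE_BIT) {
        uint64_t offset = (va >> PAGE_SHIFT) &
                          ((1u << (L2_SHIFT - PAGE_SHIFT)) - 1);
        return l2e->mfn + offset;
    }
    return l1e->mfn;               /* normal 4KiB mapping */
}
```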

commit 826a6dd030fe7d2a62b019e7d6e51ab79227f4aa
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Dec 7 14:54:08 2020 +0100

    x86/vPMU: make use of xmalloc_flex_struct()
    
    ... instead of effectively open-coding it in a type-unsafe way. Drop the
    regs_sz variable altogether, replacing other uses suitably.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
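
What an xmalloc_flex_struct()-style helper buys over open-coding can be shown
with a minimal stand-in: one macro derives the allocation size of a struct with
a flexible array member from the struct type itself, so the element type and
count cannot silently disagree with the allocated size. The macro and struct
below are illustrative, not Xen's actual definitions.

```c
#include <assert.h>
#include <stdlib.h>

struct sample {
    unsigned int nr;
    int vals[];                    /* flexible array member */
};

/* Allocate 'type' with 'n' trailing 'member' elements, zeroed.  The
 * member's element size is taken from the type, so a mismatch between
 * the array type and the size computation cannot creep in. */
#define alloc_flex_struct(type, member, n) \
    ((type *)calloc(1, sizeof(type) + (n) * sizeof(((type *)NULL)->member[0])))

static struct sample *make_sample(unsigned int n)
{
    struct sample *s = alloc_flex_struct(struct sample, vals, n);

    if (s)
        s->nr = n;
    return s;
}
```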

commit d218fb11884346bb079fa2226ab8786a7bfeee16
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Dec 7 14:53:20 2020 +0100

    x86/vIO-APIC: make use of xmalloc_flex_struct()
    
    ... instead of effectively open-coding it in a type-unsafe way. Drop
    hvm_vioapic_size() altogether, folding the other use in a memset()
    invocation into the subsequent loop.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit b7c333016e3d6adf38e80b4e6b121950da092405
Author: Igor Druzhinin <igor.druzhinin@citrix.com>
Date:   Mon Dec 7 14:52:35 2020 +0100

    x86/IRQ: allocate guest array of max size only for shareable IRQs
    
    ... and increase default "irq-max-guests" to 32.
    
    It's not necessary to have an array of a size larger than 1 for
    non-shareable IRQs, and it might impact scalability if high
    "irq-max-guests" values are used - every IRQ in the system, including
    MSIs, would otherwise be supplied with an array of that size.
    
    Since it's now less impactful to use a higher "irq-max-guests" value,
    bump the default to 32. That should give more headroom for future
    systems.
    
    Requested-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit e373bc1bdc593564e15d11a7a50f74968907ee5f
Author: Igor Druzhinin <igor.druzhinin@citrix.com>
Date:   Mon Dec 7 14:49:30 2020 +0100

    x86/IRQ: make max number of guests for a shared IRQ configurable
    
    ... and increase the default to 16.
    
    The current limit of 7 is too restrictive for modern systems where one
    GSI could be shared by potentially many PCI INTx sources, each of which
    corresponds to a device passed through to its own guest. Some systems do
    not apply due diligence in swizzling INTx links, e.g. when INTA is
    declared as the interrupt pin for the majority of PCI devices behind a
    single router, resulting in overuse of a GSI.
    
    Introduce a new command line option to configure that limit and
    dynamically allocate an array of the necessary size. Set the default to
    16, which is higher than 7 but could later be increased further if
    necessary.
    
    Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
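
The allocation policy described by this pair of commits can be sketched as
follows: the per-IRQ guest array is sized for "irq-max-guests" entries only
when the IRQ is shareable, while a non-shareable IRQ can have at most one
guest and so gets a size-1 array. The struct and function names here are
illustrative, not Xen's actual ones.

```c
#include <assert.h>
#include <stdlib.h>

static unsigned int irq_max_guests = 32;   /* new default per the commit */

struct irq_guest_action {
    unsigned int nr_guests;
    void *guest[];                         /* flexible array of bindings */
};

/* A non-shareable IRQ never has more than one guest, so only
 * shareable IRQs need the full irq-max-guests sized array. */
static unsigned int action_capacity(int shareable)
{
    return shareable ? irq_max_guests : 1;
}

static struct irq_guest_action *alloc_action(int shareable)
{
    struct irq_guest_action *a =
        calloc(1, sizeof(*a) +
                  action_capacity(shareable) * sizeof(a->guest[0]));

    return a;
}
```
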
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 04:53:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 04:53:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47114.83409 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmUzv-0003zd-1U; Tue, 08 Dec 2020 04:53:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47114.83409; Tue, 08 Dec 2020 04:53:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmUzu-0003zW-Tt; Tue, 08 Dec 2020 04:53:10 +0000
Received: by outflank-mailman (input) for mailman id 47114;
 Tue, 08 Dec 2020 04:53:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KM1O=FM=oracle.com=martin.petersen@srs-us1.protection.inumbo.net>)
 id 1kmUzt-0003y0-6H
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 04:53:09 +0000
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3342fc7e-e8b2-4082-a499-3f9f0714e580;
 Tue, 08 Dec 2020 04:53:07 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0B84nxYX064111;
 Tue, 8 Dec 2020 04:52:35 GMT
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by aserp2120.oracle.com with ESMTP id 35825m0srp-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Tue, 08 Dec 2020 04:52:35 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0B84ocFe155422;
 Tue, 8 Dec 2020 04:52:34 GMT
Received: from pps.reinject (localhost [127.0.0.1])
 by userp3020.oracle.com with ESMTP id 358kys9m8t-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Tue, 08 Dec 2020 04:52:34 +0000
Received: from userp3020.oracle.com (userp3020.oracle.com [127.0.0.1])
 by pps.reinject (8.16.0.36/8.16.0.36) with SMTP id 0B84qX4M159553;
 Tue, 8 Dec 2020 04:52:33 GMT
Received: from aserv0122.oracle.com (aserv0122.oracle.com [141.146.126.236])
 by userp3020.oracle.com with ESMTP id 358kys9m7s-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 08 Dec 2020 04:52:33 +0000
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
 by aserv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 0B84qDZf015901;
 Tue, 8 Dec 2020 04:52:15 GMT
Received: from ca-mkp.ca.oracle.com (/10.156.108.201)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Mon, 07 Dec 2020 20:52:13 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3342fc7e-e8b2-4082-a499-3f9f0714e580
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : cc :
 subject : date : message-id : in-reply-to : references : mime-version :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=1e3cV7XKyt5cBPHNeWYhYMHu3W87CKppja6U1Jvw5F4=;
 b=V+G180Fh0lHbcmydQJqS5i+cerb42SoRrRI9QCXlQMjyWKKfj0acXqExTQUiK+7OEtft
 OuMy/8L57grCegXY4FPi2mfpdUD9NETOrjU6XyLEGnB6yCKU9e30d2WSgQTsYBxq/cMN
 5junVfgiRpfFB5rc1EfDtZHP43anCs3FIrUtz16u4yPsKG0NCInMT5yeMTaPCX1MMJeq
 qZ1GHIcbwNatFRQ/tELhwJDJybfqjlIskC/pDoCLpTPJ2KTrod7PX9rst5aaRTC3xIQA
 veh/+mbPHLjYXfePq5oiUCHv2+7Gowf8M2KKiHFzojpZKSSSQD2meZy2nTRpI8CVh6TQ Yg== 
From: "Martin K. Petersen" <martin.petersen@oracle.com>
To: linux-kernel@vger.kernel.org,
        "Gustavo A. R. Silva" <gustavoars@kernel.org>
Cc: "Martin K . Petersen" <martin.petersen@oracle.com>, coreteam@netfilter.org,
        selinux@vger.kernel.org, Miguel Ojeda <ojeda@kernel.org>,
        Joe Perches <joe@perches.com>, linux-hardening@vger.kernel.org,
        reiserfs-devel@vger.kernel.org, amd-gfx@lists.freedesktop.org,
        patches@opensource.cirrus.com, linux-fbdev@vger.kernel.org,
        keyrings@vger.kernel.org, Nick Desaulniers <ndesaulniers@google.com>,
        linux-geode@lists.infradead.org, linux-gpio@vger.kernel.org,
        linux-hams@vger.kernel.org, linux-ext4@vger.kernel.org,
        wcn36xx@lists.infradead.org, GR-everest-linux-l2@marvell.com,
        x86@kernel.org, linux-watchdog@vger.kernel.org,
        linux-media@vger.kernel.org, linux-cifs@vger.kernel.org,
        linux-nfs@vger.kernel.org, linux-iio@vger.kernel.org,
        linux-usb@vger.kernel.org, devel@driverdev.osuosl.org,
        linux-atm-general@lists.sourceforge.net,
        linux-wireless@vger.kernel.org, linux-crypto@vger.kernel.org,
        linux-decnet-user@lists.sourceforge.net,
        Nathan Chancellor <natechancellor@gmail.com>,
        netfilter-devel@vger.kernel.org, target-devel@vger.kernel.org,
        linux-integrity@vger.kernel.org,
        virtualization@lists.linux-foundation.org,
        linux-mediatek@lists.infradead.org, Kees Cook <keescook@chromium.org>,
        samba-technical@lists.samba.org, ceph-devel@vger.kernel.org,
        drbd-dev@tron.linbit.com, intel-gfx@lists.freedesktop.org,
        dm-devel@redhat.com, linux-acpi@vger.kernel.org,
        linux-ide@vger.kernel.org, xen-devel@lists.xenproject.org,
        op-tee@lists.trustedfirmware.org, linux-hwmon@vger.kernel.org,
        linux-sctp@vger.kernel.org, bridge@lists.linux-foundation.org,
        linux-mtd@lists.infradead.org, linux-input@vger.kernel.org,
        linux-can@vger.kernel.org, rds-devel@oss.oracle.com,
        oss-drivers@netronome.com, tipc-discussion@lists.sourceforge.net,
        cluster-devel@redhat.com, linux-rdma@vger.kernel.org,
        dri-devel@lists.freedesktop.org, linux-arm-kernel@lists.infradead.org,
        linux-block@vger.kernel.org, usb-storage@lists.one-eyed-alien.net,
        linux1394-devel@lists.sourceforge.net, alsa-devel@alsa-project.org,
        linux-i3c@lists.infradead.org, linux-arm-msm@vger.kernel.org,
        linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org,
        linux-afs@lists.infradead.org, nouveau@lists.freedesktop.org,
        GR-Linux-NIC-Dev@marvell.com, netdev@vger.kernel.org,
        linux-security-module@vger.kernel.org,
        linux-stm32@st-md-mailman.stormreply.com, linux-mm@kvack.org,
        intel-wired-lan@lists.osuosl.org, linux-renesas-soc@vger.kernel.org
Subject: Re: (subset) [PATCH 000/141] Fix fall-through warnings for Clang
Date: Mon,  7 Dec 2020 23:52:01 -0500
Message-Id: <160740299787.710.4201881220590518200.b4-ty@oracle.com>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <cover.1605896059.git.gustavoars@kernel.org>
References: <cover.1605896059.git.gustavoars@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9828 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0 adultscore=0 bulkscore=0
 phishscore=0 mlxlogscore=740 clxscore=1015 priorityscore=1501 mlxscore=0
 spamscore=0 lowpriorityscore=0 malwarescore=0 impostorscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2012080029

On Fri, 20 Nov 2020 12:21:39 -0600, Gustavo A. R. Silva wrote:

> This series aims to fix almost all remaining fall-through warnings in
> order to enable -Wimplicit-fallthrough for Clang.
> 
> In preparation to enable -Wimplicit-fallthrough for Clang, explicitly
> add multiple break/goto/return/fallthrough statements instead of just
> letting the code fall through to the next case.
> 
> [...]

Applied to 5.11/scsi-queue, thanks!

[054/141] target: Fix fall-through warnings for Clang
          https://git.kernel.org/mkp/scsi/c/492096ecfa39

-- 
Martin K. Petersen	Oracle Linux Engineering


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 05:12:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 05:12:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47045.83429 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmVIO-0006GG-0q; Tue, 08 Dec 2020 05:12:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47045.83429; Tue, 08 Dec 2020 05:12:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmVIN-0006G9-Td; Tue, 08 Dec 2020 05:12:15 +0000
Received: by outflank-mailman (input) for mailman id 47045;
 Mon, 07 Dec 2020 21:50:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ovVX=FL=konsulko.com=trini@srs-us1.protection.inumbo.net>)
 id 1kmOP0-0003pY-36
 for xen-devel@lists.xenproject.org; Mon, 07 Dec 2020 21:50:38 +0000
Received: from mail-qk1-x732.google.com (unknown [2607:f8b0:4864:20::732])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f646448e-91af-4056-aa1f-4a3fa8fe05c3;
 Mon, 07 Dec 2020 21:50:37 +0000 (UTC)
Received: by mail-qk1-x732.google.com with SMTP id 1so14259205qka.0
 for <xen-devel@lists.xenproject.org>; Mon, 07 Dec 2020 13:50:37 -0800 (PST)
Received: from bill-the-cat (cpe-65-184-135-175.ec.res.rr.com.
 [65.184.135.175])
 by smtp.gmail.com with ESMTPSA id b14sm11970109qtx.36.2020.12.07.13.50.34
 (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256);
 Mon, 07 Dec 2020 13:50:36 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f646448e-91af-4056-aa1f-4a3fa8fe05c3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=konsulko.com; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to:user-agent;
        bh=/ZcCG7pADF2Oan3XaaLEvoIyvCq89Mhc+GwW8vVvIrY=;
        b=Lozzj4GfI580ONqAuCJKC3AAbdqg1jhqs2wm7PT4W0lBO7ssjToZ1C5s6xMzXBuy2Q
         RANmCAIdVLZ8U6O5cFiEnSTK/M3L+2NbtbQNUHJLTwHedrzizOas4aZSk09mnrHh2fO6
         J1Lq1zy1/7zESVTeOQjjTpj7b3wkCAVA4oQPI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=/ZcCG7pADF2Oan3XaaLEvoIyvCq89Mhc+GwW8vVvIrY=;
        b=bw7z6F3wREVi1VaHbnHPvhAYkEH05w0um3fHokudcqRwHwNgUGA0huRXHkWAdQSEkB
         PQw+MTOPBOMPedWk+s+TwkLX5cveaJoWNLjEAHQfytwqINp8KlcKp/+rlvcoIYbEsigu
         g1+jUPU4QPaQARWiWGvLC9J1QNSRFrBMBsX3dGP0zUcDcb4BQiPyHGBb2y1a/qJaMIWy
         B/0eNrPlbkdW3fDGGHU48hTHi+8Clfp0jx+3UpzUejkc7dBt3gNB3ylPk6I+iWIIMAQV
         fb+eb3iKrNchCWkmR3DXjJVeTRmTKGJb3vDekuEftVk5vJPJHbA3FGsk60ASdfuBBpa8
         zMow==
X-Gm-Message-State: AOAM530ppifdu3ryScQth8AEDS3mpiZeCCgegZnPDmVDl4kVeWi0SPKo
	jsrSPUPPKysHR4nJlaGxjL0+7A==
X-Google-Smtp-Source: ABdhPJz6FCbeMXa2G2maw67GOiOFfbP8/C/NR5x0geY5gSM9+hW8KXY+f/apvUvqCtKzzryrb+b0Nw==
X-Received: by 2002:a05:620a:6c8:: with SMTP id 8mr963037qky.176.1607377837079;
        Mon, 07 Dec 2020 13:50:37 -0800 (PST)
Date: Mon, 7 Dec 2020 16:50:32 -0500
From: Tom Rini <trini@konsulko.com>
To: Paul Menzel <pmenzel@molgen.mpg.de>
Cc: Wim Vervoorn <wvervoorn@eltan.com>,
	The development of GNU GRUB <grub-devel@gnu.org>,
	Daniel Kiper <daniel.kiper@oracle.com>, coreboot@coreboot.org,
	LKML <linux-kernel@vger.kernel.org>,
	systemd-devel@lists.freedesktop.org,
	trenchboot-devel@googlegroups.com,
	U-Boot Mailing List <u-boot@lists.denx.de>, x86@kernel.org,
	xen-devel@lists.xenproject.org, alecb@umass.edu,
	alexander.burmashev@oracle.com, allen.cryptic@gmail.com,
	andrew.cooper3@citrix.com, ard.biesheuvel@linaro.org,
	"btrotter@gmail.com" <btrotter@gmail.com>,
	dpsmith@apertussolutions.com, eric.devolder@oracle.com,
	eric.snowberg@oracle.com, hpa@zytor.com, hun@n-dimensional.de,
	javierm@redhat.com, joao.m.martins@oracle.com,
	kanth.ghatraju@oracle.com, konrad.wilk@oracle.com,
	krystian.hebel@3mdeb.com, leif@nuviainc.com,
	lukasz.hawrylko@intel.com, luto@amacapital.net,
	michal.zygowski@3mdeb.com, Matthew Garrett <mjg59@google.com>,
	mtottenh@akamai.com,
	Vladimir 'phcoder' Serbinenko <phcoder@gmail.com>,
	piotr.krol@3mdeb.com, pjones@redhat.com, roger.pau@citrix.com,
	ross.philipson@oracle.com, tyhicks@linux.microsoft.com,
	Heinrich Schuchardt <xypron.glpk@gmx.de>
Subject: Re: [SPECIFICATION RFC] The firmware and bootloader log specification
Message-ID: <20201207215032.GN32272@bill-the-cat>
References: <20201113235242.k6fzlwmwm2xqhqsi@tomti.i.net-space.pl>
 <CAODwPW9dxvMfXY=92pJNGazgYqcynAk72EkzOcmF7JZXhHTwSQ@mail.gmail.com>
 <6c1e79be210549949c30253a6cfcafc1@Eltsrv03.Eltan.local>
 <9b614471-0395-88a5-1347-66417797e39d@molgen.mpg.de>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha512;
	protocol="application/pgp-signature"; boundary="/HtAX+RUajEpU4A/"
Content-Disposition: inline
In-Reply-To: <9b614471-0395-88a5-1347-66417797e39d@molgen.mpg.de>
X-Clacks-Overhead: GNU Terry Pratchett
User-Agent: Mutt/1.9.4 (2018-02-28)


--/HtAX+RUajEpU4A/
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Fri, Dec 04, 2020 at 02:23:23PM +0100, Paul Menzel wrote:
> Dear Wim, dear Daniel,
> 
> 
> First, thank you for including all parties in the discussion.
> On 04.12.20 at 13:52, Wim Vervoorn wrote:
> 
> > I agree with you. Using an existing standard is better than inventing
> > a new one in this case. I think using the coreboot logging is a good
> > idea as there is indeed a lot of support already available and it is
> > lightweight and simple.
> In my opinion coreboot's format is lacking, in that it does not record the
> timestamp, and the log level is not stored as metadata, but (in coreboot)
> only used to decide whether to print the message or not.
> 
> I agree with you that an existing standard should be used, and in my
> opinion it's the Linux message format. That is most widely supported, and
> existing tools could then also work with pre-Linux messages.
> 
> Sean Hudson from Mentor Graphics presented that idea at Embedded Linux
> Conference Europe 2016 [1]. No idea if anything came out of that effort.
> (Unfortunately, I couldn't find an email. Does somebody have contacts at
> Mentor to find out how to reach him?)

I believe the main thing that came out of this was the reminder that
there was an even older attempt by U-Boot to have such a mechanism, and
that at the time getting the work accepted in Linux faced one hurdle or
another.

That said, I too agree with taking what's already a de facto standard,
the coreboot logging, and expanding on it as needed.

-- 
Tom

--/HtAX+RUajEpU4A/
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQGzBAABCgAdFiEEGjx/cOCPqxcHgJu/FHw5/5Y0tywFAl/Oo6UACgkQFHw5/5Y0
tywf8Qv+L4APGvoo04UfMbodnLWALTyJgHeFXO5lXNUc1MVcmDGlZImiPdj72ky9
E8YqQQS1D5zoRlQrHXefKqKxUHPcmHnSYGkwC3xfBX7yrvxB8ggEbbrA1uWBZm+2
mXpJhP9wPwUzAJYGriknNOA4Ly6UVpgSljQCwKSRPSVAOXr90A9bWsQLktAqrkH5
QdWwrpKAS1XaOPvnWYWyFs6dTQjqxfGngG+9zWio8JFwj9tFHTvZgI3Nqzi+8+N5
7bBRqyvEkpni2VoQi20RQSiWwdVw21r8ezlP4Zx8HkW4LqoqZIEpY2pa2UVkDb8b
phjIxsZqG4lCy9sP4byqcAQnu1h0FFzSdKFuBpRJi3VJaQ0gJ2g0ySPzBSCDu4Va
5qIVE/64uMhIhgU3cYr+7pi/bgw8LoGNnjOBh9d4CVYh61M2EWSEA9f227tlAWlJ
YK0wpz6REoq0TRpRHwr313pkdMHefe9dWaJ2ul4tSud420mQ+zzFZ+ghLywLSo0w
vAPmRdwM
=tVGW
-----END PGP SIGNATURE-----

--/HtAX+RUajEpU4A/--


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 05:22:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 05:22:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47129.83445 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmVRo-0007IW-UW; Tue, 08 Dec 2020 05:22:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47129.83445; Tue, 08 Dec 2020 05:22:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmVRo-0007IP-RO; Tue, 08 Dec 2020 05:22:00 +0000
Received: by outflank-mailman (input) for mailman id 47129;
 Tue, 08 Dec 2020 05:21:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IMz/=FM=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1kmVRn-0007IK-MP
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 05:21:59 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.5.45]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0f2f351c-2adc-42a6-8668-09a43f694c72;
 Tue, 08 Dec 2020 05:21:57 +0000 (UTC)
Received: from DB6PR07CA0119.eurprd07.prod.outlook.com (2603:10a6:6:2c::33) by
 VI1PR08MB4608.eurprd08.prod.outlook.com (2603:10a6:803:c0::22) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3632.17; Tue, 8 Dec 2020 05:21:52 +0000
Received: from DB5EUR03FT028.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:2c:cafe::fe) by DB6PR07CA0119.outlook.office365.com
 (2603:10a6:6:2c::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.5 via Frontend
 Transport; Tue, 8 Dec 2020 05:21:52 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT028.mail.protection.outlook.com (10.152.20.99) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3632.17 via Frontend Transport; Tue, 8 Dec 2020 05:21:52 +0000
Received: ("Tessian outbound 8b6e0bb22f1c:v71");
 Tue, 08 Dec 2020 05:21:52 +0000
Received: from c9c30700e293.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 FC184327-F0B3-4E7C-A1FB-B0015BC7E11B.1; 
 Tue, 08 Dec 2020 05:21:36 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c9c30700e293.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 08 Dec 2020 05:21:36 +0000
Received: from DU2PR04CA0032.eurprd04.prod.outlook.com (2603:10a6:10:234::7)
 by DBBPR08MB4725.eurprd08.prod.outlook.com (2603:10a6:10:f5::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.18; Tue, 8 Dec
 2020 05:21:34 +0000
Received: from DB5EUR03FT048.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:234:cafe::8e) by DU2PR04CA0032.outlook.office365.com
 (2603:10a6:10:234::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.20 via Frontend
 Transport; Tue, 8 Dec 2020 05:21:34 +0000
Received: from nebula.arm.com (40.67.248.234) by
 DB5EUR03FT048.mail.protection.outlook.com (10.152.21.28) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.3632.17 via Frontend Transport; Tue, 8 Dec 2020 05:21:33 +0000
Received: from AZ-NEU-EX01.Emea.Arm.com (10.251.26.4) by AZ-NEU-EX03.Arm.com
 (10.251.24.31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2044.4; Tue, 8 Dec 2020
 05:21:24 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX01.Emea.Arm.com
 (10.251.26.4) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1779.2; Tue, 8 Dec
 2020 05:21:23 +0000
Received: from entos-d05.shanghai.arm.com (10.169.212.212) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2044.4 via Frontend
 Transport; Tue, 8 Dec 2020 05:21:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f2f351c-2adc-42a6-8668-09a43f694c72
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xke5hNIj05zvXJb580XIY9RhZFz2LEBvBYMvDx7nia0=;
 b=eApz4/JVFn9ghlYGv8RGBZIeYBnuN4CsDdOc+tTWTi5cwuyxX5wWCcPCr1faVOCToeXRvDEjOm1H5PkTHAOXln6pIv/Ej15VOsNsIE8TTmZoCFjtPQmeL4xkgHuE80a9Tnsgvf2A8CVxBjvNRVjSs8CxZU0H37M4QJ4rJEt77Vc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 1591b7bbf81c0587
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EXY2xiSKhQFnb7+Pr2hQ11p5IaLU+sCbSTDAj2KtFsYSBI+3dLo5uGTii2uL6QoAuNAveH7V/8Jn+NIuZL38kvPwwUjNWFXOyOw+oK+WVV/xUxTNDRMTxdAHoQz+wBnWs2e5TC11bBx4mHtA0p+vOaksSo6+eRxoY5feslinQ8JrmoIipgPZuwZgOy6GtvmjK83FN3mFBxxCtZg22p1kCF1eKreqtB0kISgUJ1qHybOpSpURmTGGk1Q1iiO/7TNmEKMa9qepOYNJAazS68J53qpMJVIPSc/Wu5MTKBMQi2sOpd7ckk0W2zw1lshygtCLfbaMgeE33iL+703WCE3jFw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xke5hNIj05zvXJb580XIY9RhZFz2LEBvBYMvDx7nia0=;
 b=aMuOeDRa14qZ2TPx0V8xyvpZDbP4ngBszbqvRH+sxk1FkVtnxoRCuUmGdt+FJY1WWnYMhbei0cxoVOHJCAwnb4WhpV2tLCQUYieMIZZCMH8ze9QqVvNLaxFL4W4EYQC8/g0eup1x5r/CJXFk+uyrPx8n+alFboy0dynDdRJxD/2U7L0gyZM5u4SMy2oTe2zUteScoMfSCnuQxzzo9PXgUY+W4tErHwZvQLg6Fiwjj8DBU6jIERfwRr+rt3dtIwe+RfxZgglWIBsm7EKPG7HRjjsnhr77t16Dz2CuVNgUKEV/6Q4LOMOEEwKNEq7G9vXnimng0oMHMkD2t+3998YMYQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xke5hNIj05zvXJb580XIY9RhZFz2LEBvBYMvDx7nia0=;
 b=eApz4/JVFn9ghlYGv8RGBZIeYBnuN4CsDdOc+tTWTi5cwuyxX5wWCcPCr1faVOCToeXRvDEjOm1H5PkTHAOXln6pIv/Ej15VOsNsIE8TTmZoCFjtPQmeL4xkgHuE80a9Tnsgvf2A8CVxBjvNRVjSs8CxZU0H37M4QJ4rJEt77Vc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=pass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com;
From: Penny Zheng <penny.zheng@arm.com>
To: <xen-devel@lists.xenproject.org>, <sstabellini@kernel.org>,
	<julien@xen.org>
CC: <Bertrand.Marquis@arm.com>, <Penny.Zheng@arm.com>, <Kaly.Xin@arm.com>,
	<Wei.Chen@arm.com>, <nd@arm.com>
Subject: [RFC] design: design doc for 1:1 direct-map
Date: Tue, 8 Dec 2020 13:21:13 +0800
Message-ID: <20201208052113.1641514-1-penny.zheng@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-Office365-Filtering-HT: Tenant
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 201b1dd2-622d-4e28-6c0e-08d89b392c0d
X-MS-TrafficTypeDiagnostic: DBBPR08MB4725:|VI1PR08MB4608:
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB46080845A998D3FA724081BFF7CD0@VI1PR08MB4608.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 QV26fuKh7gMSJRrrYk8OLIQf7uM7NWf2O0M94r/n23WpZAjOwsH5r0St2pm6i4XB60oVC8uptGMahg1Tqk2QAofaL4xr/L9e1m0dTTYgxxHItS7v29gzUM6o90lIjJ/Q/iC5R4g11z8yccw6MLtNCfKU1xjKLm/rhshy7DOLZlSB5ENTUXvEb2iGRRj8QwMGTvrpsk6IfnSupAmO08quWFc+nGHyv9zGzuqVZViNC4EJxw8gsJw5CWT9LenII2YwluhWzqBoiGsXAvkmo2Jt/nHyMpoPGCnwo9wx49SnANx7e0Ub/yXD437xzZW2iKlgK0ckjoFDotI7DsdSzmqLW+4SIlwJB1i2FOieZNd6bOsVxt9u5E9UGAy+SFJ7NyJ/lRsrEUoIDRjD1GPzr6p/EDQjK14LzxqYYhQxbcwmnYosK4Kv7zK5VPo0C/d3bd7+JgGZVRqRqZ/LLSMN2VOafQcJazstK67FL51+BinJY4C0y0nUf6QYypJpXvYhuSnH
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(4636009)(346002)(376002)(136003)(46966005)(6666004)(70586007)(86362001)(7696005)(83380400001)(47076004)(82310400003)(110136005)(5660300002)(36756003)(54906003)(356005)(1076003)(81166007)(4326008)(426003)(26005)(336012)(44832011)(8676002)(70206006)(186003)(508600001)(2906002)(2616005)(8936002)(966005)(21314003);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4725
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT028.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	04e3f927-46ef-4eb5-817e-08d89b3920db
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	fPlE419AV/rIIOGpj60fX7BYdD4fAWk69QLpYm+O/TBR+2z5mpwsQKo4SVYoZDNF+5oJQ6/FcRuGjcm+w98/QKJxKkoTEXLWGwDNGN4T7OfEcjhJ2WPHxovu6GmmRKFcLPyImUjPJc1il5Fk8F3KnLfO6yoNV8zK/xxYbbnAAmpuPZmHQ0RugPkWcwLJ7XjREoOGWYZp+VW6lRNZlng7zgl2joCdNUtXqOnOORIDBhPCEzQLfRflX5Acp4aw86Q7MS6bQgPFsavJQqUe2k8sFFnby29fRDZ2HVBwWF3yQYQCZXIuoHLD/Oa1fYW+rqRvnrniZL8irP878rO9DC9JuU+t6bDH6OFzKgSndqgRWg38YniVujq+dx8mt3nbBdad9vtB0DQM2CXgl39dZVwQjr1FytjpHtS6sUwQRbvM/avlEpC1OnMG7OOkCU5HrtaQmO6uicZcFJKuf0y5B42YfKmt3Prfv5wjh5W9tTZwDk2IZBmWHqZUePAwc/RjufUG
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(136003)(376002)(46966005)(26005)(70586007)(2616005)(186003)(426003)(82310400003)(36756003)(966005)(70206006)(336012)(44832011)(1076003)(83380400001)(7696005)(5660300002)(8936002)(4326008)(110136005)(54906003)(47076004)(2906002)(8676002)(86362001)(508600001)(81166007)(6666004)(21314003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 Dec 2020 05:21:52.7373
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 201b1dd2-622d-4e28-6c0e-08d89b392c0d
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT028.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4608

This is a draft design for the infrastructure, not yet ready for
upstream (hence the RFC tag), but we thought it would be useful to start
a discussion with the community first.

Create a design doc for 1:1 direct-map.
It aims to describe why and how we allocate 1:1 direct-map (guest physical
== physical) domains.

This document is partly based on Stefano Stabellini's patch series v1:
[direct-map DomUs](
https://lists.xenproject.org/archives/html/xen-devel/2020-04/msg00707.html).

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
The part about allocating 1:1 direct-map domains with user-defined
memory regions will be covered in the upcoming design for static memory
allocation.
---
 docs/designs/1_1_direct-map.md | 87 ++++++++++++++++++++++++++++++++++
 1 file changed, 87 insertions(+)
 create mode 100644 docs/designs/1_1_direct-map.md

diff --git a/docs/designs/1_1_direct-map.md b/docs/designs/1_1_direct-map.md
new file mode 100644
index 0000000000..ce3e2c77fd
--- /dev/null
+++ b/docs/designs/1_1_direct-map.md
@@ -0,0 +1,87 @@
+# Preface
+
+The document is an early draft for the direct-map memory map
+(`guest physical == physical`) of domUs. Right now, it is constrained to the
+Arm architecture.
+
+It aims to describe why and how a guest would be created as a direct-map domain.
+
+This document is partly based on Stefano Stabellini's patch series v1:
+[direct-map DomUs](
+https://lists.xenproject.org/archives/html/xen-devel/2020-04/msg00707.html).
+
+This is a first draft and some questions are still unanswered. When this is the
+case, the text shall contain XXX.
+
+# Introduction
+
+## Background
+
+Cases where a domU needs a direct-map memory map:
+
+  * IOMMU not present in the system.
+  * IOMMU disabled, since it doesn't cover a specific device.
+  * IOMMU disabled, since it doesn't have enough bandwidth.
+  * IOMMU disabled, since it adds too much latency.
+
+*WARNING:*
+Users should be aware that it is not always secure to assign a device
+without IOMMU/SMMU protection.
+A guest having access to DMA-capable hardware must be trusted, since it
+could use the DMA engine to access any other memory area.
+Guests could use an additional security hardware component, such as a NoC
+firewall or a System MPU, to protect the memory.
+
+## Design
+
+The implementation may cover the following aspects:
+
+### Native addresses and IRQ numbers for the GIC and UART (vPL011)
+
+Today, fixed addresses and IRQ numbers are used to map the GIC and UART
+(vPL011) in DomUs, which may cause clashes in direct-map domains.
+So, using native addresses and IRQ numbers for the GIC and UART (vPL011) in
+direct-map domains is necessary.
+For example:
+For the virtual interrupt of vPL011: instead of always using `GUEST_VPL011_SPI`,
+try to reuse the physical SPI number if possible.
+
+### Device tree option: `direct-map`
+
+Introduce a new device tree option `direct-map` for direct-map domains.
+When users want to create a direct-map domain (other than DOM0), the
+`direct-map` property needs to be added under the appropriate `/chosen/domUx`.
+
+
+            chosen {
+                ...
+                domU1 {
+                    compatible = "xen,domain";
+                    #address-cells = <0x2>;
+                    #size-cells = <0x1>;
+                    direct-map;
+                    ...
+                };
+                ...
+            };
+
+If users are using ImageBuilder, they can add something like the following to
+boot.source:
+
+    fdt set /chosen/domU1 direct-map
+
+Users could also use `xl` to create direct-map domains; just use the following
+config option: `direct-map=true`.
+
+### direct-map guest memory allocation
+
+Func `allocate_memory_direct_map` is based on `allocate_memory_11`, and shall
+be refined to allocate memory for all direct-map domains, including DOM0.
+Roughly speaking, firstly, it tries to allocate arbitrary memory chunk of
+requested size from domain sub-allocator(`alloc_domheap_pages`). If fail,
+split the chunk into halves, and re-try, until it succeed or bail out with the
+smallest chunk size.
+Then, `insert_11_bank` shall insert above allocated pages into a memory bank,
+which are ordered by address, and also set up guest P2M mapping(
+`guest_physmap_add_page`) to ensure `gfn == mfn`.
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 06:32:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 06:32:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47148.83483 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmWXg-0005Ya-NB; Tue, 08 Dec 2020 06:32:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47148.83483; Tue, 08 Dec 2020 06:32:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmWXg-0005YT-K3; Tue, 08 Dec 2020 06:32:08 +0000
Received: by outflank-mailman (input) for mailman id 47148;
 Tue, 08 Dec 2020 06:32:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmWXf-0005YL-I1; Tue, 08 Dec 2020 06:32:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmWXf-0006p3-AR; Tue, 08 Dec 2020 06:32:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmWXf-0002gl-0e; Tue, 08 Dec 2020 06:32:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kmWXf-0000OP-0A; Tue, 08 Dec 2020 06:32:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fGkU3rBMxibjSw79PpPQ7BkFVNzwa0cBeiYbngmkimM=; b=Bb1bC7WEdreeXVs84n43AsE/jH
	6owfeqrvlc+X4dMIj+O64JxDScHUwqTV7JxrYTVi+Vm/Ynbd9j9trReht6iJn1evWxm1c3+z2nx9D
	edG/ekWOf/Dy0X7/cLr9X8vpYRY4n/0lclnkPFcp6wzbQH7lN+nJIFi4xrcUufVapB5c=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157266-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157266: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=cd796ed3345030aa1bb332fe5c793b3dddaf56e7
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 08 Dec 2020 06:32:07 +0000

flight 157266 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157266/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                cd796ed3345030aa1bb332fe5c793b3dddaf56e7
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  129 days
Failing since        152366  2020-08-01 20:49:34 Z  128 days  220 attempts
Testing same since   157266  2020-12-07 22:40:29 Z    0 days    1 attempts

------------------------------------------------------------
3652 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 700707 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 06:45:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 06:45:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47159.83505 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmWkC-0006gQ-6P; Tue, 08 Dec 2020 06:45:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47159.83505; Tue, 08 Dec 2020 06:45:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmWkC-0006gJ-2q; Tue, 08 Dec 2020 06:45:04 +0000
Received: by outflank-mailman (input) for mailman id 47159;
 Tue, 08 Dec 2020 06:45:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LMSP=FM=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kmWkA-0006gE-Rb
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 06:45:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3281bf9f-3de6-4ca8-ac92-4c3cf2b89ce8;
 Tue, 08 Dec 2020 06:45:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BAE4DAD41;
 Tue,  8 Dec 2020 06:45:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3281bf9f-3de6-4ca8-ac92-4c3cf2b89ce8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607409900; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=B4qg8nuNsxBlC4DVeh6XkCDOdkWU5AaPYPIvqXZwoWc=;
	b=scyISi7ME3Gq3/i8CXTrCBsOP935pKa2XKl78EaIrNxlDRWowtnUGxdtAm7s9LrfREKdOy
	FWK7WbcTmBREAojJC7QOGleFGoP6IeP8OdBIrHuGQNH9yqws4z+BeJ8lTdF+81DYGt6k27
	a6hiFsmx7u9vzAhXvxzUhSHuVuKoX5M=
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 open list <linux-kernel@vger.kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20201207133024.16621-1-jgross@suse.com>
 <20201207133024.16621-3-jgross@suse.com>
 <CAKf6xpuqdY=TctOjNsnTTexeBpkV+HMkOHFsAd4vxUudBpxizA@mail.gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH 2/2] xen: don't use page->lru for ZONE_DEVICE memory
Message-ID: <72bc4417-076c-78f0-9c7e-5a9c95e79fb2@suse.com>
Date: Tue, 8 Dec 2020 07:45:00 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <CAKf6xpuqdY=TctOjNsnTTexeBpkV+HMkOHFsAd4vxUudBpxizA@mail.gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="cV2DFq5HhB0DJvGp9vWxAtDSVVIuHsGaz"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--cV2DFq5HhB0DJvGp9vWxAtDSVVIuHsGaz
Content-Type: multipart/mixed; boundary="AAtAYjElSB8dHXTdN4ldFSOorn5hlu7Z3";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 open list <linux-kernel@vger.kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Message-ID: <72bc4417-076c-78f0-9c7e-5a9c95e79fb2@suse.com>
Subject: Re: [PATCH 2/2] xen: don't use page->lru for ZONE_DEVICE memory
References: <20201207133024.16621-1-jgross@suse.com>
 <20201207133024.16621-3-jgross@suse.com>
 <CAKf6xpuqdY=TctOjNsnTTexeBpkV+HMkOHFsAd4vxUudBpxizA@mail.gmail.com>
In-Reply-To: <CAKf6xpuqdY=TctOjNsnTTexeBpkV+HMkOHFsAd4vxUudBpxizA@mail.gmail.com>

--AAtAYjElSB8dHXTdN4ldFSOorn5hlu7Z3
Content-Type: multipart/mixed;
 boundary="------------66A30C87FE131471BF04702D"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------66A30C87FE131471BF04702D
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 07.12.20 21:48, Jason Andryuk wrote:
> On Mon, Dec 7, 2020 at 8:30 AM Juergen Gross <jgross@suse.com> wrote:
>>
>> Commit 9e2369c06c8a18 ("xen: add helpers to allocate unpopulated
>> memory") introduced usage of ZONE_DEVICE memory for foreign memory
>> mappings.
>>
>> Unfortunately this collides with using page->lru for Xen backend
>> private page caches.
>>
>> Fix that by using page->zone_device_data instead.
>>
>> Fixes: 9e2369c06c8a18 ("xen: add helpers to allocate unpopulated memory")
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>
> Would it make sense to add BUG_ON(is_zone_device_page(page)) and the
> opposite as appropriate to cache_enq?

No, I don't think so. At least in the CONFIG_ZONE_DEVICE case the
initial list in a PV dom0 is populated from extra memory (basically
the same, but not marked as zone device memory explicitly).

Juergen

--------------66A30C87FE131471BF04702D
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 07:23:48 2020
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	bertrand.marquis@arm.com,
	wei.chen@arm.com
Subject: [PATCH] xen/arm: Add workaround for Cortex-A53 erratum #845719
Date: Tue,  8 Dec 2020 08:23:27 +0100
Message-Id: <20201208072327.11890-1-michal.orzel@arm.com>
X-Mailer: git-send-email 2.28.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When executing in aarch32 state at EL0, a load at EL0 from a
virtual address that matches the bottom 32 bits of the virtual address
used by a recent load at (aarch64) EL1 might return incorrect data.

The workaround is to insert a write of the contextidr_el1 register
on exception return to an aarch32 guest.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
 docs/misc/arm/silicon-errata.txt |  1 +
 xen/arch/arm/Kconfig             | 19 +++++++++++++++++++
 xen/arch/arm/arm64/entry.S       |  9 +++++++++
 xen/arch/arm/cpuerrata.c         |  8 ++++++++
 xen/include/asm-arm/cpufeature.h |  3 ++-
 5 files changed, 39 insertions(+), 1 deletion(-)

diff --git a/docs/misc/arm/silicon-errata.txt b/docs/misc/arm/silicon-errata.txt
index 27bf957ebf..fa3d9af63d 100644
--- a/docs/misc/arm/silicon-errata.txt
+++ b/docs/misc/arm/silicon-errata.txt
@@ -45,6 +45,7 @@ stable hypervisors.
 | ARM            | Cortex-A53      | #827319         | ARM64_ERRATUM_827319    |
 | ARM            | Cortex-A53      | #824069         | ARM64_ERRATUM_824069    |
 | ARM            | Cortex-A53      | #819472         | ARM64_ERRATUM_819472    |
+| ARM            | Cortex-A53      | #845719         | ARM64_ERRATUM_845719    |
 | ARM            | Cortex-A55      | #1530923        | N/A                     |
 | ARM            | Cortex-A57      | #852523         | N/A                     |
 | ARM            | Cortex-A57      | #832075         | ARM64_ERRATUM_832075    |
diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index f5b1bcda03..6bea393555 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -244,6 +244,25 @@ config ARM_ERRATUM_858921
 
 	  If unsure, say Y.
 
+config ARM64_ERRATUM_845719
+	bool "Cortex-A53: 845719: A load might read incorrect data"
+	default y
+	help
+	  This option adds an alternative code sequence to work around ARM
+	  erratum 845719 on Cortex-A53 parts up to r0p4.
+
+	  When executing in aarch32 state at EL0, a load at EL0 from a
+	  virtual address that matches the bottom 32 bits of the virtual address
+	  used by a recent load at (aarch64) EL1 might return incorrect data.
+
+	  The workaround is to insert a write of the contextidr_el1 register
+	  on exception return to an aarch32 guest.
+	  Please note that this does not necessarily enable the workaround,
+	  as it depends on the alternative framework, which will only patch
+	  Xen if an affected CPU is detected.
+
+	  If unsure, say Y.
+
 config ARM64_WORKAROUND_REPEAT_TLBI
 	bool
 
diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S
index 175ea2981e..ef3336f34a 100644
--- a/xen/arch/arm/arm64/entry.S
+++ b/xen/arch/arm/arm64/entry.S
@@ -96,6 +96,15 @@
         msr     SPSR_fiq, x22
         msr     SPSR_irq, x23
 
+#ifdef CONFIG_ARM64_ERRATUM_845719
+alternative_if ARM64_WORKAROUND_845719
+        /* contextidr_el1 is not accessible from an aarch32 guest, so
+         * we can write xzr to it.
+         */
+        msr     contextidr_el1, xzr
+alternative_else_nop_endif
+#endif
+
         add     x21, sp, #UREGS_SPSR_und
         ldp     w22, w23, [x21]
         msr     SPSR_und, x22
diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
index b398d480f1..8959d4d4dc 100644
--- a/xen/arch/arm/cpuerrata.c
+++ b/xen/arch/arm/cpuerrata.c
@@ -491,6 +491,14 @@ static const struct arm_cpu_capabilities arm_errata[] = {
         .capability = ARM_WORKAROUND_858921,
         MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
     },
+#endif
+#ifdef CONFIG_ARM64_ERRATUM_845719
+    {
+        /* Cortex-A53 r0p[01234] */
+        .desc = "ARM erratum 845719",
+        .capability = ARM64_WORKAROUND_845719,
+        MIDR_RANGE(MIDR_CORTEX_A53, 0x00, 0x04),
+    },
 #endif
     {
         /* Neoverse r0p0 - r2p0 */
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index c7b5052992..1165a1eb62 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -47,8 +47,9 @@
 #define ARM64_WORKAROUND_AT_SPECULATE 9
 #define ARM_WORKAROUND_858921 10
 #define ARM64_WORKAROUND_REPEAT_TLBI 11
+#define ARM64_WORKAROUND_845719 12
 
-#define ARM_NCAPS           12
+#define ARM_NCAPS           13
 
 #ifndef __ASSEMBLY__
 
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 07:52:38 2020
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Oleksandr'" <olekstysh@gmail.com>,
	"'Jan Beulich'" <jbeulich@suse.com>
Cc: "'Oleksandr Tyshchenko'" <oleksandr_tyshchenko@epam.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'Roger Pau Monné'" <roger.pau@citrix.com>,
	"'Wei Liu'" <wl@xen.org>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Jun Nakajima'" <jun.nakajima@intel.com>,
	"'Kevin Tian'" <kevin.tian@intel.com>,
	"'Julien Grall'" <julien.grall@arm.com>,
	<xen-devel@lists.xenproject.org>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com> <1606732298-22107-12-git-send-email-olekstysh@gmail.com> <742899b6-964b-be75-affc-31342c07133a@suse.com> <d7d867d3-b508-0c6c-191f-264e1e08bf39@gmail.com>
In-Reply-To: <d7d867d3-b508-0c6c-191f-264e1e08bf39@gmail.com>
Subject: RE: [PATCH V3 11/23] xen/ioreq: Move x86's io_completion/io_req fields to struct vcpu
Date: Tue, 8 Dec 2020 07:52:14 -0000
Message-ID: <0d3c01d6cd37$0c013770$2403a650$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQKk0D4Qme59XF0a0h96d36zIOxDhQFXwruAAXaZVD8BqIA2UKgtE7NA

> -----Original Message-----
> From: Oleksandr <olekstysh@gmail.com>
> Sent: 07 December 2020 21:00
> To: Jan Beulich <jbeulich@suse.com>; Paul Durrant <paul@xen.org>
> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Andrew Cooper <andrew.cooper3@citrix.com>;
> Roger Pau Monné <roger.pau@citrix.com>; Wei Liu <wl@xen.org>; George Dunlap
> <george.dunlap@citrix.com>; Ian Jackson <iwj@xenproject.org>; Julien Grall <julien@xen.org>; Stefano
> Stabellini <sstabellini@kernel.org>; Jun Nakajima <jun.nakajima@intel.com>; Kevin Tian
> <kevin.tian@intel.com>; Julien Grall <julien.grall@arm.com>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH V3 11/23] xen/ioreq: Move x86's io_completion/io_req fields to struct vcpu
>
>
> On 07.12.20 14:32, Jan Beulich wrote:
>
> Hi Jan, Paul.
>
> > On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
> >> --- a/xen/arch/x86/hvm/emulate.c
> >> +++ b/xen/arch/x86/hvm/emulate.c
> >> @@ -142,8 +142,8 @@ void hvmemul_cancel(struct vcpu *v)
> >>   {
> >>       struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
> >>
> >> -    vio->io_req.state = STATE_IOREQ_NONE;
> >> -    vio->io_completion = HVMIO_no_completion;
> >> +    v->io.req.state = STATE_IOREQ_NONE;
> >> +    v->io.completion = IO_no_completion;
> >>       vio->mmio_cache_count = 0;
> >>       vio->mmio_insn_bytes = 0;
> >>       vio->mmio_access = (struct npfec){};
> >> @@ -159,7 +159,7 @@ static int hvmemul_do_io(
> >>   {
> >>       struct vcpu *curr = current;
> >>       struct domain *currd = curr->domain;
> >> -    struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io;
> >> +    struct vcpu_io *vio = &curr->io;
> > Taking just these two hunks: "vio" would now stand for two entirely
> > different things. I realize the name is applicable to both, but I
> > wonder if such naming isn't going to risk confusion. Despite being
> > relatively familiar with the involved code, I've been repeatedly
> > unsure what exactly "vio" covers, and needed to go back to the
>
>   Good comment... Agree that with the naming scheme in the current patch
> the code became a little bit confusing to read.
>
>
> > header. So together with the name possible adjustment mentioned
> > further down, maybe "vcpu_io" also wants its name changed, such that
> > the variable then also could sensibly be named (slightly)
> > differently? struct vcpu_io_state maybe? Or alternatively rename
> > variables of type struct hvm_vcpu_io * to hvio or hio? Otoh the
> > savings aren't very big for just ->io, so maybe better to stick to
> > the prior name with the prior type, and not introduce local
> > variables at all for the new field, like you already have it in the
> > former case?
> I would much prefer the last suggestion, which is "not introduce local
> variables at all for the new field" (I admit I was thinking almost the
> same, but haven't chosen this direction).
> But I am OK with any suggestions here. Paul, what do you think?
>

I personally don't think there is that much risk of confusion. If there is a desire to disambiguate, though, I would go the route of naming hvm_vcpu_io locals 'hvio'.

>
> >
> >> --- a/xen/include/xen/sched.h
> >> +++ b/xen/include/xen/sched.h
> >> @@ -145,6 +145,21 @@ void evtchn_destroy_final(struct domain *d); /* from complete_domain_destroy */
> >>
> >>   struct waitqueue_vcpu;
> >>
> >> +enum io_completion {
> >> +    IO_no_completion,
> >> +    IO_mmio_completion,
> >> +    IO_pio_completion,
> >> +#ifdef CONFIG_X86
> >> +    IO_realmode_completion,
> >> +#endif
> >> +};
> > I'm not entirely happy with io_ / IO_ here - they seem a little
> > too generic. How about ioreq_ / IOREQ_ respectively?
>
> I am OK, would like to hear Paul's opinion on both questions.
>

No, I think the 'IO_' prefix is better. They relate to a field in the vcpu_io struct. An alternative might be 'VIO_'...

>
> >
> >> +struct vcpu_io {
> >> +    /* I/O request in flight to device model. */
> >> +    enum io_completion   completion;

... in which case, you could also name the enum 'vio_completion'.

  Paul

> >> +    ioreq_t              req;
> >> +};
> >> +



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 08:59:50 2020
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Michal Orzel <Michal.Orzel@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Wei Chen <Wei.Chen@arm.com>
Subject: Re: [PATCH] xen/arm: Add workaround for Cortex-A53 erratum #845719
Thread-Topic: [PATCH] xen/arm: Add workaround for Cortex-A53 erratum #845719
Thread-Index: AQHWzTMYnytbZyotvEaEgHJUGuOUO6ns5mAA
Date: Tue, 8 Dec 2020 08:59:08 +0000
Message-ID: <8634155E-0EF2-4367-9DC1-243DFB148563@arm.com>
References: <20201208072327.11890-1-michal.orzel@arm.com>
In-Reply-To: <20201208072327.11890-1-michal.orzel@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
Content-ID: <262C9EB671EC97429409E3D1A50F4491@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Hi,

> On 8 Dec 2020, at 07:23, Michal Orzel <Michal.Orzel@arm.com> wrote:
>=20
> When executing in aarch32 state at EL0, a load at EL0 from a
> virtual address that matches the bottom 32 bits of the virtual address
> used by a recent load at (aarch64) EL1 might return incorrect data.
>=20
> The workaround is to insert a write of the contextidr_el1 register
> on exception return to an aarch32 guest.
>=20
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Thanks
Bertrand

> ---
> docs/misc/arm/silicon-errata.txt |  1 +
> xen/arch/arm/Kconfig             | 19 +++++++++++++++++++
> xen/arch/arm/arm64/entry.S       |  9 +++++++++
> xen/arch/arm/cpuerrata.c         |  8 ++++++++
> xen/include/asm-arm/cpufeature.h |  3 ++-
> 5 files changed, 39 insertions(+), 1 deletion(-)
>
> diff --git a/docs/misc/arm/silicon-errata.txt b/docs/misc/arm/silicon-errata.txt
> index 27bf957ebf..fa3d9af63d 100644
> --- a/docs/misc/arm/silicon-errata.txt
> +++ b/docs/misc/arm/silicon-errata.txt
> @@ -45,6 +45,7 @@ stable hypervisors.
> | ARM            | Cortex-A53      | #827319         | ARM64_ERRATUM_827319    |
> | ARM            | Cortex-A53      | #824069         | ARM64_ERRATUM_824069    |
> | ARM            | Cortex-A53      | #819472         | ARM64_ERRATUM_819472    |
> +| ARM            | Cortex-A53      | #845719         | ARM64_ERRATUM_845719    |
> | ARM            | Cortex-A55      | #1530923        | N/A                     |
> | ARM            | Cortex-A57      | #852523         | N/A                     |
> | ARM            | Cortex-A57      | #832075         | ARM64_ERRATUM_832075    |
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index f5b1bcda03..6bea393555 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -244,6 +244,25 @@ config ARM_ERRATUM_858921
>
> 	  If unsure, say Y.
>
> +config ARM64_ERRATUM_845719
> +	bool "Cortex-A53: 845719: A load might read incorrect data"
> +	default y
> +	help
> +	  This option adds an alternative code sequence to work around ARM
> +	  erratum 845719 on Cortex-A53 parts up to r0p4.
> +
> +	  When executing in aarch32 state at EL0, a load at EL0 from a
> +	  virtual address that matches the bottom 32 bits of the virtual address
> +	  used by a recent load at (aarch64) EL1 might return incorrect data.
> +
> +	  The workaround is to insert a write of the contextidr_el1 register
> +	  on exception return to an aarch32 guest.
> +	  Please note that this does not necessarily enable the workaround,
> +	  as it depends on the alternative framework, which will only patch
> +	  the kernel if an affected CPU is detected.
> +
> +	  If unsure, say Y.
> +
> config ARM64_WORKAROUND_REPEAT_TLBI
> 	bool
>
> diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S
> index 175ea2981e..ef3336f34a 100644
> --- a/xen/arch/arm/arm64/entry.S
> +++ b/xen/arch/arm/arm64/entry.S
> @@ -96,6 +96,15 @@
>         msr     SPSR_fiq, x22
>         msr     SPSR_irq, x23
>
> +#ifdef CONFIG_ARM64_ERRATUM_845719
> +alternative_if ARM64_WORKAROUND_845719
> +        /* contextidr_el1 is not accessible from aarch32 guest so we can
> +         * write xzr to it
> +         */
> +        msr     contextidr_el1, xzr
> +alternative_else_nop_endif
> +#endif
> +
>         add     x21, sp, #UREGS_SPSR_und
>         ldp     w22, w23, [x21]
>         msr     SPSR_und, x22
> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
> index b398d480f1..8959d4d4dc 100644
> --- a/xen/arch/arm/cpuerrata.c
> +++ b/xen/arch/arm/cpuerrata.c
> @@ -491,6 +491,14 @@ static const struct arm_cpu_capabilities arm_errata[] = {
>         .capability = ARM_WORKAROUND_858921,
>         MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
>     },
> +#endif
> +#ifdef CONFIG_ARM64_ERRATUM_845719
> +    {
> +        /* Cortex-A53 r0p[01234] */
> +        .desc = "ARM erratum 845719",
> +        .capability = ARM64_WORKAROUND_845719,
> +        MIDR_RANGE(MIDR_CORTEX_A53, 0x00, 0x04),
> +    },
> #endif
>     {
>         /* Neoverse r0p0 - r2p0 */
> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
> index c7b5052992..1165a1eb62 100644
> --- a/xen/include/asm-arm/cpufeature.h
> +++ b/xen/include/asm-arm/cpufeature.h
> @@ -47,8 +47,9 @@
> #define ARM64_WORKAROUND_AT_SPECULATE 9
> #define ARM_WORKAROUND_858921 10
> #define ARM64_WORKAROUND_REPEAT_TLBI 11
> +#define ARM64_WORKAROUND_845719 12
>
> -#define ARM_NCAPS           12
> +#define ARM_NCAPS           13
>
> #ifndef __ASSEMBLY__
>
> -- 
> 2.28.0
>
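As an aside, the `MIDR_RANGE(MIDR_CORTEX_A53, 0x00, 0x04)` entry in cpuerrata.c above selects Cortex-A53 r0p0 through r0p4. A standalone sketch of the matching logic it relies on, with field offsets per the architectural MIDR_EL1 layout (the helper names here are hypothetical, not Xen's actual macros):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Illustrative sketch of how a MIDR_RANGE(part, 0x00, 0x04) entry matches
 * affected cores.  Field offsets follow the architectural MIDR_EL1 layout;
 * helper names are hypothetical rather than Xen's actual ones.
 */
#define MIDR_PARTNUM(midr)   (((midr) >> 4) & 0xfff)
#define MIDR_VARIANT(midr)   (((midr) >> 20) & 0xf)
#define MIDR_REVISION(midr)  ((midr) & 0xf)

#define PARTNUM_CORTEX_A53   0xd03

/* rNpM is encoded as (variant << 4) | revision, so r0p4 == 0x04. */
static bool midr_in_range(uint32_t midr, uint16_t partnum,
                          uint8_t rv_min, uint8_t rv_max)
{
    uint8_t rv = (uint8_t)((MIDR_VARIANT(midr) << 4) | MIDR_REVISION(midr));

    return MIDR_PARTNUM(midr) == partnum && rv >= rv_min && rv <= rv_max;
}
```

With this encoding, an A53 r0p5 or r1p0 core correctly falls outside the 0x00..0x04 window and the alternative is not patched in.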



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 09:07:14 2020
Subject: Re: [RFC] design: design doc for 1:1 direct-map
To: Penny Zheng <penny.zheng@arm.com>, xen-devel@lists.xenproject.org,
 sstabellini@kernel.org
Cc: Bertrand.Marquis@arm.com, Kaly.Xin@arm.com, Wei.Chen@arm.com, nd@arm.com,
 Paul Durrant <paul@xen.org>, famzheng@amazon.com
References: <20201208052113.1641514-1-penny.zheng@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <6731d0c1-37df-ade8-7b77-d1032c326111@xen.org>
Date: Tue, 8 Dec 2020 09:07:06 +0000
In-Reply-To: <20201208052113.1641514-1-penny.zheng@arm.com>

Hi Penny,

I am adding Paul and Zheng to the thread as there is similar interest
on the x86 side.

On 08/12/2020 05:21, Penny Zheng wrote:
> This is one draft design for the infrastructure for now, not ready
> for upstream yet (hence the RFC tag), but we thought it'd be useful to
> start a discussion with the community first.
> 
> Create one design doc for 1:1 direct-map.
> It aims to describe why and how we allocate 1:1 direct-map (guest physical
> == physical) domains.
> 
> This document is partly based on Stefano Stabellini's patch series v1:
> [direct-map DomUs](
> https://lists.xenproject.org/archives/html/xen-devel/2020-04/msg00707.html).

May I ask why a different approach?

> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
> For the part regarding allocating 1:1 direct-map domains with user-defined
> memory regions, it will be included in next design of static memory
> allocation.

I don't think you can do without user-defined memory regions (see more 
below).

> ---
>   docs/designs/1_1_direct-map.md | 87 ++++++++++++++++++++++++++++++++++
>   1 file changed, 87 insertions(+)
>   create mode 100644 docs/designs/1_1_direct-map.md
> 
> diff --git a/docs/designs/1_1_direct-map.md b/docs/designs/1_1_direct-map.md
> new file mode 100644
> index 0000000000..ce3e2c77fd
> --- /dev/null
> +++ b/docs/designs/1_1_direct-map.md
> @@ -0,0 +1,87 @@
> +# Preface
> +
> +The document is an early draft for direct-map memory map
> +(`guest physical == physical`) of domUs. And right now, it constrains to ARM

s/constrains/limited/

Aside from the interface to the user, you should be able to re-use the
same code on x86. Note that because the memory layout on x86 is fixed
(always starting at 0), you would only be able to have one direct-mapped
domain.

> +architecture.
> +
> +It aims to describe why and how the guest would be created as direct-map domain.
> +
> +This document is partly based on Stefano Stabellini's patch series v1:
> +[direct-map DomUs](
> +https://lists.xenproject.org/archives/html/xen-devel/2020-04/msg00707.html).
> +
> +This is a first draft and some questions are still unanswered. When this is the
> +case, the text shall contain XXX.
> +
> +# Introduction
> +
> +## Background
> +
> +Cases where domU needs direct-map memory map:
> +
> +  * IOMMU not present in the system.
> +  * IOMMU disabled, since it doesn't cover a specific device.

If the device is not covered by the IOMMU, then why would you want to 
disable the IOMMUs for the rest of the system?

> +  * IOMMU disabled, since it doesn't have enough bandwidth.

I am not sure I understand this one.

> +  * IOMMU disabled, since it adds too much latency.

The list above sounds like direct-map memory would be necessary even 
without device-passthrough. Can you clarify it?

> +
> +*WARNING:
> +Users should be careful that it is not always secure to assign a device without

s/careful/aware/ I think. Also, it is never secure to assign a device 
without IOMMU/SMMU unless you have a replacement.

I would suggest rewording it to something like:

"When the device is not protected by the IOMMU, the administrator should 
make sure that:
    - The device is assigned to a trusted guest
    - You have an additional security mechanism on the platform (e.g 
MPU) to protect the memory."

> +IOMMU/SMMU protection.
> +Users must be aware of this risk, that guests having access to hardware with
> +DMA capacity must be trusted, or it could use the DMA engine to access any
> +other memory area.
> +Guests could use additional security hardware component like NOC, System MPU
> +to protect the memory.

What's the NOC?

> +
> +## Design
> +
> +The implementation may cover following aspects:
> +
> +### Native Address and IRQ numbers for GIC and UART(vPL011)
> +
> +Today, fixed addresses and IRQ numbers are used to map GIC and UART(vPL011)
> +in DomUs. And it may cause potential clash on direct-map domains.
> +So, Using native addresses and irq numbers for GIC, UART(vPL011), in
> +direct-map domains is necessary.
> +e.g.

To me e.g. means example. But below this is not an example, this is a
requirement in order to use the vpl011 on systems without a pl011 UART.

> +For the virtual interrupt of vPL011: instead of always using `GUEST_VPL011_SPI`,
> +try to reuse the physical SPI number if possible.
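The fallback being proposed in the quoted paragraph amounts to something like the sketch below (a minimal illustration; the constant value and helper name are assumptions for this example, not Xen's actual code):

```c
#include <assert.h>

/*
 * Sketch of the proposed vPL011 SPI selection: reuse the physical pl011
 * SPI when the platform has one, otherwise fall back to the fixed
 * GUEST_VPL011_SPI.  The constant value and function name are illustrative.
 */
#define GUEST_VPL011_SPI 32

/* phys_spi < 0 means the platform has no physical pl011 */
static unsigned int vpl011_pick_spi(int phys_spi)
{
    return (phys_spi >= 0) ? (unsigned int)phys_spi : GUEST_VPL011_SPI;
}
```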

How would you find the following regions for a guest using PV drivers:
    - Event channel interrupt
    - Grant table area

> +
> +### Device tree option: `direct_map`
> +
> +Introduce a new device tree option `direct_map` for direct-map domains.
> +Then, when users try to allocate one direct-map domain (except DOM0), the
> +`direct-map` property needs to be added under the appropriate `/chosen/domUx`.
> +
> +
> +            chosen {
> +                ...
> +                domU1 {
> +                    compatible = "xen, domain";
> +                    #address-cells = <0x2>;
> +                    #size-cells = <0x1>;
> +                    direct-map;
> +                    ...
> +                };
> +                ...
> +            };
> +
> +If users are using imagebuilder, they can add to boot.source something like the

This documentation sounds more like something for imagebuilder rather
than Xen itself.

> +following:
> +
> +    fdt set /chosen/domU1 direct-map
> +
> +Users could also use `xl` to create direct-map domains, just use the following
> +config option: `direct-map=true`
> +
> +### direct-map guest memory allocation
> +
> +Func `allocate_memory_direct_map` is based on `allocate_memory_11`, and shall
> +be refined to allocate memory for all direct-map domains, including DOM0.
> +Roughly speaking, it first tries to allocate an arbitrary memory chunk of the
> +requested size from the domain sub-allocator (`alloc_domheap_pages`). If that
> +fails, it splits the chunk into halves and re-tries, until it succeeds or
> +bails out with the smallest chunk size.
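The halving strategy described in the quoted paragraph boils down to a loop like this (an illustrative sketch with a stub allocator standing in for `alloc_domheap_pages`; not the real Xen code):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch of the halving fallback described above.  try_alloc() stands in
 * for Xen's alloc_domheap_pages(); "order" is log2 of the page count.
 * The stub allocator and all names here are illustrative only.
 */
typedef struct page_info { int dummy; } page_info_t;

/* Stub heap: pretend only chunks of order <= 2 can be satisfied. */
static page_info_t *try_alloc(unsigned int order)
{
    static page_info_t pool[4];
    static unsigned int used;

    return (order <= 2 && used < 4) ? &pool[used++] : NULL;
}

static page_info_t *alloc_direct_map_chunk(unsigned int order,
                                           unsigned int *got_order)
{
    for ( ; ; order-- )
    {
        page_info_t *pg = try_alloc(order);

        if ( pg )
        {
            *got_order = order;
            return pg;
        }
        if ( order == 0 )
            return NULL; /* bail out: even the smallest chunk failed */
    }
}
```

Repeating this until the requested total is covered is exactly what produces the many-small-banks fragmentation concern raised below.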

If you have a mix of direct-mapped and normal domains, you may end up
with the free memory so fragmented that your direct-mapped domain will
have many small banks. This is going to be a major problem if you are
creating the domain at runtime (you suggest xl can be used).

In addition, some users may want to be able to control the location of
the memory, as this reduces the amount of work in the guest (e.g. you
don't have to dynamically discover the memory).

I think it would be best to always require the admin to select the RAM
bank used by a direct-mapped domain. Alternatively, we could have a pool
of memory that can only be used for direct-mapped domains. This should
limit the fragmentation of the memory.

> +Then, `insert_11_bank` shall insert above allocated pages into a memory bank,
> +which are ordered by address, and also set up guest P2M mapping(
> +`guest_physmap_add_page`) to ensure `gfn == mfn`.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 09:12:49 2020
Subject: Re: [RFC] design: design doc for 1:1 direct-map
To: Julien Grall <julien@xen.org>
Cc: Bertrand.Marquis@arm.com, Kaly.Xin@arm.com, Wei.Chen@arm.com, nd@arm.com,
 Paul Durrant <paul@xen.org>, famzheng@amazon.com,
 Penny Zheng <penny.zheng@arm.com>, xen-devel@lists.xenproject.org,
 sstabellini@kernel.org
References: <20201208052113.1641514-1-penny.zheng@arm.com>
 <6731d0c1-37df-ade8-7b77-d1032c326111@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b53b7ea5-51f2-8746-8d0d-17d2b57ecc89@suse.com>
Date: Tue, 8 Dec 2020 10:12:41 +0100
In-Reply-To: <6731d0c1-37df-ade8-7b77-d1032c326111@xen.org>

On 08.12.2020 10:07, Julien Grall wrote:
> On 08/12/2020 05:21, Penny Zheng wrote:
>> --- /dev/null
>> +++ b/docs/designs/1_1_direct-map.md
>> @@ -0,0 +1,87 @@
>> +# Preface
>> +
>> +The document is an early draft for direct-map memory map
>> +(`guest physical == physical`) of domUs. And right now, it constrains to ARM
> 
> s/constrains/limited/
> 
> Aside from the interface to the user, you should be able to re-use the
> same code on x86. Note that because the memory layout on x86 is fixed
> (always starting at 0), you would only be able to have one direct-mapped
> domain.

Even one seems challenging, if it's truly meant to have all of the
domain's memory direct-mapped: the use of space in the first MB is
different between host and guest.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 09:21:14 2020
Subject: Re: [PATCH V3 04/23] xen/ioreq: Make x86's IOREQ feature common
To: Oleksandr <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Paul Durrant <paul@xen.org>, Tim Deegan
 <tim@xen.org>, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-5-git-send-email-olekstysh@gmail.com>
 <d1fdebe9-3355-fece-e9dc-e6a7acc180e7@suse.com>
 <4a82d6f3-6b6c-566a-6ad0-36e22df323fa@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <536b5e63-0605-f4d3-e163-dff67ec0422d@suse.com>
Date: Tue, 8 Dec 2020 10:21:05 +0100
In-Reply-To: <4a82d6f3-6b6c-566a-6ad0-36e22df323fa@gmail.com>

On 07.12.2020 20:43, Oleksandr wrote:
> On 07.12.20 13:41, Jan Beulich wrote:
>> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>>> @@ -38,42 +37,6 @@ int arch_ioreq_server_get_type_addr(const struct domain *d,
>>>                                       uint64_t *addr);
>>>   void arch_ioreq_domain_init(struct domain *d);
>> As already mentioned in an earlier reply: What about these? They
>> shouldn't get declared once per arch. If anything, ones that
>> want to be inline functions can / should remain in the per-arch
>> header.
> I don't entirely get the suggestion. Is it to make "simple" ones
> inline? Why not, there are a few which probably want to be inline,
> such as the following, for example:
> - arch_ioreq_domain_init
> - arch_ioreq_server_destroy
> - arch_ioreq_server_destroy_all
> - arch_ioreq_server_map_mem_type (probably)

Before being able to make a suggestion, I need to have my question
answered: Why do the arch_*() declarations live in the arch header?
They represent a common interface (between common and arch code)
and hence should be declared in exactly one place. It is only at
the point where you/we _consider_ making some of them inline that
moving those (back) to the arch header may make sense. Albeit even
then I'd prefer if only the ones get moved which are expected to
be inline for all arch-es. Others would better have the arch header
indicate to the common one that no declaration is needed (such that
the declaration still remains common for all arch-es using out-of-
line functions).
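Concretely, the arrangement described above is usually expressed with a macro guard, along these lines (an illustrative sketch, not the actual Xen headers; names are invented):

```c
#include <assert.h>

struct domain { int ioreq_enabled; };

/*
 * Illustrative sketch of the pattern described above: the arch header may
 * provide an inline and define a same-named macro, which tells the common
 * header to skip its single out-of-line declaration.
 */

/* --- asm/ioreq.h (per-arch header) --- */
#define arch_ioreq_domain_init arch_ioreq_domain_init
static inline void arch_ioreq_domain_init(struct domain *d)
{
    d->ioreq_enabled = 1; /* trivial arch hook, worth inlining */
}

/* --- xen/ioreq.h (common header) --- */
#ifndef arch_ioreq_domain_init
void arch_ioreq_domain_init(struct domain *d); /* out-of-line in arch code */
#endif
```

This keeps exactly one declaration per interface function for every arch using out-of-line implementations, while still letting an arch opt in to an inline.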

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 09:30:10 2020
Subject: Re: [PATCH V3 09/23] xen/dm: Make x86's DM feature common
To: Oleksandr <olekstysh@gmail.com>
Cc: Julien Grall <julien.grall@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-10-git-send-email-olekstysh@gmail.com>
 <00c3df9f-760d-bb3d-d1d6-7c7df7f0c17c@suse.com>
 <24191fca-78e7-3e6b-ff02-c06e8ae79f56@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7c985117-2bb4-dd5b-53cf-e217e25b2e8e@suse.com>
Date: Tue, 8 Dec 2020 10:30:02 +0100
In-Reply-To: <24191fca-78e7-3e6b-ff02-c06e8ae79f56@gmail.com>

On 07.12.2020 21:23, Oleksandr wrote:
> On 07.12.20 14:08, Jan Beulich wrote:
>> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>>> From: Julien Grall <julien.grall@arm.com>
>>>
>>> As a lot of x86 code can be re-used on Arm later on, this patch
>>> splits devicemodel support into common and arch specific parts.
>>>
>>> The common DM feature is supposed to be built with IOREQ_SERVER
>>> option enabled (as well as the IOREQ feature), which is selected
>>> for x86's config HVM for now.
>>>
>>> Also update XSM code a bit to let DM op be used on Arm.
>>>
>>> This support is going to be used on Arm to be able run device
>>> emulator outside of Xen hypervisor.
>>>
>>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>
>>> ---
>>> Please note, this is a split/cleanup/hardening of Julien's PoC:
>>> "Add support for Guest IO forwarding to a device emulator"
>>>
>>> Changes RFC -> V1:
>>>     - update XSM, related changes were pulled from:
>>>       [RFC PATCH V1 04/12] xen/arm: Introduce arch specific bits for IOREQ/DM features
>>>
>>> Changes V1 -> V2:
>>>     - update the author of a patch
>>>     - update patch description
>>>     - introduce xen/dm.h and move definitions here
>>>
>>> Changes V2 -> V3:
>>>     - no changes
>> And my concern regarding the common vs arch nesting also hasn't
>> changed.
> 
> 
> I am sorry, I might have misread your comment, but I failed to see any
> request(s) for changes that were obvious to me.
> I have just re-read the previous discussion...
> So the question about considering doing it the other way around (top
> level dm-op handling being arch-specific and calling into e.g.
> ioreq_server_dm_op() for otherwise unhandled ops) is exactly the
> concern which I should have addressed?

Well, on v2 you replied you didn't consider the alternative. I would
have expected that you would at least go through this consideration
process, and see whether there are better reasons to stick with the
apparently backwards arrangement than to change to the more
conventional one. If there are such reasons, I would expect them to
be called out in reply and perhaps also in the commit message; the
latter because down the road more people may wonder why the more
narrow / special set of cases gets handled at a higher layer than
the wider set of remaining ones, and they would then be able to find
an explanation without having to resort to searching through list
archives.
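The "other way around" arrangement under discussion would look roughly like this (op numbers and function names invented purely for illustration):

```c
#include <assert.h>

/*
 * Sketch of the conventional layering: the top-level, arch-specific dm_op
 * handler deals with its own narrow set of ops and forwards everything
 * else to the common IOREQ-server handler.  All names/ops are invented.
 */
#define EOPNOTSUPP 95

enum dmop {
    DMOP_ARCH_SPECIFIC       = 1,
    DMOP_CREATE_IOREQ_SERVER = 2,
    DMOP_UNKNOWN             = 99,
};

/* common code: handles the IOREQ-server subset of ops */
static int ioreq_server_dm_op(enum dmop op)
{
    return (op == DMOP_CREATE_IOREQ_SERVER) ? 0 : -EOPNOTSUPP;
}

/* arch code: the entry point, falling through to common for the rest */
static int arch_dm_op(enum dmop op)
{
    switch ( op )
    {
    case DMOP_ARCH_SPECIFIC:
        return 0; /* handled entirely at the arch layer */
    default:
        return ioreq_server_dm_op(op);
    }
}
```

Here the narrow, special cases sit at the top and the wider remaining set is dispatched downward, which is the ordering Jan is asking to be either adopted or explicitly justified.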

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 09:35:12 2020
Subject: Re: [PATCH V3 11/23] xen/ioreq: Move x86's io_completion/io_req
 fields to struct vcpu
To: paul@xen.org, 'Oleksandr' <olekstysh@gmail.com>
Cc: 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 'Wei Liu' <wl@xen.org>, 'George Dunlap' <george.dunlap@citrix.com>,
 'Ian Jackson' <iwj@xenproject.org>, 'Julien Grall' <julien@xen.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Jun Nakajima' <jun.nakajima@intel.com>, 'Kevin Tian'
 <kevin.tian@intel.com>, 'Julien Grall' <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-12-git-send-email-olekstysh@gmail.com>
 <742899b6-964b-be75-affc-31342c07133a@suse.com>
 <d7d867d3-b508-0c6c-191f-264e1e08bf39@gmail.com>
 <0d3c01d6cd37$0c013770$2403a650$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c96725ea-883a-6123-0bf5-da74c1a3cc47@suse.com>
Date: Tue, 8 Dec 2020 10:35:06 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <0d3c01d6cd37$0c013770$2403a650$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 08.12.2020 08:52, Paul Durrant wrote:
>> From: Oleksandr <olekstysh@gmail.com>
>> Sent: 07 December 2020 21:00
>>
>> On 07.12.20 14:32, Jan Beulich wrote:
>>> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>>>> --- a/xen/include/xen/sched.h
>>>> +++ b/xen/include/xen/sched.h
>>>> @@ -145,6 +145,21 @@ void evtchn_destroy_final(struct domain *d); /* from complete_domain_destroy
>> */
>>>>
>>>>   struct waitqueue_vcpu;
>>>>
>>>> +enum io_completion {
>>>> +    IO_no_completion,
>>>> +    IO_mmio_completion,
>>>> +    IO_pio_completion,
>>>> +#ifdef CONFIG_X86
>>>> +    IO_realmode_completion,
>>>> +#endif
>>>> +};
>>> I'm not entirely happy with io_ / IO_ here - they seem a little
>>> too generic. How about ioreq_ / IOREQ_ respectively?
>>
>> I am OK, would like to hear Paul's opinion on both questions.
>>
> 
> No, I think the 'IO_' prefix is better. They relate to a field in the vcpu_io struct. An alternative might be 'VIO_'...
> 
>>
>>>
>>>> +struct vcpu_io {
>>>> +    /* I/O request in flight to device model. */
>>>> +    enum io_completion   completion;
> 
> ... in which case, you could also name the enum 'vio_completion'.

I'd be okay with these - still better than just "io".
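
Concretely, the renaming being discussed might look like the following
(a hypothetical sketch of the 'VIO_'/'vio_completion' suggestion, not
the committed code):

```c
/* Sketch only: enum and field names follow the naming suggested in
 * this thread ('VIO_' prefix, enum renamed to vio_completion). */
enum vio_completion {
    VIO_no_completion,
    VIO_mmio_completion,
    VIO_pio_completion,
#ifdef CONFIG_X86
    VIO_realmode_completion,
#endif
};

struct vcpu_io {
    /* I/O request in flight to device model. */
    enum vio_completion completion;
};
```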

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 09:47:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 09:47:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47274.83702 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmZab-00008c-S9; Tue, 08 Dec 2020 09:47:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47274.83702; Tue, 08 Dec 2020 09:47:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmZab-00008V-NL; Tue, 08 Dec 2020 09:47:21 +0000
Received: by outflank-mailman (input) for mailman id 47274;
 Tue, 08 Dec 2020 09:47:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kmZaa-00008Q-La
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 09:47:20 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmZaZ-0002yo-Dg; Tue, 08 Dec 2020 09:47:19 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmZaZ-0005Ap-4d; Tue, 08 Dec 2020 09:47:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=YvHQTI4XmTUDYGQGBG8XOps6X0Uw3L3zOWwSWzPritI=; b=Rj1Ar53ay1mmf0rNlQ0RAd7ElH
	POU2gpLZ2nYWdmsZPxCdxrZEgNDzxAv1w4+Zp4qsIIKQEEdghNTrIrSYhCzM3ErqNY3RjQ6WMQEwV
	08/tG+gv09PUM2OLkY6bhXkzY5dekR48J0UcnRSnzSajsEoJArRkRC1jvM3t5E4E6zpU=;
Subject: Re: [PATCH] xen/arm: Add workaround for Cortex-A53 erratum #845719
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, bertrand.marquis@arm.com,
 wei.chen@arm.com
References: <20201208072327.11890-1-michal.orzel@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <d286241c-fd3b-8506-37e5-0ddcdaae97be@xen.org>
Date: Tue, 8 Dec 2020 09:47:16 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201208072327.11890-1-michal.orzel@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 08/12/2020 07:23, Michal Orzel wrote:
> When executing in aarch32 state at EL0, a load at EL0 from a
> virtual address that matches the bottom 32 bits of the virtual address
> used by a recent load at (aarch64) EL1 might return incorrect data.
> 
> The workaround is to insert a write of the contextidr_el1 register
> on exception return to an aarch32 guest.

I am a bit confused by this comment. In the previous paragraph, you 
are suggesting that the problem is an interaction between AArch64 EL1 
and AArch32 EL0. But here you seem to imply the issue only happens 
when running an AArch32 guest.

Can you clarify it?

> 
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
> ---
>   docs/misc/arm/silicon-errata.txt |  1 +
>   xen/arch/arm/Kconfig             | 19 +++++++++++++++++++
>   xen/arch/arm/arm64/entry.S       |  9 +++++++++
>   xen/arch/arm/cpuerrata.c         |  8 ++++++++
>   xen/include/asm-arm/cpufeature.h |  3 ++-
>   5 files changed, 39 insertions(+), 1 deletion(-)
> 
> diff --git a/docs/misc/arm/silicon-errata.txt b/docs/misc/arm/silicon-errata.txt
> index 27bf957ebf..fa3d9af63d 100644
> --- a/docs/misc/arm/silicon-errata.txt
> +++ b/docs/misc/arm/silicon-errata.txt
> @@ -45,6 +45,7 @@ stable hypervisors.
>   | ARM            | Cortex-A53      | #827319         | ARM64_ERRATUM_827319    |
>   | ARM            | Cortex-A53      | #824069         | ARM64_ERRATUM_824069    |
>   | ARM            | Cortex-A53      | #819472         | ARM64_ERRATUM_819472    |
> +| ARM            | Cortex-A53      | #845719         | ARM64_ERRATUM_845719    |
>   | ARM            | Cortex-A55      | #1530923        | N/A                     |
>   | ARM            | Cortex-A57      | #852523         | N/A                     |
>   | ARM            | Cortex-A57      | #832075         | ARM64_ERRATUM_832075    |
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index f5b1bcda03..6bea393555 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -244,6 +244,25 @@ config ARM_ERRATUM_858921
>   
>   	  If unsure, say Y.
>   
> +config ARM64_ERRATUM_845719
> +	bool "Cortex-A53: 845719: A load might read incorrect data"
> +	default y
> +	help
> +	  This option adds an alternative code sequence to work around ARM
> +	  erratum 845719 on Cortex-A53 parts up to r0p4.
> +
> +	  When executing in aarch32 state at EL0, a load at EL0 from a
> +	  virtual address that matches the bottom 32 bits of the virtual address
> +	  used by a recent load at (aarch64) EL1 might return incorrect data.
> +
> +	  The workaround is to insert a write of the contextidr_el1 register
> +	  on exception return to an aarch32 guest.
> +	  Please note that this does not necessarily enable the workaround,
> +	  as it depends on the alternative framework, which will only patch
> +	  the kernel if an affected CPU is detected.
> +
> +	  If unsure, say Y.
> +
>   config ARM64_WORKAROUND_REPEAT_TLBI
>   	bool
>   
> diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S
> index 175ea2981e..ef3336f34a 100644
> --- a/xen/arch/arm/arm64/entry.S
> +++ b/xen/arch/arm/arm64/entry.S
> @@ -96,6 +96,15 @@
>           msr     SPSR_fiq, x22
>           msr     SPSR_irq, x23
>   
> +#ifdef CONFIG_ARM64_ERRATUM_845719
> +alternative_if ARM64_WORKAROUND_845719
> +        /* contextidr_el1 is not accessible from aarch32 guest so we can
> +         * write xzr to it
> +         */

This path is also taken when the trap occurs while running in AArch32 
EL0. So wouldn't you clobber the register if AArch64 EL1 uses it 
(Linux may store the PID in it)?

Also the coding style for multi-line comment in Xen is:

/*
  * Foo
  * Bar
  */

> +        msr     contextidr_el1, xzr
> +alternative_else_nop_endif
> +#endif

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 10:10:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 10:10:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47286.83723 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmZxB-0002wG-2Y; Tue, 08 Dec 2020 10:10:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47286.83723; Tue, 08 Dec 2020 10:10:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmZxA-0002w9-UU; Tue, 08 Dec 2020 10:10:40 +0000
Received: by outflank-mailman (input) for mailman id 47286;
 Tue, 08 Dec 2020 10:10:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmZxA-0002w1-8Q; Tue, 08 Dec 2020 10:10:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmZxA-0003Vt-2C; Tue, 08 Dec 2020 10:10:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmZx9-00079j-Sk; Tue, 08 Dec 2020 10:10:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kmZx9-0002h4-SF; Tue, 08 Dec 2020 10:10:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3FqxB5ig7W24wueCIwsNeRbtpyfUnrnj7ehSsZqAcbo=; b=eAz/BlgyS58tys9ex0XGLu7WF0
	OqIROlXOv9+SpD9Ng7U3KVRDq0UC7pgApFg+pKVdfTg5TIafq0Lnrm9lGxBVd676lPUiyUrHhwl9C
	saHgDr72e7Gy/PcSkqokQv1EqIZnO+M7e4X/n5DT/swjQEyaEXwyuNgVah2S/hOKfNp0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157293-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157293: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=777e3590f154e6a8af560dd318b9465fa168db20
X-Osstest-Versions-That:
    xen=4b0e0db86194b5e9e18c9f2c10b3910f3394c56f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 08 Dec 2020 10:10:39 +0000

flight 157293 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157293/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  777e3590f154e6a8af560dd318b9465fa168db20
baseline version:
 xen                  4b0e0db86194b5e9e18c9f2c10b3910f3394c56f

Last test of basis   157262  2020-12-07 17:00:26 Z    0 days
Testing same since   157293  2020-12-08 08:00:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   4b0e0db861..777e3590f1  777e3590f154e6a8af560dd318b9465fa168db20 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 10:22:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 10:22:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47297.83741 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kma8d-00046U-6Z; Tue, 08 Dec 2020 10:22:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47297.83741; Tue, 08 Dec 2020 10:22:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kma8d-00046N-3b; Tue, 08 Dec 2020 10:22:31 +0000
Received: by outflank-mailman (input) for mailman id 47297;
 Tue, 08 Dec 2020 10:22:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CW+N=FM=euphon.net=fam@srs-us1.protection.inumbo.net>)
 id 1kma8Z-00046I-95
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 10:22:29 +0000
Received: from sender2-of-o52.zoho.com.cn (unknown [163.53.93.247])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1c8d9297-7e11-45a8-b2e0-83314a3d04eb;
 Tue, 08 Dec 2020 10:22:24 +0000 (UTC)
Received: from localhost (ec2-52-56-101-76.eu-west-2.compute.amazonaws.com
 [52.56.101.76]) by mx.zoho.com.cn
 with SMTPS id 160742293597166.17876928455007;
 Tue, 8 Dec 2020 18:22:15 +0800 (CST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c8d9297-7e11-45a8-b2e0-83314a3d04eb
ARC-Seal: i=1; a=rsa-sha256; t=1607422937; cv=none; 
	d=zoho.com.cn; s=zohoarc; 
	b=TFiKC+Cw4GY2VxCXzJ3S4rnO1WVzgHC4Kinjk9FF/m+Y+PHDKMxbPbGgl6SpTGi4+fZjFV45YoRPIMeb7zsENmUb+i24hknorBZmNyE4DUg/U/ebUVO5n0eUH7GgOLfpMupJPAUiVOavG/VOlO3UpTVrCRTG0FNQFmMQmrCXyl8=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zoho.com.cn; s=zohoarc; 
	t=1607422937; h=Content-Type:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=E6Gx3CZw78P9pkk42XXrq3hoXi1ADhJ5b/Vp4030ksk=; 
	b=BTpIVA+cx3ZB6Sf2nKgZoiVAW41Byffc1RsJNBTlKjzMtPyXRHc9mxSSdR8190mgIDNHatsXFLKJ/yfkKY+SyxXuOaoFgFgkM5FL5FzIon0wHFHrRzM04DucbzfJWY4+qB5Zm7oZV2e+aLRIwqbGMzGQQGXQKz5GYf5YpXKBFPk=
ARC-Authentication-Results: i=1; mx.zoho.com.cn;
	dkim=pass  header.i=euphon.net;
	spf=pass  smtp.mailfrom=fam@euphon.net;
	dmarc=pass header.from=<fam@euphon.net> header.from=<fam@euphon.net>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1607422937;
	s=zoho; d=euphon.net; i=fam@euphon.net;
	h=Date:From:To:Cc:Subject:Message-ID:References:MIME-Version:Content-Type:In-Reply-To;
	bh=E6Gx3CZw78P9pkk42XXrq3hoXi1ADhJ5b/Vp4030ksk=;
	b=YM4tzEWX2z1Om9nNP9zJ3+Am8RiYFc5GDQX/k5m42S7vrF8Rva1+2jUOy3fcCnNe
	fBfFjLWJGNhUusJMOmbmjA0gJjlWEnxi8ktawWR2SwRWsURmHgbnNpyMIR6dsVGWZw3
	36d+Z8bxcN+jVB/zyHP6iqK5yAUQRH/NTCtSS/sU=
Date: Tue, 8 Dec 2020 10:22:05 +0000
From: Fam Zheng <fam@euphon.net>
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, Bertrand.Marquis@arm.com,
	Kaly.Xin@arm.com, Wei.Chen@arm.com, nd@arm.com,
	Paul Durrant <paul@xen.org>, Penny Zheng <penny.zheng@arm.com>,
	xen-devel@lists.xenproject.org, sstabellini@kernel.org
Subject: Re: [RFC] design: design doc for 1:1 direct-map
Message-ID: <20201208102205.GA118611@dev>
References: <20201208052113.1641514-1-penny.zheng@arm.com>
 <6731d0c1-37df-ade8-7b77-d1032c326111@xen.org>
 <b53b7ea5-51f2-8746-8d0d-17d2b57ecc89@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <b53b7ea5-51f2-8746-8d0d-17d2b57ecc89@suse.com>
X-ZohoCNMailClient: External

On 2020-12-08 10:12, Jan Beulich wrote:
> On 08.12.2020 10:07, Julien Grall wrote:
> > On 08/12/2020 05:21, Penny Zheng wrote:
> >> --- /dev/null
> >> +++ b/docs/designs/1_1_direct-map.md
> >> @@ -0,0 +1,87 @@
> >> +# Preface
> >> +
> >> +The document is an early draft for direct-map memory map
> >> +(`guest physical == physical`) of domUs. And right now, it constrains to ARM
> > 
> > s/constrains/limited/
> > 
> > Aside the interface to the user, you should be able to re-use the same 
> > code on x86. Note that because the memory layout on x86 is fixed (always 
> > starting at 0), you would only be able to have only one direct-mapped 
> > domain.
> 
> Even one seems challenging, if it's truly meant to have all of the
> domain's memory direct-mapped: The use of space in the first Mb is
> different between host and guest.

Speaking about the x86 case, we can still direct-map the RAM regions
for the single direct-mapped DomU, because neither Xen nor dom0
requires that low memory.

We don't worry about (i.e. don't direct-map) non-RAM regions, or any
range that is not reported as usable RAM from the DomU's PoV (as
dictated by the e820 table), so those can be MMIO or arbitrary
mappings with EPT.

Fam


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 10:29:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 10:29:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47306.83762 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmaFG-0004NG-5E; Tue, 08 Dec 2020 10:29:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47306.83762; Tue, 08 Dec 2020 10:29:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmaFG-0004N9-20; Tue, 08 Dec 2020 10:29:22 +0000
Received: by outflank-mailman (input) for mailman id 47306;
 Tue, 08 Dec 2020 10:29:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CW+N=FM=euphon.net=fam@srs-us1.protection.inumbo.net>)
 id 1kmaFE-0004N4-3v
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 10:29:20 +0000
Received: from sender2-of-o52.zoho.com.cn (unknown [163.53.93.247])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3f79e7af-906e-49d7-9ad4-346c4f7f5d82;
 Tue, 08 Dec 2020 10:29:18 +0000 (UTC)
Received: from localhost (ec2-52-56-101-76.eu-west-2.compute.amazonaws.com
 [52.56.101.76]) by mx.zoho.com.cn
 with SMTPS id 1607423351294598.910321877991;
 Tue, 8 Dec 2020 18:29:11 +0800 (CST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3f79e7af-906e-49d7-9ad4-346c4f7f5d82
ARC-Seal: i=1; a=rsa-sha256; t=1607423353; cv=none; 
	d=zoho.com.cn; s=zohoarc; 
	b=VbR5uEP+c1d1++89PyB3QJYiIPneJ9JqLhFWx4C2IBkB9awpVXlbE5g2IYDEMm0XgcuPE0cL2oBlMCb2eCdHti6Jir0nU4X3+oSzldniBVOSmjJWZJFFMzzoEGMvtNj1DwoC/H9cS00fjNBMkXPyXAOl3nrJTY8eztBVkOeTna4=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zoho.com.cn; s=zohoarc; 
	t=1607423353; h=Content-Type:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=f5tEVL1Y8nWe5Y61mdoZX5lRcGa2W3xGaLAOqlIAHbA=; 
	b=OrJJmUml/hqDrbLGU52AJXs85ahagdbthmumC3VHFoDkLLiAWzhBwqbYaVtBfcUvD8B4zAKDCFtY3RGO+Y0JSf1n1w+Ba/AlMREJQCp5Iq9v0wv8Q6EEuvu6w9/6Y6M+e9slGw5BuPOMt7sPP3a9hzplIul7kO4wmpTgu5rPepA=
ARC-Authentication-Results: i=1; mx.zoho.com.cn;
	dkim=pass  header.i=euphon.net;
	spf=pass  smtp.mailfrom=fam@euphon.net;
	dmarc=pass header.from=<fam@euphon.net> header.from=<fam@euphon.net>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1607423353;
	s=zoho; d=euphon.net; i=fam@euphon.net;
	h=Date:From:To:Cc:Subject:Message-ID:References:MIME-Version:Content-Type:In-Reply-To;
	bh=f5tEVL1Y8nWe5Y61mdoZX5lRcGa2W3xGaLAOqlIAHbA=;
	b=JlvvvwSYtZOwce3C2/BeaKJXHJmluCvVP86IuykHknPcKQpAZccDSNSOh27HljFw
	Wi2IuT3v/NWFG//Yqz5zMBbFEKZf+8nJaYBGvfwSzSLqxy8zETzzE8GU3QvvAu2iWvA
	bRAlAZ8g7iR1U93kT/d2GiLRbihPa9ZvdJiDdj7w=
Date: Tue, 8 Dec 2020 10:29:05 +0000
From: Fam Zheng <fam@euphon.net>
To: Penny Zheng <penny.zheng@arm.com>
Cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, julien@xen.org,
	Bertrand.Marquis@arm.com, Kaly.Xin@arm.com, Wei.Chen@arm.com,
	nd@arm.com
Subject: Re: [RFC] design: design doc for 1:1 direct-map
Message-ID: <20201208102905.GB118611@dev>
References: <20201208052113.1641514-1-penny.zheng@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201208052113.1641514-1-penny.zheng@arm.com>
X-ZohoCNMailClient: External

On 2020-12-08 13:21, Penny Zheng wrote:
> +The document is an early draft for direct-map memory map
> +(`guest physical == physical`) of domUs. And right now, it constrains to ARM
> +architecture.

I'm also working on direct-map DomU on x86, so let's coordinate and
cover both arches.

> +
> +It aims to describe why and how the guest would be created as direct-map domain.
> +
> +This document is partly based on Stefano Stabellini's patch serie v1:
> +[direct-map DomUs](
> +https://lists.xenproject.org/archives/html/xen-devel/2020-04/msg00707.html).
> +
> +This is a first draft and some questions are still unanswered. When this is the
> +case, the text shall contain XXX.
> +
> +# Introduction
> +
> +## Background
> +
> +Cases where domU needs direct-map memory map:
> +
> +  * IOMMU not present in the system.
> +  * IOMMU disabled, since it doesn't cover a specific device.
> +  * IOMMU disabled, since it doesn't have enough bandwidth.
> +  * IOMMU disabled, since it adds too much latency.
> +
> +*WARNING:
> +Users should be careful that it is not always secure to assign a device without
> +IOMMU/SMMU protection.
> +Users must be aware of this risk, that guests having access to hardware with
> +DMA capacity must be trusted, or it could use the DMA engine to access any
> +other memory area.
> +Guests could use additional security hardware component like NOC, System MPU
> +to protect the memory.
> +
> +## Design
> +
> +The implementation may cover following aspects:
> +
> +### Native Address and IRQ numbers for GIC and UART(vPL011)
> +
> +Today, fixed addresses and IRQ numbers are used to map GIC and UART(vPL011)
> +in DomUs. And it may cause potential clash on direct-map domains.
> +So, Using native addresses and irq numbers for GIC, UART(vPL011), in
> +direct-map domains is necessary.
> +e.g.
> +For the virtual interrupt of vPL011: instead of always using `GUEST_VPL011_SPI`,
> +try to reuse the physical SPI number if possible.
> +
> +### Device tree option: `direct_map`
> +
> +Introduce a new device tree option `direct_map` for direct-map domains.
> +Then, when users try to allocate one direct-map domain(except DOM0),
> +`direct-map` property needs to be added under the appropriate `/chosen/domUx`.
> +
> +
> +            chosen {
> +                ...
> +                domU1 {
> +                    compatible = "xen, domain";
> +                    #address-cells = <0x2>;
> +                    #size-cells = <0x1>;
> +                    direct-map;
> +                    ...
> +                };
> +                ...
> +            };
> +
> +If users are using imagebuilder, they can add to boot.source something like the
> +following:
> +
> +    fdt set /chosen/domU1 direct-map
> +
> +Users could also use `xl` to create direct-map domains, just use the following
> +config option: `direct-map=true`
> +
> +### direct-map guest memory allocation
> +
> +Func `allocate_memory_direct_map` is based on `allocate_memory_11`, and shall
> +be refined to allocate memory for all direct-map domains, including DOM0.
> +Roughly speaking, firstly, it tries to allocate arbitrary memory chunk of
> +requested size from domain sub-allocator(`alloc_domheap_pages`). If fail,
> +split the chunk into halves, and re-try, until it succeed or bail out with the
> +smallest chunk size.
> +Then, `insert_11_bank` shall insert above allocated pages into a memory bank,
> +which are ordered by address, and also set up guest P2M mapping(
> +`guest_physmap_add_page`) to ensure `gfn == mfn`.

A high-level comment from the x86 PoV: in the MFN address space, we
want to explicitly reserve a range for the direct-map. This ensures
Xen and Dom0 will leave those pages for the DomU at boot time: since,
as Julien mentioned, x86 machines have a fixed memory layout starting
from 0, the corresponding pages mustn't go into the xenheap/domheap in
the first place.

IOW, x86 depends on a mechanism very similar to what badpage= does,
but I wouldn't overload/abuse that parameter for direct-map. Maybe
introduce a new option, like "identpage=".

Fam


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 10:30:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 10:30:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47311.83773 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmaGN-00059e-FG; Tue, 08 Dec 2020 10:30:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47311.83773; Tue, 08 Dec 2020 10:30:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmaGN-00059X-C6; Tue, 08 Dec 2020 10:30:31 +0000
Received: by outflank-mailman (input) for mailman id 47311;
 Tue, 08 Dec 2020 10:30:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LMSP=FM=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kmaGM-00059O-5k
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 10:30:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 151d3671-09a5-4246-b08e-2b7ab9592ab1;
 Tue, 08 Dec 2020 10:30:29 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7B59CAE65;
 Tue,  8 Dec 2020 10:30:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 151d3671-09a5-4246-b08e-2b7ab9592ab1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607423428; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=Q1BJsJoq2Z+HC7NwOIC96pWhya7CjvMphLQjJtvQ+Fw=;
	b=dsT2X5h2fPfNfi5dVkUmj6JDeCPD1+Ui48uYBaE8eZeeYsAFBmY5/OImnokqDqAFFMk8+3
	AihZjQ3Wka1T67f+HmInDjFthSZb04KuV4cn1Q6qaTvBhYmlrJ/Vmt9k1Prhzrl+fIryFL
	EeMJRe6iL/Ri65z7L/STo2uqDbU/JH4=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] MAINTAINERS: add me as maintainer for tools/xenstore/
Date: Tue,  8 Dec 2020 11:30:26 +0100
Message-Id: <20201208103026.28772-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

I have been the major contributor to C Xenstore over the past few years.

Add me as a maintainer for tools/xenstore/.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 MAINTAINERS | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index dab38a6a14..6dbd99aff4 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -584,6 +584,13 @@ F:	xen/include/asm-x86/guest/hyperv-hcall.h
 F:	xen/include/asm-x86/guest/hyperv-tlfs.h
 F:	xen/include/asm-x86/hvm/viridian.h
 
+XENSTORE
+M:	Ian Jackson <iwj@xenproject.org>
+M:	Wei Liu <wl@xen.org>
+M:	Juergen Gross <jgross@suse.com>
+S:	Supported
+F:	tools/xenstore/
+
 XENTRACE
 M:	George Dunlap <george.dunlap@citrix.com>
 S:	Supported
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 10:49:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 10:49:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47323.83794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmaYS-0006T3-9k; Tue, 08 Dec 2020 10:49:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47323.83794; Tue, 08 Dec 2020 10:49:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmaYS-0006Sw-68; Tue, 08 Dec 2020 10:49:12 +0000
Received: by outflank-mailman (input) for mailman id 47323;
 Tue, 08 Dec 2020 10:49:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tPq0=FM=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kmaYR-0006Sr-A3
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 10:49:11 +0000
Received: from mail-wr1-f65.google.com (unknown [209.85.221.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6f2b32e3-6fc6-4678-bc87-78d561c3bae6;
 Tue, 08 Dec 2020 10:49:08 +0000 (UTC)
Received: by mail-wr1-f65.google.com with SMTP id i9so161387wrc.4
 for <xen-devel@lists.xenproject.org>; Tue, 08 Dec 2020 02:49:08 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id v20sm1298719wra.19.2020.12.08.02.49.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 08 Dec 2020 02:49:07 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6f2b32e3-6fc6-4678-bc87-78d561c3bae6
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=SARJCBDALgKrxEAD0MNVA8yh4y1JKPKQxvzqwlnVTpE=;
        b=DscfhaV+AZ2UAUjo2ebZUgBThZCXzSz4Ys0RahKmsxYFz4IyZDjVPK27UX2Ae6ao1a
         XkusBeHqVvnfLWntTwG8pgJkoMk2cv/0aPngJ4tRMXe5vB0SrJfr/oG4WGEL5tRhYMxB
         DMN9Shq6dtfjADNfaAkjVfrmsiL0kFZHuYIUnu2GTkenIK8tFZ5AzRv+hzNz9v1jqmbZ
         sXMQr7Fy/zblTigubp4HrgBfpdBO/pmycfoDaCtPNFyvQqCq68So0wSG/qtEoS+IZvr5
         GItB6NyXfKWcANNnVJTazAMs3ZlfPGVt0V5JNCjc9OwQ5hJOrnkslgwqsbxvQZiO2iEB
         Jarg==
X-Gm-Message-State: AOAM533n5cowtmzQVcfCVqW/wLtbF8Yt4OB/ub3mHx6NSSHTHujxyB1M
	xHvdFwMLQdvweGJN+nMgu0A=
X-Google-Smtp-Source: ABdhPJyOGZi4xWSEmgNgSlxhUraRdKkRNdoVwtaqC2/jITq0vDfZFoapZuSPbNyScw4Fnn5VAl3zUA==
X-Received: by 2002:adf:f2d1:: with SMTP id d17mr24232655wrp.339.1607424547720;
        Tue, 08 Dec 2020 02:49:07 -0800 (PST)
Date: Tue, 8 Dec 2020 10:49:05 +0000
From: Wei Liu <wl@xen.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] MAINTAINERS: add me as maintainer for tools/xenstore/
Message-ID: <20201208104905.7roijzd3ytictsbf@liuwe-devbox-debian-v2>
References: <20201208103026.28772-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201208103026.28772-1-jgross@suse.com>
User-Agent: NeoMutt/20180716

On Tue, Dec 08, 2020 at 11:30:26AM +0100, Juergen Gross wrote:
> I have been the major contributor to C Xenstore over the past few years.
> 
> Add me as a maintainer for tools/xenstore/.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Wei Liu <wl@xen.org>

Thanks for stepping up.

> ---
>  MAINTAINERS | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index dab38a6a14..6dbd99aff4 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -584,6 +584,13 @@ F:	xen/include/asm-x86/guest/hyperv-hcall.h
>  F:	xen/include/asm-x86/guest/hyperv-tlfs.h
>  F:	xen/include/asm-x86/hvm/viridian.h
>  
> +XENSTORE
> +M:	Ian Jackson <iwj@xenproject.org>
> +M:	Wei Liu <wl@xen.org>
> +M:	Juergen Gross <jgross@suse.com>
> +S:	Supported
> +F:	tools/xenstore/
> +
>  XENTRACE
>  M:	George Dunlap <george.dunlap@citrix.com>
>  S:	Supported
> -- 
> 2.26.2
> 


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 10:53:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 10:53:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47334.83810 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmacp-0007QH-TX; Tue, 08 Dec 2020 10:53:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47334.83810; Tue, 08 Dec 2020 10:53:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmacp-0007QA-QE; Tue, 08 Dec 2020 10:53:43 +0000
Received: by outflank-mailman (input) for mailman id 47334;
 Tue, 08 Dec 2020 10:53:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ur7q=FM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmaco-0007Pr-SD
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 10:53:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1e254554-1d65-4a37-a7aa-8e6df17c2d31;
 Tue, 08 Dec 2020 10:53:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7FD19AC9A;
 Tue,  8 Dec 2020 10:53:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1e254554-1d65-4a37-a7aa-8e6df17c2d31
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607424820; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=teVdR5LWjpy28H32W12XwS95Be7kWecWh2vWuKGHkPg=;
	b=iKSRRceQ9Z5BCHv0dMekUMJWwLEJSjpw1hr/9jiDcEtOk1D9UCtTIHw83H8s9dyW0zcrz2
	2uNgzLj2rxNOiW81Q10VcC7wiOSWckjd5F9JpgUHg6oDwdAEM94KHUvBPVDLug58yei0e6
	grMaxdMJVmAf0hiqvyFA0HOYXGhLwf4=
Subject: Re: [RFC] design: design doc for 1:1 direct-map
To: Fam Zheng <fam@euphon.net>
Cc: Julien Grall <julien@xen.org>, Bertrand.Marquis@arm.com,
 Kaly.Xin@arm.com, Wei.Chen@arm.com, nd@arm.com, Paul Durrant <paul@xen.org>,
 Penny Zheng <penny.zheng@arm.com>, xen-devel@lists.xenproject.org,
 sstabellini@kernel.org
References: <20201208052113.1641514-1-penny.zheng@arm.com>
 <6731d0c1-37df-ade8-7b77-d1032c326111@xen.org>
 <b53b7ea5-51f2-8746-8d0d-17d2b57ecc89@suse.com> <20201208102205.GA118611@dev>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2c7e96e1-9a34-4a9d-2a8f-7479d46f1a92@suse.com>
Date: Tue, 8 Dec 2020 11:53:40 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201208102205.GA118611@dev>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 08.12.2020 11:22, Fam Zheng wrote:
> On 2020-12-08 10:12, Jan Beulich wrote:
>> On 08.12.2020 10:07, Julien Grall wrote:
>>> On 08/12/2020 05:21, Penny Zheng wrote:
>>>> --- /dev/null
>>>> +++ b/docs/designs/1_1_direct-map.md
>>>> @@ -0,0 +1,87 @@
>>>> +# Preface
>>>> +
>>>> +The document is an early draft for direct-map memory map
>>>> +(`guest physical == physical`) of domUs. And right now, it constrains to ARM
>>>
>>> s/constrains/limited/
>>>
>>> Aside the interface to the user, you should be able to re-use the same 
>>> code on x86. Note that because the memory layout on x86 is fixed (always 
>>> starting at 0), you would only be able to have only one direct-mapped 
>>> domain.
>>
>> Even one seems challenging, if it's truly meant to have all of the
>> domain's memory direct-mapped: The use of space in the first Mb is
>> different between host and guest.
> 
> Speaking about the x86 case, we can still direct-map the RAM regions
> to the single direct-mapped DomU because neither Xen nor dom0 require
> that low memory.
> 
> We don't worry about (i.e. don't direct-map) non-RAM regions, or any
> range that is not reported as usable RAM from the DomU's PoV (dictated
> by the e820 table), so those can be MMIO or arbitrary mappings with EPT.

For one, the very first page is considered special in x86 Xen. No
guest should gain access to MFN 0, unless you first audit all
code and address all the issues you find. And then there's also
Xen's low-memory trampoline living there. Plus besides the BDA
(at real-mode address 0040:0000) I suppose the EBDA also shouldn't
be exposed to a guest, nor anything else that the host finds
reserved in E820. IOW it would be the host E820 that dictates some
of the guest E820 in such a case.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 11:24:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 11:24:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47358.83834 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmb6F-0001w3-KQ; Tue, 08 Dec 2020 11:24:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47358.83834; Tue, 08 Dec 2020 11:24:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmb6F-0001vw-HL; Tue, 08 Dec 2020 11:24:07 +0000
Received: by outflank-mailman (input) for mailman id 47358;
 Tue, 08 Dec 2020 11:24:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CW+N=FM=euphon.net=fam@srs-us1.protection.inumbo.net>)
 id 1kmb6C-0001vr-Hc
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 11:24:06 +0000
Received: from sender2-of-o52.zoho.com.cn (unknown [163.53.93.247])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a03a1fcd-5ccb-4c70-ab26-8d554934213f;
 Tue, 08 Dec 2020 11:24:01 +0000 (UTC)
Received: from freeip.amazon.com (54.239.6.186 [54.239.6.186]) by
 mx.zoho.com.cn with SMTPS id 1607426632344594.6343788567094;
 Tue, 8 Dec 2020 19:23:52 +0800 (CST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a03a1fcd-5ccb-4c70-ab26-8d554934213f
ARC-Seal: i=1; a=rsa-sha256; t=1607426635; cv=none; 
	d=zoho.com.cn; s=zohoarc; 
	b=VNfiMeLzy3SgXCtOR7yczQs1cHlee/laXPOCDvB+3b0J8Wmsc2J57BPOueD/fnZPtyH9hqhoAN0eRzXdmGrQVWkFNPoFV4YBJenZJjXipZF+OyT0FvLKfm7F9cAka1Wv8kEr6oik43/N5YUftY+QbrKWmZR9/PMQaXcFNa6aycw=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zoho.com.cn; s=zohoarc; 
	t=1607426635; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=1iOZu/bWRMc27o2XpZheBxvCkC5w68WGiu3hg1+B8lQ=; 
	b=b4xLFSTjd7Hhus4LQFiY1pThbEQwpUafIZYwqOm0ETjqZBaUeGV2sUUIFNzAEzjpC2C4okPTLy9QsWYQ9SQQIj7pqogGWFy9u2mQ5HGo2C5SA8DHrl8A5sljcJk9DYBpOScx/uUneoIh2WM2sLnPeBPCGAx7H3/akm2V0rhdmiA=
ARC-Authentication-Results: i=1; mx.zoho.com.cn;
	dkim=pass  header.i=euphon.net;
	spf=pass  smtp.mailfrom=fam@euphon.net;
	dmarc=pass header.from=<fam@euphon.net> header.from=<fam@euphon.net>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1607426635;
	s=zoho; d=euphon.net; i=fam@euphon.net;
	h=Message-ID:Subject:From:To:Cc:Date:In-Reply-To:References:Content-Type:Mime-Version:Content-Transfer-Encoding;
	bh=1iOZu/bWRMc27o2XpZheBxvCkC5w68WGiu3hg1+B8lQ=;
	b=hRz/YgNiFsAFVdV5YwO+EetSfpjOIWK6mEsuzb49i9Yj0+gAySetG9zleRInL/TS
	HaKGQRZ9zE0klc16g2dvExNjof+H09tMRrNNb4TGzKtr3wwiSqOkLqiF2m7OekJa6iJ
	jxaDB+FrzM2ZM/uaTpUcah+uDKdVybXBOApAIaD8=
Message-ID: <dc240dfed0c6bb4810a936ac307aedf0c2ee6892.camel@euphon.net>
Subject: Re: [RFC] design: design doc for 1:1 direct-map
From: Fam Zheng <fam@euphon.net>
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, Bertrand.Marquis@arm.com,
 Kaly.Xin@arm.com,  Wei.Chen@arm.com, nd@arm.com, Paul Durrant
 <paul@xen.org>, Penny Zheng <penny.zheng@arm.com>,
 xen-devel@lists.xenproject.org, sstabellini@kernel.org
Date: Tue, 08 Dec 2020 11:23:44 +0000
In-Reply-To: <2c7e96e1-9a34-4a9d-2a8f-7479d46f1a92@suse.com>
References: <20201208052113.1641514-1-penny.zheng@arm.com>
	 <6731d0c1-37df-ade8-7b77-d1032c326111@xen.org>
	 <b53b7ea5-51f2-8746-8d0d-17d2b57ecc89@suse.com>
	 <20201208102205.GA118611@dev>
	 <2c7e96e1-9a34-4a9d-2a8f-7479d46f1a92@suse.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit
X-ZohoCNMailClient: External

On Tue, 2020-12-08 at 11:53 +0100, Jan Beulich wrote:
> On 08.12.2020 11:22, Fam Zheng wrote:
> > On 2020-12-08 10:12, Jan Beulich wrote:
> > > On 08.12.2020 10:07, Julien Grall wrote:
> > > > On 08/12/2020 05:21, Penny Zheng wrote:
> > > > > --- /dev/null
> > > > > +++ b/docs/designs/1_1_direct-map.md
> > > > > @@ -0,0 +1,87 @@
> > > > > +# Preface
> > > > > +
> > > > > +The document is an early draft for direct-map memory map
> > > > > +(`guest physical == physical`) of domUs. And right now, it
> > > > > constrains to ARM
> > > > 
> > > > s/constrains/limited/
> > > > 
> > > > Aside the interface to the user, you should be able to re-use
> > > > the same 
> > > > code on x86. Note that because the memory layout on x86 is
> > > > fixed (always 
> > > > starting at 0), you would only be able to have only one direct-
> > > > mapped 
> > > > domain.
> > > 
> > > Even one seems challenging, if it's truly meant to have all of
> > > the
> > > domain's memory direct-mapped: The use of space in the first Mb
> > > is
> > > different between host and guest.
> > 
> > Speaking about the x86 case, we can still direct-map the RAM
> > regions to the single direct-mapped DomU because neither Xen nor
> > dom0 require that low memory.
> > 
> > We don't worry about (i.e. don't direct-map) non-RAM regions, or
> > any range that is not reported as usable RAM from the DomU's PoV
> > (dictated by the e820 table), so those can be MMIO or arbitrary
> > mappings with EPT.
> 
> For one, the very first page is considered special in x86 Xen. No
> guest should gain access to MFN 0, unless you first audit all
> code and address all the issues you find. And then there's also
> Xen's low-memory trampoline living there. Plus besides the BDA
> (at real-mode address 0040:0000) I suppose the EBDA also shouldn't
> be exposed to a guest, nor anything else that the host finds
> reserved in E820. IOW it would be the host E820 that dictates some
> of the guest E820 in such a case.
> 

You're right about the trampoline area; it has to be taken care of
specially. Not a problem if we could disable CPU hotplug. I don't think
the guest will ever try to DMA from/to MFN 0, the BDA or the EBDA, so
not direct-mapping those should not make any functional difference.

In general, I agree the guest E820, as well as all direct-mapped areas,
mustn't break out of the host E820 limits, otherwise it will not work.

Fam



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 12:33:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 12:33:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47419.83874 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmcAm-00006X-J5; Tue, 08 Dec 2020 12:32:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47419.83874; Tue, 08 Dec 2020 12:32:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmcAm-00006Q-Fx; Tue, 08 Dec 2020 12:32:52 +0000
Received: by outflank-mailman (input) for mailman id 47419;
 Tue, 08 Dec 2020 12:32:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmcAl-00006I-OJ; Tue, 08 Dec 2020 12:32:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmcAl-0006QC-DC; Tue, 08 Dec 2020 12:32:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmcAl-0006VD-52; Tue, 08 Dec 2020 12:32:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kmcAl-0007Hg-4W; Tue, 08 Dec 2020 12:32:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=lV9YRF3xDKVjkjqZOgUyyiRKW53Vv24dVW0BoEqK7tI=; b=kY3nyIPq6b1esQruYUC7j8iWY/
	YhqcfatSgKQ307ZX/44d94ZrDg5oF4uQpseiX8cARVls7tW4x5w9++TMOjZLZJhYa9AisNv5v3kmB
	HByX47TAkwfYc9i1bvRB+wYGrTYbtL/yHlkqU/PlErPKhZbfA0DGTvJSJz0Li2GCg0Zw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157274-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157274: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=df89071faa0bac101f8513d5c08dc823f8a3b71d
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 08 Dec 2020 12:32:51 +0000

flight 157274 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157274/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              df89071faa0bac101f8513d5c08dc823f8a3b71d
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  151 days
Failing since        151818  2020-07-11 04:18:52 Z  150 days  145 attempts
Testing same since   157274  2020-12-08 04:19:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu<tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 31785 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 13:52:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 13:52:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47447.83929 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmdPE-0007aR-8U; Tue, 08 Dec 2020 13:51:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47447.83929; Tue, 08 Dec 2020 13:51:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmdPE-0007aK-5C; Tue, 08 Dec 2020 13:51:52 +0000
Received: by outflank-mailman (input) for mailman id 47447;
 Tue, 08 Dec 2020 13:51:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LMSP=FM=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kmdPC-0007aF-Ar
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 13:51:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id faf721a8-32d4-4c15-a694-c2bf48cacf82;
 Tue, 08 Dec 2020 13:51:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 54A71AC9A;
 Tue,  8 Dec 2020 13:51:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: faf721a8-32d4-4c15-a694-c2bf48cacf82
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607435508; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=3Xetfu6kFyQluvFamyEt0BBT9TTyFFzpymYZUbEc6A0=;
	b=nCX7Dhye++ZDa5FaLmKfyA1b6PgdaBy5pzYKw9sgTqMbefhAtZ9dOgKg0atb3QX1v95DmF
	7aWaBZ97xhDH8LWTmgdh01Tm+jZmMzuRaCNl3QPZwnFMBRk6/GiYto5gwYm2MKCMFJUc7h
	5ean9dSm+bV/V2nKKlJE+foLmrp1cM8=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] xen: CONFIG_PV_SHIM_EXCLUSIVE and CONFIG_HVM are mutually exclusive
Date: Tue,  8 Dec 2020 14:51:46 +0100
Message-Id: <20201208135146.30540-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

With CONFIG_PV_SHIM_EXCLUSIVE some sources required for CONFIG_HVM are
not built, so let CONFIG_HVM depend on !CONFIG_PV_SHIM_EXCLUSIVE.

Let CONFIG_HVM default to !CONFIG_PV_SHIM instead.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/arch/x86/Kconfig | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 24868aa6ad..0107cfa12f 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -90,7 +90,8 @@ config PV_LINEAR_PT
          If unsure, say Y.
 
 config HVM
-	def_bool !PV_SHIM_EXCLUSIVE
+	depends on !PV_SHIM_EXCLUSIVE
+	def_bool !PV_SHIM
 	prompt "HVM support"
 	---help---
 	  Interfaces to support HVM domains.  HVM domains require hardware
-- 
2.26.2
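For readers skimming the patch, the Kconfig semantics it relies on are worth spelling out: `def_bool` only sets a default, while `depends on` makes a combination unselectable. An illustrative fragment (not the actual Xen Kconfig; the prompt and help text are elided):

```kconfig
# Before: only the *default* of HVM is !PV_SHIM_EXCLUSIVE.  Since HVM
# has a prompt, a user (or randconfig) can still enable both options.
config HVM
	def_bool !PV_SHIM_EXCLUSIVE

# After: `depends on` makes HVM unselectable whenever
# PV_SHIM_EXCLUSIVE is set, and the default merely follows !PV_SHIM.
config HVM
	depends on !PV_SHIM_EXCLUSIVE
	def_bool !PV_SHIM
```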



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 13:56:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 13:56:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47454.83944 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmdTc-0007n6-SR; Tue, 08 Dec 2020 13:56:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47454.83944; Tue, 08 Dec 2020 13:56:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmdTc-0007mz-Od; Tue, 08 Dec 2020 13:56:24 +0000
Received: by outflank-mailman (input) for mailman id 47454;
 Tue, 08 Dec 2020 13:56:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KXXm=FM=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kmdTc-0007ms-8J
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 13:56:24 +0000
Received: from mail-lj1-x243.google.com (unknown [2a00:1450:4864:20::243])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b810822c-be0a-41eb-a1b8-cf4f6314a290;
 Tue, 08 Dec 2020 13:56:23 +0000 (UTC)
Received: by mail-lj1-x243.google.com with SMTP id t22so19537155ljk.0
 for <xen-devel@lists.xenproject.org>; Tue, 08 Dec 2020 05:56:23 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id z16sm3467677ljc.27.2020.12.08.05.56.20
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 08 Dec 2020 05:56:21 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b810822c-be0a-41eb-a1b8-cf4f6314a290
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=AEM4r0RQc1Iqs4XZv1K4qOyAXvA0DEIwLHJvd9HvmIk=;
        b=WvegLxUyhjkNVY952bMNMCGBtJeckA1Qa20HLKxPZS2hAR9SyTe2nGYCyPROHYtvhV
         LqRGcG4nhJsaZPKF6AvUKX1QN2dYK8BiuKI6989KQpQBxRxrJdxYK051W6w1hvuA9hCD
         W188nak1SKCCDkl+htCcwjbi2OMJI5hZZAlGMGTAoM389Ham2RdAyBPEVFgZw4LSwO5n
         /TpESx8/b3WbNVDbq0BrlYeT4Og9QJzuSt8iYplVaHMYTzavZkCgTTH9pp8WrxvhkMjt
         l5zP3fi6K2dT9DjwP+uSrYsE5DvfQZ/sOWnTZJM0LGwYcVby484NassSCCjsG3BIY43r
         3/lg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=AEM4r0RQc1Iqs4XZv1K4qOyAXvA0DEIwLHJvd9HvmIk=;
        b=rhKzdtnKmwpVwxZxaCYj8HJeTAPqfPDweifkBLK/IAzzIZR9ARsmkKqraZOZeM56qN
         2SkELSKB0wkar/q0e1H9QnTay/PUivArziLHvqfCtR+clakBSuvyxYXyf3sxp3lXR0j8
         13cSJ/TXN5utxPqtLLXUfVufuPU3vfr+jRh4xNnBXIh9vf616iqiq5ol2RzwFMWaDlUC
         POmF/vT4FUtd+M2m49SNcGvP8q81Kp/dvCkN7xLYGyHkxzqfnSpZeB/622gyGLl0bonK
         gBQwKVbiN8WqDbns/I+bNNDEWJW1vcSYtLwcF2wG7RZD4upNdzapXqtEeuI7qwRCmULy
         3j2w==
X-Gm-Message-State: AOAM530iOQfI/+1vFELX/9QvNOHTegQhIRkQEOrecgIxEUq/P2Ep+Bl7
	k9Ti+o38mxwRf4nSGL7LNz+NdQoQ1SWFuA==
X-Google-Smtp-Source: ABdhPJw7FN0rfIvdVNrQvdBwbiaeRJPsY9a8LYf8zAGDvsmWUeYOkad8vtxm8Y4XoMcSmjo/HbONaw==
X-Received: by 2002:a05:651c:1027:: with SMTP id w7mr333275ljm.297.1607435782086;
        Tue, 08 Dec 2020 05:56:22 -0800 (PST)
Subject: Re: [PATCH V3 04/23] xen/ioreq: Make x86's IOREQ feature common
To: Jan Beulich <jbeulich@suse.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Paul Durrant <paul@xen.org>, Tim Deegan
 <tim@xen.org>, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-5-git-send-email-olekstysh@gmail.com>
 <d1fdebe9-3355-fece-e9dc-e6a7acc180e7@suse.com>
 <4a82d6f3-6b6c-566a-6ad0-36e22df323fa@gmail.com>
 <536b5e63-0605-f4d3-e163-dff67ec0422d@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <8ee60a49-8e64-ae25-510b-42eb243ea3ae@gmail.com>
Date: Tue, 8 Dec 2020 15:56:15 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <536b5e63-0605-f4d3-e163-dff67ec0422d@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 08.12.20 11:21, Jan Beulich wrote:

Hi Jan

> On 07.12.2020 20:43, Oleksandr wrote:
>> On 07.12.20 13:41, Jan Beulich wrote:
>>> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>>>> @@ -38,42 +37,6 @@ int arch_ioreq_server_get_type_addr(const struct domain *d,
>>>>                                        uint64_t *addr);
>>>>    void arch_ioreq_domain_init(struct domain *d);
>>> As already mentioned in an earlier reply: What about these? They
>>> shouldn't get declared once per arch. If anything, ones that
>>> want to be inline functions can / should remain in the per-arch
>>> header.
>> Don't entirely get a suggestion. Is the suggestion to make "simple" ones
>> inline? Why not, there are a few ones which probably want to be inline,
>> such as the following for example:
>> - arch_ioreq_domain_init
>> - arch_ioreq_server_destroy
>> - arch_ioreq_server_destroy_all
>> - arch_ioreq_server_map_mem_type (probably)


First of all, thank you for the clarification; your point is now clear 
to me.


> Before being able to make a suggestion, I need to have my question
> answered: Why do the arch_*() declarations live in the arch header?
> They represent a common interface (between common and arch code)
> and hence should be declared in exactly one place.

I see; I had wrongly assumed that arch hook declarations should live 
in arch code.


> It is only at
> the point where you/we _consider_ making some of them inline that
> moving those (back) to the arch header may make sense. Albeit even
> then I'd prefer if only the ones get moved which are expected to
> be inline for all arch-es. Others would better have the arch header
> indicate to the common one that no declaration is needed (such that
> the declaration still remains common for all arch-es using out-of-
> line functions).
I got it as well.

Well, I think two options are available to address your comments:
1. Make all arch hooks out-of-line: move all arch hook declarations to
the common header here, and modify "[PATCH V3 14/23] arm/ioreq: Introduce
arch specific bits for IOREQ/DM features" to make all Arm variants
out-of-line (I made them inline since all of them are just stubs).
2. Make some arch hooks inline: consider which ones want to be inline
(for both arches) and place those in the arch headers; the others remain
in the common header.

My question is: which option is more suitable?


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 14:24:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 14:24:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47463.83965 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmdug-0002GM-4c; Tue, 08 Dec 2020 14:24:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47463.83965; Tue, 08 Dec 2020 14:24:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmdug-0002GF-1R; Tue, 08 Dec 2020 14:24:22 +0000
Received: by outflank-mailman (input) for mailman id 47463;
 Tue, 08 Dec 2020 14:24:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ur7q=FM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmduf-0002GA-H7
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 14:24:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 062bee9b-b0f7-4573-a915-77641e17e771;
 Tue, 08 Dec 2020 14:24:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 47D39AD7C;
 Tue,  8 Dec 2020 14:24:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 062bee9b-b0f7-4573-a915-77641e17e771
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607437459; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=KNfUuYQESJJt5mAHm79BTbGPI882KZU9ZahFYdHxYr8=;
	b=OQwD1rkdyEGQDLNH1xNjTvZuvrt5/rnrAAWE5LsTsdwT9ATzYWPwGdcwirB0A4q8n7PguH
	Q2yWaVEJuiyzQFPJywiqsOymoFKvoaADRUfoj+0UvEeiYozerGDNEbOwKcRfyevy6dxwVJ
	+y7BVZS8Lm1Gp+4bvqy41MojJtHUNn4=
Subject: Re: [PATCH V3 16/23] xen/mm: Handle properly reference in
 set_foreign_p2m_entry() on Arm
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-17-git-send-email-olekstysh@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5f1a2914-f894-0046-2911-9cccb5d94dbf@suse.com>
Date: Tue, 8 Dec 2020 15:24:18 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <1606732298-22107-17-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -1134,12 +1134,8 @@ static int acquire_resource(
>      xen_pfn_t mfn_list[32];
>      int rc;
>  
> -    /*
> -     * FIXME: Until foreign pages inserted into the P2M are properly
> -     *        reference counted, it is unsafe to allow mapping of
> -     *        resource pages unless the caller is the hardware domain.
> -     */
> -    if ( paging_mode_translate(currd) && !is_hardware_domain(currd) )
> +    if ( paging_mode_translate(currd) && !is_hardware_domain(currd) &&
> +         !arch_acquire_resource_check() )
>          return -EACCES;

Looks like I didn't express myself clearly enough when replying
to v2, by saying "as both prior parts of the condition should be
needed only on the x86 side, and there (for PV) there's no p2m
involved in the refcounting". While one may debate whether the
hwdom check may remain here, the "translated" one definitely
should move into the x86 hook. This (I think) will then also
make apparent that ...

> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -382,6 +382,19 @@ struct p2m_domain {
>  #endif
>  #include <xen/p2m-common.h>
>  
> +static inline bool arch_acquire_resource_check(void)
> +{
> +    /*
> +     * The reference counting of foreign entries in set_foreign_p2m_entry()
> +     * is not supported on x86.
> +     *
> +     * FIXME: Until foreign pages inserted into the P2M are properly
> +     * reference counted, it is unsafe to allow mapping of
> +     * resource pages unless the caller is the hardware domain.
> +     */
> +    return false;
> +}

... the initial part of the comment is true only for translated
domains. The reference to hwdom in the latter part of the comment
(which merely gets moved here) is a good indication that the
hwdom check also wants moving here. In turn the check at the top
of p2m_add_foreign() should imo then also use this new function,
instead of effectively open-coding it (with a similar comment).
And x86's set_foreign_p2m_entry() may want to gain

    ASSERT(arch_acquire_resource_check(d));

perhaps alongside the same ASSERT() you add to the Arm variant.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 14:30:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 14:30:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47470.83980 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kme0S-00039A-UR; Tue, 08 Dec 2020 14:30:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47470.83980; Tue, 08 Dec 2020 14:30:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kme0S-000393-PP; Tue, 08 Dec 2020 14:30:20 +0000
Received: by outflank-mailman (input) for mailman id 47470;
 Tue, 08 Dec 2020 14:30:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ur7q=FM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kme0R-00038y-PV
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 14:30:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e5c24320-df7b-4b80-95d5-e11eb573a433;
 Tue, 08 Dec 2020 14:30:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F191CAF3E;
 Tue,  8 Dec 2020 14:30:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e5c24320-df7b-4b80-95d5-e11eb573a433
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607437818; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=v+L/+Z2x90PRAtf5gqYmW+PtCmVGRowXcWV4+i+TcHc=;
	b=s+MNE5J30yFwGGogFnOTUc15xQvgksRcpjFg+iyfT23Qhc4tg0hrpHf2VOSet5CRokrwsS
	X22iu9At363B3Imfq5zBX77IQcqetNUlVvhUZROO5qNjJgxHHzzxA/cwJ9E+sRbd/CAGaP
	O6z2Z0Vt+gfVs9KO8MQL4H/ofcWzjo0=
Subject: Re: [PATCH] xen: CONFIG_PV_SHIM_EXCLUSIVE and CONFIG_HVM are mutually
 exclusive
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201208135146.30540-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <724f015a-ce72-a425-6cf7-6751620f1eec@suse.com>
Date: Tue, 8 Dec 2020 15:30:17 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201208135146.30540-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 08.12.2020 14:51, Juergen Gross wrote:
> With CONFIG_PV_SHIM_EXCLUSIVE some sources required for CONFIG_HVM are
> not built, so let CONFIG_HVM depend on !CONFIG_PV_SHIM_EXCLUSIVE.
> 
> Let CONFIG_HVM default to !CONFIG_PV_SHIM instead.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  xen/arch/x86/Kconfig | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)

See "x86/shim: don't permit HVM and PV_SHIM_EXCLUSIVE at the same
time" posted on Oct 19. I'd be fine switching to the !PV_SHIM
default you have here. But Andrew looks to be objecting to a
change like this, sadly without pointing out a good alternative
so far.

Jan

> diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
> index 24868aa6ad..0107cfa12f 100644
> --- a/xen/arch/x86/Kconfig
> +++ b/xen/arch/x86/Kconfig
> @@ -90,7 +90,8 @@ config PV_LINEAR_PT
>           If unsure, say Y.
>  
>  config HVM
> -	def_bool !PV_SHIM_EXCLUSIVE
> +	depends on !PV_SHIM_EXCLUSIVE
> +	def_bool !PV_SHIM
>  	prompt "HVM support"
>  	---help---
>  	  Interfaces to support HVM domains.  HVM domains require hardware
> 



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 14:34:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 14:34:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47477.83994 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kme43-0003QP-Hj; Tue, 08 Dec 2020 14:34:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47477.83994; Tue, 08 Dec 2020 14:34:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kme43-0003QI-Ep; Tue, 08 Dec 2020 14:34:03 +0000
Received: by outflank-mailman (input) for mailman id 47477;
 Tue, 08 Dec 2020 14:34:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PeDt=FM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kme41-0003QD-Pt
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 14:34:01 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d50020c1-1a74-42da-81f3-9a65fe809b87;
 Tue, 08 Dec 2020 14:34:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d50020c1-1a74-42da-81f3-9a65fe809b87
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1607438040;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=yIUj8hMRyr0GAmSeVgq0BMEU5WF1ogfC7Z8gN3L0L7Q=;
  b=Sh1SRq85OnYg8/mA0qKwB4x9RQ3vQBUbZSN6vtCHDLtowDRP+7bltjTd
   p/LS9OjbTMTu5+qvqH3FP2dN//P9WmPWypF/PMPQv7GERDNiH+7zWCHwb
   cu7Oq+n+5dbfmn6kp/ZulQsxqm94xzJKzqGoppeTnyVh1IbA/r4Nu4vkH
   k=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: E5xIAiTp0i+FJjtJQtUqoircIjnlvSjuhhoYpbszATjlFP07uvAh/n7Pw1SZBMCIPp8L70djM7
 JXrEzf83FJsba7oyg4wrICQE+iB9lmGbXOLT7yW/LXroPOh7b0DTtS0HRpUCb+49M+TzeBtiyb
 jNmQp9mpXRaZCnCVzaMVYwTjl/aLYE8RIyZWEBL5KlRnP2w2AsmxMQjkcUTTSBQSjCVG+yU3cw
 1t+NcSf0CYes3FzjZb+oNNPYQOhyfKDTeXPal3nFjBYB9taJZeMkuo9wcDh5ruSs1tvN/noprF
 pME=
X-SBRS: 5.1
X-MesageID: 32778924
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,402,1599537600"; 
   d="scan'208";a="32778924"
Subject: Re: [PATCH] xen: CONFIG_PV_SHIM_EXCLUSIVE and CONFIG_HVM are mutually
 exclusive
To: Juergen Gross <jgross@suse.com>, <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201208135146.30540-1-jgross@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <6d68bb96-b57f-f13a-9242-47bb8bb7fc86@citrix.com>
Date: Tue, 8 Dec 2020 14:33:52 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201208135146.30540-1-jgross@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 08/12/2020 13:51, Juergen Gross wrote:
> With CONFIG_PV_SHIM_EXCLUSIVE some sources required for CONFIG_HVM are
> not built, so let CONFIG_HVM depend on !CONFIG_PV_SHIM_EXCLUSIVE.
>
> Let CONFIG_HVM default to !CONFIG_PV_SHIM instead.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>

So while this will fix the randconfig failure, the statement isn't
true.  There are HVM codepaths which aren't dead even in shim-exclusive
mode.

The problem here is the way CONFIG_PV_SHIM_EXCLUSIVE abuses the Kconfig
system.  What is currently happening is that this option is trying to
enforce the pv shim defconfig in the dependency system.

We already have a defconfig, which is used in appropriate locations.  We
should not have two different things fighting over control.

This is the fault of c/s 8b5b49ceb3d which went in despite my
objections.  The change is not related to PV_SHIM_EXCLUSIVE - it is to
do with not supporting a control domain, which a) better describes what
it is actually doing, and b) has wider utility than PV Shim.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 14:39:28 2020
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Michal Orzel <Michal.Orzel@arm.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Wei Chen <Wei.Chen@arm.com>
Subject: Re: [PATCH] xen/arm: Add workaround for Cortex-A53 erratum #845719
Date: Tue, 8 Dec 2020 14:38:55 +0000
Message-ID: <5D1B5771-A6B3-4F5E-81A1-864DBC8787B4@arm.com>
References: <20201208072327.11890-1-michal.orzel@arm.com>
 <d286241c-fd3b-8506-37e5-0ddcdaae97be@xen.org>
In-Reply-To: <d286241c-fd3b-8506-37e5-0ddcdaae97be@xen.org>
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0

Hi Julien,

> On 8 Dec 2020, at 09:47, Julien Grall <julien@xen.org> wrote:
>
> Hi,
>
> On 08/12/2020 07:23, Michal Orzel wrote:
>> When executing in aarch32 state at EL0, a load at EL0 from a
>> virtual address that matches the bottom 32 bits of the virtual address
>> used by a recent load at (aarch64) EL1 might return incorrect data.
>> The workaround is to insert a write of the contextidr_el1 register
>> on exception return to an aarch32 guest.
>
> I am a bit confused with this comment. In the previous paragraph, you
> are suggesting that the problem is an interaction between EL1 AArch64
> and EL0 AArch32. But here you seem to imply the issue only happens when
> running an AArch32 guest.
>
> Can you clarify it?

This can happen when switching from an aarch64 guest to an aarch32 guest,
so not only when there is an interaction.

>
>> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
>> ---
>>  docs/misc/arm/silicon-errata.txt |  1 +
>>  xen/arch/arm/Kconfig             | 19 +++++++++++++++++++
>>  xen/arch/arm/arm64/entry.S       |  9 +++++++++
>>  xen/arch/arm/cpuerrata.c         |  8 ++++++++
>>  xen/include/asm-arm/cpufeature.h |  3 ++-
>>  5 files changed, 39 insertions(+), 1 deletion(-)
>> diff --git a/docs/misc/arm/silicon-errata.txt b/docs/misc/arm/silicon-errata.txt
>> index 27bf957ebf..fa3d9af63d 100644
>> --- a/docs/misc/arm/silicon-errata.txt
>> +++ b/docs/misc/arm/silicon-errata.txt
>> @@ -45,6 +45,7 @@ stable hypervisors.
>>  | ARM            | Cortex-A53      | #827319         | ARM64_ERRATUM_827319    |
>>  | ARM            | Cortex-A53      | #824069         | ARM64_ERRATUM_824069    |
>>  | ARM            | Cortex-A53      | #819472         | ARM64_ERRATUM_819472    |
>> +| ARM            | Cortex-A53      | #845719         | ARM64_ERRATUM_845719    |
>>  | ARM            | Cortex-A55      | #1530923        | N/A                     |
>>  | ARM            | Cortex-A57      | #852523         | N/A                     |
>>  | ARM            | Cortex-A57      | #832075         | ARM64_ERRATUM_832075    |
>> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
>> index f5b1bcda03..6bea393555 100644
>> --- a/xen/arch/arm/Kconfig
>> +++ b/xen/arch/arm/Kconfig
>> @@ -244,6 +244,25 @@ config ARM_ERRATUM_858921
>>  	  If unsure, say Y.
>>  
>> +config ARM64_ERRATUM_845719
>> +	bool "Cortex-A53: 845719: A load might read incorrect data"
>> +	default y
>> +	help
>> +	  This option adds an alternative code sequence to work around ARM
>> +	  erratum 845719 on Cortex-A53 parts up to r0p4.
>> +
>> +	  When executing in aarch32 state at EL0, a load at EL0 from a
>> +	  virtual address that matches the bottom 32 bits of the virtual address
>> +	  used by a recent load at (aarch64) EL1 might return incorrect data.
>> +
>> +	  The workaround is to insert a write of the contextidr_el1 register
>> +	  on exception return to an aarch32 guest.
>> +	  Please note that this does not necessarily enable the workaround,
>> +	  as it depends on the alternative framework, which will only patch
>> +	  the kernel if an affected CPU is detected.
>> +
>> +	  If unsure, say Y.
>> +
>>  config ARM64_WORKAROUND_REPEAT_TLBI
>>  	bool
>>  
>> diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S
>> index 175ea2981e..ef3336f34a 100644
>> --- a/xen/arch/arm/arm64/entry.S
>> +++ b/xen/arch/arm/arm64/entry.S
>> @@ -96,6 +96,15 @@
>>          msr     SPSR_fiq, x22
>>          msr     SPSR_irq, x23
>>  
>> +#ifdef CONFIG_ARM64_ERRATUM_845719
>> +alternative_if ARM64_WORKAROUND_845719
>> +        /* contextidr_el1 is not accessible from aarch32 guest so we can
>> +         * write xzr to it
>> +         */
>
> This path is also used when the trapping occurs when running in EL0
> AArch32. So wouldn't you clobber it if the EL1 AArch64 uses it (Linux
> may store the PID in it)?

Right, we missed that.
In this case I would suggest reading and then writing back contextidr_el1
instead, so that we do not clobber it.
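
Something along these lines, perhaps (a sketch only — the choice of scratch
register, and whether one is actually free at this point in entry.S, are
assumptions, not a tested patch):

```
alternative_if ARM64_WORKAROUND_845719
        /*
         * Erratum 845719 only requires *a* write to contextidr_el1 on
         * exception return to an AArch32 context. Read the current value
         * back in rather than writing xzr, so that an AArch64 EL1 user
         * of the register (e.g. Linux storing the PID there) is not
         * clobbered. x21 here is an arbitrary scratch register.
         */
        mrs     x21, contextidr_el1
        msr     contextidr_el1, x21
alternative_else_nop_endif
```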

Regards
Bertrand

>
> Also the coding style for multi-line comment in Xen is:
>
> /*
> * Foo
> * Bar
> */
>
>> +        msr     contextidr_el1, xzr
>> +alternative_else_nop_endif
>> +#endif
>
> Cheers,
>
> --
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 14:53:58 2020
From: Jürgen Groß <jgross@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Roger Pau Monné <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>
Subject: Re: [PATCH] xen: CONFIG_PV_SHIM_EXCLUSIVE and CONFIG_HVM are mutually
 exclusive
Date: Tue, 8 Dec 2020 15:53:38 +0100
Message-ID: <3eb661e5-ec09-af4a-868d-0e3909136802@suse.com>
References: <20201208135146.30540-1-jgross@suse.com>
 <6d68bb96-b57f-f13a-9242-47bb8bb7fc86@citrix.com>
In-Reply-To: <6d68bb96-b57f-f13a-9242-47bb8bb7fc86@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
MIME-Version: 1.0

On 08.12.20 15:33, Andrew Cooper wrote:
> On 08/12/2020 13:51, Juergen Gross wrote:
>> With CONFIG_PV_SHIM_EXCLUSIVE some sources required for CONFIG_HVM are
>> not built, so let CONFIG_HVM depend on !CONFIG_PV_SHIM_EXCLUSIVE.
>>
>> Let CONFIG_HVM default to !CONFIG_PV_SHIM instead.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>
> So while this will fix the randconfig failure, the statement isn't
> true.  There are HVM codepaths which aren't even dead in shim-exclusive
> mode.

I only said that CONFIG_PV_SHIM_EXCLUSIVE disables building some
sources which are required for CONFIG_HVM, and this is certainly true.

> The problem here is the way CONFIG_PV_SHIM_EXCLUSIVE abuses the Kconfig
> system.  What is currently happening is that this option is trying to
> enforce the pv shim defconfig in the dependency system.
>
> We already have a defconfig, which is used in appropriate locations.  We
> should not have two different things fighting over control.
>
> This is the fault of c/s 8b5b49ceb3d which went in despite my
> objections.  The change is not related to PV_SHIM_EXCLUSIVE - it is to
> do with not supporting a control domain, which a) better describes what
> it is actually doing, and b) has wider utility than PV Shim.

Yes, maybe.

Random build failures are not nice, so in case there is no agreement on
how to proceed I'd be in favor of fixing the fallout first and then
discussing a proper solution.
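
For reference, the change under discussion amounts to roughly the following
Kconfig fragment (a sketch reconstructed from the commit message above, not
the actual hunk from the posted patch):

```
config HVM
	bool "HVM support"
	depends on !PV_SHIM_EXCLUSIVE
	default !PV_SHIM
```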


Juergen



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 14:54:49 2020
From: Oleksandr <olekstysh@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien.grall@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Roger Pau Monné <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 xen-devel@lists.xenproject.org
Subject: Re: [PATCH V3 09/23] xen/dm: Make x86's DM feature common
Date: Tue, 8 Dec 2020 16:54:37 +0200
Message-ID: <f7d06c56-7f76-9ba7-6797-ebd9cf133479@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-10-git-send-email-olekstysh@gmail.com>
 <00c3df9f-760d-bb3d-d1d6-7c7df7f0c17c@suse.com>
 <24191fca-78e7-3e6b-ff02-c06e8ae79f56@gmail.com>
 <7c985117-2bb4-dd5b-53cf-e217e25b2e8e@suse.com>
In-Reply-To: <7c985117-2bb4-dd5b-53cf-e217e25b2e8e@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
MIME-Version: 1.0


On 08.12.20 11:30, Jan Beulich wrote:

Hi Jan

> On 07.12.2020 21:23, Oleksandr wrote:
>> On 07.12.20 14:08, Jan Beulich wrote:
>>> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>>>> From: Julien Grall <julien.grall@arm.com>
>>>>
>>>> As a lot of x86 code can be re-used on Arm later on, this patch
>>>> splits devicemodel support into common and arch specific parts.
>>>>
>>>> The common DM feature is supposed to be built with IOREQ_SERVER
>>>> option enabled (as well as the IOREQ feature), which is selected
>>>> for x86's config HVM for now.
>>>>
>>>> Also update XSM code a bit to let DM op be used on Arm.
>>>>
>>>> This support is going to be used on Arm to be able run device
>>>> emulator outside of Xen hypervisor.
>>>>
>>>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>>
>>>> ---
>>>> Please note, this is a split/cleanup/hardening of Julien's PoC:
>>>> "Add support for Guest IO forwarding to a device emulator"
>>>>
>>>> Changes RFC -> V1:
>>>>      - update XSM, related changes were pulled from:
>>>>        [RFC PATCH V1 04/12] xen/arm: Introduce arch specific bits for IOREQ/DM features
>>>>
>>>> Changes V1 -> V2:
>>>>      - update the author of a patch
>>>>      - update patch description
>>>>      - introduce xen/dm.h and move definitions here
>>>>
>>>> Changes V2 -> V3:
>>>>      - no changes
>>> And my concern regarding the common vs arch nesting also hasn't
>>> changed.
>>
>> I am sorry, I might have misread your comment, but I failed to see any
>> request(s) for changes that were obvious to me.
>> I have just re-read the previous discussion...
>> So the question about considering doing it the other way around (top-level
>> dm-op handling being arch-specific, calling into e.g. ioreq_server_dm_op()
>> for otherwise unhandled ops) is exactly the concern I should have addressed?
> Well, on v2 you replied you didn't consider the alternative. I would
> have expected that you would at least go through this consideration
> process, and see whether there are better reasons to stick with the
> apparently backwards arrangement than to change to the more
> conventional one. If there are such reasons, I would expect them to
> be called out in reply and perhaps also in the commit message; the
> latter because down the road more people may wonder why the more
> narrow / special set of cases gets handled at a higher layer than
> the wider set of remaining ones, and they would then be able to find
> an explanation without having to resort to searching through list
> archives.
Ah, will investigate. Sorry for not paying enough attention to it.
Yes, IOREQ (I mean "common") ops are 7 out of 18 right now. The
subsequent patch adds one more DM op, XEN_DMOP_set_irq_level.
There are several PCI-related ops which might want to become common in
the future (but I am not sure).


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 14:58:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 14:58:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47506.84052 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmeRZ-0005ne-NV; Tue, 08 Dec 2020 14:58:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47506.84052; Tue, 08 Dec 2020 14:58:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmeRZ-0005nX-JU; Tue, 08 Dec 2020 14:58:21 +0000
Received: by outflank-mailman (input) for mailman id 47506;
 Tue, 08 Dec 2020 14:58:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ur7q=FM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmeRY-0005nS-9m
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 14:58:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d25603db-62aa-45d3-a1da-deaf3ef9b739;
 Tue, 08 Dec 2020 14:58:19 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 82F96AC9A;
 Tue,  8 Dec 2020 14:58:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d25603db-62aa-45d3-a1da-deaf3ef9b739
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607439498; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+jgd78D59i4R04e+1pdQQfXhaDttc6ehT1IeLzQPV4A=;
	b=NmrwJPYfZfmbF0UWwR24PckTi3IELB4yklkMJ84P5OfwT4jBEUftdCNrZpmblQx/yoRsvU
	qMoy7zr1o1y8KdIf0r3xyLjdDRUbaNmElAn5WbhIsdshYnyS+3ngcoZv/Q9ttzK0z6A4Ph
	JHg4IHr+6TDUMVDHXCJWqKtwXYJ6h70=
Subject: Re: [PATCH] xen: CONFIG_PV_SHIM_EXCLUSIVE and CONFIG_HVM are mutually
 exclusive
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>,
 xen-devel@lists.xenproject.org
References: <20201208135146.30540-1-jgross@suse.com>
 <6d68bb96-b57f-f13a-9242-47bb8bb7fc86@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0a1f7c01-4466-5e2f-2f50-b174e2ba9edf@suse.com>
Date: Tue, 8 Dec 2020 15:58:17 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <6d68bb96-b57f-f13a-9242-47bb8bb7fc86@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 08.12.2020 15:33, Andrew Cooper wrote:
> On 08/12/2020 13:51, Juergen Gross wrote:
>> With CONFIG_PV_SHIM_EXCLUSIVE some sources required for CONFIG_HVM are
>> not built, so let CONFIG_HVM depend on !CONFIG_PV_SHIM_EXCLUSIVE.
>>
>> Let CONFIG_HVM default to !CONFIG_PV_SHIM instead.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
> 
> So while this will fix the randconfig failure, the statement isn't
> true.  There are HVM codepaths which aren't even dead in shim-exclusive
> mode.
> 
> The problem here is the way CONFIG_PV_SHIM_EXCLUSIVE abuses the Kconfig
> system.  What is currently happening is that this option is trying to
> enforce the pv shim defconfig in the dependency system.
> 
> We already have a defconfig, which is used in appropriate locations.  We
> should not have two different things fighting over control.
> 
> This is the fault of c/s 8b5b49ceb3d which went in despite my
> objections.  The change is not related to PV_SHIM_EXCLUSIVE - it is to
> do with not supporting a control domain, which a) better describes what
> it is actually doing, and b) has wider utility than PV Shim.

Would you mind pointing me at where you had voiced objections
to that change? I've just searched both my inbox and the list
archives, without finding any. I only recall your objections
to the patch I sent later which is similar to Jürgen's. And
I'm quite certain I'd have stayed away from committing anything
while aware of unresolved objections, even if - more often than
not - this means waiting almost indefinitely, which I don't
appreciate as a way to deal with disagreement.

From what you further state, I derive that you'd like to see
e.g. !PV_SHIM_EXCLUSIVE be a dependency of a new CONTROL_DOMAIN
Kconfig setting? I'm not sure though I see how this would help
the situation (I'm not even sure what scope this control would
have: just domctl, or also sysctl, or additionally platform-op).
Nor do I see what's wrong with forcing HVM off in shim-exclusive
builds - there can't be HVM domains in such a configuration (but
I'm pretty sure I said so somewhere else already, without ever
hearing back, albeit it apparently wasn't on either of the
patches' threads according to my outbox).
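The shape of the dependency change being discussed, going by Jürgen's commit message, would be roughly the following (a sketch only, not the actual hunk from the patch):

```kconfig
# Illustrative sketch: make HVM unavailable in shim-exclusive builds
# and default it off when building the PV shim at all.
config HVM
	bool "HVM support"
	depends on !PV_SHIM_EXCLUSIVE
	default !PV_SHIM
```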

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 15:02:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 15:02:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47524.84104 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmeVt-0006wI-Me; Tue, 08 Dec 2020 15:02:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47524.84104; Tue, 08 Dec 2020 15:02:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmeVt-0006wB-Jf; Tue, 08 Dec 2020 15:02:49 +0000
Received: by outflank-mailman (input) for mailman id 47524;
 Tue, 08 Dec 2020 15:02:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ur7q=FM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmeVs-0006w6-8p
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 15:02:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a04aa700-5cb0-45c0-a96c-2f0a816cb239;
 Tue, 08 Dec 2020 15:02:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 67180AD6B;
 Tue,  8 Dec 2020 15:02:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a04aa700-5cb0-45c0-a96c-2f0a816cb239
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607439766; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=KEpX4ogB3q4fppRhmZvlVE/aFYpKhSkD5hKvAs2ADDc=;
	b=pD8TriR4+3UBbxeamHgn6qY3eGEC1veQ4VY5SbPnJEP1qe/F/FxCnM/Pd03tySS4diP6J8
	OHhve0nFzoPjp3V0+0hL89Y9h75Doikr/n45Sdtt5bQ7GWhGzdEw2LKLxcbX643dSQr5j/
	Jj++r8g8mfHWNduJEnASbf6QqIc/BEo=
Subject: Re: [PATCH V3 04/23] xen/ioreq: Make x86's IOREQ feature common
To: Oleksandr <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Paul Durrant <paul@xen.org>, Tim Deegan
 <tim@xen.org>, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-5-git-send-email-olekstysh@gmail.com>
 <d1fdebe9-3355-fece-e9dc-e6a7acc180e7@suse.com>
 <4a82d6f3-6b6c-566a-6ad0-36e22df323fa@gmail.com>
 <536b5e63-0605-f4d3-e163-dff67ec0422d@suse.com>
 <8ee60a49-8e64-ae25-510b-42eb243ea3ae@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f7343410-5fc8-7549-0bdc-9fd822570742@suse.com>
Date: Tue, 8 Dec 2020 16:02:45 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <8ee60a49-8e64-ae25-510b-42eb243ea3ae@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 08.12.2020 14:56, Oleksandr wrote:
> 
> On 08.12.20 11:21, Jan Beulich wrote:
> 
> Hi Jan
> 
>> On 07.12.2020 20:43, Oleksandr wrote:
>>> On 07.12.20 13:41, Jan Beulich wrote:
>>>> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>>>>> @@ -38,42 +37,6 @@ int arch_ioreq_server_get_type_addr(const struct domain *d,
>>>>>                                        uint64_t *addr);
>>>>>    void arch_ioreq_domain_init(struct domain *d);
>>>> As already mentioned in an earlier reply: What about these? They
>>>> shouldn't get declared once per arch. If anything, ones that
>>>> want to be inline functions can / should remain in the per-arch
>>>> header.
>>> I don't entirely get the suggestion. Is it to make the "simple" ones
>>> inline? Why not; there are a few which probably want to be inline,
>>> such as the following, for example:
>>> - arch_ioreq_domain_init
>>> - arch_ioreq_server_destroy
>>> - arch_ioreq_server_destroy_all
>>> - arch_ioreq_server_map_mem_type (probably)
> 
> 
> First of all, thank you for the clarification; now your point is clear 
> to me.
> 
> 
>> Before being able to make a suggestion, I need to have my question
>> answered: Why do the arch_*() declarations live in the arch header?
>> They represent a common interface (between common and arch code)
>> and hence should be declared in exactly one place.
> 
> I got it; I had the wrong assumption that arch hook declarations should 
> live in arch code.
> 
> 
>> It is only at
>> the point where you/we _consider_ making some of them inline that
>> moving those (back) to the arch header may make sense. Albeit even
>> then I'd prefer if only the ones get moved which are expected to
>> be inline for all arch-es. Others would better have the arch header
>> indicate to the common one that no declaration is needed (such that
>> the declaration still remains common for all arch-es using out-of-
>> line functions).
> I got it as well.
> 
> Well, I think two options are available to address your comments:
> 1. All arch hooks out-of-line: move all arch hook declarations to 
> the common header here and modify
> "[PATCH V3 14/23] arm/ioreq: Introduce arch specific bits for IOREQ/DM 
> features" to make all Arm variants
> out-of-line (I made them inline since all of them are just stubs).
> 2. Some arch hooks inline: consider which ones want to be inline (for 
> both arch-es) and place them into the arch headers; the other ones
> should remain in the common header.
> 
> My question is: which option is more suitable?

And the presumably very helpful to you answer is "Depends." It's a
case by case judgement call in the end.

Sorry, Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 15:02:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 15:02:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47525.84116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmeW0-0006yZ-V4; Tue, 08 Dec 2020 15:02:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47525.84116; Tue, 08 Dec 2020 15:02:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmeW0-0006yR-Rc; Tue, 08 Dec 2020 15:02:56 +0000
Received: by outflank-mailman (input) for mailman id 47525;
 Tue, 08 Dec 2020 15:02:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmeVz-0006y5-7a; Tue, 08 Dec 2020 15:02:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmeVz-0001B7-0w; Tue, 08 Dec 2020 15:02:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmeVy-0006iX-NK; Tue, 08 Dec 2020 15:02:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kmeVy-0003OP-Mj; Tue, 08 Dec 2020 15:02:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UiIwppAVGRjOupNR0nNTokGsqQhWXUx8WdQH7hStLnM=; b=5HeA7MN4K04CCzFsxHP9IIAzCQ
	wcZWwYMLiIZhSK8bn11P4mjn0nsxTw1GjCpfJg8H51sBDC+zWYYUMEL6yDmpb6lJKie8/4gmoBh1b
	DdaTkxDHStKNKR97KMcfz2AsPcyZmZxUELr3/+Ta05mnW/bCcUru/evvXp87gi9fccR0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157271-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157271: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4b0e0db86194b5e9e18c9f2c10b3910f3394c56f
X-Osstest-Versions-That:
    xen=5e666356a9d55fbd9eb5b8506088aa760e107b5b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 08 Dec 2020 15:02:54 +0000

flight 157271 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157271/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157249
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157249
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157249
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157249
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157249
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157249
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157249
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157249
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157249
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157249
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157249
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  4b0e0db86194b5e9e18c9f2c10b3910f3394c56f
baseline version:
 xen                  5e666356a9d55fbd9eb5b8506088aa760e107b5b

Last test of basis   157249  2020-12-07 01:51:52 Z    1 days
Failing since        157263  2020-12-07 17:06:33 Z    0 days    2 attempts
Testing same since   157271  2020-12-08 03:33:03 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Hongyan Xia <hongyxia@amazon.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5e666356a9..4b0e0db861  4b0e0db86194b5e9e18c9f2c10b3910f3394c56f -> master


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 15:11:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 15:11:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47540.84133 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmeeb-00085O-1L; Tue, 08 Dec 2020 15:11:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47540.84133; Tue, 08 Dec 2020 15:11:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmeea-00085H-Ua; Tue, 08 Dec 2020 15:11:48 +0000
Received: by outflank-mailman (input) for mailman id 47540;
 Tue, 08 Dec 2020 15:11:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ur7q=FM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmeea-00085C-E9
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 15:11:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f453aeb9-1538-42b6-acb3-370d319de40f;
 Tue, 08 Dec 2020 15:11:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E49BCAF6D;
 Tue,  8 Dec 2020 15:11:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f453aeb9-1538-42b6-acb3-370d319de40f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607440304; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jXgSZQhnC8jyaOrMTBVGpRFySSze5DeQ9hRTfDYd1X0=;
	b=j43BfM5SrYg0oB0sUaPvlN+ENjEXIb0dj+VgZqpPaMQie0arZdco047cTzSng/E+qxCRCC
	v0kkIShz1DQ61RKj+01e8K9Tg3dpSsLr58B3aNCw/wQEGS7Bla05SrjwwolWF+mmykCyJm
	WnFwOTRF0m5I6+EhGJtS3C3Xj1BWSEc=
Subject: Re: [PATCH V3 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-18-git-send-email-olekstysh@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3bb4c3b5-a46a-ba31-292f-5c6ba49fa9be@suse.com>
Date: Tue, 8 Dec 2020 16:11:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <1606732298-22107-18-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
> --- a/xen/include/xen/ioreq.h
> +++ b/xen/include/xen/ioreq.h
> @@ -55,6 +55,20 @@ struct ioreq_server {
>      uint8_t                bufioreq_handling;
>  };
>  
> +/*
> + * This should only be used when d == current->domain and it's not paused,

Is the "not paused" part really relevant here? Besides it being rare
that the current domain would be paused (if so, it's in the process
of having all its vCPU-s scheduled out), does this matter at all?

Apart from this the patch looks okay to me, but I'm not sure it
addresses Paul's concerns. Iirc he had suggested switching back to
a list if doing a sweep over the entire array is too expensive in
this specific case.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 15:21:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 15:21:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47548.84146 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmeoA-0000fz-TR; Tue, 08 Dec 2020 15:21:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47548.84146; Tue, 08 Dec 2020 15:21:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmeoA-0000fs-QF; Tue, 08 Dec 2020 15:21:42 +0000
Received: by outflank-mailman (input) for mailman id 47548;
 Tue, 08 Dec 2020 15:21:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmeo9-0000fk-Od; Tue, 08 Dec 2020 15:21:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmeo9-0001ZK-GT; Tue, 08 Dec 2020 15:21:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmeo9-0007N6-6j; Tue, 08 Dec 2020 15:21:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kmeo9-0003LC-6G; Tue, 08 Dec 2020 15:21:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oL9LqY1lUbesvpj622aPjfsV1L75dVEzqgnaFHoLs90=; b=isiKJGFHn24LRDNdt7n/ir2pAv
	1r/swlwCGKs0EMWXsvFuvSkdPuP2VGHfeVP79PTQ+W1DpjisddH2IXyJmMljo5YQeYvSOSUl3A4Lk
	dPcZ3oz73jJ4MRLYmSE9Fs9rfgcv6qmPMTqtlX05Nsz2HKsTpSbiMiJu9o4YngsPAHUs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157268-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157268: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 08 Dec 2020 15:21:41 +0000

flight 157268 qemu-mainline real [real]
flight 157320 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157268/
http://logs.test-lab.xenproject.org/osstest/logs/157320/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  110 days
Failing since        152659  2020-08-21 14:07:39 Z  109 days  227 attempts
Testing same since   157142  2020-12-01 20:39:57 Z    6 days   13 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69355 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 15:24:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 15:24:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47555.84161 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmeqy-0000rl-Iv; Tue, 08 Dec 2020 15:24:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47555.84161; Tue, 08 Dec 2020 15:24:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmeqy-0000re-Fs; Tue, 08 Dec 2020 15:24:36 +0000
Received: by outflank-mailman (input) for mailman id 47555;
 Tue, 08 Dec 2020 15:24:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ur7q=FM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmeqx-0000rZ-6R
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 15:24:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b879cc3a-e829-412f-85c6-cb2acb0a13c4;
 Tue, 08 Dec 2020 15:24:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4D7E1AC94;
 Tue,  8 Dec 2020 15:24:33 +0000 (UTC)
Subject: Re: [PATCH V3 20/23] xen/ioreq: Make x86's send_invalidate_req()
 common
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-21-git-send-email-olekstysh@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <acb09993-40fc-2ab0-21b9-5dbe2f061554@suse.com>
Date: Tue, 8 Dec 2020 16:24:32 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <1606732298-22107-21-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -552,6 +552,8 @@ struct domain
>          struct ioreq_server     *server[MAX_NR_IOREQ_SERVERS];
>          unsigned int            nr_servers;
>      } ioreq_server;
> +
> +    bool mapcache_invalidate;
>  #endif
>  };

While I can see reasons to put this inside the #ifdef here, I
would suspect putting it in the hole next to the group of 5
bools further up would be more efficient.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 15:33:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 15:33:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47565.84172 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmezs-0001sD-FH; Tue, 08 Dec 2020 15:33:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47565.84172; Tue, 08 Dec 2020 15:33:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmezs-0001s6-CL; Tue, 08 Dec 2020 15:33:48 +0000
Received: by outflank-mailman (input) for mailman id 47565;
 Tue, 08 Dec 2020 15:33:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KXXm=FM=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kmezq-0001s1-Ho
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 15:33:46 +0000
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2e67e432-3244-4b8c-99b1-933c354867b8;
 Tue, 08 Dec 2020 15:33:45 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id b4so16430394lfo.6
 for <xen-devel@lists.xenproject.org>; Tue, 08 Dec 2020 07:33:45 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id t17sm1058782lfl.125.2020.12.08.07.33.42
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 08 Dec 2020 07:33:43 -0800 (PST)
Subject: Re: [PATCH V3 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
To: Jan Beulich <jbeulich@suse.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-18-git-send-email-olekstysh@gmail.com>
 <3bb4c3b5-a46a-ba31-292f-5c6ba49fa9be@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <6026b7f3-ae6e-f98f-be65-27d7f729a37f@gmail.com>
Date: Tue, 8 Dec 2020 17:33:42 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <3bb4c3b5-a46a-ba31-292f-5c6ba49fa9be@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 08.12.20 17:11, Jan Beulich wrote:

Hi Jan

> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>> --- a/xen/include/xen/ioreq.h
>> +++ b/xen/include/xen/ioreq.h
>> @@ -55,6 +55,20 @@ struct ioreq_server {
>>       uint8_t                bufioreq_handling;
>>   };
>>   
>> +/*
>> + * This should only be used when d == current->domain and it's not paused,
> Is the "not paused" part really relevant here? Besides it being rare
> that the current domain would be paused (if so, it's in the process
> of having all its vCPU-s scheduled out), does this matter at all?

No, it isn't relevant, I will drop it.


>
> Apart from this the patch looks okay to me, but I'm not sure it
> addresses Paul's concerns. Iirc he had suggested to switch back to
> a list if doing a swipe over the entire array is too expensive in
> this specific case.
We would like to avoid doing any extra actions in 
leave_hypervisor_to_guest() if possible.
But not only there: the logic that checks/sets the mapcache_invalidation 
variable could also be avoided if a domain doesn't use an IOREQ server...


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 16:42:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 16:42:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47597.84235 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmg3z-0000jo-G2; Tue, 08 Dec 2020 16:42:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47597.84235; Tue, 08 Dec 2020 16:42:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmg3z-0000jh-D4; Tue, 08 Dec 2020 16:42:07 +0000
Received: by outflank-mailman (input) for mailman id 47597;
 Tue, 08 Dec 2020 16:42:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KXXm=FM=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kmg3y-0000jc-8E
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 16:42:06 +0000
Received: from mail-lf1-x144.google.com (unknown [2a00:1450:4864:20::144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 70c061bf-f533-49cd-b338-2897a66bf768;
 Tue, 08 Dec 2020 16:42:04 +0000 (UTC)
Received: by mail-lf1-x144.google.com with SMTP id 23so11520772lfg.10
 for <xen-devel@lists.xenproject.org>; Tue, 08 Dec 2020 08:42:04 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id p7sm3247086lfc.222.2020.12.08.08.42.02
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 08 Dec 2020 08:42:02 -0800 (PST)
Subject: Re: [PATCH V3 16/23] xen/mm: Handle properly reference in
 set_foreign_p2m_entry() on Arm
To: Jan Beulich <jbeulich@suse.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-17-git-send-email-olekstysh@gmail.com>
 <5f1a2914-f894-0046-2911-9cccb5d94dbf@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <e9973974-8240-8c28-90cd-2916f3cae25f@gmail.com>
Date: Tue, 8 Dec 2020 18:41:57 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <5f1a2914-f894-0046-2911-9cccb5d94dbf@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 08.12.20 16:24, Jan Beulich wrote:

Hi Jan

> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>> --- a/xen/common/memory.c
>> +++ b/xen/common/memory.c
>> @@ -1134,12 +1134,8 @@ static int acquire_resource(
>>       xen_pfn_t mfn_list[32];
>>       int rc;
>>   
>> -    /*
>> -     * FIXME: Until foreign pages inserted into the P2M are properly
>> -     *        reference counted, it is unsafe to allow mapping of
>> -     *        resource pages unless the caller is the hardware domain.
>> -     */
>> -    if ( paging_mode_translate(currd) && !is_hardware_domain(currd) )
>> +    if ( paging_mode_translate(currd) && !is_hardware_domain(currd) &&
>> +         !arch_acquire_resource_check() )
>>           return -EACCES;
> Looks like I didn't express myself clearly enough when replying
> to v2, by saying "as both prior parts of the condition should be
> needed only on the x86 side, and there (for PV) there's no p2m
> involved in the refcounting". While one may debate whether the
> hwdom check may remain here, the "translated" one definitely
> should move into the x86 hook. This (I think) will then also make
> apparent that ...
>
>> --- a/xen/include/asm-x86/p2m.h
>> +++ b/xen/include/asm-x86/p2m.h
>> @@ -382,6 +382,19 @@ struct p2m_domain {
>>   #endif
>>   #include <xen/p2m-common.h>
>>   
>> +static inline bool arch_acquire_resource_check(void)
>> +{
>> +    /*
>> +     * The reference counting of foreign entries in set_foreign_p2m_entry()
>> +     * is not supported on x86.
>> +     *
>> +     * FIXME: Until foreign pages inserted into the P2M are properly
>> +     * reference counted, it is unsafe to allow mapping of
>> +     * resource pages unless the caller is the hardware domain.
>> +     */
>> +    return false;
>> +}
> ... the initial part of the comment is true only for translated
> domains. The reference to hwdom in the latter part of the comment
> (which merely gets moved here) is a good indication that the
> hwdom check also wants moving here. In turn the check at the top
> of p2m_add_foreign() should imo then also use this new function,
> instead of effectively open-coding it (with a similar comment).
> And x86's set_foreign_p2m_entry() may want to gain
>
>      ASSERT(arch_acquire_resource_check(d));
>
> perhaps alongside the same ASSERT() you add to the Arm variant.

Well, will do. I was about to mention that the new function wants to gain 
a domain parameter, but I noticed you had given a hint in the ASSERT 
example.


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 16:49:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 16:49:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47603.84247 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmgBS-000126-AD; Tue, 08 Dec 2020 16:49:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47603.84247; Tue, 08 Dec 2020 16:49:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmgBS-00011z-79; Tue, 08 Dec 2020 16:49:50 +0000
Received: by outflank-mailman (input) for mailman id 47603;
 Tue, 08 Dec 2020 16:49:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KXXm=FM=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kmgBR-00011u-9d
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 16:49:49 +0000
Received: from mail-lj1-x243.google.com (unknown [2a00:1450:4864:20::243])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6825d8ee-764f-4837-ad14-19bda939b590;
 Tue, 08 Dec 2020 16:49:48 +0000 (UTC)
Received: by mail-lj1-x243.google.com with SMTP id s11so11845898ljp.4
 for <xen-devel@lists.xenproject.org>; Tue, 08 Dec 2020 08:49:48 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id n10sm1259603ljg.139.2020.12.08.08.49.45
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 08 Dec 2020 08:49:46 -0800 (PST)
Subject: Re: [PATCH V3 20/23] xen/ioreq: Make x86's send_invalidate_req()
 common
To: Jan Beulich <jbeulich@suse.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-21-git-send-email-olekstysh@gmail.com>
 <acb09993-40fc-2ab0-21b9-5dbe2f061554@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <efbac31b-1a9d-da16-ef60-372f10451f8e@gmail.com>
Date: Tue, 8 Dec 2020 18:49:45 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <acb09993-40fc-2ab0-21b9-5dbe2f061554@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 08.12.20 17:24, Jan Beulich wrote:

Hi Jan

> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>> --- a/xen/include/xen/sched.h
>> +++ b/xen/include/xen/sched.h
>> @@ -552,6 +552,8 @@ struct domain
>>           struct ioreq_server     *server[MAX_NR_IOREQ_SERVERS];
>>           unsigned int            nr_servers;
>>       } ioreq_server;
>> +
>> +    bool mapcache_invalidate;
>>   #endif
>>   };
> While I can see reasons to put this inside the #ifdef here, I
> would suspect putting it in the hole next to the group of 5
> bools further up would be more efficient.

OK, will move it there (although it will increase the number of #ifdef-s).

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 16:56:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 16:56:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47609.84260 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmgIB-0001zQ-3I; Tue, 08 Dec 2020 16:56:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47609.84260; Tue, 08 Dec 2020 16:56:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmgIB-0001zJ-0A; Tue, 08 Dec 2020 16:56:47 +0000
Received: by outflank-mailman (input) for mailman id 47609;
 Tue, 08 Dec 2020 16:56:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KXXm=FM=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kmgIA-0001zE-82
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 16:56:46 +0000
Received: from mail-lj1-x244.google.com (unknown [2a00:1450:4864:20::244])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 649081aa-017a-46f5-9b1c-2e9a740de779;
 Tue, 08 Dec 2020 16:56:45 +0000 (UTC)
Received: by mail-lj1-x244.google.com with SMTP id a1so19248764ljq.3
 for <xen-devel@lists.xenproject.org>; Tue, 08 Dec 2020 08:56:45 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id t30sm490478lft.266.2020.12.08.08.56.42
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 08 Dec 2020 08:56:43 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 649081aa-017a-46f5-9b1c-2e9a740de779
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:from:to:cc:references:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=Yucp48b98YYkkh285rGvs6t+fiwz8/XBk1X/+qcHqoA=;
        b=krjHIFexf97C6Ohi+e3vSA9kbyI8AnlrAepw+5V4a/UWj/CfiIn0pKR86arCbT59xy
         JTvZelpgMrjjtz+I9+Sbs5ZoWoQ1nyy6GUQzgKZbq1++5exLq4PjJfjDQ43Y5FEbD+SZ
         rvrWdqzdGObORNgdwUFVCeCgFgIbJiiRXJ3+TkDRktRbxUn8szM/plrT3EGpFD9iO9rS
         CFsgfDdJtpMiMIQ++ipwifS97c0dJugIjf+V4/noSbwTD3brcWDWYkg+STtmnzGc4jPF
         9TLGNJcro/LyAqvZ5JSmPdxNqom49J+8aG+Ua+MrblEMx5RBa8V4TzOS8fEeMo5av7zJ
         M5cQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:from:to:cc:references:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=Yucp48b98YYkkh285rGvs6t+fiwz8/XBk1X/+qcHqoA=;
        b=oIH5f2uFqA/hn0RE6BJ4n+aEd59U7dS0Nd5x/6kzKCNrhBXm0Rc19+uj0/b6gW0ykB
         ggvfCMNHnK3+Cv2LFf05kJG6jLsVmxpRxTE8ewn4DUznmqHny9rp24BT4ETRVYrwsNZ8
         1l/cKqFi6R+aIv5Tx2HJUD+UJd+vEKg9ESsncVGbaw0J5Sj7W3dB2r9L9V5jVTjI/vks
         8zu6hW1m3sBdn9JEc/Oe6oP/erWCuq+WTfaSFuu8bFNPXOLkKBkF90CdCsj1TgNmh0Wg
         WQJjjXYCjM8b9RPaJKVIJTXqsNRfkW/LcRHl70b4EunfPI6sWYKWxBhoxtxm502F9yij
         RxKA==
X-Gm-Message-State: AOAM532KuO9KtxU5S1FMszhrGi/Hi96keq++5Vsk/8yxRIh4EhM9s8iL
	J0gMaeilYibs9NgJRgg9hYBbr9uQeG7kww==
X-Google-Smtp-Source: ABdhPJxgbgiB2THHpUyQfP69rIMiMJfgYSOvA7/js4UeJcsOjRnsNu4zpESYg2auOrKZEFAbGsSt2Q==
X-Received: by 2002:a2e:2284:: with SMTP id i126mr6375905lji.93.1607446604110;
        Tue, 08 Dec 2020 08:56:44 -0800 (PST)
Subject: Re: [PATCH V3 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
From: Oleksandr <olekstysh@gmail.com>
To: Paul Durrant <paul@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-18-git-send-email-olekstysh@gmail.com>
 <3bb4c3b5-a46a-ba31-292f-5c6ba49fa9be@suse.com>
 <6026b7f3-ae6e-f98f-be65-27d7f729a37f@gmail.com>
Message-ID: <18bfd9b1-3e6a-8119-efd0-c82ad7ae681d@gmail.com>
Date: Tue, 8 Dec 2020 18:56:42 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <6026b7f3-ae6e-f98f-be65-27d7f729a37f@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


Hi Paul.


On 08.12.20 17:33, Oleksandr wrote:
>
> On 08.12.20 17:11, Jan Beulich wrote:
>
> Hi Jan
>
>> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>>> --- a/xen/include/xen/ioreq.h
>>> +++ b/xen/include/xen/ioreq.h
>>> @@ -55,6 +55,20 @@ struct ioreq_server {
>>>       uint8_t                bufioreq_handling;
>>>   };
>>>   +/*
>>> + * This should only be used when d == current->domain and it's not 
>>> paused,
>> Is the "not paused" part really relevant here? Besides it being rare
>> that the current domain would be paused (if so, it's in the process
>> of having all its vCPU-s scheduled out), does this matter at all?
>
> No, it isn't relevant, I will drop it.
>
>
>>
>> Apart from this the patch looks okay to me, but I'm not sure it
>> addresses Paul's concerns. Iirc he had suggested to switch back to
>> a list if doing a swipe over the entire array is too expensive in
>> this specific case.
> We would like to avoid doing any extra actions in
> leave_hypervisor_to_guest() if possible.
> And not only there: the logic that checks/sets the
> mapcache_invalidate variable could be avoided entirely if a domain
> doesn't use an IOREQ server...


Are you OK with this patch (common part of it)?


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 17:25:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 17:25:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47616.84272 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmgjS-0004ol-Cx; Tue, 08 Dec 2020 17:24:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47616.84272; Tue, 08 Dec 2020 17:24:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmgjS-0004oe-9t; Tue, 08 Dec 2020 17:24:58 +0000
Received: by outflank-mailman (input) for mailman id 47616;
 Tue, 08 Dec 2020 17:24:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KXXm=FM=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kmgjQ-0004oZ-Jg
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 17:24:56 +0000
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bc0827b2-4866-4d00-8408-ae0759d8e8b2;
 Tue, 08 Dec 2020 17:24:55 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id a9so24858224lfh.2
 for <xen-devel@lists.xenproject.org>; Tue, 08 Dec 2020 09:24:55 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id b20sm3255181lfb.18.2020.12.08.09.24.52
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 08 Dec 2020 09:24:52 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bc0827b2-4866-4d00-8408-ae0759d8e8b2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=IhsRK93rfTSs/1JqhNpJM5Be1tO0Ukrefi3zXqvcwAE=;
        b=efClm3lKsm2WljmwWEc0BX9gQ/qGhSNnemU3iCLKJcJpbnCa8gvH8HG1zuLwqqAYFS
         U8V4Ea+lMz6NWL5Ez7b1REHDMMh/aYE3FR4VnYMtsaKms+VLF2qwDYZo7YsgArgCWu+Z
         wTbVmLRkWe48NOr+gXCHnKaEl44KZNAR1I+QYHxr+1Q87pR/u6m2rUwwlrFZdubmT5W/
         r7ShoXvyYBOma7eU7bGDHUA58U4k1AtQbVwOXJSTN8bfZeT22SLxiJMchY3yIlF1HBfy
         CupNAx+ySQpbqR6WaBn10/fkhCrVSN8X5x1IsTyNt9FEvF2dJwspuk3yYeXhwhiJPLkp
         3igw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=IhsRK93rfTSs/1JqhNpJM5Be1tO0Ukrefi3zXqvcwAE=;
        b=OXc4Jy6LE34cQK7AVrZJe2jhNkybHu1GLiIFufiHOo/MWP/2nT+7lrWHpyYsmS9Xl7
         polK2fALwIdwd7wdzuXK53/AgQ8Qi/YGNQA/XwkOGi5prliT6G2Z5GPeM5VO06Xi345O
         lZbV2vZD01yRrQwiEKmubH6966y3Ze5CP7jeFhM8WmNQq4eiuz6a/36UxDAF8unS7RcY
         x6wcm+zD1JWt5vZfqKaBUDHmWGcZs+Vwdml34Wzzzzo7KyHcjX7pOQln75QiJoxxdv8s
         1iGM4dZoovG0Txr9AA+ubc8yWTseh81HfXamPENKewo/TfyxnRrVdhXso7w0gX87SWN6
         mAjA==
X-Gm-Message-State: AOAM530MVBfpfmwZXTs871CXhPGptkpTbmCkIDtXy7w9TD6M+McODb+9
	1zQD0l/KfNPTB4UhAMA4f9rFtWz6R2VeVw==
X-Google-Smtp-Source: ABdhPJxQRDFAeA59BTnmsACh5rbfneJgsk6ZgWutZguo3xhb+UzIsRjEHMAK5PTBDCwQOp7EO65ycA==
X-Received: by 2002:a19:4915:: with SMTP id w21mr10313265lfa.57.1607448293788;
        Tue, 08 Dec 2020 09:24:53 -0800 (PST)
Subject: Re: [PATCH V3 04/23] xen/ioreq: Make x86's IOREQ feature common
To: Jan Beulich <jbeulich@suse.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Paul Durrant <paul@xen.org>, Tim Deegan
 <tim@xen.org>, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-5-git-send-email-olekstysh@gmail.com>
 <d1fdebe9-3355-fece-e9dc-e6a7acc180e7@suse.com>
 <4a82d6f3-6b6c-566a-6ad0-36e22df323fa@gmail.com>
 <536b5e63-0605-f4d3-e163-dff67ec0422d@suse.com>
 <8ee60a49-8e64-ae25-510b-42eb243ea3ae@gmail.com>
 <f7343410-5fc8-7549-0bdc-9fd822570742@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <f1fad6a0-0879-636f-6fdd-8ee78b68fd5c@gmail.com>
Date: Tue, 8 Dec 2020 19:24:51 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <f7343410-5fc8-7549-0bdc-9fd822570742@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 08.12.20 17:02, Jan Beulich wrote:

Hi Jan

> On 08.12.2020 14:56, Oleksandr wrote:
>> On 08.12.20 11:21, Jan Beulich wrote:
>>
>> Hi Jan
>>
>>> On 07.12.2020 20:43, Oleksandr wrote:
>>>> On 07.12.20 13:41, Jan Beulich wrote:
>>>>> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>>>>>> @@ -38,42 +37,6 @@ int arch_ioreq_server_get_type_addr(const struct domain *d,
>>>>>>                                         uint64_t *addr);
>>>>>>     void arch_ioreq_domain_init(struct domain *d);
>>>>> As already mentioned in an earlier reply: What about these? They
>>>>> shouldn't get declared once per arch. If anything, ones that
>>>>> want to be inline functions can / should remain in the per-arch
>>>>> header.
>>>> I don't entirely get the suggestion. Is it to make the "simple" ones
>>>> inline? Why not; there are a few which probably want to be inline,
>>>> such as the following:
>>>> - arch_ioreq_domain_init
>>>> - arch_ioreq_server_destroy
>>>> - arch_ioreq_server_destroy_all
>>>> - arch_ioreq_server_map_mem_type (probably)
>>
>> First of all, thank you for the clarification, now your point is clear
>> to me.
>>
>>
>>> Before being able to make a suggestion, I need to have my question
>>> answered: Why do the arch_*() declarations live in the arch header?
>>> They represent a common interface (between common and arch code)
>>> and hence should be declared in exactly one place.
>> I got it; I had wrongly assumed that arch hook declarations should
>> live in arch code.
>>
>>
>>> It is only at
>>> the point where you/we _consider_ making some of them inline that
>>> moving those (back) to the arch header may make sense. Albeit even
>>> then I'd prefer if only the ones get moved which are expected to
>>> be inline for all arch-es. Others would better have the arch header
>>> indicate to the common one that no declaration is needed (such that
>>> the declaration still remains common for all arch-es using out-of-
>>> line functions).
>> I got it as well.
>>
>> Well, I think two options are available to address your comments:
>> 1. All arch hooks out-of-line: move all arch hook declarations to
>> the common header here and modify
>> "[PATCH V3 14/23] arm/ioreq: Introduce arch specific bits for IOREQ/DM
>> features" to make all Arm variants
>> out-of-line (I made them inline since all of them are just stubs).
>> 2. Some arch hooks inline: consider which want to be inline (for
>> both arch-es) and place those in the arch headers; the other ones
>> should remain in the common header.
>>
>> My question is which option is more suitable?
> And the presumably very helpful to you answer is "Depends." It's a
> case by case judgement call in the end.
>
> Sorry, Jan
Thank you for the honest answer. I will go with option #1, since all
these arch hooks are single-use only.
If some of them do turn out to want to be inline, that can be done
later on.

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 17:58:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 17:58:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47626.84283 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmhFI-0007fZ-4L; Tue, 08 Dec 2020 17:57:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47626.84283; Tue, 08 Dec 2020 17:57:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmhFI-0007fS-1O; Tue, 08 Dec 2020 17:57:52 +0000
Received: by outflank-mailman (input) for mailman id 47626;
 Tue, 08 Dec 2020 17:57:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ykJI=FM=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kmhFG-0007fN-Hj
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 17:57:50 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 761140db-e521-4e17-998b-9f5bd6e31cd3;
 Tue, 08 Dec 2020 17:57:47 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0B8HvhSC021158
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK)
 for <xen-devel@lists.xenproject.org>; Tue, 8 Dec 2020 18:57:45 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 80ED52E93A2; Tue,  8 Dec 2020 18:57:38 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 761140db-e521-4e17-998b-9f5bd6e31cd3
Date: Tue, 8 Dec 2020 18:57:38 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: xen-devel@lists.xenproject.org
Subject: dom0 PV looping on search_pre_exception_table()
Message-ID: <20201208175738.GA3390@antioche.eu.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Tue, 08 Dec 2020 18:57:45 +0100 (MET)

Hello,
for the first time I tried to boot a Xen kernel built from devel with
a NetBSD PV dom0. The kernel boots, but when the first userland process
is launched, it seems to enter a loop involving search_pre_exception_table()
(I see an endless stream from the dprintk() at arch/x86/extable.c:202)

With xen 4.13 I see it, but exactly once:
(XEN) extable.c:202: Pre-exception: ffff82d08038c304 -> ffff82d08038c8c8

with devel:
(XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
(XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
(XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
(XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
(XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
[...]

The dom0 kernel is the same in both cases.

At first glance it looks like a fault in the guest is not handled as it should be,
and the userland process keeps faulting on the same address.

Any idea what to look at?

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference
--


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 17:59:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 17:59:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47631.84296 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmhGv-0007oV-Ft; Tue, 08 Dec 2020 17:59:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47631.84296; Tue, 08 Dec 2020 17:59:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmhGv-0007oO-Cl; Tue, 08 Dec 2020 17:59:33 +0000
Received: by outflank-mailman (input) for mailman id 47631;
 Tue, 08 Dec 2020 17:59:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmhGu-0007oB-8G; Tue, 08 Dec 2020 17:59:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmhGt-0005O9-W1; Tue, 08 Dec 2020 17:59:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmhGt-0006t1-NE; Tue, 08 Dec 2020 17:59:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kmhGt-0000tJ-MU; Tue, 08 Dec 2020 17:59:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EhgLLTmSX/eUWxZdLln81n64dPys1eQsq2R5ypbx1gI=; b=WfpHeqA7R3KYQXjklTHJF6xQbg
	yDlcTSZoBIdwOGH18I1vOdJyLSVzs5mlKsoT92GcsQKJN2OjQJ+zvHtKEo4r5+oAlwWQDO2cZ0nc2
	BMM+FbK9vTNNiPdbH/GWd1eIaZBoA6JJh78dcH6E3vwX/CwliPCYfwlG+H315DIxtgIM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157285-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157285: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=cd796ed3345030aa1bb332fe5c793b3dddaf56e7
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 08 Dec 2020 17:59:31 +0000

flight 157285 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157285/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  12 debian-install           fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1 10 host-ping-check-xen fail in 157266 REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop     fail in 157266 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-examine      8 reboot           fail in 157266 pass in 157285
 test-amd64-amd64-xl-rtds 20 guest-localmigrate/x10 fail in 157266 pass in 157285
 test-arm64-arm64-xl-credit2   8 xen-boot         fail in 157266 pass in 157285
 test-arm64-arm64-libvirt-xsm  8 xen-boot                   fail pass in 157266
 test-arm64-arm64-xl-credit1   8 xen-boot                   fail pass in 157266
 test-amd64-amd64-i386-pvgrub 19 guest-localmigrate/x10     fail pass in 157266

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11) fail in 157266 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                cd796ed3345030aa1bb332fe5c793b3dddaf56e7
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  129 days
Failing since        152366  2020-08-01 20:49:34 Z  128 days  221 attempts
Testing same since   157266  2020-12-07 22:40:29 Z    0 days    2 attempts

------------------------------------------------------------
3652 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 700707 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 18:14:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 18:14:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47643.84317 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmhUq-0001MH-V2; Tue, 08 Dec 2020 18:13:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47643.84317; Tue, 08 Dec 2020 18:13:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmhUq-0001MA-Rn; Tue, 08 Dec 2020 18:13:56 +0000
Received: by outflank-mailman (input) for mailman id 47643;
 Tue, 08 Dec 2020 18:13:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PeDt=FM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kmhUo-0001M3-U3
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 18:13:55 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2ac12206-8299-4f02-b026-7822e2f62b52;
 Tue, 08 Dec 2020 18:13:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2ac12206-8299-4f02-b026-7822e2f62b52
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1607451233;
  h=subject:to:references:from:message-id:date:mime-version:
   in-reply-to:content-transfer-encoding;
  bh=1UlGM2EOezoYIsIyrb55OTD5yfyLUv4TkoosVZ95/Ug=;
  b=J4XNme2qhHfnk9YUXZe30fUhT8jhUiTLu0i5DgDKcXfmhfI95ampBeJh
   ixjjKYFTMlV7gMICE9jOMsLPO5jmOxkXGNZRFkMUDG7tPt4zByvGfm/o/
   uYjO3E+mOXRWRycnsND+jc0saWxqLr4uc/YikZP1NTeWCTLOgqukLfuzi
   A=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Byvv2QzLHlJxhCFcpxGzw5URdeBF5RrE7os7/Ao9YUw3cfEl4t+FeGVnn80G7eBefMtINbOfTI
 17cmhQO6geRiTPa6I/hrSWW0SmrTTkUTggsY276R69CyLg7uDIXCn9uCEegwo7IvsCh/pCRGaJ
 JZSqKH89QLDQZOn5VT9AaRUfRIdo+inXFE+BxQMrpTc/lH6VfDTgJS1gOoEvQFJBvkLUODcYXO
 5S7n7LyX2zDaRtZZkN0bcf+53bevav2mBTNwIkb6W2fUNtBKB5J9+qdun1w6GL0F0ppV+v+Yrk
 6xI=
X-SBRS: 5.1
X-MesageID: 33020474
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,403,1599537600"; 
   d="scan'208";a="33020474"
Subject: Re: dom0 PV looping on search_pre_exception_table()
To: Manuel Bouyer <bouyer@antioche.eu.org>, <xen-devel@lists.xenproject.org>
References: <20201208175738.GA3390@antioche.eu.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e73cc71d-c1a6-87c8-1b82-5d70d4f52eaa@citrix.com>
Date: Tue, 8 Dec 2020 18:13:46 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201208175738.GA3390@antioche.eu.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 08/12/2020 17:57, Manuel Bouyer wrote:
> Hello,
> for the first time I tried to boot a xen kernel from devel with
> a NetBSD PV dom0. The kernel boots, but when the first userland process
> is launched, it seems to enter a loop involving search_pre_exception_table()
> (I see an endless stream from the dprintk() at arch/x86/extable.c:202)
>
> With xen 4.13 I see it, but exactly once:
> (XEN) extable.c:202: Pre-exception: ffff82d08038c304 -> ffff82d08038c8c8
>
> with devel:
> (XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
> (XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
> (XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
> (XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
> (XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
> [...]
>
> the dom0 kernel is the same.
>
> At first glance it looks like a fault in the guest is not handled as it should be,
> and the userland process keeps faulting on the same address.
>
> Any idea what to look at ?

That is a recurring fault on IRET back to guest context, and is
probably caused by some unwise-in-hindsight cleanup which doesn't
escalate the failure to the failsafe callback.
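
As a toy model of the failure mode: if the fixup path for a faulting IRET simply retries the same return without escalating to the guest's failsafe callback, nothing ever repairs the guest state and the same fault recurs indefinitely. A minimal sketch (hypothetical C, not Xen code; names are illustrative only):

```c
/* Toy model of a recurring IRET fault: returning to the guest fails
 * while its state is bad, and only the failsafe callback can fix it. */
#include <stdbool.h>

/* Attempt the return-to-guest; fails until guest state is repaired. */
static bool iret_to_guest(bool guest_state_fixed)
{
    return guest_state_fixed;
}

/*
 * With escalation, the failsafe callback repairs the guest state and
 * the next attempt succeeds; without it, the loop never terminates
 * (bounded here only so the model itself halts).
 */
static int deliver(bool escalate)
{
    bool fixed = false;
    int tries;

    for ( tries = 0; tries < 10; tries++ )
    {
        if ( iret_to_guest(fixed) )
            return tries;
        if ( escalate )
            fixed = true;  /* failsafe callback ran in the guest */
    }
    return -1;  /* would loop forever in the real scenario */
}
```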

This ought to give something useful to debug with:

~Andrew

diff --git a/xen/arch/x86/extable.c b/xen/arch/x86/extable.c
index 70972f1085..62a7bcbe38 100644
--- a/xen/arch/x86/extable.c
+++ b/xen/arch/x86/extable.c
@@ -191,6 +191,10 @@ static int __init stub_selftest(void)
 __initcall(stub_selftest);
 #endif
 
+#include <xen/sched.h>
+#include <xen/softirq.h>
+const char *vec_name(unsigned int vec);
+
 unsigned long
 search_pre_exception_table(struct cpu_user_regs *regs)
 {
@@ -199,7 +203,13 @@ search_pre_exception_table(struct cpu_user_regs *regs)
         __start___pre_ex_table, __stop___pre_ex_table-1, addr);
     if ( fixup )
     {
-        dprintk(XENLOG_INFO, "Pre-exception: %p -> %p\n", _p(addr), _p(fixup));
+        printk(XENLOG_ERR "IRET fault: %s[%04x]\n",
+               vec_name(regs->entry_vector), regs->error_code);
+
+        domain_crash(current->domain);
+        for ( ;; )
+            do_softirq();
+
         perfc_incr(exception_fixed);
     }
     return fixup;
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 0459cee9fb..1059f3ce66 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -687,7 +687,7 @@ const char *trapstr(unsigned int trapnr)
     return trapnr < ARRAY_SIZE(strings) ? strings[trapnr] : "???";
 }
 
-static const char *vec_name(unsigned int vec)
+const char *vec_name(unsigned int vec)
 {
     static const char names[][4] = {
 #define P(x) [X86_EXC_ ## x] = "#" #x



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 18:22:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 18:22:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47652.84331 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmhcl-0002Lp-1d; Tue, 08 Dec 2020 18:22:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47652.84331; Tue, 08 Dec 2020 18:22:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmhck-0002Li-Ua; Tue, 08 Dec 2020 18:22:06 +0000
Received: by outflank-mailman (input) for mailman id 47652;
 Tue, 08 Dec 2020 18:22:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KXXm=FM=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kmhcj-0002Ld-16
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 18:22:05 +0000
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0b95ea55-8f77-485c-8c3f-6d45430de5c2;
 Tue, 08 Dec 2020 18:22:03 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id s30so25245754lfc.4
 for <xen-devel@lists.xenproject.org>; Tue, 08 Dec 2020 10:22:03 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id f10sm497350lfa.139.2020.12.08.10.22.01
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 08 Dec 2020 10:22:01 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0b95ea55-8f77-485c-8c3f-6d45430de5c2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=WB9Rm/l+/bhTPfa+Zw6hszWR35pGFXfkFBv9V096RoE=;
        b=XnsuMmkyeqLTvbs4ZSB5kWKg/cOI4QqB9U+j/l5lP8QcB9FjBTFFp2mSH/Xb+wPdlz
         i8vk/Ym0yEPLBijKlfx36Lw/Mp/Hk8JGyiqQUoZjpQzMRT1AXfDtJuOOY1uRGm6D7ztR
         sQMuhZ4Aw4Ebmk9QFnSHwxgG25kiRcTPivYIDKowlNKGuyZhnLVn+2Apn9d+dfT+tW9J
         va6g9VA59xT0T/8gOPPSwbtzmZa6BwVBNWghR6GTAP06OKlm7rWMs5gWC7VFCIRvK1y4
         KF6b+dqkKYDe5ZISYSuJlbH0FRL8Yya3+s5YUnT3NOYAy0Iw66Zv8FKUqE0MautBeqLG
         CSLg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=WB9Rm/l+/bhTPfa+Zw6hszWR35pGFXfkFBv9V096RoE=;
        b=ZsWYeguvCm8qsog1/fwg1LhyGgaRQpHB3qwjiy0mpSvSYQ88zjDP9En8wihamz9zND
         hYCV1PYMX0r158jtfe6ohpQAmrO0Fx9Y3bN3t8/jUiQ4KqjWY9WmqDyziB4ovUCwu6ed
         oEFdMlNmMFtI6cN36qzypoMZtm+y/wXPwaVv/+SJBMSdikulBEW3NHYXB77t+xst5Zjj
         Iz5NIDIH6xJk2/zCsf2czpTYuRSPp13uqLM5LS0sOTPPt/C45gJtbF/KQSoq4CBlOZaN
         P7rldb5+Sxo5XvL5w22dss60omlpORxrjP5rxjSsWKtUAWEYjFLTWtt/ueLB9okNtgZ/
         /j8Q==
X-Gm-Message-State: AOAM532Zp9XnS6p4tmJcSMvronUxkUF4ZGQNb3kIlJkSiK9LenHJbA+n
	Lw4uZaXqBgR0KOXgMQo9HXuCSPrKYPvi3w==
X-Google-Smtp-Source: ABdhPJy85mdeKgWspK6QiwaeJBOPYXWPXoVjbhgYkGoIavW0WOzteHHgz0ANqcpOF0FCpwwuad/YzQ==
X-Received: by 2002:a05:6512:6c6:: with SMTP id u6mr10423316lff.174.1607451722370;
        Tue, 08 Dec 2020 10:22:02 -0800 (PST)
Subject: Re: [PATCH V3 11/23] xen/ioreq: Move x86's io_completion/io_req
 fields to struct vcpu
To: paul@xen.org, 'Jan Beulich' <jbeulich@suse.com>
Cc: 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 'Wei Liu' <wl@xen.org>, 'George Dunlap' <george.dunlap@citrix.com>,
 'Ian Jackson' <iwj@xenproject.org>, 'Julien Grall' <julien@xen.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Jun Nakajima' <jun.nakajima@intel.com>, 'Kevin Tian'
 <kevin.tian@intel.com>, 'Julien Grall' <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-12-git-send-email-olekstysh@gmail.com>
 <742899b6-964b-be75-affc-31342c07133a@suse.com>
 <d7d867d3-b508-0c6c-191f-264e1e08bf39@gmail.com>
 <0d3c01d6cd37$0c013770$2403a650$@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <c3e31e73-585b-4516-27ce-019db254b893@gmail.com>
Date: Tue, 8 Dec 2020 20:21:55 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <0d3c01d6cd37$0c013770$2403a650$@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 08.12.20 09:52, Paul Durrant wrote:

Hi Paul, Jan

>>> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>>>> --- a/xen/arch/x86/hvm/emulate.c
>>>> +++ b/xen/arch/x86/hvm/emulate.c
>>>> @@ -142,8 +142,8 @@ void hvmemul_cancel(struct vcpu *v)
>>>>    {
>>>>        struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
>>>>
>>>> -    vio->io_req.state = STATE_IOREQ_NONE;
>>>> -    vio->io_completion = HVMIO_no_completion;
>>>> +    v->io.req.state = STATE_IOREQ_NONE;
>>>> +    v->io.completion = IO_no_completion;
>>>>        vio->mmio_cache_count = 0;
>>>>        vio->mmio_insn_bytes = 0;
>>>>        vio->mmio_access = (struct npfec){};
>>>> @@ -159,7 +159,7 @@ static int hvmemul_do_io(
>>>>    {
>>>>        struct vcpu *curr = current;
>>>>        struct domain *currd = curr->domain;
>>>> -    struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io;
>>>> +    struct vcpu_io *vio = &curr->io;
>>> Taking just these two hunks: "vio" would now stand for two entirely
>>> different things. I realize the name is applicable to both, but I
>>> wonder if such naming isn't going to risk confusion. Despite being
>>> relatively familiar with the involved code, I've been repeatedly
>>> unsure what exactly "vio" covers, and needed to go back to the
>>    Good comment... Agree that with the naming scheme in the current patch the
>> code became a little bit confusing to read.
>>
>>
>>> header. So together with the name possible adjustment mentioned
>>> further down, maybe "vcpu_io" also wants it name changed, such that
>>> the variable then also could sensibly be named (slightly)
>>> differently? struct vcpu_io_state maybe? Or alternatively rename
>>> variables of type struct hvm_vcpu_io * to hvio or hio? Otoh the
>>> savings aren't very big for just ->io, so maybe better to stick to
>>> the prior name with the prior type, and not introduce local
>>> variables at all for the new field, like you already have it in the
>>> former case?
>> I would much prefer the last suggestion which is "not introduce local
>> variables at all for the new field" (I admit I was thinking almost the
>> same, but haven't chosen this direction).
>> But I am OK with any suggestions here. Paul what do you think?
>>
> I personally don't think there is that much risk of confusion. If there is a desire to disambiguate though, I would go the route of naming hvm_vcpu_io locals 'hvio'.
Well, I assume I should rename all hvm_vcpu_io locals in the code to
"hvio" (even where this patch hasn't touched those places so far, since no
new vcpu_io fields were involved).
I am OK with that, although I expect a lot of places will need touching here...


>
>>>> --- a/xen/include/xen/sched.h
>>>> +++ b/xen/include/xen/sched.h
>>>> @@ -145,6 +145,21 @@ void evtchn_destroy_final(struct domain *d); /* from complete_domain_destroy
>> */
>>>>    struct waitqueue_vcpu;
>>>>
>>>> +enum io_completion {
>>>> +    IO_no_completion,
>>>> +    IO_mmio_completion,
>>>> +    IO_pio_completion,
>>>> +#ifdef CONFIG_X86
>>>> +    IO_realmode_completion,
>>>> +#endif
>>>> +};
>>> I'm not entirely happy with io_ / IO_ here - they seem a little
>>> too generic. How about ioreq_ / IOREQ_ respectively?
>> I am OK, would like to hear Paul's opinion on both questions.
>>
> No, I think the 'IO_' prefix is better. They relate to a field in the vcpu_io struct. An alternative might be 'VIO_'...
>
>>>> +struct vcpu_io {
>>>> +    /* I/O request in flight to device model. */
>>>> +    enum io_completion   completion;
> ... in which case, you could also name the enum 'vio_completion'.
>
>    Paul

ok, will follow new renaming scheme IO_ -> VIO_ (io_ -> vio_).
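
For concreteness, the common declarations under the agreed scheme might end up looking like this (a sketch only; the exact names are whatever the next revision of the patch settles on):

```c
/* Sketch of the common declarations in xen/include/xen/sched.h after
 * the IO_ -> VIO_ (io_ -> vio_) rename agreed in this thread. */
enum vio_completion {
    VIO_no_completion,
    VIO_mmio_completion,
    VIO_pio_completion,
#ifdef CONFIG_X86
    VIO_realmode_completion,
#endif
};

struct vcpu_io {
    /* I/O request in flight to device model. */
    enum vio_completion completion;
};
```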


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 18:43:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 18:43:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47660.84344 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmhxO-0004Jr-Qv; Tue, 08 Dec 2020 18:43:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47660.84344; Tue, 08 Dec 2020 18:43:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmhxO-0004Jk-NS; Tue, 08 Dec 2020 18:43:26 +0000
Received: by outflank-mailman (input) for mailman id 47660;
 Tue, 08 Dec 2020 18:43:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gyNq=FM=alien8.de=bp@srs-us1.protection.inumbo.net>)
 id 1kmhxM-0004Jf-BG
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 18:43:25 +0000
Received: from mail.skyhub.de (unknown [5.9.137.197])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e601ae3e-b817-461e-a9ed-0c1178269f73;
 Tue, 08 Dec 2020 18:43:21 +0000 (UTC)
Received: from zn.tnic (p200300ec2f0f08004da90e847a90bd48.dip0.t-ipconnect.de
 [IPv6:2003:ec:2f0f:800:4da9:e84:7a90:bd48])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.skyhub.de (SuperMail on ZX Spectrum 128k) with ESMTPSA id 515021EC04BF;
 Tue,  8 Dec 2020 19:43:20 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e601ae3e-b817-461e-a9ed-0c1178269f73
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=alien8.de; s=dkim;
	t=1607453000;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:in-reply-to:in-reply-to:  references:references;
	bh=/ZQYdyMntojjSpS0VNOBBCh2xJC9JCydUpM1Shp6nLw=;
	b=VKBSSiMMABTlg4e/gOqwPW/l0D5HODUGSc5er/ZHtC5DNYPpTLEXfdpuF8K9sFZgw4VBpc
	Xy4p+CvjCfeCNoPlVAz08QqPMi6ReE+AkmIi0VhKyamT4fs+09CAjHc/dRBZXGpRxIty6N
	isu5dUhGHYWMJdK2EHdjMgTj+Bz30CA=
Date: Tue, 8 Dec 2020 19:43:15 +0100
From: Borislav Petkov <bp@alien8.de>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, peterz@infradead.org, luto@kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>
Subject: Re: [PATCH v2 07/12] x86: add new features for paravirt patching
Message-ID: <20201208184315.GE27920@zn.tnic>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-8-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201120114630.13552-8-jgross@suse.com>

On Fri, Nov 20, 2020 at 12:46:25PM +0100, Juergen Gross wrote:
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index dad350d42ecf..ffa23c655412 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -237,6 +237,9 @@
>  #define X86_FEATURE_VMCALL		( 8*32+18) /* "" Hypervisor supports the VMCALL instruction */
>  #define X86_FEATURE_VMW_VMMCALL		( 8*32+19) /* "" VMware prefers VMMCALL hypercall instruction */
>  #define X86_FEATURE_SEV_ES		( 8*32+20) /* AMD Secure Encrypted Virtualization - Encrypted State */
> +#define X86_FEATURE_NOT_XENPV		( 8*32+21) /* "" Inverse of X86_FEATURE_XENPV */
> +#define X86_FEATURE_NO_PVUNLOCK		( 8*32+22) /* "" No PV unlock function */
> +#define X86_FEATURE_NO_VCPUPREEMPT	( 8*32+23) /* "" No PV vcpu_is_preempted function */

Ew, negative features. ;-\

/me goes forward and looks at usage sites:

+	ALTERNATIVE_2 "jmp *paravirt_iret(%rip);",			\
+		      "jmp native_iret;", X86_FEATURE_NOT_XENPV,	\
+		      "jmp xen_iret;", X86_FEATURE_XENPV

Can we make that:

	ALTERNATIVE_TERNARY "jmp *paravirt_iret(%rip);",
		      "jmp xen_iret;", X86_FEATURE_XENPV,
		      "jmp native_iret;", X86_FEATURE_XENPV,

where the last two lines are supposed to mean

			    X86_FEATURE_XENPV ? "jmp xen_iret;" : "jmp native_iret;"

Now, in order to convey that logic to apply_alternatives(), you can do:

struct alt_instr {
        s32 instr_offset;       /* original instruction */
        s32 repl_offset;        /* offset to replacement instruction */
        u16 cpuid;              /* cpuid bit set for replacement */
        u8  instrlen;           /* length of original instruction */
        u8  replacementlen;     /* length of new instruction */
        u8  padlen;             /* length of build-time padding */
	u8  flags;		/* patching flags */ 			<--- THIS
} __packed;

and yes, we have had the flags thing in a lot of WIP diffs over the
years but we've never actually ended up needing it.

Anyway, then, apply_alternatives() will do:

	if (flags & ALT_NOT_FEATURE)

or something like that - I'm bad at naming stuff - then it should patch
only when the feature is NOT set and vice versa.

There in that

	if (!boot_cpu_has(a->cpuid)) {

branch.

Hmm?

>  /* Intel-defined CPU features, CPUID level 0x00000007:0 (EBX), word 9 */
>  #define X86_FEATURE_FSGSBASE		( 9*32+ 0) /* RDFSBASE, WRFSBASE, RDGSBASE, WRGSBASE instructions*/
> diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
> index 2400ad62f330..f8f9700719cf 100644
> --- a/arch/x86/kernel/alternative.c
> +++ b/arch/x86/kernel/alternative.c
> @@ -593,6 +593,18 @@ int alternatives_text_reserved(void *start, void *end)
>  #endif /* CONFIG_SMP */
>  
>  #ifdef CONFIG_PARAVIRT
> +static void __init paravirt_set_cap(void)
> +{
> +	if (!boot_cpu_has(X86_FEATURE_XENPV))
> +		setup_force_cpu_cap(X86_FEATURE_NOT_XENPV);
> +
> +	if (pv_is_native_spin_unlock())
> +		setup_force_cpu_cap(X86_FEATURE_NO_PVUNLOCK);
> +
> +	if (pv_is_native_vcpu_is_preempted())
> +		setup_force_cpu_cap(X86_FEATURE_NO_VCPUPREEMPT);
> +}
> +
>  void __init_or_module apply_paravirt(struct paravirt_patch_site *start,
>  				     struct paravirt_patch_site *end)
>  {
> @@ -616,6 +628,8 @@ void __init_or_module apply_paravirt(struct paravirt_patch_site *start,
>  }
>  extern struct paravirt_patch_site __start_parainstructions[],
>  	__stop_parainstructions[];
> +#else
> +static void __init paravirt_set_cap(void) { }
>  #endif	/* CONFIG_PARAVIRT */
>  
>  /*
> @@ -723,6 +737,18 @@ void __init alternative_instructions(void)
>  	 * patching.
>  	 */
>  
> +	paravirt_set_cap();

Can that be called from somewhere in the Xen init path and not from
here? Somewhere before check_bugs() gets called.

> +	/*
> +	 * First patch paravirt functions, such that we overwrite the indirect
> +	 * call with the direct call.
> +	 */
> +	apply_paravirt(__parainstructions, __parainstructions_end);
> +
> +	/*
> +	 * Then patch alternatives, such that those paravirt calls that are in
> +	 * alternatives can be overwritten by their immediate fragments.
> +	 */
>  	apply_alternatives(__alt_instructions, __alt_instructions_end);

Can you give an example here pls of why the paravirt patching needs to
run first?

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 18:47:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 18:47:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47667.84356 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmi1S-0004Ts-C8; Tue, 08 Dec 2020 18:47:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47667.84356; Tue, 08 Dec 2020 18:47:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmi1S-0004Tl-8L; Tue, 08 Dec 2020 18:47:38 +0000
Received: by outflank-mailman (input) for mailman id 47667;
 Tue, 08 Dec 2020 18:47:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kmi1R-0004Tg-G1
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 18:47:37 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmi1Q-0006Tv-7c; Tue, 08 Dec 2020 18:47:36 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmi1Q-0007gh-0V; Tue, 08 Dec 2020 18:47:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=ENT6h8UNCHRxBm/mDi0sAcFmAXX8XKWlD5w1KWH69tE=; b=GKCWh5zCmqOJn6hamSdwxs5nSK
	RuwBJnBodx314KjyZVwgRuWJFrKJxwdeLNGGAhKpKUOaAW3iSYB46D2ZdL0es1R54rSYebBzSx+d6
	glgtp+N4sqjzQLN0VsKsBlMasbx1A1CbDRlje4hCkMMXIOM/NK4r6vrsHHwH7lRtm7Go=;
Subject: Re: [PATCH] xen/arm: Add workaround for Cortex-A53 erratum #845719
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Michal Orzel <Michal.Orzel@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Wei Chen <Wei.Chen@arm.com>
References: <20201208072327.11890-1-michal.orzel@arm.com>
 <d286241c-fd3b-8506-37e5-0ddcdaae97be@xen.org>
 <5D1B5771-A6B3-4F5E-81A1-864DBC8787B4@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <bf45e0f4-2de7-d1db-4732-342937bf61e7@xen.org>
Date: Tue, 8 Dec 2020 18:47:34 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <5D1B5771-A6B3-4F5E-81A1-864DBC8787B4@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 08/12/2020 14:38, Bertrand Marquis wrote:
> Hi Julien,
> 
>> On 8 Dec 2020, at 09:47, Julien Grall <julien@xen.org> wrote:
>>
>> Hi,
>>
>> On 08/12/2020 07:23, Michal Orzel wrote:
>>> When executing in aarch32 state at EL0, a load at EL0 from a
>>> virtual address that matches the bottom 32 bits of the virtual address
>>> used by a recent load at (aarch64) EL1 might return incorrect data.
>>> The workaround is to insert a write of the contextidr_el1 register
>>> on exception return to an aarch32 guest.
>>
>> I am a bit confused with this comment. In the previous paragraph, you are suggesting that the problem is an interaction between EL1 AArch64 and EL0 AArch32. But here you seem to imply the issue only happens when running an AArch32 guest.
>>
>> Can you clarify it?
> 
> This can happen when switching from an aarch64 guest to an aarch32 guest, so not only when there is an interaction.

Right, but the context switch will write to CONTEXTIDR_EL1. So this case 
should already be handled.

Xen will never switch from AArch64 EL1 to AArch32 EL0 without a context 
switch (the inverse can happen if we inject an exception to the guest).

Reading the Cortex-A53 SDEN, it sounds like this is an OS rather than a 
hypervisor problem. In fact, Linux only seems to work around it when 
context switching on the OS side rather than in the hypervisor.

Therefore, I am not sure I understand why we need to work around it in Xen.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 19:06:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 19:06:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47675.84368 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmiJA-0006Ow-UT; Tue, 08 Dec 2020 19:05:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47675.84368; Tue, 08 Dec 2020 19:05:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmiJA-0006Op-R5; Tue, 08 Dec 2020 19:05:56 +0000
Received: by outflank-mailman (input) for mailman id 47675;
 Tue, 08 Dec 2020 19:05:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kmiJ9-0006Ok-6y
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 19:05:55 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmiJ5-0006qq-7n; Tue, 08 Dec 2020 19:05:51 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmiJ4-0000M6-SJ; Tue, 08 Dec 2020 19:05:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=SPGXI9Cv/DwjHDru6Ui2XCxKm7d0JIz04c5newoCBdg=; b=Y9fZJs8q0IEOdzo8FTUFeFVTHW
	7cSfw82iJUA1zETFydhmwhq+ZpCwlt2iy97auQNOhBmIFYxQJ1fM3Ri2B1Wcy2AiFUAQOXSTlTdsR
	bkyXoa//eSwOCWTZTBOO6cyXzMaPfOJteLco1XkS5YQwU/smSDI07Xx8B9RIKg/KfuWc=;
Subject: Re: [PATCH v2 8/8] xen/arm: Add support for SMMUv3 driver
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <de2101687020d18172a2b153f8977a5116d0cd66.1606406359.git.rahul.singh@arm.com>
 <a67bb114-a4a9-651a-338b-123b350ac4b3@xen.org>
 <9C890E87-D438-4232-8647-8EC64FF32C42@arm.com>
 <bb6a710e-4a7a-5db2-fece-b5845e06d092@xen.org>
 <9F9A955B-815C-4771-9EC0-073E9CF3E995@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <156ab0f5-e46d-6b96-7ff1-28ad3a748950@xen.org>
Date: Tue, 8 Dec 2020 19:05:48 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <9F9A955B-815C-4771-9EC0-073E9CF3E995@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 07/12/2020 18:42, Rahul Singh wrote:
> Hello Julien,

Hi,

> 
>> On 7 Dec 2020, at 5:39 pm, Julien Grall <julien@xen.org> wrote:
>>
>>
>>
>> On 07/12/2020 12:12, Rahul Singh wrote:
>>>>> +typedef paddr_t dma_addr_t;
>>>>> +typedef unsigned int gfp_t;
>>>>> +
>>>>> +#define platform_device device
>>>>> +
>>>>> +#define GFP_KERNEL 0
>>>>> +
>>>>> +/* Alias to Xen device tree helpers */
>>>>> +#define device_node dt_device_node
>>>>> +#define of_phandle_args dt_phandle_args
>>>>> +#define of_device_id dt_device_match
>>>>> +#define of_match_node dt_match_node
>>>>> +#define of_property_read_u32(np, pname, out) (!dt_property_read_u32(np, pname, out))
>>>>> +#define of_property_read_bool dt_property_read_bool
>>>>> +#define of_parse_phandle_with_args dt_parse_phandle_with_args
>>>>> +
>>>>> +/* Alias to Xen lock functions */
>>>>> +#define mutex spinlock
>>>>> +#define mutex_init spin_lock_init
>>>>> +#define mutex_lock spin_lock
>>>>> +#define mutex_unlock spin_unlock
>>>>
>>>> Hmm... mutex are not spinlock. Can you explain why this is fine to switch to spinlock?
>>> Yes, mutexes are not spinlocks. As mutexes are not implemented in Xen, I thought of using a spinlock in place of the mutex, as that is the only locking mechanism available in Xen.
>>> Let me know if there is another blocking lock available in Xen; I will check if we can use that.
>>
>> There are no blocking locks available in Xen so far. However, if Linux was using a mutex instead of a spinlock, it likely means the operations in the critical section can be long running.
> 
> Yes, you are right: Linux is using a mutex when attaching a device to the SMMU, as this operation might take a long time.
>>
>> How did you come to the conclusion that using a spinlock in the SMMU driver would be fine?
> 
> The mutex is replaced by a spinlock in the SMMU driver when there is a request to assign a device to the guest. As we are in user context at that time, it is OK to use a spinlock.

I am not sure I understand what you mean by "user context" here. Can 
you clarify?

> As per my understanding, there is one scenario in which the CPU will spin: when there is a simultaneous request from the user to assign another device to the SMMU, and I think that is very rare.

What "user" are you referring to?

> 
> Please suggest how we can proceed on this.

I am guessing that what you are saying is that requests to 
assign/de-assign a device will be issued by the toolstack and therefore 
should be trusted.

My concern here is not about someone waiting on the lock to be released. 
It is more that using a mutex is a hint that the protected operation can 
be long. Depending on the length, this may result in unwanted side 
effects (e.g. other vCPUs not being scheduled, an RCU stall in dom0, the 
watchdog being hit...).

I recall a discussion from a couple of years ago mentioning that STE 
programming operations can take quite a long time. So I would like to 
understand how long the operation is meant to last.

For a tech preview, it is probably OK to replace the mutex with a 
spinlock. But I would not want this to go past the tech preview stage 
without a proper analysis.

Stefano, what do you think?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 19:30:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 19:30:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47686.84415 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmihC-0000iu-6Q; Tue, 08 Dec 2020 19:30:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47686.84415; Tue, 08 Dec 2020 19:30:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmihC-0000ii-2H; Tue, 08 Dec 2020 19:30:46 +0000
Received: by outflank-mailman (input) for mailman id 47686;
 Tue, 08 Dec 2020 19:30:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kmihA-0000fk-F1
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 19:30:44 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmih9-0007LM-AX; Tue, 08 Dec 2020 19:30:43 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=desktop.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmih9-0001p0-2U; Tue, 08 Dec 2020 19:30:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=ksPadBrGqHahICVt3GhGrqsoWKZBYE1bmDQJ3Zcn3LQ=; b=5Z/dwt9+QRVw8sUtsbwUS3/Y/7
	VL75Y2FmpZXqSRzR2TYjR+IufvJqWkCxE8uVgGcDBlzsPtZfbQ2FTYVOxVIToPsAbuBxAc6pYbQtF
	i4FX0TzBbZvEtwyFgWMHh9DXhKjaiM6s+px+WOmY8azThfL8/rM4YuMfVsdBckM2vqUQ=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Wei Liu <wl@xen.org>,
	Ian Jackson <iwj@xenproject.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v6 03/25] libxl: make libxl__device_list() work correctly for LIBXL__DEVICE_KIND_PCI...
Date: Tue,  8 Dec 2020 19:30:11 +0000
Message-Id: <20201208193033.11306-4-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201208193033.11306-1-paul@xen.org>
References: <20201208193033.11306-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... devices.

Currently there is an assumption built into libxl__device_list() that device
backends are fully enumerated under the '/libxl' path in xenstore. This is
not the case for PCI backend devices, which are only properly enumerated
under '/local/domain/0/backend'.

This patch adds a new get_path() method to libxl__device_type to allow a
backend implementation (such as PCI) to specify the xenstore path where
devices are enumerated and modifies libxl__device_list() to use this method
if it is available. Also, if the get_num() method is defined then the
from_xenstore() method expects to be passed the backend path without the device
number concatenated, so this issue is also rectified.

Having made libxl__device_list() work correctly, this patch removes the
open-coded libxl_pci_device_pci_list() in favour of an evaluation of the
LIBXL_DEFINE_DEVICE_LIST() macro. This has the side-effect of also defining
libxl_pci_device_pci_list_free() which will be used in subsequent patches.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Acked-by: Wei Liu <wl@xen.org>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>

v3:
 - New in v3 (replacing "libxl: use LIBXL_DEFINE_DEVICE_LIST for pci devices")
---
 tools/include/libxl.h             |  7 ++++
 tools/libs/light/libxl_device.c   | 66 ++++++++++++++++---------------
 tools/libs/light/libxl_internal.h |  2 +
 tools/libs/light/libxl_pci.c      | 29 ++++----------
 4 files changed, 52 insertions(+), 52 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 733263522bd9..bb7fc893fc13 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -451,6 +451,12 @@
  */
 #define LIBXL_HAVE_VIRIDIAN_EX_PROCESSOR_MASKS 1
 
+/*
+ * LIBXL_HAVE_DEVICE_PCI_LIST_FREE indicates that the
+ * libxl_device_pci_list_free() function is defined.
+ */
+#define LIBXL_HAVE_DEVICE_PCI_LIST_FREE 1
+
 /*
  * libxl ABI compatibility
  *
@@ -2321,6 +2327,7 @@ int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
 
 libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid,
                                         int *num);
+void libxl_device_pci_list_free(libxl_device_pci* list, int num);
 
 /*
  * Turns the current process into a backend device service daemon
diff --git a/tools/libs/light/libxl_device.c b/tools/libs/light/libxl_device.c
index e081faf9a94e..ac173a043d31 100644
--- a/tools/libs/light/libxl_device.c
+++ b/tools/libs/light/libxl_device.c
@@ -2011,7 +2011,7 @@ void *libxl__device_list(libxl__gc *gc, const libxl__device_type *dt,
     void *r = NULL;
     void *list = NULL;
     void *item = NULL;
-    char *libxl_path;
+    char *path;
     char **dir = NULL;
     unsigned int ndirs = 0;
     unsigned int ndevs = 0;
@@ -2019,42 +2019,46 @@ void *libxl__device_list(libxl__gc *gc, const libxl__device_type *dt,
 
     *num = 0;
 
-    libxl_path = GCSPRINTF("%s/device/%s",
-                           libxl__xs_libxl_path(gc, domid),
-                           libxl__device_kind_to_string(dt->type));
-
-    dir = libxl__xs_directory(gc, XBT_NULL, libxl_path, &ndirs);
+    if (dt->get_path) {
+        rc = dt->get_path(gc, domid, &path);
+        if (rc) goto out;
+    } else {
+        path = GCSPRINTF("%s/device/%s",
+                         libxl__xs_libxl_path(gc, domid),
+                         libxl__device_kind_to_string(dt->type));
+    }
 
-    if (dir && ndirs) {
-        if (dt->get_num) {
-            if (ndirs != 1) {
-                LOGD(ERROR, domid, "multiple entries in %s\n", libxl_path);
-                rc = ERROR_FAIL;
-                goto out;
-            }
-            rc = dt->get_num(gc, GCSPRINTF("%s/%s", libxl_path, *dir), &ndevs);
-            if (rc) goto out;
-        } else {
+    if (dt->get_num) {
+        rc = dt->get_num(gc, path, &ndevs);
+        if (rc) goto out;
+    } else {
+        dir = libxl__xs_directory(gc, XBT_NULL, path, &ndirs);
+        if (dir && ndirs)
             ndevs = ndirs;
-        }
-        list = libxl__malloc(NOGC, dt->dev_elem_size * ndevs);
-        item = list;
+    }
 
-        while (*num < ndevs) {
-            dt->init(item);
+    if (!ndevs)
+        return NULL;
 
-            if (dt->from_xenstore) {
-                int nr = dt->get_num ? *num : atoi(*dir);
-                char *device_libxl_path = GCSPRINTF("%s/%s", libxl_path, *dir);
-                rc = dt->from_xenstore(gc, device_libxl_path, nr, item);
-                if (rc) goto out;
-            }
+    list = libxl__malloc(NOGC, dt->dev_elem_size * ndevs);
+    item = list;
 
-            item = (uint8_t *)item + dt->dev_elem_size;
-            ++(*num);
-            if (!dt->get_num)
-                ++dir;
+    while (*num < ndevs) {
+        dt->init(item);
+
+        if (dt->from_xenstore) {
+            int nr = dt->get_num ? *num : atoi(*dir);
+            char *device_path = dt->get_num ? path :
+                GCSPRINTF("%s/%d", path, nr);
+
+            rc = dt->from_xenstore(gc, device_path, nr, item);
+            if (rc) goto out;
         }
+
+        item = (uint8_t *)item + dt->dev_elem_size;
+        ++(*num);
+        if (!dt->get_num)
+            ++dir;
     }
 
     r = list;
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index c2c5a9b92673..d0c23def3c3e 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -3917,6 +3917,7 @@ typedef int (*device_dm_needed_fn_t)(void *, unsigned);
 typedef void (*device_update_config_fn_t)(libxl__gc *, void *, void *);
 typedef int (*device_update_devid_fn_t)(libxl__gc *, uint32_t, void *);
 typedef int (*device_get_num_fn_t)(libxl__gc *, const char *, unsigned int *);
+typedef int (*device_get_path_fn_t)(libxl__gc *, uint32_t, char **);
 typedef int (*device_from_xenstore_fn_t)(libxl__gc *, const char *,
                                          libxl_devid, void *);
 typedef int (*device_set_xenstore_config_fn_t)(libxl__gc *, uint32_t, void *,
@@ -3941,6 +3942,7 @@ struct libxl__device_type {
     device_update_config_fn_t       update_config;
     device_update_devid_fn_t        update_devid;
     device_get_num_fn_t             get_num;
+    device_get_path_fn_t            get_path;
     device_from_xenstore_fn_t       from_xenstore;
     device_set_xenstore_config_fn_t set_xenstore_config;
 };
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 3340076d2cb3..d536702ac420 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -2393,29 +2393,13 @@ static int libxl__device_pci_get_num(libxl__gc *gc, const char *be_path,
     return rc;
 }
 
-libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid, int *num)
+static int libxl__device_pci_get_path(libxl__gc *gc, uint32_t domid,
+                                      char **path)
 {
-    GC_INIT(ctx);
-    char *be_path;
-    unsigned int n, i;
-    libxl_device_pci *pcis = NULL;
-
-    *num = 0;
-
-    be_path = libxl__domain_device_backend_path(gc, 0, domid, 0,
-                                                LIBXL__DEVICE_KIND_PCI);
-    if (libxl__device_pci_get_num(gc, be_path, &n))
-        goto out;
+    *path = libxl__domain_device_backend_path(gc, 0, domid, 0,
+                                              LIBXL__DEVICE_KIND_PCI);
 
-    pcis = calloc(n, sizeof(libxl_device_pci));
-
-    for (i = 0; i < n; i++)
-        libxl__device_pci_from_xs_be(gc, be_path, i, pcis + i);
-
-    *num = n;
-out:
-    GC_FREE;
-    return pcis;
+    return 0;
 }
 
 void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
@@ -2492,10 +2476,13 @@ static int libxl_device_pci_compare(const libxl_device_pci *d1,
     return COMPARE_PCI(d1, d2);
 }
 
+LIBXL_DEFINE_DEVICE_LIST(pci)
+
 #define libxl__device_pci_update_devid NULL
 
 DEFINE_DEVICE_TYPE_STRUCT(pci, PCI, pcidevs,
     .get_num = libxl__device_pci_get_num,
+    .get_path = libxl__device_pci_get_path,
     .from_xenstore = libxl__device_pci_from_xs_be,
 );
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 19:30:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 19:30:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47688.84440 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmihF-0000tN-AO; Tue, 08 Dec 2020 19:30:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47688.84440; Tue, 08 Dec 2020 19:30:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmihF-0000tE-3d; Tue, 08 Dec 2020 19:30:49 +0000
Received: by outflank-mailman (input) for mailman id 47688;
 Tue, 08 Dec 2020 19:30:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kmihC-0000j9-9G
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 19:30:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihB-0007Lc-9h; Tue, 08 Dec 2020 19:30:45 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=desktop.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihB-0001p0-1c; Tue, 08 Dec 2020 19:30:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=GW8F74uuKnBsl/QhgvU4YWCjBsqUt8SG+nfW/x6vTeU=; b=qzD4mITNZ6VcGw1+bulyKQf8t2
	UbzOveql4qDDGq0xkUmFTBMO5yVsDrzhD1Dbg3RNl8xUbIlT1XjvWCbb9Gj9ry7rtHCIYYKPa/VS5
	+AzrFzoGUy7ThoppnCBlecRBqJUnx38nx8qv1SSLQ+0RyvrJnTZ139Rm74CadnkOTh1M=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Wei Liu <wl@xen.org>,
	Ian Jackson <iwj@xenproject.org>
Subject: [PATCH v6 05/25] libxl: add/recover 'rdm_policy' to/from PCI backend in xenstore
Date: Tue,  8 Dec 2020 19:30:13 +0000
Message-Id: <20201208193033.11306-6-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201208193033.11306-1-paul@xen.org>
References: <20201208193033.11306-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Other parameters, such as 'msitranslate' and 'permissive', are dealt with,
but 'rdm_policy' appears to have been completely missed.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Acked-by: Wei Liu <wl@xen.org>
---
Cc: Ian Jackson <iwj@xenproject.org>
---
 tools/libs/light/libxl_pci.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 3f85f0d7620e..2a5a4db976e1 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -61,9 +61,9 @@ static void libxl_create_pci_backend_device(libxl__gc *gc,
         flexarray_append_pair(back, GCSPRINTF("vdevfn-%d", num), GCSPRINTF("%x", pci->vdevfn));
     flexarray_append(back, GCSPRINTF("opts-%d", num));
     flexarray_append(back,
-              GCSPRINTF("msitranslate=%d,power_mgmt=%d,permissive=%d",
-                             pci->msitranslate, pci->power_mgmt,
-                             pci->permissive));
+              GCSPRINTF("msitranslate=%d,power_mgmt=%d,permissive=%d,rdm_policy=%s",
+                        pci->msitranslate, pci->power_mgmt,
+                        pci->permissive, libxl_rdm_reserve_policy_to_string(pci->rdm_policy)));
     flexarray_append_pair(back, GCSPRINTF("state-%d", num), GCSPRINTF("%d", XenbusStateInitialising));
 }
 
@@ -2374,6 +2374,9 @@ static int libxl__device_pci_from_xs_be(libxl__gc *gc,
             } else if (!strcmp(p, "permissive")) {
                 p = strtok_r(NULL, ",=", &saveptr);
                 pci->permissive = atoi(p);
+            } else if (!strcmp(p, "rdm_policy")) {
+                p = strtok_r(NULL, ",=", &saveptr);
+                libxl_rdm_reserve_policy_from_string(p, &pci->rdm_policy);
             }
         } while ((p = strtok_r(NULL, ",=", &saveptr)) != NULL);
     }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 19:30:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 19:30:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47689.84448 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmihG-0000uU-08; Tue, 08 Dec 2020 19:30:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47689.84448; Tue, 08 Dec 2020 19:30:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmihF-0000tz-Gd; Tue, 08 Dec 2020 19:30:49 +0000
Received: by outflank-mailman (input) for mailman id 47689;
 Tue, 08 Dec 2020 19:30:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kmihC-0000kU-Sx
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 19:30:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihC-0007Lh-74; Tue, 08 Dec 2020 19:30:46 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=desktop.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihB-0001p0-Uy; Tue, 08 Dec 2020 19:30:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=YuOQecAixX1DjK13kmE19J965kCqYu/hcSuqKjMtAqY=; b=ZjepG28HskRGPgNyU9ReEbUrfH
	bsBGI58ATPqvc8xl3v4h4TBileodAWIsoOAcZ+E/ok1yAu2a7EPJSLlPnmnngAxPnInSpgYPLdORs
	51aY7EIk6ZzBp19zzylYrYCo5Jk10/omKgEAun7EOs9RR8rOHKOJ64Mb36/znbhyOV7c=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Wei Liu <wl@xen.org>,
	Ian Jackson <iwj@xenproject.org>
Subject: [PATCH v6 06/25] libxl: s/detatched/detached in libxl_pci.c
Date: Tue,  8 Dec 2020 19:30:14 +0000
Message-Id: <20201208193033.11306-7-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201208193033.11306-1-paul@xen.org>
References: <20201208193033.11306-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Simple spelling correction. Purely cosmetic fix.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Acked-by: Wei Liu <wl@xen.org>
---
Cc: Ian Jackson <iwj@xenproject.org>
---
 tools/libs/light/libxl_pci.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 2a5a4db976e1..b6d3bd29b718 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1864,7 +1864,7 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
     libxl__ev_qmp *qmp, const libxl__json_object *response, int rc);
 static void pci_remove_timeout(libxl__egc *egc,
     libxl__ev_time *ev, const struct timeval *requested_abs, int rc);
-static void pci_remove_detatched(libxl__egc *egc,
+static void pci_remove_detached(libxl__egc *egc,
     pci_remove_state *prs, int rc);
 static void pci_remove_stubdom_done(libxl__egc *egc,
     libxl__ao_device *aodev);
@@ -1978,7 +1978,7 @@ skip1:
 skip_irq:
     rc = 0;
 out_fail:
-    pci_remove_detatched(egc, prs, rc); /* must be last */
+    pci_remove_detached(egc, prs, rc); /* must be last */
 }
 
 static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
@@ -2002,7 +2002,7 @@ static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
     rc = qemu_pci_remove_xenstore(gc, domid, pci, prs->force);
 
 out:
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
 static void pci_remove_qmp_device_del(libxl__egc *egc,
@@ -2028,7 +2028,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     return;
 
 out:
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
 static void pci_remove_qmp_device_del_cb(libxl__egc *egc,
@@ -2051,7 +2051,7 @@ static void pci_remove_qmp_device_del_cb(libxl__egc *egc,
     return;
 
 out:
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
 static void pci_remove_qmp_retry_timer_cb(libxl__egc *egc, libxl__ev_time *ev,
@@ -2067,7 +2067,7 @@ static void pci_remove_qmp_retry_timer_cb(libxl__egc *egc, libxl__ev_time *ev,
     return;
 
 out:
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
 static void pci_remove_qmp_query_cb(libxl__egc *egc,
@@ -2127,7 +2127,7 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
     }
 
 out:
-    pci_remove_detatched(egc, prs, rc); /* must be last */
+    pci_remove_detached(egc, prs, rc); /* must be last */
 }
 
 static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
@@ -2146,12 +2146,12 @@ static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
     /* If we timed out, we might still want to keep destroying the device
      * (when force==true), so let the next function decide what to do on
      * error */
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
-static void pci_remove_detatched(libxl__egc *egc,
-                                 pci_remove_state *prs,
-                                 int rc)
+static void pci_remove_detached(libxl__egc *egc,
+                                pci_remove_state *prs,
+                                int rc)
 {
     STATE_AO_GC(prs->aodev->ao);
     int stubdomid = 0;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 19:30:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 19:30:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47691.84467 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmihH-0000yf-M3; Tue, 08 Dec 2020 19:30:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47691.84467; Tue, 08 Dec 2020 19:30:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmihH-0000yD-22; Tue, 08 Dec 2020 19:30:51 +0000
Received: by outflank-mailman (input) for mailman id 47691;
 Tue, 08 Dec 2020 19:30:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kmihE-0000sP-Lu
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 19:30:48 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihE-0007M1-0G; Tue, 08 Dec 2020 19:30:48 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=desktop.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihD-0001p0-Os; Tue, 08 Dec 2020 19:30:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=+dD0y2vMDsFbroYpFk5v6m+aYVM8RV4tZrBb3da8/jw=; b=TlA2ilgu/AUIE/KCSL4zXASUlb
	WxqaveG4IPdmWcdkPTk0st60CuH58cBDtw9D9m0JueDWcNFL7eXBlxm3+BPhjuXJfA1bE37bgsAzR
	3WMBgWS5wjZ4pjtLgMH7xq0CBZkHppZAUONPZc6rMO7vX4oqieQDp6dH0Tc0+UAPgXKY=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Wei Liu <wl@xen.org>,
	Ian Jackson <iwj@xenproject.org>
Subject: [PATCH v6 08/25] libxl: stop using aodev->device_config in libxl__device_pci_add()...
Date: Tue,  8 Dec 2020 19:30:16 +0000
Message-Id: <20201208193033.11306-9-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201208193033.11306-1-paul@xen.org>
References: <20201208193033.11306-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... to hold a pointer to the device.

There is already a 'pci' field in 'pci_add_state' so simply use that from
the start. This also allows the 'pci' (#3) argument to be dropped from
do_pci_add().

NOTE: This patch also changes the type of the 'pci_domid' field in
      'pci_add_state' from 'int' to 'libxl_domid' which is more appropriate
      given what the field is used for.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Acked-by: Wei Liu <wl@xen.org>
---
Cc: Ian Jackson <iwj@xenproject.org>
---
 tools/libs/light/libxl_pci.c | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index bcc4d2ef9686..be2145222d2b 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1074,7 +1074,7 @@ typedef struct pci_add_state {
     libxl__ev_qmp qmp;
     libxl__ev_time timeout;
     libxl_device_pci *pci;
-    int pci_domid;
+    libxl_domid pci_domid;
 } pci_add_state;
 
 static void pci_add_qemu_trad_watch_state_cb(libxl__egc *egc,
@@ -1091,7 +1091,6 @@ static void pci_add_dm_done(libxl__egc *,
 
 static void do_pci_add(libxl__egc *egc,
                        libxl_domid domid,
-                       libxl_device_pci *pci,
                        pci_add_state *pas)
 {
     STATE_AO_GC(pas->aodev->ao);
@@ -1101,7 +1100,6 @@ static void do_pci_add(libxl__egc *egc,
     /* init pci_add_state */
     libxl__xswait_init(&pas->xswait);
     libxl__ev_qmp_init(&pas->qmp);
-    pas->pci = pci;
     pas->pci_domid = domid;
     libxl__ev_time_init(&pas->timeout);
 
@@ -1564,13 +1562,10 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     int stubdomid = 0;
     pci_add_state *pas;
 
-    /* Store *pci to be used by callbacks */
-    aodev->device_config = pci;
-    aodev->device_type = &libxl__pci_devtype;
-
     GCNEW(pas);
     pas->aodev = aodev;
     pas->domid = domid;
+    pas->pci = pci;
     pas->starting = starting;
     pas->callback = device_pci_add_stubdom_done;
 
@@ -1624,9 +1619,10 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
         GCNEW(pci_s);
         libxl_device_pci_init(pci_s);
         libxl_device_pci_copy(CTX, pci_s, pci);
+        pas->pci = pci_s;
         pas->callback = device_pci_add_stubdom_wait;
 
-        do_pci_add(egc, stubdomid, pci_s, pas); /* must be last */
+        do_pci_add(egc, stubdomid, pas); /* must be last */
         return;
     }
 
@@ -1681,9 +1677,8 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
     int i;
 
     /* Convenience aliases */
-    libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = aodev->device_config;
+    libxl_device_pci *pci = pas->pci;
 
     if (rc) goto out;
 
@@ -1718,7 +1713,7 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
                 pci->vdevfn = orig_vdev;
             }
             pas->callback = device_pci_add_done;
-            do_pci_add(egc, domid, pci, pas); /* must be last */
+            do_pci_add(egc, domid, pas); /* must be last */
             return;
         }
     }
@@ -1734,7 +1729,7 @@ static void device_pci_add_done(libxl__egc *egc,
     EGC_GC;
     libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = aodev->device_config;
+    libxl_device_pci *pci = pas->pci;
 
     if (rc) {
         LOGD(ERROR, domid,
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 19:30:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 19:30:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47687.84422 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmihC-0000js-RC; Tue, 08 Dec 2020 19:30:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47687.84422; Tue, 08 Dec 2020 19:30:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmihC-0000jV-FT; Tue, 08 Dec 2020 19:30:46 +0000
Received: by outflank-mailman (input) for mailman id 47687;
 Tue, 08 Dec 2020 19:30:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kmihB-0000hS-7X
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 19:30:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihA-0007LS-CX; Tue, 08 Dec 2020 19:30:44 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=desktop.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihA-0001p0-2c; Tue, 08 Dec 2020 19:30:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=yn6sZWxU4vwNPVA8AP0gTT6bc+30jY0cMASuBOO950E=; b=jSHjQYLlHY8No67/nBbY1YL4Pg
	ygXhvNw18was+IXO269HFAbyYQt93yKwuqwXRsSaGudIOToIptj6/mO0gTqX8coTLsLvSJiehDvdB
	IoYNzrf5AuLYGiKZi/jdR6Ui5M6kYFeAYo+vZRG+3knTK3Z4ELeOG98QcaBvQW5UBoLA=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Wei Liu <wl@xen.org>,
	Ian Jackson <iwj@xenproject.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v6 04/25] libxl: Make sure devices added by pci-attach are reflected in the config
Date: Tue,  8 Dec 2020 19:30:12 +0000
Message-Id: <20201208193033.11306-5-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201208193033.11306-1-paul@xen.org>
References: <20201208193033.11306-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Currently libxl__device_pci_add_xenstore() is broken in that it does not
update the domain's configuration for the first device added (which causes
creation of the overall backend area in xenstore). This can be easily observed
by running 'xl list -l' after adding a single device: the device will be
missing.

This patch fixes the problem and adds a DEBUG log line to allow easy
verification that the domain configuration is being modified. Also, the use
of libxl__device_generic_add() is dropped as it leads to a confusing situation
where only partial backend information is written under the xenstore
'/libxl' path. For LIBXL__DEVICE_KIND_PCI devices the only definitive
information in xenstore is under '/local/domain/0/backend' (the '0' being
hard-coded).

NOTE: This patch also includes a whitespace fix in add_pcis_done().

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Acked-by: Wei Liu <wl@xen.org>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>

v3:
 - Revert some changes from v2 as there is confusion over the use of the
   libxl and backend xenstore paths which needs to be fixed

v2:
 - Avoid having two completely different ways of adding devices into xenstore
---
 tools/libs/light/libxl_pci.c | 87 +++++++++++++++++++-----------------
 1 file changed, 45 insertions(+), 42 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index d536702ac420..3f85f0d7620e 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -79,39 +79,55 @@ static void libxl__device_from_pci(libxl__gc *gc, uint32_t domid,
     device->kind = LIBXL__DEVICE_KIND_PCI;
 }
 
-static int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
-                                     const libxl_device_pci *pci,
-                                     int num)
+static void libxl__create_pci_backend(libxl__gc *gc, xs_transaction_t t,
+                                      uint32_t domid, const libxl_device_pci *pci)
 {
-    flexarray_t *front = NULL;
-    flexarray_t *back = NULL;
-    libxl__device device;
-    int i;
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+    flexarray_t *front, *back;
+    char *fe_path, *be_path;
+    struct xs_permissions fe_perms[2], be_perms[2];
+
+    LOGD(DEBUG, domid, "Creating pci backend");
 
     front = flexarray_make(gc, 16, 1);
     back = flexarray_make(gc, 16, 1);
 
-    LOGD(DEBUG, domid, "Creating pci backend");
-
-    /* add pci device */
-    libxl__device_from_pci(gc, domid, pci, &device);
+    fe_path = libxl__domain_device_frontend_path(gc, domid, 0,
+                                                 LIBXL__DEVICE_KIND_PCI);
+    be_path = libxl__domain_device_backend_path(gc, 0, domid, 0,
+                                                LIBXL__DEVICE_KIND_PCI);
 
+    flexarray_append_pair(back, "frontend", fe_path);
     flexarray_append_pair(back, "frontend-id", GCSPRINTF("%d", domid));
-    flexarray_append_pair(back, "online", "1");
+    flexarray_append_pair(back, "online", GCSPRINTF("%d", 1));
     flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateInitialising));
     flexarray_append_pair(back, "domain", libxl__domid_to_name(gc, domid));
 
-    for (i = 0; i < num; i++, pci++)
-        libxl_create_pci_backend_device(gc, back, i, pci);
+    be_perms[0].id = 0;
+    be_perms[0].perms = XS_PERM_NONE;
+    be_perms[1].id = domid;
+    be_perms[1].perms = XS_PERM_READ;
+
+    xs_rm(ctx->xsh, t, be_path);
+    xs_mkdir(ctx->xsh, t, be_path);
+    xs_set_permissions(ctx->xsh, t, be_path, be_perms,
+                       ARRAY_SIZE(be_perms));
+    libxl__xs_writev(gc, t, be_path, libxl__xs_kvs_of_flexarray(gc, back));
 
-    flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num));
+    flexarray_append_pair(front, "backend", be_path);
     flexarray_append_pair(front, "backend-id", GCSPRINTF("%d", 0));
     flexarray_append_pair(front, "state", GCSPRINTF("%d", XenbusStateInitialising));
 
-    return libxl__device_generic_add(gc, XBT_NULL, &device,
-                                     libxl__xs_kvs_of_flexarray(gc, back),
-                                     libxl__xs_kvs_of_flexarray(gc, front),
-                                     NULL);
+    fe_perms[0].id = domid;
+    fe_perms[0].perms = XS_PERM_NONE;
+    fe_perms[1].id = 0;
+    fe_perms[1].perms = XS_PERM_READ;
+
+    xs_rm(ctx->xsh, t, fe_path);
+    xs_mkdir(ctx->xsh, t, fe_path);
+    xs_set_permissions(ctx->xsh, t, fe_path,
+                       fe_perms, ARRAY_SIZE(fe_perms));
+    libxl__xs_writev(gc, t, fe_path, libxl__xs_kvs_of_flexarray(gc, front));
 }
 
 static int libxl__device_pci_add_xenstore(libxl__gc *gc,
@@ -135,8 +151,6 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
     be_path = libxl__domain_device_backend_path(gc, 0, domid, 0,
                                                 LIBXL__DEVICE_KIND_PCI);
     num_devs = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/num_devs", be_path));
-    if (!num_devs)
-        return libxl__create_pci_backend(gc, domid, pci, 1);
 
     libxl_domain_type domtype = libxl__domain_type(gc, domid);
     if (domtype == LIBXL_DOMAIN_TYPE_INVALID)
@@ -150,17 +164,17 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
     back = flexarray_make(gc, 16, 1);
 
     LOGD(DEBUG, domid, "Adding new pci device to xenstore");
-    num = atoi(num_devs);
+    num = num_devs ? atoi(num_devs) : 0;
     libxl_create_pci_backend_device(gc, back, num, pci);
     flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num + 1));
-    if (!starting)
+    if (num && !starting)
         flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateReconfiguring));
 
     /*
      * Stubdomin config is derived from its target domain, it doesn't have
      * its own file.
      */
-    if (!is_stubdomain) {
+    if (!is_stubdomain && !starting) {
         lock = libxl__lock_domain_userdata(gc, domid);
         if (!lock) {
             rc = ERROR_LOCK_FAIL;
@@ -170,6 +184,7 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
         rc = libxl__get_domain_configuration(gc, domid, &d_config);
         if (rc) goto out;
 
+        LOGD(DEBUG, domid, "Adding new pci device to config");
         device_add_domain_config(gc, &d_config, &libxl__pci_devtype,
                                  pci);
 
@@ -186,6 +201,10 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
             if (rc) goto out;
         }
 
+        /* This is the first device, so create the backend */
+        if (!num_devs)
+            libxl__create_pci_backend(gc, t, domid, pci);
+
         libxl__xs_writev(gc, t, be_path, libxl__xs_kvs_of_flexarray(gc, back));
 
         rc = libxl__xs_transaction_commit(gc, &t);
@@ -1437,7 +1456,7 @@ out_no_irq:
         }
     }
 
-    if (!starting && !libxl_get_stubdom_id(CTX, domid))
+    if (!libxl_get_stubdom_id(CTX, domid))
         rc = libxl__device_pci_add_xenstore(gc, domid, pci, starting);
     else
         rc = 0;
@@ -1765,28 +1784,12 @@ static void libxl__add_pcis(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
 }
 
 static void add_pcis_done(libxl__egc *egc, libxl__multidev *multidev,
-                             int rc)
+                          int rc)
 {
     EGC_GC;
     add_pcis_state *apds = CONTAINER_OF(multidev, *apds, multidev);
-
-    /* Convenience aliases */
-    libxl_domain_config *d_config = apds->d_config;
-    libxl_domid domid = apds->domid;
     libxl__ao_device *aodev = apds->outer_aodev;
 
-    if (rc) goto out;
-
-    if (d_config->num_pcidevs > 0 && !libxl_get_stubdom_id(CTX, domid)) {
-        rc = libxl__create_pci_backend(gc, domid, d_config->pcidevs,
-                                       d_config->num_pcidevs);
-        if (rc < 0) {
-            LOGD(ERROR, domid, "libxl_create_pci_backend failed: %d", rc);
-            goto out;
-        }
-    }
-
-out:
     aodev->rc = rc;
     aodev->callback(egc, aodev);
 }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 19:30:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 19:30:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47693.84477 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmihI-00010l-F9; Tue, 08 Dec 2020 19:30:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47693.84477; Tue, 08 Dec 2020 19:30:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmihH-00010B-RL; Tue, 08 Dec 2020 19:30:51 +0000
Received: by outflank-mailman (input) for mailman id 47693;
 Tue, 08 Dec 2020 19:30:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kmihF-0000uZ-LD
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 19:30:49 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihE-0007M8-TL; Tue, 08 Dec 2020 19:30:48 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=desktop.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihE-0001p0-Ll; Tue, 08 Dec 2020 19:30:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=qfopx7cHG4X0r38QiGo2Y3baRSgtNBNUsNc3slBAJGc=; b=epy8WCO/EMUg3ERX120HlUtRxV
	e51MAG3gYoC6ld9I8Lm1yxpGnIJJBP1xpBgk539KL14P1qVuhn0eWdbZoiMK+OU6lqIOiDbvtVWcj
	QNKGvmeQc4NPdHn96KeCDnkt5GE15HyuRsV3v8n8QKy8BEyw4ny0k4Lqc4HspeHUnIQg=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Wei Liu <wl@xen.org>,
	Ian Jackson <iwj@xenproject.org>
Subject: [PATCH v6 09/25] libxl: generalise 'driver_path' xenstore access functions in libxl_pci.c
Date: Tue,  8 Dec 2020 19:30:17 +0000
Message-Id: <20201208193033.11306-10-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201208193033.11306-1-paul@xen.org>
References: <20201208193033.11306-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

For the purposes of re-binding a device to its previous driver
libxl__device_pci_assignable_add() writes the driver path into xenstore.
This path is then read back in libxl__device_pci_assignable_remove().

The functions that support this writing to and reading from xenstore are
currently dedicated to this purpose and hence the node name 'driver_path'
is hard-coded. This patch generalizes these utility functions and passes
'driver_path' as an argument. Subsequent patches will invoke them to
access other nodes.

NOTE: Because the functions will have a broader use (other than storing a
      driver path in lieu of pciback) the base xenstore path is also
      changed from '/libxl/pciback' to '/libxl/pci'.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Acked-by: Wei Liu <wl@xen.org>
---
Cc: Ian Jackson <iwj@xenproject.org>
---
 tools/libs/light/libxl_pci.c | 66 +++++++++++++++++-------------------
 1 file changed, 32 insertions(+), 34 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index be2145222d2b..78ee641e9aa8 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -737,48 +737,46 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
     return 0;
 }
 
-#define PCIBACK_INFO_PATH "/libxl/pciback"
+#define PCI_INFO_PATH "/libxl/pci"
 
-static void pci_assignable_driver_path_write(libxl__gc *gc,
-                                            libxl_device_pci *pci,
-                                            char *driver_path)
+static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node)
 {
-    char *path;
+    return node ?
+        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
+                  pci->domain, pci->bus, pci->dev, pci->func,
+                  node) :
+        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
+                  pci->domain, pci->bus, pci->dev, pci->func);
+}
+
+
+static void pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node, const char *val)
+{
+    char *path = pci_info_xs_path(gc, pci, node);
 
-    path = GCSPRINTF(PCIBACK_INFO_PATH"/"PCI_BDF_XSPATH"/driver_path",
-                     pci->domain,
-                     pci->bus,
-                     pci->dev,
-                     pci->func);
-    if ( libxl__xs_printf(gc, XBT_NULL, path, "%s", driver_path) < 0 ) {
-        LOGE(WARN, "Write of %s to node %s failed.", driver_path, path);
+    if ( libxl__xs_printf(gc, XBT_NULL, path, "%s", val) < 0 ) {
+        LOGE(WARN, "Write of %s to node %s failed.", val, path);
     }
 }
 
-static char * pci_assignable_driver_path_read(libxl__gc *gc,
-                                              libxl_device_pci *pci)
+static char *pci_info_xs_read(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node)
 {
-    return libxl__xs_read(gc, XBT_NULL,
-                          GCSPRINTF(
-                           PCIBACK_INFO_PATH "/" PCI_BDF_XSPATH "/driver_path",
-                           pci->domain,
-                           pci->bus,
-                           pci->dev,
-                           pci->func));
+    char *path = pci_info_xs_path(gc, pci, node);
+
+    return libxl__xs_read(gc, XBT_NULL, path);
 }
 
-static void pci_assignable_driver_path_remove(libxl__gc *gc,
-                                              libxl_device_pci *pci)
+static void pci_info_xs_remove(libxl__gc *gc, libxl_device_pci *pci,
+                               const char *node)
 {
+    char *path = pci_info_xs_path(gc, pci, node);
     libxl_ctx *ctx = libxl__gc_owner(gc);
 
     /* Remove the xenstore entry */
-    xs_rm(ctx->xsh, XBT_NULL,
-          GCSPRINTF(PCIBACK_INFO_PATH "/" PCI_BDF_XSPATH,
-                    pci->domain,
-                    pci->bus,
-                    pci->dev,
-                    pci->func) );
+    xs_rm(ctx->xsh, XBT_NULL, path);
 }
 
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
@@ -824,9 +822,9 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     /* Store driver_path for rebinding to dom0 */
     if ( rebind ) {
         if ( driver_path ) {
-            pci_assignable_driver_path_write(gc, pci, driver_path);
+            pci_info_xs_write(gc, pci, "driver_path", driver_path);
         } else if ( (driver_path =
-                     pci_assignable_driver_path_read(gc, pci)) != NULL ) {
+                     pci_info_xs_read(gc, pci, "driver_path")) != NULL ) {
             LOG(INFO, PCI_BDF" not bound to a driver, will be rebound to %s",
                 dom, bus, dev, func, driver_path);
         } else {
@@ -834,7 +832,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
                 dom, bus, dev, func);
         }
     } else {
-        pci_assignable_driver_path_remove(gc, pci);
+        pci_info_xs_remove(gc, pci, "driver_path");
     }
 
     if ( pciback_dev_assign(gc, pci) ) {
@@ -884,7 +882,7 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     }
 
     /* Rebind if necessary */
-    driver_path = pci_assignable_driver_path_read(gc, pci);
+    driver_path = pci_info_xs_read(gc, pci, "driver_path");
 
     if ( driver_path ) {
         if ( rebind ) {
@@ -897,7 +895,7 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
                 return -1;
             }
 
-            pci_assignable_driver_path_remove(gc, pci);
+            pci_info_xs_remove(gc, pci, "driver_path");
         }
     } else {
         if ( rebind ) {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 19:30:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 19:30:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47685.84400 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmihA-0000g8-S4; Tue, 08 Dec 2020 19:30:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47685.84400; Tue, 08 Dec 2020 19:30:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmihA-0000fm-GA; Tue, 08 Dec 2020 19:30:44 +0000
Received: by outflank-mailman (input) for mailman id 47685;
 Tue, 08 Dec 2020 19:30:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kmih9-0000f4-8j
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 19:30:43 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmih8-0007LB-BI; Tue, 08 Dec 2020 19:30:42 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=desktop.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmih8-0001p0-2Y; Tue, 08 Dec 2020 19:30:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=riw3MyToWucKYseT/Zi4z6kwfCRW/U/hR5lNoqu9wNw=; b=wM0QsmSD20dySYYbE3Jg39hB0c
	OInlEuQXAzvj69pClMJ3pA9VryqeWHC5CxOyUUB7C6XqEkGKktQ064sQVqwCqJt162B91W5cOKz1z
	8Td0Vp47SoA75TW9lvTDagv/x7DmCjKvjddBYFa9YHbE+Ud+h2sMbiHrmz2vTN1V821g=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v6 02/25] xl: s/pcidev/pci where possible
Date: Tue,  8 Dec 2020 19:30:10 +0000
Message-Id: <20201208193033.11306-3-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201208193033.11306-1-paul@xen.org>
References: <20201208193033.11306-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

To improve naming consistency, this patch replaces occurrences of 'pcidev'
with 'pci'. The only remaining uses of the old term should be the 'pcidevs'
and 'num_pcidevs' fields in 'libxl_domain_config'.

Purely cosmetic. No functional change.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>

v6:
 - New in v6 (split out from "xl / libxl: s/pcidev/pci and remove
   DEFINE_DEVICE_TYPE_STRUCT_X")
---
 tools/xl/xl_parse.c | 22 +++++++--------
 tools/xl/xl_pci.c   | 68 ++++++++++++++++++++++-----------------------
 2 files changed, 45 insertions(+), 45 deletions(-)

diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index cae8eb679c5a..4ebf39620ae7 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1473,21 +1473,21 @@ void parse_config_data(const char *config_source,
         d_config->num_pcidevs = 0;
         d_config->pcidevs = NULL;
         for(i = 0; (buf = xlu_cfg_get_listitem (pcis, i)) != NULL; i++) {
-            libxl_device_pci *pcidev;
-
-            pcidev = ARRAY_EXTEND_INIT_NODEVID(d_config->pcidevs,
-                                               d_config->num_pcidevs,
-                                               libxl_device_pci_init);
-            pcidev->msitranslate = pci_msitranslate;
-            pcidev->power_mgmt = pci_power_mgmt;
-            pcidev->permissive = pci_permissive;
-            pcidev->seize = pci_seize;
+            libxl_device_pci *pci;
+
+            pci = ARRAY_EXTEND_INIT_NODEVID(d_config->pcidevs,
+                                            d_config->num_pcidevs,
+                                            libxl_device_pci_init);
+            pci->msitranslate = pci_msitranslate;
+            pci->power_mgmt = pci_power_mgmt;
+            pci->permissive = pci_permissive;
+            pci->seize = pci_seize;
             /*
              * Like other pci option, the per-device policy always follows
              * the global policy by default.
              */
-            pcidev->rdm_policy = b_info->u.hvm.rdm.policy;
-            e = xlu_pci_parse_bdf(config, pcidev, buf);
+            pci->rdm_policy = b_info->u.hvm.rdm.policy;
+            e = xlu_pci_parse_bdf(config, pci, buf);
             if (e) {
                 fprintf(stderr,
                         "unable to parse PCI BDF `%s' for passthrough\n",
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 58345bdae213..34fcf5a4fadf 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -24,20 +24,20 @@
 
 static void pcilist(uint32_t domid)
 {
-    libxl_device_pci *pcidevs;
+    libxl_device_pci *pcis;
     int num, i;
 
-    pcidevs = libxl_device_pci_list(ctx, domid, &num);
-    if (pcidevs == NULL)
+    pcis = libxl_device_pci_list(ctx, domid, &num);
+    if (pcis == NULL)
         return;
     printf("Vdev Device\n");
     for (i = 0; i < num; i++) {
         printf("%02x.%01x %04x:%02x:%02x.%01x\n",
-               (pcidevs[i].vdevfn >> 3) & 0x1f, pcidevs[i].vdevfn & 0x7,
-               pcidevs[i].domain, pcidevs[i].bus, pcidevs[i].dev, pcidevs[i].func);
-        libxl_device_pci_dispose(&pcidevs[i]);
+               (pcis[i].vdevfn >> 3) & 0x1f, pcis[i].vdevfn & 0x7,
+               pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
+        libxl_device_pci_dispose(&pcis[i]);
     }
-    free(pcidevs);
+    free(pcis);
 }
 
 int main_pcilist(int argc, char **argv)
@@ -57,28 +57,28 @@ int main_pcilist(int argc, char **argv)
 
 static int pcidetach(uint32_t domid, const char *bdf, int force)
 {
-    libxl_device_pci pcidev;
+    libxl_device_pci pci;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pcidev);
+    libxl_device_pci_init(&pci);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_inig"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcidev, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
         fprintf(stderr, "pci-detach: malformed BDF specification \"%s\"\n", bdf);
         exit(2);
     }
     if (force) {
-        if (libxl_device_pci_destroy(ctx, domid, &pcidev, 0))
+        if (libxl_device_pci_destroy(ctx, domid, &pci, 0))
             r = 1;
     } else {
-        if (libxl_device_pci_remove(ctx, domid, &pcidev, 0))
+        if (libxl_device_pci_remove(ctx, domid, &pci, 0))
             r = 1;
     }
 
-    libxl_device_pci_dispose(&pcidev);
+    libxl_device_pci_dispose(&pci);
     xlu_cfg_destroy(config);
 
     return r;
@@ -108,24 +108,24 @@ int main_pcidetach(int argc, char **argv)
 
 static int pciattach(uint32_t domid, const char *bdf, const char *vs)
 {
-    libxl_device_pci pcidev;
+    libxl_device_pci pci;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pcidev);
+    libxl_device_pci_init(&pci);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_inig"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcidev, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
         fprintf(stderr, "pci-attach: malformed BDF specification \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_add(ctx, domid, &pcidev, 0))
+    if (libxl_device_pci_add(ctx, domid, &pci, 0))
         r = 1;
 
-    libxl_device_pci_dispose(&pcidev);
+    libxl_device_pci_dispose(&pci);
     xlu_cfg_destroy(config);
 
     return r;
@@ -155,19 +155,19 @@ int main_pciattach(int argc, char **argv)
 
 static void pciassignable_list(void)
 {
-    libxl_device_pci *pcidevs;
+    libxl_device_pci *pcis;
     int num, i;
 
-    pcidevs = libxl_device_pci_assignable_list(ctx, &num);
+    pcis = libxl_device_pci_assignable_list(ctx, &num);
 
-    if ( pcidevs == NULL )
+    if ( pcis == NULL )
         return;
     for (i = 0; i < num; i++) {
         printf("%04x:%02x:%02x.%01x\n",
-               pcidevs[i].domain, pcidevs[i].bus, pcidevs[i].dev, pcidevs[i].func);
-        libxl_device_pci_dispose(&pcidevs[i]);
+               pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
+        libxl_device_pci_dispose(&pcis[i]);
     }
-    free(pcidevs);
+    free(pcis);
 }
 
 int main_pciassignable_list(int argc, char **argv)
@@ -184,24 +184,24 @@ int main_pciassignable_list(int argc, char **argv)
 
 static int pciassignable_add(const char *bdf, int rebind)
 {
-    libxl_device_pci pcidev;
+    libxl_device_pci pci;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pcidev);
+    libxl_device_pci_init(&pci);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcidev, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
         fprintf(stderr, "pci-assignable-add: malformed BDF specification \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_add(ctx, &pcidev, rebind))
+    if (libxl_device_pci_assignable_add(ctx, &pci, rebind))
         r = 1;
 
-    libxl_device_pci_dispose(&pcidev);
+    libxl_device_pci_dispose(&pci);
     xlu_cfg_destroy(config);
 
     return r;
@@ -226,24 +226,24 @@ int main_pciassignable_add(int argc, char **argv)
 
 static int pciassignable_remove(const char *bdf, int rebind)
 {
-    libxl_device_pci pcidev;
+    libxl_device_pci pci;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pcidev);
+    libxl_device_pci_init(&pci);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcidev, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
         fprintf(stderr, "pci-assignable-remove: malformed BDF specification \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_remove(ctx, &pcidev, rebind))
+    if (libxl_device_pci_assignable_remove(ctx, &pci, rebind))
         r = 1;
 
-    libxl_device_pci_dispose(&pcidev);
+    libxl_device_pci_dispose(&pci);
     xlu_cfg_destroy(config);
 
     return r;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 19:30:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 19:30:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47684.84392 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmihA-0000fS-BV; Tue, 08 Dec 2020 19:30:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47684.84392; Tue, 08 Dec 2020 19:30:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmihA-0000fJ-89; Tue, 08 Dec 2020 19:30:44 +0000
Received: by outflank-mailman (input) for mailman id 47684;
 Tue, 08 Dec 2020 19:30:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kmih8-0000eq-OR
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 19:30:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmih7-0007L7-Ex; Tue, 08 Dec 2020 19:30:41 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=desktop.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmih6-0001p0-Ux; Tue, 08 Dec 2020 19:30:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=cBR0cFUJiqGdqAnzx98TGoc+XReo3bFMnFXG44K2vPM=; b=tuoNXXfrttWvfUtiK4TwNMNvrE
	LLLNzG1ChqgqU8TPEZNhkVHVjDPII1xPjeR8fXVnbsxEU9P7rZOvgQ2FD7sJ/sSfmaquMYNF3ZnPY
	OW3jBIYJo8PO6vA/RjCA/xmLi5alI85hs1QWQOC82GR5wF72FYwOofHyUZY6d0P3i81c=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v6 01/25] libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
Date: Tue,  8 Dec 2020 19:30:09 +0000
Message-Id: <20201208193033.11306-2-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201208193033.11306-1-paul@xen.org>
References: <20201208193033.11306-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

The seemingly arbitrary use of 'pci' and 'pcidev' in the code in libxl_pci.c
is confusing and also compromises use of some macros used for other device
types. Indeed it seems that DEFINE_DEVICE_TYPE_STRUCT_X exists solely because
of this duality.

This patch purges use of 'pcidev' from the libxl internal code, but
unfortunately the 'pcidevs' and 'num_pcidevs' fields in 'libxl_domain_config'
are part of the API and need to be retained to avoid breaking callers,
particularly libvirt.

DEFINE_DEVICE_TYPE_STRUCT_X is still removed to avoid the special case in
libxl_pci.c but DEFINE_DEVICE_TYPE_STRUCT is given an extra 'array' argument
which is used to identify the fields in 'libxl_domain_config' relating to
the device type.

NOTE: Some of the more gross formatting errors (such as lack of spaces after
      keywords) that came into context have been fixed in libxl_pci.c.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>

v6:
 - Avoid name changes in 'libxl_domain_config'
 - Defer xl changes to a subsequent patch

v5:
 - Minor cosmetic fix
---
 tools/include/libxl.h             |  10 +-
 tools/libs/light/libxl_9pfs.c     |   2 +-
 tools/libs/light/libxl_console.c  |   2 +-
 tools/libs/light/libxl_create.c   |   4 +-
 tools/libs/light/libxl_disk.c     |   2 +-
 tools/libs/light/libxl_internal.h |  29 +-
 tools/libs/light/libxl_nic.c      |   2 +-
 tools/libs/light/libxl_pci.c      | 570 +++++++++++++++---------------
 tools/libs/light/libxl_pvcalls.c  |   2 +-
 tools/libs/light/libxl_usb.c      |   4 +-
 tools/libs/light/libxl_vdispl.c   |   2 +-
 tools/libs/light/libxl_vkb.c      |   2 +-
 tools/libs/light/libxl_vsnd.c     |   2 +-
 tools/libs/light/libxl_vtpm.c     |   2 +-
 tools/libs/util/libxlu_pci.c      |  36 +-
 15 files changed, 334 insertions(+), 337 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index eaffccb30f37..733263522bd9 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -2307,15 +2307,15 @@ int libxl_device_pvcallsif_destroy(libxl_ctx *ctx, uint32_t domid,
 
 /* PCI Passthrough */
 int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
-                         libxl_device_pci *pcidev,
+                         libxl_device_pci *pci,
                          const libxl_asyncop_how *ao_how)
                          LIBXL_EXTERNAL_CALLERS_ONLY;
 int libxl_device_pci_remove(libxl_ctx *ctx, uint32_t domid,
-                            libxl_device_pci *pcidev,
+                            libxl_device_pci *pci,
                             const libxl_asyncop_how *ao_how)
                             LIBXL_EXTERNAL_CALLERS_ONLY;
 int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
-                             libxl_device_pci *pcidev,
+                             libxl_device_pci *pci,
                              const libxl_asyncop_how *ao_how)
                              LIBXL_EXTERNAL_CALLERS_ONLY;
 
@@ -2359,8 +2359,8 @@ int libxl_device_events_handler(libxl_ctx *ctx,
  * added or is not bound, the functions will emit a warning but return
  * SUCCESS.
  */
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pcidev, int rebind);
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pcidev, int rebind);
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
 libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
 
 /* CPUID handling */
diff --git a/tools/libs/light/libxl_9pfs.c b/tools/libs/light/libxl_9pfs.c
index e5c41e9a2524..5ab0d3aa21b0 100644
--- a/tools/libs/light/libxl_9pfs.c
+++ b/tools/libs/light/libxl_9pfs.c
@@ -45,7 +45,7 @@ static LIBXL_DEFINE_DEVICE_FROM_TYPE(p9)
 
 LIBXL_DEFINE_DEVICE_REMOVE(p9)
 
-DEFINE_DEVICE_TYPE_STRUCT(p9, 9PFS,
+DEFINE_DEVICE_TYPE_STRUCT(p9, 9PFS, p9s,
     .skip_attach = 1,
     .set_xenstore_config = (device_set_xenstore_config_fn_t)
                            libxl__set_xenstore_p9,
diff --git a/tools/libs/light/libxl_console.c b/tools/libs/light/libxl_console.c
index 047d23d7ae78..d8b2bc546582 100644
--- a/tools/libs/light/libxl_console.c
+++ b/tools/libs/light/libxl_console.c
@@ -725,7 +725,7 @@ static LIBXL_DEFINE_DEVICE_FROM_TYPE(vfb)
 /* vfb */
 LIBXL_DEFINE_DEVICE_REMOVE(vfb)
 
-DEFINE_DEVICE_TYPE_STRUCT(vfb, VFB,
+DEFINE_DEVICE_TYPE_STRUCT(vfb, VFB, vfbs,
     .skip_attach = 1,
     .set_xenstore_config = (device_set_xenstore_config_fn_t)
                            libxl__set_xenstore_vfb,
diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index 321a13e519b5..86f4a8369d35 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -1809,7 +1809,7 @@ out:
 #define libxl__device_from_dtdev NULL
 #define libxl__device_dtdev_setdefault NULL
 #define libxl__device_dtdev_update_devid NULL
-static DEFINE_DEVICE_TYPE_STRUCT(dtdev, NONE);
+static DEFINE_DEVICE_TYPE_STRUCT(dtdev, NONE, dtdevs);
 
 const libxl__device_type *device_type_tbl[] = {
     &libxl__disk_devtype,
@@ -1817,7 +1817,7 @@ const libxl__device_type *device_type_tbl[] = {
     &libxl__vtpm_devtype,
     &libxl__usbctrl_devtype,
     &libxl__usbdev_devtype,
-    &libxl__pcidev_devtype,
+    &libxl__pci_devtype,
     &libxl__dtdev_devtype,
     &libxl__vdispl_devtype,
     &libxl__vsnd_devtype,
diff --git a/tools/libs/light/libxl_disk.c b/tools/libs/light/libxl_disk.c
index de183e0fb082..411ffeaca6ce 100644
--- a/tools/libs/light/libxl_disk.c
+++ b/tools/libs/light/libxl_disk.c
@@ -1374,7 +1374,7 @@ LIBXL_DEFINE_DEVICE_LIST(disk)
 
 #define libxl__device_disk_update_devid NULL
 
-DEFINE_DEVICE_TYPE_STRUCT(disk, VBD,
+DEFINE_DEVICE_TYPE_STRUCT(disk, VBD, disks,
     .merge       = libxl_device_disk_merge,
     .dm_needed   = libxl_device_disk_dm_needed,
     .from_xenstore = (device_from_xenstore_fn_t)libxl__disk_from_xenstore,
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index e26cda9b5045..c2c5a9b92673 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -1709,7 +1709,7 @@ _hidden int libxl__pci_topology_init(libxl__gc *gc,
 /* from libxl_pci */
 
 _hidden void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
-                                   libxl_device_pci *pcidev, bool starting,
+                                   libxl_device_pci *pci, bool starting,
                                    libxl__ao_device *aodev);
 _hidden void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
                                            libxl__multidev *);
@@ -3945,30 +3945,27 @@ struct libxl__device_type {
     device_set_xenstore_config_fn_t set_xenstore_config;
 };
 
-#define DEFINE_DEVICE_TYPE_STRUCT_X(name, sname, kind, ...)                    \
+#define DEFINE_DEVICE_TYPE_STRUCT(name, kind, array, ...)                      \
     const libxl__device_type libxl__ ## name ## _devtype = {                   \
-        .type          = LIBXL__DEVICE_KIND_ ## kind,                       \
-        .ptr_offset    = offsetof(libxl_domain_config, name ## s),             \
-        .num_offset    = offsetof(libxl_domain_config, num_ ## name ## s),     \
-        .dev_elem_size = sizeof(libxl_device_ ## sname),                       \
+        .type          = LIBXL__DEVICE_KIND_ ## kind,                          \
+        .ptr_offset    = offsetof(libxl_domain_config, array),                 \
+        .num_offset    = offsetof(libxl_domain_config, num_ ## array),         \
+        .dev_elem_size = sizeof(libxl_device_ ## name),                        \
         .add           = libxl__add_ ## name ## s,                             \
         .set_default   = (device_set_default_fn_t)                             \
-                         libxl__device_ ## sname ## _setdefault,               \
+                         libxl__device_ ## name ## _setdefault,                \
         .to_device     = (device_to_device_fn_t)libxl__device_from_ ## name,   \
-        .init          = (device_init_fn_t)libxl_device_ ## sname ## _init,    \
-        .copy          = (device_copy_fn_t)libxl_device_ ## sname ## _copy,    \
+        .init          = (device_init_fn_t)libxl_device_ ## name ## _init,     \
+        .copy          = (device_copy_fn_t)libxl_device_ ## name ## _copy,     \
         .dispose       = (device_dispose_fn_t)                                 \
-                         libxl_device_ ## sname ## _dispose,                   \
+                         libxl_device_ ## name ## _dispose,                    \
         .compare       = (device_compare_fn_t)                                 \
-                         libxl_device_ ## sname ## _compare,                   \
+                         libxl_device_ ## name ## _compare,                    \
         .update_devid  = (device_update_devid_fn_t)                            \
-                         libxl__device_ ## sname ## _update_devid,             \
+                         libxl__device_ ## name ## _update_devid,              \
         __VA_ARGS__                                                            \
     }
 
-#define DEFINE_DEVICE_TYPE_STRUCT(name, kind, ...)                             \
-    DEFINE_DEVICE_TYPE_STRUCT_X(name, name, kind, __VA_ARGS__)
-
 static inline void **libxl__device_type_get_ptr(
     const libxl__device_type *dt, const libxl_domain_config *d_config)
 {
@@ -3995,7 +3992,7 @@ extern const libxl__device_type libxl__nic_devtype;
 extern const libxl__device_type libxl__vtpm_devtype;
 extern const libxl__device_type libxl__usbctrl_devtype;
 extern const libxl__device_type libxl__usbdev_devtype;
-extern const libxl__device_type libxl__pcidev_devtype;
+extern const libxl__device_type libxl__pci_devtype;
 extern const libxl__device_type libxl__vdispl_devtype;
 extern const libxl__device_type libxl__p9_devtype;
 extern const libxl__device_type libxl__pvcallsif_devtype;
diff --git a/tools/libs/light/libxl_nic.c b/tools/libs/light/libxl_nic.c
index 0e5d120ae9a4..144e9e23e198 100644
--- a/tools/libs/light/libxl_nic.c
+++ b/tools/libs/light/libxl_nic.c
@@ -528,7 +528,7 @@ LIBXL_DEFINE_DEVICE_ADD(nic)
 LIBXL_DEFINE_DEVICES_ADD(nic)
 LIBXL_DEFINE_DEVICE_REMOVE(nic)
 
-DEFINE_DEVICE_TYPE_STRUCT(nic, VIF,
+DEFINE_DEVICE_TYPE_STRUCT(nic, VIF, nics,
     .update_config = libxl_device_nic_update_config,
     .from_xenstore = (device_from_xenstore_fn_t)libxl__nic_from_xenstore,
     .set_xenstore_config = (device_set_xenstore_config_fn_t)
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index bc5843b13701..3340076d2cb3 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -25,51 +25,51 @@
 #define PCI_BDF_XSPATH         "%04x-%02x-%02x-%01x"
 #define PCI_PT_QDEV_ID         "pci-pt-%02x_%02x.%01x"
 
-static unsigned int pcidev_encode_bdf(libxl_device_pci *pcidev)
+static unsigned int pci_encode_bdf(libxl_device_pci *pci)
 {
     unsigned int value;
 
-    value = pcidev->domain << 16;
-    value |= (pcidev->bus & 0xff) << 8;
-    value |= (pcidev->dev & 0x1f) << 3;
-    value |= (pcidev->func & 0x7);
+    value = pci->domain << 16;
+    value |= (pci->bus & 0xff) << 8;
+    value |= (pci->dev & 0x1f) << 3;
+    value |= (pci->func & 0x7);
 
     return value;
 }
 
-static void pcidev_struct_fill(libxl_device_pci *pcidev, unsigned int domain,
-                               unsigned int bus, unsigned int dev,
-                               unsigned int func, unsigned int vdevfn)
+static void pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
+                            unsigned int bus, unsigned int dev,
+                            unsigned int func, unsigned int vdevfn)
 {
-    pcidev->domain = domain;
-    pcidev->bus = bus;
-    pcidev->dev = dev;
-    pcidev->func = func;
-    pcidev->vdevfn = vdevfn;
+    pci->domain = domain;
+    pci->bus = bus;
+    pci->dev = dev;
+    pci->func = func;
+    pci->vdevfn = vdevfn;
 }
 
 static void libxl_create_pci_backend_device(libxl__gc *gc,
                                             flexarray_t *back,
                                             int num,
-                                            const libxl_device_pci *pcidev)
+                                            const libxl_device_pci *pci)
 {
     flexarray_append(back, GCSPRINTF("key-%d", num));
-    flexarray_append(back, GCSPRINTF(PCI_BDF, pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func));
+    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->domain, pci->bus, pci->dev, pci->func));
     flexarray_append(back, GCSPRINTF("dev-%d", num));
-    flexarray_append(back, GCSPRINTF(PCI_BDF, pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func));
-    if (pcidev->vdevfn)
-        flexarray_append_pair(back, GCSPRINTF("vdevfn-%d", num), GCSPRINTF("%x", pcidev->vdevfn));
+    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->domain, pci->bus, pci->dev, pci->func));
+    if (pci->vdevfn)
+        flexarray_append_pair(back, GCSPRINTF("vdevfn-%d", num), GCSPRINTF("%x", pci->vdevfn));
     flexarray_append(back, GCSPRINTF("opts-%d", num));
     flexarray_append(back,
               GCSPRINTF("msitranslate=%d,power_mgmt=%d,permissive=%d",
-                             pcidev->msitranslate, pcidev->power_mgmt,
-                             pcidev->permissive));
+                             pci->msitranslate, pci->power_mgmt,
+                             pci->permissive));
     flexarray_append_pair(back, GCSPRINTF("state-%d", num), GCSPRINTF("%d", XenbusStateInitialising));
 }
 
-static void libxl__device_from_pcidev(libxl__gc *gc, uint32_t domid,
-                                      const libxl_device_pci *pcidev,
-                                      libxl__device *device)
+static void libxl__device_from_pci(libxl__gc *gc, uint32_t domid,
+                                   const libxl_device_pci *pci,
+                                   libxl__device *device)
 {
     device->backend_devid = 0;
     device->backend_domid = 0;
@@ -80,7 +80,7 @@ static void libxl__device_from_pcidev(libxl__gc *gc, uint32_t domid,
 }
 
 static int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
-                                     const libxl_device_pci *pcidev,
+                                     const libxl_device_pci *pci,
                                      int num)
 {
     flexarray_t *front = NULL;
@@ -94,15 +94,15 @@ static int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
     LOGD(DEBUG, domid, "Creating pci backend");
 
     /* add pci device */
-    libxl__device_from_pcidev(gc, domid, pcidev, &device);
+    libxl__device_from_pci(gc, domid, pci, &device);
 
     flexarray_append_pair(back, "frontend-id", GCSPRINTF("%d", domid));
     flexarray_append_pair(back, "online", "1");
     flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateInitialising));
     flexarray_append_pair(back, "domain", libxl__domid_to_name(gc, domid));
 
-    for (i = 0; i < num; i++, pcidev++)
-        libxl_create_pci_backend_device(gc, back, i, pcidev);
+    for (i = 0; i < num; i++, pci++)
+        libxl_create_pci_backend_device(gc, back, i, pci);
 
     flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num));
     flexarray_append_pair(front, "backend-id", GCSPRINTF("%d", 0));
@@ -116,7 +116,7 @@ static int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
 
 static int libxl__device_pci_add_xenstore(libxl__gc *gc,
                                           uint32_t domid,
-                                          const libxl_device_pci *pcidev,
+                                          const libxl_device_pci *pci,
                                           bool starting)
 {
     flexarray_t *back;
@@ -136,7 +136,7 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
                                                 LIBXL__DEVICE_KIND_PCI);
     num_devs = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/num_devs", be_path));
     if (!num_devs)
-        return libxl__create_pci_backend(gc, domid, pcidev, 1);
+        return libxl__create_pci_backend(gc, domid, pci, 1);
 
     libxl_domain_type domtype = libxl__domain_type(gc, domid);
     if (domtype == LIBXL_DOMAIN_TYPE_INVALID)
@@ -151,7 +151,7 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
 
     LOGD(DEBUG, domid, "Adding new pci device to xenstore");
     num = atoi(num_devs);
-    libxl_create_pci_backend_device(gc, back, num, pcidev);
+    libxl_create_pci_backend_device(gc, back, num, pci);
     flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num + 1));
     if (!starting)
         flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateReconfiguring));
@@ -170,8 +170,8 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
         rc = libxl__get_domain_configuration(gc, domid, &d_config);
         if (rc) goto out;
 
-        device_add_domain_config(gc, &d_config, &libxl__pcidev_devtype,
-                                 pcidev);
+        device_add_domain_config(gc, &d_config, &libxl__pci_devtype,
+                                 pci);
 
         rc = libxl__dm_check_start(gc, &d_config, domid);
         if (rc) goto out;
@@ -201,7 +201,7 @@ out:
     return rc;
 }
 
-static int libxl__device_pci_remove_xenstore(libxl__gc *gc, uint32_t domid, libxl_device_pci *pcidev)
+static int libxl__device_pci_remove_xenstore(libxl__gc *gc, uint32_t domid, libxl_device_pci *pci)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     char *be_path, *num_devs_path, *num_devs, *xsdev, *tmp, *tmppath;
@@ -231,8 +231,8 @@ static int libxl__device_pci_remove_xenstore(libxl__gc *gc, uint32_t domid, libx
         unsigned int domain = 0, bus = 0, dev = 0, func = 0;
         xsdev = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/dev-%d", be_path, i));
         sscanf(xsdev, PCI_BDF, &domain, &bus, &dev, &func);
-        if (domain == pcidev->domain && bus == pcidev->bus &&
-            pcidev->dev == dev && pcidev->func == func) {
+        if (domain == pci->domain && bus == pci->bus &&
+            pci->dev == dev && pci->func == func) {
             break;
         }
     }
@@ -350,7 +350,7 @@ static int get_all_assigned_devices(libxl__gc *gc, libxl_device_pci **list, int
                     *list = realloc(*list, sizeof(libxl_device_pci) * ((*num) + 1));
                     if (*list == NULL)
                         return ERROR_NOMEM;
-                    pcidev_struct_fill(*list + *num, dom, bus, dev, func, 0);
+                    pci_struct_fill(*list + *num, dom, bus, dev, func, 0);
                     (*num)++;
                 }
             }
@@ -361,8 +361,8 @@ static int get_all_assigned_devices(libxl__gc *gc, libxl_device_pci **list, int
     return 0;
 }
 
-static int is_pcidev_in_array(libxl_device_pci *assigned, int num_assigned,
-                       int dom, int bus, int dev, int func)
+static int is_pci_in_array(libxl_device_pci *assigned, int num_assigned,
+                           int dom, int bus, int dev, int func)
 {
     int i;
 
@@ -383,7 +383,7 @@ static int is_pcidev_in_array(libxl_device_pci *assigned, int num_assigned,
 
 /* Write the standard BDF into the sysfs path given by sysfs_path. */
 static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
-                           libxl_device_pci *pcidev)
+                           libxl_device_pci *pci)
 {
     int rc, fd;
     char *buf;
@@ -394,8 +394,8 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
         return ERROR_FAIL;
     }
 
-    buf = GCSPRINTF(PCI_BDF, pcidev->domain, pcidev->bus,
-                    pcidev->dev, pcidev->func);
+    buf = GCSPRINTF(PCI_BDF, pci->domain, pci->bus,
+                    pci->dev, pci->func);
     rc = write(fd, buf, strlen(buf));
     /* Annoying to have two if's, but we need the errno */
     if (rc < 0)
@@ -411,7 +411,7 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
 libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
 {
     GC_INIT(ctx);
-    libxl_device_pci *pcidevs = NULL, *new, *assigned;
+    libxl_device_pci *pcis = NULL, *new, *assigned;
     struct dirent *de;
     DIR *dir;
     int r, num_assigned;
@@ -436,40 +436,40 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
         if (sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4)
             continue;
 
-        if (is_pcidev_in_array(assigned, num_assigned, dom, bus, dev, func))
+        if (is_pci_in_array(assigned, num_assigned, dom, bus, dev, func))
             continue;
 
-        new = realloc(pcidevs, ((*num) + 1) * sizeof(*new));
+        new = realloc(pcis, ((*num) + 1) * sizeof(*new));
         if (NULL == new)
             continue;
 
-        pcidevs = new;
-        new = pcidevs + *num;
+        pcis = new;
+        new = pcis + *num;
 
         memset(new, 0, sizeof(*new));
-        pcidev_struct_fill(new, dom, bus, dev, func, 0);
+        pci_struct_fill(new, dom, bus, dev, func, 0);
         (*num)++;
     }
 
     closedir(dir);
 out:
     GC_FREE;
-    return pcidevs;
+    return pcis;
 }
 
 /* Unbind device from its current driver, if any.  If driver_path is non-NULL,
  * store the path to the original driver in it. */
-static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pcidev,
+static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
                             char **driver_path)
 {
     char * spath, *dp = NULL;
     struct stat st;
 
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/driver",
-                           pcidev->domain,
-                           pcidev->bus,
-                           pcidev->dev,
-                           pcidev->func);
+                           pci->domain,
+                           pci->bus,
+                           pci->dev,
+                           pci->func);
     if ( !lstat(spath, &st) ) {
         /* Find the canonical path to the driver. */
         dp = libxl__zalloc(gc, PATH_MAX);
@@ -483,7 +483,7 @@ static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pcidev,
 
         /* Unbind from the old driver */
         spath = GCSPRINTF("%s/unbind", dp);
-        if ( sysfs_write_bdf(gc, spath, pcidev) < 0 ) {
+        if ( sysfs_write_bdf(gc, spath, pci) < 0 ) {
             LOGE(ERROR, "Couldn't unbind device");
             return -1;
         }
@@ -495,11 +495,11 @@ static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pcidev,
     return 0;
 }
 
-static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pcidev)
+static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pci)
 {
     char *pci_device_vendor_path =
             GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/vendor",
-                      pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+                      pci->domain, pci->bus, pci->dev, pci->func);
     uint16_t read_items;
     uint16_t pci_device_vendor;
 
@@ -507,7 +507,7 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pcidev)
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have vendor attribute",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         return 0xffff;
     }
     read_items = fscanf(f, "0x%hx\n", &pci_device_vendor);
@@ -515,18 +515,18 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pcidev)
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read vendor of pci device "PCI_BDF,
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         return 0xffff;
     }
 
     return pci_device_vendor;
 }
 
-static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pcidev)
+static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pci)
 {
     char *pci_device_device_path =
             GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/device",
-                      pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+                      pci->domain, pci->bus, pci->dev, pci->func);
     uint16_t read_items;
     uint16_t pci_device_device;
 
@@ -534,7 +534,7 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pcidev)
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have device attribute",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         return 0xffff;
     }
     read_items = fscanf(f, "0x%hx\n", &pci_device_device);
@@ -542,25 +542,25 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pcidev)
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read device of pci device "PCI_BDF,
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         return 0xffff;
     }
 
     return pci_device_device;
 }
 
-static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pcidev,
+static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pci,
                                unsigned long *class)
 {
     char *pci_device_class_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/class",
-                     pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+                     pci->domain, pci->bus, pci->dev, pci->func);
     int read_items, ret = 0;
 
     FILE *f = fopen(pci_device_class_path, "r");
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have class attribute",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         ret = ERROR_FAIL;
         goto out;
     }
@@ -569,7 +569,7 @@ static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pcidev,
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read class of pci device "PCI_BDF,
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         ret = ERROR_FAIL;
     }
 
@@ -589,15 +589,15 @@ bool libxl__is_igd_vga_passthru(libxl__gc *gc,
     unsigned long class;
 
     for (i = 0 ; i < d_config->num_pcidevs ; i++) {
-        libxl_device_pci *pcidev = &d_config->pcidevs[i];
-        pt_vendor = sysfs_dev_get_vendor(gc, pcidev);
-        pt_device = sysfs_dev_get_device(gc, pcidev);
+        libxl_device_pci *pci = &d_config->pcidevs[i];
+        pt_vendor = sysfs_dev_get_vendor(gc, pci);
+        pt_device = sysfs_dev_get_device(gc, pci);
 
         if (pt_vendor == 0xffff || pt_device == 0xffff ||
             pt_vendor != 0x8086)
             continue;
 
-        if (sysfs_dev_get_class(gc, pcidev, &class))
+        if (sysfs_dev_get_class(gc, pci, &class))
             continue;
         if (class == 0x030000)
             return true;
@@ -621,8 +621,8 @@ bool libxl__is_igd_vga_passthru(libxl__gc *gc,
  * already exist.
  */
 
-/* Scan through /sys/.../pciback/slots looking for pcidev's BDF */
-static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pcidev)
+/* Scan through /sys/.../pciback/slots looking for pci's BDF */
+static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pci)
 {
     FILE *f;
     int rc = 0;
@@ -635,11 +635,11 @@ static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pcidev)
         return ERROR_FAIL;
     }
 
-    while(fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func)==4) {
-        if(dom == pcidev->domain
-           && bus == pcidev->bus
-           && dev == pcidev->dev
-           && func == pcidev->func) {
+    while (fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func) == 4) {
+        if (dom == pci->domain
+            && bus == pci->bus
+            && dev == pci->dev
+            && func == pci->func) {
             rc = 1;
             goto out;
         }
@@ -649,7 +649,7 @@ out:
     return rc;
 }
 
-static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pcidev)
+static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
 {
     char * spath;
     int rc;
@@ -665,8 +665,8 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pcidev)
     }
 
     spath = GCSPRINTF(SYSFS_PCIBACK_DRIVER"/"PCI_BDF,
-                      pcidev->domain, pcidev->bus,
-                      pcidev->dev, pcidev->func);
+                      pci->domain, pci->bus,
+                      pci->dev, pci->func);
     rc = lstat(spath, &st);
 
     if( rc == 0 )
@@ -677,40 +677,40 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pcidev)
     return -1;
 }
 
-static int pciback_dev_assign(libxl__gc *gc, libxl_device_pci *pcidev)
+static int pciback_dev_assign(libxl__gc *gc, libxl_device_pci *pci)
 {
     int rc;
 
-    if ( (rc=pciback_dev_has_slot(gc, pcidev)) < 0 ) {
+    if ( (rc = pciback_dev_has_slot(gc, pci)) < 0 ) {
         LOGE(ERROR, "Error checking for pciback slot");
         return ERROR_FAIL;
     } else if (rc == 0) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/new_slot",
-                             pcidev) < 0 ) {
+                             pci) < 0 ) {
             LOGE(ERROR, "Couldn't bind device to pciback!");
             return ERROR_FAIL;
         }
     }
 
-    if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/bind", pcidev) < 0 ) {
+    if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/bind", pci) < 0 ) {
         LOGE(ERROR, "Couldn't bind device to pciback!");
         return ERROR_FAIL;
     }
     return 0;
 }
 
-static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pcidev)
+static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
 {
     /* Remove from pciback */
-    if ( sysfs_dev_unbind(gc, pcidev, NULL) < 0 ) {
+    if ( sysfs_dev_unbind(gc, pci, NULL) < 0 ) {
         LOG(ERROR, "Couldn't unbind device!");
         return ERROR_FAIL;
     }
 
     /* Remove slot if necessary */
-    if ( pciback_dev_has_slot(gc, pcidev) > 0 ) {
+    if ( pciback_dev_has_slot(gc, pci) > 0 ) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/remove_slot",
-                             pcidev) < 0 ) {
+                             pci) < 0 ) {
             LOGE(ERROR, "Couldn't remove pciback slot");
             return ERROR_FAIL;
         }
@@ -721,49 +721,49 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pcidev)
 #define PCIBACK_INFO_PATH "/libxl/pciback"
 
 static void pci_assignable_driver_path_write(libxl__gc *gc,
-                                            libxl_device_pci *pcidev,
+                                            libxl_device_pci *pci,
                                             char *driver_path)
 {
     char *path;
 
     path = GCSPRINTF(PCIBACK_INFO_PATH"/"PCI_BDF_XSPATH"/driver_path",
-                     pcidev->domain,
-                     pcidev->bus,
-                     pcidev->dev,
-                     pcidev->func);
+                     pci->domain,
+                     pci->bus,
+                     pci->dev,
+                     pci->func);
     if ( libxl__xs_printf(gc, XBT_NULL, path, "%s", driver_path) < 0 ) {
         LOGE(WARN, "Write of %s to node %s failed.", driver_path, path);
     }
 }
 
 static char * pci_assignable_driver_path_read(libxl__gc *gc,
-                                              libxl_device_pci *pcidev)
+                                              libxl_device_pci *pci)
 {
     return libxl__xs_read(gc, XBT_NULL,
                           GCSPRINTF(
                            PCIBACK_INFO_PATH "/" PCI_BDF_XSPATH "/driver_path",
-                           pcidev->domain,
-                           pcidev->bus,
-                           pcidev->dev,
-                           pcidev->func));
+                           pci->domain,
+                           pci->bus,
+                           pci->dev,
+                           pci->func));
 }
 
 static void pci_assignable_driver_path_remove(libxl__gc *gc,
-                                              libxl_device_pci *pcidev)
+                                              libxl_device_pci *pci)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
 
     /* Remove the xenstore entry */
     xs_rm(ctx->xsh, XBT_NULL,
           GCSPRINTF(PCIBACK_INFO_PATH "/" PCI_BDF_XSPATH,
-                    pcidev->domain,
-                    pcidev->bus,
-                    pcidev->dev,
-                    pcidev->func) );
+                    pci->domain,
+                    pci->bus,
+                    pci->dev,
+                    pci->func) );
 }
 
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
-                                            libxl_device_pci *pcidev,
+                                            libxl_device_pci *pci,
                                             int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -773,10 +773,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     struct stat st;
 
     /* Local copy for convenience */
-    dom = pcidev->domain;
-    bus = pcidev->bus;
-    dev = pcidev->dev;
-    func = pcidev->func;
+    dom = pci->domain;
+    bus = pci->bus;
+    dev = pci->dev;
+    func = pci->func;
 
     /* See if the device exists */
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF, dom, bus, dev, func);
@@ -786,7 +786,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
 
     /* Check to see if it's already assigned to pciback */
-    rc = pciback_dev_is_assigned(gc, pcidev);
+    rc = pciback_dev_is_assigned(gc, pci);
     if ( rc < 0 ) {
         return ERROR_FAIL;
     }
@@ -796,7 +796,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
 
     /* Check to see if there's already a driver that we need to unbind from */
-    if ( sysfs_dev_unbind(gc, pcidev, &driver_path ) ) {
+    if ( sysfs_dev_unbind(gc, pci, &driver_path ) ) {
         LOG(ERROR, "Couldn't unbind "PCI_BDF" from driver",
             dom, bus, dev, func);
         return ERROR_FAIL;
@@ -805,9 +805,9 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     /* Store driver_path for rebinding to dom0 */
     if ( rebind ) {
         if ( driver_path ) {
-            pci_assignable_driver_path_write(gc, pcidev, driver_path);
+            pci_assignable_driver_path_write(gc, pci, driver_path);
         } else if ( (driver_path =
-                     pci_assignable_driver_path_read(gc, pcidev)) != NULL ) {
+                     pci_assignable_driver_path_read(gc, pci)) != NULL ) {
             LOG(INFO, PCI_BDF" not bound to a driver, will be rebound to %s",
                 dom, bus, dev, func, driver_path);
         } else {
@@ -815,10 +815,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
                 dom, bus, dev, func);
         }
     } else {
-        pci_assignable_driver_path_remove(gc, pcidev);
+        pci_assignable_driver_path_remove(gc, pci);
     }
 
-    if ( pciback_dev_assign(gc, pcidev) ) {
+    if ( pciback_dev_assign(gc, pci) ) {
         LOG(ERROR, "Couldn't bind device to pciback!");
         return ERROR_FAIL;
     }
@@ -829,7 +829,7 @@ quarantine:
      * so always pass XEN_DOMCTL_DEV_RDM_RELAXED to avoid assignment being
      * unnecessarily denied.
      */
-    rc = xc_assign_device(ctx->xch, DOMID_IO, pcidev_encode_bdf(pcidev),
+    rc = xc_assign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci),
                           XEN_DOMCTL_DEV_RDM_RELAXED);
     if ( rc < 0 ) {
         LOG(ERROR, "failed to quarantine "PCI_BDF, dom, bus, dev, func);
@@ -840,7 +840,7 @@ quarantine:
 }
 
 static int libxl__device_pci_assignable_remove(libxl__gc *gc,
-                                               libxl_device_pci *pcidev,
+                                               libxl_device_pci *pci,
                                                int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -848,24 +848,24 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     char *driver_path;
 
     /* De-quarantine */
-    rc = xc_deassign_device(ctx->xch, DOMID_IO, pcidev_encode_bdf(pcidev));
+    rc = xc_deassign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci));
     if ( rc < 0 ) {
-        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pcidev->domain, pcidev->bus,
-            pcidev->dev, pcidev->func);
+        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pci->domain, pci->bus,
+            pci->dev, pci->func);
         return ERROR_FAIL;
     }
 
     /* Unbind from pciback */
-    if ( (rc=pciback_dev_is_assigned(gc, pcidev)) < 0 ) {
+    if ( (rc = pciback_dev_is_assigned(gc, pci)) < 0 ) {
         return ERROR_FAIL;
     } else if ( rc ) {
-        pciback_dev_unassign(gc, pcidev);
+        pciback_dev_unassign(gc, pci);
     } else {
         LOG(WARN, "Not bound to pciback");
     }
 
     /* Rebind if necessary */
-    driver_path = pci_assignable_driver_path_read(gc, pcidev);
+    driver_path = pci_assignable_driver_path_read(gc, pci);
 
     if ( driver_path ) {
         if ( rebind ) {
@@ -873,12 +873,12 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
 
             if ( sysfs_write_bdf(gc,
                                  GCSPRINTF("%s/bind", driver_path),
-                                 pcidev) < 0 ) {
+                                 pci) < 0 ) {
                 LOGE(ERROR, "Couldn't bind device to %s", driver_path);
                 return -1;
             }
 
-            pci_assignable_driver_path_remove(gc, pcidev);
+            pci_assignable_driver_path_remove(gc, pci);
         }
     } else {
         if ( rebind ) {
@@ -890,26 +890,26 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     return 0;
 }
 
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pcidev,
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci,
                                     int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_add(gc, pcidev, rebind);
+    rc = libxl__device_pci_assignable_add(gc, pci, rebind);
 
     GC_FREE;
     return rc;
 }
 
 
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pcidev,
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci,
                                        int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_remove(gc, pcidev, rebind);
+    rc = libxl__device_pci_assignable_remove(gc, pci, rebind);
 
     GC_FREE;
     return rc;
@@ -920,7 +920,7 @@ int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pcidev,
  * driver. It also initialises a bit-mask of which function numbers are present
  * on that device.
 */
-static int pci_multifunction_check(libxl__gc *gc, libxl_device_pci *pcidev, unsigned int *func_mask)
+static int pci_multifunction_check(libxl__gc *gc, libxl_device_pci *pci, unsigned int *func_mask)
 {
     struct dirent *de;
     DIR *dir;
@@ -940,11 +940,11 @@ static int pci_multifunction_check(libxl__gc *gc, libxl_device_pci *pcidev, unsi
 
         if ( sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4 )
             continue;
-        if ( pcidev->domain != dom )
+        if ( pci->domain != dom )
             continue;
-        if ( pcidev->bus != bus )
+        if ( pci->bus != bus )
             continue;
-        if ( pcidev->dev != dev )
+        if ( pci->dev != dev )
             continue;
 
         path = GCSPRINTF("%s/" PCI_BDF, SYSFS_PCIBACK_DRIVER, dom, bus, dev, func);
@@ -979,7 +979,7 @@ static int pci_ins_check(libxl__gc *gc, uint32_t domid, const char *state, void
 }
 
 static int qemu_pci_add_xenstore(libxl__gc *gc, uint32_t domid,
-                                 libxl_device_pci *pcidev)
+                                 libxl_device_pci *pci)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     int rc = 0;
@@ -991,15 +991,15 @@ static int qemu_pci_add_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/state");
     state = libxl__xs_read(gc, XBT_NULL, path);
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/parameter");
-    if (pcidev->vdevfn) {
+    if (pci->vdevfn) {
         libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF_VDEVFN","PCI_OPTIONS,
-                         pcidev->domain, pcidev->bus, pcidev->dev,
-                         pcidev->func, pcidev->vdevfn, pcidev->msitranslate,
-                         pcidev->power_mgmt);
+                         pci->domain, pci->bus, pci->dev,
+                         pci->func, pci->vdevfn, pci->msitranslate,
+                         pci->power_mgmt);
     } else {
         libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF","PCI_OPTIONS,
-                         pcidev->domain,  pcidev->bus, pcidev->dev,
-                         pcidev->func, pcidev->msitranslate, pcidev->power_mgmt);
+                         pci->domain,  pci->bus, pci->dev,
+                         pci->func, pci->msitranslate, pci->power_mgmt);
     }
 
     libxl__qemu_traditional_cmd(gc, domid, "pci-ins");
@@ -1010,7 +1010,7 @@ static int qemu_pci_add_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/state");
     if ( rc < 0 )
         LOGD(ERROR, domid, "qemu refused to add device: %s", vdevfn);
-    else if ( sscanf(vdevfn, "0x%x", &pcidev->vdevfn) != 1 ) {
+    else if ( sscanf(vdevfn, "0x%x", &pci->vdevfn) != 1 ) {
         LOGD(ERROR, domid, "wrong format for the vdevfn: '%s'", vdevfn);
         rc = -1;
     }
@@ -1054,7 +1054,7 @@ typedef struct pci_add_state {
     libxl__xswait_state xswait;
     libxl__ev_qmp qmp;
     libxl__ev_time timeout;
-    libxl_device_pci *pcidev;
+    libxl_device_pci *pci;
     int pci_domid;
 } pci_add_state;
 
@@ -1072,7 +1072,7 @@ static void pci_add_dm_done(libxl__egc *,
 
 static void do_pci_add(libxl__egc *egc,
                        libxl_domid domid,
-                       libxl_device_pci *pcidev,
+                       libxl_device_pci *pci,
                        pci_add_state *pas)
 {
     STATE_AO_GC(pas->aodev->ao);
@@ -1082,7 +1082,7 @@ static void do_pci_add(libxl__egc *egc,
     /* init pci_add_state */
     libxl__xswait_init(&pas->xswait);
     libxl__ev_qmp_init(&pas->qmp);
-    pas->pcidev = pcidev;
+    pas->pci = pci;
     pas->pci_domid = domid;
     libxl__ev_time_init(&pas->timeout);
 
@@ -1128,7 +1128,7 @@ static void pci_add_qemu_trad_watch_state_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pcidev = pas->pcidev;
+    libxl_device_pci *pci = pas->pci;
 
     rc = check_qemu_running(gc, domid, xswa, rc, state);
     if (rc == ERROR_NOT_READY)
@@ -1136,7 +1136,7 @@ static void pci_add_qemu_trad_watch_state_cb(libxl__egc *egc,
     if (rc)
         goto out;
 
-    rc = qemu_pci_add_xenstore(gc, domid, pcidev);
+    rc = qemu_pci_add_xenstore(gc, domid, pci);
 out:
     pci_add_dm_done(egc, pas, rc); /* must be last */
 }
@@ -1149,7 +1149,7 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pcidev = pas->pcidev;
+    libxl_device_pci *pci = pas->pci;
     libxl__ev_qmp *const qmp = &pas->qmp;
 
     rc = libxl__ev_time_register_rel(ao, &pas->timeout,
@@ -1160,14 +1160,14 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
     libxl__qmp_param_add_string(gc, &args, "driver",
                                 "xen-pci-passthrough");
     QMP_PARAMETERS_SPRINTF(&args, "id", PCI_PT_QDEV_ID,
-                           pcidev->bus, pcidev->dev, pcidev->func);
+                           pci->bus, pci->dev, pci->func);
     QMP_PARAMETERS_SPRINTF(&args, "hostaddr",
-                           "%04x:%02x:%02x.%01x", pcidev->domain,
-                           pcidev->bus, pcidev->dev, pcidev->func);
-    if (pcidev->vdevfn) {
+                           "%04x:%02x:%02x.%01x", pci->domain,
+                           pci->bus, pci->dev, pci->func);
+    if (pci->vdevfn) {
         QMP_PARAMETERS_SPRINTF(&args, "addr", "%x.%x",
-                               PCI_SLOT(pcidev->vdevfn),
-                               PCI_FUNC(pcidev->vdevfn));
+                               PCI_SLOT(pci->vdevfn),
+                               PCI_FUNC(pci->vdevfn));
     }
     /*
      * Version of QEMU prior to the XSA-131 fix did not support
@@ -1179,7 +1179,7 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
      * set the permissive flag if it is true. Users of older QEMU
      * have no reason to set the flag so this is ok.
      */
-    if (pcidev->permissive)
+    if (pci->permissive)
         libxl__qmp_param_add_bool(gc, &args, "permissive", true);
 
     qmp->ao = pas->aodev->ao;
@@ -1230,7 +1230,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
     int dev_slot, dev_func;
 
     /* Convenience aliases */
-    libxl_device_pci *pcidev = pas->pcidev;
+    libxl_device_pci *pci = pas->pci;
 
     if (rc) goto out;
 
@@ -1251,7 +1251,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
      */
 
     asked_id = GCSPRINTF(PCI_PT_QDEV_ID,
-                         pcidev->bus, pcidev->dev, pcidev->func);
+                         pci->bus, pci->dev, pci->func);
 
     for (i = 0; (bus = libxl__json_array_get(response, i)); i++) {
         devices = libxl__json_map_get("devices", bus, JSON_ARRAY);
@@ -1283,7 +1283,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
              }
              dev_func = libxl__json_object_get_integer(o);
 
-             pcidev->vdevfn = PCI_DEVFN(dev_slot, dev_func);
+             pci->vdevfn = PCI_DEVFN(dev_slot, dev_func);
 
              rc = 0;
              goto out;
@@ -1331,7 +1331,7 @@ static void pci_add_dm_done(libxl__egc *egc,
 
     /* Convenience aliases */
     bool starting = pas->starting;
-    libxl_device_pci *pcidev = pas->pcidev;
+    libxl_device_pci *pci = pas->pci;
     bool hvm = libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM;
 
     libxl__ev_qmp_dispose(gc, &pas->qmp);
@@ -1342,8 +1342,8 @@ static void pci_add_dm_done(libxl__egc *egc,
     if (isstubdom)
         starting = false;
 
-    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pcidev->domain,
-                           pcidev->bus, pcidev->dev, pcidev->func);
+    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->domain,
+                           pci->bus, pci->dev, pci->func);
     f = fopen(sysfs_path, "r");
     start = end = flags = size = 0;
     irq = 0;
@@ -1383,8 +1383,8 @@ static void pci_add_dm_done(libxl__egc *egc,
         }
     }
     fclose(f);
-    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pcidev->domain,
-                                pcidev->bus, pcidev->dev, pcidev->func);
+    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
+                                pci->bus, pci->dev, pci->func);
     f = fopen(sysfs_path, "r");
     if (f == NULL) {
         LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
@@ -1411,9 +1411,9 @@ static void pci_add_dm_done(libxl__egc *egc,
     fclose(f);
 
     /* Don't restrict writes to the PCI config space from this VM */
-    if (pcidev->permissive) {
+    if (pci->permissive) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/permissive",
-                             pcidev) < 0 ) {
+                             pci) < 0 ) {
             LOGD(ERROR, domainid, "Setting permissive for device");
             rc = ERROR_FAIL;
             goto out;
@@ -1422,14 +1422,14 @@ static void pci_add_dm_done(libxl__egc *egc,
 
 out_no_irq:
     if (!isstubdom) {
-        if (pcidev->rdm_policy == LIBXL_RDM_RESERVE_POLICY_STRICT) {
+        if (pci->rdm_policy == LIBXL_RDM_RESERVE_POLICY_STRICT) {
             flag &= ~XEN_DOMCTL_DEV_RDM_RELAXED;
-        } else if (pcidev->rdm_policy != LIBXL_RDM_RESERVE_POLICY_RELAXED) {
+        } else if (pci->rdm_policy != LIBXL_RDM_RESERVE_POLICY_RELAXED) {
             LOGED(ERROR, domainid, "unknown rdm check flag.");
             rc = ERROR_FAIL;
             goto out;
         }
-        r = xc_assign_device(ctx->xch, domid, pcidev_encode_bdf(pcidev), flag);
+        r = xc_assign_device(ctx->xch, domid, pci_encode_bdf(pci), flag);
         if (r < 0 && (hvm || errno != ENOSYS)) {
             LOGED(ERROR, domainid, "xc_assign_device failed");
             rc = ERROR_FAIL;
@@ -1438,7 +1438,7 @@ out_no_irq:
     }
 
     if (!starting && !libxl_get_stubdom_id(CTX, domid))
-        rc = libxl__device_pci_add_xenstore(gc, domid, pcidev, starting);
+        rc = libxl__device_pci_add_xenstore(gc, domid, pci, starting);
     else
         rc = 0;
 out:
@@ -1493,7 +1493,7 @@ int libxl__device_pci_setdefault(libxl__gc *gc, uint32_t domid,
 }
 
 int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
-                         libxl_device_pci *pcidev,
+                         libxl_device_pci *pci,
                          const libxl_asyncop_how *ao_how)
 {
     AO_CREATE(ctx, domid, ao_how);
@@ -1504,24 +1504,24 @@ int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
     aodev->action = LIBXL__DEVICE_ACTION_ADD;
     aodev->callback = device_addrm_aocomplete;
     aodev->update_json = true;
-    libxl__device_pci_add(egc, domid, pcidev, false, aodev);
+    libxl__device_pci_add(egc, domid, pci, false, aodev);
     return AO_INPROGRESS;
 }
 
-static int libxl_pcidev_assignable(libxl_ctx *ctx, libxl_device_pci *pcidev)
+static int libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
 {
-    libxl_device_pci *pcidevs;
+    libxl_device_pci *pcis;
     int num, i;
 
-    pcidevs = libxl_device_pci_assignable_list(ctx, &num);
+    pcis = libxl_device_pci_assignable_list(ctx, &num);
     for (i = 0; i < num; i++) {
-        if (pcidevs[i].domain == pcidev->domain &&
-            pcidevs[i].bus == pcidev->bus &&
-            pcidevs[i].dev == pcidev->dev &&
-            pcidevs[i].func == pcidev->func)
+        if (pcis[i].domain == pci->domain &&
+            pcis[i].bus == pci->bus &&
+            pcis[i].dev == pci->dev &&
+            pcis[i].func == pci->func)
             break;
     }
-    free(pcidevs);
+    free(pcis);
     return i != num;
 }
 
@@ -1535,7 +1535,7 @@ static void device_pci_add_done(libxl__egc *egc,
     pci_add_state *, int rc);
 
 void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
-                           libxl_device_pci *pcidev, bool starting,
+                           libxl_device_pci *pci, bool starting,
                            libxl__ao_device *aodev)
 {
     STATE_AO_GC(aodev->ao);
@@ -1545,9 +1545,9 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     int stubdomid = 0;
     pci_add_state *pas;
 
-    /* Store *pcidev to be used by callbacks */
-    aodev->device_config = pcidev;
-    aodev->device_type = &libxl__pcidev_devtype;
+    /* Store *pci to be used by callbacks */
+    aodev->device_config = pci;
+    aodev->device_type = &libxl__pci_devtype;
 
     GCNEW(pas);
     pas->aodev = aodev;
@@ -1556,29 +1556,29 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     pas->callback = device_pci_add_stubdom_done;
 
     if (libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {
-        rc = xc_test_assign_device(ctx->xch, domid, pcidev_encode_bdf(pcidev));
+        rc = xc_test_assign_device(ctx->xch, domid, pci_encode_bdf(pci));
         if (rc) {
             LOGD(ERROR, domid,
                  "PCI device %04x:%02x:%02x.%u %s?",
-                 pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func,
+                 pci->domain, pci->bus, pci->dev, pci->func,
                  errno == EOPNOTSUPP ? "cannot be assigned - no IOMMU"
                  : "already assigned to a different guest");
             goto out;
         }
     }
 
-    rc = libxl__device_pci_setdefault(gc, domid, pcidev, !starting);
+    rc = libxl__device_pci_setdefault(gc, domid, pci, !starting);
     if (rc) goto out;
 
-    if (pcidev->seize && !pciback_dev_is_assigned(gc, pcidev)) {
-        rc = libxl__device_pci_assignable_add(gc, pcidev, 1);
+    if (pci->seize && !pciback_dev_is_assigned(gc, pci)) {
+        rc = libxl__device_pci_assignable_add(gc, pci, 1);
         if ( rc )
             goto out;
     }
 
-    if (!libxl_pcidev_assignable(ctx, pcidev)) {
+    if (!libxl_pci_assignable(ctx, pci)) {
         LOGD(ERROR, domid, "PCI device %x:%x:%x.%x is not assignable",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         rc = ERROR_FAIL;
         goto out;
     }
@@ -1589,25 +1589,25 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
              "cannot determine if device is assigned, refusing to continue");
         goto out;
     }
-    if ( is_pcidev_in_array(assigned, num_assigned, pcidev->domain,
-                     pcidev->bus, pcidev->dev, pcidev->func) ) {
+    if ( is_pci_in_array(assigned, num_assigned, pci->domain,
+                         pci->bus, pci->dev, pci->func) ) {
         LOGD(ERROR, domid, "PCI device already attached to a domain");
         rc = ERROR_FAIL;
         goto out;
     }
 
-    libxl__device_pci_reset(gc, pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+    libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
     if (stubdomid != 0) {
-        libxl_device_pci *pcidev_s;
+        libxl_device_pci *pci_s;
 
-        GCNEW(pcidev_s);
-        libxl_device_pci_init(pcidev_s);
-        libxl_device_pci_copy(CTX, pcidev_s, pcidev);
+        GCNEW(pci_s);
+        libxl_device_pci_init(pci_s);
+        libxl_device_pci_copy(CTX, pci_s, pci);
         pas->callback = device_pci_add_stubdom_wait;
 
-        do_pci_add(egc, stubdomid, pcidev_s, pas); /* must be last */
+        do_pci_add(egc, stubdomid, pci_s, pas); /* must be last */
         return;
     }
 
@@ -1664,42 +1664,42 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
     /* Convenience aliases */
     libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pcidev = aodev->device_config;
+    libxl_device_pci *pci = aodev->device_config;
 
     if (rc) goto out;
 
-    orig_vdev = pcidev->vdevfn & ~7U;
+    orig_vdev = pci->vdevfn & ~7U;
 
-    if ( pcidev->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
-        if ( !(pcidev->vdevfn >> 3) ) {
+    if ( pci->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
+        if ( !(pci->vdevfn >> 3) ) {
             LOGD(ERROR, domid, "Must specify a v-slot for multi-function devices");
             rc = ERROR_INVAL;
             goto out;
         }
-        if ( pci_multifunction_check(gc, pcidev, &pfunc_mask) ) {
+        if ( pci_multifunction_check(gc, pci, &pfunc_mask) ) {
             rc = ERROR_FAIL;
             goto out;
         }
-        pcidev->vfunc_mask &= pfunc_mask;
+        pci->vfunc_mask &= pfunc_mask;
         /* so now vfunc_mask == pfunc_mask */
     }else{
-        pfunc_mask = (1 << pcidev->func);
+        pfunc_mask = (1 << pci->func);
     }
 
-    for(rc = 0, i = 7; i >= 0; --i) {
+    for (rc = 0, i = 7; i >= 0; --i) {
         if ( (1 << i) & pfunc_mask ) {
-            if ( pcidev->vfunc_mask == pfunc_mask ) {
-                pcidev->func = i;
-                pcidev->vdevfn = orig_vdev | i;
-            }else{
+            if ( pci->vfunc_mask == pfunc_mask ) {
+                pci->func = i;
+                pci->vdevfn = orig_vdev | i;
+            } else {
                 /* if not passing through multiple devices in a block make
                  * sure that virtual function number 0 is always used otherwise
                  * guest won't see the device
                  */
-                pcidev->vdevfn = orig_vdev;
+                pci->vdevfn = orig_vdev;
             }
             pas->callback = device_pci_add_done;
-            do_pci_add(egc, domid, pcidev, pas); /* must be last */
+            do_pci_add(egc, domid, pci, pas); /* must be last */
             return;
         }
     }
@@ -1715,13 +1715,13 @@ static void device_pci_add_done(libxl__egc *egc,
     EGC_GC;
     libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pcidev = aodev->device_config;
+    libxl_device_pci *pci = aodev->device_config;
 
     if (rc) {
         LOGD(ERROR, domid,
              "libxl__device_pci_add  failed for "
              "PCI device %x:%x:%x.%x (rc %d)",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func,
+             pci->domain, pci->bus, pci->dev, pci->func,
              rc);
     }
     aodev->rc = rc;
@@ -1733,16 +1733,16 @@ typedef struct {
     libxl__ao_device *outer_aodev;
     libxl_domain_config *d_config;
     libxl_domid domid;
-} add_pcidevs_state;
+} add_pcis_state;
 
-static void add_pcidevs_done(libxl__egc *, libxl__multidev *, int rc);
+static void add_pcis_done(libxl__egc *, libxl__multidev *, int rc);
 
-static void libxl__add_pcidevs(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
-                               libxl_domain_config *d_config,
-                               libxl__multidev *multidev)
+static void libxl__add_pcis(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
+                            libxl_domain_config *d_config,
+                            libxl__multidev *multidev)
 {
     AO_GC;
-    add_pcidevs_state *apds;
+    add_pcis_state *apds;
     int i;
 
     /* We need to start a new multidev in order to be able to execute
@@ -1752,7 +1752,7 @@ static void libxl__add_pcidevs(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
     apds->outer_aodev = libxl__multidev_prepare(multidev);
     apds->d_config = d_config;
     apds->domid = domid;
-    apds->multidev.callback = add_pcidevs_done;
+    apds->multidev.callback = add_pcis_done;
     libxl__multidev_begin(ao, &apds->multidev);
 
     for (i = 0; i < d_config->num_pcidevs; i++) {
@@ -1764,11 +1764,11 @@ static void libxl__add_pcidevs(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
     libxl__multidev_prepared(egc, &apds->multidev, 0);
 }
 
-static void add_pcidevs_done(libxl__egc *egc, libxl__multidev *multidev,
+static void add_pcis_done(libxl__egc *egc, libxl__multidev *multidev,
                              int rc)
 {
     EGC_GC;
-    add_pcidevs_state *apds = CONTAINER_OF(multidev, *apds, multidev);
+    add_pcis_state *apds = CONTAINER_OF(multidev, *apds, multidev);
 
     /* Convenience aliases */
     libxl_domain_config *d_config = apds->d_config;
@@ -1779,7 +1779,7 @@ static void add_pcidevs_done(libxl__egc *egc, libxl__multidev *multidev,
 
     if (d_config->num_pcidevs > 0 && !libxl_get_stubdom_id(CTX, domid)) {
         rc = libxl__create_pci_backend(gc, domid, d_config->pcidevs,
-            d_config->num_pcidevs);
+                                       d_config->num_pcidevs);
         if (rc < 0) {
             LOGD(ERROR, domid, "libxl_create_pci_backend failed: %d", rc);
             goto out;
@@ -1792,7 +1792,7 @@ out:
 }
 
 static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
-                                    libxl_device_pci *pcidev, int force)
+                                    libxl_device_pci *pci, int force)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     char *state;
@@ -1804,12 +1804,12 @@ static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/state");
     state = libxl__xs_read(gc, XBT_NULL, path);
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/parameter");
-    libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF, pcidev->domain,
-                     pcidev->bus, pcidev->dev, pcidev->func);
+    libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF, pci->domain,
+                     pci->bus, pci->dev, pci->func);
 
     /* Remove all functions at once atomically by only signalling
      * device-model for function 0 */
-    if ( !force && (pcidev->vdevfn & 0x7) == 0 ) {
+    if ( !force && (pci->vdevfn & 0x7) == 0 ) {
         libxl__qemu_traditional_cmd(gc, domid, "pci-rem");
         if (libxl__wait_for_device_model_deprecated(gc, domid, "pci-removed",
                                          NULL, NULL, NULL) < 0) {
@@ -1830,7 +1830,7 @@ static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
 typedef struct pci_remove_state {
     libxl__ao_device *aodev;
     libxl_domid domid;
-    libxl_device_pci *pcidev;
+    libxl_device_pci *pci;
     bool force;
     bool hvm;
     unsigned int orig_vdev;
@@ -1844,7 +1844,7 @@ typedef struct pci_remove_state {
 } pci_remove_state;
 
 static void libxl__device_pci_remove_common(libxl__egc *egc,
-    uint32_t domid, libxl_device_pci *pcidev, bool force,
+    uint32_t domid, libxl_device_pci *pci, bool force,
     libxl__ao_device *aodev);
 static void device_pci_remove_common_next(libxl__egc *egc,
     pci_remove_state *prs, int rc);
@@ -1869,7 +1869,7 @@ static void pci_remove_done(libxl__egc *egc,
     pci_remove_state *prs, int rc);
 
 static void do_pci_remove(libxl__egc *egc, uint32_t domid,
-                          libxl_device_pci *pcidev, int force,
+                          libxl_device_pci *pci, int force,
                           pci_remove_state *prs)
 {
     STATE_AO_GC(prs->aodev->ao);
@@ -1887,8 +1887,8 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
     libxl__ptr_add(gc, assigned);
 
     rc = ERROR_INVAL;
-    if ( !is_pcidev_in_array(assigned, num, pcidev->domain,
-                      pcidev->bus, pcidev->dev, pcidev->func) ) {
+    if ( !is_pci_in_array(assigned, num, pci->domain,
+                          pci->bus, pci->dev, pci->func) ) {
         LOGD(ERROR, domainid, "PCI device not attached to this domain");
         goto out_fail;
     }
@@ -1917,8 +1917,8 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
     } else {
         assert(type == LIBXL_DOMAIN_TYPE_PV);
 
-        char *sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pcidev->domain,
-                                     pcidev->bus, pcidev->dev, pcidev->func);
+        char *sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->domain,
+                                     pci->bus, pci->dev, pci->func);
         FILE *f = fopen(sysfs_path, "r");
         unsigned int start = 0, end = 0, flags = 0, size = 0;
         int irq = 0;
@@ -1953,8 +1953,8 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
         }
         fclose(f);
 skip1:
-        sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pcidev->domain,
-                               pcidev->bus, pcidev->dev, pcidev->func);
+        sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
+                               pci->bus, pci->dev, pci->func);
         f = fopen(sysfs_path, "r");
         if (f == NULL) {
             LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
@@ -1988,7 +1988,7 @@ static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = prs->domid;
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
 
     rc = check_qemu_running(gc, domid, xswa, rc, state);
     if (rc == ERROR_NOT_READY)
@@ -1996,7 +1996,7 @@ static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
     if (rc)
         goto out;
 
-    rc = qemu_pci_remove_xenstore(gc, domid, pcidev, prs->force);
+    rc = qemu_pci_remove_xenstore(gc, domid, pci, prs->force);
 
 out:
     pci_remove_detatched(egc, prs, rc);
@@ -2010,7 +2010,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     int rc;
 
     /* Convenience aliases */
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
 
     rc = libxl__ev_time_register_rel(ao, &prs->timeout,
                                      pci_remove_timeout,
@@ -2018,7 +2018,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     if (rc) goto out;
 
     QMP_PARAMETERS_SPRINTF(&args, "id", PCI_PT_QDEV_ID,
-                           pcidev->bus, pcidev->dev, pcidev->func);
+                           pci->bus, pci->dev, pci->func);
     prs->qmp.callback = pci_remove_qmp_device_del_cb;
     rc = libxl__ev_qmp_send(egc, &prs->qmp, "device_del", args);
     if (rc) goto out;
@@ -2080,14 +2080,14 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl__ao *const ao = prs->aodev->ao;
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
 
     if (rc) goto out;
 
     libxl__ev_qmp_dispose(gc, qmp);
 
     asked_id = GCSPRINTF(PCI_PT_QDEV_ID,
-                         pcidev->bus, pcidev->dev, pcidev->func);
+                         pci->bus, pci->dev, pci->func);
 
     /* query-pci response:
      * [{ 'devices': [ 'qdev_id': 'str', ...  ], ... }]
@@ -2135,10 +2135,10 @@ static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
     pci_remove_state *prs = CONTAINER_OF(ev, *prs, timeout);
 
     /* Convenience aliases */
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
 
     LOGD(WARN, prs->domid, "timed out waiting for DM to remove "
-         PCI_PT_QDEV_ID, pcidev->bus, pcidev->dev, pcidev->func);
+         PCI_PT_QDEV_ID, pci->bus, pci->dev, pci->func);
 
     /* If we timed out, we might still want to keep destroying the device
      * (when force==true), so let the next function decide what to do on
@@ -2156,7 +2156,7 @@ static void pci_remove_detatched(libxl__egc *egc,
     bool isstubdom;
 
     /* Convenience aliases */
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
     libxl_domid domid = prs->domid;
 
     /* Cleaning QMP states ASAP */
@@ -2170,30 +2170,30 @@ static void pci_remove_detatched(libxl__egc *egc,
     isstubdom = libxl_is_stubdom(CTX, domid, &domainid);
 
     /* don't do multiple resets while some functions are still passed through */
-    if ( (pcidev->vdevfn & 0x7) == 0 ) {
-        libxl__device_pci_reset(gc, pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+    if ((pci->vdevfn & 0x7) == 0) {
+        libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
     }
 
     if (!isstubdom) {
-        rc = xc_deassign_device(CTX->xch, domid, pcidev_encode_bdf(pcidev));
+        rc = xc_deassign_device(CTX->xch, domid, pci_encode_bdf(pci));
         if (rc < 0 && (prs->hvm || errno != ENOSYS))
             LOGED(ERROR, domainid, "xc_deassign_device failed");
     }
 
     stubdomid = libxl_get_stubdom_id(CTX, domid);
     if (stubdomid != 0) {
-        libxl_device_pci *pcidev_s;
+        libxl_device_pci *pci_s;
         libxl__ao_device *const stubdom_aodev = &prs->stubdom_aodev;
 
-        GCNEW(pcidev_s);
-        libxl_device_pci_init(pcidev_s);
-        libxl_device_pci_copy(CTX, pcidev_s, pcidev);
+        GCNEW(pci_s);
+        libxl_device_pci_init(pci_s);
+        libxl_device_pci_copy(CTX, pci_s, pci);
 
         libxl__prepare_ao_device(ao, stubdom_aodev);
         stubdom_aodev->action = LIBXL__DEVICE_ACTION_REMOVE;
         stubdom_aodev->callback = pci_remove_stubdom_done;
         stubdom_aodev->update_json = prs->aodev->update_json;
-        libxl__device_pci_remove_common(egc, stubdomid, pcidev_s,
+        libxl__device_pci_remove_common(egc, stubdomid, pci_s,
                                         prs->force, stubdom_aodev);
         return;
     }
@@ -2219,14 +2219,14 @@ static void pci_remove_done(libxl__egc *egc,
 
     if (rc) goto out;
 
-    libxl__device_pci_remove_xenstore(gc, prs->domid, prs->pcidev);
+    libxl__device_pci_remove_xenstore(gc, prs->domid, prs->pci);
 out:
     device_pci_remove_common_next(egc, prs, rc);
 }
 
 static void libxl__device_pci_remove_common(libxl__egc *egc,
                                             uint32_t domid,
-                                            libxl_device_pci *pcidev,
+                                            libxl_device_pci *pci,
                                             bool force,
                                             libxl__ao_device *aodev)
 {
@@ -2237,7 +2237,7 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
     GCNEW(prs);
     prs->aodev = aodev;
     prs->domid = domid;
-    prs->pcidev = pcidev;
+    prs->pci = pci;
     prs->force = force;
     libxl__xswait_init(&prs->xswait);
     libxl__ev_qmp_init(&prs->qmp);
@@ -2247,16 +2247,16 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
     libxl__ev_time_init(&prs->timeout);
     libxl__ev_time_init(&prs->retry_timer);
 
-    prs->orig_vdev = pcidev->vdevfn & ~7U;
+    prs->orig_vdev = pci->vdevfn & ~7U;
 
-    if ( pcidev->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
-        if ( pci_multifunction_check(gc, pcidev, &prs->pfunc_mask) ) {
+    if ( pci->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
+        if ( pci_multifunction_check(gc, pci, &prs->pfunc_mask) ) {
             rc = ERROR_FAIL;
             goto out;
         }
-        pcidev->vfunc_mask &= prs->pfunc_mask;
-    }else{
-        prs->pfunc_mask = (1 << pcidev->func);
+        pci->vfunc_mask &= prs->pfunc_mask;
+    } else {
+        prs->pfunc_mask = (1 << pci->func);
     }
 
     rc = 0;
@@ -2273,7 +2273,7 @@ static void device_pci_remove_common_next(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = prs->domid;
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
     libxl__ao_device *const aodev = prs->aodev;
     const unsigned int pfunc_mask = prs->pfunc_mask;
     const unsigned int orig_vdev = prs->orig_vdev;
@@ -2284,13 +2284,13 @@ static void device_pci_remove_common_next(libxl__egc *egc,
         const int i = prs->next_func;
         prs->next_func--;
         if ( (1 << i) & pfunc_mask ) {
-            if ( pcidev->vfunc_mask == pfunc_mask ) {
-                pcidev->func = i;
-                pcidev->vdevfn = orig_vdev | i;
-            }else{
-                pcidev->vdevfn = orig_vdev;
+            if ( pci->vfunc_mask == pfunc_mask ) {
+                pci->func = i;
+                pci->vdevfn = orig_vdev | i;
+            } else {
+                pci->vdevfn = orig_vdev;
             }
-            do_pci_remove(egc, domid, pcidev, prs->force, prs);
+            do_pci_remove(egc, domid, pci, prs->force, prs);
             return;
         }
     }
@@ -2306,7 +2306,7 @@ out:
 }
 
 int libxl_device_pci_remove(libxl_ctx *ctx, uint32_t domid,
-                            libxl_device_pci *pcidev,
+                            libxl_device_pci *pci,
                             const libxl_asyncop_how *ao_how)
 
 {
@@ -2318,12 +2318,12 @@ int libxl_device_pci_remove(libxl_ctx *ctx, uint32_t domid,
     aodev->action = LIBXL__DEVICE_ACTION_REMOVE;
     aodev->callback = device_addrm_aocomplete;
     aodev->update_json = true;
-    libxl__device_pci_remove_common(egc, domid, pcidev, false, aodev);
+    libxl__device_pci_remove_common(egc, domid, pci, false, aodev);
     return AO_INPROGRESS;
 }
 
 int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
-                             libxl_device_pci *pcidev,
+                             libxl_device_pci *pci,
                              const libxl_asyncop_how *ao_how)
 {
     AO_CREATE(ctx, domid, ao_how);
@@ -2334,7 +2334,7 @@ int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
     aodev->action = LIBXL__DEVICE_ACTION_REMOVE;
     aodev->callback = device_addrm_aocomplete;
     aodev->update_json = true;
-    libxl__device_pci_remove_common(egc, domid, pcidev, true, aodev);
+    libxl__device_pci_remove_common(egc, domid, pci, true, aodev);
     return AO_INPROGRESS;
 }
 
@@ -2353,7 +2353,7 @@ static int libxl__device_pci_from_xs_be(libxl__gc *gc,
     if (s)
         vdevfn = strtol(s, (char **) NULL, 16);
 
-    pcidev_struct_fill(pci, domain, bus, dev, func, vdevfn);
+    pci_struct_fill(pci, domain, bus, dev, func, vdevfn);
 
     s = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/opts-%d", be_path, nr));
     if (s) {
@@ -2398,7 +2398,7 @@ libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid, int *num
     GC_INIT(ctx);
     char *be_path;
     unsigned int n, i;
-    libxl_device_pci *pcidevs = NULL;
+    libxl_device_pci *pcis = NULL;
 
     *num = 0;
 
@@ -2407,28 +2407,28 @@ libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid, int *num
     if (libxl__device_pci_get_num(gc, be_path, &n))
         goto out;
 
-    pcidevs = calloc(n, sizeof(libxl_device_pci));
+    pcis = calloc(n, sizeof(libxl_device_pci));
 
     for (i = 0; i < n; i++)
-        libxl__device_pci_from_xs_be(gc, be_path, i, pcidevs + i);
+        libxl__device_pci_from_xs_be(gc, be_path, i, pcis + i);
 
     *num = n;
 out:
     GC_FREE;
-    return pcidevs;
+    return pcis;
 }
 
 void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
                                    libxl__multidev *multidev)
 {
     STATE_AO_GC(multidev->ao);
-    libxl_device_pci *pcidevs;
+    libxl_device_pci *pcis;
     int num, i;
 
-    pcidevs = libxl_device_pci_list(CTX, domid, &num);
-    if ( pcidevs == NULL )
+    pcis = libxl_device_pci_list(CTX, domid, &num);
+    if ( pcis == NULL )
         return;
-    libxl__ptr_add(gc, pcidevs);
+    libxl__ptr_add(gc, pcis);
 
     for (i = 0; i < num; i++) {
         /* Force remove on shutdown since, on HVM, qemu will not always
@@ -2436,7 +2436,7 @@ void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
          * devices by the time we even get here!
          */
         libxl__ao_device *aodev = libxl__multidev_prepare(multidev);
-        libxl__device_pci_remove_common(egc, domid, pcidevs + i, true,
+        libxl__device_pci_remove_common(egc, domid, pcis + i, true,
                                         aodev);
     }
 }
@@ -2452,10 +2452,10 @@ int libxl__grant_vga_iomem_permission(libxl__gc *gc, const uint32_t domid,
     for (i = 0 ; i < d_config->num_pcidevs ; i++) {
         uint64_t vga_iomem_start = 0xa0000 >> XC_PAGE_SHIFT;
         uint32_t stubdom_domid;
-        libxl_device_pci *pcidev = &d_config->pcidevs[i];
+        libxl_device_pci *pci = &d_config->pcidevs[i];
         unsigned long pci_device_class;
 
-        if (sysfs_dev_get_class(gc, pcidev, &pci_device_class))
+        if (sysfs_dev_get_class(gc, pci, &pci_device_class))
             continue;
         if (pci_device_class != 0x030000) /* VGA class */
             continue;
@@ -2494,7 +2494,7 @@ static int libxl_device_pci_compare(const libxl_device_pci *d1,
 
 #define libxl__device_pci_update_devid NULL
 
-DEFINE_DEVICE_TYPE_STRUCT_X(pcidev, pci, PCI,
+DEFINE_DEVICE_TYPE_STRUCT(pci, PCI, pcidevs,
     .get_num = libxl__device_pci_get_num,
     .from_xenstore = libxl__device_pci_from_xs_be,
 );
diff --git a/tools/libs/light/libxl_pvcalls.c b/tools/libs/light/libxl_pvcalls.c
index 870318e71618..1fbedf651c2c 100644
--- a/tools/libs/light/libxl_pvcalls.c
+++ b/tools/libs/light/libxl_pvcalls.c
@@ -34,4 +34,4 @@ static LIBXL_DEFINE_DEVICE_FROM_TYPE(pvcallsif)
 
 LIBXL_DEFINE_DEVICE_REMOVE(pvcallsif)
 
-DEFINE_DEVICE_TYPE_STRUCT(pvcallsif, PVCALLS);
+DEFINE_DEVICE_TYPE_STRUCT(pvcallsif, PVCALLS, pvcallsifs);
diff --git a/tools/libs/light/libxl_usb.c b/tools/libs/light/libxl_usb.c
index 171bb044394e..c5ae59681c91 100644
--- a/tools/libs/light/libxl_usb.c
+++ b/tools/libs/light/libxl_usb.c
@@ -2139,7 +2139,7 @@ void libxl_device_usbdev_list_free(libxl_device_usbdev *list, int nr)
 
 LIBXL_DEFINE_DEVID_TO_DEVICE(usbctrl)
 LIBXL_DEFINE_DEVICE_LIST(usbctrl)
-DEFINE_DEVICE_TYPE_STRUCT(usbctrl, VUSB,
+DEFINE_DEVICE_TYPE_STRUCT(usbctrl, VUSB, usbctrls,
     .from_xenstore = (device_from_xenstore_fn_t)libxl__usbctrl_from_xenstore,
     .dm_needed = libxl_device_usbctrl_dm_needed
 );
@@ -2147,7 +2147,7 @@ DEFINE_DEVICE_TYPE_STRUCT(usbctrl, VUSB,
 #define libxl__device_from_usbdev NULL
 #define libxl__device_usbdev_update_devid NULL
 
-DEFINE_DEVICE_TYPE_STRUCT(usbdev, VUSB);
+DEFINE_DEVICE_TYPE_STRUCT(usbdev, VUSB, usbdevs);
 
 /*
  * Local variables:
diff --git a/tools/libs/light/libxl_vdispl.c b/tools/libs/light/libxl_vdispl.c
index 8ddc8940e9ff..60427c76c2d1 100644
--- a/tools/libs/light/libxl_vdispl.c
+++ b/tools/libs/light/libxl_vdispl.c
@@ -206,7 +206,7 @@ LIBXL_DEFINE_DEVICE_ADD(vdispl)
 LIBXL_DEFINE_DEVICE_REMOVE(vdispl)
 LIBXL_DEFINE_DEVICE_LIST(vdispl)
 
-DEFINE_DEVICE_TYPE_STRUCT(vdispl, VDISPL,
+DEFINE_DEVICE_TYPE_STRUCT(vdispl, VDISPL, vdispls,
     .update_config = (device_update_config_fn_t)libxl__update_config_vdispl,
     .from_xenstore = (device_from_xenstore_fn_t)libxl__vdispl_from_xenstore,
     .set_xenstore_config = (device_set_xenstore_config_fn_t)
diff --git a/tools/libs/light/libxl_vkb.c b/tools/libs/light/libxl_vkb.c
index 4c44a813c11a..bb88059f93c3 100644
--- a/tools/libs/light/libxl_vkb.c
+++ b/tools/libs/light/libxl_vkb.c
@@ -336,7 +336,7 @@ static LIBXL_DEFINE_UPDATE_DEVID(vkb)
 LIBXL_DEFINE_DEVICE_LIST(vkb)
 LIBXL_DEFINE_DEVICE_REMOVE(vkb)
 
-DEFINE_DEVICE_TYPE_STRUCT(vkb, VKBD,
+DEFINE_DEVICE_TYPE_STRUCT(vkb, VKBD, vkbs,
     .skip_attach = 1,
     .dm_needed = libxl__device_vkb_dm_needed,
     .set_xenstore_config = (device_set_xenstore_config_fn_t)
diff --git a/tools/libs/light/libxl_vsnd.c b/tools/libs/light/libxl_vsnd.c
index 0bc5f6dbb19f..bb7942bbc991 100644
--- a/tools/libs/light/libxl_vsnd.c
+++ b/tools/libs/light/libxl_vsnd.c
@@ -670,7 +670,7 @@ LIBXL_DEFINE_DEVICE_ADD(vsnd)
 LIBXL_DEFINE_DEVICE_REMOVE(vsnd)
 LIBXL_DEFINE_DEVICE_LIST(vsnd)
 
-DEFINE_DEVICE_TYPE_STRUCT(vsnd, VSND,
+DEFINE_DEVICE_TYPE_STRUCT(vsnd, VSND, vsnds,
     .update_config = (device_update_config_fn_t) libxl__update_config_vsnd,
     .from_xenstore = (device_from_xenstore_fn_t) libxl__vsnd_from_xenstore,
     .set_xenstore_config = (device_set_xenstore_config_fn_t)
diff --git a/tools/libs/light/libxl_vtpm.c b/tools/libs/light/libxl_vtpm.c
index dd00b267bbf0..0148c572d444 100644
--- a/tools/libs/light/libxl_vtpm.c
+++ b/tools/libs/light/libxl_vtpm.c
@@ -231,7 +231,7 @@ LIBXL_DEFINE_DEVICE_ADD(vtpm)
 LIBXL_DEFINE_DEVICE_REMOVE(vtpm)
 LIBXL_DEFINE_DEVICE_LIST(vtpm)
 
-DEFINE_DEVICE_TYPE_STRUCT(vtpm, VTPM,
+DEFINE_DEVICE_TYPE_STRUCT(vtpm, VTPM, vtpms,
     .update_config = libxl_device_vtpm_update_config,
     .from_xenstore = (device_from_xenstore_fn_t)libxl__vtpm_from_xenstore,
     .set_xenstore_config = (device_set_xenstore_config_fn_t)
diff --git a/tools/libs/util/libxlu_pci.c b/tools/libs/util/libxlu_pci.c
index 12fc0b3a7fdc..1d38fffce357 100644
--- a/tools/libs/util/libxlu_pci.c
+++ b/tools/libs/util/libxlu_pci.c
@@ -23,15 +23,15 @@ static int hex_convert(const char *str, unsigned int *val, unsigned int mask)
     return 0;
 }
 
-static int pcidev_struct_fill(libxl_device_pci *pcidev, unsigned int domain,
-                               unsigned int bus, unsigned int dev,
-                               unsigned int func, unsigned int vdevfn)
+static int pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
+                           unsigned int bus, unsigned int dev,
+                           unsigned int func, unsigned int vdevfn)
 {
-    pcidev->domain = domain;
-    pcidev->bus = bus;
-    pcidev->dev = dev;
-    pcidev->func = func;
-    pcidev->vdevfn = vdevfn;
+    pci->domain = domain;
+    pci->bus = bus;
+    pci->dev = dev;
+    pci->func = func;
+    pci->vdevfn = vdevfn;
     return 0;
 }
 
@@ -47,7 +47,7 @@ static int pcidev_struct_fill(libxl_device_pci *pcidev, unsigned int domain,
 #define STATE_RDM_STRATEGY      10
 #define STATE_RESERVE_POLICY    11
 #define INVALID         0xffffffff
-int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str)
+int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pci, const char *str)
 {
     unsigned state = STATE_DOMAIN;
     unsigned dom = INVALID, bus = INVALID, dev = INVALID, func = INVALID, vslot = 0;
@@ -110,11 +110,11 @@ int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str
                 }
                 *ptr = '\0';
                 if ( !strcmp(tok, "*") ) {
-                    pcidev->vfunc_mask = LIBXL_PCI_FUNC_ALL;
+                    pci->vfunc_mask = LIBXL_PCI_FUNC_ALL;
                 }else{
                     if ( hex_convert(tok, &func, 0x7) )
                         goto parse_error;
-                    pcidev->vfunc_mask = (1 << 0);
+                    pci->vfunc_mask = (1 << 0);
                 }
                 tok = ptr + 1;
             }
@@ -141,18 +141,18 @@ int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str
                 state = (*ptr == ',') ? STATE_OPTIONS_K : STATE_TERMINAL;
                 *ptr = '\0';
                 if ( !strcmp(optkey, "msitranslate") ) {
-                    pcidev->msitranslate = atoi(tok);
+                    pci->msitranslate = atoi(tok);
                 }else if ( !strcmp(optkey, "power_mgmt") ) {
-                    pcidev->power_mgmt = atoi(tok);
+                    pci->power_mgmt = atoi(tok);
                 }else if ( !strcmp(optkey, "permissive") ) {
-                    pcidev->permissive = atoi(tok);
+                    pci->permissive = atoi(tok);
                 }else if ( !strcmp(optkey, "seize") ) {
-                    pcidev->seize = atoi(tok);
+                    pci->seize = atoi(tok);
                 } else if (!strcmp(optkey, "rdm_policy")) {
                     if (!strcmp(tok, "strict")) {
-                        pcidev->rdm_policy = LIBXL_RDM_RESERVE_POLICY_STRICT;
+                        pci->rdm_policy = LIBXL_RDM_RESERVE_POLICY_STRICT;
                     } else if (!strcmp(tok, "relaxed")) {
-                        pcidev->rdm_policy = LIBXL_RDM_RESERVE_POLICY_RELAXED;
+                        pci->rdm_policy = LIBXL_RDM_RESERVE_POLICY_RELAXED;
                     } else {
                         XLU__PCI_ERR(cfg, "%s is not an valid PCI RDM property"
                                           " policy: 'strict' or 'relaxed'.",
@@ -175,7 +175,7 @@ int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str
     assert(dom != INVALID && bus != INVALID && dev != INVALID && func != INVALID);
 
     /* Just a pretty way to fill in the values */
-    pcidev_struct_fill(pcidev, dom, bus, dev, func, vslot << 3);
+    pci_struct_fill(pci, dom, bus, dev, func, vslot << 3);
 
     free(buf2);
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 19:30:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 19:30:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47690.84455 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmihG-0000wY-Mp; Tue, 08 Dec 2020 19:30:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47690.84455; Tue, 08 Dec 2020 19:30:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmihG-0000vx-90; Tue, 08 Dec 2020 19:30:50 +0000
Received: by outflank-mailman (input) for mailman id 47690;
 Tue, 08 Dec 2020 19:30:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kmihD-0000me-Rc
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 19:30:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihD-0007Lr-3J; Tue, 08 Dec 2020 19:30:47 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=desktop.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihC-0001p0-S0; Tue, 08 Dec 2020 19:30:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=PWEJza/r2oRNQ2L9vbvH4x3sSK+2PBgCX1brH2xdXO4=; b=HHKbrMRjImxkQgQAxgoYtuuAkK
	GipesqIQm+HUBkbFZ8A2LveyGLCka5/dk0P4VFfDelXj/oY5rSbcNdQaZ6LxehkQYxCk7yl3WY76/
	CEfiVCKdmDPagpOjOIz8lfJQ8aWhXhXk/f+HaTB65qP1FbGWWrB8tacX7ULuBUtDatOo=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Wei Liu <wl@xen.org>,
	Ian Jackson <iwj@xenproject.org>
Subject: [PATCH v6 07/25] libxl: remove extraneous arguments to do_pci_remove() in libxl_pci.c
Date: Tue,  8 Dec 2020 19:30:15 +0000
Message-Id: <20201208193033.11306-8-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201208193033.11306-1-paul@xen.org>
References: <20201208193033.11306-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Both 'domid' and 'pci' are available in 'pci_remove_state' so there is no
need to also pass them as separate arguments.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Acked-by: Wei Liu <wl@xen.org>
---
Cc: Ian Jackson <iwj@xenproject.org>
---
 tools/libs/light/libxl_pci.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index b6d3bd29b718..bcc4d2ef9686 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1871,14 +1871,14 @@ static void pci_remove_stubdom_done(libxl__egc *egc,
 static void pci_remove_done(libxl__egc *egc,
     pci_remove_state *prs, int rc);
 
-static void do_pci_remove(libxl__egc *egc, uint32_t domid,
-                          libxl_device_pci *pci, int force,
-                          pci_remove_state *prs)
+static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
 {
     STATE_AO_GC(prs->aodev->ao);
     libxl_ctx *ctx = libxl__gc_owner(gc);
     libxl_device_pci *assigned;
+    uint32_t domid = prs->domid;
     libxl_domain_type type = libxl__domain_type(gc, domid);
+    libxl_device_pci *pci = prs->pci;
     int rc, num;
     uint32_t domainid = domid;
 
@@ -2275,7 +2275,6 @@ static void device_pci_remove_common_next(libxl__egc *egc,
     EGC_GC;
 
     /* Convenience aliases */
-    libxl_domid domid = prs->domid;
     libxl_device_pci *const pci = prs->pci;
     libxl__ao_device *const aodev = prs->aodev;
     const unsigned int pfunc_mask = prs->pfunc_mask;
@@ -2293,7 +2292,7 @@ static void device_pci_remove_common_next(libxl__egc *egc,
             } else {
                 pci->vdevfn = orig_vdev;
             }
-            do_pci_remove(egc, domid, pci, prs->force, prs);
+            do_pci_remove(egc, prs);
             return;
         }
     }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 19:30:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 19:30:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47683.84379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmih4-0000dY-VI; Tue, 08 Dec 2020 19:30:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47683.84379; Tue, 08 Dec 2020 19:30:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmih4-0000dR-SD; Tue, 08 Dec 2020 19:30:38 +0000
Received: by outflank-mailman (input) for mailman id 47683;
 Tue, 08 Dec 2020 19:30:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kmih2-0000dM-OH
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 19:30:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmih2-0007L3-Bz; Tue, 08 Dec 2020 19:30:36 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=desktop.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmih2-0001p0-3M; Tue, 08 Dec 2020 19:30:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:Message-Id:Date:
	Subject:Cc:To:From; bh=wPo+BLoCp0K6ZlG+7eNeHGnXEd8/37lHgn7nyvvMsSo=; b=fKUBug
	AzBY4OkZXK8KB75RXOQaTxJp3noFFYIXqE9qOY5fOi9QgTo509xFvI+/9/yF/FwtFyXZ2MT4lsrCF
	oYulmNuFht4qDrDpZb97d9/j6yyECWePTxXE8ZOFb+2azLTLnSvoXfkNRcQAwU9cBNAZE3pLPxfOL
	djHyTDfOWQk=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>
Subject: [PATCH v6 00/25] xl / libxl: named PCI pass-through devices
Date: Tue,  8 Dec 2020 19:30:08 +0000
Message-Id: <20201208193033.11306-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Paul Durrant (25):
  libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
  xl: s/pcidev/pci where possible
  libxl: make libxl__device_list() work correctly for
    LIBXL__DEVICE_KIND_PCI...
  libxl: Make sure devices added by pci-attach are reflected in the
    config
  libxl: add/recover 'rdm_policy' to/from PCI backend in xenstore
  libxl: s/detatched/detached in libxl_pci.c
  libxl: remove extraneous arguments to do_pci_remove() in libxl_pci.c
  libxl: stop using aodev->device_config in libxl__device_pci_add()...
  libxl: generalise 'driver_path' xenstore access functions in
    libxl_pci.c
  libxl: remove unnecessary check from libxl__device_pci_add()
  libxl: remove get_all_assigned_devices() from libxl_pci.c
  libxl: make sure callers of libxl_device_pci_list() free the list
    after use
  libxl: add libxl_device_pci_assignable_list_free()...
  libxl: use COMPARE_PCI() macro is_pci_in_array()...
  docs/man: extract documentation of PCI_SPEC_STRING from the xl.cfg
    manpage...
  docs/man: improve documentation of PCI_SPEC_STRING...
  docs/man: fix xl(1) documentation for 'pci' operations
  libxl: introduce 'libxl_pci_bdf' in the idl...
  libxlu: introduce xlu_pci_parse_spec_string()
  docs/man: modify xl(1) in preparation for naming of assignable devices
  libxl: convert internal functions in libxl_pci.c...
  libxl: introduce libxl_pci_bdf_assignable_add/remove/list/list_free(),
    ...
  xl: support naming of assignable devices
  docs/man: modify xl-pci-configuration(5) to add 'name' field to
    PCI_SPEC_STRING
  libxl / libxlu: support 'xl pci-attach/detach' by name

 docs/man/xl-pci-configuration.5.pod  |  218 ++++++
 docs/man/xl.1.pod.in                 |   39 +-
 docs/man/xl.cfg.5.pod.in             |   68 +-
 tools/golang/xenlight/helpers.gen.go |   77 +-
 tools/golang/xenlight/types.gen.go   |    8 +-
 tools/include/libxl.h                |   68 +-
 tools/include/libxlutil.h            |    8 +-
 tools/libs/light/libxl_9pfs.c        |    2 +-
 tools/libs/light/libxl_console.c     |    2 +-
 tools/libs/light/libxl_create.c      |    4 +-
 tools/libs/light/libxl_device.c      |   66 +-
 tools/libs/light/libxl_disk.c        |    2 +-
 tools/libs/light/libxl_dm.c          |    8 +-
 tools/libs/light/libxl_internal.h    |   39 +-
 tools/libs/light/libxl_nic.c         |    2 +-
 tools/libs/light/libxl_pci.c         | 1079 ++++++++++++++------------
 tools/libs/light/libxl_pvcalls.c     |    2 +-
 tools/libs/light/libxl_types.idl     |   17 +-
 tools/libs/light/libxl_usb.c         |    4 +-
 tools/libs/light/libxl_vdispl.c      |    2 +-
 tools/libs/light/libxl_vkb.c         |    2 +-
 tools/libs/light/libxl_vsnd.c        |    2 +-
 tools/libs/light/libxl_vtpm.c        |    2 +-
 tools/libs/util/libxlu_pci.c         |  359 +++++----
 tools/ocaml/libs/xl/xenlight_stubs.c |    3 +-
 tools/xl/xl_cmdtable.c               |   16 +-
 tools/xl/xl_parse.c                  |   24 +-
 tools/xl/xl_pci.c                    |  163 ++--
 tools/xl/xl_sxp.c                    |    4 +-
 29 files changed, 1373 insertions(+), 917 deletions(-)
 create mode 100644 docs/man/xl-pci-configuration.5.pod

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 19:43:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 19:43:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47752.84500 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmitX-0002tr-U7; Tue, 08 Dec 2020 19:43:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47752.84500; Tue, 08 Dec 2020 19:43:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmitX-0002tk-R7; Tue, 08 Dec 2020 19:43:31 +0000
Received: by outflank-mailman (input) for mailman id 47752;
 Tue, 08 Dec 2020 19:43:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pB08=FM=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kmitW-0002tf-Ns
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 19:43:30 +0000
Received: from mail-wr1-x436.google.com (unknown [2a00:1450:4864:20::436])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 05900a23-a7e0-424e-81aa-5cfb51db949d;
 Tue, 08 Dec 2020 19:43:29 +0000 (UTC)
Received: by mail-wr1-x436.google.com with SMTP id m5so7836002wrx.9
 for <xen-devel@lists.xenproject.org>; Tue, 08 Dec 2020 11:43:29 -0800 (PST)
Received: from CBGR90WXYV0 (host86-183-162-145.range86-183.btcentralplus.com.
 [86.183.162.145])
 by smtp.gmail.com with ESMTPSA id p19sm24373470wrg.18.2020.12.08.11.43.27
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 08 Dec 2020 11:43:28 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 05900a23-a7e0-424e-81aa-5cfb51db949d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=qjzyXx2IMmVT+nf19cGpA6lxPWA9L/XuyXLsL1yFufU=;
        b=iCn/gZYgc2fznrPCMOwpRCI+GASERyIrvi8q+udmWKhJtmQuAEu/8N6D+nYztBwX7U
         bfz/edOkmJWlivUfEVNVof/jv/04qwlYrt8TfkDPMWLufPwyrnSR+ukzAVjN5bMpNprH
         v6nYqYjrucm5h3nwITkul4C1JmGGK3q2i6eZIqTvOgJNyxQmcFO1r8gwnAuB+x6Vcqcp
         2ccXo3Qbt1XYQ/U+5uOS/hoFkiNA1EamSNiMwmQ07PeeAlRFM3jL/741gMeHIthg1Qug
         KK9HfWErhDcw7xooth3C+iOxE+XziVRkT47CZ83n8bzvU0vJ18Ldlv0SmJ5tjeWElVZd
         UhDA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=qjzyXx2IMmVT+nf19cGpA6lxPWA9L/XuyXLsL1yFufU=;
        b=ZIfMskYlZRkH3cKvOh4QlJp+PeljgM2t2FmmkW+zK3rzl0JKUKidincoom+ViLFrO1
         CRpBCxFNcQ4Vpi5E1eecJjwEguJ3dkRHSZMh9PEgCsc7d5Tms5BahDlrXPwSnwMFiXV2
         WI89gliXC4RBz3KqlhXHfsyUw9KR/z8sw2ivPau2dfesGJonSAWdWeV2/2rLBwUCVvzM
         SL7XhOt4njeKg+Tp//nx3LOC+bvIiHeDOMRREOAEzH8B2NAl/fCgJEUeiZIy1cyE5PwE
         7XjhDJrehMp4xo9wO5euRikahxLod89WVso+BE8WnPPSIFuZJgYLXfUn9AqDix21DbRF
         mWCw==
X-Gm-Message-State: AOAM533O0DjxRv15qIfFxVmFdXYOVFTfK+wletR8USQn5PHpI6ClzlgD
	vpyXns8iq7kbrMgLIP+BxKU=
X-Google-Smtp-Source: ABdhPJzKC5lPnW7jqWcSMU2/3bfO1AxUZIm1lvwHhjhewQ8WO1X3gNKYfN2AYs8KyMx/SvP3k8lQEw==
X-Received: by 2002:a5d:62cb:: with SMTP id o11mr27141971wrv.25.1607456608657;
        Tue, 08 Dec 2020 11:43:28 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Oleksandr'" <olekstysh@gmail.com>
Cc: "'Jan Beulich'" <jbeulich@suse.com>,
	"'Oleksandr Tyshchenko'" <oleksandr_tyshchenko@epam.com>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Julien Grall'" <julien@xen.org>,
	"'Volodymyr Babchuk'" <Volodymyr_Babchuk@epam.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Wei Liu'" <wl@xen.org>,
	"'Julien Grall'" <julien.grall@arm.com>,
	<xen-devel@lists.xenproject.org>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com> <1606732298-22107-18-git-send-email-olekstysh@gmail.com> <3bb4c3b5-a46a-ba31-292f-5c6ba49fa9be@suse.com> <6026b7f3-ae6e-f98f-be65-27d7f729a37f@gmail.com> <18bfd9b1-3e6a-8119-efd0-c82ad7ae681d@gmail.com>
In-Reply-To: <18bfd9b1-3e6a-8119-efd0-c82ad7ae681d@gmail.com>
Subject: RE: [PATCH V3 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
Date: Tue, 8 Dec 2020 19:43:26 -0000
Message-ID: <0d6c01d6cd9a$666326c0$33297440$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQKk0D4Qme59XF0a0h96d36zIOxDhQIE/k2jAQvf44kCfV5qfQHp2KBNqBXTFAA=

> -----Original Message-----
> From: Oleksandr <olekstysh@gmail.com>
> Sent: 08 December 2020 16:57
> To: Paul Durrant <paul@xen.org>
> Cc: Jan Beulich <jbeulich@suse.com>; Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Stefano
> Stabellini <sstabellini@kernel.org>; Julien Grall <julien@xen.org>; Volodymyr Babchuk
> <Volodymyr_Babchuk@epam.com>; Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap
> <george.dunlap@citrix.com>; Ian Jackson <iwj@xenproject.org>; Wei Liu <wl@xen.org>; Julien Grall
> <julien.grall@arm.com>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH V3 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
> 
> 
> Hi Paul.
> 
> 
> On 08.12.20 17:33, Oleksandr wrote:
> >
> > On 08.12.20 17:11, Jan Beulich wrote:
> >
> > Hi Jan
> >
> >> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
> >>> --- a/xen/include/xen/ioreq.h
> >>> +++ b/xen/include/xen/ioreq.h
> >>> @@ -55,6 +55,20 @@ struct ioreq_server {
> >>>       uint8_t                bufioreq_handling;
> >>>   };
> >>>   +/*
> >>> + * This should only be used when d == current->domain and it's not
> >>> paused,
> >> Is the "not paused" part really relevant here? Besides it being rare
> >> that the current domain would be paused (if so, it's in the process
> >> of having all its vCPU-s scheduled out), does this matter at all?
> >
> > No, it isn't relevant, I will drop it.
> >
> >
> >>
> >> Apart from this the patch looks okay to me, but I'm not sure it
> >> addresses Paul's concerns. Iirc he had suggested to switch back to
> >> a list if doing a swipe over the entire array is too expensive in
> >> this specific case.
> > We would like to avoid to do any extra actions in
> > leave_hypervisor_to_guest() if possible.
> > But not only there, the logic whether we check/set
> > mapcache_invalidation variable could be avoided if a domain doesn't
> > use IOREQ server...
> 
> 
> Are you OK with this patch (common part of it)?

How much of a performance benefit is this? The array is small, so simply counting the non-NULL entries should be pretty quick.

  Paul

> 
> 
> --
> Regards,
> 
> Oleksandr Tyshchenko




From xen-devel-bounces@lists.xenproject.org Tue Dec 08 20:00:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 20:00:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47762.84523 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAF-0004r6-Rh; Tue, 08 Dec 2020 20:00:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47762.84523; Tue, 08 Dec 2020 20:00:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAF-0004qz-Ng; Tue, 08 Dec 2020 20:00:47 +0000
Received: by outflank-mailman (input) for mailman id 47762;
 Tue, 08 Dec 2020 20:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kmjAD-0004op-Ik
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 20:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmjAC-00086O-V8; Tue, 08 Dec 2020 20:00:44 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=desktop.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihS-0001p0-El; Tue, 08 Dec 2020 19:31:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=0oRd0qn55VhmZOaU+m4KZB043Eh08xShf8/CczC16Yk=; b=CTRnNxB9yMYOWioY0XVnNe+VCd
	Oe4BOCUxWqx/i8gR9S/ml9csKS2QhM8L5PpBSKvn1iSgq8tP2f2s1S3+e8IcMqD1R2ngy+jr2NVqF
	FyAvVfpxKc7a9y712PXdJZcEQGzEHH1ktQt/VUTPTOZ2QWSSo04iZbmdOZUYC5SYa9oI=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Ian Jackson <iwj@xenproject.org>
Subject: [PATCH v6 24/25] docs/man: modify xl-pci-configuration(5) to add 'name' field to PCI_SPEC_STRING
Date: Tue,  8 Dec 2020 19:30:32 +0000
Message-Id: <20201208193033.11306-25-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201208193033.11306-1-paul@xen.org>
References: <20201208193033.11306-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Since assignable devices can be named, a subsequent patch will support use
of a PCI_SPEC_STRING containing a 'name' parameter instead of a 'bdf'. In
this case the name will be used to look up the 'bdf' in the list of assignable
(or assigned) devices.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Acked-by: Wei Liu <wl@xen.org>
---
Cc: Ian Jackson <iwj@xenproject.org>
---
 docs/man/xl-pci-configuration.5.pod | 25 +++++++++++++++++++++++--
 1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/docs/man/xl-pci-configuration.5.pod b/docs/man/xl-pci-configuration.5.pod
index 4dd73bc498d6..db3360307cbd 100644
--- a/docs/man/xl-pci-configuration.5.pod
+++ b/docs/man/xl-pci-configuration.5.pod
@@ -51,7 +51,7 @@ is not specified, or if it is specified with an empty value (whether
 positionally or explicitly).
 
 B<NOTE>: In context of B<xl pci-detach> (see L<xl(1)>), parameters other than
-B<bdf> will be ignored.
+B<bdf> or B<name> will be ignored.
 
 =head1 Positional Parameters
 
@@ -70,7 +70,11 @@ B<*> to indicate all functions of a multi-function device.
 
 =item Default Value
 
-None. This parameter is mandatory as it identifies the device.
+None. This parameter is mandatory in its positional form. As a non-positional
+parameter it is also mandatory unless a B<name> parameter is present, in
+which case B<bdf> must not be present since the B<name> will be used to find
+the B<bdf> in the list of assignable devices. See L<xl(1)> for more information
+on naming assignable devices.
 
 =back
 
@@ -194,4 +198,21 @@ B<NOTE>: This overrides the global B<rdm> option.
 
 =back
 
+=item B<name>=I<STRING>
+
+=over 4
+
+=item Description
+
+This is the name given when the B<BDF> was made assignable. See L<xl(1)> for
+more information on naming assignable devices.
+
+=item Default Value
+
+None. This parameter must not be present if a B<bdf> parameter is present.
+If a B<bdf> parameter is not present then B<name> is mandatory as it is
+required to look up the B<BDF> in the list of assignable devices.
+
+=back
+
 =back
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 20:00:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 20:00:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47764.84552 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAH-0004un-9d; Tue, 08 Dec 2020 20:00:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47764.84552; Tue, 08 Dec 2020 20:00:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAG-0004u8-Rp; Tue, 08 Dec 2020 20:00:48 +0000
Received: by outflank-mailman (input) for mailman id 47764;
 Tue, 08 Dec 2020 20:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kmjAD-0004om-Jh
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 20:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmjAC-00086E-M8; Tue, 08 Dec 2020 20:00:44 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=desktop.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihG-0001p0-HU; Tue, 08 Dec 2020 19:30:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=mETAd4vIPWNeYGaK2+L7Uc2zER+2iOz4jTch2pcWZyo=; b=uSDziSntLinJRXwNiYTgCG14So
	58d/DJEnitRUouxI7dR/h1d0ifqQtZBZi5yJQjz8Zl+8f5m8kZ+r+vIsvNYbAHq4zvXyzQdwViLwg
	toDLCf2MCdKsMvMyfYvphnzWed9jwIhG/M9GtsISHPO1+FGCJCU7hUQs3mj3e7csqKx8=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Wei Liu <wl@xen.org>,
	Ian Jackson <iwj@xenproject.org>
Subject: [PATCH v6 11/25] libxl: remove get_all_assigned_devices() from libxl_pci.c
Date: Tue,  8 Dec 2020 19:30:19 +0000
Message-Id: <20201208193033.11306-12-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201208193033.11306-1-paul@xen.org>
References: <20201208193033.11306-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Use of this function is a very inefficient way to check whether a device
has already been assigned.

This patch adds code that saves the domain id in xenstore at the point of
assignment, and removes it again when the device is de-assigned (or the
domain is destroyed). It is then straightforward to check whether a device
has been assigned by checking whether it has a saved domain id.

NOTE: To facilitate the xenstore check it is necessary to move the
      pci_info_xs_read() earlier in libxl_pci.c. To keep related functions
      together, the rest of the pci_info_xs_XXX() functions are moved too.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Acked-by: Wei Liu <wl@xen.org>
---
Cc: Ian Jackson <iwj@xenproject.org>
---
 tools/libs/light/libxl_pci.c | 149 +++++++++++++----------------------
 1 file changed, 55 insertions(+), 94 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 0c2ab5075d9f..37a6ec9eb443 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -336,50 +336,6 @@ retry_transaction2:
     return 0;
 }
 
-static int get_all_assigned_devices(libxl__gc *gc, libxl_device_pci **list, int *num)
-{
-    char **domlist;
-    unsigned int nd = 0, i;
-
-    *list = NULL;
-    *num = 0;
-
-    domlist = libxl__xs_directory(gc, XBT_NULL, "/local/domain", &nd);
-    for(i = 0; i < nd; i++) {
-        char *path, *num_devs;
-
-        path = GCSPRINTF("/local/domain/0/backend/%s/%s/0/num_devs",
-                         libxl__device_kind_to_string(LIBXL__DEVICE_KIND_PCI),
-                         domlist[i]);
-        num_devs = libxl__xs_read(gc, XBT_NULL, path);
-        if ( num_devs ) {
-            int ndev = atoi(num_devs), j;
-            char *devpath, *bdf;
-
-            for(j = 0; j < ndev; j++) {
-                devpath = GCSPRINTF("/local/domain/0/backend/%s/%s/0/dev-%u",
-                                    libxl__device_kind_to_string(LIBXL__DEVICE_KIND_PCI),
-                                    domlist[i], j);
-                bdf = libxl__xs_read(gc, XBT_NULL, devpath);
-                if ( bdf ) {
-                    unsigned dom, bus, dev, func;
-                    if ( sscanf(bdf, PCI_BDF, &dom, &bus, &dev, &func) != 4 )
-                        continue;
-
-                    *list = realloc(*list, sizeof(libxl_device_pci) * ((*num) + 1));
-                    if (*list == NULL)
-                        return ERROR_NOMEM;
-                    pci_struct_fill(*list + *num, dom, bus, dev, func, 0);
-                    (*num)++;
-                }
-            }
-        }
-    }
-    libxl__ptr_add(gc, *list);
-
-    return 0;
-}
-
 static int is_pci_in_array(libxl_device_pci *assigned, int num_assigned,
                            int dom, int bus, int dev, int func)
 {
@@ -427,19 +383,58 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
     return 0;
 }
 
+#define PCI_INFO_PATH "/libxl/pci"
+
+static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node)
+{
+    return node ?
+        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
+                  pci->domain, pci->bus, pci->dev, pci->func,
+                  node) :
+        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
+                  pci->domain, pci->bus, pci->dev, pci->func);
+}
+
+
+static int pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node, const char *val)
+{
+    char *path = pci_info_xs_path(gc, pci, node);
+    int rc = libxl__xs_printf(gc, XBT_NULL, path, "%s", val);
+
+    if (rc) LOGE(WARN, "Write of %s to node %s failed.", val, path);
+
+    return rc;
+}
+
+static char *pci_info_xs_read(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node)
+{
+    char *path = pci_info_xs_path(gc, pci, node);
+
+    return libxl__xs_read(gc, XBT_NULL, path);
+}
+
+static void pci_info_xs_remove(libxl__gc *gc, libxl_device_pci *pci,
+                               const char *node)
+{
+    char *path = pci_info_xs_path(gc, pci, node);
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+
+    /* Remove the xenstore entry */
+    xs_rm(ctx->xsh, XBT_NULL, path);
+}
+
 libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
 {
     GC_INIT(ctx);
-    libxl_device_pci *pcis = NULL, *new, *assigned;
+    libxl_device_pci *pcis = NULL, *new;
     struct dirent *de;
     DIR *dir;
-    int r, num_assigned;
 
     *num = 0;
 
-    r = get_all_assigned_devices(gc, &assigned, &num_assigned);
-    if (r) goto out;
-
     dir = opendir(SYSFS_PCIBACK_DRIVER);
     if (NULL == dir) {
         if (errno == ENOENT) {
@@ -455,9 +450,6 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
         if (sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4)
             continue;
 
-        if (is_pci_in_array(assigned, num_assigned, dom, bus, dev, func))
-            continue;
-
         new = realloc(pcis, ((*num) + 1) * sizeof(*new));
         if (NULL == new)
             continue;
@@ -467,6 +459,10 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
 
         memset(new, 0, sizeof(*new));
         pci_struct_fill(new, dom, bus, dev, func, 0);
+
+        if (pci_info_xs_read(gc, new, "domid")) /* already assigned */
+            continue;
+
         (*num)++;
     }
 
@@ -737,48 +733,6 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
     return 0;
 }
 
-#define PCI_INFO_PATH "/libxl/pci"
-
-static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
-                              const char *node)
-{
-    return node ?
-        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
-                  pci->domain, pci->bus, pci->dev, pci->func,
-                  node) :
-        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
-                  pci->domain, pci->bus, pci->dev, pci->func);
-}
-
-
-static void pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
-                              const char *node, const char *val)
-{
-    char *path = pci_info_xs_path(gc, pci, node);
-
-    if ( libxl__xs_printf(gc, XBT_NULL, path, "%s", val) < 0 ) {
-        LOGE(WARN, "Write of %s to node %s failed.", val, path);
-    }
-}
-
-static char *pci_info_xs_read(libxl__gc *gc, libxl_device_pci *pci,
-                              const char *node)
-{
-    char *path = pci_info_xs_path(gc, pci, node);
-
-    return libxl__xs_read(gc, XBT_NULL, path);
-}
-
-static void pci_info_xs_remove(libxl__gc *gc, libxl_device_pci *pci,
-                               const char *node)
-{
-    char *path = pci_info_xs_path(gc, pci, node);
-    libxl_ctx *ctx = libxl__gc_owner(gc);
-
-    /* Remove the xenstore entry */
-    xs_rm(ctx->xsh, XBT_NULL, path);
-}
-
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
                                             libxl_device_pci *pci,
                                             int rebind)
@@ -1594,6 +1548,9 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
         goto out;
     }
 
+    rc = pci_info_xs_write(gc, pci, "domid", GCSPRINTF("%u", domid));
+    if (rc) goto out;
+
     libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
@@ -1721,6 +1678,7 @@ static void device_pci_add_done(libxl__egc *egc,
              "PCI device %x:%x:%x.%x (rc %d)",
              pci->domain, pci->bus, pci->dev, pci->func,
              rc);
+        pci_info_xs_remove(gc, pci, "domid");
     }
     aodev->rc = rc;
     aodev->callback(egc, aodev);
@@ -2282,6 +2240,9 @@ out:
     libxl__xswait_stop(gc, &prs->xswait);
     libxl__ev_time_deregister(gc, &prs->timeout);
     libxl__ev_time_deregister(gc, &prs->retry_timer);
+
+    if (!rc) pci_info_xs_remove(gc, pci, "domid");
+
     aodev->rc = rc;
     aodev->callback(egc, aodev);
 }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 20:00:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 20:00:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47761.84540 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAG-0004su-Op; Tue, 08 Dec 2020 20:00:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47761.84540; Tue, 08 Dec 2020 20:00:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAG-0004s6-Bb; Tue, 08 Dec 2020 20:00:48 +0000
Received: by outflank-mailman (input) for mailman id 47761;
 Tue, 08 Dec 2020 20:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kmjAD-0004oo-Il
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 20:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmjAC-00086C-Gp; Tue, 08 Dec 2020 20:00:44 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=desktop.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihR-0001p0-LB; Tue, 08 Dec 2020 19:31:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=+f7YY+lF0YoyI1swofoqUpjdUy+mpVesNl7dk7J2kFI=; b=rr04WefTtRhowSm90K+hZUAZQu
	2w9CfjUdYuOPRetPipUjVXc+yzmn5NctlWwxICoNQZ8F7v8gb83hPvR76PeMlgL4GVzwAVCHrWAem
	r/yYGqdgmzbEHIRWSuhTFO/3bm70s1QC6hSdUITLTr33oA+Mgmz9O+f7SSjDE9O31uuA=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v6 23/25] xl: support naming of assignable devices
Date: Tue,  8 Dec 2020 19:30:31 +0000
Message-Id: <20201208193033.11306-24-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201208193033.11306-1-paul@xen.org>
References: <20201208193033.11306-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This patch converts xl to use libxl_pci_bdf_assignable_add/remove/list/
list_free() rather than libxl_device_pci_assignable_add/remove/list/
list_free(), which then allows naming of assignable devices to be supported.

With this patch applied, 'xl pci-assignable-add' takes an optional '--name'
parameter, 'xl pci-assignable-remove' can be passed either a BDF or a name,
and 'xl pci-assignable-list' takes an optional '--show-names' flag which
determines whether names are displayed in its output.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>

v6:
 - New in v6 (split out from "xl / libxl: support naming of assignable
   devices")
---
 tools/xl/xl_cmdtable.c |  12 +++--
 tools/xl/xl_pci.c      | 100 +++++++++++++++++++++++++++--------------
 2 files changed, 74 insertions(+), 38 deletions(-)

diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 30e17a2848cd..bd8af12ff36e 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -105,21 +105,25 @@ struct cmd_spec cmd_table[] = {
     { "pci-assignable-add",
       &main_pciassignable_add, 0, 1,
       "Make a device assignable for pci-passthru",
-      "<BDF>",
+      "[options] <BDF>",
+      "-n NAME, --name=NAME    Name the assignable device.\n"
       "-h                      Print this help.\n"
     },
     { "pci-assignable-remove",
       &main_pciassignable_remove, 0, 1,
       "Remove a device from being assignable",
-      "[options] <BDF>",
+      "[options] <BDF>|NAME",
       "-h                      Print this help.\n"
       "-r                      Attempt to re-assign the device to the\n"
-      "                        original driver"
+      "                        original driver."
     },
     { "pci-assignable-list",
       &main_pciassignable_list, 0, 0,
       "List all the assignable pci devices",
-      "",
+      "[options]",
+      "-h                      Print this help.\n"
+      "-n, --show-names        Display assignable device names where\n"
+      "                        supplied.\n"
     },
     { "pause",
       &main_pause, 0, 1,
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 9c24496cb2dd..eb29b4e08d8b 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -152,55 +152,68 @@ int main_pciattach(int argc, char **argv)
     return EXIT_SUCCESS;
 }
 
-static void pciassignable_list(void)
+static void pciassignable_list(bool show_names)
 {
-    libxl_device_pci *pcis;
+    libxl_pci_bdf *pcibdfs;
     int num, i;
 
-    pcis = libxl_device_pci_assignable_list(ctx, &num);
+    pcibdfs = libxl_pci_bdf_assignable_list(ctx, &num);
 
-    if ( pcis == NULL )
+    if ( pcibdfs == NULL )
         return;
     for (i = 0; i < num; i++) {
-        printf("%04x:%02x:%02x.%01x\n",
-               pcis[i].bdf.domain, pcis[i].bdf.bus, pcis[i].bdf.dev,
-               pcis[i].bdf.func);
+        libxl_pci_bdf *pcibdf = &pcibdfs[i];
+        char *name = show_names ?
+            libxl_pci_bdf_assignable_bdf2name(ctx, pcibdf) : NULL;
+
+        printf("%04x:%02x:%02x.%01x %s\n",
+               pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func,
+               name ?: "");
+
+        free(name);
     }
-    libxl_device_pci_assignable_list_free(pcis, num);
+    libxl_pci_bdf_assignable_list_free(pcibdfs, num);
 }
 
 int main_pciassignable_list(int argc, char **argv)
 {
     int opt;
-
-    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-assignable-list", 0) {
-        /* No options */
+    static struct option opts[] = {
+        {"show-names", 0, 0, 'n'},
+        COMMON_LONG_OPTS
+    };
+    bool show_names = false;
+
+    SWITCH_FOREACH_OPT(opt, "n", opts, "pci-assignable-list", 0) {
+    case 'n':
+        show_names = true;
+        break;
     }
 
-    pciassignable_list();
+    pciassignable_list(show_names);
     return 0;
 }
 
-static int pciassignable_add(const char *bdf, int rebind)
+static int pciassignable_add(const char *bdf, const char *name, int rebind)
 {
-    libxl_device_pci pci;
+    libxl_pci_bdf pcibdf;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pci);
+    libxl_pci_bdf_init(&pcibdf);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci.bdf, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pcibdf, bdf)) {
         fprintf(stderr, "pci-assignable-add: malformed BDF \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_add(ctx, &pci, rebind))
+    if (libxl_pci_bdf_assignable_add(ctx, &pcibdf, name, rebind))
         r = 1;
 
-    libxl_device_pci_dispose(&pci);
+    libxl_pci_bdf_dispose(&pcibdf);
     xlu_cfg_destroy(config);
 
     return r;
@@ -210,39 +223,58 @@ int main_pciassignable_add(int argc, char **argv)
 {
     int opt;
     const char *bdf = NULL;
-
-    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-assignable-add", 1) {
-        /* No options */
+    static struct option opts[] = {
+        {"name", 1, 0, 'n'},
+        COMMON_LONG_OPTS
+    };
+    const char *name = NULL;
+
+    SWITCH_FOREACH_OPT(opt, "n:", opts, "pci-assignable-add", 0) {
+    case 'n':
+        name = optarg;
+        break;
     }
 
     bdf = argv[optind];
 
-    if (pciassignable_add(bdf, 1))
+    if (pciassignable_add(bdf, name, 1))
         return EXIT_FAILURE;
 
     return EXIT_SUCCESS;
 }
 
-static int pciassignable_remove(const char *bdf, int rebind)
+static int pciassignable_remove(const char *ident, int rebind)
 {
-    libxl_device_pci pci;
+    libxl_pci_bdf *pcibdf;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pci);
-
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci.bdf, bdf)) {
-        fprintf(stderr, "pci-assignable-remove: malformed BDF \"%s\"\n", bdf);
-        exit(2);
+    pcibdf = libxl_pci_bdf_assignable_name2bdf(ctx, ident);
+    if (!pcibdf) {
+        pcibdf = calloc(1, sizeof(*pcibdf));
+
+        if (!pcibdf) {
+            fprintf(stderr,
+                    "pci-assignable-remove: failed to allocate memory\n");
+            exit(2);
+        }
+
+        libxl_pci_bdf_init(pcibdf);
+        if (xlu_pci_parse_bdf(config, pcibdf, ident)) {
+            fprintf(stderr,
+                    "pci-assignable-remove: malformed BDF '%s'\n", ident);
+            exit(2);
+        }
     }
 
-    if (libxl_device_pci_assignable_remove(ctx, &pci, rebind))
+    if (libxl_pci_bdf_assignable_remove(ctx, pcibdf, rebind))
         r = 1;
 
-    libxl_device_pci_dispose(&pci);
+    libxl_pci_bdf_dispose(pcibdf);
+    free(pcibdf);
     xlu_cfg_destroy(config);
 
     return r;
@@ -251,7 +283,7 @@ static int pciassignable_remove(const char *bdf, int rebind)
 int main_pciassignable_remove(int argc, char **argv)
 {
     int opt;
-    const char *bdf = NULL;
+    const char *ident = NULL;
     int rebind = 0;
 
     SWITCH_FOREACH_OPT(opt, "r", NULL, "pci-assignable-remove", 1) {
@@ -260,9 +292,9 @@ int main_pciassignable_remove(int argc, char **argv)
         break;
     }
 
-    bdf = argv[optind];
+    ident = argv[optind];
 
-    if (pciassignable_remove(bdf, rebind))
+    if (pciassignable_remove(ident, rebind))
         return EXIT_FAILURE;
 
     return EXIT_SUCCESS;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 20:00:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 20:00:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47760.84512 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAE-0004qE-IA; Tue, 08 Dec 2020 20:00:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47760.84512; Tue, 08 Dec 2020 20:00:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAE-0004q2-EX; Tue, 08 Dec 2020 20:00:46 +0000
Received: by outflank-mailman (input) for mailman id 47760;
 Tue, 08 Dec 2020 20:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kmjAD-0004oq-Iz
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 20:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmjAC-00086G-Nq; Tue, 08 Dec 2020 20:00:44 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=desktop.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihL-0001p0-BW; Tue, 08 Dec 2020 19:30:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=Kp5JHM/PcDPgoyQFpS4CPrVmdU0OAzBzGq/q/o/48HU=; b=RwuXlpVlGqqwSfs9ckruZrDqFD
	c7NWlIYHncocBbwbUaf7sSbkXzqOZiFW5VsVz3PrS6OiqyckaYsi/KW5Z+v9bPqm05nHRVE2uKQnx
	pP1OR8hKo4CSiBDHjvE3K7nbIX/GFrLm7g9w3jENbMvRX5PEU4EwBTmN47EOXHcKCxCY=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Ian Jackson <iwj@xenproject.org>
Subject: [PATCH v6 16/25] docs/man: improve documentation of PCI_SPEC_STRING...
Date: Tue,  8 Dec 2020 19:30:24 +0000
Message-Id: <20201208193033.11306-17-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201208193033.11306-1-paul@xen.org>
References: <20201208193033.11306-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... and prepare for adding support for non-positional parsing of 'bdf' and
'vslot' in a subsequent patch.

Also document 'BDF' as a first-class parameter type and fix the documentation
to state that the default value of 'rdm_policy' is actually 'strict', not
'relaxed', as can be seen in libxl__device_pci_setdefault().

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Acked-by: Wei Liu <wl@xen.org>
---
Cc: Ian Jackson <iwj@xenproject.org>
---
 docs/man/xl-pci-configuration.5.pod | 177 +++++++++++++++++++++++-----
 1 file changed, 148 insertions(+), 29 deletions(-)

diff --git a/docs/man/xl-pci-configuration.5.pod b/docs/man/xl-pci-configuration.5.pod
index 72a27bd95dec..4dd73bc498d6 100644
--- a/docs/man/xl-pci-configuration.5.pod
+++ b/docs/man/xl-pci-configuration.5.pod
@@ -6,32 +6,105 @@ xl-pci-configuration - XL PCI Configuration Syntax
 
 =head1 SYNTAX
 
-This document specifies the format for B<PCI_SPEC_STRING> which is used by
-the L<xl.cfg(5)> pci configuration option, and related L<xl(1)> commands.
+This document specifies the format for B<BDF> and B<PCI_SPEC_STRING> which are
+used by the L<xl.cfg(5)> pci configuration option, and related L<xl(1)>
+commands.
 
-Each B<PCI_SPEC_STRING> has the form of
-B<[DDDD:]BB:DD.F[@VSLOT],KEY=VALUE,KEY=VALUE,...> where:
+A B<BDF> has the following form:
+
+    [DDDD:]BB:SS.F
+
+B<DDDD> is the domain number, B<BB> is the bus number, B<SS> is the device (or
+slot) number, and B<F> is the function number. This is the same scheme as
+used in the output of L<lspci(1)> for the device in question. By default
+L<lspci(1)> will omit the domain (B<DDDD>) if it is zero and hence a zero
+value for domain may also be omitted when specifying a B<BDF>.
+
+Each B<PCI_SPEC_STRING> has one of the following forms:
+
+=over 4
+
+    [<bdf>[@<vslot>],][<key>=<value>,]*
+    [<key>=<value>,]*
+
+=back
+
+For example, these strings are equivalent:
 
 =over 4
 
-=item B<[DDDD:]BB:DD.F>
+    36:00.0@20,seize=1
+    36:00.0,vslot=20,seize=1
+    bdf=36:00.0,vslot=20,seize=1
 
-Identifies the PCI device from the host perspective in the domain
-(B<DDDD>), Bus (B<BB>), Device (B<DD>) and Function (B<F>) syntax. This is
-the same scheme as used in the output of B<lspci(1)> for the device in
-question.
+=back
+
+More formally, the string is a series of comma-separated keyword/value
+pairs, flags and positional parameters.  Parameters which are not bare
+keywords and which do not contain "=" symbols are assigned to the
+positional parameters, in the order specified below.  The positional
+parameters may also be specified by name.
+
+Each parameter may be specified at most once, either as a positional
+parameter or a named parameter.  Default values apply if the parameter
+is not specified, or if it is specified with an empty value (whether
+positionally or explicitly).
+
+B<NOTE>: In context of B<xl pci-detach> (see L<xl(1)>), parameters other than
+B<bdf> will be ignored.
+
+=head1 Positional Parameters
+
+=over 4
+
+=item B<bdf>=I<BDF>
+
+=over 4
 
-Note: by default B<lspci(1)> will omit the domain (B<DDDD>) if it
-is zero and it is optional here also. You may specify the function
-(B<F>) as B<*> to indicate all functions.
+=item Description
 
-=item B<@VSLOT>
+This identifies the PCI device from the host perspective.
 
-Specifies the virtual slot where the guest will see this
-device. This is equivalent to the B<DD> which the guest sees. In a
-guest B<DDDD> and B<BB> are C<0000:00>.
+In the context of a B<PCI_SPEC_STRING> you may specify the function (B<F>) as
+B<*> to indicate all functions of a multi-function device.
 
-=item B<permissive=BOOLEAN>
+=item Default Value
+
+None. This parameter is mandatory as it identifies the device.
+
+=back
+
+=item B<vslot>=I<NUMBER>
+
+=over 4
+
+=item Description
+
+Specifies the virtual slot (device) number where the guest will see this
+device. For example, running L<lspci(1)> in a Linux guest where B<vslot>
+was specified as C<8> would identify the device as C<00:08.0>. Virtual domain
+and bus numbers are always 0.
+
+B<NOTE:> This parameter is always parsed as a hexadecimal value.
+
+=item Default Value
+
+None. This parameter is not mandatory. An available B<vslot> will be selected
+if this parameter is not specified.
+
+=back
+
+=back
+
+=head1 Other Parameters and Flags
+
+=over 4
+
+=item B<permissive>=I<BOOLEAN>
+
+=over 4
+
+=item Description
 
 By default pciback only allows PV guests to write "known safe" values
 into PCI configuration space, likewise QEMU (both qemu-xen and
@@ -46,33 +119,79 @@ more control over the device, which may have security or stability
 implications.  It is recommended to only enable this option for
 trusted VMs under administrator's control.
 
-=item B<msitranslate=BOOLEAN>
+=item Default Value
+
+0
+
+=back
+
+=item B<msitranslate>=I<BOOLEAN>
+
+=over 4
+
+=item Description
 
 Specifies that MSI-INTx translation should be turned on for the PCI
 device. When enabled, MSI-INTx translation will always enable MSI on
-the PCI device regardless of whether the guest uses INTx or MSI. Some
-device drivers, such as NVIDIA's, detect an inconsistency and do not
+the PCI device regardless of whether the guest uses INTx or MSI.
+
+=item Default Value
+
+Some device drivers, such as NVIDIA's, detect an inconsistency and do not
 function when this option is enabled. Therefore the default is false (0).
 
-=item B<seize=BOOLEAN>
+=back
+
+=item B<seize>=I<BOOLEAN>
+
+=over 4
+
+=item Description
 
-Tells B<xl> to automatically attempt to re-assign a device to
-pciback if it is not already assigned.
+Tells L<xl(1)> to automatically attempt to make the device assignable to
+guests if that has not already been done by the B<pci-assignable-add>
+command.
 
-B<WARNING:> If you set this option, B<xl> will gladly re-assign a critical
+B<WARNING:> If you set this option, L<xl(1)> will gladly re-assign a critical
 system device, such as a network or a disk controller being used by
 dom0 without confirmation.  Please use with care.
 
-=item B<power_mgmt=BOOLEAN>
+=item Default Value
+
+0
+
+=back
+
+=item B<power_mgmt>=I<BOOLEAN>
+
+=over 4
+
+=item Description
 
 B<(HVM only)> Specifies that the VM should be able to program the
-D0-D3hot power management states for the PCI device. The default is false (0).
+D0-D3hot power management states for the PCI device.
+
+=item Default Value
+
+0
 
-=item B<rdm_policy=STRING>
+=back
+
+=item B<rdm_policy>=I<STRING>
+
+=over 4
+
+=item Description
 
 B<(HVM/x86 only)> This is the same as the policy setting inside the B<rdm>
-option but just specific to a given device. The default is "relaxed".
+option in L<xl.cfg(5)> but just specific to a given device.
 
-Note: this would override global B<rdm> option.
+B<NOTE>: This overrides the global B<rdm> option.
+
+=item Default Value
+
+"strict"
+
+=back
 
 =back
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 20:00:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 20:00:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47763.84531 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAG-0004rX-8A; Tue, 08 Dec 2020 20:00:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47763.84531; Tue, 08 Dec 2020 20:00:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAG-0004rL-0v; Tue, 08 Dec 2020 20:00:48 +0000
Received: by outflank-mailman (input) for mailman id 47763;
 Tue, 08 Dec 2020 20:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kmjAD-0004on-Jh
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 20:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmjAC-00086I-QG; Tue, 08 Dec 2020 20:00:44 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=desktop.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihF-0001p0-Id; Tue, 08 Dec 2020 19:30:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=HD4e8QIuOJKsoYEaJ78yCnUJ89s+VAB/WU9F9pKuq1c=; b=OGMBKZ0qbSoolNUERAOJgtRPOy
	dfRNLmBTJFWj59NT7iboNV42XgSRmilKYne8uoC9ZCA3ePsZW1i6p0zmN+XwlADfWJvnzK8X+UYoh
	a+XlICCQsQktAQZ2av3UyCTpie/ybWp1QEWiemrnUs+jvqg+3QgUcQ+u7AGH2HcYFmBg=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Wei Liu <wl@xen.org>,
	Ian Jackson <iwj@xenproject.org>
Subject: [PATCH v6 10/25] libxl: remove unnecessary check from libxl__device_pci_add()
Date: Tue,  8 Dec 2020 19:30:18 +0000
Message-Id: <20201208193033.11306-11-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201208193033.11306-1-paul@xen.org>
References: <20201208193033.11306-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

The code currently checks explicitly whether the device is already assigned,
but this is unnecessary: assigned devices do not form part of the list
returned by libxl_device_pci_assignable_list(), so the
libxl_pci_assignable() test would already have failed.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Acked-by: Wei Liu <wl@xen.org>
---
Cc: Ian Jackson <iwj@xenproject.org>
---
 tools/libs/light/libxl_pci.c | 16 +---------------
 1 file changed, 1 insertion(+), 15 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 78ee641e9aa8..0c2ab5075d9f 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1555,8 +1555,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
 {
     STATE_AO_GC(aodev->ao);
     libxl_ctx *ctx = libxl__gc_owner(gc);
-    libxl_device_pci *assigned;
-    int num_assigned, rc;
+    int rc;
     int stubdomid = 0;
     pci_add_state *pas;
 
@@ -1595,19 +1594,6 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
         goto out;
     }
 
-    rc = get_all_assigned_devices(gc, &assigned, &num_assigned);
-    if ( rc ) {
-        LOGD(ERROR, domid,
-             "cannot determine if device is assigned, refusing to continue");
-        goto out;
-    }
-    if ( is_pci_in_array(assigned, num_assigned, pci->domain,
-                         pci->bus, pci->dev, pci->func) ) {
-        LOGD(ERROR, domid, "PCI device already attached to a domain");
-        rc = ERROR_FAIL;
-        goto out;
-    }
-
     libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 20:00:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 20:00:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47765.84562 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAH-0004wt-Sa; Tue, 08 Dec 2020 20:00:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47765.84562; Tue, 08 Dec 2020 20:00:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAH-0004wL-Dl; Tue, 08 Dec 2020 20:00:49 +0000
Received: by outflank-mailman (input) for mailman id 47765;
 Tue, 08 Dec 2020 20:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kmjAD-0004p5-Kg
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 20:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmjAC-00086K-Rj; Tue, 08 Dec 2020 20:00:44 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=desktop.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihM-0001p0-56; Tue, 08 Dec 2020 19:30:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=BBwaFwKiybbCIqgc9tMzuIzX22z57wEbmB7dmgtI4NU=; b=Mqeq+2l/Zz1BLnK4Q0P079EKns
	h8ldMBHO1UCEvz0JlDZ2OPbBjkD6rB+dRT2/K9bCLOtqFdZfbmXG+48LSmO2DDMQJJqGEVlD22SBW
	4LX8VtzScFEsUv4h2pfot21Hz1ESVMSveqpMMot7crI19OavCeM9GXrEjFcngOtp1aP4=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Ian Jackson <iwj@xenproject.org>
Subject: [PATCH v6 17/25] docs/man: fix xl(1) documentation for 'pci' operations
Date: Tue,  8 Dec 2020 19:30:25 +0000
Message-Id: <20201208193033.11306-18-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201208193033.11306-1-paul@xen.org>
References: <20201208193033.11306-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Currently the documentation completely fails to mention the existence of
PCI_SPEC_STRING. This patch tidies things up, specifically clarifying that
'pci-assignable-add/remove' take <BDF> arguments whereas 'pci-attach/detach'
take <PCI_SPEC_STRING> arguments (which will be enforced in a subsequent
patch).

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Acked-by: Wei Liu <wl@xen.org>
---
Cc: Ian Jackson <iwj@xenproject.org>
---
 docs/man/xl.1.pod.in | 28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index eaa72faad6a0..af31d2b5727a 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -1597,14 +1597,18 @@ List virtual network interfaces for a domain.
 
 =item B<pci-assignable-list>
 
-List all the assignable PCI devices.
+List all the B<BDF> of assignable PCI devices. See
+L<xl-pci-configuration(5)> for more information.
+
 These are devices in the system which are configured to be
 available for passthrough and are bound to a suitable PCI
 backend driver in domain 0 rather than a real driver.
 
 =item B<pci-assignable-add> I<BDF>
 
-Make the device at PCI Bus/Device/Function BDF assignable to guests.
+Make the device at B<BDF> assignable to guests. See
+L<xl-pci-configuration(5)> for more information.
+
 This will bind the device to the pciback driver and assign it to the
 "quarantine domain".  If it is already bound to a driver, it will
 first be unbound, and the original driver stored so that it can be
@@ -1620,8 +1624,10 @@ being used.
 
 =item B<pci-assignable-remove> [I<-r>] I<BDF>
 
-Make the device at PCI Bus/Device/Function BDF not assignable to
-guests.  This will at least unbind the device from pciback, and
+Make the device at B<BDF> not assignable to guests. See
+L<xl-pci-configuration(5)> for more information.
+
+This will at least unbind the device from pciback, and
 re-assign it from the "quarantine domain" back to domain 0.  If the -r
 option is specified, it will also attempt to re-bind the device to its
 original driver, making it usable by Domain 0 again.  If the device is
@@ -1637,15 +1643,15 @@ As always, this should only be done if you trust the guest, or are
 confident that the particular device you're re-assigning to dom0 will
 cancel all in-flight DMA on FLR.
 
-=item B<pci-attach> I<domain-id> I<BDF>
+=item B<pci-attach> I<domain-id> I<PCI_SPEC_STRING>
 
-Hot-plug a new pass-through pci device to the specified domain.
-B<BDF> is the PCI Bus/Device/Function of the physical device to pass-through.
+Hot-plug a new pass-through pci device to the specified domain. See
+L<xl-pci-configuration(5)> for more information.
 
-=item B<pci-detach> [I<OPTIONS>] I<domain-id> I<BDF>
+=item B<pci-detach> [I<OPTIONS>] I<domain-id> I<PCI_SPEC_STRING>
 
-Hot-unplug a previously assigned pci device from a domain. B<BDF> is the PCI
-Bus/Device/Function of the physical device to be removed from the guest domain.
+Hot-unplug a pci device that was previously passed through to a domain. See
+L<xl-pci-configuration(5)> for more information.
 
 B<OPTIONS>
 
@@ -1660,7 +1666,7 @@ even without guest domain's collaboration.
 
 =item B<pci-list> I<domain-id>
 
-List pass-through pci devices for a domain.
+List the B<BDF> of pci devices passed through to a domain.
 
 =back
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 20:00:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 20:00:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47766.84577 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAI-0004zB-Mr; Tue, 08 Dec 2020 20:00:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47766.84577; Tue, 08 Dec 2020 20:00:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAI-0004yQ-3r; Tue, 08 Dec 2020 20:00:50 +0000
Received: by outflank-mailman (input) for mailman id 47766;
 Tue, 08 Dec 2020 20:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kmjAD-0004pG-NT
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 20:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmjAC-00086M-TG; Tue, 08 Dec 2020 20:00:44 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=desktop.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihQ-0001p0-OP; Tue, 08 Dec 2020 19:31:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=Ys3y67RaqaXIN5X7X+CEx/lpFH/KIEaBqkv7RhCev3w=; b=gjhvluOV6aC2KdnKGAZACFnyMU
	nyxcR2eTaiUWLCy9OTGdiB1TD6pksTH2nvHdnHbDczzMEku+k5HEmAbt4N0ZUwiyYKMsMu3hRz3+b
	FT4FgdnK6QVnIORwlRvTvWUQVEmaSya8jMrKJmIGvZYNXwq/rWXoHYPrlnETEFn1AyrA=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v6 22/25] libxl: introduce libxl_pci_bdf_assignable_add/remove/list/list_free(), ...
Date: Tue,  8 Dec 2020 19:30:30 +0000
Message-Id: <20201208193033.11306-23-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201208193033.11306-1-paul@xen.org>
References: <20201208193033.11306-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

which support naming and use 'libxl_pci_bdf' rather than 'libxl_device_pci',
as replacements for libxl_device_pci_assignable_add/remove/list/list_free().

libxl_pci_bdf_assignable_add() takes a 'name' parameter which is stored in
xenstore and facilitates two additional functions added by this patch:
libxl_pci_bdf_assignable_name2bdf() and libxl_pci_bdf_assignable_bdf2name().
Currently there are no callers of these two functions. They will be added in
a subsequent patch.

libxl_device_pci_assignable_add/remove/list/list_free() are left in place
for compatibility but are re-implemented in terms of the newly introduced
functions.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>

v6:
 - New in v6 (replacing remaining code from "libxl: modify
   libxl_device_pci_assignable_add/remove/list/list_free()...")
---
 tools/include/libxl.h        |  36 ++++++--
 tools/libs/light/libxl_pci.c | 166 ++++++++++++++++++++++++++++++-----
 2 files changed, 171 insertions(+), 31 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 1fa4c5806df9..fda611f88960 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -469,6 +469,13 @@
  */
 #define LIBXL_HAVE_PCI_BDF 1
 
+/*
+ * LIBXL_HAVE_PCI_ASSIGNABLE_BDF indicates that the
+ * libxl_pci_bdf_assignable_add/remove/list/list_free() functions all
+ * exist.
+ */
+#define LIBXL_HAVE_PCI_ASSIGNABLE_BDF 1
+
 /*
  * libxl ABI compatibility
  *
@@ -2357,9 +2364,9 @@ int libxl_device_events_handler(libxl_ctx *ctx,
                                 LIBXL_EXTERNAL_CALLERS_ONLY;
 
 /*
- * Functions related to making devices assignable -- that is, bound to
- * the pciback driver, ready to be given to a guest via
- * libxl_pci_device_add.
+ * Functions related to making PCI devices with the specified BDF
+ * assignable -- that is, bound to the pciback driver, ready to be given to
+ * a guest via libxl_pci_device_add.
  *
  * - ..._add() will unbind the device from its current driver (if
  * already bound) and re-bind it to pciback; at that point it will be
@@ -2371,16 +2378,31 @@ int libxl_device_events_handler(libxl_ctx *ctx,
  * rebind is non-zero, attempt to assign it back to the driver
  * from whence it came.
  *
- * - ..._list() will return a list of the PCI devices available to be
+ * - ..._list() will return a list of the PCI BDFs available to be
  * assigned.
  *
  * add and remove are idempotent: if the device in question is already
  * added or is not bound, the functions will emit a warning but return
  * SUCCESS.
  */
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
-libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
+int libxl_pci_bdf_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
+                                 const char *name, int rebind);
+int libxl_pci_bdf_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
+                                    int rebind);
+libxl_pci_bdf *libxl_pci_bdf_assignable_list(libxl_ctx *ctx, int *num);
+void libxl_pci_bdf_assignable_list_free(libxl_pci_bdf *list, int num);
+libxl_pci_bdf *libxl_pci_bdf_assignable_name2bdf(libxl_ctx *ctx,
+                                                 const char *name);
+char *libxl_pci_bdf_assignable_bdf2name(libxl_ctx *ctx,
+                                        libxl_pci_bdf *pcibdf);
+
+/* Compatibility functions - Use libxl_pci_bdf_assignable_* instead */
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci,
+                                    int rebind);
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci,
+                                       int rebind);
+libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx,
+                                                   int *num);
 void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num);
 
 /* CPUID handling */
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 448fe969514b..e11574e73f59 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -426,10 +426,10 @@ static void pci_info_xs_remove(libxl__gc *gc, libxl_pci_bdf *pcibdf,
     xs_rm(ctx->xsh, XBT_NULL, path);
 }
 
-libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
+libxl_pci_bdf *libxl_pci_bdf_assignable_list(libxl_ctx *ctx, int *num)
 {
     GC_INIT(ctx);
-    libxl_device_pci *pcis = NULL, *new;
+    libxl_pci_bdf *pcibdfs = NULL, *new;
     struct dirent *de;
     DIR *dir;
 
@@ -450,17 +450,17 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
         if (sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4)
             continue;
 
-        new = realloc(pcis, ((*num) + 1) * sizeof(*new));
+        new = realloc(pcibdfs, ((*num) + 1) * sizeof(*new));
         if (NULL == new)
             continue;
 
-        pcis = new;
-        new = pcis + *num;
+        pcibdfs = new;
+        new = pcibdfs + *num;
 
-        libxl_device_pci_init(new);
-        pcibdf_struct_fill(&new->bdf, dom, bus, dev, func);
+        libxl_pci_bdf_init(new);
+        pcibdf_struct_fill(new, dom, bus, dev, func);
 
-        if (pci_info_xs_read(gc, &new->bdf, "domid")) /* already assigned */
+        if (pci_info_xs_read(gc, new, "domid")) /* already assigned */
             continue;
 
         (*num)++;
@@ -469,15 +469,15 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
     closedir(dir);
 out:
     GC_FREE;
-    return pcis;
+    return pcibdfs;
 }
 
-void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num)
+void libxl_pci_bdf_assignable_list_free(libxl_pci_bdf *list, int num)
 {
     int i;
 
     for (i = 0; i < num; i++)
-        libxl_device_pci_dispose(&list[i]);
+        libxl_pci_bdf_dispose(&list[i]);
 
     free(list);
 }
@@ -745,6 +745,7 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 
 static int libxl__pci_bdf_assignable_add(libxl__gc *gc,
                                          libxl_pci_bdf *pcibdf,
+                                         const char *name,
                                          int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -753,6 +754,23 @@ static int libxl__pci_bdf_assignable_add(libxl__gc *gc,
     int rc;
     struct stat st;
 
+    /* Sanitise any name that was passed */
+    if (name) {
+        unsigned int i, n = strlen(name);
+
+        if (n > 64) { /* Reasonable upper bound on name length */
+            LOG(ERROR, "Name too long");
+            return ERROR_FAIL;
+        }
+
+        for (i = 0; i < n; i++) {
+            if (!isgraph(name[i])) {
+                LOG(ERROR, "Names may only include printable characters");
+                return ERROR_FAIL;
+            }
+        }
+    }
+
     /* Local copy for convenience */
     dom = pcibdf->domain;
     bus = pcibdf->bus;
@@ -773,7 +791,7 @@ static int libxl__pci_bdf_assignable_add(libxl__gc *gc,
     }
     if ( rc ) {
         LOG(WARN, PCI_BDF" already assigned to pciback", dom, bus, dev, func);
-        goto quarantine;
+        goto name;
     }
 
     /* Check to see if there's already a driver that we need to unbind from */
@@ -804,7 +822,12 @@ static int libxl__pci_bdf_assignable_add(libxl__gc *gc,
         return ERROR_FAIL;
     }
 
-quarantine:
+name:
+    if (name)
+        pci_info_xs_write(gc, pcibdf, "name", name);
+    else
+        pci_info_xs_remove(gc, pcibdf, "name");
+
     /*
      * DOMID_IO is just a sentinel domain, without any actual mappings,
      * so always pass XEN_DOMCTL_DEV_RDM_RELAXED to avoid assignment being
@@ -868,34 +891,87 @@ static int libxl__pci_bdf_assignable_remove(libxl__gc *gc,
         }
     }
 
+    pci_info_xs_remove(gc, pcibdf, "name");
+
     return 0;
 }
 
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci,
-                                    int rebind)
+int libxl_pci_bdf_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
+                                 const char *name, int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__pci_bdf_assignable_add(gc, &pci->bdf, rebind);
+    rc = libxl__pci_bdf_assignable_add(gc, pcibdf, name, rebind);
 
     GC_FREE;
     return rc;
 }
 
 
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci,
-                                       int rebind)
+int libxl_pci_bdf_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
+                                    int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__pci_bdf_assignable_remove(gc, &pci->bdf, rebind);
+    rc = libxl__pci_bdf_assignable_remove(gc, pcibdf, rebind);
 
     GC_FREE;
     return rc;
 }
 
+libxl_pci_bdf *libxl_pci_bdf_assignable_name2bdf(libxl_ctx *ctx,
+                                                 const char *name)
+{
+    GC_INIT(ctx);
+    char **bdfs;
+    libxl_pci_bdf *pcibdf = NULL;
+    unsigned int i, n;
+
+    bdfs = libxl__xs_directory(gc, XBT_NULL, PCI_INFO_PATH, &n);
+    if (!n)
+        goto out;
+
+    pcibdf = calloc(1, sizeof(*pcibdf));
+    if (!pcibdf)
+        goto out;
+
+    for (i = 0; i < n; i++) {
+        unsigned dom, bus, dev, func;
+        const char *tmp;
+
+        if (sscanf(bdfs[i], PCI_BDF_XSPATH, &dom, &bus, &dev, &func) != 4)
+            continue;
+
+        pcibdf_struct_fill(pcibdf, dom, bus, dev, func);
+
+        tmp = pci_info_xs_read(gc, pcibdf, "name");
+        if (tmp && !strcmp(tmp, name))
+            goto out;
+    }
+
+    free(pcibdf);
+    pcibdf = NULL;
+
+out:
+    GC_FREE;
+    return pcibdf;
+}
+
+char *libxl_pci_bdf_assignable_bdf2name(libxl_ctx *ctx,
+                                        libxl_pci_bdf *pcibdf)
+{
+    GC_INIT(ctx);
+    char *name = NULL, *tmp = pci_info_xs_read(gc, pcibdf, "name");
+
+    if (tmp)
+        name = strdup(tmp);
+
+    GC_FREE;
+    return name;
+}
+
 /*
  * This function checks that all functions of a device are bound to pciback
  * driver. It also initialises a bit-mask of which function numbers are present
@@ -1490,17 +1566,17 @@ int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
 
 static bool is_bdf_assignable(libxl_ctx *ctx, libxl_pci_bdf *pcibdf)
 {
-    libxl_device_pci *pcis;
+    libxl_pci_bdf *pcibdfs;
     int num, i;
 
-    pcis = libxl_device_pci_assignable_list(ctx, &num);
+    pcibdfs = libxl_pci_bdf_assignable_list(ctx, &num);
 
     for (i = 0; i < num; i++) {
-        if (COMPARE_BDF(pcibdf, &pcis[i].bdf))
+        if (COMPARE_BDF(pcibdf, &pcibdfs[i]))
             break;
     }
 
-    libxl_device_pci_assignable_list_free(pcis, num);
+    libxl_pci_bdf_assignable_list_free(pcibdfs, num);
 
     return i < num;
 }
@@ -1551,7 +1627,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     if (rc) goto out;
 
     if (pci->seize && !pciback_dev_is_assigned(gc, &pci->bdf)) {
-        rc = libxl__pci_bdf_assignable_add(gc, &pci->bdf, 1);
+        rc = libxl__pci_bdf_assignable_add(gc, &pci->bdf, NULL, 1);
         if ( rc )
             goto out;
     }
@@ -2449,6 +2525,48 @@ DEFINE_DEVICE_TYPE_STRUCT(pci, PCI, pcidevs,
     .from_xenstore = libxl__device_pci_from_xs_be,
 );
 
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci,
+                                    int rebind)
+{
+    return libxl_pci_bdf_assignable_add(ctx, &pci->bdf, NULL, rebind);
+}
+
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci,
+                                       int rebind)
+{
+    return libxl_pci_bdf_assignable_remove(ctx, &pci->bdf, rebind);
+}
+
+libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx,
+                                                   int *num)
+{
+    libxl_pci_bdf *pcibdfs = libxl_pci_bdf_assignable_list(ctx, num);
+    libxl_device_pci *pcis;
+    unsigned int i;
+
+    if (!pcibdfs)
+        return NULL;
+
+    pcis = calloc(*num, sizeof(*pcis));
+    if (!pcis) {
+        libxl_pci_bdf_assignable_list_free(pcibdfs, *num);
+        return NULL;
+    }
+
+    for (i = 0; i < *num; i++) {
+        libxl_device_pci_init(&pcis[i]);
+        libxl_pci_bdf_copy(ctx, &pcis[i].bdf, &pcibdfs[i]);
+    }
+
+    libxl_pci_bdf_assignable_list_free(pcibdfs, *num);
+    return pcis;
+}
+
+void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num)
+{
+    libxl_device_pci_list_free(list, num);
+}
+
 /*
  * Local variables:
  * mode: C
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 20:00:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 20:00:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47767.84587 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAJ-00051t-ES; Tue, 08 Dec 2020 20:00:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47767.84587; Tue, 08 Dec 2020 20:00:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAI-00050u-W3; Tue, 08 Dec 2020 20:00:50 +0000
Received: by outflank-mailman (input) for mailman id 47767;
 Tue, 08 Dec 2020 20:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kmjAD-0004pJ-Po
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 20:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmjAD-00086S-0s; Tue, 08 Dec 2020 20:00:45 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=desktop.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihP-0001p0-Rf; Tue, 08 Dec 2020 19:31:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=7O3jPwvmGZxkm+Vdmlru8cGpzRajsK5N4k2xBh61IQ4=; b=UJuqKLFdKWw996L4WBuLN1AqaY
	vrZMak8/LpzlO6UphBo5zcFgFAtnzb9jYMYotQ6ep48EWMDBfZ3QPtu+IlYVUE47U0XpXGNy0X3T9
	R1kYCMqar5N1zBwB9hZ0yQzn6m9MgcuwhKd0DiizMY8CNWG9Hw3n5BqlesDnGreDxkPY=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v6 21/25] libxl: convert internal functions in libxl_pci.c...
Date: Tue,  8 Dec 2020 19:30:29 +0000
Message-Id: <20201208193033.11306-22-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201208193033.11306-1-paul@xen.org>
References: <20201208193033.11306-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... to use 'libxl_pci_bdf' where appropriate.

No API change.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>

v6:
 - New in v6 (split out from "libxl: modify libxl_device_pci_assignable_add/
   remove/list/list_free()...")
---
 tools/libs/light/libxl_pci.c | 192 +++++++++++++++++++----------------
 1 file changed, 103 insertions(+), 89 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 6b14f0f29ef8..448fe969514b 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -25,26 +25,33 @@
 #define PCI_BDF_XSPATH         "%04x-%02x-%02x-%01x"
 #define PCI_PT_QDEV_ID         "pci-pt-%02x_%02x.%01x"
 
-static unsigned int pci_encode_bdf(libxl_device_pci *pci)
+static unsigned int pci_encode_bdf(libxl_pci_bdf *pcibdf)
 {
     unsigned int value;
 
-    value = pci->bdf.domain << 16;
-    value |= (pci->bdf.bus & 0xff) << 8;
-    value |= (pci->bdf.dev & 0x1f) << 3;
-    value |= (pci->bdf.func & 0x7);
+    value = pcibdf->domain << 16;
+    value |= (pcibdf->bus & 0xff) << 8;
+    value |= (pcibdf->dev & 0x1f) << 3;
+    value |= (pcibdf->func & 0x7);
 
     return value;
 }
 
+static void pcibdf_struct_fill(libxl_pci_bdf *pcibdf, unsigned int domain,
+                               unsigned int bus, unsigned int dev,
+                               unsigned int func)
+{
+    pcibdf->domain = domain;
+    pcibdf->bus = bus;
+    pcibdf->dev = dev;
+    pcibdf->func = func;
+}
+
 static void pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
                             unsigned int bus, unsigned int dev,
                             unsigned int func, unsigned int vdevfn)
 {
-    pci->bdf.domain = domain;
-    pci->bdf.bus = bus;
-    pci->bdf.dev = dev;
-    pci->bdf.func = func;
+    pcibdf_struct_fill(&pci->bdf, domain, bus, dev, func);
     pci->vdevfn = vdevfn;
 }
 
@@ -350,8 +357,8 @@ static bool is_pci_in_array(libxl_device_pci *pcis, int num,
 }
 
 /* Write the standard BDF into the sysfs path given by sysfs_path. */
-static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
-                           libxl_device_pci *pci)
+static int sysfs_write_bdf(libxl__gc *gc, const char *sysfs_path,
+                           libxl_pci_bdf *pcibdf)
 {
     int rc, fd;
     char *buf;
@@ -362,8 +369,8 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
         return ERROR_FAIL;
     }
 
-    buf = GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus,
-                    pci->bdf.dev, pci->bdf.func);
+    buf = GCSPRINTF(PCI_BDF, pcibdf->domain, pcibdf->bus,
+                    pcibdf->dev, pcibdf->func);
     rc = write(fd, buf, strlen(buf));
     /* Annoying to have two if's, but we need the errno */
     if (rc < 0)
@@ -378,22 +385,22 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
 
 #define PCI_INFO_PATH "/libxl/pci"
 
-static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
+static char *pci_info_xs_path(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                               const char *node)
 {
     return node ?
         GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
-                  pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
+                  pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func,
                   node) :
         GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
-                  pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
+                  pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func);
 }
 
 
-static int pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
+static int pci_info_xs_write(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                               const char *node, const char *val)
 {
-    char *path = pci_info_xs_path(gc, pci, node);
+    char *path = pci_info_xs_path(gc, pcibdf, node);
     int rc = libxl__xs_printf(gc, XBT_NULL, path, "%s", val);
 
     if (rc) LOGE(WARN, "Write of %s to node %s failed.", val, path);
@@ -401,18 +408,18 @@ static int pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
     return rc;
 }
 
-static char *pci_info_xs_read(libxl__gc *gc, libxl_device_pci *pci,
+static char *pci_info_xs_read(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                               const char *node)
 {
-    char *path = pci_info_xs_path(gc, pci, node);
+    char *path = pci_info_xs_path(gc, pcibdf, node);
 
     return libxl__xs_read(gc, XBT_NULL, path);
 }
 
-static void pci_info_xs_remove(libxl__gc *gc, libxl_device_pci *pci,
+static void pci_info_xs_remove(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                                const char *node)
 {
-    char *path = pci_info_xs_path(gc, pci, node);
+    char *path = pci_info_xs_path(gc, pcibdf, node);
     libxl_ctx *ctx = libxl__gc_owner(gc);
 
     /* Remove the xenstore entry */
@@ -451,9 +458,9 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
         new = pcis + *num;
 
         libxl_device_pci_init(new);
-        pci_struct_fill(new, dom, bus, dev, func, 0);
+        pcibdf_struct_fill(&new->bdf, dom, bus, dev, func);
 
-        if (pci_info_xs_read(gc, new, "domid")) /* already assigned */
+        if (pci_info_xs_read(gc, &new->bdf, "domid")) /* already assigned */
             continue;
 
         (*num)++;
@@ -477,17 +484,17 @@ void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num)
 
 /* Unbind device from its current driver, if any.  If driver_path is non-NULL,
  * store the path to the original driver in it. */
-static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
+static int sysfs_dev_unbind(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                             char **driver_path)
 {
     char * spath, *dp = NULL;
     struct stat st;
 
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/driver",
-                           pci->bdf.domain,
-                           pci->bdf.bus,
-                           pci->bdf.dev,
-                           pci->bdf.func);
+                           pcibdf->domain,
+                           pcibdf->bus,
+                           pcibdf->dev,
+                           pcibdf->func);
     if ( !lstat(spath, &st) ) {
         /* Find the canonical path to the driver. */
         dp = libxl__zalloc(gc, PATH_MAX);
@@ -501,7 +508,7 @@ static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
 
         /* Unbind from the old driver */
         spath = GCSPRINTF("%s/unbind", dp);
-        if ( sysfs_write_bdf(gc, spath, pci) < 0 ) {
+        if ( sysfs_write_bdf(gc, spath, pcibdf) < 0 ) {
             LOGE(ERROR, "Couldn't unbind device");
             return -1;
         }
@@ -639,8 +646,8 @@ bool libxl__is_igd_vga_passthru(libxl__gc *gc,
  * already exist.
  */
 
-/* Scan through /sys/.../pciback/slots looking for pci's BDF */
-static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pci)
+/* Scan through /sys/.../pciback/slots looking for BDF */
+static int pciback_dev_has_slot(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 {
     FILE *f;
     int rc = 0;
@@ -654,10 +661,10 @@ static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pci)
     }
 
     while (fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func) == 4) {
-        if (dom == pci->bdf.domain
-            && bus == pci->bdf.bus
-            && dev == pci->bdf.dev
-            && func == pci->bdf.func) {
+        if (dom == pcibdf->domain
+            && bus == pcibdf->bus
+            && dev == pcibdf->dev
+            && func == pcibdf->func) {
             rc = 1;
             goto out;
         }
@@ -667,7 +674,7 @@ out:
     return rc;
 }
 
-static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
+static int pciback_dev_is_assigned(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 {
     char * spath;
     int rc;
@@ -683,8 +690,8 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
     }
 
     spath = GCSPRINTF(SYSFS_PCIBACK_DRIVER"/"PCI_BDF,
-                      pci->bdf.domain, pci->bdf.bus,
-                      pci->bdf.dev, pci->bdf.func);
+                      pcibdf->domain, pcibdf->bus,
+                      pcibdf->dev, pcibdf->func);
     rc = lstat(spath, &st);
 
     if( rc == 0 )
@@ -695,40 +702,40 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
     return -1;
 }
 
-static int pciback_dev_assign(libxl__gc *gc, libxl_device_pci *pci)
+static int pciback_dev_assign(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 {
     int rc;
 
-    if ( (rc = pciback_dev_has_slot(gc, pci)) < 0 ) {
+    if ( (rc = pciback_dev_has_slot(gc, pcibdf)) < 0 ) {
         LOGE(ERROR, "Error checking for pciback slot");
         return ERROR_FAIL;
     } else if (rc == 0) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/new_slot",
-                             pci) < 0 ) {
+                             pcibdf) < 0 ) {
             LOGE(ERROR, "Couldn't bind device to pciback!");
             return ERROR_FAIL;
         }
     }
 
-    if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/bind", pci) < 0 ) {
+    if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/bind", pcibdf) < 0 ) {
         LOGE(ERROR, "Couldn't bind device to pciback!");
         return ERROR_FAIL;
     }
     return 0;
 }
 
-static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
+static int pciback_dev_unassign(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 {
     /* Remove from pciback */
-    if ( sysfs_dev_unbind(gc, pci, NULL) < 0 ) {
+    if ( sysfs_dev_unbind(gc, pcibdf, NULL) < 0 ) {
         LOG(ERROR, "Couldn't unbind device!");
         return ERROR_FAIL;
     }
 
     /* Remove slot if necessary */
-    if ( pciback_dev_has_slot(gc, pci) > 0 ) {
+    if ( pciback_dev_has_slot(gc, pcibdf) > 0 ) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/remove_slot",
-                             pci) < 0 ) {
+                             pcibdf) < 0 ) {
             LOGE(ERROR, "Couldn't remove pciback slot");
             return ERROR_FAIL;
         }
@@ -736,9 +743,9 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
     return 0;
 }
 
-static int libxl__device_pci_assignable_add(libxl__gc *gc,
-                                            libxl_device_pci *pci,
-                                            int rebind)
+static int libxl__pci_bdf_assignable_add(libxl__gc *gc,
+                                         libxl_pci_bdf *pcibdf,
+                                         int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     unsigned dom, bus, dev, func;
@@ -747,10 +754,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     struct stat st;
 
     /* Local copy for convenience */
-    dom = pci->bdf.domain;
-    bus = pci->bdf.bus;
-    dev = pci->bdf.dev;
-    func = pci->bdf.func;
+    dom = pcibdf->domain;
+    bus = pcibdf->bus;
+    dev = pcibdf->dev;
+    func = pcibdf->func;
 
     /* See if the device exists */
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF, dom, bus, dev, func);
@@ -760,7 +767,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
 
     /* Check to see if it's already assigned to pciback */
-    rc = pciback_dev_is_assigned(gc, pci);
+    rc = pciback_dev_is_assigned(gc, pcibdf);
     if ( rc < 0 ) {
         return ERROR_FAIL;
     }
@@ -770,7 +777,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
 
     /* Check to see if there's already a driver that we need to unbind from */
-    if ( sysfs_dev_unbind(gc, pci, &driver_path ) ) {
+    if ( sysfs_dev_unbind(gc, pcibdf, &driver_path ) ) {
         LOG(ERROR, "Couldn't unbind "PCI_BDF" from driver",
             dom, bus, dev, func);
         return ERROR_FAIL;
@@ -779,9 +786,9 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     /* Store driver_path for rebinding to dom0 */
     if ( rebind ) {
         if ( driver_path ) {
-            pci_info_xs_write(gc, pci, "driver_path", driver_path);
+            pci_info_xs_write(gc, pcibdf, "driver_path", driver_path);
         } else if ( (driver_path =
-                     pci_info_xs_read(gc, pci, "driver_path")) != NULL ) {
+                     pci_info_xs_read(gc, pcibdf, "driver_path")) != NULL ) {
             LOG(INFO, PCI_BDF" not bound to a driver, will be rebound to %s",
                 dom, bus, dev, func, driver_path);
         } else {
@@ -789,10 +796,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
                 dom, bus, dev, func);
         }
     } else {
-        pci_info_xs_remove(gc, pci, "driver_path");
+        pci_info_xs_remove(gc, pcibdf, "driver_path");
     }
 
-    if ( pciback_dev_assign(gc, pci) ) {
+    if ( pciback_dev_assign(gc, pcibdf) ) {
         LOG(ERROR, "Couldn't bind device to pciback!");
         return ERROR_FAIL;
     }
@@ -803,7 +810,7 @@ quarantine:
      * so always pass XEN_DOMCTL_DEV_RDM_RELAXED to avoid assignment being
      * unnecessarily denied.
      */
-    rc = xc_assign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci),
+    rc = xc_assign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pcibdf),
                           XEN_DOMCTL_DEV_RDM_RELAXED);
     if ( rc < 0 ) {
         LOG(ERROR, "failed to quarantine "PCI_BDF, dom, bus, dev, func);
@@ -813,33 +820,33 @@ quarantine:
     return 0;
 }
 
-static int libxl__device_pci_assignable_remove(libxl__gc *gc,
-                                               libxl_device_pci *pci,
-                                               int rebind)
+static int libxl__pci_bdf_assignable_remove(libxl__gc *gc,
+                                            libxl_pci_bdf *pcibdf,
+                                            int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     int rc;
     char *driver_path;
 
     /* De-quarantine */
-    rc = xc_deassign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci));
+    rc = xc_deassign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pcibdf));
     if ( rc < 0 ) {
-        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pci->bdf.domain, pci->bdf.bus,
-            pci->bdf.dev, pci->bdf.func);
+        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pcibdf->domain,
+            pcibdf->bus, pcibdf->dev, pcibdf->func);
         return ERROR_FAIL;
     }
 
     /* Unbind from pciback */
-    if ( (rc = pciback_dev_is_assigned(gc, pci)) < 0 ) {
+    if ( (rc = pciback_dev_is_assigned(gc, pcibdf)) < 0 ) {
         return ERROR_FAIL;
     } else if ( rc ) {
-        pciback_dev_unassign(gc, pci);
+        pciback_dev_unassign(gc, pcibdf);
     } else {
         LOG(WARN, "Not bound to pciback");
     }
 
     /* Rebind if necessary */
-    driver_path = pci_info_xs_read(gc, pci, "driver_path");
+    driver_path = pci_info_xs_read(gc, pcibdf, "driver_path");
 
     if ( driver_path ) {
         if ( rebind ) {
@@ -847,12 +854,12 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
 
             if ( sysfs_write_bdf(gc,
                                  GCSPRINTF("%s/bind", driver_path),
-                                 pci) < 0 ) {
+                                 pcibdf) < 0 ) {
                 LOGE(ERROR, "Couldn't bind device to %s", driver_path);
                 return -1;
             }
 
-            pci_info_xs_remove(gc, pci, "driver_path");
+            pci_info_xs_remove(gc, pcibdf, "driver_path");
         }
     } else {
         if ( rebind ) {
@@ -870,7 +877,7 @@ int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci,
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_add(gc, pci, rebind);
+    rc = libxl__pci_bdf_assignable_add(gc, &pci->bdf, rebind);
 
     GC_FREE;
     return rc;
@@ -883,7 +890,7 @@ int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci,
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_remove(gc, pci, rebind);
+    rc = libxl__pci_bdf_assignable_remove(gc, &pci->bdf, rebind);
 
     GC_FREE;
     return rc;
@@ -1385,7 +1392,7 @@ static void pci_add_dm_done(libxl__egc *egc,
     /* Don't restrict writes to the PCI config space from this VM */
     if (pci->permissive) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/permissive",
-                             pci) < 0 ) {
+                             &pci->bdf) < 0 ) {
             LOGD(ERROR, domainid, "Setting permissive for device");
             rc = ERROR_FAIL;
             goto out;
@@ -1401,7 +1408,8 @@ out_no_irq:
             rc = ERROR_FAIL;
             goto out;
         }
-        r = xc_assign_device(ctx->xch, domid, pci_encode_bdf(pci), flag);
+        r = xc_assign_device(ctx->xch, domid, pci_encode_bdf(&pci->bdf),
+                             flag);
         if (r < 0 && (hvm || errno != ENOSYS)) {
             LOGED(ERROR, domainid, "xc_assign_device failed");
             rc = ERROR_FAIL;
@@ -1480,17 +1488,21 @@ int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
     return AO_INPROGRESS;
 }
 
-static bool libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
+static bool is_bdf_assignable(libxl_ctx *ctx, libxl_pci_bdf *pcibdf)
 {
     libxl_device_pci *pcis;
-    int num;
-    bool assignable;
+    int num, i;
 
     pcis = libxl_device_pci_assignable_list(ctx, &num);
-    assignable = is_pci_in_array(pcis, num, pci);
+
+    for (i = 0; i < num; i++) {
+        if (COMPARE_BDF(pcibdf, &pcis[i].bdf))
+            break;
+    }
+
     libxl_device_pci_assignable_list_free(pcis, num);
 
-    return assignable;
+    return i < num;
 }
 
 static void device_pci_add_stubdom_wait(libxl__egc *egc,
@@ -1523,7 +1535,8 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     pas->callback = device_pci_add_stubdom_done;
 
     if (libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {
-        rc = xc_test_assign_device(ctx->xch, domid, pci_encode_bdf(pci));
+        rc = xc_test_assign_device(ctx->xch, domid,
+                                   pci_encode_bdf(&pci->bdf));
         if (rc) {
             LOGD(ERROR, domid,
                  "PCI device %04x:%02x:%02x.%u %s?",
@@ -1537,20 +1550,20 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     rc = libxl__device_pci_setdefault(gc, domid, pci, !starting);
     if (rc) goto out;
 
-    if (pci->seize && !pciback_dev_is_assigned(gc, pci)) {
-        rc = libxl__device_pci_assignable_add(gc, pci, 1);
+    if (pci->seize && !pciback_dev_is_assigned(gc, &pci->bdf)) {
+        rc = libxl__pci_bdf_assignable_add(gc, &pci->bdf, 1);
         if ( rc )
             goto out;
     }
 
-    if (!libxl_pci_assignable(ctx, pci)) {
+    if (!is_bdf_assignable(ctx, &pci->bdf)) {
         LOGD(ERROR, domid, "PCI device %x:%x:%x.%x is not assignable",
              pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         rc = ERROR_FAIL;
         goto out;
     }
 
-    rc = pci_info_xs_write(gc, pci, "domid", GCSPRINTF("%u", domid));
+    rc = pci_info_xs_write(gc, &pci->bdf, "domid", GCSPRINTF("%u", domid));
     if (rc) goto out;
 
     libxl__device_pci_reset(gc, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
@@ -1674,7 +1687,7 @@ static void device_pci_add_done(libxl__egc *egc,
              "PCI device %x:%x:%x.%x (rc %d)",
              pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
              rc);
-        pci_info_xs_remove(gc, pci, "domid");
+        pci_info_xs_remove(gc, &pci->bdf, "domid");
     }
     libxl_device_pci_dispose(pci);
     aodev->rc = rc;
@@ -2114,7 +2127,8 @@ static void pci_remove_detached(libxl__egc *egc,
     }
 
     if (!isstubdom) {
-        rc = xc_deassign_device(CTX->xch, domid, pci_encode_bdf(pci));
+        rc = xc_deassign_device(CTX->xch, domid,
+                                pci_encode_bdf(&pci->bdf));
         if (rc < 0 && (prs->hvm || errno != ENOSYS))
             LOGED(ERROR, domainid, "xc_deassign_device failed");
     }
@@ -2243,7 +2257,7 @@ out:
     libxl__ev_time_deregister(gc, &prs->timeout);
     libxl__ev_time_deregister(gc, &prs->retry_timer);
 
-    if (!rc) pci_info_xs_remove(gc, pci, "domid");
+    if (!rc) pci_info_xs_remove(gc, &pci->bdf, "domid");
 
     libxl_device_pci_dispose(pci);
     aodev->rc = rc;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 20:00:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 20:00:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47768.84601 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAK-00053v-Ij; Tue, 08 Dec 2020 20:00:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47768.84601; Tue, 08 Dec 2020 20:00:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAJ-00052y-N6; Tue, 08 Dec 2020 20:00:51 +0000
Received: by outflank-mailman (input) for mailman id 47768;
 Tue, 08 Dec 2020 20:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kmjAD-0004pR-Th
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 20:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmjAD-00086k-7e; Tue, 08 Dec 2020 20:00:45 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=desktop.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihJ-0001p0-L0; Tue, 08 Dec 2020 19:30:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=A0Hhjl7MiFPC/iDK9UOxUI+73tMCfqzZ/Aivp/KDVnE=; b=mpknZnVJCJAfLM1qs5QMogrnah
	vv6OSjolmpFU92XRwCjW4WEA2AcKwT+6rM6rS953b+tNMKW4+8OiHBsR4Yv+W3mhRR448FMMjTp7s
	lklqfldBYb84KLt8bGJjOR9rjWI4Gv7amFbWhqVjppiTJIL08kUM6lLabyxIIXbRFCjs=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Wei Liu <wl@xen.org>,
	Ian Jackson <iwj@xenproject.org>
Subject: [PATCH v6 14/25] libxl: use COMPARE_PCI() macro in is_pci_in_array()...
Date: Tue,  8 Dec 2020 19:30:22 +0000
Message-Id: <20201208193033.11306-15-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201208193033.11306-1-paul@xen.org>
References: <20201208193033.11306-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... rather than an open-coded equivalent.

This patch tidies up the is_pci_in_array() function, making it take a single
'libxl_device_pci' argument rather than separate domain, bus, device and
function arguments. The already-available COMPARE_PCI() macro can then be
used, and the function is also modified to return 'bool' rather than 'int'.

The patch also modifies libxl_pci_assignable() to use is_pci_in_array() rather
than a separate open-coded equivalent, and to return a 'bool' rather than an
'int'.

NOTE: The COMPARE_PCI() macro is also fixed to include the 'domain' in its
      comparison, which should always have been the case.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Acked-by: Wei Liu <wl@xen.org>
---
Cc: Ian Jackson <iwj@xenproject.org>
---
 tools/libs/light/libxl_internal.h |  7 +++---
 tools/libs/light/libxl_pci.c      | 38 +++++++++++--------------------
 2 files changed, 17 insertions(+), 28 deletions(-)

diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index d0c23def3c3e..c79523ba9248 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -4746,9 +4746,10 @@ void libxl__xcinfo2xlinfo(libxl_ctx *ctx,
  * devices have same identifier. */
 #define COMPARE_DEVID(a, b) ((a)->devid == (b)->devid)
 #define COMPARE_DISK(a, b) (!strcmp((a)->vdev, (b)->vdev))
-#define COMPARE_PCI(a, b) ((a)->func == (b)->func &&    \
-                           (a)->bus == (b)->bus &&      \
-                           (a)->dev == (b)->dev)
+#define COMPARE_PCI(a, b) ((a)->domain == (b)->domain && \
+                           (a)->bus == (b)->bus &&       \
+                           (a)->dev == (b)->dev &&       \
+                           (a)->func == (b)->func)
 #define COMPARE_USB(a, b) ((a)->ctrl == (b)->ctrl && \
                            (a)->port == (b)->port)
 #define COMPARE_USBCTRL(a, b) ((a)->devid == (b)->devid)
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 2a594e432855..74c2196ae339 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -336,24 +336,17 @@ retry_transaction2:
     return 0;
 }
 
-static int is_pci_in_array(libxl_device_pci *assigned, int num_assigned,
-                           int dom, int bus, int dev, int func)
+static bool is_pci_in_array(libxl_device_pci *pcis, int num,
+                            libxl_device_pci *pci)
 {
     int i;
 
-    for(i = 0; i < num_assigned; i++) {
-        if ( assigned[i].domain != dom )
-            continue;
-        if ( assigned[i].bus != bus )
-            continue;
-        if ( assigned[i].dev != dev )
-            continue;
-        if ( assigned[i].func != func )
-            continue;
-        return 1;
+    for (i = 0; i < num; i++) {
+        if (COMPARE_PCI(pci, &pcis[i]))
+            break;
     }
 
-    return 0;
+    return i < num;
 }
 
 /* Write the standard BDF into the sysfs path given by sysfs_path. */
@@ -1487,21 +1480,17 @@ int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
     return AO_INPROGRESS;
 }
 
-static int libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
+static bool libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
 {
     libxl_device_pci *pcis;
-    int num, i;
+    int num;
+    bool assignable;
 
     pcis = libxl_device_pci_assignable_list(ctx, &num);
-    for (i = 0; i < num; i++) {
-        if (pcis[i].domain == pci->domain &&
-            pcis[i].bus == pci->bus &&
-            pcis[i].dev == pci->dev &&
-            pcis[i].func == pci->func)
-            break;
-    }
+    assignable = is_pci_in_array(pcis, num, pci);
     libxl_device_pci_assignable_list_free(pcis, num);
-    return i != num;
+
+    return assignable;
 }
 
 static void device_pci_add_stubdom_wait(libxl__egc *egc,
@@ -1834,8 +1823,7 @@ static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
         goto out_fail;
     }
 
-    attached = is_pci_in_array(pcis, num, pci->domain,
-                               pci->bus, pci->dev, pci->func);
+    attached = is_pci_in_array(pcis, num, pci);
     libxl_device_pci_list_free(pcis, num);
 
     rc = ERROR_INVAL;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 20:00:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 20:00:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47769.84610 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAL-00056q-Af; Tue, 08 Dec 2020 20:00:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47769.84610; Tue, 08 Dec 2020 20:00:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAK-00055J-IK; Tue, 08 Dec 2020 20:00:52 +0000
Received: by outflank-mailman (input) for mailman id 47769;
 Tue, 08 Dec 2020 20:00:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kmjAE-0004pX-00
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 20:00:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmjAD-00086w-E8; Tue, 08 Dec 2020 20:00:45 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=desktop.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihT-0001p0-Bg; Tue, 08 Dec 2020 19:31:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=oIgRO1UliD0pqS423IkrFQbxvGE4mpUSzQoY5cMSB2U=; b=Y/Zi/AcMSIem7j9OgSSum2zfRA
	8FYyNk7eHSnHs6cu13ELU96e+uBAMiaq4CJcvouCAA/RPEpOTpi2UH7Abbog0HVO2aCEx580INw6Z
	mLSpdApGuzDVFwX0h6w6WJzi44i5OXAexf82ZCd4j6jCSiGr0rMtWEIp7HJf9Qm0up3M=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Ian Jackson <iwj@xenproject.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v6 25/25] libxl / libxlu: support 'xl pci-attach/detach' by name
Date: Tue,  8 Dec 2020 19:30:33 +0000
Message-Id: <20201208193033.11306-26-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201208193033.11306-1-paul@xen.org>
References: <20201208193033.11306-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This patch adds a 'name' field to the idl for 'libxl_device_pci', and
libxlu_pci_parse_spec_string() is modified to parse the new 'name'
parameter of PCI_SPEC_STRING detailed in the updated documentation in
xl-pci-configuration(5).

If the 'name' field is non-NULL then both libxl_device_pci_add() and
libxl_device_pci_remove() will use it to look up the device BDF in
the list of assignable devices.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Acked-by: Wei Liu <wl@xen.org>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>

v6:
 - Re-base
 - Slight modification to the patch name
 - Kept Wei's A-b since modifications are small
---
 tools/include/libxl.h            |  6 +++
 tools/libs/light/libxl_pci.c     | 67 +++++++++++++++++++++++++++++---
 tools/libs/light/libxl_types.idl |  1 +
 tools/libs/util/libxlu_pci.c     |  7 +++-
 4 files changed, 75 insertions(+), 6 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index fda611f88960..90a7aa9b731a 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -476,6 +476,12 @@
  */
 #define LIBXL_HAVE_PCI_ASSIGNABLE_BDF 1
 
+/*
+ * LIBXL_HAVE_DEVICE_PCI_NAME indicates that the 'name' field of
+ * libxl_device_pci is defined.
+ */
+#define LIBXL_HAVE_DEVICE_PCI_NAME 1
+
 /*
  * libxl ABI compatibility
  *
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index e11574e73f59..5d83db2a5981 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -60,6 +60,10 @@ static void libxl_create_pci_backend_device(libxl__gc *gc,
                                             int num,
                                             const libxl_device_pci *pci)
 {
+    if (pci->name) {
+        flexarray_append(back, GCSPRINTF("name-%d", num));
+        flexarray_append(back, GCSPRINTF("%s", pci->name));
+    }
     flexarray_append(back, GCSPRINTF("key-%d", num));
     flexarray_append(back, GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func));
     flexarray_append(back, GCSPRINTF("dev-%d", num));
@@ -284,6 +288,7 @@ retry_transaction:
 
 retry_transaction2:
     t = xs_transaction_start(ctx->xsh);
+    xs_rm(ctx->xsh, t, GCSPRINTF("%s/name-%d", be_path, i));
     xs_rm(ctx->xsh, t, GCSPRINTF("%s/state-%d", be_path, i));
     xs_rm(ctx->xsh, t, GCSPRINTF("%s/key-%d", be_path, i));
     xs_rm(ctx->xsh, t, GCSPRINTF("%s/dev-%d", be_path, i));
@@ -322,6 +327,12 @@ retry_transaction2:
             xs_write(ctx->xsh, t, GCSPRINTF("%s/vdevfn-%d", be_path, j - 1), tmp, strlen(tmp));
             xs_rm(ctx->xsh, t, tmppath);
         }
+        tmppath = GCSPRINTF("%s/name-%d", be_path, j);
+        tmp = libxl__xs_read(gc, t, tmppath);
+        if (tmp) {
+            xs_write(ctx->xsh, t, GCSPRINTF("%s/name-%d", be_path, j - 1), tmp, strlen(tmp));
+            xs_rm(ctx->xsh, t, tmppath);
+        }
     }
     if (!xs_transaction_end(ctx->xsh, t, 0))
         if (errno == EAGAIN)
@@ -1610,6 +1621,23 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     pas->starting = starting;
     pas->callback = device_pci_add_stubdom_done;
 
+    if (pci->name) {
+        libxl_pci_bdf *pcibdf =
+            libxl_pci_bdf_assignable_name2bdf(CTX, pci->name);
+
+        if (!pcibdf) {
+            rc = ERROR_FAIL;
+            goto out;
+        }
+
+        LOGD(DETAIL, domid, "'%s' -> %04x:%02x:%02x.%u", pci->name,
+             pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func);
+
+        libxl_pci_bdf_copy(CTX, &pci->bdf, pcibdf);
+        libxl_pci_bdf_dispose(pcibdf);
+        free(pcibdf);
+    }
+
     if (libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {
         rc = xc_test_assign_device(ctx->xch, domid,
                                    pci_encode_bdf(&pci->bdf));
@@ -1758,11 +1786,19 @@ static void device_pci_add_done(libxl__egc *egc,
     libxl_device_pci *pci = &pas->pci;
 
     if (rc) {
-        LOGD(ERROR, domid,
-             "libxl__device_pci_add  failed for "
-             "PCI device %x:%x:%x.%x (rc %d)",
-             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
-             rc);
+        if (pci->name) {
+            LOGD(ERROR, domid,
+                 "libxl__device_pci_add failed for "
+                 "PCI device '%s' (rc %d)",
+                 pci->name,
+                 rc);
+        } else {
+            LOGD(ERROR, domid,
+                 "libxl__device_pci_add failed for "
+                 "PCI device %x:%x:%x.%x (rc %d)",
+                 pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
+                 rc);
+        }
         pci_info_xs_remove(gc, &pci->bdf, "domid");
     }
     libxl_device_pci_dispose(pci);
@@ -2279,6 +2315,23 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
     libxl__ev_time_init(&prs->timeout);
     libxl__ev_time_init(&prs->retry_timer);
 
+    if (pci->name) {
+        libxl_pci_bdf *pcibdf =
+            libxl_pci_bdf_assignable_name2bdf(CTX, pci->name);
+
+        if (!pcibdf) {
+            rc = ERROR_FAIL;
+            goto out;
+        }
+
+        LOGD(DETAIL, domid, "'%s' -> %04x:%02x:%02x.%u", pci->name,
+             pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func);
+
+        libxl_pci_bdf_copy(CTX, &prs->pci.bdf, pcibdf);
+        libxl_pci_bdf_dispose(pcibdf);
+        free(pcibdf);
+    }
+
     prs->orig_vdev = pci->vdevfn & ~7U;
 
     if ( pci->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
@@ -2413,6 +2466,10 @@ static int libxl__device_pci_from_xs_be(libxl__gc *gc,
         } while ((p = strtok_r(NULL, ",=", &saveptr)) != NULL);
     }
 
+    s = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/name-%d", be_path, nr));
+    if (s)
+        pci->name = strdup(s);
+
     return 0;
 }
 
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 21a2cf5c1c9b..32cc99beff6d 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -779,6 +779,7 @@ libxl_pci_bdf = Struct("pci_bdf", [
 
 libxl_device_pci = Struct("device_pci", [
     ("bdf", libxl_pci_bdf),
+    ("name", string),
     ("vdevfn", uint32),
     ("vfunc_mask", uint32),
     ("msitranslate", bool),
diff --git a/tools/libs/util/libxlu_pci.c b/tools/libs/util/libxlu_pci.c
index a8b6ce542736..543a1f80e99e 100644
--- a/tools/libs/util/libxlu_pci.c
+++ b/tools/libs/util/libxlu_pci.c
@@ -151,6 +151,7 @@ int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pcidev,
 {
     const char *ptr = str;
     bool bdf_present = false;
+    bool name_present = false;
     int ret;
 
     /* Attempt to parse 'bdf' as positional parameter */
@@ -193,6 +194,10 @@ int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pcidev,
             pcidev->power_mgmt = atoi(val);
         } else if (!strcmp(key, "rdm_policy")) {
             ret = parse_rdm_policy(cfg, &pcidev->rdm_policy, val);
+        } else if (!strcmp(key, "name")) {
+            name_present = true;
+            pcidev->name = strdup(val);
+            if (!pcidev->name) ret = ERROR_NOMEM;
         } else {
             XLU__PCI_ERR(cfg, "Unknown PCI_SPEC_STRING option: %s", key);
             ret = ERROR_INVAL;
@@ -205,7 +210,7 @@ int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pcidev,
             return ret;
     }
 
-    if (!bdf_present)
+    if (!(bdf_present ^ name_present))
         return ERROR_INVAL;
 
     return 0;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 20:00:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 20:00:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47770.84622 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAM-0005A2-NT; Tue, 08 Dec 2020 20:00:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47770.84622; Tue, 08 Dec 2020 20:00:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAL-00059Y-Oc; Tue, 08 Dec 2020 20:00:53 +0000
Received: by outflank-mailman (input) for mailman id 47770;
 Tue, 08 Dec 2020 20:00:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kmjAE-0004pY-01
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 20:00:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmjAD-00086a-3y; Tue, 08 Dec 2020 20:00:45 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=desktop.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihK-0001p0-Hr; Tue, 08 Dec 2020 19:30:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=UhlxfGmI/aEuEJYbZvO+wCWpWg3W+k9Y3YHBgam9bbQ=; b=F/WcsDkRt8mlAXrlFtOscwsnHx
	rUnmrIHMRw331u/FS2GTWJ3Eo7GPC0/q0ojoisBBfAWMxsE6mnNoCLgR/m7cJja0IvJrMATQdqq+f
	GxFm5tv93YUJIgH8P+4nzE0iwh/w2rZHIP1t/W3tiW99Q8P62p9280RIqnsyHQ6sb/Io=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Wei Liu <wl@xen.org>,
	Ian Jackson <iwj@xenproject.org>
Subject: [PATCH v6 15/25] docs/man: extract documentation of PCI_SPEC_STRING from the xl.cfg manpage...
Date: Tue,  8 Dec 2020 19:30:23 +0000
Message-Id: <20201208193033.11306-16-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201208193033.11306-1-paul@xen.org>
References: <20201208193033.11306-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... and put it into a new xl-pci-configuration(5) manpage, akin to the
xl-network-configuration(5) and xl-disk-configuration(5) manpages.

This patch moves the content of the section verbatim. A subsequent patch
will improve the documentation, once it is in its new location.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Acked-by: Wei Liu <wl@xen.org>
---
Cc: Ian Jackson <iwj@xenproject.org>
---
 docs/man/xl-pci-configuration.5.pod | 78 +++++++++++++++++++++++++++++
 docs/man/xl.cfg.5.pod.in            | 68 +------------------------
 2 files changed, 79 insertions(+), 67 deletions(-)
 create mode 100644 docs/man/xl-pci-configuration.5.pod

diff --git a/docs/man/xl-pci-configuration.5.pod b/docs/man/xl-pci-configuration.5.pod
new file mode 100644
index 000000000000..72a27bd95dec
--- /dev/null
+++ b/docs/man/xl-pci-configuration.5.pod
@@ -0,0 +1,78 @@
+=encoding utf8
+
+=head1 NAME
+
+xl-pci-configuration - XL PCI Configuration Syntax
+
+=head1 SYNTAX
+
+This document specifies the format for B<PCI_SPEC_STRING> which is used by
+the L<xl.cfg(5)> pci configuration option, and related L<xl(1)> commands.
+
+Each B<PCI_SPEC_STRING> has the form of
+B<[DDDD:]BB:DD.F[@VSLOT],KEY=VALUE,KEY=VALUE,...> where:
+
+=over 4
+
+=item B<[DDDD:]BB:DD.F>
+
+Identifies the PCI device from the host perspective in the domain
+(B<DDDD>), Bus (B<BB>), Device (B<DD>) and Function (B<F>) syntax. This is
+the same scheme as used in the output of B<lspci(1)> for the device in
+question.
+
+Note: by default B<lspci(1)> will omit the domain (B<DDDD>) if it
+is zero and it is optional here also. You may specify the function
+(B<F>) as B<*> to indicate all functions.
+
+=item B<@VSLOT>
+
+Specifies the virtual slot where the guest will see this
+device. This is equivalent to the B<DD> which the guest sees. In a
+guest B<DDDD> and B<BB> are C<0000:00>.
+
+=item B<permissive=BOOLEAN>
+
+By default pciback only allows PV guests to write "known safe" values
+into PCI configuration space, likewise QEMU (both qemu-xen and
+qemu-xen-traditional) imposes the same constraint on HVM guests.
+However, many devices require writes to other areas of the configuration space
+in order to operate properly.  This option tells the backend (pciback or QEMU)
+to allow all writes to the PCI configuration space of this device by this
+domain.
+
+B<This option should be enabled with caution:> it gives the guest much
+more control over the device, which may have security or stability
+implications.  It is recommended to only enable this option for
+trusted VMs under administrator's control.
+
+=item B<msitranslate=BOOLEAN>
+
+Specifies that MSI-INTx translation should be turned on for the PCI
+device. When enabled, MSI-INTx translation will always enable MSI on
+the PCI device regardless of whether the guest uses INTx or MSI. Some
+device drivers, such as NVIDIA's, detect an inconsistency and do not
+function when this option is enabled. Therefore the default is false (0).
+
+=item B<seize=BOOLEAN>
+
+Tells B<xl> to automatically attempt to re-assign a device to
+pciback if it is not already assigned.
+
+B<WARNING:> If you set this option, B<xl> will gladly re-assign a critical
+system device, such as a network or a disk controller being used by
+dom0 without confirmation.  Please use with care.
+
+=item B<power_mgmt=BOOLEAN>
+
+B<(HVM only)> Specifies that the VM should be able to program the
+D0-D3hot power management states for the PCI device. The default is false (0).
+
+=item B<rdm_policy=STRING>
+
+B<(HVM/x86 only)> This is the same as the policy setting inside the B<rdm>
+option but just specific to a given device. The default is "relaxed".
+
+Note: this would override global B<rdm> option.
+
+=back
diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 12201a7e5461..c8e017f950de 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -1101,73 +1101,7 @@ option is valid only when the B<controller> option is specified.
 =item B<pci=[ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...]>
 
 Specifies the host PCI devices to passthrough to this guest.
-Each B<PCI_SPEC_STRING> has the form of
-B<[DDDD:]BB:DD.F[@VSLOT],KEY=VALUE,KEY=VALUE,...> where:
-
-=over 4
-
-=item B<[DDDD:]BB:DD.F>
-
-Identifies the PCI device from the host perspective in the domain
-(B<DDDD>), Bus (B<BB>), Device (B<DD>) and Function (B<F>) syntax. This is
-the same scheme as used in the output of B<lspci(1)> for the device in
-question.
-
-Note: by default B<lspci(1)> will omit the domain (B<DDDD>) if it
-is zero and it is optional here also. You may specify the function
-(B<F>) as B<*> to indicate all functions.
-
-=item B<@VSLOT>
-
-Specifies the virtual slot where the guest will see this
-device. This is equivalent to the B<DD> which the guest sees. In a
-guest B<DDDD> and B<BB> are C<0000:00>.
-
-=item B<permissive=BOOLEAN>
-
-By default pciback only allows PV guests to write "known safe" values
-into PCI configuration space, likewise QEMU (both qemu-xen and
-qemu-xen-traditional) imposes the same constraint on HVM guests.
-However, many devices require writes to other areas of the configuration space
-in order to operate properly.  This option tells the backend (pciback or QEMU)
-to allow all writes to the PCI configuration space of this device by this
-domain.
-
-B<This option should be enabled with caution:> it gives the guest much
-more control over the device, which may have security or stability
-implications.  It is recommended to only enable this option for
-trusted VMs under administrator's control.
-
-=item B<msitranslate=BOOLEAN>
-
-Specifies that MSI-INTx translation should be turned on for the PCI
-device. When enabled, MSI-INTx translation will always enable MSI on
-the PCI device regardless of whether the guest uses INTx or MSI. Some
-device drivers, such as NVIDIA's, detect an inconsistency and do not
-function when this option is enabled. Therefore the default is false (0).
-
-=item B<seize=BOOLEAN>
-
-Tells B<xl> to automatically attempt to re-assign a device to
-pciback if it is not already assigned.
-
-B<WARNING:> If you set this option, B<xl> will gladly re-assign a critical
-system device, such as a network or a disk controller being used by
-dom0 without confirmation.  Please use with care.
-
-=item B<power_mgmt=BOOLEAN>
-
-B<(HVM only)> Specifies that the VM should be able to program the
-D0-D3hot power management states for the PCI device. The default is false (0).
-
-=item B<rdm_policy=STRING>
-
-B<(HVM/x86 only)> This is the same as the policy setting inside the B<rdm>
-option but just specific to a given device. The default is "relaxed".
-
-Note: this would override global B<rdm> option.
-
-=back
+See L<xl-pci-configuration(5)> for more details.
 
 =item B<pci_permissive=BOOLEAN>
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 20:00:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 20:00:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47771.84640 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAO-0005IJ-Pf; Tue, 08 Dec 2020 20:00:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47771.84640; Tue, 08 Dec 2020 20:00:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAO-0005H0-3E; Tue, 08 Dec 2020 20:00:56 +0000
Received: by outflank-mailman (input) for mailman id 47771;
 Tue, 08 Dec 2020 20:00:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kmjAE-0004ph-0u
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 20:00:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmjAD-00086U-2S; Tue, 08 Dec 2020 20:00:45 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=desktop.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihH-0001p0-Hc; Tue, 08 Dec 2020 19:30:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=JrM/ntLiiP2bhtck3185mew5Px47koc4HiV7GFFcjR4=; b=RCxDmlYtfDlShoJNgqqQhGRC/K
	+EC9OxNj1OESZtWMkku0W0BCNiG3fiDS+PpE7+U4FGTwePL+hbhe6knMyxcWCokvstCCXrigXzZWO
	BElmxg2gL4mcomUqNVz62xHeUus2qDuk9uA8nF1OLBAvW7W/QqBwNAkFmrJZAmxkhCrc=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Wei Liu <wl@xen.org>,
	Ian Jackson <iwj@xenproject.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v6 12/25] libxl: make sure callers of libxl_device_pci_list() free the list after use
Date: Tue,  8 Dec 2020 19:30:20 +0000
Message-Id: <20201208193033.11306-13-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201208193033.11306-1-paul@xen.org>
References: <20201208193033.11306-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

A previous patch introduced libxl_device_pci_list_free() which should be used
by callers of libxl_device_pci_list() to properly dispose of the exported
'libxl_device_pci' types and free the memory holding them. Whilst all
current callers do ensure the memory is freed, only the code in xl's
pcilist() function actually calls libxl_device_pci_dispose(). As it stands
this laxity does not lead to any memory leaks, but the simple addition of
e.g. a 'string' into the idl definition of 'libxl_device_pci' would lead
to leaks.

This patch makes sure all callers of libxl_device_pci_list() can call
libxl_device_pci_list_free() by keeping copies of 'libxl_device_pci'
structures inline in 'pci_add_state' and 'pci_remove_state' (and also making
sure these are properly disposed at the end of the operations) rather
than keeping pointers to the structures returned by libxl_device_pci_list().

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Acked-by: Wei Liu <wl@xen.org>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libs/light/libxl_pci.c | 68 ++++++++++++++++++++----------------
 tools/xl/xl_pci.c            |  3 +-
 2 files changed, 38 insertions(+), 33 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 37a6ec9eb443..c555f3ed29c4 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1025,7 +1025,7 @@ typedef struct pci_add_state {
     libxl__xswait_state xswait;
     libxl__ev_qmp qmp;
     libxl__ev_time timeout;
-    libxl_device_pci *pci;
+    libxl_device_pci pci;
     libxl_domid pci_domid;
 } pci_add_state;
 
@@ -1097,7 +1097,7 @@ static void pci_add_qemu_trad_watch_state_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
 
     rc = check_qemu_running(gc, domid, xswa, rc, state);
     if (rc == ERROR_NOT_READY)
@@ -1118,7 +1118,7 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
     libxl__ev_qmp *const qmp = &pas->qmp;
 
     rc = libxl__ev_time_register_rel(ao, &pas->timeout,
@@ -1199,7 +1199,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
     int dev_slot, dev_func;
 
     /* Convenience aliases */
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
 
     if (rc) goto out;
 
@@ -1300,7 +1300,7 @@ static void pci_add_dm_done(libxl__egc *egc,
 
     /* Convenience aliases */
     bool starting = pas->starting;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
     bool hvm = libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM;
 
     libxl__ev_qmp_dispose(gc, &pas->qmp);
@@ -1516,7 +1516,10 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     GCNEW(pas);
     pas->aodev = aodev;
     pas->domid = domid;
-    pas->pci = pci;
+
+    libxl_device_pci_copy(CTX, &pas->pci, pci);
+    pci = &pas->pci;
+
     pas->starting = starting;
     pas->callback = device_pci_add_stubdom_done;
 
@@ -1555,12 +1558,6 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
     if (stubdomid != 0) {
-        libxl_device_pci *pci_s;
-
-        GCNEW(pci_s);
-        libxl_device_pci_init(pci_s);
-        libxl_device_pci_copy(CTX, pci_s, pci);
-        pas->pci = pci_s;
         pas->callback = device_pci_add_stubdom_wait;
 
         do_pci_add(egc, stubdomid, pas); /* must be last */
@@ -1619,7 +1616,7 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
 
     if (rc) goto out;
 
@@ -1670,7 +1667,7 @@ static void device_pci_add_done(libxl__egc *egc,
     EGC_GC;
     libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
 
     if (rc) {
         LOGD(ERROR, domid,
@@ -1680,6 +1677,7 @@ static void device_pci_add_done(libxl__egc *egc,
              rc);
         pci_info_xs_remove(gc, pci, "domid");
     }
+    libxl_device_pci_dispose(pci);
     aodev->rc = rc;
     aodev->callback(egc, aodev);
 }
@@ -1770,7 +1768,7 @@ static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
 typedef struct pci_remove_state {
     libxl__ao_device *aodev;
     libxl_domid domid;
-    libxl_device_pci *pci;
+    libxl_device_pci pci;
     bool force;
     bool hvm;
     unsigned int orig_vdev;
@@ -1812,23 +1810,26 @@ static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
 {
     STATE_AO_GC(prs->aodev->ao);
     libxl_ctx *ctx = libxl__gc_owner(gc);
-    libxl_device_pci *assigned;
+    libxl_device_pci *pcis;
+    bool attached;
     uint32_t domid = prs->domid;
     libxl_domain_type type = libxl__domain_type(gc, domid);
-    libxl_device_pci *pci = prs->pci;
+    libxl_device_pci *pci = &prs->pci;
     int rc, num;
     uint32_t domainid = domid;
 
-    assigned = libxl_device_pci_list(ctx, domid, &num);
-    if (assigned == NULL) {
+    pcis = libxl_device_pci_list(ctx, domid, &num);
+    if (!pcis) {
         rc = ERROR_FAIL;
         goto out_fail;
     }
-    libxl__ptr_add(gc, assigned);
+
+    attached = is_pci_in_array(pcis, num, pci->domain,
+                               pci->bus, pci->dev, pci->func);
+    libxl_device_pci_list_free(pcis, num);
 
     rc = ERROR_INVAL;
-    if ( !is_pci_in_array(assigned, num, pci->domain,
-                          pci->bus, pci->dev, pci->func) ) {
+    if (!attached) {
         LOGD(ERROR, domainid, "PCI device not attached to this domain");
         goto out_fail;
     }
@@ -1928,7 +1929,7 @@ static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = prs->domid;
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
 
     rc = check_qemu_running(gc, domid, xswa, rc, state);
     if (rc == ERROR_NOT_READY)
@@ -1950,7 +1951,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     int rc;
 
     /* Convenience aliases */
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
 
     rc = libxl__ev_time_register_rel(ao, &prs->timeout,
                                      pci_remove_timeout,
@@ -2020,7 +2021,7 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl__ao *const ao = prs->aodev->ao;
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
 
     if (rc) goto out;
 
@@ -2075,7 +2076,7 @@ static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
     pci_remove_state *prs = CONTAINER_OF(ev, *prs, timeout);
 
     /* Convenience aliases */
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
 
     LOGD(WARN, prs->domid, "timed out waiting for DM to remove "
          PCI_PT_QDEV_ID, pci->bus, pci->dev, pci->func);
@@ -2096,7 +2097,7 @@ static void pci_remove_detached(libxl__egc *egc,
     bool isstubdom;
 
     /* Convenience aliases */
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
     libxl_domid domid = prs->domid;
 
     /* Cleaning QMP states ASAP */
@@ -2159,7 +2160,7 @@ static void pci_remove_done(libxl__egc *egc,
 
     if (rc) goto out;
 
-    libxl__device_pci_remove_xenstore(gc, prs->domid, prs->pci);
+    libxl__device_pci_remove_xenstore(gc, prs->domid, &prs->pci);
 out:
     device_pci_remove_common_next(egc, prs, rc);
 }
@@ -2177,7 +2178,10 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
     GCNEW(prs);
     prs->aodev = aodev;
     prs->domid = domid;
-    prs->pci = pci;
+
+    libxl_device_pci_copy(CTX, &prs->pci, pci);
+    pci = &prs->pci;
+
     prs->force = force;
     libxl__xswait_init(&prs->xswait);
     libxl__ev_qmp_init(&prs->qmp);
@@ -2212,7 +2216,7 @@ static void device_pci_remove_common_next(libxl__egc *egc,
     EGC_GC;
 
     /* Convenience aliases */
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
     libxl__ao_device *const aodev = prs->aodev;
     const unsigned int pfunc_mask = prs->pfunc_mask;
     const unsigned int orig_vdev = prs->orig_vdev;
@@ -2243,6 +2247,7 @@ out:
 
     if (!rc) pci_info_xs_remove(gc, pci, "domid");
 
+    libxl_device_pci_dispose(pci);
     aodev->rc = rc;
     aodev->callback(egc, aodev);
 }
@@ -2357,7 +2362,6 @@ void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
     pcis = libxl_device_pci_list(CTX, domid, &num);
     if ( pcis == NULL )
         return;
-    libxl__ptr_add(gc, pcis);
 
     for (i = 0; i < num; i++) {
         /* Force remove on shutdown since, on HVM, qemu will not always
@@ -2368,6 +2372,8 @@ void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
         libxl__device_pci_remove_common(egc, domid, pcis + i, true,
                                         aodev);
     }
+
+    libxl_device_pci_list_free(pcis, num);
 }
 
 int libxl__grant_vga_iomem_permission(libxl__gc *gc, const uint32_t domid,
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 34fcf5a4fadf..7c0f102ac7b7 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -35,9 +35,8 @@ static void pcilist(uint32_t domid)
         printf("%02x.%01x %04x:%02x:%02x.%01x\n",
                (pcis[i].vdevfn >> 3) & 0x1f, pcis[i].vdevfn & 0x7,
                pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
-        libxl_device_pci_dispose(&pcis[i]);
     }
-    free(pcis);
+    libxl_device_pci_list_free(pcis, num);
 }
 
 int main_pcilist(int argc, char **argv)
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 20:00:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 20:00:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47773.84650 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAQ-0005Lp-BU; Tue, 08 Dec 2020 20:00:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47773.84650; Tue, 08 Dec 2020 20:00:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAP-0005KF-5W; Tue, 08 Dec 2020 20:00:57 +0000
Received: by outflank-mailman (input) for mailman id 47773;
 Tue, 08 Dec 2020 20:00:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kmjAE-0004po-3P
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 20:00:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmjAD-00086e-64; Tue, 08 Dec 2020 20:00:45 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=desktop.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihN-0001p0-8H; Tue, 08 Dec 2020 19:30:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=Wxee7ELAusJoPC2cWRtRIQj+DNuLoWUQ2nWiFDbzG0c=; b=njfoDHmu0sHSS5TVFN7SN+OU0+
	dO69fOdH/OMwDX0GzTiPgMODjAdM62p+guiCZ5Qsar1y4p9Xfd91GKIixzM5nIjbKlns0s+kxW0dL
	9XLFYkD8Q8rZHuTE/NDMAYP+P6dQQmm2tTkf1z4pKrAYRssRVYUspkHSSpN8BVnzQl4I=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Nick Rosbrook <rosbrookn@ainfosec.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v6 18/25] libxl: introduce 'libxl_pci_bdf' in the idl...
Date: Tue,  8 Dec 2020 19:30:26 +0000
Message-Id: <20201208193033.11306-19-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201208193033.11306-1-paul@xen.org>
References: <20201208193033.11306-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... and use it in 'libxl_device_pci'

This patch is preparatory work for restricting the type passed to functions
that only require BDF information, rather than passing a 'libxl_device_pci'
structure which is only partially filled. In this patch only the minimal
mechanical changes necessary to deal with the structural changes are made.
Subsequent patches will adjust the code to make better use of the new type.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Acked-by: Wei Liu <wl@xen.org>
Acked-by: Nick Rosbrook <rosbrookn@ainfosec.com>
---
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/golang/xenlight/helpers.gen.go |  77 ++++++++++----
 tools/golang/xenlight/types.gen.go   |   8 +-
 tools/include/libxl.h                |   6 ++
 tools/libs/light/libxl_dm.c          |   8 +-
 tools/libs/light/libxl_internal.h    |   3 +-
 tools/libs/light/libxl_pci.c         | 148 +++++++++++++--------------
 tools/libs/light/libxl_types.idl     |  16 +--
 tools/libs/util/libxlu_pci.c         |   8 +-
 tools/xl/xl_pci.c                    |   6 +-
 tools/xl/xl_sxp.c                    |   4 +-
 10 files changed, 167 insertions(+), 117 deletions(-)

diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index c8605994e7fe..b7230f693c69 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -1999,6 +1999,41 @@ xc.colo_checkpoint_port = C.CString(x.ColoCheckpointPort)}
  return nil
  }
 
+// NewPciBdf returns an instance of PciBdf initialized with defaults.
+func NewPciBdf() (*PciBdf, error) {
+var (
+x PciBdf
+xc C.libxl_pci_bdf)
+
+C.libxl_pci_bdf_init(&xc)
+defer C.libxl_pci_bdf_dispose(&xc)
+
+if err := x.fromC(&xc); err != nil {
+return nil, err }
+
+return &x, nil}
+
+func (x *PciBdf) fromC(xc *C.libxl_pci_bdf) error {
+ x.Func = byte(xc._func)
+x.Dev = byte(xc.dev)
+x.Bus = byte(xc.bus)
+x.Domain = int(xc.domain)
+
+ return nil}
+
+func (x *PciBdf) toC(xc *C.libxl_pci_bdf) (err error){defer func(){
+if err != nil{
+C.libxl_pci_bdf_dispose(xc)}
+}()
+
+xc._func = C.uint8_t(x.Func)
+xc.dev = C.uint8_t(x.Dev)
+xc.bus = C.uint8_t(x.Bus)
+xc.domain = C.int(x.Domain)
+
+ return nil
+ }
+
 // NewDevicePci returns an instance of DevicePci initialized with defaults.
 func NewDevicePci() (*DevicePci, error) {
 var (
@@ -2014,10 +2049,9 @@ return nil, err }
 return &x, nil}
 
 func (x *DevicePci) fromC(xc *C.libxl_device_pci) error {
- x.Func = byte(xc._func)
-x.Dev = byte(xc.dev)
-x.Bus = byte(xc.bus)
-x.Domain = int(xc.domain)
+ if err := x.Bdf.fromC(&xc.bdf);err != nil {
+return fmt.Errorf("converting field Bdf: %v", err)
+}
 x.Vdevfn = uint32(xc.vdevfn)
 x.VfuncMask = uint32(xc.vfunc_mask)
 x.Msitranslate = bool(xc.msitranslate)
@@ -2033,10 +2067,9 @@ if err != nil{
 C.libxl_device_pci_dispose(xc)}
 }()
 
-xc._func = C.uint8_t(x.Func)
-xc.dev = C.uint8_t(x.Dev)
-xc.bus = C.uint8_t(x.Bus)
-xc.domain = C.int(x.Domain)
+if err := x.Bdf.toC(&xc.bdf); err != nil {
+return fmt.Errorf("converting field Bdf: %v", err)
+}
 xc.vdevfn = C.uint32_t(x.Vdevfn)
 xc.vfunc_mask = C.uint32_t(x.VfuncMask)
 xc.msitranslate = C.bool(x.Msitranslate)
@@ -2766,13 +2799,13 @@ if err := x.Nics[i].fromC(&v); err != nil {
 return fmt.Errorf("converting field Nics: %v", err) }
 }
 }
-x.Pcidevs = nil
-if n := int(xc.num_pcidevs); n > 0 {
-cPcidevs := (*[1<<28]C.libxl_device_pci)(unsafe.Pointer(xc.pcidevs))[:n:n]
-x.Pcidevs = make([]DevicePci, n)
-for i, v := range cPcidevs {
-if err := x.Pcidevs[i].fromC(&v); err != nil {
-return fmt.Errorf("converting field Pcidevs: %v", err) }
+x.Pcis = nil
+if n := int(xc.num_pcis); n > 0 {
+cPcis := (*[1<<28]C.libxl_device_pci)(unsafe.Pointer(xc.pcis))[:n:n]
+x.Pcis = make([]DevicePci, n)
+for i, v := range cPcis {
+if err := x.Pcis[i].fromC(&v); err != nil {
+return fmt.Errorf("converting field Pcis: %v", err) }
 }
 }
 x.Rdms = nil
@@ -2922,13 +2955,13 @@ return fmt.Errorf("converting field Nics: %v", err)
 }
 }
 }
-if numPcidevs := len(x.Pcidevs); numPcidevs > 0 {
-xc.pcidevs = (*C.libxl_device_pci)(C.malloc(C.ulong(numPcidevs)*C.sizeof_libxl_device_pci))
-xc.num_pcidevs = C.int(numPcidevs)
-cPcidevs := (*[1<<28]C.libxl_device_pci)(unsafe.Pointer(xc.pcidevs))[:numPcidevs:numPcidevs]
-for i,v := range x.Pcidevs {
-if err := v.toC(&cPcidevs[i]); err != nil {
-return fmt.Errorf("converting field Pcidevs: %v", err)
+if numPcis := len(x.Pcis); numPcis > 0 {
+xc.pcis = (*C.libxl_device_pci)(C.malloc(C.ulong(numPcis)*C.sizeof_libxl_device_pci))
+xc.num_pcis = C.int(numPcis)
+cPcis := (*[1<<28]C.libxl_device_pci)(unsafe.Pointer(xc.pcis))[:numPcis:numPcis]
+for i,v := range x.Pcis {
+if err := v.toC(&cPcis[i]); err != nil {
+return fmt.Errorf("converting field Pcis: %v", err)
 }
 }
 }
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index b4c5df0f2c5c..bc62ae8ce9d1 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -707,11 +707,15 @@ ColoCheckpointHost string
 ColoCheckpointPort string
 }
 
-type DevicePci struct {
+type PciBdf struct {
 Func byte
 Dev byte
 Bus byte
 Domain int
+}
+
+type DevicePci struct {
+Bdf PciBdf
 Vdevfn uint32
 VfuncMask uint32
 Msitranslate bool
@@ -896,7 +900,7 @@ CInfo DomainCreateInfo
 BInfo DomainBuildInfo
 Disks []DeviceDisk
 Nics []DeviceNic
-Pcidevs []DevicePci
+Pcis []DevicePci
 Rdms []DeviceRdm
 Dtdevs []DeviceDtdev
 Vfbs []DeviceVfb
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 3433c950f9aa..1fa4c5806df9 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -463,6 +463,12 @@
  */
 #define LIBXL_HAVE_DEVICE_PCI_ASSIGNABLE_LIST_FREE 1
 
+/*
+ * LIBXL_HAVE_PCI_BDF indicates that the 'libxl_pci_bdf' type is defined
+ * and is embedded in the 'libxl_device_pci' type.
+ */
+#define LIBXL_HAVE_PCI_BDF 1
+
 /*
  * libxl ABI compatibility
  *
diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 3da83259c08e..1b951b09201e 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -472,10 +472,10 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
     for (i = 0; i < d_config->num_pcidevs; i++) {
         unsigned int n, nr_entries;
 
-        seg = d_config->pcidevs[i].domain;
-        bus = d_config->pcidevs[i].bus;
-        devfn = PCI_DEVFN(d_config->pcidevs[i].dev,
-                          d_config->pcidevs[i].func);
+        seg = d_config->pcidevs[i].bdf.domain;
+        bus = d_config->pcidevs[i].bdf.bus;
+        devfn = PCI_DEVFN(d_config->pcidevs[i].bdf.dev,
+                          d_config->pcidevs[i].bdf.func);
         nr_entries = 0;
         rc = libxl__xc_device_get_rdm(gc, 0,
                                       seg, bus, devfn, &nr_entries, &xrdm);
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index c79523ba9248..6be7b12e4cb7 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -4746,10 +4746,11 @@ void libxl__xcinfo2xlinfo(libxl_ctx *ctx,
  * devices have same identifier. */
 #define COMPARE_DEVID(a, b) ((a)->devid == (b)->devid)
 #define COMPARE_DISK(a, b) (!strcmp((a)->vdev, (b)->vdev))
-#define COMPARE_PCI(a, b) ((a)->domain == (b)->domain && \
+#define COMPARE_BDF(a, b) ((a)->domain == (b)->domain && \
                            (a)->bus == (b)->bus &&       \
                            (a)->dev == (b)->dev &&       \
                            (a)->func == (b)->func)
+#define COMPARE_PCI(a, b) COMPARE_BDF(&((a)->bdf), &((b)->bdf))
 #define COMPARE_USB(a, b) ((a)->ctrl == (b)->ctrl && \
                            (a)->port == (b)->port)
 #define COMPARE_USBCTRL(a, b) ((a)->devid == (b)->devid)
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 74c2196ae339..6b14f0f29ef8 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -29,10 +29,10 @@ static unsigned int pci_encode_bdf(libxl_device_pci *pci)
 {
     unsigned int value;
 
-    value = pci->domain << 16;
-    value |= (pci->bus & 0xff) << 8;
-    value |= (pci->dev & 0x1f) << 3;
-    value |= (pci->func & 0x7);
+    value = pci->bdf.domain << 16;
+    value |= (pci->bdf.bus & 0xff) << 8;
+    value |= (pci->bdf.dev & 0x1f) << 3;
+    value |= (pci->bdf.func & 0x7);
 
     return value;
 }
@@ -41,10 +41,10 @@ static void pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
                             unsigned int bus, unsigned int dev,
                             unsigned int func, unsigned int vdevfn)
 {
-    pci->domain = domain;
-    pci->bus = bus;
-    pci->dev = dev;
-    pci->func = func;
+    pci->bdf.domain = domain;
+    pci->bdf.bus = bus;
+    pci->bdf.dev = dev;
+    pci->bdf.func = func;
     pci->vdevfn = vdevfn;
 }
 
@@ -54,9 +54,9 @@ static void libxl_create_pci_backend_device(libxl__gc *gc,
                                             const libxl_device_pci *pci)
 {
     flexarray_append(back, GCSPRINTF("key-%d", num));
-    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->domain, pci->bus, pci->dev, pci->func));
+    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func));
     flexarray_append(back, GCSPRINTF("dev-%d", num));
-    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->domain, pci->bus, pci->dev, pci->func));
+    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func));
     if (pci->vdevfn)
         flexarray_append_pair(back, GCSPRINTF("vdevfn-%d", num), GCSPRINTF("%x", pci->vdevfn));
     flexarray_append(back, GCSPRINTF("opts-%d", num));
@@ -250,8 +250,8 @@ static int libxl__device_pci_remove_xenstore(libxl__gc *gc, uint32_t domid, libx
         unsigned int domain = 0, bus = 0, dev = 0, func = 0;
         xsdev = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/dev-%d", be_path, i));
         sscanf(xsdev, PCI_BDF, &domain, &bus, &dev, &func);
-        if (domain == pci->domain && bus == pci->bus &&
-            pci->dev == dev && pci->func == func) {
+        if (domain == pci->bdf.domain && bus == pci->bdf.bus &&
+            pci->bdf.dev == dev && pci->bdf.func == func) {
             break;
         }
     }
@@ -362,8 +362,8 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
         return ERROR_FAIL;
     }
 
-    buf = GCSPRINTF(PCI_BDF, pci->domain, pci->bus,
-                    pci->dev, pci->func);
+    buf = GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus,
+                    pci->bdf.dev, pci->bdf.func);
     rc = write(fd, buf, strlen(buf));
     /* Annoying to have two if's, but we need the errno */
     if (rc < 0)
@@ -383,10 +383,10 @@ static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
 {
     return node ?
         GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
-                  pci->domain, pci->bus, pci->dev, pci->func,
+                  pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
                   node) :
         GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
-                  pci->domain, pci->bus, pci->dev, pci->func);
+                  pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 }
 
 
@@ -484,10 +484,10 @@ static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
     struct stat st;
 
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/driver",
-                           pci->domain,
-                           pci->bus,
-                           pci->dev,
-                           pci->func);
+                           pci->bdf.domain,
+                           pci->bdf.bus,
+                           pci->bdf.dev,
+                           pci->bdf.func);
     if ( !lstat(spath, &st) ) {
         /* Find the canonical path to the driver. */
         dp = libxl__zalloc(gc, PATH_MAX);
@@ -517,7 +517,7 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pci)
 {
     char *pci_device_vendor_path =
             GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/vendor",
-                      pci->domain, pci->bus, pci->dev, pci->func);
+                      pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     uint16_t read_items;
     uint16_t pci_device_vendor;
 
@@ -525,7 +525,7 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pci)
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have vendor attribute",
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         return 0xffff;
     }
     read_items = fscanf(f, "0x%hx\n", &pci_device_vendor);
@@ -533,7 +533,7 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pci)
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read vendor of pci device "PCI_BDF,
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         return 0xffff;
     }
 
@@ -544,7 +544,7 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pci)
 {
     char *pci_device_device_path =
             GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/device",
-                      pci->domain, pci->bus, pci->dev, pci->func);
+                      pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     uint16_t read_items;
     uint16_t pci_device_device;
 
@@ -552,7 +552,7 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pci)
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have device attribute",
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         return 0xffff;
     }
     read_items = fscanf(f, "0x%hx\n", &pci_device_device);
@@ -560,7 +560,7 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pci)
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read device of pci device "PCI_BDF,
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         return 0xffff;
     }
 
@@ -571,14 +571,14 @@ static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pci,
                                unsigned long *class)
 {
     char *pci_device_class_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/class",
-                     pci->domain, pci->bus, pci->dev, pci->func);
+                     pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     int read_items, ret = 0;
 
     FILE *f = fopen(pci_device_class_path, "r");
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have class attribute",
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         ret = ERROR_FAIL;
         goto out;
     }
@@ -587,7 +587,7 @@ static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pci,
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read class of pci device "PCI_BDF,
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         ret = ERROR_FAIL;
     }
 
@@ -654,10 +654,10 @@ static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pci)
     }
 
     while (fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func) == 4) {
-        if (dom == pci->domain
-            && bus == pci->bus
-            && dev == pci->dev
-            && func == pci->func) {
+        if (dom == pci->bdf.domain
+            && bus == pci->bdf.bus
+            && dev == pci->bdf.dev
+            && func == pci->bdf.func) {
             rc = 1;
             goto out;
         }
@@ -683,8 +683,8 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
     }
 
     spath = GCSPRINTF(SYSFS_PCIBACK_DRIVER"/"PCI_BDF,
-                      pci->domain, pci->bus,
-                      pci->dev, pci->func);
+                      pci->bdf.domain, pci->bdf.bus,
+                      pci->bdf.dev, pci->bdf.func);
     rc = lstat(spath, &st);
 
     if( rc == 0 )
@@ -747,10 +747,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     struct stat st;
 
     /* Local copy for convenience */
-    dom = pci->domain;
-    bus = pci->bus;
-    dev = pci->dev;
-    func = pci->func;
+    dom = pci->bdf.domain;
+    bus = pci->bdf.bus;
+    dev = pci->bdf.dev;
+    func = pci->bdf.func;
 
     /* See if the device exists */
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF, dom, bus, dev, func);
@@ -824,8 +824,8 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     /* De-quarantine */
     rc = xc_deassign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci));
     if ( rc < 0 ) {
-        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pci->domain, pci->bus,
-            pci->dev, pci->func);
+        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pci->bdf.domain, pci->bdf.bus,
+            pci->bdf.dev, pci->bdf.func);
         return ERROR_FAIL;
     }
 
@@ -914,11 +914,11 @@ static int pci_multifunction_check(libxl__gc *gc, libxl_device_pci *pci, unsigne
 
         if ( sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4 )
             continue;
-        if ( pci->domain != dom )
+        if ( pci->bdf.domain != dom )
             continue;
-        if ( pci->bus != bus )
+        if ( pci->bdf.bus != bus )
             continue;
-        if ( pci->dev != dev )
+        if ( pci->bdf.dev != dev )
             continue;
 
         path = GCSPRINTF("%s/" PCI_BDF, SYSFS_PCIBACK_DRIVER, dom, bus, dev, func);
@@ -967,13 +967,13 @@ static int qemu_pci_add_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/parameter");
     if (pci->vdevfn) {
         libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF_VDEVFN","PCI_OPTIONS,
-                         pci->domain, pci->bus, pci->dev,
-                         pci->func, pci->vdevfn, pci->msitranslate,
+                         pci->bdf.domain, pci->bdf.bus, pci->bdf.dev,
+                         pci->bdf.func, pci->vdevfn, pci->msitranslate,
                          pci->power_mgmt);
     } else {
         libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF","PCI_OPTIONS,
-                         pci->domain,  pci->bus, pci->dev,
-                         pci->func, pci->msitranslate, pci->power_mgmt);
+                         pci->bdf.domain,  pci->bdf.bus, pci->bdf.dev,
+                         pci->bdf.func, pci->msitranslate, pci->power_mgmt);
     }
 
     libxl__qemu_traditional_cmd(gc, domid, "pci-ins");
@@ -1132,10 +1132,10 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
     libxl__qmp_param_add_string(gc, &args, "driver",
                                 "xen-pci-passthrough");
     QMP_PARAMETERS_SPRINTF(&args, "id", PCI_PT_QDEV_ID,
-                           pci->bus, pci->dev, pci->func);
+                           pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     QMP_PARAMETERS_SPRINTF(&args, "hostaddr",
-                           "%04x:%02x:%02x.%01x", pci->domain,
-                           pci->bus, pci->dev, pci->func);
+                           "%04x:%02x:%02x.%01x", pci->bdf.domain,
+                           pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     if (pci->vdevfn) {
         QMP_PARAMETERS_SPRINTF(&args, "addr", "%x.%x",
                                PCI_SLOT(pci->vdevfn),
@@ -1223,7 +1223,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
      */
 
     asked_id = GCSPRINTF(PCI_PT_QDEV_ID,
-                         pci->bus, pci->dev, pci->func);
+                         pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     for (i = 0; (bus = libxl__json_array_get(response, i)); i++) {
         devices = libxl__json_map_get("devices", bus, JSON_ARRAY);
@@ -1314,8 +1314,8 @@ static void pci_add_dm_done(libxl__egc *egc,
     if (isstubdom)
         starting = false;
 
-    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->domain,
-                           pci->bus, pci->dev, pci->func);
+    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->bdf.domain,
+                           pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     f = fopen(sysfs_path, "r");
     start = end = flags = size = 0;
     irq = 0;
@@ -1355,8 +1355,8 @@ static void pci_add_dm_done(libxl__egc *egc,
         }
     }
     fclose(f);
-    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
-                                pci->bus, pci->dev, pci->func);
+    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->bdf.domain,
+                                pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     f = fopen(sysfs_path, "r");
     if (f == NULL) {
         LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
@@ -1527,7 +1527,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
         if (rc) {
             LOGD(ERROR, domid,
                  "PCI device %04x:%02x:%02x.%u %s?",
-                 pci->domain, pci->bus, pci->dev, pci->func,
+                 pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
                  errno == EOPNOTSUPP ? "cannot be assigned - no IOMMU"
                  : "already assigned to a different guest");
             goto out;
@@ -1545,7 +1545,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
 
     if (!libxl_pci_assignable(ctx, pci)) {
         LOGD(ERROR, domid, "PCI device %x:%x:%x.%x is not assignable",
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         rc = ERROR_FAIL;
         goto out;
     }
@@ -1553,7 +1553,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     rc = pci_info_xs_write(gc, pci, "domid", GCSPRINTF("%u", domid));
     if (rc) goto out;
 
-    libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
+    libxl__device_pci_reset(gc, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
     if (stubdomid != 0) {
@@ -1634,13 +1634,13 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
         pci->vfunc_mask &= pfunc_mask;
         /* so now vfunc_mask == pfunc_mask */
     }else{
-        pfunc_mask = (1 << pci->func);
+        pfunc_mask = (1 << pci->bdf.func);
     }
 
     for (rc = 0, i = 7; i >= 0; --i) {
         if ( (1 << i) & pfunc_mask ) {
             if ( pci->vfunc_mask == pfunc_mask ) {
-                pci->func = i;
+                pci->bdf.func = i;
                 pci->vdevfn = orig_vdev | i;
             } else {
                 /* if not passing through multiple devices in a block make
@@ -1672,7 +1672,7 @@ static void device_pci_add_done(libxl__egc *egc,
         LOGD(ERROR, domid,
              "libxl__device_pci_add  failed for "
              "PCI device %x:%x:%x.%x (rc %d)",
-             pci->domain, pci->bus, pci->dev, pci->func,
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
              rc);
         pci_info_xs_remove(gc, pci, "domid");
     }
@@ -1741,8 +1741,8 @@ static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/state");
     state = libxl__xs_read(gc, XBT_NULL, path);
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/parameter");
-    libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF, pci->domain,
-                     pci->bus, pci->dev, pci->func);
+    libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF, pci->bdf.domain,
+                     pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     /* Remove all functions at once atomically by only signalling
      * device-model for function 0 */
@@ -1856,8 +1856,8 @@ static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
     } else {
         assert(type == LIBXL_DOMAIN_TYPE_PV);
 
-        char *sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->domain,
-                                     pci->bus, pci->dev, pci->func);
+        char *sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->bdf.domain,
+                                     pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         FILE *f = fopen(sysfs_path, "r");
         unsigned int start = 0, end = 0, flags = 0, size = 0;
         int irq = 0;
@@ -1892,8 +1892,8 @@ static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
         }
         fclose(f);
 skip1:
-        sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
-                               pci->bus, pci->dev, pci->func);
+        sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->bdf.domain,
+                               pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         f = fopen(sysfs_path, "r");
         if (f == NULL) {
             LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
@@ -1957,7 +1957,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     if (rc) goto out;
 
     QMP_PARAMETERS_SPRINTF(&args, "id", PCI_PT_QDEV_ID,
-                           pci->bus, pci->dev, pci->func);
+                           pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     prs->qmp.callback = pci_remove_qmp_device_del_cb;
     rc = libxl__ev_qmp_send(egc, &prs->qmp, "device_del", args);
     if (rc) goto out;
@@ -2026,7 +2026,7 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
     libxl__ev_qmp_dispose(gc, qmp);
 
     asked_id = GCSPRINTF(PCI_PT_QDEV_ID,
-                         pci->bus, pci->dev, pci->func);
+                         pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     /* query-pci response:
      * [{ 'devices': [ 'qdev_id': 'str', ...  ], ... }]
@@ -2077,7 +2077,7 @@ static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
     libxl_device_pci *const pci = &prs->pci;
 
     LOGD(WARN, prs->domid, "timed out waiting for DM to remove "
-         PCI_PT_QDEV_ID, pci->bus, pci->dev, pci->func);
+         PCI_PT_QDEV_ID, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     /* If we timed out, we might still want to keep destroying the device
      * (when force==true), so let the next function decide what to do on
@@ -2110,7 +2110,7 @@ static void pci_remove_detached(libxl__egc *egc,
 
     /* don't do multiple resets while some functions are still passed through */
     if ((pci->vdevfn & 0x7) == 0) {
-        libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
+        libxl__device_pci_reset(gc, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     }
 
     if (!isstubdom) {
@@ -2198,7 +2198,7 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
         }
         pci->vfunc_mask &= prs->pfunc_mask;
     } else {
-        prs->pfunc_mask = (1 << pci->func);
+        prs->pfunc_mask = (1 << pci->bdf.func);
     }
 
     rc = 0;
@@ -2226,7 +2226,7 @@ static void device_pci_remove_common_next(libxl__egc *egc,
         prs->next_func--;
         if ( (1 << i) & pfunc_mask ) {
             if ( pci->vfunc_mask == pfunc_mask ) {
-                pci->func = i;
+                pci->bdf.func = i;
                 pci->vdevfn = orig_vdev | i;
             } else {
                 pci->vdevfn = orig_vdev;
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 05324736b744..21a2cf5c1c9b 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -770,18 +770,22 @@ libxl_device_nic = Struct("device_nic", [
     ("colo_checkpoint_port", string)
     ])
 
+libxl_pci_bdf = Struct("pci_bdf", [
+    ("func", uint8),
+    ("dev", uint8),
+    ("bus", uint8),
+    ("domain", integer),
+    ])
+
 libxl_device_pci = Struct("device_pci", [
-    ("func",      uint8),
-    ("dev",       uint8),
-    ("bus",       uint8),
-    ("domain",    integer),
-    ("vdevfn",    uint32),
+    ("bdf", libxl_pci_bdf),
+    ("vdevfn", uint32),
     ("vfunc_mask", uint32),
     ("msitranslate", bool),
     ("power_mgmt", bool),
     ("permissive", bool),
     ("seize", bool),
-    ("rdm_policy",      libxl_rdm_reserve_policy),
+    ("rdm_policy", libxl_rdm_reserve_policy),
     ])
 
 libxl_device_rdm = Struct("device_rdm", [
diff --git a/tools/libs/util/libxlu_pci.c b/tools/libs/util/libxlu_pci.c
index 1d38fffce357..5c107f264260 100644
--- a/tools/libs/util/libxlu_pci.c
+++ b/tools/libs/util/libxlu_pci.c
@@ -27,10 +27,10 @@ static int pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
                            unsigned int bus, unsigned int dev,
                            unsigned int func, unsigned int vdevfn)
 {
-    pci->domain = domain;
-    pci->bus = bus;
-    pci->dev = dev;
-    pci->func = func;
+    pci->bdf.domain = domain;
+    pci->bdf.bus = bus;
+    pci->bdf.dev = dev;
+    pci->bdf.func = func;
     pci->vdevfn = vdevfn;
     return 0;
 }
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index f71498cbb570..b6dc7c28401c 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -34,7 +34,8 @@ static void pcilist(uint32_t domid)
     for (i = 0; i < num; i++) {
         printf("%02x.%01x %04x:%02x:%02x.%01x\n",
                (pcis[i].vdevfn >> 3) & 0x1f, pcis[i].vdevfn & 0x7,
-               pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
+               pcis[i].bdf.domain, pcis[i].bdf.bus, pcis[i].bdf.dev,
+               pcis[i].bdf.func);
     }
     libxl_device_pci_list_free(pcis, num);
 }
@@ -163,7 +164,8 @@ static void pciassignable_list(void)
         return;
     for (i = 0; i < num; i++) {
         printf("%04x:%02x:%02x.%01x\n",
-               pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
+               pcis[i].bdf.domain, pcis[i].bdf.bus, pcis[i].bdf.dev,
+               pcis[i].bdf.func);
     }
     libxl_device_pci_assignable_list_free(pcis, num);
 }
diff --git a/tools/xl/xl_sxp.c b/tools/xl/xl_sxp.c
index 359a0015709e..dc49fb7d5074 100644
--- a/tools/xl/xl_sxp.c
+++ b/tools/xl/xl_sxp.c
@@ -194,8 +194,8 @@ void printf_info_sexp(int domid, libxl_domain_config *d_config, FILE *fh)
         fprintf(fh, "\t(device\n");
         fprintf(fh, "\t\t(pci\n");
         fprintf(fh, "\t\t\t(pci dev %04x:%02x:%02x.%01x@%02x)\n",
-               d_config->pcidevs[i].domain, d_config->pcidevs[i].bus,
-               d_config->pcidevs[i].dev, d_config->pcidevs[i].func,
+               d_config->pcidevs[i].bdf.domain, d_config->pcidevs[i].bdf.bus,
+               d_config->pcidevs[i].bdf.dev, d_config->pcidevs[i].bdf.func,
                d_config->pcidevs[i].vdevfn);
         fprintf(fh, "\t\t\t(opts msitranslate %d power_mgmt %d)\n",
                d_config->pcidevs[i].msitranslate,
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 20:01:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 20:01:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47772.84659 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAS-0005Pb-H6; Tue, 08 Dec 2020 20:01:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47772.84659; Tue, 08 Dec 2020 20:01:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAQ-0005OB-LH; Tue, 08 Dec 2020 20:00:58 +0000
Received: by outflank-mailman (input) for mailman id 47772;
 Tue, 08 Dec 2020 20:00:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kmjAE-0004pj-1u
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 20:00:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmjAD-00086s-B8; Tue, 08 Dec 2020 20:00:45 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=desktop.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihO-0001p0-Uv; Tue, 08 Dec 2020 19:30:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=kHLySQ+itOx1hiauIp8pIwnhad1Xuv7b9R5clDzcs+U=; b=6LKp47s9wKFE4yMHdI3ddHL7em
	G7Q9FNCvKRA1VkFCJG0xOSdrFHvt9gCr4kMyaKyWXCVzt33c0fFeQ8+S49JocEA7I1eBUJTPKa5lA
	/aJ1wQ4jhEIgAZI8oVw9jCMm9Nd2IxknhewWE17k7mSIf7FfyUcDGoI+BuEFTPgGd1w8=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Ian Jackson <iwj@xenproject.org>
Subject: [PATCH v6 20/25] docs/man: modify xl(1) in preparation for naming of assignable devices
Date: Tue,  8 Dec 2020 19:30:28 +0000
Message-Id: <20201208193033.11306-21-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201208193033.11306-1-paul@xen.org>
References: <20201208193033.11306-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

A subsequent patch will introduce code to allow a name to be specified to
'xl pci-assignable-add' such that the assignable device may be referred to
by that name in subsequent operations.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Acked-by: Wei Liu <wl@xen.org>
---
Cc: Ian Jackson <iwj@xenproject.org>
---
 docs/man/xl.1.pod.in | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index af31d2b5727a..f4779d8fd654 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -1595,19 +1595,23 @@ List virtual network interfaces for a domain.
 
 =over 4
 
-=item B<pci-assignable-list>
+=item B<pci-assignable-list> [I<-n>]
 
 List all the B<BDF> of assignable PCI devices. See
-L<xl-pci-configuration(5)> for more information.
+L<xl-pci-configuration(5)> for more information. If the -n option is
+specified then any name supplied when the device was made assignable
+will also be displayed.
 
 These are devices in the system which are configured to be
 available for passthrough and are bound to a suitable PCI
 backend driver in domain 0 rather than a real driver.
 
-=item B<pci-assignable-add> I<BDF>
+=item B<pci-assignable-add> [I<-n NAME>] I<BDF>
 
 Make the device at B<BDF> assignable to guests. See
-L<xl-pci-configuration(5)> for more information.
+L<xl-pci-configuration(5)> for more information. If the -n option is
+supplied then the assignable device entry will be named with the
+given B<NAME>.
 
 This will bind the device to the pciback driver and assign it to the
 "quarantine domain".  If it is already bound to a driver, it will
@@ -1622,10 +1626,11 @@ not to do this on a device critical to domain 0's operation, such as
 storage controllers, network interfaces, or GPUs that are currently
 being used.
 
-=item B<pci-assignable-remove> [I<-r>] I<BDF>
+=item B<pci-assignable-remove> [I<-r>] I<BDF>|I<NAME>
 
-Make the device at B<BDF> not assignable to guests. See
-L<xl-pci-configuration(5)> for more information.
+Make a device non-assignable to guests. The device may be identified
+either by its B<BDF> or the B<NAME> supplied when the device was made
+assignable. See L<xl-pci-configuration(5)> for more information.
 
 This will at least unbind the device from pciback, and
 re-assign it from the "quarantine domain" back to domain 0.  If the -r
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 20:01:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 20:01:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47774.84671 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAU-0005Y1-UH; Tue, 08 Dec 2020 20:01:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47774.84671; Tue, 08 Dec 2020 20:01:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAT-0005Vm-31; Tue, 08 Dec 2020 20:01:01 +0000
Received: by outflank-mailman (input) for mailman id 47774;
 Tue, 08 Dec 2020 20:00:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kmjAE-0004pt-4g
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 20:00:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmjAD-00086o-9D; Tue, 08 Dec 2020 20:00:45 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=desktop.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihO-0001p0-56; Tue, 08 Dec 2020 19:30:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=5mcnUE0XX/ukJnV9lmREapDDysj6tEKU6ydDwPaupNM=; b=j3cfNJqIMtV77XKCh+VPTY8OVa
	DqhDMUt4wBwXSCzWc5YjCSZzk5uEfh2sZr8y1JiGvb6WAI5YCp4WXpPhGTQEHSbqZtGJI9X6oFzFW
	RjbP49/gOzPZKD+gQIBecAgZMMmhKhxjfAnNX6UzDcarTGGMVpO5ZscFOa0nmCMW7lXM=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Ian Jackson <iwj@xenproject.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v6 19/25] libxlu: introduce xlu_pci_parse_spec_string()
Date: Tue,  8 Dec 2020 19:30:27 +0000
Message-Id: <20201208193033.11306-20-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201208193033.11306-1-paul@xen.org>
References: <20201208193033.11306-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This patch largely re-writes the code that parses a PCI_SPEC_STRING, which
is now entered via the newly introduced function. The new parser also deals
with 'bdf' and 'vslot' as non-positional parameters, as per the
documentation in xl-pci-configuration(5).

The existing xlu_pci_parse_bdf() function remains, but now strictly parses
BDF values. Some existing callers of xlu_pci_parse_bdf() are
modified to call xlu_pci_parse_spec_string() as per the documentation in xl(1).

NOTE: Usage text in xl_cmdtable.c and error messages are also modified
      appropriately.
      As a side-effect this patch also fixes a bug where using '*' to specify
      all functions would lead to an assertion failure at the end of
      xlu_pci_parse_bdf().

Fixes: d25cc3ec93eb ("libxl: workaround gcc 10.2 maybe-uninitialized warning")
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Acked-by: Wei Liu <wl@xen.org>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxlutil.h    |   8 +-
 tools/libs/util/libxlu_pci.c | 354 +++++++++++++++++++----------------
 tools/xl/xl_cmdtable.c       |   4 +-
 tools/xl/xl_parse.c          |   4 +-
 tools/xl/xl_pci.c            |  37 ++--
 5 files changed, 220 insertions(+), 187 deletions(-)

diff --git a/tools/include/libxlutil.h b/tools/include/libxlutil.h
index 92e35c546278..cdd6aab4f816 100644
--- a/tools/include/libxlutil.h
+++ b/tools/include/libxlutil.h
@@ -108,10 +108,16 @@ int xlu_disk_parse(XLU_Config *cfg, int nspecs, const char *const *specs,
    * resulting disk struct is used with libxl.
    */
 
+/*
+ * PCI BDF
+ */
+int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_pci_bdf *bdf, const char *str);
+
 /*
  * PCI specification parsing
  */
-int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str);
+int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pci,
+                              const char *str);
 
 /*
  * RDM parsing
diff --git a/tools/libs/util/libxlu_pci.c b/tools/libs/util/libxlu_pci.c
index 5c107f264260..a8b6ce542736 100644
--- a/tools/libs/util/libxlu_pci.c
+++ b/tools/libs/util/libxlu_pci.c
@@ -1,5 +1,7 @@
 #define _GNU_SOURCE
 
+#include <ctype.h>
+
 #include "libxlu_internal.h"
 #include "libxlu_disk_l.h"
 #include "libxlu_disk_i.h"
@@ -9,185 +11,213 @@
 #define XLU__PCI_ERR(_c, _x, _a...) \
     if((_c) && (_c)->report) fprintf((_c)->report, _x, ##_a)
 
-static int hex_convert(const char *str, unsigned int *val, unsigned int mask)
+static int parse_bdf(libxl_pci_bdf *bdfp, uint32_t *vfunc_maskp,
+                     const char *str, const char **endp)
 {
-    unsigned long ret;
-    char *end;
-
-    ret = strtoul(str, &end, 16);
-    if ( end == str || *end != '\0' )
-        return -1;
-    if ( ret & ~mask )
-        return -1;
-    *val = (unsigned int)ret & mask;
+    const char *ptr = str;
+    unsigned int colons = 0;
+    unsigned int domain, bus, dev, func;
+    int n;
+
+    /* Count occurrences of ':' to determine presence/absence of the 'domain' */
+    while (isxdigit(*ptr) || *ptr == ':') {
+        if (*ptr == ':')
+            colons++;
+        ptr++;
+    }
+
+    ptr = str;
+    switch (colons) {
+    case 1:
+        domain = 0;
+        if (sscanf(ptr, "%x:%x.%n", &bus, &dev, &n) != 2)
+            return ERROR_INVAL;
+        break;
+    case 2:
+        if (sscanf(ptr, "%x:%x:%x.%n", &domain, &bus, &dev, &n) != 3)
+            return ERROR_INVAL;
+        break;
+    default:
+        return ERROR_INVAL;
+    }
+
+    if (domain > 0xffff || bus > 0xff || dev > 0x1f)
+        return ERROR_INVAL;
+
+    ptr += n;
+    if (*ptr == '*') {
+        if (!vfunc_maskp)
+            return ERROR_INVAL;
+        *vfunc_maskp = LIBXL_PCI_FUNC_ALL;
+        func = 0;
+        ptr++;
+    } else {
+        if (sscanf(ptr, "%x%n", &func, &n) != 1)
+            return ERROR_INVAL;
+        if (func > 7)
+            return ERROR_INVAL;
+        if (vfunc_maskp)
+            *vfunc_maskp = 1;
+        ptr += n;
+    }
+
+    bdfp->domain = domain;
+    bdfp->bus = bus;
+    bdfp->dev = dev;
+    bdfp->func = func;
+
+    if (endp)
+        *endp = ptr;
+
     return 0;
 }
 
-static int pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
-                           unsigned int bus, unsigned int dev,
-                           unsigned int func, unsigned int vdevfn)
+static int parse_vslot(uint32_t *vdevfnp, const char *str, const char **endp)
 {
-    pci->bdf.domain = domain;
-    pci->bdf.bus = bus;
-    pci->bdf.dev = dev;
-    pci->bdf.func = func;
-    pci->vdevfn = vdevfn;
+    const char *ptr = str;
+    unsigned int val;
+    int n;
+
+    if (sscanf(ptr, "%x%n", &val, &n) != 1)
+        return ERROR_INVAL;
+
+    if (val > 0x1f)
+        return ERROR_INVAL;
+
+    ptr += n;
+
+    *vdevfnp = val << 3;
+
+    if (endp)
+        *endp = ptr;
+
     return 0;
 }
 
-#define STATE_DOMAIN    0
-#define STATE_BUS       1
-#define STATE_DEV       2
-#define STATE_FUNC      3
-#define STATE_VSLOT     4
-#define STATE_OPTIONS_K 6
-#define STATE_OPTIONS_V 7
-#define STATE_TERMINAL  8
-#define STATE_TYPE      9
-#define STATE_RDM_STRATEGY      10
-#define STATE_RESERVE_POLICY    11
-#define INVALID         0xffffffff
-int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pci, const char *str)
+static int parse_key_val(char **keyp, char **valp, const char *str,
+                         const char **endp)
 {
-    unsigned state = STATE_DOMAIN;
-    unsigned dom = INVALID, bus = INVALID, dev = INVALID, func = INVALID, vslot = 0;
-    char *buf2, *tok, *ptr, *end, *optkey = NULL;
+    const char *ptr = str;
+    char *key, *val;
+
+    while (*ptr != '=' && *ptr != '\0')
+        ptr++;
 
-    if ( NULL == (buf2 = ptr = strdup(str)) )
+    if (*ptr == '\0')
+        return ERROR_INVAL;
+
+    key = strndup(str, ptr - str);
+    if (!key)
         return ERROR_NOMEM;
 
-    for(tok = ptr, end = ptr + strlen(ptr) + 1; ptr < end; ptr++) {
-        switch(state) {
-        case STATE_DOMAIN:
-            if ( *ptr == ':' ) {
-                state = STATE_BUS;
-                *ptr = '\0';
-                if ( hex_convert(tok, &dom, 0xffff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_BUS:
-            if ( *ptr == ':' ) {
-                state = STATE_DEV;
-                *ptr = '\0';
-                if ( hex_convert(tok, &bus, 0xff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }else if ( *ptr == '.' ) {
-                state = STATE_FUNC;
-                *ptr = '\0';
-                if ( dom & ~0xff )
-                    goto parse_error;
-                bus = dom;
-                dom = 0;
-                if ( hex_convert(tok, &dev, 0xff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_DEV:
-            if ( *ptr == '.' ) {
-                state = STATE_FUNC;
-                *ptr = '\0';
-                if ( hex_convert(tok, &dev, 0xff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_FUNC:
-            if ( *ptr == '\0' || *ptr == '@' || *ptr == ',' ) {
-                switch( *ptr ) {
-                case '\0':
-                    state = STATE_TERMINAL;
-                    break;
-                case '@':
-                    state = STATE_VSLOT;
-                    break;
-                case ',':
-                    state = STATE_OPTIONS_K;
-                    break;
-                }
-                *ptr = '\0';
-                if ( !strcmp(tok, "*") ) {
-                    pci->vfunc_mask = LIBXL_PCI_FUNC_ALL;
-                }else{
-                    if ( hex_convert(tok, &func, 0x7) )
-                        goto parse_error;
-                    pci->vfunc_mask = (1 << 0);
-                }
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_VSLOT:
-            if ( *ptr == '\0' || *ptr == ',' ) {
-                state = ( *ptr == ',' ) ? STATE_OPTIONS_K : STATE_TERMINAL;
-                *ptr = '\0';
-                if ( hex_convert(tok, &vslot, 0xff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_OPTIONS_K:
-            if ( *ptr == '=' ) {
-                state = STATE_OPTIONS_V;
-                *ptr = '\0';
-                optkey = tok;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_OPTIONS_V:
-            if ( *ptr == ',' || *ptr == '\0' ) {
-                state = (*ptr == ',') ? STATE_OPTIONS_K : STATE_TERMINAL;
-                *ptr = '\0';
-                if ( !strcmp(optkey, "msitranslate") ) {
-                    pci->msitranslate = atoi(tok);
-                }else if ( !strcmp(optkey, "power_mgmt") ) {
-                    pci->power_mgmt = atoi(tok);
-                }else if ( !strcmp(optkey, "permissive") ) {
-                    pci->permissive = atoi(tok);
-                }else if ( !strcmp(optkey, "seize") ) {
-                    pci->seize = atoi(tok);
-                } else if (!strcmp(optkey, "rdm_policy")) {
-                    if (!strcmp(tok, "strict")) {
-                        pci->rdm_policy = LIBXL_RDM_RESERVE_POLICY_STRICT;
-                    } else if (!strcmp(tok, "relaxed")) {
-                        pci->rdm_policy = LIBXL_RDM_RESERVE_POLICY_RELAXED;
-                    } else {
-                        XLU__PCI_ERR(cfg, "%s is not an valid PCI RDM property"
-                                          " policy: 'strict' or 'relaxed'.",
-                                     tok);
-                        goto parse_error;
-                    }
-                } else {
-                    XLU__PCI_ERR(cfg, "Unknown PCI BDF option: %s", optkey);
-                }
-                tok = ptr + 1;
-            }
-        default:
-            break;
+    str = ++ptr; /* skip '=' */
+    while (*ptr != ',' && *ptr != '\0')
+        ptr++;
+
+    val = strndup(str, ptr - str);
+    if (!val) {
+        free(key);
+        return ERROR_NOMEM;
+    }
+
+    if (*ptr == ',')
+        ptr++;
+
+    *keyp = key;
+    *valp = val;
+    *endp = ptr;
+
+    return 0;
+}
+
+static int parse_rdm_policy(XLU_Config *cfg, libxl_rdm_reserve_policy *policy,
+                            const char *str)
+{
+    int ret = libxl_rdm_reserve_policy_from_string(str, policy);
+
+    if (ret)
+        XLU__PCI_ERR(cfg, "Unknown RDM policy: %s", str);
+
+    return ret;
+}
+
+int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_pci_bdf *bdf, const char *str)
+{
+    return parse_bdf(bdf, NULL, str, NULL);
+}
+
+int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pcidev,
+                              const char *str)
+{
+    const char *ptr = str;
+    bool bdf_present = false;
+    int ret;
+
+    /* Attempt to parse 'bdf' as positional parameter */
+    ret = parse_bdf(&pcidev->bdf, &pcidev->vfunc_mask, ptr, &ptr);
+    if (!ret) {
+        bdf_present = true;
+
+        /* Check whether 'vslot' is present */
+        if (*ptr == '@') {
+            ret = parse_vslot(&pcidev->vdevfn, ++ptr, &ptr);
+            if (ret)
+                return ret;
         }
+        if (*ptr == ',')
+            ptr++;
+        else if (*ptr != '\0')
+            return ERROR_INVAL;
     }
 
-    if ( tok != ptr || state != STATE_TERMINAL )
-        goto parse_error;
+    /* Parse the rest as 'key=val' pairs */
+    while (*ptr != '\0') {
+        char *key, *val;
 
-    assert(dom != INVALID && bus != INVALID && dev != INVALID && func != INVALID);
+        ret = parse_key_val(&key, &val, ptr, &ptr);
+        if (ret)
+            return ret;
 
-    /* Just a pretty way to fill in the values */
-    pci_struct_fill(pci, dom, bus, dev, func, vslot << 3);
+        if (!strcmp(key, "bdf")) {
+            ret = parse_bdf(&pcidev->bdf, &pcidev->vfunc_mask, val, NULL);
+            bdf_present = !ret;
+        } else if (!strcmp(key, "vslot")) {
+            ret = parse_vslot(&pcidev->vdevfn, val, NULL);
+        } else if (!strcmp(key, "permissive")) {
+            pcidev->permissive = atoi(val);
+        } else if (!strcmp(key, "msitranslate")) {
+            pcidev->msitranslate = atoi(val);
+        } else if (!strcmp(key, "seize")) {
+            pcidev->seize = atoi(val);
+        } else if (!strcmp(key, "power_mgmt")) {
+            pcidev->power_mgmt = atoi(val);
+        } else if (!strcmp(key, "rdm_policy")) {
+            ret = parse_rdm_policy(cfg, &pcidev->rdm_policy, val);
+        } else {
+            XLU__PCI_ERR(cfg, "Unknown PCI_SPEC_STRING option: %s", key);
+            ret = ERROR_INVAL;
+        }
 
-    free(buf2);
+        free(key);
+        free(val);
 
-    return 0;
+        if (ret)
+            return ret;
+    }
 
-parse_error:
-    free(buf2);
-    return ERROR_INVAL;
+    if (!bdf_present)
+        return ERROR_INVAL;
+
+    return 0;
 }
 
 int xlu_rdm_parse(XLU_Config *cfg, libxl_rdm_reserve *rdm, const char *str)
 {
+#define STATE_TYPE           0
+#define STATE_RDM_STRATEGY   1
+#define STATE_RESERVE_POLICY 2
+#define STATE_TERMINAL       3
+
     unsigned state = STATE_TYPE;
     char *buf2, *tok, *ptr, *end;
 
@@ -227,15 +257,8 @@ int xlu_rdm_parse(XLU_Config *cfg, libxl_rdm_reserve *rdm, const char *str)
             if (*ptr == ',' || *ptr == '\0') {
                 state = *ptr == ',' ? STATE_TYPE : STATE_TERMINAL;
                 *ptr = '\0';
-                if (!strcmp(tok, "strict")) {
-                    rdm->policy = LIBXL_RDM_RESERVE_POLICY_STRICT;
-                } else if (!strcmp(tok, "relaxed")) {
-                    rdm->policy = LIBXL_RDM_RESERVE_POLICY_RELAXED;
-                } else {
-                    XLU__PCI_ERR(cfg, "Unknown RDM property policy value: %s",
-                                 tok);
+                if (!parse_rdm_policy(cfg, &rdm->policy, tok))
                     goto parse_error;
-                }
                 tok = ptr + 1;
             }
         default:
@@ -253,6 +276,11 @@ int xlu_rdm_parse(XLU_Config *cfg, libxl_rdm_reserve *rdm, const char *str)
 parse_error:
     free(buf2);
     return ERROR_INVAL;
+
+#undef STATE_TYPE
+#undef STATE_RDM_STRATEGY
+#undef STATE_RESERVE_POLICY
+#undef STATE_TERMINAL
 }
 
 /*
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 6ab5e47da3f3..30e17a2848cd 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -90,12 +90,12 @@ struct cmd_spec cmd_table[] = {
     { "pci-attach",
       &main_pciattach, 0, 1,
       "Insert a new pass-through pci device",
-      "<Domain> <BDF> [Virtual Slot]",
+      "<Domain> <PCI_SPEC_STRING>",
     },
     { "pci-detach",
       &main_pcidetach, 0, 1,
       "Remove a domain's pass-through pci device",
-      "<Domain> <BDF>",
+      "<Domain> <PCI_SPEC_STRING>",
     },
     { "pci-list",
       &main_pcilist, 0, 0,
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 4ebf39620ae7..867e4d068a59 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1487,10 +1487,10 @@ void parse_config_data(const char *config_source,
              * the global policy by default.
              */
             pci->rdm_policy = b_info->u.hvm.rdm.policy;
-            e = xlu_pci_parse_bdf(config, pci, buf);
+            e = xlu_pci_parse_spec_string(config, pci, buf);
             if (e) {
                 fprintf(stderr,
-                        "unable to parse PCI BDF `%s' for passthrough\n",
+                        "unable to parse PCI_SPEC_STRING `%s' for passthrough\n",
                         buf);
                 exit(-e);
             }
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index b6dc7c28401c..9c24496cb2dd 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -55,7 +55,7 @@ int main_pcilist(int argc, char **argv)
     return 0;
 }
 
-static int pcidetach(uint32_t domid, const char *bdf, int force)
+static int pcidetach(uint32_t domid, const char *spec_string, int force)
 {
     libxl_device_pci pci;
     XLU_Config *config;
@@ -66,8 +66,9 @@ static int pcidetach(uint32_t domid, const char *bdf, int force)
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_inig"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
-        fprintf(stderr, "pci-detach: malformed BDF specification \"%s\"\n", bdf);
+    if (xlu_pci_parse_spec_string(config, &pci, spec_string)) {
+        fprintf(stderr, "pci-detach: malformed PCI_SPEC_STRING \"%s\"\n",
+                spec_string);
         exit(2);
     }
     if (force) {
@@ -89,7 +90,7 @@ int main_pcidetach(int argc, char **argv)
     uint32_t domid;
     int opt;
     int force = 0;
-    const char *bdf = NULL;
+    const char *spec_string = NULL;
 
     SWITCH_FOREACH_OPT(opt, "f", NULL, "pci-detach", 2) {
     case 'f':
@@ -98,15 +99,15 @@ int main_pcidetach(int argc, char **argv)
     }
 
     domid = find_domain(argv[optind]);
-    bdf = argv[optind + 1];
+    spec_string = argv[optind + 1];
 
-    if (pcidetach(domid, bdf, force))
+    if (pcidetach(domid, spec_string, force))
         return EXIT_FAILURE;
 
     return EXIT_SUCCESS;
 }
 
-static int pciattach(uint32_t domid, const char *bdf, const char *vs)
+static int pciattach(uint32_t domid, const char *spec_string)
 {
     libxl_device_pci pci;
     XLU_Config *config;
@@ -117,8 +118,9 @@ static int pciattach(uint32_t domid, const char *bdf, const char *vs)
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_inig"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
-        fprintf(stderr, "pci-attach: malformed BDF specification \"%s\"\n", bdf);
+    if (xlu_pci_parse_spec_string(config, &pci, spec_string)) {
+        fprintf(stderr, "pci-attach: malformed PCI_SPEC_STRING \"%s\"\n",
+                spec_string);
         exit(2);
     }
 
@@ -135,19 +137,16 @@ int main_pciattach(int argc, char **argv)
 {
     uint32_t domid;
     int opt;
-    const char *bdf = NULL, *vs = NULL;
+    const char *spec_string = NULL;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "pci-attach", 2) {
         /* No options */
     }
 
     domid = find_domain(argv[optind]);
-    bdf = argv[optind + 1];
-
-    if (optind + 1 < argc)
-        vs = argv[optind + 2];
+    spec_string = argv[optind + 1];
 
-    if (pciattach(domid, bdf, vs))
+    if (pciattach(domid, spec_string))
         return EXIT_FAILURE;
 
     return EXIT_SUCCESS;
@@ -193,8 +192,8 @@ static int pciassignable_add(const char *bdf, int rebind)
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
-        fprintf(stderr, "pci-assignable-add: malformed BDF specification \"%s\"\n", bdf);
+    if (xlu_pci_parse_bdf(config, &pci.bdf, bdf)) {
+        fprintf(stderr, "pci-assignable-add: malformed BDF \"%s\"\n", bdf);
         exit(2);
     }
 
@@ -235,8 +234,8 @@ static int pciassignable_remove(const char *bdf, int rebind)
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
-        fprintf(stderr, "pci-assignable-remove: malformed BDF specification \"%s\"\n", bdf);
+    if (xlu_pci_parse_bdf(config, &pci.bdf, bdf)) {
+        fprintf(stderr, "pci-assignable-remove: malformed BDF \"%s\"\n", bdf);
         exit(2);
     }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 20:01:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 20:01:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47775.84686 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAY-0005gw-AD; Tue, 08 Dec 2020 20:01:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47775.84686; Tue, 08 Dec 2020 20:01:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjAW-0005eB-7S; Tue, 08 Dec 2020 20:01:04 +0000
Received: by outflank-mailman (input) for mailman id 47775;
 Tue, 08 Dec 2020 20:00:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kmjAE-0004q3-Dy
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 20:00:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmjAD-00086u-Cf; Tue, 08 Dec 2020 20:00:45 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=desktop.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kmihI-0001p0-OB; Tue, 08 Dec 2020 19:30:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=PZJi1U8a34pSfhTO2q4+cuqPY9geRRKGfVIRW15UroI=; b=gRN6xCPXtJ2xw8uXlgs41t8/qA
	OGyie0rMQo/kmVJNkvWVylob56XwJbii0GH7t2xx1Oa1KIo8VNNU+mRxiVzbHg8cqRuWgEmLuEMpD
	yfT7rlkW1vsdFl5cdIwblPjHnesoVFUtPsK2T1dm/s+kLfqu2DgGSeaCzYCCYTh0+rBY=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Wei Liu <wl@xen.org>,
	Ian Jackson <iwj@xenproject.org>,
	David Scott <dave@recoil.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v6 13/25] libxl: add libxl_device_pci_assignable_list_free()...
Date: Tue,  8 Dec 2020 19:30:21 +0000
Message-Id: <20201208193033.11306-14-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201208193033.11306-1-paul@xen.org>
References: <20201208193033.11306-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... to be used by callers of libxl_device_pci_assignable_list().

Currently there is no API for callers of libxl_device_pci_assignable_list()
to free the list. The xl function pciassignable_list() calls
libxl_device_pci_dispose() on each element of the returned list, but
libxl_pci_assignable() in libxl_pci.c does not. Neither does the implementation
of libxl_device_pci_assignable_list() call libxl_device_pci_init().

This patch adds the new API function, makes sure it is used everywhere and
also modifies libxl_device_pci_assignable_list() to initialize list
entries rather than just zeroing them.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Acked-by: Christian Lindig <christian.lindig@citrix.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Acked-by: Wei Liu <wl@xen.org>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: David Scott <dave@recoil.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxl.h                |  7 +++++++
 tools/libs/light/libxl_pci.c         | 14 ++++++++++++--
 tools/ocaml/libs/xl/xenlight_stubs.c |  3 +--
 tools/xl/xl_pci.c                    |  3 +--
 4 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index bb7fc893fc13..3433c950f9aa 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -457,6 +457,12 @@
  */
 #define LIBXL_HAVE_DEVICE_PCI_LIST_FREE 1
 
+/*
+ * LIBXL_HAVE_DEVICE_PCI_ASSIGNABLE_LIST_FREE indicates that the
+ * libxl_device_pci_assignable_list_free() function is defined.
+ */
+#define LIBXL_HAVE_DEVICE_PCI_ASSIGNABLE_LIST_FREE 1
+
 /*
  * libxl ABI compatibility
  *
@@ -2369,6 +2375,7 @@ int libxl_device_events_handler(libxl_ctx *ctx,
 int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
 int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
 libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
+void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num);
 
 /* CPUID handling */
 int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str);
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index c555f3ed29c4..2a594e432855 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -457,7 +457,7 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
         pcis = new;
         new = pcis + *num;
 
-        memset(new, 0, sizeof(*new));
+        libxl_device_pci_init(new);
         pci_struct_fill(new, dom, bus, dev, func, 0);
 
         if (pci_info_xs_read(gc, new, "domid")) /* already assigned */
@@ -472,6 +472,16 @@ out:
     return pcis;
 }
 
+void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num)
+{
+    int i;
+
+    for (i = 0; i < num; i++)
+        libxl_device_pci_dispose(&list[i]);
+
+    free(list);
+}
+
 /* Unbind device from its current driver, if any.  If driver_path is non-NULL,
  * store the path to the original driver in it. */
 static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
@@ -1490,7 +1500,7 @@ static int libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
             pcis[i].func == pci->func)
             break;
     }
-    free(pcis);
+    libxl_device_pci_assignable_list_free(pcis, num);
     return i != num;
 }
 
diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/xenlight_stubs.c
index 1181971da4e7..352a00134d70 100644
--- a/tools/ocaml/libs/xl/xenlight_stubs.c
+++ b/tools/ocaml/libs/xl/xenlight_stubs.c
@@ -894,9 +894,8 @@ value stub_xl_device_pci_assignable_list(value ctx)
 		Field(list, 1) = temp;
 		temp = list;
 		Store_field(list, 0, Val_device_pci(&c_list[i]));
-		libxl_device_pci_dispose(&c_list[i]);
 	}
-	free(c_list);
+	libxl_device_pci_assignable_list_free(c_list, nb);
 
 	CAMLreturn(list);
 }
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 7c0f102ac7b7..f71498cbb570 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -164,9 +164,8 @@ static void pciassignable_list(void)
     for (i = 0; i < num; i++) {
         printf("%04x:%02x:%02x.%01x\n",
                pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
-        libxl_device_pci_dispose(&pcis[i]);
     }
-    free(pcis);
+    libxl_device_pci_assignable_list_free(pcis, num);
 }
 
 int main_pciassignable_list(int argc, char **argv)
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 20:15:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 20:15:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47869.84704 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjNw-0007r6-M0; Tue, 08 Dec 2020 20:14:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47869.84704; Tue, 08 Dec 2020 20:14:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjNw-0007qz-IN; Tue, 08 Dec 2020 20:14:56 +0000
Received: by outflank-mailman (input) for mailman id 47869;
 Tue, 08 Dec 2020 20:14:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmjNv-0007qr-3g; Tue, 08 Dec 2020 20:14:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmjNu-0008PW-RG; Tue, 08 Dec 2020 20:14:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmjNu-0007DH-HI; Tue, 08 Dec 2020 20:14:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kmjNu-0007JP-Gr; Tue, 08 Dec 2020 20:14:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=a3hrlg0GEMFEtCf3m1cKBcHy9JRratvJNSGl5nU98L0=; b=qebYmGVdXYkYm5ou+JytIpATtp
	FG1PIiaTBKZP3JL3MpVoUfll9t2+UiYBOiAs24TTrtvaxC7YY3zWGUAqx8MWklJCWD6tFL8q3OL5O
	MQzCS1BrXG3yXMIEcZr1hY0sEwpa9y8fN1NPBKtdqmWPX7P/V44om3CqSpTYS8sjozlU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157323-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157323: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970
X-Osstest-Versions-That:
    ovmf=4b69fab6e20a98f56acd3c717bd53812950fe5b5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 08 Dec 2020 20:14:54 +0000

flight 157323 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157323/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970
baseline version:
 ovmf                 4b69fab6e20a98f56acd3c717bd53812950fe5b5

Last test of basis   157255  2020-12-07 08:39:47 Z    1 days
Testing same since   157323  2020-12-08 14:10:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Wenyi Xie <xiewenyi2@huawei.com>
  wenyi,xie via groups.io <xiewenyi2=huawei.com@groups.io>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   4b69fab6e2..8e4cb8fbce  8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Dec 08 20:17:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 20:17:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47877.84718 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjQ2-0007zi-3K; Tue, 08 Dec 2020 20:17:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47877.84718; Tue, 08 Dec 2020 20:17:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjQ2-0007zb-0J; Tue, 08 Dec 2020 20:17:06 +0000
Received: by outflank-mailman (input) for mailman id 47877;
 Tue, 08 Dec 2020 20:17:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KXXm=FM=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kmjQ0-0007zV-O1
 for xen-devel@lists.xenproject.org; Tue, 08 Dec 2020 20:17:04 +0000
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c0d09c8f-303e-4305-b729-833d2c61df6a;
 Tue, 08 Dec 2020 20:17:03 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id a1so20442708ljq.3
 for <xen-devel@lists.xenproject.org>; Tue, 08 Dec 2020 12:17:03 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id 129sm3313925lfo.43.2020.12.08.12.17.01
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 08 Dec 2020 12:17:01 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0d09c8f-303e-4305-b729-833d2c61df6a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=HjN+rS4v9m2c0PQGwpM1swajT+DtylVzRjuh32YTpSg=;
        b=MTvB2lIh+q69m1zntUlL9zABjWhq8rfXDZtLJBr8qp4Ur1saBRsSfkbmc8K4K5eqTI
         q0uHzijYrMZrTfXNCoKbt5YUvLoRsT0YhDHpBSsv+GjjnGp4TyA3tzKlu4/CevwAGQAH
         oVyuoYuy6fJMCzjsDZBlg6DdGMsGKUwv5ZTIgnoWe6kfXhRXHY9SALD4yVBWWqc8L69C
         r8xEwsJWqVK28GNPxixzY76fsC3oyhSW7FEdZqU+/CL5CR9+a/8LVIU/SRl3cmd9VXud
         u8ZTSvSpAAFLkHSlu9psXSUCsRwIPSHtsLfvZphfzkakAldpLV+aGinr2lAkkTQ2Q/9o
         LeOA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=HjN+rS4v9m2c0PQGwpM1swajT+DtylVzRjuh32YTpSg=;
        b=FQKa27AZza2vcBM8+QfmwJ7HN7X6/TgnW8v5GmZ3fYWHfNGokivP6HkbFJg/nHsqpf
         rHXDoJjXwLiOxDihk+NZI84QITKPbHeXSThRf0XuYhiYTo3rLWJnumX33vgTVrWVTh7T
         jFsaMrMATllpiua8HluIlwwwIxng/kZYsLUDzuL7mJfG50dT40IT+KE74fb1Y+KtBUKt
         URhU14ehzWax7voMfoWH80O+5I3DP1VANb3D3W1SsfJSjNzpRH+KvL6YggM0Td4gREjr
         rJ6tW6RSNZrxFpGAUKkHkaWAotpGMGpKQ1NaBBp3Tg6DZpjtHJ89giOAey3wwjmrX7r4
         DWlw==
X-Gm-Message-State: AOAM530cY5stEUq3ratKCmgPKyT61XOn0BqQPijBDYalZ5DGQj6+c00G
	mg+bYrzyT78szOWmPgTwpkhJ69wWAWW8yQ==
X-Google-Smtp-Source: ABdhPJzbpxEywi/5uHMqUt1RniGQlNgWxGgbf+PuWG/YPxAxEUjeMoaGAzdgeDceVIJeHUeIKLOoLQ==
X-Received: by 2002:a2e:58f:: with SMTP id 137mr1647786ljf.469.1607458622449;
        Tue, 08 Dec 2020 12:17:02 -0800 (PST)
Subject: Re: [PATCH V3 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
To: paul@xen.org
Cc: 'Jan Beulich' <jbeulich@suse.com>,
 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Ian Jackson' <iwj@xenproject.org>, 'Wei Liu' <wl@xen.org>,
 'Julien Grall' <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-18-git-send-email-olekstysh@gmail.com>
 <3bb4c3b5-a46a-ba31-292f-5c6ba49fa9be@suse.com>
 <6026b7f3-ae6e-f98f-be65-27d7f729a37f@gmail.com>
 <18bfd9b1-3e6a-8119-efd0-c82ad7ae681d@gmail.com>
 <0d6c01d6cd9a$666326c0$33297440$@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <57bfc007-e400-6777-0075-827daa8acf0e@gmail.com>
Date: Tue, 8 Dec 2020 22:16:55 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <0d6c01d6cd9a$666326c0$33297440$@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 08.12.20 21:43, Paul Durrant wrote:

Hi Paul

>> -----Original Message-----
>> From: Oleksandr <olekstysh@gmail.com>
>> Sent: 08 December 2020 16:57
>> To: Paul Durrant <paul@xen.org>
>> Cc: Jan Beulich <jbeulich@suse.com>; Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Stefano
>> Stabellini <sstabellini@kernel.org>; Julien Grall <julien@xen.org>; Volodymyr Babchuk
>> <Volodymyr_Babchuk@epam.com>; Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap
>> <george.dunlap@citrix.com>; Ian Jackson <iwj@xenproject.org>; Wei Liu <wl@xen.org>; Julien Grall
>> <julien.grall@arm.com>; xen-devel@lists.xenproject.org
>> Subject: Re: [PATCH V3 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
>>
>>
>> Hi Paul.
>>
>>
>> On 08.12.20 17:33, Oleksandr wrote:
>>> On 08.12.20 17:11, Jan Beulich wrote:
>>>
>>> Hi Jan
>>>
>>>> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>>>>> --- a/xen/include/xen/ioreq.h
>>>>> +++ b/xen/include/xen/ioreq.h
>>>>> @@ -55,6 +55,20 @@ struct ioreq_server {
>>>>>        uint8_t                bufioreq_handling;
>>>>>    };
>>>>>    +/*
>>>>> + * This should only be used when d == current->domain and it's not
>>>>> paused,
>>>> Is the "not paused" part really relevant here? Besides it being rare
>>>> that the current domain would be paused (if so, it's in the process
>>>> of having all its vCPU-s scheduled out), does this matter at all?
>>> No, it isn't relevant, I will drop it.
>>>
>>>
>>>> Apart from this the patch looks okay to me, but I'm not sure it
>>>> addresses Paul's concerns. Iirc he had suggested to switch back to
>>>> a list if doing a swipe over the entire array is too expensive in
>>>> this specific case.
>>> We would like to avoid doing any extra actions in
>>> leave_hypervisor_to_guest() if possible.
>>> But not only there: the logic that checks/sets the
>>> mapcache_invalidation variable could also be avoided if a domain
>>> doesn't use an IOREQ server...
>>
>> Are you OK with this patch (common part of it)?
> How much of a performance benefit is this? The array is small, so simply counting the non-NULL entries should be pretty quick.
I didn't measure how much this call actually costs.
In our system we run three domains. The emulator is in DomD only, so I 
would like to avoid calling vcpu_ioreq_handle_completion() for every 
Dom0/DomU vCPU
if there is no real need to do it. On Arm, vcpu_ioreq_handle_completion() 
is called with IRQs enabled, so the call is accompanied by the 
corresponding irq_enable/irq_disable.
These unneeded actions could be avoided by using this simple one-line 
helper...


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Dec 08 20:46:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 08 Dec 2020 20:46:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47890.84737 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjsm-0002Rv-BA; Tue, 08 Dec 2020 20:46:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47890.84737; Tue, 08 Dec 2020 20:46:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmjsm-0002Ro-7M; Tue, 08 Dec 2020 20:46:48 +0000
Received: by outflank-mailman (input) for mailman id 47890;
 Tue, 08 Dec 2020 20:46:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmjsl-0002Rg-0Q; Tue, 08 Dec 2020 20:46:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmjsk-0000hZ-FA; Tue, 08 Dec 2020 20:46:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmjsk-0001vE-5c; Tue, 08 Dec 2020 20:46:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kmjsk-0002nd-5A; Tue, 08 Dec 2020 20:46:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EvjmoUpfpojlgevHu4TUwnUTL8w9MpTt5Be+ABsWXvY=; b=vFtwI6+lOTfGEnBB2Vml9BFxUz
	RtpfzYKtW0XVrfQNI7YRKstYARU4uc4/oQngWqZDV87UbnXb4OBXdkXYZn0AR2B6CO2UDHEEOXPUB
	pZjEBnWejLEAwwbGMUnQV+05t4EUq696gz/vn1tGVAHWGefnw3obF2CNdErmUlazEfIc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157303-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 157303: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ec274ecd62f9e0404c935ff073346d243d5082e6
X-Osstest-Versions-That:
    linux=42af416d71462a72b02ba6ac632c8dcb9ce729a0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 08 Dec 2020 20:46:46 +0000

flight 157303 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157303/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 157153

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157153
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157153
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157153
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157153
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157153
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157153
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157153
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157153
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157153
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157153
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157153
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                ec274ecd62f9e0404c935ff073346d243d5082e6
baseline version:
 linux                42af416d71462a72b02ba6ac632c8dcb9ce729a0

Last test of basis   157153  2020-12-02 08:12:37 Z    6 days
Testing same since   157303  2020-12-08 10:10:11 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Duyck <alexanderduyck@fb.com>
  Anmol Karn <anmol.karan123@gmail.com>
  Antoine Tenart <atenart@kernel.org>
  Bruno Meneguele <bmeneg@redhat.com>  (TPM 1.2, TPM 2.0)
  Dan Carpenter <dan.carpenter@oracle.com>
  David S. Miller <davem@davemloft.net>
  Davide Caratti <dcaratti@redhat.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Eran Ben Elisha <eranbe@nvidia.com>
  Eric Dumazet <edumazet@google.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Guillaume Nault <gnault@redhat.com>
  Hector Martin <marcan@marcan.st>
  Jakub Kicinski <kuba@kernel.org>
  Jamie Iles <jamie@nuviainc.com>
  Jason Gunthorpe <jgg@nvidia.com>
  Jens Axboe <axboe@kernel.dk>
  Jon Hunter <jonathanh@nvidia.com>
  Julian Wiedmann <jwi@linux.ibm.com>
  Krzysztof Kozlowski <krzk@kernel.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Marc Kleine-Budde <mkl@pengutronix.de> # for tcan4x5x.txt
  Martin Schiller <ms@dev.tdt.de>
  Matti Vuorela <matti.vuorela@bitfactor.fi>
  Maurizio Drocco <maurizio.drocco@ibm.com>
  Maxim Mikityanskiy <maximmi@mellanox.com>
  Mimi Zohar <zohar@linux.ibm.com>
  Parav Pandit <parav@nvidia.com>
  Pete Heist <pete@heistp.net>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Po-Hsu Lin <po-hsu.lin@canonical.com>
  Raju Rangoju <rajur@chelsio.com>
  Randy Dunlap <rdunlap@infradead.org>
  Rob Herring <robh@kernel.org>
  Saeed Mahameed <saeedm@nvidia.com>
  Sanjay Govind <sanjay.govind9@gmail.com>
  Sasha Levin <sashal@kernel.org>
  Shiraz Saleem <shiraz.saleem@intel.com>
  Soheil Hassas Yeganeh <soheil@google.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  syzbot+a1c743815982d9496393@syzkaller.appspotmail.com
  Takashi Iwai <tiwai@suse.de>
  Thomas Falcon <tlfalcon@linux.ibm.com>
  Toke Høiland-Jørgensen <toke@redhat.com>
  Udai Sharma <udai.sharma@chelsio.com>
  Vadim Fedorenko <vfedorenko@novek.ru>
  Vasily Averin <vvs@virtuozzo.com>
  Vinay Kumar Yadav <vinay.yadav@chelsio.com>
  Vincent Guittot <vincent.guittot@linaro.org>
  Wang Hai <wanghai38@huawei.com>
  Willem de Bruijn <willemb@google.com>
  Yevgeny Kliteynik <kliteyn@nvidia.com>
  Yves-Alexis Perez <corsac@corsac.net>
  Zhang Changzhong <zhangchangzhong@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   42af416d7146..ec274ecd62f9  ec274ecd62f9e0404c935ff073346d243d5082e6 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 00:43:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 00:43:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47914.84761 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmnZW-0008DQ-MZ; Wed, 09 Dec 2020 00:43:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47914.84761; Wed, 09 Dec 2020 00:43:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmnZW-0008DJ-G6; Wed, 09 Dec 2020 00:43:10 +0000
Received: by outflank-mailman (input) for mailman id 47914;
 Wed, 09 Dec 2020 00:43:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmnZU-0008Bk-Eh; Wed, 09 Dec 2020 00:43:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmnZU-00069B-1J; Wed, 09 Dec 2020 00:43:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmnZT-00047u-P4; Wed, 09 Dec 2020 00:43:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kmnZT-0007Xe-Ob; Wed, 09 Dec 2020 00:43:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pZyqPJXBQZqDray1xLKM+Gssk6Y8mqkJz/xDGQ1PeFw=; b=QtfIfLR54ZHUEIjoly9oAf4uon
	KHIk0gxvjaHVWelhq496OHlAAFAHVu6D/70W3m9URhpEdMN/3RpG6HSlyRtQ6O3aqNoNpTuSP0FqS
	VsBu1zyyJLSR+mvyAKKMmIr6Y+q/W5/X1B0uSPR2S0cztK2VaznPBL/RLGbpYDx9fZws=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157327-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157327: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=777e3590f154e6a8af560dd318b9465fa168db20
X-Osstest-Versions-That:
    xen=4b0e0db86194b5e9e18c9f2c10b3910f3394c56f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 09 Dec 2020 00:43:07 +0000

flight 157327 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157327/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157271
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157271
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157271
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157271
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157271
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157271
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157271
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157271
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157271
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157271
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157271
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  777e3590f154e6a8af560dd318b9465fa168db20
baseline version:
 xen                  4b0e0db86194b5e9e18c9f2c10b3910f3394c56f

Last test of basis   157271  2020-12-08 03:33:03 Z    0 days
Testing same since   157327  2020-12-08 15:07:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   4b0e0db861..777e3590f1  777e3590f154e6a8af560dd318b9465fa168db20 -> master


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 01:03:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 01:03:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47928.84784 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmnsf-0001hp-6R; Wed, 09 Dec 2020 01:02:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47928.84784; Wed, 09 Dec 2020 01:02:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmnsf-0001hi-24; Wed, 09 Dec 2020 01:02:57 +0000
Received: by outflank-mailman (input) for mailman id 47928;
 Wed, 09 Dec 2020 01:02:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/xMB=FN=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kmnse-0001hd-6P
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 01:02:56 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e6b084aa-293d-41db-82bc-074eafd94f2e;
 Wed, 09 Dec 2020 01:02:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e6b084aa-293d-41db-82bc-074eafd94f2e
Date: Tue, 8 Dec 2020 17:02:50 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607475774;
	bh=9APXSBnwBZtieHDgiJUZOHBuoT5Tf8+YqVIQ/YNNfR8=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=ZrzRqDbmSA0n9qPk66IQ7v+ZKbVyNU6Mn+n5gD2Xf5Ktsf7/eXLUDGPKRbBRg++Zx
	 KQKWm9/sc+MKIesVl6nIs/y7adHx4idJ5I7fnqYaLDuv+BwiQrDkLRRsiKxB/hVbB/
	 AQgeXsXm535yFRSHFX2AzvXjzWWlgj+GmCjYlKC7vT7z8W709jXIx4vtKzx8HsXcJo
	 y2+o5vk/BlYuYqNaLBQuBLqePnWHudpa4HAJaEGSChyQpmbIEFRZHdLLQc5xKmka8M
	 0Mrg2Fzn8d3S5WeUS+lv3BLbdq6vzBy/s2rp50wwZQUXG7hjLRYzntMhNGDNePqEd/
	 pPKr5Au3qy5+w==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: xen-devel@lists.xenproject.org
cc: famzheng@amazon.com, sstabellini@kernel.org, cardoe@cardoe.com, wl@xen.org, 
    Bertrand.Marquis@arm.com, julien@xen.org, andrew.cooper3@citrix.com
Subject: Re: [PATCH v6 00/25] xl / libxl: named PCI pass-through devices
In-Reply-To: <160746448732.12203.10647684023172140005@600e7e483b3a>
Message-ID: <alpine.DEB.2.21.2012081702420.20986@sstabellini-ThinkPad-T480s>
References: <160746448732.12203.10647684023172140005@600e7e483b3a>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

The pipeline failed because the "fedora-gcc-debug" build failed with a
timeout: 

ERROR: Job failed: execution took longer than 1h0m0s seconds

Given that all the other jobs passed (including the other Fedora job), I
take it this failed because the gitlab-ci x86 runners were overloaded?


On Tue, 8 Dec 2020, no-reply@patchew.org wrote:
> Hi,
> 
> Patchew automatically ran gitlab-ci pipeline with this patch (series) applied, but the job failed. Maybe there's a bug in the patches?
> 
> You can find the link to the pipeline in the beginning of the report below:
> 
> Type: series
> Message-id: 20201208193033.11306-1-paul@xen.org
> Subject: [PATCH v6 00/25] xl / libxl: named PCI pass-through devices
> 
> === TEST SCRIPT BEGIN ===
> #!/bin/bash
> sleep 10
> patchew gitlab-pipeline-check -p xen-project/patchew/xen
> === TEST SCRIPT END ===
> 
> warning: redirecting to https://gitlab.com/xen-project/patchew/xen.git/
> From https://gitlab.com/xen-project/patchew/xen
>    5e666356a9..4b0e0db861  master     -> master
> warning: redirecting to https://gitlab.com/xen-project/patchew/xen.git/
> From https://gitlab.com/xen-project/patchew/xen
>  * [new tag]               patchew/20201208193033.11306-1-paul@xen.org -> patchew/20201208193033.11306-1-paul@xen.org
> Switched to a new branch 'test'
> 6c78dcb6d3 libxl / libxlu: support 'xl pci-attach/detach' by name
> 117f736c8b docs/man: modify xl-pci-configuration(5) to add 'name' field to PCI_SPEC_STRING
> 38e63698d6 xl: support naming of assignable devices
> 32b064a4a2 libxl: introduce libxl_pci_bdf_assignable_add/remove/list/list_free(), ...
> 830b6fa734 libxl: convert internal functions in libxl_pci.c...
> d5d5d08e3b docs/man: modify xl(1) in preparation for naming of assignable devices
> bb4cbf5856 libxlu: introduce xlu_pci_parse_spec_string()
> 62f09b89d2 libxl: introduce 'libxl_pci_bdf' in the idl...
> eb3c3ecef6 docs/man: fix xl(1) documentation for 'pci' operations
> cab74a871d docs/man: improve documentation of PCI_SPEC_STRING...
> da45af2de8 docs/man: extract documentation of PCI_SPEC_STRING from the xl.cfg manpage...
> 797b0fd3d4 libxl: use COMPARE_PCI() macro is_pci_in_array()...
> 2c0d9b579f libxl: add libxl_device_pci_assignable_list_free()...
> 1d4d73044e libxl: make sure callers of libxl_device_pci_list() free the list after use
> 24150e4156 libxl: remove get_all_assigned_devices() from libxl_pci.c
> a3d908d5a2 libxl: remove unnecessary check from libxl__device_pci_add()
> ada8e55b23 libxl: generalise 'driver_path' xenstore access functions in libxl_pci.c
> a38482aa96 libxl: stop using aodev->device_config in libxl__device_pci_add()...
> d115527623 libxl: remove extraneous arguments to do_pci_remove() in libxl_pci.c
> b1369310e6 libxl: s/detatched/detached in libxl_pci.c
> 4ccef90ca8 libxl: add/recover 'rdm_policy' to/from PCI backend in xenstore
> 09d3adddb4 libxl: Make sure devices added by pci-attach are reflected in the config
> e2feb1c29b libxl: make libxl__device_list() work correctly for LIBXL__DEVICE_KIND_PCI...
> 8599a6a85e xl: s/pcidev/pci where possible
> 4648bbbb01 libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
> 
> === OUTPUT BEGIN ===
> [2020-12-08 20:09:14] Looking up pipeline...
> [2020-12-08 20:09:14] Found pipeline 226993561:
> 
> https://gitlab.com/xen-project/patchew/xen/-/pipelines/226993561
> 
> [2020-12-08 20:09:14] Waiting for pipeline to finish...
> [2020-12-08 20:24:18] Still waiting...
> [2020-12-08 20:39:23] Still waiting...
> [2020-12-08 20:54:28] Still waiting...
> [2020-12-08 21:09:32] Still waiting...
> [2020-12-08 21:24:36] Still waiting...
> [2020-12-08 21:39:41] Still waiting...
> [2020-12-08 21:54:45] Still waiting...
> [2020-12-08 21:54:46] Pipeline failed
> [2020-12-08 21:54:46] Job 'qemu-smoke-x86-64-clang-pvh' in stage 'test' is skipped
> [2020-12-08 21:54:46] Job 'qemu-smoke-x86-64-gcc-pvh' in stage 'test' is skipped
> [2020-12-08 21:54:46] Job 'qemu-smoke-x86-64-clang' in stage 'test' is skipped
> [2020-12-08 21:54:46] Job 'qemu-smoke-x86-64-gcc' in stage 'test' is skipped
> [2020-12-08 21:54:46] Job 'build-each-commit-gcc' in stage 'test' is skipped
> === OUTPUT END ===
> 
> Test command exited with code: 1


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 01:19:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 01:19:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47936.84795 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmo8e-0002r8-Jd; Wed, 09 Dec 2020 01:19:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47936.84795; Wed, 09 Dec 2020 01:19:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmo8e-0002r1-Gh; Wed, 09 Dec 2020 01:19:28 +0000
Received: by outflank-mailman (input) for mailman id 47936;
 Wed, 09 Dec 2020 01:19:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/xMB=FN=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kmo8d-0002qw-3L
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 01:19:27 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5fed6afa-7833-4ba8-855b-fa9b03d22efb;
 Wed, 09 Dec 2020 01:19:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5fed6afa-7833-4ba8-855b-fa9b03d22efb
Date: Tue, 8 Dec 2020 17:19:24 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607476765;
	bh=Fjihoz9+YwDs0vShIH14xIbJJ1ip4q0T1Wec3Amls/g=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=MMCiQYExvgYS56RTBarL4bL5hXSO2mhEuCSnaw/DILTtzS2pyn6vVlALgEqnxvL9z
	 +QqKCgXoJDqvnNOIUzKu39IgztyTGY5cU0b4rhu6nIJZJAtNsvP+Pn8TatpQynC9mD
	 7WiEWDqKaR0b+CFglPuTR8nuhkukVkeDUKMpfmdojEo1WxQAkrswpdGk7sWWwenGA7
	 X4nPhvsvG0kUAcuJK/qKCcmpjDD+qUbdHmEP1FQXCvYCAmhLxQCMvyPCAcAORU3n/M
	 DwqURWZagRlSGKp/whmhg8/porJsxpLDRJ1JkANvqSlrPaDdLlAj/A5X09NSKyNtju
	 mS2omuXzq7zWQ==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Rahul Singh <Rahul.Singh@arm.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
    Paul Durrant <paul@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 8/8] xen/arm: Add support for SMMUv3 driver
In-Reply-To: <156ab0f5-e46d-6b96-7ff1-28ad3a748950@xen.org>
Message-ID: <alpine.DEB.2.21.2012081711200.20986@sstabellini-ThinkPad-T480s>
References: <cover.1606406359.git.rahul.singh@arm.com> <de2101687020d18172a2b153f8977a5116d0cd66.1606406359.git.rahul.singh@arm.com> <a67bb114-a4a9-651a-338b-123b350ac4b3@xen.org> <9C890E87-D438-4232-8647-8EC64FF32C42@arm.com> <bb6a710e-4a7a-5db2-fece-b5845e06d092@xen.org>
 <9F9A955B-815C-4771-9EC0-073E9CF3E995@arm.com> <156ab0f5-e46d-6b96-7ff1-28ad3a748950@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 8 Dec 2020, Julien Grall wrote:
> On 07/12/2020 18:42, Rahul Singh wrote:
> > > On 7 Dec 2020, at 5:39 pm, Julien Grall <julien@xen.org> wrote:
> > > On 07/12/2020 12:12, Rahul Singh wrote:
> > > > > > +typedef paddr_t dma_addr_t;
> > > > > > +typedef unsigned int gfp_t;
> > > > > > +
> > > > > > +#define platform_device device
> > > > > > +
> > > > > > +#define GFP_KERNEL 0
> > > > > > +
> > > > > > +/* Alias to Xen device tree helpers */
> > > > > > +#define device_node dt_device_node
> > > > > > +#define of_phandle_args dt_phandle_args
> > > > > > +#define of_device_id dt_device_match
> > > > > > +#define of_match_node dt_match_node
> > > > > > +#define of_property_read_u32(np, pname, out)
> > > > > > (!dt_property_read_u32(np, pname, out))
> > > > > > +#define of_property_read_bool dt_property_read_bool
> > > > > > +#define of_parse_phandle_with_args dt_parse_phandle_with_args
> > > > > > +
> > > > > > +/* Alias to Xen lock functions */
> > > > > > +#define mutex spinlock
> > > > > > +#define mutex_init spin_lock_init
> > > > > > +#define mutex_lock spin_lock
> > > > > > +#define mutex_unlock spin_unlock
> > > > > 
> > > > > Hmm... mutexes are not spinlocks. Can you explain why it is fine to
> > > > > switch to a spinlock?
> > > > Yes, mutexes are not spinlocks. As mutexes are not implemented in Xen,
> > > > I thought of using spinlocks in place of mutexes, as this is the only
> > > > locking mechanism available in Xen.
> > > > Let me know if there is another blocking lock available in Xen; I will
> > > > check if we can use that.
> > > 
> > > There are no blocking locks available in Xen so far. However, if Linux is
> > > using a mutex instead of a spinlock, then it likely means the operations in
> > > the critical section can be long-running.
> > 
> > Yes, you are right: Linux is using a mutex when attaching a device to the
> > SMMU, as this operation might take a long time.
> > > 
> > > How did you come to the conclusion that using a spinlock in the SMMU
> > > driver would be fine?
> > 
> > The mutex is replaced by a spinlock in the SMMU driver when there is a
> > request to assign a device to the guest. As we are in user context at that
> > time, it's ok to use a spinlock.
> 
> I am not sure I understand what you mean by "user context" here. Can you
> clarify?
> 
> > As per my understanding, there is one scenario in which a CPU will spin:
> > when there is a simultaneous request from the user to assign another device
> > to the SMMU, and I think that is very rare.
> 
> What "user" are you referring to?
> 
> > 
> > Please suggest how we can proceed on this.
> 
> I am guessing that what you are saying is that the requests to assign/de-assign
> a device will be issued by the toolstack and therefore should be trusted.
> 
> My concern here is not about someone waiting on the lock to be released. It is
> more the fact that using a mutex is a hint that the protected operation can be
> long. Depending on the length, this may result in unwanted side effects
> (e.g. other vCPUs not scheduled, RCU stalls in dom0, watchdog hits...).
> 
> I recall a discussion from a couple of years ago mentioning that STE
> programming operations can take quite a long time. So I would like to
> understand how long the operation is meant to last.
> 
> For a tech preview, it is probably OK to replace the mutex with a spinlock.
> But I would not want this to go past the tech preview stage without a proper
> analysis.
> 
> Stefano, what do you think?

In short, I agree.


We need to be very careful replacing mutexes with spinlocks. We need to
look closely at the ways the spinlocks could introduce unwanted
latencies. Concurrent assign_device operations are possible but rare
and, more importantly, they are user-driven so they could be mitigated.
I am more worried about other possible scenarios, e.g. STE or other
operations.

Rahul clearly put a lot of work into this series already and I think it
is better to take this incrementally, which will allow us to do better
testing and also move faster overall. So I am fine taking the series as
is for now, pending an investigation of the spinlocks later.


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 01:34:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 01:34:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47942.84807 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmoNF-0004jO-3M; Wed, 09 Dec 2020 01:34:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47942.84807; Wed, 09 Dec 2020 01:34:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmoNE-0004jH-WA; Wed, 09 Dec 2020 01:34:32 +0000
Received: by outflank-mailman (input) for mailman id 47942;
 Wed, 09 Dec 2020 01:34:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/xMB=FN=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kmoND-0004jC-HQ
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 01:34:31 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6fc92b19-1595-4c9c-9991-d0d05a6678f9;
 Wed, 09 Dec 2020 01:34:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6fc92b19-1595-4c9c-9991-d0d05a6678f9
Date: Tue, 8 Dec 2020 17:34:29 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607477670;
	bh=3fbALPm5UDI9wd3ouG1stjFm09oSzhHf8AvkEsWsGmk=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=qGJO0i7138SSd+SAxeE8WX7vT/Rabn0vYg79lf/SDsmX6+E4bdIWUB9HWI5cnOf9f
	 dnrxK+5eFegfLUiL3Pc/tWQgyN1Wx26G64GJmXewjhzgROByQYVC4ya7zClrjr7Klp
	 PQjwjnkvg6yp+qmXjTTarBp0jXb3uhaMP4WE3xWKMncbPKZAixp+DqcZJCV7bxKKxe
	 4D1iar9lBBm9t9ecWp3U4H+OF1UMMWdlsYqHjOhibxHVLX5AKORk34esGPGsUqmWSm
	 KKIc556PX+KCWVWEpX8vNdyhrE3XVexY+T7BCbfzQf4bX9H9HtglVhx5PxykTwz7YV
	 ThK5YhYX4DHww==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    Michal Orzel <Michal.Orzel@arm.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Wei Chen <Wei.Chen@arm.com>
Subject: Re: [PATCH] xen/arm: Add workaround for Cortex-A53 erratum #845719
In-Reply-To: <bf45e0f4-2de7-d1db-4732-342937bf61e7@xen.org>
Message-ID: <alpine.DEB.2.21.2012081730020.20986@sstabellini-ThinkPad-T480s>
References: <20201208072327.11890-1-michal.orzel@arm.com> <d286241c-fd3b-8506-37e5-0ddcdaae97be@xen.org> <5D1B5771-A6B3-4F5E-81A1-864DBC8787B4@arm.com> <bf45e0f4-2de7-d1db-4732-342937bf61e7@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 8 Dec 2020, Julien Grall wrote:
> On 08/12/2020 14:38, Bertrand Marquis wrote:
> > Hi Julien,
> > 
> > > On 8 Dec 2020, at 09:47, Julien Grall <julien@xen.org> wrote:
> > > 
> > > Hi,
> > > 
> > > On 08/12/2020 07:23, Michal Orzel wrote:
> > > > When executing in aarch32 state at EL0, a load at EL0 from a
> > > > virtual address that matches the bottom 32 bits of the virtual address
> > > > used by a recent load at (aarch64) EL1 might return incorrect data.
> > > > The workaround is to insert a write of the contextidr_el1 register
> > > > on exception return to an aarch32 guest.
> > > 
> > > I am a bit confused with this comment. In the previous paragraph, you are
> > > suggesting that the problem is an interaction between EL1 AArch64 and EL0
> > > AArch32. But here you seem to imply the issue only happen when running a
> > > AArch32 guest.
> > > 
> > > Can you clarify it?
> > 
> > This can happen when switching from an aarch64 guest to an aarch32 guest, so
> > not only when there is interaction within a single guest.

Just to confirm: it cannot happen when switching from aarch64 *EL2* to
aarch32 EL0/1, right?  Because that happens all the time in Xen.


> Right, but the context switch will write to CONTEXTIDR_EL1. So this case
> should already be handled.
> 
> Xen will never switch from AArch64 EL1 to AArch32 EL0 without a context switch
> (the inverse can happen if we inject an exception to the guest).
> 
> Reading the Cortex-A53 SDEN, it sounds like this is an OS problem, not a
> hypervisor problem. In fact, Linux only seems to work around it when switching
> on the OS side rather than in the hypervisor.
> 
> Therefore, I am not sure I understand why we need to work around it in Xen.

It looks like Julien is right with regard to the "aarch64 EL1 to aarch32
EL0" issue.
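
For reference, the shape of the OS-side workaround Julien mentions, as found in
Linux's arch/arm64/kernel/entry.S kernel_exit path (quoted from memory, so
double-check against the tree), is a single dummy write to CONTEXTIDR_EL1
before returning to a 32-bit task:

```asm
#ifdef CONFIG_ARM64_ERRATUM_845719
alternative_if ARM64_WORKAROUND_845719
#ifdef CONFIG_PID_IN_CONTEXTIDR
	mrs	x29, contextidr_el1   // preserve the PID value Linux keeps here
	msr	contextidr_el1, x29   // the write itself is the workaround
#else
	msr	contextidr_el1, xzr   // otherwise any write will do
#endif
alternative_else_nop_endif
#endif
```

This matches the argument above: the write is applied on the EL1 exception
return to a 32-bit EL0 task, i.e. on the OS side, not in the hypervisor.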


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 03:35:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 03:35:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47951.84826 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmqGQ-0007i2-21; Wed, 09 Dec 2020 03:35:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47951.84826; Wed, 09 Dec 2020 03:35:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmqGP-0007hv-UF; Wed, 09 Dec 2020 03:35:37 +0000
Received: by outflank-mailman (input) for mailman id 47951;
 Wed, 09 Dec 2020 03:35:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmqGO-0007hn-JF; Wed, 09 Dec 2020 03:35:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmqGO-00017v-Ac; Wed, 09 Dec 2020 03:35:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmqGO-00069h-0k; Wed, 09 Dec 2020 03:35:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kmqGO-0005ZE-0F; Wed, 09 Dec 2020 03:35:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=olBmJbzDHRJPo9M5xg3oCRQ3xVNNbGLjj2dKkHs0xW4=; b=waXPqF9ahO013urk1qhn+1Y/HL
	cMssHUO5WGqTwiv6LeVubDNkfO7JMbl/JKLk454rfDC4/Hb489PHSJWnvujLPP+fmLtKLIcdPcZqp
	3SPqX/TFvRbyoWvtq7M8nySiBTPNyeUrw7BTTlMSzqFIUtd2nEY0hZdvRBkpNAYm+798=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157328-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157328: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-start/debianhvm.repeat:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 09 Dec 2020 03:35:36 +0000

flight 157328 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157328/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-ovmf-amd64 20 guest-start/debianhvm.repeat fail pass in 157268

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 157268 like 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                d73c46e4a84e47ffc61b8bf7c378b1383e7316b5
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  110 days
Failing since        152659  2020-08-21 14:07:39 Z  109 days  228 attempts
Testing same since   157142  2020-12-01 20:39:57 Z    7 days   14 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69355 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 03:45:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 03:45:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47959.84840 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmqQC-0000K2-5M; Wed, 09 Dec 2020 03:45:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47959.84840; Wed, 09 Dec 2020 03:45:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmqQC-0000Jv-2E; Wed, 09 Dec 2020 03:45:44 +0000
Received: by outflank-mailman (input) for mailman id 47959;
 Wed, 09 Dec 2020 03:45:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmqQA-0000Jm-RL; Wed, 09 Dec 2020 03:45:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmqQA-0001KW-KJ; Wed, 09 Dec 2020 03:45:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmqQA-0006Wt-AH; Wed, 09 Dec 2020 03:45:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kmqQA-00013f-9p; Wed, 09 Dec 2020 03:45:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=kFYX9v7bAmQLyo/7hnSYya58DKPRmsR7tepk8JfTFlo=; b=YCIaUts0gfY4lTY7s5UajbbEK/
	aG3JKXt4qk43Sr/u171UdLR8OLFG2Hm8kAkrknR1nDZBzbdWpbcu+Mr4jHOMwBkA1S0WBpakILqIp
	9Mbze5WTXp6Y5IGCa3d3bnsjhq6XJUwl/I1r9Qk1jPLvHm27fRZpcXBJhUKxlsT4AIyg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157333-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157333: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=cee5b0441af39dd6f76cc4e0447a1c7f788cbb00
X-Osstest-Versions-That:
    ovmf=8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 09 Dec 2020 03:45:42 +0000

flight 157333 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157333/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 cee5b0441af39dd6f76cc4e0447a1c7f788cbb00
baseline version:
 ovmf                 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970

Last test of basis   157323  2020-12-08 14:10:34 Z    0 days
Testing same since   157333  2020-12-08 22:41:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Guo Dong <guo.dong@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   8e4cb8fbce..cee5b0441a  cee5b0441af39dd6f76cc4e0447a1c7f788cbb00 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 05:49:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 05:49:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47970.84862 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmsLn-0003sj-V5; Wed, 09 Dec 2020 05:49:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47970.84862; Wed, 09 Dec 2020 05:49:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmsLn-0003sc-Rp; Wed, 09 Dec 2020 05:49:19 +0000
Received: by outflank-mailman (input) for mailman id 47970;
 Wed, 09 Dec 2020 05:41:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zSLj=FN=gmail.com=frowand.list@srs-us1.protection.inumbo.net>)
 id 1kmsEb-0003jH-TR
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 05:41:53 +0000
Received: from mail-qt1-x833.google.com (unknown [2607:f8b0:4864:20::833])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b4b796b8-d4c3-470c-b206-834b3a7425f3;
 Wed, 09 Dec 2020 05:41:52 +0000 (UTC)
Received: by mail-qt1-x833.google.com with SMTP id b9so220850qtr.2
 for <xen-devel@lists.xenproject.org>; Tue, 08 Dec 2020 21:41:52 -0800 (PST)
Received: from [192.168.1.49] (c-67-187-90-124.hsd1.ky.comcast.net.
 [67.187.90.124])
 by smtp.gmail.com with ESMTPSA id h9sm522190qkk.33.2020.12.08.21.41.51
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 08 Dec 2020 21:41:52 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b4b796b8-d4c3-470c-b206-834b3a7425f3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=i8ZJuoDJybHp2tzVONMGuofWcZ7p2pBjclZqoID/hEc=;
        b=UOXSCoJxmnpVjuMvUpp9guycLEE9pkjcVxY9md2+onQ1/ecp1Hi+vcYRfU3bgQSiTH
         FziOD9AYxPqW2NiWirjt0PlziMyMbmRpP+pbqFB99E5rYU777pTYACGKI0RpCF2c9BRo
         NIJYT6J1DaJD+i1uxYNzqd5IK4o7lp3SjuAu41rH0pEihjr5k4oVSIyeZmSuI48S6a/4
         lf7Eg9l27J5ARK9y0vhVPx5QvCOtxV4iNf6PO4ClZ10PXsQFpszITQcubScGmUNUUZZl
         IlGCHvHktRn31X5o8iDr8mhWI0yZv9RpHMLkiGWr0vh8QMQ7JO5k+mYqw7z4f14+oXW7
         vGUw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=i8ZJuoDJybHp2tzVONMGuofWcZ7p2pBjclZqoID/hEc=;
        b=lv3ipFIQLPTVjhrH8hzZLm82PBclkYXaY6FdTSAy9O1hQp5v5CfZoyMFXN+H1DspJP
         yob/iG3YgzJ1aenVbHw0/TbXMcmgsy/L2JHp3W+Z0kQcCaYgz/y8qqBqtnEwwIYnS2rY
         2h57wPV9y5c7SFM53hnUBCZ7OAUC3pJoHz1BH9yqAXBxCaEZS8bJrw5QP9diqPcE3PnD
         Ia5gE6HyCcIoF1qJXaMBdueiq5hZQFQD7jePY2ZwwoYVTy66khNOBPd6XB+6GIfitTAW
         8CsSA6ME6wxXO7Ycx7jrFR1CbCuMI5Lgbfap4d1rboz5D+V/giMXJwDylXI7G/4B/ZOl
         9TQA==
X-Gm-Message-State: AOAM533pVXlIM4ucxExEXAT/jNkD1vzVwpCNFI6mLQI43gnlwsKBfE9R
	RXR7Ok7mUc04n2YLgMiOILI=
X-Google-Smtp-Source: ABdhPJzZUuf2XsInUB3Fx8zqDdvZHWKQUIW5Gw5sT1OV3zktLeXI8yB9W4KtQpq/Gp48DlAV7nptBw==
X-Received: by 2002:ac8:4648:: with SMTP id f8mr1305258qto.5.1607492512637;
        Tue, 08 Dec 2020 21:41:52 -0800 (PST)
Subject: Re: [SPECIFICATION RFC] The firmware and bootloader log specification
To: Paul Menzel <pmenzel@molgen.mpg.de>, Wim Vervoorn <wvervoorn@eltan.com>,
 The development of GNU GRUB <grub-devel@gnu.org>,
 Daniel Kiper <daniel.kiper@oracle.com>
Cc: coreboot@coreboot.org, LKML <linux-kernel@vger.kernel.org>,
 systemd-devel@lists.freedesktop.org, trenchboot-devel@googlegroups.com,
 U-Boot Mailing List <u-boot@lists.denx.de>, x86@kernel.org,
 xen-devel@lists.xenproject.org, alecb@umass.edu,
 alexander.burmashev@oracle.com, allen.cryptic@gmail.com,
 andrew.cooper3@citrix.com, ard.biesheuvel@linaro.org,
 "btrotter@gmail.com" <btrotter@gmail.com>, dpsmith@apertussolutions.com,
 eric.devolder@oracle.com, eric.snowberg@oracle.com, hpa@zytor.com,
 hun@n-dimensional.de, javierm@redhat.com, joao.m.martins@oracle.com,
 kanth.ghatraju@oracle.com, konrad.wilk@oracle.com, krystian.hebel@3mdeb.com,
 leif@nuviainc.com, lukasz.hawrylko@intel.com, luto@amacapital.net,
 michal.zygowski@3mdeb.com, Matthew Garrett <mjg59@google.com>,
 mtottenh@akamai.com, Vladimir 'phcoder' Serbinenko <phcoder@gmail.com>,
 piotr.krol@3mdeb.com, pjones@redhat.com, roger.pau@citrix.com,
 ross.philipson@oracle.com, tyhicks@linux.microsoft.com,
 Heinrich Schuchardt <xypron.glpk@gmx.de>
References: <20201113235242.k6fzlwmwm2xqhqsi@tomti.i.net-space.pl>
 <CAODwPW9dxvMfXY=92pJNGazgYqcynAk72EkzOcmF7JZXhHTwSQ@mail.gmail.com>
 <6c1e79be210549949c30253a6cfcafc1@Eltsrv03.Eltan.local>
 <9b614471-0395-88a5-1347-66417797e39d@molgen.mpg.de>
From: Frank Rowand <frowand.list@gmail.com>
Message-ID: <a173867b-49db-8147-de55-8d601f033036@gmail.com>
Date: Tue, 8 Dec 2020 23:41:50 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <9b614471-0395-88a5-1347-66417797e39d@molgen.mpg.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12/4/20 7:23 AM, Paul Menzel wrote:
> Dear Wim, dear Daniel,
> 
> 
> First, thank you for including all parties in the discussion.
> Am 04.12.20 um 13:52 schrieb Wim Vervoorn:
> 
>> I agree with you. Using an existing standard is better than inventing
>> a new one in this case. I think using the coreboot logging is a good
>> idea as there is indeed a lot of support already available and it is
>> lightweight and simple.
> In my opinion coreboot’s format is lacking in that it does not record the timestamp, and the log level is not stored as metadata but (in coreboot) is only used to decide whether to print the message.
> 
> I agree with you that an existing standard should be used, and in my opinion it’s the Linux message format. That is the most widely supported, and existing tools could then also work with pre-Linux messages.
> 
> Sean Hudson from Mentor Graphics presented that idea at Embedded Linux Conference Europe 2016 [1]. No idea if anything came out of that effort. (Unfortunately, I couldn’t find an email. Does somebody have contacts at Mentor to find out how to reach him?)

I forwarded this to Sean.

-Frank

> 
> 
> Kind regards,
> 
> Paul
> 
> 
> [1]: http://events17.linuxfoundation.org/sites/events/files/slides/2016-10-12%20-%20ELCE%20-%20Shared%20Logging%20-%20Part%20Deux.pdf
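[Editorial note: the Linux message format discussed above carries both the log level and a timestamp as record metadata, which is the distinction being drawn against coreboot's format. As a minimal, hedged sketch — assuming the kernel's documented /dev/kmsg record layout of "prio,seq,timestamp_us,flags;message", where the first field packs (facility << 3) | level — a parser for such a record could look like:]

```python
# Hedged sketch: parse a Linux /dev/kmsg-style log record, whose metadata
# includes the log level and a microsecond timestamp (the two fields noted
# above as missing from coreboot's format). The field layout is an
# assumption based on the kernel's /dev/kmsg ABI description.

def parse_kmsg(record: str) -> dict:
    """Split a /dev/kmsg-style line into its metadata fields and message."""
    meta, _, message = record.partition(";")      # metadata ends at first ';'
    prio, seq, ts_us, *_ = meta.split(",")        # trailing fields (flags) ignored
    prio = int(prio)
    return {
        "level": prio & 7,         # low 3 bits: syslog severity (0..7)
        "facility": prio >> 3,     # remaining bits: syslog facility
        "seq": int(seq),           # monotonic sequence number
        "timestamp_us": int(ts_us),  # microseconds since boot
        "message": message,
    }

rec = parse_kmsg("6,339,5140900,-;NET: Registered protocol family 10")
print(rec["level"], rec["timestamp_us"], rec["message"])
```

[A pre-Linux log encoded this way would let existing dmesg/journal tooling filter by severity and order entries by time, which is the interoperability argument made in the thread.]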



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 06:52:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 06:52:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47978.84874 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmtL3-0001uX-Qk; Wed, 09 Dec 2020 06:52:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47978.84874; Wed, 09 Dec 2020 06:52:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmtL3-0001uQ-NW; Wed, 09 Dec 2020 06:52:37 +0000
Received: by outflank-mailman (input) for mailman id 47978;
 Wed, 09 Dec 2020 06:52:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmtL2-0001uI-QL; Wed, 09 Dec 2020 06:52:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmtL2-0006PV-Fz; Wed, 09 Dec 2020 06:52:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmtL2-0007HF-3d; Wed, 09 Dec 2020 06:52:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kmtL2-0003t5-37; Wed, 09 Dec 2020 06:52:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=s+k82nL84Bd6rW0WVfAjr5c6ydAuuBmyYjnG2Xsc7pw=; b=bDKDMp0f1t4BVmQoWUOIFmMvRs
	ydqSk8weu5azKyQsRWMGTs14SJ8ioraVtcWz8jkLJDXUUxCn1nTVUzJxq/MhvlVPc1YcIjM5Ay7DJ
	EFHA3MK8ICh65VK/t9aCrl355Poar14i2IMBP3TNj+f3Tj6e+8SPG/s49s2myw7KTsxc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157330-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157330: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-install:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=cd796ed3345030aa1bb332fe5c793b3dddaf56e7
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 09 Dec 2020 06:52:36 +0000

flight 157330 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157330/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1 10 host-ping-check-xen fail in 157266 REGR. vs. 152332
 test-arm64-arm64-xl-credit2  12 debian-install fail in 157285 REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu  fail in 157285 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds 20 guest-localmigrate/x10 fail in 157266 pass in 157330
 test-arm64-arm64-xl-credit2   8 xen-boot         fail in 157266 pass in 157330
 test-arm64-arm64-xl-xsm       8 xen-boot         fail in 157285 pass in 157330
 test-amd64-amd64-i386-pvgrub 19 guest-localmigrate/x10 fail in 157285 pass in 157330
 test-arm64-arm64-libvirt-xsm  8 xen-boot                   fail pass in 157266
 test-arm64-arm64-xl-credit1   8 xen-boot                   fail pass in 157266
 test-arm64-arm64-examine      8 reboot                     fail pass in 157285
 test-armhf-armhf-libvirt-raw  8 xen-boot                   fail pass in 157285
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 157285

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11) fail in 157266 blocked in 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail in 157285 like 152332
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail in 157285 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                cd796ed3345030aa1bb332fe5c793b3dddaf56e7
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  130 days
Failing since        152366  2020-08-01 20:49:34 Z  129 days  222 attempts
Testing same since   157266  2020-12-07 22:40:29 Z    1 days    3 attempts

------------------------------------------------------------
3652 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 700707 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 07:24:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 07:24:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.47988.84889 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmtpO-0004ti-G0; Wed, 09 Dec 2020 07:23:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 47988.84889; Wed, 09 Dec 2020 07:23:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmtpO-0004tb-CN; Wed, 09 Dec 2020 07:23:58 +0000
Received: by outflank-mailman (input) for mailman id 47988;
 Wed, 09 Dec 2020 07:23:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmtpM-0004tT-Fn; Wed, 09 Dec 2020 07:23:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmtpM-0007BL-9a; Wed, 09 Dec 2020 07:23:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmtpM-0002Ck-0S; Wed, 09 Dec 2020 07:23:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kmtpM-00044Q-01; Wed, 09 Dec 2020 07:23:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ntyXOEFxY8mWYnu8WkaP0/xOvTZAgDvCAPKVVg0eaAo=; b=Cv7jjRN4e28lbdCEnj6LQRNWrz
	RqAyhL3OPvzhLbB5R7Ik+m6LW6yoAw8LmNjYEL4gcgQkMu0cSNU55jI4VAFbOS9wIfQP/mcJmpQzB
	MI1rdLeo7xImDC6tG/nM1wy5GlbnRtjzdMTcqBV187M98cuj9jAEm7zwun4oWvfhINUs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157339-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157339: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=4e750e932a6a6ea5c72dcccab5ffa8da652a2d60
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 09 Dec 2020 07:23:56 +0000

flight 157339 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157339/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              4e750e932a6a6ea5c72dcccab5ffa8da652a2d60
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  152 days
Failing since        151818  2020-07-11 04:18:52 Z  151 days  146 attempts
Testing same since   157339  2020-12-09 04:19:18 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 32017 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 07:31:02 2020
To: Borislav Petkov <bp@alien8.de>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, peterz@infradead.org, luto@kernel.org,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 "H. Peter Anvin" <hpa@zytor.com>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-8-jgross@suse.com> <20201208184315.GE27920@zn.tnic>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v2 07/12] x86: add new features for paravirt patching
Message-ID: <2510752e-5d3d-f71c-8a4c-a5d2aae0075e@suse.com>
Date: Wed, 9 Dec 2020 08:30:53 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201208184315.GE27920@zn.tnic>
Content-Type: text/plain; charset=utf-8

On 08.12.20 19:43, Borislav Petkov wrote:
> On Fri, Nov 20, 2020 at 12:46:25PM +0100, Juergen Gross wrote:
>> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
>> index dad350d42ecf..ffa23c655412 100644
>> --- a/arch/x86/include/asm/cpufeatures.h
>> +++ b/arch/x86/include/asm/cpufeatures.h
>> @@ -237,6 +237,9 @@
>>   #define X86_FEATURE_VMCALL		( 8*32+18) /* "" Hypervisor supports the VMCALL instruction */
>>   #define X86_FEATURE_VMW_VMMCALL		( 8*32+19) /* "" VMware prefers VMMCALL hypercall instruction */
>>   #define X86_FEATURE_SEV_ES		( 8*32+20) /* AMD Secure Encrypted Virtualization - Encrypted State */
>> +#define X86_FEATURE_NOT_XENPV		( 8*32+21) /* "" Inverse of X86_FEATURE_XENPV */
>> +#define X86_FEATURE_NO_PVUNLOCK		( 8*32+22) /* "" No PV unlock function */
>> +#define X86_FEATURE_NO_VCPUPREEMPT	( 8*32+23) /* "" No PV vcpu_is_preempted function */
>
> Ew, negative features. ;-\

Hey, I already suggested to use ~FEATURE for that purpose (see
https://lore.kernel.org/lkml/f105a63d-6b51-3afb-83e0-e899ea40813e@suse.com/
).

>
> /me goes forward and looks at usage sites:
>
> +	ALTERNATIVE_2 "jmp *paravirt_iret(%rip);",			\
> +		      "jmp native_iret;", X86_FEATURE_NOT_XENPV,	\
> +		      "jmp xen_iret;", X86_FEATURE_XENPV
>
> Can we make that:
>
> 	ALTERNATIVE_TERNARY "jmp *paravirt_iret(%rip);",
> 		      "jmp xen_iret;", X86_FEATURE_XENPV,
> 		      "jmp native_iret;", X86_FEATURE_XENPV,

Would we really want to specify the feature twice?

I'd rather make the syntax:

ALTERNATIVE_TERNARY <initial-code> <feature> <code-for-feature-set>
                                             <code-for-feature-unset>

as this ...

>
> where the last two lines are supposed to mean
>
> 			    X86_FEATURE_XENPV ? "jmp xen_iret;" : "jmp native_iret;"

... would match perfectly to this interpretation.

>
> Now, in order to convey that logic to apply_alternatives(), you can do:
>
> struct alt_instr {
>          s32 instr_offset;       /* original instruction */
>          s32 repl_offset;        /* offset to replacement instruction */
>          u16 cpuid;              /* cpuid bit set for replacement */
>          u8  instrlen;           /* length of original instruction */
>          u8  replacementlen;     /* length of new instruction */
>          u8  padlen;             /* length of build-time padding */
> 	u8  flags;		/* patching flags */			<--- THIS
> } __packed;

Hmm, using flags is an alternative (pun intended :-) ).

>
> and yes, we have had the flags thing in a lot of WIP diffs over the
> years but we've never come to actually needing it.
>
> Anyway, then, apply_alternatives() will do:
>
> 	if (flags & ALT_NOT_FEATURE)
>
> or something like that - I'm bad at naming stuff - then it should patch
> only when the feature is NOT set and vice versa.
>
> There in that
>
> 	if (!boot_cpu_has(a->cpuid)) {
>
> branch.
>
> Hmm?

Fine with me (I'd prefer my ALTERNATIVE_TERNARY syntax, though).

>
>>   /* Intel-defined CPU features, CPUID level 0x00000007:0 (EBX), word 9 */
>>   #define X86_FEATURE_FSGSBASE		( 9*32+ 0) /* RDFSBASE, WRFSBASE, RDGSBASE, WRGSBASE instructions*/
>> diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
>> index 2400ad62f330..f8f9700719cf 100644
>> --- a/arch/x86/kernel/alternative.c
>> +++ b/arch/x86/kernel/alternative.c
>> @@ -593,6 +593,18 @@ int alternatives_text_reserved(void *start, void *end)
>>   #endif /* CONFIG_SMP */
>>   
>>   #ifdef CONFIG_PARAVIRT
>> +static void __init paravirt_set_cap(void)
>> +{
>> +	if (!boot_cpu_has(X86_FEATURE_XENPV))
>> +		setup_force_cpu_cap(X86_FEATURE_NOT_XENPV);
>> +
>> +	if (pv_is_native_spin_unlock())
>> +		setup_force_cpu_cap(X86_FEATURE_NO_PVUNLOCK);
>> +
>> +	if (pv_is_native_vcpu_is_preempted())
>> +		setup_force_cpu_cap(X86_FEATURE_NO_VCPUPREEMPT);
>> +}
>> +
>>   void __init_or_module apply_paravirt(struct paravirt_patch_site *start,
>>   				     struct paravirt_patch_site *end)
>>   {
>> @@ -616,6 +628,8 @@ void __init_or_module apply_paravirt(struct paravirt_patch_site *start,
>>   }
>>   extern struct paravirt_patch_site __start_parainstructions[],
>>   	__stop_parainstructions[];
>> +#else
>> +static void __init paravirt_set_cap(void) { }
>> +#endif	/* CONFIG_PARAVIRT */
>>   
>>   /*
>> @@ -723,6 +737,18 @@ void __init alternative_instructions(void)
>>   	 * patching.
>>   	 */
>>   
>> +	paravirt_set_cap();
>
> Can that be called from somewhere in the Xen init path and not from
> here? Somewhere before check_bugs() gets called.

No, this is needed for non-Xen cases, too (especially pv spinlocks).

>
>> +	/*
>> +	 * First patch paravirt functions, such that we overwrite the indirect
>> +	 * call with the direct call.
>> +	 */
>> +	apply_paravirt(__parainstructions, __parainstructions_end);
>> +
>> +	/*
>> +	 * Then patch alternatives, such that those paravirt calls that are in
>> +	 * alternatives can be overwritten by their immediate fragments.
>> +	 */
>>   	apply_alternatives(__alt_instructions, __alt_instructions_end);
>
> Can you give an example here pls why the paravirt patching needs to run
> first?
Okay.


Juergen



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 07:32:29 2020
Subject: Re: [PATCH] xen/arm: Add workaround for Cortex-A53 erratum #845719
To: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Wei Chen <Wei.Chen@arm.com>
References: <20201208072327.11890-1-michal.orzel@arm.com>
 <d286241c-fd3b-8506-37e5-0ddcdaae97be@xen.org>
 <5D1B5771-A6B3-4F5E-81A1-864DBC8787B4@arm.com>
 <bf45e0f4-2de7-d1db-4732-342937bf61e7@xen.org>
 <alpine.DEB.2.21.2012081730020.20986@sstabellini-ThinkPad-T480s>
From: Michal Orzel <michal.orzel@arm.com>
Message-ID: <e974547f-ddf2-a7d1-43a1-cdc81c874823@arm.com>
Date: Wed, 9 Dec 2020 08:32:19 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2012081730020.20986@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit



On 09.12.2020 02:34, Stefano Stabellini wrote:
> On Tue, 8 Dec 2020, Julien Grall wrote:
>> On 08/12/2020 14:38, Bertrand Marquis wrote:
>>> Hi Julien,
>>>
>>>> On 8 Dec 2020, at 09:47, Julien Grall <julien@xen.org> wrote:
>>>>
>>>> Hi,
>>>>
>>>> On 08/12/2020 07:23, Michal Orzel wrote:
>>>>> When executing in aarch32 state at EL0, a load at EL0 from a
>>>>> virtual address that matches the bottom 32 bits of the virtual address
>>>>> used by a recent load at (aarch64) EL1 might return incorrect data.
>>>>> The workaround is to insert a write of the contextidr_el1 register
>>>>> on exception return to an aarch32 guest.
>>>>
>>>> I am a bit confused with this comment. In the previous paragraph, you are
>>>> suggesting that the problem is an interaction between EL1 AArch64 and EL0
>>>> AArch32. But here you seem to imply the issue only happen when running a
>>>> AArch32 guest.
>>>>
>>>> Can you clarify it?
>>>
>>> This can happen when switching from an aarch64 guest to an aarch32 guest so
>>> not only when there is interaction.
> 
> Just to confirm: it cannot happen when switching from aarch64 *EL2* to
> aarch32 EL0/1, right?  Because that happens all the time in Xen.
> 
> 
No, it cannot. It can only happen when switching from aarch64 EL1 to aarch32 EL0.
>> Right, but the context switch will write to CONTEXTIDR_EL1. So this case
>> should already be handled.
>>
>> Xen will never switch from AArch64 EL1 to AArch32 EL0 without a context switch
>> (the inverse can happen if we inject an exception to the guest).
>>
>> Reading the Cortex-A53 SDEN, it sounds like this is an OS and not Hypervisor
>> problem. In fact, Linux only seems to workaround it when switching in the OS
>> side rather than the hypervisor.
>>
>> Therefore, I am not sure I understand why we need to work around it in Xen.
> 
> It looks like Julien is right in regards to the "aarch64 EL1 to aarch32
> EL0" issue.
> 
Yes, I agree. I missed the fact that there is a write to CONTEXTIDR_EL1
in 'ctxt_switch_to'. Let's abandon this.

Thanks,
Michal


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 07:56:06 2020
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>, Rahul Singh <Rahul.Singh@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>, Paul Durrant <paul@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 8/8] xen/arm: Add support for SMMUv3 driver
Date: Wed, 9 Dec 2020 07:55:24 +0000
Message-ID: <BE0F99C1-AAA7-4EC7-A800-7CDEA24177DF@arm.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <de2101687020d18172a2b153f8977a5116d0cd66.1606406359.git.rahul.singh@arm.com>
 <a67bb114-a4a9-651a-338b-123b350ac4b3@xen.org>
 <9C890E87-D438-4232-8647-8EC64FF32C42@arm.com>
 <bb6a710e-4a7a-5db2-fece-b5845e06d092@xen.org>
 <9F9A955B-815C-4771-9EC0-073E9CF3E995@arm.com>
 <156ab0f5-e46d-6b96-7ff1-28ad3a748950@xen.org>
 <alpine.DEB.2.21.2012081711200.20986@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2012081711200.20986@sstabellini-ThinkPad-T480s>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: ebf6ce2a-a3f8-4b01-510c-08d89c17d327
x-ms-traffictypediagnostic: DB6PR0802MB2295:|DBBPR08MB4474:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DBBPR08MB44743417879E37535B3744119DCC0@DBBPR08MB4474.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <431E8F9461222C48828EFE0EB94A90FE@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0802MB2295
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	354b7248-fddf-4324-a2a0-08d89c17c94c
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Dec 2020 07:55:41.3605
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ebf6ce2a-a3f8-4b01-510c-08d89c17d327
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4474

Hi,

> On 9 Dec 2020, at 01:19, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Tue, 8 Dec 2020, Julien Grall wrote:
>> On 07/12/2020 18:42, Rahul Singh wrote:
>>>> On 7 Dec 2020, at 5:39 pm, Julien Grall <julien@xen.org> wrote:
>>>> On 07/12/2020 12:12, Rahul Singh wrote:
>>>>>>> +typedef paddr_t dma_addr_t;
>>>>>>> +typedef unsigned int gfp_t;
>>>>>>> +
>>>>>>> +#define platform_device device
>>>>>>> +
>>>>>>> +#define GFP_KERNEL 0
>>>>>>> +
>>>>>>> +/* Alias to Xen device tree helpers */
>>>>>>> +#define device_node dt_device_node
>>>>>>> +#define of_phandle_args dt_phandle_args
>>>>>>> +#define of_device_id dt_device_match
>>>>>>> +#define of_match_node dt_match_node
>>>>>>> +#define of_property_read_u32(np, pname, out)
>>>>>>> (!dt_property_read_u32(np, pname, out))
>>>>>>> +#define of_property_read_bool dt_property_read_bool
>>>>>>> +#define of_parse_phandle_with_args dt_parse_phandle_with_args
>>>>>>> +
>>>>>>> +/* Alias to Xen lock functions */
>>>>>>> +#define mutex spinlock
>>>>>>> +#define mutex_init spin_lock_init
>>>>>>> +#define mutex_lock spin_lock
>>>>>>> +#define mutex_unlock spin_unlock
>>>>>>
>>>>>> Hmm... mutex are not spinlock. Can you explain why this is fine to
>>>>>> switch to spinlock?
>>>>> Yes mutex are not spinlock. As mutex is not implemented in XEN I thought
>>>>> of using spinlock in place of mutex as this is the only locking
>>>>> mechanism available in XEN.
>>>>> Let me know if there is another blocking lock available in XEN. I will
>>>>> check if we can use that.
>>>>
>>>> There are no blocking lock available in Xen so far. However, if Linux were
>>>> using mutex instead of spinlock, then it likely means they operations in
>>>> the critical section can be long running.
>>>
>>> Yes you are right Linux is using mutex when attaching a device to the SMMU
>>> as this operation might take longer time.
>>>>
>>>> How did you came to the conclusion that using spinlock in the SMMU driver
>>>> would be fine?
>>>
>>> Mutex is replaced by spinlock  in the SMMU driver when there is a request to
>>> assign a device to the guest. As we are in user context at that time its ok
>>> to use spinlock.
>>
>> I am not sure to understand what you mean by "user context" here. Can you
>> clarify it?
>>
>>> As per my understanding there is one scenario when CPU will spin when there
>>> is a request from the user at the same time to assign another device to the
>>> SMMU and I think that is very rare.
>>
>> What "user" are you referring to?
>>
>>>
>>> Please suggest how we can proceed on this.
>>
>> I am guessing that what you are saying is the request to assign/de-assign
>> device will be issued by the toolstack and therefore they should be trusted.
>>
>> My concern here is not about someone waiting on the lock to be released. It is
>> more the fact that using a mutex() is an insight that the operation protected
>> can be long. Depending on the length, this may result to unwanted side effect
>> (e.g. other vCPU not scheduled, RCU stall in dom0, watchdog hit...).
>>
>> I recall a discussion from a couple of years ago mentioning that STE
>> programming operations can take quite a long time. So I would like to
>> understand how long the operation is meant to last.
>>
>> For a tech preview, this is probably OK to replace the mutex with an spinlock.
>> But I would not want this to go past the tech preview stage without a proper
>> analysis.
>>
>> Stefano, what do you think?
>
> In short, I agree.
>
>
> We need to be very careful replacing mutexes with spinlocks. We need to
> look closely at the ways the spinlocks could introduce unwanted
> latencies. Concurrent assign_device operations are possible but rare
> and, more importantly, they are user-driven so they could be mitigated.
> I am more worried about other possible scenarios, e.g. STE or other
> operations.
>
> Rahul clearly put a lot of work into this series already and I think it
> is better to take this incrementally, which will allow us to do better
> testing and also move faster overall. So I am fine to take the series as
> is now, pending an investigation on the spinlocks later.

I also agree with the issue on the spinlock but we have no equivalent of something
looking like a mutex for now in Xen so this would require some major redesign and
will take us far from the linux driver.

I would suggest to add a comment before this part of the code with a “TODO” so that
it is clear inside the code.
We could also add some comment in Kconfig to mention this possible “faulty” behaviour.

Would you agree on going forward like this ?

Regards
Bertrand


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 08:21:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 08:21:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48025.84946 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmuiw-0002up-6z; Wed, 09 Dec 2020 08:21:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48025.84946; Wed, 09 Dec 2020 08:21:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmuiw-0002ui-3E; Wed, 09 Dec 2020 08:21:22 +0000
Received: by outflank-mailman (input) for mailman id 48025;
 Wed, 09 Dec 2020 08:21:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uDNN=FN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmuiv-0002ud-Cq
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 08:21:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4a50152d-9972-4d03-8093-91d06f3869d7;
 Wed, 09 Dec 2020 08:21:19 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 08D66AC94;
 Wed,  9 Dec 2020 08:21:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a50152d-9972-4d03-8093-91d06f3869d7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607502079; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=BR6OrlxOK2RJx/adT+RYdSKkeSPh4Laeq9svf64qL4g=;
	b=LmlvQZfMK5mvaPD0MltX3k9qTurzaUI0cza4yMWsgkB6oLg1qF39uszypauhZHBJCM+pSG
	u9HbpPyPtTmVnqzuAi2uE/ynpkFsEsKf034mWLUN3FT5Zw4V30OaxYKoTmVi01FvL8oROB
	dXTKTNL3RGni8Tubr+wKRMwqhlsKdKA=
Subject: Re: [PATCH V3 20/23] xen/ioreq: Make x86's send_invalidate_req()
 common
To: Oleksandr <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-21-git-send-email-olekstysh@gmail.com>
 <acb09993-40fc-2ab0-21b9-5dbe2f061554@suse.com>
 <efbac31b-1a9d-da16-ef60-372f10451f8e@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <545e71ec-7168-3a45-3c21-82d839481d03@suse.com>
Date: Wed, 9 Dec 2020 09:21:18 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <efbac31b-1a9d-da16-ef60-372f10451f8e@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 08.12.2020 17:49, Oleksandr wrote:
> On 08.12.20 17:24, Jan Beulich wrote:
>> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>>> --- a/xen/include/xen/sched.h
>>> +++ b/xen/include/xen/sched.h
>>> @@ -552,6 +552,8 @@ struct domain
>>>           struct ioreq_server     *server[MAX_NR_IOREQ_SERVERS];
>>>           unsigned int            nr_servers;
>>>       } ioreq_server;
>>> +
>>> +    bool mapcache_invalidate;
>>>   #endif
>>>   };
>> While I can see reasons to put this inside the #ifdef here, I
>> would suspect putting it in the hole next to the group of 5
>> bools further up would be more efficient.
> 
> ok, will put (although it will increase the number of #ifdef)

I was implying no #ifdef in this case, suitably justified by half
a sentence in the patch description.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 08:33:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 08:33:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48032.84958 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmuuy-00040J-0L; Wed, 09 Dec 2020 08:33:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48032.84958; Wed, 09 Dec 2020 08:33:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmuux-00040C-Sr; Wed, 09 Dec 2020 08:33:47 +0000
Received: by outflank-mailman (input) for mailman id 48032;
 Wed, 09 Dec 2020 08:33:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FZAR=FN=kaod.org=clg@srs-us1.protection.inumbo.net>)
 id 1kmuuv-000407-VM
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 08:33:45 +0000
Received: from 8.mo51.mail-out.ovh.net (unknown [46.105.45.231])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3368b96f-7108-4f7a-b1c9-48dd7800cb85;
 Wed, 09 Dec 2020 08:33:44 +0000 (UTC)
Received: from mxplan5.mail.ovh.net (unknown [10.109.143.183])
 by mo51.mail-out.ovh.net (Postfix) with ESMTPS id AF737240B8B;
 Wed,  9 Dec 2020 09:33:38 +0100 (CET)
Received: from kaod.org (37.59.142.106) by DAG4EX1.mxp5.local (172.16.2.31)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2044.4; Wed, 9 Dec 2020
 09:33:36 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3368b96f-7108-4f7a-b1c9-48dd7800cb85
Authentication-Results: garm.ovh; auth=pass (GARM-106R00683016755-4cd7-4a58-9314-76cd7bd62976,
                    F89B705946B58AFC48376D7414C518838B559611) smtp.auth=clg@kaod.org
X-OVh-ClientIp: 82.64.250.170
Subject: Re: [PATCH 3/3] net: checksum: Introduce fine control over checksum
 type
To: Bin Meng <bmeng.cn@gmail.com>, <qemu-devel@nongnu.org>
CC: Bin Meng <bin.meng@windriver.com>, Alistair Francis
	<alistair@alistair23.me>, Andrew Jeffery <andrew@aj.id.au>, Anthony Perard
	<anthony.perard@citrix.com>, Beniamino Galvani <b.galvani@gmail.com>, David
 Gibson <david@gibson.dropbear.id.au>, "Edgar E. Iglesias"
	<edgar.iglesias@gmail.com>, Jason Wang <jasowang@redhat.com>, Joel Stanley
	<joel@jms.id.au>, Li Zhijian <lizhijian@cn.fujitsu.com>, "Michael S. Tsirkin"
	<mst@redhat.com>, Paul Durrant <paul@xen.org>, Peter Chubb
	<peter.chubb@nicta.com.au>, Peter Maydell <peter.maydell@linaro.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Zhang Chen <chen.zhang@intel.com>,
	<qemu-arm@nongnu.org>, <qemu-ppc@nongnu.org>,
	<xen-devel@lists.xenproject.org>
References: <1607220847-24096-1-git-send-email-bmeng.cn@gmail.com>
 <1607220847-24096-3-git-send-email-bmeng.cn@gmail.com>
From: =?UTF-8?Q?C=c3=a9dric_Le_Goater?= <clg@kaod.org>
Message-ID: <adb845f8-8623-988f-cb11-148ec4cc2f4b@kaod.org>
Date: Wed, 9 Dec 2020 09:33:35 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <1607220847-24096-3-git-send-email-bmeng.cn@gmail.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.59.142.106]
X-ClientProxiedBy: DAG4EX1.mxp5.local (172.16.2.31) To DAG4EX1.mxp5.local
 (172.16.2.31)
X-Ovh-Tracer-GUID: 481ee65c-d874-4de9-800e-f6db287e3d47
X-Ovh-Tracer-Id: 10079618919775570899
X-VR-SPAMSTATE: OK
X-VR-SPAMSCORE: -100

Hello !

> diff --git a/hw/net/ftgmac100.c b/hw/net/ftgmac100.c
> index 782ff19..fbae1f1 100644
> --- a/hw/net/ftgmac100.c
> +++ b/hw/net/ftgmac100.c
> @@ -573,7 +573,15 @@ static void ftgmac100_do_tx(FTGMAC100State *s, uint32_t tx_ring,
>              }
>  
>              if (flags & FTGMAC100_TXDES1_IP_CHKSUM) {
> -                net_checksum_calculate(s->frame, frame_size);
> +                /*
> +                 * TODO:
> +                 * FTGMAC100_TXDES1_IP_CHKSUM seems to be only for IP checksum,
> +                 * however previous net_checksum_calculate() did not calculate
> +                 * IP checksum at all. Passing CSUM_ALL for now until someone
> +                 * who is familar with this MAC to figure out what should be
> +                 * properly added for TCP/UDP checksum offload.
> +                 */
> +                net_checksum_calculate(s->frame, frame_size, CSUM_ALL);
>              }
>              /* Last buffer in frame.  */
>              qemu_send_packet(qemu_get_queue(s->nic), s->frame, frame_size);


You can test your changes using the HOWTO Joel provided here:

  https://github.com/openbmc/qemu/wiki/Usage

Please also check the Linux driver:

  https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/faraday/ftgmac100.c#n685

That said, something like the change below should be more appropriate.

Thanks,

C. 

+static int ftgmac100_convert_csum_flag(uint32_t flags)
+{
+    int csum = 0;
+
+    if (flags & FTGMAC100_TXDES1_IP_CHKSUM) {
+        csum |= CSUM_IP;
+    }
+    if (flags & FTGMAC100_TXDES1_TCP_CHKSUM) {
+        csum |= CSUM_TCP;
+    }
+    if (flags & FTGMAC100_TXDES1_UDP_CHKSUM) {
+        csum |= CSUM_UDP;
+    }
+    return csum;
+}
+
 static void ftgmac100_do_tx(FTGMAC100State *s, uint32_t tx_ring,
                             uint32_t tx_descriptor)
 {
@@ -602,6 +618,7 @@ static void ftgmac100_do_tx(FTGMAC100Sta
         ptr += len;
         frame_size += len;
         if (bd.des0 & FTGMAC100_TXDES0_LTS) {
+            int csum = ftgmac100_convert_csum_flag(flags);
 
             /* Check for VLAN */
             if (flags & FTGMAC100_TXDES1_INS_VLANTAG &&
@@ -610,16 +627,8 @@ static void ftgmac100_do_tx(FTGMAC100Sta
                                             FTGMAC100_TXDES1_VLANTAG_CI(flags));
             }
 
-            if (flags & FTGMAC100_TXDES1_IP_CHKSUM) {
-                /*
-                 * TODO:
-                 * FTGMAC100_TXDES1_IP_CHKSUM seems to be only for IP checksum,
-                 * however previous net_checksum_calculate() did not calculate
-                 * IP checksum at all. Passing CSUM_ALL for now until someone
-                 * who is familar with this MAC to figure out what should be
-                 * properly added for TCP/UDP checksum offload.
-                 */
-                net_checksum_calculate(s->frame, frame_size, CSUM_ALL);
+            if (csum) {
+                net_checksum_calculate(s->frame, frame_size, csum);
             }
             /* Last buffer in frame.  */
             qemu_send_packet(qemu_get_queue(s->nic), s->frame, frame_size);


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 08:39:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 08:39:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48038.84970 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmv0r-0004E4-S5; Wed, 09 Dec 2020 08:39:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48038.84970; Wed, 09 Dec 2020 08:39:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmv0r-0004Dx-O1; Wed, 09 Dec 2020 08:39:53 +0000
Received: by outflank-mailman (input) for mailman id 48038;
 Wed, 09 Dec 2020 08:39:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uDNN=FN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kmv0q-0004Ds-IJ
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 08:39:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 713da693-e4ec-4974-bd5f-07ebd5931f06;
 Wed, 09 Dec 2020 08:39:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 64BE8ACF9;
 Wed,  9 Dec 2020 08:39:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 713da693-e4ec-4974-bd5f-07ebd5931f06
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607503191; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=WmiMX/BdyhXgrl1NBMoFYDC7yJJFNPRV13Db/WmHB0w=;
	b=C7ap+WWBLvI0qGL6VHY4w2V5FDtIGiTmCqoWIWnAhMKPUWrmYq9g6cl4GmW7LJHa7DgNix
	OvByPsQpLzN/XiMBCgnhBRwZV77Mv0KT48wWaMwDZ3ErrD5fVaYRMtZHe2YdAHwlaxAQYe
	qEcMoDMI1hwiZ7YUSZ7rawIbQS89jcs=
Subject: Re: dom0 PV looping on search_pre_exception_table()
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Manuel Bouyer <bouyer@antioche.eu.org>
References: <20201208175738.GA3390@antioche.eu.org>
 <e73cc71d-c1a6-87c8-1b82-5d70d4f52eaa@citrix.com>
Cc: xen-devel@lists.xenproject.org
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <cf1f85eb-1aa2-c3fa-680c-ea5ba5f68647@suse.com>
Date: Wed, 9 Dec 2020 09:39:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <e73cc71d-c1a6-87c8-1b82-5d70d4f52eaa@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 08.12.2020 19:13, Andrew Cooper wrote:
> On 08/12/2020 17:57, Manuel Bouyer wrote:
>> Hello,
>> for the first time I tried to boot a xen kernel from devel with
>> a NetBSD PV dom0. The kernel boots, but when the first userland process
>> is launched, it seems to enter a loop involving search_pre_exception_table()
>> (I see an endless stream from the dprintk() at arch/x86/extable.c:202)
>>
>> With xen 4.13 I see it, but exactly once:
>> (XEN) extable.c:202: Pre-exception: ffff82d08038c304 -> ffff82d08038c8c8
>>
>> with devel:
>> (XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
>> (XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
>> (XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
>> (XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
>> (XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
>> [...]
>>
>> the dom0 kernel is the same.
>>
>> At first glance it looks like a fault in the guest is not handled as it should,
>> and the userland process keeps faulting on the same address.
>>
>> Any idea what to look at ?
> 
> That is a reoccurring fault on IRET back to guest context, and is
> probably caused by some unwise-in-hindsight cleanup which doesn't
> escalate the failure to the failsafe callback.

But is this a 32-bit Dom0? 64-bit ones get well-known selectors
installed for CS and SS by create_bounce_frame(), and we don't
permit registration of non-canonical trap handler entry point
addresses.

I have to admit I also find curious the difference between 4.13
and master.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 09:01:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 09:01:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48044.84981 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmvLo-0006wl-KI; Wed, 09 Dec 2020 09:01:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48044.84981; Wed, 09 Dec 2020 09:01:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmvLo-0006we-H7; Wed, 09 Dec 2020 09:01:32 +0000
Received: by outflank-mailman (input) for mailman id 48044;
 Wed, 09 Dec 2020 09:01:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3ulx=FN=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kmvLm-0006wY-TN
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 09:01:30 +0000
Received: from mail-wr1-x42d.google.com (unknown [2a00:1450:4864:20::42d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4b2c993e-aec7-4513-b805-723463ff5767;
 Wed, 09 Dec 2020 09:01:29 +0000 (UTC)
Received: by mail-wr1-x42d.google.com with SMTP id i9so849293wrc.4
 for <xen-devel@lists.xenproject.org>; Wed, 09 Dec 2020 01:01:29 -0800 (PST)
Received: from CBGR90WXYV0 (host86-183-162-145.range86-183.btcentralplus.com.
 [86.183.162.145])
 by smtp.gmail.com with ESMTPSA id c10sm313809wrb.92.2020.12.09.01.01.27
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 09 Dec 2020 01:01:28 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4b2c993e-aec7-4513-b805-723463ff5767
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=lPHsFue/8A1LiIQByTrkFvGuPpPc+9TVRioxM6O/VAg=;
        b=At0H2eMlJtYuHyiwGRn1V5FJ/KqvkdrT8F2HOndQqcQTq4hGXGFgwVDVzvF+hGAzrx
         boUsYoce/SWJgq9WJr3bavv/lvGpANzo2hvIkgRX+or0g3xsQRMmqbRoE03SzPcFtyic
         Uw/FEMyQtHnWihoHd+QjVrtfYjTZwwxflqWzHxRMu8QNRXBY8RDv9iGJ9E9tHudRBLBj
         18w4C37HACVkUD53RQTwmSR2RkBhtbBpu552FhdVVYIRUSxSrmYV4HKYU+P3JIEt/kKP
         ne3PUMDz4XURmWiTdP03phRp2CTDCVoqcZjm2dOW+Yzt9z8PRQwB8HcOs3jkxJDy1tmD
         x+pw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=lPHsFue/8A1LiIQByTrkFvGuPpPc+9TVRioxM6O/VAg=;
        b=qBRJkgDRNmcSKzM4NFbU6JXMoSgQv+ODCH3SuZZja3PX3OK2/+H+8m8NBuoT+w20IF
         0VXPXY2SHjmNfLruM2ux43SHqiHGi8Z+TC0IJKLroqSewdhDScymJcRBWDEI40iQxoln
         WPTNqzHPLf5/LSVg/c4LUD4xlwqTqSyfx622RihUTFOeDVaulM/ROFHKJ9ppbZD6GsXy
         GoDBLA5f/TaeDUl96fqYV21n6gLAb3VJGOSBGLwtLSSM5mQF8VyvBLPrWCVBN9QBjUph
         drdUcKpNPkGq/hq9BXfWYAAF4MtAPhsqPoQGUm5vRzkC08KbwkVYhw+vQSPPvHqmKbhB
         CzyQ==
X-Gm-Message-State: AOAM5321btbcNFpcalRJTLMrQDyI8ppaQBbyueET2b4nTFcCFJ+M5l6h
	S83R3+gGXQMsw+gxiWc4DbE=
X-Google-Smtp-Source: ABdhPJyBN3Sy+Tpt0p2o3pXIxEp154Y4tLbSJFjl1TU1q0idH9o0QKXV0L3RmcXPQyhNrdCJYd9mpQ==
X-Received: by 2002:a5d:660c:: with SMTP id n12mr1462599wru.291.1607504488891;
        Wed, 09 Dec 2020 01:01:28 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Oleksandr'" <olekstysh@gmail.com>
Cc: "'Jan Beulich'" <jbeulich@suse.com>,
	"'Oleksandr Tyshchenko'" <oleksandr_tyshchenko@epam.com>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Julien Grall'" <julien@xen.org>,
	"'Volodymyr Babchuk'" <Volodymyr_Babchuk@epam.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Wei Liu'" <wl@xen.org>,
	"'Julien Grall'" <julien.grall@arm.com>,
	<xen-devel@lists.xenproject.org>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com> <1606732298-22107-18-git-send-email-olekstysh@gmail.com> <3bb4c3b5-a46a-ba31-292f-5c6ba49fa9be@suse.com> <6026b7f3-ae6e-f98f-be65-27d7f729a37f@gmail.com> <18bfd9b1-3e6a-8119-efd0-c82ad7ae681d@gmail.com> <0d6c01d6cd9a$666326c0$33297440$@xen.org> <57bfc007-e400-6777-0075-827daa8acf0e@gmail.com>
In-Reply-To: <57bfc007-e400-6777-0075-827daa8acf0e@gmail.com>
Subject: RE: [PATCH V3 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
Date: Wed, 9 Dec 2020 09:01:27 -0000
Message-ID: <0d7201d6ce09$e13dce80$a3b96b80$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQKk0D4Qme59XF0a0h96d36zIOxDhQIE/k2jAQvf44kCfV5qfQHp2KBNAqVQolsB+TCoJqfxvSbg

> -----Original Message-----
> From: Oleksandr <olekstysh@gmail.com>
> Sent: 08 December 2020 20:17
> To: paul@xen.org
> Cc: 'Jan Beulich' <jbeulich@suse.com>; 'Oleksandr Tyshchenko' =
<oleksandr_tyshchenko@epam.com>;
> 'Stefano Stabellini' <sstabellini@kernel.org>; 'Julien Grall' =
<julien@xen.org>; 'Volodymyr Babchuk'
> <Volodymyr_Babchuk@epam.com>; 'Andrew Cooper' =
<andrew.cooper3@citrix.com>; 'George Dunlap'
> <george.dunlap@citrix.com>; 'Ian Jackson' <iwj@xenproject.org>; 'Wei =
Liu' <wl@xen.org>; 'Julien Grall'
> <julien.grall@arm.com>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH V3 17/23] xen/ioreq: Introduce =
domain_has_ioreq_server()
>=20
>=20
> On 08.12.20 21:43, Paul Durrant wrote:
>=20
> Hi Paul
>=20
> >> -----Original Message-----
> >> From: Oleksandr <olekstysh@gmail.com>
> >> Sent: 08 December 2020 16:57
> >> To: Paul Durrant <paul@xen.org>
> >> Cc: Jan Beulich <jbeulich@suse.com>; Oleksandr Tyshchenko =
<oleksandr_tyshchenko@epam.com>; Stefano
> >> Stabellini <sstabellini@kernel.org>; Julien Grall <julien@xen.org>; =
Volodymyr Babchuk
> >> <Volodymyr_Babchuk@epam.com>; Andrew Cooper =
<andrew.cooper3@citrix.com>; George Dunlap
> >> <george.dunlap@citrix.com>; Ian Jackson <iwj@xenproject.org>; Wei =
Liu <wl@xen.org>; Julien Grall
> >> <julien.grall@arm.com>; xen-devel@lists.xenproject.org
> >> Subject: Re: [PATCH V3 17/23] xen/ioreq: Introduce =
domain_has_ioreq_server()
> >>
> >>
> >> Hi Paul.
> >>
> >>
> >> On 08.12.20 17:33, Oleksandr wrote:
> >>> On 08.12.20 17:11, Jan Beulich wrote:
> >>>
> >>> Hi Jan
> >>>
> >>>> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
> >>>>> --- a/xen/include/xen/ioreq.h
> >>>>> +++ b/xen/include/xen/ioreq.h
> >>>>> @@ -55,6 +55,20 @@ struct ioreq_server {
> >>>>>        uint8_t                bufioreq_handling;
> >>>>>    };
> >>>>>    +/*
> >>>>> + * This should only be used when d =3D=3D current->domain and =
it's not
> >>>>> paused,
> >>>> Is the "not paused" part really relevant here? Besides it being =
rare
> >>>> that the current domain would be paused (if so, it's in the =
process
> >>>> of having all its vCPU-s scheduled out), does this matter at =
all?
> >>> No, it isn't relevant, I will drop it.
> >>>
> >>>
> >>>> Apart from this the patch looks okay to me, but I'm not sure it
> >>>> addresses Paul's concerns. Iirc he had suggested to switch back =
to
> >>>> a list if doing a swipe over the entire array is too expensive in
> >>>> this specific case.
> >>> We would like to avoid doing any extra actions in
> >>> leave_hypervisor_to_guest() if possible.
> >>> But not only there, the logic whether we check/set
> >>> mapcache_invalidation variable could be avoided if a domain =
doesn't
> >>> use IOREQ server...
> >>
> >> Are you OK with this patch (common part of it)?
> > How much of a performance benefit is this? The array is small so =
simply counting the non-NULL
> entries should be pretty quick.
> I didn't perform performance measurements on how much this call =
consumes.
> In our system we run three domains. The emulator is in DomD only, so I
> would like to avoid calling vcpu_ioreq_handle_completion() for every
> Dom0/DomU vCPU if there is no real need to do it.

This is not relevant to the domain that the emulator is running in; it =
concerns the domains that the emulator is servicing. How many of =
those are there?

> On Arm vcpu_ioreq_handle_completion()
> is called with IRQs enabled, so the call is accompanied by
> corresponding irq_enable/irq_disable.
> These unneeded actions could be avoided by using this simple one-line
> helper...
>=20

The helper may be one line but there is more to the patch than that. I =
still think you could just walk the array in the helper rather than =
keeping a running occupancy count.

  Paul




From xen-devel-bounces@lists.xenproject.org Wed Dec 09 09:18:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 09:18:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48050.84993 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmvcO-000886-UC; Wed, 09 Dec 2020 09:18:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48050.84993; Wed, 09 Dec 2020 09:18:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmvcO-00087z-RH; Wed, 09 Dec 2020 09:18:40 +0000
Received: by outflank-mailman (input) for mailman id 48050;
 Wed, 09 Dec 2020 09:18:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kmvcN-00087u-3h
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 09:18:39 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmvcJ-0003J4-O2; Wed, 09 Dec 2020 09:18:35 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmvcJ-0001sv-EN; Wed, 09 Dec 2020 09:18:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=DAahiC9NoYOt+ab4ncTjAy1i0GArZVsp5vJUbWwC1qM=; b=ovHP4iWyC2xDcIZMJVqBEoN5kz
	M2SiC4Jjt/UMig7sj3mL/5VMo3nb0UQ7qxKXHhK5Cn+wxdj8asRo/8jdQiWEbM/mzoQhwSouIb2Za
	g+olcCJ0kBnTLlca2xjT2abiEw7gv5lnSuVbP5uEmDLNSzQ8xCaJMJYAirxEw829vUjA=;
Subject: Re: [PATCH v2 8/8] xen/arm: Add support for SMMUv3 driver
To: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <de2101687020d18172a2b153f8977a5116d0cd66.1606406359.git.rahul.singh@arm.com>
 <a67bb114-a4a9-651a-338b-123b350ac4b3@xen.org>
 <9C890E87-D438-4232-8647-8EC64FF32C42@arm.com>
 <bb6a710e-4a7a-5db2-fece-b5845e06d092@xen.org>
 <9F9A955B-815C-4771-9EC0-073E9CF3E995@arm.com>
 <156ab0f5-e46d-6b96-7ff1-28ad3a748950@xen.org>
 <alpine.DEB.2.21.2012081711200.20986@sstabellini-ThinkPad-T480s>
 <BE0F99C1-AAA7-4EC7-A800-7CDEA24177DF@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <47bf1e5d-9e9e-d455-f6d8-5ddec0367ef2@xen.org>
Date: Wed, 9 Dec 2020 09:18:32 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <BE0F99C1-AAA7-4EC7-A800-7CDEA24177DF@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 09/12/2020 07:55, Bertrand Marquis wrote:
> Hi,

Hi,

> I also agree with the issue on the spinlock but we have no equivalent of something
> looking like a mutex for now in Xen, so this would require some major redesign and
> would take us far from the Linux driver.

I agree that keeping the Xen implementation close to the Linux one is 
important. However, I view this as a secondary goal. The primary 
goal is to have a safe and secure driver.

If it means introducing a new set of locks or diverging from Linux, then 
so be it.

> 
> I would suggest to add a comment before this part of the code with a “TODO” so that
> it is clear inside the code.
> We could also add some comment in Kconfig to mention this possible “faulty” behaviour.

I think it would be enough to write in the Kconfig that the driver is 
"experimental and should not be used in production".

Ideally, I would like a list of known issues for the driver (could be in 
the cover letter or/and at the top of the source file) so we can track 
what's missing to get it supported.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 09:41:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 09:41:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48056.85006 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmvyI-0002P8-PV; Wed, 09 Dec 2020 09:41:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48056.85006; Wed, 09 Dec 2020 09:41:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmvyI-0002P1-MV; Wed, 09 Dec 2020 09:41:18 +0000
Received: by outflank-mailman (input) for mailman id 48056;
 Wed, 09 Dec 2020 09:41:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kmvyH-0002Ov-Ax
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 09:41:17 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmvyG-0003ll-2k; Wed, 09 Dec 2020 09:41:16 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmvyF-0003ae-RU; Wed, 09 Dec 2020 09:41:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=R4sJhaKCXsab7iT7m8N1tJ5o9kZxMmHfCEkyfLNMiSA=; b=yQgbxDNMHmrd4/QQFkEXNfiMJq
	+34D272EPBBURZNCpPJjJH3EnkOvHQi9fIhquyco9hWq2kDUSYjXYYQuewbzUYWA0PIrJ25L8EUB0
	p5nnLu01DIkdELUNtv52bC+l5nsrVlsvk2AlwvC8OBI576se4XhNSJu1hUlR6VKMuqIM=;
Subject: Re: [PATCH v2 7/8] lib: move bsearch code
To: Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
 <87a20884-5a76-a664-dcc9-bd4becee40b3@suse.com>
 <44ffc041-cacd-468e-a835-f5b2048bb201@xen.org>
 <2cf3a90d-f463-41f8-f861-6ef00279b204@suse.com>
 <2419eccf-c696-6aa1-ada4-0f7bd6bc5657@xen.org>
 <77534dc3-bdd6-f884-99e3-90dc9b02a81f@citrix.com>
 <59a4e1c1-ea39-1846-92ae-92560db4b1fb@xen.org>
 <e0782c3b-9958-3792-eab9-d3fd6708225f@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <5fc44865-2115-947c-bd22-b51d7f17d39c@xen.org>
Date: Wed, 9 Dec 2020 09:41:13 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <e0782c3b-9958-3792-eab9-d3fd6708225f@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Jan,

On 07/12/2020 10:23, Jan Beulich wrote:
> On 24.11.2020 17:57, Julien Grall wrote:
>> On 24/11/2020 00:40, Andrew Cooper wrote:
>>> On a totally separate point, I wonder if we'd be better off compiling
>>> with -fgnu89-inline because I can't see any case where we'd want the C99
>>> inline semantics anywhere in Xen.
>>
>> This was one of my point above. It feels that if we want to use the
>> behavior in Xen, then it should be everywhere rather than just this helper.
> 
> I'll be committing the series up to patch 6 in a minute. It remains
> unclear to me whether your responses on this sub-thread are meant
> to be an objection, or just a comment. Andrew gave his R-b despite
> this separate consideration, and I now also have an ack from Wei
> for the entire series. Please clarify.

It still feels strange to apply this to one function and not the others... 
But I don't have a strong objection against the idea of using C99 
inlines in Xen.

IOW, I will neither Ack nor NAck this patch.

> 
> Or actually I only thought I could commit a fair initial part of
> the series - I'm still lacking Arm-side acks for patches 2 and 3
> here.

If you remember, I have asked if Anthony could review the build system 
because he worked on it recently.

Unfortunately, I haven't seen any answer so far... I have pinged him on IRC.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 09:48:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 09:48:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48062.85018 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmw4y-0002eB-Hq; Wed, 09 Dec 2020 09:48:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48062.85018; Wed, 09 Dec 2020 09:48:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmw4y-0002e4-EZ; Wed, 09 Dec 2020 09:48:12 +0000
Received: by outflank-mailman (input) for mailman id 48062;
 Wed, 09 Dec 2020 09:48:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmw4x-0002dw-50; Wed, 09 Dec 2020 09:48:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmw4w-0003vf-SO; Wed, 09 Dec 2020 09:48:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmw4w-00018D-CV; Wed, 09 Dec 2020 09:48:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kmw4w-0005vY-C5; Wed, 09 Dec 2020 09:48:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/FU8OOivyMLVridxEUC9401GRTVxWvWUR0O3mnLIiXc=; b=SgfEkMtzyyvUk4gosfbn6rRxwO
	Yyuih+1fi7YvTm+1FdK5ddLqgSgVeDQoYfF2qjnjqjbXpXmQv3qxWRcnf/FmSk9hQwkLw4nEVGXLR
	lEzzWYpY+s+cEwSiqXlLYQu0V1MLsA9oyf7dauTPcE+F1BF+JkTlNwRaH3zCHS00gz/k=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157343-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 157343: all pass - PUSHED
X-Osstest-Versions-This:
    xen=777e3590f154e6a8af560dd318b9465fa168db20
X-Osstest-Versions-That:
    xen=5e666356a9d55fbd9eb5b8506088aa760e107b5b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 09 Dec 2020 09:48:10 +0000

flight 157343 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157343/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  777e3590f154e6a8af560dd318b9465fa168db20
baseline version:
 xen                  5e666356a9d55fbd9eb5b8506088aa760e107b5b

Last test of basis   157238  2020-12-06 09:18:27 Z    3 days
Testing same since   157343  2020-12-09 09:19:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Hongyan Xia <hongyxia@amazon.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5e666356a9..777e3590f1  777e3590f154e6a8af560dd318b9465fa168db20 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 09:49:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 09:49:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48068.85033 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmw66-0002lb-UO; Wed, 09 Dec 2020 09:49:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48068.85033; Wed, 09 Dec 2020 09:49:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmw66-0002lU-Ql; Wed, 09 Dec 2020 09:49:22 +0000
Received: by outflank-mailman (input) for mailman id 48068;
 Wed, 09 Dec 2020 09:49:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XdhY=FN=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kmw66-0002lP-2X
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 09:49:22 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c6572575-c20b-4e6b-b715-acfde1cdd48a;
 Wed, 09 Dec 2020 09:49:19 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0B99nAfK029140
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Wed, 9 Dec 2020 10:49:10 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id E0FA42E93A2; Wed,  9 Dec 2020 10:49:04 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c6572575-c20b-4e6b-b715-acfde1cdd48a
Date: Wed, 9 Dec 2020 10:49:04 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: dom0 PV looping on search_pre_exception_table()
Message-ID: <20201209094904.GC1469@antioche.eu.org>
References: <20201208175738.GA3390@antioche.eu.org>
 <e73cc71d-c1a6-87c8-1b82-5d70d4f52eaa@citrix.com>
 <cf1f85eb-1aa2-c3fa-680c-ea5ba5f68647@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <cf1f85eb-1aa2-c3fa-680c-ea5ba5f68647@suse.com>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Wed, 09 Dec 2020 10:49:11 +0100 (MET)

On Wed, Dec 09, 2020 at 09:39:49AM +0100, Jan Beulich wrote:
> On 08.12.2020 19:13, Andrew Cooper wrote:
> > On 08/12/2020 17:57, Manuel Bouyer wrote:
> >> Hello,
> >> for the first time I tried to boot a xen kernel from devel with
> >> a NetBSD PV dom0. The kernel boots, but when the first userland prcess
> >> is launched, it seems to enter a loop involving search_pre_exception_table()
> >> (I see an endless stream from the dprintk() at arch/x86/extable.c:202)
> >>
> >> With xen 4.13 I see it, but exactly once:
> >> (XEN) extable.c:202: Pre-exception: ffff82d08038c304 -> ffff82d08038c8c8
> >>
> >> with devel:
> >> (XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
> >> (XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
> >> (XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
> >> (XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
> >> (XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
> >> [...]
> >>
> >> the dom0 kernel is the same.
> >>
> >> At first glance it looks like a fault in the guest is not handled as it should,
> >> and the userland process keeps faulting on the same address.
> >>
> >> Any idea what to look at ?
> > 
> > That is a reoccurring fault on IRET back to guest context, and is
> > probably caused by some unwise-in-hindsight cleanup which doesn't
> > escalate the failure to the failsafe callback.
> 
> But is this a 32-bit Dom0? 64-bit ones get well-known selectors
> installed for CS and SS by create_bounce_frame(), and we don't
> permit registration of non-canonical trap handler entry point
> addresses.

No, it's a 64-bit dom0.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 09:53:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 09:53:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48076.85045 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmwA5-0003iS-Ff; Wed, 09 Dec 2020 09:53:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48076.85045; Wed, 09 Dec 2020 09:53:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmwA5-0003iL-Cd; Wed, 09 Dec 2020 09:53:29 +0000
Received: by outflank-mailman (input) for mailman id 48076;
 Wed, 09 Dec 2020 09:53:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kmwA3-0003iG-VM
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 09:53:28 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmwA2-000424-W3; Wed, 09 Dec 2020 09:53:26 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmwA2-0004aI-Nk; Wed, 09 Dec 2020 09:53:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=CnzN0qywqES9bUxXUiV2DG77UGN62F8l63o8oWlkDBA=; b=XKxOQiFCukGOeXckYI/KSJPyil
	JV/PjG7IjycEtANuFobDgY+TXmMDVwc23HoHqTds9wuGJwyRb6067QSUuPgWNgPdI3+YfYUNm//yx
	a+OVQT7aHwtNO87l0ebx4pIirraYkKLyL+leLYo/L8ub9o9tg2BGw8OzbmpTBzv1DvjU=;
Subject: Re: [PATCH v3 1/5] evtchn: drop acquiring of per-channel lock from
 send_guest_{global,vcpu}_virq()
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d709a9c3-dbe2-65c6-2c2f-6a12f486335d@suse.com>
 <70170293-a9a7-282a-dde6-7ed73fc2da48@xen.org>
 <c15b1e7e-ed9c-b597-2fc1-b8cf89999c55@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <f14d147b-b218-a2ab-0b9e-06ece58d58e4@xen.org>
Date: Wed, 9 Dec 2020 09:53:24 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <c15b1e7e-ed9c-b597-2fc1-b8cf89999c55@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 03/12/2020 09:46, Jan Beulich wrote:
> On 02.12.2020 20:03, Julien Grall wrote:
>> On 23/11/2020 13:28, Jan Beulich wrote:
>>> The per-vCPU virq_lock, which is being held anyway, together with there
>>> not being any call to evtchn_port_set_pending() when v->virq_to_evtchn[]
>>> is zero, provide sufficient guarantees.
>>
>> I agree that the per-vCPU virq_lock is going to be sufficient, however
>> dropping the lock also means the event channel locking is more complex
>> to understand (the long comment that was added proves it).
>>
>> In fact, the locking in the event channel code was already proven to be
>> quite fragile, therefore I think this patch is not worth the risk.
> 
> I agree this is a very reasonable position to take. I probably
> would even have remained silent if in the meantime the
> spin_lock()s there hadn't changed to read_trylock()s. I really
> think we want to limit this unusual locking model to where we
> strictly need it.

While I appreciate that the current locking is unusual, we should follow 
the same model everywhere rather than having a dozen ways to lock the 
same structure.

The rationale is quite simple: if there is only one way to lock a 
structure, then there is less chance of getting it wrong.

The only reason I would be willing to diverge from this statement is if 
performance is significantly improved.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 10:11:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 10:11:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48083.85056 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmwRN-0005hT-6a; Wed, 09 Dec 2020 10:11:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48083.85056; Wed, 09 Dec 2020 10:11:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmwRN-0005hM-3H; Wed, 09 Dec 2020 10:11:21 +0000
Received: by outflank-mailman (input) for mailman id 48083;
 Wed, 09 Dec 2020 10:11:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sDS6=FN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kmwRL-0005hH-28
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 10:11:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c0210185-84b6-4c17-a603-10f535e9e90c;
 Wed, 09 Dec 2020 10:11:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AC6CFAC94;
 Wed,  9 Dec 2020 10:11:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0210185-84b6-4c17-a603-10f535e9e90c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607508676; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=lp/BX/zYvTTJSC9VfCvCXfJRULVfbr1TBpetvpV+Cfo=;
	b=CDXncT75s8PkuuzKBjWJ+/rdiXZ+WdhzCg1/7T/SWjXNpddihYd5xSJe/k16ukZGzghXAm
	oQ5wTadzIhPc0MvnfNb9/0We/IXiQU3HkCllxc5/dTlKxjXA97f89OJGtaIHvVw/Sf4Cfm
	cO4sAMeS2KWjwBa/YpnQ4QkFWy+NO7Y=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH] xen/xenbus: make xs_talkv() interruptible for SIGKILL
Date: Wed,  9 Dec 2020 11:11:14 +0100
Message-Id: <20201209101114.31522-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When a process waits for any Xenstore action in the xenbus driver, the
wait should be interruptible by SIGKILL.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/xenbus/xenbus_xs.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/xenbus/xenbus_xs.c b/drivers/xen/xenbus/xenbus_xs.c
index 3a06eb699f33..1f4d3593e89e 100644
--- a/drivers/xen/xenbus/xenbus_xs.c
+++ b/drivers/xen/xenbus/xenbus_xs.c
@@ -205,8 +205,15 @@ static bool test_reply(struct xb_req_data *req)
 
 static void *read_reply(struct xb_req_data *req)
 {
+	int ret;
+
 	do {
-		wait_event(req->wq, test_reply(req));
+		ret = wait_event_interruptible(req->wq, test_reply(req));
+
+		if (ret == -ERESTARTSYS && fatal_signal_pending(current)) {
+			req->msg.type = XS_ERROR;
+			return ERR_PTR(-EINTR);
+		}
 
 		if (!xenbus_ok())
 			/*
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 10:15:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 10:15:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48089.85069 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmwVI-0005so-OC; Wed, 09 Dec 2020 10:15:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48089.85069; Wed, 09 Dec 2020 10:15:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmwVI-0005sh-Ke; Wed, 09 Dec 2020 10:15:24 +0000
Received: by outflank-mailman (input) for mailman id 48089;
 Wed, 09 Dec 2020 10:15:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XdhY=FN=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kmwVH-0005sc-Rn
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 10:15:23 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9a19bfb3-1449-46c5-abf3-6a32558b0c5f;
 Wed, 09 Dec 2020 10:15:22 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0B9AFHiF026674
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Wed, 9 Dec 2020 11:15:18 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id A99752E93A2; Wed,  9 Dec 2020 11:15:12 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a19bfb3-1449-46c5-abf3-6a32558b0c5f
Date: Wed, 9 Dec 2020 11:15:12 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: dom0 PV looping on search_pre_exception_table()
Message-ID: <20201209101512.GA1299@antioche.eu.org>
References: <20201208175738.GA3390@antioche.eu.org>
 <e73cc71d-c1a6-87c8-1b82-5d70d4f52eaa@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <e73cc71d-c1a6-87c8-1b82-5d70d4f52eaa@citrix.com>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Wed, 09 Dec 2020 11:15:18 +0100 (MET)

On Tue, Dec 08, 2020 at 06:13:46PM +0000, Andrew Cooper wrote:
> On 08/12/2020 17:57, Manuel Bouyer wrote:
> > Hello,
> > for the first time I tried to boot a xen kernel from devel with
> > a NetBSD PV dom0. The kernel boots, but when the first userland process
> > is launched, it seems to enter a loop involving search_pre_exception_table()
> > (I see an endless stream from the dprintk() at arch/x86/extable.c:202)
> >
> > With xen 4.13 I see it, but exactly once:
> > (XEN) extable.c:202: Pre-exception: ffff82d08038c304 -> ffff82d08038c8c8
> >
> > with devel:
> > (XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
> > (XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
> > (XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
> > (XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
> > (XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
> > [...]
> >
> > the dom0 kernel is the same.
> >
> > At first glance it looks like a fault in the guest is not handled as it should be,
> > and the userland process keeps faulting on the same address.
> >
> > Any idea what to look at?
> 
> That is a recurring fault on IRET back to guest context, and is
> probably caused by some unwise-in-hindsight cleanup which doesn't
> escalate the failure to the failsafe callback.
> 
> This ought to give something useful to debug with:

thanks, I got:
(XEN) IRET fault: #PF[0000]                                                 
(XEN) domain_crash called from extable.c:209                                
(XEN) Domain 0 (vcpu#0) crashed on cpu#0:                                   
(XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:   C   ]----       
(XEN) CPU:    0                                                             
(XEN) RIP:    0047:[<00007f7e184007d0>]                                     
(XEN) RFLAGS: 0000000000000202   EM: 0   CONTEXT: pv guest (d0v0)           
(XEN) rax: ffff82d04038c309   rbx: 0000000000000000   rcx: 000000000000e008 
(XEN) rdx: 0000000000010086   rsi: ffff83007fcb7f78   rdi: 000000000000e010 
(XEN) rbp: 0000000000000000   rsp: 00007f7fff53e3e0   r8:  0000000e00000000 
(XEN) r9:  0000000000000000   r10: 0000000000000000   r11: 0000000000000000 
(XEN) r12: 0000000000000000   r13: 0000000000000000   r14: 0000000000000000 
(XEN) r15: 0000000000000000   cr0: 0000000080050033   cr4: 0000000000002660 
(XEN) cr3: 0000000079cdb000   cr2: 00007f7fff53e3e0                         
(XEN) fsb: 0000000000000000   gsb: 0000000000000000   gss: ffffffff80cf2dc0    
(XEN) ds: 0023   es: 0023   fs: 0000   gs: 0000   ss: 003f   cs: 0047       
(XEN) Guest stack trace from rsp=00007f7fff53e3e0:          
(XEN)    0000000000000001 00007f7fff53e8f8 0000000000000000 0000000000000000
(XEN)    0000000000000003 000000004b600040 0000000000000004 0000000000000038
(XEN)    0000000000000005 0000000000000008 0000000000000006 0000000000001000
(XEN)    0000000000000007 00007f7e18400000 0000000000000008 0000000000000000
(XEN)    0000000000000009 000000004b601cd0 00000000000007d0 0000000000000000
(XEN)    00000000000007d1 0000000000000000 00000000000007d2 0000000000000000
(XEN)    00000000000007d3 0000000000000000 000000000000000d 00007f7fff53f000
(XEN)    00000000000007de 00007f7fff53e4e0 0000000000000000 0000000000000000
(XEN)    6e692f6e6962732f 0000000000007469 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) Hardware Dom0 crashed: rebooting machine in 5 seconds.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 11:17:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 11:17:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48137.85106 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmxSu-0003QK-2U; Wed, 09 Dec 2020 11:17:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48137.85106; Wed, 09 Dec 2020 11:17:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmxSt-0003QD-V2; Wed, 09 Dec 2020 11:16:59 +0000
Received: by outflank-mailman (input) for mailman id 48137;
 Wed, 09 Dec 2020 11:16:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kmxSs-0003Q8-1q
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 11:16:58 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmxSq-0005rB-07; Wed, 09 Dec 2020 11:16:56 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmxSp-0003EZ-N4; Wed, 09 Dec 2020 11:16:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=LMVuGsN2J17uA7dGWuaJGpfZYc8eBmei0dEK7x+yVzM=; b=ayjfQT22rZNujy9mpFgu1SekJ1
	RPxQ/vlQns4WVyhRomGGK/Mg/a2CDgF7sILKhdph05lF2FilgnbO9/7qhAadeJnpxGtPJLHEpMRW0
	LN1Dpq0IQu6lRbB5BI9MaBhMBAtMN9vDo31X9GLwwHq6YlO8hWfsuLOcoCtX6njInJWc=;
Subject: Re: [PATCH v3 3/5] evtchn: convert vIRQ lock to an r/w one
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d2461bd6-fb2f-447f-11c6-bd8afd573d7b@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <c11437e3-58f2-6dd3-45ae-9fd5f1d97e59@xen.org>
Date: Wed, 9 Dec 2020 11:16:53 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <d2461bd6-fb2f-447f-11c6-bd8afd573d7b@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 23/11/2020 13:28, Jan Beulich wrote:
> There's no need to serialize all sending of vIRQ-s; all that's needed
> is serialization against the closing of the respective event channels
> (so far by means of a barrier). To facilitate the conversion, switch to
> an ordinary write locked region in evtchn_close().
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 11:20:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 11:20:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48142.85118 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmxWU-0004Hm-IR; Wed, 09 Dec 2020 11:20:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48142.85118; Wed, 09 Dec 2020 11:20:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmxWU-0004Hf-FC; Wed, 09 Dec 2020 11:20:42 +0000
Received: by outflank-mailman (input) for mailman id 48142;
 Wed, 09 Dec 2020 11:20:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmxWT-0004HX-FV; Wed, 09 Dec 2020 11:20:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmxWT-0005ud-AT; Wed, 09 Dec 2020 11:20:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmxWS-00069i-U6; Wed, 09 Dec 2020 11:20:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kmxWS-0000Iu-Tb; Wed, 09 Dec 2020 11:20:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CGHA4d+ZTa2q686b9Fw0xDipKnAZt3DnCfkUmBOtkZI=; b=iq1BwG8e4HOJzZL+5hbYdjfW9l
	qmN+MLR+ORWWjNCsdmUB+sEzau0yHF0GMO0bIdJ6cKzznFT3arTRy9tn7jNbkmEfMkIfmfx1fIdx3
	OM+xQg1g4ZQUJs7nkl0F/AdwkPYVEYdaye7EIAxiZ9yHDRwv84Ky8VTUu4hpBgKjlntA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157335-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157335: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=777e3590f154e6a8af560dd318b9465fa168db20
X-Osstest-Versions-That:
    xen=777e3590f154e6a8af560dd318b9465fa168db20
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 09 Dec 2020 11:20:40 +0000

flight 157335 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157335/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 157327
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat  fail pass in 157327

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157327
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157327
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157327
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157327
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157327
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157327
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157327
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157327
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157327
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157327
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157327
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  777e3590f154e6a8af560dd318b9465fa168db20
baseline version:
 xen                  777e3590f154e6a8af560dd318b9465fa168db20

Last test of basis   157335  2020-12-09 01:53:27 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 11:34:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 11:34:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48151.85133 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmxjO-0005U7-1M; Wed, 09 Dec 2020 11:34:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48151.85133; Wed, 09 Dec 2020 11:34:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmxjN-0005U0-UN; Wed, 09 Dec 2020 11:34:01 +0000
Received: by outflank-mailman (input) for mailman id 48151;
 Wed, 09 Dec 2020 11:34:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiOm=FN=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kmxjM-0005Tv-3y
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 11:34:00 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.75]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fbfc12ee-16db-4e23-8435-1ab9faf9db87;
 Wed, 09 Dec 2020 11:33:59 +0000 (UTC)
Received: from MR2P264CA0014.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500:1::26) by
 AM0PR08MB3921.eurprd08.prod.outlook.com (2603:10a6:208:130::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.12; Wed, 9 Dec
 2020 11:33:55 +0000
Received: from VE1EUR03FT043.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:500:1:cafe::c9) by MR2P264CA0014.outlook.office365.com
 (2603:10a6:500:1::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.21 via Frontend
 Transport; Wed, 9 Dec 2020 11:33:55 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT043.mail.protection.outlook.com (10.152.19.122) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12 via Frontend Transport; Wed, 9 Dec 2020 11:33:55 +0000
Received: ("Tessian outbound 8b6e0bb22f1c:v71");
 Wed, 09 Dec 2020 11:33:54 +0000
Received: from 524ac13b9a38.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B622475F-6622-4468-AD04-47D7780CD928.1; 
 Wed, 09 Dec 2020 11:33:49 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 524ac13b9a38.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 09 Dec 2020 11:33:49 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3852.eurprd08.prod.outlook.com (2603:10a6:10:7f::31) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.18; Wed, 9 Dec
 2020 11:33:47 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3632.023; Wed, 9 Dec 2020
 11:33:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fbfc12ee-16db-4e23-8435-1ab9faf9db87
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EnuAxnuXMlv2SnleMKIqhOaNxzyLAcdqVkNdCcd9Qwo=;
 b=ZJtid9wDnXkvpVGNk60QAjIh2efA8RO0amSHERAXfL8DiG7iCH50EboIAbamXcqoWrCubBwptXJXRYsXqGcv7ryLV/BOOBtzfQ5hWBDYgrzuO0VOLaJPBWeT1ia9LxK2vVw8KGGwGvLVDlmjiKNRzMtMWjCOniQ8RWZtkDzUpks=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 530a1bad736a405e
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jgXYigpARm0z45LWPuzSWTnEcS8NHiqbhIV04qaWfA1WTsZhptEvuGGihYobYvkUKvJUDou517cjqE4SZyZwyx5Okz7Q9JXTCfz+iv2b8NuP3znMI3B2RtT8SMZkfs1ZqRkaNlEG1c2xKscy/EzS+tDjb4grcgNTmHcdDaIr2VKpfvbRQ86JyvWcaUZu8xUGNo8iL6V8TGOoeDryqlgtu2EGJKSw9bCEe/4BwqLXoHHKHlcfTdtnNufbRxBXzEgVE+ES64GJBaWDCtV7UfgISOkQOatxJCDyioA3MHUarUvg7FcMuY5S8/m3k1aX56tQyLADOHI6DXqYXcUR0Xf3JA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EnuAxnuXMlv2SnleMKIqhOaNxzyLAcdqVkNdCcd9Qwo=;
 b=Fdj2kaz+52+uX6CmNpc/ih1oJFwNyvKOszLCOVYI8R1BJwp2IKJ7IxGFBo/ZBfRefUt/KjefrHfktEY4WU8vjYWgtRIpu6jAnNU0wMEzGcb6fid5f5b2MEDhuaMVY1QuVYe4rP3UV9sXaMbWp6sJ2GOa8DCAHjwjWoRZn/vE15b8JOGGV/6FTJp7ShZlhLmNCjGuP1AhTv7gbpd/65n47x17pwUZLQRKUq/LApvAVCDAAOycFvu3fUwTQL1qsjC34NOmkmsJ4N4SSI1TFVSN116F2QuTdSZeh5MDoJNESmIkcgSen9ejYmKZmMjENKMf46/8uuKOVU5lya6CoJIKGw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EnuAxnuXMlv2SnleMKIqhOaNxzyLAcdqVkNdCcd9Qwo=;
 b=ZJtid9wDnXkvpVGNk60QAjIh2efA8RO0amSHERAXfL8DiG7iCH50EboIAbamXcqoWrCubBwptXJXRYsXqGcv7ryLV/BOOBtzfQ5hWBDYgrzuO0VOLaJPBWeT1ia9LxK2vVw8KGGwGvLVDlmjiKNRzMtMWjCOniQ8RWZtkDzUpks=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [PATCH v3 0/8] xen: beginnings of moving library-like code into
 an archive
Thread-Topic: [PATCH v3 0/8] xen: beginnings of moving library-like code into
 an archive
Thread-Index: AQHWwaul6ZNQ27c670WmXpdnGV75B6nuuvgA
Date: Wed, 9 Dec 2020 11:33:47 +0000
Message-ID: <509B2BDB-A226-4328-A75E-33AAF74BE45B@arm.com>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
In-Reply-To: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 8c47407f-8123-4801-afed-08d89c364fe3
x-ms-traffictypediagnostic: DB7PR08MB3852:|AM0PR08MB3921:
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB3921F5DDDE7F5DF5DA2B90379DCC0@AM0PR08MB3921.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 1vTnTcBNqHQ8eDTTJdpKSkESxTmFTKs6UBA56OlyKhwjJ9K+dIvfL1L8F6uG16XSdbb94THp2fT007JRRf6I4W+LeFbwrghdkbYk1s4hr/hYsGO/BX6DxX8AkSQXhVmsSPMd8+MzvAtHKxU2R4E3sPGXm5Ws+Uu5pD3Cvk/bT6j5RC0eru3SKz9LMPJPkupvb3b9ST0s0JzNntVOhmALSH9F0Ne7Gy1USeHKCk/fz59hhw4JsEvlGlT/bIspL2tsywmQw1O9r2dOuhrHjsu9Njs2yst9te9omzn3DTNF42wFz8bsyBP+GDDrqDmZHl3wT7NrhHYRwA4OqFicsakS7fKnzYBUCi2pqg7At0l1DHFgmdsyvsSyjCmZCCHklDnW
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(366004)(346002)(396003)(39860400002)(136003)(186003)(2616005)(8676002)(8936002)(6486002)(6916009)(2906002)(5660300002)(86362001)(6506007)(54906003)(33656002)(6512007)(71200400001)(478600001)(316002)(66446008)(91956017)(76116006)(83380400001)(66556008)(64756008)(66946007)(66476007)(53546011)(36756003)(4326008)(26005)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?us-ascii?Q?R2+l+ycms7e0wS/+VL5kxBq7d7KtSIiTO/zrNBTi6KLKqbe8D5veef9B7+Fb?=
 =?us-ascii?Q?igb38Org+Y3TIVeI3aieAA62R7GBCzLixhcrCVP6/Sr2rtyzL9JL4TGyR/m1?=
 =?us-ascii?Q?rXaHVfAruns69i1+LuCPfQ0nPIcvVhthqQTbos+77lP9uEgREd4qs8Y2HFlR?=
 =?us-ascii?Q?4MDAD7J6syvCXDCHhEIhbdyimyG9Ow9vTdIOrRjEGCI1SJOBdvbFzdfnNhh1?=
 =?us-ascii?Q?UHZWMRN4uKAfLMAc76ET4VNebqwWknsVvLycGkQjVMM3uYIuSdlYOPgD2jZ3?=
 =?us-ascii?Q?ZeNBqV0zHrOYh0nRhV5MQB6c7MHxPSg1qlUU6pRlBSmazOMgkzt30uLugdgm?=
 =?us-ascii?Q?vjxtIaxjRAkLKkGOCf2/qLu7iSQgbAHROaJ51qXQXEVVrE1LFcPkdeTamZRn?=
 =?us-ascii?Q?CiDoAtXcb5CV4IDL+hQrj6yV9N4RhMN74jV8QQ6XodJVNYPibpswUIq61wuA?=
 =?us-ascii?Q?JU+5yT8TmqKWYCe9A+KB1Mwkw3V7v4oZhCMXgpsMLg/PyOTlnbuEPw8A54og?=
 =?us-ascii?Q?/wuR+kGSap7kz8Nir6GTSVrxPPGkmoeJ958oHMoaEDg5JZ3IFs3hs3LqDJiw?=
 =?us-ascii?Q?ZdoHoFB3Dj/oTdEzr/ns7d33vvbBeocvhkoxyJKL4Dpha9K0RSUw3KbVawPu?=
 =?us-ascii?Q?3fiJSWUwX/mNj3Z4Jgc9i4Dd0UgM3NN8rbzhwkiwSws3ktD9KukOBkDo61q2?=
 =?us-ascii?Q?4h7jkC2hUgmYeV4vOKurT1QPFkUV2IZsCllsOlwIPDZ4Oda3YB3Jlscp49X/?=
 =?us-ascii?Q?77a+pPOAoEF4JYczal7C3D+iTVW+O4q4cTGvMt5AS4ISvtck3NV19PxVI0iv?=
 =?us-ascii?Q?5AUua1Y6/3kqvxSnglrB7Qt7WttC87JfiNJ+onqeulfD4VLvl68EafvXvCRK?=
 =?us-ascii?Q?YBKOmbqESXWhilZZfhsqrwks1Lc3g0q2d/FisikPr+frhcTX7h4CiM4u4oxW?=
 =?us-ascii?Q?8FiFz8TtrjSrnGqL5sUCn4w48/eJfO7PGb83UEopqec/G5yOsiN4BOq52KFo?=
 =?us-ascii?Q?Vn4F?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <54E70B608C2F28458F8B802E586752F6@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3852
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT043.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	beb203f3-a084-40fe-afdf-08d89c364b53
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	+ciLPrGtJbFyxxeiVApQdEBU11lzQViIrXNqmDdElskr4ekdbz8kk++nOOs9mfaWthAsvsUXOmMNIhxEsngobLs+r6TcKQLAy8+h3Hg/+xkM26pxdMQF7P0tScmLzGpkows+DG70BYj9lBRCFh/g6+Ak/S6TbvscMBEV0C1Xod5TBFTCfPbx65BWsriYjegcsuYM9H5g8N9gqc+FOI24qBQYNE5snWIL+35iizldUp95Mk95Chfbg8K63oG35219QHcU6kV56YBYINwI+6UFeUFgfgvBxU9wylvyqMwTVU1ZwDdOubgCpBeRn+7wTuYabUDWAIPDHG01V+MmqFtDyRU6vmNTxD9O3DoOy4tDIB7zWXmzeavyCcH+QhPtbJOvzzOAEiKAsOi1Z/wNLYKEmA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(136003)(346002)(46966005)(54906003)(6506007)(81166007)(70206006)(70586007)(6862004)(508600001)(356005)(47076004)(336012)(186003)(36756003)(83380400001)(8676002)(26005)(8936002)(5660300002)(86362001)(107886003)(53546011)(2906002)(33656002)(6486002)(4326008)(2616005)(82310400003)(6512007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Dec 2020 11:33:55.4134
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8c47407f-8123-4801-afed-08d89c364fe3
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT043.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3921

Hi Jan,

I will review this today, sorry for the delay.

Regards
Bertrand

> On 23 Nov 2020, at 15:16, Jan Beulich <jbeulich@suse.com> wrote:
>
> In a few cases we link in library-like functions when they're not
> actually needed. While we could use Kconfig options for each one
> of them, I think the better approach for such generic code is to
> build it always (thus making sure a build issue can't be introduced
> for these in any however exotic configuration) and then put it into
> an archive, for the linker to pick up as needed. The series here
> presents a first few tiny steps towards such a goal.
>
> Note that we can't use thin archives yet, due to our tool chain
> (binutils) baseline being too low.
>
> Further almost immediate steps I'd like to take if the approach
> meets no opposition are
> - split and move the rest of common/lib.c,
> - split and move common/string.c, dropping the need for all the
>  __HAVE_ARCH_* (implying possible per-arch archives then need to
>  be specified ahead of lib/lib.a on the linker command lines),
> - move common/libelf/ and common/libfdt/.
>
> v3 has a new 1st patch and some review feedback addressed. See
> individual patches.
>
> 1: xen: fix build when $(obj-y) consists of just blanks
> 2: lib: collect library files in an archive
> 3: lib: move list sorting code
> 4: lib: move parse_size_and_unit()
> 5: lib: move init_constructors()
> 6: lib: move rbtree code
> 7: lib: move bsearch code
> 8: lib: move sort code
>
> Jan
>



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 11:34:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 11:34:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48155.85145 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmxk0-0005Zx-Bo; Wed, 09 Dec 2020 11:34:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48155.85145; Wed, 09 Dec 2020 11:34:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmxk0-0005Zq-7l; Wed, 09 Dec 2020 11:34:40 +0000
Received: by outflank-mailman (input) for mailman id 48155;
 Wed, 09 Dec 2020 11:34:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiOm=FN=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kmxjy-0005Yz-5U
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 11:34:38 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.42]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e5c7f4ce-e73e-4392-adce-96e34b14a115;
 Wed, 09 Dec 2020 11:34:37 +0000 (UTC)
Received: from AM5PR1001CA0026.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:206:2::39)
 by AM9PR08MB6308.eurprd08.prod.outlook.com (2603:10a6:20b:287::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.21; Wed, 9 Dec
 2020 11:34:35 +0000
Received: from AM5EUR03FT053.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:2:cafe::9) by AM5PR1001CA0026.outlook.office365.com
 (2603:10a6:206:2::39) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.12 via Frontend
 Transport; Wed, 9 Dec 2020 11:34:35 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT053.mail.protection.outlook.com (10.152.16.210) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12 via Frontend Transport; Wed, 9 Dec 2020 11:34:34 +0000
Received: ("Tessian outbound 6af064f543d4:v71");
 Wed, 09 Dec 2020 11:34:34 +0000
Received: from 42170922e995.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 F1B8FA4A-1F39-49D7-A19B-47EBAA7172F1.1; 
 Wed, 09 Dec 2020 11:34:19 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 42170922e995.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 09 Dec 2020 11:34:19 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB6075.eurprd08.prod.outlook.com (2603:10a6:10:207::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.18; Wed, 9 Dec
 2020 11:34:17 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3632.023; Wed, 9 Dec 2020
 11:34:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e5c7f4ce-e73e-4392-adce-96e34b14a115
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cCsIW8HwRg6tCtIHiltBiyAFuFodchcZeoGi09AQt48=;
 b=hxp2ZWsG+PmVq3rsDcNNnjcKJk1oHIQuBZYgaoakSRZOIVpwoIyBFE9gMfP16WqqqmldpDIUpaTV7kIVnVqoHwpd36+KbX8Q31e/6RidSOZ4b+kxA3ahaxaGun0w1MswCSt8mhaZZoTJvUAMreXhKWHFZ7qDi5Y5WEXfbr8tcc0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 85614dc067c1c540
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BDh7hxQfl/xiJ8iH9rVbvQ2HHWC4PjNSzoSRJO2JyAjip2asYg9QDyFl6D+siZ6revU0FSRAWvm2ACMNNhUP+GgCNL852vzvUUWQbhvsXdWBsT3CkLa1mvBzZTnD4l/8VBdRLYGwo77e/90mCkpWQukf5e9nS7Xzm1rM4ks/cL3m24cs48hpSTrrig9HUaXCapl7KAFu3y+iMpelRkNc2wB+dpk2ZvZPi/UAXpC6jFM54+oHAGHC2D//4KI2i2mNuH1QdFy2C40NvwF0iLd6VO5WrpTmzJxsMgpC9HUSYVb4WwS5Cs6gY/lAOpUqpXOvjlY0O74miC4YcMbOYQYSeA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cCsIW8HwRg6tCtIHiltBiyAFuFodchcZeoGi09AQt48=;
 b=Qeuczp/PGWCrp2eioY46nBT+Bo5u0fDjPZ1gxsQMS5Bbn7FLy/kdiOmZMNTNnIoh4BM2kzd0my+h9UDMc3JaJyMuDlVWj7fwAQSl+Q2W5XwT6q14cbzVG7MrP2y8X9avcKOE1iJfdPl6a68XESsclWIqnMfJfEg6TOyY5CoDLHncv8nWwyVaEfHbWDizuj0+QTfzuz00J9dBh1JN0Owu1WSbxuA14qv5f7AZpjYirtWg5E4NUYA5cgahtvtAHi57fsV/OF51lPDl2t7EKluBskKI4gE7A3Q0XmQdxHbAiqZj0oXeAIga/puAQElqpgLhfPcN6lPsGXqXm4VRHisYrw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cCsIW8HwRg6tCtIHiltBiyAFuFodchcZeoGi09AQt48=;
 b=hxp2ZWsG+PmVq3rsDcNNnjcKJk1oHIQuBZYgaoakSRZOIVpwoIyBFE9gMfP16WqqqmldpDIUpaTV7kIVnVqoHwpd36+KbX8Q31e/6RidSOZ4b+kxA3ahaxaGun0w1MswCSt8mhaZZoTJvUAMreXhKWHFZ7qDi5Y5WEXfbr8tcc0=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [PATCH v3 1/8] xen: fix build when $(obj-y) consists of just
 blanks
Thread-Topic: [PATCH v3 1/8] xen: fix build when $(obj-y) consists of just
 blanks
Thread-Index: AQHWwaxOoKHklwCvRUixoNq5gGL6M6nuuxuA
Date: Wed, 9 Dec 2020 11:34:17 +0000
Message-ID: <371B84F7-C2E7-48BA-8591-DC6F95D4412F@arm.com>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
 <511be84d-9a13-17ae-f3d9-d6daf9c02711@suse.com>
In-Reply-To: <511be84d-9a13-17ae-f3d9-d6daf9c02711@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 804ea879-5555-415b-978e-08d89c36676d
x-ms-traffictypediagnostic: DBBPR08MB6075:|AM9PR08MB6308:
X-Microsoft-Antispam-PRVS:
	<AM9PR08MB6308DA7366C772C6457778079DCC0@AM9PR08MB6308.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 IkQ6W+Ww6vyygOkPBuTz/D4Thi4wt5HnWM1QKSZtCepBcK31w5Nf1Qdfc/i5aD3RyOpZ33+Mp/8To7guCc3jx7/h0INJzNMH3rgzU7blKwcKTrgAE3oRLny+oKAQgi6mje8YIOAoH596VKivl0tcO6eT7Y8Fllwkfwory5VV9MkV52As/C9oWQ/XyzmX5PJm7CB2tw4VKjJ8afxkuONRCBVvDkBfN8LrPOxcZLqAKaJJaFbTufaELVMDo/6RMbu1SST8uRNDGIIDB4GjNJeNWLdc3pYwzkQbEQ5RnP+ebSOkqLjdhJ0pFEnOLzRe24xh61YqLZKmAMcLpn088ivwKj61wM5NMbR45Px6y7nL7/EXKe8uItpBzKjYlGCj/xsJ
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(136003)(366004)(376002)(396003)(346002)(6512007)(5660300002)(6486002)(8936002)(53546011)(66446008)(66556008)(71200400001)(36756003)(66476007)(26005)(6506007)(186003)(4326008)(91956017)(478600001)(8676002)(6916009)(66946007)(2616005)(76116006)(33656002)(64756008)(83380400001)(316002)(2906002)(86362001)(54906003)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?us-ascii?Q?en+4R7BIoAuG185h/FOT3FmCiY8vdhCd+ZKiuBeivAadysYU+QfVXXpWVbQF?=
 =?us-ascii?Q?iT7MtDHJ5qBKmfvM2U6X8pAO+JPk78Gatqcx5cB0/mfE8AZVF5YMchLZDSH2?=
 =?us-ascii?Q?fmf7gAPqvOHd+wgd+NB+HFDSM4DGoDh1WJgy21NtKXcjeH35SYlqAQBVGInB?=
 =?us-ascii?Q?HrKKcaxL6i7pO4OQ1McIHHTtwA0II6nY+UorbcmEEAZQBaB8tRDyDc9Pevpa?=
 =?us-ascii?Q?+9yU3mZKcy6Fm5PJlWp1g43SNEv+xOTIYOBa0qmiNJ0NIWn5mD9opnZjUayf?=
 =?us-ascii?Q?ClzCTBNer0FXXKF79U/4KdIAtJiEBkN6qz+g7ktyTasMCaAhW90dcE8Adme9?=
 =?us-ascii?Q?ZGWWtUp8WCjLXXnet9CqFr8PsmuqFHPUxFhpZtnwOGhimOHrUEye9RycEjp3?=
 =?us-ascii?Q?I3n0hnpF65MM+dEmblRt7fnF1tSo/6ZO6mpJFYpsl9MzSIx9kIAMYLibw3Dd?=
 =?us-ascii?Q?mcXOCpytXt8q30U73sXnI2v0/83t2A+0tGOlgp3m1wpUh2A+MnwbuFPkAg9S?=
 =?us-ascii?Q?UliuYJv5F+qlu+IZ/G/x+KvgrWxZRDBGOmUUvL/S/NEofWlqzbXKHxv6aBiB?=
 =?us-ascii?Q?D+cXxl4osiDnCDwJ6UP0zc1XIGIyWNbXgYKNSjSAqmMu/D6Spg9+iJ6WaxII?=
 =?us-ascii?Q?Ie2eKrOkOsy6Ta47Fm0WAVTJjrK/W9PE0+Ny6EMJlEsm7MLqkAAgLIKdJ1mF?=
 =?us-ascii?Q?5IeAK0hBcectuItmAr8T2bX7sKWdD6NO36S4w+ABPQ/sfvZRraGEYITqB7eX?=
 =?us-ascii?Q?bxVBr3WX/snh/VFS8niYjGsjFx9UQx6jvlrh7dAF7f9GkvkB2+0RF8dQUlzg?=
 =?us-ascii?Q?7nHFZ5MtM5VUK/t3Ml+b6wOJXC++DkUDO9RPLVflTg/XdLH5C9P3KxyKcMqP?=
 =?us-ascii?Q?hWOcrsnC5wKMvMsleqmdOW71vOT3hWCX2ErpCRsH87jmzuG7Pu+NVI3bPZm6?=
 =?us-ascii?Q?UlxHEJ4JWKWmudJ0SXl6CUJClcB20y2qfxIUlfXZMNg4EmAboUmJ+vxdtcHZ?=
 =?us-ascii?Q?mT3K?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <01C5B200E868D84E96A147794A8900E6@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Hi Jan,

> On 23 Nov 2020, at 15:20, Jan Beulich <jbeulich@suse.com> wrote:
>
> This case can occur when combining empty lists
>
> obj-y :=
> ...
> obj-y += $(empty)
>
> or
>
> obj-y := $(empty) $(empty)
>
> where (only) blanks would accumulate. This was only a latent issue until
> now, but would become an active issue for Arm once lib/ gets populated
> with all respective objects going into the to-be-introduced lib.a.
>
> Also address a related issue at this occasion: When an empty built_in.o
> gets created, .built_in.o.d will have its dependencies recorded. If, on
> a subsequent incremental build, an actual constituent of built_in.o
> appeared, the $(filter-out ) would leave these recorded dependencies in
> place. But of course the linker won't know what to do with C header
> files. (The apparent alternative of avoiding passing $(c_flags) or
> $(a_flags) would not be reliable afaict, as among these flags there may
> be some affecting information conveyed via the object file to the
> linker. The linker, finding inconsistent flags across object files, may
> then error out.) Using just $(obj-y) won't work either: It breaks when
> the same object file is listed more than once.
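
[The blank accumulation and the $(strip)-based emptiness test described above can be reproduced with a throwaway Makefile; this is an illustrative sketch, not part of the patch, and the variable names merely mirror the ones in the commit message.]

```shell
# Sketch only: shows why `ifeq ($(obj-y),)` mis-fires once a blank has
# accumulated in the list, while `ifeq ($(strip $(obj-y)),)` still
# correctly reports it as empty.
cat > /tmp/strip-demo.mk <<'MK'
empty :=
obj-y :=
# Appending an empty value still inserts a separating blank:
obj-y += $(empty)

ifeq ($(obj-y),)
$(info plain ifeq: empty)
else
$(info plain ifeq: NOT empty)
endif

ifeq ($(strip $(obj-y)),)
$(info strip ifeq: empty)
else
$(info strip ifeq: NOT empty)
endif

all: ;
MK
make -s -f /tmp/strip-demo.mk
```

[The plain `ifeq` sees the stray blank and reports the list as non-empty; with `$(strip ...)` the test behaves as intended, which is exactly the fix the hunks below apply.]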
>
> Reported-by: Julien Grall <julien@xen.org>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> xen/Rules.mk | 10 +++++-----
> 1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/xen/Rules.mk b/xen/Rules.mk
> index 333e19bec343..d5e5eb33de39 100644
> --- a/xen/Rules.mk
> +++ b/xen/Rules.mk
> @@ -130,13 +130,13 @@ c_flags += $(CFLAGS-y)
> a_flags += $(CFLAGS-y) $(AFLAGS-y)
>
> built_in.o: $(obj-y) $(extra-y)
> -ifeq ($(obj-y),)
> +ifeq ($(strip $(obj-y)),)
> 	$(CC) $(c_flags) -c -x c /dev/null -o $@
> else
> ifeq ($(CONFIG_LTO),y)
> -	$(LD_LTO) -r -o $@ $(filter-out $(extra-y),$^)
> +	$(LD_LTO) -r -o $@ $(filter $(obj-y),$^)
> else
> -	$(LD) $(XEN_LDFLAGS) -r -o $@ $(filter-out $(extra-y),$^)
> +	$(LD) $(XEN_LDFLAGS) -r -o $@ $(filter $(obj-y),$^)
> endif
> endif
>
> @@ -145,10 +145,10 @@ targets += $(filter-out $(subdir-obj-y), $(obj-y)) $(extra-y)
> targets += $(MAKECMDGOALS)
>=20
> built_in_bin.o: $(obj-bin-y) $(extra-y)
> -ifeq ($(obj-bin-y),)
> +ifeq ($(strip $(obj-bin-y)),)
> 	$(CC) $(a_flags) -c -x assembler /dev/null -o $@
> else
> -	$(LD) $(XEN_LDFLAGS) -r -o $@ $(filter-out $(extra-y),$^)
> +	$(LD) $(XEN_LDFLAGS) -r -o $@ $(filter $(obj-bin-y),$^)
> endif
>
> # Force execution of pattern rules (for which PHONY cannot be directly used).
> --
> 2.22.0
>



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 11:38:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 11:38:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48163.85157 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmxna-0005mk-SR; Wed, 09 Dec 2020 11:38:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48163.85157; Wed, 09 Dec 2020 11:38:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmxna-0005md-PK; Wed, 09 Dec 2020 11:38:22 +0000
Received: by outflank-mailman (input) for mailman id 48163;
 Wed, 09 Dec 2020 11:38:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiOm=FN=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kmxnZ-0005mY-Fq
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 11:38:21 +0000
Received: from FRA01-MR2-obe.outbound.protection.outlook.com (unknown
 [40.107.9.89]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c3f74adc-5408-4342-a952-f1d44500b9a1;
 Wed, 09 Dec 2020 11:38:19 +0000 (UTC)
Received: from AM6P194CA0085.EURP194.PROD.OUTLOOK.COM (2603:10a6:209:8f::26)
 by PR2PR08MB4780.eurprd08.prod.outlook.com (2603:10a6:101:1a::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.13; Wed, 9 Dec
 2020 11:38:17 +0000
Received: from VE1EUR03FT008.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8f:cafe::19) by AM6P194CA0085.outlook.office365.com
 (2603:10a6:209:8f::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.12 via Frontend
 Transport; Wed, 9 Dec 2020 11:38:16 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT008.mail.protection.outlook.com (10.152.18.75) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12 via Frontend Transport; Wed, 9 Dec 2020 11:38:16 +0000
Received: ("Tessian outbound 665ba7fbdfd9:v71");
 Wed, 09 Dec 2020 11:38:16 +0000
Received: from 076082c5ce7d.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 5C5A348F-2D42-45B5-B784-F43B837967F2.1; 
 Wed, 09 Dec 2020 11:38:00 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 076082c5ce7d.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 09 Dec 2020 11:38:00 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB6075.eurprd08.prod.outlook.com (2603:10a6:10:207::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.18; Wed, 9 Dec
 2020 11:37:59 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3632.023; Wed, 9 Dec 2020
 11:37:59 +0000
X-Inumbo-ID: c3f74adc-5408-4342-a952-f1d44500b9a1
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [PATCH v3 2/8] lib: collect library files in an archive
Thread-Topic: [PATCH v3 2/8] lib: collect library files in an archive
Thread-Index: AQHWwaxzwnlkqyYO/Ea2tJ3MHVbjvKnuvCIA
Date: Wed, 9 Dec 2020 11:37:59 +0000
Message-ID: <E7B41B4F-98F9-4C52-8549-F407D6FB8251@arm.com>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
 <21714b83-8619-5aa9-be5b-3015d05a26a4@suse.com>
In-Reply-To: <21714b83-8619-5aa9-be5b-3015d05a26a4@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
Content-Type: text/plain; charset="us-ascii"
Content-ID: <77984E269CF54740A87039565766E0AE@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Hi,

> On 23 Nov 2020, at 15:21, Jan Beulich <jbeulich@suse.com> wrote:
>
> In order to (subsequently) drop odd things like CONFIG_NEEDS_LIST_SORT
> just to avoid bloating binaries when only some arch-es and/or
> configurations need generic library routines, combine objects under lib/
> into an archive, out of which the linker can then pick the necessary
> objects.
>
> Note that we can't use thin archives just yet, until we've raised the
> minimum required binutils version suitably.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

with one remark...


> ---
> xen/Rules.mk          | 29 +++++++++++++++++++++++++----
> xen/arch/arm/Makefile |  6 +++---
> xen/arch/x86/Makefile |  8 ++++----
> xen/lib/Makefile      |  3 ++-
> 4 files changed, 34 insertions(+), 12 deletions(-)
>
> diff --git a/xen/Rules.mk b/xen/Rules.mk
> index d5e5eb33de39..aba6ca2a90f5 100644
> --- a/xen/Rules.mk
> +++ b/xen/Rules.mk
> @@ -41,12 +41,16 @@ ALL_OBJS-y               += $(BASEDIR)/xsm/built_in.o
> ALL_OBJS-y               += $(BASEDIR)/arch/$(TARGET_ARCH)/built_in.o
> ALL_OBJS-$(CONFIG_CRYPTO)   += $(BASEDIR)/crypto/built_in.o
>
> +ALL_LIBS-y               := $(BASEDIR)/lib/lib.a
> +
> # Initialise some variables
> +lib-y :=
> targets :=
> CFLAGS-y :=
> AFLAGS-y :=
>
> ALL_OBJS := $(ALL_OBJS-y)
> +ALL_LIBS := $(ALL_LIBS-y)
>
> SPECIAL_DATA_SECTIONS := rodata $(foreach a,1 2 4 8 16, \
>                                             $(foreach w,1 2 4, \
> @@ -60,7 +64,14 @@ include Makefile
> # ---------------------------------------------------------------------------
>
> quiet_cmd_ld = LD      $@
> -cmd_ld = $(LD) $(XEN_LDFLAGS) -r -o $@ $(real-prereqs)
> +cmd_ld = $(LD) $(XEN_LDFLAGS) -r -o $@ $(filter-out %.a,$(real-prereqs)) \
> +               --start-group $(filter %.a,$(real-prereqs)) --end-group

It might be a good idea to add a comment explaining why the start/end-group
is needed, so that someone does not change this back in the future.

Something like: put libraries between start/end group to have unused symbols
removed.

Cheers
Bertrand
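
[As an aside, the `lib-y := $(filter-out $(obj-y), $(sort $(lib-y)))` line in the hunk quoted below both deduplicates the list and drops objects already linked into built_in.o. An illustrative sketch, with invented object names:]

```shell
# Sketch: $(sort) sorts the word list and removes duplicates;
# $(filter-out) then drops anything already present in obj-y.
cat > /tmp/liby-demo.mk <<'MK'
obj-y := ctype.o
lib-y := ctype.o sort.o sort.o bsearch.o
lib-y := $(filter-out $(obj-y), $(sort $(lib-y)))
$(info lib-y = [$(lib-y)])
all: ;
MK
make -s -f /tmp/liby-demo.mk
```

[The duplicate sort.o collapses to one entry and ctype.o is filtered out, leaving only the objects that belong in lib.a.]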

> +
> +# Archive
> +# ---------------------------------------------------------------------------
> +
> +quiet_cmd_ar = AR      $@
> +cmd_ar = rm -f $@; $(AR) cPrs $@ $(real-prereqs)
>
> # Objcopy
> # ---------------------------------------------------------------------------
> @@ -86,6 +97,10 @@ obj-y    := $(patsubst %/, %/built_in.o, $(obj-y))
> # tell kbuild to descend
> subdir-obj-y := $(filter %/built_in.o, $(obj-y))
>
> +# Libraries are always collected in one lib file.
> +# Filter out objects already built-in
> +lib-y := $(filter-out $(obj-y), $(sort $(lib-y)))
> +
> $(filter %.init.o,$(obj-y) $(obj-bin-y) $(extra-y)): CFLAGS-y += -DINIT_SECTIONS_ONLY
>
> ifeq ($(CONFIG_COVERAGE),y)
> @@ -129,7 +144,7 @@ include $(BASEDIR)/arch/$(TARGET_ARCH)/Rules.mk
> c_flags += $(CFLAGS-y)
> a_flags += $(CFLAGS-y) $(AFLAGS-y)
>
> -built_in.o: $(obj-y) $(extra-y)
> +built_in.o: $(obj-y) $(if $(strip $(lib-y)),lib.a) $(extra-y)
> ifeq ($(strip $(obj-y)),)
> 	$(CC) $(c_flags) -c -x c /dev/null -o $@
> else
> @@ -140,8 +155,14 @@ else
> endif
> endif
>
> +lib.a: $(lib-y) FORCE
> +	$(call if_changed,ar)
> +
> targets += built_in.o
> -targets += $(filter-out $(subdir-obj-y), $(obj-y)) $(extra-y)
> +ifneq ($(strip $(lib-y)),)
> +targets += lib.a
> +endif
> +targets += $(filter-out $(subdir-obj-y), $(obj-y) $(lib-y)) $(extra-y)
> targets += $(MAKECMDGOALS)
>
> built_in_bin.o: $(obj-bin-y) $(extra-y)
> @@ -155,7 +176,7 @@ endif
> PHONY += FORCE
> FORCE:
>
> -%/built_in.o: FORCE
> +%/built_in.o %/lib.a: FORCE
> 	$(MAKE) -f $(BASEDIR)/Rules.mk -C $* built_in.o
>
> %/built_in_bin.o: FORCE
> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> index 296c5e68bbc3..612a83b315c8 100644
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -90,14 +90,14 @@ endif
>
> ifeq ($(CONFIG_LTO),y)
> # Gather all LTO objects together
> -prelink_lto.o: $(ALL_OBJS)
> -	$(LD_LTO) -r -o $@ $^
> +prelink_lto.o: $(ALL_OBJS) $(ALL_LIBS)
> +	$(LD_LTO) -r -o $@ $(filter-out %.a,$^) --start-group $(filter %.a,$^) --end-group
>
> # Link it with all the binary objects
> prelink.o: $(patsubst %/built_in.o,%/built_in_bin.o,$(ALL_OBJS)) prelink_lto.o
> 	$(call if_changed,ld)
> else
> -prelink.o: $(ALL_OBJS) FORCE
> +prelink.o: $(ALL_OBJS) $(ALL_LIBS) FORCE
> 	$(call if_changed,ld)
> endif
>
> diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
> index 9b368632fb43..8f2180485b2b 100644
> --- a/xen/arch/x86/Makefile
> +++ b/xen/arch/x86/Makefile
> @@ -132,8 +132,8 @@ EFI_OBJS-$(XEN_BUILD_EFI) := efi/relocs-dummy.o
>
> ifeq ($(CONFIG_LTO),y)
> # Gather all LTO objects together
> -prelink_lto.o: $(ALL_OBJS)
> -	$(LD_LTO) -r -o $@ $^
> +prelink_lto.o: $(ALL_OBJS) $(ALL_LIBS)
> +	$(LD_LTO) -r -o $@ $(filter-out %.a,$^) --start-group $(filter %.a,$^) --end-group
>
> # Link it with all the binary objects
> prelink.o: $(patsubst %/built_in.o,%/built_in_bin.o,$(ALL_OBJS)) prelink_lto.o $(EFI_OBJS-y) FORCE
> @@ -142,10 +142,10 @@ prelink.o: $(patsubst %/built_in.o,%/built_in_bin.o,$(ALL_OBJS)) prelink_lto.o $
> prelink-efi.o: $(patsubst %/built_in.o,%/built_in_bin.o,$(ALL_OBJS)) prelink_lto.o FORCE
> 	$(call if_changed,ld)
> else
> -prelink.o: $(ALL_OBJS) $(EFI_OBJS-y) FORCE
> +prelink.o: $(ALL_OBJS) $(ALL_LIBS) $(EFI_OBJS-y) FORCE
> 	$(call if_changed,ld)
>
> -prelink-efi.o: $(ALL_OBJS) FORCE
> +prelink-efi.o: $(ALL_OBJS) $(ALL_LIBS) FORCE
> 	$(call if_changed,ld)
> endif
>
> diff --git a/xen/lib/Makefile b/xen/lib/Makefile
> index 53b1da025e0d..b8814361d63e 100644
> --- a/xen/lib/Makefile
> +++ b/xen/lib/Makefile
> @@ -1,2 +1,3 @@
> -obj-y += ctype.o
> obj-$(CONFIG_X86) += x86/
> +
> +lib-y += ctype.o
>



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 11:39:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 11:39:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48168.85169 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmxoY-0005uU-Ch; Wed, 09 Dec 2020 11:39:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48168.85169; Wed, 09 Dec 2020 11:39:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmxoY-0005uN-8f; Wed, 09 Dec 2020 11:39:22 +0000
Received: by outflank-mailman (input) for mailman id 48168;
 Wed, 09 Dec 2020 11:39:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiOm=FN=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kmxoW-0005uF-Rm
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 11:39:20 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.78]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 858938e4-3ed7-45cd-8624-c3dbaa3c58b5;
 Wed, 09 Dec 2020 11:39:19 +0000 (UTC)
Received: from DB6P191CA0001.EURP191.PROD.OUTLOOK.COM (2603:10a6:6:28::11) by
 AM6PR08MB3510.eurprd08.prod.outlook.com (2603:10a6:20b:48::30) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12; Wed, 9 Dec 2020 11:39:17 +0000
Received: from DB5EUR03FT034.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:28:cafe::2) by DB6P191CA0001.outlook.office365.com
 (2603:10a6:6:28::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.12 via Frontend
 Transport; Wed, 9 Dec 2020 11:39:17 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT034.mail.protection.outlook.com (10.152.20.87) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12 via Frontend Transport; Wed, 9 Dec 2020 11:39:17 +0000
Received: ("Tessian outbound 8b6e0bb22f1c:v71");
 Wed, 09 Dec 2020 11:39:17 +0000
Received: from b97061682d90.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 CE20DFE9-1925-41A5-B5E3-14C962CC0C21.1; 
 Wed, 09 Dec 2020 11:39:01 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id b97061682d90.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 09 Dec 2020 11:39:01 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4249.eurprd08.prod.outlook.com (2603:10a6:10:c6::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.19; Wed, 9 Dec
 2020 11:39:00 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3632.023; Wed, 9 Dec 2020
 11:39:00 +0000
X-Inumbo-ID: 858938e4-3ed7-45cd-8624-c3dbaa3c58b5
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [PATCH v3 3/8] lib: move list sorting code
Thread-Topic: [PATCH v3 3/8] lib: move list sorting code
Thread-Index: AQHWwax1PUiQEQiqDESCXdsiG9mv36nuvGuA
Date: Wed, 9 Dec 2020 11:39:00 +0000
Message-ID: <1883EED6-2257-4B43-AD07-EA28601F6B96@arm.com>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
 <9e855f2f-c654-6515-ae4f-9c69859c1c88@suse.com>
In-Reply-To: <9e855f2f-c654-6515-ae4f-9c69859c1c88@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
Content-Type: text/plain; charset="us-ascii"
Content-ID: <10CCACDF08CF7947B69F55AC2FD1DB79@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4249
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	c3c823b4-5cd5-4dd6-572e-08d89c370572
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	en6BZbyaIu6bPv/cal06olGh7iOyCliVEYdqvrgBUEOfyKtbL++Tr5XKjnzE7dbikvJ8ptJ31JE6C4jTjfjt1PEGcpTN0t9BMp0cHiMu+DeUGR7A2b9XlcH09ad9z24M5EoJUuVOOko8lHZwDtjfYpe90JTlhR5IIqHc2b2JWhdZeqJtauQZTUdUv2KLZ1uwFHB5m0e2obyWuK6Vtu+uZGGZizYyKP6IHACMHzGCbKCpq0iS6HWSNImZJOsEA2cQ/w5NYW1S2wxKjXoolgBKRoHUbT0vmEaUN5Ldv8C0S+V6BmgiBKhrVd6BU0t3J4zxwZDBmnzmHJzUKz6jY6Pt+EIUKeN4jD6qObQDGr5F7NIqvEItksolsnkkWJnG3Dx8YUdcUKpqJXmK+2IaoZj8E+y6ixesVHi7ZnwKHh7Na3zqLk8rObFuCSLhGsxaHTWb4eRIF83WG6Wc+4W34CyI3HnugNwJafE0hYvR5PMxQs4=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(39860400002)(376002)(396003)(136003)(46966005)(2616005)(86362001)(82740400003)(336012)(4326008)(81166007)(107886003)(6506007)(6512007)(53546011)(26005)(316002)(6486002)(47076004)(36756003)(54906003)(2906002)(33656002)(478600001)(70206006)(5660300002)(70586007)(82310400003)(6862004)(186003)(8676002)(356005)(8936002)(83380400001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Dec 2020 11:39:17.2852
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 228589ee-aff9-43e4-f741-08d89c370faa
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3510



> On 23 Nov 2020, at 15:21, Jan Beulich <jbeulich@suse.com> wrote:
> 
> Build the source file always, as by putting it into an archive it still
> won't be linked into final binaries when not needed. This way possible
> build breakage will be easier to notice, and it's more consistent with
> us unconditionally building other library kind of code (e.g. sort() or
> bsearch()).
> 
> While moving the source file, take the opportunity and drop the
> pointless EXPORT_SYMBOL() and an unnecessary #include.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> xen/arch/arm/Kconfig                        | 4 +---
> xen/common/Kconfig                          | 3 ---
> xen/common/Makefile                         | 1 -
> xen/lib/Makefile                            | 1 +
> xen/{common/list_sort.c => lib/list-sort.c} | 2 --
> 5 files changed, 2 insertions(+), 9 deletions(-)
> rename xen/{common/list_sort.c => lib/list-sort.c} (98%)
> 
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index f5b1bcda0323..38b6c31ba5dd 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -56,9 +56,7 @@ config HVM
>         def_bool y
> 
> config NEW_VGIC
> -	bool
> -	prompt "Use new VGIC implementation"
> -	select NEEDS_LIST_SORT
> +	bool "Use new VGIC implementation"
> 	---help---
> 
> 	This is an alternative implementation of the ARM GIC interrupt
> diff --git a/xen/common/Kconfig b/xen/common/Kconfig
> index 3e2cf2508899..0661328a99e7 100644
> --- a/xen/common/Kconfig
> +++ b/xen/common/Kconfig
> @@ -66,9 +66,6 @@ config MEM_ACCESS
> config NEEDS_LIBELF
> 	bool
> 
> -config NEEDS_LIST_SORT
> -	bool
> -
> menu "Speculative hardening"
> 
> config SPECULATIVE_HARDEN_ARRAY
> diff --git a/xen/common/Makefile b/xen/common/Makefile
> index d109f279a490..332e7d667cec 100644
> --- a/xen/common/Makefile
> +++ b/xen/common/Makefile
> @@ -21,7 +21,6 @@ obj-y += keyhandler.o
> obj-$(CONFIG_KEXEC) += kexec.o
> obj-$(CONFIG_KEXEC) += kimage.o
> obj-y += lib.o
> -obj-$(CONFIG_NEEDS_LIST_SORT) += list_sort.o
> obj-$(CONFIG_LIVEPATCH) += livepatch.o livepatch_elf.o
> obj-$(CONFIG_MEM_ACCESS) += mem_access.o
> obj-y += memory.o
> diff --git a/xen/lib/Makefile b/xen/lib/Makefile
> index b8814361d63e..764f3624b5f9 100644
> --- a/xen/lib/Makefile
> +++ b/xen/lib/Makefile
> @@ -1,3 +1,4 @@
> obj-$(CONFIG_X86) += x86/
> 
> lib-y += ctype.o
> +lib-y += list-sort.o
> diff --git a/xen/common/list_sort.c b/xen/lib/list-sort.c
> similarity index 98%
> rename from xen/common/list_sort.c
> rename to xen/lib/list-sort.c
> index af2b2f6519f1..f8d8bbf28178 100644
> --- a/xen/common/list_sort.c
> +++ b/xen/lib/list-sort.c
> @@ -15,7 +15,6 @@
>  * this program; If not, see <http://www.gnu.org/licenses/>.
>  */
> 
> -#include <xen/lib.h>
> #include <xen/list.h>
> 
> #define MAX_LIST_LENGTH_BITS 20
> @@ -154,4 +153,3 @@ void list_sort(void *priv, struct list_head *head,
> 
> 	merge_and_restore_back_links(priv, cmp, head, part[max_lev], list);
> }
> -EXPORT_SYMBOL(list_sort);
> 
> 



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 11:40:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 11:40:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48173.85181 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmxpU-0006h8-Nd; Wed, 09 Dec 2020 11:40:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48173.85181; Wed, 09 Dec 2020 11:40:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmxpU-0006h1-KI; Wed, 09 Dec 2020 11:40:20 +0000
Received: by outflank-mailman (input) for mailman id 48173;
 Wed, 09 Dec 2020 11:40:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiOm=FN=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kmxpT-0006gv-3P
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 11:40:19 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.42]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dedb4548-2145-4fe8-b93f-cf89efe11e36;
 Wed, 09 Dec 2020 11:40:17 +0000 (UTC)
Received: from DB6P191CA0006.EURP191.PROD.OUTLOOK.COM (2603:10a6:6:28::16) by
 AM6PR08MB3959.eurprd08.prod.outlook.com (2603:10a6:20b:aa::18) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12; Wed, 9 Dec 2020 11:40:16 +0000
Received: from DB5EUR03FT056.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:28:cafe::75) by DB6P191CA0006.outlook.office365.com
 (2603:10a6:6:28::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.12 via Frontend
 Transport; Wed, 9 Dec 2020 11:40:16 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT056.mail.protection.outlook.com (10.152.21.124) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3632.17 via Frontend Transport; Wed, 9 Dec 2020 11:40:16 +0000
Received: ("Tessian outbound 76bd5a04122f:v71");
 Wed, 09 Dec 2020 11:40:16 +0000
Received: from 3918bbb3866e.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 25ED851B-E927-4A18-B01A-FEBAD5C3B5AA.1; 
 Wed, 09 Dec 2020 11:40:00 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 3918bbb3866e.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 09 Dec 2020 11:40:00 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB6075.eurprd08.prod.outlook.com (2603:10a6:10:207::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.18; Wed, 9 Dec
 2020 11:40:00 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3632.023; Wed, 9 Dec 2020
 11:40:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dedb4548-2145-4fe8-b93f-cf89efe11e36
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/KpoOcniabhfGTxW9b5H+Y2FnQw5rFkheUMJwStrqbo=;
 b=QJj/T3nA6HG1uxKFjsBIgO++LmxtYwY5hqSdMrTOK7ECBI09jsXDoxdNhSgKXvPRtOqNks+M0jVngpoa4fOREh/8m++Zw4+zRgmZwxXODrvJzTtVW8YP5ZRkpMsq1uKDLNq8rBFR3hNzUcyOF/p5mglNKfVX32/2uCPghaT9ppM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 41e6ce38710fa98f
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Jb9onrpQKvSfWl1Z4zXDpHqGiNXZhnTkbkyLPRbGGbvLxHkWL6nbi8awBnRBAWx3l52nYjQmUx5cB7p0NwWYtB2Py8efRoptfm+Od2EDaqAAVbYr/00rcVZDycb8bg05VlgvQcQOUXfkZx/HGaXc9VNuhjwovVzJZhJ0sNeeGnwkdXmkrJTTUiN5T78y22tpuWim7J0kuC6RuQRg8OGlbLFdd1pirP76i4O+PxgPXrrQGS6AMyOig5QY9fY/qiuCvIY9b3GPQdEpHey9sUamaGZszBGUFyEK+ezKDLQ5b8wHek/LbdVpyy00e2pYU0zzpi8jzLq4zdRpe7fjswFzgg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/KpoOcniabhfGTxW9b5H+Y2FnQw5rFkheUMJwStrqbo=;
 b=TgZ7MFG1dQ8ncpCPIbdzV4ELJGGz8rleiJ54Gd/GseZtGsowrqzkM6e6Fj62i5UhI99E8l0QFLzO90xiwCXxCSoMkf6GVMJFenO/75xu5DQgzW+sl+OWNAVydBZmOOZeXGlsAxnOyshDnNov6p2ci+NzmFVw2QCsiURubVW7eHN3zpH/U54CH/Ke79zU4zS8E96/R1UWbiqtyHAsZucogzqP6Hn8RcgN+WlKuRFtqW2c6CZ4LBMyAzg2i19pDHbN/iWOtdXOVG87pWQPtFpoBT63JzO8LBhd/bDJLtJOSTv9s5+q12/p9eSRGvWqV0EQ0JmuDLnRnOaLE5KSsQjQgA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/KpoOcniabhfGTxW9b5H+Y2FnQw5rFkheUMJwStrqbo=;
 b=QJj/T3nA6HG1uxKFjsBIgO++LmxtYwY5hqSdMrTOK7ECBI09jsXDoxdNhSgKXvPRtOqNks+M0jVngpoa4fOREh/8m++Zw4+zRgmZwxXODrvJzTtVW8YP5ZRkpMsq1uKDLNq8rBFR3hNzUcyOF/p5mglNKfVX32/2uCPghaT9ppM=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [PATCH v3 4/8] lib: move parse_size_and_unit()
Thread-Topic: [PATCH v3 4/8] lib: move parse_size_and_unit()
Thread-Index: AQHWwayC7RWAkHGkqkKPXBnWs+PFuqnuvLOA
Date: Wed, 9 Dec 2020 11:40:00 +0000
Message-ID: <F9B5D8B2-67CA-4912-97BE-5A980ED572EB@arm.com>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
 <489b3707-9e9c-184b-3e1f-1d28fd1fb0ee@suse.com>
In-Reply-To: <489b3707-9e9c-184b-3e1f-1d28fd1fb0ee@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: e9ed098d-576b-4398-aeb0-08d89c3732bc
x-ms-traffictypediagnostic: DBBPR08MB6075:|AM6PR08MB3959:
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB395964C22B97BF748C97EADE9DCC0@AM6PR08MB3959.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:187;OLM:187;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 MAFGFE1Smts5HoQZ5s7tyFdTM/WudcUIf6m2IYBm6Si+/zj9ttsoZALG/qkbxyhkwfRX2Gq0p4c9vb3psbRj9Htsw66yFs94O3KFJiE534T3sLP3QoVPvuVd0SvbmG3I38FXnxt8rE6PIVi3b1fSIQUFmmQU4XyqMX7FSTq3q8nFWIwPOCRCkNheiP2sRxZ6YWabpe86adqiiO2G08m2p8vU6H5kG6twzrPnAnKv6yQ1NhuTBOVMH9ciZgqIGtRSSlw3NF7PZT+S07r+Rg2TQ9CYEzVy7uC6ZHhXdMMHt0zfyPcko3uaBFENMIfQiydGMMcFbV7PJIuSa6m0FdvPw63+HnYfB1ZJAm8z+Yx1C0KHDcQH2aveHLJ1RxwGLsEw
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(366004)(136003)(376002)(64756008)(33656002)(83380400001)(6916009)(8676002)(66946007)(2616005)(76116006)(86362001)(54906003)(2906002)(53546011)(66556008)(71200400001)(66446008)(508600001)(8936002)(6486002)(6512007)(5660300002)(4326008)(91956017)(186003)(6506007)(36756003)(66476007)(26005)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?us-ascii?Q?AH0uqVPb473FX12hEWqug4/IjmeJa3Z/nChqHC+KBr1hquwSCnN7FSTulAVc?=
 =?us-ascii?Q?thHRI1RenGmP0DN0ldkrjn+knjw4SNpuUMX2T78Ut0qhDzgRlt7IV1zVi2wr?=
 =?us-ascii?Q?k8+mZa2RXjpy46trUocbWgNcZHJfrIPLCvR56jT//Y9/GC5F50XxKnCLvCcA?=
 =?us-ascii?Q?FceQuDD05xt8jyhqdlObjdDQRC7jlt5hx7mxtOQttBtuelSzqUH05E+403F4?=
 =?us-ascii?Q?f7+fpOMIklYKz/Oqsgwd0YNfRZz2rGeZMOERV2rptGikrmLfDEJ9rjA+p+cF?=
 =?us-ascii?Q?1LtdPnsLmUdh502QRJFmj5dmUXJnLwxaxnWHK3dDt3bE3oMCHXNb77XqBBKk?=
 =?us-ascii?Q?zkzBF89VTPLOIcW78P70AKRPuQgk6JnwmgjiBoYebMe1rX/7t6zckQO8N1cW?=
 =?us-ascii?Q?BOwd2RbgJ0QY66f8Kr5Eoc21X8dK1Wo7X3xGa/DdI2iY5xNtIgs2YmatvniE?=
 =?us-ascii?Q?HzeOUGP8L4J/E0pYsG42BG96Br9MiNd8FPGgXXR38LELtHeanlJEexGGfGxd?=
 =?us-ascii?Q?bDt7ocWMQPJENKLX1mG8m9i4TxUWwzNCI6GHmVWjStLoPZdCPYQeCw/dALuo?=
 =?us-ascii?Q?T7ZtIJDSlNvyuhXNGnWc2kNCu6UwsoiqJ/2VeL06p5D0wa+6XzJOkltMjyLI?=
 =?us-ascii?Q?MUYAQxdTDBkbbmEi4gUl0nA3LKnOD4QBCvvpq18nvBAsAvKfNKAbxcFoUh9O?=
 =?us-ascii?Q?Pzt6Oqm2wQlJ+fRhXi2WC4MVICnlQ0l6Z0ewdInbbOI4iMZp2PigNv0nQdzI?=
 =?us-ascii?Q?q/7NkwbsOjIIAZyRCrVN5PbcPBrMCzK3h4n5dx96REzCgpIntjeZ/YjLY4I0?=
 =?us-ascii?Q?kE0Ecq4Z9hVduxR/vC8nPifNzXg/N+NhDYuRxCADHY1kIyhGYauaZTnmtB1F?=
 =?us-ascii?Q?b9/4qnSmLlFyWXAxjcTGd1GwoHBtm7A9KzFrJjfWN+HG+ERMYO9jJVIDbM2K?=
 =?us-ascii?Q?RKhJ0guOwuH9JREet6kG7mYAWrihumcFO7xh924IDMf8qO6WwcYtm4PdHoGq?=
 =?us-ascii?Q?Mq0V?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <A0608BDA842C4444B1A9D6DDD5D6B68D@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6075
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT056.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	7ea865fc-df24-473d-dd32-08d89c372932
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	jTZGtPIub500pOg4gJSvwMWhR+JUuC9a+iFL3jdn5tgJc7l/0P+OsFf0O/+JQKgQHKlzPrq8b+HazTkBPzpPYMolgrOcvfgML60PTl+Qvvi1RqYPV89fCMg2y7L88+8PoXCLL92Z9aeyquiu+whIxmjNvjgo++PZBo2cy9/WRhyLwHetYONDX74kKdZtKmKs4KwnbMuxokkNj8HpYUhKnz/oIfcNMjenKO2JRdcR6jRwT+kv9q+aSoWHgXjyp0vSExLgOc1/pnRtYNZxRh+zVwO+A34JubZiSnTHEArZpmfCmr0SEFUE2zkaCB++L6xsHigHQtPOxoEtmeD+0UAk3YPdrtHr7WKO8wwnBgpdN6a4L7+aC+6haXoeyA9zT+5k2TACXzKLcADCvH9APPAggA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(396003)(346002)(376002)(39860400002)(46966005)(36756003)(26005)(6512007)(8676002)(47076004)(33656002)(186003)(2906002)(107886003)(70586007)(6506007)(70206006)(478600001)(86362001)(8936002)(4326008)(83380400001)(336012)(356005)(6862004)(316002)(53546011)(6486002)(5660300002)(2616005)(81166007)(82310400003)(54906003)(82740400003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Dec 2020 11:40:16.1215
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e9ed098d-576b-4398-aeb0-08d89c3732bc
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT056.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3959



> On 23 Nov 2020, at 15:22, Jan Beulich <jbeulich@suse.com> wrote:
> 
> ... into its own CU, to build it into an archive.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Acked-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> xen/common/lib.c     | 39 ----------------------------------
> xen/lib/Makefile     |  1 +
> xen/lib/parse-size.c | 50 ++++++++++++++++++++++++++++++++++++++++++++
> 3 files changed, 51 insertions(+), 39 deletions(-)
> create mode 100644 xen/lib/parse-size.c
> 
> diff --git a/xen/common/lib.c b/xen/common/lib.c
> index a224efa8f6e8..6cfa332142a5 100644
> --- a/xen/common/lib.c
> +++ b/xen/common/lib.c
> @@ -423,45 +423,6 @@ uint64_t muldiv64(uint64_t a, uint32_t b, uint32_t c)
> #endif
> }
> 
> -unsigned long long parse_size_and_unit(const char *s, const char **ps)
> -{
> -    unsigned long long ret;
> -    const char *s1;
> -
> -    ret = simple_strtoull(s, &s1, 0);
> -
> -    switch ( *s1 )
> -    {
> -    case 'T': case 't':
> -        ret <<= 10;
> -        /* fallthrough */
> -    case 'G': case 'g':
> -        ret <<= 10;
> -        /* fallthrough */
> -    case 'M': case 'm':
> -        ret <<= 10;
> -        /* fallthrough */
> -    case 'K': case 'k':
> -        ret <<= 10;
> -        /* fallthrough */
> -    case 'B': case 'b':
> -        s1++;
> -        break;
> -    case '%':
> -        if ( ps )
> -            break;
> -        /* fallthrough */
> -    default:
> -        ret <<= 10; /* default to kB */
> -        break;
> -    }
> -
> -    if ( ps != NULL )
> -        *ps = s1;
> -
> -    return ret;
> -}
> -
> typedef void (*ctor_func_t)(void);
> extern const ctor_func_t __ctors_start[], __ctors_end[];
> 
> diff --git a/xen/lib/Makefile b/xen/lib/Makefile
> index 764f3624b5f9..99f857540c99 100644
> --- a/xen/lib/Makefile
> +++ b/xen/lib/Makefile
> @@ -2,3 +2,4 @@ obj-$(CONFIG_X86) += x86/
> 
> lib-y += ctype.o
> lib-y += list-sort.o
> +lib-y += parse-size.o
> diff --git a/xen/lib/parse-size.c b/xen/lib/parse-size.c
> new file mode 100644
> index 000000000000..ec980cadfff3
> --- /dev/null
> +++ b/xen/lib/parse-size.c
> @@ -0,0 +1,50 @@
> +#include <xen/lib.h>
> +
> +unsigned long long parse_size_and_unit(const char *s, const char **ps)
> +{
> +    unsigned long long ret;
> +    const char *s1;
> +
> +    ret = simple_strtoull(s, &s1, 0);
> +
> +    switch ( *s1 )
> +    {
> +    case 'T': case 't':
> +        ret <<= 10;
> +        /* fallthrough */
> +    case 'G': case 'g':
> +        ret <<= 10;
> +        /* fallthrough */
> +    case 'M': case 'm':
> +        ret <<= 10;
> +        /* fallthrough */
> +    case 'K': case 'k':
> +        ret <<= 10;
> +        /* fallthrough */
> +    case 'B': case 'b':
> +        s1++;
> +        break;
> +    case '%':
> +        if ( ps )
> +            break;
> +        /* fallthrough */
> +    default:
> +        ret <<= 10; /* default to kB */
> +        break;
> +    }
> +
> +    if ( ps != NULL )
> +        *ps = s1;
> +
> +    return ret;
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> 
> 



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 11:52:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 11:52:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48182.85193 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmy0j-0007rs-Vn; Wed, 09 Dec 2020 11:51:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48182.85193; Wed, 09 Dec 2020 11:51:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmy0j-0007rl-ST; Wed, 09 Dec 2020 11:51:57 +0000
Received: by outflank-mailman (input) for mailman id 48182;
 Wed, 09 Dec 2020 11:51:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmy0i-0007rd-Qw; Wed, 09 Dec 2020 11:51:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmy0i-0006aE-JP; Wed, 09 Dec 2020 11:51:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kmy0i-0007Im-AS; Wed, 09 Dec 2020 11:51:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kmy0i-0005mW-9y; Wed, 09 Dec 2020 11:51:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bvO+xI7xBp2p21S6ekC0CX4GVA1fdwUdAZ2bMYnEk8I=; b=1KZwPLYXGYqUypeA07/4PTVTmX
	KZJ4yZMQxWivSXc8MKJQuYpHh29r6sr0EWSivzFx9tD7hxd6ZeHQx5gGA9Qs8CsnRzugwUDdwBCw0
	qjfOROZeOjRMW+5ExDEGTSzp09Av2Hsv0p8iwZQ0ElAQ4VIyI08Ak5tRlVwyE7EqI+50=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157338-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157338: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=7061294be500de021bef3d4bc5218134d223315f
X-Osstest-Versions-That:
    ovmf=cee5b0441af39dd6f76cc4e0447a1c7f788cbb00
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 09 Dec 2020 11:51:56 +0000

flight 157338 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157338/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 7061294be500de021bef3d4bc5218134d223315f
baseline version:
 ovmf                 cee5b0441af39dd6f76cc4e0447a1c7f788cbb00

Last test of basis   157333  2020-12-08 22:41:46 Z    0 days
Testing same since   157338  2020-12-09 03:48:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Guo Dong <guo.dong@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   cee5b0441a..7061294be5  7061294be500de021bef3d4bc5218134d223315f -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 11:54:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 11:54:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48188.85208 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmy31-00081m-DV; Wed, 09 Dec 2020 11:54:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48188.85208; Wed, 09 Dec 2020 11:54:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmy31-00081f-A7; Wed, 09 Dec 2020 11:54:19 +0000
Received: by outflank-mailman (input) for mailman id 48188;
 Wed, 09 Dec 2020 11:54:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kmy30-00081a-5j
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 11:54:18 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmy2x-0006cV-OR; Wed, 09 Dec 2020 11:54:15 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmy2x-0006Ab-H2; Wed, 09 Dec 2020 11:54:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=PZbvjGYPW+C80/qudm7L2NIELdMPGmRcgxQGTG5ezS0=; b=QdfpBPlyKvVVOJRp0xSTjqEFfy
	W24rD7BZOwTS4tDQW1/Cc5oG0sbDHwNGwFpTbAdjIUvG+bBtHlgHmMdxqBR4Wl1eCdcqWEiNNMnQg
	emvHnXYrO8TRir5ARPDrI6fRmlYsD+C0rcFv6js0JBysFmHLjEE5NvlGbD/DyMq3ezso=;
Subject: Re: [PATCH v3 4/5] evtchn: convert domain event lock to an r/w one
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <a333387e-f9e5-7051-569a-1a9a37da53ca@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <074be931-54b0-1b0f-72d8-5bd577884814@xen.org>
Date: Wed, 9 Dec 2020 11:54:13 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <a333387e-f9e5-7051-569a-1a9a37da53ca@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 23/11/2020 13:29, Jan Beulich wrote:
> @@ -620,7 +620,7 @@ int evtchn_close(struct domain *d1, int
>       long           rc = 0;
>   
>    again:
> -    spin_lock(&d1->event_lock);
> +    write_lock(&d1->event_lock);
>   
>       if ( !port_is_valid(d1, port1) )
>       {
> @@ -690,13 +690,11 @@ int evtchn_close(struct domain *d1, int
>                   BUG();
>   
>               if ( d1 < d2 )
> -            {
> -                spin_lock(&d2->event_lock);
> -            }
> +                read_lock(&d2->event_lock);

This change made me realize that I don't quite understand how the 
rwlock is meant to work for event_lock. I was actually expecting this to 
be a write_lock() given that there are state changes in the d2 events.

Could you outline how a developer can find out whether he/she should 
use read_lock() or write_lock()?

[...]

> --- a/xen/common/rwlock.c
> +++ b/xen/common/rwlock.c
> @@ -102,6 +102,14 @@ void queue_write_lock_slowpath(rwlock_t
>       spin_unlock(&lock->lock);
>   }
>   
> +void _rw_barrier(rwlock_t *lock)
> +{
> +    check_barrier(&lock->lock.debug);
> +    smp_mb();
> +    while ( _rw_is_locked(lock) )
> +        arch_lock_relax();
> +    smp_mb();
> +}

As I pointed out when this implementation was first proposed (see [1]), 
there is a risk that the loop will never exit.

I think the following implementation would be better (although it is ugly):

    write_lock(lock);
    /* do nothing */
    write_unlock(lock);

This will act as a barrier between lock holders before and after the call.

As an aside, I think the introduction of rw_barrier() deserves to be in a 
separate patch to help the review.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 12:03:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 12:03:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48200.85220 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmyBg-0000iz-Lg; Wed, 09 Dec 2020 12:03:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48200.85220; Wed, 09 Dec 2020 12:03:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmyBg-0000is-IR; Wed, 09 Dec 2020 12:03:16 +0000
Received: by outflank-mailman (input) for mailman id 48200;
 Wed, 09 Dec 2020 12:03:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bdKt=FN=alien8.de=bp@srs-us1.protection.inumbo.net>)
 id 1kmyBe-0000in-4Y
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 12:03:14 +0000
Received: from mail.skyhub.de (unknown [5.9.137.197])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c8576c31-d14e-45e3-8af7-24df90f1ea39;
 Wed, 09 Dec 2020 12:03:12 +0000 (UTC)
Received: from zn.tnic (p200300ec2f0f480029f89b316a92fa4b.dip0.t-ipconnect.de
 [IPv6:2003:ec:2f0f:4800:29f8:9b31:6a92:fa4b])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.skyhub.de (SuperMail on ZX Spectrum 128k) with ESMTPSA id 7B28D1EC0118;
 Wed,  9 Dec 2020 13:03:11 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c8576c31-d14e-45e3-8af7-24df90f1ea39
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=alien8.de; s=dkim;
	t=1607515391;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=MkDF2BGW53pllBVxbrp35zb7hHGQHDWR/sD7dvFz1IY=;
	b=HEfZeiKQCt2C7bZJNE6U0ObntUyi1mhN9sU2Pi4U4OH50xzO7goM8aBTpCO/vHZW/T20UL
	gCvwseWgnBUqsp6rixMDsQe46s2seaCU7GZWJG73Sb5KWEHRfSUOdQYEOF8C/pR3Li6VPM
	DksKCjaHWoiL91XA/+XuLCu+FKdjEN8=
Date: Wed, 9 Dec 2020 13:03:07 +0100
From: Borislav Petkov <bp@alien8.de>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, peterz@infradead.org, luto@kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>
Subject: Re: [PATCH v2 07/12] x86: add new features for paravirt patching
Message-ID: <20201209120307.GB18203@zn.tnic>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-8-jgross@suse.com>
 <20201208184315.GE27920@zn.tnic>
 <2510752e-5d3d-f71c-8a4c-a5d2aae0075e@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <2510752e-5d3d-f71c-8a4c-a5d2aae0075e@suse.com>

On Wed, Dec 09, 2020 at 08:30:53AM +0100, Jürgen Groß wrote:
> Hey, I already suggested to use ~FEATURE for that purpose (see
> https://lore.kernel.org/lkml/f105a63d-6b51-3afb-83e0-e899ea40813e@suse.com/

Great minds think alike!

:-P

> I'd rather make the syntax:
> 
> ALTERNATIVE_TERNARY <initial-code> <feature> <code-for-feature-set>
>                                              <code-for-feature-unset>
> 
> as this ...

Sure, that is ok too.

> ... would match perfectly to this interpretation.

Yap.

> Hmm, using flags is an alternative (pun intended :-) ).

LOL!

Btw, pls do check how much the vmlinux size of an allyesconfig grows
with this as we will be adding a byte per patch site. Not that it would
matter too much - the flags are a long way a comin'. :-)

> No, this is needed for non-Xen cases, too (especially pv spinlocks).

I see.

> > Can you give an example here pls why the paravirt patching needs to run
> > first?
> 
> Okay.

I meant an example for me to have a look at. :)

If possible pls.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 12:11:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 12:11:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48206.85231 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmyJr-0001iZ-HU; Wed, 09 Dec 2020 12:11:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48206.85231; Wed, 09 Dec 2020 12:11:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmyJr-0001iS-Ec; Wed, 09 Dec 2020 12:11:43 +0000
Received: by outflank-mailman (input) for mailman id 48206;
 Wed, 09 Dec 2020 12:11:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kmyJq-0001iM-CW
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 12:11:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmyJl-00070h-9e; Wed, 09 Dec 2020 12:11:37 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kmyJk-0007iu-TC; Wed, 09 Dec 2020 12:11:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=OOX2ZDO/DwRSwT9r6UxOEOqMKyeANE6eis42AXi/oaM=; b=Q89iRDJuarFdwfPrhFXOqNaesZ
	oAo+rr95I56u+wz56YWs6VD6e7o7TlISKKrKSeTTk3yXcNb6c5Ifdx3xPRMSQYyk2FVfMcS8WhgqB
	Zrs0zEOYs8bbIlFgJ7wcEa3O60j0o0Pwp7LRX4e/JSYQ3qHJDHbOfJ++eGh3vLrP7UbM=;
Subject: Re: [PATCH v3] xen: add support for automatic debug key actions in
 case of crash
To: Jan Beulich <jbeulich@suse.com>, Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org
References: <20201126080340.6154-1-jgross@suse.com>
 <22190c77-eb35-5b72-7d72-34800c3f052f@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <98c45abd-8796-088c-e2a6-9ad494beeb9e@xen.org>
Date: Wed, 9 Dec 2020 12:11:33 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <22190c77-eb35-5b72-7d72-34800c3f052f@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

Sorry for jumping into the discussion late.

On 26/11/2020 11:20, Jan Beulich wrote:
> On 26.11.2020 09:03, Juergen Gross wrote:
>> When the host crashes it would sometimes be nice to have additional
>> debug data available which could be produced via debug keys, but
>> halting the server for manual intervention might be impossible due to
>> the need to reboot/kexec sooner rather than later.
>>
>> Add support for automatic debug key actions in case of crashes which
>> can be activated via boot- or runtime-parameter.
>>
>> Depending on the type of crash the desired data might be different, so
>> support different settings for the possible types of crashes.
>>
>> The parameter is "crash-debug" with the following syntax:
>>
>>    crash-debug-<type>=<string>
>>
>> with <type> being one of:
>>
>>    panic, hwdom, watchdog, kexeccmd, debugkey
>>
>> and <string> a sequence of debug key characters with '+' having the
>> special semantics of a 10 millisecond pause.
>>
>> So "crash-debug-watchdog=0+0qr" would result in special output in case
>> of watchdog triggered crash (dom0 state, 10 ms pause, dom0 state,
>> domain info, run queues).
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> V2:
>> - switched special character '.' to '+' (Jan Beulich)
>> - 10 ms instead of 1 s pause (Jan Beulich)
>> - added more text to the boot parameter description (Jan Beulich)
>>
>> V3:
>> - added const (Jan Beulich)
>> - thorough test of crash reason parameter (Jan Beulich)
>> - kexeccmd case should depend on CONFIG_KEXEC (Jan Beulich)
>> - added dummy get_irq_regs() helper on Arm
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
> 
> Except for the Arm aspect, where I'm not sure using
> guest_cpu_user_regs() is correct in all cases,
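As an aside for readers, the key-string semantics described in the quoted patch ('+' meaning a 10 ms pause, every other character a debug key) can be sketched as below. This is only an illustrative model, not the patch's code: the pause is merely counted instead of performed, and the recognized keys are collected instead of dispatched to the hypervisor's key handlers.

```c
#include <stddef.h>
#include <string.h>

/*
 * Interpret a crash-debug string as described above: each character
 * is a debug key, except '+', which stands for a 10 ms pause.
 * Recognized keys are written to keys_out (NUL-terminated) and the
 * number of pauses is reported via *pauses; a real implementation
 * would dispatch each key and actually delay instead.
 */
static size_t run_crash_debug(const char *s, char *keys_out,
                              unsigned int *pauses)
{
    size_t keys = 0;

    *pauses = 0;
    for ( ; *s; s++ )
    {
        if ( *s == '+' )
            (*pauses)++;            /* '+' == insert a 10 ms pause */
        else
            keys_out[keys++] = *s;  /* any other char: a debug key */
    }
    keys_out[keys] = '\0';

    return keys;
}
```

With this model, "crash-debug-watchdog=0+0qr" yields the keys '0', '0', 'q', 'r' with one pause between the two dom0 dumps, matching the example in the commit message.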

I am not entirely sure I understand what get_irq_regs() is supposed to 
return on x86. Is it the registers saved from the most recent exception?

Cheers,


-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 12:22:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 12:22:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48237.85278 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmyUH-00035b-5U; Wed, 09 Dec 2020 12:22:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48237.85278; Wed, 09 Dec 2020 12:22:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmyUH-00035U-2R; Wed, 09 Dec 2020 12:22:29 +0000
Received: by outflank-mailman (input) for mailman id 48237;
 Wed, 09 Dec 2020 12:22:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sDS6=FN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kmyUF-00035P-Ln
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 12:22:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e598f9fa-fb91-44d1-bb95-32e8fa1abd17;
 Wed, 09 Dec 2020 12:22:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2AE30ACEB;
 Wed,  9 Dec 2020 12:22:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e598f9fa-fb91-44d1-bb95-32e8fa1abd17
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607516545; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=d6BLSrmEsvAmBCGh/lx0C5ZfrqtSEHsMuHBG2XnZ/Ro=;
	b=ncPpEGPcbHUJafnkDYQ/1H+ADDcCm5mL63ULlOfqMBhAnbc4yg/Gky2O5504x9o/tZrBrw
	DP7jhSJhM4Fpz8OxbijA5E0XwoPASgyWTUDdc+ATpiMeMmSQpO0kHNOkr8o9+l+8OPlFsh
	ixuwSO3VZGgWks6WWKL58SRm0h4vXUU=
To: Borislav Petkov <bp@alien8.de>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, peterz@infradead.org, luto@kernel.org,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 "H. Peter Anvin" <hpa@zytor.com>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-8-jgross@suse.com> <20201208184315.GE27920@zn.tnic>
 <2510752e-5d3d-f71c-8a4c-a5d2aae0075e@suse.com>
 <20201209120307.GB18203@zn.tnic>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v2 07/12] x86: add new features for paravirt patching
Message-ID: <9e989b07-84e8-b07b-ba6e-c2a3ed19d7b1@suse.com>
Date: Wed, 9 Dec 2020 13:22:24 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201209120307.GB18203@zn.tnic>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="qobO3V9L9yPkUmsoDuHQESsEkFOc5p2K2"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--qobO3V9L9yPkUmsoDuHQESsEkFOc5p2K2
Content-Type: multipart/mixed; boundary="j7y3yD5DTlqLcvANFz2bK4alg1dWK3UG5";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Borislav Petkov <bp@alien8.de>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, peterz@infradead.org, luto@kernel.org,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 "H. Peter Anvin" <hpa@zytor.com>
Message-ID: <9e989b07-84e8-b07b-ba6e-c2a3ed19d7b1@suse.com>
Subject: Re: [PATCH v2 07/12] x86: add new features for paravirt patching
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-8-jgross@suse.com> <20201208184315.GE27920@zn.tnic>
 <2510752e-5d3d-f71c-8a4c-a5d2aae0075e@suse.com>
 <20201209120307.GB18203@zn.tnic>
In-Reply-To: <20201209120307.GB18203@zn.tnic>

--j7y3yD5DTlqLcvANFz2bK4alg1dWK3UG5
Content-Type: multipart/mixed;
 boundary="------------CA72DA38978A6BC0090784A9"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------CA72DA38978A6BC0090784A9
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 09.12.20 13:03, Borislav Petkov wrote:
> On Wed, Dec 09, 2020 at 08:30:53AM +0100, Jürgen Groß wrote:
>> Hey, I already suggested to use ~FEATURE for that purpose (see
>> https://lore.kernel.org/lkml/f105a63d-6b51-3afb-83e0-e899ea40813e@suse.com/
> 
> Great minds think alike!
> 
> :-P
> 
>> I'd rather make the syntax:
>>
>> ALTERNATIVE_TERNARY <initial-code> <feature> <code-for-feature-set>
>>                                              <code-for-feature-unset>
>>
>> as this ...
> 
> Sure, that is ok too.
> 
>> ... would match perfectly to this interpretation.
> 
> Yap.
> 
>> Hmm, using flags is an alternative (pun intended :-) ).
> 
> LOL!
> 
> Btw, pls do check how much the vmlinux size of an allyesconfig grows
> with this as we will be adding a byte per patch site. Not that it would
> matter too much - the flags are a long way a comin'. :-)
> 
>> No, this is needed for non-Xen cases, too (especially pv spinlocks).
> 
> I see.
> 
>>> Can you give an example here pls why the paravirt patching needs to run
>>> first?
>>
>> Okay.
> 
> I meant an example for me to have a look at. :)
> 
> If possible pls.

Ah, okay.

Let's take the spin_unlock() case. With patch 11 of the series this is

PVOP_ALT_VCALLEE1(lock.queued_spin_unlock, lock,
                   "movb $0, (%%" _ASM_ARG1 ");",
                   X86_FEATURE_NO_PVUNLOCK);

which boils down to ALTERNATIVE "call *lock.queued_spin_unlock"
                                 "movb $0,(%rdi)" X86_FEATURE_NO_PVUNLOCK

The initial (paravirt) code is an indirect call in order to allow
spin_unlock() before paravirt/alternative patching takes place.

Paravirt patching will then replace the indirect call with a direct call
to the correct unlock function. Then alternative patching might replace
the direct call to the bare metal unlock with a plain "movb $0,(%rdi)"
in case pvlocks are not enabled.

If alternative patching were to occur first, the indirect call might
be replaced with the "movb ...", and then paravirt patching would
clobber that with the direct call, resulting in the bare metal
optimization being removed again.


Juergen
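The ordering argument can be illustrated with a toy model. Nothing here is the real alternatives machinery: the three "instruction" values simply stand in for the indirect call, the direct call, and the movb, and the two patch functions model only the property that matters (paravirt patching blindly rewrites the site, while alternative patching is feature-conditional).

```c
/* Toy model of a patch site going through two patching stages. */
enum insn { INDIRECT_CALL, DIRECT_CALL, MOVB };

/*
 * Paravirt patching: knows nothing about alternatives and
 * unconditionally rewrites the site with a direct call to the
 * chosen backend.
 */
static void paravirt_patch(enum insn *site)
{
    *site = DIRECT_CALL;
}

/*
 * Alternative patching: if the feature (here: "no pv unlock") is
 * set, replace whatever is at the site with the bare-metal movb.
 */
static void alternative_patch(enum insn *site, int no_pvunlock)
{
    if ( no_pvunlock )
        *site = MOVB;
}
```

Running paravirt first and alternatives second leaves the movb in place; the reverse order ends with the direct call, i.e. the bare-metal optimization is clobbered again, which is the problem described above.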

--------------CA72DA38978A6BC0090784A9
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------CA72DA38978A6BC0090784A9--

--j7y3yD5DTlqLcvANFz2bK4alg1dWK3UG5--

--qobO3V9L9yPkUmsoDuHQESsEkFOc5p2K2
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/QwYAFAwAAAAAACgkQsN6d1ii/Ey8C
rwgAjKCTh8rwB+MLx5PuBMufLtRi8vf4GP26O85xV8wEvfWhch7/qy70h5Gw5A6BOCCxQLWZSINQ
ufDERwJjYhLC6lWo60Twwu3XlwUy3XC3/OC2xMrCeFGb6aoPL7Cis1jgNN+Rv3sBDx5y5doXRQZI
uQTYdLfiUnFW8Gow9yS0sofiHJjgcpzkF/kgft19MRGadRYH/AIJhLUko8bHBptLLkwSbqzZWKBP
pXOdsFjwYHLJkCog8GqKVx4pTgBmJGuDHh9stdFY7co9u3nLz4jISxEgEoPrl/cRD+JKeTkSkiPm
mVjAzQ02EJP47h09rTGf0nTohZHBTjzJVqDg/yeeLw==
=MeHc
-----END PGP SIGNATURE-----

--qobO3V9L9yPkUmsoDuHQESsEkFOc5p2K2--


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 12:35:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 12:35:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48244.85290 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmygx-0004Ej-92; Wed, 09 Dec 2020 12:35:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48244.85290; Wed, 09 Dec 2020 12:35:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmygx-0004Ec-4y; Wed, 09 Dec 2020 12:35:35 +0000
Received: by outflank-mailman (input) for mailman id 48244;
 Wed, 09 Dec 2020 12:35:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZHYZ=FN=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1kmygw-0004EX-Fe
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 12:35:34 +0000
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f0917a44-8970-46ab-a772-ddf2273d71f6;
 Wed, 09 Dec 2020 12:35:33 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0B9CXYfw030646;
 Wed, 9 Dec 2020 12:35:31 GMT
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by aserp2120.oracle.com with ESMTP id 35825m7wtu-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Wed, 09 Dec 2020 12:35:31 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0B9CTlu4160349;
 Wed, 9 Dec 2020 12:33:30 GMT
Received: from aserv0122.oracle.com (aserv0122.oracle.com [141.146.126.236])
 by userp3030.oracle.com with ESMTP id 358m50gmf9-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 09 Dec 2020 12:33:30 +0000
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
 by aserv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 0B9CXS0r031296;
 Wed, 9 Dec 2020 12:33:28 GMT
Received: from [10.39.218.141] (/10.39.218.141)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Wed, 09 Dec 2020 04:33:28 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f0917a44-8970-46ab-a772-ddf2273d71f6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=uAtbp6FlKUllmr+yHGq3Pv/VB5lfTrcVbzi/Vo9937M=;
 b=IAEUhaS2kdS1CLykMSAJgfswBZwFaoZsSUedBFW3zRzZU0sSyTnBQr/FqFPPGV3kWslX
 +elj9a5AQTDiTMYPe2roDdS4OpLPNnvMAxC476aNA8UIaLQBEQc+2m3qbPGjSWT/k1VS
 lF7wVNfaoI3ndGS7pHIfDsQRH3o5ecWa/CuFlPJODtqje7QIfI6xXNqzTnV6lbw6J1ZV
 dBr69adjApWiru89fDMDsVJN+Va8pw4eHj1BiNEaQI7lBtlO0EBZEzBA6MIqywEpZP9J
 NuMjgD5OlDLTUN8KZzd8jiutFed6P/2U5AYqoxYQd70WP8jZnfdBDfInH7muqW0StLSt VA== 
Subject: Re: [PATCH] xen/xenbus: make xs_talkv() interruptible for SIGKILL
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
        linux-kernel@vger.kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>
References: <20201209101114.31522-1-jgross@suse.com>
From: boris.ostrovsky@oracle.com
Organization: Oracle Corporation
Message-ID: <8eae89eb-9250-ec03-e78a-686efc38742e@oracle.com>
Date: Wed, 9 Dec 2020 07:33:27 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201209101114.31522-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9829 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0 spamscore=0 suspectscore=0
 bulkscore=0 malwarescore=0 phishscore=0 adultscore=0 mlxlogscore=999
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2012090088
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9829 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0 adultscore=0 bulkscore=0
 phishscore=0 mlxlogscore=999 clxscore=1015 priorityscore=1501 mlxscore=0
 spamscore=0 lowpriorityscore=0 malwarescore=0 impostorscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2012090089


On 12/9/20 5:11 AM, Juergen Gross wrote:
> In case a process waits for any Xenstore action in the xenbus driver
> it should be interruptible via SIGKILL.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>




From xen-devel-bounces@lists.xenproject.org Wed Dec 09 13:27:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 13:27:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48250.85302 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmzV5-0000ZV-A6; Wed, 09 Dec 2020 13:27:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48250.85302; Wed, 09 Dec 2020 13:27:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmzV5-0000ZO-68; Wed, 09 Dec 2020 13:27:23 +0000
Received: by outflank-mailman (input) for mailman id 48250;
 Wed, 09 Dec 2020 13:27:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=S60M=FN=arm.com=mark.rutland@srs-us1.protection.inumbo.net>)
 id 1kmzV4-0000ZJ-1e
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 13:27:22 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 54f6f2c4-09a3-485c-b1c7-f5bd4c0e4a57;
 Wed, 09 Dec 2020 13:27:20 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 6032831B;
 Wed,  9 Dec 2020 05:27:20 -0800 (PST)
Received: from C02TD0UTHF1T.local (unknown [10.57.26.40])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 2F7983F66B;
 Wed,  9 Dec 2020 05:27:17 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 54f6f2c4-09a3-485c-b1c7-f5bd4c0e4a57
Date: Wed, 9 Dec 2020 13:27:10 +0000
From: Mark Rutland <mark.rutland@arm.com>
To: Andy Lutomirski <luto@kernel.org>
Cc: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"VMware, Inc." <pv-drivers@vmware.com>, X86 ML <x86@kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Linux Virtualization <virtualization@lists.linux-foundation.org>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [PATCH v2 05/12] x86: rework arch_local_irq_restore() to not use
 popf
Message-ID: <20201209132710.GA8566@C02TD0UTHF1T.local>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-6-jgross@suse.com>
 <20201120115943.GD3021@hirez.programming.kicks-ass.net>
 <eb05e878-6334-8d19-496b-6572df67fc56@suse.com>
 <CALCETrXOGhXoOJpzhAMqD7iibi09WzbGk9SWVH7JzA=d5uarWA@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <CALCETrXOGhXoOJpzhAMqD7iibi09WzbGk9SWVH7JzA=d5uarWA@mail.gmail.com>

On Sun, Nov 22, 2020 at 01:44:53PM -0800, Andy Lutomirski wrote:
> On Sat, Nov 21, 2020 at 10:55 PM Jürgen Groß <jgross@suse.com> wrote:
> >
> > On 20.11.20 12:59, Peter Zijlstra wrote:
> > > On Fri, Nov 20, 2020 at 12:46:23PM +0100, Juergen Gross wrote:
> > >> +static __always_inline void arch_local_irq_restore(unsigned long flags)
> > >> +{
> > >> +    if (!arch_irqs_disabled_flags(flags))
> > >> +            arch_local_irq_enable();
> > >> +}
> > >
> > > If someone were to write horrible code like:
> > >
> > >       local_irq_disable();
> > >       local_irq_save(flags);
> > >       local_irq_enable();
> > >       local_irq_restore(flags);
> > >
> > > we'd be up some creek without a paddle... now I don't _think_ we have
> > > genius code like that, but I'd feel safer if we can haz an assertion in
> > > there somewhere...
> > >
> > > Maybe something like:
> > >
> > > #ifdef CONFIG_DEBUG_ENTRY // for lack of something saner
> > >       WARN_ON_ONCE((arch_local_save_flags() ^ flags) & X86_EFLAGS_IF);
> > > #endif
> > >
> > > At the end?
> >
> > I'd like to, but using WARN_ON_ONCE() in include/asm/irqflags.h sounds
> > like a perfect recipe for include dependency hell.
> >
> > We could use a plain asm("ud2") instead.
> 
> How about out-of-lining it:
> 
> #ifdef CONFIG_DEBUG_ENTRY
> extern void warn_bogus_irqrestore(void);
> #endif
> 
> static __always_inline void arch_local_irq_restore(unsigned long flags)
> {
>        if (!arch_irqs_disabled_flags(flags)) {
>                arch_local_irq_enable();
>        } else {
> #ifdef CONFIG_DEBUG_ENTRY
>                if (unlikely(arch_local_irq_save() & X86_EFLAGS_IF))
>                     warn_bogus_irqrestore();
> #endif
>        }
> }

I was just talking to Peter on IRC about implementing the same thing for
arm64, so could we put this in the generic irqflags code? IIUC we can
use raw_irqs_disabled() to do the check.

As this isn't really entry specific (and IIUC the cases this should
catch would break lockdep today), maybe we should add a new
DEBUG_IRQFLAGS for this, that DEBUG_LOCKDEP can also select?

Something like:

#define local_irq_restore(flags)                               \
       do {                                                    \
               if (!raw_irqs_disabled_flags(flags)) {          \
                       trace_hardirqs_on();                    \
               } else if (IS_ENABLED(CONFIG_DEBUG_IRQFLAGS)) { \
                       if (unlikely(raw_irqs_disabled()))      \
                               warn_bogus_irqrestore();        \
               }                                               \
               raw_local_irq_restore(flags);                   \
        } while (0)

... perhaps? (ignoring however we deal with once-ness).

Thanks,
Mark.


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 13:29:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 13:29:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48255.85314 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmzWm-0000iU-Lg; Wed, 09 Dec 2020 13:29:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48255.85314; Wed, 09 Dec 2020 13:29:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kmzWm-0000iN-IF; Wed, 09 Dec 2020 13:29:08 +0000
Received: by outflank-mailman (input) for mailman id 48255;
 Wed, 09 Dec 2020 13:29:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NJeK=FN=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kmzWl-0000iH-2S
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 13:29:07 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8d68dc93-435d-48bb-90bd-8d414bcd9998;
 Wed, 09 Dec 2020 13:29:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8d68dc93-435d-48bb-90bd-8d414bcd9998
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1607520545;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to;
  bh=m4hcXQ+GHB8KUEuJp6QdskPIyy0LMVtgcFhjUc8O588=;
  b=DckOXCbJxTITU2HPKD70mBngrmTWhoN5J/1/xy9PPlUGIgd6ehj4UA59
   3qgeZjPXjqgQYNbev9+9obfqMpm7DMZGUNXJSHRHFkf4KKrIR6hq8dRXy
   4+puK8XN48RgT+SHZildEArCL3D/y/FUm3qf3DkJzcOYb/YHcbR1tIUuh
   A=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: OAq9Zg151lthr0GY17a0abyNpoE/3JWRV9aUlX+BOdYJjaIpfpqb9T/1n57qJqdMX1k+VKhVIF
 iduC4APXAgh26ynvLO1QCjcY4LXbclhYtRKLtFZAnwgtK6qh1pX3hA7+TDdf3nwrqvV/gma0L+
 aKNktzrcyC4fVclmEfZHA+EXNK6Y3o6srqKM77PoJVS9Ppm3N018m+4XldoX/CUOfBJwa8JyLv
 dS67OWKTT1d0IOenttv3esfOhjeEaX87N4WQf0BZn9n/BS7ynYy6wsmtuVsqZ/5c0gf6e8a66P
 8JI=
X-SBRS: 5.2
X-MesageID: 32825861
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:49WMsqnKKjpR4sHbKdZUgvgZ3JDpDfOLimdD5ilNYBxZY6Wkvu
 qF9c516TbfjjENVHY83fWJP6edSX3RnKQFh7U5F7GkQQXgpS+UPJhvhLGSpAHINiXi+odmpM
 RdWodkDtmYNzlHpOb8pDK1CtMxhOSAmZrY4dv261dIYUVUZ7p77wF/Yzzrd3FeYAVdH5I2GN
 69y6N8xwaIQngcYsSlCnRtZYGqzLf2vanrbhIcCxks5BPmt12VwYX3DgSC2VMmWy5PqI1SiF
 TtqRDz5amorpiApyP06mm71fhrsef6xsAGLMKBjdV9EESPti+YIK5lW7GEoQkvpvCu5FsAgL
 D30mwdFvU2zWjQcGGzqQbs3Ael8A9G0Q6b9WOwhHHUocHpLQ8SAcApv+1kWxHe7Fctu8w51a
 pN0X6QuZY/N2KnoA324d/UWxZ20leluHZKq591s1VTWZYTAYUhzrA381hSFP47fR7S6IdiC+
 V2CdGZ+fA+SyL/U1ncsnN0yNKhGnQ/dy3nfmEHusiYlydbh2p4yUxw/r17ol4a+JgwS4ZJ6o
 3/W8wC/o1mVcMYYblwA+0MW6KMZFDlWh7QLHmUZU3uCaBvAQO1l7fs/L436Ou2EaZk8LIunv
 36PG9wqXQ/YAbnB8GIwfRwg3LwaXT4VzHsxsZC/oN+q73xSbH6WBfzM2wGgo+nuPUQAsrSRv
 a1NtZXGpbYXBPTJbo=
X-IronPort-AV: E=Sophos;i="5.78,405,1599537600"; 
   d="scan'208,223";a="32825861"
Subject: Re: dom0 PV looping on search_pre_exception_table()
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: <xen-devel@lists.xenproject.org>
References: <20201208175738.GA3390@antioche.eu.org>
 <e73cc71d-c1a6-87c8-1b82-5d70d4f52eaa@citrix.com>
 <20201209101512.GA1299@antioche.eu.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <3f7e50bb-24ad-1e32-9ea1-ba87007d3796@citrix.com>
Date: Wed, 9 Dec 2020 13:28:54 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201209101512.GA1299@antioche.eu.org>
Content-Type: multipart/mixed;
	boundary="------------9EAEEE9A3F71311E771CA46C"
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

--------------9EAEEE9A3F71311E771CA46C
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

On 09/12/2020 10:15, Manuel Bouyer wrote:
> On Tue, Dec 08, 2020 at 06:13:46PM +0000, Andrew Cooper wrote:
>> On 08/12/2020 17:57, Manuel Bouyer wrote:
>>> Hello,
>>> for the first time I tried to boot a xen kernel from devel with
> >>> a NetBSD PV dom0. The kernel boots, but when the first userland process
>>> is launched, it seems to enter a loop involving search_pre_exception_table()
>>> (I see an endless stream from the dprintk() at arch/x86/extable.c:202)
>>>
>>> With xen 4.13 I see it, but exactly once:
>>> (XEN) extable.c:202: Pre-exception: ffff82d08038c304 -> ffff82d08038c8c8
>>>
>>> with devel:
>>> (XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
>>> (XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
>>> (XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
>>> (XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
>>> (XEN) extable.c:202: Pre-exception: ffff82d040393309 -> ffff82d0403938c8        
>>> [...]
>>>
>>> the dom0 kernel is the same.
>>>
> >>> At first glance it looks like a fault in the guest is not handled as it should be,
>>> and the userland process keeps faulting on the same address.
>>>
>>> Any idea what to look at ?
>> That is a reoccurring fault on IRET back to guest context, and is
>> probably caused by some unwise-in-hindsight cleanup which doesn't
>> escalate the failure to the failsafe callback.
>>
>> This ought to give something useful to debug with:
> thanks, I got:
> (XEN) IRET fault: #PF[0000]                                                 
> (XEN) domain_crash called from extable.c:209                                
> (XEN) Domain 0 (vcpu#0) crashed on cpu#0:                                   
> (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:   C   ]----       
> (XEN) CPU:    0                                                             
> (XEN) RIP:    0047:[<00007f7e184007d0>]                                     
> (XEN) RFLAGS: 0000000000000202   EM: 0   CONTEXT: pv guest (d0v0)           
> (XEN) rax: ffff82d04038c309   rbx: 0000000000000000   rcx: 000000000000e008 
> (XEN) rdx: 0000000000010086   rsi: ffff83007fcb7f78   rdi: 000000000000e010 
> (XEN) rbp: 0000000000000000   rsp: 00007f7fff53e3e0   r8:  0000000e00000000 
> (XEN) r9:  0000000000000000   r10: 0000000000000000   r11: 0000000000000000 
> (XEN) r12: 0000000000000000   r13: 0000000000000000   r14: 0000000000000000 
> (XEN) r15: 0000000000000000   cr0: 0000000080050033   cr4: 0000000000002660 
> (XEN) cr3: 0000000079cdb000   cr2: 00007f7fff53e3e0                         
> (XEN) fsb: 0000000000000000   gsb: 0000000000000000   gss: ffffffff80cf2dc0    
> (XEN) ds: 0023   es: 0023   fs: 0000   gs: 0000   ss: 003f   cs: 0047       
> (XEN) Guest stack trace from rsp=00007f7fff53e3e0:          
> (XEN)    0000000000000001 00007f7fff53e8f8 0000000000000000 0000000000000000
> (XEN)    0000000000000003 000000004b600040 0000000000000004 0000000000000038
> (XEN)    0000000000000005 0000000000000008 0000000000000006 0000000000001000
> (XEN)    0000000000000007 00007f7e18400000 0000000000000008 0000000000000000
> (XEN)    0000000000000009 000000004b601cd0 00000000000007d0 0000000000000000
> (XEN)    00000000000007d1 0000000000000000 00000000000007d2 0000000000000000
> (XEN)    00000000000007d3 0000000000000000 000000000000000d 00007f7fff53f000
> (XEN)    00000000000007de 00007f7fff53e4e0 0000000000000000 0000000000000000
> (XEN)    6e692f6e6962732f 0000000000007469 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN) Hardware Dom0 crashed: rebooting machine in 5 seconds.

Pagefaults on IRET come either from stack accesses for operands (not the
case here as Xen is otherwise working fine), or from segment selector
loads for %cs and %ss.

In this example, %ss is in the LDT, which specifically does use
pagefaults to promote the frame to PGT_segdesc.

I suspect that what is happening is that handle_ldt_mapping_fault() is
failing to promote the page (for some reason), and we're taking the "In
hypervisor mode? Leave it to the #PF handler to fix up." path due to the
confusion in context, and Xen's #PF handler is concluding "nothing else
to do".

The older behaviour of escalating to the failsafe callback would have
broken this cycle by rewriting %ss and re-entering the kernel.


Please try the attached debugging patch, which is an extension of what I
gave you yesterday.  First, it ought to print %cr2, which I expect will
point to Xen's virtual mapping of the vcpu's LDT.  The logic ought to
loop a few times so we can inspect the hypervisor codepaths which are
effectively livelocked in this state, and I've also instrumented
check_descriptor() failures because I've got a gut feeling that is the
root cause of the problem.

~Andrew

--------------9EAEEE9A3F71311E771CA46C
Content-Type: text/x-patch; charset="UTF-8"; name="0001-extable-dbg.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="0001-extable-dbg.patch"

>From 841a6950fec5b43b370653e0c833a54fed64882e Mon Sep 17 00:00:00 2001
From: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Wed, 9 Dec 2020 12:50:38 +0000
Subject: extable-dbg


diff --git a/xen/arch/x86/extable.c b/xen/arch/x86/extable.c
index 70972f1085..88b05bef38 100644
--- a/xen/arch/x86/extable.c
+++ b/xen/arch/x86/extable.c
@@ -191,6 +191,10 @@ static int __init stub_selftest(void)
 __initcall(stub_selftest);
 #endif
 
+#include <xen/sched.h>
+#include <xen/softirq.h>
+const char *vec_name(unsigned int vec);
+
 unsigned long
 search_pre_exception_table(struct cpu_user_regs *regs)
 {
@@ -199,7 +203,21 @@ search_pre_exception_table(struct cpu_user_regs *regs)
         __start___pre_ex_table, __stop___pre_ex_table-1, addr);
     if ( fixup )
     {
-        dprintk(XENLOG_INFO, "Pre-exception: %p -> %p\n", _p(addr), _p(fixup));
+        static int count;
+
+        printk(XENLOG_ERR "IRET fault: %s[%04x]\n",
+               vec_name(regs->entry_vector), regs->error_code);
+
+        if ( regs->entry_vector == X86_EXC_PF )
+            printk(XENLOG_ERR "%%cr2 %016lx\n", read_cr2());
+
+        if ( count++ > 2 )
+        {
+            domain_crash(current->domain);
+            for ( ;; )
+                do_softirq();
+        }
+
         perfc_incr(exception_fixed);
     }
     return fixup;
diff --git a/xen/arch/x86/pv/descriptor-tables.c b/xen/arch/x86/pv/descriptor-tables.c
index 39c1a2311a..6bc58bba67 100644
--- a/xen/arch/x86/pv/descriptor-tables.c
+++ b/xen/arch/x86/pv/descriptor-tables.c
@@ -282,6 +282,10 @@ int validate_segdesc_page(struct page_info *page)
 
+    if ( i != 512 )
+        printk_once("Check Descriptor failed: idx %u, a: %08x, b: %08x\n",
+                    i, descs[i].a, descs[i].b);
+
     unmap_domain_page(descs);
 
     return i == 512 ? 0 : -EINVAL;
 }
 
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 0459cee9fb..1059f3ce66 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -687,7 +687,7 @@ const char *trapstr(unsigned int trapnr)
     return trapnr < ARRAY_SIZE(strings) ? strings[trapnr] : "???";
 }
 
-static const char *vec_name(unsigned int vec)
+const char *vec_name(unsigned int vec)
 {
     static const char names[][4] = {
 #define P(x) [X86_EXC_ ## x] = "#" #x

--------------9EAEEE9A3F71311E771CA46C--


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 13:59:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 13:59:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48274.85334 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn005-0003gc-5z; Wed, 09 Dec 2020 13:59:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48274.85334; Wed, 09 Dec 2020 13:59:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn005-0003gV-2g; Wed, 09 Dec 2020 13:59:25 +0000
Received: by outflank-mailman (input) for mailman id 48274;
 Wed, 09 Dec 2020 13:59:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XdhY=FN=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kn004-0003gQ-Ev
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 13:59:24 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 95922b69-8533-40ff-9909-9189a95ad141;
 Wed, 09 Dec 2020 13:59:21 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0B9DxDJh007027
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Wed, 9 Dec 2020 14:59:14 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id BB6FA2E946C; Wed,  9 Dec 2020 14:59:08 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 95922b69-8533-40ff-9909-9189a95ad141
Date: Wed, 9 Dec 2020 14:59:08 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: dom0 PV looping on search_pre_exception_table()
Message-ID: <20201209135908.GA4269@antioche.eu.org>
References: <20201208175738.GA3390@antioche.eu.org>
 <e73cc71d-c1a6-87c8-1b82-5d70d4f52eaa@citrix.com>
 <20201209101512.GA1299@antioche.eu.org>
 <3f7e50bb-24ad-1e32-9ea1-ba87007d3796@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <3f7e50bb-24ad-1e32-9ea1-ba87007d3796@citrix.com>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Wed, 09 Dec 2020 14:59:15 +0100 (MET)

On Wed, Dec 09, 2020 at 01:28:54PM +0000, Andrew Cooper wrote:
> 
> Pagefaults on IRET come either from stack accesses for operands (not the
> case here as Xen is otherwise working fine), or from segment selector
> loads for %cs and %ss.
> 
> In this example, %ss is in the LDT, which specifically does use
> pagefaults to promote the frame to PGT_segdesc.
> 
> I suspect that what is happening is that handle_ldt_mapping_fault() is
> failing to promote the page (for some reason), and we're taking the "In
> hypervisor mode? Leave it to the #PF handler to fix up." path due to the
> confusion in context, and Xen's #PF handler is concluding "nothing else
> to do".
> 
> The older behaviour of escalating to the failsafe callback would have
> broken this cycle by rewriting %ss and re-entering the kernel.
> 
> 
> Please try the attached debugging patch, which is an extension of what I
> gave you yesterday. First, it ought to print %cr2, which I expect will
> point to Xen's virtual mapping of the vcpu's LDT. The logic ought to
> loop a few times so we can inspect the hypervisor codepaths which are
> effectively livelocked in this state, and I've also instrumented
> check_descriptor() failures because I've got a gut feeling that is the
> root cause of the problem.

here's the output:
(XEN) IRET fault: #PF[0000]
(XEN) %cr2 ffff820000010040                                                    
(XEN) IRET fault: #PF[0000]                                                    
(XEN) %cr2 ffff820000010040                                                 
(XEN) IRET fault: #PF[0000]
(XEN) %cr2 ffff820000010040
(XEN) IRET fault: #PF[0000]
(XEN) %cr2 ffff820000010040
(XEN) domain_crash called from extable.c:216
(XEN) Domain 0 (vcpu#0) crashed on cpu#0:
(XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:   C   ]----
(XEN) CPU:    0
(XEN) RIP:    0047:[<00007f7ff60007d0>]
(XEN) RFLAGS: 0000000000000202   EM: 0   CONTEXT: pv guest (d0v0)
(XEN) rax: ffff82d04038c309   rbx: 0000000000000000   rcx: 000000000000e008
(XEN) rdx: 0000000000010086   rsi: ffff83007fcb7f78   rdi: 000000000000e010
(XEN) rbp: 0000000000000000   rsp: 00007f7fff4876c0   r8:  0000000e00000000
(XEN) r9:  0000000000000000   r10: 0000000000000000   r11: 0000000000000000
(XEN) r12: 0000000000000000   r13: 0000000000000000   r14: 0000000000000000
(XEN) r15: 0000000000000000   cr0: 0000000080050033   cr4: 0000000000002660
(XEN) cr3: 0000000079cdb000   cr2: ffffa1000000a040
(XEN) fsb: 0000000000000000   gsb: 0000000000000000   gss: ffffffff80cf2dc0
(XEN) ds: 0023   es: 0023   fs: 0000   gs: 0000   ss: 003f   cs: 0047
(XEN) Guest stack trace from rsp=00007f7fff4876c0:
(XEN)    0000000000000001 00007f7fff487bd8 0000000000000000 0000000000000000
(XEN)    0000000000000003 00000000aee00040 0000000000000004 0000000000000038
(XEN)    0000000000000005 0000000000000008 0000000000000006 0000000000001000
(XEN)    0000000000000007 00007f7ff6000000 0000000000000008 0000000000000000
(XEN)    0000000000000009 00000000aee01cd0 00000000000007d0 0000000000000000
(XEN)    00000000000007d1 0000000000000000 00000000000007d2 0000000000000000
(XEN)    00000000000007d3 0000000000000000 000000000000000d 00007f7fff488000
(XEN)    00000000000007de 00007f7fff4877c0 0000000000000000 0000000000000000
(XEN)    6e692f6e6962732f 0000000000007469 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) Hardware Dom0 crashed: rebooting machine in 5 seconds.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 14:02:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 14:02:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48279.85346 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn038-0004fz-Kr; Wed, 09 Dec 2020 14:02:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48279.85346; Wed, 09 Dec 2020 14:02:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn038-0004fs-Ha; Wed, 09 Dec 2020 14:02:34 +0000
Received: by outflank-mailman (input) for mailman id 48279;
 Wed, 09 Dec 2020 14:02:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=S60M=FN=arm.com=mark.rutland@srs-us1.protection.inumbo.net>)
 id 1kn037-0004fn-OC
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 14:02:33 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id dadac90b-a437-4a8f-a18e-2490a31909fb;
 Wed, 09 Dec 2020 14:02:32 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id EAFD31FB;
 Wed,  9 Dec 2020 06:02:31 -0800 (PST)
Received: from C02TD0UTHF1T.local (unknown [10.57.26.40])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 067D13F66B;
 Wed,  9 Dec 2020 06:02:28 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dadac90b-a437-4a8f-a18e-2490a31909fb
Date: Wed, 9 Dec 2020 14:02:21 +0000
From: Mark Rutland <mark.rutland@arm.com>
To: Andy Lutomirski <luto@kernel.org>
Cc: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"VMware, Inc." <pv-drivers@vmware.com>, X86 ML <x86@kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Linux Virtualization <virtualization@lists.linux-foundation.org>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [PATCH v2 05/12] x86: rework arch_local_irq_restore() to not use
 popf
Message-ID: <20201209140221.GA9087@C02TD0UTHF1T.local>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-6-jgross@suse.com>
 <20201120115943.GD3021@hirez.programming.kicks-ass.net>
 <eb05e878-6334-8d19-496b-6572df67fc56@suse.com>
 <CALCETrXOGhXoOJpzhAMqD7iibi09WzbGk9SWVH7JzA=d5uarWA@mail.gmail.com>
 <20201209132710.GA8566@C02TD0UTHF1T.local>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201209132710.GA8566@C02TD0UTHF1T.local>

On Wed, Dec 09, 2020 at 01:27:10PM +0000, Mark Rutland wrote:
> On Sun, Nov 22, 2020 at 01:44:53PM -0800, Andy Lutomirski wrote:
> > On Sat, Nov 21, 2020 at 10:55 PM Jürgen Groß <jgross@suse.com> wrote:
> > > On 20.11.20 12:59, Peter Zijlstra wrote:
> > > > If someone were to write horrible code like:
> > > >
> > > >       local_irq_disable();
> > > >       local_irq_save(flags);
> > > >       local_irq_enable();
> > > >       local_irq_restore(flags);
> > > >
> > > > we'd be up some creek without a paddle... now I don't _think_ we have
> > > > genius code like that, but I'd feel safer if we can haz an assertion in
> > > > there somewhere...

> I was just talking to Peter on IRC about implementing the same thing for
> arm64, so could we put this in the generic irqflags code? IIUC we can
> use raw_irqs_disabled() to do the check.
> 
> As this isn't really entry specific (and IIUC the cases this should
> catch would break lockdep today), maybe we should add a new
> DEBUG_IRQFLAGS for this, that DEBUG_LOCKDEP can also select?
> 
> Something like:
> 
> #define local_irq_restore(flags)                               \
>        do {                                                    \
>                if (!raw_irqs_disabled_flags(flags)) {          \
>                        trace_hardirqs_on();                    \
>                } else if (IS_ENABLED(CONFIG_DEBUG_IRQFLAGS)) { \
>                        if (unlikely(raw_irqs_disabled()))      \

Whoops; that should be !raw_irqs_disabled().

>                                warn_bogus_irqrestore();        \
>                }                                               \
>                raw_local_irq_restore(flags);                   \
>         } while (0)
> 
> ... perhaps? (ignoring however we deal with once-ness).

If no-one shouts in the next day or two I'll spin this as its own patch.

Mark.


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 14:05:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 14:05:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48284.85358 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn06C-0004qw-43; Wed, 09 Dec 2020 14:05:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48284.85358; Wed, 09 Dec 2020 14:05:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn06C-0004qp-13; Wed, 09 Dec 2020 14:05:44 +0000
Received: by outflank-mailman (input) for mailman id 48284;
 Wed, 09 Dec 2020 14:05:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sDS6=FN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kn06B-0004qf-0V
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 14:05:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 74e3124d-8db3-46b9-aee1-d8f53be237dc;
 Wed, 09 Dec 2020 14:05:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D0203ACEB;
 Wed,  9 Dec 2020 14:05:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74e3124d-8db3-46b9-aee1-d8f53be237dc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607522741; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=uXgDSSgnRzBGDMdctLYaRQ+Qn2PprXJNdztVTKfo77c=;
	b=VBLFnGdFLTZejaes667CEyWZxD0WAgEJ51z5oj3X8hKrujSoQO3KE8O+t/PTAzZi9+s3+H
	s0Ej2Glu5ccJ8u9gsnr256lA8K5pMwj5Xsyx4FqRkAfCEtHb/xtCrk2T3NVppdMovVrLTa
	7NxlwD+oCqy1r458bCjBnqoIF2sArLY=
Subject: Re: [PATCH v2 05/12] x86: rework arch_local_irq_restore() to not use
 popf
To: Mark Rutland <mark.rutland@arm.com>, Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "VMware, Inc." <pv-drivers@vmware.com>, X86 ML <x86@kernel.org>,
 LKML <linux-kernel@vger.kernel.org>,
 Linux Virtualization <virtualization@lists.linux-foundation.org>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 "H. Peter Anvin" <hpa@zytor.com>, xen-devel
 <xen-devel@lists.xenproject.org>, Thomas Gleixner <tglx@linutronix.de>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-6-jgross@suse.com>
 <20201120115943.GD3021@hirez.programming.kicks-ass.net>
 <eb05e878-6334-8d19-496b-6572df67fc56@suse.com>
 <CALCETrXOGhXoOJpzhAMqD7iibi09WzbGk9SWVH7JzA=d5uarWA@mail.gmail.com>
 <20201209132710.GA8566@C02TD0UTHF1T.local>
 <20201209140221.GA9087@C02TD0UTHF1T.local>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <a37be173-6702-5523-8757-2b5a1b4ae311@suse.com>
Date: Wed, 9 Dec 2020 15:05:39 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201209140221.GA9087@C02TD0UTHF1T.local>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="SbHnCuqvhr456hxBGyZfmnROM8i0IqhVV"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--SbHnCuqvhr456hxBGyZfmnROM8i0IqhVV
Content-Type: multipart/mixed; boundary="3SWBQ2CGRPlIb6qugl6p4NZVoXATi0Vjq";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Mark Rutland <mark.rutland@arm.com>, Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "VMware, Inc." <pv-drivers@vmware.com>, X86 ML <x86@kernel.org>,
 LKML <linux-kernel@vger.kernel.org>,
 Linux Virtualization <virtualization@lists.linux-foundation.org>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 "H. Peter Anvin" <hpa@zytor.com>, xen-devel
 <xen-devel@lists.xenproject.org>, Thomas Gleixner <tglx@linutronix.de>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <a37be173-6702-5523-8757-2b5a1b4ae311@suse.com>
Subject: Re: [PATCH v2 05/12] x86: rework arch_local_irq_restore() to not use
 popf
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-6-jgross@suse.com>
 <20201120115943.GD3021@hirez.programming.kicks-ass.net>
 <eb05e878-6334-8d19-496b-6572df67fc56@suse.com>
 <CALCETrXOGhXoOJpzhAMqD7iibi09WzbGk9SWVH7JzA=d5uarWA@mail.gmail.com>
 <20201209132710.GA8566@C02TD0UTHF1T.local>
 <20201209140221.GA9087@C02TD0UTHF1T.local>
In-Reply-To: <20201209140221.GA9087@C02TD0UTHF1T.local>

--3SWBQ2CGRPlIb6qugl6p4NZVoXATi0Vjq
Content-Type: multipart/mixed;
 boundary="------------11CE66C562C189ECDCC48235"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------11CE66C562C189ECDCC48235
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 09.12.20 15:02, Mark Rutland wrote:
> On Wed, Dec 09, 2020 at 01:27:10PM +0000, Mark Rutland wrote:
>> On Sun, Nov 22, 2020 at 01:44:53PM -0800, Andy Lutomirski wrote:
>>> On Sat, Nov 21, 2020 at 10:55 PM Jürgen Groß <jgross@suse.com> wrote:
>>>> On 20.11.20 12:59, Peter Zijlstra wrote:
>>>>> If someone were to write horrible code like:
>>>>>
>>>>>        local_irq_disable();
>>>>>        local_irq_save(flags);
>>>>>        local_irq_enable();
>>>>>        local_irq_restore(flags);
>>>>>
>>>>> we'd be up some creek without a paddle... now I don't _think_ we have
>>>>> genius code like that, but I'd feel safer if we can haz an assertion in
>>>>> there somewhere...
>
>> I was just talking to Peter on IRC about implementing the same thing for
>> arm64, so could we put this in the generic irqflags code? IIUC we can
>> use raw_irqs_disabled() to do the check.
>>
>> As this isn't really entry specific (and IIUC the cases this should
>> catch would break lockdep today), maybe we should add a new
>> DEBUG_IRQFLAGS for this, that DEBUG_LOCKDEP can also select?
>>
>> Something like:
>>
>> #define local_irq_restore(flags)                               \
>>         do {                                                    \
>>                 if (!raw_irqs_disabled_flags(flags)) {          \
>>                         trace_hardirqs_on();                    \
>>                 } else if (IS_ENABLED(CONFIG_DEBUG_IRQFLAGS)) { \
>>                         if (unlikely(raw_irqs_disabled()))      \
>
> Whoops; that should be !raw_irqs_disabled().
>
>>                                 warn_bogus_irqrestore();        \
>>                 }                                               \
>>                 raw_local_irq_restore(flags);                   \
>>          } while (0)
>>
>> ... perhaps? (ignoring however we deal with once-ness).
>
> If no-one shouts in the next day or two I'll spin this as its own patch.

Fine with me. So I'll just ignore a potential error case in my patch.

Thanks,


Juergen


--------------11CE66C562C189ECDCC48235
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=
=2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------11CE66C562C189ECDCC48235--

--3SWBQ2CGRPlIb6qugl6p4NZVoXATi0Vjq--

--SbHnCuqvhr456hxBGyZfmnROM8i0IqhVV
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/Q2bMFAwAAAAAACgkQsN6d1ii/Ey+M
Sgf/YvUCb1mTnblXUk7krczQ1ATvqpRzUPkRKWI/gbHwiba2E6IgZu9qI9rnOGU2XftlrlseQrYY
TgBQ6ElRBxmD7UwTQHYEo8sGuk4aRzlqWFZLiOQhYPhqFc8uv3eWlZWVcplQH9mer1rAZS4Z0tCR
Rbh4HJa+Qmfhh0n0L+kBDWzNfll7LrjnV21sZLSSzSDLsDxObqi4yRoM+Mr/0wLovtHHb3oVIE3y
q87ZQjcZeCZZYcwIYLQrhpIPtU5b7rrYB5kLEyKHus+S+B+PciDn7QYEvbfIddjOmaIMB8VWIIC4
Ir1NaFfvEJlgYdw15YRsTmU/yqkiTZ3/cp4vlaFqKw==
=ZFoe
-----END PGP SIGNATURE-----

--SbHnCuqvhr456hxBGyZfmnROM8i0IqhVV--


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 14:17:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 14:17:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48294.85370 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0H2-0005uf-9N; Wed, 09 Dec 2020 14:16:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48294.85370; Wed, 09 Dec 2020 14:16:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0H2-0005uY-50; Wed, 09 Dec 2020 14:16:56 +0000
Received: by outflank-mailman (input) for mailman id 48294;
 Wed, 09 Dec 2020 14:16:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiOm=FN=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kn0H0-0005uT-1v
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 14:16:54 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.1.76]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 74b127dc-c732-47f7-ab2d-0b8d16de4052;
 Wed, 09 Dec 2020 14:16:52 +0000 (UTC)
Received: from AM6PR02CA0006.eurprd02.prod.outlook.com (2603:10a6:20b:6e::19)
 by PR3PR08MB5787.eurprd08.prod.outlook.com (2603:10a6:102:90::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.20; Wed, 9 Dec
 2020 14:16:50 +0000
Received: from VE1EUR03FT055.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:6e:cafe::72) by AM6PR02CA0006.outlook.office365.com
 (2603:10a6:20b:6e::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.12 via Frontend
 Transport; Wed, 9 Dec 2020 14:16:50 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT055.mail.protection.outlook.com (10.152.19.158) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12 via Frontend Transport; Wed, 9 Dec 2020 14:16:49 +0000
Received: ("Tessian outbound 6af064f543d4:v71");
 Wed, 09 Dec 2020 14:16:49 +0000
Received: from a467417626a3.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 003A9122-9980-4C96-B08C-07980B00DBEA.1; 
 Wed, 09 Dec 2020 14:16:11 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a467417626a3.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 09 Dec 2020 14:16:11 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR08MB2693.eurprd08.prod.outlook.com (2603:10a6:6:1c::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.22; Wed, 9 Dec
 2020 14:16:10 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3632.023; Wed, 9 Dec 2020
 14:16:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74b127dc-c732-47f7-ab2d-0b8d16de4052
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4izZKCVsdEjKthzaBsRUktlaPI4UQdr3VkDWbU0Rydo=;
 b=QRXrn0r6V7y4CLWybjeLHq1aDDFIMa4GwLdTbEgKYlJF04iP68ssLFsya3gbd69hCIEi4n+ZvrvUr2AzgQAAZK16JuhcN3UKN9RsKCWTWobyOyIl3qVgO8L2IoiMycXEULKMC/nWBiCav9CR/jFWK/ts6CqpZQU3uMER5HBMzKQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 024de50e573ef70f
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=M4X8bGnk352HdGnDorp2q1THtw/2QZhA+EjqkhoSOPVuS9qKfMMb2xgFxbo3Cv7cDGBbpLxdCDvCY6C9dt4sjVLq0ACogt3B2sldtIBMcbiNoVDlB5yBa3wXwUb6GZC5OC6IcHZ/0f7EVeLw9iuXIU7jOCcgQM2VDl+/39q4eFm2LbMhK7sHO8xSMsmPr+NxM5ofbR+E+JLBeajbPd9BvpsNk//WuOjTnjBeWYLK9lx4+9GglLwaP/r8R+rFx0rodkr605voG9INNYvHlmbrNwRG/lnYWlhZxTpDbLlwKOt0olCEfXjQuicfITCgbENt7eS9zD4iHVggD/OlQlbAdQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4izZKCVsdEjKthzaBsRUktlaPI4UQdr3VkDWbU0Rydo=;
 b=ByRD55oh6kVikCg/PIRPl6vHc+X7R2KkTB/lCWqZjmhkVob2YezzKZ01XhF1weTQJ7GjfNaniIUFQQprVtkK/oS6ykxXOCKMRVByJXxPcw0NVWNx97goF7NFRHLg3CwUzifdwpxNHnUZsrA3EJl8JsOXCKvwCeKx41JrCqXpeev/S8FZAhDj3tbM0vJIeZpSxlxkV8GzaUvAqUnSoELKCbagR/5Chgjv9CyG8P9MEvTbXQVDeFcc7lYU+tK6UQCy3L+SnnbFXpA2z8X7F/g1OHBOg5JVz2GBC2No1cUF8sKz/ru8CsPRK0Lm3TFapswxaYZbKG7+R/pC+ezjXCbYIg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4izZKCVsdEjKthzaBsRUktlaPI4UQdr3VkDWbU0Rydo=;
 b=QRXrn0r6V7y4CLWybjeLHq1aDDFIMa4GwLdTbEgKYlJF04iP68ssLFsya3gbd69hCIEi4n+ZvrvUr2AzgQAAZK16JuhcN3UKN9RsKCWTWobyOyIl3qVgO8L2IoiMycXEULKMC/nWBiCav9CR/jFWK/ts6CqpZQU3uMER5HBMzKQ=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [PATCH v3 5/8] lib: move init_constructors()
Thread-Topic: [PATCH v3 5/8] lib: move init_constructors()
Thread-Index: AQHWwayh7DilyZtY2UyuPRFMdOVH76nu6FSA
Date: Wed, 9 Dec 2020 14:16:10 +0000
Message-ID: <469DDA0B-66C9-4F57-B647-E2CD7AFE84F1@arm.com>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
 <c67ca263-8a82-d0c8-e6e1-6afdeeb9df8c@suse.com>
In-Reply-To: <c67ca263-8a82-d0c8-e6e1-6afdeeb9df8c@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 76571e28-02e9-47dc-37a2-08d89c4d11f2
x-ms-traffictypediagnostic: DB6PR08MB2693:|PR3PR08MB5787:
X-Microsoft-Antispam-PRVS:
	<PR3PR08MB57878420FA9072A62AC0D26E9DCC0@PR3PR08MB5787.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:4125;OLM:4125;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 E04s2a+IjXhvnuxL13jZnFduTDex1vilmfapdxPrb5qpuhq4BM+QigOZ+JAsOgyznUhNnLfibr+2NaC4B5+D3y/eQkAOVCkQYe/VdOKl4GfuOPu2ebmiWHV9L9s4p+aDsQTXoai0sYUTUIdXuckoEAOwKDHp9f3IcS0JWVMe2d9TXZzkzNCf/Af/tK44KE3SF+pDNNQarB2rtusKPHrD3HgyaEFFUlvSuFIw4PQsp8qa/tEjAOdfx4AFJIB/HvdKv71uUJke4iestcmTgD0qkhOCbwb21xzs5psBHyDPxDmhq70cf2xrud/47J5dS6uLRu1JtkvVHAI22J7n4LNQAwl/98BfIbeUev+vja7ocOpQP4JCT5RZs1cmICa55Gxa
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(376002)(366004)(136003)(53546011)(26005)(64756008)(508600001)(8936002)(6916009)(8676002)(2616005)(36756003)(66946007)(6486002)(6506007)(66476007)(186003)(54906003)(2906002)(5660300002)(6512007)(83380400001)(66556008)(76116006)(66446008)(71200400001)(86362001)(91956017)(4326008)(33656002)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?us-ascii?Q?HVpEoiNr/pIlqOXKGPWpgldWKnxGea9X+S3Uoln5QAbOW9ib21GT9Q7GA/3K?=
 =?us-ascii?Q?TbgkHs70Ziy6YpNRXhhw/375R6pcRH3hJAldxnsIXeHoSmIWxQ/BuFZGfFQb?=
 =?us-ascii?Q?q2NirO3cIwRbdRSTaBEIuITvUsbiF9iT8xtKk1lHWGYsigY7fRm1cHGUGKb0?=
 =?us-ascii?Q?RJ6vAdX8CmM+a91Ky41XhI2Qg57pjddMc8JrJXGlnPX/WaUzyD9vQpMhWZmB?=
 =?us-ascii?Q?QKMUZ5Rt3NEW1ErzB3P+dUvw1+jaVhgz0hJVbWt2G7yfOzlYwRbZZm2sggEo?=
 =?us-ascii?Q?Q3UcQh+Ankty9mKNhCRyDM95qBmhMggheDsuH6hGKYb8Tqx4+utpPtwmxK23?=
 =?us-ascii?Q?UpN9ZvojoKRMB2KdTHkASwIJDpeSTDBb95cEl6TTYv8fgTbhFy52tknf0uYz?=
 =?us-ascii?Q?syKCo3mWBgrUA4lFuG5DnMEyqu2tibkbnThsc2uDYZvvqLG5z83Nav4TsGSY?=
 =?us-ascii?Q?CYk8j2yxqU5BhGN4RjMZf5enWcx08b8tmki925v0oyNnEDa8Al8MxA/V7QvA?=
 =?us-ascii?Q?L24qIYgJnG983mK2y+CkgZq7PYgJ68erTGsEgGUy26EiphK9gGN6jDG165Lj?=
 =?us-ascii?Q?twYdrKBZK/9LqPnnGEhMn2JMTP0I2420jXVMNk/m+5V5j8B09jqX7AU5Ro1o?=
 =?us-ascii?Q?kU8TZwHCQnTjdoFDyN9oDOCSLxMLrQsDzl2QeCC4L2CZ41g2W6yjAZx0p60Z?=
 =?us-ascii?Q?IfRK4ECLbUXHvRip+uFR6R3hx9eEuCX9jgu51GXkOvVAdu4SeDNVpQB23yOO?=
 =?us-ascii?Q?FA2Cbp2kVRvZ6qISU9wsnNVvGOvi1Wv4KjfH39qa3z/ogDWn1/rlWl3wupzF?=
 =?us-ascii?Q?yGb/f1ISjz0YEnYcLAyYXHeVTsqqWPjW0phj1QJS3VcHDKmAZbWNCdZZF14Z?=
 =?us-ascii?Q?TaD0MBBnpAXxQ3TsQah5YmlKqs+7y51SmNony9KJLLoCZ4LmUC2do2LGg/UI?=
 =?us-ascii?Q?yR3kPAjzdLyxH+T9OXjk+d1zxwsQ4Yg+POmA3wjYE4i2ACeG/gmQsH28J8u2?=
 =?us-ascii?Q?b/BC?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <D1FDED7345616542B49D668D7772A5D1@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR08MB2693
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT055.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f43d9cb6-c540-4658-e12d-08d89c4cfa36
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	g4oBPbQ1JqwDa9wYFRkahRaX0sgPXck2q8TVvqtHpvQE2OQ01s6+JHLMsXhtYW2r0E0B0ZGqM7AtNl2yjSAYIZK6G3znnYF9oyE9ngAB+Mdm+DzjCYzy5Qr4burLKh9txd/HYa1cfsimJtGwUOehGnjFvooLBVCnewTx/iuf6AhWEGnClRbY7zdZ+6lOEDmQgLfD91s+3q8DYacL1CV8yQfql8Sgl2K9gRQP5M+Bd7/gbJZCkMRCUp5r9uOQmyA5/67kzp22jqYPbnr6y5sJU3FOKj7fRm+D3qQjiKyJJdQdjQ1VoQ5YpkVoiSTucAJV3tHy6E1fdFLgZK1pMpeWVxrLEoIUHE3T8CfHwWEfsnSTWQS9NkceM4QECuiTDdRHZAByFBvUjzMKyYmyZuQloA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(346002)(376002)(46966005)(6506007)(4326008)(356005)(33656002)(2906002)(107886003)(70206006)(83380400001)(8676002)(336012)(86362001)(2616005)(508600001)(53546011)(81166007)(36756003)(5660300002)(26005)(70586007)(6486002)(186003)(6862004)(54906003)(6512007)(47076004)(8936002)(82310400003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Dec 2020 14:16:49.9127
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 76571e28-02e9-47dc-37a2-08d89c4d11f2
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT055.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR08MB5787

Hi Jan,


> On 23 Nov 2020, at 15:22, Jan Beulich <jbeulich@suse.com> wrote:
>
> ... into its own CU, for being unrelated to other things in
> common/lib.c.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> xen/common/lib.c | 14 --------------
> xen/lib/Makefile |  1 +
> xen/lib/ctors.c  | 25 +++++++++++++++++++++++++
> 3 files changed, 26 insertions(+), 14 deletions(-)
> create mode 100644 xen/lib/ctors.c
>
> diff --git a/xen/common/lib.c b/xen/common/lib.c
> index 6cfa332142a5..f5ca179a0af4 100644
> --- a/xen/common/lib.c
> +++ b/xen/common/lib.c
> @@ -1,6 +1,5 @@
> #include <xen/lib.h>
> #include <xen/types.h>
> -#include <xen/init.h>
> #include <asm/byteorder.h>
>
> /*
> @@ -423,19 +422,6 @@ uint64_t muldiv64(uint64_t a, uint32_t b, uint32_t c)
> #endif
> }
>
> -typedef void (*ctor_func_t)(void);
> -extern const ctor_func_t __ctors_start[], __ctors_end[];
> -
> -void __init init_constructors(void)
> -{
> -    const ctor_func_t *f;
> -    for ( f = __ctors_start; f < __ctors_end; ++f )
> -        (*f)();
> -
> -    /* Putting this here seems as good (or bad) as any other place. */
> -    BUILD_BUG_ON(sizeof(size_t) != sizeof(ssize_t));
> -}
> -
> /*
>  * Local variables:
>  * mode: C
> diff --git a/xen/lib/Makefile b/xen/lib/Makefile
> index 99f857540c99..72c72fffecf2 100644
> --- a/xen/lib/Makefile
> +++ b/xen/lib/Makefile
> @@ -1,5 +1,6 @@
> obj-$(CONFIG_X86) += x86/
>
> +lib-y += ctors.o
> lib-y += ctype.o
> lib-y += list-sort.o
> lib-y += parse-size.o
> diff --git a/xen/lib/ctors.c b/xen/lib/ctors.c
> new file mode 100644
> index 000000000000..5bdc591cd50a
> --- /dev/null
> +++ b/xen/lib/ctors.c
> @@ -0,0 +1,25 @@
> +#include <xen/init.h>
> +#include <xen/lib.h>
> +
> +typedef void (*ctor_func_t)(void);
> +extern const ctor_func_t __ctors_start[], __ctors_end[];
> +
> +void __init init_constructors(void)
> +{
> +    const ctor_func_t *f;
> +    for ( f = __ctors_start; f < __ctors_end; ++f )
> +        (*f)();
> +
> +    /* Putting this here seems as good (or bad) as any other place. */
> +    BUILD_BUG_ON(sizeof(size_t) != sizeof(ssize_t));
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
>
>



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 14:18:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 14:18:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48298.85382 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0Iz-00064Z-Qj; Wed, 09 Dec 2020 14:18:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48298.85382; Wed, 09 Dec 2020 14:18:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0Iz-00064S-N4; Wed, 09 Dec 2020 14:18:57 +0000
Received: by outflank-mailman (input) for mailman id 48298;
 Wed, 09 Dec 2020 14:18:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiOm=FN=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kn0Iy-00064N-Jm
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 14:18:56 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.81]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9b5b1283-8401-4453-95af-9b8dc914a2b7;
 Wed, 09 Dec 2020 14:18:55 +0000 (UTC)
Received: from AM0PR05CA0086.eurprd05.prod.outlook.com (2603:10a6:208:136::26)
 by AM5PR0802MB2532.eurprd08.prod.outlook.com (2603:10a6:203:a1::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.17; Wed, 9 Dec
 2020 14:18:53 +0000
Received: from AM5EUR03FT009.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:208:136:cafe::69) by AM0PR05CA0086.outlook.office365.com
 (2603:10a6:208:136::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.12 via Frontend
 Transport; Wed, 9 Dec 2020 14:18:53 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT009.mail.protection.outlook.com (10.152.16.110) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12 via Frontend Transport; Wed, 9 Dec 2020 14:18:52 +0000
Received: ("Tessian outbound 76bd5a04122f:v71");
 Wed, 09 Dec 2020 14:18:52 +0000
Received: from 6e4837a21de5.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 6C5C3EC0-785F-443B-B2A4-C6DFF1BBE2CF.1; 
 Wed, 09 Dec 2020 14:18:45 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 6e4837a21de5.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 09 Dec 2020 14:18:45 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR08MB2693.eurprd08.prod.outlook.com (2603:10a6:6:1c::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.22; Wed, 9 Dec
 2020 14:18:44 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3632.023; Wed, 9 Dec 2020
 14:18:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9b5b1283-8401-4453-95af-9b8dc914a2b7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=X767IjoWUE1hrzjhIkzCnpM5+Falf9ZjrrY8OUWElO8=;
 b=rgrbS6ui6CJtA+F3bzrJoyy3A0+o9UgcqjCztdmlYm2cAPgWDaf4bIhwj/MMAKdFjVnPUCE4LQXfCoKtvu8zrisRb49H/E5YzO2MYk8Xvq5l6bTcqaFgQQXk5XGw6IsuIM6KxiewtR+15kADPk20LEvJmrPcLZ4k482vBxIgAHw=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: d231f7406e8d55da
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=A3x7stLqIkuDWhYGIqOV6Yk3PRgS4dk66S7xklGHnqvDlh5qlB5iwV8lyL+j5iNC3vHsUBISRyMdF3dOTMyEJoUxYllWpXmp5AEK9ZE7jI9eC2Qul0IAZ/FQK4lh931tUHwxxgS5VIDFLYsckaVmKc+lgbUPfB7hdgpSj46oWXVk6mIN/1lhyV5DIObVsDZhuR012HquRKjS7C7qryNVKvsS3OvXuqeKHI/epZA3jQq08ZdEZzZHLdbQS5R3gHLirIcc453t2uQVSIX15NGxa4XL8JicCd/vw65+sxiyA2uZpr5Sanl27KT/vr/sLBT/LdhMZlh4/hCiIEv3KaUlRQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=X767IjoWUE1hrzjhIkzCnpM5+Falf9ZjrrY8OUWElO8=;
 b=JdVaQ8YCnflHLlmWbRK+mv1J2leV+F89c9sX3CoLeY7lX9TLm4fS/M/VHzl6GT4F2T/nHoHw/Nt5Jom1fFdBoluW7UEa+OsYhFVIjNMzm+SdxmiQyT0rr1QgqnHeb5iqEqOj5Ouq1+N12OlB9ws7ys8NSFL6AgNeoA01Balx/tuibxpVPUbD2TlBosjLVS4SfffjahnwaiFKSLz7Qqc21NShyza8PSjXW3bPa7nFAOFtdTA5h6qUfEvZmoy7Y4+uNRM58OG3odUs6zRxvb4RflHSrqiOpTFONvc+feMrOw5JtpBNmdC52TaqZ6agiqB+vDIoSih7/fPmk0PmvUiDvA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=X767IjoWUE1hrzjhIkzCnpM5+Falf9ZjrrY8OUWElO8=;
 b=rgrbS6ui6CJtA+F3bzrJoyy3A0+o9UgcqjCztdmlYm2cAPgWDaf4bIhwj/MMAKdFjVnPUCE4LQXfCoKtvu8zrisRb49H/E5YzO2MYk8Xvq5l6bTcqaFgQQXk5XGw6IsuIM6KxiewtR+15kADPk20LEvJmrPcLZ4k482vBxIgAHw=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [PATCH v3 6/8] lib: move rbtree code
Thread-Topic: [PATCH v3 6/8] lib: move rbtree code
Thread-Index: AQHWway2iTIGxXspY0WlIcuHoV7Fs6nu6Q0A
Date: Wed, 9 Dec 2020 14:18:44 +0000
Message-ID: <F5092007-8C0F-4D0A-AA61-B07300886980@arm.com>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
 <749adfdd-70d6-c653-7fcf-dad13fd8463f@suse.com>
In-Reply-To: <749adfdd-70d6-c653-7fcf-dad13fd8463f@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
Content-Type: text/plain; charset="us-ascii"
Content-ID: <927EFA7C2B6F6C45BC2BC417891CFAA8@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Hi,

> On 23 Nov 2020, at 15:23, Jan Beulich <jbeulich@suse.com> wrote:
> 
> Build this code into an archive, which results in not linking it into
> x86 final binaries. This saves about 1.5k of dead code.
> 
> While moving the source file, take the opportunity and drop the
> pointless EXPORT_SYMBOL() and an instance of trailing whitespace.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> xen/common/Makefile          | 1 -
> xen/lib/Makefile             | 1 +
> xen/{common => lib}/rbtree.c | 9 +--------
> 3 files changed, 2 insertions(+), 9 deletions(-)
> rename xen/{common => lib}/rbtree.c (98%)
> 
> diff --git a/xen/common/Makefile b/xen/common/Makefile
> index 332e7d667cec..d65c9fe9cb4e 100644
> --- a/xen/common/Makefile
> +++ b/xen/common/Makefile
> @@ -33,7 +33,6 @@ obj-y += preempt.o
> obj-y += random.o
> obj-y += rangeset.o
> obj-y += radix-tree.o
> -obj-y += rbtree.o
> obj-y += rcupdate.o
> obj-y += rwlock.o
> obj-y += shutdown.o
> diff --git a/xen/lib/Makefile b/xen/lib/Makefile
> index 72c72fffecf2..b0fe8c72acf5 100644
> --- a/xen/lib/Makefile
> +++ b/xen/lib/Makefile
> @@ -4,3 +4,4 @@ lib-y += ctors.o
> lib-y += ctype.o
> lib-y += list-sort.o
> lib-y += parse-size.o
> +lib-y += rbtree.o
> diff --git a/xen/common/rbtree.c b/xen/lib/rbtree.c
> similarity index 98%
> rename from xen/common/rbtree.c
> rename to xen/lib/rbtree.c
> index 9f5498a89d4e..95e045d52461 100644
> --- a/xen/common/rbtree.c
> +++ b/xen/lib/rbtree.c
> @@ -25,7 +25,7 @@
> #include <xen/rbtree.h>
> 
> /*
> - * red-black trees properties:  http://en.wikipedia.org/wiki/Rbtree 
> + * red-black trees properties:  http://en.wikipedia.org/wiki/Rbtree
>  *
>  *  1) A node is either red or black
>  *  2) The root is black
> @@ -223,7 +223,6 @@ void rb_insert_color(struct rb_node *node, struct rb_root *root)
> 		}
> 	}
> }
> -EXPORT_SYMBOL(rb_insert_color);
> 
> static void __rb_erase_color(struct rb_node *parent, struct rb_root *root)
> {
> @@ -467,7 +466,6 @@ void rb_erase(struct rb_node *node, struct rb_root *root)
> 	if (rebalance)
> 		__rb_erase_color(rebalance, root);
> }
> -EXPORT_SYMBOL(rb_erase);
> 
> /*
>  * This function returns the first node (in sort order) of the tree.
> @@ -483,7 +481,6 @@ struct rb_node *rb_first(const struct rb_root *root)
> 		n = n->rb_left;
> 	return n;
> }
> -EXPORT_SYMBOL(rb_first);
> 
> struct rb_node *rb_last(const struct rb_root *root)
> {
> @@ -496,7 +493,6 @@ struct rb_node *rb_last(const struct rb_root *root)
> 		n = n->rb_right;
> 	return n;
> }
> -EXPORT_SYMBOL(rb_last);
> 
> struct rb_node *rb_next(const struct rb_node *node)
> {
> @@ -528,7 +524,6 @@ struct rb_node *rb_next(const struct rb_node *node)
> 
> 	return parent;
> }
> -EXPORT_SYMBOL(rb_next);
> 
> struct rb_node *rb_prev(const struct rb_node *node)
> {
> @@ -557,7 +552,6 @@ struct rb_node *rb_prev(const struct rb_node *node)
> 
> 	return parent;
> }
> -EXPORT_SYMBOL(rb_prev);
> 
> void rb_replace_node(struct rb_node *victim, struct rb_node *new,
> 		     struct rb_root *root)
> @@ -574,4 +568,3 @@ void rb_replace_node(struct rb_node *victim, struct rb_node *new,
> 	/* Copy the pointers/colour from the victim to the replacement */
> 	*new = *victim;
> }
> -EXPORT_SYMBOL(rb_replace_node);
> 
> 



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 14:24:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 14:24:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48305.85394 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0OZ-00072X-Ea; Wed, 09 Dec 2020 14:24:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48305.85394; Wed, 09 Dec 2020 14:24:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0OZ-00072Q-BL; Wed, 09 Dec 2020 14:24:43 +0000
Received: by outflank-mailman (input) for mailman id 48305;
 Wed, 09 Dec 2020 14:24:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uDNN=FN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kn0OY-00072K-80
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 14:24:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1a6fd15d-60c9-4eda-95ee-468e2be972c3;
 Wed, 09 Dec 2020 14:24:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7BF28ACEB;
 Wed,  9 Dec 2020 14:24:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1a6fd15d-60c9-4eda-95ee-468e2be972c3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH v3 1/5] evtchn: drop acquiring of per-channel lock from
 send_guest_{global,vcpu}_virq()
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d709a9c3-dbe2-65c6-2c2f-6a12f486335d@suse.com>
 <70170293-a9a7-282a-dde6-7ed73fc2da48@xen.org>
 <c15b1e7e-ed9c-b597-2fc1-b8cf89999c55@suse.com>
 <f14d147b-b218-a2ab-0b9e-06ece58d58e4@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <606e6b73-6b4c-1044-0ee5-2887f1423448@suse.com>
Date: Wed, 9 Dec 2020 15:24:40 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <f14d147b-b218-a2ab-0b9e-06ece58d58e4@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.12.2020 10:53, Julien Grall wrote:
> On 03/12/2020 09:46, Jan Beulich wrote:
>> On 02.12.2020 20:03, Julien Grall wrote:
>>> On 23/11/2020 13:28, Jan Beulich wrote:
>>>> The per-vCPU virq_lock, which is being held anyway, together with there
>>>> not being any call to evtchn_port_set_pending() when v->virq_to_evtchn[]
>>>> is zero, provide sufficient guarantees.
>>>
>>> I agree that the per-vCPU virq_lock is going to be sufficient, however
>>> dropping the lock also means the event channel locking is more complex
>>> to understand (the long comment that was added proves it).
>>>
>>> In fact, the locking in the event channel code was already proven to be
>>> quite fragile, therefore I think this patch is not worth the risk.
>>
>> I agree this is a very reasonable position to take. I probably
>> would even have remained silent if in the meantime the
>> spin_lock()s there hadn't changed to read_trylock()s. I really
>> think we want to limit this unusual locking model to where we
>> strictly need it.
> 
> While I appreciate that the current locking is unusual, we should follow
> the same model everywhere rather than having a dozen ways to lock the
> same structure.
> 
> The rationale is quite simple: if you have only one way to lock a structure,
> there is less chance of screwing up.

If only all of this were consistent prior to this change. It's not, and
hence I don't see how things get so much more unusual than they were
before. In fact one "unusual" (the trylock) gets traded for another
one (the specific lock protecting the sending of VIRQ events).

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 14:28:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 14:28:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48311.85406 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0Rc-0007CE-UY; Wed, 09 Dec 2020 14:27:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48311.85406; Wed, 09 Dec 2020 14:27:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0Rc-0007C7-Qz; Wed, 09 Dec 2020 14:27:52 +0000
Received: by outflank-mailman (input) for mailman id 48311;
 Wed, 09 Dec 2020 14:27:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiOm=FN=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kn0Ra-0007C2-VU
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 14:27:51 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7e1a::61d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ec865b8e-0dee-4809-aac4-549f3f01008b;
 Wed, 09 Dec 2020 14:27:49 +0000 (UTC)
Received: from DB6PR1001CA0047.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:4:55::33)
 by VI1PR0802MB2301.eurprd08.prod.outlook.com (2603:10a6:800:a0::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.19; Wed, 9 Dec
 2020 14:27:46 +0000
Received: from DB5EUR03FT035.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:55:cafe::5) by DB6PR1001CA0047.outlook.office365.com
 (2603:10a6:4:55::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.12 via Frontend
 Transport; Wed, 9 Dec 2020 14:27:46 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT035.mail.protection.outlook.com (10.152.20.65) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12 via Frontend Transport; Wed, 9 Dec 2020 14:27:46 +0000
Received: ("Tessian outbound 6af064f543d4:v71");
 Wed, 09 Dec 2020 14:27:46 +0000
Received: from d1247c0b2995.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 0B0B855F-8EAF-472C-84C7-0F90CD0CB9C0.1; 
 Wed, 09 Dec 2020 14:27:13 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id d1247c0b2995.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 09 Dec 2020 14:27:13 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBAPR08MB5752.eurprd08.prod.outlook.com (2603:10a6:10:1ac::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.13; Wed, 9 Dec
 2020 14:27:13 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3632.023; Wed, 9 Dec 2020
 14:27:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ec865b8e-0dee-4809-aac4-549f3f01008b
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Jan Beulich <jbeulich@suse.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 7/8] lib: move bsearch code
Thread-Topic: [PATCH v2 7/8] lib: move bsearch code
Thread-Index:
 AQHWqSYUW6jKcf9JAUabryyJK6872KnOWaMAgAERSYCABxiQgIAAHyEAgAEQxgCAFABlAIADGMmAgABP5wA=
Date: Wed, 9 Dec 2020 14:27:12 +0000
Message-ID: <689373AE-AF16-429F-818C-0467485E5748@arm.com>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
 <87a20884-5a76-a664-dcc9-bd4becee40b3@suse.com>
 <44ffc041-cacd-468e-a835-f5b2048bb201@xen.org>
 <2cf3a90d-f463-41f8-f861-6ef00279b204@suse.com>
 <2419eccf-c696-6aa1-ada4-0f7bd6bc5657@xen.org>
 <77534dc3-bdd6-f884-99e3-90dc9b02a81f@citrix.com>
 <59a4e1c1-ea39-1846-92ae-92560db4b1fb@xen.org>
 <e0782c3b-9958-3792-eab9-d3fd6708225f@suse.com>
 <5fc44865-2115-947c-bd22-b51d7f17d39c@xen.org>
In-Reply-To: <5fc44865-2115-947c-bd22-b51d7f17d39c@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
Content-Type: text/plain; charset="us-ascii"
Content-ID: <D26CFC3D25E4A4479F233923E1A6FD47@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Hi Jan,

> On 9 Dec 2020, at 09:41, Julien Grall <julien@xen.org> wrote:
> 
> Hi Jan,
> 
> On 07/12/2020 10:23, Jan Beulich wrote:
>> On 24.11.2020 17:57, Julien Grall wrote:
>>> On 24/11/2020 00:40, Andrew Cooper wrote:
>>>> On a totally separate point, I wonder if we'd be better off compiling
>>>> with -fgnu89-inline because I can't see any case where we'd want the C99
>>>> inline semantics anywhere in Xen.
>>> 
>>> This was one of my points above. It feels that if we want to use the
>>> behavior in Xen, then it should be everywhere rather than just this helper.
>> I'll be committing the series up to patch 6 in a minute. It remains
>> unclear to me whether your responses on this sub-thread are meant
>> to be an objection, or just a comment. Andrew gave his R-b despite
>> this separate consideration, and I now also have an ack from Wei
>> for the entire series. Please clarify.
> 
> It still feels strange to apply it to one function and not the others... But
> I don't have a strong objection against the idea of using C99 inlines in Xen.
> 
> IOW, I will neither Ack nor NAck this patch.

I think like Julien here: why do this inline thing for this function and not
the others provided by the library?
Could you explain the reasons for this or the use cases you expect?

I see two usages of bsearch in Arm code and I do not get why you are doing
this for bsearch and not for the other functions.

Regards
Bertrand

> 
>> Or actually I only thought I could commit a fair initial part of
>> the series - I'm still lacking Arm-side acks for patches 2 and 3
>> here.
> 
> If you remember, I have asked if Anthony could review the build system
> because he worked on it recently.
> 
> Unfortunately, I haven't seen any answer so far... I have pinged him on IRC.
> 
> Cheers,
> 
> -- 
> Julien Grall
> 



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 14:28:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 14:28:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48313.85417 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0Rx-0007IP-AK; Wed, 09 Dec 2020 14:28:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48313.85417; Wed, 09 Dec 2020 14:28:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0Rx-0007II-7H; Wed, 09 Dec 2020 14:28:13 +0000
Received: by outflank-mailman (input) for mailman id 48313;
 Wed, 09 Dec 2020 14:28:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiOm=FN=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kn0Rw-0007I3-0m
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 14:28:12 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.56]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 35d42f26-b0e0-49a7-adb3-acbb924c1e8d;
 Wed, 09 Dec 2020 14:28:10 +0000 (UTC)
Received: from MR2P264CA0106.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500:33::22)
 by AM6PR08MB4373.eurprd08.prod.outlook.com (2603:10a6:20b:70::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.13; Wed, 9 Dec
 2020 14:28:08 +0000
Received: from VE1EUR03FT025.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:500:33:cafe::9a) by MR2P264CA0106.outlook.office365.com
 (2603:10a6:500:33::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.21 via Frontend
 Transport; Wed, 9 Dec 2020 14:28:08 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT025.mail.protection.outlook.com (10.152.18.74) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12 via Frontend Transport; Wed, 9 Dec 2020 14:28:07 +0000
Received: ("Tessian outbound 39646a0fd094:v71");
 Wed, 09 Dec 2020 14:28:07 +0000
Received: from bf62841135a2.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A897CF14-F53B-4866-BEC2-D7BE977BB5E4.1; 
 Wed, 09 Dec 2020 14:27:49 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id bf62841135a2.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 09 Dec 2020 14:27:49 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBAPR08MB5752.eurprd08.prod.outlook.com (2603:10a6:10:1ac::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.13; Wed, 9 Dec
 2020 14:27:49 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3632.023; Wed, 9 Dec 2020
 14:27:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35d42f26-b0e0-49a7-adb3-acbb924c1e8d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OnYIawYMtjBZScVMAekXwJUvV5Yzu1NbGzcbtf+PC+E=;
 b=xw90otUFDvYNmT5vpFURt/Mi6zbblhvatjaGu5chEcODFbPMtgm09zO/qJbbGFbI6IfAhZIDsf7SqeunuUkZSKIC1ZQka1200PW57u8oCG45sGlbmW8tbv5PGm0SL/277TNZjmhrODk/JNNf3tPACgGuiqqO2obiTi93KEBuOT8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 2f29f2f68edd540b
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WvM6xIg95wHQ288ZFUS/9mPB1eO0HdTi7uzx2gExR2Kzu/ePfi0Ap6GvGVNHIOEEol2dHoQIjKQh2OFUmk3gJHDYHAdz+hZAgiUIwc+b4z4HKfjt3m8ppjdXiDK1PRS08dWDk3yGWD2Oc3q0hQ00x6LViI7cxZQuBinUr5sQiTizmaqR/V04Jw6BxVz8+Ttb1wqUqabZmpAHPMnbTAvUZ+JMZdfnX50TDU/W7TmjbgpLhO9fImVQENZeo73kJJPjekOBN9v4zE8XjjOnz/w1NvrMvQ+U8qOz2bSWGRZdU1X3Z3YXiuoBpiXpP49Y7NYBZKNNWcOKfnADkA2ALNlmvw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OnYIawYMtjBZScVMAekXwJUvV5Yzu1NbGzcbtf+PC+E=;
 b=j7ZMls/VIrGonKXTlTqjk9ksg/BBEil9VrzKtkXI/JkNWxaO+nZnXEyGPQ6lBmTkuhzfy2DSKiUosFfhTii05CRgijzATfbAl2YI44LR5a8Y4VMysgT4VUWf209L/gyKU0fLQ9mhjhhXiJ9JiG06LrXs3vXicd78zLbV5Th9t9/ck7sJ5Su7608mwqcsz6Ic9yJO1tDoMjatD3ZeYd+ukCo1raNqD8c9lOPEmN6fczj0+PaZpd7DIZ0QDZa4tQxW+WEcUxBLH7msD2s2UyxQceJovevTugE83W0A98mYHzSkXaOOCsHnFP+CnLXJYL681Tdfbyg5MLJqGpJzlolIxA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OnYIawYMtjBZScVMAekXwJUvV5Yzu1NbGzcbtf+PC+E=;
 b=xw90otUFDvYNmT5vpFURt/Mi6zbblhvatjaGu5chEcODFbPMtgm09zO/qJbbGFbI6IfAhZIDsf7SqeunuUkZSKIC1ZQka1200PW57u8oCG45sGlbmW8tbv5PGm0SL/277TNZjmhrODk/JNNf3tPACgGuiqqO2obiTi93KEBuOT8=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [PATCH v3 8/8] lib: move sort code
Thread-Topic: [PATCH v3 8/8] lib: move sort code
Thread-Index: AQHWwazCYMZRivICQk+HtBzW3vmiCqnu65UA
Date: Wed, 9 Dec 2020 14:27:48 +0000
Message-ID: <A2E397D5-4F51-46D0-9929-5AB81DCD6E23@arm.com>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
 <84bd9aaf-e94d-da6b-f2f9-c2da64df5312@suse.com>
In-Reply-To: <84bd9aaf-e94d-da6b-f2f9-c2da64df5312@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 0a28cf76-8fbd-4ebd-340d-08d89c4ea5f8
x-ms-traffictypediagnostic: DBAPR08MB5752:|AM6PR08MB4373:
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB4373358878C85E101A2BC7669DCC0@AM6PR08MB4373.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:556;OLM:556;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 oiiOhYN7cZlTBbgO4SJOolT5yJIj7EY+tdMDPPPnBaTtEtQwSDT/KbhNJ98Q0GSihUcDLuwYswJ5PAxD3EBQWy8oo1EoZtec8V2G6tJiSzBEDxJaAs1LMPUTundBcaURpzJZ6kjRM0jBQV1xLkQL2n029TODY3aaPEjypPNib22DPSzHQ9GNSpBxMkuDj4iJeV9tg5J/jidNhwTlLxIvQxuFIW4/Vzl2/O/+GUfdIk/G20ZZfwbivFQXf+08WosnaY+ZxCFx9t6+fEhbaj///GRWhp28Xe+YAb+4fhQyBoTAYRpNjjXYHiCkzkB9Sp5ywDJH3+3KEY7tuMeLSTnDTagajNsyuWnzV3i5fZnEo1cX3OvlF+OOSOIrQeepsO6p
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(366004)(39860400002)(136003)(396003)(346002)(91956017)(478600001)(316002)(2616005)(36756003)(86362001)(6506007)(76116006)(6486002)(5660300002)(66476007)(2906002)(4326008)(186003)(6512007)(83380400001)(54906003)(66446008)(71200400001)(33656002)(6916009)(53546011)(64756008)(26005)(66556008)(66946007)(8936002)(8676002)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?us-ascii?Q?v9Z4EIjtS8TM4jv91OyW4i+xsf204je4nL6ciegB3TBM5lsJio+/COlvyLch?=
 =?us-ascii?Q?16OQ+/z7hbjwPFRYgqFZnmPSHS90Ay0CZUIH70kZa9Ic/U3eAkSt7hSfLAaS?=
 =?us-ascii?Q?N7/te8upeZOgwxcnr8j4zIBpIFwDKUM1jH0FphspPDUq4F4N+wwda74b/tJG?=
 =?us-ascii?Q?4dW8Qxtr437KDpWMOmu4W7aSdIn5+PerXOmsnCpi9PnW/h+ZvHXrKKshX12L?=
 =?us-ascii?Q?aduY3bx9+MGndiKScBFtv1u2AHPLHRmZGRrzVY4sRr9tzKQ5Mnm/2m8szK7E?=
 =?us-ascii?Q?8SPC9vZNAiGsqKeDmZlrvDseOv9hgOAv7m0HncB5FZwWY0vU/0jSrtIbeNgq?=
 =?us-ascii?Q?yJaH00bOuYfzw7nV8dMu3HGOZKh99u93Bq1HLR+LqYtTxJSHXMTfPiDVV5xY?=
 =?us-ascii?Q?gX3v6HNn+TdatAyyr0yvMS+C1hMoRyYKq8vfoMuwrckPxOlc5RC0Ju3Q2GSZ?=
 =?us-ascii?Q?hthumv04Ylq/YDM9mNeEclxMJqhGbkIr4HKPP5I5rYb2LGgUmg0G9f3/DG2v?=
 =?us-ascii?Q?3WJWHD2yZaQ4n1Mc4AMqfRfvbD/FkjEN4JAqOcKKTcxx1qa/ZIniUn6/9XC6?=
 =?us-ascii?Q?ysyhn/TaqKRW5Zvwv9b0Q2eFwNBFV1QJ9yvm8aEcaMUjP1ogyNPRBAbOq2fJ?=
 =?us-ascii?Q?j1vjBtERJIPJmR2hVkpD1qdvDT5EvQJeB8o6qgAqabP9uaJuWM77krh51Wa2?=
 =?us-ascii?Q?nMLQgGn+QjpSjC8zWyMqn7mY7dSxGkf6yBzh3v9rrMDrLYqse28AU6RA7gpO?=
 =?us-ascii?Q?L1Pbv3LQUU7AFLJxEoVRP6sxQNflOpo0Xrz527Vii9l2sW9bieFH8gQwUHng?=
 =?us-ascii?Q?R/jbKEmkSEY7vrGsyhPqWcnjp3QMswjsqDctrAxLySITprX8tB5C2u06KGjp?=
 =?us-ascii?Q?EXvSy+h8jGsfJcSCosclMids8s1czHC0j2f30nlKPtI4eKflG3eUm8oy9ZFS?=
 =?us-ascii?Q?CzAn+sjqyMP2R07JA6B2Fns8aT+SBD7SaM5Cp1Rl/vFPtEWyI4qhbWLe7N43?=
 =?us-ascii?Q?dMww?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <FE8319227750B146BA6F441958552FD9@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5752
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	a4b4e113-0d71-4ff6-32f5-08d89c4e9ac2
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	k/TRAxCEm8r7/4G6f6WEmwPNj4+pKfXsLGHOhmvRYGpEC/4XuZVa8Y7lDHt2CxQ9FiMYGxK4r5dxVbN9vbmMR5uuHw/dA3l7rJGmKYynTGNjtuH9PW4w15RY35k+E+OOCZhDgL/SLXAzRzmu97INF2GDXb4JAJUAjCjXzj9vRG/X65/7szve2b7pH6aP1vm9rfO0qbcSJU0iahFV5QqPhs+H3dtl5LfMvbB0f4W9HmXN7OIkgMBenThZq3jbTdtGAeXGIyM0qFZX2KOnz3HT/+8rtxKnMwIJRPG/aZsKifLTn6uFABK87KpvRR3i26oVePPODqxUEsf6ejD3gzJBF3vUNpz8JKORfipb+E0rBWsRT2hEcmjG/o8QuEzz/EPq6PBH0z/mUxu1l1I4ZeGmsA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(136003)(376002)(46966005)(53546011)(81166007)(33656002)(82310400003)(8676002)(70206006)(47076004)(26005)(6862004)(8936002)(70586007)(36756003)(86362001)(2616005)(6486002)(107886003)(6512007)(356005)(336012)(186003)(83380400001)(508600001)(54906003)(5660300002)(6506007)(4326008)(2906002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Dec 2020 14:28:07.7484
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 0a28cf76-8fbd-4ebd-340d-08d89c4ea5f8
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4373

Hi,

> On 23 Nov 2020, at 15:24, Jan Beulich <jbeulich@suse.com> wrote:
>
> Build this code into an archive, partly paralleling bsearch().
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Acked-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> xen/common/Makefile        | 1 -
> xen/lib/Makefile           | 1 +
> xen/{common => lib}/sort.c | 0
> 3 files changed, 1 insertion(+), 1 deletion(-)
> rename xen/{common => lib}/sort.c (100%)
>
> diff --git a/xen/common/Makefile b/xen/common/Makefile
> index e8ce23acea67..7a4e652b575e 100644
> --- a/xen/common/Makefile
> +++ b/xen/common/Makefile
> @@ -36,7 +36,6 @@ obj-y += rcupdate.o
> obj-y += rwlock.o
> obj-y += shutdown.o
> obj-y += softirq.o
> -obj-y += sort.o
> obj-y += smp.o
> obj-y += spinlock.o
> obj-y += stop_machine.o
> diff --git a/xen/lib/Makefile b/xen/lib/Makefile
> index f12dab7a737a..42cf7a1164ef 100644
> --- a/xen/lib/Makefile
> +++ b/xen/lib/Makefile
> @@ -6,3 +6,4 @@ lib-y += ctype.o
> lib-y += list-sort.o
> lib-y += parse-size.o
> lib-y += rbtree.o
> +lib-y += sort.o
> diff --git a/xen/common/sort.c b/xen/lib/sort.c
> similarity index 100%
> rename from xen/common/sort.c
> rename to xen/lib/sort.c
>
>



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 14:29:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 14:29:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48323.85430 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0TZ-0007TF-NH; Wed, 09 Dec 2020 14:29:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48323.85430; Wed, 09 Dec 2020 14:29:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0TZ-0007T8-K0; Wed, 09 Dec 2020 14:29:53 +0000
Received: by outflank-mailman (input) for mailman id 48323;
 Wed, 09 Dec 2020 14:29:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uDNN=FN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kn0TY-0007T1-Qi
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 14:29:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b7097960-b139-4d81-bf0f-9f89b952a31a;
 Wed, 09 Dec 2020 14:29:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0862EAD2B;
 Wed,  9 Dec 2020 14:29:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7097960-b139-4d81-bf0f-9f89b952a31a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607524191; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wvH8szOYMnoxUWvnEUf3sjUc5EPUzCoHPoeOxp7FKj4=;
	b=UYsM75cxGbPUH6PImonQmGjZz8HBy7FEcY2tq4rtBQOaeyeH8/TFKA1G4cB/IoIx3vtaKK
	YS3wDF3hPCCoEiUTdB1z+7eU+8sm1XXiGnQdCIAt9vKdMaOcMUZELEf6M8fkW2nqTR82GB
	lAKUKWs35Fdkke7rmZJN2lRn63FKvCc=
Subject: Re: [PATCH v3] xen: add support for automatic debug key actions in
 case of crash
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org, Juergen Gross <jgross@suse.com>
References: <20201126080340.6154-1-jgross@suse.com>
 <22190c77-eb35-5b72-7d72-34800c3f052f@suse.com>
 <98c45abd-8796-088c-e2a6-9ad494beeb9e@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <59f126a3-f716-345b-b464-746e6156c15a@suse.com>
Date: Wed, 9 Dec 2020 15:29:50 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <98c45abd-8796-088c-e2a6-9ad494beeb9e@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.12.2020 13:11, Julien Grall wrote:
> On 26/11/2020 11:20, Jan Beulich wrote:
>> On 26.11.2020 09:03, Juergen Gross wrote:
>>> When the host crashes it would sometimes be nice to have additional
>>> debug data available which could be produced via debug keys, but
>>> halting the server for manual intervention might be impossible due to
>>> the need to reboot/kexec rather sooner than later.
>>>
>>> Add support for automatic debug key actions in case of crashes which
>>> can be activated via boot- or runtime-parameter.
>>>
>>> Depending on the type of crash the desired data might be different, so
>>> support different settings for the possible types of crashes.
>>>
>>> The parameter is "crash-debug" with the following syntax:
>>>
>>>    crash-debug-<type>=<string>
>>>
>>> with <type> being one of:
>>>
>>>    panic, hwdom, watchdog, kexeccmd, debugkey
>>>
>>> and <string> a sequence of debug key characters with '+' having the
>>> special semantics of a 10 millisecond pause.
>>>
>>> So "crash-debug-watchdog=0+0qr" would result in special output in case
>>> of watchdog triggered crash (dom0 state, 10 ms pause, dom0 state,
>>> domain info, run queues).
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>> V2:
>>> - switched special character '.' to '+' (Jan Beulich)
>>> - 10 ms instead of 1 s pause (Jan Beulich)
>>> - added more text to the boot parameter description (Jan Beulich)
>>>
>>> V3:
>>> - added const (Jan Beulich)
>>> - thorough test of crash reason parameter (Jan Beulich)
>>> - kexeccmd case should depend on CONFIG_KEXEC (Jan Beulich)
>>> - added dummy get_irq_regs() helper on Arm
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>
>> Except for the Arm aspect, where I'm not sure using
>> guest_cpu_user_regs() is correct in all cases,
> 
> I am not entirely sure I understand what get_irq_regs() is supposed to 
> return on x86. Is it the registers saved from the most recent exception?

An interrupt (not an exception) sets the underlying per-CPU
variable, such that interested parties will know the real
context is not guest or "normal" Xen code, but an IRQ.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 14:41:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 14:41:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48332.85441 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0eq-0000sS-Q3; Wed, 09 Dec 2020 14:41:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48332.85441; Wed, 09 Dec 2020 14:41:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0eq-0000sL-Mz; Wed, 09 Dec 2020 14:41:32 +0000
Received: by outflank-mailman (input) for mailman id 48332;
 Wed, 09 Dec 2020 14:41:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NJeK=FN=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kn0ep-0000sF-Mt
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 14:41:31 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 31a19d4a-58fe-4dd7-a546-9bb31528dd43;
 Wed, 09 Dec 2020 14:41:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 31a19d4a-58fe-4dd7-a546-9bb31528dd43
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1607524890;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=D4r5jtqfdP/QbqUxHGzimSh5Yk7o/nEtyDgsKLy034A=;
  b=HMozLkZhaybUuJFWLOUfiVEJE7apO6apyuU5WDnWuiUbMo52GICTKvSN
   4W2FuGMK//IAUtTIEGkVNngMDV/9A2evOxzC3sfJOcLGWXRDiUjM5g+DW
   aEJxxvgog6oR2WfH3GTS039VdzOVyMQ+n8RNwCdbeOXpnkbSGuXB8Takb
   o=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 0Lxv2UMDflCZOIIlxONC9qSC10t2Kwy1oSVPHgdj0UX+rVzC1RINk7E3AvCSsZ3wbsab3KtMOw
 poCO8PrIBBWlIQ6Zj7t8gvFncAHcrP9yg+xQWqMgpRXkoNCc6nQx5W+m36x5wnFIZWDSrLkNXE
 QD+JrVnzQl7bwVP1ImeVTSd8lU9Fcx8eWxhRJ9eGb5XA2hlb+FrbcS5XnDDi5h5wJsy9ODov3Q
 o2zlHONi4fgNr5d3GsTzULHVE9Rnc2+WEd35xFP+l2JereFadZdN/EeIhOdXBWYy4Pgj5ZJhDw
 /2U=
X-SBRS: 5.2
X-MesageID: 33197328
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:tp0BXaGNgAp2nsxXpLqFjZLXdLJzesId70hD6mlaY3VuHfCwvc
 aogfgdyFvQgDEeRHkvlbm7SdC9aFnb8oN45pRUAKyrWxPotHDtAIZp64bjxDOIIVyZysd206
 B8f69iTODhFFQSt7ec3CCUG8stqeP3k5yAqvzZyx5WLD1CS6Yl1AthDxbeL0sefngjObMcNL
 6xovVKvCChf3N/VLXfOlAgU/LYr9PG0LLKCCRnOzcd5AODjSyl5dfBenDytCs2aD9Bzawv9m
 LIiWXCiJmLiP2n1gTak1ba8pU+oqqY9vJ4GMeOhsIJQw+Ati+UYu1aN4GqgCo4u6WG5losjb
 D30nUdFvU2wXbQcmapmADqygnt3R0/gkWSs2OwsD/Eusz2RDUzFspHi8Z4S3LimjEdgPh42p
 RK0nPxirNcB3r78xjV7d7OSh1siw6wqX0tjeYcgxVkIPIjQbVWqpES+14QDYwJGzj05JtiHO
 5lCszd4/g+SyL9U1nSuG5zzNuwGmkiBxvueDlkhuWZ2yVb9UoJrHcwy9cYmh47la4VS54B/O
 jcN7QtibcmdL5zUYt4CP0aScW6TmzBKCitDEuXIVDqUL4KIGjMrZmf2sRR2MiwdJYFzIQ/lf
 36OTsy31IaYE7gBdaD25dG6Hn2LlmVRjjx1tpYo4Fwp7yUfsuSDQSYVFssn8G8ys9zPuTHXZ
 +IVK5+H+XuNi/nF4pPwmTFKvtvAGhbWsgUttEnQkmJs8LGJ4b739arCsr7Nf7qCjYrWmT2H3
 sFUnzyPax7nzuWZkM=
X-IronPort-AV: E=Sophos;i="5.78,405,1599537600"; 
   d="scan'208";a="33197328"
Subject: Re: dom0 PV looping on search_pre_exception_table()
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: <xen-devel@lists.xenproject.org>
References: <20201208175738.GA3390@antioche.eu.org>
 <e73cc71d-c1a6-87c8-1b82-5d70d4f52eaa@citrix.com>
 <20201209101512.GA1299@antioche.eu.org>
 <3f7e50bb-24ad-1e32-9ea1-ba87007d3796@citrix.com>
 <20201209135908.GA4269@antioche.eu.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <c612616a-3fcd-be93-7594-20c0c3b71b7a@citrix.com>
Date: Wed, 9 Dec 2020 14:41:23 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201209135908.GA4269@antioche.eu.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 09/12/2020 13:59, Manuel Bouyer wrote:
> On Wed, Dec 09, 2020 at 01:28:54PM +0000, Andrew Cooper wrote:
>> Pagefaults on IRET come either from stack accesses for operands (not the
>> case here as Xen is otherwise working fine), or from segment selector
>> loads for %cs and %ss.
>>
>> In this example, %ss is in the LDT, which specifically does use
>> pagefaults to promote the frame to PGT_segdesc.
>>
>> I suspect that what is happening is that handle_ldt_mapping_fault() is
>> failing to promote the page (for some reason), and we're taking the "In
>> hypervisor mode? Leave it to the #PF handler to fix up." path due to the
>> confusion in context, and Xen's #PF handler is concluding "nothing else
>> to do".
>>
>> The older behaviour of escalating to the failsafe callback would have
>> broken this cycle by rewriting %ss and re-entering the kernel.
>>
>>
>> Please try the attached debugging patch, which is an extension of what I
>> gave you yesterday.  First, it ought to print %cr2, which I expect will
>> point to Xen's virtual mapping of the vcpu's LDT.  The logic ought to
>> loop a few times so we can inspect the hypervisor codepaths which are
>> effectively livelocked in this state, and I've also instrumented
>> check_descriptor() failures because I've got a gut feeling that is the
>> root cause of the problem.
> here's the output:
> (XEN) IRET fault: #PF[0000]
> (XEN) %cr2 ffff820000010040
> (XEN) IRET fault: #PF[0000]
> (XEN) %cr2 ffff820000010040
> (XEN) IRET fault: #PF[0000]
> (XEN) %cr2 ffff820000010040
> (XEN) IRET fault: #PF[0000]
> (XEN) %cr2 ffff820000010040
> (XEN) domain_crash called from extable.c:216
> (XEN) Domain 0 (vcpu#0) crashed on cpu#0:
> (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:   C   ]----
> (XEN) CPU:    0
> (XEN) RIP:    0047:[<00007f7ff60007d0>]
> (XEN) RFLAGS: 0000000000000202   EM: 0   CONTEXT: pv guest (d0v0)
> (XEN) rax: ffff82d04038c309   rbx: 0000000000000000   rcx: 000000000000e008
> (XEN) rdx: 0000000000010086   rsi: ffff83007fcb7f78   rdi: 000000000000e010
> (XEN) rbp: 0000000000000000   rsp: 00007f7fff4876c0   r8:  0000000e00000000
> (XEN) r9:  0000000000000000   r10: 0000000000000000   r11: 0000000000000000
> (XEN) r12: 0000000000000000   r13: 0000000000000000   r14: 0000000000000000
> (XEN) r15: 0000000000000000   cr0: 0000000080050033   cr4: 0000000000002660
> (XEN) cr3: 0000000079cdb000   cr2: ffffa1000000a040
> (XEN) fsb: 0000000000000000   gsb: 0000000000000000   gss: ffffffff80cf2dc0
> (XEN) ds: 0023   es: 0023   fs: 0000   gs: 0000   ss: 003f   cs: 0047
> (XEN) Guest stack trace from rsp=00007f7fff4876c0:
> (XEN)    0000000000000001 00007f7fff487bd8 0000000000000000 0000000000000000
> (XEN)    0000000000000003 00000000aee00040 0000000000000004 0000000000000038
> (XEN)    0000000000000005 0000000000000008 0000000000000006 0000000000001000
> (XEN)    0000000000000007 00007f7ff6000000 0000000000000008 0000000000000000
> (XEN)    0000000000000009 00000000aee01cd0 00000000000007d0 0000000000000000
> (XEN)    00000000000007d1 0000000000000000 00000000000007d2 0000000000000000
> (XEN)    00000000000007d3 0000000000000000 000000000000000d 00007f7fff488000
> (XEN)    00000000000007de 00007f7fff4877c0 0000000000000000 0000000000000000
> (XEN)    6e692f6e6962732f 0000000000007469 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN) Hardware Dom0 crashed: rebooting machine in 5 seconds.

Huh, so it is the LDT, but we're not getting as far as inspecting the
target frame.

I wonder if the LDT is set up correctly.  How about this incremental delta?

~Andrew

diff --git a/xen/arch/x86/extable.c b/xen/arch/x86/extable.c
index 88b05bef38..be59a3e216 100644
--- a/xen/arch/x86/extable.c
+++ b/xen/arch/x86/extable.c
@@ -203,13 +203,16 @@ search_pre_exception_table(struct cpu_user_regs *regs)
         __start___pre_ex_table, __stop___pre_ex_table-1, addr);
     if ( fixup )
     {
+        struct vcpu *curr = current;
         static int count;
 
         printk(XENLOG_ERR "IRET fault: %s[%04x]\n",
                vec_name(regs->entry_vector), regs->error_code);
 
         if ( regs->entry_vector == X86_EXC_PF )
-            printk(XENLOG_ERR "%%cr2 %016lx\n", read_cr2());
+            printk(XENLOG_ERR "%%cr2 %016lx, LDT base %016lx, limit %04x\n",
+                   read_cr2(), curr->arch.pv.ldt_base,
+                   (curr->arch.pv.ldt_ents << 3) | 7);
 
         if ( count++ > 2 )
         {
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 1059f3ce66..3ac07a84c3 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1233,6 +1233,8 @@ static int handle_ldt_mapping_fault(unsigned int offset,
     }
     else
     {
+        printk(XENLOG_ERR "*** pv_map_ldt_shadow_page(%#x) failed\n", offset);
+
         /* In hypervisor mode? Leave it to the #PF handler to fix up. */
         if ( !guest_mode(regs) )
             return 0;



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 14:42:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 14:42:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48337.85454 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0g6-000103-50; Wed, 09 Dec 2020 14:42:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48337.85454; Wed, 09 Dec 2020 14:42:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0g6-0000zw-1t; Wed, 09 Dec 2020 14:42:50 +0000
Received: by outflank-mailman (input) for mailman id 48337;
 Wed, 09 Dec 2020 14:42:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uDNN=FN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kn0g4-0000zq-C5
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 14:42:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 99beeb13-f007-472b-93f3-fa9d58141815;
 Wed, 09 Dec 2020 14:42:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8B056AD2B;
 Wed,  9 Dec 2020 14:42:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 99beeb13-f007-472b-93f3-fa9d58141815
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607524966; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=VCqA7fbJ0LAudNU8V/slBuJvIHhpFqzfocQYAr+T5E4=;
	b=Fw4uq9bR9WITgtkGIEDNQ5Glt/ma4ic89NLNXCslwnbpd3hR1TtoMWVg4RVzc5U8x1MmBy
	eEbyxpwi9Z/OLHqZSXZz+WZjE1RK0wv+3nk9uAp27Rqb4yo48kJHPWlRFbLhbJFFqGL1Zl
	8Hbw1lzoJuFznGNN3EdQsF4woHqcoWE=
Subject: Re: [PATCH v3 2/8] lib: collect library files in an archive
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
 <21714b83-8619-5aa9-be5b-3015d05a26a4@suse.com>
 <E7B41B4F-98F9-4C52-8549-F407D6FB8251@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <742e504a-02f7-2132-c631-6a31c03959e4@suse.com>
Date: Wed, 9 Dec 2020 15:42:45 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <E7B41B4F-98F9-4C52-8549-F407D6FB8251@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.12.2020 12:37, Bertrand Marquis wrote:
>> On 23 Nov 2020, at 15:21, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> In order to (subsequently) drop odd things like CONFIG_NEEDS_LIST_SORT
>> just to avoid bloating binaries when only some arch-es and/or
>> configurations need generic library routines, combine objects under lib/
>> into an archive, which the linker then can pick the necessary objects
>> out of.
>>
>> Note that we can't use thin archives just yet, until we've raised the
>> minimum required binutils version suitably.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Thanks.

>> @@ -60,7 +64,14 @@ include Makefile
>> # ---------------------------------------------------------------------------
>>
>> quiet_cmd_ld = LD      $@
>> -cmd_ld = $(LD) $(XEN_LDFLAGS) -r -o $@ $(real-prereqs)
>> +cmd_ld = $(LD) $(XEN_LDFLAGS) -r -o $@ $(filter-out %.a,$(real-prereqs)) \
>> +               --start-group $(filter %.a,$(real-prereqs)) --end-group
> 
> It might be a good idea to add a comment to explain why the start/end-group
> is needed, so that someone does not change this back in the future.

Since we're trying to inherit Linux's build system, I did look
there and iirc there was no comment, so I didn't see a basis for
us to have one.

> Something like: put libraries between start/end group to have unused symbols removed.

Now that's not the reason - what you describe is the default
behavior for archives, and there is something like a "whole
archive" option, iirc, to change to a mode where all objects
get pulled out. Instead this is a symbol resolution thing,
aiui - by default earlier archives can't resolve undefined
symbols first referenced by objects pulled out of later
archives.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 14:42:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 14:42:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48338.85466 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0g8-00012E-Hj; Wed, 09 Dec 2020 14:42:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48338.85466; Wed, 09 Dec 2020 14:42:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0g8-000123-Ei; Wed, 09 Dec 2020 14:42:52 +0000
Received: by outflank-mailman (input) for mailman id 48338;
 Wed, 09 Dec 2020 14:42:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kn0g6-00010E-Aj; Wed, 09 Dec 2020 14:42:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kn0g6-0001kI-1j; Wed, 09 Dec 2020 14:42:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kn0g5-000766-Oq; Wed, 09 Dec 2020 14:42:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kn0g5-0002cY-ON; Wed, 09 Dec 2020 14:42:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=aX1uy8yfwbuk8OHBd10Z4YsFCsgWoNqldb9F4jAPKpY=; b=jzpgOM4xdarNdZirsG+W0bCvny
	rpXqpzg9cO+0f/U7g8eTHpvk7ADVr2Cb4lMV+0oz9SaUn04wuqqlbcE3LHA2l53E1Nl+rRiL4Iux8
	nCW4AiRCO3eqDJvSzUM8UwrII4bRoU61zKQjVNOvzkf5kSBNXUJAaq5i2JhbdwsguSDQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157345-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157345: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
X-Osstest-Versions-That:
    ovmf=7061294be500de021bef3d4bc5218134d223315f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 09 Dec 2020 14:42:49 +0000

flight 157345 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157345/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e
baseline version:
 ovmf                 7061294be500de021bef3d4bc5218134d223315f

Last test of basis   157338  2020-12-09 03:48:15 Z    0 days
Testing same since   157345  2020-12-09 12:40:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Chasel Chiu <chasel.chiu@intel.com>
  Yuwei Chen <yuwei.chen@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   7061294be5..f95e80d832  f95e80d832e923046c92cd6f0b8208cec147138e -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 14:47:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 14:47:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48351.85480 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0k7-0001Kw-59; Wed, 09 Dec 2020 14:46:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48351.85480; Wed, 09 Dec 2020 14:46:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0k7-0001Kp-1j; Wed, 09 Dec 2020 14:46:59 +0000
Received: by outflank-mailman (input) for mailman id 48351;
 Wed, 09 Dec 2020 14:46:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiOm=FN=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kn0k6-0001Kk-3T
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 14:46:58 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7e1a::628])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1bc8507a-5566-4908-970d-8ebeacdc9b07;
 Wed, 09 Dec 2020 14:46:56 +0000 (UTC)
Received: from AM0P190CA0022.EURP190.PROD.OUTLOOK.COM (2603:10a6:208:190::32)
 by DBAPR08MB5768.eurprd08.prod.outlook.com (2603:10a6:10:1b1::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.17; Wed, 9 Dec
 2020 14:46:54 +0000
Received: from AM5EUR03FT022.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:208:190:cafe::cf) by AM0P190CA0022.outlook.office365.com
 (2603:10a6:208:190::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.12 via Frontend
 Transport; Wed, 9 Dec 2020 14:46:54 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT022.mail.protection.outlook.com (10.152.16.79) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12 via Frontend Transport; Wed, 9 Dec 2020 14:46:54 +0000
Received: ("Tessian outbound fc5cc0046d61:v71");
 Wed, 09 Dec 2020 14:46:54 +0000
Received: from 18395851d706.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 8852F488-E3C7-47AD-902E-01E7D249CC25.1; 
 Wed, 09 Dec 2020 14:46:37 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 18395851d706.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 09 Dec 2020 14:46:37 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4774.eurprd08.prod.outlook.com (2603:10a6:10:d5::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.22; Wed, 9 Dec
 2020 14:46:35 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3632.023; Wed, 9 Dec 2020
 14:46:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1bc8507a-5566-4908-970d-8ebeacdc9b07
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=C8NIOMcqO2YuIZ25ESTOtw60WrM6U29ULZLouwF2GZY=;
 b=kSKTrZEpFcHi6to/HIVd6UUs6PCDvt0pJpbUFbqsAONSeEHD0K69mby6QkX3taEd5cezUJiqiR6uGbcdGmug05Ku+jGpRywAkKDUwlcdaS7aJjXD/D1iIRdmtximQOSocLfmgz3Al0Bp/xNz2rSsjd44jhRzOi5mlB4yOOwsX7c=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: f8eb1fea6a40a7ee
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TA8mqwJqJKiu3JKuCdH2cMK042j0QPhayaTLXM+04bKYKfZP5rkDJQV80xTQii9iB81dF8N20GZaddI0wN/CKP/FHMeJ/NNq9AiJGdu6UAs1ZmCCyEsyo52AldJKI6COubuqSk3MvCAu+n9casE85hL7M2hsXo2NyI66PaZLBJCwFt9uF4mmLIaECuOcqAsU9gIkBVQIjBZLXT2QMeIoHmfSbgA0p7RD20HRrY+qioOix7o5JFuV78wImV1PSAwNLk5TJ1K5FbHJHWJwmhZgDcSq5T/aby3Mwp1lnADQ0dvH0W2dwxNO2t4WzrmYZgFMiLSWWupEivVFoBaw1IcYuA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=C8NIOMcqO2YuIZ25ESTOtw60WrM6U29ULZLouwF2GZY=;
 b=S4sYTj1zu4Ob9E0W/CLhpXvIKi3lEW0ZMHgzhDvzJkqQmB3piXV8oYmkYNcadVaDZGV5kc4X//MHQoE3okYVhqNdvl0j7zjj6WQyyhN4YVycKqQHay6B87Ah44k10kYFnrmGEOzZr9jicZ8xLC10GSSiHgZ3cW1CrdL5a1FEIdteYvGNnH70e5jVKDKEYvA4Vtjzpowr4xy3Q+aYnQ7bgrHIGlaZhHxigrryUCBlocZEZnrQ3Iej8pZmSBT9IE5Lhxzbx+vedm5ud78WieQtY3h3lZ3Bo//rm4MUV7z/oxYUS+eefe3SMGE4WYDeFWjcQTHE3ynD9XAvIujSSIOkpg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=C8NIOMcqO2YuIZ25ESTOtw60WrM6U29ULZLouwF2GZY=;
 b=kSKTrZEpFcHi6to/HIVd6UUs6PCDvt0pJpbUFbqsAONSeEHD0K69mby6QkX3taEd5cezUJiqiR6uGbcdGmug05Ku+jGpRywAkKDUwlcdaS7aJjXD/D1iIRdmtximQOSocLfmgz3Al0Bp/xNz2rSsjd44jhRzOi5mlB4yOOwsX7c=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [PATCH v3 2/8] lib: collect library files in an archive
Thread-Topic: [PATCH v3 2/8] lib: collect library files in an archive
Thread-Index: AQHWwaxzwnlkqyYO/Ea2tJ3MHVbjvKnuvCIAgAAzoYCAAAERAA==
Date: Wed, 9 Dec 2020 14:46:34 +0000
Message-ID: <AF8181EC-DE6D-4BFA-A1B4-3B97FCE0BBA8@arm.com>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
 <21714b83-8619-5aa9-be5b-3015d05a26a4@suse.com>
 <E7B41B4F-98F9-4C52-8549-F407D6FB8251@arm.com>
 <742e504a-02f7-2132-c631-6a31c03959e4@suse.com>
In-Reply-To: <742e504a-02f7-2132-c631-6a31c03959e4@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: a3a72d8d-d431-4cd1-576f-08d89c51455f
x-ms-traffictypediagnostic: DBBPR08MB4774:|DBAPR08MB5768:
X-Microsoft-Antispam-PRVS:
	<DBAPR08MB5768B2D04193A7D3F3075EFE9DCC0@DBAPR08MB5768.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:2000;OLM:2000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 FgKWfQ6oVLESHpSGQi2IUw5/MIgVkXToqi0eGxSmZ2FBQa158CUHBz36lwCnNyecNTW4MsyB4u7J9mAbz0AmV6YxZ10p3+xXpaX/D91Gbh+0+PWS2iL7/QQc6cHV3llCK2OykU/9GQgKUnMUuFkBp9Zs22Es2YksZLVlTJoELxJ8WZsTbhL+irO+mfj+e6/ppt1Nh//leFwN2HLOv586PS5drL7+e647YTOSQaBgHAUzY6MhdaPov2m4XWGKNHIMmWpqNPwqOLMrLoXPUyFx3FHJ+3Dnw2nFDiwFhgykIuXAIbAAOCN3+q7jgMk3tWAxMRafeczODWKTZOzDbX+K4tUu+7XZZVMbPOzgQi+R24Rkf4Vtb05hroYQQtvZr838
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(366004)(136003)(396003)(376002)(346002)(66476007)(83380400001)(76116006)(6512007)(2616005)(86362001)(33656002)(8676002)(8936002)(91956017)(478600001)(186003)(4326008)(66556008)(316002)(36756003)(64756008)(5660300002)(54906003)(66446008)(66946007)(71200400001)(6506007)(26005)(6486002)(6916009)(53546011)(2906002)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?us-ascii?Q?jlUwNqCFZBHpyyzkHCUXzbadL9hhkclbUS667Ovw2KU2LbjFK5Y20go3s20O?=
 =?us-ascii?Q?8XdDh93JFC7BMjF50dP8xBZYaq33RORiXLFr3zmYysKF2wDZ0XHrV9S+mIOz?=
 =?us-ascii?Q?UAw4P1F8D3ohmx90yqrdnd8QFF5HcArynZAkQsChUcYgxh6Q7fQBTBzHMziu?=
 =?us-ascii?Q?3Id0SWnY8PndXHnXt4AV6wGho0VBBlZew5HmKT1YhVhh7FFxlboAP4fLrUyN?=
 =?us-ascii?Q?kgfwxvQHF6JsviwtB6Ff1fUz444wVcrO4PvzRArE1l1HGtDr+89YxdtF/oY3?=
 =?us-ascii?Q?PkcLpRsmX4wCX5wy8M1VLoFEFDj3N5PsA5JsiFcPbJdC/Uqq0oQjPmZj0sLs?=
 =?us-ascii?Q?uWia6qYxQcIx9XNXruo5KCkajS5Yrf2XxcG04qXN2Wf0TK/fwR7YJY8hasaW?=
 =?us-ascii?Q?G1Z2H6APXtZD6aP57svEUP1rYqZvEEKjWuJhGxbfKKvTd51IjUec9G6LYWsh?=
 =?us-ascii?Q?R/jUBVwYGqUmoLdlYb0rpV1NYnukKRJaFrDmdFps7FK5uRHMYr0amqvOmeJS?=
 =?us-ascii?Q?B68T3uk28/tdn+Km6tsCCh+JfEASgfDBYTaXGyoiiuVHsyvc63Vd39Z/gCSV?=
 =?us-ascii?Q?gkoKm7gLJ61QZylOUBm02M02mU8AuLuZhaQipeYXTKQOSEowzk34AVrFWVBs?=
 =?us-ascii?Q?EgfYxMHT++iLlNuSlc1q2jG5Ql7zVLTGk+ncXC+AUNw4Dbs2karkeidkH6VK?=
 =?us-ascii?Q?9zBoMrwLDdChFVBmBmxtVF2DLgu87d4qYb3ibTk4YKaZc7ahKLfnLoJd8FzK?=
 =?us-ascii?Q?S3Y3si0mpFCGDAfiq43LM9+ms3+xRAPr6Eu6U7Y8ixd6jFT3Z7GHs8oCgVJ+?=
 =?us-ascii?Q?zDtP0U0Ivg2vuFVN7eE4Mx2jzZPN4AFXuwLemTLISL+uHXjHsmqGlUI8am9o?=
 =?us-ascii?Q?/H2hAkIARjNJZqdTzJUlI+y0pyGNQx9G2m/FgNRuXqvA/MZC1fHcOmIoxXyp?=
 =?us-ascii?Q?lunm8bEIJFHxk9MsIM5UOXwrruoi9PzvbFFvkl6xIrrpMq+Qg6tx5xRHeZhd?=
 =?us-ascii?Q?RseN?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <68E2C66AF10B9245A34882B064EC039A@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4774
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT022.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	e97fceca-97c0-48ea-9782-08d89c5139d8
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	VMASb4URZWY+H+Z4UsuyOKD7PL7lGr11lD/1n6Rl3dGY3SX3umGYkKoKF3TNpYNh5LM5cZDFIjkqSwNyDnbNWTE2O5hSS1dUttstZKhz7V98CbLgKfNAZwd871E3FORj9AYsQ0zpVs2/PLt5oHeGOsiSBL5AppFM4D8IzktUZo7J7DjiFG/K8vx3B2BYyB0vkr4chKIn+BKSeM9FlDK92GYtWEFNX2zS2mbb+xQIXXHRNTLDXouudywN561wddCcRiczb1Xa5yyyREZzYSO53FU8qvZaeAwZB6PeTTTPlJhDOQ7RdXQaYh/HWnj4dUs6MEDCwCsrabBfUhyCAQJVc6ujcAORnd+06T7AX489EOfPcZWt6PZqSqJ12SNNco4aOJ+NvP0ekLto2xAwwNSoiw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(136003)(346002)(46966005)(86362001)(82310400003)(6862004)(8676002)(2616005)(508600001)(107886003)(70586007)(6512007)(36906005)(186003)(70206006)(6506007)(54906003)(83380400001)(4326008)(5660300002)(47076004)(81166007)(33656002)(8936002)(6486002)(53546011)(356005)(336012)(36756003)(26005)(2906002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Dec 2020 14:46:54.2525
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a3a72d8d-d431-4cd1-576f-08d89c51455f
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT022.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5768

Hi Jan,

> On 9 Dec 2020, at 14:42, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 09.12.2020 12:37, Bertrand Marquis wrote:
>>> On 23 Nov 2020, at 15:21, Jan Beulich <jbeulich@suse.com> wrote:
>>> 
>>> In order to (subsequently) drop odd things like CONFIG_NEEDS_LIST_SORT
>>> just to avoid bloating binaries when only some arch-es and/or
>>> configurations need generic library routines, combine objects under lib/
>>> into an archive, which the linker then can pick the necessary objects
>>> out of.
>>> 
>>> Note that we can't use thin archives just yet, until we've raised the
>>> minimum required binutils version suitably.
>>> 
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
> 
> Thanks.
> 
>>> @@ -60,7 +64,14 @@ include Makefile
>>> # ---------------------------------------------------------------------------
>>> 
>>> quiet_cmd_ld = LD      $@
>>> -cmd_ld = $(LD) $(XEN_LDFLAGS) -r -o $@ $(real-prereqs)
>>> +cmd_ld = $(LD) $(XEN_LDFLAGS) -r -o $@ $(filter-out %.a,$(real-prereqs)) \
>>> +               --start-group $(filter %.a,$(real-prereqs)) --end-group
>> 
>> It might be a good idea to add a comment to explain why the start/end-group
>> is needed, so that someone does not change this back in the future.
> 
> Since we're trying to inherit Linux's build system, I did look
> there and iirc there was no comment, so I didn't see a basis for
> us to have one.
> 
>> Something like: put libraries between start/end group to have unused symbols removed.
> 
> Now that's not the reason - what you describe is the default
> behavior for archives, and there is something like a "whole
> archive" option, iirc, to change to a mode where all objects
> get pulled out. Instead this is a symbol resolution thing,
> aiui - by default earlier archives can't resolve undefined
> symbols first referenced by objects pulled out of later
> archives.

Ah yes, I remember seeing that.
Maybe just add your last sentence to the commit message.

Cheers
Bertrand

> 
> Jan



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 14:47:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 14:47:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48357.85493 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0kj-0001Qn-Dv; Wed, 09 Dec 2020 14:47:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48357.85493; Wed, 09 Dec 2020 14:47:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0kj-0001Qg-Au; Wed, 09 Dec 2020 14:47:37 +0000
Received: by outflank-mailman (input) for mailman id 48357;
 Wed, 09 Dec 2020 14:47:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uDNN=FN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kn0kh-0001QW-OC
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 14:47:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6a380b7d-712d-4a92-a767-6a9e9b0feb10;
 Wed, 09 Dec 2020 14:47:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 17FCEAC9A;
 Wed,  9 Dec 2020 14:47:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6a380b7d-712d-4a92-a767-6a9e9b0feb10
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607525254; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ONIxTvhg1ZSj4kcIDt7DUVJnKldpPMpr8yIzQKHZvu4=;
	b=LTyLnMxHICiaZieurOY3iqPVnHPIIV4GT+lXu6BNNYDOIIESvA+YAcDq8kOeSjsGF+vZzW
	H0Yu89WkMSVOTVYWjn+dYPm+XGd2uNCkfvI3XLkBz0KsDxfQ0Ju6iG/VfdgC7wxDOKFw6b
	pQy4tOrO3RT3MCSyuk3hkwuzylSrUBs=
Subject: Re: [PATCH v3 0/8] xen: beginnings of moving library-like code into
 an archive
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
 <509B2BDB-A226-4328-A75E-33AAF74BE45B@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <41587b00-2899-65a6-3867-97664529fdab@suse.com>
Date: Wed, 9 Dec 2020 15:47:33 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <509B2BDB-A226-4328-A75E-33AAF74BE45B@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.12.2020 12:33, Bertrand Marquis wrote:
> I will review this today, sorry for the delay.

Thanks for the reviews, and no problem at all. Since iirc it was
you who asked on the last community call, I wanted to point out
that despite your reviews and despite Wei's acks the series
still won't be able to go in, because patches 2 and 3 are still
lacking Arm maintainer acks.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 14:52:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 14:52:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48365.85508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0p0-0002QX-4v; Wed, 09 Dec 2020 14:52:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48365.85508; Wed, 09 Dec 2020 14:52:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0p0-0002QQ-1i; Wed, 09 Dec 2020 14:52:02 +0000
Received: by outflank-mailman (input) for mailman id 48365;
 Wed, 09 Dec 2020 14:52:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kn0oy-0002QJ-CG
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 14:52:00 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kn0ov-0001xh-7f; Wed, 09 Dec 2020 14:51:57 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kn0ou-0001Hq-VZ; Wed, 09 Dec 2020 14:51:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=+K/zvjpkueRPcqVCGHkSj5uY/mkKXh2WurdDiBo7FZA=; b=JcNfUb/UQ0kLzfBYmLWwU+onJ8
	Zoml+n1cxepyeT0S1QQ3AghPm+wDXpongSNNG8lQHSltrCCSND4BiCL9zFxZ0Uc3YCgwdEeiJ66EB
	nMtaYaU4JvP6kNNWHvG8KHeAJ4yLZUl/i/XusOVKjb0hII8LVy2Pkdkumtmq5NOxs8Nw=;
Subject: Re: [PATCH v3 0/8] xen: beginnings of moving library-like code into
 an archive
To: Jan Beulich <jbeulich@suse.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
 <509B2BDB-A226-4328-A75E-33AAF74BE45B@arm.com>
 <41587b00-2899-65a6-3867-97664529fdab@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <6d56c227-2828-22fc-61a9-ae836ada805a@xen.org>
Date: Wed, 9 Dec 2020 14:51:54 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <41587b00-2899-65a6-3867-97664529fdab@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 09/12/2020 14:47, Jan Beulich wrote:
> On 09.12.2020 12:33, Bertrand Marquis wrote:
>> I will review this today, sorry for the delay.
> 
> Thanks for the reviews, and no problem at all. Since iirc it was
> you who asked on the last community call, I wanted to point out
> that despite your reviews and despite Wei's acks the series
> still won't be able to go in, because patches 2 and 3 are still
> lacking Arm maintainer acks.

I am waiting on Anthony's input before giving my ack on patch 2. I am 
not sure whether he is already on holiday.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 14:54:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 14:54:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48370.85520 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0rb-0002aQ-J4; Wed, 09 Dec 2020 14:54:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48370.85520; Wed, 09 Dec 2020 14:54:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0rb-0002aJ-Fz; Wed, 09 Dec 2020 14:54:43 +0000
Received: by outflank-mailman (input) for mailman id 48370;
 Wed, 09 Dec 2020 14:54:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uDNN=FN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kn0rZ-0002aE-Jw
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 14:54:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 81454537-124c-4819-8110-54e1814c0d24;
 Wed, 09 Dec 2020 14:54:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E93C9AC9A;
 Wed,  9 Dec 2020 14:54:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 81454537-124c-4819-8110-54e1814c0d24
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607525680; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=54RpgZ26Znpuu9TFmjXOPDEZrFmM1w24OJPQ2n8VQos=;
	b=LOSP5q+dfr+rDc6L5FUZSFdGKYx5duJZqAwzpB6AM8hWkSQL5N0p8Jgi/uH8N6iPTndLWP
	Hs1Dk41ZaMO60tmN5+OKGbqooi+xC0XwaAxII5RrVVBCbY4fbWyCFkaJPqlrUQGpQSQCH+
	okGhttERwtCrV+rg/Ln3i31fFYTnUoY=
Subject: Re: [PATCH v2 7/8] lib: move bsearch code
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <julien@xen.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
 <87a20884-5a76-a664-dcc9-bd4becee40b3@suse.com>
 <44ffc041-cacd-468e-a835-f5b2048bb201@xen.org>
 <2cf3a90d-f463-41f8-f861-6ef00279b204@suse.com>
 <2419eccf-c696-6aa1-ada4-0f7bd6bc5657@xen.org>
 <77534dc3-bdd6-f884-99e3-90dc9b02a81f@citrix.com>
 <59a4e1c1-ea39-1846-92ae-92560db4b1fb@xen.org>
 <e0782c3b-9958-3792-eab9-d3fd6708225f@suse.com>
 <5fc44865-2115-947c-bd22-b51d7f17d39c@xen.org>
 <689373AE-AF16-429F-818C-0467485E5748@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <80ee54b1-5fd1-2aa8-606f-279c4b81a7ad@suse.com>
Date: Wed, 9 Dec 2020 15:54:39 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <689373AE-AF16-429F-818C-0467485E5748@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.12.2020 15:27, Bertrand Marquis wrote:
>> On 9 Dec 2020, at 09:41, Julien Grall <julien@xen.org> wrote:
>> On 07/12/2020 10:23, Jan Beulich wrote:
>>> On 24.11.2020 17:57, Julien Grall wrote:
>>>> On 24/11/2020 00:40, Andrew Cooper wrote:
>>>>> On a totally separate point, I wonder if we'd be better off compiling
>>>>> with -fgnu89-inline, because I can't see any case where we'd want the C99
>>>>> inline semantics anywhere in Xen.
>>>>
>>>> This was one of my points above. It feels that if we want to use the
>>>> behavior in Xen, then it should be everywhere rather than just for this helper.
>>> I'll be committing the series up to patch 6 in a minute. It remains
>>> unclear to me whether your responses on this sub-thread are meant
>>> to be an objection, or just a comment. Andrew gave his R-b despite
>>> this separate consideration, and I now also have an ack from Wei
>>> for the entire series. Please clarify.
>>
>> It still feels strange to apply to one function and not the others... But I don't have a strong objection against the idea of using C99 inlines in Xen.
>>
>> IOW, I will neither Ack nor NAck this patch.
> 
> I think like Julien here: why do this inlining for this function and not for the
> others provided by the library?
> Could you explain the reasons for this, or the use cases you expect?
> 
> I see 2 usages of bsearch in Arm code and I do not get why you are doing this for
> bsearch and not for the other functions.

May I ask whether you read Andrew's quite exhaustive reply to Julien
asking about this? Apart from this, it was Andrew who had asked to
inline bsearch(), and I followed that request. The initial version
of this series didn't have any inlining of these library functions
(which, after all, also isn't the goal of the series).

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 14:56:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 14:56:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48377.85534 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0tL-0002kL-7L; Wed, 09 Dec 2020 14:56:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48377.85534; Wed, 09 Dec 2020 14:56:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0tL-0002kE-4B; Wed, 09 Dec 2020 14:56:31 +0000
Received: by outflank-mailman (input) for mailman id 48377;
 Wed, 09 Dec 2020 14:56:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uDNN=FN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kn0tK-0002k8-BB
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 14:56:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 30da2f88-ee82-421f-8f21-80c8a4dc4632;
 Wed, 09 Dec 2020 14:56:29 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DC0E8AC9A;
 Wed,  9 Dec 2020 14:56:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 30da2f88-ee82-421f-8f21-80c8a4dc4632
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607525789; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=cZ9X8tPyiBYCjlpfW2dYi3z3C5S4jPnmYJ28bKkFqHA=;
	b=LIo2qftjZUUwlM5R0bbGTiGu92mxPxVqD5EhWikq+ZvbnD+kZQncvrBnGRK2qwnbQ3Akq5
	9AlquJrlOSsOajteWnMvRxPdS1E9LCA8MXwbt+oPIZPbxvB9YtUshyviE04yLrarVBtFwP
	4T1Hiursnv0KGB5mdSIs9H74t4AoXWQ=
Subject: Re: [PATCH v3 0/8] xen: beginnings of moving library-like code into
 an archive
To: Julien Grall <julien@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
 <509B2BDB-A226-4328-A75E-33AAF74BE45B@arm.com>
 <41587b00-2899-65a6-3867-97664529fdab@suse.com>
 <6d56c227-2828-22fc-61a9-ae836ada805a@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5d2fe1ea-a3b1-1d24-9e8e-3fba16a5f294@suse.com>
Date: Wed, 9 Dec 2020 15:56:28 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <6d56c227-2828-22fc-61a9-ae836ada805a@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.12.2020 15:51, Julien Grall wrote:
> On 09/12/2020 14:47, Jan Beulich wrote:
>> On 09.12.2020 12:33, Bertrand Marquis wrote:
>>> I will review this today, sorry for the delay.
>>
>> Thanks for the reviews, and no problem at all. Since iirc it was
>> you who asked on the last community call, I wanted to point out
>> that despite your reviews and despite Wei's acks the series
>> still won't be able to go in, because patches 2 and 3 are still
>> lacking Arm maintainer acks.
> 
> I am waiting on Anthony's input before giving my ack on patch 2.

Well, okay. I'll continue to wait then. What about patch 3 though?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 15:01:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 15:01:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48384.85546 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0yD-0003k4-RJ; Wed, 09 Dec 2020 15:01:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48384.85546; Wed, 09 Dec 2020 15:01:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn0yD-0003jx-OH; Wed, 09 Dec 2020 15:01:33 +0000
Received: by outflank-mailman (input) for mailman id 48384;
 Wed, 09 Dec 2020 15:01:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8mlr=FN=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1kn0yC-0003jr-FK
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 15:01:32 +0000
Received: from wout3-smtp.messagingengine.com (unknown [64.147.123.19])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2a802157-34fe-463e-9be6-18d48d0d6f00;
 Wed, 09 Dec 2020 15:01:30 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.west.internal (Postfix) with ESMTP id D83E5AFB;
 Wed,  9 Dec 2020 10:01:28 -0500 (EST)
Received: from mailfrontend2 ([10.202.2.163])
 by compute3.internal (MEProxy); Wed, 09 Dec 2020 10:01:29 -0500
Received: from mail-itl (ip5b40aa59.dynamic.kabel-deutschland.de
 [91.64.170.89])
 by mail.messagingengine.com (Postfix) with ESMTPA id 11C831080063;
 Wed,  9 Dec 2020 10:01:25 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a802157-34fe-463e-9be6-18d48d0d6f00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; bh=Jc/F6i
	K7o3PUb9Iz+/A5vMzJfnPabacuo83uCpBn6y0=; b=EWQksCubZPK5tzGqR9z1aB
	19JQFgBR7bbwgcoSiNDOk4+LXxOLkZq0zL9awtR8Zcxe/WgdxQeLtJ/DI437/tmW
	yFudNwDoOGZeWXM6qPLGvFyGaRdW/kXls1VCDUzJHTsYGVZEvKB/zUXvTyVX3kMW
	Qa0F5r3MYUdqsjE4XVoLCncx4ODHF6SpTrlMio9AEieVvF0vQYRkPcTEG3npa0Pk
	0FneNkf4CF1QPQ/y3vGKVcV3a4iaCSLoultj9dKLvtdBb9rpmvxN/scqFJFZEEgo
	1j1t4SKCzmL3VqRNEY38bjiB7mMS4qvsQkfrnCF0w+ToGcAARTJMzq1ro4a4rIMg
	==
X-ME-Sender: <xms:xubQX3DlpGN_uwJpsJon-ywhUfffnAPOtSgIkpWC8P6GNiqrMQwS9Q>
    <xme:xubQX9hsjn6KQ73-_JoO89htyFfpp-yWuCow3VtNoD9jRwdaeCJzRyxFTGLb8-SU3
    6LUNJjDDob3wg>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedrudejkedgjedvucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvffukfhfgggtuggjsehgtderredttdejnecuhfhrohhmpeforghrvghk
    ucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesihhnvh
    hishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeetveff
    iefghfekhffggeeffffhgeevieektedthfehveeiheeiiedtudegfeetffenucfkpheple
    durdeigedrudejtddrkeelnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehm
    rghilhhfrhhomhepmhgrrhhmrghrvghksehinhhvihhsihgslhgvthhhihhnghhslhgrsg
    drtghomh
X-ME-Proxy: <xmx:xubQXynA7nbRAsarVjEJgTrMGdAPmrnjNsJEw8SkbajH0aC0Q1n_sA>
    <xmx:xubQX5wT_BwoY_5DC_7bwpru8kKmrKm_YQYC7CzcDywxh_mnYsi7IQ>
    <xmx:xubQX8RFBmmHTUlswgWNOHgv91iiXOYBCcdAzRIheWZ7IpgmBhAMZg>
    <xmx:yObQX15ZWgPYMiTQIR3ySsPaF8G9vWP-7AJ9qKspxJ7BTM7FIPV6Qg>
Date: Wed, 9 Dec 2020 16:01:21 +0100
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH] libxl: cleanup remaining backend xs dirs after driver
 domain
Message-ID: <20201209150121.GM1244@mail-itl>
References: <20201108145942.3089012-1-marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="+nG9yj4eE4W6Oba0"
Content-Disposition: inline
In-Reply-To: <20201108145942.3089012-1-marmarek@invisiblethingslab.com>


--+nG9yj4eE4W6Oba0
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Subject: Re: [PATCH] libxl: cleanup remaining backend xs dirs after driver
 domain

On Sun, Nov 08, 2020 at 03:59:42PM +0100, Marek Marczykowski-Górecki wrote:
> When a device is removed, the backend domain (which may be a driver domain)
> is responsible for removing the backend entries from xenstore. But in the
> case of a driver domain, it has no access to remove all of them - specifically,
> the directory named after the frontend-id remains. This may accumulate enough
> to exceed the xenstore quota of the driver domain, breaking further devices.
> 
> Fix this by calling libxl__xs_path_cleanup() on the backend path from
> libxl__device_destroy() in the toolstack domain too. Note
> libxl__device_destroy() is called when the driver domain has already removed
> what it can (see device_destroy_be_watch_cb()->device_hotplug_done()).
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Ping?

> ---
>  tools/libs/light/libxl_device.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/tools/libs/light/libxl_device.c b/tools/libs/light/libxl_device.c
> index e081faf9a94e..f131a6c822c6 100644
> --- a/tools/libs/light/libxl_device.c
> +++ b/tools/libs/light/libxl_device.c
> @@ -763,6 +763,12 @@ int libxl__device_destroy(libxl__gc *gc, libxl__device *dev)
>               * from the backend path.
>               */
>              libxl__xs_path_cleanup(gc, t, be_path);
> +        } else if (domid == LIBXL_TOOLSTACK_DOMID && !libxl_only) {
> +            /*
> +             * Then, toolstack domain is in charge of removing the parent
> +             * directory if empty already.
> +             */
> +            libxl__xs_path_cleanup(gc, t, be_path);
>          }
>  
>          rc = libxl__xs_transaction_commit(gc, &t);

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

--+nG9yj4eE4W6Oba0
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAl/Q5sEACgkQ24/THMrX
1yxqPAgAgDxlOEfg0KtsT1B3Kyx/oFwD3mJAGzzu8Bz3HJrOXVGW4HnVtnY1ivMk
bA1m3Eb6omGTEVym61QntXx36ZJgcI9fk/xOxfbvzP5UkANI3f7gHxWGhoJ0ADEo
i3OZn7/Qhu540kmFn6sW2U4lqNHozota8iRfsDy2LJKtsUyfL2YpsCdkEN9EBOxf
yy8vo5AVcWQuFKofqheJLzTrugttOlH8dQaXXOeBjN0j5uaY4AxJnfPzE9+2rK9C
Ssyy2XdRYQbmIdykcSFtPdv0GN05Fk8RamvJhkFOiJK2VFTlN7OKtRkcKJiTM6u7
grzMP/aE1c9rY5U5fXgu8T2c+nTAcw==
=dy9n
-----END PGP SIGNATURE-----

--+nG9yj4eE4W6Oba0--


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 15:07:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 15:07:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48389.85559 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn13i-0003xk-Gi; Wed, 09 Dec 2020 15:07:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48389.85559; Wed, 09 Dec 2020 15:07:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn13i-0003xd-DO; Wed, 09 Dec 2020 15:07:14 +0000
Received: by outflank-mailman (input) for mailman id 48389;
 Wed, 09 Dec 2020 15:07:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiOm=FN=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kn13g-0003xY-Kg
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 15:07:12 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.59]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2c2d57fb-eb6b-40e8-a81e-103580190b4d;
 Wed, 09 Dec 2020 15:07:11 +0000 (UTC)
Received: from DB6P195CA0017.EURP195.PROD.OUTLOOK.COM (2603:10a6:4:cb::27) by
 VI1PR0802MB2319.eurprd08.prod.outlook.com (2603:10a6:800:a0::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.21; Wed, 9 Dec
 2020 15:07:08 +0000
Received: from DB5EUR03FT004.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:cb:cafe::7b) by DB6P195CA0017.outlook.office365.com
 (2603:10a6:4:cb::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.12 via Frontend
 Transport; Wed, 9 Dec 2020 15:07:08 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT004.mail.protection.outlook.com (10.152.20.128) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12 via Frontend Transport; Wed, 9 Dec 2020 15:07:08 +0000
Received: ("Tessian outbound 665ba7fbdfd9:v71");
 Wed, 09 Dec 2020 15:07:08 +0000
Received: from b5cfdc04e386.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 2D6D664A-BE66-4963-A460-5F9959E9EDE0.1; 
 Wed, 09 Dec 2020 15:06:30 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id b5cfdc04e386.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 09 Dec 2020 15:06:30 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3036.eurprd08.prod.outlook.com (2603:10a6:5:18::31) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.31; Wed, 9 Dec
 2020 15:06:28 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3632.023; Wed, 9 Dec 2020
 15:06:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c2d57fb-eb6b-40e8-a81e-103580190b4d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OQe/52FD9Izyodr3+axSsuM6wzeIQcXtUpqM2gZ6Dog=;
 b=bjhIPopUmU3YbhCv6z56plku4LR3WJs45iYTU9pL48H3phbxeDwTLAoTcn8uah5G0abAYEJHbJhx0x7PSkKgQpNZPhxvobQT+ewgJwt1ST/bK/9hXofLSJPL2PiRTpv5TBS1K56YDecMFl+n8FrpdCLw6i7wxbMRPv9rhAOPKSk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 82d7be83f3a6fc3f
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Eutjk7iEUPMRYhg/ukCpVnKCMRetMJSbIB32Zw7RCdsKubWQ8RPd2DF6+WkJjr0hYbcHimWjMQra8Wt7KBe1Pk5+cOcFX/3OcyRv2B90IAbUs4F5QAMIUhS8qkjPY/cL5sKN1TONLldyrRljDR8o20o7cYY13zLjXEGJRswZF5rdtAkRDSJTQEpAdtrNVrVDgU6nqb/6LXirWc7DNPwVK6tGlWzsSSpdRSrIpbNVqIq2g9DI6kgOQStvBTNf0Rlr/SIQuRT+69chS8b3nNK78GipM8XURbQTLRMavYKz7r+LdpiR06cy1Vuj+Eg7R1cVf/VY3gqawJOYptbdNTrc7A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OQe/52FD9Izyodr3+axSsuM6wzeIQcXtUpqM2gZ6Dog=;
 b=cddb0VBF4GRwi/LPh+bRinUifyhD0FgaZ93KusMdY78rAJA6Z4lxGKHVjLzIRJumGXNLletZblQ4T/x+LGaBBHUURYPCUcXSH4T9L9pIhIzYmS/I6Do4qxOvdvurT6hi5R6p1x5VJ75slw1rf+mYeaolXuc9U0iUCAOVSSIqFVk03IPR5AvdRKw31HOzzo+7fnFDVdEsdfA3QjlSkQY/2EyBMb+jPSZdflq4noaEc2kIJHAziCYlR/Mnbt7rZG0+kbvjnx0tG3dQ5SfHAlo+7kMycKtaGzOHGtwZJSUqMytt+tjq/0IYmdjL8Zfhl8mHaaYyo9b4iviCDfGTnjw1Hg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OQe/52FD9Izyodr3+axSsuM6wzeIQcXtUpqM2gZ6Dog=;
 b=bjhIPopUmU3YbhCv6z56plku4LR3WJs45iYTU9pL48H3phbxeDwTLAoTcn8uah5G0abAYEJHbJhx0x7PSkKgQpNZPhxvobQT+ewgJwt1ST/bK/9hXofLSJPL2PiRTpv5TBS1K56YDecMFl+n8FrpdCLw6i7wxbMRPv9rhAOPKSk=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 7/8] lib: move bsearch code
Thread-Topic: [PATCH v2 7/8] lib: move bsearch code
Thread-Index:
 AQHWqSYUW6jKcf9JAUabryyJK6872KnOWaMAgAERSYCABxiQgIAAHyEAgAEQxgCAFABlAIADGMmAgABP5wCAAAergIAAA00A
Date: Wed, 9 Dec 2020 15:06:28 +0000
Message-ID: <CBC505FF-7D23-4AC5-9EAD-91E833EB6B45@arm.com>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
 <87a20884-5a76-a664-dcc9-bd4becee40b3@suse.com>
 <44ffc041-cacd-468e-a835-f5b2048bb201@xen.org>
 <2cf3a90d-f463-41f8-f861-6ef00279b204@suse.com>
 <2419eccf-c696-6aa1-ada4-0f7bd6bc5657@xen.org>
 <77534dc3-bdd6-f884-99e3-90dc9b02a81f@citrix.com>
 <59a4e1c1-ea39-1846-92ae-92560db4b1fb@xen.org>
 <e0782c3b-9958-3792-eab9-d3fd6708225f@suse.com>
 <5fc44865-2115-947c-bd22-b51d7f17d39c@xen.org>
 <689373AE-AF16-429F-818C-0467485E5748@arm.com>
 <80ee54b1-5fd1-2aa8-606f-279c4b81a7ad@suse.com>
In-Reply-To: <80ee54b1-5fd1-2aa8-606f-279c4b81a7ad@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 76bc1d98-afc4-45d7-5c09-08d89c5418fe
x-ms-traffictypediagnostic: DB7PR08MB3036:|VI1PR0802MB2319:
X-Microsoft-Antispam-PRVS:
	<VI1PR0802MB23194A80731BF016FE209C759DCC0@VI1PR0802MB2319.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 40TKpLQdEt6dVTaPWrUm0e0jeejM4MOo8gl6maFe+Qogf2aR8U3QmgGtMyHNQZA5mnUT5GEpDIjcySllANYKvhYT3OorJ0bQ1Ler9z0nsTE79vLhPmKXLS6ItlLwK8680egWvEsftEcaQ5Bc6ej6rQwz50AU6Ao9M4hg3f46iKVqumdkosMPhFp6a/sSg5U/SjVyAdqdVmgDeJn6lxhtd/597z5679KAO5EF6JM1NY5Wy1tzG3NQcFuKsiNnUdJ/X6W2GZaE4ZQbuYzXTSfsrxnGZ32vA44wcTe0PcdM8lEOChOKdI4Gen4p0KcvI/La68kL1fX5YujXrhQe1RcTDtdght0wDXM+S7dNO/ClH04XbXntnL6guJSd07Bx3l54
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(346002)(396003)(366004)(39860400002)(136003)(54906003)(6916009)(71200400001)(66946007)(6512007)(33656002)(8676002)(316002)(2616005)(66476007)(2906002)(53546011)(86362001)(36756003)(478600001)(83380400001)(26005)(6486002)(8936002)(91956017)(6506007)(186003)(66446008)(5660300002)(4326008)(76116006)(66556008)(64756008)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?us-ascii?Q?jAa6TZMuSYzs9jH+7buaTUwV1Ol9M4XHPH05iD/Lrf8QnhwSfKyqSm/EGDuo?=
 =?us-ascii?Q?bcLd3ANKKsUu/02XrQ6V3D+KSrwHHjp4SuGTUql5QABWqOK5hR4FBWVOVEP0?=
 =?us-ascii?Q?AxTTE7o7AWtgvdZMx4ffDkV3Rqihe+XdeRMcgPg56BITVTZvUM7aC6vmzldL?=
 =?us-ascii?Q?GJn7ygDZbroCnSpw310oZDYlItIJWGOk659GOSh6V5ASocV/7T+ava5BTKIg?=
 =?us-ascii?Q?p+lXmQb2+NmWJn9dJSfs+aTyCaj+Vdt7UZT4qOLjzA3lYtybBwWQo+BUnRrC?=
 =?us-ascii?Q?VBIg4esDfJN2ovToz/uTJKlbLbtAmg0HO7P9ajnQBcTZnqwLPdRALW8Ibeuu?=
 =?us-ascii?Q?Q8zBQr4wm79d6+2MeIGfky7gdP9YdrB0tsKho3x8o1wKgwYPa0TaLxYMTsn2?=
 =?us-ascii?Q?xAqASjBxcb3FM61fiu7V8bLSDjSDRtFcCFml9Fl2qmzdk1Q3e5FrI1O9yBbI?=
 =?us-ascii?Q?smERPbLOToqbjlH8I1+EFunMACze0LH7d8ZpR3RAkHSNx61cEMKaF8YNJuin?=
 =?us-ascii?Q?BaEmUKTzNMB0jL/1X+k7CU4eAsTVMU7KeMWmMDa8Hqsc7zx5YhYnlt47Peha?=
 =?us-ascii?Q?wdfUWq7rHnhvNeTRod9zDK1Ga5cXlPReUueUT+gH0Uu/OkhhwcE7DuYXbiIX?=
 =?us-ascii?Q?gSQWgPK68Y+d3H5SPefeCjDZeq0ZGsWPZVCB4lJoMsicsL6v/zardJ0ICOsC?=
 =?us-ascii?Q?u/plg19iW1deJzADXVQIj3e/+Gi9qKZgNyQQ9lO1cXv612gtd+Q550bnLSku?=
 =?us-ascii?Q?hj1DgTvT2dP3TSUKta0ozuzlQT1jNtdhh72FsSqFbwiWib/CyGTT6CudVH95?=
 =?us-ascii?Q?nVrDLhTyYCzXOPSYGZhSpiIVrcG79FJS6eBizz7TUd1HU1j0/50L5D+Yc5yq?=
 =?us-ascii?Q?C6IgGiMA7WS1QUZ4D22DBuIzJILNOY3Bw2Fz3e9nGw3m+FelwiIz72pm7sSg?=
 =?us-ascii?Q?hIicB89YX7h9+WJjUfCgbHGsOP/w5EcDGe42s+CP069AssCLIntNBJvQxjoX?=
 =?us-ascii?Q?PhIO?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <DB9792C7A7B0CE43B9045AD735A0E178@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3036
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT004.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	16b370c6-94fc-4795-86d9-08d89c540166
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	/5e+Fqjj6SFkVNL7Gr6DHR8lLsOWp75ekUv0I+PxK0w0L+GWsN1mkkprlwYjiAxZE2DncZfcqKLhHsVpndWpRic9VNoTZ7t3+GZ2E+Cio3FaQJ12RTS9Ph3M+CbdQetSvPBlcC0Zl5Oi0TDyGeGOYT0Q5CepwXIg5y9E/vSMsC28T4YY74tkmV6bAsY32cGfeHbPC2buP2svvyvqkM932rStW70KflyxzQrDwEoFRBucSKK/2pdUXWTDz3lPl8cPN0MbjoeyCBaFvO1xBejCG6+lf1J36psKMTQwBYeD8Nk9fb0bepceyRLPeaP6010Esq/ucT/xFgHzrWF3GhQ5roD3qgezA1RihQRjOypmkDQyn+WSk13ymutchz4wghDVbhyg23aCOBoWViZvYUT25w==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(136003)(376002)(46966005)(6862004)(8936002)(81166007)(70206006)(83380400001)(5660300002)(47076004)(8676002)(2906002)(356005)(6486002)(54906003)(86362001)(186003)(70586007)(26005)(508600001)(6512007)(336012)(36756003)(33656002)(82310400003)(6506007)(53546011)(4326008)(2616005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Dec 2020 15:07:08.3364
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 76bc1d98-afc4-45d7-5c09-08d89c5418fe
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT004.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0802MB2319

Hi Jan,

> On 9 Dec 2020, at 14:54, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 09.12.2020 15:27, Bertrand Marquis wrote:
>>> On 9 Dec 2020, at 09:41, Julien Grall <julien@xen.org> wrote:
>>> On 07/12/2020 10:23, Jan Beulich wrote:
>>>> On 24.11.2020 17:57, Julien Grall wrote:
>>>>> On 24/11/2020 00:40, Andrew Cooper wrote:
>>>>>> On a totally separate point,  I wonder if we'd be better off compiling
>>>>>> with -fgnu89-inline because I can't see any case we're we'd want the C99
>>>>>> inline semantics anywhere in Xen.
>>>>> 
>>>>> This was one of my point above. It feels that if we want to use the
>>>>> behavior in Xen, then it should be everywhere rather than just this helper.
>>>> I'll be committing the series up to patch 6 in a minute. It remains
>>>> unclear to me whether your responses on this sub-thread are meant
>>>> to be an objection, or just a comment. Andrew gave his R-b despite
>>>> this separate consideration, and I now also have an ack from Wei
>>>> for the entire series. Please clarify.
>>>=20
>>> It still feels strange to apply to one function and not the others... B=
ut I don't have a strong objection against the idea of using C99 inlines in=
 Xen.
>>>=20
>>> IOW, I will neither Ack nor NAck this patch.
>>=20
>> I think as Julien here: why doing this inline thing for this function an=
d not the others
>> provided by the library ?
>> Could you explain the reasons for this or the use cases you expect ?
>>=20
>> I see 2 usage of bsearch in arm code and I do not get why you are doing =
this for
>> bsearch and not for the other functions.
>=20
> May I ask whether you read Andrew's quite exhaustive reply to Julien
> asking about this? Apart from this, it was Andrew who had asked to
> inline bsearch(), and I followed that request. The initial version
> of this series didn't have any inlining of these library functions
> (which, after all, also isn't the goal of the series).

I just did (sorry, I missed that one in the thread history).

But seeing where this is called, and how the code looks with this
change, I do not really think the complexity introduced here will
yield a real win in performance or code size; it does make the code
more complex.

So I would rather have the inline part taken out, but the code itself
is correct:
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

so that this is unblocked.

Regards,
Bertrand

> 
> Jan



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 15:45:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 15:45:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48396.85571 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn1e0-0007l4-Dk; Wed, 09 Dec 2020 15:44:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48396.85571; Wed, 09 Dec 2020 15:44:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn1e0-0007kx-AY; Wed, 09 Dec 2020 15:44:44 +0000
Received: by outflank-mailman (input) for mailman id 48396;
 Wed, 09 Dec 2020 15:44:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XdhY=FN=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kn1dz-0007ks-BI
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 15:44:43 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a37f9c88-de81-4f7e-83e9-21da9fc1d759;
 Wed, 09 Dec 2020 15:44:41 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0B9FiaAv014436
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Wed, 9 Dec 2020 16:44:37 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 3983E2E946C; Wed,  9 Dec 2020 16:44:31 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a37f9c88-de81-4f7e-83e9-21da9fc1d759
Date: Wed, 9 Dec 2020 16:44:31 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: dom0 PV looping on search_pre_exception_table()
Message-ID: <20201209154431.GA4913@antioche.eu.org>
References: <20201208175738.GA3390@antioche.eu.org>
 <e73cc71d-c1a6-87c8-1b82-5d70d4f52eaa@citrix.com>
 <20201209101512.GA1299@antioche.eu.org>
 <3f7e50bb-24ad-1e32-9ea1-ba87007d3796@citrix.com>
 <20201209135908.GA4269@antioche.eu.org>
 <c612616a-3fcd-be93-7594-20c0c3b71b7a@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <c612616a-3fcd-be93-7594-20c0c3b71b7a@citrix.com>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Wed, 09 Dec 2020 16:44:37 +0100 (MET)

On Wed, Dec 09, 2020 at 02:41:23PM +0000, Andrew Cooper wrote:
> 
> Huh, so it is the LDT, but we're not getting as far as inspecting the
> target frame.
> 
> I wonder if the LDT is set up correctly.

I guess it is; otherwise it wouldn't boot under Xen 4.13 either, would it?

> How about this incremental delta?

Here's the output
(XEN) IRET fault: #PF[0000]                                                    
(XEN) %cr2 ffff820000010040, LDT base ffffc4800000a000, limit 0057             
(XEN) *** pv_map_ldt_shadow_page(0x40) failed                                  
(XEN) IRET fault: #PF[0000]                                                    
(XEN) %cr2 ffff820000010040, LDT base ffffc4800000a000, limit 0057             
(XEN) *** pv_map_ldt_shadow_page(0x40) failed                                  
(XEN) IRET fault: #PF[0000]                                                 
(XEN) %cr2 ffff820000010040, LDT base ffffc4800000a000, limit 0057
(XEN) *** pv_map_ldt_shadow_page(0x40) failed
(XEN) IRET fault: #PF[0000]
(XEN) %cr2 ffff820000010040, LDT base ffffc4800000a000, limit 0057
(XEN) domain_crash called from extable.c:219
(XEN) Domain 0 (vcpu#0) crashed on cpu#0:
(XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:   C   ]----
(XEN) CPU:    0
(XEN) RIP:    0047:[<00007f7ecaa007d0>]
(XEN) RFLAGS: 0000000000000202   EM: 0   CONTEXT: pv guest (d0v0)
(XEN) rax: ffff82d04038c309   rbx: 0000000000000000   rcx: 000000000000e008
(XEN) rdx: 0000000000010086   rsi: ffff83007fcb7f78   rdi: 000000000000e010
(XEN) rbp: 0000000000000000   rsp: 00007f7fff32e3f0   r8:  0000000e00000000
(XEN) r9:  0000000000000000   r10: 0000000000000000   r11: 0000000000000000
(XEN) r12: 0000000000000000   r13: 0000000000000000   r14: 0000000000000000
(XEN) r15: 0000000000000000   cr0: 0000000080050033   cr4: 0000000000002660
(XEN) cr3: 0000000079cdb000   cr2: ffffc4800000a040
(XEN) fsb: 0000000000000000   gsb: 0000000000000000   gss: ffffffff80cf2dc0
(XEN) ds: 0023   es: 0023   fs: 0000   gs: 0000   ss: 003f   cs: 0047
(XEN) Guest stack trace from rsp=00007f7fff32e3f0:
(XEN)    0000000000000001 00007f7fff32e908 0000000000000000 0000000000000000
(XEN)    0000000000000003 0000000173e00040 0000000000000004 0000000000000038
(XEN)    0000000000000005 0000000000000008 0000000000000006 0000000000001000
(XEN)    0000000000000007 00007f7ecaa00000 0000000000000008 0000000000000000
(XEN)    0000000000000009 0000000173e01cd0 00000000000007d0 0000000000000000
(XEN)    00000000000007d1 0000000000000000 00000000000007d2 0000000000000000
(XEN)    00000000000007d3 0000000000000000 000000000000000d 00007f7fff32f000
(XEN)    00000000000007de 00007f7fff32e4f0 0000000000000000 0000000000000000
(XEN)    6e692f6e6962732f 0000000000007469 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) Hardware Dom0 crashed: rebooting machine in 5 seconds.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 15:55:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 15:55:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48402.85583 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn1o1-0000OJ-Gz; Wed, 09 Dec 2020 15:55:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48402.85583; Wed, 09 Dec 2020 15:55:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn1o1-0000OC-D4; Wed, 09 Dec 2020 15:55:05 +0000
Received: by outflank-mailman (input) for mailman id 48402;
 Wed, 09 Dec 2020 15:55:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0kem=FN=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kn1nz-0000O2-TZ
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 15:55:04 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [81.169.146.160])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 193b3f80-78c8-41b1-9d02-4ede601b9ffe;
 Wed, 09 Dec 2020 15:55:01 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.6.3 DYNA|AUTH)
 with ESMTPSA id 007720wB9Fst0b8
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 9 Dec 2020 16:54:55 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 193b3f80-78c8-41b1-9d02-4ede601b9ffe
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1607529300;
	s=strato-dkim-0002; d=aepfle.de;
	h=Message-Id:Date:Subject:Cc:To:From:From:Subject:Sender;
	bh=OFKa3btZzlVY9IFyVAhE+lCMTDnS9LRZ3EMEfMGxlXM=;
	b=Gvx2UPoo8egL6unaM2cLF+QU4V2KXJD43rGE9TtcF5aPEiIYMcV7CKBpK1Q6VkLk3c
	LOUlELThVMO30qszvH91zMApd6zN6ONeyqQT4rUeaAdAKDExaBIlZv9VqfoxcWV65H44
	FbNSCNVcRVlgDZTty2uYXWzknyJDDSy690NI7E5gNFkA4rcBxiJzvO+B67CnLLWvYePs
	KRLWidGi4Wav9ykskYz1hbqRhQihKiJCvL1ocRSFtLFRoy0/jryfYtzqbmDG6iyqHXOR
	jS2QEcxJUn/QDwgozz0P72m5TmgupcMhbJfq8uZddKqPjvc/W/28wDJs4KEsRiDh+fgf
	62Dw==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuz7MdiQehTvc3KJf+TWodQ=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1 1/3] tools: allocate bitmaps in units of unsigned long
Date: Wed,  9 Dec 2020 16:54:49 +0100
Message-Id: <20201209155452.28376-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Allocate enough memory so that the returned pointer can be safely
accessed as an array of unsigned long.

The actual bitmap size in units of bytes, as returned by bitmap_size,
remains unchanged.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/ctrl/xc_bitops.h | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/tools/libs/ctrl/xc_bitops.h b/tools/libs/ctrl/xc_bitops.h
index 3d3a09772a..d6c5ea5138 100644
--- a/tools/libs/ctrl/xc_bitops.h
+++ b/tools/libs/ctrl/xc_bitops.h
@@ -21,7 +21,10 @@ static inline unsigned long bitmap_size(unsigned long nr_bits)
 
 static inline void *bitmap_alloc(unsigned long nr_bits)
 {
-    return calloc(1, bitmap_size(nr_bits));
+    unsigned long longs;
+
+    longs = (nr_bits + BITS_PER_LONG - 1) / BITS_PER_LONG;
+    return calloc(longs, sizeof(unsigned long));
 }
 
 static inline void bitmap_set(void *addr, unsigned long nr_bits)


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 15:55:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 15:55:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48403.85588 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn1o1-0000Ok-Pn; Wed, 09 Dec 2020 15:55:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48403.85588; Wed, 09 Dec 2020 15:55:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn1o1-0000OX-KP; Wed, 09 Dec 2020 15:55:05 +0000
Received: by outflank-mailman (input) for mailman id 48403;
 Wed, 09 Dec 2020 15:55:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0kem=FN=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kn1o0-0000O7-7s
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 15:55:04 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [81.169.146.216])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 33b82ef4-bbf4-4c02-a086-fc70b1815583;
 Wed, 09 Dec 2020 15:55:02 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.6.3 DYNA|AUTH)
 with ESMTPSA id 007720wB9Fsv0b9
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 9 Dec 2020 16:54:57 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 33b82ef4-bbf4-4c02-a086-fc70b1815583
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1607529301;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:From:
	Subject:Sender;
	bh=DKL6ewFHEenBMo6vG4Fk2Fh+oKBRcf8oXazH6dWBh0w=;
	b=qTbLOn0LUJEyKmdhsn8RRZ3dC2L6b50tQtlFBRwsUKsDe7iE+FGqIx24SHulLvoDGS
	Hy7Ig8qIxbLnmQK8zcAMNBkAU994dgOByIXDOKSEL7qoCK32oJzyFfZprjuB4HndjLYQ
	/PGTV2ANwi6DofIhZXvRRgbzMH/QPNG4htS3eLcv2XR8q2ITWXwj6lYuSMfg9Lq8Dk5n
	FmIRfXkfxPl9BanVwxI0OpscY9gxBmqQbiqiaCwSemqJglNEtQlUu225O/87uISW7ti5
	v4guYrkdGgZZCdxY0chtRmp3UuvC//SG+FX/zA7IuRyosOeYpXY3dnuRyhClkmA3kD20
	AB4g==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuz7MdiQehTvc3KJf+TWodQ=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1 2/3] tools: remove unused ORDER_LONG
Date: Wed,  9 Dec 2020 16:54:50 +0100
Message-Id: <20201209155452.28376-2-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201209155452.28376-1-olaf@aepfle.de>
References: <20201209155452.28376-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

There are no users left; xenpaging has its own variant.
The last user was removed with commit 11d0044a168994de85b9b328452292852aedc871.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/ctrl/xc_bitops.h | 2 --
 1 file changed, 2 deletions(-)

diff --git a/tools/libs/ctrl/xc_bitops.h b/tools/libs/ctrl/xc_bitops.h
index d6c5ea5138..f0bac4a071 100644
--- a/tools/libs/ctrl/xc_bitops.h
+++ b/tools/libs/ctrl/xc_bitops.h
@@ -6,9 +6,7 @@
 #include <stdlib.h>
 #include <string.h>
 
-/* Needed by several includees, but no longer used for bitops. */
 #define BITS_PER_LONG (sizeof(unsigned long) * 8)
-#define ORDER_LONG (sizeof(unsigned long) == 4 ? 5 : 6)
 
 #define BITMAP_ENTRY(_nr,_bmap) ((_bmap))[(_nr) / 8]
 #define BITMAP_SHIFT(_nr) ((_nr) % 8)


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 15:55:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 15:55:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48404.85606 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn1o5-0000Rz-VW; Wed, 09 Dec 2020 15:55:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48404.85606; Wed, 09 Dec 2020 15:55:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn1o5-0000Rr-SU; Wed, 09 Dec 2020 15:55:09 +0000
Received: by outflank-mailman (input) for mailman id 48404;
 Wed, 09 Dec 2020 15:55:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0kem=FN=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kn1o4-0000O2-KB
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 15:55:08 +0000
Received: from mo4-p01-ob.smtp.rzone.de (unknown [81.169.146.164])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 31da302d-6595-43d6-aebb-98ed20980d29;
 Wed, 09 Dec 2020 15:55:03 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.6.3 DYNA|AUTH)
 with ESMTPSA id 007720wB9Fsx0bB
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 9 Dec 2020 16:54:59 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 31da302d-6595-43d6-aebb-98ed20980d29
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1607529303;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:From:
	Subject:Sender;
	bh=ti7wpap7OKPtBty/ilhF07cZXxS6hyTWcz0abirc9Nk=;
	b=IqFVHRvEQJI8EdzsqLVU4d73cnsrkHaKATHpes1Gv8xzJG4nRbXnMc3Tsg9ovNcrXA
	Gcq+2lQHFp4cP9disWVwv3+8cc6XI3esQCp/bR+fbUAcNeA4roRNJYVglLxlHCfia2z8
	CIBPXUPnO/YhoOm/0ayf8RUP1ZOeGUPZ9yxMPRRDKgh5gR3GZBVOW3n8N5qXWV7WMI0G
	oAOs9cLvagI3VwzJGL4OI7K0DVSMoFSLyPvL2wifIUDHkpC5TLknyFHtf3EJdXHIp1Ti
	XOMV350sUKuZiuFQUcyfPHxuzVBujESqdPlcwM3YGtiZI6HOoxjw1FfABNz011hsTnKf
	Socw==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuz7MdiQehTvc3KJf+TWodQ=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1 3/3] tools: add API to work with several bits at once
Date: Wed,  9 Dec 2020 16:54:51 +0100
Message-Id: <20201209155452.28376-3-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201209155452.28376-1-olaf@aepfle.de>
References: <20201209155452.28376-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce a new API to test whether a fixed number of bits is clear or
set, and to clear or set them all at once.

The caller has to make sure the input bit number is a multiple of
BITS_PER_LONG.

This API avoids looping over each bit in a known range just to see
whether all of them are either clear or set.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/ctrl/xc_bitops.h | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/tools/libs/ctrl/xc_bitops.h b/tools/libs/ctrl/xc_bitops.h
index f0bac4a071..92f38872fb 100644
--- a/tools/libs/ctrl/xc_bitops.h
+++ b/tools/libs/ctrl/xc_bitops.h
@@ -77,4 +77,29 @@ static inline void bitmap_or(void *_dst, const void *_other,
         dst[i] |= other[i];
 }
 
+static inline int test_bit_long_set(unsigned long nr_base, const void *_addr)
+{
+    const unsigned long *addr = _addr;
+    unsigned long val = addr[nr_base / BITS_PER_LONG];
+    return val == ~0UL;
+}
+
+static inline int test_bit_long_clear(unsigned long nr_base, const void *_addr)
+{
+    const unsigned long *addr = _addr;
+    unsigned long val = addr[nr_base / BITS_PER_LONG];
+    return val == 0;
+}
+
+static inline void clear_bit_long(unsigned long nr_base, void *_addr)
+{
+    unsigned long *addr = _addr;
+    addr[nr_base / BITS_PER_LONG] = 0;
+}
+
+static inline void set_bit_long(unsigned long nr_base, void *_addr)
+{
+    unsigned long *addr = _addr;
+    addr[nr_base / BITS_PER_LONG] = ~0UL;
+}
 #endif  /* XC_BITOPS_H */


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 16:00:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 16:00:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48420.85619 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn1sw-0001tt-Jy; Wed, 09 Dec 2020 16:00:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48420.85619; Wed, 09 Dec 2020 16:00:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn1sw-0001tm-GM; Wed, 09 Dec 2020 16:00:10 +0000
Received: by outflank-mailman (input) for mailman id 48420;
 Wed, 09 Dec 2020 16:00:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NJeK=FN=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kn1sv-0001th-VF
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 16:00:09 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bd73d7aa-f145-43ba-9dec-01c1250e9d71;
 Wed, 09 Dec 2020 16:00:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bd73d7aa-f145-43ba-9dec-01c1250e9d71
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1607529608;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=ZApL8V1v0OtWA7YuFI4qL9rpovu2jonLWpRVf2y2jTg=;
  b=if2b1TLLcxi2b1DQYhz7SNd7wMShOmWeRocIse7qwN7aQmnipGJ+iQvQ
   eEpnCHepD+JNAFhjBTIvhUxVfWTNpBM6G7ErBTxJ+Opaxbi8SHhvb0rtz
   z8FGYY34s7y7KC6dyZwihwS1Olj75ENtX87QKF7ivXxXja8WG9AofkKnB
   A=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: qAAMO42tiHbdG3unZHWa1BN7kCR2vVzwvIjP1g0jNOlWDlXLhRc7u609aSWX6+L2cFmYmxltU1
 xvj1u+ZWjuDmTWtbfW/B5pXngGkSDJAI5dBt7wQM8t73WW6VPeDfZeIYBhhpj4ewtQrf9vhSFn
 ckmBqUzpSiP4DUoKsFO/guQqPbCY0OJss/dilmU0uMkabvhUkrf+nwo7Ij0JhfZQi5Hxs51b1P
 4UdLhC/Tzw5+yLMdyTBAcukcVHp4LxBKgHP+r0k2agXY54OFiGkSfBhA9rIFTrVi8IJi/otkG9
 X6w=
X-SBRS: 5.2
X-MesageID: 34066157
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,405,1599537600"; 
   d="scan'208";a="34066157"
Subject: Re: dom0 PV looping on search_pre_exception_table()
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: <xen-devel@lists.xenproject.org>
References: <20201208175738.GA3390@antioche.eu.org>
 <e73cc71d-c1a6-87c8-1b82-5d70d4f52eaa@citrix.com>
 <20201209101512.GA1299@antioche.eu.org>
 <3f7e50bb-24ad-1e32-9ea1-ba87007d3796@citrix.com>
 <20201209135908.GA4269@antioche.eu.org>
 <c612616a-3fcd-be93-7594-20c0c3b71b7a@citrix.com>
 <20201209154431.GA4913@antioche.eu.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <52e1b10d-75d4-63ac-f91e-cb8f0dcca493@citrix.com>
Date: Wed, 9 Dec 2020 16:00:02 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201209154431.GA4913@antioche.eu.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 09/12/2020 15:44, Manuel Bouyer wrote:
> On Wed, Dec 09, 2020 at 02:41:23PM +0000, Andrew Cooper wrote:
>> Huh, so it is the LDT, but we're not getting as far as inspecting the
>> target frame.
>>
>> I wonder if the LDT is set up correctly.
> I guess it is; otherwise it wouldn't boot under Xen 4.13 either, would it?

Well - you said you always saw it once on 4.13, which clearly shows that
something was wonky, but it managed to unblock itself.

>> How about this incremental delta?
> Here's the output
> (XEN) IRET fault: #PF[0000]                                                    
> (XEN) %cr2 ffff820000010040, LDT base ffffc4800000a000, limit 0057             
> (XEN) *** pv_map_ldt_shadow_page(0x40) failed                                  
> (XEN) IRET fault: #PF[0000]                                                    
> (XEN) %cr2 ffff820000010040, LDT base ffffc4800000a000, limit 0057             
> (XEN) *** pv_map_ldt_shadow_page(0x40) failed                                  
> (XEN) IRET fault: #PF[0000]                                                 

Ok, so the promotion definitely fails, but we don't get as far as
inspecting the content of the LDT frame.  This probably means it failed
to change the page type, which probably means there are still
outstanding writeable references.

I'm expecting the final printk to be the one which triggers.

~Andrew

diff --git a/xen/arch/x86/pv/mm.c b/xen/arch/x86/pv/mm.c
index 5d74d11cba..2823dc2894 100644
--- a/xen/arch/x86/pv/mm.c
+++ b/xen/arch/x86/pv/mm.c
@@ -87,14 +87,23 @@ bool pv_map_ldt_shadow_page(unsigned int offset)
 
     gl1e = guest_get_eff_kern_l1e(linear);
     if ( unlikely(!(l1e_get_flags(gl1e) & _PAGE_PRESENT)) )
+    {
+        printk(XENLOG_ERR "*** LDT: gl1e %"PRIpte" not present\n", gl1e.l1);
         return false;
+    }
 
     page = get_page_from_gfn(currd, l1e_get_pfn(gl1e), NULL, P2M_ALLOC);
     if ( unlikely(!page) )
+    {
+        printk(XENLOG_ERR "*** LDT: failed to get gfn %05lx reference\n",
+               l1e_get_pfn(gl1e));
         return false;
+    }
 
     if ( unlikely(!get_page_type(page, PGT_seg_desc_page)) )
     {
+        printk(XENLOG_ERR "*** LDT: bad type: caf %016lx, taf=%016lx\n",
+               page->count_info, page->u.inuse.type_info);
         put_page(page);
         return false;
     }



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 16:10:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 16:10:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48426.85631 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn22U-0002Vb-JM; Wed, 09 Dec 2020 16:10:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48426.85631; Wed, 09 Dec 2020 16:10:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn22U-0002V1-Fj; Wed, 09 Dec 2020 16:10:02 +0000
Received: by outflank-mailman (input) for mailman id 48426;
 Wed, 09 Dec 2020 16:10:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sDS6=FN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kn22T-0002Or-AP
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 16:10:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c52ac2eb-1082-4d1d-abf1-6da9345f8552;
 Wed, 09 Dec 2020 16:09:59 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E0678AC9A;
 Wed,  9 Dec 2020 16:09:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c52ac2eb-1082-4d1d-abf1-6da9345f8552
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607530199; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=kGF6k6YkBY0gTxGsY3A9nBmvv9dZFk8vYQw5QGMKns4=;
	b=cun1VRxXKh7JgT/oHY6u4CCjqg0nEQWdUSFinkqtDGYDBGFz6RytW2fYHX9VFiTH2M6eQg
	sVxXc4m4HnNQtuo68EwCwvGcu7Oq6K8kH+P6CfU3d2zsHffiVm54MPboR+tO63snEKU/74
	i92YGAp0Gi765jlRim3x3boOcJQ4Hpk=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 0/8] xen: support per-cpupool scheduling granularity
Date: Wed,  9 Dec 2020 17:09:48 +0100
Message-Id: <20201209160956.32456-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Support a per-cpupool scheduling granularity. Setting the granularity
is done via hypfs, which needed to gain dynamic entries for that
purpose.

Apart from the additional hypfs functionality, the main change for
cpupools is the support for moving a domain to a pool with a different
granularity, as this requires modifying the scheduling unit/vcpu
relationship.

I have tried to do the hypfs modifications in a rather generic way in
order to be able to use the same infrastructure in other cases, too
(e.g. for per-domain entries).

The complete series has been tested by creating cpupools with different
granularities and moving busy and idle domains between those.
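As a usage illustration, setting a pool's granularity could then look
like the following hypothetical session. This assumes the xenhypfs
command line tool and the /cpupool/<id>/sched-gran leaf added later in
this series; the exact commands and values are illustrative, not taken
from this cover letter.

```shell
# List the per-cpupool directories added by this series.
xenhypfs ls /cpupool

# Read and change the scheduling granularity of pool 0
# (assumed values: cpu, core, socket).
xenhypfs cat /cpupool/0/sched-gran
xenhypfs write /cpupool/0/sched-gran core
```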

Changes in V3:
- Patches 1-6 and 8-11 of V2 have been committed already
- New patch 2
- Addressed all comments
- Added a data pointer to struct hypfs_dyndir

Changes in V2:
- Added several new patches, especially for some further cleanups in
  cpupool.c.
- Completely reworked the locking scheme with dynamic directories:
  locking of resources (cpupools in this series) is now done via new
  callbacks which are called when traversing the hypfs tree. This
  removes the need to add locking to each hypfs related cpupool
  function and it ensures data integrity across multiple callbacks.
- Reordered the first few patches in order to have the already acked
  pure cleanup patches first.
- Addressed several comments.

Juergen Gross (8):
  xen/cpupool: support moving domain between cpupools with different
    granularity
  xen/hypfs: switch write function handles to const
  xen/hypfs: add new enter() and exit() per node callbacks
  xen/hypfs: support dynamic hypfs nodes
  xen/hypfs: add support for id-based dynamic directories
  xen/cpupool: add cpupool directories
  xen/cpupool: add scheduling granularity entry to cpupool entries
  xen/cpupool: make per-cpupool sched-gran hypfs node writable

 docs/misc/hypfs-paths.pandoc   |  16 +++
 xen/common/hypfs.c             | 226 ++++++++++++++++++++++++++++++-
 xen/common/sched/core.c        | 121 ++++++++++++-----
 xen/common/sched/cpupool.c     | 240 +++++++++++++++++++++++++++++++--
 xen/include/xen/guest_access.h |   5 +
 xen/include/xen/hypfs.h        |  66 ++++++---
 6 files changed, 610 insertions(+), 64 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 16:10:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 16:10:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48427.85638 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn22V-0002Xh-0u; Wed, 09 Dec 2020 16:10:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48427.85638; Wed, 09 Dec 2020 16:10:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn22U-0002XU-Ox; Wed, 09 Dec 2020 16:10:02 +0000
Received: by outflank-mailman (input) for mailman id 48427;
 Wed, 09 Dec 2020 16:10:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sDS6=FN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kn22T-0002Oq-Ad
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 16:10:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5098c72f-d137-4a1d-b8ac-fcbb34d1c47c;
 Wed, 09 Dec 2020 16:09:59 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 05E57AF8D;
 Wed,  9 Dec 2020 16:09:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5098c72f-d137-4a1d-b8ac-fcbb34d1c47c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607530199; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=708vSf4hdDLmhqVFIbpBu/E6M/WDwL53lyQ6wvfCs8c=;
	b=GOdn9DLkeQ5NgCKeDecYY+yJJtj3HJFe6Oc9MX2QEujxoe4TiN54CGBJ2DVq/UG0dELW7g
	LAfkuDAtm383RkycI3vY0SI382b/5VWBxpP0uvAXu5MYJdZCEGglQ+WzpRuN4wtPFY3xZ1
	Z0ODRWmMQByUCN185u3/rFNn0tIGwrg=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>
Subject: [PATCH v3 1/8] xen/cpupool: support moving domain between cpupools with different granularity
Date: Wed,  9 Dec 2020 17:09:49 +0100
Message-Id: <20201209160956.32456-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201209160956.32456-1-jgross@suse.com>
References: <20201209160956.32456-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When moving a domain between cpupools with different scheduling
granularity, the sched_units of the domain need to be adjusted.

Do that by allocating new sched_units and throwing away the old ones
in sched_move_domain().

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched/core.c | 121 ++++++++++++++++++++++++++++++----------
 1 file changed, 90 insertions(+), 31 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index a429fc7640..2a61c879b3 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -613,17 +613,45 @@ static void sched_move_irqs(const struct sched_unit *unit)
         vcpu_move_irqs(v);
 }
 
+/*
+ * Move a domain from one cpupool to another.
+ *
+ * A domain with any vcpu having temporary affinity settings cannot be
+ * moved. Hard and soft affinities will be reset.
+ *
+ * In order to support cpupools with different scheduling granularities all
+ * scheduling units are replaced by new ones.
+ *
+ * The complete move is done in the following steps:
+ * - check prerequisites (no vcpu with temporary affinities)
+ * - allocate all new data structures (scheduler specific domain data, unit
+ *   memory, scheduler specific unit data)
+ * - pause domain
+ * - temporarily move all (old) units to the same scheduling resource (this
+ *   makes the final resource assignment easier in case the new cpupool has
+ *   a larger granularity than the old one, as the scheduling locks for all
+ *   vcpus must be held for that operation)
+ * - remove old units from scheduling
+ * - set new cpupool and scheduler domain data pointers in struct domain
+ * - switch all vcpus to new units, still assigned to the old scheduling
+ *   resource
+ * - migrate all new units to scheduling resources of the new cpupool
+ * - unpause the domain
+ * - free the old memory (scheduler specific domain data, unit memory,
+ *   scheduler specific unit data)
+ */
 int sched_move_domain(struct domain *d, struct cpupool *c)
 {
     struct vcpu *v;
-    struct sched_unit *unit;
+    struct sched_unit *unit, *old_unit;
+    struct sched_unit *new_units = NULL, *old_units;
+    struct sched_unit **unit_ptr = &new_units;
     unsigned int new_p, unit_idx;
-    void **unit_priv;
     void *domdata;
-    void *unitdata;
-    struct scheduler *old_ops;
+    struct scheduler *old_ops = dom_scheduler(d);
     void *old_domdata;
     unsigned int gran = cpupool_get_granularity(c);
+    unsigned int n_units = DIV_ROUND_UP(d->max_vcpus, gran);
     int ret = 0;
 
     for_each_vcpu ( d, v )
@@ -641,53 +669,78 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
         goto out;
     }
 
-    unit_priv = xzalloc_array(void *, DIV_ROUND_UP(d->max_vcpus, gran));
-    if ( unit_priv == NULL )
+    for ( unit_idx = 0; unit_idx < n_units; unit_idx++ )
     {
-        sched_free_domdata(c->sched, domdata);
-        ret = -ENOMEM;
-        goto out;
-    }
+        unit = sched_alloc_unit_mem();
+        if ( unit )
+        {
+            /* Initialize unit for sched_alloc_udata() to work. */
+            unit->domain = d;
+            unit->unit_id = unit_idx * gran;
+            unit->vcpu_list = d->vcpu[unit->unit_id];
+            unit->priv = sched_alloc_udata(c->sched, unit, domdata);
+            *unit_ptr = unit;
+        }
 
-    unit_idx = 0;
-    for_each_sched_unit ( d, unit )
-    {
-        unit_priv[unit_idx] = sched_alloc_udata(c->sched, unit, domdata);
-        if ( unit_priv[unit_idx] == NULL )
+        if ( !unit || !unit->priv )
         {
-            for ( unit_idx = 0; unit_priv[unit_idx]; unit_idx++ )
-                sched_free_udata(c->sched, unit_priv[unit_idx]);
-            xfree(unit_priv);
-            sched_free_domdata(c->sched, domdata);
+            old_units = new_units;
+            old_domdata = domdata;
             ret = -ENOMEM;
-            goto out;
+            goto out_free;
         }
-        unit_idx++;
+
+        unit_ptr = &unit->next_in_list;
     }
 
     domain_pause(d);
 
-    old_ops = dom_scheduler(d);
     old_domdata = d->sched_priv;
 
+    new_p = cpumask_first(d->cpupool->cpu_valid);
     for_each_sched_unit ( d, unit )
     {
+        spinlock_t *lock;
+
+        /*
+         * Temporarily move all units to same processor to make locking
+         * easier when moving the new units to the new processors.
+         */
+        lock = unit_schedule_lock_irq(unit);
+        sched_set_res(unit, get_sched_res(new_p));
+        spin_unlock_irq(lock);
+
         sched_remove_unit(old_ops, unit);
     }
 
+    old_units = d->sched_unit_list;
+
     d->cpupool = c;
     d->sched_priv = domdata;
 
+    unit = new_units;
+    for_each_vcpu ( d, v )
+    {
+        old_unit = v->sched_unit;
+        if ( unit->unit_id + gran == v->vcpu_id )
+            unit = unit->next_in_list;
+
+        unit->state_entry_time = old_unit->state_entry_time;
+        unit->runstate_cnt[v->runstate.state]++;
+        /* Temporarily use old resource assignment */
+        unit->res = get_sched_res(new_p);
+
+        v->sched_unit = unit;
+    }
+
+    d->sched_unit_list = new_units;
+
     new_p = cpumask_first(c->cpu_valid);
-    unit_idx = 0;
     for_each_sched_unit ( d, unit )
     {
         spinlock_t *lock;
         unsigned int unit_p = new_p;
 
-        unitdata = unit->priv;
-        unit->priv = unit_priv[unit_idx];
-
         for_each_sched_unit_vcpu ( unit, v )
         {
             migrate_timer(&v->periodic_timer, new_p);
@@ -713,8 +766,6 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
 
         sched_insert_unit(c->sched, unit);
 
-        sched_free_udata(old_ops, unitdata);
-
         unit_idx++;
     }
 
@@ -722,11 +773,19 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
 
     domain_unpause(d);
 
-    sched_free_domdata(old_ops, old_domdata);
+ out_free:
+    for ( unit = old_units; unit; )
+    {
+        if ( unit->priv )
+            sched_free_udata(c->sched, unit->priv);
+        old_unit = unit;
+        unit = unit->next_in_list;
+        xfree(old_unit);
+    }
 
-    xfree(unit_priv);
+    sched_free_domdata(old_ops, old_domdata);
 
-out:
+ out:
     rcu_read_unlock(&sched_res_rculock);
 
     return ret;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 16:10:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 16:10:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48428.85654 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn22Z-00034q-CQ; Wed, 09 Dec 2020 16:10:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48428.85654; Wed, 09 Dec 2020 16:10:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn22Z-00034g-8q; Wed, 09 Dec 2020 16:10:07 +0000
Received: by outflank-mailman (input) for mailman id 48428;
 Wed, 09 Dec 2020 16:10:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sDS6=FN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kn22Y-0002Oq-0f
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 16:10:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 882a4794-43af-449a-b516-ed041b9e8ffe;
 Wed, 09 Dec 2020 16:10:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 47C3EAFEB;
 Wed,  9 Dec 2020 16:09:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 882a4794-43af-449a-b516-ed041b9e8ffe
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607530199; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=BW9eYydQo2F6aub4GEJBwW00UPy+ko8uE06ZsSs8E0c=;
	b=s+aVsUzKnQvk3WcsKTSlB//9Bhm8JWaN/FT2ryo4p4hQk2LVDSBp0Kxuu+YGIcvzXMOvxL
	qjvtiIT01twmizuwFx9GviG1TNnOCchq52L2zi6Y6Em/gjvsOhdXLHoN8rpjK/Q56mXh16
	suuDBdWc1pID0DjuUOj63IV2JWDxeVQ=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 2/8] xen/hypfs: switch write function handles to const
Date: Wed,  9 Dec 2020 17:09:50 +0100
Message-Id: <20201209160956.32456-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201209160956.32456-1-jgross@suse.com>
References: <20201209160956.32456-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The node specific write functions take a void user address handle as
parameter. As a write won't change the user memory, use a const_void
handle instead.

This requires a new macro for casting a guest handle to a const type.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- new patch
---
 xen/common/hypfs.c             | 17 +++++++++++------
 xen/include/xen/guest_access.h |  5 +++++
 xen/include/xen/hypfs.h        | 14 +++++++++-----
 3 files changed, 25 insertions(+), 11 deletions(-)

diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index 2e8e90591e..6f822ae097 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -344,7 +344,8 @@ static int hypfs_read(const struct hypfs_entry *entry,
 }
 
 int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
-                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen)
+                     XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                     unsigned int ulen)
 {
     char *buf;
     int ret;
@@ -384,7 +385,8 @@ int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
 }
 
 int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
-                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen)
+                     XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                     unsigned int ulen)
 {
     bool buf;
 
@@ -405,7 +407,8 @@ int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
 }
 
 int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
-                       XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen)
+                       XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                       unsigned int ulen)
 {
     struct param_hypfs *p;
     char *buf;
@@ -439,13 +442,15 @@ int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
 }
 
 int hypfs_write_deny(struct hypfs_entry_leaf *leaf,
-                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen)
+                     XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                     unsigned int ulen)
 {
     return -EACCES;
 }
 
 static int hypfs_write(struct hypfs_entry *entry,
-                       XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)
+                       XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                       unsigned long ulen)
 {
     struct hypfs_entry_leaf *l;
 
@@ -497,7 +502,7 @@ long do_hypfs_op(unsigned int cmd,
         break;
 
     case XEN_HYPFS_OP_write_contents:
-        ret = hypfs_write(entry, arg3, arg4);
+        ret = hypfs_write(entry, guest_handle_const_cast(arg3, void), arg4);
         break;
 
     default:
diff --git a/xen/include/xen/guest_access.h b/xen/include/xen/guest_access.h
index f9b94cf1f4..5a50c3ccee 100644
--- a/xen/include/xen/guest_access.h
+++ b/xen/include/xen/guest_access.h
@@ -26,6 +26,11 @@
     type *_x = (hnd).p;                         \
     (XEN_GUEST_HANDLE_PARAM(type)) { _x };      \
 })
+/* Same for casting to a const type. */
+#define guest_handle_const_cast(hnd, type) ({       \
+    const type *_x = (const type *)((hnd).p);       \
+    (XEN_GUEST_HANDLE_PARAM(const_##type)) { _x };  \
+})
 
 /* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
 #define guest_handle_to_param(hnd, type) ({                  \
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index 53f50772b4..99fd4b036d 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -38,7 +38,7 @@ struct hypfs_funcs {
     int (*read)(const struct hypfs_entry *entry,
                 XEN_GUEST_HANDLE_PARAM(void) uaddr);
     int (*write)(struct hypfs_entry_leaf *leaf,
-                 XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+                 XEN_GUEST_HANDLE_PARAM(const_void) uaddr, unsigned int ulen);
     unsigned int (*getsize)(const struct hypfs_entry *entry);
     struct hypfs_entry *(*findentry)(const struct hypfs_entry_dir *dir,
                                      const char *name, unsigned int name_len);
@@ -154,13 +154,17 @@ int hypfs_read_dir(const struct hypfs_entry *entry,
 int hypfs_read_leaf(const struct hypfs_entry *entry,
                     XEN_GUEST_HANDLE_PARAM(void) uaddr);
 int hypfs_write_deny(struct hypfs_entry_leaf *leaf,
-                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+                     XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                     unsigned int ulen);
 int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
-                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+                     XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                     unsigned int ulen);
 int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
-                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+                     XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                     unsigned int ulen);
 int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
-                       XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+                       XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                       unsigned int ulen);
 unsigned int hypfs_getsize(const struct hypfs_entry *entry);
 struct hypfs_entry *hypfs_leaf_findentry(const struct hypfs_entry_dir *dir,
                                          const char *name,
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 16:10:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 16:10:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48429.85663 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn22Z-00035p-UJ; Wed, 09 Dec 2020 16:10:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48429.85663; Wed, 09 Dec 2020 16:10:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn22Z-00035S-J3; Wed, 09 Dec 2020 16:10:07 +0000
Received: by outflank-mailman (input) for mailman id 48429;
 Wed, 09 Dec 2020 16:10:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sDS6=FN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kn22Y-0002Or-6q
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 16:10:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a2cfb2ef-da70-498d-8298-e61b67c49e9d;
 Wed, 09 Dec 2020 16:10:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 56988B262;
 Wed,  9 Dec 2020 16:10:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a2cfb2ef-da70-498d-8298-e61b67c49e9d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607530200; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vnXlBBXDbMK1+Yl5pOhMMaMgERekk96ALZfGkXhOrdM=;
	b=dyDAkVMHprI72vEcBc/P1SiyqOVzfw0JGgZtEwAoi3MF32zbTWsgyhhau5+29pYArS0qz/
	CPY04Iw9K1z5cErHxJJ4xH4swH1+0+RDTc7Yy8ISvo48rCDvYF81cL2uvjVqi15Y0vb633
	OKJagR2VatOt1rVOfW5vJqlyE0FHhSY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Dario Faggioli <dfaggioli@suse.com>
Subject: [PATCH v3 6/8] xen/cpupool: add cpupool directories
Date: Wed,  9 Dec 2020 17:09:54 +0100
Message-Id: <20201209160956.32456-7-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201209160956.32456-1-jgross@suse.com>
References: <20201209160956.32456-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add /cpupool/<cpupool-id> directories to hypfs. Those are completely
dynamic, so the related hypfs access functions need to be implemented.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- added const (Jan Beulich)
- call hypfs_add_dir() in helper (Dario Faggioli)
- switch locking to enter/exit callbacks

V3:
- use generic dyndirid enter function
- const for hypfs function vector (Jan Beulich)
- drop size calculation from cpupool_dir_read() (Jan Beulich)
- check cpupool id to not exceed UINT_MAX (Jan Beulich)
- coding style (#if/#else/#endif) (Jan Beulich)

---
 docs/misc/hypfs-paths.pandoc |   9 +++
 xen/common/sched/cpupool.c   | 104 +++++++++++++++++++++++++++++++++++
 2 files changed, 113 insertions(+)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index 6c7b2f7ee3..aaca1cdf92 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -175,6 +175,15 @@ The major version of Xen.
 
 The minor version of Xen.
 
+#### /cpupool/
+
+A directory of all current cpupools.
+
+#### /cpupool/*/
+
+The individual cpupools. Each entry is a directory with the name being the
+cpupool-id (e.g. /cpupool/0/).
+
 #### /params/
 
 A directory of runtime parameters.
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 0db7d77219..f293ba0cc4 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -13,6 +13,8 @@
 
 #include <xen/cpu.h>
 #include <xen/cpumask.h>
+#include <xen/guest_access.h>
+#include <xen/hypfs.h>
 #include <xen/init.h>
 #include <xen/keyhandler.h>
 #include <xen/lib.h>
@@ -33,6 +35,7 @@ static int cpupool_moving_cpu = -1;
 static struct cpupool *cpupool_cpu_moving = NULL;
 static cpumask_t cpupool_locked_cpus;
 
+/* This lock nests inside sysctl or hypfs lock. */
 static DEFINE_SPINLOCK(cpupool_lock);
 
 static enum sched_gran __read_mostly opt_sched_granularity = SCHED_GRAN_cpu;
@@ -1003,12 +1006,113 @@ static struct notifier_block cpu_nfb = {
     .notifier_call = cpu_callback
 };
 
+#ifdef CONFIG_HYPFS
+
+static HYPFS_DIR_INIT(cpupool_pooldir, "%u");
+
+static int cpupool_dir_read(const struct hypfs_entry *entry,
+                            XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    int ret = 0;
+    const struct cpupool *c;
+
+    list_for_each_entry(c, &cpupool_list, list)
+    {
+        ret = hypfs_read_dyndir_id_entry(&cpupool_pooldir, c->cpupool_id,
+                                         list_is_last(&c->list, &cpupool_list),
+                                         &uaddr);
+        if ( ret )
+            break;
+    }
+
+    return ret;
+}
+
+static unsigned int cpupool_dir_getsize(const struct hypfs_entry *entry)
+{
+    const struct cpupool *c;
+    unsigned int size = 0;
+
+    list_for_each_entry(c, &cpupool_list, list)
+        size += hypfs_dynid_entry_size(entry, c->cpupool_id);
+
+    return size;
+}
+
+static const struct hypfs_entry *cpupool_dir_enter(
+    const struct hypfs_entry *entry)
+{
+    struct hypfs_dyndir_id *data;
+
+    data = hypfs_alloc_dyndata(struct hypfs_dyndir_id);
+    if ( !data )
+        return ERR_PTR(-ENOMEM);
+    data->id = CPUPOOLID_NONE;
+
+    spin_lock(&cpupool_lock);
+
+    return entry;
+}
+
+static void cpupool_dir_exit(const struct hypfs_entry *entry)
+{
+    spin_unlock(&cpupool_lock);
+
+    hypfs_free_dyndata();
+}
+
+static struct hypfs_entry *cpupool_dir_findentry(
+    const struct hypfs_entry_dir *dir, const char *name, unsigned int name_len)
+{
+    unsigned long id;
+    const char *end;
+    struct cpupool *cpupool;
+
+    id = simple_strtoul(name, &end, 10);
+    if ( end != name + name_len || id > UINT_MAX )
+        return ERR_PTR(-ENOENT);
+
+    cpupool = __cpupool_find_by_id(id, true);
+
+    if ( !cpupool )
+        return ERR_PTR(-ENOENT);
+
+    return hypfs_gen_dyndir_id_entry(&cpupool_pooldir, id, cpupool);
+}
+
+static const struct hypfs_funcs cpupool_dir_funcs = {
+    .enter = cpupool_dir_enter,
+    .exit = cpupool_dir_exit,
+    .read = cpupool_dir_read,
+    .write = hypfs_write_deny,
+    .getsize = cpupool_dir_getsize,
+    .findentry = cpupool_dir_findentry,
+};
+
+static HYPFS_DIR_INIT_FUNC(cpupool_dir, "cpupool", &cpupool_dir_funcs);
+
+static void cpupool_hypfs_init(void)
+{
+    hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
+    hypfs_add_dyndir(&cpupool_dir, &cpupool_pooldir);
+}
+
+#else /* CONFIG_HYPFS */
+
+static void cpupool_hypfs_init(void)
+{
+}
+
+#endif /* CONFIG_HYPFS */
+
 static int __init cpupool_init(void)
 {
     unsigned int cpu;
 
     cpupool_gran_init();
 
+    cpupool_hypfs_init();
+
     cpupool0 = cpupool_create(0, 0);
     BUG_ON(IS_ERR(cpupool0));
     cpupool_put(cpupool0);
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 16:10:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 16:10:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48430.85679 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn22e-0003FL-2G; Wed, 09 Dec 2020 16:10:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48430.85679; Wed, 09 Dec 2020 16:10:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn22d-0003FC-V9; Wed, 09 Dec 2020 16:10:11 +0000
Received: by outflank-mailman (input) for mailman id 48430;
 Wed, 09 Dec 2020 16:10:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sDS6=FN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kn22d-0002Oq-0k
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 16:10:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 655f8d91-9a56-412f-9c6c-1a6a305e6172;
 Wed, 09 Dec 2020 16:10:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 855F4AFB1;
 Wed,  9 Dec 2020 16:09:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 3/8] xen/hypfs: add new enter() and exit() per node callbacks
Date: Wed,  9 Dec 2020 17:09:51 +0100
Message-Id: <20201209160956.32456-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201209160956.32456-1-jgross@suse.com>
References: <20201209160956.32456-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to better support resource allocation and locking for dynamic
hypfs nodes, add enter() and exit() callbacks to struct hypfs_funcs.

The enter() callback is called when entering a node during hypfs user
actions (traversing, reading or writing it), while the exit() callback
is called when leaving a node (accessing another node at the same or a
higher directory level, or when returning to the user).

To avoid recursion this requires a parent pointer in each node.
Let the enter() callback return the entry address, which is stored as
the last accessed node; this makes it possible to use a template entry
for that purpose in the case of dynamic entries.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- new patch

V3:
- add ASSERT(entry); (Jan Beulich)

---
 xen/common/hypfs.c      | 80 +++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/hypfs.h |  5 +++
 2 files changed, 85 insertions(+)

diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index 6f822ae097..f04934db10 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -25,30 +25,40 @@ CHECK_hypfs_dirlistentry;
      ROUNDUP((name_len) + 1, alignof(struct xen_hypfs_direntry)))
 
 const struct hypfs_funcs hypfs_dir_funcs = {
+    .enter = hypfs_node_enter,
+    .exit = hypfs_node_exit,
     .read = hypfs_read_dir,
     .write = hypfs_write_deny,
     .getsize = hypfs_getsize,
     .findentry = hypfs_dir_findentry,
 };
 const struct hypfs_funcs hypfs_leaf_ro_funcs = {
+    .enter = hypfs_node_enter,
+    .exit = hypfs_node_exit,
     .read = hypfs_read_leaf,
     .write = hypfs_write_deny,
     .getsize = hypfs_getsize,
     .findentry = hypfs_leaf_findentry,
 };
 const struct hypfs_funcs hypfs_leaf_wr_funcs = {
+    .enter = hypfs_node_enter,
+    .exit = hypfs_node_exit,
     .read = hypfs_read_leaf,
     .write = hypfs_write_leaf,
     .getsize = hypfs_getsize,
     .findentry = hypfs_leaf_findentry,
 };
 const struct hypfs_funcs hypfs_bool_wr_funcs = {
+    .enter = hypfs_node_enter,
+    .exit = hypfs_node_exit,
     .read = hypfs_read_leaf,
     .write = hypfs_write_bool,
     .getsize = hypfs_getsize,
     .findentry = hypfs_leaf_findentry,
 };
 const struct hypfs_funcs hypfs_custom_wr_funcs = {
+    .enter = hypfs_node_enter,
+    .exit = hypfs_node_exit,
     .read = hypfs_read_leaf,
     .write = hypfs_write_custom,
     .getsize = hypfs_getsize,
@@ -63,6 +73,8 @@ enum hypfs_lock_state {
 };
 static DEFINE_PER_CPU(enum hypfs_lock_state, hypfs_locked);
 
+static DEFINE_PER_CPU(const struct hypfs_entry *, hypfs_last_node_entered);
+
 HYPFS_DIR_INIT(hypfs_root, "");
 
 static void hypfs_read_lock(void)
@@ -100,11 +112,59 @@ static void hypfs_unlock(void)
     }
 }
 
+const struct hypfs_entry *hypfs_node_enter(const struct hypfs_entry *entry)
+{
+    return entry;
+}
+
+void hypfs_node_exit(const struct hypfs_entry *entry)
+{
+}
+
+static int node_enter(const struct hypfs_entry *entry)
+{
+    const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered);
+
+    entry = entry->funcs->enter(entry);
+    if ( IS_ERR(entry) )
+        return PTR_ERR(entry);
+
+    ASSERT(entry);
+    ASSERT(!*last || *last == entry->parent);
+
+    *last = entry;
+
+    return 0;
+}
+
+static void node_exit(const struct hypfs_entry *entry)
+{
+    const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered);
+
+    if ( !*last )
+        return;
+
+    ASSERT(*last == entry);
+    *last = entry->parent;
+
+    entry->funcs->exit(entry);
+}
+
+static void node_exit_all(void)
+{
+    const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered);
+
+    while ( *last )
+        node_exit(*last);
+}
+
 static int add_entry(struct hypfs_entry_dir *parent, struct hypfs_entry *new)
 {
     int ret = -ENOENT;
     struct hypfs_entry *e;
 
+    ASSERT(new->funcs->enter);
+    ASSERT(new->funcs->exit);
     ASSERT(new->funcs->read);
     ASSERT(new->funcs->write);
     ASSERT(new->funcs->getsize);
@@ -140,6 +200,7 @@ static int add_entry(struct hypfs_entry_dir *parent, struct hypfs_entry *new)
         unsigned int sz = strlen(new->name);
 
         parent->e.size += DIRENTRY_SIZE(sz);
+        new->parent = &parent->e;
     }
 
     hypfs_unlock();
@@ -221,6 +282,7 @@ static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
     const char *end;
     struct hypfs_entry *entry;
     unsigned int name_len;
+    int ret;
 
     for ( ; ; )
     {
@@ -235,6 +297,10 @@ static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
             end = strchr(path, '\0');
         name_len = end - path;
 
+        ret = node_enter(&dir->e);
+        if ( ret )
+            return ERR_PTR(ret);
+
         entry = dir->e.funcs->findentry(dir, path, name_len);
         if ( IS_ERR(entry) || !*end )
             return entry;
@@ -265,6 +331,7 @@ int hypfs_read_dir(const struct hypfs_entry *entry,
     const struct hypfs_entry_dir *d;
     const struct hypfs_entry *e;
     unsigned int size = entry->funcs->getsize(entry);
+    int ret;
 
     ASSERT(this_cpu(hypfs_locked) != hypfs_unlocked);
 
@@ -276,12 +343,19 @@ int hypfs_read_dir(const struct hypfs_entry *entry,
         unsigned int e_namelen = strlen(e->name);
         unsigned int e_len = DIRENTRY_SIZE(e_namelen);
 
+        ret = node_enter(e);
+        if ( ret )
+            return ret;
+
         direntry.e.pad = 0;
         direntry.e.type = e->type;
         direntry.e.encoding = e->encoding;
         direntry.e.content_len = e->funcs->getsize(e);
         direntry.e.max_write_len = e->max_size;
         direntry.off_next = list_is_last(&e->list, &d->dirlist) ? 0 : e_len;
+
+        node_exit(e);
+
         if ( copy_to_guest(uaddr, &direntry, 1) )
             return -EFAULT;
 
@@ -495,6 +569,10 @@ long do_hypfs_op(unsigned int cmd,
         goto out;
     }
 
+    ret = node_enter(entry);
+    if ( ret )
+        goto out;
+
     switch ( cmd )
     {
     case XEN_HYPFS_OP_read:
@@ -511,6 +589,8 @@ long do_hypfs_op(unsigned int cmd,
     }
 
  out:
+    node_exit_all();
+
     hypfs_unlock();
 
     return ret;
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index 99fd4b036d..a6dfdb7d8e 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -35,6 +35,8 @@ struct hypfs_entry;
  * "/a/b/c" findentry() will be called for "/", "/a", and "/a/b").
  */
 struct hypfs_funcs {
+    const struct hypfs_entry *(*enter)(const struct hypfs_entry *entry);
+    void (*exit)(const struct hypfs_entry *entry);
     int (*read)(const struct hypfs_entry *entry,
                 XEN_GUEST_HANDLE_PARAM(void) uaddr);
     int (*write)(struct hypfs_entry_leaf *leaf,
@@ -56,6 +58,7 @@ struct hypfs_entry {
     unsigned int size;
     unsigned int max_size;
     const char *name;
+    struct hypfs_entry *parent;
     struct list_head list;
     const struct hypfs_funcs *funcs;
 };
@@ -149,6 +152,8 @@ int hypfs_add_dir(struct hypfs_entry_dir *parent,
                   struct hypfs_entry_dir *dir, bool nofault);
 int hypfs_add_leaf(struct hypfs_entry_dir *parent,
                    struct hypfs_entry_leaf *leaf, bool nofault);
+const struct hypfs_entry *hypfs_node_enter(const struct hypfs_entry *entry);
+void hypfs_node_exit(const struct hypfs_entry *entry);
 int hypfs_read_dir(const struct hypfs_entry *entry,
                    XEN_GUEST_HANDLE_PARAM(void) uaddr);
 int hypfs_read_leaf(const struct hypfs_entry *entry,
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 16:10:13 2020
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Dario Faggioli <dfaggioli@suse.com>
Subject: [PATCH v3 7/8] xen/cpupool: add scheduling granularity entry to cpupool entries
Date: Wed,  9 Dec 2020 17:09:55 +0100
Message-Id: <20201209160956.32456-8-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201209160956.32456-1-jgross@suse.com>
References: <20201209160956.32456-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a "sched-gran" entry to the per-cpupool hypfs directories.

For now make this entry read-only and let it contain one of the
strings "cpu", "core" or "socket".

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- added const (Jan Beulich)
- modify test in cpupool_gran_read() (Jan Beulich)
---
 docs/misc/hypfs-paths.pandoc |  4 ++
 xen/common/sched/cpupool.c   | 72 ++++++++++++++++++++++++++++++++++--
 2 files changed, 72 insertions(+), 4 deletions(-)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index aaca1cdf92..f1ce24d7fe 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -184,6 +184,10 @@ A directory of all current cpupools.
 The individual cpupools. Each entry is a directory with the name being the
 cpupool-id (e.g. /cpupool/0/).
 
+#### /cpupool/*/sched-gran = ("cpu" | "core" | "socket")
+
+The scheduling granularity of a cpupool.
+
 #### /params/
 
 A directory of runtime parameters.
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index f293ba0cc4..e2011367bd 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -41,9 +41,10 @@ static DEFINE_SPINLOCK(cpupool_lock);
 static enum sched_gran __read_mostly opt_sched_granularity = SCHED_GRAN_cpu;
 static unsigned int __read_mostly sched_granularity = 1;
 
+#define SCHED_GRAN_NAME_LEN  8
 struct sched_gran_name {
     enum sched_gran mode;
-    char name[8];
+    char name[SCHED_GRAN_NAME_LEN];
 };
 
 static const struct sched_gran_name sg_name[] = {
@@ -52,7 +53,7 @@ static const struct sched_gran_name sg_name[] = {
     {SCHED_GRAN_socket, "socket"},
 };
 
-static void sched_gran_print(enum sched_gran mode, unsigned int gran)
+static const char *sched_gran_get_name(enum sched_gran mode)
 {
     const char *name = "";
     unsigned int i;
@@ -66,8 +67,13 @@ static void sched_gran_print(enum sched_gran mode, unsigned int gran)
         }
     }
 
+    return name;
+}
+
+static void sched_gran_print(enum sched_gran mode, unsigned int gran)
+{
     printk("Scheduling granularity: %s, %u CPU%s per sched-resource\n",
-           name, gran, gran == 1 ? "" : "s");
+           sched_gran_get_name(mode), gran, gran == 1 ? "" : "s");
 }
 
 #ifdef CONFIG_HAS_SCHED_GRANULARITY
@@ -1014,10 +1020,16 @@ static int cpupool_dir_read(const struct hypfs_entry *entry,
                             XEN_GUEST_HANDLE_PARAM(void) uaddr)
 {
     int ret = 0;
-    const struct cpupool *c;
+    struct cpupool *c;
+    struct hypfs_dyndir_id *data;
+
+    data = hypfs_get_dyndata();
 
     list_for_each_entry(c, &cpupool_list, list)
     {
+        data->id = c->cpupool_id;
+        data->data = c;
+
         ret = hypfs_read_dyndir_id_entry(&cpupool_pooldir, c->cpupool_id,
                                          list_is_last(&c->list, &cpupool_list),
                                          &uaddr);
@@ -1080,6 +1092,56 @@ static struct hypfs_entry *cpupool_dir_findentry(
     return hypfs_gen_dyndir_id_entry(&cpupool_pooldir, id, cpupool);
 }
 
+static int cpupool_gran_read(const struct hypfs_entry *entry,
+                             XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    const struct hypfs_dyndir_id *data;
+    const struct cpupool *cpupool;
+    const char *gran;
+
+    data = hypfs_get_dyndata();
+    cpupool = data->data;
+    ASSERT(cpupool);
+
+    gran = sched_gran_get_name(cpupool->gran);
+
+    if ( !*gran )
+        return -ENOENT;
+
+    return copy_to_guest(uaddr, gran, strlen(gran) + 1) ? -EFAULT : 0;
+}
+
+static unsigned int hypfs_gran_getsize(const struct hypfs_entry *entry)
+{
+    const struct hypfs_dyndir_id *data;
+    const struct cpupool *cpupool;
+    const char *gran;
+
+    data = hypfs_get_dyndata();
+    cpupool = data->data;
+    ASSERT(cpupool);
+
+    gran = sched_gran_get_name(cpupool->gran);
+
+    return strlen(gran) + 1;
+}
+
+static const struct hypfs_funcs cpupool_gran_funcs = {
+    .enter = hypfs_node_enter,
+    .exit = hypfs_node_exit,
+    .read = cpupool_gran_read,
+    .write = hypfs_write_deny,
+    .getsize = hypfs_gran_getsize,
+    .findentry = hypfs_leaf_findentry,
+};
+
+static HYPFS_VARSIZE_INIT(cpupool_gran, XEN_HYPFS_TYPE_STRING, "sched-gran",
+                          0, &cpupool_gran_funcs);
+static char granstr[SCHED_GRAN_NAME_LEN] = {
+    [0 ... SCHED_GRAN_NAME_LEN - 2] = '?',
+    [SCHED_GRAN_NAME_LEN - 1] = 0
+};
+
 static const struct hypfs_funcs cpupool_dir_funcs = {
     .enter = cpupool_dir_enter,
     .exit = cpupool_dir_exit,
@@ -1095,6 +1157,8 @@ static void cpupool_hypfs_init(void)
 {
     hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
     hypfs_add_dyndir(&cpupool_dir, &cpupool_pooldir);
+    hypfs_string_set_reference(&cpupool_gran, granstr);
+    hypfs_add_leaf(&cpupool_pooldir, &cpupool_gran, true);
 }
 
 #else /* CONFIG_HYPFS */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 16:10:17 2020
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 4/8] xen/hypfs: support dynamic hypfs nodes
Date: Wed,  9 Dec 2020 17:09:52 +0100
Message-Id: <20201209160956.32456-5-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201209160956.32456-1-jgross@suse.com>
References: <20201209160956.32456-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a HYPFS_DIR_INIT_FUNC() macro for initializing a directory
statically, taking a struct hypfs_funcs pointer as a parameter in
addition to those of HYPFS_DIR_INIT().

Modify HYPFS_VARSIZE_INIT() to take the function vector pointer as an
additional parameter, as this will be needed for dynamic entries.

To let the generic hypfs code continue to work on normal struct
hypfs_entry entities even for dynamic nodes, add some infrastructure
for allocating a working area for the current hypfs request, used to
store the information needed for traversing the tree. This area is
anchored in a per-cpu pointer and can be retrieved at any level of the
dynamic entries. The normal way to handle allocation and freeing is to
allocate the data in the enter() callback of a node and to free it in
the related exit() callback.

Add a hypfs_add_dyndir() function for adding a dynamic directory
template to the tree; this is needed for the template to hold the
correct reference to its position in hypfs.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- switch to xzalloc_bytes() in hypfs_alloc_dyndata() (Jan Beulich)
- carved out from previous patch
- use enter() and exit() callbacks for allocating and freeing
  dyndata memory
- add hypfs_add_dyndir()

V3:
- switch hypfs_alloc_dyndata() to be type safe (Jan Beulich)
- rename HYPFS_VARDIR_INIT() to HYPFS_DIR_INIT_FUNC() (Jan Beulich)

---
 xen/common/hypfs.c      | 31 +++++++++++++++++++++++++++++++
 xen/include/xen/hypfs.h | 29 +++++++++++++++++++----------
 2 files changed, 50 insertions(+), 10 deletions(-)

diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index f04934db10..8faf65cea0 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -72,6 +72,7 @@ enum hypfs_lock_state {
     hypfs_write_locked
 };
 static DEFINE_PER_CPU(enum hypfs_lock_state, hypfs_locked);
+static DEFINE_PER_CPU(struct hypfs_dyndata *, hypfs_dyndata);
 
 static DEFINE_PER_CPU(const struct hypfs_entry *, hypfs_last_node_entered);
 
@@ -158,6 +159,30 @@ static void node_exit_all(void)
         node_exit(*last);
 }
 
+void *hypfs_alloc_dyndata_size(unsigned long size)
+{
+    unsigned int cpu = smp_processor_id();
+
+    ASSERT(per_cpu(hypfs_locked, cpu) != hypfs_unlocked);
+    ASSERT(per_cpu(hypfs_dyndata, cpu) == NULL);
+
+    per_cpu(hypfs_dyndata, cpu) = xzalloc_bytes(size);
+
+    return per_cpu(hypfs_dyndata, cpu);
+}
+
+void *hypfs_get_dyndata(void)
+{
+    ASSERT(this_cpu(hypfs_dyndata));
+
+    return this_cpu(hypfs_dyndata);
+}
+
+void hypfs_free_dyndata(void)
+{
+    XFREE(this_cpu(hypfs_dyndata));
+}
+
 static int add_entry(struct hypfs_entry_dir *parent, struct hypfs_entry *new)
 {
     int ret = -ENOENT;
@@ -219,6 +244,12 @@ int hypfs_add_dir(struct hypfs_entry_dir *parent,
     return ret;
 }
 
+void hypfs_add_dyndir(struct hypfs_entry_dir *parent,
+                      struct hypfs_entry_dir *template)
+{
+    template->e.parent = &parent->e;
+}
+
 int hypfs_add_leaf(struct hypfs_entry_dir *parent,
                    struct hypfs_entry_leaf *leaf, bool nofault)
 {
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index a6dfdb7d8e..4c469cbeb4 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -76,7 +76,7 @@ struct hypfs_entry_dir {
     struct list_head dirlist;
 };
 
-#define HYPFS_DIR_INIT(var, nam)                  \
+#define HYPFS_DIR_INIT_FUNC(var, nam, fn)         \
     struct hypfs_entry_dir __read_mostly var = {  \
         .e.type = XEN_HYPFS_TYPE_DIR,             \
         .e.encoding = XEN_HYPFS_ENC_PLAIN,        \
@@ -84,22 +84,25 @@ struct hypfs_entry_dir {
         .e.size = 0,                              \
         .e.max_size = 0,                          \
         .e.list = LIST_HEAD_INIT(var.e.list),     \
-        .e.funcs = &hypfs_dir_funcs,              \
+        .e.funcs = (fn),                          \
         .dirlist = LIST_HEAD_INIT(var.dirlist),   \
     }
 
-#define HYPFS_VARSIZE_INIT(var, typ, nam, msz)    \
-    struct hypfs_entry_leaf __read_mostly var = { \
-        .e.type = (typ),                          \
-        .e.encoding = XEN_HYPFS_ENC_PLAIN,        \
-        .e.name = (nam),                          \
-        .e.max_size = (msz),                      \
-        .e.funcs = &hypfs_leaf_ro_funcs,          \
+#define HYPFS_DIR_INIT(var, nam)                  \
+    HYPFS_DIR_INIT_FUNC(var, nam, &hypfs_dir_funcs)
+
+#define HYPFS_VARSIZE_INIT(var, typ, nam, msz, fn) \
+    struct hypfs_entry_leaf __read_mostly var = {  \
+        .e.type = (typ),                           \
+        .e.encoding = XEN_HYPFS_ENC_PLAIN,         \
+        .e.name = (nam),                           \
+        .e.max_size = (msz),                       \
+        .e.funcs = (fn),                           \
     }
 
 /* Content and size need to be set via hypfs_string_set_reference(). */
 #define HYPFS_STRING_INIT(var, nam)               \
-    HYPFS_VARSIZE_INIT(var, XEN_HYPFS_TYPE_STRING, nam, 0)
+    HYPFS_VARSIZE_INIT(var, XEN_HYPFS_TYPE_STRING, nam, 0, &hypfs_leaf_ro_funcs)
 
 /*
  * Set content and size of a XEN_HYPFS_TYPE_STRING node. The node will point
@@ -150,6 +153,8 @@ extern struct hypfs_entry_dir hypfs_root;
 
 int hypfs_add_dir(struct hypfs_entry_dir *parent,
                   struct hypfs_entry_dir *dir, bool nofault);
+void hypfs_add_dyndir(struct hypfs_entry_dir *parent,
+                      struct hypfs_entry_dir *template);
 int hypfs_add_leaf(struct hypfs_entry_dir *parent,
                    struct hypfs_entry_leaf *leaf, bool nofault);
 const struct hypfs_entry *hypfs_node_enter(const struct hypfs_entry *entry);
@@ -177,6 +182,10 @@ struct hypfs_entry *hypfs_leaf_findentry(const struct hypfs_entry_dir *dir,
 struct hypfs_entry *hypfs_dir_findentry(const struct hypfs_entry_dir *dir,
                                         const char *name,
                                         unsigned int name_len);
+void *hypfs_alloc_dyndata_size(unsigned long size);
+#define hypfs_alloc_dyndata(type) (type *)hypfs_alloc_dyndata_size(sizeof(type))
+void *hypfs_get_dyndata(void);
+void hypfs_free_dyndata(void);
 #endif
 
 #endif /* __XEN_HYPFS_H__ */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 16:10:18 2020
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Dario Faggioli <dfaggioli@suse.com>
Subject: [PATCH v3 8/8] xen/cpupool: make per-cpupool sched-gran hypfs node writable
Date: Wed,  9 Dec 2020 17:09:56 +0100
Message-Id: <20201209160956.32456-9-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201209160956.32456-1-jgross@suse.com>
References: <20201209160956.32456-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Make /cpupool/<id>/sched-gran in hypfs writable. This enables
selecting the scheduling granularity per cpupool.

Writing this node is allowed only when no cpu is assigned to the
cpupool. The allowed values are "cpu", "core" and "socket".

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- test user parameters earlier (Jan Beulich)

V3:
- fix build without CONFIG_HYPFS on Arm (Andrew Cooper)

---
 docs/misc/hypfs-paths.pandoc |  5 ++-
 xen/common/sched/cpupool.c   | 70 ++++++++++++++++++++++++++++++------
 2 files changed, 63 insertions(+), 12 deletions(-)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index f1ce24d7fe..e86f7d0dbe 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -184,10 +184,13 @@ A directory of all current cpupools.
 The individual cpupools. Each entry is a directory with the name being the
 cpupool-id (e.g. /cpupool/0/).
 
-#### /cpupool/*/sched-gran = ("cpu" | "core" | "socket")
+#### /cpupool/*/sched-gran = ("cpu" | "core" | "socket") [w]
 
 The scheduling granularity of a cpupool.
 
+Writing a value is allowed only for cpupools with no cpu assigned and only if
+the architecture supports different scheduling granularities.
+
 #### /params/
 
 A directory of runtime parameters.
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index e2011367bd..acd26f9449 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -77,7 +77,7 @@ static void sched_gran_print(enum sched_gran mode, unsigned int gran)
 }
 
 #ifdef CONFIG_HAS_SCHED_GRANULARITY
-static int __init sched_select_granularity(const char *str)
+static int sched_gran_get(const char *str, enum sched_gran *mode)
 {
     unsigned int i;
 
@@ -85,36 +85,43 @@ static int __init sched_select_granularity(const char *str)
     {
         if ( strcmp(sg_name[i].name, str) == 0 )
         {
-            opt_sched_granularity = sg_name[i].mode;
+            *mode = sg_name[i].mode;
             return 0;
         }
     }
 
     return -EINVAL;
 }
+
+static int __init sched_select_granularity(const char *str)
+{
+    return sched_gran_get(str, &opt_sched_granularity);
+}
 custom_param("sched-gran", sched_select_granularity);
+#elif defined(CONFIG_HYPFS)
+static int sched_gran_get(const char *str, enum sched_gran *mode)
+{
+    return -EINVAL;
+}
 #endif
 
-static unsigned int __init cpupool_check_granularity(void)
+static unsigned int cpupool_check_granularity(enum sched_gran mode)
 {
     unsigned int cpu;
     unsigned int siblings, gran = 0;
 
-    if ( opt_sched_granularity == SCHED_GRAN_cpu )
+    if ( mode == SCHED_GRAN_cpu )
         return 1;
 
     for_each_online_cpu ( cpu )
     {
-        siblings = cpumask_weight(sched_get_opt_cpumask(opt_sched_granularity,
-                                                        cpu));
+        siblings = cpumask_weight(sched_get_opt_cpumask(mode, cpu));
         if ( gran == 0 )
             gran = siblings;
         else if ( gran != siblings )
             return 0;
     }
 
-    sched_disable_smt_switching = true;
-
     return gran;
 }
 
@@ -126,7 +133,7 @@ static void __init cpupool_gran_init(void)
 
     while ( gran == 0 )
     {
-        gran = cpupool_check_granularity();
+        gran = cpupool_check_granularity(opt_sched_granularity);
 
         if ( gran == 0 )
         {
@@ -152,6 +159,9 @@ static void __init cpupool_gran_init(void)
     if ( fallback )
         warning_add(fallback);
 
+    if ( opt_sched_granularity != SCHED_GRAN_cpu )
+        sched_disable_smt_switching = true;
+
     sched_granularity = gran;
     sched_gran_print(opt_sched_granularity, sched_granularity);
 }
@@ -1126,17 +1136,55 @@ static unsigned int hypfs_gran_getsize(const struct hypfs_entry *entry)
     return strlen(gran) + 1;
 }
 
+static int cpupool_gran_write(struct hypfs_entry_leaf *leaf,
+                              XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                              unsigned int ulen)
+{
+    const struct hypfs_dyndir_id *data;
+    struct cpupool *cpupool;
+    enum sched_gran gran;
+    unsigned int sched_gran = 0;
+    char name[SCHED_GRAN_NAME_LEN];
+    int ret = 0;
+
+    if ( ulen > SCHED_GRAN_NAME_LEN )
+        return -ENOSPC;
+
+    if ( copy_from_guest(name, uaddr, ulen) )
+        return -EFAULT;
+
+    if ( memchr(name, 0, ulen) == (name + ulen - 1) )
+        sched_gran = sched_gran_get(name, &gran) ?
+                     0 : cpupool_check_granularity(gran);
+    if ( sched_gran == 0 )
+        return -EINVAL;
+
+    data = hypfs_get_dyndata();
+    cpupool = data->data;
+    ASSERT(cpupool);
+
+    if ( !cpumask_empty(cpupool->cpu_valid) )
+        ret = -EBUSY;
+    else
+    {
+        cpupool->gran = gran;
+        cpupool->sched_gran = sched_gran;
+    }
+
+    return ret;
+}
+
 static const struct hypfs_funcs cpupool_gran_funcs = {
     .enter = hypfs_node_enter,
     .exit = hypfs_node_exit,
     .read = cpupool_gran_read,
-    .write = hypfs_write_deny,
+    .write = cpupool_gran_write,
     .getsize = hypfs_gran_getsize,
     .findentry = hypfs_leaf_findentry,
 };
 
 static HYPFS_VARSIZE_INIT(cpupool_gran, XEN_HYPFS_TYPE_STRING, "sched-gran",
-                          0, &cpupool_gran_funcs);
+                          SCHED_GRAN_NAME_LEN, &cpupool_gran_funcs);
 static char granstr[SCHED_GRAN_NAME_LEN] = {
     [0 ... SCHED_GRAN_NAME_LEN - 2] = '?',
     [SCHED_GRAN_NAME_LEN - 1] = 0
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 16:10:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 16:10:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48436.85727 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn22o-0003Xp-MZ; Wed, 09 Dec 2020 16:10:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48436.85727; Wed, 09 Dec 2020 16:10:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn22o-0003Xb-Fr; Wed, 09 Dec 2020 16:10:22 +0000
Received: by outflank-mailman (input) for mailman id 48436;
 Wed, 09 Dec 2020 16:10:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sDS6=FN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kn22n-0002Oq-19
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 16:10:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a4a5b375-8b5e-4597-b1b4-71553962e5b6;
 Wed, 09 Dec 2020 16:10:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 11FAAB279;
 Wed,  9 Dec 2020 16:10:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a4a5b375-8b5e-4597-b1b4-71553962e5b6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607530200; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jJK4b1ugQ0UokWHNyqHOknc4B9LzQZ5ESb/m8S9r3M4=;
	b=VCjABHYGpCX/zS1sAiAupiFalwBJK4b5d1LwOh9X67fqEVr3S7XWJL7APl86F/khjaNZPJ
	DdAycoJ5Pe/DUs6dE80Fpiket73S6x0TTg3y47N2F1OeA9fki+1AQwKGH3BU5DHkOuwpsb
	ffCin5jN3ioxWlfaoSWx0h9WdWCDQgY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 5/8] xen/hypfs: add support for id-based dynamic directories
Date: Wed,  9 Dec 2020 17:09:53 +0100
Message-Id: <20201209160956.32456-6-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201209160956.32456-1-jgross@suse.com>
References: <20201209160956.32456-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add some helpers to hypfs.c to support dynamic directories with a
numerical id as name.

The dynamic directory is based on a template specified by the user,
allowing the use of specific access functions and a predefined set of
entries in the directory.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- use macro for length of entry name (Jan Beulich)
- const attributes (Jan Beulich)
- use template name as format string (Jan Beulich)
- add hypfs_dynid_entry_size() helper (Jan Beulich)
- expect dyndir data having been allocated by enter() callback

V3:
- add a specific enter() callback returning the template pointer
- add data field to struct hypfs_dyndir_id
- rename hypfs_gen_dyndir_entry_id() (Jan Beulich)
- add comments regarding generated names to be kept in sync (Jan Beulich)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/hypfs.c      | 98 +++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/hypfs.h | 18 ++++++++
 2 files changed, 116 insertions(+)

diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index 8faf65cea0..087c63b92f 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -356,6 +356,104 @@ unsigned int hypfs_getsize(const struct hypfs_entry *entry)
     return entry->size;
 }
 
+/*
+ * Fill the direntry for a dynamically generated directory. Especially the
+ * generated name needs to be kept in sync with hypfs_gen_dyndir_id_entry().
+ */
+int hypfs_read_dyndir_id_entry(const struct hypfs_entry_dir *template,
+                               unsigned int id, bool is_last,
+                               XEN_GUEST_HANDLE_PARAM(void) *uaddr)
+{
+    struct xen_hypfs_dirlistentry direntry;
+    char name[HYPFS_DYNDIR_ID_NAMELEN];
+    unsigned int e_namelen, e_len;
+
+    e_namelen = snprintf(name, sizeof(name), template->e.name, id);
+    e_len = DIRENTRY_SIZE(e_namelen);
+    direntry.e.pad = 0;
+    direntry.e.type = template->e.type;
+    direntry.e.encoding = template->e.encoding;
+    direntry.e.content_len = template->e.funcs->getsize(&template->e);
+    direntry.e.max_write_len = template->e.max_size;
+    direntry.off_next = is_last ? 0 : e_len;
+    if ( copy_to_guest(*uaddr, &direntry, 1) )
+        return -EFAULT;
+    if ( copy_to_guest_offset(*uaddr, DIRENTRY_NAME_OFF, name,
+                              e_namelen + 1) )
+        return -EFAULT;
+
+    guest_handle_add_offset(*uaddr, e_len);
+
+    return 0;
+}
+
+static const struct hypfs_entry *hypfs_dyndir_enter(
+    const struct hypfs_entry *entry)
+{
+    const struct hypfs_dyndir_id *data;
+
+    data = hypfs_get_dyndata();
+
+    /* Use template with original enter function. */
+    return data->template->e.funcs->enter(&data->template->e);
+}
+
+static struct hypfs_entry *hypfs_dyndir_findentry(
+    const struct hypfs_entry_dir *dir, const char *name, unsigned int name_len)
+{
+    const struct hypfs_dyndir_id *data;
+
+    data = hypfs_get_dyndata();
+
+    /* Use template with original findentry function. */
+    return data->template->e.funcs->findentry(data->template, name, name_len);
+}
+
+static int hypfs_read_dyndir(const struct hypfs_entry *entry,
+                             XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    const struct hypfs_dyndir_id *data;
+
+    data = hypfs_get_dyndata();
+
+    /* Use template with original read function. */
+    return data->template->e.funcs->read(&data->template->e, uaddr);
+}
+
+/*
+ * Fill dyndata with a dynamically generated directory based on a template
+ * and a numerical id.
+ * Needs to be kept in sync with hypfs_read_dyndir_id_entry() regarding the
+ * name generated.
+ */
+struct hypfs_entry *hypfs_gen_dyndir_id_entry(
+    const struct hypfs_entry_dir *template, unsigned int id, void *data)
+{
+    struct hypfs_dyndir_id *dyndata;
+
+    dyndata = hypfs_get_dyndata();
+
+    dyndata->template = template;
+    dyndata->id = id;
+    dyndata->data = data;
+    snprintf(dyndata->name, sizeof(dyndata->name), template->e.name, id);
+    dyndata->dir = *template;
+    dyndata->dir.e.name = dyndata->name;
+    dyndata->dir.e.funcs = &dyndata->funcs;
+    dyndata->funcs = *template->e.funcs;
+    dyndata->funcs.enter = hypfs_dyndir_enter;
+    dyndata->funcs.findentry = hypfs_dyndir_findentry;
+    dyndata->funcs.read = hypfs_read_dyndir;
+
+    return &dyndata->dir.e;
+}
+
+unsigned int hypfs_dynid_entry_size(const struct hypfs_entry *template,
+                                    unsigned int id)
+{
+    return DIRENTRY_SIZE(snprintf(NULL, 0, template->name, id));
+}
+
 int hypfs_read_dir(const struct hypfs_entry *entry,
                    XEN_GUEST_HANDLE_PARAM(void) uaddr)
 {
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index 4c469cbeb4..34073faff8 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -76,6 +76,17 @@ struct hypfs_entry_dir {
     struct list_head dirlist;
 };
 
+struct hypfs_dyndir_id {
+    struct hypfs_entry_dir dir;             /* Modified copy of template. */
+    struct hypfs_funcs funcs;               /* Dynamic functions. */
+    const struct hypfs_entry_dir *template; /* Template used. */
+#define HYPFS_DYNDIR_ID_NAMELEN 12
+    char name[HYPFS_DYNDIR_ID_NAMELEN];     /* Name of hypfs entry. */
+
+    unsigned int id;                        /* Numerical id. */
+    void *data;                             /* Data associated with id. */
+};
+
 #define HYPFS_DIR_INIT_FUNC(var, nam, fn)         \
     struct hypfs_entry_dir __read_mostly var = {  \
         .e.type = XEN_HYPFS_TYPE_DIR,             \
@@ -186,6 +197,13 @@ void *hypfs_alloc_dyndata_size(unsigned long size);
 #define hypfs_alloc_dyndata(type) (type *)hypfs_alloc_dyndata_size(sizeof(type))
 void *hypfs_get_dyndata(void);
 void hypfs_free_dyndata(void);
+int hypfs_read_dyndir_id_entry(const struct hypfs_entry_dir *template,
+                               unsigned int id, bool is_last,
+                               XEN_GUEST_HANDLE_PARAM(void) *uaddr);
+struct hypfs_entry *hypfs_gen_dyndir_id_entry(
+    const struct hypfs_entry_dir *template, unsigned int id, void *data);
+unsigned int hypfs_dynid_entry_size(const struct hypfs_entry *template,
+                                    unsigned int id);
 #endif
 
 #endif /* __XEN_HYPFS_H__ */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 16:14:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 16:14:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48474.85738 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn26x-0004K6-B5; Wed, 09 Dec 2020 16:14:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48474.85738; Wed, 09 Dec 2020 16:14:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn26x-0004Jz-7w; Wed, 09 Dec 2020 16:14:39 +0000
Received: by outflank-mailman (input) for mailman id 48474;
 Wed, 09 Dec 2020 16:14:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BAt3=FN=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kn26v-0004Ju-UU
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 16:14:37 +0000
Received: from mail-wm1-f43.google.com (unknown [209.85.128.43])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 124e8341-ec5f-4dc9-a087-ea1204aeca61;
 Wed, 09 Dec 2020 16:14:37 +0000 (UTC)
Received: by mail-wm1-f43.google.com with SMTP id q75so2252953wme.2
 for <xen-devel@lists.xenproject.org>; Wed, 09 Dec 2020 08:14:36 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id w12sm5020797wre.57.2020.12.09.08.14.35
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 09 Dec 2020 08:14:35 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 124e8341-ec5f-4dc9-a087-ea1204aeca61
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=YGNj/RJq/kpRiFkbtXCcstj9h5lvfO5GHNpbdotNTlM=;
        b=U+Z3oZHBkLeWr/LKdA4q3SPT0arc7T0dLFwL68bEG1NiDULmWijs9+6Np02jga2NBK
         +81XLtK/Bu9lKQmDHI0Ehww7MbbJ4f5iPotF+DqtrG6TMa1pZid+HPIjWEOQA2KQNTLT
         iqrTNkmo4wlauIcZ56reJdCBMb6qw8be/y39pSsYv32s7ZxyM86NNBXX6giKZhBIJH8H
         rQzBprfC/pUlPmoJImS7iDPVfdoXVggdKBca+P70hH9uW4IHk0+L/cPkZ4wtnmR2NGUL
         REqb4DffBVf2wQZ5Unyjsx6XhOt3uL4MOIDCZFvMKSRtBbXTyXnTw0jGJPPoaZuVO3Q/
         uvEQ==
X-Gm-Message-State: AOAM532OE9roW70POAz6b6kyiNg4cJXS6/kzO5rAaeXBGfPRdIols686
	L0tj5ej+EU3Oer8zZQGooqQ=
X-Google-Smtp-Source: ABdhPJyNIldCrLbq7aNFDJGeBbz+5kwI3WZL3iPwc23RtFhK3VNh6SCliJZ6xIK8+T5E5jb+VxZMtQ==
X-Received: by 2002:a1c:6208:: with SMTP id w8mr3561720wmb.96.1607530476246;
        Wed, 09 Dec 2020 08:14:36 -0800 (PST)
Date: Wed, 9 Dec 2020 16:14:33 +0000
From: Wei Liu <wl@xen.org>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, famzheng@amazon.com, cardoe@cardoe.com,
	wl@xen.org, Bertrand.Marquis@arm.com, julien@xen.org,
	andrew.cooper3@citrix.com
Subject: Re: [PATCH v6 00/25] xl / libxl: named PCI pass-through devices
Message-ID: <20201209161433.d7xpx5zwtikd3fmk@liuwe-devbox-debian-v2>
References: <160746448732.12203.10647684023172140005@600e7e483b3a>
 <alpine.DEB.2.21.2012081702420.20986@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2012081702420.20986@sstabellini-ThinkPad-T480s>
User-Agent: NeoMutt/20180716

On Tue, Dec 08, 2020 at 05:02:50PM -0800, Stefano Stabellini wrote:
> The pipeline failed because the "fedora-gcc-debug" build failed with a
> timeout: 
> 
> ERROR: Job failed: execution took longer than 1h0m0s seconds
> 
> given that all the other jobs passed (including the other Fedora job), I
> take this failed because the gitlab-ci x86 runners were overloaded?
> 

The CI system is configured to auto-scale as the number of jobs grows.
The limit is set to 10 (VMs) at the moment.

https://gitlab.com/xen-project/xen-gitlab-ci/-/commit/832bfd72ea3a227283bf3df88b418a9aae95a5a4

I haven't looked at the log, but the number of build jobs looks rather
larger than when we started. Maybe the limit of 10 is no longer good
enough?

Wei.


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 16:16:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 16:16:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48484.85763 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn28j-0004Um-5r; Wed, 09 Dec 2020 16:16:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48484.85763; Wed, 09 Dec 2020 16:16:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn28j-0004Ue-1i; Wed, 09 Dec 2020 16:16:29 +0000
Received: by outflank-mailman (input) for mailman id 48484;
 Wed, 09 Dec 2020 16:16:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sDS6=FN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kn28h-0004SF-1r
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 16:16:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7e50217e-569d-4108-be58-42ac93a182cc;
 Wed, 09 Dec 2020 16:16:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B4591ADAA;
 Wed,  9 Dec 2020 16:16:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7e50217e-569d-4108-be58-42ac93a182cc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607530580; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=b+/TxrF4qJUX0wGGTUMyByEWIjy0WZY4LBlGieHkPRc=;
	b=jCp82V2QdeP4AHedfgWNEX4ayd24waSekvuv7niZe9ARvaGxoizwmB79CEDgcSySxGJu4H
	U+ePcxt0UxSvKq2vKr1u3xy6b7L4QN61f12K0uVv7+tq3a2gnT56NXko65QyoE4WBNgdOg
	ugTHHXmSBDCbg1ppbWxjOP4ipVg14DQ=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: paul@xen.org,
	Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH RFC 1/3] xen/hypfs: add support for bool leafs in dynamic directories
Date: Wed,  9 Dec 2020 17:16:16 +0100
Message-Id: <20201209161618.309-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201209161618.309-1-jgross@suse.com>
References: <20201209161618.309-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add support for reading fixed-size leafs and writing bool leafs in
dynamic directories, with the backing variable being a member of the
structure anchored in struct hypfs_dyndir->data.

This adds the related leaf read and write functions and a helper
macro HYPFS_STRUCT_ELEM() for referencing the element.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/hypfs.c      | 53 +++++++++++++++++++++++++++++++++++------
 xen/include/xen/hypfs.h |  7 ++++++
 2 files changed, 53 insertions(+), 7 deletions(-)

diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index 087c63b92f..caee48cc97 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -501,17 +501,26 @@ int hypfs_read_dir(const struct hypfs_entry *entry,
     return 0;
 }
 
-int hypfs_read_leaf(const struct hypfs_entry *entry,
-                    XEN_GUEST_HANDLE_PARAM(void) uaddr)
+static int hypfs_read_leaf_off(const struct hypfs_entry *entry,
+                               XEN_GUEST_HANDLE_PARAM(void) uaddr,
+                               void *off)
 {
     const struct hypfs_entry_leaf *l;
     unsigned int size = entry->funcs->getsize(entry);
+    const void *ptr;
 
     ASSERT(this_cpu(hypfs_locked) != hypfs_unlocked);
 
     l = container_of(entry, const struct hypfs_entry_leaf, e);
+    ptr = off ? off + (unsigned long)l->u.content : l->u.content;
+
+    return copy_to_guest(uaddr, ptr, size) ? -EFAULT : 0;
+}
 
-    return copy_to_guest(uaddr, l->u.content, size) ?  -EFAULT : 0;
+int hypfs_read_leaf(const struct hypfs_entry *entry,
+                    XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    return hypfs_read_leaf_off(entry, uaddr, NULL);
 }
 
 static int hypfs_read(const struct hypfs_entry *entry,
@@ -587,11 +596,12 @@ int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
     return ret;
 }
 
-int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
-                     XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
-                     unsigned int ulen)
+static int hypfs_write_bool_off(struct hypfs_entry_leaf *leaf,
+                                XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                                unsigned int ulen, void *off)
 {
     bool buf;
+    bool *ptr;
 
     ASSERT(this_cpu(hypfs_locked) == hypfs_write_locked);
     ASSERT(leaf->e.type == XEN_HYPFS_TYPE_BOOL &&
@@ -604,11 +614,19 @@ int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
     if ( copy_from_guest(&buf, uaddr, ulen) )
         return -EFAULT;
 
-    *(bool *)leaf->u.write_ptr = buf;
+    ptr = off ? off + (unsigned long)leaf->u.write_ptr : leaf->u.write_ptr;
+    *ptr = buf;
 
     return 0;
 }
 
+int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
+                     XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                     unsigned int ulen)
+{
+    return hypfs_write_bool_off(leaf, uaddr, ulen, NULL);
+}
+
 int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
                        XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
                        unsigned int ulen)
@@ -644,6 +662,27 @@ int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
     return ret;
 }
 
+int hypfs_dyndir_id_read_leaf(const struct hypfs_entry *entry,
+                              XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    struct hypfs_dyndir_id *dyndata;
+
+    dyndata = hypfs_get_dyndata();
+
+    return hypfs_read_leaf_off(entry, uaddr, dyndata->data);
+}
+
+int hypfs_dyndir_id_write_bool(struct hypfs_entry_leaf *leaf,
+                               XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                               unsigned int ulen)
+{
+    struct hypfs_dyndir_id *dyndata;
+
+    dyndata = hypfs_get_dyndata();
+
+    return hypfs_write_bool_off(leaf, uaddr, ulen, dyndata->data);
+}
+
 int hypfs_write_deny(struct hypfs_entry_leaf *leaf,
                      XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
                      unsigned int ulen)
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index 34073faff8..670dc68b48 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -160,6 +160,8 @@ static inline void hypfs_string_set_reference(struct hypfs_entry_leaf *leaf,
     HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_BOOL, nam, contvar, \
                          &hypfs_bool_wr_funcs, 1)
 
+#define HYPFS_STRUCT_ELEM(type, elem)    (((type *)NULL)->elem)
+
 extern struct hypfs_entry_dir hypfs_root;
 
 int hypfs_add_dir(struct hypfs_entry_dir *parent,
@@ -204,6 +206,11 @@ struct hypfs_entry *hypfs_gen_dyndir_id_entry(
     const struct hypfs_entry_dir *template, unsigned int id, void *data);
 unsigned int hypfs_dynid_entry_size(const struct hypfs_entry *template,
                                     unsigned int id);
+int hypfs_dyndir_id_read_leaf(const struct hypfs_entry *entry,
+                              XEN_GUEST_HANDLE_PARAM(void) uaddr);
+int hypfs_dyndir_id_write_bool(struct hypfs_entry_leaf *leaf,
+                               XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                               unsigned int ulen);
 #endif
 
 #endif /* __XEN_HYPFS_H__ */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 16:16:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 16:16:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48483.85750 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn28d-0004SS-NV; Wed, 09 Dec 2020 16:16:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48483.85750; Wed, 09 Dec 2020 16:16:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn28d-0004SL-KO; Wed, 09 Dec 2020 16:16:23 +0000
Received: by outflank-mailman (input) for mailman id 48483;
 Wed, 09 Dec 2020 16:16:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sDS6=FN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kn28c-0004SF-30
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 16:16:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9df0cea8-afcc-49b6-af09-42760f289216;
 Wed, 09 Dec 2020 16:16:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7A563AD89;
 Wed,  9 Dec 2020 16:16:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9df0cea8-afcc-49b6-af09-42760f289216
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607530580; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=7s9j1vgacL/VeYC8XNsPbPiTS/jgKwiYvgvAIn7vv54=;
	b=QcZkxZTk7UGUGFqMCbQamAhJN1V4YBtEt/LtVovw4IxWwJT8omY1neYr40jx1LqOTJoZO6
	Z2c/rdy0yNKF+lw/9Nsa1iVKWmGTHAwfMf8k+p3ZEHkN3MUnHcYI2grPS4ZvmvLYbnSCLz
	+sHMC/q++H7W2sdAkmAlKEY9WlVulCc=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: paul@xen.org,
	Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH RFC 0/3] xen: add hypfs per-domain abi-features
Date: Wed,  9 Dec 2020 17:16:15 +0100
Message-Id: <20201209161618.309-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This small series is meant as an example of how to add further dynamic
directories to hypfs. It can be used to replace Paul's current approach
of specifying ABI-features via domain create flags with hypfs nodes
instead.

The related libxl part could just use libxenhypfs to set the values.

This series is meant to be applied on top of my series:

  xen: support per-cpupool scheduling granularity

Juergen Gross (3):
  xen/hypfs: add support for bool leafs in dynamic directories
  xen/domain: add domain hypfs directories
  xen/domain: add per-domain hypfs directory abi-features

 docs/misc/hypfs-paths.pandoc |  10 ++
 xen/common/Makefile          |   1 +
 xen/common/hypfs.c           |  53 +++++++++--
 xen/common/hypfs_dom.c       | 176 +++++++++++++++++++++++++++++++++++
 xen/include/xen/hypfs.h      |   7 ++
 xen/include/xen/sched.h      |   4 +
 6 files changed, 244 insertions(+), 7 deletions(-)
 create mode 100644 xen/common/hypfs_dom.c

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 16:16:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 16:16:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48485.85774 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn28n-0004YE-Cf; Wed, 09 Dec 2020 16:16:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48485.85774; Wed, 09 Dec 2020 16:16:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn28n-0004Y7-9V; Wed, 09 Dec 2020 16:16:33 +0000
Received: by outflank-mailman (input) for mailman id 48485;
 Wed, 09 Dec 2020 16:16:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sDS6=FN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kn28m-0004SF-21
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 16:16:32 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0ea05d4a-9f24-491c-9980-a2f279dc18e2;
 Wed, 09 Dec 2020 16:16:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F1257ADC5;
 Wed,  9 Dec 2020 16:16:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0ea05d4a-9f24-491c-9980-a2f279dc18e2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607530581; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=KRtEHnVaMwtk5C3UwqZutxYpthuwswPsa5VGCeAsQUM=;
	b=IBlresOrvNrwtJf7ZbuEUaOvcpmnBPzux8muFTgD+31wgPQ6wDdatxNJHt1AUo0ZV3Tn3Q
	AdqtrHkD2RrTqrFyh8SrImPxPYJinrpTTsn9MP/TlG8lddn/+nTcTQYpi7loEs0k9kJ8ig
	DrrUGtUt8zl7uKYuZscvo6OvcfxTPjc=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: paul@xen.org,
	Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH RFC 2/3] xen/domain: add domain hypfs directories
Date: Wed,  9 Dec 2020 17:16:17 +0100
Message-Id: <20201209161618.309-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201209161618.309-1-jgross@suse.com>
References: <20201209161618.309-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add /domain/<domid> directories to hypfs. These are completely
dynamic, so the related hypfs access functions need to be implemented.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- new patch
---
 docs/misc/hypfs-paths.pandoc |  10 +++
 xen/common/Makefile          |   1 +
 xen/common/hypfs_dom.c       | 137 +++++++++++++++++++++++++++++++++++
 3 files changed, 148 insertions(+)
 create mode 100644 xen/common/hypfs_dom.c

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index e86f7d0dbe..116642e367 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -34,6 +34,7 @@ not containing any '/' character. The names "." and ".." are reserved
 for file system internal use.
 
 VALUES are strings and can take the following forms (note that this represents
 only the syntax used in this document):
 
 * STRING -- an arbitrary 0-delimited byte string.
@@ -191,6 +192,15 @@ The scheduling granularity of a cpupool.
 Writing a value is allowed only for cpupools with no cpu assigned and if the
 architecture is supporting different scheduling granularities.
 
+#### /domain/
+
+A directory of all current domains.
+
+#### /domain/*/
+
+The individual domains. Each entry is a directory with the name being the
+domain-id (e.g. /domain/0/).
+
 #### /params/
 
 A directory of runtime parameters.
diff --git a/xen/common/Makefile b/xen/common/Makefile
index d109f279a4..e88a9ee91e 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -15,6 +15,7 @@ obj-$(CONFIG_GRANT_TABLE) += grant_table.o
 obj-y += guestcopy.o
 obj-bin-y += gunzip.init.o
 obj-$(CONFIG_HYPFS) += hypfs.o
+obj-$(CONFIG_HYPFS) += hypfs_dom.o
 obj-y += irq.o
 obj-y += kernel.o
 obj-y += keyhandler.o
diff --git a/xen/common/hypfs_dom.c b/xen/common/hypfs_dom.c
new file mode 100644
index 0000000000..241e379b24
--- /dev/null
+++ b/xen/common/hypfs_dom.c
@@ -0,0 +1,137 @@
+/******************************************************************************
+ *
+ * hypfs_dom.c
+ *
+ * Per domain hypfs nodes.
+ */
+
+#include <xen/err.h>
+#include <xen/hypfs.h>
+#include <xen/lib.h>
+#include <xen/sched.h>
+
+static const struct hypfs_entry *domain_domdir_enter(
+    const struct hypfs_entry *entry)
+{
+    struct hypfs_dyndir_id *data;
+    struct domain *d;
+
+    data = hypfs_get_dyndata();
+    d = get_domain_by_id(data->id);
+    data->data = d;
+    if ( !d )
+        return ERR_PTR(-ENOENT);
+
+    return entry;
+}
+
+static void domain_domdir_exit(const struct hypfs_entry *entry)
+{
+    struct hypfs_dyndir_id *data;
+    struct domain *d;
+
+    data = hypfs_get_dyndata();
+    d = data->data;
+    put_domain(d);
+}
+
+static const struct hypfs_funcs domain_domdir_funcs = {
+    .enter = domain_domdir_enter,
+    .exit = domain_domdir_exit,
+    .read = hypfs_read_dir,
+    .write = hypfs_write_deny,
+    .getsize = hypfs_getsize,
+    .findentry = hypfs_dir_findentry,
+};
+
+static HYPFS_DIR_INIT_FUNC(domain_domdir, "%u", &domain_domdir_funcs);
+
+static int domain_dir_read(const struct hypfs_entry *entry,
+                           XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    int ret = 0;
+    const struct domain *d;
+
+    for_each_domain ( d )
+    {
+        ret = hypfs_read_dyndir_id_entry(&domain_domdir, d->domain_id,
+                                         !d->next_in_list, &uaddr);
+        if ( ret )
+            break;
+    }
+
+    return ret;
+}
+
+static unsigned int domain_dir_getsize(const struct hypfs_entry *entry)
+{
+    const struct domain *d;
+    unsigned int size = 0;
+
+    for_each_domain ( d )
+        size += hypfs_dynid_entry_size(entry, d->domain_id);
+
+    return size;
+}
+
+static const struct hypfs_entry *domain_dir_enter(
+    const struct hypfs_entry *entry)
+{
+    struct hypfs_dyndir_id *data;
+
+    data = hypfs_alloc_dyndata(struct hypfs_dyndir_id);
+    if ( !data )
+        return ERR_PTR(-ENOMEM);
+    data->id = DOMID_SELF;
+
+    rcu_read_lock(&domlist_read_lock);
+
+    return entry;
+}
+
+static void domain_dir_exit(const struct hypfs_entry *entry)
+{
+    rcu_read_unlock(&domlist_read_lock);
+
+    hypfs_free_dyndata();
+}
+
+static struct hypfs_entry *domain_dir_findentry(
+    const struct hypfs_entry_dir *dir, const char *name, unsigned int name_len)
+{
+    unsigned long id;
+    const char *end;
+    struct domain *d;
+
+    id = simple_strtoul(name, &end, 10);
+    if ( end != name + name_len )
+        return ERR_PTR(-ENOENT);
+
+    d = rcu_lock_domain_by_id(id);
+    if ( !d )
+        return ERR_PTR(-ENOENT);
+
+    rcu_unlock_domain(d);
+
+    return hypfs_gen_dyndir_id_entry(&domain_domdir, id, d);
+}
+
+static const struct hypfs_funcs domain_dir_funcs = {
+    .enter = domain_dir_enter,
+    .exit = domain_dir_exit,
+    .read = domain_dir_read,
+    .write = hypfs_write_deny,
+    .getsize = domain_dir_getsize,
+    .findentry = domain_dir_findentry,
+};
+
+static HYPFS_DIR_INIT_FUNC(domain_dir, "domain", &domain_dir_funcs);
+
+static int __init domhypfs_init(void)
+{
+    hypfs_add_dir(&hypfs_root, &domain_dir, true);
+    hypfs_add_dyndir(&domain_dir, &domain_domdir);
+
+    return 0;
+}
+__initcall(domhypfs_init);
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 16:16:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 16:16:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48486.85787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn28s-0004cy-M1; Wed, 09 Dec 2020 16:16:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48486.85787; Wed, 09 Dec 2020 16:16:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn28s-0004cn-Ht; Wed, 09 Dec 2020 16:16:38 +0000
Received: by outflank-mailman (input) for mailman id 48486;
 Wed, 09 Dec 2020 16:16:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sDS6=FN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kn28r-0004SF-2F
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 16:16:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d08d6373-737d-49ff-ac70-952481d7f693;
 Wed, 09 Dec 2020 16:16:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 444A3AF8A;
 Wed,  9 Dec 2020 16:16:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d08d6373-737d-49ff-ac70-952481d7f693
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607530581; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=g8rJKVfvD7B5g+TUwc6aQ0QAVGKg6M4SOf5V18X4xJk=;
	b=Gr6xFEk+e4HbG53mSI/tfN9x/ztLT9J2Zxt4L6Qqof0hSTZHL5Nq6i8ZsB+1CPQgqTnG+x
	52z7Na5xWdk5TXma52nhw8mqa7aeoxBofwRZMBIk2SKhl6tsDo5mZoy6jqNoOZ1Ep1iNyc
	Hvs7W0Uq5gR2nqDxE++F3QCbGktgcLI=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: paul@xen.org,
	Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH RFC 3/3] xen/domain: add per-domain hypfs directory abi-features
Date: Wed,  9 Dec 2020 17:16:18 +0100
Message-Id: <20201209161618.309-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201209161618.309-1-jgross@suse.com>
References: <20201209161618.309-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new per-domain hypfs directory "abi-features" used to control
the availability of some features. Changing the availability of a
feature is allowed only before the first activation of the domain.

The related leafs for now are "event-channel-upcall" and
"fifo-event-channels". For those, bool elements are added to struct
domain, but without any further handling yet.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/hypfs_dom.c  | 39 +++++++++++++++++++++++++++++++++++++++
 xen/include/xen/sched.h |  4 ++++
 2 files changed, 43 insertions(+)

diff --git a/xen/common/hypfs_dom.c b/xen/common/hypfs_dom.c
index 241e379b24..b54e246ad6 100644
--- a/xen/common/hypfs_dom.c
+++ b/xen/common/hypfs_dom.c
@@ -10,6 +10,42 @@
 #include <xen/lib.h>
 #include <xen/sched.h>
 
+static int domain_abifeat_write(struct hypfs_entry_leaf *leaf,
+                                XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                                unsigned int ulen)
+{
+    struct hypfs_dyndir_id *data;
+    struct domain *d;
+
+    data = hypfs_get_dyndata();
+    d = data->data;
+
+    if ( d->creation_finished )
+        return -EBUSY;
+
+    return hypfs_dyndir_id_write_bool(leaf, uaddr, ulen);
+}
+
+static const struct hypfs_funcs abifeat_funcs = {
+    .enter = hypfs_node_enter,
+    .exit = hypfs_node_exit,
+    .read = hypfs_dyndir_id_read_leaf,
+    .write = domain_abifeat_write,
+    .getsize = hypfs_getsize,
+    .findentry = hypfs_leaf_findentry,
+};
+
+static HYPFS_FIXEDSIZE_INIT(abifeat_evtupcall, XEN_HYPFS_TYPE_BOOL,
+                            "event-channel-upcall",
+                            HYPFS_STRUCT_ELEM(struct domain, abi_evt_upcall),
+                            &abifeat_funcs, 1);
+static HYPFS_FIXEDSIZE_INIT(abifeat_fifoevnt, XEN_HYPFS_TYPE_BOOL,
+                            "fifo-event-channels",
+                            HYPFS_STRUCT_ELEM(struct domain, abi_fifo_evt),
+                            &abifeat_funcs, 1);
+
+static HYPFS_DIR_INIT(domain_abifeatdir, "abi-features");
+
 static const struct hypfs_entry *domain_domdir_enter(
     const struct hypfs_entry *entry)
 {
@@ -131,6 +167,9 @@ static int __init domhypfs_init(void)
 {
     hypfs_add_dir(&hypfs_root, &domain_dir, true);
     hypfs_add_dyndir(&domain_dir, &domain_domdir);
+    hypfs_add_dir(&domain_domdir, &domain_abifeatdir, true);
+    hypfs_add_leaf(&domain_abifeatdir, &abifeat_evtupcall, true);
+    hypfs_add_leaf(&domain_abifeatdir, &abifeat_fifoevnt, true);
 
     return 0;
 }
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 31abbe7a99..fb99249743 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -424,6 +424,10 @@ struct domain
      */
     bool             creation_finished;
 
+    /* ABI-features (can be set via hypfs before first unpause).*/
+    bool             abi_fifo_evt;
+    bool             abi_evt_upcall;
+
     /* Which guest this guest has privileges on */
     struct domain   *target;
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 16:18:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 16:18:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48501.85798 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn2AS-0004x1-1f; Wed, 09 Dec 2020 16:18:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48501.85798; Wed, 09 Dec 2020 16:18:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn2AR-0004wu-Uy; Wed, 09 Dec 2020 16:18:15 +0000
Received: by outflank-mailman (input) for mailman id 48501;
 Wed, 09 Dec 2020 16:18:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kn2AQ-0004wk-KI; Wed, 09 Dec 2020 16:18:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kn2AQ-0004K0-DI; Wed, 09 Dec 2020 16:18:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kn2AQ-0001b7-2Q; Wed, 09 Dec 2020 16:18:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kn2AQ-0003xm-1y; Wed, 09 Dec 2020 16:18:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9GWV/q34//LYxxrwpCl4NVd8yr72VtavGTF0zTElC/I=; b=y1I+VBYnceyY+59B/jG/OKPOgO
	GlPZL4Y8nw7kThtGWGowdTwKjHTy1nLE5o5E5vw7qEtftPUJbyfinA7fvYYoKvexYUEXxcLF8KiIQ
	vPj3/c7h4AG6ZTQOsQpbWztT8FZDN9Gen98l0tH6ep+O/LJzyyYypciNkVOcg7t3qXIg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157337-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157337: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=553032db17440f8de011390e5a1cfddd13751b0b
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 09 Dec 2020 16:18:14 +0000

flight 157337 qemu-mainline real [real]
flight 157347 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157337/
http://logs.test-lab.xenproject.org/osstest/logs/157347/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                553032db17440f8de011390e5a1cfddd13751b0b
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  111 days
Failing since        152659  2020-08-21 14:07:39 Z  110 days  229 attempts
Testing same since   157337  2020-12-09 03:37:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69363 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 16:24:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 16:24:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48518.85814 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn2Ge-000607-Us; Wed, 09 Dec 2020 16:24:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48518.85814; Wed, 09 Dec 2020 16:24:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn2Ge-000600-RK; Wed, 09 Dec 2020 16:24:40 +0000
Received: by outflank-mailman (input) for mailman id 48518;
 Wed, 09 Dec 2020 16:24:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kn2Gd-0005zt-5o
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 16:24:39 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kn2Gb-0004RF-8p; Wed, 09 Dec 2020 16:24:37 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kn2Gb-0000Ae-0N; Wed, 09 Dec 2020 16:24:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=CrduSadRGWxDmlUA9cyM3OxNjxcP51vGA77Oa/Mm+pc=; b=K/aYP80xERl8bFO3fxINkKXKfu
	MosNRQbijFi0EWj9mGpFyej/8hICDFgxmhG+ARLmD5nWAu/xKAy5hrh0ob75E+z5MHzQe0MIExFtm
	JZcDovgMR6D7B1FL0EyPTTek3tya6b4QoYdvK5imF7SWYti4ZY7icsmWnKcwMl47uI00=;
Subject: Re: [PATCH RFC 0/3] xen: add hypfs per-domain abi-features
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: paul@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201209161618.309-1-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <a2270efd-19d4-5d5e-2e8b-4696ba9751ab@xen.org>
Date: Wed, 9 Dec 2020 16:24:32 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201209161618.309-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 09/12/2020 16:16, Juergen Gross wrote:
> This small series is meant as an example how to add further dynamical
> directories to hypfs. It can be used to replace Paul's current approach
> to specify ABI-features via domain create flags and replace those by
> hypfs nodes.

This can only work if none of the ABI-features are required at the time 
of creating the domain.

Those features should also be set only once. Furthermore, HYPFS is so 
far meant to be optional.

So it feels to me that Paul's approach is leaner and better suited for 
the ABI-features purpose.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 16:31:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 16:31:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48524.85825 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn2Mu-0006wQ-MA; Wed, 09 Dec 2020 16:31:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48524.85825; Wed, 09 Dec 2020 16:31:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn2Mu-0006wG-Iv; Wed, 09 Dec 2020 16:31:08 +0000
Received: by outflank-mailman (input) for mailman id 48524;
 Wed, 09 Dec 2020 16:31:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XdhY=FN=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kn2Mt-0006w9-Pn
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 16:31:07 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ce0f3510-d766-46d6-b82d-352339172288;
 Wed, 09 Dec 2020 16:31:06 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0B9GUsB2017471
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Wed, 9 Dec 2020 17:30:55 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 9960E2E946C; Wed,  9 Dec 2020 17:30:49 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce0f3510-d766-46d6-b82d-352339172288
Date: Wed, 9 Dec 2020 17:30:49 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: dom0 PV looping on search_pre_exception_table()
Message-ID: <20201209163049.GA6158@antioche.eu.org>
References: <20201208175738.GA3390@antioche.eu.org>
 <e73cc71d-c1a6-87c8-1b82-5d70d4f52eaa@citrix.com>
 <20201209101512.GA1299@antioche.eu.org>
 <3f7e50bb-24ad-1e32-9ea1-ba87007d3796@citrix.com>
 <20201209135908.GA4269@antioche.eu.org>
 <c612616a-3fcd-be93-7594-20c0c3b71b7a@citrix.com>
 <20201209154431.GA4913@antioche.eu.org>
 <52e1b10d-75d4-63ac-f91e-cb8f0dcca493@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <52e1b10d-75d4-63ac-f91e-cb8f0dcca493@citrix.com>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Wed, 09 Dec 2020 17:30:56 +0100 (MET)

On Wed, Dec 09, 2020 at 04:00:02PM +0000, Andrew Cooper wrote:
> [...]
> >> I wonder if the LDT is set up correctly.
> > I guess it is, otherwise it wouldn't boot with a Xen 4.13 kernel, isn't it ?
> 
> Well - you said you always saw it once on 4.13, which clearly shows that
> something was wonky, but it managed to unblock itself.
> 
> >> How about this incremental delta?
> > Here's the output
> > (XEN) IRET fault: #PF[0000]                                                    
> > (XEN) %cr2 ffff820000010040, LDT base ffffc4800000a000, limit 0057             
> > (XEN) *** pv_map_ldt_shadow_page(0x40) failed                                  
> > (XEN) IRET fault: #PF[0000]                                                    
> > (XEN) %cr2 ffff820000010040, LDT base ffffc4800000a000, limit 0057             
> > (XEN) *** pv_map_ldt_shadow_page(0x40) failed                                  
> > (XEN) IRET fault: #PF[0000]                                                 
> 
> Ok, so the promotion definitely fails, but we don't get as far as
> inspecting the content of the LDT frame. This probably means it failed
> to change the page type, which probably means there are still
> outstanding writeable references.
> 
> I'm expecting the final printk to be the one which triggers.

It's not. 
Here's the output:
(XEN) IRET fault: #PF[0000]                                                    
(XEN) %cr2 ffff820000010040, LDT base ffffbd000000a000, limit 0057             
(XEN) *** LDT: gl1e 0000000000000000 not present                               
(XEN) *** pv_map_ldt_shadow_page(0x40) failed                                  
(XEN) IRET fault: #PF[0000]                                                    
(XEN) %cr2 ffff820000010040, LDT base ffffbd000000a000, limit 0057             
(XEN) *** LDT: gl1e 0000000000000000 not present                               
(XEN) *** pv_map_ldt_shadow_page(0x40) failed                                  
(XEN) IRET fault: #PF[0000]                                                    
(XEN) %cr2 ffff820000010040, LDT base ffffbd000000a000, limit 0057          
(XEN) *** LDT: gl1e 0000000000000000 not present
(XEN) *** pv_map_ldt_shadow_page(0x40) failed
(XEN) IRET fault: #PF[0000]
(XEN) %cr2 ffff820000010040, LDT base ffffbd000000a000, limit 0057
(XEN) domain_crash called from extable.c:219
(XEN) Domain 0 (vcpu#0) crashed on cpu#0:
(XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:   C   ]----
(XEN) CPU:    0
(XEN) RIP:    0047:[<00007f7f5dc007d0>]
(XEN) RFLAGS: 0000000000000202   EM: 0   CONTEXT: pv guest (d0v0)
(XEN) rax: ffff82d04038c309   rbx: 0000000000000000   rcx: 000000000000e008
(XEN) rdx: 0000000000010086   rsi: ffff83007fcb7f78   rdi: 000000000000e010
(XEN) rbp: 0000000000000000   rsp: 00007f7fffcfc8d0   r8:  0000000e00000000
(XEN) r9:  0000000000000000   r10: 0000000000000000   r11: 0000000000000000
(XEN) r12: 0000000000000000   r13: 0000000000000000   r14: 0000000000000000
(XEN) r15: 0000000000000000   cr0: 0000000080050033   cr4: 0000000000002660
(XEN) cr3: 0000000079cdb000   cr2: ffffbd000000a040
(XEN) fsb: 0000000000000000   gsb: 0000000000000000   gss: ffffffff80cf2dc0
(XEN) ds: 0023   es: 0023   fs: 0000   gs: 0000   ss: 003f   cs: 0047
(XEN) Guest stack trace from rsp=00007f7fffcfc8d0:
(XEN)    0000000000000001 00007f7fffcfcde8 0000000000000000 0000000000000000
(XEN)    0000000000000003 000000000e200040 0000000000000004 0000000000000038
(XEN)    0000000000000005 0000000000000008 0000000000000006 0000000000001000
(XEN)    0000000000000007 00007f7f5dc00000 0000000000000008 0000000000000000
(XEN)    0000000000000009 000000000e201cd0 00000000000007d0 0000000000000000
(XEN)    00000000000007d1 0000000000000000 00000000000007d2 0000000000000000
(XEN)    00000000000007d3 0000000000000000 000000000000000d 00007f7fffcfd000
(XEN)    00000000000007de 00007f7fffcfc9d0 0000000000000000 0000000000000000
(XEN)    6e692f6e6962732f 0000000000007469 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) Hardware Dom0 crashed: rebooting machine in 5 seconds.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 16:31:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 16:31:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48527.85838 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn2N5-0006zd-VX; Wed, 09 Dec 2020 16:31:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48527.85838; Wed, 09 Dec 2020 16:31:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn2N5-0006zV-RM; Wed, 09 Dec 2020 16:31:19 +0000
Received: by outflank-mailman (input) for mailman id 48527;
 Wed, 09 Dec 2020 16:31:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiOm=FN=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kn2N4-0006z4-Cx
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 16:31:18 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 478590eb-876e-4383-a916-6569b259bcac;
 Wed, 09 Dec 2020 16:31:17 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2D1AD1FB;
 Wed,  9 Dec 2020 08:31:17 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 77C2B3F68F;
 Wed,  9 Dec 2020 08:31:16 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 478590eb-876e-4383-a916-6569b259bcac
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v3 0/7] xen/arm: Emulate ID registers
Date: Wed,  9 Dec 2020 16:30:53 +0000
Message-Id: <cover.1607524536.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1

The goal of this series is to emulate coprocessor ID registers so that
Xen only publishes to guests the features that are supported by Xen and
can actually be used by guests.
One practical example where this is required is SVE support, which is
forbidden by Xen as it is not supported: if Linux is compiled with it,
it will crash on boot. Another one is AMU, which is also forbidden by
Xen, but a Linux kernel compiled with it would crash if the platform
supports it.

To be able to emulate the coprocessor registers defining what features
are supported by the hardware, the TID3 bit of HCR_EL2 must be set, and
Xen must emulate the values of those registers when an exception is
caught on a guest access to them.

This series first creates a guest cpuinfo structure containing the
values that we want to publish to the guests, and then provides the
proper emulation for those registers when Xen gets an exception due to
an access to any of them.

This is a first, simple implementation to solve the problem. The way to
define the values that we provide to guests, and which features are
disabled, will be enhanced in a future patchset so that we can decide
per guest what can be used or not, and from this deduce the bits to
activate in HCR and the values that we must publish in the ID registers.

---
Changes in V2:
  Fix first patch to properly handle the DFR1 register and increase
  dbg32 size. Other patches have just been rebased.

Changes in V3:
  Add handling of reserved registers as RAZ
  Minor fixes described in each patch

Bertrand Marquis (7):
  xen/arm: Add ID registers and complete cpuinfo
  xen/arm: Add arm64 ID registers definitions
  xen/arm: create a cpuinfo structure for guest
  xen/arm: Add handler for ID registers on arm64
  xen/arm: Add handler for cp15 ID registers
  xen/arm: Add CP10 exception support to handle MVFR
  xen/arm: Activate TID3 in HCR_EL2

 xen/arch/arm/arm64/vsysreg.c        | 53 ++++++++++++++++++++
 xen/arch/arm/cpufeature.c           | 69 ++++++++++++++++++++++++++
 xen/arch/arm/traps.c                |  7 ++-
 xen/arch/arm/vcpreg.c               | 76 +++++++++++++++++++++++++++++
 xen/include/asm-arm/arm64/hsr.h     | 66 +++++++++++++++++++++++++
 xen/include/asm-arm/arm64/sysregs.h | 28 +++++++++++
 xen/include/asm-arm/cpregs.h        | 37 ++++++++++++++
 xen/include/asm-arm/cpufeature.h    | 58 ++++++++++++++++++----
 xen/include/asm-arm/perfc_defn.h    |  1 +
 xen/include/asm-arm/traps.h         |  1 +
 10 files changed, 386 insertions(+), 10 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 16:35:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 16:35:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48537.85850 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn2Qc-0007Ev-Gf; Wed, 09 Dec 2020 16:34:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48537.85850; Wed, 09 Dec 2020 16:34:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn2Qc-0007Eo-DM; Wed, 09 Dec 2020 16:34:58 +0000
Received: by outflank-mailman (input) for mailman id 48537;
 Wed, 09 Dec 2020 16:34:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiOm=FN=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kn2Qa-0007Ej-Sb
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 16:34:56 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 05ca0ee8-e161-471b-a33e-eec6b8ebd799;
 Wed, 09 Dec 2020 16:34:56 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id BF9351FB;
 Wed,  9 Dec 2020 08:34:55 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id F29603F68F;
 Wed,  9 Dec 2020 08:34:54 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 05ca0ee8-e161-471b-a33e-eec6b8ebd799
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v3 1/7] xen/arm: Add ID registers and complete cpuinfo
Date: Wed,  9 Dec 2020 16:30:54 +0000
Message-Id: <aab713989bec4dc843bd513c03b305c83028851b.1607524536.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1607524536.git.bertrand.marquis@arm.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>

Add definitions and entries in cpuinfo for the ID registers introduced
in newer revisions of the Arm Architecture Reference Manual:
- ID_PFR2: Processor Feature Register 2
- ID_DFR1: Debug Feature Register 1
- ID_MMFR4 and ID_MMFR5: Memory Model Feature Registers 4 and 5
- ID_ISAR6: ISA Feature Register 6
Add more bitfield definitions in the PFR fields of cpuinfo.
Add the MVFR2 register definition for aarch32.
Add mvfr values in cpuinfo.
Add some register definitions for arm64 in sysregs, as some are not
always known by compilers.
Initialize the new values added to cpuinfo in identify_cpu during init.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

---
Changes in V2:
  Fix dbg32 table size and add proper initialisation of the second entry
  of the table by reading ID_DFR1 register.
Changes in V3:
  Fix typo in commit title
  Add MVFR2 definition and handling on aarch32 and remove specific case
  for mvfr field in cpuinfo (now the same on arm64 and arm32).
  Add MMFR4 definition if not known by the compiler.

---
 xen/arch/arm/cpufeature.c           | 18 ++++++++++
 xen/include/asm-arm/arm64/sysregs.h | 28 +++++++++++++++
 xen/include/asm-arm/cpregs.h        | 12 +++++++
 xen/include/asm-arm/cpufeature.h    | 56 ++++++++++++++++++++++++-----
 4 files changed, 105 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
index 44126dbf07..bc7ee5ac95 100644
--- a/xen/arch/arm/cpufeature.c
+++ b/xen/arch/arm/cpufeature.c
@@ -114,15 +114,20 @@ void identify_cpu(struct cpuinfo_arm *c)
 
         c->mm64.bits[0]  = READ_SYSREG64(ID_AA64MMFR0_EL1);
         c->mm64.bits[1]  = READ_SYSREG64(ID_AA64MMFR1_EL1);
+        c->mm64.bits[2]  = READ_SYSREG64(ID_AA64MMFR2_EL1);
 
         c->isa64.bits[0] = READ_SYSREG64(ID_AA64ISAR0_EL1);
         c->isa64.bits[1] = READ_SYSREG64(ID_AA64ISAR1_EL1);
+
+        c->zfr64.bits[0] = READ_SYSREG64(ID_AA64ZFR0_EL1);
 #endif
 
         c->pfr32.bits[0] = READ_SYSREG32(ID_PFR0_EL1);
         c->pfr32.bits[1] = READ_SYSREG32(ID_PFR1_EL1);
+        c->pfr32.bits[2] = READ_SYSREG32(ID_PFR2_EL1);
 
         c->dbg32.bits[0] = READ_SYSREG32(ID_DFR0_EL1);
+        c->dbg32.bits[1] = READ_SYSREG32(ID_DFR1_EL1);
 
         c->aux32.bits[0] = READ_SYSREG32(ID_AFR0_EL1);
 
@@ -130,6 +135,8 @@ void identify_cpu(struct cpuinfo_arm *c)
         c->mm32.bits[1]  = READ_SYSREG32(ID_MMFR1_EL1);
         c->mm32.bits[2]  = READ_SYSREG32(ID_MMFR2_EL1);
         c->mm32.bits[3]  = READ_SYSREG32(ID_MMFR3_EL1);
+        c->mm32.bits[4]  = READ_SYSREG32(ID_MMFR4_EL1);
+        c->mm32.bits[5]  = READ_SYSREG32(ID_MMFR5_EL1);
 
         c->isa32.bits[0] = READ_SYSREG32(ID_ISAR0_EL1);
         c->isa32.bits[1] = READ_SYSREG32(ID_ISAR1_EL1);
@@ -137,6 +144,17 @@ void identify_cpu(struct cpuinfo_arm *c)
         c->isa32.bits[3] = READ_SYSREG32(ID_ISAR3_EL1);
         c->isa32.bits[4] = READ_SYSREG32(ID_ISAR4_EL1);
         c->isa32.bits[5] = READ_SYSREG32(ID_ISAR5_EL1);
+        c->isa32.bits[6] = READ_SYSREG32(ID_ISAR6_EL1);
+
+#ifdef CONFIG_ARM_64
+        c->mvfr.bits[0] = READ_SYSREG64(MVFR0_EL1);
+        c->mvfr.bits[1] = READ_SYSREG64(MVFR1_EL1);
+        c->mvfr.bits[2] = READ_SYSREG64(MVFR2_EL1);
+#else
+        c->mvfr.bits[0] = READ_CP32(MVFR0);
+        c->mvfr.bits[1] = READ_CP32(MVFR1);
+        c->mvfr.bits[2] = READ_CP32(MVFR2);
+#endif
 }
 
 /*
diff --git a/xen/include/asm-arm/arm64/sysregs.h b/xen/include/asm-arm/arm64/sysregs.h
index c60029d38f..077fd95fb7 100644
--- a/xen/include/asm-arm/arm64/sysregs.h
+++ b/xen/include/asm-arm/arm64/sysregs.h
@@ -57,6 +57,34 @@
 #define ICH_AP1R2_EL2             __AP1Rx_EL2(2)
 #define ICH_AP1R3_EL2             __AP1Rx_EL2(3)
 
+/*
+ * Define ID coprocessor registers if they are not
+ * already defined by the compiler.
+ *
+ * Values picked from linux kernel
+ */
+#ifndef ID_AA64MMFR2_EL1
+#define ID_AA64MMFR2_EL1            S3_0_C0_C7_2
+#endif
+#ifndef ID_PFR2_EL1
+#define ID_PFR2_EL1                 S3_0_C0_C3_4
+#endif
+#ifndef ID_MMFR4_EL1
+#define ID_MMFR4_EL1                S3_0_C0_C2_6
+#endif
+#ifndef ID_MMFR5_EL1
+#define ID_MMFR5_EL1                S3_0_C0_C3_6
+#endif
+#ifndef ID_ISAR6_EL1
+#define ID_ISAR6_EL1                S3_0_C0_C2_7
+#endif
+#ifndef ID_AA64ZFR0_EL1
+#define ID_AA64ZFR0_EL1             S3_0_C0_C4_4
+#endif
+#ifndef ID_DFR1_EL1
+#define ID_DFR1_EL1                 S3_0_C0_C3_5
+#endif
+
 /* Access to system registers */
 
 #define READ_SYSREG32(name) ((uint32_t)READ_SYSREG64(name))
diff --git a/xen/include/asm-arm/cpregs.h b/xen/include/asm-arm/cpregs.h
index 8fd344146e..2690ddeb7a 100644
--- a/xen/include/asm-arm/cpregs.h
+++ b/xen/include/asm-arm/cpregs.h
@@ -63,6 +63,8 @@
 #define FPSID           p10,7,c0,c0,0   /* Floating-Point System ID Register */
 #define FPSCR           p10,7,c1,c0,0   /* Floating-Point Status and Control Register */
 #define MVFR0           p10,7,c7,c0,0   /* Media and VFP Feature Register 0 */
+#define MVFR1           p10,7,c6,c0,0   /* Media and VFP Feature Register 1 */
+#define MVFR2           p10,7,c5,c0,0   /* Media and VFP Feature Register 2 */
 #define FPEXC           p10,7,c8,c0,0   /* Floating-Point Exception Control Register */
 #define FPINST          p10,7,c9,c0,0   /* Floating-Point Instruction Register */
 #define FPINST2         p10,7,c10,c0,0  /* Floating-point Instruction Register 2 */
@@ -108,18 +110,23 @@
 #define MPIDR           p15,0,c0,c0,5   /* Multiprocessor Affinity Register */
 #define ID_PFR0         p15,0,c0,c1,0   /* Processor Feature Register 0 */
 #define ID_PFR1         p15,0,c0,c1,1   /* Processor Feature Register 1 */
+#define ID_PFR2         p15,0,c0,c3,4   /* Processor Feature Register 2 */
 #define ID_DFR0         p15,0,c0,c1,2   /* Debug Feature Register 0 */
+#define ID_DFR1         p15,0,c0,c3,5   /* Debug Feature Register 1 */
 #define ID_AFR0         p15,0,c0,c1,3   /* Auxiliary Feature Register 0 */
 #define ID_MMFR0        p15,0,c0,c1,4   /* Memory Model Feature Register 0 */
 #define ID_MMFR1        p15,0,c0,c1,5   /* Memory Model Feature Register 1 */
 #define ID_MMFR2        p15,0,c0,c1,6   /* Memory Model Feature Register 2 */
 #define ID_MMFR3        p15,0,c0,c1,7   /* Memory Model Feature Register 3 */
+#define ID_MMFR4        p15,0,c0,c2,6   /* Memory Model Feature Register 4 */
+#define ID_MMFR5        p15,0,c0,c3,6   /* Memory Model Feature Register 5 */
 #define ID_ISAR0        p15,0,c0,c2,0   /* ISA Feature Register 0 */
 #define ID_ISAR1        p15,0,c0,c2,1   /* ISA Feature Register 1 */
 #define ID_ISAR2        p15,0,c0,c2,2   /* ISA Feature Register 2 */
 #define ID_ISAR3        p15,0,c0,c2,3   /* ISA Feature Register 3 */
 #define ID_ISAR4        p15,0,c0,c2,4   /* ISA Feature Register 4 */
 #define ID_ISAR5        p15,0,c0,c2,5   /* ISA Feature Register 5 */
+#define ID_ISAR6        p15,0,c0,c2,7   /* ISA Feature Register 6 */
 #define CCSIDR          p15,1,c0,c0,0   /* Cache Size ID Registers */
 #define CLIDR           p15,1,c0,c0,1   /* Cache Level ID Register */
 #define CSSELR          p15,2,c0,c0,0   /* Cache Size Selection Register */
@@ -312,18 +319,23 @@
 #define HSTR_EL2                HSTR
 #define ID_AFR0_EL1             ID_AFR0
 #define ID_DFR0_EL1             ID_DFR0
+#define ID_DFR1_EL1             ID_DFR1
 #define ID_ISAR0_EL1            ID_ISAR0
 #define ID_ISAR1_EL1            ID_ISAR1
 #define ID_ISAR2_EL1            ID_ISAR2
 #define ID_ISAR3_EL1            ID_ISAR3
 #define ID_ISAR4_EL1            ID_ISAR4
 #define ID_ISAR5_EL1            ID_ISAR5
+#define ID_ISAR6_EL1            ID_ISAR6
 #define ID_MMFR0_EL1            ID_MMFR0
 #define ID_MMFR1_EL1            ID_MMFR1
 #define ID_MMFR2_EL1            ID_MMFR2
 #define ID_MMFR3_EL1            ID_MMFR3
+#define ID_MMFR4_EL1            ID_MMFR4
+#define ID_MMFR5_EL1            ID_MMFR5
 #define ID_PFR0_EL1             ID_PFR0
 #define ID_PFR1_EL1             ID_PFR1
+#define ID_PFR2_EL1             ID_PFR2
 #define IFSR32_EL2              IFSR
 #define MDCR_EL2                HDCR
 #define MIDR_EL1                MIDR
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index c7b5052992..6cf83d775b 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -148,6 +148,7 @@ struct cpuinfo_arm {
     union {
         uint64_t bits[2];
         struct {
+            /* PFR0 */
             unsigned long el0:4;
             unsigned long el1:4;
             unsigned long el2:4;
@@ -155,9 +156,23 @@ struct cpuinfo_arm {
             unsigned long fp:4;   /* Floating Point */
             unsigned long simd:4; /* Advanced SIMD */
             unsigned long gic:4;  /* GIC support */
-            unsigned long __res0:28;
+            unsigned long ras:4;
+            unsigned long sve:4;
+            unsigned long sel2:4;
+            unsigned long mpam:4;
+            unsigned long amu:4;
+            unsigned long dit:4;
+            unsigned long __res0:4;
             unsigned long csv2:4;
-            unsigned long __res1:4;
+            unsigned long csv3:4;
+
+            /* PFR1 */
+            unsigned long bt:4;
+            unsigned long ssbs:4;
+            unsigned long mte:4;
+            unsigned long ras_frac:4;
+            unsigned long mpam_frac:4;
+            unsigned long __res1:44;
         };
     } pfr64;
 
@@ -170,7 +185,7 @@ struct cpuinfo_arm {
     } aux64;
 
     union {
-        uint64_t bits[2];
+        uint64_t bits[3];
         struct {
             unsigned long pa_range:4;
             unsigned long asid_bits:4;
@@ -190,6 +205,8 @@ struct cpuinfo_arm {
             unsigned long pan:4;
             unsigned long __res1:8;
             unsigned long __res2:32;
+
+            unsigned long __res3:64;
         };
     } mm64;
 
@@ -197,6 +214,10 @@ struct cpuinfo_arm {
         uint64_t bits[2];
     } isa64;
 
+    struct {
+        uint64_t bits[1];
+    } zfr64;
+
 #endif
 
     /*
@@ -204,25 +225,38 @@ struct cpuinfo_arm {
      * when running in 32-bit mode.
      */
     union {
-        uint32_t bits[2];
+        uint32_t bits[3];
         struct {
+            /* PFR0 */
             unsigned long arm:4;
             unsigned long thumb:4;
             unsigned long jazelle:4;
             unsigned long thumbee:4;
-            unsigned long __res0:16;
+            unsigned long csv2:4;
+            unsigned long amu:4;
+            unsigned long dit:4;
+            unsigned long ras:4;
 
+            /* PFR1 */
             unsigned long progmodel:4;
             unsigned long security:4;
             unsigned long mprofile:4;
             unsigned long virt:4;
             unsigned long gentimer:4;
-            unsigned long __res1:12;
+            unsigned long sec_frac:4;
+            unsigned long virt_frac:4;
+            unsigned long gic:4;
+
+            /* PFR2 */
+            unsigned long csv3:4;
+            unsigned long ssbs:4;
+            unsigned long ras_frac:4;
+            unsigned long __res2:20;
         };
     } pfr32;
 
     struct {
-        uint32_t bits[1];
+        uint32_t bits[2];
     } dbg32;
 
     struct {
@@ -230,12 +264,16 @@ struct cpuinfo_arm {
     } aux32;
 
     struct {
-        uint32_t bits[4];
+        uint32_t bits[6];
     } mm32;
 
     struct {
-        uint32_t bits[6];
+        uint32_t bits[7];
     } isa32;
+
+    struct {
+        uint64_t bits[3];
+    } mvfr;
 };
 
 extern struct cpuinfo_arm boot_cpu_data;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 16:35:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 16:35:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48538.85862 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn2Qf-0007GW-OE; Wed, 09 Dec 2020 16:35:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48538.85862; Wed, 09 Dec 2020 16:35:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn2Qf-0007GP-Km; Wed, 09 Dec 2020 16:35:01 +0000
Received: by outflank-mailman (input) for mailman id 48538;
 Wed, 09 Dec 2020 16:35:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiOm=FN=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kn2Qe-0007Fp-Fg
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 16:35:00 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 01d21cba-7b66-414c-baeb-6ed292fe766b;
 Wed, 09 Dec 2020 16:34:59 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4C7A21FB;
 Wed,  9 Dec 2020 08:34:59 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 99C683F68F;
 Wed,  9 Dec 2020 08:34:58 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 01d21cba-7b66-414c-baeb-6ed292fe766b
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v3 5/7] xen/arm: Add handler for cp15 ID registers
Date: Wed,  9 Dec 2020 16:30:58 +0000
Message-Id: <5a36325410f485dbdddc0f6088378cacc54c5243.1607524536.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1607524536.git.bertrand.marquis@arm.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>

Add support for emulation of cp15-based ID registers (on arm32 or when
running a 32-bit guest on arm64).
The handlers return the values stored in the guest_cpuinfo structure
for known registers, and RAZ for all reserved registers.
In the current state, the MVFR registers are not supported.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes in V2: Rebase
Changes in V3:
  Add case definition for reserved registers
  Add handling of reserved registers as RAZ.
  Fix code style in GENERATE_TID3_INFO declaration

---
 xen/arch/arm/vcpreg.c        | 39 ++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/cpregs.h | 25 +++++++++++++++++++++++
 2 files changed, 64 insertions(+)

diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c
index cdc91cdf5b..d371a1c38c 100644
--- a/xen/arch/arm/vcpreg.c
+++ b/xen/arch/arm/vcpreg.c
@@ -155,6 +155,14 @@ TVM_REG32(CONTEXTIDR, CONTEXTIDR_EL1)
         break;                                                      \
     }
 
+/* Macro to easily generate a case for ID co-processor emulation */
+#define GENERATE_TID3_INFO(reg, field, offset)                      \
+    case HSR_CPREG32(reg):                                          \
+    {                                                               \
+        return handle_ro_read_val(regs, regidx, cp32.read, hsr,     \
+                          1, guest_cpuinfo.field.bits[offset]);     \
+    }
+
 void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
 {
     const struct hsr_cp32 cp32 = hsr.cp32;
@@ -286,6 +294,37 @@ void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
          */
         return handle_raz_wi(regs, regidx, cp32.read, hsr, 1);
 
+    /*
+     * HCR_EL2.TID3
+     *
+     * This traps most identification registers used by a guest
+     * to identify the processor features.
+     */
+    GENERATE_TID3_INFO(ID_PFR0, pfr32, 0)
+    GENERATE_TID3_INFO(ID_PFR1, pfr32, 1)
+    GENERATE_TID3_INFO(ID_PFR2, pfr32, 2)
+    GENERATE_TID3_INFO(ID_DFR0, dbg32, 0)
+    GENERATE_TID3_INFO(ID_DFR1, dbg32, 1)
+    GENERATE_TID3_INFO(ID_AFR0, aux32, 0)
+    GENERATE_TID3_INFO(ID_MMFR0, mm32, 0)
+    GENERATE_TID3_INFO(ID_MMFR1, mm32, 1)
+    GENERATE_TID3_INFO(ID_MMFR2, mm32, 2)
+    GENERATE_TID3_INFO(ID_MMFR3, mm32, 3)
+    GENERATE_TID3_INFO(ID_MMFR4, mm32, 4)
+    GENERATE_TID3_INFO(ID_MMFR5, mm32, 5)
+    GENERATE_TID3_INFO(ID_ISAR0, isa32, 0)
+    GENERATE_TID3_INFO(ID_ISAR1, isa32, 1)
+    GENERATE_TID3_INFO(ID_ISAR2, isa32, 2)
+    GENERATE_TID3_INFO(ID_ISAR3, isa32, 3)
+    GENERATE_TID3_INFO(ID_ISAR4, isa32, 4)
+    GENERATE_TID3_INFO(ID_ISAR5, isa32, 5)
+    GENERATE_TID3_INFO(ID_ISAR6, isa32, 6)
+    /* MVFR registers are in cp10, not cp15 */
+
+    HSR_CPREG32_TID3_RESERVED_CASE:
+        /* Handle all reserved registers as RAZ */
+        return handle_ro_raz(regs, regidx, cp32.read, hsr, 1);
+
     /*
      * HCR_EL2.TIDCP
      *
diff --git a/xen/include/asm-arm/cpregs.h b/xen/include/asm-arm/cpregs.h
index 2690ddeb7a..5cb1ad5cbe 100644
--- a/xen/include/asm-arm/cpregs.h
+++ b/xen/include/asm-arm/cpregs.h
@@ -133,6 +133,31 @@
 #define VPIDR           p15,4,c0,c0,0   /* Virtualization Processor ID Register */
 #define VMPIDR          p15,4,c0,c0,5   /* Virtualization Multiprocessor ID Register */
 
+/*
+ * Those cases are catching all Reserved registers trapped by TID3 which
+ * currently have no assignment.
+ * HCR.TID3 is trapping all registers in the group 3:
+ * coproc == p15, opc1 == 0, CRn == c0, CRm == {c2-c7}, opc2 == {0-7}.
+ */
+#define HSR_CPREG32_TID3_CASES(REG)     case HSR_CPREG32(p15,0,c0,REG,0): \
+                                        case HSR_CPREG32(p15,0,c0,REG,1): \
+                                        case HSR_CPREG32(p15,0,c0,REG,2): \
+                                        case HSR_CPREG32(p15,0,c0,REG,3): \
+                                        case HSR_CPREG32(p15,0,c0,REG,4): \
+                                        case HSR_CPREG32(p15,0,c0,REG,5): \
+                                        case HSR_CPREG32(p15,0,c0,REG,6): \
+                                        case HSR_CPREG32(p15,0,c0,REG,7)
+
+#define HSR_CPREG32_TID3_RESERVED_CASE  case HSR_CPREG32(p15,0,c0,c3,0): \
+                                        case HSR_CPREG32(p15,0,c0,c3,1): \
+                                        case HSR_CPREG32(p15,0,c0,c3,2): \
+                                        case HSR_CPREG32(p15,0,c0,c3,3): \
+                                        case HSR_CPREG32(p15,0,c0,c3,7): \
+                                        HSR_CPREG32_TID3_CASES(c4): \
+                                        HSR_CPREG32_TID3_CASES(c5): \
+                                        HSR_CPREG32_TID3_CASES(c6): \
+                                        HSR_CPREG32_TID3_CASES(c7)
+
 /* CP15 CR1: System Control Registers */
 #define SCTLR           p15,0,c1,c0,0   /* System Control Register */
 #define ACTLR           p15,0,c1,c0,1   /* Auxiliary Control Register */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 16:35:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 16:35:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48539.85874 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn2Qh-0007Ic-6p; Wed, 09 Dec 2020 16:35:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48539.85874; Wed, 09 Dec 2020 16:35:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn2Qh-0007IU-2q; Wed, 09 Dec 2020 16:35:03 +0000
Received: by outflank-mailman (input) for mailman id 48539;
 Wed, 09 Dec 2020 16:35:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiOm=FN=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kn2Qf-0007Ej-Oe
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 16:35:01 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id ac6edc1e-1ffb-419b-90c9-721a0d2890e0;
 Wed, 09 Dec 2020 16:34:56 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id A3F241042;
 Wed,  9 Dec 2020 08:34:56 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id F046C3F68F;
 Wed,  9 Dec 2020 08:34:55 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ac6edc1e-1ffb-419b-90c9-721a0d2890e0
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v3 2/7] xen/arm: Add arm64 ID registers definitions
Date: Wed,  9 Dec 2020 16:30:55 +0000
Message-Id: <96a970e5e5d2f1b1bd0e50327857de6a8c8441f7.1607524536.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1607524536.git.bertrand.marquis@arm.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>

Add coprocessor register definitions for all ID registers trapped
through the TID3 bit of HCR_EL2.
Those are the ones that will be emulated in Xen in order to publish to
guests only the features that are supported by Xen and that are
accessible to guests.
Also define a case to catch all reserved registers, which should be
handled as RAZ.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes in V2: Rebase
Changes in V3:
  Add case definition for reserved registers.

---
 xen/include/asm-arm/arm64/hsr.h | 66 +++++++++++++++++++++++++++++++++
 1 file changed, 66 insertions(+)

diff --git a/xen/include/asm-arm/arm64/hsr.h b/xen/include/asm-arm/arm64/hsr.h
index ca931dd2fe..ffe0f0007e 100644
--- a/xen/include/asm-arm/arm64/hsr.h
+++ b/xen/include/asm-arm/arm64/hsr.h
@@ -110,6 +110,72 @@
 #define HSR_SYSREG_CNTP_CTL_EL0   HSR_SYSREG(3,3,c14,c2,1)
 #define HSR_SYSREG_CNTP_CVAL_EL0  HSR_SYSREG(3,3,c14,c2,2)
 
+/* These registers are trapped when HCR_EL2.TID3 is set */
+#define HSR_SYSREG_ID_PFR0_EL1    HSR_SYSREG(3,0,c0,c1,0)
+#define HSR_SYSREG_ID_PFR1_EL1    HSR_SYSREG(3,0,c0,c1,1)
+#define HSR_SYSREG_ID_PFR2_EL1    HSR_SYSREG(3,0,c0,c3,4)
+#define HSR_SYSREG_ID_DFR0_EL1    HSR_SYSREG(3,0,c0,c1,2)
+#define HSR_SYSREG_ID_DFR1_EL1    HSR_SYSREG(3,0,c0,c3,5)
+#define HSR_SYSREG_ID_AFR0_EL1    HSR_SYSREG(3,0,c0,c1,3)
+#define HSR_SYSREG_ID_MMFR0_EL1   HSR_SYSREG(3,0,c0,c1,4)
+#define HSR_SYSREG_ID_MMFR1_EL1   HSR_SYSREG(3,0,c0,c1,5)
+#define HSR_SYSREG_ID_MMFR2_EL1   HSR_SYSREG(3,0,c0,c1,6)
+#define HSR_SYSREG_ID_MMFR3_EL1   HSR_SYSREG(3,0,c0,c1,7)
+#define HSR_SYSREG_ID_MMFR4_EL1   HSR_SYSREG(3,0,c0,c2,6)
+#define HSR_SYSREG_ID_MMFR5_EL1   HSR_SYSREG(3,0,c0,c3,6)
+#define HSR_SYSREG_ID_ISAR0_EL1   HSR_SYSREG(3,0,c0,c2,0)
+#define HSR_SYSREG_ID_ISAR1_EL1   HSR_SYSREG(3,0,c0,c2,1)
+#define HSR_SYSREG_ID_ISAR2_EL1   HSR_SYSREG(3,0,c0,c2,2)
+#define HSR_SYSREG_ID_ISAR3_EL1   HSR_SYSREG(3,0,c0,c2,3)
+#define HSR_SYSREG_ID_ISAR4_EL1   HSR_SYSREG(3,0,c0,c2,4)
+#define HSR_SYSREG_ID_ISAR5_EL1   HSR_SYSREG(3,0,c0,c2,5)
+#define HSR_SYSREG_ID_ISAR6_EL1   HSR_SYSREG(3,0,c0,c2,7)
+#define HSR_SYSREG_MVFR0_EL1      HSR_SYSREG(3,0,c0,c3,0)
+#define HSR_SYSREG_MVFR1_EL1      HSR_SYSREG(3,0,c0,c3,1)
+#define HSR_SYSREG_MVFR2_EL1      HSR_SYSREG(3,0,c0,c3,2)
+
+#define HSR_SYSREG_ID_AA64PFR0_EL1   HSR_SYSREG(3,0,c0,c4,0)
+#define HSR_SYSREG_ID_AA64PFR1_EL1   HSR_SYSREG(3,0,c0,c4,1)
+#define HSR_SYSREG_ID_AA64DFR0_EL1   HSR_SYSREG(3,0,c0,c5,0)
+#define HSR_SYSREG_ID_AA64DFR1_EL1   HSR_SYSREG(3,0,c0,c5,1)
+#define HSR_SYSREG_ID_AA64ISAR0_EL1  HSR_SYSREG(3,0,c0,c6,0)
+#define HSR_SYSREG_ID_AA64ISAR1_EL1  HSR_SYSREG(3,0,c0,c6,1)
+#define HSR_SYSREG_ID_AA64MMFR0_EL1  HSR_SYSREG(3,0,c0,c7,0)
+#define HSR_SYSREG_ID_AA64MMFR1_EL1  HSR_SYSREG(3,0,c0,c7,1)
+#define HSR_SYSREG_ID_AA64MMFR2_EL1  HSR_SYSREG(3,0,c0,c7,2)
+#define HSR_SYSREG_ID_AA64AFR0_EL1   HSR_SYSREG(3,0,c0,c5,4)
+#define HSR_SYSREG_ID_AA64AFR1_EL1   HSR_SYSREG(3,0,c0,c5,5)
+#define HSR_SYSREG_ID_AA64ZFR0_EL1   HSR_SYSREG(3,0,c0,c4,4)
+
+/*
+ * These cases catch all reserved registers trapped by TID3 which
+ * currently have no assignment.
+ * HCR.TID3 traps all registers in group 3:
+ * Op0 == 3, Op1 == 0, CRn == c0, CRm == {c1-c7}, Op2 == {0-7}.
+ */
+#define HSR_SYSREG_TID3_RESERVED_CASE  case HSR_SYSREG(3,0,c0,c3,3): \
+                                       case HSR_SYSREG(3,0,c0,c3,7): \
+                                       case HSR_SYSREG(3,0,c0,c4,2): \
+                                       case HSR_SYSREG(3,0,c0,c4,3): \
+                                       case HSR_SYSREG(3,0,c0,c4,5): \
+                                       case HSR_SYSREG(3,0,c0,c4,6): \
+                                       case HSR_SYSREG(3,0,c0,c4,7): \
+                                       case HSR_SYSREG(3,0,c0,c5,2): \
+                                       case HSR_SYSREG(3,0,c0,c5,3): \
+                                       case HSR_SYSREG(3,0,c0,c5,6): \
+                                       case HSR_SYSREG(3,0,c0,c5,7): \
+                                       case HSR_SYSREG(3,0,c0,c6,2): \
+                                       case HSR_SYSREG(3,0,c0,c6,3): \
+                                       case HSR_SYSREG(3,0,c0,c6,4): \
+                                       case HSR_SYSREG(3,0,c0,c6,5): \
+                                       case HSR_SYSREG(3,0,c0,c6,6): \
+                                       case HSR_SYSREG(3,0,c0,c6,7): \
+                                       case HSR_SYSREG(3,0,c0,c7,3): \
+                                       case HSR_SYSREG(3,0,c0,c7,4): \
+                                       case HSR_SYSREG(3,0,c0,c7,5): \
+                                       case HSR_SYSREG(3,0,c0,c7,6): \
+                                       case HSR_SYSREG(3,0,c0,c7,7)
+
 #endif /* __ASM_ARM_ARM64_HSR_H */
 
 /*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 16:35:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 16:35:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48540.85886 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn2Qk-0007MS-Fw; Wed, 09 Dec 2020 16:35:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48540.85886; Wed, 09 Dec 2020 16:35:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn2Qk-0007MJ-CC; Wed, 09 Dec 2020 16:35:06 +0000
Received: by outflank-mailman (input) for mailman id 48540;
 Wed, 09 Dec 2020 16:35:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiOm=FN=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kn2Qj-0007Fp-Ev
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 16:35:05 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 6af9f6ac-e876-40fd-afa6-506424aaec58;
 Wed, 09 Dec 2020 16:35:00 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2C67B1042;
 Wed,  9 Dec 2020 08:35:00 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 7C9EE3F68F;
 Wed,  9 Dec 2020 08:34:59 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6af9f6ac-e876-40fd-afa6-506424aaec58
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v3 6/7] xen/arm: Add CP10 exception support to handle MVFR
Date: Wed,  9 Dec 2020 16:30:59 +0000
Message-Id: <a72a378cd1d4e5c6670980cf4d201d457abe5abc.1607524536.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1607524536.git.bertrand.marquis@arm.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>

Add support for decoding CP10 exceptions in order to emulate the values
of MVFR0, MVFR1 and MVFR2 when the TID3 bit of HCR is set.
This is required for aarch32 guests accessing the MVFR registers using
the vmrs and vmsr instructions.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes in V2: Rebase
Changes in V3:
  Add case for MVFR2, fix typo VMFR <-> MVFR.

---
 xen/arch/arm/traps.c             |  5 ++++
 xen/arch/arm/vcpreg.c            | 39 +++++++++++++++++++++++++++++++-
 xen/include/asm-arm/perfc_defn.h |  1 +
 xen/include/asm-arm/traps.h      |  1 +
 4 files changed, 45 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 22bd1bd4c6..28d9d64558 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2097,6 +2097,11 @@ void do_trap_guest_sync(struct cpu_user_regs *regs)
         perfc_incr(trap_cp14_dbg);
         do_cp14_dbg(regs, hsr);
         break;
+    case HSR_EC_CP10:
+        GUEST_BUG_ON(!psr_mode_is_32bit(regs));
+        perfc_incr(trap_cp10);
+        do_cp10(regs, hsr);
+        break;
     case HSR_EC_CP:
         GUEST_BUG_ON(!psr_mode_is_32bit(regs));
         perfc_incr(trap_cp);
diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c
index d371a1c38c..da4e22a467 100644
--- a/xen/arch/arm/vcpreg.c
+++ b/xen/arch/arm/vcpreg.c
@@ -319,7 +319,7 @@ void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
     GENERATE_TID3_INFO(ID_ISAR4, isa32, 4)
     GENERATE_TID3_INFO(ID_ISAR5, isa32, 5)
     GENERATE_TID3_INFO(ID_ISAR6, isa32, 6)
-    /* MVFR registers are in cp10 no cp15 */
+    /* MVFR registers are in cp10 not cp15 */
 
     HSR_CPREG32_TID3_RESERVED_CASE:
         /* Handle all reserved registers as RAZ */
@@ -638,6 +638,43 @@ void do_cp14_dbg(struct cpu_user_regs *regs, const union hsr hsr)
     inject_undef_exception(regs, hsr);
 }
 
+void do_cp10(struct cpu_user_regs *regs, const union hsr hsr)
+{
+    const struct hsr_cp32 cp32 = hsr.cp32;
+    int regidx = cp32.reg;
+
+    if ( !check_conditional_instr(regs, hsr) )
+    {
+        advance_pc(regs, hsr);
+        return;
+    }
+
+    switch ( hsr.bits & HSR_CP32_REGS_MASK )
+    {
+    /*
+     * HCR_EL2.TID3 traps accesses to the MVFR registers, which are used to
+     * identify the VFP/SIMD implementation, via the VMRS/VMSR instructions.
+     * The exception is encoded like a standard MRC/MCR access, with the reg
+     * field in Crn, matching how MVFR0 and MVFR1 are declared in cpregs.h.
+     */
+    GENERATE_TID3_INFO(MVFR0, mvfr, 0)
+    GENERATE_TID3_INFO(MVFR1, mvfr, 1)
+    GENERATE_TID3_INFO(MVFR2, mvfr, 2)
+
+    default:
+        gdprintk(XENLOG_ERR,
+                 "%s p10, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
+                 cp32.read ? "mrc" : "mcr",
+                 cp32.op1, cp32.reg, cp32.crn, cp32.crm, cp32.op2, regs->pc);
+        gdprintk(XENLOG_ERR, "unhandled 32-bit CP10 access %#x\n",
+                 hsr.bits & HSR_CP32_REGS_MASK);
+        inject_undef_exception(regs, hsr);
+        return;
+    }
+
+    advance_pc(regs, hsr);
+}
+
 void do_cp(struct cpu_user_regs *regs, const union hsr hsr)
 {
     const struct hsr_cp cp = hsr.cp;
diff --git a/xen/include/asm-arm/perfc_defn.h b/xen/include/asm-arm/perfc_defn.h
index 6a83185163..31f071222b 100644
--- a/xen/include/asm-arm/perfc_defn.h
+++ b/xen/include/asm-arm/perfc_defn.h
@@ -11,6 +11,7 @@ PERFCOUNTER(trap_cp15_64,  "trap: cp15 64-bit access")
 PERFCOUNTER(trap_cp14_32,  "trap: cp14 32-bit access")
 PERFCOUNTER(trap_cp14_64,  "trap: cp14 64-bit access")
 PERFCOUNTER(trap_cp14_dbg, "trap: cp14 dbg access")
+PERFCOUNTER(trap_cp10,     "trap: cp10 access")
 PERFCOUNTER(trap_cp,       "trap: cp access")
 PERFCOUNTER(trap_smc32,    "trap: 32-bit smc")
 PERFCOUNTER(trap_hvc32,    "trap: 32-bit hvc")
diff --git a/xen/include/asm-arm/traps.h b/xen/include/asm-arm/traps.h
index 997c37884e..c4a3d0fb1b 100644
--- a/xen/include/asm-arm/traps.h
+++ b/xen/include/asm-arm/traps.h
@@ -62,6 +62,7 @@ void do_cp15_64(struct cpu_user_regs *regs, const union hsr hsr);
 void do_cp14_32(struct cpu_user_regs *regs, const union hsr hsr);
 void do_cp14_64(struct cpu_user_regs *regs, const union hsr hsr);
 void do_cp14_dbg(struct cpu_user_regs *regs, const union hsr hsr);
+void do_cp10(struct cpu_user_regs *regs, const union hsr hsr);
 void do_cp(struct cpu_user_regs *regs, const union hsr hsr);
 
 /* SMCCC handling */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 16:35:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 16:35:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48541.85898 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn2Qm-0007Ps-PN; Wed, 09 Dec 2020 16:35:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48541.85898; Wed, 09 Dec 2020 16:35:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn2Qm-0007Pi-LG; Wed, 09 Dec 2020 16:35:08 +0000
Received: by outflank-mailman (input) for mailman id 48541;
 Wed, 09 Dec 2020 16:35:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiOm=FN=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kn2Qk-0007Ej-Ok
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 16:35:06 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id ccf3f5bc-6f3d-43aa-bb7e-dcb444f4a150;
 Wed, 09 Dec 2020 16:34:57 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 835F71FB;
 Wed,  9 Dec 2020 08:34:57 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id D44463F68F;
 Wed,  9 Dec 2020 08:34:56 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ccf3f5bc-6f3d-43aa-bb7e-dcb444f4a150
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v3 3/7] xen/arm: create a cpuinfo structure for guest
Date: Wed,  9 Dec 2020 16:30:56 +0000
Message-Id: <33f39e7f521e6f73a0dba57a8be9fb50656e1807.1607524536.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1607524536.git.bertrand.marquis@arm.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>

Create a cpuinfo structure for guests and mask out of it the features
that Xen does not support or that we do not want to publish to guests.

Modify some values in the guest cpuinfo structure to mask features we do
not want to expose to guests (like AMU) or do not support (like SVE).

The code groups together the register modifications for a given feature,
so that in the long term a feature can easily be enabled or disabled
depending on user parameters, and other related register modifications
(like enabling/disabling HCR bits) can be added in the same place.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes in V2: Rebase
Changes in V3:
  Use current_cpu_data info instead of recalling identify_cpu

---
 xen/arch/arm/cpufeature.c        | 51 ++++++++++++++++++++++++++++++++
 xen/include/asm-arm/cpufeature.h |  2 ++
 2 files changed, 53 insertions(+)

diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
index bc7ee5ac95..7255383504 100644
--- a/xen/arch/arm/cpufeature.c
+++ b/xen/arch/arm/cpufeature.c
@@ -24,6 +24,8 @@
 
 DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
 
+struct cpuinfo_arm __read_mostly guest_cpuinfo;
+
 void update_cpu_capabilities(const struct arm_cpu_capabilities *caps,
                              const char *info)
 {
@@ -157,6 +159,55 @@ void identify_cpu(struct cpuinfo_arm *c)
 #endif
 }
 
+/*
+ * This function creates a cpuinfo structure with values modified to mask
+ * all CPU features that should not be published to guests.
+ * The resulting structure is used to provide ID register values to guests.
+ */
+static int __init create_guest_cpuinfo(void)
+{
+    /*
+     * TODO: The code is currently using only the features detected on the boot
+     * core. In the long term we should try to compute values containing only
+     * features supported by all cores.
+     */
+    guest_cpuinfo = current_cpu_data;
+
+#ifdef CONFIG_ARM_64
+    /* Disable MPAM as Xen does not support it */
+    guest_cpuinfo.pfr64.mpam = 0;
+    guest_cpuinfo.pfr64.mpam_frac = 0;
+
+    /* Disable SVE as Xen does not support it */
+    guest_cpuinfo.pfr64.sve = 0;
+    guest_cpuinfo.zfr64.bits[0] = 0;
+
+    /* Disable MTE as Xen does not support it */
+    guest_cpuinfo.pfr64.mte = 0;
+#endif
+
+    /* Disable AMU */
+#ifdef CONFIG_ARM_64
+    guest_cpuinfo.pfr64.amu = 0;
+#endif
+    guest_cpuinfo.pfr32.amu = 0;
+
+    /* Disable RAS as Xen does not support it */
+#ifdef CONFIG_ARM_64
+    guest_cpuinfo.pfr64.ras = 0;
+    guest_cpuinfo.pfr64.ras_frac = 0;
+#endif
+    guest_cpuinfo.pfr32.ras = 0;
+    guest_cpuinfo.pfr32.ras_frac = 0;
+
+    return 0;
+}
+/*
+ * This function needs to be run after all secondary CPUs are started, so
+ * that cpuinfo structures are available for all cores.
+ */
+__initcall(create_guest_cpuinfo);
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index 6cf83d775b..10b62bd324 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -283,6 +283,8 @@ extern void identify_cpu(struct cpuinfo_arm *);
 extern struct cpuinfo_arm cpu_data[];
 #define current_cpu_data cpu_data[smp_processor_id()]
 
+extern struct cpuinfo_arm guest_cpuinfo;
+
 #endif /* __ASSEMBLY__ */
 
 #endif
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 16:35:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 16:35:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48542.85910 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn2Qp-0007Tq-6b; Wed, 09 Dec 2020 16:35:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48542.85910; Wed, 09 Dec 2020 16:35:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn2Qp-0007Te-1F; Wed, 09 Dec 2020 16:35:11 +0000
Received: by outflank-mailman (input) for mailman id 48542;
 Wed, 09 Dec 2020 16:35:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiOm=FN=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kn2Qo-0007Fp-F9
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 16:35:10 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id b4d68c8c-474d-45c1-be3a-5e462c638463;
 Wed, 09 Dec 2020 16:35:01 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 108B31FB;
 Wed,  9 Dec 2020 08:35:01 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 5FE4A3F68F;
 Wed,  9 Dec 2020 08:35:00 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b4d68c8c-474d-45c1-be3a-5e462c638463
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v3 7/7] xen/arm: Activate TID3 in HCR_EL2
Date: Wed,  9 Dec 2020 16:31:00 +0000
Message-Id: <956cf336ffce24f0cabfc7a98ae855bc71d5f028.1607524536.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1607524536.git.bertrand.marquis@arm.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>

Activate the TID3 bit in the HCR register when starting a guest.
This traps all coprocessor ID registers so that we can give guests
values corresponding to what they can actually use, and mask some
features from guests even though they are supported by the underlying
hardware (like SVE or MPAM).

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes in V2: Rebase
Changes in V3: Rebase

---
 xen/arch/arm/traps.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 28d9d64558..c1a9ad6056 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -98,7 +98,7 @@ register_t get_default_hcr_flags(void)
 {
     return  (HCR_PTW|HCR_BSU_INNER|HCR_AMO|HCR_IMO|HCR_FMO|HCR_VM|
              (vwfi != NATIVE ? (HCR_TWI|HCR_TWE) : 0) |
-             HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
+             HCR_TID3|HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
 }
 
 static enum {
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 16:35:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 16:35:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48543.85922 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn2Qr-0007Xv-GT; Wed, 09 Dec 2020 16:35:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48543.85922; Wed, 09 Dec 2020 16:35:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn2Qr-0007Xh-By; Wed, 09 Dec 2020 16:35:13 +0000
Received: by outflank-mailman (input) for mailman id 48543;
 Wed, 09 Dec 2020 16:35:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiOm=FN=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kn2Qp-0007Ej-Ov
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 16:35:11 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id b11871d3-465d-40c5-a2f5-4589f0a988cb;
 Wed, 09 Dec 2020 16:34:58 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 6728C1FB;
 Wed,  9 Dec 2020 08:34:58 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id B6EBB3F68F;
 Wed,  9 Dec 2020 08:34:57 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b11871d3-465d-40c5-a2f5-4589f0a988cb
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v3 4/7] xen/arm: Add handler for ID registers on arm64
Date: Wed,  9 Dec 2020 16:30:57 +0000
Message-Id: <e991b05af11d00627709caf847c5de99f487cab0.1607524536.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1607524536.git.bertrand.marquis@arm.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>

Add vsysreg emulation for the registers trapped when the TID3 bit is set
in HCR_EL2.
The emulation returns the value stored in the guest_cpuinfo structure
for known registers and handles reserved registers as RAZ.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes in V2: Rebase
Changes in V3:
  Fix commit message
  Fix code style for GENERATE_TID3_INFO declaration
  Add handling of reserved registers as RAZ.

---
 xen/arch/arm/arm64/vsysreg.c | 53 ++++++++++++++++++++++++++++++++++++
 1 file changed, 53 insertions(+)

diff --git a/xen/arch/arm/arm64/vsysreg.c b/xen/arch/arm/arm64/vsysreg.c
index 8a85507d9d..ef7a11dbdd 100644
--- a/xen/arch/arm/arm64/vsysreg.c
+++ b/xen/arch/arm/arm64/vsysreg.c
@@ -69,6 +69,14 @@ TVM_REG(CONTEXTIDR_EL1)
         break;                                                          \
     }
 
+/* Macro to easily generate a case for ID register emulation */
+#define GENERATE_TID3_INFO(reg, field, offset)                          \
+    case HSR_SYSREG_##reg:                                              \
+    {                                                                   \
+        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr,   \
+                          1, guest_cpuinfo.field.bits[offset]);         \
+    }
+
 void do_sysreg(struct cpu_user_regs *regs,
                const union hsr hsr)
 {
@@ -259,6 +267,51 @@ void do_sysreg(struct cpu_user_regs *regs,
          */
         return handle_raz_wi(regs, regidx, hsr.sysreg.read, hsr, 1);
 
+    /*
+     * HCR_EL2.TID3
+     *
+     * This traps most of the identification registers used by a guest
+     * to discover the processor features.
+     */
+    GENERATE_TID3_INFO(ID_PFR0_EL1, pfr32, 0)
+    GENERATE_TID3_INFO(ID_PFR1_EL1, pfr32, 1)
+    GENERATE_TID3_INFO(ID_PFR2_EL1, pfr32, 2)
+    GENERATE_TID3_INFO(ID_DFR0_EL1, dbg32, 0)
+    GENERATE_TID3_INFO(ID_DFR1_EL1, dbg32, 1)
+    GENERATE_TID3_INFO(ID_AFR0_EL1, aux32, 0)
+    GENERATE_TID3_INFO(ID_MMFR0_EL1, mm32, 0)
+    GENERATE_TID3_INFO(ID_MMFR1_EL1, mm32, 1)
+    GENERATE_TID3_INFO(ID_MMFR2_EL1, mm32, 2)
+    GENERATE_TID3_INFO(ID_MMFR3_EL1, mm32, 3)
+    GENERATE_TID3_INFO(ID_MMFR4_EL1, mm32, 4)
+    GENERATE_TID3_INFO(ID_MMFR5_EL1, mm32, 5)
+    GENERATE_TID3_INFO(ID_ISAR0_EL1, isa32, 0)
+    GENERATE_TID3_INFO(ID_ISAR1_EL1, isa32, 1)
+    GENERATE_TID3_INFO(ID_ISAR2_EL1, isa32, 2)
+    GENERATE_TID3_INFO(ID_ISAR3_EL1, isa32, 3)
+    GENERATE_TID3_INFO(ID_ISAR4_EL1, isa32, 4)
+    GENERATE_TID3_INFO(ID_ISAR5_EL1, isa32, 5)
+    GENERATE_TID3_INFO(ID_ISAR6_EL1, isa32, 6)
+    GENERATE_TID3_INFO(MVFR0_EL1, mvfr, 0)
+    GENERATE_TID3_INFO(MVFR1_EL1, mvfr, 1)
+    GENERATE_TID3_INFO(MVFR2_EL1, mvfr, 2)
+    GENERATE_TID3_INFO(ID_AA64PFR0_EL1, pfr64, 0)
+    GENERATE_TID3_INFO(ID_AA64PFR1_EL1, pfr64, 1)
+    GENERATE_TID3_INFO(ID_AA64DFR0_EL1, dbg64, 0)
+    GENERATE_TID3_INFO(ID_AA64DFR1_EL1, dbg64, 1)
+    GENERATE_TID3_INFO(ID_AA64ISAR0_EL1, isa64, 0)
+    GENERATE_TID3_INFO(ID_AA64ISAR1_EL1, isa64, 1)
+    GENERATE_TID3_INFO(ID_AA64MMFR0_EL1, mm64, 0)
+    GENERATE_TID3_INFO(ID_AA64MMFR1_EL1, mm64, 1)
+    GENERATE_TID3_INFO(ID_AA64MMFR2_EL1, mm64, 2)
+    GENERATE_TID3_INFO(ID_AA64AFR0_EL1, aux64, 0)
+    GENERATE_TID3_INFO(ID_AA64AFR1_EL1, aux64, 1)
+    GENERATE_TID3_INFO(ID_AA64ZFR0_EL1, zfr64, 0)
+
+    HSR_SYSREG_TID3_RESERVED_CASE:
+        /* Handle all reserved registers as RAZ */
+        return handle_ro_raz(regs, regidx, hsr.sysreg.read, hsr, 1);
+
     /*
      * HCR_EL2.TIDCP
      *
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 16:37:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 16:37:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48585.85934 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn2TF-00087j-W1; Wed, 09 Dec 2020 16:37:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48585.85934; Wed, 09 Dec 2020 16:37:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn2TF-00087c-Sk; Wed, 09 Dec 2020 16:37:41 +0000
Received: by outflank-mailman (input) for mailman id 48585;
 Wed, 09 Dec 2020 16:37:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kn2TE-00087V-Ba
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 16:37:40 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kn2TB-0004kc-K3; Wed, 09 Dec 2020 16:37:37 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kn2TB-0001DO-CE; Wed, 09 Dec 2020 16:37:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=zq3qs2Y37oE2YD1FWHU1D6FEOZCIvFeODTkcNxNpPe0=; b=GEi5v+CU7P1Og1LNtjD5C36tJP
	CQfvxzQtU6B+2VNzXjKODp8yf7n3se3c2OoMdzQnkg8cPHpdyOerymW6oxMFE0NJdOuShuwXEeCkF
	2qhVasAvWNQp0vTjl12n5Abp+lBVcM/tPXR6fm6L0Y0HSs3v+x8ZLToOKEnYytGxwcYI=;
Subject: Re: [PATCH RFC 2/3] xen/domain: add domain hypfs directories
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: paul@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201209161618.309-1-jgross@suse.com>
 <20201209161618.309-3-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <75232058-4626-80cb-6f71-4ce5253f3ef6@xen.org>
Date: Wed, 9 Dec 2020 16:37:35 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201209161618.309-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 09/12/2020 16:16, Juergen Gross wrote:
> Add /domain/<domid> directories to hypfs. Those are completely
> dynamic, so the related hypfs access functions need to be implemented.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V3:
> - new patch
> ---
>   docs/misc/hypfs-paths.pandoc |  10 +++
>   xen/common/Makefile          |   1 +
>   xen/common/hypfs_dom.c       | 137 +++++++++++++++++++++++++++++++++++
>   3 files changed, 148 insertions(+)
>   create mode 100644 xen/common/hypfs_dom.c
> 
> diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
> index e86f7d0dbe..116642e367 100644
> --- a/docs/misc/hypfs-paths.pandoc
> +++ b/docs/misc/hypfs-paths.pandoc
> @@ -34,6 +34,7 @@ not containing any '/' character. The names "." and ".." are reserved
>   for file system internal use.
>   
>   VALUES are strings and can take the following forms (note that this represents
> +>>>>>>> patched

This seems to be a left-over of a merge.

>   only the syntax used in this document):
>   
>   * STRING -- an arbitrary 0-delimited byte string.
> @@ -191,6 +192,15 @@ The scheduling granularity of a cpupool.
>   Writing a value is allowed only for cpupools with no cpu assigned and if the
>   architecture is supporting different scheduling granularities.
>   

[...]

> +
> +static int domain_dir_read(const struct hypfs_entry *entry,
> +                           XEN_GUEST_HANDLE_PARAM(void) uaddr)
> +{
> +    int ret = 0;
> +    const struct domain *d;
> +
> +    for_each_domain ( d )

This is definitely going to be an issue if you have a lot of domains 
running, as Xen is not preemptible.

I think the first step is to make sure that HYPFS can scale without 
hogging a pCPU for a long time.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 17:40:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 17:40:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48602.85952 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn3Rn-0006Gf-1w; Wed, 09 Dec 2020 17:40:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48602.85952; Wed, 09 Dec 2020 17:40:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn3Rm-0006GY-Ue; Wed, 09 Dec 2020 17:40:14 +0000
Received: by outflank-mailman (input) for mailman id 48602;
 Wed, 09 Dec 2020 17:40:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=atyx=FN=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1kn3Rl-0006GT-Te
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 17:40:13 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5c569f61-6e31-4f57-9192-45b3fe65b19d;
 Wed, 09 Dec 2020 17:40:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5c569f61-6e31-4f57-9192-45b3fe65b19d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1607535612;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=oqB04BnwV8YKEAA5lwVIHzO7kV7aixxFQohMpXRA2wM=;
  b=Bn/Ev2gpq5BssnRDoRi1q/ZloewFlx5Ye4dnH0rihX80YPLclLPPjRRq
   b9i2olpmjATeeVEjHZQ8NIb/gizCJziMjMS5M6QTCnMC1l/O05HFnsXII
   OrxtxeSj5mkmRnXxH+9n/znBBIJRho6IYsl2GkVPn62jfrbmno7CEABJ0
   c=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: GdC42mKzGq431WnKJvGb0mIJlS1Ea7JCAdUR8TFSGJ6TnlKlbkhAb2h5kDZXboXXYKZ3Be8gND
 Ey2va32GN39YapDsbctRl4zEvLkD6iGpNylkDhV6m6tbjvhgkJpmGUFewcmqaYHo3IMfrSqM3f
 pLL49Wel5OqCU/PFmjXLWmsRHfPnGX/uYbb9VvFLzl9Io05QJWFfok8wRpq+0zG4KXvfcuB8OW
 bJf5OrkMCBZXFUK96hGysaEYB8O5QMhwpjsSSeKRhYke89WiANTgqR1rN5OC/Wmg7aM2pKQMAE
 1qs=
X-SBRS: 5.2
X-MesageID: 32880971
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,405,1599537600"; 
   d="scan'208";a="32880971"
Date: Wed, 9 Dec 2020 17:40:07 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [PATCH v3 1/8] xen: fix build when $(obj-y) consists of just
 blanks
Message-ID: <X9EL90SMyqrs9GaL@perard.uk.xensource.com>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
 <511be84d-9a13-17ae-f3d9-d6daf9c02711@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <511be84d-9a13-17ae-f3d9-d6daf9c02711@suse.com>

On Mon, Nov 23, 2020 at 04:20:52PM +0100, Jan Beulich wrote:
> This case can occur when combining empty lists
> 
> obj-y :=
> ...
> obj-y += $(empty)
> 
> or
> 
> obj-y := $(empty) $(empty)
> 
> where (only) blanks would accumulate. This was only a latent issue until
> now, but would become an active issue for Arm once lib/ gets populated
> with all respective objects going into the to be introduced lib.a.
> 
> Also address a related issue at this occasion: When an empty built_in.o
> gets created, .built_in.o.d will have its dependencies recorded. If, on
> a subsequent incremental build, an actual constituent of built_in.o
> appeared, the $(filter-out ) would leave these recorded dependencies in
> place. But of course the linker won't know what to do with C header
> files. (The apparent alternative of avoiding to pass $(c_flags) or
> $(a_flags) would not be reliable afaict, as among these flags there may
> be some affecting information conveyed via the object file to the
> linker. The linker, finding inconsistent flags across object files, may

How about using $(XEN_CFLAGS) instead of $(c_flags)? That should prevent
CC from generating the .*.o.d files while keeping the relevant flags. I
was planning to do that to avoid the issue, see:
https://lore.kernel.org/xen-devel/20200421161208.2429539-10-anthony.perard@citrix.com

> then error out.) Using just $(obj-y) won't work either: It breaks when
> the same object file is listed more than once.

Do we need to worry about an object file being listed twice?
Wouldn't that be a mistake?
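(For reference, the blank accumulation Jan describes can be reproduced
with a minimal standalone Makefile outside the Xen tree -- the file name
and layout below are hypothetical, only the `obj-y += $(empty)` pattern
is from the patch description. GNU make >= 3.82 is assumed for
.RECIPEPREFIX.)

```shell
# Appending $(empty) leaves a lone blank in $(obj-y): the variable is no
# longer literally empty, so an ifeq-style emptiness check misfires.
# $(strip) normalises it back to the empty string.
cat > /tmp/blanks.mk <<'EOF'
.RECIPEPREFIX = >
empty :=
obj-y :=
obj-y += $(empty)

all:
>@printf '[%s]\n' '$(obj-y)'
>@printf '[%s]\n' '$(strip $(obj-y))'
EOF
make -s -f /tmp/blanks.mk
# prints "[ ]" (a lone blank) then "[]" (after $(strip))
```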

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 18:09:07 2020
Subject: Re: dom0 PV looping on search_pre_exception_table()
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: <xen-devel@lists.xenproject.org>
References: <20201208175738.GA3390@antioche.eu.org>
 <e73cc71d-c1a6-87c8-1b82-5d70d4f52eaa@citrix.com>
 <20201209101512.GA1299@antioche.eu.org>
 <3f7e50bb-24ad-1e32-9ea1-ba87007d3796@citrix.com>
 <20201209135908.GA4269@antioche.eu.org>
 <c612616a-3fcd-be93-7594-20c0c3b71b7a@citrix.com>
 <20201209154431.GA4913@antioche.eu.org>
 <52e1b10d-75d4-63ac-f91e-cb8f0dcca493@citrix.com>
 <20201209163049.GA6158@antioche.eu.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <30a71c9d-3eff-3727-9c61-e387b5bccc95@citrix.com>
Date: Wed, 9 Dec 2020 18:08:53 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201209163049.GA6158@antioche.eu.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB

On 09/12/2020 16:30, Manuel Bouyer wrote:
> On Wed, Dec 09, 2020 at 04:00:02PM +0000, Andrew Cooper wrote:
>> [...]
>>>> I wonder if the LDT is set up correctly.
>>> I guess it is, otherwise it wouldn't boot with a Xen 4.13 kernel, isn't it ?
>> Well - you said you always saw it once on 4.13, which clearly shows that
>> something was wonky, but it managed to unblock itself.
>>
>>>> How about this incremental delta?
>>> Here's the output
>>> (XEN) IRET fault: #PF[0000]                                                    
>>> (XEN) %cr2 ffff820000010040, LDT base ffffc4800000a000, limit 0057             
>>> (XEN) *** pv_map_ldt_shadow_page(0x40) failed                                  
>>> (XEN) IRET fault: #PF[0000]                                                    
>>> (XEN) %cr2 ffff820000010040, LDT base ffffc4800000a000, limit 0057             
>>> (XEN) *** pv_map_ldt_shadow_page(0x40) failed                                  
>>> (XEN) IRET fault: #PF[0000]                                                 
>> Ok, so the promotion definitely fails, but we don't get as far as
>> inspecting the content of the LDT frame.  This probably means it failed
>> to change the page type, which probably means there are still
>> outstanding writeable references.
>>
>> I'm expecting the final printk to be the one which triggers.
> It's not. 
> Here's the output:
> (XEN) IRET fault: #PF[0000]
> (XEN) %cr2 ffff820000010040, LDT base ffffbd000000a000, limit 0057
> (XEN) *** LDT: gl1e 0000000000000000 not present
> (XEN) *** pv_map_ldt_shadow_page(0x40) failed
> (XEN) IRET fault: #PF[0000]
> (XEN) %cr2 ffff820000010040, LDT base ffffbd000000a000, limit 0057
> (XEN) *** LDT: gl1e 0000000000000000 not present
> (XEN) *** pv_map_ldt_shadow_page(0x40) failed

Ok.  So the mapping registered for the LDT is not yet present.  Xen
should be raising #PF with the guest, and would be in every case other
than the weird context on IRET, where we've confused bad guest state
with bad hypervisor state.

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 3ac07a84c3..35c24ed668 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1235,10 +1235,6 @@ static int handle_ldt_mapping_fault(unsigned int offset,
     {
         printk(XENLOG_ERR "*** pv_map_ldt_shadow_page(%#x) failed\n", offset);
 
-        /* In hypervisor mode? Leave it to the #PF handler to fix up. */
-        if ( !guest_mode(regs) )
-            return 0;
-
         /* Access would have become non-canonical? Pass #GP[sel] back. */
         if ( unlikely(!is_canonical_address(curr->arch.pv.ldt_base + offset)) )
         {


This bodge ought to cause a #PF to be delivered suitably, but may make
other corner cases not quite work correctly, so isn't a clean fix.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 18:15:33 2020
Date: Wed, 9 Dec 2020 18:15:14 +0000
From: Mark Rutland <mark.rutland@arm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
	x86@kernel.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, luto@kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>, Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v2 05/12] x86: rework arch_local_irq_restore() to not use
 popf
Message-ID: <20201209181514.GA14235@C02TD0UTHF1T.local>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-6-jgross@suse.com>
 <20201120115943.GD3021@hirez.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201120115943.GD3021@hirez.programming.kicks-ass.net>

On Fri, Nov 20, 2020 at 12:59:43PM +0100, Peter Zijlstra wrote:
> On Fri, Nov 20, 2020 at 12:46:23PM +0100, Juergen Gross wrote:
> > +static __always_inline void arch_local_irq_restore(unsigned long flags)
> > +{
> > +	if (!arch_irqs_disabled_flags(flags))
> > +		arch_local_irq_enable();
> > +}
> 
> If someone were to write horrible code like:
> 
> 	local_irq_disable();
> 	local_irq_save(flags);
> 	local_irq_enable();
> 	local_irq_restore(flags);
> 
> we'd be up some creek without a paddle... now I don't _think_ we have
> genius code like that, but I'd feel saver if we can haz an assertion in
> there somewhere...

I've cobbled that together locally (I'll post it momentarily) and gave it a
spin on both arm64 and x86, whereupon it exploded at boot time on x86.

In arch/x86/kernel/apic/io_apic.c's timer_irq_works() we do:

	local_irq_save(flags);
	local_irq_enable();

	[ trigger an IRQ here ]

	local_irq_restore(flags);

... and in check_timer() we call that a number of times after either a
local_irq_save() or local_irq_disable(), eventually trailing with a
local_irq_disable() that will balance things up before calling
local_irq_restore().

I guess that timer_irq_works() should instead do:

	local_irq_save(flags);
	local_irq_enable();
	...
	local_irq_disable();
	local_irq_restore(flags);

... assuming we consider that legitimate?

With that, and all the calls to local_irq_disable() in check_timer() removed
(diff below) I get a clean boot under QEMU with the assertion hacked in and
DEBUG_LOCKDEP enabled.

Thanks
Mark.

---->8----
diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
index 7b3c7e0d4a09..e79e665a3aeb 100644
--- a/arch/x86/kernel/apic/io_apic.c
+++ b/arch/x86/kernel/apic/io_apic.c
@@ -1631,6 +1631,7 @@ static int __init timer_irq_works(void)
        else
                delay_without_tsc();
 
+       local_irq_disable();
        local_irq_restore(flags);
 
        /*
@@ -2191,7 +2192,6 @@ static inline void __init check_timer(void)
                        goto out;
                }
                panic_if_irq_remap("timer doesn't work through Interrupt-remapped IO-APIC");
-               local_irq_disable();
                clear_IO_APIC_pin(apic1, pin1);
                if (!no_pin1)
                        apic_printk(APIC_QUIET, KERN_ERR "..MP-BIOS bug: "
@@ -2215,7 +2215,6 @@ static inline void __init check_timer(void)
                /*
                 * Cleanup, just in case ...
                 */
-               local_irq_disable();
                legacy_pic->mask(0);
                clear_IO_APIC_pin(apic2, pin2);
                apic_printk(APIC_QUIET, KERN_INFO "....... failed.\n");
@@ -2232,7 +2231,6 @@ static inline void __init check_timer(void)
                apic_printk(APIC_QUIET, KERN_INFO "..... works.\n");
                goto out;
        }
-       local_irq_disable();
        legacy_pic->mask(0);
        apic_write(APIC_LVT0, APIC_LVT_MASKED | APIC_DM_FIXED | cfg->vector);
        apic_printk(APIC_QUIET, KERN_INFO "..... failed.\n");
@@ -2251,7 +2249,6 @@ static inline void __init check_timer(void)
                apic_printk(APIC_QUIET, KERN_INFO "..... works.\n");
                goto out;
        }
-       local_irq_disable();
        apic_printk(APIC_QUIET, KERN_INFO "..... failed :(.\n");
        if (apic_is_x2apic_enabled())
                apic_printk(APIC_QUIET, KERN_INFO


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 18:16:30 2020
Date: Wed, 9 Dec 2020 10:16:25 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Michal Orzel <michal.orzel@arm.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Wei Chen <Wei.Chen@arm.com>
Subject: Re: [PATCH] xen/arm: Add workaround for Cortex-A53 erratum #845719
In-Reply-To: <e974547f-ddf2-a7d1-43a1-cdc81c874823@arm.com>
Message-ID: <alpine.DEB.2.21.2012091015540.20986@sstabellini-ThinkPad-T480s>
References: <20201208072327.11890-1-michal.orzel@arm.com> <d286241c-fd3b-8506-37e5-0ddcdaae97be@xen.org> <5D1B5771-A6B3-4F5E-81A1-864DBC8787B4@arm.com> <bf45e0f4-2de7-d1db-4732-342937bf61e7@xen.org> <alpine.DEB.2.21.2012081730020.20986@sstabellini-ThinkPad-T480s>
 <e974547f-ddf2-a7d1-43a1-cdc81c874823@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 9 Dec 2020, Michal Orzel wrote:
> On 09.12.2020 02:34, Stefano Stabellini wrote:
> > On Tue, 8 Dec 2020, Julien Grall wrote:
> >> On 08/12/2020 14:38, Bertrand Marquis wrote:
> >>> Hi Julien,
> >>>
> >>>> On 8 Dec 2020, at 09:47, Julien Grall <julien@xen.org> wrote:
> >>>>
> >>>> Hi,
> >>>>
> >>>> On 08/12/2020 07:23, Michal Orzel wrote:
> >>>>> When executing in aarch32 state at EL0, a load at EL0 from a
> >>>>> virtual address that matches the bottom 32 bits of the virtual address
> >>>>> used by a recent load at (aarch64) EL1 might return incorrect data.
> >>>>> The workaround is to insert a write of the contextidr_el1 register
> >>>>> on exception return to an aarch32 guest.
> >>>>
> >>>> I am a bit confused with this comment. In the previous paragraph, you are
> >>>> suggesting that the problem is an interaction between EL1 AArch64 and EL0
> >>>> AArch32. But here you seem to imply the issue only happen when running a
> >>>> AArch32 guest.
> >>>>
> >>>> Can you clarify it?
> >>>
> >>> This can happen when switching from an aarch64 guest to an aarch32 guest so
> >>> not only when there is interaction.
> > 
> > Just to confirm: it cannot happen when switching from aarch64 *EL2* to
> > aarch32 EL0/1, right?  Because that happens all the time in Xen.
> > 
> > 
> No it cannot. It can only happen when switching from aarch64 EL1 to aarch32 EL0.

Excellent, thanks for checking


> >> Right, but the context switch will write to CONTEXTIDR_EL1. So this case
> >> should already be handled.
> >>
> >> Xen will never switch from AArch64 EL1 to AArch32 EL0 without a context switch
> >> (the inverse can happen if we inject an exception to the guest).
> >>
> >> Reading the Cortex-A53 SDEN, it sounds like this is an OS and not Hypervisor
> >> problem. In fact, Linux only seems to workaround it when switching in the OS
> >> side rather than the hypervisor.
> >>
> >> Therefore, I am not sure to understand why we need to workaroud it in Xen.
> > 
> > It looks like Julien is right in regards to the "aarch64 EL1 to aarch32
> > EL0" issue.
> > 
> Yes I agree. I missed the fact that there is a write to CONTEXTIDR_EL1
> in 'ctxt_switch_to'. Let's abandon this.

Job done :-)


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 18:23:57 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157348-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157348: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=272a1db63a09087ce3da4cf44ec7b758611ff1ed
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 09 Dec 2020 18:23:51 +0000

flight 157348 ovmf real [real]
flight 157350 ovmf real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157348/
http://logs.test-lab.xenproject.org/osstest/logs/157350/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 272a1db63a09087ce3da4cf44ec7b758611ff1ed
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    0 days
Testing same since   157348  2020-12-09 15:39:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Pierre Gondois <Pierre.Gondois@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 272a1db63a09087ce3da4cf44ec7b758611ff1ed
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Nov 13 11:31:01 2020 +0000

    ArmPlatformPkg: Fix cspell reported spelling/wording
    
    The edk2 CI runs the "cspell" spell checker tool. Some words
    are not recognized by the tool, triggering errors.
    This patch modifies some spelling/wording detected by cspell.
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit 061cbbc1115eb7360f2c7627d53d13e35d63cbe3
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Nov 20 10:01:13 2020 +0000

    ArmPlatformPkg: Fix Ecc error 8001 in PrePi
    
    This patch fixes the following Ecc reported error:
    Only capital letters are allowed to be used for #define
    declarations
    
    The "SerialPrint" macro is defined for the PrePi module
    residing in the ArmPlatformPkg. It is never used in the module.
    The macro is thus removed.
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit 2dfd81aaf50ca2bd1e2d33ed5687620de90810ce
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Nov 6 09:47:47 2020 +0000

    ArmPlatformPkg: Fix Ecc error 10006 in ArmPlatformPkg.dsc
    
    This patch fixes the following Ecc reported error:
    There should be no unnecessary inclusion of library
    classes in the INF file
    
    This comes with the additional information:
    "The Library Class [TimeBaseLib] is not used
    in any platform"
    "The Library Class [PL011UartClockLib] is not used
    in any platform"
    "The Library Class [PL011UartLib] is not used
    in any platform"
    
    Indeed, the PL011SerialPortLib module requires the
    PL011UartClockLib and PL011UartLib libraries.
    The PL031RealTimeClockLib module requires the TimeBaseLib
    library.
    ArmPlatformPkg/ArmPlatformPkg.dsc builds the two modules,
    but doesn't build the required libraries. This patch adds
    the missing libraries to the [LibraryClasses.common] section.
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit 42bec8c8104c9db4891dfd1b208032c9c413d861
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 15:33:12 2020 +0100

    ArmPlatformPkg: Fix Ecc error 10014 in SP805WatchdogDxe
    
    This patch fixes the following Ecc reported error:
    No used module files found
    
    The source file
    [ArmPlatformPkg/Drivers/SP805WatchdogDxe/SP805Watchdog.h]
    is existing in module directory but it is not described
    in INF file.
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit 2e0cbfcbed96505953ef09fcfb72d4ea83cc8df2
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 15:32:42 2020 +0100

    ArmPlatformPkg: Fix Ecc error 10014 in PL061GpioDxe
    
    This patch fixes the following Ecc reported error:
    No used module files found
    
    The source file
    [ArmPlatformPkg/Drivers/PL061GpioDxe/PL061Gpio.h]
    is existing in module directory but it is not described
    in INF file.
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit a36a0f1d81a2502a922617cf99be0bb81de2f57a
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 15:32:26 2020 +0100

    ArmPlatformPkg: Fix Ecc error 10014 in LcdGraphicsOutputDxe
    
    This patch fixes the following Ecc reported error:
    No used module files found
    
    The source file
    [ArmPlatformPkg/Drivers/LcdGraphicsOutputDxe/LcdGraphicsOutputDxe.h]
    is existing in module directory but it is not described
    in INF file.
    
    Files in [Sources.common] are also alphabetically re-ordered.
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit c5d970a01e76c1a20f6bb009b32e479ad2444548
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 15:18:04 2020 +0100

    ArmPlatformPkg: Fix Ecc error 10016 in LcdPlatformNullLib
    
    This patch fixes the following Ecc reported error:
    Module file has FILE_GUID collision with other
    module file
    
    The two .inf files with clashing GUID are:
    edk2\ArmPlatformPkg\PrePeiCore\PrePeiCoreMPCore.inf
    edk2\ArmPlatformPkg\Library\LcdPlatformNullLib\LcdPlatformNullLib.inf
    
    The PrePeiCoreMPCore module has been imported in 2011 and the
    LcdPlatformNullLib module has been created in 2017. The
    PrePeiCoreMPCore has the precedence.
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit 746dda63b2d612a2ad9e0b4c05722920586d2e60
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 14:37:14 2020 +0100

    ArmPlatformPkg: Fix Ecc error 10016 in PrePi
    
    This patch fixes the following Ecc reported error:
    Module file has FILE_GUID collision with other
    module file
    
    The two .inf files with clashing GUID are:
    edk2\ArmPlatformPkg\PrePi\PeiUniCore.inf
    edk2\ArmPlatformPkg\PrePi\PeiMPCore.inf
    
    Both files seem to have been imported from the previous
    svn repository at the same time.
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit 28978df0bdafce1d26ff337fd67ee6c3a5b3876e
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 14:36:19 2020 +0100

    ArmPlatformPkg: Fix Ecc error 5007 in PL031RealTimeClockLib
    
    This patch fixes the following Ecc reported error:
    There should be no initialization of a variable as
    part of its declaration
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>
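
[Editor's note: a minimal sketch of the Ecc 5007 rule, using a hypothetical
helper rather than code from the patch — the rule forbids initializing a
variable in its declaration, so declaration and assignment become separate
statements:

```c
#include <stdint.h>

typedef uint32_t  UINT32;

/*
  Ecc 5007 flags initialization inside a declaration, e.g.:

      UINT32  Result = Input * 2;

  The compliant form declares first, then assigns.
  DoubleValue is a hypothetical helper, not taken from the patch.
*/
UINT32
DoubleValue (
  UINT32  Input
  )
{
  UINT32  Result;       /* declaration only */

  Result = Input * 2;   /* assignment as a separate statement */
  return Result;
}
```
]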

commit 1485e8bbc86e9a7e1954cfe5697fbd45d8e3b04e
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 14:36:01 2020 +0100

    ArmPlatformPkg: Fix Ecc error 5007 in PL061GpioDxe
    
    This patch fixes the following Ecc reported error:
    There should be no initialization of a variable as
    part of its declaration
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit 4c7e107810cacd1dbd4c6f7d6d4d22e3de2f8db1
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 14:35:36 2020 +0100

    ArmPlatformPkg: Fix Ecc error 5007 in NorFlashDxe
    
    This patch fixes the following Ecc reported error:
    There should be no initialization of a variable as
    part of its declaration
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit eb97f13839fd64ce3e4ff9dd39ea9950db48207d
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 14:35:07 2020 +0100

    ArmPlatformPkg: Fix Ecc error 5007 in LcdGraphicsOutputDxe
    
    This patch fixes the following Ecc reported error:
    There should be no initialization of a variable as
    part of its declaration
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit d315bd2286cde306f1ef5256026038e610505cca
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 14:32:40 2020 +0100

    ArmPlatformPkg: Fix Ecc error 3002 in PL061GpioDxe
    
    This patch fixes the following Ecc reported error:
    Non-Boolean comparisons should use a compare operator
    (==, !=, >, <, >=, <=)
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>
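
[Editor's note: a minimal sketch of the Ecc 3002 rule, using a hypothetical
helper rather than code from the patch — a non-Boolean value must not be used
as an implicit truth test; it has to be compared explicitly:

```c
#include <stdint.h>

typedef uint32_t  UINT32;
typedef int       BOOLEAN;

/*
  Ecc 3002 flags implicit truth tests of non-Boolean values, e.g.:

      if (Data & Mask) { ... }

  The compliant form compares explicitly against 0.
  BitIsSet is a hypothetical helper, not taken from the patch.
*/
BOOLEAN
BitIsSet (
  UINT32  Data,
  UINT32  Mask
  )
{
  return (BOOLEAN)((Data & Mask) != 0);
}
```
]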

commit ee78edceca89057ab9854f7e5070391a8229ece4
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 14:31:50 2020 +0100

    ArmPlatformPkg: Fix Ecc error 3002 in PL011UartLib
    
    This patch fixes the following Ecc reported error:
    Non-Boolean comparisons should use a compare operator
    (==, !=, >, <, >=, <=)
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit dd917bae85396055ff5d6ea760bff3702d154101
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 13:31:40 2020 +0100

    ArmPlatformPkg: Fix Ecc error 3001 in NorFlashDxe
    
    This patch fixes the following Ecc reported error:
    Boolean values and variable type BOOLEAN should not use
    explicit comparisons to TRUE or FALSE
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>
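
[Editor's note: Ecc 3001 is the mirror image of 3002 — values that already are
BOOLEAN must be tested directly, never compared against TRUE or FALSE. A
minimal sketch with a hypothetical helper, not code from the patch:

```c
typedef int  BOOLEAN;
#define TRUE   1
#define FALSE  0

/*
  Ecc 3001 flags explicit comparisons of BOOLEAN values, e.g.:

      if (Flag == TRUE) { ... }

  The compliant form tests the value directly.
  SelectPath is a hypothetical helper, not taken from the patch.
*/
int
SelectPath (
  BOOLEAN  Flag
  )
{
  if (Flag) {           /* instead of: if (Flag == TRUE) */
    return 1;
  }
  return 0;
}
```
]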


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 18:34:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 18:34:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48635.86015 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn4I5-0003Ny-5M; Wed, 09 Dec 2020 18:34:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48635.86015; Wed, 09 Dec 2020 18:34:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn4I5-0003Nr-2D; Wed, 09 Dec 2020 18:34:17 +0000
Received: by outflank-mailman (input) for mailman id 48635;
 Wed, 09 Dec 2020 18:34:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kn4I3-0003Nj-U2; Wed, 09 Dec 2020 18:34:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kn4I3-0007Gs-Jr; Wed, 09 Dec 2020 18:34:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kn4I3-0006IO-A3; Wed, 09 Dec 2020 18:34:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kn4I3-0002zw-9Z; Wed, 09 Dec 2020 18:34:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jHaRZxADiiHxohzEpwpW57NwBtyD4J+FX8qVjoiWAuo=; b=Uwq2okIzb6uKID/cJhiXAVRnUG
	ikIx3YqwJSbaSq8PGl8YpO6asxxTEZt1m61J0kaMcKmZRIUU3zc0lRieWUCGTWr5gZwStI1XF0eKD
	KMwwDdN6ywZa+3nsTNn+83Y9IdWMaZrFwiikeG2b50OzSMSBDdY3iUmZic8O8qG/VsOQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157341-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157341: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=a68a0262abdaa251e12c53715f48e698a18ef402
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 09 Dec 2020 18:34:15 +0000

flight 157341 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157341/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl          12 debian-install           fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                a68a0262abdaa251e12c53715f48e698a18ef402
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  130 days
Failing since        152366  2020-08-01 20:49:34 Z  129 days  223 attempts
Testing same since   157341  2020-12-09 06:56:20 Z    0 days    1 attempts

------------------------------------------------------------
3652 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 700866 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 18:37:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 18:37:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48643.86033 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn4LQ-0003YP-Nz; Wed, 09 Dec 2020 18:37:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48643.86033; Wed, 09 Dec 2020 18:37:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn4LQ-0003YI-Km; Wed, 09 Dec 2020 18:37:44 +0000
Received: by outflank-mailman (input) for mailman id 48643;
 Wed, 09 Dec 2020 18:37:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ynM+=FN=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kn4LP-0003YD-Ds
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 18:37:43 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7e1a::605])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4db0852c-cd14-4aec-a625-9c928c52c284;
 Wed, 09 Dec 2020 18:37:40 +0000 (UTC)
Received: from DB6P191CA0011.EURP191.PROD.OUTLOOK.COM (2603:10a6:6:28::21) by
 AM6PR08MB3270.eurprd08.prod.outlook.com (2603:10a6:209:50::15) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12; Wed, 9 Dec 2020 18:37:38 +0000
Received: from DB5EUR03FT030.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:28:cafe::29) by DB6P191CA0011.outlook.office365.com
 (2603:10a6:6:28::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.12 via Frontend
 Transport; Wed, 9 Dec 2020 18:37:38 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT030.mail.protection.outlook.com (10.152.20.144) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12 via Frontend Transport; Wed, 9 Dec 2020 18:37:38 +0000
Received: ("Tessian outbound 39646a0fd094:v71");
 Wed, 09 Dec 2020 18:37:38 +0000
Received: from c212a79a3008.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 978C5B05-98E4-4725-9ECA-A72C3EDAE846.1; 
 Wed, 09 Dec 2020 18:37:10 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c212a79a3008.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 09 Dec 2020 18:37:10 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DBBPR08MB5932.eurprd08.prod.outlook.com (2603:10a6:10:207::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.19; Wed, 9 Dec
 2020 18:37:08 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::11cb:318b:f0a0:f125]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::11cb:318b:f0a0:f125%5]) with mapi id 15.20.3654.013; Wed, 9 Dec 2020
 18:37:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4db0852c-cd14-4aec-a625-9c928c52c284
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ajGJisMwZEtO0TjadzZL+EJYvuJe7ZlzAnjlmFp6Mh8=;
 b=fURaqMvyISW8ZHIa+7IM+NNKFvix76ESfvQ5Y+CDp/8SIigjodV9Bzs2h5sYcUnBHjDGlGAOF0PKs5OkMb9VBssZumxZxhgTuFxZ29xq8OkUQ8W49urcUhMyULDuaCWi4KO8DYZkgKh6OqonM771ag59wOnNuYzMldLetPYwQx8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: b60c993a5d4aa2cd
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NywbdgJcJ7ncXVP+Dgj1ShrT9XmD6sh6NV1Tp61CMV5Uh8UQhHvPA3t3x/+y+Jg7RIryIWU0XLR/leptYQ+6nAI6+2/Xbna+s1y6rVPDBh4xLeqX7lM9BZ0XDxDV/3KKlyKifxMWM64jgC27Ioww9YTBt0isPxoCcKpVDsSeboPqsz/v9jcmdOUa0vRsSZ6bT4kLy7Y8GkInpf8QyHkvgKi4kgEvxu/9rKvhFquK21suKhAbi0rymztcZwgWev2O1AtzllbGkeRQdP9ESY6zsM2SX8KRfUMiAXDcPyRg+Lcd8LZ4a34IRM5Am3FD5prhgHhktta+mdoBy/87xc82EA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ajGJisMwZEtO0TjadzZL+EJYvuJe7ZlzAnjlmFp6Mh8=;
 b=OHq+zC2+tUFfQharKyIx5hbYiLLC4Uo1zffx2wTA9JWkcQANZYS/6mwnwI8i2kDJINpYyLiwC/nbaWyp4CgUdUU/bob3cE/V3g8+5wRgAluDQ17lH7oDsee8ibUMgFxmY1Rhp6QIjByJoBjfsfJMNp8KRVbAOdGDeqI5DqUE0e4cfcUe2G58BOF7eJs9CYegCJIWlH/TmCN7+tLGAAQtMblShV0sqVtwG2XPsoSum7JgcuFrI+KL86RjQH9omqldNFDHBEu5tMKRquL63MSknnPDH7ymYFtFhQZCX35HpoelqIrdLMZejgmAKBY+tiRS4zLXroQe9y6CorjORwLoVQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>, Paul Durrant
	<paul@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 8/8] xen/arm: Add support for SMMUv3 driver
Thread-Topic: [PATCH v2 8/8] xen/arm: Add support for SMMUv3 driver
Thread-Index:
 AQHWxBX/ttp8YMNqT0KunkbXuSldmKnkBpwAgAeVnoCAAFuAgIAAEZOAgAGYzQCAAGhiAIAAbqQAgAAXOgCAAJwRgA==
Date: Wed, 9 Dec 2020 18:37:08 +0000
Message-ID: <FEBC2AAF-C15B-4CEC-9FCC-FEA94714421D@arm.com>
References: <cover.1606406359.git.rahul.singh@arm.com>
 <de2101687020d18172a2b153f8977a5116d0cd66.1606406359.git.rahul.singh@arm.com>
 <a67bb114-a4a9-651a-338b-123b350ac4b3@xen.org>
 <9C890E87-D438-4232-8647-8EC64FF32C42@arm.com>
 <bb6a710e-4a7a-5db2-fece-b5845e06d092@xen.org>
 <9F9A955B-815C-4771-9EC0-073E9CF3E995@arm.com>
 <156ab0f5-e46d-6b96-7ff1-28ad3a748950@xen.org>
 <alpine.DEB.2.21.2012081711200.20986@sstabellini-ThinkPad-T480s>
 <BE0F99C1-AAA7-4EC7-A800-7CDEA24177DF@arm.com>
 <47bf1e5d-9e9e-d455-f6d8-5ddec0367ef2@xen.org>
In-Reply-To: <47bf1e5d-9e9e-d455-f6d8-5ddec0367ef2@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 3af195a1-94c4-494a-bd31-08d89c718117
x-ms-traffictypediagnostic: DBBPR08MB5932:|AM6PR08MB3270:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB3270433F1BDE71DB2E7A3950FCCC0@AM6PR08MB3270.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <08CE1E9A660C024B966B0AF893954D52@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB5932
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT030.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	4531b9dd-8c67-45aa-958b-08d89c716f28
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Dec 2020 18:37:38.3914
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3af195a1-94c4-494a-bd31-08d89c718117
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT030.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3270

Hello Julien,

> On 9 Dec 2020, at 9:18 am, Julien Grall <julien@xen.org> wrote:
> 
> On 09/12/2020 07:55, Bertrand Marquis wrote:
>> Hi,
> 
> Hi,
> 
>> I also agree with the issue on the spinlock but we have no equivalent of something
>> looking like a mutex for now in Xen so this would require some major redesign and
>> will take us far from the Linux driver.
> 
> I agree that keeping the Xen implementation close to the Linux one is important.
> However, I view this as a secondary goal. The primary goal is to have a safe and
> secure driver.
> 
> If it means introducing a new set of locks or diverging from Linux, then so be it.
> 
>> I would suggest to add a comment before this part of the code with a “TODO” so that
>> it is clear inside the code.
>> We could also add some comment in Kconfig to mention this possible “faulty” behaviour.
> 
> I think it would be enough to write in the Kconfig that the driver is “experimental
> and should not be used in production”.

Ok. I will add it in Kconfig.

> Ideally, I would like a list of known issues for the driver (could be in the cover
> letter and/or at the top of the source file) so we can track what's missing to get
> it supported.

I will list all known issues and shortcomings of the driver at the top of the
source file.

Regards,
Rahul

> Cheers,
> 
> -- 
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 18:48:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 18:48:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48650.86044 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn4V8-0004d0-S3; Wed, 09 Dec 2020 18:47:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48650.86044; Wed, 09 Dec 2020 18:47:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn4V8-0004ct-Oc; Wed, 09 Dec 2020 18:47:46 +0000
Received: by outflank-mailman (input) for mailman id 48650;
 Wed, 09 Dec 2020 18:47:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/xMB=FN=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kn4V6-0004co-HU
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 18:47:44 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 875d951d-0969-4744-bc2a-7a909b5ae6d9;
 Wed, 09 Dec 2020 18:47:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 875d951d-0969-4744-bc2a-7a909b5ae6d9
Date: Wed, 9 Dec 2020 10:47:40 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607539662;
	bh=S77Xdr4GUfQdQuyEVOr/V1TbtP9CWgp2ztUbmJ1zloc=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=fdY1ulKlvtZdO1Aa591qgJm2XFqNWA2tqkUWVFkq6Fjn4/s6ca1hJwtWpsdPpFaYO
	 U28HmI71yWVHBKUkknu+wlh+cBw/8wHPMN1UIFUy6A2udSh3tBEuyJ3NIpRa/9nhdw
	 1mjsAyFeozZFAKWhQ9gbSjgEeUvO6wPslnbT8/VQrHC5vSDyZFME4Z+NOOqKQB6ifX
	 fci7fprG0KSWdUxgBttVIAshmVdl6GOAzTKlpRNeVmOYtgT6VWM4cKzMtuhB/pOe86
	 ettcXk/Lby9juXngmIOGHBmo/m9wUUnNPzO7e7ynG3RgKWmHK46olz5KDJTZ1Zn8zT
	 bWl3lydI6V/Qw==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Wei Liu <wl@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, famzheng@amazon.com, cardoe@cardoe.com, 
    Bertrand.Marquis@arm.com, julien@xen.org, andrew.cooper3@citrix.com
Subject: Re: [PATCH v6 00/25] xl / libxl: named PCI pass-through devices
In-Reply-To: <20201209161433.d7xpx5zwtikd3fmk@liuwe-devbox-debian-v2>
Message-ID: <alpine.DEB.2.21.2012091046400.20986@sstabellini-ThinkPad-T480s>
References: <160746448732.12203.10647684023172140005@600e7e483b3a> <alpine.DEB.2.21.2012081702420.20986@sstabellini-ThinkPad-T480s> <20201209161433.d7xpx5zwtikd3fmk@liuwe-devbox-debian-v2>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 9 Dec 2020, Wei Liu wrote:
> On Tue, Dec 08, 2020 at 05:02:50PM -0800, Stefano Stabellini wrote:
> > The pipeline failed because the "fedora-gcc-debug" build failed with a
> > timeout: 
> > 
> > ERROR: Job failed: execution took longer than 1h0m0s seconds
> > 
> > given that all the other jobs passed (including the other Fedora job), I
> > take it that this failed because the gitlab-ci x86 runners were overloaded?
> > 
> 
> The CI system is configured to auto-scale as the number of jobs grows.
> The limit is set to 10 (VMs) at the moment.
> 
> https://gitlab.com/xen-project/xen-gitlab-ci/-/commit/832bfd72ea3a227283bf3df88b418a9aae95a5a4
> 
> I haven't looked at the log, but the number of build jobs looks rather
> larger than when we got started. Maybe the limit of 10 is no longer
> high enough?

Interesting! That's only for the x86 runners, not the ARM runners (we
only have 1 ARM64 runner), is that right?

If we could increase the number of VMs for x86, I think that would be
helpful because we have so many x86 jobs.


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 18:54:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 18:54:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48656.86057 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn4bf-0005dV-J8; Wed, 09 Dec 2020 18:54:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48656.86057; Wed, 09 Dec 2020 18:54:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn4bf-0005dO-Fy; Wed, 09 Dec 2020 18:54:31 +0000
Received: by outflank-mailman (input) for mailman id 48656;
 Wed, 09 Dec 2020 18:54:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/1wO=FN=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1kn4bd-0005dJ-TQ
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 18:54:30 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9d4e0494-9fb1-4078-a985-d76a7190c2f8;
 Wed, 09 Dec 2020 18:54:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9d4e0494-9fb1-4078-a985-d76a7190c2f8
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607540067;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=7Uu7pkTsDY6PR0uRw8JT+RVlLSz62Q8pgzg6y5Vv+F8=;
	b=ZKyRHF02e6C/n1mUBKn5ZR+XVl6dDjKmAKKg5N6crtAg3HJ2EOJsCCYkTM8vHNeLb8e1EE
	woT+FRsq9wRBfKR1H/jU/h5ITtu8upRL+qht0KlSG2FPcGkAH50CYTjWzYIsGM1MBx+iWY
	ZvsmEPNQYKrYuTvetq+cnT9AM+zSzfv26zNUDqMcYYy5Az+ooZ8kxUTRgcQxFQ6FrUZZA7
	tY++O41Ru5wCUH11xGUu3nNNoe1jAmst2c8vb9fs1ROv5DpuiAJAfEGvFi7t2m4iBw9m+F
	U7kRP7HRP7vVZzw7D+fjTVnk7dQ2NJnIZPMHxtpdBE5YJPwHXnrxOYNyE+5jUA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607540067;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=7Uu7pkTsDY6PR0uRw8JT+RVlLSz62Q8pgzg6y5Vv+F8=;
	b=c0VvWr+XQ1eFIDzaipp+m6QkbNpqK8MTFWzd5nGNoue3bmiiBo/fQBptKc2vv17Tly9FpG
	ckQTiaR3Qc/ZrSAA==
To: Mark Rutland <mark.rutland@arm.com>, Peter Zijlstra <peterz@infradead.org>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org, x86@kernel.org, linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org, luto@kernel.org, Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>, Deep Shah <sdeep@vmware.com>, "VMware\, Inc." <pv-drivers@vmware.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>, Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v2 05/12] x86: rework arch_local_irq_restore() to not use popf
In-Reply-To: <20201209181514.GA14235@C02TD0UTHF1T.local>
References: <20201120114630.13552-1-jgross@suse.com> <20201120114630.13552-6-jgross@suse.com> <20201120115943.GD3021@hirez.programming.kicks-ass.net> <20201209181514.GA14235@C02TD0UTHF1T.local>
Date: Wed, 09 Dec 2020 19:54:26 +0100
Message-ID: <87tusuzu71.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain

On Wed, Dec 09 2020 at 18:15, Mark Rutland wrote:
> In arch/x86/kernel/apic/io_apic.c's timer_irq_works() we do:
>
> 	local_irq_save(flags);
> 	local_irq_enable();
>
> 	[ trigger an IRQ here ]
>
> 	local_irq_restore(flags);
>
> ... and in check_timer() we call that a number of times after either a
> local_irq_save() or local_irq_disable(), eventually trailing with a
> local_irq_disable() that will balance things up before calling
> local_irq_restore().
>
> I guess that timer_irq_works() should instead do:
>
> 	local_irq_save(flags);
> 	local_irq_enable();
> 	...
> 	local_irq_disable();
> 	local_irq_restore(flags);
>
> ... assuming we consider that legitimate?

Nah. That's old and insane gunk.

Thanks,

        tglx
---
 arch/x86/kernel/apic/io_apic.c |   22 ++++++----------------
 1 file changed, 6 insertions(+), 16 deletions(-)

--- a/arch/x86/kernel/apic/io_apic.c
+++ b/arch/x86/kernel/apic/io_apic.c
@@ -1618,21 +1618,16 @@ static void __init delay_without_tsc(voi
 static int __init timer_irq_works(void)
 {
 	unsigned long t1 = jiffies;
-	unsigned long flags;
 
 	if (no_timer_check)
 		return 1;
 
-	local_save_flags(flags);
 	local_irq_enable();
-
 	if (boot_cpu_has(X86_FEATURE_TSC))
 		delay_with_tsc();
 	else
 		delay_without_tsc();
 
-	local_irq_restore(flags);
-
 	/*
 	 * Expect a few ticks at least, to be sure some possible
 	 * glue logic does not lock up after one or two first
@@ -1641,10 +1636,10 @@ static int __init timer_irq_works(void)
 	 * least one tick may be lost due to delays.
 	 */
 
-	/* jiffies wrap? */
-	if (time_after(jiffies, t1 + 4))
-		return 1;
-	return 0;
+	local_irq_disable();
+
+	/* Did jiffies advance? */
+	return time_after(jiffies, t1 + 4);
 }
 
 /*
@@ -2117,13 +2112,12 @@ static inline void __init check_timer(vo
 	struct irq_cfg *cfg = irqd_cfg(irq_data);
 	int node = cpu_to_node(0);
 	int apic1, pin1, apic2, pin2;
-	unsigned long flags;
 	int no_pin1 = 0;
 
 	if (!global_clock_event)
 		return;
 
-	local_irq_save(flags);
+	local_irq_disable();
 
 	/*
 	 * get/set the timer IRQ vector:
@@ -2191,7 +2185,6 @@ static inline void __init check_timer(vo
 			goto out;
 		}
 		panic_if_irq_remap("timer doesn't work through Interrupt-remapped IO-APIC");
-		local_irq_disable();
 		clear_IO_APIC_pin(apic1, pin1);
 		if (!no_pin1)
 			apic_printk(APIC_QUIET, KERN_ERR "..MP-BIOS bug: "
@@ -2215,7 +2208,6 @@ static inline void __init check_timer(vo
 		/*
 		 * Cleanup, just in case ...
 		 */
-		local_irq_disable();
 		legacy_pic->mask(0);
 		clear_IO_APIC_pin(apic2, pin2);
 		apic_printk(APIC_QUIET, KERN_INFO "....... failed.\n");
@@ -2232,7 +2224,6 @@ static inline void __init check_timer(vo
 		apic_printk(APIC_QUIET, KERN_INFO "..... works.\n");
 		goto out;
 	}
-	local_irq_disable();
 	legacy_pic->mask(0);
 	apic_write(APIC_LVT0, APIC_LVT_MASKED | APIC_DM_FIXED | cfg->vector);
 	apic_printk(APIC_QUIET, KERN_INFO "..... failed.\n");
@@ -2251,7 +2242,6 @@ static inline void __init check_timer(vo
 		apic_printk(APIC_QUIET, KERN_INFO "..... works.\n");
 		goto out;
 	}
-	local_irq_disable();
 	apic_printk(APIC_QUIET, KERN_INFO "..... failed :(.\n");
 	if (apic_is_x2apic_enabled())
 		apic_printk(APIC_QUIET, KERN_INFO
@@ -2260,7 +2250,7 @@ static inline void __init check_timer(vo
 	panic("IO-APIC + timer doesn't work!  Boot with apic=debug and send a "
 		"report.  Then try booting with the 'noapic' option.\n");
 out:
-	local_irq_restore(flags);
+	local_irq_enable();
 }
 
 /*


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 18:57:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 18:57:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48661.86069 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn4eX-0005md-2A; Wed, 09 Dec 2020 18:57:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48661.86069; Wed, 09 Dec 2020 18:57:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn4eW-0005mW-VG; Wed, 09 Dec 2020 18:57:28 +0000
Received: by outflank-mailman (input) for mailman id 48661;
 Wed, 09 Dec 2020 18:57:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XdhY=FN=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kn4eV-0005mR-RC
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 18:57:27 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 47a305cb-98f8-44d9-9b34-ad0dc326ae89;
 Wed, 09 Dec 2020 18:57:25 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0B9IvJl1009568
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Wed, 9 Dec 2020 19:57:20 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 2FC9C2E946C; Wed,  9 Dec 2020 19:57:14 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 47a305cb-98f8-44d9-9b34-ad0dc326ae89
Date: Wed, 9 Dec 2020 19:57:14 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: dom0 PV looping on search_pre_exception_table()
Message-ID: <20201209185714.GS1469@antioche.eu.org>
References: <20201208175738.GA3390@antioche.eu.org>
 <e73cc71d-c1a6-87c8-1b82-5d70d4f52eaa@citrix.com>
 <20201209101512.GA1299@antioche.eu.org>
 <3f7e50bb-24ad-1e32-9ea1-ba87007d3796@citrix.com>
 <20201209135908.GA4269@antioche.eu.org>
 <c612616a-3fcd-be93-7594-20c0c3b71b7a@citrix.com>
 <20201209154431.GA4913@antioche.eu.org>
 <52e1b10d-75d4-63ac-f91e-cb8f0dcca493@citrix.com>
 <20201209163049.GA6158@antioche.eu.org>
 <30a71c9d-3eff-3727-9c61-e387b5bccc95@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <30a71c9d-3eff-3727-9c61-e387b5bccc95@citrix.com>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Wed, 09 Dec 2020 19:57:20 +0100 (MET)

On Wed, Dec 09, 2020 at 06:08:53PM +0000, Andrew Cooper wrote:
> On 09/12/2020 16:30, Manuel Bouyer wrote:
> > On Wed, Dec 09, 2020 at 04:00:02PM +0000, Andrew Cooper wrote:
> >> [...]
> >>>> I wonder if the LDT is set up correctly.
> >>> I guess it is, otherwise it wouldn't boot with a Xen 4.13 kernel, would it?
> >> Well - you said you always saw it once on 4.13, which clearly shows that
> >> something was wonky, but it managed to unblock itself.
> >>
> >>>> How about this incremental delta?
> >>> Here's the output
> >>> (XEN) IRET fault: #PF[0000]                                                    
> >>> (XEN) %cr2 ffff820000010040, LDT base ffffc4800000a000, limit 0057             
> >>> (XEN) *** pv_map_ldt_shadow_page(0x40) failed                                  
> >>> (XEN) IRET fault: #PF[0000]                                                    
> >>> (XEN) %cr2 ffff820000010040, LDT base ffffc4800000a000, limit 0057             
> >>> (XEN) *** pv_map_ldt_shadow_page(0x40) failed                                  
> >>> (XEN) IRET fault: #PF[0000]                                                 
> >> Ok, so the promotion definitely fails, but we don't get as far as
> >> inspecting the content of the LDT frame. This probably means it failed
> >> to change the page type, which probably means there are still
> >> outstanding writeable references.
> >>
> >> I'm expecting the final printk to be the one which triggers.
> > It's not. 
> > Here's the output:
> > (XEN) IRET fault: #PF[0000]
> > (XEN) %cr2 ffff820000010040, LDT base ffffbd000000a000, limit 0057
> > (XEN) *** LDT: gl1e 0000000000000000 not present
> > (XEN) *** pv_map_ldt_shadow_page(0x40) failed
> > (XEN) IRET fault: #PF[0000]
> > (XEN) %cr2 ffff820000010040, LDT base ffffbd000000a000, limit 0057
> > (XEN) *** LDT: gl1e 0000000000000000 not present
> > (XEN) *** pv_map_ldt_shadow_page(0x40) failed
> 
> Ok. So the mapping registered for the LDT is not yet present. Xen
> should be raising #PF with the guest, and would be in every case other
> than the weird context on IRET, where we've confused bad guest state
> with bad hypervisor state.

Unfortunately, it doesn't fix the problem. I'm now getting a loop of:
(XEN) *** LDT: gl1e 0000000000000000 not present                               
(XEN) *** pv_map_ldt_shadow_page(0x40) failed                                  

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 18:58:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 18:58:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48668.86081 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn4fM-0005vA-D0; Wed, 09 Dec 2020 18:58:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48668.86081; Wed, 09 Dec 2020 18:58:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn4fM-0005v3-9O; Wed, 09 Dec 2020 18:58:20 +0000
Received: by outflank-mailman (input) for mailman id 48668;
 Wed, 09 Dec 2020 18:58:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kn4fK-0005uy-HK
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 18:58:18 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kn4fJ-0007mx-I9; Wed, 09 Dec 2020 18:58:17 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kn4fJ-0007px-3x; Wed, 09 Dec 2020 18:58:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=9CP0Wr+HcuavK8spAj03FIwOzghiv+hj+CI3XS6MuIw=; b=CXarsxWDoyytJ6yFV+IU80FM1b
	+QxjOlFmrg+sCgD5pDq6QeeMz9QkPUB95zncgaNbxq4cSdC3/VVSbMn9AX7XKYTJC6MIqE+QTeELf
	mHvLQDpctiwar2U9G5nY1PKHpTTA69kxYgH0R3baZTvUiqsqTZsFVOWYslBH2umhqV6w=;
Subject: Re: [PATCH V3 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
To: paul@xen.org, 'Oleksandr' <olekstysh@gmail.com>
Cc: 'Jan Beulich' <jbeulich@suse.com>,
 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Ian Jackson' <iwj@xenproject.org>, 'Wei Liu' <wl@xen.org>,
 'Julien Grall' <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-18-git-send-email-olekstysh@gmail.com>
 <3bb4c3b5-a46a-ba31-292f-5c6ba49fa9be@suse.com>
 <6026b7f3-ae6e-f98f-be65-27d7f729a37f@gmail.com>
 <18bfd9b1-3e6a-8119-efd0-c82ad7ae681d@gmail.com>
 <0d6c01d6cd9a$666326c0$33297440$@xen.org>
 <57bfc007-e400-6777-0075-827daa8acf0e@gmail.com>
 <0d7201d6ce09$e13dce80$a3b96b80$@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <84d7238d-0ec1-acdd-6cea-db78aba6f3d7@xen.org>
Date: Wed, 9 Dec 2020 18:58:14 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <0d7201d6ce09$e13dce80$a3b96b80$@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Oleksandr and Paul,

Sorry for jumping into the conversation late.

On 09/12/2020 09:01, Paul Durrant wrote:
>> -----Original Message-----
>> From: Oleksandr <olekstysh@gmail.com>
>> Sent: 08 December 2020 20:17
>> To: paul@xen.org
>> Cc: 'Jan Beulich' <jbeulich@suse.com>; 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>;
>> 'Stefano Stabellini' <sstabellini@kernel.org>; 'Julien Grall' <julien@xen.org>; 'Volodymyr Babchuk'
>> <Volodymyr_Babchuk@epam.com>; 'Andrew Cooper' <andrew.cooper3@citrix.com>; 'George Dunlap'
>> <george.dunlap@citrix.com>; 'Ian Jackson' <iwj@xenproject.org>; 'Wei Liu' <wl@xen.org>; 'Julien Grall'
>> <julien.grall@arm.com>; xen-devel@lists.xenproject.org
>> Subject: Re: [PATCH V3 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
>>
>>
>> On 08.12.20 21:43, Paul Durrant wrote:
>>
>> Hi Paul
>>
>>>> -----Original Message-----
>>>> From: Oleksandr <olekstysh@gmail.com>
>>>> Sent: 08 December 2020 16:57
>>>> To: Paul Durrant <paul@xen.org>
>>>> Cc: Jan Beulich <jbeulich@suse.com>; Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Stefano
>>>> Stabellini <sstabellini@kernel.org>; Julien Grall <julien@xen.org>; Volodymyr Babchuk
>>>> <Volodymyr_Babchuk@epam.com>; Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap
>>>> <george.dunlap@citrix.com>; Ian Jackson <iwj@xenproject.org>; Wei Liu <wl@xen.org>; Julien Grall
>>>> <julien.grall@arm.com>; xen-devel@lists.xenproject.org
>>>> Subject: Re: [PATCH V3 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
>>>>
>>>>
>>>> Hi Paul.
>>>>
>>>>
>>>> On 08.12.20 17:33, Oleksandr wrote:
>>>>> On 08.12.20 17:11, Jan Beulich wrote:
>>>>>
>>>>> Hi Jan
>>>>>
>>>>>> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>>>>>>> --- a/xen/include/xen/ioreq.h
>>>>>>> +++ b/xen/include/xen/ioreq.h
>>>>>>> @@ -55,6 +55,20 @@ struct ioreq_server {
>>>>>>>         uint8_t                bufioreq_handling;
>>>>>>>     };
>>>>>>>     +/*
>>>>>>> + * This should only be used when d == current->domain and it's not
>>>>>>> paused,
>>>>>> Is the "not paused" part really relevant here? Besides it being rare
>>>>>> that the current domain would be paused (if so, it's in the process
>>>>>> of having all its vCPU-s scheduled out), does this matter at all?
>>>>> No, it isn't relevant, I will drop it.
>>>>>
>>>>>
>>>>>> Apart from this the patch looks okay to me, but I'm not sure it
>>>>>> addresses Paul's concerns. Iirc he had suggested to switch back to
>>>>>> a list if doing a swipe over the entire array is too expensive in
>>>>>> this specific case.
>>>>> We would like to avoid doing any extra actions in
>>>>> leave_hypervisor_to_guest() if possible.
>>>>> But not only there, the logic whether we check/set
>>>>> mapcache_invalidation variable could be avoided if a domain doesn't
>>>>> use IOREQ server...
>>>>
>>>> Are you OK with this patch (common part of it)?
>>> How much of a performance benefit is this? The array is small, so simply counting the non-NULL
>> entries should be pretty quick.
>> I didn't perform performance measurements on how much this call consumes.
>> In our system we run three domains. The emulator is in DomD only, so I
>> would like to avoid calling vcpu_ioreq_handle_completion() for every
>> Dom0/DomU's vCPUs
>> if there is no real need to do it.
> 
> This is not relevant to the domain that the emulator is running in; it concerns the domains which the emulator is servicing. How many of those are there?

AFAICT, the maximum number of IOREQ servers is 8 today.

> 
>> On Arm vcpu_ioreq_handle_completion()
>> is called with IRQs enabled, so the call is accompanied by
>> corresponding irq_enable/irq_disable calls.
>> These unneeded actions could be avoided by using this simple one-line
>> helper...
>>
> 
> The helper may be one line but there is more to the patch than that. I still think you could just walk the array in the helper rather than keeping a running occupancy count.

Right, the concern here is that this function will be called on a hot path 
(every time we re-enter the guest). Unlike on x86, the entry/exit code is 
really small, so any additional code will have a visible impact on the 
overall performance.
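
A quick sketch contrasting the two shapes being compared: the running occupancy count kept by the patch versus Paul's suggestion of walking the (at most 8-entry) array each time. All structure and function names below are illustrative stand-ins, not the actual Xen code.

```c
#include <stddef.h>

#define MAX_NR_IOREQ_SERVERS 8

struct ioreq_server { int unused; };

struct domain {
    struct ioreq_server *server[MAX_NR_IOREQ_SERVERS];
    unsigned int nr_servers;  /* maintained on server create/destroy */
};

/* The patch's approach: O(1) check against a maintained counter. */
static inline int domain_has_ioreq_server_counted(const struct domain *d)
{
    return d->nr_servers != 0;
}

/* Paul's alternative: no counter to maintain, just walk the array. */
static inline int domain_has_ioreq_server_walk(const struct domain *d)
{
    unsigned int i;

    for ( i = 0; i < MAX_NR_IOREQ_SERVERS; i++ )
        if ( d->server[i] != NULL )
            return 1;

    return 0;
}
```

Either way the helper stays trivial; the trade-off is a counter update on the (rare) server create/destroy path versus up to 8 loads on every check.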

That said, the IOREQ code is a tech preview for Arm. So I would be fine 
going with Paul's approach until we have a better understanding of the 
performance of virtio/IOREQ.

Let me throw in some more thoughts about the optimization here. The 
patch focuses on the performance impact when IOREQ is built in but not 
used. I think we can do further optimization (which may supersede this one).

get_pending_vcpu() (called from handle_hvm_io_completion()) is overly 
expensive in particular if you have no I/O forwarded to an IOREQ server. 
Entry to the hypervisor can happen for many reasons (interrupts, system 
register emulation, I/O emulation...) and forwarded I/O should be a 
small subset of those.

Ideally, handle_hvm_io_completion() should be a NOP (at most a few 
instructions) if there is nothing to do. Maybe we want to introduce a 
per-vCPU flag indicating if an I/O has been forwarded to an IOREQ server.

This would also allow us to bypass most of the function when there is nothing to do.
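
The per-vCPU flag could look something like the sketch below. The field name and the slow-path body are hypothetical (only vcpu_ioreq_handle_completion() is a real function name from this series); the point is that the common no-I/O case collapses to a single test-and-branch.

```c
#include <stdbool.h>

struct vcpu {
    bool io_pending;  /* hypothetical: set when an I/O is forwarded */
};

/* On the emulation path: mark the vCPU when a request is sent out. */
static void ioreq_send(struct vcpu *v)
{
    /* ... forward the request to the IOREQ server ... */
    v->io_pending = true;
}

/* Called on every return to the guest. */
static bool vcpu_ioreq_handle_completion(struct vcpu *v)
{
    if ( !v->io_pending )
        return true;  /* fast path: nothing outstanding, near-NOP */

    /* ... slow path: find the pending ioreq, wait for completion ... */
    v->io_pending = false;

    return true;
}
```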

Any thoughts?

In any case, this is more forward-looking than a request for the 
current series. What matters to me is that we have a functional (not 
necessarily optimized) version of IOREQ in Xen 4.15. This would be a 
great step towards using VirtIO on Xen.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 19:08:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 19:08:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48675.86093 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn4pW-000719-Cf; Wed, 09 Dec 2020 19:08:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48675.86093; Wed, 09 Dec 2020 19:08:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn4pW-000712-9l; Wed, 09 Dec 2020 19:08:50 +0000
Received: by outflank-mailman (input) for mailman id 48675;
 Wed, 09 Dec 2020 19:08:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NJeK=FN=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kn4pU-00070x-EQ
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 19:08:48 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 64b4a243-ab89-4819-bf98-5e65025b038c;
 Wed, 09 Dec 2020 19:08:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64b4a243-ab89-4819-bf98-5e65025b038c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1607540926;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=pqv5EwtNQMm9dZ0BSNuE7TLRoytgQKnF5xXuDVOERBo=;
  b=QUyBwkLZrVKBu1MI5t0E24Xc5r+b69mwshLKGnDHNomVed7amv1KJdyz
   dweKVqNjqMP6QUGG+hFtIwoXv4jABfNqZOzadeBeCLT17sNjE34WIkXjR
   tbZUH0ScSCLCr3q4ZO/A3ClD+Y4JRTePW72jV0SWjtC+FKWO2SjSucGFe
   E=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: X08dhlGsZJzVr+2U3hZEPgxrgbT9Qom8oXWRH+8ryl9HtCOkAvvEKH6bAPOxZzPhl9UGzNuHl5
 J/EKh6zQTfSNuwYYUniak5pZx0MjmntKcoQvU4EMgkDKFLGix57rwJxhVvxAAmpnyHDRBIzHbE
 S2oEMQIiQwzeaHXuOUkcIlwd+8z8x/gS4fc5k+vVRcnPjyqbV/wIwk7zBUt8JZwKsKKl/38kSu
 KqfCWqaSMns5eVvAO+QrLfFpgNzFlEXKU7hU00QBGhP/uBmnGcA9OZjvHfT2JWbiJw/Kp4DW/A
 cmU=
X-SBRS: 5.2
X-MesageID: 32863393
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,405,1599537600"; 
   d="scan'208";a="32863393"
Subject: Re: dom0 PV looping on search_pre_exception_table()
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: <xen-devel@lists.xenproject.org>
References: <20201208175738.GA3390@antioche.eu.org>
 <e73cc71d-c1a6-87c8-1b82-5d70d4f52eaa@citrix.com>
 <20201209101512.GA1299@antioche.eu.org>
 <3f7e50bb-24ad-1e32-9ea1-ba87007d3796@citrix.com>
 <20201209135908.GA4269@antioche.eu.org>
 <c612616a-3fcd-be93-7594-20c0c3b71b7a@citrix.com>
 <20201209154431.GA4913@antioche.eu.org>
 <52e1b10d-75d4-63ac-f91e-cb8f0dcca493@citrix.com>
 <20201209163049.GA6158@antioche.eu.org>
 <30a71c9d-3eff-3727-9c61-e387b5bccc95@citrix.com>
 <20201209185714.GS1469@antioche.eu.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <6c06abf1-7efe-f02c-536a-337a2704e265@citrix.com>
Date: Wed, 9 Dec 2020 19:08:41 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201209185714.GS1469@antioche.eu.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 09/12/2020 18:57, Manuel Bouyer wrote:
> On Wed, Dec 09, 2020 at 06:08:53PM +0000, Andrew Cooper wrote:
>> On 09/12/2020 16:30, Manuel Bouyer wrote:
>>> On Wed, Dec 09, 2020 at 04:00:02PM +0000, Andrew Cooper wrote:
>>>> [...]
>>>>>> I wonder if the LDT is set up correctly.
>>>>> I guess it is, otherwise it wouldn't boot with a Xen 4.13 kernel, would it?
>>>> Well - you said you always saw it once on 4.13, which clearly shows that
>>>> something was wonky, but it managed to unblock itself.
>>>>
>>>>>> How about this incremental delta?
>>>>> Here's the output
>>>>> (XEN) IRET fault: #PF[0000]                                                    
>>>>> (XEN) %cr2 ffff820000010040, LDT base ffffc4800000a000, limit 0057             
>>>>> (XEN) *** pv_map_ldt_shadow_page(0x40) failed                                  
>>>>> (XEN) IRET fault: #PF[0000]                                                    
>>>>> (XEN) %cr2 ffff820000010040, LDT base ffffc4800000a000, limit 0057             
>>>>> (XEN) *** pv_map_ldt_shadow_page(0x40) failed                                  
>>>>> (XEN) IRET fault: #PF[0000]                                                 
>>>> Ok, so the promotion definitely fails, but we don't get as far as
>>>> inspecting the content of the LDT frame.  This probably means it failed
>>>> to change the page type, which probably means there are still
>>>> outstanding writeable references.
>>>>
>>>> I'm expecting the final printk to be the one which triggers.
>>> It's not. 
>>> Here's the output:
>>> (XEN) IRET fault: #PF[0000]
>>> (XEN) %cr2 ffff820000010040, LDT base ffffbd000000a000, limit 0057
>>> (XEN) *** LDT: gl1e 0000000000000000 not present
>>> (XEN) *** pv_map_ldt_shadow_page(0x40) failed
>>> (XEN) IRET fault: #PF[0000]
>>> (XEN) %cr2 ffff820000010040, LDT base ffffbd000000a000, limit 0057
>>> (XEN) *** LDT: gl1e 0000000000000000 not present
>>> (XEN) *** pv_map_ldt_shadow_page(0x40) failed
>> Ok.  So the mapping registered for the LDT is not yet present.  Xen
>> should be raising #PF with the guest, and would be in every case other
>> than the weird context on IRET, where we've confused bad guest state
>> with bad hypervisor state.
> Unfortunately it doesn't fix the problem. I'm now getting a loop of
> (XEN) *** LDT: gl1e 0000000000000000 not present                               
> (XEN) *** pv_map_ldt_shadow_page(0x40) failed                                  

Oh of course - we don't follow the exit-to-guest path on the way out here.

As a gross hack to check that we've at least diagnosed the issue
appropriately, could you modify NetBSD to explicitly load the %ss
selector into %es (or any other free segment) before first entering user
context?

If it is a sequence of LDT demand-faulting issues, that should cause them
to be fully resolved before Xen's IRET becomes the first actual LDT load.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 19:38:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 19:38:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48683.86105 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn5IA-0001Vk-Ux; Wed, 09 Dec 2020 19:38:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48683.86105; Wed, 09 Dec 2020 19:38:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn5IA-0001Vd-Rx; Wed, 09 Dec 2020 19:38:26 +0000
Received: by outflank-mailman (input) for mailman id 48683;
 Wed, 09 Dec 2020 19:38:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/xMB=FN=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kn5IA-0001VY-9U
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 19:38:26 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8acd188a-a50f-420b-886f-a3933b814d71;
 Wed, 09 Dec 2020 19:38:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8acd188a-a50f-420b-886f-a3933b814d71
Date: Wed, 9 Dec 2020 11:38:22 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607542704;
	bh=DHOd1MbYdbEzgawURuOXyNtKiflOem05991XPrCV4Aw=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=dHET5MeYuD/SknaQWidwf77TU58L0TDLghH6GeEF7Amga7ej3ZzvSx2rYv9XrCjFO
	 vg+NgW1Wm2aNFFsRzKK5HQ0vezVcsz+dihbrjYL2zypdXp6INOzbcQiqqq5+ofs9uo
	 9xP37Efr/h7ioiZWdYUZ4V+ZJtq/6PT5BPOL4gBTYiNJNKopO+/qLW2zB7hV+IvIgz
	 Xcb2fZSBfrgyGgF6paIOP7tk0wfQZZjUr52L9Ic/zZ9BkIhTOb9y6PiIFC0LruPpV+
	 qKuzjj9KQJb8IL4n4qXtIGLdcF/3iHQmHcnQDx8mVPWL/LyFM9GqzsjoMVBuVWQh/5
	 t1DmsnTJKLcKQ==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <bertrand.marquis@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 4/7] xen/arm: Add handler for ID registers on arm64
In-Reply-To: <e991b05af11d00627709caf847c5de99f487cab0.1607524536.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.21.2012091131350.20986@sstabellini-ThinkPad-T480s>
References: <cover.1607524536.git.bertrand.marquis@arm.com> <e991b05af11d00627709caf847c5de99f487cab0.1607524536.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 9 Dec 2020, Bertrand Marquis wrote:
> Add vsysreg emulation for the registers trapped when the TID3 bit is
> set in HCR_EL2.
> The emulation returns the value stored in the cpuinfo_guest structure
> for known registers and handles reserved registers as RAZ.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
> Changes in V2: Rebase
> Changes in V3:
>   Fix commit message
>   Fix code style for GENERATE_TID3_INFO declaration
>   Add handling of reserved registers as RAZ.
> 
> ---
>  xen/arch/arm/arm64/vsysreg.c | 53 ++++++++++++++++++++++++++++++++++++
>  1 file changed, 53 insertions(+)
> 
> diff --git a/xen/arch/arm/arm64/vsysreg.c b/xen/arch/arm/arm64/vsysreg.c
> index 8a85507d9d..ef7a11dbdd 100644
> --- a/xen/arch/arm/arm64/vsysreg.c
> +++ b/xen/arch/arm/arm64/vsysreg.c
> @@ -69,6 +69,14 @@ TVM_REG(CONTEXTIDR_EL1)
>          break;                                                          \
>      }
>  
> +/* Macro to generate easily case for ID co-processor emulation */
> +#define GENERATE_TID3_INFO(reg, field, offset)                          \
> +    case HSR_SYSREG_##reg:                                              \
> +    {                                                                   \
> +        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr,   \
> +                          1, guest_cpuinfo.field.bits[offset]);         \

[...]

> +    HSR_SYSREG_TID3_RESERVED_CASE:
> +        /* Handle all reserved registers as RAZ */
> +        return handle_ro_raz(regs, regidx, hsr.sysreg.read, hsr, 1);


We are implementing both the known and the implementation-defined
registers as read-as-zero. On write, we inject an exception.

However, reading the manual, it looks like the implementation-defined
registers should be read-as-zero/write-ignore, is that right?

I couldn't easily find in the manual whether it is OK to inject an
exception on a write to a known register.
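
The behavioural difference in question can be modelled with two tiny stand-in handlers. These are not the real Xen helpers (the tree's handle_ro_raz() and handle_raz_wi() take trap context arguments); they only capture the observable semantics being compared: RAZ with faulting writes versus RAZ/WI.

```c
#include <stdbool.h>
#include <stdint.h>

/* RAZ, writes fault: reads return zero, and a write makes the caller
 * inject an undefined exception into the guest. */
static bool emulate_raz(bool is_read, uint64_t *val)
{
    if ( !is_read )
        return false;  /* caller injects an exception into the guest */

    *val = 0;
    return true;
}

/* RAZ/WI: reads return zero, writes are silently ignored. */
static bool emulate_raz_wi(bool is_read, uint64_t *val)
{
    if ( is_read )
        *val = 0;

    return true;  /* a write "succeeds" but changes nothing */
}
```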


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 19:55:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 19:55:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48689.86117 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn5Xw-0003R6-D2; Wed, 09 Dec 2020 19:54:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48689.86117; Wed, 09 Dec 2020 19:54:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn5Xw-0003Qz-9A; Wed, 09 Dec 2020 19:54:44 +0000
Received: by outflank-mailman (input) for mailman id 48689;
 Wed, 09 Dec 2020 19:54:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/xMB=FN=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kn5Xv-0003Qu-4q
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 19:54:43 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c0391753-0872-4ce6-b992-5df83792284d;
 Wed, 09 Dec 2020 19:54:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0391753-0872-4ce6-b992-5df83792284d
Date: Wed, 9 Dec 2020 11:54:40 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607543681;
	bh=bUu9X1geTL8M/GRL58coGcKFEnZEH5lzro/be4YmT74=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=pLg3Ih70pXOFULvXM8uvkooxLIRpGfsGn7QGkW+xZMzyZWBWeXUa66vc7QrgSePF4
	 B7GMNcsTgHal8F/SA+mnr0b7/6oGMpHya9nmY+0Tb23fG6wBpNPuu8lNSuxu5NTiz5
	 +RyuexWbSD68EoRrFQNMboAx+a8XdHr5E6NGWbjbwkA2MpgKCMmI5OWnSQ3a5ifnxs
	 VFvelzr0rhX+cWeqqx9ueKZUv6QLAfnhaXKUVhbL4/l4CyRdwHh/pqH5d+Sx+f77D6
	 6gmB+hT7cttJ9bcgsYVnjJzS7JHWSG5o7aUfwp+elxAVFcMNZ9Sr9YVG3c1NxuzRfH
	 7sPU7zG48Qung==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <bertrand.marquis@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 5/7] xen/arm: Add handler for cp15 ID registers
In-Reply-To: <5a36325410f485dbdddc0f6088378cacc54c5243.1607524536.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.21.2012091153400.20986@sstabellini-ThinkPad-T480s>
References: <cover.1607524536.git.bertrand.marquis@arm.com> <5a36325410f485dbdddc0f6088378cacc54c5243.1607524536.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 9 Dec 2020, Bertrand Marquis wrote:
> Add support for emulation of cp15-based ID registers (on arm32 or when
> running a 32-bit guest on arm64).
> The handlers return the values stored in the guest_cpuinfo
> structure for known registers and RAZ for all reserved registers.
> Currently the MVFR registers are not supported.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
> Changes in V2: Rebase
> Changes in V3:
>   Add case definition for reserved registers
>   Add handling of reserved registers as RAZ.
>   Fix code style in GENERATE_TID3_INFO declaration
> 
> ---
>  xen/arch/arm/vcpreg.c        | 39 ++++++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/cpregs.h | 25 +++++++++++++++++++++++
>  2 files changed, 64 insertions(+)
> 
> diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c
> index cdc91cdf5b..d371a1c38c 100644
> --- a/xen/arch/arm/vcpreg.c
> +++ b/xen/arch/arm/vcpreg.c
> @@ -155,6 +155,14 @@ TVM_REG32(CONTEXTIDR, CONTEXTIDR_EL1)
>          break;                                                      \
>      }
>  
> +/* Macro to generate easily case for ID co-processor emulation */
> +#define GENERATE_TID3_INFO(reg, field, offset)                      \
> +    case HSR_CPREG32(reg):                                          \
> +    {                                                               \
> +        return handle_ro_read_val(regs, regidx, cp32.read, hsr,     \
> +                          1, guest_cpuinfo.field.bits[offset]);     \
> +    }
> +
>  void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
>  {
>      const struct hsr_cp32 cp32 = hsr.cp32;
> @@ -286,6 +294,37 @@ void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
>           */
>          return handle_raz_wi(regs, regidx, cp32.read, hsr, 1);
>  
> +    /*
> +     * HCR_EL2.TID3
> +     *
> +     * This is trapping most Identification registers used by a guest
> +     * to identify the processor features
> +     */
> +    GENERATE_TID3_INFO(ID_PFR0, pfr32, 0)
> +    GENERATE_TID3_INFO(ID_PFR1, pfr32, 1)
> +    GENERATE_TID3_INFO(ID_PFR2, pfr32, 2)
> +    GENERATE_TID3_INFO(ID_DFR0, dbg32, 0)
> +    GENERATE_TID3_INFO(ID_DFR1, dbg32, 1)
> +    GENERATE_TID3_INFO(ID_AFR0, aux32, 0)
> +    GENERATE_TID3_INFO(ID_MMFR0, mm32, 0)
> +    GENERATE_TID3_INFO(ID_MMFR1, mm32, 1)
> +    GENERATE_TID3_INFO(ID_MMFR2, mm32, 2)
> +    GENERATE_TID3_INFO(ID_MMFR3, mm32, 3)
> +    GENERATE_TID3_INFO(ID_MMFR4, mm32, 4)
> +    GENERATE_TID3_INFO(ID_MMFR5, mm32, 5)
> +    GENERATE_TID3_INFO(ID_ISAR0, isa32, 0)
> +    GENERATE_TID3_INFO(ID_ISAR1, isa32, 1)
> +    GENERATE_TID3_INFO(ID_ISAR2, isa32, 2)
> +    GENERATE_TID3_INFO(ID_ISAR3, isa32, 3)
> +    GENERATE_TID3_INFO(ID_ISAR4, isa32, 4)
> +    GENERATE_TID3_INFO(ID_ISAR5, isa32, 5)
> +    GENERATE_TID3_INFO(ID_ISAR6, isa32, 6)
> +    /* MVFR registers are in cp10, not cp15 */
> +
> +    HSR_CPREG32_TID3_RESERVED_CASE:
> +        /* Handle all reserved registers as RAZ */
> +        return handle_ro_raz(regs, regidx, cp32.read, hsr, 1);

Same question as for the aarch64 case: do we need to do write-ignore
for the reserved registers?


>      /*
>       * HCR_EL2.TIDCP
>       *
> diff --git a/xen/include/asm-arm/cpregs.h b/xen/include/asm-arm/cpregs.h
> index 2690ddeb7a..5cb1ad5cbe 100644
> --- a/xen/include/asm-arm/cpregs.h
> +++ b/xen/include/asm-arm/cpregs.h
> @@ -133,6 +133,31 @@
>  #define VPIDR           p15,4,c0,c0,0   /* Virtualization Processor ID Register */
>  #define VMPIDR          p15,4,c0,c0,5   /* Virtualization Multiprocessor ID Register */
>  
> +/*
> + * Those cases are catching all Reserved registers trapped by TID3 which
> + * currently have no assignment.
> + * HCR.TID3 is trapping all registers in the group 3:
> + * coproc == p15, opc1 == 0, CRn == c0, CRm == {c2-c7}, opc2 == {0-7}.
> + */
> +#define HSR_CPREG32_TID3_CASES(REG)     case HSR_CPREG32(p15,0,c0,REG,0): \
> +                                        case HSR_CPREG32(p15,0,c0,REG,1): \
> +                                        case HSR_CPREG32(p15,0,c0,REG,2): \
> +                                        case HSR_CPREG32(p15,0,c0,REG,3): \
> +                                        case HSR_CPREG32(p15,0,c0,REG,4): \
> +                                        case HSR_CPREG32(p15,0,c0,REG,5): \
> +                                        case HSR_CPREG32(p15,0,c0,REG,6): \
> +                                        case HSR_CPREG32(p15,0,c0,REG,7)
> +
> +#define HSR_CPREG32_TID3_RESERVED_CASE  case HSR_CPREG32(p15,0,c0,c3,0): \
> +                                        case HSR_CPREG32(p15,0,c0,c3,1): \
> +                                        case HSR_CPREG32(p15,0,c0,c3,2): \
> +                                        case HSR_CPREG32(p15,0,c0,c3,3): \
> +                                        case HSR_CPREG32(p15,0,c0,c3,7): \
> +                                        HSR_CPREG32_TID3_CASES(c4): \
> +                                        HSR_CPREG32_TID3_CASES(c5): \
> +                                        HSR_CPREG32_TID3_CASES(c6): \
> +                                        HSR_CPREG32_TID3_CASES(c7)

The following are missing, is it a problem?

p15,0,c0,c0,2
p15,0,c0,c0,3
p15,0,c0,c0,4
p15,0,c0,c0,6
p15,0,c0,c0,7


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 20:36:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 20:36:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48698.86129 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn6CJ-0007Pe-Oh; Wed, 09 Dec 2020 20:36:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48698.86129; Wed, 09 Dec 2020 20:36:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn6CJ-0007PX-K8; Wed, 09 Dec 2020 20:36:27 +0000
Received: by outflank-mailman (input) for mailman id 48698;
 Wed, 09 Dec 2020 20:36:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GmJG=FN=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kn6CI-0007PM-EZ
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 20:36:26 +0000
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 781e3555-9673-4d04-aed3-392dd359b4c4;
 Wed, 09 Dec 2020 20:36:25 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id f11so4056547ljn.2
 for <xen-devel@lists.xenproject.org>; Wed, 09 Dec 2020 12:36:25 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id t196sm280262lff.195.2020.12.09.12.36.22
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 09 Dec 2020 12:36:23 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 781e3555-9673-4d04-aed3-392dd359b4c4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=gqUt/I2482+pfCTh1cUIEi2n9lArJhB7Bpk6SZ2KITU=;
        b=VovL3mFebRhI7evZxqhaeoqlIz22rDzjjE37z6UVy4sBHqMEYiPC8abkryznN7KUtx
         vpciLxT23LHj4Gn+Wllx5bvQJF6lsXSEDpgjmuNUJ2THMbNOY9tFbgCfwHmyYHQNVbOl
         NVh6ev2dbDD+s8cE+j3EaGP1x9y/JPboTv0ueaHTCPy2PuY2fNQixjZT/HVTIpN63eGD
         Vil92zAc0/eTM1cna9i51/BTA90/VV9ChCYW7fJPwY6TPMusjGAsiuJCmmzqsFvBI++l
         9bhkLifeMFwuE9zromWx48QU5VvpcOREX+8QZY7DFdQoBK37CL4Fsg/wWxtaq8NxF/+x
         XrBQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=gqUt/I2482+pfCTh1cUIEi2n9lArJhB7Bpk6SZ2KITU=;
        b=VvdFwmaRWkl/uYstG8uim09J1aU8AABWTLZbJyZsKGosIFPDsKzF1/VthxMDKZqetZ
         NJbBlj8MsUbpuexzU+1MokITbMeYuUqI0mrSesqwZ69QXOhqhS8Id9li/dwL9rh6kEFf
         DXa6LiZ3q0gRn8g5pu6jmAOp5vNt1PfhVGaLtMi3sPyPFbSj3cq1/Og9jYzaQZ/5s85q
         +je25qk3xFv/DDLkvg7QrffU1bfa1W/MZmx5/29oOJKvXkMkWD8IllKusFjYJiF5hhO+
         DUfI2lON0bmP1syjvlvFbzzIAksi8SfeASyRhCUruDwARFZ7q02/o7XUyPWO0YEWPR19
         aWFg==
X-Gm-Message-State: AOAM530cPZTARjH9YbbzH0/hee2g1bmq45RWjQazJ1NAkLV1GRpLuQMr
	24hzjNFjQAHxpXh6cl4bl173gDfo4n58fg==
X-Google-Smtp-Source: ABdhPJzW3plcS9eRZ2Z11WUFpOTMfiwJWmukAhLVxfuz9v/1IDHyHou2B6Kqj90gDj+IRUCUjRctug==
X-Received: by 2002:a2e:88c2:: with SMTP id a2mr1746585ljk.415.1607546183862;
        Wed, 09 Dec 2020 12:36:23 -0800 (PST)
Subject: Re: [PATCH V3 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
To: paul@xen.org
Cc: 'Jan Beulich' <jbeulich@suse.com>,
 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Ian Jackson' <iwj@xenproject.org>, 'Wei Liu' <wl@xen.org>,
 'Julien Grall' <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-18-git-send-email-olekstysh@gmail.com>
 <3bb4c3b5-a46a-ba31-292f-5c6ba49fa9be@suse.com>
 <6026b7f3-ae6e-f98f-be65-27d7f729a37f@gmail.com>
 <18bfd9b1-3e6a-8119-efd0-c82ad7ae681d@gmail.com>
 <0d6c01d6cd9a$666326c0$33297440$@xen.org>
 <57bfc007-e400-6777-0075-827daa8acf0e@gmail.com>
 <0d7201d6ce09$e13dce80$a3b96b80$@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <96b9b843-f4fe-834a-f17b-d75198aa0dab@gmail.com>
Date: Wed, 9 Dec 2020 22:36:16 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <0d7201d6ce09$e13dce80$a3b96b80$@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


Hi Paul.


>>>>>> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>>>>>>> --- a/xen/include/xen/ioreq.h
>>>>>>> +++ b/xen/include/xen/ioreq.h
>>>>>>> @@ -55,6 +55,20 @@ struct ioreq_server {
>>>>>>>         uint8_t                bufioreq_handling;
>>>>>>>     };
>>>>>>>     +/*
>>>>>>> + * This should only be used when d == current->domain and it's not
>>>>>>> paused,
>>>>>> Is the "not paused" part really relevant here? Besides it being rare
>>>>>> that the current domain would be paused (if so, it's in the process
>>>>>> of having all its vCPU-s scheduled out), does this matter at all?
>>>>> No, it isn't relevant, I will drop it.
>>>>>
>>>>>
>>>>>> Apart from this the patch looks okay to me, but I'm not sure it
>>>>>> addresses Paul's concerns. Iirc he had suggested to switch back to
>>>>>> a list if doing a swipe over the entire array is too expensive in
>>>>>> this specific case.
>>>>> We would like to avoid doing any extra actions in
>>>>> leave_hypervisor_to_guest() if possible.
>>>>> But not only there: the logic that checks/sets the
>>>>> mapcache_invalidation variable could be avoided if a domain doesn't
>>>>> use an IOREQ server...
>>>> Are you OK with this patch (common part of it)?
>>>> How much of a performance benefit is this? The array is small so simply counting the non-NULL
>> entries should be pretty quick.
>> I didn't perform performance measurements on how much this call consumes.
>> In our system we run three domains. The emulator is in DomD only, so I
>> would like to avoid calling vcpu_ioreq_handle_completion() for every
>> Dom0/DomU vCPU if there is no real need to do it.
> This is not relevant to the domain that the emulator is running in; it's concerning the domains which the emulator is servicing. How many of those are there?
Err, yes, I wasn't precise when providing an example.
A single emulator is running in DomD and servicing DomU. So with the 
helper in place, vcpu_ioreq_handle_completion() only gets called for 
DomU vCPUs (as expected).
Without the optimization, vcpu_ioreq_handle_completion() gets called 
for _all_ vCPUs, and I see that as an extra action for Dom0 and DomD vCPUs.


>
>> On Arm, vcpu_ioreq_handle_completion()
>> is called with IRQs enabled, so the call is accompanied by the
>> corresponding irq_enable/irq_disable.
>> These unneeded actions could be avoided by using this simple one-line
>> helper...
>>
> The helper may be one line but there is more to the patch than that. I still think you could just walk the array in the helper rather than keeping a running occupancy count.

OK, is the implementation below close to what you propose? If so, I 
will update the helper and drop the nr_servers variable.

bool domain_has_ioreq_server(const struct domain *d)
{
     const struct ioreq_server *s;
     unsigned int id;

     FOR_EACH_IOREQ_SERVER(d, id, s)
         return true;

     return false;
}
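As an aside (not part of the patch), the trade-off under discussion, walking the fixed-size server array versus maintaining a running count, can be modelled with a self-contained toy. The structures and names below are simplified stand-ins I made up for illustration, not the real Xen definitions:

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-ins for the Xen structures (names and size are assumptions). */
#define MAX_NR_IOREQ_SERVERS 8

struct ioreq_server { int dummy; };

struct domain {
    struct ioreq_server *server[MAX_NR_IOREQ_SERVERS];
    unsigned int nr_servers;  /* maintained on server create/destroy */
};

/*
 * Variant A (array walk, as suggested): scan the array and stop at the
 * first non-NULL entry.  No extra state to keep in sync; worst case is a
 * handful of iterations since the array is small.
 */
static bool domain_has_ioreq_server_walk(const struct domain *d)
{
    unsigned int id;

    for ( id = 0; id < MAX_NR_IOREQ_SERVERS; id++ )
        if ( d->server[id] != NULL )
            return true;

    return false;
}

/*
 * Variant B (running count, as in the patch): O(1) on the hot path, but
 * the counter must be updated wherever servers are added or removed.
 */
static bool domain_has_ioreq_server_count(const struct domain *d)
{
    return d->nr_servers != 0;
}
```

Both variants give the same answer; the difference is purely where the cost lands (hot path vs. server creation/teardown bookkeeping).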

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 21:03:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 21:03:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48706.86146 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn6cM-0001v6-Sz; Wed, 09 Dec 2020 21:03:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48706.86146; Wed, 09 Dec 2020 21:03:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn6cM-0001uz-Ps; Wed, 09 Dec 2020 21:03:22 +0000
Received: by outflank-mailman (input) for mailman id 48706;
 Wed, 09 Dec 2020 21:03:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/1wO=FN=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1kn6cK-0001uu-NW
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 21:03:20 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d857475a-3cda-471b-8370-3d20ba7df5e2;
 Wed, 09 Dec 2020 21:03:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d857475a-3cda-471b-8370-3d20ba7df5e2
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607547796;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to; bh=hNj079rC8GX5zfhsejFcRe+fUy18shN+tw2tIy22n68=;
	b=tfGrf6ucHKdubpqgU5nra0rYDWqvB0mIeri1Uqo1FyyEnFIEhEUNXO3+Lnc8bnkJsA4eHL
	0fin5E7QTBnkNC/ygxpJIkzt+GIBeW/S4NBuYI36+B+5Siq0tkX5m9ALwNh2W+2jjnvSyM
	00GfY+TQDq4Z5KL+4g03Ix2//g6i/0Ets34gKCx3ppzQ0g5RXVawvo++SF/MdNVoWrQfzZ
	8Gf3dYxdP1/07TJ0hldpbe0xmrWvZIPsXQaOFBbf6i/QhlWbqeRJc7Gjvfh4swJhUTeEzR
	iYnzzG7c+XHDEGxMxATdw20P+08Ot/jlViISPGpc6V3AgSlZwK53m1G3URv3eQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607547796;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to; bh=hNj079rC8GX5zfhsejFcRe+fUy18shN+tw2tIy22n68=;
	b=RhXZJZRElncI738YJE3Yz7QMi9kllsKSf21gQ7U6c7lGF3UGbJIOoJN6RM8ni0BsqF6DzW
	qQfu/RWtRA5lKeBw==
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org, x86@kernel.org, linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, luto@kernel.org, Juergen Gross <jgross@suse.com>, Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>, Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v2 01/12] x86/xen: use specific Xen pv interrupt entry for MCE
In-Reply-To: <20201120114630.13552-2-jgross@suse.com>
Date: Wed, 09 Dec 2020 22:03:15 +0100
Message-ID: <877dpqlmjw.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain

On Fri, Nov 20 2020 at 12:46, Juergen Gross wrote:

> Xen PV guests don't use IST. For machine check interrupts switch to
> the same model as debug interrupts.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>

Reviewed-by: Thomas Gleixner <tglx@linutronix.de>


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 21:03:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 21:03:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48708.86158 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn6cn-0001ze-5w; Wed, 09 Dec 2020 21:03:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48708.86158; Wed, 09 Dec 2020 21:03:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn6cn-0001zX-2o; Wed, 09 Dec 2020 21:03:49 +0000
Received: by outflank-mailman (input) for mailman id 48708;
 Wed, 09 Dec 2020 21:03:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/1wO=FN=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1kn6cm-0001zP-BB
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 21:03:48 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5508bce5-3da2-469e-899c-6132280d8484;
 Wed, 09 Dec 2020 21:03:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5508bce5-3da2-469e-899c-6132280d8484
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607547826;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=vH7YX6Q8hF6owa+mw8R0RFamxdvfP3I6NKP6vgDZYXk=;
	b=4dfcI2KN69tAMiPB1RlzL+VJZBM9MiERByCXwyO77yIXJSfgKBurVkcVv/1WUd35OqMzx6
	+pOlgB5wRy4xEGOOepk2c4S5oXiKY4juJE+1UQWI7G8pNUuIxv/ltZSAGNiWUOg7wJVXYy
	IcofZHjaAmAOrUcOjze6mBHLF7ALWMq+hQSJxRIdd1OSbrQr+zkX8EBI3vkbyBshl6Gf7w
	zOgK8/ZzJx5gNn58BdUxsRBZ5Or3fntZgYXFITjFwuzr22F9yJiJeJ/hDn0aGoIqvBSc78
	wrVSsSrY2OzpSbZyaCZiiiIIJkxyrAFDFPgcPIaJ/TUYnfLbVTP9HTN0ylNqAA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607547826;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=vH7YX6Q8hF6owa+mw8R0RFamxdvfP3I6NKP6vgDZYXk=;
	b=Y/yO9t+Cv1wex1RlcyfpDs0lGoIkdsSOId+znpJt0SIpjdwY4lFZTBM1lsCIKwK1UJn79A
	foY/KqON7X1HpbBQ==
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org, x86@kernel.org, linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, luto@kernel.org, Juergen Gross <jgross@suse.com>, Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>, Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v2 02/12] x86/xen: use specific Xen pv interrupt entry for DF
In-Reply-To: <20201120114630.13552-3-jgross@suse.com>
References: <20201120114630.13552-1-jgross@suse.com> <20201120114630.13552-3-jgross@suse.com>
Date: Wed, 09 Dec 2020 22:03:45 +0100
Message-ID: <874kkulmj2.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain

On Fri, Nov 20 2020 at 12:46, Juergen Gross wrote:
> Xen PV guests don't use IST. For double fault interrupts switch to
> the same model as NMI.
>
> Correct a typo in a comment while copying it.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>

Reviewed-by: Thomas Gleixner <tglx@linutronix.de>


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 21:04:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 21:04:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48716.86171 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn6do-00028U-G1; Wed, 09 Dec 2020 21:04:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48716.86171; Wed, 09 Dec 2020 21:04:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn6do-00028N-Cn; Wed, 09 Dec 2020 21:04:52 +0000
Received: by outflank-mailman (input) for mailman id 48716;
 Wed, 09 Dec 2020 21:04:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/xMB=FN=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kn6dn-00028F-6c
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 21:04:51 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4c91569a-df7d-44d4-b6bc-5d8da50acc03;
 Wed, 09 Dec 2020 21:04:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4c91569a-df7d-44d4-b6bc-5d8da50acc03
Date: Wed, 9 Dec 2020 13:04:48 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607547890;
	bh=vx+ee/rfnr7/Yh4QApZGzWPzlTiLPkE5TOwt3YIWgO4=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=IZrWvIK9stFArh7tHtTCslCBew0hlTRHa8mrc5XLvbneZv/CsgVgI4jcr5QHv/+Dk
	 VeknHE3ix2YA44a/fdi4xkS9VyfxuguCOPZZSCZIWRCe8pjraje5HB7yDtPJgPkwRo
	 lTD1BetJBip7zrZ3/nN/T30IYDYo8/ffuRMdo4beo+t6noaw9/YmTocptRJQtRtsbf
	 KXa7WlLtQnFhFnCYIdbRJEG/ARU09naJF7M9A5yghIFPrbqXhw2kgdUUbQQnifQyEK
	 Fe9RHtEnRN9Em9C2c2tRh9cig+7V+RhirNwXCDtuQf8ASlA5SUZgBrwjGUilvwAxi1
	 X5dWMGImwnEew==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <bertrand.marquis@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 6/7] xen/arm: Add CP10 exception support to handle
 MVFR
In-Reply-To: <a72a378cd1d4e5c6670980cf4d201d457abe5abc.1607524536.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.21.2012091256290.20986@sstabellini-ThinkPad-T480s>
References: <cover.1607524536.git.bertrand.marquis@arm.com> <a72a378cd1d4e5c6670980cf4d201d457abe5abc.1607524536.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 9 Dec 2020, Bertrand Marquis wrote:
> Add support for cp10 exception decoding to be able to emulate the
> values for MVFR0, MVFR1 and MVFR2 when the TID3 bit of HSR is activated.
> This is required for aarch32 guests accessing MVFR registers using
> vmrs and vmsr instructions.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
> Changes in V2: Rebase
> Changes in V3:
>   Add case for MVFR2, fix typo VMFR <-> MVFR.
> 
> ---
>  xen/arch/arm/traps.c             |  5 ++++
>  xen/arch/arm/vcpreg.c            | 39 +++++++++++++++++++++++++++++++-
>  xen/include/asm-arm/perfc_defn.h |  1 +
>  xen/include/asm-arm/traps.h      |  1 +
>  4 files changed, 45 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 22bd1bd4c6..28d9d64558 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -2097,6 +2097,11 @@ void do_trap_guest_sync(struct cpu_user_regs *regs)
>          perfc_incr(trap_cp14_dbg);
>          do_cp14_dbg(regs, hsr);
>          break;
> +    case HSR_EC_CP10:
> +        GUEST_BUG_ON(!psr_mode_is_32bit(regs));
> +        perfc_incr(trap_cp10);
> +        do_cp10(regs, hsr);
> +        break;
>      case HSR_EC_CP:
>          GUEST_BUG_ON(!psr_mode_is_32bit(regs));
>          perfc_incr(trap_cp);
> diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c
> index d371a1c38c..da4e22a467 100644
> --- a/xen/arch/arm/vcpreg.c
> +++ b/xen/arch/arm/vcpreg.c
> @@ -319,7 +319,7 @@ void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
>      GENERATE_TID3_INFO(ID_ISAR4, isa32, 4)
>      GENERATE_TID3_INFO(ID_ISAR5, isa32, 5)
>      GENERATE_TID3_INFO(ID_ISAR6, isa32, 6)
> -    /* MVFR registers are in cp10 no cp15 */
> +    /* MVFR registers are in cp10 not cp15 */
>  
>      HSR_CPREG32_TID3_RESERVED_CASE:
>          /* Handle all reserved registers as RAZ */
> @@ -638,6 +638,43 @@ void do_cp14_dbg(struct cpu_user_regs *regs, const union hsr hsr)
>      inject_undef_exception(regs, hsr);
>  }
>  
> +void do_cp10(struct cpu_user_regs *regs, const union hsr hsr)
> +{
> +    const struct hsr_cp32 cp32 = hsr.cp32;
> +    int regidx = cp32.reg;
> +
> +    if ( !check_conditional_instr(regs, hsr) )
> +    {
> +        advance_pc(regs, hsr);
> +        return;
> +    }
> +
> +    switch ( hsr.bits & HSR_CP32_REGS_MASK )
> +    {
> +    /*
> +     * HSR.TID3 is trapping access to MVFR register used to identify the
          ^ HCR

> +     * VFP/Simd using VMRS/VMSR instructions.
> +     * Exception encoding is using MRC/MCR standard with the reg field in Crn
> +     * as are declared MVFR0 and MVFR1 in cpregs.h
> +     */
> +    GENERATE_TID3_INFO(MVFR0, mvfr, 0)
> +    GENERATE_TID3_INFO(MVFR1, mvfr, 1)
> +    GENERATE_TID3_INFO(MVFR2, mvfr, 2)
> +
> +    default:
> +        gdprintk(XENLOG_ERR,
> +                 "%s p10, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
> +                 cp32.read ? "mrc" : "mcr",
> +                 cp32.op1, cp32.reg, cp32.crn, cp32.crm, cp32.op2, regs->pc);
> +        gdprintk(XENLOG_ERR, "unhandled 32-bit CP10 access %#x\n",
> +                 hsr.bits & HSR_CP32_REGS_MASK);
> +        inject_undef_exception(regs, hsr);
> +        return;

I take it we are sure there are no other cp10 registers of interest?


> +    }
> +
> +    advance_pc(regs, hsr);
> +}
> +
>  void do_cp(struct cpu_user_regs *regs, const union hsr hsr)
>  {
>      const struct hsr_cp cp = hsr.cp;
> diff --git a/xen/include/asm-arm/perfc_defn.h b/xen/include/asm-arm/perfc_defn.h
> index 6a83185163..31f071222b 100644
> --- a/xen/include/asm-arm/perfc_defn.h
> +++ b/xen/include/asm-arm/perfc_defn.h
> @@ -11,6 +11,7 @@ PERFCOUNTER(trap_cp15_64,  "trap: cp15 64-bit access")
>  PERFCOUNTER(trap_cp14_32,  "trap: cp14 32-bit access")
>  PERFCOUNTER(trap_cp14_64,  "trap: cp14 64-bit access")
>  PERFCOUNTER(trap_cp14_dbg, "trap: cp14 dbg access")
> +PERFCOUNTER(trap_cp10,     "trap: cp10 access")
>  PERFCOUNTER(trap_cp,       "trap: cp access")
>  PERFCOUNTER(trap_smc32,    "trap: 32-bit smc")
>  PERFCOUNTER(trap_hvc32,    "trap: 32-bit hvc")
> diff --git a/xen/include/asm-arm/traps.h b/xen/include/asm-arm/traps.h
> index 997c37884e..c4a3d0fb1b 100644
> --- a/xen/include/asm-arm/traps.h
> +++ b/xen/include/asm-arm/traps.h
> @@ -62,6 +62,7 @@ void do_cp15_64(struct cpu_user_regs *regs, const union hsr hsr);
>  void do_cp14_32(struct cpu_user_regs *regs, const union hsr hsr);
>  void do_cp14_64(struct cpu_user_regs *regs, const union hsr hsr);
>  void do_cp14_dbg(struct cpu_user_regs *regs, const union hsr hsr);
> +void do_cp10(struct cpu_user_regs *regs, const union hsr hsr);
>  void do_cp(struct cpu_user_regs *regs, const union hsr hsr);
>  
>  /* SMCCC handling */
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 21:05:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 21:05:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48719.86183 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn6e4-0002Ey-Tq; Wed, 09 Dec 2020 21:05:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48719.86183; Wed, 09 Dec 2020 21:05:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn6e4-0002Er-Pk; Wed, 09 Dec 2020 21:05:08 +0000
Received: by outflank-mailman (input) for mailman id 48719;
 Wed, 09 Dec 2020 21:05:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GmJG=FN=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kn6e3-0002EX-8q
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 21:05:07 +0000
Received: from mail-lj1-x243.google.com (unknown [2a00:1450:4864:20::243])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 75f87b02-3e70-4363-b5f8-743be902e657;
 Wed, 09 Dec 2020 21:05:05 +0000 (UTC)
Received: by mail-lj1-x243.google.com with SMTP id x23so4114597lji.7
 for <xen-devel@lists.xenproject.org>; Wed, 09 Dec 2020 13:05:05 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id x67sm285152lff.82.2020.12.09.13.05.03
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 09 Dec 2020 13:05:03 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75f87b02-3e70-4363-b5f8-743be902e657
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=ziK8rAJAHWHZ7JGTlXpNI/0LtdEcTIT2NfNjvM79Uvk=;
        b=Czp9zXTlJlGo4XB4ISmYDMsMym/4rNLOnM7KUwJuw7Po4kj2287MTK4/rXMaZ54Utt
         lSNHocyH45GAU9hfrbfxMsAISNXW45b631qcvFhlCHAozKkOOj9ScjJRy0JjCEwWFW6b
         8qWMZp4p93rjwBxCZ16VX5vUZlNPAhN+LsDn9o0AwxsZr2GTnzZusN5PGp9wgO4egXUt
         JFdNxJ6te9cVmk5LL00YPnl+yopCkgpRT1MKrqMjX+SeVdV2pohLK/qUC2hQodmBeEak
         QJJRxPFig4zEDW706YVJf7LrLpsT7H8Yb2dNzShHG7m4ffDF42DDzezMGQmW/BT5Zbuw
         K4fw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=ziK8rAJAHWHZ7JGTlXpNI/0LtdEcTIT2NfNjvM79Uvk=;
        b=CO0fIDv8oEDSFe2SN1nrywDjgg1S760tetQC88PNhRluTJV7xq+EoqhmYcyYAIGv4g
         KUrh3gV97lRbmpPaVIFv9YdBWr4DXN836dyqlHOg0YQFkvkn5FkdJ7xhtyZPWY3J6WrN
         fedTBP2lA7jqSvY8XGEO8eMSkK0uclDH2xpUspqB4PSwrbx9EjCSPwz6r6Da9eqiSQOj
         BD3CtfLwTxr+VhmPL7CI/iKp9ZhHkElbMXGmUYTXHIs2kDSWGnYkqwC6J+EEH+DTmewP
         9ToZTmw7xBGBpeqmyrslOHV/GvL5hhDySHWZfOGpVFBF8dOcKOEAhy6K4vnBFfWcEqis
         eZzA==
X-Gm-Message-State: AOAM533LmiHvn/pshhSTZWmT69iw8VrvUE+W+b7FqhNerOvj+2dY6/6X
	ifi3juNck+bKGGjmUKxsL5E9MPjWMyCuYA==
X-Google-Smtp-Source: ABdhPJyeG3Dq5+gTT8OTtJ47P4+DJRpzrLCyZP9Wj+/4bgsIjGXdBN5F8g7ySE+XMqT7dgx4aLub0Q==
X-Received: by 2002:a05:651c:1b6:: with SMTP id c22mr1774381ljn.365.1607547904498;
        Wed, 09 Dec 2020 13:05:04 -0800 (PST)
Subject: Re: [PATCH V3 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
To: Julien Grall <julien@xen.org>, paul@xen.org
Cc: 'Jan Beulich' <jbeulich@suse.com>,
 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Ian Jackson' <iwj@xenproject.org>, 'Wei Liu' <wl@xen.org>,
 'Julien Grall' <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-18-git-send-email-olekstysh@gmail.com>
 <3bb4c3b5-a46a-ba31-292f-5c6ba49fa9be@suse.com>
 <6026b7f3-ae6e-f98f-be65-27d7f729a37f@gmail.com>
 <18bfd9b1-3e6a-8119-efd0-c82ad7ae681d@gmail.com>
 <0d6c01d6cd9a$666326c0$33297440$@xen.org>
 <57bfc007-e400-6777-0075-827daa8acf0e@gmail.com>
 <0d7201d6ce09$e13dce80$a3b96b80$@xen.org>
 <84d7238d-0ec1-acdd-6cea-db78aba6f3d7@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <dfe1f85a-6293-5c5e-ad33-4367f83a5c60@gmail.com>
Date: Wed, 9 Dec 2020 23:05:02 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <84d7238d-0ec1-acdd-6cea-db78aba6f3d7@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 09.12.20 20:58, Julien Grall wrote:
> Hi Oleksandr and Paul,

Hi Julien, Paul.


>
> Sorry for jumping late in the conversation.
>
> On 09/12/2020 09:01, Paul Durrant wrote:
>>> -----Original Message-----
>>> From: Oleksandr <olekstysh@gmail.com>
>>> Sent: 08 December 2020 20:17
>>> To: paul@xen.org
>>> Cc: 'Jan Beulich' <jbeulich@suse.com>; 'Oleksandr Tyshchenko' 
>>> <oleksandr_tyshchenko@epam.com>;
>>> 'Stefano Stabellini' <sstabellini@kernel.org>; 'Julien Grall' 
>>> <julien@xen.org>; 'Volodymyr Babchuk'
>>> <Volodymyr_Babchuk@epam.com>; 'Andrew Cooper' 
>>> <andrew.cooper3@citrix.com>; 'George Dunlap'
>>> <george.dunlap@citrix.com>; 'Ian Jackson' <iwj@xenproject.org>; 'Wei 
>>> Liu' <wl@xen.org>; 'Julien Grall'
>>> <julien.grall@arm.com>; xen-devel@lists.xenproject.org
>>> Subject: Re: [PATCH V3 17/23] xen/ioreq: Introduce 
>>> domain_has_ioreq_server()
>>>
>>>
>>> On 08.12.20 21:43, Paul Durrant wrote:
>>>
>>> Hi Paul
>>>
>>>>> -----Original Message-----
>>>>> From: Oleksandr <olekstysh@gmail.com>
>>>>> Sent: 08 December 2020 16:57
>>>>> To: Paul Durrant <paul@xen.org>
>>>>> Cc: Jan Beulich <jbeulich@suse.com>; Oleksandr Tyshchenko 
>>>>> <oleksandr_tyshchenko@epam.com>; Stefano
>>>>> Stabellini <sstabellini@kernel.org>; Julien Grall 
>>>>> <julien@xen.org>; Volodymyr Babchuk
>>>>> <Volodymyr_Babchuk@epam.com>; Andrew Cooper 
>>>>> <andrew.cooper3@citrix.com>; George Dunlap
>>>>> <george.dunlap@citrix.com>; Ian Jackson <iwj@xenproject.org>; Wei 
>>>>> Liu <wl@xen.org>; Julien Grall
>>>>> <julien.grall@arm.com>; xen-devel@lists.xenproject.org
>>>>> Subject: Re: [PATCH V3 17/23] xen/ioreq: Introduce 
>>>>> domain_has_ioreq_server()
>>>>>
>>>>>
>>>>> Hi Paul.
>>>>>
>>>>>
>>>>> On 08.12.20 17:33, Oleksandr wrote:
>>>>>> On 08.12.20 17:11, Jan Beulich wrote:
>>>>>>
>>>>>> Hi Jan
>>>>>>
>>>>>>> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>>>>>>>> --- a/xen/include/xen/ioreq.h
>>>>>>>> +++ b/xen/include/xen/ioreq.h
>>>>>>>> @@ -55,6 +55,20 @@ struct ioreq_server {
>>>>>>>>         uint8_t                bufioreq_handling;
>>>>>>>>     };
>>>>>>>>     +/*
>>>>>>>> + * This should only be used when d == current->domain and it's 
>>>>>>>> not
>>>>>>>> paused,
>>>>>>> Is the "not paused" part really relevant here? Besides it being 
>>>>>>> rare
>>>>>>> that the current domain would be paused (if so, it's in the process
>>>>>>> of having all its vCPU-s scheduled out), does this matter at all?
>>>>>> No, it isn't relevant, I will drop it.
>>>>>>
>>>>>>
>>>>>>> Apart from this the patch looks okay to me, but I'm not sure it
>>>>>>> addresses Paul's concerns. Iirc he had suggested to switch back to
>>>>>>> a list if doing a swipe over the entire array is too expensive in
>>>>>>> this specific case.
>>>>>> We would like to avoid doing any extra actions in
>>>>>> leave_hypervisor_to_guest() if possible.
>>>>>> But not only there: the logic that checks/sets the
>>>>>> mapcache_invalidation variable could be avoided if a domain doesn't
>>>>>> use an IOREQ server...
>>>>>
>>>>> Are you OK with this patch (common part of it)?
>>>> How much of a performance benefit is this? The array is small so 
>>>> simply counting the non-NULL
>>> entries should be pretty quick.
>>> I didn't perform performance measurements on how much this call 
>>> consumes.
>>> In our system we run three domains. The emulator is in DomD only, so I
>>> would like to avoid calling vcpu_ioreq_handle_completion() for every
>>> Dom0/DomU vCPU if there is no real need to do it.
>>
>> This is not relevant to the domain that the emulator is running in; 
>> it's concerning the domains which the emulator is servicing. How many 
>> of those are there?
>
> AFAICT, the maximum number of IOREQ servers is 8 today.
>
>>
>>> On Arm, vcpu_ioreq_handle_completion()
>>> is called with IRQs enabled, so the call is accompanied by the
>>> corresponding irq_enable/irq_disable.
>>> These unneeded actions could be avoided by using this simple one-line
>>> helper...
>>>
>>
>> The helper may be one line but there is more to the patch than that. 
>> I still think you could just walk the array in the helper rather than 
>> keeping a running occupancy count.
>
> Right, the concern here is that this function will be called in a hotpath 
> (every time we are re-entering the guest). Unlike on x86, 
> the entry/exit code is really small, so any additional code will have 
> an impact on the overall performance.
+1


>
>
> That said, the IOREQ code is a tech preview for Arm. So I would be 
> fine going with Paul's approach until we have a better understanding 
> of the performance of virtio/IOREQ.

I am fine with Paul's approach for now (I only need a confirmation that 
I got it correctly).


>
>
> I am going to throw out some more thoughts about the optimization here.
> The patch focuses on the performance impact when IOREQ is built in but
> not used.
That's true. What I would like to add here is that the helper also avoids
unnecessary vcpu_ioreq_handle_completion() calls, as well as another
unnecessary action (the mapcache handling logic, although that is not a
hotpath) in a subsequent patch when IOREQ is used.


> I think we can do further optimization (which may supersede this one).
>
> get_pending_vcpu() (called from handle_hvm_io_completion()) is overly 
> expensive in particular if you have no I/O forwarded to an IOREQ 
> server. Entry to the hypervisor can happen for many reasons 
> (interrupts, system registers emulation, I/O emulation...) and the I/O 
> forwarded should be a small subset.
>
> Ideally, handle_hvm_io_completion() should be a NOP (at most a few 
> instructions) if there is nothing to do. Maybe we want to introduce a 
> per-vCPU flag indicating if an I/O has been forwarded to an IOREQ server.
>
> This would also allow us to bypass most of the function if there is
> nothing to do.
>
> Any thoughts?
>
> In any case this is more forward-looking than a request for the current
> series. What matters to me is that we have a functional (not
> necessarily optimized) version of IOREQ in Xen 4.15. This would be a
> great step towards using VirtIO on Xen.

Completely agree. The current series is quite big, and if we try to make
it perfect I am afraid we won't have it even in Xen 4.16. As for the
proposed optimization, I think it is worth considering; I will mention it
in the cover letter for the series among other possible improvements,
such as buffered requests, etc.


>
>
> Cheers,
>
-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 21:05:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 21:05:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48726.86195 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn6ep-0002Ni-8N; Wed, 09 Dec 2020 21:05:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48726.86195; Wed, 09 Dec 2020 21:05:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn6ep-0002Na-4A; Wed, 09 Dec 2020 21:05:55 +0000
Received: by outflank-mailman (input) for mailman id 48726;
 Wed, 09 Dec 2020 21:05:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/xMB=FN=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kn6en-0002NO-Ko
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 21:05:53 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4e92b115-7093-41fe-98e9-a5644f508a0e;
 Wed, 09 Dec 2020 21:05:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4e92b115-7093-41fe-98e9-a5644f508a0e
Date: Wed, 9 Dec 2020 13:05:51 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607547952;
	bh=ClpgMRd9x5q58/di1AJQe568A42ut6G9v5HuiN5QT6g=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=K3OhrhRdI0SqXeK+3NwwiKVrIFYBOj3WG0gaAmLL5BHYes5Fz73IpkI/v2USsobiG
	 SjTVM/eh4qFnK5yjczOAR+oBcriwZtCPKoK+q9X0l+ua0s4mYDwGLQrnNMP1Wj39hZ
	 AfcoKp0oq9SQg3iPWi7U1060y2YDCXn6GlbQGG3EAueco8TaHRBQn/IUdScV8qTjtu
	 gipaOJW9wBdNKLxmDAWl8qLczFByRJv2q3+Etx/9luf7HsbrqrgteQHuh2+w7p81DK
	 64jNSxj9+RayE1bode/FjvJAyF5w3V9ZsBL98Vn6C/7x6VyFWUodF/LQTQRIhu+PKD
	 Ydl5hh95SnI/w==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <bertrand.marquis@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 1/7] xen/arm: Add ID registers and complete cpuinfo
In-Reply-To: <aab713989bec4dc843bd513c03b305c83028851b.1607524536.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.21.2012091102520.20986@sstabellini-ThinkPad-T480s>
References: <cover.1607524536.git.bertrand.marquis@arm.com> <aab713989bec4dc843bd513c03b305c83028851b.1607524536.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 9 Dec 2020, Bertrand Marquis wrote:
> Add definitions and entries in cpuinfo for the ID registers introduced
> in newer versions of the Arm Architecture reference manual:
> - ID_PFR2: processor feature register 2
> - ID_DFR1: debug feature register 1
> - ID_MMFR4 and ID_MMFR5: Memory model feature registers 4 and 5
> - ID_ISA6: ISA Feature register 6
> Add more bitfield definitions in PFR fields of cpuinfo.
> Add MVFR2 register definition for aarch32.
> Add mvfr values in cpuinfo.
> Add some register definitions for arm64 in sysregs as some are not
> always known by compilers.
> Initialize the new values added in cpuinfo in identify_cpu during init.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Changes in V2:
>   Fix dbg32 table size and add proper initialisation of the second entry
>   of the table by reading ID_DFR1 register.
> Changes in V3:
>   Fix typo in commit title
>   Add MVFR2 definition and handling on aarch32 and remove specific case
>   for mvfr field in cpuinfo (now the same on arm64 and arm32).
>   Add MMFR4 definition if not known by the compiler.
> 
> ---
>  xen/arch/arm/cpufeature.c           | 18 ++++++++++
>  xen/include/asm-arm/arm64/sysregs.h | 28 +++++++++++++++
>  xen/include/asm-arm/cpregs.h        | 12 +++++++
>  xen/include/asm-arm/cpufeature.h    | 56 ++++++++++++++++++++++++-----
>  4 files changed, 105 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
> index 44126dbf07..bc7ee5ac95 100644
> --- a/xen/arch/arm/cpufeature.c
> +++ b/xen/arch/arm/cpufeature.c
> @@ -114,15 +114,20 @@ void identify_cpu(struct cpuinfo_arm *c)
>  
>          c->mm64.bits[0]  = READ_SYSREG64(ID_AA64MMFR0_EL1);
>          c->mm64.bits[1]  = READ_SYSREG64(ID_AA64MMFR1_EL1);
> +        c->mm64.bits[2]  = READ_SYSREG64(ID_AA64MMFR2_EL1);
>  
>          c->isa64.bits[0] = READ_SYSREG64(ID_AA64ISAR0_EL1);
>          c->isa64.bits[1] = READ_SYSREG64(ID_AA64ISAR1_EL1);
> +
> +        c->zfr64.bits[0] = READ_SYSREG64(ID_AA64ZFR0_EL1);
>  #endif
>  
>          c->pfr32.bits[0] = READ_SYSREG32(ID_PFR0_EL1);
>          c->pfr32.bits[1] = READ_SYSREG32(ID_PFR1_EL1);
> +        c->pfr32.bits[2] = READ_SYSREG32(ID_PFR2_EL1);
>  
>          c->dbg32.bits[0] = READ_SYSREG32(ID_DFR0_EL1);
> +        c->dbg32.bits[1] = READ_SYSREG32(ID_DFR1_EL1);
>  
>          c->aux32.bits[0] = READ_SYSREG32(ID_AFR0_EL1);
>  
> @@ -130,6 +135,8 @@ void identify_cpu(struct cpuinfo_arm *c)
>          c->mm32.bits[1]  = READ_SYSREG32(ID_MMFR1_EL1);
>          c->mm32.bits[2]  = READ_SYSREG32(ID_MMFR2_EL1);
>          c->mm32.bits[3]  = READ_SYSREG32(ID_MMFR3_EL1);
> +        c->mm32.bits[4]  = READ_SYSREG32(ID_MMFR4_EL1);
> +        c->mm32.bits[5]  = READ_SYSREG32(ID_MMFR5_EL1);
>  
>          c->isa32.bits[0] = READ_SYSREG32(ID_ISAR0_EL1);
>          c->isa32.bits[1] = READ_SYSREG32(ID_ISAR1_EL1);
> @@ -137,6 +144,17 @@ void identify_cpu(struct cpuinfo_arm *c)
>          c->isa32.bits[3] = READ_SYSREG32(ID_ISAR3_EL1);
>          c->isa32.bits[4] = READ_SYSREG32(ID_ISAR4_EL1);
>          c->isa32.bits[5] = READ_SYSREG32(ID_ISAR5_EL1);
> +        c->isa32.bits[6] = READ_SYSREG32(ID_ISAR6_EL1);
> +
> +#ifdef CONFIG_ARM_64
> +        c->mvfr.bits[0] = READ_SYSREG64(MVFR0_EL1);
> +        c->mvfr.bits[1] = READ_SYSREG64(MVFR1_EL1);
> +        c->mvfr.bits[2] = READ_SYSREG64(MVFR2_EL1);
> +#else
> +        c->mvfr.bits[0] = READ_CP32(MVFR0);
> +        c->mvfr.bits[1] = READ_CP32(MVFR1);
> +        c->mvfr.bits[2] = READ_CP32(MVFR2);
> +#endif
>  }
>  
>  /*
> diff --git a/xen/include/asm-arm/arm64/sysregs.h b/xen/include/asm-arm/arm64/sysregs.h
> index c60029d38f..077fd95fb7 100644
> --- a/xen/include/asm-arm/arm64/sysregs.h
> +++ b/xen/include/asm-arm/arm64/sysregs.h
> @@ -57,6 +57,34 @@
>  #define ICH_AP1R2_EL2             __AP1Rx_EL2(2)
>  #define ICH_AP1R3_EL2             __AP1Rx_EL2(3)
>  
> +/*
> + * Define ID coprocessor registers if they are not
> + * already defined by the compiler.
> + *
> + * Values picked from linux kernel
> + */
> +#ifndef ID_AA64MMFR2_EL1
> +#define ID_AA64MMFR2_EL1            S3_0_C0_C7_2
> +#endif
> +#ifndef ID_PFR2_EL1
> +#define ID_PFR2_EL1                 S3_0_C0_C3_4
> +#endif
> +#ifndef ID_MMFR4_EL1
> +#define ID_MMFR4_EL1                S3_0_C0_C2_6
> +#endif
> +#ifndef ID_MMFR5_EL1
> +#define ID_MMFR5_EL1                S3_0_C0_C3_6
> +#endif
> +#ifndef ID_ISAR6_EL1
> +#define ID_ISAR6_EL1                S3_0_C0_C2_7
> +#endif
> +#ifndef ID_AA64ZFR0_EL1
> +#define ID_AA64ZFR0_EL1             S3_0_C0_C4_4
> +#endif
> +#ifndef ID_DFR1_EL1
> +#define ID_DFR1_EL1                 S3_0_C0_C3_5
> +#endif
> +
>  /* Access to system registers */
>  
>  #define READ_SYSREG32(name) ((uint32_t)READ_SYSREG64(name))
> diff --git a/xen/include/asm-arm/cpregs.h b/xen/include/asm-arm/cpregs.h
> index 8fd344146e..2690ddeb7a 100644
> --- a/xen/include/asm-arm/cpregs.h
> +++ b/xen/include/asm-arm/cpregs.h
> @@ -63,6 +63,8 @@
>  #define FPSID           p10,7,c0,c0,0   /* Floating-Point System ID Register */
>  #define FPSCR           p10,7,c1,c0,0   /* Floating-Point Status and Control Register */
>  #define MVFR0           p10,7,c7,c0,0   /* Media and VFP Feature Register 0 */
> +#define MVFR1           p10,7,c6,c0,0   /* Media and VFP Feature Register 1 */
> +#define MVFR2           p10,7,c5,c0,0   /* Media and VFP Feature Register 2 */
>  #define FPEXC           p10,7,c8,c0,0   /* Floating-Point Exception Control Register */
>  #define FPINST          p10,7,c9,c0,0   /* Floating-Point Instruction Register */
>  #define FPINST2         p10,7,c10,c0,0  /* Floating-point Instruction Register 2 */
> @@ -108,18 +110,23 @@
>  #define MPIDR           p15,0,c0,c0,5   /* Multiprocessor Affinity Register */
>  #define ID_PFR0         p15,0,c0,c1,0   /* Processor Feature Register 0 */
>  #define ID_PFR1         p15,0,c0,c1,1   /* Processor Feature Register 1 */
> +#define ID_PFR2         p15,0,c0,c3,4   /* Processor Feature Register 2 */
>  #define ID_DFR0         p15,0,c0,c1,2   /* Debug Feature Register 0 */
> +#define ID_DFR1         p15,0,c0,c3,5   /* Debug Feature Register 1 */
>  #define ID_AFR0         p15,0,c0,c1,3   /* Auxiliary Feature Register 0 */
>  #define ID_MMFR0        p15,0,c0,c1,4   /* Memory Model Feature Register 0 */
>  #define ID_MMFR1        p15,0,c0,c1,5   /* Memory Model Feature Register 1 */
>  #define ID_MMFR2        p15,0,c0,c1,6   /* Memory Model Feature Register 2 */
>  #define ID_MMFR3        p15,0,c0,c1,7   /* Memory Model Feature Register 3 */
> +#define ID_MMFR4        p15,0,c0,c2,6   /* Memory Model Feature Register 4 */
> +#define ID_MMFR5        p15,0,c0,c3,6   /* Memory Model Feature Register 5 */
>  #define ID_ISAR0        p15,0,c0,c2,0   /* ISA Feature Register 0 */
>  #define ID_ISAR1        p15,0,c0,c2,1   /* ISA Feature Register 1 */
>  #define ID_ISAR2        p15,0,c0,c2,2   /* ISA Feature Register 2 */
>  #define ID_ISAR3        p15,0,c0,c2,3   /* ISA Feature Register 3 */
>  #define ID_ISAR4        p15,0,c0,c2,4   /* ISA Feature Register 4 */
>  #define ID_ISAR5        p15,0,c0,c2,5   /* ISA Feature Register 5 */
> +#define ID_ISAR6        p15,0,c0,c2,7   /* ISA Feature Register 6 */
>  #define CCSIDR          p15,1,c0,c0,0   /* Cache Size ID Registers */
>  #define CLIDR           p15,1,c0,c0,1   /* Cache Level ID Register */
>  #define CSSELR          p15,2,c0,c0,0   /* Cache Size Selection Register */
> @@ -312,18 +319,23 @@
>  #define HSTR_EL2                HSTR
>  #define ID_AFR0_EL1             ID_AFR0
>  #define ID_DFR0_EL1             ID_DFR0
> +#define ID_DFR1_EL1             ID_DFR1
>  #define ID_ISAR0_EL1            ID_ISAR0
>  #define ID_ISAR1_EL1            ID_ISAR1
>  #define ID_ISAR2_EL1            ID_ISAR2
>  #define ID_ISAR3_EL1            ID_ISAR3
>  #define ID_ISAR4_EL1            ID_ISAR4
>  #define ID_ISAR5_EL1            ID_ISAR5
> +#define ID_ISAR6_EL1            ID_ISAR6
>  #define ID_MMFR0_EL1            ID_MMFR0
>  #define ID_MMFR1_EL1            ID_MMFR1
>  #define ID_MMFR2_EL1            ID_MMFR2
>  #define ID_MMFR3_EL1            ID_MMFR3
> +#define ID_MMFR4_EL1            ID_MMFR4
> +#define ID_MMFR5_EL1            ID_MMFR5
>  #define ID_PFR0_EL1             ID_PFR0
>  #define ID_PFR1_EL1             ID_PFR1
> +#define ID_PFR2_EL1             ID_PFR2
>  #define IFSR32_EL2              IFSR
>  #define MDCR_EL2                HDCR
>  #define MIDR_EL1                MIDR
> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
> index c7b5052992..6cf83d775b 100644
> --- a/xen/include/asm-arm/cpufeature.h
> +++ b/xen/include/asm-arm/cpufeature.h
> @@ -148,6 +148,7 @@ struct cpuinfo_arm {
>      union {
>          uint64_t bits[2];
>          struct {
> +            /* PFR0 */
>              unsigned long el0:4;
>              unsigned long el1:4;
>              unsigned long el2:4;
> @@ -155,9 +156,23 @@ struct cpuinfo_arm {
>              unsigned long fp:4;   /* Floating Point */
>              unsigned long simd:4; /* Advanced SIMD */
>              unsigned long gic:4;  /* GIC support */
> -            unsigned long __res0:28;
> +            unsigned long ras:4;
> +            unsigned long sve:4;
> +            unsigned long sel2:4;
> +            unsigned long mpam:4;
> +            unsigned long amu:4;
> +            unsigned long dit:4;
> +            unsigned long __res0:4;
>              unsigned long csv2:4;
> -            unsigned long __res1:4;
> +            unsigned long csv3:4;
> +
> +            /* PFR1 */
> +            unsigned long bt:4;
> +            unsigned long ssbs:4;
> +            unsigned long mte:4;
> +            unsigned long ras_frac:4;
> +            unsigned long mpam_frac:4;
> +            unsigned long __res1:44;
>          };
>      } pfr64;
>  
> @@ -170,7 +185,7 @@ struct cpuinfo_arm {
>      } aux64;
>  
>      union {
> -        uint64_t bits[2];
> +        uint64_t bits[3];
>          struct {
>              unsigned long pa_range:4;
>              unsigned long asid_bits:4;
> @@ -190,6 +205,8 @@ struct cpuinfo_arm {
>              unsigned long pan:4;
>              unsigned long __res1:8;
>              unsigned long __res2:32;
> +
> +            unsigned long __res3:64;
>          };
>      } mm64;
>  
> @@ -197,6 +214,10 @@ struct cpuinfo_arm {
>          uint64_t bits[2];
>      } isa64;
>  
> +    struct {
> +        uint64_t bits[1];
> +    } zfr64;
> +
>  #endif
>  
>      /*
> @@ -204,25 +225,38 @@ struct cpuinfo_arm {
>       * when running in 32-bit mode.
>       */
>      union {
> -        uint32_t bits[2];
> +        uint32_t bits[3];
>          struct {
> +            /* PFR0 */
>              unsigned long arm:4;
>              unsigned long thumb:4;
>              unsigned long jazelle:4;
>              unsigned long thumbee:4;
> -            unsigned long __res0:16;
> +            unsigned long csv2:4;
> +            unsigned long amu:4;
> +            unsigned long dit:4;
> +            unsigned long ras:4;
>  
> +            /* PFR1 */
>              unsigned long progmodel:4;
>              unsigned long security:4;
>              unsigned long mprofile:4;
>              unsigned long virt:4;
>              unsigned long gentimer:4;
> -            unsigned long __res1:12;
> +            unsigned long sec_frac:4;
> +            unsigned long virt_frac:4;
> +            unsigned long gic:4;
> +
> +            /* PFR2 */
> +            unsigned long csv3:4;
> +            unsigned long ssbs:4;
> +            unsigned long ras_frac:4;
> +            unsigned long __res2:20;
>          };
>      } pfr32;
>  
>      struct {
> -        uint32_t bits[1];
> +        uint32_t bits[2];
>      } dbg32;
>  
>      struct {
> @@ -230,12 +264,16 @@ struct cpuinfo_arm {
>      } aux32;
>  
>      struct {
> -        uint32_t bits[4];
> +        uint32_t bits[6];
>      } mm32;
>  
>      struct {
> -        uint32_t bits[6];
> +        uint32_t bits[7];
>      } isa32;
> +
> +    struct {
> +        uint64_t bits[3];
> +    } mvfr;
>  };
>  
>  extern struct cpuinfo_arm boot_cpu_data;
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 21:06:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 21:06:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48727.86207 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn6ey-0002Rw-HP; Wed, 09 Dec 2020 21:06:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48727.86207; Wed, 09 Dec 2020 21:06:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn6ey-0002Ro-DM; Wed, 09 Dec 2020 21:06:04 +0000
Received: by outflank-mailman (input) for mailman id 48727;
 Wed, 09 Dec 2020 21:06:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/xMB=FN=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kn6ex-0002RR-64
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 21:06:03 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cd6ef676-00f7-438e-8dfb-a09928220085;
 Wed, 09 Dec 2020 21:06:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd6ef676-00f7-438e-8dfb-a09928220085
Date: Wed, 9 Dec 2020 13:06:00 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607547961;
	bh=GlyZFgHGu1SY9FDI6V3+Wo8t237osQArufNvu+ZtlI0=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=U1Y4LKqplDar4n218LbdqX5D2fykik0LY12dBSvqfqkP2l0O30J0A3df1nkiG1uIq
	 utLKA3A+gA2x40c/cN+6aG70qaotJSjE8SM3K8gKop241IKzsp0PrT5T1rpdy1PK2F
	 9ypcFKvRNQCcc4NDz/Erw3S0ssRvL+1sASJ70xWw7zyL3VQYkfcwH5GwxD00QBUCh0
	 l7FAAwxmP5fsVzepYif9mpJHXyu6SCaUhZ+htE+v2iIc8vE1HRLQfaJIhnoXNFexDx
	 y/uqeECwW23oifs/khw1BUgLRW2vtk0Er6KcUVAuqL7QHpePrYRfSU8Dl9KecgzEno
	 dN+b3gM7P6WOQ==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <bertrand.marquis@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 2/7] xen/arm: Add arm64 ID registers definitions
In-Reply-To: <96a970e5e5d2f1b1bd0e50327857de6a8c8441f7.1607524536.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.21.2012091112090.20986@sstabellini-ThinkPad-T480s>
References: <cover.1607524536.git.bertrand.marquis@arm.com> <96a970e5e5d2f1b1bd0e50327857de6a8c8441f7.1607524536.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 9 Dec 2020, Bertrand Marquis wrote:
> Add coprocessor register definitions for all ID registers trapped
> through the TID3 bit of HCR.
> Those are the ones that will be emulated in Xen to only publish to guests
> the features that are supported by Xen and that are accessible to
> guests.
> Also define a case to catch all reserved registers that should be
> handled as RAZ.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Changes in V2: Rebase
> Changes in V3:
>   Add case definition for reserved registers.
> 
> ---
>  xen/include/asm-arm/arm64/hsr.h | 66 +++++++++++++++++++++++++++++++++
>  1 file changed, 66 insertions(+)
> 
> diff --git a/xen/include/asm-arm/arm64/hsr.h b/xen/include/asm-arm/arm64/hsr.h
> index ca931dd2fe..ffe0f0007e 100644
> --- a/xen/include/asm-arm/arm64/hsr.h
> +++ b/xen/include/asm-arm/arm64/hsr.h
> @@ -110,6 +110,72 @@
>  #define HSR_SYSREG_CNTP_CTL_EL0   HSR_SYSREG(3,3,c14,c2,1)
>  #define HSR_SYSREG_CNTP_CVAL_EL0  HSR_SYSREG(3,3,c14,c2,2)
>  
> +/* Those registers are used when HCR_EL2.TID3 is set */
> +#define HSR_SYSREG_ID_PFR0_EL1    HSR_SYSREG(3,0,c0,c1,0)
> +#define HSR_SYSREG_ID_PFR1_EL1    HSR_SYSREG(3,0,c0,c1,1)
> +#define HSR_SYSREG_ID_PFR2_EL1    HSR_SYSREG(3,0,c0,c3,4)
> +#define HSR_SYSREG_ID_DFR0_EL1    HSR_SYSREG(3,0,c0,c1,2)
> +#define HSR_SYSREG_ID_DFR1_EL1    HSR_SYSREG(3,0,c0,c3,5)
> +#define HSR_SYSREG_ID_AFR0_EL1    HSR_SYSREG(3,0,c0,c1,3)
> +#define HSR_SYSREG_ID_MMFR0_EL1   HSR_SYSREG(3,0,c0,c1,4)
> +#define HSR_SYSREG_ID_MMFR1_EL1   HSR_SYSREG(3,0,c0,c1,5)
> +#define HSR_SYSREG_ID_MMFR2_EL1   HSR_SYSREG(3,0,c0,c1,6)
> +#define HSR_SYSREG_ID_MMFR3_EL1   HSR_SYSREG(3,0,c0,c1,7)
> +#define HSR_SYSREG_ID_MMFR4_EL1   HSR_SYSREG(3,0,c0,c2,6)
> +#define HSR_SYSREG_ID_MMFR5_EL1   HSR_SYSREG(3,0,c0,c3,6)
> +#define HSR_SYSREG_ID_ISAR0_EL1   HSR_SYSREG(3,0,c0,c2,0)
> +#define HSR_SYSREG_ID_ISAR1_EL1   HSR_SYSREG(3,0,c0,c2,1)
> +#define HSR_SYSREG_ID_ISAR2_EL1   HSR_SYSREG(3,0,c0,c2,2)
> +#define HSR_SYSREG_ID_ISAR3_EL1   HSR_SYSREG(3,0,c0,c2,3)
> +#define HSR_SYSREG_ID_ISAR4_EL1   HSR_SYSREG(3,0,c0,c2,4)
> +#define HSR_SYSREG_ID_ISAR5_EL1   HSR_SYSREG(3,0,c0,c2,5)
> +#define HSR_SYSREG_ID_ISAR6_EL1   HSR_SYSREG(3,0,c0,c2,7)
> +#define HSR_SYSREG_MVFR0_EL1      HSR_SYSREG(3,0,c0,c3,0)
> +#define HSR_SYSREG_MVFR1_EL1      HSR_SYSREG(3,0,c0,c3,1)
> +#define HSR_SYSREG_MVFR2_EL1      HSR_SYSREG(3,0,c0,c3,2)
> +
> +#define HSR_SYSREG_ID_AA64PFR0_EL1   HSR_SYSREG(3,0,c0,c4,0)
> +#define HSR_SYSREG_ID_AA64PFR1_EL1   HSR_SYSREG(3,0,c0,c4,1)
> +#define HSR_SYSREG_ID_AA64DFR0_EL1   HSR_SYSREG(3,0,c0,c5,0)
> +#define HSR_SYSREG_ID_AA64DFR1_EL1   HSR_SYSREG(3,0,c0,c5,1)
> +#define HSR_SYSREG_ID_AA64ISAR0_EL1  HSR_SYSREG(3,0,c0,c6,0)
> +#define HSR_SYSREG_ID_AA64ISAR1_EL1  HSR_SYSREG(3,0,c0,c6,1)
> +#define HSR_SYSREG_ID_AA64MMFR0_EL1  HSR_SYSREG(3,0,c0,c7,0)
> +#define HSR_SYSREG_ID_AA64MMFR1_EL1  HSR_SYSREG(3,0,c0,c7,1)
> +#define HSR_SYSREG_ID_AA64MMFR2_EL1  HSR_SYSREG(3,0,c0,c7,2)
> +#define HSR_SYSREG_ID_AA64AFR0_EL1   HSR_SYSREG(3,0,c0,c5,4)
> +#define HSR_SYSREG_ID_AA64AFR1_EL1   HSR_SYSREG(3,0,c0,c5,5)
> +#define HSR_SYSREG_ID_AA64ZFR0_EL1   HSR_SYSREG(3,0,c0,c4,4)
> +
> +/*
> + * Those cases are catching all Reserved registers trapped by TID3 which
> + * currently have no assignment.
> + * HCR.TID3 is trapping all registers in the group 3:
> + * Op0 == 3, op1 == 0, CRn == c0,CRm == {c1-c7}, op2 == {0-7}.
> + */
> +#define HSR_SYSREG_TID3_RESERVED_CASE  case HSR_SYSREG(3,0,c0,c3,3): \
> +                                       case HSR_SYSREG(3,0,c0,c3,7): \
> +                                       case HSR_SYSREG(3,0,c0,c4,2): \
> +                                       case HSR_SYSREG(3,0,c0,c4,3): \
> +                                       case HSR_SYSREG(3,0,c0,c4,5): \
> +                                       case HSR_SYSREG(3,0,c0,c4,6): \
> +                                       case HSR_SYSREG(3,0,c0,c4,7): \
> +                                       case HSR_SYSREG(3,0,c0,c5,2): \
> +                                       case HSR_SYSREG(3,0,c0,c5,3): \
> +                                       case HSR_SYSREG(3,0,c0,c5,6): \
> +                                       case HSR_SYSREG(3,0,c0,c5,7): \
> +                                       case HSR_SYSREG(3,0,c0,c6,2): \
> +                                       case HSR_SYSREG(3,0,c0,c6,3): \
> +                                       case HSR_SYSREG(3,0,c0,c6,4): \
> +                                       case HSR_SYSREG(3,0,c0,c6,5): \
> +                                       case HSR_SYSREG(3,0,c0,c6,6): \
> +                                       case HSR_SYSREG(3,0,c0,c6,7): \
> +                                       case HSR_SYSREG(3,0,c0,c7,3): \
> +                                       case HSR_SYSREG(3,0,c0,c7,4): \
> +                                       case HSR_SYSREG(3,0,c0,c7,5): \
> +                                       case HSR_SYSREG(3,0,c0,c7,6): \
> +                                       case HSR_SYSREG(3,0,c0,c7,7)
> +
>  #endif /* __ASM_ARM_ARM64_HSR_H */
>  
>  /*
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 21:06:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 21:06:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48732.86219 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn6fC-0002Zv-Uy; Wed, 09 Dec 2020 21:06:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48732.86219; Wed, 09 Dec 2020 21:06:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn6fC-0002Zm-Rl; Wed, 09 Dec 2020 21:06:18 +0000
Received: by outflank-mailman (input) for mailman id 48732;
 Wed, 09 Dec 2020 21:06:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/xMB=FN=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kn6fB-0002Xx-Lj
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 21:06:17 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 65e5a828-96d1-4c7e-97a7-e8756cdfa7a7;
 Wed, 09 Dec 2020 21:06:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65e5a828-96d1-4c7e-97a7-e8756cdfa7a7
Date: Wed, 9 Dec 2020 13:06:15 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607547976;
	bh=G4kJOcGJNyShlWbHMpv/xprT01PekoPSCbwvRQ3T/LQ=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=E4oKLaT6iZq/7WPvetWRYsGL+D2XEr0dw9W4LeDiC0NQDuaXuLfN7aSTp/yYxi+UV
	 xWl8zLHk1cI0CXWY4eNOIQY7cmBLf/5jJuR1DTHclCcOeoIz8vz2NeIvpaX/f8nODx
	 MklUGZXIskaBuOTpRvpUbiUQYqQpiNs4kkJmPYzv5bxA5uG6IyQahhZqJppfmXme9Z
	 kR1k9fX1Ip5oBx0f1CVNmUTZtT2XjldtZhGVMjeJvBqyI9abUqRQZ179w2QR1NA3nB
	 767m9nf/h2+nqZOg6tTjJy46WFDgyy6v4KGvy4laMlSTu7gwJjqdTV0j37KOCP06dm
	 t3YeSBy6a2o+w==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <bertrand.marquis@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 3/7] xen/arm: create a cpuinfo structure for guest
In-Reply-To: <33f39e7f521e6f73a0dba57a8be9fb50656e1807.1607524536.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.21.2012091115280.20986@sstabellini-ThinkPad-T480s>
References: <cover.1607524536.git.bertrand.marquis@arm.com> <33f39e7f521e6f73a0dba57a8be9fb50656e1807.1607524536.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 9 Dec 2020, Bertrand Marquis wrote:
> Create a cpuinfo structure for guest and mask into it the features that
> we do not support in Xen or that we do not want to publish to guests.
> 
> Modify some values in the cpuinfo structure for guests to mask some
> features which we do not want to expose to guests (like AMU) or which
> we do not support (like SVE).
> 
> The code tries to group together register modifications for the same
> feature, to make it possible in the long term to easily enable/disable
> a feature depending on user parameters, or to add other register
> modifications in the same place (like enabling/disabling HCR bits).
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Changes in V2: Rebase
> Changes in V3:
>   Use current_cpu_data info instead of recalling identify_cpu
> 
> ---
>  xen/arch/arm/cpufeature.c        | 51 ++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/cpufeature.h |  2 ++
>  2 files changed, 53 insertions(+)
> 
> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
> index bc7ee5ac95..7255383504 100644
> --- a/xen/arch/arm/cpufeature.c
> +++ b/xen/arch/arm/cpufeature.c
> @@ -24,6 +24,8 @@
>  
>  DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
>  
> +struct cpuinfo_arm __read_mostly guest_cpuinfo;
> +
>  void update_cpu_capabilities(const struct arm_cpu_capabilities *caps,
>                               const char *info)
>  {
> @@ -157,6 +159,55 @@ void identify_cpu(struct cpuinfo_arm *c)
>  #endif
>  }
>  
> +/*
> + * This function is creating a cpuinfo structure with values modified to mask
> + * all cpu features that should not be published to guest.
> + * The created structure is then used to provide ID registers values to guests.
> + */
> +static int __init create_guest_cpuinfo(void)
> +{
> +    /*
> +     * TODO: The code is currently using only the features detected on the boot
> +     * core. In the long term we should try to compute values containing only
> +     * features supported by all cores.
> +     */
> +    guest_cpuinfo = current_cpu_data;
> +
> +#ifdef CONFIG_ARM_64
> +    /* Disable MPAM as xen does not support it */
> +    guest_cpuinfo.pfr64.mpam = 0;
> +    guest_cpuinfo.pfr64.mpam_frac = 0;
> +
> +    /* Disable SVE as Xen does not support it */
> +    guest_cpuinfo.pfr64.sve = 0;
> +    guest_cpuinfo.zfr64.bits[0] = 0;
> +
> +    /* Disable MTE as Xen does not support it */
> +    guest_cpuinfo.pfr64.mte = 0;
> +#endif
> +
> +    /* Disable AMU */
> +#ifdef CONFIG_ARM_64
> +    guest_cpuinfo.pfr64.amu = 0;
> +#endif
> +    guest_cpuinfo.pfr32.amu = 0;
> +
> +    /* Disable RAS as Xen does not support it */
> +#ifdef CONFIG_ARM_64
> +    guest_cpuinfo.pfr64.ras = 0;
> +    guest_cpuinfo.pfr64.ras_frac = 0;
> +#endif
> +    guest_cpuinfo.pfr32.ras = 0;
> +    guest_cpuinfo.pfr32.ras_frac = 0;
> +
> +    return 0;
> +}
> +/*
> + * This function needs to be run after all secondary CPUs have started so
> + * that cpuinfo structures are available for all cores.
> + */
> +__initcall(create_guest_cpuinfo);
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
> index 6cf83d775b..10b62bd324 100644
> --- a/xen/include/asm-arm/cpufeature.h
> +++ b/xen/include/asm-arm/cpufeature.h
> @@ -283,6 +283,8 @@ extern void identify_cpu(struct cpuinfo_arm *);
>  extern struct cpuinfo_arm cpu_data[];
>  #define current_cpu_data cpu_data[smp_processor_id()]
>  
> +extern struct cpuinfo_arm guest_cpuinfo;
> +
>  #endif /* __ASSEMBLY__ */
>  
>  #endif
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 21:06:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 21:06:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48736.86231 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn6fI-0002dN-7t; Wed, 09 Dec 2020 21:06:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48736.86231; Wed, 09 Dec 2020 21:06:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn6fI-0002dG-4s; Wed, 09 Dec 2020 21:06:24 +0000
Received: by outflank-mailman (input) for mailman id 48736;
 Wed, 09 Dec 2020 21:06:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/xMB=FN=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kn6fG-0002ct-SD
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 21:06:22 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6c220ae2-0492-4fe7-8304-fe5a6c5f792c;
 Wed, 09 Dec 2020 21:06:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c220ae2-0492-4fe7-8304-fe5a6c5f792c
Date: Wed, 9 Dec 2020 13:06:20 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607547981;
	bh=fxnAYyY+AG6VQmt5QhJC+mLiy8etXez1tkgPfJl8ATs=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=hLTu/ACjH6add2oLNh8TiioJeZgNt3UPjIMMJc0WjZlNh8D5NNsUZlbfc+UGbMrjj
	 ndi9+guX639tQAsTz9s8pibq6ISqxlBer7pq1vwP1xaoW3hczEanZH7L5gYKilxP9N
	 n+WBwmjrZcIUNPOfcj4Og++JQt8YFhjOg9yCm02uaeqKyiiFj5+95mIuk2LDZ2KWip
	 CCD1MrvbOqGwRPqlvV/Fzgp6fU/AkrWKB8uANdlbUURryHmJ8ykx/RkJr8OyA/W2uj
	 tln6qif5gVWx9+VSdgfKGuwuwaUo5AjS5e8viuOv20nR8Li9EOvfUFI7fMKGYU33FB
	 uporSGxT7dNFQ==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <bertrand.marquis@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 7/7] xen/arm: Activate TID3 in HCR_EL2
In-Reply-To: <956cf336ffce24f0cabfc7a98ae855bc71d5f028.1607524536.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.21.2012091157200.20986@sstabellini-ThinkPad-T480s>
References: <cover.1607524536.git.bertrand.marquis@arm.com> <956cf336ffce24f0cabfc7a98ae855bc71d5f028.1607524536.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 9 Dec 2020, Bertrand Marquis wrote:
> Activate the TID3 bit in the HCR register when starting a guest.
> This will trap all coprocessor ID registers so that we can give guests
> values corresponding to what they can actually use, and mask some
> features from guests even though they are supported by the underlying
> hardware (like SVE or MPAM).
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Changes in V2: Rebase
> Changes in V3: Rebase
> 
> ---
>  xen/arch/arm/traps.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 28d9d64558..c1a9ad6056 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -98,7 +98,7 @@ register_t get_default_hcr_flags(void)
>  {
>      return  (HCR_PTW|HCR_BSU_INNER|HCR_AMO|HCR_IMO|HCR_FMO|HCR_VM|
>               (vwfi != NATIVE ? (HCR_TWI|HCR_TWE) : 0) |
> -             HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
> +             HCR_TID3|HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
>  }
>  
>  static enum {
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 21:07:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 21:07:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48746.86243 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn6g8-0002rO-JP; Wed, 09 Dec 2020 21:07:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48746.86243; Wed, 09 Dec 2020 21:07:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn6g8-0002rH-Fi; Wed, 09 Dec 2020 21:07:16 +0000
Received: by outflank-mailman (input) for mailman id 48746;
 Wed, 09 Dec 2020 21:07:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/1wO=FN=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1kn6g7-0002r0-Af
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 21:07:15 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5ec88874-25f3-4748-be12-9189e4afa396;
 Wed, 09 Dec 2020 21:07:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5ec88874-25f3-4748-be12-9189e4afa396
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607548032;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=L7i3CPNAPt2KU+IZb9dhE7iSR6cyuRM8HyieTAr/YYg=;
	b=kq1OHmCQNTXmjFxRiiYQCtMcgHsW2nl9XwXEhd2UDC34JY4VqsuqNqWdgTS3YK0B2HTN0a
	GLiGRqcl4feIgRGti60S5LHayQAwiZFaJ2AcyKwnEyQbepXQ2jz8X8vNbxqDaot95wevaf
	Lz67AUgT8+6fwzbCvE7SbExB3ThHr/fHr+UAilkliO85KAQIMK7eww5bH7SAetpnob4Kqc
	q6ySWnueDo4tqYCw7qzFmoI9zbcSv58CxsWPgdnODOP/uZ4j59J5/wPRhcDpcZpcic0r+V
	wcfFMs/wLf2fAEfHBp8cin02bVRhQTrk424mMLHkmAmUY5hzxKqvWbPQgnwCnw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607548032;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=L7i3CPNAPt2KU+IZb9dhE7iSR6cyuRM8HyieTAr/YYg=;
	b=htO3ZaH9N+kRa526lyS/ICS9mPFUq9txIKXd3dIhl9PtBy0srICRWtTNhIRq5W11XlRqAu
	yEM17S/seRxu7uBg==
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org, x86@kernel.org, linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org
Cc: peterz@infradead.org, luto@kernel.org, Juergen Gross <jgross@suse.com>, Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>, Deep Shah <sdeep@vmware.com>, "VMware\, Inc." <pv-drivers@vmware.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>, Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v2 03/12] x86/pv: switch SWAPGS to ALTERNATIVE
In-Reply-To: <20201120114630.13552-4-jgross@suse.com>
References: <20201120114630.13552-1-jgross@suse.com> <20201120114630.13552-4-jgross@suse.com>
Date: Wed, 09 Dec 2020 22:07:12 +0100
Message-ID: <871rfylmdb.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain

On Fri, Nov 20 2020 at 12:46, Juergen Gross wrote:
> SWAPGS is used only for interrupts coming from user mode or for
> returning to user mode. So there is no reason to use the PARAVIRT
> framework, as it can easily be replaced by an ALTERNATIVE depending
> on X86_FEATURE_XENPV.
>
> There are several instances using the PV-aware SWAPGS macro in paths
> which are never executed in a Xen PV guest. Replace those with the
> plain swapgs instruction. For SWAPGS_UNSAFE_STACK the same applies.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Acked-by: Andy Lutomirski <luto@kernel.org>
> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>

Reviewed-by: Thomas Gleixner <tglx@linutronix.de>


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 21:32:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 21:32:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48784.86290 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn74s-0005yX-6q; Wed, 09 Dec 2020 21:32:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48784.86290; Wed, 09 Dec 2020 21:32:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn74s-0005yQ-3d; Wed, 09 Dec 2020 21:32:50 +0000
Received: by outflank-mailman (input) for mailman id 48784;
 Wed, 09 Dec 2020 21:32:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/xMB=FN=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kn74q-0005yL-PI
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 21:32:48 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9bca2c42-c755-443d-9530-0ffb5b08eeb9;
 Wed, 09 Dec 2020 21:32:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9bca2c42-c755-443d-9530-0ffb5b08eeb9
Date: Wed, 9 Dec 2020 13:32:45 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607549567;
	bh=5CVtPUFtiMxACabaGtBvf6XMIWIWvn7TVRc7ElyN5X8=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=fQZQiimmrHmRpECbf3+ldOiRcQMOI5BAEOnbMUsx5eB/c/XYhcpQORv92uS3bA0Wz
	 5Vx1WF7sflBexLwWIribrYfG7fR+ai3YB5QApxDDvwWVqKpNsMogAWaoexRqLNWAXL
	 IX475PS7tLXCRCJ3A/V6i5SuyCPcIE+uQizdSfhzdSqS5xrHtsUIppV/Z8sNZkmj4A
	 vkiX8XON7B0BrOOb3vjJK0Rwgj1lvVLjlj4ebiJJCzG8pe6NkJ5Tr3KmJxJQXCJXW6
	 yHKrzo+1kwoD+Ezj6Xr/R9UgKDlB5xUT7DIxXuGLenrATeSBUT01akhF/iof7VIkwA
	 3ipsRT5cxDIuA==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
cc: xen-devel@lists.xenproject.org, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Paul Durrant <paul@xen.org>, Julien Grall <julien.grall@arm.com>
Subject: Re: [PATCH V3 13/23] xen/ioreq: Use guest_cmpxchg64() instead of
 cmpxchg()
In-Reply-To: <1606732298-22107-14-git-send-email-olekstysh@gmail.com>
Message-ID: <alpine.DEB.2.21.2012091329480.20986@sstabellini-ThinkPad-T480s>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com> <1606732298-22107-14-git-send-email-olekstysh@gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 30 Nov 2020, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> The cmpxchg() in ioreq_send_buffered() operates on memory shared
> with the emulator domain (and the target domain if the legacy
> interface is used).
> 
> In order to be on the safe side we need to switch
> to guest_cmpxchg64() to prevent a domain from DoSing Xen on Arm.
> 
> As there is no plan to support the legacy interface on Arm,
> the page will only be mapped in a single domain at a time,
> so we can use s->emulator in guest_cmpxchg64() safely.
> 
> Thankfully the only user of the legacy interface is x86 so far
> and there is no concern regarding the atomic operations.
> 
> Please note, that the legacy interface *must* not be used on Arm
> without revisiting the code.
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> CC: Julien Grall <julien.grall@arm.com>
>
> ---
> Please note, this is a split/cleanup/hardening of Julien's PoC:
> "Add support for Guest IO forwarding to a device emulator"
> 
> Changes RFC -> V1:
>    - new patch
> 
> Changes V1 -> V2:
>    - move earlier to avoid breaking arm32 compilation
>    - add an explanation to commit description and hvm_allow_set_param()
>    - pass s->emulator
> 
> Changes V2 -> V3:
>    - update patch description
> ---
> ---
>  xen/arch/arm/hvm.c | 4 ++++
>  xen/common/ioreq.c | 3 ++-
>  2 files changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
> index 8951b34..9694e5a 100644
> --- a/xen/arch/arm/hvm.c
> +++ b/xen/arch/arm/hvm.c
> @@ -31,6 +31,10 @@
>  
>  #include <asm/hypercall.h>
>  
> +/*
> + * The legacy interface (which involves magic IOREQ pages) *must* not be used
> + * without revisiting the code.
> + */

This is a NIT, but I'd prefer if you moved the comment a few lines
below, maybe just before the existing comment starting with "The
following parameters".

The reason is that, as it is now, it is not clear which set_param
interfaces should not be used without revisiting the code.

With that:

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


>  static int hvm_allow_set_param(const struct domain *d, unsigned int param)
>  {
>      switch ( param )
> diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
> index 3ca5b96..4855dd8 100644
> --- a/xen/common/ioreq.c
> +++ b/xen/common/ioreq.c
> @@ -29,6 +29,7 @@
>  #include <xen/trace.h>
>  #include <xen/vpci.h>
>  
> +#include <asm/guest_atomics.h>
>  #include <asm/hvm/ioreq.h>
>  
>  #include <public/hvm/ioreq.h>
> @@ -1182,7 +1183,7 @@ static int ioreq_send_buffered(struct ioreq_server *s, ioreq_t *p)
>  
>          new.read_pointer = old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM;
>          new.write_pointer = old.write_pointer - n * IOREQ_BUFFER_SLOT_NUM;
> -        cmpxchg(&pg->ptrs.full, old.full, new.full);
> +        guest_cmpxchg64(s->emulator, &pg->ptrs.full, old.full, new.full);
>      }
>  
>      notify_via_xen_event_channel(d, s->bufioreq_evtchn);
> -- 
> 2.7.4
> 


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 22:05:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 22:05:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48796.86309 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn7Zn-0000Yg-QB; Wed, 09 Dec 2020 22:04:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48796.86309; Wed, 09 Dec 2020 22:04:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn7Zn-0000YZ-ML; Wed, 09 Dec 2020 22:04:47 +0000
Received: by outflank-mailman (input) for mailman id 48796;
 Wed, 09 Dec 2020 22:04:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/xMB=FN=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kn7Zm-0000YU-IM
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 22:04:46 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1ed15ad6-7056-4733-9e1a-bef699b809c7;
 Wed, 09 Dec 2020 22:04:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ed15ad6-7056-4733-9e1a-bef699b809c7
Date: Wed, 9 Dec 2020 14:04:43 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607551485;
	bh=2q/ECwjKVm71dN2+Egl9Otj75lGb/FNpA/lUSXRao0k=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=WHL2IOp3FFueGa/ASXKDw/8zfG01DjLyxK9JeWh1RLz/oEPk1dpPwdkzZCd1Jk2p2
	 zIWXUPF0j+Pt4kgg6JBrq0COKhdYPn/qqhAhN4DyM+L52jn1EeaiRnCwaNsny7jdHf
	 lvSgegAtx/V4l2sUp05OEY9OBtDtyz9QS0g/BOgVc1VWKI7dShA3YGrScMMQYNb4OS
	 J45DIGx+9j0DyklDLsrTWQqJpYU4pvvODs5zFNIiBTMYY4V90icfwpOMWrHICKeQpS
	 tjOmQ//Yq2UDVAKfUcOlQ/c1b8WarIlymTW02gJDrgmDx/2otcZpcPMrIjiV5ogpnI
	 X/QbzsTqmFx6Q==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
cc: xen-devel@lists.xenproject.org, Julien Grall <julien.grall@arm.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: Re: [PATCH V3 14/23] arm/ioreq: Introduce arch specific bits for
 IOREQ/DM features
In-Reply-To: <1606732298-22107-15-git-send-email-olekstysh@gmail.com>
Message-ID: <alpine.DEB.2.21.2012091357430.20986@sstabellini-ThinkPad-T480s>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com> <1606732298-22107-15-git-send-email-olekstysh@gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-761576849-1607551132=:20986"
Content-ID: <alpine.DEB.2.21.2012091359000.20986@sstabellini-ThinkPad-T480s>


On Mon, 30 Nov 2020, Oleksandr Tyshchenko wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> This patch adds basic IOREQ/DM support on Arm. The subsequent
> patches will improve functionality and add remaining bits.
> 
> The IOREQ/DM features are supposed to be built with IOREQ_SERVER
> option enabled, which is disabled by default on Arm for now.
> 
> Please note, the "PIO handling" TODO is expected to be left unaddressed
> in the current series. It is not a big issue for now while Xen
> doesn't have support for vPCI on Arm. On Arm64, PIO accesses are only used
> for PCI IO BARs and we would probably want to expose them to the emulator
> as PIO accesses to make a DM completely arch-agnostic. So "PIO handling"
> should be implemented when we add support for vPCI.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> ---
> Please note, this is a split/cleanup/hardening of Julien's PoC:
> "Add support for Guest IO forwarding to a device emulator"
> 
> Changes RFC -> V1:
>    - was split into:
>      - arm/ioreq: Introduce arch specific bits for IOREQ/DM features
>      - xen/mm: Handle properly reference in set_foreign_p2m_entry() on Arm
>    - update patch description
>    - update asm-arm/hvm/ioreq.h according to the newly introduced arch functions:
>      - arch_hvm_destroy_ioreq_server()
>      - arch_handle_hvm_io_completion()
>    - update arch files to include xen/ioreq.h
>    - remove HVMOP plumbing
>    - rewrite a logic to handle properly case when hvm_send_ioreq() returns IO_RETRY
>    - add a logic to handle properly handle_hvm_io_completion() return value
>    - rename handle_mmio() to ioreq_handle_complete_mmio()
>    - move paging_mark_pfn_dirty() to asm-arm/paging.h
>    - remove forward declaration for hvm_ioreq_server in asm-arm/paging.h
>    - move try_fwd_ioserv() to ioreq.c, provide stubs if !CONFIG_IOREQ_SERVER
>    - do not remove #ifdef CONFIG_IOREQ_SERVER in memory.c for guarding xen/ioreq.h
>    - use gdprintk in try_fwd_ioserv(), remove unneeded prints
>    - update list of #include-s
>    - move has_vpci() to asm-arm/domain.h
>    - add a comment (TODO) to unimplemented yet handle_pio()
>    - remove hvm_mmio_first(last)_byte() and hvm_ioreq_(page/vcpu/server) structs
>      from the arch files, they were already moved to the common code
>    - remove set_foreign_p2m_entry() changes, they will be properly implemented
>      in the follow-up patch
>    - select IOREQ_SERVER for Arm instead of Arm64 in Kconfig
>    - remove x86's realmode and other unneeded stubs from xen/ioreq.h
>    - clarify ioreq_t p.df usage in try_fwd_ioserv()
>    - set ioreq_t p.count to 1 in try_fwd_ioserv()
> 
> Changes V1 -> V2:
>    - was split into:
>      - arm/ioreq: Introduce arch specific bits for IOREQ/DM features
>      - xen/arm: Stick around in leave_hypervisor_to_guest until I/O has completed
>    - update the author of a patch
>    - update patch description
>    - move a loop in leave_hypervisor_to_guest() to a separate patch
>    - set IOREQ_SERVER disabled by default
>    - remove already clarified /* XXX */
>    - replace BUG() by ASSERT_UNREACHABLE() in handle_pio()
>    - remove default case for handling the return value of try_handle_mmio()
>    - remove struct hvm_domain, enum hvm_io_completion, struct hvm_vcpu_io,
>      struct hvm_vcpu from asm-arm/domain.h, these are common materials now
>    - update everything according to the recent changes (IOREQ related function
>      names don't contain "hvm" prefixes/infixes anymore, IOREQ related fields
>      are part of common struct vcpu/domain now, etc)
> 
> Changes V2 -> V3:
>    - update patch according the "legacy interface" is x86 specific
>    - add dummy arch hooks
>    - remove dummy paging_mark_pfn_dirty()
>    - don’t include <xen/domain_page.h> in common ioreq.c
>    - don’t include <public/hvm/ioreq.h> in arch ioreq.h
>    - remove #define ioreq_params(d, i)
> ---
> ---
>  xen/arch/arm/Makefile           |   2 +
>  xen/arch/arm/dm.c               |  34 ++++++++++
>  xen/arch/arm/domain.c           |   9 +++
>  xen/arch/arm/io.c               |  11 +++-
>  xen/arch/arm/ioreq.c            | 141 ++++++++++++++++++++++++++++++++++++++++
>  xen/arch/arm/traps.c            |  13 ++++
>  xen/include/asm-arm/domain.h    |   3 +
>  xen/include/asm-arm/hvm/ioreq.h | 139 +++++++++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/mmio.h      |   1 +
>  9 files changed, 352 insertions(+), 1 deletion(-)
>  create mode 100644 xen/arch/arm/dm.c
>  create mode 100644 xen/arch/arm/ioreq.c
>  create mode 100644 xen/include/asm-arm/hvm/ioreq.h
> 
> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> index 296c5e6..c3ff454 100644
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -13,6 +13,7 @@ obj-y += cpuerrata.o
>  obj-y += cpufeature.o
>  obj-y += decode.o
>  obj-y += device.o
> +obj-$(CONFIG_IOREQ_SERVER) += dm.o
>  obj-y += domain.o
>  obj-y += domain_build.init.o
>  obj-y += domctl.o
> @@ -27,6 +28,7 @@ obj-y += guest_atomics.o
>  obj-y += guest_walk.o
>  obj-y += hvm.o
>  obj-y += io.o
> +obj-$(CONFIG_IOREQ_SERVER) += ioreq.o
>  obj-y += irq.o
>  obj-y += kernel.init.o
>  obj-$(CONFIG_LIVEPATCH) += livepatch.o
> diff --git a/xen/arch/arm/dm.c b/xen/arch/arm/dm.c
> new file mode 100644
> index 0000000..5d3da37
> --- /dev/null
> +++ b/xen/arch/arm/dm.c
> @@ -0,0 +1,34 @@
> +/*
> + * Copyright (c) 2019 Arm ltd.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <xen/dm.h>
> +#include <xen/hypercall.h>
> +
> +int arch_dm_op(struct xen_dm_op *op, struct domain *d,
> +               const struct dmop_args *op_args, bool *const_op)
> +{
> +    return -EOPNOTSUPP;
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 18cafcd..8f55aba 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -15,6 +15,7 @@
>  #include <xen/guest_access.h>
>  #include <xen/hypercall.h>
>  #include <xen/init.h>
> +#include <xen/ioreq.h>
>  #include <xen/lib.h>
>  #include <xen/livepatch.h>
>  #include <xen/sched.h>
> @@ -696,6 +697,10 @@ int arch_domain_create(struct domain *d,
>  
>      ASSERT(config != NULL);
>  
> +#ifdef CONFIG_IOREQ_SERVER
> +    ioreq_domain_init(d);
> +#endif
> +
>      /* p2m_init relies on some value initialized by the IOMMU subsystem */
>      if ( (rc = iommu_domain_init(d, config->iommu_opts)) != 0 )
>          goto fail;
> @@ -1014,6 +1019,10 @@ int domain_relinquish_resources(struct domain *d)
>          if (ret )
>              return ret;
>  
> +#ifdef CONFIG_IOREQ_SERVER
> +        ioreq_server_destroy_all(d);
> +#endif
> +
>      PROGRESS(xen):
>          ret = relinquish_memory(d, &d->xenpage_list);
>          if ( ret )
> diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
> index ae7ef96..f44cfd4 100644
> --- a/xen/arch/arm/io.c
> +++ b/xen/arch/arm/io.c
> @@ -23,6 +23,7 @@
>  #include <asm/cpuerrata.h>
>  #include <asm/current.h>
>  #include <asm/mmio.h>
> +#include <asm/hvm/ioreq.h>
>  
>  #include "decode.h"
>  
> @@ -123,7 +124,15 @@ enum io_state try_handle_mmio(struct cpu_user_regs *regs,
>  
>      handler = find_mmio_handler(v->domain, info.gpa);
>      if ( !handler )
> -        return IO_UNHANDLED;
> +    {
> +        int rc;
> +
> +        rc = try_fwd_ioserv(regs, v, &info);
> +        if ( rc == IO_HANDLED )
> +            return handle_ioserv(regs, v);
> +
> +        return rc;
> +    }
>  
>      /* All the instructions used on emulated MMIO region should be valid */
>      if ( !dabt.valid )
> diff --git a/xen/arch/arm/ioreq.c b/xen/arch/arm/ioreq.c
> new file mode 100644
> index 0000000..f08190c
> --- /dev/null
> +++ b/xen/arch/arm/ioreq.c
> @@ -0,0 +1,141 @@
> +/*
> + * arm/ioreq.c: hardware virtual machine I/O emulation
> + *
> + * Copyright (c) 2019 Arm ltd.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <xen/domain.h>
> +#include <xen/ioreq.h>
> +
> +#include <asm/traps.h>
> +
> +#include <public/hvm/ioreq.h>
> +
> +enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v)
> +{
> +    const union hsr hsr = { .bits = regs->hsr };
> +    const struct hsr_dabt dabt = hsr.dabt;
> +    /* Code is similar to handle_read */
> +    uint8_t size = (1 << dabt.size) * 8;
> +    register_t r = v->io.req.data;
> +
> +    /* We are done with the IO */
> +    v->io.req.state = STATE_IOREQ_NONE;
> +
> +    if ( dabt.write )
> +        return IO_HANDLED;
> +
> +    /*
> +     * Sign extend if required.
> +     * Note that we expect the read handler to have zeroed the bits
> +     * outside the requested access size.
> +     */
> +    if ( dabt.sign && (r & (1UL << (size - 1))) )
> +    {
> +        /*
> +         * We are relying on register_t being the same size as
> +         * an unsigned long in order to keep the 32-bit assembly
> +         * code smaller.
> +         */
> +        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
> +        r |= (~0UL) << size;
> +    }
> +
> +    set_user_reg(regs, dabt.reg, r);

Could you introduce a set_user_reg_signextend static inline function
that can be used both here and in handle_read?


> +    return IO_HANDLED;
> +}
> +
> +enum io_state try_fwd_ioserv(struct cpu_user_regs *regs,
> +                             struct vcpu *v, mmio_info_t *info)
> +{
> +    struct vcpu_io *vio = &v->io;
> +    ioreq_t p = {
> +        .type = IOREQ_TYPE_COPY,
> +        .addr = info->gpa,
> +        .size = 1 << info->dabt.size,
> +        .count = 1,
> +        .dir = !info->dabt.write,
> +        /*
> +         * On x86, df is used by 'rep' instruction to tell the direction
> +         * to iterate (forward or backward).
> +         * On Arm, all the accesses to MMIO region will do a single
> +         * memory access. So for now, we can safely always set to 0.
> +         */
> +        .df = 0,
> +        .data = get_user_reg(regs, info->dabt.reg),
> +        .state = STATE_IOREQ_READY,
> +    };
> +    struct ioreq_server *s = NULL;
> +    enum io_state rc;
> +
> +    switch ( vio->req.state )
> +    {
> +    case STATE_IOREQ_NONE:
> +        break;
> +
> +    case STATE_IORESP_READY:
> +        return IO_HANDLED;
> +
> +    default:
> +        gdprintk(XENLOG_ERR, "wrong state %u\n", vio->req.state);
> +        return IO_ABORT;
> +    }
> +
> +    s = ioreq_server_select(v->domain, &p);
> +    if ( !s )
> +        return IO_UNHANDLED;
> +
> +    if ( !info->dabt.valid )
> +        return IO_ABORT;
> +
> +    vio->req = p;
> +
> +    rc = ioreq_send(s, &p, 0);
> +    if ( rc != IO_RETRY || v->domain->is_shutting_down )
> +        vio->req.state = STATE_IOREQ_NONE;
> +    else if ( !ioreq_needs_completion(&vio->req) )
> +        rc = IO_HANDLED;
> +    else
> +        vio->completion = IO_mmio_completion;
> +
> +    return rc;
> +}
> +
> +bool ioreq_complete_mmio(void)
> +{
> +    struct vcpu *v = current;
> +    struct cpu_user_regs *regs = guest_cpu_user_regs();
> +    const union hsr hsr = { .bits = regs->hsr };
> +    paddr_t addr = v->io.req.addr;
> +
> +    if ( try_handle_mmio(regs, hsr, addr) == IO_HANDLED )
> +    {
> +        advance_pc(regs, hsr);
> +        return true;
> +    }
> +
> +    return false;
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 22bd1bd..036b13f 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -21,6 +21,7 @@
>  #include <xen/hypercall.h>
>  #include <xen/init.h>
>  #include <xen/iocap.h>
> +#include <xen/ioreq.h>
>  #include <xen/irq.h>
>  #include <xen/lib.h>
>  #include <xen/mem_access.h>
> @@ -1385,6 +1386,9 @@ static arm_hypercall_t arm_hypercall_table[] = {
>  #ifdef CONFIG_HYPFS
>      HYPERCALL(hypfs_op, 5),
>  #endif
> +#ifdef CONFIG_IOREQ_SERVER
> +    HYPERCALL(dm_op, 3),
> +#endif
>  };
>  
>  #ifndef NDEBUG
> @@ -1956,6 +1960,9 @@ static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
>              case IO_HANDLED:
>                  advance_pc(regs, hsr);
>                  return;
> +            case IO_RETRY:
> +                /* finish later */
> +                return;
>              case IO_UNHANDLED:
>                  /* IO unhandled, try another way to handle it. */
>                  break;
> @@ -2254,6 +2261,12 @@ static void check_for_vcpu_work(void)
>  {
>      struct vcpu *v = current;
>  
> +#ifdef CONFIG_IOREQ_SERVER
> +    local_irq_enable();
> +    vcpu_ioreq_handle_completion(v);
> +    local_irq_disable();
> +#endif
> +
>      if ( likely(!v->arch.need_flush_to_ram) )
>          return;
>  
> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index 6819a3b..c235e5b 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -10,6 +10,7 @@
>  #include <asm/gic.h>
>  #include <asm/vgic.h>
>  #include <asm/vpl011.h>
> +#include <public/hvm/dm_op.h>
>  #include <public/hvm/params.h>
>  
>  struct hvm_domain
> @@ -262,6 +263,8 @@ static inline void arch_vcpu_block(struct vcpu *v) {}
>  
>  #define arch_vm_assist_valid_mask(d) (1UL << VMASST_TYPE_runstate_update_flag)
>  
> +#define has_vpci(d)    ({ (void)(d); false; })
> +
>  #endif /* __ASM_DOMAIN_H__ */
>  
>  /*
> diff --git a/xen/include/asm-arm/hvm/ioreq.h b/xen/include/asm-arm/hvm/ioreq.h
> new file mode 100644
> index 0000000..2bffc7a
> --- /dev/null
> +++ b/xen/include/asm-arm/hvm/ioreq.h
> @@ -0,0 +1,139 @@
> +/*
> + * hvm.h: Hardware virtual machine assist interface definitions.
> + *
> + * Copyright (c) 2016 Citrix Systems Inc.
> + * Copyright (c) 2019 Arm ltd.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef __ASM_ARM_HVM_IOREQ_H__
> +#define __ASM_ARM_HVM_IOREQ_H__
> +
> +#include <xen/ioreq.h>
> +
> +#ifdef CONFIG_IOREQ_SERVER
> +enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v);
> +enum io_state try_fwd_ioserv(struct cpu_user_regs *regs,
> +                             struct vcpu *v, mmio_info_t *info);
> +#else
> +static inline enum io_state handle_ioserv(struct cpu_user_regs *regs,
> +                                          struct vcpu *v)
> +{
> +    return IO_UNHANDLED;
> +}
> +
> +static inline enum io_state try_fwd_ioserv(struct cpu_user_regs *regs,
> +                                           struct vcpu *v, mmio_info_t *info)
> +{
> +    return IO_UNHANDLED;
> +}
> +#endif

If we are providing stub functions, then we can also provide stub
functions for:

ioreq_domain_init
ioreq_server_destroy_all

and avoid the ifdefs.


> +bool ioreq_complete_mmio(void);
> +
> +static inline bool handle_pio(uint16_t port, unsigned int size, int dir)
> +{
> +    /*
> +     * TODO: For Arm64, the main user will be PCI. So this should be
> +     * implemented when we add support for vPCI.
> +     */
> +    ASSERT_UNREACHABLE();
> +    return true;
> +}
> +
> +static inline void msix_write_completion(struct vcpu *v)
> +{
> +}
> +
> +static inline bool arch_vcpu_ioreq_completion(enum io_completion io_completion)
> +{
> +    ASSERT_UNREACHABLE();
> +    return true;
> +}
> +
> +/*
> + * The "legacy" mechanism of mapping magic pages for the IOREQ servers
> + * is x86 specific, so the following hooks don't need to be implemented on Arm:
> + * - arch_ioreq_server_map_pages
> + * - arch_ioreq_server_unmap_pages
> + * - arch_ioreq_server_enable
> + * - arch_ioreq_server_disable
> + */
> +static inline int arch_ioreq_server_map_pages(struct ioreq_server *s)
> +{
> +    return -EOPNOTSUPP;
> +}
> +
> +static inline void arch_ioreq_server_unmap_pages(struct ioreq_server *s)
> +{
> +}
> +
> +static inline void arch_ioreq_server_enable(struct ioreq_server *s)
> +{
> +}
> +
> +static inline void arch_ioreq_server_disable(struct ioreq_server *s)
> +{
> +}
> +
> +static inline void arch_ioreq_server_destroy(struct ioreq_server *s)
> +{
> +}
> +
> +static inline int arch_ioreq_server_map_mem_type(struct domain *d,
> +                                                 struct ioreq_server *s,
> +                                                 uint32_t flags)
> +{
> +    return -EOPNOTSUPP;
> +}
> +
> +static inline bool arch_ioreq_server_destroy_all(struct domain *d)
> +{
> +    return true;
> +}
> +
> +static inline int arch_ioreq_server_get_type_addr(const struct domain *d,
> +                                                  const ioreq_t *p,
> +                                                  uint8_t *type,
> +                                                  uint64_t *addr)
> +{
> +    if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
> +        return -EINVAL;
> +
> +    *type = (p->type == IOREQ_TYPE_PIO) ?
> +             XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
> +    *addr = p->addr;

This function is not used in this patch and PIOs are left unimplemented
according to a few comments, so I am puzzled by this code here. Do we
need it?


> +    return 0;
> +}
> +
> +static inline void arch_ioreq_domain_init(struct domain *d)
> +{
> +}
> +
> +#define IOREQ_STATUS_HANDLED     IO_HANDLED
> +#define IOREQ_STATUS_UNHANDLED   IO_UNHANDLED
> +#define IOREQ_STATUS_RETRY       IO_RETRY
> +
> +#endif /* __ASM_ARM_HVM_IOREQ_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/include/asm-arm/mmio.h b/xen/include/asm-arm/mmio.h
> index 8dbfb27..7ab873c 100644
> --- a/xen/include/asm-arm/mmio.h
> +++ b/xen/include/asm-arm/mmio.h
> @@ -37,6 +37,7 @@ enum io_state
>      IO_ABORT,       /* The IO was handled by the helper and led to an abort. */
>      IO_HANDLED,     /* The IO was successfully handled by the helper. */
>      IO_UNHANDLED,   /* The IO was not handled by the helper. */
> +    IO_RETRY,       /* Retry the emulation for some reason */
>  };
>  
>  typedef int (*mmio_read_t)(struct vcpu *v, mmio_info_t *info,
> -- 
> 2.7.4
> 


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 22:34:56 2020
Subject: Re: [PATCH V3 13/23] xen/ioreq: Use guest_cmpxchg64() instead of
 cmpxchg()
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, Paul Durrant <paul@xen.org>,
 Julien Grall <julien.grall@arm.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-14-git-send-email-olekstysh@gmail.com>
 <alpine.DEB.2.21.2012091329480.20986@sstabellini-ThinkPad-T480s>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <e9ba97b2-01e4-f657-fceb-ccf4857e91c2@gmail.com>
Date: Thu, 10 Dec 2020 00:34:42 +0200
In-Reply-To: <alpine.DEB.2.21.2012091329480.20986@sstabellini-ThinkPad-T480s>


On 09.12.20 23:32, Stefano Stabellini wrote:

Hi Stefano

> On Mon, 30 Nov 2020, Oleksandr Tyshchenko wrote:
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> The cmpxchg() in ioreq_send_buffered() operates on memory shared
>> with the emulator domain (and the target domain if the legacy
>> interface is used).
>>
>> In order to be on the safe side we need to switch
>> to guest_cmpxchg64() to prevent a domain from DoSing Xen on Arm.
>>
>> As there is no plan to support the legacy interface on Arm,
>> a page will only be mapped in a single domain at a time,
>> so we can use s->emulator in guest_cmpxchg64() safely.
>>
>> Thankfully the only user of the legacy interface is x86 so far
>> and there is no concern regarding the atomic operations.
>>
>> Please note, that the legacy interface *must* not be used on Arm
>> without revisiting the code.
>>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> CC: Julien Grall <julien.grall@arm.com>
>>
>> ---
>> Please note, this is a split/cleanup/hardening of Julien's PoC:
>> "Add support for Guest IO forwarding to a device emulator"
>>
>> Changes RFC -> V1:
>>     - new patch
>>
>> Changes V1 -> V2:
>>     - move earlier to avoid breaking arm32 compilation
>>     - add an explanation to commit description and hvm_allow_set_param()
>>     - pass s->emulator
>>
>> Changes V2 -> V3:
>>     - update patch description
>> ---
>> ---
>>   xen/arch/arm/hvm.c | 4 ++++
>>   xen/common/ioreq.c | 3 ++-
>>   2 files changed, 6 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
>> index 8951b34..9694e5a 100644
>> --- a/xen/arch/arm/hvm.c
>> +++ b/xen/arch/arm/hvm.c
>> @@ -31,6 +31,10 @@
>>   
>>   #include <asm/hypercall.h>
>>   
>> +/*
>> + * The legacy interface (which involves magic IOREQ pages) *must* not be used
>> + * without revisiting the code.
>> + */
> This is a NIT, but I'd prefer if you moved the comment a few lines
> below, maybe just before the existing comment starting with "The
> following parameters".
>
> The reason is that as it is now it is not clear which set_params
> interfaces should not be used without revisiting the code.
OK, but maybe this comment should be dropped altogether? It was relevant
when the legacy interface was part of the common code (V2). Now that the
legacy interface is x86-specific, I am not sure the comment belongs here.


>
> With that:
>
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>

Thank you


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 22:49:56 2020
Subject: Re: [PATCH V3 14/23] arm/ioreq: Introduce arch specific bits for
 IOREQ/DM features
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Julien Grall <julien.grall@arm.com>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-15-git-send-email-olekstysh@gmail.com>
 <alpine.DEB.2.21.2012091357430.20986@sstabellini-ThinkPad-T480s>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <31e35f5d-5ab4-19bf-e36b-8e7c0b7bf7d4@gmail.com>
Date: Thu, 10 Dec 2020 00:49:44 +0200
In-Reply-To: <alpine.DEB.2.21.2012091357430.20986@sstabellini-ThinkPad-T480s>


On 10.12.20 00:04, Stefano Stabellini wrote:

Hi Stefano

> On Mon, 30 Nov 2020, Oleksandr Tyshchenko wrote:
>> From: Julien Grall <julien.grall@arm.com>
>>
>> This patch adds basic IOREQ/DM support on Arm. The subsequent
>> patches will improve functionality and add remaining bits.
>>
>> The IOREQ/DM features are supposed to be built with IOREQ_SERVER
>> option enabled, which is disabled by default on Arm for now.
>>
>> Please note, the "PIO handling" TODO is expected to be left unaddressed
>> for the current series. It is not a big issue for now while Xen
>> doesn't have support for vPCI on Arm. On Arm64, PIOs are only used
>> for the PCI IO BAR, and we would probably want to expose them to the
>> emulator as PIO accesses to make a DM completely arch-agnostic. So
>> "PIO handling" should be implemented when we add support for vPCI.
>>
>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> ---
>> Please note, this is a split/cleanup/hardening of Julien's PoC:
>> "Add support for Guest IO forwarding to a device emulator"
>>
>> Changes RFC -> V1:
>>     - was split into:
>>       - arm/ioreq: Introduce arch specific bits for IOREQ/DM features
>>       - xen/mm: Handle properly reference in set_foreign_p2m_entry() on Arm
>>     - update patch description
>>     - update asm-arm/hvm/ioreq.h according to the newly introduced arch functions:
>>       - arch_hvm_destroy_ioreq_server()
>>       - arch_handle_hvm_io_completion()
>>     - update arch files to include xen/ioreq.h
>>     - remove HVMOP plumbing
>>     - rewrite a logic to handle properly case when hvm_send_ioreq() returns IO_RETRY
>>     - add a logic to handle properly handle_hvm_io_completion() return value
>>     - rename handle_mmio() to ioreq_handle_complete_mmio()
>>     - move paging_mark_pfn_dirty() to asm-arm/paging.h
>>     - remove forward declaration for hvm_ioreq_server in asm-arm/paging.h
>>     - move try_fwd_ioserv() to ioreq.c, provide stubs if !CONFIG_IOREQ_SERVER
>>     - do not remove #ifdef CONFIG_IOREQ_SERVER in memory.c for guarding xen/ioreq.h
>>     - use gdprintk in try_fwd_ioserv(), remove unneeded prints
>>     - update list of #include-s
>>     - move has_vpci() to asm-arm/domain.h
>>     - add a comment (TODO) to unimplemented yet handle_pio()
>>     - remove hvm_mmio_first(last)_byte() and hvm_ioreq_(page/vcpu/server) structs
>>       from the arch files, they were already moved to the common code
>>     - remove set_foreign_p2m_entry() changes, they will be properly implemented
>>       in the follow-up patch
>>     - select IOREQ_SERVER for Arm instead of Arm64 in Kconfig
>>     - remove x86's realmode and other unneeded stubs from xen/ioreq.h
>>     - clarify ioreq_t p.df usage in try_fwd_ioserv()
>>     - set ioreq_t p.count to 1 in try_fwd_ioserv()
>>
>> Changes V1 -> V2:
>>     - was split into:
>>       - arm/ioreq: Introduce arch specific bits for IOREQ/DM features
>>       - xen/arm: Stick around in leave_hypervisor_to_guest until I/O has completed
>>     - update the author of a patch
>>     - update patch description
>>     - move a loop in leave_hypervisor_to_guest() to a separate patch
>>     - set IOREQ_SERVER disabled by default
>>     - remove already clarified /* XXX */
>>     - replace BUG() by ASSERT_UNREACHABLE() in handle_pio()
>>     - remove default case for handling the return value of try_handle_mmio()
>>     - remove struct hvm_domain, enum hvm_io_completion, struct hvm_vcpu_io,
>>       struct hvm_vcpu from asm-arm/domain.h, these are common materials now
>>     - update everything according to the recent changes (IOREQ related function
>>       names don't contain "hvm" prefixes/infixes anymore, IOREQ related fields
>>       are part of common struct vcpu/domain now, etc)
>>
>> Changes V2 -> V3:
>>     - update patch according the "legacy interface" is x86 specific
>>     - add dummy arch hooks
>>     - remove dummy paging_mark_pfn_dirty()
>>     - don’t include <xen/domain_page.h> in common ioreq.c
>>     - don’t include <public/hvm/ioreq.h> in arch ioreq.h
>>     - remove #define ioreq_params(d, i)
>> ---
>> ---
>>   xen/arch/arm/Makefile           |   2 +
>>   xen/arch/arm/dm.c               |  34 ++++++++++
>>   xen/arch/arm/domain.c           |   9 +++
>>   xen/arch/arm/io.c               |  11 +++-
>>   xen/arch/arm/ioreq.c            | 141 ++++++++++++++++++++++++++++++++++++++++
>>   xen/arch/arm/traps.c            |  13 ++++
>>   xen/include/asm-arm/domain.h    |   3 +
>>   xen/include/asm-arm/hvm/ioreq.h | 139 +++++++++++++++++++++++++++++++++++++++
>>   xen/include/asm-arm/mmio.h      |   1 +
>>   9 files changed, 352 insertions(+), 1 deletion(-)
>>   create mode 100644 xen/arch/arm/dm.c
>>   create mode 100644 xen/arch/arm/ioreq.c
>>   create mode 100644 xen/include/asm-arm/hvm/ioreq.h
>>
>> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
>> index 296c5e6..c3ff454 100644
>> --- a/xen/arch/arm/Makefile
>> +++ b/xen/arch/arm/Makefile
>> @@ -13,6 +13,7 @@ obj-y += cpuerrata.o
>>   obj-y += cpufeature.o
>>   obj-y += decode.o
>>   obj-y += device.o
>> +obj-$(CONFIG_IOREQ_SERVER) += dm.o
>>   obj-y += domain.o
>>   obj-y += domain_build.init.o
>>   obj-y += domctl.o
>> @@ -27,6 +28,7 @@ obj-y += guest_atomics.o
>>   obj-y += guest_walk.o
>>   obj-y += hvm.o
>>   obj-y += io.o
>> +obj-$(CONFIG_IOREQ_SERVER) += ioreq.o
>>   obj-y += irq.o
>>   obj-y += kernel.init.o
>>   obj-$(CONFIG_LIVEPATCH) += livepatch.o
>> diff --git a/xen/arch/arm/dm.c b/xen/arch/arm/dm.c
>> new file mode 100644
>> index 0000000..5d3da37
>> --- /dev/null
>> +++ b/xen/arch/arm/dm.c
>> @@ -0,0 +1,34 @@
>> +/*
>> + * Copyright (c) 2019 Arm ltd.
>> + *
>> + * This program is free software; you can redistribute it and/or modify it
>> + * under the terms and conditions of the GNU General Public License,
>> + * version 2, as published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope it will be useful, but WITHOUT
>> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
>> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
>> + * more details.
>> + *
>> + * You should have received a copy of the GNU General Public License along with
>> + * this program; If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#include <xen/dm.h>
>> +#include <xen/hypercall.h>
>> +
>> +int arch_dm_op(struct xen_dm_op *op, struct domain *d,
>> +               const struct dmop_args *op_args, bool *const_op)
>> +{
>> +    return -EOPNOTSUPP;
>> +}
>> +
>> +/*
>> + * Local variables:
>> + * mode: C
>> + * c-file-style: "BSD"
>> + * c-basic-offset: 4
>> + * tab-width: 4
>> + * indent-tabs-mode: nil
>> + * End:
>> + */
>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>> index 18cafcd..8f55aba 100644
>> --- a/xen/arch/arm/domain.c
>> +++ b/xen/arch/arm/domain.c
>> @@ -15,6 +15,7 @@
>>   #include <xen/guest_access.h>
>>   #include <xen/hypercall.h>
>>   #include <xen/init.h>
>> +#include <xen/ioreq.h>
>>   #include <xen/lib.h>
>>   #include <xen/livepatch.h>
>>   #include <xen/sched.h>
>> @@ -696,6 +697,10 @@ int arch_domain_create(struct domain *d,
>>   
>>       ASSERT(config != NULL);
>>   
>> +#ifdef CONFIG_IOREQ_SERVER
>> +    ioreq_domain_init(d);
>> +#endif
>> +
>>       /* p2m_init relies on some value initialized by the IOMMU subsystem */
>>       if ( (rc = iommu_domain_init(d, config->iommu_opts)) != 0 )
>>           goto fail;
>> @@ -1014,6 +1019,10 @@ int domain_relinquish_resources(struct domain *d)
>>           if (ret )
>>               return ret;
>>   
>> +#ifdef CONFIG_IOREQ_SERVER
>> +        ioreq_server_destroy_all(d);
>> +#endif
>> +
>>       PROGRESS(xen):
>>           ret = relinquish_memory(d, &d->xenpage_list);
>>           if ( ret )
>> diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
>> index ae7ef96..f44cfd4 100644
>> --- a/xen/arch/arm/io.c
>> +++ b/xen/arch/arm/io.c
>> @@ -23,6 +23,7 @@
>>   #include <asm/cpuerrata.h>
>>   #include <asm/current.h>
>>   #include <asm/mmio.h>
>> +#include <asm/hvm/ioreq.h>
>>   
>>   #include "decode.h"
>>   
>> @@ -123,7 +124,15 @@ enum io_state try_handle_mmio(struct cpu_user_regs *regs,
>>   
>>       handler = find_mmio_handler(v->domain, info.gpa);
>>       if ( !handler )
>> -        return IO_UNHANDLED;
>> +    {
>> +        int rc;
>> +
>> +        rc = try_fwd_ioserv(regs, v, &info);
>> +        if ( rc == IO_HANDLED )
>> +            return handle_ioserv(regs, v);
>> +
>> +        return rc;
>> +    }
>>   
>>       /* All the instructions used on emulated MMIO region should be valid */
>>       if ( !dabt.valid )
>> diff --git a/xen/arch/arm/ioreq.c b/xen/arch/arm/ioreq.c
>> new file mode 100644
>> index 0000000..f08190c
>> --- /dev/null
>> +++ b/xen/arch/arm/ioreq.c
>> @@ -0,0 +1,141 @@
>> +/*
>> + * arm/ioreq.c: hardware virtual machine I/O emulation
>> + *
>> + * Copyright (c) 2019 Arm ltd.
>> + *
>> + * This program is free software; you can redistribute it and/or modify it
>> + * under the terms and conditions of the GNU General Public License,
>> + * version 2, as published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope it will be useful, but WITHOUT
>> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
>> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
>> + * more details.
>> + *
>> + * You should have received a copy of the GNU General Public License along with
>> + * this program; If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#include <xen/domain.h>
>> +#include <xen/ioreq.h>
>> +
>> +#include <asm/traps.h>
>> +
>> +#include <public/hvm/ioreq.h>
>> +
>> +enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v)
>> +{
>> +    const union hsr hsr = { .bits = regs->hsr };
>> +    const struct hsr_dabt dabt = hsr.dabt;
>> +    /* Code is similar to handle_read */
>> +    uint8_t size = (1 << dabt.size) * 8;
>> +    register_t r = v->io.req.data;
>> +
>> +    /* We are done with the IO */
>> +    v->io.req.state = STATE_IOREQ_NONE;
>> +
>> +    if ( dabt.write )
>> +        return IO_HANDLED;
>> +
>> +    /*
>> +     * Sign extend if required.
>> +     * Note that we expect the read handler to have zeroed the bits
>> +     * outside the requested access size.
>> +     */
>> +    if ( dabt.sign && (r & (1UL << (size - 1))) )
>> +    {
>> +        /*
>> +         * We are relying on register_t using the same as
>> +         * an unsigned long in order to keep the 32-bit assembly
>> +         * code smaller.
>> +         */
>> +        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
>> +        r |= (~0UL) << size;
>> +    }
>> +
>> +    set_user_reg(regs, dabt.reg, r);
> Could you introduce a set_user_reg_signextend static inline function
> that can be used both here and in handle_read?
Yes, this has already been done (it was requested by Julien). Please see
https://www.mail-archive.com/xen-devel@lists.xenproject.org/msg86986.html


>
>
>> +    return IO_HANDLED;
>> +}
>> +
>> +enum io_state try_fwd_ioserv(struct cpu_user_regs *regs,
>> +                             struct vcpu *v, mmio_info_t *info)
>> +{
>> +    struct vcpu_io *vio = &v->io;
>> +    ioreq_t p = {
>> +        .type = IOREQ_TYPE_COPY,
>> +        .addr = info->gpa,
>> +        .size = 1 << info->dabt.size,
>> +        .count = 1,
>> +        .dir = !info->dabt.write,
>> +        /*
>> +         * On x86, df is used by 'rep' instruction to tell the direction
>> +         * to iterate (forward or backward).
>> +         * On Arm, all the accesses to MMIO region will do a single
>> +         * memory access. So for now, we can safely always set to 0.
>> +         */
>> +        .df = 0,
>> +        .data = get_user_reg(regs, info->dabt.reg),
>> +        .state = STATE_IOREQ_READY,
>> +    };
>> +    struct ioreq_server *s = NULL;
>> +    enum io_state rc;
>> +
>> +    switch ( vio->req.state )
>> +    {
>> +    case STATE_IOREQ_NONE:
>> +        break;
>> +
>> +    case STATE_IORESP_READY:
>> +        return IO_HANDLED;
>> +
>> +    default:
>> +        gdprintk(XENLOG_ERR, "wrong state %u\n", vio->req.state);
>> +        return IO_ABORT;
>> +    }
>> +
>> +    s = ioreq_server_select(v->domain, &p);
>> +    if ( !s )
>> +        return IO_UNHANDLED;
>> +
>> +    if ( !info->dabt.valid )
>> +        return IO_ABORT;
>> +
>> +    vio->req = p;
>> +
>> +    rc = ioreq_send(s, &p, 0);
>> +    if ( rc != IO_RETRY || v->domain->is_shutting_down )
>> +        vio->req.state = STATE_IOREQ_NONE;
>> +    else if ( !ioreq_needs_completion(&vio->req) )
>> +        rc = IO_HANDLED;
>> +    else
>> +        vio->completion = IO_mmio_completion;
>> +
>> +    return rc;
>> +}
>> +
>> +bool ioreq_complete_mmio(void)
>> +{
>> +    struct vcpu *v = current;
>> +    struct cpu_user_regs *regs = guest_cpu_user_regs();
>> +    const union hsr hsr = { .bits = regs->hsr };
>> +    paddr_t addr = v->io.req.addr;
>> +
>> +    if ( try_handle_mmio(regs, hsr, addr) == IO_HANDLED )
>> +    {
>> +        advance_pc(regs, hsr);
>> +        return true;
>> +    }
>> +
>> +    return false;
>> +}
>> +
>> +/*
>> + * Local variables:
>> + * mode: C
>> + * c-file-style: "BSD"
>> + * c-basic-offset: 4
>> + * tab-width: 4
>> + * indent-tabs-mode: nil
>> + * End:
>> + */
>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>> index 22bd1bd..036b13f 100644
>> --- a/xen/arch/arm/traps.c
>> +++ b/xen/arch/arm/traps.c
>> @@ -21,6 +21,7 @@
>>   #include <xen/hypercall.h>
>>   #include <xen/init.h>
>>   #include <xen/iocap.h>
>> +#include <xen/ioreq.h>
>>   #include <xen/irq.h>
>>   #include <xen/lib.h>
>>   #include <xen/mem_access.h>
>> @@ -1385,6 +1386,9 @@ static arm_hypercall_t arm_hypercall_table[] = {
>>   #ifdef CONFIG_HYPFS
>>       HYPERCALL(hypfs_op, 5),
>>   #endif
>> +#ifdef CONFIG_IOREQ_SERVER
>> +    HYPERCALL(dm_op, 3),
>> +#endif
>>   };
>>   
>>   #ifndef NDEBUG
>> @@ -1956,6 +1960,9 @@ static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
>>               case IO_HANDLED:
>>                   advance_pc(regs, hsr);
>>                   return;
>> +            case IO_RETRY:
>> +                /* finish later */
>> +                return;
>>               case IO_UNHANDLED:
>>                   /* IO unhandled, try another way to handle it. */
>>                   break;
>> @@ -2254,6 +2261,12 @@ static void check_for_vcpu_work(void)
>>   {
>>       struct vcpu *v = current;
>>   
>> +#ifdef CONFIG_IOREQ_SERVER
>> +    local_irq_enable();
>> +    vcpu_ioreq_handle_completion(v);
>> +    local_irq_disable();
>> +#endif
>> +
>>       if ( likely(!v->arch.need_flush_to_ram) )
>>           return;
>>   
>> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
>> index 6819a3b..c235e5b 100644
>> --- a/xen/include/asm-arm/domain.h
>> +++ b/xen/include/asm-arm/domain.h
>> @@ -10,6 +10,7 @@
>>   #include <asm/gic.h>
>>   #include <asm/vgic.h>
>>   #include <asm/vpl011.h>
>> +#include <public/hvm/dm_op.h>
>>   #include <public/hvm/params.h>
>>   
>>   struct hvm_domain
>> @@ -262,6 +263,8 @@ static inline void arch_vcpu_block(struct vcpu *v) {}
>>   
>>   #define arch_vm_assist_valid_mask(d) (1UL << VMASST_TYPE_runstate_update_flag)
>>   
>> +#define has_vpci(d)    ({ (void)(d); false; })
>> +
>>   #endif /* __ASM_DOMAIN_H__ */
>>   
>>   /*
>> diff --git a/xen/include/asm-arm/hvm/ioreq.h b/xen/include/asm-arm/hvm/ioreq.h
>> new file mode 100644
>> index 0000000..2bffc7a
>> --- /dev/null
>> +++ b/xen/include/asm-arm/hvm/ioreq.h
>> @@ -0,0 +1,139 @@
>> +/*
>> + * hvm.h: Hardware virtual machine assist interface definitions.
>> + *
>> + * Copyright (c) 2016 Citrix Systems Inc.
>> + * Copyright (c) 2019 Arm ltd.
>> + *
>> + * This program is free software; you can redistribute it and/or modify it
>> + * under the terms and conditions of the GNU General Public License,
>> + * version 2, as published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope it will be useful, but WITHOUT
>> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
>> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
>> + * more details.
>> + *
>> + * You should have received a copy of the GNU General Public License along with
>> + * this program; If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#ifndef __ASM_ARM_HVM_IOREQ_H__
>> +#define __ASM_ARM_HVM_IOREQ_H__
>> +
>> +#include <xen/ioreq.h>
>> +
>> +#ifdef CONFIG_IOREQ_SERVER
>> +enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v);
>> +enum io_state try_fwd_ioserv(struct cpu_user_regs *regs,
>> +                             struct vcpu *v, mmio_info_t *info);
>> +#else
>> +static inline enum io_state handle_ioserv(struct cpu_user_regs *regs,
>> +                                          struct vcpu *v)
>> +{
>> +    return IO_UNHANDLED;
>> +}
>> +
>> +static inline enum io_state try_fwd_ioserv(struct cpu_user_regs *regs,
>> +                                           struct vcpu *v, mmio_info_t *info)
>> +{
>> +    return IO_UNHANDLED;
>> +}
>> +#endif
> If we are providing stub functions, then we can also provide stub
> functions for:
>
> ioreq_domain_init
> ioreq_server_destroy_all
>
> and avoid the ifdefs.
I got your point. These are common IOREQ interface functions whose
declarations live in the common header. Should I provide the
stubs in the common ioreq.h?


>
>
>> +bool ioreq_complete_mmio(void);
>> +
>> +static inline bool handle_pio(uint16_t port, unsigned int size, int dir)
>> +{
>> +    /*
>> +     * TODO: For Arm64, the main user will be PCI. So this should be
>> +     * implemented when we add support for vPCI.
>> +     */
>> +    ASSERT_UNREACHABLE();
>> +    return true;
>> +}
>> +
>> +static inline void msix_write_completion(struct vcpu *v)
>> +{
>> +}
>> +
>> +static inline bool arch_vcpu_ioreq_completion(enum io_completion io_completion)
>> +{
>> +    ASSERT_UNREACHABLE();
>> +    return true;
>> +}
>> +
>> +/*
>> + * The "legacy" mechanism of mapping magic pages for the IOREQ servers
>> + * is x86 specific, so the following hooks don't need to be implemented on Arm:
>> + * - arch_ioreq_server_map_pages
>> + * - arch_ioreq_server_unmap_pages
>> + * - arch_ioreq_server_enable
>> + * - arch_ioreq_server_disable
>> + */
>> +static inline int arch_ioreq_server_map_pages(struct ioreq_server *s)
>> +{
>> +    return -EOPNOTSUPP;
>> +}
>> +
>> +static inline void arch_ioreq_server_unmap_pages(struct ioreq_server *s)
>> +{
>> +}
>> +
>> +static inline void arch_ioreq_server_enable(struct ioreq_server *s)
>> +{
>> +}
>> +
>> +static inline void arch_ioreq_server_disable(struct ioreq_server *s)
>> +{
>> +}
>> +
>> +static inline void arch_ioreq_server_destroy(struct ioreq_server *s)
>> +{
>> +}
>> +
>> +static inline int arch_ioreq_server_map_mem_type(struct domain *d,
>> +                                                 struct ioreq_server *s,
>> +                                                 uint32_t flags)
>> +{
>> +    return -EOPNOTSUPP;
>> +}
>> +
>> +static inline bool arch_ioreq_server_destroy_all(struct domain *d)
>> +{
>> +    return true;
>> +}
>> +
>> +static inline int arch_ioreq_server_get_type_addr(const struct domain *d,
>> +                                                  const ioreq_t *p,
>> +                                                  uint8_t *type,
>> +                                                  uint64_t *addr)
>> +{
>> +    if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
>> +        return -EINVAL;
>> +
>> +    *type = (p->type == IOREQ_TYPE_PIO) ?
>> +             XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
>> +    *addr = p->addr;
> This function is not used in this patch and PIOs are left unimplemented
> according to a few comments, so I am puzzled by this code here. Do we
> need it?
Yes. It is called from ioreq_server_select() (common/ioreq.c). I could
just skip the PIO case and always use
*type = XEN_DMOP_IO_RANGE_MEMORY, but I didn't want to diverge.


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Dec 09 23:03:41 2020
Subject: Re: [PATCH v3 1/7] xen/arm: Add ID registers and complete cpuinfo
To: Bertrand Marquis <bertrand.marquis@arm.com>,
 xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>
 <aab713989bec4dc843bd513c03b305c83028851b.1607524536.git.bertrand.marquis@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <62484fa0-fa86-523a-12e0-54d69934d791@xen.org>
Date: Wed, 9 Dec 2020 23:03:32 +0000
In-Reply-To: <aab713989bec4dc843bd513c03b305c83028851b.1607524536.git.bertrand.marquis@arm.com>

Hi Bertrand,

On 09/12/2020 16:30, Bertrand Marquis wrote:
> Add definition and entries in cpuinfo for ID registers introduced in
> newer Arm Architecture reference manual:
> - ID_PFR2: processor feature register 2
> - ID_DFR1: debug feature register 1
> - ID_MMFR4 and ID_MMFR5: Memory model feature registers 4 and 5
> - ID_ISA6: ISA Feature register 6
> Add more bitfield definitions in PFR fields of cpuinfo.
> Add MVFR2 register definition for aarch32.
> Add mvfr values in cpuinfo.
> Add some register definitions for arm64 in sysregs as some are not
> always known by compilers.
> Initialize the new values added in cpuinfo in identify_cpu during init.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> 
> ---
> Changes in V2:
>    Fix dbg32 table size and add proper initialisation of the second entry
>    of the table by reading ID_DFR1 register.
> Changes in V3:
>    Fix typo in commit title
>    Add MVFR2 definition and handling on aarch32 and remove specific case
>    for mvfr field in cpuinfo (now the same on arm64 and arm32).
>    Add MMFR4 definition if not known by the compiler.
> 
> ---
>   xen/arch/arm/cpufeature.c           | 18 ++++++++++
>   xen/include/asm-arm/arm64/sysregs.h | 28 +++++++++++++++
>   xen/include/asm-arm/cpregs.h        | 12 +++++++
>   xen/include/asm-arm/cpufeature.h    | 56 ++++++++++++++++++++++++-----
>   4 files changed, 105 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
> index 44126dbf07..bc7ee5ac95 100644
> --- a/xen/arch/arm/cpufeature.c
> +++ b/xen/arch/arm/cpufeature.c
> @@ -114,15 +114,20 @@ void identify_cpu(struct cpuinfo_arm *c)
>   
>           c->mm64.bits[0]  = READ_SYSREG64(ID_AA64MMFR0_EL1);
>           c->mm64.bits[1]  = READ_SYSREG64(ID_AA64MMFR1_EL1);
> +        c->mm64.bits[2]  = READ_SYSREG64(ID_AA64MMFR2_EL1);
>   
>           c->isa64.bits[0] = READ_SYSREG64(ID_AA64ISAR0_EL1);
>           c->isa64.bits[1] = READ_SYSREG64(ID_AA64ISAR1_EL1);
> +
> +        c->zfr64.bits[0] = READ_SYSREG64(ID_AA64ZFR0_EL1);
>   #endif
>   
>           c->pfr32.bits[0] = READ_SYSREG32(ID_PFR0_EL1);
>           c->pfr32.bits[1] = READ_SYSREG32(ID_PFR1_EL1);
> +        c->pfr32.bits[2] = READ_SYSREG32(ID_PFR2_EL1);
>   
>           c->dbg32.bits[0] = READ_SYSREG32(ID_DFR0_EL1);
> +        c->dbg32.bits[1] = READ_SYSREG32(ID_DFR1_EL1);
>   
>           c->aux32.bits[0] = READ_SYSREG32(ID_AFR0_EL1);
>   
> @@ -130,6 +135,8 @@ void identify_cpu(struct cpuinfo_arm *c)
>           c->mm32.bits[1]  = READ_SYSREG32(ID_MMFR1_EL1);
>           c->mm32.bits[2]  = READ_SYSREG32(ID_MMFR2_EL1);
>           c->mm32.bits[3]  = READ_SYSREG32(ID_MMFR3_EL1);
> +        c->mm32.bits[4]  = READ_SYSREG32(ID_MMFR4_EL1);
> +        c->mm32.bits[5]  = READ_SYSREG32(ID_MMFR5_EL1);

Please don't introduce any more uses of READ_SYSREG32(); they are wrong
on Armv8 because system registers are always 64-bit.

>   
>           c->isa32.bits[0] = READ_SYSREG32(ID_ISAR0_EL1);
>           c->isa32.bits[1] = READ_SYSREG32(ID_ISAR1_EL1);
> @@ -137,6 +144,17 @@ void identify_cpu(struct cpuinfo_arm *c)
>           c->isa32.bits[3] = READ_SYSREG32(ID_ISAR3_EL1);
>           c->isa32.bits[4] = READ_SYSREG32(ID_ISAR4_EL1);
>           c->isa32.bits[5] = READ_SYSREG32(ID_ISAR5_EL1);
> +        c->isa32.bits[6] = READ_SYSREG32(ID_ISAR6_EL1);
> +
> +#ifdef CONFIG_ARM_64
> +        c->mvfr.bits[0] = READ_SYSREG64(MVFR0_EL1);
> +        c->mvfr.bits[1] = READ_SYSREG64(MVFR1_EL1);
> +        c->mvfr.bits[2] = READ_SYSREG64(MVFR2_EL1);
> +#else
> +        c->mvfr.bits[0] = READ_CP32(MVFR0);
> +        c->mvfr.bits[1] = READ_CP32(MVFR1);
> +        c->mvfr.bits[2] = READ_CP32(MVFR2);
> +#endif

READ_SYSREG() will do the job to either use READ_SYSREG64() or 
READ_CP32() depending on the arch used.

>   }
>   
>   /*
> diff --git a/xen/include/asm-arm/arm64/sysregs.h b/xen/include/asm-arm/arm64/sysregs.h
> index c60029d38f..077fd95fb7 100644
> --- a/xen/include/asm-arm/arm64/sysregs.h
> +++ b/xen/include/asm-arm/arm64/sysregs.h
> @@ -57,6 +57,34 @@
>   #define ICH_AP1R2_EL2             __AP1Rx_EL2(2)
>   #define ICH_AP1R3_EL2             __AP1Rx_EL2(3)
>   
> +/*
> + * Define ID coprocessor registers if they are not
> + * already defined by the compiler.
> + *
> + * Values picked from linux kernel
> + */
> +#ifndef ID_AA64MMFR2_EL1

I am a bit puzzled as to how this is meant to work. Will the libc/compiler
headers define ID_AA64MMFR2_EL1?

> +#define ID_AA64MMFR2_EL1            S3_0_C0_C7_2
> +#endif
> +#ifndef ID_PFR2_EL1
> +#define ID_PFR2_EL1                 S3_0_C0_C3_4
> +#endif
> +#ifndef ID_MMFR4_EL1
> +#define ID_MMFR4_EL1                S3_0_C0_C2_6
> +#endif
> +#ifndef ID_MMFR5_EL1
> +#define ID_MMFR5_EL1                S3_0_C0_C3_6
> +#endif
> +#ifndef ID_ISAR6_EL1
> +#define ID_ISAR6_EL1                S3_0_C0_C2_7
> +#endif
> +#ifndef ID_AA64ZFR0_EL1
> +#define ID_AA64ZFR0_EL1             S3_0_C0_C4_4
> +#endif
> +#ifndef ID_DFR1_EL1
> +#define ID_DFR1_EL1                 S3_0_C0_C3_5
> +#endif
> +
>   /* Access to system registers */
>   
>   #define READ_SYSREG32(name) ((uint32_t)READ_SYSREG64(name))
> diff --git a/xen/include/asm-arm/cpregs.h b/xen/include/asm-arm/cpregs.h
> index 8fd344146e..2690ddeb7a 100644
> --- a/xen/include/asm-arm/cpregs.h
> +++ b/xen/include/asm-arm/cpregs.h
> @@ -63,6 +63,8 @@
>   #define FPSID           p10,7,c0,c0,0   /* Floating-Point System ID Register */
>   #define FPSCR           p10,7,c1,c0,0   /* Floating-Point Status and Control Register */
>   #define MVFR0           p10,7,c7,c0,0   /* Media and VFP Feature Register 0 */
> +#define MVFR1           p10,7,c6,c0,0   /* Media and VFP Feature Register 1 */
> +#define MVFR2           p10,7,c5,c0,0   /* Media and VFP Feature Register 2 */
>   #define FPEXC           p10,7,c8,c0,0   /* Floating-Point Exception Control Register */
>   #define FPINST          p10,7,c9,c0,0   /* Floating-Point Instruction Register */
>   #define FPINST2         p10,7,c10,c0,0  /* Floating-point Instruction Register 2 */
> @@ -108,18 +110,23 @@
>   #define MPIDR           p15,0,c0,c0,5   /* Multiprocessor Affinity Register */
>   #define ID_PFR0         p15,0,c0,c1,0   /* Processor Feature Register 0 */
>   #define ID_PFR1         p15,0,c0,c1,1   /* Processor Feature Register 1 */
> +#define ID_PFR2         p15,0,c0,c3,4   /* Processor Feature Register 2 */
>   #define ID_DFR0         p15,0,c0,c1,2   /* Debug Feature Register 0 */
> +#define ID_DFR1         p15,0,c0,c3,5   /* Debug Feature Register 1 */
>   #define ID_AFR0         p15,0,c0,c1,3   /* Auxiliary Feature Register 0 */
>   #define ID_MMFR0        p15,0,c0,c1,4   /* Memory Model Feature Register 0 */
>   #define ID_MMFR1        p15,0,c0,c1,5   /* Memory Model Feature Register 1 */
>   #define ID_MMFR2        p15,0,c0,c1,6   /* Memory Model Feature Register 2 */
>   #define ID_MMFR3        p15,0,c0,c1,7   /* Memory Model Feature Register 3 */
> +#define ID_MMFR4        p15,0,c0,c2,6   /* Memory Model Feature Register 4 */
> +#define ID_MMFR5        p15,0,c0,c3,6   /* Memory Model Feature Register 5 */
>   #define ID_ISAR0        p15,0,c0,c2,0   /* ISA Feature Register 0 */
>   #define ID_ISAR1        p15,0,c0,c2,1   /* ISA Feature Register 1 */
>   #define ID_ISAR2        p15,0,c0,c2,2   /* ISA Feature Register 2 */
>   #define ID_ISAR3        p15,0,c0,c2,3   /* ISA Feature Register 3 */
>   #define ID_ISAR4        p15,0,c0,c2,4   /* ISA Feature Register 4 */
>   #define ID_ISAR5        p15,0,c0,c2,5   /* ISA Feature Register 5 */
> +#define ID_ISAR6        p15,0,c0,c2,7   /* ISA Feature Register 6 */
>   #define CCSIDR          p15,1,c0,c0,0   /* Cache Size ID Registers */
>   #define CLIDR           p15,1,c0,c0,1   /* Cache Level ID Register */
>   #define CSSELR          p15,2,c0,c0,0   /* Cache Size Selection Register */
> @@ -312,18 +319,23 @@
>   #define HSTR_EL2                HSTR
>   #define ID_AFR0_EL1             ID_AFR0
>   #define ID_DFR0_EL1             ID_DFR0
> +#define ID_DFR1_EL1             ID_DFR1
>   #define ID_ISAR0_EL1            ID_ISAR0
>   #define ID_ISAR1_EL1            ID_ISAR1
>   #define ID_ISAR2_EL1            ID_ISAR2
>   #define ID_ISAR3_EL1            ID_ISAR3
>   #define ID_ISAR4_EL1            ID_ISAR4
>   #define ID_ISAR5_EL1            ID_ISAR5
> +#define ID_ISAR6_EL1            ID_ISAR6
>   #define ID_MMFR0_EL1            ID_MMFR0
>   #define ID_MMFR1_EL1            ID_MMFR1
>   #define ID_MMFR2_EL1            ID_MMFR2
>   #define ID_MMFR3_EL1            ID_MMFR3
> +#define ID_MMFR4_EL1            ID_MMFR4
> +#define ID_MMFR5_EL1            ID_MMFR5
>   #define ID_PFR0_EL1             ID_PFR0
>   #define ID_PFR1_EL1             ID_PFR1
> +#define ID_PFR2_EL1             ID_PFR2
>   #define IFSR32_EL2              IFSR
>   #define MDCR_EL2                HDCR
>   #define MIDR_EL1                MIDR
> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
> index c7b5052992..6cf83d775b 100644
> --- a/xen/include/asm-arm/cpufeature.h
> +++ b/xen/include/asm-arm/cpufeature.h
> @@ -148,6 +148,7 @@ struct cpuinfo_arm {
>       union {
>           uint64_t bits[2];
>           struct {
> +            /* PFR0 */
>               unsigned long el0:4;
>               unsigned long el1:4;
>               unsigned long el2:4;
> @@ -155,9 +156,23 @@ struct cpuinfo_arm {
>               unsigned long fp:4;   /* Floating Point */
>               unsigned long simd:4; /* Advanced SIMD */
>               unsigned long gic:4;  /* GIC support */
> -            unsigned long __res0:28;
> +            unsigned long ras:4;
> +            unsigned long sve:4;
> +            unsigned long sel2:4;
> +            unsigned long mpam:4;
> +            unsigned long amu:4;
> +            unsigned long dit:4;
> +            unsigned long __res0:4;
>               unsigned long csv2:4;
> -            unsigned long __res1:4;
> +            unsigned long cvs3:4;
> +
> +            /* PFR1 */
> +            unsigned long bt:4;
> +            unsigned long ssbs:4;
> +            unsigned long mte:4;
> +            unsigned long ras_frac:4;
> +            unsigned long mpam_frac:4;
> +            unsigned long __res1:44;
>           };
>       } pfr64;
>   
> @@ -170,7 +185,7 @@ struct cpuinfo_arm {
>       } aux64;
>   
>       union {
> -        uint64_t bits[2];
> +        uint64_t bits[3];
>           struct {
>               unsigned long pa_range:4;
>               unsigned long asid_bits:4;
> @@ -190,6 +205,8 @@ struct cpuinfo_arm {
>               unsigned long pan:4;
>               unsigned long __res1:8;
>               unsigned long __res2:32;
> +
> +            unsigned long __res3:64;
>           };
>       } mm64;
>   
> @@ -197,6 +214,10 @@ struct cpuinfo_arm {
>           uint64_t bits[2];
>       } isa64;
>   
> +    struct {
> +        uint64_t bits[1];
> +    } zfr64;
> +
>   #endif
>   
>       /*
> @@ -204,25 +225,38 @@ struct cpuinfo_arm {
>        * when running in 32-bit mode.
>        */
>       union {
> -        uint32_t bits[2];
> +        uint32_t bits[3];
>           struct {
> +            /* PFR0 */
>               unsigned long arm:4;
>               unsigned long thumb:4;
>               unsigned long jazelle:4;
>               unsigned long thumbee:4;
> -            unsigned long __res0:16;
> +            unsigned long csv2:4;
> +            unsigned long amu:4;
> +            unsigned long dit:4;
> +            unsigned long ras:4;
>   
> +            /* PFR1 */
>               unsigned long progmodel:4;
>               unsigned long security:4;
>               unsigned long mprofile:4;
>               unsigned long virt:4;
>               unsigned long gentimer:4;
> -            unsigned long __res1:12;
> +            unsigned long sec_frac:4;
> +            unsigned long virt_frac:4;
> +            unsigned long gic:4;
> +
> +            /* PFR2 */
> +            unsigned long csv3:4;
> +            unsigned long ssbs:4;
> +            unsigned long ras_frac:4;
> +            unsigned long __res2:20;
>           };
>       } pfr32;
>   
>       struct {
> -        uint32_t bits[1];
> +        uint32_t bits[2];
>       } dbg32;
>   
>       struct {
> @@ -230,12 +264,16 @@ struct cpuinfo_arm {
>       } aux32;
>   
>       struct {
> -        uint32_t bits[4];
> +        uint32_t bits[6];
>       } mm32;
>   
>       struct {
> -        uint32_t bits[6];
> +        uint32_t bits[7];
>       } isa32;
> +
> +    struct {
> +        uint64_t bits[3];

Shouldn't this be register_t?

> +    } mvfr;
>   };
>   
>   extern struct cpuinfo_arm boot_cpu_data;
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 23:06:31 2020
Subject: Re: [PATCH v3 2/7] xen/arm: Add arm64 ID registers definitions
To: Bertrand Marquis <bertrand.marquis@arm.com>,
 xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>
 <96a970e5e5d2f1b1bd0e50327857de6a8c8441f7.1607524536.git.bertrand.marquis@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <af02eefb-5846-d32b-22e5-65763e6f51e0@xen.org>
Date: Wed, 9 Dec 2020 23:06:13 +0000
In-Reply-To: <96a970e5e5d2f1b1bd0e50327857de6a8c8441f7.1607524536.git.bertrand.marquis@arm.com>

Hi Bertrand,

On 09/12/2020 16:30, Bertrand Marquis wrote:
> Add coprocessor registers definitions for all ID registers trapped
> through the TID3 bit of HSR.
> Those are the ones that will be emulated in Xen to only publish to guests
> the features that are supported by Xen and that are accessible to
> guests.
> Also define a case to catch all reserved registers that should be
> handled as RAZ.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
> Changes in V2: Rebase
> Changes in V3:
>    Add case definition for reserved registers.
> 
> ---
>   xen/include/asm-arm/arm64/hsr.h | 66 +++++++++++++++++++++++++++++++++
>   1 file changed, 66 insertions(+)
> 
> diff --git a/xen/include/asm-arm/arm64/hsr.h b/xen/include/asm-arm/arm64/hsr.h
> index ca931dd2fe..ffe0f0007e 100644
> --- a/xen/include/asm-arm/arm64/hsr.h
> +++ b/xen/include/asm-arm/arm64/hsr.h
> @@ -110,6 +110,72 @@
>   #define HSR_SYSREG_CNTP_CTL_EL0   HSR_SYSREG(3,3,c14,c2,1)
>   #define HSR_SYSREG_CNTP_CVAL_EL0  HSR_SYSREG(3,3,c14,c2,2)
>   
> +/* Those registers are used when HCR_EL2.TID3 is set */
> +#define HSR_SYSREG_ID_PFR0_EL1    HSR_SYSREG(3,0,c0,c1,0)
> +#define HSR_SYSREG_ID_PFR1_EL1    HSR_SYSREG(3,0,c0,c1,1)
> +#define HSR_SYSREG_ID_PFR2_EL1    HSR_SYSREG(3,0,c0,c3,4)
> +#define HSR_SYSREG_ID_DFR0_EL1    HSR_SYSREG(3,0,c0,c1,2)
> +#define HSR_SYSREG_ID_DFR1_EL1    HSR_SYSREG(3,0,c0,c3,5)
> +#define HSR_SYSREG_ID_AFR0_EL1    HSR_SYSREG(3,0,c0,c1,3)
> +#define HSR_SYSREG_ID_MMFR0_EL1   HSR_SYSREG(3,0,c0,c1,4)
> +#define HSR_SYSREG_ID_MMFR1_EL1   HSR_SYSREG(3,0,c0,c1,5)
> +#define HSR_SYSREG_ID_MMFR2_EL1   HSR_SYSREG(3,0,c0,c1,6)
> +#define HSR_SYSREG_ID_MMFR3_EL1   HSR_SYSREG(3,0,c0,c1,7)
> +#define HSR_SYSREG_ID_MMFR4_EL1   HSR_SYSREG(3,0,c0,c2,6)
> +#define HSR_SYSREG_ID_MMFR5_EL1   HSR_SYSREG(3,0,c0,c3,6)
> +#define HSR_SYSREG_ID_ISAR0_EL1   HSR_SYSREG(3,0,c0,c2,0)
> +#define HSR_SYSREG_ID_ISAR1_EL1   HSR_SYSREG(3,0,c0,c2,1)
> +#define HSR_SYSREG_ID_ISAR2_EL1   HSR_SYSREG(3,0,c0,c2,2)
> +#define HSR_SYSREG_ID_ISAR3_EL1   HSR_SYSREG(3,0,c0,c2,3)
> +#define HSR_SYSREG_ID_ISAR4_EL1   HSR_SYSREG(3,0,c0,c2,4)
> +#define HSR_SYSREG_ID_ISAR5_EL1   HSR_SYSREG(3,0,c0,c2,5)
> +#define HSR_SYSREG_ID_ISAR6_EL1   HSR_SYSREG(3,0,c0,c2,7)
> +#define HSR_SYSREG_MVFR0_EL1      HSR_SYSREG(3,0,c0,c3,0)
> +#define HSR_SYSREG_MVFR1_EL1      HSR_SYSREG(3,0,c0,c3,1)
> +#define HSR_SYSREG_MVFR2_EL1      HSR_SYSREG(3,0,c0,c3,2)
> +
> +#define HSR_SYSREG_ID_AA64PFR0_EL1   HSR_SYSREG(3,0,c0,c4,0)
> +#define HSR_SYSREG_ID_AA64PFR1_EL1   HSR_SYSREG(3,0,c0,c4,1)
> +#define HSR_SYSREG_ID_AA64DFR0_EL1   HSR_SYSREG(3,0,c0,c5,0)
> +#define HSR_SYSREG_ID_AA64DFR1_EL1   HSR_SYSREG(3,0,c0,c5,1)
> +#define HSR_SYSREG_ID_AA64ISAR0_EL1  HSR_SYSREG(3,0,c0,c6,0)
> +#define HSR_SYSREG_ID_AA64ISAR1_EL1  HSR_SYSREG(3,0,c0,c6,1)
> +#define HSR_SYSREG_ID_AA64MMFR0_EL1  HSR_SYSREG(3,0,c0,c7,0)
> +#define HSR_SYSREG_ID_AA64MMFR1_EL1  HSR_SYSREG(3,0,c0,c7,1)
> +#define HSR_SYSREG_ID_AA64MMFR2_EL1  HSR_SYSREG(3,0,c0,c7,2)
> +#define HSR_SYSREG_ID_AA64AFR0_EL1   HSR_SYSREG(3,0,c0,c5,4)
> +#define HSR_SYSREG_ID_AA64AFR1_EL1   HSR_SYSREG(3,0,c0,c5,5)
> +#define HSR_SYSREG_ID_AA64ZFR0_EL1   HSR_SYSREG(3,0,c0,c4,4)
> +
> +/*
> + * Those cases are catching all Reserved registers trapped by TID3 which
> + * currently have no assignment.
> + * HCR.TID3 is trapping all registers in the group 3:
> + * Op0 == 3, op1 == 0, CRn == c0,CRm == {c1-c7}, op2 == {0-7}.
> + */
> +#define HSR_SYSREG_TID3_RESERVED_CASE  case HSR_SYSREG(3,0,c0,c3,3): \
> +                                       case HSR_SYSREG(3,0,c0,c3,7): \
> +                                       case HSR_SYSREG(3,0,c0,c4,2): \
> +                                       case HSR_SYSREG(3,0,c0,c4,3): \
> +                                       case HSR_SYSREG(3,0,c0,c4,5): \
> +                                       case HSR_SYSREG(3,0,c0,c4,6): \
> +                                       case HSR_SYSREG(3,0,c0,c4,7): \
> +                                       case HSR_SYSREG(3,0,c0,c5,2): \
> +                                       case HSR_SYSREG(3,0,c0,c5,3): \
> +                                       case HSR_SYSREG(3,0,c0,c5,6): \
> +                                       case HSR_SYSREG(3,0,c0,c5,7): \
> +                                       case HSR_SYSREG(3,0,c0,c6,2): \
> +                                       case HSR_SYSREG(3,0,c0,c6,3): \
> +                                       case HSR_SYSREG(3,0,c0,c6,4): \
> +                                       case HSR_SYSREG(3,0,c0,c6,5): \
> +                                       case HSR_SYSREG(3,0,c0,c6,6): \
> +                                       case HSR_SYSREG(3,0,c0,c6,7): \
> +                                       case HSR_SYSREG(3,0,c0,c7,3): \
> +                                       case HSR_SYSREG(3,0,c0,c7,4): \
> +                                       case HSR_SYSREG(3,0,c0,c7,5): \
> +                                       case HSR_SYSREG(3,0,c0,c7,6): \
> +                                       case HSR_SYSREG(3,0,c0,c7,7)

I don't like the idea of defining the list of cases in a header that is 
used by multiple sources. Please define it directly in the source file 
that uses it.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 23:09:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 23:09:46 +0000
Subject: Re: [PATCH v3 3/7] xen/arm: create a cpuinfo structure for guest
To: Bertrand Marquis <bertrand.marquis@arm.com>,
 xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>
 <33f39e7f521e6f73a0dba57a8be9fb50656e1807.1607524536.git.bertrand.marquis@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <61b2677c-bc0d-af0b-95f8-f8de76a20856@xen.org>
Date: Wed, 9 Dec 2020 23:09:38 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <33f39e7f521e6f73a0dba57a8be9fb50656e1807.1607524536.git.bertrand.marquis@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Bertrand,

On 09/12/2020 16:30, Bertrand Marquis wrote:
> Create a cpuinfo structure for guests and mask in it the features that
> we do not support in Xen or that we do not want to publish to guests.
> 
> Modify some values in the cpuinfo structure for guests to mask some
> features which we do not want to allow to guests (like AMU) or we do not
> support (like SVE).
> 
> The code tries to group together register modifications for the same
> feature so that, in the long term, a feature can easily be enabled or
> disabled depending on user parameters, or other register modifications
> can be added in the same place (like enabling/disabling HCR bits).
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
> Changes in V2: Rebase
> Changes in V3:
>    Use current_cpu_data info instead of recalling identify_cpu
> 
> ---
>   xen/arch/arm/cpufeature.c        | 51 ++++++++++++++++++++++++++++++++
>   xen/include/asm-arm/cpufeature.h |  2 ++
>   2 files changed, 53 insertions(+)
> 
> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
> index bc7ee5ac95..7255383504 100644
> --- a/xen/arch/arm/cpufeature.c
> +++ b/xen/arch/arm/cpufeature.c
> @@ -24,6 +24,8 @@
>   
>   DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
>   
> +struct cpuinfo_arm __read_mostly guest_cpuinfo;
> +
>   void update_cpu_capabilities(const struct arm_cpu_capabilities *caps,
>                                const char *info)
>   {
> @@ -157,6 +159,55 @@ void identify_cpu(struct cpuinfo_arm *c)
>   #endif
>   }
>   
> +/*
> + * This function is creating a cpuinfo structure with values modified to mask
> + * all cpu features that should not be published to guest.
> + * The created structure is then used to provide ID registers values to guests.
> + */
> +static int __init create_guest_cpuinfo(void)
> +{
> +    /*
> +     * TODO: The code is currently using only the features detected on the boot
> +     * core. In the long term we should try to compute values containing only
> +     * features supported by all cores.
> +     */
> +    guest_cpuinfo = current_cpu_data;

It would be more logical to use boot_cpu_data as this would be easier to 
match with your comment.

> +
> +#ifdef CONFIG_ARM_64
> +    /* Disable MPAM as xen does not support it */
> +    guest_cpuinfo.pfr64.mpam = 0;
> +    guest_cpuinfo.pfr64.mpam_frac = 0;
> +
> +    /* Disable SVE as Xen does not support it */
> +    guest_cpuinfo.pfr64.sve = 0;
> +    guest_cpuinfo.zfr64.bits[0] = 0;
> +
> +    /* Disable MTE as Xen does not support it */
> +    guest_cpuinfo.pfr64.mte = 0;
> +#endif
> +
> +    /* Disable AMU */
> +#ifdef CONFIG_ARM_64
> +    guest_cpuinfo.pfr64.amu = 0;
> +#endif
> +    guest_cpuinfo.pfr32.amu = 0;
> +
> +    /* Disable RAS as Xen does not support it */
> +#ifdef CONFIG_ARM_64
> +    guest_cpuinfo.pfr64.ras = 0;
> +    guest_cpuinfo.pfr64.ras_frac = 0;
> +#endif
> +    guest_cpuinfo.pfr32.ras = 0;
> +    guest_cpuinfo.pfr32.ras_frac = 0;

How about all the fields that are currently marked as RES0/RES1? 
Shouldn't we make sure they will stay like that even if a newer 
architecture uses them?

> +
> +    return 0;
> +}
> +/*
> + * This function needs to be run after all smp are started to have
> + * cpuinfo structures for all cores.
> + */
> +__initcall(create_guest_cpuinfo);
> +
>   /*
>    * Local variables:
>    * mode: C
> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
> index 6cf83d775b..10b62bd324 100644
> --- a/xen/include/asm-arm/cpufeature.h
> +++ b/xen/include/asm-arm/cpufeature.h
> @@ -283,6 +283,8 @@ extern void identify_cpu(struct cpuinfo_arm *);
>   extern struct cpuinfo_arm cpu_data[];
>   #define current_cpu_data cpu_data[smp_processor_id()]
>   
> +extern struct cpuinfo_arm guest_cpuinfo;
> +
>   #endif /* __ASSEMBLY__ */
>   
>   #endif
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 23:13:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 23:13:28 +0000
Subject: Re: [PATCH v3 4/7] xen/arm: Add handler for ID registers on arm64
To: Bertrand Marquis <bertrand.marquis@arm.com>,
 xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>
 <e991b05af11d00627709caf847c5de99f487cab0.1607524536.git.bertrand.marquis@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <8a154f7c-f700-5b6f-5645-a122fec45d19@xen.org>
Date: Wed, 9 Dec 2020 23:13:23 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <e991b05af11d00627709caf847c5de99f487cab0.1607524536.git.bertrand.marquis@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 09/12/2020 16:30, Bertrand Marquis wrote:
> Add vsysreg emulation for the registers trapped when the TID3 bit is
> activated in HCR.
> The emulation returns the value stored in the guest_cpuinfo structure
> for known registers and handles reserved registers as RAZ.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
> Changes in V2: Rebase
> Changes in V3:
>    Fix commit message
>    Fix code style for GENERATE_TID3_INFO declaration
>    Add handling of reserved registers as RAZ.
> 
> ---
>   xen/arch/arm/arm64/vsysreg.c | 53 ++++++++++++++++++++++++++++++++++++
>   1 file changed, 53 insertions(+)
> 
> diff --git a/xen/arch/arm/arm64/vsysreg.c b/xen/arch/arm/arm64/vsysreg.c
> index 8a85507d9d..ef7a11dbdd 100644
> --- a/xen/arch/arm/arm64/vsysreg.c
> +++ b/xen/arch/arm/arm64/vsysreg.c
> @@ -69,6 +69,14 @@ TVM_REG(CONTEXTIDR_EL1)
>           break;                                                          \
>       }
>   
> +/* Macro to generate easily case for ID co-processor emulation */
> +#define GENERATE_TID3_INFO(reg, field, offset)                          \
> +    case HSR_SYSREG_##reg:                                              \
> +    {                                                                   \
> +        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr,   \
> +                          1, guest_cpuinfo.field.bits[offset]);         \

The indentation looks wrong here. The "1" should be aligned with "regs".

> +    }
> +
>   void do_sysreg(struct cpu_user_regs *regs,
>                  const union hsr hsr)
>   {
> @@ -259,6 +267,51 @@ void do_sysreg(struct cpu_user_regs *regs,
>            */
>           return handle_raz_wi(regs, regidx, hsr.sysreg.read, hsr, 1);
>   
> +    /*
> +     * HCR_EL2.TID3
> +     *
> +     * This is trapping most Identification registers used by a guest
> +     * to identify the processor features
> +     */
> +    GENERATE_TID3_INFO(ID_PFR0_EL1, pfr32, 0)
> +    GENERATE_TID3_INFO(ID_PFR1_EL1, pfr32, 1)
> +    GENERATE_TID3_INFO(ID_PFR2_EL1, pfr32, 2)
> +    GENERATE_TID3_INFO(ID_DFR0_EL1, dbg32, 0)
> +    GENERATE_TID3_INFO(ID_DFR1_EL1, dbg32, 1)
> +    GENERATE_TID3_INFO(ID_AFR0_EL1, aux32, 0)
> +    GENERATE_TID3_INFO(ID_MMFR0_EL1, mm32, 0)
> +    GENERATE_TID3_INFO(ID_MMFR1_EL1, mm32, 1)
> +    GENERATE_TID3_INFO(ID_MMFR2_EL1, mm32, 2)
> +    GENERATE_TID3_INFO(ID_MMFR3_EL1, mm32, 3)
> +    GENERATE_TID3_INFO(ID_MMFR4_EL1, mm32, 4)
> +    GENERATE_TID3_INFO(ID_MMFR5_EL1, mm32, 5)
> +    GENERATE_TID3_INFO(ID_ISAR0_EL1, isa32, 0)
> +    GENERATE_TID3_INFO(ID_ISAR1_EL1, isa32, 1)
> +    GENERATE_TID3_INFO(ID_ISAR2_EL1, isa32, 2)
> +    GENERATE_TID3_INFO(ID_ISAR3_EL1, isa32, 3)
> +    GENERATE_TID3_INFO(ID_ISAR4_EL1, isa32, 4)
> +    GENERATE_TID3_INFO(ID_ISAR5_EL1, isa32, 5)
> +    GENERATE_TID3_INFO(ID_ISAR6_EL1, isa32, 6)
> +    GENERATE_TID3_INFO(MVFR0_EL1, mvfr, 0)
> +    GENERATE_TID3_INFO(MVFR1_EL1, mvfr, 1)
> +    GENERATE_TID3_INFO(MVFR2_EL1, mvfr, 2)
> +    GENERATE_TID3_INFO(ID_AA64PFR0_EL1, pfr64, 0)
> +    GENERATE_TID3_INFO(ID_AA64PFR1_EL1, pfr64, 1)
> +    GENERATE_TID3_INFO(ID_AA64DFR0_EL1, dbg64, 0)
> +    GENERATE_TID3_INFO(ID_AA64DFR1_EL1, dbg64, 1)
> +    GENERATE_TID3_INFO(ID_AA64ISAR0_EL1, isa64, 0)
> +    GENERATE_TID3_INFO(ID_AA64ISAR1_EL1, isa64, 1)
> +    GENERATE_TID3_INFO(ID_AA64MMFR0_EL1, mm64, 0)
> +    GENERATE_TID3_INFO(ID_AA64MMFR1_EL1, mm64, 1)
> +    GENERATE_TID3_INFO(ID_AA64MMFR2_EL1, mm64, 2)
> +    GENERATE_TID3_INFO(ID_AA64AFR0_EL1, aux64, 0)
> +    GENERATE_TID3_INFO(ID_AA64AFR1_EL1, aux64, 1)
> +    GENERATE_TID3_INFO(ID_AA64ZFR0_EL1, zfr64, 0)
> +
> +    HSR_SYSREG_TID3_RESERVED_CASE:
> +        /* Handle all reserved registers as RAZ */
> +        return handle_ro_raz(regs, regidx, hsr.sysreg.read, hsr, 1);
> +
>       /*
>        * HCR_EL2.TIDCP
>        *
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 23:15:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 23:15:46 +0000
Subject: Re: [PATCH v3 6/7] xen/arm: Add CP10 exception support to handle MVFR
To: Bertrand Marquis <bertrand.marquis@arm.com>,
 xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>
 <a72a378cd1d4e5c6670980cf4d201d457abe5abc.1607524536.git.bertrand.marquis@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <e06cde1e-2833-4335-9456-04aae3f6d287@xen.org>
Date: Wed, 9 Dec 2020 23:15:36 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <a72a378cd1d4e5c6670980cf4d201d457abe5abc.1607524536.git.bertrand.marquis@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Bertrand,

On 09/12/2020 16:30, Bertrand Marquis wrote:
> Add support for decoding cp10 exceptions to be able to emulate the
> values of MVFR0, MVFR1 and MVFR2 when the TID3 bit of HCR is activated.
> This is required for aarch32 guests accessing the MVFR registers using
> the vmrs and vmsr instructions.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
> Changes in V2: Rebase
> Changes in V3:
>    Add case for MVFR2, fix typo VMFR <-> MVFR.
> 
> ---
>   xen/arch/arm/traps.c             |  5 ++++
>   xen/arch/arm/vcpreg.c            | 39 +++++++++++++++++++++++++++++++-
>   xen/include/asm-arm/perfc_defn.h |  1 +
>   xen/include/asm-arm/traps.h      |  1 +
>   4 files changed, 45 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 22bd1bd4c6..28d9d64558 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -2097,6 +2097,11 @@ void do_trap_guest_sync(struct cpu_user_regs *regs)
>           perfc_incr(trap_cp14_dbg);
>           do_cp14_dbg(regs, hsr);
>           break;
> +    case HSR_EC_CP10:
> +        GUEST_BUG_ON(!psr_mode_is_32bit(regs));
> +        perfc_incr(trap_cp10);
> +        do_cp10(regs, hsr);
> +        break;
>       case HSR_EC_CP:
>           GUEST_BUG_ON(!psr_mode_is_32bit(regs));
>           perfc_incr(trap_cp);
> diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c
> index d371a1c38c..da4e22a467 100644
> --- a/xen/arch/arm/vcpreg.c
> +++ b/xen/arch/arm/vcpreg.c
> @@ -319,7 +319,7 @@ void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
>       GENERATE_TID3_INFO(ID_ISAR4, isa32, 4)
>       GENERATE_TID3_INFO(ID_ISAR5, isa32, 5)
>       GENERATE_TID3_INFO(ID_ISAR6, isa32, 6)
> -    /* MVFR registers are in cp10 no cp15 */
> +    /* MVFR registers are in cp10 not cp15 */

The code was originally added in the previous patch. Please either 
introduce the comment here or fold it in the previous patch.

>   
>       HSR_CPREG32_TID3_RESERVED_CASE:
>           /* Handle all reserved registers as RAZ */
> @@ -638,6 +638,43 @@ void do_cp14_dbg(struct cpu_user_regs *regs, const union hsr hsr)
>       inject_undef_exception(regs, hsr);
>   }
>   
> +void do_cp10(struct cpu_user_regs *regs, const union hsr hsr)
> +{
> +    const struct hsr_cp32 cp32 = hsr.cp32;
> +    int regidx = cp32.reg;
> +
> +    if ( !check_conditional_instr(regs, hsr) )
> +    {
> +        advance_pc(regs, hsr);
> +        return;
> +    }
> +
> +    switch ( hsr.bits & HSR_CP32_REGS_MASK )
> +    {
> +    /*
> +     * HSR.TID3 is trapping access to MVFR register used to identify the
> +     * VFP/Simd using VMRS/VMSR instructions.
> +     * Exception encoding is using MRC/MCR standard with the reg field in Crn
> +     * as are declared MVFR0 and MVFR1 in cpregs.h
> +     */
> +    GENERATE_TID3_INFO(MVFR0, mvfr, 0)
> +    GENERATE_TID3_INFO(MVFR1, mvfr, 1)
> +    GENERATE_TID3_INFO(MVFR2, mvfr, 2)
> +
> +    default:
> +        gdprintk(XENLOG_ERR,
> +                 "%s p10, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
> +                 cp32.read ? "mrc" : "mcr",
> +                 cp32.op1, cp32.reg, cp32.crn, cp32.crm, cp32.op2, regs->pc);
> +        gdprintk(XENLOG_ERR, "unhandled 32-bit CP10 access %#x\n",
> +                 hsr.bits & HSR_CP32_REGS_MASK);
> +        inject_undef_exception(regs, hsr);
> +        return;
> +    }
> +
> +    advance_pc(regs, hsr);
> +}
> +
>   void do_cp(struct cpu_user_regs *regs, const union hsr hsr)
>   {
>       const struct hsr_cp cp = hsr.cp;
> diff --git a/xen/include/asm-arm/perfc_defn.h b/xen/include/asm-arm/perfc_defn.h
> index 6a83185163..31f071222b 100644
> --- a/xen/include/asm-arm/perfc_defn.h
> +++ b/xen/include/asm-arm/perfc_defn.h
> @@ -11,6 +11,7 @@ PERFCOUNTER(trap_cp15_64,  "trap: cp15 64-bit access")
>   PERFCOUNTER(trap_cp14_32,  "trap: cp14 32-bit access")
>   PERFCOUNTER(trap_cp14_64,  "trap: cp14 64-bit access")
>   PERFCOUNTER(trap_cp14_dbg, "trap: cp14 dbg access")
> +PERFCOUNTER(trap_cp10,     "trap: cp10 access")
>   PERFCOUNTER(trap_cp,       "trap: cp access")
>   PERFCOUNTER(trap_smc32,    "trap: 32-bit smc")
>   PERFCOUNTER(trap_hvc32,    "trap: 32-bit hvc")
> diff --git a/xen/include/asm-arm/traps.h b/xen/include/asm-arm/traps.h
> index 997c37884e..c4a3d0fb1b 100644
> --- a/xen/include/asm-arm/traps.h
> +++ b/xen/include/asm-arm/traps.h
> @@ -62,6 +62,7 @@ void do_cp15_64(struct cpu_user_regs *regs, const union hsr hsr);
>   void do_cp14_32(struct cpu_user_regs *regs, const union hsr hsr);
>   void do_cp14_64(struct cpu_user_regs *regs, const union hsr hsr);
>   void do_cp14_dbg(struct cpu_user_regs *regs, const union hsr hsr);
> +void do_cp10(struct cpu_user_regs *regs, const union hsr hsr);
>   void do_cp(struct cpu_user_regs *regs, const union hsr hsr);
>   
>   /* SMCCC handling */
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 23:18:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 23:18:01 +0000
Subject: Re: [PATCH v3 7/7] xen/arm: Activate TID3 in HCR_EL2
To: Bertrand Marquis <bertrand.marquis@arm.com>,
 xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>
 <956cf336ffce24f0cabfc7a98ae855bc71d5f028.1607524536.git.bertrand.marquis@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <6e81e7ff-9cfc-aaec-e1fc-336dec06dd6d@xen.org>
Date: Wed, 9 Dec 2020 23:17:56 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <956cf336ffce24f0cabfc7a98ae855bc71d5f028.1607524536.git.bertrand.marquis@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Bertrand,

On 09/12/2020 16:31, Bertrand Marquis wrote:
> Activate TID3 bit in HSR register when starting a guest.

s/HSR/HCR/

> This will trap all coprocessor ID registers so that we can give guests
> values corresponding to what they can actually use, and mask from
> guests some features even though they would be supported by the
> underlying hardware (like SVE or MPAM).

So this will make sure the guest will not be able to identify the 
feature. Did you check that the features are effectively not accessible 
to the guest? IOW the accesses should trap.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 23:18:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 23:18:44 +0000
Date: Wed, 9 Dec 2020 15:18:39 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607555920;
	bh=WZZx/SPoCOZE6KlYMXcGv/kcccuVo2OCwMxhA2l32Ho=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=oIb2tJhW2r/A/EO2F9/7FDbxyTDGt0Fxdjvzqr555mi472WjTwIZm4UHN1FG3TvK7
	 TK1kilypvf0uxGM6VFZ65M+ZxNmD4/n+ERmu1+V9fG8PHaYiAD7VrTRYFMcs9PGu/W
	 hWRangm3fgFk7WUhnw8oGfVJ3dBnKwtC/5vFwUGQpiHm/GlgxfbARAyV3Tc1/BEFor
	 qTVRFLcpwvdxDAIdKZ9vcOk8G2a1wZxzGX+xbfDK4PYfZplQ5K0NLun2i5u8Vb+m3R
	 oz4WT+lkoYVTQysz85nn/nZLr4oRtC6Uk25nNuhZUtzudYwBw11TyAIr5BkeTiV9G0
	 9ElvE6+Dq7y9g==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
cc: xen-devel@lists.xenproject.org, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Julien Grall <julien.grall@arm.com>
Subject: Re: [PATCH V3 15/23] xen/arm: Stick around in leave_hypervisor_to_guest
 until I/O has completed
In-Reply-To: <1606732298-22107-16-git-send-email-olekstysh@gmail.com>
Message-ID: <alpine.DEB.2.21.2012091432450.20986@sstabellini-ThinkPad-T480s>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com> <1606732298-22107-16-git-send-email-olekstysh@gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 30 Nov 2020, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> This patch adds proper handling of return value of
> vcpu_ioreq_handle_completion() which involves using a loop
> in leave_hypervisor_to_guest().
> 
> The reason to use an unbounded loop here is the fact that vCPU
> shouldn't continue until an I/O has completed. In Xen case, if an I/O
> never completes then it most likely means that something went horribly
> wrong with the Device Emulator. And it is most likely not safe to
> continue. So letting the vCPU to spin forever if I/O never completes
> is a safer action than letting it continue and leaving the guest in
> unclear state and is the best what we can do for now.
> 
> This wouldn't be an issue for Xen as do_softirq() would be called at
> every loop. In case of failure, the guest will crash and the vCPU
> will be unscheduled.

Imagine that we have two guests: one that requires an ioreq server and
one that doesn't. If I am not mistaken, this loop could potentially spin
forever on a pCPU, thus preventing any other guest from being scheduled,
even if the other guest doesn't need any ioreq servers.


My other concern is that we are busy-looping. Could we call something
like wfi() or do_idle() instead? The ioreq server event notification of
completion should wake us up?

Following this line of thinking, I am wondering if instead of the
busy-loop we should call vcpu_block_unless_event_pending(current) in
try_handle_mmio if IO_RETRY. Then when the emulation is done, QEMU (or
equivalent) calls xenevtchn_notify which ends up waking up the domU
vcpu. Would that work?



> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> CC: Julien Grall <julien.grall@arm.com>
> 
> ---
> Please note, this is a split/cleanup/hardening of Julien's PoC:
> "Add support for Guest IO forwarding to a device emulator"
> 
> Changes V1 -> V2:
>    - new patch, changes were derived from (+ new explanation):
>      arm/ioreq: Introduce arch specific bits for IOREQ/DM features
> 
> Changes V2 -> V3:
>    - update patch description
> ---
> ---
>  xen/arch/arm/traps.c | 31 ++++++++++++++++++++++++++-----
>  1 file changed, 26 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 036b13f..4cef43e 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -2257,18 +2257,23 @@ static void check_for_pcpu_work(void)
>   * Process pending work for the vCPU. Any call should be fast or
>   * implement preemption.
>   */
> -static void check_for_vcpu_work(void)
> +static bool check_for_vcpu_work(void)
>  {
>      struct vcpu *v = current;
>  
>  #ifdef CONFIG_IOREQ_SERVER
> +    bool handled;
> +
>      local_irq_enable();
> -    vcpu_ioreq_handle_completion(v);
> +    handled = vcpu_ioreq_handle_completion(v);
>      local_irq_disable();
> +
> +    if ( !handled )
> +        return true;
>  #endif
>  
>      if ( likely(!v->arch.need_flush_to_ram) )
> -        return;
> +        return false;
>  
>      /*
>       * Give a chance for the pCPU to process work before handling the vCPU
> @@ -2279,6 +2284,8 @@ static void check_for_vcpu_work(void)
>      local_irq_enable();
>      p2m_flush_vm(v);
>      local_irq_disable();
> +
> +    return false;
>  }
>  
>  /*
> @@ -2291,8 +2298,22 @@ void leave_hypervisor_to_guest(void)
>  {
>      local_irq_disable();
>  
> -    check_for_vcpu_work();
> -    check_for_pcpu_work();
> +    /*
> +     * The reason to use an unbounded loop here is the fact that vCPU
> +     * shouldn't continue until an I/O has completed. In Xen case, if an I/O
> +     * never completes then it most likely means that something went horribly
> +     * wrong with the Device Emulator. And it is most likely not safe to
> +     * continue. So letting the vCPU to spin forever if I/O never completes
> +     * is a safer action than letting it continue and leaving the guest in
> +     * unclear state and is the best what we can do for now.
> +     *
> +     * This wouldn't be an issue for Xen as do_softirq() would be called at
> +     * every loop. In case of failure, the guest will crash and the vCPU
> +     * will be unscheduled.
> +     */
> +    do {
> +        check_for_pcpu_work();
> +    } while ( check_for_vcpu_work() );
>  
>      vgic_sync_to_lrs();
>  
> -- 
> 2.7.4
> 


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 23:22:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 23:22:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48860.86441 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn8nB-0000rO-NR; Wed, 09 Dec 2020 23:22:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48860.86441; Wed, 09 Dec 2020 23:22:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn8nB-0000rH-Ii; Wed, 09 Dec 2020 23:22:41 +0000
Received: by outflank-mailman (input) for mailman id 48860;
 Wed, 09 Dec 2020 23:22:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kn8nA-0000rC-Kx
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 23:22:40 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kn8n9-0004xM-6D; Wed, 09 Dec 2020 23:22:39 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kn8n8-0001YV-V4; Wed, 09 Dec 2020 23:22:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=JjyjhzTWaMwnycOk0aOS8NcMq+XcL2vXe5foY/m4WHc=; b=Yi7zwWKSKoV3Oi7WP70/WQFdwA
	2ktS6ubIMJ4q1OZNJcwh8r6vG5FLVqOX243tHjFTKeDU3kzZmX23FEUx3DzICfUjXYHRnJLvfXdRG
	+Llc6Eg+++3yv8CN67lEe9qYUKUJQfZD+GcUBIhP0U9IQZmGKa4/tqgyx1Gl/sTbmIWI=;
Subject: Re: [PATCH v3 3/7] xen/arm: create a cpuinfo structure for guest
To: Bertrand Marquis <bertrand.marquis@arm.com>,
 xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>
 <33f39e7f521e6f73a0dba57a8be9fb50656e1807.1607524536.git.bertrand.marquis@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <6cf80971-e9aa-f9e4-cb9b-4f102b84a99b@xen.org>
Date: Wed, 9 Dec 2020 23:22:37 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <33f39e7f521e6f73a0dba57a8be9fb50656e1807.1607524536.git.bertrand.marquis@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 09/12/2020 16:30, Bertrand Marquis wrote:
> +    /* Disable MPAM as xen does not support it */

I am going to be picky :). I think we want to say "hide" rather than 
"disable" because the latter is done differently, via HCR_EL2.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 23:35:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 23:35:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48866.86453 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn8zE-0001y3-RW; Wed, 09 Dec 2020 23:35:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48866.86453; Wed, 09 Dec 2020 23:35:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn8zE-0001xw-OQ; Wed, 09 Dec 2020 23:35:08 +0000
Received: by outflank-mailman (input) for mailman id 48866;
 Wed, 09 Dec 2020 23:35:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/xMB=FN=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kn8zD-0001xr-Mw
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 23:35:07 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b6ff657a-6697-45ce-9138-c05a936da4ef;
 Wed, 09 Dec 2020 23:35:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b6ff657a-6697-45ce-9138-c05a936da4ef
Date: Wed, 9 Dec 2020 15:35:05 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607556906;
	bh=YoVLcOmgtgdXh5DDh0RvYgaGpz4Ay9q2qHzeVVTt8vg=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=afrq+N/w2XDVFCf4axW1W9X5d/bV4XE2CgoKQbVvj7wjO3JxR3DpMcTtFEZzvfASE
	 OfORAofcNOZQeLCQX2E5VHdjvrHmF3mbjeg7BxenYzWbuZiECFlvAOAChJfM9Y9RKB
	 bzWyWRTTEYc1Qui6waJga0IPUD1F3BoYgwQPap4bPPU51bRcvoTfmohiMA6fNpZVOo
	 upm701jDTarenQfmqUtXrQfHk9kbKKauR5DNTp3SfcpYT5+81sWzbuPwMMtnFEHfS6
	 tgdMTkOCsvDALI/ohl6JrJlw9fj1ZctGtcUtrHHiKsGD1ev4BVS+imha1OciqwubhU
	 v1N3INidYbJ1g==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Stefano Stabellini <sstabellini@kernel.org>
cc: Oleksandr Tyshchenko <olekstysh@gmail.com>, xen-devel@lists.xenproject.org, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Julien Grall <julien.grall@arm.com>
Subject: Re: [PATCH V3 15/23] xen/arm: Stick around in leave_hypervisor_to_guest
 until I/O has completed
In-Reply-To: <alpine.DEB.2.21.2012091432450.20986@sstabellini-ThinkPad-T480s>
Message-ID: <alpine.DEB.2.21.2012091521480.20986@sstabellini-ThinkPad-T480s>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com> <1606732298-22107-16-git-send-email-olekstysh@gmail.com> <alpine.DEB.2.21.2012091432450.20986@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 9 Dec 2020, Stefano Stabellini wrote:
> On Mon, 30 Nov 2020, Oleksandr Tyshchenko wrote:
> > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> > 
> > This patch adds proper handling of return value of
> > vcpu_ioreq_handle_completion() which involves using a loop
> > in leave_hypervisor_to_guest().
> > 
> > The reason to use an unbounded loop here is the fact that vCPU
> > shouldn't continue until an I/O has completed. In Xen case, if an I/O
> > never completes then it most likely means that something went horribly
> > wrong with the Device Emulator. And it is most likely not safe to
> > continue. So letting the vCPU to spin forever if I/O never completes
> > is a safer action than letting it continue and leaving the guest in
> > unclear state and is the best what we can do for now.
> > 
> > This wouldn't be an issue for Xen as do_softirq() would be called at
> > every loop. In case of failure, the guest will crash and the vCPU
> > will be unscheduled.
> 
> Imagine that we have two guests: one that requires an ioreq server and
> one that doesn't. If I am not mistaken this loop could potentially spin
> forever on a pcpu, thus preventing any other guest being scheduled, even
> if the other guest doesn't need any ioreq servers.
> 
> 
> My other concern is that we are busy-looping. Could we call something
> like wfi() or do_idle() instead? The ioreq server event notification of
> completion should wake us up?
> 
> Following this line of thinking, I am wondering if instead of the
> busy-loop we should call vcpu_block_unless_event_pending(current) in
> try_handle_mmio if IO_RETRY. Then when the emulation is done, QEMU (or
> equivalent) calls xenevtchn_notify which ends up waking up the domU
> vcpu. Would that work?

I have now read Julien's reply: we are already doing something similar to
what I suggested, with the following call chain:

check_for_vcpu_work -> vcpu_ioreq_handle_completion -> wait_for_io -> wait_on_xen_event_channel

So the busy-loop here is only a safety-belt in case of a spurious
wake-up, in which case we are going to call check_for_vcpu_work again,
potentially causing a guest reschedule.

Then, this is fine and addresses both my concerns. Maybe let's add a note
in the commit message about it.


I am also wondering if there is any benefit in calling wait_for_io()
earlier, maybe from try_handle_mmio if IO_RETRY?
leave_hypervisor_to_guest is very late for that. In any case, it is not
an important optimization (if it is even an optimization at all) so it
is fine to leave it as is.


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 23:38:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 23:38:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48871.86465 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn92U-000299-Ac; Wed, 09 Dec 2020 23:38:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48871.86465; Wed, 09 Dec 2020 23:38:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn92U-000292-78; Wed, 09 Dec 2020 23:38:30 +0000
Received: by outflank-mailman (input) for mailman id 48871;
 Wed, 09 Dec 2020 23:38:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kn92T-00028w-Aw
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 23:38:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kn92R-0005Iy-RM; Wed, 09 Dec 2020 23:38:27 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kn92R-0002ao-IM; Wed, 09 Dec 2020 23:38:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=OPfKh88rj2a43wotGM9GnLojN6RW6OzBmxRIlQMO0ow=; b=Tj1RNt9rtiwE/t7B60/2RrwZRe
	fyrYX52R9FirDFXSJ+TVQ8FelYy0UUPuTxXGg9IRV3lTcNTx/Ok8nzZ9udBx2zKMRe9+M7LZhiVaT
	6STZeAIL/bWfGrr7MDEb6grdeNwu00Uzd+sSROX4/rEpJXFh6jlMKjJZ+PX3rTWj9H+U=;
Subject: Re: [PATCH V3 15/23] xen/arm: Stick around in
 leave_hypervisor_to_guest until I/O has completed
To: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <julien.grall@arm.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-16-git-send-email-olekstysh@gmail.com>
 <alpine.DEB.2.21.2012091432450.20986@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <fa447d9c-d601-eee7-1cc2-d595c7c2edf9@xen.org>
Date: Wed, 9 Dec 2020 23:38:25 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2012091432450.20986@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 09/12/2020 23:18, Stefano Stabellini wrote:
> On Mon, 30 Nov 2020, Oleksandr Tyshchenko wrote:
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> This patch adds proper handling of return value of
>> vcpu_ioreq_handle_completion() which involves using a loop
>> in leave_hypervisor_to_guest().
>>
>> The reason to use an unbounded loop here is the fact that vCPU
>> shouldn't continue until an I/O has completed. In Xen case, if an I/O
>> never completes then it most likely means that something went horribly
>> wrong with the Device Emulator. And it is most likely not safe to
>> continue. So letting the vCPU to spin forever if I/O never completes
>> is a safer action than letting it continue and leaving the guest in
>> unclear state and is the best what we can do for now.
>>
>> This wouldn't be an issue for Xen as do_softirq() would be called at
>> every loop. In case of failure, the guest will crash and the vCPU
>> will be unscheduled.
>
> Imagine that we have two guests: one that requires an ioreq server and
> one that doesn't. If I am not mistaken this loop could potentially spin
> forever on a pcpu, thus preventing any other guest being scheduled, even
> if the other guest doesn't need any ioreq servers.

That's not correct. On every loop iteration we will call 
check_for_pcpu_work(), which will process pending softirqs. If 
rescheduling is necessary (it might be requested by a timer or by a 
caller in check_for_vcpu_work()), then the vCPU will be descheduled to 
make room for someone else.

> 
> 
> My other concern is that we are busy-looping. Could we call something
> like wfi() or do_idle() instead? The ioreq server event notification of
> completion should wake us up?

There is no busy loop here. If the IOREQ server has not yet handled the 
I/O, we will block the vCPU until an event notification is received (see 
the call to wait_on_xen_event_channel()).

This loop makes sure that all the vCPU work is done before we return to 
the guest.

The worst that can happen here is that the vCPU never runs again, if 
the IOREQ server has been naughty.

> 
> Following this line of thinking, I am wondering if instead of the
> busy-loop we should call vcpu_block_unless_event_pending(current) in
> try_handle_mmio if IO_RETRY. Then when the emulation is done, QEMU (or
> equivalent) calls xenevtchn_notify which ends up waking up the domU
> vcpu. Would that work?

vcpu_block_unless_event_pending() will not block if there are interrupts 
pending. However, here we really want to block until the I/O has been 
completed. So vcpu_block_unless_event_pending() is not the right approach.

The IOREQ code uses wait_on_xen_event_channel(). Yet this can still 
"exit" early if an event has been received, which doesn't mean the 
I/O has completed. So we need to check whether the I/O has completed 
and wait again if it hasn't.

I seem to keep having to explain how the code works. So maybe we want to 
update the commit message with more details?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 23:47:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 23:47:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48878.86477 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn9BZ-0003BB-CW; Wed, 09 Dec 2020 23:47:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48878.86477; Wed, 09 Dec 2020 23:47:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn9BZ-0003B4-8p; Wed, 09 Dec 2020 23:47:53 +0000
Received: by outflank-mailman (input) for mailman id 48878;
 Wed, 09 Dec 2020 23:47:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kn9BX-0003Az-9L
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 23:47:51 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kn9BV-0005VF-UI; Wed, 09 Dec 2020 23:47:49 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kn9BV-0003Mp-Lu; Wed, 09 Dec 2020 23:47:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=UONJBA/KX+vQqohI3/fhkl5eeQUbAO/CODqEwYJ967o=; b=oOfhfeHephQ8rgxIDGdlY10Ijp
	DxxN5YwGMFpJkiYKsdbp+icgQEO3TJSyKo7K1tasOf+ikcbWK/fZp08Lc7WrW6oAqC8V+YLZsM7TD
	EvB+iJnKRtT0U3jf+ygLz5jS4FRB1X8aLzvYeawQrtZQLzr7EE/I4zkCUUO2b8/LdIXE=;
Subject: Re: [PATCH V3 15/23] xen/arm: Stick around in
 leave_hypervisor_to_guest until I/O has completed
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Oleksandr Tyshchenko <olekstysh@gmail.com>,
 xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <julien.grall@arm.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-16-git-send-email-olekstysh@gmail.com>
 <alpine.DEB.2.21.2012091432450.20986@sstabellini-ThinkPad-T480s>
 <alpine.DEB.2.21.2012091521480.20986@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <52799b99-6405-03f4-2a46-3a0a4aac597f@xen.org>
Date: Wed, 9 Dec 2020 23:47:47 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2012091521480.20986@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 09/12/2020 23:35, Stefano Stabellini wrote:
> On Wed, 9 Dec 2020, Stefano Stabellini wrote:
>> On Mon, 30 Nov 2020, Oleksandr Tyshchenko wrote:
>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>
>>> This patch adds proper handling of return value of
>>> vcpu_ioreq_handle_completion() which involves using a loop
>>> in leave_hypervisor_to_guest().
>>>
>>> The reason to use an unbounded loop here is the fact that vCPU
>>> shouldn't continue until an I/O has completed. In Xen case, if an I/O
>>> never completes then it most likely means that something went horribly
>>> wrong with the Device Emulator. And it is most likely not safe to
>>> continue. So letting the vCPU to spin forever if I/O never completes
>>> is a safer action than letting it continue and leaving the guest in
>>> unclear state and is the best what we can do for now.
>>>
>>> This wouldn't be an issue for Xen as do_softirq() would be called at
>>> every loop. In case of failure, the guest will crash and the vCPU
>>> will be unscheduled.
>>
>> Imagine that we have two guests: one that requires an ioreq server and
>> one that doesn't. If I am not mistaken this loop could potentially spin
>> forever on a pcpu, thus preventing any other guest being scheduled, even
>> if the other guest doesn't need any ioreq servers.
>>
>>
>> My other concern is that we are busy-looping. Could we call something
>> like wfi() or do_idle() instead? The ioreq server event notification of
>> completion should wake us up?
>>
>> Following this line of thinking, I am wondering if instead of the
>> busy-loop we should call vcpu_block_unless_event_pending(current) in
>> try_handle_mmio if IO_RETRY. Then when the emulation is done, QEMU (or
>> equivalent) calls xenevtchn_notify which ends up waking up the domU
>> vcpu. Would that work?
> 
> I read now Julien's reply: we are already doing something similar to
> what I suggested with the following call chain:
> 
> check_for_vcpu_work -> vcpu_ioreq_handle_completion -> wait_for_io -> wait_on_xen_event_channel
> 
> So the busy-loop here is only a safety-belt in cause of a spurious
> wake-up, in which case we are going to call again check_for_vcpu_work,
> potentially causing a guest reschedule.
> 
> Then, this is fine and addresses both my concerns. Maybe let's add a note
> in the commit message about it.

Damn, I hit the "send" button just a second before seeing your reply. :/ 
Oh well. I suggested the same because I have seen the same question 
multiple times.

> 
> 
> I am also wondering if there is any benefit in calling wait_for_io()
> earlier, maybe from try_handle_mmio if IO_RETRY?

wait_for_io() may end up descheduling the vCPU. I would like to avoid 
that happening in the middle of the I/O emulation, because the 
descheduling needs to happen with no locks held at all.

I don't think there are any locks involved today, but the deeper in the 
call stack the scheduling happens, the more chances we have to screw 
things up in the future.

However...

> leave_hypervisor_to_guest is very late for that.

... I am not sure what the problem with that is. The IOREQ server will 
be notified of the pending I/O as soon as try_handle_mmio() puts the 
I/O in the shared page.

If the IOREQ server is running on a different pCPU, then the I/O might 
even have completed before we reach leave_hypervisor_to_guest(). In 
that case, we would not have to wait for the I/O at all.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 09 23:49:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 09 Dec 2020 23:49:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48883.86489 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn9D9-0003M0-RG; Wed, 09 Dec 2020 23:49:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48883.86489; Wed, 09 Dec 2020 23:49:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn9D9-0003Ls-Kp; Wed, 09 Dec 2020 23:49:31 +0000
Received: by outflank-mailman (input) for mailman id 48883;
 Wed, 09 Dec 2020 23:49:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/xMB=FN=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kn9D8-0003LX-9W
 for xen-devel@lists.xenproject.org; Wed, 09 Dec 2020 23:49:30 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 681a309f-298e-48df-8bae-3405f40b94aa;
 Wed, 09 Dec 2020 23:49:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 681a309f-298e-48df-8bae-3405f40b94aa
Date: Wed, 9 Dec 2020 15:49:27 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607557769;
	bh=1Xsr9Fqgal0YfsSPjBnxq5MBvfuYZJtTBbrypfnMCCk=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=Zpd+Wua8J7b10g7L542pszU4FDCroAtpQB1LKXA/0mHIHG1gl9S3Yv5KI4EhK5Dbi
	 UIXi/V6EI0h25SsBDv1Of5wyRkJaEyUdTq23wyuIqYMmzcrLhO62v/CqSlze3XeARS
	 QOZXmn3gFE7oKHhqUCS7mcVlFukEnmunB2Yr12bTBJrX8240cJCG60wWKNu7znRLwc
	 g5BXc3FOnJ5Y8pGcC5iiBymxn6CyahJ6aRU91n2+zeKBA9KbxSBgA+nNHXBObOJOdZ
	 Hu4u8flHgeFF35oAoTctg7g+vPzJoA9CRrYVREay+zZ8nERKQZ6SDpMAKtTGtzZokH
	 23dxsvvRCzKJQ==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
cc: xen-devel@lists.xenproject.org, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Julien Grall <julien.grall@arm.com>
Subject: Re: [PATCH V3 16/23] xen/mm: Handle properly reference in
 set_foreign_p2m_entry() on Arm
In-Reply-To: <1606732298-22107-17-git-send-email-olekstysh@gmail.com>
Message-ID: <alpine.DEB.2.21.2012091549140.20986@sstabellini-ThinkPad-T480s>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com> <1606732298-22107-17-git-send-email-olekstysh@gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1214772993-1607557769=:20986"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1214772993-1607557769=:20986
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Mon, 30 Nov 2020, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> This patch implements reference counting of foreign entries in
> set_foreign_p2m_entry() on Arm. This is a mandatory action if
> we want to run an emulator (IOREQ server) in a domain other than
> dom0, as we can't trust it to do the right thing if it is not
> running in dom0. So we need to grab a reference on the page to
> prevent it from disappearing.
> 
> It is valid to always pass "p2m_map_foreign_rw" type to
> guest_physmap_add_entry() since the current and foreign domains
> will always be different. A case when they are equal would be
> rejected by rcu_lock_remote_domain_by_id(). Besides the similar
> comment in the code, put a respective ASSERT() in place to catch
> incorrect usage in the future.
> 
> It was tested with the IOREQ feature to confirm that all the pages
> given to this function belong to a domain, so we can use the same
> approach as for XENMAPSPACE_gmfn_foreign handling in
> xenmem_add_to_physmap_one().
> 
> This involves adding an extra parameter for the foreign domain to
> set_foreign_p2m_entry() and a helper to indicate whether the arch
> supports the reference counting of foreign entries, so that the
> restriction for the hardware domain in the common code can be
> skipped for it.
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> CC: Julien Grall <julien.grall@arm.com>

The arm side looks OK to me


> ---
> Please note, this is a split/cleanup/hardening of Julien's PoC:
> "Add support for Guest IO forwarding to a device emulator"
> 
> Changes RFC -> V1:
>    - new patch, was split from:
>      "[RFC PATCH V1 04/12] xen/arm: Introduce arch specific bits for IOREQ/DM features"
>    - rewrite a logic to handle properly reference in set_foreign_p2m_entry()
>      instead of treating foreign entries as p2m_ram_rw
> 
> Changes V1 -> V2:
>    - rebase according to the recent changes to acquire_resource()
>    - update patch description
>    - introduce arch_refcounts_p2m()
>    - add an explanation why p2m_map_foreign_rw is valid
>    - move set_foreign_p2m_entry() to p2m-common.h
>    - add const to new parameter
> 
> Changes V2 -> V3:
>    - update patch description
>    - rename arch_refcounts_p2m() to arch_acquire_resource_check()
>    - move comment to x86’s arch_acquire_resource_check()
>    - return rc in Arm's set_foreign_p2m_entry()
>    - put a respective ASSERT() into Arm's set_foreign_p2m_entry()
> ---
> ---
>  xen/arch/arm/p2m.c           | 24 ++++++++++++++++++++++++
>  xen/arch/x86/mm/p2m.c        |  5 +++--
>  xen/common/memory.c          | 10 +++-------
>  xen/include/asm-arm/p2m.h    | 19 +++++++++----------
>  xen/include/asm-x86/p2m.h    | 16 +++++++++++++---
>  xen/include/xen/p2m-common.h |  4 ++++
>  6 files changed, 56 insertions(+), 22 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 4eeb867..5b8d494 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1380,6 +1380,30 @@ int guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
>      return p2m_remove_mapping(d, gfn, (1 << page_order), mfn);
>  }
>  
> +int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
> +                          unsigned long gfn, mfn_t mfn)
> +{
> +    struct page_info *page = mfn_to_page(mfn);
> +    int rc;
> +
> +    if ( !get_page(page, fd) )
> +        return -EINVAL;
> +
> +    /*
> +     * It is valid to always use p2m_map_foreign_rw here as if this gets
> +     * called then d != fd. A case when d == fd would be rejected by
> +     * rcu_lock_remote_domain_by_id() earlier. Put a respective ASSERT()
> +     * to catch incorrect usage in future.
> +     */
> +    ASSERT(d != fd);
> +
> +    rc = guest_physmap_add_entry(d, _gfn(gfn), mfn, 0, p2m_map_foreign_rw);
> +    if ( rc )
> +        put_page(page);
> +
> +    return rc;
> +}
> +
>  static struct page_info *p2m_allocate_root(void)
>  {
>      struct page_info *page;
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 7a2ba82..4772c86 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1321,7 +1321,8 @@ static int set_typed_p2m_entry(struct domain *d, unsigned long gfn_l,
>  }
>  
>  /* Set foreign mfn in the given guest's p2m table. */
> -int set_foreign_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn)
> +int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
> +                          unsigned long gfn, mfn_t mfn)
>  {
>      return set_typed_p2m_entry(d, gfn, mfn, PAGE_ORDER_4K, p2m_map_foreign,
>                                 p2m_get_hostp2m(d)->default_access);
> @@ -2621,7 +2622,7 @@ int p2m_add_foreign(struct domain *tdom, unsigned long fgfn,
>       * will update the m2p table which will result in  mfn -> gpfn of dom0
>       * and not fgfn of domU.
>       */
> -    rc = set_foreign_p2m_entry(tdom, gpfn, mfn);
> +    rc = set_foreign_p2m_entry(tdom, fdom, gpfn, mfn);
>      if ( rc )
>          gdprintk(XENLOG_WARNING, "set_foreign_p2m_entry failed. "
>                   "gpfn:%lx mfn:%lx fgfn:%lx td:%d fd:%d\n",
> diff --git a/xen/common/memory.c b/xen/common/memory.c
> index 3363c06..49e3001 100644
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -1134,12 +1134,8 @@ static int acquire_resource(
>      xen_pfn_t mfn_list[32];
>      int rc;
>  
> -    /*
> -     * FIXME: Until foreign pages inserted into the P2M are properly
> -     *        reference counted, it is unsafe to allow mapping of
> -     *        resource pages unless the caller is the hardware domain.
> -     */
> -    if ( paging_mode_translate(currd) && !is_hardware_domain(currd) )
> +    if ( paging_mode_translate(currd) && !is_hardware_domain(currd) &&
> +         !arch_acquire_resource_check() )
>          return -EACCES;
>  
>      if ( copy_from_guest(&xmar, arg, 1) )
> @@ -1207,7 +1203,7 @@ static int acquire_resource(
>  
>          for ( i = 0; !rc && i < xmar.nr_frames; i++ )
>          {
> -            rc = set_foreign_p2m_entry(currd, gfn_list[i],
> +            rc = set_foreign_p2m_entry(currd, d, gfn_list[i],
>                                         _mfn(mfn_list[i]));
>              /* rc should be -EIO for any iteration other than the first */
>              if ( rc && i )
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 28ca9a8..4f8056e 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -161,6 +161,15 @@ typedef enum {
>  #endif
>  #include <xen/p2m-common.h>
>  
> +static inline bool arch_acquire_resource_check(void)
> +{
> +    /*
> +     * The reference counting of foreign entries in set_foreign_p2m_entry()
> +     * is supported on Arm.
> +     */
> +    return true;
> +}
> +
>  static inline
>  void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
>  {
> @@ -392,16 +401,6 @@ static inline gfn_t gfn_next_boundary(gfn_t gfn, unsigned int order)
>      return gfn_add(gfn, 1UL << order);
>  }
>  
> -static inline int set_foreign_p2m_entry(struct domain *d, unsigned long gfn,
> -                                        mfn_t mfn)
> -{
> -    /*
> -     * NOTE: If this is implemented then proper reference counting of
> -     *       foreign entries will need to be implemented.
> -     */
> -    return -EOPNOTSUPP;
> -}
> -
>  /*
>   * A vCPU has cache enabled only when the MMU is enabled and data cache
>   * is enabled.
> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> index 4603560..8d2dc22 100644
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -382,6 +382,19 @@ struct p2m_domain {
>  #endif
>  #include <xen/p2m-common.h>
>  
> +static inline bool arch_acquire_resource_check(void)
> +{
> +    /*
> +     * The reference counting of foreign entries in set_foreign_p2m_entry()
> +     * is not supported on x86.
> +     *
> +     * FIXME: Until foreign pages inserted into the P2M are properly
> +     * reference counted, it is unsafe to allow mapping of
> +     * resource pages unless the caller is the hardware domain.
> +     */
> +    return false;
> +}
> +
>  /*
>   * Updates vCPU's n2pm to match its np2m_base in VMCx12 and returns that np2m.
>   */
> @@ -647,9 +660,6 @@ int p2m_finish_type_change(struct domain *d,
>  int p2m_is_logdirty_range(struct p2m_domain *, unsigned long start,
>                            unsigned long end);
>  
> -/* Set foreign entry in the p2m table (for priv-mapping) */
> -int set_foreign_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn);
> -
>  /* Set mmio addresses in the p2m table (for pass-through) */
>  int set_mmio_p2m_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
>                         unsigned int order);
> diff --git a/xen/include/xen/p2m-common.h b/xen/include/xen/p2m-common.h
> index 58031a6..b4bc709 100644
> --- a/xen/include/xen/p2m-common.h
> +++ b/xen/include/xen/p2m-common.h
> @@ -3,6 +3,10 @@
>  
>  #include <xen/mm.h>
>  
> +/* Set foreign entry in the p2m table */
> +int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
> +                          unsigned long gfn, mfn_t mfn);
> +
>  /* Remove a page from a domain's p2m table */
>  int __must_check
>  guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
> -- 
> 2.7.4
> 
--8323329-1214772993-1607557769=:20986--


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 00:13:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 00:13:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48890.86500 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn9a7-0006qc-7q; Thu, 10 Dec 2020 00:13:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48890.86500; Thu, 10 Dec 2020 00:13:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kn9a7-0006qV-4c; Thu, 10 Dec 2020 00:13:15 +0000
Received: by outflank-mailman (input) for mailman id 48890;
 Thu, 10 Dec 2020 00:13:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kn9a5-0006qN-GH; Thu, 10 Dec 2020 00:13:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kn9a5-0006fd-93; Thu, 10 Dec 2020 00:13:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kn9a5-0005L5-1W; Thu, 10 Dec 2020 00:13:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kn9a5-0004iM-11; Thu, 10 Dec 2020 00:13:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mi3vcQ5ByLRZ0PM62lwCyVqB6ubb0957fF+A05Ahpn4=; b=YmIyf9hSbdAFDnx0zwBGGRIFmX
	cSX3WmdKqnnMXh1eJMd9O8rZvM2l0l6zQrCqJJuApsZWzA5aP2r04TgmXYmBBg6NMGocZvbWcMb6u
	C7hjTQ21YHQdxycV7v/OxXNiHEsrL22kzHLjyjtj1/wOQ0+BbmVMKXGneAoBXy4NozTE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157349-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157349: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=c923a30481baf87f631659085f94cd6000116192
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Dec 2020 00:13:13 +0000

flight 157349 qemu-mainline real [real]
flight 157359 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157349/
http://logs.test-lab.xenproject.org/osstest/logs/157359/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                c923a30481baf87f631659085f94cd6000116192
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  111 days
Failing since        152659  2020-08-21 14:07:39 Z  110 days  230 attempts
Testing same since   157349  2020-12-09 16:38:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69371 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 01:43:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 01:43:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48907.86534 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knAz4-00071k-B1; Thu, 10 Dec 2020 01:43:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48907.86534; Thu, 10 Dec 2020 01:43:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knAz4-00071b-3U; Thu, 10 Dec 2020 01:43:06 +0000
Received: by outflank-mailman (input) for mailman id 48907;
 Thu, 10 Dec 2020 01:43:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knAz3-00071T-7D; Thu, 10 Dec 2020 01:43:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knAz3-00080x-2c; Thu, 10 Dec 2020 01:43:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knAz2-0000UC-IS; Thu, 10 Dec 2020 01:43:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1knAz2-0003Jd-Hz; Thu, 10 Dec 2020 01:43:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Nxu7ibejaOxBodWYrhMm/yWTXBC3N+irrCsx+Bmhxus=; b=Wt6zcke6CLiLnR/zA4D1jNKvEo
	6DiVnoeeB+qayuIXHB4ZDNP3cJmK5IBBsXJM3bdYk7WEUXGUQMzcVNN2yoT1yibSbozLwk5flZhHK
	w7l93J9CRFy04rNioN1mlG0S5pz7ZPK6SDzvdYWBGLVAyHf4x8ciId1Y8WspoUB5pFSA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157354-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157354: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=272a1db63a09087ce3da4cf44ec7b758611ff1ed
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Dec 2020 01:43:04 +0000

flight 157354 ovmf real [real]
flight 157362 ovmf real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157354/
http://logs.test-lab.xenproject.org/osstest/logs/157362/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 272a1db63a09087ce3da4cf44ec7b758611ff1ed
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    0 days
Testing same since   157348  2020-12-09 15:39:39 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Pierre Gondois <Pierre.Gondois@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 272a1db63a09087ce3da4cf44ec7b758611ff1ed
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Nov 13 11:31:01 2020 +0000

    ArmPlatformPkg: Fix cspell reported spelling/wording
    
    The edk2 CI runs the "cspell" spell checker tool. Some words
    are not recognized by the tool, triggering errors.
    This patch modifies some spelling/wording detected by cspell.
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit 061cbbc1115eb7360f2c7627d53d13e35d63cbe3
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Nov 20 10:01:13 2020 +0000

    ArmPlatformPkg: Fix Ecc error 8001 in PrePi
    
    This patch fixes the following Ecc reported error:
    Only capital letters are allowed to be used for #define
    declarations
    
    The "SerialPrint" macro is defined for the PrePi module
    residing in the ArmPlatformPkg. It is never used in the module.
    The macro is thus removed.
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit 2dfd81aaf50ca2bd1e2d33ed5687620de90810ce
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Nov 6 09:47:47 2020 +0000

    ArmPlatformPkg: Fix Ecc error 10006 in ArmPlatformPkg.dsc
    
    This patch fixes the following Ecc reported error:
    There should be no unnecessary inclusion of library
    classes in the INF file
    
    This comes with the additional information:
    "The Library Class [TimeBaseLib] is not used
    in any platform"
    "The Library Class [PL011UartClockLib] is not used
    in any platform"
    "The Library Class [PL011UartLib] is not used
    in any platform"
    
    Indeed, the PL011SerialPortLib module requires the
    PL011UartClockLib and PL011UartLib libraries.
    The PL031RealTimeClockLib module requires the TimeBaseLib
    library.
    ArmPlatformPkg/ArmPlatformPkg.dsc builds the two modules,
    but doesn't build the required libraries. This patch adds
    the missing libraries to the [LibraryClasses.common] section.
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit 42bec8c8104c9db4891dfd1b208032c9c413d861
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 15:33:12 2020 +0100

    ArmPlatformPkg: Fix Ecc error 10014 in SP805WatchdogDxe
    
    This patch fixes the following Ecc reported error:
    No used module files found
    
    The source file
    [ArmPlatformPkg/Drivers/SP805WatchdogDxe/SP805Watchdog.h]
    exists in the module directory but is not described
    in the INF file.
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit 2e0cbfcbed96505953ef09fcfb72d4ea83cc8df2
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 15:32:42 2020 +0100

    ArmPlatformPkg: Fix Ecc error 10014 in PL061GpioDxe
    
    This patch fixes the following Ecc reported error:
    No used module files found
    
    The source file
    [ArmPlatformPkg/Drivers/PL061GpioDxe/PL061Gpio.h]
    exists in the module directory but is not described
    in the INF file.
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit a36a0f1d81a2502a922617cf99be0bb81de2f57a
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 15:32:26 2020 +0100

    ArmPlatformPkg: Fix Ecc error 10014 in LcdGraphicsOutputDxe
    
    This patch fixes the following Ecc reported error:
    No used module files found
    
    The source file
    [ArmPlatformPkg/Drivers/LcdGraphicsOutputDxe/LcdGraphicsOutputDxe.h]
    exists in the module directory but is not described
    in the INF file.
    
    Files in [Sources.common] are also alphabetically re-ordered.
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit c5d970a01e76c1a20f6bb009b32e479ad2444548
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 15:18:04 2020 +0100

    ArmPlatformPkg: Fix Ecc error 10016 in LcdPlatformNullLib
    
    This patch fixes the following Ecc reported error:
    Module file has FILE_GUID collision with other
    module file
    
    The two .inf files with clashing GUID are:
    edk2\ArmPlatformPkg\PrePeiCore\PrePeiCoreMPCore.inf
    edk2\ArmPlatformPkg\Library\LcdPlatformNullLib\LcdPlatformNullLib.inf
    
    The PrePeiCoreMPCore module was imported in 2011 and the
    LcdPlatformNullLib module was created in 2017, so
    PrePeiCoreMPCore takes precedence.
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit 746dda63b2d612a2ad9e0b4c05722920586d2e60
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 14:37:14 2020 +0100

    ArmPlatformPkg: Fix Ecc error 10016 in PrePi
    
    This patch fixes the following Ecc reported error:
    Module file has FILE_GUID collision with other
    module file
    
    The two .inf files with clashing GUID are:
    edk2\ArmPlatformPkg\PrePi\PeiUniCore.inf
    edk2\ArmPlatformPkg\PrePi\PeiMPCore.inf
    
    Both files seem to have been imported from the previous
    svn repository at the same time.
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit 28978df0bdafce1d26ff337fd67ee6c3a5b3876e
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 14:36:19 2020 +0100

    ArmPlatformPkg: Fix Ecc error 5007 in PL031RealTimeClockLib
    
    This patch fixes the following Ecc reported error:
    There should be no initialization of a variable as
    part of its declaration
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit 1485e8bbc86e9a7e1954cfe5697fbd45d8e3b04e
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 14:36:01 2020 +0100

    ArmPlatformPkg: Fix Ecc error 5007 in PL061GpioDxe
    
    This patch fixes the following Ecc reported error:
    There should be no initialization of a variable as
    part of its declaration
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit 4c7e107810cacd1dbd4c6f7d6d4d22e3de2f8db1
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 14:35:36 2020 +0100

    ArmPlatformPkg: Fix Ecc error 5007 in NorFlashDxe
    
    This patch fixes the following Ecc reported error:
    There should be no initialization of a variable as
    part of its declaration
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit eb97f13839fd64ce3e4ff9dd39ea9950db48207d
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 14:35:07 2020 +0100

    ArmPlatformPkg: Fix Ecc error 5007 in LcdGraphicsOutputDxe
    
    This patch fixes the following Ecc reported error:
    There should be no initialization of a variable as
    part of its declaration
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit d315bd2286cde306f1ef5256026038e610505cca
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 14:32:40 2020 +0100

    ArmPlatformPkg: Fix Ecc error 3002 in PL061GpioDxe
    
    This patch fixes the following Ecc reported error:
    Non-Boolean comparisons should use a compare operator
    (==, !=, >, <, >=, <=)
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit ee78edceca89057ab9854f7e5070391a8229ece4
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 14:31:50 2020 +0100

    ArmPlatformPkg: Fix Ecc error 3002 in PL011UartLib
    
    This patch fixes the following Ecc reported error:
    Non-Boolean comparisons should use a compare operator
    (==, !=, >, <, >=, <=)
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit dd917bae85396055ff5d6ea760bff3702d154101
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 13:31:40 2020 +0100

    ArmPlatformPkg: Fix Ecc error 3001 in NorFlashDxe
    
    This patch fixes the following Ecc reported error:
    Boolean values and variable type BOOLEAN should not use
    explicit comparisons to TRUE or FALSE
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 02:21:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 02:21:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48916.86549 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knBaA-0002pK-AH; Thu, 10 Dec 2020 02:21:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48916.86549; Thu, 10 Dec 2020 02:21:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knBaA-0002pD-6k; Thu, 10 Dec 2020 02:21:26 +0000
Received: by outflank-mailman (input) for mailman id 48916;
 Thu, 10 Dec 2020 02:21:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JUBE=FO=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1knBa8-0002p8-OK
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 02:21:24 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c30a9133-2519-4b0d-af31-b349307a50ab;
 Thu, 10 Dec 2020 02:21:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c30a9133-2519-4b0d-af31-b349307a50ab
Date: Wed, 9 Dec 2020 18:21:21 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607566882;
	bh=UNQcLEtngSuCobx1FTNYh6Y2rz91ba3pHeTBgv1GL40=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=QScRGRK6nByNLxX2ExDW9w65WRTNycxcOU9wGa8TXEdUZwqtqIQJbrTIwqCkJDOE0
	 5moUzqPTN5bRnfZ/ZcvEx3OgNaZD9BuG+Gqz0H0IWlgqSlmKe9O0GSCZIz/wypmZlM
	 v0Z2lY0xhWYP2oEHUacB3Qkz/LR74JvqSDikxZJo+QhzJfW0eCQYx8f25sq4mCl7gq
	 1y4H5GjFWKFzxdW7eM3XwkJFVck6+lGlDnAre5U+bB2S6zoEZaE8D5dAOwbS2Wbcpq
	 Cj8jyyofYjaFNK8aNW0Od6rXOqJZFhXqBKSIt1nhTO2Ce0U7g0Opk3PZb2iDVCYfEi
	 sfYEcSSECYp6g==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
cc: xen-devel@lists.xenproject.org, Julien Grall <julien.grall@arm.com>, 
    Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    alex.bennee@linaro.org
Subject: Re: [PATCH V3 18/23] xen/dm: Introduce xendevicemodel_set_irq_level
 DM op
In-Reply-To: <1606732298-22107-19-git-send-email-olekstysh@gmail.com>
Message-ID: <alpine.DEB.2.21.2012091802240.20986@sstabellini-ThinkPad-T480s>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com> <1606732298-22107-19-git-send-email-olekstysh@gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 30 Nov 2020, Oleksandr Tyshchenko wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> This patch adds the ability for the device emulator to notify the
> other end (some entity running in the guest) using an SPI, and
> implements the Arm-specific bits for it. The proposed interface
> allows the emulator to set the logical level of one of a domain's
> IRQ lines.
> 
> We can't reuse the existing DM op (xen_dm_op_set_isa_irq_level)
> to inject an interrupt, as the "isa_irq" field is only 8 bits wide
> and can only cover IRQs 0 - 255, whereas we need a wider range (0 - 1020).
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> ---
> Please note, this is a split/cleanup/hardening of Julien's PoC:
> "Add support for Guest IO forwarding to a device emulator"
> 
> ***
> Please note, I left the interface untouched since there is still
> an open discussion about what interface to use and what information
> to pass to the hypervisor. The open question is whether we should
> abstract away the state of the line or not.
> ***

Let's start with a simple question: is this going to work with the
virtio-mmio emulation in QEMU, which doesn't lower the state of the
line to end the notification (it only calls qemu_set_irq(irq, high))?

See: hw/virtio/virtio-mmio.c:virtio_mmio_update_irq


Alex (CC'ed) might be able to confirm whether I am reading the QEMU
code correctly. Assuming it is true that QEMU only raises the level
and never lowers it, then although the emulation is obviously not
correct, I would rather keep QEMU as is for efficiency reasons, and
because we don't want to deviate from the common implementation in
QEMU.


Looking at this patch and at vgic_inject_irq, yes, I think it would
work as is.


So it looks like we are going to end up with an interface that:

- in theory models the line closely
- in practice is only called to "trigger the IRQ"


Hence my preference for being explicit about it and just calling it
trigger_irq.

If we keep the patch as is, should we at least add a comment to document
the "QEMU style" use model?


> Changes RFC -> V1:
>    - check incoming parameters in arch_dm_op()
>    - add explicit padding to struct xen_dm_op_set_irq_level
> 
> Changes V1 -> V2:
>    - update the author of a patch
>    - update patch description
>    - check that padding is always 0
>    - mention that interface is Arm only and only SPIs are
>      supported for now
>    - allow to set the logical level of a line for non-allocated
>      interrupts only
>    - add xen_dm_op_set_irq_level_t
> 
> Changes V2 -> V3:
>    - no changes
> ---
> ---
>  tools/include/xendevicemodel.h               |  4 ++
>  tools/libs/devicemodel/core.c                | 18 +++++++++
>  tools/libs/devicemodel/libxendevicemodel.map |  1 +
>  xen/arch/arm/dm.c                            | 57 +++++++++++++++++++++++++++-
>  xen/common/dm.c                              |  1 +
>  xen/include/public/hvm/dm_op.h               | 16 ++++++++
>  6 files changed, 96 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/include/xendevicemodel.h b/tools/include/xendevicemodel.h
> index e877f5c..c06b3c8 100644
> --- a/tools/include/xendevicemodel.h
> +++ b/tools/include/xendevicemodel.h
> @@ -209,6 +209,10 @@ int xendevicemodel_set_isa_irq_level(
>      xendevicemodel_handle *dmod, domid_t domid, uint8_t irq,
>      unsigned int level);
>  
> +int xendevicemodel_set_irq_level(
> +    xendevicemodel_handle *dmod, domid_t domid, unsigned int irq,
> +    unsigned int level);
> +
>  /**
>   * This function maps a PCI INTx line to a an IRQ line.
>   *
> diff --git a/tools/libs/devicemodel/core.c b/tools/libs/devicemodel/core.c
> index 4d40639..30bd79f 100644
> --- a/tools/libs/devicemodel/core.c
> +++ b/tools/libs/devicemodel/core.c
> @@ -430,6 +430,24 @@ int xendevicemodel_set_isa_irq_level(
>      return xendevicemodel_op(dmod, domid, 1, &op, sizeof(op));
>  }
>  
> +int xendevicemodel_set_irq_level(
> +    xendevicemodel_handle *dmod, domid_t domid, uint32_t irq,
> +    unsigned int level)
> +{
> +    struct xen_dm_op op;
> +    struct xen_dm_op_set_irq_level *data;
> +
> +    memset(&op, 0, sizeof(op));
> +
> +    op.op = XEN_DMOP_set_irq_level;
> +    data = &op.u.set_irq_level;
> +
> +    data->irq = irq;
> +    data->level = level;
> +
> +    return xendevicemodel_op(dmod, domid, 1, &op, sizeof(op));
> +}
> +
>  int xendevicemodel_set_pci_link_route(
>      xendevicemodel_handle *dmod, domid_t domid, uint8_t link, uint8_t irq)
>  {
> diff --git a/tools/libs/devicemodel/libxendevicemodel.map b/tools/libs/devicemodel/libxendevicemodel.map
> index 561c62d..a0c3012 100644
> --- a/tools/libs/devicemodel/libxendevicemodel.map
> +++ b/tools/libs/devicemodel/libxendevicemodel.map
> @@ -32,6 +32,7 @@ VERS_1.2 {
>  	global:
>  		xendevicemodel_relocate_memory;
>  		xendevicemodel_pin_memory_cacheattr;
> +		xendevicemodel_set_irq_level;
>  } VERS_1.1;
>  
>  VERS_1.3 {
> diff --git a/xen/arch/arm/dm.c b/xen/arch/arm/dm.c
> index 5d3da37..e4bb233 100644
> --- a/xen/arch/arm/dm.c
> +++ b/xen/arch/arm/dm.c
> @@ -17,10 +17,65 @@
>  #include <xen/dm.h>
>  #include <xen/hypercall.h>
>  
> +#include <asm/vgic.h>
> +
>  int arch_dm_op(struct xen_dm_op *op, struct domain *d,
>                 const struct dmop_args *op_args, bool *const_op)
>  {
> -    return -EOPNOTSUPP;
> +    int rc;
> +
> +    switch ( op->op )
> +    {
> +    case XEN_DMOP_set_irq_level:
> +    {
> +        const struct xen_dm_op_set_irq_level *data =
> +            &op->u.set_irq_level;
> +        unsigned int i;
> +
> +        /* Only SPIs are supported */
> +        if ( (data->irq < NR_LOCAL_IRQS) || (data->irq >= vgic_num_irqs(d)) )
> +        {
> +            rc = -EINVAL;
> +            break;
> +        }
> +
> +        if ( data->level != 0 && data->level != 1 )
> +        {
> +            rc = -EINVAL;
> +            break;
> +        }
> +
> +        /* Check that padding is always 0 */
> +        for ( i = 0; i < sizeof(data->pad); i++ )
> +        {
> +            if ( data->pad[i] )
> +            {
> +                rc = -EINVAL;
> +                break;
> +            }
> +        }
> +
> +        /*
> +         * Allow to set the logical level of a line for non-allocated
> +         * interrupts only.
> +         */
> +        if ( test_bit(data->irq, d->arch.vgic.allocated_irqs) )
> +        {
> +            rc = -EINVAL;
> +            break;
> +        }
> +
> +        vgic_inject_irq(d, NULL, data->irq, data->level);
> +        rc = 0;
> +        break;
> +    }
> +
> +    default:
> +        rc = -EOPNOTSUPP;
> +        break;
> +    }
> +
> +    return rc;
>  }
>  
>  /*
> diff --git a/xen/common/dm.c b/xen/common/dm.c
> index 9d394fc..7bfb46c 100644
> --- a/xen/common/dm.c
> +++ b/xen/common/dm.c
> @@ -48,6 +48,7 @@ static int dm_op(const struct dmop_args *op_args)
>          [XEN_DMOP_remote_shutdown]                  = sizeof(struct xen_dm_op_remote_shutdown),
>          [XEN_DMOP_relocate_memory]                  = sizeof(struct xen_dm_op_relocate_memory),
>          [XEN_DMOP_pin_memory_cacheattr]             = sizeof(struct xen_dm_op_pin_memory_cacheattr),
> +        [XEN_DMOP_set_irq_level]                    = sizeof(struct xen_dm_op_set_irq_level),
>      };
>  
>      rc = rcu_lock_remote_domain_by_id(op_args->domid, &d);
> diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
> index 66cae1a..1f70d58 100644
> --- a/xen/include/public/hvm/dm_op.h
> +++ b/xen/include/public/hvm/dm_op.h
> @@ -434,6 +434,21 @@ struct xen_dm_op_pin_memory_cacheattr {
>  };
>  typedef struct xen_dm_op_pin_memory_cacheattr xen_dm_op_pin_memory_cacheattr_t;
>  
> +/*
> + * XEN_DMOP_set_irq_level: Set the logical level of one of a domain's
> + *                         IRQ lines (currently Arm only).
> + * Only SPIs are supported.
> + */
> +#define XEN_DMOP_set_irq_level 19
> +
> +struct xen_dm_op_set_irq_level {
> +    uint32_t irq;
> +    /* IN - Level: 0 -> deasserted, 1 -> asserted */
> +    uint8_t level;
> +    uint8_t pad[3];
> +};
> +typedef struct xen_dm_op_set_irq_level xen_dm_op_set_irq_level_t;
> +
>  struct xen_dm_op {
>      uint32_t op;
>      uint32_t pad;
> @@ -447,6 +462,7 @@ struct xen_dm_op {
>          xen_dm_op_track_dirty_vram_t track_dirty_vram;
>          xen_dm_op_set_pci_intx_level_t set_pci_intx_level;
>          xen_dm_op_set_isa_irq_level_t set_isa_irq_level;
> +        xen_dm_op_set_irq_level_t set_irq_level;
>          xen_dm_op_set_pci_link_route_t set_pci_link_route;
>          xen_dm_op_modified_memory_t modified_memory;
>          xen_dm_op_set_mem_type_t set_mem_type;
> -- 
> 2.7.4
> 


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 02:30:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 02:30:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48922.86560 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knBiZ-0003hw-5W; Thu, 10 Dec 2020 02:30:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48922.86560; Thu, 10 Dec 2020 02:30:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knBiZ-0003hL-2G; Thu, 10 Dec 2020 02:30:07 +0000
Received: by outflank-mailman (input) for mailman id 48922;
 Thu, 10 Dec 2020 02:30:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JUBE=FO=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1knBiY-0003dc-1Q
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 02:30:06 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id aaf3bc85-c222-4439-9148-6c7d3081ec26;
 Thu, 10 Dec 2020 02:30:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aaf3bc85-c222-4439-9148-6c7d3081ec26
Date: Wed, 9 Dec 2020 18:30:03 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607567404;
	bh=Yv5v27nCDnCoS0ad6Hjwta6gTf838NB2wyUr57grFwc=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=MNpIdRgN9ngy1hGK0AL6CoHv3GKV5tUgerT6Y+zFfq0iGrHj9ey5XbIdrEvchFqAQ
	 tvCR6ncoVw9kB35KFZe01IyGgSbKt2xFam7rdJL6AHYwvnOKUSWrHpfW10d9v76//y
	 dcitEKAi+CGO9qEM1S3Z62UPq5JGgW00VnltZT769kHngpClfpX1X7/OJEU1mBzphh
	 jPybJrmtWLxR4cIqFYHDZjK7siIrlPsqzbx98Rm2+REH/AW59kZeknMig08p3R6VXW
	 nE2otkClR1O0yyjixHUFxZxo3rSnZr9WKEtMep1b6HLK0CeGtB+5mEAhzL3TDcB/RR
	 Eq7buPSrck0TA==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Oleksandr Tyshchenko <olekstysh@gmail.com>, xen-devel@lists.xenproject.org, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Julien Grall <julien.grall@arm.com>
Subject: Re: [PATCH V3 15/23] xen/arm: Stick around in leave_hypervisor_to_guest
 until I/O has completed
In-Reply-To: <52799b99-6405-03f4-2a46-3a0a4aac597f@xen.org>
Message-ID: <alpine.DEB.2.21.2012091745550.20986@sstabellini-ThinkPad-T480s>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com> <1606732298-22107-16-git-send-email-olekstysh@gmail.com> <alpine.DEB.2.21.2012091432450.20986@sstabellini-ThinkPad-T480s> <alpine.DEB.2.21.2012091521480.20986@sstabellini-ThinkPad-T480s>
 <52799b99-6405-03f4-2a46-3a0a4aac597f@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 9 Dec 2020, Julien Grall wrote:
> On 09/12/2020 23:35, Stefano Stabellini wrote:
> > On Wed, 9 Dec 2020, Stefano Stabellini wrote:
> > > On Mon, 30 Nov 2020, Oleksandr Tyshchenko wrote:
> > > > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> > > > 
> > > > This patch adds proper handling of return value of
> > > > vcpu_ioreq_handle_completion() which involves using a loop
> > > > in leave_hypervisor_to_guest().
> > > > 
> > > > The reason to use an unbounded loop here is that the vCPU
> > > > shouldn't continue until the I/O has completed. In Xen's case, if an I/O
> > > > never completes then it most likely means that something went horribly
> > > > wrong with the Device Emulator, and it is most likely not safe to
> > > > continue. So letting the vCPU spin forever if the I/O never completes
> > > > is safer than letting it continue and leaving the guest in an
> > > > unclear state, and is the best we can do for now.
> > > > 
> > > > This wouldn't be an issue for Xen as do_softirq() would be called at
> > > > every loop. In case of failure, the guest will crash and the vCPU
> > > > will be unscheduled.
> > > 
> > > Imagine that we have two guests: one that requires an ioreq server and
> > > one that doesn't. If I am not mistaken this loop could potentially spin
> > > forever on a pcpu, thus preventing any other guest from being scheduled,
> > > if the other guest doesn't need any ioreq servers.
> > > 
> > > 
> > > My other concern is that we are busy-looping. Could we call something
> > > like wfi() or do_idle() instead? The ioreq server event notification of
> > > completion should wake us up?
> > > 
> > > Following this line of thinking, I am wondering if instead of the
> > > busy-loop we should call vcpu_block_unless_event_pending(current) in
> > > try_handle_mmio if IO_RETRY. Then when the emulation is done, QEMU (or
> > > equivalent) calls xenevtchn_notify which ends up waking up the domU
> > > vcpu. Would that work?
> > 
> > I read now Julien's reply: we are already doing something similar to
> > what I suggested with the following call chain:
> > 
> > check_for_vcpu_work -> vcpu_ioreq_handle_completion -> wait_for_io ->
> > wait_on_xen_event_channel
> > 
> > So the busy-loop here is only a safety-belt in case of a spurious
> > wake-up, in which case we are going to call check_for_vcpu_work again,
> > potentially causing a guest reschedule.
> > 
> > Then, this is fine and addresses both my concerns. Maybe let's add a note
> > in the commit message about it.
> 
> Damn, I hit the "sent" button just a second before seeing your reply. :/ Oh
> well. I suggested the same because I have seen the same question multiple
> times.

:-)

 
> > I am also wondering if there is any benefit in calling wait_for_io()
> > earlier, maybe from try_handle_mmio if IO_RETRY?
> 
> wait_for_io() may end up descheduling the vCPU. I would like to avoid that
> happening in the middle of the I/O emulation because it needs to happen
> without any locks held.
> 
> I don't think there are locks involved today, but the deeper in the call stack
> the scheduling happens, the more chance we may screw up in the future.
> 
> However...
> 
> > leave_hypervisor_to_guest is very late for that.
> 
> ... I am not sure what the problem with that is. The IOREQ server will be
> notified of the pending I/O as soon as try_handle_mmio() puts the I/O in the
> shared page.
> 
> If the IOREQ server is running on a different pCPU, then it might be possible
> that the I/O has completed before we reach leave_hypervisor_to_guest(). In this
> case, we would not have to wait for the I/O.

Yeah, I was thinking about that too. Actually it could be faster
this way if we end up being lucky.

The reason for moving it earlier would be that by the time
leave_hypervisor_to_guest is called "Xen" has already decided to
continue running this particular vcpu. If we called wait_for_io()
earlier, we would give important information to the scheduler before any
decision is made. This is more "philosophical" than practical though.
Let's leave it as is.


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 02:30:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 02:30:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48923.86573 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knBij-0003ox-Hd; Thu, 10 Dec 2020 02:30:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48923.86573; Thu, 10 Dec 2020 02:30:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knBij-0003oo-Dv; Thu, 10 Dec 2020 02:30:17 +0000
Received: by outflank-mailman (input) for mailman id 48923;
 Thu, 10 Dec 2020 02:30:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JUBE=FO=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1knBih-0003oG-SB
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 02:30:15 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 60240eae-14cd-464b-8292-29a1952fbf7a;
 Thu, 10 Dec 2020 02:30:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60240eae-14cd-464b-8292-29a1952fbf7a
Date: Wed, 9 Dec 2020 18:30:13 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607567414;
	bh=ed5jc+mIWtIVT1XojWkam9DBGIYbXtUNab+lKWwMA+0=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=rM2ba7IaiEiImZuBlJeOxwdHw91iTBTmVndc3yYoAgoY5tvGV7iuyH8r8QnSIDGqP
	 Vw3UhK4WPUfROU/j4ASwNVahNwX7qt53OVL1KXDA07A1WyGtPdsI2SuXtb1hdrud6v
	 mBZzEGUeWPzZFpw94lGAnYaDrzqjNb9weF4nniROpfIIVWjjGS1w8Ufjh7YnufCwxt
	 QsetyxW/E7CqIkXt6LKQTQTuBalJtU16NHNKCXLsUvxjD7lEJdkKZbfiZRvcvsJUcR
	 dIqMPaOn4wY1MB6BrFou8eXG02slqkREBEONGv6tH9gytWUZhlqHO5FuueBp7TiW2W
	 51t4N9s/r0aTQ==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Oleksandr <olekstysh@gmail.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Paul Durrant <paul@xen.org>, Julien Grall <julien.grall@arm.com>
Subject: Re: [PATCH V3 13/23] xen/ioreq: Use guest_cmpxchg64() instead of
 cmpxchg()
In-Reply-To: <e9ba97b2-01e4-f657-fceb-ccf4857e91c2@gmail.com>
Message-ID: <alpine.DEB.2.21.2012091750230.20986@sstabellini-ThinkPad-T480s>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com> <1606732298-22107-14-git-send-email-olekstysh@gmail.com> <alpine.DEB.2.21.2012091329480.20986@sstabellini-ThinkPad-T480s> <e9ba97b2-01e4-f657-fceb-ccf4857e91c2@gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 10 Dec 2020, Oleksandr wrote:
> > On Mon, 30 Nov 2020, Oleksandr Tyshchenko wrote:
> > > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> > > 
> > > The cmpxchg() in ioreq_send_buffered() operates on memory shared
> > > with the emulator domain (and the target domain if the legacy
> > > interface is used).
> > > 
> > > In order to be on the safe side we need to switch
> > > to guest_cmpxchg64() to prevent a domain from DoSing Xen on Arm.
> > > 
> > > As there is no plan to support the legacy interface on Arm,
> > > the page will be mapped in a single domain at a time,
> > > so we can use s->emulator in guest_cmpxchg64() safely.
> > > 
> > > Thankfully the only user of the legacy interface is x86 so far
> > > and there is no concern regarding the atomic operations.
> > > 
> > > Please note, that the legacy interface *must* not be used on Arm
> > > without revisiting the code.
> > > 
> > > Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> > > CC: Julien Grall <julien.grall@arm.com>
> > > 
> > > ---
> > > Please note, this is a split/cleanup/hardening of Julien's PoC:
> > > "Add support for Guest IO forwarding to a device emulator"
> > > 
> > > Changes RFC -> V1:
> > >     - new patch
> > > 
> > > Changes V1 -> V2:
> > >     - move earlier to avoid breaking arm32 compilation
> > >     - add an explanation to commit description and hvm_allow_set_param()
> > >     - pass s->emulator
> > > 
> > > Changes V2 -> V3:
> > >     - update patch description
> > > ---
> > > ---
> > >   xen/arch/arm/hvm.c | 4 ++++
> > >   xen/common/ioreq.c | 3 ++-
> > >   2 files changed, 6 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
> > > index 8951b34..9694e5a 100644
> > > --- a/xen/arch/arm/hvm.c
> > > +++ b/xen/arch/arm/hvm.c
> > > @@ -31,6 +31,10 @@
> > >     #include <asm/hypercall.h>
> > >   +/*
> > > + * The legacy interface (which involves magic IOREQ pages) *must* not be used
> > > + * without revisiting the code.
> > > + */
> > This is a NIT, but I'd prefer if you moved the comment a few lines
> > below, maybe just before the existing comment starting with "The
> > following parameters".
> > 
> > The reason is that as it is now it is not clear which set_params
> > interfaces should not be used without revisiting the code.
> OK, but maybe this comment should be dropped entirely? It was relevant when
> the legacy interface was part of the common code (V2). Now the legacy
> interface is x86-specific, so I am not sure this comment should be here.

Yeah, fine by me.

 
> > 
> > With that:
> > 
> > Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> Thank you



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 02:30:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 02:30:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48924.86585 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knBir-0003u4-RE; Thu, 10 Dec 2020 02:30:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48924.86585; Thu, 10 Dec 2020 02:30:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knBir-0003tv-O3; Thu, 10 Dec 2020 02:30:25 +0000
Received: by outflank-mailman (input) for mailman id 48924;
 Thu, 10 Dec 2020 02:30:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JUBE=FO=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1knBiq-0003tB-1I
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 02:30:24 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id efbef8a4-8b7f-41ae-bfed-d1116a81e6e2;
 Thu, 10 Dec 2020 02:30:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: efbef8a4-8b7f-41ae-bfed-d1116a81e6e2
Date: Wed, 9 Dec 2020 18:30:21 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607567422;
	bh=PdaMRgLLFYsRMRdD15V8WbneeRY6Ve49Y+a9ouPszoU=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=MKBtfF870VSV7oIWK/93lGGVs+Gz9P5jdsBZSgepUS7C/fOIikaVWn9VoB6OYZyBm
	 d28m0fjvLMw9JjwR8OQFnmGaGsQh+NPcmuXQVO+OGh4Cpdwj/yZjZg0grnlGwntudD
	 PflyUFclYm/1DW1gfj+XQQQhfK5YpHujxXykBJPkI5cXeTgsVtRNuz89DWXFoPFieW
	 vRaq63Rcpp70lM9oVe74riKdnCuKLdTftPyTuvUWz4TEVr75BvN+zM0HbJ4EJJyeLr
	 cYJ/kLm6mlaVRKmtdFzqvbEBRaXdNdkC4rQB4uf7baQ6M+b8IGa3NTdlsruq34egbe
	 3unM1ZIgrcZzg==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Oleksandr <olekstysh@gmail.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, Julien Grall <julien.grall@arm.com>, 
    Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, jbeulich@suse.com, 
    xadimgnik@gmail.com
Subject: Re: [PATCH V3 14/23] arm/ioreq: Introduce arch specific bits for
 IOREQ/DM features
In-Reply-To: <31e35f5d-5ab4-19bf-e36b-8e7c0b7bf7d4@gmail.com>
Message-ID: <alpine.DEB.2.21.2012091750420.20986@sstabellini-ThinkPad-T480s>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com> <1606732298-22107-15-git-send-email-olekstysh@gmail.com> <alpine.DEB.2.21.2012091357430.20986@sstabellini-ThinkPad-T480s> <31e35f5d-5ab4-19bf-e36b-8e7c0b7bf7d4@gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 10 Dec 2020, Oleksandr wrote:
> > > +#ifdef CONFIG_IOREQ_SERVER
> > > +enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v);
> > > +enum io_state try_fwd_ioserv(struct cpu_user_regs *regs,
> > > +                             struct vcpu *v, mmio_info_t *info);
> > > +#else
> > > +static inline enum io_state handle_ioserv(struct cpu_user_regs *regs,
> > > +                                          struct vcpu *v)
> > > +{
> > > +    return IO_UNHANDLED;
> > > +}
> > > +
> > > +static inline enum io_state try_fwd_ioserv(struct cpu_user_regs *regs,
> > > +                                           struct vcpu *v, mmio_info_t *info)
> > > +{
> > > +    return IO_UNHANDLED;
> > > +}
> > > +#endif
> > If we are providing stub functions, then we can also provide stub
> > functions for:
> > 
> > ioreq_domain_init
> > ioreq_server_destroy_all
> > 
> > and avoid the ifdefs.
> I got your point. These are common IOREQ interface functions whose
> declarations live in the common header. Should I provide the
> stubs in the common ioreq.h?
 
I'd prefer that, but if Jan and Paul don't want to have them I won't insist.
 
 
> > > +bool ioreq_complete_mmio(void);
> > > +
> > > +static inline bool handle_pio(uint16_t port, unsigned int size, int dir)
> > > +{
> > > +    /*
> > > +     * TODO: For Arm64, the main user will be PCI. So this should be
> > > +     * implemented when we add support for vPCI.
> > > +     */
> > > +    ASSERT_UNREACHABLE();
> > > +    return true;
> > > +}
> > > +
> > > +static inline void msix_write_completion(struct vcpu *v)
> > > +{
> > > +}
> > > +
> > > +static inline bool arch_vcpu_ioreq_completion(enum io_completion io_completion)
> > > +{
> > > +    ASSERT_UNREACHABLE();
> > > +    return true;
> > > +}
> > > +
> > > +/*
> > > + * The "legacy" mechanism of mapping magic pages for the IOREQ servers
> > > + * is x86 specific, so the following hooks don't need to be implemented on Arm:
> > > + * - arch_ioreq_server_map_pages
> > > + * - arch_ioreq_server_unmap_pages
> > > + * - arch_ioreq_server_enable
> > > + * - arch_ioreq_server_disable
> > > + */
> > > +static inline int arch_ioreq_server_map_pages(struct ioreq_server *s)
> > > +{
> > > +    return -EOPNOTSUPP;
> > > +}
> > > +
> > > +static inline void arch_ioreq_server_unmap_pages(struct ioreq_server *s)
> > > +{
> > > +}
> > > +
> > > +static inline void arch_ioreq_server_enable(struct ioreq_server *s)
> > > +{
> > > +}
> > > +
> > > +static inline void arch_ioreq_server_disable(struct ioreq_server *s)
> > > +{
> > > +}
> > > +
> > > +static inline void arch_ioreq_server_destroy(struct ioreq_server *s)
> > > +{
> > > +}
> > > +
> > > +static inline int arch_ioreq_server_map_mem_type(struct domain *d,
> > > +                                                 struct ioreq_server *s,
> > > +                                                 uint32_t flags)
> > > +{
> > > +    return -EOPNOTSUPP;
> > > +}
> > > +
> > > +static inline bool arch_ioreq_server_destroy_all(struct domain *d)
> > > +{
> > > +    return true;
> > > +}
> > > +
> > > +static inline int arch_ioreq_server_get_type_addr(const struct domain *d,
> > > +                                                  const ioreq_t *p,
> > > +                                                  uint8_t *type,
> > > +                                                  uint64_t *addr)
> > > +{
> > > +    if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
> > > +        return -EINVAL;
> > > +
> > > +    *type = (p->type == IOREQ_TYPE_PIO) ?
> > > +             XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
> > > +    *addr = p->addr;
> > This function is not used in this patch and PIOs are left unimplemented
> > according to a few comments, so I am puzzled by this code here. Do we
> > need it?
> Yes. It is called from ioreq_server_select (common/ioreq.c). I could just skip
> the PIO case and use *type = XEN_DMOP_IO_RANGE_MEMORY, but I didn't want to
> diverge.
 
I see. OK then.


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 02:30:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 02:30:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48930.86597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knBj2-00040Y-4z; Thu, 10 Dec 2020 02:30:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48930.86597; Thu, 10 Dec 2020 02:30:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knBj2-00040Q-1n; Thu, 10 Dec 2020 02:30:36 +0000
Received: by outflank-mailman (input) for mailman id 48930;
 Thu, 10 Dec 2020 02:30:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JUBE=FO=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1knBiz-0003zi-VS
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 02:30:34 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bb6081bc-808e-4fd8-8c7d-c98fee9d76d8;
 Thu, 10 Dec 2020 02:30:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb6081bc-808e-4fd8-8c7d-c98fee9d76d8
Date: Wed, 9 Dec 2020 18:30:30 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607567432;
	bh=FwxnLnNrqy7aesfgd3t6U5ag5+jjfPIjoAfG2bTFR0g=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=hUKELPFGRtXB/8gLLBYUjauc7GPZ/1qkFmhvg1AwoP8ZrmrsncPABZYfoquPsBjN7
	 FrnkULURuPdUMpXh604izG4ElC8zAhac3sYHAUDGbxPHzswi+1XWfAz4VupOWwrbj4
	 WD2i3aGwDnjVHRNHpVIsGOEWVqtBAyCbgdctDu9a5D9PjS/P3SjDhw3NMm1dDp2IyZ
	 eE2KZHNrYmGyG72tb0qGIVxT4oNqbFqKP/kwz7wRkTM25m4PRJW1UbBsvRpbqLbk5j
	 zV6sUXhracbYf5kXS7HlHODPEu7DBwAMTfAhnZPIhU1nfrwRTc0r8Sp67i0ioifm8O
	 CFxgRx0uyGLPQ==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Bertrand Marquis <bertrand.marquis@arm.com>, 
    xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 2/7] xen/arm: Add arm64 ID registers definitions
In-Reply-To: <af02eefb-5846-d32b-22e5-65763e6f51e0@xen.org>
Message-ID: <alpine.DEB.2.21.2012091742420.20986@sstabellini-ThinkPad-T480s>
References: <cover.1607524536.git.bertrand.marquis@arm.com> <96a970e5e5d2f1b1bd0e50327857de6a8c8441f7.1607524536.git.bertrand.marquis@arm.com> <af02eefb-5846-d32b-22e5-65763e6f51e0@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 9 Dec 2020, Julien Grall wrote:
> Hi Bertrand,
> 
> On 09/12/2020 16:30, Bertrand Marquis wrote:
> > Add coprocessor register definitions for all ID registers trapped
> > through the TID3 bit of HSR.
> > Those are the ones that will be emulated in Xen to only publish to guests
> > the features that are supported by Xen and that are accessible to
> > guests.
> > Also define a case to catch all reserved registers that should be
> > handled as RAZ.
> > 
> > Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> > ---
> > Changes in V2: Rebase
> > Changes in V3:
> >    Add case definition for reserved registers.
> > 
> > ---
> >   xen/include/asm-arm/arm64/hsr.h | 66 +++++++++++++++++++++++++++++++++
> >   1 file changed, 66 insertions(+)
> > 
> > diff --git a/xen/include/asm-arm/arm64/hsr.h b/xen/include/asm-arm/arm64/hsr.h
> > index ca931dd2fe..ffe0f0007e 100644
> > --- a/xen/include/asm-arm/arm64/hsr.h
> > +++ b/xen/include/asm-arm/arm64/hsr.h
> > @@ -110,6 +110,72 @@
> >   #define HSR_SYSREG_CNTP_CTL_EL0   HSR_SYSREG(3,3,c14,c2,1)
> >   #define HSR_SYSREG_CNTP_CVAL_EL0  HSR_SYSREG(3,3,c14,c2,2)
> >   +/* Those registers are used when HCR_EL2.TID3 is set */
> > +#define HSR_SYSREG_ID_PFR0_EL1    HSR_SYSREG(3,0,c0,c1,0)
> > +#define HSR_SYSREG_ID_PFR1_EL1    HSR_SYSREG(3,0,c0,c1,1)
> > +#define HSR_SYSREG_ID_PFR2_EL1    HSR_SYSREG(3,0,c0,c3,4)
> > +#define HSR_SYSREG_ID_DFR0_EL1    HSR_SYSREG(3,0,c0,c1,2)
> > +#define HSR_SYSREG_ID_DFR1_EL1    HSR_SYSREG(3,0,c0,c3,5)
> > +#define HSR_SYSREG_ID_AFR0_EL1    HSR_SYSREG(3,0,c0,c1,3)
> > +#define HSR_SYSREG_ID_MMFR0_EL1   HSR_SYSREG(3,0,c0,c1,4)
> > +#define HSR_SYSREG_ID_MMFR1_EL1   HSR_SYSREG(3,0,c0,c1,5)
> > +#define HSR_SYSREG_ID_MMFR2_EL1   HSR_SYSREG(3,0,c0,c1,6)
> > +#define HSR_SYSREG_ID_MMFR3_EL1   HSR_SYSREG(3,0,c0,c1,7)
> > +#define HSR_SYSREG_ID_MMFR4_EL1   HSR_SYSREG(3,0,c0,c2,6)
> > +#define HSR_SYSREG_ID_MMFR5_EL1   HSR_SYSREG(3,0,c0,c3,6)
> > +#define HSR_SYSREG_ID_ISAR0_EL1   HSR_SYSREG(3,0,c0,c2,0)
> > +#define HSR_SYSREG_ID_ISAR1_EL1   HSR_SYSREG(3,0,c0,c2,1)
> > +#define HSR_SYSREG_ID_ISAR2_EL1   HSR_SYSREG(3,0,c0,c2,2)
> > +#define HSR_SYSREG_ID_ISAR3_EL1   HSR_SYSREG(3,0,c0,c2,3)
> > +#define HSR_SYSREG_ID_ISAR4_EL1   HSR_SYSREG(3,0,c0,c2,4)
> > +#define HSR_SYSREG_ID_ISAR5_EL1   HSR_SYSREG(3,0,c0,c2,5)
> > +#define HSR_SYSREG_ID_ISAR6_EL1   HSR_SYSREG(3,0,c0,c2,7)
> > +#define HSR_SYSREG_MVFR0_EL1      HSR_SYSREG(3,0,c0,c3,0)
> > +#define HSR_SYSREG_MVFR1_EL1      HSR_SYSREG(3,0,c0,c3,1)
> > +#define HSR_SYSREG_MVFR2_EL1      HSR_SYSREG(3,0,c0,c3,2)
> > +
> > +#define HSR_SYSREG_ID_AA64PFR0_EL1   HSR_SYSREG(3,0,c0,c4,0)
> > +#define HSR_SYSREG_ID_AA64PFR1_EL1   HSR_SYSREG(3,0,c0,c4,1)
> > +#define HSR_SYSREG_ID_AA64DFR0_EL1   HSR_SYSREG(3,0,c0,c5,0)
> > +#define HSR_SYSREG_ID_AA64DFR1_EL1   HSR_SYSREG(3,0,c0,c5,1)
> > +#define HSR_SYSREG_ID_AA64ISAR0_EL1  HSR_SYSREG(3,0,c0,c6,0)
> > +#define HSR_SYSREG_ID_AA64ISAR1_EL1  HSR_SYSREG(3,0,c0,c6,1)
> > +#define HSR_SYSREG_ID_AA64MMFR0_EL1  HSR_SYSREG(3,0,c0,c7,0)
> > +#define HSR_SYSREG_ID_AA64MMFR1_EL1  HSR_SYSREG(3,0,c0,c7,1)
> > +#define HSR_SYSREG_ID_AA64MMFR2_EL1  HSR_SYSREG(3,0,c0,c7,2)
> > +#define HSR_SYSREG_ID_AA64AFR0_EL1   HSR_SYSREG(3,0,c0,c5,4)
> > +#define HSR_SYSREG_ID_AA64AFR1_EL1   HSR_SYSREG(3,0,c0,c5,5)
> > +#define HSR_SYSREG_ID_AA64ZFR0_EL1   HSR_SYSREG(3,0,c0,c4,4)
> > +
> > +/*
> > + * Those cases catch all reserved registers trapped by TID3 which
> > + * currently have no assignment.
> > + * HCR_EL2.TID3 traps all registers in group 3:
> > + * Op0 == 3, Op1 == 0, CRn == c0, CRm == {c1-c7}, Op2 == {0-7}.
> > + */
> > +#define HSR_SYSREG_TID3_RESERVED_CASE  case HSR_SYSREG(3,0,c0,c3,3): \
> > +                                       case HSR_SYSREG(3,0,c0,c3,7): \
> > +                                       case HSR_SYSREG(3,0,c0,c4,2): \
> > +                                       case HSR_SYSREG(3,0,c0,c4,3): \
> > +                                       case HSR_SYSREG(3,0,c0,c4,5): \
> > +                                       case HSR_SYSREG(3,0,c0,c4,6): \
> > +                                       case HSR_SYSREG(3,0,c0,c4,7): \
> > +                                       case HSR_SYSREG(3,0,c0,c5,2): \
> > +                                       case HSR_SYSREG(3,0,c0,c5,3): \
> > +                                       case HSR_SYSREG(3,0,c0,c5,6): \
> > +                                       case HSR_SYSREG(3,0,c0,c5,7): \
> > +                                       case HSR_SYSREG(3,0,c0,c6,2): \
> > +                                       case HSR_SYSREG(3,0,c0,c6,3): \
> > +                                       case HSR_SYSREG(3,0,c0,c6,4): \
> > +                                       case HSR_SYSREG(3,0,c0,c6,5): \
> > +                                       case HSR_SYSREG(3,0,c0,c6,6): \
> > +                                       case HSR_SYSREG(3,0,c0,c6,7): \
> > +                                       case HSR_SYSREG(3,0,c0,c7,3): \
> > +                                       case HSR_SYSREG(3,0,c0,c7,4): \
> > +                                       case HSR_SYSREG(3,0,c0,c7,5): \
> > +                                       case HSR_SYSREG(3,0,c0,c7,6): \
> > +                                       case HSR_SYSREG(3,0,c0,c7,7)
> 
> I don't like the idea of defining the list of cases in a header that is used
> by multiple source files. Please define it directly in the source file that
> uses it.

At that point it might be best to open-code it in do_sysreg? I mean no
#define at all.
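For illustration only, an open-coded variant might look like the sketch below. This is a standalone mock, not Xen source: the HSR_SYSREG packing and the handle_ro_raz() helper are simplified stand-ins for the real definitions (the real macro builds the HSR ISS encoding).

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Simplified stand-in for Xen's HSR_SYSREG(): just packs
 * op0/op1/CRn/CRm/op2 into one value so each encoding yields a
 * distinct case label.
 */
#define HSR_SYSREG(op0, op1, crn, crm, op2) \
    (((op0) << 14) | ((op1) << 11) | ((crn) << 7) | ((crm) << 3) | (op2))

/* Mock RAZ handler: reads of reserved registers return zero. */
static bool handle_ro_raz(unsigned long *reg)
{
    *reg = 0;
    return true;
}

/*
 * Open-coded reserved cases, living directly in the one switch that
 * uses them (only a few of the 22 reserved TID3 encodings are shown).
 */
static bool do_sysreg(unsigned int sysreg, unsigned long *reg)
{
    switch ( sysreg )
    {
    /* Reserved encodings in the HCR_EL2.TID3 group: handle as RAZ. */
    case HSR_SYSREG(3, 0, 0, 3, 3):
    case HSR_SYSREG(3, 0, 0, 3, 7):
    case HSR_SYSREG(3, 0, 0, 4, 2):
    case HSR_SYSREG(3, 0, 0, 4, 3):
    case HSR_SYSREG(3, 0, 0, 7, 7):
        return handle_ro_raz(reg);
    default:
        return false; /* not a reserved TID3 encoding */
    }
}
```

The trade-off is a longer switch in the consuming file, in exchange for not exposing a case-list macro through a shared header.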


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 02:30:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 02:30:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48933.86609 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knBj9-00045R-Eh; Thu, 10 Dec 2020 02:30:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48933.86609; Thu, 10 Dec 2020 02:30:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knBj9-00045I-BJ; Thu, 10 Dec 2020 02:30:43 +0000
Received: by outflank-mailman (input) for mailman id 48933;
 Thu, 10 Dec 2020 02:30:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JUBE=FO=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1knBj8-00044u-Cc
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 02:30:42 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 845fae40-5021-45bc-80e3-d35fb1114adb;
 Thu, 10 Dec 2020 02:30:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 845fae40-5021-45bc-80e3-d35fb1114adb
Date: Wed, 9 Dec 2020 18:30:39 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607567441;
	bh=X+iXEu7MKVRAXqalOPEivlTXGleWKjjVcnCirIwOifM=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=ZtKr6F6ZtL6EeGOYbmwmaI8SqxQ7KgNWYPKbs0oueXAVaJALtsNSqSQNESTCp149u
	 DH/CLEsYHEK5V20BopU90Qlm1aqpsEpYJqxaJ5HvD4d6DyOSBfIYAd87B2Ee+BsBpN
	 kzw4eJfG/zmdppkAgV4SgDDUb1qY3Eo9WIh0jO+jZUPm2j9EJBEpTwlZSc1/6UeIGP
	 i60WLVGF6VFb1ZGaimV9+u9PDOPsbJoo1zacYXcbyyLkEpzt2xsXukzRtyjWWmSfD3
	 QgMNW7UhRQ7wAyptePNeKe3hSWQgd/V/TkeOgpTSuzfKWhxpHyg/krRGx/yzZmw4Q0
	 JKeJaiTalfY0Q==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
cc: xen-devel@lists.xenproject.org, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Julien Grall <julien.grall@arm.com>
Subject: Re: [PATCH V3 21/23] xen/arm: Add mapcache invalidation handling
In-Reply-To: <1606732298-22107-22-git-send-email-olekstysh@gmail.com>
Message-ID: <alpine.DEB.2.21.2012091822300.20986@sstabellini-ThinkPad-T480s>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com> <1606732298-22107-22-git-send-email-olekstysh@gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 30 Nov 2020, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> We need to send a mapcache invalidation request to qemu/demu every time
> a page gets removed from a guest.
> 
> At the moment, the Arm code doesn't explicitly remove the existing
> mapping before inserting the new mapping. Instead, this is done
> implicitly by __p2m_set_entry().
> 
> So we need to recognize the case when the old entry is a RAM page *and*
> the new MFN is different in order to set the corresponding flag.
> The most suitable place to do this is p2m_free_entry(), where
> we can find the correct leaf type. The invalidation request
> will be sent in do_trap_hypercall() later on.

Why is it sent in do_trap_hypercall()?


> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> CC: Julien Grall <julien.grall@arm.com>
> 
> ---
> Please note, this is a split/cleanup/hardening of Julien's PoC:
> "Add support for Guest IO forwarding to a device emulator"
> 
> Changes V1 -> V2:
>    - new patch, some changes were derived from (+ new explanation):
>      xen/ioreq: Make x86's invalidate qemu mapcache handling common
>    - put setting of the flag into __p2m_set_entry()
>    - clarify the conditions when the flag should be set
>    - use domain_has_ioreq_server()
>    - update do_trap_hypercall() by adding local variable
> 
> Changes V2 -> V3:
>    - update patch description
>    - move check to p2m_free_entry()
>    - add a comment
>    - use "curr" instead of "v" in do_trap_hypercall()
> ---
> ---
>  xen/arch/arm/p2m.c   | 24 ++++++++++++++++--------
>  xen/arch/arm/traps.c | 13 ++++++++++---
>  2 files changed, 26 insertions(+), 11 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 5b8d494..9674f6f 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1,6 +1,7 @@
>  #include <xen/cpu.h>
>  #include <xen/domain_page.h>
>  #include <xen/iocap.h>
> +#include <xen/ioreq.h>
>  #include <xen/lib.h>
>  #include <xen/sched.h>
>  #include <xen/softirq.h>
> @@ -749,17 +750,24 @@ static void p2m_free_entry(struct p2m_domain *p2m,
>      if ( !p2m_is_valid(entry) )
>          return;
>  
> -    /* Nothing to do but updating the stats if the entry is a super-page. */
> -    if ( p2m_is_superpage(entry, level) )
> +    if ( p2m_is_superpage(entry, level) || (level == 3) )
>      {
> -        p2m->stats.mappings[level]--;
> -        return;
> -    }
> +#ifdef CONFIG_IOREQ_SERVER
> +        /*
> +         * If this gets called (non-recursively) then either the entry
> +         * was replaced by an entry with a different base (valid case) or
> > +         * the shattering of a superpage failed (error case).
> > +         * So, at worst, a spurious mapcache invalidation might be sent.
> +         */
> +        if ( domain_has_ioreq_server(p2m->domain) &&
> +             (p2m->domain == current->domain) && p2m_is_ram(entry.p2m.type) )
> +            p2m->domain->mapcache_invalidate = true;

Why the (p2m->domain == current->domain) check? Shouldn't we set
mapcache_invalidate to true anyway? What happens if p2m->domain !=
current->domain? We wouldn't want the domain to lose the
mapcache_invalidate notification.


> +#endif
>  
> -    if ( level == 3 )
> -    {
>          p2m->stats.mappings[level]--;
> -        p2m_put_l3_page(entry);
> +        /* Nothing to do if the entry is a super-page. */
> +        if ( level == 3 )
> +            p2m_put_l3_page(entry);
>          return;
>      }
>  
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index b6077d2..151c626 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -1443,6 +1443,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
>                                const union hsr hsr)
>  {
>      arm_hypercall_fn_t call = NULL;
> +    struct vcpu *curr = current;

Is this just to save 3 characters?


>      BUILD_BUG_ON(NR_hypercalls < ARRAY_SIZE(arm_hypercall_table) );
>  
> @@ -1459,7 +1460,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
>          return;
>      }
>  
> -    current->hcall_preempted = false;
> +    curr->hcall_preempted = false;
>  
>      perfc_incra(hypercalls, *nr);
>      call = arm_hypercall_table[*nr].fn;
> @@ -1472,7 +1473,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
>      HYPERCALL_RESULT_REG(regs) = call(HYPERCALL_ARGS(regs));
>  
>  #ifndef NDEBUG
> -    if ( !current->hcall_preempted )
> +    if ( !curr->hcall_preempted )
>      {
>          /* Deliberately corrupt parameter regs used by this hypercall. */
>          switch ( arm_hypercall_table[*nr].nr_args ) {
> @@ -1489,8 +1490,14 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
>  #endif
>  
>      /* Ensure the hypercall trap instruction is re-executed. */
> -    if ( current->hcall_preempted )
> +    if ( curr->hcall_preempted )
>          regs->pc -= 4;  /* re-execute 'hvc #XEN_HYPERCALL_TAG' */
> +
> +#ifdef CONFIG_IOREQ_SERVER
> +    if ( unlikely(curr->domain->mapcache_invalidate) &&
> +         test_and_clear_bool(curr->domain->mapcache_invalidate) )
> +        ioreq_signal_mapcache_invalidate();

Why not just:

if ( unlikely(test_and_clear_bool(curr->domain->mapcache_invalidate)) )
    ioreq_signal_mapcache_invalidate();
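For what it's worth, one plausible reason to keep the two-step form (this is a guess, and the names below are illustrative, not Xen's) is that the plain read lets the common hypercall-return path skip the atomic read-modify-write entirely when the flag is clear. A minimal C11 sketch of that pattern:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative stand-in for d->mapcache_invalidate. */
static atomic_bool mapcache_invalidate;

static void raise_flag(void)
{
    atomic_store(&mapcache_invalidate, true);
}

/*
 * Two-step consume: a cheap relaxed load first, then the atomic
 * exchange (the analogue of test_and_clear_bool()) only when the flag
 * looks set.  On a hot path where the flag is almost always clear,
 * the read-modify-write is skipped entirely.
 */
static bool consume_flag(void)
{
    if ( !atomic_load_explicit(&mapcache_invalidate, memory_order_relaxed) )
        return false;                                    /* fast path */
    return atomic_exchange(&mapcache_invalidate, false); /* claim it */
}
```

Whether that micro-optimisation justifies the extra condition here is exactly the question.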


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 02:41:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 02:41:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48952.86620 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knBtE-0005R9-Ei; Thu, 10 Dec 2020 02:41:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48952.86620; Thu, 10 Dec 2020 02:41:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knBtE-0005R2-Bm; Thu, 10 Dec 2020 02:41:08 +0000
Received: by outflank-mailman (input) for mailman id 48952;
 Thu, 10 Dec 2020 02:41:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JUBE=FO=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1knBtC-0005Qx-PP
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 02:41:06 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a80a1375-7662-4df5-82e4-24a5d3d2d72a;
 Thu, 10 Dec 2020 02:41:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a80a1375-7662-4df5-82e4-24a5d3d2d72a
Date: Wed, 9 Dec 2020 18:41:03 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607568065;
	bh=UKaPPHmdU53tv8J1sCQYzI3p26iWcq1dUNTz0WcaJQQ=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=s2JIY3CITk4vPYvXGvtLvVgI2MJHimmpb42MLcu+mvz/eu18m/qJ4Nz5VRmvgQYfL
	 Si4d4msNg2IncP5iGm0ivqd9C6yZTorTjneUMSwJXxtWQatKj0ZqQpjNFeb+xGxisq
	 hkNwsZcOeiBdeqaX1RjkAhMROMy5ygZl5nOmkFiWB56JkJb5LvT9gd9t8s0ieE+F5z
	 DDgUBAFVwfpfM9TcVzdaQA8Kph+zEZDe/oqsUbY0S+pon7lPRssYupO8yar2NzACdb
	 +g6SH7HS32GidyPgPJdgbGbBzmUwp63upMATjmGrU3b/2xxplaSto6b/qhusguIFgj
	 jFvSJfKC5DXpA==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Stefano Stabellini <sstabellini@kernel.org>
cc: Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org, famzheng@amazon.com, 
    cardoe@cardoe.com, Bertrand.Marquis@arm.com, julien@xen.org, 
    andrew.cooper3@citrix.com
Subject: Re: [PATCH v6 00/25] xl / libxl: named PCI pass-through devices
In-Reply-To: <alpine.DEB.2.21.2012091046400.20986@sstabellini-ThinkPad-T480s>
Message-ID: <alpine.DEB.2.21.2012091839430.20986@sstabellini-ThinkPad-T480s>
References: <160746448732.12203.10647684023172140005@600e7e483b3a> <alpine.DEB.2.21.2012081702420.20986@sstabellini-ThinkPad-T480s> <20201209161433.d7xpx5zwtikd3fmk@liuwe-devbox-debian-v2> <alpine.DEB.2.21.2012091046400.20986@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 9 Dec 2020, Stefano Stabellini wrote:
> On Wed, 9 Dec 2020, Wei Liu wrote:
> > On Tue, Dec 08, 2020 at 05:02:50PM -0800, Stefano Stabellini wrote:
> > > The pipeline failed because the "fedora-gcc-debug" build failed with a
> > > timeout: 
> > > 
> > > ERROR: Job failed: execution took longer than 1h0m0s seconds
> > > 
> > > given that all the other jobs passed (including the other Fedora job), I
> > > take it this failed because the gitlab-ci x86 runners were overloaded?
> > > 
> > 
> > The CI system is configured to auto-scale as the number of jobs grows.
> > The limit is set to 10 (VMs) at the moment.
> > 
> > https://gitlab.com/xen-project/xen-gitlab-ci/-/commit/832bfd72ea3a227283bf3df88b418a9aae95a5a4
> > 
> > I haven't looked at the log, but the number of build jobs looks rather
> > larger than when we got started. Maybe the limit of 10 is no longer
> > high enough?
> 
> Interesting! That's only for the x86 runners, not the ARM runners (we
> only have 1 ARM64 runner), is that right?
> 
> If we could increase the number of VMs for x86 I think that would be
> helpful because we have very many x86 jobs.

I don't know what is going on, but at the moment there seems to be only
one x86 build active
(https://gitlab.com/xen-project/patchew/xen/-/pipelines/227280736).
Should there be at least 3 of them?


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 04:01:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 04:01:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48967.86639 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knD8U-0004qC-GE; Thu, 10 Dec 2020 04:00:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48967.86639; Thu, 10 Dec 2020 04:00:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knD8U-0004q5-Cb; Thu, 10 Dec 2020 04:00:58 +0000
Received: by outflank-mailman (input) for mailman id 48967;
 Thu, 10 Dec 2020 04:00:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knD8S-0004px-BF; Thu, 10 Dec 2020 04:00:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knD8S-0002wm-3i; Thu, 10 Dec 2020 04:00:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knD8R-0007WK-N0; Thu, 10 Dec 2020 04:00:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1knD8R-0007Aa-MU; Thu, 10 Dec 2020 04:00:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0d8gx7y2ig6KN7DRMo+mwfqZnaNU7XiwabcxJa4zMgE=; b=Y6uVytSHHrrsi6foCr0z4Vclh5
	TjkQbjQuELy2KN6y8toBZCAJUyH5zS9D/fFP76p6QUPW9G8LzGnjJaUlrKRQIke3GGrnY6NZx5cUg
	35Iruf1KbP8rLRrUI/Kf5adVoCdK4mDJyGAZUvJcuenyzfoYR4yIs6EJfBJBVWPHUwx0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157353-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157353: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-xl-credit1:<job status>:broken:regression
    linux-linus:test-arm64-arm64-xl-xsm:<job status>:broken:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-install:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-install(5):broken:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ca4bbdaf171604841f77648a2877e2e43db69b71
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Dec 2020 04:00:55 +0000

flight 157353 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157353/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit1     <job status>                 broken
 test-arm64-arm64-xl-xsm         <job status>                 broken
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  12 debian-install           fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1   5 host-install(5)       broken blocked in 152332
 test-arm64-arm64-xl-xsm       5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                ca4bbdaf171604841f77648a2877e2e43db69b71
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  131 days
Failing since        152366  2020-08-01 20:49:34 Z  130 days  224 attempts
Testing same since   157353  2020-12-09 18:41:41 Z    0 days    1 attempts

------------------------------------------------------------
3652 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      broken  
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  broken  
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-credit1 broken
broken-job test-arm64-arm64-xl-xsm broken
broken-step test-arm64-arm64-xl-credit1 host-install(5)
broken-step test-arm64-arm64-xl-xsm host-install(5)

Not pushing.

(No revision log; it would be 700901 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 07:02:36 2020
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Kaly Xin <Kaly.Xin@arm.com>,
	Wei Chen <Wei.Chen@arm.com>, nd <nd@arm.com>, Paul Durrant <paul@xen.org>,
	"famzheng@amazon.com" <famzheng@amazon.com>
Subject: RE: [RFC] design: design doc for 1:1 direct-map
Thread-Topic: [RFC] design: design doc for 1:1 direct-map
Thread-Index: AQHWzSILZVLbUZur0UCHvW9hRaI4w6ns6LwAgAL3c8A=
Date: Thu, 10 Dec 2020 07:02:04 +0000
Message-ID:
 <VE1PR08MB52154F42074B1DB4F3BBFDADF7CB0@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20201208052113.1641514-1-penny.zheng@arm.com>
 <6731d0c1-37df-ade8-7b77-d1032c326111@xen.org>
In-Reply-To: <6731d0c1-37df-ade8-7b77-d1032c326111@xen.org>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Hi Julien

Thanks for the nice and detailed comments. (*^▽^*)
Here are the replies:

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Tuesday, December 8, 2020 5:07 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Kaly Xin
> <Kaly.Xin@arm.com>; Wei Chen <Wei.Chen@arm.com>; nd <nd@arm.com>;
> Paul Durrant <paul@xen.org>; famzheng@amazon.com
> Subject: Re: [RFC] design: design doc for 1:1 direct-map
> 
> Hi Penny,
> 
> I am adding Paul and Zheng in the thread as there are similar interest for the
> x86 side.
> 
> On 08/12/2020 05:21, Penny Zheng wrote:
> > This is one draft design about the infrastructure for now, not ready
> > for upstream yet (hence the RFC tag), thought it'd be useful to
> > firstly start a discussion with the community.
> >
> > Create one design doc for 1:1 direct-map.
> > It aims to describe why and how we allocate 1:1 direct-map(guest
> > physical == physical) domains.
> >
> > This document is partly based on Stefano Stabellini's patch serie v1:
> > [direct-map DomUs](
> > https://lists.xenproject.org/archives/html/xen-devel/2020-
> 04/msg00707.html).
> 
> May I ask why a different approach?

In Stefano's original design, he'd like to allocate 1:1 direct-map domains
with user-defined memory regions, and he prefers allocating the memory from
the sub-domain allocator.

That brought quite a discussion, and in the end everyone more or less agreed
that it is not workable: if the requested memory ever goes into any
allocator, whether the boot or the sub-domain allocator, we cannot ensure
that it has not already been put to some other use before we actually
allocate it for a 1:1 direct-map domain.

So I'd prefer to split the original design into two parts. One part is here:
the user only wants to allocate a 1:1 direct-map domain, without caring where
the RAM will be located (think of dom0). In that case we can stick to
allocating memory from the sub-domain allocator.

The other part is what I said in the commit message below: "For the part
regarding allocating 1:1 direct-map domains with user-defined memory regions,
it will be included in next design of static memory allocation".

But of course, if a combination can help the community better understand our
ideas, we're willing to combine them in the next version. 😉

Briefly speaking, if we allocate 1:1 direct-map domains with user-defined
memory regions, we need to reserve those memory regions at the very
beginning.

> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> > For the part regarding allocating 1:1 direct-map domains with
> > user-defined memory regions, it will be included in next design of
> > static memory allocation.
> 
> I don't think you can do without user-defined memory regions (see more
> below).
> 
> > ---
> >   docs/designs/1_1_direct-map.md | 87
> ++++++++++++++++++++++++++++++++++
> >   1 file changed, 87 insertions(+)
> >   create mode 100644 docs/designs/1_1_direct-map.md
> >
> > diff --git a/docs/designs/1_1_direct-map.md
> > b/docs/designs/1_1_direct-map.md new file mode 100644 index
> > 0000000000..ce3e2c77fd
> > --- /dev/null
> > +++ b/docs/designs/1_1_direct-map.md
> > @@ -0,0 +1,87 @@
> > +# Preface
> > +
> > +The document is an early draft for direct-map memory map (`guest
> > +physical == physical`) of domUs. And right now, it constrains to ARM
> 
> s/constrains/limited/
> 
> Aside the interface to the user, you should be able to re-use the same code
> on x86. Note that because the memory layout on x86 is fixed (always starting
> at 0), you would only be able to have only one direct-mapped domain.
> 

Sorry, I have little knowledge of x86; this may need more investigation.

> > +architecture.
> > +
> > +It aims to describe why and how the guest would be created as direct-map
> domain.
> > +
> > +This document is partly based on Stefano Stabellini's patch serie v1:
> > +[direct-map DomUs](
> > +https://lists.xenproject.org/archives/html/xen-devel/2020-
> 04/msg00707.html).
> > +
> > +This is a first draft and some questions are still unanswered. When
> > +this is the case, the text shall contain XXX.
> > +
> > +# Introduction
> > +
> > +## Background
> > +
> > +Cases where domU needs direct-map memory map:
> > +
> > +  * IOMMU not present in the system.
> > +  * IOMMU disabled, since it doesn't cover a specific device.
> 
> If the device is not covered by the IOMMU, then why would you want to
> disable the IOMMUs for the rest of the system?
> 

This is a mixed scenario: we pass some devices to a VM with the SMMU, and
other devices to a VM without it, so we cannot guarantee guest DMA security.

So users may want to disable the SMMU; at least they can gain some
performance improvement from having it disabled.

> > +  * IOMMU disabled, since it doesn't have enough bandwidth.
> 
> I am not sure to understand this one.
> 

On some SoCs, multiple devices are connected to one SMMU.

In extreme situations, when multiple devices do DMA concurrently, the
translation requests can exceed the SMMU's translation capacity. This will
cause DMA latency.

> > +  * IOMMU disabled, since it adds too much latency.
> 
> The list above sounds like direct-map memory would be necessary even
> without device-passthrough. Can you clarify it?
> 

Okay.

The SMMU can be implemented differently on different SoCs; for example, some
SoC vendors may remove the TLB inside the SMMU.

In that case the SMMU adds latency to DMA, so users may want to disable it
for some real-time scenarios.

> > +
> > +*WARNING:
> > +Users should be careful that it is not always secure to assign a
> > +device without
> 
> s/careful/aware/ I think. Also, it is never secure to assign a device without
> IOMMU/SMMU unless you have a replacement.
> 
> I would suggest to reword it something like:
> 
> "When the device is not protected by the IOMMU, the administrator should
> make sure that:
>     - The device is assigned to a trusted guest
>     - You have an additional security mechanism on the platform (e.g
> MPU) to protect the memory."
> 

Thanks for the rephrase. (*^▽^*)

> > +IOMMU/SMMU protection.
> > +Users must be aware of this risk, that guests having access to
> > +hardware with DMA capacity must be trusted, or it could use the DMA
> > +engine to access any other memory area.
> > +Guests could use additional security hardware component like NOC,
> > +System MPU to protect the memory.
> 
> What's the NOC?
> 

Network on Chip: a kind of SoC-level firewall that limits devices' DMA access
range or the CPU's memory access range.

> > +
> > +## Design
> > +
> > +The implementation may cover following aspects:
> > +
> > +### Native Address and IRQ numbers for GIC and UART(vPL011)
> > +
> > +Today, fixed addresses and IRQ numbers are used to map GIC and
> > +UART(vPL011) in DomUs. And it may cause potential clash on direct-map
> domains.
> > +So, Using native addresses and irq numbers for GIC, UART(vPL011), in
> > +direct-map domains is necessary.
> > +e.g.
> 
> To me e.g. means example. But below this is not an example, this is a
> requirement in order to use the vpl011 on system without pl011 UART.
>

Yes, right. I'll delete the "e.g." here.

> > +For the virtual interrupt of vPL011: instead of always using
> > +`GUEST_VPL011_SPI`, try to reuse the physical SPI number if possible.
> 
> How would you find the following region for guest using PV drivers;
>     - Event channel interrupt
>     - Grant table area
>
Good catch! A thousand thanks. 😉
DQogDQpXZSd2ZSBkb25lIHNvbWUgaW52ZXN0aWdhdGlvbiBvbiB0aGlzIHBhcnQuIENvcnJlY3Qg
bWUgaWYgSSBhbSB3cm9uZy4NCg0KUGFnZXMgbGlrZSBzaGFyZWRfaW5mbywgZ3JhbnQgdGFibGUs
IGV0Yywgc2hhcmVkIGJldHdlZW4gZ3Vlc3RzIGFuZCANCnhlbiwgYXJlIG1hcHBlZCBieSBBUk0g
Z3Vlc3RzIHVzaW5nIHRoZSBoeXBlcmNhbGwgSFlQRVJWSVNPUl9tZW1vcnlfb3AgDQphbmQgYWx3
YXlzIHdvdWxkIG5vdCBiZSBkaXJlY3RseSBtYXBwZWQsIGV2ZW4gaW4gZG9tMC4NCg0KU28sIGhl
cmUsIHdlIHN1Z2dlc3QgdGhhdCBtYXliZSB3ZSBjb3VsZCBkbyBzb21lIG1vZGlmaWNhdGlvbiBp
biB0aGUgaHlwZXJjYWxsDQp0byBsZXQgaXQgbm90IG9ubHkgcGFzcyBnZm4gdG8geGVuLCBidXQg
YWxzbyByZWNlaXZlIGFscmVhZHkgYWxsb2NhdGVkIG1mbnMoZS5nLiBncmFudA0KdGFibGVzKSBm
cm9tIHhlbiBpbiBkaXJlY3QgbWFwIHNpdHVhdGlvbi4gDQpCdXQgSWYgc28sIGl0IGludm9sdmVz
IG1vZGlmaWNhdGlvbiBpbiBsaW51eCwgbyjilaXvuY/ilaUpby4NCg0KQW5kIGFsc28gd2UgaW5j
bGluZSB0byBrZWVwIGFsbCBndWVzdCByZWxhdGVkIHBhZ2VzKGluY2x1ZGluZyByYW0sICBncmFu
dCB0YWJsZXMsDQpldGMpIGluIG9uZSB3aG9sZSBwaWVjZS4NCg0KUmlnaHQgbm93LCBwYWdlcyBs
aWtlIGdyYW50IHRhYmxlcyBhcmUgYWxsb2NhdGVkIHNlcGFyYXRlbHkgaW4gWGVuIGhlYXAsIHNv
IGRvbid0DQpzdGFuZCBtdWNoIGNoYW5jZSB0byBiZSBjb25zaXN0ZW50IHdpdGggdGhlIGd1ZXN0
IHJhbS4NCiANClNvIHdoYXQgaWYgd2UgYWxsb2NhdGUgbW9yZSByYW0gYXQgZmlyc3QsIHN1Y2gg
bGlrZSwgbmVlZCAyNTZNQiwgZ2l2ZSBpdCAyNTdNQiwgbGV0DQpleHRyYSAxTUIgdXNlZCBmb3Ig
dGhvc2UgcGFnZXMuIFRoZW4gaWYgc28sIHdlIGNvdWxkIGtlZXAgaXQgYXMgYSB3aG9sZS4NCg0K
VGhpcyBpcyBteSBxdWl0ZSByb3VnaCBicmFpbnN0b3JtLCBwbHogYmVhciBpdCBhbmQgZ2l2ZSBt
ZSBtb3JlIHRob3VnaHRzIG9uIGl0Lg0KIA0KPiA+ICsNCj4gPiArIyMjIERldmljZSB0cmVlIG9w
dGlvbjogYGRpcmVjdF9tYXBgDQo+ID4gKw0KPiA+ICtJbnRyb2R1Y2UgYSBuZXcgZGV2aWNlIHRy
ZWUgb3B0aW9uIGBkaXJlY3RfbWFwYCBmb3IgZGlyZWN0LW1hcCBkb21haW5zLg0KPiA+ICtUaGVu
LCB3aGVuIHVzZXJzIHRyeSB0byBhbGxvY2F0ZSBvbmUgZGlyZWN0LW1hcCBkb21haW4oZXhjZXB0
IERPTTApLA0KPiA+ICtgZGlyZWN0LW1hcGAgcHJvcGVydHkgbmVlZHMgdG8gYmUgYWRkZWQgdW5k
ZXIgdGhlIGFwcHJvcHJpYXRlDQo+IGAvY2hvc2VuL2RvbVV4YC4NCj4gPiArDQo+ID4gKw0KPiA+
ICsgICAgICAgICAgICBjaG9zZW4gew0KPiA+ICsgICAgICAgICAgICAgICAgLi4uDQo+ID4gKyAg
ICAgICAgICAgICAgICBkb21VMSB7DQo+ID4gKyAgICAgICAgICAgICAgICAgICAgY29tcGF0aWJs
ZSA9ICJ4ZW4sIGRvbWFpbiI7DQo+ID4gKyAgICAgICAgICAgICAgICAgICAgI2FkZHJlc3MtY2Vs
bHMgPSA8MHgyPjsNCj4gPiArICAgICAgICAgICAgICAgICAgICAjc2l6ZS1jZWxscyA9IDwweDE+
Ow0KPiA+ICsgICAgICAgICAgICAgICAgICAgIGRpcmVjdC1tYXA7DQo+ID4gKyAgICAgICAgICAg
ICAgICAgICAgLi4uDQo+ID4gKyAgICAgICAgICAgICAgICB9Ow0KPiA+ICsgICAgICAgICAgICAg
ICAgLi4uDQo+ID4gKyAgICAgICAgICAgIH07DQo+ID4gKw0KPiA+ICtJZiB1c2VycyBhcmUgdXNp
bmcgaW1hZ2VidWlsZGVyLCB0aGV5IGNhbiBhZGQgdG8gYm9vdC5zb3VyY2UNCj4gPiArc29tZXRo
aW5nIGxpa2UgdGhlDQo+IA0KPiBUaGlzIGRvY3VtZW50YXRpb25zIG91bmRzIGxpa2UgbW9yZSBz
b21ldGhpbmcgZm9yIGltYWdlYnVpbGRlciByYXRoZXINCj4gdGhhbiBYZW4gaXRzZWxmLg0KPiAN
Cg0KWWVzLCByaWdodC4gSSdsbCBkZWxldGUgdGhpcyBwYXJ0Lg0KDQo+ID4gK2ZvbGxvd2luZzoN
Cj4gPiArDQo+ID4gKyAgICBmZHQgc2V0IC9jaG9zZW4vZG9tVTEgZGlyZWN0LW1hcA0KPiA+ICsN
Cj4gPiArVXNlcnMgY291bGQgYWxzbyB1c2UgYHhsYCB0byBjcmVhdGUgZGlyZWN0LW1hcCBkb21h
aW5zLCBqdXN0IHVzZSB0aGUNCj4gPiArZm9sbG93aW5nIGNvbmZpZyBvcHRpb246IGBkaXJlY3Qt
bWFwPXRydWVgDQo+ID4gKw0KPiA+ICsjIyMgZGlyZWN0LW1hcCBndWVzdCBtZW1vcnkgYWxsb2Nh
dGlvbg0KPiA+ICsNCj4gPiArRnVuYyBgYWxsb2NhdGVfbWVtb3J5X2RpcmVjdF9tYXBgIGlzIGJh
c2VkIG9uIGBhbGxvY2F0ZV9tZW1vcnlfMTFgLA0KPiA+ICthbmQgc2hhbGwgYmUgcmVmaW5lZCB0
byBhbGxvY2F0ZSBtZW1vcnkgZm9yIGFsbCBkaXJlY3QtbWFwIGRvbWFpbnMsDQo+IGluY2x1ZGlu
ZyBET00wLg0KPiA+ICtSb3VnaGx5IHNwZWFraW5nLCBmaXJzdGx5LCBpdCB0cmllcyB0byBhbGxv
Y2F0ZSBhcmJpdHJhcnkgbWVtb3J5DQo+ID4gK2NodW5rIG9mIHJlcXVlc3RlZCBzaXplIGZyb20g
ZG9tYWluDQo+ID4gK3N1Yi1hbGxvY2F0b3IoYGFsbG9jX2RvbWhlYXBfcGFnZXNgKS4gSWYgZmFp
bCwgc3BsaXQgdGhlIGNodW5rIGludG8NCj4gPiAraGFsdmVzLCBhbmQgcmUtdHJ5LCB1bnRpbCBp
dCBzdWNjZWVkIG9yIGJhaWwgb3V0IHdpdGggdGhlIHNtYWxsZXN0IGNodW5rIHNpemUuDQo+IA0K
PiBJZiB5b3UgaGF2ZSBhIG1peCBvZiBkb21haW4gd2l0aCBkaXJlY3QtbWFwcGVkIGFuZCBub3Jt
YWwgZG9tYWluLCB5b3UNCj4gbWF5IGVuZCB1cCB0byBoYXZlIHRoZSBtZW1vcnkgc28gc21hbGwg
dGhhdCB5b3VyIGRpcmVjdC1tYXBwZWQgZG9tYWluDQo+IHdpbGwgaGF2ZSBtYW55IHNtYWxsIGJh
bmtzLiBUaGlzIGlzIGdvaW5nIHRvIGJlIGEgbWFqb3IgcHJvYmxlbSBpZiB5b3UgYXJlDQo+IGNy
ZWF0aW5nIHRoZSBkb21haW4gYXQgcnVudGltZSAoeW91IHN1Z2dlc3QgeGwgY2FuIGJlIHVzZWQp
Lg0KPiANCj4gSW4gYWRkaXRpb24sIHNvbWUgdXNlcnMgbWF5IHdhbnQgdG8gYmUgYWJsZSB0byBj
b250cm9sIHRoZSBsb2NhdGlvbiBvZiB0aGUNCj4gbWVtb3J5IGFzIHRoaXMgcmVkdWNlZCB0aGUg
YW1vdW50IG9mIHdvcmsgaW4gdGhlIGd1ZXN0IChlLmcgeW91IGRvbid0IGhhdmUNCj4gdG8gZHlu
YW1pY2FsbHkgZGlzY292ZXIgdGhlIG1lbW9yeSkuDQo+IA0KPiBJIHRoaW5rIGl0IHdvdWxkIGJl
IGJlc3QgdG8gYWx3YXlzIHJlcXVpcmUgdGhlIGFkbWluIHRvIHNlbGVjdCB0aGUgUkFNIGJhbmsN
Cj4gdXNlZCBieSBhIGRpcmVjdCBtYXBwZWQgZG9tYWluLiBBbHRlcm5hdGl2ZWx5LCB3ZSBjb3Vs
ZCBoYXZlIGEgcG9vbCBvZg0KPiBtZW1vcnkgdGhhdCBjYW4gb25seSBiZSB1c2VkIGZvciBkaXJl
Y3QgbWFwcGVkIGRvbWFpbi4gVGhpcyBzaG91bGQgbGltaXQNCj4gdGhlIGZyYWdtZW50YXRpb24g
b2YgdGhlIG1lbW9yeS4NCj4NCg0KWWVwLCBpbiBzb21lIGNhc2VzLCBpZiB3ZSBoYXZlIG1peCBv
ZiBkb21haW5zIHdpdGggZGlyZWN0LW1hcHBlZCB3aXRoIA0KdXNlci0gZGVmaW5lZCBtZW1vcnkg
cmVnaW9ucyAoc2NhdHRlcmluZyBsb29zZWx5KWFuZCBub3JtYWwgZG9tYWlucyBhdCANCnRoZSBi
ZWdpbm5pbmcsIGl0IG1heSBmYWlsIHdoZW4gd2UgbGF0ZXIgY3JlYXRpbmcgdGhlIGRvbWFpbiBh
dCBydW50aW1lICh1c2UgDQp4bCksIG5vIG1hdHRlciBkaXJlY3QtbWFwIGRvbWFpbiBvciBub3Qu
DQoNCkJ1dCwgdXNlcnMgc2hvdWxkIGJlIGZyZWUgdG8gYWxsb2NhdGUgd2hlcmUgdGhleSB3YW50
LCB3ZSBtYXkgbm90IGxpbWl0IGENCnBvb2wgb2YgbWVtb3J5IHRvIHVzZS4NCg0KT2YgY291cnNl
LCB3ZSBjb3VsZCBhZGQgd2FybmluZyB0byBsZXQgdGhlbSBiZWluZyBhd2FyZS4NCg0KQnV0IEkn
bSB3aXRoIHlvdSB0aGF0IGl0IHdvdWxkIGJlIGJlc3QgdG8gYWx3YXlzIHJlcXVpcmUgdGhlIGFk
bWluIHRvIHNlbGVjdA0KdGhlIFJBTSBiYW5rIHVzZWQgYnkgYSBkaXJlY3QgbWFwcGVkIGRvbWFp
bi4gDQoNCkxhdGVyLCBJIHdpbGwgYWRkIHRoaXMgcGFydCBkZXNpZ24gaW4gbXkgbmV4dCBzZXJp
ZXMuIA0KDQpBbmQgSnVzdCBhZGRpbmcgMToxIGRpcmVjdC1tYXAgd2l0aG91dCB1c2VyLWRlZmlu
ZWQgcmVnaW9ucyBhcyBhbiBleHRyYSBvcHRpb24gaGVyZS4gDQogDQo+ID4gK1RoZW4sIGBpbnNl
cnRfMTFfYmFua2Agc2hhbGwgaW5zZXJ0IGFib3ZlIGFsbG9jYXRlZCBwYWdlcyBpbnRvIGENCj4g
PiArbWVtb3J5IGJhbmssIHdoaWNoIGFyZSBvcmRlcmVkIGJ5IGFkZHJlc3MsIGFuZCBhbHNvIHNl
dCB1cCBndWVzdCBQMk0NCj4gPiArbWFwcGluZygNCj4gPiArYGd1ZXN0X3BoeXNtYXBfYWRkX3Bh
Z2VgKSB0byBlbnN1cmUgYGdmbiA9PSBtZm5gLg0KPiANCj4gQ2hlZXJzLA0KPiANCj4gLS0NCj4g
SnVsaWVuIEdyYWxsDQoNCkNoZWVycywNCg0KLS0NClBlbm55DQo=


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 08:20:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 08:20:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.48997.86693 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knHAx-0005Kw-H2; Thu, 10 Dec 2020 08:19:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 48997.86693; Thu, 10 Dec 2020 08:19:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knHAx-0005Kp-DR; Thu, 10 Dec 2020 08:19:47 +0000
Received: by outflank-mailman (input) for mailman id 48997;
 Thu, 10 Dec 2020 08:19:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knHAw-0005Kh-JZ; Thu, 10 Dec 2020 08:19:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knHAw-0001B2-BH; Thu, 10 Dec 2020 08:19:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knHAw-0005Gp-11; Thu, 10 Dec 2020 08:19:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1knHAv-0004XA-UP; Thu, 10 Dec 2020 08:19:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KQJKElnqiSV6FLo/dCreLUG1w+/suax4NVXxlG4rTis=; b=JLbCYmyxaasg/14mkfC+lw3YZn
	i8zSRtuKwIL1+2Mpfdds3j3MbDZCJUM0iODBRsLCEB0iTn08A+I8bgFle74tyAZ+6dYWzKCdaT5Wo
	KqQVc74aPj65Cv79w3vxTpcjY7+aJPjamiCSFar9TRopvLrfIQ3H6QLvPMnXeeUGj72s=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157369-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157369: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:build-armhf-libvirt:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    libvirt:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    libvirt=61802ce3f0fc7f42cbc1cef6b71e01e79b8d2a93
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Dec 2020 08:19:45 +0000

flight 157369 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157369/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 libvirt              61802ce3f0fc7f42cbc1cef6b71e01e79b8d2a93
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  153 days
Failing since        151818  2020-07-11 04:18:52 Z  152 days  147 attempts
Testing same since   157369  2020-12-10 04:19:20 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 32500 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 08:38:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 08:38:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49005.86708 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knHSv-0007Ke-2l; Thu, 10 Dec 2020 08:38:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49005.86708; Thu, 10 Dec 2020 08:38:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knHSu-0007KX-W2; Thu, 10 Dec 2020 08:38:20 +0000
Received: by outflank-mailman (input) for mailman id 49005;
 Thu, 10 Dec 2020 08:38:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e+LE=FO=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1knHSt-0007KS-TX
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 08:38:19 +0000
Received: from mail-wr1-x436.google.com (unknown [2a00:1450:4864:20::436])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 07be165e-77c2-4a9c-acf0-df531ba2b6da;
 Thu, 10 Dec 2020 08:38:18 +0000 (UTC)
Received: by mail-wr1-x436.google.com with SMTP id r14so4542859wrn.0
 for <xen-devel@lists.xenproject.org>; Thu, 10 Dec 2020 00:38:18 -0800 (PST)
Received: from CBGR90WXYV0 (host86-183-162-145.range86-183.btcentralplus.com.
 [86.183.162.145])
 by smtp.gmail.com with ESMTPSA id y2sm7910390wma.6.2020.12.10.00.38.16
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 10 Dec 2020 00:38:16 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 07be165e-77c2-4a9c-acf0-df531ba2b6da
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=hWism8dAra/zqrZqphMYnl0UaABicIgYXXykcRXnLqU=;
        b=X2xnoBVLs+HgZ8kKHCXL9qWLnPnb6AgH00At0ZiOgUAhSO7aHNojIiMca5WYBk9qvY
         cJ5WLuOEDbcQmQSu9UF1ihTD7a08DBjgrjSjRD+KpFStRAT/YgOgg2Gyb/VsWJCzOpkB
         NePdwwWA9dpqyW2nDTd1h2D8j1GsfZrlQUZwy1VJI5GClVAQdnLcaTDaAo3nNt3vRszG
         CKSgj2F1V0I/UUBmLLieGdlJr2eUQibAxVDBDVCI+jf8JBvVlu+UFwxOP/2/In4N8m43
         m8GsOJ5YO+8OC/+TewPprL9Vf7UhR23eMdLtX42Lf5rPucz9ojnYfUhTNzKvknqMINaJ
         uBRg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=hWism8dAra/zqrZqphMYnl0UaABicIgYXXykcRXnLqU=;
        b=UR25DyHTU6Eh6+fcIs2PgP1GwDrOp2f26uhYCVBrgIz/cBxBY3UcVQrZCwYQPYu0eP
         i3bXzL6iLRMta34w7TStlX4d2eN+XwQ9YNO+VX+7FWeEwCH1JY5b94OmnBG0ejQmU5RL
         Dt3gXDb3oc0AukyQ7a0ThzTNG81hoi1tSZtIeZ9qdD0C41LD2bKVKB5rt5yKh+tyN1Wo
         KCe4WJi1NEcjH8RcGUv0x8KS4rPdfPOED6i9P2Ex55F9eeVueAE47H2xuBqlaQOBB9wz
         O60Cid2UZnKePFDnqGWO06jgJPUNCEZcQyEl4yitkTGChmqhEz19mvZOWoM7HOAvO/XI
         uZBw==
X-Gm-Message-State: AOAM530jK3GuZ9tAg+0cHc9SzsIHyYWAvtwcntjZ6KzFOvPEkdK6TcKD
	OHO6+elckTLF+/zFz83VA7U=
X-Google-Smtp-Source: ABdhPJwEbVCylVa5C70bKwkavxNmoAgvQCZ2K8i+uMODl+5/Th+YllX6SnLsdYEze4drM9dIqXtVQg==
X-Received: by 2002:a5d:4ccf:: with SMTP id c15mr6941235wrt.237.1607589497574;
        Thu, 10 Dec 2020 00:38:17 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Oleksandr'" <olekstysh@gmail.com>
Cc: "'Jan Beulich'" <jbeulich@suse.com>,
	"'Oleksandr Tyshchenko'" <oleksandr_tyshchenko@epam.com>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Julien Grall'" <julien@xen.org>,
	"'Volodymyr Babchuk'" <Volodymyr_Babchuk@epam.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Wei Liu'" <wl@xen.org>,
	"'Julien Grall'" <julien.grall@arm.com>,
	<xen-devel@lists.xenproject.org>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com> <1606732298-22107-18-git-send-email-olekstysh@gmail.com> <3bb4c3b5-a46a-ba31-292f-5c6ba49fa9be@suse.com> <6026b7f3-ae6e-f98f-be65-27d7f729a37f@gmail.com> <18bfd9b1-3e6a-8119-efd0-c82ad7ae681d@gmail.com> <0d6c01d6cd9a$666326c0$33297440$@xen.org> <57bfc007-e400-6777-0075-827daa8acf0e@gmail.com> <0d7201d6ce09$e13dce80$a3b96b80$@xen.org> <96b9b843-f4fe-834a-f17b-d75198aa0dab@gmail.com>
In-Reply-To: <96b9b843-f4fe-834a-f17b-d75198aa0dab@gmail.com>
Subject: RE: [PATCH V3 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
Date: Thu, 10 Dec 2020 08:38:15 -0000
Message-ID: <002401d6cecf$ce472710$6ad57530$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQKk0D4Qme59XF0a0h96d36zIOxDhQIE/k2jAQvf44kCfV5qfQHp2KBNAqVQolsB+TCoJgKL4LaZAk9ZfyOnzG/cYA==

> -----Original Message-----
> From: Oleksandr <olekstysh@gmail.com>
> Sent: 09 December 2020 20:36
> To: paul@xen.org
> Cc: 'Jan Beulich' <jbeulich@suse.com>; 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>;
> 'Stefano Stabellini' <sstabellini@kernel.org>; 'Julien Grall' <julien@xen.org>; 'Volodymyr Babchuk'
> <Volodymyr_Babchuk@epam.com>; 'Andrew Cooper' <andrew.cooper3@citrix.com>; 'George Dunlap'
> <george.dunlap@citrix.com>; 'Ian Jackson' <iwj@xenproject.org>; 'Wei Liu' <wl@xen.org>; 'Julien Grall'
> <julien.grall@arm.com>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH V3 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
> 
> 
> Hi Paul.
> 
> 
> >>>>>> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
> >>>>>>> --- a/xen/include/xen/ioreq.h
> >>>>>>> +++ b/xen/include/xen/ioreq.h
> >>>>>>> @@ -55,6 +55,20 @@ struct ioreq_server {
> >>>>>>>         uint8_t                bufioreq_handling;
> >>>>>>>     };
> >>>>>>>     +/*
> >>>>>>> + * This should only be used when d == current->domain and it's not
> >>>>>>> paused,
> >>>>>> Is the "not paused" part really relevant here? Besides it being rare
> >>>>>> that the current domain would be paused (if so, it's in the process
> >>>>>> of having all its vCPU-s scheduled out), does this matter at all?
> >>>>> No, it isn't relevant, I will drop it.
> >>>>>
> >>>>>
> >>>>>> Apart from this the patch looks okay to me, but I'm not sure it
> >>>>>> addresses Paul's concerns. Iirc he had suggested to switch back to
> >>>>>> a list if doing a swipe over the entire array is too expensive in
> >>>>>> this specific case.
> >>>>> We would like to avoid to do any extra actions in
> >>>>> leave_hypervisor_to_guest() if possible.
> >>>>> But not only there, the logic whether we check/set
> >>>>> mapcache_invalidation variable could be avoided if a domain doesn't
> >>>>> use IOREQ server...
> >>>> Are you OK with this patch (common part of it)?
> >>> How much of a performance benefit is this? The array is small so simply counting the non-NULL
> >> entries should be pretty quick.
> >> I didn't perform performance measurements on how much this call consumes.
> >> In our system we run three domains. The emulator is in DomD only, so I
> >> would like to avoid to call vcpu_ioreq_handle_completion() for every
> >> Dom0/DomU's vCPUs
> >> if there is no real need to do it.
> > This is not relevant to the domain that the emulator is running in; it's concerning the domains
> which the emulator is servicing. How many of those are there?
> Err, yes, I wasn't precise when providing an example.
> A single emulator is running in DomD and servicing DomU. So with the
> helper in place the vcpu_ioreq_handle_completion() gets only called for
> DomU vCPUs (as expected).
> Without an optimization the vcpu_ioreq_handle_completion() gets called
> for _all_ vCPUs, and I see it as an extra action for Dom0, DomD vCPUs.
> 
> 
> >
> >> On Arm vcpu_ioreq_handle_completion()
> >> is called with IRQ enabled, so the call is accompanied with
> >> corresponding irq_enable/irq_disable.
> >> These unneeded actions could be avoided by using this simple one-line
> >> helper...
> >>
> > The helper may be one line but there is more to the patch than that. I still think you could just
> walk the array in the helper rather than keeping a running occupancy count.
> 
> OK, is the implementation below close to what you propose? If yes, I
> will update the helper and drop the nr_servers variable.
> 
> bool domain_has_ioreq_server(const struct domain *d)
> {
>      const struct ioreq_server *s;
>      unsigned int id;
> 
>      FOR_EACH_IOREQ_SERVER(d, id, s)
>          return true;
> 
>      return false;
> }

Yes, that's what I had in mind.

  Paul

> 
> --
> Regards,
> 
> Oleksandr Tyshchenko




From xen-devel-bounces@lists.xenproject.org Thu Dec 10 09:39:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 09:39:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49032.86734 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knIQ3-0004oV-VA; Thu, 10 Dec 2020 09:39:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49032.86734; Thu, 10 Dec 2020 09:39:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knIQ3-0004oO-SD; Thu, 10 Dec 2020 09:39:27 +0000
Received: by outflank-mailman (input) for mailman id 49032;
 Thu, 10 Dec 2020 09:39:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knIQ2-0004oF-N4; Thu, 10 Dec 2020 09:39:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knIQ2-0002m1-Go; Thu, 10 Dec 2020 09:39:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knIQ2-0001iF-6Y; Thu, 10 Dec 2020 09:39:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1knIQ2-0008Oi-64; Thu, 10 Dec 2020 09:39:26 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KgV1wPMYytocUdYZqVZ+OSL/yOTmw7qZJ1MGwgPb350=; b=WDxAlfugm1FPypVAf6xqbOJUWD
	R6EnoiM1ao/ZDU3O0HW5h8HQMnRnRlUPG0a1u4zfiLopqeT5lnFN+Yq3VUvSSREC6XHL5vMl9HeRU
	MFMtfZ16y/ovrfvDj5Lh+V8fCZWdwl0WAhyaVsGrlya6dRfbv0AMGlsm5y0qyt2dAygU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157361-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157361: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=5e7b204dbfae9a562fc73684986f936b97f63877
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Dec 2020 09:39:26 +0000

flight 157361 qemu-mainline real [real]
flight 157374 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157361/
http://logs.test-lab.xenproject.org/osstest/logs/157374/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                5e7b204dbfae9a562fc73684986f936b97f63877
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  112 days
Failing since        152659  2020-08-21 14:07:39 Z  110 days  231 attempts
Testing same since   157361  2020-12-10 00:36:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erich McMillan <erich.mcmillan@hp.com>
  Erich-McMillan <erich.mcmillan@hp.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiahui Cen <cenjiahui@huawei.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Levon <john.levon@nutanix.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Juan Quintela <quintela@redhat.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yubo Miao <miaoyubo@huawei.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 70796 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 09:52:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 09:52:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49041.86749 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knIc8-0006hC-7v; Thu, 10 Dec 2020 09:51:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49041.86749; Thu, 10 Dec 2020 09:51:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knIc8-0006h5-4F; Thu, 10 Dec 2020 09:51:56 +0000
Received: by outflank-mailman (input) for mailman id 49041;
 Thu, 10 Dec 2020 09:51:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oBdS=FO=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1knIc7-0006h0-4M
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 09:51:55 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 666ef150-8685-41a6-b230-392ca3a1202f;
 Thu, 10 Dec 2020 09:51:52 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0BA9pipr014969
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Thu, 10 Dec 2020 10:51:45 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id BAC912E946C; Thu, 10 Dec 2020 10:51:39 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 666ef150-8685-41a6-b230-392ca3a1202f
Date: Thu, 10 Dec 2020 10:51:39 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: dom0 PV looping on search_pre_exception_table()
Message-ID: <20201210095139.GA455@antioche.eu.org>
References: <20201209101512.GA1299@antioche.eu.org>
 <3f7e50bb-24ad-1e32-9ea1-ba87007d3796@citrix.com>
 <20201209135908.GA4269@antioche.eu.org>
 <c612616a-3fcd-be93-7594-20c0c3b71b7a@citrix.com>
 <20201209154431.GA4913@antioche.eu.org>
 <52e1b10d-75d4-63ac-f91e-cb8f0dcca493@citrix.com>
 <20201209163049.GA6158@antioche.eu.org>
 <30a71c9d-3eff-3727-9c61-e387b5bccc95@citrix.com>
 <20201209185714.GS1469@antioche.eu.org>
 <6c06abf1-7efe-f02c-536a-337a2704e265@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <6c06abf1-7efe-f02c-536a-337a2704e265@citrix.com>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Thu, 10 Dec 2020 10:51:46 +0100 (MET)

On Wed, Dec 09, 2020 at 07:08:41PM +0000, Andrew Cooper wrote:
> Oh of course - we don't follow the exit-to-guest path on the way out here.
> 
> As a gross hack to check that we've at least diagnosed the issue
> appropriately, could you modify NetBSD to explicitly load the %ss
> selector into %es (or any other free segment) before first entering user
> context?

If I understood it properly, the user %ss is loaded by Xen from the
trapframe when the guest switches from kernel to user mode, isn't it?
So you mean setting %es to the same value in the trapframe?

Actually I used %fs because %es is set equal to %ds.
Xen 4.13 boots fine with this change, but with 4.15 I get a loop of:


(XEN) *** LDT: gl1e 0000000000000000 not present                               
(XEN) *** pv_map_ldt_shadow_page(0x40) failed                                  
[  12.3586540] Process (pid 1) got sig 11                                      

which means that the dom0 gets the trap and decides that the fault address
is not mapped. Without the change, the dom0 doesn't show the
"Process (pid 1) got sig 11" line.

I activated the NetBSD trap debug code, and this shows:
[   6.7165877] kern.module.path=/stand/amd64-xen/9.1/modules
(XEN) *** LDT: gl1e 0000000000000000 not present
(XEN) *** pv_map_ldt_shadow_page(0x40) failed
[   6.9462322] pid 1.1 (init): signal 11 code=1 (trap 0x6) @rip 0x7f7ef0c007d0 addr 0xffffbd800000a040 error=14
[   7.0647896] trapframe 0xffffbd80381cff00
[   7.1126288] rip 0x00007f7ef0c007d0  rsp 0x00007f7fff10aa30  rfl 0x0000000000000202
[   7.2041518] rdi 000000000000000000  rsi 000000000000000000  rdx 000000000000000000
[   7.2956758] rcx 000000000000000000  r8  000000000000000000  r9  000000000000000000
[   7.3872013] r10 000000000000000000  r11 000000000000000000  r12 000000000000000000
[   7.4787216] r13 000000000000000000  r14 000000000000000000  r15 000000000000000000
[   7.5702439] rbp 000000000000000000  rbx 0x00007f7fff10afe0  rax 000000000000000000
[   7.6617663] cs 0x47  ds 0x23  es 0x23  fs 0000  gs 0000  ss 0x3f
[   7.7345663] fsbase 000000000000000000 gsbase 000000000000000000

so it looks like something resets %fs to 0 ...

Anyway, the fault address 0xffffbd800000a040 is in the hypervisor's range,
isn't it?

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 10:43:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 10:43:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49054.86772 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knJQ2-0003GP-Eg; Thu, 10 Dec 2020 10:43:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49054.86772; Thu, 10 Dec 2020 10:43:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knJQ2-0003GI-Bk; Thu, 10 Dec 2020 10:43:30 +0000
Received: by outflank-mailman (input) for mailman id 49054;
 Thu, 10 Dec 2020 10:43:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gO0O=FO=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1knJQ0-0003GD-IB
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 10:43:28 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 4ad526f4-b2db-4788-94e3-8b8c52186b02;
 Thu, 10 Dec 2020 10:43:26 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 49C9C30E;
 Thu, 10 Dec 2020 02:43:26 -0800 (PST)
Received: from localhost.localdomain (unknown [10.57.62.29])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 8AFDC3F718;
 Thu, 10 Dec 2020 02:43:24 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4ad526f4-b2db-4788-94e3-8b8c52186b02
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] xen/arm: Add workaround for Cortex-A53 erratum #843419
Date: Thu, 10 Dec 2020 10:42:58 +0000
Message-Id: <20201210104258.111-1-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1

On the Cortex A53, when executing in AArch64 state, a load or store instruction
which uses the result of an ADRP instruction as a base register, or which uses
a base register written by an instruction immediately after an ADRP to the
same register, might access an incorrect address.

The workaround is to pass the linker flag --fix-cortex-a53-843419, when
the linker supports it, so that affected sequences are detected and fixed
at link time. Otherwise, print a warning that Xen may be susceptible to
this erratum.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 docs/misc/arm/silicon-errata.txt |  1 +
 xen/arch/arm/Kconfig             | 19 +++++++++++++++++++
 xen/arch/arm/Makefile            |  8 ++++++++
 xen/scripts/Kbuild.include       | 12 ++++++++++++
 4 files changed, 40 insertions(+)

diff --git a/docs/misc/arm/silicon-errata.txt b/docs/misc/arm/silicon-errata.txt
index 27bf957ebf..1925d8fd4e 100644
--- a/docs/misc/arm/silicon-errata.txt
+++ b/docs/misc/arm/silicon-errata.txt
@@ -45,6 +45,7 @@ stable hypervisors.
 | ARM            | Cortex-A53      | #827319         | ARM64_ERRATUM_827319    |
 | ARM            | Cortex-A53      | #824069         | ARM64_ERRATUM_824069    |
 | ARM            | Cortex-A53      | #819472         | ARM64_ERRATUM_819472    |
+| ARM            | Cortex-A53      | #843419         | ARM64_ERRATUM_843419    |
 | ARM            | Cortex-A55      | #1530923        | N/A                     |
 | ARM            | Cortex-A57      | #852523         | N/A                     |
 | ARM            | Cortex-A57      | #832075         | ARM64_ERRATUM_832075    |
diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index f5b1bcda03..41bde2f401 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -186,6 +186,25 @@ config ARM64_ERRATUM_819472
 
 	  If unsure, say Y.
 
+config ARM64_ERRATUM_843419
+	bool "Cortex-A53: 843419: A load or store might access an incorrect address"
+	default y
+	depends on ARM_64
+	help
+	  This option adds an alternative code sequence to work around ARM
+	  erratum 843419 on Cortex-A53 parts up to r0p4.
+
+	  When executing in AArch64 state, a load or store instruction which uses
+	  the result of an ADRP instruction as a base register, or which uses a
+	  base register written by an instruction immediately after an ADRP to the
+	  same register, might access an incorrect address.
+
+	  The workaround makes the linker check whether an affected sequence
+	  was produced and, if so, replace it with an alternative, unaffected
+	  sequence that preserves the same behavior.
+
+	  If unsure, say Y.
+
 config ARM64_ERRATUM_832075
 	bool "Cortex-A57: 832075: possible deadlock on mixing exclusive memory accesses with device loads"
 	default y
diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 296c5e68bb..ad2d497c45 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -101,6 +101,14 @@ prelink.o: $(ALL_OBJS) FORCE
 	$(call if_changed,ld)
 endif
 
+ifeq ($(CONFIG_ARM64_ERRATUM_843419),y)
+    ifeq ($(call ld-option, --fix-cortex-a53-843419),n)
+        $(warning ld does not support --fix-cortex-a53-843419; xen may be susceptible to erratum)
+    else
+        XEN_LDFLAGS += --fix-cortex-a53-843419
+    endif
+endif
+
 targets += prelink.o
 
 $(TARGET)-syms: prelink.o xen.lds
diff --git a/xen/scripts/Kbuild.include b/xen/scripts/Kbuild.include
index e62eddc365..83c7e1457b 100644
--- a/xen/scripts/Kbuild.include
+++ b/xen/scripts/Kbuild.include
@@ -43,6 +43,18 @@ define as-option-add-closure
     endif
 endef
 
+# $(if-success,<command>,<then>,<else>)
+# Return <then> if <command> exits with 0, <else> otherwise.
+if-success = $(shell { $(1); } >/dev/null 2>&1 && echo "$(2)" || echo "$(3)")
+
+# $(success,<command>)
+# Return y if <command> exits with 0, n otherwise
+success = $(call if-success,$(1),y,n)
+
+# $(ld-option,<flag>)
+# Return y if the linker supports <flag>, n otherwise
+ld-option = $(call success,$(LD) -v $(1))
+
 # cc-ifversion
 # Usage:  EXTRA_CFLAGS += $(call cc-ifversion, -lt, 0402, -O1)
 cc-ifversion = $(shell [ $(CONFIG_GCC_VERSION)0 $(1) $(2)000 ] && echo $(3) || echo $(4))
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 11:10:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 11:10:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49070.86785 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knJpw-00065Z-NA; Thu, 10 Dec 2020 11:10:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49070.86785; Thu, 10 Dec 2020 11:10:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knJpw-00065S-Hh; Thu, 10 Dec 2020 11:10:16 +0000
Received: by outflank-mailman (input) for mailman id 49070;
 Thu, 10 Dec 2020 11:10:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gD3s=FO=arm.com=mark.rutland@srs-us1.protection.inumbo.net>)
 id 1knJpw-00065N-3L
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 11:10:16 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 31dfe8db-e3aa-45be-bda6-2bd8ac4fbb66;
 Thu, 10 Dec 2020 11:10:15 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B642F30E;
 Thu, 10 Dec 2020 03:10:14 -0800 (PST)
Received: from C02TD0UTHF1T.local (unknown [10.57.27.13])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C06353F718;
 Thu, 10 Dec 2020 03:10:11 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 31dfe8db-e3aa-45be-bda6-2bd8ac4fbb66
Date: Thu, 10 Dec 2020 11:10:08 +0000
From: Mark Rutland <mark.rutland@arm.com>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>, Juergen Gross <jgross@suse.com>,
	xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, luto@kernel.org,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>, Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v2 05/12] x86: rework arch_local_irq_restore() to not use
 popf
Message-ID: <20201210111008.GB88655@C02TD0UTHF1T.local>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-6-jgross@suse.com>
 <20201120115943.GD3021@hirez.programming.kicks-ass.net>
 <20201209181514.GA14235@C02TD0UTHF1T.local>
 <87tusuzu71.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <87tusuzu71.fsf@nanos.tec.linutronix.de>

On Wed, Dec 09, 2020 at 07:54:26PM +0100, Thomas Gleixner wrote:
> On Wed, Dec 09 2020 at 18:15, Mark Rutland wrote:
> > In arch/x86/kernel/apic/io_apic.c's timer_irq_works() we do:
> >
> > 	local_irq_save(flags);
> > 	local_irq_enable();
> >
> > 	[ trigger an IRQ here ]
> >
> > 	local_irq_restore(flags);
> >
> > ... and in check_timer() we call that a number of times after either a
> > local_irq_save() or local_irq_disable(), eventually trailing with a
> > local_irq_disable() that will balance things up before calling
> > local_irq_restore().

I gave the patchlet below a spin with my debug patch, and it boots
cleanly for me under QEMU. If you spin it as a real patch, feel free to
add:

Tested-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/x86/kernel/apic/io_apic.c |   22 ++++++----------------
>  1 file changed, 6 insertions(+), 16 deletions(-)
> 
> --- a/arch/x86/kernel/apic/io_apic.c
> +++ b/arch/x86/kernel/apic/io_apic.c
> @@ -1618,21 +1618,16 @@ static void __init delay_without_tsc(voi
>  static int __init timer_irq_works(void)
>  {
>  	unsigned long t1 = jiffies;
> -	unsigned long flags;
>  
>  	if (no_timer_check)
>  		return 1;
>  
> -	local_save_flags(flags);
>  	local_irq_enable();
> -
>  	if (boot_cpu_has(X86_FEATURE_TSC))
>  		delay_with_tsc();
>  	else
>  		delay_without_tsc();
>  
> -	local_irq_restore(flags);
> -
>  	/*
>  	 * Expect a few ticks at least, to be sure some possible
>  	 * glue logic does not lock up after one or two first
> @@ -1641,10 +1636,10 @@ static int __init timer_irq_works(void)
>  	 * least one tick may be lost due to delays.
>  	 */
>  
> -	/* jiffies wrap? */
> -	if (time_after(jiffies, t1 + 4))
> -		return 1;
> -	return 0;
> +	local_irq_disable();
> +
> +	/* Did jiffies advance? */
> +	return time_after(jiffies, t1 + 4);
>  }
>  
>  /*
> @@ -2117,13 +2112,12 @@ static inline void __init check_timer(vo
>  	struct irq_cfg *cfg = irqd_cfg(irq_data);
>  	int node = cpu_to_node(0);
>  	int apic1, pin1, apic2, pin2;
> -	unsigned long flags;
>  	int no_pin1 = 0;
>  
>  	if (!global_clock_event)
>  		return;
>  
> -	local_irq_save(flags);
> +	local_irq_disable();
>  
>  	/*
>  	 * get/set the timer IRQ vector:
> @@ -2191,7 +2185,6 @@ static inline void __init check_timer(vo
>  			goto out;
>  		}
>  		panic_if_irq_remap("timer doesn't work through Interrupt-remapped IO-APIC");
> -		local_irq_disable();
>  		clear_IO_APIC_pin(apic1, pin1);
>  		if (!no_pin1)
>  			apic_printk(APIC_QUIET, KERN_ERR "..MP-BIOS bug: "
> @@ -2215,7 +2208,6 @@ static inline void __init check_timer(vo
>  		/*
>  		 * Cleanup, just in case ...
>  		 */
> -		local_irq_disable();
>  		legacy_pic->mask(0);
>  		clear_IO_APIC_pin(apic2, pin2);
>  		apic_printk(APIC_QUIET, KERN_INFO "....... failed.\n");
> @@ -2232,7 +2224,6 @@ static inline void __init check_timer(vo
>  		apic_printk(APIC_QUIET, KERN_INFO "..... works.\n");
>  		goto out;
>  	}
> -	local_irq_disable();
>  	legacy_pic->mask(0);
>  	apic_write(APIC_LVT0, APIC_LVT_MASKED | APIC_DM_FIXED | cfg->vector);
>  	apic_printk(APIC_QUIET, KERN_INFO "..... failed.\n");
> @@ -2251,7 +2242,6 @@ static inline void __init check_timer(vo
>  		apic_printk(APIC_QUIET, KERN_INFO "..... works.\n");
>  		goto out;
>  	}
> -	local_irq_disable();
>  	apic_printk(APIC_QUIET, KERN_INFO "..... failed :(.\n");
>  	if (apic_is_x2apic_enabled())
>  		apic_printk(APIC_QUIET, KERN_INFO
> @@ -2260,7 +2250,7 @@ static inline void __init check_timer(vo
>  	panic("IO-APIC + timer doesn't work!  Boot with apic=debug and send a "
>  		"report.  Then try booting with the 'noapic' option.\n");
>  out:
> -	local_irq_restore(flags);
> +	local_irq_enable();
>  }
>  
>  /*


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 11:15:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 11:15:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49077.86796 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knJud-0006O2-9S; Thu, 10 Dec 2020 11:15:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49077.86796; Thu, 10 Dec 2020 11:15:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knJud-0006Nv-6H; Thu, 10 Dec 2020 11:15:07 +0000
Received: by outflank-mailman (input) for mailman id 49077;
 Thu, 10 Dec 2020 11:15:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jTxL=FO=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1knJub-0006Nq-En
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 11:15:05 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 55b8c065-135e-4b01-8edf-b818fc62cd7b;
 Thu, 10 Dec 2020 11:15:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 55b8c065-135e-4b01-8edf-b818fc62cd7b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1607598903;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=epAsmBLaUkoMUS/pHo0oHFDNdEoq5Gn12bK70AEVopU=;
  b=J58CYYZflj2T3J3TpIteUFx1sZMW7RDGw/FpPA9rFvMVqrZ3nnljJPK8
   Dsn7rBosf2kmxUTAQNWsfFP54plxzcC5UwZUujPleGonX0qR4OKeQdMIT
   a/h8VPuAxrbeWnilLbPwN3E+UQdM9ZXHCHEpxazKbXzrpV6HDsSEHdMh1
   Y=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 32953073
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,408,1599537600"; 
   d="scan'208";a="32953073"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CfbDngfGN+I8AVvETLYbDyAcVrmiGDTjBFcn0q3y5no=;
 b=cxNE6KOI2HGmbCgECrVE7iiCKMmOqW5ogmlaQEo2GwZuHGbmKcGVZvqhTKEFh/uEzld9O5QbilWuAErp+4mBo4EuaV0fNwxdM5MHzO+TBCPFIUMFMNxIO8qtMtI5zg/ItwsCGjhJYfmcqVKHwGFWBD+iAAeVnZWD0JeTE+W1vjA=
Date: Thu, 10 Dec 2020 12:14:54 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
CC: Jason Andryuk <jandryuk@gmail.com>, xen-devel
	<xen-devel@lists.xenproject.org>, open list <linux-kernel@vger.kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [PATCH 2/2] xen: don't use page->lru for ZONE_DEVICE memory
Message-ID: <20201210111454.dxykvyktzwr3fjyk@Air-de-Roger>
References: <20201207133024.16621-1-jgross@suse.com>
 <20201207133024.16621-3-jgross@suse.com>
 <CAKf6xpuqdY=TctOjNsnTTexeBpkV+HMkOHFsAd4vxUudBpxizA@mail.gmail.com>
 <72bc4417-076c-78f0-9c7e-5a9c95e79fb2@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <72bc4417-076c-78f0-9c7e-5a9c95e79fb2@suse.com>
X-ClientProxiedBy: MR2P264CA0132.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:30::24) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 5721cf22-e4be-41bf-f558-08d89cfcd5bc
X-MS-TrafficTypeDiagnostic: DM6PR03MB4395:
X-Microsoft-Antispam-PRVS: <DM6PR03MB43952ABCD326CE1FFF92B5568FCB0@DM6PR03MB4395.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Dec 2020 11:15:00.3472
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: 5721cf22-e4be-41bf-f558-08d89cfcd5bc
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: GAUnG4kLrGeB2LO9ZS/JGucIpj1uN+xtxn5atfgddSY2Zt0q4VfI7rXSB2TmQqoXYNeCshZWHjsBmxsBQ+DLTw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4395
X-OriginatorOrg: citrix.com

On Tue, Dec 08, 2020 at 07:45:00AM +0100, Jürgen Groß wrote:
> On 07.12.20 21:48, Jason Andryuk wrote:
> > On Mon, Dec 7, 2020 at 8:30 AM Juergen Gross <jgross@suse.com> wrote:
> > > 
> > > Commit 9e2369c06c8a18 ("xen: add helpers to allocate unpopulated
> > > memory") introduced usage of ZONE_DEVICE memory for foreign memory
> > > mappings.
> > > 
> > > Unfortunately this collides with using page->lru for Xen backend
> > > private page caches.
> > > 
> > > Fix that by using page->zone_device_data instead.
> > > 
> > > Fixes: 9e2369c06c8a18 ("xen: add helpers to allocate unpopulated memory")
> > > Signed-off-by: Juergen Gross <jgross@suse.com>
> > 
> > Would it make sense to add BUG_ON(is_zone_device_page(page)) and the
> > opposite as appropriate to cache_enq?
> 
> No, I don't think so. At least in the CONFIG_ZONE_DEVICE case the
> initial list in a PV dom0 is populated from extra memory (basically
> the same, but not marked as zone device memory explicitly).

I assume it's fine for us to then use page->zone_device_data even if
the page is not explicitly marked as ZONE_DEVICE memory?

If that's fine, add my:

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

To both patches, and thank you very much for fixing this!

Roger.


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 11:30:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 11:30:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49098.86814 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knK9E-0008Fc-PL; Thu, 10 Dec 2020 11:30:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49098.86814; Thu, 10 Dec 2020 11:30:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knK9E-0008FV-MP; Thu, 10 Dec 2020 11:30:12 +0000
Received: by outflank-mailman (input) for mailman id 49098;
 Thu, 10 Dec 2020 11:30:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knK9C-0008FN-SF; Thu, 10 Dec 2020 11:30:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knK9C-00053a-KM; Thu, 10 Dec 2020 11:30:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knK9C-0007Wm-CP; Thu, 10 Dec 2020 11:30:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1knK9C-0001kO-Bs; Thu, 10 Dec 2020 11:30:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=AEi6LpX3qtbQNn3fJ/tL95SgicAzDipq9YpJheK+qEI=; b=awJCgj445ZdGbfsNBOgWvHKmss
	szhUc0RVSVkcXdaxVCUk4orhsuixJ/f45zgmiNtndQWAxUEIqCXmkdU+weDqx9szhG5ESsDQpv91i
	QWC0RBjHpsnNnKrjMeTFBmSe8dKiF6VFVKmIoI+tW/BxWByzrPrHmAuI9V/g2k9tNPsY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157365-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157365: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-pair:guest-migrate/dst_host/src_host/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=777e3590f154e6a8af560dd318b9465fa168db20
X-Osstest-Versions-That:
    xen=777e3590f154e6a8af560dd318b9465fa168db20
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Dec 2020 11:30:10 +0000

flight 157365 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157365/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-examine    4 memdisk-try-append fail in 157335 pass in 157365
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 157335 pass in 157365
 test-amd64-i386-libvirt-pair 28 guest-migrate/dst_host/src_host/debian.repeat fail pass in 157335

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157335
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157335
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157335
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157335
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157335
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157335
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157335
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157335
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157335
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157335
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157335
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  777e3590f154e6a8af560dd318b9465fa168db20
baseline version:
 xen                  777e3590f154e6a8af560dd318b9465fa168db20

Last test of basis   157365  2020-12-10 01:52:29 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 11:40:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 11:40:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49107.86830 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knKJX-00011a-Uu; Thu, 10 Dec 2020 11:40:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49107.86830; Thu, 10 Dec 2020 11:40:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knKJX-00011T-Rj; Thu, 10 Dec 2020 11:40:51 +0000
Received: by outflank-mailman (input) for mailman id 49107;
 Thu, 10 Dec 2020 11:40:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PZm+=FO=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1knKJW-00011O-SS
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 11:40:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 309028b6-33b3-41fa-af2d-f7c3522de9e4;
 Thu, 10 Dec 2020 11:40:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2436FAD63;
 Thu, 10 Dec 2020 11:40:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 309028b6-33b3-41fa-af2d-f7c3522de9e4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607600449; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=zaS3R0D28hfhRJRE8kS9Fjpf3ltkfs74Q0HQPBSqdr8=;
	b=l1ku7VnksbHmDI/hbfm+HUu1MfXIfYXOu3dojB1LBKeYCDxar1PKmTBS213bq9oCCc4E0U
	p4Luu9fLuvfi7i+/Gnt2G11845AMhTzcFCP9DkNixY7baetrDGi9+Iy9ZOAKPH3wosCOxQ
	bFoTD0LYZFaxB20mP5tglo2JiecXXWw=
Subject: Re: [PATCH 2/2] xen: don't use page->lru for ZONE_DEVICE memory
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Jason Andryuk <jandryuk@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 open list <linux-kernel@vger.kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20201207133024.16621-1-jgross@suse.com>
 <20201207133024.16621-3-jgross@suse.com>
 <CAKf6xpuqdY=TctOjNsnTTexeBpkV+HMkOHFsAd4vxUudBpxizA@mail.gmail.com>
 <72bc4417-076c-78f0-9c7e-5a9c95e79fb2@suse.com>
 <20201210111454.dxykvyktzwr3fjyk@Air-de-Roger>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <7425aed6-ff6f-873a-b629-b9c7058e9b13@suse.com>
Date: Thu, 10 Dec 2020 12:25:51 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201210111454.dxykvyktzwr3fjyk@Air-de-Roger>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="ksoksSJb3j0Ut26tcfA9wmBm4Bn3Gsd2Y"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--ksoksSJb3j0Ut26tcfA9wmBm4Bn3Gsd2Y
Content-Type: multipart/mixed; boundary="LnRafePN06iNHry9leb0lMX8VhyDVlrY5";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Jason Andryuk <jandryuk@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 open list <linux-kernel@vger.kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Message-ID: <7425aed6-ff6f-873a-b629-b9c7058e9b13@suse.com>
Subject: Re: [PATCH 2/2] xen: don't use page->lru for ZONE_DEVICE memory
References: <20201207133024.16621-1-jgross@suse.com>
 <20201207133024.16621-3-jgross@suse.com>
 <CAKf6xpuqdY=TctOjNsnTTexeBpkV+HMkOHFsAd4vxUudBpxizA@mail.gmail.com>
 <72bc4417-076c-78f0-9c7e-5a9c95e79fb2@suse.com>
 <20201210111454.dxykvyktzwr3fjyk@Air-de-Roger>
In-Reply-To: <20201210111454.dxykvyktzwr3fjyk@Air-de-Roger>

--LnRafePN06iNHry9leb0lMX8VhyDVlrY5
Content-Type: multipart/mixed;
 boundary="------------BA5A6B6E95D44F69ED3FB6FB"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------BA5A6B6E95D44F69ED3FB6FB
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 10.12.20 12:14, Roger Pau Monn=C3=A9 wrote:
> On Tue, Dec 08, 2020 at 07:45:00AM +0100, J=C3=BCrgen Gro=C3=9F wrote:
>> On 07.12.20 21:48, Jason Andryuk wrote:
>>> On Mon, Dec 7, 2020 at 8:30 AM Juergen Gross <jgross@suse.com> wrote:=

>>>>
>>>> Commit 9e2369c06c8a18 ("xen: add helpers to allocate unpopulated
>>>> memory") introduced usage of ZONE_DEVICE memory for foreign memory
>>>> mappings.
>>>>
>>>> Unfortunately this collides with using page->lru for Xen backend
>>>> private page caches.
>>>>
>>>> Fix that by using page->zone_device_data instead.
>>>>
>>>> Fixes: 9e2369c06c8a18 ("xen: add helpers to allocate unpopulated mem=
ory")
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>
>>> Would it make sense to add BUG_ON(is_zone_device_page(page)) and the
>>> opposite as appropriate to cache_enq?
>>
>> No, I don't think so. At least in the CONFIG_ZONE_DEVICE case the
>> initial list in a PV dom0 is populated from extra memory (basically
>> the same, but not marked as zone device memory explicitly).
>=20
> I assume it's fine for us to then use page->zone_device_data even if
> the page is not explicitly marked as ZONE_DEVICE memory?

I think so, yes, as we are owner of that page and we were fine to use
lru, too.

>=20
> If that's fine, add my:
>=20
> Acked-by: Roger Pau Monn=C3=A9 <roger.pau@citrix.com>
>=20
> To both patches, and thank you very much for fixing this!

UR welcome. :-)


Juergen

--------------BA5A6B6E95D44F69ED3FB6FB
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------BA5A6B6E95D44F69ED3FB6FB--

--LnRafePN06iNHry9leb0lMX8VhyDVlrY5--

--ksoksSJb3j0Ut26tcfA9wmBm4Bn3Gsd2Y
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/SBb8FAwAAAAAACgkQsN6d1ii/Ey+Y
JAgAhvi8On0omlCq6kAAnEhiDDtQpG1QxSB4jHR5B5cy5PJCgWGrrtxQj06Fm+nVv9UxAbNnPdxq
aSIoodvIB1HTeOCluhp3kV1jyOBoCEtHG3/M7gjA64LJxAgKYtgxcWxZc5AEXPAaa+4Z1FgpoM9Z
/rLx7ysQVM0gLMYYqQk5cbETqa8pCHpcbSN8DmI4vDk/cC4TlHWZ7SMwC5AMRfqFPh8Vnee6ozo1
Y8+XKGRFlFzjkB/nvg7jYc/bUl/+8EdXK+PJNPWjYwxPWgctNX8JHkSnYtNiX3iEF3zCR19mnYP5
2R15q5LdCuWQoot9zOb7gpMlO7V0KDxr6/h4UwhDMw==
=+PsR
-----END PGP SIGNATURE-----

--ksoksSJb3j0Ut26tcfA9wmBm4Bn3Gsd2Y--


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 11:43:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 11:43:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49112.86841 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knKLr-0001BP-BR; Thu, 10 Dec 2020 11:43:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49112.86841; Thu, 10 Dec 2020 11:43:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knKLr-0001BI-8a; Thu, 10 Dec 2020 11:43:15 +0000
Received: by outflank-mailman (input) for mailman id 49112;
 Thu, 10 Dec 2020 11:43:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wUnW=FO=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1knKLq-0001BD-Hb
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 11:43:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 47aa3161-e3fd-4813-bf34-523b5cf023db;
 Thu, 10 Dec 2020 11:43:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DB7F3ACE0;
 Thu, 10 Dec 2020 11:43:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 47aa3161-e3fd-4813-bf34-523b5cf023db
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607600592; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=rHv+vrHEFSnyiqfyOw9O5XBc5B+d/fjAEKvyLvCM0lM=;
	b=kQCIyifVts2rpvwOjc76Imytuj+L6BlkEke93rCarW/DCzlGbZujZM213j3JzrgnW+NQPR
	kfL85k548bNxIg7eGOwGtn1165K+KYLxQcudt4IeWwFdnwoc27KUA0jubuoxyuWheohUU9
	lvSsjJuVuozOIdqE2Q4KsiCEVcwWRPE=
Subject: Re: dom0 PV looping on search_pre_exception_table()
To: Manuel Bouyer <bouyer@antioche.eu.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org
References: <20201209101512.GA1299@antioche.eu.org>
 <3f7e50bb-24ad-1e32-9ea1-ba87007d3796@citrix.com>
 <20201209135908.GA4269@antioche.eu.org>
 <c612616a-3fcd-be93-7594-20c0c3b71b7a@citrix.com>
 <20201209154431.GA4913@antioche.eu.org>
 <52e1b10d-75d4-63ac-f91e-cb8f0dcca493@citrix.com>
 <20201209163049.GA6158@antioche.eu.org>
 <30a71c9d-3eff-3727-9c61-e387b5bccc95@citrix.com>
 <20201209185714.GS1469@antioche.eu.org>
 <6c06abf1-7efe-f02c-536a-337a2704e265@citrix.com>
 <20201210095139.GA455@antioche.eu.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <27f113f3-01a3-e5a4-eea5-e593693625fe@suse.com>
Date: Thu, 10 Dec 2020 11:41:03 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201210095139.GA455@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 10.12.2020 10:51, Manuel Bouyer wrote:
> On Wed, Dec 09, 2020 at 07:08:41PM +0000, Andrew Cooper wrote:
>> Oh of course - we don't follow the exit-to-guest path on the way out here.
>>
>> As a gross hack to check that we've at least diagnosed the issue
>> appropriately, could you modify NetBSD to explicitly load the %ss
>> selector into %es (or any other free segment) before first entering user
>> context?
> 
> If I understood it correctly, the user %ss is loaded by Xen from the
> trapframe when the guest switches from kernel to user mode, isn't it?
> So you mean setting %es to the same value in the trapframe?
> 
> Actually I used %fs because %es is set equal to %ds.
> Xen 4.13 boots fine with this change, but with 4.15 I get a loop of:
> 
> 
> (XEN) *** LDT: gl1e 0000000000000000 not present                               
> (XEN) *** pv_map_ldt_shadow_page(0x40) failed                                  
> [  12.3586540] Process (pid 1) got sig 11                                      
> 
> which means that the dom0 gets the trap, and decides that the fault address
> is not mapped. Without the change the dom0 doesn't show the
> "Process (pid 1) got sig 11"
> 
> I activated the NetBSD trap debug code, and this shows:
> [   6.7165877] kern.module.path=/stand/amd64-xen/9.1/modules
> (XEN) *** LDT: gl1e 0000000000000000 not present
> (XEN) *** pv_map_ldt_shadow_page(0x40) failed                                   
> [   6.9462322] pid 1.1 (init): signal 11 code=1 (trap 0x6) @rip 0x7f7ef0c007d0 addr 0xffffbd800000a040 error=14
> [   7.0647896] trapframe 0xffffbd80381cff00
> [   7.1126288] rip 0x00007f7ef0c007d0  rsp 0x00007f7fff10aa30  rfl 0x0000000000000202
> [   7.2041518] rdi 000000000000000000  rsi 000000000000000000  rdx 000000000000000000
> [   7.2956758] rcx 000000000000000000  r8  000000000000000000  r9  000000000000000000
> [   7.3872013] r10 000000000000000000  r11 000000000000000000  r12 000000000000000000
> [   7.4787216] r13 000000000000000000  r14 000000000000000000  r15 000000000000000000
> [   7.5702439] rbp 000000000000000000  rbx 0x00007f7fff10afe0  rax 000000000000000000
> [   7.6617663] cs 0x47  ds 0x23  es 0x23  fs 0000  gs 0000  ss 0x3f
> [   7.7345663] fsbase 000000000000000000 gsbase 000000000000000000
> 
> so it looks like something resets %fs to 0 ...
> 
> Anyway, the fault address 0xffffbd800000a040 is in the hypervisor's range,
> isn't it?

No, the hypervisor range is 0xffff800000000000-0xffff880000000000.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 11:44:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 11:44:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49117.86854 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knKMQ-0001Ho-LQ; Thu, 10 Dec 2020 11:43:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49117.86854; Thu, 10 Dec 2020 11:43:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knKMQ-0001Hh-IQ; Thu, 10 Dec 2020 11:43:50 +0000
Received: by outflank-mailman (input) for mailman id 49117;
 Thu, 10 Dec 2020 11:43:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PZm+=FO=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1knKMP-0001HR-4z
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 11:43:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 98f57eb2-0890-4db2-b73e-490f65edc755;
 Thu, 10 Dec 2020 11:43:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9360AAE95;
 Thu, 10 Dec 2020 11:43:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98f57eb2-0890-4db2-b73e-490f65edc755
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607600625; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=6mV0b3KCapS7XDKLmQWqZDjFDGoUzXK8ZrY2OYAWHlI=;
	b=XHGRAks3Xnqi0IX6tqmwlqlfkYrQUsaskh+uBOr6JLT0gLc01GQjH4Ey5xhRZ1TgrEB5Pe
	LEZqfuWzTZkzRWcW0kd6PiscybdODUsGQytHkASh0gbJ//AOisLgWp94/vg2rue5SHNMOP
	9nDE8djrej3eCsMFkgCsPzpbJJwBa6c=
Subject: Re: [PATCH RFC 0/3] xen: add hypfs per-domain abi-features
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: paul@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201209161618.309-1-jgross@suse.com>
 <a2270efd-19d4-5d5e-2e8b-4696ba9751ab@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <b2118f4d-07c5-abae-2b1b-ac8f45c02563@suse.com>
Date: Thu, 10 Dec 2020 08:49:44 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <a2270efd-19d4-5d5e-2e8b-4696ba9751ab@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="eKJZfniLu9P811YC41rrKKDkfBQt0nHGW"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--eKJZfniLu9P811YC41rrKKDkfBQt0nHGW
Content-Type: multipart/mixed; boundary="evVRhgtOv3cpflGTJuugJMPibqU1vSPSY";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: paul@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Message-ID: <b2118f4d-07c5-abae-2b1b-ac8f45c02563@suse.com>
Subject: Re: [PATCH RFC 0/3] xen: add hypfs per-domain abi-features
References: <20201209161618.309-1-jgross@suse.com>
 <a2270efd-19d4-5d5e-2e8b-4696ba9751ab@xen.org>
In-Reply-To: <a2270efd-19d4-5d5e-2e8b-4696ba9751ab@xen.org>

--evVRhgtOv3cpflGTJuugJMPibqU1vSPSY
Content-Type: multipart/mixed;
 boundary="------------E677FE150D0E218282CED8EE"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------E677FE150D0E218282CED8EE
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 09.12.20 17:24, Julien Grall wrote:
> Hi,
>
> On 09/12/2020 16:16, Juergen Gross wrote:
>> This small series is meant as an example of how to add further dynamic
>> directories to hypfs. It can be used to replace Paul's current approach
>> to specify ABI-features via domain create flags and replace those by
>> hypfs nodes.
>
> This can only work if all the ABI-features are not required at the time
> of creating the domain.

Yes. In case further initialization is needed, it has to be done later,
depending on the setting.

> Those features should also be set only once. Furthermore, HYPFS is so
> far meant to be optional.

"set once" isn't the point. They should not be able to change after the
domain has been started, and that is covered.

>
> So it feels to me Paul's approach is leaner and better for the
> ABI-features purpose.

Depends.

My approach doesn't need any tools-side changes after the first
implementation when adding new abi-features. And it isn't expanding an
unstable interface.

In the end this is the reason I marked this series as RFC. If using
flags is preferred, fine.


Juergen

--------------E677FE150D0E218282CED8EE
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------E677FE150D0E218282CED8EE--

--evVRhgtOv3cpflGTJuugJMPibqU1vSPSY--

--eKJZfniLu9P811YC41rrKKDkfBQt0nHGW
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/R0xgFAwAAAAAACgkQsN6d1ii/Ey8t
eQgAiamucRYpdredgK4K3hsG4B/19DPFEkjE5aLRSl4wNLQDL9nsjXVBtbFxv1JIBOnCqgR4w3Kg
ljxqxSCqPKeV/b1xOqylrJrcG2Llgn6Y7qlXXw2PUkYBGC9HGfZnbopYQeEE4JgXQDQuf4BQPYG4
u91fn2TwFNaWgjvSZxLHmpi2PDAJUic46dEqYvIeOqjbh31k/NR1HmFE0DmW+HwqHZiBd4u+A8dz
SPdvdgOMEXdTpQhg/tvQZw4rXuHtu6EhcHIm0PLezEShgRTUAujcLt4ywrx4+9qufse/lNZWVx+n
/YFqCXkmJ/3J6JAbgas67RNOpoC6OZ13zRKMmPpI+w==
=rFyB
-----END PGP SIGNATURE-----

--eKJZfniLu9P811YC41rrKKDkfBQt0nHGW--


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 11:44:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 11:44:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49119.86866 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knKMg-0001MR-Un; Thu, 10 Dec 2020 11:44:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49119.86866; Thu, 10 Dec 2020 11:44:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knKMg-0001MJ-RS; Thu, 10 Dec 2020 11:44:06 +0000
Received: by outflank-mailman (input) for mailman id 49119;
 Thu, 10 Dec 2020 11:44:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PZm+=FO=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1knKMe-0001Kj-UV
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 11:44:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6e035dd6-5185-45a5-a752-f639291cf8a5;
 Thu, 10 Dec 2020 11:44:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6A49CAD7C;
 Thu, 10 Dec 2020 11:44:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6e035dd6-5185-45a5-a752-f639291cf8a5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607600643; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=xyHh1YlXh3D5krmV0gaGkpUq1r97cEDPdXsFnMllEUU=;
	b=DKqIeH3h5q0p4CZs6wwn6FxTbKhI9zOixC9FJO7NfZBsjB/ahZ6r/A9mad/xrbPy3Gap9C
	LELGFDR59flElH5Q1twrOzE0YU8aZJDECw7KHT+G4F8CFVCJQgjrV1My/H4i8dgOxNzcbh
	ks085czZLsNKo+fwWKzKiBLeCwX772U=
Subject: Re: [PATCH RFC 2/3] xen/domain: add domain hypfs directories
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: paul@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201209161618.309-1-jgross@suse.com>
 <20201209161618.309-3-jgross@suse.com>
 <75232058-4626-80cb-6f71-4ce5253f3ef6@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <8ec5f4f5-4314-9c4d-45f0-1f4686028a82@suse.com>
Date: Thu, 10 Dec 2020 08:54:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <75232058-4626-80cb-6f71-4ce5253f3ef6@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="gSdjpcUaNTrBpTrbsoW5JWlJUvXsbFLdP"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--gSdjpcUaNTrBpTrbsoW5JWlJUvXsbFLdP
Content-Type: multipart/mixed; boundary="H6xg0yS6hWU500hkl23Elp9371I9TLsmN";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: paul@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Message-ID: <8ec5f4f5-4314-9c4d-45f0-1f4686028a82@suse.com>
Subject: Re: [PATCH RFC 2/3] xen/domain: add domain hypfs directories
References: <20201209161618.309-1-jgross@suse.com>
 <20201209161618.309-3-jgross@suse.com>
 <75232058-4626-80cb-6f71-4ce5253f3ef6@xen.org>
In-Reply-To: <75232058-4626-80cb-6f71-4ce5253f3ef6@xen.org>

--H6xg0yS6hWU500hkl23Elp9371I9TLsmN
Content-Type: multipart/mixed;
 boundary="------------BB6929A11E713E9E6A013377"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------BB6929A11E713E9E6A013377
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 09.12.20 17:37, Julien Grall wrote:
> Hi Juergen,
>
> On 09/12/2020 16:16, Juergen Gross wrote:
>> Add /domain/<domid> directories to hypfs. Those are completely
>> dynamic, so the related hypfs access functions need to be implemented.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> V3:
>> - new patch
>> ---
>>   docs/misc/hypfs-paths.pandoc |  10 +++
>>   xen/common/Makefile          |   1 +
>>   xen/common/hypfs_dom.c       | 137 +++++++++++++++++++++++++++++++++++
>>   3 files changed, 148 insertions(+)
>>   create mode 100644 xen/common/hypfs_dom.c
>>
>> diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
>> index e86f7d0dbe..116642e367 100644
>> --- a/docs/misc/hypfs-paths.pandoc
>> +++ b/docs/misc/hypfs-paths.pandoc
>> @@ -34,6 +34,7 @@ not containing any '/' character. The names "." and ".." are reserved
>>   for file system internal use.
>>   VALUES are strings and can take the following forms (note that this
>> represents
>> +>>>>>>> patched
>
> This seems to be a left-over of a merge.

Oh, interesting that I wasn't warned about that.

>
>>   only the syntax used in this document):
>>   * STRING -- an arbitrary 0-delimited byte string.
>> @@ -191,6 +192,15 @@ The scheduling granularity of a cpupool.
>>   Writing a value is allowed only for cpupools with no cpu assigned
>> and if the
>>   architecture is supporting different scheduling granularities.
>
> [...]
>
>> +
>> +static int domain_dir_read(const struct hypfs_entry *entry,
>> +                           XEN_GUEST_HANDLE_PARAM(void) uaddr)
>> +{
>> +    int ret = 0;
>> +    const struct domain *d;
>> +
>> +    for_each_domain ( d )
>
> This is definitely going to be an issue if you have a lot of domains
> running, as Xen is not preemptible.

In general this is correct, but in this case I don't think it will
be a problem. The execution time for each loop iteration should be
rather short (in the microsecond range?), so even with 32000 guests
we would stay well below one second. And on rather slow CPUs I don't
think we'd have thousands of guests anyway.

> I think the first step is to make sure that HYPFS can scale without
> hogging a pCPU for a long time.

I agree this would be best.


Juergen


--------------BB6929A11E713E9E6A013377
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------BB6929A11E713E9E6A013377--

--H6xg0yS6hWU500hkl23Elp9371I9TLsmN--

--gSdjpcUaNTrBpTrbsoW5JWlJUvXsbFLdP
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/R1CIFAwAAAAAACgkQsN6d1ii/Ey+o
Agf/SWKaghqbwtnyyuxoLT/6CiHvxLUovD62MWxPPBVQh6uBeO3rPHnOD5J4Tw7KlyZkD7Hxb9Gl
5IKh6TSEapBOGlnzF5ZiHyn9g1kOt6c6Bs047AxVfsU7r6L5HwhpgxSfGe9R9Yecq6wa6HiR0do/
89ySZGPegJGJErvF7bXMEoPoBhhfWjYpCRy8hB4NKaMVAGDpGEbZemR6Q3b+Gb8SM2a4kyc8ZEaA
+WAJDoZXMWMytrRelrhGh6SBlJJv/uFhDmvXCqDKsRRiKVwWKsuQkJtjyq5oaD8gh4JPWZBBnG75
3zDoWLCnMjIHyzo9iOt5b644nklKXulZ9/RT3frxKg==
=bcmM
-----END PGP SIGNATURE-----

--gSdjpcUaNTrBpTrbsoW5JWlJUvXsbFLdP--


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 11:44:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 11:44:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49127.86878 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knKN6-0001VG-CF; Thu, 10 Dec 2020 11:44:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49127.86878; Thu, 10 Dec 2020 11:44:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knKN6-0001V9-8r; Thu, 10 Dec 2020 11:44:32 +0000
Received: by outflank-mailman (input) for mailman id 49127;
 Thu, 10 Dec 2020 11:44:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wUnW=FO=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1knKN5-0001Uu-01
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 11:44:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1151a45e-295f-45b5-a6c1-8421fcc2c5ae;
 Thu, 10 Dec 2020 11:44:29 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BA173AD5C;
 Thu, 10 Dec 2020 11:44:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1151a45e-295f-45b5-a6c1-8421fcc2c5ae
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607600669; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kGZ1v2zV5ELcG+fBrSPDafSIORbvLPX59cOeyhmcLHo=;
	b=F/SWGT1zGXxFxZwqDhu2D0nzaR+3ozKgQgmsgi5iW2vZ32xE53WK5fvEPDklugr8Vjz9d5
	wenkjsDSbG9OwzxdfUFsuJozVeREbsTKjwd3NYa2BFTqupsbPz2Yc53feKf8vp8DhVfDR9
	4ruP9BzmCpMB2vB4daK+c3G8RwLk/EU=
Subject: Re: [PATCH v3 1/8] xen: fix build when $(obj-y) consists of just
 blanks
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
 <511be84d-9a13-17ae-f3d9-d6daf9c02711@suse.com>
 <X9EL90SMyqrs9GaL@perard.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <50fc5143-5b5e-46ae-56a3-6eba2707f293@suse.com>
Date: Thu, 10 Dec 2020 11:21:53 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <X9EL90SMyqrs9GaL@perard.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.12.2020 18:40, Anthony PERARD wrote:
> On Mon, Nov 23, 2020 at 04:20:52PM +0100, Jan Beulich wrote:
>> This case can occur when combining empty lists
>>
>> obj-y :=
>> ...
>> obj-y += $(empty)
>>
>> or
>>
>> obj-y := $(empty) $(empty)
>>
>> where (only) blanks would accumulate. This was only a latent issue until
>> now, but would become an active issue for Arm once lib/ gets populated
>> with all respective objects going into the to be introduced lib.a.
>>
>> Also address a related issue on this occasion: When an empty built_in.o
>> gets created, .built_in.o.d will have its dependencies recorded. If, on
>> a subsequent incremental build, an actual constituent of built_in.o
>> appeared, the $(filter-out ) would leave these recorded dependencies in
>> place. But of course the linker won't know what to do with C header
>> files. (The apparent alternative of avoiding passing $(c_flags) or
>> $(a_flags) would not be reliable afaict, as among these flags there may
>> be some affecting information conveyed via the object file to the
>> linker. The linker, finding inconsistent flags across object files, may
> 
> How about using $(XEN_CFLAGS) instead of $(c_flags)? That should prevent
> CC from generating the .*.o.d files while keeping the relevant flags.

What does "relevant" cover? For an empty .o it may not be important
right now, but I could see

c_flags = -MMD -MP -MF $(@D)/.$(@F).d $(XEN_CFLAGS) '-D__OBJECT_FILE__="$@"'
a_flags = -MMD -MP -MF $(@D)/.$(@F).d $(XEN_AFLAGS)

include $(BASEDIR)/arch/$(TARGET_ARCH)/Rules.mk

c_flags += $(CFLAGS-y)
a_flags += $(CFLAGS-y) $(AFLAGS-y)

leading to CFLAGS-y / AFLAGS-y which need to be consistent across
_all_ object files (e.g. some recording of ABI used).

> I
> was planning to do that to avoid the issue, see:
> https://lore.kernel.org/xen-devel/20200421161208.2429539-10-anthony.perard@citrix.com
> 
>> then error out.) Using just $(obj-y) won't work either: It breaks when
>> the same object file is listed more than once.
> 
> Do we need to worry about having an object file being listed twice?
> Wouldn't that be a mistake?

No. The list approach (obj-$(CONFIG_xyz) += ...) easily allows for
this to happen. See xen/arch/x86/mm/Makefile for an existing example.
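For illustration, both behaviours discussed in this thread can be reproduced with a small stand-alone makefile. This is a hypothetical sketch (file name and CONFIG_* variables are made up; $(info) and an empty recipe are used to sidestep recipe-tab issues):

```shell
# Hypothetical demo of the two make behaviours discussed above:
# 1) a list built purely from empty expansions ends up holding a lone
#    blank, so ifeq ($(list),) no longer sees it as empty;
# 2) independent obj-$(CONFIG_...) lines can list the same object twice.
cat > /tmp/obj-list-demo.mk <<'EOF'
empty :=
blanks-y := $(empty) $(empty)

ifeq ($(blanks-y),)
BLANK_STATUS := empty
else
BLANK_STATUS := blank-but-not-empty
endif

CONFIG_FOO := y
CONFIG_BAR := y
obj-$(CONFIG_FOO) += shared.o
obj-$(CONFIG_BAR) += shared.o

$(info blanks=$(BLANK_STATUS) stripped=[$(strip $(blanks-y))])
$(info objs=[$(obj-y)] dedup=[$(sort $(obj-y))])
all: ;
EOF
make -s -f /tmp/obj-list-demo.mk
# prints:
#   blanks=blank-but-not-empty stripped=[]
#   objs=[shared.o shared.o] dedup=[shared.o]
```

$(strip) (or $(sort), which also deduplicates) is the usual way to normalise such lists before an emptiness check.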

Jan


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 11:51:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 11:51:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49138.86893 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knKTc-0002Zf-4U; Thu, 10 Dec 2020 11:51:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49138.86893; Thu, 10 Dec 2020 11:51:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knKTc-0002ZY-1Y; Thu, 10 Dec 2020 11:51:16 +0000
Received: by outflank-mailman (input) for mailman id 49138;
 Thu, 10 Dec 2020 11:51:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1knKTa-0002ZT-El
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 11:51:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1knKTY-0005V3-9p; Thu, 10 Dec 2020 11:51:12 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1knKTY-0001Sb-2A; Thu, 10 Dec 2020 11:51:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=qsz24wjIyr/LTsrdeQgsthxNleWSORfomvrF5AwzyEg=; b=gyCj77biDq6xxryvrlLFKyKjoZ
	AylGJtxskhmZAjB3LI+PbaUWEJ1zd5ri7Ikp4xEqO+4sOgB/kC5Q6qxH94I9xeaTi3JiF9eBSxGbX
	6L7xCGwXfaUMWz3eyG9T5I3ptg+EJDvaWXAlEl4mEYvSCS3DCo8kZYYsS4v2cdyyTMd8=;
Subject: Re: [PATCH RFC 2/3] xen/domain: add domain hypfs directories
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org
Cc: paul@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201209161618.309-1-jgross@suse.com>
 <20201209161618.309-3-jgross@suse.com>
 <75232058-4626-80cb-6f71-4ce5253f3ef6@xen.org>
 <8ec5f4f5-4314-9c4d-45f0-1f4686028a82@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <e4fde734-b353-d885-95e8-0ea9c2210994@xen.org>
Date: Thu, 10 Dec 2020 11:51:09 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <8ec5f4f5-4314-9c4d-45f0-1f4686028a82@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 10/12/2020 07:54, Jürgen Groß wrote:
> On 09.12.20 17:37, Julien Grall wrote:
>>>   only the syntax used in this document):
>>>   * STRING -- an arbitrary 0-delimited byte string.
>>> @@ -191,6 +192,15 @@ The scheduling granularity of a cpupool.
>>>   Writing a value is allowed only for cpupools with no cpu assigned 
>>> and if the
>>>   architecture is supporting different scheduling granularities.
>>
>> [...]
>>
>>> +
>>> +static int domain_dir_read(const struct hypfs_entry *entry,
>>> +                           XEN_GUEST_HANDLE_PARAM(void) uaddr)
>>> +{
>>> +    int ret = 0;
>>> +    const struct domain *d;
>>> +
>>> +    for_each_domain ( d )
>>
>> This is definitely going to be an issue if you have a lot of domains
>> running, as Xen is not preemptible.
> 
> In general this is correct, but in this case I don't think it will
> be a problem. The execution time for each loop iteration should be
> rather short (in the microsecond range?), so even with 32000 guests
> we would stay way below one second.

The scheduling slices are usually in ms, not seconds (though this will
depend on your scheduler). It would be unacceptable to me if another
vCPU could not run for a second because dom0 is trying to list the
domains via HYPFS.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 11:58:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 11:58:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49145.86908 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knKa4-0002oC-T8; Thu, 10 Dec 2020 11:57:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49145.86908; Thu, 10 Dec 2020 11:57:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knKa4-0002o5-PJ; Thu, 10 Dec 2020 11:57:56 +0000
Received: by outflank-mailman (input) for mailman id 49145;
 Thu, 10 Dec 2020 11:57:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1knKa3-0002o0-Mk
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 11:57:55 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1knKa1-0005eh-Nf; Thu, 10 Dec 2020 11:57:53 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1knKa1-0001w6-Dr; Thu, 10 Dec 2020 11:57:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=vrBLsUjlzYoDlkXVsPb2wZLmKVxCUpiso65HZWimxZ8=; b=ueG75IqjVUNWvJZ9NnldyJFlVn
	2UO69DhkmXUjO575Tuxy2S/6bTVGFj4yreOSwALR7cnx+KOH8a3Ng0iwz0cfjHKKv4IT2rD6nJeed
	i+slXrJPLY4BarhMrRQpwj8TPuMmiAkDvqUvl8GGxaBbeslt4JUKwEMx4g5aNoCYEYGk=;
Subject: Re: [PATCH RFC 0/3] xen: add hypfs per-domain abi-features
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org
Cc: paul@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201209161618.309-1-jgross@suse.com>
 <a2270efd-19d4-5d5e-2e8b-4696ba9751ab@xen.org>
 <b2118f4d-07c5-abae-2b1b-ac8f45c02563@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <8122851a-e674-4283-d82a-9cc99425d66c@xen.org>
Date: Thu, 10 Dec 2020 11:57:51 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <b2118f4d-07c5-abae-2b1b-ac8f45c02563@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 10/12/2020 07:49, Jürgen Groß wrote:
> On 09.12.20 17:24, Julien Grall wrote:
>> Hi,
>>
>> On 09/12/2020 16:16, Juergen Gross wrote:
>>> This small series is meant as an example of how to add further dynamic
>>> directories to hypfs. It can be used to replace Paul's current approach
>>> to specify ABI-features via domain create flags and replace those by
>>> hypfs nodes.
>>
>> This can only work if none of the ABI-features are required at the
>> time of creating the domain.
> 
> Yes. In case there is some further initialization needed this has to be
> done later depending on the setting.

We used to allocate vCPUs after the domain had been created. But this
ended up in a can of worms, because it required a careful ordering of
the hypercalls.

So I would rather avoid introducing a similar operation again...

> 
>> Those features should also be set only once. Furthermore, HYPFS is so 
>> far meant to be optional.
> 
> "set once" isn't the point. They should not be able to change after the
> domain has been started, and that is covered.

That really depends on the flag. Imagine there are dependencies between 
flags or memory needs to be allocated.

> 
>>
>> So it feels to me Paul's approach is leaner and better for the 
>> ABI-features purpose.
> 
> Depends.
> 
> My approach doesn't need any tools-side changes after the first
> implementation when adding new abi-features. And it isn't expanding an
> unstable interface.
> 
> In the end this is the reason I marked this series as RFC. If using
> flags is preferred, fine.

I prefer the flag version. Let's see what others think.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 12:46:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 12:46:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49173.86926 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knLL2-0007oW-W5; Thu, 10 Dec 2020 12:46:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49173.86926; Thu, 10 Dec 2020 12:46:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knLL2-0007oP-Sg; Thu, 10 Dec 2020 12:46:28 +0000
Received: by outflank-mailman (input) for mailman id 49173;
 Thu, 10 Dec 2020 12:46:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PZm+=FO=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1knLL1-0007oK-Sv
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 12:46:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 770a3515-c483-4872-b784-f47c18e08da4;
 Thu, 10 Dec 2020 12:46:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C67D7AC6A;
 Thu, 10 Dec 2020 12:46:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 770a3515-c483-4872-b784-f47c18e08da4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607604385; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=r/j7GBPPiVnxEH4aiu/iyhUaMmpI/TnCPrfRtBRKQus=;
	b=HxZe91PG71iaDbz96cYhAe0io14GXqOZG89jsrgv58zOZ8K2yTzm+e/IEB17ohZ0N8nSwE
	chg0wf9uEHlFPd+ePBJBefzOvoid27qmWNMTNW23XEHIE0Y3OzBS8GWH75Bz5s8g7zQlhh
	JOBgcbdhFpexzs+PuiOS5r3HeDmYzNc=
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: paul@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201209161618.309-1-jgross@suse.com>
 <20201209161618.309-3-jgross@suse.com>
 <75232058-4626-80cb-6f71-4ce5253f3ef6@xen.org>
 <8ec5f4f5-4314-9c4d-45f0-1f4686028a82@suse.com>
 <e4fde734-b353-d885-95e8-0ea9c2210994@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH RFC 2/3] xen/domain: add domain hypfs directories
Message-ID: <db24e3f3-c360-1801-ec8a-ca1b948633fa@suse.com>
Date: Thu, 10 Dec 2020 13:46:25 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <e4fde734-b353-d885-95e8-0ea9c2210994@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="AuAjU2w8QFAuDpLnz5M6PEC2EIASVAMsB"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--AuAjU2w8QFAuDpLnz5M6PEC2EIASVAMsB
Content-Type: multipart/mixed; boundary="DKhSRLXbICi9FruhPkiwYvhFL0Fd7uOr3";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: paul@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Message-ID: <db24e3f3-c360-1801-ec8a-ca1b948633fa@suse.com>
Subject: Re: [PATCH RFC 2/3] xen/domain: add domain hypfs directories
References: <20201209161618.309-1-jgross@suse.com>
 <20201209161618.309-3-jgross@suse.com>
 <75232058-4626-80cb-6f71-4ce5253f3ef6@xen.org>
 <8ec5f4f5-4314-9c4d-45f0-1f4686028a82@suse.com>
 <e4fde734-b353-d885-95e8-0ea9c2210994@xen.org>
In-Reply-To: <e4fde734-b353-d885-95e8-0ea9c2210994@xen.org>

--DKhSRLXbICi9FruhPkiwYvhFL0Fd7uOr3
Content-Type: multipart/mixed;
 boundary="------------BEE83DB8BBEA45535CB2F7EF"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------BEE83DB8BBEA45535CB2F7EF
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 10.12.20 12:51, Julien Grall wrote:
> 
> 
> On 10/12/2020 07:54, Jürgen Groß wrote:
>> On 09.12.20 17:37, Julien Grall wrote:
>>>>   only the syntax used in this document):
>>>>   * STRING -- an arbitrary 0-delimited byte string.
>>>> @@ -191,6 +192,15 @@ The scheduling granularity of a cpupool.
>>>>   Writing a value is allowed only for cpupools with no cpu assigned
>>>> and if the
>>>>   architecture is supporting different scheduling granularities.
>>>
>>> [...]
>>>
>>>> +
>>>> +static int domain_dir_read(const struct hypfs_entry *entry,
>>>> +                           XEN_GUEST_HANDLE_PARAM(void) uaddr)
>>>> +{
>>>> +    int ret = 0;
>>>> +    const struct domain *d;
>>>> +
>>>> +    for_each_domain ( d )
>>>
>>> This is definitely going to be an issue if you have a lot of domains
>>> running, as Xen is not preemptible.
>>
>> In general this is correct, but in this case I don't think it will
>> be a problem. The execution time for each loop iteration should be
>> rather short (in the microsecond range?), so even with 32000 guests
>> we would stay way below one second.
> 
> The scheduling slices are usually in ms, not seconds (though this will
> depend on your scheduler). It would be unacceptable to me if another
> vCPU could not run for a second because dom0 is trying to list the
> domains via HYPFS.

Okay, I did a test.

The worrying operation is the reading of /domain/ with lots of domains.

"xenhypfs ls /domain" with 500 domains running needed 231 us of real
time for the library call, while "xenhypfs ls /" needed about 70 us.
That works out to about 3 domains per microsecond, i.e. roughly 10 ms
for 30000 domains.
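A quick back-of-the-envelope check of those numbers (shell integer arithmetic only; the variable names are mine):

```shell
# Recompute the projection from the measurement above:
# 231 us for "ls /domain" with 500 domains, ~70 us fixed overhead.
total_us=231; base_us=70; ndom=500
per_domain_ns=$(( (total_us - base_us) * 1000 / ndom ))   # 322 ns/domain, ~3 domains/us
proj_us=$(( per_domain_ns * 30000 / 1000 ))               # 9660 us, i.e. ~10 ms for 30000 domains
echo "per-domain=${per_domain_ns}ns projected-30000=${proj_us}us"
# prints: per-domain=322ns projected-30000=9660us
```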


Juergen

--------------BEE83DB8BBEA45535CB2F7EF
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------BEE83DB8BBEA45535CB2F7EF--

--DKhSRLXbICi9FruhPkiwYvhFL0Fd7uOr3--

--AuAjU2w8QFAuDpLnz5M6PEC2EIASVAMsB
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/SGKEFAwAAAAAACgkQsN6d1ii/Ey8h
WAf8CU0N8nIBIkZBz9JhztleGaMwMpyEtiZ5ssjH1PV7frCN65YW2wFUFaiRzXKsda4H3VxmF+sU
3iBYfHLkAoQVjBKJ2FkjibxJiKt9KJ/i2eHikntlaM+unphRFOM+S0FMZLZY6HnfInRMoYbWrw5I
+GiVdLi24ixQLN9a75f5JKI633S3eJrhP1p3oDk55pOvRWpwgEfTzUZ62LqB6rjWgjg/y+IqNRxv
2gbMxjX8jsW3s8SHFPxYaLw1EbtPfxwNI065FaJpF4CIc3lCEov6yC9fs2iMC6+Pm4bhs0YdejJx
CC4XQihAPgjMJ8jOidt2l7Ms8F7VzrupySVi4lsdag==
=FeCT
-----END PGP SIGNATURE-----

--AuAjU2w8QFAuDpLnz5M6PEC2EIASVAMsB--


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 12:58:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 12:58:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49182.86938 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knLWe-0000Ug-3P; Thu, 10 Dec 2020 12:58:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49182.86938; Thu, 10 Dec 2020 12:58:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knLWd-0000UZ-Vi; Thu, 10 Dec 2020 12:58:27 +0000
Received: by outflank-mailman (input) for mailman id 49182;
 Thu, 10 Dec 2020 12:58:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ykji=FO=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1knLWc-0000UP-Fa
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 12:58:26 +0000
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c2beb07-3afe-4d07-b38a-6e95b861d811;
 Thu, 10 Dec 2020 12:58:25 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id m12so8055290lfo.7
 for <xen-devel@lists.xenproject.org>; Thu, 10 Dec 2020 04:58:25 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id f2sm595273ljc.118.2020.12.10.04.58.23
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 10 Dec 2020 04:58:23 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c2beb07-3afe-4d07-b38a-6e95b861d811
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=ohwkoyODvJw6BcZ33Iln+najISdwGwnci2zjmYTgWbU=;
        b=T/OWrY4Yd1rnYS0alT3/ZO5DD++RxEMUbtUPShKmTtIW9m/V6E25ljoIOEkvwXdQJw
         h9hDm2tH7pzehWl4CKbIZfl9mY661XMf2nCcGb3D8DSrvHYfcbGHxxBSoK6sMl4TP4Rt
         aElWoxOOjjZiH0z0EcRchtpjtuYWdj9JAfkMfoJHdZG8labIwT0Vcg8wOQ0h/G8IYuzC
         XbIgBYKx2wbsRqkrNFRWQu4i0rq8yKkNjEGiPwHGmsk+g8VdJfjtEtxM4oQq++PwOfDR
         GeqsARsuFOey7eUqsKidRCF8nZg7AiArrkmtfNfO0ULcHRQdBoqIl5wQRgaev6VF6BJd
         0SxA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=ohwkoyODvJw6BcZ33Iln+najISdwGwnci2zjmYTgWbU=;
        b=V83MmwCozs8niIqOSiKjwfN+WVU5hU17oIjarTGxxwf5JVW/XoCusG3IFrJElU1r2N
         j6CqodltUN55YRJFdJPRnnF8RQ7XUJAdMCVfFYCUMxPXDNkTB6HSlSV0ueWWrUnt/+Ge
         WQFOUCntSxVmdeNmNlJzNmsVBR6mAa0Oi4N9PDvrrLKBPvt8PLNWwETpMR+Q5AK4XVXX
         /5OLDr12tpOETvjKGKQRS+xcr+VIsxG8xBj5tsCR2SxzS5wdlhooq9a28UiG1yhGkyEG
         z4XWULc0wY39BHz7psgPEcvX/PUoPEwstLAAOZykTvAVjXv3gp8sv4JYaDX25Sj4tS5C
         0RIg==
X-Gm-Message-State: AOAM533lNHvzxTZym/vr0PasWlg1Ni7IAWYzpZRhMQMylfLbReAuEA4J
	I9g2NEaDmcR/F0boR7azRTY=
X-Google-Smtp-Source: ABdhPJxCP+weIO1KUKDIM65V/3MvzqmAhHEEe+b3TXaDEJ6q276HTDGTrw79px3ruqERDNWD/6GU6g==
X-Received: by 2002:a05:6512:481:: with SMTP id v1mr2584455lfq.132.1607605104174;
        Thu, 10 Dec 2020 04:58:24 -0800 (PST)
Subject: Re: [PATCH V3 18/23] xen/dm: Introduce xendevicemodel_set_irq_level
 DM op
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Julien Grall <julien.grall@arm.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, alex.bennee@linaro.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-19-git-send-email-olekstysh@gmail.com>
 <alpine.DEB.2.21.2012091802240.20986@sstabellini-ThinkPad-T480s>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <ca3ddec2-dd93-7b6e-fd65-5be730418200@gmail.com>
Date: Thu, 10 Dec 2020 14:58:17 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2012091802240.20986@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 10.12.20 04:21, Stefano Stabellini wrote:

Hi Stefano

> On Mon, 30 Nov 2020, Oleksandr Tyshchenko wrote:
>> From: Julien Grall <julien.grall@arm.com>
>>
>> This patch adds the ability for the device emulator to notify the
>> other end (some entity running in the guest) using an SPI, and
>> implements the Arm-specific bits for it. The proposed interface
>> allows the emulator to set the logical level of one of a domain's
>> IRQ lines.
>>
>> We can't reuse the existing DM op (xen_dm_op_set_isa_irq_level)
>> to inject an interrupt as the "isa_irq" field is only 8-bit and
>> able to cover IRQ 0 - 255, whereas we need a wider range (0 - 1020).
>>
>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> ---
>> Please note, this is a split/cleanup/hardening of Julien's PoC:
>> "Add support for Guest IO forwarding to a device emulator"
>>
>> ***
>> Please note, I left the interface untouched since there is still
>> an open discussion about what interface to use / what information to
>> pass to the hypervisor. The question is whether we should abstract
>> away the state of the line or not.
>> ***
> Let's start with a simple question: is this going to work with
> virtio-mmio emulation in QEMU that doesn't lower the state of the line
> to end the notification (only calls qemu_set_irq(irq, high))?
>
> See: hw/virtio/virtio-mmio.c:virtio_mmio_update_irq
>
>
> Alex (CC'ed) might be able to confirm whether I am reading the QEMU code
> correctly. Assuming that it is true that QEMU is only raising the level,
> never lowering it, although the emulation is obviously not correct, I
> would rather keep QEMU as is for efficiency reasons, and because we
> don't want to deviate from the common implementation in QEMU.
>
>
> Looking at this patch and at vgic_inject_irq, yes, I think it would
> work as is.
I am not sure whether QEMU lowers the level or not, but in the
virtio-disk backend example we don't set the level to 0.
IIRC there was a discussion about that, from which I took away that
"setting the level to 0 still does nothing on Arm if the IRQ is
edge-triggered".
So it looks like, yes, it would work as is.


>
> So it looks like we are going to end up with an interface that:
>
> - in theory it is modelling the line closely
> - in practice it is only called to "trigger the IRQ"
>
>
> Hence my preference for being explicit about it and just call it
> trigger_irq.

Got it. Should I just rename it while retaining the level parameter?


>
> If we keep the patch as is, should we at least add a comment to document
> the "QEMU style" use model?

Sure, I will document that QEMU only raises the level and never
lowers it, once I have confirmation that this is true.


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 13:04:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 13:04:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49188.86950 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knLco-0001Vm-PI; Thu, 10 Dec 2020 13:04:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49188.86950; Thu, 10 Dec 2020 13:04:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knLco-0001Vf-M6; Thu, 10 Dec 2020 13:04:50 +0000
Received: by outflank-mailman (input) for mailman id 49188;
 Thu, 10 Dec 2020 13:04:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knLcn-0001VX-0y; Thu, 10 Dec 2020 13:04:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knLcm-00074c-Le; Thu, 10 Dec 2020 13:04:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knLcm-0003w6-Dr; Thu, 10 Dec 2020 13:04:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1knLcm-0005es-DO; Thu, 10 Dec 2020 13:04:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/ltSuwyEgmlKF3X3oJFm5jKjr3juCZEvxiISf1EvbnQ=; b=De+jGkec+8MrSAVJ58xOlQ6fyU
	437djacLcSc20pPkj8eBS6spJJ5EPMKUSuppUlJ89Sr6+v+hC7PINWGlPwJKuQU8Cc2hILPrRSc7F
	zZAyH1zLxfnLm8xddeZpycQ0xsgZkyW3+/xs5Cf9YUTfOHfgi8Bj/1dmMS2YApWFOzVM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157366-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157366: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=272a1db63a09087ce3da4cf44ec7b758611ff1ed
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Dec 2020 13:04:48 +0000

flight 157366 ovmf real [real]
flight 157381 ovmf real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157366/
http://logs.test-lab.xenproject.org/osstest/logs/157381/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 272a1db63a09087ce3da4cf44ec7b758611ff1ed
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    1 days
Testing same since   157348  2020-12-09 15:39:39 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Pierre Gondois <Pierre.Gondois@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 272a1db63a09087ce3da4cf44ec7b758611ff1ed
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Nov 13 11:31:01 2020 +0000

    ArmPlatformPkg: Fix cspell reported spelling/wording
    
    The edk2 CI runs the "cspell" spell checker tool. Some words
    are not recognized by the tool, triggering errors.
    This patch modifies some spelling/wording detected by cspell.
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit 061cbbc1115eb7360f2c7627d53d13e35d63cbe3
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Nov 20 10:01:13 2020 +0000

    ArmPlatformPkg: Fix Ecc error 8001 in PrePi
    
    This patch fixes the following Ecc reported error:
    Only capital letters are allowed to be used for #define
    declarations
    
    The "SerialPrint" macro is defined for the PrePi module
    residing in the ArmPlatformPkg. It is never used in the module.
    The macro is thus removed.
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit 2dfd81aaf50ca2bd1e2d33ed5687620de90810ce
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Nov 6 09:47:47 2020 +0000

    ArmPlatformPkg: Fix Ecc error 10006 in ArmPlatformPkg.dsc
    
    This patch fixes the following Ecc reported error:
    There should be no unnecessary inclusion of library
    classes in the INF file
    
    This comes with the additional information:
    "The Library Class [TimeBaseLib] is not used
    in any platform"
    "The Library Class [PL011UartClockLib] is not used
    in any platform"
    "The Library Class [PL011UartLib] is not used
    in any platform"
    
    Indeed, the PL011SerialPortLib module requires the
    PL011UartClockLib and PL011UartLib libraries.
    The PL031RealTimeClockLib module requires the TimeBaseLib
    library.
    ArmPlatformPkg/ArmPlatformPkg.dsc builds the two modules,
    but doesn't build the required libraries. This patch adds
    the missing libraries to the [LibraryClasses.common] section.
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit 42bec8c8104c9db4891dfd1b208032c9c413d861
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 15:33:12 2020 +0100

    ArmPlatformPkg: Fix Ecc error 10014 in SP805WatchdogDxe
    
    This patch fixes the following Ecc reported error:
    No used module files found
    
    The source file
    [ArmPlatformPkg/Drivers/SP805WatchdogDxe/SP805Watchdog.h]
    exists in the module directory but is not described
    in the INF file.
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit 2e0cbfcbed96505953ef09fcfb72d4ea83cc8df2
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 15:32:42 2020 +0100

    ArmPlatformPkg: Fix Ecc error 10014 in PL061GpioDxe
    
    This patch fixes the following Ecc reported error:
    No used module files found
    
    The source file
    [ArmPlatformPkg/Drivers/PL061GpioDxe/PL061Gpio.h]
    exists in the module directory but is not described
    in the INF file.
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit a36a0f1d81a2502a922617cf99be0bb81de2f57a
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 15:32:26 2020 +0100

    ArmPlatformPkg: Fix Ecc error 10014 in LcdGraphicsOutputDxe
    
    This patch fixes the following Ecc reported error:
    No used module files found
    
    The source file
    [ArmPlatformPkg/Drivers/LcdGraphicsOutputDxe/LcdGraphicsOutputDxe.h]
    exists in the module directory but is not described
    in the INF file.
    
    Files in [Sources.common] are also alphabetically re-ordered.
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit c5d970a01e76c1a20f6bb009b32e479ad2444548
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 15:18:04 2020 +0100

    ArmPlatformPkg: Fix Ecc error 10016 in LcdPlatformNullLib
    
    This patch fixes the following Ecc reported error:
    Module file has FILE_GUID collision with other
    module file
    
    The two .inf files with clashing GUID are:
    edk2\ArmPlatformPkg\PrePeiCore\PrePeiCoreMPCore.inf
    edk2\ArmPlatformPkg\Library\LcdPlatformNullLib\LcdPlatformNullLib.inf
    
    The PrePeiCoreMPCore module was imported in 2011 and the
    LcdPlatformNullLib module was created in 2017. The
    PrePeiCoreMPCore module takes precedence.
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit 746dda63b2d612a2ad9e0b4c05722920586d2e60
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 14:37:14 2020 +0100

    ArmPlatformPkg: Fix Ecc error 10016 in PrePi
    
    This patch fixes the following Ecc reported error:
    Module file has FILE_GUID collision with other
    module file
    
    The two .inf files with clashing GUID are:
    edk2\ArmPlatformPkg\PrePi\PeiUniCore.inf
    edk2\ArmPlatformPkg\PrePi\PeiMPCore.inf
    
    Both files seem to have been imported from the previous
    svn repository at the same time.
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit 28978df0bdafce1d26ff337fd67ee6c3a5b3876e
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 14:36:19 2020 +0100

    ArmPlatformPkg: Fix Ecc error 5007 in PL031RealTimeClockLib
    
    This patch fixes the following Ecc reported error:
    There should be no initialization of a variable as
    part of its declaration
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit 1485e8bbc86e9a7e1954cfe5697fbd45d8e3b04e
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 14:36:01 2020 +0100

    ArmPlatformPkg: Fix Ecc error 5007 in PL061GpioDxe
    
    This patch fixes the following Ecc reported error:
    There should be no initialization of a variable as
    part of its declaration
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit 4c7e107810cacd1dbd4c6f7d6d4d22e3de2f8db1
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 14:35:36 2020 +0100

    ArmPlatformPkg: Fix Ecc error 5007 in NorFlashDxe
    
    This patch fixes the following Ecc reported error:
    There should be no initialization of a variable as
    part of its declaration
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit eb97f13839fd64ce3e4ff9dd39ea9950db48207d
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 14:35:07 2020 +0100

    ArmPlatformPkg: Fix Ecc error 5007 in LcdGraphicsOutputDxe
    
    This patch fixes the following Ecc reported error:
    There should be no initialization of a variable as
    part of its declaration
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit d315bd2286cde306f1ef5256026038e610505cca
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 14:32:40 2020 +0100

    ArmPlatformPkg: Fix Ecc error 3002 in PL061GpioDxe
    
    This patch fixes the following Ecc reported error:
    Non-Boolean comparisons should use a compare operator
    (==, !=, >, < >=, <=)
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit ee78edceca89057ab9854f7e5070391a8229ece4
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 14:31:50 2020 +0100

    ArmPlatformPkg: Fix Ecc error 3002 in PL011UartLib
    
    This patch fixes the following Ecc reported error:
    Non-Boolean comparisons should use a compare operator
    (==, !=, >, < >=, <=)
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>

commit dd917bae85396055ff5d6ea760bff3702d154101
Author: Pierre Gondois <Pierre.Gondois@arm.com>
Date:   Fri Oct 23 13:31:40 2020 +0100

    ArmPlatformPkg: Fix Ecc error 3001 in NorFlashDxe
    
    This patch fixes the following Ecc reported error:
    Boolean values and variable type BOOLEAN should not use
    explicit comparisons to TRUE or FALSE
    
    Signed-off-by: Pierre Gondois <Pierre.Gondois@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 13:09:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 13:09:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49199.86965 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knLh9-0001ip-Je; Thu, 10 Dec 2020 13:09:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49199.86965; Thu, 10 Dec 2020 13:09:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knLh9-0001ii-FW; Thu, 10 Dec 2020 13:09:19 +0000
Received: by outflank-mailman (input) for mailman id 49199;
 Thu, 10 Dec 2020 13:09:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>) id 1knLh9-0001id-53
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 13:09:19 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1knLh8-0007AJ-9H; Thu, 10 Dec 2020 13:09:18 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1knLh7-0007Rm-Sx; Thu, 10 Dec 2020 13:09:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Mime-Version:Content-Type:
	References:In-Reply-To:Date:Cc:To:From:Subject:Message-ID;
	bh=keotjG6VPy7Ty3zMpaGQu/RhUQ507IS5vmtwzCQ8to8=; b=EhfDU7yfo5+hggnBpFZR9MLhvV
	nltuBwp2q0vWmO+16nFH0HJjv2F1p70hvqCr5o6FAx6VbhgJmfDPdgf1WrwSuOdbistBm+HNz1tU/
	B1EjeH6IhB1czGwhCOBW6noawpxE5IO5iYIQqGtAZC9EkISewMeVXR7byikgU1Bs0QlA=;
Message-ID: <c6bcaecf71f9e51bdac15c7f97c8ce8460bef306.camel@xen.org>
Subject: Re: [PATCH] x86/HVM: refine when to send mapcache invalidation
 request to qemu
From: Hongyan Xia <hx242@xen.org>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Roger
	Pau =?ISO-8859-1?Q?Monn=E9?=
	 <roger.pau@citrix.com>, Paul Durrant <paul@xen.org>, "olekstysh@gmail.com"
	 <olekstysh@gmail.com>, George Dunlap <George.Dunlap@eu.citrix.com>
Date: Thu, 10 Dec 2020 13:09:15 +0000
In-Reply-To: <f92f62bf-2f8d-34db-4be5-d3e6a4b9d580@suse.com>
References: <f92f62bf-2f8d-34db-4be5-d3e6a4b9d580@suse.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit

I came across the same issue when QEMU was holding an extra reference
to a page removed from p2m, via XENMEM_add_to_physmap. Please tell me
if I am talking nonsense, since my knowledge of QEMU invalidation is
limited.

On Mon, 2020-09-28 at 12:44 +0200, Jan Beulich wrote:
> For one it was wrong to send the request only upon a completed
> hypercall: Even if only part of it completed before getting preempted,
> invalidation ought to occur. Therefore fold the two return statements.
> 
> And then XENMEM_decrease_reservation isn't the only means by which
> pages can get removed from a guest, yet all removals ought to be
> signaled to qemu. Put setting of the flag into the central
> p2m_remove_page() underlying all respective hypercalls.

Sounds good. I know this still does not catch everything, but it is at
least a nice improvement over what we had before.

> 
> Plus finally there's no point sending the request for the local domain
> when the domain acted upon is a different one. If anything that
> domain's qemu's mapcache may need invalidating, but it's unclear how
> useful this would be: That remote domain may not execute hypercalls at
> all, and hence may never make it to the point where the request
> actually gets issued. I guess the assumption is that such manipulation
> is not supposed to happen anymore once the guest has been started?

I may still want to set the invalidation signal to true even if the
domain acted on is not the local domain. I know the remote domain may
never reach the point where the invalidate is issued, but it seems to
me that the problem is not whether we should set the signal, but
whether we can change where the signal is checked so that the
invalidation is reliably triggered; the latter can be done in a future
patch.

> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> Putting the check in guest_physmap_remove_page() might also suffice,
> but then a separate is_hvm_domain() would need adding again.
> 
> --- a/xen/arch/x86/hvm/hypercall.c
> +++ b/xen/arch/x86/hvm/hypercall.c
> @@ -31,7 +31,6 @@
>  
>  static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>  {
> -    const struct vcpu *curr = current;
>      long rc;
>  
>      switch ( cmd & MEMOP_CMD_MASK )
> @@ -41,14 +40,11 @@ static long hvm_memory_op(int cmd, XEN_G
>          return -ENOSYS;
>      }
>  
> -    if ( !curr->hcall_compat )
> +    if ( !current->hcall_compat )
>          rc = do_memory_op(cmd, arg);
>      else
>          rc = compat_memory_op(cmd, arg);
>  
> -    if ( (cmd & MEMOP_CMD_MASK) == XENMEM_decrease_reservation )
> -        curr->domain->arch.hvm.qemu_mapcache_invalidate = true;
> -
>      return rc;
>  }
>  
> @@ -326,14 +322,11 @@ int hvm_hypercall(struct cpu_user_regs *
>  
>      HVM_DBG_LOG(DBG_LEVEL_HCALL, "hcall%lu -> %lx", eax, regs->rax);
>  
> -    if ( curr->hcall_preempted )
> -        return HVM_HCALL_preempted;
> -
>      if ( unlikely(currd->arch.hvm.qemu_mapcache_invalidate) &&
>          test_and_clear_bool(currd->arch.hvm.qemu_mapcache_invalidate) )
>          send_invalidate_req();
>  
> -    return HVM_HCALL_completed;
> +    return curr->hcall_preempted ? HVM_HCALL_preempted : HVM_HCALL_completed;
>  }
>  
>  /*
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -812,6 +812,9 @@ p2m_remove_page(struct p2m_domain *p2m,
>          }
>      }
>  
> +    if ( p2m->domain == current->domain )
> +        p2m->domain->arch.hvm.qemu_mapcache_invalidate = true;
> +

So my comment above makes me want to remove the condition.
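To make the difference concrete, here is a tiny self-contained C model of
the two flag-setting policies (the struct and function names below are
hypothetical stand-ins for illustration, not the actual Xen code): with
the "p2m->domain == current->domain" condition, a removal performed on
behalf of a remote domain never sets that domain's flag; without it, it
always does.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical, heavily simplified stand-in for struct domain. */
struct domain {
    bool qemu_mapcache_invalidate;
};

/* Model of the flag-setting added to p2m_remove_page():
 * 'only_local' mirrors the "p2m->domain == current->domain"
 * condition in the patch; dropping it means the flag is set
 * regardless of which domain performs the removal. */
static void remove_page_set_flag(struct domain *target,
                                 struct domain *curr,
                                 bool only_local)
{
    if ( !only_local || target == curr )
        target->qemu_mapcache_invalidate = true;
}
```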

>      return p2m_set_entry(p2m, gfn, INVALID_MFN, page_order, p2m_invalid,
>                           p2m->default_access);
>  }
> 

Hongyan



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 13:17:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 13:17:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49205.86977 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knLpH-0002kY-ER; Thu, 10 Dec 2020 13:17:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49205.86977; Thu, 10 Dec 2020 13:17:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knLpH-0002kR-BT; Thu, 10 Dec 2020 13:17:43 +0000
Received: by outflank-mailman (input) for mailman id 49205;
 Thu, 10 Dec 2020 13:17:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1knLpF-0002kM-WC
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 13:17:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1knLpE-0007La-HA; Thu, 10 Dec 2020 13:17:40 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1knLpE-00086w-7w; Thu, 10 Dec 2020 13:17:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=fCgGDOSwt7p/h6DxE1W0vBRr6YFicucTeTT1d4kB45k=; b=W8WLuYcy5i+A/goZtaE3F952eL
	sw8heASU8yYCa8EGKvOmGGb7DOrYOAO/I6F8uOMT0hmdD/Tj+4MhfxaJuQYMEYuXBmSv3zbU00ZtP
	+kY5gqP5BaeXzjHovJi4h2l1TuP9xD2IQnvmAcuKBhUba0+DQIDTCSoAX8nD9R5e3Sp4=;
Subject: Re: [PATCH V3 15/23] xen/arm: Stick around in
 leave_hypervisor_to_guest until I/O has completed
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Oleksandr Tyshchenko <olekstysh@gmail.com>,
 xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <julien.grall@arm.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-16-git-send-email-olekstysh@gmail.com>
 <alpine.DEB.2.21.2012091432450.20986@sstabellini-ThinkPad-T480s>
 <alpine.DEB.2.21.2012091521480.20986@sstabellini-ThinkPad-T480s>
 <52799b99-6405-03f4-2a46-3a0a4aac597f@xen.org>
 <alpine.DEB.2.21.2012091745550.20986@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <6053ad08-95ce-fb31-122d-450df21a20f7@xen.org>
Date: Thu, 10 Dec 2020 13:17:38 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2012091745550.20986@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 10/12/2020 02:30, Stefano Stabellini wrote:
>>> I am also wondering if there is any benefit in calling wait_for_io()
>>> earlier, maybe from try_handle_mmio if IO_RETRY?
>>
>> wait_for_io() may end up descheduling the vCPU. I would like to avoid this
>> happening in the middle of the I/O emulation, because it needs to happen
>> without any locks held.
>>
>> I don't think there are locks involved today, but the deeper in the call stack
>> the scheduling happens, the more chance we may screw up in the future.
>>
>> However...
>>
>>> leave_hypervisor_to_guest is very late for that.
>>
>> ... I am not sure what the problem is with that. The IOREQ server will be
>> notified of the pending I/O as soon as try_handle_mmio() puts the I/O in
>> the shared page.
>>
>> If the IOREQ server is running on a different pCPU, then it is possible
>> that the I/O has completed before we reach leave_hypervisor_to_guest(). In
>> that case, we would not have to wait for the I/O.
> 
> Yeah, I was thinking about that too. Actually it could be faster
> this way if we end up being lucky.
> 
> The reason for moving it earlier would be that by the time
> leave_hypervisor_to_guest is called "Xen" has already decided to
> continue running this particular vcpu. If we called wait_for_io()
> earlier, we would give important information to the scheduler before any
> decision is made.

I don't understand this. Xen preemption is voluntary, so the scheduler 
is not going to run unless requested.

wait_for_io() is a preemption point. So if you call it, the vCPU may
get descheduled at that point.

Why would we want to do this? What's our benefit here?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 13:22:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 13:22:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49211.86989 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knLtP-0003iS-0j; Thu, 10 Dec 2020 13:21:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49211.86989; Thu, 10 Dec 2020 13:21:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knLtO-0003iL-Tm; Thu, 10 Dec 2020 13:21:58 +0000
Received: by outflank-mailman (input) for mailman id 49211;
 Thu, 10 Dec 2020 13:21:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ykji=FO=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1knLtM-0003iG-Vw
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 13:21:57 +0000
Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 532f62c1-e92f-48b9-8877-7f66ac2abf49;
 Thu, 10 Dec 2020 13:21:55 +0000 (UTC)
Received: by mail-lf1-x142.google.com with SMTP id m25so8140879lfc.11
 for <xen-devel@lists.xenproject.org>; Thu, 10 Dec 2020 05:21:55 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id y12sm482995lfy.300.2020.12.10.05.21.53
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 10 Dec 2020 05:21:54 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 532f62c1-e92f-48b9-8877-7f66ac2abf49
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=NOedhVT3rvhdHXuGqVxU37/Xsir0ZN0vrpQc+/P9xm4=;
        b=HIAbXnrYpPbBzsPh8PCjdG//01sIMd72psTDBVAXPZmmMaNmcUOEPnqjiqbYwTjacS
         wcPReKzQb4FyURa6ZWrdPZpKHkdfquk8p1Y8PETCLAr5gXBEpx133DC42Bvq8iCeJqyM
         J0JWZIPGXWALeZWk4wPFGJE5jeOPdWBqTcsVriMJw0OAObJvNvSkoyMC3CumZl/jojH4
         MQ3so+qLcQKrsnyj2RTBJ+cY0NdFoBaYHRnMPQKCjiZd2bteW8ZDtI2/tSitEm+7XsZd
         YN0F8SqVuS+HHlKzRrVDDolcc/pX2nCRhqLc1kOamc8XCQyhZCHX9o9SPFn1cg7FNs4n
         fT+w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=NOedhVT3rvhdHXuGqVxU37/Xsir0ZN0vrpQc+/P9xm4=;
        b=KC0+1rzC0xIqKF7f0a03Q5e0IlxUqlWFJWYaloJZXn7gwp/Nt0ZgshRKWJrsXAaRw7
         ZeH/x5mIu8ZgQue0SCLzYmCsexv9/zE1X0lLKYF/vptm5GL3G2wysi0WIVTnxQ7hF9sQ
         uqW2oBTLobR1XLKkA++xphhMZyz6N+UILOJD/4roJgFdH9tEE9fCXE6AnZwrQlao+eef
         I0HaqQjaU40tSulfe6pNBrv2kYvlFiZ20TtblwzESGlrtVKXiYcsLWqCeBSS+shOnfuB
         fR1yxTVDf/C701BQQIlHoU0kF7h8SD8/393auCt9bWtmln8SaK6NOY1rtUGD8VoNDwBC
         TGZg==
X-Gm-Message-State: AOAM531RlTrw5CGYIQp4cv1dQ3wOf3KLE0yCvhBGt7f+u5F2HqW+QRgB
	DDptNBqJqT455sFdTijhUhA=
X-Google-Smtp-Source: ABdhPJx5/4mH2wr+XsZl8BCvpjmawibTtVzu0mH/VQIu3KglLmkLGls9nL7kqH1M8QbsvZL0GJGHPA==
X-Received: by 2002:a19:a57:: with SMTP id 84mr2749675lfk.327.1607606514683;
        Thu, 10 Dec 2020 05:21:54 -0800 (PST)
Subject: Re: [PATCH V3 15/23] xen/arm: Stick around in
 leave_hypervisor_to_guest until I/O has completed
To: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <julien.grall@arm.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-16-git-send-email-olekstysh@gmail.com>
 <alpine.DEB.2.21.2012091432450.20986@sstabellini-ThinkPad-T480s>
 <alpine.DEB.2.21.2012091521480.20986@sstabellini-ThinkPad-T480s>
 <52799b99-6405-03f4-2a46-3a0a4aac597f@xen.org>
 <alpine.DEB.2.21.2012091745550.20986@sstabellini-ThinkPad-T480s>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <88aea7f9-43d2-f67b-b5a8-c0b6204eac58@gmail.com>
Date: Thu, 10 Dec 2020 15:21:53 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2012091745550.20986@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 10.12.20 04:30, Stefano Stabellini wrote:

Hi Julien, Stefano

> On Wed, 9 Dec 2020, Julien Grall wrote:
>> On 09/12/2020 23:35, Stefano Stabellini wrote:
>>> On Wed, 9 Dec 2020, Stefano Stabellini wrote:
>>>> On Mon, 30 Nov 2020, Oleksandr Tyshchenko wrote:
>>>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>>>
>>>>> This patch adds proper handling of return value of
>>>>> vcpu_ioreq_handle_completion() which involves using a loop
>>>>> in leave_hypervisor_to_guest().
>>>>>
>>>>> The reason to use an unbounded loop here is the fact that a vCPU
>>>>> shouldn't continue until an I/O has completed. In Xen's case, if an I/O
>>>>> never completes then it most likely means that something went horribly
>>>>> wrong with the Device Emulator, and it is most likely not safe to
>>>>> continue. So letting the vCPU spin forever if the I/O never completes
>>>>> is safer than letting it continue and leaving the guest in an unclear
>>>>> state, and is the best we can do for now.
>>>>>
>>>>> This wouldn't be an issue for Xen, as do_softirq() would be called on
>>>>> every loop iteration. In case of failure, the guest will crash and the
>>>>> vCPU will be unscheduled.
>>>> Imagine that we have two guests: one that requires an ioreq server and
>>>> one that doesn't. If I am not mistaken this loop could potentially spin
>>>> forever on a pcpu, thus preventing any other guest being scheduled, even
>>>> if the other guest doesn't need any ioreq servers.
>>>>
>>>>
>>>> My other concern is that we are busy-looping. Could we call something
>>>> like wfi() or do_idle() instead? The ioreq server's event notification
>>>> of completion should wake us up, shouldn't it?
>>>>
>>>> Following this line of thinking, I am wondering if instead of the
>>>> busy-loop we should call vcpu_block_unless_event_pending(current) in
>>>> try_handle_mmio if IO_RETRY. Then when the emulation is done, QEMU (or
>>>> equivalent) calls xenevtchn_notify which ends up waking up the domU
>>>> vcpu. Would that work?
>>> I have now read Julien's reply: we are already doing something similar to
>>> what I suggested with the following call chain:
>>>
>>> check_for_vcpu_work -> vcpu_ioreq_handle_completion -> wait_for_io ->
>>> wait_on_xen_event_channel
>>>
>>> So the busy-loop here is only a safety belt in case of a spurious
>>> wake-up, in which case we are going to call check_for_vcpu_work again,
>>> potentially causing a guest reschedule.
>>>
>>> Then, this is fine and addresses both my concerns. Maybe let's add a note
>>> in the commit message about it.
>> Damn, I hit the "send" button just a second before seeing your reply. :/ Oh
>> well. I suggested the same because I have seen the same question multiple
>> times.


I will update the commit description and probably the comment in the code.


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 13:30:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 13:30:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49218.87001 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knM1T-0004f2-SF; Thu, 10 Dec 2020 13:30:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49218.87001; Thu, 10 Dec 2020 13:30:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knM1T-0004ev-P5; Thu, 10 Dec 2020 13:30:19 +0000
Received: by outflank-mailman (input) for mailman id 49218;
 Thu, 10 Dec 2020 13:30:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=B/l5=FO=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1knM1R-0004eq-UT
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 13:30:18 +0000
Received: from mail-lj1-x244.google.com (unknown [2a00:1450:4864:20::244])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b5fdf29a-8cd4-4d23-8ea8-696ca9481606;
 Thu, 10 Dec 2020 13:30:17 +0000 (UTC)
Received: by mail-lj1-x244.google.com with SMTP id f11so6663661ljm.8
 for <xen-devel@lists.xenproject.org>; Thu, 10 Dec 2020 05:30:16 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b5fdf29a-8cd4-4d23-8ea8-696ca9481606
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=6Y1/pCcVVUBfv3HtE9L8Hpubgn3eVeV8xQvQ2V1/F7c=;
        b=pZ+LO5QJlGoNVILFpKMp61lgJEapSo4Muukuh4BzipvJOvo7UPK7aH63a8QDnxRBRu
         8gdyFlum+HyecIZxcXR/tVzeSeV2oVn0hHElChzBWdkeHvdlIW8Uqs2ZMtiXeLcKboNX
         Y9pw53BAa+K9bojIdDQcBL5D0DZrr/6UGIexYlLoGWBd0WiRJ1UPLbUEQubEp1N4VijA
         fsy8eWPZrsVKpwlQrvd2Ah28UmLWQzk3zf+OVAsCzO6vSxuhAZzJzNgXNiE6ul7N25ON
         5JOB4ki6qLEC3XbQz5lvH87lt82jzoD3/ApNDoff3s3m+a8G9rVKIfRbbg3mk3mUz+CY
         0fwQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=6Y1/pCcVVUBfv3HtE9L8Hpubgn3eVeV8xQvQ2V1/F7c=;
        b=bIwTV39hgskpS0PTHIU1D8CaSr+iz64D87TtiKptGY0F6Ky5UgLJCUgsRoH71r3gDt
         pV9HLu4Y3ovUjbp2PVWWibrwhfHOCDGTAxRHaLLBdb5Stu6MAzMcqz0I5WoenBD3U/jA
         xjhufQaMIiqwvl84+auJx+2V/VumRvxVGgu8iVaH0Q8cmbbHN0/oOkN87JRx8rYBu+w2
         5dQR8mgD6ECuTjSLfIOqQsJeZCYXTHNl+PpK+AV3xgOU2lUoWx0ay7K1A+dcBFm9xKIe
         S9XZmSo6NMC6iNU//T4RKSfNejKHfRceaRR9LgO5JZplqBC/y6BnaJHhOLwPFYr8xRy7
         f3yw==
X-Gm-Message-State: AOAM533UNZcdzoTt1v/dG4cXXLQ/7Hbr0+cUKYeZznkT78A7pPQ8n+rM
	R+Fpuu0MWmha/bOlFEm/S1pgDpCMFvUu4j1e8Tg=
X-Google-Smtp-Source: ABdhPJz6/LInmyWBeicZXN1hFtBzjWA/tL2UzztmxE+KuDFoSVDLuD2xYTfWuigL789vYz9/QvF9K1XfZxYfUdtq60w=
X-Received: by 2002:a2e:1657:: with SMTP id 23mr3024716ljw.12.1607607015736;
 Thu, 10 Dec 2020 05:30:15 -0800 (PST)
MIME-Version: 1.0
References: <20201207133024.16621-1-jgross@suse.com> <20201207133024.16621-3-jgross@suse.com>
 <CAKf6xpuqdY=TctOjNsnTTexeBpkV+HMkOHFsAd4vxUudBpxizA@mail.gmail.com>
 <72bc4417-076c-78f0-9c7e-5a9c95e79fb2@suse.com> <20201210111454.dxykvyktzwr3fjyk@Air-de-Roger>
 <7425aed6-ff6f-873a-b629-b9c7058e9b13@suse.com>
In-Reply-To: <7425aed6-ff6f-873a-b629-b9c7058e9b13@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Thu, 10 Dec 2020 08:30:04 -0500
Message-ID: <CAKf6xpvxLiBfWKUecbbWW4DZr-gcPeo5PADtiYzwPft8NQ2aeA@mail.gmail.com>
Subject: Re: [PATCH 2/2] xen: don't use page->lru for ZONE_DEVICE memory
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	xen-devel <xen-devel@lists.xenproject.org>, open list <linux-kernel@vger.kernel.org>, 
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, Dec 10, 2020 at 6:40 AM Jürgen Groß <jgross@suse.com> wrote:
>
> > On 10.12.20 12:14, Roger Pau Monné wrote:
> > > On Tue, Dec 08, 2020 at 07:45:00AM +0100, Jürgen Groß wrote:
> >> On 07.12.20 21:48, Jason Andryuk wrote:
> >>> On Mon, Dec 7, 2020 at 8:30 AM Juergen Gross <jgross@suse.com> wrote:
> >>>>
> >>>> Commit 9e2369c06c8a18 ("xen: add helpers to allocate unpopulated
> >>>> memory") introduced usage of ZONE_DEVICE memory for foreign memory
> >>>> mappings.
> >>>>
> >>>> Unfortunately this collides with using page->lru for Xen backend
> >>>> private page caches.
> >>>>
> >>>> Fix that by using page->zone_device_data instead.
> >>>>
> >>>> Fixes: 9e2369c06c8a18 ("xen: add helpers to allocate unpopulated memory")
> >>>> Signed-off-by: Juergen Gross <jgross@suse.com>
> >>>
> >>> Would it make sense to add BUG_ON(is_zone_device_page(page)) and the
> >>> opposite as appropriate to cache_enq?
> >>
> >> No, I don't think so. At least in the CONFIG_ZONE_DEVICE case the
> >> initial list in a PV dom0 is populated from extra memory (basically
> >> the same, but not marked as zone device memory explicitly).
> >
> > I assume it's fine for us to then use page->zone_device_data even if
> > the page is not explicitly marked as ZONE_DEVICE memory?
>
> I think so, yes, as we are the owner of that page and we were fine using
> lru, too.

I think memremap_pages() or devm_memremap_pages() (which calls
memremap_pages()) is how you mark memory as ZONE_DEVICE, i.e. such pages
are explicitly marked.

memremap_pages
  memmap_init_zone_device (with ZONE_DEVICE)
    __init_single_page
      set_page_links
        set_page_zone

grep only finds a few uses of ZONE_DEVICE in the whole tree.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 13:37:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 13:37:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49224.87013 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knM8Z-00057z-LP; Thu, 10 Dec 2020 13:37:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49224.87013; Thu, 10 Dec 2020 13:37:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knM8Z-00057s-I8; Thu, 10 Dec 2020 13:37:39 +0000
Received: by outflank-mailman (input) for mailman id 49224;
 Thu, 10 Dec 2020 13:37:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wUnW=FO=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1knM8Y-00057n-Gg
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 13:37:38 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b0ae76ee-870e-4523-a481-246850d78346;
 Thu, 10 Dec 2020 13:37:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 821CCAB91;
 Thu, 10 Dec 2020 13:37:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b0ae76ee-870e-4523-a481-246850d78346
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607607455; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ThRX3I6LrR7+gLMCPbddJe3G47KxrjgR4tCww7gTtOk=;
	b=qspkVmk9Q8P9qN8sn8iM5nTnbmtjQxBSDfLbVsWdBC8fgbpPPzHXn2g5YFYGzbIimfrhzb
	MlSAyY/XKb8LYlPuecec3StsbCWZCwONNW19L+uvpJBv/TsByEmGzk/qP6+7Ii3DljbtOw
	Cd7kbHn9w2mN9Q4M1+eAm55c+QW7Z6k=
Subject: Re: [PATCH] x86/HVM: refine when to send mapcache invalidation
 request to qemu
To: Hongyan Xia <hx242@xen.org>, Paul Durrant <paul@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "olekstysh@gmail.com" <olekstysh@gmail.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <f92f62bf-2f8d-34db-4be5-d3e6a4b9d580@suse.com>
 <c6bcaecf71f9e51bdac15c7f97c8ce8460bef306.camel@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d522f01e-af5f-fc65-2888-2573dbcefcf5@suse.com>
Date: Thu, 10 Dec 2020 14:37:33 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <c6bcaecf71f9e51bdac15c7f97c8ce8460bef306.camel@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 10.12.2020 14:09, Hongyan Xia wrote:
> On Mon, 2020-09-28 at 12:44 +0200, Jan Beulich wrote:
>> Plus finally there's no point sending the request for the local domain
>> when the domain acted upon is a different one. If anything that domain's
>> qemu's mapcache may need invalidating, but it's unclear how useful this
>> would be: That remote domain may not execute hypercalls at all, and
>> hence may never make it to the point where the request actually gets
>> issued. I guess the assumption is that such manipulation is not supposed
>> to happen anymore once the guest has been started?
> 
> I may still want to set the invalidation signal to true even if the
> domain acted on is not the local domain. I know the remote domain may
> never reach the point to issue the invalidate, but it sounds to me that
> the problem is not whether we should set the signal but whether we can
> change where the signal is checked to make sure the point of issue can
> be reliably triggered, and the latter can be done in a future patch.

One of Paul's replies was quite helpful here: The main thing to
worry about is for the vCPU to not continue running before the
invalidation request was signaled (or else, aiui, qemu may serve
a subsequent emulation request by the guest incorrectly, because
of using the stale mapping). Hence I believe for a non-paused
guest remote operations simply cannot be allowed when they may
lead to the need for invalidation. Therefore yes, if we assume
the guest is paused in such cases, we could drop the "is current"
check, but we'd then still need to arrange for actual signaling
before the guest gets to run again. I wonder whether
handle_hvm_io_completion() (or its caller, hvm_do_resume(),
right after that other call) wouldn't be a good place to do so.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 13:38:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 13:38:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49229.87024 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knM9F-0005Ff-UB; Thu, 10 Dec 2020 13:38:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49229.87024; Thu, 10 Dec 2020 13:38:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knM9F-0005FY-R3; Thu, 10 Dec 2020 13:38:21 +0000
Received: by outflank-mailman (input) for mailman id 49229;
 Thu, 10 Dec 2020 13:38:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1knM9F-0005F1-A1
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 13:38:21 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1knM98-0007ky-6h; Thu, 10 Dec 2020 13:38:14 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1knM97-0001C1-UQ; Thu, 10 Dec 2020 13:38:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=aYqTd/WChvNGXDV0uaxGOhHg4JZtfhldey1GEQ5D6OM=; b=b412AEuSV52OKxlkX+/14/QlJc
	IF5HXqrP6SF3gRzg32vfgmSf0rialgpfXgg8c8D0KbCURQu9mHf+bSfQhXRFamkHGR702dFGGcCFR
	uVwUxBMfeXsEaJtIcoK2dgkxMxDEaXNxKvij6oqrHM4wQNsuceSA3TBtyNm6eQHRXAZg=;
Subject: Re: [PATCH V3 18/23] xen/dm: Introduce xendevicemodel_set_irq_level
 DM op
To: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: xen-devel@lists.xenproject.org, Julien Grall <julien.grall@arm.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, alex.bennee@linaro.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-19-git-send-email-olekstysh@gmail.com>
 <alpine.DEB.2.21.2012091802240.20986@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <d02e00e3-655f-9378-1d95-0e7895d943f4@xen.org>
Date: Thu, 10 Dec 2020 13:38:11 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2012091802240.20986@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 10/12/2020 02:21, Stefano Stabellini wrote:
> On Mon, 30 Nov 2020, Oleksandr Tyshchenko wrote:
>> From: Julien Grall <julien.grall@arm.com>
>>
>> This patch adds the ability for the device emulator to notify the
>> other end (some entity running in the guest) using an SPI and
>> implements the Arm-specific bits for it. The proposed interface
>> allows the emulator to set the logical level of one of a domain's
>> IRQ lines.
>>
>> We can't reuse the existing DM op (xen_dm_op_set_isa_irq_level)
>> to inject an interrupt, as the "isa_irq" field is only 8 bits wide
>> and can only cover IRQs 0 - 255, whereas we need a wider range (0 - 1020).
>>
>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> ---
>> Please note, this is a split/cleanup/hardening of Julien's PoC:
>> "Add support for Guest IO forwarding to a device emulator"
>>
>> ***
>> Please note, I left the interface untouched since there is still
>> an open discussion about what interface to use/what information to
>> pass to the hypervisor. The open question is whether we should
>> abstract away the state of the line or not.
>> ***
> 
> Let's start with a simple question: is this going to work with
> virtio-mmio emulation in QEMU that doesn't lower the state of the line
> to end the notification (only calls qemu_set_irq(irq, high))?
> 
> See: hw/virtio/virtio-mmio.c:virtio_mmio_update_irq

Hmmm my version of QEMU is using:

level = (qatomic_read(&vdev->isr) != 0);
trace_virtio_mmio_setting_irq(level);
qemu_set_irq(proxy->irq, level);

So QEMU will raise/lower the interrupt based on whether there are 
still pending interrupts.

> 
> 
> Alex (CC'ed) might be able to confirm whether I am reading the QEMU code
> correctly. Assuming that it is true that QEMU is only raising the level,
> never lowering it, although the emulation is obviously not correct, I
> would rather keep QEMU as is for efficiency reasons, and because we
> don't want to deviate from the common implementation in QEMU.
> 
> 
> Looking at this patch and at vgic_inject_irq, yes, I think it would
> work as is.

Our implementation of vgic_inject_irq() is completely bogus as soon as 
you deal with level interrupts. We have been getting away with it so far 
because there are not many fully emulated level interrupts (AFAIK this 
would only be the pl011). In fact, we carry a gross hack in the emulation 
to handle them.

In the case of a level interrupt, we should keep injecting the interrupt 
into the guest until the line is lowered (e.g. qemu_set_irq(irq, 0), 
assuming active-high).

> 
> 
> So it looks like we are going to end up with an interface that:
> 
> - in theory it is modelling the line closely

For level interrupts we need to know whether the line is low or high. I 
am struggling to see how this would work if we consider the variable as 
a "trigger".

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 13:48:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 13:48:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49239.87043 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMIr-0006MU-3P; Thu, 10 Dec 2020 13:48:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49239.87043; Thu, 10 Dec 2020 13:48:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMIr-0006MN-0J; Thu, 10 Dec 2020 13:48:17 +0000
Received: by outflank-mailman (input) for mailman id 49239;
 Thu, 10 Dec 2020 13:48:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nlUt=FO=redhat.com=marcandre.lureau@srs-us1.protection.inumbo.net>)
 id 1knMIp-0006MI-2X
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 13:48:15 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 9b677d54-238b-4520-a7da-6b7d44d251e6;
 Thu, 10 Dec 2020 13:48:13 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-322-p-pu4U4xNmybGAAj8pkIRQ-1; Thu, 10 Dec 2020 08:48:09 -0500
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com
 [10.5.11.14])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 4A6141005E48;
 Thu, 10 Dec 2020 13:48:07 +0000 (UTC)
Received: from localhost (unknown [10.36.110.59])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 521085D9DD;
 Thu, 10 Dec 2020 13:47:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9b677d54-238b-4520-a7da-6b7d44d251e6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607608093;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=anpnx/tw5p62yFhBITy05bzKP3wcJQGVw5OZrWMts1E=;
	b=BAwICLTLxv0oGQhFhA1ceAn2wOYQBQX0BSZEsKBi36D0gnUw4vGL1QXL+t2JTxsqurA7Mu
	NDe4KM0SWPygZUlWjPq0hXrbszlrRKnJgfBFebiKk9b9rb4fN8cGaQOciYHsFtV87YCsta
	GuB8aWhiOmTcO65pBV24OkHYXvCoQPg=
X-MC-Unique: p-pu4U4xNmybGAAj8pkIRQ-1
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: philmd@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Laurent Vivier <laurent@vivier.eu>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	qemu-arm@nongnu.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>
Subject: [PATCH v3 00/13] Remove GCC < 4.8 checks
Date: Thu, 10 Dec 2020 17:47:39 +0400
Message-Id: <20201210134752.780923-1-marcandre.lureau@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=marcandre.lureau@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

From: Marc-André Lureau <marcandre.lureau@redhat.com>

Hi,

Since commit efc6c07 ("configure: Add a test for the minimum compiler version"),
QEMU explicitly depends on GCC >= 4.8.

v3:
 - drop first patch replacing QEMU_GNUC_PREREQ with G_GNUC_CHECK_VERSION
 - add last patch to remove QEMU_GNUC_PREREQ
 - tweak commit messages to replace clang 3.8 with clang 3.4
 - fix some extra coding style
 - collect r-b/a-b tags

v2:
 - include Philippe's earlier reviewed series
 - drop problematic patch to replace GCC_FMT_ATTR, but tweak the check to be clang
 - replace QEMU_GNUC_PREREQ with G_GNUC_CHECK_VERSION
 - split changes
 - add patches to drop __GNUC__ checks (clang advertises itself as 4.2.1, unless
   -fgnuc-version=0)

Marc-André Lureau (11):
  compiler.h: remove GCC < 3 __builtin_expect fallback
  qemu-plugin.h: remove GCC < 4
  tests: remove GCC < 4 fallbacks
  virtiofsd: replace _Static_assert with QEMU_BUILD_BUG_ON
  compiler.h: explicit case for Clang printf attribute
  audio: remove GNUC & MSVC check
  poison: remove GNUC check
  xen: remove GNUC check
  compiler: remove GNUC check
  linux-user: remove GNUC check
  compiler.h: remove QEMU_GNUC_PREREQ

Philippe Mathieu-Daudé (2):
  qemu/atomic: Drop special case for unsupported compiler
  accel/tcg: Remove special case for GCC < 4.6

 include/exec/poison.h              |  2 --
 include/hw/xen/interface/io/ring.h |  9 ------
 include/qemu/atomic.h              | 17 -----------
 include/qemu/compiler.h            | 45 ++++++++----------------------
 include/qemu/qemu-plugin.h         |  9 ++----
 scripts/cocci-macro-file.h         |  1 -
 tools/virtiofsd/fuse_common.h      | 11 +-------
 accel/tcg/cpu-exec.c               |  2 +-
 audio/audio.c                      |  8 +-----
 linux-user/strace.c                |  4 ---
 tests/tcg/arm/fcvt.c               |  8 ++----
 11 files changed, 20 insertions(+), 96 deletions(-)

-- 
2.29.0




From xen-devel-bounces@lists.xenproject.org Thu Dec 10 13:48:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 13:48:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49240.87055 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMJ1-0006PS-CT; Thu, 10 Dec 2020 13:48:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49240.87055; Thu, 10 Dec 2020 13:48:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMJ1-0006PK-8d; Thu, 10 Dec 2020 13:48:27 +0000
Received: by outflank-mailman (input) for mailman id 49240;
 Thu, 10 Dec 2020 13:48:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nlUt=FO=redhat.com=marcandre.lureau@srs-us1.protection.inumbo.net>)
 id 1knMIz-0006Ou-Pz
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 13:48:25 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 458557fb-3582-4815-a092-145740b29281;
 Thu, 10 Dec 2020 13:48:25 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-63-8PrYjN8sPoKWRVQm0sBUGQ-1; Thu, 10 Dec 2020 08:48:20 -0500
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com
 [10.5.11.11])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 044E1107ACE4;
 Thu, 10 Dec 2020 13:48:19 +0000 (UTC)
Received: from localhost (unknown [10.36.110.59])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 428966064B;
 Thu, 10 Dec 2020 13:48:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 458557fb-3582-4815-a092-145740b29281
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607608105;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ulMUWJhPnu4f0g/wIardZXW7c36v91u4ATkIG/ONjyg=;
	b=ArTQTcy3FXFs1jzLmExg4fo+1a9AkfgT5G2UqXmmCrl8lHbQyNhfBSQRpRj6E0pMBAGCMl
	5hdqmu1bftOnBvh18399JQ8Ns8TrvpEsOBB+dMETawNM5DoDOYWH5sch7U7YLUBlDCC50Q
	qALAWN7A/cDuPetGh9AEsCUotFOLkGQ=
X-MC-Unique: 8PrYjN8sPoKWRVQm0sBUGQ-1
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: philmd@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Laurent Vivier <laurent@vivier.eu>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	qemu-arm@nongnu.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Peter Maydell <peter.maydell@linaro.org>
Subject: [PATCH v3 01/13] qemu/atomic: Drop special case for unsupported compiler
Date: Thu, 10 Dec 2020 17:47:40 +0400
Message-Id: <20201210134752.780923-2-marcandre.lureau@redhat.com>
In-Reply-To: <20201210134752.780923-1-marcandre.lureau@redhat.com>
References: <20201210134752.780923-1-marcandre.lureau@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=marcandre.lureau@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Philippe Mathieu-Daudé <philmd@redhat.com>

Since commit efc6c070aca ("configure: Add a test for the
minimum compiler version") the minimum compiler version
required for GCC is 4.8, which has the GCC BZ#36793 bug fixed.

We can safely remove the special case introduced in commit
a281ebc11a6 ("virtio: add missing mb() on notification").

With clang 3.4, __ATOMIC_RELAXED is defined, so the chunk to
remove (which is x86-specific), isn't reached either.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
---
 include/qemu/atomic.h | 17 -----------------
 1 file changed, 17 deletions(-)

diff --git a/include/qemu/atomic.h b/include/qemu/atomic.h
index c1d211a351..8f4b3a80fb 100644
--- a/include/qemu/atomic.h
+++ b/include/qemu/atomic.h
@@ -241,23 +241,6 @@
 
 #else /* __ATOMIC_RELAXED */
 
-/*
- * We use GCC builtin if it's available, as that can use mfence on
- * 32-bit as well, e.g. if built with -march=pentium-m. However, on
- * i386 the spec is buggy, and the implementation followed it until
- * 4.3 (http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36793).
- */
-#if defined(__i386__) || defined(__x86_64__)
-#if !QEMU_GNUC_PREREQ(4, 4)
-#if defined __x86_64__
-#define smp_mb()    ({ asm volatile("mfence" ::: "memory"); (void)0; })
-#else
-#define smp_mb()    ({ asm volatile("lock; addl $0,0(%%esp) " ::: "memory"); (void)0; })
-#endif
-#endif
-#endif
-
-
 #ifdef __alpha__
 #define smp_read_barrier_depends()   asm volatile("mb":::"memory")
 #endif
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 13:48:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 13:48:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49243.87067 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMJB-0006UO-MB; Thu, 10 Dec 2020 13:48:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49243.87067; Thu, 10 Dec 2020 13:48:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMJB-0006UF-IT; Thu, 10 Dec 2020 13:48:37 +0000
Received: by outflank-mailman (input) for mailman id 49243;
 Thu, 10 Dec 2020 13:48:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nlUt=FO=redhat.com=marcandre.lureau@srs-us1.protection.inumbo.net>)
 id 1knMJA-0006Tn-7F
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 13:48:36 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 98ee8c6c-39fc-4302-8d27-044d5a397b3e;
 Thu, 10 Dec 2020 13:48:35 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-230-WetvPUssOgaCqGMOqlYqLw-1; Thu, 10 Dec 2020 08:48:33 -0500
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 40A641005E45;
 Thu, 10 Dec 2020 13:48:31 +0000 (UTC)
Received: from localhost (unknown [10.36.110.59])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 4728E1975F;
 Thu, 10 Dec 2020 13:48:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98ee8c6c-39fc-4302-8d27-044d5a397b3e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607608115;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=L84sEemMXWD2SD9KwsemmLR042MHG0edSakZTzouy6E=;
	b=ilFhJTQzvW/PwmmTq2Av8YKgSSHNU2uhDA0DI/0B+HcQ8tjPVbbx5mpgjifibPoMPInShP
	8e+EBZS8RpIpXyJUmU4qMpn6oLqeAZn4buRoGhDf/kXNBQPFxthoWqraotUmeMlZTMpqcP
	74LErBWJy/f9hESSi9wdM5rmP14WeKM=
X-MC-Unique: WetvPUssOgaCqGMOqlYqLw-1
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: philmd@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Laurent Vivier <laurent@vivier.eu>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	qemu-arm@nongnu.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Peter Maydell <peter.maydell@linaro.org>
Subject: [PATCH v3 02/13] accel/tcg: Remove special case for GCC < 4.6
Date: Thu, 10 Dec 2020 17:47:41 +0400
Message-Id: <20201210134752.780923-3-marcandre.lureau@redhat.com>
In-Reply-To: <20201210134752.780923-1-marcandre.lureau@redhat.com>
References: <20201210134752.780923-1-marcandre.lureau@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=marcandre.lureau@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Philippe Mathieu-Daudé <philmd@redhat.com>

Since commit efc6c070aca ("configure: Add a test for the
minimum compiler version") the minimum compiler version
required for GCC is 4.8.

We can safely remove the special case for GCC 4.6 introduced
in commit 0448f5f8b81 ("cpu-exec: Fix compiler warning
(-Werror=clobbered)").
No change for Clang, as we don't know which versions are affected.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
---
 accel/tcg/cpu-exec.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index 58aea605d8..37a88edb6d 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -724,7 +724,7 @@ int cpu_exec(CPUState *cpu)
 
     /* prepare setjmp context for exception handling */
     if (sigsetjmp(cpu->jmp_env, 0) != 0) {
-#if defined(__clang__) || !QEMU_GNUC_PREREQ(4, 6)
+#if defined(__clang__)
         /* Some compilers wrongly smash all local variables after
          * siglongjmp. There were bug reports for gcc 4.5.0 and clang.
          * Reload essential local variables here for those compilers.
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 13:48:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 13:48:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49251.87079 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMJS-0006bo-Vo; Thu, 10 Dec 2020 13:48:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49251.87079; Thu, 10 Dec 2020 13:48:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMJS-0006bf-Rg; Thu, 10 Dec 2020 13:48:54 +0000
Received: by outflank-mailman (input) for mailman id 49251;
 Thu, 10 Dec 2020 13:48:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nlUt=FO=redhat.com=marcandre.lureau@srs-us1.protection.inumbo.net>)
 id 1knMJS-0006bH-2f
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 13:48:54 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 4897c4a1-b609-4ecf-8209-e86397e795cd;
 Thu, 10 Dec 2020 13:48:53 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-549-3rhGTMrfP0ODZkGQQriAyA-1; Thu, 10 Dec 2020 08:48:51 -0500
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com
 [10.5.11.22])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id D45791005E45;
 Thu, 10 Dec 2020 13:48:48 +0000 (UTC)
Received: from localhost (unknown [10.36.110.59])
 by smtp.corp.redhat.com (Postfix) with ESMTP id D37D710016F6;
 Thu, 10 Dec 2020 13:48:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4897c4a1-b609-4ecf-8209-e86397e795cd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607608133;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=1DdfdKdA7l8BixI1VKlgzbdz9vMe5mVRY9RuhcSENTs=;
	b=a3GdI08aPy7uMc9VHpSQ5bstbRBapA1JHGtbR90t3jvxOyVl+h56hqIH7+K/M8NW1KoZlt
	gCRE7kz7kKPdgtJIcdr9b0LhHG1UugYJIruL6H3RBSVtGVWK/QcloW0ROeROubIqnAsD/d
	fqxx1j83t5mJ2zLS+XAUnkz6eCYc9ac=
X-MC-Unique: 3rhGTMrfP0ODZkGQQriAyA-1
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: philmd@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Laurent Vivier <laurent@vivier.eu>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	qemu-arm@nongnu.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>
Subject: [PATCH v3 03/13] compiler.h: remove GCC < 3 __builtin_expect fallback
Date: Thu, 10 Dec 2020 17:47:42 +0400
Message-Id: <20201210134752.780923-4-marcandre.lureau@redhat.com>
In-Reply-To: <20201210134752.780923-1-marcandre.lureau@redhat.com>
References: <20201210134752.780923-1-marcandre.lureau@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=marcandre.lureau@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Marc-André Lureau <marcandre.lureau@redhat.com>

Since commit efc6c07 ("configure: Add a test for the minimum compiler
version"), QEMU explicitly depends on GCC >= 4.8.

(clang >= 3.4 advertises itself as GCC >= 4.2 compatible and supports
__builtin_expect too)
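
For context, the retained macros can be used unconditionally now; a minimal,
self-contained sketch (hypothetical `clamp_positive()` helper, not from QEMU):

```c
/* Every supported compiler provides __builtin_expect, so the likely/unlikely
 * branch-prediction hints need no fallback definition. */
#define likely(x)    __builtin_expect(!!(x), 1)
#define unlikely(x)  __builtin_expect(!!(x), 0)

static int clamp_positive(int v)
{
    if (unlikely(v < 0)) {      /* hint: the error path is expected to be rare */
        return 0;
    }
    return v;
}
```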

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
---
 include/qemu/compiler.h | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/include/qemu/compiler.h b/include/qemu/compiler.h
index c76281f354..226ead6c90 100644
--- a/include/qemu/compiler.h
+++ b/include/qemu/compiler.h
@@ -44,10 +44,6 @@
 #endif
 
 #ifndef likely
-#if __GNUC__ < 3
-#define __builtin_expect(x, n) (x)
-#endif
-
 #define likely(x)   __builtin_expect(!!(x), 1)
 #define unlikely(x)   __builtin_expect(!!(x), 0)
 #endif
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 13:49:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 13:49:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49257.87091 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMJn-0006j2-8s; Thu, 10 Dec 2020 13:49:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49257.87091; Thu, 10 Dec 2020 13:49:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMJn-0006iu-5Z; Thu, 10 Dec 2020 13:49:15 +0000
Received: by outflank-mailman (input) for mailman id 49257;
 Thu, 10 Dec 2020 13:49:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nlUt=FO=redhat.com=marcandre.lureau@srs-us1.protection.inumbo.net>)
 id 1knMJl-0006hu-QX
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 13:49:13 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 21b8f01f-55b9-48e6-bbe1-a4a823d5b3bd;
 Thu, 10 Dec 2020 13:49:13 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-277-tt29vyJuMMCsKfnRw_kDUw-1; Thu, 10 Dec 2020 08:49:08 -0500
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com
 [10.5.11.11])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 08B9315732;
 Thu, 10 Dec 2020 13:49:07 +0000 (UTC)
Received: from localhost (unknown [10.36.110.59])
 by smtp.corp.redhat.com (Postfix) with ESMTP id E42817046C;
 Thu, 10 Dec 2020 13:48:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 21b8f01f-55b9-48e6-bbe1-a4a823d5b3bd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607608153;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7laBw/T720iHN/BJanY29fxaiafvWQ3ZmcJva9ZXuMM=;
	b=ANqaQT1lsdDYaddE5Ao3ia4AWUQA6ms+ISiHhRX+lcKKyTSCFUrGPnDEFz+eLKc5CF2UCA
	2mANAcMRDg5sCa1RoVnPdOfoJZ5xc/ec1phnziOgzQz1vEFZXc8EcMgVyu4driASHBiTm9
	Pbrh87A2JLiFpCt7MEJwhJmsipW8M2U=
X-MC-Unique: tt29vyJuMMCsKfnRw_kDUw-1
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: philmd@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Laurent Vivier <laurent@vivier.eu>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	qemu-arm@nongnu.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>
Subject: [PATCH v3 04/13] qemu-plugin.h: remove GCC < 4 fallback
Date: Thu, 10 Dec 2020 17:47:43 +0400
Message-Id: <20201210134752.780923-5-marcandre.lureau@redhat.com>
In-Reply-To: <20201210134752.780923-1-marcandre.lureau@redhat.com>
References: <20201210134752.780923-1-marcandre.lureau@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=marcandre.lureau@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Marc-André Lureau <marcandre.lureau@redhat.com>

Since commit efc6c07 ("configure: Add a test for the minimum compiler
version"), QEMU explicitly depends on GCC >= 4.8.

(clang >= 3.4 advertises itself as GCC >= 4.2 compatible)
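
The simplified macros work on any supported compiler; a hedged standalone
sketch (the function names below are illustrative, not the plugin API):

```c
/* After dropping the __GNUC__ >= 4 guard, the visibility attribute can be
 * applied unconditionally on ELF targets. */
#define QEMU_PLUGIN_EXPORT __attribute__((visibility("default")))
#define QEMU_PLUGIN_LOCAL  __attribute__((visibility("hidden")))

QEMU_PLUGIN_EXPORT int plugin_api_version(void)
{
    return 1;   /* default visibility: exported from the shared object */
}

QEMU_PLUGIN_LOCAL int internal_helper(void)
{
    return 2;   /* hidden visibility: not part of the plugin ABI */
}
```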

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
---
 include/qemu/qemu-plugin.h | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/include/qemu/qemu-plugin.h b/include/qemu/qemu-plugin.h
index bab8b0d4b3..5775e82c4e 100644
--- a/include/qemu/qemu-plugin.h
+++ b/include/qemu/qemu-plugin.h
@@ -28,13 +28,8 @@
   #endif
   #define QEMU_PLUGIN_LOCAL
 #else
-  #if __GNUC__ >= 4
-    #define QEMU_PLUGIN_EXPORT __attribute__((visibility("default")))
-    #define QEMU_PLUGIN_LOCAL  __attribute__((visibility("hidden")))
-  #else
-    #define QEMU_PLUGIN_EXPORT
-    #define QEMU_PLUGIN_LOCAL
-  #endif
+  #define QEMU_PLUGIN_EXPORT __attribute__((visibility("default")))
+  #define QEMU_PLUGIN_LOCAL  __attribute__((visibility("hidden")))
 #endif
 
 typedef uint64_t qemu_plugin_id_t;
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 13:49:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 13:49:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49262.87103 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMK7-0006rG-J8; Thu, 10 Dec 2020 13:49:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49262.87103; Thu, 10 Dec 2020 13:49:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMK7-0006r8-Fx; Thu, 10 Dec 2020 13:49:35 +0000
Received: by outflank-mailman (input) for mailman id 49262;
 Thu, 10 Dec 2020 13:49:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nlUt=FO=redhat.com=marcandre.lureau@srs-us1.protection.inumbo.net>)
 id 1knMK6-0006qi-1z
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 13:49:34 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 0b1dd360-49ae-47d2-93a4-9ab387acf17a;
 Thu, 10 Dec 2020 13:49:33 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-100-nqVUJhY8NMCkTQFWY4kjLQ-1; Thu, 10 Dec 2020 08:49:25 -0500
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id CC6A6800D53;
 Thu, 10 Dec 2020 13:49:23 +0000 (UTC)
Received: from localhost (unknown [10.36.110.59])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 5511D19713;
 Thu, 10 Dec 2020 13:49:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0b1dd360-49ae-47d2-93a4-9ab387acf17a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607608173;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=UPLHk9PCXdFlionEQtiO0kTXHb3i/E1vdNJvbkD9DzE=;
	b=NlJWsN9ZQl1y5xFXt1IMrt+nS8p59zoCaIRnx6+hdsg5kUGkBf4wjC3vLZAdsodqS83wbl
	SlcNiaL/LGpOZC7gbMsfRsj6uh3u18ZNK5WrIF9CzeEd9tLuDypnM3Co7yj/ufSDM0YxF+
	7H5WJmRhYfieh2AfEPkEj1mOPaQEpxc=
X-MC-Unique: nqVUJhY8NMCkTQFWY4kjLQ-1
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: philmd@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Laurent Vivier <laurent@vivier.eu>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	qemu-arm@nongnu.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>
Subject: [PATCH v3 05/13] tests: remove GCC < 4 fallbacks
Date: Thu, 10 Dec 2020 17:47:44 +0400
Message-Id: <20201210134752.780923-6-marcandre.lureau@redhat.com>
In-Reply-To: <20201210134752.780923-1-marcandre.lureau@redhat.com>
References: <20201210134752.780923-1-marcandre.lureau@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=marcandre.lureau@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Marc-André Lureau <marcandre.lureau@redhat.com>

Since commit efc6c07 ("configure: Add a test for the minimum compiler
version"), QEMU explicitly depends on GCC >= 4.8.

(clang >= 3.4 advertises itself as GCC >= 4.2 compatible)
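
As a hedged aside: the __builtin_nans family guarded by the removed
__GNUC_PREREQ(3, 3) check is available on every compiler QEMU now supports,
so the macros can be defined unconditionally. A minimal sketch:

```c
#include <math.h>

/* Signaling-NaN macro, defined without a compiler-version guard. */
#ifndef SNANF
# define SNANF (__builtin_nansf(""))
#endif

static int snanf_is_nan(void)
{
    float f = SNANF;
    return isnan(f);    /* a signaling NaN is still a NaN */
}
```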

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
---
 tests/tcg/arm/fcvt.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/tests/tcg/arm/fcvt.c b/tests/tcg/arm/fcvt.c
index 617626bc63..7ac47b564e 100644
--- a/tests/tcg/arm/fcvt.c
+++ b/tests/tcg/arm/fcvt.c
@@ -73,11 +73,9 @@ static void print_int64(int i, int64_t num)
 
 #ifndef SNANF
 /* Signaling NaN macros, if supported.  */
-# if __GNUC_PREREQ(3, 3)
-#  define SNANF (__builtin_nansf (""))
-#  define SNAN (__builtin_nans (""))
-#  define SNANL (__builtin_nansl (""))
-# endif
+# define SNANF (__builtin_nansf (""))
+# define SNAN (__builtin_nans (""))
+# define SNANL (__builtin_nansl (""))
 #endif
 
 float single_numbers[] = { -SNANF,
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 13:49:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 13:49:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49265.87115 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMKL-0006xg-UE; Thu, 10 Dec 2020 13:49:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49265.87115; Thu, 10 Dec 2020 13:49:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMKL-0006xZ-QS; Thu, 10 Dec 2020 13:49:49 +0000
Received: by outflank-mailman (input) for mailman id 49265;
 Thu, 10 Dec 2020 13:49:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nlUt=FO=redhat.com=marcandre.lureau@srs-us1.protection.inumbo.net>)
 id 1knMKK-0006vX-Ep
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 13:49:48 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 63ae00f5-a893-456d-8a2a-2acd1aef07f7;
 Thu, 10 Dec 2020 13:49:45 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-548-wSJyU3UuOkCKoQd_7B4-4g-1; Thu, 10 Dec 2020 08:49:41 -0500
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com
 [10.5.11.16])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id B9659107ACE4;
 Thu, 10 Dec 2020 13:49:39 +0000 (UTC)
Received: from localhost (unknown [10.36.110.59])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 1AE7B5C1C4;
 Thu, 10 Dec 2020 13:49:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 63ae00f5-a893-456d-8a2a-2acd1aef07f7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607608185;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=iHYSa9kEpwoKzIqL0kHciIz6Aw/ueVjJb1/YiLOQ2lU=;
	b=PWeD6ogERY3OVZgXFvXfWlGIDhAxwR6vP3/codNXBXPdxcBFtkAvgyUdlpA30lwt3MhfG3
	Fw0MX0bYKMd4exQojM4bnVJJv1+igCJLJfssAR/C29c1mdt0bUIqxJcOKXa3uaLi2v8U/j
	j+cK0TyVY4g3A7yWg6CHfcaQCg13PQM=
X-MC-Unique: wSJyU3UuOkCKoQd_7B4-4g-1
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: philmd@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Laurent Vivier <laurent@vivier.eu>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	qemu-arm@nongnu.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>
Subject: [PATCH v3 06/13] virtiofsd: replace _Static_assert with QEMU_BUILD_BUG_ON
Date: Thu, 10 Dec 2020 17:47:45 +0400
Message-Id: <20201210134752.780923-7-marcandre.lureau@redhat.com>
In-Reply-To: <20201210134752.780923-1-marcandre.lureau@redhat.com>
References: <20201210134752.780923-1-marcandre.lureau@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=marcandre.lureau@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Marc-André Lureau <marcandre.lureau@redhat.com>

This lets us get rid of a check for older GCC versions (which was a bit
bogus too, since it also fell back to the bitfield trick when compiling
as C++).
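
Both the kept and the deleted forms are compile-time assertions; a hedged
sketch of the mechanism (using uint64_t for portability here, where the
patch itself checks off_t):

```c
#include <stdint.h>

/* What QEMU_BUILD_BUG_ON boils down to on a C11 compiler: a static
 * assertion. The deleted fallback encoded the same check as a
 * negative-width bitfield, which also fails to compile when false. */
#define BUILD_BUG_ON(x) _Static_assert(!(x), "build-time check failed")

BUILD_BUG_ON(sizeof(uint64_t) != 8);    /* rejected at compile time if false */

static int checks_compiled(void)
{
    return 1;   /* reaching runtime means the static assertion held */
}
```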

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
---
 tools/virtiofsd/fuse_common.h | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/tools/virtiofsd/fuse_common.h b/tools/virtiofsd/fuse_common.h
index 5aee5193eb..a2484060b6 100644
--- a/tools/virtiofsd/fuse_common.h
+++ b/tools/virtiofsd/fuse_common.h
@@ -809,15 +809,6 @@ void fuse_remove_signal_handlers(struct fuse_session *se);
  *
  * On 32bit systems please add -D_FILE_OFFSET_BITS=64 to your compile flags!
  */
-
-#if defined(__GNUC__) &&                                      \
-    (__GNUC__ > 4 || __GNUC__ == 4 && __GNUC_MINOR__ >= 6) && \
-    !defined __cplusplus
-_Static_assert(sizeof(off_t) == 8, "fuse: off_t must be 64bit");
-#else
-struct _fuse_off_t_must_be_64bit_dummy_struct {
-    unsigned _fuse_off_t_must_be_64bit:((sizeof(off_t) == 8) ? 1 : -1);
-};
-#endif
+QEMU_BUILD_BUG_ON(sizeof(off_t) != 8);
 
 #endif /* FUSE_COMMON_H_ */
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 13:50:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 13:50:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49268.87127 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMKX-00074Q-Ct; Thu, 10 Dec 2020 13:50:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49268.87127; Thu, 10 Dec 2020 13:50:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMKX-00074J-95; Thu, 10 Dec 2020 13:50:01 +0000
Received: by outflank-mailman (input) for mailman id 49268;
 Thu, 10 Dec 2020 13:50:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nlUt=FO=redhat.com=marcandre.lureau@srs-us1.protection.inumbo.net>)
 id 1knMKW-00073v-Fd
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 13:50:00 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 2a25ce89-4d1c-4c55-9d20-3c7fcb2b3913;
 Thu, 10 Dec 2020 13:49:59 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-439-d9qAt3hLPxmeeP3x39eJCw-1; Thu, 10 Dec 2020 08:49:56 -0500
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com
 [10.5.11.22])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 2A7811005504;
 Thu, 10 Dec 2020 13:49:55 +0000 (UTC)
Received: from localhost (unknown [10.36.110.59])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 14C1010021AA;
 Thu, 10 Dec 2020 13:49:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a25ce89-4d1c-4c55-9d20-3c7fcb2b3913
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607608199;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=uC8B/2alqBgSU7hoZ9+H0Qhhp1eKRr97cLOh0lFkCEo=;
	b=IOWaDCl4oVIs654F2uXzz8cwZ5dpzZcUsd1soPsoC88yNtkm9VRZvwQgyL/ffc2r/xJWgN
	x1KM2s1lELeVbEfLvH4jg4UUeAnKTLX12lPuwBgGmgrW4DMYik7+XbZ2n1SQp9uGMC2apL
	vkd1DZPZ/iCUsRc3l8SuQlk3txdpeDM=
X-MC-Unique: d9qAt3hLPxmeeP3x39eJCw-1
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: philmd@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Laurent Vivier <laurent@vivier.eu>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	qemu-arm@nongnu.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>
Subject: [PATCH v3 07/13] compiler.h: explicit case for Clang printf attribute
Date: Thu, 10 Dec 2020 17:47:46 +0400
Message-Id: <20201210134752.780923-8-marcandre.lureau@redhat.com>
In-Reply-To: <20201210134752.780923-1-marcandre.lureau@redhat.com>
References: <20201210134752.780923-1-marcandre.lureau@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=marcandre.lureau@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Marc-André Lureau <marcandre.lureau@redhat.com>

Since commit efc6c07 ("configure: Add a test for the minimum compiler
version"), QEMU explicitly depends on GCC >= 4.8, so the earlier
version checks could be dropped. However, clang advertises itself as
GCC 4.2.1.

Since clang doesn't support gnu_printf, make the clang case explicit
and drop the GCC version check.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/qemu/compiler.h | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/include/qemu/compiler.h b/include/qemu/compiler.h
index 226ead6c90..6212295e52 100644
--- a/include/qemu/compiler.h
+++ b/include/qemu/compiler.h
@@ -99,18 +99,18 @@
 #define QEMU_BUILD_BUG_ON_ZERO(x) (sizeof(QEMU_BUILD_BUG_ON_STRUCT(x)) - \
                                    sizeof(QEMU_BUILD_BUG_ON_STRUCT(x)))
 
-#if defined __GNUC__
-# if !QEMU_GNUC_PREREQ(4, 4)
-   /* gcc versions before 4.4.x don't support gnu_printf, so use printf. */
-#  define GCC_FMT_ATTR(n, m) __attribute__((format(printf, n, m)))
-# else
-   /* Use gnu_printf when supported (qemu uses standard format strings). */
-#  define GCC_FMT_ATTR(n, m) __attribute__((format(gnu_printf, n, m)))
-#  if defined(_WIN32)
-    /* Map __printf__ to __gnu_printf__ because we want standard format strings
-     * even when MinGW or GLib include files use __printf__. */
-#   define __printf__ __gnu_printf__
-#  endif
+#if defined(__clang__)
+/* clang doesn't support gnu_printf, so use printf. */
+# define GCC_FMT_ATTR(n, m) __attribute__((format(printf, n, m)))
+#elif defined(__GNUC__)
+/* Use gnu_printf (qemu uses standard format strings). */
+# define GCC_FMT_ATTR(n, m) __attribute__((format(gnu_printf, n, m)))
+# if defined(_WIN32)
+/*
+ * Map __printf__ to __gnu_printf__ because we want standard format strings even
+ * when MinGW or GLib include files use __printf__.
+ */
+#  define __printf__ __gnu_printf__
 # endif
 #else
 #define GCC_FMT_ATTR(n, m)
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 13:50:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 13:50:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49272.87139 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMKd-0007kf-NH; Thu, 10 Dec 2020 13:50:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49272.87139; Thu, 10 Dec 2020 13:50:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMKd-0007kW-IQ; Thu, 10 Dec 2020 13:50:07 +0000
Received: by outflank-mailman (input) for mailman id 49272;
 Thu, 10 Dec 2020 13:50:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nlUt=FO=redhat.com=marcandre.lureau@srs-us1.protection.inumbo.net>)
 id 1knMKc-0007eY-8u
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 13:50:06 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id f74f169f-0d51-4107-a1f7-730d4fb9165f;
 Thu, 10 Dec 2020 13:50:05 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-405-Qm-VJHcVP8OwZ_U7FJZpnQ-1; Thu, 10 Dec 2020 08:50:03 -0500
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com
 [10.5.11.22])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 45D0D190A7A3;
 Thu, 10 Dec 2020 13:50:01 +0000 (UTC)
Received: from localhost (unknown [10.36.110.59])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 8FED810016F6;
 Thu, 10 Dec 2020 13:49:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f74f169f-0d51-4107-a1f7-730d4fb9165f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607608205;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Rg+dFk1r7xme4f47B/YxXkkc3kBVbh72BAEdLMuhlFg=;
	b=MyCLFZ5mPV16BHIKGjZi16KUjk9GOcHOpogD2KHLOjkFManAxW7WQ5BwdvmQuzc5FrB42/
	yHV+2xeHKlqe1i5KuZitXIJL8i3LVpaRVd+5tUWU3+WorA3Mwi93TSDek9/190kYgVISdH
	wrdhJsAszpoIB28AkUJ4crTGSWtxFJ4=
X-MC-Unique: Qm-VJHcVP8OwZ_U7FJZpnQ-1
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: philmd@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Laurent Vivier <laurent@vivier.eu>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	qemu-arm@nongnu.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>
Subject: [PATCH v3 08/13] audio: remove GNUC & MSVC check
Date: Thu, 10 Dec 2020 17:47:47 +0400
Message-Id: <20201210134752.780923-9-marcandre.lureau@redhat.com>
In-Reply-To: <20201210134752.780923-1-marcandre.lureau@redhat.com>
References: <20201210134752.780923-1-marcandre.lureau@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=marcandre.lureau@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Marc-André Lureau <marcandre.lureau@redhat.com>

QEMU requires either GCC or Clang, both of which advertise __GNUC__.
Drop the MSVC fallback path.

Note: I intentionally left further cleanups for later work.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
---
 audio/audio.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/audio/audio.c b/audio/audio.c
index 46578e4a58..d7a00294de 100644
--- a/audio/audio.c
+++ b/audio/audio.c
@@ -122,13 +122,7 @@ int audio_bug (const char *funcname, int cond)
 
 #if defined AUDIO_BREAKPOINT_ON_BUG
 #  if defined HOST_I386
-#    if defined __GNUC__
-        __asm__ ("int3");
-#    elif defined _MSC_VER
-        _asm _emit 0xcc;
-#    else
-        abort ();
-#    endif
+      __asm__ ("int3");
 #  else
         abort ();
 #  endif
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 13:50:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 13:50:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49282.87151 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMKx-0007yk-0B; Thu, 10 Dec 2020 13:50:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49282.87151; Thu, 10 Dec 2020 13:50:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMKw-0007yc-Rc; Thu, 10 Dec 2020 13:50:26 +0000
Received: by outflank-mailman (input) for mailman id 49282;
 Thu, 10 Dec 2020 13:50:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nlUt=FO=redhat.com=marcandre.lureau@srs-us1.protection.inumbo.net>)
 id 1knMKv-00073v-Es
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 13:50:25 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 2a74ba27-cd5e-4a6b-8b69-3da677a654ee;
 Thu, 10 Dec 2020 13:50:21 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-120-Gmi7SMuINcumk-1cNlsFsA-1; Thu, 10 Dec 2020 08:50:19 -0500
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com
 [10.5.11.13])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 79F67100C662;
 Thu, 10 Dec 2020 13:50:17 +0000 (UTC)
Received: from localhost (unknown [10.36.110.59])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 06182709A0;
 Thu, 10 Dec 2020 13:50:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a74ba27-cd5e-4a6b-8b69-3da677a654ee
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607608220;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=655teq4WQH113PO8F0kS3wL4OZKf3txyzGY5haGVOpE=;
	b=iOk1ZZZbmSaC4t74ALXYq1wQzlzd/EaoS6VEfl+7CHTEfP8Cso0/cPKRJXn0Sb1grYaQXE
	7ajBmjD3E9tFgCpbgln2IcAQY1vzhwLw7NywdoieILo0I6/NNcpsWc80Y4403Z6AtaJSDw
	LIe/63q7kPRTTldA3uE4vxIanyzXLGM=
X-MC-Unique: Gmi7SMuINcumk-1cNlsFsA-1
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: philmd@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Laurent Vivier <laurent@vivier.eu>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	qemu-arm@nongnu.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>
Subject: [PATCH v3 09/13] poison: remove GNUC check
Date: Thu, 10 Dec 2020 17:47:48 +0400
Message-Id: <20201210134752.780923-10-marcandre.lureau@redhat.com>
In-Reply-To: <20201210134752.780923-1-marcandre.lureau@redhat.com>
References: <20201210134752.780923-1-marcandre.lureau@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=marcandre.lureau@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Marc-André Lureau <marcandre.lureau@redhat.com>

QEMU requires Clang or GCC, which both define and support __GNUC__
extensions.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/exec/poison.h | 2 --
 1 file changed, 2 deletions(-)

diff --git a/include/exec/poison.h b/include/exec/poison.h
index 7b9ac361dc..d7ae1f23e7 100644
--- a/include/exec/poison.h
+++ b/include/exec/poison.h
@@ -3,7 +3,6 @@
 
 #ifndef HW_POISON_H
 #define HW_POISON_H
-#ifdef __GNUC__
 
 #pragma GCC poison TARGET_I386
 #pragma GCC poison TARGET_X86_64
@@ -93,4 +92,3 @@
 #pragma GCC poison CONFIG_SOFTMMU
 
 #endif
-#endif
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 13:50:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 13:50:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49284.87163 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knML9-000853-87; Thu, 10 Dec 2020 13:50:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49284.87163; Thu, 10 Dec 2020 13:50:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knML9-00084w-4F; Thu, 10 Dec 2020 13:50:39 +0000
Received: by outflank-mailman (input) for mailman id 49284;
 Thu, 10 Dec 2020 13:50:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nlUt=FO=redhat.com=marcandre.lureau@srs-us1.protection.inumbo.net>)
 id 1knML7-000836-WA
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 13:50:38 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 0374b9c9-2f3c-4872-a240-c66cc15a66a3;
 Thu, 10 Dec 2020 13:50:37 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-496-SxnkM2ESM9-NbJcLlr6UVw-1; Thu, 10 Dec 2020 08:50:35 -0500
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com
 [10.5.11.12])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 637AE100C667;
 Thu, 10 Dec 2020 13:50:33 +0000 (UTC)
Received: from localhost (unknown [10.36.110.59])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 2B5B460BF1;
 Thu, 10 Dec 2020 13:50:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0374b9c9-2f3c-4872-a240-c66cc15a66a3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607608237;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=HuW0gR6laAl8ZCFcUIcQZqus2PIG7w04hxhAM9m/fXo=;
	b=OTFLIhzUeNoBNDr2sJWR8Gx/iU4GbgALgXcfzxwU72+2RfPu8xSKKLLSsfGrMdtquDu8Nf
	IQgQxI/KfIcHWgra+4I4aoMckyNvxJhcfmPjVlHXXCFKC8Hhij4+VenqJr1q/dNUN6eh84
	cXHJOsNXol76QqTthTWe0zggdya3rXw=
X-MC-Unique: SxnkM2ESM9-NbJcLlr6UVw-1
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: philmd@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Laurent Vivier <laurent@vivier.eu>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	qemu-arm@nongnu.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>
Subject: [PATCH v3 10/13] xen: remove GNUC check
Date: Thu, 10 Dec 2020 17:47:49 +0400
Message-Id: <20201210134752.780923-11-marcandre.lureau@redhat.com>
In-Reply-To: <20201210134752.780923-1-marcandre.lureau@redhat.com>
References: <20201210134752.780923-1-marcandre.lureau@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=marcandre.lureau@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Marc-André Lureau <marcandre.lureau@redhat.com>

QEMU requires Clang or GCC, which both define and support __GNUC__
extensions.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---
 include/hw/xen/interface/io/ring.h | 9 ---------
 1 file changed, 9 deletions(-)

diff --git a/include/hw/xen/interface/io/ring.h b/include/hw/xen/interface/io/ring.h
index 5d048b335c..115705f3f4 100644
--- a/include/hw/xen/interface/io/ring.h
+++ b/include/hw/xen/interface/io/ring.h
@@ -206,21 +206,12 @@ typedef struct __name##_back_ring __name##_back_ring_t
 #define RING_HAS_UNCONSUMED_RESPONSES(_r)                               \
     ((_r)->sring->rsp_prod - (_r)->rsp_cons)
 
-#ifdef __GNUC__
 #define RING_HAS_UNCONSUMED_REQUESTS(_r) ({                             \
     unsigned int req = (_r)->sring->req_prod - (_r)->req_cons;          \
     unsigned int rsp = RING_SIZE(_r) -                                  \
         ((_r)->req_cons - (_r)->rsp_prod_pvt);                          \
     req < rsp ? req : rsp;                                              \
 })
-#else
-/* Same as above, but without the nice GCC ({ ... }) syntax. */
-#define RING_HAS_UNCONSUMED_REQUESTS(_r)                                \
-    ((((_r)->sring->req_prod - (_r)->req_cons) <                        \
-      (RING_SIZE(_r) - ((_r)->req_cons - (_r)->rsp_prod_pvt))) ?        \
-     ((_r)->sring->req_prod - (_r)->req_cons) :                         \
-     (RING_SIZE(_r) - ((_r)->req_cons - (_r)->rsp_prod_pvt)))
-#endif
 
 /* Direct access to individual ring elements, by index. */
 #define RING_GET_REQUEST(_r, _idx)                                      \
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 13:50:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 13:50:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49290.87174 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMLP-0008KN-Gs; Thu, 10 Dec 2020 13:50:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49290.87174; Thu, 10 Dec 2020 13:50:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMLP-0008KF-DZ; Thu, 10 Dec 2020 13:50:55 +0000
Received: by outflank-mailman (input) for mailman id 49290;
 Thu, 10 Dec 2020 13:50:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nlUt=FO=redhat.com=marcandre.lureau@srs-us1.protection.inumbo.net>)
 id 1knMLO-0008JM-5n
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 13:50:54 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 7111a8d4-b178-4bd1-8218-37c919e727c2;
 Thu, 10 Dec 2020 13:50:53 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-252-LBqTJH63OeGNdFIMlkfKdw-1; Thu, 10 Dec 2020 08:50:48 -0500
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com
 [10.5.11.16])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 0D6A4100C666;
 Thu, 10 Dec 2020 13:50:47 +0000 (UTC)
Received: from localhost (unknown [10.36.110.59])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 446095C1C4;
 Thu, 10 Dec 2020 13:50:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7111a8d4-b178-4bd1-8218-37c919e727c2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607608253;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hcVdp1z/ZHZ0T5ogC3rWT4bNutncQowkRPo6+7t0mqE=;
	b=H/ty3W1o8G+q6IQjVHdwXrXHcGpe6dx6sFdFUU6zcAqFCJNT4EEmNs6islaboIbK/NBvzs
	Yb8iWS5YNnwTuhH6T9kjJ0TpyFBWTrYw2qtEFHXd5KIN9NSrfwJn9q9EQAqfVnT5KsfaFU
	lU26CtP629eUIp4jij/TRwcmRnSOWV0=
X-MC-Unique: LBqTJH63OeGNdFIMlkfKdw-1
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: philmd@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Laurent Vivier <laurent@vivier.eu>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	qemu-arm@nongnu.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>
Subject: [PATCH v3 11/13] compiler: remove GNUC check
Date: Thu, 10 Dec 2020 17:47:50 +0400
Message-Id: <20201210134752.780923-12-marcandre.lureau@redhat.com>
In-Reply-To: <20201210134752.780923-1-marcandre.lureau@redhat.com>
References: <20201210134752.780923-1-marcandre.lureau@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=marcandre.lureau@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Marc-André Lureau <marcandre.lureau@redhat.com>

QEMU requires Clang or GCC, which both define and support __GNUC__
extensions.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
---
 include/qemu/compiler.h | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/include/qemu/compiler.h b/include/qemu/compiler.h
index 6212295e52..5e6cf2c8e8 100644
--- a/include/qemu/compiler.h
+++ b/include/qemu/compiler.h
@@ -64,14 +64,10 @@
     (offsetof(container, field) + sizeof_field(container, field))
 
 /* Convert from a base type to a parent type, with compile time checking.  */
-#ifdef __GNUC__
 #define DO_UPCAST(type, field, dev) ( __extension__ ( { \
     char __attribute__((unused)) offset_must_be_zero[ \
         -offsetof(type, field)]; \
     container_of(dev, type, field);}))
-#else
-#define DO_UPCAST(type, field, dev) container_of(dev, type, field)
-#endif
 
 #define typeof_field(type, field) typeof(((type *)0)->field)
 #define type_check(t1,t2) ((t1*)0 - (t2*)0)
@@ -102,7 +98,7 @@
 #if defined(__clang__)
 /* clang doesn't support gnu_printf, so use printf. */
 # define GCC_FMT_ATTR(n, m) __attribute__((format(printf, n, m)))
-#elif defined(__GNUC__)
+#else
 /* Use gnu_printf (qemu uses standard format strings). */
 # define GCC_FMT_ATTR(n, m) __attribute__((format(gnu_printf, n, m)))
 # if defined(_WIN32)
@@ -112,8 +108,6 @@
  */
 #  define __printf__ __gnu_printf__
 # endif
-#else
-#define GCC_FMT_ATTR(n, m)
 #endif
 
 #ifndef __has_warning
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 13:51:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 13:51:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49295.87187 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMLf-0008SA-SJ; Thu, 10 Dec 2020 13:51:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49295.87187; Thu, 10 Dec 2020 13:51:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMLf-0008S3-Nw; Thu, 10 Dec 2020 13:51:11 +0000
Received: by outflank-mailman (input) for mailman id 49295;
 Thu, 10 Dec 2020 13:51:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nlUt=FO=redhat.com=marcandre.lureau@srs-us1.protection.inumbo.net>)
 id 1knMLe-0008Pb-1s
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 13:51:10 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id c60d29e6-9b24-42e1-9539-30dca2762416;
 Thu, 10 Dec 2020 13:51:08 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-490-iPA-pKwJOj-482M4VtH90w-1; Thu, 10 Dec 2020 08:51:04 -0500
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com
 [10.5.11.11])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id B2125107ACE6;
 Thu, 10 Dec 2020 13:51:02 +0000 (UTC)
Received: from localhost (unknown [10.36.110.59])
 by smtp.corp.redhat.com (Postfix) with ESMTP id E56CB6E717;
 Thu, 10 Dec 2020 13:50:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c60d29e6-9b24-42e1-9539-30dca2762416
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607608268;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=K/Yso0JMCvkfnEynmDIuRnHbevbaJM9mZp2hfr8WVNA=;
	b=UxzRuB7sLDu0rMOMLu9BWUnRpJsDH9a5Q6pOoyhh+rq+7lxeIYwdmsbLEOMBWtOiwxxztM
	bIbDO+0gcxn32rFuJyB70+YCFcLdDm2UpMYRq9R78efh0Y9WoHJ0JlOwVK3jUq2HWn/grr
	ZBJJWYrZDI2LWQkEC4DoiSpUoSDHiEI=
X-MC-Unique: iPA-pKwJOj-482M4VtH90w-1
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: philmd@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Laurent Vivier <laurent@vivier.eu>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	qemu-arm@nongnu.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>
Subject: [PATCH v3 12/13] linux-user: remove GNUC check
Date: Thu, 10 Dec 2020 17:47:51 +0400
Message-Id: <20201210134752.780923-13-marcandre.lureau@redhat.com>
In-Reply-To: <20201210134752.780923-1-marcandre.lureau@redhat.com>
References: <20201210134752.780923-1-marcandre.lureau@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=marcandre.lureau@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Marc-André Lureau <marcandre.lureau@redhat.com>

QEMU requires Clang or GCC, which both define and support __GNUC__
extensions.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
---
 linux-user/strace.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/linux-user/strace.c b/linux-user/strace.c
index 11fea14fba..e00275fcb5 100644
--- a/linux-user/strace.c
+++ b/linux-user/strace.c
@@ -24,7 +24,6 @@ struct syscallname {
                    abi_long, abi_long, abi_long);
 };
 
-#ifdef __GNUC__
 /*
  * It is possible that target doesn't have syscall that uses
  * following flags but we don't want the compiler to warn
@@ -32,9 +31,6 @@ struct syscallname {
  * functions.  It is ok to keep them while not used.
  */
 #define UNUSED __attribute__ ((unused))
-#else
-#define UNUSED
-#endif
 
 /*
  * Structure used to translate flag values into strings.  This is
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 13:51:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 13:51:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49300.87199 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMLs-00007E-5W; Thu, 10 Dec 2020 13:51:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49300.87199; Thu, 10 Dec 2020 13:51:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMLs-000071-1j; Thu, 10 Dec 2020 13:51:24 +0000
Received: by outflank-mailman (input) for mailman id 49300;
 Thu, 10 Dec 2020 13:51:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nlUt=FO=redhat.com=marcandre.lureau@srs-us1.protection.inumbo.net>)
 id 1knMLq-0008WE-Cf
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 13:51:22 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id ef0ec818-f874-4358-809c-3e5f2609a83f;
 Thu, 10 Dec 2020 13:51:21 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-305-BuJ90PEsPry3jN9awvrssA-1; Thu, 10 Dec 2020 08:51:19 -0500
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com
 [10.5.11.16])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 1D778100C66B;
 Thu, 10 Dec 2020 13:51:18 +0000 (UTC)
Received: from localhost (unknown [10.36.110.59])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 287FA5C260;
 Thu, 10 Dec 2020 13:51:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef0ec818-f874-4358-809c-3e5f2609a83f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607608281;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RTnJJ5tS3uEIm5t/A/GM2tAIUagEhic34ZpOQ1Fixxo=;
	b=MzzaiFGiq5q00qM+WC0yYAQIUCamN/FMe+DNajHbr+952OM6aIpt6RrbpxL6W090qVxYLr
	2tIgozo2X4yyzSIuKZKyJNfRt2ACzEUrKcRRkAFlCBd4xiZMQKl26h9+I06HSzcLHiiaLB
	EhtUTVjkPCV/BFWh9HKgBguJfAlHCSM=
X-MC-Unique: BuJ90PEsPry3jN9awvrssA-1
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: philmd@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Laurent Vivier <laurent@vivier.eu>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	qemu-arm@nongnu.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>
Subject: [PATCH v3 13/13] compiler.h: remove QEMU_GNUC_PREREQ
Date: Thu, 10 Dec 2020 17:47:52 +0400
Message-Id: <20201210134752.780923-14-marcandre.lureau@redhat.com>
In-Reply-To: <20201210134752.780923-1-marcandre.lureau@redhat.com>
References: <20201210134752.780923-1-marcandre.lureau@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=marcandre.lureau@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Marc-André Lureau <marcandre.lureau@redhat.com>

When needed, the G_GNUC_CHECK_VERSION() glib macro can be used instead.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
---
 include/qemu/compiler.h    | 11 -----------
 scripts/cocci-macro-file.h |  1 -
 2 files changed, 12 deletions(-)

diff --git a/include/qemu/compiler.h b/include/qemu/compiler.h
index 5e6cf2c8e8..1b9e58e82b 100644
--- a/include/qemu/compiler.h
+++ b/include/qemu/compiler.h
@@ -11,17 +11,6 @@
 #define QEMU_STATIC_ANALYSIS 1
 #endif
 
-/*----------------------------------------------------------------------------
-| The macro QEMU_GNUC_PREREQ tests for minimum version of the GNU C compiler.
-| The code is a copy of SOFTFLOAT_GNUC_PREREQ, see softfloat-macros.h.
-*----------------------------------------------------------------------------*/
-#if defined(__GNUC__) && defined(__GNUC_MINOR__)
-# define QEMU_GNUC_PREREQ(maj, min) \
-         ((__GNUC__ << 16) + __GNUC_MINOR__ >= ((maj) << 16) + (min))
-#else
-# define QEMU_GNUC_PREREQ(maj, min) 0
-#endif
-
 #define QEMU_NORETURN __attribute__ ((__noreturn__))
 
 #define QEMU_WARN_UNUSED_RESULT __attribute__((warn_unused_result))
diff --git a/scripts/cocci-macro-file.h b/scripts/cocci-macro-file.h
index c6bbc05ba3..20eea6b708 100644
--- a/scripts/cocci-macro-file.h
+++ b/scripts/cocci-macro-file.h
@@ -19,7 +19,6 @@
  */
 
 /* From qemu/compiler.h */
-#define QEMU_GNUC_PREREQ(maj, min) 1
 #define QEMU_NORETURN __attribute__ ((__noreturn__))
 #define QEMU_WARN_UNUSED_RESULT __attribute__((warn_unused_result))
 #define QEMU_SENTINEL __attribute__((sentinel))
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 13:58:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 13:58:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49323.87210 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMSD-0000Uo-2o; Thu, 10 Dec 2020 13:57:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49323.87210; Thu, 10 Dec 2020 13:57:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMSC-0000Uh-W4; Thu, 10 Dec 2020 13:57:56 +0000
Resent-Date: Thu, 10 Dec 2020 13:57:56 +0000
Resent-Message-Id: <E1knMSC-0000Uh-W4@lists.xenproject.org>
Received: by outflank-mailman (input) for mailman id 49323;
 Thu, 10 Dec 2020 13:57:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=31WP=FO=patchew.org=no-reply@srs-us1.protection.inumbo.net>)
 id 1knMSB-0000Uc-Go
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 13:57:55 +0000
Received: from sender4-of-o56.zoho.com (unknown [136.143.188.56])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 939a9c36-487b-4f88-8b91-b10ade743df1;
 Thu, 10 Dec 2020 13:57:54 +0000 (UTC)
Received: from [172.17.0.3] (23.253.156.214 [23.253.156.214]) by
 mx.zohomail.com with SMTPS id 1607608665218928.6832951833587;
 Thu, 10 Dec 2020 05:57:45 -0800 (PST)
Resent-From: 
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 939a9c36-487b-4f88-8b91-b10ade743df1
ARC-Seal: i=1; a=rsa-sha256; t=1607608667; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=J7yP/bEcqUv7RGlQn2+HOgTRvXtN83USY+vfvkEnpC3H3bveRAvH2vBshR99WVsDPGSkx4siWC2XRiIJBBPevmrfZHEreKZmVgUi5jqGTFSUslwc+va7/dfG2zMY6Q+/Hj0DKDWTxAZvoWK2zD1raWa60/ZeWnBeCTbw6pZ6Ojs=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1607608667; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:Reply-To:Subject:To; 
	bh=FqEloFzX8R1oXIXHz5gZRze20BKsfvsbNvg1Fkrpw5E=; 
	b=nnlYfeVVM2I51k1BWBbq+ctE0KKAmNCJRNn2KkvSoV6WAm37gKQr219+RBgDo7ECHccOXcj0k4IOTd2nQkbDfIxtJnpX1OT7dfaK2B+hJYGDD500SY1lXfgPtfljDEQMp4QTDq+7xgAs4r0lcgLKGKl7CGt/iuTEsZfSncVnjrU=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	spf=pass  smtp.mailfrom=no-reply@patchew.org;
	dmarc=pass header.from=<no-reply@patchew.org> header.from=<no-reply@patchew.org>
In-Reply-To: <20201210134752.780923-1-marcandre.lureau@redhat.com>
Reply-To: <qemu-devel@nongnu.org>
Subject: Re: [PATCH v3 00/13] Remove GCC < 4.8 checks
Message-ID: <160760866227.10419.11672890973023491886@600e7e483b3a>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
From: no-reply@patchew.org
To: marcandre.lureau@redhat.com
Cc: qemu-devel@nongnu.org, philmd@redhat.com, richard.henderson@linaro.org, laurent@vivier.eu, paul@xen.org, xen-devel@lists.xenproject.org, stefanha@redhat.com, kraxel@redhat.com, sstabellini@kernel.org, anthony.perard@citrix.com, dgilbert@redhat.com, qemu-arm@nongnu.org, pbonzini@redhat.com, peter.maydell@linaro.org, marcandre.lureau@redhat.com
Date: Thu, 10 Dec 2020 05:57:45 -0800 (PST)
X-ZohoMailClient: External

UGF0Y2hldyBVUkw6IGh0dHBzOi8vcGF0Y2hldy5vcmcvUUVNVS8yMDIwMTIxMDEzNDc1Mi43ODA5
MjMtMS1tYXJjYW5kcmUubHVyZWF1QHJlZGhhdC5jb20vCgoKCkhpLAoKVGhpcyBzZXJpZXMgc2Vl
bXMgdG8gaGF2ZSBzb21lIGNvZGluZyBzdHlsZSBwcm9ibGVtcy4gU2VlIG91dHB1dCBiZWxvdyBm
b3IKbW9yZSBpbmZvcm1hdGlvbjoKClR5cGU6IHNlcmllcwpNZXNzYWdlLWlkOiAyMDIwMTIxMDEz
NDc1Mi43ODA5MjMtMS1tYXJjYW5kcmUubHVyZWF1QHJlZGhhdC5jb20KU3ViamVjdDogW1BBVENI
IHYzIDAwLzEzXSBSZW1vdmUgR0NDIDwgNC44IGNoZWNrcwoKPT09IFRFU1QgU0NSSVBUIEJFR0lO
ID09PQojIS9iaW4vYmFzaApnaXQgcmV2LXBhcnNlIGJhc2UgPiAvZGV2L251bGwgfHwgZXhpdCAw
CmdpdCBjb25maWcgLS1sb2NhbCBkaWZmLnJlbmFtZWxpbWl0IDAKZ2l0IGNvbmZpZyAtLWxvY2Fs
IGRpZmYucmVuYW1lcyBUcnVlCmdpdCBjb25maWcgLS1sb2NhbCBkaWZmLmFsZ29yaXRobSBoaXN0
b2dyYW0KLi9zY3JpcHRzL2NoZWNrcGF0Y2gucGwgLS1tYWlsYmFjayBiYXNlLi4KPT09IFRFU1Qg
U0NSSVBUIEVORCA9PT0KClVwZGF0aW5nIDNjOGNmNWE5YzIxZmY4NzgyMTY0ZDFkZWY3ZjQ0YmQ4
ODg3MTMzODQKRnJvbSBodHRwczovL2dpdGh1Yi5jb20vcGF0Y2hldy1wcm9qZWN0L3FlbXUKICAg
NWU3YjIwNC4uMTgwODM0ZCAgbWFzdGVyICAgICAtPiBtYXN0ZXIKIC0gW3RhZyB1cGRhdGVdICAg
ICAgcGF0Y2hldy8yMDIwMTIwODA1NTA0My4zMTU0OC0xLWxlcnNla0ByZWRoYXQuY29tIC0+IHBh
dGNoZXcvMjAyMDEyMDgwNTUwNDMuMzE1NDgtMS1sZXJzZWtAcmVkaGF0LmNvbQogLSBbdGFnIHVw
ZGF0ZV0gICAgICBwYXRjaGV3LzIwMjAxMjA5MTAwODExLjE5MDMxNi0xLWFuZHJleS5ncnV6ZGV2
QHZpcnR1b3p6by5jb20gLT4gcGF0Y2hldy8yMDIwMTIwOTEwMDgxMS4xOTAzMTYtMS1hbmRyZXku
Z3J1emRldkB2aXJ0dW96em8uY29tCiAqIFtuZXcgdGFnXSAgICAgICAgIHBhdGNoZXcvMjAyMDEy
MTAxMjU5MjkuMTEzNjM5MC0xLW1sZXZpdHNrQHJlZGhhdC5jb20gLT4gcGF0Y2hldy8yMDIwMTIx
MDEyNTkyOS4xMTM2MzkwLTEtbWxldml0c2tAcmVkaGF0LmNvbQogKiBbbmV3IHRhZ10gICAgICAg
ICBwYXRjaGV3LzIwMjAxMjEwMTM0NzUyLjc4MDkyMy0xLW1hcmNhbmRyZS5sdXJlYXVAcmVkaGF0
LmNvbSAtPiBwYXRjaGV3LzIwMjAxMjEwMTM0NzUyLjc4MDkyMy0xLW1hcmNhbmRyZS5sdXJlYXVA
cmVkaGF0LmNvbQpTd2l0Y2hlZCB0byBhIG5ldyBicmFuY2ggJ3Rlc3QnCjc3OGEyZTMgY29tcGls
ZXIuaDogcmVtb3ZlIFFFTVVfR05VQ19QUkVSRVEKMGEzZjQxMCBsaW51eC11c2VyOiByZW1vdmUg
R05VQyBjaGVjawo5Njc4YzFlIGNvbXBpbGVyOiByZW1vdmUgR05VQyBjaGVjawphMDEzOGY4IHhl
bjogcmVtb3ZlIEdOVUMgY2hlY2sKNDBmMzE3MCBwb2lzb246IHJlbW92ZSBHTlVDIGNoZWNrCmI4
MGY1Y2IgYXVkaW86IHJlbW92ZSBHTlVDICYgTVNWQyBjaGVjawpiNjM1ZjVmIGNvbXBpbGVyLmg6
IGV4cGxpY2l0IGNhc2UgZm9yIENsYW5nIHByaW50ZiBhdHRyaWJ1dGUKZDUyZjNjNCB2aXJ0aW9m
c2Q6IHJlcGxhY2UgX1N0YXRpY19hc3NlcnQgd2l0aCBRRU1VX0JVSUxEX0JVR19PTgo5YmJlMmEw
IHRlc3RzOiByZW1vdmUgR0NDIDwgNCBmYWxsYmFja3MKN2MzMzBjYiBxZW11LXBsdWdpbi5oOiBy
ZW1vdmUgR0NDIDwgNAo0MzRkZTVkIGNvbXBpbGVyLmg6IHJlbW92ZSBHQ0MgPCAzIF9fYnVpbHRp
bl9leHBlY3QgZmFsbGJhY2sKMDY5OWU3OCBhY2NlbC90Y2c6IFJlbW92ZSBzcGVjaWFsIGNhc2Ug
Zm9yIEdDQyA8IDQuNgowM2UyMzE4IHFlbXUvYXRvbWljOiBEcm9wIHNwZWNpYWwgY2FzZSBmb3Ig
dW5zdXBwb3J0ZWQgY29tcGlsZXIKCj09PSBPVVRQVVQgQkVHSU4gPT09CjEvMTMgQ2hlY2tpbmcg
Y29tbWl0IDAzZTIzMTgzZmI1NSAocWVtdS9hdG9taWM6IERyb3Agc3BlY2lhbCBjYXNlIGZvciB1
bnN1cHBvcnRlZCBjb21waWxlcikKMi8xMyBDaGVja2luZyBjb21taXQgMDY5OWU3OGEyNWZiIChh
Y2NlbC90Y2c6IFJlbW92ZSBzcGVjaWFsIGNhc2UgZm9yIEdDQyA8IDQuNikKV0FSTklORzogYXJj
aGl0ZWN0dXJlIHNwZWNpZmljIGRlZmluZXMgc2hvdWxkIGJlIGF2b2lkZWQKIzMwOiBGSUxFOiBh
Y2NlbC90Y2cvY3B1LWV4ZWMuYzo3Mjc6CisjaWYgZGVmaW5lZChfX2NsYW5nX18pCgp0b3RhbDog
MCBlcnJvcnMsIDEgd2FybmluZ3MsIDggbGluZXMgY2hlY2tlZAoKUGF0Y2ggMi8xMyBoYXMgc3R5
bGUgcHJvYmxlbXMsIHBsZWFzZSByZXZpZXcuICBJZiBhbnkgb2YgdGhlc2UgZXJyb3JzCmFyZSBm
YWxzZSBwb3NpdGl2ZXMgcmVwb3J0IHRoZW0gdG8gdGhlIG1haW50YWluZXIsIHNlZQpDSEVDS1BB
VENIIGluIE1BSU5UQUlORVJTLgozLzEzIENoZWNraW5nIGNvbW1pdCA0MzRkZTVkMjQ0MWEgKGNv
bXBpbGVyLmg6IHJlbW92ZSBHQ0MgPCAzIF9fYnVpbHRpbl9leHBlY3QgZmFsbGJhY2spCjQvMTMg
Q2hlY2tpbmcgY29tbWl0IDdjMzMwY2I2ZTQ0YyAocWVtdS1wbHVnaW4uaDogcmVtb3ZlIEdDQyA8
IDQpCjUvMTMgQ2hlY2tpbmcgY29tbWl0IDliYmUyYTAwYTIyOCAodGVzdHM6IHJlbW92ZSBHQ0Mg
PCA0IGZhbGxiYWNrcykKRVJST1I6IHNwYWNlIHByb2hpYml0ZWQgYmV0d2VlbiBmdW5jdGlvbiBu
YW1lIGFuZCBvcGVuIHBhcmVudGhlc2lzICcoJwojMzA6IEZJTEU6IHRlc3RzL3RjZy9hcm0vZmN2
dC5jOjc2OgorIyBkZWZpbmUgU05BTkYgKF9fYnVpbHRpbl9uYW5zZiAoIiIpKQoKRVJST1I6IHNw
YWNlIHByb2hpYml0ZWQgYmV0d2VlbiBmdW5jdGlvbiBuYW1lIGFuZCBvcGVuIHBhcmVudGhlc2lz
ICcoJwojMzE6IEZJTEU6IHRlc3RzL3RjZy9hcm0vZmN2dC5jOjc3OgorIyBkZWZpbmUgU05BTiAo
X19idWlsdGluX25hbnMgKCIiKSkKCkVSUk9SOiBzcGFjZSBwcm9oaWJpdGVkIGJldHdlZW4gZnVu
Y3Rpb24gbmFtZSBhbmQgb3BlbiBwYXJlbnRoZXNpcyAnKCcKIzMyOiBGSUxFOiB0ZXN0cy90Y2cv
YXJtL2ZjdnQuYzo3ODoKKyMgZGVmaW5lIFNOQU5MIChfX2J1aWx0aW5fbmFuc2wgKCIiKSkKCnRv
dGFsOiAzIGVycm9ycywgMCB3YXJuaW5ncywgMTQgbGluZXMgY2hlY2tlZAoKUGF0Y2ggNS8xMyBo
YXMgc3R5bGUgcHJvYmxlbXMsIHBsZWFzZSByZXZpZXcuICBJZiBhbnkgb2YgdGhlc2UgZXJyb3Jz
CmFyZSBmYWxzZSBwb3NpdGl2ZXMgcmVwb3J0IHRoZW0gdG8gdGhlIG1haW50YWluZXIsIHNlZQpD
SEVDS1BBVENIIGluIE1BSU5UQUlORVJTLgoKNi8xMyBDaGVja2luZyBjb21taXQgZDUyZjNjNDll
N2Y4ICh2aXJ0aW9mc2Q6IHJlcGxhY2UgX1N0YXRpY19hc3NlcnQgd2l0aCBRRU1VX0JVSUxEX0JV
R19PTikKNy8xMyBDaGVja2luZyBjb21taXQgYjYzNWY1ZjBmMWMwIChjb21waWxlci5oOiBleHBs
aWNpdCBjYXNlIGZvciBDbGFuZyBwcmludGYgYXR0cmlidXRlKQpXQVJOSU5HOiBhcmNoaXRlY3R1
cmUgc3BlY2lmaWMgZGVmaW5lcyBzaG91bGQgYmUgYXZvaWRlZAojMzg6IEZJTEU6IGluY2x1ZGUv
cWVtdS9jb21waWxlci5oOjEwMjoKKyNpZiBkZWZpbmVkKF9fY2xhbmdfXykKCnRvdGFsOiAwIGVy
cm9ycywgMSB3YXJuaW5ncywgMzAgbGluZXMgY2hlY2tlZAoKUGF0Y2ggNy8xMyBoYXMgc3R5bGUg
cHJvYmxlbXMsIHBsZWFzZSByZXZpZXcuICBJZiBhbnkgb2YgdGhlc2UgZXJyb3JzCmFyZSBmYWxz
ZSBwb3NpdGl2ZXMgcmVwb3J0IHRoZW0gdG8gdGhlIG1haW50YWluZXIsIHNlZQpDSEVDS1BBVENI
IGluIE1BSU5UQUlORVJTLgo4LzEzIENoZWNraW5nIGNvbW1pdCBiODBmNWNiMTUzYWUgKGF1ZGlv
OiByZW1vdmUgR05VQyAmIE1TVkMgY2hlY2spCjkvMTMgQ2hlY2tpbmcgY29tbWl0IDQwZjMxNzA2
YzU1NCAocG9pc29uOiByZW1vdmUgR05VQyBjaGVjaykKMTAvMTMgQ2hlY2tpbmcgY29tbWl0IGEw
MTM4ZjhiOGNhMSAoeGVuOiByZW1vdmUgR05VQyBjaGVjaykKMTEvMTMgQ2hlY2tpbmcgY29tbWl0
IDk2NzhjMWUxYjBkOCAoY29tcGlsZXI6IHJlbW92ZSBHTlVDIGNoZWNrKQoxMi8xMyBDaGVja2lu
ZyBjb21taXQgMGEzZjQxMDJiY2ZhIChsaW51eC11c2VyOiByZW1vdmUgR05VQyBjaGVjaykKMTMv
MTMgQ2hlY2tpbmcgY29tbWl0IDc3OGEyZTMxYzVjZSAoY29tcGlsZXIuaDogcmVtb3ZlIFFFTVVf
R05VQ19QUkVSRVEpCj09PSBPVVRQVVQgRU5EID09PQoKVGVzdCBjb21tYW5kIGV4aXRlZCB3aXRo
IGNvZGU6IDEKCgpUaGUgZnVsbCBsb2cgaXMgYXZhaWxhYmxlIGF0Cmh0dHA6Ly9wYXRjaGV3Lm9y
Zy9sb2dzLzIwMjAxMjEwMTM0NzUyLjc4MDkyMy0xLW1hcmNhbmRyZS5sdXJlYXVAcmVkaGF0LmNv
bS90ZXN0aW5nLmNoZWNrcGF0Y2gvP3R5cGU9bWVzc2FnZS4KLS0tCkVtYWlsIGdlbmVyYXRlZCBh
dXRvbWF0aWNhbGx5IGJ5IFBhdGNoZXcgW2h0dHBzOi8vcGF0Y2hldy5vcmcvXS4KUGxlYXNlIHNl
bmQgeW91ciBmZWVkYmFjayB0byBwYXRjaGV3LWRldmVsQHJlZGhhdC5jb20=


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 14:20:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 14:20:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49329.87223 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMo6-0003J5-0i; Thu, 10 Dec 2020 14:20:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49329.87223; Thu, 10 Dec 2020 14:20:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMo5-0003Iy-T4; Thu, 10 Dec 2020 14:20:33 +0000
Received: by outflank-mailman (input) for mailman id 49329;
 Thu, 10 Dec 2020 14:20:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knMo4-0003Iq-W0; Thu, 10 Dec 2020 14:20:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knMo4-0000J1-KJ; Thu, 10 Dec 2020 14:20:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knMo4-0006gM-CR; Thu, 10 Dec 2020 14:20:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1knMo4-0007pZ-Bz; Thu, 10 Dec 2020 14:20:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=UUqXxPSsItJ5010qRD0JUk/Gdag5QMLhHX1UyXLQtRI=; b=jE8umKoHrrjmXFzTqh0ahxKpDK
	/ozkmdOKUc6RAomu0Jny/p2t2+wzwd5K8JBWd+0D38gt6FJFWM4XajduwxbIhbhehhSpXhCQe+1Pz
	EOnXzz1eEjusuUnq2gqLjerKO2W4lrzXVxvvoQf90HqPHE7MYVKnKEKtbGfrwmxydAbo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [ovmf bisection] complete test-amd64-i386-xl-qemuu-ovmf-amd64
Message-Id: <E1knMo4-0007pZ-Bz@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Dec 2020 14:20:32 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl-qemuu-ovmf-amd64
testid debian-hvm-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf https://github.com/tianocore/edk2.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  ovmf https://github.com/tianocore/edk2.git
  Bug introduced:  cee5b0441af39dd6f76cc4e0447a1c7f788cbb00
  Bug not present: 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/157384/


  commit cee5b0441af39dd6f76cc4e0447a1c7f788cbb00
  Author: Guo Dong <guo.dong@intel.com>
  Date:   Wed Dec 2 14:18:18 2020 -0700
  
      UefiCpuPkg/CpuDxe: Fix boot error
      
      REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3084
      
      When DXE drivers are dispatched above 4GB memory and
      the system is already in 64-bit mode, the address of
      setCodeSelectorLongJump on the stack will be overridden
      by a parameter, so change it to use a 64-bit address
      and jump to a qword address.
      
      Signed-off-by: Guo Dong <guo.dong@intel.com>
      Reviewed-by: Ray Ni <ray.ni@intel.com>
      Reviewed-by: Eric Dong <eric.dong@intel.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/ovmf/test-amd64-i386-xl-qemuu-ovmf-amd64.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/ovmf/test-amd64-i386-xl-qemuu-ovmf-amd64.debian-hvm-install --summary-out=tmp/157384.bisection-summary --basis-template=157345 --blessings=real,real-bisect,real-retry ovmf test-amd64-i386-xl-qemuu-ovmf-amd64 debian-hvm-install
Searching for failure / basis pass:
 157366 fail [host=pinot1] / 157345 [host=elbling0] 157338 [host=chardonnay0] 157333 [host=fiano1] 157323 [host=pinot0] 157255 [host=elbling1] 157214 [host=huxelrebe1] 157204 [host=chardonnay1] 157194 [host=huxelrebe0] 157191 [host=albana0] 157184 [host=albana1] 157178 [host=fiano1] 157167 [host=fiano0] 157117 [host=chardonnay0] 157104 [host=elbling0] 157060 [host=rimava1] 157055 [host=pinot0] 157042 [host=elbling1] 157025 [host=chardonnay1] 157018 ok.
Failure / basis pass flights: 157366 / 157018
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf https://github.com/tianocore/edk2.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 272a1db63a09087ce3da4cf44ec7b758611ff1ed 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e9d62effa37ea13fe04fc89b21d2de7776f183a2 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 8147e00e4fbfcc43b665dc6bf279b204c501ba04
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 https://github.com/tianocore/edk2.git#e9d62effa37ea13fe04fc89b21d2de7776f183a2-272a1db63a09087ce3da4cf44ec7b758611ff1ed git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c743\
 7ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/osstest/seabios.git#748d619be3282fba35f99446098ac2d0579f6063-748d619be3282fba35f99446098ac2d0579f6063 git://xenbits.xen.org/xen.git#8147e00e4fbfcc43b665dc6bf279b204c501ba04-777e3590f154e6a8af560dd318b9465fa168db20
Loaded 10001 nodes in revision graph
Searching for test results:
 157018 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e9d62effa37ea13fe04fc89b21d2de7776f183a2 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 8147e00e4fbfcc43b665dc6bf279b204c501ba04
 157025 [host=chardonnay1]
 157042 [host=elbling1]
 157055 [host=pinot0]
 157060 [host=rimava1]
 157104 [host=elbling0]
 157117 [host=chardonnay0]
 157167 [host=fiano0]
 157178 [host=fiano1]
 157184 [host=albana1]
 157191 [host=albana0]
 157194 [host=huxelrebe0]
 157204 [host=chardonnay1]
 157214 [host=huxelrebe1]
 157255 [host=elbling1]
 157323 [host=pinot0]
 157333 [host=fiano1]
 157338 [host=chardonnay0]
 157345 [host=elbling0]
 157348 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 272a1db63a09087ce3da4cf44ec7b758611ff1ed 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
 157352 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e9d62effa37ea13fe04fc89b21d2de7776f183a2 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 8147e00e4fbfcc43b665dc6bf279b204c501ba04
 157356 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 272a1db63a09087ce3da4cf44ec7b758611ff1ed 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
 157357 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 375e9b190e37041129b35a1c667993ea145e5b7e 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 be3755af37263833cb3b1c6b1f2ba219bdf97ec3
 157354 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 272a1db63a09087ce3da4cf44ec7b758611ff1ed 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
 157360 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4b69fab6e20a98f56acd3c717bd53812950fe5b5 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 30d430b2126697dda0bd53d19fe267fb4d30e9b8
 157363 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ee78edceca89057ab9854f7e5070391a8229ece4 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
 157367 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4b69fab6e20a98f56acd3c717bd53812950fe5b5 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
 157370 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 7061294be500de021bef3d4bc5218134d223315f 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
 157372 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
 157375 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cee5b0441af39dd6f76cc4e0447a1c7f788cbb00 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
 157378 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
 157366 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 272a1db63a09087ce3da4cf44ec7b758611ff1ed 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
 157380 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cee5b0441af39dd6f76cc4e0447a1c7f788cbb00 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
 157382 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
 157384 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cee5b0441af39dd6f76cc4e0447a1c7f788cbb00 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
Searching for interesting versions
 Result found: flight 157018 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20, results HASH(0x55ebf437e0b0) HASH(0x55ebf38fb810) HASH(0x55ebf43891a8) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1\
 e6a472b0eb9558310b518f0dfcd8860 4b69fab6e20a98f56acd3c717bd53812950fe5b5 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20, results HASH(0x55ebf4395800) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4b69fab6e20a98f56acd3c717bd53812950fe5b5 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0\
 bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 30d430b2126697dda0bd53d19fe267fb4d30e9b8, results HASH(0x55ebf43866f8) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 375e9b190e37041129b35a1c667993ea145e5b7e 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 be3755af37263833cb3b1c6b1f2ba219bdf97ec3, results HASH(0x55ebf4387c20) For basis\
  failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e9d62effa37ea13fe04fc89b21d2de7776f183a2 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 8147e00e4fbfcc43b665dc6bf279b204c501ba04, results HASH(0x55ebf438bab0) HASH(0x55ebf4396728) Result found: flight 157348 (fail), for basis failure (at ancestor ~5611)
 Repro found: flight 157352 (pass), for basis pass
 Repro found: flight 157354 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
No revisions left to test, checking graph state.
 Result found: flight 157372 (pass), for last pass
 Result found: flight 157375 (fail), for first failure
 Repro found: flight 157378 (pass), for last pass
 Repro found: flight 157380 (fail), for first failure
 Repro found: flight 157382 (pass), for last pass
 Repro found: flight 157384 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  ovmf https://github.com/tianocore/edk2.git
  Bug introduced:  cee5b0441af39dd6f76cc4e0447a1c7f788cbb00
  Bug not present: 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/157384/


  commit cee5b0441af39dd6f76cc4e0447a1c7f788cbb00
  Author: Guo Dong <guo.dong@intel.com>
  Date:   Wed Dec 2 14:18:18 2020 -0700
  
      UefiCpuPkg/CpuDxe: Fix boot error
      
      REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3084
      
      When DXE drivers are dispatched above 4GB memory and
      the system is already in 64bit mode, the address
      setCodeSelectorLongJump on the stack will be overridden
      by a parameter, so change to use a 64bit address and
      jump to a qword address.
      
      Signed-off-by: Guo Dong <guo.dong@intel.com>
      Reviewed-by: Ray Ni <ray.ni@intel.com>
      Reviewed-by: Eric Dong <eric.dong@intel.com>

Revision graph left in /home/logs/results/bisect/ovmf/test-amd64-i386-xl-qemuu-ovmf-amd64.debian-hvm-install.{dot,ps,png,html,svg}.
----------------------------------------
157384: tolerable ALL FAIL

flight 157384 ovmf real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/157384/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail baseline untested


jobs:
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 14:26:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 14:26:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49337.87238 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMu4-0003dm-P9; Thu, 10 Dec 2020 14:26:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49337.87238; Thu, 10 Dec 2020 14:26:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMu4-0003df-KN; Thu, 10 Dec 2020 14:26:44 +0000
Received: by outflank-mailman (input) for mailman id 49337;
 Thu, 10 Dec 2020 14:26:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Anpp=FO=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1knMu2-0003da-ND
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 14:26:42 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 5ed05038-207d-4f7e-bad3-00de52d3f290;
 Thu, 10 Dec 2020 14:26:41 +0000 (UTC)
Received: from mail-wm1-f69.google.com (mail-wm1-f69.google.com
 [209.85.128.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-514-13Qwd3tkOn-axT977J48wg-1; Thu, 10 Dec 2020 09:26:40 -0500
Received: by mail-wm1-f69.google.com with SMTP id w204so1293804wmb.1
 for <xen-devel@lists.xenproject.org>; Thu, 10 Dec 2020 06:26:39 -0800 (PST)
Received: from [192.168.1.36] (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id b9sm9483846wmd.32.2020.12.10.06.26.36
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 10 Dec 2020 06:26:37 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5ed05038-207d-4f7e-bad3-00de52d3f290
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607610401;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wwzUYSWq/h6EtaDoCvUeshJJpFiXqC6zBbdC+3h4P3k=;
	b=PBRQfeaHriwLyAXVmlv/JplQaG9lrRd/D+rmZ+S0fl2BMJmH79/TTgraV9IBULy67HnnQl
	6mCHEJyWz9A7qy29YB7F7rtExVwc1kLWYegHPgxbhVhWki+2fakKnhZUdx/MoLRmmmpz8U
	hv7gKU5z3vAcz+abE7PWqL/HVy3u8eY=
X-MC-Unique: 13Qwd3tkOn-axT977J48wg-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=wwzUYSWq/h6EtaDoCvUeshJJpFiXqC6zBbdC+3h4P3k=;
        b=hmXzCkhHAMuM3H+qQhNhfQUNfK8Ws0pHHWzu3Z2pvcKbSAfS319x5eTEJ9H6uCVa1K
         FLjXomy8cS077coxo1FRJpjqB5dlw9zYdLXN6WAs2hhmLHxeBgBY+DpGDoZy0R8Az46J
         vqORnZUKLEbdtMod8ZxsARgWNHtWGLdigs9hS2U4XzJRjNZo3+2QfA1rzOIHNuEhaOXB
         1oOfSXnzEgfTb8RZjL3usTCWjFsp2sBlEbH1PH44xKGACgS632+T2MDGWFx08icM2vYX
         1SRhJmc+uwCk+6lx4kaaHTwcA297WjuiCoVpAltr8h6093h6Hs3jzTgKetLwcEfm5rjx
         Ho4w==
X-Gm-Message-State: AOAM53126/9roSNblvHmf9wMP/xVmn3vmKVaRtojjsRLHGzRs4P3NaTC
	w7RmYV7f7IoNCoPM9m5t2UIvTTeLC1ugAKf3U6evVFuRqfVvKZcdlGmZstUJt3lZNR6CELw6iEG
	Ltz4LDXCOar3VqNjI9zMOd9bD+/8=
X-Received: by 2002:a1c:4c14:: with SMTP id z20mr8646313wmf.149.1607610398621;
        Thu, 10 Dec 2020 06:26:38 -0800 (PST)
X-Google-Smtp-Source: ABdhPJzaUj7tpOFO2508Lw7v7CE5X685qxDLYndPVRSbnJIXc50ft6ZIJ81hdHJIppshYmFGwc18AA==
X-Received: by 2002:a1c:4c14:: with SMTP id z20mr8646296wmf.149.1607610398481;
        Thu, 10 Dec 2020 06:26:38 -0800 (PST)
Subject: Re: [PATCH v3 08/13] audio: remove GNUC & MSVC check
To: marcandre.lureau@redhat.com, qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
 Laurent Vivier <laurent@vivier.eu>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org, Stefan Hajnoczi <stefanha@redhat.com>,
 Gerd Hoffmann <kraxel@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-arm@nongnu.org,
 Paolo Bonzini <pbonzini@redhat.com>, Peter Maydell <peter.maydell@linaro.org>
References: <20201210134752.780923-1-marcandre.lureau@redhat.com>
 <20201210134752.780923-9-marcandre.lureau@redhat.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <a162ec7c-4dc3-5784-866e-dc95f6919b1f@redhat.com>
Date: Thu, 10 Dec 2020 15:26:35 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201210134752.780923-9-marcandre.lureau@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12/10/20 2:47 PM, marcandre.lureau@redhat.com wrote:
> From: Marc-André Lureau <marcandre.lureau@redhat.com>
> 
> QEMU requires either GCC or Clang, which both advertize __GNUC__.
> Drop MSVC fallback path.
> 
> Note: I intentionally left further cleanups for a later work.
> 
> Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
> ---
>  audio/audio.c | 8 +-------
>  1 file changed, 1 insertion(+), 7 deletions(-)
> 
> diff --git a/audio/audio.c b/audio/audio.c
> index 46578e4a58..d7a00294de 100644
> --- a/audio/audio.c
> +++ b/audio/audio.c
> @@ -122,13 +122,7 @@ int audio_bug (const char *funcname, int cond)
>  
>  #if defined AUDIO_BREAKPOINT_ON_BUG
>  #  if defined HOST_I386
> -#    if defined __GNUC__
> -        __asm__ ("int3");
> -#    elif defined _MSC_VER
> -        _asm _emit 0xcc;
> -#    else
> -        abort ();
> -#    endif
> +      __asm__ ("int3");

This was 15 years ago... Why not simply use abort() today?

>  #  else
>          abort ();
>  #  endif
> 

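[The breakpoint-on-bug idiom under discussion can be sketched as follows. This is a hypothetical, self-contained reconstruction for illustration only, not the actual QEMU code: `audio_bug_sketch` is an invented name, and the compiler/architecture guards are one plausible arrangement of the `int3`-versus-`abort()` choice being debated.]

```c
#include <stdlib.h>

/* Hypothetical sketch of the idiom discussed above: when `cond`
 * signals a bug, either trap into an attached debugger (GNU-style
 * compilers on x86, via the int3 breakpoint instruction) or fall
 * back to abort(), as suggested in review.  Not QEMU's code. */
static int audio_bug_sketch(const char *funcname, int cond)
{
    (void)funcname;          /* would be logged in a real implementation */
    if (cond) {
#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
        __asm__ ("int3");    /* raises SIGTRAP; a debugger stops here */
#else
        abort();             /* portable fallback */
#endif
    }
    return cond;             /* caller can still branch on the condition */
}
```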


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 14:27:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 14:27:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49341.87250 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMue-0003k9-4M; Thu, 10 Dec 2020 14:27:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49341.87250; Thu, 10 Dec 2020 14:27:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMue-0003k2-11; Thu, 10 Dec 2020 14:27:20 +0000
Received: by outflank-mailman (input) for mailman id 49341;
 Thu, 10 Dec 2020 14:27:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Anpp=FO=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1knMuc-0003jv-NM
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 14:27:18 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 2e5a1164-de01-4467-a314-458220aa727d;
 Thu, 10 Dec 2020 14:27:18 +0000 (UTC)
Received: from mail-wr1-f69.google.com (mail-wr1-f69.google.com
 [209.85.221.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-581-mGaSwvYOO1Gg3ISAz0zdqQ-1; Thu, 10 Dec 2020 09:27:15 -0500
Received: by mail-wr1-f69.google.com with SMTP id u29so2008987wru.6
 for <xen-devel@lists.xenproject.org>; Thu, 10 Dec 2020 06:27:15 -0800 (PST)
Received: from [192.168.1.36] (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id c10sm8484225wrb.92.2020.12.10.06.27.12
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 10 Dec 2020 06:27:13 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2e5a1164-de01-4467-a314-458220aa727d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607610437;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=8rn2VH4T8LuIT+s6rlqEBKnGBqM/nvNt/ScI69+xP3I=;
	b=fNyffFdKY0BoLStxXVHzKRO/PX8plME2OWWZrFLttEByl7OHBKSQxlePx0fD6KpFDs6Fx1
	nFWuoKfM6jS4c4RQ/FsR4X16AWTV/MnG3Nu3igS7yd/9X2I316KZJe2C+zHfogSKQ7txj+
	MOKjq2YjKNJmonDVdoT9V3ApRGkjkro=
X-MC-Unique: mGaSwvYOO1Gg3ISAz0zdqQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=8rn2VH4T8LuIT+s6rlqEBKnGBqM/nvNt/ScI69+xP3I=;
        b=Y1umiMvo7dHgRHi3dIFBOTseTgLTd6EvvW6DDUogpLE6u1QA9+cMwUMKiG/AJS2EIz
         SMe9KoqUXXpdyV/qVBaC17cruLJi60vpRWhEIFW6ieyrnV5MaE9nD4G/gNxJBDLzzZzo
         cDxpPRo2xFhK/W8CA65HuTwMSMagMRMf/UzSmWPCeveCz/K3BQscyPqshdi2UVSyLc7M
         NaThgMs9k83LadURPA9Gfr8hJUJ8Fgg1/mjg1kTWdWOI5MZ5BkwSIh4jrEmWxWpkP4OX
         X62cpXokmAJIiyVPDjcWuQoW+7cUPo7GGJjEwHyr4SS+yHxY0KleqIXKd2VZJf/PTMuv
         1XhQ==
X-Gm-Message-State: AOAM530xDXulsj509BK70TG7Stiv1/1VMZxVELqenHu2eSDJZeF8R+lO
	/LhwKnRYSh/++H45e5G75yDrvaoKjsSTvV821oocFh7kTLhV7/WeCM+DZeZ+ZVwWj8P1kXLRMcL
	+Gh1Nl4Gxmmrvlmaa2kT19gUAOpo=
X-Received: by 2002:a1c:2646:: with SMTP id m67mr8479009wmm.81.1607610434242;
        Thu, 10 Dec 2020 06:27:14 -0800 (PST)
X-Google-Smtp-Source: ABdhPJyJWAdz+KqZMUM8QDoTMVk+MZI2a3+4LfwDInoEr+NawwT5wR47Yw7MbaR5q6dQWccikQgqIQ==
X-Received: by 2002:a1c:2646:: with SMTP id m67mr8478997wmm.81.1607610434130;
        Thu, 10 Dec 2020 06:27:14 -0800 (PST)
Subject: Re: [PATCH v3 09/13] poison: remove GNUC check
To: marcandre.lureau@redhat.com, qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
 Laurent Vivier <laurent@vivier.eu>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org, Stefan Hajnoczi <stefanha@redhat.com>,
 Gerd Hoffmann <kraxel@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-arm@nongnu.org,
 Paolo Bonzini <pbonzini@redhat.com>, Peter Maydell <peter.maydell@linaro.org>
References: <20201210134752.780923-1-marcandre.lureau@redhat.com>
 <20201210134752.780923-10-marcandre.lureau@redhat.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <ec8d32e4-11ca-8a59-c021-5b212b8f6d78@redhat.com>
Date: Thu, 10 Dec 2020 15:27:11 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201210134752.780923-10-marcandre.lureau@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12/10/20 2:47 PM, marcandre.lureau@redhat.com wrote:
> From: Marc-André Lureau <marcandre.lureau@redhat.com>
> 
> QEMU requires Clang or GCC, that define and support __GNUC__ extensions
> 
> Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
> Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
> ---
>  include/exec/poison.h | 2 --
>  1 file changed, 2 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 14:27:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 14:27:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49346.87262 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMv6-0003qo-De; Thu, 10 Dec 2020 14:27:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49346.87262; Thu, 10 Dec 2020 14:27:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMv6-0003qh-AZ; Thu, 10 Dec 2020 14:27:48 +0000
Received: by outflank-mailman (input) for mailman id 49346;
 Thu, 10 Dec 2020 14:27:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t4S1=FO=linaro.org=peter.maydell@srs-us1.protection.inumbo.net>)
 id 1knMv5-0003qU-21
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 14:27:47 +0000
Received: from mail-ej1-x643.google.com (unknown [2a00:1450:4864:20::643])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bee6f3a5-1ecf-4624-87e9-d78374f634e4;
 Thu, 10 Dec 2020 14:27:46 +0000 (UTC)
Received: by mail-ej1-x643.google.com with SMTP id bo9so7549825ejb.13
 for <xen-devel@lists.xenproject.org>; Thu, 10 Dec 2020 06:27:46 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bee6f3a5-1ecf-4624-87e9-d78374f634e4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=h0ZkNmdrRk1QmBSvbXHC8b9Yk7HUiLC1aWMMCjxRxnY=;
        b=F4QheKpiJJxVg1rHl8Qc5ltjP127YbCLivdRTsGvE/XgdZc4gFui1VMaywti47I6x+
         wTN6MdEF2x/sMYb8G1GowTyblyKhQyTl/dbsFBnLJk9k0hu0t6QczluJUnNcHp8baCae
         2E0jP0hXt/1s/BKOzp2M5R2JellblkE1jgjiGwETIGouxXMEDxZ4AekqsqCC7gE64ysC
         Rg16qM3W1/LT0hxPqs1JXkaJ/3/pP2QNmgEFeXFLmVsB6eUoiESufJj58d1A5D/6rerG
         X1Ysd5zsc3glD0HXcqM6kom6yR+d52wZwzFm572+S2LeuxOQtu207RB0pOg5bOip5+hZ
         n2Iw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=h0ZkNmdrRk1QmBSvbXHC8b9Yk7HUiLC1aWMMCjxRxnY=;
        b=nfWC9/yEZRKVmf3+LWZm1OgofPZn6ElnY+yi6i87PKR2bIPugIojb/BaVIxhLonBlb
         CdsLdSI4+O8z1ZmT1qJ1FlQxDDa1J35nAczeHR3WNXyuLOR1eMbmKFNoCs+spx/YqOMf
         FDlhqyseEo26z5mYmwKUaBnvlSlYqTW+LtF9kXaoQICyq1tZHxbbqVQ+pDGwfxb9wa4o
         RG0Hb20LMzCzBbXvFvnTWz6w01gyKgR2lGpfMWR7FoK4WcRxWTWsq77m7qEc+IHCKR6n
         npZ43xI5mwlE1ocnrVdcfgqkky/KIGLldCasO3IKMUr8HBFel95720FSKhMbXOASi/jX
         dBbA==
X-Gm-Message-State: AOAM532t1nSNJZizuUJai+YwfmEpc+oHRnJ3VELZXpiQMUBLgoLxQdXQ
	08KBq75wpm3RfQuzIPmJeG8yfS+m18q/RXia985mCw==
X-Google-Smtp-Source: ABdhPJxKojA/+/fWdEygGDSNtO/TGC7C2s8P2PjilRSbR6HuRNhJZ6sQ54SA4GXtiOiUK4RRINyXTzehCjna6e5aCn4=
X-Received: by 2002:a17:906:31d2:: with SMTP id f18mr6606354ejf.407.1607610465440;
 Thu, 10 Dec 2020 06:27:45 -0800 (PST)
MIME-Version: 1.0
References: <20201210134752.780923-1-marcandre.lureau@redhat.com>
 <20201210134752.780923-9-marcandre.lureau@redhat.com> <a162ec7c-4dc3-5784-866e-dc95f6919b1f@redhat.com>
In-Reply-To: <a162ec7c-4dc3-5784-866e-dc95f6919b1f@redhat.com>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Thu, 10 Dec 2020 14:27:34 +0000
Message-ID: <CAFEAcA_xbEm+DmP5hixtkzWJK1fi8X7wZh+eGKmDneZv_=-xbA@mail.gmail.com>
Subject: Re: [PATCH v3 08/13] audio: remove GNUC & MSVC check
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>
Cc: =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>, 
	QEMU Developers <qemu-devel@nongnu.org>, Richard Henderson <richard.henderson@linaro.org>, 
	Laurent Vivier <laurent@vivier.eu>, Paul Durrant <paul@xen.org>, 
	"open list:X86" <xen-devel@lists.xenproject.org>, Stefan Hajnoczi <stefanha@redhat.com>, 
	Gerd Hoffmann <kraxel@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Anthony Perard <anthony.perard@citrix.com>, "Dr. David Alan Gilbert" <dgilbert@redhat.com>, 
	qemu-arm <qemu-arm@nongnu.org>, Paolo Bonzini <pbonzini@redhat.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 10 Dec 2020 at 14:26, Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
>
> On 12/10/20 2:47 PM, marcandre.lureau@redhat.com wrote:
> > From: Marc-André Lureau <marcandre.lureau@redhat.com>
> >
> > QEMU requires either GCC or Clang, which both advertize __GNUC__.
> > Drop MSVC fallback path.
> >
> > Note: I intentionally left further cleanups for a later work.
> >
> > Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
> > ---
> >  audio/audio.c | 8 +-------
> >  1 file changed, 1 insertion(+), 7 deletions(-)
> >
> > diff --git a/audio/audio.c b/audio/audio.c
> > index 46578e4a58..d7a00294de 100644
> > --- a/audio/audio.c
> > +++ b/audio/audio.c
> > @@ -122,13 +122,7 @@ int audio_bug (const char *funcname, int cond)
> >
> >  #if defined AUDIO_BREAKPOINT_ON_BUG
> >  #  if defined HOST_I386
> > -#    if defined __GNUC__
> > -        __asm__ ("int3");
> > -#    elif defined _MSC_VER
> > -        _asm _emit 0xcc;
> > -#    else
> > -        abort ();
> > -#    endif
> > +      __asm__ ("int3");
>
> This was 15 years ago... Why not simply use abort() today?

That's what I suggested when I looked at this patch in
the previous version of the patchset, yes...

-- PMM


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 14:28:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 14:28:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49349.87274 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMvK-0003vp-Ms; Thu, 10 Dec 2020 14:28:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49349.87274; Thu, 10 Dec 2020 14:28:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knMvK-0003vh-Jj; Thu, 10 Dec 2020 14:28:02 +0000
Received: by outflank-mailman (input) for mailman id 49349;
 Thu, 10 Dec 2020 14:28:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Anpp=FO=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1knMvJ-0003vR-BN
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 14:28:01 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id ef049fca-65dd-42a5-a523-94518638f57f;
 Thu, 10 Dec 2020 14:28:00 +0000 (UTC)
Received: from mail-wr1-f72.google.com (mail-wr1-f72.google.com
 [209.85.221.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-502-wJxBrpx2Nc-d0yrBhGMlXA-1; Thu, 10 Dec 2020 09:27:58 -0500
Received: by mail-wr1-f72.google.com with SMTP id g16so698110wrv.1
 for <xen-devel@lists.xenproject.org>; Thu, 10 Dec 2020 06:27:58 -0800 (PST)
Received: from [192.168.1.36] (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id c81sm10499615wmd.6.2020.12.10.06.27.55
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 10 Dec 2020 06:27:56 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef049fca-65dd-42a5-a523-94518638f57f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607610480;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=H6gmFIInvImD2kayUKLyvQaKlpV11gJ7jcME2e0PUB8=;
	b=hz+/Z64fpHPN+Lnbx9W3iPDGU+MQHJdxRQ+oo26mGExJuL/FV7HqjQK8gTqJUlYTw1G8VN
	U5FRUoxmxLwt39oxykDtFwT7nTw/OxEo+EUlHK4UZm68aMSpjsas7IxCn97hYFWsxS5CA4
	BeB0Oz5RslNAcdEEqta+amALbfGQiRU=
X-MC-Unique: wJxBrpx2Nc-d0yrBhGMlXA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=H6gmFIInvImD2kayUKLyvQaKlpV11gJ7jcME2e0PUB8=;
        b=bQqGV0iEbq1GB6QPlntRuZs8t6NrPEWfpkM/KNGeDMNblv8zmvvZC97zw2B7mpfIWz
         okAI7dPKyFObTaj0S1cQ0EKUNQTJlfnh2ID1+BVrNdlxjck2P9597fGfrVtKKdoFpj9G
         T2u4NM+WQSMOtvd+j+sos8Wbe/oVTmMrImZizfWbn5DhD4mZ9ev9Vr9VUv6yi+1WPuA7
         ljQy2S+tPwQQr/O0rfMX3rYE0JB725QHY8L69C0tcMCdQG2RHFjQY37/x0E2/4jgpwfi
         RcUr6IUBgHjJ85K+wkfRWP4nq/dgtcF8m/W1+CXULJIMv1O11GoH9pl093GT57eLESh9
         FAVg==
X-Gm-Message-State: AOAM533qZuioLz0gO9WqN66Ju6pDgPJdu7R0gyQ2wG8UJScj2A88zrpC
	Jbcm8VD0yNbTtUtZXsa06Hgv/a3em28TC19RevVXcJQPfQPQFaZoGU+T1Rhp10DY84Tw7lxNHkJ
	q1gP/Pa/x+hWGMXyCOtMPj9K/qZU=
X-Received: by 2002:a5d:6045:: with SMTP id j5mr1454233wrt.223.1607610477558;
        Thu, 10 Dec 2020 06:27:57 -0800 (PST)
X-Google-Smtp-Source: ABdhPJy7bHwU1JSlDFVxpuvOX/Iiux9ODs4iDn2SnVkkhqy/VoN5VHkkduwq/x4vp8pvjDYzkbi8dw==
X-Received: by 2002:a5d:6045:: with SMTP id j5mr1454216wrt.223.1607610477406;
        Thu, 10 Dec 2020 06:27:57 -0800 (PST)
Subject: Re: [PATCH v3 12/13] linux-user: remove GNUC check
To: marcandre.lureau@redhat.com, qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
 Laurent Vivier <laurent@vivier.eu>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org, Stefan Hajnoczi <stefanha@redhat.com>,
 Gerd Hoffmann <kraxel@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-arm@nongnu.org,
 Paolo Bonzini <pbonzini@redhat.com>, Peter Maydell <peter.maydell@linaro.org>
References: <20201210134752.780923-1-marcandre.lureau@redhat.com>
 <20201210134752.780923-13-marcandre.lureau@redhat.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <b87fb20f-5b0e-2995-fe03-968391d3dce9@redhat.com>
Date: Thu, 10 Dec 2020 15:27:54 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201210134752.780923-13-marcandre.lureau@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12/10/20 2:47 PM, marcandre.lureau@redhat.com wrote:
> From: Marc-André Lureau <marcandre.lureau@redhat.com>
> 
> QEMU requires Clang or GCC, that define and support __GNUC__ extensions.
> 
> Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
> Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
> ---
>  linux-user/strace.c | 4 ----
>  1 file changed, 4 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 14:33:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 14:33:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49361.87286 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knN04-00050r-At; Thu, 10 Dec 2020 14:32:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49361.87286; Thu, 10 Dec 2020 14:32:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knN04-00050k-7h; Thu, 10 Dec 2020 14:32:56 +0000
Received: by outflank-mailman (input) for mailman id 49361;
 Thu, 10 Dec 2020 14:32:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Anpp=FO=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1knN02-00050f-D5
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 14:32:54 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 614244d8-b27b-4c1e-a11a-c37b5ae26ba0;
 Thu, 10 Dec 2020 14:32:53 +0000 (UTC)
Received: from mail-wr1-f72.google.com (mail-wr1-f72.google.com
 [209.85.221.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-32-cvAPxXi3PPiANXlANUmJMw-1; Thu, 10 Dec 2020 09:32:49 -0500
Received: by mail-wr1-f72.google.com with SMTP id p18so1988321wro.9
 for <xen-devel@lists.xenproject.org>; Thu, 10 Dec 2020 06:32:49 -0800 (PST)
Received: from [192.168.1.36] (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id z13sm9898753wmz.3.2020.12.10.06.32.46
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 10 Dec 2020 06:32:47 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 614244d8-b27b-4c1e-a11a-c37b5ae26ba0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607610773;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=rtNjAtow8SQkrLmz8+XRRA+YtRhz+KF6e43NFXvxDSI=;
	b=IW7tPMlmxQR+yENluLsy7ds+4O4Kpa2rm00nb1e2dPQ2WrjKDHQtXCilEQR20GCK8q36d0
	OzKai3LzdY7y0rsQtU/kW8+YOydsR/GTQRouG4q41UGrkADjxSDtk6XZnbrfg7b6RSd0qG
	ddT5RqSII1iMcRPgt86rE0HPWag78Rg=
X-MC-Unique: cvAPxXi3PPiANXlANUmJMw-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=rtNjAtow8SQkrLmz8+XRRA+YtRhz+KF6e43NFXvxDSI=;
        b=ZQTKNK8Ey7j9E0rhjzOQbsyxVyJg5L+Lpa1WQJUtLz9Q8AKlbk+VCtRR2ihZHVLI6t
         hTH+5TvBPfxI/q3Jh2Nt10m2gQuzHklgWP0a1ywhcYVbiBmUH8cKHK/hzMlpiuAcb5aX
         9BXdm/+VHZfm2l+/uCn8QAv3iEe9UEWo25G5auj9qA224m38HgU3vYg77fa55eLIwnEg
         jvdSLebOA+veb/iTdomx0FX5ZwjBSNOBT1fcWk8nY4qfh26VzePU6Ak4j/7NZ86tk5mj
         YMtVzYyvn27vitgsdgg0BVhTnU/SZaZjFBYJVaM18Qq0kpxPEBxTCizPawQfBJmcc4gw
         NTgw==
X-Gm-Message-State: AOAM531YQw7ZrZL6q+B601pDUso+HpFd766SO46ynrjVsqkWZY0AaiU4
	0Og+9CmHigbj7KlKeum/6i9C3Jj9DdCFuDrLlap8SVRcInBwjqq9bIefU4bBQB2ze3bjdTr22MH
	Yni+EEkw8BGF0GdU8HlT05sJpM+w=
X-Received: by 2002:a7b:c04c:: with SMTP id u12mr8855716wmc.185.1607610768368;
        Thu, 10 Dec 2020 06:32:48 -0800 (PST)
X-Google-Smtp-Source: ABdhPJyvghq2Cu/Yzn0V6T8HyFCC4QgkWnmvztae8ydvZyqC4LvjfyNnoaYdKanep7mP2UKqi7czEA==
X-Received: by 2002:a7b:c04c:: with SMTP id u12mr8855692wmc.185.1607610768234;
        Thu, 10 Dec 2020 06:32:48 -0800 (PST)
Subject: Re: [PATCH v3 03/13] compiler.h: remove GCC < 3 __builtin_expect
 fallback
To: marcandre.lureau@redhat.com, qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
 Laurent Vivier <laurent@vivier.eu>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org, Stefan Hajnoczi <stefanha@redhat.com>,
 Gerd Hoffmann <kraxel@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-arm@nongnu.org,
 Paolo Bonzini <pbonzini@redhat.com>, Peter Maydell <peter.maydell@linaro.org>
References: <20201210134752.780923-1-marcandre.lureau@redhat.com>
 <20201210134752.780923-4-marcandre.lureau@redhat.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <fead8bf1-7848-8809-c67a-e6354e7b5cf7@redhat.com>
Date: Thu, 10 Dec 2020 15:32:46 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201210134752.780923-4-marcandre.lureau@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12/10/20 2:47 PM, marcandre.lureau@redhat.com wrote:
> From: Marc-André Lureau <marcandre.lureau@redhat.com>
> 
> Since commit efc6c07 ("configure: Add a test for the minimum compiler
> version"), QEMU explicitly depends on GCC >= 4.8.
> 
> (clang >= 3.4 advertises itself as GCC >= 4.2 compatible and supports
> __builtin_expect too)
> 
> Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
> ---
>  include/qemu/compiler.h | 4 ----
>  1 file changed, 4 deletions(-)
> 
> diff --git a/include/qemu/compiler.h b/include/qemu/compiler.h
> index c76281f354..226ead6c90 100644
> --- a/include/qemu/compiler.h
> +++ b/include/qemu/compiler.h
> @@ -44,10 +44,6 @@
>  #endif
>  
>  #ifndef likely
> -#if __GNUC__ < 3
> -#define __builtin_expect(x, n) (x)
> -#endif
> -
>  #define likely(x)   __builtin_expect(!!(x), 1)
>  #define unlikely(x)   __builtin_expect(!!(x), 0)
>  #endif
> 

Trying with GCC 10:
warning: implicit declaration of function ‘likely’
[-Wimplicit-function-declaration]

Clang 10:
warning: implicit declaration of function 'likely' is invalid in C99
[-Wimplicit-function-declaration]

Wouldn't it be cleaner to test in the configure script or Meson that
likely() and unlikely() are not defined, and define them here
unconditionally?

Regards,

Phil.



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 14:35:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 14:35:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49366.87298 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knN23-0005AR-Ne; Thu, 10 Dec 2020 14:34:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49366.87298; Thu, 10 Dec 2020 14:34:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knN23-0005AK-KC; Thu, 10 Dec 2020 14:34:59 +0000
Received: by outflank-mailman (input) for mailman id 49366;
 Thu, 10 Dec 2020 14:34:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Anpp=FO=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1knN23-0005AF-3y
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 14:34:59 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 0a65c9e7-ae9b-4458-b61f-5b29f4cce0d7;
 Thu, 10 Dec 2020 14:34:58 +0000 (UTC)
Received: from mail-wm1-f69.google.com (mail-wm1-f69.google.com
 [209.85.128.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-471--nFxMmSzN_aW3oiKZ3bKuw-1; Thu, 10 Dec 2020 09:34:54 -0500
Received: by mail-wm1-f69.google.com with SMTP id o203so1178186wmo.3
 for <xen-devel@lists.xenproject.org>; Thu, 10 Dec 2020 06:34:54 -0800 (PST)
Received: from [192.168.1.36] (101.red-88-21-206.staticip.rima-tde.net.
 [88.21.206.101])
 by smtp.gmail.com with ESMTPSA id b9sm9513717wmd.32.2020.12.10.06.34.51
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 10 Dec 2020 06:34:52 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0a65c9e7-ae9b-4458-b61f-5b29f4cce0d7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607610898;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=1iJnxtvgCJ39eWBsSfprAe1n+FbeGJcCIfWwLPok5fk=;
	b=ASM9zcx9CytjojQ/FsWJzLgPyYQqaqiKiDg72Z99XluyOMtIhksIlxoLEQfp+ERenk2uxS
	h5AEMlCJSs6LGGkun1H81xastJZJTRgYF/k0seqYJqMFYuxafCr5WWxwIzjc1UUkYk3sZ7
	OlL2kG4+i0H3a120F7ZR/eBhXMZrTNg=
X-MC-Unique: -nFxMmSzN_aW3oiKZ3bKuw-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=1iJnxtvgCJ39eWBsSfprAe1n+FbeGJcCIfWwLPok5fk=;
        b=IEdtZ2/bx8+k3qBSlo39He0BpM35UyqpoKKohNOAedUV201nhpVVnfJ2Mm3JW1SuTG
         eiz/Oz+3ft6gLtgszaFajPr00baKPLVGebjpkysa/SPcLe/NAmAAomzcSTbL45c23EeO
         iyRpLEEJEJecJtgatPGHdkjjSWXc186KX89abxy5uRkMHDfO5npFsympacQP5BzDxGCe
         WWm6Z/lknIEeWjM2ceDxjX0S6e7itDHiOVnMmr8MMCkylLYgLND2KzHvwxM1UF+vtrCT
         rsPs0Za8KGm0aDVyPsyU38hrp+n0otNcoAlBXK0GS3dXcjMOiTwqmaklRa43N+/O2Oo0
         2iAg==
X-Gm-Message-State: AOAM533HwHKRjlHWH7Ta+lX5xv7L3Q8V26WKwh5YYlknb6RkKgCmphLz
	f6iqAgyfSIjd2QDIxT+f+2Cybg4fRJiRTSryLAJEq7Se1x71G9QMLlTq92osFzXgTMOeCeqnYlo
	u9HuDCQEIZIynmR0qUi0BRp5Pk6Y=
X-Received: by 2002:adf:c648:: with SMTP id u8mr8613007wrg.215.1607610893491;
        Thu, 10 Dec 2020 06:34:53 -0800 (PST)
X-Google-Smtp-Source: ABdhPJyJM5ihYY53/XlfDii2nCk1B/kZHEpNNLEOjXCv2BcF6TABt0f8DkOpZAu5B1naY2xoeGxkTg==
X-Received: by 2002:adf:c648:: with SMTP id u8mr8612994wrg.215.1607610893357;
        Thu, 10 Dec 2020 06:34:53 -0800 (PST)
Subject: Re: [PATCH v3 08/13] audio: remove GNUC & MSVC check
To: Peter Maydell <peter.maydell@linaro.org>
Cc: =?UTF-8?Q?Marc-Andr=c3=a9_Lureau?= <marcandre.lureau@redhat.com>,
 QEMU Developers <qemu-devel@nongnu.org>,
 Richard Henderson <richard.henderson@linaro.org>,
 Laurent Vivier <laurent@vivier.eu>, Paul Durrant <paul@xen.org>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Stefan Hajnoczi <stefanha@redhat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 qemu-arm <qemu-arm@nongnu.org>, Paolo Bonzini <pbonzini@redhat.com>
References: <20201210134752.780923-1-marcandre.lureau@redhat.com>
 <20201210134752.780923-9-marcandre.lureau@redhat.com>
 <a162ec7c-4dc3-5784-866e-dc95f6919b1f@redhat.com>
 <CAFEAcA_xbEm+DmP5hixtkzWJK1fi8X7wZh+eGKmDneZv_=-xbA@mail.gmail.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <ac25b79a-c22a-04ab-f125-873710ef9f6d@redhat.com>
Date: Thu, 10 Dec 2020 15:34:51 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <CAFEAcA_xbEm+DmP5hixtkzWJK1fi8X7wZh+eGKmDneZv_=-xbA@mail.gmail.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12/10/20 3:27 PM, Peter Maydell wrote:
> On Thu, 10 Dec 2020 at 14:26, Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
>>
>> On 12/10/20 2:47 PM, marcandre.lureau@redhat.com wrote:
>>> From: Marc-André Lureau <marcandre.lureau@redhat.com>
>>>
>>> QEMU requires either GCC or Clang, which both advertize __GNUC__.
>>> Drop MSVC fallback path.
>>>
>>> Note: I intentionally left further cleanups for a later work.
>>>
>>> Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
>>> ---
>>>  audio/audio.c | 8 +-------
>>>  1 file changed, 1 insertion(+), 7 deletions(-)
>>>
>>> diff --git a/audio/audio.c b/audio/audio.c
>>> index 46578e4a58..d7a00294de 100644
>>> --- a/audio/audio.c
>>> +++ b/audio/audio.c
>>> @@ -122,13 +122,7 @@ int audio_bug (const char *funcname, int cond)
>>>
>>>  #if defined AUDIO_BREAKPOINT_ON_BUG
>>>  #  if defined HOST_I386
>>> -#    if defined __GNUC__
>>> -        __asm__ ("int3");
>>> -#    elif defined _MSC_VER
>>> -        _asm _emit 0xcc;
>>> -#    else
>>> -        abort ();
>>> -#    endif
>>> +      __asm__ ("int3");
>>
>> This was 15 years ago... Why not simply use abort() today?
> 
> That's what I suggested when I looked at this patch in
> the previous version of the patchset, yes...

Ah, I went back to read the v2 thread. Actually, I even prefer
Gerd's suggestion to remove this dead code.

Regards,

Phil.



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 14:47:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 14:47:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49376.87310 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knNDd-0006GX-Sr; Thu, 10 Dec 2020 14:46:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49376.87310; Thu, 10 Dec 2020 14:46:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knNDd-0006GQ-PT; Thu, 10 Dec 2020 14:46:57 +0000
Received: by outflank-mailman (input) for mailman id 49376;
 Thu, 10 Dec 2020 14:46:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t4S1=FO=linaro.org=peter.maydell@srs-us1.protection.inumbo.net>)
 id 1knNDc-0006GL-Fy
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 14:46:56 +0000
Received: from mail-ej1-x643.google.com (unknown [2a00:1450:4864:20::643])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5d183679-b709-42fd-bfbc-4ae2a9c64bab;
 Thu, 10 Dec 2020 14:46:55 +0000 (UTC)
Received: by mail-ej1-x643.google.com with SMTP id jx16so7655149ejb.10
 for <xen-devel@lists.xenproject.org>; Thu, 10 Dec 2020 06:46:55 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5d183679-b709-42fd-bfbc-4ae2a9c64bab
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=Gnog/ZDg4vUsu/nZaon+KwESfvhC1ZcgHfRWYsilTJU=;
        b=OjSF7hTNj1VGglWlkOX0X1dBduXiJvq389R+cxqIRwpdEkAwU8C5xqSali8y4qg8kQ
         pHavcUZQGOsp9DEGYk18bL13bmlauyNPgd0sBkZssiO0lKWy2/DWw1UtvYBCluM2oMdQ
         PHiZXb5s5FzA0N20bAAA8nJUZRvRPjFj81mOqgDVUy+SZjiqkvLr6cW2kKWMYLznsYrD
         XMcQo0K4qkNutd1YtTpRiVA5OktEQTUTSfuFw7QxGdel2tGxXgGtCA3GPYr1c+/4ZohD
         33fgdKz7FyeAuUXHkhmBmeL8BS33Ruobz4pMx6s5SkTkBxZz/DGojTp4q+l2BSzt1QWZ
         brAg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=Gnog/ZDg4vUsu/nZaon+KwESfvhC1ZcgHfRWYsilTJU=;
        b=aWJihKt8jMubdr/qNOzO7E8o9TyiNf4huBMu4kmb8CrA1GUlehICh/06I7tAAFEqrN
         i/IkLD4FFeOr6MLnjj6iiSwSVd2N3oPnzXG57ED1E3euHhx6DSqnZnijiWe7GPfO4IvE
         jPzTS2HT1aA6saFlVP3zn8hh0x/MKSsyvb4k9b42qqvG5jwiAWpFsaFLorPoXo7SCwtx
         gPJJFYqaUCnU0Bm1ooH12JAAVjMgm4szmfEuucsEnavOStusSwl/5EhFJodK8Om8OpTL
         w2lOpvHJikvPZ15Dg/5tvgSy083Fi23G754mPnhO03mTk8md0AdU3lH7NzP5zhobkibk
         TxZQ==
X-Gm-Message-State: AOAM532teORYxW80U/P1nwljH6DuiQ+xhSIeT6OsPi7WlpDudCs3ziw8
	tb1S6af6acV4mHHx4OD5Ungq/GJDAup+VXi1bpL1pg==
X-Google-Smtp-Source: ABdhPJxDzx70qFg8/+8SQWp/wzh+hCnvMQcmbo3zM+VkDnGHRz9EtVJKzK7XDjv+P9nGgZ56Jg0bEdx1VWHQ+YEXXGU=
X-Received: by 2002:a17:906:31d2:: with SMTP id f18mr6677002ejf.407.1607611614716;
 Thu, 10 Dec 2020 06:46:54 -0800 (PST)
MIME-Version: 1.0
References: <20201210134752.780923-1-marcandre.lureau@redhat.com>
 <20201210134752.780923-4-marcandre.lureau@redhat.com> <fead8bf1-7848-8809-c67a-e6354e7b5cf7@redhat.com>
In-Reply-To: <fead8bf1-7848-8809-c67a-e6354e7b5cf7@redhat.com>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Thu, 10 Dec 2020 14:46:43 +0000
Message-ID: <CAFEAcA986crbUmJLR2GU5PE9BOq8w9KWKA5obYfY3eSoviMtnw@mail.gmail.com>
Subject: Re: [PATCH v3 03/13] compiler.h: remove GCC < 3 __builtin_expect fallback
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>
Cc: =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>, 
	QEMU Developers <qemu-devel@nongnu.org>, Richard Henderson <richard.henderson@linaro.org>, 
	Laurent Vivier <laurent@vivier.eu>, Paul Durrant <paul@xen.org>, 
	"open list:X86" <xen-devel@lists.xenproject.org>, Stefan Hajnoczi <stefanha@redhat.com>, 
	Gerd Hoffmann <kraxel@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Anthony Perard <anthony.perard@citrix.com>, "Dr. David Alan Gilbert" <dgilbert@redhat.com>, 
	qemu-arm <qemu-arm@nongnu.org>, Paolo Bonzini <pbonzini@redhat.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 10 Dec 2020 at 14:32, Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
>
> On 12/10/20 2:47 PM, marcandre.lureau@redhat.com wrote:
> > From: Marc-André Lureau <marcandre.lureau@redhat.com>
> >
> > Since commit efc6c07 ("configure: Add a test for the minimum compiler
> > version"), QEMU explicitly depends on GCC >= 4.8.
> >
> > (clang >= 3.4 advertises itself as GCC >= 4.2 compatible and supports
> > __builtin_expect too)
> >
> > Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>

> Wouldn't it be cleaner to test in the configure script or Meson that
> likely() and unlikely() are not defined, and define them here
> unconditionally?

That sounds like way more infrastructure than we need if
just checking "is it already defined" is sufficient...

-- PMM


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 14:48:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 14:48:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49381.87321 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knNEe-0006OF-Al; Thu, 10 Dec 2020 14:48:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49381.87321; Thu, 10 Dec 2020 14:48:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knNEe-0006O8-7H; Thu, 10 Dec 2020 14:48:00 +0000
Received: by outflank-mailman (input) for mailman id 49381;
 Thu, 10 Dec 2020 14:47:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hBiL=FO=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1knNEc-0006Nz-Dp
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 14:47:58 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2a602a9f-1e80-4b97-ae45-2a8e3a16367e;
 Thu, 10 Dec 2020 14:47:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a602a9f-1e80-4b97-ae45-2a8e3a16367e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1607611676;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=YzBzMUXp40/RwzVqdRyq4Gso6zKiB40x+XQEEGsH8F0=;
  b=AxJp8h+bAwzhs/56lbiBbp1Q/ncIpG+cXLhF/v3JcvZ9j0w+fOxe0JKu
   ueqzBKiHnZbog1se5rEGAqRt24FZLv//f8g3ytBiqgOp2nssdQWXGChS9
   1t60Q0PbhIfSdWms6wOTdvfnocHNvxWg7iSsfm6qGr/nzBEDmXib1R1bl
   w=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: kaIvGIlSdtH5/bvCFkYR+gHHmNFG/3Dkbiw3RymXSDvdRv3TRqUMZfnBRo5vWj4MJ9DKJpqc76
 v04pTj08/IbCZwun5nXyYsjamo0vUsHqFP1J+3dFrXGjCPFfV7xlpZJ3UuAIMzYuWTq7pKa6em
 L5ejrR7NYQiU54OrDntaxfEzB5KkSqDBnUIaLBASE1y10dUV2wwbuLuASdzwTjhr4L9jHlrY60
 QZIUrVllYBTSlJEXOoN/SQE4Ua6Stu3z9LTF1/QQ3SMf6RWeKKYe00eGh5ZC5uEWSMmrc1GSM+
 brw=
X-SBRS: 5.2
X-MesageID: 32957099
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,408,1599537600"; 
   d="scan'208";a="32957099"
Date: Thu, 10 Dec 2020 14:47:52 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [PATCH v3 2/8] lib: collect library files in an archive
Message-ID: <X9I1GCAM2nn8W8eN@perard.uk.xensource.com>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
 <21714b83-8619-5aa9-be5b-3015d05a26a4@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <21714b83-8619-5aa9-be5b-3015d05a26a4@suse.com>

On Mon, Nov 23, 2020 at 04:21:19PM +0100, Jan Beulich wrote:
> In order to (subsequently) drop odd things like CONFIG_NEEDS_LIST_SORT
> just to avoid bloating binaries when only some arch-es and/or
> configurations need generic library routines, combine objects under lib/
> into an archive, which the linker then can pick the necessary objects
> out of.
> 
> Note that we can't use thin archives just yet, until we've raised the
> minimum required binutils version suitably.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
>  xen/Rules.mk          | 29 +++++++++++++++++++++++++----
>  xen/arch/arm/Makefile |  6 +++---
>  xen/arch/x86/Makefile |  8 ++++----
>  xen/lib/Makefile      |  3 ++-
>  4 files changed, 34 insertions(+), 12 deletions(-)
> 
> diff --git a/xen/Rules.mk b/xen/Rules.mk
> index d5e5eb33de39..aba6ca2a90f5 100644
> --- a/xen/Rules.mk
> +++ b/xen/Rules.mk
> @@ -60,7 +64,14 @@ include Makefile
>  # ---------------------------------------------------------------------------
>  
>  quiet_cmd_ld = LD      $@
> -cmd_ld = $(LD) $(XEN_LDFLAGS) -r -o $@ $(real-prereqs)
> +cmd_ld = $(LD) $(XEN_LDFLAGS) -r -o $@ $(filter-out %.a,$(real-prereqs)) \
> +               --start-group $(filter %.a,$(real-prereqs)) --end-group

It might be a bit weird to modify the generic LD command for the benefit
of only prelink.o objects, but it's probably fine as long as we only use
archives for lib.a. libelf and libfdt will just have --start/end-group
added to their ld command line. So I guess the change is fine.


The rest looks good,
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 14:51:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 14:51:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49388.87334 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knNHW-0007Mp-P0; Thu, 10 Dec 2020 14:50:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49388.87334; Thu, 10 Dec 2020 14:50:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knNHW-0007Mi-Lb; Thu, 10 Dec 2020 14:50:58 +0000
Received: by outflank-mailman (input) for mailman id 49388;
 Thu, 10 Dec 2020 14:50:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hBiL=FO=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1knNHU-0007Md-NO
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 14:50:56 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d8bb974b-d1b8-42fd-9206-3c405fce51d5;
 Thu, 10 Dec 2020 14:50:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d8bb974b-d1b8-42fd-9206-3c405fce51d5
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1607611855;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=d7yW/0Easa2rv+HhhDpJy1/8GG46cCAThpfClkI+EHw=;
  b=VIXqJLWJoZSemNIuoUD+8nVN2CV+mzgRDPkYA6UtEMdJlk07TKaiLMBi
   cdCR13ShyH1ZPrrnG9jPQg1pHHLMtZuqKR8aLj6Rj/jTLSUA3DUKVC9kD
   +ma0bd0VXJhgOFRYFtbno9tkJYUeX10cRSDQJCxegvyrAApVtBz/14BHx
   U=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: CyonRS18s9UGpcznotCRZLAvitXpCNJaykJIKrX+QsixWhv0V2yVGRL2M8J18dNURP6LOwflAj
 YMXuKABLAt5No4z4enY7u+eu9BYcIUZZlFW/BLnpXtA5Ovdz/84uhbetlBuY2uj5wyk/R51x2Q
 qp4vwww6Ir7IpHLA9KVeAGEW/OglpSP12N3IQ7+UG5OkXNGiZiV6yqxP3lHdj1ZR3sqgPsLHTM
 HxWcq/Z9l8dITZblHkz0WFZ+hLN9DmjUIvoHV902yzYMHTOfsHB6uHKY1RI/GjgWhRFrHXK7E/
 gec=
X-SBRS: 5.2
X-MesageID: 33295570
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,408,1599537600"; 
   d="scan'208";a="33295570"
Date: Thu, 10 Dec 2020 14:50:51 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [PATCH v3 1/8] xen: fix build when $(obj-y) consists of just
 blanks
Message-ID: <X9I1yxNbWD83F+58@perard.uk.xensource.com>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
 <511be84d-9a13-17ae-f3d9-d6daf9c02711@suse.com>
 <X9EL90SMyqrs9GaL@perard.uk.xensource.com>
 <50fc5143-5b5e-46ae-56a3-6eba2707f293@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <50fc5143-5b5e-46ae-56a3-6eba2707f293@suse.com>

On Thu, Dec 10, 2020 at 11:21:53AM +0100, Jan Beulich wrote:
> On 09.12.2020 18:40, Anthony PERARD wrote:
> > How about using $(XEN_CFLAGS) instead of $(c_flags)? That should prevent
> > CC from generating the .*.o.d files while keeping the relevant flags.
> 
> What does "relevant" cover? For an empty .o it may not be important
> right now, but I could see
> 
> c_flags = -MMD -MP -MF $(@D)/.$(@F).d $(XEN_CFLAGS) '-D__OBJECT_FILE__="$@"'
> a_flags = -MMD -MP -MF $(@D)/.$(@F).d $(XEN_AFLAGS)
> 
> include $(BASEDIR)/arch/$(TARGET_ARCH)/Rules.mk
> 
> c_flags += $(CFLAGS-y)
> a_flags += $(CFLAGS-y) $(AFLAGS-y)
> 
> leading to CFLAGS-y / AFLAGS-y which need to be consistent across
> _all_ object files (e.g. some recording of ABI used).
> 
> > Do we need to worry about having a object file been listed twice?
> > Wouldn't that be a mistake?
> 
> No. The list approach (obj-$(CONFIG_xyz) += ...) easily allows for
> this to happen. See xen/arch/x86/mm/Makefile for an existing example.


Sounds good,
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 14:55:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 14:55:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49395.87345 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knNMG-0007ZE-BS; Thu, 10 Dec 2020 14:55:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49395.87345; Thu, 10 Dec 2020 14:55:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knNMG-0007Z7-8P; Thu, 10 Dec 2020 14:55:52 +0000
Received: by outflank-mailman (input) for mailman id 49395;
 Thu, 10 Dec 2020 14:55:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sk2J=FO=redhat.com=mlureau@srs-us1.protection.inumbo.net>)
 id 1knNMF-0007Z2-6k
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 14:55:51 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id d87aaf62-12bf-4c04-9b8d-ae5bf7e756ca;
 Thu, 10 Dec 2020 14:55:50 +0000 (UTC)
Received: from mail-io1-f69.google.com (mail-io1-f69.google.com
 [209.85.166.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-373--i1cTrN8Pvm7rMlPzp9C-w-1; Thu, 10 Dec 2020 09:55:47 -0500
Received: by mail-io1-f69.google.com with SMTP id 191so4064243iob.15
 for <xen-devel@lists.xenproject.org>; Thu, 10 Dec 2020 06:55:47 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d87aaf62-12bf-4c04-9b8d-ae5bf7e756ca
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607612150;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=mBanhr1fgvfj3a2tbMQ3LSwkRYdL35LLpMiBdo5ibtY=;
	b=WxYd5U4Ff9tE+E073+5PX/y7CjGTSGkqQa/MJomiVGWC38tyMk6VW017HxFiHyxCbNC9S3
	hk9ce+yqIT5If6L1C9lb1q9XfeWATuudf99kFuoAJ5DCl++ooRzYY1YzpNJXc17+cy05sa
	RH24YMveAp+nb6CDbrQhna7iJTXBMbY=
X-MC-Unique: -i1cTrN8Pvm7rMlPzp9C-w-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=mBanhr1fgvfj3a2tbMQ3LSwkRYdL35LLpMiBdo5ibtY=;
        b=OL3Lk2jWFmfP6tg9RXBjRCPjhX0rbfWtWaeC2Td1saQkYLlYRtw3HhyjO9HHtF7ii2
         YxehZ4IqbMlHdmyCrII6fRLzYvqN5ScxJh/qMKColxpwVz0vuzQNTSS3fuZSREvD/MPt
         4apaBkMD61ZX5qk08Et0rPOHXoquRcwciHpocT/VhHCznr3Bs8w41dpoZ7PrxLMwtBXo
         WEOnC9cfddWBmKaXBUWkeG/8RyT1ZCKuN1dVjF2bgDqImyWJfom5f1urA3DfDXk/i/sq
         e+b8kc0a5SQa5jbHh2DEysqa6Auqz1f3eHgkbqfX1MxTefee42a89A8fgC4wt/bwjFrE
         a+7Q==
X-Gm-Message-State: AOAM533J5kTt706zKQMjkkSdk0DiTd1OrVBN3kYNLpQZttyEk3JPiEAI
	qXW9hHleu8yOELMJAD7Ks5+zkT3UrLlD2PylieH1zdi4JFpi53ykdAwOQY2+mml24NOaCOnBKLS
	EDfkYLH7L0uiZ7G/UyAOWhxMmIRexFqAmWYKdHI815t0=
X-Received: by 2002:a6b:b5d2:: with SMTP id e201mr5227602iof.111.1607612146609;
        Thu, 10 Dec 2020 06:55:46 -0800 (PST)
X-Google-Smtp-Source: ABdhPJw+s9DW2PJte84KZQqjMa29hIrqXHSO4YD2VzYLlS+k4pC7kfrBaKl4ag4krI7jL31ahzBUBxRx9ZK+JFg018M=
X-Received: by 2002:a6b:b5d2:: with SMTP id e201mr5227589iof.111.1607612146364;
 Thu, 10 Dec 2020 06:55:46 -0800 (PST)
MIME-Version: 1.0
References: <20201210134752.780923-1-marcandre.lureau@redhat.com>
 <20201210134752.780923-4-marcandre.lureau@redhat.com> <fead8bf1-7848-8809-c67a-e6354e7b5cf7@redhat.com>
 <CAFEAcA986crbUmJLR2GU5PE9BOq8w9KWKA5obYfY3eSoviMtnw@mail.gmail.com>
In-Reply-To: <CAFEAcA986crbUmJLR2GU5PE9BOq8w9KWKA5obYfY3eSoviMtnw@mail.gmail.com>
From: =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>
Date: Thu, 10 Dec 2020 18:55:35 +0400
Message-ID: <CAMxuvawzN0oOJJhqHu4dX++O+fAdA8BQ0+yNgoQHf_dL5=rVow@mail.gmail.com>
Subject: Re: [PATCH v3 03/13] compiler.h: remove GCC < 3 __builtin_expect fallback
To: Peter Maydell <peter.maydell@linaro.org>
Cc: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>, 
	QEMU Developers <qemu-devel@nongnu.org>, Richard Henderson <richard.henderson@linaro.org>, 
	Laurent Vivier <laurent@vivier.eu>, Paul Durrant <paul@xen.org>, 
	"open list:X86" <xen-devel@lists.xenproject.org>, Stefan Hajnoczi <stefanha@redhat.com>, 
	Gerd Hoffmann <kraxel@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Anthony Perard <anthony.perard@citrix.com>, "Dr. David Alan Gilbert" <dgilbert@redhat.com>, 
	qemu-arm <qemu-arm@nongnu.org>, Paolo Bonzini <pbonzini@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=mlureau@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi

On Thu, Dec 10, 2020 at 6:47 PM Peter Maydell <peter.maydell@linaro.org> wrote:
>
> On Thu, 10 Dec 2020 at 14:32, Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
> >
> > On 12/10/20 2:47 PM, marcandre.lureau@redhat.com wrote:
> > > From: Marc-André Lureau <marcandre.lureau@redhat.com>
> > >
> > > Since commit efc6c07 ("configure: Add a test for the minimum compiler
> > > version"), QEMU explicitly depends on GCC >= 4.8.
> > >
> > > (clang >= 3.4 advertises itself as GCC >= 4.2 compatible and supports
> > > __builtin_expect too)
> > >
> > > Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
>
> > Shouldn't it be cleaner to test in the configure script or Meson that
> > likely() and unlikely() are not defined, and define them here
> > unconditionally?
>
> That sounds like way more infrastructure than we need if
> just checking "is it already defined" is sufficient...
>

Eh, I am just removing the dead code guarded by "#if __GNUC__ < 3". Further
cleanups can be done afterwards.



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 14:57:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 14:57:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49401.87360 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knNNS-0007gv-Qx; Thu, 10 Dec 2020 14:57:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49401.87360; Thu, 10 Dec 2020 14:57:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knNNS-0007go-O6; Thu, 10 Dec 2020 14:57:06 +0000
Received: by outflank-mailman (input) for mailman id 49401;
 Thu, 10 Dec 2020 14:57:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sk2J=FO=redhat.com=mlureau@srs-us1.protection.inumbo.net>)
 id 1knNNR-0007gi-KY
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 14:57:05 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 7e32e582-82f0-4700-ba16-327a22c4b256;
 Thu, 10 Dec 2020 14:57:04 +0000 (UTC)
Received: from mail-io1-f72.google.com (mail-io1-f72.google.com
 [209.85.166.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-317-dGGqX1UcNrSh7OuJ36995A-1; Thu, 10 Dec 2020 09:57:02 -0500
Received: by mail-io1-f72.google.com with SMTP id s11so4075542iod.14
 for <xen-devel@lists.xenproject.org>; Thu, 10 Dec 2020 06:57:01 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7e32e582-82f0-4700-ba16-327a22c4b256
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607612224;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JXqyrvdOJb2ioK7+vOVVst13JBAIvCX7ELMPyGaGrxc=;
	b=I0lfrheSPqu6tv4AJijdNDtV76CNhf/PA+NuUFLXl/S6HegtODw/r2ygmrS4bT/ufuYvoq
	i8+jJgqVJSaJMCtDKYd4KFpY/oPUgnN17lo0/g3NoMu97VDqlVEUfsgTUhEXUxMhFR1YVZ
	xUHr9yrN3jWcgvd2w/7N4+0kBKIbD1E=
X-MC-Unique: dGGqX1UcNrSh7OuJ36995A-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=JXqyrvdOJb2ioK7+vOVVst13JBAIvCX7ELMPyGaGrxc=;
        b=phbkRyH0gVy2pOyvdGnsQnXP6G3pFhW++bJpuY7gKtC+mcHjO9pfrlCfNdSpsyYE5O
         exIh2rair5LRtPAGr4EtOBpsmjBSTu9sURvcx8PNbf2eB10PJi5SsXyBTex79NA7iKhB
         VsE9rIeTTNOW+SBjypEiWQ0rdlZ8BKqtvIIs8UVZRgTThDT5hdeQbIeNBEHKX5i1bZo2
         w23vXJEDYazLL8OEJ62HoaF0QeXfrUBVOHKorb9hMbtwVkiKe5vwzYtX8G4faFI8yHMq
         vk4qHlwHvHmOaMXV70dZPTodwxheQZNkgVz+ERUnqqTJsZy4jcbZtpOgnbQIHOHaCDUS
         mULA==
X-Gm-Message-State: AOAM531NxTtC0kzM+JiGOptgjViDcIX9fQCOKGE563bFY8jdiraeeuFz
	8yRgzMZPqcMkYYKRkeMlCX9AKUKxnnvJn0f2aICqOObXynJKCIyBX+kwHZSllp2+jUm/Kq34Eis
	HhrbkFxmIlEj+s2Rh+zm/WWU2hKShNThHaSDUrOu88w0=
X-Received: by 2002:a92:c26c:: with SMTP id h12mr3441190ild.165.1607612221488;
        Thu, 10 Dec 2020 06:57:01 -0800 (PST)
X-Google-Smtp-Source: ABdhPJzTp0xzsbwNTZrcPB5s1KuzFnilNq6cZjoRe/XflqmlO4DDt1OBf1aX0Z0sIhLZJ0uGd19w3g9P9yy6kBEvWm4=
X-Received: by 2002:a92:c26c:: with SMTP id h12mr3441180ild.165.1607612221323;
 Thu, 10 Dec 2020 06:57:01 -0800 (PST)
MIME-Version: 1.0
References: <20201210134752.780923-1-marcandre.lureau@redhat.com>
 <20201210134752.780923-9-marcandre.lureau@redhat.com> <a162ec7c-4dc3-5784-866e-dc95f6919b1f@redhat.com>
 <CAFEAcA_xbEm+DmP5hixtkzWJK1fi8X7wZh+eGKmDneZv_=-xbA@mail.gmail.com> <ac25b79a-c22a-04ab-f125-873710ef9f6d@redhat.com>
In-Reply-To: <ac25b79a-c22a-04ab-f125-873710ef9f6d@redhat.com>
From: =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>
Date: Thu, 10 Dec 2020 18:56:49 +0400
Message-ID: <CAMxuvay1DzS=GzW+0g3x0BgTC850zuqsQ_3tVZ-Fu7Nxz+vLYw@mail.gmail.com>
Subject: Re: [PATCH v3 08/13] audio: remove GNUC & MSVC check
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>
Cc: Peter Maydell <peter.maydell@linaro.org>, QEMU Developers <qemu-devel@nongnu.org>, 
	Richard Henderson <richard.henderson@linaro.org>, Laurent Vivier <laurent@vivier.eu>, 
	Paul Durrant <paul@xen.org>, "open list:X86" <xen-devel@lists.xenproject.org>, 
	Stefan Hajnoczi <stefanha@redhat.com>, Gerd Hoffmann <kraxel@redhat.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, 
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-arm <qemu-arm@nongnu.org>, 
	Paolo Bonzini <pbonzini@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=mlureau@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi

On Thu, Dec 10, 2020 at 6:35 PM Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
>
> On 12/10/20 3:27 PM, Peter Maydell wrote:
> > On Thu, 10 Dec 2020 at 14:26, Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
> >>
> >> On 12/10/20 2:47 PM, marcandre.lureau@redhat.com wrote:
> >>> From: Marc-André Lureau <marcandre.lureau@redhat.com>
> >>>
> >>> QEMU requires either GCC or Clang, which both advertise __GNUC__.
> >>> Drop the MSVC fallback path.
> >>>
> >>> Note: I intentionally left further cleanups for a later work.
> >>>
> >>> Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
> >>> ---
> >>>  audio/audio.c | 8 +-------
> >>>  1 file changed, 1 insertion(+), 7 deletions(-)
> >>>
> >>> diff --git a/audio/audio.c b/audio/audio.c
> >>> index 46578e4a58..d7a00294de 100644
> >>> --- a/audio/audio.c
> >>> +++ b/audio/audio.c
> >>> @@ -122,13 +122,7 @@ int audio_bug (const char *funcname, int cond)
> >>>
> >>>  #if defined AUDIO_BREAKPOINT_ON_BUG
> >>>  #  if defined HOST_I386
> >>> -#    if defined __GNUC__
> >>> -        __asm__ ("int3");
> >>> -#    elif defined _MSC_VER
> >>> -        _asm _emit 0xcc;
> >>> -#    else
> >>> -        abort ();
> >>> -#    endif
> >>> +      __asm__ ("int3");
> >>
> >> This was 15 years ago... Why not simply use abort() today?
> >
> > That's what I suggested when I looked at this patch in
> > the previous version of the patchset, yes...
>
> Ah, I went back to read v2 thread. Actually I even prefer
> Gerd's suggestion to remove this dead code.
>

And I totally agree. However, I don't want to mix concerns; I am just
removing dead code.



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 15:09:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 15:09:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49412.87376 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knNZl-0000RN-4e; Thu, 10 Dec 2020 15:09:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49412.87376; Thu, 10 Dec 2020 15:09:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knNZl-0000RG-1I; Thu, 10 Dec 2020 15:09:49 +0000
Received: by outflank-mailman (input) for mailman id 49412;
 Thu, 10 Dec 2020 15:09:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=N/MM=FO=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1knNZj-0000RB-He
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 15:09:47 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com (unknown
 [40.107.0.41]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 22e8c2aa-6abb-40fc-acd5-431b8ca02cd2;
 Thu, 10 Dec 2020 15:09:45 +0000 (UTC)
Received: from AM5PR0502CA0018.eurprd05.prod.outlook.com
 (2603:10a6:203:91::28) by AM6PR08MB4691.eurprd08.prod.outlook.com
 (2603:10a6:20b:cc::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.15; Thu, 10 Dec
 2020 15:09:42 +0000
Received: from VE1EUR03FT032.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:91:cafe::40) by AM5PR0502CA0018.outlook.office365.com
 (2603:10a6:203:91::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.12 via Frontend
 Transport; Thu, 10 Dec 2020 15:09:42 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT032.mail.protection.outlook.com (10.152.18.121) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12 via Frontend Transport; Thu, 10 Dec 2020 15:09:42 +0000
Received: ("Tessian outbound 665ba7fbdfd9:v71");
 Thu, 10 Dec 2020 15:09:41 +0000
Received: from c3e0269dda7c.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 AFCD8F4D-6813-4B77-AFE1-203E5426B5BD.1; 
 Thu, 10 Dec 2020 15:09:24 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c3e0269dda7c.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 10 Dec 2020 15:09:24 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB6315.eurprd08.prod.outlook.com (2603:10a6:10:209::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.25; Thu, 10 Dec
 2020 15:09:23 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3632.023; Thu, 10 Dec 2020
 15:09:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 22e8c2aa-6abb-40fc-acd5-431b8ca02cd2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=R6HlzVGwjUBj42alFlMf0NbLNUbRd/U/JMZvhKBC6BU=;
 b=g+Mvimp1hhpoRNgXgZngd8a2avQ6nW3VHq+nhrZrNafl45DYiK10sdyMuTDa0tXSLETjFRpTU5a6wvcf23ctV5Iyr59ufODXsBzyIoEMjkgmtKRLOYuXUZdnCvKTMzs3EquF/kz46Hg6Pr/WFXpg8FjEnyeS/o8TnmRQI7EQCnQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: c3399b676f8fc5d6
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=C77A8BflKsTU1t4PADopIE1b2jTGA1X52x0SLi9D4qAMzuHCTv7JyX634eZJkTpOqVzqguvn7If84fPSQoCPtjxn+22Q3YJUDY/PDdSNxyB2Kxxi3XTVBTfioqAoJHrW6cWxwoVqLaBo3r3NYcRBhu+BkUb+0HXyfjzq94sLXH1Zgwp0VTj1HasmXPrywyk852/WbPebFitkI1AwlwGZnR7H2o+toCEzCBwz69cTS4klo0NzCVEyN0DOrkFg5V06KKJ7TC0IB4lxNNTCR83UtXF9frmZkILvRNfeSwI09xf8ERece9vOszbgN85GX/uHHxA/Usc1i4F7kIdjuIT7xg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=R6HlzVGwjUBj42alFlMf0NbLNUbRd/U/JMZvhKBC6BU=;
 b=h3OEL6H7ynijTLefrXsQoDhENGfYYhLbbngPUA/TgpEHCR2iJKZDgsM4FQ7Y7YmtvKr0oNqCDWc0beoPnPv9zfOgvIl2Aswokaiyi0EbnKCF8Cck6mPiBy0XTP2boVQZzhOEclKZ+QZhOWc+9UKMQdfvY/NzOWiCKlF+mGGvKxr4xOAD3wg/5ApFst0a9xzYq55TzHNnDnaISh1SoGk2v9HcGI/sEqTdtz24r19uw3RsVRAuIRW54SZl2SEGG95XqlYNLWd9mWg+8fjXAAt6cnilw5h4Rg5grY1TQgdQtismZfIJoRORrpEURsaDWHirAqDa4djXLuj9m1UxAmcEYQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=R6HlzVGwjUBj42alFlMf0NbLNUbRd/U/JMZvhKBC6BU=;
 b=g+Mvimp1hhpoRNgXgZngd8a2avQ6nW3VHq+nhrZrNafl45DYiK10sdyMuTDa0tXSLETjFRpTU5a6wvcf23ctV5Iyr59ufODXsBzyIoEMjkgmtKRLOYuXUZdnCvKTMzs3EquF/kz46Hg6Pr/WFXpg8FjEnyeS/o8TnmRQI7EQCnQ=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Julien
 Grall <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 5/7] xen/arm: Add handler for cp15 ID registers
Thread-Topic: [PATCH v3 5/7] xen/arm: Add handler for cp15 ID registers
Thread-Index: AQHWzklmttlhxiafuUKrQQSihgPiRKnvLa8AgAFCnwA=
Date: Thu, 10 Dec 2020 15:09:22 +0000
Message-ID: <C881A361-12A2-42FF-B64E-7AE8A1F436EC@arm.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>
 <5a36325410f485dbdddc0f6088378cacc54c5243.1607524536.git.bertrand.marquis@arm.com>
 <alpine.DEB.2.21.2012091153400.20986@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2012091153400.20986@sstabellini-ThinkPad-T480s>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: eeac6912-0e32-460c-2544-08d89d1d9f41
x-ms-traffictypediagnostic: DBBPR08MB6315:|AM6PR08MB4691:
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB4691DEC5FF67C6B7365829D09DCB0@AM6PR08MB4691.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 1Ge728XOyOakXuKrM3/UaAByhSRPjLnJTyrRj7LBXj7OlVqja6s3gF1d1OAKDEpcckEn0R2q3nPqeJbN8w89OlXZhFSTqBgVLT/5nR2lQuDLR8uyEbjBmi23ZkQYcGoRAlU+8vGKX3HiyIdoDFXwl9HWTNXiEyoRyGm6YfYi5ekO79c+RO4CO3ASS0906iE5vHynW1NjXYLovbMFNIELI1CEH9Md/mTsn6wNlFbPsgGCDWxGemlYjHy8PsUWLybVlqqMI4l0SUrXFA6hk29fkRfnJiG0C5isYRooAzckM6rJ4UuSIr57K3RicdxCZ35ZmT/zcenceEvR1KlB8XAXH/enYZAcW5k3ZNKaHSFGFE2EyBRN0YECiIViFSBtv3jz
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(376002)(346002)(396003)(39860400002)(136003)(76116006)(91956017)(2616005)(66476007)(316002)(8936002)(33656002)(5660300002)(6506007)(66946007)(36756003)(86362001)(66446008)(8676002)(83380400001)(71200400001)(6512007)(54906003)(186003)(66556008)(6486002)(2906002)(26005)(64756008)(4326008)(478600001)(6916009)(53546011)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?utf-8?B?c21wOU4walFGdUI5L1IvUklOOS96RVlFZVI1a3o4RmYrM2NOT0RHS1d2Vm5y?=
 =?utf-8?B?NFVneDk2c29pWHIzem5XQ0lMNU1DY2hZVjZSczcyc0VjNVBTUURucUUvS29I?=
 =?utf-8?B?MW8vaVdUNUxOK3pZSU9XR0FGaXJqUDBKYVU3T09nZVA1OGV5b1ZHZFZkYU9j?=
 =?utf-8?B?bkJZOTI1UCtEb3M3ZGRMRFhXM3ZjR2pIMGp0dVdUNWwrbjV5MGo2OXRobElo?=
 =?utf-8?B?c0dBNUxmQ1pxOWlkbDFZdWVGZmtZNTFNVVNROHk1bFVEMGdqZ0NYWEVPT1BT?=
 =?utf-8?B?YjJWanBOTG1aeno0RUxMeXllMDZVakEvZUNScGZ6Mzd5S1VkZHJCV0JVNjFT?=
 =?utf-8?B?aEVFZElPWWd1SS9kSnZmZEJlMEpldzVzTzRwanlQK2ZYcm9VU3JGMUVLSi9P?=
 =?utf-8?B?c2VjTG0zcUVObVU5bHpxMkZJZjYxSTh0ZHNwRXBBd2RhQU1ybEwxSmZ0djNX?=
 =?utf-8?B?aEFQNlBPWXo4WWE1aDV5WEJ6c2V5SnRJZ0JrYlRWbGlrUFBCTXhiVUxOOUsv?=
 =?utf-8?B?Z0ZjcUt5OHpmZHFaTXBhc213ZkhDckt2bXpyNk1qSXBKMW9IdEJZbkJ0TCtF?=
 =?utf-8?B?UmZUM3EzdlpsOW1Rcm1UbXRLUzFxaUlRcUd2T1pwc0gzZHJQNm1xczY2dG1w?=
 =?utf-8?B?RG9vd1A2MzBtMG9DT09MZVlCc201SFRXNzlBSXVURlpXM3g4U2hkVU15WFNU?=
 =?utf-8?B?Y2hTWEpVZ2xvMjl6VEhMTmhJNXZIdnQ4VzlqVlloaS9wUy9jQTVQRGpFZHJW?=
 =?utf-8?B?Um5VdUxYWUxKRURrV3ZpNUhaU09BbjUvMzZxUmJ1TmtXc1dMaUZKSHY5ZVBN?=
 =?utf-8?B?OC9TZDM3d2RTMXJWa3hyV3pzd1VuaWZkb3h4alBvQ1p0Si96VVZqOWhzYytN?=
 =?utf-8?B?b3I0czBNZmcvdnpBMkUrYk1CNVZDdi83TFlCWVpEeU9VdVJDakdFT3NwVWdF?=
 =?utf-8?B?NUt4YVErTVdVRDZqRUtQYjJVUVJPWHdFWENXZ2hreGVmMGE2YkNiTWcrdkI5?=
 =?utf-8?B?THF2TXJyaldsS1hnYmozeG1lRUNudm9SZERJQmY5d2hJK2RqYVpLMDdkZHhl?=
 =?utf-8?B?TndSZk5Nb0RXbDhnRXdUWitlNkhNYThLSDZiZW1DY3RlL1BnUndJSWRES0pL?=
 =?utf-8?B?NzRyV21yKzFBQkJ1bE9XcFBhNE91eUh3cExRNjQ2OGJZL3hPMjdGSlV0L1ox?=
 =?utf-8?B?ZGxacDhxWmhFQmNxS2tjazBvWnJ4cFdKZXkvMlNZTmxwOGlIcjh2UWZpM2tY?=
 =?utf-8?B?N0svM1VNek12ak5RQnZCMVpkcmdoaVNnZjFvSnFUakxtcDFwcHJES0NBNHF1?=
 =?utf-8?Q?EgF2035fQJeAowofF0s8ROx4//NC25Y6A9?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <DAEBD7E8EEF3A44DBF48DFB22D444E4A@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6315
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ff1d8988-be18-45f4-2ac0-08d89d1d93ab
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	bIt6VX5tKMtGlVAvmgZxEGj+nLkW0iQffVayJJCIdwo/ppnMw548D7lBJ166iafWkex7vIkghTWf6I+TU8wVSaKauXVATFCH+a4pxyNI+T4/0zMBFPCL8hGcfor0OsdL6xOPg/KLTEz2QRu7XiJAxXagi35ZZ8BTWwH2u+vwI044uyMmBlHuUZPz1qpKW1aySNCCiIETJ9mpGfmLEiwHBv8Lw7V0vOL+Whw4ERBfQ3oWMts8nDp9CTFGew97qjnILiCr456PRjahvGsxl7u+c45WSA6fa/D11ynbyBFSI/yBXV2/+Jnmpv6cIETitc66rgClsRwm6dluEdOi9VFSxBDOPa7CjonMu0NaRDq9rRKeFsbyTHkGhzKUbk+erYbfkHfXXIvYxOKu3orBExc1UA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(346002)(136003)(39860400002)(376002)(46966005)(356005)(2906002)(54906003)(4326008)(6862004)(82740400003)(83380400001)(8676002)(107886003)(47076004)(6512007)(81166007)(6486002)(82310400003)(8936002)(86362001)(316002)(5660300002)(336012)(36756003)(478600001)(6506007)(53546011)(26005)(186003)(70206006)(2616005)(33656002)(70586007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Dec 2020 15:09:42.2625
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: eeac6912-0e32-460c-2544-08d89d1d9f41
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4691

SGkgU3RlZmFubywNCg0KPiBPbiA5IERlYyAyMDIwLCBhdCAxOTo1NCwgU3RlZmFubyBTdGFiZWxs
aW5pIDxzc3RhYmVsbGluaUBrZXJuZWwub3JnPiB3cm90ZToNCj4gDQo+IE9uIFdlZCwgOSBEZWMg
MjAyMCwgQmVydHJhbmQgTWFycXVpcyB3cm90ZToNCj4+IEFkZCBzdXBwb3J0IGZvciBlbXVsYXRp
b24gb2YgY3AxNSBiYXNlZCBJRCByZWdpc3RlcnMgKG9uIGFybTMyIG9yIHdoZW4NCj4+IHJ1bm5p
bmcgYSAzMmJpdCBndWVzdCBvbiBhcm02NCkuDQo+PiBUaGUgaGFuZGxlcnMgYXJlIHJldHVybmlu
ZyB0aGUgdmFsdWVzIHN0b3JlZCBpbiB0aGUgZ3Vlc3RfY3B1aW5mbw0KPj4gc3RydWN0dXJlIGZv
ciBrbm93biByZWdpc3RlcnMgYW5kIFJBWiBmb3IgYWxsIHJlc2VydmVkIHJlZ2lzdGVycy4NCj4+
IEluIHRoZSBjdXJyZW50IHN0YXR1cyB0aGUgTVZGUiByZWdpc3RlcnMgYXJlIG5vIHN1cHBvcnRl
ZC4NCj4+IA0KPj4gU2lnbmVkLW9mZi1ieTogQmVydHJhbmQgTWFycXVpcyA8YmVydHJhbmQubWFy
cXVpc0Bhcm0uY29tPg0KPj4gLS0tDQo+PiBDaGFuZ2VzIGluIFYyOiBSZWJhc2UNCj4+IENoYW5n
ZXMgaW4gVjM6DQo+PiAgQWRkIGNhc2UgZGVmaW5pdGlvbiBmb3IgcmVzZXJ2ZWQgcmVnaXN0ZXJz
DQo+PiAgQWRkIGhhbmRsaW5nIG9mIHJlc2VydmVkIHJlZ2lzdGVycyBhcyBSQVouDQo+PiAgRml4
IGNvZGUgc3R5bGUgaW4gR0VORVJBVEVfVElEM19JTkZPIGRlY2xhcmF0aW9uDQo+PiANCj4+IC0t
LQ0KPj4geGVuL2FyY2gvYXJtL3ZjcHJlZy5jICAgICAgICB8IDM5ICsrKysrKysrKysrKysrKysr
KysrKysrKysrKysrKysrKysrKw0KPj4geGVuL2luY2x1ZGUvYXNtLWFybS9jcHJlZ3MuaCB8IDI1
ICsrKysrKysrKysrKysrKysrKysrKysrDQo+PiAyIGZpbGVzIGNoYW5nZWQsIDY0IGluc2VydGlv
bnMoKykNCj4+IA0KPj4gZGlmZiAtLWdpdCBhL3hlbi9hcmNoL2FybS92Y3ByZWcuYyBiL3hlbi9h
cmNoL2FybS92Y3ByZWcuYw0KPj4gaW5kZXggY2RjOTFjZGY1Yi4uZDM3MWExYzM4YyAxMDA2NDQN
Cj4+IC0tLSBhL3hlbi9hcmNoL2FybS92Y3ByZWcuYw0KPj4gKysrIGIveGVuL2FyY2gvYXJtL3Zj
cHJlZy5jDQo+PiBAQCAtMTU1LDYgKzE1NSwxNCBAQCBUVk1fUkVHMzIoQ09OVEVYVElEUiwgQ09O
VEVYVElEUl9FTDEpDQo+PiAgICAgICAgIGJyZWFrOyAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwNCj4+ICAgICB9DQo+PiANCj4+ICsvKiBNYWNy
byB0byBnZW5lcmF0ZSBlYXNpbHkgY2FzZSBmb3IgSUQgY28tcHJvY2Vzc29yIGVtdWxhdGlvbiAq
Lw0KPj4gKyNkZWZpbmUgR0VORVJBVEVfVElEM19JTkZPKHJlZywgZmllbGQsIG9mZnNldCkgICAg
ICAgICAgICAgICAgICAgICAgXA0KPj4gKyAgICBjYXNlIEhTUl9DUFJFRzMyKHJlZyk6ICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXA0KPj4gKyAgICB7ICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXA0K
Pj4gKyAgICAgICAgcmV0dXJuIGhhbmRsZV9yb19yZWFkX3ZhbChyZWdzLCByZWdpZHgsIGNwMzIu
cmVhZCwgaHNyLCAgICAgXA0KPj4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgMSwgZ3Vlc3Rf
Y3B1aW5mby5maWVsZC5iaXRzW29mZnNldF0pOyAgICAgXA0KPj4gKyAgICB9DQo+PiArDQo+PiB2
b2lkIGRvX2NwMTVfMzIoc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MsIGNvbnN0IHVuaW9uIGhz
ciBoc3IpDQo+PiB7DQo+PiAgICAgY29uc3Qgc3RydWN0IGhzcl9jcDMyIGNwMzIgPSBoc3IuY3Az
MjsNCj4+IEBAIC0yODYsNiArMjk0LDM3IEBAIHZvaWQgZG9fY3AxNV8zMihzdHJ1Y3QgY3B1X3Vz
ZXJfcmVncyAqcmVncywgY29uc3QgdW5pb24gaHNyIGhzcikNCj4+ICAgICAgICAgICovDQo+PiAg
ICAgICAgIHJldHVybiBoYW5kbGVfcmF6X3dpKHJlZ3MsIHJlZ2lkeCwgY3AzMi5yZWFkLCBoc3Is
IDEpOw0KPj4gDQo+PiArICAgIC8qDQo+PiArICAgICAqIEhDUl9FTDIuVElEMw0KPj4gKyAgICAg
Kg0KPj4gKyAgICAgKiBUaGlzIGlzIHRyYXBwaW5nIG1vc3QgSWRlbnRpZmljYXRpb24gcmVnaXN0
ZXJzIHVzZWQgYnkgYSBndWVzdA0KPj4gKyAgICAgKiB0byBpZGVudGlmeSB0aGUgcHJvY2Vzc29y
IGZlYXR1cmVzDQo+PiArICAgICAqLw0KPj4gKyAgICBHRU5FUkFURV9USUQzX0lORk8oSURfUEZS
MCwgcGZyMzIsIDApDQo+PiArICAgIEdFTkVSQVRFX1RJRDNfSU5GTyhJRF9QRlIxLCBwZnIzMiwg
MSkNCj4+ICsgICAgR0VORVJBVEVfVElEM19JTkZPKElEX1BGUjIsIHBmcjMyLCAyKQ0KPj4gKyAg
ICBHRU5FUkFURV9USUQzX0lORk8oSURfREZSMCwgZGJnMzIsIDApDQo+PiArICAgIEdFTkVSQVRF
X1RJRDNfSU5GTyhJRF9ERlIxLCBkYmczMiwgMSkNCj4+ICsgICAgR0VORVJBVEVfVElEM19JTkZP
KElEX0FGUjAsIGF1eDMyLCAwKQ0KPj4gKyAgICBHRU5FUkFURV9USUQzX0lORk8oSURfTU1GUjAs
IG1tMzIsIDApDQo+PiArICAgIEdFTkVSQVRFX1RJRDNfSU5GTyhJRF9NTUZSMSwgbW0zMiwgMSkN
Cj4+ICsgICAgR0VORVJBVEVfVElEM19JTkZPKElEX01NRlIyLCBtbTMyLCAyKQ0KPj4gKyAgICBH
RU5FUkFURV9USUQzX0lORk8oSURfTU1GUjMsIG1tMzIsIDMpDQo+PiArICAgIEdFTkVSQVRFX1RJ
RDNfSU5GTyhJRF9NTUZSNCwgbW0zMiwgNCkNCj4+ICsgICAgR0VORVJBVEVfVElEM19JTkZPKElE
X01NRlI1LCBtbTMyLCA1KQ0KPj4gKyAgICBHRU5FUkFURV9USUQzX0lORk8oSURfSVNBUjAsIGlz
YTMyLCAwKQ0KPj4gKyAgICBHRU5FUkFURV9USUQzX0lORk8oSURfSVNBUjEsIGlzYTMyLCAxKQ0K
Pj4gKyAgICBHRU5FUkFURV9USUQzX0lORk8oSURfSVNBUjIsIGlzYTMyLCAyKQ0KPj4gKyAgICBH
RU5FUkFURV9USUQzX0lORk8oSURfSVNBUjMsIGlzYTMyLCAzKQ0KPj4gKyAgICBHRU5FUkFURV9U
SUQzX0lORk8oSURfSVNBUjQsIGlzYTMyLCA0KQ0KPj4gKyAgICBHRU5FUkFURV9USUQzX0lORk8o
SURfSVNBUjUsIGlzYTMyLCA1KQ0KPj4gKyAgICBHRU5FUkFURV9USUQzX0lORk8oSURfSVNBUjYs
IGlzYTMyLCA2KQ0KPj4gKyAgICAvKiBNVkZSIHJlZ2lzdGVycyBhcmUgaW4gY3AxMCBubyBjcDE1
ICovDQo+PiArDQo+PiArICAgIEhTUl9DUFJFRzMyX1RJRDNfUkVTRVJWRURfQ0FTRToNCj4+ICsg
ICAgICAgIC8qIEhhbmRsZSBhbGwgcmVzZXJ2ZWQgcmVnaXN0ZXJzIGFzIFJBWiAqLw0KPj4gKyAg
ICAgICAgcmV0dXJuIGhhbmRsZV9yb19yYXoocmVncywgcmVnaWR4LCBjcDMyLnJlYWQsIGhzciwg
MSk7DQo+IA0KPiBTYW1lIHF1ZXN0aW9uIGFzIGZvciB0aGUgYWFyY2g2NCBjYXNlOiBkbyB3ZSBu
ZWVkIHRvIGRvIHdyaXRlLWlnbm9yZQ0KPiBmb3IgdGhlIHJlc2VydmVkIHJlZ2lzdGVycz8NCg0K
QXJtIGFyY2hpdGVjdHVyZSByZWZlcmVuY2UgbWFudWFsIGlzIGxpc3RpbmcgYWxsIHRob3NlIHJl
Z2lzdGVycyBhcyBSTyBpbmNsdWRpbmcNCnRoZSByZXNlcnZlZCBvbmVzIChjZiB0YWJsZSBEMTIt
MikuIFRoaXMgc2FpZCBJIGhhdmUgbm8gb2JqZWN0aW9uIHRvIG1ha2UgdGhlbQ0Kd3JpdGUgaWdu
b3JlIGJ1dCBmcm9tIG15IHVuZGVyc3RhbmRpbmcgdGhpcyB3b3VsZCBub3QgcmVmbGVjdCB0aGUg
aGFyZHdhcmUNCmJlaGF2aW91ci4NCg0KPiANCj4gDQo+PiAgICAgLyoNCj4+ICAgICAgKiBIQ1Jf
RUwyLlRJRENQDQo+PiAgICAgICoNCj4+IGRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9hc20tYXJt
L2NwcmVncy5oIGIveGVuL2luY2x1ZGUvYXNtLWFybS9jcHJlZ3MuaA0KPj4gaW5kZXggMjY5MGRk
ZWI3YS4uNWNiMWFkNWNiZSAxMDA2NDQNCj4+IC0tLSBhL3hlbi9pbmNsdWRlL2FzbS1hcm0vY3By
ZWdzLmgNCj4+ICsrKyBiL3hlbi9pbmNsdWRlL2FzbS1hcm0vY3ByZWdzLmgNCj4+IEBAIC0xMzMs
NiArMTMzLDMxIEBADQo+PiAjZGVmaW5lIFZQSURSICAgICAgICAgICBwMTUsNCxjMCxjMCwwICAg
LyogVmlydHVhbGl6YXRpb24gUHJvY2Vzc29yIElEIFJlZ2lzdGVyICovDQo+PiAjZGVmaW5lIFZN
UElEUiAgICAgICAgICBwMTUsNCxjMCxjMCw1ICAgLyogVmlydHVhbGl6YXRpb24gTXVsdGlwcm9j
ZXNzb3IgSUQgUmVnaXN0ZXIgKi8NCj4+IA0KPj4gKy8qDQo+PiArICogVGhvc2UgY2FzZXMgYXJl
IGNhdGNoaW5nIGFsbCBSZXNlcnZlZCByZWdpc3RlcnMgdHJhcHBlZCBieSBUSUQzIHdoaWNoDQo+
PiArICogY3VycmVudGx5IGhhdmUgbm8gYXNzaWdubWVudC4NCj4+ICsgKiBIQ1IuVElEMyBpcyB0
cmFwcGluZyBhbGwgcmVnaXN0ZXJzIGluIHRoZSBncm91cCAzOg0KPj4gKyAqIGNvcHJvYyA9PSBw
MTUsIG9wYzEgPT0gMCwgQ1JuID09IGMwLCBDUm0gPT0ge2MyLWM3fSwgb3BjMiA9PSB7MC03fS4N
Cj4+ICsgKi8NCj4+ICsjZGVmaW5lIEhTUl9DUFJFRzMyX1RJRDNfQ0FTRVMoUkVHKSAgICAgY2Fz
ZSBIU1JfQ1BSRUczMihwMTUsMCxjMCxSRUcsMCk6IFwNCj4+ICsgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgY2FzZSBIU1JfQ1BSRUczMihwMTUsMCxjMCxSRUcsMSk6IFwN
Cj4+ICsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgY2FzZSBIU1JfQ1BS
RUczMihwMTUsMCxjMCxSRUcsMik6IFwNCj4+ICsgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgY2FzZSBIU1JfQ1BSRUczMihwMTUsMCxjMCxSRUcsMyk6IFwNCj4+ICsgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgY2FzZSBIU1JfQ1BSRUczMihwMTUs
MCxjMCxSRUcsNCk6IFwNCj4+ICsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgY2FzZSBIU1JfQ1BSRUczMihwMTUsMCxjMCxSRUcsNSk6IFwNCj4+ICsgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgY2FzZSBIU1JfQ1BSRUczMihwMTUsMCxjMCxSRUcs
Nik6IFwNCj4+ICsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgY2FzZSBI
U1JfQ1BSRUczMihwMTUsMCxjMCxSRUcsNykNCj4+ICsNCj4+ICsjZGVmaW5lIEhTUl9DUFJFRzMy
X1RJRDNfUkVTRVJWRURfQ0FTRSAgY2FzZSBIU1JfQ1BSRUczMihwMTUsMCxjMCxjMywwKTogXA0K
Pj4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBjYXNlIEhTUl9DUFJF
RzMyKHAxNSwwLGMwLGMzLDEpOiBcDQo+PiArICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIGNhc2UgSFNSX0NQUkVHMzIocDE1LDAsYzAsYzMsMik6IFwNCj4+ICsgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgY2FzZSBIU1JfQ1BSRUczMihwMTUsMCxj
MCxjMywzKTogXA0KPj4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBj
YXNlIEhTUl9DUFJFRzMyKHAxNSwwLGMwLGMzLDcpOiBcDQo+PiArICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIEhTUl9DUFJFRzMyX1RJRDNfQ0FTRVMoYzQpOiBcDQo+PiAr
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIEhTUl9DUFJFRzMyX1RJRDNf
Q0FTRVMoYzUpOiBcDQo+PiArICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IEhTUl9DUFJFRzMyX1RJRDNfQ0FTRVMoYzYpOiBcDQo+PiArICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgIEhTUl9DUFJFRzMyX1RJRDNfQ0FTRVMoYzcpDQo+IA0KPiBUaGUg
Zm9sbG93aW5nIGFyZSBtaXNzaW5nLCBpcyBpdCBhIHByb2JsZW0/DQo+IA0KPiBwMTUsMCxjMCxj
MCwyDQo+IHAxNSwwLGMwLGMwLDMNCj4gcDE1LDAsYzAsYzAsNA0KPiBwMTUsMCxjMCxjMCw2DQo+
IHAxNSwwLGMwLGMwLDcNCg0KSENSLlRJRDMgZG9jdW1lbnRhdGlvbiBpcyBzYXlpbmcgdGhhdCBh
Y2Nlc3MgdG8gImNvcHJvYyA9PSBwMTUsIG9wYzEgPT0gMCwgDQpDUm4gPT0gYzAsIENSbSA9PSB7
YzItYzd9LCBvcGMyID09IHswLTd94oCdIGFyZSB0cmFwcGVkIHNvIENSbSA9IGMwIGlzIG5vdCBo
YW5kbGVkIGhlcmUuDQoNCkNoZWVycw0KQmVydHJhZG4NCg0KDQo=


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 15:11:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 15:11:23 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157368-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157368: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=a2f5ea9e314ba6778f885c805c921e9362ec0420
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Dec 2020 15:11:20 +0000

flight 157368 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157368/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                a2f5ea9e314ba6778f885c805c921e9362ec0420
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  131 days
Failing since        152366  2020-08-01 20:49:34 Z  130 days  225 attempts
Testing same since   157368  2020-12-10 04:03:39 Z    0 days    1 attempts

------------------------------------------------------------
3659 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 701346 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 15:15:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 15:15:06 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 1/7] xen/arm: Add ID registers and complete cpuinfo
Thread-Topic: [PATCH v3 1/7] xen/arm: Add ID registers and complete cpuinfo
Thread-Index: AQHWzklS6YSKOBiOok+g4tPLnYP0b6nvYnQAgAEPYQA=
Date: Thu, 10 Dec 2020 15:14:51 +0000
Message-ID: <8D31DCB1-3529-4785-A18B-CFE69CC0E846@arm.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>
 <aab713989bec4dc843bd513c03b305c83028851b.1607524536.git.bertrand.marquis@arm.com>
 <62484fa0-fa86-523a-12e0-54d69934d791@xen.org>
In-Reply-To: <62484fa0-fa86-523a-12e0-54d69934d791@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
Content-ID: <CD3B0D2549BBA8409E5238F448706022@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6315
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	c3b86ad5-2d0d-4784-e564-08d89d1e578d
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	zF25z1B0davgUGsdWA8IcR3yZ86mAHda33rfw6bT8lvkczc3TAUQYkE8hrHIawlB3tcAmbN+wPXg+ZlCkanmszPCDhg+17uprkGjd99PD6Rr4CeqUjN+WcnXfgFRLaI3ZB26xpMaVGLIL9JWyiyTn0S7+cL571dF4pp1hYm9ghtKuYyI44GJ+EhtAgOl3jMIjLKK3Orxjmb6+P+puISvcsTjliPWrTiomNIx+UojnIW4yr1y3+y5SkKt9vSpdjVIw9xcQHZQaKmKnc4BkSTrlbI8HISLmSFSFCgkpbhvJYvQ9e2cIqKrhDgiHUhU/j2pUf5Tsf/TZ+tNML7+NdXMnwHaMUu758gptPQFLdFegIQB1HGhOMMdi7aD0C+gcep+IhZ/L1YtUDykgi4yInhojQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(136003)(346002)(396003)(39860400002)(46966005)(53546011)(81166007)(107886003)(186003)(26005)(356005)(6862004)(86362001)(33656002)(82310400003)(6512007)(5660300002)(70586007)(6506007)(4326008)(47076004)(6486002)(8676002)(54906003)(336012)(30864003)(2906002)(8936002)(82740400003)(316002)(70206006)(83380400001)(36756003)(478600001)(2616005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Dec 2020 15:14:57.8856
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b3f30379-2bdb-4ef3-337e-08d89d1e5b60
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM5PR0801MB1635

Hi Julien,

> On 9 Dec 2020, at 23:03, Julien Grall <julien@xen.org> wrote:
>
> Hi Bertrand,
>
> On 09/12/2020 16:30, Bertrand Marquis wrote:
>> Add definition and entries in cpuinfo for ID registers introduced in
>> newer Arm Architecture reference manual:
>> - ID_PFR2: processor feature register 2
>> - ID_DFR1: debug feature register 1
>> - ID_MMFR4 and ID_MMFR5: Memory model feature registers 4 and 5
>> - ID_ISA6: ISA Feature register 6
>> Add more bitfield definitions in PFR fields of cpuinfo.
>> Add MVFR2 register definition for aarch32.
>> Add mvfr values in cpuinfo.
>> Add some registers definition for arm64 in sysregs as some are not
>> always know by compilers.
>> Initialize the new values added in cpuinfo in identify_cpu during init.
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>> Changes in V2:
>>   Fix dbg32 table size and add proper initialisation of the second entry
>>   of the table by reading ID_DFR1 register.
>> Changes in V3:
>>   Fix typo in commit title
>>   Add MVFR2 definition and handling on aarch32 and remove specific case
>>   for mvfr field in cpuinfo (now the same on arm64 and arm32).
>>   Add MMFR4 definition if not known by the compiler.
>> ---
>>  xen/arch/arm/cpufeature.c           | 18 ++++++++++
>>  xen/include/asm-arm/arm64/sysregs.h | 28 +++++++++++++++
>>  xen/include/asm-arm/cpregs.h        | 12 +++++++
>>  xen/include/asm-arm/cpufeature.h    | 56 ++++++++++++++++++++++++-----
>>  4 files changed, 105 insertions(+), 9 deletions(-)
>> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
>> index 44126dbf07..bc7ee5ac95 100644
>> --- a/xen/arch/arm/cpufeature.c
>> +++ b/xen/arch/arm/cpufeature.c
>> @@ -114,15 +114,20 @@ void identify_cpu(struct cpuinfo_arm *c)
>>            c->mm64.bits[0]  = READ_SYSREG64(ID_AA64MMFR0_EL1);
>>          c->mm64.bits[1]  = READ_SYSREG64(ID_AA64MMFR1_EL1);
>> +        c->mm64.bits[2]  = READ_SYSREG64(ID_AA64MMFR2_EL1);
>>            c->isa64.bits[0] = READ_SYSREG64(ID_AA64ISAR0_EL1);
>>          c->isa64.bits[1] = READ_SYSREG64(ID_AA64ISAR1_EL1);
>> +
>> +        c->zfr64.bits[0] = READ_SYSREG64(ID_AA64ZFR0_EL1);
>>  #endif
>>            c->pfr32.bits[0] = READ_SYSREG32(ID_PFR0_EL1);
>>          c->pfr32.bits[1] = READ_SYSREG32(ID_PFR1_EL1);
>> +        c->pfr32.bits[2] = READ_SYSREG32(ID_PFR2_EL1);
>>            c->dbg32.bits[0] = READ_SYSREG32(ID_DFR0_EL1);
>> +        c->dbg32.bits[1] = READ_SYSREG32(ID_DFR1_EL1);
>>            c->aux32.bits[0] = READ_SYSREG32(ID_AFR0_EL1);
>>  @@ -130,6 +135,8 @@ void identify_cpu(struct cpuinfo_arm *c)
>>          c->mm32.bits[1]  = READ_SYSREG32(ID_MMFR1_EL1);
>>          c->mm32.bits[2]  = READ_SYSREG32(ID_MMFR2_EL1);
>>          c->mm32.bits[3]  = READ_SYSREG32(ID_MMFR3_EL1);
>> +        c->mm32.bits[4]  = READ_SYSREG32(ID_MMFR4_EL1);
>> +        c->mm32.bits[5]  = READ_SYSREG32(ID_MMFR5_EL1);
>
> Please don't introduce any more use of READ_SYSREG32(), they are wrong on Armv8 because system registers are always 64-bit.

I followed the existing implementation but ...

>
>>            c->isa32.bits[0] = READ_SYSREG32(ID_ISAR0_EL1);
>>          c->isa32.bits[1] = READ_SYSREG32(ID_ISAR1_EL1);
>> @@ -137,6 +144,17 @@ void identify_cpu(struct cpuinfo_arm *c)
>>          c->isa32.bits[3] = READ_SYSREG32(ID_ISAR3_EL1);
>>          c->isa32.bits[4] = READ_SYSREG32(ID_ISAR4_EL1);
>>          c->isa32.bits[5] = READ_SYSREG32(ID_ISAR5_EL1);
>> +        c->isa32.bits[6] = READ_SYSREG32(ID_ISAR6_EL1);
>> +
>> +#ifdef CONFIG_ARM_64
>> +        c->mvfr.bits[0] = READ_SYSREG64(MVFR0_EL1);
>> +        c->mvfr.bits[1] = READ_SYSREG64(MVFR1_EL1);
>> +        c->mvfr.bits[2] = READ_SYSREG64(MVFR2_EL1);
>> +#else
>> +        c->mvfr.bits[0] = READ_CP32(MVFR0);
>> +        c->mvfr.bits[1] = READ_CP32(MVFR1);
>> +        c->mvfr.bits[2] = READ_CP32(MVFR2);
>> +#endif
>
> READ_SYSREG() will do the job to either use READ_SYSREG64() or READ_CP32() depending on the arch used.

... I can modify the ones I added and the existing ones to use READ_SYSREG instead.
Please confirm if you want me to do that.

>
>>  }
>>    /*
>> diff --git a/xen/include/asm-arm/arm64/sysregs.h b/xen/include/asm-arm/arm64/sysregs.h
>> index c60029d38f..077fd95fb7 100644
>> --- a/xen/include/asm-arm/arm64/sysregs.h
>> +++ b/xen/include/asm-arm/arm64/sysregs.h
>> @@ -57,6 +57,34 @@
>>  #define ICH_AP1R2_EL2             __AP1Rx_EL2(2)
>>  #define ICH_AP1R3_EL2             __AP1Rx_EL2(3)
>>  +/*
>> + * Define ID coprocessor registers if they are not
>> + * already defined by the compiler.
>> + *
>> + * Values picked from linux kernel
>> + */
>> +#ifndef ID_AA64MMFR2_EL1
>
> I am a bit puzzled how this is meant to work. Will the libc/compiler headers define ID_AA64MMFR2_EL1?

I tested this: if the compiler has a definition for the register, the #ifndef
is not entered. So no header defines these, but if the compiler itself has the
definition, the #ifndef block is skipped.

>
>> +#define ID_AA64MMFR2_EL1            S3_0_C0_C7_2
>> +#endif
>> +#ifndef ID_PFR2_EL1
>> +#define ID_PFR2_EL1                 S3_0_C0_C3_4
>> +#endif
>> +#ifndef ID_MMFR4_EL1
>> +#define ID_MMFR4_EL1                S3_0_C0_C2_6
>> +#endif
>> +#ifndef ID_MMFR5_EL1
>> +#define ID_MMFR5_EL1                S3_0_C0_C3_6
>> +#endif
>> +#ifndef ID_ISAR6_EL1
>> +#define ID_ISAR6_EL1                S3_0_C0_C2_7
>> +#endif
>> +#ifndef ID_AA64ZFR0_EL1
>> +#define ID_AA64ZFR0_EL1             S3_0_C0_C4_4
>> +#endif
>> +#ifndef ID_DFR1_EL1
>> +#define ID_DFR1_EL1                 S3_0_C0_C3_5
>> +#endif
>> +
>>  /* Access to system registers */
>>    #define READ_SYSREG32(name) ((uint32_t)READ_SYSREG64(name))
>> diff --git a/xen/include/asm-arm/cpregs.h b/xen/include/asm-arm/cpregs.h
>> index 8fd344146e..2690ddeb7a 100644
>> --- a/xen/include/asm-arm/cpregs.h
>> +++ b/xen/include/asm-arm/cpregs.h
>> @@ -63,6 +63,8 @@
>>  #define FPSID           p10,7,c0,c0,0   /* Floating-Point System ID Register */
>>  #define FPSCR           p10,7,c1,c0,0   /* Floating-Point Status and Control Register */
>>  #define MVFR0           p10,7,c7,c0,0   /* Media and VFP Feature Register 0 */
>> +#define MVFR1           p10,7,c6,c0,0   /* Media and VFP Feature Register 1 */
>> +#define MVFR2           p10,7,c5,c0,0   /* Media and VFP Feature Register 2 */
>>  #define FPEXC           p10,7,c8,c0,0   /* Floating-Point Exception Control Register */
>>  #define FPINST          p10,7,c9,c0,0   /* Floating-Point Instruction Register */
>>  #define FPINST2         p10,7,c10,c0,0  /* Floating-point Instruction Register 2 */
>> @@ -108,18 +110,23 @@
>>  #define MPIDR           p15,0,c0,c0,5   /* Multiprocessor Affinity Register */
>>  #define ID_PFR0         p15,0,c0,c1,0   /* Processor Feature Register 0 */
>>  #define ID_PFR1         p15,0,c0,c1,1   /* Processor Feature Register 1 */
>> +#define ID_PFR2         p15,0,c0,c3,4   /* Processor Feature Register 2 */
>>  #define ID_DFR0         p15,0,c0,c1,2   /* Debug Feature Register 0 */
>> +#define ID_DFR1         p15,0,c0,c3,5   /* Debug Feature Register 1 */
>>  #define ID_AFR0         p15,0,c0,c1,3   /* Auxiliary Feature Register 0 */
>>  #define ID_MMFR0        p15,0,c0,c1,4   /* Memory Model Feature Register 0 */
>>  #define ID_MMFR1        p15,0,c0,c1,5   /* Memory Model Feature Register 1 */
>>  #define ID_MMFR2        p15,0,c0,c1,6   /* Memory Model Feature Register 2 */
>>  #define ID_MMFR3        p15,0,c0,c1,7   /* Memory Model Feature Register 3 */
>> +#define ID_MMFR4        p15,0,c0,c2,6   /* Memory Model Feature Register 4 */
>> +#define ID_MMFR5        p15,0,c0,c3,6   /* Memory Model Feature Register 5 */
>>  #define ID_ISAR0        p15,0,c0,c2,0   /* ISA Feature Register 0 */
>>  #define ID_ISAR1        p15,0,c0,c2,1   /* ISA Feature Register 1 */
>>  #define ID_ISAR2        p15,0,c0,c2,2   /* ISA Feature Register 2 */
>>  #define ID_ISAR3        p15,0,c0,c2,3   /* ISA Feature Register 3 */
>>  #define ID_ISAR4        p15,0,c0,c2,4   /* ISA Feature Register 4 */
>>  #define ID_ISAR5        p15,0,c0,c2,5   /* ISA Feature Register 5 */
>> +#define ID_ISAR6        p15,0,c0,c2,7   /* ISA Feature Register 6 */
>>  #define CCSIDR          p15,1,c0,c0,0   /* Cache Size ID Registers */
>>  #define CLIDR           p15,1,c0,c0,1   /* Cache Level ID Register */
>>  #define CSSELR          p15,2,c0,c0,0   /* Cache Size Selection Register */
>> @@ -312,18 +319,23 @@
>>  #define HSTR_EL2                HSTR
>>  #define ID_AFR0_EL1             ID_AFR0
>>  #define ID_DFR0_EL1             ID_DFR0
>> +#define ID_DFR1_EL1             ID_DFR1
>>  #define ID_ISAR0_EL1            ID_ISAR0
>>  #define ID_ISAR1_EL1            ID_ISAR1
>>  #define ID_ISAR2_EL1            ID_ISAR2
>>  #define ID_ISAR3_EL1            ID_ISAR3
>>  #define ID_ISAR4_EL1            ID_ISAR4
>>  #define ID_ISAR5_EL1            ID_ISAR5
>> +#define ID_ISAR6_EL1            ID_ISAR6
>>  #define ID_MMFR0_EL1            ID_MMFR0
>>  #define ID_MMFR1_EL1            ID_MMFR1
>>  #define ID_MMFR2_EL1            ID_MMFR2
>>  #define ID_MMFR3_EL1            ID_MMFR3
>> +#define ID_MMFR4_EL1            ID_MMFR4
>> +#define ID_MMFR5_EL1            ID_MMFR5
>>  #define ID_PFR0_EL1             ID_PFR0
>>  #define ID_PFR1_EL1             ID_PFR1
>> +#define ID_PFR2_EL1             ID_PFR2
>>  #define IFSR32_EL2              IFSR
>>  #define MDCR_EL2                HDCR
>>  #define MIDR_EL1                MIDR
>> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
>> index c7b5052992..6cf83d775b 100644
>> --- a/xen/include/asm-arm/cpufeature.h
>> +++ b/xen/include/asm-arm/cpufeature.h
>> @@ -148,6 +148,7 @@ struct cpuinfo_arm {
>>      union {
>>          uint64_t bits[2];
>>          struct {
>> +            /* PFR0 */
>>              unsigned long el0:4;
>>              unsigned long el1:4;
>>              unsigned long el2:4;
>> @@ -155,9 +156,23 @@ struct cpuinfo_arm {
>>              unsigned long fp:4;   /* Floating Point */
>>              unsigned long simd:4; /* Advanced SIMD */
>>              unsigned long gic:4;  /* GIC support */
>> -            unsigned long __res0:28;
>> +            unsigned long ras:4;
>> +            unsigned long sve:4;
>> +            unsigned long sel2:4;
>> +            unsigned long mpam:4;
>> +            unsigned long amu:4;
>> +            unsigned long dit:4;
>> +            unsigned long __res0:4;
>>              unsigned long csv2:4;
>> -            unsigned long __res1:4;
>> +            unsigned long cvs3:4;
>> +
>> +            /* PFR1 */
>> +            unsigned long bt:4;
>> +            unsigned long ssbs:4;
>> +            unsigned long mte:4;
>> +            unsigned long ras_frac:4;
>> +            unsigned long mpam_frac:4;
>> +            unsigned long __res1:44;
>>          };
>>      } pfr64;
>>  @@ -170,7 +185,7 @@ struct cpuinfo_arm {
>>      } aux64;
>>        union {
>> -        uint64_t bits[2];
>> +        uint64_t bits[3];
>>          struct {
>>              unsigned long pa_range:4;
>>              unsigned long asid_bits:4;
>> @@ -190,6 +205,8 @@ struct cpuinfo_arm {
>>              unsigned long pan:4;
>>              unsigned long __res1:8;
>>              unsigned long __res2:32;
>> +
>> +            unsigned long __res3:64;
>>          };
>>      } mm64;
>>  @@ -197,6 +214,10 @@ struct cpuinfo_arm {
>>          uint64_t bits[2];
>>      } isa64;
>>  +    struct {
>> +        uint64_t bits[1];
>> +    } zfr64;
>> +
>>  #endif
>>        /*
>> @@ -204,25 +225,38 @@ struct cpuinfo_arm {
>>       * when running in 32-bit mode.
>>       */
>>      union {
>> -        uint32_t bits[2];
>> +        uint32_t bits[3];
>>          struct {
>> +            /* PFR0 */
>>              unsigned long arm:4;
>>              unsigned long thumb:4;
>>              unsigned long jazelle:4;
>>              unsigned long thumbee:4;
>> -            unsigned long __res0:16;
>> +            unsigned long csv2:4;
>> +            unsigned long amu:4;
>> +            unsigned long dit:4;
>> +            unsigned long ras:4;
>>  +            /* PFR1 */
>>              unsigned long progmodel:4;
>>              unsigned long security:4;
>>              unsigned long mprofile:4;
>>              unsigned long virt:4;
>>              unsigned long gentimer:4;
>> -            unsigned long __res1:12;
>> +            unsigned long sec_frac:4;
>> +            unsigned long virt_frac:4;
>> +            unsigned long gic:4;
>> +
>> +            /* PFR2 */
>> +            unsigned long csv3:4;
>> +            unsigned long ssbs:4;
>> +            unsigned long ras_frac:4;
>> +            unsigned long __res2:20;
>>          };
>>      } pfr32;
>>        struct {
>> -        uint32_t bits[1];
>> +        uint32_t bits[2];
>>      } dbg32;
>>        struct {
>> @@ -230,12 +264,16 @@ struct cpuinfo_arm {
>>      } aux32;
>>        struct {
>> -        uint32_t bits[4];
>> +        uint32_t bits[6];
>>      } mm32;
>>        struct {
>> -        uint32_t bits[6];
>> +        uint32_t bits[7];
>>      } isa32;
>> +
>> +    struct {
>> +        uint64_t bits[3];
>
> Shouldn't this be register_t?

I followed the scheme of the rest of the structure, which always uses
uint64_t or uint32_t for the bits definitions.

Why should I use the register_t type for this one?

Cheers
Bertrand

>
>> +    } mvfr;
>>  };
>>    extern struct cpuinfo_arm boot_cpu_data;
>
> --
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 15:18:59 2020
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Julien
 Grall <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 4/7] xen/arm: Add handler for ID registers on arm64
Date: Thu, 10 Dec 2020 15:18:36 +0000
Message-ID: <4B26BDEE-DA30-4B5B-A428-9D8D4659B581@arm.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>
 <e991b05af11d00627709caf847c5de99f487cab0.1607524536.git.bertrand.marquis@arm.com>
 <alpine.DEB.2.21.2012091131350.20986@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2012091131350.20986@sstabellini-ThinkPad-T480s>

Hi Stefano,

> On 9 Dec 2020, at 19:38, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Wed, 9 Dec 2020, Bertrand Marquis wrote:
>> Add vsysreg emulation for registers trapped when TID3 bit is activated
>> in HSR.
>> The emulation is returning the value stored in the cpuinfo_guest structure
>> for known registers and is handling reserved registers as RAZ.
>>
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>> Changes in V2: Rebase
>> Changes in V3:
>>  Fix commit message
>>  Fix code style for GENERATE_TID3_INFO declaration
>>  Add handling of reserved registers as RAZ.
>>
>> ---
>> xen/arch/arm/arm64/vsysreg.c | 53 ++++++++++++++++++++++++++++++++++++
>> 1 file changed, 53 insertions(+)
>>
>> diff --git a/xen/arch/arm/arm64/vsysreg.c b/xen/arch/arm/arm64/vsysreg.c
>> index 8a85507d9d..ef7a11dbdd 100644
>> --- a/xen/arch/arm/arm64/vsysreg.c
>> +++ b/xen/arch/arm/arm64/vsysreg.c
>> @@ -69,6 +69,14 @@ TVM_REG(CONTEXTIDR_EL1)
>>         break;                                                          \
>>     }
>>
>> +/* Macro to easily generate cases for ID co-processor emulation */
>> +#define GENERATE_TID3_INFO(reg, field, offset)                          \
>> +    case HSR_SYSREG_##reg:                                              \
>> +    {                                                                   \
>> +        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr,   \
>> +                          1, guest_cpuinfo.field.bits[offset]);         \
>
> [...]
>
>> +    HSR_SYSREG_TID3_RESERVED_CASE:
>> +        /* Handle all reserved registers as RAZ */
>> +        return handle_ro_raz(regs, regidx, hsr.sysreg.read, hsr, 1);
>
>
> We are implementing both the known and the implementation defined
> registers as read-as-zero. On write, we inject an exception.
>
> However, reading the manual, it looks like the implementation defined
> registers should be read-as-zero/write-ignore, is that right?

In the documentation, I found all of those defined as RO (Arm Architecture
Reference Manual, chapter D12.3.1). Do you think we should handle read-only
registers as write-ignore? Now that I think of it, RO does not explicitly say
whether writes are ignored or should generate an exception.

>
> I couldn't easily find in the manual if it is OK to inject an exception
> on write to a known register.

I am actually unsure whether it should or not.
I will run a test to check what happens when this is done on real
hardware and come back to you on this one.

Cheers
Bertrand




From xen-devel-bounces@lists.xenproject.org Thu Dec 10 15:21:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 15:21:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49438.87427 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knNkx-0002gU-Bj; Thu, 10 Dec 2020 15:21:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49438.87427; Thu, 10 Dec 2020 15:21:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knNkx-0002gN-85; Thu, 10 Dec 2020 15:21:23 +0000
Received: by outflank-mailman (input) for mailman id 49438;
 Thu, 10 Dec 2020 15:21:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=N/MM=FO=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1knNkw-0002gI-7X
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 15:21:22 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe0d::62d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5527c3c9-febd-407f-a220-9ee1a4d05c6c;
 Thu, 10 Dec 2020 15:21:20 +0000 (UTC)
Received: from DB6PR07CA0193.eurprd07.prod.outlook.com (2603:10a6:6:42::23) by
 AM6PR08MB3208.eurprd08.prod.outlook.com (2603:10a6:209:4b::10) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12; Thu, 10 Dec 2020 15:21:18 +0000
Received: from DB5EUR03FT029.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:42:cafe::87) by DB6PR07CA0193.outlook.office365.com
 (2603:10a6:6:42::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.5 via Frontend
 Transport; Thu, 10 Dec 2020 15:21:18 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT029.mail.protection.outlook.com (10.152.20.131) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12 via Frontend Transport; Thu, 10 Dec 2020 15:21:18 +0000
Received: ("Tessian outbound eeda57fffe7b:v71");
 Thu, 10 Dec 2020 15:21:18 +0000
Received: from 7d897cbad704.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A5547A75-18B0-40FA-919B-77A0108516C4.1; 
 Thu, 10 Dec 2020 15:21:12 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 7d897cbad704.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 10 Dec 2020 15:21:12 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4776.eurprd08.prod.outlook.com (2603:10a6:10:f2::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.15; Thu, 10 Dec
 2020 15:21:11 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3632.023; Thu, 10 Dec 2020
 15:21:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5527c3c9-febd-407f-a220-9ee1a4d05c6c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jana2+yJyQvAcw+b25PWB2OhcnsKE7U72kETQClUwfc=;
 b=knGW/B3o1JcBgkRgf9no0MsieYLyDTiGUoktsSDn1chOfEISjyrgBTGxsiR3kd4Ni/KXQnuEpOG2bwS6AV6+/40ieFD0TmQT4NN7KpVkdBsm5EO82v1f3vay5sloJpLcnVfQj6K4tUYIOVhbocXlvkp/rQlBKIiilsL5hWVrnKc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 84be96c1756122fa
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=N5hBnwY6U8m1Xyvwad0WMOeVYIKG+zCvSt+TxRsLpph668oqPpXe6FFAfQIrTj3JwPAMgL+OQ8hkjJB9MG7Ko+3oFDdGwmAykl9JFnfN0QPz8CE3QYsfnAd46eLubWGokL0iqGOLcXwTF1Oe2U1xAvRmr5jkK0aO2ojCiIpqsZVlMAo2liEUZRH4aTLc3iAgZLlEQVwXt/Ft1glan0d2zPaxw8UbB0+GJgrfHCwtnPAZ0QPbhjENF8HdLVvYdW0T2jZaIVSHIky2yGZJIaNaCtVd5zB7OGLYWjgjSBfGYJagSsFYr7Hm685z/p51Dhj93VLuIv1bcM4udfZkv4tkew==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jana2+yJyQvAcw+b25PWB2OhcnsKE7U72kETQClUwfc=;
 b=XG+t78DEeyr6lnSIaPOqjGqP5F2GmCSMD7GW/qdgoVP8rjeVf05C66m0AutyosiuHBMiGZpt8MqKW8+6QE6LbC1SD9baACEee//bBxa5QbWRsa2nHMH2hVqg6hrENnus1WVJvYRJAc7NqbjDtLsQXT4IlKCmsYVS1qeSAeov8vw96nZqQupDgbS4zcMTJjlztyCgkg0efm30yOo8vkbYIblcq+wVN8p4/KiApsL3wN498f3mQ+4X7Oiwf0Uj5GpX/Gke7bwX4Lh8m8HLJSVYt2B5DzMj2TN+21G2ZF9Bbc2dbGIE3dPy1t7ssshv75bosRd+0s43S98G95lnpHrZ9Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jana2+yJyQvAcw+b25PWB2OhcnsKE7U72kETQClUwfc=;
 b=knGW/B3o1JcBgkRgf9no0MsieYLyDTiGUoktsSDn1chOfEISjyrgBTGxsiR3kd4Ni/KXQnuEpOG2bwS6AV6+/40ieFD0TmQT4NN7KpVkdBsm5EO82v1f3vay5sloJpLcnVfQj6K4tUYIOVhbocXlvkp/rQlBKIiilsL5hWVrnKc=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 4/7] xen/arm: Add handler for ID registers on arm64
Thread-Topic: [PATCH v3 4/7] xen/arm: Add handler for ID registers on arm64
Thread-Index: AQHWzkloLLvBmqE820eVrJEh4IHR0KnvZTWAgAEOZoA=
Date: Thu, 10 Dec 2020 15:21:11 +0000
Message-ID: <FFBCC49B-C6C6-4D95-A24E-523741592527@arm.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>
 <e991b05af11d00627709caf847c5de99f487cab0.1607524536.git.bertrand.marquis@arm.com>
 <8a154f7c-f700-5b6f-5645-a122fec45d19@xen.org>
In-Reply-To: <8a154f7c-f700-5b6f-5645-a122fec45d19@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: b8c02bf6-99e8-40e3-fc0c-08d89d1f3e0b
x-ms-traffictypediagnostic: DBBPR08MB4776:|AM6PR08MB3208:
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB32081F8DFAE6ACB4E57AEE789DCB0@AM6PR08MB3208.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:4125;OLM:4125;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 O8uJITUqG5FeW7C/+Qi7mjuqY/Pkil/UV5n3YGQy4mhosN5CKe2Ya9SFl8tGwjITUF0wgp7HrCO3nnA21KtXNTDozm0nMReiA//P+n3h3hDCetO/l719pZgaI3tGHdGRkClEG7RFEpG+Mf46s8/qh87WDYfyiW6cZBAH2UCOka1r0yfT9sO5egkM4kVEL7hMViDrqMYZ+t7PoJsh/72f/49TYGl282e4q4OoSMZBAle6MEdNQEjXwB069l9u2PnaYhUu89tKMrbAlCV9w/8p0wEZ5Lm7tKFS3LQZ0LcglZOneQrfk3NrRMe5KGDhSEmVgL3X9+l2ONB2atrXPClkjKE18yvk1hwLfBqYFgxxre27v26D9r0VjQZBESQo8JR8
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(396003)(376002)(366004)(136003)(39860400002)(66946007)(66556008)(8936002)(66446008)(2906002)(76116006)(66476007)(8676002)(6916009)(91956017)(6506007)(53546011)(4326008)(64756008)(71200400001)(33656002)(5660300002)(54906003)(316002)(478600001)(6486002)(2616005)(26005)(86362001)(83380400001)(6512007)(186003)(36756003)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <A3FDFC0216432A49A7C1078D9AFB8BDE@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4776
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT029.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	19618b9e-46e2-417d-cc79-08d89d1f3a04
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	BLTgalOHyknaBvs4STr5nAsWzFoDg9+Ja/bKMZLKlz4vdnwXK3+7+u4w/hPa3XZfiVVhT/PQRgjY4NyLATpUH/uh+EcKQuZgfPndmGY3fkf6Elks0+B9eAgvzq0K0CWJ8MxelcTgkXxUyMD75D5k3gGUZRH0cSBXDT/HpQ9yXHnztsgP6gh+M3YvDnl4CCW/pTobq531tRdiBlGLWJ/QsS+PNn3Gu+sPLzsyc3If48R7suTYbvc2cM7DavbeizoKLLcMyy3PT7ATmhdiq+6VyOIyjIKwWo6R3DDPb9R32X2KUsQShTyRTYhgh1rOP5jkqc+qrHzYhBdKNMMYaldU497ZlL3C+8X91o0P3oqLo5v88s0lTkWSJ0oemVMprOkxzjo1k0cpWyWCat+2+O4eqQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(346002)(396003)(136003)(39860400002)(46966005)(336012)(5660300002)(107886003)(47076004)(4326008)(2906002)(478600001)(82310400003)(86362001)(6862004)(70206006)(33656002)(70586007)(26005)(82740400003)(8676002)(6506007)(53546011)(186003)(8936002)(2616005)(356005)(36756003)(6512007)(81166007)(54906003)(316002)(6486002)(83380400001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Dec 2020 15:21:18.3385
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b8c02bf6-99e8-40e3-fc0c-08d89d1f3e0b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT029.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3208

Hi Julien,

> On 9 Dec 2020, at 23:13, Julien Grall <julien@xen.org> wrote:
>
>
>
> On 09/12/2020 16:30, Bertrand Marquis wrote:
>> Add vsysreg emulation for registers trapped when TID3 bit is activated
>> in HSR.
>> The emulation is returning the value stored in cpuinfo_guest structure
>> for known registers and is handling reserved registers as RAZ.
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>> Changes in V2: Rebase
>> Changes in V3:
>>   Fix commit message
>>   Fix code style for GENERATE_TID3_INFO declaration
>>   Add handling of reserved registers as RAZ.
>> ---
>>  xen/arch/arm/arm64/vsysreg.c | 53 ++++++++++++++++++++++++++++++++++++
>>  1 file changed, 53 insertions(+)
>> diff --git a/xen/arch/arm/arm64/vsysreg.c b/xen/arch/arm/arm64/vsysreg.c
>> index 8a85507d9d..ef7a11dbdd 100644
>> --- a/xen/arch/arm/arm64/vsysreg.c
>> +++ b/xen/arch/arm/arm64/vsysreg.c
>> @@ -69,6 +69,14 @@ TVM_REG(CONTEXTIDR_EL1)
>>          break;                                                          \
>>      }
>>  +/* Macro to generate easily case for ID co-processor emulation */
>> +#define GENERATE_TID3_INFO(reg, field, offset)                          \
>> +    case HSR_SYSREG_##reg:                                              \
>> +    {                                                                   \
>> +        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr,   \
>> +                          1, guest_cpuinfo.field.bits[offset]);         \
>
> The indentation looks wrong here. The "1" should be aligned with "regs".

Right, I will fix that in v4.

Cheers
Bertrand

>
>> +    }
>> +
>>  void do_sysreg(struct cpu_user_regs *regs,
>>                 const union hsr hsr)
>>  {
>> @@ -259,6 +267,51 @@ void do_sysreg(struct cpu_user_regs *regs,
>>           */
>>          return handle_raz_wi(regs, regidx, hsr.sysreg.read, hsr, 1);
>>  +    /*
>> +     * HCR_EL2.TID3
>> +     *
>> +     * This is trapping most Identification registers used by a guest
>> +     * to identify the processor features
>> +     */
>> +    GENERATE_TID3_INFO(ID_PFR0_EL1, pfr32, 0)
>> +    GENERATE_TID3_INFO(ID_PFR1_EL1, pfr32, 1)
>> +    GENERATE_TID3_INFO(ID_PFR2_EL1, pfr32, 2)
>> +    GENERATE_TID3_INFO(ID_DFR0_EL1, dbg32, 0)
>> +    GENERATE_TID3_INFO(ID_DFR1_EL1, dbg32, 1)
>> +    GENERATE_TID3_INFO(ID_AFR0_EL1, aux32, 0)
>> +    GENERATE_TID3_INFO(ID_MMFR0_EL1, mm32, 0)
>> +    GENERATE_TID3_INFO(ID_MMFR1_EL1, mm32, 1)
>> +    GENERATE_TID3_INFO(ID_MMFR2_EL1, mm32, 2)
>> +    GENERATE_TID3_INFO(ID_MMFR3_EL1, mm32, 3)
>> +    GENERATE_TID3_INFO(ID_MMFR4_EL1, mm32, 4)
>> +    GENERATE_TID3_INFO(ID_MMFR5_EL1, mm32, 5)
>> +    GENERATE_TID3_INFO(ID_ISAR0_EL1, isa32, 0)
>> +    GENERATE_TID3_INFO(ID_ISAR1_EL1, isa32, 1)
>> +    GENERATE_TID3_INFO(ID_ISAR2_EL1, isa32, 2)
>> +    GENERATE_TID3_INFO(ID_ISAR3_EL1, isa32, 3)
>> +    GENERATE_TID3_INFO(ID_ISAR4_EL1, isa32, 4)
>> +    GENERATE_TID3_INFO(ID_ISAR5_EL1, isa32, 5)
>> +    GENERATE_TID3_INFO(ID_ISAR6_EL1, isa32, 6)
>> +    GENERATE_TID3_INFO(MVFR0_EL1, mvfr, 0)
>> +    GENERATE_TID3_INFO(MVFR1_EL1, mvfr, 1)
>> +    GENERATE_TID3_INFO(MVFR2_EL1, mvfr, 2)
>> +    GENERATE_TID3_INFO(ID_AA64PFR0_EL1, pfr64, 0)
>> +    GENERATE_TID3_INFO(ID_AA64PFR1_EL1, pfr64, 1)
>> +    GENERATE_TID3_INFO(ID_AA64DFR0_EL1, dbg64, 0)
>> +    GENERATE_TID3_INFO(ID_AA64DFR1_EL1, dbg64, 1)
>> +    GENERATE_TID3_INFO(ID_AA64ISAR0_EL1, isa64, 0)
>> +    GENERATE_TID3_INFO(ID_AA64ISAR1_EL1, isa64, 1)
>> +    GENERATE_TID3_INFO(ID_AA64MMFR0_EL1, mm64, 0)
>> +    GENERATE_TID3_INFO(ID_AA64MMFR1_EL1, mm64, 1)
>> +    GENERATE_TID3_INFO(ID_AA64MMFR2_EL1, mm64, 2)
>> +    GENERATE_TID3_INFO(ID_AA64AFR0_EL1, aux64, 0)
>> +    GENERATE_TID3_INFO(ID_AA64AFR1_EL1, aux64, 1)
>> +    GENERATE_TID3_INFO(ID_AA64ZFR0_EL1, zfr64, 0)
>> +
>> +    HSR_SYSREG_TID3_RESERVED_CASE:
>> +        /* Handle all reserved registers as RAZ */
>> +        return handle_ro_raz(regs, regidx, hsr.sysreg.read, hsr, 1);
>> +
>>      /*
>>       * HCR_EL2.TIDCP
>>       *
>
> --
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 15:24:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 15:24:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49443.87439 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knNoK-0002ss-V6; Thu, 10 Dec 2020 15:24:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49443.87439; Thu, 10 Dec 2020 15:24:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knNoK-0002sl-Rx; Thu, 10 Dec 2020 15:24:52 +0000
Received: by outflank-mailman (input) for mailman id 49443;
 Thu, 10 Dec 2020 15:24:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=N/MM=FO=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1knNoJ-0002sf-Na
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 15:24:51 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7e1b::614])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c069ed3a-661b-42f6-81f1-5e39e42aea03;
 Thu, 10 Dec 2020 15:24:50 +0000 (UTC)
Received: from DB6P192CA0008.EURP192.PROD.OUTLOOK.COM (2603:10a6:4:b8::18) by
 VI1PR0801MB1727.eurprd08.prod.outlook.com (2603:10a6:800:5a::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.12; Thu, 10 Dec
 2020 15:24:48 +0000
Received: from DB5EUR03FT047.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:b8:cafe::25) by DB6P192CA0008.outlook.office365.com
 (2603:10a6:4:b8::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.12 via Frontend
 Transport; Thu, 10 Dec 2020 15:24:48 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT047.mail.protection.outlook.com (10.152.21.232) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12 via Frontend Transport; Thu, 10 Dec 2020 15:24:47 +0000
Received: ("Tessian outbound 76bd5a04122f:v71");
 Thu, 10 Dec 2020 15:24:47 +0000
Received: from d51f337d481e.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 58607FE7-1E00-4AEB-A8CB-F2F4CD7FFBDB.1; 
 Thu, 10 Dec 2020 15:24:40 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id d51f337d481e.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 10 Dec 2020 15:24:40 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0801MB1989.eurprd08.prod.outlook.com (2603:10a6:4:75::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.17; Thu, 10 Dec
 2020 15:24:38 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3632.023; Thu, 10 Dec 2020
 15:24:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c069ed3a-661b-42f6-81f1-5e39e42aea03
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4bPH8rJeWoyjg+tIkKS0XDp2P54aYaZ+tH5ElhQjML0=;
 b=fWWEw4e3A3p9gQ79AY8nKqfXSrvPMH5BB+6/f6cuTpEZ+zIOkW6ENuX7YPjwzpRpY65LbV93i+WtUW5AmUUKZzWxGSalt/++4OfPko0BimTyPpfbRICWPxhLqwBuCyNQSENStmfLlIUFGPRrI89peNJKg7sBRXLQmMs0uPPt/HA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: f4a8b7e1512d3267
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AX/BfmPM2QcUdGkvvkwFBvOziTaC/WCZXGf21AeARcJqX4fNEW9TaFudpjgI5BmX7q2qrqhguwAY9Sd8qqrq/XnmI2rFdiHls7j+misaYDzuAeXYAGWmqYeOwoNb0U0CEz91UwHf7SM9Y7WcDcF+e+9GDFsJ7VqiYsHWuNnbA/IJvtc7stYnm6fo+nJZmbcZurZkApgLGUEUaOcVaN5qeU2u7/r339V9S05VpCTlKHbPSbscOqZVQkKKiD2iT8/tS4ibaGKabRmVGjNEURwv+IOyg/aVzQgHt1zU1tQhbxqB/9prjbyPkkph/bApoHTCkVe/pCPKcX8KBDXPVWbqtQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4bPH8rJeWoyjg+tIkKS0XDp2P54aYaZ+tH5ElhQjML0=;
 b=dyGa4IiPu3z3gBmrDUfZVNgK5PqVs6FSuVNWK38uejX1nRxMX5lHVvAKWmDNGhWr+QzF2kB9W2aSadWR7MMsitnXri9tkN/HlsCFDgk58zniGG79sbdFoCeqY5MrnHH2g43FZKZbnd7uuVqn49tOp/OLoLYSQnIGxI8FjygFqG37dMwCCQNvGespLMIGOEsKBulcg6Ei03AH6eP9RxcyIJeocpxy8IGkT1ICODTyDtlRDdT6tJLgxXJi3Qi6tIZmd1HNCa1cF8VvT8bfpldrxdYWLPtvIjNkNLmcRVkFz2Aqe5M+0jbiC1eOd0FNoZgPoWDJwYy+wDCAb7XyoifLgQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4bPH8rJeWoyjg+tIkKS0XDp2P54aYaZ+tH5ElhQjML0=;
 b=fWWEw4e3A3p9gQ79AY8nKqfXSrvPMH5BB+6/f6cuTpEZ+zIOkW6ENuX7YPjwzpRpY65LbV93i+WtUW5AmUUKZzWxGSalt/++4OfPko0BimTyPpfbRICWPxhLqwBuCyNQSENStmfLlIUFGPRrI89peNJKg7sBRXLQmMs0uPPt/HA=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 6/7] xen/arm: Add CP10 exception support to handle MVFR
Thread-Topic: [PATCH v3 6/7] xen/arm: Add CP10 exception support to handle
 MVFR
Thread-Index: AQHWzklxLeD2qzBVf0meNxRIV80xoKnvQUgAgAEzSgA=
Date: Thu, 10 Dec 2020 15:24:38 +0000
Message-ID: <12F1D373-4661-44A0-BC04-FFD867C90976@arm.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>
 <a72a378cd1d4e5c6670980cf4d201d457abe5abc.1607524536.git.bertrand.marquis@arm.com>
 <alpine.DEB.2.21.2012091256290.20986@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2012091256290.20986@sstabellini-ThinkPad-T480s>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 7e66db4f-61bd-40a2-279e-08d89d1fbaf4
x-ms-traffictypediagnostic: DB6PR0801MB1989:|VI1PR0801MB1727:
X-Microsoft-Antispam-PRVS:
	<VI1PR0801MB17271CABB0035021D7DF12F49DCB0@VI1PR0801MB1727.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 tEkmXDBKdM3i2oShv6XsEpMI5ht4c3MVWZsicYL72oP5qEbj25yCj4e66GfkuzlnKIJfEWc6Gp1oOGW89UdyMIV8dKVSAR2aU1JL2yzFGPipTVDB6iooFSnTno4nNIKuwT1DHJt/BsGwoIHiGZsAVvBqWnrUwXUjQUAxw2ZAo7vE0UBtRd9jlvCGjyfsVPJdshVxQsUKvMwVme9+uB62j1GPXNlAKtawWunFee92tSKawVM2KrO0ebfkGCA9RS5Lx8LGi03cYQN2oErXD4uLN7dz0Q2QWG4Pcu8/aB9lQeKxIWSIm7FCVtY/n6o9Rnw1ES53uDckyJuC4RFNfu7ypCumImSbUM0cOAcYao7jRveAgDZRusbNmKAztOnTJbuU
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(346002)(39860400002)(366004)(136003)(396003)(83380400001)(316002)(76116006)(66556008)(33656002)(5660300002)(91956017)(2906002)(478600001)(4326008)(36756003)(2616005)(8936002)(54906003)(26005)(186003)(6486002)(66446008)(66946007)(8676002)(71200400001)(64756008)(6512007)(86362001)(6916009)(6506007)(66476007)(53546011)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <C46DE912B24BC742A8D8E1114E845EF4@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1989
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT047.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f2bd50ca-54f0-4cc9-00c3-08d89d1fb559
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	OWZt77ikzOMBUAx1dv4YmOT7baqcflOQ7U+BNm0bgha3K/mY/x3Q9nMMfnybuHOa9vhEHqyfSgfOwsIWAQCkEhg2OIJAOUj/0K/PAqEc3BykLJis+aifwajF6pUIM6UJEx+z/R9931Xq/A1S1homXgkKzCdUvwFq08zvddFtldDa6efMuBIKH1uRML/GI8KOCdztuCD69muT4y328bDklCamFulK595VA/qAbHT+zM7nZm7grCyklsDxRZ0oc5UJbdWI56zlFqJZnPGO/6AyQ3tAuIe9+b3yp904lZskH268Sj6ytzIIMFuEmk7zxi/TxSXF4TgS2nSSxwjWlBakmZn+M8tOmE1IDhR5sUZlys1qZulaX8YWjFVPHZupkkPVWtDr5Kw0gpBFh+C8UjQKFg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(136003)(39860400002)(376002)(396003)(46966005)(186003)(316002)(6512007)(6506007)(2616005)(47076004)(2906002)(82310400003)(86362001)(356005)(107886003)(54906003)(478600001)(70206006)(81166007)(70586007)(8676002)(36756003)(336012)(53546011)(26005)(8936002)(6486002)(82740400003)(4326008)(5660300002)(6862004)(33656002)(83380400001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Dec 2020 15:24:47.8955
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7e66db4f-61bd-40a2-279e-08d89d1fbaf4
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT047.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0801MB1727

Hi Stefano,

> On 9 Dec 2020, at 21:04, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Wed, 9 Dec 2020, Bertrand Marquis wrote:
>> Add support for cp10 exceptions decoding to be able to emulate the
>> values for MVFR0, MVFR1 and MVFR2 when TID3 bit of HSR is activated.
>> This is required for aarch32 guests accessing MVFR registers using
>> vmrs and vmsr instructions.
>>
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>> Changes in V2: Rebase
>> Changes in V3:
>>  Add case for MVFR2, fix typo VMFR <-> MVFR.
>>
>> ---
>> xen/arch/arm/traps.c             |  5 ++++
>> xen/arch/arm/vcpreg.c            | 39 +++++++++++++++++++++++++++++++-
>> xen/include/asm-arm/perfc_defn.h |  1 +
>> xen/include/asm-arm/traps.h      |  1 +
>> 4 files changed, 45 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>> index 22bd1bd4c6..28d9d64558 100644
>> --- a/xen/arch/arm/traps.c
>> +++ b/xen/arch/arm/traps.c
>> @@ -2097,6 +2097,11 @@ void do_trap_guest_sync(struct cpu_user_regs *regs)
>>         perfc_incr(trap_cp14_dbg);
>>         do_cp14_dbg(regs, hsr);
>>         break;
>> +    case HSR_EC_CP10:
>> +        GUEST_BUG_ON(!psr_mode_is_32bit(regs));
>> +        perfc_incr(trap_cp10);
>> +        do_cp10(regs, hsr);
>> +        break;
>>     case HSR_EC_CP:
>>         GUEST_BUG_ON(!psr_mode_is_32bit(regs));
>>         perfc_incr(trap_cp);
>> diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c
>> index d371a1c38c..da4e22a467 100644
>> --- a/xen/arch/arm/vcpreg.c
>> +++ b/xen/arch/arm/vcpreg.c
>> @@ -319,7 +319,7 @@ void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
>>     GENERATE_TID3_INFO(ID_ISAR4, isa32, 4)
>>     GENERATE_TID3_INFO(ID_ISAR5, isa32, 5)
>>     GENERATE_TID3_INFO(ID_ISAR6, isa32, 6)
>> -    /* MVFR registers are in cp10 no cp15 */
>> +    /* MVFR registers are in cp10 not cp15 */
>>
>>     HSR_CPREG32_TID3_RESERVED_CASE:
>>         /* Handle all reserved registers as RAZ */
>> @@ -638,6 +638,43 @@ void do_cp14_dbg(struct cpu_user_regs *regs, const union hsr hsr)
>>     inject_undef_exception(regs, hsr);
>> }
>>=20
>> +void do_cp10(struct cpu_user_regs *regs, const union hsr hsr)
>> +{
>> +    const struct hsr_cp32 cp32 = hsr.cp32;
>> +    int regidx = cp32.reg;
>> +
>> +    if ( !check_conditional_instr(regs, hsr) )
>> +    {
>> +        advance_pc(regs, hsr);
>> +        return;
>> +    }
>> +
>> +    switch ( hsr.bits & HSR_CP32_REGS_MASK )
>> +    {
>> +    /*
>> +     * HSR.TID3 is trapping access to MVFR register used to identify the
>          ^ HCR

Ack, I will fix the typo in v4.

>
>> +     * VFP/Simd using VMRS/VMSR instructions.
>> +     * Exception encoding is using MRC/MCR standard with the reg field in Crn
>> +     * as are declared MVFR0 and MVFR1 in cpregs.h
>> +     */
>> +    GENERATE_TID3_INFO(MVFR0, mvfr, 0)
>> +    GENERATE_TID3_INFO(MVFR1, mvfr, 1)
>> +    GENERATE_TID3_INFO(MVFR2, mvfr, 2)
>> +
>> +    default:
>> +        gdprintk(XENLOG_ERR,
>> +                 "%s p10, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
>> +                 cp32.read ? "mrc" : "mcr",
>> +                 cp32.op1, cp32.reg, cp32.crn, cp32.crm, cp32.op2, regs->pc);
>> +        gdprintk(XENLOG_ERR, "unhandled 32-bit CP10 access %#x\n",
>> +                 hsr.bits & HSR_CP32_REGS_MASK);
>> +        inject_undef_exception(regs, hsr);
>> +        return;
>
> I take we are sure there are no other cp10 registers of interest?

The documentation says:
"VMRS access to MVFR0, MVFR1, and MVFR2, are trapped to EL2, reported using EC
syndrome value 0x08"

So yes, that is my understanding.
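For reference, here is a minimal sketch (not part of the patch) of how an EC 0x08 syndrome decodes into the cp32 fields the handler switches on. The bit offsets follow the trapped MCR/MRC ISS layout in the Arm ARM, and the CRn value 7 used for MVFR0 in the example is an assumption based on the "reg field in Crn" comment above:

```python
# Sketch: decode the ISS of a trapped MCR/MRC-style access (EC 0x08),
# mirroring the hsr.cp32 fields used by do_cp10() above.
# Bit offsets follow the trapped-MCR/MRC ISS layout in the Arm ARM.

def decode_cp32_iss(iss):
    return {
        "direction": iss & 0x1,       # 1 = read (MRC/VMRS), 0 = write (MCR/VMSR)
        "crm": (iss >> 1)  & 0xf,
        "reg": (iss >> 5)  & 0x1f,    # Rt, the general-purpose register
        "crn": (iss >> 10) & 0xf,     # for VMRS traps, identifies the VFP register
        "op1": (iss >> 14) & 0x7,
        "op2": (iss >> 17) & 0x7,
        "cv":  (iss >> 24) & 0x1,     # condition-code valid
    }

# Hypothetical syndrome: a VMRS read into r0, with CRn = 7 (assumed here
# to be MVFR0's encoding).
iss = (1 << 0) | (7 << 10) | (1 << 24)
fields = decode_cp32_iss(iss)
```

This is only an illustration of the field extraction; the real handler does the equivalent through the `hsr.cp32` bitfield union.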

Cheers
Bertrand

>
>
>> +    }
>> +
>> +    advance_pc(regs, hsr);
>> +}
>> +
>> void do_cp(struct cpu_user_regs *regs, const union hsr hsr)
>> {
>>     const struct hsr_cp cp =3D hsr.cp;
>> diff --git a/xen/include/asm-arm/perfc_defn.h b/xen/include/asm-arm/perfc_defn.h
>> index 6a83185163..31f071222b 100644
>> --- a/xen/include/asm-arm/perfc_defn.h
>> +++ b/xen/include/asm-arm/perfc_defn.h
>> @@ -11,6 +11,7 @@ PERFCOUNTER(trap_cp15_64,  "trap: cp15 64-bit access")
>> PERFCOUNTER(trap_cp14_32,  "trap: cp14 32-bit access")
>> PERFCOUNTER(trap_cp14_64,  "trap: cp14 64-bit access")
>> PERFCOUNTER(trap_cp14_dbg, "trap: cp14 dbg access")
>> +PERFCOUNTER(trap_cp10,     "trap: cp10 access")
>> PERFCOUNTER(trap_cp,       "trap: cp access")
>> PERFCOUNTER(trap_smc32,    "trap: 32-bit smc")
>> PERFCOUNTER(trap_hvc32,    "trap: 32-bit hvc")
>> diff --git a/xen/include/asm-arm/traps.h b/xen/include/asm-arm/traps.h
>> index 997c37884e..c4a3d0fb1b 100644
>> --- a/xen/include/asm-arm/traps.h
>> +++ b/xen/include/asm-arm/traps.h
>> @@ -62,6 +62,7 @@ void do_cp15_64(struct cpu_user_regs *regs, const union hsr hsr);
>> void do_cp14_32(struct cpu_user_regs *regs, const union hsr hsr);
>> void do_cp14_64(struct cpu_user_regs *regs, const union hsr hsr);
>> void do_cp14_dbg(struct cpu_user_regs *regs, const union hsr hsr);
>> +void do_cp10(struct cpu_user_regs *regs, const union hsr hsr);
>> void do_cp(struct cpu_user_regs *regs, const union hsr hsr);
>>
>> /* SMCCC handling */
>> --
>> 2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 15:27:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 15:27:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49450.87451 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knNrB-00032T-DN; Thu, 10 Dec 2020 15:27:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49450.87451; Thu, 10 Dec 2020 15:27:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knNrB-00032M-AE; Thu, 10 Dec 2020 15:27:49 +0000
Received: by outflank-mailman (input) for mailman id 49450;
 Thu, 10 Dec 2020 15:27:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=N/MM=FO=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1knNr9-00032H-Uv
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 15:27:47 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.81]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8cd3bed9-5327-46c3-8bd6-ecc6e7a0d0dc;
 Thu, 10 Dec 2020 15:27:46 +0000 (UTC)
Received: from AM5PR0601CA0054.eurprd06.prod.outlook.com (2603:10a6:206::19)
 by DB6PR0802MB2294.eurprd08.prod.outlook.com (2603:10a6:4:85::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.23; Thu, 10 Dec
 2020 15:27:44 +0000
Received: from VE1EUR03FT050.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:0:cafe::99) by AM5PR0601CA0054.outlook.office365.com
 (2603:10a6:206::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.12 via Frontend
 Transport; Thu, 10 Dec 2020 15:27:44 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT050.mail.protection.outlook.com (10.152.19.209) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12 via Frontend Transport; Thu, 10 Dec 2020 15:27:43 +0000
Received: ("Tessian outbound fc5cc0046d61:v71");
 Thu, 10 Dec 2020 15:27:43 +0000
Received: from 763f2b5106d1.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 78C7A1DA-BB31-4478-BC48-562FDF37E16D.1; 
 Thu, 10 Dec 2020 15:27:27 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 763f2b5106d1.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 10 Dec 2020 15:27:27 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4776.eurprd08.prod.outlook.com (2603:10a6:10:f2::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.15; Thu, 10 Dec
 2020 15:27:26 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3632.023; Thu, 10 Dec 2020
 15:27:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8cd3bed9-5327-46c3-8bd6-ecc6e7a0d0dc
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: cfcc2bcd73248e60
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 6/7] xen/arm: Add CP10 exception support to handle MVFR
Thread-Topic: [PATCH v3 6/7] xen/arm: Add CP10 exception support to handle
 MVFR
Thread-Index: AQHWzklxLeD2qzBVf0meNxRIV80xoKnvZdMAgAEPhoA=
Date: Thu, 10 Dec 2020 15:27:25 +0000
Message-ID: <45F93E3E-210D-4432-BEAF-0304F0018DF4@arm.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>
 <a72a378cd1d4e5c6670980cf4d201d457abe5abc.1607524536.git.bertrand.marquis@arm.com>
 <e06cde1e-2833-4335-9456-04aae3f6d287@xen.org>
In-Reply-To: <e06cde1e-2833-4335-9456-04aae3f6d287@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: efb7f69f-d397-4585-d654-08d89d2023d8
x-ms-traffictypediagnostic: DBBPR08MB4776:|DB6PR0802MB2294:
X-Microsoft-Antispam-PRVS:
	<DB6PR0802MB2294C5BF23D2BD2D9B12DF699DCB0@DB6PR0802MB2294.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <05DEA66479194A4DA08AEC86DB92295B@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4776
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT050.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	c31e61b3-3c55-4bb7-482e-08d89d20193c
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Dec 2020 15:27:43.7583
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: efb7f69f-d397-4585-d654-08d89d2023d8
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT050.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0802MB2294

Hi Julien,

> On 9 Dec 2020, at 23:15, Julien Grall <julien@xen.org> wrote:
>
> Hi Bertrand,
>
> On 09/12/2020 16:30, Bertrand Marquis wrote:
>> Add support for cp10 exceptions decoding to be able to emulate the
>> values for MVFR0, MVFR1 and MVFR2 when TID3 bit of HSR is activated.
>> This is required for aarch32 guests accessing MVFR registers using
>> vmrs and vmsr instructions.
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>> Changes in V2: Rebase
>> Changes in V3:
>>   Add case for MVFR2, fix typo VMFR <-> MVFR.
>> ---
>>  xen/arch/arm/traps.c             |  5 ++++
>>  xen/arch/arm/vcpreg.c            | 39 +++++++++++++++++++++++++++++++-
>>  xen/include/asm-arm/perfc_defn.h |  1 +
>>  xen/include/asm-arm/traps.h      |  1 +
>>  4 files changed, 45 insertions(+), 1 deletion(-)
>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>> index 22bd1bd4c6..28d9d64558 100644
>> --- a/xen/arch/arm/traps.c
>> +++ b/xen/arch/arm/traps.c
>> @@ -2097,6 +2097,11 @@ void do_trap_guest_sync(struct cpu_user_regs *regs)
>>          perfc_incr(trap_cp14_dbg);
>>          do_cp14_dbg(regs, hsr);
>>          break;
>> +    case HSR_EC_CP10:
>> +        GUEST_BUG_ON(!psr_mode_is_32bit(regs));
>> +        perfc_incr(trap_cp10);
>> +        do_cp10(regs, hsr);
>> +        break;
>>      case HSR_EC_CP:
>>          GUEST_BUG_ON(!psr_mode_is_32bit(regs));
>>          perfc_incr(trap_cp);
>> diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c
>> index d371a1c38c..da4e22a467 100644
>> --- a/xen/arch/arm/vcpreg.c
>> +++ b/xen/arch/arm/vcpreg.c
>> @@ -319,7 +319,7 @@ void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
>>      GENERATE_TID3_INFO(ID_ISAR4, isa32, 4)
>>      GENERATE_TID3_INFO(ID_ISAR5, isa32, 5)
>>      GENERATE_TID3_INFO(ID_ISAR6, isa32, 6)
>> -    /* MVFR registers are in cp10 no cp15 */
>> +    /* MVFR registers are in cp10 not cp15 */
>
> The code was originally added in the previous patch. Please either introduce the comment here or fold it in the previous patch.

Right, I will fold it back into the previous patch.

Regards
Bertrand

>
>>        HSR_CPREG32_TID3_RESERVED_CASE:
>>          /* Handle all reserved registers as RAZ */
>> @@ -638,6 +638,43 @@ void do_cp14_dbg(struct cpu_user_regs *regs, const union hsr hsr)
>>      inject_undef_exception(regs, hsr);
>>  }
>>  +void do_cp10(struct cpu_user_regs *regs, const union hsr hsr)
>> +{
>> +    const struct hsr_cp32 cp32 = hsr.cp32;
>> +    int regidx = cp32.reg;
>> +
>> +    if ( !check_conditional_instr(regs, hsr) )
>> +    {
>> +        advance_pc(regs, hsr);
>> +        return;
>> +    }
>> +
>> +    switch ( hsr.bits & HSR_CP32_REGS_MASK )
>> +    {
>> +    /*
>> +     * HSR.TID3 is trapping access to MVFR register used to identify the
>> +     * VFP/Simd using VMRS/VMSR instructions.
>> +     * Exception encoding is using MRC/MCR standard with the reg field in Crn
>> +     * as are declared MVFR0 and MVFR1 in cpregs.h
>> +     */
>> +    GENERATE_TID3_INFO(MVFR0, mvfr, 0)
>> +    GENERATE_TID3_INFO(MVFR1, mvfr, 1)
>> +    GENERATE_TID3_INFO(MVFR2, mvfr, 2)
>> +
>> +    default:
>> +        gdprintk(XENLOG_ERR,
>> +                 "%s p10, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
>> +                 cp32.read ? "mrc" : "mcr",
>> +                 cp32.op1, cp32.reg, cp32.crn, cp32.crm, cp32.op2, regs->pc);
>> +        gdprintk(XENLOG_ERR, "unhandled 32-bit CP10 access %#x\n",
>> +                 hsr.bits & HSR_CP32_REGS_MASK);
>> +        inject_undef_exception(regs, hsr);
>> +        return;
>> +    }
>> +
>> +    advance_pc(regs, hsr);
>> +}
>> +
>>  void do_cp(struct cpu_user_regs *regs, const union hsr hsr)
>>  {
>>      const struct hsr_cp cp = hsr.cp;
>> diff --git a/xen/include/asm-arm/perfc_defn.h b/xen/include/asm-arm/perf=
c_defn.h
>> index 6a83185163..31f071222b 100644
>> --- a/xen/include/asm-arm/perfc_defn.h
>> +++ b/xen/include/asm-arm/perfc_defn.h
>> @@ -11,6 +11,7 @@ PERFCOUNTER(trap_cp15_64,  "trap: cp15 64-bit access")
>>  PERFCOUNTER(trap_cp14_32,  "trap: cp14 32-bit access")
>>  PERFCOUNTER(trap_cp14_64,  "trap: cp14 64-bit access")
>>  PERFCOUNTER(trap_cp14_dbg, "trap: cp14 dbg access")
>> +PERFCOUNTER(trap_cp10,     "trap: cp10 access")
>>  PERFCOUNTER(trap_cp,       "trap: cp access")
>>  PERFCOUNTER(trap_smc32,    "trap: 32-bit smc")
>>  PERFCOUNTER(trap_hvc32,    "trap: 32-bit hvc")
>> diff --git a/xen/include/asm-arm/traps.h b/xen/include/asm-arm/traps.h
>> index 997c37884e..c4a3d0fb1b 100644
>> --- a/xen/include/asm-arm/traps.h
>> +++ b/xen/include/asm-arm/traps.h
>> @@ -62,6 +62,7 @@ void do_cp15_64(struct cpu_user_regs *regs, const union hsr hsr);
>>  void do_cp14_32(struct cpu_user_regs *regs, const union hsr hsr);
>>  void do_cp14_64(struct cpu_user_regs *regs, const union hsr hsr);
>>  void do_cp14_dbg(struct cpu_user_regs *regs, const union hsr hsr);
>> +void do_cp10(struct cpu_user_regs *regs, const union hsr hsr);
>>  void do_cp(struct cpu_user_regs *regs, const union hsr hsr);
>>    /* SMCCC handling */
>
> --
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 15:36:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 15:36:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49458.87463 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knNzl-00046u-Gn; Thu, 10 Dec 2020 15:36:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49458.87463; Thu, 10 Dec 2020 15:36:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knNzl-00046n-C9; Thu, 10 Dec 2020 15:36:41 +0000
Received: by outflank-mailman (input) for mailman id 49458;
 Thu, 10 Dec 2020 15:36:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=N/MM=FO=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1knNzk-00046h-Po
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 15:36:40 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe0d::610])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0da8f9a7-21b9-41b2-bc82-9f3d52fe316d;
 Thu, 10 Dec 2020 15:36:39 +0000 (UTC)
Received: from DB6PR0301CA0029.eurprd03.prod.outlook.com (2603:10a6:4:3e::39)
 by VI1PR08MB3968.eurprd08.prod.outlook.com (2603:10a6:803:e5::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.18; Thu, 10 Dec
 2020 15:36:36 +0000
Received: from DB5EUR03FT035.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:3e:cafe::ae) by DB6PR0301CA0029.outlook.office365.com
 (2603:10a6:4:3e::39) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.12 via Frontend
 Transport; Thu, 10 Dec 2020 15:36:36 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT035.mail.protection.outlook.com (10.152.20.65) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12 via Frontend Transport; Thu, 10 Dec 2020 15:36:35 +0000
Received: ("Tessian outbound 6af064f543d4:v71");
 Thu, 10 Dec 2020 15:36:35 +0000
Received: from 5754061f6571.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 28A81783-8BDF-4E17-9F3F-8B7F4D753DC3.1; 
 Thu, 10 Dec 2020 15:36:29 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 5754061f6571.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 10 Dec 2020 15:36:29 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB6252.eurprd08.prod.outlook.com (2603:10a6:10:20b::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.20; Thu, 10 Dec
 2020 15:36:28 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3632.023; Thu, 10 Dec 2020
 15:36:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0da8f9a7-21b9-41b2-bc82-9f3d52fe316d
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 03d8f3fe86f71e60
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 7/7] xen/arm: Activate TID3 in HCR_EL2
Thread-Topic: [PATCH v3 7/7] xen/arm: Activate TID3 in HCR_EL2
Thread-Index: AQHWzkl013/0PfAuCEKW6HlWdTiWa6nvZnoAgAERZAA=
Date: Thu, 10 Dec 2020 15:36:28 +0000
Message-ID: <C38106BC-5FDB-49A7-A36C-2584748A7054@arm.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>
 <956cf336ffce24f0cabfc7a98ae855bc71d5f028.1607524536.git.bertrand.marquis@arm.com>
 <6e81e7ff-9cfc-aaec-e1fc-336dec06dd6d@xen.org>
In-Reply-To: <6e81e7ff-9cfc-aaec-e1fc-336dec06dd6d@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 7beff5c9-73ea-476f-f224-08d89d2160e8
x-ms-traffictypediagnostic: DBBPR08MB6252:|VI1PR08MB3968:
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB3968B9B11D65731ABAE6B98D9DCB0@VI1PR08MB3968.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <E1A59F816D87A84A99C41C89AD6ECFCF@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6252
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT035.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	e3b6bbba-50be-4529-28c1-08d89d215c7e
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	pGQTlzeX9RSL3sHzUnt9ffgVpUKBRnvBNF+Rjyub8Bx7beNKymg4ZSAk9X7ViMaUootcjAG1oM/qSsfr5HFkbyZcr7er+AKFC303YnvRif0Jd8maXKxi/OVlvoknyJAN2unwJKWNl1wYCSPoxV/tSTZ4r1MkDRJ0xPktuSKCMNnZCP2IctDULjRoKMXGJMab0oFgApX2UVst2nqkQP0HKXovLTFWmOu+byC636owipp1VJpKpj7a9Rvqs4XPqe67oIaavWHeitQg02NolsOPB5cnrJYqFTGwYS2A1KSQnW2EsBoS5N7LlfUyp1r0AlH9QvkfUxN0dx2UsttzL+43iHDa5V0CwB2ottPjvlKwicioYFcCXrvk1EJ6FqKt3GEEW84VWrsizOqRVnauOq566w==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(396003)(376002)(39860400002)(346002)(46966005)(33656002)(356005)(54906003)(36756003)(8676002)(8936002)(6486002)(86362001)(70586007)(6512007)(186003)(81166007)(2906002)(316002)(6506007)(26005)(478600001)(2616005)(107886003)(53546011)(82310400003)(4326008)(6862004)(47076004)(82740400003)(70206006)(5660300002)(83380400001)(336012);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Dec 2020 15:36:35.8225
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7beff5c9-73ea-476f-f224-08d89d2160e8
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT035.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3968

Hi Julien,

> On 9 Dec 2020, at 23:17, Julien Grall <julien@xen.org> wrote:
> 
> Hi Bertrand,
> 
> On 09/12/2020 16:31, Bertrand Marquis wrote:
>> Activate TID3 bit in HSR register when starting a guest.
> 
> s/HSR/HCR/
> 

Right, I made that typo a lot; thanks for the review.
I will fix it in V4.

>> This will trap all coprocessor ID registers so that we can give to guests
>> values corresponding to what they can actually use and mask some
>> features to guests even though they would be supported by the underlying
>> hardware (like SVE or MPAM).
> 
> So this will make sure the guest will not be able to identify the feature. Did you check that the features are effectively not accessible by the guest? IOW it should trap.

For SVE, yes, I checked: with this series, a Linux kernel with SVE support activated now works on a target with SVE (it was crashing before).
For MPAM, I have no target available with MPAM support, so I could not test that, but your recent XSA patch did turn off guest access.

With my SVE test, I could confirm that accesses are trapped and properly emulated.

Cheers
Bertrand

> 
> Cheers,
> 
> -- 
> Julien Grall
> 



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 15:45:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 15:45:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49464.87475 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knO8V-0005Bi-CS; Thu, 10 Dec 2020 15:45:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49464.87475; Thu, 10 Dec 2020 15:45:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knO8V-0005Bb-9G; Thu, 10 Dec 2020 15:45:43 +0000
Received: by outflank-mailman (input) for mailman id 49464;
 Thu, 10 Dec 2020 15:45:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1knO8U-0005BW-JY
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 15:45:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1knO8T-00026h-1l; Thu, 10 Dec 2020 15:45:41 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1knO8S-0002VE-K5; Thu, 10 Dec 2020 15:45:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=MPZgRJbPYZax3D2hee6VK8GTc8eosRP/c3EZNBRT+uk=; b=3nAaBMh58Rq3hK6OyZDzXA7lX5
	6AiAGearzY9MwzAb7D7g/rlKD5zd4yiycvFoHCBg3WSGyUWsNV1f4ooHyW7lwOYc6o9jhFNLrHKQk
	Xy6fB/SZcJu4rYibHS7UWjkEqeC7i0DSJiqdVPXcoqhau0jUS/7ighUwmkxbH/VRxxxc=;
Subject: Re: [PATCH v3 1/7] xen/arm: Add ID registers and complete cpuinfo
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>
 <aab713989bec4dc843bd513c03b305c83028851b.1607524536.git.bertrand.marquis@arm.com>
 <62484fa0-fa86-523a-12e0-54d69934d791@xen.org>
 <8D31DCB1-3529-4785-A18B-CFE69CC0E846@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <157559b2-7d95-ff46-204c-f875e617d464@xen.org>
Date: Thu, 10 Dec 2020 15:45:37 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <8D31DCB1-3529-4785-A18B-CFE69CC0E846@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Bertrand,

On 10/12/2020 15:14, Bertrand Marquis wrote:
> Hi Julien,
> 
>> On 9 Dec 2020, at 23:03, Julien Grall <julien@xen.org> wrote:
>>
>> Hi Bertrand,
>>
>> On 09/12/2020 16:30, Bertrand Marquis wrote:
>>> Add definition and entries in cpuinfo for ID registers introduced in
>>> newer Arm Architecture reference manual:
>>> - ID_PFR2: processor feature register 2
>>> - ID_DFR1: debug feature register 1
>>> - ID_MMFR4 and ID_MMFR5: Memory model feature registers 4 and 5
>>> - ID_ISA6: ISA Feature register 6
>>> Add more bitfield definitions in PFR fields of cpuinfo.
>>> Add MVFR2 register definition for aarch32.
>>> Add mvfr values in cpuinfo.
>>> Add some registers definition for arm64 in sysregs as some are not
>>> always known by compilers.
>>> Initialize the new values added in cpuinfo in identify_cpu during init.
>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>> ---
>>> Changes in V2:
>>>    Fix dbg32 table size and add proper initialisation of the second entry
>>>    of the table by reading ID_DFR1 register.
>>> Changes in V3:
>>>    Fix typo in commit title
>>>    Add MVFR2 definition and handling on aarch32 and remove specific case
>>>    for mvfr field in cpuinfo (now the same on arm64 and arm32).
>>>    Add MMFR4 definition if not known by the compiler.
>>> ---
>>>   xen/arch/arm/cpufeature.c           | 18 ++++++++++
>>>   xen/include/asm-arm/arm64/sysregs.h | 28 +++++++++++++++
>>>   xen/include/asm-arm/cpregs.h        | 12 +++++++
>>>   xen/include/asm-arm/cpufeature.h    | 56 ++++++++++++++++++++++++-----
>>>   4 files changed, 105 insertions(+), 9 deletions(-)
>>> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
>>> index 44126dbf07..bc7ee5ac95 100644
>>> --- a/xen/arch/arm/cpufeature.c
>>> +++ b/xen/arch/arm/cpufeature.c
>>> @@ -114,15 +114,20 @@ void identify_cpu(struct cpuinfo_arm *c)
>>>             c->mm64.bits[0]  = READ_SYSREG64(ID_AA64MMFR0_EL1);
>>>           c->mm64.bits[1]  = READ_SYSREG64(ID_AA64MMFR1_EL1);
>>> +        c->mm64.bits[2]  = READ_SYSREG64(ID_AA64MMFR2_EL1);
>>>             c->isa64.bits[0] = READ_SYSREG64(ID_AA64ISAR0_EL1);
>>>           c->isa64.bits[1] = READ_SYSREG64(ID_AA64ISAR1_EL1);
>>> +
>>> +        c->zfr64.bits[0] = READ_SYSREG64(ID_AA64ZFR0_EL1);
>>>   #endif
>>>             c->pfr32.bits[0] = READ_SYSREG32(ID_PFR0_EL1);
>>>           c->pfr32.bits[1] = READ_SYSREG32(ID_PFR1_EL1);
>>> +        c->pfr32.bits[2] = READ_SYSREG32(ID_PFR2_EL1);
>>>             c->dbg32.bits[0] = READ_SYSREG32(ID_DFR0_EL1);
>>> +        c->dbg32.bits[1] = READ_SYSREG32(ID_DFR1_EL1);
>>>             c->aux32.bits[0] = READ_SYSREG32(ID_AFR0_EL1);
>>>   @@ -130,6 +135,8 @@ void identify_cpu(struct cpuinfo_arm *c)
>>>           c->mm32.bits[1]  = READ_SYSREG32(ID_MMFR1_EL1);
>>>           c->mm32.bits[2]  = READ_SYSREG32(ID_MMFR2_EL1);
>>>           c->mm32.bits[3]  = READ_SYSREG32(ID_MMFR3_EL1);
>>> +        c->mm32.bits[4]  = READ_SYSREG32(ID_MMFR4_EL1);
>>> +        c->mm32.bits[5]  = READ_SYSREG32(ID_MMFR5_EL1);
>>
>> Please don't introduce any more use of READ_SYSREG32(), they are wrong on Armv8 because system registers are always 64-bit.
> 
> I followed the existing implementation but ...
> 
>>
>>>             c->isa32.bits[0] = READ_SYSREG32(ID_ISAR0_EL1);
>>>           c->isa32.bits[1] = READ_SYSREG32(ID_ISAR1_EL1);
>>> @@ -137,6 +144,17 @@ void identify_cpu(struct cpuinfo_arm *c)
>>>           c->isa32.bits[3] = READ_SYSREG32(ID_ISAR3_EL1);
>>>           c->isa32.bits[4] = READ_SYSREG32(ID_ISAR4_EL1);
>>>           c->isa32.bits[5] = READ_SYSREG32(ID_ISAR5_EL1);
>>> +        c->isa32.bits[6] = READ_SYSREG32(ID_ISAR6_EL1);
>>> +
>>> +#ifdef CONFIG_ARM_64
>>> +        c->mvfr.bits[0] = READ_SYSREG64(MVFR0_EL1);
>>> +        c->mvfr.bits[1] = READ_SYSREG64(MVFR1_EL1);
>>> +        c->mvfr.bits[2] = READ_SYSREG64(MVFR2_EL1);
>>> +#else
>>> +        c->mvfr.bits[0] = READ_CP32(MVFR0);
>>> +        c->mvfr.bits[1] = READ_CP32(MVFR1);
>>> +        c->mvfr.bits[2] = READ_CP32(MVFR2);
>>> +#endif
>>
>> READ_SYSREG() will do the job to either use READ_SYSREG64() or READ_CP32() depending on the arch used.
> 
> .. I can modify the ones I added and the existing ones to use READ_SYSREG instead.
> Please confirm if you want me to do that.

Yes, please use READ_SYSREG() for the new ones. We can convert the 
others separately.

> 
>>
>>>   }
>>>     /*
>>> diff --git a/xen/include/asm-arm/arm64/sysregs.h b/xen/include/asm-arm/arm64/sysregs.h
>>> index c60029d38f..077fd95fb7 100644
>>> --- a/xen/include/asm-arm/arm64/sysregs.h
>>> +++ b/xen/include/asm-arm/arm64/sysregs.h
>>> @@ -57,6 +57,34 @@
>>>   #define ICH_AP1R2_EL2             __AP1Rx_EL2(2)
>>>   #define ICH_AP1R3_EL2             __AP1Rx_EL2(3)
>>>   +/*
>>> + * Define ID coprocessor registers if they are not
>>> + * already defined by the compiler.
>>> + *
>>> + * Values picked from linux kernel
>>> + */
>>> +#ifndef ID_AA64MMFR2_EL1
>>
>> I am a bit puzzled how this is meant to work. Will the libc/compiler headers define ID_AA64MMFR2_EL1?
> 
> I tested this: if the compiler has a definition for the register, the
> #ifndef body is not entered. So no header defines this, but if the
> compiler itself knows the name, the fallback definition is skipped.

Good to hear :).

> 
>>
>>> +#define ID_AA64MMFR2_EL1            S3_0_C0_C7_2
>>> +#endif
>>> +#ifndef ID_PFR2_EL1
>>> +#define ID_PFR2_EL1                 S3_0_C0_C3_4
>>> +#endif
>>> +#ifndef ID_MMFR4_EL1
>>> +#define ID_MMFR4_EL1                S3_0_C0_C2_6
>>> +#endif
>>> +#ifndef ID_MMFR5_EL1
>>> +#define ID_MMFR5_EL1                S3_0_C0_C3_6
>>> +#endif
>>> +#ifndef ID_ISAR6_EL1
>>> +#define ID_ISAR6_EL1                S3_0_C0_C2_7
>>> +#endif
>>> +#ifndef ID_AA64ZFR0_EL1
>>> +#define ID_AA64ZFR0_EL1             S3_0_C0_C4_4
>>> +#endif
>>> +#ifndef ID_DFR1_EL1
>>> +#define ID_DFR1_EL1                 S3_0_C0_C3_5
>>> +#endif
>>> +
>>>   /* Access to system registers */
>>>     #define READ_SYSREG32(name) ((uint32_t)READ_SYSREG64(name))
>>> diff --git a/xen/include/asm-arm/cpregs.h b/xen/include/asm-arm/cpregs.h
>>> index 8fd344146e..2690ddeb7a 100644
>>> --- a/xen/include/asm-arm/cpregs.h
>>> +++ b/xen/include/asm-arm/cpregs.h
>>> @@ -63,6 +63,8 @@
>>>   #define FPSID           p10,7,c0,c0,0   /* Floating-Point System ID Register */
>>>   #define FPSCR           p10,7,c1,c0,0   /* Floating-Point Status and Control Register */
>>>   #define MVFR0           p10,7,c7,c0,0   /* Media and VFP Feature Register 0 */
>>> +#define MVFR1           p10,7,c6,c0,0   /* Media and VFP Feature Register 1 */
>>> +#define MVFR2           p10,7,c5,c0,0   /* Media and VFP Feature Register 2 */
>>>   #define FPEXC           p10,7,c8,c0,0   /* Floating-Point Exception Control Register */
>>>   #define FPINST          p10,7,c9,c0,0   /* Floating-Point Instruction Register */
>>>   #define FPINST2         p10,7,c10,c0,0  /* Floating-point Instruction Register 2 */
>>> @@ -108,18 +110,23 @@
>>>   #define MPIDR           p15,0,c0,c0,5   /* Multiprocessor Affinity Register */
>>>   #define ID_PFR0         p15,0,c0,c1,0   /* Processor Feature Register 0 */
>>>   #define ID_PFR1         p15,0,c0,c1,1   /* Processor Feature Register 1 */
>>> +#define ID_PFR2         p15,0,c0,c3,4   /* Processor Feature Register 2 */
>>>   #define ID_DFR0         p15,0,c0,c1,2   /* Debug Feature Register 0 */
>>> +#define ID_DFR1         p15,0,c0,c3,5   /* Debug Feature Register 1 */
>>>   #define ID_AFR0         p15,0,c0,c1,3   /* Auxiliary Feature Register 0 */
>>>   #define ID_MMFR0        p15,0,c0,c1,4   /* Memory Model Feature Register 0 */
>>>   #define ID_MMFR1        p15,0,c0,c1,5   /* Memory Model Feature Register 1 */
>>>   #define ID_MMFR2        p15,0,c0,c1,6   /* Memory Model Feature Register 2 */
>>>   #define ID_MMFR3        p15,0,c0,c1,7   /* Memory Model Feature Register 3 */
>>> +#define ID_MMFR4        p15,0,c0,c2,6   /* Memory Model Feature Register 4 */
>>> +#define ID_MMFR5        p15,0,c0,c3,6   /* Memory Model Feature Register 5 */
>>>   #define ID_ISAR0        p15,0,c0,c2,0   /* ISA Feature Register 0 */
>>>   #define ID_ISAR1        p15,0,c0,c2,1   /* ISA Feature Register 1 */
>>>   #define ID_ISAR2        p15,0,c0,c2,2   /* ISA Feature Register 2 */
>>>   #define ID_ISAR3        p15,0,c0,c2,3   /* ISA Feature Register 3 */
>>>   #define ID_ISAR4        p15,0,c0,c2,4   /* ISA Feature Register 4 */
>>>   #define ID_ISAR5        p15,0,c0,c2,5   /* ISA Feature Register 5 */
>>> +#define ID_ISAR6        p15,0,c0,c2,7   /* ISA Feature Register 6 */
>>>   #define CCSIDR          p15,1,c0,c0,0   /* Cache Size ID Registers */
>>>   #define CLIDR           p15,1,c0,c0,1   /* Cache Level ID Register */
>>>   #define CSSELR          p15,2,c0,c0,0   /* Cache Size Selection Register */
>>> @@ -312,18 +319,23 @@
>>>   #define HSTR_EL2                HSTR
>>>   #define ID_AFR0_EL1             ID_AFR0
>>>   #define ID_DFR0_EL1             ID_DFR0
>>> +#define ID_DFR1_EL1             ID_DFR1
>>>   #define ID_ISAR0_EL1            ID_ISAR0
>>>   #define ID_ISAR1_EL1            ID_ISAR1
>>>   #define ID_ISAR2_EL1            ID_ISAR2
>>>   #define ID_ISAR3_EL1            ID_ISAR3
>>>   #define ID_ISAR4_EL1            ID_ISAR4
>>>   #define ID_ISAR5_EL1            ID_ISAR5
>>> +#define ID_ISAR6_EL1            ID_ISAR6
>>>   #define ID_MMFR0_EL1            ID_MMFR0
>>>   #define ID_MMFR1_EL1            ID_MMFR1
>>>   #define ID_MMFR2_EL1            ID_MMFR2
>>>   #define ID_MMFR3_EL1            ID_MMFR3
>>> +#define ID_MMFR4_EL1            ID_MMFR4
>>> +#define ID_MMFR5_EL1            ID_MMFR5
>>>   #define ID_PFR0_EL1             ID_PFR0
>>>   #define ID_PFR1_EL1             ID_PFR1
>>> +#define ID_PFR2_EL1             ID_PFR2
>>>   #define IFSR32_EL2              IFSR
>>>   #define MDCR_EL2                HDCR
>>>   #define MIDR_EL1                MIDR
>>> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
>>> index c7b5052992..6cf83d775b 100644
>>> --- a/xen/include/asm-arm/cpufeature.h
>>> +++ b/xen/include/asm-arm/cpufeature.h
>>> @@ -148,6 +148,7 @@ struct cpuinfo_arm {
>>>       union {
>>>           uint64_t bits[2];
>>>           struct {
>>> +            /* PFR0 */
>>>               unsigned long el0:4;
>>>               unsigned long el1:4;
>>>               unsigned long el2:4;
>>> @@ -155,9 +156,23 @@ struct cpuinfo_arm {
>>>               unsigned long fp:4;   /* Floating Point */
>>>               unsigned long simd:4; /* Advanced SIMD */
>>>               unsigned long gic:4;  /* GIC support */
>>> -            unsigned long __res0:28;
>>> +            unsigned long ras:4;
>>> +            unsigned long sve:4;
>>> +            unsigned long sel2:4;
>>> +            unsigned long mpam:4;
>>> +            unsigned long amu:4;
>>> +            unsigned long dit:4;
>>> +            unsigned long __res0:4;
>>>               unsigned long csv2:4;
>>> -            unsigned long __res1:4;
>>> +            unsigned long cvs3:4;
>>> +
>>> +            /* PFR1 */
>>> +            unsigned long bt:4;
>>> +            unsigned long ssbs:4;
>>> +            unsigned long mte:4;
>>> +            unsigned long ras_frac:4;
>>> +            unsigned long mpam_frac:4;
>>> +            unsigned long __res1:44;
>>>           };
>>>       } pfr64;
>>>   @@ -170,7 +185,7 @@ struct cpuinfo_arm {
>>>       } aux64;
>>>         union {
>>> -        uint64_t bits[2];
>>> +        uint64_t bits[3];
>>>           struct {
>>>               unsigned long pa_range:4;
>>>               unsigned long asid_bits:4;
>>> @@ -190,6 +205,8 @@ struct cpuinfo_arm {
>>>               unsigned long pan:4;
>>>               unsigned long __res1:8;
>>>               unsigned long __res2:32;
>>> +
>>> +            unsigned long __res3:64;
>>>           };
>>>       } mm64;
>>>   @@ -197,6 +214,10 @@ struct cpuinfo_arm {
>>>           uint64_t bits[2];
>>>       } isa64;
>>>   +    struct {
>>> +        uint64_t bits[1];
>>> +    } zfr64;
>>> +
>>>   #endif
>>>         /*
>>> @@ -204,25 +225,38 @@ struct cpuinfo_arm {
>>>        * when running in 32-bit mode.
>>>        */
>>>       union {
>>> -        uint32_t bits[2];
>>> +        uint32_t bits[3];
>>>           struct {
>>> +            /* PFR0 */
>>>               unsigned long arm:4;
>>>               unsigned long thumb:4;
>>>               unsigned long jazelle:4;
>>>               unsigned long thumbee:4;
>>> -            unsigned long __res0:16;
>>> +            unsigned long csv2:4;
>>> +            unsigned long amu:4;
>>> +            unsigned long dit:4;
>>> +            unsigned long ras:4;
>>>   +            /* PFR1 */
>>>               unsigned long progmodel:4;
>>>               unsigned long security:4;
>>>               unsigned long mprofile:4;
>>>               unsigned long virt:4;
>>>               unsigned long gentimer:4;
>>> -            unsigned long __res1:12;
>>> +            unsigned long sec_frac:4;
>>> +            unsigned long virt_frac:4;
>>> +            unsigned long gic:4;
>>> +
>>> +            /* PFR2 */
>>> +            unsigned long csv3:4;
>>> +            unsigned long ssbs:4;
>>> +            unsigned long ras_frac:4;
>>> +            unsigned long __res2:20;
>>>           };
>>>       } pfr32;
>>>         struct {
>>> -        uint32_t bits[1];
>>> +        uint32_t bits[2];
>>>       } dbg32;
>>>         struct {
>>> @@ -230,12 +264,16 @@ struct cpuinfo_arm {
>>>       } aux32;
>>>         struct {
>>> -        uint32_t bits[4];
>>> +        uint32_t bits[6];
>>>       } mm32;
>>>         struct {
>>> -        uint32_t bits[6];
>>> +        uint32_t bits[7];
>>>       } isa32;
>>> +
>>> +    struct {
>>> +        uint64_t bits[3];
>>
>> Shouldn't this be register_t?
> 
> I followed the scheme of the rest of the structure which
> is always using uint64_t or uint32_t for bits definitions.

Right, but I am sure you will not be surprised if I tell you this is 
buggy ;). The historical reason is, IIRC, the original spec of Armv8.0 
described them as 32-bit registers.

The spec was updated a while ago to clarify that they are 64-bit when 
running in AArch64. But a majority of them still have the top 32 bits 
RES0 (thankfully!).

> 
> Why should I use register_t type for this one ?

Because the value is 32-bit on AArch32 and 64-bit on AArch64. I am OK 
with still using 64-bit for AArch32, but it sounds like a bit of a waste 
of memory.

What I care the most here is we use 64-bit for the new registers on 
AArch64. Otherwise, we are going to soon discover that a bit above 32 
was allocated and not detected by Xen. I don't want to be the one doing 
the debugging!

Admittedly, this is not a new issue. However, the more offending code we 
have, the more difficult it will be to get Xen fully compliant with the 
Armv8 spec.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 15:46:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 15:46:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49467.87487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knO93-0005Hi-PS; Thu, 10 Dec 2020 15:46:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49467.87487; Thu, 10 Dec 2020 15:46:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knO93-0005Hb-MT; Thu, 10 Dec 2020 15:46:17 +0000
Received: by outflank-mailman (input) for mailman id 49467;
 Thu, 10 Dec 2020 15:46:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1knO92-0005HT-39
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 15:46:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1knO91-00027A-R5; Thu, 10 Dec 2020 15:46:15 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1knO91-0002YB-Fp; Thu, 10 Dec 2020 15:46:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=BWs/1Y103LZxSwBAIMpiV6/e79gTAaGw8GGl2wWeYPs=; b=sviYpuSCrtJcquVIxlNj7YN2js
	25GgMKXxAVjLPf7in4dNuBtamXRshDntE66twFKwro8Exp5eJUUEe/cHPUHZmwG//PjCCMpXUxf+a
	jkWal6Ebm5FK9zGuvpuErUM4cgTpqZIoxh1ZP4xpZehmuulzvpdAS+MnaEWDv0+MnO8Q=;
Subject: Re: [PATCH v3 2/7] xen/arm: Add arm64 ID registers definitions
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Bertrand Marquis <bertrand.marquis@arm.com>,
 xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>
 <96a970e5e5d2f1b1bd0e50327857de6a8c8441f7.1607524536.git.bertrand.marquis@arm.com>
 <af02eefb-5846-d32b-22e5-65763e6f51e0@xen.org>
 <alpine.DEB.2.21.2012091742420.20986@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <29a7ea09-2bc5-94e9-563e-07abb18ed260@xen.org>
Date: Thu, 10 Dec 2020 15:46:14 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2012091742420.20986@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 10/12/2020 02:30, Stefano Stabellini wrote:
> On Wed, 9 Dec 2020, Julien Grall wrote:
>> Hi Bertrand,
>>
>> On 09/12/2020 16:30, Bertrand Marquis wrote:
>>> Add coprocessor registers definitions for all ID registers trapped
>>> through the TID3 bit of HSR.
>>> Those are the ones that will be emulated in Xen to only publish to guests
>>> the features that are supported by Xen and that are accessible to
>>> guests.
>>> Also define a case to catch all reserved registers that should be
>>> handled as RAZ.
>>>
>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>> ---
>>> Changes in V2: Rebase
>>> Changes in V3:
>>>     Add case definition for reserved registers.
>>>
>>> ---
>>>    xen/include/asm-arm/arm64/hsr.h | 66 +++++++++++++++++++++++++++++++++
>>>    1 file changed, 66 insertions(+)
>>>
>>> diff --git a/xen/include/asm-arm/arm64/hsr.h
>>> b/xen/include/asm-arm/arm64/hsr.h
>>> index ca931dd2fe..ffe0f0007e 100644
>>> --- a/xen/include/asm-arm/arm64/hsr.h
>>> +++ b/xen/include/asm-arm/arm64/hsr.h
>>> @@ -110,6 +110,72 @@
>>>    #define HSR_SYSREG_CNTP_CTL_EL0   HSR_SYSREG(3,3,c14,c2,1)
>>>    #define HSR_SYSREG_CNTP_CVAL_EL0  HSR_SYSREG(3,3,c14,c2,2)
>>>    +/* Those registers are used when HCR_EL2.TID3 is set */
>>> +#define HSR_SYSREG_ID_PFR0_EL1    HSR_SYSREG(3,0,c0,c1,0)
>>> +#define HSR_SYSREG_ID_PFR1_EL1    HSR_SYSREG(3,0,c0,c1,1)
>>> +#define HSR_SYSREG_ID_PFR2_EL1    HSR_SYSREG(3,0,c0,c3,4)
>>> +#define HSR_SYSREG_ID_DFR0_EL1    HSR_SYSREG(3,0,c0,c1,2)
>>> +#define HSR_SYSREG_ID_DFR1_EL1    HSR_SYSREG(3,0,c0,c3,5)
>>> +#define HSR_SYSREG_ID_AFR0_EL1    HSR_SYSREG(3,0,c0,c1,3)
>>> +#define HSR_SYSREG_ID_MMFR0_EL1   HSR_SYSREG(3,0,c0,c1,4)
>>> +#define HSR_SYSREG_ID_MMFR1_EL1   HSR_SYSREG(3,0,c0,c1,5)
>>> +#define HSR_SYSREG_ID_MMFR2_EL1   HSR_SYSREG(3,0,c0,c1,6)
>>> +#define HSR_SYSREG_ID_MMFR3_EL1   HSR_SYSREG(3,0,c0,c1,7)
>>> +#define HSR_SYSREG_ID_MMFR4_EL1   HSR_SYSREG(3,0,c0,c2,6)
>>> +#define HSR_SYSREG_ID_MMFR5_EL1   HSR_SYSREG(3,0,c0,c3,6)
>>> +#define HSR_SYSREG_ID_ISAR0_EL1   HSR_SYSREG(3,0,c0,c2,0)
>>> +#define HSR_SYSREG_ID_ISAR1_EL1   HSR_SYSREG(3,0,c0,c2,1)
>>> +#define HSR_SYSREG_ID_ISAR2_EL1   HSR_SYSREG(3,0,c0,c2,2)
>>> +#define HSR_SYSREG_ID_ISAR3_EL1   HSR_SYSREG(3,0,c0,c2,3)
>>> +#define HSR_SYSREG_ID_ISAR4_EL1   HSR_SYSREG(3,0,c0,c2,4)
>>> +#define HSR_SYSREG_ID_ISAR5_EL1   HSR_SYSREG(3,0,c0,c2,5)
>>> +#define HSR_SYSREG_ID_ISAR6_EL1   HSR_SYSREG(3,0,c0,c2,7)
>>> +#define HSR_SYSREG_MVFR0_EL1      HSR_SYSREG(3,0,c0,c3,0)
>>> +#define HSR_SYSREG_MVFR1_EL1      HSR_SYSREG(3,0,c0,c3,1)
>>> +#define HSR_SYSREG_MVFR2_EL1      HSR_SYSREG(3,0,c0,c3,2)
>>> +
>>> +#define HSR_SYSREG_ID_AA64PFR0_EL1   HSR_SYSREG(3,0,c0,c4,0)
>>> +#define HSR_SYSREG_ID_AA64PFR1_EL1   HSR_SYSREG(3,0,c0,c4,1)
>>> +#define HSR_SYSREG_ID_AA64DFR0_EL1   HSR_SYSREG(3,0,c0,c5,0)
>>> +#define HSR_SYSREG_ID_AA64DFR1_EL1   HSR_SYSREG(3,0,c0,c5,1)
>>> +#define HSR_SYSREG_ID_AA64ISAR0_EL1  HSR_SYSREG(3,0,c0,c6,0)
>>> +#define HSR_SYSREG_ID_AA64ISAR1_EL1  HSR_SYSREG(3,0,c0,c6,1)
>>> +#define HSR_SYSREG_ID_AA64MMFR0_EL1  HSR_SYSREG(3,0,c0,c7,0)
>>> +#define HSR_SYSREG_ID_AA64MMFR1_EL1  HSR_SYSREG(3,0,c0,c7,1)
>>> +#define HSR_SYSREG_ID_AA64MMFR2_EL1  HSR_SYSREG(3,0,c0,c7,2)
>>> +#define HSR_SYSREG_ID_AA64AFR0_EL1   HSR_SYSREG(3,0,c0,c5,4)
>>> +#define HSR_SYSREG_ID_AA64AFR1_EL1   HSR_SYSREG(3,0,c0,c5,5)
>>> +#define HSR_SYSREG_ID_AA64ZFR0_EL1   HSR_SYSREG(3,0,c0,c4,4)
>>> +
>>> +/*
>>> + * Those cases are catching all Reserved registers trapped by TID3 which
>>> + * currently have no assignment.
>>> + * HCR.TID3 is trapping all registers in the group 3:
>>> + * Op0 == 3, op1 == 0, CRn == c0,CRm == {c1-c7}, op2 == {0-7}.
>>> + */
>>> +#define HSR_SYSREG_TID3_RESERVED_CASE  case HSR_SYSREG(3,0,c0,c3,3): \
>>> +                                       case HSR_SYSREG(3,0,c0,c3,7): \
>>> +                                       case HSR_SYSREG(3,0,c0,c4,2): \
>>> +                                       case HSR_SYSREG(3,0,c0,c4,3): \
>>> +                                       case HSR_SYSREG(3,0,c0,c4,5): \
>>> +                                       case HSR_SYSREG(3,0,c0,c4,6): \
>>> +                                       case HSR_SYSREG(3,0,c0,c4,7): \
>>> +                                       case HSR_SYSREG(3,0,c0,c5,2): \
>>> +                                       case HSR_SYSREG(3,0,c0,c5,3): \
>>> +                                       case HSR_SYSREG(3,0,c0,c5,6): \
>>> +                                       case HSR_SYSREG(3,0,c0,c5,7): \
>>> +                                       case HSR_SYSREG(3,0,c0,c6,2): \
>>> +                                       case HSR_SYSREG(3,0,c0,c6,3): \
>>> +                                       case HSR_SYSREG(3,0,c0,c6,4): \
>>> +                                       case HSR_SYSREG(3,0,c0,c6,5): \
>>> +                                       case HSR_SYSREG(3,0,c0,c6,6): \
>>> +                                       case HSR_SYSREG(3,0,c0,c6,7): \
>>> +                                       case HSR_SYSREG(3,0,c0,c7,3): \
>>> +                                       case HSR_SYSREG(3,0,c0,c7,4): \
>>> +                                       case HSR_SYSREG(3,0,c0,c7,5): \
>>> +                                       case HSR_SYSREG(3,0,c0,c7,6): \
>>> +                                       case HSR_SYSREG(3,0,c0,c7,7)
>>
>> I don't like the idea to define the list of case in a header that is used by
>> multiple source. Please define it directly in the source file that use it.
> 
> At that point it might be best to open-code it in do_sysreg? I mean no
> #define at all.

I am happy with that.
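For illustration, the open-coded approach agreed above could look roughly like the following self-contained sketch. The encoding macro, function name, and return convention here are simplified stand-ins, not Xen's actual HSR_SYSREG() macro or do_sysreg() signature:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative packing of a system-register access (Op0, Op1, CRn, CRm, Op2).
 * The real Xen HSR_SYSREG() macro packs these into the HSR/ESR_EL2 ISS bits;
 * the field positions used here are simplified. */
#define SYSREG(op0, op1, crn, crm, op2) \
    (((op0) << 14) | ((op1) << 11) | ((crn) << 7) | ((crm) << 3) | (op2))

/*
 * Reads of unallocated ID registers trapped by HCR_EL2.TID3 must be
 * Read-As-Zero for the guest.  Open-coding the case list in the handler
 * (instead of hiding it behind a header macro) keeps the list next to the
 * only code that uses it.
 */
static int handle_id_register_read(uint32_t iss, uint64_t *value)
{
    switch ( iss )
    {
    /* A few of the reserved Op0 == 3, Op1 == 0, CRn == c0 encodings. */
    case SYSREG(3, 0, 0, 3, 3):
    case SYSREG(3, 0, 0, 3, 7):
    case SYSREG(3, 0, 0, 4, 2):
        *value = 0;   /* RAZ: the register currently has no assignment. */
        return 0;
    default:
        return -1;    /* Not a reserved encoding; handled elsewhere. */
    }
}
```

The case labels above mirror a subset of the list from the original macro; the full list in the patch covers all reserved {c1-c7} encodings.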

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 15:49:20 2020
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 3/7] xen/arm: create a cpuinfo structure for guest
Thread-Topic: [PATCH v3 3/7] xen/arm: create a cpuinfo structure for guest
Thread-Index: AQHWzklW+4XymezDRUeCGhvhzW8UY6nvZCkAgAEXL4A=
Date: Thu, 10 Dec 2020 15:48:53 +0000
Message-ID: <BD35BA39-FE40-4752-9B21-CCD0F0D963B0@arm.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>
 <33f39e7f521e6f73a0dba57a8be9fb50656e1807.1607524536.git.bertrand.marquis@arm.com>
 <61b2677c-bc0d-af0b-95f8-f8de76a20856@xen.org>
In-Reply-To: <61b2677c-bc0d-af0b-95f8-f8de76a20856@xen.org>

Hi Julien,

> On 9 Dec 2020, at 23:09, Julien Grall <julien@xen.org> wrote:
>
> Hi Bertrand,
>
> On 09/12/2020 16:30, Bertrand Marquis wrote:
>> Create a cpuinfo structure for guest and mask into it the features that
>> we do not support in Xen or that we do not want to publish to guests.
>> Modify some values in the cpuinfo structure for guests to mask some
>> features which we do not want to allow to guests (like AMU) or we do not
>> support (like SVE).
>> The code is trying to group together registers modifications for the
>> same feature to be able in the long term to easily enable/disable a
>> feature depending on user parameters or add other registers modification
>> in the same place (like enabling/disabling HCR bits).
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>> Changes in V2: Rebase
>> Changes in V3:
>>   Use current_cpu_data info instead of recalling identify_cpu
>> ---
>>  xen/arch/arm/cpufeature.c        | 51 ++++++++++++++++++++++++++++++++
>>  xen/include/asm-arm/cpufeature.h |  2 ++
>>  2 files changed, 53 insertions(+)
>> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
>> index bc7ee5ac95..7255383504 100644
>> --- a/xen/arch/arm/cpufeature.c
>> +++ b/xen/arch/arm/cpufeature.c
>> @@ -24,6 +24,8 @@
>>    DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
>>  +struct cpuinfo_arm __read_mostly guest_cpuinfo;
>> +
>>  void update_cpu_capabilities(const struct arm_cpu_capabilities *caps,
>>                               const char *info)
>>  {
>> @@ -157,6 +159,55 @@ void identify_cpu(struct cpuinfo_arm *c)
>>  #endif
>>  }
>>  +/*
>> + * This function is creating a cpuinfo structure with values modified to mask
>> + * all cpu features that should not be published to guest.
>> + * The created structure is then used to provide ID registers values to guests.
>> + */
>> +static int __init create_guest_cpuinfo(void)
>> +{
>> +    /*
>> +     * TODO: The code is currently using only the features detected on the boot
>> +     * core. In the long term we should try to compute values containing only
>> +     * features supported by all cores.
>> +     */
>> +    guest_cpuinfo = current_cpu_data;
>
> It would be more logical to use boot_cpu_data as this would be easier to
> match with your comment.

Agree, I will fix that in V4.

>
>> +
>> +#ifdef CONFIG_ARM_64
>> +    /* Disable MPAM as xen does not support it */
>> +    guest_cpuinfo.pfr64.mpam = 0;
>> +    guest_cpuinfo.pfr64.mpam_frac = 0;
>> +
>> +    /* Disable SVE as Xen does not support it */
>> +    guest_cpuinfo.pfr64.sve = 0;
>> +    guest_cpuinfo.zfr64.bits[0] = 0;
>> +
>> +    /* Disable MTE as Xen does not support it */
>> +    guest_cpuinfo.pfr64.mte = 0;
>> +#endif
>> +
>> +    /* Disable AMU */
>> +#ifdef CONFIG_ARM_64
>> +    guest_cpuinfo.pfr64.amu = 0;
>> +#endif
>> +    guest_cpuinfo.pfr32.amu = 0;
>> +
>> +    /* Disable RAS as Xen does not support it */
>> +#ifdef CONFIG_ARM_64
>> +    guest_cpuinfo.pfr64.ras = 0;
>> +    guest_cpuinfo.pfr64.ras_frac = 0;
>> +#endif
>> +    guest_cpuinfo.pfr32.ras = 0;
>> +    guest_cpuinfo.pfr32.ras_frac = 0;
>
> How about all the fields that are currently marked as RES0/RES1? Shouldn't
> we make sure they will stay like that even if newer architectures use them?

Definitely we can do more than this here (including allowing some things to
be enabled for dom0 or for test reasons).
This is a first try to solve the current issues with MPAM and SVE, and I plan
to continue enhancing this in the future to enable more customisation here.
I do think we could do a bit more here to have some features controlled by
the user, but that will need a bit of discussion to agree on a strategy.

Could we agree to keep the current scope for this series (to have this in the
next release) and then work on future enhancements like this?
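As a self-contained illustration of the masking pattern this patch uses, here is a minimal sketch. The struct fields and the function shape are simplified stand-ins, not Xen's actual struct cpuinfo_arm or create_guest_cpuinfo() definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative subset of the ID-register bitfields held in Xen's
 * struct cpuinfo_arm; the real structure has many more fields. */
struct pfr64 {
    uint64_t el1:4;   /* EL1 support: preserved for the guest. */
    uint64_t sve:4;   /* Scalable Vector Extension. */
    uint64_t amu:4;   /* Activity Monitors Unit. */
    uint64_t mpam:4;  /* Memory Partitioning and Monitoring. */
    uint64_t ras:4;   /* RAS extension. */
};

struct cpuinfo {
    struct pfr64 pfr64;
};

/* Build the guest-visible view of the ID registers: copy the boot CPU's
 * info, then zero the fields for features Xen does not support (SVE, MPAM,
 * RAS) or does not want to expose (AMU), so guests never see them
 * advertised. */
static void make_guest_cpuinfo(const struct cpuinfo *boot,
                               struct cpuinfo *guest)
{
    *guest = *boot;
    guest->pfr64.sve  = 0;
    guest->pfr64.mpam = 0;
    guest->pfr64.amu  = 0;
    guest->pfr64.ras  = 0;
}
```

Fields not explicitly zeroed pass through unchanged, which is why the RES0/RES1 question above matters: a future architecture could allocate one of those fields and it would silently leak to guests.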

Cheers
Bertrand

>
>> +
>> +    return 0;
>> +}
>> +/*
>> + * This function needs to be run after all smp are started to have
>> + * cpuinfo structures for all cores.
>> + */
>> +__initcall(create_guest_cpuinfo);
>> +
>>  /*
>>   * Local variables:
>>   * mode: C
>> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
>> index 6cf83d775b..10b62bd324 100644
>> --- a/xen/include/asm-arm/cpufeature.h
>> +++ b/xen/include/asm-arm/cpufeature.h
>> @@ -283,6 +283,8 @@ extern void identify_cpu(struct cpuinfo_arm *);
>>  extern struct cpuinfo_arm cpu_data[];
>>  #define current_cpu_data cpu_data[smp_processor_id()]
>>  +extern struct cpuinfo_arm guest_cpuinfo;
>> +
>>  #endif /* __ASSEMBLY__ */
>>    #endif
>
> --
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 15:50:25 2020
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 3/7] xen/arm: create a cpuinfo structure for guest
Thread-Topic: [PATCH v3 3/7] xen/arm: create a cpuinfo structure for guest
Thread-Index: AQHWzklW+4XymezDRUeCGhvhzW8UY6nvZ8mAgAET0oA=
Date: Thu, 10 Dec 2020 15:49:49 +0000
Message-ID: <873A3EA8-DAE4-49AD-840B-1832E0249DCC@arm.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>
 <33f39e7f521e6f73a0dba57a8be9fb50656e1807.1607524536.git.bertrand.marquis@arm.com>
 <6cf80971-e9aa-f9e4-cb9b-4f102b84a99b@xen.org>
In-Reply-To: <6cf80971-e9aa-f9e4-cb9b-4f102b84a99b@xen.org>

Hi,

> On 9 Dec 2020, at 23:22, Julien Grall <julien@xen.org> wrote:
>
> Hi,
>
> On 09/12/2020 16:30, Bertrand Marquis wrote:
>> +    /* Disable MPAM as xen does not support it */
>
> I am going to be picky :). I think we want to say "hide" rather than
> "disable" because the latter is done differently via the HCR_EL2.

That does make sense, as we are not really disabling but hiding; you are
right. I will fix that in V4.

Cheers
Bertrand

>
> Cheers,
>
> --
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 15:52:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 15:52:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49486.87522 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knOEV-0006YB-42; Thu, 10 Dec 2020 15:51:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49486.87522; Thu, 10 Dec 2020 15:51:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knOEV-0006Y4-17; Thu, 10 Dec 2020 15:51:55 +0000
Received: by outflank-mailman (input) for mailman id 49486;
 Thu, 10 Dec 2020 15:51:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=25P7=FO=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1knOET-0006Xy-Rp
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 15:51:53 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4e906db7-7393-46a7-b4bb-e4e3b1551fa9;
 Thu, 10 Dec 2020 15:51:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4e906db7-7393-46a7-b4bb-e4e3b1551fa9
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1607615512;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=ALbDrC7Mm1GrMzfB1WylaHcfB2cS8558iTIts2riVhk=;
  b=Ez1oAV7lFsB89aj9mLbd0tgN+zhtPBc79ZBD5f3V2FAo6x9ayfEHhceO
   U0048ttoe7SnaKx6pgW1B7LmFsTMk5lplPyXMymduvpp8CZGg4fe6pbIn
   rhqTwMpP2sK9hjyYm/rH0pn1ZVZ3SgKxLc44gcJoHg9+Y5WmGscFkAFpa
   8=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: mWw8NNhDzR65AO5zwlAJMmTflC/7BP7swKtiNWBIeWtsmGXmcIv5KaaaZG37sxx+WGHagocYrF
 otAv/wAr56cWWIsRDQwM4cOwlc2p8GjEFbBjhP3ZTkKVAN11X0WXnpVwSTZXYpeVMGv0nSfoKH
 g0wPI0XQva1TQzDsZ1F7OCeVzAcKYLbs32PgsC60fSfTTn+amQ2+noyJOd817vbE1VbBiYfmEn
 w9uT04uTm2dQkcy+s4UsKyWW2jgt20dhOT99t41hjlwLxkA2+xq8bhndixPQiq5rYmfuo4A5AN
 fis=
X-SBRS: 5.2
X-MesageID: 34165577
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,408,1599537600"; 
   d="scan'208";a="34165577"
Subject: Re: dom0 PV looping on search_pre_exception_table()
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: <xen-devel@lists.xenproject.org>
References: <20201209101512.GA1299@antioche.eu.org>
 <3f7e50bb-24ad-1e32-9ea1-ba87007d3796@citrix.com>
 <20201209135908.GA4269@antioche.eu.org>
 <c612616a-3fcd-be93-7594-20c0c3b71b7a@citrix.com>
 <20201209154431.GA4913@antioche.eu.org>
 <52e1b10d-75d4-63ac-f91e-cb8f0dcca493@citrix.com>
 <20201209163049.GA6158@antioche.eu.org>
 <30a71c9d-3eff-3727-9c61-e387b5bccc95@citrix.com>
 <20201209185714.GS1469@antioche.eu.org>
 <6c06abf1-7efe-f02c-536a-337a2704e265@citrix.com>
 <20201210095139.GA455@antioche.eu.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <4c3bff12-821b-83fb-e054-61b07b97fa70@citrix.com>
Date: Thu, 10 Dec 2020 15:51:46 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201210095139.GA455@antioche.eu.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 10/12/2020 09:51, Manuel Bouyer wrote:
> On Wed, Dec 09, 2020 at 07:08:41PM +0000, Andrew Cooper wrote:
>> Oh of course - we don't follow the exit-to-guest path on the way out here.
>>
>> As a gross hack to check that we've at least diagnosed the issue
>> appropriately, could you modify NetBSD to explicitly load the %ss
>> selector into %es (or any other free segment) before first entering user
>> context?
> If I understood it properly, the user %ss is loaded by Xen from the
> trapframe when the guest switches from kernel to user mode, isn't it?

Yes.  The kernel invokes HYPERCALL_iret, and Xen copies/audits the
provided trapframe and uses it to actually enter userspace.
> So you mean setting %es to the same value in the trapframe ?

Yes - specifically I wanted to force the LDT reference to happen in a
context where demand-faulting should work, so all the mappings get set
up properly before we first encounter the LDT reference in Xen's IRET
instruction.

And to be clear, there is definitely a bug needing fixing here in Xen in
terms of handling IRET faults caused by guest state.  However, it looks
like this isn't the root of the problem - merely some very weird
collateral damage.

> Actually I used %fs because %es is set equal to %ds.
> Xen 4.13 boots fine with this change, but with 4.15 I get a loop of:
>
>
> (XEN) *** LDT: gl1e 0000000000000000 not present                               
> (XEN) *** pv_map_ldt_shadow_page(0x40) failed                                  
> [  12.3586540] Process (pid 1) got sig 11                                      
>
> which means that the dom0 gets the trap, and decides that the fault address
> is not mapped. Without the change the dom0 doesn't show the
> "Process (pid 1) got sig 11"
>
> I activated the NetBSD trap debug code, and this shows:
> [   6.7165877] kern.module.path=/stand/amd64-xen/9.1/modules
> (XEN) *** LDT: gl1e 0000000000000000 not present                                
> (XEN) *** pv_map_ldt_shadow_page(0x40) failed
> [   6.9462322] pid 1.1 (init): signal 11 code=1 (trap 0x6) @rip 0x7f7ef0c007d0 addr 0xffffbd800000a040 error=14
> [   7.0647896] trapframe 0xffffbd80381cff00
> [   7.1126288] rip 0x00007f7ef0c007d0  rsp 0x00007f7fff10aa30  rfl 0x0000000000000202
> [   7.2041518] rdi 000000000000000000  rsi 000000000000000000  rdx 000000000000000000
> [   7.2956758] rcx 000000000000000000  r8  000000000000000000  r9  000000000000000000
> [   7.3872013] r10 000000000000000000  r11 000000000000000000  r12 000000000000000000
> [   7.4787216] r13 000000000000000000  r14 000000000000000000  r15 000000000000000000
> [   7.5702439] rbp 000000000000000000  rbx 0x00007f7fff10afe0  rax 000000000000000000
> [   7.6617663] cs 0x47  ds 0x23  es 0x23  fs 0000  gs 0000  ss 0x3f
> [   7.7345663] fsbase 000000000000000000 gsbase 000000000000000000
>
> so it looks like something resets %fs to 0 ...
>
> Anyway the fault address 0xffffbd800000a040 is in the hypervisor's range,
> isn't it ?

No.  It's the kernel's LDT.  From previous debugging:
> (XEN) %cr2 ffff820000010040, LDT base ffffbd000000a000, limit 0057

LDT handling in Xen is a bit complicated.  To maintain host safety, we
must map it into Xen's range, and we explicitly support a PV guest doing
on-demand mapping of the LDT.  (This pertains to the experimental
Windows XP PV support which never made it beyond a prototype.  Windows
can page out the LDT.)  Either way, we lazily map the LDT frames on
first use.

So %cr2 is the real hardware faulting address, and is in the Xen range. 
We spot that it is an LDT access, and try to lazily map the frame (at
LDT base), but find that the kernel's virtual address mapping
0xffffbd000000a000 is not present (the gl1e printk).

Therefore, we pass #PF to the guest kernel, adjusting vCR2 to what would
have happened had Xen not mapped the real LDT elsewhere, which is
expected to cause the guest kernel to do whatever demand mapping is
necessary to pull the LDT back in.


I suppose it is worth taking a step back and ascertaining how exactly
NetBSD handles (or, should be handling) the LDT.

Do you mind elaborating on how it is supposed to work?

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 15:55:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 15:55:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49494.87535 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knOIB-0006kT-LN; Thu, 10 Dec 2020 15:55:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49494.87535; Thu, 10 Dec 2020 15:55:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knOIB-0006kM-I1; Thu, 10 Dec 2020 15:55:43 +0000
Received: by outflank-mailman (input) for mailman id 49494;
 Thu, 10 Dec 2020 15:55:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>) id 1knOIA-0006kH-DA
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 15:55:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1knOI7-0002Jp-CL; Thu, 10 Dec 2020 15:55:39 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1knOI6-0003EO-U3; Thu, 10 Dec 2020 15:55:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Mime-Version:Content-Type:
	References:In-Reply-To:Date:Cc:To:From:Subject:Message-ID;
	bh=5Mrh5MMqbgJNX25Z8N11CP3F098ff4zaCmN2yJECfmU=; b=mYRTGh0Sr4DDJnR+/Ndabralq7
	S3GmJFP3F0SrzUFFPpVQ7VMx4W7xpYYVTkLOVIZ/a3RDjffyBZoA71HfENdFxkO4eZWDTQVsN3GNu
	fxOljgcSxWd1XgjVyFBFODXP3cj+nRLo9Lry9jmdWz39z7V9TrIgnt+c5FI7cuq1zpEo=;
Message-ID: <e484c21ccda8c0c8049655288d7bf72f74f0de38.camel@xen.org>
Subject: Re: [PATCH] x86/HVM: refine when to send mapcache invalidation
 request to qemu
From: Hongyan Xia <hx242@xen.org>
To: Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Roger
 Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>, "olekstysh@gmail.com"
 <olekstysh@gmail.com>, George Dunlap <George.Dunlap@eu.citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Date: Thu, 10 Dec 2020 15:55:36 +0000
In-Reply-To: <d522f01e-af5f-fc65-2888-2573dbcefcf5@suse.com>
References: <f92f62bf-2f8d-34db-4be5-d3e6a4b9d580@suse.com>
	 <c6bcaecf71f9e51bdac15c7f97c8ce8460bef306.camel@xen.org>
	 <d522f01e-af5f-fc65-2888-2573dbcefcf5@suse.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit

On Thu, 2020-12-10 at 14:37 +0100, Jan Beulich wrote:
> On 10.12.2020 14:09, Hongyan Xia wrote:
> > On Mon, 2020-09-28 at 12:44 +0200, Jan Beulich wrote:
> > > Plus finally there's no point sending the request for the local
> > > domain
> > > when the domain acted upon is a different one. If anything that
> > > domain's
> > > qemu's mapcache may need invalidating, but it's unclear how
> > > useful
> > > this
> > > would be: That remote domain may not execute hypercalls at all,
> > > and
> > > hence may never make it to the point where the request actually
> > > gets
> > > issued. I guess the assumption is that such manipulation is not
> > > supposed
> > > to happen anymore once the guest has been started?
> > 
> > I may still want to set the invalidation signal to true even if the
> > domain acted on is not the local domain. I know the remote domain
> > may
> > never reach the point to issue the invalidate, but it sounds to me
> > that
> > the problem is not whether we should set the signal but whether we
> > can
> > change where the signal is checked to make sure the point of issue
> > can
> > be reliably triggered, and the latter can be done in a future
> > patch.
> 
> One of Paul's replies was quite helpful here: The main thing to

Hmm, I seem to not be able to see the whole thread...

> worry about is for the vCPU to not continue running before the
> invalidation request was signaled (or else, aiui, qemu may serve
> a subsequent emulation request by the guest incorrectly, because
> of using the stale mapping). Hence I believe for a non-paused
> guest, remote operations simply cannot be allowed when they may
> lead to the need for invalidation. Therefore yes, if we assume
> the guest is paused in such cases, we could drop the "is current"
> check, but we'd then still need to arrange for actual signaling
> before the guest gets to run again. I wonder whether
> handle_hvm_io_completion() (or its caller, hvm_do_resume(),
> right after that other call) wouldn't be a good place to do so.

Actually, the existing code must assume that when QEMU is up, the only
one that manipulates the p2m is the guest itself like you said. If the
caller is XENMEM_decrease_reservation, the code does not even check
which p2m this is for and unconditionally sets the QEMU invalidate flag
for the current domain. Although this assumption may simply be wrong
now, so I agree care should be taken for remote p2m ops (I may need to
read the code more to know how this should be done).

Hongyan



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 15:56:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 15:56:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49498.87547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knOJ9-0006rU-Va; Thu, 10 Dec 2020 15:56:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49498.87547; Thu, 10 Dec 2020 15:56:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knOJ9-0006rN-SO; Thu, 10 Dec 2020 15:56:43 +0000
Received: by outflank-mailman (input) for mailman id 49498;
 Thu, 10 Dec 2020 15:56:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wMS7=FO=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1knOJ8-0006rG-7b
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 15:56:42 +0000
Received: from mail-wm1-f67.google.com (unknown [209.85.128.67])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 97e56119-6fb4-4c32-8946-96529e4560ad;
 Thu, 10 Dec 2020 15:56:41 +0000 (UTC)
Received: by mail-wm1-f67.google.com with SMTP id y23so5843888wmi.1
 for <xen-devel@lists.xenproject.org>; Thu, 10 Dec 2020 07:56:41 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id q143sm10724375wme.28.2020.12.10.07.56.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 10 Dec 2020 07:56:40 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 97e56119-6fb4-4c32-8946-96529e4560ad
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=CRecanxbOToTOHGblZx5e7biwudfKmBvFc15XWLYY80=;
        b=e5nQXa3xjaj5tFDBJRfnVl6UyDgPmXBC7TEcg8t7tbob3XQ7E63i4yDB9vUqRBDdSC
         3eM/dUw8/4vFAU6oq/Afc9KvO8pD+1KY+DCFVQB1ZW2ITFRgo+Jy36Dp5lrWcyEB9r0Z
         Fc652aO7H20KMfh13v9fAB4+3Ro5ZHbTC52bLqk++/qQvn36bIzGAGK/QnXK0dOSxxZx
         GfvJLIJBybQJDll5ZiWacJdNvO6a0Yrql2jN90umF3El5WgXzAq3zkit8QkPw18685WM
         OTZw4XuqPPuwkTbikKj8vUGu9BeslAd2KjIny6G95bwPnzCQOOdNOCrlDMycG599tMMl
         r9Sg==
X-Gm-Message-State: AOAM532m/pPO30gMsDQZmAul6u80DWz96sAH+pD3uSUZo7VId+nOpv5V
	oFYdRCO7e7/gq4FfKyW6qCM=
X-Google-Smtp-Source: ABdhPJwfl2y6tpNzmSJ22RXmEs18D6WlT1Gp+0DvFxRI6nVQQQxF1/+JtL/K1EcbwxbR6cN3hUkNpA==
X-Received: by 2002:a05:600c:208:: with SMTP id 8mr9103445wmi.143.1607615800613;
        Thu, 10 Dec 2020 07:56:40 -0800 (PST)
Date: Thu, 10 Dec 2020 15:56:38 +0000
From: Wei Liu <wl@xen.org>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org,
	famzheng@amazon.com, cardoe@cardoe.com, Bertrand.Marquis@arm.com,
	julien@xen.org, andrew.cooper3@citrix.com
Subject: Re: [PATCH v6 00/25] xl / libxl: named PCI pass-through devices
Message-ID: <20201210155638.mxjx4zmjqmcpk7z3@liuwe-devbox-debian-v2>
References: <160746448732.12203.10647684023172140005@600e7e483b3a>
 <alpine.DEB.2.21.2012081702420.20986@sstabellini-ThinkPad-T480s>
 <20201209161433.d7xpx5zwtikd3fmk@liuwe-devbox-debian-v2>
 <alpine.DEB.2.21.2012091046400.20986@sstabellini-ThinkPad-T480s>
 <alpine.DEB.2.21.2012091839430.20986@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2012091839430.20986@sstabellini-ThinkPad-T480s>
User-Agent: NeoMutt/20180716

On Wed, Dec 09, 2020 at 06:41:03PM -0800, Stefano Stabellini wrote:
> On Wed, 9 Dec 2020, Stefano Stabellini wrote:
> > On Wed, 9 Dec 2020, Wei Liu wrote:
> > > On Tue, Dec 08, 2020 at 05:02:50PM -0800, Stefano Stabellini wrote:
> > > > The pipeline failed because the "fedora-gcc-debug" build failed with a
> > > > timeout: 
> > > > 
> > > > ERROR: Job failed: execution took longer than 1h0m0s seconds
> > > > 
> > > > given that all the other jobs passed (including the other Fedora job), I
> > > > take it this failed because the gitlab-ci x86 runners were overloaded?
> > > > 
> > > 
> > > The CI system is configured to auto-scale as the number of jobs grows.
> > > The limit is set to 10 (VMs) at the moment.
> > > 
> > > https://gitlab.com/xen-project/xen-gitlab-ci/-/commit/832bfd72ea3a227283bf3df88b418a9aae95a5a4
> > > 
> > > I haven't looked at the log, but the number of build jobs looks rather
> > > larger than when we got started. Maybe the limit of 10 is not good
> > > enough?
> > 
> > Interesting! That's only for the x86 runners, not the ARM runners (we
> > only have 1 ARM64 runner), is that right?
> > 
> > If we could increase the number of VMs for x86 I think that would be
> > helpful because we have very many x86 jobs.
> 
> I don't know what is going on but at the moment there seems to be only
> one x86 build active
> (https://gitlab.com/xen-project/patchew/xen/-/pipelines/227280736).
> Should there be at least 3 of them?

Not sure what you meant here. That pipeline is green.

It may take some time for the CI to scale up if it is "cold". By default
there is only 1 standby runner to reduce cost.

Wei.


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 15:58:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 15:58:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49504.87559 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knOKx-000726-Cf; Thu, 10 Dec 2020 15:58:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49504.87559; Thu, 10 Dec 2020 15:58:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knOKx-00071z-8r; Thu, 10 Dec 2020 15:58:35 +0000
Received: by outflank-mailman (input) for mailman id 49504;
 Thu, 10 Dec 2020 15:58:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=N/MM=FO=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1knOKw-00071t-3R
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 15:58:34 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.40]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 68fa7154-7ef3-4675-bd8c-5a46dd930cbf;
 Thu, 10 Dec 2020 15:58:32 +0000 (UTC)
Received: from AM6PR10CA0038.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:209:80::15)
 by DB8PR08MB5066.eurprd08.prod.outlook.com (2603:10a6:10:e4::24) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.17; Thu, 10 Dec
 2020 15:58:30 +0000
Received: from AM5EUR03FT012.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:80:cafe::8f) by AM6PR10CA0038.outlook.office365.com
 (2603:10a6:209:80::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.12 via Frontend
 Transport; Thu, 10 Dec 2020 15:58:30 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT012.mail.protection.outlook.com (10.152.16.161) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12 via Frontend Transport; Thu, 10 Dec 2020 15:58:30 +0000
Received: ("Tessian outbound 6ec21dac9dd3:v71");
 Thu, 10 Dec 2020 15:58:30 +0000
Received: from 90b4cc8b37dc.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 80F71FCD-B98B-462A-97A2-362275146CE2.1; 
 Thu, 10 Dec 2020 15:58:11 +0000
Received: from EUR03-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 90b4cc8b37dc.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 10 Dec 2020 15:58:11 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBAPR08MB5605.eurprd08.prod.outlook.com (2603:10a6:10:1af::24) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.13; Thu, 10 Dec
 2020 15:58:10 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3632.023; Thu, 10 Dec 2020
 15:58:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68fa7154-7ef3-4675-bd8c-5a46dd930cbf
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JZSoR77VXtoS0GA3wN0FZdKwE6WOAHp9p3wsp+rJbTU=;
 b=zHbHdtZXyxO6vSBc4JOBg0tdkSLlEN28+ns1M7TWEibhymKx0OrpmNFRec7w/m0HoYM7aTsiuVm6JihJRfPRWiGd6+WUgJRD/dxt8DxXSopNXp1aN9SWvUxRhBvd7lUL6v4Vw1MXTRrNqyI8HgETuTz/Vb4ELEF0wick1tucsdc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 87ee920a35308948
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=D61Y1Gh3fuU5WBPQpJGEFph/dw1YeXvepCoeamyLCnHc0vxPgqDPtPlN4S66RU0myvLu5vAe2p2+VLUQKSzHEPGSrZlZnWwniHS5KCncIe3R6CLVRM+FV/dHoiaO1t0j40nBuJF7j3RCUI70KCmGMyGw3hmtwRse5YsvFr2aFzAFM9XSXyMI2oEVZL/CEyhE9ijALXJ6ATDXCfjIUscULqCbtaw2k+qvR3sXOXx9Z16jpQn4xryzPEqM38CO7pt8DbLSPWK+9g8wA+KubG3nd/WttpYgepn7d6wZ8M+x3j2Fv9NP5QQr1xw53k6lteL9GtO31Qwxs9VoC2mOPubusg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JZSoR77VXtoS0GA3wN0FZdKwE6WOAHp9p3wsp+rJbTU=;
 b=jqWBe9hvolprHgwOLbZfck2sfsRQruMLDtuIqMr4FAxUCdkoDka5Gb7x2puH5jTcVYoFfsFaF40SigFl29Z7ygOlLdfDTtAzd3NBzAwBoVeSQWWjHVCuMw7F3QBl8j+tsmafm7IeNimUf7uw8U2oUtZd4VcF5I6JOacXg6AyYIStxVt5cxiIlHT+xULsxlNaRTdnGI76FAkp//zWcRB0biCoqldBTWJpg5WN1p3Xs1yZ/hDC+qQFdOjM0w8xUBAXtmf2YQZ+j1ROtEFIomN3cJX+SfjfB8n/R0ZpQPo/qgmIALcfjboVFgPC8u/6em0NFFF+bja8hQKyFtLKFaQxQg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JZSoR77VXtoS0GA3wN0FZdKwE6WOAHp9p3wsp+rJbTU=;
 b=zHbHdtZXyxO6vSBc4JOBg0tdkSLlEN28+ns1M7TWEibhymKx0OrpmNFRec7w/m0HoYM7aTsiuVm6JihJRfPRWiGd6+WUgJRD/dxt8DxXSopNXp1aN9SWvUxRhBvd7lUL6v4Vw1MXTRrNqyI8HgETuTz/Vb4ELEF0wick1tucsdc=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 1/7] xen/arm: Add ID registers and complete cpuinfo
Thread-Topic: [PATCH v3 1/7] xen/arm: Add ID registers and complete cpuinfo
Thread-Index: AQHWzklS6YSKOBiOok+g4tPLnYP0b6nvYnQAgAEPYQCAAAiagIAAA4CA
Date: Thu, 10 Dec 2020 15:58:10 +0000
Message-ID: <BB95B1AB-C487-490C-9BF5-F7C6F4EC8268@arm.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>
 <aab713989bec4dc843bd513c03b305c83028851b.1607524536.git.bertrand.marquis@arm.com>
 <62484fa0-fa86-523a-12e0-54d69934d791@xen.org>
 <8D31DCB1-3529-4785-A18B-CFE69CC0E846@arm.com>
 <157559b2-7d95-ff46-204c-f875e617d464@xen.org>
In-Reply-To: <157559b2-7d95-ff46-204c-f875e617d464@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 1b3a99e3-2ddb-4015-1c80-08d89d24706b
x-ms-traffictypediagnostic: DBAPR08MB5605:|DB8PR08MB5066:
X-Microsoft-Antispam-PRVS:
	<DB8PR08MB506600A2336900ADBC6C57209DCB0@DB8PR08MB5066.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <51073259578F954F842582A536E9BD3C@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5605
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT012.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ab593a56-9d1e-466b-bd74-08d89d246489
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ISjJnsedvT5eKi4P1zx8if3O8I449P5AGprSYlg4S9lYrSbydSNpjdBrxtBkKP/8dcYRcZYPHkZ7U3gaI0IbyI7haR17yGhINM01Gg+TD4nj4YrLyjnndPaChGNBrPHbfexpZMrd+B+j+g43hnbRrakj4Nloi+HgqTdTeshLbxkqooKwPdNvvgNeZsPuSrgirK8jlV0jWKemQM45UUyTHuYGZ2NPpyPcml5hNJLHx2E5y3SEtbth4AddRCYcn/txIavtCmyYPwyrdkvwEPosiM3wnFV77BFLW9I/LIsZQISSypdqbFJdzqCRuK5ozjPKgDDQZbTOQIQ/zcqCcburk63tjnfTX3P5LVH97eRiA6x8jq5ndQvdG42wp/LOVtiIVTCuIIvpR1U7fxBljNTCuw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(396003)(39860400002)(346002)(136003)(46966005)(82310400003)(54906003)(26005)(4326008)(2616005)(316002)(186003)(478600001)(36756003)(336012)(2906002)(8936002)(53546011)(36906005)(6486002)(8676002)(6512007)(6862004)(86362001)(107886003)(83380400001)(33656002)(70206006)(6506007)(70586007)(47076004)(82740400003)(356005)(81166007)(30864003)(5660300002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Dec 2020 15:58:30.2646
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 1b3a99e3-2ddb-4015-1c80-08d89d24706b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT012.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5066

Hi,

> On 10 Dec 2020, at 15:45, Julien Grall <julien@xen.org> wrote:
>
> Hi Bertrand,
>
> On 10/12/2020 15:14, Bertrand Marquis wrote:
>> Hi Julien,
>>> On 9 Dec 2020, at 23:03, Julien Grall <julien@xen.org> wrote:
>>>
>>> Hi Bertrand,
>>>
>>> On 09/12/2020 16:30, Bertrand Marquis wrote:
>>>> Add definition and entries in cpuinfo for ID registers introduced in
>>>> newer Arm Architecture reference manual:
>>>> - ID_PFR2: processor feature register 2
>>>> - ID_DFR1: debug feature register 1
>>>> - ID_MMFR4 and ID_MMFR5: Memory model feature registers 4 and 5
>>>> - ID_ISA6: ISA Feature register 6
>>>> Add more bitfield definitions in PFR fields of cpuinfo.
>>>> Add MVFR2 register definition for aarch32.
>>>> Add mvfr values in cpuinfo.
>>>> Add some registers definition for arm64 in sysregs as some are not
>>>> always know by compilers.
>>>> Initialize the new values added in cpuinfo in identify_cpu during init.
>>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>> ---
>>>> Changes in V2:
>>>>   Fix dbg32 table size and add proper initialisation of the second entry
>>>>   of the table by reading ID_DFR1 register.
>>>> Changes in V3:
>>>>   Fix typo in commit title
>>>>   Add MVFR2 definition and handling on aarch32 and remove specific case
>>>>   for mvfr field in cpuinfo (now the same on arm64 and arm32).
>>>>   Add MMFR4 definition if not known by the compiler.
>>>> ---
>>>>  xen/arch/arm/cpufeature.c           | 18 ++++++++++
>>>>  xen/include/asm-arm/arm64/sysregs.h | 28 +++++++++++++++
>>>>  xen/include/asm-arm/cpregs.h        | 12 +++++++
>>>>  xen/include/asm-arm/cpufeature.h    | 56 ++++++++++++++++++++++++-----
>>>>  4 files changed, 105 insertions(+), 9 deletions(-)
>>>> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
>>>> index 44126dbf07..bc7ee5ac95 100644
>>>> --- a/xen/arch/arm/cpufeature.c
>>>> +++ b/xen/arch/arm/cpufeature.c
>>>> @@ -114,15 +114,20 @@ void identify_cpu(struct cpuinfo_arm *c)
>>>>            c->mm64.bits[0]  = READ_SYSREG64(ID_AA64MMFR0_EL1);
>>>>          c->mm64.bits[1]  = READ_SYSREG64(ID_AA64MMFR1_EL1);
>>>> +        c->mm64.bits[2]  = READ_SYSREG64(ID_AA64MMFR2_EL1);
>>>>            c->isa64.bits[0] = READ_SYSREG64(ID_AA64ISAR0_EL1);
>>>>          c->isa64.bits[1] = READ_SYSREG64(ID_AA64ISAR1_EL1);
>>>> +
>>>> +        c->zfr64.bits[0] = READ_SYSREG64(ID_AA64ZFR0_EL1);
>>>>  #endif
>>>>            c->pfr32.bits[0] = READ_SYSREG32(ID_PFR0_EL1);
>>>>          c->pfr32.bits[1] = READ_SYSREG32(ID_PFR1_EL1);
>>>> +        c->pfr32.bits[2] = READ_SYSREG32(ID_PFR2_EL1);
>>>>            c->dbg32.bits[0] = READ_SYSREG32(ID_DFR0_EL1);
>>>> +        c->dbg32.bits[1] = READ_SYSREG32(ID_DFR1_EL1);
>>>>            c->aux32.bits[0] = READ_SYSREG32(ID_AFR0_EL1);
>>>>  @@ -130,6 +135,8 @@ void identify_cpu(struct cpuinfo_arm *c)
>>>>          c->mm32.bits[1]  = READ_SYSREG32(ID_MMFR1_EL1);
>>>>          c->mm32.bits[2]  = READ_SYSREG32(ID_MMFR2_EL1);
>>>>          c->mm32.bits[3]  = READ_SYSREG32(ID_MMFR3_EL1);
>>>> +        c->mm32.bits[4]  = READ_SYSREG32(ID_MMFR4_EL1);
>>>> +        c->mm32.bits[5]  = READ_SYSREG32(ID_MMFR5_EL1);
>>>
>>> Please don't introduce any more uses of READ_SYSREG32(): they are wrong on Armv8 because system registers are always 64-bit.
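[Editor's note: Julien's point can be sketched on the host. The wrapper quoted later in this patch is literally `#define READ_SYSREG32(name) ((uint32_t)READ_SYSREG64(name))`; the model function below is hypothetical (it takes a plain value instead of a register name), but it shows how the cast discards any ID field that a newer Armv8 revision allocates in bits [63:32]:]

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of Xen's READ_SYSREG32() wrapper, which is
 * defined as ((uint32_t)READ_SYSREG64(name)): the cast keeps only
 * bits [31:0] of the 64-bit system register value. */
static uint32_t read_sysreg32_model(uint64_t reg64)
{
    return (uint32_t)reg64;
}

/* Extract a hypothetical 4-bit feature field at bits [35:32]. */
static unsigned int field_35_32(uint64_t reg64)
{
    return (unsigned int)((reg64 >> 32) & 0xf);
}
```

[Any feature advertised above bit 31 reads back as zero once the value has gone through the 32-bit wrapper, which is exactly the silent loss being objected to.]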
>> I followed the existing implementation but ...
>>>
>>>>            c->isa32.bits[0] = READ_SYSREG32(ID_ISAR0_EL1);
>>>>          c->isa32.bits[1] = READ_SYSREG32(ID_ISAR1_EL1);
>>>> @@ -137,6 +144,17 @@ void identify_cpu(struct cpuinfo_arm *c)
>>>>          c->isa32.bits[3] = READ_SYSREG32(ID_ISAR3_EL1);
>>>>          c->isa32.bits[4] = READ_SYSREG32(ID_ISAR4_EL1);
>>>>          c->isa32.bits[5] = READ_SYSREG32(ID_ISAR5_EL1);
>>>> +        c->isa32.bits[6] = READ_SYSREG32(ID_ISAR6_EL1);
>>>> +
>>>> +#ifdef CONFIG_ARM_64
>>>> +        c->mvfr.bits[0] = READ_SYSREG64(MVFR0_EL1);
>>>> +        c->mvfr.bits[1] = READ_SYSREG64(MVFR1_EL1);
>>>> +        c->mvfr.bits[2] = READ_SYSREG64(MVFR2_EL1);
>>>> +#else
>>>> +        c->mvfr.bits[0] = READ_CP32(MVFR0);
>>>> +        c->mvfr.bits[1] = READ_CP32(MVFR1);
>>>> +        c->mvfr.bits[2] = READ_CP32(MVFR2);
>>>> +#endif
>>>
>>> READ_SYSREG() will do the job to either use READ_SYSREG64() or READ_CP32() depending on the arch used.
>> .. I can modify the ones I added and the existing ones to use READ_SYSREG instead.
>> Please confirm if you want me to do that.
>
> Yes, please use READ_SYSREG() for the new ones. We can convert the others separately.
>
>>>
>>>>  }
>>>>    /*
>>>> diff --git a/xen/include/asm-arm/arm64/sysregs.h b/xen/include/asm-arm/arm64/sysregs.h
>>>> index c60029d38f..077fd95fb7 100644
>>>> --- a/xen/include/asm-arm/arm64/sysregs.h
>>>> +++ b/xen/include/asm-arm/arm64/sysregs.h
>>>> @@ -57,6 +57,34 @@
>>>>  #define ICH_AP1R2_EL2             __AP1Rx_EL2(2)
>>>>  #define ICH_AP1R3_EL2             __AP1Rx_EL2(3)
>>>>  +/*
>>>> + * Define ID coprocessor registers if they are not
>>>> + * already defined by the compiler.
>>>> + *
>>>> + * Values picked from linux kernel
>>>> + */
>>>> +#ifndef ID_AA64MMFR2_EL1
>>>
>>> I am a bit puzzled how this is meant to work. Will the libc/compiler headers define ID_AA64MMFR2_EL1?
>> I tested this and if the compiler has a definition for the register, I am not entering the ifndef.
>> So there is no header defining this, but if the compiler has the definition for it the ifndef will not be entered.
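[Editor's note: the behaviour Bertrand describes can be reproduced with any token, since the guard only fires when the toolchain has not already provided a definition. A minimal, self-contained sketch (the stringification macros are illustrative and not part of the patch):]

```c
#include <assert.h>
#include <string.h>

#define STRINGIFY_(x) #x
#define STRINGIFY(x) STRINGIFY_(x)

/* The patch's pattern: define the named register only if the
 * compiler/assembler does not already know it. In this standalone
 * sketch nothing predefines ID_AA64MMFR2_EL1, so the fallback to the
 * raw S<op0>_<op1>_C<CRn>_C<CRm>_<op2> encoding token is taken. */
#ifndef ID_AA64MMFR2_EL1
#define ID_AA64MMFR2_EL1 S3_0_C0_C7_2
#endif
```

[With a toolchain that does predefine the name, the #ifndef body is skipped and the toolchain's own encoding is used instead, which is why no conflict arises either way.]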
>
> Good to hear :).
>
>>>
>>>> +#define ID_AA64MMFR2_EL1            S3_0_C0_C7_2
>>>> +#endif
>>>> +#ifndef ID_PFR2_EL1
>>>> +#define ID_PFR2_EL1                 S3_0_C0_C3_4
>>>> +#endif
>>>> +#ifndef ID_MMFR4_EL1
>>>> +#define ID_MMFR4_EL1                S3_0_C0_C2_6
>>>> +#endif
>>>> +#ifndef ID_MMFR5_EL1
>>>> +#define ID_MMFR5_EL1                S3_0_C0_C3_6
>>>> +#endif
>>>> +#ifndef ID_ISAR6_EL1
>>>> +#define ID_ISAR6_EL1                S3_0_C0_C2_7
>>>> +#endif
>>>> +#ifndef ID_AA64ZFR0_EL1
>>>> +#define ID_AA64ZFR0_EL1             S3_0_C0_C4_4
>>>> +#endif
>>>> +#ifndef ID_DFR1_EL1
>>>> +#define ID_DFR1_EL1                 S3_0_C0_C3_5
>>>> +#endif
>>>> +
>>>>  /* Access to system registers */
>>>>    #define READ_SYSREG32(name) ((uint32_t)READ_SYSREG64(name))
>>>> diff --git a/xen/include/asm-arm/cpregs.h b/xen/include/asm-arm/cpregs.h
>>>> index 8fd344146e..2690ddeb7a 100644
>>>> --- a/xen/include/asm-arm/cpregs.h
>>>> +++ b/xen/include/asm-arm/cpregs.h
>>>> @@ -63,6 +63,8 @@
>>>>  #define FPSID           p10,7,c0,c0,0   /* Floating-Point System ID Register */
>>>>  #define FPSCR           p10,7,c1,c0,0   /* Floating-Point Status and Control Register */
>>>>  #define MVFR0           p10,7,c7,c0,0   /* Media and VFP Feature Register 0 */
>>>> +#define MVFR1           p10,7,c6,c0,0   /* Media and VFP Feature Register 1 */
>>>> +#define MVFR2           p10,7,c5,c0,0   /* Media and VFP Feature Register 2 */
>>>>  #define FPEXC           p10,7,c8,c0,0   /* Floating-Point Exception Control Register */
>>>>  #define FPINST          p10,7,c9,c0,0   /* Floating-Point Instruction Register */
>>>>  #define FPINST2         p10,7,c10,c0,0  /* Floating-point Instruction Register 2 */
>>>> @@ -108,18 +110,23 @@
>>>>  #define MPIDR           p15,0,c0,c0,5   /* Multiprocessor Affinity Register */
>>>>  #define ID_PFR0         p15,0,c0,c1,0   /* Processor Feature Register 0 */
>>>>  #define ID_PFR1         p15,0,c0,c1,1   /* Processor Feature Register 1 */
>>>> +#define ID_PFR2         p15,0,c0,c3,4   /* Processor Feature Register 2 */
>>>>  #define ID_DFR0         p15,0,c0,c1,2   /* Debug Feature Register 0 */
>>>> +#define ID_DFR1         p15,0,c0,c3,5   /* Debug Feature Register 1 */
>>>>  #define ID_AFR0         p15,0,c0,c1,3   /* Auxiliary Feature Register 0 */
>>>>  #define ID_MMFR0        p15,0,c0,c1,4   /* Memory Model Feature Register 0 */
>>>>  #define ID_MMFR1        p15,0,c0,c1,5   /* Memory Model Feature Register 1 */
>>>>  #define ID_MMFR2        p15,0,c0,c1,6   /* Memory Model Feature Register 2 */
>>>>  #define ID_MMFR3        p15,0,c0,c1,7   /* Memory Model Feature Register 3 */
>>>> +#define ID_MMFR4        p15,0,c0,c2,6   /* Memory Model Feature Register 4 */
>>>> +#define ID_MMFR5        p15,0,c0,c3,6   /* Memory Model Feature Register 5 */
>>>>  #define ID_ISAR0        p15,0,c0,c2,0   /* ISA Feature Register 0 */
>>>>  #define ID_ISAR1        p15,0,c0,c2,1   /* ISA Feature Register 1 */
>>>>  #define ID_ISAR2        p15,0,c0,c2,2   /* ISA Feature Register 2 */
>>>>  #define ID_ISAR3        p15,0,c0,c2,3   /* ISA Feature Register 3 */
>>>>  #define ID_ISAR4        p15,0,c0,c2,4   /* ISA Feature Register 4 */
>>>>  #define ID_ISAR5        p15,0,c0,c2,5   /* ISA Feature Register 5 */
>>>> +#define ID_ISAR6        p15,0,c0,c2,7   /* ISA Feature Register 6 */
>>>>  #define CCSIDR          p15,1,c0,c0,0   /* Cache Size ID Registers */
>>>>  #define CLIDR           p15,1,c0,c0,1   /* Cache Level ID Register */
>>>>  #define CSSELR          p15,2,c0,c0,0   /* Cache Size Selection Register */
>>>> @@ -312,18 +319,23 @@
>>>>  #define HSTR_EL2                HSTR
>>>>  #define ID_AFR0_EL1             ID_AFR0
>>>>  #define ID_DFR0_EL1             ID_DFR0
>>>> +#define ID_DFR1_EL1             ID_DFR1
>>>>  #define ID_ISAR0_EL1            ID_ISAR0
>>>>  #define ID_ISAR1_EL1            ID_ISAR1
>>>>  #define ID_ISAR2_EL1            ID_ISAR2
>>>>  #define ID_ISAR3_EL1            ID_ISAR3
>>>>  #define ID_ISAR4_EL1            ID_ISAR4
>>>>  #define ID_ISAR5_EL1            ID_ISAR5
>>>> +#define ID_ISAR6_EL1            ID_ISAR6
>>>>  #define ID_MMFR0_EL1            ID_MMFR0
>>>>  #define ID_MMFR1_EL1            ID_MMFR1
>>>>  #define ID_MMFR2_EL1            ID_MMFR2
>>>>  #define ID_MMFR3_EL1            ID_MMFR3
>>>> +#define ID_MMFR4_EL1            ID_MMFR4
>>>> +#define ID_MMFR5_EL1            ID_MMFR5
>>>>  #define ID_PFR0_EL1             ID_PFR0
>>>>  #define ID_PFR1_EL1             ID_PFR1
>>>> +#define ID_PFR2_EL1             ID_PFR2
>>>>  #define IFSR32_EL2              IFSR
>>>>  #define MDCR_EL2                HDCR
>>>>  #define MIDR_EL1                MIDR
>>>> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
>>>> index c7b5052992..6cf83d775b 100644
>>>> --- a/xen/include/asm-arm/cpufeature.h
>>>> +++ b/xen/include/asm-arm/cpufeature.h
>>>> @@ -148,6 +148,7 @@ struct cpuinfo_arm {
>>>>      union {
>>>>          uint64_t bits[2];
>>>>          struct {
>>>> +            /* PFR0 */
>>>>              unsigned long el0:4;
>>>>              unsigned long el1:4;
>>>>              unsigned long el2:4;
>>>> @@ -155,9 +156,23 @@ struct cpuinfo_arm {
>>>>              unsigned long fp:4;   /* Floating Point */
>>>>              unsigned long simd:4; /* Advanced SIMD */
>>>>              unsigned long gic:4;  /* GIC support */
>>>> -            unsigned long __res0:28;
>>>> +            unsigned long ras:4;
>>>> +            unsigned long sve:4;
>>>> +            unsigned long sel2:4;
>>>> +            unsigned long mpam:4;
>>>> +            unsigned long amu:4;
>>>> +            unsigned long dit:4;
>>>> +            unsigned long __res0:4;
>>>>              unsigned long csv2:4;
>>>> -            unsigned long __res1:4;
>>>> +            unsigned long csv3:4;
>>>> +
>>>> +            /* PFR1 */
>>>> +            unsigned long bt:4;
>>>> +            unsigned long ssbs:4;
>>>> +            unsigned long mte:4;
>>>> +            unsigned long ras_frac:4;
>>>> +            unsigned long mpam_frac:4;
>>>> +            unsigned long __res1:44;
>>>>          };
>>>>      } pfr64;
>>>>  @@ -170,7 +185,7 @@ struct cpuinfo_arm {
>>>>      } aux64;
>>>>        union {
>>>> -        uint64_t bits[2];
>>>> +        uint64_t bits[3];
>>>>          struct {
>>>>              unsigned long pa_range:4;
>>>>              unsigned long asid_bits:4;
>>>> @@ -190,6 +205,8 @@ struct cpuinfo_arm {
>>>>              unsigned long pan:4;
>>>>              unsigned long __res1:8;
>>>>              unsigned long __res2:32;
>>>> +
>>>> +            unsigned long __res3:64;
>>>>          };
>>>>      } mm64;
>>>>  @@ -197,6 +214,10 @@ struct cpuinfo_arm {
>>>>          uint64_t bits[2];
>>>>      } isa64;
>>>>  +    struct {
>>>> +        uint64_t bits[1];
>>>> +    } zfr64;
>>>> +
>>>>  #endif
>>>>        /*
>>>> @@ -204,25 +225,38 @@ struct cpuinfo_arm {
>>>>       * when running in 32-bit mode.
>>>>       */
>>>>      union {
>>>> -        uint32_t bits[2];
>>>> +        uint32_t bits[3];
>>>>          struct {
>>>> +            /* PFR0 */
>>>>              unsigned long arm:4;
>>>>              unsigned long thumb:4;
>>>>              unsigned long jazelle:4;
>>>>              unsigned long thumbee:4;
>>>> -            unsigned long __res0:16;
>>>> +            unsigned long csv2:4;
>>>> +            unsigned long amu:4;
>>>> +            unsigned long dit:4;
>>>> +            unsigned long ras:4;
>>>>  +            /* PFR1 */
>>>>              unsigned long progmodel:4;
>>>>              unsigned long security:4;
>>>>              unsigned long mprofile:4;
>>>>              unsigned long virt:4;
>>>>              unsigned long gentimer:4;
>>>> -            unsigned long __res1:12;
>>>> +            unsigned long sec_frac:4;
>>>> +            unsigned long virt_frac:4;
>>>> +            unsigned long gic:4;
>>>> +
>>>> +            /* PFR2 */
>>>> +            unsigned long csv3:4;
>>>> +            unsigned long ssbs:4;
>>>> +            unsigned long ras_frac:4;
>>>> +            unsigned long __res2:20;
>>>>          };
>>>>      } pfr32;
>>>>        struct {
>>>> -        uint32_t bits[1];
>>>> +        uint32_t bits[2];
>>>>      } dbg32;
>>>>        struct {
>>>> @@ -230,12 +264,16 @@ struct cpuinfo_arm {
>>>>      } aux32;
>>>>        struct {
>>>> -        uint32_t bits[4];
>>>> +        uint32_t bits[6];
>>>>      } mm32;
>>>>        struct {
>>>> -        uint32_t bits[6];
>>>> +        uint32_t bits[7];
>>>>      } isa32;
>>>> +
>>>> +    struct {
>>>> +        uint64_t bits[3];
>>>
>>> Shouldn't this be register_t?
>> I followed the scheme of the rest of the structure which
>> is always using uint64_t or uint32_t for bits definitions.
>
> Right, but I am sure you will not be surprised if I tell you this is buggy ;). The historical reason is, IIRC, that the original spec of Armv8.0 described them as 32-bit registers.
>
> The spec was updated a while ago to clarify that they are 64-bit when running in AArch64. But a majority of them still have the top 32 bits RES0 (thankfully!).
>
>> Why should I use the register_t type for this one?
>
> Because the value is 32-bit on AArch32 and 64-bit for AArch64. I am OK to still use 64-bit for AArch32, but it sounds like a bit of a waste of memory.
>
> What I care most about here is that we use 64-bit for the new registers on AArch64. Otherwise, we are soon going to discover that a bit above 32 was allocated and not detected by Xen. I don't want to be the one doing the debugging!
>
> Admittedly, this is not a new issue. However, the more offending code we have, the more difficult it will be to get Xen fully compliant with the Armv8 spec.
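[Editor's note: the register_t suggestion amounts to letting the slot width follow the build. The typedef below is illustrative (Xen's actual register_t lives in its arch headers and the names here are hypothetical), but it shows why a 64-bit slot keeps a field allocated above bit 31 while a fixed uint32_t slot would drop it:]

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for an arch-dependent register type: 32-bit
 * on an AArch32 build, 64-bit on an AArch64 build. */
#ifdef CONFIG_ARM_32
typedef uint32_t xen_register_t;
#else
typedef uint64_t xen_register_t;
#endif

/* Read back a hypothetical feature field at bits [35:32]; it survives
 * only if the storage slot is at least 64 bits wide. */
static unsigned int stored_field_35_32(xen_register_t slot)
{
    return (unsigned int)(((uint64_t)slot >> 32) & 0xf);
}
```

[On a 64-bit build the field round-trips intact; had the slot been declared uint32_t, the same store would silently zero it, which is the undetected-feature scenario described above.]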

I agree with that, so I propose to do the following:
- create a new series to fix those:
	- use READ_SYSREG in cpufeature.c
	- use register_t in the cpuinfo structure
- rebase my series on top of this one and use the proper types in it

Might require more work, but it seems to be the right approach.

Cheers
Bertrand

>
> Cheers,
>
> --
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 15:59:11 2020
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 2/7] xen/arm: Add arm64 ID registers definitions
Date: Thu, 10 Dec 2020 15:59:01 +0000
Message-ID: <89552CA7-D63D-4916-9066-15599E4B0F94@arm.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>
 <96a970e5e5d2f1b1bd0e50327857de6a8c8441f7.1607524536.git.bertrand.marquis@arm.com>
 <af02eefb-5846-d32b-22e5-65763e6f51e0@xen.org>
 <alpine.DEB.2.21.2012091742420.20986@sstabellini-ThinkPad-T480s>
 <29a7ea09-2bc5-94e9-563e-07abb18ed260@xen.org>
In-Reply-To: <29a7ea09-2bc5-94e9-563e-07abb18ed260@xen.org>

Hi,

> On 10 Dec 2020, at 15:46, Julien Grall <julien@xen.org> wrote:
>
> On 10/12/2020 02:30, Stefano Stabellini wrote:
>> On Wed, 9 Dec 2020, Julien Grall wrote:
>>> Hi Bertrand,
>>>
>>> On 09/12/2020 16:30, Bertrand Marquis wrote:
>>>> Add coprocessor registers definitions for all ID registers trapped
>>>> through the TID3 bit of HSR.
>>>> Those are the ones that will be emulated in Xen to only publish to guests
>>>> the features that are supported by Xen and that are accessible to
>>>> guests.
>>>> Also define a case to catch all reserved registers that should be
>>>> handled as RAZ.
>>>>
>>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>> ---
>>>> Changes in V2: Rebase
>>>> Changes in V3:
>>>>    Add case definition for reserved registers.
>>>>
>>>> ---
>>>>   xen/include/asm-arm/arm64/hsr.h | 66 +++++++++++++++++++++++++++++++++
>>>>   1 file changed, 66 insertions(+)
>>>>
>>>> diff --git a/xen/include/asm-arm/arm64/hsr.h b/xen/include/asm-arm/arm64/hsr.h
>>>> index ca931dd2fe..ffe0f0007e 100644
>>>> --- a/xen/include/asm-arm/arm64/hsr.h
>>>> +++ b/xen/include/asm-arm/arm64/hsr.h
>>>> @@ -110,6 +110,72 @@
>>>>   #define HSR_SYSREG_CNTP_CTL_EL0   HSR_SYSREG(3,3,c14,c2,1)
>>>>   #define HSR_SYSREG_CNTP_CVAL_EL0  HSR_SYSREG(3,3,c14,c2,2)
>>>>   +/* Those registers are used when HCR_EL2.TID3 is set */
>>>> +#define HSR_SYSREG_ID_PFR0_EL1    HSR_SYSREG(3,0,c0,c1,0)
>>>> +#define HSR_SYSREG_ID_PFR1_EL1    HSR_SYSREG(3,0,c0,c1,1)
>>>> +#define HSR_SYSREG_ID_PFR2_EL1    HSR_SYSREG(3,0,c0,c3,4)
>>>> +#define HSR_SYSREG_ID_DFR0_EL1    HSR_SYSREG(3,0,c0,c1,2)
>>>> +#define HSR_SYSREG_ID_DFR1_EL1    HSR_SYSREG(3,0,c0,c3,5)
>>>> +#define HSR_SYSREG_ID_AFR0_EL1    HSR_SYSREG(3,0,c0,c1,3)
>>>> +#define HSR_SYSREG_ID_MMFR0_EL1   HSR_SYSREG(3,0,c0,c1,4)
>>>> +#define HSR_SYSREG_ID_MMFR1_EL1   HSR_SYSREG(3,0,c0,c1,5)
>>>> +#define HSR_SYSREG_ID_MMFR2_EL1   HSR_SYSREG(3,0,c0,c1,6)
>>>> +#define HSR_SYSREG_ID_MMFR3_EL1   HSR_SYSREG(3,0,c0,c1,7)
>>>> +#define HSR_SYSREG_ID_MMFR4_EL1   HSR_SYSREG(3,0,c0,c2,6)
>>>> +#define HSR_SYSREG_ID_MMFR5_EL1   HSR_SYSREG(3,0,c0,c3,6)
>>>> +#define HSR_SYSREG_ID_ISAR0_EL1   HSR_SYSREG(3,0,c0,c2,0)
>>>> +#define HSR_SYSREG_ID_ISAR1_EL1   HSR_SYSREG(3,0,c0,c2,1)
>>>> +#define HSR_SYSREG_ID_ISAR2_EL1   HSR_SYSREG(3,0,c0,c2,2)
>>>> +#define HSR_SYSREG_ID_ISAR3_EL1   HSR_SYSREG(3,0,c0,c2,3)
>>>> +#define HSR_SYSREG_ID_ISAR4_EL1   HSR_SYSREG(3,0,c0,c2,4)
>>>> +#define HSR_SYSREG_ID_ISAR5_EL1   HSR_SYSREG(3,0,c0,c2,5)
>>>> +#define HSR_SYSREG_ID_ISAR6_EL1   HSR_SYSREG(3,0,c0,c2,7)
>>>> +#define HSR_SYSREG_MVFR0_EL1      HSR_SYSREG(3,0,c0,c3,0)
>>>> +#define HSR_SYSREG_MVFR1_EL1      HSR_SYSREG(3,0,c0,c3,1)
>>>> +#define HSR_SYSREG_MVFR2_EL1      HSR_SYSREG(3,0,c0,c3,2)
>>>> +
>>>> +#define HSR_SYSREG_ID_AA64PFR0_EL1   HSR_SYSREG(3,0,c0,c4,0)
>>>> +#define HSR_SYSREG_ID_AA64PFR1_EL1   HSR_SYSREG(3,0,c0,c4,1)
>>>> +#define HSR_SYSREG_ID_AA64DFR0_EL1   HSR_SYSREG(3,0,c0,c5,0)
>>>> +#define HSR_SYSREG_ID_AA64DFR1_EL1   HSR_SYSREG(3,0,c0,c5,1)
>>>> +#define HSR_SYSREG_ID_AA64ISAR0_EL1  HSR_SYSREG(3,0,c0,c6,0)
>>>> +#define HSR_SYSREG_ID_AA64ISAR1_EL1  HSR_SYSREG(3,0,c0,c6,1)
>>>> +#define HSR_SYSREG_ID_AA64MMFR0_EL1  HSR_SYSREG(3,0,c0,c7,0)
>>>> +#define HSR_SYSREG_ID_AA64MMFR1_EL1  HSR_SYSREG(3,0,c0,c7,1)
>>>> +#define HSR_SYSREG_ID_AA64MMFR2_EL1  HSR_SYSREG(3,0,c0,c7,2)
>>>> +#define HSR_SYSREG_ID_AA64AFR0_EL1   HSR_SYSREG(3,0,c0,c5,4)
>>>> +#define HSR_SYSREG_ID_AA64AFR1_EL1   HSR_SYSREG(3,0,c0,c5,5)
>>>> +#define HSR_SYSREG_ID_AA64ZFR0_EL1   HSR_SYSREG(3,0,c0,c4,4)
>>>> +
>>>> +/*
>>>> + * These cases catch all reserved registers trapped by TID3 which
>>>> + * currently have no assignment.
>>>> + * HCR.TID3 traps all registers in group 3:
>>>> + * Op0 == 3, op1 == 0, CRn == c0, CRm == {c1-c7}, op2 == {0-7}.
>>>> + */
>>>> +#define HSR_SYSREG_TID3_RESERVED_CASE  case HSR_SYSREG(3,0,c0,c3,3): \
>>>> +                                       case HSR_SYSREG(3,0,c0,c3,7): \
>>>> +                                       case HSR_SYSREG(3,0,c0,c4,2): \
>>>> +                                       case HSR_SYSREG(3,0,c0,c4,3): \
>>>> +                                       case HSR_SYSREG(3,0,c0,c4,5): \
>>>> +                                       case HSR_SYSREG(3,0,c0,c4,6): \
>>>> +                                       case HSR_SYSREG(3,0,c0,c4,7): \
>>>> +                                       case HSR_SYSREG(3,0,c0,c5,2): \
>>>> +                                       case HSR_SYSREG(3,0,c0,c5,3): \
>>>> +                                       case HSR_SYSREG(3,0,c0,c5,6): \
>>>> +                                       case HSR_SYSREG(3,0,c0,c5,7): \
>>>> +                                       case HSR_SYSREG(3,0,c0,c6,2): \
>>>> +                                       case HSR_SYSREG(3,0,c0,c6,3): \
>>>> +                                       case HSR_SYSREG(3,0,c0,c6,4): \
>>>> +                                       case HSR_SYSREG(3,0,c0,c6,5): \
>>>> +                                       case HSR_SYSREG(3,0,c0,c6,6): \
>>>> +                                       case HSR_SYSREG(3,0,c0,c6,7): \
>>>> +                                       case HSR_SYSREG(3,0,c0,c7,3): \
>>>> +                                       case HSR_SYSREG(3,0,c0,c7,4): \
>>>> +                                       case HSR_SYSREG(3,0,c0,c7,5): \
>>>> +                                       case HSR_SYSREG(3,0,c0,c7,6): \
>>>> +                                       case HSR_SYSREG(3,0,c0,c7,7)
>>>
>>> I don't like the idea of defining the list of cases in a header that is
>>> used by multiple sources. Please define it directly in the source file
>>> that uses it.
>> At that point it might be best to open-code it in do_sysreg? I mean no
>> #define at all.
>
> I am happy with that.

Then I will fix both the cp15 and arm64 code to do it this way.

Cheers
Bertrand

>
> Cheers,
>
> -- 
> Julien Grall
>
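For reference, the open-coded variant agreed on above could look roughly like the sketch below. This is a stand-alone illustration, not Xen code: the SYSREG() packing macro, its field shifts, and the is_tid3_reserved() helper are all invented for the example; Xen's real HSR_SYSREG() packs the fields into the HSR.ISS layout, and the case labels would sit directly in do_sysreg().

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical stand-in for Xen's HSR_SYSREG() macro: pack op0/op1/CRn/CRm/op2
 * into one integer usable as a switch case label. The real macro encodes these
 * fields into the HSR.ISS layout instead; the shifts here are made up.
 */
#define SYSREG(op0, op1, crn, crm, op2) \
    (((uint32_t)(op0) << 14) | ((uint32_t)(op1) << 11) | \
     ((uint32_t)(crn) << 7)  | ((uint32_t)(crm) << 3)  | (uint32_t)(op2))

/*
 * Open-coded reserved cases, as suggested in the discussion: the case labels
 * live directly in the trap handler rather than behind a #define shared via a
 * header. Only a subset of the TID3-trapped reserved encodings is shown.
 */
static bool is_tid3_reserved(uint32_t reg)
{
    switch ( reg )
    {
    case SYSREG(3, 0, 0, 3, 3):
    case SYSREG(3, 0, 0, 3, 7):
    case SYSREG(3, 0, 0, 4, 2):
    case SYSREG(3, 0, 0, 4, 3):
    case SYSREG(3, 0, 0, 7, 7):
        /* Reserved ID register: a handler would emulate it as RAZ. */
        return true;
    default:
        return false;
    }
}
```

Open-coding keeps the full list visible at its single point of use, at the cost of a longer switch in the handler.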



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 16:05:59 2020
Subject: Re: [PATCH v3 3/7] xen/arm: create a cpuinfo structure for guest
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>
 <33f39e7f521e6f73a0dba57a8be9fb50656e1807.1607524536.git.bertrand.marquis@arm.com>
 <61b2677c-bc0d-af0b-95f8-f8de76a20856@xen.org>
 <BD35BA39-FE40-4752-9B21-CCD0F0D963B0@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <0ad1c0d0-a3f2-da98-0a0f-b90736cc11dc@xen.org>
Date: Thu, 10 Dec 2020 16:05:48 +0000
In-Reply-To: <BD35BA39-FE40-4752-9B21-CCD0F0D963B0@arm.com>



On 10/12/2020 15:48, Bertrand Marquis wrote:
> Hi Julien,
> 
>> On 9 Dec 2020, at 23:09, Julien Grall <julien@xen.org> wrote:
>>
>> Hi Bertrand,
>>
>> On 09/12/2020 16:30, Bertrand Marquis wrote:
>>> Create a cpuinfo structure for guest and mask into it the features that
>>> we do not support in Xen or that we do not want to publish to guests.
>>> Modify some values in the cpuinfo structure for guests to mask some
>>> features which we do not want to allow to guests (like AMU) or we do not
>>> support (like SVE).
>>> The code is trying to group together registers modifications for the
>>> same feature to be able in the long term to easily enable/disable a
>>> feature depending on user parameters or add other registers modification
>>> in the same place (like enabling/disabling HCR bits).
>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>> ---
>>> Changes in V2: Rebase
>>> Changes in V3:
>>>    Use current_cpu_data info instead of recalling identify_cpu
>>> ---
>>>   xen/arch/arm/cpufeature.c        | 51 ++++++++++++++++++++++++++++++++
>>>   xen/include/asm-arm/cpufeature.h |  2 ++
>>>   2 files changed, 53 insertions(+)
>>> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
>>> index bc7ee5ac95..7255383504 100644
>>> --- a/xen/arch/arm/cpufeature.c
>>> +++ b/xen/arch/arm/cpufeature.c
>>> @@ -24,6 +24,8 @@
>>>     DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
>>>   +struct cpuinfo_arm __read_mostly guest_cpuinfo;
>>> +
>>>   void update_cpu_capabilities(const struct arm_cpu_capabilities *caps,
>>>                                const char *info)
>>>   {
>>> @@ -157,6 +159,55 @@ void identify_cpu(struct cpuinfo_arm *c)
>>>   #endif
>>>   }
>>>   +/*
>>> + * This function is creating a cpuinfo structure with values modified to mask
>>> + * all cpu features that should not be published to guest.
>>> + * The created structure is then used to provide ID registers values to guests.
>>> + */
>>> +static int __init create_guest_cpuinfo(void)
>>> +{
>>> +    /*
>>> +     * TODO: The code is currently using only the features detected on the boot
>>> +     * core. In the long term we should try to compute values containing only
>>> +     * features supported by all cores.
>>> +     */
>>> +    guest_cpuinfo = current_cpu_data;
>>
>> It would be more logical to use boot_cpu_data as this would be easier to match with your comment.
> 
> Agree, I will fix that in V4.
> 
>>
>>> +
>>> +#ifdef CONFIG_ARM_64
>>> +    /* Disable MPAM as xen does not support it */
>>> +    guest_cpuinfo.pfr64.mpam = 0;
>>> +    guest_cpuinfo.pfr64.mpam_frac = 0;
>>> +
>>> +    /* Disable SVE as Xen does not support it */
>>> +    guest_cpuinfo.pfr64.sve = 0;
>>> +    guest_cpuinfo.zfr64.bits[0] = 0;
>>> +
>>> +    /* Disable MTE as Xen does not support it */
>>> +    guest_cpuinfo.pfr64.mte = 0;
>>> +#endif
>>> +
>>> +    /* Disable AMU */
>>> +#ifdef CONFIG_ARM_64
>>> +    guest_cpuinfo.pfr64.amu = 0;
>>> +#endif
>>> +    guest_cpuinfo.pfr32.amu = 0;
>>> +
>>> +    /* Disable RAS as Xen does not support it */
>>> +#ifdef CONFIG_ARM_64
>>> +    guest_cpuinfo.pfr64.ras = 0;
>>> +    guest_cpuinfo.pfr64.ras_frac = 0;
>>> +#endif
>>> +    guest_cpuinfo.pfr32.ras = 0;
>>> +    guest_cpuinfo.pfr32.ras_frac = 0;
>>
>> How about all the fields that are currently marked as RES0/RES1? Shouldn't we make sure they will stay like that even if newer architecture use them?
> 
> Definitely we can do more than this here (including allowing to enable some things for dom0 or for test reasons).
> This is a first try to solve current issues with MPAM and SVE and I plan to continue to enhance this in the future
> to enable more customisation here.
> I do think we could do a bit more here to have some features controlled by the user but this will need a bit of
> discussion to agree on a strategy.

I think you misunderstood my comment. I am not asking whether we want to 
customize the value per domain. Instead, I am raising questions about the 
strategy taken in this patch.

I am going to leave the safety aside, because I think this is orthogonal 
to this patch.

This patch is introducing a deny list. This means that all the features 
will be exposed to a domain unless someone determines that they are not 
supported by Xen.

This means we will always try to catch up with what Arm decided to 
invent and attempt to fix it before the silicon is out.

Instead, I think it would be better to use an allow list. We should only 
expose features to the guest that we know work (this could possibly be 
just the Armv8.0 ones).

Cheers,

-- 
Julien Grall
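To make the deny-list vs. allow-list distinction concrete, here is a minimal self-contained sketch. It is not Xen code: the struct layout, field names, and field positions are invented for the example, whereas Xen's real cpuinfo_arm mirrors the architectural ID-register layouts.

```c
#include <assert.h>

/* Invented 4-bit ID-register-style fields, for illustration only. */
struct pfr {
    unsigned int fp  : 4;  /* floating point: supported by the hypervisor */
    unsigned int el0 : 4;  /* EL0 support:    supported */
    unsigned int sve : 4;  /* SVE:            not supported */
    unsigned int amu : 4;  /* AMU:            not to be exposed */
};

/*
 * Deny-list style (what the patch under review does): start from the
 * hardware value and zero the features known to be unsupported. A field
 * added by a future architecture revision leaks through unmodified.
 */
static struct pfr guest_pfr_denylist(struct pfr hw)
{
    struct pfr g = hw;
    g.sve = 0;
    g.amu = 0;
    return g;
}

/*
 * Allow-list style (what the review suggests): start from all-zero and
 * copy across only the features known to work, so anything unknown is
 * hidden from the guest by default.
 */
static struct pfr guest_pfr_allowlist(struct pfr hw)
{
    struct pfr g = { 0 };
    g.fp  = hw.fp;
    g.el0 = hw.el0;
    return g;
}
```

Both variants hide SVE and AMU here; the difference only shows up for a feature bit neither function was taught about, which the deny list exposes and the allow list hides.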


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 16:17:35 2020
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 3/7] xen/arm: create a cpuinfo structure for guest
Date: Thu, 10 Dec 2020 16:17:04 +0000
Message-ID: <FA121B9D-0B9F-4636-A7FC-0548C05E31F0@arm.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>
 <33f39e7f521e6f73a0dba57a8be9fb50656e1807.1607524536.git.bertrand.marquis@arm.com>
 <61b2677c-bc0d-af0b-95f8-f8de76a20856@xen.org>
 <BD35BA39-FE40-4752-9B21-CCD0F0D963B0@arm.com>
 <0ad1c0d0-a3f2-da98-0a0f-b90736cc11dc@xen.org>
In-Reply-To: <0ad1c0d0-a3f2-da98-0a0f-b90736cc11dc@xen.org>

Hi Julien,

> On 10 Dec 2020, at 16:05, Julien Grall <julien@xen.org> wrote:
> 
> [...]
> 
> Instead, I think it would be better to use an allow list. We should only
> expose features to the guest that we know work (this could possibly be
> just the Armv8.0 ones).

I understood that, and I fully agree that this is what we should do: only
expose what we support and know, and leave everything else as "disabled".
And I definitely want to do that; I think with this series we have the
required support for it, and we will need to rework how we initialise the
guest_cpuinfo structure.

I just want to leave this discussion for later so that we can at least,
right now, have a current Linux booting without the need to modify the
Linux kernel binary to remove things like SVE.

Regards
Bertrand

> 
> Cheers,
> 
> -- 
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 16:30:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 16:30:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49532.87607 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knOq5-0003Df-Uw; Thu, 10 Dec 2020 16:30:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49532.87607; Thu, 10 Dec 2020 16:30:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knOq5-0003DY-Rt; Thu, 10 Dec 2020 16:30:45 +0000
Received: by outflank-mailman (input) for mailman id 49532;
 Thu, 10 Dec 2020 16:30:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1knOq5-0003DT-3O
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 16:30:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1knOq3-0003c8-ON; Thu, 10 Dec 2020 16:30:43 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1knOq3-0005qV-CG; Thu, 10 Dec 2020 16:30:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=npKr7XdwcZ3GccZnUcDMIlTKerkkAejcVfNbm6eTs8s=; b=fo5lNWV9JCN/80ouedDxPk+kOD
	gt8ZDNT0DKU5ZVP2huByoK/RuVD/MNH7rDQkBfbQJNIgx4NLbtgIFnxQOpKKEDkBT6PQsR9gL/HVS
	+sQoBZraFL7TRtTRnXudwdIBP7kd7qE/gbJDHpZow+xUUk3jLurmi8k0zVGChsZzSk0g=;
Subject: Re: [PATCH v3 3/7] xen/arm: create a cpuinfo structure for guest
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>
 <33f39e7f521e6f73a0dba57a8be9fb50656e1807.1607524536.git.bertrand.marquis@arm.com>
 <61b2677c-bc0d-af0b-95f8-f8de76a20856@xen.org>
 <BD35BA39-FE40-4752-9B21-CCD0F0D963B0@arm.com>
 <0ad1c0d0-a3f2-da98-0a0f-b90736cc11dc@xen.org>
 <FA121B9D-0B9F-4636-A7FC-0548C05E31F0@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <3630cf94-1fb8-a337-dde7-16907da9af3f@xen.org>
Date: Thu, 10 Dec 2020 16:30:41 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <FA121B9D-0B9F-4636-A7FC-0548C05E31F0@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 10/12/2020 16:17, Bertrand Marquis wrote:
> Hi Julien,

Hi Bertrand,

>> On 10 Dec 2020, at 16:05, Julien Grall <julien@xen.org> wrote:
>>
>>
>>
>> On 10/12/2020 15:48, Bertrand Marquis wrote:
>>> Hi Julien,
>>>> On 9 Dec 2020, at 23:09, Julien Grall <julien@xen.org> wrote:
>>>>
>>>> Hi Bertand,
>>>>
>>>> On 09/12/2020 16:30, Bertrand Marquis wrote:
>>>>> Create a cpuinfo structure for guest and mask into it the features that
>>>>> we do not support in Xen or that we do not want to publish to guests.
>>>>> Modify some values in the cpuinfo structure for guests to mask some
>>>>> features which we do not want to allow to guests (like AMU) or we do not
>>>>> support (like SVE).
>>>>> The code is trying to group together registers modifications for the
>>>>> same feature to be able in the long term to easily enable/disable a
>>>>> feature depending on user parameters or add other registers modification
>>>>> in the same place (like enabling/disabling HCR bits).
>>>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>>> ---
>>>>> Changes in V2: Rebase
>>>>> Changes in V3:
>>>>>    Use current_cpu_data info instead of recalling identify_cpu
>>>>> ---
>>>>>   xen/arch/arm/cpufeature.c        | 51 ++++++++++++++++++++++++++++++++
>>>>>   xen/include/asm-arm/cpufeature.h |  2 ++
>>>>>   2 files changed, 53 insertions(+)
>>>>> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
>>>>> index bc7ee5ac95..7255383504 100644
>>>>> --- a/xen/arch/arm/cpufeature.c
>>>>> +++ b/xen/arch/arm/cpufeature.c
>>>>> @@ -24,6 +24,8 @@
>>>>>     DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
>>>>>   +struct cpuinfo_arm __read_mostly guest_cpuinfo;
>>>>> +
>>>>>   void update_cpu_capabilities(const struct arm_cpu_capabilities *caps,
>>>>>                                const char *info)
>>>>>   {
>>>>> @@ -157,6 +159,55 @@ void identify_cpu(struct cpuinfo_arm *c)
>>>>>   #endif
>>>>>   }
>>>>>   +/*
>>>>> + * This function is creating a cpuinfo structure with values modified to mask
>>>>> + * all cpu features that should not be published to guest.
>>>>> + * The created structure is then used to provide ID registers values to guests.
>>>>> + */
>>>>> +static int __init create_guest_cpuinfo(void)
>>>>> +{
>>>>> +    /*
>>>>> +     * TODO: The code is currently using only the features detected on the boot
>>>>> +     * core. In the long term we should try to compute values containing only
>>>>> +     * features supported by all cores.
>>>>> +     */
>>>>> +    guest_cpuinfo = current_cpu_data;
>>>>
>>>> It would be more logical to use boot_cpu_data as this would be easier to match with your comment.
>>> Agree, I will fix that in V4.
>>>>
>>>>> +
>>>>> +#ifdef CONFIG_ARM_64
>>>>> +    /* Disable MPAM as xen does not support it */
>>>>> +    guest_cpuinfo.pfr64.mpam = 0;
>>>>> +    guest_cpuinfo.pfr64.mpam_frac = 0;
>>>>> +
>>>>> +    /* Disable SVE as Xen does not support it */
>>>>> +    guest_cpuinfo.pfr64.sve = 0;
>>>>> +    guest_cpuinfo.zfr64.bits[0] = 0;
>>>>> +
>>>>> +    /* Disable MTE as Xen does not support it */
>>>>> +    guest_cpuinfo.pfr64.mte = 0;
>>>>> +#endif
>>>>> +
>>>>> +    /* Disable AMU */
>>>>> +#ifdef CONFIG_ARM_64
>>>>> +    guest_cpuinfo.pfr64.amu = 0;
>>>>> +#endif
>>>>> +    guest_cpuinfo.pfr32.amu = 0;
>>>>> +
>>>>> +    /* Disable RAS as Xen does not support it */
>>>>> +#ifdef CONFIG_ARM_64
>>>>> +    guest_cpuinfo.pfr64.ras = 0;
>>>>> +    guest_cpuinfo.pfr64.ras_frac = 0;
>>>>> +#endif
>>>>> +    guest_cpuinfo.pfr32.ras = 0;
>>>>> +    guest_cpuinfo.pfr32.ras_frac = 0;
>>>>
>>>> How about all the fields that are currently marked as RES0/RES1? Shouldn't we make sure they will stay like that even if newer architecture use them?
>>>> Definitely we can do more than this here (including allowing some things to be enabled for dom0 or for test reasons).
>>> This is a first try to solve current issues with MPAM and SVE and I plan to continue to enhance this in the future
>>> to enable more customisation here.
>>> I do think we could do a bit more here to have some features controlled by the user but this will need a bit of
>>> discussion to agree on a strategy.
>>
>> I think you misunderstood my comment. I am not asking whether we want to customize the value per domain. Instead, I am raising questions for the strategy taken in this patch.
>>
>> I am going to leave the safety aside, because I think this is orthogonal to this patch.
>>
>> This patch is introducing a deny list. This means that all the features will be exposed to a domain unless someone determines that this is not
>> supported by Xen.
>>
>> This means we will always try to catch up with what Arm decided to invent and attempt to fix it before the silicon is out.
>>
>> Instead, I think it would be better to use an allow list. We should only expose features to the guest we know works (this could possibly be just the Armv8.0 one).
> 
> I understood that and I fully agree that this is what we should do: only expose what we support and know, and leave everything else “disabled”.
> And I definitely want to do that, and I think with this series we have the required support to do that; we will need to rework how we initialise the
> guest_cpuinfo structure.
> 
> I just want to leave this discussion for later so that, right now, we can at least have a current Linux kernel booting without the need to modify the
> Linux kernel binary to remove things like SVE.

Ok. So this patch is more of a stopgap than the final solution. This 
should be clarified in the commit message.

Although, it is still unclear to me why this can't be an allowlist from 
the start. As you said, the infrastructure is already there. So it would 
be a matter of copying bits we know work with Xen (rather than clobbering).
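The deny-list vs allow-list contrast can be sketched in a few lines of C. This is purely illustrative: the structure and field names below are invented stand-ins for Xen's cpuinfo_arm ID-register fields, not the real layout (see xen/include/asm-arm/cpufeature.h for that).

```c
#include <string.h>

/* Simplified stand-in for a guest's ID-register view (hypothetical). */
struct id_regs {
    unsigned int fp, el0, el1;              /* baseline Armv8.0 fields */
    unsigned int sve, mpam, mte, amu, ras;  /* later extensions        */
};

/*
 * Deny-list style (what this patch does): copy everything the hardware
 * reports, then clear the features Xen cannot handle. Any field a
 * future architecture revision adds leaks through to guests by default.
 */
static struct id_regs guest_id_deny(const struct id_regs *hw)
{
    struct id_regs guest = *hw;

    guest.sve = guest.mpam = guest.mte = guest.amu = guest.ras = 0;
    return guest;
}

/*
 * Allow-list style (Julien's suggestion): start from an all-zero
 * structure and copy only the fields known to work with Xen. A field
 * Arm invents tomorrow stays hidden until it is deliberately added.
 */
static struct id_regs guest_id_allow(const struct id_regs *hw)
{
    struct id_regs guest;

    memset(&guest, 0, sizeof(guest));
    guest.fp  = hw->fp;
    guest.el0 = hw->el0;
    guest.el1 = hw->el1;
    return guest;
}
```

With the allow-list variant, failing to keep up with new architecture features degrades safely (the feature stays hidden) instead of silently exposing something Xen has never tested.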

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 16:38:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 16:38:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49539.87619 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knOxG-0003ag-Nn; Thu, 10 Dec 2020 16:38:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49539.87619; Thu, 10 Dec 2020 16:38:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knOxG-0003aZ-KH; Thu, 10 Dec 2020 16:38:10 +0000
Received: by outflank-mailman (input) for mailman id 49539;
 Thu, 10 Dec 2020 16:38:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=N/MM=FO=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1knOxF-0003Z3-10
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 16:38:09 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7e1a::611])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ed6f33ec-5f2b-4bbc-8585-5f62605f5f63;
 Thu, 10 Dec 2020 16:38:07 +0000 (UTC)
Received: from AM5PR0701CA0052.eurprd07.prod.outlook.com (2603:10a6:203:2::14)
 by AM0PR08MB4081.eurprd08.prod.outlook.com (2603:10a6:208:12b::25)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.13; Thu, 10 Dec
 2020 16:38:05 +0000
Received: from AM5EUR03FT053.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:2:cafe::c2) by AM5PR0701CA0052.outlook.office365.com
 (2603:10a6:203:2::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.5 via Frontend
 Transport; Thu, 10 Dec 2020 16:38:05 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT053.mail.protection.outlook.com (10.152.16.210) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12 via Frontend Transport; Thu, 10 Dec 2020 16:38:04 +0000
Received: ("Tessian outbound 665ba7fbdfd9:v71");
 Thu, 10 Dec 2020 16:38:04 +0000
Received: from f8ef368ec943.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 7EB63844-9FDF-49C4-A5F9-438B19995B10.1; 
 Thu, 10 Dec 2020 16:37:49 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f8ef368ec943.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 10 Dec 2020 16:37:49 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB3993.eurprd08.prod.outlook.com (2603:10a6:10:ad::26) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.17; Thu, 10 Dec
 2020 16:37:47 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3632.023; Thu, 10 Dec 2020
 16:37:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ed6f33ec-5f2b-4bbc-8585-5f62605f5f63
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5tuYGK+NQTwfNA/B1wXFLTrbshqLP5tP6teJ5Jdu3Hc=;
 b=pdcboBcc84WlCgXp+cyJvQSW2rN/MQaj52fSa7WAFTM0d40X/iwOPN7NxXmgdL2YVY5D9ljGPfw65+AWtKWCtOyi62NPtTU9CO9kOsSYoLmJjhCjZLSz4kkB3rmE+C3Djhb5kZeFaPmbkas47c5umczWKaQ0ro3fGCr4fqR9YqM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: af727b833209f434
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bR4/KQL/DZPtPrXcu7rl3snQy2pLJbVbjbr1qtLsBOsI8jQKQk00xzXrqlAJSuO9bFbodcF9cmlrhsuhvDF0soJ/nxiAbIKGnrAyEpsgBFNZJ0v3JIEuuZ60bewnarrG7ro0CaEwQVwpDudxvkgYozs8jCmLk8HI0X0Dxq0e9z7ijrq+KPKZTxzxcN+g6a/SaFd0vAnXXCvk+0NsIcX0UtgpAcrFDzfpQK4DA94NSzqndG+Tjgoc9zHJdJ1ikiLEshw0NllrZ9NbM4IgOY2LyK+ulg00Oyi40lpPf6WDLTaRW/I7j9dpi/QkcfmpY0LIKhOApT9VdIwitINbkZbtUQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5tuYGK+NQTwfNA/B1wXFLTrbshqLP5tP6teJ5Jdu3Hc=;
 b=BrdOTJ9K0EZlWASNa7hfDn7Kl03E/Jwr7mfiKjqF1dmoOasiNn8qsaWZxnCmh3UdKWWwoCEhuAjAcMGGesodoi3LS6+HTadof+ic+WAizQT/q2bvAiNRLpE3xfWEBjztKtaB9GFsZ7vzbJUJJ9UWokIPbWfqhrwA7T6Qiv62ryloKWuo9R01YILps9mWLpCE+/42bpMbPob628wF3OcOg2Avgy3iAO58SMraKT2ZJexnAIr8zeIS9//5rgAcOmjepE8e+8Q7/lLqt2816uIQQqluWEPYuLG2aebacfSyTP/3DKjn9SfeA43ecg5jPc3VBhI3k9UMBAxJSW54l2UOIQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5tuYGK+NQTwfNA/B1wXFLTrbshqLP5tP6teJ5Jdu3Hc=;
 b=pdcboBcc84WlCgXp+cyJvQSW2rN/MQaj52fSa7WAFTM0d40X/iwOPN7NxXmgdL2YVY5D9ljGPfw65+AWtKWCtOyi62NPtTU9CO9kOsSYoLmJjhCjZLSz4kkB3rmE+C3Djhb5kZeFaPmbkas47c5umczWKaQ0ro3fGCr4fqR9YqM=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 3/7] xen/arm: create a cpuinfo structure for guest
Thread-Topic: [PATCH v3 3/7] xen/arm: create a cpuinfo structure for guest
Thread-Index:
 AQHWzklW+4XymezDRUeCGhvhzW8UY6nvZCkAgAEXL4CAAAS6AIAAAyYAgAADzoCAAAH7AA==
Date: Thu, 10 Dec 2020 16:37:47 +0000
Message-ID: <706337B0-8C4B-4CFE-99A3-650C149291FB@arm.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>
 <33f39e7f521e6f73a0dba57a8be9fb50656e1807.1607524536.git.bertrand.marquis@arm.com>
 <61b2677c-bc0d-af0b-95f8-f8de76a20856@xen.org>
 <BD35BA39-FE40-4752-9B21-CCD0F0D963B0@arm.com>
 <0ad1c0d0-a3f2-da98-0a0f-b90736cc11dc@xen.org>
 <FA121B9D-0B9F-4636-A7FC-0548C05E31F0@arm.com>
 <3630cf94-1fb8-a337-dde7-16907da9af3f@xen.org>
In-Reply-To: <3630cf94-1fb8-a337-dde7-16907da9af3f@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: f17724fb-b2f1-404c-edc4-08d89d29f7b4
x-ms-traffictypediagnostic: DB8PR08MB3993:|AM0PR08MB4081:
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB4081E3DCDF747C1A1B6E65DF9DCB0@AM0PR08MB4081.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 KkAaWfuHECRkw+ZTOYJagveBUily9oiA3Jxb10HFxdihj0BnTSYzi/GynEL1JBhu3YfYBEmPwawmauYmSoCr/xOAYZbaDnbDrcSBEH2AbCA8dYjpjUHBPwD+O+qHNBUemPDTFygunchxic0epRTlQKnCKvAzPAyoR2o2v0a3P+QB+BAuXVrE8ruycSjgfN0Tv4HSfq6MjgNlBUoDu2aBTAiiTbeMUv+RQEseSFOSZ5rdLhPm65NAOkrjLiLzUZQIPp5RwBzrrhoxXclLAb0iWjK3zlUet1RtZxB5f4YWwArJc+H92nSigXwGffvagiFMezlWTR183PbBakjBP+RSn5+MeSwlHrkt1VrCESsxSbOVsoaCdOKOCnZ3SqV//bpy
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(396003)(366004)(136003)(376002)(346002)(5660300002)(66946007)(33656002)(186003)(6506007)(53546011)(8676002)(2616005)(71200400001)(26005)(64756008)(86362001)(54906003)(316002)(76116006)(83380400001)(66556008)(66446008)(8936002)(6512007)(6486002)(66476007)(2906002)(36756003)(478600001)(91956017)(6916009)(4326008)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?utf-8?B?QUZrKytTNFRyZklGcFVGNE94OWp6SGFGNGIwT25lc0VNUUFRNmNLbkhFMTBJ?=
 =?utf-8?B?YU1VWTcrKzkxZTJxdnovS3R1Unl0bGhmNGM1U2VJdG1URDFMQy9rUmwybUtr?=
 =?utf-8?B?bE04Z1Y2VGlrNTFLYU13cHFBU2xRdGVrcUFoeEFzbmlBeGozU3RuWWV6Ny9I?=
 =?utf-8?B?Z2dUd2hUNGM3STVVQkZRYlJmRmFoRGpPcVJtd1g4UnAxSndXUXlJeksxM3cr?=
 =?utf-8?B?VXQrNUg1TWJ2VlAxcVcvWEN3Q0h4QnpRMkwvVnlVZG4xNkhKa054OTdDYWdv?=
 =?utf-8?B?cThlVjhtZWIrMXNvRmtIcFN6T0ZhNTQyL2xIaHlpcWh2dU5UQkVvZk45bHZB?=
 =?utf-8?B?dWNaRnhza3JjaXpuQlBKT3BKZ3hDVkY2UGtnMkxVUUg2c0FPc3l0VHF3UWhD?=
 =?utf-8?B?ZmFsRFZNZ3o0V1JuL2FONzdBTzczc0E3WVlZWFdJZmVwV29uYUNxNis1ZEZG?=
 =?utf-8?B?YTR4Q3VJSGpxUTdyb2cvUUdncE45NkQ4ZS9WRXpadjFINjlHeis4cVFMNlJU?=
 =?utf-8?B?VDM0NlhDYzRiZFhIQlZEN0NNZGpjUlZtKzVlYUMzTUtlQlgzYm9YVnBrZ3pG?=
 =?utf-8?B?WklESm5WS0ZGYlJ4Qm82ZTJDcUR2NE5CSER4M2RnZkxtRU14bm84Q2tEKzFx?=
 =?utf-8?B?YWVJVkE4THhidU5DNHIrMDJVT2dYWXp6NVFoWkFOZFpYaDF4L0hCdWNSM1dG?=
 =?utf-8?B?K1JWSHRza2VWSlVyTDFJYmRWNTdCWkdkTmZ6OWkzNGluczJYZlpjcHR2MExp?=
 =?utf-8?B?OTdrTklTZm1VMzc5RE9sNVNiZS9BOTdySUsvU210Q1djU0x1TWNaUTVnRjhK?=
 =?utf-8?B?dGdyMGRlRTkxRHBhTFA3RnladWZ1WmhFelNUaGM3VStLSCtvTEpLblNTU0Ra?=
 =?utf-8?B?YXRGdXFDMDVBVnQvc0k2WXNrMEcrL2NlbFpUYUl4R0JWSmphZEJybWZOd0Nt?=
 =?utf-8?B?b0ozb1JYOU9iL2k5L0JpMTFOOFVIUEZQODJ1b1kzV0lHTFE5ZGZlSTBaYU02?=
 =?utf-8?B?c0xzUW5XdzV3TlFlNHpiSnNjaHdYWTF3RnVHclEvdUFhVzhzdHB5RGJjdnkz?=
 =?utf-8?B?bHZqTlh6S1Z1M25hV2VlMmxEd1Q3K3hyUzhZRFRGWHJwQi85K1VXcG1rL1Ro?=
 =?utf-8?B?Qll4UFhaY2xacTE1NEpETCtESUVjYVRjQ3FFVzhFODlIRm9RUXlDb0xMSlQv?=
 =?utf-8?B?Rnk1azlGY0QraEwyY1pnOHpDSU5OTWY5Nnp2Sy94ZWZLQmx4eVFOUW1SMmlY?=
 =?utf-8?B?TVJoVmNFUDlsOEdhdFluL0ZkZUh2NTFyNlc2ZnNaYXFJcWc3aFMxYVBFemlC?=
 =?utf-8?Q?hpn5Ub3BIgWhz4LSQ3JcEitwufy53BgHuE?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <863532BA7DCC524DB7EAC5621CB8B76C@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB3993
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT053.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	cbb6a300-fae2-4ce6-fc2c-08d89d29ed60
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	XPbz0/p8KPxtddnkrOCacDZ/W0y5hmnszZqokeDLkJVlB98HFD9ufPAjshRqbtKS2JE1yADFmhLIq05w0y4xpWzPs0Ty7YsyQ2QlONuT/lQnsvjV+cunVjBgfNStUrEd26i/pLNuorlrW8xd426NweeJKdrk/gpFcIGTdHAcjeVFBmWvhWTBSno3VzHlS9VX+C0UcEn6NyCQUf42GGabvCoxn4m0b3a+tdpwAdb44YjQOAqDOqX8NovSMAVksDqcUck3Uybo27Zz7uMjRlKr/ebfATYePLNLRWcqq04MmRS7j6/o44kBqfKLF7TzVSnZyoGQ7pVj6F3abOzm3q12ZKCDretx4Url2jf8qbK/T/x4hg+/9ziubi2cldbXln+ldds9ObuUXmuGbHK7A9Q8tg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39860400002)(136003)(346002)(396003)(376002)(46966005)(33656002)(8676002)(8936002)(4326008)(54906003)(5660300002)(70206006)(70586007)(107886003)(6506007)(36756003)(2906002)(478600001)(83380400001)(186003)(82310400003)(6486002)(36906005)(6862004)(316002)(26005)(82740400003)(6512007)(2616005)(47076004)(336012)(53546011)(356005)(81166007)(86362001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Dec 2020 16:38:04.7235
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f17724fb-b2f1-404c-edc4-08d89d29f7b4
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT053.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB4081

SGkgSnVsaWVuLA0KDQo+IE9uIDEwIERlYyAyMDIwLCBhdCAxNjozMCwgSnVsaWVuIEdyYWxsIDxq
dWxpZW5AeGVuLm9yZz4gd3JvdGU6DQo+IA0KPiANCj4gDQo+IE9uIDEwLzEyLzIwMjAgMTY6MTcs
IEJlcnRyYW5kIE1hcnF1aXMgd3JvdGU6DQo+PiBIaSBKdWxpZW4sDQo+IA0KPiBIaSBCZXJ0cmFu
ZCwNCj4gDQo+Pj4gT24gMTAgRGVjIDIwMjAsIGF0IDE2OjA1LCBKdWxpZW4gR3JhbGwgPGp1bGll
bkB4ZW4ub3JnPiB3cm90ZToNCj4+PiANCj4+PiANCj4+PiANCj4+PiBPbiAxMC8xMi8yMDIwIDE1
OjQ4LCBCZXJ0cmFuZCBNYXJxdWlzIHdyb3RlOg0KPj4+PiBIaSBKdWxpZW4sDQo+Pj4+PiBPbiA5
IERlYyAyMDIwLCBhdCAyMzowOSwgSnVsaWVuIEdyYWxsIDxqdWxpZW5AeGVuLm9yZz4gd3JvdGU6
DQo+Pj4+PiANCj4+Pj4+IEhpIEJlcnRhbmQsDQo+Pj4+PiANCj4+Pj4+IE9uIDA5LzEyLzIwMjAg
MTY6MzAsIEJlcnRyYW5kIE1hcnF1aXMgd3JvdGU6DQo+Pj4+Pj4gQ3JlYXRlIGEgY3B1aW5mbyBz
dHJ1Y3R1cmUgZm9yIGd1ZXN0IGFuZCBtYXNrIGludG8gaXQgdGhlIGZlYXR1cmVzIHRoYXQNCj4+
Pj4+PiB3ZSBkbyBub3Qgc3VwcG9ydCBpbiBYZW4gb3IgdGhhdCB3ZSBkbyBub3Qgd2FudCB0byBw
dWJsaXNoIHRvIGd1ZXN0cy4NCj4+Pj4+PiBNb2RpZnkgc29tZSB2YWx1ZXMgaW4gdGhlIGNwdWlu
Zm8gc3RydWN0dXJlIGZvciBndWVzdHMgdG8gbWFzayBzb21lDQo+Pj4+Pj4gZmVhdHVyZXMgd2hp
Y2ggd2UgZG8gbm90IHdhbnQgdG8gYWxsb3cgdG8gZ3Vlc3RzIChsaWtlIEFNVSkgb3Igd2UgZG8g
bm90DQo+Pj4+Pj4gc3VwcG9ydCAobGlrZSBTVkUpLg0KPj4+Pj4+IFRoZSBjb2RlIGlzIHRyeWlu
ZyB0byBncm91cCB0b2dldGhlciByZWdpc3RlcnMgbW9kaWZpY2F0aW9ucyBmb3IgdGhlDQo+Pj4+
Pj4gc2FtZSBmZWF0dXJlIHRvIGJlIGFibGUgaW4gdGhlIGxvbmcgdGVybSB0byBlYXNpbHkgZW5h
YmxlL2Rpc2FibGUgYQ0KPj4+Pj4+IGZlYXR1cmUgZGVwZW5kaW5nIG9uIHVzZXIgcGFyYW1ldGVy
cyBvciBhZGQgb3RoZXIgcmVnaXN0ZXJzIG1vZGlmaWNhdGlvbg0KPj4+Pj4+IGluIHRoZSBzYW1l
IHBsYWNlIChsaWtlIGVuYWJsaW5nL2Rpc2FibGluZyBIQ1IgYml0cykuDQo+Pj4+Pj4gU2lnbmVk
LW9mZi1ieTogQmVydHJhbmQgTWFycXVpcyA8YmVydHJhbmQubWFycXVpc0Bhcm0uY29tPg0KPj4+
Pj4+IC0tLQ0KPj4+Pj4+IENoYW5nZXMgaW4gVjI6IFJlYmFzZQ0KPj4+Pj4+IENoYW5nZXMgaW4g
VjM6DQo+Pj4+Pj4gICBVc2UgY3VycmVudF9jcHVfZGF0YSBpbmZvIGluc3RlYWQgb2YgcmVjYWxs
aW5nIGlkZW50aWZ5X2NwdQ0KPj4+Pj4+IC0tLQ0KPj4+Pj4+ICB4ZW4vYXJjaC9hcm0vY3B1ZmVh
dHVyZS5jICAgICAgICB8IDUxICsrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrDQo+Pj4+
Pj4gIHhlbi9pbmNsdWRlL2FzbS1hcm0vY3B1ZmVhdHVyZS5oIHwgIDIgKysNCj4+Pj4+PiAgMiBm
aWxlcyBjaGFuZ2VkLCA1MyBpbnNlcnRpb25zKCspDQo+Pj4+Pj4gZGlmZiAtLWdpdCBhL3hlbi9h
cmNoL2FybS9jcHVmZWF0dXJlLmMgYi94ZW4vYXJjaC9hcm0vY3B1ZmVhdHVyZS5jDQo+Pj4+Pj4g
aW5kZXggYmM3ZWU1YWM5NS4uNzI1NTM4MzUwNCAxMDA2NDQNCj4+Pj4+PiAtLS0gYS94ZW4vYXJj
aC9hcm0vY3B1ZmVhdHVyZS5jDQo+Pj4+Pj4gKysrIGIveGVuL2FyY2gvYXJtL2NwdWZlYXR1cmUu
Yw0KPj4+Pj4+IEBAIC0yNCw2ICsyNCw4IEBADQo+Pj4+Pj4gICAgREVDTEFSRV9CSVRNQVAoY3B1
X2h3Y2FwcywgQVJNX05DQVBTKTsNCj4+Pj4+PiAgK3N0cnVjdCBjcHVpbmZvX2FybSBfX3JlYWRf
bW9zdGx5IGd1ZXN0X2NwdWluZm87DQo+Pj4+Pj4gKw0KPj4+Pj4+ICB2b2lkIHVwZGF0ZV9jcHVf
Y2FwYWJpbGl0aWVzKGNvbnN0IHN0cnVjdCBhcm1fY3B1X2NhcGFiaWxpdGllcyAqY2FwcywNCj4+
Pj4+PiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBjb25zdCBjaGFyICppbmZvKQ0KPj4+
Pj4+ICB7DQo+Pj4+Pj4gQEAgLTE1Nyw2ICsxNTksNTUgQEAgdm9pZCBpZGVudGlmeV9jcHUoc3Ry
dWN0IGNwdWluZm9fYXJtICpjKQ0KPj4+Pj4+ICAjZW5kaWYNCj4+Pj4+PiAgfQ0KPj4+Pj4+ICAr
LyoNCj4+Pj4+PiArICogVGhpcyBmdW5jdGlvbiBpcyBjcmVhdGluZyBhIGNwdWluZm8gc3RydWN0
dXJlIHdpdGggdmFsdWVzIG1vZGlmaWVkIHRvIG1hc2sNCj4+Pj4+PiArICogYWxsIGNwdSBmZWF0
dXJlcyB0aGF0IHNob3VsZCBub3QgYmUgcHVibGlzaGVkIHRvIGd1ZXN0Lg0KPj4+Pj4+ICsgKiBU
aGUgY3JlYXRlZCBzdHJ1Y3R1cmUgaXMgdGhlbiB1c2VkIHRvIHByb3ZpZGUgSUQgcmVnaXN0ZXJz
IHZhbHVlcyB0byBndWVzdHMuDQo+Pj4+Pj4gKyAqLw0KPj4+Pj4+ICtzdGF0aWMgaW50IF9faW5p
dCBjcmVhdGVfZ3Vlc3RfY3B1aW5mbyh2b2lkKQ0KPj4+Pj4+ICt7DQo+Pj4+Pj4gKyAgICAvKg0K
Pj4+Pj4+ICsgICAgICogVE9ETzogVGhlIGNvZGUgaXMgY3VycmVudGx5IHVzaW5nIG9ubHkgdGhl
IGZlYXR1cmVzIGRldGVjdGVkIG9uIHRoZSBib290DQo+Pj4+Pj4gKyAgICAgKiBjb3JlLiBJbiB0
aGUgbG9uZyB0ZXJtIHdlIHNob3VsZCB0cnkgdG8gY29tcHV0ZSB2YWx1ZXMgY29udGFpbmluZyBv
bmx5DQo+Pj4+Pj4gKyAgICAgKiBmZWF0dXJlcyBzdXBwb3J0ZWQgYnkgYWxsIGNvcmVzLg0KPj4+
Pj4+ICsgICAgICovDQo+Pj4+Pj4gKyAgICBndWVzdF9jcHVpbmZvID0gY3VycmVudF9jcHVfZGF0
YTsNCj4+Pj4+IA0KPj4+Pj4gSXQgd291bGQgYmUgbW9yZSBsb2dpY2FsIHRvIHVzZSBib290X2Nw
dV9kYXRhIGFzIHRoaXMgd291bGQgYmUgZWFzaWVyIHRvIG1hdGNoIHdpdGggeW91ciBjb21tZW50
Lg0KPj4+PiBBZ3JlZSwgSSB3aWxsIGZpeCB0aGF0IGluIFY0Lg0KPj4+Pj4gDQo+Pj4+Pj4gKw0K
Pj4+Pj4+ICsjaWZkZWYgQ09ORklHX0FSTV82NA0KPj4+Pj4+ICsgICAgLyogRGlzYWJsZSBNUEFN
IGFzIHhlbiBkb2VzIG5vdCBzdXBwb3J0IGl0ICovDQo+Pj4+Pj4gKyAgICBndWVzdF9jcHVpbmZv
LnBmcjY0Lm1wYW0gPSAwOw0KPj4+Pj4+ICsgICAgZ3Vlc3RfY3B1aW5mby5wZnI2NC5tcGFtX2Zy
YWMgPSAwOw0KPj4+Pj4+ICsNCj4+Pj4+PiArICAgIC8qIERpc2FibGUgU1ZFIGFzIFhlbiBkb2Vz
IG5vdCBzdXBwb3J0IGl0ICovDQo+Pj4+Pj4gKyAgICBndWVzdF9jcHVpbmZvLnBmcjY0LnN2ZSA9
IDA7DQo+Pj4+Pj4gKyAgICBndWVzdF9jcHVpbmZvLnpmcjY0LmJpdHNbMF0gPSAwOw0KPj4+Pj4+
ICsNCj4+Pj4+PiArICAgIC8qIERpc2FibGUgTVRFIGFzIFhlbiBkb2VzIG5vdCBzdXBwb3J0IGl0
ICovDQo+Pj4+Pj4gKyAgICBndWVzdF9jcHVpbmZvLnBmcjY0Lm10ZSA9IDA7DQo+Pj4+Pj4gKyNl
bmRpZg0KPj4+Pj4+ICsNCj4+Pj4+PiArICAgIC8qIERpc2FibGUgQU1VICovDQo+Pj4+Pj4gKyNp
ZmRlZiBDT05GSUdfQVJNXzY0DQo+Pj4+Pj4gKyAgICBndWVzdF9jcHVpbmZvLnBmcjY0LmFtdSA9
IDA7DQo+Pj4+Pj4gKyNlbmRpZg0KPj4+Pj4+ICsgICAgZ3Vlc3RfY3B1aW5mby5wZnIzMi5hbXUg
PSAwOw0KPj4+Pj4+ICsNCj4+Pj4+PiArICAgIC8qIERpc2FibGUgUkFTIGFzIFhlbiBkb2VzIG5v
dCBzdXBwb3J0IGl0ICovDQo+Pj4+Pj4gKyNpZmRlZiBDT05GSUdfQVJNXzY0DQo+Pj4+Pj4gKyAg
ICBndWVzdF9jcHVpbmZvLnBmcjY0LnJhcyA9IDA7DQo+Pj4+Pj4gKyAgICBndWVzdF9jcHVpbmZv
LnBmcjY0LnJhc19mcmFjID0gMDsNCj4+Pj4+PiArI2VuZGlmDQo+Pj4+Pj4gKyAgICBndWVzdF9j
cHVpbmZvLnBmcjMyLnJhcyA9IDA7DQo+Pj4+Pj4gKyAgICBndWVzdF9jcHVpbmZvLnBmcjMyLnJh
c19mcmFjID0gMDsNCj4+Pj4+IA0KPj4+Pj4gSG93IGFib3V0IGFsbCB0aGUgZmllbGRzIHRoYXQg
YXJlIGN1cnJlbnRseSBtYXJrZWQgYXMgUkVTMC9SRVMxPyBTaG91bGRuJ3Qgd2UgbWFrZSBzdXJl
IHRoZXkgd2lsbCBzdGF5IGxpa2UgdGhhdCBldmVuIGlmIG5ld2VyIGFyY2hpdGVjdHVyZSB1c2Ug
dGhlbT8NCj4+Pj4gRGVmaW5pdGVseSB3ZSBjYW4gZG8gbW9yZSB0aGVuIHRoaXMgaGVyZSAoaW5j
bHVkaW5nIGFsbG93aW5nIHRvIGVuYWJsZSBzb21lIHRoaW5ncyBmb3IgZG9tMCBvciBmb3IgdGVz
dCByZWFzb25zKS4NCj4+Pj4gVGhpcyBpcyBhIGZpcnN0IHRyeSB0byBzb2x2ZSBjdXJyZW50IGlz
c3VlcyB3aXRoIE1QQU0gYW5kIFNWRSBhbmQgSSBwbGFuIHRvIGNvbnRpbnVlIHRvIGVuaGFuY2Ug
dGhpcyBpbiB0aGUgZnV0dXJlDQo+Pj4+IHRvIGVuYWJsZSBtb3JlIGN1c3RvbWlzYXRpb24gaGVy
ZS4NCj4+Pj4gSSBkbyB0aGluayB3ZSBjb3VsZCBkbyBhIGJpdCBtb3JlIGhlcmUgdG8gaGF2ZSBz
b21lIGZlYXR1cmVzIGNvbnRyb2xsZWQgYnkgdGhlIHVzZXIgYnV0IHRoaXMgd2lsbCBuZWVkIGEg
Yml0IG9mDQo+Pj4+IGRpc2N1c3Npb24gdG8gYWdyZWUgb24gYSBzdHJhdGVneS4NCj4+PiANCj4+
PiBJIHRoaW5rIHlvdSBtaXN1bmRlcnN0b29kIG15IGNvbW1lbnQuIEkgYW0gbm90IGFza2luZyB3
aGV0aGVyIHdlIHdhbnQgdG8gY3VzdG9taXplIHRoZSB2YWx1ZSBwZXIgZG9tYWluLiBJbnN0ZWFk
LCBJIGFtIHJhaXNpbmcgcXVlc3Rpb25zIGZvciB0aGUgc3RyYXRlZ3kgdGFrZW4gaW4gdGhpcyBw
YXRjaC4NCj4+PiANCj4+PiBJIGFtIGdvaW5nIHRvIGxlYXZlIHRoZSBzYWZldHkgYXNpZGUsIGJl
Y2F1c2UgSSB0aGluayB0aGlzIGlzIG9ydGhvZ29uYWwgdG8gdGhpcyBwYXRjaC4NCj4+PiANCj4+
PiBUaGlzIHBhdGNoIGlzIGludHJvZHVjaW5nIGEgZGVueSBsaXN0LiBUaGlzIG1lYW5zIHRoYXQg
YWxsIHRoZSBmZWF0dXJlcyB3aWxsIGJlIGV4cG9zZWQgdG8gYSBkb21haW4gdW5sZXNzIHNvbWVv
bmUgZGV0ZXJtaW5lIHRoYXQgdGhpcyBpcyBub3QNCj4+PiBzdXBwb3J0ZWQgYnkgWGVuLg0KPj4+
IA0KPj4+IFRoaXMgbWVhbnMgd2Ugd2lsbCBhbHdheXMgdHJ5IHRvIGNhdGNoIHVwIHdpdGggd2hh
dCBBcm0gZGVjaWRlZCB0byBpbnZlbnQgYW5kIGF0dGVtcHQgdG8gZml4IGl0IGJlZm9yZSB0aGUg
c2lsaWNvbiBpcyBvdXQuDQo+Pj4gDQo+Pj4gSW5zdGVhZCwgSSB0aGluayBpdCB3b3VsZCBiZSBi
ZXR0ZXIgdG8gdXNlIGFuIGFsbG93IGxpc3QuIFdlIHNob3VsZCBvbmx5IGV4cG9zZSBmZWF0dXJl
s to the guest we know works (this could possibly be just the Armv8.0 one).
>> I understood that and I fully agree that this is what we should do: only expose what we support and know, and leave everything else "disabled".
>> And I definitely want to do that, and I think with this series we have the required support to do that; we will need to rework how we initialise the
>> guest_cpuinfo structure.
>> I just want to leave this discussion for later so that we can at least right now have a current Linux booting without the need to modify the Linux
>> kernel binary to remove things like SVE.
> 
> Ok. So this patch is more to fill the gap rather than the final solution. This should be clarified in the commit message.

I can add that in the commit message.

> 
> Although, it is still unclear to me why this can't be an allowlist from the start. As you said, the infrastructure is already there. So it would be a matter of copying bits we know work with Xen (rather than clobbering).

The analysis to do that properly might require more work, as we should start from everything off and then go one by one, making sure we not only do that here but also in the HCR register bits.

Here we are just explicitly "hiding" features, which is somewhat easy to review and check.

I will not have time to investigate deeper than what I did already before the next Xen release.
So the best I can do is change this patch to not modify anything in the guest_cpuinfo, so that someone else can do that work on top of this series.
I have planned some time to continue this work, but not before March.

Regards
Bertrand

> 
> Cheers,
> 
> -- 
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 16:40:51 2020
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH] xen/arm: Add workaround for Cortex-A53 erratum #843419
Thread-Topic: [PATCH] xen/arm: Add workaround for Cortex-A53 erratum #843419
Thread-Index: AQHWzuFTbv9qCLSEMEiMd7L0GnVOBKnwiIyA
Date: Thu, 10 Dec 2020 16:40:24 +0000
Message-ID: <0D0CD227-43F4-449D-A7E2-E00C6442F19F@arm.com>
References: <20201210104258.111-1-luca.fancellu@arm.com>
In-Reply-To: <20201210104258.111-1-luca.fancellu@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
Content-ID: <D004B980C62DC849B638A3B9F0B83B4F@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Hi Luca,

> On 10 Dec 2020, at 10:42, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
> 
> On the Cortex-A53, when executing in AArch64 state, a load or store
> instruction which uses the result of an ADRP instruction as a base register,
> or which uses a base register written by an instruction immediately after an
> ADRP to the same register, might access an incorrect address.
> 
> The workaround is to enable the linker flag --fix-cortex-a53-843419,
> if present, to check and fix the affected sequence. Otherwise, print a
> warning that Xen may be susceptible to this erratum.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Thanks
Cheers
Bertrand

> ---
> docs/misc/arm/silicon-errata.txt |  1 +
> xen/arch/arm/Kconfig             | 19 +++++++++++++++++++
> xen/arch/arm/Makefile            |  8 ++++++++
> xen/scripts/Kbuild.include       | 12 ++++++++++++
> 4 files changed, 40 insertions(+)
> 
> diff --git a/docs/misc/arm/silicon-errata.txt b/docs/misc/arm/silicon-errata.txt
> index 27bf957ebf..1925d8fd4e 100644
> --- a/docs/misc/arm/silicon-errata.txt
> +++ b/docs/misc/arm/silicon-errata.txt
> @@ -45,6 +45,7 @@ stable hypervisors.
> | ARM            | Cortex-A53      | #827319         | ARM64_ERRATUM_827319    |
> | ARM            | Cortex-A53      | #824069         | ARM64_ERRATUM_824069    |
> | ARM            | Cortex-A53      | #819472         | ARM64_ERRATUM_819472    |
> +| ARM            | Cortex-A53      | #843419         | ARM64_ERRATUM_843419    |
> | ARM            | Cortex-A55      | #1530923        | N/A                     |
> | ARM            | Cortex-A57      | #852523         | N/A                     |
> | ARM            | Cortex-A57      | #832075         | ARM64_ERRATUM_832075    |
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index f5b1bcda03..41bde2f401 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -186,6 +186,25 @@ config ARM64_ERRATUM_819472
> 
> 	  If unsure, say Y.
> 
> +config ARM64_ERRATUM_843419
> +	bool "Cortex-A53: 843419: A load or store might access an incorrect address"
> +	default y
> +	depends on ARM_64
> +	help
> +	  This option adds an alternative code sequence to work around ARM
> +	  erratum 843419 on Cortex-A53 parts up to r0p4.
> +
> +	  When executing in AArch64 state, a load or store instruction which uses
> +	  the result of an ADRP instruction as a base register, or which uses a
> +	  base register written by an instruction immediately after an ADRP to the
> +	  same register, might access an incorrect address.
> +
> +	  The workaround enables the linker to check whether the affected sequence
> +	  is produced and, if so, to replace it with an alternative, unaffected
> +	  sequence that produces the same behaviour.
> +
> +	  If unsure, say Y.
> +
> config ARM64_ERRATUM_832075
> 	bool "Cortex-A57: 832075: possible deadlock on mixing exclusive memory accesses with device loads"
> 	default y
> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> index 296c5e68bb..ad2d497c45 100644
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -101,6 +101,14 @@ prelink.o: $(ALL_OBJS) FORCE
> 	$(call if_changed,ld)
> endif
> 
> +ifeq ($(CONFIG_ARM64_ERRATUM_843419),y)
> +    ifeq ($(call ld-option, --fix-cortex-a53-843419),n)
> +        $(warning ld does not support --fix-cortex-a53-843419; xen may be susceptible to erratum)
> +    else
> +        XEN_LDFLAGS += --fix-cortex-a53-843419
> +    endif
> +endif
> +
> targets += prelink.o
> 
> $(TARGET)-syms: prelink.o xen.lds
> diff --git a/xen/scripts/Kbuild.include b/xen/scripts/Kbuild.include
> index e62eddc365..83c7e1457b 100644
> --- a/xen/scripts/Kbuild.include
> +++ b/xen/scripts/Kbuild.include
> @@ -43,6 +43,18 @@ define as-option-add-closure
>     endif
> endef
> 
> +# $(if-success,<command>,<then>,<else>)
> +# Return <then> if <command> exits with 0, <else> otherwise.
> +if-success = $(shell { $(1); } >/dev/null 2>&1 && echo "$(2)" || echo "$(3)")
> +
> +# $(success,<command>)
> +# Return y if <command> exits with 0, n otherwise
> +success = $(call if-success,$(1),y,n)
> +
> +# $(ld-option,<flag>)
> +# Return y if the linker supports <flag>, n otherwise
> +ld-option = $(call success,$(LD) -v $(1))
> +
> # cc-ifversion
> # Usage:  EXTRA_CFLAGS += $(call cc-ifversion, -lt, 0402, -O1)
> cc-ifversion = $(shell [ $(CONFIG_GCC_VERSION)0 $(1) $(2)000 ] && echo $(3) || echo $(4))
> -- 
> 2.17.1
> 



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 16:57:50 2020
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v3 0/8] xen/arm: Add support for SMMUv3 driver
Date: Thu, 10 Dec 2020 16:56:58 +0000
Message-Id: <cover.1607617848.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch series is v3 of the work to add support for the SMMUv3 driver.

The approach taken is to first merge the Linux copy of the SMMUv3 driver
(tag v5.8.18) and then modify the driver to build on Xen.

MSI and PCI ATS functionality is not supported: the corresponding code is
guarded by the CONFIG_PCI_ATS and CONFIG_MSI flags and is neither compiled
nor tested.

Code specific to Linux is removed from the driver to avoid dead code.

The driver is currently supported as a tech preview.

The following functionality should be supported before the driver moves out
of tech preview:
1. Investigate the timing impact of using a spin lock in place of a mutex
   when attaching a device to the SMMU.
2. Merge the latest Linux SMMUv3 driver code once atomic operations are
   available in Xen.
3. Support PCI ATS and MSI interrupts.
4. Investigate side effects of using a tasklet in place of a threaded IRQ
   and fix any that are found.
5. Support the fallthrough keyword.
6. Implement the ffsll function in bitops.h.

Rahul Singh (8):
  xen/arm: Import the SMMUv3 driver from Linux
  xen/arm: revert atomic operation related command-queue insertion patch
  xen/arm: revert patch related to XArray
  xen/arm: Remove support for Stage-1 translation on SMMUv3.
  xen/device-tree: Add dt_property_match_string helper
  xen/arm: Remove Linux specific code that is not usable in XEN
  xen/arm: Add support for SMMUv3 driver
  xen/arm: smmuv3: Remove linux compatibility functions.

 MAINTAINERS                           |    6 +
 SUPPORT.md                            |    1 +
 xen/common/device_tree.c              |   27 +
 xen/drivers/passthrough/Kconfig       |   11 +
 xen/drivers/passthrough/arm/Makefile  |    1 +
 xen/drivers/passthrough/arm/smmu-v3.c | 3316 +++++++++++++++++++++++++
 xen/include/xen/device_tree.h         |   12 +
 7 files changed, 3374 insertions(+)
 create mode 100644 xen/drivers/passthrough/arm/smmu-v3.c

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 16:57:50 2020
Subject: Re: [PATCH V3 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
To: paul@xen.org
Cc: 'Jan Beulich' <jbeulich@suse.com>,
 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Ian Jackson' <iwj@xenproject.org>, 'Wei Liu' <wl@xen.org>,
 'Julien Grall' <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-18-git-send-email-olekstysh@gmail.com>
 <3bb4c3b5-a46a-ba31-292f-5c6ba49fa9be@suse.com>
 <6026b7f3-ae6e-f98f-be65-27d7f729a37f@gmail.com>
 <18bfd9b1-3e6a-8119-efd0-c82ad7ae681d@gmail.com>
 <0d6c01d6cd9a$666326c0$33297440$@xen.org>
 <57bfc007-e400-6777-0075-827daa8acf0e@gmail.com>
 <0d7201d6ce09$e13dce80$a3b96b80$@xen.org>
 <96b9b843-f4fe-834a-f17b-d75198aa0dab@gmail.com>
 <002401d6cecf$ce472710$6ad57530$@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <1c012758-7121-5702-038e-4510f3cf1b0b@gmail.com>
Date: Thu, 10 Dec 2020 18:57:44 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <002401d6cecf$ce472710$6ad57530$@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 10.12.20 10:38, Paul Durrant wrote:

Hi Paul.

>> -----Original Message-----
>> From: Oleksandr <olekstysh@gmail.com>
>> Sent: 09 December 2020 20:36
>> To: paul@xen.org
>> Cc: 'Jan Beulich' <jbeulich@suse.com>; 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>;
>> 'Stefano Stabellini' <sstabellini@kernel.org>; 'Julien Grall' <julien@xen.org>; 'Volodymyr Babchuk'
>> <Volodymyr_Babchuk@epam.com>; 'Andrew Cooper' <andrew.cooper3@citrix.com>; 'George Dunlap'
>> <george.dunlap@citrix.com>; 'Ian Jackson' <iwj@xenproject.org>; 'Wei Liu' <wl@xen.org>; 'Julien Grall'
>> <julien.grall@arm.com>; xen-devel@lists.xenproject.org
>> Subject: Re: [PATCH V3 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
>>
>>
>> Hi Paul.
>>
>>
>>>>>>>> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>>>>>>>>> --- a/xen/include/xen/ioreq.h
>>>>>>>>> +++ b/xen/include/xen/ioreq.h
>>>>>>>>> @@ -55,6 +55,20 @@ struct ioreq_server {
>>>>>>>>>          uint8_t                bufioreq_handling;
>>>>>>>>>      };
>>>>>>>>>      +/*
>>>>>>>>> + * This should only be used when d == current->domain and it's not
>>>>>>>>> paused,
>>>>>>>> Is the "not paused" part really relevant here? Besides it being rare
>>>>>>>> that the current domain would be paused (if so, it's in the process
>>>>>>>> of having all its vCPU-s scheduled out), does this matter at all?
>>>>>>> No, it isn't relevant, I will drop it.
>>>>>>>
>>>>>>>
>>>>>>>> Apart from this the patch looks okay to me, but I'm not sure it
>>>>>>>> addresses Paul's concerns. Iirc he had suggested to switch back to
>>>>>>>> a list if doing a swipe over the entire array is too expensive in
>>>>>>>> this specific case.
>>>>>>> We would like to avoid doing any extra actions in
>>>>>>> leave_hypervisor_to_guest() if possible.
>>>>>>> And not only there: the logic that checks/sets the
>>>>>>> mapcache_invalidation variable could also be avoided if a domain
>>>>>>> doesn't use an IOREQ server...
>>>>>> Are you OK with this patch (common part of it)?
>>>>> How much of a performance benefit is this? The array is small, so simply
>>>>> counting the non-NULL entries should be pretty quick.
>>>> I didn't measure how much this call costs.
>>>> In our system we run three domains. The emulator is in DomD only, so I
>>>> would like to avoid calling vcpu_ioreq_handle_completion() for every
>>>> Dom0/DomU vCPU if there is no real need to do it.
>>> This is not relevant to the domain that the emulator is running in; it
>>> concerns the domains which the emulator is servicing. How many of those
>>> are there?
>> Err, yes, I wasn't precise when providing an example.
>> A single emulator is running in DomD and servicing DomU. So with the
>> helper in place, vcpu_ioreq_handle_completion() gets called only for
>> DomU vCPUs (as expected).
>> Without the optimization, vcpu_ioreq_handle_completion() gets called
>> for _all_ vCPUs, and I see that as an extra action for Dom0 and DomD vCPUs.
>>
>>
>>>> On Arm, vcpu_ioreq_handle_completion() is called with IRQs enabled,
>>>> so the call is accompanied by the corresponding irq_enable/irq_disable.
>>>> These unneeded actions could be avoided by using this simple one-line
>>>> helper...
>>>>
>>> The helper may be one line but there is more to the patch than that. I
>>> still think you could just walk the array in the helper rather than
>>> keeping a running occupancy count.
>>
>> OK, is the implementation below close to what you propose? If yes, I
>> will update the helper and drop the nr_servers variable.
>>
>> bool domain_has_ioreq_server(const struct domain *d)
>> {
>>       const struct ioreq_server *s;
>>       unsigned int id;
>>
>>       FOR_EACH_IOREQ_SERVER(d, id, s)
>>           return true;
>>
>>       return false;
>> }
> Yes, that's what I had in mind.
>
>    Paul

Thank you for the clarification.
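
For reference, the agreed pattern (bail out at the first live entry rather than maintaining a running occupancy count) can be sketched outside of Xen as below. The array size, struct layout, and plain loop are illustrative stand-ins; in Xen the FOR_EACH_IOREQ_SERVER macro hides exactly this kind of walk over the domain's server slots:

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_NR_IOREQ_SERVERS 8 /* illustrative stand-in, not Xen's value */

struct ioreq_server { int id; };

struct domain {
    /* sparse array of server slots, mostly NULL */
    struct ioreq_server *servers[MAX_NR_IOREQ_SERVERS];
};

/*
 * Bail out at the first occupied slot: the walk is bounded by the
 * (small) array size, so no running occupancy counter is needed.
 */
static bool domain_has_ioreq_server(const struct domain *d)
{
    for (size_t id = 0; id < MAX_NR_IOREQ_SERVERS; id++)
        if (d->servers[id])
            return true;

    return false;
}
```

The early return means the cost is a handful of pointer loads in the worst case, which is why walking the array was preferred over keeping a separate nr_servers counter in sync.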


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 16:58:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 16:58:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49563.87667 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knPGi-0005wR-BL; Thu, 10 Dec 2020 16:58:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49563.87667; Thu, 10 Dec 2020 16:58:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knPGi-0005wK-86; Thu, 10 Dec 2020 16:58:16 +0000
Received: by outflank-mailman (input) for mailman id 49563;
 Thu, 10 Dec 2020 16:58:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BLK9=FO=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1knPGh-0005uj-6z
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 16:58:15 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 51ae45c1-93b5-400c-9980-b85d146fe2ee;
 Thu, 10 Dec 2020 16:58:04 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id BF4FA30E;
 Thu, 10 Dec 2020 08:58:03 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 6D01B3F66B;
 Thu, 10 Dec 2020 08:58:02 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 51ae45c1-93b5-400c-9980-b85d146fe2ee
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v3 1/8] xen/arm: Import the SMMUv3 driver from Linux
Date: Thu, 10 Dec 2020 16:56:59 +0000
Message-Id: <db84c98cf5f59757d6e225e9aeae62d2757ea646.1607617848.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1607617848.git.rahul.singh@arm.com>
References: <cover.1607617848.git.rahul.singh@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Based on Linux tag 5.8.18, commit ab435ce49bd1d02e33dfec24f76955dc1196970b.

The directory structure for the SMMUv3 driver changed starting from
Linux 5.9; to be able to revert patches smoothly using the "git revert"
command, we decided to base the import on Linux 5.8.18.

The only difference between the latest stable Linux 5.9.12 and the
Linux 5.8.18 SMMUv3 driver is the use of the "fallthrough" keyword.
This patch will be merged once a "fallthrough" keyword implementation
is available in Xen.

This is a verbatim copy of the Linux SMMUv3 driver. Xen-specific code
has not been added yet and the code has not been compiled.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
Changes in v3:
- Import the driver from Linux 5.8.18, whereas the previous version
  used Linux 5.9.8. Linux 5.8.18 has been chosen so the required
  changes can be reverted smoothly, since the directory structure for
  the SMMUv3 driver changed starting from 5.9. The only difference
  between Linux 5.8.18 and Linux 5.9.8 is the use of the fallthrough
  keyword.
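
For context, the "fallthrough" keyword referred to above is, in Linux, a macro expanding to the compiler's fallthrough statement attribute. A minimal sketch of the idiom follows; the macro definition and compiler-version fallback here are illustrative, not Xen's eventual implementation:

```c
/*
 * Sketch of the `fallthrough` pseudo-keyword: on compilers that
 * support the attribute (GCC >= 7, recent Clang) it marks an
 * intentional fall-through so -Wimplicit-fallthrough stays quiet;
 * otherwise it degrades to a no-op plus the classic comment.
 * Illustrative only.
 */
#if defined(__GNUC__) && __GNUC__ >= 7
#define fallthrough __attribute__((__fallthrough__))
#else
#define fallthrough do {} while (0) /* fallthrough */
#endif

static int classify(int op)
{
    int flags = 0;

    switch (op) {
    case 1:
        flags |= 1;     /* extra work for op 1 ... */
        fallthrough;    /* ... then deliberately shares op 2's path */
    case 2:
        flags |= 2;
        break;
    default:
        break;
    }

    return flags;
}
```

The imported driver still carries the pre-keyword `/* Fallthrough */` comments from Linux 5.8, which is what makes it mergeable before such a macro exists in Xen.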

---
 xen/drivers/passthrough/arm/smmu-v3.c | 4165 +++++++++++++++++++++++++
 1 file changed, 4165 insertions(+)
 create mode 100644 xen/drivers/passthrough/arm/smmu-v3.c

diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
new file mode 100644
index 0000000000..f578677a5c
--- /dev/null
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -0,0 +1,4165 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * IOMMU API for ARM architected SMMUv3 implementations.
+ *
+ * Copyright (C) 2015 ARM Limited
+ *
+ * Author: Will Deacon <will.deacon@arm.com>
+ *
+ * This driver is powered by bad coffee and bombay mix.
+ */
+
+#include <linux/acpi.h>
+#include <linux/acpi_iort.h>
+#include <linux/bitfield.h>
+#include <linux/bitops.h>
+#include <linux/crash_dump.h>
+#include <linux/delay.h>
+#include <linux/dma-iommu.h>
+#include <linux/err.h>
+#include <linux/interrupt.h>
+#include <linux/io-pgtable.h>
+#include <linux/iommu.h>
+#include <linux/iopoll.h>
+#include <linux/module.h>
+#include <linux/msi.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_iommu.h>
+#include <linux/of_platform.h>
+#include <linux/pci.h>
+#include <linux/pci-ats.h>
+#include <linux/platform_device.h>
+
+#include <linux/amba/bus.h>
+
+/* MMIO registers */
+#define ARM_SMMU_IDR0			0x0
+#define IDR0_ST_LVL			GENMASK(28, 27)
+#define IDR0_ST_LVL_2LVL		1
+#define IDR0_STALL_MODEL		GENMASK(25, 24)
+#define IDR0_STALL_MODEL_STALL		0
+#define IDR0_STALL_MODEL_FORCE		2
+#define IDR0_TTENDIAN			GENMASK(22, 21)
+#define IDR0_TTENDIAN_MIXED		0
+#define IDR0_TTENDIAN_LE		2
+#define IDR0_TTENDIAN_BE		3
+#define IDR0_CD2L			(1 << 19)
+#define IDR0_VMID16			(1 << 18)
+#define IDR0_PRI			(1 << 16)
+#define IDR0_SEV			(1 << 14)
+#define IDR0_MSI			(1 << 13)
+#define IDR0_ASID16			(1 << 12)
+#define IDR0_ATS			(1 << 10)
+#define IDR0_HYP			(1 << 9)
+#define IDR0_COHACC			(1 << 4)
+#define IDR0_TTF			GENMASK(3, 2)
+#define IDR0_TTF_AARCH64		2
+#define IDR0_TTF_AARCH32_64		3
+#define IDR0_S1P			(1 << 1)
+#define IDR0_S2P			(1 << 0)
+
+#define ARM_SMMU_IDR1			0x4
+#define IDR1_TABLES_PRESET		(1 << 30)
+#define IDR1_QUEUES_PRESET		(1 << 29)
+#define IDR1_REL			(1 << 28)
+#define IDR1_CMDQS			GENMASK(25, 21)
+#define IDR1_EVTQS			GENMASK(20, 16)
+#define IDR1_PRIQS			GENMASK(15, 11)
+#define IDR1_SSIDSIZE			GENMASK(10, 6)
+#define IDR1_SIDSIZE			GENMASK(5, 0)
+
+#define ARM_SMMU_IDR3			0xc
+#define IDR3_RIL			(1 << 10)
+
+#define ARM_SMMU_IDR5			0x14
+#define IDR5_STALL_MAX			GENMASK(31, 16)
+#define IDR5_GRAN64K			(1 << 6)
+#define IDR5_GRAN16K			(1 << 5)
+#define IDR5_GRAN4K			(1 << 4)
+#define IDR5_OAS			GENMASK(2, 0)
+#define IDR5_OAS_32_BIT			0
+#define IDR5_OAS_36_BIT			1
+#define IDR5_OAS_40_BIT			2
+#define IDR5_OAS_42_BIT			3
+#define IDR5_OAS_44_BIT			4
+#define IDR5_OAS_48_BIT			5
+#define IDR5_OAS_52_BIT			6
+#define IDR5_VAX			GENMASK(11, 10)
+#define IDR5_VAX_52_BIT			1
+
+#define ARM_SMMU_CR0			0x20
+#define CR0_ATSCHK			(1 << 4)
+#define CR0_CMDQEN			(1 << 3)
+#define CR0_EVTQEN			(1 << 2)
+#define CR0_PRIQEN			(1 << 1)
+#define CR0_SMMUEN			(1 << 0)
+
+#define ARM_SMMU_CR0ACK			0x24
+
+#define ARM_SMMU_CR1			0x28
+#define CR1_TABLE_SH			GENMASK(11, 10)
+#define CR1_TABLE_OC			GENMASK(9, 8)
+#define CR1_TABLE_IC			GENMASK(7, 6)
+#define CR1_QUEUE_SH			GENMASK(5, 4)
+#define CR1_QUEUE_OC			GENMASK(3, 2)
+#define CR1_QUEUE_IC			GENMASK(1, 0)
+/* CR1 cacheability fields don't quite follow the usual TCR-style encoding */
+#define CR1_CACHE_NC			0
+#define CR1_CACHE_WB			1
+#define CR1_CACHE_WT			2
+
+#define ARM_SMMU_CR2			0x2c
+#define CR2_PTM				(1 << 2)
+#define CR2_RECINVSID			(1 << 1)
+#define CR2_E2H				(1 << 0)
+
+#define ARM_SMMU_GBPA			0x44
+#define GBPA_UPDATE			(1 << 31)
+#define GBPA_ABORT			(1 << 20)
+
+#define ARM_SMMU_IRQ_CTRL		0x50
+#define IRQ_CTRL_EVTQ_IRQEN		(1 << 2)
+#define IRQ_CTRL_PRIQ_IRQEN		(1 << 1)
+#define IRQ_CTRL_GERROR_IRQEN		(1 << 0)
+
+#define ARM_SMMU_IRQ_CTRLACK		0x54
+
+#define ARM_SMMU_GERROR			0x60
+#define GERROR_SFM_ERR			(1 << 8)
+#define GERROR_MSI_GERROR_ABT_ERR	(1 << 7)
+#define GERROR_MSI_PRIQ_ABT_ERR		(1 << 6)
+#define GERROR_MSI_EVTQ_ABT_ERR		(1 << 5)
+#define GERROR_MSI_CMDQ_ABT_ERR		(1 << 4)
+#define GERROR_PRIQ_ABT_ERR		(1 << 3)
+#define GERROR_EVTQ_ABT_ERR		(1 << 2)
+#define GERROR_CMDQ_ERR			(1 << 0)
+#define GERROR_ERR_MASK			0xfd
+
+#define ARM_SMMU_GERRORN		0x64
+
+#define ARM_SMMU_GERROR_IRQ_CFG0	0x68
+#define ARM_SMMU_GERROR_IRQ_CFG1	0x70
+#define ARM_SMMU_GERROR_IRQ_CFG2	0x74
+
+#define ARM_SMMU_STRTAB_BASE		0x80
+#define STRTAB_BASE_RA			(1UL << 62)
+#define STRTAB_BASE_ADDR_MASK		GENMASK_ULL(51, 6)
+
+#define ARM_SMMU_STRTAB_BASE_CFG	0x88
+#define STRTAB_BASE_CFG_FMT		GENMASK(17, 16)
+#define STRTAB_BASE_CFG_FMT_LINEAR	0
+#define STRTAB_BASE_CFG_FMT_2LVL	1
+#define STRTAB_BASE_CFG_SPLIT		GENMASK(10, 6)
+#define STRTAB_BASE_CFG_LOG2SIZE	GENMASK(5, 0)
+
+#define ARM_SMMU_CMDQ_BASE		0x90
+#define ARM_SMMU_CMDQ_PROD		0x98
+#define ARM_SMMU_CMDQ_CONS		0x9c
+
+#define ARM_SMMU_EVTQ_BASE		0xa0
+#define ARM_SMMU_EVTQ_PROD		0x100a8
+#define ARM_SMMU_EVTQ_CONS		0x100ac
+#define ARM_SMMU_EVTQ_IRQ_CFG0		0xb0
+#define ARM_SMMU_EVTQ_IRQ_CFG1		0xb8
+#define ARM_SMMU_EVTQ_IRQ_CFG2		0xbc
+
+#define ARM_SMMU_PRIQ_BASE		0xc0
+#define ARM_SMMU_PRIQ_PROD		0x100c8
+#define ARM_SMMU_PRIQ_CONS		0x100cc
+#define ARM_SMMU_PRIQ_IRQ_CFG0		0xd0
+#define ARM_SMMU_PRIQ_IRQ_CFG1		0xd8
+#define ARM_SMMU_PRIQ_IRQ_CFG2		0xdc
+
+#define ARM_SMMU_REG_SZ			0xe00
+
+/* Common MSI config fields */
+#define MSI_CFG0_ADDR_MASK		GENMASK_ULL(51, 2)
+#define MSI_CFG2_SH			GENMASK(5, 4)
+#define MSI_CFG2_MEMATTR		GENMASK(3, 0)
+
+/* Common memory attribute values */
+#define ARM_SMMU_SH_NSH			0
+#define ARM_SMMU_SH_OSH			2
+#define ARM_SMMU_SH_ISH			3
+#define ARM_SMMU_MEMATTR_DEVICE_nGnRE	0x1
+#define ARM_SMMU_MEMATTR_OIWB		0xf
+
+#define Q_IDX(llq, p)			((p) & ((1 << (llq)->max_n_shift) - 1))
+#define Q_WRP(llq, p)			((p) & (1 << (llq)->max_n_shift))
+#define Q_OVERFLOW_FLAG			(1U << 31)
+#define Q_OVF(p)			((p) & Q_OVERFLOW_FLAG)
+#define Q_ENT(q, p)			((q)->base +			\
+					 Q_IDX(&((q)->llq), p) *	\
+					 (q)->ent_dwords)
+
+#define Q_BASE_RWA			(1UL << 62)
+#define Q_BASE_ADDR_MASK		GENMASK_ULL(51, 5)
+#define Q_BASE_LOG2SIZE			GENMASK(4, 0)
+
+/* Ensure DMA allocations are naturally aligned */
+#ifdef CONFIG_CMA_ALIGNMENT
+#define Q_MAX_SZ_SHIFT			(PAGE_SHIFT + CONFIG_CMA_ALIGNMENT)
+#else
+#define Q_MAX_SZ_SHIFT			(PAGE_SHIFT + MAX_ORDER - 1)
+#endif
+
+/*
+ * Stream table.
+ *
+ * Linear: Enough to cover 1 << IDR1.SIDSIZE entries
+ * 2lvl: 128k L1 entries,
+ *       256 lazy entries per table (each table covers a PCI bus)
+ */
+#define STRTAB_L1_SZ_SHIFT		20
+#define STRTAB_SPLIT			8
+
+#define STRTAB_L1_DESC_DWORDS		1
+#define STRTAB_L1_DESC_SPAN		GENMASK_ULL(4, 0)
+#define STRTAB_L1_DESC_L2PTR_MASK	GENMASK_ULL(51, 6)
+
+#define STRTAB_STE_DWORDS		8
+#define STRTAB_STE_0_V			(1UL << 0)
+#define STRTAB_STE_0_CFG		GENMASK_ULL(3, 1)
+#define STRTAB_STE_0_CFG_ABORT		0
+#define STRTAB_STE_0_CFG_BYPASS		4
+#define STRTAB_STE_0_CFG_S1_TRANS	5
+#define STRTAB_STE_0_CFG_S2_TRANS	6
+
+#define STRTAB_STE_0_S1FMT		GENMASK_ULL(5, 4)
+#define STRTAB_STE_0_S1FMT_LINEAR	0
+#define STRTAB_STE_0_S1FMT_64K_L2	2
+#define STRTAB_STE_0_S1CTXPTR_MASK	GENMASK_ULL(51, 6)
+#define STRTAB_STE_0_S1CDMAX		GENMASK_ULL(63, 59)
+
+#define STRTAB_STE_1_S1DSS		GENMASK_ULL(1, 0)
+#define STRTAB_STE_1_S1DSS_TERMINATE	0x0
+#define STRTAB_STE_1_S1DSS_BYPASS	0x1
+#define STRTAB_STE_1_S1DSS_SSID0	0x2
+
+#define STRTAB_STE_1_S1C_CACHE_NC	0UL
+#define STRTAB_STE_1_S1C_CACHE_WBRA	1UL
+#define STRTAB_STE_1_S1C_CACHE_WT	2UL
+#define STRTAB_STE_1_S1C_CACHE_WB	3UL
+#define STRTAB_STE_1_S1CIR		GENMASK_ULL(3, 2)
+#define STRTAB_STE_1_S1COR		GENMASK_ULL(5, 4)
+#define STRTAB_STE_1_S1CSH		GENMASK_ULL(7, 6)
+
+#define STRTAB_STE_1_S1STALLD		(1UL << 27)
+
+#define STRTAB_STE_1_EATS		GENMASK_ULL(29, 28)
+#define STRTAB_STE_1_EATS_ABT		0UL
+#define STRTAB_STE_1_EATS_TRANS		1UL
+#define STRTAB_STE_1_EATS_S1CHK		2UL
+
+#define STRTAB_STE_1_STRW		GENMASK_ULL(31, 30)
+#define STRTAB_STE_1_STRW_NSEL1		0UL
+#define STRTAB_STE_1_STRW_EL2		2UL
+
+#define STRTAB_STE_1_SHCFG		GENMASK_ULL(45, 44)
+#define STRTAB_STE_1_SHCFG_INCOMING	1UL
+
+#define STRTAB_STE_2_S2VMID		GENMASK_ULL(15, 0)
+#define STRTAB_STE_2_VTCR		GENMASK_ULL(50, 32)
+#define STRTAB_STE_2_VTCR_S2T0SZ	GENMASK_ULL(5, 0)
+#define STRTAB_STE_2_VTCR_S2SL0		GENMASK_ULL(7, 6)
+#define STRTAB_STE_2_VTCR_S2IR0		GENMASK_ULL(9, 8)
+#define STRTAB_STE_2_VTCR_S2OR0		GENMASK_ULL(11, 10)
+#define STRTAB_STE_2_VTCR_S2SH0		GENMASK_ULL(13, 12)
+#define STRTAB_STE_2_VTCR_S2TG		GENMASK_ULL(15, 14)
+#define STRTAB_STE_2_VTCR_S2PS		GENMASK_ULL(18, 16)
+#define STRTAB_STE_2_S2AA64		(1UL << 51)
+#define STRTAB_STE_2_S2ENDI		(1UL << 52)
+#define STRTAB_STE_2_S2PTW		(1UL << 54)
+#define STRTAB_STE_2_S2R		(1UL << 58)
+
+#define STRTAB_STE_3_S2TTB_MASK		GENMASK_ULL(51, 4)
+
+/*
+ * Context descriptors.
+ *
+ * Linear: when less than 1024 SSIDs are supported
+ * 2lvl: at most 1024 L1 entries,
+ *       1024 lazy entries per table.
+ */
+#define CTXDESC_SPLIT			10
+#define CTXDESC_L2_ENTRIES		(1 << CTXDESC_SPLIT)
+
+#define CTXDESC_L1_DESC_DWORDS		1
+#define CTXDESC_L1_DESC_V		(1UL << 0)
+#define CTXDESC_L1_DESC_L2PTR_MASK	GENMASK_ULL(51, 12)
+
+#define CTXDESC_CD_DWORDS		8
+#define CTXDESC_CD_0_TCR_T0SZ		GENMASK_ULL(5, 0)
+#define CTXDESC_CD_0_TCR_TG0		GENMASK_ULL(7, 6)
+#define CTXDESC_CD_0_TCR_IRGN0		GENMASK_ULL(9, 8)
+#define CTXDESC_CD_0_TCR_ORGN0		GENMASK_ULL(11, 10)
+#define CTXDESC_CD_0_TCR_SH0		GENMASK_ULL(13, 12)
+#define CTXDESC_CD_0_TCR_EPD0		(1ULL << 14)
+#define CTXDESC_CD_0_TCR_EPD1		(1ULL << 30)
+
+#define CTXDESC_CD_0_ENDI		(1UL << 15)
+#define CTXDESC_CD_0_V			(1UL << 31)
+
+#define CTXDESC_CD_0_TCR_IPS		GENMASK_ULL(34, 32)
+#define CTXDESC_CD_0_TCR_TBI0		(1ULL << 38)
+
+#define CTXDESC_CD_0_AA64		(1UL << 41)
+#define CTXDESC_CD_0_S			(1UL << 44)
+#define CTXDESC_CD_0_R			(1UL << 45)
+#define CTXDESC_CD_0_A			(1UL << 46)
+#define CTXDESC_CD_0_ASET		(1UL << 47)
+#define CTXDESC_CD_0_ASID		GENMASK_ULL(63, 48)
+
+#define CTXDESC_CD_1_TTB0_MASK		GENMASK_ULL(51, 4)
+
+/*
+ * When the SMMU only supports linear context descriptor tables, pick a
+ * reasonable size limit (64kB).
+ */
+#define CTXDESC_LINEAR_CDMAX		ilog2(SZ_64K / (CTXDESC_CD_DWORDS << 3))
+
+/* Command queue */
+#define CMDQ_ENT_SZ_SHIFT		4
+#define CMDQ_ENT_DWORDS			((1 << CMDQ_ENT_SZ_SHIFT) >> 3)
+#define CMDQ_MAX_SZ_SHIFT		(Q_MAX_SZ_SHIFT - CMDQ_ENT_SZ_SHIFT)
+
+#define CMDQ_CONS_ERR			GENMASK(30, 24)
+#define CMDQ_ERR_CERROR_NONE_IDX	0
+#define CMDQ_ERR_CERROR_ILL_IDX		1
+#define CMDQ_ERR_CERROR_ABT_IDX		2
+#define CMDQ_ERR_CERROR_ATC_INV_IDX	3
+
+#define CMDQ_PROD_OWNED_FLAG		Q_OVERFLOW_FLAG
+
+/*
+ * This is used to size the command queue and therefore must be at least
+ * BITS_PER_LONG so that the valid_map works correctly (it relies on the
+ * total number of queue entries being a multiple of BITS_PER_LONG).
+ */
+#define CMDQ_BATCH_ENTRIES		BITS_PER_LONG
+
+#define CMDQ_0_OP			GENMASK_ULL(7, 0)
+#define CMDQ_0_SSV			(1UL << 11)
+
+#define CMDQ_PREFETCH_0_SID		GENMASK_ULL(63, 32)
+#define CMDQ_PREFETCH_1_SIZE		GENMASK_ULL(4, 0)
+#define CMDQ_PREFETCH_1_ADDR_MASK	GENMASK_ULL(63, 12)
+
+#define CMDQ_CFGI_0_SSID		GENMASK_ULL(31, 12)
+#define CMDQ_CFGI_0_SID			GENMASK_ULL(63, 32)
+#define CMDQ_CFGI_1_LEAF		(1UL << 0)
+#define CMDQ_CFGI_1_RANGE		GENMASK_ULL(4, 0)
+
+#define CMDQ_TLBI_0_NUM			GENMASK_ULL(16, 12)
+#define CMDQ_TLBI_RANGE_NUM_MAX		31
+#define CMDQ_TLBI_0_SCALE		GENMASK_ULL(24, 20)
+#define CMDQ_TLBI_0_VMID		GENMASK_ULL(47, 32)
+#define CMDQ_TLBI_0_ASID		GENMASK_ULL(63, 48)
+#define CMDQ_TLBI_1_LEAF		(1UL << 0)
+#define CMDQ_TLBI_1_TTL			GENMASK_ULL(9, 8)
+#define CMDQ_TLBI_1_TG			GENMASK_ULL(11, 10)
+#define CMDQ_TLBI_1_VA_MASK		GENMASK_ULL(63, 12)
+#define CMDQ_TLBI_1_IPA_MASK		GENMASK_ULL(51, 12)
+
+#define CMDQ_ATC_0_SSID			GENMASK_ULL(31, 12)
+#define CMDQ_ATC_0_SID			GENMASK_ULL(63, 32)
+#define CMDQ_ATC_0_GLOBAL		(1UL << 9)
+#define CMDQ_ATC_1_SIZE			GENMASK_ULL(5, 0)
+#define CMDQ_ATC_1_ADDR_MASK		GENMASK_ULL(63, 12)
+
+#define CMDQ_PRI_0_SSID			GENMASK_ULL(31, 12)
+#define CMDQ_PRI_0_SID			GENMASK_ULL(63, 32)
+#define CMDQ_PRI_1_GRPID		GENMASK_ULL(8, 0)
+#define CMDQ_PRI_1_RESP			GENMASK_ULL(13, 12)
+
+#define CMDQ_SYNC_0_CS			GENMASK_ULL(13, 12)
+#define CMDQ_SYNC_0_CS_NONE		0
+#define CMDQ_SYNC_0_CS_IRQ		1
+#define CMDQ_SYNC_0_CS_SEV		2
+#define CMDQ_SYNC_0_MSH			GENMASK_ULL(23, 22)
+#define CMDQ_SYNC_0_MSIATTR		GENMASK_ULL(27, 24)
+#define CMDQ_SYNC_0_MSIDATA		GENMASK_ULL(63, 32)
+#define CMDQ_SYNC_1_MSIADDR_MASK	GENMASK_ULL(51, 2)
+
+/* Event queue */
+#define EVTQ_ENT_SZ_SHIFT		5
+#define EVTQ_ENT_DWORDS			((1 << EVTQ_ENT_SZ_SHIFT) >> 3)
+#define EVTQ_MAX_SZ_SHIFT		(Q_MAX_SZ_SHIFT - EVTQ_ENT_SZ_SHIFT)
+
+#define EVTQ_0_ID			GENMASK_ULL(7, 0)
+
+/* PRI queue */
+#define PRIQ_ENT_SZ_SHIFT		4
+#define PRIQ_ENT_DWORDS			((1 << PRIQ_ENT_SZ_SHIFT) >> 3)
+#define PRIQ_MAX_SZ_SHIFT		(Q_MAX_SZ_SHIFT - PRIQ_ENT_SZ_SHIFT)
+
+#define PRIQ_0_SID			GENMASK_ULL(31, 0)
+#define PRIQ_0_SSID			GENMASK_ULL(51, 32)
+#define PRIQ_0_PERM_PRIV		(1UL << 58)
+#define PRIQ_0_PERM_EXEC		(1UL << 59)
+#define PRIQ_0_PERM_READ		(1UL << 60)
+#define PRIQ_0_PERM_WRITE		(1UL << 61)
+#define PRIQ_0_PRG_LAST			(1UL << 62)
+#define PRIQ_0_SSID_V			(1UL << 63)
+
+#define PRIQ_1_PRG_IDX			GENMASK_ULL(8, 0)
+#define PRIQ_1_ADDR_MASK		GENMASK_ULL(63, 12)
+
+/* High-level queue structures */
+#define ARM_SMMU_POLL_TIMEOUT_US	1000000 /* 1s! */
+#define ARM_SMMU_POLL_SPIN_COUNT	10
+
+#define MSI_IOVA_BASE			0x8000000
+#define MSI_IOVA_LENGTH			0x100000
+
+static bool disable_bypass = 1;
+module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO);
+MODULE_PARM_DESC(disable_bypass,
+	"Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");
+
+enum pri_resp {
+	PRI_RESP_DENY = 0,
+	PRI_RESP_FAIL = 1,
+	PRI_RESP_SUCC = 2,
+};
+
+enum arm_smmu_msi_index {
+	EVTQ_MSI_INDEX,
+	GERROR_MSI_INDEX,
+	PRIQ_MSI_INDEX,
+	ARM_SMMU_MAX_MSIS,
+};
+
+static phys_addr_t arm_smmu_msi_cfg[ARM_SMMU_MAX_MSIS][3] = {
+	[EVTQ_MSI_INDEX] = {
+		ARM_SMMU_EVTQ_IRQ_CFG0,
+		ARM_SMMU_EVTQ_IRQ_CFG1,
+		ARM_SMMU_EVTQ_IRQ_CFG2,
+	},
+	[GERROR_MSI_INDEX] = {
+		ARM_SMMU_GERROR_IRQ_CFG0,
+		ARM_SMMU_GERROR_IRQ_CFG1,
+		ARM_SMMU_GERROR_IRQ_CFG2,
+	},
+	[PRIQ_MSI_INDEX] = {
+		ARM_SMMU_PRIQ_IRQ_CFG0,
+		ARM_SMMU_PRIQ_IRQ_CFG1,
+		ARM_SMMU_PRIQ_IRQ_CFG2,
+	},
+};
+
+struct arm_smmu_cmdq_ent {
+	/* Common fields */
+	u8				opcode;
+	bool				substream_valid;
+
+	/* Command-specific fields */
+	union {
+		#define CMDQ_OP_PREFETCH_CFG	0x1
+		struct {
+			u32			sid;
+			u8			size;
+			u64			addr;
+		} prefetch;
+
+		#define CMDQ_OP_CFGI_STE	0x3
+		#define CMDQ_OP_CFGI_ALL	0x4
+		#define CMDQ_OP_CFGI_CD		0x5
+		#define CMDQ_OP_CFGI_CD_ALL	0x6
+		struct {
+			u32			sid;
+			u32			ssid;
+			union {
+				bool		leaf;
+				u8		span;
+			};
+		} cfgi;
+
+		#define CMDQ_OP_TLBI_NH_ASID	0x11
+		#define CMDQ_OP_TLBI_NH_VA	0x12
+		#define CMDQ_OP_TLBI_EL2_ALL	0x20
+		#define CMDQ_OP_TLBI_S12_VMALL	0x28
+		#define CMDQ_OP_TLBI_S2_IPA	0x2a
+		#define CMDQ_OP_TLBI_NSNH_ALL	0x30
+		struct {
+			u8			num;
+			u8			scale;
+			u16			asid;
+			u16			vmid;
+			bool			leaf;
+			u8			ttl;
+			u8			tg;
+			u64			addr;
+		} tlbi;
+
+		#define CMDQ_OP_ATC_INV		0x40
+		#define ATC_INV_SIZE_ALL	52
+		struct {
+			u32			sid;
+			u32			ssid;
+			u64			addr;
+			u8			size;
+			bool			global;
+		} atc;
+
+		#define CMDQ_OP_PRI_RESP	0x41
+		struct {
+			u32			sid;
+			u32			ssid;
+			u16			grpid;
+			enum pri_resp		resp;
+		} pri;
+
+		#define CMDQ_OP_CMD_SYNC	0x46
+		struct {
+			u64			msiaddr;
+		} sync;
+	};
+};
+
+struct arm_smmu_ll_queue {
+	union {
+		u64			val;
+		struct {
+			u32		prod;
+			u32		cons;
+		};
+		struct {
+			atomic_t	prod;
+			atomic_t	cons;
+		} atomic;
+		u8			__pad[SMP_CACHE_BYTES];
+	} ____cacheline_aligned_in_smp;
+	u32				max_n_shift;
+};
+
+struct arm_smmu_queue {
+	struct arm_smmu_ll_queue	llq;
+	int				irq; /* Wired interrupt */
+
+	__le64				*base;
+	dma_addr_t			base_dma;
+	u64				q_base;
+
+	size_t				ent_dwords;
+
+	u32 __iomem			*prod_reg;
+	u32 __iomem			*cons_reg;
+};
+
+struct arm_smmu_queue_poll {
+	ktime_t				timeout;
+	unsigned int			delay;
+	unsigned int			spin_cnt;
+	bool				wfe;
+};
+
+struct arm_smmu_cmdq {
+	struct arm_smmu_queue		q;
+	atomic_long_t			*valid_map;
+	atomic_t			owner_prod;
+	atomic_t			lock;
+};
+
+struct arm_smmu_cmdq_batch {
+	u64				cmds[CMDQ_BATCH_ENTRIES * CMDQ_ENT_DWORDS];
+	int				num;
+};
+
+struct arm_smmu_evtq {
+	struct arm_smmu_queue		q;
+	u32				max_stalls;
+};
+
+struct arm_smmu_priq {
+	struct arm_smmu_queue		q;
+};
+
+/* High-level stream table and context descriptor structures */
+struct arm_smmu_strtab_l1_desc {
+	u8				span;
+
+	__le64				*l2ptr;
+	dma_addr_t			l2ptr_dma;
+};
+
+struct arm_smmu_ctx_desc {
+	u16				asid;
+	u64				ttbr;
+	u64				tcr;
+	u64				mair;
+};
+
+struct arm_smmu_l1_ctx_desc {
+	__le64				*l2ptr;
+	dma_addr_t			l2ptr_dma;
+};
+
+struct arm_smmu_ctx_desc_cfg {
+	__le64				*cdtab;
+	dma_addr_t			cdtab_dma;
+	struct arm_smmu_l1_ctx_desc	*l1_desc;
+	unsigned int			num_l1_ents;
+};
+
+struct arm_smmu_s1_cfg {
+	struct arm_smmu_ctx_desc_cfg	cdcfg;
+	struct arm_smmu_ctx_desc	cd;
+	u8				s1fmt;
+	u8				s1cdmax;
+};
+
+struct arm_smmu_s2_cfg {
+	u16				vmid;
+	u64				vttbr;
+	u64				vtcr;
+};
+
+struct arm_smmu_strtab_cfg {
+	__le64				*strtab;
+	dma_addr_t			strtab_dma;
+	struct arm_smmu_strtab_l1_desc	*l1_desc;
+	unsigned int			num_l1_ents;
+
+	u64				strtab_base;
+	u32				strtab_base_cfg;
+};
+
+/* An SMMUv3 instance */
+struct arm_smmu_device {
+	struct device			*dev;
+	void __iomem			*base;
+	void __iomem			*page1;
+
+#define ARM_SMMU_FEAT_2_LVL_STRTAB	(1 << 0)
+#define ARM_SMMU_FEAT_2_LVL_CDTAB	(1 << 1)
+#define ARM_SMMU_FEAT_TT_LE		(1 << 2)
+#define ARM_SMMU_FEAT_TT_BE		(1 << 3)
+#define ARM_SMMU_FEAT_PRI		(1 << 4)
+#define ARM_SMMU_FEAT_ATS		(1 << 5)
+#define ARM_SMMU_FEAT_SEV		(1 << 6)
+#define ARM_SMMU_FEAT_MSI		(1 << 7)
+#define ARM_SMMU_FEAT_COHERENCY		(1 << 8)
+#define ARM_SMMU_FEAT_TRANS_S1		(1 << 9)
+#define ARM_SMMU_FEAT_TRANS_S2		(1 << 10)
+#define ARM_SMMU_FEAT_STALLS		(1 << 11)
+#define ARM_SMMU_FEAT_HYP		(1 << 12)
+#define ARM_SMMU_FEAT_STALL_FORCE	(1 << 13)
+#define ARM_SMMU_FEAT_VAX		(1 << 14)
+#define ARM_SMMU_FEAT_RANGE_INV		(1 << 15)
+	u32				features;
+
+#define ARM_SMMU_OPT_SKIP_PREFETCH	(1 << 0)
+#define ARM_SMMU_OPT_PAGE0_REGS_ONLY	(1 << 1)
+	u32				options;
+
+	struct arm_smmu_cmdq		cmdq;
+	struct arm_smmu_evtq		evtq;
+	struct arm_smmu_priq		priq;
+
+	int				gerr_irq;
+	int				combined_irq;
+
+	unsigned long			ias; /* IPA */
+	unsigned long			oas; /* PA */
+	unsigned long			pgsize_bitmap;
+
+#define ARM_SMMU_MAX_ASIDS		(1 << 16)
+	unsigned int			asid_bits;
+
+#define ARM_SMMU_MAX_VMIDS		(1 << 16)
+	unsigned int			vmid_bits;
+	DECLARE_BITMAP(vmid_map, ARM_SMMU_MAX_VMIDS);
+
+	unsigned int			ssid_bits;
+	unsigned int			sid_bits;
+
+	struct arm_smmu_strtab_cfg	strtab_cfg;
+
+	/* IOMMU core code handle */
+	struct iommu_device		iommu;
+};
+
+/* SMMU private data for each master */
+struct arm_smmu_master {
+	struct arm_smmu_device		*smmu;
+	struct device			*dev;
+	struct arm_smmu_domain		*domain;
+	struct list_head		domain_head;
+	u32				*sids;
+	unsigned int			num_sids;
+	bool				ats_enabled;
+	unsigned int			ssid_bits;
+};
+
+/* SMMU private data for an IOMMU domain */
+enum arm_smmu_domain_stage {
+	ARM_SMMU_DOMAIN_S1 = 0,
+	ARM_SMMU_DOMAIN_S2,
+	ARM_SMMU_DOMAIN_NESTED,
+	ARM_SMMU_DOMAIN_BYPASS,
+};
+
+struct arm_smmu_domain {
+	struct arm_smmu_device		*smmu;
+	struct mutex			init_mutex; /* Protects smmu pointer */
+
+	struct io_pgtable_ops		*pgtbl_ops;
+	bool				non_strict;
+	atomic_t			nr_ats_masters;
+
+	enum arm_smmu_domain_stage	stage;
+	union {
+		struct arm_smmu_s1_cfg	s1_cfg;
+		struct arm_smmu_s2_cfg	s2_cfg;
+	};
+
+	struct iommu_domain		domain;
+
+	struct list_head		devices;
+	spinlock_t			devices_lock;
+};
+
+struct arm_smmu_option_prop {
+	u32 opt;
+	const char *prop;
+};
+
+static DEFINE_XARRAY_ALLOC1(asid_xa);
+
+static struct arm_smmu_option_prop arm_smmu_options[] = {
+	{ ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" },
+	{ ARM_SMMU_OPT_PAGE0_REGS_ONLY, "cavium,cn9900-broken-page1-regspace"},
+	{ 0, NULL},
+};
+
+static inline void __iomem *arm_smmu_page1_fixup(unsigned long offset,
+						 struct arm_smmu_device *smmu)
+{
+	if (offset > SZ_64K)
+		return smmu->page1 + offset - SZ_64K;
+
+	return smmu->base + offset;
+}
+
+static struct arm_smmu_domain *to_smmu_domain(struct iommu_domain *dom)
+{
+	return container_of(dom, struct arm_smmu_domain, domain);
+}
+
+static void parse_driver_options(struct arm_smmu_device *smmu)
+{
+	int i = 0;
+
+	do {
+		if (of_property_read_bool(smmu->dev->of_node,
+						arm_smmu_options[i].prop)) {
+			smmu->options |= arm_smmu_options[i].opt;
+			dev_notice(smmu->dev, "option %s\n",
+				arm_smmu_options[i].prop);
+		}
+	} while (arm_smmu_options[++i].opt);
+}
+
+/* Low-level queue manipulation functions */
+static bool queue_has_space(struct arm_smmu_ll_queue *q, u32 n)
+{
+	u32 space, prod, cons;
+
+	prod = Q_IDX(q, q->prod);
+	cons = Q_IDX(q, q->cons);
+
+	if (Q_WRP(q, q->prod) == Q_WRP(q, q->cons))
+		space = (1 << q->max_n_shift) - (prod - cons);
+	else
+		space = cons - prod;
+
+	return space >= n;
+}
+
+static bool queue_full(struct arm_smmu_ll_queue *q)
+{
+	return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) &&
+	       Q_WRP(q, q->prod) != Q_WRP(q, q->cons);
+}
+
+static bool queue_empty(struct arm_smmu_ll_queue *q)
+{
+	return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) &&
+	       Q_WRP(q, q->prod) == Q_WRP(q, q->cons);
+}
+
+static bool queue_consumed(struct arm_smmu_ll_queue *q, u32 prod)
+{
+	return ((Q_WRP(q, q->cons) == Q_WRP(q, prod)) &&
+		(Q_IDX(q, q->cons) > Q_IDX(q, prod))) ||
+	       ((Q_WRP(q, q->cons) != Q_WRP(q, prod)) &&
+		(Q_IDX(q, q->cons) <= Q_IDX(q, prod)));
+}
+
+static void queue_sync_cons_out(struct arm_smmu_queue *q)
+{
+	/*
+	 * Ensure that all CPU accesses (reads and writes) to the queue
+	 * are complete before we update the cons pointer.
+	 */
+	mb();
+	writel_relaxed(q->llq.cons, q->cons_reg);
+}
+
+static void queue_inc_cons(struct arm_smmu_ll_queue *q)
+{
+	u32 cons = (Q_WRP(q, q->cons) | Q_IDX(q, q->cons)) + 1;
+	q->cons = Q_OVF(q->cons) | Q_WRP(q, cons) | Q_IDX(q, cons);
+}
+
+static int queue_sync_prod_in(struct arm_smmu_queue *q)
+{
+	int ret = 0;
+	u32 prod = readl_relaxed(q->prod_reg);
+
+	if (Q_OVF(prod) != Q_OVF(q->llq.prod))
+		ret = -EOVERFLOW;
+
+	q->llq.prod = prod;
+	return ret;
+}
+
+static u32 queue_inc_prod_n(struct arm_smmu_ll_queue *q, int n)
+{
+	u32 prod = (Q_WRP(q, q->prod) | Q_IDX(q, q->prod)) + n;
+	return Q_OVF(q->prod) | Q_WRP(q, prod) | Q_IDX(q, prod);
+}
+
+static void queue_poll_init(struct arm_smmu_device *smmu,
+			    struct arm_smmu_queue_poll *qp)
+{
+	qp->delay = 1;
+	qp->spin_cnt = 0;
+	qp->wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
+	qp->timeout = ktime_add_us(ktime_get(), ARM_SMMU_POLL_TIMEOUT_US);
+}
+
+static int queue_poll(struct arm_smmu_queue_poll *qp)
+{
+	if (ktime_compare(ktime_get(), qp->timeout) > 0)
+		return -ETIMEDOUT;
+
+	if (qp->wfe) {
+		wfe();
+	} else if (++qp->spin_cnt < ARM_SMMU_POLL_SPIN_COUNT) {
+		cpu_relax();
+	} else {
+		udelay(qp->delay);
+		qp->delay *= 2;
+		qp->spin_cnt = 0;
+	}
+
+	return 0;
+}
+
+static void queue_write(__le64 *dst, u64 *src, size_t n_dwords)
+{
+	int i;
+
+	for (i = 0; i < n_dwords; ++i)
+		*dst++ = cpu_to_le64(*src++);
+}
+
+static void queue_read(__le64 *dst, u64 *src, size_t n_dwords)
+{
+	int i;
+
+	for (i = 0; i < n_dwords; ++i)
+		*dst++ = le64_to_cpu(*src++);
+}
+
+static int queue_remove_raw(struct arm_smmu_queue *q, u64 *ent)
+{
+	if (queue_empty(&q->llq))
+		return -EAGAIN;
+
+	queue_read(ent, Q_ENT(q, q->llq.cons), q->ent_dwords);
+	queue_inc_cons(&q->llq);
+	queue_sync_cons_out(q);
+	return 0;
+}
+
+/* High-level queue accessors */
+static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
+{
+	memset(cmd, 0, 1 << CMDQ_ENT_SZ_SHIFT);
+	cmd[0] |= FIELD_PREP(CMDQ_0_OP, ent->opcode);
+
+	switch (ent->opcode) {
+	case CMDQ_OP_TLBI_EL2_ALL:
+	case CMDQ_OP_TLBI_NSNH_ALL:
+		break;
+	case CMDQ_OP_PREFETCH_CFG:
+		cmd[0] |= FIELD_PREP(CMDQ_PREFETCH_0_SID, ent->prefetch.sid);
+		cmd[1] |= FIELD_PREP(CMDQ_PREFETCH_1_SIZE, ent->prefetch.size);
+		cmd[1] |= ent->prefetch.addr & CMDQ_PREFETCH_1_ADDR_MASK;
+		break;
+	case CMDQ_OP_CFGI_CD:
+		cmd[0] |= FIELD_PREP(CMDQ_CFGI_0_SSID, ent->cfgi.ssid);
+		/* Fallthrough */
+	case CMDQ_OP_CFGI_STE:
+		cmd[0] |= FIELD_PREP(CMDQ_CFGI_0_SID, ent->cfgi.sid);
+		cmd[1] |= FIELD_PREP(CMDQ_CFGI_1_LEAF, ent->cfgi.leaf);
+		break;
+	case CMDQ_OP_CFGI_CD_ALL:
+		cmd[0] |= FIELD_PREP(CMDQ_CFGI_0_SID, ent->cfgi.sid);
+		break;
+	case CMDQ_OP_CFGI_ALL:
+		/* Cover the entire SID range */
+		cmd[1] |= FIELD_PREP(CMDQ_CFGI_1_RANGE, 31);
+		break;
+	case CMDQ_OP_TLBI_NH_VA:
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_NUM, ent->tlbi.num);
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_SCALE, ent->tlbi.scale);
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_ASID, ent->tlbi.asid);
+		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_LEAF, ent->tlbi.leaf);
+		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TTL, ent->tlbi.ttl);
+		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TG, ent->tlbi.tg);
+		cmd[1] |= ent->tlbi.addr & CMDQ_TLBI_1_VA_MASK;
+		break;
+	case CMDQ_OP_TLBI_S2_IPA:
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_NUM, ent->tlbi.num);
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_SCALE, ent->tlbi.scale);
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
+		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_LEAF, ent->tlbi.leaf);
+		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TTL, ent->tlbi.ttl);
+		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TG, ent->tlbi.tg);
+		cmd[1] |= ent->tlbi.addr & CMDQ_TLBI_1_IPA_MASK;
+		break;
+	case CMDQ_OP_TLBI_NH_ASID:
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_ASID, ent->tlbi.asid);
+		/* Fallthrough */
+	case CMDQ_OP_TLBI_S12_VMALL:
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
+		break;
+	case CMDQ_OP_ATC_INV:
+		cmd[0] |= FIELD_PREP(CMDQ_0_SSV, ent->substream_valid);
+		cmd[0] |= FIELD_PREP(CMDQ_ATC_0_GLOBAL, ent->atc.global);
+		cmd[0] |= FIELD_PREP(CMDQ_ATC_0_SSID, ent->atc.ssid);
+		cmd[0] |= FIELD_PREP(CMDQ_ATC_0_SID, ent->atc.sid);
+		cmd[1] |= FIELD_PREP(CMDQ_ATC_1_SIZE, ent->atc.size);
+		cmd[1] |= ent->atc.addr & CMDQ_ATC_1_ADDR_MASK;
+		break;
+	case CMDQ_OP_PRI_RESP:
+		cmd[0] |= FIELD_PREP(CMDQ_0_SSV, ent->substream_valid);
+		cmd[0] |= FIELD_PREP(CMDQ_PRI_0_SSID, ent->pri.ssid);
+		cmd[0] |= FIELD_PREP(CMDQ_PRI_0_SID, ent->pri.sid);
+		cmd[1] |= FIELD_PREP(CMDQ_PRI_1_GRPID, ent->pri.grpid);
+		switch (ent->pri.resp) {
+		case PRI_RESP_DENY:
+		case PRI_RESP_FAIL:
+		case PRI_RESP_SUCC:
+			break;
+		default:
+			return -EINVAL;
+		}
+		cmd[1] |= FIELD_PREP(CMDQ_PRI_1_RESP, ent->pri.resp);
+		break;
+	case CMDQ_OP_CMD_SYNC:
+		if (ent->sync.msiaddr) {
+			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_IRQ);
+			cmd[1] |= ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
+		} else {
+			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
+		}
+		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
+		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
+		break;
+	default:
+		return -ENOENT;
+	}
+
+	return 0;
+}
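The FIELD_PREP() calls above shift each value into position under its field mask. As a rough userspace sketch of what those helpers do (the real GENMASK_ULL()/FIELD_PREP() live in linux/bits.h and linux/bitfield.h and add compile-time checks; CMDQ_0_OP really is bits [7:0], but the VMID position used below is assumed purely for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for the kernel's GENMASK_ULL(h, l). */
#define GENMASK64(h, l) \
	((~0ULL << (l)) & (~0ULL >> (63 - (h))))

/* Shift val up to the mask's lowest set bit, then clamp to the mask. */
static inline uint64_t field_prep(uint64_t mask, uint64_t val)
{
	return (val << __builtin_ctzll(mask)) & mask;
}

/* Pack an opcode plus a VMID into dword 0 of a command, the way
 * arm_smmu_cmdq_build_cmd() does. The VMID field position here is
 * illustrative only, not the architectural one. */
static inline uint64_t pack_tlbi_cmd0(uint64_t opcode, uint64_t vmid)
{
	uint64_t cmd0 = field_prep(GENMASK64(7, 0), opcode);

	cmd0 |= field_prep(GENMASK64(47, 32), vmid);
	return cmd0;
}
```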
+
+static void arm_smmu_cmdq_build_sync_cmd(u64 *cmd, struct arm_smmu_device *smmu,
+					 u32 prod)
+{
+	struct arm_smmu_queue *q = &smmu->cmdq.q;
+	struct arm_smmu_cmdq_ent ent = {
+		.opcode = CMDQ_OP_CMD_SYNC,
+	};
+
+	/*
+	 * Beware that Hi16xx adds an extra 32 bits of goodness to its MSI
+	 * payload, so the write will zero the entire command on that platform.
+	 */
+	if (smmu->features & ARM_SMMU_FEAT_MSI &&
+	    smmu->features & ARM_SMMU_FEAT_COHERENCY) {
+		ent.sync.msiaddr = q->base_dma + Q_IDX(&q->llq, prod) *
+				   q->ent_dwords * 8;
+	}
+
+	arm_smmu_cmdq_build_cmd(cmd, &ent);
+}
+
+static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
+{
+	static const char *cerror_str[] = {
+		[CMDQ_ERR_CERROR_NONE_IDX]	= "No error",
+		[CMDQ_ERR_CERROR_ILL_IDX]	= "Illegal command",
+		[CMDQ_ERR_CERROR_ABT_IDX]	= "Abort on command fetch",
+		[CMDQ_ERR_CERROR_ATC_INV_IDX]	= "ATC invalidate timeout",
+	};
+
+	int i;
+	u64 cmd[CMDQ_ENT_DWORDS];
+	struct arm_smmu_queue *q = &smmu->cmdq.q;
+	u32 cons = readl_relaxed(q->cons_reg);
+	u32 idx = FIELD_GET(CMDQ_CONS_ERR, cons);
+	struct arm_smmu_cmdq_ent cmd_sync = {
+		.opcode = CMDQ_OP_CMD_SYNC,
+	};
+
+	dev_err(smmu->dev, "CMDQ error (cons 0x%08x): %s\n", cons,
+		idx < ARRAY_SIZE(cerror_str) ? cerror_str[idx] : "Unknown");
+
+	switch (idx) {
+	case CMDQ_ERR_CERROR_ABT_IDX:
+		dev_err(smmu->dev, "retrying command fetch\n");
+		/* Fallthrough */
+	case CMDQ_ERR_CERROR_NONE_IDX:
+		return;
+	case CMDQ_ERR_CERROR_ATC_INV_IDX:
+		/*
+		 * ATC Invalidation Completion timeout. CONS is still pointing
+		 * at the CMD_SYNC. Attempt to complete other pending commands
+		 * by repeating the CMD_SYNC, though we might well end up back
+		 * here since the ATC invalidation may still be pending.
+		 */
+		return;
+	case CMDQ_ERR_CERROR_ILL_IDX:
+		/* Fallthrough */
+	default:
+		break;
+	}
+
+	/*
+	 * We may have concurrent producers, so we need to be careful
+	 * not to touch any of the shadow cmdq state.
+	 */
+	queue_read(cmd, Q_ENT(q, cons), q->ent_dwords);
+	dev_err(smmu->dev, "skipping command in error state:\n");
+	for (i = 0; i < ARRAY_SIZE(cmd); ++i)
+		dev_err(smmu->dev, "\t0x%016llx\n", (unsigned long long)cmd[i]);
+
+	/* Convert the erroneous command into a CMD_SYNC */
+	if (arm_smmu_cmdq_build_cmd(cmd, &cmd_sync)) {
+		dev_err(smmu->dev, "failed to convert to CMD_SYNC\n");
+		return;
+	}
+
+	queue_write(Q_ENT(q, cons), cmd, q->ent_dwords);
+}
+
+/*
+ * Command queue locking.
+ * This is a form of bastardised rwlock with the following major changes:
+ *
+ * - The only LOCK routines are exclusive_trylock() and shared_lock().
+ *   Neither have barrier semantics, and instead provide only a control
+ *   dependency.
+ *
+ * - The UNLOCK routines are supplemented with shared_tryunlock(), which
+ *   fails if the caller appears to be the last lock holder (yes, this is
+ *   racy). All successful UNLOCK routines have RELEASE semantics.
+ */
+static void arm_smmu_cmdq_shared_lock(struct arm_smmu_cmdq *cmdq)
+{
+	int val;
+
+	/*
+	 * We can try to avoid the cmpxchg() loop by simply incrementing the
+	 * lock counter. When held in exclusive state, the lock counter is set
+	 * to INT_MIN so these increments won't hurt as the value will remain
+	 * negative.
+	 */
+	if (atomic_fetch_inc_relaxed(&cmdq->lock) >= 0)
+		return;
+
+	do {
+		val = atomic_cond_read_relaxed(&cmdq->lock, VAL >= 0);
+	} while (atomic_cmpxchg_relaxed(&cmdq->lock, val, val + 1) != val);
+}
+
+static void arm_smmu_cmdq_shared_unlock(struct arm_smmu_cmdq *cmdq)
+{
+	(void)atomic_dec_return_release(&cmdq->lock);
+}
+
+static bool arm_smmu_cmdq_shared_tryunlock(struct arm_smmu_cmdq *cmdq)
+{
+	if (atomic_read(&cmdq->lock) == 1)
+		return false;
+
+	arm_smmu_cmdq_shared_unlock(cmdq);
+	return true;
+}
+
+#define arm_smmu_cmdq_exclusive_trylock_irqsave(cmdq, flags)		\
+({									\
+	bool __ret;							\
+	local_irq_save(flags);						\
+	__ret = !atomic_cmpxchg_relaxed(&cmdq->lock, 0, INT_MIN);	\
+	if (!__ret)							\
+		local_irq_restore(flags);				\
+	__ret;								\
+})
+
+#define arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags)		\
+({									\
+	atomic_set_release(&cmdq->lock, 0);				\
+	local_irq_restore(flags);					\
+})
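The counter trick used by these lock routines can be modelled in userspace C11 (a sketch only: the relaxed/acquire/release split and irqsave handling of the kernel version are deliberately elided). Shared holders keep the counter at a positive count; an exclusive holder swings it from 0 to INT_MIN, so concurrent shared increments leave it negative and those lockers fall into the slow path:

```c
#include <assert.h>
#include <limits.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int cmdq_lock;

static void shared_lock(void)
{
	int val;

	/* Fast path: counter >= 0 means no exclusive holder, so a plain
	 * increment takes a shared reference. */
	if (atomic_fetch_add(&cmdq_lock, 1) >= 0)
		return;

	/* Slow path: wait for the exclusive holder to reset the counter
	 * to 0, then race to increment it. The failed increment above is
	 * discarded when the exclusive holder stores 0. */
	do {
		while ((val = atomic_load(&cmdq_lock)) < 0)
			;
	} while (!atomic_compare_exchange_weak(&cmdq_lock, &val, val + 1));
}

static void shared_unlock(void)
{
	atomic_fetch_sub(&cmdq_lock, 1);
}

static bool exclusive_trylock(void)
{
	int expected = 0;

	/* 0 -> INT_MIN: only succeeds when there are no shared holders. */
	return atomic_compare_exchange_strong(&cmdq_lock, &expected,
					      INT_MIN);
}

static void exclusive_unlock(void)
{
	atomic_store(&cmdq_lock, 0);
}
```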
+
+
+/*
+ * Command queue insertion.
+ * This is made fiddly by our attempts to achieve some sort of scalability
+ * since there is one queue shared amongst all of the CPUs in the system.  If
+ * you like mixed-size concurrency, dependency ordering and relaxed atomics,
+ * then you'll *love* this monstrosity.
+ *
+ * The basic idea is to split the queue up into ranges of commands that are
+ * owned by a given CPU; the owner may not have written all of the commands
+ * itself, but is responsible for advancing the hardware prod pointer when
+ * the time comes. The algorithm is roughly:
+ *
+ * 	1. Allocate some space in the queue. At this point we also discover
+ *	   whether the head of the queue is currently owned by another CPU,
+ *	   or whether we are the owner.
+ *
+ *	2. Write our commands into our allocated slots in the queue.
+ *
+ *	3. Mark our slots as valid in arm_smmu_cmdq.valid_map.
+ *
+ *	4. If we are an owner:
+ *		a. Wait for the previous owner to finish.
+ *		b. Mark the queue head as unowned, which tells us the range
+ *		   that we are responsible for publishing.
+ *		c. Wait for all commands in our owned range to become valid.
+ *		d. Advance the hardware prod pointer.
+ *		e. Tell the next owner we've finished.
+ *
+ *	5. If we are inserting a CMD_SYNC (we may or may not have been an
+ *	   owner), then we need to stick around until it has completed:
+ *		a. If we have MSIs, the SMMU can write back into the CMD_SYNC
+ *		   to clear the first 4 bytes.
+ *		b. Otherwise, we spin waiting for the hardware cons pointer to
+ *		   advance past our command.
+ *
+ * The devil is in the details, particularly the use of locking for handling
+ * SYNC completion and freeing up space in the queue before we think that it is
+ * full.
+ */
+static void __arm_smmu_cmdq_poll_set_valid_map(struct arm_smmu_cmdq *cmdq,
+					       u32 sprod, u32 eprod, bool set)
+{
+	u32 swidx, sbidx, ewidx, ebidx;
+	struct arm_smmu_ll_queue llq = {
+		.max_n_shift	= cmdq->q.llq.max_n_shift,
+		.prod		= sprod,
+	};
+
+	ewidx = BIT_WORD(Q_IDX(&llq, eprod));
+	ebidx = Q_IDX(&llq, eprod) % BITS_PER_LONG;
+
+	while (llq.prod != eprod) {
+		unsigned long mask;
+		atomic_long_t *ptr;
+		u32 limit = BITS_PER_LONG;
+
+		swidx = BIT_WORD(Q_IDX(&llq, llq.prod));
+		sbidx = Q_IDX(&llq, llq.prod) % BITS_PER_LONG;
+
+		ptr = &cmdq->valid_map[swidx];
+
+		if ((swidx == ewidx) && (sbidx < ebidx))
+			limit = ebidx;
+
+		mask = GENMASK(limit - 1, sbidx);
+
+		/*
+		 * The valid bit is the inverse of the wrap bit. This means
+		 * that a zero-initialised queue is invalid and, after marking
+		 * all entries as valid, they become invalid again when we
+		 * wrap.
+		 */
+		if (set) {
+			atomic_long_xor(mask, ptr);
+		} else { /* Poll */
+			unsigned long valid;
+
+			valid = (ULONG_MAX + !!Q_WRP(&llq, llq.prod)) & mask;
+			atomic_long_cond_read_relaxed(ptr, (VAL & mask) == valid);
+		}
+
+		llq.prod = queue_inc_prod_n(&llq, limit - sbidx);
+	}
+}
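The wrap/valid inversion described in the comment above can be modelled in isolation: with one bit per slot, the "valid" polarity for a slot is the inverse of the producer's wrap bit, so marking is a plain XOR and a zero-filled map starts out all-invalid. (Toy 8-entry queue; the bit layout below is assumed for the sketch, the driver derives it from max_n_shift.)

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define Q_IDX_BITS	3			/* toy queue: 8 entries */
#define Q_WRP_BIT	(1u << Q_IDX_BITS)	/* wrap flag above the index */

/* A slot is valid when its map bit equals the inverse of the wrap bit
 * of the prod pointer that wrote it. */
static bool entry_valid(unsigned long map, uint32_t prod)
{
	uint32_t idx = prod & (Q_WRP_BIT - 1);
	bool bit = (map >> idx) & 1;

	return bit == !(prod & Q_WRP_BIT);
}

/* Producers mark their slot by flipping the bit, mirroring
 * __arm_smmu_cmdq_poll_set_valid_map() with set == true: entries made
 * valid on one lap automatically become invalid on the next. */
static unsigned long mark_valid(unsigned long map, uint32_t prod)
{
	return map ^ (1ul << (prod & (Q_WRP_BIT - 1)));
}
```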
+
+/* Mark all entries in the range [sprod, eprod) as valid */
+static void arm_smmu_cmdq_set_valid_map(struct arm_smmu_cmdq *cmdq,
+					u32 sprod, u32 eprod)
+{
+	__arm_smmu_cmdq_poll_set_valid_map(cmdq, sprod, eprod, true);
+}
+
+/* Wait for all entries in the range [sprod, eprod) to become valid */
+static void arm_smmu_cmdq_poll_valid_map(struct arm_smmu_cmdq *cmdq,
+					 u32 sprod, u32 eprod)
+{
+	__arm_smmu_cmdq_poll_set_valid_map(cmdq, sprod, eprod, false);
+}
+
+/* Wait for the command queue to become non-full */
+static int arm_smmu_cmdq_poll_until_not_full(struct arm_smmu_device *smmu,
+					     struct arm_smmu_ll_queue *llq)
+{
+	unsigned long flags;
+	struct arm_smmu_queue_poll qp;
+	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
+	int ret = 0;
+
+	/*
+	 * Try to update our copy of cons by grabbing exclusive cmdq access. If
+	 * that fails, spin until somebody else updates it for us.
+	 */
+	if (arm_smmu_cmdq_exclusive_trylock_irqsave(cmdq, flags)) {
+		WRITE_ONCE(cmdq->q.llq.cons, readl_relaxed(cmdq->q.cons_reg));
+		arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags);
+		llq->val = READ_ONCE(cmdq->q.llq.val);
+		return 0;
+	}
+
+	queue_poll_init(smmu, &qp);
+	do {
+		llq->val = READ_ONCE(smmu->cmdq.q.llq.val);
+		if (!queue_full(llq))
+			break;
+
+		ret = queue_poll(&qp);
+	} while (!ret);
+
+	return ret;
+}
+
+/*
+ * Wait until the SMMU signals a CMD_SYNC completion MSI.
+ * Must be called with the cmdq lock held in some capacity.
+ */
+static int __arm_smmu_cmdq_poll_until_msi(struct arm_smmu_device *smmu,
+					  struct arm_smmu_ll_queue *llq)
+{
+	int ret = 0;
+	struct arm_smmu_queue_poll qp;
+	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
+	u32 *cmd = (u32 *)(Q_ENT(&cmdq->q, llq->prod));
+
+	queue_poll_init(smmu, &qp);
+
+	/*
+	 * The MSI won't generate an event, since it's being written back
+	 * into the command queue.
+	 */
+	qp.wfe = false;
+	smp_cond_load_relaxed(cmd, !VAL || (ret = queue_poll(&qp)));
+	llq->cons = ret ? llq->prod : queue_inc_prod_n(llq, 1);
+	return ret;
+}
+
+/*
+ * Wait until the SMMU cons index passes llq->prod.
+ * Must be called with the cmdq lock held in some capacity.
+ */
+static int __arm_smmu_cmdq_poll_until_consumed(struct arm_smmu_device *smmu,
+					       struct arm_smmu_ll_queue *llq)
+{
+	struct arm_smmu_queue_poll qp;
+	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
+	u32 prod = llq->prod;
+	int ret = 0;
+
+	queue_poll_init(smmu, &qp);
+	llq->val = READ_ONCE(smmu->cmdq.q.llq.val);
+	do {
+		if (queue_consumed(llq, prod))
+			break;
+
+		ret = queue_poll(&qp);
+
+		/*
+		 * This needs to be a readl() so that our subsequent call
+		 * to arm_smmu_cmdq_shared_tryunlock() can fail accurately.
+		 *
+		 * Specifically, we need to ensure that we observe all
+		 * shared_lock()s by other CMD_SYNCs that share our owner,
+		 * so that a failing call to tryunlock() means that we're
+		 * the last one out and therefore we can safely advance
+		 * cmdq->q.llq.cons. Roughly speaking:
+		 *
+		 * CPU 0		CPU1			CPU2 (us)
+		 *
+		 * if (sync)
+		 * 	shared_lock();
+		 *
+		 * dma_wmb();
+		 * set_valid_map();
+		 *
+		 * 			if (owner) {
+		 *				poll_valid_map();
+		 *				<control dependency>
+		 *				writel(prod_reg);
+		 *
+		 *						readl(cons_reg);
+		 *						tryunlock();
+		 *
+		 * Requires us to see CPU 0's shared_lock() acquisition.
+		 */
+		llq->cons = readl(cmdq->q.cons_reg);
+	} while (!ret);
+
+	return ret;
+}
+
+static int arm_smmu_cmdq_poll_until_sync(struct arm_smmu_device *smmu,
+					 struct arm_smmu_ll_queue *llq)
+{
+	if (smmu->features & ARM_SMMU_FEAT_MSI &&
+	    smmu->features & ARM_SMMU_FEAT_COHERENCY)
+		return __arm_smmu_cmdq_poll_until_msi(smmu, llq);
+
+	return __arm_smmu_cmdq_poll_until_consumed(smmu, llq);
+}
+
+static void arm_smmu_cmdq_write_entries(struct arm_smmu_cmdq *cmdq, u64 *cmds,
+					u32 prod, int n)
+{
+	int i;
+	struct arm_smmu_ll_queue llq = {
+		.max_n_shift	= cmdq->q.llq.max_n_shift,
+		.prod		= prod,
+	};
+
+	for (i = 0; i < n; ++i) {
+		u64 *cmd = &cmds[i * CMDQ_ENT_DWORDS];
+
+		prod = queue_inc_prod_n(&llq, i);
+		queue_write(Q_ENT(&cmdq->q, prod), cmd, CMDQ_ENT_DWORDS);
+	}
+}
+
+/*
+ * This is the actual insertion function, and provides the following
+ * ordering guarantees to callers:
+ *
+ * - There is a dma_wmb() before publishing any commands to the queue.
+ *   This can be relied upon to order prior writes to data structures
+ *   in memory (such as a CD or an STE) before the command.
+ *
+ * - On completion of a CMD_SYNC, there is a control dependency.
+ *   This can be relied upon to order subsequent writes to memory (e.g.
+ *   freeing an IOVA) after completion of the CMD_SYNC.
+ *
+ * - Command insertion is totally ordered, so if two CPUs each race to
+ *   insert their own list of commands then all of the commands from one
+ *   CPU will appear before any of the commands from the other CPU.
+ */
+static int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
+				       u64 *cmds, int n, bool sync)
+{
+	u64 cmd_sync[CMDQ_ENT_DWORDS];
+	u32 prod;
+	unsigned long flags;
+	bool owner;
+	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
+	struct arm_smmu_ll_queue llq = {
+		.max_n_shift = cmdq->q.llq.max_n_shift,
+	}, head = llq;
+	int ret = 0;
+
+	/* 1. Allocate some space in the queue */
+	local_irq_save(flags);
+	llq.val = READ_ONCE(cmdq->q.llq.val);
+	do {
+		u64 old;
+
+		while (!queue_has_space(&llq, n + sync)) {
+			local_irq_restore(flags);
+			if (arm_smmu_cmdq_poll_until_not_full(smmu, &llq))
+				dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
+			local_irq_save(flags);
+		}
+
+		head.cons = llq.cons;
+		head.prod = queue_inc_prod_n(&llq, n + sync) |
+					     CMDQ_PROD_OWNED_FLAG;
+
+		old = cmpxchg_relaxed(&cmdq->q.llq.val, llq.val, head.val);
+		if (old == llq.val)
+			break;
+
+		llq.val = old;
+	} while (1);
+	owner = !(llq.prod & CMDQ_PROD_OWNED_FLAG);
+	head.prod &= ~CMDQ_PROD_OWNED_FLAG;
+	llq.prod &= ~CMDQ_PROD_OWNED_FLAG;
+
+	/*
+	 * 2. Write our commands into the queue
+	 * Dependency ordering from the cmpxchg() loop above.
+	 */
+	arm_smmu_cmdq_write_entries(cmdq, cmds, llq.prod, n);
+	if (sync) {
+		prod = queue_inc_prod_n(&llq, n);
+		arm_smmu_cmdq_build_sync_cmd(cmd_sync, smmu, prod);
+		queue_write(Q_ENT(&cmdq->q, prod), cmd_sync, CMDQ_ENT_DWORDS);
+
+		/*
+		 * In order to determine completion of our CMD_SYNC, we must
+		 * ensure that the queue can't wrap twice without us noticing.
+		 * We achieve that by taking the cmdq lock as shared before
+		 * marking our slot as valid.
+		 */
+		arm_smmu_cmdq_shared_lock(cmdq);
+	}
+
+	/* 3. Mark our slots as valid, ensuring commands are visible first */
+	dma_wmb();
+	arm_smmu_cmdq_set_valid_map(cmdq, llq.prod, head.prod);
+
+	/* 4. If we are the owner, take control of the SMMU hardware */
+	if (owner) {
+		/* a. Wait for previous owner to finish */
+		atomic_cond_read_relaxed(&cmdq->owner_prod, VAL == llq.prod);
+
+		/* b. Stop gathering work by clearing the owned flag */
+		prod = atomic_fetch_andnot_relaxed(CMDQ_PROD_OWNED_FLAG,
+						   &cmdq->q.llq.atomic.prod);
+		prod &= ~CMDQ_PROD_OWNED_FLAG;
+
+		/*
+		 * c. Wait for any gathered work to be written to the queue.
+		 * Note that we read our own entries so that we have the control
+		 * dependency required by (d).
+		 */
+		arm_smmu_cmdq_poll_valid_map(cmdq, llq.prod, prod);
+
+		/*
+		 * d. Advance the hardware prod pointer
+		 * Control dependency ordering from the entries becoming valid.
+		 */
+		writel_relaxed(prod, cmdq->q.prod_reg);
+
+		/*
+		 * e. Tell the next owner we're done
+		 * Make sure we've updated the hardware first, so that we don't
+		 * race to update prod and potentially move it backwards.
+		 */
+		atomic_set_release(&cmdq->owner_prod, prod);
+	}
+
+	/* 5. If we are inserting a CMD_SYNC, we must wait for it to complete */
+	if (sync) {
+		llq.prod = queue_inc_prod_n(&llq, n);
+		ret = arm_smmu_cmdq_poll_until_sync(smmu, &llq);
+		if (ret) {
+			dev_err_ratelimited(smmu->dev,
+					    "CMD_SYNC timeout at 0x%08x [hwprod 0x%08x, hwcons 0x%08x]\n",
+					    llq.prod,
+					    readl_relaxed(cmdq->q.prod_reg),
+					    readl_relaxed(cmdq->q.cons_reg));
+		}
+
+		/*
+		 * Try to unlock the cmdq lock. This will fail if we're the last
+		 * reader, in which case we can safely update cmdq->q.llq.cons.
+		 */
+		if (!arm_smmu_cmdq_shared_tryunlock(cmdq)) {
+			WRITE_ONCE(cmdq->q.llq.cons, llq.cons);
+			arm_smmu_cmdq_shared_unlock(cmdq);
+		}
+	}
+
+	local_irq_restore(flags);
+	return ret;
+}
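Steps 1 and 4b of the algorithm hinge on a single cmpxchg that both reserves queue space and decides ownership via CMDQ_PROD_OWNED_FLAG. A stripped-down model of just that handover (single prod word, no wrap bit or space check, flag position assumed):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define OWNED_FLAG	(1u << 31)

static atomic_uint prod;	/* shadow prod; low bits are the index */

/* Reserve n slots. Returns the start index and reports whether the
 * caller became the owner, i.e. was the first to set OWNED_FLAG.
 * Simplified from step 1 of arm_smmu_cmdq_issue_cmdlist(). */
static uint32_t reserve(uint32_t n, bool *owner)
{
	uint32_t old = atomic_load(&prod), new;

	do {
		new = ((old + n) & ~OWNED_FLAG) | OWNED_FLAG;
	} while (!atomic_compare_exchange_weak(&prod, &old, new));

	*owner = !(old & OWNED_FLAG);
	return old & ~OWNED_FLAG;
}

/* The owner stops gathering work by clearing the flag (step 4b);
 * the returned value bounds the range it must publish. */
static uint32_t close_batch(void)
{
	return atomic_fetch_and(&prod, ~OWNED_FLAG) & ~OWNED_FLAG;
}
```

Later reservers see the flag already set, so exactly one CPU per batch takes on the prod-pointer update.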
+
+static int arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
+				   struct arm_smmu_cmdq_ent *ent)
+{
+	u64 cmd[CMDQ_ENT_DWORDS];
+
+	if (arm_smmu_cmdq_build_cmd(cmd, ent)) {
+		dev_warn(smmu->dev, "ignoring unknown CMDQ opcode 0x%x\n",
+			 ent->opcode);
+		return -EINVAL;
+	}
+
+	return arm_smmu_cmdq_issue_cmdlist(smmu, cmd, 1, false);
+}
+
+static int arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
+{
+	return arm_smmu_cmdq_issue_cmdlist(smmu, NULL, 0, true);
+}
+
+static void arm_smmu_cmdq_batch_add(struct arm_smmu_device *smmu,
+				    struct arm_smmu_cmdq_batch *cmds,
+				    struct arm_smmu_cmdq_ent *cmd)
+{
+	if (cmds->num == CMDQ_BATCH_ENTRIES) {
+		arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num, false);
+		cmds->num = 0;
+	}
+	arm_smmu_cmdq_build_cmd(&cmds->cmds[cmds->num * CMDQ_ENT_DWORDS], cmd);
+	cmds->num++;
+}
+
+static int arm_smmu_cmdq_batch_submit(struct arm_smmu_device *smmu,
+				      struct arm_smmu_cmdq_batch *cmds)
+{
+	return arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num, true);
+}
+
+/* Context descriptor manipulation functions */
+static void arm_smmu_sync_cd(struct arm_smmu_domain *smmu_domain,
+			     int ssid, bool leaf)
+{
+	size_t i;
+	unsigned long flags;
+	struct arm_smmu_master *master;
+	struct arm_smmu_cmdq_batch cmds = {};
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	struct arm_smmu_cmdq_ent cmd = {
+		.opcode	= CMDQ_OP_CFGI_CD,
+		.cfgi	= {
+			.ssid	= ssid,
+			.leaf	= leaf,
+		},
+	};
+
+	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
+		for (i = 0; i < master->num_sids; i++) {
+			cmd.cfgi.sid = master->sids[i];
+			arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
+		}
+	}
+	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+
+	arm_smmu_cmdq_batch_submit(smmu, &cmds);
+}
+
+static int arm_smmu_alloc_cd_leaf_table(struct arm_smmu_device *smmu,
+					struct arm_smmu_l1_ctx_desc *l1_desc)
+{
+	size_t size = CTXDESC_L2_ENTRIES * (CTXDESC_CD_DWORDS << 3);
+
+	l1_desc->l2ptr = dmam_alloc_coherent(smmu->dev, size,
+					     &l1_desc->l2ptr_dma, GFP_KERNEL);
+	if (!l1_desc->l2ptr) {
+		dev_warn(smmu->dev,
+			 "failed to allocate context descriptor table\n");
+		return -ENOMEM;
+	}
+	return 0;
+}
+
+static void arm_smmu_write_cd_l1_desc(__le64 *dst,
+				      struct arm_smmu_l1_ctx_desc *l1_desc)
+{
+	u64 val = (l1_desc->l2ptr_dma & CTXDESC_L1_DESC_L2PTR_MASK) |
+		  CTXDESC_L1_DESC_V;
+
+	/* See comment in arm_smmu_write_ctx_desc() */
+	WRITE_ONCE(*dst, cpu_to_le64(val));
+}
+
+static __le64 *arm_smmu_get_cd_ptr(struct arm_smmu_domain *smmu_domain,
+				   u32 ssid)
+{
+	__le64 *l1ptr;
+	unsigned int idx;
+	struct arm_smmu_l1_ctx_desc *l1_desc;
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	struct arm_smmu_ctx_desc_cfg *cdcfg = &smmu_domain->s1_cfg.cdcfg;
+
+	if (smmu_domain->s1_cfg.s1fmt == STRTAB_STE_0_S1FMT_LINEAR)
+		return cdcfg->cdtab + ssid * CTXDESC_CD_DWORDS;
+
+	idx = ssid >> CTXDESC_SPLIT;
+	l1_desc = &cdcfg->l1_desc[idx];
+	if (!l1_desc->l2ptr) {
+		if (arm_smmu_alloc_cd_leaf_table(smmu, l1_desc))
+			return NULL;
+
+		l1ptr = cdcfg->cdtab + idx * CTXDESC_L1_DESC_DWORDS;
+		arm_smmu_write_cd_l1_desc(l1ptr, l1_desc);
+		/* An invalid L1CD can be cached */
+		arm_smmu_sync_cd(smmu_domain, ssid, false);
+	}
+	idx = ssid & (CTXDESC_L2_ENTRIES - 1);
+	return l1_desc->l2ptr + idx * CTXDESC_CD_DWORDS;
+}
+
+static int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain,
+				   int ssid, struct arm_smmu_ctx_desc *cd)
+{
+	/*
+	 * This function handles the following cases:
+	 *
+	 * (1) Install primary CD, for normal DMA traffic (SSID = 0).
+	 * (2) Install a secondary CD, for SID+SSID traffic.
+	 * (3) Update ASID of a CD. Atomically write the first 64 bits of the
+	 *     CD, then invalidate the old entry and mappings.
+	 * (4) Remove a secondary CD.
+	 */
+	u64 val;
+	bool cd_live;
+	__le64 *cdptr;
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+
+	if (WARN_ON(ssid >= (1 << smmu_domain->s1_cfg.s1cdmax)))
+		return -E2BIG;
+
+	cdptr = arm_smmu_get_cd_ptr(smmu_domain, ssid);
+	if (!cdptr)
+		return -ENOMEM;
+
+	val = le64_to_cpu(cdptr[0]);
+	cd_live = !!(val & CTXDESC_CD_0_V);
+
+	if (!cd) { /* (4) */
+		val = 0;
+	} else if (cd_live) { /* (3) */
+		val &= ~CTXDESC_CD_0_ASID;
+		val |= FIELD_PREP(CTXDESC_CD_0_ASID, cd->asid);
+		/*
+		 * Until CD+TLB invalidation, both ASIDs may be used for tagging
+		 * this substream's traffic
+		 */
+	} else { /* (1) and (2) */
+		cdptr[1] = cpu_to_le64(cd->ttbr & CTXDESC_CD_1_TTB0_MASK);
+		cdptr[2] = 0;
+		cdptr[3] = cpu_to_le64(cd->mair);
+
+		/*
+		 * STE is live, and the SMMU might read dwords of this CD in any
+		 * order. Ensure that it observes valid values before reading
+		 * V=1.
+		 */
+		arm_smmu_sync_cd(smmu_domain, ssid, true);
+
+		val = cd->tcr |
+#ifdef __BIG_ENDIAN
+			CTXDESC_CD_0_ENDI |
+#endif
+			CTXDESC_CD_0_R | CTXDESC_CD_0_A | CTXDESC_CD_0_ASET |
+			CTXDESC_CD_0_AA64 |
+			FIELD_PREP(CTXDESC_CD_0_ASID, cd->asid) |
+			CTXDESC_CD_0_V;
+
+		/* STALL_MODEL==0b10 && CD.S==0 is ILLEGAL */
+		if (smmu->features & ARM_SMMU_FEAT_STALL_FORCE)
+			val |= CTXDESC_CD_0_S;
+	}
+
+	/*
+	 * The SMMU accesses 64-bit values atomically. See IHI0070Ca 3.21.3
+	 * "Configuration structures and configuration invalidation completion"
+	 *
+	 *   The size of single-copy atomic reads made by the SMMU is
+	 *   IMPLEMENTATION DEFINED but must be at least 64 bits. Any single
+	 *   field within an aligned 64-bit span of a structure can be altered
+	 *   without first making the structure invalid.
+	 */
+	WRITE_ONCE(cdptr[0], cpu_to_le64(val));
+	arm_smmu_sync_cd(smmu_domain, ssid, true);
+	return 0;
+}
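The ordered-publication idea behind cases (1) and (2) — fill the body of the descriptor first, then make dword 0 with V=1 visible in one atomic store — can be sketched with C11 atomics (userspace model only; the driver's CFGI_CD/CMD_SYNC invalidations and the assumed V-bit position have no analogue here):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

#define CD_V	(1ull << 0)	/* valid bit in dword 0 (position assumed) */

struct ctx_desc {
	_Atomic uint64_t d0;	/* TCR/ASID/V live here in the real CD */
	uint64_t d1, d2, d3;	/* TTB0, reserved, MAIR */
};

/* Publish a fresh descriptor: body first, then dword 0 with a release
 * store, so a reader that observes V=1 also observes the body. */
static void publish_cd(struct ctx_desc *cd, uint64_t ttbr, uint64_t mair,
		       uint64_t tcr)
{
	cd->d1 = ttbr;
	cd->d2 = 0;
	cd->d3 = mair;
	atomic_store_explicit(&cd->d0, tcr | CD_V, memory_order_release);
}

static int cd_live(const struct ctx_desc *cd)
{
	return atomic_load_explicit(&cd->d0, memory_order_acquire) & CD_V;
}
```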
+
+static int arm_smmu_alloc_cd_tables(struct arm_smmu_domain *smmu_domain)
+{
+	int ret;
+	size_t l1size;
+	size_t max_contexts;
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
+	struct arm_smmu_ctx_desc_cfg *cdcfg = &cfg->cdcfg;
+
+	max_contexts = 1 << cfg->s1cdmax;
+
+	if (!(smmu->features & ARM_SMMU_FEAT_2_LVL_CDTAB) ||
+	    max_contexts <= CTXDESC_L2_ENTRIES) {
+		cfg->s1fmt = STRTAB_STE_0_S1FMT_LINEAR;
+		cdcfg->num_l1_ents = max_contexts;
+
+		l1size = max_contexts * (CTXDESC_CD_DWORDS << 3);
+	} else {
+		cfg->s1fmt = STRTAB_STE_0_S1FMT_64K_L2;
+		cdcfg->num_l1_ents = DIV_ROUND_UP(max_contexts,
+						  CTXDESC_L2_ENTRIES);
+
+		cdcfg->l1_desc = devm_kcalloc(smmu->dev, cdcfg->num_l1_ents,
+					      sizeof(*cdcfg->l1_desc),
+					      GFP_KERNEL);
+		if (!cdcfg->l1_desc)
+			return -ENOMEM;
+
+		l1size = cdcfg->num_l1_ents * (CTXDESC_L1_DESC_DWORDS << 3);
+	}
+
+	cdcfg->cdtab = dmam_alloc_coherent(smmu->dev, l1size, &cdcfg->cdtab_dma,
+					   GFP_KERNEL);
+	if (!cdcfg->cdtab) {
+		dev_warn(smmu->dev, "failed to allocate context descriptor\n");
+		ret = -ENOMEM;
+		goto err_free_l1;
+	}
+
+	return 0;
+
+err_free_l1:
+	if (cdcfg->l1_desc) {
+		devm_kfree(smmu->dev, cdcfg->l1_desc);
+		cdcfg->l1_desc = NULL;
+	}
+	return ret;
+}
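The top-level sizing choice above — linear when 2-level tables are unsupported or all contexts fit in one leaf, otherwise one L1 descriptor per CTXDESC_L2_ENTRIES contexts — reduces to a small amount of arithmetic. A sketch, with the constants mirrored from the driver as I read them:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Constants mirrored from the driver: a CD is 8 dwords (64 bytes), an
 * L1 descriptor is 1 dword, and each leaf holds 1024 CDs. */
#define CTXDESC_CD_DWORDS	8
#define CTXDESC_L1_DESC_DWORDS	1
#define CTXDESC_L2_ENTRIES	1024

/* Bytes of DMA memory for the top-level table, given the maximum number
 * of contexts (1 << s1cdmax) and whether 2-level tables are available. */
static size_t cd_l1_table_size(size_t max_contexts, bool two_level)
{
	if (!two_level || max_contexts <= CTXDESC_L2_ENTRIES) {
		/* Linear format: one CD per context. */
		return max_contexts * (CTXDESC_CD_DWORDS << 3);
	}
	/* 2-level: one L1 descriptor per leaf of 1024 CDs. */
	size_t num_l1 = (max_contexts + CTXDESC_L2_ENTRIES - 1) /
			CTXDESC_L2_ENTRIES;
	return num_l1 * (CTXDESC_L1_DESC_DWORDS << 3);
}
```

For large SSID spaces the 2-level layout keeps the always-allocated part tiny (e.g. 512 bytes of L1 for 64K contexts), with leaves allocated lazily by arm_smmu_alloc_cd_leaf_table().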
+
+static void arm_smmu_free_cd_tables(struct arm_smmu_domain *smmu_domain)
+{
+	int i;
+	size_t size, l1size;
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	struct arm_smmu_ctx_desc_cfg *cdcfg = &smmu_domain->s1_cfg.cdcfg;
+
+	if (cdcfg->l1_desc) {
+		size = CTXDESC_L2_ENTRIES * (CTXDESC_CD_DWORDS << 3);
+
+		for (i = 0; i < cdcfg->num_l1_ents; i++) {
+			if (!cdcfg->l1_desc[i].l2ptr)
+				continue;
+
+			dmam_free_coherent(smmu->dev, size,
+					   cdcfg->l1_desc[i].l2ptr,
+					   cdcfg->l1_desc[i].l2ptr_dma);
+		}
+		devm_kfree(smmu->dev, cdcfg->l1_desc);
+		cdcfg->l1_desc = NULL;
+
+		l1size = cdcfg->num_l1_ents * (CTXDESC_L1_DESC_DWORDS << 3);
+	} else {
+		l1size = cdcfg->num_l1_ents * (CTXDESC_CD_DWORDS << 3);
+	}
+
+	dmam_free_coherent(smmu->dev, l1size, cdcfg->cdtab, cdcfg->cdtab_dma);
+	cdcfg->cdtab_dma = 0;
+	cdcfg->cdtab = NULL;
+}
+
+static void arm_smmu_free_asid(struct arm_smmu_ctx_desc *cd)
+{
+	if (!cd->asid)
+		return;
+
+	xa_erase(&asid_xa, cd->asid);
+}
+
+/* Stream table manipulation functions */
+static void
+arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc *desc)
+{
+	u64 val = 0;
+
+	val |= FIELD_PREP(STRTAB_L1_DESC_SPAN, desc->span);
+	val |= desc->l2ptr_dma & STRTAB_L1_DESC_L2PTR_MASK;
+
+	/* See comment in arm_smmu_write_ctx_desc() */
+	WRITE_ONCE(*dst, cpu_to_le64(val));
+}
+
+static void arm_smmu_sync_ste_for_sid(struct arm_smmu_device *smmu, u32 sid)
+{
+	struct arm_smmu_cmdq_ent cmd = {
+		.opcode	= CMDQ_OP_CFGI_STE,
+		.cfgi	= {
+			.sid	= sid,
+			.leaf	= true,
+		},
+	};
+
+	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+	arm_smmu_cmdq_issue_sync(smmu);
+}
+
+static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
+				      __le64 *dst)
+{
+	/*
+	 * This is hideously complicated, but we only really care about
+	 * three cases at the moment:
+	 *
+	 * 1. Invalid (all zero) -> bypass/fault (init)
+	 * 2. Bypass/fault -> translation/bypass (attach)
+	 * 3. Translation/bypass -> bypass/fault (detach)
+	 *
+	 * Given that we can't update the STE atomically and the SMMU
+	 * doesn't read the thing in a defined order, that leaves us
+	 * with the following maintenance requirements:
+	 *
+	 * 1. Update Config, return (init time STEs aren't live)
+	 * 2. Write everything apart from dword 0, sync, write dword 0, sync
+	 * 3. Update Config, sync
+	 */
+	u64 val = le64_to_cpu(dst[0]);
+	bool ste_live = false;
+	struct arm_smmu_device *smmu = NULL;
+	struct arm_smmu_s1_cfg *s1_cfg = NULL;
+	struct arm_smmu_s2_cfg *s2_cfg = NULL;
+	struct arm_smmu_domain *smmu_domain = NULL;
+	struct arm_smmu_cmdq_ent prefetch_cmd = {
+		.opcode		= CMDQ_OP_PREFETCH_CFG,
+		.prefetch	= {
+			.sid	= sid,
+		},
+	};
+
+	if (master) {
+		smmu_domain = master->domain;
+		smmu = master->smmu;
+	}
+
+	if (smmu_domain) {
+		switch (smmu_domain->stage) {
+		case ARM_SMMU_DOMAIN_S1:
+			s1_cfg = &smmu_domain->s1_cfg;
+			break;
+		case ARM_SMMU_DOMAIN_S2:
+		case ARM_SMMU_DOMAIN_NESTED:
+			s2_cfg = &smmu_domain->s2_cfg;
+			break;
+		default:
+			break;
+		}
+	}
+
+	if (val & STRTAB_STE_0_V) {
+		switch (FIELD_GET(STRTAB_STE_0_CFG, val)) {
+		case STRTAB_STE_0_CFG_BYPASS:
+			break;
+		case STRTAB_STE_0_CFG_S1_TRANS:
+		case STRTAB_STE_0_CFG_S2_TRANS:
+			ste_live = true;
+			break;
+		case STRTAB_STE_0_CFG_ABORT:
+			BUG_ON(!disable_bypass);
+			break;
+		default:
+			BUG(); /* STE corruption */
+		}
+	}
+
+	/* Nuke the existing STE_0 value, as we're going to rewrite it */
+	val = STRTAB_STE_0_V;
+
+	/* Bypass/fault */
+	if (!smmu_domain || !(s1_cfg || s2_cfg)) {
+		if (!smmu_domain && disable_bypass)
+			val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_ABORT);
+		else
+			val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_BYPASS);
+
+		dst[0] = cpu_to_le64(val);
+		dst[1] = cpu_to_le64(FIELD_PREP(STRTAB_STE_1_SHCFG,
+						STRTAB_STE_1_SHCFG_INCOMING));
+		dst[2] = 0; /* Nuke the VMID */
+		/*
+		 * The SMMU can perform negative caching, so we must sync
+		 * the STE regardless of whether the old value was live.
+		 */
+		if (smmu)
+			arm_smmu_sync_ste_for_sid(smmu, sid);
+		return;
+	}
+
+	if (s1_cfg) {
+		BUG_ON(ste_live);
+		dst[1] = cpu_to_le64(
+			 FIELD_PREP(STRTAB_STE_1_S1DSS, STRTAB_STE_1_S1DSS_SSID0) |
+			 FIELD_PREP(STRTAB_STE_1_S1CIR, STRTAB_STE_1_S1C_CACHE_WBRA) |
+			 FIELD_PREP(STRTAB_STE_1_S1COR, STRTAB_STE_1_S1C_CACHE_WBRA) |
+			 FIELD_PREP(STRTAB_STE_1_S1CSH, ARM_SMMU_SH_ISH) |
+			 FIELD_PREP(STRTAB_STE_1_STRW, STRTAB_STE_1_STRW_NSEL1));
+
+		if (smmu->features & ARM_SMMU_FEAT_STALLS &&
+		   !(smmu->features & ARM_SMMU_FEAT_STALL_FORCE))
+			dst[1] |= cpu_to_le64(STRTAB_STE_1_S1STALLD);
+
+		val |= (s1_cfg->cdcfg.cdtab_dma & STRTAB_STE_0_S1CTXPTR_MASK) |
+			FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_S1_TRANS) |
+			FIELD_PREP(STRTAB_STE_0_S1CDMAX, s1_cfg->s1cdmax) |
+			FIELD_PREP(STRTAB_STE_0_S1FMT, s1_cfg->s1fmt);
+	}
+
+	if (s2_cfg) {
+		BUG_ON(ste_live);
+		dst[2] = cpu_to_le64(
+			 FIELD_PREP(STRTAB_STE_2_S2VMID, s2_cfg->vmid) |
+			 FIELD_PREP(STRTAB_STE_2_VTCR, s2_cfg->vtcr) |
+#ifdef __BIG_ENDIAN
+			 STRTAB_STE_2_S2ENDI |
+#endif
+			 STRTAB_STE_2_S2PTW | STRTAB_STE_2_S2AA64 |
+			 STRTAB_STE_2_S2R);
+
+		dst[3] = cpu_to_le64(s2_cfg->vttbr & STRTAB_STE_3_S2TTB_MASK);
+
+		val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_S2_TRANS);
+	}
+
+	if (master->ats_enabled)
+		dst[1] |= cpu_to_le64(FIELD_PREP(STRTAB_STE_1_EATS,
+						 STRTAB_STE_1_EATS_TRANS));
+
+	arm_smmu_sync_ste_for_sid(smmu, sid);
+	/* See comment in arm_smmu_write_ctx_desc() */
+	WRITE_ONCE(dst[0], cpu_to_le64(val));
+	arm_smmu_sync_ste_for_sid(smmu, sid);
+
+	/* It's likely that we'll want to use the new STE soon */
+	if (!(smmu->options & ARM_SMMU_OPT_SKIP_PREFETCH))
+		arm_smmu_cmdq_issue_cmd(smmu, &prefetch_cmd);
+}
+
+static void arm_smmu_init_bypass_stes(u64 *strtab, unsigned int nent)
+{
+	unsigned int i;
+
+	for (i = 0; i < nent; ++i) {
+		arm_smmu_write_strtab_ent(NULL, -1, strtab);
+		strtab += STRTAB_STE_DWORDS;
+	}
+}
+
+static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
+{
+	size_t size;
+	void *strtab;
+	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+	struct arm_smmu_strtab_l1_desc *desc = &cfg->l1_desc[sid >> STRTAB_SPLIT];
+
+	if (desc->l2ptr)
+		return 0;
+
+	size = 1 << (STRTAB_SPLIT + ilog2(STRTAB_STE_DWORDS) + 3);
+	strtab = &cfg->strtab[(sid >> STRTAB_SPLIT) * STRTAB_L1_DESC_DWORDS];
+
+	desc->span = STRTAB_SPLIT + 1;
+	desc->l2ptr = dmam_alloc_coherent(smmu->dev, size, &desc->l2ptr_dma,
+					  GFP_KERNEL);
+	if (!desc->l2ptr) {
+		dev_err(smmu->dev,
+			"failed to allocate l2 stream table for SID %u\n",
+			sid);
+		return -ENOMEM;
+	}
+
+	arm_smmu_init_bypass_stes(desc->l2ptr, 1 << STRTAB_SPLIT);
+	arm_smmu_write_strtab_l1_desc(strtab, desc);
+	return 0;
+}
+
+/* IRQ and event handlers */
+static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
+{
+	int i;
+	struct arm_smmu_device *smmu = dev;
+	struct arm_smmu_queue *q = &smmu->evtq.q;
+	struct arm_smmu_ll_queue *llq = &q->llq;
+	u64 evt[EVTQ_ENT_DWORDS];
+
+	do {
+		while (!queue_remove_raw(q, evt)) {
+			u8 id = FIELD_GET(EVTQ_0_ID, evt[0]);
+
+			dev_info(smmu->dev, "event 0x%02x received:\n", id);
+			for (i = 0; i < ARRAY_SIZE(evt); ++i)
+				dev_info(smmu->dev, "\t0x%016llx\n",
+					 (unsigned long long)evt[i]);
+
+		}
+
+		/*
+		 * Not much we can do on overflow, so scream and pretend we're
+		 * trying harder.
+		 */
+		if (queue_sync_prod_in(q) == -EOVERFLOW)
+			dev_err(smmu->dev, "EVTQ overflow detected -- events lost\n");
+	} while (!queue_empty(llq));
+
+	/* Sync our overflow flag, as we believe we're up to speed */
+	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
+		    Q_IDX(llq, llq->cons);
+	return IRQ_HANDLED;
+}
+
+static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
+{
+	u32 sid, ssid;
+	u16 grpid;
+	bool ssv, last;
+
+	sid = FIELD_GET(PRIQ_0_SID, evt[0]);
+	ssv = FIELD_GET(PRIQ_0_SSID_V, evt[0]);
+	ssid = ssv ? FIELD_GET(PRIQ_0_SSID, evt[0]) : 0;
+	last = FIELD_GET(PRIQ_0_PRG_LAST, evt[0]);
+	grpid = FIELD_GET(PRIQ_1_PRG_IDX, evt[1]);
+
+	dev_info(smmu->dev, "unexpected PRI request received:\n");
+	dev_info(smmu->dev,
+		 "\tsid 0x%08x.0x%05x: [%u%s] %sprivileged %s%s%s access at iova 0x%016llx\n",
+		 sid, ssid, grpid, last ? "L" : "",
+		 evt[0] & PRIQ_0_PERM_PRIV ? "" : "un",
+		 evt[0] & PRIQ_0_PERM_READ ? "R" : "",
+		 evt[0] & PRIQ_0_PERM_WRITE ? "W" : "",
+		 evt[0] & PRIQ_0_PERM_EXEC ? "X" : "",
+		 evt[1] & PRIQ_1_ADDR_MASK);
+
+	if (last) {
+		struct arm_smmu_cmdq_ent cmd = {
+			.opcode			= CMDQ_OP_PRI_RESP,
+			.substream_valid	= ssv,
+			.pri			= {
+				.sid	= sid,
+				.ssid	= ssid,
+				.grpid	= grpid,
+				.resp	= PRI_RESP_DENY,
+			},
+		};
+
+		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+	}
+}
+
+static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
+{
+	struct arm_smmu_device *smmu = dev;
+	struct arm_smmu_queue *q = &smmu->priq.q;
+	struct arm_smmu_ll_queue *llq = &q->llq;
+	u64 evt[PRIQ_ENT_DWORDS];
+
+	do {
+		while (!queue_remove_raw(q, evt))
+			arm_smmu_handle_ppr(smmu, evt);
+
+		if (queue_sync_prod_in(q) == -EOVERFLOW)
+			dev_err(smmu->dev, "PRIQ overflow detected -- requests lost\n");
+	} while (!queue_empty(llq));
+
+	/* Sync our overflow flag, as we believe we're up to speed */
+	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
+		    Q_IDX(llq, llq->cons);
+	queue_sync_cons_out(q);
+	return IRQ_HANDLED;
+}
+
+static int arm_smmu_device_disable(struct arm_smmu_device *smmu);
+
+static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
+{
+	u32 gerror, gerrorn, active;
+	struct arm_smmu_device *smmu = dev;
+
+	gerror = readl_relaxed(smmu->base + ARM_SMMU_GERROR);
+	gerrorn = readl_relaxed(smmu->base + ARM_SMMU_GERRORN);
+
+	active = gerror ^ gerrorn;
+	if (!(active & GERROR_ERR_MASK))
+		return IRQ_NONE; /* No errors pending */
+
+	dev_warn(smmu->dev,
+		 "unexpected global error reported (0x%08x), this could be serious\n",
+		 active);
+
+	if (active & GERROR_SFM_ERR) {
+		dev_err(smmu->dev, "device has entered Service Failure Mode!\n");
+		arm_smmu_device_disable(smmu);
+	}
+
+	if (active & GERROR_MSI_GERROR_ABT_ERR)
+		dev_warn(smmu->dev, "GERROR MSI write aborted\n");
+
+	if (active & GERROR_MSI_PRIQ_ABT_ERR)
+		dev_warn(smmu->dev, "PRIQ MSI write aborted\n");
+
+	if (active & GERROR_MSI_EVTQ_ABT_ERR)
+		dev_warn(smmu->dev, "EVTQ MSI write aborted\n");
+
+	if (active & GERROR_MSI_CMDQ_ABT_ERR)
+		dev_warn(smmu->dev, "CMDQ MSI write aborted\n");
+
+	if (active & GERROR_PRIQ_ABT_ERR)
+		dev_err(smmu->dev, "PRIQ write aborted -- events may have been lost\n");
+
+	if (active & GERROR_EVTQ_ABT_ERR)
+		dev_err(smmu->dev, "EVTQ write aborted -- events may have been lost\n");
+
+	if (active & GERROR_CMDQ_ERR)
+		arm_smmu_cmdq_skip_err(smmu);
+
+	writel(gerror, smmu->base + ARM_SMMU_GERRORN);
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t arm_smmu_combined_irq_thread(int irq, void *dev)
+{
+	struct arm_smmu_device *smmu = dev;
+
+	arm_smmu_evtq_thread(irq, dev);
+	if (smmu->features & ARM_SMMU_FEAT_PRI)
+		arm_smmu_priq_thread(irq, dev);
+
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t arm_smmu_combined_irq_handler(int irq, void *dev)
+{
+	arm_smmu_gerror_handler(irq, dev);
+	return IRQ_WAKE_THREAD;
+}
+
+static void
+arm_smmu_atc_inv_to_cmd(int ssid, unsigned long iova, size_t size,
+			struct arm_smmu_cmdq_ent *cmd)
+{
+	size_t log2_span;
+	size_t span_mask;
+	/* ATC invalidates are always on 4096-byte pages */
+	size_t inval_grain_shift = 12;
+	unsigned long page_start, page_end;
+
+	*cmd = (struct arm_smmu_cmdq_ent) {
+		.opcode			= CMDQ_OP_ATC_INV,
+		.substream_valid	= !!ssid,
+		.atc.ssid		= ssid,
+	};
+
+	if (!size) {
+		cmd->atc.size = ATC_INV_SIZE_ALL;
+		return;
+	}
+
+	page_start	= iova >> inval_grain_shift;
+	page_end	= (iova + size - 1) >> inval_grain_shift;
+
+	/*
+	 * In an ATS Invalidate Request, the address must be aligned on the
+	 * range size, which must be a power of two number of page sizes. We
+	 * thus have to choose between grossly over-invalidating the region, or
+	 * splitting the invalidation into multiple commands. For simplicity
+	 * we'll go with the first solution, but should refine it in the future
+	 * if multiple commands are shown to be more efficient.
+	 *
+	 * Find the smallest power of two that covers the range. The most
+	 * significant differing bit between the start and end addresses,
+	 * fls(start ^ end), indicates the required span. For example:
+	 *
+	 * We want to invalidate pages [8; 11]. This is already the ideal range:
+	 *		x = 0b1000 ^ 0b1011 = 0b11
+	 *		span = 1 << fls(x) = 4
+	 *
+	 * To invalidate pages [7; 10], we need to invalidate [0; 15]:
+	 *		x = 0b0111 ^ 0b1010 = 0b1101
+	 *		span = 1 << fls(x) = 16
+	 */
+	log2_span	= fls_long(page_start ^ page_end);
+	span_mask	= (1ULL << log2_span) - 1;
+
+	page_start	&= ~span_mask;
+
+	cmd->atc.addr	= page_start << inval_grain_shift;
+	cmd->atc.size	= log2_span;
+}
+
+static int arm_smmu_atc_inv_master(struct arm_smmu_master *master)
+{
+	int i;
+	struct arm_smmu_cmdq_ent cmd;
+
+	arm_smmu_atc_inv_to_cmd(0, 0, 0, &cmd);
+
+	for (i = 0; i < master->num_sids; i++) {
+		cmd.atc.sid = master->sids[i];
+		arm_smmu_cmdq_issue_cmd(master->smmu, &cmd);
+	}
+
+	return arm_smmu_cmdq_issue_sync(master->smmu);
+}
+
+static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
+				   int ssid, unsigned long iova, size_t size)
+{
+	int i;
+	unsigned long flags;
+	struct arm_smmu_cmdq_ent cmd;
+	struct arm_smmu_master *master;
+	struct arm_smmu_cmdq_batch cmds = {};
+
+	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_ATS))
+		return 0;
+
+	/*
+	 * Ensure that we've completed prior invalidation of the main TLBs
+	 * before we read 'nr_ats_masters' in case of a concurrent call to
+	 * arm_smmu_enable_ats():
+	 *
+	 *	// unmap()			// arm_smmu_enable_ats()
+	 *	TLBI+SYNC			atomic_inc(&nr_ats_masters);
+	 *	smp_mb();			[...]
+	 *	atomic_read(&nr_ats_masters);	pci_enable_ats() // writel()
+	 *
+	 * Ensures that we always see the incremented 'nr_ats_masters' count if
+	 * ATS was enabled at the PCI device before completion of the TLBI.
+	 */
+	smp_mb();
+	if (!atomic_read(&smmu_domain->nr_ats_masters))
+		return 0;
+
+	arm_smmu_atc_inv_to_cmd(ssid, iova, size, &cmd);
+
+	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
+		if (!master->ats_enabled)
+			continue;
+
+		for (i = 0; i < master->num_sids; i++) {
+			cmd.atc.sid = master->sids[i];
+			arm_smmu_cmdq_batch_add(smmu_domain->smmu, &cmds, &cmd);
+		}
+	}
+	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+
+	return arm_smmu_cmdq_batch_submit(smmu_domain->smmu, &cmds);
+}
+
+/* IO_PGTABLE API */
+static void arm_smmu_tlb_inv_context(void *cookie)
+{
+	struct arm_smmu_domain *smmu_domain = cookie;
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	struct arm_smmu_cmdq_ent cmd;
+
+	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
+		cmd.opcode	= CMDQ_OP_TLBI_NH_ASID;
+		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd.asid;
+		cmd.tlbi.vmid	= 0;
+	} else {
+		cmd.opcode	= CMDQ_OP_TLBI_S12_VMALL;
+		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
+	}
+
+	/*
+	 * NOTE: when io-pgtable is in non-strict mode, we may get here with
+	 * PTEs previously cleared by unmaps on the current CPU not yet visible
+	 * to the SMMU. We are relying on the dma_wmb() implicit during cmd
+	 * insertion to guarantee those are observed before the TLBI. Do be
+	 * careful, 007.
+	 */
+	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+	arm_smmu_cmdq_issue_sync(smmu);
+	arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0);
+}
+
+static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size,
+				   size_t granule, bool leaf,
+				   struct arm_smmu_domain *smmu_domain)
+{
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	unsigned long start = iova, end = iova + size, num_pages = 0, tg = 0;
+	size_t inv_range = granule;
+	struct arm_smmu_cmdq_batch cmds = {};
+	struct arm_smmu_cmdq_ent cmd = {
+		.tlbi = {
+			.leaf	= leaf,
+		},
+	};
+
+	if (!size)
+		return;
+
+	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
+		cmd.opcode	= CMDQ_OP_TLBI_NH_VA;
+		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd.asid;
+	} else {
+		cmd.opcode	= CMDQ_OP_TLBI_S2_IPA;
+		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
+	}
+
+	if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
+		/* Get the leaf page size */
+		tg = __ffs(smmu_domain->domain.pgsize_bitmap);
+
+		/* Convert page size of 12,14,16 (log2) to 1,2,3 */
+		cmd.tlbi.tg = (tg - 10) / 2;
+
+		/* Determine what level the granule is at */
+		cmd.tlbi.ttl = 4 - ((ilog2(granule) - 3) / (tg - 3));
+
+		num_pages = size >> tg;
+	}
+
+	while (iova < end) {
+		if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
+			/*
+			 * On each iteration of the loop, the range is 5 bits
+			 * worth of the aligned size remaining.
+			 * The range in pages is:
+			 *
+			 * range = (num_pages & (0x1f << __ffs(num_pages)))
+			 */
+			unsigned long scale, num;
+
+			/* Determine the power of 2 multiple number of pages */
+			scale = __ffs(num_pages);
+			cmd.tlbi.scale = scale;
+
+			/* Determine how many chunks of 2^scale size we have */
+			num = (num_pages >> scale) & CMDQ_TLBI_RANGE_NUM_MAX;
+			cmd.tlbi.num = num - 1;
+
+			/* range is num * 2^scale * pgsize */
+			inv_range = num << (scale + tg);
+
+			/* Clear out the lower order bits for the next iteration */
+			num_pages -= num << scale;
+		}
+
+		cmd.tlbi.addr = iova;
+		arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
+		iova += inv_range;
+	}
+	arm_smmu_cmdq_batch_submit(smmu, &cmds);
+
+	/*
+	 * Unfortunately, this can't be leaf-only since we may have
+	 * zapped an entire table.
+	 */
+	arm_smmu_atc_inv_domain(smmu_domain, 0, start, size);
+}
+
+static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather,
+					 unsigned long iova, size_t granule,
+					 void *cookie)
+{
+	struct arm_smmu_domain *smmu_domain = cookie;
+	struct iommu_domain *domain = &smmu_domain->domain;
+
+	iommu_iotlb_gather_add_page(domain, gather, iova, granule);
+}
+
+static void arm_smmu_tlb_inv_walk(unsigned long iova, size_t size,
+				  size_t granule, void *cookie)
+{
+	arm_smmu_tlb_inv_range(iova, size, granule, false, cookie);
+}
+
+static void arm_smmu_tlb_inv_leaf(unsigned long iova, size_t size,
+				  size_t granule, void *cookie)
+{
+	arm_smmu_tlb_inv_range(iova, size, granule, true, cookie);
+}
+
+static const struct iommu_flush_ops arm_smmu_flush_ops = {
+	.tlb_flush_all	= arm_smmu_tlb_inv_context,
+	.tlb_flush_walk = arm_smmu_tlb_inv_walk,
+	.tlb_flush_leaf = arm_smmu_tlb_inv_leaf,
+	.tlb_add_page	= arm_smmu_tlb_inv_page_nosync,
+};
+
+/* IOMMU API */
+static bool arm_smmu_capable(enum iommu_cap cap)
+{
+	switch (cap) {
+	case IOMMU_CAP_CACHE_COHERENCY:
+		return true;
+	case IOMMU_CAP_NOEXEC:
+		return true;
+	default:
+		return false;
+	}
+}
+
+static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
+{
+	struct arm_smmu_domain *smmu_domain;
+
+	if (type != IOMMU_DOMAIN_UNMANAGED &&
+	    type != IOMMU_DOMAIN_DMA &&
+	    type != IOMMU_DOMAIN_IDENTITY)
+		return NULL;
+
+	/*
+	 * Allocate the domain and initialise some of its data structures.
+	 * We can't really do anything meaningful until we've added a
+	 * master.
+	 */
+	smmu_domain = kzalloc(sizeof(*smmu_domain), GFP_KERNEL);
+	if (!smmu_domain)
+		return NULL;
+
+	if (type == IOMMU_DOMAIN_DMA &&
+	    iommu_get_dma_cookie(&smmu_domain->domain)) {
+		kfree(smmu_domain);
+		return NULL;
+	}
+
+	mutex_init(&smmu_domain->init_mutex);
+	INIT_LIST_HEAD(&smmu_domain->devices);
+	spin_lock_init(&smmu_domain->devices_lock);
+
+	return &smmu_domain->domain;
+}
+
+static int arm_smmu_bitmap_alloc(unsigned long *map, int span)
+{
+	int idx, size = 1 << span;
+
+	do {
+		idx = find_first_zero_bit(map, size);
+		if (idx == size)
+			return -ENOSPC;
+	} while (test_and_set_bit(idx, map));
+
+	return idx;
+}
+
+static void arm_smmu_bitmap_free(unsigned long *map, int idx)
+{
+	clear_bit(idx, map);
+}
+
+static void arm_smmu_domain_free(struct iommu_domain *domain)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+
+	iommu_put_dma_cookie(domain);
+	free_io_pgtable_ops(smmu_domain->pgtbl_ops);
+
+	/* Free the CD and ASID, if we allocated them */
+	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
+		struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
+
+		if (cfg->cdcfg.cdtab)
+			arm_smmu_free_cd_tables(smmu_domain);
+		arm_smmu_free_asid(&cfg->cd);
+	} else {
+		struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
+		if (cfg->vmid)
+			arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid);
+	}
+
+	kfree(smmu_domain);
+}
+
+static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
+				       struct arm_smmu_master *master,
+				       struct io_pgtable_cfg *pgtbl_cfg)
+{
+	int ret;
+	u32 asid;
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
+	typeof(&pgtbl_cfg->arm_lpae_s1_cfg.tcr) tcr = &pgtbl_cfg->arm_lpae_s1_cfg.tcr;
+
+	ret = xa_alloc(&asid_xa, &asid, &cfg->cd,
+		       XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL);
+	if (ret)
+		return ret;
+
+	cfg->s1cdmax = master->ssid_bits;
+
+	ret = arm_smmu_alloc_cd_tables(smmu_domain);
+	if (ret)
+		goto out_free_asid;
+
+	cfg->cd.asid	= (u16)asid;
+	cfg->cd.ttbr	= pgtbl_cfg->arm_lpae_s1_cfg.ttbr;
+	cfg->cd.tcr	= FIELD_PREP(CTXDESC_CD_0_TCR_T0SZ, tcr->tsz) |
+			  FIELD_PREP(CTXDESC_CD_0_TCR_TG0, tcr->tg) |
+			  FIELD_PREP(CTXDESC_CD_0_TCR_IRGN0, tcr->irgn) |
+			  FIELD_PREP(CTXDESC_CD_0_TCR_ORGN0, tcr->orgn) |
+			  FIELD_PREP(CTXDESC_CD_0_TCR_SH0, tcr->sh) |
+			  FIELD_PREP(CTXDESC_CD_0_TCR_IPS, tcr->ips) |
+			  CTXDESC_CD_0_TCR_EPD1 | CTXDESC_CD_0_AA64;
+	cfg->cd.mair	= pgtbl_cfg->arm_lpae_s1_cfg.mair;
+
+	/*
+	 * Note that this will end up calling arm_smmu_sync_cd() before
+	 * the master has been added to the devices list for this domain.
+	 * This isn't an issue because the STE hasn't been installed yet.
+	 */
+	ret = arm_smmu_write_ctx_desc(smmu_domain, 0, &cfg->cd);
+	if (ret)
+		goto out_free_cd_tables;
+
+	return 0;
+
+out_free_cd_tables:
+	arm_smmu_free_cd_tables(smmu_domain);
+out_free_asid:
+	arm_smmu_free_asid(&cfg->cd);
+	return ret;
+}
+
+static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
+				       struct arm_smmu_master *master,
+				       struct io_pgtable_cfg *pgtbl_cfg)
+{
+	int vmid;
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
+	typeof(&pgtbl_cfg->arm_lpae_s2_cfg.vtcr) vtcr;
+
+	vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
+	if (vmid < 0)
+		return vmid;
+
+	vtcr = &pgtbl_cfg->arm_lpae_s2_cfg.vtcr;
+	cfg->vmid	= (u16)vmid;
+	cfg->vttbr	= pgtbl_cfg->arm_lpae_s2_cfg.vttbr;
+	cfg->vtcr	= FIELD_PREP(STRTAB_STE_2_VTCR_S2T0SZ, vtcr->tsz) |
+			  FIELD_PREP(STRTAB_STE_2_VTCR_S2SL0, vtcr->sl) |
+			  FIELD_PREP(STRTAB_STE_2_VTCR_S2IR0, vtcr->irgn) |
+			  FIELD_PREP(STRTAB_STE_2_VTCR_S2OR0, vtcr->orgn) |
+			  FIELD_PREP(STRTAB_STE_2_VTCR_S2SH0, vtcr->sh) |
+			  FIELD_PREP(STRTAB_STE_2_VTCR_S2TG, vtcr->tg) |
+			  FIELD_PREP(STRTAB_STE_2_VTCR_S2PS, vtcr->ps);
+	return 0;
+}
+
+static int arm_smmu_domain_finalise(struct iommu_domain *domain,
+				    struct arm_smmu_master *master)
+{
+	int ret;
+	unsigned long ias, oas;
+	enum io_pgtable_fmt fmt;
+	struct io_pgtable_cfg pgtbl_cfg;
+	struct io_pgtable_ops *pgtbl_ops;
+	int (*finalise_stage_fn)(struct arm_smmu_domain *,
+				 struct arm_smmu_master *,
+				 struct io_pgtable_cfg *);
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+
+	if (domain->type == IOMMU_DOMAIN_IDENTITY) {
+		smmu_domain->stage = ARM_SMMU_DOMAIN_BYPASS;
+		return 0;
+	}
+
+	/* Restrict the stage to what we can actually support */
+	if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S1))
+		smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
+	if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S2))
+		smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
+
+	switch (smmu_domain->stage) {
+	case ARM_SMMU_DOMAIN_S1:
+		ias = (smmu->features & ARM_SMMU_FEAT_VAX) ? 52 : 48;
+		ias = min_t(unsigned long, ias, VA_BITS);
+		oas = smmu->ias;
+		fmt = ARM_64_LPAE_S1;
+		finalise_stage_fn = arm_smmu_domain_finalise_s1;
+		break;
+	case ARM_SMMU_DOMAIN_NESTED:
+	case ARM_SMMU_DOMAIN_S2:
+		ias = smmu->ias;
+		oas = smmu->oas;
+		fmt = ARM_64_LPAE_S2;
+		finalise_stage_fn = arm_smmu_domain_finalise_s2;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	pgtbl_cfg = (struct io_pgtable_cfg) {
+		.pgsize_bitmap	= smmu->pgsize_bitmap,
+		.ias		= ias,
+		.oas		= oas,
+		.coherent_walk	= smmu->features & ARM_SMMU_FEAT_COHERENCY,
+		.tlb		= &arm_smmu_flush_ops,
+		.iommu_dev	= smmu->dev,
+	};
+
+	if (smmu_domain->non_strict)
+		pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
+
+	pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain);
+	if (!pgtbl_ops)
+		return -ENOMEM;
+
+	domain->pgsize_bitmap = pgtbl_cfg.pgsize_bitmap;
+	domain->geometry.aperture_end = (1UL << pgtbl_cfg.ias) - 1;
+	domain->geometry.force_aperture = true;
+
+	ret = finalise_stage_fn(smmu_domain, master, &pgtbl_cfg);
+	if (ret < 0) {
+		free_io_pgtable_ops(pgtbl_ops);
+		return ret;
+	}
+
+	smmu_domain->pgtbl_ops = pgtbl_ops;
+	return 0;
+}
+
+static __le64 *arm_smmu_get_step_for_sid(struct arm_smmu_device *smmu, u32 sid)
+{
+	__le64 *step;
+	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+
+	if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) {
+		struct arm_smmu_strtab_l1_desc *l1_desc;
+		int idx;
+
+		/* Two-level walk */
+		idx = (sid >> STRTAB_SPLIT) * STRTAB_L1_DESC_DWORDS;
+		l1_desc = &cfg->l1_desc[idx];
+		idx = (sid & ((1 << STRTAB_SPLIT) - 1)) * STRTAB_STE_DWORDS;
+		step = &l1_desc->l2ptr[idx];
+	} else {
+		/* Simple linear lookup */
+		step = &cfg->strtab[sid * STRTAB_STE_DWORDS];
+	}
+
+	return step;
+}
+
+static void arm_smmu_install_ste_for_dev(struct arm_smmu_master *master)
+{
+	int i, j;
+	struct arm_smmu_device *smmu = master->smmu;
+
+	for (i = 0; i < master->num_sids; ++i) {
+		u32 sid = master->sids[i];
+		__le64 *step = arm_smmu_get_step_for_sid(smmu, sid);
+
+		/* Bridged PCI devices may end up with duplicated IDs */
+		for (j = 0; j < i; j++)
+			if (master->sids[j] == sid)
+				break;
+		if (j < i)
+			continue;
+
+		arm_smmu_write_strtab_ent(master, sid, step);
+	}
+}
+
+static bool arm_smmu_ats_supported(struct arm_smmu_master *master)
+{
+	struct device *dev = master->dev;
+	struct arm_smmu_device *smmu = master->smmu;
+	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+
+	if (!(smmu->features & ARM_SMMU_FEAT_ATS))
+		return false;
+
+	if (!(fwspec->flags & IOMMU_FWSPEC_PCI_RC_ATS))
+		return false;
+
+	return dev_is_pci(dev) && pci_ats_supported(to_pci_dev(dev));
+}
+
+static void arm_smmu_enable_ats(struct arm_smmu_master *master)
+{
+	size_t stu;
+	struct pci_dev *pdev;
+	struct arm_smmu_device *smmu = master->smmu;
+	struct arm_smmu_domain *smmu_domain = master->domain;
+
+	/* Don't enable ATS at the endpoint if it's not enabled in the STE */
+	if (!master->ats_enabled)
+		return;
+
+	/* Smallest Translation Unit: log2 of the smallest supported granule */
+	stu = __ffs(smmu->pgsize_bitmap);
+	pdev = to_pci_dev(master->dev);
+
+	atomic_inc(&smmu_domain->nr_ats_masters);
+	arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0);
+	if (pci_enable_ats(pdev, stu))
+		dev_err(master->dev, "Failed to enable ATS (STU %zu)\n", stu);
+}
+
+static void arm_smmu_disable_ats(struct arm_smmu_master *master)
+{
+	struct arm_smmu_domain *smmu_domain = master->domain;
+
+	if (!master->ats_enabled)
+		return;
+
+	pci_disable_ats(to_pci_dev(master->dev));
+	/*
+	 * Ensure ATS is disabled at the endpoint before we issue the
+	 * ATC invalidation via the SMMU.
+	 */
+	wmb();
+	arm_smmu_atc_inv_master(master);
+	atomic_dec(&smmu_domain->nr_ats_masters);
+}
+
+static int arm_smmu_enable_pasid(struct arm_smmu_master *master)
+{
+	int ret;
+	int features;
+	int num_pasids;
+	struct pci_dev *pdev;
+
+	if (!dev_is_pci(master->dev))
+		return -ENODEV;
+
+	pdev = to_pci_dev(master->dev);
+
+	features = pci_pasid_features(pdev);
+	if (features < 0)
+		return features;
+
+	num_pasids = pci_max_pasids(pdev);
+	if (num_pasids <= 0)
+		return num_pasids;
+
+	ret = pci_enable_pasid(pdev, features);
+	if (ret) {
+		dev_err(&pdev->dev, "Failed to enable PASID\n");
+		return ret;
+	}
+
+	master->ssid_bits = min_t(u8, ilog2(num_pasids),
+				  master->smmu->ssid_bits);
+	return 0;
+}
+
+static void arm_smmu_disable_pasid(struct arm_smmu_master *master)
+{
+	struct pci_dev *pdev;
+
+	if (!dev_is_pci(master->dev))
+		return;
+
+	pdev = to_pci_dev(master->dev);
+
+	if (!pdev->pasid_enabled)
+		return;
+
+	master->ssid_bits = 0;
+	pci_disable_pasid(pdev);
+}
+
+static void arm_smmu_detach_dev(struct arm_smmu_master *master)
+{
+	unsigned long flags;
+	struct arm_smmu_domain *smmu_domain = master->domain;
+
+	if (!smmu_domain)
+		return;
+
+	arm_smmu_disable_ats(master);
+
+	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+	list_del(&master->domain_head);
+	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+
+	master->domain = NULL;
+	master->ats_enabled = false;
+	arm_smmu_install_ste_for_dev(master);
+}
+
+static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
+{
+	int ret = 0;
+	unsigned long flags;
+	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+	struct arm_smmu_device *smmu;
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct arm_smmu_master *master;
+
+	if (!fwspec)
+		return -ENOENT;
+
+	master = dev_iommu_priv_get(dev);
+	smmu = master->smmu;
+
+	arm_smmu_detach_dev(master);
+
+	mutex_lock(&smmu_domain->init_mutex);
+
+	if (!smmu_domain->smmu) {
+		smmu_domain->smmu = smmu;
+		ret = arm_smmu_domain_finalise(domain, master);
+		if (ret) {
+			smmu_domain->smmu = NULL;
+			goto out_unlock;
+		}
+	} else if (smmu_domain->smmu != smmu) {
+		dev_err(dev,
+			"cannot attach to SMMU %s (upstream of %s)\n",
+			dev_name(smmu_domain->smmu->dev),
+			dev_name(smmu->dev));
+		ret = -ENXIO;
+		goto out_unlock;
+	} else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1 &&
+		   master->ssid_bits != smmu_domain->s1_cfg.s1cdmax) {
+		dev_err(dev,
+			"cannot attach to incompatible domain (%u SSID bits != %u)\n",
+			smmu_domain->s1_cfg.s1cdmax, master->ssid_bits);
+		ret = -EINVAL;
+		goto out_unlock;
+	}
+
+	master->domain = smmu_domain;
+
+	if (smmu_domain->stage != ARM_SMMU_DOMAIN_BYPASS)
+		master->ats_enabled = arm_smmu_ats_supported(master);
+
+	arm_smmu_install_ste_for_dev(master);
+
+	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+	list_add(&master->domain_head, &smmu_domain->devices);
+	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+
+	arm_smmu_enable_ats(master);
+
+out_unlock:
+	mutex_unlock(&smmu_domain->init_mutex);
+	return ret;
+}
+
+static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
+			phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
+{
+	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
+
+	if (!ops)
+		return -ENODEV;
+
+	return ops->map(ops, iova, paddr, size, prot);
+}
+
+static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
+			     size_t size, struct iommu_iotlb_gather *gather)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
+
+	if (!ops)
+		return 0;
+
+	return ops->unmap(ops, iova, size, gather);
+}
+
+static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+
+	if (smmu_domain->smmu)
+		arm_smmu_tlb_inv_context(smmu_domain);
+}
+
+static void arm_smmu_iotlb_sync(struct iommu_domain *domain,
+				struct iommu_iotlb_gather *gather)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+
+	arm_smmu_tlb_inv_range(gather->start, gather->end - gather->start,
+			       gather->pgsize, true, smmu_domain);
+}
+
+static phys_addr_t
+arm_smmu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
+{
+	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
+
+	if (domain->type == IOMMU_DOMAIN_IDENTITY)
+		return iova;
+
+	if (!ops)
+		return 0;
+
+	return ops->iova_to_phys(ops, iova);
+}
+
+static struct platform_driver arm_smmu_driver;
+
+static
+struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
+{
+	struct device *dev = driver_find_device_by_fwnode(&arm_smmu_driver.driver,
+							  fwnode);
+	put_device(dev);
+	return dev ? dev_get_drvdata(dev) : NULL;
+}
+
+static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
+{
+	unsigned long limit = smmu->strtab_cfg.num_l1_ents;
+
+	if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB)
+		limit *= 1UL << STRTAB_SPLIT;
+
+	return sid < limit;
+}
+
+static struct iommu_ops arm_smmu_ops;
+
+static struct iommu_device *arm_smmu_probe_device(struct device *dev)
+{
+	int i, ret;
+	struct arm_smmu_device *smmu;
+	struct arm_smmu_master *master;
+	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+
+	if (!fwspec || fwspec->ops != &arm_smmu_ops)
+		return ERR_PTR(-ENODEV);
+
+	if (WARN_ON_ONCE(dev_iommu_priv_get(dev)))
+		return ERR_PTR(-EBUSY);
+
+	smmu = arm_smmu_get_by_fwnode(fwspec->iommu_fwnode);
+	if (!smmu)
+		return ERR_PTR(-ENODEV);
+
+	master = kzalloc(sizeof(*master), GFP_KERNEL);
+	if (!master)
+		return ERR_PTR(-ENOMEM);
+
+	master->dev = dev;
+	master->smmu = smmu;
+	master->sids = fwspec->ids;
+	master->num_sids = fwspec->num_ids;
+	dev_iommu_priv_set(dev, master);
+
+	/* Check the SIDs are in range of the SMMU and our stream table */
+	for (i = 0; i < master->num_sids; i++) {
+		u32 sid = master->sids[i];
+
+		if (!arm_smmu_sid_in_range(smmu, sid)) {
+			ret = -ERANGE;
+			goto err_free_master;
+		}
+
+		/* Ensure l2 strtab is initialised */
+		if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) {
+			ret = arm_smmu_init_l2_strtab(smmu, sid);
+			if (ret)
+				goto err_free_master;
+		}
+	}
+
+	master->ssid_bits = min(smmu->ssid_bits, fwspec->num_pasid_bits);
+
+	/*
+	 * Note that PASID must be enabled before, and disabled after ATS:
+	 * PCI Express Base 4.0r1.0 - 10.5.1.3 ATS Control Register
+	 *
+	 *   Behavior is undefined if this bit is Set and the value of the PASID
+	 *   Enable, Execute Requested Enable, or Privileged Mode Requested bits
+	 *   are changed.
+	 */
+	arm_smmu_enable_pasid(master);
+
+	if (!(smmu->features & ARM_SMMU_FEAT_2_LVL_CDTAB))
+		master->ssid_bits = min_t(u8, master->ssid_bits,
+					  CTXDESC_LINEAR_CDMAX);
+
+	return &smmu->iommu;
+
+err_free_master:
+	kfree(master);
+	dev_iommu_priv_set(dev, NULL);
+	return ERR_PTR(ret);
+}
+
+static void arm_smmu_release_device(struct device *dev)
+{
+	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+	struct arm_smmu_master *master;
+
+	if (!fwspec || fwspec->ops != &arm_smmu_ops)
+		return;
+
+	master = dev_iommu_priv_get(dev);
+	arm_smmu_detach_dev(master);
+	arm_smmu_disable_pasid(master);
+	kfree(master);
+	iommu_fwspec_free(dev);
+}
+
+static struct iommu_group *arm_smmu_device_group(struct device *dev)
+{
+	struct iommu_group *group;
+
+	/*
+	 * We don't support devices sharing stream IDs other than PCI RID
+	 * aliases, since the necessary ID-to-device lookup becomes rather
+	 * impractical given a potential sparse 32-bit stream ID space.
+	 */
+	if (dev_is_pci(dev))
+		group = pci_device_group(dev);
+	else
+		group = generic_device_group(dev);
+
+	return group;
+}
+
+static int arm_smmu_domain_get_attr(struct iommu_domain *domain,
+				    enum iommu_attr attr, void *data)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+
+	switch (domain->type) {
+	case IOMMU_DOMAIN_UNMANAGED:
+		switch (attr) {
+		case DOMAIN_ATTR_NESTING:
+			*(int *)data = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED);
+			return 0;
+		default:
+			return -ENODEV;
+		}
+		break;
+	case IOMMU_DOMAIN_DMA:
+		switch (attr) {
+		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
+			*(int *)data = smmu_domain->non_strict;
+			return 0;
+		default:
+			return -ENODEV;
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+}
+
+static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
+				    enum iommu_attr attr, void *data)
+{
+	int ret = 0;
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+
+	mutex_lock(&smmu_domain->init_mutex);
+
+	switch (domain->type) {
+	case IOMMU_DOMAIN_UNMANAGED:
+		switch (attr) {
+		case DOMAIN_ATTR_NESTING:
+			if (smmu_domain->smmu) {
+				ret = -EPERM;
+				goto out_unlock;
+			}
+
+			if (*(int *)data)
+				smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED;
+			else
+				smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
+			break;
+		default:
+			ret = -ENODEV;
+		}
+		break;
+	case IOMMU_DOMAIN_DMA:
+		switch (attr) {
+		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
+			smmu_domain->non_strict = *(int *)data;
+			break;
+		default:
+			ret = -ENODEV;
+		}
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+out_unlock:
+	mutex_unlock(&smmu_domain->init_mutex);
+	return ret;
+}
+
+static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args)
+{
+	return iommu_fwspec_add_ids(dev, args->args, 1);
+}
+
+static void arm_smmu_get_resv_regions(struct device *dev,
+				      struct list_head *head)
+{
+	struct iommu_resv_region *region;
+	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
+
+	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
+					 prot, IOMMU_RESV_SW_MSI);
+	if (!region)
+		return;
+
+	list_add_tail(&region->list, head);
+
+	iommu_dma_get_resv_regions(dev, head);
+}
+
+static struct iommu_ops arm_smmu_ops = {
+	.capable		= arm_smmu_capable,
+	.domain_alloc		= arm_smmu_domain_alloc,
+	.domain_free		= arm_smmu_domain_free,
+	.attach_dev		= arm_smmu_attach_dev,
+	.map			= arm_smmu_map,
+	.unmap			= arm_smmu_unmap,
+	.flush_iotlb_all	= arm_smmu_flush_iotlb_all,
+	.iotlb_sync		= arm_smmu_iotlb_sync,
+	.iova_to_phys		= arm_smmu_iova_to_phys,
+	.probe_device		= arm_smmu_probe_device,
+	.release_device		= arm_smmu_release_device,
+	.device_group		= arm_smmu_device_group,
+	.domain_get_attr	= arm_smmu_domain_get_attr,
+	.domain_set_attr	= arm_smmu_domain_set_attr,
+	.of_xlate		= arm_smmu_of_xlate,
+	.get_resv_regions	= arm_smmu_get_resv_regions,
+	.put_resv_regions	= generic_iommu_put_resv_regions,
+	.pgsize_bitmap		= -1UL, /* Restricted during device attach */
+};
+
+/* Probing and initialisation functions */
+static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
+				   struct arm_smmu_queue *q,
+				   unsigned long prod_off,
+				   unsigned long cons_off,
+				   size_t dwords, const char *name)
+{
+	size_t qsz;
+
+	do {
+		qsz = ((1 << q->llq.max_n_shift) * dwords) << 3;
+		q->base = dmam_alloc_coherent(smmu->dev, qsz, &q->base_dma,
+					      GFP_KERNEL);
+		if (q->base || qsz < PAGE_SIZE)
+			break;
+
+		q->llq.max_n_shift--;
+	} while (1);
+
+	if (!q->base) {
+		dev_err(smmu->dev,
+			"failed to allocate queue (0x%zx bytes) for %s\n",
+			qsz, name);
+		return -ENOMEM;
+	}
+
+	if (!WARN_ON(q->base_dma & (qsz - 1))) {
+		dev_info(smmu->dev, "allocated %u entries for %s\n",
+			 1 << q->llq.max_n_shift, name);
+	}
+
+	q->prod_reg	= arm_smmu_page1_fixup(prod_off, smmu);
+	q->cons_reg	= arm_smmu_page1_fixup(cons_off, smmu);
+	q->ent_dwords	= dwords;
+
+	q->q_base  = Q_BASE_RWA;
+	q->q_base |= q->base_dma & Q_BASE_ADDR_MASK;
+	q->q_base |= FIELD_PREP(Q_BASE_LOG2SIZE, q->llq.max_n_shift);
+
+	q->llq.prod = q->llq.cons = 0;
+	return 0;
+}
+
+static void arm_smmu_cmdq_free_bitmap(void *data)
+{
+	unsigned long *bitmap = data;
+	bitmap_free(bitmap);
+}
+
+static int arm_smmu_cmdq_init(struct arm_smmu_device *smmu)
+{
+	int ret = 0;
+	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
+	unsigned int nents = 1 << cmdq->q.llq.max_n_shift;
+	atomic_long_t *bitmap;
+
+	atomic_set(&cmdq->owner_prod, 0);
+	atomic_set(&cmdq->lock, 0);
+
+	bitmap = (atomic_long_t *)bitmap_zalloc(nents, GFP_KERNEL);
+	if (!bitmap) {
+		dev_err(smmu->dev, "failed to allocate cmdq bitmap\n");
+		ret = -ENOMEM;
+	} else {
+		cmdq->valid_map = bitmap;
+		devm_add_action(smmu->dev, arm_smmu_cmdq_free_bitmap, bitmap);
+	}
+
+	return ret;
+}
+
+static int arm_smmu_init_queues(struct arm_smmu_device *smmu)
+{
+	int ret;
+
+	/* cmdq */
+	ret = arm_smmu_init_one_queue(smmu, &smmu->cmdq.q, ARM_SMMU_CMDQ_PROD,
+				      ARM_SMMU_CMDQ_CONS, CMDQ_ENT_DWORDS,
+				      "cmdq");
+	if (ret)
+		return ret;
+
+	ret = arm_smmu_cmdq_init(smmu);
+	if (ret)
+		return ret;
+
+	/* evtq */
+	ret = arm_smmu_init_one_queue(smmu, &smmu->evtq.q, ARM_SMMU_EVTQ_PROD,
+				      ARM_SMMU_EVTQ_CONS, EVTQ_ENT_DWORDS,
+				      "evtq");
+	if (ret)
+		return ret;
+
+	/* priq */
+	if (!(smmu->features & ARM_SMMU_FEAT_PRI))
+		return 0;
+
+	return arm_smmu_init_one_queue(smmu, &smmu->priq.q, ARM_SMMU_PRIQ_PROD,
+				       ARM_SMMU_PRIQ_CONS, PRIQ_ENT_DWORDS,
+				       "priq");
+}
+
+static int arm_smmu_init_l1_strtab(struct arm_smmu_device *smmu)
+{
+	unsigned int i;
+	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+	size_t size = sizeof(*cfg->l1_desc) * cfg->num_l1_ents;
+	void *strtab = smmu->strtab_cfg.strtab;
+
+	cfg->l1_desc = devm_kzalloc(smmu->dev, size, GFP_KERNEL);
+	if (!cfg->l1_desc) {
+		dev_err(smmu->dev, "failed to allocate l1 stream table desc\n");
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < cfg->num_l1_ents; ++i) {
+		arm_smmu_write_strtab_l1_desc(strtab, &cfg->l1_desc[i]);
+		strtab += STRTAB_L1_DESC_DWORDS << 3;
+	}
+
+	return 0;
+}
+
+static int arm_smmu_init_strtab_2lvl(struct arm_smmu_device *smmu)
+{
+	void *strtab;
+	u64 reg;
+	u32 size, l1size;
+	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+
+	/* Calculate the L1 size, capped to the SIDSIZE. */
+	size = STRTAB_L1_SZ_SHIFT - (ilog2(STRTAB_L1_DESC_DWORDS) + 3);
+	size = min(size, smmu->sid_bits - STRTAB_SPLIT);
+	cfg->num_l1_ents = 1 << size;
+
+	size += STRTAB_SPLIT;
+	if (size < smmu->sid_bits)
+		dev_warn(smmu->dev,
+			 "2-level strtab only covers %u/%u bits of SID\n",
+			 size, smmu->sid_bits);
+
+	l1size = cfg->num_l1_ents * (STRTAB_L1_DESC_DWORDS << 3);
+	strtab = dmam_alloc_coherent(smmu->dev, l1size, &cfg->strtab_dma,
+				     GFP_KERNEL);
+	if (!strtab) {
+		dev_err(smmu->dev,
+			"failed to allocate l1 stream table (%u bytes)\n",
+			l1size);
+		return -ENOMEM;
+	}
+	cfg->strtab = strtab;
+
+	/* Configure strtab_base_cfg for 2 levels */
+	reg  = FIELD_PREP(STRTAB_BASE_CFG_FMT, STRTAB_BASE_CFG_FMT_2LVL);
+	reg |= FIELD_PREP(STRTAB_BASE_CFG_LOG2SIZE, size);
+	reg |= FIELD_PREP(STRTAB_BASE_CFG_SPLIT, STRTAB_SPLIT);
+	cfg->strtab_base_cfg = reg;
+
+	return arm_smmu_init_l1_strtab(smmu);
+}
+
+static int arm_smmu_init_strtab_linear(struct arm_smmu_device *smmu)
+{
+	void *strtab;
+	u64 reg;
+	u32 size;
+	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+
+	size = (1 << smmu->sid_bits) * (STRTAB_STE_DWORDS << 3);
+	strtab = dmam_alloc_coherent(smmu->dev, size, &cfg->strtab_dma,
+				     GFP_KERNEL);
+	if (!strtab) {
+		dev_err(smmu->dev,
+			"failed to allocate linear stream table (%u bytes)\n",
+			size);
+		return -ENOMEM;
+	}
+	cfg->strtab = strtab;
+	cfg->num_l1_ents = 1 << smmu->sid_bits;
+
+	/* Configure strtab_base_cfg for a linear table covering all SIDs */
+	reg  = FIELD_PREP(STRTAB_BASE_CFG_FMT, STRTAB_BASE_CFG_FMT_LINEAR);
+	reg |= FIELD_PREP(STRTAB_BASE_CFG_LOG2SIZE, smmu->sid_bits);
+	cfg->strtab_base_cfg = reg;
+
+	arm_smmu_init_bypass_stes(strtab, cfg->num_l1_ents);
+	return 0;
+}
+
+static int arm_smmu_init_strtab(struct arm_smmu_device *smmu)
+{
+	u64 reg;
+	int ret;
+
+	if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB)
+		ret = arm_smmu_init_strtab_2lvl(smmu);
+	else
+		ret = arm_smmu_init_strtab_linear(smmu);
+
+	if (ret)
+		return ret;
+
+	/* Set the strtab base address */
+	reg  = smmu->strtab_cfg.strtab_dma & STRTAB_BASE_ADDR_MASK;
+	reg |= STRTAB_BASE_RA;
+	smmu->strtab_cfg.strtab_base = reg;
+
+	/* Allocate the first VMID for stage-2 bypass STEs */
+	set_bit(0, smmu->vmid_map);
+	return 0;
+}
+
+static int arm_smmu_init_structures(struct arm_smmu_device *smmu)
+{
+	int ret;
+
+	ret = arm_smmu_init_queues(smmu);
+	if (ret)
+		return ret;
+
+	return arm_smmu_init_strtab(smmu);
+}
+
+static int arm_smmu_write_reg_sync(struct arm_smmu_device *smmu, u32 val,
+				   unsigned int reg_off, unsigned int ack_off)
+{
+	u32 reg;
+
+	writel_relaxed(val, smmu->base + reg_off);
+	return readl_relaxed_poll_timeout(smmu->base + ack_off, reg, reg == val,
+					  1, ARM_SMMU_POLL_TIMEOUT_US);
+}
+
+/* GBPA is "special" */
+static int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr)
+{
+	int ret;
+	u32 reg, __iomem *gbpa = smmu->base + ARM_SMMU_GBPA;
+
+	ret = readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE),
+					 1, ARM_SMMU_POLL_TIMEOUT_US);
+	if (ret)
+		return ret;
+
+	reg &= ~clr;
+	reg |= set;
+	writel_relaxed(reg | GBPA_UPDATE, gbpa);
+	ret = readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE),
+					 1, ARM_SMMU_POLL_TIMEOUT_US);
+
+	if (ret)
+		dev_err(smmu->dev, "GBPA not responding to update\n");
+	return ret;
+}
+
+static void arm_smmu_free_msis(void *data)
+{
+	struct device *dev = data;
+	platform_msi_domain_free_irqs(dev);
+}
+
+static void arm_smmu_write_msi_msg(struct msi_desc *desc, struct msi_msg *msg)
+{
+	phys_addr_t doorbell;
+	struct device *dev = msi_desc_to_dev(desc);
+	struct arm_smmu_device *smmu = dev_get_drvdata(dev);
+	phys_addr_t *cfg = arm_smmu_msi_cfg[desc->platform.msi_index];
+
+	doorbell = (((u64)msg->address_hi) << 32) | msg->address_lo;
+	doorbell &= MSI_CFG0_ADDR_MASK;
+
+	writeq_relaxed(doorbell, smmu->base + cfg[0]);
+	writel_relaxed(msg->data, smmu->base + cfg[1]);
+	writel_relaxed(ARM_SMMU_MEMATTR_DEVICE_nGnRE, smmu->base + cfg[2]);
+}
+
+static void arm_smmu_setup_msis(struct arm_smmu_device *smmu)
+{
+	struct msi_desc *desc;
+	int ret, nvec = ARM_SMMU_MAX_MSIS;
+	struct device *dev = smmu->dev;
+
+	/* Clear the MSI address regs */
+	writeq_relaxed(0, smmu->base + ARM_SMMU_GERROR_IRQ_CFG0);
+	writeq_relaxed(0, smmu->base + ARM_SMMU_EVTQ_IRQ_CFG0);
+
+	if (smmu->features & ARM_SMMU_FEAT_PRI)
+		writeq_relaxed(0, smmu->base + ARM_SMMU_PRIQ_IRQ_CFG0);
+	else
+		nvec--;
+
+	if (!(smmu->features & ARM_SMMU_FEAT_MSI))
+		return;
+
+	if (!dev->msi_domain) {
+		dev_info(smmu->dev, "msi_domain absent - falling back to wired irqs\n");
+		return;
+	}
+
+	/* Allocate MSIs for evtq, gerror and priq. Ignore cmdq */
+	ret = platform_msi_domain_alloc_irqs(dev, nvec, arm_smmu_write_msi_msg);
+	if (ret) {
+		dev_warn(dev, "failed to allocate MSIs - falling back to wired irqs\n");
+		return;
+	}
+
+	for_each_msi_entry(desc, dev) {
+		switch (desc->platform.msi_index) {
+		case EVTQ_MSI_INDEX:
+			smmu->evtq.q.irq = desc->irq;
+			break;
+		case GERROR_MSI_INDEX:
+			smmu->gerr_irq = desc->irq;
+			break;
+		case PRIQ_MSI_INDEX:
+			smmu->priq.q.irq = desc->irq;
+			break;
+		default:	/* Unknown */
+			continue;
+		}
+	}
+
+	/* Add callback to free MSIs on teardown */
+	devm_add_action(dev, arm_smmu_free_msis, dev);
+}
+
+static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
+{
+	int irq, ret;
+
+	arm_smmu_setup_msis(smmu);
+
+	/* Request interrupt lines */
+	irq = smmu->evtq.q.irq;
+	if (irq) {
+		ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
+						arm_smmu_evtq_thread,
+						IRQF_ONESHOT,
+						"arm-smmu-v3-evtq", smmu);
+		if (ret < 0)
+			dev_warn(smmu->dev, "failed to enable evtq irq\n");
+	} else {
+		dev_warn(smmu->dev, "no evtq irq - events will not be reported!\n");
+	}
+
+	irq = smmu->gerr_irq;
+	if (irq) {
+		ret = devm_request_irq(smmu->dev, irq, arm_smmu_gerror_handler,
+				       0, "arm-smmu-v3-gerror", smmu);
+		if (ret < 0)
+			dev_warn(smmu->dev, "failed to enable gerror irq\n");
+	} else {
+		dev_warn(smmu->dev, "no gerr irq - errors will not be reported!\n");
+	}
+
+	if (smmu->features & ARM_SMMU_FEAT_PRI) {
+		irq = smmu->priq.q.irq;
+		if (irq) {
+			ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
+							arm_smmu_priq_thread,
+							IRQF_ONESHOT,
+							"arm-smmu-v3-priq",
+							smmu);
+			if (ret < 0)
+				dev_warn(smmu->dev,
+					 "failed to enable priq irq\n");
+		} else {
+			dev_warn(smmu->dev, "no priq irq - PRI will be broken\n");
+		}
+	}
+}
+
+static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
+{
+	int ret, irq;
+	u32 irqen_flags = IRQ_CTRL_EVTQ_IRQEN | IRQ_CTRL_GERROR_IRQEN;
+
+	/* Disable IRQs first */
+	ret = arm_smmu_write_reg_sync(smmu, 0, ARM_SMMU_IRQ_CTRL,
+				      ARM_SMMU_IRQ_CTRLACK);
+	if (ret) {
+		dev_err(smmu->dev, "failed to disable irqs\n");
+		return ret;
+	}
+
+	irq = smmu->combined_irq;
+	if (irq) {
+		/*
+		 * Cavium ThunderX2 implementation doesn't support unique irq
+		 * lines. Use a single irq line for all the SMMUv3 interrupts.
+		 */
+		ret = devm_request_threaded_irq(smmu->dev, irq,
+					arm_smmu_combined_irq_handler,
+					arm_smmu_combined_irq_thread,
+					IRQF_ONESHOT,
+					"arm-smmu-v3-combined-irq", smmu);
+		if (ret < 0)
+			dev_warn(smmu->dev, "failed to enable combined irq\n");
+	} else
+		arm_smmu_setup_unique_irqs(smmu);
+
+	if (smmu->features & ARM_SMMU_FEAT_PRI)
+		irqen_flags |= IRQ_CTRL_PRIQ_IRQEN;
+
+	/* Enable interrupt generation on the SMMU */
+	ret = arm_smmu_write_reg_sync(smmu, irqen_flags,
+				      ARM_SMMU_IRQ_CTRL, ARM_SMMU_IRQ_CTRLACK);
+	if (ret)
+		dev_warn(smmu->dev, "failed to enable irqs\n");
+
+	return 0;
+}
+
+static int arm_smmu_device_disable(struct arm_smmu_device *smmu)
+{
+	int ret;
+
+	ret = arm_smmu_write_reg_sync(smmu, 0, ARM_SMMU_CR0, ARM_SMMU_CR0ACK);
+	if (ret)
+		dev_err(smmu->dev, "failed to clear cr0\n");
+
+	return ret;
+}
+
+static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
+{
+	int ret;
+	u32 reg, enables;
+	struct arm_smmu_cmdq_ent cmd;
+
+	/* Clear CR0 and sync (disables SMMU and queue processing) */
+	reg = readl_relaxed(smmu->base + ARM_SMMU_CR0);
+	if (reg & CR0_SMMUEN) {
+		dev_warn(smmu->dev, "SMMU currently enabled! Resetting...\n");
+		WARN_ON(is_kdump_kernel() && !disable_bypass);
+		arm_smmu_update_gbpa(smmu, GBPA_ABORT, 0);
+	}
+
+	ret = arm_smmu_device_disable(smmu);
+	if (ret)
+		return ret;
+
+	/* CR1 (table and queue memory attributes) */
+	reg = FIELD_PREP(CR1_TABLE_SH, ARM_SMMU_SH_ISH) |
+	      FIELD_PREP(CR1_TABLE_OC, CR1_CACHE_WB) |
+	      FIELD_PREP(CR1_TABLE_IC, CR1_CACHE_WB) |
+	      FIELD_PREP(CR1_QUEUE_SH, ARM_SMMU_SH_ISH) |
+	      FIELD_PREP(CR1_QUEUE_OC, CR1_CACHE_WB) |
+	      FIELD_PREP(CR1_QUEUE_IC, CR1_CACHE_WB);
+	writel_relaxed(reg, smmu->base + ARM_SMMU_CR1);
+
+	/* CR2 (miscellaneous configuration) */
+	reg = CR2_PTM | CR2_RECINVSID | CR2_E2H;
+	writel_relaxed(reg, smmu->base + ARM_SMMU_CR2);
+
+	/* Stream table */
+	writeq_relaxed(smmu->strtab_cfg.strtab_base,
+		       smmu->base + ARM_SMMU_STRTAB_BASE);
+	writel_relaxed(smmu->strtab_cfg.strtab_base_cfg,
+		       smmu->base + ARM_SMMU_STRTAB_BASE_CFG);
+
+	/* Command queue */
+	writeq_relaxed(smmu->cmdq.q.q_base, smmu->base + ARM_SMMU_CMDQ_BASE);
+	writel_relaxed(smmu->cmdq.q.llq.prod, smmu->base + ARM_SMMU_CMDQ_PROD);
+	writel_relaxed(smmu->cmdq.q.llq.cons, smmu->base + ARM_SMMU_CMDQ_CONS);
+
+	enables = CR0_CMDQEN;
+	ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
+				      ARM_SMMU_CR0ACK);
+	if (ret) {
+		dev_err(smmu->dev, "failed to enable command queue\n");
+		return ret;
+	}
+
+	/* Invalidate any cached configuration */
+	cmd.opcode = CMDQ_OP_CFGI_ALL;
+	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+	arm_smmu_cmdq_issue_sync(smmu);
+
+	/* Invalidate any stale TLB entries */
+	if (smmu->features & ARM_SMMU_FEAT_HYP) {
+		cmd.opcode = CMDQ_OP_TLBI_EL2_ALL;
+		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+	}
+
+	cmd.opcode = CMDQ_OP_TLBI_NSNH_ALL;
+	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+	arm_smmu_cmdq_issue_sync(smmu);
+
+	/* Event queue */
+	writeq_relaxed(smmu->evtq.q.q_base, smmu->base + ARM_SMMU_EVTQ_BASE);
+	writel_relaxed(smmu->evtq.q.llq.prod,
+		       arm_smmu_page1_fixup(ARM_SMMU_EVTQ_PROD, smmu));
+	writel_relaxed(smmu->evtq.q.llq.cons,
+		       arm_smmu_page1_fixup(ARM_SMMU_EVTQ_CONS, smmu));
+
+	enables |= CR0_EVTQEN;
+	ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
+				      ARM_SMMU_CR0ACK);
+	if (ret) {
+		dev_err(smmu->dev, "failed to enable event queue\n");
+		return ret;
+	}
+
+	/* PRI queue */
+	if (smmu->features & ARM_SMMU_FEAT_PRI) {
+		writeq_relaxed(smmu->priq.q.q_base,
+			       smmu->base + ARM_SMMU_PRIQ_BASE);
+		writel_relaxed(smmu->priq.q.llq.prod,
+			       arm_smmu_page1_fixup(ARM_SMMU_PRIQ_PROD, smmu));
+		writel_relaxed(smmu->priq.q.llq.cons,
+			       arm_smmu_page1_fixup(ARM_SMMU_PRIQ_CONS, smmu));
+
+		enables |= CR0_PRIQEN;
+		ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
+					      ARM_SMMU_CR0ACK);
+		if (ret) {
+			dev_err(smmu->dev, "failed to enable PRI queue\n");
+			return ret;
+		}
+	}
+
+	if (smmu->features & ARM_SMMU_FEAT_ATS) {
+		enables |= CR0_ATSCHK;
+		ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
+					      ARM_SMMU_CR0ACK);
+		if (ret) {
+			dev_err(smmu->dev, "failed to enable ATS check\n");
+			return ret;
+		}
+	}
+
+	ret = arm_smmu_setup_irqs(smmu);
+	if (ret) {
+		dev_err(smmu->dev, "failed to setup irqs\n");
+		return ret;
+	}
+
+	if (is_kdump_kernel())
+		enables &= ~(CR0_EVTQEN | CR0_PRIQEN);
+
+	/* Enable the SMMU interface, or ensure bypass */
+	if (!bypass || disable_bypass) {
+		enables |= CR0_SMMUEN;
+	} else {
+		ret = arm_smmu_update_gbpa(smmu, 0, GBPA_ABORT);
+		if (ret)
+			return ret;
+	}
+	ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
+				      ARM_SMMU_CR0ACK);
+	if (ret) {
+		dev_err(smmu->dev, "failed to enable SMMU interface\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
+{
+	u32 reg;
+	bool coherent = smmu->features & ARM_SMMU_FEAT_COHERENCY;
+
+	/* IDR0 */
+	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR0);
+
+	/* 2-level structures */
+	if (FIELD_GET(IDR0_ST_LVL, reg) == IDR0_ST_LVL_2LVL)
+		smmu->features |= ARM_SMMU_FEAT_2_LVL_STRTAB;
+
+	if (reg & IDR0_CD2L)
+		smmu->features |= ARM_SMMU_FEAT_2_LVL_CDTAB;
+
+	/*
+	 * Translation table endianness.
+	 * We currently require the same endianness as the CPU, but this
+	 * could be changed later by adding a new IO_PGTABLE_QUIRK.
+	 */
+	switch (FIELD_GET(IDR0_TTENDIAN, reg)) {
+	case IDR0_TTENDIAN_MIXED:
+		smmu->features |= ARM_SMMU_FEAT_TT_LE | ARM_SMMU_FEAT_TT_BE;
+		break;
+#ifdef __BIG_ENDIAN
+	case IDR0_TTENDIAN_BE:
+		smmu->features |= ARM_SMMU_FEAT_TT_BE;
+		break;
+#else
+	case IDR0_TTENDIAN_LE:
+		smmu->features |= ARM_SMMU_FEAT_TT_LE;
+		break;
+#endif
+	default:
+		dev_err(smmu->dev, "unknown/unsupported TT endianness!\n");
+		return -ENXIO;
+	}
+
+	/* Boolean feature flags */
+	if (IS_ENABLED(CONFIG_PCI_PRI) && reg & IDR0_PRI)
+		smmu->features |= ARM_SMMU_FEAT_PRI;
+
+	if (IS_ENABLED(CONFIG_PCI_ATS) && reg & IDR0_ATS)
+		smmu->features |= ARM_SMMU_FEAT_ATS;
+
+	if (reg & IDR0_SEV)
+		smmu->features |= ARM_SMMU_FEAT_SEV;
+
+	if (reg & IDR0_MSI)
+		smmu->features |= ARM_SMMU_FEAT_MSI;
+
+	if (reg & IDR0_HYP)
+		smmu->features |= ARM_SMMU_FEAT_HYP;
+
+	/*
+	 * The coherency feature as set by FW is used in preference to the ID
+	 * register, but warn on mismatch.
+	 */
+	if (!!(reg & IDR0_COHACC) != coherent)
+		dev_warn(smmu->dev, "IDR0.COHACC overridden by FW configuration (%s)\n",
+			 coherent ? "true" : "false");
+
+	switch (FIELD_GET(IDR0_STALL_MODEL, reg)) {
+	case IDR0_STALL_MODEL_FORCE:
+		smmu->features |= ARM_SMMU_FEAT_STALL_FORCE;
+		/* Fallthrough */
+	case IDR0_STALL_MODEL_STALL:
+		smmu->features |= ARM_SMMU_FEAT_STALLS;
+	}
+
+	if (reg & IDR0_S1P)
+		smmu->features |= ARM_SMMU_FEAT_TRANS_S1;
+
+	if (reg & IDR0_S2P)
+		smmu->features |= ARM_SMMU_FEAT_TRANS_S2;
+
+	if (!(reg & (IDR0_S1P | IDR0_S2P))) {
+		dev_err(smmu->dev, "no translation support!\n");
+		return -ENXIO;
+	}
+
+	/* We only support the AArch64 table format at present */
+	switch (FIELD_GET(IDR0_TTF, reg)) {
+	case IDR0_TTF_AARCH32_64:
+		smmu->ias = 40;
+		/* Fallthrough */
+	case IDR0_TTF_AARCH64:
+		break;
+	default:
+		dev_err(smmu->dev, "AArch64 table format not supported!\n");
+		return -ENXIO;
+	}
+
+	/* ASID/VMID sizes */
+	smmu->asid_bits = reg & IDR0_ASID16 ? 16 : 8;
+	smmu->vmid_bits = reg & IDR0_VMID16 ? 16 : 8;
+
+	/* IDR1 */
+	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR1);
+	if (reg & (IDR1_TABLES_PRESET | IDR1_QUEUES_PRESET | IDR1_REL)) {
+		dev_err(smmu->dev, "embedded implementation not supported\n");
+		return -ENXIO;
+	}
+
+	/* Queue sizes, capped to ensure natural alignment */
+	smmu->cmdq.q.llq.max_n_shift = min_t(u32, CMDQ_MAX_SZ_SHIFT,
+					     FIELD_GET(IDR1_CMDQS, reg));
+	if (smmu->cmdq.q.llq.max_n_shift <= ilog2(CMDQ_BATCH_ENTRIES)) {
+		/*
+		 * We don't support splitting up batches, so one batch of
+		 * commands plus an extra sync needs to fit inside the command
+		 * queue. There's also no way we can handle the weird alignment
+		 * restrictions on the base pointer for a unit-length queue.
+		 */
+		dev_err(smmu->dev, "command queue size <= %d entries not supported\n",
+			CMDQ_BATCH_ENTRIES);
+		return -ENXIO;
+	}
+
+	smmu->evtq.q.llq.max_n_shift = min_t(u32, EVTQ_MAX_SZ_SHIFT,
+					     FIELD_GET(IDR1_EVTQS, reg));
+	smmu->priq.q.llq.max_n_shift = min_t(u32, PRIQ_MAX_SZ_SHIFT,
+					     FIELD_GET(IDR1_PRIQS, reg));
+
+	/* SID/SSID sizes */
+	smmu->ssid_bits = FIELD_GET(IDR1_SSIDSIZE, reg);
+	smmu->sid_bits = FIELD_GET(IDR1_SIDSIZE, reg);
+
+	/*
+	 * If the SMMU supports fewer bits than would fill a single L2 stream
+	 * table, use a linear table instead.
+	 */
+	if (smmu->sid_bits <= STRTAB_SPLIT)
+		smmu->features &= ~ARM_SMMU_FEAT_2_LVL_STRTAB;
+
+	/* IDR3 */
+	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR3);
+	if (FIELD_GET(IDR3_RIL, reg))
+		smmu->features |= ARM_SMMU_FEAT_RANGE_INV;
+
+	/* IDR5 */
+	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR5);
+
+	/* Maximum number of outstanding stalls */
+	smmu->evtq.max_stalls = FIELD_GET(IDR5_STALL_MAX, reg);
+
+	/* Page sizes */
+	if (reg & IDR5_GRAN64K)
+		smmu->pgsize_bitmap |= SZ_64K | SZ_512M;
+	if (reg & IDR5_GRAN16K)
+		smmu->pgsize_bitmap |= SZ_16K | SZ_32M;
+	if (reg & IDR5_GRAN4K)
+		smmu->pgsize_bitmap |= SZ_4K | SZ_2M | SZ_1G;
+
+	/* Input address size */
+	if (FIELD_GET(IDR5_VAX, reg) == IDR5_VAX_52_BIT)
+		smmu->features |= ARM_SMMU_FEAT_VAX;
+
+	/* Output address size */
+	switch (FIELD_GET(IDR5_OAS, reg)) {
+	case IDR5_OAS_32_BIT:
+		smmu->oas = 32;
+		break;
+	case IDR5_OAS_36_BIT:
+		smmu->oas = 36;
+		break;
+	case IDR5_OAS_40_BIT:
+		smmu->oas = 40;
+		break;
+	case IDR5_OAS_42_BIT:
+		smmu->oas = 42;
+		break;
+	case IDR5_OAS_44_BIT:
+		smmu->oas = 44;
+		break;
+	case IDR5_OAS_52_BIT:
+		smmu->oas = 52;
+		smmu->pgsize_bitmap |= 1ULL << 42; /* 4TB */
+		break;
+	default:
+		dev_info(smmu->dev,
+			"unknown output address size. Truncating to 48-bit\n");
+		/* Fallthrough */
+	case IDR5_OAS_48_BIT:
+		smmu->oas = 48;
+	}
+
+	if (arm_smmu_ops.pgsize_bitmap == -1UL)
+		arm_smmu_ops.pgsize_bitmap = smmu->pgsize_bitmap;
+	else
+		arm_smmu_ops.pgsize_bitmap |= smmu->pgsize_bitmap;
+
+	/* Set the DMA mask for our table walker */
+	if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(smmu->oas)))
+		dev_warn(smmu->dev,
+			 "failed to set DMA mask for table walker\n");
+
+	smmu->ias = max(smmu->ias, smmu->oas);
+
+	dev_info(smmu->dev, "ias %lu-bit, oas %lu-bit (features 0x%08x)\n",
+		 smmu->ias, smmu->oas, smmu->features);
+	return 0;
+}
+
+#ifdef CONFIG_ACPI
+static void acpi_smmu_get_options(u32 model, struct arm_smmu_device *smmu)
+{
+	switch (model) {
+	case ACPI_IORT_SMMU_V3_CAVIUM_CN99XX:
+		smmu->options |= ARM_SMMU_OPT_PAGE0_REGS_ONLY;
+		break;
+	case ACPI_IORT_SMMU_V3_HISILICON_HI161X:
+		smmu->options |= ARM_SMMU_OPT_SKIP_PREFETCH;
+		break;
+	}
+
+	dev_notice(smmu->dev, "option mask 0x%x\n", smmu->options);
+}
+
+static int arm_smmu_device_acpi_probe(struct platform_device *pdev,
+				      struct arm_smmu_device *smmu)
+{
+	struct acpi_iort_smmu_v3 *iort_smmu;
+	struct device *dev = smmu->dev;
+	struct acpi_iort_node *node;
+
+	node = *(struct acpi_iort_node **)dev_get_platdata(dev);
+
+	/* Retrieve SMMUv3 specific data */
+	iort_smmu = (struct acpi_iort_smmu_v3 *)node->node_data;
+
+	acpi_smmu_get_options(iort_smmu->model, smmu);
+
+	if (iort_smmu->flags & ACPI_IORT_SMMU_V3_COHACC_OVERRIDE)
+		smmu->features |= ARM_SMMU_FEAT_COHERENCY;
+
+	return 0;
+}
+#else
+static inline int arm_smmu_device_acpi_probe(struct platform_device *pdev,
+					     struct arm_smmu_device *smmu)
+{
+	return -ENODEV;
+}
+#endif
+
+static int arm_smmu_device_dt_probe(struct platform_device *pdev,
+				    struct arm_smmu_device *smmu)
+{
+	struct device *dev = &pdev->dev;
+	u32 cells;
+	int ret = -EINVAL;
+
+	if (of_property_read_u32(dev->of_node, "#iommu-cells", &cells))
+		dev_err(dev, "missing #iommu-cells property\n");
+	else if (cells != 1)
+		dev_err(dev, "invalid #iommu-cells value (%d)\n", cells);
+	else
+		ret = 0;
+
+	parse_driver_options(smmu);
+
+	if (of_dma_is_coherent(dev->of_node))
+		smmu->features |= ARM_SMMU_FEAT_COHERENCY;
+
+	return ret;
+}
+
+static unsigned long arm_smmu_resource_size(struct arm_smmu_device *smmu)
+{
+	if (smmu->options & ARM_SMMU_OPT_PAGE0_REGS_ONLY)
+		return SZ_64K;
+	else
+		return SZ_128K;
+}
+
+static int arm_smmu_set_bus_ops(struct iommu_ops *ops)
+{
+	int err;
+
+#ifdef CONFIG_PCI
+	if (pci_bus_type.iommu_ops != ops) {
+		err = bus_set_iommu(&pci_bus_type, ops);
+		if (err)
+			return err;
+	}
+#endif
+#ifdef CONFIG_ARM_AMBA
+	if (amba_bustype.iommu_ops != ops) {
+		err = bus_set_iommu(&amba_bustype, ops);
+		if (err)
+			goto err_reset_pci_ops;
+	}
+#endif
+	if (platform_bus_type.iommu_ops != ops) {
+		err = bus_set_iommu(&platform_bus_type, ops);
+		if (err)
+			goto err_reset_amba_ops;
+	}
+
+	return 0;
+
+err_reset_amba_ops:
+#ifdef CONFIG_ARM_AMBA
+	bus_set_iommu(&amba_bustype, NULL);
+#endif
+err_reset_pci_ops: __maybe_unused;
+#ifdef CONFIG_PCI
+	bus_set_iommu(&pci_bus_type, NULL);
+#endif
+	return err;
+}
+
+static void __iomem *arm_smmu_ioremap(struct device *dev, resource_size_t start,
+				      resource_size_t size)
+{
+	struct resource res = {
+		.flags = IORESOURCE_MEM,
+		.start = start,
+		.end = start + size - 1,
+	};
+
+	return devm_ioremap_resource(dev, &res);
+}
+
+static int arm_smmu_device_probe(struct platform_device *pdev)
+{
+	int irq, ret;
+	struct resource *res;
+	resource_size_t ioaddr;
+	struct arm_smmu_device *smmu;
+	struct device *dev = &pdev->dev;
+	bool bypass;
+
+	smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL);
+	if (!smmu) {
+		dev_err(dev, "failed to allocate arm_smmu_device\n");
+		return -ENOMEM;
+	}
+	smmu->dev = dev;
+
+	if (dev->of_node) {
+		ret = arm_smmu_device_dt_probe(pdev, smmu);
+	} else {
+		ret = arm_smmu_device_acpi_probe(pdev, smmu);
+		if (ret == -ENODEV)
+			return ret;
+	}
+
+	/* Set bypass mode according to firmware probing result */
+	bypass = !!ret;
+
+	/* Base address */
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (resource_size(res) < arm_smmu_resource_size(smmu)) {
+		dev_err(dev, "MMIO region too small (%pr)\n", res);
+		return -EINVAL;
+	}
+	ioaddr = res->start;
+
+	/*
+	 * Don't map the IMPLEMENTATION DEFINED regions, since they may contain
+	 * the PMCG registers which are reserved by the PMU driver.
+	 */
+	smmu->base = arm_smmu_ioremap(dev, ioaddr, ARM_SMMU_REG_SZ);
+	if (IS_ERR(smmu->base))
+		return PTR_ERR(smmu->base);
+
+	if (arm_smmu_resource_size(smmu) > SZ_64K) {
+		smmu->page1 = arm_smmu_ioremap(dev, ioaddr + SZ_64K,
+					       ARM_SMMU_REG_SZ);
+		if (IS_ERR(smmu->page1))
+			return PTR_ERR(smmu->page1);
+	} else {
+		smmu->page1 = smmu->base;
+	}
+
+	/* Interrupt lines */
+
+	irq = platform_get_irq_byname_optional(pdev, "combined");
+	if (irq > 0)
+		smmu->combined_irq = irq;
+	else {
+		irq = platform_get_irq_byname_optional(pdev, "eventq");
+		if (irq > 0)
+			smmu->evtq.q.irq = irq;
+
+		irq = platform_get_irq_byname_optional(pdev, "priq");
+		if (irq > 0)
+			smmu->priq.q.irq = irq;
+
+		irq = platform_get_irq_byname_optional(pdev, "gerror");
+		if (irq > 0)
+			smmu->gerr_irq = irq;
+	}
+	/* Probe the h/w */
+	ret = arm_smmu_device_hw_probe(smmu);
+	if (ret)
+		return ret;
+
+	/* Initialise in-memory data structures */
+	ret = arm_smmu_init_structures(smmu);
+	if (ret)
+		return ret;
+
+	/* Record our private device structure */
+	platform_set_drvdata(pdev, smmu);
+
+	/* Reset the device */
+	ret = arm_smmu_device_reset(smmu, bypass);
+	if (ret)
+		return ret;
+
+	/* And we're up. Go go go! */
+	ret = iommu_device_sysfs_add(&smmu->iommu, dev, NULL,
+				     "smmu3.%pa", &ioaddr);
+	if (ret)
+		return ret;
+
+	iommu_device_set_ops(&smmu->iommu, &arm_smmu_ops);
+	iommu_device_set_fwnode(&smmu->iommu, dev->fwnode);
+
+	ret = iommu_device_register(&smmu->iommu);
+	if (ret) {
+		dev_err(dev, "Failed to register iommu\n");
+		return ret;
+	}
+
+	return arm_smmu_set_bus_ops(&arm_smmu_ops);
+}
+
+static int arm_smmu_device_remove(struct platform_device *pdev)
+{
+	struct arm_smmu_device *smmu = platform_get_drvdata(pdev);
+
+	arm_smmu_set_bus_ops(NULL);
+	iommu_device_unregister(&smmu->iommu);
+	iommu_device_sysfs_remove(&smmu->iommu);
+	arm_smmu_device_disable(smmu);
+
+	return 0;
+}
+
+static void arm_smmu_device_shutdown(struct platform_device *pdev)
+{
+	arm_smmu_device_remove(pdev);
+}
+
+static const struct of_device_id arm_smmu_of_match[] = {
+	{ .compatible = "arm,smmu-v3", },
+	{ },
+};
+MODULE_DEVICE_TABLE(of, arm_smmu_of_match);
+
+static struct platform_driver arm_smmu_driver = {
+	.driver	= {
+		.name			= "arm-smmu-v3",
+		.of_match_table		= arm_smmu_of_match,
+		.suppress_bind_attrs	= true,
+	},
+	.probe	= arm_smmu_device_probe,
+	.remove	= arm_smmu_device_remove,
+	.shutdown = arm_smmu_device_shutdown,
+};
+module_platform_driver(arm_smmu_driver);
+
+MODULE_DESCRIPTION("IOMMU API for ARM architected SMMUv3 implementations");
+MODULE_AUTHOR("Will Deacon <will@kernel.org>");
+MODULE_ALIAS("platform:arm-smmu-v3");
+MODULE_LICENSE("GPL v2");
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 16:58:44 2020
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v3 2/8] xen/arm: revert atomic operation related command-queue insertion patch
Date: Thu, 10 Dec 2020 16:57:00 +0000
Message-Id: <06ce0b7f7574347c9de592677b44c4dac716d268.1607617848.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1607617848.git.rahul.singh@arm.com>
References: <cover.1607617848.git.rahul.singh@arm.com>

The Linux SMMUv3 code implements command-queue insertion using atomic
operations provided by Linux. The atomic functions used by the
command-queue insertion code are not implemented in Xen, therefore
revert the patch that based command-queue insertion on those atomic
operations.

Also revert the other patches that build on the code introduced by the
atomic-operations patch.

The atomic operations were introduced by the patch "iommu/arm-smmu-v3:
Reduce contention during command-queue insertion", which removed the
bottleneck in SMMU command-queue insertion by introducing a new
insertion algorithm that is lock-free on the fast path.
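The lock-free fast path can be illustrated with a toy queue: each CPU
claims its own slot with a single atomic fetch-and-add, so producers
never contend on a lock. This is a simplified sketch, not the driver's
code; the names (cmdq_issue, Q_SHIFT, single-dword entries) are
illustrative, and the real algorithm additionally handles queue wrap,
ownership and completion polling.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical sketch of lock-free slot reservation (not the driver). */
#define Q_SHIFT    4                      /* 16-entry toy queue */
#define Q_ENTRIES  (1u << Q_SHIFT)
#define Q_IDX(p)   ((p) & (Q_ENTRIES - 1))

static _Atomic uint32_t prod;             /* shared producer index */
static uint64_t queue[Q_ENTRIES];         /* one dword per "command" */

/* Reserve a slot and publish one command; returns the slot index. */
static uint32_t cmdq_issue(uint64_t cmd)
{
	/* The fetch_add is the lock-free reservation: no two CPUs can
	 * obtain the same slot, and no spinlock serialises them. */
	uint32_t p = atomic_fetch_add_explicit(&prod, 1,
					       memory_order_acq_rel);
	queue[Q_IDX(p)] = cmd;            /* write into our private slot */
	return Q_IDX(p);
}
```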

The consequence of reverting the patch is that command-queue insertion
will be slower on large systems, as a spinlock will be used to
serialize accesses from all CPUs to the single queue supported by the
hardware.
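The serialized scheme after the revert can be sketched as follows; a
C11 atomic flag stands in for the driver's spinlock, and the structure
and function names are hypothetical, not taken from the Xen driver.
Every CPU must take the same lock before touching the one hardware
queue, which is where the contention on large systems comes from.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hedged sketch of spinlock-serialised insertion (not the driver). */
#define TQ_SHIFT   4
#define TQ_ENTRIES (1u << TQ_SHIFT)
#define TQ_IDX(p)  ((p) & (TQ_ENTRIES - 1))

struct toy_cmdq {
	atomic_flag lock;                 /* stand-in for the spinlock */
	uint32_t prod;
	uint64_t ent[TQ_ENTRIES];
};

static uint32_t cmdq_issue_locked(struct toy_cmdq *q, uint64_t cmd)
{
	/* All CPUs funnel through this one lock. */
	while (atomic_flag_test_and_set_explicit(&q->lock,
						 memory_order_acquire))
		;                         /* spin */

	uint32_t slot = TQ_IDX(q->prod++);
	q->ent[slot] = cmd;               /* write entry; on real hw the
					   * PROD register is bumped next */

	atomic_flag_clear_explicit(&q->lock, memory_order_release);
	return slot;
}
```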

Once the proper atomic operations are available in Xen, the driver can
be updated.

Following commits are reverted in this patch:
1. "iommu/arm-smmu-v3: Add SMMUv3.2 range invalidation support"
    commit 6a481a95d4c198a2dd0a61f8877b92a375757db8.
2. "iommu/arm-smmu-v3: Batch ATC invalidation commands"
    commit 9e773aee8c3e1b3ba019c5c7f8435aaa836c6130.
3. "iommu/arm-smmu-v3: Batch context descriptor invalidation"
    commit edd0351e7bc49555d8b5ad8438a65a7ca262c9f0.
4. "iommu/arm-smmu-v3: Add command queue batching helpers
    commit 4ce8da453640147101bda418640394637c1a7cfc.
5. "iommu/arm-smmu-v3: Fix ATC invalidation ordering wrt main TLBs"
    commit 353e3cf8590cf182a9f42e67993de3aca91e8090.
6. "iommu/arm-smmu-v3: Defer TLB invalidation until ->iotlb_sync()"
    commit 2af2e72b18b499fa36d3f7379fd010ff25d2a984.
7. "iommu/arm-smmu-v3: Reduce contention during command-queue insertion"
    commit 587e6c10a7ce89a5924fdbeff2ec524fbd6a124b.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
Changes in v3:
- Added consequences of reverting this patch in commit message.
- List all the commits that are reverted in this patch in commit
  message.

---
 xen/drivers/passthrough/arm/smmu-v3.c | 878 ++++++--------------------
 1 file changed, 186 insertions(+), 692 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index f578677a5c..8b7747ed38 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -69,9 +69,6 @@
 #define IDR1_SSIDSIZE			GENMASK(10, 6)
 #define IDR1_SIDSIZE			GENMASK(5, 0)
 
-#define ARM_SMMU_IDR3			0xc
-#define IDR3_RIL			(1 << 10)
-
 #define ARM_SMMU_IDR5			0x14
 #define IDR5_STALL_MAX			GENMASK(31, 16)
 #define IDR5_GRAN64K			(1 << 6)
@@ -187,7 +184,7 @@
 
 #define Q_IDX(llq, p)			((p) & ((1 << (llq)->max_n_shift) - 1))
 #define Q_WRP(llq, p)			((p) & (1 << (llq)->max_n_shift))
-#define Q_OVERFLOW_FLAG			(1U << 31)
+#define Q_OVERFLOW_FLAG			(1 << 31)
 #define Q_OVF(p)			((p) & Q_OVERFLOW_FLAG)
 #define Q_ENT(q, p)			((q)->base +			\
 					 Q_IDX(&((q)->llq), p) *	\
@@ -330,15 +327,6 @@
 #define CMDQ_ERR_CERROR_ABT_IDX		2
 #define CMDQ_ERR_CERROR_ATC_INV_IDX	3
 
-#define CMDQ_PROD_OWNED_FLAG		Q_OVERFLOW_FLAG
-
-/*
- * This is used to size the command queue and therefore must be at least
- * BITS_PER_LONG so that the valid_map works correctly (it relies on the
- * total number of queue entries being a multiple of BITS_PER_LONG).
- */
-#define CMDQ_BATCH_ENTRIES		BITS_PER_LONG
-
 #define CMDQ_0_OP			GENMASK_ULL(7, 0)
 #define CMDQ_0_SSV			(1UL << 11)
 
@@ -351,14 +339,9 @@
 #define CMDQ_CFGI_1_LEAF		(1UL << 0)
 #define CMDQ_CFGI_1_RANGE		GENMASK_ULL(4, 0)
 
-#define CMDQ_TLBI_0_NUM			GENMASK_ULL(16, 12)
-#define CMDQ_TLBI_RANGE_NUM_MAX		31
-#define CMDQ_TLBI_0_SCALE		GENMASK_ULL(24, 20)
 #define CMDQ_TLBI_0_VMID		GENMASK_ULL(47, 32)
 #define CMDQ_TLBI_0_ASID		GENMASK_ULL(63, 48)
 #define CMDQ_TLBI_1_LEAF		(1UL << 0)
-#define CMDQ_TLBI_1_TTL			GENMASK_ULL(9, 8)
-#define CMDQ_TLBI_1_TG			GENMASK_ULL(11, 10)
 #define CMDQ_TLBI_1_VA_MASK		GENMASK_ULL(63, 12)
 #define CMDQ_TLBI_1_IPA_MASK		GENMASK_ULL(51, 12)
 
@@ -407,8 +390,9 @@
 #define PRIQ_1_ADDR_MASK		GENMASK_ULL(63, 12)
 
 /* High-level queue structures */
-#define ARM_SMMU_POLL_TIMEOUT_US	1000000 /* 1s! */
-#define ARM_SMMU_POLL_SPIN_COUNT	10
+#define ARM_SMMU_POLL_TIMEOUT_US	100
+#define ARM_SMMU_CMDQ_SYNC_TIMEOUT_US	1000000 /* 1s! */
+#define ARM_SMMU_CMDQ_SYNC_SPIN_COUNT	10
 
 #define MSI_IOVA_BASE			0x8000000
 #define MSI_IOVA_LENGTH			0x100000
@@ -483,13 +467,9 @@ struct arm_smmu_cmdq_ent {
 		#define CMDQ_OP_TLBI_S2_IPA	0x2a
 		#define CMDQ_OP_TLBI_NSNH_ALL	0x30
 		struct {
-			u8			num;
-			u8			scale;
 			u16			asid;
 			u16			vmid;
 			bool			leaf;
-			u8			ttl;
-			u8			tg;
 			u64			addr;
 		} tlbi;
 
@@ -513,24 +493,15 @@ struct arm_smmu_cmdq_ent {
 
 		#define CMDQ_OP_CMD_SYNC	0x46
 		struct {
+			u32			msidata;
 			u64			msiaddr;
 		} sync;
 	};
 };
 
 struct arm_smmu_ll_queue {
-	union {
-		u64			val;
-		struct {
-			u32		prod;
-			u32		cons;
-		};
-		struct {
-			atomic_t	prod;
-			atomic_t	cons;
-		} atomic;
-		u8			__pad[SMP_CACHE_BYTES];
-	} ____cacheline_aligned_in_smp;
+	u32				prod;
+	u32				cons;
 	u32				max_n_shift;
 };
 
@@ -548,23 +519,9 @@ struct arm_smmu_queue {
 	u32 __iomem			*cons_reg;
 };
 
-struct arm_smmu_queue_poll {
-	ktime_t				timeout;
-	unsigned int			delay;
-	unsigned int			spin_cnt;
-	bool				wfe;
-};
-
 struct arm_smmu_cmdq {
 	struct arm_smmu_queue		q;
-	atomic_long_t			*valid_map;
-	atomic_t			owner_prod;
-	atomic_t			lock;
-};
-
-struct arm_smmu_cmdq_batch {
-	u64				cmds[CMDQ_BATCH_ENTRIES * CMDQ_ENT_DWORDS];
-	int				num;
+	spinlock_t			lock;
 };
 
 struct arm_smmu_evtq {
@@ -647,7 +604,6 @@ struct arm_smmu_device {
 #define ARM_SMMU_FEAT_HYP		(1 << 12)
 #define ARM_SMMU_FEAT_STALL_FORCE	(1 << 13)
 #define ARM_SMMU_FEAT_VAX		(1 << 14)
-#define ARM_SMMU_FEAT_RANGE_INV		(1 << 15)
 	u32				features;
 
 #define ARM_SMMU_OPT_SKIP_PREFETCH	(1 << 0)
@@ -660,6 +616,8 @@ struct arm_smmu_device {
 
 	int				gerr_irq;
 	int				combined_irq;
+	u32				sync_nr;
+	u8				prev_cmd_opcode;
 
 	unsigned long			ias; /* IPA */
 	unsigned long			oas; /* PA */
@@ -677,6 +635,12 @@ struct arm_smmu_device {
 
 	struct arm_smmu_strtab_cfg	strtab_cfg;
 
+	/* Hi16xx adds an extra 32 bits of goodness to its MSI payload */
+	union {
+		u32			sync_count;
+		u64			padding;
+	};
+
 	/* IOMMU core code handle */
 	struct iommu_device		iommu;
 };
@@ -763,21 +727,6 @@ static void parse_driver_options(struct arm_smmu_device *smmu)
 }
 
 /* Low-level queue manipulation functions */
-static bool queue_has_space(struct arm_smmu_ll_queue *q, u32 n)
-{
-	u32 space, prod, cons;
-
-	prod = Q_IDX(q, q->prod);
-	cons = Q_IDX(q, q->cons);
-
-	if (Q_WRP(q, q->prod) == Q_WRP(q, q->cons))
-		space = (1 << q->max_n_shift) - (prod - cons);
-	else
-		space = cons - prod;
-
-	return space >= n;
-}
-
 static bool queue_full(struct arm_smmu_ll_queue *q)
 {
 	return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) &&
@@ -790,12 +739,9 @@ static bool queue_empty(struct arm_smmu_ll_queue *q)
 	       Q_WRP(q, q->prod) == Q_WRP(q, q->cons);
 }
 
-static bool queue_consumed(struct arm_smmu_ll_queue *q, u32 prod)
+static void queue_sync_cons_in(struct arm_smmu_queue *q)
 {
-	return ((Q_WRP(q, q->cons) == Q_WRP(q, prod)) &&
-		(Q_IDX(q, q->cons) > Q_IDX(q, prod))) ||
-	       ((Q_WRP(q, q->cons) != Q_WRP(q, prod)) &&
-		(Q_IDX(q, q->cons) <= Q_IDX(q, prod)));
+	q->llq.cons = readl_relaxed(q->cons_reg);
 }
 
 static void queue_sync_cons_out(struct arm_smmu_queue *q)
@@ -826,34 +772,46 @@ static int queue_sync_prod_in(struct arm_smmu_queue *q)
 	return ret;
 }
 
-static u32 queue_inc_prod_n(struct arm_smmu_ll_queue *q, int n)
+static void queue_sync_prod_out(struct arm_smmu_queue *q)
 {
-	u32 prod = (Q_WRP(q, q->prod) | Q_IDX(q, q->prod)) + n;
-	return Q_OVF(q->prod) | Q_WRP(q, prod) | Q_IDX(q, prod);
+	writel(q->llq.prod, q->prod_reg);
 }
 
-static void queue_poll_init(struct arm_smmu_device *smmu,
-			    struct arm_smmu_queue_poll *qp)
+static void queue_inc_prod(struct arm_smmu_ll_queue *q)
 {
-	qp->delay = 1;
-	qp->spin_cnt = 0;
-	qp->wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
-	qp->timeout = ktime_add_us(ktime_get(), ARM_SMMU_POLL_TIMEOUT_US);
+	u32 prod = (Q_WRP(q, q->prod) | Q_IDX(q, q->prod)) + 1;
+	q->prod = Q_OVF(q->prod) | Q_WRP(q, prod) | Q_IDX(q, prod);
 }
 
-static int queue_poll(struct arm_smmu_queue_poll *qp)
+/*
+ * Wait for the SMMU to consume items. If sync is true, wait until the queue
+ * is empty. Otherwise, wait until there is at least one free slot.
+ */
+static int queue_poll_cons(struct arm_smmu_queue *q, bool sync, bool wfe)
 {
-	if (ktime_compare(ktime_get(), qp->timeout) > 0)
-		return -ETIMEDOUT;
+	ktime_t timeout;
+	unsigned int delay = 1, spin_cnt = 0;
 
-	if (qp->wfe) {
-		wfe();
-	} else if (++qp->spin_cnt < ARM_SMMU_POLL_SPIN_COUNT) {
-		cpu_relax();
-	} else {
-		udelay(qp->delay);
-		qp->delay *= 2;
-		qp->spin_cnt = 0;
+	/* Wait longer if it's a CMD_SYNC */
+	timeout = ktime_add_us(ktime_get(), sync ?
+					    ARM_SMMU_CMDQ_SYNC_TIMEOUT_US :
+					    ARM_SMMU_POLL_TIMEOUT_US);
+
+	while (queue_sync_cons_in(q),
+	      (sync ? !queue_empty(&q->llq) : queue_full(&q->llq))) {
+		if (ktime_compare(ktime_get(), timeout) > 0)
+			return -ETIMEDOUT;
+
+		if (wfe) {
+			wfe();
+		} else if (++spin_cnt < ARM_SMMU_CMDQ_SYNC_SPIN_COUNT) {
+			cpu_relax();
+			continue;
+		} else {
+			udelay(delay);
+			delay *= 2;
+			spin_cnt = 0;
+		}
 	}
 
 	return 0;
@@ -867,6 +825,17 @@ static void queue_write(__le64 *dst, u64 *src, size_t n_dwords)
 		*dst++ = cpu_to_le64(*src++);
 }
 
+static int queue_insert_raw(struct arm_smmu_queue *q, u64 *ent)
+{
+	if (queue_full(&q->llq))
+		return -ENOSPC;
+
+	queue_write(Q_ENT(q, q->llq.prod), ent, q->ent_dwords);
+	queue_inc_prod(&q->llq);
+	queue_sync_prod_out(q);
+	return 0;
+}
+
 static void queue_read(__le64 *dst, u64 *src, size_t n_dwords)
 {
 	int i;
@@ -916,22 +885,14 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
 		cmd[1] |= FIELD_PREP(CMDQ_CFGI_1_RANGE, 31);
 		break;
 	case CMDQ_OP_TLBI_NH_VA:
-		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_NUM, ent->tlbi.num);
-		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_SCALE, ent->tlbi.scale);
 		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
 		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_ASID, ent->tlbi.asid);
 		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_LEAF, ent->tlbi.leaf);
-		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TTL, ent->tlbi.ttl);
-		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TG, ent->tlbi.tg);
 		cmd[1] |= ent->tlbi.addr & CMDQ_TLBI_1_VA_MASK;
 		break;
 	case CMDQ_OP_TLBI_S2_IPA:
-		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_NUM, ent->tlbi.num);
-		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_SCALE, ent->tlbi.scale);
 		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
 		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_LEAF, ent->tlbi.leaf);
-		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TTL, ent->tlbi.ttl);
-		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TG, ent->tlbi.tg);
 		cmd[1] |= ent->tlbi.addr & CMDQ_TLBI_1_IPA_MASK;
 		break;
 	case CMDQ_OP_TLBI_NH_ASID:
@@ -964,14 +925,20 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
 		cmd[1] |= FIELD_PREP(CMDQ_PRI_1_RESP, ent->pri.resp);
 		break;
 	case CMDQ_OP_CMD_SYNC:
-		if (ent->sync.msiaddr) {
+		if (ent->sync.msiaddr)
 			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_IRQ);
-			cmd[1] |= ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
-		} else {
+		else
 			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
-		}
 		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
 		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
+		/*
+		 * Commands are written little-endian, but we want the SMMU to
+		 * receive MSIData, and thus write it back to memory, in CPU
+		 * byte order, so big-endian needs an extra byteswap here.
+		 */
+		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA,
+				     cpu_to_le32(ent->sync.msidata));
+		cmd[1] |= ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
 		break;
 	default:
 		return -ENOENT;
@@ -980,27 +947,6 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
 	return 0;
 }
 
-static void arm_smmu_cmdq_build_sync_cmd(u64 *cmd, struct arm_smmu_device *smmu,
-					 u32 prod)
-{
-	struct arm_smmu_queue *q = &smmu->cmdq.q;
-	struct arm_smmu_cmdq_ent ent = {
-		.opcode = CMDQ_OP_CMD_SYNC,
-	};
-
-	/*
-	 * Beware that Hi16xx adds an extra 32 bits of goodness to its MSI
-	 * payload, so the write will zero the entire command on that platform.
-	 */
-	if (smmu->features & ARM_SMMU_FEAT_MSI &&
-	    smmu->features & ARM_SMMU_FEAT_COHERENCY) {
-		ent.sync.msiaddr = q->base_dma + Q_IDX(&q->llq, prod) *
-				   q->ent_dwords * 8;
-	}
-
-	arm_smmu_cmdq_build_cmd(cmd, &ent);
-}
-
 static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
 {
 	static const char *cerror_str[] = {
@@ -1059,474 +1005,109 @@ static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
 	queue_write(Q_ENT(q, cons), cmd, q->ent_dwords);
 }
 
-/*
- * Command queue locking.
- * This is a form of bastardised rwlock with the following major changes:
- *
- * - The only LOCK routines are exclusive_trylock() and shared_lock().
- *   Neither have barrier semantics, and instead provide only a control
- *   dependency.
- *
- * - The UNLOCK routines are supplemented with shared_tryunlock(), which
- *   fails if the caller appears to be the last lock holder (yes, this is
- *   racy). All successful UNLOCK routines have RELEASE semantics.
- */
-static void arm_smmu_cmdq_shared_lock(struct arm_smmu_cmdq *cmdq)
-{
-	int val;
-
-	/*
-	 * We can try to avoid the cmpxchg() loop by simply incrementing the
-	 * lock counter. When held in exclusive state, the lock counter is set
-	 * to INT_MIN so these increments won't hurt as the value will remain
-	 * negative.
-	 */
-	if (atomic_fetch_inc_relaxed(&cmdq->lock) >= 0)
-		return;
-
-	do {
-		val = atomic_cond_read_relaxed(&cmdq->lock, VAL >= 0);
-	} while (atomic_cmpxchg_relaxed(&cmdq->lock, val, val + 1) != val);
-}
-
-static void arm_smmu_cmdq_shared_unlock(struct arm_smmu_cmdq *cmdq)
-{
-	(void)atomic_dec_return_release(&cmdq->lock);
-}
-
-static bool arm_smmu_cmdq_shared_tryunlock(struct arm_smmu_cmdq *cmdq)
+static void arm_smmu_cmdq_insert_cmd(struct arm_smmu_device *smmu, u64 *cmd)
 {
-	if (atomic_read(&cmdq->lock) == 1)
-		return false;
-
-	arm_smmu_cmdq_shared_unlock(cmdq);
-	return true;
-}
-
-#define arm_smmu_cmdq_exclusive_trylock_irqsave(cmdq, flags)		\
-({									\
-	bool __ret;							\
-	local_irq_save(flags);						\
-	__ret = !atomic_cmpxchg_relaxed(&cmdq->lock, 0, INT_MIN);	\
-	if (!__ret)							\
-		local_irq_restore(flags);				\
-	__ret;								\
-})
-
-#define arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags)		\
-({									\
-	atomic_set_release(&cmdq->lock, 0);				\
-	local_irq_restore(flags);					\
-})
-
-
-/*
- * Command queue insertion.
- * This is made fiddly by our attempts to achieve some sort of scalability
- * since there is one queue shared amongst all of the CPUs in the system.  If
- * you like mixed-size concurrency, dependency ordering and relaxed atomics,
- * then you'll *love* this monstrosity.
- *
- * The basic idea is to split the queue up into ranges of commands that are
- * owned by a given CPU; the owner may not have written all of the commands
- * itself, but is responsible for advancing the hardware prod pointer when
- * the time comes. The algorithm is roughly:
- *
- * 	1. Allocate some space in the queue. At this point we also discover
- *	   whether the head of the queue is currently owned by another CPU,
- *	   or whether we are the owner.
- *
- *	2. Write our commands into our allocated slots in the queue.
- *
- *	3. Mark our slots as valid in arm_smmu_cmdq.valid_map.
- *
- *	4. If we are an owner:
- *		a. Wait for the previous owner to finish.
- *		b. Mark the queue head as unowned, which tells us the range
- *		   that we are responsible for publishing.
- *		c. Wait for all commands in our owned range to become valid.
- *		d. Advance the hardware prod pointer.
- *		e. Tell the next owner we've finished.
- *
- *	5. If we are inserting a CMD_SYNC (we may or may not have been an
- *	   owner), then we need to stick around until it has completed:
- *		a. If we have MSIs, the SMMU can write back into the CMD_SYNC
- *		   to clear the first 4 bytes.
- *		b. Otherwise, we spin waiting for the hardware cons pointer to
- *		   advance past our command.
- *
- * The devil is in the details, particularly the use of locking for handling
- * SYNC completion and freeing up space in the queue before we think that it is
- * full.
- */
-static void __arm_smmu_cmdq_poll_set_valid_map(struct arm_smmu_cmdq *cmdq,
-					       u32 sprod, u32 eprod, bool set)
-{
-	u32 swidx, sbidx, ewidx, ebidx;
-	struct arm_smmu_ll_queue llq = {
-		.max_n_shift	= cmdq->q.llq.max_n_shift,
-		.prod		= sprod,
-	};
-
-	ewidx = BIT_WORD(Q_IDX(&llq, eprod));
-	ebidx = Q_IDX(&llq, eprod) % BITS_PER_LONG;
-
-	while (llq.prod != eprod) {
-		unsigned long mask;
-		atomic_long_t *ptr;
-		u32 limit = BITS_PER_LONG;
-
-		swidx = BIT_WORD(Q_IDX(&llq, llq.prod));
-		sbidx = Q_IDX(&llq, llq.prod) % BITS_PER_LONG;
-
-		ptr = &cmdq->valid_map[swidx];
-
-		if ((swidx == ewidx) && (sbidx < ebidx))
-			limit = ebidx;
-
-		mask = GENMASK(limit - 1, sbidx);
-
-		/*
-		 * The valid bit is the inverse of the wrap bit. This means
-		 * that a zero-initialised queue is invalid and, after marking
-		 * all entries as valid, they become invalid again when we
-		 * wrap.
-		 */
-		if (set) {
-			atomic_long_xor(mask, ptr);
-		} else { /* Poll */
-			unsigned long valid;
+	struct arm_smmu_queue *q = &smmu->cmdq.q;
+	bool wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
 
-			valid = (ULONG_MAX + !!Q_WRP(&llq, llq.prod)) & mask;
-			atomic_long_cond_read_relaxed(ptr, (VAL & mask) == valid);
-		}
+	smmu->prev_cmd_opcode = FIELD_GET(CMDQ_0_OP, cmd[0]);
 
-		llq.prod = queue_inc_prod_n(&llq, limit - sbidx);
+	while (queue_insert_raw(q, cmd) == -ENOSPC) {
+		if (queue_poll_cons(q, false, wfe))
+			dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
 	}
 }
 
-/* Mark all entries in the range [sprod, eprod) as valid */
-static void arm_smmu_cmdq_set_valid_map(struct arm_smmu_cmdq *cmdq,
-					u32 sprod, u32 eprod)
-{
-	__arm_smmu_cmdq_poll_set_valid_map(cmdq, sprod, eprod, true);
-}
-
-/* Wait for all entries in the range [sprod, eprod) to become valid */
-static void arm_smmu_cmdq_poll_valid_map(struct arm_smmu_cmdq *cmdq,
-					 u32 sprod, u32 eprod)
-{
-	__arm_smmu_cmdq_poll_set_valid_map(cmdq, sprod, eprod, false);
-}
-
-/* Wait for the command queue to become non-full */
-static int arm_smmu_cmdq_poll_until_not_full(struct arm_smmu_device *smmu,
-					     struct arm_smmu_ll_queue *llq)
+static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
+				    struct arm_smmu_cmdq_ent *ent)
 {
+	u64 cmd[CMDQ_ENT_DWORDS];
 	unsigned long flags;
-	struct arm_smmu_queue_poll qp;
-	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
-	int ret = 0;
 
-	/*
-	 * Try to update our copy of cons by grabbing exclusive cmdq access. If
-	 * that fails, spin until somebody else updates it for us.
-	 */
-	if (arm_smmu_cmdq_exclusive_trylock_irqsave(cmdq, flags)) {
-		WRITE_ONCE(cmdq->q.llq.cons, readl_relaxed(cmdq->q.cons_reg));
-		arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags);
-		llq->val = READ_ONCE(cmdq->q.llq.val);
-		return 0;
+	if (arm_smmu_cmdq_build_cmd(cmd, ent)) {
+		dev_warn(smmu->dev, "ignoring unknown CMDQ opcode 0x%x\n",
+			 ent->opcode);
+		return;
 	}
 
-	queue_poll_init(smmu, &qp);
-	do {
-		llq->val = READ_ONCE(smmu->cmdq.q.llq.val);
-		if (!queue_full(llq))
-			break;
-
-		ret = queue_poll(&qp);
-	} while (!ret);
-
-	return ret;
+	spin_lock_irqsave(&smmu->cmdq.lock, flags);
+	arm_smmu_cmdq_insert_cmd(smmu, cmd);
+	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
 }
 
 /*
- * Wait until the SMMU signals a CMD_SYNC completion MSI.
- * Must be called with the cmdq lock held in some capacity.
+ * The difference between val and sync_idx is bounded by the maximum size of
+ * a queue at 2^20 entries, so 32 bits is plenty for wrap-safe arithmetic.
  */
-static int __arm_smmu_cmdq_poll_until_msi(struct arm_smmu_device *smmu,
-					  struct arm_smmu_ll_queue *llq)
-{
-	int ret = 0;
-	struct arm_smmu_queue_poll qp;
-	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
-	u32 *cmd = (u32 *)(Q_ENT(&cmdq->q, llq->prod));
-
-	queue_poll_init(smmu, &qp);
-
-	/*
-	 * The MSI won't generate an event, since it's being written back
-	 * into the command queue.
-	 */
-	qp.wfe = false;
-	smp_cond_load_relaxed(cmd, !VAL || (ret = queue_poll(&qp)));
-	llq->cons = ret ? llq->prod : queue_inc_prod_n(llq, 1);
-	return ret;
-}
-
-/*
- * Wait until the SMMU cons index passes llq->prod.
- * Must be called with the cmdq lock held in some capacity.
- */
-static int __arm_smmu_cmdq_poll_until_consumed(struct arm_smmu_device *smmu,
-					       struct arm_smmu_ll_queue *llq)
-{
-	struct arm_smmu_queue_poll qp;
-	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
-	u32 prod = llq->prod;
-	int ret = 0;
-
-	queue_poll_init(smmu, &qp);
-	llq->val = READ_ONCE(smmu->cmdq.q.llq.val);
-	do {
-		if (queue_consumed(llq, prod))
-			break;
-
-		ret = queue_poll(&qp);
-
-		/*
-		 * This needs to be a readl() so that our subsequent call
-		 * to arm_smmu_cmdq_shared_tryunlock() can fail accurately.
-		 *
-		 * Specifically, we need to ensure that we observe all
-		 * shared_lock()s by other CMD_SYNCs that share our owner,
-		 * so that a failing call to tryunlock() means that we're
-		 * the last one out and therefore we can safely advance
-		 * cmdq->q.llq.cons. Roughly speaking:
-		 *
-		 * CPU 0		CPU1			CPU2 (us)
-		 *
-		 * if (sync)
-		 * 	shared_lock();
-		 *
-		 * dma_wmb();
-		 * set_valid_map();
-		 *
-		 * 			if (owner) {
-		 *				poll_valid_map();
-		 *				<control dependency>
-		 *				writel(prod_reg);
-		 *
-		 *						readl(cons_reg);
-		 *						tryunlock();
-		 *
-		 * Requires us to see CPU 0's shared_lock() acquisition.
-		 */
-		llq->cons = readl(cmdq->q.cons_reg);
-	} while (!ret);
-
-	return ret;
-}
-
-static int arm_smmu_cmdq_poll_until_sync(struct arm_smmu_device *smmu,
-					 struct arm_smmu_ll_queue *llq)
+static int __arm_smmu_sync_poll_msi(struct arm_smmu_device *smmu, u32 sync_idx)
 {
-	if (smmu->features & ARM_SMMU_FEAT_MSI &&
-	    smmu->features & ARM_SMMU_FEAT_COHERENCY)
-		return __arm_smmu_cmdq_poll_until_msi(smmu, llq);
-
-	return __arm_smmu_cmdq_poll_until_consumed(smmu, llq);
-}
-
-static void arm_smmu_cmdq_write_entries(struct arm_smmu_cmdq *cmdq, u64 *cmds,
-					u32 prod, int n)
-{
-	int i;
-	struct arm_smmu_ll_queue llq = {
-		.max_n_shift	= cmdq->q.llq.max_n_shift,
-		.prod		= prod,
-	};
+	ktime_t timeout;
+	u32 val;
 
-	for (i = 0; i < n; ++i) {
-		u64 *cmd = &cmds[i * CMDQ_ENT_DWORDS];
+	timeout = ktime_add_us(ktime_get(), ARM_SMMU_CMDQ_SYNC_TIMEOUT_US);
+	val = smp_cond_load_acquire(&smmu->sync_count,
+				    (int)(VAL - sync_idx) >= 0 ||
+				    !ktime_before(ktime_get(), timeout));
 
-		prod = queue_inc_prod_n(&llq, i);
-		queue_write(Q_ENT(&cmdq->q, prod), cmd, CMDQ_ENT_DWORDS);
-	}
+	return (int)(val - sync_idx) < 0 ? -ETIMEDOUT : 0;
 }
 
-/*
- * This is the actual insertion function, and provides the following
- * ordering guarantees to callers:
- *
- * - There is a dma_wmb() before publishing any commands to the queue.
- *   This can be relied upon to order prior writes to data structures
- *   in memory (such as a CD or an STE) before the command.
- *
- * - On completion of a CMD_SYNC, there is a control dependency.
- *   This can be relied upon to order subsequent writes to memory (e.g.
- *   freeing an IOVA) after completion of the CMD_SYNC.
- *
- * - Command insertion is totally ordered, so if two CPUs each race to
- *   insert their own list of commands then all of the commands from one
- *   CPU will appear before any of the commands from the other CPU.
- */
-static int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
-				       u64 *cmds, int n, bool sync)
+static int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
 {
-	u64 cmd_sync[CMDQ_ENT_DWORDS];
-	u32 prod;
+	u64 cmd[CMDQ_ENT_DWORDS];
 	unsigned long flags;
-	bool owner;
-	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
-	struct arm_smmu_ll_queue llq = {
-		.max_n_shift = cmdq->q.llq.max_n_shift,
-	}, head = llq;
-	int ret = 0;
-
-	/* 1. Allocate some space in the queue */
-	local_irq_save(flags);
-	llq.val = READ_ONCE(cmdq->q.llq.val);
-	do {
-		u64 old;
-
-		while (!queue_has_space(&llq, n + sync)) {
-			local_irq_restore(flags);
-			if (arm_smmu_cmdq_poll_until_not_full(smmu, &llq))
-				dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
-			local_irq_save(flags);
-		}
-
-		head.cons = llq.cons;
-		head.prod = queue_inc_prod_n(&llq, n + sync) |
-					     CMDQ_PROD_OWNED_FLAG;
-
-		old = cmpxchg_relaxed(&cmdq->q.llq.val, llq.val, head.val);
-		if (old == llq.val)
-			break;
-
-		llq.val = old;
-	} while (1);
-	owner = !(llq.prod & CMDQ_PROD_OWNED_FLAG);
-	head.prod &= ~CMDQ_PROD_OWNED_FLAG;
-	llq.prod &= ~CMDQ_PROD_OWNED_FLAG;
-
-	/*
-	 * 2. Write our commands into the queue
-	 * Dependency ordering from the cmpxchg() loop above.
-	 */
-	arm_smmu_cmdq_write_entries(cmdq, cmds, llq.prod, n);
-	if (sync) {
-		prod = queue_inc_prod_n(&llq, n);
-		arm_smmu_cmdq_build_sync_cmd(cmd_sync, smmu, prod);
-		queue_write(Q_ENT(&cmdq->q, prod), cmd_sync, CMDQ_ENT_DWORDS);
-
-		/*
-		 * In order to determine completion of our CMD_SYNC, we must
-		 * ensure that the queue can't wrap twice without us noticing.
-		 * We achieve that by taking the cmdq lock as shared before
-		 * marking our slot as valid.
-		 */
-		arm_smmu_cmdq_shared_lock(cmdq);
-	}
-
-	/* 3. Mark our slots as valid, ensuring commands are visible first */
-	dma_wmb();
-	arm_smmu_cmdq_set_valid_map(cmdq, llq.prod, head.prod);
-
-	/* 4. If we are the owner, take control of the SMMU hardware */
-	if (owner) {
-		/* a. Wait for previous owner to finish */
-		atomic_cond_read_relaxed(&cmdq->owner_prod, VAL == llq.prod);
-
-		/* b. Stop gathering work by clearing the owned flag */
-		prod = atomic_fetch_andnot_relaxed(CMDQ_PROD_OWNED_FLAG,
-						   &cmdq->q.llq.atomic.prod);
-		prod &= ~CMDQ_PROD_OWNED_FLAG;
-
-		/*
-		 * c. Wait for any gathered work to be written to the queue.
-		 * Note that we read our own entries so that we have the control
-		 * dependency required by (d).
-		 */
-		arm_smmu_cmdq_poll_valid_map(cmdq, llq.prod, prod);
+	struct arm_smmu_cmdq_ent ent = {
+		.opcode = CMDQ_OP_CMD_SYNC,
+		.sync	= {
+			.msiaddr = virt_to_phys(&smmu->sync_count),
+		},
+	};
 
-		/*
-		 * d. Advance the hardware prod pointer
-		 * Control dependency ordering from the entries becoming valid.
-		 */
-		writel_relaxed(prod, cmdq->q.prod_reg);
+	spin_lock_irqsave(&smmu->cmdq.lock, flags);
 
-		/*
-		 * e. Tell the next owner we're done
-		 * Make sure we've updated the hardware first, so that we don't
-		 * race to update prod and potentially move it backwards.
-		 */
-		atomic_set_release(&cmdq->owner_prod, prod);
+	/* Piggy-back on the previous command if it's a SYNC */
+	if (smmu->prev_cmd_opcode == CMDQ_OP_CMD_SYNC) {
+		ent.sync.msidata = smmu->sync_nr;
+	} else {
+		ent.sync.msidata = ++smmu->sync_nr;
+		arm_smmu_cmdq_build_cmd(cmd, &ent);
+		arm_smmu_cmdq_insert_cmd(smmu, cmd);
 	}
 
-	/* 5. If we are inserting a CMD_SYNC, we must wait for it to complete */
-	if (sync) {
-		llq.prod = queue_inc_prod_n(&llq, n);
-		ret = arm_smmu_cmdq_poll_until_sync(smmu, &llq);
-		if (ret) {
-			dev_err_ratelimited(smmu->dev,
-					    "CMD_SYNC timeout at 0x%08x [hwprod 0x%08x, hwcons 0x%08x]\n",
-					    llq.prod,
-					    readl_relaxed(cmdq->q.prod_reg),
-					    readl_relaxed(cmdq->q.cons_reg));
-		}
+	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
 
-		/*
-		 * Try to unlock the cmq lock. This will fail if we're the last
-		 * reader, in which case we can safely update cmdq->q.llq.cons
-		 */
-		if (!arm_smmu_cmdq_shared_tryunlock(cmdq)) {
-			WRITE_ONCE(cmdq->q.llq.cons, llq.cons);
-			arm_smmu_cmdq_shared_unlock(cmdq);
-		}
-	}
-
-	local_irq_restore(flags);
-	return ret;
+	return __arm_smmu_sync_poll_msi(smmu, ent.sync.msidata);
 }
 
-static int arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
-				   struct arm_smmu_cmdq_ent *ent)
+static int __arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
 {
 	u64 cmd[CMDQ_ENT_DWORDS];
+	unsigned long flags;
+	bool wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
+	struct arm_smmu_cmdq_ent ent = { .opcode = CMDQ_OP_CMD_SYNC };
+	int ret;
 
-	if (arm_smmu_cmdq_build_cmd(cmd, ent)) {
-		dev_warn(smmu->dev, "ignoring unknown CMDQ opcode 0x%x\n",
-			 ent->opcode);
-		return -EINVAL;
-	}
+	arm_smmu_cmdq_build_cmd(cmd, &ent);
 
-	return arm_smmu_cmdq_issue_cmdlist(smmu, cmd, 1, false);
-}
+	spin_lock_irqsave(&smmu->cmdq.lock, flags);
+	arm_smmu_cmdq_insert_cmd(smmu, cmd);
+	ret = queue_poll_cons(&smmu->cmdq.q, true, wfe);
+	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
 
-static int arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
-{
-	return arm_smmu_cmdq_issue_cmdlist(smmu, NULL, 0, true);
+	return ret;
 }
 
-static void arm_smmu_cmdq_batch_add(struct arm_smmu_device *smmu,
-				    struct arm_smmu_cmdq_batch *cmds,
-				    struct arm_smmu_cmdq_ent *cmd)
+static int arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
 {
-	if (cmds->num == CMDQ_BATCH_ENTRIES) {
-		arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num, false);
-		cmds->num = 0;
-	}
-	arm_smmu_cmdq_build_cmd(&cmds->cmds[cmds->num * CMDQ_ENT_DWORDS], cmd);
-	cmds->num++;
-}
+	int ret;
+	bool msi = (smmu->features & ARM_SMMU_FEAT_MSI) &&
+		   (smmu->features & ARM_SMMU_FEAT_COHERENCY);
 
-static int arm_smmu_cmdq_batch_submit(struct arm_smmu_device *smmu,
-				      struct arm_smmu_cmdq_batch *cmds)
-{
-	return arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num, true);
+	ret = msi ? __arm_smmu_cmdq_issue_sync_msi(smmu)
+		  : __arm_smmu_cmdq_issue_sync(smmu);
+	if (ret)
+		dev_err_ratelimited(smmu->dev, "CMD_SYNC timeout\n");
+	return ret;
 }
 
 /* Context descriptor manipulation functions */
@@ -1536,7 +1117,6 @@ static void arm_smmu_sync_cd(struct arm_smmu_domain *smmu_domain,
 	size_t i;
 	unsigned long flags;
 	struct arm_smmu_master *master;
-	struct arm_smmu_cmdq_batch cmds = {};
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
 	struct arm_smmu_cmdq_ent cmd = {
 		.opcode	= CMDQ_OP_CFGI_CD,
@@ -1550,12 +1130,12 @@ static void arm_smmu_sync_cd(struct arm_smmu_domain *smmu_domain,
 	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
 		for (i = 0; i < master->num_sids; i++) {
 			cmd.cfgi.sid = master->sids[i];
-			arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
+			arm_smmu_cmdq_issue_cmd(smmu, &cmd);
 		}
 	}
 	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
 
-	arm_smmu_cmdq_batch_submit(smmu, &cmds);
+	arm_smmu_cmdq_issue_sync(smmu);
 }
 
 static int arm_smmu_alloc_cd_leaf_table(struct arm_smmu_device *smmu,
@@ -2190,16 +1770,17 @@ arm_smmu_atc_inv_to_cmd(int ssid, unsigned long iova, size_t size,
 	cmd->atc.size	= log2_span;
 }
 
-static int arm_smmu_atc_inv_master(struct arm_smmu_master *master)
+static int arm_smmu_atc_inv_master(struct arm_smmu_master *master,
+				   struct arm_smmu_cmdq_ent *cmd)
 {
 	int i;
-	struct arm_smmu_cmdq_ent cmd;
 
-	arm_smmu_atc_inv_to_cmd(0, 0, 0, &cmd);
+	if (!master->ats_enabled)
+		return 0;
 
 	for (i = 0; i < master->num_sids; i++) {
-		cmd.atc.sid = master->sids[i];
-		arm_smmu_cmdq_issue_cmd(master->smmu, &cmd);
+		cmd->atc.sid = master->sids[i];
+		arm_smmu_cmdq_issue_cmd(master->smmu, cmd);
 	}
 
 	return arm_smmu_cmdq_issue_sync(master->smmu);
@@ -2208,11 +1789,10 @@ static int arm_smmu_atc_inv_master(struct arm_smmu_master *master)
 static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
 				   int ssid, unsigned long iova, size_t size)
 {
-	int i;
+	int ret = 0;
 	unsigned long flags;
 	struct arm_smmu_cmdq_ent cmd;
 	struct arm_smmu_master *master;
-	struct arm_smmu_cmdq_batch cmds = {};
 
 	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_ATS))
 		return 0;
@@ -2237,18 +1817,11 @@ static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
 	arm_smmu_atc_inv_to_cmd(ssid, iova, size, &cmd);
 
 	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
-	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
-		if (!master->ats_enabled)
-			continue;
-
-		for (i = 0; i < master->num_sids; i++) {
-			cmd.atc.sid = master->sids[i];
-			arm_smmu_cmdq_batch_add(smmu_domain->smmu, &cmds, &cmd);
-		}
-	}
+	list_for_each_entry(master, &smmu_domain->devices, domain_head)
+		ret |= arm_smmu_atc_inv_master(master, &cmd);
 	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
 
-	return arm_smmu_cmdq_batch_submit(smmu_domain->smmu, &cmds);
+	return ret ? -ETIMEDOUT : 0;
 }
 
 /* IO_PGTABLE API */
@@ -2270,26 +1843,23 @@ static void arm_smmu_tlb_inv_context(void *cookie)
 	/*
 	 * NOTE: when io-pgtable is in non-strict mode, we may get here with
 	 * PTEs previously cleared by unmaps on the current CPU not yet visible
-	 * to the SMMU. We are relying on the dma_wmb() implicit during cmd
-	 * insertion to guarantee those are observed before the TLBI. Do be
-	 * careful, 007.
+	 * to the SMMU. We are relying on the DSB implicit in
+	 * queue_sync_prod_out() to guarantee those are observed before the
+	 * TLBI. Do be careful, 007.
 	 */
 	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
 	arm_smmu_cmdq_issue_sync(smmu);
-	arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0);
 }
 
-static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size,
-				   size_t granule, bool leaf,
-				   struct arm_smmu_domain *smmu_domain)
+static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
+					  size_t granule, bool leaf, void *cookie)
 {
+	struct arm_smmu_domain *smmu_domain = cookie;
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
-	unsigned long start = iova, end = iova + size, num_pages = 0, tg = 0;
-	size_t inv_range = granule;
-	struct arm_smmu_cmdq_batch cmds = {};
 	struct arm_smmu_cmdq_ent cmd = {
 		.tlbi = {
 			.leaf	= leaf,
+			.addr	= iova,
 		},
 	};
 
@@ -2304,78 +1874,37 @@ static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size,
 		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
 	}
 
-	if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
-		/* Get the leaf page size */
-		tg = __ffs(smmu_domain->domain.pgsize_bitmap);
-
-		/* Convert page size of 12,14,16 (log2) to 1,2,3 */
-		cmd.tlbi.tg = (tg - 10) / 2;
-
-		/* Determine what level the granule is at */
-		cmd.tlbi.ttl = 4 - ((ilog2(granule) - 3) / (tg - 3));
-
-		num_pages = size >> tg;
-	}
-
-	while (iova < end) {
-		if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
-			/*
-			 * On each iteration of the loop, the range is 5 bits
-			 * worth of the aligned size remaining.
-			 * The range in pages is:
-			 *
-			 * range = (num_pages & (0x1f << __ffs(num_pages)))
-			 */
-			unsigned long scale, num;
-
-			/* Determine the power of 2 multiple number of pages */
-			scale = __ffs(num_pages);
-			cmd.tlbi.scale = scale;
-
-			/* Determine how many chunks of 2^scale size we have */
-			num = (num_pages >> scale) & CMDQ_TLBI_RANGE_NUM_MAX;
-			cmd.tlbi.num = num - 1;
-
-			/* range is num * 2^scale * pgsize */
-			inv_range = num << (scale + tg);
-
-			/* Clear out the lower order bits for the next iteration */
-			num_pages -= num << scale;
-		}
-
-		cmd.tlbi.addr = iova;
-		arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
-		iova += inv_range;
-	}
-	arm_smmu_cmdq_batch_submit(smmu, &cmds);
-
-	/*
-	 * Unfortunately, this can't be leaf-only since we may have
-	 * zapped an entire table.
-	 */
-	arm_smmu_atc_inv_domain(smmu_domain, 0, start, size);
+	do {
+		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+		cmd.tlbi.addr += granule;
+	} while (size -= granule);
 }
 
 static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather,
 					 unsigned long iova, size_t granule,
 					 void *cookie)
 {
-	struct arm_smmu_domain *smmu_domain = cookie;
-	struct iommu_domain *domain = &smmu_domain->domain;
-
-	iommu_iotlb_gather_add_page(domain, gather, iova, granule);
+	arm_smmu_tlb_inv_range_nosync(iova, granule, granule, true, cookie);
 }
 
 static void arm_smmu_tlb_inv_walk(unsigned long iova, size_t size,
 				  size_t granule, void *cookie)
 {
-	arm_smmu_tlb_inv_range(iova, size, granule, false, cookie);
+	struct arm_smmu_domain *smmu_domain = cookie;
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+
+	arm_smmu_tlb_inv_range_nosync(iova, size, granule, false, cookie);
+	arm_smmu_cmdq_issue_sync(smmu);
 }
 
 static void arm_smmu_tlb_inv_leaf(unsigned long iova, size_t size,
 				  size_t granule, void *cookie)
 {
-	arm_smmu_tlb_inv_range(iova, size, granule, true, cookie);
+	struct arm_smmu_domain *smmu_domain = cookie;
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+
+	arm_smmu_tlb_inv_range_nosync(iova, size, granule, true, cookie);
+	arm_smmu_cmdq_issue_sync(smmu);
 }
 
 static const struct iommu_flush_ops arm_smmu_flush_ops = {
@@ -2701,6 +2230,7 @@ static void arm_smmu_enable_ats(struct arm_smmu_master *master)
 
 static void arm_smmu_disable_ats(struct arm_smmu_master *master)
 {
+	struct arm_smmu_cmdq_ent cmd;
 	struct arm_smmu_domain *smmu_domain = master->domain;
 
 	if (!master->ats_enabled)
@@ -2712,7 +2242,8 @@ static void arm_smmu_disable_ats(struct arm_smmu_master *master)
 	 * ATC invalidation via the SMMU.
 	 */
 	wmb();
-	arm_smmu_atc_inv_master(master);
+	arm_smmu_atc_inv_to_cmd(0, 0, 0, &cmd);
+	arm_smmu_atc_inv_master(master, &cmd);
 	atomic_dec(&smmu_domain->nr_ats_masters);
 }
 
@@ -2856,13 +2387,18 @@ static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
 static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
 			     size_t size, struct iommu_iotlb_gather *gather)
 {
+	int ret;
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
 	struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
 
 	if (!ops)
 		return 0;
 
-	return ops->unmap(ops, iova, size, gather);
+	ret = ops->unmap(ops, iova, size, gather);
+	if (ret && arm_smmu_atc_inv_domain(smmu_domain, 0, iova, size))
+		return 0;
+
+	return ret;
 }
 
 static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain)
@@ -2876,10 +2412,10 @@ static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain)
 static void arm_smmu_iotlb_sync(struct iommu_domain *domain,
 				struct iommu_iotlb_gather *gather)
 {
-	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu;
 
-	arm_smmu_tlb_inv_range(gather->start, gather->end - gather->start,
-			       gather->pgsize, true, smmu_domain);
+	if (smmu)
+		arm_smmu_cmdq_issue_sync(smmu);
 }
 
 static phys_addr_t
@@ -3177,49 +2713,18 @@ static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
 	return 0;
 }
 
-static void arm_smmu_cmdq_free_bitmap(void *data)
-{
-	unsigned long *bitmap = data;
-	bitmap_free(bitmap);
-}
-
-static int arm_smmu_cmdq_init(struct arm_smmu_device *smmu)
-{
-	int ret = 0;
-	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
-	unsigned int nents = 1 << cmdq->q.llq.max_n_shift;
-	atomic_long_t *bitmap;
-
-	atomic_set(&cmdq->owner_prod, 0);
-	atomic_set(&cmdq->lock, 0);
-
-	bitmap = (atomic_long_t *)bitmap_zalloc(nents, GFP_KERNEL);
-	if (!bitmap) {
-		dev_err(smmu->dev, "failed to allocate cmdq bitmap\n");
-		ret = -ENOMEM;
-	} else {
-		cmdq->valid_map = bitmap;
-		devm_add_action(smmu->dev, arm_smmu_cmdq_free_bitmap, bitmap);
-	}
-
-	return ret;
-}
-
 static int arm_smmu_init_queues(struct arm_smmu_device *smmu)
 {
 	int ret;
 
 	/* cmdq */
+	spin_lock_init(&smmu->cmdq.lock);
 	ret = arm_smmu_init_one_queue(smmu, &smmu->cmdq.q, ARM_SMMU_CMDQ_PROD,
 				      ARM_SMMU_CMDQ_CONS, CMDQ_ENT_DWORDS,
 				      "cmdq");
 	if (ret)
 		return ret;
 
-	ret = arm_smmu_cmdq_init(smmu);
-	if (ret)
-		return ret;
-
 	/* evtq */
 	ret = arm_smmu_init_one_queue(smmu, &smmu->evtq.q, ARM_SMMU_EVTQ_PROD,
 				      ARM_SMMU_EVTQ_CONS, EVTQ_ENT_DWORDS,
@@ -3800,15 +3305,9 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 	/* Queue sizes, capped to ensure natural alignment */
 	smmu->cmdq.q.llq.max_n_shift = min_t(u32, CMDQ_MAX_SZ_SHIFT,
 					     FIELD_GET(IDR1_CMDQS, reg));
-	if (smmu->cmdq.q.llq.max_n_shift <= ilog2(CMDQ_BATCH_ENTRIES)) {
-		/*
-		 * We don't support splitting up batches, so one batch of
-		 * commands plus an extra sync needs to fit inside the command
-		 * queue. There's also no way we can handle the weird alignment
-		 * restrictions on the base pointer for a unit-length queue.
-		 */
-		dev_err(smmu->dev, "command queue size <= %d entries not supported\n",
-			CMDQ_BATCH_ENTRIES);
+	if (!smmu->cmdq.q.llq.max_n_shift) {
+		/* Odd alignment restrictions on the base, so ignore for now */
+		dev_err(smmu->dev, "unit-length command queue not supported\n");
 		return -ENXIO;
 	}
 
@@ -3828,11 +3327,6 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 	if (smmu->sid_bits <= STRTAB_SPLIT)
 		smmu->features &= ~ARM_SMMU_FEAT_2_LVL_STRTAB;
 
-	/* IDR3 */
-	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR3);
-	if (FIELD_GET(IDR3_RIL, reg))
-		smmu->features |= ARM_SMMU_FEAT_RANGE_INV;
-
 	/* IDR5 */
 	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR5);
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 16:58:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 16:58:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49571.87691 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knPHI-00068T-B8; Thu, 10 Dec 2020 16:58:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49571.87691; Thu, 10 Dec 2020 16:58:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knPHI-00068M-81; Thu, 10 Dec 2020 16:58:52 +0000
Received: by outflank-mailman (input) for mailman id 49571;
 Thu, 10 Dec 2020 16:58:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BLK9=FO=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1knPHH-00067z-7d
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 16:58:51 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 8fd7ecf3-3242-416d-826e-283c4d7d22e0;
 Thu, 10 Dec 2020 16:58:50 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E415330E;
 Thu, 10 Dec 2020 08:58:49 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 0C5713F66B;
 Thu, 10 Dec 2020 08:58:48 -0800 (PST)
X-Inumbo-ID: 8fd7ecf3-3242-416d-826e-283c4d7d22e0
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v3 3/8] xen/arm: revert patch related to XArray
Date: Thu, 10 Dec 2020 16:57:01 +0000
Message-Id: <ca988f0f6c66a2e35d06c07e226d0145c1320611.1607617848.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1607617848.git.rahul.singh@arm.com>
References: <cover.1607617848.git.rahul.singh@arm.com>

XArray is not implemented in Xen; revert the patch that introduced the
XArray code into the SMMUv3 driver.

XArray was added in preparation for sharing some ASIDs with the CPU.

As Xen supports only Stage-2 translation and the ASID is only used for
Stage-1 translation, reverting this patch has no consequences for Xen.

Once XArray is implemented in Xen, this patch can be reinstated, should
Xen gain Stage-1 translation support.

This reverts commit 0299a1a81ca056e79c1a7fb751f936ec0d5c7afe.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
Changes in v3:
 - Added consequences of reverting this patch in commit message

---
 xen/drivers/passthrough/arm/smmu-v3.c | 27 +++++++++------------------
 1 file changed, 9 insertions(+), 18 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index 8b7747ed38..7b29ead48c 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -625,6 +625,7 @@ struct arm_smmu_device {
 
 #define ARM_SMMU_MAX_ASIDS		(1 << 16)
 	unsigned int			asid_bits;
+	DECLARE_BITMAP(asid_map, ARM_SMMU_MAX_ASIDS);
 
 #define ARM_SMMU_MAX_VMIDS		(1 << 16)
 	unsigned int			vmid_bits;
@@ -690,8 +691,6 @@ struct arm_smmu_option_prop {
 	const char *prop;
 };
 
-static DEFINE_XARRAY_ALLOC1(asid_xa);
-
 static struct arm_smmu_option_prop arm_smmu_options[] = {
 	{ ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" },
 	{ ARM_SMMU_OPT_PAGE0_REGS_ONLY, "cavium,cn9900-broken-page1-regspace"},
@@ -1346,14 +1345,6 @@ static void arm_smmu_free_cd_tables(struct arm_smmu_domain *smmu_domain)
 	cdcfg->cdtab = NULL;
 }
 
-static void arm_smmu_free_asid(struct arm_smmu_ctx_desc *cd)
-{
-	if (!cd->asid)
-		return;
-
-	xa_erase(&asid_xa, cd->asid);
-}
-
 /* Stream table manipulation functions */
 static void
 arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc *desc)
@@ -1988,9 +1979,10 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
 	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
 		struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
 
-		if (cfg->cdcfg.cdtab)
+		if (cfg->cdcfg.cdtab) {
 			arm_smmu_free_cd_tables(smmu_domain);
-		arm_smmu_free_asid(&cfg->cd);
+			arm_smmu_bitmap_free(smmu->asid_map, cfg->cd.asid);
+		}
 	} else {
 		struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
 		if (cfg->vmid)
@@ -2005,15 +1997,14 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
 				       struct io_pgtable_cfg *pgtbl_cfg)
 {
 	int ret;
-	u32 asid;
+	int asid;
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
 	struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
 	typeof(&pgtbl_cfg->arm_lpae_s1_cfg.tcr) tcr = &pgtbl_cfg->arm_lpae_s1_cfg.tcr;
 
-	ret = xa_alloc(&asid_xa, &asid, &cfg->cd,
-		       XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL);
-	if (ret)
-		return ret;
+	asid = arm_smmu_bitmap_alloc(smmu->asid_map, smmu->asid_bits);
+	if (asid < 0)
+		return asid;
 
 	cfg->s1cdmax = master->ssid_bits;
 
@@ -2046,7 +2037,7 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
 out_free_cd_tables:
 	arm_smmu_free_cd_tables(smmu_domain);
 out_free_asid:
-	arm_smmu_free_asid(&cfg->cd);
+	arm_smmu_bitmap_free(smmu->asid_map, asid);
 	return ret;
 }
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 16:59:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 16:59:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49583.87703 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knPHl-0006Iz-L8; Thu, 10 Dec 2020 16:59:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49583.87703; Thu, 10 Dec 2020 16:59:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knPHl-0006Is-Gy; Thu, 10 Dec 2020 16:59:21 +0000
Received: by outflank-mailman (input) for mailman id 49583;
 Thu, 10 Dec 2020 16:59:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BLK9=FO=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1knPHk-0006Ic-1E
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 16:59:20 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 1f14ff4c-7af0-421e-9d68-29bdaf99f4d2;
 Thu, 10 Dec 2020 16:59:17 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 7F02830E;
 Thu, 10 Dec 2020 08:59:17 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 7D30A3F66B;
 Thu, 10 Dec 2020 08:59:16 -0800 (PST)
X-Inumbo-ID: 1f14ff4c-7af0-421e-9d68-29bdaf99f4d2
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v3 4/8] xen/arm: Remove support for Stage-1 translation on SMMUv3.
Date: Thu, 10 Dec 2020 16:57:02 +0000
Message-Id: <a5e3509bbc4ce21e0d9d176d7a2984ef40ad0ae3.1607617848.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1607617848.git.rahul.singh@arm.com>
References: <cover.1607617848.git.rahul.singh@arm.com>

The Linux SMMUv3 driver supports both Stage-1 and Stage-2 translation.
As of now, only Stage-2 translation support has been tested, so remove
the Stage-1 code.

Once Stage-1 translation support has been tested, the removed code can
be added back.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
Changes in v3:
 - No change from previous version.

---
 xen/drivers/passthrough/arm/smmu-v3.c | 464 +-------------------------
 1 file changed, 14 insertions(+), 450 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index 7b29ead48c..0f16c63c49 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -449,19 +449,14 @@ struct arm_smmu_cmdq_ent {
 
 		#define CMDQ_OP_CFGI_STE	0x3
 		#define CMDQ_OP_CFGI_ALL	0x4
-		#define CMDQ_OP_CFGI_CD		0x5
-		#define CMDQ_OP_CFGI_CD_ALL	0x6
 		struct {
 			u32			sid;
-			u32			ssid;
 			union {
 				bool		leaf;
 				u8		span;
 			};
 		} cfgi;
 
-		#define CMDQ_OP_TLBI_NH_ASID	0x11
-		#define CMDQ_OP_TLBI_NH_VA	0x12
 		#define CMDQ_OP_TLBI_EL2_ALL	0x20
 		#define CMDQ_OP_TLBI_S12_VMALL	0x28
 		#define CMDQ_OP_TLBI_S2_IPA	0x2a
@@ -541,32 +536,6 @@ struct arm_smmu_strtab_l1_desc {
 	dma_addr_t			l2ptr_dma;
 };
 
-struct arm_smmu_ctx_desc {
-	u16				asid;
-	u64				ttbr;
-	u64				tcr;
-	u64				mair;
-};
-
-struct arm_smmu_l1_ctx_desc {
-	__le64				*l2ptr;
-	dma_addr_t			l2ptr_dma;
-};
-
-struct arm_smmu_ctx_desc_cfg {
-	__le64				*cdtab;
-	dma_addr_t			cdtab_dma;
-	struct arm_smmu_l1_ctx_desc	*l1_desc;
-	unsigned int			num_l1_ents;
-};
-
-struct arm_smmu_s1_cfg {
-	struct arm_smmu_ctx_desc_cfg	cdcfg;
-	struct arm_smmu_ctx_desc	cd;
-	u8				s1fmt;
-	u8				s1cdmax;
-};
-
 struct arm_smmu_s2_cfg {
 	u16				vmid;
 	u64				vttbr;
@@ -623,15 +592,10 @@ struct arm_smmu_device {
 	unsigned long			oas; /* PA */
 	unsigned long			pgsize_bitmap;
 
-#define ARM_SMMU_MAX_ASIDS		(1 << 16)
-	unsigned int			asid_bits;
-	DECLARE_BITMAP(asid_map, ARM_SMMU_MAX_ASIDS);
-
 #define ARM_SMMU_MAX_VMIDS		(1 << 16)
 	unsigned int			vmid_bits;
 	DECLARE_BITMAP(vmid_map, ARM_SMMU_MAX_VMIDS);
 
-	unsigned int			ssid_bits;
 	unsigned int			sid_bits;
 
 	struct arm_smmu_strtab_cfg	strtab_cfg;
@@ -655,7 +619,6 @@ struct arm_smmu_master {
 	u32				*sids;
 	unsigned int			num_sids;
 	bool				ats_enabled;
-	unsigned int			ssid_bits;
 };
 
 /* SMMU private data for an IOMMU domain */
@@ -676,7 +639,6 @@ struct arm_smmu_domain {
 
 	enum arm_smmu_domain_stage	stage;
 	union {
-		struct arm_smmu_s1_cfg	s1_cfg;
 		struct arm_smmu_s2_cfg	s2_cfg;
 	};
 
@@ -869,34 +831,19 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
 		cmd[1] |= FIELD_PREP(CMDQ_PREFETCH_1_SIZE, ent->prefetch.size);
 		cmd[1] |= ent->prefetch.addr & CMDQ_PREFETCH_1_ADDR_MASK;
 		break;
-	case CMDQ_OP_CFGI_CD:
-		cmd[0] |= FIELD_PREP(CMDQ_CFGI_0_SSID, ent->cfgi.ssid);
-		/* Fallthrough */
 	case CMDQ_OP_CFGI_STE:
 		cmd[0] |= FIELD_PREP(CMDQ_CFGI_0_SID, ent->cfgi.sid);
 		cmd[1] |= FIELD_PREP(CMDQ_CFGI_1_LEAF, ent->cfgi.leaf);
 		break;
-	case CMDQ_OP_CFGI_CD_ALL:
-		cmd[0] |= FIELD_PREP(CMDQ_CFGI_0_SID, ent->cfgi.sid);
-		break;
 	case CMDQ_OP_CFGI_ALL:
 		/* Cover the entire SID range */
 		cmd[1] |= FIELD_PREP(CMDQ_CFGI_1_RANGE, 31);
 		break;
-	case CMDQ_OP_TLBI_NH_VA:
-		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
-		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_ASID, ent->tlbi.asid);
-		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_LEAF, ent->tlbi.leaf);
-		cmd[1] |= ent->tlbi.addr & CMDQ_TLBI_1_VA_MASK;
-		break;
 	case CMDQ_OP_TLBI_S2_IPA:
 		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
 		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_LEAF, ent->tlbi.leaf);
 		cmd[1] |= ent->tlbi.addr & CMDQ_TLBI_1_IPA_MASK;
 		break;
-	case CMDQ_OP_TLBI_NH_ASID:
-		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_ASID, ent->tlbi.asid);
-		/* Fallthrough */
 	case CMDQ_OP_TLBI_S12_VMALL:
 		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
 		break;
@@ -1109,242 +1056,6 @@ static int arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
 	return ret;
 }
 
-/* Context descriptor manipulation functions */
-static void arm_smmu_sync_cd(struct arm_smmu_domain *smmu_domain,
-			     int ssid, bool leaf)
-{
-	size_t i;
-	unsigned long flags;
-	struct arm_smmu_master *master;
-	struct arm_smmu_device *smmu = smmu_domain->smmu;
-	struct arm_smmu_cmdq_ent cmd = {
-		.opcode	= CMDQ_OP_CFGI_CD,
-		.cfgi	= {
-			.ssid	= ssid,
-			.leaf	= leaf,
-		},
-	};
-
-	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
-	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
-		for (i = 0; i < master->num_sids; i++) {
-			cmd.cfgi.sid = master->sids[i];
-			arm_smmu_cmdq_issue_cmd(smmu, &cmd);
-		}
-	}
-	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
-
-	arm_smmu_cmdq_issue_sync(smmu);
-}
-
-static int arm_smmu_alloc_cd_leaf_table(struct arm_smmu_device *smmu,
-					struct arm_smmu_l1_ctx_desc *l1_desc)
-{
-	size_t size = CTXDESC_L2_ENTRIES * (CTXDESC_CD_DWORDS << 3);
-
-	l1_desc->l2ptr = dmam_alloc_coherent(smmu->dev, size,
-					     &l1_desc->l2ptr_dma, GFP_KERNEL);
-	if (!l1_desc->l2ptr) {
-		dev_warn(smmu->dev,
-			 "failed to allocate context descriptor table\n");
-		return -ENOMEM;
-	}
-	return 0;
-}
-
-static void arm_smmu_write_cd_l1_desc(__le64 *dst,
-				      struct arm_smmu_l1_ctx_desc *l1_desc)
-{
-	u64 val = (l1_desc->l2ptr_dma & CTXDESC_L1_DESC_L2PTR_MASK) |
-		  CTXDESC_L1_DESC_V;
-
-	/* See comment in arm_smmu_write_ctx_desc() */
-	WRITE_ONCE(*dst, cpu_to_le64(val));
-}
-
-static __le64 *arm_smmu_get_cd_ptr(struct arm_smmu_domain *smmu_domain,
-				   u32 ssid)
-{
-	__le64 *l1ptr;
-	unsigned int idx;
-	struct arm_smmu_l1_ctx_desc *l1_desc;
-	struct arm_smmu_device *smmu = smmu_domain->smmu;
-	struct arm_smmu_ctx_desc_cfg *cdcfg = &smmu_domain->s1_cfg.cdcfg;
-
-	if (smmu_domain->s1_cfg.s1fmt == STRTAB_STE_0_S1FMT_LINEAR)
-		return cdcfg->cdtab + ssid * CTXDESC_CD_DWORDS;
-
-	idx = ssid >> CTXDESC_SPLIT;
-	l1_desc = &cdcfg->l1_desc[idx];
-	if (!l1_desc->l2ptr) {
-		if (arm_smmu_alloc_cd_leaf_table(smmu, l1_desc))
-			return NULL;
-
-		l1ptr = cdcfg->cdtab + idx * CTXDESC_L1_DESC_DWORDS;
-		arm_smmu_write_cd_l1_desc(l1ptr, l1_desc);
-		/* An invalid L1CD can be cached */
-		arm_smmu_sync_cd(smmu_domain, ssid, false);
-	}
-	idx = ssid & (CTXDESC_L2_ENTRIES - 1);
-	return l1_desc->l2ptr + idx * CTXDESC_CD_DWORDS;
-}
-
-static int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain,
-				   int ssid, struct arm_smmu_ctx_desc *cd)
-{
-	/*
-	 * This function handles the following cases:
-	 *
-	 * (1) Install primary CD, for normal DMA traffic (SSID = 0).
-	 * (2) Install a secondary CD, for SID+SSID traffic.
-	 * (3) Update ASID of a CD. Atomically write the first 64 bits of the
-	 *     CD, then invalidate the old entry and mappings.
-	 * (4) Remove a secondary CD.
-	 */
-	u64 val;
-	bool cd_live;
-	__le64 *cdptr;
-	struct arm_smmu_device *smmu = smmu_domain->smmu;
-
-	if (WARN_ON(ssid >= (1 << smmu_domain->s1_cfg.s1cdmax)))
-		return -E2BIG;
-
-	cdptr = arm_smmu_get_cd_ptr(smmu_domain, ssid);
-	if (!cdptr)
-		return -ENOMEM;
-
-	val = le64_to_cpu(cdptr[0]);
-	cd_live = !!(val & CTXDESC_CD_0_V);
-
-	if (!cd) { /* (4) */
-		val = 0;
-	} else if (cd_live) { /* (3) */
-		val &= ~CTXDESC_CD_0_ASID;
-		val |= FIELD_PREP(CTXDESC_CD_0_ASID, cd->asid);
-		/*
-		 * Until CD+TLB invalidation, both ASIDs may be used for tagging
-		 * this substream's traffic
-		 */
-	} else { /* (1) and (2) */
-		cdptr[1] = cpu_to_le64(cd->ttbr & CTXDESC_CD_1_TTB0_MASK);
-		cdptr[2] = 0;
-		cdptr[3] = cpu_to_le64(cd->mair);
-
-		/*
-		 * STE is live, and the SMMU might read dwords of this CD in any
-		 * order. Ensure that it observes valid values before reading
-		 * V=1.
-		 */
-		arm_smmu_sync_cd(smmu_domain, ssid, true);
-
-		val = cd->tcr |
-#ifdef __BIG_ENDIAN
-			CTXDESC_CD_0_ENDI |
-#endif
-			CTXDESC_CD_0_R | CTXDESC_CD_0_A | CTXDESC_CD_0_ASET |
-			CTXDESC_CD_0_AA64 |
-			FIELD_PREP(CTXDESC_CD_0_ASID, cd->asid) |
-			CTXDESC_CD_0_V;
-
-		/* STALL_MODEL==0b10 && CD.S==0 is ILLEGAL */
-		if (smmu->features & ARM_SMMU_FEAT_STALL_FORCE)
-			val |= CTXDESC_CD_0_S;
-	}
-
-	/*
-	 * The SMMU accesses 64-bit values atomically. See IHI0070Ca 3.21.3
-	 * "Configuration structures and configuration invalidation completion"
-	 *
-	 *   The size of single-copy atomic reads made by the SMMU is
-	 *   IMPLEMENTATION DEFINED but must be at least 64 bits. Any single
-	 *   field within an aligned 64-bit span of a structure can be altered
-	 *   without first making the structure invalid.
-	 */
-	WRITE_ONCE(cdptr[0], cpu_to_le64(val));
-	arm_smmu_sync_cd(smmu_domain, ssid, true);
-	return 0;
-}
-
-static int arm_smmu_alloc_cd_tables(struct arm_smmu_domain *smmu_domain)
-{
-	int ret;
-	size_t l1size;
-	size_t max_contexts;
-	struct arm_smmu_device *smmu = smmu_domain->smmu;
-	struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
-	struct arm_smmu_ctx_desc_cfg *cdcfg = &cfg->cdcfg;
-
-	max_contexts = 1 << cfg->s1cdmax;
-
-	if (!(smmu->features & ARM_SMMU_FEAT_2_LVL_CDTAB) ||
-	    max_contexts <= CTXDESC_L2_ENTRIES) {
-		cfg->s1fmt = STRTAB_STE_0_S1FMT_LINEAR;
-		cdcfg->num_l1_ents = max_contexts;
-
-		l1size = max_contexts * (CTXDESC_CD_DWORDS << 3);
-	} else {
-		cfg->s1fmt = STRTAB_STE_0_S1FMT_64K_L2;
-		cdcfg->num_l1_ents = DIV_ROUND_UP(max_contexts,
-						  CTXDESC_L2_ENTRIES);
-
-		cdcfg->l1_desc = devm_kcalloc(smmu->dev, cdcfg->num_l1_ents,
-					      sizeof(*cdcfg->l1_desc),
-					      GFP_KERNEL);
-		if (!cdcfg->l1_desc)
-			return -ENOMEM;
-
-		l1size = cdcfg->num_l1_ents * (CTXDESC_L1_DESC_DWORDS << 3);
-	}
-
-	cdcfg->cdtab = dmam_alloc_coherent(smmu->dev, l1size, &cdcfg->cdtab_dma,
-					   GFP_KERNEL);
-	if (!cdcfg->cdtab) {
-		dev_warn(smmu->dev, "failed to allocate context descriptor\n");
-		ret = -ENOMEM;
-		goto err_free_l1;
-	}
-
-	return 0;
-
-err_free_l1:
-	if (cdcfg->l1_desc) {
-		devm_kfree(smmu->dev, cdcfg->l1_desc);
-		cdcfg->l1_desc = NULL;
-	}
-	return ret;
-}
-
-static void arm_smmu_free_cd_tables(struct arm_smmu_domain *smmu_domain)
-{
-	int i;
-	size_t size, l1size;
-	struct arm_smmu_device *smmu = smmu_domain->smmu;
-	struct arm_smmu_ctx_desc_cfg *cdcfg = &smmu_domain->s1_cfg.cdcfg;
-
-	if (cdcfg->l1_desc) {
-		size = CTXDESC_L2_ENTRIES * (CTXDESC_CD_DWORDS << 3);
-
-		for (i = 0; i < cdcfg->num_l1_ents; i++) {
-			if (!cdcfg->l1_desc[i].l2ptr)
-				continue;
-
-			dmam_free_coherent(smmu->dev, size,
-					   cdcfg->l1_desc[i].l2ptr,
-					   cdcfg->l1_desc[i].l2ptr_dma);
-		}
-		devm_kfree(smmu->dev, cdcfg->l1_desc);
-		cdcfg->l1_desc = NULL;
-
-		l1size = cdcfg->num_l1_ents * (CTXDESC_L1_DESC_DWORDS << 3);
-	} else {
-		l1size = cdcfg->num_l1_ents * (CTXDESC_CD_DWORDS << 3);
-	}
-
-	dmam_free_coherent(smmu->dev, l1size, cdcfg->cdtab, cdcfg->cdtab_dma);
-	cdcfg->cdtab_dma = 0;
-	cdcfg->cdtab = NULL;
-}
-
 /* Stream table manipulation functions */
 static void
 arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc *desc)
@@ -1394,7 +1105,6 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
 	u64 val = le64_to_cpu(dst[0]);
 	bool ste_live = false;
 	struct arm_smmu_device *smmu = NULL;
-	struct arm_smmu_s1_cfg *s1_cfg = NULL;
 	struct arm_smmu_s2_cfg *s2_cfg = NULL;
 	struct arm_smmu_domain *smmu_domain = NULL;
 	struct arm_smmu_cmdq_ent prefetch_cmd = {
@@ -1409,25 +1119,13 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
 		smmu = master->smmu;
 	}
 
-	if (smmu_domain) {
-		switch (smmu_domain->stage) {
-		case ARM_SMMU_DOMAIN_S1:
-			s1_cfg = &smmu_domain->s1_cfg;
-			break;
-		case ARM_SMMU_DOMAIN_S2:
-		case ARM_SMMU_DOMAIN_NESTED:
-			s2_cfg = &smmu_domain->s2_cfg;
-			break;
-		default:
-			break;
-		}
-	}
+	if (smmu_domain)
+		s2_cfg = &smmu_domain->s2_cfg;
 
 	if (val & STRTAB_STE_0_V) {
 		switch (FIELD_GET(STRTAB_STE_0_CFG, val)) {
 		case STRTAB_STE_0_CFG_BYPASS:
 			break;
-		case STRTAB_STE_0_CFG_S1_TRANS:
 		case STRTAB_STE_0_CFG_S2_TRANS:
 			ste_live = true;
 			break;
@@ -1443,7 +1141,7 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
 	val = STRTAB_STE_0_V;
 
 	/* Bypass/fault */
-	if (!smmu_domain || !(s1_cfg || s2_cfg)) {
+	if (!smmu_domain || !(s2_cfg)) {
 		if (!smmu_domain && disable_bypass)
 			val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_ABORT);
 		else
@@ -1462,25 +1160,6 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
 		return;
 	}
 
-	if (s1_cfg) {
-		BUG_ON(ste_live);
-		dst[1] = cpu_to_le64(
-			 FIELD_PREP(STRTAB_STE_1_S1DSS, STRTAB_STE_1_S1DSS_SSID0) |
-			 FIELD_PREP(STRTAB_STE_1_S1CIR, STRTAB_STE_1_S1C_CACHE_WBRA) |
-			 FIELD_PREP(STRTAB_STE_1_S1COR, STRTAB_STE_1_S1C_CACHE_WBRA) |
-			 FIELD_PREP(STRTAB_STE_1_S1CSH, ARM_SMMU_SH_ISH) |
-			 FIELD_PREP(STRTAB_STE_1_STRW, STRTAB_STE_1_STRW_NSEL1));
-
-		if (smmu->features & ARM_SMMU_FEAT_STALLS &&
-		   !(smmu->features & ARM_SMMU_FEAT_STALL_FORCE))
-			dst[1] |= cpu_to_le64(STRTAB_STE_1_S1STALLD);
-
-		val |= (s1_cfg->cdcfg.cdtab_dma & STRTAB_STE_0_S1CTXPTR_MASK) |
-			FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_S1_TRANS) |
-			FIELD_PREP(STRTAB_STE_0_S1CDMAX, s1_cfg->s1cdmax) |
-			FIELD_PREP(STRTAB_STE_0_S1FMT, s1_cfg->s1fmt);
-	}
-
 	if (s2_cfg) {
 		BUG_ON(ste_live);
 		dst[2] = cpu_to_le64(
@@ -1502,7 +1181,6 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
 						 STRTAB_STE_1_EATS_TRANS));
 
 	arm_smmu_sync_ste_for_sid(smmu, sid);
-	/* See comment in arm_smmu_write_ctx_desc() */
 	WRITE_ONCE(dst[0], cpu_to_le64(val));
 	arm_smmu_sync_ste_for_sid(smmu, sid);
 
@@ -1822,14 +1500,8 @@ static void arm_smmu_tlb_inv_context(void *cookie)
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
 	struct arm_smmu_cmdq_ent cmd;
 
-	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
-		cmd.opcode	= CMDQ_OP_TLBI_NH_ASID;
-		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd.asid;
-		cmd.tlbi.vmid	= 0;
-	} else {
-		cmd.opcode	= CMDQ_OP_TLBI_S12_VMALL;
-		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
-	}
+	cmd.opcode	= CMDQ_OP_TLBI_S12_VMALL;
+	cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
 
 	/*
 	 * NOTE: when io-pgtable is in non-strict mode, we may get here with
@@ -1857,13 +1529,8 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
 	if (!size)
 		return;
 
-	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
-		cmd.opcode	= CMDQ_OP_TLBI_NH_VA;
-		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd.asid;
-	} else {
-		cmd.opcode	= CMDQ_OP_TLBI_S2_IPA;
-		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
-	}
+	cmd.opcode	= CMDQ_OP_TLBI_S2_IPA;
+	cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
 
 	do {
 		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
@@ -1971,75 +1638,17 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
 {
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
 
 	iommu_put_dma_cookie(domain);
 	free_io_pgtable_ops(smmu_domain->pgtbl_ops);
 
-	/* Free the CD and ASID, if we allocated them */
-	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
-		struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
-
-		if (cfg->cdcfg.cdtab) {
-			arm_smmu_free_cd_tables(smmu_domain);
-			arm_smmu_bitmap_free(smmu->asid_map, cfg->cd.asid);
-		}
-	} else {
-		struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
-		if (cfg->vmid)
-			arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid);
-	}
+	if (cfg->vmid)
+		arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid);
 
 	kfree(smmu_domain);
 }
 
-static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
-				       struct arm_smmu_master *master,
-				       struct io_pgtable_cfg *pgtbl_cfg)
-{
-	int ret;
-	int asid;
-	struct arm_smmu_device *smmu = smmu_domain->smmu;
-	struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
-	typeof(&pgtbl_cfg->arm_lpae_s1_cfg.tcr) tcr = &pgtbl_cfg->arm_lpae_s1_cfg.tcr;
-
-	asid = arm_smmu_bitmap_alloc(smmu->asid_map, smmu->asid_bits);
-	if (asid < 0)
-		return asid;
-
-	cfg->s1cdmax = master->ssid_bits;
-
-	ret = arm_smmu_alloc_cd_tables(smmu_domain);
-	if (ret)
-		goto out_free_asid;
-
-	cfg->cd.asid	= (u16)asid;
-	cfg->cd.ttbr	= pgtbl_cfg->arm_lpae_s1_cfg.ttbr;
-	cfg->cd.tcr	= FIELD_PREP(CTXDESC_CD_0_TCR_T0SZ, tcr->tsz) |
-			  FIELD_PREP(CTXDESC_CD_0_TCR_TG0, tcr->tg) |
-			  FIELD_PREP(CTXDESC_CD_0_TCR_IRGN0, tcr->irgn) |
-			  FIELD_PREP(CTXDESC_CD_0_TCR_ORGN0, tcr->orgn) |
-			  FIELD_PREP(CTXDESC_CD_0_TCR_SH0, tcr->sh) |
-			  FIELD_PREP(CTXDESC_CD_0_TCR_IPS, tcr->ips) |
-			  CTXDESC_CD_0_TCR_EPD1 | CTXDESC_CD_0_AA64;
-	cfg->cd.mair	= pgtbl_cfg->arm_lpae_s1_cfg.mair;
-
-	/*
-	 * Note that this will end up calling arm_smmu_sync_cd() before
-	 * the master has been added to the devices list for this domain.
-	 * This isn't an issue because the STE hasn't been installed yet.
-	 */
-	ret = arm_smmu_write_ctx_desc(smmu_domain, 0, &cfg->cd);
-	if (ret)
-		goto out_free_cd_tables;
-
-	return 0;
-
-out_free_cd_tables:
-	arm_smmu_free_cd_tables(smmu_domain);
-out_free_asid:
-	arm_smmu_bitmap_free(smmu->asid_map, asid);
-	return ret;
-}
 
 static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
 				       struct arm_smmu_master *master,
@@ -2075,9 +1684,6 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
 	enum io_pgtable_fmt fmt;
 	struct io_pgtable_cfg pgtbl_cfg;
 	struct io_pgtable_ops *pgtbl_ops;
-	int (*finalise_stage_fn)(struct arm_smmu_domain *,
-				 struct arm_smmu_master *,
-				 struct io_pgtable_cfg *);
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
 
@@ -2087,29 +1693,8 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
 	}
 
 	/* Restrict the stage to what we can actually support */
-	if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S1))
-		smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
-	if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S2))
-		smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
-
-	switch (smmu_domain->stage) {
-	case ARM_SMMU_DOMAIN_S1:
-		ias = (smmu->features & ARM_SMMU_FEAT_VAX) ? 52 : 48;
-		ias = min_t(unsigned long, ias, VA_BITS);
-		oas = smmu->ias;
-		fmt = ARM_64_LPAE_S1;
-		finalise_stage_fn = arm_smmu_domain_finalise_s1;
-		break;
-	case ARM_SMMU_DOMAIN_NESTED:
-	case ARM_SMMU_DOMAIN_S2:
-		ias = smmu->ias;
-		oas = smmu->oas;
-		fmt = ARM_64_LPAE_S2;
-		finalise_stage_fn = arm_smmu_domain_finalise_s2;
-		break;
-	default:
-		return -EINVAL;
-	}
+	smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
+
 
 	pgtbl_cfg = (struct io_pgtable_cfg) {
 		.pgsize_bitmap	= smmu->pgsize_bitmap,
@@ -2131,7 +1716,7 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
 	domain->geometry.aperture_end = (1UL << pgtbl_cfg.ias) - 1;
 	domain->geometry.force_aperture = true;
 
-	ret = finalise_stage_fn(smmu_domain, master, &pgtbl_cfg);
+	ret = arm_smmu_domain_finalise_s2(smmu_domain, master, &pgtbl_cfg);
 	if (ret < 0) {
 		free_io_pgtable_ops(pgtbl_ops);
 		return ret;
@@ -2264,8 +1849,6 @@ static int arm_smmu_enable_pasid(struct arm_smmu_master *master)
 		return ret;
 	}
 
-	master->ssid_bits = min_t(u8, ilog2(num_pasids),
-				  master->smmu->ssid_bits);
 	return 0;
 }
 
@@ -2281,7 +1864,6 @@ static void arm_smmu_disable_pasid(struct arm_smmu_master *master)
 	if (!pdev->pasid_enabled)
 		return;
 
-	master->ssid_bits = 0;
 	pci_disable_pasid(pdev);
 }
 
@@ -2337,13 +1919,6 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 			dev_name(smmu->dev));
 		ret = -ENXIO;
 		goto out_unlock;
-	} else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1 &&
-		   master->ssid_bits != smmu_domain->s1_cfg.s1cdmax) {
-		dev_err(dev,
-			"cannot attach to incompatible domain (%u SSID bits != %u)\n",
-			smmu_domain->s1_cfg.s1cdmax, master->ssid_bits);
-		ret = -EINVAL;
-		goto out_unlock;
 	}
 
 	master->domain = smmu_domain;
@@ -2490,8 +2065,6 @@ static struct iommu_device *arm_smmu_probe_device(struct device *dev)
 		}
 	}
 
-	master->ssid_bits = min(smmu->ssid_bits, fwspec->num_pasid_bits);
-
 	/*
 	 * Note that PASID must be enabled before, and disabled after ATS:
 	 * PCI Express Base 4.0r1.0 - 10.5.1.3 ATS Control Register
@@ -2502,10 +2075,6 @@ static struct iommu_device *arm_smmu_probe_device(struct device *dev)
 	 */
 	arm_smmu_enable_pasid(master);
 
-	if (!(smmu->features & ARM_SMMU_FEAT_2_LVL_CDTAB))
-		master->ssid_bits = min_t(u8, master->ssid_bits,
-					  CTXDESC_LINEAR_CDMAX);
-
 	return &smmu->iommu;
 
 err_free_master:
@@ -3259,13 +2828,10 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 		smmu->features |= ARM_SMMU_FEAT_STALLS;
 	}
 
-	if (reg & IDR0_S1P)
-		smmu->features |= ARM_SMMU_FEAT_TRANS_S1;
-
 	if (reg & IDR0_S2P)
 		smmu->features |= ARM_SMMU_FEAT_TRANS_S2;
 
-	if (!(reg & (IDR0_S1P | IDR0_S2P))) {
+	if (!(reg & IDR0_S2P)) {
 		dev_err(smmu->dev, "no translation support!\n");
 		return -ENXIO;
 	}
@@ -3283,7 +2849,6 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 	}
 
 	/* ASID/VMID sizes */
-	smmu->asid_bits = reg & IDR0_ASID16 ? 16 : 8;
 	smmu->vmid_bits = reg & IDR0_VMID16 ? 16 : 8;
 
 	/* IDR1 */
@@ -3308,7 +2873,6 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 					     FIELD_GET(IDR1_PRIQS, reg));
 
 	/* SID/SSID sizes */
-	smmu->ssid_bits = FIELD_GET(IDR1_SSIDSIZE, reg);
 	smmu->sid_bits = FIELD_GET(IDR1_SIDSIZE, reg);
 
 	/*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 16:59:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 16:59:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49591.87715 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knPIK-0006Ru-TV; Thu, 10 Dec 2020 16:59:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49591.87715; Thu, 10 Dec 2020 16:59:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knPIK-0006Rn-QP; Thu, 10 Dec 2020 16:59:56 +0000
Received: by outflank-mailman (input) for mailman id 49591;
 Thu, 10 Dec 2020 16:59:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BLK9=FO=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1knPIJ-0006R7-Bb
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 16:59:55 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id e03e9275-5d13-4ebf-ba32-7557c250fe9d;
 Thu, 10 Dec 2020 16:59:52 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 6286E30E;
 Thu, 10 Dec 2020 08:59:51 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 8D4173F66B;
 Thu, 10 Dec 2020 08:59:50 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e03e9275-5d13-4ebf-ba32-7557c250fe9d
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>
Subject: [PATCH v3 5/8] xen/device-tree: Add dt_property_match_string helper
Date: Thu, 10 Dec 2020 16:57:03 +0000
Message-Id: <2cf4c10d0ce81290af96e29ee364df87c06ef849.1607617848.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1607617848.git.rahul.singh@arm.com>
References: <cover.1607617848.git.rahul.singh@arm.com>

Import the Linux helper of_property_match_string. This function searches
a string list property and returns the index of a specific string value.
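The lookup boils down to a walk over the property's packed, NUL-separated string list. As a standalone illustration of that scan (a hypothetical `match_string` helper over a raw property buffer, independent of Xen's `struct dt_property` types):

```c
#include <errno.h>
#include <string.h>

/*
 * Scan a packed list of NUL-terminated strings ("foo\0bar\0...") of total
 * length `len` and return the index of `string`, or a negative errno on
 * failure -- mirroring the walk done by dt_property_match_string().
 */
static int match_string(const char *list, size_t len, const char *string)
{
    const char *p = list, *end = list + len;
    size_t l;
    int i;

    if ( !list )
        return -ENODATA;

    for ( i = 0; p < end; i++, p += l )
    {
        l = strnlen(p, end - p) + 1;
        if ( p + l > end )          /* last string is not NUL-terminated */
            return -EILSEQ;
        if ( strcmp(string, p) == 0 )
            return i;               /* found it; return index */
    }
    return -ENODATA;                /* exhausted the list without a match */
}
```

For a `clock-names = "clk_a", "clk_b", "clk_c";` property the flattened value is `"clk_a\0clk_b\0clk_c\0"`, so searching for `"clk_b"` yields index 1, matching the semantics of Linux's of_property_match_string().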

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
Changes in v3:
 - This patch was introduced in this version.

---
 xen/common/device_tree.c      | 27 +++++++++++++++++++++++++++
 xen/include/xen/device_tree.h | 12 ++++++++++++
 2 files changed, 39 insertions(+)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index e107c6f89f..18825e333e 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -208,6 +208,33 @@ int dt_property_read_string(const struct dt_device_node *np,
     return 0;
 }
 
+int dt_property_match_string(const struct dt_device_node *np,
+                             const char *propname, const char *string)
+{
+    const struct dt_property *dtprop = dt_find_property(np, propname, NULL);
+    size_t l;
+    int i;
+    const char *p, *end;
+
+    if ( !dtprop )
+        return -EINVAL;
+    if ( !dtprop->value )
+        return -ENODATA;
+
+    p = dtprop->value;
+    end = p + dtprop->length;
+
+    for ( i = 0; p < end; i++, p += l )
+    {
+        l = strnlen(p, end - p) + 1;
+        if ( p + l > end )
+            return -EILSEQ;
+        if ( strcmp(string, p) == 0 )
+            return i; /* Found it; return index */
+    }
+    return -ENODATA;
+}
+
 bool_t dt_device_is_compatible(const struct dt_device_node *device,
                                const char *compat)
 {
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index f2ad22b79c..b02696be94 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -400,6 +400,18 @@ static inline bool_t dt_property_read_bool(const struct dt_device_node *np,
 int dt_property_read_string(const struct dt_device_node *np,
                             const char *propname, const char **out_string);
 
+/**
+ * dt_property_match_string() - Find string in a list and return index
+ * @np: pointer to node containing string list property
+ * @propname: string list property name
+ * @string: pointer to string to search for in string list
+ *
+ * This function searches a string list property and returns the index
+ * of a specific string value.
+ */
+int dt_property_match_string(const struct dt_device_node *np,
+                             const char *propname, const char *string);
+
 /**
  * Checks if the given "compat" string matches one of the strings in
  * the device's "compatible" property
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 17:00:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 17:00:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49597.87727 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knPIu-0007I7-D9; Thu, 10 Dec 2020 17:00:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49597.87727; Thu, 10 Dec 2020 17:00:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knPIu-0007I0-8e; Thu, 10 Dec 2020 17:00:32 +0000
Received: by outflank-mailman (input) for mailman id 49597;
 Thu, 10 Dec 2020 17:00:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BLK9=FO=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1knPIt-0007Hm-HE
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 17:00:31 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 6e9a53fb-61bf-445c-a320-c28acf59f846;
 Thu, 10 Dec 2020 17:00:29 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 5216130E;
 Thu, 10 Dec 2020 09:00:29 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 6CC753F66B;
 Thu, 10 Dec 2020 09:00:28 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6e9a53fb-61bf-445c-a320-c28acf59f846
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v3 6/8] xen/arm: Remove Linux specific code that is not usable in XEN
Date: Thu, 10 Dec 2020 16:57:04 +0000
Message-Id: <91b9845a03068d92aeaaa86fa67d4d06b2824652.1607617848.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1607617848.git.rahul.singh@arm.com>
References: <cover.1607617848.git.rahul.singh@arm.com>

Remove code related to the following functionality:
 1. struct io_pgtable_ops
 2. struct io_pgtable_cfg
 3. struct iommu_flush_ops,
 4. struct iommu_ops
 5. module_param_named, MODULE_PARM_DESC, module_platform_driver,
    MODULE_*
 6. IOMMU domain-types
 7. arm_smmu_set_bus_ops
 8. iommu_device_sysfs_add, iommu_device_register,
    iommu_device_set_fwnode

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
Changes in v3:
- Commit message updated to add more detail on what is removed in this
  patch.
- Removed instances of io_pgtable_cfg.
- Added back the ARM_SMMU_FEAT_COHERENCY feature.

---
 xen/drivers/passthrough/arm/smmu-v3.c | 475 ++------------------------
 1 file changed, 21 insertions(+), 454 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index 0f16c63c49..2966015e5d 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -394,13 +394,7 @@
 #define ARM_SMMU_CMDQ_SYNC_TIMEOUT_US	1000000 /* 1s! */
 #define ARM_SMMU_CMDQ_SYNC_SPIN_COUNT	10
 
-#define MSI_IOVA_BASE			0x8000000
-#define MSI_IOVA_LENGTH			0x100000
-
 static bool disable_bypass = 1;
-module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO);
-MODULE_PARM_DESC(disable_bypass,
-	"Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");
 
 enum pri_resp {
 	PRI_RESP_DENY = 0,
@@ -552,6 +546,19 @@ struct arm_smmu_strtab_cfg {
 	u32				strtab_base_cfg;
 };
 
+struct arm_lpae_s2_cfg {
+	u64			vttbr;
+	struct {
+		u32			ps:3;
+		u32			tg:2;
+		u32			sh:2;
+		u32			orgn:2;
+		u32			irgn:2;
+		u32			sl:2;
+		u32			tsz:6;
+	} vtcr;
+};
+
 /* An SMMUv3 instance */
 struct arm_smmu_device {
 	struct device			*dev;
@@ -633,7 +640,6 @@ struct arm_smmu_domain {
 	struct arm_smmu_device		*smmu;
 	struct mutex			init_mutex; /* Protects smmu pointer */
 
-	struct io_pgtable_ops		*pgtbl_ops;
 	bool				non_strict;
 	atomic_t			nr_ats_masters;
 
@@ -1493,7 +1499,6 @@ static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
 	return ret ? -ETIMEDOUT : 0;
 }
 
-/* IO_PGTABLE API */
 static void arm_smmu_tlb_inv_context(void *cookie)
 {
 	struct arm_smmu_domain *smmu_domain = cookie;
@@ -1514,86 +1519,10 @@ static void arm_smmu_tlb_inv_context(void *cookie)
 	arm_smmu_cmdq_issue_sync(smmu);
 }
 
-static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
-					  size_t granule, bool leaf, void *cookie)
-{
-	struct arm_smmu_domain *smmu_domain = cookie;
-	struct arm_smmu_device *smmu = smmu_domain->smmu;
-	struct arm_smmu_cmdq_ent cmd = {
-		.tlbi = {
-			.leaf	= leaf,
-			.addr	= iova,
-		},
-	};
-
-	if (!size)
-		return;
-
-	cmd.opcode	= CMDQ_OP_TLBI_S2_IPA;
-	cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
-
-	do {
-		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
-		cmd.tlbi.addr += granule;
-	} while (size -= granule);
-}
-
-static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather,
-					 unsigned long iova, size_t granule,
-					 void *cookie)
-{
-	arm_smmu_tlb_inv_range_nosync(iova, granule, granule, true, cookie);
-}
-
-static void arm_smmu_tlb_inv_walk(unsigned long iova, size_t size,
-				  size_t granule, void *cookie)
-{
-	struct arm_smmu_domain *smmu_domain = cookie;
-	struct arm_smmu_device *smmu = smmu_domain->smmu;
-
-	arm_smmu_tlb_inv_range_nosync(iova, size, granule, false, cookie);
-	arm_smmu_cmdq_issue_sync(smmu);
-}
-
-static void arm_smmu_tlb_inv_leaf(unsigned long iova, size_t size,
-				  size_t granule, void *cookie)
-{
-	struct arm_smmu_domain *smmu_domain = cookie;
-	struct arm_smmu_device *smmu = smmu_domain->smmu;
-
-	arm_smmu_tlb_inv_range_nosync(iova, size, granule, true, cookie);
-	arm_smmu_cmdq_issue_sync(smmu);
-}
-
-static const struct iommu_flush_ops arm_smmu_flush_ops = {
-	.tlb_flush_all	= arm_smmu_tlb_inv_context,
-	.tlb_flush_walk = arm_smmu_tlb_inv_walk,
-	.tlb_flush_leaf = arm_smmu_tlb_inv_leaf,
-	.tlb_add_page	= arm_smmu_tlb_inv_page_nosync,
-};
-
-/* IOMMU API */
-static bool arm_smmu_capable(enum iommu_cap cap)
-{
-	switch (cap) {
-	case IOMMU_CAP_CACHE_COHERENCY:
-		return true;
-	case IOMMU_CAP_NOEXEC:
-		return true;
-	default:
-		return false;
-	}
-}
-
-static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
+static struct iommu_domain *arm_smmu_domain_alloc(void)
 {
 	struct arm_smmu_domain *smmu_domain;
 
-	if (type != IOMMU_DOMAIN_UNMANAGED &&
-	    type != IOMMU_DOMAIN_DMA &&
-	    type != IOMMU_DOMAIN_IDENTITY)
-		return NULL;
-
 	/*
 	 * Allocate the domain and initialise some of its data structures.
 	 * We can't really do anything meaningful until we've added a
@@ -1603,12 +1532,6 @@ static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
 	if (!smmu_domain)
 		return NULL;
 
-	if (type == IOMMU_DOMAIN_DMA &&
-	    iommu_get_dma_cookie(&smmu_domain->domain)) {
-		kfree(smmu_domain);
-		return NULL;
-	}
-
 	mutex_init(&smmu_domain->init_mutex);
 	INIT_LIST_HEAD(&smmu_domain->devices);
 	spin_lock_init(&smmu_domain->devices_lock);
@@ -1640,9 +1563,6 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
 	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
 
-	iommu_put_dma_cookie(domain);
-	free_io_pgtable_ops(smmu_domain->pgtbl_ops);
-
 	if (cfg->vmid)
 		arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid);
 
@@ -1651,21 +1571,20 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
 
 
 static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
-				       struct arm_smmu_master *master,
-				       struct io_pgtable_cfg *pgtbl_cfg)
+				       struct arm_smmu_master *master)
 {
 	int vmid;
+	struct arm_lpae_s2_cfg arm_lpae_s2_cfg;
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
 	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
-	typeof(&pgtbl_cfg->arm_lpae_s2_cfg.vtcr) vtcr;
+	typeof(&arm_lpae_s2_cfg.vtcr) vtcr = &arm_lpae_s2_cfg.vtcr;
 
 	vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
 	if (vmid < 0)
 		return vmid;
 
-	vtcr = &pgtbl_cfg->arm_lpae_s2_cfg.vtcr;
 	cfg->vmid	= (u16)vmid;
-	cfg->vttbr	= pgtbl_cfg->arm_lpae_s2_cfg.vttbr;
+	cfg->vttbr	= arm_lpae_s2_cfg.vttbr;
 	cfg->vtcr	= FIELD_PREP(STRTAB_STE_2_VTCR_S2T0SZ, vtcr->tsz) |
 			  FIELD_PREP(STRTAB_STE_2_VTCR_S2SL0, vtcr->sl) |
 			  FIELD_PREP(STRTAB_STE_2_VTCR_S2IR0, vtcr->irgn) |
@@ -1680,49 +1599,15 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
 				    struct arm_smmu_master *master)
 {
 	int ret;
-	unsigned long ias, oas;
-	enum io_pgtable_fmt fmt;
-	struct io_pgtable_cfg pgtbl_cfg;
-	struct io_pgtable_ops *pgtbl_ops;
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
-	struct arm_smmu_device *smmu = smmu_domain->smmu;
-
-	if (domain->type == IOMMU_DOMAIN_IDENTITY) {
-		smmu_domain->stage = ARM_SMMU_DOMAIN_BYPASS;
-		return 0;
-	}
 
 	/* Restrict the stage to what we can actually support */
 	smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
 
-
-	pgtbl_cfg = (struct io_pgtable_cfg) {
-		.pgsize_bitmap	= smmu->pgsize_bitmap,
-		.ias		= ias,
-		.oas		= oas,
-		.coherent_walk	= smmu->features & ARM_SMMU_FEAT_COHERENCY,
-		.tlb		= &arm_smmu_flush_ops,
-		.iommu_dev	= smmu->dev,
-	};
-
-	if (smmu_domain->non_strict)
-		pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
-
-	pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain);
-	if (!pgtbl_ops)
-		return -ENOMEM;
-
-	domain->pgsize_bitmap = pgtbl_cfg.pgsize_bitmap;
-	domain->geometry.aperture_end = (1UL << pgtbl_cfg.ias) - 1;
-	domain->geometry.force_aperture = true;
-
-	ret = arm_smmu_domain_finalise_s2(smmu_domain, master, &pgtbl_cfg);
-	if (ret < 0) {
-		free_io_pgtable_ops(pgtbl_ops);
+	ret = arm_smmu_domain_finalise_s2(smmu_domain, master);
+	if (ret < 0)
 		return ret;
-	}
 
-	smmu_domain->pgtbl_ops = pgtbl_ops;
 	return 0;
 }
 
@@ -1939,76 +1824,6 @@ out_unlock:
 	return ret;
 }
 
-static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
-			phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
-{
-	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
-
-	if (!ops)
-		return -ENODEV;
-
-	return ops->map(ops, iova, paddr, size, prot);
-}
-
-static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
-			     size_t size, struct iommu_iotlb_gather *gather)
-{
-	int ret;
-	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
-	struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
-
-	if (!ops)
-		return 0;
-
-	ret = ops->unmap(ops, iova, size, gather);
-	if (ret && arm_smmu_atc_inv_domain(smmu_domain, 0, iova, size))
-		return 0;
-
-	return ret;
-}
-
-static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain)
-{
-	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
-
-	if (smmu_domain->smmu)
-		arm_smmu_tlb_inv_context(smmu_domain);
-}
-
-static void arm_smmu_iotlb_sync(struct iommu_domain *domain,
-				struct iommu_iotlb_gather *gather)
-{
-	struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu;
-
-	if (smmu)
-		arm_smmu_cmdq_issue_sync(smmu);
-}
-
-static phys_addr_t
-arm_smmu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
-{
-	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
-
-	if (domain->type == IOMMU_DOMAIN_IDENTITY)
-		return iova;
-
-	if (!ops)
-		return 0;
-
-	return ops->iova_to_phys(ops, iova);
-}
-
-static struct platform_driver arm_smmu_driver;
-
-static
-struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
-{
-	struct device *dev = driver_find_device_by_fwnode(&arm_smmu_driver.driver,
-							  fwnode);
-	put_device(dev);
-	return dev ? dev_get_drvdata(dev) : NULL;
-}
-
 static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
 {
 	unsigned long limit = smmu->strtab_cfg.num_l1_ents;
@@ -2019,8 +1834,6 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
 	return sid < limit;
 }
 
-static struct iommu_ops arm_smmu_ops;
-
 static struct iommu_device *arm_smmu_probe_device(struct device *dev)
 {
 	int i, ret;
@@ -2028,16 +1841,12 @@ static struct iommu_device *arm_smmu_probe_device(struct device *dev)
 	struct arm_smmu_master *master;
 	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
 
-	if (!fwspec || fwspec->ops != &arm_smmu_ops)
+	if (!fwspec)
 		return ERR_PTR(-ENODEV);
 
 	if (WARN_ON_ONCE(dev_iommu_priv_get(dev)))
 		return ERR_PTR(-EBUSY);
 
-	smmu = arm_smmu_get_by_fwnode(fwspec->iommu_fwnode);
-	if (!smmu)
-		return ERR_PTR(-ENODEV);
-
 	master = kzalloc(sizeof(*master), GFP_KERNEL);
 	if (!master)
 		return ERR_PTR(-ENOMEM);
@@ -2083,153 +1892,11 @@ err_free_master:
 	return ERR_PTR(ret);
 }
 
-static void arm_smmu_release_device(struct device *dev)
-{
-	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
-	struct arm_smmu_master *master;
-
-	if (!fwspec || fwspec->ops != &arm_smmu_ops)
-		return;
-
-	master = dev_iommu_priv_get(dev);
-	arm_smmu_detach_dev(master);
-	arm_smmu_disable_pasid(master);
-	kfree(master);
-	iommu_fwspec_free(dev);
-}
-
-static struct iommu_group *arm_smmu_device_group(struct device *dev)
-{
-	struct iommu_group *group;
-
-	/*
-	 * We don't support devices sharing stream IDs other than PCI RID
-	 * aliases, since the necessary ID-to-device lookup becomes rather
-	 * impractical given a potential sparse 32-bit stream ID space.
-	 */
-	if (dev_is_pci(dev))
-		group = pci_device_group(dev);
-	else
-		group = generic_device_group(dev);
-
-	return group;
-}
-
-static int arm_smmu_domain_get_attr(struct iommu_domain *domain,
-				    enum iommu_attr attr, void *data)
-{
-	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
-
-	switch (domain->type) {
-	case IOMMU_DOMAIN_UNMANAGED:
-		switch (attr) {
-		case DOMAIN_ATTR_NESTING:
-			*(int *)data = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED);
-			return 0;
-		default:
-			return -ENODEV;
-		}
-		break;
-	case IOMMU_DOMAIN_DMA:
-		switch (attr) {
-		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
-			*(int *)data = smmu_domain->non_strict;
-			return 0;
-		default:
-			return -ENODEV;
-		}
-		break;
-	default:
-		return -EINVAL;
-	}
-}
-
-static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
-				    enum iommu_attr attr, void *data)
-{
-	int ret = 0;
-	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
-
-	mutex_lock(&smmu_domain->init_mutex);
-
-	switch (domain->type) {
-	case IOMMU_DOMAIN_UNMANAGED:
-		switch (attr) {
-		case DOMAIN_ATTR_NESTING:
-			if (smmu_domain->smmu) {
-				ret = -EPERM;
-				goto out_unlock;
-			}
-
-			if (*(int *)data)
-				smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED;
-			else
-				smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
-			break;
-		default:
-			ret = -ENODEV;
-		}
-		break;
-	case IOMMU_DOMAIN_DMA:
-		switch(attr) {
-		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
-			smmu_domain->non_strict = *(int *)data;
-			break;
-		default:
-			ret = -ENODEV;
-		}
-		break;
-	default:
-		ret = -EINVAL;
-	}
-
-out_unlock:
-	mutex_unlock(&smmu_domain->init_mutex);
-	return ret;
-}
-
 static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args)
 {
 	return iommu_fwspec_add_ids(dev, args->args, 1);
 }
 
-static void arm_smmu_get_resv_regions(struct device *dev,
-				      struct list_head *head)
-{
-	struct iommu_resv_region *region;
-	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
-
-	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
-					 prot, IOMMU_RESV_SW_MSI);
-	if (!region)
-		return;
-
-	list_add_tail(&region->list, head);
-
-	iommu_dma_get_resv_regions(dev, head);
-}
-
-static struct iommu_ops arm_smmu_ops = {
-	.capable		= arm_smmu_capable,
-	.domain_alloc		= arm_smmu_domain_alloc,
-	.domain_free		= arm_smmu_domain_free,
-	.attach_dev		= arm_smmu_attach_dev,
-	.map			= arm_smmu_map,
-	.unmap			= arm_smmu_unmap,
-	.flush_iotlb_all	= arm_smmu_flush_iotlb_all,
-	.iotlb_sync		= arm_smmu_iotlb_sync,
-	.iova_to_phys		= arm_smmu_iova_to_phys,
-	.probe_device		= arm_smmu_probe_device,
-	.release_device		= arm_smmu_release_device,
-	.device_group		= arm_smmu_device_group,
-	.domain_get_attr	= arm_smmu_domain_get_attr,
-	.domain_set_attr	= arm_smmu_domain_set_attr,
-	.of_xlate		= arm_smmu_of_xlate,
-	.get_resv_regions	= arm_smmu_get_resv_regions,
-	.put_resv_regions	= generic_iommu_put_resv_regions,
-	.pgsize_bitmap		= -1UL, /* Restricted during device attach */
-};
-
 /* Probing and initialisation functions */
 static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
 				   struct arm_smmu_queue *q,
@@ -2929,16 +2596,6 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 		smmu->oas = 48;
 	}
 
-	if (arm_smmu_ops.pgsize_bitmap == -1UL)
-		arm_smmu_ops.pgsize_bitmap = smmu->pgsize_bitmap;
-	else
-		arm_smmu_ops.pgsize_bitmap |= smmu->pgsize_bitmap;
-
-	/* Set the DMA mask for our table walker */
-	if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(smmu->oas)))
-		dev_warn(smmu->dev,
-			 "failed to set DMA mask for table walker\n");
-
 	smmu->ias = max(smmu->ias, smmu->oas);
 
 	dev_info(smmu->dev, "ias %lu-bit, oas %lu-bit (features 0x%08x)\n",
@@ -3018,43 +2675,6 @@ static unsigned long arm_smmu_resource_size(struct arm_smmu_device *smmu)
 		return SZ_128K;
 }
 
-static int arm_smmu_set_bus_ops(struct iommu_ops *ops)
-{
-	int err;
-
-#ifdef CONFIG_PCI
-	if (pci_bus_type.iommu_ops != ops) {
-		err = bus_set_iommu(&pci_bus_type, ops);
-		if (err)
-			return err;
-	}
-#endif
-#ifdef CONFIG_ARM_AMBA
-	if (amba_bustype.iommu_ops != ops) {
-		err = bus_set_iommu(&amba_bustype, ops);
-		if (err)
-			goto err_reset_pci_ops;
-	}
-#endif
-	if (platform_bus_type.iommu_ops != ops) {
-		err = bus_set_iommu(&platform_bus_type, ops);
-		if (err)
-			goto err_reset_amba_ops;
-	}
-
-	return 0;
-
-err_reset_amba_ops:
-#ifdef CONFIG_ARM_AMBA
-	bus_set_iommu(&amba_bustype, NULL);
-#endif
-err_reset_pci_ops: __maybe_unused;
-#ifdef CONFIG_PCI
-	bus_set_iommu(&pci_bus_type, NULL);
-#endif
-	return err;
-}
-
 static void __iomem *arm_smmu_ioremap(struct device *dev, resource_size_t start,
 				      resource_size_t size)
 {
@@ -3147,68 +2767,15 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
 	if (ret)
 		return ret;
 
-	/* Record our private device structure */
-	platform_set_drvdata(pdev, smmu);
-
 	/* Reset the device */
 	ret = arm_smmu_device_reset(smmu, bypass);
 	if (ret)
 		return ret;
 
-	/* And we're up. Go go go! */
-	ret = iommu_device_sysfs_add(&smmu->iommu, dev, NULL,
-				     "smmu3.%pa", &ioaddr);
-	if (ret)
-		return ret;
-
-	iommu_device_set_ops(&smmu->iommu, &arm_smmu_ops);
-	iommu_device_set_fwnode(&smmu->iommu, dev->fwnode);
-
-	ret = iommu_device_register(&smmu->iommu);
-	if (ret) {
-		dev_err(dev, "Failed to register iommu\n");
-		return ret;
-	}
-
-	return arm_smmu_set_bus_ops(&arm_smmu_ops);
-}
-
-static int arm_smmu_device_remove(struct platform_device *pdev)
-{
-	struct arm_smmu_device *smmu = platform_get_drvdata(pdev);
-
-	arm_smmu_set_bus_ops(NULL);
-	iommu_device_unregister(&smmu->iommu);
-	iommu_device_sysfs_remove(&smmu->iommu);
-	arm_smmu_device_disable(smmu);
-
 	return 0;
 }
 
-static void arm_smmu_device_shutdown(struct platform_device *pdev)
-{
-	arm_smmu_device_remove(pdev);
-}
-
 static const struct of_device_id arm_smmu_of_match[] = {
 	{ .compatible = "arm,smmu-v3", },
 	{ },
 };
-MODULE_DEVICE_TABLE(of, arm_smmu_of_match);
-
-static struct platform_driver arm_smmu_driver = {
-	.driver	= {
-		.name			= "arm-smmu-v3",
-		.of_match_table		= arm_smmu_of_match,
-		.suppress_bind_attrs	= true,
-	},
-	.probe	= arm_smmu_device_probe,
-	.remove	= arm_smmu_device_remove,
-	.shutdown = arm_smmu_device_shutdown,
-};
-module_platform_driver(arm_smmu_driver);
-
-MODULE_DESCRIPTION("IOMMU API for ARM architected SMMUv3 implementations");
-MODULE_AUTHOR("Will Deacon <will@kernel.org>");
-MODULE_ALIAS("platform:arm-smmu-v3");
-MODULE_LICENSE("GPL v2");
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 17:01:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 17:01:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49604.87739 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knPJL-0007V1-Lv; Thu, 10 Dec 2020 17:00:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49604.87739; Thu, 10 Dec 2020 17:00:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knPJL-0007Uq-Ig; Thu, 10 Dec 2020 17:00:59 +0000
Received: by outflank-mailman (input) for mailman id 49604;
 Thu, 10 Dec 2020 17:00:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BLK9=FO=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1knPJK-0007UO-Fc
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 17:00:58 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 81fb0ba8-7859-4cc5-89ee-05c7c164a3b4;
 Thu, 10 Dec 2020 17:00:52 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C07C130E;
 Thu, 10 Dec 2020 09:00:51 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 227D23F66B;
 Thu, 10 Dec 2020 09:00:50 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 81fb0ba8-7859-4cc5-89ee-05c7c164a3b4
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v3 7/8] xen/arm: Add support for SMMUv3 driver
Date: Thu, 10 Dec 2020 16:57:05 +0000
Message-Id: <33645b592bc5935a3b28ad576a819d06ed81e8dd.1607617848.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1607617848.git.rahul.singh@arm.com>
References: <cover.1607617848.git.rahul.singh@arm.com>

Add support for the ARM architected SMMUv3 implementation. It is based
on the Linux SMMUv3 driver.

The driver is currently supported as a Tech Preview.

Major differences with regard to the Linux driver are as follows:
1. Only Stage-2 translation is supported, whereas the Linux driver
   supports both Stage-1 and Stage-2 translations.
2. Use the P2M page table instead of creating one, as SMMUv3 has the
   capability to share page tables with the CPU.
3. Tasklets are used in place of Linux's threaded IRQs for event queue
   and priority queue IRQ handling.
4. The latest version of the Linux SMMUv3 code implements the command
   queue access functions based on atomic operations implemented in
   Linux. The atomic functions used by the command queue access
   functions are not implemented in Xen, therefore we decided to port
   the earlier version of the code. The atomic operations were
   introduced to fix the bottleneck of the SMMU command queue insertion
   operation, with a new algorithm for inserting commands into the
   queue that is lock-free on the fast path.
   The consequence of reverting that change is that command queue
   insertion will be slow for large systems, as a spinlock will be used
   to serialize accesses from all CPUs to the single queue supported by
   the hardware. Once proper atomic operations are available in Xen,
   the driver can be updated.
5. A spinlock is used in place of a mutex when attaching a device to
   the SMMU, as there is no blocking lock implementation available in
   Xen. This might introduce latency in Xen and needs to be
   investigated before the driver is out of Tech Preview.
6. PCI ATS functionality is not supported, as there is no support
   available in Xen to test the functionality. The code is neither
   tested nor compiled; it is guarded by the flag CONFIG_PCI_ATS.
7. MSI interrupts are not supported, as there is no support available
   in Xen to request MSI interrupts. The code is neither tested nor
   compiled; it is guarded by the flag CONFIG_MSI.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
Changes in v3:
- added return statement for readx_poll_timeout function.
- remove iommu_get_dma_cookie and iommu_put_dma_cookie.
- remove struct arm_smmu_xen_device as not required.
- move dt_property_match_string to device_tree.c file.
- replace arm_smmu_*_thread to arm_smmu_*_tasklet to avoid confusion.
- use ARM_SMMU_REG_SZ as size when map memory to XEN.
- removed the bypass keyword to make sure that when the device-tree
  probe fails we report an error and do not continue to configure the
  SMMU in bypass mode.
- fixed minor comments.

---
 MAINTAINERS                           |   6 +
 SUPPORT.md                            |   1 +
 xen/drivers/passthrough/Kconfig       |  11 +
 xen/drivers/passthrough/arm/Makefile  |   1 +
 xen/drivers/passthrough/arm/smmu-v3.c | 777 ++++++++++++++++++++++----
 5 files changed, 683 insertions(+), 113 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index dab38a6a14..1d63489eec 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -249,6 +249,12 @@ F:	xen/include/asm-arm/
 F:	xen/include/public/arch-arm/
 F:	xen/include/public/arch-arm.h
 
+ARM SMMUv3
+M:	Bertrand Marquis <bertrand.marquis@arm.com>
+M:	Rahul Singh <rahul.singh@arm.com>
+S:	Supported
+F:	xen/drivers/passthrough/arm/smmu-v3.c
+
 Change Log
 M:	Paul Durrant <paul@xen.org>
 R:	Community Manager <community.manager@xenproject.org>
diff --git a/SUPPORT.md b/SUPPORT.md
index ab02aca5f4..5ee3c8651a 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -67,6 +67,7 @@ For the Cortex A57 r0p0 - r1p1, see Errata 832075.
     Status, Intel VT-d: Supported
     Status, ARM SMMUv1: Supported, not security supported
     Status, ARM SMMUv2: Supported, not security supported
+    Status, ARM SMMUv3: Tech Preview
     Status, Renesas IPMMU-VMSA: Supported, not security supported
 
 ### ARM/GICv3 ITS
diff --git a/xen/drivers/passthrough/Kconfig b/xen/drivers/passthrough/Kconfig
index 0036007ec4..341ba92b30 100644
--- a/xen/drivers/passthrough/Kconfig
+++ b/xen/drivers/passthrough/Kconfig
@@ -13,6 +13,17 @@ config ARM_SMMU
 	  Say Y here if your SoC includes an IOMMU device implementing the
 	  ARM SMMU architecture.
 
+config ARM_SMMU_V3
+	bool "ARM Ltd. System MMU Version 3 (SMMUv3) Support" if EXPERT
+	depends on ARM_64
+	---help---
+	 Support for implementations of the ARM System MMU architecture
+	 version 3. The driver is in an experimental stage and should not
+	 be used in production.
+
+	 Say Y here if your system includes an IOMMU device implementing
+	 the ARM SMMUv3 architecture.
+
 config IPMMU_VMSA
 	bool "Renesas IPMMU-VMSA found in R-Car Gen3 SoCs"
 	depends on ARM_64
diff --git a/xen/drivers/passthrough/arm/Makefile b/xen/drivers/passthrough/arm/Makefile
index fcd918ea3e..c5fb3b58a5 100644
--- a/xen/drivers/passthrough/arm/Makefile
+++ b/xen/drivers/passthrough/arm/Makefile
@@ -1,3 +1,4 @@
 obj-y += iommu.o iommu_helpers.o iommu_fwspec.o
 obj-$(CONFIG_ARM_SMMU) += smmu.o
 obj-$(CONFIG_IPMMU_VMSA) += ipmmu-vmsa.o
+obj-$(CONFIG_ARM_SMMU_V3) += smmu-v3.o
diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index 2966015e5d..65b3db94ad 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -2,37 +2,268 @@
 /*
  * IOMMU API for ARM architected SMMUv3 implementations.
  *
+ * Based on Linux's SMMUv3 driver:
+ *    drivers/iommu/arm-smmu-v3.c
+ *    commit: ab435ce49bd1d02e33dfec24f76955dc1196970b
+ * and Xen's SMMU driver:
+ *    xen/drivers/passthrough/arm/smmu.c
+ *
+ * Major differences with regard to Linux driver are as follows:
+ *  1. Driver is currently supported as Tech Preview.
+ *  2. Only Stage-2 translation is supported, whereas the Linux driver
+ *     supports both Stage-1 and Stage-2 translations.
+ *  3. Use the P2M page table instead of creating one, as SMMUv3 has the
+ *     capability to share page tables with the CPU.
+ *  4. Tasklets are used in place of Linux's threaded IRQs for event queue
+ *     and priority queue IRQ handling.
+ *  5. The latest version of the Linux SMMUv3 code implements the command
+ *     queue access functions based on atomic operations implemented in
+ *     Linux. Atomic functions used by the command queue access functions
+ *     are not implemented in Xen, therefore we decided to port the earlier
+ *     version of the code. The atomic operations were introduced to fix the
+ *     bottleneck of the SMMU command queue insertion operation, with a new
+ *     algorithm for inserting commands into the queue that is lock-free
+ *     on the fast path.
+ *     The consequence of reverting that change is that command queue
+ *     insertion will be slow for large systems as a spinlock will be used
+ *     to serialize accesses from all CPUs to the single queue supported by
+ *     the hardware. Once proper atomic operations are available in Xen,
+ *     the driver can be updated.
+ *  6. A spinlock is used in place of a mutex when attaching a device to the
+ *     SMMU, as there is no blocking lock implementation available in Xen.
+ *     This might introduce latency in Xen and needs to be investigated
+ *     before the driver is out of Tech Preview.
+ *  7. PCI ATS functionality is not supported, as there is no support available
+ *     in Xen to test the functionality. The code is neither tested nor
+ *     compiled; it is guarded by the flag CONFIG_PCI_ATS.
+ *  8. MSI interrupts are not supported, as there is no support available in
+ *     Xen to request MSI interrupts. The code is neither tested nor
+ *     compiled; it is guarded by the flag CONFIG_MSI.
+ *
+ * The following functionality should be supported before the driver is out
+ * of Tech Preview:
+ *
+ *  1. Investigate the timing impact of using a spinlock in place of a mutex
+ *     when attaching a device to the SMMU.
+ *  2. Merge the latest Linux SMMUv3 driver code once atomic operations are
+ *     available in Xen.
+ *  3. PCI ATS and MSI interrupts should be supported.
+ *  4. Investigate side effects of using tasklets in place of threaded IRQs
+ *     and fix any that are found.
+ *  5. The fallthrough keyword should be supported.
+ *  6. Implement the ffsll function in the bitops.h file.
+ *
  * Copyright (C) 2015 ARM Limited
  *
  * Author: Will Deacon <will.deacon@arm.com>
  *
- * This driver is powered by bad coffee and bombay mix.
+ * Copyright (C) 2020 Arm Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
  */
 
-#include <linux/acpi.h>
-#include <linux/acpi_iort.h>
-#include <linux/bitfield.h>
-#include <linux/bitops.h>
-#include <linux/crash_dump.h>
-#include <linux/delay.h>
-#include <linux/dma-iommu.h>
-#include <linux/err.h>
-#include <linux/interrupt.h>
-#include <linux/io-pgtable.h>
-#include <linux/iommu.h>
-#include <linux/iopoll.h>
-#include <linux/module.h>
-#include <linux/msi.h>
-#include <linux/of.h>
-#include <linux/of_address.h>
-#include <linux/of_iommu.h>
-#include <linux/of_platform.h>
-#include <linux/pci.h>
-#include <linux/pci-ats.h>
-#include <linux/platform_device.h>
-
-#include <linux/amba/bus.h>
+#include <xen/acpi.h>
+#include <xen/config.h>
+#include <xen/delay.h>
+#include <xen/errno.h>
+#include <xen/err.h>
+#include <xen/irq.h>
+#include <xen/lib.h>
+#include <xen/list.h>
+#include <xen/mm.h>
+#include <xen/rbtree.h>
+#include <xen/sched.h>
+#include <xen/sizes.h>
+#include <xen/vmap.h>
+#include <asm/atomic.h>
+#include <asm/device.h>
+#include <asm/io.h>
+#include <asm/iommu_fwspec.h>
+#include <asm/platform.h>
+
+/* Linux compatibility functions. */
+typedef paddr_t		dma_addr_t;
+typedef paddr_t		phys_addr_t;
+typedef unsigned int		gfp_t;
+
+#define platform_device		device
+
+#define GFP_KERNEL		0
+
+/* Alias to Xen device tree helpers */
+#define device_node			dt_device_node
+#define of_phandle_args		dt_phandle_args
+#define of_device_id		dt_device_match
+#define of_match_node		dt_match_node
+#define of_property_read_u32(np, pname, out)	\
+		(!dt_property_read_u32(np, pname, out))
+#define of_property_read_bool		dt_property_read_bool
+#define of_parse_phandle_with_args	dt_parse_phandle_with_args
+
+/* Alias to Xen time functions */
+#define ktime_t s_time_t
+#define ktime_get()			(NOW())
+#define ktime_add_us(t, i)		((t) + MICROSECS(i))
+#define ktime_compare(t, i)		((t) > (i))
+
+/* Alias to Xen allocation helpers */
+#define kzalloc(size, flags)	_xzalloc(size, sizeof(void *))
+#define kfree	xfree
+#define devm_kzalloc(dev, size, flags)	 _xzalloc(size, sizeof(void *))
+
+/* Device logger functions */
+#define dev_name(dev)	dt_node_full_name(dev->of_node)
+#define dev_dbg(dev, fmt, ...)			\
+	printk(XENLOG_DEBUG "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
+#define dev_notice(dev, fmt, ...)		\
+	printk(XENLOG_INFO "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
+#define dev_warn(dev, fmt, ...)			\
+	printk(XENLOG_WARNING "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
+#define dev_err(dev, fmt, ...)			\
+	printk(XENLOG_ERR "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
+#define dev_info(dev, fmt, ...)			\
+	printk(XENLOG_INFO "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
+#define dev_err_ratelimited(dev, fmt, ...)			\
+	printk(XENLOG_ERR "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
+
+/*
+ * Periodically poll an address and wait between reads in us until a
+ * condition is met or a timeout occurs.
+ *
+ * @return: 0 when cond met, -ETIMEDOUT upon timeout
+ */
+#define readx_poll_timeout(op, addr, val, cond, sleep_us, timeout_us)	\
+({		\
+	s_time_t deadline = NOW() + MICROSECS(timeout_us);		\
+	for (;;) {		\
+		(val) = op(addr);		\
+		if (cond)		\
+			break;		\
+		if (NOW() > deadline) {		\
+			(val) = op(addr);		\
+			break;		\
+		}		\
+		udelay(sleep_us);		\
+	}		\
+	(cond) ? 0 : -ETIMEDOUT;		\
+})
+
+#define readl_relaxed_poll_timeout(addr, val, cond, delay_us, timeout_us)	\
+	readx_poll_timeout(readl_relaxed, addr, val, cond, delay_us, timeout_us)
+
+#define FIELD_PREP(_mask, _val)			\
+	(((typeof(_mask))(_val) << (__builtin_ffsll(_mask) - 1)) & (_mask))
+
+#define FIELD_GET(_mask, _reg)			\
+	(typeof(_mask))(((_reg) & (_mask)) >> (__builtin_ffsll(_mask) - 1))
+
+/*
+ * Helpers for DMA allocation. Just the function name is reused for
+ * porting code, these allocation are not managed allocations
+ */
+static void *dmam_alloc_coherent(struct device *dev, size_t size,
+				paddr_t *dma_handle, gfp_t gfp)
+{
+	void *vaddr;
+	unsigned long alignment = size;
+
+	/*
+	 * _xzalloc requires that (align & (align - 1)) == 0. Most of the
+	 * allocations in the SMMU code should pass a suitable size. In
+	 * case this is not true, print a warning and align to the size of
+	 * a (void *).
+	 */
+	if (size & (size - 1)) {
+		printk(XENLOG_WARNING "SMMUv3: Fixing alignment for the DMA buffer\n");
+		alignment = sizeof(void *);
+	}
+
+	vaddr = _xzalloc(size, alignment);
+	if (!vaddr) {
+		printk(XENLOG_ERR "SMMUv3: DMA allocation failed\n");
+		return NULL;
+	}
+
+	*dma_handle = virt_to_maddr(vaddr);
+
+	return vaddr;
+}
+
+
+/* Xen specific code. */
+struct iommu_domain {
+	/* Runtime SMMU configuration for this iommu_domain */
+	atomic_t		ref;
+	/*
+	 * Used to link iommu_domain contexts for the same domain.
+	 * There is at least one per SMMU used by the domain.
+	 */
+	struct list_head		list;
+};
 
+/* Describes information required for a Xen domain */
+struct arm_smmu_xen_domain {
+	spinlock_t		lock;
+
+	/* List of iommu domains associated to this domain */
+	struct list_head	contexts;
+};
+
+
+/* Keep a list of devices associated with this driver */
+static DEFINE_SPINLOCK(arm_smmu_devices_lock);
+static LIST_HEAD(arm_smmu_devices);
+
+static inline void *dev_iommu_priv_get(struct device *dev)
+{
+	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+
+	return fwspec && fwspec->iommu_priv ? fwspec->iommu_priv : NULL;
+}
+
+static inline void dev_iommu_priv_set(struct device *dev, void *priv)
+{
+	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+
+	fwspec->iommu_priv = priv;
+}
+
+static int platform_get_irq_byname_optional(struct device *dev,
+				const char *name)
+{
+	int index, ret;
+	struct dt_device_node *np  = dev_to_dt(dev);
+
+	if (unlikely(!name))
+		return -EINVAL;
+
+	index = dt_property_match_string(np, "interrupt-names", name);
+	if (index < 0) {
+		dev_info(dev, "IRQ %s not found\n", name);
+		return index;
+	}
+
+	ret = platform_get_irq(np, index);
+	if (ret < 0) {
+		dev_err(dev, "failed to get irq index %d\n", index);
+		return -ENODEV;
+	}
+
+	return ret;
+}
+
+/* Start of Linux SMMUv3 code */
 /* MMIO registers */
 #define ARM_SMMU_IDR0			0x0
 #define IDR0_ST_LVL			GENMASK(28, 27)
@@ -402,6 +633,7 @@ enum pri_resp {
 	PRI_RESP_SUCC = 2,
 };
 
+#ifdef CONFIG_MSI
 enum arm_smmu_msi_index {
 	EVTQ_MSI_INDEX,
 	GERROR_MSI_INDEX,
@@ -426,6 +658,7 @@ static phys_addr_t arm_smmu_msi_cfg[ARM_SMMU_MAX_MSIS][3] = {
 		ARM_SMMU_PRIQ_IRQ_CFG2,
 	},
 };
+#endif
 
 struct arm_smmu_cmdq_ent {
 	/* Common fields */
@@ -534,6 +767,7 @@ struct arm_smmu_s2_cfg {
 	u16				vmid;
 	u64				vttbr;
 	u64				vtcr;
+	struct domain		*domain;
 };
 
 struct arm_smmu_strtab_cfg {
@@ -613,8 +847,13 @@ struct arm_smmu_device {
 		u64			padding;
 	};
 
-	/* IOMMU core code handle */
-	struct iommu_device		iommu;
+	/* Need to keep a list of SMMU devices */
+	struct list_head		devices;
+
+	/* Tasklets for handling evts/faults and PCI page request IRQs */
+	struct tasklet		evtq_irq_tasklet;
+	struct tasklet		priq_irq_tasklet;
+	struct tasklet		combined_irq_tasklet;
 };
 
 /* SMMU private data for each master */
@@ -638,7 +877,6 @@ enum arm_smmu_domain_stage {
 
 struct arm_smmu_domain {
 	struct arm_smmu_device		*smmu;
-	struct mutex			init_mutex; /* Protects smmu pointer */
 
 	bool				non_strict;
 	atomic_t			nr_ats_masters;
@@ -987,6 +1225,7 @@ static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
 	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
 }
 
+#ifdef CONFIG_MSI
 /*
  * The difference between val and sync_idx is bounded by the maximum size of
  * a queue at 2^20 entries, so 32 bits is plenty for wrap-safe arithmetic.
@@ -1030,6 +1269,13 @@ static int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
 
 	return __arm_smmu_sync_poll_msi(smmu, ent.sync.msidata);
 }
+#else
+static inline int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
+{
+	return 0;
+}
+#endif /* CONFIG_MSI */
+
 
 static int __arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
 {
@@ -1072,7 +1318,7 @@ arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc *desc)
 	val |= desc->l2ptr_dma & STRTAB_L1_DESC_L2PTR_MASK;
 
 	/* See comment in arm_smmu_write_ctx_desc() */
-	WRITE_ONCE(*dst, cpu_to_le64(val));
+	write_atomic(dst, cpu_to_le64(val));
 }
 
 static void arm_smmu_sync_ste_for_sid(struct arm_smmu_device *smmu, u32 sid)
@@ -1187,7 +1433,7 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
 						 STRTAB_STE_1_EATS_TRANS));
 
 	arm_smmu_sync_ste_for_sid(smmu, sid);
-	WRITE_ONCE(dst[0], cpu_to_le64(val));
+	write_atomic(&dst[0], cpu_to_le64(val));
 	arm_smmu_sync_ste_for_sid(smmu, sid);
 
 	/* It's likely that we'll want to use the new STE soon */
@@ -1234,7 +1480,7 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
 }
 
 /* IRQ and event handlers */
-static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
+static void arm_smmu_evtq_tasklet(void *dev)
 {
 	int i;
 	struct arm_smmu_device *smmu = dev;
@@ -1264,7 +1510,6 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
 	/* Sync our overflow flag, as we believe we're up to speed */
 	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
 		    Q_IDX(llq, llq->cons);
-	return IRQ_HANDLED;
 }
 
 static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
@@ -1305,7 +1550,7 @@ static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
 	}
 }
 
-static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
+static void arm_smmu_priq_tasklet(void *dev)
 {
 	struct arm_smmu_device *smmu = dev;
 	struct arm_smmu_queue *q = &smmu->priq.q;
@@ -1324,12 +1569,12 @@ static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
 	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
 		      Q_IDX(llq, llq->cons);
 	queue_sync_cons_out(q);
-	return IRQ_HANDLED;
 }
 
 static int arm_smmu_device_disable(struct arm_smmu_device *smmu);
 
-static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
+static void arm_smmu_gerror_handler(int irq, void *dev,
+				struct cpu_user_regs *regs)
 {
 	u32 gerror, gerrorn, active;
 	struct arm_smmu_device *smmu = dev;
@@ -1339,7 +1584,7 @@ static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
 
 	active = gerror ^ gerrorn;
 	if (!(active & GERROR_ERR_MASK))
-		return IRQ_NONE; /* No errors pending */
+		return; /* No errors pending */
 
 	dev_warn(smmu->dev,
 		 "unexpected global error reported (0x%08x), this could be serious\n",
@@ -1372,26 +1617,44 @@ static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
 		arm_smmu_cmdq_skip_err(smmu);
 
 	writel(gerror, smmu->base + ARM_SMMU_GERRORN);
-	return IRQ_HANDLED;
 }
 
-static irqreturn_t arm_smmu_combined_irq_thread(int irq, void *dev)
+static void arm_smmu_combined_irq_handler(int irq, void *dev,
+				struct cpu_user_regs *regs)
+{
+	struct arm_smmu_device *smmu = dev;
+
+	arm_smmu_gerror_handler(irq, dev, regs);
+
+	tasklet_schedule(&(smmu->combined_irq_tasklet));
+}
+
+static void arm_smmu_combined_irq_tasklet(void *dev)
 {
 	struct arm_smmu_device *smmu = dev;
 
-	arm_smmu_evtq_thread(irq, dev);
+	arm_smmu_evtq_tasklet(dev);
 	if (smmu->features & ARM_SMMU_FEAT_PRI)
-		arm_smmu_priq_thread(irq, dev);
+		arm_smmu_priq_tasklet(dev);
+}
 
-	return IRQ_HANDLED;
+static void arm_smmu_evtq_irq_tasklet(int irq, void *dev,
+				struct cpu_user_regs *regs)
+{
+	struct arm_smmu_device *smmu = dev;
+
+	tasklet_schedule(&(smmu->evtq_irq_tasklet));
 }
 
-static irqreturn_t arm_smmu_combined_irq_handler(int irq, void *dev)
+static void arm_smmu_priq_irq_tasklet(int irq, void *dev,
+				struct cpu_user_regs *regs)
 {
-	arm_smmu_gerror_handler(irq, dev);
-	return IRQ_WAKE_THREAD;
+	struct arm_smmu_device *smmu = dev;
+
+	tasklet_schedule(&(smmu->priq_irq_tasklet));
 }
 
+#ifdef CONFIG_PCI_ATS
 static void
 arm_smmu_atc_inv_to_cmd(int ssid, unsigned long iova, size_t size,
 			struct arm_smmu_cmdq_ent *cmd)
@@ -1498,6 +1761,7 @@ static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
 
 	return ret ? -ETIMEDOUT : 0;
 }
+#endif
 
 static void arm_smmu_tlb_inv_context(void *cookie)
 {
@@ -1532,7 +1796,6 @@ static struct iommu_domain *arm_smmu_domain_alloc(void)
 	if (!smmu_domain)
 		return NULL;
 
-	mutex_init(&smmu_domain->init_mutex);
 	INIT_LIST_HEAD(&smmu_domain->devices);
 	spin_lock_init(&smmu_domain->devices_lock);
 
@@ -1578,6 +1841,17 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
 	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
 	typeof(&arm_lpae_s2_cfg.vtcr) vtcr = &arm_lpae_s2_cfg.vtcr;
+	uint64_t reg = READ_SYSREG64(VTCR_EL2);
+
+	vtcr->tsz	= FIELD_GET(STRTAB_STE_2_VTCR_S2T0SZ, reg);
+	vtcr->sl	= FIELD_GET(STRTAB_STE_2_VTCR_S2SL0, reg);
+	vtcr->irgn	= FIELD_GET(STRTAB_STE_2_VTCR_S2IR0, reg);
+	vtcr->orgn	= FIELD_GET(STRTAB_STE_2_VTCR_S2OR0, reg);
+	vtcr->sh	= FIELD_GET(STRTAB_STE_2_VTCR_S2SH0, reg);
+	vtcr->tg	= FIELD_GET(STRTAB_STE_2_VTCR_S2TG, reg);
+	vtcr->ps	= FIELD_GET(STRTAB_STE_2_VTCR_S2PS, reg);
+
+	arm_lpae_s2_cfg.vttbr  = page_to_maddr(cfg->domain->arch.p2m.root);
 
 	vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
 	if (vmid < 0)
@@ -1592,6 +1866,11 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
 			  FIELD_PREP(STRTAB_STE_2_VTCR_S2SH0, vtcr->sh) |
 			  FIELD_PREP(STRTAB_STE_2_VTCR_S2TG, vtcr->tg) |
 			  FIELD_PREP(STRTAB_STE_2_VTCR_S2PS, vtcr->ps);
+
+	printk(XENLOG_DEBUG
+		   "SMMUv3: d%u: vmid 0x%x vtcr 0x%"PRIpaddr" p2maddr 0x%"PRIpaddr"\n",
+		   cfg->domain->domain_id, cfg->vmid, cfg->vtcr, cfg->vttbr);
+
 	return 0;
 }
 
@@ -1653,6 +1932,7 @@ static void arm_smmu_install_ste_for_dev(struct arm_smmu_master *master)
 	}
 }
 
+#ifdef CONFIG_PCI_ATS
 static bool arm_smmu_ats_supported(struct arm_smmu_master *master)
 {
 	struct device *dev = master->dev;
@@ -1751,6 +2031,23 @@ static void arm_smmu_disable_pasid(struct arm_smmu_master *master)
 
 	pci_disable_pasid(pdev);
 }
+#else
+static inline bool arm_smmu_ats_supported(struct arm_smmu_master *master)
+{
+	return false;
+}
+
+static inline void arm_smmu_enable_ats(struct arm_smmu_master *master) { }
+
+static inline void arm_smmu_disable_ats(struct arm_smmu_master *master) { }
+
+static inline int arm_smmu_enable_pasid(struct arm_smmu_master *master)
+{
+	return 0;
+}
+
+static inline void arm_smmu_disable_pasid(struct arm_smmu_master *master) { }
+#endif
 
 static void arm_smmu_detach_dev(struct arm_smmu_master *master)
 {
@@ -1788,8 +2085,6 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 
 	arm_smmu_detach_dev(master);
 
-	mutex_lock(&smmu_domain->init_mutex);
-
 	if (!smmu_domain->smmu) {
 		smmu_domain->smmu = smmu;
 		ret = arm_smmu_domain_finalise(domain, master);
@@ -1820,7 +2115,6 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 	arm_smmu_enable_ats(master);
 
 out_unlock:
-	mutex_unlock(&smmu_domain->init_mutex);
 	return ret;
 }
 
@@ -1833,8 +2127,10 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
 
 	return sid < limit;
 }
+/* Forward declaration */
+static struct arm_smmu_device *arm_smmu_get_by_dev(struct device *dev);
 
-static struct iommu_device *arm_smmu_probe_device(struct device *dev)
+static int arm_smmu_add_device(u8 devfn, struct device *dev)
 {
 	int i, ret;
 	struct arm_smmu_device *smmu;
@@ -1842,14 +2138,15 @@ static struct iommu_device *arm_smmu_probe_device(struct device *dev)
 	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
 
 	if (!fwspec)
-		return ERR_PTR(-ENODEV);
+		return -ENODEV;
 
-	if (WARN_ON_ONCE(dev_iommu_priv_get(dev)))
-		return ERR_PTR(-EBUSY);
+	smmu = arm_smmu_get_by_dev(fwspec->iommu_dev);
+	if (!smmu)
+		return -ENODEV;
 
 	master = kzalloc(sizeof(*master), GFP_KERNEL);
 	if (!master)
-		return ERR_PTR(-ENOMEM);
+		return -ENOMEM;
 
 	master->dev = dev;
 	master->smmu = smmu;
@@ -1884,17 +2181,36 @@ static struct iommu_device *arm_smmu_probe_device(struct device *dev)
 	 */
 	arm_smmu_enable_pasid(master);
 
-	return &smmu->iommu;
+	return 0;
 
 err_free_master:
 	kfree(master);
 	dev_iommu_priv_set(dev, NULL);
-	return ERR_PTR(ret);
+	return ret;
 }
 
-static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args)
+static int arm_smmu_dt_xlate(struct device *dev,
+				const struct dt_phandle_args *args)
 {
-	return iommu_fwspec_add_ids(dev, args->args, 1);
+	int ret;
+	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+
+	ret = iommu_fwspec_add_ids(dev, args->args, 1);
+	if (ret)
+		return ret;
+
+	if (dt_device_is_protected(dev_to_dt(dev))) {
+		dev_err(dev, "Already added to SMMUv3\n");
+		return -EEXIST;
+	}
+
+	/* Let Xen know that the master device is protected by an IOMMU. */
+	dt_device_set_protected(dev_to_dt(dev));
+
+	dev_info(dev, "Added master device (SMMUv3 %s StreamIds %u)\n",
+			dev_name(fwspec->iommu_dev), fwspec->num_ids);
+
+	return 0;
 }
 
 /* Probing and initialisation functions */
@@ -1923,8 +2239,8 @@ static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
 		return -ENOMEM;
 	}
 
-	if (!WARN_ON(q->base_dma & (qsz - 1))) {
-		dev_info(smmu->dev, "allocated %u entries for %s\n",
+	if (unlikely(q->base_dma & (qsz - 1))) {
+		dev_warn(smmu->dev, "%u-entry queue for %s not naturally aligned\n",
 			 1 << q->llq.max_n_shift, name);
 	}
 
@@ -2121,6 +2437,7 @@ static int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr)
 	return ret;
 }
 
+#ifdef CONFIG_MSI
 static void arm_smmu_free_msis(void *data)
 {
 	struct device *dev = data;
@@ -2191,6 +2508,9 @@ static void arm_smmu_setup_msis(struct arm_smmu_device *smmu)
 	/* Add callback to free MSIs on teardown */
 	devm_add_action(dev, arm_smmu_free_msis, dev);
 }
+#else
+static inline void arm_smmu_setup_msis(struct arm_smmu_device *smmu) { }
+#endif /* CONFIG_MSI */
 
 static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
 {
@@ -2201,9 +2521,7 @@ static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
 	/* Request interrupt lines */
 	irq = smmu->evtq.q.irq;
 	if (irq) {
-		ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
-						arm_smmu_evtq_thread,
-						IRQF_ONESHOT,
+		ret = request_irq(irq, 0, arm_smmu_evtq_irq_tasklet,
 						"arm-smmu-v3-evtq", smmu);
 		if (ret < 0)
 			dev_warn(smmu->dev, "failed to enable evtq irq\n");
@@ -2213,8 +2531,8 @@ static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
 
 	irq = smmu->gerr_irq;
 	if (irq) {
-		ret = devm_request_irq(smmu->dev, irq, arm_smmu_gerror_handler,
-				       0, "arm-smmu-v3-gerror", smmu);
+		ret = request_irq(irq, 0, arm_smmu_gerror_handler,
+						"arm-smmu-v3-gerror", smmu);
 		if (ret < 0)
 			dev_warn(smmu->dev, "failed to enable gerror irq\n");
 	} else {
@@ -2224,11 +2542,8 @@ static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
 	if (smmu->features & ARM_SMMU_FEAT_PRI) {
 		irq = smmu->priq.q.irq;
 		if (irq) {
-			ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
-							arm_smmu_priq_thread,
-							IRQF_ONESHOT,
-							"arm-smmu-v3-priq",
-							smmu);
+			ret = request_irq(irq, 0, arm_smmu_priq_irq_tasklet,
+							"arm-smmu-v3-priq", smmu);
 			if (ret < 0)
 				dev_warn(smmu->dev,
 					 "failed to enable priq irq\n");
@@ -2257,11 +2572,8 @@ static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
 		 * Cavium ThunderX2 implementation doesn't support unique irq
 		 * lines. Use a single irq line for all the SMMUv3 interrupts.
 		 */
-		ret = devm_request_threaded_irq(smmu->dev, irq,
-					arm_smmu_combined_irq_handler,
-					arm_smmu_combined_irq_thread,
-					IRQF_ONESHOT,
-					"arm-smmu-v3-combined-irq", smmu);
+		ret = request_irq(irq, 0, arm_smmu_combined_irq_handler,
+						"arm-smmu-v3-combined-irq", smmu);
 		if (ret < 0)
 			dev_warn(smmu->dev, "failed to enable combined irq\n");
 	} else
@@ -2290,7 +2602,7 @@ static int arm_smmu_device_disable(struct arm_smmu_device *smmu)
 	return ret;
 }
 
-static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
+static int arm_smmu_device_reset(struct arm_smmu_device *smmu)
 {
 	int ret;
 	u32 reg, enables;
@@ -2300,7 +2612,7 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
 	reg = readl_relaxed(smmu->base + ARM_SMMU_CR0);
 	if (reg & CR0_SMMUEN) {
 		dev_warn(smmu->dev, "SMMU currently enabled! Resetting...\n");
-		WARN_ON(is_kdump_kernel() && !disable_bypass);
+		WARN_ON(!disable_bypass);
 		arm_smmu_update_gbpa(smmu, GBPA_ABORT, 0);
 	}
 
@@ -2404,11 +2716,14 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
 		return ret;
 	}
 
-	if (is_kdump_kernel())
-		enables &= ~(CR0_EVTQEN | CR0_PRIQEN);
+	/* Initialize tasklets for threaded IRQs */
+	tasklet_init(&smmu->evtq_irq_tasklet, arm_smmu_evtq_tasklet, smmu);
+	tasklet_init(&smmu->priq_irq_tasklet, arm_smmu_priq_tasklet, smmu);
+	tasklet_init(&smmu->combined_irq_tasklet, arm_smmu_combined_irq_tasklet,
+				 smmu);
 
 	/* Enable the SMMU interface, or ensure bypass */
-	if (!bypass || disable_bypass) {
+	if (disable_bypass) {
 		enables |= CR0_SMMUEN;
 	} else {
 		ret = arm_smmu_update_gbpa(smmu, 0, GBPA_ABORT);
@@ -2473,8 +2788,10 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 	if (reg & IDR0_SEV)
 		smmu->features |= ARM_SMMU_FEAT_SEV;
 
+#ifdef CONFIG_MSI
 	if (reg & IDR0_MSI)
 		smmu->features |= ARM_SMMU_FEAT_MSI;
+#endif
 
 	if (reg & IDR0_HYP)
 		smmu->features |= ARM_SMMU_FEAT_HYP;
@@ -2499,7 +2816,7 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 		smmu->features |= ARM_SMMU_FEAT_TRANS_S2;
 
 	if (!(reg & IDR0_S2P)) {
-		dev_err(smmu->dev, "no translation support!\n");
+		dev_err(smmu->dev, "no stage-2 translation support!\n");
 		return -ENXIO;
 	}
 
@@ -2648,7 +2965,7 @@ static inline int arm_smmu_device_acpi_probe(struct platform_device *pdev,
 static int arm_smmu_device_dt_probe(struct platform_device *pdev,
 				    struct arm_smmu_device *smmu)
 {
-	struct device *dev = &pdev->dev;
+	struct device *dev = pdev;
 	u32 cells;
 	int ret = -EINVAL;
 
@@ -2661,7 +2978,7 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev,
 
 	parse_driver_options(smmu);
 
-	if (of_dma_is_coherent(dev->of_node))
+	if (dt_get_property(dev->of_node, "dma-coherent", NULL))
 		smmu->features |= ARM_SMMU_FEAT_COHERENCY;
 
 	return ret;
@@ -2675,63 +2992,49 @@ static unsigned long arm_smmu_resource_size(struct arm_smmu_device *smmu)
 		return SZ_128K;
 }
 
-static void __iomem *arm_smmu_ioremap(struct device *dev, resource_size_t start,
-				      resource_size_t size)
-{
-	struct resource res = {
-		.flags = IORESOURCE_MEM,
-		.start = start,
-		.end = start + size - 1,
-	};
-
-	return devm_ioremap_resource(dev, &res);
-}
-
 static int arm_smmu_device_probe(struct platform_device *pdev)
 {
 	int irq, ret;
-	struct resource *res;
-	resource_size_t ioaddr;
+	paddr_t ioaddr, iosize;
 	struct arm_smmu_device *smmu;
-	struct device *dev = &pdev->dev;
-	bool bypass;
 
-	smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL);
+	smmu = devm_kzalloc(pdev, sizeof(*smmu), GFP_KERNEL);
 	if (!smmu) {
-		dev_err(dev, "failed to allocate arm_smmu_device\n");
+		dev_err(pdev, "failed to allocate arm_smmu_device\n");
 		return -ENOMEM;
 	}
-	smmu->dev = dev;
+	smmu->dev = pdev;
 
-	if (dev->of_node) {
+	if (pdev->of_node) {
 		ret = arm_smmu_device_dt_probe(pdev, smmu);
+		if (ret)
+			return -EINVAL;
 	} else {
 		ret = arm_smmu_device_acpi_probe(pdev, smmu);
 		if (ret == -ENODEV)
 			return ret;
 	}
 
-	/* Set bypass mode according to firmware probing result */
-	bypass = !!ret;
-
 	/* Base address */
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	if (resource_size(res) < arm_smmu_resource_size(smmu)) {
-		dev_err(dev, "MMIO region too small (%pr)\n", res);
+	ret = dt_device_get_address(dev_to_dt(pdev), 0, &ioaddr, &iosize);
+	if (ret)
+		return -ENODEV;
+
+	if (iosize < arm_smmu_resource_size(smmu)) {
+		dev_err(pdev, "MMIO region too small (%lx)\n", iosize);
 		return -EINVAL;
 	}
-	ioaddr = res->start;
 
 	/*
 	 * Don't map the IMPLEMENTATION DEFINED regions, since they may contain
 	 * the PMCG registers which are reserved by the PMU driver.
 	 */
-	smmu->base = arm_smmu_ioremap(dev, ioaddr, ARM_SMMU_REG_SZ);
+	smmu->base = ioremap_nocache(ioaddr, ARM_SMMU_REG_SZ);
 	if (IS_ERR(smmu->base))
 		return PTR_ERR(smmu->base);
 
-	if (arm_smmu_resource_size(smmu) > SZ_64K) {
-		smmu->page1 = arm_smmu_ioremap(dev, ioaddr + SZ_64K,
+	if (iosize > SZ_64K) {
+		smmu->page1 = ioremap_nocache(ioaddr + SZ_64K,
 					       ARM_SMMU_REG_SZ);
 		if (IS_ERR(smmu->page1))
 			return PTR_ERR(smmu->page1);
@@ -2768,14 +3071,262 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
 		return ret;
 
 	/* Reset the device */
-	ret = arm_smmu_device_reset(smmu, bypass);
+	ret = arm_smmu_device_reset(smmu);
 	if (ret)
 		return ret;
 
+	/*
+	 * Keep a list of all probed devices. This will be used to query
+	 * the smmu devices based on the fwnode.
+	 */
+	INIT_LIST_HEAD(&smmu->devices);
+
+	spin_lock(&arm_smmu_devices_lock);
+	list_add(&smmu->devices, &arm_smmu_devices);
+	spin_unlock(&arm_smmu_devices_lock);
+
 	return 0;
 }
 
-static const struct of_device_id arm_smmu_of_match[] = {
+static const struct dt_device_match arm_smmu_of_match[] = {
 	{ .compatible = "arm,smmu-v3", },
 	{ },
 };
+
+/* Start of Xen specific code. */
+static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
+{
+	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
+	struct iommu_domain *io_domain;
+
+	spin_lock(&xen_domain->lock);
+
+	list_for_each_entry(io_domain, &xen_domain->contexts, list) {
+		/*
+		 * Only invalidate the context when SMMU is present.
+		 * This is because the context initialization is delayed
+		 * until a master has been added.
+		 */
+		if (unlikely(!ACCESS_ONCE(to_smmu_domain(io_domain)->smmu)))
+			continue;
+
+		arm_smmu_tlb_inv_context(to_smmu_domain(io_domain));
+	}
+
+	spin_unlock(&xen_domain->lock);
+
+	return 0;
+}
+
+static int __must_check arm_smmu_iotlb_flush(struct domain *d, dfn_t dfn,
+				unsigned long page_count, unsigned int flush_flags)
+{
+	return arm_smmu_iotlb_flush_all(d);
+}
+
+static struct arm_smmu_device *arm_smmu_get_by_dev(struct device *dev)
+{
+	struct arm_smmu_device *smmu = NULL;
+
+	spin_lock(&arm_smmu_devices_lock);
+
+	list_for_each_entry(smmu, &arm_smmu_devices, devices) {
+		if (smmu->dev == dev) {
+			spin_unlock(&arm_smmu_devices_lock);
+			return smmu;
+		}
+	}
+
+	spin_unlock(&arm_smmu_devices_lock);
+
+	return NULL;
+}
+
+static struct iommu_domain *arm_smmu_get_domain(struct domain *d,
+				struct device *dev)
+{
+	struct iommu_domain *io_domain;
+	struct arm_smmu_domain *smmu_domain;
+	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
+	struct arm_smmu_device *smmu = arm_smmu_get_by_dev(fwspec->iommu_dev);
+
+	if (!smmu)
+		return NULL;
+
+	/*
+	 * Loop through xen_domain->contexts to locate a context
+	 * assigned to this SMMU.
+	 */
+	list_for_each_entry(io_domain, &xen_domain->contexts, list) {
+		smmu_domain = to_smmu_domain(io_domain);
+		if (smmu_domain->smmu == smmu)
+			return io_domain;
+	}
+	return NULL;
+}
+
+static void arm_smmu_destroy_iommu_domain(struct iommu_domain *io_domain)
+{
+	list_del(&io_domain->list);
+	arm_smmu_domain_free(io_domain);
+}
+
+static int arm_smmu_assign_dev(struct domain *d, u8 devfn,
+		struct device *dev, u32 flag)
+{
+	int ret = 0;
+	struct iommu_domain *io_domain;
+	struct arm_smmu_domain *smmu_domain;
+	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
+
+	spin_lock(&xen_domain->lock);
+
+	/*
+	 * Check to see if an iommu_domain already exists for this xen domain
+	 * under the same SMMU
+	 */
+	io_domain = arm_smmu_get_domain(d, dev);
+	if (!io_domain) {
+		io_domain = arm_smmu_domain_alloc();
+		if (!io_domain) {
+			ret = -ENOMEM;
+			goto out;
+		}
+		smmu_domain = to_smmu_domain(io_domain);
+		smmu_domain->s2_cfg.domain = d;
+
+		/* Chain the new context to the domain */
+		list_add(&io_domain->list, &xen_domain->contexts);
+	}
+
+	ret = arm_smmu_attach_dev(io_domain, dev);
+	if (ret) {
+		if (io_domain->ref.counter == 0)
+			arm_smmu_destroy_iommu_domain(io_domain);
+	} else {
+		atomic_inc(&io_domain->ref);
+	}
+
+out:
+	spin_unlock(&xen_domain->lock);
+	return ret;
+}
+
+static int arm_smmu_deassign_dev(struct domain *d, struct device *dev)
+{
+	struct iommu_domain *io_domain = arm_smmu_get_domain(d, dev);
+	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
+	struct arm_smmu_domain *arm_smmu = to_smmu_domain(io_domain);
+	struct arm_smmu_master *master = dev_iommu_priv_get(dev);
+
+	if (!arm_smmu || arm_smmu->s2_cfg.domain != d) {
+		dev_err(dev, " not attached to domain %d\n", d->domain_id);
+		return -ESRCH;
+	}
+
+	spin_lock(&xen_domain->lock);
+
+	arm_smmu_detach_dev(master);
+	atomic_dec(&io_domain->ref);
+
+	if (io_domain->ref.counter == 0)
+		arm_smmu_destroy_iommu_domain(io_domain);
+
+	spin_unlock(&xen_domain->lock);
+
+	return 0;
+}
+
+static int arm_smmu_reassign_dev(struct domain *s, struct domain *t,
+				u8 devfn,  struct device *dev)
+{
+	int ret = 0;
+
+	/* Don't allow remapping to a domain other than the hwdom. */
+	if (t && t != hardware_domain)
+		return -EPERM;
+
+	if (t == s)
+		return 0;
+
+	ret = arm_smmu_deassign_dev(s, dev);
+	if (ret)
+		return ret;
+
+	if (t) {
+		/* No flags are defined for ARM. */
+		ret = arm_smmu_assign_dev(t, devfn, dev, 0);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int arm_smmu_iommu_xen_domain_init(struct domain *d)
+{
+	struct arm_smmu_xen_domain *xen_domain;
+
+	xen_domain = xzalloc(struct arm_smmu_xen_domain);
+	if (!xen_domain)
+		return -ENOMEM;
+
+	spin_lock_init(&xen_domain->lock);
+	INIT_LIST_HEAD(&xen_domain->contexts);
+
+	dom_iommu(d)->arch.priv = xen_domain;
+	return 0;
+}
+
+static void __hwdom_init arm_smmu_iommu_hwdom_init(struct domain *d)
+{
+}
+
+static void arm_smmu_iommu_xen_domain_teardown(struct domain *d)
+{
+	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
+
+	ASSERT(list_empty(&xen_domain->contexts));
+	xfree(xen_domain);
+}
+
+static const struct iommu_ops arm_smmu_iommu_ops = {
+	.init		= arm_smmu_iommu_xen_domain_init,
+	.hwdom_init		= arm_smmu_iommu_hwdom_init,
+	.teardown		= arm_smmu_iommu_xen_domain_teardown,
+	.iotlb_flush		= arm_smmu_iotlb_flush,
+	.iotlb_flush_all	= arm_smmu_iotlb_flush_all,
+	.assign_device		= arm_smmu_assign_dev,
+	.reassign_device	= arm_smmu_reassign_dev,
+	.map_page		= arm_iommu_map_page,
+	.unmap_page		= arm_iommu_unmap_page,
+	.dt_xlate		= arm_smmu_dt_xlate,
+	.add_device		= arm_smmu_add_device,
+};
+
+static __init int arm_smmu_dt_init(struct dt_device_node *dev,
+				const void *data)
+{
+	int rc;
+
+	/*
+	 * Even if the device can't be initialized, we don't want to
+	 * give the SMMU device to dom0.
+	 */
+	dt_device_set_used_by(dev, DOMID_XEN);
+
+	rc = arm_smmu_device_probe(dt_to_dev(dev));
+	if (rc)
+		return rc;
+
+	iommu_set_ops(&arm_smmu_iommu_ops);
+
+	return 0;
+}
+
+DT_DEVICE_START(smmuv3, "ARM SMMU V3", DEVICE_IOMMU)
+.dt_match = arm_smmu_of_match,
+.init = arm_smmu_dt_init,
+DT_DEVICE_END
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 17:01:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 17:01:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49605.87751 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knPJT-0007a9-4T; Thu, 10 Dec 2020 17:01:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49605.87751; Thu, 10 Dec 2020 17:01:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knPJT-0007Zz-0p; Thu, 10 Dec 2020 17:01:07 +0000
Received: by outflank-mailman (input) for mailman id 49605;
 Thu, 10 Dec 2020 17:01:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BLK9=FO=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1knPJR-0007YB-7R
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 17:01:05 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 416ef82b-5042-482d-978e-15753698a07b;
 Thu, 10 Dec 2020 17:01:04 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id EA64D30E;
 Thu, 10 Dec 2020 09:01:03 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 0FBCE3F66B;
 Thu, 10 Dec 2020 09:01:02 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 416ef82b-5042-482d-978e-15753698a07b
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v3 8/8] xen/arm: smmuv3: Remove linux compatibility functions.
Date: Thu, 10 Dec 2020 16:57:06 +0000
Message-Id: <c38df3122a9e74e2324936c8bd36d372cdc3009a.1607617848.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1607617848.git.rahul.singh@arm.com>
References: <cover.1607617848.git.rahul.singh@arm.com>
In-Reply-To: <cover.1607617848.git.rahul.singh@arm.com>
References: <cover.1607617848.git.rahul.singh@arm.com>

Replace all Linux-compatible device tree handling functions with the Xen
equivalents.

Replace all Linux ktime functions with the Xen time functions.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
Changes in v3:
 - This patch is introduced in this version.

---
 xen/drivers/passthrough/arm/smmu-v3.c | 32 +++++++--------------------
 1 file changed, 8 insertions(+), 24 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index 65b3db94ad..c19c56ebc8 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -101,22 +101,6 @@ typedef unsigned int		gfp_t;
 
 #define GFP_KERNEL		0
 
-/* Alias to Xen device tree helpers */
-#define device_node			dt_device_node
-#define of_phandle_args		dt_phandle_args
-#define of_device_id		dt_device_match
-#define of_match_node		dt_match_node
-#define of_property_read_u32(np, pname, out)	\
-		(!dt_property_read_u32(np, pname, out))
-#define of_property_read_bool		dt_property_read_bool
-#define of_parse_phandle_with_args	dt_parse_phandle_with_args
-
-/* Alias to Xen time functions */
-#define ktime_t s_time_t
-#define ktime_get()			(NOW())
-#define ktime_add_us(t, i)		(t + MICROSECS(i))
-#define ktime_compare(t, i)		(t > (i))
-
 /* Alias to Xen allocation helpers */
 #define kzalloc(size, flags)	_xzalloc(size, sizeof(void *))
 #define kfree	xfree
@@ -922,7 +906,7 @@ static void parse_driver_options(struct arm_smmu_device *smmu)
 	int i = 0;
 
 	do {
-		if (of_property_read_bool(smmu->dev->of_node,
+		if (dt_property_read_bool(smmu->dev->of_node,
 						arm_smmu_options[i].prop)) {
 			smmu->options |= arm_smmu_options[i].opt;
 			dev_notice(smmu->dev, "option %s\n",
@@ -994,17 +978,17 @@ static void queue_inc_prod(struct arm_smmu_ll_queue *q)
  */
 static int queue_poll_cons(struct arm_smmu_queue *q, bool sync, bool wfe)
 {
-	ktime_t timeout;
+	s_time_t timeout;
 	unsigned int delay = 1, spin_cnt = 0;
 
 	/* Wait longer if it's a CMD_SYNC */
-	timeout = ktime_add_us(ktime_get(), sync ?
+	timeout = NOW() + MICROSECS(sync ?
 					    ARM_SMMU_CMDQ_SYNC_TIMEOUT_US :
 					    ARM_SMMU_POLL_TIMEOUT_US);
 
 	while (queue_sync_cons_in(q),
 	      (sync ? !queue_empty(&q->llq) : queue_full(&q->llq))) {
-		if (ktime_compare(ktime_get(), timeout) > 0)
+		if (NOW() > timeout)
 			return -ETIMEDOUT;
 
 		if (wfe) {
@@ -1232,13 +1216,13 @@ static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
  */
 static int __arm_smmu_sync_poll_msi(struct arm_smmu_device *smmu, u32 sync_idx)
 {
-	ktime_t timeout;
+	s_time_t timeout;
 	u32 val;
 
-	timeout = ktime_add_us(ktime_get(), ARM_SMMU_CMDQ_SYNC_TIMEOUT_US);
+	timeout = NOW() + MICROSECS(ARM_SMMU_CMDQ_SYNC_TIMEOUT_US);
 	val = smp_cond_load_acquire(&smmu->sync_count,
 				    (int)(VAL - sync_idx) >= 0 ||
-				    !ktime_before(ktime_get(), timeout));
+				    !(NOW() < timeout));
 
 	return (int)(val - sync_idx) < 0 ? -ETIMEDOUT : 0;
 }
@@ -2969,7 +2953,7 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev,
 	u32 cells;
 	int ret = -EINVAL;
 
-	if (of_property_read_u32(dev->of_node, "#iommu-cells", &cells))
+	if (!dt_property_read_u32(dev->of_node, "#iommu-cells", &cells))
 		dev_err(dev, "missing #iommu-cells property\n");
 	else if (cells != 1)
 		dev_err(dev, "invalid #iommu-cells value (%d)\n", cells);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 17:03:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 17:03:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49627.87763 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knPLt-0007uj-Hs; Thu, 10 Dec 2020 17:03:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49627.87763; Thu, 10 Dec 2020 17:03:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knPLt-0007uc-EJ; Thu, 10 Dec 2020 17:03:37 +0000
Received: by outflank-mailman (input) for mailman id 49627;
 Thu, 10 Dec 2020 17:03:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oBdS=FO=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1knPLs-0007uQ-4Z
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 17:03:36 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6f1fba60-e35e-4e97-9cb2-02710a4af8a0;
 Thu, 10 Dec 2020 17:03:33 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0BAH3PYb016977
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Thu, 10 Dec 2020 18:03:26 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id EA5A42E9383; Thu, 10 Dec 2020 18:03:19 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6f1fba60-e35e-4e97-9cb2-02710a4af8a0
Date: Thu, 10 Dec 2020 18:03:19 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: dom0 PV looping on search_pre_exception_table()
Message-ID: <20201210170319.GG455@antioche.eu.org>
References: <20201209135908.GA4269@antioche.eu.org>
 <c612616a-3fcd-be93-7594-20c0c3b71b7a@citrix.com>
 <20201209154431.GA4913@antioche.eu.org>
 <52e1b10d-75d4-63ac-f91e-cb8f0dcca493@citrix.com>
 <20201209163049.GA6158@antioche.eu.org>
 <30a71c9d-3eff-3727-9c61-e387b5bccc95@citrix.com>
 <20201209185714.GS1469@antioche.eu.org>
 <6c06abf1-7efe-f02c-536a-337a2704e265@citrix.com>
 <20201210095139.GA455@antioche.eu.org>
 <4c3bff12-821b-83fb-e054-61b07b97fa70@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <4c3bff12-821b-83fb-e054-61b07b97fa70@citrix.com>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Thu, 10 Dec 2020 18:03:26 +0100 (MET)

On Thu, Dec 10, 2020 at 03:51:46PM +0000, Andrew Cooper wrote:
> > [   7.6617663] cs 0x47  ds 0x23  es 0x23  fs 0000  gs 0000  ss 0x3f
> > [   7.7345663] fsbase 000000000000000000 gsbase 000000000000000000
> >
> > so it looks like something resets %fs to 0 ...
> >
> > Anyway the fault address 0xffffbd800000a040 is in the hypervisor's range,
> > isn't it ?
> 
> No. Its the kernel's LDT. From previous debugging:
> > (XEN) %cr2 ffff820000010040, LDT base ffffbd000000a000, limit 0057
> 
> LDT handling in Xen is a bit complicated. To maintain host safety, we
> must map it into Xen's range, and we explicitly support a PV guest doing
> on-demand mapping of the LDT. (This pertains to the experimental
> Windows XP PV support which never made it beyond a prototype. Windows
> can page out the LDT.) Either way, we lazily map the LDT frames on
> first use.
> 
> So %cr2 is the real hardware faulting address, and is in the Xen range.
> We spot that it is an LDT access, and try to lazily map the frame (at
> LDT base), but find that the kernel's virtual address mapping
> 0xffffbd000000a000 is not present (the gl1e printk).
> 
> Therefore, we pass #PF to the guest kernel, adjusting vCR2 to what would
> have happened had Xen not mapped the real LDT elsewhere, which is
> expected to cause the guest kernel to do whatever demand mapping is
> necessary to pull the LDT back in.
> 
> 
> I suppose it is worth taking a step back and ascertaining how exactly
> NetBSD handles (or, should be handling) the LDT.
> 
> Do you mind elaborating on how it is supposed to work?

Note that I'm not familiar with this selector stuff; and I usually get
it wrong the first time I go back to it.

AFAIK, in the Xen PV case, a page is allocated and mapped in kernel
space, and registered to Xen with MMUEXT_SET_LDT.
From what I found, in the common case the LDT is the same for all processes.
Does that make sense?

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 17:18:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 17:18:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49641.87774 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knPaa-0000fY-LC; Thu, 10 Dec 2020 17:18:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49641.87774; Thu, 10 Dec 2020 17:18:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knPaa-0000fR-IC; Thu, 10 Dec 2020 17:18:48 +0000
Received: by outflank-mailman (input) for mailman id 49641;
 Thu, 10 Dec 2020 17:18:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=25P7=FO=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1knPaY-0000fM-Nd
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 17:18:46 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id eb64facd-6afa-42c8-9b42-4225489d2e77;
 Thu, 10 Dec 2020 17:18:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb64facd-6afa-42c8-9b42-4225489d2e77
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1607620725;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=Ml0eq08gHz2a2b7moZyOahLuVT9xnMoVnmNf0GtMZPA=;
  b=d2w0mPM/OL4o/633Nf3rvrq2MFSfd/23Q2MbAZtRxIG00KUEN4PwMncL
   CgUgGeQbdd3ZooFlZjh/ZiTsayIOBEyP8+Fk6yYPh1M33lmcoS2GEY2UU
   GrQYotPhQV6jC+0oCDYfIunV9LatsEtdv+iGRWBMZTD7S7URfIDsvoprV
   M=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 3MmlqKstwSYTXm0HfeOnmb2teyx2tQn+CHBet464J0xI+5zswoZNcFQiJZjzw4qTCnIjlviT4P
 2YUguIgKNkWoLVA4t+yYSa7fYmZinR58cHO0RGz4MpCpJsWUq8Cv2VHlf9xWNn9fGG0+lhO7Q/
 Kkn12ZwqQrS/PTUo0ZqhnYyxQxFCuEXi4m362EUmCMBbQJlZnBXyL32nASweu4bdp1hnzhYfY/
 fP0BKX0+IaC4xeTCSkPgDdHezKKGr3pQ+9ZWhW/MZ8GIvOMurUWJi4+IjXXg4eUb8Ro+UDr3Um
 Y2c=
X-SBRS: 5.2
X-MesageID: 32974499
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,408,1599537600"; 
   d="scan'208";a="32974499"
Subject: Re: dom0 PV looping on search_pre_exception_table()
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: <xen-devel@lists.xenproject.org>
References: <20201209135908.GA4269@antioche.eu.org>
 <c612616a-3fcd-be93-7594-20c0c3b71b7a@citrix.com>
 <20201209154431.GA4913@antioche.eu.org>
 <52e1b10d-75d4-63ac-f91e-cb8f0dcca493@citrix.com>
 <20201209163049.GA6158@antioche.eu.org>
 <30a71c9d-3eff-3727-9c61-e387b5bccc95@citrix.com>
 <20201209185714.GS1469@antioche.eu.org>
 <6c06abf1-7efe-f02c-536a-337a2704e265@citrix.com>
 <20201210095139.GA455@antioche.eu.org>
 <4c3bff12-821b-83fb-e054-61b07b97fa70@citrix.com>
 <20201210170319.GG455@antioche.eu.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <ed06a0f4-8468-addf-2797-be3ba3a2d607@citrix.com>
Date: Thu, 10 Dec 2020 17:18:39 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201210170319.GG455@antioche.eu.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 10/12/2020 17:03, Manuel Bouyer wrote:
> On Thu, Dec 10, 2020 at 03:51:46PM +0000, Andrew Cooper wrote:
>>> [   7.6617663] cs 0x47  ds 0x23  es 0x23  fs 0000  gs 0000  ss 0x3f
>>> [   7.7345663] fsbase 000000000000000000 gsbase 000000000000000000
>>>
>>> so it looks like something resets %fs to 0 ...
>>>
>>> Anyway the fault address 0xffffbd800000a040 is in the hypervisor's range,
>>> isn't it ?
>> No.  Its the kernel's LDT.  From previous debugging:
>>> (XEN) %cr2 ffff820000010040, LDT base ffffbd000000a000, limit 0057
>> LDT handling in Xen is a bit complicated.  To maintain host safety, we
>> must map it into Xen's range, and we explicitly support a PV guest doing
>> on-demand mapping of the LDT.  (This pertains to the experimental
>> Windows XP PV support which never made it beyond a prototype.  Windows
>> can page out the LDT.)  Either way, we lazily map the LDT frames on
>> first use.
>>
>> So %cr2 is the real hardware faulting address, and is in the Xen range. 
>> We spot that it is an LDT access, and try to lazily map the frame (at
>> LDT base), but find that the kernel's virtual address mapping
>> 0xffffbd000000a000 is not present (the gl1e printk).
>>
>> Therefore, we pass #PF to the guest kernel, adjusting vCR2 to what would
>> have happened had Xen not mapped the real LDT elsewhere, which is
>> expected to cause the guest kernel to do whatever demand mapping is
>> necessary to pull the LDT back in.
>>
>>
>> I suppose it is worth taking a step back and ascertaining how exactly
>> NetBSD handles (or, should be handling) the LDT.
>>
>> Do you mind elaborating on how it is supposed to work?
> Note that I'm not familiar with this selector stuff; and I usually get
> it wrong the first time I go back to it.
>
> AFAIK, in the Xen PV case, a page is allocated and mapped in kernel
> space, and registered to Xen with MMUEXT_SET_LDT.
> From what I found, in the common case the LDT is the same for all processes.
> Does that make sense?

The debugging earlier shows that MMUEXT_SET_LDT has indeed been called. 
Presumably 0xffffbd000000a000 is a plausible virtual address for NetBSD
to position the LDT?

However, Xen finds the mapping not-present when trying to demand-map it,
hence why the #PF is forwarded to the kernel.

The way we pull guest virtual addresses was altered by XSA-286 (released
not too long ago despite its apparent age), but *should* have been no
functional change.  I wonder if we accidentally broke something there. 
What exactly are you running, Xen-wise, with the 4.13 version?

Given that this is init failing, presumably the issue would repro with
the net installer version too?

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 17:36:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 17:36:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49651.87790 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knPrJ-0002dn-6p; Thu, 10 Dec 2020 17:36:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49651.87790; Thu, 10 Dec 2020 17:36:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knPrJ-0002dg-3V; Thu, 10 Dec 2020 17:36:05 +0000
Received: by outflank-mailman (input) for mailman id 49651;
 Thu, 10 Dec 2020 17:36:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oBdS=FO=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1knPrH-0002db-Ru
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 17:36:03 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 731dfe6f-bfae-43b4-b232-a19c1bf58060;
 Thu, 10 Dec 2020 17:36:02 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0BAHZvue015214
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Thu, 10 Dec 2020 18:35:57 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id E38F82E9383; Thu, 10 Dec 2020 18:35:51 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 731dfe6f-bfae-43b4-b232-a19c1bf58060
Date: Thu, 10 Dec 2020 18:35:51 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: dom0 PV looping on search_pre_exception_table()
Message-ID: <20201210173551.GJ455@antioche.eu.org>
References: <20201209154431.GA4913@antioche.eu.org>
 <52e1b10d-75d4-63ac-f91e-cb8f0dcca493@citrix.com>
 <20201209163049.GA6158@antioche.eu.org>
 <30a71c9d-3eff-3727-9c61-e387b5bccc95@citrix.com>
 <20201209185714.GS1469@antioche.eu.org>
 <6c06abf1-7efe-f02c-536a-337a2704e265@citrix.com>
 <20201210095139.GA455@antioche.eu.org>
 <4c3bff12-821b-83fb-e054-61b07b97fa70@citrix.com>
 <20201210170319.GG455@antioche.eu.org>
 <ed06a0f4-8468-addf-2797-be3ba3a2d607@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ed06a0f4-8468-addf-2797-be3ba3a2d607@citrix.com>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Thu, 10 Dec 2020 18:35:58 +0100 (MET)

On Thu, Dec 10, 2020 at 05:18:39PM +0000, Andrew Cooper wrote:
> The debugging earlier shows that MMUEXT_SET_LDT has indeed been called.
> Presumably 0xffffbd000000a000 is a plausible virtual address for NetBSD
> to position the LDT?

Yes, it is. 

> 
> However, Xen finds the mapping not-present when trying to demand-map it,
> hence why the #PF is forwarded to the kernel.
> 
> The way we pull guest virtual addresses was altered by XSA-286 (released
> not too long ago despite its apparent age), but *should* have been no
> functional change. I wonder if we accidentally broke something there.
> What exactly are you running, Xen-wise, with the 4.13 version?

It is 4.13.2, with the patch for XSA-351.

> 
> Given that this is init failing, presumably the issue would repro with
> the net installer version too?

Hopefully yes, maybe even as a domU. But I don't have a Linux dom0 to test.

If you have a Xen setup, you can test with
http://ftp.netbsd.org/pub/NetBSD/NetBSD-9.1/amd64/binary/kernel/netbsd-INSTALL_XEN3_DOMU.gz

Note that this won't boot as a dom0 kernel.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 17:59:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 17:59:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49658.87805 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knQDR-0004oQ-4y; Thu, 10 Dec 2020 17:58:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49658.87805; Thu, 10 Dec 2020 17:58:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knQDR-0004oJ-1Q; Thu, 10 Dec 2020 17:58:57 +0000
Received: by outflank-mailman (input) for mailman id 49658;
 Thu, 10 Dec 2020 17:58:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=57pE=FO=alien8.de=bp@srs-us1.protection.inumbo.net>)
 id 1knQDP-0004oE-90
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 17:58:55 +0000
Received: from mail.skyhub.de (unknown [2a01:4f8:190:11c2::b:1457])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 944c9135-2f20-4747-ba46-25eed1eb33df;
 Thu, 10 Dec 2020 17:58:52 +0000 (UTC)
Received: from zn.tnic (p200300ec2f0d410017205789a0fcbfc3.dip0.t-ipconnect.de
 [IPv6:2003:ec:2f0d:4100:1720:5789:a0fc:bfc3])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.skyhub.de (SuperMail on ZX Spectrum 128k) with ESMTPSA id BC6941EC0266;
 Thu, 10 Dec 2020 18:58:51 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 944c9135-2f20-4747-ba46-25eed1eb33df
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=alien8.de; s=dkim;
	t=1607623131;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=MwyvOQBqxuMUG7KzYXJrCDdCJAVHNYnOubZCd9QHgEo=;
	b=QAbiXUnnXNeeVEl8f1bWlVdTRkQaqyuO5GwcX57rzx5YgNGyAoPsKd8SiHoQhEKOL3cswb
	xJFTBEQqrivdDf2SutMFerrTsDT8f2MUOi3bt/x+qDD+kGByiHPn6aZFFd6jQgbpBKA5EE
	NQZ4t6DTvpVHfanMjclYCmWpEGp+G0c=
Date: Thu, 10 Dec 2020 18:58:46 +0100
From: Borislav Petkov <bp@alien8.de>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, peterz@infradead.org, luto@kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>
Subject: Re: [PATCH v2 07/12] x86: add new features for paravirt patching
Message-ID: <20201210175846.GE26529@zn.tnic>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-8-jgross@suse.com>
 <20201208184315.GE27920@zn.tnic>
 <2510752e-5d3d-f71c-8a4c-a5d2aae0075e@suse.com>
 <20201209120307.GB18203@zn.tnic>
 <9e989b07-84e8-b07b-ba6e-c2a3ed19d7b1@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <9e989b07-84e8-b07b-ba6e-c2a3ed19d7b1@suse.com>

On Wed, Dec 09, 2020 at 01:22:24PM +0100, Jürgen Groß wrote:
> Let's take the spin_unlock() case. With patch 11 of the series this is
> 
> PVOP_ALT_VCALLEE1(lock.queued_spin_unlock, lock,
>                   "movb $0, (%%" _ASM_ARG1 ");",
>                   X86_FEATURE_NO_PVUNLOCK);
> 
> which boils down to ALTERNATIVE "call *lock.queued_spin_unlock"
>                                 "movb $0,(%rdi)" X86_FEATURE_NO_PVUNLOCK
> 
> The initial (paravirt) code is an indirect call in order to allow
> spin_unlock() before paravirt/alternative patching takes place.
> 
> Paravirt patching will then replace the indirect call with a direct call
> to the correct unlock function. Then alternative patching might replace
> the direct call to the bare metal unlock with a plain "movb $0,(%rdi)"
> in case pvlocks are not enabled.

Aha, that zeros the locking var on unlock, I see.

> In case alternative patching would occur first, the indirect call might
> be replaced with the "movb ...", and then paravirt patching would
> clobber that with the direct call, resulting in the bare metal
> optimization being removed again.

Yeah, that explains the whole situation much better - thanks - and
considering how complex the whole patching is, I wouldn't mind the gist
of it as text in alternative_instructions() or in a comment above it so
that we don't have to swap everything back in, months and years from
now, when we optimize it yet again. :-}

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 18:14:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 18:14:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49666.87817 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knQSZ-0006s2-HV; Thu, 10 Dec 2020 18:14:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49666.87817; Thu, 10 Dec 2020 18:14:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knQSZ-0006rv-Df; Thu, 10 Dec 2020 18:14:35 +0000
Received: by outflank-mailman (input) for mailman id 49666;
 Thu, 10 Dec 2020 18:14:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knQSX-0006rk-PA; Thu, 10 Dec 2020 18:14:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knQSX-0005u0-IX; Thu, 10 Dec 2020 18:14:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knQSX-0007dv-AM; Thu, 10 Dec 2020 18:14:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1knQSX-0004T4-9r; Thu, 10 Dec 2020 18:14:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cDrMs2xTLhY5XOm87GJ+PJ3EuAHhDugmFNXvb+bT5Co=; b=gENof2CrNvuIlKSp9qUplJ50/c
	NaGoosD6gPlR6GPTkjxpkbn7dXDvSs0hmR0MPUQuqAbvnpFnam1BXc9LT9idQxF44YAcQWICKnnQ4
	C1j4sfDMvL+WqPDAtkKmXcydGaOQJvBFCCrqtT06EAEWkYlg3K436emLUfalsA+hF3m0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157383-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157383: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=10dc8c561c687c9e73e29743d04d828cca56a288
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Dec 2020 18:14:33 +0000

flight 157383 ovmf real [real]
flight 157387 ovmf real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157383/
http://logs.test-lab.xenproject.org/osstest/logs/157387/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 10dc8c561c687c9e73e29743d04d828cca56a288
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    1 days
Failing since        157348  2020-12-09 15:39:39 Z    1 days    4 attempts
Testing same since   157383  2020-12-10 13:09:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 308 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 18:50:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 18:50:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49678.87831 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knR19-0002DF-AW; Thu, 10 Dec 2020 18:50:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49678.87831; Thu, 10 Dec 2020 18:50:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knR19-0002D8-7a; Thu, 10 Dec 2020 18:50:19 +0000
Received: by outflank-mailman (input) for mailman id 49678;
 Thu, 10 Dec 2020 18:50:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1knR17-0002D3-KN
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 18:50:17 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1knR16-0006ao-2W; Thu, 10 Dec 2020 18:50:16 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1knR15-0004L6-Lc; Thu, 10 Dec 2020 18:50:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=XDwWIjmn41xeSxXSoYg5HaOcihHdudMxNY5/au0DBj0=; b=ABYRR+2viglG8M5ct3QeoaHbPd
	J+8+PlXA2A2JZtUzciwKlAWhplQVmbdk4rk7PlY5MyGit+ubb/IHHXQFtJHXJNYuhp+FrOKR0mZrO
	ynEzijvJ8RxAm9tqREteYn/J2Z0IhyTjuW2MpmQFTYOP3VbSYMltjnAhMaGgBL1NX5VQ=;
Subject: Re: [PATCH V3 21/23] xen/arm: Add mapcache invalidation handling
To: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <julien.grall@arm.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-22-git-send-email-olekstysh@gmail.com>
 <alpine.DEB.2.21.2012091822300.20986@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <a6897469-f031-e49d-0b4c-b1aa10d66d6d@xen.org>
Date: Thu, 10 Dec 2020 18:50:13 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2012091822300.20986@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 10/12/2020 02:30, Stefano Stabellini wrote:
> On Mon, 30 Nov 2020, Oleksandr Tyshchenko wrote:
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> We need to send a mapcache invalidation request to qemu/demu every time
>> a page gets removed from a guest.
>>
>> At the moment, the Arm code doesn't explicitly remove the existing
>> mapping before inserting the new mapping. Instead, this is done
>> implicitly by __p2m_set_entry().
>>
>> So we need to recognize the case when the old entry is a RAM page *and*
>> the new MFN is different in order to set the corresponding flag.
>> The most suitable place to do this is p2m_free_entry(), where
>> we can find the correct leaf type. The invalidation request
>> will be sent in do_trap_hypercall() later on.
> 
> Why is it sent in do_trap_hypercall() ?

I believe this is following the approach used by x86. There is actually
some discussion about it (see [1]).

Leaving aside the toolstack case for now, AFAIK, the only way a guest
can modify its p2m is via a hypercall. Do you have an example otherwise?

When sending the invalidation request, the vCPU will be blocked until
all the IOREQ servers have acknowledged the invalidation. So the
hypercall seems to be the best place to do it.

Alternatively, we could use check_for_vcpu_work() to check if the
mapcache needs to be invalidated. The inconvenience is that we would
execute a few more instructions in each entry/exit path.

> 
> 
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> CC: Julien Grall <julien.grall@arm.com>
>>
>> ---
>> Please note, this is a split/cleanup/hardening of Julien's PoC:
>> "Add support for Guest IO forwarding to a device emulator"
>>
>> Changes V1 -> V2:
>>     - new patch, some changes were derived from (+ new explanation):
>>       xen/ioreq: Make x86's invalidate qemu mapcache handling common
>>     - put setting of the flag into __p2m_set_entry()
>>     - clarify the conditions when the flag should be set
>>     - use domain_has_ioreq_server()
>>     - update do_trap_hypercall() by adding local variable
>>
>> Changes V2 -> V3:
>>     - update patch description
>>     - move check to p2m_free_entry()
>>     - add a comment
>>     - use "curr" instead of "v" in do_trap_hypercall()
>> ---
>> ---
>>   xen/arch/arm/p2m.c   | 24 ++++++++++++++++--------
>>   xen/arch/arm/traps.c | 13 ++++++++++---
>>   2 files changed, 26 insertions(+), 11 deletions(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 5b8d494..9674f6f 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -1,6 +1,7 @@
>>   #include <xen/cpu.h>
>>   #include <xen/domain_page.h>
>>   #include <xen/iocap.h>
>> +#include <xen/ioreq.h>
>>   #include <xen/lib.h>
>>   #include <xen/sched.h>
>>   #include <xen/softirq.h>
>> @@ -749,17 +750,24 @@ static void p2m_free_entry(struct p2m_domain *p2m,
>>       if ( !p2m_is_valid(entry) )
>>           return;
>>   
>> -    /* Nothing to do but updating the stats if the entry is a super-page. */
>> -    if ( p2m_is_superpage(entry, level) )
>> +    if ( p2m_is_superpage(entry, level) || (level == 3) )
>>       {
>> -        p2m->stats.mappings[level]--;
>> -        return;
>> -    }
>> +#ifdef CONFIG_IOREQ_SERVER
>> +        /*
>> +         * If this gets called (non-recursively) then either the entry
>> +         * was replaced by an entry with a different base (valid case) or
>> +         * the shattering of a superpage failed (error case).
>> +         * So, at worst, a spurious mapcache invalidation might be sent.
>> +         */
>> +        if ( domain_has_ioreq_server(p2m->domain) &&
>> +             (p2m->domain == current->domain) && p2m_is_ram(entry.p2m.type) )
>> +            p2m->domain->mapcache_invalidate = true;
> 
> Why the (p2m->domain == current->domain) check? Shouldn't we set
> mapcache_invalidate to true anyway? What happens if p2m->domain !=
> current->domain? We wouldn't want the domain to lose the
> mapcache_invalidate notification.

This is also discussed in [1]. :) The main question is why would a 
toolstack/device model modify the guest memory after boot?

If we assume it does, then the device model would need to pause the 
domain before modifying the RAM.

We also need to make sure that all the IOREQ servers have invalidated
the mapcache before the domain runs again.

This would require quite a bit of work. I am not sure the effort is
worth it if there are no active users today.

> 
> 
>> +#endif
>>   
>> -    if ( level == 3 )
>> -    {
>>           p2m->stats.mappings[level]--;
>> -        p2m_put_l3_page(entry);
>> +        /* Nothing to do if the entry is a super-page. */
>> +        if ( level == 3 )
>> +            p2m_put_l3_page(entry);
>>           return;
>>       }
>>   
>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>> index b6077d2..151c626 100644
>> --- a/xen/arch/arm/traps.c
>> +++ b/xen/arch/arm/traps.c
>> @@ -1443,6 +1443,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
>>                                 const union hsr hsr)
>>   {
>>       arm_hypercall_fn_t call = NULL;
>> +    struct vcpu *curr = current;
> 
> Is this just to save 3 characters?

Because 'current' is not cheap to read and the compiler cannot optimize
it (we obfuscate it, as it is a per-cpu variable), we commonly store
'current' in a local variable if there are multiple uses.

> 
> 
>>       BUILD_BUG_ON(NR_hypercalls < ARRAY_SIZE(arm_hypercall_table) );
>>   
>> @@ -1459,7 +1460,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
>>           return;
>>       }
>>   
>> -    current->hcall_preempted = false;
>> +    curr->hcall_preempted = false;
>>   
>>       perfc_incra(hypercalls, *nr);
>>       call = arm_hypercall_table[*nr].fn;
>> @@ -1472,7 +1473,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
>>       HYPERCALL_RESULT_REG(regs) = call(HYPERCALL_ARGS(regs));
>>   
>>   #ifndef NDEBUG
>> -    if ( !current->hcall_preempted )
>> +    if ( !curr->hcall_preempted )
>>       {
>>           /* Deliberately corrupt parameter regs used by this hypercall. */
>>           switch ( arm_hypercall_table[*nr].nr_args ) {
>> @@ -1489,8 +1490,14 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
>>   #endif
>>   
>>       /* Ensure the hypercall trap instruction is re-executed. */
>> -    if ( current->hcall_preempted )
>> +    if ( curr->hcall_preempted )
>>           regs->pc -= 4;  /* re-execute 'hvc #XEN_HYPERCALL_TAG' */
>> +
>> +#ifdef CONFIG_IOREQ_SERVER
>> +    if ( unlikely(curr->domain->mapcache_invalidate) &&
>> +         test_and_clear_bool(curr->domain->mapcache_invalidate) )
>> +        ioreq_signal_mapcache_invalidate();
> 
> Why not just:
> 
> if ( unlikely(test_and_clear_bool(curr->domain->mapcache_invalidate)) )
>      ioreq_signal_mapcache_invalidate();
> 

This seems to match the x86 code. My guess is that they tried to avoid
the cost of the atomic operation when there is no chance
mapcache_invalidate is true.

I am split on whether the first check is worth it. The atomic operation
should be uncontended most of the time, so it should be quick. But it
will always be slower than a plain read, because there is always a store
involved.

On a related topic, Jan pointed out that the invalidation would not work
properly if multiple vCPUs modify the P2M at the same time.

Cheers,

[1] 
https://lore.kernel.org/xen-devel/f92f62bf-2f8d-34db-4be5-d3e6a4b9d580@suse.com/

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:42:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:42:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49699.87892 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRpc-0007VO-HW; Thu, 10 Dec 2020 19:42:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49699.87892; Thu, 10 Dec 2020 19:42:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRpc-0007VD-Dm; Thu, 10 Dec 2020 19:42:28 +0000
Received: by outflank-mailman (input) for mailman id 49699;
 Thu, 10 Dec 2020 19:42:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRpb-0007NY-HA
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:42:27 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 69ccdec6-8894-415f-87fd-d3c42a288a22;
 Thu, 10 Dec 2020 19:42:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 69ccdec6-8894-415f-87fd-d3c42a288a22
Message-Id: <20201210194042.703779349@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629337;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=xNUoZoBVroF7IJy3X8hhV7V+S/ANMxwK+r48iZvHH7k=;
	b=SuZCyt4zlNSkg7Jh6xjd5t+bljnkIDhiuE1rhn/qKeMG+ite6vJEiEbNofXy6Jc8tYg+0b
	IECvv2LjnWcGzCS6R2c2tg4sgQpGcjXByKLBZPL7Xs4OwNUUaKF820w3Ic1JKPFWbxYaV/
	zQxlwp4rnzD+WzTiaw91J+4L7iQQ+SR32KSqZT854CLAJKIIUHqBcwUh01vq4gYJysHS9U
	2YuIdbPutXAYf6qxYXI8k/N2s6EcL8OojQbYBV3yqFeMwoY5o0cLfzqE9P8/gw+P+igDVq
	lKj3W3wk7xu1ziy9h34ybqOITt3G9Oy0XdV3S2Ut5Fvm8NWwxziMLNP2XRk3vg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629337;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=xNUoZoBVroF7IJy3X8hhV7V+S/ANMxwK+r48iZvHH7k=;
	b=VHoSRskFC2dXgfT8CV99ZHE5BpRPUfLU6nfE7o3PMPrsIEKWqDmtgbLpWFRMlRZjjTeINe
	qsFjW8cMZ9WikxCw==
Date: Thu, 10 Dec 2020 20:25:38 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Subject: [patch 02/30] genirq: Move status flag checks to core
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

These checks are used by modules and prevent the removal of the export of
irq_to_desc(). Move the accessor into the core.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/irqdesc.h |   17 +++++------------
 kernel/irq/manage.c     |   17 +++++++++++++++++
 2 files changed, 22 insertions(+), 12 deletions(-)

--- a/include/linux/irqdesc.h
+++ b/include/linux/irqdesc.h
@@ -223,28 +223,21 @@ irq_set_chip_handler_name_locked(struct
 	data->chip = chip;
 }
 
+bool irq_check_status_bit(unsigned int irq, unsigned int bitmask);
+
 static inline bool irq_balancing_disabled(unsigned int irq)
 {
-	struct irq_desc *desc;
-
-	desc = irq_to_desc(irq);
-	return desc->status_use_accessors & IRQ_NO_BALANCING_MASK;
+	return irq_check_status_bit(irq, IRQ_NO_BALANCING_MASK);
 }
 
 static inline bool irq_is_percpu(unsigned int irq)
 {
-	struct irq_desc *desc;
-
-	desc = irq_to_desc(irq);
-	return desc->status_use_accessors & IRQ_PER_CPU;
+	return irq_check_status_bit(irq, IRQ_PER_CPU);
 }
 
 static inline bool irq_is_percpu_devid(unsigned int irq)
 {
-	struct irq_desc *desc;
-
-	desc = irq_to_desc(irq);
-	return desc->status_use_accessors & IRQ_PER_CPU_DEVID;
+	return irq_check_status_bit(irq, IRQ_PER_CPU_DEVID);
 }
 
 static inline void
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -2769,3 +2769,23 @@ bool irq_has_action(unsigned int irq)
 	return res;
 }
 EXPORT_SYMBOL_GPL(irq_has_action);
+
+/**
+ * irq_check_status_bit - Check whether bits in the irq descriptor status are set
+ * @irq:	The linux irq number
+ * @bitmask:	The bitmask to evaluate
+ *
+ * Returns: True if one of the bits in @bitmask is set
+ */
+bool irq_check_status_bit(unsigned int irq, unsigned int bitmask)
+{
+	struct irq_desc *desc;
+	bool res = false;
+
+	rcu_read_lock();
+	desc = irq_to_desc(irq);
+	if (desc)
+		res = !!(desc->status_use_accessors & bitmask);
+	rcu_read_unlock();
+	return res;
+}



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:42:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:42:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49695.87844 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRpT-0007Nk-EC; Thu, 10 Dec 2020 19:42:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49695.87844; Thu, 10 Dec 2020 19:42:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRpT-0007Nd-9e; Thu, 10 Dec 2020 19:42:19 +0000
Received: by outflank-mailman (input) for mailman id 49695;
 Thu, 10 Dec 2020 19:42:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRpR-0007NY-MO
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:42:18 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0c0734a9-a88a-4caa-a14b-9c0be355e987;
 Thu, 10 Dec 2020 19:42:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0c0734a9-a88a-4caa-a14b-9c0be355e987
Message-Id: <20201210192536.118432146@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629334;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=dqmaBUU1aX3t41xhFMZZXhwmlvXhIHW8LZkxAL3qIuw=;
	b=h/TAwStAiTk4XvWiKPBPyDEOx218hZJC1vmF2rxVMRIvmco9q+UXf8EllPz4AjaFdOly7G
	LH+a53gOb/6x23hHUexy7zt6RJrXCLvlfH0RT7UmulMioZYuYNOHYckgeF+NjT2uTUeh+Y
	x4QOtIuHWxwLs6LKhedv+5hufAepZeedXGgORll6jDA+oynwB+JjiqXx/tgkxTCP8dbJ6L
	eWUOoe8IS6Z7h2dgCBQQBehNHm1Tfe56GXWt3TBc5qz2bsl6Xo6wT+1/DTPysSy3XD7/6K
	p0j9T2Hw/f1US+iZCI2FQmyJpu2ijKWYz9OMLgGwEfvXwTDnGAF9V1bHZmS/0A==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629334;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=dqmaBUU1aX3t41xhFMZZXhwmlvXhIHW8LZkxAL3qIuw=;
	b=njYD6xBoyWobXQbQRXgtO58qqL+V8jLaqaWB6hUvHzVko4O9Y0w527+7oT5RC78ucYDSLL
	sPAJGjvbV9zCgIAw==
Date: Thu, 10 Dec 2020 20:25:36 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Subject: [patch 00/30] genirq: Treewide hunt for irq descriptor abuse and
 assorted fixes
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

A recent request to export kstat_irqs() pointed to a copy of the same in
the i915 code, which made me look for further usage of irq descriptors in
drivers.

The usage in drivers ranges from creative to broken in all colours.

irqdesc.h clearly says that this is core functionality and the fact C does
not allow full encapsulation is not a justification to fiddle with it just
because. It took us a lot of effort to make the core functionality provide
what drivers need.

If there is a shortcoming, it's not asked too much to talk to the relevant
maintainers instead of going off and fiddling with the guts of interrupt
descriptors and often enough without understanding lifetime and locking
rules.

As people insist on not respecting boundaries, this series cleans up the
(ab)use and at the end removes the export of irq_to_desc() to make it at
least harder. All legitimate users of this are built in.

While at it I stumbled over some other oddities related to interrupt
counting and cleaned them up as well.

The series applies on top of

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git irq/core

and is also available from git:

  git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git genirq

Thanks,

	tglx
---
 arch/alpha/kernel/sys_jensen.c                       |    2 
 arch/arm/kernel/smp.c                                |    2 
 arch/parisc/kernel/irq.c                             |    7 
 arch/s390/kernel/irq.c                               |    2 
 arch/x86/kernel/topology.c                           |    1 
 arch/arm64/kernel/smp.c                              |    2 
 drivers/gpu/drm/i915/display/intel_lpe_audio.c       |    4 
 drivers/gpu/drm/i915/i915_irq.c                      |   34 +++
 drivers/gpu/drm/i915/i915_pmu.c                      |   18 -
 drivers/gpu/drm/i915/i915_pmu.h                      |    8 
 drivers/mfd/ab8500-debugfs.c                         |   16 -
 drivers/net/ethernet/mellanox/mlx4/en_cq.c           |    8 
 drivers/net/ethernet/mellanox/mlx4/en_rx.c           |    6 
 drivers/net/ethernet/mellanox/mlx4/mlx4_en.h         |    3 
 drivers/net/ethernet/mellanox/mlx5/core/en.h         |    2 
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c    |    2 
 drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c    |    6 
 drivers/ntb/msi.c                                    |    4 
 drivers/pci/controller/mobiveil/pcie-mobiveil-host.c |    8 
 drivers/pci/controller/pcie-xilinx-nwl.c             |    8 
 drivers/pinctrl/nomadik/pinctrl-nomadik.c            |    3 
 drivers/xen/events/events_base.c                     |  172 +++++++++++--------
 drivers/xen/evtchn.c                                 |   34 ---
 include/linux/interrupt.h                            |    1 
 include/linux/irq.h                                  |    7 
 include/linux/irqdesc.h                              |   40 +---
 include/linux/kernel_stat.h                          |    1 
 kernel/irq/irqdesc.c                                 |   42 ++--
 kernel/irq/manage.c                                  |   37 ++++
 kernel/irq/proc.c                                    |    5 
 30 files changed, 263 insertions(+), 222 deletions(-)


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:42:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:42:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49696.87856 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRpW-0007Or-Nn; Thu, 10 Dec 2020 19:42:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49696.87856; Thu, 10 Dec 2020 19:42:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRpW-0007Og-I5; Thu, 10 Dec 2020 19:42:22 +0000
Received: by outflank-mailman (input) for mailman id 49696;
 Thu, 10 Dec 2020 19:42:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRpV-0007OY-IW
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:42:21 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e4bee92c-cca8-415e-a525-aac2136fd8dd;
 Thu, 10 Dec 2020 19:42:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e4bee92c-cca8-415e-a525-aac2136fd8dd
Message-Id: <20201210194042.860029489@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629338;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=yFPV7W+/QqCKeqQACSJL6sxoKPc89zwgLkkLw3CK5Qw=;
	b=b4xfy4bgyikTmVvFd1ABIiZ73tEfKizLVhR+VIJlwT2XXEhpCUYmv0T35kedPEaKJQVpyL
	EZ6ZpzGOZ5IxdHU/m0CyhJdifeVKmZv6VHKNE7HVl9gYcZmfYEhjmMcUyrjO41XASIxwkE
	9klGh5JDf3wOEU5Ed3lPXcLem5zy/i2BIvrR40gIEa/OR0IKxYWPTy1L0gdW4vYnASDXBG
	oPlb3Y/UTUmNePPkOPES/7lQTGxRfLrcgjzM1Sg3uuPGxXkbOxvSMK9KGVtjMA1XVdJV0t
	Q4ytattWGMjEx0ecdLFwT7qCkSF6HEiYjaunGW+F6+YzwXCTRCwunCHjVGEGUQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629338;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=yFPV7W+/QqCKeqQACSJL6sxoKPc89zwgLkkLw3CK5Qw=;
	b=QKWQOIKE0SKfWygdsF0p53UAxVkxHrZRTXTSBAi/APvw0tfKjTWt41c59REi77MGE+SbMx
	F6+minehzWx00+CA==
Date: Thu, 10 Dec 2020 20:25:39 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Subject: [patch 03/30] genirq: Move irq_set_lockdep_class() to core
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

irq_set_lockdep_class() is used from modules and requires irq_to_desc() to
be exported. Move it into the core code which lifts another requirement for
the export.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/irqdesc.h |   10 ++++------
 kernel/irq/irqdesc.c    |   14 ++++++++++++++
 2 files changed, 18 insertions(+), 6 deletions(-)

--- a/include/linux/irqdesc.h
+++ b/include/linux/irqdesc.h
@@ -240,16 +240,14 @@ static inline bool irq_is_percpu_devid(u
 	return irq_check_status_bit(irq, IRQ_PER_CPU_DEVID);
 }
 
+void __irq_set_lockdep_class(unsigned int irq, struct lock_class_key *lock_class,
+			     struct lock_class_key *request_class);
 static inline void
 irq_set_lockdep_class(unsigned int irq, struct lock_class_key *lock_class,
 		      struct lock_class_key *request_class)
 {
-	struct irq_desc *desc = irq_to_desc(irq);
-
-	if (desc) {
-		lockdep_set_class(&desc->lock, lock_class);
-		lockdep_set_class(&desc->request_mutex, request_class);
-	}
+	if (IS_ENABLED(CONFIG_LOCKDEP))
+		__irq_set_lockdep_class(irq, lock_class, request_class);
 }
 
 #endif
--- a/kernel/irq/irqdesc.c
+++ b/kernel/irq/irqdesc.c
@@ -968,3 +968,17 @@ unsigned int kstat_irqs_usr(unsigned int
 	rcu_read_unlock();
 	return sum;
 }
+
+#ifdef CONFIG_LOCKDEP
+void __irq_set_lockdep_class(unsigned int irq, struct lock_class_key *lock_class,
+			     struct lock_class_key *request_class)
+{
+	struct irq_desc *desc = irq_to_desc(irq);
+
+	if (desc) {
+		lockdep_set_class(&desc->lock, lock_class);
+		lockdep_set_class(&desc->request_mutex, request_class);
+	}
+}
+EXPORT_SYMBOL_GPL(__irq_set_lockdep_class);
+#endif



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:42:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:42:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49698.87880 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRpb-0007TL-7M; Thu, 10 Dec 2020 19:42:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49698.87880; Thu, 10 Dec 2020 19:42:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRpb-0007TC-3x; Thu, 10 Dec 2020 19:42:27 +0000
Received: by outflank-mailman (input) for mailman id 49698;
 Thu, 10 Dec 2020 19:42:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRpa-0007OY-CS
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:42:26 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 46c878e4-0845-42ee-be26-a499b10bdf1f;
 Thu, 10 Dec 2020 19:42:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 46c878e4-0845-42ee-be26-a499b10bdf1f
Message-Id: <20201210194043.172893840@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629342;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=H6DxzomRopFmkit4/4R6NYU2nMIU3VEhxNUq8xSIzZg=;
	b=Mmus8yMHzmRoOuv6OrtKQYqVrIXFlyNqamgESIcbo+FOsqNCh4jNhMrhvtgIbG1dHFvJC+
	BN8Q2AXY7eE0VbmyhcLPklodEgxwCZ8ZVxoetzzzrQMxxhJeOuw6ycJopu9ZM+HbvHrHyd
	IWP7LrT/B7MAbb8pT190VoO7MTJuA1obD8dhEINtYVE4KCpqtRWqxMcR/CBUrG9TYUjwDi
	+msxVmLhS+JWVhN88RJHPjmRnkz1vSg1hhvl2W0fsWBOq4djR+BrSPhFkw8uZNfFkc9Wy1
	kP6KYk7n5zZjtKAaOxpoGToDh/v/xpae6+owBkHebtH5vIi8texeMaL2QUYeaQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629342;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=H6DxzomRopFmkit4/4R6NYU2nMIU3VEhxNUq8xSIzZg=;
	b=rqJf3IS+tp6Aq+14lO31OpHLTmQ/963wA224eQ6BXLp/Y3RncaPXYeQ/h9yfO2+idiLTk2
	qfa93PzKze1vcODg==
Date: Thu, 10 Dec 2020 20:25:42 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Subject:
 [patch 06/30] parisc/irq: Simplify irq count output for /proc/interrupts
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

The SMP variant works perfectly fine on UP as well.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Helge Deller <deller@gmx.de>
Cc: afzal mohammed <afzal.mohd.ma@gmail.com>
Cc: linux-parisc@vger.kernel.org
---
 arch/parisc/kernel/irq.c |    5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

--- a/arch/parisc/kernel/irq.c
+++ b/arch/parisc/kernel/irq.c
@@ -216,12 +216,9 @@ int show_interrupts(struct seq_file *p,
 		if (!action)
 			goto skip;
 		seq_printf(p, "%3d: ", i);
-#ifdef CONFIG_SMP
+
 		for_each_online_cpu(j)
 			seq_printf(p, "%10u ", kstat_irqs_cpu(i, j));
-#else
-		seq_printf(p, "%10u ", kstat_irqs(i));
-#endif
 
 		seq_printf(p, " %14s", irq_desc_get_chip(desc)->name);
 #ifndef PARISC_IRQ_CR16_COUNTS



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:42:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:42:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49697.87868 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRpX-0007QA-U7; Thu, 10 Dec 2020 19:42:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49697.87868; Thu, 10 Dec 2020 19:42:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRpX-0007Q3-QU; Thu, 10 Dec 2020 19:42:23 +0000
Received: by outflank-mailman (input) for mailman id 49697;
 Thu, 10 Dec 2020 19:42:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRpW-0007NY-Gw
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:42:22 +0000
Received: from galois.linutronix.de (unknown [2a0a:51c0:0:12e:550::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e761b325-a2c4-429c-bd01-edc5f9648da7;
 Thu, 10 Dec 2020 19:42:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e761b325-a2c4-429c-bd01-edc5f9648da7
Message-Id: <20201210194042.548936472@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629336;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=wCqGjbHBcSlcHWGOV/dAhxvR97JcFGNX9psmcROoCq0=;
	b=2Z0Q8lbqw2d3iK0lyMTfu6GGSIFMkE/mgVP/hAGnWOqxBRMxf5MtZTpRPSPBXz/boYfgYn
	ST6oe0NetpTwhOSarLLIdAuYLS1gydE77MQVYVcY3FouLrWnl5si1eWOJ4F95yCk0G9X60
	1VL1Pvt1tpdt2jzvXhHk1B3J4gqT2I16payytnnsVM1ae0GkKHVOkzm3mG/NvUjNMZDZ8b
	Y9sC2WWQtJSCM32DjS16A3YMAUHnAI48PATHpxuESAMuGdCfbbEW6ICvvlfOLWvLQ9S/jh
	Z+QD95xpqockzbshrRUV6G0e1FqEFRsR9IAhKZG5jK6CihEBgYA5K4ONks8gvw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629336;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=wCqGjbHBcSlcHWGOV/dAhxvR97JcFGNX9psmcROoCq0=;
	b=7JxYxk/ThtDXDjNM4lO0Kv6ZI0VZ4wL2ccu9YMROMO3FB+Dn8roKKZsGAALM1EdgllUWzl
	ydzvbEXlvqdXI/CQ==
Date: Thu, 10 Dec 2020 20:25:37 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Subject: [patch 01/30] genirq: Move irq_has_action() into core code
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

This function uses irq_to_desc() and is going to be used by modules to
replace the open coded irq_to_desc() (ab)usage. The final goal is to remove
the export of irq_to_desc() so drivers cannot fiddle with it anymore.

Move it into the core code and fixup the usage sites to include the proper
header.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/alpha/kernel/sys_jensen.c |    2 +-
 arch/x86/kernel/topology.c     |    1 +
 include/linux/interrupt.h      |    1 +
 include/linux/irqdesc.h        |    7 +------
 kernel/irq/manage.c            |   17 +++++++++++++++++
 5 files changed, 21 insertions(+), 7 deletions(-)

--- a/arch/alpha/kernel/sys_jensen.c
+++ b/arch/alpha/kernel/sys_jensen.c
@@ -7,7 +7,7 @@
  *
  * Code supporting the Jensen.
  */
-
+#include <linux/interrupt.h>
 #include <linux/kernel.h>
 #include <linux/types.h>
 #include <linux/mm.h>
--- a/arch/x86/kernel/topology.c
+++ b/arch/x86/kernel/topology.c
@@ -25,6 +25,7 @@
  *
  * Send feedback to <colpatch@us.ibm.com>
  */
+#include <linux/interrupt.h>
 #include <linux/nodemask.h>
 #include <linux/export.h>
 #include <linux/mmzone.h>
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -232,6 +232,7 @@ extern void devm_free_irq(struct device
 # define local_irq_enable_in_hardirq()	local_irq_enable()
 #endif
 
+bool irq_has_action(unsigned int irq);
 extern void disable_irq_nosync(unsigned int irq);
 extern bool disable_hardirq(unsigned int irq);
 extern void disable_irq(unsigned int irq);
--- a/include/linux/irqdesc.h
+++ b/include/linux/irqdesc.h
@@ -179,12 +179,7 @@ int handle_domain_nmi(struct irq_domain
 /* Test to see if a driver has successfully requested an irq */
 static inline int irq_desc_has_action(struct irq_desc *desc)
 {
-	return desc->action != NULL;
-}
-
-static inline int irq_has_action(unsigned int irq)
-{
-	return irq_desc_has_action(irq_to_desc(irq));
+	return desc && desc->action != NULL;
 }
 
 /**
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -2752,3 +2752,20 @@ int irq_set_irqchip_state(unsigned int i
 	return err;
 }
 EXPORT_SYMBOL_GPL(irq_set_irqchip_state);
+
+/**
+ * irq_has_action - Check whether an interrupt is requested
+ * @irq:	The linux irq number
+ *
+ * Returns: A snapshot of the current state
+ */
+bool irq_has_action(unsigned int irq)
+{
+	bool res;
+
+	rcu_read_lock();
+	res = irq_desc_has_action(irq_to_desc(irq));
+	rcu_read_unlock();
+	return res;
+}
+EXPORT_SYMBOL_GPL(irq_has_action);



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:42:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:42:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49700.87904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRph-0007cW-7L; Thu, 10 Dec 2020 19:42:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49700.87904; Thu, 10 Dec 2020 19:42:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRph-0007cN-1X; Thu, 10 Dec 2020 19:42:33 +0000
Received: by outflank-mailman (input) for mailman id 49700;
 Thu, 10 Dec 2020 19:42:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRpf-0007OY-Cn
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:42:31 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 37bf2da7-5f6e-44d6-b7ea-6278350e7d51;
 Thu, 10 Dec 2020 19:42:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 37bf2da7-5f6e-44d6-b7ea-6278350e7d51
Message-Id: <20201210194043.362094758@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629344;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=xI8ipC8kVjJGltjSHu5Mwub/dbA42XKOSuIg0xDxbuM=;
	b=D/de10U4HoG38vblr7IeySKjPU1LbUwG6rWRmeGYZrTiPhh//Hh7l/GxJreJ0rFcvzRPJy
	uoauNylV8dWf/MoltroGmVGhvJ2oJ5Y1i1Hdh3B/UY4mD9pwH+y7Dg4V+Vuar0V4FZXmUS
	6T9k0+65HYvYSEKNJ4TVXxdcFWWUmHXiCvk5q873lejGHd1ZF8kDnbNFZOuvndA/bZ6hqk
	upKi4g8D5MCEBYYTgyELzJKcRa+JlxwqpINFEbwiyouRus62LMleSD5uCsPspz8eEzaopF
	eLafHsQRj846r/5tje8mdg+dCV5YsColFl6docNWF4RiFgqrORcZz2o154ST/Q==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629344;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=xI8ipC8kVjJGltjSHu5Mwub/dbA42XKOSuIg0xDxbuM=;
	b=uhk8uY0rFu42URnjai4w+dmeZE7ul5z78imfPTBVOwBrf5zNSXRAMP26aad4ZYcrnfPUQU
	IQRkkJ1oiqPOXJDw==
Date: Thu, 10 Dec 2020 20:25:44 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Subject: [patch 08/30] genirq: Provide kstat_irqdesc_cpu()
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

Most users of kstat_irqs_cpu() have the irq descriptor already. No point in
calling into the core code and looking it up once more.

Use it in per_cpu_count_show() to start with.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/irqdesc.h |    6 ++++++
 kernel/irq/irqdesc.c    |    4 ++--
 2 files changed, 8 insertions(+), 2 deletions(-)

--- a/include/linux/irqdesc.h
+++ b/include/linux/irqdesc.h
@@ -113,6 +113,12 @@ static inline void irq_unlock_sparse(voi
 extern struct irq_desc irq_desc[NR_IRQS];
 #endif
 
+static inline unsigned int irq_desc_kstat_cpu(struct irq_desc *desc,
+					      unsigned int cpu)
+{
+	return desc->kstat_irqs ? *per_cpu_ptr(desc->kstat_irqs, cpu) : 0;
+}
+
 static inline struct irq_desc *irq_data_to_desc(struct irq_data *data)
 {
 	return container_of(data->common, struct irq_desc, irq_common_data);
--- a/kernel/irq/irqdesc.c
+++ b/kernel/irq/irqdesc.c
@@ -147,12 +147,12 @@ static ssize_t per_cpu_count_show(struct
 				  struct kobj_attribute *attr, char *buf)
 {
 	struct irq_desc *desc = container_of(kobj, struct irq_desc, kobj);
-	int cpu, irq = desc->irq_data.irq;
 	ssize_t ret = 0;
 	char *p = "";
+	int cpu;
 
 	for_each_possible_cpu(cpu) {
-		unsigned int c = kstat_irqs_cpu(irq, cpu);
+		unsigned int c = irq_desc_kstat_cpu(desc, cpu);
 
 		ret += scnprintf(buf + ret, PAGE_SIZE - ret, "%s%u", p, c);
 		p = ",";



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:42:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:42:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49701.87909 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRph-0007dr-N6; Thu, 10 Dec 2020 19:42:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49701.87909; Thu, 10 Dec 2020 19:42:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRph-0007de-Gd; Thu, 10 Dec 2020 19:42:33 +0000
Received: by outflank-mailman (input) for mailman id 49701;
 Thu, 10 Dec 2020 19:42:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRpg-0007NY-HN
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:42:32 +0000
Received: from galois.linutronix.de (unknown [2a0a:51c0:0:12e:550::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2184410b-89a9-439f-b41e-9435301c810d;
 Thu, 10 Dec 2020 19:42:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2184410b-89a9-439f-b41e-9435301c810d
Message-Id: <20201210194042.967177918@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629339;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=gYsowh6i1mqGXNg6FrEIlf+Px9WkTuZjYPGZA4AoQf4=;
	b=gQRR2ovC6G1wG0/hHfvHA26mBklNiMZtkvMBYZWjgQshlf7o+cIXoD11qJjLT48W0LHgJA
	5Zvfg1News0uQs48erbM5kmDtHengxfmuqG2DKGnaIJ04vg1FJ7jnVpZXG8cNwMy19OeFP
	iPFhPWqfd87RfMeVK3ZgF11+LhI8jQI6rRQAPId6pBGrjdbz5+dRwipziIghpyTp0+nYcu
	/rMQolvJykCdajiQdhpg5/73/PAnO6jEv3XXq/Rv9w9yxTGzkAvSFeWdlWRlX1S05CmH1R
	pxA14gUUPV2LrG7qMC56g75TqBVjQeX4kqlFmRI413CglTnFsrbWRi0PstpOew==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629339;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=gYsowh6i1mqGXNg6FrEIlf+Px9WkTuZjYPGZA4AoQf4=;
	b=rxr0RqK5G71Mdyjh0pmNkfya7BLBal4i+pBNw5B4iIspa3uaIBOFXW7QFkpv0Ae/Yof/mz
	NtSLBAk6jBeGNuDQ==
Date: Thu, 10 Dec 2020 20:25:40 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Subject: [patch 04/30] genirq: Provide irq_get_effective_affinity()
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

Provide an accessor for the effective interrupt affinity mask. It is going to
be used to replace open coded fiddling with the irq descriptor.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/irq.h |    7 +++++++
 1 file changed, 7 insertions(+)

--- a/include/linux/irq.h
+++ b/include/linux/irq.h
@@ -907,6 +907,13 @@ struct cpumask *irq_data_get_effective_a
 }
 #endif
 
+static inline struct cpumask *irq_get_effective_affinity_mask(unsigned int irq)
+{
+	struct irq_data *d = irq_get_irq_data(irq);
+
+	return d ? irq_data_get_effective_affinity_mask(d) : NULL;
+}
+
 unsigned int arch_dynirq_lower_bound(unsigned int from);
 
 int __irq_alloc_descs(int irq, unsigned int from, unsigned int cnt, int node,



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:42:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:42:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49702.87928 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRpm-0007mS-6q; Thu, 10 Dec 2020 19:42:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49702.87928; Thu, 10 Dec 2020 19:42:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRpm-0007mE-1v; Thu, 10 Dec 2020 19:42:38 +0000
Received: by outflank-mailman (input) for mailman id 49702;
 Thu, 10 Dec 2020 19:42:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRpk-0007OY-Cm
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:42:36 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 16ba92d4-2e4c-40c8-9093-2e56ce350ec4;
 Thu, 10 Dec 2020 19:42:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 16ba92d4-2e4c-40c8-9093-2e56ce350ec4
Message-Id: <20201210194043.546326568@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629347;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=5nQTlSWs0aM1JwskkO++sgVx3e7r1A0iUoENDct9w0g=;
	b=QPllCmzoid7R7QOyZWHCOYQKMI+juGJWW7IXPn4ldqvXap5zOWqeLwIujbfic04O+qhazO
	0xgWWwKg3GbwssA1OWCHaW+e3v13fVknyR8JXjPDh7PwNE+Ckpns9v0fFT+ZM3yGQ/k3Vv
	GCsqZz06V+zRI8tFVRs2T2IovMxYUYd8eD6JafQc5V2xJxNbrsWbTKI93mSM/Ta7w2MVgk
	H0DCRjstAo+JEpF+kGbxIAfIodj4WQwAN11f4kvj+YdByF1NUUzPqO+eeo3F+Zo17kc0So
	XSotZsWfd7b5nXWYK0Zyq3GvoLnhjvZlI0o0fsviKiEzV4hSu6MDcZkJycE3/A==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629347;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=5nQTlSWs0aM1JwskkO++sgVx3e7r1A0iUoENDct9w0g=;
	b=vR1W3s1vgzWnaT5Q7BZrP5IMwXx2MgiIHJAS9ztGZt4iF+7Bkm46E7nJWgDdHbgllL9/0G
	km2qINIgPXSFZ2CQ==
Date: Thu, 10 Dec 2020 20:25:46 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 linux-arm-kernel@lists.infradead.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Subject:
 [patch 10/30] arm64/smp: Use irq_desc_kstat_cpu() in arch_show_interrupts()
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

The irq descriptor is already there, no need to look it up again.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
---
 arch/arm64/kernel/smp.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -809,7 +809,7 @@ int arch_show_interrupts(struct seq_file
 		seq_printf(p, "%*s%u:%s", prec - 1, "IPI", i,
 			   prec >= 4 ? " " : "");
 		for_each_online_cpu(cpu)
-			seq_printf(p, "%10u ", kstat_irqs_cpu(irq, cpu));
+			seq_printf(p, "%10u ", irq_desc_kstat_cpu(ipi_desc[i], cpu));
 		seq_printf(p, "      %s\n", ipi_types[i]);
 	}
 



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:42:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:42:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49703.87940 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRpn-0007pn-JP; Thu, 10 Dec 2020 19:42:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49703.87940; Thu, 10 Dec 2020 19:42:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRpn-0007pd-FM; Thu, 10 Dec 2020 19:42:39 +0000
Received: by outflank-mailman (input) for mailman id 49703;
 Thu, 10 Dec 2020 19:42:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRpl-0007NY-HQ
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:42:37 +0000
Received: from galois.linutronix.de (unknown [2a0a:51c0:0:12e:550::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b4d33a7a-fefc-4ebe-981c-ea5d03fd709c;
 Thu, 10 Dec 2020 19:42:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b4d33a7a-fefc-4ebe-981c-ea5d03fd709c
Message-Id: <20201210194043.067097663@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629341;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=an/0SlT10gC/rfGB9cofMBAbBc2y8gDRMVN0GHlSQuQ=;
	b=4P4kE5RTAFQxk4E9p2NZ5tVZaT/fVdqT6FUBVR+n0VlRr8uKLCY6ouByJWK/y9MigmPqnb
	tesyONzFlkjZdJe1WWdFvx3h+r+GXjLB7sIlmHBZwN8ZTN8764SDWmNv/29J0LI/juCzrG
	5AR2DIz8iI4y5+WrS2Gk+rZPh543USYMAHoeudVHDJ9Wges2zOPIJ3rC2brtEkqswFGEBV
	qoEKpCAHt2vMMptUuKRYjL5+zlSE91zw5P4yyaAjNJTKm8BCJhUlMmWxn/QqILbbs/4fJM
	zRFVwll3IxpXJvkmkahKFPnuLZWACJTleFizSON231iYySZCGseiBRgkWQqHew==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629341;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=an/0SlT10gC/rfGB9cofMBAbBc2y8gDRMVN0GHlSQuQ=;
	b=0D9iZlJLFfBnhtOaF8MIaZu0T6LGBg7Mhm2UH/Vr6qq568TJA8295HDu2RHkjE+w+f0eVZ
	+xYpdmSgjdKynXCQ==
Date: Thu, 10 Dec 2020 20:25:41 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Subject: [patch 05/30] genirq: Annotate irq stats data races
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

Both the per-CPU stats and the accumulated count are accessed locklessly and
can be modified concurrently. That is intentional: the stats are a rough
estimate anyway. Annotate the accesses with data_race().

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/irq/irqdesc.c |    4 ++--
 kernel/irq/proc.c    |    5 +++--
 2 files changed, 5 insertions(+), 4 deletions(-)

--- a/kernel/irq/irqdesc.c
+++ b/kernel/irq/irqdesc.c
@@ -943,10 +943,10 @@ unsigned int kstat_irqs(unsigned int irq
 	if (!irq_settings_is_per_cpu_devid(desc) &&
 	    !irq_settings_is_per_cpu(desc) &&
 	    !irq_is_nmi(desc))
-	    return desc->tot_count;
+		return data_race(desc->tot_count);
 
 	for_each_possible_cpu(cpu)
-		sum += *per_cpu_ptr(desc->kstat_irqs, cpu);
+		sum += data_race(*per_cpu_ptr(desc->kstat_irqs, cpu));
 	return sum;
 }
 
--- a/kernel/irq/proc.c
+++ b/kernel/irq/proc.c
@@ -488,9 +488,10 @@ int show_interrupts(struct seq_file *p,
 	if (!desc || irq_settings_is_hidden(desc))
 		goto outsparse;
 
-	if (desc->kstat_irqs)
+	if (desc->kstat_irqs) {
 		for_each_online_cpu(j)
-			any_count |= *per_cpu_ptr(desc->kstat_irqs, j);
+			any_count |= data_race(*per_cpu_ptr(desc->kstat_irqs, j));
+	}
 
 	if ((!desc->action || irq_desc_is_chained(desc)) && !any_count)
 		goto outsparse;



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:42:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:42:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49708.87952 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRpr-0007wC-5Q; Thu, 10 Dec 2020 19:42:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49708.87952; Thu, 10 Dec 2020 19:42:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRpq-0007vn-UL; Thu, 10 Dec 2020 19:42:42 +0000
Received: by outflank-mailman (input) for mailman id 49708;
 Thu, 10 Dec 2020 19:42:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRpp-0007OY-Cs
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:42:41 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 41499f2c-83a5-41a7-9bc2-45586ee5240f;
 Thu, 10 Dec 2020 19:42:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 41499f2c-83a5-41a7-9bc2-45586ee5240f
Message-Id: <20201210194043.659522455@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629348;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=b1npAX3rdKpi0zxYsfY5Z+OgET1V6fz47IjXP2N2S30=;
	b=XVTknQmD0+t9vnaeIeIdlOsx1km0mt60yX3sLzQQo1Ss1/zAJ0V4dKHa6DvwjgOabag8/L
	cdAIIF6lZarTaeoLKetYm0pzGwl8kcckN80Dn3W3jMGiKQboR+gMBzt00I83ziyHabYe8b
	NtMvjO2a5WGqd9peY6AIorbHYlQYKzPUUQ5sgSHC+l4y5YIFim4X7GOcwjhvZ7Git47A37
	gmcxNDaJq1jS9foBWYoWsrYgrXjv0vr506Cf/8Zt24Pt75S4yTc5RSKsMNiLugnEQs66lu
	JY2BKtk7vw81ebNMg9AR+ndNy6um/c3Nr6OXjzxeLIyPUkLePe2OzZqJJSi+0A==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629348;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=b1npAX3rdKpi0zxYsfY5Z+OgET1V6fz47IjXP2N2S30=;
	b=WNYux1er1Ehidg7p8CyS69i/jC/6HS8muln7aTCmBy2Y4C4fUCVrgJw4x7KlJp6BDc2l6Z
	2PdGgfz2rCEe0FAA==
Date: Thu, 10 Dec 2020 20:25:47 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Subject:
 [patch 11/30] parisc/irq: Use irq_desc_kstat_cpu() in show_interrupts()
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

The irq descriptor is already there, no need to look it up again.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Helge Deller <deller@gmx.de>
Cc: afzal mohammed <afzal.mohd.ma@gmail.com>
Cc: linux-parisc@vger.kernel.org
---
 arch/parisc/kernel/irq.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/parisc/kernel/irq.c
+++ b/arch/parisc/kernel/irq.c
@@ -218,7 +218,7 @@ int show_interrupts(struct seq_file *p,
 		seq_printf(p, "%3d: ", i);
 
 		for_each_online_cpu(j)
-			seq_printf(p, "%10u ", kstat_irqs_cpu(i, j));
+			seq_printf(p, "%10u ", irq_desc_kstat_cpu(desc, j));
 
 		seq_printf(p, " %14s", irq_desc_get_chip(desc)->name);
 #ifndef PARISC_IRQ_CR16_COUNTS



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:42:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:42:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49709.87964 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRps-0007zT-Kf; Thu, 10 Dec 2020 19:42:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49709.87964; Thu, 10 Dec 2020 19:42:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRps-0007zL-FB; Thu, 10 Dec 2020 19:42:44 +0000
Received: by outflank-mailman (input) for mailman id 49709;
 Thu, 10 Dec 2020 19:42:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRpq-0007NY-Hc
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:42:42 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8510efe7-e1de-4095-840d-46e2bdeb12cf;
 Thu, 10 Dec 2020 19:42:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8510efe7-e1de-4095-840d-46e2bdeb12cf
Message-Id: <20201210194043.268774449@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629343;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=KtzFp0nR+dJW1v891+1wc4D+KUTADWJVxpq6j1ykdfo=;
	b=BwWrBYP0c3Z0lFh3V6qC90H7c5G9PHEfLbQKd0hztOiqwJ4kBCiNk+VDHIcCk1P/HxJJE+
	zt6CbhOUy0zFcpFdRmOFsyfb9chR0Qx1RUaHNJMzGPe4/YSaExFvgbHq6rbAS7kLyxP5tj
	50aob2dpVHhgmHdoE9LrAgft46kAteQH0u20dVmWOXNd76SQCmuGq0ygZQrP10YFFulxyu
	/qBwgM/QVkALrmRL+BjjJvZRX8irP6Tqn07c/GO5qIfUgWd74BWRs6KIHsrquTbtL84IH6
	+M3rrImgxsn11U5KirO7h0p+JsOWwXjhPDlffX4TftaAXSN1Xiplo1xF1UMXlQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629343;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=KtzFp0nR+dJW1v891+1wc4D+KUTADWJVxpq6j1ykdfo=;
	b=YlMcwWy3vQ5LbjypUKpYuIGaMi15XbC955aXBdrVoJspoFoOrPcJRpjqt+rHhF7aRCDnxi
	VzPUfEGKZfuiRjBQ==
Date: Thu, 10 Dec 2020 20:25:43 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Subject: [patch 07/30] genirq: Make kstat_irqs() static
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

No more users outside the core code.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/kernel_stat.h |    1 -
 kernel/irq/irqdesc.c        |   19 ++++++-------------
 2 files changed, 6 insertions(+), 14 deletions(-)

--- a/include/linux/kernel_stat.h
+++ b/include/linux/kernel_stat.h
@@ -67,7 +67,6 @@ static inline unsigned int kstat_softirq
 /*
  * Number of interrupts per specific IRQ source, since bootup
  */
-extern unsigned int kstat_irqs(unsigned int irq);
 extern unsigned int kstat_irqs_usr(unsigned int irq);
 
 /*
--- a/kernel/irq/irqdesc.c
+++ b/kernel/irq/irqdesc.c
@@ -924,15 +924,7 @@ static bool irq_is_nmi(struct irq_desc *
 	return desc->istate & IRQS_NMI;
 }
 
-/**
- * kstat_irqs - Get the statistics for an interrupt
- * @irq:	The interrupt number
- *
- * Returns the sum of interrupt counts on all cpus since boot for
- * @irq. The caller must ensure that the interrupt is not removed
- * concurrently.
- */
-unsigned int kstat_irqs(unsigned int irq)
+static unsigned int kstat_irqs(unsigned int irq)
 {
 	struct irq_desc *desc = irq_to_desc(irq);
 	unsigned int sum = 0;
@@ -951,13 +943,14 @@ unsigned int kstat_irqs(unsigned int irq
 }
 
 /**
- * kstat_irqs_usr - Get the statistics for an interrupt
+ * kstat_irqs_usr - Get the statistics for an interrupt from thread context
  * @irq:	The interrupt number
  *
  * Returns the sum of interrupt counts on all cpus since boot for @irq.
- * Contrary to kstat_irqs() this can be called from any context.
- * It uses rcu since a concurrent removal of an interrupt descriptor is
- * observing an rcu grace period before delayed_free_desc()/irq_kobj_release().
+ *
+ * It uses rcu to protect the access since a concurrent removal of an
+ * interrupt descriptor is observing an rcu grace period before
+ * delayed_free_desc()/irq_kobj_release().
  */
 unsigned int kstat_irqs_usr(unsigned int irq)
 {



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:42:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:42:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49717.87976 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRpx-00089K-8M; Thu, 10 Dec 2020 19:42:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49717.87976; Thu, 10 Dec 2020 19:42:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRpx-000896-3K; Thu, 10 Dec 2020 19:42:49 +0000
Received: by outflank-mailman (input) for mailman id 49717;
 Thu, 10 Dec 2020 19:42:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRpv-0007NY-Hv
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:42:47 +0000
Received: from galois.linutronix.de (unknown [2a0a:51c0:0:12e:550::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0e29be8c-fb82-407f-b628-f3fc2e593376;
 Thu, 10 Dec 2020 19:42:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0e29be8c-fb82-407f-b628-f3fc2e593376
Message-Id: <20201210194043.454288890@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629346;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=d6ZnDp1ZYVpzy4fVKUFFFULYmpUUlcl5sebpJ1aGphc=;
	b=MSBX3shBH0qoLZgoBaKe5kLbUB8TnsF2wkt/HDprwYNeTCAbd2lA/IPfG5shNVgtErFne6
	2e7dxedwAVSHA242jzYwEZEDos7nzVn3jxex71TScShSaummfLGCXQPpi9hsC6bxlpki1A
	PaKHNLN8C0jAcib2U6kKogdFm6aBvAusAl2wzmpxVjN8CVqMhcCswRy3h+4P9laPxunXLd
	H7LLUzUcw0fvfesHPPxL0uK/nLdCXiS7KyYQy2gAPBMvgGCcbyREn+oK+jObT1BhRalalZ
	V5Nn3/fVSWTJibiL01V8QA9WbzxTf/xt6yFIPqkmLVhG809hQRqJj+6eoADqiw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629346;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=d6ZnDp1ZYVpzy4fVKUFFFULYmpUUlcl5sebpJ1aGphc=;
	b=/jZEQ2YP5EiBLPFVPa41DzeVlkA22cUlEXVQ4rLDBt8Dt9AhsUKkU1UyqJF1WyN4nzOpZY
	l36XkFlXpQQTzpDg==
Date: Thu, 10 Dec 2020 20:25:45 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Subject: [patch 09/30] ARM: smp: Use irq_desc_kstat_cpu() in show_ipi_list()
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

The irq descriptor is already there, no need to look it up again.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: linux-arm-kernel@lists.infradead.org
---
 arch/arm/kernel/smp.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -550,7 +550,7 @@ void show_ipi_list(struct seq_file *p, i
 		seq_printf(p, "%*s%u: ", prec - 1, "IPI", i);
 
 		for_each_online_cpu(cpu)
-			seq_printf(p, "%10u ", kstat_irqs_cpu(irq, cpu));
+			seq_printf(p, "%10u ", irq_desc_kstat_cpu(ipi_desc[i], cpu));
 
 		seq_printf(p, " %s\n", ipi_types[i]);
 	}



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:42:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:42:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49726.87988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRq5-0008Lw-O1; Thu, 10 Dec 2020 19:42:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49726.87988; Thu, 10 Dec 2020 19:42:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRq5-0008La-If; Thu, 10 Dec 2020 19:42:57 +0000
Received: by outflank-mailman (input) for mailman id 49726;
 Thu, 10 Dec 2020 19:42:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRq4-0007OY-Dj
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:42:56 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3c090e87-a9aa-46ce-ba9b-6fbff997e738;
 Thu, 10 Dec 2020 19:42:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3c090e87-a9aa-46ce-ba9b-6fbff997e738
Message-Id: <20201210194043.769108348@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629349;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=RkpDEDsz36bMGIvl/YLs0noKGbsbwyBTWfWGyYCIvwI=;
	b=FjXt0gBZbmoGEhW+JUNun0TgRlpWk5UHUbF8b/SYgA73R5qt2FK9Tl9JCUAhz1x+5ISxmb
	8V7v08WHaUsVznntzV+P4bCrHHh6loow5ZaaQC5qCuEudPhwVHK22AOQesI0Ecsqkh8veP
	Qwn4/oD2Zv3hZttfcTXroXZBKsSviSPS/hfZ0fieRb3ZsJsiILx4Kdfz9zR7w+K2srUSyp
	sVcjXZizv2IhQYS+ESc7LfjgqYgvOCII1d3ugOhi6vPvk7jAG3wwmOOag6qx9YOR3B1IuO
	KQi2Ohrd/6vH+kL/8yvl1XW4GpaahKbPHYgPc32mGshR05m6gXPFbQzdssR1DQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629349;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=RkpDEDsz36bMGIvl/YLs0noKGbsbwyBTWfWGyYCIvwI=;
	b=tNp+xNMz1hLFIiRaZEJhoV6X6hTrUYtKO/6Uex9oEhysEt44+COqeNeDA5dBz+L1+ts6Sq
	70KNBmuAKx0cHoAw==
Date: Thu, 10 Dec 2020 20:25:48 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Subject:
 [patch 12/30] s390/irq: Use irq_desc_kstat_cpu() in show_msi_interrupt()
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

The irq descriptor is already there, no need to look it up again.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: linux-s390@vger.kernel.org
---
 arch/s390/kernel/irq.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/s390/kernel/irq.c
+++ b/arch/s390/kernel/irq.c
@@ -124,7 +124,7 @@ static void show_msi_interrupt(struct se
 	raw_spin_lock_irqsave(&desc->lock, flags);
 	seq_printf(p, "%3d: ", irq);
 	for_each_online_cpu(cpu)
-		seq_printf(p, "%10u ", kstat_irqs_cpu(irq, cpu));
+		seq_printf(p, "%10u ", irq_desc_kstat_cpu(desc, cpu));
 
 	if (desc->irq_data.chip)
 		seq_printf(p, " %8s", desc->irq_data.chip->name);



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:42:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:42:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49727.88000 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRq7-0008OY-1r; Thu, 10 Dec 2020 19:42:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49727.88000; Thu, 10 Dec 2020 19:42:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRq6-0008OO-TZ; Thu, 10 Dec 2020 19:42:58 +0000
Received: by outflank-mailman (input) for mailman id 49727;
 Thu, 10 Dec 2020 19:42:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRq5-0007NY-I7
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:42:57 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c23bcf1d-1939-42b0-b0e3-72b6d5ff8c3a;
 Thu, 10 Dec 2020 19:42:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c23bcf1d-1939-42b0-b0e3-72b6d5ff8c3a
Message-Id: <20201210194043.862572239@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629351;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=JhTeD+U5tH7aZNj19fhxsRS/YgkGW74hooJZTJtUYG8=;
	b=kakkmibZTWswSOf57jMSc/btXnxa6RB0Oxv9VvloZSQtIG4WkM/MftbHazL1ObfAX1q6u6
	v9EgNe5gwBKYjRSGkjE0PSwHQU5J90KEPnfCKLss4hKXmA7Bkc/E+T/2WPZaHEkFR4i+UO
	QdX6JJ+VHkGH1egGtgEd0+oJvfTVSCCs6O0nmczsu900db4+znQu/4DZ0ZqIrtnAutI7sP
	lL8sGEqmceEeKou2jRcjeSLimqt3xaebgyRNSKG8cou2Ox4v3oxNWhwx9YCjoUSMJE+bd8
	k+CDY3uhTourxMY6j19wud6jiqrZu7xtNFwliIMKmsBQdSHoUGcZxQdzt+oYyw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629351;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=JhTeD+U5tH7aZNj19fhxsRS/YgkGW74hooJZTJtUYG8=;
	b=plOxIMbVY4CH/4Uw1Dak+ev4LYX0asa/YH9Oe3KWiQVB69dEiGhbzcOFTb0+Om9wD47q90
	EKL6tVW8YPmeHADg==
Date: Thu, 10 Dec 2020 20:25:49 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Subject:
 [patch 13/30] drm/i915/lpe_audio: Remove pointless irq_to_desc() usage
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

Nothing uses the result and nothing should ever use it in driver code.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Wambui Karuga <wambui.karugax@gmail.com>
Cc: intel-gfx@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org
---
 drivers/gpu/drm/i915/display/intel_lpe_audio.c |    4 ----
 1 file changed, 4 deletions(-)

--- a/drivers/gpu/drm/i915/display/intel_lpe_audio.c
+++ b/drivers/gpu/drm/i915/display/intel_lpe_audio.c
@@ -297,13 +297,9 @@ int intel_lpe_audio_init(struct drm_i915
  */
 void intel_lpe_audio_teardown(struct drm_i915_private *dev_priv)
 {
-	struct irq_desc *desc;
-
 	if (!HAS_LPE_AUDIO(dev_priv))
 		return;
 
-	desc = irq_to_desc(dev_priv->lpe_audio.irq);
-
 	lpe_audio_platdev_destroy(dev_priv);
 
 	irq_free_desc(dev_priv->lpe_audio.irq);



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:43:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:43:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49732.88012 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRqA-0008UK-FU; Thu, 10 Dec 2020 19:43:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49732.88012; Thu, 10 Dec 2020 19:43:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRqA-0008UA-Ab; Thu, 10 Dec 2020 19:43:02 +0000
Received: by outflank-mailman (input) for mailman id 49732;
 Thu, 10 Dec 2020 19:43:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRq9-0007OY-Da
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:43:01 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1849d6d4-68f3-400b-86eb-04ea70ec79b6;
 Thu, 10 Dec 2020 19:42:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1849d6d4-68f3-400b-86eb-04ea70ec79b6
Message-Id: <20201210194044.157283633@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629354;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=4K6qItYlSe7EBUp3WfMEzcI2y69yZ4OdOhSokF1HvFw=;
	b=dopfjz4hp6A4Xryu0tbdegM/v0YL6lWYdnP3XlIYCgQ9pjXx9ZT1p3TI0zk4elgm5gPd+q
	FURWbMd/ftse0CXn8TWD7Lf/LMt29YyfqJkuoxwIq2zLBOhpU5N1uQ9NTcofTSPiRoKfTO
	PEbzx4Og6n9n+3PdfyA3eCON82LOBsb5cz2c2++JdWXn2zeffzgZ5twz3L2T1NgK3Sr/6E
	z4hGHNQPL1mjIDXhMbZMKcry3IqKzOm1XwkJ497HBo+O7rAGzJagP8Bz95WOHjr107cL1M
	F3z7vCp/PgmQ7w0NfOrec8JvWkd2Glw3/TrHKxQy3AjIzKLFjQ11edATiOjGMg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629354;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=4K6qItYlSe7EBUp3WfMEzcI2y69yZ4OdOhSokF1HvFw=;
	b=K3Wur5e2Woy0OzRxLAAmfcCZIUuOyHlagbFXs1H4WET0hQYXgya6h4ah4Fheh+nueTyzzX
	ONj+v+orSFGMaLBQ==
Date: Thu, 10 Dec 2020 20:25:52 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 Linus Walleij <linus.walleij@linaro.org>,
 Lee Jones <lee.jones@linaro.org>,
 linux-arm-kernel@lists.infradead.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 linux-gpio@vger.kernel.org,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Subject:
 [patch 16/30] mfd: ab8500-debugfs: Remove the racy fiddling with irq_desc
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

First of all, drivers have absolutely no business digging into the internals
of an irq descriptor. That's core code and subject to change. All of this
information is readily available via /proc/interrupts in a safe and race-free
way.

Remove the inspection code, which is a blatant violation of subsystem
boundaries and is racy against concurrent modifications of the interrupt
descriptor.

Print the irq line instead so the information can be looked up in a sane
way in /proc/interrupts.
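For reference, the per-line information which the removed code assembled can
be picked out of a /proc/interrupts line in user space. A minimal sketch,
assuming the conventional field layout (irq number first, action name last);
the sample line in the comment is illustrative, not real output:

```c
#include <stdio.h>
#include <string.h>

/*
 * Userspace sketch (illustrative, not kernel code): extract the two
 * fields the removed debugfs code duplicated -- the irq number and the
 * action name -- from a /proc/interrupts line such as
 * " 34:   1205    980   ab8500   2 Edge  ab8500-gpadc".
 * The assumed layout is "<irq>: <per-cpu counts> <chip> <hwirq> <type>
 * <action>", with the action name as the last field.
 */
static int parse_irq_line(const char *line, int *irq,
			  char *action, size_t len)
{
	const char *p;

	/* Line starts with the irq number followed by a colon. */
	if (sscanf(line, " %d:", irq) != 1)
		return -1;

	/* Action name is conventionally the last field on the line. */
	p = strrchr(line, ' ');
	if (!p)
		return -1;
	snprintf(action, len, "%s", p + 1);
	return 0;
}
```

Feeding it the sample line above yields irq 34 with action `ab8500-gpadc`.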

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: Lee Jones <lee.jones@linaro.org>
Cc: linux-arm-kernel@lists.infradead.org
---
 drivers/mfd/ab8500-debugfs.c |   18 +++---------------
 1 file changed, 3 insertions(+), 15 deletions(-)

--- a/drivers/mfd/ab8500-debugfs.c
+++ b/drivers/mfd/ab8500-debugfs.c
@@ -1513,24 +1513,12 @@ static int ab8500_interrupts_show(struct
 {
 	int line;
 
-	seq_puts(s, "name: number:  number of: wake:\n");
+	seq_puts(s, "name: number: irq: number of: wake:\n");
 
 	for (line = 0; line < num_interrupt_lines; line++) {
-		struct irq_desc *desc = irq_to_desc(line + irq_first);
-
-		seq_printf(s, "%3i:  %6i %4i",
+		seq_printf(s, "%3i:  %6i %4i %4i\n",
 			   line,
+			   line + irq_first,
 			   num_interrupts[line],
 			   num_wake_interrupts[line]);
-
-		if (desc && desc->name)
-			seq_printf(s, "-%-8s", desc->name);
-		if (desc && desc->action) {
-			struct irqaction *action = desc->action;
-
-			seq_printf(s, "  %s", action->name);
-			while ((action = action->next) != NULL)
-				seq_printf(s, ", %s", action->name);
-		}
-		seq_putc(s, '\n');
 	}



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:43:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:43:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49734.88024 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRqC-000073-QJ; Thu, 10 Dec 2020 19:43:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49734.88024; Thu, 10 Dec 2020 19:43:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRqC-00006r-L6; Thu, 10 Dec 2020 19:43:04 +0000
Received: by outflank-mailman (input) for mailman id 49734;
 Thu, 10 Dec 2020 19:43:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRqA-0007NY-IT
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:43:02 +0000
Received: from galois.linutronix.de (unknown [2a0a:51c0:0:12e:550::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e9ad1c39-8968-41d0-afa8-7407fe036899;
 Thu, 10 Dec 2020 19:42:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e9ad1c39-8968-41d0-afa8-7407fe036899
Message-Id: <20201210194043.957046529@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629352;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=aHjBasf7kS/mugPgGoqzaKHWX8S9MZAhVf8pOv9wzqI=;
	b=hIJe5uDFg4eKtha16rd5RMgP75gihkgyuo+wu9nJj9lb1QxB+dyj8VagUmUhTQC26INyeG
	8XIXTkbYJX+KGxPBwbTjWFL8gH8tKgvYA0sd+Cbqs0pmZohafXQRaBxZ4X0LrRHAFHI+fK
	34Ah/yU2AM+9WMw+oDh09bmBmxCAcLljD/Ywf7QGigq/72RSIaYCWXy9MRJcPUHTzSLEcU
	hQensxz9JJtZ3t/2uwIx4+1Y2fv0/x2+3nAGVvooEgtnvb2OUreKFZUgUZSfzBs+FE+M45
	5E1wbC+M9uufcsBkYfGEAcb5u4V11lC9yi8KNKp2A+CRv838rNvetMOidgM8lg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629352;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=aHjBasf7kS/mugPgGoqzaKHWX8S9MZAhVf8pOv9wzqI=;
	b=A/zWaVNR4ntcGYc6FuYZaabTLVIc+ZQNDV6wELc3l8+V7v8iKaL/rxgSQB4qy+gMpZm5QC
	1is0DPvmHcXJQvCg==
Date: Thu, 10 Dec 2020 20:25:50 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Subject: [patch 14/30] drm/i915/pmu: Replace open coded kstat_irqs() copy
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

Driver code has no business with the internals of the irq descriptor.

Aside from that, the count is per interrupt line and therefore includes
interrupts from other devices which share the interrupt line and are not
handled by the graphics driver.

Replace it with a PMU-private counter which only counts interrupts which
originate from the graphics card.

To avoid atomics or heuristics of some sort, make the counter field
'unsigned long'. That limits the count to 4e9 on 32bit, which is a lot, and
postprocessing can easily deal with the occasional wraparound.
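The counting pattern boils down to a plain counter updated with WRITE_ONCE()
and read with READ_ONCE(). A minimal userspace sketch of that pattern; the
macro definitions and struct names here are simplified stand-ins, not the
kernel's:

```c
/*
 * Userspace sketch of the lock-free per-device interrupt counter.
 * These macro definitions are simplified stand-ins for the kernel's:
 * the volatile cast forces a single untorn store/load, no atomics.
 */
#define WRITE_ONCE(x, val) (*(volatile __typeof__(x) *)&(x) = (val))
#define READ_ONCE(x)       (*(volatile __typeof__(x) *)&(x))

struct pmu_sketch {
	unsigned long irq_count;	/* written by the irq handler only */
};

/* Handler side: count only interrupts the device actually raised. */
static void pmu_irq_stats(struct pmu_sketch *pmu, int handled)
{
	if (!handled)
		return;
	/* A clever compiler turns this into a single INC. */
	WRITE_ONCE(pmu->irq_count, pmu->irq_count + 1);
}

/* Reader side: one untorn load; the occasional 32bit wraparound is
 * left to postprocessing, as the changelog argues. */
static unsigned long pmu_irq_count(struct pmu_sketch *pmu)
{
	return READ_ONCE(pmu->irq_count);
}
```

Since only the handler writes and only the PMU read side reads, a torn-free
store and load is all the consistency the use case needs.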

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: intel-gfx@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org
---
 drivers/gpu/drm/i915/i915_irq.c |   34 ++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_pmu.c |   18 +-----------------
 drivers/gpu/drm/i915/i915_pmu.h |    8 ++++++++
 3 files changed, 43 insertions(+), 17 deletions(-)

--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -60,6 +60,24 @@
  * and related files, but that will be described in separate chapters.
  */
 
+/*
+ * Interrupt statistic for PMU. Increments the counter only if the
+ * interrupt originated from the GPU so interrupts from a device which
+ * shares the interrupt line are not accounted.
+ */
+static inline void pmu_irq_stats(struct drm_i915_private *priv,
+				 irqreturn_t res)
+{
+	if (unlikely(res != IRQ_HANDLED))
+		return;
+
+	/*
+	 * A clever compiler translates that into INC. A not so clever one
+	 * should at least prevent store tearing.
+	 */
+	WRITE_ONCE(priv->pmu.irq_count, priv->pmu.irq_count + 1);
+}
+
 typedef bool (*long_pulse_detect_func)(enum hpd_pin pin, u32 val);
 
 static const u32 hpd_ilk[HPD_NUM_PINS] = {
@@ -1599,6 +1617,8 @@ static irqreturn_t valleyview_irq_handle
 		valleyview_pipestat_irq_handler(dev_priv, pipe_stats);
 	} while (0);
 
+	pmu_irq_stats(dev_priv, ret);
+
 	enable_rpm_wakeref_asserts(&dev_priv->runtime_pm);
 
 	return ret;
@@ -1676,6 +1696,8 @@ static irqreturn_t cherryview_irq_handle
 		valleyview_pipestat_irq_handler(dev_priv, pipe_stats);
 	} while (0);
 
+	pmu_irq_stats(dev_priv, ret);
+
 	enable_rpm_wakeref_asserts(&dev_priv->runtime_pm);
 
 	return ret;
@@ -2103,6 +2125,8 @@ static irqreturn_t ilk_irq_handler(int i
 	if (sde_ier)
 		raw_reg_write(regs, SDEIER, sde_ier);
 
+	pmu_irq_stats(i915, ret);
+
 	/* IRQs are synced during runtime_suspend, we don't require a wakeref */
 	enable_rpm_wakeref_asserts(&i915->runtime_pm);
 
@@ -2419,6 +2443,8 @@ static irqreturn_t gen8_irq_handler(int
 
 	gen8_master_intr_enable(regs);
 
+	pmu_irq_stats(dev_priv, IRQ_HANDLED);
+
 	return IRQ_HANDLED;
 }
 
@@ -2514,6 +2540,8 @@ static __always_inline irqreturn_t
 
 	gen11_gu_misc_irq_handler(gt, gu_misc_iir);
 
+	pmu_irq_stats(i915, IRQ_HANDLED);
+
 	return IRQ_HANDLED;
 }
 
@@ -3688,6 +3716,8 @@ static irqreturn_t i8xx_irq_handler(int
 		i8xx_pipestat_irq_handler(dev_priv, iir, pipe_stats);
 	} while (0);
 
+	pmu_irq_stats(dev_priv, ret);
+
 	enable_rpm_wakeref_asserts(&dev_priv->runtime_pm);
 
 	return ret;
@@ -3796,6 +3826,8 @@ static irqreturn_t i915_irq_handler(int
 		i915_pipestat_irq_handler(dev_priv, iir, pipe_stats);
 	} while (0);
 
+	pmu_irq_stats(dev_priv, ret);
+
 	enable_rpm_wakeref_asserts(&dev_priv->runtime_pm);
 
 	return ret;
@@ -3941,6 +3973,8 @@ static irqreturn_t i965_irq_handler(int
 		i965_pipestat_irq_handler(dev_priv, iir, pipe_stats);
 	} while (0);
 
+	pmu_irq_stats(dev_priv, IRQ_HANDLED);
+
 	enable_rpm_wakeref_asserts(&dev_priv->runtime_pm);
 
 	return ret;
--- a/drivers/gpu/drm/i915/i915_pmu.c
+++ b/drivers/gpu/drm/i915/i915_pmu.c
@@ -423,22 +423,6 @@ static enum hrtimer_restart i915_sample(
 	return HRTIMER_RESTART;
 }
 
-static u64 count_interrupts(struct drm_i915_private *i915)
-{
-	/* open-coded kstat_irqs() */
-	struct irq_desc *desc = irq_to_desc(i915->drm.pdev->irq);
-	u64 sum = 0;
-	int cpu;
-
-	if (!desc || !desc->kstat_irqs)
-		return 0;
-
-	for_each_possible_cpu(cpu)
-		sum += *per_cpu_ptr(desc->kstat_irqs, cpu);
-
-	return sum;
-}
-
 static void i915_pmu_event_destroy(struct perf_event *event)
 {
 	struct drm_i915_private *i915 =
@@ -581,7 +565,7 @@ static u64 __i915_pmu_event_read(struct
 				   USEC_PER_SEC /* to MHz */);
 			break;
 		case I915_PMU_INTERRUPTS:
-			val = count_interrupts(i915);
+			val = READ_ONCE(pmu->irq_count);
 			break;
 		case I915_PMU_RC6_RESIDENCY:
 			val = get_rc6(&i915->gt);
--- a/drivers/gpu/drm/i915/i915_pmu.h
+++ b/drivers/gpu/drm/i915/i915_pmu.h
@@ -108,6 +108,14 @@ struct i915_pmu {
 	 */
 	ktime_t sleep_last;
 	/**
+	 * @irq_count: Number of interrupts
+	 *
+	 * Intentionally unsigned long to avoid atomics or heuristics on 32bit.
+	 * 4e9 interrupts are a lot and postprocessing can easily deal with an
+	 * occasional wraparound. It's 32bit after all.
+	 */
+	unsigned long irq_count;
+	/**
 	 * @events_attr_group: Device events attribute group.
 	 */
 	struct attribute_group events_attr_group;



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:51:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:51:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49767.88036 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRxu-0001fQ-3w; Thu, 10 Dec 2020 19:51:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49767.88036; Thu, 10 Dec 2020 19:51:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRxt-0001f0-V3; Thu, 10 Dec 2020 19:51:01 +0000
Received: by outflank-mailman (input) for mailman id 49767;
 Thu, 10 Dec 2020 19:51:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRqo-0007NY-J8
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:43:42 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2580f224-7954-42ab-86fd-075f962e19d9;
 Thu, 10 Dec 2020 19:42:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2580f224-7954-42ab-86fd-075f962e19d9
Message-Id: <20201210194044.876342330@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629363;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=55TI7XJ63kr4B6KmttDRntIShtWEHQxTpDKUCCh7MMo=;
	b=DfHk2d4rsQCFRfxdAyxTdTyPSpwg7z0Sncg8I+9KfdVXncJCTb3XmwedEonMAdLY68Ijgw
	OqDECUKDGXVJ2e3I3uLB0GGar87pY3Yal2dFSEkuxLqu7+aFvat4rMB4RZqE4KrUhBk+R3
	gBmBzJxqdxsXIuyqJJAGw8Y30tyckVylqCLBgGNvi3gs7yKm4ZK17JvWGG44C/46sWyxXQ
	Ej9ZSopIvaJHIfeR+E9sAafcWAL7WpVPwj+5hDUZlquKEkpvZ61FA9B4DGvqfEnbmzwFaO
	eXj8bg+XHOWux9C1y9WQO9G2xEkTNlYuLikm/MUzYuUvhCxBqpZY2adGCKTIQA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629363;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=55TI7XJ63kr4B6KmttDRntIShtWEHQxTpDKUCCh7MMo=;
	b=eULJnI7rz5dW6ohKtM1f3ZResXMDuJkA8Ky9aCIj2PXNOKCWiuy+1w37KELJ76tMak6jIr
	Gp7eoIGkDj0U4pDA==
Date: Thu, 10 Dec 2020 20:25:59 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Subject: [patch 23/30] net/mlx5: Use effective interrupt affinity
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

Using the interrupt affinity mask to check locality does not really work
well on architectures which support effective affinity masks.

The affinity mask is either the system-wide default or set by user space,
but the architecture can, or even must, reduce the mask to the effective
set, which means that checking the affinity mask itself says nothing
reliable about the actual target CPUs.
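The difference can be sketched with plain bitmasks standing in for
struct cpumask; all names below are illustrative, not kernel API:

```c
/*
 * Sketch of configured vs. effective affinity. The configured mask is
 * what user space (or the system default) requested; the architecture
 * may narrow it to an effective subset, e.g. x86 steers each interrupt
 * to a single target CPU. Names here are illustrative stand-ins.
 */
struct fake_irq {
	unsigned long configured_mask;	/* what was requested */
	unsigned long effective_mask;	/* where the irq can actually fire */
};

static int cpu_in_mask(unsigned long mask, int cpu)
{
	return (int)((mask >> cpu) & 1UL);
}

/* Locality check done right: consult the effective mask. */
static int irq_is_local(const struct fake_irq *irq, int cpu)
{
	return cpu_in_mask(irq->effective_mask, cpu);
}
```

With a configured mask of 0xf (CPUs 0-3) narrowed by the architecture to an
effective mask of 0x4 (CPU 2 only), testing the configured mask on CPU 0
wrongly reports the interrupt as local; the effective mask does not.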

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Saeed Mahameed <saeedm@nvidia.com>
Cc: Leon Romanovsky <leon@kernel.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: netdev@vger.kernel.org
Cc: linux-rdma@vger.kernel.org
---
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -1998,7 +1998,7 @@ static int mlx5e_open_channel(struct mlx
 	c->num_tc   = params->num_tc;
 	c->xdp      = !!params->xdp_prog;
 	c->stats    = &priv->channel_stats[ix].ch;
-	c->aff_mask = irq_get_affinity_mask(irq);
+	c->aff_mask = irq_get_effective_affinity_mask(irq);
 	c->lag_port = mlx5e_enumerate_lag_port(priv->mdev, ix);
 
 	netif_napi_add(netdev, &c->napi, mlx5e_napi_poll, 64);



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:51:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:51:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49769.88050 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRxu-0001gu-Pk; Thu, 10 Dec 2020 19:51:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49769.88050; Thu, 10 Dec 2020 19:51:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRxu-0001gT-Gv; Thu, 10 Dec 2020 19:51:02 +0000
Received: by outflank-mailman (input) for mailman id 49769;
 Thu, 10 Dec 2020 19:51:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRqe-0007NY-It
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:43:32 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d13a7557-b416-4829-9e73-f83499976b25;
 Thu, 10 Dec 2020 19:42:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d13a7557-b416-4829-9e73-f83499976b25
Message-Id: <20201210194044.769458162@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629362;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=VOaf/d7D7QtaaC+WU1sXZf5eTJVDFk9LK/s7Au7CufU=;
	b=ebvAKnU9U5xxsuTyZRRyBD/pSxlAzqV321cu/1PSB+S30uzHIXKjKJyuLWtBWcYVZXlybn
	NBr9KiI1e9paQHPvIfvkgD6mJDZPhiGb5TN5PvcTekgbiahpQ8LQMlSJSX4xpUxk9f2Ol5
	F8oNCSe1H+ais24JO9t2IwvShGuXdvh8ES7plOqYyAwLRKf0TojyhsFUMV5kjVjEtAGPaz
	UOL8zFfw66+Fs8FOPny2ek7maFNW7ZTVUGseqbCjWR7MjO0q/MLzWc7S4CWwTTyNrDt17N
	pAxkdJedoxJwP4qnw1l3YgNebMwtGsjkmDdMWwXGJEDNlPXObkHeBB1kSiRQpQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629362;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=VOaf/d7D7QtaaC+WU1sXZf5eTJVDFk9LK/s7Au7CufU=;
	b=W/Zofh+C9ibp8KsiLJ6iyTc3jKq0LDlUAeV1W45RXNpyPicUPyvAT+QSfrk4IRw9rtLo2E
	zAcfM77lXBTE8oDg==
Date: Thu, 10 Dec 2020 20:25:58 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Subject: [patch 22/30] net/mlx5: Replace irq_to_desc() abuse
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

No driver has any business with the internals of an interrupt
descriptor. Storing a pointer to it just to use yet another helper at the
actual usage site to retrieve the affinity mask is creative at best. Just
because C does not allow encapsulation does not mean that the kernel has no
limits.

Retrieve a pointer to the affinity mask itself and use that. It still uses
an interface which is usually not meant for random drivers, but it is
definitely less hideous than the previous hack.
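The shape of the change can be sketched with illustrative stand-ins for the
mlx5 structures (none of these names are the real driver API): resolve the
mask pointer once at setup and only test membership in the hot path:

```c
#include <stddef.h>

/*
 * Sketch of the refactor. Instead of stashing a pointer to the core's
 * interrupt descriptor and digging through it on every NAPI poll,
 * resolve the affinity mask pointer once at channel-open time.
 * A plain unsigned long stands in for struct cpumask.
 */
struct fake_core_desc {
	unsigned long affinity;		/* core-internal state */
	/* ... plenty of other fields a driver must not touch ... */
};

struct fake_channel {
	const unsigned long *aff_mask;	/* cached at open time */
};

static void channel_open(struct fake_channel *c, struct fake_core_desc *desc)
{
	/* One subsystem-boundary crossing at setup, none per poll. */
	c->aff_mask = &desc->affinity;
}

/* Hot path: a single read-only membership test against the cached mask. */
static int channel_no_affinity_change(const struct fake_channel *c, int cpu)
{
	return (int)((*c->aff_mask >> cpu) & 1UL);
}
```

The hot path now touches a single read-only pointer the driver owns, rather
than walking core-internal descriptor state on every poll.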

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h      |    2 +-
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c |    2 +-
 drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c |    6 +-----
 3 files changed, 3 insertions(+), 7 deletions(-)

--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -669,7 +669,7 @@ struct mlx5e_channel {
 	spinlock_t                 async_icosq_lock;
 
 	/* data path - accessed per napi poll */
-	struct irq_desc *irq_desc;
+	const struct cpumask	  *aff_mask;
 	struct mlx5e_ch_stats     *stats;
 
 	/* control */
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -1998,7 +1998,7 @@ static int mlx5e_open_channel(struct mlx
 	c->num_tc   = params->num_tc;
 	c->xdp      = !!params->xdp_prog;
 	c->stats    = &priv->channel_stats[ix].ch;
-	c->irq_desc = irq_to_desc(irq);
+	c->aff_mask = irq_get_affinity_mask(irq);
 	c->lag_port = mlx5e_enumerate_lag_port(priv->mdev, ix);
 
 	netif_napi_add(netdev, &c->napi, mlx5e_napi_poll, 64);
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
@@ -40,12 +40,8 @@
 static inline bool mlx5e_channel_no_affinity_change(struct mlx5e_channel *c)
 {
 	int current_cpu = smp_processor_id();
-	const struct cpumask *aff;
-	struct irq_data *idata;
 
-	idata = irq_desc_get_irq_data(c->irq_desc);
-	aff = irq_data_get_affinity_mask(idata);
-	return cpumask_test_cpu(current_cpu, aff);
+	return cpumask_test_cpu(current_cpu, c->aff_mask);
 }
 
 static void mlx5e_handle_tx_dim(struct mlx5e_txqsq *sq)



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:51:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:51:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49773.88077 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRxw-0001jl-5y; Thu, 10 Dec 2020 19:51:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49773.88077; Thu, 10 Dec 2020 19:51:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRxv-0001iv-Lc; Thu, 10 Dec 2020 19:51:03 +0000
Received: by outflank-mailman (input) for mailman id 49773;
 Thu, 10 Dec 2020 19:51:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRrD-0007NY-K8
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:44:07 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dbc5972f-1509-41c9-b558-1642c1fd78bf;
 Thu, 10 Dec 2020 19:42:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dbc5972f-1509-41c9-b558-1642c1fd78bf
Message-Id: <20201210194045.250321315@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629368;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=h+/J8p7baJr0DOpzZw5pWLEFMQkgQKdthJegqn8/Z3Y=;
	b=ViQ26TbnUeY8qEjEzYulRVnhFAoB9EDH9YUPDvEiFg//dnXloI1oen9eNgmVDH3RmZaBn0
	0l0hb/eh9AtQOqQOiBnzJtoucvR/Mgcwm3gjLtIH7WkV3WksbFVdOMHzNi5Zj6EnqfiQBo
	j3AWozoE82ZIwe2qbFS8RFleThcs9Bt3OInvffx5HlfgL9KUZRRlg7kyEJZxqWJqWBqEXh
	aZpEesZBBg55CfasJSjlzL/0bXPLxp/DGSRYtaZAfSC1smsroPfxhOGgzqx3F6MyhNadS8
	HUk9gvp8N0w4LGaQVpf17rVe2GplhulXqQJFiAihQg9NPYrVAz0L2MOk4uIbBw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629368;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=h+/J8p7baJr0DOpzZw5pWLEFMQkgQKdthJegqn8/Z3Y=;
	b=VbJwfVLRzH/9JhiMalVLwKmOuULPqzvXaWNQJQDqIexhTGvkL8AL9yE+2ch4ltaw7RfVm0
	CZcollJCruWADiAA==
Date: Thu, 10 Dec 2020 20:26:03 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>
Subject:
 [patch 27/30] xen/events: Only force affinity mask for percpu interrupts
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

All event channel setups bind the interrupt to CPU0, or to the target CPU
in the case of percpu interrupts, and overwrite the affinity mask with the
corresponding cpumask. That does not make sense.

The Xen implementation of irqchip::irq_set_affinity() already picks a
single target CPU out of the affinity mask and stores the actual target in
the effective CPU mask, so destroying the user-chosen affinity mask, which
might contain more than one CPU, is wrong.

Change the implementation so that the channel is bound to CPU0 at the Xen
level and leave the affinity mask alone. At interrupt startup the affinity
will be assigned out of the affinity mask and the Xen binding will be
updated. Keep the enforcement only for real percpu interrupts.

On resume the overwrite is not required either, because info->cpu and the
affinity mask are still the same as at the time of suspend. The same
applies to rebind_evtchn_irq().

This also prepares for proper interrupt spreading.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org
---
 drivers/xen/events/events_base.c |   42 ++++++++++++++++++++++++++-------------
 1 file changed, 28 insertions(+), 14 deletions(-)

--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -433,15 +433,20 @@ static bool pirq_needs_eoi_flag(unsigned
 	return info->u.pirq.flags & PIRQ_NEEDS_EOI;
 }
 
-static void bind_evtchn_to_cpu(evtchn_port_t evtchn, unsigned int cpu)
+static void bind_evtchn_to_cpu(evtchn_port_t evtchn, unsigned int cpu,
+			       bool force_affinity)
 {
 	int irq = get_evtchn_to_irq(evtchn);
 	struct irq_info *info = info_for_irq(irq);
 
 	BUG_ON(irq == -1);
-#ifdef CONFIG_SMP
-	cpumask_copy(irq_get_affinity_mask(irq), cpumask_of(cpu));
-#endif
+
+	if (IS_ENABLED(CONFIG_SMP) && force_affinity) {
+		cpumask_copy(irq_get_affinity_mask(irq), cpumask_of(cpu));
+		cpumask_copy(irq_get_effective_affinity_mask(irq),
+			     cpumask_of(cpu));
+	}
+
 	xen_evtchn_port_bind_to_cpu(evtchn, cpu, info->cpu);
 
 	info->cpu = cpu;
@@ -788,7 +793,7 @@ static unsigned int __startup_pirq(unsig
 		goto err;
 
 	info->evtchn = evtchn;
-	bind_evtchn_to_cpu(evtchn, 0);
+	bind_evtchn_to_cpu(evtchn, 0, false);
 
 	rc = xen_evtchn_port_setup(evtchn);
 	if (rc)
@@ -1107,8 +1112,8 @@ static int bind_evtchn_to_irq_chip(evtch
 			irq = ret;
 			goto out;
 		}
-		/* New interdomain events are bound to VCPU 0. */
-		bind_evtchn_to_cpu(evtchn, 0);
+		/* New interdomain events are initially bound to VCPU 0. */
+		bind_evtchn_to_cpu(evtchn, 0, false);
 	} else {
 		struct irq_info *info = info_for_irq(irq);
 		WARN_ON(info == NULL || info->type != IRQT_EVTCHN);
@@ -1156,7 +1161,11 @@ static int bind_ipi_to_irq(unsigned int
 			irq = ret;
 			goto out;
 		}
-		bind_evtchn_to_cpu(evtchn, cpu);
+		/*
+		 * Force the affinity mask to the target CPU so proc shows
+		 * the correct target.
+		 */
+		bind_evtchn_to_cpu(evtchn, cpu, true);
 	} else {
 		struct irq_info *info = info_for_irq(irq);
 		WARN_ON(info == NULL || info->type != IRQT_IPI);
@@ -1269,7 +1278,11 @@ int bind_virq_to_irq(unsigned int virq,
 			goto out;
 		}
 
-		bind_evtchn_to_cpu(evtchn, cpu);
+		/*
+		 * Force the affinity mask for percpu interrupts so proc
+		 * shows the correct target.
+		 */
+		bind_evtchn_to_cpu(evtchn, cpu, percpu);
 	} else {
 		struct irq_info *info = info_for_irq(irq);
 		WARN_ON(info == NULL || info->type != IRQT_VIRQ);
@@ -1634,8 +1647,7 @@ void rebind_evtchn_irq(evtchn_port_t evt
 
 	mutex_unlock(&irq_mapping_update_lock);
 
-        bind_evtchn_to_cpu(evtchn, info->cpu);
-	irq_set_affinity(irq, cpumask_of(info->cpu));
+	bind_evtchn_to_cpu(evtchn, info->cpu, false);
 
 	/* Unmask the event channel. */
 	enable_irq(irq);
@@ -1669,7 +1681,7 @@ static int xen_rebind_evtchn_to_cpu(evtc
 	 * it, but don't do the xenlinux-level rebind in that case.
 	 */
 	if (HYPERVISOR_event_channel_op(EVTCHNOP_bind_vcpu, &bind_vcpu) >= 0)
-		bind_evtchn_to_cpu(evtchn, tcpu);
+		bind_evtchn_to_cpu(evtchn, tcpu, false);
 
 	if (!masked)
 		unmask_evtchn(evtchn);
@@ -1798,7 +1810,8 @@ static void restore_cpu_virqs(unsigned i
 
 		/* Record the new mapping. */
 		(void)xen_irq_info_virq_setup(cpu, irq, evtchn, virq);
-		bind_evtchn_to_cpu(evtchn, cpu);
+		/* The affinity mask is still valid */
+		bind_evtchn_to_cpu(evtchn, cpu, false);
 	}
 }
 
@@ -1823,7 +1836,8 @@ static void restore_cpu_ipis(unsigned in
 
 		/* Record the new mapping. */
 		(void)xen_irq_info_ipi_setup(cpu, irq, evtchn, ipi);
-		bind_evtchn_to_cpu(evtchn, cpu);
+		/* The affinity mask is still valid */
+		bind_evtchn_to_cpu(evtchn, cpu, false);
 	}
 }
 



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:51:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:51:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49772.88064 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRxv-0001hs-Eg; Thu, 10 Dec 2020 19:51:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49772.88064; Thu, 10 Dec 2020 19:51:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRxu-0001hM-Vv; Thu, 10 Dec 2020 19:51:02 +0000
Received: by outflank-mailman (input) for mailman id 49772;
 Thu, 10 Dec 2020 19:51:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRr3-0007NY-Jj
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:43:57 +0000
Received: from galois.linutronix.de (unknown [2a0a:51c0:0:12e:550::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1eb2b85e-8e26-421b-abcf-ae18cb7e2d27;
 Thu, 10 Dec 2020 19:42:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1eb2b85e-8e26-421b-abcf-ae18cb7e2d27
Message-Id: <20201210194044.972064156@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629364;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=9YlyTwMk4Mbfk7ZW8GDEykvPqsqe5qocY/DN2nXVexA=;
	b=hVw3OYCTWsMz0Aw61IHODSqZTydWtjxPDClGGC7iGCBQqFWidRe56rShTgIEwhvft2xxVQ
	M/YlcKaOAyEQXPl7Q3R3E32QqIuTfXQ19MYqHX2sGJ83TqbX6MyS7Wtr2e++pcbRAgNdOI
	cVxo/p5uF7/AAg7bzLR09pxbrsczA1ufWBrFzi+yYRMMnxIdHGVV8b25XSGFqMhhM/ZLfS
	qYZkmMDMAVG+a+Dk/ioVTw4ujul3+CsBK60KEwLQOzocKe9FTUxmnyQhV7MVP4FyuTzzdh
	Mx0/OtbsUm0nXPbmcxdV1KpjnJXXPfi7+2ISytWZAwaHiml7fGtt4XCeI+DnvA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629364;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=9YlyTwMk4Mbfk7ZW8GDEykvPqsqe5qocY/DN2nXVexA=;
	b=TDoY6kDn3fqOFjjhxcsd3OY1zWrz3aQ6qRhKrymB7SphyJoO4N2U2VRY7uCgEKkn/QVPql
	uhJWWeP2DlMdL5CA==
Date: Thu, 10 Dec 2020 20:26:00 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>
Subject: [patch 24/30] xen/events: Remove unused bind_evtchn_to_irq_lateeoi()
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org
---
 drivers/xen/events/events_base.c |    6 ------
 1 file changed, 6 deletions(-)

--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -1132,12 +1132,6 @@ int bind_evtchn_to_irq(evtchn_port_t evt
 }
 EXPORT_SYMBOL_GPL(bind_evtchn_to_irq);
 
-int bind_evtchn_to_irq_lateeoi(evtchn_port_t evtchn)
-{
-	return bind_evtchn_to_irq_chip(evtchn, &xen_lateeoi_chip);
-}
-EXPORT_SYMBOL_GPL(bind_evtchn_to_irq_lateeoi);
-
 static int bind_ipi_to_irq(unsigned int ipi, unsigned int cpu)
 {
 	struct evtchn_bind_ipi bind_ipi;



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:51:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:51:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49768.88043 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRxu-0001g0-F8; Thu, 10 Dec 2020 19:51:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49768.88043; Thu, 10 Dec 2020 19:51:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRxu-0001fj-78; Thu, 10 Dec 2020 19:51:02 +0000
Received: by outflank-mailman (input) for mailman id 49768;
 Thu, 10 Dec 2020 19:51:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRrN-0007NY-KY
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:44:17 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c030df19-1f76-46a7-946f-e083156fa1d2;
 Thu, 10 Dec 2020 19:42:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c030df19-1f76-46a7-946f-e083156fa1d2
Message-Id: <20201210194045.360198201@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629369;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=ZlPg9sxtWavWNstOnduWAE0LJ88d0safrspZWU+Bxkw=;
	b=K1DERQLLdTNr2zHRckKSlzdGpOu3K2YNmRf/Fd+Me9xL7e1+z/WEZj2/ktZthj5jq1hRc5
	H3K05V2lxe1VMwzID4A3YLY85G8CYSpDrmB064IUNRFZIZdMGhdgfMMXJmC2dwy27hx5dv
	+8AXgD/DGykOcBz1Xa8j76SRM06+jkTJXOvxx9wVIyXxp695f6gHDGbeeV9JlTl+CV+qJz
	iigE2xdn6yZ1PRpeA5+aMqV0fQZ1NATjbH3H6LTr2G8vwxub7OglAIb7biMKHi/AyccluD
	JT3nn4CWSg161dQ9jAMQEsqATs9zRWVAVD7sNovXx4fL9L9H47Prx2LPtWUjvw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629369;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=ZlPg9sxtWavWNstOnduWAE0LJ88d0safrspZWU+Bxkw=;
	b=5OkxueAP7qHY2hZdC3q6dEaDkreK0XDmJgigYgkjGtvPF1gC/y/J1hb+Pc7lUaZciQOAo7
	alJrfrmEbNVTr+Cw==
Date: Thu, 10 Dec 2020 20:26:04 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>
Subject: [patch 28/30] xen/events: Reduce irq_info::spurious_cnt storage size
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

To prepare for interrupt spreading, reduce the storage size of
irq_info::spurious_cnt to u8 so that the flag required by the spreading
logic will not increase the storage size of struct irq_info.

Protect the usage site against overflow.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org
---
 drivers/xen/events/events_base.c |    8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -95,7 +95,7 @@ struct irq_info {
 	struct list_head list;
 	struct list_head eoi_list;
 	short refcnt;
-	short spurious_cnt;
+	u8 spurious_cnt;
 	enum xen_irq_type type; /* type */
 	unsigned irq;
 	evtchn_port_t evtchn;   /* event channel */
@@ -528,8 +528,10 @@ static void xen_irq_lateeoi_locked(struc
 		return;
 
 	if (spurious) {
-		if ((1 << info->spurious_cnt) < (HZ << 2))
-			info->spurious_cnt++;
+		if ((1 << info->spurious_cnt) < (HZ << 2)) {
+			if (info->spurious_cnt != 0xFF)
+				info->spurious_cnt++;
+		}
 		if (info->spurious_cnt > 1) {
 			delay = 1 << (info->spurious_cnt - 2);
 			if (delay > HZ)



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:51:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:51:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49777.88092 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRxx-0001n3-8R; Thu, 10 Dec 2020 19:51:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49777.88092; Thu, 10 Dec 2020 19:51:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRxw-0001ma-TV; Thu, 10 Dec 2020 19:51:04 +0000
Received: by outflank-mailman (input) for mailman id 49777;
 Thu, 10 Dec 2020 19:51:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRqJ-0007OY-EH
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:43:11 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ee44cf22-97b2-4bc0-82d3-4a4bedc9f1a6;
 Thu, 10 Dec 2020 19:42:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ee44cf22-97b2-4bc0-82d3-4a4bedc9f1a6
Message-Id: <20201210194044.364211860@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629357;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=KHY7zLB5K6l5lfek87988opcUU+hIDNSDHLcEFMtdbE=;
	b=n90PZIqvfSG+SCpScanG1Bc3HG4388BY/dyhI0EaH1D5VR7ALSrd8Di/XFloXi9hIrAe0N
	2w3FtiOOS83MM1ENKBnmKIYjMGlFGK+8CqwAWA+3cn+4JTZrOFNQqz1w6/1pfc9dn229g5
	RxyxMUVWrnZNU+m4ITHF184nuNhmqfYSIV87jZv/HzJ2V1oJQzYeMWo1OTElMpaTWpCUk8
	uJoofHd7ipNCSNBQWtvCPnJySQsMZYd1ezWjVgB5vtUTP93BsKFGqlD/7O6EwVQkeOfVvC
	5nN2twyUqXClogpFmRmB9lY9+05jwyoSoCoyIBMb5KsyzJun61fUTTAt7Pca7g==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629357;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=KHY7zLB5K6l5lfek87988opcUU+hIDNSDHLcEFMtdbE=;
	b=z1bwpMRZ9X4kcUp+yA6PiwjKs/Xplaj9Bs6jEJYC/xJl+CtwyGu1dX/ChX9QURLpfQuyav
	KbgY/emP2bdsj/AQ==
Date: Thu, 10 Dec 2020 20:25:54 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Subject: [patch 18/30] PCI: xilinx-nwl: Use irq_data_get_irq_chip_data()
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

Going through a full irq descriptor lookup instead of using the proper
helper function, which provides direct access, is suboptimal.

In fact it _is_ wrong, because the chip callback needs the chip data which
is relevant for that chip, while the irq descriptor variant returns the
irq chip data of the top-level chip of a hierarchy. It does not matter in
this case because the chip is the top-level chip, but that doesn't make it
more correct.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Michal Simek <michal.simek@xilinx.com>
Cc: linux-pci@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
---
 drivers/pci/controller/pcie-xilinx-nwl.c |    8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

--- a/drivers/pci/controller/pcie-xilinx-nwl.c
+++ b/drivers/pci/controller/pcie-xilinx-nwl.c
@@ -379,13 +379,11 @@ static void nwl_pcie_msi_handler_low(str
 
 static void nwl_mask_leg_irq(struct irq_data *data)
 {
-	struct irq_desc *desc = irq_to_desc(data->irq);
-	struct nwl_pcie *pcie;
+	struct nwl_pcie *pcie = irq_data_get_irq_chip_data(data);
 	unsigned long flags;
 	u32 mask;
 	u32 val;
 
-	pcie = irq_desc_get_chip_data(desc);
 	mask = 1 << (data->hwirq - 1);
 	raw_spin_lock_irqsave(&pcie->leg_mask_lock, flags);
 	val = nwl_bridge_readl(pcie, MSGF_LEG_MASK);
@@ -395,13 +393,11 @@ static void nwl_mask_leg_irq(struct irq_
 
 static void nwl_unmask_leg_irq(struct irq_data *data)
 {
-	struct irq_desc *desc = irq_to_desc(data->irq);
-	struct nwl_pcie *pcie;
+	struct nwl_pcie *pcie = irq_data_get_irq_chip_data(data);
 	unsigned long flags;
 	u32 mask;
 	u32 val;
 
-	pcie = irq_desc_get_chip_data(desc);
 	mask = 1 << (data->hwirq - 1);
 	raw_spin_lock_irqsave(&pcie->leg_mask_lock, flags);
 	val = nwl_bridge_readl(pcie, MSGF_LEG_MASK);



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:51:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:51:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49779.88108 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRxz-0001tt-Nb; Thu, 10 Dec 2020 19:51:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49779.88108; Thu, 10 Dec 2020 19:51:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRxz-0001tW-HF; Thu, 10 Dec 2020 19:51:07 +0000
Received: by outflank-mailman (input) for mailman id 49779;
 Thu, 10 Dec 2020 19:51:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRqT-0007OY-EY
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:43:21 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6cc394ac-31c7-4a10-9ba6-c8a1e38799fb;
 Thu, 10 Dec 2020 19:42:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6cc394ac-31c7-4a10-9ba6-c8a1e38799fb
Message-Id: <20201210194044.672935978@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629361;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=Qj+nUJLINsvJ1wGGRtokQbSqazbntoaovZZiHCUfEoU=;
	b=mj3Kocu4reYCYLMxNnXCB7pRMj/V2SipP9aFrWWSpDZJXbRGUx/etWSEc4UA9axZe8vH6Z
	zCNVDaK1BEFqY4IWswwKyY8R7IgAyKxoeqPwRK9IXNNmGiGmnhY15xlempvvXuPFw2/rOc
	mTCNzl+Tvzsv8H1Zumi6u4b7ahs/w0J8EI2jeDxL0Y5ZADiAONcn7tXXZuRD+mnMyLrQkH
	8jP8OkOSyfio3wKvjr3Z/OVEvqX5ZlAghchZfn24t0pny2f/12ifma3CZVk+AVM0PtaOxp
	ZDpdmrDdGhcDUTfi8/G7SVOrzhh/vhKhIeACLrs9VfJAm1rl5JDinUE3lR6YsQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629361;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=Qj+nUJLINsvJ1wGGRtokQbSqazbntoaovZZiHCUfEoU=;
	b=BJ2Ps/crskKu+hAqmkYsokffZfoswWJHgtK94U0YlOuWVRLji6xjfgt0Ui2ooTzFtjd00H
	sqLSJBfAAw+mcaDw==
Date: Thu, 10 Dec 2020 20:25:57 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Subject: [patch 21/30] net/mlx4: Use effective interrupt affinity
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

Using the interrupt affinity mask for checking locality does not work well
on architectures which support effective affinity masks.

The affinity mask is either the system-wide default or set by user space,
but the architecture can, or even must, reduce the mask to the effective
set, which means that checking the affinity mask itself does not really
tell which CPUs the interrupt actually targets.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Tariq Toukan <tariqt@nvidia.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: netdev@vger.kernel.org
Cc: linux-rdma@vger.kernel.org
---
 drivers/net/ethernet/mellanox/mlx4/en_cq.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/net/ethernet/mellanox/mlx4/en_cq.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_cq.c
@@ -117,7 +117,7 @@ int mlx4_en_activate_cq(struct mlx4_en_p
 			assigned_eq = true;
 		}
 		irq = mlx4_eq_get_irq(mdev->dev, cq->vector);
-		cq->aff_mask = irq_get_affinity_mask(irq);
+		cq->aff_mask = irq_get_effective_affinity_mask(irq);
 	} else {
 		/* For TX we use the same irq per
 		ring we assigned for the RX    */



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:51:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:51:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49782.88119 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRy2-0001zm-0K; Thu, 10 Dec 2020 19:51:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49782.88119; Thu, 10 Dec 2020 19:51:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRy1-0001zW-Sc; Thu, 10 Dec 2020 19:51:09 +0000
Received: by outflank-mailman (input) for mailman id 49782;
 Thu, 10 Dec 2020 19:51:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRrS-0007NY-Kl
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:44:22 +0000
Received: from galois.linutronix.de (unknown [2a0a:51c0:0:12e:550::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b58d9fd9-7213-43af-b0be-ae6ae89e5ab1;
 Thu, 10 Dec 2020 19:42:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b58d9fd9-7213-43af-b0be-ae6ae89e5ab1
Message-Id: <20201210194045.457218278@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629371;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=DeAXpsHXHZePk9yJlBEnvP07GAeQwVu3hbsEAC1KvV4=;
	b=1ftX1NujPl+Xx3UaWEfCXjp+Nmg07+hPgK1Itu5O9SNBHoe9yl+RQXJ/2O1zp6IsfiLibV
	HNI3oNJClnToZKNFBhedsy/3BepLS3TXsVRVkXB5G9+wE4lEck2qZ5nIFKfa87g0ab144q
	Y8XYj/VugkwqMCvECW/TpEV1x4HCWrFdBzkevGfKqOFN0SaDM1/ov3Gwx/TBks3pXmGNrv
	6QLKNAZhpAG6K+WVzFwjUhr3/uI//AlEz6mL833Dkg8c8/wOmAQ4fMPk7JfBTna6uA13Ka
	n+R9468jt6R90wHee6UQ35MxaJ9NecvbD7sk9TpDtngv50OniDJk+BnWh4ozJQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629371;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=DeAXpsHXHZePk9yJlBEnvP07GAeQwVu3hbsEAC1KvV4=;
	b=RL8ukeyN4bqd9Ll57DRL2wfv9LwNl4AYd7GzKiPBhSvQTs7EEhPh2RseLXs/41XjAQfiFB
	9Mcn1n8WsG2pJ9Ag==
Date: Thu, 10 Dec 2020 20:26:05 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>
Subject: [patch 29/30] xen/events: Implement irq distribution
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

Keep track of the assignments of event channels to CPUs and select the
online CPU with the fewest assigned channels from the affinity mask which
is handed to irq_chip::irq_set_affinity() by the core code.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org
---
 drivers/xen/events/events_base.c |   72 ++++++++++++++++++++++++++++++++++-----
 1 file changed, 64 insertions(+), 8 deletions(-)

--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -96,6 +96,7 @@ struct irq_info {
 	struct list_head eoi_list;
 	short refcnt;
 	u8 spurious_cnt;
+	u8 is_accounted;
 	enum xen_irq_type type; /* type */
 	unsigned irq;
 	evtchn_port_t evtchn;   /* event channel */
@@ -161,6 +162,9 @@ static DEFINE_PER_CPU(int [NR_VIRQS], vi
 /* IRQ <-> IPI mapping */
 static DEFINE_PER_CPU(int [XEN_NR_IPIS], ipi_to_irq) = {[0 ... XEN_NR_IPIS-1] = -1};
 
+/* Event channel distribution data */
+static atomic_t channels_on_cpu[NR_CPUS];
+
 static int **evtchn_to_irq;
 #ifdef CONFIG_X86
 static unsigned long *pirq_eoi_map;
@@ -257,6 +261,32 @@ static void set_info_for_irq(unsigned in
 		irq_set_chip_data(irq, info);
 }
 
+/* Per CPU channel accounting */
+static void channels_on_cpu_dec(struct irq_info *info)
+{
+	if (!info->is_accounted)
+		return;
+
+	info->is_accounted = 0;
+
+	if (WARN_ON_ONCE(info->cpu >= nr_cpu_ids))
+		return;
+
+	WARN_ON_ONCE(!atomic_add_unless(&channels_on_cpu[info->cpu], -1 , 0));
+}
+
+static void channels_on_cpu_inc(struct irq_info *info)
+{
+	if (WARN_ON_ONCE(info->cpu >= nr_cpu_ids))
+		return;
+
+	if (WARN_ON_ONCE(!atomic_add_unless(&channels_on_cpu[info->cpu], 1,
+					    INT_MAX)))
+		return;
+
+	info->is_accounted = 1;
+}
+
 /* Constructors for packed IRQ information. */
 static int xen_irq_info_common_setup(struct irq_info *info,
 				     unsigned irq,
@@ -339,6 +369,7 @@ static void xen_irq_info_cleanup(struct
 {
 	set_evtchn_to_irq(info->evtchn, -1);
 	info->evtchn = 0;
+	channels_on_cpu_dec(info);
 }
 
 /*
@@ -449,7 +480,9 @@ static void bind_evtchn_to_cpu(evtchn_po
 
 	xen_evtchn_port_bind_to_cpu(evtchn, cpu, info->cpu);
 
+	channels_on_cpu_dec(info);
 	info->cpu = cpu;
+	channels_on_cpu_inc(info);
 }
 
 /**
@@ -622,11 +655,6 @@ static void xen_irq_init(unsigned irq)
 {
 	struct irq_info *info;
 
-#ifdef CONFIG_SMP
-	/* By default all event channels notify CPU#0. */
-	cpumask_copy(irq_get_affinity_mask(irq), cpumask_of(0));
-#endif
-
 	info = kzalloc(sizeof(*info), GFP_KERNEL);
 	if (info == NULL)
 		panic("Unable to allocate metadata for IRQ%d\n", irq);
@@ -1691,10 +1719,34 @@ static int xen_rebind_evtchn_to_cpu(evtc
 	return 0;
 }
 
+/*
+ * Find the CPU within @dest mask which has the least number of channels
+ * assigned. This is not precise as the per cpu counts can be modified
+ * concurrently.
+ */
+static unsigned int select_target_cpu(const struct cpumask *dest)
+{
+	unsigned int cpu, best_cpu = UINT_MAX, minch = UINT_MAX;
+
+	for_each_cpu_and(cpu, dest, cpu_online_mask) {
+		unsigned int curch = atomic_read(&channels_on_cpu[cpu]);
+
+		if (curch < minch) {
+			minch = curch;
+			best_cpu = cpu;
+		}
+	}
+
+	/* If this happens accounting is screwed up */
+	if (WARN_ON_ONCE(best_cpu == UINT_MAX))
+		best_cpu = cpumask_first(dest);
+	return best_cpu;
+}
+
 static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest,
 			    bool force)
 {
-	unsigned tcpu = cpumask_first_and(dest, cpu_online_mask);
+	unsigned int tcpu = select_target_cpu(dest);
 	int ret;
 
 	ret = xen_rebind_evtchn_to_cpu(evtchn_from_irq(data->irq), tcpu);
@@ -1922,8 +1974,12 @@ void xen_irq_resume(void)
 	xen_evtchn_resume();
 
 	/* No IRQ <-> event-channel mappings. */
-	list_for_each_entry(info, &xen_irq_list_head, list)
-		info->evtchn = 0; /* zap event-channel binding */
+	list_for_each_entry(info, &xen_irq_list_head, list) {
+		/* Zap event-channel binding */
+		info->evtchn = 0;
+		/* Adjust accounting */
+		channels_on_cpu_dec(info);
+	}
 
 	clear_evtchn_to_irq_all();
 



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:51:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:51:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49789.88132 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRy5-0002AN-MQ; Thu, 10 Dec 2020 19:51:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49789.88132; Thu, 10 Dec 2020 19:51:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRy5-00029z-Cx; Thu, 10 Dec 2020 19:51:13 +0000
Received: by outflank-mailman (input) for mailman id 49789;
 Thu, 10 Dec 2020 19:51:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRqn-0007OY-FR
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:43:41 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d1858590-d800-406f-bab2-abd0c9b2f6ca;
 Thu, 10 Dec 2020 19:42:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d1858590-d800-406f-bab2-abd0c9b2f6ca
Message-Id: <20201210194045.157601122@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629367;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=RU1bu/2cEIaQZD4XBTa1ORm36AT2+9xoAT9okJ4RH1c=;
	b=AhYGZMH5Lll3k86M+Mw+u/CMXrFP+z5OweedDSPLrQC7zMl71ejfF6LXNXRZTGD8kmOuJe
	qS19XHZYRwS6BqhVuvnoRC1T4ttKWNuaoSyT3BgYYN1BzFtZlPcOz+YQKy/43th5Y90u/R
	X6B6cmkRFkndNRXwbx2PLjfkgaDXN0K9HLJvSEGYF2Mwu+bniCBmPPPJ7kblKM0absRSol
	Xu6RJgcDTReN71V4H+Nn1MY4+vbA65bMSFOyl/jldJBTen7poJfNR1wRX6XEMJPR917pl1
	k5XFjcYwY2tULG78ylPnHOTKCe4dKJGK7v6roca6SRSYcgOrtkVCsvBrRQWZhw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629367;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=RU1bu/2cEIaQZD4XBTa1ORm36AT2+9xoAT9okJ4RH1c=;
	b=XdjqOHU/CtrxmjSRt3J35g254e3xJTlyAPxkG/hnQfineDJae9yPVorBnqLmfzdojIT+AV
	RtH7kedwWF+OweDQ==
Date: Thu, 10 Dec 2020 20:26:02 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>
Subject: [patch 26/30] xen/events: Use immediate affinity setting
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

There is absolutely no reason to mimic the x86 deferred affinity
setting. This mechanism is required to handle the hardware-induced issues
of the IO/APIC and MSI and is not in use when the interrupts are remapped.

XEN does not need this and can simply change the affinity from the calling
context. The core code invokes this with the interrupt descriptor lock held,
so it is fully serialized against any other operation.

Mark the interrupts with IRQ_MOVE_PCNTXT to disable the deferred affinity
setting. The conditional mask/unmask operation is already handled in
xen_rebind_evtchn_to_cpu().

This makes XEN on x86 use the same mechanics as on e.g. ARM64 where
deferred affinity setting is not required and not implemented and the code
path in the ack functions is compiled out.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org
---
 drivers/xen/events/events_base.c |   35 +++++++++--------------------------
 1 file changed, 9 insertions(+), 26 deletions(-)

--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -628,6 +628,11 @@ static void xen_irq_init(unsigned irq)
 	info->refcnt = -1;
 
 	set_info_for_irq(irq, info);
+	/*
+	 * Interrupt affinity setting can be immediate. No point
+	 * in delaying it until an interrupt is handled.
+	 */
+	irq_set_status_flags(irq, IRQ_MOVE_PCNTXT);
 
 	INIT_LIST_HEAD(&info->eoi_list);
 	list_add_tail(&info->list, &xen_irq_list_head);
@@ -739,18 +744,7 @@ static void eoi_pirq(struct irq_data *da
 	if (!VALID_EVTCHN(evtchn))
 		return;
 
-	if (unlikely(irqd_is_setaffinity_pending(data)) &&
-	    likely(!irqd_irq_disabled(data))) {
-		int masked = test_and_set_mask(evtchn);
-
-		clear_evtchn(evtchn);
-
-		irq_move_masked_irq(data);
-
-		if (!masked)
-			unmask_evtchn(evtchn);
-	} else
-		clear_evtchn(evtchn);
+	clear_evtchn(evtchn);
 
 	if (pirq_needs_eoi(data->irq)) {
 		rc = HYPERVISOR_physdev_op(PHYSDEVOP_eoi, &eoi);
@@ -1641,7 +1635,6 @@ void rebind_evtchn_irq(evtchn_port_t evt
 	mutex_unlock(&irq_mapping_update_lock);
 
         bind_evtchn_to_cpu(evtchn, info->cpu);
-	/* This will be deferred until interrupt is processed */
 	irq_set_affinity(irq, cpumask_of(info->cpu));
 
 	/* Unmask the event channel. */
@@ -1688,8 +1681,9 @@ static int set_affinity_irq(struct irq_d
 			    bool force)
 {
 	unsigned tcpu = cpumask_first_and(dest, cpu_online_mask);
-	int ret = xen_rebind_evtchn_to_cpu(evtchn_from_irq(data->irq), tcpu);
+	int ret;
 
+	ret = xen_rebind_evtchn_to_cpu(evtchn_from_irq(data->irq), tcpu);
 	if (!ret)
 		irq_data_update_effective_affinity(data, cpumask_of(tcpu));
 
@@ -1719,18 +1713,7 @@ static void ack_dynirq(struct irq_data *
 	if (!VALID_EVTCHN(evtchn))
 		return;
 
-	if (unlikely(irqd_is_setaffinity_pending(data)) &&
-	    likely(!irqd_irq_disabled(data))) {
-		int masked = test_and_set_mask(evtchn);
-
-		clear_evtchn(evtchn);
-
-		irq_move_masked_irq(data);
-
-		if (!masked)
-			unmask_evtchn(evtchn);
-	} else
-		clear_evtchn(evtchn);
+	clear_evtchn(evtchn);
 }
 
 static void mask_ack_dynirq(struct irq_data *data)



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:51:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:51:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49794.88144 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRy7-0002F8-Cj; Thu, 10 Dec 2020 19:51:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49794.88144; Thu, 10 Dec 2020 19:51:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRy6-0002Ed-Vj; Thu, 10 Dec 2020 19:51:14 +0000
Received: by outflank-mailman (input) for mailman id 49794;
 Thu, 10 Dec 2020 19:51:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRqP-0007NY-J0
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:43:17 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8a68b993-5ffe-439a-904f-8daff40c5728;
 Thu, 10 Dec 2020 19:42:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8a68b993-5ffe-439a-904f-8daff40c5728
Message-Id: <20201210194044.255887860@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629356;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=jUTg4Los1VXz05PCBCDed5HfB12tsGaOsCncsAJwQxU=;
	b=FXhZkYaxbEPy5U+kiVcqwCILOgmOw0n9WJerjpKGSfIyXXxkGyaPiRB5bUJw5/1GbQgK84
	3A3wne9ZskCGrHeBheov/UfaaJAXbhd6XuPaKkHQLZx3IXEKB8JWAfbiz9XVlErPWxhUds
	+BYgAnfA922lpGMT6q/hi+YidzMFXq4opliKvrzDffBcg9VA3r5bNxQxfXyUW8GfN7ZxEw
	tJHtK1YV9651CgrvRnAV+LL92Mn76NESJ20TqtKaDrLgHsHJdq2p3Erxm3ONWn5WeohH8D
	ygiFGZbw1MfYSBaQqVR0RpRK1HywIevKDTa8/tVw8wl8XT4OGsw2hu9prW3gNg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629356;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=jUTg4Los1VXz05PCBCDed5HfB12tsGaOsCncsAJwQxU=;
	b=jZYO6UEfS1fO8jTBQSg6h2JwtIawA4Pop19pbpdb83G7Islx2MDh7VNX5bpTao8FSicjDK
	BnzPlnU4fVkD+gBg==
Date: Thu, 10 Dec 2020 20:25:53 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Subject: [patch 17/30] NTB/msi: Use irq_has_action()
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

Use the proper core function.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jon Mason <jdmason@kudzu.us>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Allen Hubbe <allenbh@gmail.com>
Cc: linux-ntb@googlegroups.com
---
 drivers/ntb/msi.c |    4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

--- a/drivers/ntb/msi.c
+++ b/drivers/ntb/msi.c
@@ -282,15 +282,13 @@ int ntbm_msi_request_threaded_irq(struct
 				  struct ntb_msi_desc *msi_desc)
 {
 	struct msi_desc *entry;
-	struct irq_desc *desc;
 	int ret;
 
 	if (!ntb->msi)
 		return -EINVAL;
 
 	for_each_pci_msi_entry(entry, ntb->pdev) {
-		desc = irq_to_desc(entry->irq);
-		if (desc->action)
+		if (irq_has_action(entry->irq))
 			continue;
 
 		ret = devm_request_threaded_irq(&ntb->dev, entry->irq, handler,



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:51:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:51:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49801.88157 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRyB-0002QL-K4; Thu, 10 Dec 2020 19:51:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49801.88157; Thu, 10 Dec 2020 19:51:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRyB-0002Ps-7o; Thu, 10 Dec 2020 19:51:19 +0000
Received: by outflank-mailman (input) for mailman id 49801;
 Thu, 10 Dec 2020 19:51:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRqZ-0007NY-It
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:43:27 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4c6e5196-7580-4ef1-982b-c587c955aa15;
 Thu, 10 Dec 2020 19:42:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4c6e5196-7580-4ef1-982b-c587c955aa15
Message-Id: <20201210194044.580936243@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629359;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=KrhhIjbt5v6ZrRAi/JobWuMkwODXuYkNejeEJ8r5wYA=;
	b=O2jdDNolMgwdeqbdehl1p9TNp5CMFEegQIULELNgmYRztgIcL4c3f02Qf6SlZ5l3g40OIQ
	RgS4jJlX+w2xWwCBcvE8uOXoWkE+AICi/Nh5GaOHSq9qtJLjJ382ImsNRhJ2OXUKqHjpIF
	I8MjuSPpVTKBCKMHB0jWJJqdHHehIe0RI7sKzCawI6DUxBj2zGutPRs1VZRztMy0HnwSsf
	OI5cZWwaKtKyxElh00AwZXcb13cApJGdpe1cNRwNFz3Ty/Mas6lCOq0rRGjsr5nCkNgDIf
	ZfeC1Cgjfe3pwIQtGOuNMnkwwWQv4/tiKJan+K2zSJCygXvLKALGPkNvkXEO8Q==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629359;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=KrhhIjbt5v6ZrRAi/JobWuMkwODXuYkNejeEJ8r5wYA=;
	b=AYssMWVwzMPlm7xFPvozIcusMMzaZeOC4IJgbRnnFFntYh+XGLTcZ4I1FH/Z2XIwzAjKaN
	H21cttJmA1LtW+AQ==
Date: Thu, 10 Dec 2020 20:25:56 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Subject: [patch 20/30] net/mlx4: Replace irq_to_desc() abuse
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

No driver has any business with the internals of an interrupt
descriptor. Storing a pointer to it just to use yet another helper at the
actual usage site to retrieve the affinity mask is creative at best. Just
because C does not allow encapsulation does not mean that the kernel has no
limits.

Retrieve a pointer to the affinity mask itself and use that. It still
uses an interface which is usually not meant for random drivers, but it is
definitely less hideous than the previous hack.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Tariq Toukan <tariqt@nvidia.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: netdev@vger.kernel.org
Cc: linux-rdma@vger.kernel.org
---
 drivers/net/ethernet/mellanox/mlx4/en_cq.c   |    8 +++-----
 drivers/net/ethernet/mellanox/mlx4/en_rx.c   |    6 +-----
 drivers/net/ethernet/mellanox/mlx4/mlx4_en.h |    3 ++-
 3 files changed, 6 insertions(+), 11 deletions(-)

--- a/drivers/net/ethernet/mellanox/mlx4/en_cq.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_cq.c
@@ -90,7 +90,7 @@ int mlx4_en_activate_cq(struct mlx4_en_p
 			int cq_idx)
 {
 	struct mlx4_en_dev *mdev = priv->mdev;
-	int err = 0;
+	int irq, err = 0;
 	int timestamp_en = 0;
 	bool assigned_eq = false;
 
@@ -116,10 +116,8 @@ int mlx4_en_activate_cq(struct mlx4_en_p
 
 			assigned_eq = true;
 		}
-
-		cq->irq_desc =
-			irq_to_desc(mlx4_eq_get_irq(mdev->dev,
-						    cq->vector));
+		irq = mlx4_eq_get_irq(mdev->dev, cq->vector);
+		cq->aff_mask = irq_get_affinity_mask(irq);
 	} else {
 		/* For TX we use the same irq per
 		ring we assigned for the RX    */
--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
@@ -959,8 +959,6 @@ int mlx4_en_poll_rx_cq(struct napi_struc
 
 	/* If we used up all the quota - we're probably not done yet... */
 	if (done == budget || !clean_complete) {
-		const struct cpumask *aff;
-		struct irq_data *idata;
 		int cpu_curr;
 
 		/* in case we got here because of !clean_complete */
@@ -969,10 +967,8 @@ int mlx4_en_poll_rx_cq(struct napi_struc
 		INC_PERF_COUNTER(priv->pstats.napi_quota);
 
 		cpu_curr = smp_processor_id();
-		idata = irq_desc_get_irq_data(cq->irq_desc);
-		aff = irq_data_get_affinity_mask(idata);
 
-		if (likely(cpumask_test_cpu(cpu_curr, aff)))
+		if (likely(cpumask_test_cpu(cpu_curr, cq->aff_mask)))
 			return budget;
 
 		/* Current cpu is not according to smp_irq_affinity -
--- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
+++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
@@ -46,6 +46,7 @@
 #endif
 #include <linux/cpu_rmap.h>
 #include <linux/ptp_clock_kernel.h>
+#include <linux/irq.h>
 #include <net/xdp.h>
 
 #include <linux/mlx4/device.h>
@@ -380,7 +381,7 @@ struct mlx4_en_cq {
 	struct mlx4_cqe *buf;
 #define MLX4_EN_OPCODE_ERROR	0x1e
 
-	struct irq_desc *irq_desc;
+	const struct cpumask *aff_mask;
 };
 
 struct mlx4_en_port_profile {



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:51:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:51:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49805.88168 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRyD-0002VC-7p; Thu, 10 Dec 2020 19:51:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49805.88168; Thu, 10 Dec 2020 19:51:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRyC-0002UJ-Qu; Thu, 10 Dec 2020 19:51:20 +0000
Received: by outflank-mailman (input) for mailman id 49805;
 Thu, 10 Dec 2020 19:51:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRqK-0007NY-Ig
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:43:12 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fe533430-b33d-4087-a011-f3a1e3dfc651;
 Thu, 10 Dec 2020 19:42:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe533430-b33d-4087-a011-f3a1e3dfc651
Message-Id: <20201210194044.065003856@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629353;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=fEH7SfFXgNBjB2KnJKxA/TzRUtcjVN/8Utn6le7GAgY=;
	b=jLhcNWAI6/f0h7r8+rFxJO/huD0wZGw+D4hiyZeoGMMj1an/mYZLtmxjGnZ/aSu9ajxbNk
	p75pL7pZYF9lDXL6k6cLxjzZOwC5q8ecIjJ39xF/dSPObsyqM+yudoL9yN0v7780Bn5yyF
	/Ve3t66EmBeMbc9BsRyBF2dkjfBuqeDKx8lYvDouLXZuAf0vFtIPH5zU4jlfdixs/IVdOS
	vmsr5U63fIVzglTEhwYSxzXESekgTuzdOHcprErKfUMwbSexx9oDdfFa+4Z1VcZaNX5rSu
	6l8HXwdp3e4RNwPr1pyTZfpMenA4zfmUQEoZ4/oyxWsRJm8zIR/y6ojvGi9aNQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629353;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=fEH7SfFXgNBjB2KnJKxA/TzRUtcjVN/8Utn6le7GAgY=;
	b=vcq9ylrl3bvEPVRyLq7+7uNgnxo0i4JU/giqH4jOxFiIi2uv7Mz1SZNae4Tgp0yDwrMR2T
	GgsHxKk5w8TyzfBQ==
Date: Thu, 10 Dec 2020 20:25:51 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-arm-kernel@lists.infradead.org,
 linux-gpio@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Subject: [patch 15/30] pinctrl: nomadik: Use irq_has_action()
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

Let the core code do the fiddling with irq_desc.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-gpio@vger.kernel.org
---
 drivers/pinctrl/nomadik/pinctrl-nomadik.c |    3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

--- a/drivers/pinctrl/nomadik/pinctrl-nomadik.c
+++ b/drivers/pinctrl/nomadik/pinctrl-nomadik.c
@@ -948,7 +948,6 @@ static void nmk_gpio_dbg_show_one(struct
 			   (mode < 0) ? "unknown" : modes[mode]);
 	} else {
 		int irq = chip->to_irq(chip, offset);
-		struct irq_desc	*desc = irq_to_desc(irq);
 		const int pullidx = pull ? 1 : 0;
 		int val;
 		static const char * const pulls[] = {
@@ -969,7 +968,7 @@ static void nmk_gpio_dbg_show_one(struct
 		 * This races with request_irq(), set_irq_type(),
 		 * and set_irq_wake() ... but those are "rare".
 		 */
-		if (irq > 0 && desc && desc->action) {
+		if (irq > 0 && irq_has_action(irq)) {
 			char *trigger;
 
 			if (nmk_chip->edge_rising & BIT(offset))



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:51:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:51:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49808.88174 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRyD-0002Wd-UB; Thu, 10 Dec 2020 19:51:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49808.88174; Thu, 10 Dec 2020 19:51:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRyD-0002Vz-F2; Thu, 10 Dec 2020 19:51:21 +0000
Received: by outflank-mailman (input) for mailman id 49808;
 Thu, 10 Dec 2020 19:51:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRqd-0007OY-FL
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:43:31 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 25096ef7-a62d-487a-be20-cccc8d207e0d;
 Thu, 10 Dec 2020 19:42:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 25096ef7-a62d-487a-be20-cccc8d207e0d
Message-Id: <20201210194045.065115500@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629366;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=tEVls3jI8V5LtEBTs0cm0Szr6SDBh3GYqdx/X7n2UZY=;
	b=bwVQ7gaQwvbfHTgP7MZDc/YhDLMbL2OEi1WnGDxUtEPa/zAOSNbqLaLeQTtD11Dg5082Qp
	Qz2zPtGhojJeSFokGiFaFmgg8DCDEwh1sTNplvk+ABtrVMp4PQqBe9MzYq5Rmcua4Vok7H
	MC3M8vw+sJGbud8ji2De2+9/N0C9SKASfhkEKBqszbBCF+wOvRW/V0B92szqDuXVqeCKre
	YRoIqIe820JAXb6fr66VrfJCEVfdlAwTC/JagtnRSgmfc3XazagnRYgk6a6iOnmsLRNX1p
	MVTREOpKae1E6DDuEk9FawkuR3E7r4E4WNu7go88D2q5JTNminPamGP01SwVTQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629366;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=tEVls3jI8V5LtEBTs0cm0Szr6SDBh3GYqdx/X7n2UZY=;
	b=oavywJTHDWv3olwcxAvfm8NfynriH4GVMpKUR3SAwF8GgtSSOX9oqiCpUNJcYYnYuyj2zl
	2KDiXd1knCZbiIAw==
Date: Thu, 10 Dec 2020 20:26:01 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>
Subject: [patch 25/30] xen/events: Remove disfunct affinity spreading
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

This function can only ever work when the event channels:

  - are already established
  - have interrupts assigned to them
  - have had their affinity set by user space already

because any newly set up event channel is forced to be bound to CPU0 and
the affinity mask of the interrupt is forced to contain cpumask_of(0).

As the CPU0 enforcement was in place _before_ this was implemented, it's
entirely unclear how that can ever have worked at all.

Remove it as preparation for doing it properly.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org
---
 drivers/xen/events/events_base.c |    9 ---------
 drivers/xen/evtchn.c             |   34 +---------------------------------
 2 files changed, 1 insertion(+), 42 deletions(-)

--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -1696,15 +1696,6 @@ static int set_affinity_irq(struct irq_d
 	return ret;
 }
 
-/* To be called with desc->lock held. */
-int xen_set_affinity_evtchn(struct irq_desc *desc, unsigned int tcpu)
-{
-	struct irq_data *d = irq_desc_get_irq_data(desc);
-
-	return set_affinity_irq(d, cpumask_of(tcpu), false);
-}
-EXPORT_SYMBOL_GPL(xen_set_affinity_evtchn);
-
 static void enable_dynirq(struct irq_data *data)
 {
 	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
--- a/drivers/xen/evtchn.c
+++ b/drivers/xen/evtchn.c
@@ -421,36 +421,6 @@ static void evtchn_unbind_from_user(stru
 	del_evtchn(u, evtchn);
 }
 
-static DEFINE_PER_CPU(int, bind_last_selected_cpu);
-
-static void evtchn_bind_interdom_next_vcpu(evtchn_port_t evtchn)
-{
-	unsigned int selected_cpu, irq;
-	struct irq_desc *desc;
-	unsigned long flags;
-
-	irq = irq_from_evtchn(evtchn);
-	desc = irq_to_desc(irq);
-
-	if (!desc)
-		return;
-
-	raw_spin_lock_irqsave(&desc->lock, flags);
-	selected_cpu = this_cpu_read(bind_last_selected_cpu);
-	selected_cpu = cpumask_next_and(selected_cpu,
-			desc->irq_common_data.affinity, cpu_online_mask);
-
-	if (unlikely(selected_cpu >= nr_cpu_ids))
-		selected_cpu = cpumask_first_and(desc->irq_common_data.affinity,
-				cpu_online_mask);
-
-	this_cpu_write(bind_last_selected_cpu, selected_cpu);
-
-	/* unmask expects irqs to be disabled */
-	xen_set_affinity_evtchn(desc, selected_cpu);
-	raw_spin_unlock_irqrestore(&desc->lock, flags);
-}
-
 static long evtchn_ioctl(struct file *file,
 			 unsigned int cmd, unsigned long arg)
 {
@@ -508,10 +478,8 @@ static long evtchn_ioctl(struct file *fi
 			break;
 
 		rc = evtchn_bind_to_user(u, bind_interdomain.local_port);
-		if (rc == 0) {
+		if (rc == 0)
 			rc = bind_interdomain.local_port;
-			evtchn_bind_interdom_next_vcpu(rc);
-		}
 		break;
 	}
 



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:51:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:51:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49810.88184 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRyF-0002aL-Fg; Thu, 10 Dec 2020 19:51:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49810.88184; Thu, 10 Dec 2020 19:51:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRyE-0002Ys-Ii; Thu, 10 Dec 2020 19:51:22 +0000
Received: by outflank-mailman (input) for mailman id 49810;
 Thu, 10 Dec 2020 19:51:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRr7-0007OY-GF
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:44:01 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id afefd95e-536b-426d-976d-9d470fe98d83;
 Thu, 10 Dec 2020 19:42:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: afefd95e-536b-426d-976d-9d470fe98d83
Message-Id: <20201210194045.551428291@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629372;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=MtNO1nEPKIMiQX4klZrvOfUrY/ABsFwlxtMkntNh7CA=;
	b=KaxGnHH1MXTqooatghPE8cM6xpnGF4ig0d4EAWoirBaHpwaLjagwuDN+X7ntrzOWzhHJCE
	Ez3+VuPBK6N8THRx/c/U7BpDe7lSNCEOKsSoZIfAKG42zMl8uOqZUH0n+L+DqNCVrKQYTB
	oVBRlbi3XFzHShM0wZtfud+2wylFkbDbo+6w0T9nxHjXspQdXCx5rbXqQ/f7LVGQLOgMGC
	ENEQVPteM32WIaUIGu6797s46NpYcBHdeeyuJOYmvSReU4vykWzWq+/LUSyXrR2XMumg3x
	6Lu7jqz9tqhbcJsc0jSPa7puNiljDv9N6eQVXNVG6wM/APWxN3QkWoq0IdHFkw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629372;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=MtNO1nEPKIMiQX4klZrvOfUrY/ABsFwlxtMkntNh7CA=;
	b=v3GGn2bgoc9w48TBOZdjXg19jWm2bAJN6nmZaIKef4PpZW9mfQUCKMWPL0d+8SCh8ofn/B
	albFJ36DV6NevuBw==
Date: Thu, 10 Dec 2020 20:26:06 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>,
 linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Subject: [patch 30/30] genirq: Remove export of irq_to_desc()
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

Finally no more (ab)use in modules. Remove the export so that no new ones
can show up.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/irq/irqdesc.c |    1 -
 1 file changed, 1 deletion(-)

--- a/kernel/irq/irqdesc.c
+++ b/kernel/irq/irqdesc.c
@@ -352,7 +352,6 @@ struct irq_desc *irq_to_desc(unsigned in
 {
 	return radix_tree_lookup(&irq_desc_tree, irq);
 }
-EXPORT_SYMBOL(irq_to_desc);
 
 static void delete_irq_desc(unsigned int irq)
 {



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 19:51:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 19:51:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49822.88204 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRyL-0002qX-0n; Thu, 10 Dec 2020 19:51:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49822.88204; Thu, 10 Dec 2020 19:51:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knRyK-0002qN-S2; Thu, 10 Dec 2020 19:51:28 +0000
Received: by outflank-mailman (input) for mailman id 49822;
 Thu, 10 Dec 2020 19:51:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knRqO-0007OY-EG
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:43:16 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bc48ef13-6c2c-4973-826d-9ac63d48cf28;
 Thu, 10 Dec 2020 19:42:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bc48ef13-6c2c-4973-826d-9ac63d48cf28
Message-Id: <20201210194044.473308721@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607629358;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=hb66G/YKxZssYUpIlaqnSR4ZW1k+vqs7JSWYZ8olni4=;
	b=JDGcEIf6xGhVDkvwWeMQLx9FUu/IsKss78ihAlKadMjAIf3dHANCzucDD5C2rHnEBhKkDC
	fs0LpWbG65iC+gn0dqp5B//hOUr0DzpNnSBFKb88Vue5ngozd1tGLn7CtzLKEI29imO25z
	joiUdDjr1d2lfyigu/UfV0ehuoSmHFo9AhA/FqDn1vazpBLlgf2ND1DS3dxniHAdDnilk3
	1qIkGuqztzebcid1bBRJTHx9CoWTI1/9XLZXX10DwpIeYbsmtua3l8q5FWiIGABqtq0mdG
	+jGGNyapUz+w2HLUzMDsdOexP4c9y0H7CnIR6EVRiH9/1y/HRKjopWTfNI7hPQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607629358;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:  references:references;
	bh=hb66G/YKxZssYUpIlaqnSR4ZW1k+vqs7JSWYZ8olni4=;
	b=Fyn3b5Uer8Bt9xBrcUD7/wbs4Zq09nDfu7V/W/qeQGG6ZMTLCbkszqx6GueqYSkgtweAWi
	EEbLWcn1hOF4e1Aw==
Date: Thu, 10 Dec 2020 20:25:55 +0100
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Marc Zyngier <maz@kernel.org>,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>,
 linux-pci@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>,
 linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>,
 Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>,
 Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com,
 Michal Simek <michal.simek@xilinx.com>,
 Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Subject: [patch 19/30] PCI: mobiveil: Use irq_data_get_irq_chip_data()
References: <20201210192536.118432146@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-transfer-encoding: 8-bit

Going through a full irq descriptor lookup instead of just using the proper
helper function which provides direct access is suboptimal.

In fact it _is_ wrong: the chip callback needs the chip data which is
relevant for its own chip, while the irq descriptor variant returns the
chip data of the top level chip of a hierarchy. It does not matter in this
case because the chip is the top level chip, but that doesn't make it any
more correct.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>
Cc: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: linux-pci@vger.kernel.org
---
 drivers/pci/controller/mobiveil/pcie-mobiveil-host.c |    8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

--- a/drivers/pci/controller/mobiveil/pcie-mobiveil-host.c
+++ b/drivers/pci/controller/mobiveil/pcie-mobiveil-host.c
@@ -306,13 +306,11 @@ int mobiveil_host_init(struct mobiveil_p
 
 static void mobiveil_mask_intx_irq(struct irq_data *data)
 {
-	struct irq_desc *desc = irq_to_desc(data->irq);
-	struct mobiveil_pcie *pcie;
+	struct mobiveil_pcie *pcie = irq_data_get_irq_chip_data(data);
 	struct mobiveil_root_port *rp;
 	unsigned long flags;
 	u32 mask, shifted_val;
 
-	pcie = irq_desc_get_chip_data(desc);
 	rp = &pcie->rp;
 	mask = 1 << ((data->hwirq + PAB_INTX_START) - 1);
 	raw_spin_lock_irqsave(&rp->intx_mask_lock, flags);
@@ -324,13 +322,11 @@ static void mobiveil_mask_intx_irq(struc
 
 static void mobiveil_unmask_intx_irq(struct irq_data *data)
 {
-	struct irq_desc *desc = irq_to_desc(data->irq);
-	struct mobiveil_pcie *pcie;
+	struct mobiveil_pcie *pcie = irq_data_get_irq_chip_data(data);
 	struct mobiveil_root_port *rp;
 	unsigned long flags;
 	u32 shifted_val, mask;
 
-	pcie = irq_desc_get_chip_data(desc);
 	rp = &pcie->rp;
 	mask = 1 << ((data->hwirq + PAB_INTX_START) - 1);
 	raw_spin_lock_irqsave(&rp->intx_mask_lock, flags);



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 20:10:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 20:10:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49883.88216 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knSH6-0005X8-U3; Thu, 10 Dec 2020 20:10:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49883.88216; Thu, 10 Dec 2020 20:10:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knSH6-0005X1-Qx; Thu, 10 Dec 2020 20:10:52 +0000
Received: by outflank-mailman (input) for mailman id 49883;
 Thu, 10 Dec 2020 20:10:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knSH5-0005Wr-HT; Thu, 10 Dec 2020 20:10:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knSH5-0008PC-An; Thu, 10 Dec 2020 20:10:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knSH5-00042T-30; Thu, 10 Dec 2020 20:10:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1knSH5-0005LR-2V; Thu, 10 Dec 2020 20:10:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4ypChK/p4IJtq8gAWO2vUg0Gv5OJhIC79tyc71uryuU=; b=FQXB37y2xZ2WwkNj56P9Eo+rb8
	bwt/5sTaMLHo22s3joIWvGtIzfvnh6mRsSd0eMPeabqTk/aYGD1UxN+ztk1n+kM2/pxAG8zrFevDA
	9iYDyH04YjX78XXDNErtViFolnsx+aT3MMaz99y7/DgawmZTHIpAUOfAAmpcqwdsX5dg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157376-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157376: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=5e7b204dbfae9a562fc73684986f936b97f63877
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Dec 2020 20:10:51 +0000

flight 157376 qemu-mainline real [real]
flight 157389 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157376/
http://logs.test-lab.xenproject.org/osstest/logs/157389/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                5e7b204dbfae9a562fc73684986f936b97f63877
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  112 days
Failing since        152659  2020-08-21 14:07:39 Z  111 days  232 attempts
Testing same since   157361  2020-12-10 00:36:49 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erich McMillan <erich.mcmillan@hp.com>
  Erich-McMillan <erich.mcmillan@hp.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiahui Cen <cenjiahui@huawei.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Levon <john.levon@nutanix.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Juan Quintela <quintela@redhat.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yubo Miao <miaoyubo@huawei.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 70796 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 20:15:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 20:15:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49892.88231 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knSLF-0005u0-HO; Thu, 10 Dec 2020 20:15:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49892.88231; Thu, 10 Dec 2020 20:15:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knSLF-0005tt-EP; Thu, 10 Dec 2020 20:15:09 +0000
Received: by outflank-mailman (input) for mailman id 49892;
 Thu, 10 Dec 2020 20:15:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hbU1=FO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knSLE-0005to-98
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 20:15:08 +0000
Received: from galois.linutronix.de (unknown [2a0a:51c0:0:12e:550::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9159a3dc-e05a-453b-a281-4af17579fb37;
 Thu, 10 Dec 2020 20:15:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9159a3dc-e05a-453b-a281-4af17579fb37
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607631305;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=5fN5ChdPk6fH90ggZf7w1lWtyXdKRHqfW7WHc/pgsVs=;
	b=ckiewzsMmQUlBqMXgP4UCM+Np8mVHkkyuYZ55dFWcuWlHfyz/tgDeQPN8gOJwL71KCuDFp
	bTEV0UcVcW+THBv5y7dJyw0M+kQ1Riyvqn9OxQr3zwaLOv+uShsKUCjirolVbrlSa85oom
	iI6uU+J4mWFPa2hGzIZEAOuKh5iN+4FzvPkGUwZRJzd2A30gZdT6pTvBYw/wj4wJvz45Q9
	4xL68/ICbZjy5ErILON+zndUZYGmqVaFb9ADzRqHWrpdGoR4T6W0smF9rTYGSgd1QHwNu0
	XdkbiN6hBQ7XZ7qyI/o8e/Rwir0XY//nkekRFelqXb/XKTsF4qfFyF4RODMkvA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607631305;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=5fN5ChdPk6fH90ggZf7w1lWtyXdKRHqfW7WHc/pgsVs=;
	b=7Sv15HRz9k3S0goiNVwWDAqqbLafPJHvp7i7jT+K4bNnOThCzVhK/tC1gQGefAlx4Qus9O
	sPKSDB0EOeG3wDBg==
To: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>, Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org, x86@kernel.org, linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org, luto@kernel.org, Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>, Deep Shah <sdeep@vmware.com>, "VMware\, Inc." <pv-drivers@vmware.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>, Stefano Stabellini <sstabellini@kernel.org>
Subject: x86/ioapic: Cleanup the timer_works() irqflags mess
In-Reply-To: <20201210111008.GB88655@C02TD0UTHF1T.local>
References: <20201120114630.13552-1-jgross@suse.com> <20201120114630.13552-6-jgross@suse.com> <20201120115943.GD3021@hirez.programming.kicks-ass.net> <20201209181514.GA14235@C02TD0UTHF1T.local> <87tusuzu71.fsf@nanos.tec.linutronix.de> <20201210111008.GB88655@C02TD0UTHF1T.local>
Date: Thu, 10 Dec 2020 21:15:04 +0100
Message-ID: <87k0tpju47.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain

Mark tripped over the creative irqflags handling in the IO-APIC timer
delivery check which ends up doing:

        local_irq_save(flags);
        local_irq_enable();
        local_irq_restore(flags);

which triggered a new consistency check he is working on, required for
replacing the POPF-based restore with a conditional STI.

That code is a historical mess and none of this is needed. Make it
straightforwardly use local_irq_disable()/local_irq_enable(), as that's
all that is required: nowadays it is invoked from interrupt-enabled code.

Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Mark Rutland <mark.rutland@arm.com>
---
 arch/x86/kernel/apic/io_apic.c |   22 ++++++----------------
 1 file changed, 6 insertions(+), 16 deletions(-)

--- a/arch/x86/kernel/apic/io_apic.c
+++ b/arch/x86/kernel/apic/io_apic.c
@@ -1618,21 +1618,16 @@ static void __init delay_without_tsc(voi
 static int __init timer_irq_works(void)
 {
 	unsigned long t1 = jiffies;
-	unsigned long flags;
 
 	if (no_timer_check)
 		return 1;
 
-	local_save_flags(flags);
 	local_irq_enable();
-
 	if (boot_cpu_has(X86_FEATURE_TSC))
 		delay_with_tsc();
 	else
 		delay_without_tsc();
 
-	local_irq_restore(flags);
-
 	/*
 	 * Expect a few ticks at least, to be sure some possible
 	 * glue logic does not lock up after one or two first
@@ -1641,10 +1636,10 @@ static int __init timer_irq_works(void)
 	 * least one tick may be lost due to delays.
 	 */
 
-	/* jiffies wrap? */
-	if (time_after(jiffies, t1 + 4))
-		return 1;
-	return 0;
+	local_irq_disable();
+
+	/* Did jiffies advance? */
+	return time_after(jiffies, t1 + 4);
 }
 
 /*
@@ -2117,13 +2112,12 @@ static inline void __init check_timer(vo
 	struct irq_cfg *cfg = irqd_cfg(irq_data);
 	int node = cpu_to_node(0);
 	int apic1, pin1, apic2, pin2;
-	unsigned long flags;
 	int no_pin1 = 0;
 
 	if (!global_clock_event)
 		return;
 
-	local_irq_save(flags);
+	local_irq_disable();
 
 	/*
 	 * get/set the timer IRQ vector:
@@ -2191,7 +2185,6 @@ static inline void __init check_timer(vo
 			goto out;
 		}
 		panic_if_irq_remap("timer doesn't work through Interrupt-remapped IO-APIC");
-		local_irq_disable();
 		clear_IO_APIC_pin(apic1, pin1);
 		if (!no_pin1)
 			apic_printk(APIC_QUIET, KERN_ERR "..MP-BIOS bug: "
@@ -2215,7 +2208,6 @@ static inline void __init check_timer(vo
 		/*
 		 * Cleanup, just in case ...
 		 */
-		local_irq_disable();
 		legacy_pic->mask(0);
 		clear_IO_APIC_pin(apic2, pin2);
 		apic_printk(APIC_QUIET, KERN_INFO "....... failed.\n");
@@ -2232,7 +2224,6 @@ static inline void __init check_timer(vo
 		apic_printk(APIC_QUIET, KERN_INFO "..... works.\n");
 		goto out;
 	}
-	local_irq_disable();
 	legacy_pic->mask(0);
 	apic_write(APIC_LVT0, APIC_LVT_MASKED | APIC_DM_FIXED | cfg->vector);
 	apic_printk(APIC_QUIET, KERN_INFO "..... failed.\n");
@@ -2251,7 +2242,6 @@ static inline void __init check_timer(vo
 		apic_printk(APIC_QUIET, KERN_INFO "..... works.\n");
 		goto out;
 	}
-	local_irq_disable();
 	apic_printk(APIC_QUIET, KERN_INFO "..... failed :(.\n");
 	if (apic_is_x2apic_enabled())
 		apic_printk(APIC_QUIET, KERN_INFO
@@ -2260,7 +2250,7 @@ static inline void __init check_timer(vo
 	panic("IO-APIC + timer doesn't work!  Boot with apic=debug and send a "
 		"report.  Then try booting with the 'noapic' option.\n");
 out:
-	local_irq_restore(flags);
+	local_irq_enable();
 }
 
 /*


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 20:52:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 20:52:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49906.88249 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knSuq-0001Wa-Ua; Thu, 10 Dec 2020 20:51:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49906.88249; Thu, 10 Dec 2020 20:51:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knSuq-0001WT-RG; Thu, 10 Dec 2020 20:51:56 +0000
Received: by outflank-mailman (input) for mailman id 49906;
 Thu, 10 Dec 2020 20:51:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1knSuq-0001WO-Fn
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 20:51:56 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1knSup-0000pp-3t; Thu, 10 Dec 2020 20:51:55 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1knSuo-0006EB-SO; Thu, 10 Dec 2020 20:51:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=ZMMdUx0x9wYMaOKo/RzR5w8M61hVMj8XIkOz66DqnpI=; b=2FF2cY1USmHb7fD+gLJugPnjQa
	vCu0p/vD4VWjbFH54bg/+3+xOju1+r9n94MkCKO0pQ1py3psWhuaKkyuLDMfH6wR/hO/libdfR+Nl
	FGirN/vRF+77kxXEQOj7P/AhzcCCMpqZpTz28UsHWO7OXArwbpR7QKFX5VhMOgYQduh4=;
Subject: Re: [PATCH v3] xen: add support for automatic debug key actions in
 case of crash
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org, Juergen Gross <jgross@suse.com>
References: <20201126080340.6154-1-jgross@suse.com>
 <22190c77-eb35-5b72-7d72-34800c3f052f@suse.com>
 <98c45abd-8796-088c-e2a6-9ad494beeb9e@xen.org>
 <59f126a3-f716-345b-b464-746e6156c15a@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <1e305cf6-aa14-54cc-a77d-88bb38ba4c6e@xen.org>
Date: Thu, 10 Dec 2020 20:51:52 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <59f126a3-f716-345b-b464-746e6156c15a@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 09/12/2020 14:29, Jan Beulich wrote:
> On 09.12.2020 13:11, Julien Grall wrote:
>> On 26/11/2020 11:20, Jan Beulich wrote:
>>> On 26.11.2020 09:03, Juergen Gross wrote:
>>>> When the host crashes it would sometimes be nice to have additional
>>>> debug data available which could be produced via debug keys, but
>>>> halting the server for manual intervention might be impossible due to
>>>> the need to reboot/kexec rather sooner than later.
>>>>
>>>> Add support for automatic debug key actions in case of crashes which
>>>> can be activated via boot- or runtime-parameter.
>>>>
>>>> Depending on the type of crash the desired data might be different, so
>>>> support different settings for the possible types of crashes.
>>>>
>>>> The parameter is "crash-debug" with the following syntax:
>>>>
>>>>     crash-debug-<type>=<string>
>>>>
>>>> with <type> being one of:
>>>>
>>>>     panic, hwdom, watchdog, kexeccmd, debugkey
>>>>
>>>> and <string> a sequence of debug key characters with '+' having the
>>>> special semantics of a 10 millisecond pause.
>>>>
>>>> So "crash-debug-watchdog=0+0qr" would result in special output in case
>>>> of watchdog triggered crash (dom0 state, 10 ms pause, dom0 state,
>>>> domain info, run queues).
>>>>
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>> ---
>>>> V2:
>>>> - switched special character '.' to '+' (Jan Beulich)
>>>> - 10 ms instead of 1 s pause (Jan Beulich)
>>>> - added more text to the boot parameter description (Jan Beulich)
>>>>
>>>> V3:
>>>> - added const (Jan Beulich)
>>>> - thorough test of crash reason parameter (Jan Beulich)
>>>> - kexeccmd case should depend on CONFIG_KEXEC (Jan Beulich)
>>>> - added dummy get_irq_regs() helper on Arm
>>>>
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>
>>> Except for the Arm aspect, where I'm not sure using
>>> guest_cpu_user_regs() is correct in all cases,
>>
>> I am not entirely sure I understand what get_irq_regs() is supposed to
>> return on x86. Is it the registers saved from the most recent exception?
> 
> An interrupt (not an exception) sets the underlying per-CPU
> variable, such that interested parties will know the real
> context is not guest or "normal" Xen code, but an IRQ.

Thanks for the explanation. I am a bit confused as to why we need to pass 
regs to handle_keypress(), because no-one seems to use it. Do you have an 
explanation?

To add to the confusion, it looks like get_irq_regs() may return NULL. So 
sometimes we may pass guest_cpu_user_regs() (which may contain garbage or 
registers saved too long ago).

I guess providing the wrong information to handle_keypress() is not 
going to matter that much because no-one uses it (?). Although, I'd like 
to make sure this is not going to bite us in the future.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 21:01:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 21:01:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49913.88260 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knT3y-0002dc-Rm; Thu, 10 Dec 2020 21:01:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49913.88260; Thu, 10 Dec 2020 21:01:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knT3y-0002dV-Oo; Thu, 10 Dec 2020 21:01:22 +0000
Received: by outflank-mailman (input) for mailman id 49913;
 Thu, 10 Dec 2020 21:01:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=25P7=FO=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1knT3x-0002dQ-KR
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 21:01:21 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b1dc6ae6-6f3d-4744-956a-0d3463d620f8;
 Thu, 10 Dec 2020 21:01:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b1dc6ae6-6f3d-4744-956a-0d3463d620f8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1607634080;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=cLVw+tb7nC6NuiIXKO+EthQbh7iwGUXQ/ATdbc6J608=;
  b=XYQUGpOnkggaaHRfIYKtwc8NgbcJ5HhFWOTa0f6IvpP7WucQXhOif24E
   OpKHtAiPgw4J4P/Tbj6LpdXLm+gWxscxu1PsVk9XOhjUYnZ8gSJnS7+iw
   dJo/ulj5HL9UOc5/ET8xKWOiVxOAEj0rATltCLYqDUkYnJuUuw3E16Mw/
   g=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 4K10j6FlT6IOh5YXfmR3bcHRnofq0S/L5SWMAPthFppoYpY8xcEfFnuAaH53iFfcxE37hO7JiP
 HYn1ayBp/WnSc6fPJp4LW76tHqyrP+gj6J6GNvArj+3vnYBOGA7575DsgZWcsTKw8MDFrTt5Mh
 XlQI3Hcyg5pn9LscvJRIJfCniLMrdt2uCbDpE0dqiiggeb76Gm5PXP7MgdV51xSa9XkwDtnJqk
 wtkuITGyODWUFyy0gjnsj7gZU4zfdqC2Yk74p0SmJIAwjFveS0uLF1XaTdxQACrYT1G2HeF2ZG
 quQ=
X-SBRS: 5.2
X-MesageID: 34198816
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,409,1599537600"; 
   d="scan'208";a="34198816"
Subject: Re: dom0 PV looping on search_pre_exception_table()
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: <xen-devel@lists.xenproject.org>
References: <20201209154431.GA4913@antioche.eu.org>
 <52e1b10d-75d4-63ac-f91e-cb8f0dcca493@citrix.com>
 <20201209163049.GA6158@antioche.eu.org>
 <30a71c9d-3eff-3727-9c61-e387b5bccc95@citrix.com>
 <20201209185714.GS1469@antioche.eu.org>
 <6c06abf1-7efe-f02c-536a-337a2704e265@citrix.com>
 <20201210095139.GA455@antioche.eu.org>
 <4c3bff12-821b-83fb-e054-61b07b97fa70@citrix.com>
 <20201210170319.GG455@antioche.eu.org>
 <ed06a0f4-8468-addf-2797-be3ba3a2d607@citrix.com>
 <20201210173551.GJ455@antioche.eu.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <b60639d9-5c27-ab86-eb97-f8627b3b32d2@citrix.com>
Date: Thu, 10 Dec 2020 21:01:12 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201210173551.GJ455@antioche.eu.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 10/12/2020 17:35, Manuel Bouyer wrote:
> On Thu, Dec 10, 2020 at 05:18:39PM +0000, Andrew Cooper wrote:
>> However, Xen finds the mapping not-present when trying to demand-map it,
>> hence why the #PF is forwarded to the kernel.
>>
>> The way we pull guest virtual addresses was altered by XSA-286 (released
>> not too long ago despite its apparent age), but *should* have been no
>> functional change.  I wonder if we accidentally broke something there. 
>> What exactly are you running, Xen-wise, with the 4.13 version?
> It is 4.13.2, with the patch for XSA351

Thanks,

>> Given that this is init failing, presumably the issue would repro with
>> the net installer version too?
> Hopefully yes, maybe even as a domU. But I don't have a linux dom0 to test.
>
> If you have a Xen setup you can test with
> http://ftp.netbsd.org/pub/NetBSD/NetBSD-9.1/amd64/binary/kernel/netbsd-INSTALL_XEN3_DOMU.gz
>
> note that this won't boot as a dom0 kernel.

I've repro'd the problem.

When I modify Xen to explicitly demand-map the LDT in the MMUEXT_SET_LDT
hypercall, everything works fine.

Specifically, this delta:

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 723cc1070f..71a791d877 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -3742,12 +3742,31 @@ long do_mmuext_op(
             else if ( (curr->arch.pv.ldt_ents != ents) ||
                       (curr->arch.pv.ldt_base != ptr) )
             {
+                unsigned int err = 0, tmp;
+
                 if ( pv_destroy_ldt(curr) )
                     flush_tlb_local();
 
                 curr->arch.pv.ldt_base = ptr;
                 curr->arch.pv.ldt_ents = ents;
                 load_LDT(curr);
+
+                printk("Probe new LDT\n");
+                asm volatile (
+                    "mov %%es, %[tmp];\n\t"
+                    "1: mov %[sel], %%es;\n\t"
+                    "mov %[tmp], %%es;\n\t"
+                    "2:\n\t"
+                    ".section .fixup,\"ax\"\n"
+                    "3: mov $1, %[err];\n\t"
+                    "jmp 2b\n\t"
+                    ".previous\n\t"
+                    _ASM_EXTABLE(1b, 3b)
+                    : [err] "+r" (err),
+                      [tmp] "=&r" (tmp)
+                    : [sel] "r" (0x3f)
+                    : "memory");
+                printk("  => err %u\n", err);
             }
             break;
         }

This stashes %es, explicitly loads init's %ss selector (0x3f) into %es to
trigger the #PF and Xen's lazy mapping, then restores %es.

(XEN) d1v0 Dropping PAT write of 0007010600070106
(XEN) Probe new LDT
(XEN) *** LDT Successful map, slot 0
(XEN)   => err 0
(XEN) d1 L1TF-vulnerable L4e 0000000801e88000 - Shadowing

And the domain is up and running:

# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  2656     8     r-----      44.6
netbsd                                       1   256     1     -b----       5.3

(Probably confused about the fact I gave it no disk...)

Now, in this case, we find that the virtual address provided for the LDT
is mapped, so we successfully copy the mapping into Xen's area, and init
runs happily.

So the mystery is why the LDT virtual address is not-present when Xen
tries to lazily map the LDT at the normal point...

Presumably you've got no Meltdown mitigations going on within the NetBSD
kernel?  (I suspect not, seeing as changing Xen changes the behaviour,
but it is worth asking).

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 21:08:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 21:08:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49920.88273 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knTAg-0002sx-Kq; Thu, 10 Dec 2020 21:08:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49920.88273; Thu, 10 Dec 2020 21:08:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knTAg-0002sq-Gt; Thu, 10 Dec 2020 21:08:18 +0000
Received: by outflank-mailman (input) for mailman id 49920;
 Thu, 10 Dec 2020 21:08:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JUBE=FO=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1knTAe-0002sl-Ph
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 21:08:16 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 23a2b38f-d31d-469a-8a06-b7fb7a38cfc7;
 Thu, 10 Dec 2020 21:08:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 23a2b38f-d31d-469a-8a06-b7fb7a38cfc7
Date: Thu, 10 Dec 2020 13:08:14 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607634495;
	bh=f0+B46ne+W0cDoni+oCaiX6Lb/zEDd88jj5OahkV8GY=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=GEDiMAzM5txj/HavJrVIwc6L+aQZ+96gHeGfnWgXt8dGQVVxJLx1xHOz5dQYfztHo
	 gJd7QfWPM2+wTOPYDd4epfRWR1B7IUuFdfWIRPLULOrhVtTdJeBpJfUkdFCYVBPUBa
	 8TsgrFtSc08xpaYspRE0BpLZSE7vZiqoO+NkebleikrxbKM97ziuuFWcnCU4/v2MjN
	 Q33QqVyoMp52RGGUqgnPTguLoubQfx4j1GFfmlR1MfJBQxm8xx0i91DdWz7tO1rWX6
	 1Ey5MXZhNQiaJURStJDxpdXOKgeD5EbIlx3DzcPbmKFx3YcRFGmagTYdzQHaZe5ioI
	 f0nfV7MElEPmQ==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Wei Liu <wl@xen.org>, cardoe@cardoe.com
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, famzheng@amazon.com, 
    Bertrand.Marquis@arm.com, julien@xen.org, andrew.cooper3@citrix.com
Subject: gitlab-docker-machine-oyster failure,  Was: [PATCH v6 00/25] xl /
 libxl: named PCI pass-through devices
In-Reply-To: <20201210155638.mxjx4zmjqmcpk7z3@liuwe-devbox-debian-v2>
Message-ID: <alpine.DEB.2.21.2012101305510.20986@sstabellini-ThinkPad-T480s>
References: <160746448732.12203.10647684023172140005@600e7e483b3a> <alpine.DEB.2.21.2012081702420.20986@sstabellini-ThinkPad-T480s> <20201209161433.d7xpx5zwtikd3fmk@liuwe-devbox-debian-v2> <alpine.DEB.2.21.2012091046400.20986@sstabellini-ThinkPad-T480s>
 <alpine.DEB.2.21.2012091839430.20986@sstabellini-ThinkPad-T480s> <20201210155638.mxjx4zmjqmcpk7z3@liuwe-devbox-debian-v2>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Hi Doug,

After chatting with Wei on IRC, it became obvious that the issue is that
gitlab-docker-machine-oyster failed, see its grey status here under
"Runners":

https://gitlab.com/xen-project/patchew/xen/-/settings/ci_cd

Maybe it is just a matter of rebooting the VM? Doug, could you give it a
try?

Thank you!

Cheers,

Stefano



On Thu, 10 Dec 2020, Wei Liu wrote:
> On Wed, Dec 09, 2020 at 06:41:03PM -0800, Stefano Stabellini wrote:
> > On Wed, 9 Dec 2020, Stefano Stabellini wrote:
> > > On Wed, 9 Dec 2020, Wei Liu wrote:
> > > > On Tue, Dec 08, 2020 at 05:02:50PM -0800, Stefano Stabellini wrote:
> > > > > The pipeline failed because the "fedora-gcc-debug" build failed with a
> > > > > timeout: 
> > > > > 
> > > > > ERROR: Job failed: execution took longer than 1h0m0s seconds
> > > > > 
> > > > > given that all the other jobs passed (including the other Fedora job), I
> > > > > take it this failed because the gitlab-ci x86 runners were overloaded?
> > > > > 
> > > > 
> > > > The CI system is configured to auto-scale as the number of jobs grows.
> > > > The limit is set to 10 (VMs) at the moment.
> > > > 
> > > > https://gitlab.com/xen-project/xen-gitlab-ci/-/commit/832bfd72ea3a227283bf3df88b418a9aae95a5a4
> > > > 
> > > > I haven't looked at the log, but the number of build jobs looks rather
> > > > larger than when we got started. Maybe the limit of 10 is not good
> > > > enough?
> > > 
> > > Interesting! That's only for the x86 runners, not the ARM runners (we
> > > only have 1 ARM64 runner), is that right?
> > > 
> > > If we could increase the number of VMs for x86 I think that would be
> > > helpful because we have very many x86 jobs.
> > 
> > I don't know what is going on but at the moment there seems to be only
> > one x86 build active
> > (https://gitlab.com/xen-project/patchew/xen/-/pipelines/227280736).
> > Should there be at least 3 of them?
> 
> Not sure what you meant here. That pipeline is green.
> 
> It may take some time for the CI to scale up if it is "cold". By default
> there is only 1 standby runner to reduce cost.



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 21:28:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 21:28:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49929.88285 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knTU2-0004yx-6Y; Thu, 10 Dec 2020 21:28:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49929.88285; Thu, 10 Dec 2020 21:28:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knTU2-0004yq-3N; Thu, 10 Dec 2020 21:28:18 +0000
Received: by outflank-mailman (input) for mailman id 49929;
 Thu, 10 Dec 2020 21:28:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knTU0-0004yi-Az; Thu, 10 Dec 2020 21:28:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knTU0-0001Zh-4A; Thu, 10 Dec 2020 21:28:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knTTz-0007jl-Qt; Thu, 10 Dec 2020 21:28:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1knTTz-0007BV-QQ; Thu, 10 Dec 2020 21:28:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zLrCVpFMKXAdosZRzIdSEViL4EJd5Uo53WmGfYePtf8=; b=NBjqLn87kLhIxbC08jL3aaQs6p
	GrkEObDjSdvCQAdm6amKsWqksmAzKDB3mCynjvPeMiSwzFYFH6u4SmewI4MzHzLeqNT+CMlPNev9c
	FYIpihkx6SpiLwUxIBaNZYGx9yAZ/qg59GxS9u5XD67UZnqBJiPo5tbFql6RXHVPjzec=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157390-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157390: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=10dc8c561c687c9e73e29743d04d828cca56a288
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Dec 2020 21:28:15 +0000

flight 157390 ovmf real [real]
flight 157393 ovmf real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157390/
http://logs.test-lab.xenproject.org/osstest/logs/157393/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 10dc8c561c687c9e73e29743d04d828cca56a288
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    1 days
Failing since        157348  2020-12-09 15:39:39 Z    1 days    5 attempts
Testing same since   157383  2020-12-10 13:09:45 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 308 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 21:55:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 21:55:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49938.88300 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knTuL-0007zU-8X; Thu, 10 Dec 2020 21:55:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49938.88300; Thu, 10 Dec 2020 21:55:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knTuL-0007zN-5T; Thu, 10 Dec 2020 21:55:29 +0000
Received: by outflank-mailman (input) for mailman id 49938;
 Thu, 10 Dec 2020 21:55:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knTuJ-0007zF-H2; Thu, 10 Dec 2020 21:55:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knTuJ-00025k-7Y; Thu, 10 Dec 2020 21:55:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knTuI-0000bw-Tv; Thu, 10 Dec 2020 21:55:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1knTuI-0002NA-TO; Thu, 10 Dec 2020 21:55:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=m0rOm1sDtdYHxnBcMr6WC2JLszxf65kcfoZ1/q0DlD8=; b=lcXil5zj0aPbmPUN2Z/9Zg7W/k
	HUIxf5caxFp7C2Kj+XifKtwtnstAnEu/CRIoKXbg9/qLL3yY4FUQYgb+PMQzJj9rkN6tJXV1Q+xa2
	ZP0zP0KIkgDs6Wg7lro6APtubTEjrEvUYPEyhRJ0um3IJjL/FqMOUvJ9O7uk7b5ZgBrw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157386-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157386: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=a2f5ea9e314ba6778f885c805c921e9362ec0420
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 10 Dec 2020 21:55:26 +0000

flight 157386 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157386/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                a2f5ea9e314ba6778f885c805c921e9362ec0420
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  132 days
Failing since        152366  2020-08-01 20:49:34 Z  131 days  226 attempts
Testing same since   157368  2020-12-10 04:03:39 Z    0 days    2 attempts

------------------------------------------------------------
3659 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 701346 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 22:29:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 22:29:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49949.88315 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knURL-0002kc-1j; Thu, 10 Dec 2020 22:29:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49949.88315; Thu, 10 Dec 2020 22:29:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knURK-0002kV-Uw; Thu, 10 Dec 2020 22:29:34 +0000
Received: by outflank-mailman (input) for mailman id 49949;
 Thu, 10 Dec 2020 22:29:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JUBE=FO=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1knURI-0002kN-Nq
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 22:29:32 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b47a5bcc-8f85-4f9f-8a89-bf4892f7b9dd;
 Thu, 10 Dec 2020 22:29:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b47a5bcc-8f85-4f9f-8a89-bf4892f7b9dd
Date: Thu, 10 Dec 2020 14:29:30 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607639371;
	bh=CnycD9TnF+3CygDQlvn/xo/dycHeDccy1iY1awK9hts=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=kYpm4onCYiatdcKs/WPFhfTtwrErr+8ZgioehIOi4pZt8pslgRjXrEy8dVi5uZ1wx
	 ADgcjQudZPmEjVrRChaxjf3sT8b8AJtRClz+tkDDlGzVWO8XWzrwZgVJoVuHH1aBSP
	 yqhLhUdrMr7fwXbbz5PNSqv31liiJb5tg78Nqo8NKnTuSIyKHgQlK3UvBFgI0vzEVk
	 iSrRy/Z9bGmBCoxby7xB5NE0DgHhtA2u18Rt80PGUAfSvRGliAc8AJsjoR99JyDtln
	 fETdZS+eypKFv7RXy9+CcuhSog97fBHNo/2/hwZdC4pwB4+/WV6RXaWV3KnqjBSK/P
	 6eNyQTuwFwfow==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 4/7] xen/arm: Add handler for ID registers on arm64
In-Reply-To: <4B26BDEE-DA30-4B5B-A428-9D8D4659B581@arm.com>
Message-ID: <alpine.DEB.2.21.2012101428030.20986@sstabellini-ThinkPad-T480s>
References: <cover.1607524536.git.bertrand.marquis@arm.com> <e991b05af11d00627709caf847c5de99f487cab0.1607524536.git.bertrand.marquis@arm.com> <alpine.DEB.2.21.2012091131350.20986@sstabellini-ThinkPad-T480s> <4B26BDEE-DA30-4B5B-A428-9D8D4659B581@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 10 Dec 2020, Bertrand Marquis wrote:
> Hi Stefano,
> 
> > On 9 Dec 2020, at 19:38, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > 
> > On Wed, 9 Dec 2020, Bertrand Marquis wrote:
> >> Add vsysreg emulation for registers trapped when TID3 bit is activated
> >> in HSR.
> >> The emulation returns the value stored in the cpuinfo_guest structure
> >> for known registers and handles reserved registers as RAZ.
> >> 
> >> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> >> ---
> >> Changes in V2: Rebase
> >> Changes in V3:
> >>  Fix commit message
> >>  Fix code style for GENERATE_TID3_INFO declaration
> >>  Add handling of reserved registers as RAZ.
> >> 
> >> ---
> >> xen/arch/arm/arm64/vsysreg.c | 53 ++++++++++++++++++++++++++++++++++++
> >> 1 file changed, 53 insertions(+)
> >> 
> >> diff --git a/xen/arch/arm/arm64/vsysreg.c b/xen/arch/arm/arm64/vsysreg.c
> >> index 8a85507d9d..ef7a11dbdd 100644
> >> --- a/xen/arch/arm/arm64/vsysreg.c
> >> +++ b/xen/arch/arm/arm64/vsysreg.c
> >> @@ -69,6 +69,14 @@ TVM_REG(CONTEXTIDR_EL1)
> >>         break;                                                          \
> >>     }
> >> 
> >> +/* Macro to easily generate a case for ID co-processor register emulation */
> >> +#define GENERATE_TID3_INFO(reg, field, offset)                          \
> >> +    case HSR_SYSREG_##reg:                                              \
> >> +    {                                                                   \
> >> +        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr,   \
> >> +                          1, guest_cpuinfo.field.bits[offset]);         \
> > 
> > [...]
> > 
> >> +    HSR_SYSREG_TID3_RESERVED_CASE:
> >> +        /* Handle all reserved registers as RAZ */
> >> +        return handle_ro_raz(regs, regidx, hsr.sysreg.read, hsr, 1);
> > 
> > 
> > We are implementing both the known and the implementation defined
> > registers as read-as-zero. On write, we inject an exception.
> > 
> > However, reading the manual, it looks like the implementation defined
> > registers should be read-as-zero/write-ignore, is that right?
> 
> In the documentation, I found all of those defined as RO (Arm Architecture
> Reference Manual, chapter D12.3.1). Do you think we should handle read-only
> registers as write-ignore? Now that I think of it, RO does not explicitly say
> whether writes are ignored or should generate an exception.
> 
> > 
> > I couldn't easily find in the manual if it is OK to inject an exception
> > on write to a known register.
> 
> I am actually unsure whether it should or not.
> I will run a test to check what happens when this is done on real
> hardware and come back to you on this one.

Yeah, that's the best way to do it: if writes are ignored on real
hardware, let's turn this into read-only/write-ignore; otherwise, if they
generate an exception, let's keep the code as is.

Also you might want to do that both for a known register and also for an
unknown register to see if it makes a difference.

Thank you!


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 22:32:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 22:32:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49954.88327 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knUU2-0003ig-G2; Thu, 10 Dec 2020 22:32:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49954.88327; Thu, 10 Dec 2020 22:32:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knUU2-0003iZ-CQ; Thu, 10 Dec 2020 22:32:22 +0000
Received: by outflank-mailman (input) for mailman id 49954;
 Thu, 10 Dec 2020 22:32:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JUBE=FO=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1knUU1-0003iU-6I
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 22:32:21 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 211d9d38-0ea4-45b4-a6ae-5700815eb985;
 Thu, 10 Dec 2020 22:32:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 211d9d38-0ea4-45b4-a6ae-5700815eb985
Date: Thu, 10 Dec 2020 14:32:19 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607639539;
	bh=W/5uHIYxOBHxbM92gBrbq4qvTGuNeb2AWND1ZJSxW5E=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=K05PJPT56mZPn1F5fDCtkIDBVuHYza/B4RVQcXdUVkyn/rgFti7PDisIgePnmlina
	 PTEzv76ZuM4ALq/BZFyOTMfY8v04vnjiEo+2QaYk4LZMl+npnK9CDUnnMvbRt2etWu
	 LGcMX8JlSTqUgyL0sMVlcHi2OyKGOXY8HTJMTeZ1Ng+mcpm7oO/wbshdo1eAomu2gU
	 4XWfkgu4ShUbmT9k0q5/FcjV0rK2P6qlg1jQqUfI+MgmkanVF8GgAv/7iCaKUier5g
	 slwN78F20D5dTSSqiFc0rQg0dOCGFbdaJjwhoxXiHkTKB9s9rqlU8G/4li6lK7JV3Z
	 KRxpL7nlGxoDw==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 5/7] xen/arm: Add handler for cp15 ID registers
In-Reply-To: <C881A361-12A2-42FF-B64E-7AE8A1F436EC@arm.com>
Message-ID: <alpine.DEB.2.21.2012101429590.20986@sstabellini-ThinkPad-T480s>
References: <cover.1607524536.git.bertrand.marquis@arm.com> <5a36325410f485dbdddc0f6088378cacc54c5243.1607524536.git.bertrand.marquis@arm.com> <alpine.DEB.2.21.2012091153400.20986@sstabellini-ThinkPad-T480s> <C881A361-12A2-42FF-B64E-7AE8A1F436EC@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1306876551-1607639539=:20986"


--8323329-1306876551-1607639539=:20986
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Thu, 10 Dec 2020, Bertrand Marquis wrote:
> > On 9 Dec 2020, at 19:54, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > 
> > On Wed, 9 Dec 2020, Bertrand Marquis wrote:
> >> Add support for emulation of cp15-based ID registers (on arm32 or when
> >> running a 32-bit guest on arm64).
> >> The handlers return the values stored in the guest_cpuinfo structure
> >> for known registers, and RAZ for all reserved registers.
> >> Currently the MVFR registers are not supported.
> >> 
> >> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> >> ---
> >> Changes in V2: Rebase
> >> Changes in V3:
> >>  Add case definition for reserved registers
> >>  Add handling of reserved registers as RAZ.
> >>  Fix code style in GENERATE_TID3_INFO declaration
> >> 
> >> ---
> >> xen/arch/arm/vcpreg.c        | 39 ++++++++++++++++++++++++++++++++++++
> >> xen/include/asm-arm/cpregs.h | 25 +++++++++++++++++++++++
> >> 2 files changed, 64 insertions(+)
> >> 
> >> diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c
> >> index cdc91cdf5b..d371a1c38c 100644
> >> --- a/xen/arch/arm/vcpreg.c
> >> +++ b/xen/arch/arm/vcpreg.c
> >> @@ -155,6 +155,14 @@ TVM_REG32(CONTEXTIDR, CONTEXTIDR_EL1)
> >>         break;                                                      \
> >>     }
> >> 
> >> +/* Macro to easily generate a case for ID co-processor emulation */
> >> +#define GENERATE_TID3_INFO(reg, field, offset)                      \
> >> +    case HSR_CPREG32(reg):                                          \
> >> +    {                                                               \
> >> +        return handle_ro_read_val(regs, regidx, cp32.read, hsr,     \
> >> +                          1, guest_cpuinfo.field.bits[offset]);     \
> >> +    }
> >> +
> >> void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
> >> {
> >>     const struct hsr_cp32 cp32 = hsr.cp32;
> >> @@ -286,6 +294,37 @@ void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
> >>          */
> >>         return handle_raz_wi(regs, regidx, cp32.read, hsr, 1);
> >> 
> >> +    /*
> >> +     * HCR_EL2.TID3
> >> +     *
> >> +     * This is trapping most Identification registers used by a guest
> >> +     * to identify the processor features
> >> +     */
> >> +    GENERATE_TID3_INFO(ID_PFR0, pfr32, 0)
> >> +    GENERATE_TID3_INFO(ID_PFR1, pfr32, 1)
> >> +    GENERATE_TID3_INFO(ID_PFR2, pfr32, 2)
> >> +    GENERATE_TID3_INFO(ID_DFR0, dbg32, 0)
> >> +    GENERATE_TID3_INFO(ID_DFR1, dbg32, 1)
> >> +    GENERATE_TID3_INFO(ID_AFR0, aux32, 0)
> >> +    GENERATE_TID3_INFO(ID_MMFR0, mm32, 0)
> >> +    GENERATE_TID3_INFO(ID_MMFR1, mm32, 1)
> >> +    GENERATE_TID3_INFO(ID_MMFR2, mm32, 2)
> >> +    GENERATE_TID3_INFO(ID_MMFR3, mm32, 3)
> >> +    GENERATE_TID3_INFO(ID_MMFR4, mm32, 4)
> >> +    GENERATE_TID3_INFO(ID_MMFR5, mm32, 5)
> >> +    GENERATE_TID3_INFO(ID_ISAR0, isa32, 0)
> >> +    GENERATE_TID3_INFO(ID_ISAR1, isa32, 1)
> >> +    GENERATE_TID3_INFO(ID_ISAR2, isa32, 2)
> >> +    GENERATE_TID3_INFO(ID_ISAR3, isa32, 3)
> >> +    GENERATE_TID3_INFO(ID_ISAR4, isa32, 4)
> >> +    GENERATE_TID3_INFO(ID_ISAR5, isa32, 5)
> >> +    GENERATE_TID3_INFO(ID_ISAR6, isa32, 6)
> >> +    /* MVFR registers are in cp10, not cp15 */
> >> +
> >> +    HSR_CPREG32_TID3_RESERVED_CASE:
> >> +        /* Handle all reserved registers as RAZ */
> >> +        return handle_ro_raz(regs, regidx, cp32.read, hsr, 1);
> > 
> > Same question as for the aarch64 case: do we need to do write-ignore
> > for the reserved registers?
> 
> The Arm Architecture Reference Manual lists all those registers as RO,
> including the reserved ones (cf. table D12-2). That said, I have no
> objection to making them write-ignore, but from my understanding this
> would not reflect the hardware behaviour.

I think so too, but if you are going to run a test on ARMv8 that's even
better. Then we can apply the same policy (ignore or exception) here
too.

 
> >>     /*
> >>      * HCR_EL2.TIDCP
> >>      *
> >> diff --git a/xen/include/asm-arm/cpregs.h b/xen/include/asm-arm/cpregs.h
> >> index 2690ddeb7a..5cb1ad5cbe 100644
> >> --- a/xen/include/asm-arm/cpregs.h
> >> +++ b/xen/include/asm-arm/cpregs.h
> >> @@ -133,6 +133,31 @@
> >> #define VPIDR           p15,4,c0,c0,0   /* Virtualization Processor ID Register */
> >> #define VMPIDR          p15,4,c0,c0,5   /* Virtualization Multiprocessor ID Register */
> >> 
> >> +/*
> >> + * These cases catch all reserved registers trapped by TID3 which
> >> + * currently have no assignment.
> >> + * HCR.TID3 is trapping all registers in the group 3:
> >> + * coproc == p15, opc1 == 0, CRn == c0, CRm == {c2-c7}, opc2 == {0-7}.
> >> + */
> >> +#define HSR_CPREG32_TID3_CASES(REG)     case HSR_CPREG32(p15,0,c0,REG,0): \
> >> +                                        case HSR_CPREG32(p15,0,c0,REG,1): \
> >> +                                        case HSR_CPREG32(p15,0,c0,REG,2): \
> >> +                                        case HSR_CPREG32(p15,0,c0,REG,3): \
> >> +                                        case HSR_CPREG32(p15,0,c0,REG,4): \
> >> +                                        case HSR_CPREG32(p15,0,c0,REG,5): \
> >> +                                        case HSR_CPREG32(p15,0,c0,REG,6): \
> >> +                                        case HSR_CPREG32(p15,0,c0,REG,7)
> >> +
> >> +#define HSR_CPREG32_TID3_RESERVED_CASE  case HSR_CPREG32(p15,0,c0,c3,0): \
> >> +                                        case HSR_CPREG32(p15,0,c0,c3,1): \
> >> +                                        case HSR_CPREG32(p15,0,c0,c3,2): \
> >> +                                        case HSR_CPREG32(p15,0,c0,c3,3): \
> >> +                                        case HSR_CPREG32(p15,0,c0,c3,7): \
> >> +                                        HSR_CPREG32_TID3_CASES(c4): \
> >> +                                        HSR_CPREG32_TID3_CASES(c5): \
> >> +                                        HSR_CPREG32_TID3_CASES(c6): \
> >> +                                        HSR_CPREG32_TID3_CASES(c7)
> > 
> > The following are missing, is it a problem?
> > 
> > p15,0,c0,c0,2
> > p15,0,c0,c0,3
> > p15,0,c0,c0,4
> > p15,0,c0,c0,6
> > p15,0,c0,c0,7
> 
> The HCR.TID3 documentation says that accesses to "coproc == p15, opc1 == 0,
> CRn == c0, CRm == {c2-c7}, opc2 == {0-7}" are trapped, so CRm == c0 is not handled here.

OK, thank you for checking
--8323329-1306876551-1607639539=:20986--


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 22:56:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 22:56:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49963.88339 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knUrX-0005vJ-6f; Thu, 10 Dec 2020 22:56:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49963.88339; Thu, 10 Dec 2020 22:56:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knUrX-0005vC-3O; Thu, 10 Dec 2020 22:56:39 +0000
Received: by outflank-mailman (input) for mailman id 49963;
 Thu, 10 Dec 2020 22:56:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8YEO=FO=kernel.org=robh@srs-us1.protection.inumbo.net>)
 id 1knUrW-0005v7-KM
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 22:56:38 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 07d469f0-d296-460c-a9ec-bd1312035aa9;
 Thu, 10 Dec 2020 22:56:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 07d469f0-d296-460c-a9ec-bd1312035aa9
X-Gm-Message-State: AOAM533SmKd2FPP/lUnL7fSit+hZHbmfELUke8gQHMYBeyyPGXtju0Mh
	+1J3bC+90/BglH1d8pIyQhX0GvmdLyFanAYCAA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607640996;
	bh=HILNalL3GlPRV+ztE2WnCsMQWzjgnQh5F8d+VjxfhHA=;
	h=References:In-Reply-To:From:Date:Subject:To:Cc:From;
	b=rQAahl+VcS81InegMIe7t5hV41qJEyVO5GTWCewjlXudNepqHfUqqhOrGLwDYfmZ/
	 VQ1zgfZVZQg5u35vkGV3DRmPUAtdMbwFJKC4NKP58rWd+TSt2vhdz8ZD2Px5yPlF74
	 O/DoTMYHdax5dwTKqUknAab3SL3m53gb4nyo9MhmI21pZzOL2Zw6zSR/cXbtqLmXbg
	 DmB4Z5cTanULPaj31OeUS3HJc09HF+snQneei7l4E18B1DnJkCzoWWCXhKaJhMKRec
	 yBRqTr+qkFbjL2ZZeVI3RvSShjYUktjv+258Mjbes4nTf9DlRhfFAiAtqWwaMqYiA5
	 O38B/o73R2+Bw==
X-Google-Smtp-Source: ABdhPJxCdEkGcUPgqgCmU+hGdc9mjJz2mo+m7nRBlZJj4NBRvD8ZfcoTXioyfo8/Gn48VrKyiwGC/1HmZUK4DbzvCWk=
X-Received: by 2002:a17:906:d784:: with SMTP id pj4mr8261525ejb.360.1607640993261;
 Thu, 10 Dec 2020 14:56:33 -0800 (PST)
MIME-Version: 1.0
References: <20201210192536.118432146@linutronix.de> <20201210194044.364211860@linutronix.de>
In-Reply-To: <20201210194044.364211860@linutronix.de>
From: Rob Herring <robh@kernel.org>
Date: Thu, 10 Dec 2020 16:56:21 -0600
X-Gmail-Original-Message-ID: <CAL_JsqKCGkyk9whiGQ0hPyWjSYXnC-TSbot85k7=bwVd0rwC=A@mail.gmail.com>
Message-ID: <CAL_JsqKCGkyk9whiGQ0hPyWjSYXnC-TSbot85k7=bwVd0rwC=A@mail.gmail.com>
Subject: Re: [patch 18/30] PCI: xilinx-nwl: Use irq_data_get_irq_chip_data()
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, Peter Zijlstra <peterz@infradead.org>, 
	Marc Zyngier <maz@kernel.org>, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>, 
	Bjorn Helgaas <bhelgaas@google.com>, Michal Simek <michal.simek@xilinx.com>, 
	PCI <linux-pci@vger.kernel.org>, 
	linux-arm-kernel <linux-arm-kernel@lists.infradead.org>, 
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller <deller@gmx.de>, 
	afzal mohammed <afzal.mohd.ma@gmail.com>, linux-parisc@vger.kernel.org, 
	Russell King <linux@armlinux.org.uk>, Mark Rutland <mark.rutland@arm.com>, 
	Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, 
	Christian Borntraeger <borntraeger@de.ibm.com>, Heiko Carstens <hca@linux.ibm.com>, linux-s390@vger.kernel.org, 
	Jani Nikula <jani.nikula@linux.intel.com>, 
	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>, Rodrigo Vivi <rodrigo.vivi@intel.com>, 
	David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>, 
	Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>, 
	Chris Wilson <chris@chris-wilson.co.uk>, Wambui Karuga <wambui.karugax@gmail.com>, 
	Intel Graphics <intel-gfx@lists.freedesktop.org>, 
	dri-devel <dri-devel@lists.freedesktop.org>, 
	Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>, Linus Walleij <linus.walleij@linaro.org>, 
	"open list:GPIO SUBSYSTEM" <linux-gpio@vger.kernel.org>, Lee Jones <lee.jones@linaro.org>, 
	Jon Mason <jdmason@kudzu.us>, Dave Jiang <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>, 
	linux-ntb@googlegroups.com, Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>, 
	Hou Zhiqiang <Zhiqiang.Hou@nxp.com>, Tariq Toukan <tariqt@nvidia.com>, 
	"David S. Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>, netdev <netdev@vger.kernel.org>, 
	linux-rdma@vger.kernel.org, Saeed Mahameed <saeedm@nvidia.com>, 
	Leon Romanovsky <leon@kernel.org>, Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
	Juergen Gross <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

On Thu, Dec 10, 2020 at 1:42 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> Going through a full irq descriptor lookup instead of just using the proper
> helper function which provides direct access is suboptimal.
>
> In fact it _is_ wrong because the chip callback needs to get the chip data
> which is relevant for the chip while using the irq descriptor variant
> returns the irq chip data of the top level chip of a hierarchy. It does not
> matter in this case because the chip is the top level chip, but that
> doesn't make it more correct.
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
> Cc: Rob Herring <robh@kernel.org>
> Cc: Bjorn Helgaas <bhelgaas@google.com>
> Cc: Michal Simek <michal.simek@xilinx.com>
> Cc: linux-pci@vger.kernel.org
> Cc: linux-arm-kernel@lists.infradead.org
> ---
>  drivers/pci/controller/pcie-xilinx-nwl.c |    8 ++------
>  1 file changed, 2 insertions(+), 6 deletions(-)

Reviewed-by: Rob Herring <robh@kernel.org>


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 22:57:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 22:57:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49968.88351 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knUs7-00060S-Fk; Thu, 10 Dec 2020 22:57:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49968.88351; Thu, 10 Dec 2020 22:57:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knUs7-00060L-CX; Thu, 10 Dec 2020 22:57:15 +0000
Received: by outflank-mailman (input) for mailman id 49968;
 Thu, 10 Dec 2020 22:57:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8YEO=FO=kernel.org=robh@srs-us1.protection.inumbo.net>)
 id 1knUs6-000603-LH
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 22:57:14 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id afb7d339-0d4e-4896-bfc2-568bf7f20aa1;
 Thu, 10 Dec 2020 22:57:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: afb7d339-0d4e-4896-bfc2-568bf7f20aa1
X-Gm-Message-State: AOAM531NHylAM9wr0SV1+7Iy6Wk51s1LRk7rXzOkkIdCRakQLTaDW/oh
	rRkFmm4TtwU37eBDprki6kTiAwmkx7h7yEcH3g==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607641028;
	bh=CumYBB9xVU4mI+ysFQ9GC3xvipGC4R7OnFI2Gfa5FNQ=;
	h=References:In-Reply-To:From:Date:Subject:To:Cc:From;
	b=IENA755C1/rgN0w8w2Z2W86pSCNpqQVdsIn6hGM3SHpMOEjpkwGZtsgL0hRxLES5J
	 GN69vLvpVngVt+noJivc35dT928cbXAlG4h3cswLCydXRYUgIxoqvWvZqpS8aiOmMt
	 05Ld+Z78+akIY5HGQHCAOlr0AaROFROMPYtjN+KD7+S/KXuaWg0xL6AO0iLiPlice4
	 iZANjFb7GSIlf67hQvw0Y9hSU5e2T/JFuteUXmOg3uHY1Ii/HhED2bYqVL2OdaPsEh
	 TPV4vxeX7oyFvjtBzj9WAWsfNEkX/EEF2pxlVmb7f9vuUC3ehG96CfByyDyOuWBqr1
	 447X7kFWCVCJA==
X-Google-Smtp-Source: ABdhPJyKMB0fAO5ccR1ON7eKp3wop/g844H0Em6KIAw65vcwOioEZudsqFxyo2xMX2ascC0HxcRxz5ivmcQz5wtwB24=
X-Received: by 2002:a17:906:c20f:: with SMTP id d15mr8477099ejz.341.1607641026526;
 Thu, 10 Dec 2020 14:57:06 -0800 (PST)
MIME-Version: 1.0
References: <20201210192536.118432146@linutronix.de> <20201210194044.473308721@linutronix.de>
In-Reply-To: <20201210194044.473308721@linutronix.de>
From: Rob Herring <robh@kernel.org>
Date: Thu, 10 Dec 2020 16:56:55 -0600
X-Gmail-Original-Message-ID: <CAL_JsqK4bVyqyT9ip9A5P7gQQwDt1HMksjkCe6bwHrBCGrZYug@mail.gmail.com>
Message-ID: <CAL_JsqK4bVyqyT9ip9A5P7gQQwDt1HMksjkCe6bwHrBCGrZYug@mail.gmail.com>
Subject: Re: [patch 19/30] PCI: mobiveil: Use irq_data_get_irq_chip_data()
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, Peter Zijlstra <peterz@infradead.org>, 
	Marc Zyngier <maz@kernel.org>, Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>, 
	Hou Zhiqiang <Zhiqiang.Hou@nxp.com>, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>, 
	Bjorn Helgaas <bhelgaas@google.com>, PCI <linux-pci@vger.kernel.org>, 
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller <deller@gmx.de>, 
	afzal mohammed <afzal.mohd.ma@gmail.com>, linux-parisc@vger.kernel.org, 
	Russell King <linux@armlinux.org.uk>, 
	linux-arm-kernel <linux-arm-kernel@lists.infradead.org>, Mark Rutland <mark.rutland@arm.com>, 
	Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, 
	Christian Borntraeger <borntraeger@de.ibm.com>, Heiko Carstens <hca@linux.ibm.com>, linux-s390@vger.kernel.org, 
	Jani Nikula <jani.nikula@linux.intel.com>, 
	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>, Rodrigo Vivi <rodrigo.vivi@intel.com>, 
	David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>, 
	Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>, 
	Chris Wilson <chris@chris-wilson.co.uk>, Wambui Karuga <wambui.karugax@gmail.com>, 
	Intel Graphics <intel-gfx@lists.freedesktop.org>, 
	dri-devel <dri-devel@lists.freedesktop.org>, 
	Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>, Linus Walleij <linus.walleij@linaro.org>, 
	"open list:GPIO SUBSYSTEM" <linux-gpio@vger.kernel.org>, Lee Jones <lee.jones@linaro.org>, 
	Jon Mason <jdmason@kudzu.us>, Dave Jiang <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>, 
	linux-ntb@googlegroups.com, Michal Simek <michal.simek@xilinx.com>, 
	Tariq Toukan <tariqt@nvidia.com>, "David S. Miller" <davem@davemloft.net>, 
	Jakub Kicinski <kuba@kernel.org>, netdev <netdev@vger.kernel.org>, linux-rdma@vger.kernel.org, 
	Saeed Mahameed <saeedm@nvidia.com>, Leon Romanovsky <leon@kernel.org>, 
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

On Thu, Dec 10, 2020 at 1:42 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> Going through a full irq descriptor lookup instead of just using the proper
> helper function which provides direct access is suboptimal.
>
> In fact it _is_ wrong because the chip callback needs to get the chip data
> which is relevant for the chip while using the irq descriptor variant
> returns the irq chip data of the top level chip of a hierarchy. It does not
> matter in this case because the chip is the top level chip, but that
> doesn't make it more correct.
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>
> Cc: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
> Cc: Rob Herring <robh@kernel.org>
> Cc: Bjorn Helgaas <bhelgaas@google.com>
> Cc: linux-pci@vger.kernel.org
> ---
>  drivers/pci/controller/mobiveil/pcie-mobiveil-host.c |    8 ++------
>  1 file changed, 2 insertions(+), 6 deletions(-)

Reviewed-by: Rob Herring <robh@kernel.org>


From xen-devel-bounces@lists.xenproject.org Thu Dec 10 23:21:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 23:21:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49976.88364 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knVFr-0000cy-F1; Thu, 10 Dec 2020 23:21:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49976.88364; Thu, 10 Dec 2020 23:21:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knVFr-0000cr-9m; Thu, 10 Dec 2020 23:21:47 +0000
Received: by outflank-mailman (input) for mailman id 49976;
 Thu, 10 Dec 2020 23:21:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3EOQ=FO=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1knVFp-0000cm-TY
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 23:21:46 +0000
Received: from aserp2130.oracle.com (unknown [141.146.126.79])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 72a3a23b-512a-41cc-9e57-1c34b8744a00;
 Thu, 10 Dec 2020 23:21:45 +0000 (UTC)
Received: from pps.filterd (aserp2130.oracle.com [127.0.0.1])
 by aserp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0BANJcXS003074;
 Thu, 10 Dec 2020 23:19:38 GMT
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by aserp2130.oracle.com with ESMTP id 357yqc85cd-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Thu, 10 Dec 2020 23:19:38 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0BAMxr2o074450;
 Thu, 10 Dec 2020 23:19:32 GMT
Received: from aserv0122.oracle.com (aserv0122.oracle.com [141.146.126.236])
 by userp3030.oracle.com with ESMTP id 358m52xf7m-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 10 Dec 2020 23:19:32 +0000
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
 by aserv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 0BANJPdu022291;
 Thu, 10 Dec 2020 23:19:25 GMT
Received: from [10.39.227.125] (/10.39.227.125)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Thu, 10 Dec 2020 15:19:24 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 72a3a23b-512a-41cc-9e57-1c34b8744a00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=+7Mm8NoMJp9j52xNSSS3U0SBEUJij5+TbHj/sEGWvTc=;
 b=u+3l3VwFE77ZdGsGjRddFE+dCehz5qrVKsbBnhGV+nhEsQXQr4wsZggjHTDzlgab5HWW
 W5egKIrrmhPu4ceJfalySBs8+I9FL0posNnaW9CbqbQrGVA0xg9R6RUkmEkz2s2No81L
 nPNXJeIgpYDvFxFB/WQZYYVQRZ8i2VKbURSVU2FxkTzOHIYKNUfXXIo5pBMOc10Am1by
 BIehnBSdx++jQOQE0Sqhoje+VUsTyicTJHRPdgx7QaW/te4Rwue+wm7tniYwBY8WiOcJ
 un2QEN6snEEYNGv5arjms4I4Cg54F0bI7f25KRnMCWmobnf82SR6SJhov103IcOkf0u5 8g== 
Subject: Re: [patch 24/30] xen/events: Remove unused
 bind_evtchn_to_irq_lateeoi()
To: Thomas Gleixner <tglx@linutronix.de>, LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>,
        Juergen Gross <jgross@suse.com>,
        Stefano Stabellini
 <sstabellini@kernel.org>,
        xen-devel@lists.xenproject.org,
        "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
        Helge Deller <deller@gmx.de>, afzal mohammed <afzal.mohd.ma@gmail.com>,
        linux-parisc@vger.kernel.org, Russell King <linux@armlinux.org.uk>,
        linux-arm-kernel@lists.infradead.org,
        Mark Rutland <mark.rutland@arm.com>,
        Catalin Marinas <catalin.marinas@arm.com>,
        Will Deacon <will@kernel.org>,
        Christian Borntraeger <borntraeger@de.ibm.com>,
        Heiko Carstens <hca@linux.ibm.com>, linux-s390@vger.kernel.org,
        Jani Nikula <jani.nikula@linux.intel.com>,
        Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
        Rodrigo Vivi <rodrigo.vivi@intel.com>, David Airlie <airlied@linux.ie>,
        Daniel Vetter <daniel@ffwll.ch>,
        Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
        Chris Wilson <chris@chris-wilson.co.uk>,
        Wambui Karuga <wambui.karugax@gmail.com>,
        intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
        Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
        Linus Walleij <linus.walleij@linaro.org>, linux-gpio@vger.kernel.org,
        Lee Jones <lee.jones@linaro.org>, Jon Mason <jdmason@kudzu.us>,
        Dave Jiang <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>,
        linux-ntb@googlegroups.com,
        Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
        Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>,
        Michal Simek <michal.simek@xilinx.com>, linux-pci@vger.kernel.org,
        Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
        Hou Zhiqiang <Zhiqiang.Hou@nxp.com>, Tariq Toukan <tariqt@nvidia.com>,
        "David S. Miller" <davem@davemloft.net>,
        Jakub Kicinski <kuba@kernel.org>, netdev@vger.kernel.org,
        linux-rdma@vger.kernel.org, Saeed Mahameed <saeedm@nvidia.com>,
        Leon Romanovsky <leon@kernel.org>
References: <20201210192536.118432146@linutronix.de>
 <20201210194044.972064156@linutronix.de>
From: boris.ostrovsky@oracle.com
Organization: Oracle Corporation
Message-ID: <748d8d81-ac0f-aee2-1a56-ba9c40fee52f@oracle.com>
Date: Thu, 10 Dec 2020 18:19:19 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201210194044.972064156@linutronix.de>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9831 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0 spamscore=0 suspectscore=0
 bulkscore=0 malwarescore=0 phishscore=0 adultscore=0 mlxlogscore=999
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2012100148
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9831 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0 mlxlogscore=999
 clxscore=1011 malwarescore=0 bulkscore=0 phishscore=0 adultscore=0
 spamscore=0 priorityscore=1501 mlxscore=0 lowpriorityscore=0
 impostorscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2012100149


On 12/10/20 2:26 PM, Thomas Gleixner wrote:
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: Juergen Gross <jgross@suse.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: xen-devel@lists.xenproject.org
> ---
>  drivers/xen/events/events_base.c |    6 ------
>  1 file changed, 6 deletions(-)
>
> --- a/drivers/xen/events/events_base.c
> +++ b/drivers/xen/events/events_base.c
> @@ -1132,12 +1132,6 @@ int bind_evtchn_to_irq(evtchn_port_t evt
>  }
>  EXPORT_SYMBOL_GPL(bind_evtchn_to_irq);
>  
> -int bind_evtchn_to_irq_lateeoi(evtchn_port_t evtchn)
> -{
> -	return bind_evtchn_to_irq_chip(evtchn, &xen_lateeoi_chip);
> -}
> -EXPORT_SYMBOL_GPL(bind_evtchn_to_irq_lateeoi);



include/xen/events.h also needs to be updated (and in the next patch for xen_set_affinity_evtchn() as well).


-boris



From xen-devel-bounces@lists.xenproject.org Thu Dec 10 23:21:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 10 Dec 2020 23:21:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49977.88374 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knVFy-0000fZ-Pk; Thu, 10 Dec 2020 23:21:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49977.88374; Thu, 10 Dec 2020 23:21:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knVFy-0000fQ-MG; Thu, 10 Dec 2020 23:21:54 +0000
Received: by outflank-mailman (input) for mailman id 49977;
 Thu, 10 Dec 2020 23:21:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3EOQ=FO=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1knVFw-0000el-P0
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 23:21:52 +0000
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a86c38ee-5b62-4c94-a66e-7ae73e4d210e;
 Thu, 10 Dec 2020 23:21:52 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0BANJcxD076282;
 Thu, 10 Dec 2020 23:21:08 GMT
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by aserp2120.oracle.com with ESMTP id 35825mg2bs-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Thu, 10 Dec 2020 23:21:08 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0BANKtw7149340;
 Thu, 10 Dec 2020 23:21:07 GMT
Received: from aserv0121.oracle.com (aserv0121.oracle.com [141.146.126.235])
 by aserp3020.oracle.com with ESMTP id 358m42f5be-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 10 Dec 2020 23:21:07 +0000
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
 by aserv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 0BANKp8g025210;
 Thu, 10 Dec 2020 23:20:51 GMT
Received: from [10.39.227.125] (/10.39.227.125)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Thu, 10 Dec 2020 15:20:51 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a86c38ee-5b62-4c94-a66e-7ae73e4d210e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=g9HQMgG8/lH1Xl7UdSJNvDAMRQCo3FWW7Bk60ydYMFM=;
 b=yiiSqUT1ul8LHgAfKjBlIB4yYYSpi+fGANCSKFFlOWkOUouS3Dv2O9HSPvMZHyva2HLa
 t2URiRpBCtd04nx9Lq/bUlqBSfAvdgny9AZ7r99NqQl+3SLxNcNacBmNDpvbnkmq+RJf
 tkHuWp0PFf1OqSZUrQZdo1op9pLE6PCegbVRzlACJRrfsJakcoaM7H+sCwUihUEMr2fQ
 dVrKn7O2B0uCiKt4l0ihiQonSCwdjdrSogLQz8qpbD6/3t1pwKq6cTqj0HVlsjBYL0mf
 j+yxZxQYgeFZOyVTmAVytFe3f0zAIJgGCpXDL4UlTLva10Xtrln9de8y9hZfow66cFAT Dg== 
Subject: Re: [patch 27/30] xen/events: Only force affinity mask for percpu
 interrupts
To: Thomas Gleixner <tglx@linutronix.de>, LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>,
        Juergen Gross <jgross@suse.com>,
        Stefano Stabellini
 <sstabellini@kernel.org>,
        xen-devel@lists.xenproject.org,
        "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
        Helge Deller <deller@gmx.de>, afzal mohammed <afzal.mohd.ma@gmail.com>,
        linux-parisc@vger.kernel.org, Russell King <linux@armlinux.org.uk>,
        linux-arm-kernel@lists.infradead.org,
        Mark Rutland <mark.rutland@arm.com>,
        Catalin Marinas <catalin.marinas@arm.com>,
        Will Deacon <will@kernel.org>,
        Christian Borntraeger <borntraeger@de.ibm.com>,
        Heiko Carstens <hca@linux.ibm.com>, linux-s390@vger.kernel.org,
        Jani Nikula <jani.nikula@linux.intel.com>,
        Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
        Rodrigo Vivi <rodrigo.vivi@intel.com>, David Airlie <airlied@linux.ie>,
        Daniel Vetter <daniel@ffwll.ch>,
        Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
        Chris Wilson <chris@chris-wilson.co.uk>,
        Wambui Karuga <wambui.karugax@gmail.com>,
        intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
        Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
        Linus Walleij <linus.walleij@linaro.org>, linux-gpio@vger.kernel.org,
        Lee Jones <lee.jones@linaro.org>, Jon Mason <jdmason@kudzu.us>,
        Dave Jiang <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>,
        linux-ntb@googlegroups.com,
        Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
        Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>,
        Michal Simek <michal.simek@xilinx.com>, linux-pci@vger.kernel.org,
        Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
        Hou Zhiqiang <Zhiqiang.Hou@nxp.com>, Tariq Toukan <tariqt@nvidia.com>,
        "David S. Miller" <davem@davemloft.net>,
        Jakub Kicinski <kuba@kernel.org>, netdev@vger.kernel.org,
        linux-rdma@vger.kernel.org, Saeed Mahameed <saeedm@nvidia.com>,
        Leon Romanovsky <leon@kernel.org>
References: <20201210192536.118432146@linutronix.de>
 <20201210194045.250321315@linutronix.de>
From: boris.ostrovsky@oracle.com
Organization: Oracle Corporation
Message-ID: <7f7af60f-567f-cdef-f8db-8062a44758ce@oracle.com>
Date: Thu, 10 Dec 2020 18:20:46 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201210194045.250321315@linutronix.de>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9831 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0 malwarescore=0 adultscore=0
 bulkscore=0 phishscore=0 suspectscore=0 mlxscore=0 mlxlogscore=999
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2012100149
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9831 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0 adultscore=0 bulkscore=0
 phishscore=0 mlxlogscore=999 clxscore=1015 priorityscore=1501 mlxscore=0
 spamscore=0 lowpriorityscore=0 malwarescore=0 impostorscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2012100149


On 12/10/20 2:26 PM, Thomas Gleixner wrote:
> All event channel setups bind the interrupt on CPU0 or the target CPU for
> percpu interrupts and overwrite the affinity mask with the corresponding
> cpumask. That does not make sense.
>
> The XEN implementation of irqchip::irq_set_affinity() already picks a
> single target CPU out of the affinity mask and the actual target is stored
> in the effective CPU mask, so destroying the user chosen affinity mask
> which might contain more than one CPU is wrong.
>
> Change the implementation so that the channel is bound to CPU0 at the XEN
> level and leave the affinity mask alone. At startup of the interrupt,
> affinity will be assigned out of the affinity mask and the XEN binding
> will be updated.


If that's the case, I wonder whether we need this call at all, or whether we could instead bind at startup time.


-boris


> Only keep the enforcement for real percpu interrupts.


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 00:04:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 00:04:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49990.88387 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knVvO-0005WH-QV; Fri, 11 Dec 2020 00:04:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49990.88387; Fri, 11 Dec 2020 00:04:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knVvO-0005WA-MY; Fri, 11 Dec 2020 00:04:42 +0000
Received: by outflank-mailman (input) for mailman id 49990;
 Fri, 11 Dec 2020 00:04:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YvCS=FP=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knVvM-0005W5-Un
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 00:04:40 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 19051dbb-3b0b-4225-bd80-a9665a49555a;
 Fri, 11 Dec 2020 00:04:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 19051dbb-3b0b-4225-bd80-a9665a49555a
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607645077;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=u4caiGj9t4ml+Fg2/LjvRAStceu1HqnXy5JxnlwmdFA=;
	b=eJe5C3lGb4euYOQ5NqFfsUko6q7Wl5m2rZaxuQ1CHGu4Pm86NUNkZ6G0XU9RxY87NzbBDc
	ldDhHBRuz8FurePK9zxQgdZ8itpY4Ib/oRUPYjSjeYReInLr9h7ip+USHGm7wSsyPmDxLv
	kj/1ZDcjtWRNli3PMHxskKmD4HwrP9Z+f8gXizycgXKbIaIDlgPtawVOXPrlZTSGX8Zfnf
	IWYCSqSxGsHEKzkDR85bq7gyV8KfYwwNkrQ2+SYN1BD6txCpqD9nrnJuEv/HOAgcIkRmdB
	DiW9PCPEtazOZRhcxwcj1i5rFCGn+T95DeUWuc+jttPExN8BouNdQm/34fvnLQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607645077;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=u4caiGj9t4ml+Fg2/LjvRAStceu1HqnXy5JxnlwmdFA=;
	b=z8XrxSbQKj1sp6kNzbU4BhMX1SALkTHGkd40WvYT5KLsIfRVa3pnqtOT9ywbS00FEUqpMv
	tXHEfFQ9bRMVcSAg==
To: boris.ostrovsky@oracle.com, LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>,
 Juergen Gross <jgross@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, xen-devel@lists.xenproject.org, "James E.J.
 Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller
 <deller@gmx.de>, afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org, Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>, Heiko Carstens
 <hca@linux.ibm.com>, linux-s390@vger.kernel.org, Jani Nikula
 <jani.nikula@linux.intel.com>, Joonas Lahtinen
 <joonas.lahtinen@linux.intel.com>, Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>, Pankaj
 Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>, Chris Wilson
 <chris@chris-wilson.co.uk>, Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, Tvrtko
 Ursulin <tvrtko.ursulin@linux.intel.com>, Linus Walleij
 <linus.walleij@linaro.org>, linux-gpio@vger.kernel.org, Lee Jones
 <lee.jones@linaro.org>, Jon Mason <jdmason@kudzu.us>, Dave Jiang
 <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>, Michal
 Simek <michal.simek@xilinx.com>, linux-pci@vger.kernel.org, Karthikeyan
 Mitran <m.karthikeyan@mobiveil.co.in>, Hou Zhiqiang
 <Zhiqiang.Hou@nxp.com>, Tariq Toukan <tariqt@nvidia.com>, "David S.
 Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org, linux-rdma@vger.kernel.org, Saeed Mahameed
 <saeedm@nvidia.com>, Leon Romanovsky <leon@kernel.org>
Subject: Re: [patch 24/30] xen/events: Remove unused bind_evtchn_to_irq_lateeoi()
In-Reply-To: <748d8d81-ac0f-aee2-1a56-ba9c40fee52f@oracle.com>
References: <20201210192536.118432146@linutronix.de> <20201210194044.972064156@linutronix.de> <748d8d81-ac0f-aee2-1a56-ba9c40fee52f@oracle.com>
Date: Fri, 11 Dec 2020 01:04:37 +0100
Message-ID: <87im99i4x6.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain

On Thu, Dec 10 2020 at 18:19, boris ostrovsky wrote:
> On 12/10/20 2:26 PM, Thomas Gleixner wrote:
>> -EXPORT_SYMBOL_GPL(bind_evtchn_to_irq_lateeoi);
>
> include/xen/events.h also needs to be updated (and in the next patch for xen_set_affinity_evtchn() as well).

Darn, I lost that.


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 00:07:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 00:07:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49995.88399 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knVxf-0005ev-7r; Fri, 11 Dec 2020 00:07:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49995.88399; Fri, 11 Dec 2020 00:07:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knVxf-0005eo-3p; Fri, 11 Dec 2020 00:07:03 +0000
Received: by outflank-mailman (input) for mailman id 49995;
 Fri, 11 Dec 2020 00:07:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YvCS=FP=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knVxe-0005ei-Bp
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 00:07:02 +0000
Received: from galois.linutronix.de (unknown [2a0a:51c0:0:12e:550::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9db10e3d-6517-4091-b077-8ae4bd4b2d4b;
 Fri, 11 Dec 2020 00:07:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9db10e3d-6517-4091-b077-8ae4bd4b2d4b
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607645220;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QO8ZPs/aGTq6EeeiALBW4EwIli8xcBfNVbHQE3SuKNs=;
	b=gGpzgKDIdOJvLB57zkDYdB5nPR7tx5lmvyUREbAkcjTKLREnSTar6ATS+ovX5gwzQX+6as
	+gKXjZ78+gchUrT83h8ptRu8KDTcmgA5V5jrxe+Iroa2IR3dW4mzdsbfdAz67KCaNXJRFh
	HyT/Fqjz1nJPJbfByhhTG/yrTVt1YV6bo+ZpfwQUI97MYBpnXtPk3xo0rt3TJ6ElYxaC8x
	LYoOgKlBWXABRporT9mN8YQ9yVhNAWvziSIf5Q/k/u/zwNWg/xf9bb8eHNApJHV92yI6Fh
	8Vq/6Agy+63spk92uys0SUp9GgNSiQePxXptHhOLRD+usYLs0evVgfm0CC10Uw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607645220;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QO8ZPs/aGTq6EeeiALBW4EwIli8xcBfNVbHQE3SuKNs=;
	b=fI8WyKpmQQfuF7v98I0m9jYr52j6v9il/Y1kYa+Nz5btUfOeJehvfvrp82h91pkkXnG/cc
	g+EFTi6ulxrthHDw==
To: boris.ostrovsky@oracle.com, LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>,
 Juergen Gross <jgross@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, xen-devel@lists.xenproject.org, "James E.J.
 Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller
 <deller@gmx.de>, afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org, Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>, Heiko Carstens
 <hca@linux.ibm.com>, linux-s390@vger.kernel.org, Jani Nikula
 <jani.nikula@linux.intel.com>, Joonas Lahtinen
 <joonas.lahtinen@linux.intel.com>, Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>, Pankaj
 Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>, Chris Wilson
 <chris@chris-wilson.co.uk>, Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, Tvrtko
 Ursulin <tvrtko.ursulin@linux.intel.com>, Linus Walleij
 <linus.walleij@linaro.org>, linux-gpio@vger.kernel.org, Lee Jones
 <lee.jones@linaro.org>, Jon Mason <jdmason@kudzu.us>, Dave Jiang
 <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>, Michal
 Simek <michal.simek@xilinx.com>, linux-pci@vger.kernel.org, Karthikeyan
 Mitran <m.karthikeyan@mobiveil.co.in>, Hou Zhiqiang
 <Zhiqiang.Hou@nxp.com>, Tariq Toukan <tariqt@nvidia.com>, "David S.
 Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org, linux-rdma@vger.kernel.org, Saeed Mahameed
 <saeedm@nvidia.com>, Leon Romanovsky <leon@kernel.org>
Subject: Re: [patch 27/30] xen/events: Only force affinity mask for percpu interrupts
In-Reply-To: <7f7af60f-567f-cdef-f8db-8062a44758ce@oracle.com>
References: <20201210192536.118432146@linutronix.de> <20201210194045.250321315@linutronix.de> <7f7af60f-567f-cdef-f8db-8062a44758ce@oracle.com>
Date: Fri, 11 Dec 2020 01:06:59 +0100
Message-ID: <87ft4di4t8.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

On Thu, Dec 10 2020 at 18:20, boris ostrovsky wrote:
> On 12/10/20 2:26 PM, Thomas Gleixner wrote:
>> All event channel setups bind the interrupt on CPU0 or the target CPU for
>> percpu interrupts and overwrite the affinity mask with the corresponding
>> cpumask. That does not make sense.
>>
>> The XEN implementation of irqchip::irq_set_affinity() already picks a
>> single target CPU out of the affinity mask and the actual target is stored
>> in the effective CPU mask, so destroying the user chosen affinity mask
>> which might contain more than one CPU is wrong.
>>
>> Change the implementation so that the channel is bound to CPU0 at the XEN
>> level and leave the affinity mask alone. At startup of the interrupt
>> affinity will be assigned out of the affinity mask and the XEN binding will
>> be updated.
>
> If that's the case then I wonder whether we need this call at all and
> instead bind at startup time.

I was wondering about that, but my knowledge about the Xen internal
requirements is pretty limited. The current set at least survived basic
testing by Jürgen.

Thanks,

        tglx


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 01:28:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 01:28:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50009.88441 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knXEP-0005bL-TD; Fri, 11 Dec 2020 01:28:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50009.88441; Fri, 11 Dec 2020 01:28:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knXEP-0005bA-Pf; Fri, 11 Dec 2020 01:28:25 +0000
Received: by outflank-mailman (input) for mailman id 50009;
 Fri, 11 Dec 2020 01:28:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZKjA=FP=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1knXEN-0005ZT-OB
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 01:28:23 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e6b2e8ba-9845-4735-96f2-9ccf30ed8994;
 Fri, 11 Dec 2020 01:28:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e6b2e8ba-9845-4735-96f2-9ccf30ed8994
Date: Thu, 10 Dec 2020 17:28:14 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607650095;
	bh=iyyqipsbCLlHKxLAz8c/bs5tWrZdfYgefU9c13GGfOM=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=oLrjORxmeXUsg08TeXT8a6+T7EPURThENYNqaKvqvEGDQ26jcX8tUrTR2yzzkwuN6
	 y4p+RZXdQ7Gk9qoXoTRwRT3HpVWvB80kaRmFFLcv8WVYBcmF4X+08uH7eeQxdwSkcK
	 Io6Jg7H6EZu0Z++DDAMV4d2UvDIg/T/liyCOxbG7Ds1Rc34iy2WiFr9FvcLpHZ8QaN
	 IfeUJo9C3oqsEgTVqpwNsp7a08urz2uuLiITTH1wwJcG81vABFRctEkY41tVcJ4VXx
	 m03L2yGS2bNo2NY3BcSVL+DbThxVK4EWjiBDURK2zlgoGw5o0bKgxuwrmcPC94Uoo+
	 6Ho5L66mSs00A==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 1/8] xen/arm: Import the SMMUv3 driver from Linux
In-Reply-To: <db84c98cf5f59757d6e225e9aeae62d2757ea646.1607617848.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2012101525470.6285@sstabellini-ThinkPad-T480s>
References: <cover.1607617848.git.rahul.singh@arm.com> <db84c98cf5f59757d6e225e9aeae62d2757ea646.1607617848.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-2020296028-1607642759=:6285"
Content-ID: <alpine.DEB.2.21.2012101728120.6285@sstabellini-ThinkPad-T480s>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-2020296028-1607642759=:6285
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2012101728121.6285@sstabellini-ThinkPad-T480s>

On Thu, 10 Dec 2020, Rahul Singh wrote:
> Based on tag Linux 5.8.18 commit ab435ce49bd1d02e33dfec24f76955dc1196970b
> 
> The directory structure for the SMMUv3 driver changed starting from
> Linux 5.9; to be able to revert the patches smoothly using the
> "git revert" command, we decided to choose Linux 5.8.18.
> 
> The only difference between the latest stable Linux 5.9.12 and the
> Linux 5.8.18 SMMUv3 driver is the use of the "fallthrough" keyword.
> This patch will be merged once a "fallthrough" keyword implementation
> is available in XEN.
> 
> It's a copy of the Linux SMMUv3 driver. Xen-specific code has not
> been added yet and the code has not been compiled.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Changes in v3:
> - Import the driver from Linux 5.8.18 instead of Linux 5.9.8, which
>   the previous version used. Linux 5.8.18 has been chosen so the
>   required changes can be reverted smoothly, as the directory
>   structure for the SMMUv3 driver changed starting from 5.9. The only
>   difference between Linux 5.8.18 and Linux 5.9.8 is the use of the
>   fallthrough keyword.
> 
> ---
>  xen/drivers/passthrough/arm/smmu-v3.c | 4165 +++++++++++++++++++++++++
>  1 file changed, 4165 insertions(+)
>  create mode 100644 xen/drivers/passthrough/arm/smmu-v3.c
> 
> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> new file mode 100644
> index 0000000000..f578677a5c
> --- /dev/null
> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> @@ -0,0 +1,4165 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * IOMMU API for ARM architected SMMUv3 implementations.
> + *
> + * Copyright (C) 2015 ARM Limited
> + *
> + * Author: Will Deacon <will.deacon@arm.com>
> + *
> + * This driver is powered by bad coffee and bombay mix.
> + */
> +
> +#include <linux/acpi.h>
> +#include <linux/acpi_iort.h>
> +#include <linux/bitfield.h>
> +#include <linux/bitops.h>
> +#include <linux/crash_dump.h>
> +#include <linux/delay.h>
> +#include <linux/dma-iommu.h>
> +#include <linux/err.h>
> +#include <linux/interrupt.h>
> +#include <linux/io-pgtable.h>
> +#include <linux/iommu.h>
> +#include <linux/iopoll.h>
> +#include <linux/module.h>
> +#include <linux/msi.h>
> +#include <linux/of.h>
> +#include <linux/of_address.h>
> +#include <linux/of_iommu.h>
> +#include <linux/of_platform.h>
> +#include <linux/pci.h>
> +#include <linux/pci-ats.h>
> +#include <linux/platform_device.h>
> +
> +#include <linux/amba/bus.h>
> +
> +/* MMIO registers */
> +#define ARM_SMMU_IDR0			0x0
> +#define IDR0_ST_LVL			GENMASK(28, 27)
> +#define IDR0_ST_LVL_2LVL		1
> +#define IDR0_STALL_MODEL		GENMASK(25, 24)
> +#define IDR0_STALL_MODEL_STALL		0
> +#define IDR0_STALL_MODEL_FORCE		2
> +#define IDR0_TTENDIAN			GENMASK(22, 21)
> +#define IDR0_TTENDIAN_MIXED		0
> +#define IDR0_TTENDIAN_LE		2
> +#define IDR0_TTENDIAN_BE		3
> +#define IDR0_CD2L			(1 << 19)
> +#define IDR0_VMID16			(1 << 18)
> +#define IDR0_PRI			(1 << 16)
> +#define IDR0_SEV			(1 << 14)
> +#define IDR0_MSI			(1 << 13)
> +#define IDR0_ASID16			(1 << 12)
> +#define IDR0_ATS			(1 << 10)
> +#define IDR0_HYP			(1 << 9)
> +#define IDR0_COHACC			(1 << 4)
> +#define IDR0_TTF			GENMASK(3, 2)
> +#define IDR0_TTF_AARCH64		2
> +#define IDR0_TTF_AARCH32_64		3
> +#define IDR0_S1P			(1 << 1)
> +#define IDR0_S2P			(1 << 0)
> +
> +#define ARM_SMMU_IDR1			0x4
> +#define IDR1_TABLES_PRESET		(1 << 30)
> +#define IDR1_QUEUES_PRESET		(1 << 29)
> +#define IDR1_REL			(1 << 28)
> +#define IDR1_CMDQS			GENMASK(25, 21)
> +#define IDR1_EVTQS			GENMASK(20, 16)
> +#define IDR1_PRIQS			GENMASK(15, 11)
> +#define IDR1_SSIDSIZE			GENMASK(10, 6)
> +#define IDR1_SIDSIZE			GENMASK(5, 0)
> +
> +#define ARM_SMMU_IDR3			0xc
> +#define IDR3_RIL			(1 << 10)
> +
> +#define ARM_SMMU_IDR5			0x14
> +#define IDR5_STALL_MAX			GENMASK(31, 16)
> +#define IDR5_GRAN64K			(1 << 6)
> +#define IDR5_GRAN16K			(1 << 5)
> +#define IDR5_GRAN4K			(1 << 4)
> +#define IDR5_OAS			GENMASK(2, 0)
> +#define IDR5_OAS_32_BIT			0
> +#define IDR5_OAS_36_BIT			1
> +#define IDR5_OAS_40_BIT			2
> +#define IDR5_OAS_42_BIT			3
> +#define IDR5_OAS_44_BIT			4
> +#define IDR5_OAS_48_BIT			5
> +#define IDR5_OAS_52_BIT			6
> +#define IDR5_VAX			GENMASK(11, 10)
> +#define IDR5_VAX_52_BIT			1
> +
> +#define ARM_SMMU_CR0			0x20
> +#define CR0_ATSCHK			(1 << 4)
> +#define CR0_CMDQEN			(1 << 3)
> +#define CR0_EVTQEN			(1 << 2)
> +#define CR0_PRIQEN			(1 << 1)
> +#define CR0_SMMUEN			(1 << 0)
> +
> +#define ARM_SMMU_CR0ACK			0x24
> +
> +#define ARM_SMMU_CR1			0x28
> +#define CR1_TABLE_SH			GENMASK(11, 10)
> +#define CR1_TABLE_OC			GENMASK(9, 8)
> +#define CR1_TABLE_IC			GENMASK(7, 6)
> +#define CR1_QUEUE_SH			GENMASK(5, 4)
> +#define CR1_QUEUE_OC			GENMASK(3, 2)
> +#define CR1_QUEUE_IC			GENMASK(1, 0)
> +/* CR1 cacheability fields don't quite follow the usual TCR-style encoding */
> +#define CR1_CACHE_NC			0
> +#define CR1_CACHE_WB			1
> +#define CR1_CACHE_WT			2
> +
> +#define ARM_SMMU_CR2			0x2c
> +#define CR2_PTM				(1 << 2)
> +#define CR2_RECINVSID			(1 << 1)
> +#define CR2_E2H				(1 << 0)
> +
> +#define ARM_SMMU_GBPA			0x44
> +#define GBPA_UPDATE			(1 << 31)
> +#define GBPA_ABORT			(1 << 20)
> +
> +#define ARM_SMMU_IRQ_CTRL		0x50
> +#define IRQ_CTRL_EVTQ_IRQEN		(1 << 2)
> +#define IRQ_CTRL_PRIQ_IRQEN		(1 << 1)
> +#define IRQ_CTRL_GERROR_IRQEN		(1 << 0)
> +
> +#define ARM_SMMU_IRQ_CTRLACK		0x54
> +
> +#define ARM_SMMU_GERROR			0x60
> +#define GERROR_SFM_ERR			(1 << 8)
> +#define GERROR_MSI_GERROR_ABT_ERR	(1 << 7)
> +#define GERROR_MSI_PRIQ_ABT_ERR		(1 << 6)
> +#define GERROR_MSI_EVTQ_ABT_ERR		(1 << 5)
> +#define GERROR_MSI_CMDQ_ABT_ERR		(1 << 4)
> +#define GERROR_PRIQ_ABT_ERR		(1 << 3)
> +#define GERROR_EVTQ_ABT_ERR		(1 << 2)
> +#define GERROR_CMDQ_ERR			(1 << 0)
> +#define GERROR_ERR_MASK			0xfd
> +
> +#define ARM_SMMU_GERRORN		0x64
> +
> +#define ARM_SMMU_GERROR_IRQ_CFG0	0x68
> +#define ARM_SMMU_GERROR_IRQ_CFG1	0x70
> +#define ARM_SMMU_GERROR_IRQ_CFG2	0x74
> +
> +#define ARM_SMMU_STRTAB_BASE		0x80
> +#define STRTAB_BASE_RA			(1UL << 62)
> +#define STRTAB_BASE_ADDR_MASK		GENMASK_ULL(51, 6)
> +
> +#define ARM_SMMU_STRTAB_BASE_CFG	0x88
> +#define STRTAB_BASE_CFG_FMT		GENMASK(17, 16)
> +#define STRTAB_BASE_CFG_FMT_LINEAR	0
> +#define STRTAB_BASE_CFG_FMT_2LVL	1
> +#define STRTAB_BASE_CFG_SPLIT		GENMASK(10, 6)
> +#define STRTAB_BASE_CFG_LOG2SIZE	GENMASK(5, 0)
> +
> +#define ARM_SMMU_CMDQ_BASE		0x90
> +#define ARM_SMMU_CMDQ_PROD		0x98
> +#define ARM_SMMU_CMDQ_CONS		0x9c
> +
> +#define ARM_SMMU_EVTQ_BASE		0xa0
> +#define ARM_SMMU_EVTQ_PROD		0x100a8
> +#define ARM_SMMU_EVTQ_CONS		0x100ac
> +#define ARM_SMMU_EVTQ_IRQ_CFG0		0xb0
> +#define ARM_SMMU_EVTQ_IRQ_CFG1		0xb8
> +#define ARM_SMMU_EVTQ_IRQ_CFG2		0xbc
> +
> +#define ARM_SMMU_PRIQ_BASE		0xc0
> +#define ARM_SMMU_PRIQ_PROD		0x100c8
> +#define ARM_SMMU_PRIQ_CONS		0x100cc
> +#define ARM_SMMU_PRIQ_IRQ_CFG0		0xd0
> +#define ARM_SMMU_PRIQ_IRQ_CFG1		0xd8
> +#define ARM_SMMU_PRIQ_IRQ_CFG2		0xdc
> +
> +#define ARM_SMMU_REG_SZ			0xe00
> +
> +/* Common MSI config fields */
> +#define MSI_CFG0_ADDR_MASK		GENMASK_ULL(51, 2)
> +#define MSI_CFG2_SH			GENMASK(5, 4)
> +#define MSI_CFG2_MEMATTR		GENMASK(3, 0)
> +
> +/* Common memory attribute values */
> +#define ARM_SMMU_SH_NSH			0
> +#define ARM_SMMU_SH_OSH			2
> +#define ARM_SMMU_SH_ISH			3
> +#define ARM_SMMU_MEMATTR_DEVICE_nGnRE	0x1
> +#define ARM_SMMU_MEMATTR_OIWB		0xf
> +
> +#define Q_IDX(llq, p)			((p) & ((1 << (llq)->max_n_shift) - 1))
> +#define Q_WRP(llq, p)			((p) & (1 << (llq)->max_n_shift))
> +#define Q_OVERFLOW_FLAG			(1U << 31)
> +#define Q_OVF(p)			((p) & Q_OVERFLOW_FLAG)
> +#define Q_ENT(q, p)			((q)->base +			\
> +					 Q_IDX(&((q)->llq), p) *	\
> +					 (q)->ent_dwords)
> +
> +#define Q_BASE_RWA			(1UL << 62)
> +#define Q_BASE_ADDR_MASK		GENMASK_ULL(51, 5)
> +#define Q_BASE_LOG2SIZE			GENMASK(4, 0)
> +
> +/* Ensure DMA allocations are naturally aligned */
> +#ifdef CONFIG_CMA_ALIGNMENT
> +#define Q_MAX_SZ_SHIFT			(PAGE_SHIFT + CONFIG_CMA_ALIGNMENT)
> +#else
> +#define Q_MAX_SZ_SHIFT			(PAGE_SHIFT + MAX_ORDER - 1)
> +#endif
> +
> +/*
> + * Stream table.
> + *
> + * Linear: Enough to cover 1 << IDR1.SIDSIZE entries
> + * 2lvl: 128k L1 entries,
> + *       256 lazy entries per table (each table covers a PCI bus)
> + */
> +#define STRTAB_L1_SZ_SHIFT		20
> +#define STRTAB_SPLIT			8
> +
> +#define STRTAB_L1_DESC_DWORDS		1
> +#define STRTAB_L1_DESC_SPAN		GENMASK_ULL(4, 0)
> +#define STRTAB_L1_DESC_L2PTR_MASK	GENMASK_ULL(51, 6)
> +
> +#define STRTAB_STE_DWORDS		8
> +#define STRTAB_STE_0_V			(1UL << 0)
> +#define STRTAB_STE_0_CFG		GENMASK_ULL(3, 1)
> +#define STRTAB_STE_0_CFG_ABORT		0
> +#define STRTAB_STE_0_CFG_BYPASS		4
> +#define STRTAB_STE_0_CFG_S1_TRANS	5
> +#define STRTAB_STE_0_CFG_S2_TRANS	6
> +
> +#define STRTAB_STE_0_S1FMT		GENMASK_ULL(5, 4)
> +#define STRTAB_STE_0_S1FMT_LINEAR	0
> +#define STRTAB_STE_0_S1FMT_64K_L2	2
> +#define STRTAB_STE_0_S1CTXPTR_MASK	GENMASK_ULL(51, 6)
> +#define STRTAB_STE_0_S1CDMAX		GENMASK_ULL(63, 59)
> +
> +#define STRTAB_STE_1_S1DSS		GENMASK_ULL(1, 0)
> +#define STRTAB_STE_1_S1DSS_TERMINATE	0x0
> +#define STRTAB_STE_1_S1DSS_BYPASS	0x1
> +#define STRTAB_STE_1_S1DSS_SSID0	0x2
> +
> +#define STRTAB_STE_1_S1C_CACHE_NC	0UL
> +#define STRTAB_STE_1_S1C_CACHE_WBRA	1UL
> +#define STRTAB_STE_1_S1C_CACHE_WT	2UL
> +#define STRTAB_STE_1_S1C_CACHE_WB	3UL
> +#define STRTAB_STE_1_S1CIR		GENMASK_ULL(3, 2)
> +#define STRTAB_STE_1_S1COR		GENMASK_ULL(5, 4)
> +#define STRTAB_STE_1_S1CSH		GENMASK_ULL(7, 6)
> +
> +#define STRTAB_STE_1_S1STALLD		(1UL << 27)
> +
> +#define STRTAB_STE_1_EATS		GENMASK_ULL(29, 28)
> +#define STRTAB_STE_1_EATS_ABT		0UL
> +#define STRTAB_STE_1_EATS_TRANS		1UL
> +#define STRTAB_STE_1_EATS_S1CHK		2UL
> +
> +#define STRTAB_STE_1_STRW		GENMASK_ULL(31, 30)
> +#define STRTAB_STE_1_STRW_NSEL1		0UL
> +#define STRTAB_STE_1_STRW_EL2		2UL
> +
> +#define STRTAB_STE_1_SHCFG		GENMASK_ULL(45, 44)
> +#define STRTAB_STE_1_SHCFG_INCOMING	1UL
> +
> +#define STRTAB_STE_2_S2VMID		GENMASK_ULL(15, 0)
> +#define STRTAB_STE_2_VTCR		GENMASK_ULL(50, 32)
> +#define STRTAB_STE_2_VTCR_S2T0SZ	GENMASK_ULL(5, 0)
> +#define STRTAB_STE_2_VTCR_S2SL0		GENMASK_ULL(7, 6)
> +#define STRTAB_STE_2_VTCR_S2IR0		GENMASK_ULL(9, 8)
> +#define STRTAB_STE_2_VTCR_S2OR0		GENMASK_ULL(11, 10)
> +#define STRTAB_STE_2_VTCR_S2SH0		GENMASK_ULL(13, 12)
> +#define STRTAB_STE_2_VTCR_S2TG		GENMASK_ULL(15, 14)
> +#define STRTAB_STE_2_VTCR_S2PS		GENMASK_ULL(18, 16)
> +#define STRTAB_STE_2_S2AA64		(1UL << 51)
> +#define STRTAB_STE_2_S2ENDI		(1UL << 52)
> +#define STRTAB_STE_2_S2PTW		(1UL << 54)
> +#define STRTAB_STE_2_S2R		(1UL << 58)
> +
> +#define STRTAB_STE_3_S2TTB_MASK		GENMASK_ULL(51, 4)
> +
> +/*
> + * Context descriptors.
> + *
> + * Linear: when less than 1024 SSIDs are supported
> + * 2lvl: at most 1024 L1 entries,
> + *       1024 lazy entries per table.
> + */
> +#define CTXDESC_SPLIT			10
> +#define CTXDESC_L2_ENTRIES		(1 << CTXDESC_SPLIT)
> +
> +#define CTXDESC_L1_DESC_DWORDS		1
> +#define CTXDESC_L1_DESC_V		(1UL << 0)
> +#define CTXDESC_L1_DESC_L2PTR_MASK	GENMASK_ULL(51, 12)
> +
> +#define CTXDESC_CD_DWORDS		8
> +#define CTXDESC_CD_0_TCR_T0SZ		GENMASK_ULL(5, 0)
> +#define CTXDESC_CD_0_TCR_TG0		GENMASK_ULL(7, 6)
> +#define CTXDESC_CD_0_TCR_IRGN0		GENMASK_ULL(9, 8)
> +#define CTXDESC_CD_0_TCR_ORGN0		GENMASK_ULL(11, 10)
> +#define CTXDESC_CD_0_TCR_SH0		GENMASK_ULL(13, 12)
> +#define CTXDESC_CD_0_TCR_EPD0		(1ULL << 14)
> +#define CTXDESC_CD_0_TCR_EPD1		(1ULL << 30)
> +
> +#define CTXDESC_CD_0_ENDI		(1UL << 15)
> +#define CTXDESC_CD_0_V			(1UL << 31)
> +
> +#define CTXDESC_CD_0_TCR_IPS		GENMASK_ULL(34, 32)
> +#define CTXDESC_CD_0_TCR_TBI0		(1ULL << 38)
> +
> +#define CTXDESC_CD_0_AA64		(1UL << 41)
> +#define CTXDESC_CD_0_S			(1UL << 44)
> +#define CTXDESC_CD_0_R			(1UL << 45)
> +#define CTXDESC_CD_0_A			(1UL << 46)
> +#define CTXDESC_CD_0_ASET		(1UL << 47)
> +#define CTXDESC_CD_0_ASID		GENMASK_ULL(63, 48)
> +
> +#define CTXDESC_CD_1_TTB0_MASK		GENMASK_ULL(51, 4)
> +
> +/*
> + * When the SMMU only supports linear context descriptor tables, pick a
> + * reasonable size limit (64kB).
> + */
> +#define CTXDESC_LINEAR_CDMAX		ilog2(SZ_64K / (CTXDESC_CD_DWORDS << 3))
> +
> +/* Command queue */
> +#define CMDQ_ENT_SZ_SHIFT		4
> +#define CMDQ_ENT_DWORDS			((1 << CMDQ_ENT_SZ_SHIFT) >> 3)
> +#define CMDQ_MAX_SZ_SHIFT		(Q_MAX_SZ_SHIFT - CMDQ_ENT_SZ_SHIFT)
> +
> +#define CMDQ_CONS_ERR			GENMASK(30, 24)
> +#define CMDQ_ERR_CERROR_NONE_IDX	0
> +#define CMDQ_ERR_CERROR_ILL_IDX		1
> +#define CMDQ_ERR_CERROR_ABT_IDX		2
> +#define CMDQ_ERR_CERROR_ATC_INV_IDX	3
> +
> +#define CMDQ_PROD_OWNED_FLAG		Q_OVERFLOW_FLAG
> +
> +/*
> + * This is used to size the command queue and therefore must be at least
> + * BITS_PER_LONG so that the valid_map works correctly (it relies on the
> + * total number of queue entries being a multiple of BITS_PER_LONG).
> + */
> +#define CMDQ_BATCH_ENTRIES		BITS_PER_LONG
> +
> +#define CMDQ_0_OP			GENMASK_ULL(7, 0)
> +#define CMDQ_0_SSV			(1UL << 11)
> +
> +#define CMDQ_PREFETCH_0_SID		GENMASK_ULL(63, 32)
> +#define CMDQ_PREFETCH_1_SIZE		GENMASK_ULL(4, 0)
> +#define CMDQ_PREFETCH_1_ADDR_MASK	GENMASK_ULL(63, 12)
> +
> +#define CMDQ_CFGI_0_SSID		GENMASK_ULL(31, 12)
> +#define CMDQ_CFGI_0_SID			GENMASK_ULL(63, 32)
> +#define CMDQ_CFGI_1_LEAF		(1UL << 0)
> +#define CMDQ_CFGI_1_RANGE		GENMASK_ULL(4, 0)
> +
> +#define CMDQ_TLBI_0_NUM			GENMASK_ULL(16, 12)
> +#define CMDQ_TLBI_RANGE_NUM_MAX		31
> +#define CMDQ_TLBI_0_SCALE		GENMASK_ULL(24, 20)
> +#define CMDQ_TLBI_0_VMID		GENMASK_ULL(47, 32)
> +#define CMDQ_TLBI_0_ASID		GENMASK_ULL(63, 48)
> +#define CMDQ_TLBI_1_LEAF		(1UL << 0)
> +#define CMDQ_TLBI_1_TTL			GENMASK_ULL(9, 8)
> +#define CMDQ_TLBI_1_TG			GENMASK_ULL(11, 10)
> +#define CMDQ_TLBI_1_VA_MASK		GENMASK_ULL(63, 12)
> +#define CMDQ_TLBI_1_IPA_MASK		GENMASK_ULL(51, 12)
> +
> +#define CMDQ_ATC_0_SSID			GENMASK_ULL(31, 12)
> +#define CMDQ_ATC_0_SID			GENMASK_ULL(63, 32)
> +#define CMDQ_ATC_0_GLOBAL		(1UL << 9)
> +#define CMDQ_ATC_1_SIZE			GENMASK_ULL(5, 0)
> +#define CMDQ_ATC_1_ADDR_MASK		GENMASK_ULL(63, 12)
> +
> +#define CMDQ_PRI_0_SSID			GENMASK_ULL(31, 12)
> +#define CMDQ_PRI_0_SID			GENMASK_ULL(63, 32)
> +#define CMDQ_PRI_1_GRPID		GENMASK_ULL(8, 0)
> +#define CMDQ_PRI_1_RESP			GENMASK_ULL(13, 12)
> +
> +#define CMDQ_SYNC_0_CS			GENMASK_ULL(13, 12)
> +#define CMDQ_SYNC_0_CS_NONE		0
> +#define CMDQ_SYNC_0_CS_IRQ		1
> +#define CMDQ_SYNC_0_CS_SEV		2
> +#define CMDQ_SYNC_0_MSH			GENMASK_ULL(23, 22)
> +#define CMDQ_SYNC_0_MSIATTR		GENMASK_ULL(27, 24)
> +#define CMDQ_SYNC_0_MSIDATA		GENMASK_ULL(63, 32)
> +#define CMDQ_SYNC_1_MSIADDR_MASK	GENMASK_ULL(51, 2)
> +
> +/* Event queue */
> +#define EVTQ_ENT_SZ_SHIFT		5
> +#define EVTQ_ENT_DWORDS			((1 << EVTQ_ENT_SZ_SHIFT) >> 3)
> +#define EVTQ_MAX_SZ_SHIFT		(Q_MAX_SZ_SHIFT - EVTQ_ENT_SZ_SHIFT)
> +
> +#define EVTQ_0_ID			GENMASK_ULL(7, 0)
> +
> +/* PRI queue */
> +#define PRIQ_ENT_SZ_SHIFT		4
> +#define PRIQ_ENT_DWORDS			((1 << PRIQ_ENT_SZ_SHIFT) >> 3)
> +#define PRIQ_MAX_SZ_SHIFT		(Q_MAX_SZ_SHIFT - PRIQ_ENT_SZ_SHIFT)
> +
> +#define PRIQ_0_SID			GENMASK_ULL(31, 0)
> +#define PRIQ_0_SSID			GENMASK_ULL(51, 32)
> +#define PRIQ_0_PERM_PRIV		(1UL << 58)
> +#define PRIQ_0_PERM_EXEC		(1UL << 59)
> +#define PRIQ_0_PERM_READ		(1UL << 60)
> +#define PRIQ_0_PERM_WRITE		(1UL << 61)
> +#define PRIQ_0_PRG_LAST			(1UL << 62)
> +#define PRIQ_0_SSID_V			(1UL << 63)
> +
> +#define PRIQ_1_PRG_IDX			GENMASK_ULL(8, 0)
> +#define PRIQ_1_ADDR_MASK		GENMASK_ULL(63, 12)
> +
> +/* High-level queue structures */
> +#define ARM_SMMU_POLL_TIMEOUT_US	1000000 /* 1s! */
> +#define ARM_SMMU_POLL_SPIN_COUNT	10
> +
> +#define MSI_IOVA_BASE			0x8000000
> +#define MSI_IOVA_LENGTH			0x100000
> +
> +static bool disable_bypass = 1;
> +module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO);
> +MODULE_PARM_DESC(disable_bypass,
> +	"Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");
> +
> +enum pri_resp {
> +	PRI_RESP_DENY = 0,
> +	PRI_RESP_FAIL = 1,
> +	PRI_RESP_SUCC = 2,
> +};
> +
> +enum arm_smmu_msi_index {
> +	EVTQ_MSI_INDEX,
> +	GERROR_MSI_INDEX,
> +	PRIQ_MSI_INDEX,
> +	ARM_SMMU_MAX_MSIS,
> +};
> +
> +static phys_addr_t arm_smmu_msi_cfg[ARM_SMMU_MAX_MSIS][3] = {
> +	[EVTQ_MSI_INDEX] = {
> +		ARM_SMMU_EVTQ_IRQ_CFG0,
> +		ARM_SMMU_EVTQ_IRQ_CFG1,
> +		ARM_SMMU_EVTQ_IRQ_CFG2,
> +	},
> +	[GERROR_MSI_INDEX] = {
> +		ARM_SMMU_GERROR_IRQ_CFG0,
> +		ARM_SMMU_GERROR_IRQ_CFG1,
> +		ARM_SMMU_GERROR_IRQ_CFG2,
> +	},
> +	[PRIQ_MSI_INDEX] = {
> +		ARM_SMMU_PRIQ_IRQ_CFG0,
> +		ARM_SMMU_PRIQ_IRQ_CFG1,
> +		ARM_SMMU_PRIQ_IRQ_CFG2,
> +	},
> +};
> +
> +struct arm_smmu_cmdq_ent {
> +	/* Common fields */
> +	u8				opcode;
> +	bool				substream_valid;
> +
> +	/* Command-specific fields */
> +	union {
> +		#define CMDQ_OP_PREFETCH_CFG	0x1
> +		struct {
> +			u32			sid;
> +			u8			size;
> +			u64			addr;
> +		} prefetch;
> +
> +		#define CMDQ_OP_CFGI_STE	0x3
> +		#define CMDQ_OP_CFGI_ALL	0x4
> +		#define CMDQ_OP_CFGI_CD		0x5
> +		#define CMDQ_OP_CFGI_CD_ALL	0x6
> +		struct {
> +			u32			sid;
> +			u32			ssid;
> +			union {
> +				bool		leaf;
> +				u8		span;
> +			};
> +		} cfgi;
> +
> +		#define CMDQ_OP_TLBI_NH_ASID	0x11
> +		#define CMDQ_OP_TLBI_NH_VA	0x12
> +		#define CMDQ_OP_TLBI_EL2_ALL	0x20
> +		#define CMDQ_OP_TLBI_S12_VMALL	0x28
> +		#define CMDQ_OP_TLBI_S2_IPA	0x2a
> +		#define CMDQ_OP_TLBI_NSNH_ALL	0x30
> +		struct {
> +			u8			num;
> +			u8			scale;
> +			u16			asid;
> +			u16			vmid;
> +			bool			leaf;
> +			u8			ttl;
> +			u8			tg;
> +			u64			addr;
> +		} tlbi;
> +
> +		#define CMDQ_OP_ATC_INV		0x40
> +		#define ATC_INV_SIZE_ALL	52
> +		struct {
> +			u32			sid;
> +			u32			ssid;
> +			u64			addr;
> +			u8			size;
> +			bool			global;
> +		} atc;
> +
> +		#define CMDQ_OP_PRI_RESP	0x41
> +		struct {
> +			u32			sid;
> +			u32			ssid;
> +			u16			grpid;
> +			enum pri_resp		resp;
> +		} pri;
> +
> +		#define CMDQ_OP_CMD_SYNC	0x46
> +		struct {
> +			u64			msiaddr;
> +		} sync;
> +	};
> +};
> +
> +struct arm_smmu_ll_queue {
> +	union {
> +		u64			val;
> +		struct {
> +			u32		prod;
> +			u32		cons;
> +		};
> +		struct {
> +			atomic_t	prod;
> +			atomic_t	cons;
> +		} atomic;
> +		u8			__pad[SMP_CACHE_BYTES];
> +	} ____cacheline_aligned_in_smp;
> +	u32				max_n_shift;
> +};
> +
> +struct arm_smmu_queue {
> +	struct arm_smmu_ll_queue	llq;
> +	int				irq; /* Wired interrupt */
> +
> +	__le64				*base;
> +	dma_addr_t			base_dma;
> +	u64				q_base;
> +
> +	size_t				ent_dwords;
> +
> +	u32 __iomem			*prod_reg;
> +	u32 __iomem			*cons_reg;
> +};
> +
> +struct arm_smmu_queue_poll {
> +	ktime_t				timeout;
> +	unsigned int			delay;
> +	unsigned int			spin_cnt;
> +	bool				wfe;
> +};
> +
> +struct arm_smmu_cmdq {
> +	struct arm_smmu_queue		q;
> +	atomic_long_t			*valid_map;
> +	atomic_t			owner_prod;
> +	atomic_t			lock;
> +};
> +
> +struct arm_smmu_cmdq_batch {
> +	u64				cmds[CMDQ_BATCH_ENTRIES * CMDQ_ENT_DWORDS];
> +	int				num;
> +};
> +
> +struct arm_smmu_evtq {
> +	struct arm_smmu_queue		q;
> +	u32				max_stalls;
> +};
> +
> +struct arm_smmu_priq {
> +	struct arm_smmu_queue		q;
> +};
> +
> +/* High-level stream table and context descriptor structures */
> +struct arm_smmu_strtab_l1_desc {
> +	u8				span;
> +
> +	__le64				*l2ptr;
> +	dma_addr_t			l2ptr_dma;
> +};
> +
> +struct arm_smmu_ctx_desc {
> +	u16				asid;
> +	u64				ttbr;
> +	u64				tcr;
> +	u64				mair;
> +};
> +
> +struct arm_smmu_l1_ctx_desc {
> +	__le64				*l2ptr;
> +	dma_addr_t			l2ptr_dma;
> +};
> +
> +struct arm_smmu_ctx_desc_cfg {
> +	__le64				*cdtab;
> +	dma_addr_t			cdtab_dma;
> +	struct arm_smmu_l1_ctx_desc	*l1_desc;
> +	unsigned int			num_l1_ents;
> +};
> +
> +struct arm_smmu_s1_cfg {
> +	struct arm_smmu_ctx_desc_cfg	cdcfg;
> +	struct arm_smmu_ctx_desc	cd;
> +	u8				s1fmt;
> +	u8				s1cdmax;
> +};
> +
> +struct arm_smmu_s2_cfg {
> +	u16				vmid;
> +	u64				vttbr;
> +	u64				vtcr;
> +};
> +
> +struct arm_smmu_strtab_cfg {
> +	__le64				*strtab;
> +	dma_addr_t			strtab_dma;
> +	struct arm_smmu_strtab_l1_desc	*l1_desc;
> +	unsigned int			num_l1_ents;
> +
> +	u64				strtab_base;
> +	u32				strtab_base_cfg;
> +};
> +
> +/* An SMMUv3 instance */
> +struct arm_smmu_device {
> +	struct device			*dev;
> +	void __iomem			*base;
> +	void __iomem			*page1;
> +
> +#define ARM_SMMU_FEAT_2_LVL_STRTAB	(1 << 0)
> +#define ARM_SMMU_FEAT_2_LVL_CDTAB	(1 << 1)
> +#define ARM_SMMU_FEAT_TT_LE		(1 << 2)
> +#define ARM_SMMU_FEAT_TT_BE		(1 << 3)
> +#define ARM_SMMU_FEAT_PRI		(1 << 4)
> +#define ARM_SMMU_FEAT_ATS		(1 << 5)
> +#define ARM_SMMU_FEAT_SEV		(1 << 6)
> +#define ARM_SMMU_FEAT_MSI		(1 << 7)
> +#define ARM_SMMU_FEAT_COHERENCY		(1 << 8)
> +#define ARM_SMMU_FEAT_TRANS_S1		(1 << 9)
> +#define ARM_SMMU_FEAT_TRANS_S2		(1 << 10)
> +#define ARM_SMMU_FEAT_STALLS		(1 << 11)
> +#define ARM_SMMU_FEAT_HYP		(1 << 12)
> +#define ARM_SMMU_FEAT_STALL_FORCE	(1 << 13)
> +#define ARM_SMMU_FEAT_VAX		(1 << 14)
> +#define ARM_SMMU_FEAT_RANGE_INV		(1 << 15)
> +	u32				features;
> +
> +#define ARM_SMMU_OPT_SKIP_PREFETCH	(1 << 0)
> +#define ARM_SMMU_OPT_PAGE0_REGS_ONLY	(1 << 1)
> +	u32				options;
> +
> +	struct arm_smmu_cmdq		cmdq;
> +	struct arm_smmu_evtq		evtq;
> +	struct arm_smmu_priq		priq;
> +
> +	int				gerr_irq;
> +	int				combined_irq;
> +
> +	unsigned long			ias; /* IPA */
> +	unsigned long			oas; /* PA */
> +	unsigned long			pgsize_bitmap;
> +
> +#define ARM_SMMU_MAX_ASIDS		(1 << 16)
> +	unsigned int			asid_bits;
> +
> +#define ARM_SMMU_MAX_VMIDS		(1 << 16)
> +	unsigned int			vmid_bits;
> +	DECLARE_BITMAP(vmid_map, ARM_SMMU_MAX_VMIDS);
> +
> +	unsigned int			ssid_bits;
> +	unsigned int			sid_bits;
> +
> +	struct arm_smmu_strtab_cfg	strtab_cfg;
> +
> +	/* IOMMU core code handle */
> +	struct iommu_device		iommu;
> +};
> +
> +/* SMMU private data for each master */
> +struct arm_smmu_master {
> +	struct arm_smmu_device		*smmu;
> +	struct device			*dev;
> +	struct arm_smmu_domain		*domain;
> +	struct list_head		domain_head;
> +	u32				*sids;
> +	unsigned int			num_sids;
> +	bool				ats_enabled;
> +	unsigned int			ssid_bits;
> +};
> +
> +/* SMMU private data for an IOMMU domain */
> +enum arm_smmu_domain_stage {
> +	ARM_SMMU_DOMAIN_S1 = 0,
> +	ARM_SMMU_DOMAIN_S2,
> +	ARM_SMMU_DOMAIN_NESTED,
> +	ARM_SMMU_DOMAIN_BYPASS,
> +};
> +
> +struct arm_smmu_domain {
> +	struct arm_smmu_device		*smmu;
> +	struct mutex			init_mutex; /* Protects smmu pointer */
> +
> +	struct io_pgtable_ops		*pgtbl_ops;
> +	bool				non_strict;
> +	atomic_t			nr_ats_masters;
> +
> +	enum arm_smmu_domain_stage	stage;
> +	union {
> +		struct arm_smmu_s1_cfg	s1_cfg;
> +		struct arm_smmu_s2_cfg	s2_cfg;
> +	};
> +
> +	struct iommu_domain		domain;
> +
> +	struct list_head		devices;
> +	spinlock_t			devices_lock;
> +};
> +
> +struct arm_smmu_option_prop {
> +	u32 opt;
> +	const char *prop;
> +};
> +
> +static DEFINE_XARRAY_ALLOC1(asid_xa);
> +
> +static struct arm_smmu_option_prop arm_smmu_options[] = {
> +	{ ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" },
> +	{ ARM_SMMU_OPT_PAGE0_REGS_ONLY, "cavium,cn9900-broken-page1-regspace"},
> +	{ 0, NULL},
> +};
> +
> +static inline void __iomem *arm_smmu_page1_fixup(unsigned long offset,
> +						 struct arm_smmu_device *smmu)
> +{
> +	if (offset > SZ_64K)
> +		return smmu->page1 + offset - SZ_64K;
> +
> +	return smmu->base + offset;
> +}
> +
> +static struct arm_smmu_domain *to_smmu_domain(struct iommu_domain *dom)
> +{
> +	return container_of(dom, struct arm_smmu_domain, domain);
> +}
> +
> +static void parse_driver_options(struct arm_smmu_device *smmu)
> +{
> +	int i = 0;
> +
> +	do {
> +		if (of_property_read_bool(smmu->dev->of_node,
> +						arm_smmu_options[i].prop)) {
> +			smmu->options |= arm_smmu_options[i].opt;
> +			dev_notice(smmu->dev, "option %s\n",
> +				arm_smmu_options[i].prop);
> +		}
> +	} while (arm_smmu_options[++i].opt);
> +}
> +
> +/* Low-level queue manipulation functions */
> +static bool queue_has_space(struct arm_smmu_ll_queue *q, u32 n)
> +{
> +	u32 space, prod, cons;
> +
> +	prod = Q_IDX(q, q->prod);
> +	cons = Q_IDX(q, q->cons);
> +
> +	if (Q_WRP(q, q->prod) == Q_WRP(q, q->cons))
> +		space = (1 << q->max_n_shift) - (prod - cons);
> +	else
> +		space = cons - prod;
> +
> +	return space >= n;
> +}
> +
> +static bool queue_full(struct arm_smmu_ll_queue *q)
> +{
> +	return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) &&
> +	       Q_WRP(q, q->prod) != Q_WRP(q, q->cons);
> +}
> +
> +static bool queue_empty(struct arm_smmu_ll_queue *q)
> +{
> +	return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) &&
> +	       Q_WRP(q, q->prod) == Q_WRP(q, q->cons);
> +}
> +
> +static bool queue_consumed(struct arm_smmu_ll_queue *q, u32 prod)
> +{
> +	return ((Q_WRP(q, q->cons) == Q_WRP(q, prod)) &&
> +		(Q_IDX(q, q->cons) > Q_IDX(q, prod))) ||
> +	       ((Q_WRP(q, q->cons) != Q_WRP(q, prod)) &&
> +		(Q_IDX(q, q->cons) <= Q_IDX(q, prod)));
> +}
> +
> +static void queue_sync_cons_out(struct arm_smmu_queue *q)
> +{
> +	/*
> +	 * Ensure that all CPU accesses (reads and writes) to the queue
> +	 * are complete before we update the cons pointer.
> +	 */
> +	mb();
> +	writel_relaxed(q->llq.cons, q->cons_reg);
> +}
> +
> +static void queue_inc_cons(struct arm_smmu_ll_queue *q)
> +{
> +	u32 cons = (Q_WRP(q, q->cons) | Q_IDX(q, q->cons)) + 1;
> +	q->cons = Q_OVF(q->cons) | Q_WRP(q, cons) | Q_IDX(q, cons);
> +}
> +
> +static int queue_sync_prod_in(struct arm_smmu_queue *q)
> +{
> +	int ret = 0;
> +	u32 prod = readl_relaxed(q->prod_reg);
> +
> +	if (Q_OVF(prod) != Q_OVF(q->llq.prod))
> +		ret = -EOVERFLOW;
> +
> +	q->llq.prod = prod;
> +	return ret;
> +}
> +
> +static u32 queue_inc_prod_n(struct arm_smmu_ll_queue *q, int n)
> +{
> +	u32 prod = (Q_WRP(q, q->prod) | Q_IDX(q, q->prod)) + n;
> +	return Q_OVF(q->prod) | Q_WRP(q, prod) | Q_IDX(q, prod);
> +}
> +
> +static void queue_poll_init(struct arm_smmu_device *smmu,
> +			    struct arm_smmu_queue_poll *qp)
> +{
> +	qp->delay = 1;
> +	qp->spin_cnt = 0;
> +	qp->wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
> +	qp->timeout = ktime_add_us(ktime_get(), ARM_SMMU_POLL_TIMEOUT_US);
> +}
> +
> +static int queue_poll(struct arm_smmu_queue_poll *qp)
> +{
> +	if (ktime_compare(ktime_get(), qp->timeout) > 0)
> +		return -ETIMEDOUT;
> +
> +	if (qp->wfe) {
> +		wfe();
> +	} else if (++qp->spin_cnt < ARM_SMMU_POLL_SPIN_COUNT) {
> +		cpu_relax();
> +	} else {
> +		udelay(qp->delay);
> +		qp->delay *= 2;
> +		qp->spin_cnt = 0;
> +	}
> +
> +	return 0;
> +}
> +
> +static void queue_write(__le64 *dst, u64 *src, size_t n_dwords)
> +{
> +	int i;
> +
> +	for (i = 0; i < n_dwords; ++i)
> +		*dst++ = cpu_to_le64(*src++);
> +}
> +
> +static void queue_read(__le64 *dst, u64 *src, size_t n_dwords)
> +{
> +	int i;
> +
> +	for (i = 0; i < n_dwords; ++i)
> +		*dst++ = le64_to_cpu(*src++);
> +}
> +
> +static int queue_remove_raw(struct arm_smmu_queue *q, u64 *ent)
> +{
> +	if (queue_empty(&q->llq))
> +		return -EAGAIN;
> +
> +	queue_read(ent, Q_ENT(q, q->llq.cons), q->ent_dwords);
> +	queue_inc_cons(&q->llq);
> +	queue_sync_cons_out(q);
> +	return 0;
> +}
> +
> +/* High-level queue accessors */
> +static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
> +{
> +	memset(cmd, 0, 1 << CMDQ_ENT_SZ_SHIFT);
> +	cmd[0] |= FIELD_PREP(CMDQ_0_OP, ent->opcode);
> +
> +	switch (ent->opcode) {
> +	case CMDQ_OP_TLBI_EL2_ALL:
> +	case CMDQ_OP_TLBI_NSNH_ALL:
> +		break;
> +	case CMDQ_OP_PREFETCH_CFG:
> +		cmd[0] |= FIELD_PREP(CMDQ_PREFETCH_0_SID, ent->prefetch.sid);
> +		cmd[1] |= FIELD_PREP(CMDQ_PREFETCH_1_SIZE, ent->prefetch.size);
> +		cmd[1] |= ent->prefetch.addr & CMDQ_PREFETCH_1_ADDR_MASK;
> +		break;
> +	case CMDQ_OP_CFGI_CD:
> +		cmd[0] |= FIELD_PREP(CMDQ_CFGI_0_SSID, ent->cfgi.ssid);
> +		/* Fallthrough */
> +	case CMDQ_OP_CFGI_STE:
> +		cmd[0] |= FIELD_PREP(CMDQ_CFGI_0_SID, ent->cfgi.sid);
> +		cmd[1] |= FIELD_PREP(CMDQ_CFGI_1_LEAF, ent->cfgi.leaf);
> +		break;
> +	case CMDQ_OP_CFGI_CD_ALL:
> +		cmd[0] |= FIELD_PREP(CMDQ_CFGI_0_SID, ent->cfgi.sid);
> +		break;
> +	case CMDQ_OP_CFGI_ALL:
> +		/* Cover the entire SID range */
> +		cmd[1] |= FIELD_PREP(CMDQ_CFGI_1_RANGE, 31);
> +		break;
> +	case CMDQ_OP_TLBI_NH_VA:
> +		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_NUM, ent->tlbi.num);
> +		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_SCALE, ent->tlbi.scale);
> +		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
> +		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_ASID, ent->tlbi.asid);
> +		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_LEAF, ent->tlbi.leaf);
> +		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TTL, ent->tlbi.ttl);
> +		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TG, ent->tlbi.tg);
> +		cmd[1] |= ent->tlbi.addr & CMDQ_TLBI_1_VA_MASK;
> +		break;
> +	case CMDQ_OP_TLBI_S2_IPA:
> +		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_NUM, ent->tlbi.num);
> +		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_SCALE, ent->tlbi.scale);
> +		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
> +		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_LEAF, ent->tlbi.leaf);
> +		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TTL, ent->tlbi.ttl);
> +		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TG, ent->tlbi.tg);
> +		cmd[1] |= ent->tlbi.addr & CMDQ_TLBI_1_IPA_MASK;
> +		break;
> +	case CMDQ_OP_TLBI_NH_ASID:
> +		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_ASID, ent->tlbi.asid);
> +		/* Fallthrough */
> +	case CMDQ_OP_TLBI_S12_VMALL:
> +		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
> +		break;
> +	case CMDQ_OP_ATC_INV:
> +		cmd[0] |= FIELD_PREP(CMDQ_0_SSV, ent->substream_valid);
> +		cmd[0] |= FIELD_PREP(CMDQ_ATC_0_GLOBAL, ent->atc.global);
> +		cmd[0] |= FIELD_PREP(CMDQ_ATC_0_SSID, ent->atc.ssid);
> +		cmd[0] |= FIELD_PREP(CMDQ_ATC_0_SID, ent->atc.sid);
> +		cmd[1] |= FIELD_PREP(CMDQ_ATC_1_SIZE, ent->atc.size);
> +		cmd[1] |= ent->atc.addr & CMDQ_ATC_1_ADDR_MASK;
> +		break;
> +	case CMDQ_OP_PRI_RESP:
> +		cmd[0] |= FIELD_PREP(CMDQ_0_SSV, ent->substream_valid);
> +		cmd[0] |= FIELD_PREP(CMDQ_PRI_0_SSID, ent->pri.ssid);
> +		cmd[0] |= FIELD_PREP(CMDQ_PRI_0_SID, ent->pri.sid);
> +		cmd[1] |= FIELD_PREP(CMDQ_PRI_1_GRPID, ent->pri.grpid);
> +		switch (ent->pri.resp) {
> +		case PRI_RESP_DENY:
> +		case PRI_RESP_FAIL:
> +		case PRI_RESP_SUCC:
> +			break;
> +		default:
> +			return -EINVAL;
> +		}
> +		cmd[1] |= FIELD_PREP(CMDQ_PRI_1_RESP, ent->pri.resp);
> +		break;
> +	case CMDQ_OP_CMD_SYNC:
> +		if (ent->sync.msiaddr) {
> +			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_IRQ);
> +			cmd[1] |= ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
> +		} else {
> +			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
> +		}
> +		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
> +		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
> +		break;
> +	default:
> +		return -ENOENT;
> +	}
> +
> +	return 0;
> +}
> +
> +static void arm_smmu_cmdq_build_sync_cmd(u64 *cmd, struct arm_smmu_device *smmu,
> +					 u32 prod)
> +{
> +	struct arm_smmu_queue *q = &smmu->cmdq.q;
> +	struct arm_smmu_cmdq_ent ent = {
> +		.opcode = CMDQ_OP_CMD_SYNC,
> +	};
> +
> +	/*
> +	 * Beware that Hi16xx adds an extra 32 bits of goodness to its MSI
> +	 * payload, so the write will zero the entire command on that platform.
> +	 */
> +	if (smmu->features & ARM_SMMU_FEAT_MSI &&
> +	    smmu->features & ARM_SMMU_FEAT_COHERENCY) {
> +		ent.sync.msiaddr = q->base_dma + Q_IDX(&q->llq, prod) *
> +				   q->ent_dwords * 8;
> +	}
> +
> +	arm_smmu_cmdq_build_cmd(cmd, &ent);
> +}
> +
> +static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
> +{
> +	static const char *cerror_str[] = {
> +		[CMDQ_ERR_CERROR_NONE_IDX]	= "No error",
> +		[CMDQ_ERR_CERROR_ILL_IDX]	= "Illegal command",
> +		[CMDQ_ERR_CERROR_ABT_IDX]	= "Abort on command fetch",
> +		[CMDQ_ERR_CERROR_ATC_INV_IDX]	= "ATC invalidate timeout",
> +	};
> +
> +	int i;
> +	u64 cmd[CMDQ_ENT_DWORDS];
> +	struct arm_smmu_queue *q = &smmu->cmdq.q;
> +	u32 cons = readl_relaxed(q->cons_reg);
> +	u32 idx = FIELD_GET(CMDQ_CONS_ERR, cons);
> +	struct arm_smmu_cmdq_ent cmd_sync = {
> +		.opcode = CMDQ_OP_CMD_SYNC,
> +	};
> +
> +	dev_err(smmu->dev, "CMDQ error (cons 0x%08x): %s\n", cons,
> +		idx < ARRAY_SIZE(cerror_str) ?  cerror_str[idx] : "Unknown");
> +
> +	switch (idx) {
> +	case CMDQ_ERR_CERROR_ABT_IDX:
> +		dev_err(smmu->dev, "retrying command fetch\n");
> +		/* Fallthrough */
> +	case CMDQ_ERR_CERROR_NONE_IDX:
> +		return;
> +	case CMDQ_ERR_CERROR_ATC_INV_IDX:
> +		/*
> +		 * ATC Invalidation Completion timeout. CONS is still pointing
> +		 * at the CMD_SYNC. Attempt to complete other pending commands
> +		 * by repeating the CMD_SYNC, though we might well end up back
> +		 * here since the ATC invalidation may still be pending.
> +		 */
> +		return;
> +	case CMDQ_ERR_CERROR_ILL_IDX:
> +		/* Fallthrough */
> +	default:
> +		break;
> +	}
> +
> +	/*
> +	 * We may have concurrent producers, so we need to be careful
> +	 * not to touch any of the shadow cmdq state.
> +	 */
> +	queue_read(cmd, Q_ENT(q, cons), q->ent_dwords);
> +	dev_err(smmu->dev, "skipping command in error state:\n");
> +	for (i = 0; i < ARRAY_SIZE(cmd); ++i)
> +		dev_err(smmu->dev, "\t0x%016llx\n", (unsigned long long)cmd[i]);
> +
> +	/* Convert the erroneous command into a CMD_SYNC */
> +	if (arm_smmu_cmdq_build_cmd(cmd, &cmd_sync)) {
> +		dev_err(smmu->dev, "failed to convert to CMD_SYNC\n");
> +		return;
> +	}
> +
> +	queue_write(Q_ENT(q, cons), cmd, q->ent_dwords);
> +}
> +
> +/*
> + * Command queue locking.
> + * This is a form of bastardised rwlock with the following major changes:
> + *
> + * - The only LOCK routines are exclusive_trylock() and shared_lock().
> + *   Neither have barrier semantics, and instead provide only a control
> + *   dependency.
> + *
> + * - The UNLOCK routines are supplemented with shared_tryunlock(), which
> + *   fails if the caller appears to be the last lock holder (yes, this is
> + *   racy). All successful UNLOCK routines have RELEASE semantics.
> + */
> +static void arm_smmu_cmdq_shared_lock(struct arm_smmu_cmdq *cmdq)
> +{
> +	int val;
> +
> +	/*
> +	 * We can try to avoid the cmpxchg() loop by simply incrementing the
> +	 * lock counter. When held in exclusive state, the lock counter is set
> +	 * to INT_MIN so these increments won't hurt as the value will remain
> +	 * negative.
> +	 */
> +	if (atomic_fetch_inc_relaxed(&cmdq->lock) >= 0)
> +		return;
> +
> +	do {
> +		val = atomic_cond_read_relaxed(&cmdq->lock, VAL >= 0);
> +	} while (atomic_cmpxchg_relaxed(&cmdq->lock, val, val + 1) != val);
> +}
> +
> +static void arm_smmu_cmdq_shared_unlock(struct arm_smmu_cmdq *cmdq)
> +{
> +	(void)atomic_dec_return_release(&cmdq->lock);
> +}
> +
> +static bool arm_smmu_cmdq_shared_tryunlock(struct arm_smmu_cmdq *cmdq)
> +{
> +	if (atomic_read(&cmdq->lock) == 1)
> +		return false;
> +
> +	arm_smmu_cmdq_shared_unlock(cmdq);
> +	return true;
> +}
> +
> +#define arm_smmu_cmdq_exclusive_trylock_irqsave(cmdq, flags)		\
> +({									\
> +	bool __ret;							\
> +	local_irq_save(flags);						\
> +	__ret = !atomic_cmpxchg_relaxed(&cmdq->lock, 0, INT_MIN);	\
> +	if (!__ret)							\
> +		local_irq_restore(flags);				\
> +	__ret;								\
> +})
> +
> +#define arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags)		\
> +({									\
> +	atomic_set_release(&cmdq->lock, 0);				\
> +	local_irq_restore(flags);					\
> +})
> +
> +
> +/*
> + * Command queue insertion.
> + * This is made fiddly by our attempts to achieve some sort of scalability
> + * since there is one queue shared amongst all of the CPUs in the system.  If
> + * you like mixed-size concurrency, dependency ordering and relaxed atomics,
> + * then you'll *love* this monstrosity.
> + *
> + * The basic idea is to split the queue up into ranges of commands that are
> + * owned by a given CPU; the owner may not have written all of the commands
> + * itself, but is responsible for advancing the hardware prod pointer when
> + * the time comes. The algorithm is roughly:
> + *
> + * 	1. Allocate some space in the queue. At this point we also discover
> + *	   whether the head of the queue is currently owned by another CPU,
> + *	   or whether we are the owner.
> + *
> + *	2. Write our commands into our allocated slots in the queue.
> + *
> + *	3. Mark our slots as valid in arm_smmu_cmdq.valid_map.
> + *
> + *	4. If we are an owner:
> + *		a. Wait for the previous owner to finish.
> + *		b. Mark the queue head as unowned, which tells us the range
> + *		   that we are responsible for publishing.
> + *		c. Wait for all commands in our owned range to become valid.
> + *		d. Advance the hardware prod pointer.
> + *		e. Tell the next owner we've finished.
> + *
> + *	5. If we are inserting a CMD_SYNC (we may or may not have been an
> + *	   owner), then we need to stick around until it has completed:
> + *		a. If we have MSIs, the SMMU can write back into the CMD_SYNC
> + *		   to clear the first 4 bytes.
> + *		b. Otherwise, we spin waiting for the hardware cons pointer to
> + *		   advance past our command.
> + *
> + * The devil is in the details, particularly the use of locking for handling
> + * SYNC completion and freeing up space in the queue before we think that it is
> + * full.
> + */
> +static void __arm_smmu_cmdq_poll_set_valid_map(struct arm_smmu_cmdq *cmdq,
> +					       u32 sprod, u32 eprod, bool set)
> +{
> +	u32 swidx, sbidx, ewidx, ebidx;
> +	struct arm_smmu_ll_queue llq = {
> +		.max_n_shift	= cmdq->q.llq.max_n_shift,
> +		.prod		= sprod,
> +	};
> +
> +	ewidx = BIT_WORD(Q_IDX(&llq, eprod));
> +	ebidx = Q_IDX(&llq, eprod) % BITS_PER_LONG;
> +
> +	while (llq.prod != eprod) {
> +		unsigned long mask;
> +		atomic_long_t *ptr;
> +		u32 limit = BITS_PER_LONG;
> +
> +		swidx = BIT_WORD(Q_IDX(&llq, llq.prod));
> +		sbidx = Q_IDX(&llq, llq.prod) % BITS_PER_LONG;
> +
> +		ptr = &cmdq->valid_map[swidx];
> +
> +		if ((swidx == ewidx) && (sbidx < ebidx))
> +			limit = ebidx;
> +
> +		mask = GENMASK(limit - 1, sbidx);
> +
> +		/*
> +		 * The valid bit is the inverse of the wrap bit. This means
> +		 * that a zero-initialised queue is invalid and, after marking
> +		 * all entries as valid, they become invalid again when we
> +		 * wrap.
> +		 */
> +		if (set) {
> +			atomic_long_xor(mask, ptr);
> +		} else { /* Poll */
> +			unsigned long valid;
> +
> +			valid = (ULONG_MAX + !!Q_WRP(&llq, llq.prod)) & mask;
> +			atomic_long_cond_read_relaxed(ptr, (VAL & mask) == valid);
> +		}
> +
> +		llq.prod = queue_inc_prod_n(&llq, limit - sbidx);
> +	}
> +}
> +
> +/* Mark all entries in the range [sprod, eprod) as valid */
> +static void arm_smmu_cmdq_set_valid_map(struct arm_smmu_cmdq *cmdq,
> +					u32 sprod, u32 eprod)
> +{
> +	__arm_smmu_cmdq_poll_set_valid_map(cmdq, sprod, eprod, true);
> +}
> +
> +/* Wait for all entries in the range [sprod, eprod) to become valid */
> +static void arm_smmu_cmdq_poll_valid_map(struct arm_smmu_cmdq *cmdq,
> +					 u32 sprod, u32 eprod)
> +{
> +	__arm_smmu_cmdq_poll_set_valid_map(cmdq, sprod, eprod, false);
> +}
> +
> +/* Wait for the command queue to become non-full */
> +static int arm_smmu_cmdq_poll_until_not_full(struct arm_smmu_device *smmu,
> +					     struct arm_smmu_ll_queue *llq)
> +{
> +	unsigned long flags;
> +	struct arm_smmu_queue_poll qp;
> +	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
> +	int ret = 0;
> +
> +	/*
> +	 * Try to update our copy of cons by grabbing exclusive cmdq access. If
> +	 * that fails, spin until somebody else updates it for us.
> +	 */
> +	if (arm_smmu_cmdq_exclusive_trylock_irqsave(cmdq, flags)) {
> +		WRITE_ONCE(cmdq->q.llq.cons, readl_relaxed(cmdq->q.cons_reg));
> +		arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags);
> +		llq->val = READ_ONCE(cmdq->q.llq.val);
> +		return 0;
> +	}
> +
> +	queue_poll_init(smmu, &qp);
> +	do {
> +		llq->val = READ_ONCE(smmu->cmdq.q.llq.val);
> +		if (!queue_full(llq))
> +			break;
> +
> +		ret = queue_poll(&qp);
> +	} while (!ret);
> +
> +	return ret;
> +}
> +
> +/*
> + * Wait until the SMMU signals a CMD_SYNC completion MSI.
> + * Must be called with the cmdq lock held in some capacity.
> + */
> +static int __arm_smmu_cmdq_poll_until_msi(struct arm_smmu_device *smmu,
> +					  struct arm_smmu_ll_queue *llq)
> +{
> +	int ret = 0;
> +	struct arm_smmu_queue_poll qp;
> +	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
> +	u32 *cmd = (u32 *)(Q_ENT(&cmdq->q, llq->prod));
> +
> +	queue_poll_init(smmu, &qp);
> +
> +	/*
> +	 * The MSI won't generate an event, since it's being written back
> +	 * into the command queue.
> +	 */
> +	qp.wfe = false;
> +	smp_cond_load_relaxed(cmd, !VAL || (ret = queue_poll(&qp)));
> +	llq->cons = ret ? llq->prod : queue_inc_prod_n(llq, 1);
> +	return ret;
> +}
> +
> +/*
> + * Wait until the SMMU cons index passes llq->prod.
> + * Must be called with the cmdq lock held in some capacity.
> + */
> +static int __arm_smmu_cmdq_poll_until_consumed(struct arm_smmu_device *smmu,
> +					       struct arm_smmu_ll_queue *llq)
> +{
> +	struct arm_smmu_queue_poll qp;
> +	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
> +	u32 prod = llq->prod;
> +	int ret = 0;
> +
> +	queue_poll_init(smmu, &qp);
> +	llq->val = READ_ONCE(smmu->cmdq.q.llq.val);
> +	do {
> +		if (queue_consumed(llq, prod))
> +			break;
> +
> +		ret = queue_poll(&qp);
> +
> +		/*
> +		 * This needs to be a readl() so that our subsequent call
> +		 * to arm_smmu_cmdq_shared_tryunlock() can fail accurately.
> +		 *
> +		 * Specifically, we need to ensure that we observe all
> +		 * shared_lock()s by other CMD_SYNCs that share our owner,
> +		 * so that a failing call to tryunlock() means that we're
> +		 * the last one out and therefore we can safely advance
> +		 * cmdq->q.llq.cons. Roughly speaking:
> +		 *
> +		 * CPU 0		CPU1			CPU2 (us)
> +		 *
> +		 * if (sync)
> +		 * 	shared_lock();
> +		 *
> +		 * dma_wmb();
> +		 * set_valid_map();
> +		 *
> +		 * 			if (owner) {
> +		 *				poll_valid_map();
> +		 *				<control dependency>
> +		 *				writel(prod_reg);
> +		 *
> +		 *						readl(cons_reg);
> +		 *						tryunlock();
> +		 *
> +		 * Requires us to see CPU 0's shared_lock() acquisition.
> +		 */
> +		llq->cons = readl(cmdq->q.cons_reg);
> +	} while (!ret);
> +
> +	return ret;
> +}
> +
> +static int arm_smmu_cmdq_poll_until_sync(struct arm_smmu_device *smmu,
> +					 struct arm_smmu_ll_queue *llq)
> +{
> +	if (smmu->features & ARM_SMMU_FEAT_MSI &&
> +	    smmu->features & ARM_SMMU_FEAT_COHERENCY)
> +		return __arm_smmu_cmdq_poll_until_msi(smmu, llq);
> +
> +	return __arm_smmu_cmdq_poll_until_consumed(smmu, llq);
> +}
> +
> +static void arm_smmu_cmdq_write_entries(struct arm_smmu_cmdq *cmdq, u64 *cmds,
> +					u32 prod, int n)
> +{
> +	int i;
> +	struct arm_smmu_ll_queue llq = {
> +		.max_n_shift	= cmdq->q.llq.max_n_shift,
> +		.prod		= prod,
> +	};
> +
> +	for (i = 0; i < n; ++i) {
> +		u64 *cmd = &cmds[i * CMDQ_ENT_DWORDS];
> +
> +		prod = queue_inc_prod_n(&llq, i);
> +		queue_write(Q_ENT(&cmdq->q, prod), cmd, CMDQ_ENT_DWORDS);
> +	}
> +}
> +
> +/*
> + * This is the actual insertion function, and provides the following
> + * ordering guarantees to callers:
> + *
> + * - There is a dma_wmb() before publishing any commands to the queue.
> + *   This can be relied upon to order prior writes to data structures
> + *   in memory (such as a CD or an STE) before the command.
> + *
> + * - On completion of a CMD_SYNC, there is a control dependency.
> + *   This can be relied upon to order subsequent writes to memory (e.g.
> + *   freeing an IOVA) after completion of the CMD_SYNC.
> + *
> + * - Command insertion is totally ordered, so if two CPUs each race to
> + *   insert their own list of commands then all of the commands from one
> + *   CPU will appear before any of the commands from the other CPU.
> + */
> +static int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
> +				       u64 *cmds, int n, bool sync)
> +{
> +	u64 cmd_sync[CMDQ_ENT_DWORDS];
> +	u32 prod;
> +	unsigned long flags;
> +	bool owner;
> +	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
> +	struct arm_smmu_ll_queue llq = {
> +		.max_n_shift = cmdq->q.llq.max_n_shift,
> +	}, head = llq;
> +	int ret = 0;
> +
> +	/* 1. Allocate some space in the queue */
> +	local_irq_save(flags);
> +	llq.val = READ_ONCE(cmdq->q.llq.val);
> +	do {
> +		u64 old;
> +
> +		while (!queue_has_space(&llq, n + sync)) {
> +			local_irq_restore(flags);
> +			if (arm_smmu_cmdq_poll_until_not_full(smmu, &llq))
> +				dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
> +			local_irq_save(flags);
> +		}
> +
> +		head.cons = llq.cons;
> +		head.prod = queue_inc_prod_n(&llq, n + sync) |
> +					     CMDQ_PROD_OWNED_FLAG;
> +
> +		old = cmpxchg_relaxed(&cmdq->q.llq.val, llq.val, head.val);
> +		if (old == llq.val)
> +			break;
> +
> +		llq.val = old;
> +	} while (1);
> +	owner = !(llq.prod & CMDQ_PROD_OWNED_FLAG);
> +	head.prod &= ~CMDQ_PROD_OWNED_FLAG;
> +	llq.prod &= ~CMDQ_PROD_OWNED_FLAG;
> +
> +	/*
> +	 * 2. Write our commands into the queue
> +	 * Dependency ordering from the cmpxchg() loop above.
> +	 */
> +	arm_smmu_cmdq_write_entries(cmdq, cmds, llq.prod, n);
> +	if (sync) {
> +		prod = queue_inc_prod_n(&llq, n);
> +		arm_smmu_cmdq_build_sync_cmd(cmd_sync, smmu, prod);
> +		queue_write(Q_ENT(&cmdq->q, prod), cmd_sync, CMDQ_ENT_DWORDS);
> +
> +		/*
> +		 * In order to determine completion of our CMD_SYNC, we must
> +		 * ensure that the queue can't wrap twice without us noticing.
> +		 * We achieve that by taking the cmdq lock as shared before
> +		 * marking our slot as valid.
> +		 */
> +		arm_smmu_cmdq_shared_lock(cmdq);
> +	}
> +
> +	/* 3. Mark our slots as valid, ensuring commands are visible first */
> +	dma_wmb();
> +	arm_smmu_cmdq_set_valid_map(cmdq, llq.prod, head.prod);
> +
> +	/* 4. If we are the owner, take control of the SMMU hardware */
> +	if (owner) {
> +		/* a. Wait for previous owner to finish */
> +		atomic_cond_read_relaxed(&cmdq->owner_prod, VAL == llq.prod);
> +
> +		/* b. Stop gathering work by clearing the owned flag */
> +		prod = atomic_fetch_andnot_relaxed(CMDQ_PROD_OWNED_FLAG,
> +						   &cmdq->q.llq.atomic.prod);
> +		prod &= ~CMDQ_PROD_OWNED_FLAG;
> +
> +		/*
> +		 * c. Wait for any gathered work to be written to the queue.
> +		 * Note that we read our own entries so that we have the control
> +		 * dependency required by (d).
> +		 */
> +		arm_smmu_cmdq_poll_valid_map(cmdq, llq.prod, prod);
> +
> +		/*
> +		 * d. Advance the hardware prod pointer
> +		 * Control dependency ordering from the entries becoming valid.
> +		 */
> +		writel_relaxed(prod, cmdq->q.prod_reg);
> +
> +		/*
> +		 * e. Tell the next owner we're done
> +		 * Make sure we've updated the hardware first, so that we don't
> +		 * race to update prod and potentially move it backwards.
> +		 */
> +		atomic_set_release(&cmdq->owner_prod, prod);
> +	}
> +
> +	/* 5. If we are inserting a CMD_SYNC, we must wait for it to complete */
> +	if (sync) {
> +		llq.prod = queue_inc_prod_n(&llq, n);
> +		ret = arm_smmu_cmdq_poll_until_sync(smmu, &llq);
> +		if (ret) {
> +			dev_err_ratelimited(smmu->dev,
> +					    "CMD_SYNC timeout at 0x%08x [hwprod 0x%08x, hwcons 0x%08x]\n",
> +					    llq.prod,
> +					    readl_relaxed(cmdq->q.prod_reg),
> +					    readl_relaxed(cmdq->q.cons_reg));
> +		}
> +
> +		/*
> +		 * Try to unlock the cmdq lock. This will fail if we're the last
> +		 * reader, in which case we can safely update cmdq->q.llq.cons.
> +		 */
> +		if (!arm_smmu_cmdq_shared_tryunlock(cmdq)) {
> +			WRITE_ONCE(cmdq->q.llq.cons, llq.cons);
> +			arm_smmu_cmdq_shared_unlock(cmdq);
> +		}
> +	}
> +
> +	local_irq_restore(flags);
> +	return ret;
> +}
> +
> +static int arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
> +				   struct arm_smmu_cmdq_ent *ent)
> +{
> +	u64 cmd[CMDQ_ENT_DWORDS];
> +
> +	if (arm_smmu_cmdq_build_cmd(cmd, ent)) {
> +		dev_warn(smmu->dev, "ignoring unknown CMDQ opcode 0x%x\n",
> +			 ent->opcode);
> +		return -EINVAL;
> +	}
> +
> +	return arm_smmu_cmdq_issue_cmdlist(smmu, cmd, 1, false);
> +}
> +
> +static int arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
> +{
> +	return arm_smmu_cmdq_issue_cmdlist(smmu, NULL, 0, true);
> +}
> +
> +static void arm_smmu_cmdq_batch_add(struct arm_smmu_device *smmu,
> +				    struct arm_smmu_cmdq_batch *cmds,
> +				    struct arm_smmu_cmdq_ent *cmd)
> +{
> +	if (cmds->num == CMDQ_BATCH_ENTRIES) {
> +		arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num, false);
> +		cmds->num = 0;
> +	}
> +	arm_smmu_cmdq_build_cmd(&cmds->cmds[cmds->num * CMDQ_ENT_DWORDS], cmd);
> +	cmds->num++;
> +}
> +
> +static int arm_smmu_cmdq_batch_submit(struct arm_smmu_device *smmu,
> +				      struct arm_smmu_cmdq_batch *cmds)
> +{
> +	return arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num, true);
> +}
> +
> +/* Context descriptor manipulation functions */
> +static void arm_smmu_sync_cd(struct arm_smmu_domain *smmu_domain,
> +			     int ssid, bool leaf)
> +{
> +	size_t i;
> +	unsigned long flags;
> +	struct arm_smmu_master *master;
> +	struct arm_smmu_cmdq_batch cmds = {};
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +	struct arm_smmu_cmdq_ent cmd = {
> +		.opcode	= CMDQ_OP_CFGI_CD,
> +		.cfgi	= {
> +			.ssid	= ssid,
> +			.leaf	= leaf,
> +		},
> +	};
> +
> +	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
> +	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
> +		for (i = 0; i < master->num_sids; i++) {
> +			cmd.cfgi.sid = master->sids[i];
> +			arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
> +		}
> +	}
> +	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
> +
> +	arm_smmu_cmdq_batch_submit(smmu, &cmds);
> +}
> +
> +static int arm_smmu_alloc_cd_leaf_table(struct arm_smmu_device *smmu,
> +					struct arm_smmu_l1_ctx_desc *l1_desc)
> +{
> +	size_t size = CTXDESC_L2_ENTRIES * (CTXDESC_CD_DWORDS << 3);
> +
> +	l1_desc->l2ptr = dmam_alloc_coherent(smmu->dev, size,
> +					     &l1_desc->l2ptr_dma, GFP_KERNEL);
> +	if (!l1_desc->l2ptr) {
> +		dev_warn(smmu->dev,
> +			 "failed to allocate context descriptor table\n");
> +		return -ENOMEM;
> +	}
> +	return 0;
> +}
> +
> +static void arm_smmu_write_cd_l1_desc(__le64 *dst,
> +				      struct arm_smmu_l1_ctx_desc *l1_desc)
> +{
> +	u64 val = (l1_desc->l2ptr_dma & CTXDESC_L1_DESC_L2PTR_MASK) |
> +		  CTXDESC_L1_DESC_V;
> +
> +	/* See comment in arm_smmu_write_ctx_desc() */
> +	WRITE_ONCE(*dst, cpu_to_le64(val));
> +}
> +
> +static __le64 *arm_smmu_get_cd_ptr(struct arm_smmu_domain *smmu_domain,
> +				   u32 ssid)
> +{
> +	__le64 *l1ptr;
> +	unsigned int idx;
> +	struct arm_smmu_l1_ctx_desc *l1_desc;
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +	struct arm_smmu_ctx_desc_cfg *cdcfg = &smmu_domain->s1_cfg.cdcfg;
> +
> +	if (smmu_domain->s1_cfg.s1fmt == STRTAB_STE_0_S1FMT_LINEAR)
> +		return cdcfg->cdtab + ssid * CTXDESC_CD_DWORDS;
> +
> +	idx = ssid >> CTXDESC_SPLIT;
> +	l1_desc = &cdcfg->l1_desc[idx];
> +	if (!l1_desc->l2ptr) {
> +		if (arm_smmu_alloc_cd_leaf_table(smmu, l1_desc))
> +			return NULL;
> +
> +		l1ptr = cdcfg->cdtab + idx * CTXDESC_L1_DESC_DWORDS;
> +		arm_smmu_write_cd_l1_desc(l1ptr, l1_desc);
> +		/* An invalid L1CD can be cached */
> +		arm_smmu_sync_cd(smmu_domain, ssid, false);
> +	}
> +	idx = ssid & (CTXDESC_L2_ENTRIES - 1);
> +	return l1_desc->l2ptr + idx * CTXDESC_CD_DWORDS;
> +}
> +
> +static int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain,
> +				   int ssid, struct arm_smmu_ctx_desc *cd)
> +{
> +	/*
> +	 * This function handles the following cases:
> +	 *
> +	 * (1) Install primary CD, for normal DMA traffic (SSID = 0).
> +	 * (2) Install a secondary CD, for SID+SSID traffic.
> +	 * (3) Update ASID of a CD. Atomically write the first 64 bits of the
> +	 *     CD, then invalidate the old entry and mappings.
> +	 * (4) Remove a secondary CD.
> +	 */
> +	u64 val;
> +	bool cd_live;
> +	__le64 *cdptr;
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +
> +	if (WARN_ON(ssid >= (1 << smmu_domain->s1_cfg.s1cdmax)))
> +		return -E2BIG;
> +
> +	cdptr = arm_smmu_get_cd_ptr(smmu_domain, ssid);
> +	if (!cdptr)
> +		return -ENOMEM;
> +
> +	val = le64_to_cpu(cdptr[0]);
> +	cd_live = !!(val & CTXDESC_CD_0_V);
> +
> +	if (!cd) { /* (4) */
> +		val = 0;
> +	} else if (cd_live) { /* (3) */
> +		val &= ~CTXDESC_CD_0_ASID;
> +		val |= FIELD_PREP(CTXDESC_CD_0_ASID, cd->asid);
> +		/*
> +		 * Until CD+TLB invalidation, both ASIDs may be used for tagging
> +		 * this substream's traffic
> +		 */
> +	} else { /* (1) and (2) */
> +		cdptr[1] = cpu_to_le64(cd->ttbr & CTXDESC_CD_1_TTB0_MASK);
> +		cdptr[2] = 0;
> +		cdptr[3] = cpu_to_le64(cd->mair);
> +
> +		/*
> +		 * STE is live, and the SMMU might read dwords of this CD in any
> +		 * order. Ensure that it observes valid values before reading
> +		 * V=1.
> +		 */
> +		arm_smmu_sync_cd(smmu_domain, ssid, true);
> +
> +		val = cd->tcr |
> +#ifdef __BIG_ENDIAN
> +			CTXDESC_CD_0_ENDI |
> +#endif
> +			CTXDESC_CD_0_R | CTXDESC_CD_0_A | CTXDESC_CD_0_ASET |
> +			CTXDESC_CD_0_AA64 |
> +			FIELD_PREP(CTXDESC_CD_0_ASID, cd->asid) |
> +			CTXDESC_CD_0_V;
> +
> +		/* STALL_MODEL==0b10 && CD.S==0 is ILLEGAL */
> +		if (smmu->features & ARM_SMMU_FEAT_STALL_FORCE)
> +			val |= CTXDESC_CD_0_S;
> +	}
> +
> +	/*
> +	 * The SMMU accesses 64-bit values atomically. See IHI0070Ca 3.21.3
> +	 * "Configuration structures and configuration invalidation completion"
> +	 *
> +	 *   The size of single-copy atomic reads made by the SMMU is
> +	 *   IMPLEMENTATION DEFINED but must be at least 64 bits. Any single
> +	 *   field within an aligned 64-bit span of a structure can be altered
> +	 *   without first making the structure invalid.
> +	 */
> +	WRITE_ONCE(cdptr[0], cpu_to_le64(val));
> +	arm_smmu_sync_cd(smmu_domain, ssid, true);
> +	return 0;
> +}
> +
> +static int arm_smmu_alloc_cd_tables(struct arm_smmu_domain *smmu_domain)
> +{
> +	int ret;
> +	size_t l1size;
> +	size_t max_contexts;
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +	struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
> +	struct arm_smmu_ctx_desc_cfg *cdcfg = &cfg->cdcfg;
> +
> +	max_contexts = 1 << cfg->s1cdmax;
> +
> +	if (!(smmu->features & ARM_SMMU_FEAT_2_LVL_CDTAB) ||
> +	    max_contexts <= CTXDESC_L2_ENTRIES) {
> +		cfg->s1fmt = STRTAB_STE_0_S1FMT_LINEAR;
> +		cdcfg->num_l1_ents = max_contexts;
> +
> +		l1size = max_contexts * (CTXDESC_CD_DWORDS << 3);
> +	} else {
> +		cfg->s1fmt = STRTAB_STE_0_S1FMT_64K_L2;
> +		cdcfg->num_l1_ents = DIV_ROUND_UP(max_contexts,
> +						  CTXDESC_L2_ENTRIES);
> +
> +		cdcfg->l1_desc = devm_kcalloc(smmu->dev, cdcfg->num_l1_ents,
> +					      sizeof(*cdcfg->l1_desc),
> +					      GFP_KERNEL);
> +		if (!cdcfg->l1_desc)
> +			return -ENOMEM;
> +
> +		l1size = cdcfg->num_l1_ents * (CTXDESC_L1_DESC_DWORDS << 3);
> +	}
> +
> +	cdcfg->cdtab = dmam_alloc_coherent(smmu->dev, l1size, &cdcfg->cdtab_dma,
> +					   GFP_KERNEL);
> +	if (!cdcfg->cdtab) {
> +		dev_warn(smmu->dev, "failed to allocate context descriptor\n");
> +		ret = -ENOMEM;
> +		goto err_free_l1;
> +	}
> +
> +	return 0;
> +
> +err_free_l1:
> +	if (cdcfg->l1_desc) {
> +		devm_kfree(smmu->dev, cdcfg->l1_desc);
> +		cdcfg->l1_desc = NULL;
> +	}
> +	return ret;
> +}
> +
> +static void arm_smmu_free_cd_tables(struct arm_smmu_domain *smmu_domain)
> +{
> +	int i;
> +	size_t size, l1size;
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +	struct arm_smmu_ctx_desc_cfg *cdcfg = &smmu_domain->s1_cfg.cdcfg;
> +
> +	if (cdcfg->l1_desc) {
> +		size = CTXDESC_L2_ENTRIES * (CTXDESC_CD_DWORDS << 3);
> +
> +		for (i = 0; i < cdcfg->num_l1_ents; i++) {
> +			if (!cdcfg->l1_desc[i].l2ptr)
> +				continue;
> +
> +			dmam_free_coherent(smmu->dev, size,
> +					   cdcfg->l1_desc[i].l2ptr,
> +					   cdcfg->l1_desc[i].l2ptr_dma);
> +		}
> +		devm_kfree(smmu->dev, cdcfg->l1_desc);
> +		cdcfg->l1_desc = NULL;
> +
> +		l1size = cdcfg->num_l1_ents * (CTXDESC_L1_DESC_DWORDS << 3);
> +	} else {
> +		l1size = cdcfg->num_l1_ents * (CTXDESC_CD_DWORDS << 3);
> +	}
> +
> +	dmam_free_coherent(smmu->dev, l1size, cdcfg->cdtab, cdcfg->cdtab_dma);
> +	cdcfg->cdtab_dma = 0;
> +	cdcfg->cdtab = NULL;
> +}
> +
> +static void arm_smmu_free_asid(struct arm_smmu_ctx_desc *cd)
> +{
> +	if (!cd->asid)
> +		return;
> +
> +	xa_erase(&asid_xa, cd->asid);
> +}
> +
> +/* Stream table manipulation functions */
> +static void
> +arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc *desc)
> +{
> +	u64 val = 0;
> +
> +	val |= FIELD_PREP(STRTAB_L1_DESC_SPAN, desc->span);
> +	val |= desc->l2ptr_dma & STRTAB_L1_DESC_L2PTR_MASK;
> +
> +	/* See comment in arm_smmu_write_ctx_desc() */
> +	WRITE_ONCE(*dst, cpu_to_le64(val));
> +}
> +
> +static void arm_smmu_sync_ste_for_sid(struct arm_smmu_device *smmu, u32 sid)
> +{
> +	struct arm_smmu_cmdq_ent cmd = {
> +		.opcode	= CMDQ_OP_CFGI_STE,
> +		.cfgi	= {
> +			.sid	= sid,
> +			.leaf	= true,
> +		},
> +	};
> +
> +	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> +	arm_smmu_cmdq_issue_sync(smmu);
> +}
> +
> +static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
> +				      __le64 *dst)
> +{
> +	/*
> +	 * This is hideously complicated, but we only really care about
> +	 * three cases at the moment:
> +	 *
> +	 * 1. Invalid (all zero) -> bypass/fault (init)
> +	 * 2. Bypass/fault -> translation/bypass (attach)
> +	 * 3. Translation/bypass -> bypass/fault (detach)
> +	 *
> +	 * Given that we can't update the STE atomically and the SMMU
> +	 * doesn't read the thing in a defined order, that leaves us
> +	 * with the following maintenance requirements:
> +	 *
> +	 * 1. Update Config, return (init time STEs aren't live)
> +	 * 2. Write everything apart from dword 0, sync, write dword 0, sync
> +	 * 3. Update Config, sync
> +	 */
> +	u64 val = le64_to_cpu(dst[0]);
> +	bool ste_live = false;
> +	struct arm_smmu_device *smmu = NULL;
> +	struct arm_smmu_s1_cfg *s1_cfg = NULL;
> +	struct arm_smmu_s2_cfg *s2_cfg = NULL;
> +	struct arm_smmu_domain *smmu_domain = NULL;
> +	struct arm_smmu_cmdq_ent prefetch_cmd = {
> +		.opcode		= CMDQ_OP_PREFETCH_CFG,
> +		.prefetch	= {
> +			.sid	= sid,
> +		},
> +	};
> +
> +	if (master) {
> +		smmu_domain = master->domain;
> +		smmu = master->smmu;
> +	}
> +
> +	if (smmu_domain) {
> +		switch (smmu_domain->stage) {
> +		case ARM_SMMU_DOMAIN_S1:
> +			s1_cfg = &smmu_domain->s1_cfg;
> +			break;
> +		case ARM_SMMU_DOMAIN_S2:
> +		case ARM_SMMU_DOMAIN_NESTED:
> +			s2_cfg = &smmu_domain->s2_cfg;
> +			break;
> +		default:
> +			break;
> +		}
> +	}
> +
> +	if (val & STRTAB_STE_0_V) {
> +		switch (FIELD_GET(STRTAB_STE_0_CFG, val)) {
> +		case STRTAB_STE_0_CFG_BYPASS:
> +			break;
> +		case STRTAB_STE_0_CFG_S1_TRANS:
> +		case STRTAB_STE_0_CFG_S2_TRANS:
> +			ste_live = true;
> +			break;
> +		case STRTAB_STE_0_CFG_ABORT:
> +			BUG_ON(!disable_bypass);
> +			break;
> +		default:
> +			BUG(); /* STE corruption */
> +		}
> +	}
> +
> +	/* Nuke the existing STE_0 value, as we're going to rewrite it */
> +	val = STRTAB_STE_0_V;
> +
> +	/* Bypass/fault */
> +	if (!smmu_domain || !(s1_cfg || s2_cfg)) {
> +		if (!smmu_domain && disable_bypass)
> +			val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_ABORT);
> +		else
> +			val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_BYPASS);
> +
> +		dst[0] = cpu_to_le64(val);
> +		dst[1] = cpu_to_le64(FIELD_PREP(STRTAB_STE_1_SHCFG,
> +						STRTAB_STE_1_SHCFG_INCOMING));
> +		dst[2] = 0; /* Nuke the VMID */
> +		/*
> +		 * The SMMU can perform negative caching, so we must sync
> +		 * the STE regardless of whether the old value was live.
> +		 */
> +		if (smmu)
> +			arm_smmu_sync_ste_for_sid(smmu, sid);
> +		return;
> +	}
> +
> +	if (s1_cfg) {
> +		BUG_ON(ste_live);
> +		dst[1] = cpu_to_le64(
> +			 FIELD_PREP(STRTAB_STE_1_S1DSS, STRTAB_STE_1_S1DSS_SSID0) |
> +			 FIELD_PREP(STRTAB_STE_1_S1CIR, STRTAB_STE_1_S1C_CACHE_WBRA) |
> +			 FIELD_PREP(STRTAB_STE_1_S1COR, STRTAB_STE_1_S1C_CACHE_WBRA) |
> +			 FIELD_PREP(STRTAB_STE_1_S1CSH, ARM_SMMU_SH_ISH) |
> +			 FIELD_PREP(STRTAB_STE_1_STRW, STRTAB_STE_1_STRW_NSEL1));
> +
> +		if (smmu->features & ARM_SMMU_FEAT_STALLS &&
> +		   !(smmu->features & ARM_SMMU_FEAT_STALL_FORCE))
> +			dst[1] |= cpu_to_le64(STRTAB_STE_1_S1STALLD);
> +
> +		val |= (s1_cfg->cdcfg.cdtab_dma & STRTAB_STE_0_S1CTXPTR_MASK) |
> +			FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_S1_TRANS) |
> +			FIELD_PREP(STRTAB_STE_0_S1CDMAX, s1_cfg->s1cdmax) |
> +			FIELD_PREP(STRTAB_STE_0_S1FMT, s1_cfg->s1fmt);
> +	}
> +
> +	if (s2_cfg) {
> +		BUG_ON(ste_live);
> +		dst[2] = cpu_to_le64(
> +			 FIELD_PREP(STRTAB_STE_2_S2VMID, s2_cfg->vmid) |
> +			 FIELD_PREP(STRTAB_STE_2_VTCR, s2_cfg->vtcr) |
> +#ifdef __BIG_ENDIAN
> +			 STRTAB_STE_2_S2ENDI |
> +#endif
> +			 STRTAB_STE_2_S2PTW | STRTAB_STE_2_S2AA64 |
> +			 STRTAB_STE_2_S2R);
> +
> +		dst[3] = cpu_to_le64(s2_cfg->vttbr & STRTAB_STE_3_S2TTB_MASK);
> +
> +		val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_S2_TRANS);
> +	}
> +
> +	if (master->ats_enabled)
> +		dst[1] |= cpu_to_le64(FIELD_PREP(STRTAB_STE_1_EATS,
> +						 STRTAB_STE_1_EATS_TRANS));
> +
> +	arm_smmu_sync_ste_for_sid(smmu, sid);
> +	/* See comment in arm_smmu_write_ctx_desc() */
> +	WRITE_ONCE(dst[0], cpu_to_le64(val));
> +	arm_smmu_sync_ste_for_sid(smmu, sid);
> +
> +	/* It's likely that we'll want to use the new STE soon */
> +	if (!(smmu->options & ARM_SMMU_OPT_SKIP_PREFETCH))
> +		arm_smmu_cmdq_issue_cmd(smmu, &prefetch_cmd);
> +}
> +
> +static void arm_smmu_init_bypass_stes(u64 *strtab, unsigned int nent)
> +{
> +	unsigned int i;
> +
> +	for (i = 0; i < nent; ++i) {
> +		arm_smmu_write_strtab_ent(NULL, -1, strtab);
> +		strtab += STRTAB_STE_DWORDS;
> +	}
> +}
> +
> +static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
> +{
> +	size_t size;
> +	void *strtab;
> +	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
> +	struct arm_smmu_strtab_l1_desc *desc = &cfg->l1_desc[sid >> STRTAB_SPLIT];
> +
> +	if (desc->l2ptr)
> +		return 0;
> +
> +	size = 1 << (STRTAB_SPLIT + ilog2(STRTAB_STE_DWORDS) + 3);
> +	strtab = &cfg->strtab[(sid >> STRTAB_SPLIT) * STRTAB_L1_DESC_DWORDS];
> +
> +	desc->span = STRTAB_SPLIT + 1;
> +	desc->l2ptr = dmam_alloc_coherent(smmu->dev, size, &desc->l2ptr_dma,
> +					  GFP_KERNEL);
> +	if (!desc->l2ptr) {
> +		dev_err(smmu->dev,
> +			"failed to allocate l2 stream table for SID %u\n",
> +			sid);
> +		return -ENOMEM;
> +	}
> +
> +	arm_smmu_init_bypass_stes(desc->l2ptr, 1 << STRTAB_SPLIT);
> +	arm_smmu_write_strtab_l1_desc(strtab, desc);
> +	return 0;
> +}
> +
> +/* IRQ and event handlers */
> +static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
> +{
> +	int i;
> +	struct arm_smmu_device *smmu = dev;
> +	struct arm_smmu_queue *q = &smmu->evtq.q;
> +	struct arm_smmu_ll_queue *llq = &q->llq;
> +	u64 evt[EVTQ_ENT_DWORDS];
> +
> +	do {
> +		while (!queue_remove_raw(q, evt)) {
> +			u8 id = FIELD_GET(EVTQ_0_ID, evt[0]);
> +
> +			dev_info(smmu->dev, "event 0x%02x received:\n", id);
> +			for (i = 0; i < ARRAY_SIZE(evt); ++i)
> +				dev_info(smmu->dev, "\t0x%016llx\n",
> +					 (unsigned long long)evt[i]);
> +
> +		}
> +
> +		/*
> +		 * Not much we can do on overflow, so scream and pretend we're
> +		 * trying harder.
> +		 */
> +		if (queue_sync_prod_in(q) == -EOVERFLOW)
> +			dev_err(smmu->dev, "EVTQ overflow detected -- events lost\n");
> +	} while (!queue_empty(llq));
> +
> +	/* Sync our overflow flag, as we believe we're up to speed */
> +	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
> +		    Q_IDX(llq, llq->cons);
> +	return IRQ_HANDLED;
> +}
> +
> +static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
> +{
> +	u32 sid, ssid;
> +	u16 grpid;
> +	bool ssv, last;
> +
> +	sid = FIELD_GET(PRIQ_0_SID, evt[0]);
> +	ssv = FIELD_GET(PRIQ_0_SSID_V, evt[0]);
> +	ssid = ssv ? FIELD_GET(PRIQ_0_SSID, evt[0]) : 0;
> +	last = FIELD_GET(PRIQ_0_PRG_LAST, evt[0]);
> +	grpid = FIELD_GET(PRIQ_1_PRG_IDX, evt[1]);
> +
> +	dev_info(smmu->dev, "unexpected PRI request received:\n");
> +	dev_info(smmu->dev,
> +		 "\tsid 0x%08x.0x%05x: [%u%s] %sprivileged %s%s%s access at iova 0x%016llx\n",
> +		 sid, ssid, grpid, last ? "L" : "",
> +		 evt[0] & PRIQ_0_PERM_PRIV ? "" : "un",
> +		 evt[0] & PRIQ_0_PERM_READ ? "R" : "",
> +		 evt[0] & PRIQ_0_PERM_WRITE ? "W" : "",
> +		 evt[0] & PRIQ_0_PERM_EXEC ? "X" : "",
> +		 evt[1] & PRIQ_1_ADDR_MASK);
> +
> +	if (last) {
> +		struct arm_smmu_cmdq_ent cmd = {
> +			.opcode			= CMDQ_OP_PRI_RESP,
> +			.substream_valid	= ssv,
> +			.pri			= {
> +				.sid	= sid,
> +				.ssid	= ssid,
> +				.grpid	= grpid,
> +				.resp	= PRI_RESP_DENY,
> +			},
> +		};
> +
> +		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> +	}
> +}
> +
> +static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
> +{
> +	struct arm_smmu_device *smmu = dev;
> +	struct arm_smmu_queue *q = &smmu->priq.q;
> +	struct arm_smmu_ll_queue *llq = &q->llq;
> +	u64 evt[PRIQ_ENT_DWORDS];
> +
> +	do {
> +		while (!queue_remove_raw(q, evt))
> +			arm_smmu_handle_ppr(smmu, evt);
> +
> +		if (queue_sync_prod_in(q) == -EOVERFLOW)
> +			dev_err(smmu->dev, "PRIQ overflow detected -- requests lost\n");
> +	} while (!queue_empty(llq));
> +
> +	/* Sync our overflow flag, as we believe we're up to speed */
> +	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
> +		      Q_IDX(llq, llq->cons);
> +	queue_sync_cons_out(q);
> +	return IRQ_HANDLED;
> +}
> +
> +static int arm_smmu_device_disable(struct arm_smmu_device *smmu);
> +
> +static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
> +{
> +	u32 gerror, gerrorn, active;
> +	struct arm_smmu_device *smmu = dev;
> +
> +	gerror = readl_relaxed(smmu->base + ARM_SMMU_GERROR);
> +	gerrorn = readl_relaxed(smmu->base + ARM_SMMU_GERRORN);
> +
> +	active = gerror ^ gerrorn;
> +	if (!(active & GERROR_ERR_MASK))
> +		return IRQ_NONE; /* No errors pending */
> +
> +	dev_warn(smmu->dev,
> +		 "unexpected global error reported (0x%08x), this could be serious\n",
> +		 active);
> +
> +	if (active & GERROR_SFM_ERR) {
> +		dev_err(smmu->dev, "device has entered Service Failure Mode!\n");
> +		arm_smmu_device_disable(smmu);
> +	}
> +
> +	if (active & GERROR_MSI_GERROR_ABT_ERR)
> +		dev_warn(smmu->dev, "GERROR MSI write aborted\n");
> +
> +	if (active & GERROR_MSI_PRIQ_ABT_ERR)
> +		dev_warn(smmu->dev, "PRIQ MSI write aborted\n");
> +
> +	if (active & GERROR_MSI_EVTQ_ABT_ERR)
> +		dev_warn(smmu->dev, "EVTQ MSI write aborted\n");
> +
> +	if (active & GERROR_MSI_CMDQ_ABT_ERR)
> +		dev_warn(smmu->dev, "CMDQ MSI write aborted\n");
> +
> +	if (active & GERROR_PRIQ_ABT_ERR)
> +		dev_err(smmu->dev, "PRIQ write aborted -- events may have been lost\n");
> +
> +	if (active & GERROR_EVTQ_ABT_ERR)
> +		dev_err(smmu->dev, "EVTQ write aborted -- events may have been lost\n");
> +
> +	if (active & GERROR_CMDQ_ERR)
> +		arm_smmu_cmdq_skip_err(smmu);
> +
> +	writel(gerror, smmu->base + ARM_SMMU_GERRORN);
> +	return IRQ_HANDLED;
> +}
> +
> +static irqreturn_t arm_smmu_combined_irq_thread(int irq, void *dev)
> +{
> +	struct arm_smmu_device *smmu = dev;
> +
> +	arm_smmu_evtq_thread(irq, dev);
> +	if (smmu->features & ARM_SMMU_FEAT_PRI)
> +		arm_smmu_priq_thread(irq, dev);
> +
> +	return IRQ_HANDLED;
> +}
> +
> +static irqreturn_t arm_smmu_combined_irq_handler(int irq, void *dev)
> +{
> +	arm_smmu_gerror_handler(irq, dev);
> +	return IRQ_WAKE_THREAD;
> +}
> +
> +static void
> +arm_smmu_atc_inv_to_cmd(int ssid, unsigned long iova, size_t size,
> +			struct arm_smmu_cmdq_ent *cmd)
> +{
> +	size_t log2_span;
> +	size_t span_mask;
> +	/* ATC invalidates are always on 4096-bytes pages */
> +	size_t inval_grain_shift = 12;
> +	unsigned long page_start, page_end;
> +
> +	*cmd = (struct arm_smmu_cmdq_ent) {
> +		.opcode			= CMDQ_OP_ATC_INV,
> +		.substream_valid	= !!ssid,
> +		.atc.ssid		= ssid,
> +	};
> +
> +	if (!size) {
> +		cmd->atc.size = ATC_INV_SIZE_ALL;
> +		return;
> +	}
> +
> +	page_start	= iova >> inval_grain_shift;
> +	page_end	= (iova + size - 1) >> inval_grain_shift;
> +
> +	/*
> +	 * In an ATS Invalidate Request, the address must be aligned on the
> +	 * range size, which must be a power of two number of page sizes. We
> +	 * thus have to choose between grossly over-invalidating the region, or
> +	 * splitting the invalidation into multiple commands. For simplicity
> +	 * we'll go with the first solution, but should refine it in the future
> +	 * if multiple commands are shown to be more efficient.
> +	 *
> +	 * Find the smallest power of two that covers the range. The most
> +	 * significant differing bit between the start and end addresses,
> +	 * fls(start ^ end), indicates the required span. For example:
> +	 *
> +	 * We want to invalidate pages [8; 11]. This is already the ideal range:
> +	 *		x = 0b1000 ^ 0b1011 = 0b11
> +	 *		span = 1 << fls(x) = 4
> +	 *
> +	 * To invalidate pages [7; 10], we need to invalidate [0; 15]:
> +	 *		x = 0b0111 ^ 0b1010 = 0b1101
> +	 *		span = 1 << fls(x) = 16
> +	 */
> +	log2_span	= fls_long(page_start ^ page_end);
> +	span_mask	= (1ULL << log2_span) - 1;
> +
> +	page_start	&= ~span_mask;
> +
> +	cmd->atc.addr	= page_start << inval_grain_shift;
> +	cmd->atc.size	= log2_span;
> +}
> +
> +static int arm_smmu_atc_inv_master(struct arm_smmu_master *master)
> +{
> +	int i;
> +	struct arm_smmu_cmdq_ent cmd;
> +
> +	arm_smmu_atc_inv_to_cmd(0, 0, 0, &cmd);
> +
> +	for (i = 0; i < master->num_sids; i++) {
> +		cmd.atc.sid = master->sids[i];
> +		arm_smmu_cmdq_issue_cmd(master->smmu, &cmd);
> +	}
> +
> +	return arm_smmu_cmdq_issue_sync(master->smmu);
> +}
> +
> +static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
> +				   int ssid, unsigned long iova, size_t size)
> +{
> +	int i;
> +	unsigned long flags;
> +	struct arm_smmu_cmdq_ent cmd;
> +	struct arm_smmu_master *master;
> +	struct arm_smmu_cmdq_batch cmds = {};
> +
> +	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_ATS))
> +		return 0;
> +
> +	/*
> +	 * Ensure that we've completed prior invalidation of the main TLBs
> +	 * before we read 'nr_ats_masters' in case of a concurrent call to
> +	 * arm_smmu_enable_ats():
> +	 *
> +	 *	// unmap()			// arm_smmu_enable_ats()
> +	 *	TLBI+SYNC			atomic_inc(&nr_ats_masters);
> +	 *	smp_mb();			[...]
> +	 *	atomic_read(&nr_ats_masters);	pci_enable_ats() // writel()
> +	 *
> +	 * Ensures that we always see the incremented 'nr_ats_masters' count if
> +	 * ATS was enabled at the PCI device before completion of the TLBI.
> +	 */
> +	smp_mb();
> +	if (!atomic_read(&smmu_domain->nr_ats_masters))
> +		return 0;
> +
> +	arm_smmu_atc_inv_to_cmd(ssid, iova, size, &cmd);
> +
> +	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
> +	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
> +		if (!master->ats_enabled)
> +			continue;
> +
> +		for (i = 0; i < master->num_sids; i++) {
> +			cmd.atc.sid = master->sids[i];
> +			arm_smmu_cmdq_batch_add(smmu_domain->smmu, &cmds, &cmd);
> +		}
> +	}
> +	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
> +
> +	return arm_smmu_cmdq_batch_submit(smmu_domain->smmu, &cmds);
> +}
> +
> +/* IO_PGTABLE API */
> +static void arm_smmu_tlb_inv_context(void *cookie)
> +{
> +	struct arm_smmu_domain *smmu_domain = cookie;
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +	struct arm_smmu_cmdq_ent cmd;
> +
> +	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
> +		cmd.opcode	= CMDQ_OP_TLBI_NH_ASID;
> +		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd.asid;
> +		cmd.tlbi.vmid	= 0;
> +	} else {
> +		cmd.opcode	= CMDQ_OP_TLBI_S12_VMALL;
> +		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
> +	}
> +
> +	/*
> +	 * NOTE: when io-pgtable is in non-strict mode, we may get here with
> +	 * PTEs previously cleared by unmaps on the current CPU not yet visible
> +	 * to the SMMU. We are relying on the dma_wmb() implicit during cmd
> +	 * insertion to guarantee those are observed before the TLBI. Do be
> +	 * careful, 007.
> +	 */
> +	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> +	arm_smmu_cmdq_issue_sync(smmu);
> +	arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0);
> +}
> +
> +static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size,
> +				   size_t granule, bool leaf,
> +				   struct arm_smmu_domain *smmu_domain)
> +{
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +	unsigned long start = iova, end = iova + size, num_pages = 0, tg = 0;
> +	size_t inv_range = granule;
> +	struct arm_smmu_cmdq_batch cmds = {};
> +	struct arm_smmu_cmdq_ent cmd = {
> +		.tlbi = {
> +			.leaf	= leaf,
> +		},
> +	};
> +
> +	if (!size)
> +		return;
> +
> +	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
> +		cmd.opcode	= CMDQ_OP_TLBI_NH_VA;
> +		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd.asid;
> +	} else {
> +		cmd.opcode	= CMDQ_OP_TLBI_S2_IPA;
> +		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
> +	}
> +
> +	if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
> +		/* Get the leaf page size */
> +		tg = __ffs(smmu_domain->domain.pgsize_bitmap);
> +
> +		/* Convert page size of 12,14,16 (log2) to 1,2,3 */
> +		cmd.tlbi.tg = (tg - 10) / 2;
> +
> +		/* Determine what level the granule is at */
> +		cmd.tlbi.ttl = 4 - ((ilog2(granule) - 3) / (tg - 3));
> +
> +		num_pages = size >> tg;
> +	}
> +
> +	while (iova < end) {
> +		if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
> +			/*
> +			 * On each iteration of the loop, the range is 5 bits
> +			 * worth of the aligned size remaining.
> +			 * The range in pages is:
> +			 *
> +			 * range = (num_pages & (0x1f << __ffs(num_pages)))
> +			 */
> +			unsigned long scale, num;
> +
> +			/* Determine the power of 2 multiple number of pages */
> +			scale = __ffs(num_pages);
> +			cmd.tlbi.scale = scale;
> +
> +			/* Determine how many chunks of 2^scale size we have */
> +			num = (num_pages >> scale) & CMDQ_TLBI_RANGE_NUM_MAX;
> +			cmd.tlbi.num = num - 1;
> +
> +			/* range is num * 2^scale * pgsize */
> +			inv_range = num << (scale + tg);
> +
> +			/* Clear out the lower order bits for the next iteration */
> +			num_pages -= num << scale;
> +		}
> +
> +		cmd.tlbi.addr = iova;
> +		arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
> +		iova += inv_range;
> +	}
> +	arm_smmu_cmdq_batch_submit(smmu, &cmds);
> +
> +	/*
> +	 * Unfortunately, this can't be leaf-only since we may have
> +	 * zapped an entire table.
> +	 */
> +	arm_smmu_atc_inv_domain(smmu_domain, 0, start, size);
> +}
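The NUM/SCALE carve-up in the loop above is easier to see in isolation. Below is a userspace sketch (hypothetical helper, not driver code) that mirrors the same arithmetic — lowest set bit as the scale, at most 31 chunks of 2^scale pages per command, matching the `& CMDQ_TLBI_RANGE_NUM_MAX` mask in the patch — and just counts how many TLBI commands a given page count would need:

```c
#include <assert.h>

/* Sketch of the range-TLBI carve-up: each command covers
 * num * 2^scale pages, num in [1, 31] as in the patch's mask. */
#define TLBI_RANGE_NUM_MAX 31

static unsigned int tlbi_range_cmds(unsigned long num_pages)
{
	unsigned int cmds = 0;

	while (num_pages) {
		/* lowest set bit gives the largest power-of-2 alignment */
		unsigned long scale = __builtin_ctzl(num_pages);
		unsigned long num = (num_pages >> scale) & TLBI_RANGE_NUM_MAX;

		/* clear the low-order bits covered by this command */
		num_pages -= num << scale;
		cmds++;
	}
	return cmds;
}
```

For example, 0x230 pages (2MB + 192KB of 4K pages) is covered in two commands: 3 pages at scale 4, then 1 chunk at scale 9.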
> +
> +static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather,
> +					 unsigned long iova, size_t granule,
> +					 void *cookie)
> +{
> +	struct arm_smmu_domain *smmu_domain = cookie;
> +	struct iommu_domain *domain = &smmu_domain->domain;
> +
> +	iommu_iotlb_gather_add_page(domain, gather, iova, granule);
> +}
> +
> +static void arm_smmu_tlb_inv_walk(unsigned long iova, size_t size,
> +				  size_t granule, void *cookie)
> +{
> +	arm_smmu_tlb_inv_range(iova, size, granule, false, cookie);
> +}
> +
> +static void arm_smmu_tlb_inv_leaf(unsigned long iova, size_t size,
> +				  size_t granule, void *cookie)
> +{
> +	arm_smmu_tlb_inv_range(iova, size, granule, true, cookie);
> +}
> +
> +static const struct iommu_flush_ops arm_smmu_flush_ops = {
> +	.tlb_flush_all	= arm_smmu_tlb_inv_context,
> +	.tlb_flush_walk = arm_smmu_tlb_inv_walk,
> +	.tlb_flush_leaf = arm_smmu_tlb_inv_leaf,
> +	.tlb_add_page	= arm_smmu_tlb_inv_page_nosync,
> +};
> +
> +/* IOMMU API */
> +static bool arm_smmu_capable(enum iommu_cap cap)
> +{
> +	switch (cap) {
> +	case IOMMU_CAP_CACHE_COHERENCY:
> +		return true;
> +	case IOMMU_CAP_NOEXEC:
> +		return true;
> +	default:
> +		return false;
> +	}
> +}
> +
> +static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
> +{
> +	struct arm_smmu_domain *smmu_domain;
> +
> +	if (type != IOMMU_DOMAIN_UNMANAGED &&
> +	    type != IOMMU_DOMAIN_DMA &&
> +	    type != IOMMU_DOMAIN_IDENTITY)
> +		return NULL;
> +
> +	/*
> +	 * Allocate the domain and initialise some of its data structures.
> +	 * We can't really do anything meaningful until we've added a
> +	 * master.
> +	 */
> +	smmu_domain = kzalloc(sizeof(*smmu_domain), GFP_KERNEL);
> +	if (!smmu_domain)
> +		return NULL;
> +
> +	if (type == IOMMU_DOMAIN_DMA &&
> +	    iommu_get_dma_cookie(&smmu_domain->domain)) {
> +		kfree(smmu_domain);
> +		return NULL;
> +	}
> +
> +	mutex_init(&smmu_domain->init_mutex);
> +	INIT_LIST_HEAD(&smmu_domain->devices);
> +	spin_lock_init(&smmu_domain->devices_lock);
> +
> +	return &smmu_domain->domain;
> +}
> +
> +static int arm_smmu_bitmap_alloc(unsigned long *map, int span)
> +{
> +	int idx, size = 1 << span;
> +
> +	do {
> +		idx = find_first_zero_bit(map, size);
> +		if (idx == size)
> +			return -ENOSPC;
> +	} while (test_and_set_bit(idx, map));
> +
> +	return idx;
> +}
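The find/claim pattern here is lockless: `find_first_zero_bit()` is only a hint, and `test_and_set_bit()` is what actually claims the slot, so a racing winner just sends the loser back around the loop. A single-word userspace sketch of the same shape, using C11 atomics as stand-ins (illustrative names, not the kernel bitops):

```c
#include <assert.h>
#include <stdatomic.h>

/* Sketch of the lockless find-then-claim loop: scan for a zero bit,
 * then atomically set it; retry if another thread claimed it first. */
static int bitmap_alloc_span(atomic_ulong *map, int span)
{
	int size = 1 << span;
	int idx;

	do {
		unsigned long m = atomic_load(map);

		/* find first zero bit (single-word map in this sketch) */
		for (idx = 0; idx < size; idx++)
			if (!(m & (1UL << idx)))
				break;
		if (idx == size)
			return -1;	/* -ENOSPC in the driver */
		/* fetch_or returns the old word: bit already set => raced */
	} while (atomic_fetch_or(map, 1UL << idx) & (1UL << idx));

	return idx;
}
```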
> +
> +static void arm_smmu_bitmap_free(unsigned long *map, int idx)
> +{
> +	clear_bit(idx, map);
> +}
> +
> +static void arm_smmu_domain_free(struct iommu_domain *domain)
> +{
> +	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +
> +	iommu_put_dma_cookie(domain);
> +	free_io_pgtable_ops(smmu_domain->pgtbl_ops);
> +
> +	/* Free the CD and ASID, if we allocated them */
> +	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
> +		struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
> +
> +		if (cfg->cdcfg.cdtab)
> +			arm_smmu_free_cd_tables(smmu_domain);
> +		arm_smmu_free_asid(&cfg->cd);
> +	} else {
> +		struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
> +		if (cfg->vmid)
> +			arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid);
> +	}
> +
> +	kfree(smmu_domain);
> +}
> +
> +static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
> +				       struct arm_smmu_master *master,
> +				       struct io_pgtable_cfg *pgtbl_cfg)
> +{
> +	int ret;
> +	u32 asid;
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +	struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
> +	typeof(&pgtbl_cfg->arm_lpae_s1_cfg.tcr) tcr = &pgtbl_cfg->arm_lpae_s1_cfg.tcr;
> +
> +	ret = xa_alloc(&asid_xa, &asid, &cfg->cd,
> +		       XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL);
> +	if (ret)
> +		return ret;
> +
> +	cfg->s1cdmax = master->ssid_bits;
> +
> +	ret = arm_smmu_alloc_cd_tables(smmu_domain);
> +	if (ret)
> +		goto out_free_asid;
> +
> +	cfg->cd.asid	= (u16)asid;
> +	cfg->cd.ttbr	= pgtbl_cfg->arm_lpae_s1_cfg.ttbr;
> +	cfg->cd.tcr	= FIELD_PREP(CTXDESC_CD_0_TCR_T0SZ, tcr->tsz) |
> +			  FIELD_PREP(CTXDESC_CD_0_TCR_TG0, tcr->tg) |
> +			  FIELD_PREP(CTXDESC_CD_0_TCR_IRGN0, tcr->irgn) |
> +			  FIELD_PREP(CTXDESC_CD_0_TCR_ORGN0, tcr->orgn) |
> +			  FIELD_PREP(CTXDESC_CD_0_TCR_SH0, tcr->sh) |
> +			  FIELD_PREP(CTXDESC_CD_0_TCR_IPS, tcr->ips) |
> +			  CTXDESC_CD_0_TCR_EPD1 | CTXDESC_CD_0_AA64;
> +	cfg->cd.mair	= pgtbl_cfg->arm_lpae_s1_cfg.mair;
> +
> +	/*
> +	 * Note that this will end up calling arm_smmu_sync_cd() before
> +	 * the master has been added to the devices list for this domain.
> +	 * This isn't an issue because the STE hasn't been installed yet.
> +	 */
> +	ret = arm_smmu_write_ctx_desc(smmu_domain, 0, &cfg->cd);
> +	if (ret)
> +		goto out_free_cd_tables;
> +
> +	return 0;
> +
> +out_free_cd_tables:
> +	arm_smmu_free_cd_tables(smmu_domain);
> +out_free_asid:
> +	arm_smmu_free_asid(&cfg->cd);
> +	return ret;
> +}
> +
> +static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
> +				       struct arm_smmu_master *master,
> +				       struct io_pgtable_cfg *pgtbl_cfg)
> +{
> +	int vmid;
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
> +	typeof(&pgtbl_cfg->arm_lpae_s2_cfg.vtcr) vtcr;
> +
> +	vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
> +	if (vmid < 0)
> +		return vmid;
> +
> +	vtcr = &pgtbl_cfg->arm_lpae_s2_cfg.vtcr;
> +	cfg->vmid	= (u16)vmid;
> +	cfg->vttbr	= pgtbl_cfg->arm_lpae_s2_cfg.vttbr;
> +	cfg->vtcr	= FIELD_PREP(STRTAB_STE_2_VTCR_S2T0SZ, vtcr->tsz) |
> +			  FIELD_PREP(STRTAB_STE_2_VTCR_S2SL0, vtcr->sl) |
> +			  FIELD_PREP(STRTAB_STE_2_VTCR_S2IR0, vtcr->irgn) |
> +			  FIELD_PREP(STRTAB_STE_2_VTCR_S2OR0, vtcr->orgn) |
> +			  FIELD_PREP(STRTAB_STE_2_VTCR_S2SH0, vtcr->sh) |
> +			  FIELD_PREP(STRTAB_STE_2_VTCR_S2TG, vtcr->tg) |
> +			  FIELD_PREP(STRTAB_STE_2_VTCR_S2PS, vtcr->ps);
> +	return 0;
> +}
> +
> +static int arm_smmu_domain_finalise(struct iommu_domain *domain,
> +				    struct arm_smmu_master *master)
> +{
> +	int ret;
> +	unsigned long ias, oas;
> +	enum io_pgtable_fmt fmt;
> +	struct io_pgtable_cfg pgtbl_cfg;
> +	struct io_pgtable_ops *pgtbl_ops;
> +	int (*finalise_stage_fn)(struct arm_smmu_domain *,
> +				 struct arm_smmu_master *,
> +				 struct io_pgtable_cfg *);
> +	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +
> +	if (domain->type == IOMMU_DOMAIN_IDENTITY) {
> +		smmu_domain->stage = ARM_SMMU_DOMAIN_BYPASS;
> +		return 0;
> +	}
> +
> +	/* Restrict the stage to what we can actually support */
> +	if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S1))
> +		smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
> +	if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S2))
> +		smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
> +
> +	switch (smmu_domain->stage) {
> +	case ARM_SMMU_DOMAIN_S1:
> +		ias = (smmu->features & ARM_SMMU_FEAT_VAX) ? 52 : 48;
> +		ias = min_t(unsigned long, ias, VA_BITS);
> +		oas = smmu->ias;
> +		fmt = ARM_64_LPAE_S1;
> +		finalise_stage_fn = arm_smmu_domain_finalise_s1;
> +		break;
> +	case ARM_SMMU_DOMAIN_NESTED:
> +	case ARM_SMMU_DOMAIN_S2:
> +		ias = smmu->ias;
> +		oas = smmu->oas;
> +		fmt = ARM_64_LPAE_S2;
> +		finalise_stage_fn = arm_smmu_domain_finalise_s2;
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> +
> +	pgtbl_cfg = (struct io_pgtable_cfg) {
> +		.pgsize_bitmap	= smmu->pgsize_bitmap,
> +		.ias		= ias,
> +		.oas		= oas,
> +		.coherent_walk	= smmu->features & ARM_SMMU_FEAT_COHERENCY,
> +		.tlb		= &arm_smmu_flush_ops,
> +		.iommu_dev	= smmu->dev,
> +	};
> +
> +	if (smmu_domain->non_strict)
> +		pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
> +
> +	pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain);
> +	if (!pgtbl_ops)
> +		return -ENOMEM;
> +
> +	domain->pgsize_bitmap = pgtbl_cfg.pgsize_bitmap;
> +	domain->geometry.aperture_end = (1UL << pgtbl_cfg.ias) - 1;
> +	domain->geometry.force_aperture = true;
> +
> +	ret = finalise_stage_fn(smmu_domain, master, &pgtbl_cfg);
> +	if (ret < 0) {
> +		free_io_pgtable_ops(pgtbl_ops);
> +		return ret;
> +	}
> +
> +	smmu_domain->pgtbl_ops = pgtbl_ops;
> +	return 0;
> +}
> +
> +static __le64 *arm_smmu_get_step_for_sid(struct arm_smmu_device *smmu, u32 sid)
> +{
> +	__le64 *step;
> +	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
> +
> +	if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) {
> +		struct arm_smmu_strtab_l1_desc *l1_desc;
> +		int idx;
> +
> +		/* Two-level walk */
> +		idx = (sid >> STRTAB_SPLIT) * STRTAB_L1_DESC_DWORDS;
> +		l1_desc = &cfg->l1_desc[idx];
> +		idx = (sid & ((1 << STRTAB_SPLIT) - 1)) * STRTAB_STE_DWORDS;
> +		step = &l1_desc->l2ptr[idx];
> +	} else {
> +		/* Simple linear lookup */
> +		step = &cfg->strtab[sid * STRTAB_STE_DWORDS];
> +	}
> +
> +	return step;
> +}
> +
> +static void arm_smmu_install_ste_for_dev(struct arm_smmu_master *master)
> +{
> +	int i, j;
> +	struct arm_smmu_device *smmu = master->smmu;
> +
> +	for (i = 0; i < master->num_sids; ++i) {
> +		u32 sid = master->sids[i];
> +		__le64 *step = arm_smmu_get_step_for_sid(smmu, sid);
> +
> +		/* Bridged PCI devices may end up with duplicated IDs */
> +		for (j = 0; j < i; j++)
> +			if (master->sids[j] == sid)
> +				break;
> +		if (j < i)
> +			continue;
> +
> +		arm_smmu_write_strtab_ent(master, sid, step);
> +	}
> +}
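The inner `j` loop above implements a quadratic dedup: an STE is only written for the first occurrence of each stream ID, which is cheap because masters carry very few SIDs. A minimal sketch of that check (hypothetical helper, not driver code):

```c
#include <assert.h>

/* Sketch of the duplicate-SID skip: true only for the first
 * occurrence of sids[i] within sids[0..i-1]. */
static int is_first_occurrence(const unsigned int *sids, int i)
{
	int j;

	for (j = 0; j < i; j++)
		if (sids[j] == sids[i])
			return 0;
	return 1;
}
```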
> +
> +static bool arm_smmu_ats_supported(struct arm_smmu_master *master)
> +{
> +	struct device *dev = master->dev;
> +	struct arm_smmu_device *smmu = master->smmu;
> +	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> +
> +	if (!(smmu->features & ARM_SMMU_FEAT_ATS))
> +		return false;
> +
> +	if (!(fwspec->flags & IOMMU_FWSPEC_PCI_RC_ATS))
> +		return false;
> +
> +	return dev_is_pci(dev) && pci_ats_supported(to_pci_dev(dev));
> +}
> +
> +static void arm_smmu_enable_ats(struct arm_smmu_master *master)
> +{
> +	size_t stu;
> +	struct pci_dev *pdev;
> +	struct arm_smmu_device *smmu = master->smmu;
> +	struct arm_smmu_domain *smmu_domain = master->domain;
> +
> +	/* Don't enable ATS at the endpoint if it's not enabled in the STE */
> +	if (!master->ats_enabled)
> +		return;
> +
> +	/* Smallest Translation Unit: log2 of the smallest supported granule */
> +	stu = __ffs(smmu->pgsize_bitmap);
> +	pdev = to_pci_dev(master->dev);
> +
> +	atomic_inc(&smmu_domain->nr_ats_masters);
> +	arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0);
> +	if (pci_enable_ats(pdev, stu))
> +		dev_err(master->dev, "Failed to enable ATS (STU %zu)\n", stu);
> +}
> +
> +static void arm_smmu_disable_ats(struct arm_smmu_master *master)
> +{
> +	struct arm_smmu_domain *smmu_domain = master->domain;
> +
> +	if (!master->ats_enabled)
> +		return;
> +
> +	pci_disable_ats(to_pci_dev(master->dev));
> +	/*
> +	 * Ensure ATS is disabled at the endpoint before we issue the
> +	 * ATC invalidation via the SMMU.
> +	 */
> +	wmb();
> +	arm_smmu_atc_inv_master(master);
> +	atomic_dec(&smmu_domain->nr_ats_masters);
> +}
> +
> +static int arm_smmu_enable_pasid(struct arm_smmu_master *master)
> +{
> +	int ret;
> +	int features;
> +	int num_pasids;
> +	struct pci_dev *pdev;
> +
> +	if (!dev_is_pci(master->dev))
> +		return -ENODEV;
> +
> +	pdev = to_pci_dev(master->dev);
> +
> +	features = pci_pasid_features(pdev);
> +	if (features < 0)
> +		return features;
> +
> +	num_pasids = pci_max_pasids(pdev);
> +	if (num_pasids <= 0)
> +		return num_pasids;
> +
> +	ret = pci_enable_pasid(pdev, features);
> +	if (ret) {
> +		dev_err(&pdev->dev, "Failed to enable PASID\n");
> +		return ret;
> +	}
> +
> +	master->ssid_bits = min_t(u8, ilog2(num_pasids),
> +				  master->smmu->ssid_bits);
> +	return 0;
> +}
> +
> +static void arm_smmu_disable_pasid(struct arm_smmu_master *master)
> +{
> +	struct pci_dev *pdev;
> +
> +	if (!dev_is_pci(master->dev))
> +		return;
> +
> +	pdev = to_pci_dev(master->dev);
> +
> +	if (!pdev->pasid_enabled)
> +		return;
> +
> +	master->ssid_bits = 0;
> +	pci_disable_pasid(pdev);
> +}
> +
> +static void arm_smmu_detach_dev(struct arm_smmu_master *master)
> +{
> +	unsigned long flags;
> +	struct arm_smmu_domain *smmu_domain = master->domain;
> +
> +	if (!smmu_domain)
> +		return;
> +
> +	arm_smmu_disable_ats(master);
> +
> +	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
> +	list_del(&master->domain_head);
> +	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
> +
> +	master->domain = NULL;
> +	master->ats_enabled = false;
> +	arm_smmu_install_ste_for_dev(master);
> +}
> +
> +static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
> +{
> +	int ret = 0;
> +	unsigned long flags;
> +	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> +	struct arm_smmu_device *smmu;
> +	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> +	struct arm_smmu_master *master;
> +
> +	if (!fwspec)
> +		return -ENOENT;
> +
> +	master = dev_iommu_priv_get(dev);
> +	smmu = master->smmu;
> +
> +	arm_smmu_detach_dev(master);
> +
> +	mutex_lock(&smmu_domain->init_mutex);
> +
> +	if (!smmu_domain->smmu) {
> +		smmu_domain->smmu = smmu;
> +		ret = arm_smmu_domain_finalise(domain, master);
> +		if (ret) {
> +			smmu_domain->smmu = NULL;
> +			goto out_unlock;
> +		}
> +	} else if (smmu_domain->smmu != smmu) {
> +		dev_err(dev,
> +			"cannot attach to SMMU %s (upstream of %s)\n",
> +			dev_name(smmu_domain->smmu->dev),
> +			dev_name(smmu->dev));
> +		ret = -ENXIO;
> +		goto out_unlock;
> +	} else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1 &&
> +		   master->ssid_bits != smmu_domain->s1_cfg.s1cdmax) {
> +		dev_err(dev,
> +			"cannot attach to incompatible domain (%u SSID bits != %u)\n",
> +			smmu_domain->s1_cfg.s1cdmax, master->ssid_bits);
> +		ret = -EINVAL;
> +		goto out_unlock;
> +	}
> +
> +	master->domain = smmu_domain;
> +
> +	if (smmu_domain->stage != ARM_SMMU_DOMAIN_BYPASS)
> +		master->ats_enabled = arm_smmu_ats_supported(master);
> +
> +	arm_smmu_install_ste_for_dev(master);
> +
> +	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
> +	list_add(&master->domain_head, &smmu_domain->devices);
> +	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
> +
> +	arm_smmu_enable_ats(master);
> +
> +out_unlock:
> +	mutex_unlock(&smmu_domain->init_mutex);
> +	return ret;
> +}
> +
> +static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
> +			phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
> +{
> +	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
> +
> +	if (!ops)
> +		return -ENODEV;
> +
> +	return ops->map(ops, iova, paddr, size, prot);
> +}
> +
> +static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
> +			     size_t size, struct iommu_iotlb_gather *gather)
> +{
> +	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> +	struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
> +
> +	if (!ops)
> +		return 0;
> +
> +	return ops->unmap(ops, iova, size, gather);
> +}
> +
> +static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain)
> +{
> +	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> +
> +	if (smmu_domain->smmu)
> +		arm_smmu_tlb_inv_context(smmu_domain);
> +}
> +
> +static void arm_smmu_iotlb_sync(struct iommu_domain *domain,
> +				struct iommu_iotlb_gather *gather)
> +{
> +	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> +
> +	arm_smmu_tlb_inv_range(gather->start, gather->end - gather->start,
> +			       gather->pgsize, true, smmu_domain);
> +}
> +
> +static phys_addr_t
> +arm_smmu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
> +{
> +	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
> +
> +	if (domain->type == IOMMU_DOMAIN_IDENTITY)
> +		return iova;
> +
> +	if (!ops)
> +		return 0;
> +
> +	return ops->iova_to_phys(ops, iova);
> +}
> +
> +static struct platform_driver arm_smmu_driver;
> +
> +static
> +struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
> +{
> +	struct device *dev = driver_find_device_by_fwnode(&arm_smmu_driver.driver,
> +							  fwnode);
> +	put_device(dev);
> +	return dev ? dev_get_drvdata(dev) : NULL;
> +}
> +
> +static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
> +{
> +	unsigned long limit = smmu->strtab_cfg.num_l1_ents;
> +
> +	if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB)
> +		limit *= 1UL << STRTAB_SPLIT;
> +
> +	return sid < limit;
> +}
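With a 2-level stream table, `num_l1_ents` counts L1 descriptors and each L2 table holds 2^STRTAB_SPLIT STEs, hence the multiply before the range check. A sketch with the split hard-coded to 8 for illustration (the value the driver uses, but fixed here as an assumption rather than read from the device):

```c
#include <assert.h>

#define SKETCH_STRTAB_SPLIT 8	/* illustrative; matches the driver's split */

/* Sketch of arm_smmu_sid_in_range(): the addressable SID space is
 * num_l1_ents (L2 tables) * 2^SPLIT entries when two-level. */
static int sid_in_range(unsigned long num_l1_ents, int two_level,
			unsigned int sid)
{
	unsigned long limit = num_l1_ents;

	if (two_level)
		limit *= 1UL << SKETCH_STRTAB_SPLIT;
	return sid < limit;
}
```

So 256 L1 entries cover SIDs 0..65535 in two-level mode, but only 0..255 linearly.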
> +
> +static struct iommu_ops arm_smmu_ops;
> +
> +static struct iommu_device *arm_smmu_probe_device(struct device *dev)
> +{
> +	int i, ret;
> +	struct arm_smmu_device *smmu;
> +	struct arm_smmu_master *master;
> +	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> +
> +	if (!fwspec || fwspec->ops != &arm_smmu_ops)
> +		return ERR_PTR(-ENODEV);
> +
> +	if (WARN_ON_ONCE(dev_iommu_priv_get(dev)))
> +		return ERR_PTR(-EBUSY);
> +
> +	smmu = arm_smmu_get_by_fwnode(fwspec->iommu_fwnode);
> +	if (!smmu)
> +		return ERR_PTR(-ENODEV);
> +
> +	master = kzalloc(sizeof(*master), GFP_KERNEL);
> +	if (!master)
> +		return ERR_PTR(-ENOMEM);
> +
> +	master->dev = dev;
> +	master->smmu = smmu;
> +	master->sids = fwspec->ids;
> +	master->num_sids = fwspec->num_ids;
> +	dev_iommu_priv_set(dev, master);
> +
> +	/* Check the SIDs are in range of the SMMU and our stream table */
> +	for (i = 0; i < master->num_sids; i++) {
> +		u32 sid = master->sids[i];
> +
> +		if (!arm_smmu_sid_in_range(smmu, sid)) {
> +			ret = -ERANGE;
> +			goto err_free_master;
> +		}
> +
> +		/* Ensure l2 strtab is initialised */
> +		if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) {
> +			ret = arm_smmu_init_l2_strtab(smmu, sid);
> +			if (ret)
> +				goto err_free_master;
> +		}
> +	}
> +
> +	master->ssid_bits = min(smmu->ssid_bits, fwspec->num_pasid_bits);
> +
> +	/*
> +	 * Note that PASID must be enabled before, and disabled after ATS:
> +	 * PCI Express Base 4.0r1.0 - 10.5.1.3 ATS Control Register
> +	 *
> +	 *   Behavior is undefined if this bit is Set and the value of the PASID
> +	 *   Enable, Execute Requested Enable, or Privileged Mode Requested bits
> +	 *   are changed.
> +	 */
> +	arm_smmu_enable_pasid(master);
> +
> +	if (!(smmu->features & ARM_SMMU_FEAT_2_LVL_CDTAB))
> +		master->ssid_bits = min_t(u8, master->ssid_bits,
> +					  CTXDESC_LINEAR_CDMAX);
> +
> +	return &smmu->iommu;
> +
> +err_free_master:
> +	kfree(master);
> +	dev_iommu_priv_set(dev, NULL);
> +	return ERR_PTR(ret);
> +}
> +
> +static void arm_smmu_release_device(struct device *dev)
> +{
> +	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> +	struct arm_smmu_master *master;
> +
> +	if (!fwspec || fwspec->ops != &arm_smmu_ops)
> +		return;
> +
> +	master = dev_iommu_priv_get(dev);
> +	arm_smmu_detach_dev(master);
> +	arm_smmu_disable_pasid(master);
> +	kfree(master);
> +	iommu_fwspec_free(dev);
> +}
> +
> +static struct iommu_group *arm_smmu_device_group(struct device *dev)
> +{
> +	struct iommu_group *group;
> +
> +	/*
> +	 * We don't support devices sharing stream IDs other than PCI RID
> +	 * aliases, since the necessary ID-to-device lookup becomes rather
> +	 * impractical given a potential sparse 32-bit stream ID space.
> +	 */
> +	if (dev_is_pci(dev))
> +		group = pci_device_group(dev);
> +	else
> +		group = generic_device_group(dev);
> +
> +	return group;
> +}
> +
> +static int arm_smmu_domain_get_attr(struct iommu_domain *domain,
> +				    enum iommu_attr attr, void *data)
> +{
> +	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> +
> +	switch (domain->type) {
> +	case IOMMU_DOMAIN_UNMANAGED:
> +		switch (attr) {
> +		case DOMAIN_ATTR_NESTING:
> +			*(int *)data = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED);
> +			return 0;
> +		default:
> +			return -ENODEV;
> +		}
> +		break;
> +	case IOMMU_DOMAIN_DMA:
> +		switch (attr) {
> +		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
> +			*(int *)data = smmu_domain->non_strict;
> +			return 0;
> +		default:
> +			return -ENODEV;
> +		}
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> +}
> +
> +static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
> +				    enum iommu_attr attr, void *data)
> +{
> +	int ret = 0;
> +	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> +
> +	mutex_lock(&smmu_domain->init_mutex);
> +
> +	switch (domain->type) {
> +	case IOMMU_DOMAIN_UNMANAGED:
> +		switch (attr) {
> +		case DOMAIN_ATTR_NESTING:
> +			if (smmu_domain->smmu) {
> +				ret = -EPERM;
> +				goto out_unlock;
> +			}
> +
> +			if (*(int *)data)
> +				smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED;
> +			else
> +				smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
> +			break;
> +		default:
> +			ret = -ENODEV;
> +		}
> +		break;
> +	case IOMMU_DOMAIN_DMA:
> +		switch (attr) {
> +		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
> +			smmu_domain->non_strict = *(int *)data;
> +			break;
> +		default:
> +			ret = -ENODEV;
> +		}
> +		break;
> +	default:
> +		ret = -EINVAL;
> +	}
> +
> +out_unlock:
> +	mutex_unlock(&smmu_domain->init_mutex);
> +	return ret;
> +}
> +
> +static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args)
> +{
> +	return iommu_fwspec_add_ids(dev, args->args, 1);
> +}
> +
> +static void arm_smmu_get_resv_regions(struct device *dev,
> +				      struct list_head *head)
> +{
> +	struct iommu_resv_region *region;
> +	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> +
> +	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
> +					 prot, IOMMU_RESV_SW_MSI);
> +	if (!region)
> +		return;
> +
> +	list_add_tail(&region->list, head);
> +
> +	iommu_dma_get_resv_regions(dev, head);
> +}
> +
> +static struct iommu_ops arm_smmu_ops = {
> +	.capable		= arm_smmu_capable,
> +	.domain_alloc		= arm_smmu_domain_alloc,
> +	.domain_free		= arm_smmu_domain_free,
> +	.attach_dev		= arm_smmu_attach_dev,
> +	.map			= arm_smmu_map,
> +	.unmap			= arm_smmu_unmap,
> +	.flush_iotlb_all	= arm_smmu_flush_iotlb_all,
> +	.iotlb_sync		= arm_smmu_iotlb_sync,
> +	.iova_to_phys		= arm_smmu_iova_to_phys,
> +	.probe_device		= arm_smmu_probe_device,
> +	.release_device		= arm_smmu_release_device,
> +	.device_group		= arm_smmu_device_group,
> +	.domain_get_attr	= arm_smmu_domain_get_attr,
> +	.domain_set_attr	= arm_smmu_domain_set_attr,
> +	.of_xlate		= arm_smmu_of_xlate,
> +	.get_resv_regions	= arm_smmu_get_resv_regions,
> +	.put_resv_regions	= generic_iommu_put_resv_regions,
> +	.pgsize_bitmap		= -1UL, /* Restricted during device attach */
> +};
> +
> +/* Probing and initialisation functions */
> +static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
> +				   struct arm_smmu_queue *q,
> +				   unsigned long prod_off,
> +				   unsigned long cons_off,
> +				   size_t dwords, const char *name)
> +{
> +	size_t qsz;
> +
> +	do {
> +		qsz = ((1 << q->llq.max_n_shift) * dwords) << 3;
> +		q->base = dmam_alloc_coherent(smmu->dev, qsz, &q->base_dma,
> +					      GFP_KERNEL);
> +		if (q->base || qsz < PAGE_SIZE)
> +			break;
> +
> +		q->llq.max_n_shift--;
> +	} while (1);
> +
> +	if (!q->base) {
> +		dev_err(smmu->dev,
> +			"failed to allocate queue (0x%zx bytes) for %s\n",
> +			qsz, name);
> +		return -ENOMEM;
> +	}
> +
> +	if (!WARN_ON(q->base_dma & (qsz - 1))) {
> +		dev_info(smmu->dev, "allocated %u entries for %s\n",
> +			 1 << q->llq.max_n_shift, name);
> +	}
> +
> +	q->prod_reg	= arm_smmu_page1_fixup(prod_off, smmu);
> +	q->cons_reg	= arm_smmu_page1_fixup(cons_off, smmu);
> +	q->ent_dwords	= dwords;
> +
> +	q->q_base  = Q_BASE_RWA;
> +	q->q_base |= q->base_dma & Q_BASE_ADDR_MASK;
> +	q->q_base |= FIELD_PREP(Q_BASE_LOG2SIZE, q->llq.max_n_shift);
> +
> +	q->llq.prod = q->llq.cons = 0;
> +	return 0;
> +}
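The allocation loop above halves the queue (one fewer bit of `max_n_shift`) on each DMA allocation failure, giving up only once the queue is below a page. A deterministic userspace sketch of that policy, where a byte `budget` stands in for `dmam_alloc_coherent()` succeeding (hypothetical names throughout):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the shrink-until-it-fits loop in arm_smmu_init_one_queue():
 * qsz = entries * dwords * 8 bytes; returns the final max_n_shift.
 * "Allocation succeeds" is modelled as qsz <= budget. */
static unsigned int fit_queue_shift(unsigned int max_n_shift, size_t dwords,
				    size_t budget, size_t page_size)
{
	size_t qsz;

	do {
		qsz = ((size_t)1 << max_n_shift) * dwords * 8;
		if (qsz <= budget || qsz < page_size)
			break;	/* fits, or too small to shrink further */
		max_n_shift--;
	} while (1);

	return max_n_shift;
}
```

E.g. a 2-dword-entry queue asked for at 2^20 entries (16MB) under a 1MB budget settles at 2^16 entries.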
> +
> +static void arm_smmu_cmdq_free_bitmap(void *data)
> +{
> +	unsigned long *bitmap = data;
> +	bitmap_free(bitmap);
> +}
> +
> +static int arm_smmu_cmdq_init(struct arm_smmu_device *smmu)
> +{
> +	int ret = 0;
> +	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
> +	unsigned int nents = 1 << cmdq->q.llq.max_n_shift;
> +	atomic_long_t *bitmap;
> +
> +	atomic_set(&cmdq->owner_prod, 0);
> +	atomic_set(&cmdq->lock, 0);
> +
> +	bitmap = (atomic_long_t *)bitmap_zalloc(nents, GFP_KERNEL);
> +	if (!bitmap) {
> +		dev_err(smmu->dev, "failed to allocate cmdq bitmap\n");
> +		ret = -ENOMEM;
> +	} else {
> +		cmdq->valid_map = bitmap;
> +		devm_add_action(smmu->dev, arm_smmu_cmdq_free_bitmap, bitmap);
> +	}
> +
> +	return ret;
> +}
> +
> +static int arm_smmu_init_queues(struct arm_smmu_device *smmu)
> +{
> +	int ret;
> +
> +	/* cmdq */
> +	ret = arm_smmu_init_one_queue(smmu, &smmu->cmdq.q, ARM_SMMU_CMDQ_PROD,
> +				      ARM_SMMU_CMDQ_CONS, CMDQ_ENT_DWORDS,
> +				      "cmdq");
> +	if (ret)
> +		return ret;
> +
> +	ret = arm_smmu_cmdq_init(smmu);
> +	if (ret)
> +		return ret;
> +
> +	/* evtq */
> +	ret = arm_smmu_init_one_queue(smmu, &smmu->evtq.q, ARM_SMMU_EVTQ_PROD,
> +				      ARM_SMMU_EVTQ_CONS, EVTQ_ENT_DWORDS,
> +				      "evtq");
> +	if (ret)
> +		return ret;
> +
> +	/* priq */
> +	if (!(smmu->features & ARM_SMMU_FEAT_PRI))
> +		return 0;
> +
> +	return arm_smmu_init_one_queue(smmu, &smmu->priq.q, ARM_SMMU_PRIQ_PROD,
> +				       ARM_SMMU_PRIQ_CONS, PRIQ_ENT_DWORDS,
> +				       "priq");
> +}
> +
> +static int arm_smmu_init_l1_strtab(struct arm_smmu_device *smmu)
> +{
> +	unsigned int i;
> +	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
> +	size_t size = sizeof(*cfg->l1_desc) * cfg->num_l1_ents;
> +	void *strtab = smmu->strtab_cfg.strtab;
> +
> +	cfg->l1_desc = devm_kzalloc(smmu->dev, size, GFP_KERNEL);
> +	if (!cfg->l1_desc) {
> +		dev_err(smmu->dev, "failed to allocate l1 stream table desc\n");
> +		return -ENOMEM;
> +	}
> +
> +	for (i = 0; i < cfg->num_l1_ents; ++i) {
> +		arm_smmu_write_strtab_l1_desc(strtab, &cfg->l1_desc[i]);
> +		strtab += STRTAB_L1_DESC_DWORDS << 3;
> +	}
> +
> +	return 0;
> +}
> +
> +static int arm_smmu_init_strtab_2lvl(struct arm_smmu_device *smmu)
> +{
> +	void *strtab;
> +	u64 reg;
> +	u32 size, l1size;
> +	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
> +
> +	/* Calculate the L1 size, capped to the SIDSIZE. */
> +	size = STRTAB_L1_SZ_SHIFT - (ilog2(STRTAB_L1_DESC_DWORDS) + 3);
> +	size = min(size, smmu->sid_bits - STRTAB_SPLIT);
> +	cfg->num_l1_ents = 1 << size;
> +
> +	size += STRTAB_SPLIT;
> +	if (size < smmu->sid_bits)
> +		dev_warn(smmu->dev,
> +			 "2-level strtab only covers %u/%u bits of SID\n",
> +			 size, smmu->sid_bits);
> +
> +	l1size = cfg->num_l1_ents * (STRTAB_L1_DESC_DWORDS << 3);
> +	strtab = dmam_alloc_coherent(smmu->dev, l1size, &cfg->strtab_dma,
> +				     GFP_KERNEL);
> +	if (!strtab) {
> +		dev_err(smmu->dev,
> +			"failed to allocate l1 stream table (%u bytes)\n",
> +			l1size);
> +		return -ENOMEM;
> +	}
> +	cfg->strtab = strtab;
> +
> +	/* Configure strtab_base_cfg for 2 levels */
> +	reg  = FIELD_PREP(STRTAB_BASE_CFG_FMT, STRTAB_BASE_CFG_FMT_2LVL);
> +	reg |= FIELD_PREP(STRTAB_BASE_CFG_LOG2SIZE, size);
> +	reg |= FIELD_PREP(STRTAB_BASE_CFG_SPLIT, STRTAB_SPLIT);
> +	cfg->strtab_base_cfg = reg;
> +
> +	return arm_smmu_init_l1_strtab(smmu);
> +}
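The L1 sizing arithmetic above caps the table at STRTAB_L1_SZ_SHIFT worth of descriptors, then reports how many SID bits that actually covers once the SPLIT low bits are added back. A sketch with the driver's constants folded in as assumptions (1MB L1 cap, 1 dword per descriptor, split of 8; also assumes sid_bits > split, as it is here):

```c
#include <assert.h>

#define SKETCH_L1_SZ_SHIFT 20	/* illustrative: log2 of max L1 bytes */
#define SKETCH_SPLIT	   8

/* Sketch of the 2-level sizing: returns how many SID bits the
 * resulting table covers (the "size" checked against sid_bits). */
static unsigned int covered_sid_bits(unsigned int sid_bits)
{
	/* log2(max L1 entries): 1 dword (8 bytes) per descriptor */
	unsigned int size = SKETCH_L1_SZ_SHIFT - 3;

	if (size > sid_bits - SKETCH_SPLIT)
		size = sid_bits - SKETCH_SPLIT;
	return size + SKETCH_SPLIT;
}
```

So with 32-bit SIDs the table covers only 25 bits, triggering the `dev_warn()`; with 16-bit SIDs the full space is covered.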
> +
> +static int arm_smmu_init_strtab_linear(struct arm_smmu_device *smmu)
> +{
> +	void *strtab;
> +	u64 reg;
> +	u32 size;
> +	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
> +
> +	size = (1 << smmu->sid_bits) * (STRTAB_STE_DWORDS << 3);
> +	strtab = dmam_alloc_coherent(smmu->dev, size, &cfg->strtab_dma,
> +				     GFP_KERNEL);
> +	if (!strtab) {
> +		dev_err(smmu->dev,
> +			"failed to allocate linear stream table (%u bytes)\n",
> +			size);
> +		return -ENOMEM;
> +	}
> +	cfg->strtab = strtab;
> +	cfg->num_l1_ents = 1 << smmu->sid_bits;
> +
> +	/* Configure strtab_base_cfg for a linear table covering all SIDs */
> +	reg  = FIELD_PREP(STRTAB_BASE_CFG_FMT, STRTAB_BASE_CFG_FMT_LINEAR);
> +	reg |= FIELD_PREP(STRTAB_BASE_CFG_LOG2SIZE, smmu->sid_bits);
> +	cfg->strtab_base_cfg = reg;
> +
> +	arm_smmu_init_bypass_stes(strtab, cfg->num_l1_ents);
> +	return 0;
> +}
> +
> +static int arm_smmu_init_strtab(struct arm_smmu_device *smmu)
> +{
> +	u64 reg;
> +	int ret;
> +
> +	if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB)
> +		ret = arm_smmu_init_strtab_2lvl(smmu);
> +	else
> +		ret = arm_smmu_init_strtab_linear(smmu);
> +
> +	if (ret)
> +		return ret;
> +
> +	/* Set the strtab base address */
> +	reg  = smmu->strtab_cfg.strtab_dma & STRTAB_BASE_ADDR_MASK;
> +	reg |= STRTAB_BASE_RA;
> +	smmu->strtab_cfg.strtab_base = reg;
> +
> +	/* Allocate the first VMID for stage-2 bypass STEs */
> +	set_bit(0, smmu->vmid_map);
> +	return 0;
> +}
> +
> +static int arm_smmu_init_structures(struct arm_smmu_device *smmu)
> +{
> +	int ret;
> +
> +	ret = arm_smmu_init_queues(smmu);
> +	if (ret)
> +		return ret;
> +
> +	return arm_smmu_init_strtab(smmu);
> +}
> +
> +static int arm_smmu_write_reg_sync(struct arm_smmu_device *smmu, u32 val,
> +				   unsigned int reg_off, unsigned int ack_off)
> +{
> +	u32 reg;
> +
> +	writel_relaxed(val, smmu->base + reg_off);
> +	return readl_relaxed_poll_timeout(smmu->base + ack_off, reg, reg == val,
> +					  1, ARM_SMMU_POLL_TIMEOUT_US);
> +}
> +
> +/* GBPA is "special" */
> +static int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr)
> +{
> +	int ret;
> +	u32 reg, __iomem *gbpa = smmu->base + ARM_SMMU_GBPA;
> +
> +	ret = readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE),
> +					 1, ARM_SMMU_POLL_TIMEOUT_US);
> +	if (ret)
> +		return ret;
> +
> +	reg &= ~clr;
> +	reg |= set;
> +	writel_relaxed(reg | GBPA_UPDATE, gbpa);
> +	ret = readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE),
> +					 1, ARM_SMMU_POLL_TIMEOUT_US);
> +
> +	if (ret)
> +		dev_err(smmu->dev, "GBPA not responding to update\n");
> +	return ret;
> +}
> +
> +static void arm_smmu_free_msis(void *data)
> +{
> +	struct device *dev = data;
> +	platform_msi_domain_free_irqs(dev);
> +}
> +
> +static void arm_smmu_write_msi_msg(struct msi_desc *desc, struct msi_msg *msg)
> +{
> +	phys_addr_t doorbell;
> +	struct device *dev = msi_desc_to_dev(desc);
> +	struct arm_smmu_device *smmu = dev_get_drvdata(dev);
> +	phys_addr_t *cfg = arm_smmu_msi_cfg[desc->platform.msi_index];
> +
> +	doorbell = (((u64)msg->address_hi) << 32) | msg->address_lo;
> +	doorbell &= MSI_CFG0_ADDR_MASK;
> +
> +	writeq_relaxed(doorbell, smmu->base + cfg[0]);
> +	writel_relaxed(msg->data, smmu->base + cfg[1]);
> +	writel_relaxed(ARM_SMMU_MEMATTR_DEVICE_nGnRE, smmu->base + cfg[2]);
> +}
> +
> +static void arm_smmu_setup_msis(struct arm_smmu_device *smmu)
> +{
> +	struct msi_desc *desc;
> +	int ret, nvec = ARM_SMMU_MAX_MSIS;
> +	struct device *dev = smmu->dev;
> +
> +	/* Clear the MSI address regs */
> +	writeq_relaxed(0, smmu->base + ARM_SMMU_GERROR_IRQ_CFG0);
> +	writeq_relaxed(0, smmu->base + ARM_SMMU_EVTQ_IRQ_CFG0);
> +
> +	if (smmu->features & ARM_SMMU_FEAT_PRI)
> +		writeq_relaxed(0, smmu->base + ARM_SMMU_PRIQ_IRQ_CFG0);
> +	else
> +		nvec--;
> +
> +	if (!(smmu->features & ARM_SMMU_FEAT_MSI))
> +		return;
> +
> +	if (!dev->msi_domain) {
> +		dev_info(smmu->dev, "msi_domain absent - falling back to wired irqs\n");
> +		return;
> +	}
> +
> +	/* Allocate MSIs for evtq, gerror and priq. Ignore cmdq */
> +	ret = platform_msi_domain_alloc_irqs(dev, nvec, arm_smmu_write_msi_msg);
> +	if (ret) {
> +		dev_warn(dev, "failed to allocate MSIs - falling back to wired irqs\n");
> +		return;
> +	}
> +
> +	for_each_msi_entry(desc, dev) {
> +		switch (desc->platform.msi_index) {
> +		case EVTQ_MSI_INDEX:
> +			smmu->evtq.q.irq = desc->irq;
> +			break;
> +		case GERROR_MSI_INDEX:
> +			smmu->gerr_irq = desc->irq;
> +			break;
> +		case PRIQ_MSI_INDEX:
> +			smmu->priq.q.irq = desc->irq;
> +			break;
> +		default:	/* Unknown */
> +			continue;
> +		}
> +	}
> +
> +	/* Add callback to free MSIs on teardown */
> +	devm_add_action(dev, arm_smmu_free_msis, dev);
> +}
> +
> +static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
> +{
> +	int irq, ret;
> +
> +	arm_smmu_setup_msis(smmu);
> +
> +	/* Request interrupt lines */
> +	irq = smmu->evtq.q.irq;
> +	if (irq) {
> +		ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
> +						arm_smmu_evtq_thread,
> +						IRQF_ONESHOT,
> +						"arm-smmu-v3-evtq", smmu);
> +		if (ret < 0)
> +			dev_warn(smmu->dev, "failed to enable evtq irq\n");
> +	} else {
> +		dev_warn(smmu->dev, "no evtq irq - events will not be reported!\n");
> +	}
> +
> +	irq = smmu->gerr_irq;
> +	if (irq) {
> +		ret = devm_request_irq(smmu->dev, irq, arm_smmu_gerror_handler,
> +				       0, "arm-smmu-v3-gerror", smmu);
> +		if (ret < 0)
> +			dev_warn(smmu->dev, "failed to enable gerror irq\n");
> +	} else {
> +		dev_warn(smmu->dev, "no gerr irq - errors will not be reported!\n");
> +	}
> +
> +	if (smmu->features & ARM_SMMU_FEAT_PRI) {
> +		irq = smmu->priq.q.irq;
> +		if (irq) {
> +			ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
> +							arm_smmu_priq_thread,
> +							IRQF_ONESHOT,
> +							"arm-smmu-v3-priq",
> +							smmu);
> +			if (ret < 0)
> +				dev_warn(smmu->dev,
> +					 "failed to enable priq irq\n");
> +		} else {
> +			dev_warn(smmu->dev, "no priq irq - PRI will be broken\n");
> +		}
> +	}
> +}
> +
> +static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
> +{
> +	int ret, irq;
> +	u32 irqen_flags = IRQ_CTRL_EVTQ_IRQEN | IRQ_CTRL_GERROR_IRQEN;
> +
> +	/* Disable IRQs first */
> +	ret = arm_smmu_write_reg_sync(smmu, 0, ARM_SMMU_IRQ_CTRL,
> +				      ARM_SMMU_IRQ_CTRLACK);
> +	if (ret) {
> +		dev_err(smmu->dev, "failed to disable irqs\n");
> +		return ret;
> +	}
> +
> +	irq = smmu->combined_irq;
> +	if (irq) {
> +		/*
> +		 * Cavium ThunderX2 implementation doesn't support unique irq
> +		 * lines. Use a single irq line for all the SMMUv3 interrupts.
> +		 */
> +		ret = devm_request_threaded_irq(smmu->dev, irq,
> +					arm_smmu_combined_irq_handler,
> +					arm_smmu_combined_irq_thread,
> +					IRQF_ONESHOT,
> +					"arm-smmu-v3-combined-irq", smmu);
> +		if (ret < 0)
> +			dev_warn(smmu->dev, "failed to enable combined irq\n");
> +	} else
> +		arm_smmu_setup_unique_irqs(smmu);
> +
> +	if (smmu->features & ARM_SMMU_FEAT_PRI)
> +		irqen_flags |= IRQ_CTRL_PRIQ_IRQEN;
> +
> +	/* Enable interrupt generation on the SMMU */
> +	ret = arm_smmu_write_reg_sync(smmu, irqen_flags,
> +				      ARM_SMMU_IRQ_CTRL, ARM_SMMU_IRQ_CTRLACK);
> +	if (ret)
> +		dev_warn(smmu->dev, "failed to enable irqs\n");
> +
> +	return 0;
> +}
> +
> +static int arm_smmu_device_disable(struct arm_smmu_device *smmu)
> +{
> +	int ret;
> +
> +	ret = arm_smmu_write_reg_sync(smmu, 0, ARM_SMMU_CR0, ARM_SMMU_CR0ACK);
> +	if (ret)
> +		dev_err(smmu->dev, "failed to clear cr0\n");
> +
> +	return ret;
> +}
> +
> +static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
> +{
> +	int ret;
> +	u32 reg, enables;
> +	struct arm_smmu_cmdq_ent cmd;
> +
> +	/* Clear CR0 and sync (disables SMMU and queue processing) */
> +	reg = readl_relaxed(smmu->base + ARM_SMMU_CR0);
> +	if (reg & CR0_SMMUEN) {
> +		dev_warn(smmu->dev, "SMMU currently enabled! Resetting...\n");
> +		WARN_ON(is_kdump_kernel() && !disable_bypass);
> +		arm_smmu_update_gbpa(smmu, GBPA_ABORT, 0);
> +	}
> +
> +	ret = arm_smmu_device_disable(smmu);
> +	if (ret)
> +		return ret;
> +
> +	/* CR1 (table and queue memory attributes) */
> +	reg = FIELD_PREP(CR1_TABLE_SH, ARM_SMMU_SH_ISH) |
> +	      FIELD_PREP(CR1_TABLE_OC, CR1_CACHE_WB) |
> +	      FIELD_PREP(CR1_TABLE_IC, CR1_CACHE_WB) |
> +	      FIELD_PREP(CR1_QUEUE_SH, ARM_SMMU_SH_ISH) |
> +	      FIELD_PREP(CR1_QUEUE_OC, CR1_CACHE_WB) |
> +	      FIELD_PREP(CR1_QUEUE_IC, CR1_CACHE_WB);
> +	writel_relaxed(reg, smmu->base + ARM_SMMU_CR1);
> +
> +	/* CR2 (random crap) */
> +	reg = CR2_PTM | CR2_RECINVSID | CR2_E2H;
> +	writel_relaxed(reg, smmu->base + ARM_SMMU_CR2);
> +
> +	/* Stream table */
> +	writeq_relaxed(smmu->strtab_cfg.strtab_base,
> +		       smmu->base + ARM_SMMU_STRTAB_BASE);
> +	writel_relaxed(smmu->strtab_cfg.strtab_base_cfg,
> +		       smmu->base + ARM_SMMU_STRTAB_BASE_CFG);
> +
> +	/* Command queue */
> +	writeq_relaxed(smmu->cmdq.q.q_base, smmu->base + ARM_SMMU_CMDQ_BASE);
> +	writel_relaxed(smmu->cmdq.q.llq.prod, smmu->base + ARM_SMMU_CMDQ_PROD);
> +	writel_relaxed(smmu->cmdq.q.llq.cons, smmu->base + ARM_SMMU_CMDQ_CONS);
> +
> +	enables = CR0_CMDQEN;
> +	ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
> +				      ARM_SMMU_CR0ACK);
> +	if (ret) {
> +		dev_err(smmu->dev, "failed to enable command queue\n");
> +		return ret;
> +	}
> +
> +	/* Invalidate any cached configuration */
> +	cmd.opcode = CMDQ_OP_CFGI_ALL;
> +	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> +	arm_smmu_cmdq_issue_sync(smmu);
> +
> +	/* Invalidate any stale TLB entries */
> +	if (smmu->features & ARM_SMMU_FEAT_HYP) {
> +		cmd.opcode = CMDQ_OP_TLBI_EL2_ALL;
> +		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> +	}
> +
> +	cmd.opcode = CMDQ_OP_TLBI_NSNH_ALL;
> +	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> +	arm_smmu_cmdq_issue_sync(smmu);
> +
> +	/* Event queue */
> +	writeq_relaxed(smmu->evtq.q.q_base, smmu->base + ARM_SMMU_EVTQ_BASE);
> +	writel_relaxed(smmu->evtq.q.llq.prod,
> +		       arm_smmu_page1_fixup(ARM_SMMU_EVTQ_PROD, smmu));
> +	writel_relaxed(smmu->evtq.q.llq.cons,
> +		       arm_smmu_page1_fixup(ARM_SMMU_EVTQ_CONS, smmu));
> +
> +	enables |= CR0_EVTQEN;
> +	ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
> +				      ARM_SMMU_CR0ACK);
> +	if (ret) {
> +		dev_err(smmu->dev, "failed to enable event queue\n");
> +		return ret;
> +	}
> +
> +	/* PRI queue */
> +	if (smmu->features & ARM_SMMU_FEAT_PRI) {
> +		writeq_relaxed(smmu->priq.q.q_base,
> +			       smmu->base + ARM_SMMU_PRIQ_BASE);
> +		writel_relaxed(smmu->priq.q.llq.prod,
> +			       arm_smmu_page1_fixup(ARM_SMMU_PRIQ_PROD, smmu));
> +		writel_relaxed(smmu->priq.q.llq.cons,
> +			       arm_smmu_page1_fixup(ARM_SMMU_PRIQ_CONS, smmu));
> +
> +		enables |= CR0_PRIQEN;
> +		ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
> +					      ARM_SMMU_CR0ACK);
> +		if (ret) {
> +			dev_err(smmu->dev, "failed to enable PRI queue\n");
> +			return ret;
> +		}
> +	}
> +
> +	if (smmu->features & ARM_SMMU_FEAT_ATS) {
> +		enables |= CR0_ATSCHK;
> +		ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
> +					      ARM_SMMU_CR0ACK);
> +		if (ret) {
> +			dev_err(smmu->dev, "failed to enable ATS check\n");
> +			return ret;
> +		}
> +	}
> +
> +	ret = arm_smmu_setup_irqs(smmu);
> +	if (ret) {
> +		dev_err(smmu->dev, "failed to setup irqs\n");
> +		return ret;
> +	}
> +
> +	if (is_kdump_kernel())
> +		enables &= ~(CR0_EVTQEN | CR0_PRIQEN);
> +
> +	/* Enable the SMMU interface, or ensure bypass */
> +	if (!bypass || disable_bypass) {
> +		enables |= CR0_SMMUEN;
> +	} else {
> +		ret = arm_smmu_update_gbpa(smmu, 0, GBPA_ABORT);
> +		if (ret)
> +			return ret;
> +	}
> +	ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
> +				      ARM_SMMU_CR0ACK);
> +	if (ret) {
> +		dev_err(smmu->dev, "failed to enable SMMU interface\n");
> +		return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
> +{
> +	u32 reg;
> +	bool coherent = smmu->features & ARM_SMMU_FEAT_COHERENCY;
> +
> +	/* IDR0 */
> +	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR0);
> +
> +	/* 2-level structures */
> +	if (FIELD_GET(IDR0_ST_LVL, reg) == IDR0_ST_LVL_2LVL)
> +		smmu->features |= ARM_SMMU_FEAT_2_LVL_STRTAB;
> +
> +	if (reg & IDR0_CD2L)
> +		smmu->features |= ARM_SMMU_FEAT_2_LVL_CDTAB;
> +
> +	/*
> +	 * Translation table endianness.
> +	 * We currently require the same endianness as the CPU, but this
> +	 * could be changed later by adding a new IO_PGTABLE_QUIRK.
> +	 */
> +	switch (FIELD_GET(IDR0_TTENDIAN, reg)) {
> +	case IDR0_TTENDIAN_MIXED:
> +		smmu->features |= ARM_SMMU_FEAT_TT_LE | ARM_SMMU_FEAT_TT_BE;
> +		break;
> +#ifdef __BIG_ENDIAN
> +	case IDR0_TTENDIAN_BE:
> +		smmu->features |= ARM_SMMU_FEAT_TT_BE;
> +		break;
> +#else
> +	case IDR0_TTENDIAN_LE:
> +		smmu->features |= ARM_SMMU_FEAT_TT_LE;
> +		break;
> +#endif
> +	default:
> +		dev_err(smmu->dev, "unknown/unsupported TT endianness!\n");
> +		return -ENXIO;
> +	}
> +
> +	/* Boolean feature flags */
> +	if (IS_ENABLED(CONFIG_PCI_PRI) && reg & IDR0_PRI)
> +		smmu->features |= ARM_SMMU_FEAT_PRI;
> +
> +	if (IS_ENABLED(CONFIG_PCI_ATS) && reg & IDR0_ATS)
> +		smmu->features |= ARM_SMMU_FEAT_ATS;
> +
> +	if (reg & IDR0_SEV)
> +		smmu->features |= ARM_SMMU_FEAT_SEV;
> +
> +	if (reg & IDR0_MSI)
> +		smmu->features |= ARM_SMMU_FEAT_MSI;
> +
> +	if (reg & IDR0_HYP)
> +		smmu->features |= ARM_SMMU_FEAT_HYP;
> +
> +	/*
> +	 * The coherency feature as set by FW is used in preference to the ID
> +	 * register, but warn on mismatch.
> +	 */
> +	if (!!(reg & IDR0_COHACC) != coherent)
> +		dev_warn(smmu->dev, "IDR0.COHACC overridden by FW configuration (%s)\n",
> +			 coherent ? "true" : "false");
> +
> +	switch (FIELD_GET(IDR0_STALL_MODEL, reg)) {
> +	case IDR0_STALL_MODEL_FORCE:
> +		smmu->features |= ARM_SMMU_FEAT_STALL_FORCE;
> +		/* Fallthrough */
> +	case IDR0_STALL_MODEL_STALL:
> +		smmu->features |= ARM_SMMU_FEAT_STALLS;
> +	}
> +
> +	if (reg & IDR0_S1P)
> +		smmu->features |= ARM_SMMU_FEAT_TRANS_S1;
> +
> +	if (reg & IDR0_S2P)
> +		smmu->features |= ARM_SMMU_FEAT_TRANS_S2;
> +
> +	if (!(reg & (IDR0_S1P | IDR0_S2P))) {
> +		dev_err(smmu->dev, "no translation support!\n");
> +		return -ENXIO;
> +	}
> +
> +	/* We only support the AArch64 table format at present */
> +	switch (FIELD_GET(IDR0_TTF, reg)) {
> +	case IDR0_TTF_AARCH32_64:
> +		smmu->ias = 40;
> +		/* Fallthrough */
> +	case IDR0_TTF_AARCH64:
> +		break;
> +	default:
> +		dev_err(smmu->dev, "AArch64 table format not supported!\n");
> +		return -ENXIO;
> +	}
> +
> +	/* ASID/VMID sizes */
> +	smmu->asid_bits = reg & IDR0_ASID16 ? 16 : 8;
> +	smmu->vmid_bits = reg & IDR0_VMID16 ? 16 : 8;
> +
> +	/* IDR1 */
> +	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR1);
> +	if (reg & (IDR1_TABLES_PRESET | IDR1_QUEUES_PRESET | IDR1_REL)) {
> +		dev_err(smmu->dev, "embedded implementation not supported\n");
> +		return -ENXIO;
> +	}
> +
> +	/* Queue sizes, capped to ensure natural alignment */
> +	smmu->cmdq.q.llq.max_n_shift = min_t(u32, CMDQ_MAX_SZ_SHIFT,
> +					     FIELD_GET(IDR1_CMDQS, reg));
> +	if (smmu->cmdq.q.llq.max_n_shift <= ilog2(CMDQ_BATCH_ENTRIES)) {
> +		/*
> +		 * We don't support splitting up batches, so one batch of
> +		 * commands plus an extra sync needs to fit inside the command
> +		 * queue. There's also no way we can handle the weird alignment
> +		 * restrictions on the base pointer for a unit-length queue.
> +		 */
> +		dev_err(smmu->dev, "command queue size <= %d entries not supported\n",
> +			CMDQ_BATCH_ENTRIES);
> +		return -ENXIO;
> +	}
> +
> +	smmu->evtq.q.llq.max_n_shift = min_t(u32, EVTQ_MAX_SZ_SHIFT,
> +					     FIELD_GET(IDR1_EVTQS, reg));
> +	smmu->priq.q.llq.max_n_shift = min_t(u32, PRIQ_MAX_SZ_SHIFT,
> +					     FIELD_GET(IDR1_PRIQS, reg));
> +
> +	/* SID/SSID sizes */
> +	smmu->ssid_bits = FIELD_GET(IDR1_SSIDSIZE, reg);
> +	smmu->sid_bits = FIELD_GET(IDR1_SIDSIZE, reg);
> +
> +	/*
> +	 * If the SMMU supports fewer bits than would fill a single L2 stream
> +	 * table, use a linear table instead.
> +	 */
> +	if (smmu->sid_bits <= STRTAB_SPLIT)
> +		smmu->features &= ~ARM_SMMU_FEAT_2_LVL_STRTAB;
> +
> +	/* IDR3 */
> +	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR3);
> +	if (FIELD_GET(IDR3_RIL, reg))
> +		smmu->features |= ARM_SMMU_FEAT_RANGE_INV;
> +
> +	/* IDR5 */
> +	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR5);
> +
> +	/* Maximum number of outstanding stalls */
> +	smmu->evtq.max_stalls = FIELD_GET(IDR5_STALL_MAX, reg);
> +
> +	/* Page sizes */
> +	if (reg & IDR5_GRAN64K)
> +		smmu->pgsize_bitmap |= SZ_64K | SZ_512M;
> +	if (reg & IDR5_GRAN16K)
> +		smmu->pgsize_bitmap |= SZ_16K | SZ_32M;
> +	if (reg & IDR5_GRAN4K)
> +		smmu->pgsize_bitmap |= SZ_4K | SZ_2M | SZ_1G;
> +
> +	/* Input address size */
> +	if (FIELD_GET(IDR5_VAX, reg) == IDR5_VAX_52_BIT)
> +		smmu->features |= ARM_SMMU_FEAT_VAX;
> +
> +	/* Output address size */
> +	switch (FIELD_GET(IDR5_OAS, reg)) {
> +	case IDR5_OAS_32_BIT:
> +		smmu->oas = 32;
> +		break;
> +	case IDR5_OAS_36_BIT:
> +		smmu->oas = 36;
> +		break;
> +	case IDR5_OAS_40_BIT:
> +		smmu->oas = 40;
> +		break;
> +	case IDR5_OAS_42_BIT:
> +		smmu->oas = 42;
> +		break;
> +	case IDR5_OAS_44_BIT:
> +		smmu->oas = 44;
> +		break;
> +	case IDR5_OAS_52_BIT:
> +		smmu->oas = 52;
> +		smmu->pgsize_bitmap |= 1ULL << 42; /* 4TB */
> +		break;
> +	default:
> +		dev_info(smmu->dev,
> +			"unknown output address size. Truncating to 48-bit\n");
> +		/* Fallthrough */
> +	case IDR5_OAS_48_BIT:
> +		smmu->oas = 48;
> +	}
> +
> +	if (arm_smmu_ops.pgsize_bitmap == -1UL)
> +		arm_smmu_ops.pgsize_bitmap = smmu->pgsize_bitmap;
> +	else
> +		arm_smmu_ops.pgsize_bitmap |= smmu->pgsize_bitmap;
> +
> +	/* Set the DMA mask for our table walker */
> +	if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(smmu->oas)))
> +		dev_warn(smmu->dev,
> +			 "failed to set DMA mask for table walker\n");
> +
> +	smmu->ias = max(smmu->ias, smmu->oas);
> +
> +	dev_info(smmu->dev, "ias %lu-bit, oas %lu-bit (features 0x%08x)\n",
> +		 smmu->ias, smmu->oas, smmu->features);
> +	return 0;
> +}
> +
> +#ifdef CONFIG_ACPI
> +static void acpi_smmu_get_options(u32 model, struct arm_smmu_device *smmu)
> +{
> +	switch (model) {
> +	case ACPI_IORT_SMMU_V3_CAVIUM_CN99XX:
> +		smmu->options |= ARM_SMMU_OPT_PAGE0_REGS_ONLY;
> +		break;
> +	case ACPI_IORT_SMMU_V3_HISILICON_HI161X:
> +		smmu->options |= ARM_SMMU_OPT_SKIP_PREFETCH;
> +		break;
> +	}
> +
> +	dev_notice(smmu->dev, "option mask 0x%x\n", smmu->options);
> +}
> +
> +static int arm_smmu_device_acpi_probe(struct platform_device *pdev,
> +				      struct arm_smmu_device *smmu)
> +{
> +	struct acpi_iort_smmu_v3 *iort_smmu;
> +	struct device *dev = smmu->dev;
> +	struct acpi_iort_node *node;
> +
> +	node = *(struct acpi_iort_node **)dev_get_platdata(dev);
> +
> +	/* Retrieve SMMUv3 specific data */
> +	iort_smmu = (struct acpi_iort_smmu_v3 *)node->node_data;
> +
> +	acpi_smmu_get_options(iort_smmu->model, smmu);
> +
> +	if (iort_smmu->flags & ACPI_IORT_SMMU_V3_COHACC_OVERRIDE)
> +		smmu->features |= ARM_SMMU_FEAT_COHERENCY;
> +
> +	return 0;
> +}
> +#else
> +static inline int arm_smmu_device_acpi_probe(struct platform_device *pdev,
> +					     struct arm_smmu_device *smmu)
> +{
> +	return -ENODEV;
> +}
> +#endif
> +
> +static int arm_smmu_device_dt_probe(struct platform_device *pdev,
> +				    struct arm_smmu_device *smmu)
> +{
> +	struct device *dev = &pdev->dev;
> +	u32 cells;
> +	int ret = -EINVAL;
> +
> +	if (of_property_read_u32(dev->of_node, "#iommu-cells", &cells))
> +		dev_err(dev, "missing #iommu-cells property\n");
> +	else if (cells != 1)
> +		dev_err(dev, "invalid #iommu-cells value (%d)\n", cells);
> +	else
> +		ret = 0;
> +
> +	parse_driver_options(smmu);
> +
> +	if (of_dma_is_coherent(dev->of_node))
> +		smmu->features |= ARM_SMMU_FEAT_COHERENCY;
> +
> +	return ret;
> +}
> +
> +static unsigned long arm_smmu_resource_size(struct arm_smmu_device *smmu)
> +{
> +	if (smmu->options & ARM_SMMU_OPT_PAGE0_REGS_ONLY)
> +		return SZ_64K;
> +	else
> +		return SZ_128K;
> +}
> +
> +static int arm_smmu_set_bus_ops(struct iommu_ops *ops)
> +{
> +	int err;
> +
> +#ifdef CONFIG_PCI
> +	if (pci_bus_type.iommu_ops != ops) {
> +		err = bus_set_iommu(&pci_bus_type, ops);
> +		if (err)
> +			return err;
> +	}
> +#endif
> +#ifdef CONFIG_ARM_AMBA
> +	if (amba_bustype.iommu_ops != ops) {
> +		err = bus_set_iommu(&amba_bustype, ops);
> +		if (err)
> +			goto err_reset_pci_ops;
> +	}
> +#endif
> +	if (platform_bus_type.iommu_ops != ops) {
> +		err = bus_set_iommu(&platform_bus_type, ops);
> +		if (err)
> +			goto err_reset_amba_ops;
> +	}
> +
> +	return 0;
> +
> +err_reset_amba_ops:
> +#ifdef CONFIG_ARM_AMBA
> +	bus_set_iommu(&amba_bustype, NULL);
> +#endif
> +err_reset_pci_ops: __maybe_unused;
> +#ifdef CONFIG_PCI
> +	bus_set_iommu(&pci_bus_type, NULL);
> +#endif
> +	return err;
> +}
> +
> +static void __iomem *arm_smmu_ioremap(struct device *dev, resource_size_t start,
> +				      resource_size_t size)
> +{
> +	struct resource res = {
> +		.flags = IORESOURCE_MEM,
> +		.start = start,
> +		.end = start + size - 1,
> +	};
> +
> +	return devm_ioremap_resource(dev, &res);
> +}
> +
> +static int arm_smmu_device_probe(struct platform_device *pdev)
> +{
> +	int irq, ret;
> +	struct resource *res;
> +	resource_size_t ioaddr;
> +	struct arm_smmu_device *smmu;
> +	struct device *dev = &pdev->dev;
> +	bool bypass;
> +
> +	smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL);
> +	if (!smmu) {
> +		dev_err(dev, "failed to allocate arm_smmu_device\n");
> +		return -ENOMEM;
> +	}
> +	smmu->dev = dev;
> +
> +	if (dev->of_node) {
> +		ret = arm_smmu_device_dt_probe(pdev, smmu);
> +	} else {
> +		ret = arm_smmu_device_acpi_probe(pdev, smmu);
> +		if (ret == -ENODEV)
> +			return ret;
> +	}
> +
> +	/* Set bypass mode according to firmware probing result */
> +	bypass = !!ret;
> +
> +	/* Base address */
> +	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> +	if (resource_size(res) < arm_smmu_resource_size(smmu)) {
> +		dev_err(dev, "MMIO region too small (%pr)\n", res);
> +		return -EINVAL;
> +	}
> +	ioaddr = res->start;
> +
> +	/*
> +	 * Don't map the IMPLEMENTATION DEFINED regions, since they may contain
> +	 * the PMCG registers which are reserved by the PMU driver.
> +	 */
> +	smmu->base = arm_smmu_ioremap(dev, ioaddr, ARM_SMMU_REG_SZ);
> +	if (IS_ERR(smmu->base))
> +		return PTR_ERR(smmu->base);
> +
> +	if (arm_smmu_resource_size(smmu) > SZ_64K) {
> +		smmu->page1 = arm_smmu_ioremap(dev, ioaddr + SZ_64K,
> +					       ARM_SMMU_REG_SZ);
> +		if (IS_ERR(smmu->page1))
> +			return PTR_ERR(smmu->page1);
> +	} else {
> +		smmu->page1 = smmu->base;
> +	}
> +
> +	/* Interrupt lines */
> +
> +	irq = platform_get_irq_byname_optional(pdev, "combined");
> +	if (irq > 0)
> +		smmu->combined_irq = irq;
> +	else {
> +		irq = platform_get_irq_byname_optional(pdev, "eventq");
> +		if (irq > 0)
> +			smmu->evtq.q.irq = irq;
> +
> +		irq = platform_get_irq_byname_optional(pdev, "priq");
> +		if (irq > 0)
> +			smmu->priq.q.irq = irq;
> +
> +		irq = platform_get_irq_byname_optional(pdev, "gerror");
> +		if (irq > 0)
> +			smmu->gerr_irq = irq;
> +	}
> +	/* Probe the h/w */
> +	ret = arm_smmu_device_hw_probe(smmu);
> +	if (ret)
> +		return ret;
> +
> +	/* Initialise in-memory data structures */
> +	ret = arm_smmu_init_structures(smmu);
> +	if (ret)
> +		return ret;
> +
> +	/* Record our private device structure */
> +	platform_set_drvdata(pdev, smmu);
> +
> +	/* Reset the device */
> +	ret = arm_smmu_device_reset(smmu, bypass);
> +	if (ret)
> +		return ret;
> +
> +	/* And we're up. Go go go! */
> +	ret = iommu_device_sysfs_add(&smmu->iommu, dev, NULL,
> +				     "smmu3.%pa", &ioaddr);
> +	if (ret)
> +		return ret;
> +
> +	iommu_device_set_ops(&smmu->iommu, &arm_smmu_ops);
> +	iommu_device_set_fwnode(&smmu->iommu, dev->fwnode);
> +
> +	ret = iommu_device_register(&smmu->iommu);
> +	if (ret) {
> +		dev_err(dev, "Failed to register iommu\n");
> +		return ret;
> +	}
> +
> +	return arm_smmu_set_bus_ops(&arm_smmu_ops);
> +}
> +
> +static int arm_smmu_device_remove(struct platform_device *pdev)
> +{
> +	struct arm_smmu_device *smmu = platform_get_drvdata(pdev);
> +
> +	arm_smmu_set_bus_ops(NULL);
> +	iommu_device_unregister(&smmu->iommu);
> +	iommu_device_sysfs_remove(&smmu->iommu);
> +	arm_smmu_device_disable(smmu);
> +
> +	return 0;
> +}
> +
> +static void arm_smmu_device_shutdown(struct platform_device *pdev)
> +{
> +	arm_smmu_device_remove(pdev);
> +}
> +
> +static const struct of_device_id arm_smmu_of_match[] = {
> +	{ .compatible = "arm,smmu-v3", },
> +	{ },
> +};
> +MODULE_DEVICE_TABLE(of, arm_smmu_of_match);
> +
> +static struct platform_driver arm_smmu_driver = {
> +	.driver	= {
> +		.name			= "arm-smmu-v3",
> +		.of_match_table		= arm_smmu_of_match,
> +		.suppress_bind_attrs	= true,
> +	},
> +	.probe	= arm_smmu_device_probe,
> +	.remove	= arm_smmu_device_remove,
> +	.shutdown = arm_smmu_device_shutdown,
> +};
> +module_platform_driver(arm_smmu_driver);
> +
> +MODULE_DESCRIPTION("IOMMU API for ARM architected SMMUv3 implementations");
> +MODULE_AUTHOR("Will Deacon <will@kernel.org>");
> +MODULE_ALIAS("platform:arm-smmu-v3");
> +MODULE_LICENSE("GPL v2");
> -- 
> 2.17.1
> 
--8323329-2020296028-1607642759=:6285--


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 01:28:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 01:28:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50008.88429 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knXEN-0005Z7-Gi; Fri, 11 Dec 2020 01:28:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50008.88429; Fri, 11 Dec 2020 01:28:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knXEN-0005Z0-Bw; Fri, 11 Dec 2020 01:28:23 +0000
Received: by outflank-mailman (input) for mailman id 50008;
 Fri, 11 Dec 2020 01:28:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZKjA=FP=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1knXEM-0005Ym-5Y
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 01:28:22 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 166e8ae2-6cda-43bc-908c-b18e9c07c0d1;
 Fri, 11 Dec 2020 01:28:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 166e8ae2-6cda-43bc-908c-b18e9c07c0d1
Date: Thu, 10 Dec 2020 17:28:19 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607650100;
	bh=SOp3nJm456VVGv+TYrIpuEOGn9ZBpPPLfeDKLHgoii8=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=eLwapZzyplmAZKEwOSQaU9kMjwRTxvsjnuYXGoiF6vUrV730ZpBBJEQh4vLGWMT8j
	 Zkz0XpFOEUA1E6RrCYUtoISynMU4qHYnNdPVI6Xki5jRGPakwF/llB3FYSm9rzkZBr
	 kmm92cp7vLhPy93WKoz0mRFLBZzOTaWeqr7ZiON2X96+c74ATRMa2n1llMk8vfaZx1
	 Cq7FB3OAAtNjdxbriXd3uGMd3LcdtUontU9vct8xfTAxQKHeENlBtbAp0p7dZIxtPn
	 +i294mU6eY5dtsMbex4uEcChZDPvDvZ3dsgSGiInLvidr5Yy4EycpPXCuh401IRk1l
	 DsLqDVUGWy26A==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 2/8] xen/arm: revert atomic operation related
 command-queue insertion patch
In-Reply-To: <06ce0b7f7574347c9de592677b44c4dac716d268.1607617848.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2012101528310.6285@sstabellini-ThinkPad-T480s>
References: <cover.1607617848.git.rahul.singh@arm.com> <06ce0b7f7574347c9de592677b44c4dac716d268.1607617848.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 10 Dec 2020, Rahul Singh wrote:
> The Linux SMMUv3 code implements command-queue insertion using atomic
> operations provided by Linux. The atomic functions used by the
> command-queue insertion code are not implemented in Xen, so revert the
> patch that implemented command-queue insertion based on atomic
> operations.
> 
> Also revert the other patches that were implemented on top of the code
> that introduced the atomic operations.
> 
> The atomic operations were introduced by the patch "iommu/arm-smmu-v3:
> Reduce contention during command-queue insertion", which fixed the
> bottleneck in the SMMU command-queue insertion path by introducing a
> new algorithm for inserting commands into the queue that is lock-free
> on the fast path.
> 
> The consequence of reverting the patch is that command-queue insertion
> will be slow on large systems, as a spinlock is used to serialize
> accesses from all CPUs to the single queue supported by the hardware.
> 
> Once the proper atomic operations are available in Xen, the driver can
> be updated.
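To illustrate the behaviour this revert restores: every CPU takes the same lock around the producer-index update, so insertion into the single hardware queue is fully serialized. Below is a minimal standalone model of that scheme (the `mini_cmdq` names are hypothetical and a pthread mutex stands in for Xen's spinlock; this is a sketch, not the actual driver code). The index handling mirrors the driver's Q_IDX/Q_WRP convention, where the low bits index the ring and the next bit is a wrap flag:

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>

/* 16-entry ring; PROD/CONS carry an index plus one wrap bit. */
#define MINI_Q_SHIFT   4
#define MINI_Q_IDX(p)  ((p) & ((1u << MINI_Q_SHIFT) - 1))
#define MINI_Q_WRP(p)  ((p) & (1u << MINI_Q_SHIFT))

struct mini_cmdq {
	uint64_t entries[1u << MINI_Q_SHIFT];
	uint32_t prod, cons;
	pthread_mutex_t lock;	/* serializes all producers */
};

static bool mini_q_full(const struct mini_cmdq *q)
{
	/* Same index, different wrap bit => producer lapped consumer. */
	return MINI_Q_IDX(q->prod) == MINI_Q_IDX(q->cons) &&
	       MINI_Q_WRP(q->prod) != MINI_Q_WRP(q->cons);
}

/* Insert one command under the lock; returns false if the queue is full. */
static bool mini_cmdq_issue(struct mini_cmdq *q, uint64_t cmd)
{
	bool ok = false;

	pthread_mutex_lock(&q->lock);
	if (!mini_q_full(q)) {
		q->entries[MINI_Q_IDX(q->prod)] = cmd;
		/* Advance prod modulo 2 * queue size to toggle the wrap bit. */
		q->prod = (q->prod + 1) & ((1u << (MINI_Q_SHIFT + 1)) - 1);
		ok = true;
	}
	pthread_mutex_unlock(&q->lock);
	return ok;
}
```

The lock-free design being reverted avoids this global lock on the fast path by reserving queue slots with atomic compare-and-swap on the producer index, which is why its removal reintroduces contention on large systems.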
> 
> The following commits are reverted in this patch:
> 1. "iommu/arm-smmu-v3: Add SMMUv3.2 range invalidation support"
>     commit 6a481a95d4c198a2dd0a61f8877b92a375757db8.
> 2. "iommu/arm-smmu-v3: Batch ATC invalidation commands"
>     commit 9e773aee8c3e1b3ba019c5c7f8435aaa836c6130.
> 3. "iommu/arm-smmu-v3: Batch context descriptor invalidation"
>     commit edd0351e7bc49555d8b5ad8438a65a7ca262c9f0.
> 4. "iommu/arm-smmu-v3: Add command queue batching helpers
>     commit 4ce8da453640147101bda418640394637c1a7cfc.
> 5. "iommu/arm-smmu-v3: Fix ATC invalidation ordering wrt main TLBs"
>     commit 353e3cf8590cf182a9f42e67993de3aca91e8090.
> 6. "iommu/arm-smmu-v3: Defer TLB invalidation until ->iotlb_sync()"
>     commit 2af2e72b18b499fa36d3f7379fd010ff25d2a984.
> 7. "iommu/arm-smmu-v3: Reduce contention during command-queue insertion"
>     commit 587e6c10a7ce89a5924fdbeff2ec524fbd6a124b.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Changes in v3:
> - Added consequences of reverting this patch in commit message.
> - List all the commits that are reverted in this patch in commit
>   message.
> 
> ---
>  xen/drivers/passthrough/arm/smmu-v3.c | 878 ++++++--------------------
>  1 file changed, 186 insertions(+), 692 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> index f578677a5c..8b7747ed38 100644
> --- a/xen/drivers/passthrough/arm/smmu-v3.c
> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> @@ -69,9 +69,6 @@
>  #define IDR1_SSIDSIZE			GENMASK(10, 6)
>  #define IDR1_SIDSIZE			GENMASK(5, 0)
>  
> -#define ARM_SMMU_IDR3			0xc
> -#define IDR3_RIL			(1 << 10)
> -
>  #define ARM_SMMU_IDR5			0x14
>  #define IDR5_STALL_MAX			GENMASK(31, 16)
>  #define IDR5_GRAN64K			(1 << 6)
> @@ -187,7 +184,7 @@
>  
>  #define Q_IDX(llq, p)			((p) & ((1 << (llq)->max_n_shift) - 1))
>  #define Q_WRP(llq, p)			((p) & (1 << (llq)->max_n_shift))
> -#define Q_OVERFLOW_FLAG			(1U << 31)
> +#define Q_OVERFLOW_FLAG			(1 << 31)
>  #define Q_OVF(p)			((p) & Q_OVERFLOW_FLAG)
>  #define Q_ENT(q, p)			((q)->base +			\
>  					 Q_IDX(&((q)->llq), p) *	\
> @@ -330,15 +327,6 @@
>  #define CMDQ_ERR_CERROR_ABT_IDX		2
>  #define CMDQ_ERR_CERROR_ATC_INV_IDX	3
>  
> -#define CMDQ_PROD_OWNED_FLAG		Q_OVERFLOW_FLAG
> -
> -/*
> - * This is used to size the command queue and therefore must be at least
> - * BITS_PER_LONG so that the valid_map works correctly (it relies on the
> - * total number of queue entries being a multiple of BITS_PER_LONG).
> - */
> -#define CMDQ_BATCH_ENTRIES		BITS_PER_LONG
> -
>  #define CMDQ_0_OP			GENMASK_ULL(7, 0)
>  #define CMDQ_0_SSV			(1UL << 11)
>  
> @@ -351,14 +339,9 @@
>  #define CMDQ_CFGI_1_LEAF		(1UL << 0)
>  #define CMDQ_CFGI_1_RANGE		GENMASK_ULL(4, 0)
>  
> -#define CMDQ_TLBI_0_NUM			GENMASK_ULL(16, 12)
> -#define CMDQ_TLBI_RANGE_NUM_MAX		31
> -#define CMDQ_TLBI_0_SCALE		GENMASK_ULL(24, 20)
>  #define CMDQ_TLBI_0_VMID		GENMASK_ULL(47, 32)
>  #define CMDQ_TLBI_0_ASID		GENMASK_ULL(63, 48)
>  #define CMDQ_TLBI_1_LEAF		(1UL << 0)
> -#define CMDQ_TLBI_1_TTL			GENMASK_ULL(9, 8)
> -#define CMDQ_TLBI_1_TG			GENMASK_ULL(11, 10)
>  #define CMDQ_TLBI_1_VA_MASK		GENMASK_ULL(63, 12)
>  #define CMDQ_TLBI_1_IPA_MASK		GENMASK_ULL(51, 12)
>  
> @@ -407,8 +390,9 @@
>  #define PRIQ_1_ADDR_MASK		GENMASK_ULL(63, 12)
>  
>  /* High-level queue structures */
> -#define ARM_SMMU_POLL_TIMEOUT_US	1000000 /* 1s! */
> -#define ARM_SMMU_POLL_SPIN_COUNT	10
> +#define ARM_SMMU_POLL_TIMEOUT_US	100
> +#define ARM_SMMU_CMDQ_SYNC_TIMEOUT_US	1000000 /* 1s! */
> +#define ARM_SMMU_CMDQ_SYNC_SPIN_COUNT	10
>  
>  #define MSI_IOVA_BASE			0x8000000
>  #define MSI_IOVA_LENGTH			0x100000
> @@ -483,13 +467,9 @@ struct arm_smmu_cmdq_ent {
>  		#define CMDQ_OP_TLBI_S2_IPA	0x2a
>  		#define CMDQ_OP_TLBI_NSNH_ALL	0x30
>  		struct {
> -			u8			num;
> -			u8			scale;
>  			u16			asid;
>  			u16			vmid;
>  			bool			leaf;
> -			u8			ttl;
> -			u8			tg;
>  			u64			addr;
>  		} tlbi;
>  
> @@ -513,24 +493,15 @@ struct arm_smmu_cmdq_ent {
>  
>  		#define CMDQ_OP_CMD_SYNC	0x46
>  		struct {
> +			u32			msidata;
>  			u64			msiaddr;
>  		} sync;
>  	};
>  };
>  
>  struct arm_smmu_ll_queue {
> -	union {
> -		u64			val;
> -		struct {
> -			u32		prod;
> -			u32		cons;
> -		};
> -		struct {
> -			atomic_t	prod;
> -			atomic_t	cons;
> -		} atomic;
> -		u8			__pad[SMP_CACHE_BYTES];
> -	} ____cacheline_aligned_in_smp;
> +	u32				prod;
> +	u32				cons;
>  	u32				max_n_shift;
>  };
>  
> @@ -548,23 +519,9 @@ struct arm_smmu_queue {
>  	u32 __iomem			*cons_reg;
>  };
>  
> -struct arm_smmu_queue_poll {
> -	ktime_t				timeout;
> -	unsigned int			delay;
> -	unsigned int			spin_cnt;
> -	bool				wfe;
> -};
> -
>  struct arm_smmu_cmdq {
>  	struct arm_smmu_queue		q;
> -	atomic_long_t			*valid_map;
> -	atomic_t			owner_prod;
> -	atomic_t			lock;
> -};
> -
> -struct arm_smmu_cmdq_batch {
> -	u64				cmds[CMDQ_BATCH_ENTRIES * CMDQ_ENT_DWORDS];
> -	int				num;
> +	spinlock_t			lock;
>  };
>  
>  struct arm_smmu_evtq {
> @@ -647,7 +604,6 @@ struct arm_smmu_device {
>  #define ARM_SMMU_FEAT_HYP		(1 << 12)
>  #define ARM_SMMU_FEAT_STALL_FORCE	(1 << 13)
>  #define ARM_SMMU_FEAT_VAX		(1 << 14)
> -#define ARM_SMMU_FEAT_RANGE_INV		(1 << 15)
>  	u32				features;
>  
>  #define ARM_SMMU_OPT_SKIP_PREFETCH	(1 << 0)
> @@ -660,6 +616,8 @@ struct arm_smmu_device {
>  
>  	int				gerr_irq;
>  	int				combined_irq;
> +	u32				sync_nr;
> +	u8				prev_cmd_opcode;
>  
>  	unsigned long			ias; /* IPA */
>  	unsigned long			oas; /* PA */
> @@ -677,6 +635,12 @@ struct arm_smmu_device {
>  
>  	struct arm_smmu_strtab_cfg	strtab_cfg;
>  
> +	/* Hi16xx adds an extra 32 bits of goodness to its MSI payload */
> +	union {
> +		u32			sync_count;
> +		u64			padding;
> +	};
> +
>  	/* IOMMU core code handle */
>  	struct iommu_device		iommu;
>  };
> @@ -763,21 +727,6 @@ static void parse_driver_options(struct arm_smmu_device *smmu)
>  }
>  
>  /* Low-level queue manipulation functions */
> -static bool queue_has_space(struct arm_smmu_ll_queue *q, u32 n)
> -{
> -	u32 space, prod, cons;
> -
> -	prod = Q_IDX(q, q->prod);
> -	cons = Q_IDX(q, q->cons);
> -
> -	if (Q_WRP(q, q->prod) == Q_WRP(q, q->cons))
> -		space = (1 << q->max_n_shift) - (prod - cons);
> -	else
> -		space = cons - prod;
> -
> -	return space >= n;
> -}
> -
>  static bool queue_full(struct arm_smmu_ll_queue *q)
>  {
>  	return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) &&
> @@ -790,12 +739,9 @@ static bool queue_empty(struct arm_smmu_ll_queue *q)
>  	       Q_WRP(q, q->prod) == Q_WRP(q, q->cons);
>  }
>  
> -static bool queue_consumed(struct arm_smmu_ll_queue *q, u32 prod)
> +static void queue_sync_cons_in(struct arm_smmu_queue *q)
>  {
> -	return ((Q_WRP(q, q->cons) == Q_WRP(q, prod)) &&
> -		(Q_IDX(q, q->cons) > Q_IDX(q, prod))) ||
> -	       ((Q_WRP(q, q->cons) != Q_WRP(q, prod)) &&
> -		(Q_IDX(q, q->cons) <= Q_IDX(q, prod)));
> +	q->llq.cons = readl_relaxed(q->cons_reg);
>  }
>  
>  static void queue_sync_cons_out(struct arm_smmu_queue *q)
> @@ -826,34 +772,46 @@ static int queue_sync_prod_in(struct arm_smmu_queue *q)
>  	return ret;
>  }
>  
> -static u32 queue_inc_prod_n(struct arm_smmu_ll_queue *q, int n)
> +static void queue_sync_prod_out(struct arm_smmu_queue *q)
>  {
> -	u32 prod = (Q_WRP(q, q->prod) | Q_IDX(q, q->prod)) + n;
> -	return Q_OVF(q->prod) | Q_WRP(q, prod) | Q_IDX(q, prod);
> +	writel(q->llq.prod, q->prod_reg);
>  }
>  
> -static void queue_poll_init(struct arm_smmu_device *smmu,
> -			    struct arm_smmu_queue_poll *qp)
> +static void queue_inc_prod(struct arm_smmu_ll_queue *q)
>  {
> -	qp->delay = 1;
> -	qp->spin_cnt = 0;
> -	qp->wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
> -	qp->timeout = ktime_add_us(ktime_get(), ARM_SMMU_POLL_TIMEOUT_US);
> +	u32 prod = (Q_WRP(q, q->prod) | Q_IDX(q, q->prod)) + 1;
> +	q->prod = Q_OVF(q->prod) | Q_WRP(q, prod) | Q_IDX(q, prod);
>  }
>  
> -static int queue_poll(struct arm_smmu_queue_poll *qp)
> +/*
> + * Wait for the SMMU to consume items. If sync is true, wait until the queue
> + * is empty. Otherwise, wait until there is at least one free slot.
> + */
> +static int queue_poll_cons(struct arm_smmu_queue *q, bool sync, bool wfe)
>  {
> -	if (ktime_compare(ktime_get(), qp->timeout) > 0)
> -		return -ETIMEDOUT;
> +	ktime_t timeout;
> +	unsigned int delay = 1, spin_cnt = 0;
>  
> -	if (qp->wfe) {
> -		wfe();
> -	} else if (++qp->spin_cnt < ARM_SMMU_POLL_SPIN_COUNT) {
> -		cpu_relax();
> -	} else {
> -		udelay(qp->delay);
> -		qp->delay *= 2;
> -		qp->spin_cnt = 0;
> +	/* Wait longer if it's a CMD_SYNC */
> +	timeout = ktime_add_us(ktime_get(), sync ?
> +					    ARM_SMMU_CMDQ_SYNC_TIMEOUT_US :
> +					    ARM_SMMU_POLL_TIMEOUT_US);
> +
> +	while (queue_sync_cons_in(q),
> +	      (sync ? !queue_empty(&q->llq) : queue_full(&q->llq))) {
> +		if (ktime_compare(ktime_get(), timeout) > 0)
> +			return -ETIMEDOUT;
> +
> +		if (wfe) {
> +			wfe();
> +		} else if (++spin_cnt < ARM_SMMU_CMDQ_SYNC_SPIN_COUNT) {
> +			cpu_relax();
> +			continue;
> +		} else {
> +			udelay(delay);
> +			delay *= 2;
> +			spin_cnt = 0;
> +		}
>  	}
>  
>  	return 0;
> @@ -867,6 +825,17 @@ static void queue_write(__le64 *dst, u64 *src, size_t n_dwords)
>  		*dst++ = cpu_to_le64(*src++);
>  }
>  
> +static int queue_insert_raw(struct arm_smmu_queue *q, u64 *ent)
> +{
> +	if (queue_full(&q->llq))
> +		return -ENOSPC;
> +
> +	queue_write(Q_ENT(q, q->llq.prod), ent, q->ent_dwords);
> +	queue_inc_prod(&q->llq);
> +	queue_sync_prod_out(q);
> +	return 0;
> +}
> +
>  static void queue_read(__le64 *dst, u64 *src, size_t n_dwords)
>  {
>  	int i;
> @@ -916,22 +885,14 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
>  		cmd[1] |= FIELD_PREP(CMDQ_CFGI_1_RANGE, 31);
>  		break;
>  	case CMDQ_OP_TLBI_NH_VA:
> -		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_NUM, ent->tlbi.num);
> -		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_SCALE, ent->tlbi.scale);
>  		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
>  		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_ASID, ent->tlbi.asid);
>  		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_LEAF, ent->tlbi.leaf);
> -		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TTL, ent->tlbi.ttl);
> -		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TG, ent->tlbi.tg);
>  		cmd[1] |= ent->tlbi.addr & CMDQ_TLBI_1_VA_MASK;
>  		break;
>  	case CMDQ_OP_TLBI_S2_IPA:
> -		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_NUM, ent->tlbi.num);
> -		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_SCALE, ent->tlbi.scale);
>  		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
>  		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_LEAF, ent->tlbi.leaf);
> -		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TTL, ent->tlbi.ttl);
> -		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TG, ent->tlbi.tg);
>  		cmd[1] |= ent->tlbi.addr & CMDQ_TLBI_1_IPA_MASK;
>  		break;
>  	case CMDQ_OP_TLBI_NH_ASID:
> @@ -964,14 +925,20 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
>  		cmd[1] |= FIELD_PREP(CMDQ_PRI_1_RESP, ent->pri.resp);
>  		break;
>  	case CMDQ_OP_CMD_SYNC:
> -		if (ent->sync.msiaddr) {
> +		if (ent->sync.msiaddr)
>  			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_IRQ);
> -			cmd[1] |= ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
> -		} else {
> +		else
>  			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
> -		}
>  		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
>  		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
> +		/*
> +		 * Commands are written little-endian, but we want the SMMU to
> +		 * receive MSIData, and thus write it back to memory, in CPU
> +		 * byte order, so big-endian needs an extra byteswap here.
> +		 */
> +		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA,
> +				     cpu_to_le32(ent->sync.msidata));
> +		cmd[1] |= ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
>  		break;
>  	default:
>  		return -ENOENT;
> @@ -980,27 +947,6 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
>  	return 0;
>  }
>  
> -static void arm_smmu_cmdq_build_sync_cmd(u64 *cmd, struct arm_smmu_device *smmu,
> -					 u32 prod)
> -{
> -	struct arm_smmu_queue *q = &smmu->cmdq.q;
> -	struct arm_smmu_cmdq_ent ent = {
> -		.opcode = CMDQ_OP_CMD_SYNC,
> -	};
> -
> -	/*
> -	 * Beware that Hi16xx adds an extra 32 bits of goodness to its MSI
> -	 * payload, so the write will zero the entire command on that platform.
> -	 */
> -	if (smmu->features & ARM_SMMU_FEAT_MSI &&
> -	    smmu->features & ARM_SMMU_FEAT_COHERENCY) {
> -		ent.sync.msiaddr = q->base_dma + Q_IDX(&q->llq, prod) *
> -				   q->ent_dwords * 8;
> -	}
> -
> -	arm_smmu_cmdq_build_cmd(cmd, &ent);
> -}
> -
>  static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
>  {
>  	static const char *cerror_str[] = {
> @@ -1059,474 +1005,109 @@ static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
>  	queue_write(Q_ENT(q, cons), cmd, q->ent_dwords);
>  }
>  
> -/*
> - * Command queue locking.
> - * This is a form of bastardised rwlock with the following major changes:
> - *
> - * - The only LOCK routines are exclusive_trylock() and shared_lock().
> - *   Neither have barrier semantics, and instead provide only a control
> - *   dependency.
> - *
> - * - The UNLOCK routines are supplemented with shared_tryunlock(), which
> - *   fails if the caller appears to be the last lock holder (yes, this is
> - *   racy). All successful UNLOCK routines have RELEASE semantics.
> - */
> -static void arm_smmu_cmdq_shared_lock(struct arm_smmu_cmdq *cmdq)
> -{
> -	int val;
> -
> -	/*
> -	 * We can try to avoid the cmpxchg() loop by simply incrementing the
> -	 * lock counter. When held in exclusive state, the lock counter is set
> -	 * to INT_MIN so these increments won't hurt as the value will remain
> -	 * negative.
> -	 */
> -	if (atomic_fetch_inc_relaxed(&cmdq->lock) >= 0)
> -		return;
> -
> -	do {
> -		val = atomic_cond_read_relaxed(&cmdq->lock, VAL >= 0);
> -	} while (atomic_cmpxchg_relaxed(&cmdq->lock, val, val + 1) != val);
> -}
> -
> -static void arm_smmu_cmdq_shared_unlock(struct arm_smmu_cmdq *cmdq)
> -{
> -	(void)atomic_dec_return_release(&cmdq->lock);
> -}
> -
> -static bool arm_smmu_cmdq_shared_tryunlock(struct arm_smmu_cmdq *cmdq)
> +static void arm_smmu_cmdq_insert_cmd(struct arm_smmu_device *smmu, u64 *cmd)
>  {
> -	if (atomic_read(&cmdq->lock) == 1)
> -		return false;
> -
> -	arm_smmu_cmdq_shared_unlock(cmdq);
> -	return true;
> -}
> -
> -#define arm_smmu_cmdq_exclusive_trylock_irqsave(cmdq, flags)		\
> -({									\
> -	bool __ret;							\
> -	local_irq_save(flags);						\
> -	__ret = !atomic_cmpxchg_relaxed(&cmdq->lock, 0, INT_MIN);	\
> -	if (!__ret)							\
> -		local_irq_restore(flags);				\
> -	__ret;								\
> -})
> -
> -#define arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags)		\
> -({									\
> -	atomic_set_release(&cmdq->lock, 0);				\
> -	local_irq_restore(flags);					\
> -})
> -
> -
> -/*
> - * Command queue insertion.
> - * This is made fiddly by our attempts to achieve some sort of scalability
> - * since there is one queue shared amongst all of the CPUs in the system.  If
> - * you like mixed-size concurrency, dependency ordering and relaxed atomics,
> - * then you'll *love* this monstrosity.
> - *
> - * The basic idea is to split the queue up into ranges of commands that are
> - * owned by a given CPU; the owner may not have written all of the commands
> - * itself, but is responsible for advancing the hardware prod pointer when
> - * the time comes. The algorithm is roughly:
> - *
> - * 	1. Allocate some space in the queue. At this point we also discover
> - *	   whether the head of the queue is currently owned by another CPU,
> - *	   or whether we are the owner.
> - *
> - *	2. Write our commands into our allocated slots in the queue.
> - *
> - *	3. Mark our slots as valid in arm_smmu_cmdq.valid_map.
> - *
> - *	4. If we are an owner:
> - *		a. Wait for the previous owner to finish.
> - *		b. Mark the queue head as unowned, which tells us the range
> - *		   that we are responsible for publishing.
> - *		c. Wait for all commands in our owned range to become valid.
> - *		d. Advance the hardware prod pointer.
> - *		e. Tell the next owner we've finished.
> - *
> - *	5. If we are inserting a CMD_SYNC (we may or may not have been an
> - *	   owner), then we need to stick around until it has completed:
> - *		a. If we have MSIs, the SMMU can write back into the CMD_SYNC
> - *		   to clear the first 4 bytes.
> - *		b. Otherwise, we spin waiting for the hardware cons pointer to
> - *		   advance past our command.
> - *
> - * The devil is in the details, particularly the use of locking for handling
> - * SYNC completion and freeing up space in the queue before we think that it is
> - * full.
> - */
> -static void __arm_smmu_cmdq_poll_set_valid_map(struct arm_smmu_cmdq *cmdq,
> -					       u32 sprod, u32 eprod, bool set)
> -{
> -	u32 swidx, sbidx, ewidx, ebidx;
> -	struct arm_smmu_ll_queue llq = {
> -		.max_n_shift	= cmdq->q.llq.max_n_shift,
> -		.prod		= sprod,
> -	};
> -
> -	ewidx = BIT_WORD(Q_IDX(&llq, eprod));
> -	ebidx = Q_IDX(&llq, eprod) % BITS_PER_LONG;
> -
> -	while (llq.prod != eprod) {
> -		unsigned long mask;
> -		atomic_long_t *ptr;
> -		u32 limit = BITS_PER_LONG;
> -
> -		swidx = BIT_WORD(Q_IDX(&llq, llq.prod));
> -		sbidx = Q_IDX(&llq, llq.prod) % BITS_PER_LONG;
> -
> -		ptr = &cmdq->valid_map[swidx];
> -
> -		if ((swidx == ewidx) && (sbidx < ebidx))
> -			limit = ebidx;
> -
> -		mask = GENMASK(limit - 1, sbidx);
> -
> -		/*
> -		 * The valid bit is the inverse of the wrap bit. This means
> -		 * that a zero-initialised queue is invalid and, after marking
> -		 * all entries as valid, they become invalid again when we
> -		 * wrap.
> -		 */
> -		if (set) {
> -			atomic_long_xor(mask, ptr);
> -		} else { /* Poll */
> -			unsigned long valid;
> +	struct arm_smmu_queue *q = &smmu->cmdq.q;
> +	bool wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
>  
> -			valid = (ULONG_MAX + !!Q_WRP(&llq, llq.prod)) & mask;
> -			atomic_long_cond_read_relaxed(ptr, (VAL & mask) == valid);
> -		}
> +	smmu->prev_cmd_opcode = FIELD_GET(CMDQ_0_OP, cmd[0]);
>  
> -		llq.prod = queue_inc_prod_n(&llq, limit - sbidx);
> +	while (queue_insert_raw(q, cmd) == -ENOSPC) {
> +		if (queue_poll_cons(q, false, wfe))
> +			dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
>  	}
>  }
>  
> -/* Mark all entries in the range [sprod, eprod) as valid */
> -static void arm_smmu_cmdq_set_valid_map(struct arm_smmu_cmdq *cmdq,
> -					u32 sprod, u32 eprod)
> -{
> -	__arm_smmu_cmdq_poll_set_valid_map(cmdq, sprod, eprod, true);
> -}
> -
> -/* Wait for all entries in the range [sprod, eprod) to become valid */
> -static void arm_smmu_cmdq_poll_valid_map(struct arm_smmu_cmdq *cmdq,
> -					 u32 sprod, u32 eprod)
> -{
> -	__arm_smmu_cmdq_poll_set_valid_map(cmdq, sprod, eprod, false);
> -}
> -
> -/* Wait for the command queue to become non-full */
> -static int arm_smmu_cmdq_poll_until_not_full(struct arm_smmu_device *smmu,
> -					     struct arm_smmu_ll_queue *llq)
> +static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
> +				    struct arm_smmu_cmdq_ent *ent)
>  {
> +	u64 cmd[CMDQ_ENT_DWORDS];
>  	unsigned long flags;
> -	struct arm_smmu_queue_poll qp;
> -	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
> -	int ret = 0;
>  
> -	/*
> -	 * Try to update our copy of cons by grabbing exclusive cmdq access. If
> -	 * that fails, spin until somebody else updates it for us.
> -	 */
> -	if (arm_smmu_cmdq_exclusive_trylock_irqsave(cmdq, flags)) {
> -		WRITE_ONCE(cmdq->q.llq.cons, readl_relaxed(cmdq->q.cons_reg));
> -		arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags);
> -		llq->val = READ_ONCE(cmdq->q.llq.val);
> -		return 0;
> +	if (arm_smmu_cmdq_build_cmd(cmd, ent)) {
> +		dev_warn(smmu->dev, "ignoring unknown CMDQ opcode 0x%x\n",
> +			 ent->opcode);
> +		return;
>  	}
>  
> -	queue_poll_init(smmu, &qp);
> -	do {
> -		llq->val = READ_ONCE(smmu->cmdq.q.llq.val);
> -		if (!queue_full(llq))
> -			break;
> -
> -		ret = queue_poll(&qp);
> -	} while (!ret);
> -
> -	return ret;
> +	spin_lock_irqsave(&smmu->cmdq.lock, flags);
> +	arm_smmu_cmdq_insert_cmd(smmu, cmd);
> +	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
>  }
>  
>  /*
> - * Wait until the SMMU signals a CMD_SYNC completion MSI.
> - * Must be called with the cmdq lock held in some capacity.
> + * The difference between val and sync_idx is bounded by the maximum size of
> + * a queue at 2^20 entries, so 32 bits is plenty for wrap-safe arithmetic.
>   */
> -static int __arm_smmu_cmdq_poll_until_msi(struct arm_smmu_device *smmu,
> -					  struct arm_smmu_ll_queue *llq)
> -{
> -	int ret = 0;
> -	struct arm_smmu_queue_poll qp;
> -	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
> -	u32 *cmd = (u32 *)(Q_ENT(&cmdq->q, llq->prod));
> -
> -	queue_poll_init(smmu, &qp);
> -
> -	/*
> -	 * The MSI won't generate an event, since it's being written back
> -	 * into the command queue.
> -	 */
> -	qp.wfe = false;
> -	smp_cond_load_relaxed(cmd, !VAL || (ret = queue_poll(&qp)));
> -	llq->cons = ret ? llq->prod : queue_inc_prod_n(llq, 1);
> -	return ret;
> -}
> -
> -/*
> - * Wait until the SMMU cons index passes llq->prod.
> - * Must be called with the cmdq lock held in some capacity.
> - */
> -static int __arm_smmu_cmdq_poll_until_consumed(struct arm_smmu_device *smmu,
> -					       struct arm_smmu_ll_queue *llq)
> -{
> -	struct arm_smmu_queue_poll qp;
> -	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
> -	u32 prod = llq->prod;
> -	int ret = 0;
> -
> -	queue_poll_init(smmu, &qp);
> -	llq->val = READ_ONCE(smmu->cmdq.q.llq.val);
> -	do {
> -		if (queue_consumed(llq, prod))
> -			break;
> -
> -		ret = queue_poll(&qp);
> -
> -		/*
> -		 * This needs to be a readl() so that our subsequent call
> -		 * to arm_smmu_cmdq_shared_tryunlock() can fail accurately.
> -		 *
> -		 * Specifically, we need to ensure that we observe all
> -		 * shared_lock()s by other CMD_SYNCs that share our owner,
> -		 * so that a failing call to tryunlock() means that we're
> -		 * the last one out and therefore we can safely advance
> -		 * cmdq->q.llq.cons. Roughly speaking:
> -		 *
> -		 * CPU 0		CPU1			CPU2 (us)
> -		 *
> -		 * if (sync)
> -		 * 	shared_lock();
> -		 *
> -		 * dma_wmb();
> -		 * set_valid_map();
> -		 *
> -		 * 			if (owner) {
> -		 *				poll_valid_map();
> -		 *				<control dependency>
> -		 *				writel(prod_reg);
> -		 *
> -		 *						readl(cons_reg);
> -		 *						tryunlock();
> -		 *
> -		 * Requires us to see CPU 0's shared_lock() acquisition.
> -		 */
> -		llq->cons = readl(cmdq->q.cons_reg);
> -	} while (!ret);
> -
> -	return ret;
> -}
> -
> -static int arm_smmu_cmdq_poll_until_sync(struct arm_smmu_device *smmu,
> -					 struct arm_smmu_ll_queue *llq)
> +static int __arm_smmu_sync_poll_msi(struct arm_smmu_device *smmu, u32 sync_idx)
>  {
> -	if (smmu->features & ARM_SMMU_FEAT_MSI &&
> -	    smmu->features & ARM_SMMU_FEAT_COHERENCY)
> -		return __arm_smmu_cmdq_poll_until_msi(smmu, llq);
> -
> -	return __arm_smmu_cmdq_poll_until_consumed(smmu, llq);
> -}
> -
> -static void arm_smmu_cmdq_write_entries(struct arm_smmu_cmdq *cmdq, u64 *cmds,
> -					u32 prod, int n)
> -{
> -	int i;
> -	struct arm_smmu_ll_queue llq = {
> -		.max_n_shift	= cmdq->q.llq.max_n_shift,
> -		.prod		= prod,
> -	};
> +	ktime_t timeout;
> +	u32 val;
>  
> -	for (i = 0; i < n; ++i) {
> -		u64 *cmd = &cmds[i * CMDQ_ENT_DWORDS];
> +	timeout = ktime_add_us(ktime_get(), ARM_SMMU_CMDQ_SYNC_TIMEOUT_US);
> +	val = smp_cond_load_acquire(&smmu->sync_count,
> +				    (int)(VAL - sync_idx) >= 0 ||
> +				    !ktime_before(ktime_get(), timeout));
>  
> -		prod = queue_inc_prod_n(&llq, i);
> -		queue_write(Q_ENT(&cmdq->q, prod), cmd, CMDQ_ENT_DWORDS);
> -	}
> +	return (int)(val - sync_idx) < 0 ? -ETIMEDOUT : 0;
>  }
>  
> -/*
> - * This is the actual insertion function, and provides the following
> - * ordering guarantees to callers:
> - *
> - * - There is a dma_wmb() before publishing any commands to the queue.
> - *   This can be relied upon to order prior writes to data structures
> - *   in memory (such as a CD or an STE) before the command.
> - *
> - * - On completion of a CMD_SYNC, there is a control dependency.
> - *   This can be relied upon to order subsequent writes to memory (e.g.
> - *   freeing an IOVA) after completion of the CMD_SYNC.
> - *
> - * - Command insertion is totally ordered, so if two CPUs each race to
> - *   insert their own list of commands then all of the commands from one
> - *   CPU will appear before any of the commands from the other CPU.
> - */
> -static int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
> -				       u64 *cmds, int n, bool sync)
> +static int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
>  {
> -	u64 cmd_sync[CMDQ_ENT_DWORDS];
> -	u32 prod;
> +	u64 cmd[CMDQ_ENT_DWORDS];
>  	unsigned long flags;
> -	bool owner;
> -	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
> -	struct arm_smmu_ll_queue llq = {
> -		.max_n_shift = cmdq->q.llq.max_n_shift,
> -	}, head = llq;
> -	int ret = 0;
> -
> -	/* 1. Allocate some space in the queue */
> -	local_irq_save(flags);
> -	llq.val = READ_ONCE(cmdq->q.llq.val);
> -	do {
> -		u64 old;
> -
> -		while (!queue_has_space(&llq, n + sync)) {
> -			local_irq_restore(flags);
> -			if (arm_smmu_cmdq_poll_until_not_full(smmu, &llq))
> -				dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
> -			local_irq_save(flags);
> -		}
> -
> -		head.cons = llq.cons;
> -		head.prod = queue_inc_prod_n(&llq, n + sync) |
> -					     CMDQ_PROD_OWNED_FLAG;
> -
> -		old = cmpxchg_relaxed(&cmdq->q.llq.val, llq.val, head.val);
> -		if (old == llq.val)
> -			break;
> -
> -		llq.val = old;
> -	} while (1);
> -	owner = !(llq.prod & CMDQ_PROD_OWNED_FLAG);
> -	head.prod &= ~CMDQ_PROD_OWNED_FLAG;
> -	llq.prod &= ~CMDQ_PROD_OWNED_FLAG;
> -
> -	/*
> -	 * 2. Write our commands into the queue
> -	 * Dependency ordering from the cmpxchg() loop above.
> -	 */
> -	arm_smmu_cmdq_write_entries(cmdq, cmds, llq.prod, n);
> -	if (sync) {
> -		prod = queue_inc_prod_n(&llq, n);
> -		arm_smmu_cmdq_build_sync_cmd(cmd_sync, smmu, prod);
> -		queue_write(Q_ENT(&cmdq->q, prod), cmd_sync, CMDQ_ENT_DWORDS);
> -
> -		/*
> -		 * In order to determine completion of our CMD_SYNC, we must
> -		 * ensure that the queue can't wrap twice without us noticing.
> -		 * We achieve that by taking the cmdq lock as shared before
> -		 * marking our slot as valid.
> -		 */
> -		arm_smmu_cmdq_shared_lock(cmdq);
> -	}
> -
> -	/* 3. Mark our slots as valid, ensuring commands are visible first */
> -	dma_wmb();
> -	arm_smmu_cmdq_set_valid_map(cmdq, llq.prod, head.prod);
> -
> -	/* 4. If we are the owner, take control of the SMMU hardware */
> -	if (owner) {
> -		/* a. Wait for previous owner to finish */
> -		atomic_cond_read_relaxed(&cmdq->owner_prod, VAL == llq.prod);
> -
> -		/* b. Stop gathering work by clearing the owned flag */
> -		prod = atomic_fetch_andnot_relaxed(CMDQ_PROD_OWNED_FLAG,
> -						   &cmdq->q.llq.atomic.prod);
> -		prod &= ~CMDQ_PROD_OWNED_FLAG;
> -
> -		/*
> -		 * c. Wait for any gathered work to be written to the queue.
> -		 * Note that we read our own entries so that we have the control
> -		 * dependency required by (d).
> -		 */
> -		arm_smmu_cmdq_poll_valid_map(cmdq, llq.prod, prod);
> +	struct arm_smmu_cmdq_ent ent = {
> +		.opcode = CMDQ_OP_CMD_SYNC,
> +		.sync	= {
> +			.msiaddr = virt_to_phys(&smmu->sync_count),
> +		},
> +	};
>  
> -		/*
> -		 * d. Advance the hardware prod pointer
> -		 * Control dependency ordering from the entries becoming valid.
> -		 */
> -		writel_relaxed(prod, cmdq->q.prod_reg);
> +	spin_lock_irqsave(&smmu->cmdq.lock, flags);
>  
> -		/*
> -		 * e. Tell the next owner we're done
> -		 * Make sure we've updated the hardware first, so that we don't
> -		 * race to update prod and potentially move it backwards.
> -		 */
> -		atomic_set_release(&cmdq->owner_prod, prod);
> +	/* Piggy-back on the previous command if it's a SYNC */
> +	if (smmu->prev_cmd_opcode == CMDQ_OP_CMD_SYNC) {
> +		ent.sync.msidata = smmu->sync_nr;
> +	} else {
> +		ent.sync.msidata = ++smmu->sync_nr;
> +		arm_smmu_cmdq_build_cmd(cmd, &ent);
> +		arm_smmu_cmdq_insert_cmd(smmu, cmd);
>  	}
>  
> -	/* 5. If we are inserting a CMD_SYNC, we must wait for it to complete */
> -	if (sync) {
> -		llq.prod = queue_inc_prod_n(&llq, n);
> -		ret = arm_smmu_cmdq_poll_until_sync(smmu, &llq);
> -		if (ret) {
> -			dev_err_ratelimited(smmu->dev,
> -					    "CMD_SYNC timeout at 0x%08x [hwprod 0x%08x, hwcons 0x%08x]\n",
> -					    llq.prod,
> -					    readl_relaxed(cmdq->q.prod_reg),
> -					    readl_relaxed(cmdq->q.cons_reg));
> -		}
> +	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
>  
> -		/*
> -		 * Try to unlock the cmdq lock. This will fail if we're the last
> -		 * reader, in which case we can safely update cmdq->q.llq.cons
> -		 */
> -		if (!arm_smmu_cmdq_shared_tryunlock(cmdq)) {
> -			WRITE_ONCE(cmdq->q.llq.cons, llq.cons);
> -			arm_smmu_cmdq_shared_unlock(cmdq);
> -		}
> -	}
> -
> -	local_irq_restore(flags);
> -	return ret;
> +	return __arm_smmu_sync_poll_msi(smmu, ent.sync.msidata);
>  }
>  
> -static int arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
> -				   struct arm_smmu_cmdq_ent *ent)
> +static int __arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
>  {
>  	u64 cmd[CMDQ_ENT_DWORDS];
> +	unsigned long flags;
> +	bool wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
> +	struct arm_smmu_cmdq_ent ent = { .opcode = CMDQ_OP_CMD_SYNC };
> +	int ret;
>  
> -	if (arm_smmu_cmdq_build_cmd(cmd, ent)) {
> -		dev_warn(smmu->dev, "ignoring unknown CMDQ opcode 0x%x\n",
> -			 ent->opcode);
> -		return -EINVAL;
> -	}
> +	arm_smmu_cmdq_build_cmd(cmd, &ent);
>  
> -	return arm_smmu_cmdq_issue_cmdlist(smmu, cmd, 1, false);
> -}
> +	spin_lock_irqsave(&smmu->cmdq.lock, flags);
> +	arm_smmu_cmdq_insert_cmd(smmu, cmd);
> +	ret = queue_poll_cons(&smmu->cmdq.q, true, wfe);
> +	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
>  
> -static int arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
> -{
> -	return arm_smmu_cmdq_issue_cmdlist(smmu, NULL, 0, true);
> +	return ret;
>  }
>  
> -static void arm_smmu_cmdq_batch_add(struct arm_smmu_device *smmu,
> -				    struct arm_smmu_cmdq_batch *cmds,
> -				    struct arm_smmu_cmdq_ent *cmd)
> +static int arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
>  {
> -	if (cmds->num == CMDQ_BATCH_ENTRIES) {
> -		arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num, false);
> -		cmds->num = 0;
> -	}
> -	arm_smmu_cmdq_build_cmd(&cmds->cmds[cmds->num * CMDQ_ENT_DWORDS], cmd);
> -	cmds->num++;
> -}
> +	int ret;
> +	bool msi = (smmu->features & ARM_SMMU_FEAT_MSI) &&
> +		   (smmu->features & ARM_SMMU_FEAT_COHERENCY);
>  
> -static int arm_smmu_cmdq_batch_submit(struct arm_smmu_device *smmu,
> -				      struct arm_smmu_cmdq_batch *cmds)
> -{
> -	return arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num, true);
> +	ret = msi ? __arm_smmu_cmdq_issue_sync_msi(smmu)
> +		  : __arm_smmu_cmdq_issue_sync(smmu);
> +	if (ret)
> +		dev_err_ratelimited(smmu->dev, "CMD_SYNC timeout\n");
> +	return ret;
>  }
>  
>  /* Context descriptor manipulation functions */
> @@ -1536,7 +1117,6 @@ static void arm_smmu_sync_cd(struct arm_smmu_domain *smmu_domain,
>  	size_t i;
>  	unsigned long flags;
>  	struct arm_smmu_master *master;
> -	struct arm_smmu_cmdq_batch cmds = {};
>  	struct arm_smmu_device *smmu = smmu_domain->smmu;
>  	struct arm_smmu_cmdq_ent cmd = {
>  		.opcode	= CMDQ_OP_CFGI_CD,
> @@ -1550,12 +1130,12 @@ static void arm_smmu_sync_cd(struct arm_smmu_domain *smmu_domain,
>  	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
>  		for (i = 0; i < master->num_sids; i++) {
>  			cmd.cfgi.sid = master->sids[i];
> -			arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
> +			arm_smmu_cmdq_issue_cmd(smmu, &cmd);
>  		}
>  	}
>  	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
>  
> -	arm_smmu_cmdq_batch_submit(smmu, &cmds);
> +	arm_smmu_cmdq_issue_sync(smmu);
>  }
>  
>  static int arm_smmu_alloc_cd_leaf_table(struct arm_smmu_device *smmu,
> @@ -2190,16 +1770,17 @@ arm_smmu_atc_inv_to_cmd(int ssid, unsigned long iova, size_t size,
>  	cmd->atc.size	= log2_span;
>  }
>  
> -static int arm_smmu_atc_inv_master(struct arm_smmu_master *master)
> +static int arm_smmu_atc_inv_master(struct arm_smmu_master *master,
> +				   struct arm_smmu_cmdq_ent *cmd)
>  {
>  	int i;
> -	struct arm_smmu_cmdq_ent cmd;
>  
> -	arm_smmu_atc_inv_to_cmd(0, 0, 0, &cmd);
> +	if (!master->ats_enabled)
> +		return 0;
>  
>  	for (i = 0; i < master->num_sids; i++) {
> -		cmd.atc.sid = master->sids[i];
> -		arm_smmu_cmdq_issue_cmd(master->smmu, &cmd);
> +		cmd->atc.sid = master->sids[i];
> +		arm_smmu_cmdq_issue_cmd(master->smmu, cmd);
>  	}
>  
>  	return arm_smmu_cmdq_issue_sync(master->smmu);
> @@ -2208,11 +1789,10 @@ static int arm_smmu_atc_inv_master(struct arm_smmu_master *master)
>  static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
>  				   int ssid, unsigned long iova, size_t size)
>  {
> -	int i;
> +	int ret = 0;
>  	unsigned long flags;
>  	struct arm_smmu_cmdq_ent cmd;
>  	struct arm_smmu_master *master;
> -	struct arm_smmu_cmdq_batch cmds = {};
>  
>  	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_ATS))
>  		return 0;
> @@ -2237,18 +1817,11 @@ static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
>  	arm_smmu_atc_inv_to_cmd(ssid, iova, size, &cmd);
>  
>  	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
> -	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
> -		if (!master->ats_enabled)
> -			continue;
> -
> -		for (i = 0; i < master->num_sids; i++) {
> -			cmd.atc.sid = master->sids[i];
> -			arm_smmu_cmdq_batch_add(smmu_domain->smmu, &cmds, &cmd);
> -		}
> -	}
> +	list_for_each_entry(master, &smmu_domain->devices, domain_head)
> +		ret |= arm_smmu_atc_inv_master(master, &cmd);
>  	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
>  
> -	return arm_smmu_cmdq_batch_submit(smmu_domain->smmu, &cmds);
> +	return ret ? -ETIMEDOUT : 0;
>  }
>  
>  /* IO_PGTABLE API */
> @@ -2270,26 +1843,23 @@ static void arm_smmu_tlb_inv_context(void *cookie)
>  	/*
>  	 * NOTE: when io-pgtable is in non-strict mode, we may get here with
>  	 * PTEs previously cleared by unmaps on the current CPU not yet visible
> -	 * to the SMMU. We are relying on the dma_wmb() implicit during cmd
> -	 * insertion to guarantee those are observed before the TLBI. Do be
> -	 * careful, 007.
> +	 * to the SMMU. We are relying on the DSB implicit in
> +	 * queue_sync_prod_out() to guarantee those are observed before the
> +	 * TLBI. Do be careful, 007.
>  	 */
>  	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
>  	arm_smmu_cmdq_issue_sync(smmu);
> -	arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0);
>  }
>  
> -static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size,
> -				   size_t granule, bool leaf,
> -				   struct arm_smmu_domain *smmu_domain)
> +static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
> +					  size_t granule, bool leaf, void *cookie)
>  {
> +	struct arm_smmu_domain *smmu_domain = cookie;
>  	struct arm_smmu_device *smmu = smmu_domain->smmu;
> -	unsigned long start = iova, end = iova + size, num_pages = 0, tg = 0;
> -	size_t inv_range = granule;
> -	struct arm_smmu_cmdq_batch cmds = {};
>  	struct arm_smmu_cmdq_ent cmd = {
>  		.tlbi = {
>  			.leaf	= leaf,
> +			.addr	= iova,
>  		},
>  	};
>  
> @@ -2304,78 +1874,37 @@ static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size,
>  		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
>  	}
>  
> -	if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
> -		/* Get the leaf page size */
> -		tg = __ffs(smmu_domain->domain.pgsize_bitmap);
> -
> -		/* Convert page size of 12,14,16 (log2) to 1,2,3 */
> -		cmd.tlbi.tg = (tg - 10) / 2;
> -
> -		/* Determine what level the granule is at */
> -		cmd.tlbi.ttl = 4 - ((ilog2(granule) - 3) / (tg - 3));
> -
> -		num_pages = size >> tg;
> -	}
> -
> -	while (iova < end) {
> -		if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
> -			/*
> -			 * On each iteration of the loop, the range is 5 bits
> -			 * worth of the aligned size remaining.
> -			 * The range in pages is:
> -			 *
> -			 * range = (num_pages & (0x1f << __ffs(num_pages)))
> -			 */
> -			unsigned long scale, num;
> -
> -			/* Determine the power of 2 multiple number of pages */
> -			scale = __ffs(num_pages);
> -			cmd.tlbi.scale = scale;
> -
> -			/* Determine how many chunks of 2^scale size we have */
> -			num = (num_pages >> scale) & CMDQ_TLBI_RANGE_NUM_MAX;
> -			cmd.tlbi.num = num - 1;
> -
> -			/* range is num * 2^scale * pgsize */
> -			inv_range = num << (scale + tg);
> -
> -			/* Clear out the lower order bits for the next iteration */
> -			num_pages -= num << scale;
> -		}
> -
> -		cmd.tlbi.addr = iova;
> -		arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
> -		iova += inv_range;
> -	}
> -	arm_smmu_cmdq_batch_submit(smmu, &cmds);
> -
> -	/*
> -	 * Unfortunately, this can't be leaf-only since we may have
> -	 * zapped an entire table.
> -	 */
> -	arm_smmu_atc_inv_domain(smmu_domain, 0, start, size);
> +	do {
> +		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> +		cmd.tlbi.addr += granule;
> +	} while (size -= granule);
>  }
>  
>  static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather,
>  					 unsigned long iova, size_t granule,
>  					 void *cookie)
>  {
> -	struct arm_smmu_domain *smmu_domain = cookie;
> -	struct iommu_domain *domain = &smmu_domain->domain;
> -
> -	iommu_iotlb_gather_add_page(domain, gather, iova, granule);
> +	arm_smmu_tlb_inv_range_nosync(iova, granule, granule, true, cookie);
>  }
>  
>  static void arm_smmu_tlb_inv_walk(unsigned long iova, size_t size,
>  				  size_t granule, void *cookie)
>  {
> -	arm_smmu_tlb_inv_range(iova, size, granule, false, cookie);
> +	struct arm_smmu_domain *smmu_domain = cookie;
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +
> +	arm_smmu_tlb_inv_range_nosync(iova, size, granule, false, cookie);
> +	arm_smmu_cmdq_issue_sync(smmu);
>  }
>  
>  static void arm_smmu_tlb_inv_leaf(unsigned long iova, size_t size,
>  				  size_t granule, void *cookie)
>  {
> -	arm_smmu_tlb_inv_range(iova, size, granule, true, cookie);
> +	struct arm_smmu_domain *smmu_domain = cookie;
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> +
> +	arm_smmu_tlb_inv_range_nosync(iova, size, granule, true, cookie);
> +	arm_smmu_cmdq_issue_sync(smmu);
>  }
>  
>  static const struct iommu_flush_ops arm_smmu_flush_ops = {
> @@ -2701,6 +2230,7 @@ static void arm_smmu_enable_ats(struct arm_smmu_master *master)
>  
>  static void arm_smmu_disable_ats(struct arm_smmu_master *master)
>  {
> +	struct arm_smmu_cmdq_ent cmd;
>  	struct arm_smmu_domain *smmu_domain = master->domain;
>  
>  	if (!master->ats_enabled)
> @@ -2712,7 +2242,8 @@ static void arm_smmu_disable_ats(struct arm_smmu_master *master)
>  	 * ATC invalidation via the SMMU.
>  	 */
>  	wmb();
> -	arm_smmu_atc_inv_master(master);
> +	arm_smmu_atc_inv_to_cmd(0, 0, 0, &cmd);
> +	arm_smmu_atc_inv_master(master, &cmd);
>  	atomic_dec(&smmu_domain->nr_ats_masters);
>  }
>  
> @@ -2856,13 +2387,18 @@ static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
>  static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
>  			     size_t size, struct iommu_iotlb_gather *gather)
>  {
> +	int ret;
>  	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>  	struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
>  
>  	if (!ops)
>  		return 0;
>  
> -	return ops->unmap(ops, iova, size, gather);
> +	ret = ops->unmap(ops, iova, size, gather);
> +	if (ret && arm_smmu_atc_inv_domain(smmu_domain, 0, iova, size))
> +		return 0;
> +
> +	return ret;
>  }
>  
>  static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain)
> @@ -2876,10 +2412,10 @@ static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain)
>  static void arm_smmu_iotlb_sync(struct iommu_domain *domain,
>  				struct iommu_iotlb_gather *gather)
>  {
> -	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> +	struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu;
>  
> -	arm_smmu_tlb_inv_range(gather->start, gather->end - gather->start,
> -			       gather->pgsize, true, smmu_domain);
> +	if (smmu)
> +		arm_smmu_cmdq_issue_sync(smmu);
>  }
>  
>  static phys_addr_t
> @@ -3177,49 +2713,18 @@ static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
>  	return 0;
>  }
>  
> -static void arm_smmu_cmdq_free_bitmap(void *data)
> -{
> -	unsigned long *bitmap = data;
> -	bitmap_free(bitmap);
> -}
> -
> -static int arm_smmu_cmdq_init(struct arm_smmu_device *smmu)
> -{
> -	int ret = 0;
> -	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
> -	unsigned int nents = 1 << cmdq->q.llq.max_n_shift;
> -	atomic_long_t *bitmap;
> -
> -	atomic_set(&cmdq->owner_prod, 0);
> -	atomic_set(&cmdq->lock, 0);
> -
> -	bitmap = (atomic_long_t *)bitmap_zalloc(nents, GFP_KERNEL);
> -	if (!bitmap) {
> -		dev_err(smmu->dev, "failed to allocate cmdq bitmap\n");
> -		ret = -ENOMEM;
> -	} else {
> -		cmdq->valid_map = bitmap;
> -		devm_add_action(smmu->dev, arm_smmu_cmdq_free_bitmap, bitmap);
> -	}
> -
> -	return ret;
> -}
> -
>  static int arm_smmu_init_queues(struct arm_smmu_device *smmu)
>  {
>  	int ret;
>  
>  	/* cmdq */
> +	spin_lock_init(&smmu->cmdq.lock);
>  	ret = arm_smmu_init_one_queue(smmu, &smmu->cmdq.q, ARM_SMMU_CMDQ_PROD,
>  				      ARM_SMMU_CMDQ_CONS, CMDQ_ENT_DWORDS,
>  				      "cmdq");
>  	if (ret)
>  		return ret;
>  
> -	ret = arm_smmu_cmdq_init(smmu);
> -	if (ret)
> -		return ret;
> -
>  	/* evtq */
>  	ret = arm_smmu_init_one_queue(smmu, &smmu->evtq.q, ARM_SMMU_EVTQ_PROD,
>  				      ARM_SMMU_EVTQ_CONS, EVTQ_ENT_DWORDS,
> @@ -3800,15 +3305,9 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>  	/* Queue sizes, capped to ensure natural alignment */
>  	smmu->cmdq.q.llq.max_n_shift = min_t(u32, CMDQ_MAX_SZ_SHIFT,
>  					     FIELD_GET(IDR1_CMDQS, reg));
> -	if (smmu->cmdq.q.llq.max_n_shift <= ilog2(CMDQ_BATCH_ENTRIES)) {
> -		/*
> -		 * We don't support splitting up batches, so one batch of
> -		 * commands plus an extra sync needs to fit inside the command
> -		 * queue. There's also no way we can handle the weird alignment
> -		 * restrictions on the base pointer for a unit-length queue.
> -		 */
> -		dev_err(smmu->dev, "command queue size <= %d entries not supported\n",
> -			CMDQ_BATCH_ENTRIES);
> +	if (!smmu->cmdq.q.llq.max_n_shift) {
> +		/* Odd alignment restrictions on the base, so ignore for now */
> +		dev_err(smmu->dev, "unit-length command queue not supported\n");
>  		return -ENXIO;
>  	}
>  
> @@ -3828,11 +3327,6 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>  	if (smmu->sid_bits <= STRTAB_SPLIT)
>  		smmu->features &= ~ARM_SMMU_FEAT_2_LVL_STRTAB;
>  
> -	/* IDR3 */
> -	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR3);
> -	if (FIELD_GET(IDR3_RIL, reg))
> -		smmu->features |= ARM_SMMU_FEAT_RANGE_INV;
> -
>  	/* IDR5 */
>  	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR5);
>  
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 01:28:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 01:28:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50007.88417 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knXED-0005X0-7I; Fri, 11 Dec 2020 01:28:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50007.88417; Fri, 11 Dec 2020 01:28:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knXED-0005Wt-40; Fri, 11 Dec 2020 01:28:13 +0000
Received: by outflank-mailman (input) for mailman id 50007;
 Fri, 11 Dec 2020 01:28:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZKjA=FP=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1knXEC-0005Wo-Ao
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 01:28:12 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4ca4f1bd-7ed1-452e-8eee-3f1e5d78b21b;
 Fri, 11 Dec 2020 01:28:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4ca4f1bd-7ed1-452e-8eee-3f1e5d78b21b
Date: Thu, 10 Dec 2020 17:28:08 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607650089;
	bh=JYKEvDuAO0Y0F/aSvrNNpAay9WTmDH0IzHxldVuOBp4=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=f1nJkFa8PK7RsNVu4hsGzc4JD/E8e4/AnQ8orxl5RN0GDfGpTS9Zs002lCYWiX6W9
	 5jAUNZbdNUi4ui1TL+leDPEfuo9K/QxF1B0nIrmc/9cdb/3yJnLf5X8oUL9VbivHh5
	 wlKC3Ixt5vPuiBHn4taRtYHWqhe4VXE9wokFZ1S5/raCnJINKlBda9/RmVjN1EaGKn
	 gPJ5m8ky4RcCKVlo4wU1FDuwbIjF11vAbmQmDoJc3RRZtmuI0ApWOSO4xKePLeYwjK
	 jMPLoze7jItn1x13msN30kF/grGdoLAk6aqacsX1A0GbM/3jsyL02j9PpIRwCBFgQo
	 fuTd/tZMjx1+w==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Oleksandr Tyshchenko <olekstysh@gmail.com>, xen-devel@lists.xenproject.org, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Julien Grall <julien.grall@arm.com>
Subject: Re: [PATCH V3 21/23] xen/arm: Add mapcache invalidation handling
In-Reply-To: <a6897469-f031-e49d-0b4c-b1aa10d66d6d@xen.org>
Message-ID: <alpine.DEB.2.21.2012101443060.20986@sstabellini-ThinkPad-T480s>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com> <1606732298-22107-22-git-send-email-olekstysh@gmail.com> <alpine.DEB.2.21.2012091822300.20986@sstabellini-ThinkPad-T480s> <a6897469-f031-e49d-0b4c-b1aa10d66d6d@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 10 Dec 2020, Julien Grall wrote:
> On 10/12/2020 02:30, Stefano Stabellini wrote:
> > On Mon, 30 Nov 2020, Oleksandr Tyshchenko wrote:
> > > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> > > 
> > > We need to send a mapcache invalidation request to qemu/demu every time
> > > a page gets removed from a guest.
> > > 
> > > At the moment, the Arm code doesn't explicitly remove the existing
> > > mapping before inserting the new mapping. Instead, this is done
> > > implicitly by __p2m_set_entry().
> > > 
> > > So we need to recognize the case when the old entry is a RAM page *and*
> > > the new MFN is different in order to set the corresponding flag.
> > > The most suitable place to do this is p2m_free_entry(), where
> > > we can find the correct leaf type. The invalidation request
> > > will be sent in do_trap_hypercall() later on.
> > 
> > Why is it sent in do_trap_hypercall() ?
> 
> I believe this is following the approach used by x86. There are actually some
> discussion about it (see [1]).
> 
> Leaving aside the toolstack case for now, AFAIK, the only way a guest can
> modify its p2m is via an hypercall. Do you have an example otherwise?

OK, this is a very important assumption. We should definitely write it
down. I think it is true today on ARM.


> When sending the invalidation request, the vCPU will be blocked until all the
> IOREQ server have acknowledged the invalidation. So the hypercall seems to be
> the best position to do it.
> 
> Alternatively, we could use check_for_vcpu_work() to check if the mapcache
> needs to be invalidated. The inconvenience is we would execute a few more
> instructions in each entry/exit path.

Yeah, it would be more natural to call it from check_for_vcpu_work(). If
we put it between #ifdef CONFIG_IOREQ_SERVER it wouldn't be bad. But I
am not a fan of increasing the instructions on the exit path either.
From this point of view, putting it at the end of do_trap_hypercall() is a
nice trick actually. Let's just make sure it has a good comment on top.
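
The pattern under discussion (mark the domain when a RAM entry is replaced,
then consume the flag once at the tail of the hypercall path) can be sketched
outside Xen. Everything below is a hypothetical stand-in for illustration:
`test_and_clear_flag()` mimics Xen's test_and_clear_bool(), and the other
names are invented for this sketch, not the real hypervisor interfaces.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the Xen state under discussion. */
static bool mapcache_invalidate;
static int signals_sent;

/* Mimics test_and_clear_bool(): returns the old value and clears the flag. */
static bool test_and_clear_flag(bool *flag)
{
    bool old = *flag;
    *flag = false;
    return old;
}

/* The p2m side: mark the domain dirty when a RAM entry is replaced. */
static void p2m_replace_ram_entry(void)
{
    mapcache_invalidate = true;
}

/* The tail of the hypercall path: send at most one invalidation
 * per flag transition, no matter how many entries were replaced. */
static void hypercall_exit(void)
{
    if (mapcache_invalidate && test_and_clear_flag(&mapcache_invalidate))
        signals_sent++;
}
```

The point of deferring to the hypercall tail is visible here: several p2m
updates within one hypercall collapse into a single invalidation request.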


> > > Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> > > CC: Julien Grall <julien.grall@arm.com>
> > > 
> > > ---
> > > Please note, this is a split/cleanup/hardening of Julien's PoC:
> > > "Add support for Guest IO forwarding to a device emulator"
> > > 
> > > Changes V1 -> V2:
> > >     - new patch, some changes were derived from (+ new explanation):
> > >       xen/ioreq: Make x86's invalidate qemu mapcache handling common
> > >     - put setting of the flag into __p2m_set_entry()
> > >     - clarify the conditions when the flag should be set
> > >     - use domain_has_ioreq_server()
> > >     - update do_trap_hypercall() by adding local variable
> > > 
> > > Changes V2 -> V3:
> > >     - update patch description
> > >     - move check to p2m_free_entry()
> > >     - add a comment
> > >     - use "curr" instead of "v" in do_trap_hypercall()
> > > ---
> > > ---
> > >   xen/arch/arm/p2m.c   | 24 ++++++++++++++++--------
> > >   xen/arch/arm/traps.c | 13 ++++++++++---
> > >   2 files changed, 26 insertions(+), 11 deletions(-)
> > > 
> > > diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> > > index 5b8d494..9674f6f 100644
> > > --- a/xen/arch/arm/p2m.c
> > > +++ b/xen/arch/arm/p2m.c
> > > @@ -1,6 +1,7 @@
> > >   #include <xen/cpu.h>
> > >   #include <xen/domain_page.h>
> > >   #include <xen/iocap.h>
> > > +#include <xen/ioreq.h>
> > >   #include <xen/lib.h>
> > >   #include <xen/sched.h>
> > >   #include <xen/softirq.h>
> > > @@ -749,17 +750,24 @@ static void p2m_free_entry(struct p2m_domain *p2m,
> > >       if ( !p2m_is_valid(entry) )
> > >           return;
> > > -    /* Nothing to do but updating the stats if the entry is a super-page. */
> > > -    if ( p2m_is_superpage(entry, level) )
> > > +    if ( p2m_is_superpage(entry, level) || (level == 3) )
> > >       {
> > > -        p2m->stats.mappings[level]--;
> > > -        return;
> > > -    }
> > > +#ifdef CONFIG_IOREQ_SERVER
> > > +        /*
> > > +         * If this gets called (non-recursively) then either the entry
> > > +         * was replaced by an entry with a different base (valid case) or
> > > +         * the shattering of a superpage failed (error case).
> > > +         * So, at worst, a spurious mapcache invalidation might be sent.
> > > +         */
> > > +        if ( domain_has_ioreq_server(p2m->domain) &&
> > > +             (p2m->domain == current->domain) && p2m_is_ram(entry.p2m.type) )
> > > +            p2m->domain->mapcache_invalidate = true;
> > 
> > Why the (p2m->domain == current->domain) check? Shouldn't we set
> > mapcache_invalidate to true anyway? What happens if p2m->domain !=
> > current->domain? We wouldn't want the domain to lose the
> > mapcache_invalidate notification.
> 
> This is also discussed in [1]. :) The main question is why would a
> toolstack/device model modify the guest memory after boot?
> 
> If we assume it does, then the device model would need to pause the domain
> before modifying the RAM.
> 
> We also need to make sure that all the IOREQ servers have invalidated
> the mapcache before the domain run again.
> 
> This would require quite a bit of work. I am not sure the effort is worth if
> there are no active users today.

OK, that explains why we think p2m->domain == current->domain, but why
do we need to have a check for it right here?

In other words, we don't think it is realistic to get here with
p2m->domain != current->domain, but let's say that we do somehow. What's
the best course of action? Probably, set mapcache_invalidate to true and
possibly print a warning?

Leaving mapcache_invalidate to false doesn't seem to be what we want to
do?

 
> > >       BUILD_BUG_ON(NR_hypercalls < ARRAY_SIZE(arm_hypercall_table) );
> > > @@ -1459,7 +1460,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
> > >           return;
> > >       }
> > >   -    current->hcall_preempted = false;
> > > +    curr->hcall_preempted = false;
> > >         perfc_incra(hypercalls, *nr);
> > >       call = arm_hypercall_table[*nr].fn;
> > > @@ -1472,7 +1473,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
> > >       HYPERCALL_RESULT_REG(regs) = call(HYPERCALL_ARGS(regs));
> > >     #ifndef NDEBUG
> > > -    if ( !current->hcall_preempted )
> > > +    if ( !curr->hcall_preempted )
> > >       {
> > >           /* Deliberately corrupt parameter regs used by this hypercall. */
> > >           switch ( arm_hypercall_table[*nr].nr_args ) {
> > > @@ -1489,8 +1490,14 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
> > >   #endif
> > >         /* Ensure the hypercall trap instruction is re-executed. */
> > > -    if ( current->hcall_preempted )
> > > +    if ( curr->hcall_preempted )
> > >           regs->pc -= 4;  /* re-execute 'hvc #XEN_HYPERCALL_TAG' */
> > > +
> > > +#ifdef CONFIG_IOREQ_SERVER
> > > +    if ( unlikely(curr->domain->mapcache_invalidate) &&
> > > +         test_and_clear_bool(curr->domain->mapcache_invalidate) )
> > > +        ioreq_signal_mapcache_invalidate();
> > 
> > Why not just:
> > 
> > if ( unlikely(test_and_clear_bool(curr->domain->mapcache_invalidate)) )
> >      ioreq_signal_mapcache_invalidate();
> > 
> 
> This seems to match the x86 code. My guess is they tried to prevent the cost
> of the atomic operation if there is no chance mapcache_invalidate is true.
> 
> I am split on whether the first check is worth it. The atomic operation should be
> uncontended most of the time, so it should be quick. But it will always be
> slower than just a read because there is always a store involved.

I am not a fan of optimizations with unclear benefits :-)
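
The trade-off Julien describes (a plain load short-circuiting the atomic
read-modify-write) can be demonstrated in isolation. The names below are
invented stand-ins, and a counter stands in for the cost of the real atomic;
this is a sketch of the pattern, not the Xen implementation.

```c
#include <assert.h>
#include <stdbool.h>

static int rmw_ops; /* counts simulated atomic read-modify-writes */

/* Stand-in for test_and_clear_bool(): always pays the RMW cost,
 * because it always performs a store even when the flag is clear. */
static bool test_and_clear_counted(bool *flag)
{
    bool old = *flag;
    rmw_ops++;
    *flag = false;
    return old;
}

/* Exit-path check, optionally short-circuited by a plain load. */
static bool consume_flag(bool *flag, bool precheck)
{
    if (precheck && !*flag)   /* plain read: no store, no RMW */
        return false;
    return test_and_clear_counted(flag);
}
```

With the pre-check, the common case (flag clear) never touches the cache line
with a store; without it, every hypercall exit pays for the RMW.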


> On a related topic, Jan pointed out that the invalidation would not work
> properly if you have multiple vCPUs modifying the P2M at the same time.

Uhm, yes.


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 01:28:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 01:28:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50010.88453 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knXEV-0005gP-Fz; Fri, 11 Dec 2020 01:28:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50010.88453; Fri, 11 Dec 2020 01:28:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knXEV-0005gD-CV; Fri, 11 Dec 2020 01:28:31 +0000
Received: by outflank-mailman (input) for mailman id 50010;
 Fri, 11 Dec 2020 01:28:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZKjA=FP=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1knXEU-0005fg-Kh
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 01:28:30 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 11b4fba2-6a62-49e6-ad3a-2fc04b26061f;
 Fri, 11 Dec 2020 01:28:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 11b4fba2-6a62-49e6-ad3a-2fc04b26061f
Date: Thu, 10 Dec 2020 17:28:27 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607650108;
	bh=MD8MZlK61mbrRNToR8OyiHXQadaLJYv/gx2kWam42NE=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=JmY5KvRgQluqLPdGuU2taLyrAwFT472aMRHf/i75MUjrFc1ZEfCR95NgeRQ3TagFV
	 hYZKyy0du7kNZ3rBRHzpTI/SCm0kR92iPGuJz9jgAXX547BQj5tkUlrBSbHuAAuLS+
	 kW4zsylGMhsdPr/CKzZeRVjI0Wl1Tw+LQx4gfZbO1OtB0eYBcPWXax+TVcCnxD2aW7
	 /UjQX0tabTZJqXiAORxoK3thDHmVqa3lkJrp3plBMVOcwfL0nD9VutvNNidk4/CHfr
	 mxsBicPgxwNmrhZC8yOD1xhEv/IX6vL/2jjNqP2oD5yRRBJxqBooqQ61j4ttc3HLQ/
	 qGI+gf9UmV1ag==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 3/8] xen/arm: revert patch related to XArray
In-Reply-To: <ca988f0f6c66a2e35d06c07e226d0145c1320611.1607617848.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2012101529400.6285@sstabellini-ThinkPad-T480s>
References: <cover.1607617848.git.rahul.singh@arm.com> <ca988f0f6c66a2e35d06c07e226d0145c1320611.1607617848.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 10 Dec 2020, Rahul Singh wrote:
> XArray is not implemented in Xen; revert the patch that introduced the
> XArray code in the SMMUv3 driver.
> 
> XArray was added in preparation for sharing some ASIDs with the CPU.
> 
> As Xen supports only Stage-2 translation and the ASID is only used for
> Stage-1 translation, there are no consequences of reverting this patch
> for Xen.
> 
> Once XArray is implemented in Xen, this patch can be added back if Xen
> gains support for Stage-1 translation.
> 
> This reverts commit 0299a1a81ca056e79c1a7fb751f936ec0d5c7afe.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Changes in v3:
>  - Added the consequences of reverting this patch to the commit message
> 
> ---
>  xen/drivers/passthrough/arm/smmu-v3.c | 27 +++++++++------------------
>  1 file changed, 9 insertions(+), 18 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> index 8b7747ed38..7b29ead48c 100644
> --- a/xen/drivers/passthrough/arm/smmu-v3.c
> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> @@ -625,6 +625,7 @@ struct arm_smmu_device {
>  
>  #define ARM_SMMU_MAX_ASIDS		(1 << 16)
>  	unsigned int			asid_bits;
> +	DECLARE_BITMAP(asid_map, ARM_SMMU_MAX_ASIDS);
>  
>  #define ARM_SMMU_MAX_VMIDS		(1 << 16)
>  	unsigned int			vmid_bits;
> @@ -690,8 +691,6 @@ struct arm_smmu_option_prop {
>  	const char *prop;
>  };
>  
> -static DEFINE_XARRAY_ALLOC1(asid_xa);
> -
>  static struct arm_smmu_option_prop arm_smmu_options[] = {
>  	{ ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" },
>  	{ ARM_SMMU_OPT_PAGE0_REGS_ONLY, "cavium,cn9900-broken-page1-regspace"},
> @@ -1346,14 +1345,6 @@ static void arm_smmu_free_cd_tables(struct arm_smmu_domain *smmu_domain)
>  	cdcfg->cdtab = NULL;
>  }
>  
> -static void arm_smmu_free_asid(struct arm_smmu_ctx_desc *cd)
> -{
> -	if (!cd->asid)
> -		return;
> -
> -	xa_erase(&asid_xa, cd->asid);
> -}
> -
>  /* Stream table manipulation functions */
>  static void
>  arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc *desc)
> @@ -1988,9 +1979,10 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
>  	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
>  		struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
>  
> -		if (cfg->cdcfg.cdtab)
> +		if (cfg->cdcfg.cdtab) {
>  			arm_smmu_free_cd_tables(smmu_domain);
> -		arm_smmu_free_asid(&cfg->cd);
> +			arm_smmu_bitmap_free(smmu->asid_map, cfg->cd.asid);
> +		}
>  	} else {
>  		struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
>  		if (cfg->vmid)
> @@ -2005,15 +1997,14 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
>  				       struct io_pgtable_cfg *pgtbl_cfg)
>  {
>  	int ret;
> -	u32 asid;
> +	int asid;
>  	struct arm_smmu_device *smmu = smmu_domain->smmu;
>  	struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
>  	typeof(&pgtbl_cfg->arm_lpae_s1_cfg.tcr) tcr = &pgtbl_cfg->arm_lpae_s1_cfg.tcr;
>  
> -	ret = xa_alloc(&asid_xa, &asid, &cfg->cd,
> -		       XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL);
> -	if (ret)
> -		return ret;
> +	asid = arm_smmu_bitmap_alloc(smmu->asid_map, smmu->asid_bits);
> +	if (asid < 0)
> +		return asid;
>  
>  	cfg->s1cdmax = master->ssid_bits;
>  
> @@ -2046,7 +2037,7 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
>  out_free_cd_tables:
>  	arm_smmu_free_cd_tables(smmu_domain);
>  out_free_asid:
> -	arm_smmu_free_asid(&cfg->cd);
> +	arm_smmu_bitmap_free(smmu->asid_map, asid);
>  	return ret;
>  }
>  
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 01:28:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 01:28:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50013.88465 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knXEc-0005mW-Pt; Fri, 11 Dec 2020 01:28:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50013.88465; Fri, 11 Dec 2020 01:28:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knXEc-0005mM-Lg; Fri, 11 Dec 2020 01:28:38 +0000
Received: by outflank-mailman (input) for mailman id 50013;
 Fri, 11 Dec 2020 01:28:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZKjA=FP=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1knXEb-0005ll-G3
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 01:28:37 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7a63fc49-f426-436e-ad12-b2bd723b5184;
 Fri, 11 Dec 2020 01:28:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a63fc49-f426-436e-ad12-b2bd723b5184
Date: Thu, 10 Dec 2020 17:28:35 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607650116;
	bh=IReJJgjVVbTQUD9zyI+ACY5NU11VnkHtglGwXmLG+S8=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=MzT+emg+30a3ZhCokgGEm70iYEMJU2++5HRli9qO3Nt/JEII3PgrfI26xIO66QPsA
	 6LQpU+yiHjAdAD46qty34cqzlcC7kgD5Z4M6RzITRcVTQkWB45le9+igF9C2itmTK1
	 8WW7NGPtwxvEihbvZNNXRmcUTJiohdxt2gJVOEeS0mJ5m38x8mqUXtHEt+OdMQuPG8
	 LFvapPKZYOh5JqzBB5MGF4UURFraMNTLkM0HgubrgDUuDd2fRZBjKUaK1KJD06xTmt
	 GR4PBt8jpyI2K7xTIKeLfHAZ/HjAFBji2IhGatzsSUQQb2aJR1NCNyPLK0UYydCDKr
	 Tq2uM6ebxg/TA==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 4/8] xen/arm: Remove support for Stage-1 translation
 on SMMUv3.
In-Reply-To: <a5e3509bbc4ce21e0d9d176d7a2984ef40ad0ae3.1607617848.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2012101532110.6285@sstabellini-ThinkPad-T480s>
References: <cover.1607617848.git.rahul.singh@arm.com> <a5e3509bbc4ce21e0d9d176d7a2984ef40ad0ae3.1607617848.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 10 Dec 2020, Rahul Singh wrote:
> @@ -2087,29 +1693,8 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
>  	}
>  
>  	/* Restrict the stage to what we can actually support */
> -	if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S1))
> -		smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
> -	if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S2))
> -		smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
> -
> -	switch (smmu_domain->stage) {
> -	case ARM_SMMU_DOMAIN_S1:
> -		ias = (smmu->features & ARM_SMMU_FEAT_VAX) ? 52 : 48;
> -		ias = min_t(unsigned long, ias, VA_BITS);
> -		oas = smmu->ias;
> -		fmt = ARM_64_LPAE_S1;
> -		finalise_stage_fn = arm_smmu_domain_finalise_s1;
> -		break;
> -	case ARM_SMMU_DOMAIN_NESTED:
> -	case ARM_SMMU_DOMAIN_S2:
> -		ias = smmu->ias;
> -		oas = smmu->oas;
> -		fmt = ARM_64_LPAE_S2;
> -		finalise_stage_fn = arm_smmu_domain_finalise_s2;
> -		break;
> -	default:
> -		return -EINVAL;
> -	}
> +	smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
> +

Last time we agreed on adding an error message?
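
For reference, the error path being asked for could look something like the
sketch below. The structure, flag value, and message are assumptions made for
illustration; the real driver would use its own feature bits and log via
dev_err() rather than a comment.

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical stand-ins for the driver's types and feature flags. */
#define ARM_SMMU_FEAT_TRANS_S2 (1u << 0)

struct fake_smmu {
    unsigned int features;
};

/* Sketch of restricting the stage to S2, with an error message instead of
 * silently forcing the stage when the hardware cannot support it. */
static int finalise_stage(const struct fake_smmu *smmu, int *stage)
{
    if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S2)) {
        /* dev_err(smmu->dev, "stage-2 translation not supported\n"); */
        return -EINVAL;
    }
    *stage = 2; /* i.e. ARM_SMMU_DOMAIN_S2 */
    return 0;
}
```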


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 01:28:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 01:28:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50017.88477 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knXEj-0005tG-2A; Fri, 11 Dec 2020 01:28:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50017.88477; Fri, 11 Dec 2020 01:28:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knXEi-0005t8-Ul; Fri, 11 Dec 2020 01:28:44 +0000
Received: by outflank-mailman (input) for mailman id 50017;
 Fri, 11 Dec 2020 01:28:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZKjA=FP=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1knXEg-0005re-TZ
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 01:28:42 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3dd4e877-3809-4696-903b-622b26cbd75d;
 Fri, 11 Dec 2020 01:28:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3dd4e877-3809-4696-903b-622b26cbd75d
Date: Thu, 10 Dec 2020 17:28:40 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607650121;
	bh=HNKm2imjIECPStxL3ltFy0qN4phKaqzK0QgnjM0azy4=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=gPWatTZxt0e/qhfCDVAmtEiq9dVUVZ3fgClsFESsuHnu84eMzojkAb0nH3XgzgIFt
	 GHX/UvsafWeUTL3AVw4oOf9p4EDFp1So8joeh1tS832wKG0h9U63P7RdWQgwmWht2l
	 HC3EX+MsGE7lYsDaIAyrfFBnDrkYDTOFjFqu2GuuPiHhbCV/jEuR7k1ScgkOTEqjy0
	 msUspCxoL9eNx7y44xQFfyVg6HzZWBTAqC7J1Km0BFgbOWmZZ08RN0P/M5yuGjMKOc
	 uCGjxTwA7MbLPTUZ3CLxgKou4+kPN33d6+a5hMreeMoxpjkqPzjw27zasZvzu9Y3qv
	 Mfr5wWLoyxk9g==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Subject: Re: [PATCH v3 5/8] xen/device-tree: Add dt_property_match_string
 helper
In-Reply-To: <2cf4c10d0ce81290af96e29ee364df87c06ef849.1607617848.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2012101539570.6285@sstabellini-ThinkPad-T480s>
References: <cover.1607617848.git.rahul.singh@arm.com> <2cf4c10d0ce81290af96e29ee364df87c06ef849.1607617848.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 10 Dec 2020, Rahul Singh wrote:
> Import the Linux helper of_property_match_string. This function searches
> a string list property and returns the index of a specific string value.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Changes in v3:
>  - This patch was introduced in this version.
> 
> ---
>  xen/common/device_tree.c      | 27 +++++++++++++++++++++++++++
>  xen/include/xen/device_tree.h | 12 ++++++++++++
>  2 files changed, 39 insertions(+)
> 
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index e107c6f89f..18825e333e 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -208,6 +208,33 @@ int dt_property_read_string(const struct dt_device_node *np,
>      return 0;
>  }
>  
> +int dt_property_match_string(const struct dt_device_node *np,
> +                             const char *propname, const char *string)
> +{
> +    const struct dt_property *dtprop = dt_find_property(np, propname, NULL);
> +    size_t l;
> +    int i;
> +    const char *p, *end;
> +
> +    if ( !dtprop )
> +        return -EINVAL;
> +    if ( !dtprop->value )
> +        return -ENODATA;
> +
> +    p = dtprop->value;
> +    end = p + dtprop->length;
> +
> +    for ( i = 0; p < end; i++, p += l )
> +    {
> +        l = strnlen(p, end - p) + 1;
> +        if ( p + l > end )
> +            return -EILSEQ;
> +        if ( strcmp(string, p) == 0 )
> +            return i; /* Found it; return index */
> +    }
> +    return -ENODATA;
> +}
> +
>  bool_t dt_device_is_compatible(const struct dt_device_node *device,
>                                 const char *compat)
>  {
> diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
> index f2ad22b79c..b02696be94 100644
> --- a/xen/include/xen/device_tree.h
> +++ b/xen/include/xen/device_tree.h
> @@ -400,6 +400,18 @@ static inline bool_t dt_property_read_bool(const struct dt_device_node *np,
>  int dt_property_read_string(const struct dt_device_node *np,
>                              const char *propname, const char **out_string);
>  
> +/**
> + * dt_property_match_string() - Find string in a list and return index
> + * @np: pointer to node containing string list property
> + * @propname: string list property name
> + * @string: pointer to string to search for in string list
> + *
> + * This function searches a string list property and returns the index
> + * of a specific string value.
> + */
> +int dt_property_match_string(const struct dt_device_node *np,
> +                             const char *propname, const char *string);
> +
>  /**
>   * Checks if the given "compat" string matches one of the strings in
>   * the device's "compatible" property
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 01:29:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 01:29:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50032.88488 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knXF1-00064v-CX; Fri, 11 Dec 2020 01:29:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50032.88488; Fri, 11 Dec 2020 01:29:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knXF1-00064m-9B; Fri, 11 Dec 2020 01:29:03 +0000
Received: by outflank-mailman (input) for mailman id 50032;
 Fri, 11 Dec 2020 01:29:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZKjA=FP=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1knXF0-0005re-Eq
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 01:29:02 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 79eb2f64-6af5-4a20-ad32-8a1024365f92;
 Fri, 11 Dec 2020 01:29:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 79eb2f64-6af5-4a20-ad32-8a1024365f92
Date: Thu, 10 Dec 2020 17:28:58 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607650139;
	bh=xwcnFGOKiauXIGJuG56zbmg3yEFtQCGsqTFRlEKBMWo=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=hqXJQhrVwJe4GhPK5EgfQJcMurMepdx4qsGoZEQE60/2v1KP9aqcYxmqHmLJkiUBp
	 XcNlFKmEWtA2PP0jMGaUKRBYLb8KCpi2fGt8eXnZurO7MBRjUPUvQ2QoTLCpfBM/bz
	 4owL/057zD3mpeQhPZ588JADMQizwUsKfbFFhTPHSSqLjuBXL6VsvckemA/3M53+8h
	 B2vGq7UN5O3vatW7r7AEpBiXuqEP4kWJ4P+ALpO6hp7luLxqXQYHiLID8Qxjah+tzS
	 PpmqY0ryWdm99pkW8Tl1P7Kja4d9DcczEg1rXCCOLSX2tTXFGokwAq+3807yeHiLnf
	 +A6054VFnyP4A==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
    Paul Durrant <paul@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 7/8] xen/arm: Add support for SMMUv3 driver
In-Reply-To: <33645b592bc5935a3b28ad576a819d06ed81e8dd.1607617848.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2012101602530.6285@sstabellini-ThinkPad-T480s>
References: <cover.1607617848.git.rahul.singh@arm.com> <33645b592bc5935a3b28ad576a819d06ed81e8dd.1607617848.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 10 Dec 2020, Rahul Singh wrote:
> Add support for the ARM architected SMMUv3 implementation. It is based on
> the Linux SMMUv3 driver.
> 
> Driver is currently supported as Tech Preview.
> 
> Major differences with regard to Linux driver are as follows:
> 2. Only Stage-2 translation is supported, whereas the Linux driver
>    supports both Stage-1 and Stage-2 translations.
> 3. The P2M page table is reused instead of creating a new one, as
>    SMMUv3 has the capability to share page tables with the CPU.
> 4. Tasklets are used in place of Linux's threaded IRQs for event queue
>    and priority queue IRQ handling.
> 5. The latest version of the Linux SMMUv3 code implements the command
>    queue access functions based on atomic operations implemented in
>    Linux. The atomic functions used by the command queue access
>    functions are not implemented in XEN, therefore we decided to port
>    the earlier version of the code. The atomic operations were
>    introduced to fix the bottleneck of the SMMU command queue insertion
>    operation, with a new lock-free fast-path algorithm for inserting
>    commands into the queue.
>    The consequence of reverting that patch is that command queue
>    insertion will be slow on large systems, as a spinlock is used to
>    serialize accesses from all CPUs to the single queue supported by
>    the hardware. Once proper atomic operations are available in XEN
>    the driver can be updated.
> 6. A spin lock is used in place of a mutex when attaching a device to
>    the SMMU, as there is no blocking lock implementation available in
>    XEN. This might introduce latency in XEN; this needs to be
>    investigated before the driver is out for tech preview.
> 7. PCI ATS functionality is not supported, as there is no support
>    available in XEN to test the functionality. The code is neither
>    tested nor compiled, and is guarded by the flag CONFIG_PCI_ATS.
> 8. MSI interrupts are not supported, as there is no support available
>    in XEN to request MSI interrupts. The code is neither tested nor
>    compiled, and is guarded by the flag CONFIG_MSI.

This is much better Rahul, great work!


> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> ---
> Changes in v3:
> - added return statement for readx_poll_timeout function.
> - remove iommu_get_dma_cookie and iommu_put_dma_cookie.
> - remove struct arm_smmu_xen_device as not required.
> - move dt_property_match_string to device_tree.c file.
> - rename arm_smmu_*_thread to arm_smmu_*_tasklet to avoid confusion.
> - use ARM_SMMU_REG_SZ as size when map memory to XEN.
> - remove bypass keyword to make sure that when the device-tree probe
>   fails we report an error and do not continue to configure the SMMU
>   in bypass mode.
> - fixed minor comments.
> 
> ---
>  MAINTAINERS                           |   6 +
>  SUPPORT.md                            |   1 +
>  xen/drivers/passthrough/Kconfig       |  11 +
>  xen/drivers/passthrough/arm/Makefile  |   1 +
>  xen/drivers/passthrough/arm/smmu-v3.c | 777 ++++++++++++++++++++++----
>  5 files changed, 683 insertions(+), 113 deletions(-)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index dab38a6a14..1d63489eec 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -249,6 +249,12 @@ F:	xen/include/asm-arm/
>  F:	xen/include/public/arch-arm/
>  F:	xen/include/public/arch-arm.h
>  
> +ARM SMMUv3
> +M:	Bertrand Marquis <bertrand.marquis@arm.com>
> +M:	Rahul Singh <rahul.singh@arm.com>
> +S:	Supported
> +F:	xen/drivers/passthrough/arm/smmu-v3.c
> +
>  Change Log
>  M:	Paul Durrant <paul@xen.org>
>  R:	Community Manager <community.manager@xenproject.org>
> diff --git a/SUPPORT.md b/SUPPORT.md
> index ab02aca5f4..5ee3c8651a 100644
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -67,6 +67,7 @@ For the Cortex A57 r0p0 - r1p1, see Errata 832075.
>      Status, Intel VT-d: Supported
>      Status, ARM SMMUv1: Supported, not security supported
>      Status, ARM SMMUv2: Supported, not security supported
> +    Status, ARM SMMUv3: Tech Preview
>      Status, Renesas IPMMU-VMSA: Supported, not security supported
>  
>  ### ARM/GICv3 ITS
> diff --git a/xen/drivers/passthrough/Kconfig b/xen/drivers/passthrough/Kconfig
> index 0036007ec4..341ba92b30 100644
> --- a/xen/drivers/passthrough/Kconfig
> +++ b/xen/drivers/passthrough/Kconfig
> @@ -13,6 +13,17 @@ config ARM_SMMU
>  	  Say Y here if your SoC includes an IOMMU device implementing the
>  	  ARM SMMU architecture.
>  
> +config ARM_SMMU_V3
> +	bool "ARM Ltd. System MMU Version 3 (SMMUv3) Support" if EXPERT
> +	depends on ARM_64
> +	---help---
> +	 Support for implementations of the ARM System MMU architecture
> +	 version 3. The driver is at an experimental stage and should not
> +	 be used in production.
> +
> +	 Say Y here if your system includes an IOMMU device implementing
> +	 the ARM SMMUv3 architecture.
> +
>  config IPMMU_VMSA
>  	bool "Renesas IPMMU-VMSA found in R-Car Gen3 SoCs"
>  	depends on ARM_64
> diff --git a/xen/drivers/passthrough/arm/Makefile b/xen/drivers/passthrough/arm/Makefile
> index fcd918ea3e..c5fb3b58a5 100644
> --- a/xen/drivers/passthrough/arm/Makefile
> +++ b/xen/drivers/passthrough/arm/Makefile
> @@ -1,3 +1,4 @@
>  obj-y += iommu.o iommu_helpers.o iommu_fwspec.o
>  obj-$(CONFIG_ARM_SMMU) += smmu.o
>  obj-$(CONFIG_IPMMU_VMSA) += ipmmu-vmsa.o
> +obj-$(CONFIG_ARM_SMMU_V3) += smmu-v3.o
> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> index 2966015e5d..65b3db94ad 100644
> --- a/xen/drivers/passthrough/arm/smmu-v3.c
> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> @@ -2,37 +2,268 @@
>  /*
>   * IOMMU API for ARM architected SMMUv3 implementations.
>   *
> + * Based on Linux's SMMUv3 driver:
> + *    drivers/iommu/arm-smmu-v3.c
> + *    commit: ab435ce49bd1d02e33dfec24f76955dc1196970b
> + * and Xen's SMMU driver:
> + *    xen/drivers/passthrough/arm/smmu.c
> + *
> + * Major differences with regard to Linux driver are as follows:
> + *  1. Driver is currently supported as Tech Preview.
> + *  2. Only Stage-2 translation is supported, whereas the Linux driver
> + *     supports both Stage-1 and Stage-2 translations.
> + *  3. The P2M page table is reused instead of creating a new one, as
> + *     SMMUv3 has the capability to share page tables with the CPU.
> + *  4. Tasklets are used in place of Linux's threaded IRQs for event
> + *     queue and priority queue IRQ handling.
> + *  5. The latest version of the Linux SMMUv3 code implements the command
> + *     queue access functions based on atomic operations implemented in
> + *     Linux. The atomic functions used by the command queue access
> + *     functions are not implemented in XEN, therefore we decided to
> + *     port the earlier version of the code. The atomic operations were
> + *     introduced to fix the bottleneck of the SMMU command queue
> + *     insertion operation, with a new lock-free fast-path algorithm
> + *     for inserting commands into the queue.
> + *     The consequence of reverting that patch is that command queue
> + *     insertion will be slow on large systems, as a spinlock is used
> + *     to serialize accesses from all CPUs to the single queue supported
> + *     by the hardware. Once proper atomic operations are available in
> + *     XEN the driver can be updated.
> + *  6. A spin lock is used in place of a mutex when attaching a device
> + *     to the SMMU, as there is no blocking lock implementation available
> + *     in XEN. This might introduce latency in XEN; this needs to be
> + *     investigated before the driver is out for Tech Preview.
> + *  7. PCI ATS functionality is not supported, as there is no support
> + *     available in XEN to test the functionality. The code is neither
> + *     tested nor compiled, and is guarded by the flag CONFIG_PCI_ATS.
> + *  8. MSI interrupts are not supported, as there is no support available
> + *     in XEN to request MSI interrupts. The code is neither tested nor
> + *     compiled, and is guarded by the flag CONFIG_MSI.
> + *
> + * The following functionality should be supported before the driver is
> + * out for tech preview:
> + *
> + *  1. Investigate the timing analysis of using a spin lock in place of
> + *     a mutex when attaching a device to the SMMU.
> + *  2. Merge the latest Linux SMMUv3 driver code once atomic operations
> + *     are available in XEN.
> + *  3. PCI ATS and MSI interrupts should be supported.
> + *  4. Investigate side-effects of using tasklets in place of threaded
> + *     IRQs and fix any found.
> + *  5. The fallthrough keyword should be supported.
> + *  6. Implement the ffsll function in bitops.h.
> + *
>   * Copyright (C) 2015 ARM Limited
>   *
>   * Author: Will Deacon <will.deacon@arm.com>
>   *
> - * This driver is powered by bad coffee and bombay mix.
> + * Copyright (C) 2020 Arm Ltd
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + *
>   */
>  
> -#include <linux/acpi.h>
> -#include <linux/acpi_iort.h>
> -#include <linux/bitfield.h>
> -#include <linux/bitops.h>
> -#include <linux/crash_dump.h>
> -#include <linux/delay.h>
> -#include <linux/dma-iommu.h>
> -#include <linux/err.h>
> -#include <linux/interrupt.h>
> -#include <linux/io-pgtable.h>
> -#include <linux/iommu.h>
> -#include <linux/iopoll.h>
> -#include <linux/module.h>
> -#include <linux/msi.h>
> -#include <linux/of.h>
> -#include <linux/of_address.h>
> -#include <linux/of_iommu.h>
> -#include <linux/of_platform.h>
> -#include <linux/pci.h>
> -#include <linux/pci-ats.h>
> -#include <linux/platform_device.h>
> -
> -#include <linux/amba/bus.h>
> +#include <xen/acpi.h>
> +#include <xen/config.h>
> +#include <xen/delay.h>
> +#include <xen/errno.h>
> +#include <xen/err.h>
> +#include <xen/irq.h>
> +#include <xen/lib.h>
> +#include <xen/list.h>
> +#include <xen/mm.h>
> +#include <xen/rbtree.h>
> +#include <xen/sched.h>
> +#include <xen/sizes.h>
> +#include <xen/vmap.h>
> +#include <asm/atomic.h>
> +#include <asm/device.h>
> +#include <asm/io.h>
> +#include <asm/iommu_fwspec.h>
> +#include <asm/platform.h>
> +
> +/* Linux compatibility functions. */
> +typedef paddr_t		dma_addr_t;
> +typedef paddr_t		phys_addr_t;
> +typedef unsigned int		gfp_t;
> +
> +#define platform_device		device
> +
> +#define GFP_KERNEL		0
> +
> +/* Alias to Xen device tree helpers */
> +#define device_node			dt_device_node
> +#define of_phandle_args		dt_phandle_args
> +#define of_device_id		dt_device_match
> +#define of_match_node		dt_match_node
> +#define of_property_read_u32(np, pname, out)	\
> +		(!dt_property_read_u32(np, pname, out))
> +#define of_property_read_bool		dt_property_read_bool
> +#define of_parse_phandle_with_args	dt_parse_phandle_with_args
> +
> +/* Alias to Xen time functions */
> +#define ktime_t s_time_t
> +#define ktime_get()			(NOW())
> +#define ktime_add_us(t, i)		(t + MICROSECS(i))
> +#define ktime_compare(t, i)		(t > (i))
> +
> +/* Alias to Xen allocation helpers */
> +#define kzalloc(size, flags)	_xzalloc(size, sizeof(void *))
> +#define kfree	xfree
> +#define devm_kzalloc(dev, size, flags)	 _xzalloc(size, sizeof(void *))
> +
> +/* Device logger functions */
> +#define dev_name(dev)	dt_node_full_name(dev->of_node)
> +#define dev_dbg(dev, fmt, ...)			\
> +	printk(XENLOG_DEBUG "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
> +#define dev_notice(dev, fmt, ...)		\
> +	printk(XENLOG_INFO "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
> +#define dev_warn(dev, fmt, ...)			\
> +	printk(XENLOG_WARNING "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
> +#define dev_err(dev, fmt, ...)			\
> +	printk(XENLOG_ERR "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
> +#define dev_info(dev, fmt, ...)			\
> +	printk(XENLOG_INFO "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
> +#define dev_err_ratelimited(dev, fmt, ...)			\
> +	printk(XENLOG_ERR "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
> +
> +/*
> + * Periodically poll an address and wait between reads in us until a
> + * condition is met or a timeout occurs.
> + *
> + * @return: 0 when cond met, -ETIMEDOUT upon timeout
> + */
> +#define readx_poll_timeout(op, addr, val, cond, sleep_us, timeout_us)	\
> +({		\
> +	s_time_t deadline = NOW() + MICROSECS(timeout_us);		\
> +	for (;;) {		\
> +		(val) = op(addr);		\
> +		if (cond)		\
> +			break;		\
> +		if (NOW() > deadline) {		\
> +			(val) = op(addr);		\
> +			break;		\
> +		}		\
> +		udelay(sleep_us);		\
> +	}		\
> +	(cond) ? 0 : -ETIMEDOUT;		\
> +})

NIT: alignment of the '\'


> +#define readl_relaxed_poll_timeout(addr, val, cond, delay_us, timeout_us)	\
> +	readx_poll_timeout(readl_relaxed, addr, val, cond, delay_us, timeout_us)
> +
> +#define FIELD_PREP(_mask, _val)			\
> +	(((typeof(_mask))(_val) << (__builtin_ffsll(_mask) - 1)) & (_mask))
> +
> +#define FIELD_GET(_mask, _reg)			\
> +	(typeof(_mask))(((_reg) & (_mask)) >> (__builtin_ffsll(_mask) - 1))

let's add ffsll to bitops.h


> +/*
> + * Helpers for DMA allocation. Just the function name is reused for
> + * porting code; these allocations are not managed allocations.
> + */
> +static void *dmam_alloc_coherent(struct device *dev, size_t size,
> +				paddr_t *dma_handle, gfp_t gfp)
> +{
> +	void *vaddr;
> +	unsigned long alignment = size;
> +
> +	/*
> +	 * _xzalloc requires that (align & (align - 1)) == 0. Most of the
> +	 * allocations in SMMU code should send the right value for size. In
> +	 * case this is not true print a warning and align to the size of a
> +	 * (void *)
> +	 */
> +	if (size & (size - 1)) {
> +		printk(XENLOG_WARNING "SMMUv3: Fixing alignment for the DMA buffer\n");
> +		alignment = sizeof(void *);
> +	}
> +
> +	vaddr = _xzalloc(size, alignment);
> +	if (!vaddr) {
> +		printk(XENLOG_ERR "SMMUv3: DMA allocation failed\n");
> +		return NULL;
> +	}
> +
> +	*dma_handle = virt_to_maddr(vaddr);
> +
> +	return vaddr;
> +}
> +
> +
> +/* Xen specific code. */
> +struct iommu_domain {
> +	/* Runtime SMMU configuration for this iommu_domain */
> +	atomic_t		ref;
> +	/*
> +	 * Used to link iommu_domain contexts for a same domain.
> +	 * There is at least one per SMMU used by the domain.
> +	 */
> +	struct list_head		list;
> +};
>  
> +/* Describes information required for a Xen domain */
> +struct arm_smmu_xen_domain {
> +	spinlock_t		lock;
> +
> +	/* List of iommu domains associated to this domain */
> +	struct list_head	contexts;
> +};
> +
> +
> +/* Keep a list of devices associated with this driver */
> +static DEFINE_SPINLOCK(arm_smmu_devices_lock);
> +static LIST_HEAD(arm_smmu_devices);
> +
> +static inline void *dev_iommu_priv_get(struct device *dev)
> +{
> +	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> +
> +	return fwspec && fwspec->iommu_priv ? fwspec->iommu_priv : NULL;
> +}
> +
> +static inline void dev_iommu_priv_set(struct device *dev, void *priv)
> +{
> +	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> +
> +	fwspec->iommu_priv = priv;
> +}
> +
> +static int platform_get_irq_byname_optional(struct device *dev,
> +				const char *name)
> +{
> +	int index, ret;
> +	struct dt_device_node *np  = dev_to_dt(dev);
> +
> +	if (unlikely(!name))
> +		return -EINVAL;
> +
> +	index = dt_property_match_string(np, "interrupt-names", name);
> +	if (index < 0) {
> +		dev_info(dev, "IRQ %s not found\n", name);
> +		return index;
> +	}
> +
> +	ret = platform_get_irq(np, index);
> +	if (ret < 0) {
> +		dev_err(dev, "failed to get irq index %d\n", index);
> +		return -ENODEV;
> +	}
> +
> +	return ret;
> +}
> +
> +/* Start of Linux SMMUv3 code */
>  /* MMIO registers */
>  #define ARM_SMMU_IDR0			0x0
>  #define IDR0_ST_LVL			GENMASK(28, 27)
> @@ -402,6 +633,7 @@ enum pri_resp {
>  	PRI_RESP_SUCC = 2,
>  };
>  
> +#ifdef CONFIF_MSI

CONFIG_MSI


>  enum arm_smmu_msi_index {
>  	EVTQ_MSI_INDEX,
>  	GERROR_MSI_INDEX,
> @@ -426,6 +658,7 @@ static phys_addr_t arm_smmu_msi_cfg[ARM_SMMU_MAX_MSIS][3] = {
>  		ARM_SMMU_PRIQ_IRQ_CFG2,
>  	},
>  };
> +#endif
>  
>  struct arm_smmu_cmdq_ent {
>  	/* Common fields */
> @@ -534,6 +767,7 @@ struct arm_smmu_s2_cfg {
>  	u16				vmid;
>  	u64				vttbr;
>  	u64				vtcr;
> +	struct domain		*domain;
>  };

This looks like a strange struct to add a pointer back to *domain. Maybe
struct arm_smmu_domain would be a better place for it?


>  struct arm_smmu_strtab_cfg {
> @@ -613,8 +847,13 @@ struct arm_smmu_device {
>  		u64			padding;
>  	};
>  
> -	/* IOMMU core code handle */
> -	struct iommu_device		iommu;
> +	/* Need to keep a list of SMMU devices */
> +	struct list_head		devices;
> +
> +	/* Tasklets for handling evts/faults and PCI page request IRQs */
> +	struct tasklet		evtq_irq_tasklet;
> +	struct tasklet		priq_irq_tasklet;
> +	struct tasklet		combined_irq_tasklet;
>  };
>  
>  /* SMMU private data for each master */
> @@ -638,7 +877,6 @@ enum arm_smmu_domain_stage {
>  
>  struct arm_smmu_domain {
>  	struct arm_smmu_device		*smmu;
> -	struct mutex			init_mutex; /* Protects smmu pointer */
>  
>  	bool				non_strict;
>  	atomic_t			nr_ats_masters;
> @@ -987,6 +1225,7 @@ static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
>  	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
>  }
>  
> +#ifdef CONFIF_MSI

CONFIG_MSI


>  /*
>   * The difference between val and sync_idx is bounded by the maximum size of
>   * a queue at 2^20 entries, so 32 bits is plenty for wrap-safe arithmetic.
> @@ -1030,6 +1269,13 @@ static int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
>  
>  	return __arm_smmu_sync_poll_msi(smmu, ent.sync.msidata);
>  }
> +#else
> +static inline int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
> +{
> +	return 0;
> +}
> +#endif /* CONFIG_MSI */
> +
>  
>  static int __arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
>  {
> @@ -1072,7 +1318,7 @@ arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc *desc)
>  	val |= desc->l2ptr_dma & STRTAB_L1_DESC_L2PTR_MASK;
>  
>  	/* See comment in arm_smmu_write_ctx_desc() */
> -	WRITE_ONCE(*dst, cpu_to_le64(val));
> +	write_atomic(dst, cpu_to_le64(val));
>  }
>  
>  static void arm_smmu_sync_ste_for_sid(struct arm_smmu_device *smmu, u32 sid)
> @@ -1187,7 +1433,7 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
>  						 STRTAB_STE_1_EATS_TRANS));
>  
>  	arm_smmu_sync_ste_for_sid(smmu, sid);
> -	WRITE_ONCE(dst[0], cpu_to_le64(val));
> +	write_atomic(&dst[0], cpu_to_le64(val));
>  	arm_smmu_sync_ste_for_sid(smmu, sid);
>  
>  	/* It's likely that we'll want to use the new STE soon */
> @@ -1234,7 +1480,7 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
>  }
>  
>  /* IRQ and event handlers */
> -static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
> +static void arm_smmu_evtq_tasklet(void *dev)
>  {
>  	int i;
>  	struct arm_smmu_device *smmu = dev;
> @@ -1264,7 +1510,6 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
>  	/* Sync our overflow flag, as we believe we're up to speed */
>  	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
>  		    Q_IDX(llq, llq->cons);
> -	return IRQ_HANDLED;
>  }
>  
>  static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
> @@ -1305,7 +1550,7 @@ static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
>  	}
>  }
>  
> -static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
> +static void arm_smmu_priq_tasklet(void *dev)
>  {
>  	struct arm_smmu_device *smmu = dev;
>  	struct arm_smmu_queue *q = &smmu->priq.q;
> @@ -1324,12 +1569,12 @@ static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
>  	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
>  		      Q_IDX(llq, llq->cons);
>  	queue_sync_cons_out(q);
> -	return IRQ_HANDLED;
>  }
>  
>  static int arm_smmu_device_disable(struct arm_smmu_device *smmu);
>  
> -static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
> +static void arm_smmu_gerror_handler(int irq, void *dev,
> +				struct cpu_user_regs *regs)
>  {
>  	u32 gerror, gerrorn, active;
>  	struct arm_smmu_device *smmu = dev;
> @@ -1339,7 +1584,7 @@ static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
>  
>  	active = gerror ^ gerrorn;
>  	if (!(active & GERROR_ERR_MASK))
> -		return IRQ_NONE; /* No errors pending */
> +		return; /* No errors pending */
>  
>  	dev_warn(smmu->dev,
>  		 "unexpected global error reported (0x%08x), this could be serious\n",
> @@ -1372,26 +1617,44 @@ static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
>  		arm_smmu_cmdq_skip_err(smmu);
>  
>  	writel(gerror, smmu->base + ARM_SMMU_GERRORN);
> -	return IRQ_HANDLED;
>  }
>  
> -static irqreturn_t arm_smmu_combined_irq_thread(int irq, void *dev)
> +static void arm_smmu_combined_irq_handler(int irq, void *dev,
> +				struct cpu_user_regs *regs)
> +{
> +	struct arm_smmu_device *smmu = dev;
> +
> +	arm_smmu_gerror_handler(irq, dev, regs);
> +
> +	tasklet_schedule(&(smmu->combined_irq_tasklet));
> +}
> +
> +static void arm_smmu_combined_irq_tasklet(void *dev)
>  {
>  	struct arm_smmu_device *smmu = dev;
>  
> -	arm_smmu_evtq_thread(irq, dev);
> +	arm_smmu_evtq_tasklet(dev);
>  	if (smmu->features & ARM_SMMU_FEAT_PRI)
> -		arm_smmu_priq_thread(irq, dev);
> +		arm_smmu_priq_tasklet(dev);
> +}
>  
> -	return IRQ_HANDLED;
> +static void arm_smmu_evtq_irq_tasklet(int irq, void *dev,
> +				struct cpu_user_regs *regs)
> +{
> +	struct arm_smmu_device *smmu = dev;
> +
> +	tasklet_schedule(&(smmu->evtq_irq_tasklet));
>  }
>  
> -static irqreturn_t arm_smmu_combined_irq_handler(int irq, void *dev)
> +static void arm_smmu_priq_irq_tasklet(int irq, void *dev,
> +				struct cpu_user_regs *regs)
>  {
> -	arm_smmu_gerror_handler(irq, dev);
> -	return IRQ_WAKE_THREAD;
> +	struct arm_smmu_device *smmu = dev;
> +
> +	tasklet_schedule(&(smmu->priq_irq_tasklet));
>  }
>  
> +#ifdef CONFIG_PCI_ATS
>  static void
>  arm_smmu_atc_inv_to_cmd(int ssid, unsigned long iova, size_t size,
>  			struct arm_smmu_cmdq_ent *cmd)
> @@ -1498,6 +1761,7 @@ static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
>  
>  	return ret ? -ETIMEDOUT : 0;
>  }
> +#endif
>  
>  static void arm_smmu_tlb_inv_context(void *cookie)
>  {
> @@ -1532,7 +1796,6 @@ static struct iommu_domain *arm_smmu_domain_alloc(void)
>  	if (!smmu_domain)
>  		return NULL;
>  
> -	mutex_init(&smmu_domain->init_mutex);
>  	INIT_LIST_HEAD(&smmu_domain->devices);
>  	spin_lock_init(&smmu_domain->devices_lock);
>  
> @@ -1578,6 +1841,17 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
>  	struct arm_smmu_device *smmu = smmu_domain->smmu;
>  	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
>  	typeof(&arm_lpae_s2_cfg.vtcr) vtcr = &arm_lpae_s2_cfg.vtcr;
> +	uint64_t reg = READ_SYSREG64(VTCR_EL2);
> +
> +	vtcr->tsz	= FIELD_GET(STRTAB_STE_2_VTCR_S2T0SZ, reg);
> +	vtcr->sl	= FIELD_GET(STRTAB_STE_2_VTCR_S2SL0, reg);
> +	vtcr->irgn	= FIELD_GET(STRTAB_STE_2_VTCR_S2IR0, reg);
> +	vtcr->orgn	= FIELD_GET(STRTAB_STE_2_VTCR_S2OR0, reg);
> +	vtcr->sh	= FIELD_GET(STRTAB_STE_2_VTCR_S2SH0, reg);
> +	vtcr->tg	= FIELD_GET(STRTAB_STE_2_VTCR_S2TG, reg);
> +	vtcr->ps	= FIELD_GET(STRTAB_STE_2_VTCR_S2PS, reg);
> +
> +	arm_lpae_s2_cfg.vttbr  = page_to_maddr(cfg->domain->arch.p2m.root);
>  
>  	vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
>  	if (vmid < 0)
> @@ -1592,6 +1866,11 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
>  			  FIELD_PREP(STRTAB_STE_2_VTCR_S2SH0, vtcr->sh) |
>  			  FIELD_PREP(STRTAB_STE_2_VTCR_S2TG, vtcr->tg) |
>  			  FIELD_PREP(STRTAB_STE_2_VTCR_S2PS, vtcr->ps);
> +
> +	printk(XENLOG_DEBUG
> +		   "SMMUv3: d%u: vmid 0x%x vtcr 0x%"PRIpaddr" p2maddr 0x%"PRIpaddr"\n",
> +		   cfg->domain->domain_id, cfg->vmid, cfg->vtcr, cfg->vttbr);
> +
>  	return 0;
>  }
>  
> @@ -1653,6 +1932,7 @@ static void arm_smmu_install_ste_for_dev(struct arm_smmu_master *master)
>  	}
>  }
>  
> +#ifdef CONFIG_PCI_ATS
>  static bool arm_smmu_ats_supported(struct arm_smmu_master *master)
>  {
>  	struct device *dev = master->dev;
> @@ -1751,6 +2031,23 @@ static void arm_smmu_disable_pasid(struct arm_smmu_master *master)
>  
>  	pci_disable_pasid(pdev);
>  }
> +#else
> +static inline bool arm_smmu_ats_supported(struct arm_smmu_master *master)
> +{
> +	return false;
> +}
> +
> +static inline void arm_smmu_enable_ats(struct arm_smmu_master *master) { }
> +
> +static inline void arm_smmu_disable_ats(struct arm_smmu_master *master) { }
> +
> +static inline int arm_smmu_enable_pasid(struct arm_smmu_master *master)
> +{
> +	return 0;
> +}
> +
> +static inline void arm_smmu_disable_pasid(struct arm_smmu_master *master) { }
> +#endif
>  
>  static void arm_smmu_detach_dev(struct arm_smmu_master *master)
>  {
> @@ -1788,8 +2085,6 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
>  
>  	arm_smmu_detach_dev(master);
>  
> -	mutex_lock(&smmu_domain->init_mutex);
> -
>  	if (!smmu_domain->smmu) {
>  		smmu_domain->smmu = smmu;
>  		ret = arm_smmu_domain_finalise(domain, master);
> @@ -1820,7 +2115,6 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
>  	arm_smmu_enable_ats(master);
>  
>  out_unlock:
> -	mutex_unlock(&smmu_domain->init_mutex);
>  	return ret;
>  }
>  
> @@ -1833,8 +2127,10 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
>  
>  	return sid < limit;
>  }
> +/* Forward declaration */
> +static struct arm_smmu_device *arm_smmu_get_by_dev(struct device *dev);
>  
> -static struct iommu_device *arm_smmu_probe_device(struct device *dev)
> +static int arm_smmu_add_device(u8 devfn, struct device *dev)
>  {
>  	int i, ret;
>  	struct arm_smmu_device *smmu;
> @@ -1842,14 +2138,15 @@ static struct iommu_device *arm_smmu_probe_device(struct device *dev)
>  	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
>  
>  	if (!fwspec)
> -		return ERR_PTR(-ENODEV);
> +		return -ENODEV;
>  
> -	if (WARN_ON_ONCE(dev_iommu_priv_get(dev)))
> -		return ERR_PTR(-EBUSY);
> +	smmu = arm_smmu_get_by_dev(fwspec->iommu_dev);
> +	if (!smmu)
> +		return -ENODEV;
>  
>  	master = kzalloc(sizeof(*master), GFP_KERNEL);
>  	if (!master)
> -		return ERR_PTR(-ENOMEM);
> +		return -ENOMEM;
>  
>  	master->dev = dev;
>  	master->smmu = smmu;
> @@ -1884,17 +2181,36 @@ static struct iommu_device *arm_smmu_probe_device(struct device *dev)
>  	 */
>  	arm_smmu_enable_pasid(master);
>  
> -	return &smmu->iommu;
> +	return 0;
>  
>  err_free_master:
>  	kfree(master);
>  	dev_iommu_priv_set(dev, NULL);
> -	return ERR_PTR(ret);
> +	return ret;
>  }
>  
> -static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args)
> +static int arm_smmu_dt_xlate(struct device *dev,
> +				const struct dt_phandle_args *args)
>  {
> -	return iommu_fwspec_add_ids(dev, args->args, 1);
> +	int ret;
> +	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> +
> +	ret = iommu_fwspec_add_ids(dev, args->args, 1);
> +	if (ret)
> +		return ret;
> +
> +	if (dt_device_is_protected(dev_to_dt(dev))) {
> +		dev_err(dev, "Already added to SMMUv3\n");
> +		return -EEXIST;
> +	}
> +
> +	/* Let Xen know that the master device is protected by an IOMMU. */
> +	dt_device_set_protected(dev_to_dt(dev));
> +
> +	dev_info(dev, "Added master device (SMMUv3 %s StreamIds %u)\n",
> +			dev_name(fwspec->iommu_dev), fwspec->num_ids);
> +
> +	return 0;
>  }
>  
>  /* Probing and initialisation functions */
> @@ -1923,8 +2239,8 @@ static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
>  		return -ENOMEM;
>  	}
>  
> -	if (!WARN_ON(q->base_dma & (qsz - 1))) {
> -		dev_info(smmu->dev, "allocated %u entries for %s\n",
> +	if (unlikely(q->base_dma & (qsz - 1))) {
> +		dev_warn(smmu->dev, "allocated %u entries for %s\n",
>  			 1 << q->llq.max_n_shift, name);
>  	}
>  
> @@ -2121,6 +2437,7 @@ static int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr)
>  	return ret;
>  }
>  
> +#ifdef CONFIF_MSI

Typo: this should be CONFIG_MSI (s/CONFIF/CONFIG/). As written, the guard never matches a defined symbol.


>  static void arm_smmu_free_msis(void *data)
>  {
>  	struct device *dev = data;
> @@ -2191,6 +2508,9 @@ static void arm_smmu_setup_msis(struct arm_smmu_device *smmu)
>  	/* Add callback to free MSIs on teardown */
>  	devm_add_action(dev, arm_smmu_free_msis, dev);
>  }
> +#else
> +static inline void arm_smmu_setup_msis(struct arm_smmu_device *smmu) { }
> +#endif /* CONFIG_MSI */
>  
>  static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
>  {
> @@ -2201,9 +2521,7 @@ static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
>  	/* Request interrupt lines */
>  	irq = smmu->evtq.q.irq;
>  	if (irq) {
> -		ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
> -						arm_smmu_evtq_thread,
> -						IRQF_ONESHOT,
> +		ret = request_irq(irq, 0, arm_smmu_evtq_irq_tasklet,
>  						"arm-smmu-v3-evtq", smmu);
>  		if (ret < 0)
>  			dev_warn(smmu->dev, "failed to enable evtq irq\n");
> @@ -2213,8 +2531,8 @@ static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
>  
>  	irq = smmu->gerr_irq;
>  	if (irq) {
> -		ret = devm_request_irq(smmu->dev, irq, arm_smmu_gerror_handler,
> -				       0, "arm-smmu-v3-gerror", smmu);
> +		ret = request_irq(irq, 0, arm_smmu_gerror_handler,
> +						"arm-smmu-v3-gerror", smmu);
>  		if (ret < 0)
>  			dev_warn(smmu->dev, "failed to enable gerror irq\n");
>  	} else {
> @@ -2224,11 +2542,8 @@ static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
>  	if (smmu->features & ARM_SMMU_FEAT_PRI) {
>  		irq = smmu->priq.q.irq;
>  		if (irq) {
> -			ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
> -							arm_smmu_priq_thread,
> -							IRQF_ONESHOT,
> -							"arm-smmu-v3-priq",
> -							smmu);
> +			ret = request_irq(irq, 0, arm_smmu_priq_irq_tasklet,
> +							"arm-smmu-v3-priq", smmu);
>  			if (ret < 0)
>  				dev_warn(smmu->dev,
>  					 "failed to enable priq irq\n");
> @@ -2257,11 +2572,8 @@ static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
>  		 * Cavium ThunderX2 implementation doesn't support unique irq
>  		 * lines. Use a single irq line for all the SMMUv3 interrupts.
>  		 */
> -		ret = devm_request_threaded_irq(smmu->dev, irq,
> -					arm_smmu_combined_irq_handler,
> -					arm_smmu_combined_irq_thread,
> -					IRQF_ONESHOT,
> -					"arm-smmu-v3-combined-irq", smmu);
> +		ret = request_irq(irq, 0, arm_smmu_combined_irq_handler,
> +						"arm-smmu-v3-combined-irq", smmu);
>  		if (ret < 0)
>  			dev_warn(smmu->dev, "failed to enable combined irq\n");
>  	} else
> @@ -2290,7 +2602,7 @@ static int arm_smmu_device_disable(struct arm_smmu_device *smmu)
>  	return ret;
>  }
>  
> -static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
> +static int arm_smmu_device_reset(struct arm_smmu_device *smmu)
>  {
>  	int ret;
>  	u32 reg, enables;
> @@ -2300,7 +2612,7 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
>  	reg = readl_relaxed(smmu->base + ARM_SMMU_CR0);
>  	if (reg & CR0_SMMUEN) {
>  		dev_warn(smmu->dev, "SMMU currently enabled! Resetting...\n");
> -		WARN_ON(is_kdump_kernel() && !disable_bypass);
> +		WARN_ON(!disable_bypass);
>  		arm_smmu_update_gbpa(smmu, GBPA_ABORT, 0);
>  	}
>  
> @@ -2404,11 +2716,14 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
>  		return ret;
>  	}
>  
> -	if (is_kdump_kernel())
> -		enables &= ~(CR0_EVTQEN | CR0_PRIQEN);
> +	/* Initialize tasklets for threaded IRQs*/
> +	tasklet_init(&smmu->evtq_irq_tasklet, arm_smmu_evtq_tasklet, smmu);
> +	tasklet_init(&smmu->priq_irq_tasklet, arm_smmu_priq_tasklet, smmu);
> +	tasklet_init(&smmu->combined_irq_tasklet, arm_smmu_combined_irq_tasklet,
> +				 smmu);
>  
>  	/* Enable the SMMU interface, or ensure bypass */
> -	if (!bypass || disable_bypass) {
> +	if (disable_bypass) {
>  		enables |= CR0_SMMUEN;
>  	} else {
>  		ret = arm_smmu_update_gbpa(smmu, 0, GBPA_ABORT);
> @@ -2473,8 +2788,10 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>  	if (reg & IDR0_SEV)
>  		smmu->features |= ARM_SMMU_FEAT_SEV;
>  
> +#ifdef CONFIF_MSI

Same typo here: should be CONFIG_MSI.


>  	if (reg & IDR0_MSI)
>  		smmu->features |= ARM_SMMU_FEAT_MSI;
> +#endif
>  
>  	if (reg & IDR0_HYP)
>  		smmu->features |= ARM_SMMU_FEAT_HYP;
> @@ -2499,7 +2816,7 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>  		smmu->features |= ARM_SMMU_FEAT_TRANS_S2;
>  
>  	if (!(reg & IDR0_S2P)) {
> -		dev_err(smmu->dev, "no translation support!\n");
> +		dev_err(smmu->dev, "no stage-2 translation support!\n");
>  		return -ENXIO;
>  	}
>  
> @@ -2648,7 +2965,7 @@ static inline int arm_smmu_device_acpi_probe(struct platform_device *pdev,
>  static int arm_smmu_device_dt_probe(struct platform_device *pdev,
>  				    struct arm_smmu_device *smmu)
>  {
> -	struct device *dev = &pdev->dev;
> +	struct device *dev = pdev;
>  	u32 cells;
>  	int ret = -EINVAL;
>  
> @@ -2661,7 +2978,7 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev,
>  
>  	parse_driver_options(smmu);
>  
> -	if (of_dma_is_coherent(dev->of_node))
> +	if (dt_get_property(dev->of_node, "dma-coherent", NULL))
>  		smmu->features |= ARM_SMMU_FEAT_COHERENCY;
>  
>  	return ret;
> @@ -2675,63 +2992,49 @@ static unsigned long arm_smmu_resource_size(struct arm_smmu_device *smmu)
>  		return SZ_128K;
>  }
>  
> -static void __iomem *arm_smmu_ioremap(struct device *dev, resource_size_t start,
> -				      resource_size_t size)
> -{
> -	struct resource res = {
> -		.flags = IORESOURCE_MEM,
> -		.start = start,
> -		.end = start + size - 1,
> -	};
> -
> -	return devm_ioremap_resource(dev, &res);
> -}
> -
>  static int arm_smmu_device_probe(struct platform_device *pdev)
>  {
>  	int irq, ret;
> -	struct resource *res;
> -	resource_size_t ioaddr;
> +	paddr_t ioaddr, iosize;
>  	struct arm_smmu_device *smmu;
> -	struct device *dev = &pdev->dev;
> -	bool bypass;
>  
> -	smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL);
> +	smmu = devm_kzalloc(pdev, sizeof(*smmu), GFP_KERNEL);
>  	if (!smmu) {
> -		dev_err(dev, "failed to allocate arm_smmu_device\n");
> +		dev_err(pdev, "failed to allocate arm_smmu_device\n");
>  		return -ENOMEM;
>  	}
> -	smmu->dev = dev;
> +	smmu->dev = pdev;
>  
> -	if (dev->of_node) {
> +	if (pdev->of_node) {
>  		ret = arm_smmu_device_dt_probe(pdev, smmu);
> +		if (ret)
> +			return -EINVAL;
>  	} else {
>  		ret = arm_smmu_device_acpi_probe(pdev, smmu);
>  		if (ret == -ENODEV)
>  			return ret;
>  	}
>  
> -	/* Set bypass mode according to firmware probing result */
> -	bypass = !!ret;
> -
>  	/* Base address */
> -	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> -	if (resource_size(res) < arm_smmu_resource_size(smmu)) {
> -		dev_err(dev, "MMIO region too small (%pr)\n", res);
> +	ret = dt_device_get_address(dev_to_dt(pdev), 0, &ioaddr, &iosize);
> +	if (ret)
> +		return -ENODEV;
> +
> +	if (iosize < arm_smmu_resource_size(smmu)) {
> +		dev_err(pdev, "MMIO region too small (%lx)\n", iosize);
>  		return -EINVAL;
>  	}
> -	ioaddr = res->start;
>  
>  	/*
>  	 * Don't map the IMPLEMENTATION DEFINED regions, since they may contain
>  	 * the PMCG registers which are reserved by the PMU driver.
>  	 */

Which PMU driver? In Linux this comment refers to the SMMUv3 PMCG perf driver (drivers/perf/arm_smmuv3_pmu.c); Xen has no equivalent, so the comment reads as stale here.


> -	smmu->base = arm_smmu_ioremap(dev, ioaddr, ARM_SMMU_REG_SZ);
> +	smmu->base = ioremap_nocache(ioaddr, ARM_SMMU_REG_SZ);
>  	if (IS_ERR(smmu->base))
>  		return PTR_ERR(smmu->base);
>  
> -	if (arm_smmu_resource_size(smmu) > SZ_64K) {
> -		smmu->page1 = arm_smmu_ioremap(dev, ioaddr + SZ_64K,
> +	if (iosize > SZ_64K) {
> +		smmu->page1 = ioremap_nocache(ioaddr + SZ_64K,
>  					       ARM_SMMU_REG_SZ);
>  		if (IS_ERR(smmu->page1))
>  			return PTR_ERR(smmu->page1);
> @@ -2768,14 +3071,262 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
>  		return ret;
>  
>  	/* Reset the device */
> -	ret = arm_smmu_device_reset(smmu, bypass);
> +	ret = arm_smmu_device_reset(smmu);
>  	if (ret)
>  		return ret;
>  
> +	/*
> +	 * Keep a list of all probed devices. This will be used to query
> +	 * the smmu devices based on the fwnode.
> +	 */
> +	INIT_LIST_HEAD(&smmu->devices);
> +
> +	spin_lock(&arm_smmu_devices_lock);
> +	list_add(&smmu->devices, &arm_smmu_devices);
> +	spin_unlock(&arm_smmu_devices_lock);
> +
>  	return 0;
>  }
>  
> -static const struct of_device_id arm_smmu_of_match[] = {
> +static const struct dt_device_match arm_smmu_of_match[] = {
>  	{ .compatible = "arm,smmu-v3", },
>  	{ },
>  };
> +
> +/* Start of Xen specific code. */
> +static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
> +{
> +	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
> +	struct iommu_domain *io_domain;
> +
> +	spin_lock(&xen_domain->lock);
> +
> +	list_for_each_entry(io_domain, &xen_domain->contexts, list) {
> +		/*
> +		 * Only invalidate the context when SMMU is present.
> +		 * This is because the context initialization is delayed
> +		 * until a master has been added.
> +		 */
> +		if (unlikely(!ACCESS_ONCE(to_smmu_domain(io_domain)->smmu)))
> +			continue;
> +
> +		arm_smmu_tlb_inv_context(to_smmu_domain(io_domain));
> +	}
> +
> +	spin_unlock(&xen_domain->lock);
> +
> +	return 0;
> +}
> +
> +static int __must_check arm_smmu_iotlb_flush(struct domain *d, dfn_t dfn,
> +				unsigned long page_count, unsigned int flush_flags)
> +{
> +	return arm_smmu_iotlb_flush_all(d);
> +}
> +
> +static struct arm_smmu_device *arm_smmu_get_by_dev(struct device *dev)
> +{
> +	struct arm_smmu_device *smmu = NULL;
> +
> +	spin_lock(&arm_smmu_devices_lock);
> +
> +	list_for_each_entry(smmu, &arm_smmu_devices, devices) {
> +		if (smmu->dev  == dev) {
> +			spin_unlock(&arm_smmu_devices_lock);
> +			return smmu;
> +		}
> +	}
> +
> +	spin_unlock(&arm_smmu_devices_lock);
> +
> +	return NULL;
> +}
> +
> +static struct iommu_domain *arm_smmu_get_domain(struct domain *d,
> +				struct device *dev)
> +{
> +	struct iommu_domain *io_domain;
> +	struct arm_smmu_domain *smmu_domain;
> +	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> +	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
> +	struct arm_smmu_device *smmu = arm_smmu_get_by_dev(fwspec->iommu_dev);
> +
> +	if (!smmu)
> +		return NULL;
> +
> +	/*
> +	 * Loop through the &xen_domain->contexts to locate a context
> +	 * assigned to this SMMU
> +	 */
> +	list_for_each_entry(io_domain, &xen_domain->contexts, list) {
> +		smmu_domain = to_smmu_domain(io_domain);
> +		if (smmu_domain->smmu == smmu)
> +			return io_domain;
> +	}
> +	return NULL;
> +}
> +
> +static void arm_smmu_destroy_iommu_domain(struct iommu_domain *io_domain)
> +{
> +	list_del(&io_domain->list);
> +	arm_smmu_domain_free(io_domain);
> +}
> +
> +static int arm_smmu_assign_dev(struct domain *d, u8 devfn,
> +		struct device *dev, u32 flag)
> +{
> +	int ret = 0;
> +	struct iommu_domain *io_domain;
> +	struct arm_smmu_domain *smmu_domain;
> +	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
> +
> +	spin_lock(&xen_domain->lock);
> +
> +	/*
> +	 * Check to see if an iommu_domain already exists for this xen domain
> +	 * under the same SMMU
> +	 */
> +	io_domain = arm_smmu_get_domain(d, dev);
> +	if (!io_domain) {
> +		io_domain = arm_smmu_domain_alloc();
> +		if (!io_domain) {
> +			ret = -ENOMEM;
> +			goto out;
> +		}
> +		smmu_domain = to_smmu_domain(io_domain);
> +		smmu_domain->s2_cfg.domain = d;
> +
> +		/* Chain the new context to the domain */
> +		list_add(&io_domain->list, &xen_domain->contexts);
> +	}
> +
> +	ret = arm_smmu_attach_dev(io_domain, dev);
> +	if (ret) {
> +		if (io_domain->ref.counter == 0)
> +			arm_smmu_destroy_iommu_domain(io_domain);
> +	} else {
> +		atomic_inc(&io_domain->ref);
> +	}
> +
> +out:
> +	spin_unlock(&xen_domain->lock);
> +	return ret;
> +}
> +
> +static int arm_smmu_deassign_dev(struct domain *d, struct device *dev)
> +{
> +	struct iommu_domain *io_domain = arm_smmu_get_domain(d, dev);
> +	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
> +	struct arm_smmu_domain *arm_smmu = to_smmu_domain(io_domain);
> +	struct arm_smmu_master *master = dev_iommu_priv_get(dev);
> +
> +	if (!arm_smmu || arm_smmu->s2_cfg.domain != d) {
> +		dev_err(dev, " not attached to domain %d\n", d->domain_id);
> +		return -ESRCH;
> +	}
> +
> +	spin_lock(&xen_domain->lock);
> +
> +	arm_smmu_detach_dev(master);
> +	atomic_dec(&io_domain->ref);
> +
> +	if (io_domain->ref.counter == 0)
> +		arm_smmu_destroy_iommu_domain(io_domain);
> +
> +	spin_unlock(&xen_domain->lock);
> +
> +	return 0;
> +}
> +
> +static int arm_smmu_reassign_dev(struct domain *s, struct domain *t,
> +				u8 devfn,  struct device *dev)
> +{
> +	int ret = 0;
> +
> +	/* Don't allow remapping on other domain than hwdom */
> +	if (t && t != hardware_domain)
> +		return -EPERM;
> +
> +	if (t == s)
> +		return 0;
> +
> +	ret = arm_smmu_deassign_dev(s, dev);
> +	if (ret)
> +		return ret;
> +
> +	if (t) {
> +		/* No flags are defined for ARM. */
> +		ret = arm_smmu_assign_dev(t, devfn, dev, 0);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +static int arm_smmu_iommu_xen_domain_init(struct domain *d)
> +{
> +	struct arm_smmu_xen_domain *xen_domain;
> +
> +	xen_domain = xzalloc(struct arm_smmu_xen_domain);
> +	if (!xen_domain)
> +		return -ENOMEM;
> +
> +	spin_lock_init(&xen_domain->lock);
> +	INIT_LIST_HEAD(&xen_domain->contexts);
> +
> +	dom_iommu(d)->arch.priv = xen_domain;
> +	return 0;
> +
> +}
> +
> +static void __hwdom_init arm_smmu_iommu_hwdom_init(struct domain *d)
> +{
> +}
> +
> +static void arm_smmu_iommu_xen_domain_teardown(struct domain *d)
> +{
> +	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
> +
> +	ASSERT(list_empty(&xen_domain->contexts));
> +	xfree(xen_domain);
> +}
> +
> +static const struct iommu_ops arm_smmu_iommu_ops = {
> +	.init		= arm_smmu_iommu_xen_domain_init,
> +	.hwdom_init		= arm_smmu_iommu_hwdom_init,
> +	.teardown		= arm_smmu_iommu_xen_domain_teardown,
> +	.iotlb_flush		= arm_smmu_iotlb_flush,
> +	.iotlb_flush_all	= arm_smmu_iotlb_flush_all,
> +	.assign_device		= arm_smmu_assign_dev,
> +	.reassign_device	= arm_smmu_reassign_dev,
> +	.map_page		= arm_iommu_map_page,
> +	.unmap_page		= arm_iommu_unmap_page,
> +	.dt_xlate		= arm_smmu_dt_xlate,
> +	.add_device		= arm_smmu_add_device,
> +};
> +
> +static __init int arm_smmu_dt_init(struct dt_device_node *dev,
> +				const void *data)
> +{
> +	int rc;
> +
> +	/*
> +	 * Even if the device can't be initialized, we don't want to
> +	 * give the SMMU device to dom0.
> +	 */
> +	dt_device_set_used_by(dev, DOMID_XEN);
> +
> +	rc = arm_smmu_device_probe(dt_to_dev(dev));
> +	if (rc)
> +		return rc;
> +
> +	iommu_set_ops(&arm_smmu_iommu_ops);
> +
> +	return 0;
> +}
> +
> +DT_DEVICE_START(smmuv3, "ARM SMMU V3", DEVICE_IOMMU)
> +.dt_match = arm_smmu_of_match,
> +.init = arm_smmu_dt_init,
> +DT_DEVICE_END
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 01:29:07 2020
Date: Thu, 10 Dec 2020 17:29:03 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 8/8] xen/arm: smmuv3: Remove linux compatibility
 functions.
In-Reply-To: <c38df3122a9e74e2324936c8bd36d372cdc3009a.1607617848.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2012101622570.6285@sstabellini-ThinkPad-T480s>
References: <cover.1607617848.git.rahul.singh@arm.com> <c38df3122a9e74e2324936c8bd36d372cdc3009a.1607617848.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 10 Dec 2020, Rahul Singh wrote:
> Replace all Linux-compatible device tree handling functions with the Xen
> functions.
> 
> Replace all Linux ktime function with the XEN time functions.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Changes in v3:
>  - This patch is introduced in this version.
> 
> ---
>  xen/drivers/passthrough/arm/smmu-v3.c | 32 +++++++--------------------
>  1 file changed, 8 insertions(+), 24 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> index 65b3db94ad..c19c56ebc8 100644
> --- a/xen/drivers/passthrough/arm/smmu-v3.c
> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> @@ -101,22 +101,6 @@ typedef unsigned int		gfp_t;
>  
>  #define GFP_KERNEL		0
>  
> -/* Alias to Xen device tree helpers */
> -#define device_node			dt_device_node
> -#define of_phandle_args		dt_phandle_args
> -#define of_device_id		dt_device_match
> -#define of_match_node		dt_match_node
> -#define of_property_read_u32(np, pname, out)	\
> -		(!dt_property_read_u32(np, pname, out))
> -#define of_property_read_bool		dt_property_read_bool
> -#define of_parse_phandle_with_args	dt_parse_phandle_with_args
> -
> -/* Alias to Xen time functions */
> -#define ktime_t s_time_t
> -#define ktime_get()			(NOW())
> -#define ktime_add_us(t, i)		(t + MICROSECS(i))
> -#define ktime_compare(t, i)		(t > (i))
> -
>  /* Alias to Xen allocation helpers */
>  #define kzalloc(size, flags)	_xzalloc(size, sizeof(void *))
>  #define kfree	xfree
> @@ -922,7 +906,7 @@ static void parse_driver_options(struct arm_smmu_device *smmu)
>  	int i = 0;
>  
>  	do {
> -		if (of_property_read_bool(smmu->dev->of_node,
> +		if (dt_property_read_bool(smmu->dev->of_node,
>  						arm_smmu_options[i].prop)) {
>  			smmu->options |= arm_smmu_options[i].opt;
>  			dev_notice(smmu->dev, "option %s\n",
> @@ -994,17 +978,17 @@ static void queue_inc_prod(struct arm_smmu_ll_queue *q)
>   */
>  static int queue_poll_cons(struct arm_smmu_queue *q, bool sync, bool wfe)
>  {
> -	ktime_t timeout;
> +	s_time_t timeout;
>  	unsigned int delay = 1, spin_cnt = 0;
>  
>  	/* Wait longer if it's a CMD_SYNC */
> -	timeout = ktime_add_us(ktime_get(), sync ?
> +	timeout = NOW() + MICROSECS(sync ?
>  					    ARM_SMMU_CMDQ_SYNC_TIMEOUT_US :
>  					    ARM_SMMU_POLL_TIMEOUT_US);
>  
>  	while (queue_sync_cons_in(q),
>  	      (sync ? !queue_empty(&q->llq) : queue_full(&q->llq))) {
> -		if (ktime_compare(ktime_get(), timeout) > 0)
> +		if ((NOW() > timeout) > 0)
>  			return -ETIMEDOUT;
>  
>  		if (wfe) {
> @@ -1232,13 +1216,13 @@ static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
>   */
>  static int __arm_smmu_sync_poll_msi(struct arm_smmu_device *smmu, u32 sync_idx)
>  {
> -	ktime_t timeout;
> +	s_time_t timeout;
>  	u32 val;
>  
> -	timeout = ktime_add_us(ktime_get(), ARM_SMMU_CMDQ_SYNC_TIMEOUT_US);
> +	timeout = NOW() + MICROSECS(ARM_SMMU_CMDQ_SYNC_TIMEOUT_US);
>  	val = smp_cond_load_acquire(&smmu->sync_count,
>  				    (int)(VAL - sync_idx) >= 0 ||
> -				    !ktime_before(ktime_get(), timeout));
> +				    !(NOW() < timeout));
>  
>  	return (int)(val - sync_idx) < 0 ? -ETIMEDOUT : 0;
>  }
> @@ -2969,7 +2953,7 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev,
>  	u32 cells;
>  	int ret = -EINVAL;
>  
> -	if (of_property_read_u32(dev->of_node, "#iommu-cells", &cells))
> +	if (!dt_property_read_u32(dev->of_node, "#iommu-cells", &cells))
>  		dev_err(dev, "missing #iommu-cells property\n");
>  	else if (cells != 1)
>  		dev_err(dev, "invalid #iommu-cells value (%d)\n", cells);
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 01:29:56 2020
Date: Thu, 10 Dec 2020 17:29:50 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 6/8] xen/arm: Remove Linux specific code that is not
 usable in XEN
In-Reply-To: <91b9845a03068d92aeaaa86fa67d4d06b2824652.1607617848.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2012101555100.6285@sstabellini-ThinkPad-T480s>
References: <cover.1607617848.git.rahul.singh@arm.com> <91b9845a03068d92aeaaa86fa67d4d06b2824652.1607617848.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 10 Dec 2020, Rahul Singh wrote:
> Remove code that is related to the functionality below:
>  1. struct io_pgtable_ops
>  2. struct io_pgtable_cfg
>  3. struct iommu_flush_ops,
>  4. struct iommu_ops
>  5. module_param_named, MODULE_PARM_DESC, module_platform_driver,
>     MODULE_*
>  6. IOMMU domain-types
>  7. arm_smmu_set_bus_ops
>  8. iommu_device_sysfs_add, iommu_device_register,
>     iommu_device_set_fwnode
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Changes in v3:
> - Commit message is updated to add more detail what is removed in this
>   patch.
> - remove instances of io_pgtable_cfg.
> - Added back ARM_SMMU_FEAT_COHERENCY feature.
> 
> ---
>  xen/drivers/passthrough/arm/smmu-v3.c | 475 ++------------------------
>  1 file changed, 21 insertions(+), 454 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> index 0f16c63c49..2966015e5d 100644
> --- a/xen/drivers/passthrough/arm/smmu-v3.c
> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> @@ -394,13 +394,7 @@
>  #define ARM_SMMU_CMDQ_SYNC_TIMEOUT_US	1000000 /* 1s! */
>  #define ARM_SMMU_CMDQ_SYNC_SPIN_COUNT	10
>  
> -#define MSI_IOVA_BASE			0x8000000
> -#define MSI_IOVA_LENGTH			0x100000
> -
>  static bool disable_bypass = 1;
> -module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO);
> -MODULE_PARM_DESC(disable_bypass,
> -	"Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");
>  
>  enum pri_resp {
>  	PRI_RESP_DENY = 0,
> @@ -552,6 +546,19 @@ struct arm_smmu_strtab_cfg {
>  	u32				strtab_base_cfg;
>  };
>  
> +struct arm_lpae_s2_cfg {
> +	u64			vttbr;
> +	struct {
> +		u32			ps:3;
> +		u32			tg:2;
> +		u32			sh:2;
> +		u32			orgn:2;
> +		u32			irgn:2;
> +		u32			sl:2;
> +		u32			tsz:6;
> +	} vtcr;
> +};
> +
>  /* An SMMUv3 instance */
>  struct arm_smmu_device {
>  	struct device			*dev;
> @@ -633,7 +640,6 @@ struct arm_smmu_domain {
>  	struct arm_smmu_device		*smmu;
>  	struct mutex			init_mutex; /* Protects smmu pointer */
>  
> -	struct io_pgtable_ops		*pgtbl_ops;
>  	bool				non_strict;
>  	atomic_t			nr_ats_masters;
>  
> @@ -1493,7 +1499,6 @@ static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
>  	return ret ? -ETIMEDOUT : 0;
>  }
>  
> -/* IO_PGTABLE API */
>  static void arm_smmu_tlb_inv_context(void *cookie)
>  {
>  	struct arm_smmu_domain *smmu_domain = cookie;
> @@ -1514,86 +1519,10 @@ static void arm_smmu_tlb_inv_context(void *cookie)
>  	arm_smmu_cmdq_issue_sync(smmu);
>  }
>  
> -static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
> -					  size_t granule, bool leaf, void *cookie)
> -{
> -	struct arm_smmu_domain *smmu_domain = cookie;
> -	struct arm_smmu_device *smmu = smmu_domain->smmu;
> -	struct arm_smmu_cmdq_ent cmd = {
> -		.tlbi = {
> -			.leaf	= leaf,
> -			.addr	= iova,
> -		},
> -	};
> -
> -	if (!size)
> -		return;
> -
> -	cmd.opcode	= CMDQ_OP_TLBI_S2_IPA;
> -	cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
> -
> -	do {
> -		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> -		cmd.tlbi.addr += granule;
> -	} while (size -= granule);
> -}
> -
> -static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather,
> -					 unsigned long iova, size_t granule,
> -					 void *cookie)
> -{
> -	arm_smmu_tlb_inv_range_nosync(iova, granule, granule, true, cookie);
> -}
> -
> -static void arm_smmu_tlb_inv_walk(unsigned long iova, size_t size,
> -				  size_t granule, void *cookie)
> -{
> -	struct arm_smmu_domain *smmu_domain = cookie;
> -	struct arm_smmu_device *smmu = smmu_domain->smmu;
> -
> -	arm_smmu_tlb_inv_range_nosync(iova, size, granule, false, cookie);
> -	arm_smmu_cmdq_issue_sync(smmu);
> -}
> -
> -static void arm_smmu_tlb_inv_leaf(unsigned long iova, size_t size,
> -				  size_t granule, void *cookie)
> -{
> -	struct arm_smmu_domain *smmu_domain = cookie;
> -	struct arm_smmu_device *smmu = smmu_domain->smmu;
> -
> -	arm_smmu_tlb_inv_range_nosync(iova, size, granule, true, cookie);
> -	arm_smmu_cmdq_issue_sync(smmu);
> -}
> -
> -static const struct iommu_flush_ops arm_smmu_flush_ops = {
> -	.tlb_flush_all	= arm_smmu_tlb_inv_context,
> -	.tlb_flush_walk = arm_smmu_tlb_inv_walk,
> -	.tlb_flush_leaf = arm_smmu_tlb_inv_leaf,
> -	.tlb_add_page	= arm_smmu_tlb_inv_page_nosync,
> -};
> -
> -/* IOMMU API */
> -static bool arm_smmu_capable(enum iommu_cap cap)
> -{
> -	switch (cap) {
> -	case IOMMU_CAP_CACHE_COHERENCY:
> -		return true;
> -	case IOMMU_CAP_NOEXEC:
> -		return true;
> -	default:
> -		return false;
> -	}
> -}
> -
> -static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
> +static struct iommu_domain *arm_smmu_domain_alloc(void)
>  {
>  	struct arm_smmu_domain *smmu_domain;
>  
> -	if (type != IOMMU_DOMAIN_UNMANAGED &&
> -	    type != IOMMU_DOMAIN_DMA &&
> -	    type != IOMMU_DOMAIN_IDENTITY)
> -		return NULL;
> -
>  	/*
>  	 * Allocate the domain and initialise some of its data structures.
>  	 * We can't really do anything meaningful until we've added a
> @@ -1603,12 +1532,6 @@ static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
>  	if (!smmu_domain)
>  		return NULL;
>  
> -	if (type == IOMMU_DOMAIN_DMA &&
> -	    iommu_get_dma_cookie(&smmu_domain->domain)) {
> -		kfree(smmu_domain);
> -		return NULL;
> -	}
> -
>  	mutex_init(&smmu_domain->init_mutex);
>  	INIT_LIST_HEAD(&smmu_domain->devices);
>  	spin_lock_init(&smmu_domain->devices_lock);
> @@ -1640,9 +1563,6 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
>  	struct arm_smmu_device *smmu = smmu_domain->smmu;
>  	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
>  
> -	iommu_put_dma_cookie(domain);
> -	free_io_pgtable_ops(smmu_domain->pgtbl_ops);
> -
>  	if (cfg->vmid)
>  		arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid);
>  
> @@ -1651,21 +1571,20 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
>  
>  
>  static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
> -				       struct arm_smmu_master *master,
> -				       struct io_pgtable_cfg *pgtbl_cfg)
> +				       struct arm_smmu_master *master)
>  {
>  	int vmid;
> +	struct arm_lpae_s2_cfg arm_lpae_s2_cfg;
>  	struct arm_smmu_device *smmu = smmu_domain->smmu;
>  	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
> -	typeof(&pgtbl_cfg->arm_lpae_s2_cfg.vtcr) vtcr;
> +	typeof(&arm_lpae_s2_cfg.vtcr) vtcr = &arm_lpae_s2_cfg.vtcr;
>  
>  	vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
>  	if (vmid < 0)
>  		return vmid;
>  
> -	vtcr = &pgtbl_cfg->arm_lpae_s2_cfg.vtcr;
>  	cfg->vmid	= (u16)vmid;
> -	cfg->vttbr	= pgtbl_cfg->arm_lpae_s2_cfg.vttbr;
> +	cfg->vttbr	= arm_lpae_s2_cfg.vttbr;
>  	cfg->vtcr	= FIELD_PREP(STRTAB_STE_2_VTCR_S2T0SZ, vtcr->tsz) |
>  			  FIELD_PREP(STRTAB_STE_2_VTCR_S2SL0, vtcr->sl) |
>  			  FIELD_PREP(STRTAB_STE_2_VTCR_S2IR0, vtcr->irgn) |
> @@ -1680,49 +1599,15 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
>  				    struct arm_smmu_master *master)
>  {
>  	int ret;
> -	unsigned long ias, oas;
> -	enum io_pgtable_fmt fmt;
> -	struct io_pgtable_cfg pgtbl_cfg;
> -	struct io_pgtable_ops *pgtbl_ops;
>  	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> -	struct arm_smmu_device *smmu = smmu_domain->smmu;
> -
> -	if (domain->type == IOMMU_DOMAIN_IDENTITY) {
> -		smmu_domain->stage = ARM_SMMU_DOMAIN_BYPASS;
> -		return 0;
> -	}
>  
>  	/* Restrict the stage to what we can actually support */
>  	smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
>  
> -
> -	pgtbl_cfg = (struct io_pgtable_cfg) {
> -		.pgsize_bitmap	= smmu->pgsize_bitmap,
> -		.ias		= ias,
> -		.oas		= oas,
> -		.coherent_walk	= smmu->features & ARM_SMMU_FEAT_COHERENCY,
> -		.tlb		= &arm_smmu_flush_ops,
> -		.iommu_dev	= smmu->dev,
> -	};
> -
> -	if (smmu_domain->non_strict)
> -		pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
> -
> -	pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain);
> -	if (!pgtbl_ops)
> -		return -ENOMEM;
> -
> -	domain->pgsize_bitmap = pgtbl_cfg.pgsize_bitmap;
> -	domain->geometry.aperture_end = (1UL << pgtbl_cfg.ias) - 1;
> -	domain->geometry.force_aperture = true;
> -
> -	ret = arm_smmu_domain_finalise_s2(smmu_domain, master, &pgtbl_cfg);
> -	if (ret < 0) {
> -		free_io_pgtable_ops(pgtbl_ops);
> +	ret = arm_smmu_domain_finalise_s2(smmu_domain, master);
> +	if (ret < 0)
>  		return ret;
> -	}
>  
> -	smmu_domain->pgtbl_ops = pgtbl_ops;
>  	return 0;
>  }
>  
> @@ -1939,76 +1824,6 @@ out_unlock:
>  	return ret;
>  }
>  
> -static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
> -			phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
> -{
> -	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
> -
> -	if (!ops)
> -		return -ENODEV;
> -
> -	return ops->map(ops, iova, paddr, size, prot);
> -}
> -
> -static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
> -			     size_t size, struct iommu_iotlb_gather *gather)
> -{
> -	int ret;
> -	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> -	struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
> -
> -	if (!ops)
> -		return 0;
> -
> -	ret = ops->unmap(ops, iova, size, gather);
> -	if (ret && arm_smmu_atc_inv_domain(smmu_domain, 0, iova, size))
> -		return 0;
> -
> -	return ret;
> -}
> -
> -static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain)
> -{
> -	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> -
> -	if (smmu_domain->smmu)
> -		arm_smmu_tlb_inv_context(smmu_domain);
> -}
> -
> -static void arm_smmu_iotlb_sync(struct iommu_domain *domain,
> -				struct iommu_iotlb_gather *gather)
> -{
> -	struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu;
> -
> -	if (smmu)
> -		arm_smmu_cmdq_issue_sync(smmu);
> -}
> -
> -static phys_addr_t
> -arm_smmu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
> -{
> -	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
> -
> -	if (domain->type == IOMMU_DOMAIN_IDENTITY)
> -		return iova;
> -
> -	if (!ops)
> -		return 0;
> -
> -	return ops->iova_to_phys(ops, iova);
> -}
> -
> -static struct platform_driver arm_smmu_driver;
> -
> -static
> -struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
> -{
> -	struct device *dev = driver_find_device_by_fwnode(&arm_smmu_driver.driver,
> -							  fwnode);
> -	put_device(dev);
> -	return dev ? dev_get_drvdata(dev) : NULL;
> -}
> -
>  static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
>  {
>  	unsigned long limit = smmu->strtab_cfg.num_l1_ents;
> @@ -2019,8 +1834,6 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
>  	return sid < limit;
>  }
>  
> -static struct iommu_ops arm_smmu_ops;
> -
>  static struct iommu_device *arm_smmu_probe_device(struct device *dev)
>  {
>  	int i, ret;
> @@ -2028,16 +1841,12 @@ static struct iommu_device *arm_smmu_probe_device(struct device *dev)
>  	struct arm_smmu_master *master;
>  	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
>  
> -	if (!fwspec || fwspec->ops != &arm_smmu_ops)
> +	if (!fwspec)
>  		return ERR_PTR(-ENODEV);
>  
>  	if (WARN_ON_ONCE(dev_iommu_priv_get(dev)))
>  		return ERR_PTR(-EBUSY);
>  
> -	smmu = arm_smmu_get_by_fwnode(fwspec->iommu_fwnode);
> -	if (!smmu)
> -		return ERR_PTR(-ENODEV);
> -
>  	master = kzalloc(sizeof(*master), GFP_KERNEL);
>  	if (!master)
>  		return ERR_PTR(-ENOMEM);
> @@ -2083,153 +1892,11 @@ err_free_master:
>  	return ERR_PTR(ret);
>  }
>  
> -static void arm_smmu_release_device(struct device *dev)
> -{
> -	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> -	struct arm_smmu_master *master;
> -
> -	if (!fwspec || fwspec->ops != &arm_smmu_ops)
> -		return;
> -
> -	master = dev_iommu_priv_get(dev);
> -	arm_smmu_detach_dev(master);
> -	arm_smmu_disable_pasid(master);
> -	kfree(master);
> -	iommu_fwspec_free(dev);
> -}
> -
> -static struct iommu_group *arm_smmu_device_group(struct device *dev)
> -{
> -	struct iommu_group *group;
> -
> -	/*
> -	 * We don't support devices sharing stream IDs other than PCI RID
> -	 * aliases, since the necessary ID-to-device lookup becomes rather
> -	 * impractical given a potential sparse 32-bit stream ID space.
> -	 */
> -	if (dev_is_pci(dev))
> -		group = pci_device_group(dev);
> -	else
> -		group = generic_device_group(dev);
> -
> -	return group;
> -}
> -
> -static int arm_smmu_domain_get_attr(struct iommu_domain *domain,
> -				    enum iommu_attr attr, void *data)
> -{
> -	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> -
> -	switch (domain->type) {
> -	case IOMMU_DOMAIN_UNMANAGED:
> -		switch (attr) {
> -		case DOMAIN_ATTR_NESTING:
> -			*(int *)data = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED);
> -			return 0;
> -		default:
> -			return -ENODEV;
> -		}
> -		break;
> -	case IOMMU_DOMAIN_DMA:
> -		switch (attr) {
> -		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
> -			*(int *)data = smmu_domain->non_strict;
> -			return 0;
> -		default:
> -			return -ENODEV;
> -		}
> -		break;
> -	default:
> -		return -EINVAL;
> -	}
> -}
> -
> -static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
> -				    enum iommu_attr attr, void *data)
> -{
> -	int ret = 0;
> -	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> -
> -	mutex_lock(&smmu_domain->init_mutex);
> -
> -	switch (domain->type) {
> -	case IOMMU_DOMAIN_UNMANAGED:
> -		switch (attr) {
> -		case DOMAIN_ATTR_NESTING:
> -			if (smmu_domain->smmu) {
> -				ret = -EPERM;
> -				goto out_unlock;
> -			}
> -
> -			if (*(int *)data)
> -				smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED;
> -			else
> -				smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
> -			break;
> -		default:
> -			ret = -ENODEV;
> -		}
> -		break;
> -	case IOMMU_DOMAIN_DMA:
> -		switch(attr) {
> -		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
> -			smmu_domain->non_strict = *(int *)data;
> -			break;
> -		default:
> -			ret = -ENODEV;
> -		}
> -		break;
> -	default:
> -		ret = -EINVAL;
> -	}
> -
> -out_unlock:
> -	mutex_unlock(&smmu_domain->init_mutex);
> -	return ret;
> -}
> -
>  static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args)
>  {
>  	return iommu_fwspec_add_ids(dev, args->args, 1);
>  }
>  
> -static void arm_smmu_get_resv_regions(struct device *dev,
> -				      struct list_head *head)
> -{
> -	struct iommu_resv_region *region;
> -	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> -
> -	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
> -					 prot, IOMMU_RESV_SW_MSI);
> -	if (!region)
> -		return;
> -
> -	list_add_tail(&region->list, head);
> -
> -	iommu_dma_get_resv_regions(dev, head);
> -}
> -
> -static struct iommu_ops arm_smmu_ops = {
> -	.capable		= arm_smmu_capable,
> -	.domain_alloc		= arm_smmu_domain_alloc,
> -	.domain_free		= arm_smmu_domain_free,
> -	.attach_dev		= arm_smmu_attach_dev,
> -	.map			= arm_smmu_map,
> -	.unmap			= arm_smmu_unmap,
> -	.flush_iotlb_all	= arm_smmu_flush_iotlb_all,
> -	.iotlb_sync		= arm_smmu_iotlb_sync,
> -	.iova_to_phys		= arm_smmu_iova_to_phys,
> -	.probe_device		= arm_smmu_probe_device,
> -	.release_device		= arm_smmu_release_device,
> -	.device_group		= arm_smmu_device_group,
> -	.domain_get_attr	= arm_smmu_domain_get_attr,
> -	.domain_set_attr	= arm_smmu_domain_set_attr,
> -	.of_xlate		= arm_smmu_of_xlate,
> -	.get_resv_regions	= arm_smmu_get_resv_regions,
> -	.put_resv_regions	= generic_iommu_put_resv_regions,
> -	.pgsize_bitmap		= -1UL, /* Restricted during device attach */
> -};
> -
>  /* Probing and initialisation functions */
>  static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
>  				   struct arm_smmu_queue *q,
> @@ -2929,16 +2596,6 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>  		smmu->oas = 48;
>  	}
>  
> -	if (arm_smmu_ops.pgsize_bitmap == -1UL)
> -		arm_smmu_ops.pgsize_bitmap = smmu->pgsize_bitmap;
> -	else
> -		arm_smmu_ops.pgsize_bitmap |= smmu->pgsize_bitmap;
> -
> -	/* Set the DMA mask for our table walker */
> -	if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(smmu->oas)))
> -		dev_warn(smmu->dev,
> -			 "failed to set DMA mask for table walker\n");
> -
>  	smmu->ias = max(smmu->ias, smmu->oas);
>  
>  	dev_info(smmu->dev, "ias %lu-bit, oas %lu-bit (features 0x%08x)\n",
> @@ -3018,43 +2675,6 @@ static unsigned long arm_smmu_resource_size(struct arm_smmu_device *smmu)
>  		return SZ_128K;
>  }
>  
> -static int arm_smmu_set_bus_ops(struct iommu_ops *ops)
> -{
> -	int err;
> -
> -#ifdef CONFIG_PCI
> -	if (pci_bus_type.iommu_ops != ops) {
> -		err = bus_set_iommu(&pci_bus_type, ops);
> -		if (err)
> -			return err;
> -	}
> -#endif
> -#ifdef CONFIG_ARM_AMBA
> -	if (amba_bustype.iommu_ops != ops) {
> -		err = bus_set_iommu(&amba_bustype, ops);
> -		if (err)
> -			goto err_reset_pci_ops;
> -	}
> -#endif
> -	if (platform_bus_type.iommu_ops != ops) {
> -		err = bus_set_iommu(&platform_bus_type, ops);
> -		if (err)
> -			goto err_reset_amba_ops;
> -	}
> -
> -	return 0;
> -
> -err_reset_amba_ops:
> -#ifdef CONFIG_ARM_AMBA
> -	bus_set_iommu(&amba_bustype, NULL);
> -#endif
> -err_reset_pci_ops: __maybe_unused;
> -#ifdef CONFIG_PCI
> -	bus_set_iommu(&pci_bus_type, NULL);
> -#endif
> -	return err;
> -}
> -
>  static void __iomem *arm_smmu_ioremap(struct device *dev, resource_size_t start,
>  				      resource_size_t size)
>  {
> @@ -3147,68 +2767,15 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
>  	if (ret)
>  		return ret;
>  
> -	/* Record our private device structure */
> -	platform_set_drvdata(pdev, smmu);
> -
>  	/* Reset the device */
>  	ret = arm_smmu_device_reset(smmu, bypass);
>  	if (ret)
>  		return ret;
>  
> -	/* And we're up. Go go go! */
> -	ret = iommu_device_sysfs_add(&smmu->iommu, dev, NULL,
> -				     "smmu3.%pa", &ioaddr);
> -	if (ret)
> -		return ret;
> -
> -	iommu_device_set_ops(&smmu->iommu, &arm_smmu_ops);
> -	iommu_device_set_fwnode(&smmu->iommu, dev->fwnode);
> -
> -	ret = iommu_device_register(&smmu->iommu);
> -	if (ret) {
> -		dev_err(dev, "Failed to register iommu\n");
> -		return ret;
> -	}
> -
> -	return arm_smmu_set_bus_ops(&arm_smmu_ops);
> -}
> -
> -static int arm_smmu_device_remove(struct platform_device *pdev)
> -{
> -	struct arm_smmu_device *smmu = platform_get_drvdata(pdev);
> -
> -	arm_smmu_set_bus_ops(NULL);
> -	iommu_device_unregister(&smmu->iommu);
> -	iommu_device_sysfs_remove(&smmu->iommu);
> -	arm_smmu_device_disable(smmu);
> -
>  	return 0;
>  }
>  
> -static void arm_smmu_device_shutdown(struct platform_device *pdev)
> -{
> -	arm_smmu_device_remove(pdev);
> -}
> -
>  static const struct of_device_id arm_smmu_of_match[] = {
>  	{ .compatible = "arm,smmu-v3", },
>  	{ },
>  };
> -MODULE_DEVICE_TABLE(of, arm_smmu_of_match);
> -
> -static struct platform_driver arm_smmu_driver = {
> -	.driver	= {
> -		.name			= "arm-smmu-v3",
> -		.of_match_table		= arm_smmu_of_match,
> -		.suppress_bind_attrs	= true,
> -	},
> -	.probe	= arm_smmu_device_probe,
> -	.remove	= arm_smmu_device_remove,
> -	.shutdown = arm_smmu_device_shutdown,
> -};
> -module_platform_driver(arm_smmu_driver);
> -
> -MODULE_DESCRIPTION("IOMMU API for ARM architected SMMUv3 implementations");
> -MODULE_AUTHOR("Will Deacon <will@kernel.org>");
> -MODULE_ALIAS("platform:arm-smmu-v3");
> -MODULE_LICENSE("GPL v2");
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 01:30:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 01:30:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50050.88525 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knXGk-0007J7-JM; Fri, 11 Dec 2020 01:30:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50050.88525; Fri, 11 Dec 2020 01:30:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knXGk-0007Ic-D5; Fri, 11 Dec 2020 01:30:50 +0000
Received: by outflank-mailman (input) for mailman id 50050;
 Fri, 11 Dec 2020 01:30:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZKjA=FP=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1knXFU-0005re-G3
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 01:29:32 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2caf9f42-77b1-44e4-b91b-dc8ebceea8b5;
 Fri, 11 Dec 2020 01:29:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2caf9f42-77b1-44e4-b91b-dc8ebceea8b5
Date: Thu, 10 Dec 2020 17:29:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607650151;
	bh=RNy/hsKX8R9WjeekNbj2MDN7cquJjS8ntLT46xfA4Cg=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=UGxc3ujSOtJJ5TbaaIuxsYeLYP7Cxc8SggwSHIMDC3E7K+4itP0qS5UU4IdZ2zZdW
	 0mBGiW4ulP7obARrLTn0N3RaI1rLu1TCJzL8zH3ISeTFElGC4A+e8ofJUCiqv/eAJf
	 qhHVxgZJqnudYvPjrJBXd/QJS4yBrq+XatSj/na3RDdO0gkyxcCa0DvZsR0VjUE/ho
	 DRM8K9ysawZXtNnSgMTTeaqwuIo6j/0yv1dW/Hp+dDsN3Xe0FZiIZBZuo8lp2BnV2i
	 N+O7+AtbEu1PM07VnSIXBsUn30a1HrCUfR3cn0p0GWRVFnVFm8WqBypTVyqqXmRvFh
	 RwohoahgaRHIA==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Luca Fancellu <luca.fancellu@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, wei.chen@arm.com, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] xen/arm: Add workaround for Cortex-A53 erratum #843419
In-Reply-To: <20201210104258.111-1-luca.fancellu@arm.com>
Message-ID: <alpine.DEB.2.21.2012101630080.6285@sstabellini-ThinkPad-T480s>
References: <20201210104258.111-1-luca.fancellu@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 10 Dec 2020, Luca Fancellu wrote:
> On the Cortex A53, when executing in AArch64 state, a load or store instruction
> which uses the result of an ADRP instruction as a base register, or which uses
> a base register written by an instruction immediately after an ADRP to the
> same register, might access an incorrect address.
> 
> The workaround is to enable the linker flag --fix-cortex-a53-843419,
> if present, to check for and fix the affected sequence. Otherwise, print a
> warning that Xen may be susceptible to this erratum.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
>  docs/misc/arm/silicon-errata.txt |  1 +
>  xen/arch/arm/Kconfig             | 19 +++++++++++++++++++
>  xen/arch/arm/Makefile            |  8 ++++++++
>  xen/scripts/Kbuild.include       | 12 ++++++++++++
>  4 files changed, 40 insertions(+)
> 
> diff --git a/docs/misc/arm/silicon-errata.txt b/docs/misc/arm/silicon-errata.txt
> index 27bf957ebf..1925d8fd4e 100644
> --- a/docs/misc/arm/silicon-errata.txt
> +++ b/docs/misc/arm/silicon-errata.txt
> @@ -45,6 +45,7 @@ stable hypervisors.
>  | ARM            | Cortex-A53      | #827319         | ARM64_ERRATUM_827319    |
>  | ARM            | Cortex-A53      | #824069         | ARM64_ERRATUM_824069    |
>  | ARM            | Cortex-A53      | #819472         | ARM64_ERRATUM_819472    |
> +| ARM            | Cortex-A53      | #843419         | ARM64_ERRATUM_843419    |
>  | ARM            | Cortex-A55      | #1530923        | N/A                     |
>  | ARM            | Cortex-A57      | #852523         | N/A                     |
>  | ARM            | Cortex-A57      | #832075         | ARM64_ERRATUM_832075    |
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index f5b1bcda03..41bde2f401 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -186,6 +186,25 @@ config ARM64_ERRATUM_819472
>  
>  	  If unsure, say Y.
>  
> +config ARM64_ERRATUM_843419
> +	bool "Cortex-A53: 843419: A load or store might access an incorrect address"
> +	default y
> +	depends on ARM_64
> +	help
> +	  This option adds an alternative code sequence to work around ARM
> +	  erratum 843419 on Cortex-A53 parts up to r0p4.
> +
> +	  When executing in AArch64 state, a load or store instruction which uses
> +	  the result of an ADRP instruction as a base register, or which uses a
> +	  base register written by an instruction immediately after an ADRP to the
> +	  same register, might access an incorrect address.
> +
> +	  The workaround makes the linker check whether the affected sequence
> +	  is produced and, if so, replace it with an alternative, unaffected
> +	  sequence that produces the same behavior.
> +
> +	  If unsure, say Y.
> +
>  config ARM64_ERRATUM_832075
>  	bool "Cortex-A57: 832075: possible deadlock on mixing exclusive memory accesses with device loads"
>  	default y
> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> index 296c5e68bb..ad2d497c45 100644
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -101,6 +101,14 @@ prelink.o: $(ALL_OBJS) FORCE
>  	$(call if_changed,ld)
>  endif
>  
> +ifeq ($(CONFIG_ARM64_ERRATUM_843419),y)
> +    ifeq ($(call ld-option, --fix-cortex-a53-843419),n)
> +        $(warning ld does not support --fix-cortex-a53-843419; xen may be susceptible to erratum)
> +    else
> +        XEN_LDFLAGS += --fix-cortex-a53-843419
> +    endif
> +endif

I was going to comment that maybe we should put the warning elsewhere.
However, I tested the patch and it works fine with both new and old
compilers, and you really need to go back as far as gcc 4.9 to trigger
the warning, so on second thought I think it is OK as is.

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


>  targets += prelink.o
>  
>  $(TARGET)-syms: prelink.o xen.lds
> diff --git a/xen/scripts/Kbuild.include b/xen/scripts/Kbuild.include
> index e62eddc365..83c7e1457b 100644
> --- a/xen/scripts/Kbuild.include
> +++ b/xen/scripts/Kbuild.include
> @@ -43,6 +43,18 @@ define as-option-add-closure
>      endif
>  endef
>  
> +# $(if-success,<command>,<then>,<else>)
> +# Return <then> if <command> exits with 0, <else> otherwise.
> +if-success = $(shell { $(1); } >/dev/null 2>&1 && echo "$(2)" || echo "$(3)")
> +
> +# $(success,<command>)
> +# Return y if <command> exits with 0, n otherwise
> +success = $(call if-success,$(1),y,n)
> +
> +# $(ld-option,<flag>)
> +# Return y if the linker supports <flag>, n otherwise
> +ld-option = $(call success,$(LD) -v $(1))
> +
>  # cc-ifversion
>  # Usage:  EXTRA_CFLAGS += $(call cc-ifversion, -lt, 0402, -O1)
>  cc-ifversion = $(shell [ $(CONFIG_GCC_VERSION)0 $(1) $(2)000 ] && echo $(3) || echo $(4))
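
As an aside, the quoted if-success/success helpers (and ld-option built on
top of them) can be mirrored in plain shell to see what they do; the
function names below are mine, purely for illustration -- the real helpers
are GNU make macros, not shell functions:

```shell
# Stand-alone sketch of the Kbuild.include helpers quoted above.
# if_success runs a command, discards its output, and echoes one of two
# values depending on the exit status; success specialises that to y/n.
if_success() { { eval "$1"; } >/dev/null 2>&1 && echo "$2" || echo "$3"; }
success()    { if_success "$1" y n; }
# ld-option would then amount to: success "$LD -v $flag"

success "true"    # prints y
success "false"   # prints n
```

So ld-option reports y exactly when "$(LD) -v <flag>" exits 0, i.e. when
the linker accepts the flag.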
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 02:01:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 02:01:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50068.88537 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knXjw-0002YQ-0L; Fri, 11 Dec 2020 02:01:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50068.88537; Fri, 11 Dec 2020 02:00:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knXjv-0002YJ-ST; Fri, 11 Dec 2020 02:00:59 +0000
Received: by outflank-mailman (input) for mailman id 50068;
 Fri, 11 Dec 2020 02:00:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knXjv-0002YB-8V; Fri, 11 Dec 2020 02:00:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knXju-0007h1-WD; Fri, 11 Dec 2020 02:00:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knXju-0004S8-OD; Fri, 11 Dec 2020 02:00:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1knXju-00045Q-NG; Fri, 11 Dec 2020 02:00:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QMzcs9QWyLBhUpFJURdyOokQnSSRliMWmdgN6M7Flqg=; b=jj+/zYtj5z1WBRGtf510/MAfix
	zF3QWHD8XvflqUNch7m/dNYe1u6LFo6CkOVxVADTk02O4rbXuIKU2p8T+cJaQ3UK8xe2IBM/66u78
	dS8foL5qyo4B7GwPjL4swM+1laM/uAjUGAkSIoXZQF1bW+DRJNT5FDLqacXtqVcFP/wk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157394-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157394: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=10dc8c561c687c9e73e29743d04d828cca56a288
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Dec 2020 02:00:58 +0000

flight 157394 ovmf real [real]
flight 157397 ovmf real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157394/
http://logs.test-lab.xenproject.org/osstest/logs/157397/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 10dc8c561c687c9e73e29743d04d828cca56a288
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    1 days
Failing since        157348  2020-12-09 15:39:39 Z    1 days    6 attempts
Testing same since   157383  2020-12-10 13:09:45 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 308 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 03:23:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 03:23:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50080.88557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knZ1m-0002I9-Dl; Fri, 11 Dec 2020 03:23:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50080.88557; Fri, 11 Dec 2020 03:23:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knZ1m-0002I2-Av; Fri, 11 Dec 2020 03:23:30 +0000
Received: by outflank-mailman (input) for mailman id 50080;
 Fri, 11 Dec 2020 03:23:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knZ1l-0002Hu-7R; Fri, 11 Dec 2020 03:23:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knZ1k-0000uv-Us; Fri, 11 Dec 2020 03:23:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knZ1k-0007te-M6; Fri, 11 Dec 2020 03:23:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1knZ1k-0002et-Lb; Fri, 11 Dec 2020 03:23:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Or7GeXfYVZNPHKLzIqIFkIBRPbAeTglCxCeHdAluYaU=; b=ybddNkPnzHCYm5osxXKAV2S38x
	sL18YJo54hEzfRgvOUkqMRh84/ly88RbLmETALkmmQ3bw9qOLWF2AS6peJslYokqLLRfRKDOB7eRB
	ZUWed5oMprPzv37JPptbC1GP4J8FezAC2mfgaa/XL8VEUqHzsBxPjMovdfJhlrysuYUY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157399-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157399: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=10dc8c561c687c9e73e29743d04d828cca56a288
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Dec 2020 03:23:28 +0000

flight 157399 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157399/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 157345
 build-i386                    6 xen-build                fail REGR. vs. 157345
 build-amd64-xsm               6 xen-build                fail REGR. vs. 157345
 build-i386-xsm                6 xen-build                fail REGR. vs. 157345

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 10dc8c561c687c9e73e29743d04d828cca56a288
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    1 days
Failing since        157348  2020-12-09 15:39:39 Z    1 days    7 attempts
Testing same since   157383  2020-12-10 13:09:45 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 308 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 04:09:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 04:09:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50090.88579 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knZkN-0006ad-K8; Fri, 11 Dec 2020 04:09:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50090.88579; Fri, 11 Dec 2020 04:09:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knZkN-0006aW-Gg; Fri, 11 Dec 2020 04:09:35 +0000
Received: by outflank-mailman (input) for mailman id 50090;
 Fri, 11 Dec 2020 04:09:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knZkM-0006aO-9B; Fri, 11 Dec 2020 04:09:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knZkM-0001uU-1f; Fri, 11 Dec 2020 04:09:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knZkL-0001HJ-QL; Fri, 11 Dec 2020 04:09:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1knZkL-00013I-Pr; Fri, 11 Dec 2020 04:09:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QcOQjQ8sSzqGwd90AExHxZY0VoPMj4CB6bK4i8QTAA4=; b=SCUfciEeSpH8e4rnn5L7I/Q9WZ
	6ezNu1jioZJ40bnBH9BKBhC8o/kCuaslvHTzLKEXPN6+k57l3lo34H6zoBTZUND3Hf93x2seciHLq
	2KTshq1IR4eyr0YsT3rZMoHqvcdpHnSm30wWmY0RFildiHcAdA2U1eisDOB3iDpXmTGY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157402-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157402: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Dec 2020 04:09:33 +0000

flight 157402 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157402/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 157345
 build-i386                    6 xen-build                fail REGR. vs. 157345
 build-amd64-xsm               6 xen-build                fail REGR. vs. 157345
 build-i386-xsm                6 xen-build                fail REGR. vs. 157345

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    1 days
Failing since        157348  2020-12-09 15:39:39 Z    1 days    8 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 05:09:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 05:09:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49900.88611 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knafx-0004kc-Le; Fri, 11 Dec 2020 05:09:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49900.88611; Fri, 11 Dec 2020 05:09:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knafx-0004kU-EV; Fri, 11 Dec 2020 05:09:05 +0000
Received: by outflank-mailman (input) for mailman id 49900;
 Thu, 10 Dec 2020 20:33:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sId8=FO=linux.ibm.com=hca@srs-us1.protection.inumbo.net>)
 id 1knScs-00080H-5P
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 20:33:22 +0000
Received: from mx0b-001b2d01.pphosted.com (unknown [148.163.158.5])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 75e5151a-52c4-4a32-af2c-843197825ba8;
 Thu, 10 Dec 2020 20:33:20 +0000 (UTC)
Received: from pps.filterd (m0098421.ppops.net [127.0.0.1])
 by mx0a-001b2d01.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0BAK4Fvx057985; Thu, 10 Dec 2020 15:31:40 -0500
Received: from pps.reinject (localhost [127.0.0.1])
 by mx0a-001b2d01.pphosted.com with ESMTP id 35bst29qva-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 10 Dec 2020 15:31:40 -0500
Received: from m0098421.ppops.net (m0098421.ppops.net [127.0.0.1])
 by pps.reinject (8.16.0.36/8.16.0.36) with SMTP id 0BAK4GjK058200;
 Thu, 10 Dec 2020 15:31:39 -0500
Received: from ppma03ams.nl.ibm.com (62.31.33a9.ip4.static.sl-reverse.com
 [169.51.49.98])
 by mx0a-001b2d01.pphosted.com with ESMTP id 35bst29qu4-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 10 Dec 2020 15:31:39 -0500
Received: from pps.filterd (ppma03ams.nl.ibm.com [127.0.0.1])
 by ppma03ams.nl.ibm.com (8.16.0.42/8.16.0.42) with SMTP id 0BAKRkNd013529;
 Thu, 10 Dec 2020 20:31:36 GMT
Received: from b06cxnps4074.portsmouth.uk.ibm.com
 (d06relay11.portsmouth.uk.ibm.com [9.149.109.196])
 by ppma03ams.nl.ibm.com with ESMTP id 3581u865vj-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 10 Dec 2020 20:31:36 +0000
Received: from d06av26.portsmouth.uk.ibm.com (d06av26.portsmouth.uk.ibm.com
 [9.149.105.62])
 by b06cxnps4074.portsmouth.uk.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id
 0BAKVYT024117666
 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 10 Dec 2020 20:31:34 GMT
Received: from d06av26.portsmouth.uk.ibm.com (unknown [127.0.0.1])
 by IMSVA (Postfix) with ESMTP id DFC83AE045;
 Thu, 10 Dec 2020 20:31:33 +0000 (GMT)
Received: from d06av26.portsmouth.uk.ibm.com (unknown [127.0.0.1])
 by IMSVA (Postfix) with ESMTP id E2863AE051;
 Thu, 10 Dec 2020 20:31:31 +0000 (GMT)
Received: from osiris (unknown [9.171.22.54])
 by d06av26.portsmouth.uk.ibm.com (Postfix) with ESMTPS;
 Thu, 10 Dec 2020 20:31:31 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75e5151a-52c4-4a32-af2c-843197825ba8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ibm.com; h=date : from : to : cc :
 subject : message-id : references : mime-version : content-type :
 in-reply-to; s=pp1; bh=3Ov7MsWd2WvcFRWayPftrhptP83tCk6eaRZKMJ1S1Ic=;
 b=qJ2qXmRZHShK3+KCGTAQ+GLO7T5ClcHbRucTlMptHMZtELrH8qgXcIaAHVB6PID/nku1
 ByEtEVbJsiJYGI03QlfGkQ1cRWQd6oOEMn8yuCkftiInCUBb7cOBy+jxJoIaI5JGWhnb
 ReeNDn1ZxIP8QeHupKbq+wYT9Qp4BCHp8SfaSiQ9Ullxz+NVNlETe0UEchxkp3jV8B2g
 WzhBnCQf3xdIuJ0+HTPpMuRtk6+5Tkn4guG5jhcrwWTqI/FxJkyWrFviDJx1F+MGhTWO
 qZWE27Ax0tV7PUfy/8exNPVSXkxQALqNVg6/vWplCQaefgrg6FRXhrzr6Klx6j7s6nnk /w== 
Date: Thu, 10 Dec 2020 21:31:30 +0100
From: Heiko Carstens <hca@linux.ibm.com>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, Peter Zijlstra <peterz@infradead.org>,
        Marc Zyngier <maz@kernel.org>,
        Christian Borntraeger <borntraeger@de.ibm.com>,
        linux-s390@vger.kernel.org,
        "James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>,
        Helge Deller <deller@gmx.de>, afzal mohammed <afzal.mohd.ma@gmail.com>,
        linux-parisc@vger.kernel.org, Russell King <linux@armlinux.org.uk>,
        linux-arm-kernel@lists.infradead.org,
        Mark Rutland <mark.rutland@arm.com>,
        Catalin Marinas <catalin.marinas@arm.com>,
        Will Deacon <will@kernel.org>,
        Jani Nikula <jani.nikula@linux.intel.com>,
        Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
        Rodrigo Vivi <rodrigo.vivi@intel.com>, David Airlie <airlied@linux.ie>,
        Daniel Vetter <daniel@ffwll.ch>,
        Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
        Chris Wilson <chris@chris-wilson.co.uk>,
        Wambui Karuga <wambui.karugax@gmail.com>,
        intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
        Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
        Linus Walleij <linus.walleij@linaro.org>, linux-gpio@vger.kernel.org,
        Lee Jones <lee.jones@linaro.org>, Jon Mason <jdmason@kudzu.us>,
        Dave Jiang <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>,
        linux-ntb@googlegroups.com,
        Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
        Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>,
        Michal Simek <michal.simek@xilinx.com>, linux-pci@vger.kernel.org,
        Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
        Hou Zhiqiang <Zhiqiang.Hou@nxp.com>, Tariq Toukan <tariqt@nvidia.com>,
        "David S. Miller" <davem@davemloft.net>,
        Jakub Kicinski <kuba@kernel.org>, netdev@vger.kernel.org,
        linux-rdma@vger.kernel.org, Saeed Mahameed <saeedm@nvidia.com>,
        Leon Romanovsky <leon@kernel.org>,
        Boris Ostrovsky <boris.ostrovsky@oracle.com>,
        Juergen Gross <jgross@suse.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        xen-devel@lists.xenproject.org
Subject: Re: [patch 12/30] s390/irq: Use irq_desc_kstat_cpu() in
 show_msi_interrupt()
Message-ID: <20201210203130.GB4250@osiris>
References: <20201210192536.118432146@linutronix.de>
 <20201210194043.769108348@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201210194043.769108348@linutronix.de>
X-TM-AS-GCONF: 00
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.343,18.0.737
 definitions=2020-12-10_08:2020-12-09,2020-12-10 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 malwarescore=0 phishscore=0
 lowpriorityscore=0 mlxlogscore=996 clxscore=1011 adultscore=0 mlxscore=0
 bulkscore=0 suspectscore=1 spamscore=0 impostorscore=0 priorityscore=1501
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2012100122

On Thu, Dec 10, 2020 at 08:25:48PM +0100, Thomas Gleixner wrote:
> The irq descriptor is already there, no need to look it up again.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> Cc: Heiko Carstens <hca@linux.ibm.com>
> Cc: linux-s390@vger.kernel.org
> ---
>  arch/s390/kernel/irq.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Acked-by: Heiko Carstens <hca@linux.ibm.com>


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 05:09:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 05:09:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.49763.88603 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knafx-0004kA-9Z; Fri, 11 Dec 2020 05:09:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 49763.88603; Fri, 11 Dec 2020 05:09:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knafx-0004k3-6W; Fri, 11 Dec 2020 05:09:05 +0000
Received: by outflank-mailman (input) for mailman id 49763;
 Thu, 10 Dec 2020 19:48:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wOlC=FO=linux.intel.com=ville.syrjala@srs-us1.protection.inumbo.net>)
 id 1knRvZ-0000un-MH
 for xen-devel@lists.xenproject.org; Thu, 10 Dec 2020 19:48:37 +0000
Received: from mga01.intel.com (unknown [192.55.52.88])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ab5e9545-d521-4670-870c-f0eb8c2bab29;
 Thu, 10 Dec 2020 19:48:36 +0000 (UTC)
Received: from orsmga007.jf.intel.com ([10.7.209.58])
 by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 10 Dec 2020 11:48:35 -0800
Received: from stinkbox.fi.intel.com (HELO stinkbox) ([10.237.72.174])
 by orsmga007.jf.intel.com with SMTP; 10 Dec 2020 11:48:24 -0800
Received: by stinkbox (sSMTP sendmail emulation);
 Thu, 10 Dec 2020 21:48:23 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ab5e9545-d521-4670-870c-f0eb8c2bab29
IronPort-SDR: Nx7/L+qSs5ThlOXQ/aWPO/3NjhsyncLbEhNNbJObB5FwFT7XloKPmLy4/poBFCb74p+KqtCL5v
 OpkCtW/FSZ4Q==
X-IronPort-AV: E=McAfee;i="6000,8403,9831"; a="192650837"
X-IronPort-AV: E=Sophos;i="5.78,409,1599548400"; 
   d="scan'208";a="192650837"
IronPort-SDR: Qef2UfSDuwyPiJ27NUpAFZLQLjXKYRK609+0x1abl5RvYR7KCkszp/IICW7DO8jsBWdQpq5EWc
 XoQtIWeVTwmw==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.78,409,1599548400"; 
   d="scan'208";a="376070310"
Date: Thu, 10 Dec 2020 21:48:23 +0200
From: Ville =?iso-8859-1?Q?Syrj=E4l=E4?= <ville.syrjala@linux.intel.com>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
	Peter Zijlstra <peterz@infradead.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Linus Walleij <linus.walleij@linaro.org>,
	dri-devel@lists.freedesktop.org,
	Chris Wilson <chris@chris-wilson.co.uk>,
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>,
	netdev@vger.kernel.org, Will Deacon <will@kernel.org>,
	Michal Simek <michal.simek@xilinx.com>,
	Rob Herring <robh@kernel.org>, linux-s390@vger.kernel.org,
	afzal mohammed <afzal.mohd.ma@gmail.com>,
	Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
	Dave Jiang <dave.jiang@intel.com>, xen-devel@lists.xenproject.org,
	Leon Romanovsky <leon@kernel.org>, linux-rdma@vger.kernel.org,
	Marc Zyngier <maz@kernel.org>, Helge Deller <deller@gmx.de>,
	Russell King <linux@armlinux.org.uk>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	linux-pci@vger.kernel.org, Jakub Kicinski <kuba@kernel.org>,
	Heiko Carstens <hca@linux.ibm.com>,
	Wambui Karuga <wambui.karugax@gmail.com>,
	Allen Hubbe <allenbh@gmail.com>, Juergen Gross <jgross@suse.com>,
	intel-gfx@lists.freedesktop.org, linux-gpio@vger.kernel.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Bjorn Helgaas <bhelgaas@google.com>,
	Lee Jones <lee.jones@linaro.org>,
	linux-arm-kernel@lists.infradead.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Airlie <airlied@linux.ie>, linux-parisc@vger.kernel.org,
	Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
	Tariq Toukan <tariqt@nvidia.com>, Jon Mason <jdmason@kudzu.us>,
	linux-ntb@googlegroups.com, Saeed Mahameed <saeedm@nvidia.com>,
	"David S. Miller" <davem@davemloft.net>
Subject: Re: [Intel-gfx] [patch 13/30] drm/i915/lpe_audio: Remove pointless
 irq_to_desc() usage
Message-ID: <X9J7h+myHaraeoKH@intel.com>
References: <20201210192536.118432146@linutronix.de>
 <20201210194043.862572239@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201210194043.862572239@linutronix.de>
X-Patchwork-Hint: comment

On Thu, Dec 10, 2020 at 08:25:49PM +0100, Thomas Gleixner wrote:
> Nothing uses the result and nothing should ever use it in driver code.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Jani Nikula <jani.nikula@linux.intel.com>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Cc: David Airlie <airlied@linux.ie>
> Cc: Daniel Vetter <daniel@ffwll.ch>
> Cc: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Wambui Karuga <wambui.karugax@gmail.com>
> Cc: intel-gfx@lists.freedesktop.org
> Cc: dri-devel@lists.freedesktop.org

Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>

> ---
>  drivers/gpu/drm/i915/display/intel_lpe_audio.c |    4 ----
>  1 file changed, 4 deletions(-)
> 
> --- a/drivers/gpu/drm/i915/display/intel_lpe_audio.c
> +++ b/drivers/gpu/drm/i915/display/intel_lpe_audio.c
> @@ -297,13 +297,9 @@ int intel_lpe_audio_init(struct drm_i915
>   */
>  void intel_lpe_audio_teardown(struct drm_i915_private *dev_priv)
>  {
> -	struct irq_desc *desc;
> -
>  	if (!HAS_LPE_AUDIO(dev_priv))
>  		return;
>  
> -	desc = irq_to_desc(dev_priv->lpe_audio.irq);
> -
>  	lpe_audio_platdev_destroy(dev_priv);
>  
>  	irq_free_desc(dev_priv->lpe_audio.irq);
> 
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx

-- 
Ville Syrjälä
Intel


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 05:10:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 05:10:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50109.88627 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knahb-0005f5-1C; Fri, 11 Dec 2020 05:10:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50109.88627; Fri, 11 Dec 2020 05:10:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knaha-0005ey-SF; Fri, 11 Dec 2020 05:10:46 +0000
Received: by outflank-mailman (input) for mailman id 50109;
 Fri, 11 Dec 2020 05:10:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XUOP=FP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1knahZ-0005d2-JH
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 05:10:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bc0a885a-a840-4914-9d8f-436b4e632912;
 Fri, 11 Dec 2020 05:10:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 89C2DAB91;
 Fri, 11 Dec 2020 05:10:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bc0a885a-a840-4914-9d8f-436b4e632912
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607663443; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Tw8WaYEedOoP6uhdqQDj4FZ0EAnAR9qHZSp5wafjsIg=;
	b=RSe3GVINPA6mwnWqpxRj8C9UTL+BPcbllw0Jd3tlnuM7c4kwSeqEQEIxImNXBlC8vx+2u1
	lA6EJ0XwdoFW1xEM2TtixXyMKKnxqwvVltg4iWOvd6RBHTgkMnSUT7RGpYUltKe435KCOq
	6Dor5nBpZV8g2Lp2ly+q87J7HUENxLQ=
Subject: Re: x86/ioapic: Cleanup the timer_works() irqflags mess
To: Thomas Gleixner <tglx@linutronix.de>, Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>, xen-devel@lists.xenproject.org,
 x86@kernel.org, linux-kernel@vger.kernel.org,
 virtualization@lists.linux-foundation.org, luto@kernel.org,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 "H. Peter Anvin" <hpa@zytor.com>, Deep Shah <sdeep@vmware.com>,
 "VMware, Inc." <pv-drivers@vmware.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-6-jgross@suse.com>
 <20201120115943.GD3021@hirez.programming.kicks-ass.net>
 <20201209181514.GA14235@C02TD0UTHF1T.local>
 <87tusuzu71.fsf@nanos.tec.linutronix.de>
 <20201210111008.GB88655@C02TD0UTHF1T.local>
 <87k0tpju47.fsf@nanos.tec.linutronix.de>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <0a90cfff-05d6-1475-43f8-e41b5af24281@suse.com>
Date: Fri, 11 Dec 2020 06:10:42 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <87k0tpju47.fsf@nanos.tec.linutronix.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="ut1CfDbO7XZTjsKp0jfE35exNGTBAXJ4U"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--ut1CfDbO7XZTjsKp0jfE35exNGTBAXJ4U
Content-Type: multipart/mixed; boundary="FClLxLK0RQjd7JNbzYVWLfCZWmbD9gnXP";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Thomas Gleixner <tglx@linutronix.de>, Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>, xen-devel@lists.xenproject.org,
 x86@kernel.org, linux-kernel@vger.kernel.org,
 virtualization@lists.linux-foundation.org, luto@kernel.org,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 "H. Peter Anvin" <hpa@zytor.com>, Deep Shah <sdeep@vmware.com>,
 "VMware, Inc." <pv-drivers@vmware.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Message-ID: <0a90cfff-05d6-1475-43f8-e41b5af24281@suse.com>
Subject: Re: x86/ioapic: Cleanup the timer_works() irqflags mess
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-6-jgross@suse.com>
 <20201120115943.GD3021@hirez.programming.kicks-ass.net>
 <20201209181514.GA14235@C02TD0UTHF1T.local>
 <87tusuzu71.fsf@nanos.tec.linutronix.de>
 <20201210111008.GB88655@C02TD0UTHF1T.local>
 <87k0tpju47.fsf@nanos.tec.linutronix.de>
In-Reply-To: <87k0tpju47.fsf@nanos.tec.linutronix.de>

--FClLxLK0RQjd7JNbzYVWLfCZWmbD9gnXP
Content-Type: multipart/mixed;
 boundary="------------3B73900FA14EEF59C60EECEC"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------3B73900FA14EEF59C60EECEC
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 10.12.20 21:15, Thomas Gleixner wrote:
> Mark tripped over the creative irqflags handling in the IO-APIC timer
> delivery check which ends up doing:
> 
>          local_irq_save(flags);
>          local_irq_enable();
>          local_irq_restore(flags);
> 
> which triggered a new consistency check he's working on, required for
> replacing the POPF based restore with a conditional STI.
> 
> That code is a historical mess and none of this is needed. Make it
> straightforwardly use local_irq_disable()/enable() as that's all that is
> required. It is invoked from interrupt-enabled code nowadays.
> 
> Reported-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Tested-by: Mark Rutland <mark.rutland@arm.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------3B73900FA14EEF59C60EECEC
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------3B73900FA14EEF59C60EECEC--

--FClLxLK0RQjd7JNbzYVWLfCZWmbD9gnXP--

--ut1CfDbO7XZTjsKp0jfE35exNGTBAXJ4U
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/S/1IFAwAAAAAACgkQsN6d1ii/Ey8w
Zwf9FLP8Fj1JNG2NYn58CzNrRiyr6YDBpKBiJh+uBmcJAAc34XAwkzDVUYbbaFsr12h2bICSlnJ8
jEN0pu4X53sJVw2xxOEfx2HlGI/ksGGRhrAFARK/h8staI70tGsbSOHJQNJCbmZKHOn55OkNQR0H
yA4TBnoai2gnYd1Tc96EnKJ40Pch1hg2X3C+9sWGqwd31dbv0wsnLeMOiORzwikM8mIVKrzutkCC
p8AySw3U0kMObQPydcJO3Ub4msqzkccfByYDWWOCsEi+1kfIEYqv28eTTfRW5FMYpQF0VCRSkQec
oU3arD7srQaZGLOwwClaEpgS3z5yk8VDOxX++d8Eiw==
=eb6p
-----END PGP SIGNATURE-----

--ut1CfDbO7XZTjsKp0jfE35exNGTBAXJ4U--


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 05:39:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 05:39:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50120.88642 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knb9Z-0008FQ-G2; Fri, 11 Dec 2020 05:39:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50120.88642; Fri, 11 Dec 2020 05:39:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knb9Z-0008FJ-D0; Fri, 11 Dec 2020 05:39:41 +0000
Received: by outflank-mailman (input) for mailman id 50120;
 Fri, 11 Dec 2020 05:39:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knb9Y-0008FB-GK; Fri, 11 Dec 2020 05:39:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knb9Y-0004BL-B8; Fri, 11 Dec 2020 05:39:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knb9Y-0003sU-4f; Fri, 11 Dec 2020 05:39:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1knb9Y-00023c-4C; Fri, 11 Dec 2020 05:39:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MC3v/exc/BfbTITLZThcKTorTTTpkwbcJzLdaylxT0k=; b=AjvsFtJn2oxWR7GRxZPrdPvSNd
	kz6u9crHyTJaOEVAxN0OfLxco7Aiela7UIw3RYl60jDWWInIituqys7ItpsMLkNS+5HE0axYwLtHB
	v/z0n/A8ocaZl7XVi4JAjNANnTIUFTAvm1jsh/kuEHdRBJJnhPIB0EgQmTxiTICiXthU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157406-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157406: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Dec 2020 05:39:40 +0000

flight 157406 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157406/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 157345
 build-i386                    6 xen-build                fail REGR. vs. 157345
 build-amd64-xsm               6 xen-build                fail REGR. vs. 157345
 build-i386-xsm                6 xen-build                fail REGR. vs. 157345

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    1 days
Failing since        157348  2020-12-09 15:39:39 Z    1 days    9 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 06:18:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 06:18:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50136.88663 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knbkX-0003yf-K8; Fri, 11 Dec 2020 06:17:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50136.88663; Fri, 11 Dec 2020 06:17:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knbkX-0003yY-Fv; Fri, 11 Dec 2020 06:17:53 +0000
Received: by outflank-mailman (input) for mailman id 50136;
 Fri, 11 Dec 2020 06:17:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XUOP=FP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1knbkV-0003yS-NN
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 06:17:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 143042d3-79e4-48a9-a663-9e1072dc383e;
 Fri, 11 Dec 2020 06:17:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 13F15AF2C;
 Fri, 11 Dec 2020 06:17:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 143042d3-79e4-48a9-a663-9e1072dc383e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607667469; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=0Hqvh2FmQdxs+E1P/+UvffdZOKFlLtbSz3sTxGcARMo=;
	b=LBVJx6W5F4URqC3X64ehrowu5YRikB8/j/vxRSbcQ3IL9ZnU1EhEzYDoZmjik7Ss9AAF9k
	VbJCYa9d4uXocu0FLUSMKEjfe+DRohzwZKmTlPFzKcaaj/1KN/wjUDbns5LbsUhpaSTxU7
	2cbe908PI5Up5se8cqa9SOITXkOZdgE=
Subject: Re: [patch 27/30] xen/events: Only force affinity mask for percpu
 interrupts
To: boris.ostrovsky@oracle.com, Thomas Gleixner <tglx@linutronix.de>,
 LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>, afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org, Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>, linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>, David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>, intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>, linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>, Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>, linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>, Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>, Leon Romanovsky <leon@kernel.org>
References: <20201210192536.118432146@linutronix.de>
 <20201210194045.250321315@linutronix.de>
 <7f7af60f-567f-cdef-f8db-8062a44758ce@oracle.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <a4bce428-4420-6064-c7cc-7136a7544a52@suse.com>
Date: Fri, 11 Dec 2020 07:17:44 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <7f7af60f-567f-cdef-f8db-8062a44758ce@oracle.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="bInmAjsEBwVQG9mgj6zMfi4a5q7thbvRq"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--bInmAjsEBwVQG9mgj6zMfi4a5q7thbvRq
Content-Type: multipart/mixed; boundary="z0MmiO2yT7oWPERWG97Qa1OHWGxwVFKEb";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: boris.ostrovsky@oracle.com, Thomas Gleixner <tglx@linutronix.de>,
 LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>, afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org, Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>, linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>, David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>, intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>, linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>, Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>, linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>, Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>, Leon Romanovsky <leon@kernel.org>
Message-ID: <a4bce428-4420-6064-c7cc-7136a7544a52@suse.com>
Subject: Re: [patch 27/30] xen/events: Only force affinity mask for percpu
 interrupts
References: <20201210192536.118432146@linutronix.de>
 <20201210194045.250321315@linutronix.de>
 <7f7af60f-567f-cdef-f8db-8062a44758ce@oracle.com>
In-Reply-To: <7f7af60f-567f-cdef-f8db-8062a44758ce@oracle.com>

--z0MmiO2yT7oWPERWG97Qa1OHWGxwVFKEb
Content-Type: multipart/mixed;
 boundary="------------4E00B26896755A3BF8CF5A38"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------4E00B26896755A3BF8CF5A38
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 11.12.20 00:20, boris.ostrovsky@oracle.com wrote:
> 
> On 12/10/20 2:26 PM, Thomas Gleixner wrote:
>> All event channel setups bind the interrupt on CPU0 or the target CPU for
>> percpu interrupts and overwrite the affinity mask with the corresponding
>> cpumask. That does not make sense.
>>
>> The XEN implementation of irqchip::irq_set_affinity() already picks a
>> single target CPU out of the affinity mask and the actual target is stored
>> in the effective CPU mask, so destroying the user chosen affinity mask
>> which might contain more than one CPU is wrong.
>>
>> Change the implementation so that the channel is bound to CPU0 at the XEN
>> level and leave the affinity mask alone. At startup of the interrupt
>> affinity will be assigned out of the affinity mask and the XEN binding will
>> be updated.
> 
> 
> If that's the case then I wonder whether we need this call at all and instead bind at startup time.

This binding to cpu0 was introduced with commit 97253eeeb792d61ed2
and I have no reason to believe the underlying problem has been
eliminated.


Juergen

--------------4E00B26896755A3BF8CF5A38
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

[OpenPGP public key block elided; identical to the copy attached to the message above.]

--------------4E00B26896755A3BF8CF5A38--

--z0MmiO2yT7oWPERWG97Qa1OHWGxwVFKEb--

--bInmAjsEBwVQG9mgj6zMfi4a5q7thbvRq
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/TDwgFAwAAAAAACgkQsN6d1ii/Ey8h
xggAkCsUD7isOuCqEaUo01HAUjpM8mZxAWSEsOLkScDlic57fpNsYS+rZOVeFHDluVxb/jn8HPuE
LR2JRJJAoeDn0oVgoTexsNoc9s8tTVlR9AJDCBkQkfUCyo3m8fjaqh0uWYt8Fq5hiDLYdtopGANt
SOatxh8szxVT1K7ewdOKYpq/E4LMlP/Ixqp7Vt3sLFUfbO23BK9BAQIN4HjEJ8UhwJmx97xEz4MQ
3uq2q9YtvtamCfptwByf8jIwqQyfSTBFfng5U6S7cx5rxXjTnWWg2uK1+Yg6YeJ7rKitwBc87VUp
3/xQ/cnnv/3SzOeVN5mv6RBsdEMDniBwFSlF8BwRpA==
=wkA2
-----END PGP SIGNATURE-----

--bInmAjsEBwVQG9mgj6zMfi4a5q7thbvRq--


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 06:54:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 06:54:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50142.88675 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kncJj-00082S-Da; Fri, 11 Dec 2020 06:54:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50142.88675; Fri, 11 Dec 2020 06:54:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kncJj-00082L-9Y; Fri, 11 Dec 2020 06:54:15 +0000
Received: by outflank-mailman (input) for mailman id 50142;
 Fri, 11 Dec 2020 06:54:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kncJh-00082D-J3; Fri, 11 Dec 2020 06:54:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kncJh-0005mZ-BC; Fri, 11 Dec 2020 06:54:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kncJg-0005d1-CS; Fri, 11 Dec 2020 06:54:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kncJg-0002p5-Bz; Fri, 11 Dec 2020 06:54:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NUtm4rGCJI+6UGYJWd3W7VR3oZkALXcDh7etsPZC+PY=; b=3Vg7YYDVBGV35PisiYolFr79aE
	xQVr2jetZzLYDupn4/iA9FZRyjvV+i0SLsv35nQ1f8bLIFz2cbE2p4yGahjXMH5h8pCX/p1Hx0PIg
	9YmqrJhszIRMaockgUQxZT61kBgxSOJeBjr9E7w/kTwiIca312ZBn1WUm6M61ZEWS52g=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157392-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157392: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-saverestore.2:fail:heisenbug
    qemu-mainline:test-armhf-armhf-libvirt:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=2ecfc0657afa5d29a373271b342f704a1a3c6737
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Dec 2020 06:54:12 +0000

flight 157392 qemu-mainline real [real]
flight 157405 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157392/
http://logs.test-lab.xenproject.org/osstest/logs/157405/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 17 guest-saverestore.2 fail in 157405 pass in 157392
 test-armhf-armhf-libvirt 18 guest-start/debian.repeat fail pass in 157405-retest

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     18 guest-localmigrate       fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                2ecfc0657afa5d29a373271b342f704a1a3c6737
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  112 days
Failing since        152659  2020-08-21 14:07:39 Z  111 days  233 attempts
Testing same since   157392  2020-12-10 20:40:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erich McMillan <erich.mcmillan@hp.com>
  Erich-McMillan <erich.mcmillan@hp.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiahui Cen <cenjiahui@huawei.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Levon <john.levon@nutanix.com>
  John Snow <jsnow@redhat.com>
  John Wang <wangzhiqiang.bj@bytedance.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Juan Quintela <quintela@redhat.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Kunkun Jiang <jiangkunkun@huawei.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vikram Garhwal <fnu.vikram@xilinx.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yubo Miao <miaoyubo@huawei.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zihao Chang <changzihao1@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 72149 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 07:02:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 07:02:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50150.88690 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kncS2-0000lE-I6; Fri, 11 Dec 2020 07:02:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50150.88690; Fri, 11 Dec 2020 07:02:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kncS2-0000l7-Dv; Fri, 11 Dec 2020 07:02:50 +0000
Received: by outflank-mailman (input) for mailman id 50150;
 Fri, 11 Dec 2020 07:02:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XUOP=FP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kncS1-0000kb-5d
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 07:02:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e3286f51-33c3-4046-b1e3-b34ef21bd2c9;
 Fri, 11 Dec 2020 07:02:48 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3CB60ADCA;
 Fri, 11 Dec 2020 07:02:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e3286f51-33c3-4046-b1e3-b34ef21bd2c9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607670167; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=ZBqmbm5VfuJKOHUyvQvzGbk9jG7yYq9D7Vm8M4k3Mt0=;
	b=U3fsRSIgWrR1GCFfHXjDXx/Rr/EJ6LW6kewdbKaZOpOHm0reGF4WKZpQtBD3KhMuuB2art
	e+82m8ygxn7+oFxyaOZ+nfssPNx9JFiCJUQcTK1Q09vsj3NyaAjsLGGtchUL7A+eaWPoVU
	kE5j7bi/YRl36wGj8NEIwJVWuFO+0C4=
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org
References: <20201126080340.6154-1-jgross@suse.com>
 <22190c77-eb35-5b72-7d72-34800c3f052f@suse.com>
 <98c45abd-8796-088c-e2a6-9ad494beeb9e@xen.org>
 <59f126a3-f716-345b-b464-746e6156c15a@suse.com>
 <1e305cf6-aa14-54cc-a77d-88bb38ba4c6e@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v3] xen: add support for automatic debug key actions in
 case of crash
Message-ID: <7271b2f4-816a-5541-5402-50ea29218d81@suse.com>
Date: Fri, 11 Dec 2020 08:02:45 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <1e305cf6-aa14-54cc-a77d-88bb38ba4c6e@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="j8SyKl99jb6EcJZhKFnQ0QEA2yjymX0go"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--j8SyKl99jb6EcJZhKFnQ0QEA2yjymX0go
Content-Type: multipart/mixed; boundary="kL9BXGOVnS7dKdZswMvkiGPLQIM50byAn";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org
Message-ID: <7271b2f4-816a-5541-5402-50ea29218d81@suse.com>
Subject: Re: [PATCH v3] xen: add support for automatic debug key actions in
 case of crash
References: <20201126080340.6154-1-jgross@suse.com>
 <22190c77-eb35-5b72-7d72-34800c3f052f@suse.com>
 <98c45abd-8796-088c-e2a6-9ad494beeb9e@xen.org>
 <59f126a3-f716-345b-b464-746e6156c15a@suse.com>
 <1e305cf6-aa14-54cc-a77d-88bb38ba4c6e@xen.org>
In-Reply-To: <1e305cf6-aa14-54cc-a77d-88bb38ba4c6e@xen.org>

--kL9BXGOVnS7dKdZswMvkiGPLQIM50byAn
Content-Type: multipart/mixed;
 boundary="------------24BA5CAC7FEDE7F694C2280D"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------24BA5CAC7FEDE7F694C2280D
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 10.12.20 21:51, Julien Grall wrote:
> Hi Jan,
>
> On 09/12/2020 14:29, Jan Beulich wrote:
>> On 09.12.2020 13:11, Julien Grall wrote:
>>> On 26/11/2020 11:20, Jan Beulich wrote:
>>>> On 26.11.2020 09:03, Juergen Gross wrote:
>>>>> When the host crashes it would sometimes be nice to have additional
>>>>> debug data available which could be produced via debug keys, but
>>>>> halting the server for manual intervention might be impossible due to
>>>>> the need to reboot/kexec rather sooner than later.
>>>>>
>>>>> Add support for automatic debug key actions in case of crashes which
>>>>> can be activated via boot- or runtime-parameter.
>>>>>
>>>>> Depending on the type of crash the desired data might be different, so
>>>>> support different settings for the possible types of crashes.
>>>>>
>>>>> The parameter is "crash-debug" with the following syntax:
>>>>>
>>>>>     crash-debug-<type>=<string>
>>>>>
>>>>> with <type> being one of:
>>>>>
>>>>>     panic, hwdom, watchdog, kexeccmd, debugkey
>>>>>
>>>>> and <string> a sequence of debug key characters with '+' having the
>>>>> special semantics of a 10 millisecond pause.
>>>>>
>>>>> So "crash-debug-watchdog=0+0qr" would result in special output in case
>>>>> of a watchdog-triggered crash (dom0 state, 10 ms pause, dom0 state,
>>>>> domain info, run queues).
>>>>>
>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>> ---
>>>>> V2:
>>>>> - switched special character '.' to '+' (Jan Beulich)
>>>>> - 10 ms instead of 1 s pause (Jan Beulich)
>>>>> - added more text to the boot parameter description (Jan Beulich)
>>>>>
>>>>> V3:
>>>>> - added const (Jan Beulich)
>>>>> - thorough test of crash reason parameter (Jan Beulich)
>>>>> - kexeccmd case should depend on CONFIG_KEXEC (Jan Beulich)
>>>>> - added dummy get_irq_regs() helper on Arm
>>>>>
>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>
>>>> Except for the Arm aspect, where I'm not sure using
>>>> guest_cpu_user_regs() is correct in all cases,
>>>
>>> I am not entirely sure I understand what get_irq_regs() is supposed to
>>> return on x86. Is it the registers saved from the most recent
>>> exception?
>>
>> An interrupt (not an exception) sets the underlying per-CPU
>> variable, such that interested parties will know the real
>> context is not guest or "normal" Xen code, but an IRQ.
>
> Thanks for the explanation. I am a bit confused as to why we need to give
> a regs to handle_keypress() because no-one seems to use it. Do you have
> an explanation?

dump_registers() (key 'd') is using it.

>=20
> To add to the confusion, it looks like that get_irqs_regs() may return =

> NULL. So sometimes we may pass guest_cpu_regs() (which may contain=20
> garbagge or a set too far).

I guess this is a best effort approach.

>=20
> I guess providing the wrong information to handle_keypress() is not=20
> going to matter that much because no-one use it (?). Although, I'd like=
=20
> to make sure this is not going to bite us in the future.

TBH using the 'd' handler isn't making that much sense, as the
information delivered would be of interest only in case of a panic(),
which is already printing that information.


Juergen

--------------24BA5CAC7FEDE7F694C2280D--

--kL9BXGOVnS7dKdZswMvkiGPLQIM50byAn--

--j8SyKl99jb6EcJZhKFnQ0QEA2yjymX0go--


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 07:12:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 07:12:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50156.88702 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kncao-0001rd-Ch; Fri, 11 Dec 2020 07:11:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50156.88702; Fri, 11 Dec 2020 07:11:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kncao-0001rW-93; Fri, 11 Dec 2020 07:11:54 +0000
Received: by outflank-mailman (input) for mailman id 50156;
 Fri, 11 Dec 2020 07:11:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kncan-0001rM-03; Fri, 11 Dec 2020 07:11:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kncam-0006BI-OP; Fri, 11 Dec 2020 07:11:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kncam-0006Cd-G0; Fri, 11 Dec 2020 07:11:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kncam-0008Vk-FU; Fri, 11 Dec 2020 07:11:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=HlkK6fOjAUAymfKSZILMYC/+8p2rp5hfLzNIGpvIwE0=; b=DaZeFjITE9bupfo12GKXRMoZZ1
	TrBy/Xnc+yl5WDoMDqPeYraIU88H+ROiii1h7u2d55rwQzi51+Tvzrh3ArwbJFz3Z3lgonnYd4JLC
	tVPbE5Ee8RaDcS4bTn513yz3JIMRKqqFZmT99Y0RoNEqM1wdaLEivMttuYEg2ciqRrhc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157410-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157410: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Dec 2020 07:11:52 +0000

flight 157410 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157410/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 157345
 build-i386                    6 xen-build                fail REGR. vs. 157345
 build-amd64-xsm               6 xen-build                fail REGR. vs. 157345
 build-i386-xsm                6 xen-build                fail REGR. vs. 157345

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    1 days
Failing since        157348  2020-12-09 15:39:39 Z    1 days   10 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 07:25:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 07:25:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50164.88716 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kncnL-00032K-IG; Fri, 11 Dec 2020 07:24:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50164.88716; Fri, 11 Dec 2020 07:24:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kncnL-00032D-Ew; Fri, 11 Dec 2020 07:24:51 +0000
Received: by outflank-mailman (input) for mailman id 50164;
 Fri, 11 Dec 2020 07:24:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9HZb=FP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kncnJ-00031h-Oc
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 07:24:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d37bc409-cd15-4e19-b310-18ecb16fec50;
 Fri, 11 Dec 2020 07:24:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B442FAF17;
 Fri, 11 Dec 2020 07:24:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d37bc409-cd15-4e19-b310-18ecb16fec50
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607671486; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=UVUpDzFOHRfE4xEVcTRl60YKx16iFfJU7TvXuWfyaQc=;
	b=Ut0pWvvggQwAWB0sUhnIoAKJlnJzlVCcT2nPWO0Aduv0XN4aZTDz1KYry8iwptJlfF1C5U
	2c35qqP3Oa4PjKkHQkDmUBuQQE7WohRRKt8CMC2cEsj6f+i0/2hS33h/snKrJYf+DW3whq
	ilICMw3tFAsZ2NoMjKeveWUydvxaC2o=
Subject: Re: [PATCH v3] xen: add support for automatic debug key actions in
 case of crash
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org
References: <20201126080340.6154-1-jgross@suse.com>
 <22190c77-eb35-5b72-7d72-34800c3f052f@suse.com>
 <98c45abd-8796-088c-e2a6-9ad494beeb9e@xen.org>
 <59f126a3-f716-345b-b464-746e6156c15a@suse.com>
 <1e305cf6-aa14-54cc-a77d-88bb38ba4c6e@xen.org>
 <7271b2f4-816a-5541-5402-50ea29218d81@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <077f3e02-0e07-1549-cc41-62b42177e19c@suse.com>
Date: Fri, 11 Dec 2020 08:24:41 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <7271b2f4-816a-5541-5402-50ea29218d81@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11.12.2020 08:02, Jürgen Groß wrote:
> On 10.12.20 21:51, Julien Grall wrote:
>> Hi Jan,
>>
>> On 09/12/2020 14:29, Jan Beulich wrote:
>>> On 09.12.2020 13:11, Julien Grall wrote:
>>>> On 26/11/2020 11:20, Jan Beulich wrote:
>>>>> On 26.11.2020 09:03, Juergen Gross wrote:
>>>>>> When the host crashes it would sometimes be nice to have additional
>>>>>> debug data available which could be produced via debug keys, but
>>>>>> halting the server for manual intervention might be impossible due to
>>>>>> the need to reboot/kexec rather sooner than later.
>>>>>>
>>>>>> Add support for automatic debug key actions in case of crashes which
>>>>>> can be activated via boot- or runtime-parameter.
>>>>>>
>>>>>> Depending on the type of crash the desired data might be different, so
>>>>>> support different settings for the possible types of crashes.
>>>>>>
>>>>>> The parameter is "crash-debug" with the following syntax:
>>>>>>
>>>>>>     crash-debug-<type>=<string>
>>>>>>
>>>>>> with <type> being one of:
>>>>>>
>>>>>>     panic, hwdom, watchdog, kexeccmd, debugkey
>>>>>>
>>>>>> and <string> a sequence of debug key characters with '+' having the
>>>>>> special semantics of a 10 millisecond pause.
>>>>>>
>>>>>> So "crash-debug-watchdog=0+0qr" would result in special output in case
>>>>>> of watchdog triggered crash (dom0 state, 10 ms pause, dom0 state,
>>>>>> domain info, run queues).
>>>>>>
>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>>> ---
>>>>>> V2:
>>>>>> - switched special character '.' to '+' (Jan Beulich)
>>>>>> - 10 ms instead of 1 s pause (Jan Beulich)
>>>>>> - added more text to the boot parameter description (Jan Beulich)
>>>>>>
>>>>>> V3:
>>>>>> - added const (Jan Beulich)
>>>>>> - thorough test of crash reason parameter (Jan Beulich)
>>>>>> - kexeccmd case should depend on CONFIG_KEXEC (Jan Beulich)
>>>>>> - added dummy get_irq_regs() helper on Arm
>>>>>>
>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>>
>>>>> Except for the Arm aspect, where I'm not sure using
>>>>> guest_cpu_user_regs() is correct in all cases,
>>>>
>>>> I am not entirely sure I understand what get_irq_regs() is supposed to
>>>> return on x86. Is it the registers saved from the most recent
>>>> exception?
>>>
>>> An interrupt (not an exception) sets the underlying per-CPU
>>> variable, such that interested parties will know the real
>>> context is not guest or "normal" Xen code, but an IRQ.
>>
>> Thanks for the explanation. I am a bit confused as to why we need to give
>> regs to handle_keypress(), because no-one seems to use it. Do you have an
>> explanation?
> 
> dump_registers() (key 'd') is using it.
> 
>>
>> To add to the confusion, it looks like get_irq_regs() may return
>> NULL. So sometimes we may pass guest_cpu_regs() (which may contain
>> garbage or a set from too far back).
> 
> I guess this is a best-effort approach.

Indeed. If there are ways to make it "more best", we should of
course follow them. (Except before Dom0 starts, I'm afraid I
don't see where garbage would come from. And even then,
just like for the idle vCPUs, it shouldn't really be garbage,
or else this suggests missing initialization somewhere.)

>> I guess providing the wrong information to handle_keypress() is not
>> going to matter that much because no-one uses it (?). Although, I'd like
>> to make sure this is not going to bite us in the future.
> 
> TBH using the 'd' handler doesn't make that much sense, as the
> information delivered would be of interest only in case of a panic(),
> which is already printing that information.

I disagree. I've had numerous cases where I found this key very
useful. Or do you mean what you say just for the new purpose
added here?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 07:29:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 07:29:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50170.88729 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kncro-0003Dj-1h; Fri, 11 Dec 2020 07:29:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50170.88729; Fri, 11 Dec 2020 07:29:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kncrn-0003Dc-V3; Fri, 11 Dec 2020 07:29:27 +0000
Received: by outflank-mailman (input) for mailman id 50170;
 Fri, 11 Dec 2020 07:29:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XUOP=FP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kncrm-0003DX-7D
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 07:29:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 67638adc-6b35-4e3d-9229-58b3d12efaab;
 Fri, 11 Dec 2020 07:29:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 12DE8AD89;
 Fri, 11 Dec 2020 07:29:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 67638adc-6b35-4e3d-9229-58b3d12efaab
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607671764; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=fPwAidkTrgf97zPXZszv7FmlRWHvgtBbJyaopcekBzQ=;
	b=hFNKxjrOkkvgjXmOlhjDFHWUhdLU4+mFBdj5FV7O66TEaBdRv1LrsZkMh3RAYK2+PA8dea
	115l7xaIH2F3jkwJu7mcCCmrN6tjhMrOjjImH75/EK2orywE5SgD5NLO8G4nAenGugHUi5
	152EGphoiwqr68mF5jgZKikj7qKmVWo=
Subject: Re: [PATCH v3] xen: add support for automatic debug key actions in
 case of crash
To: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org
References: <20201126080340.6154-1-jgross@suse.com>
 <22190c77-eb35-5b72-7d72-34800c3f052f@suse.com>
 <98c45abd-8796-088c-e2a6-9ad494beeb9e@xen.org>
 <59f126a3-f716-345b-b464-746e6156c15a@suse.com>
 <1e305cf6-aa14-54cc-a77d-88bb38ba4c6e@xen.org>
 <7271b2f4-816a-5541-5402-50ea29218d81@suse.com>
 <077f3e02-0e07-1549-cc41-62b42177e19c@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <6c3d19ef-0d35-df66-5e12-29b429b7508c@suse.com>
Date: Fri, 11 Dec 2020 08:29:23 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <077f3e02-0e07-1549-cc41-62b42177e19c@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="VFLHCm6CY74osF9WDGGBBLkwrxodNhg0l"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--VFLHCm6CY74osF9WDGGBBLkwrxodNhg0l
Content-Type: multipart/mixed; boundary="GKqxofmATCbfupQHtnk6QK3EdX5MgUoBC";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org
Message-ID: <6c3d19ef-0d35-df66-5e12-29b429b7508c@suse.com>
Subject: Re: [PATCH v3] xen: add support for automatic debug key actions in
 case of crash
References: <20201126080340.6154-1-jgross@suse.com>
 <22190c77-eb35-5b72-7d72-34800c3f052f@suse.com>
 <98c45abd-8796-088c-e2a6-9ad494beeb9e@xen.org>
 <59f126a3-f716-345b-b464-746e6156c15a@suse.com>
 <1e305cf6-aa14-54cc-a77d-88bb38ba4c6e@xen.org>
 <7271b2f4-816a-5541-5402-50ea29218d81@suse.com>
 <077f3e02-0e07-1549-cc41-62b42177e19c@suse.com>
In-Reply-To: <077f3e02-0e07-1549-cc41-62b42177e19c@suse.com>

--GKqxofmATCbfupQHtnk6QK3EdX5MgUoBC
Content-Type: multipart/mixed;
 boundary="------------830E8BAD214008B147031534"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------830E8BAD214008B147031534
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 11.12.20 08:24, Jan Beulich wrote:
> On 11.12.2020 08:02, Jürgen Groß wrote:
>> On 10.12.20 21:51, Julien Grall wrote:
>>> Hi Jan,
>>>
>>> On 09/12/2020 14:29, Jan Beulich wrote:
>>>> On 09.12.2020 13:11, Julien Grall wrote:
>>>>> On 26/11/2020 11:20, Jan Beulich wrote:
>>>>>> On 26.11.2020 09:03, Juergen Gross wrote:
>>>>>>> When the host crashes it would sometimes be nice to have additional
>>>>>>> debug data available which could be produced via debug keys, but
>>>>>>> halting the server for manual intervention might be impossible due to
>>>>>>> the need to reboot/kexec rather sooner than later.
>>>>>>>
>>>>>>> Add support for automatic debug key actions in case of crashes which
>>>>>>> can be activated via boot- or runtime-parameter.
>>>>>>>
>>>>>>> Depending on the type of crash the desired data might be different, so
>>>>>>> support different settings for the possible types of crashes.
>>>>>>>
>>>>>>> The parameter is "crash-debug" with the following syntax:
>>>>>>>
>>>>>>>     crash-debug-<type>=<string>
>>>>>>>
>>>>>>> with <type> being one of:
>>>>>>>
>>>>>>>     panic, hwdom, watchdog, kexeccmd, debugkey
>>>>>>>
>>>>>>> and <string> a sequence of debug key characters with '+' having the
>>>>>>> special semantics of a 10 millisecond pause.
>>>>>>>
>>>>>>> So "crash-debug-watchdog=0+0qr" would result in special output in case
>>>>>>> of watchdog triggered crash (dom0 state, 10 ms pause, dom0 state,
>>>>>>> domain info, run queues).
>>>>>>>
>>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>>>> ---
>>>>>>> V2:
>>>>>>> - switched special character '.' to '+' (Jan Beulich)
>>>>>>> - 10 ms instead of 1 s pause (Jan Beulich)
>>>>>>> - added more text to the boot parameter description (Jan Beulich)
>>>>>>>
>>>>>>> V3:
>>>>>>> - added const (Jan Beulich)
>>>>>>> - thorough test of crash reason parameter (Jan Beulich)
>>>>>>> - kexeccmd case should depend on CONFIG_KEXEC (Jan Beulich)
>>>>>>> - added dummy get_irq_regs() helper on Arm
>>>>>>>
>>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>>>
>>>>>> Except for the Arm aspect, where I'm not sure using
>>>>>> guest_cpu_user_regs() is correct in all cases,
>>>>>
>>>>> I am not entirely sure I understand what get_irq_regs() is supposed to
>>>>> return on x86. Is it the registers saved from the most recent
>>>>> exception?
>>>>
>>>> An interrupt (not an exception) sets the underlying per-CPU
>>>> variable, such that interested parties will know the real
>>>> context is not guest or "normal" Xen code, but an IRQ.
>>>
>>> Thanks for the explanation. I am a bit confused as to why we need to give
>>> regs to handle_keypress(), because no-one seems to use it. Do you have an
>>> explanation?
>>
>> dump_registers() (key 'd') is using it.
>>
>>>
>>> To add to the confusion, it looks like get_irq_regs() may return
>>> NULL. So sometimes we may pass guest_cpu_regs() (which may contain
>>> garbage or a set from too far back).
>>
>> I guess this is a best-effort approach.
> 
> Indeed. If there are ways to make it "more best", we should of
> course follow them. (Except before Dom0 starts, I'm afraid I
> don't see where garbage would come from. And even then,
> just like for the idle vCPUs, it shouldn't really be garbage,
> or else this suggests missing initialization somewhere.)
> 
>>> I guess providing the wrong information to handle_keypress() is not
>>> going to matter that much because no-one uses it (?). Although, I'd like
>>> to make sure this is not going to bite us in the future.
>>
>> TBH using the 'd' handler doesn't make that much sense, as the
>> information delivered would be of interest only in case of a panic(),
>> which is already printing that information.
> 
> I disagree. I've had numerous cases where I found this key very
> useful. Or do you mean what you say just for the new purpose
> added here?

Just for the new purpose.


Juergen



From xen-devel-bounces@lists.xenproject.org Fri Dec 11 08:10:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 08:10:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50181.88746 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kndVN-0000Bp-BV; Fri, 11 Dec 2020 08:10:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50181.88746; Fri, 11 Dec 2020 08:10:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kndVN-0000Bi-8N; Fri, 11 Dec 2020 08:10:21 +0000
Received: by outflank-mailman (input) for mailman id 50181;
 Fri, 11 Dec 2020 08:10:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kndVL-0000Ba-Vk; Fri, 11 Dec 2020 08:10:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kndVL-0007tg-NB; Fri, 11 Dec 2020 08:10:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kndVL-0007zi-Dm; Fri, 11 Dec 2020 08:10:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kndVL-0008E7-DH; Fri, 11 Dec 2020 08:10:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=RYlpjxT2FfkE6d4BGj6QV4tNu3JbBglkJr/0ovdKbQI=; b=XuOwNgokib7TanqPQxTj7iz/G5
	Kf0PLzL9siPCDDCEasgv6j4mVchM2WES9+OuEa90KnMzMx6xaBhXkU2cvO5yVZjJbUA7w02dV8+p6
	f80svC2YbVTXQgERWzDDcqN806IeNGSuhuO2Sjj5Yv/yDWH+g8rjp0CrXwvoppVeq8Og=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157395-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157395: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=9fca90cf28920c6d0723d7efd1eae0b0fb90309c
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Dec 2020 08:10:19 +0000

flight 157395 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157395/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl          11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                9fca90cf28920c6d0723d7efd1eae0b0fb90309c
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  132 days
Failing since        152366  2020-08-01 20:49:34 Z  131 days  227 attempts
Testing same since   157395  2020-12-10 22:11:19 Z    0 days    1 attempts

------------------------------------------------------------
3659 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 701672 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 08:22:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 08:22:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50192.88761 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kndh8-0001Vn-F9; Fri, 11 Dec 2020 08:22:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50192.88761; Fri, 11 Dec 2020 08:22:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kndh8-0001Vg-CD; Fri, 11 Dec 2020 08:22:30 +0000
Received: by outflank-mailman (input) for mailman id 50192;
 Fri, 11 Dec 2020 08:22:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0WZY=FP=linaro.org=linus.walleij@srs-us1.protection.inumbo.net>)
 id 1kndh6-0001Vb-Vi
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 08:22:29 +0000
Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ea2351fe-2a03-4101-8b3c-0a0e64b5593b;
 Fri, 11 Dec 2020 08:22:27 +0000 (UTC)
Received: by mail-lf1-x142.google.com with SMTP id m12so12201002lfo.7
 for <xen-devel@lists.xenproject.org>; Fri, 11 Dec 2020 00:22:27 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ea2351fe-2a03-4101-8b3c-0a0e64b5593b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=2E9lnRJNmKjai1CIMZDErJ15p/JF+kjk/cqnxyQj4Dg=;
        b=Sg4cSLWy6IZckqqkeAYWtdI97FK7RQhI79W1qCeW5ejotUknN/MdP8pOZwiJvJO7uZ
         1g0ChmDEvTyZ23ejJpGuTfR48BiteLYb+OL/KdYtSnujk6eEaD9Otld8+8Y1g2J3MkZy
         tBoVa7+XwKCBx80qyQ1lEL4q5LEMtfg9bICdHSQfD91by6GBTt8+YUD0LCicXSAecwzu
         UGwPYj7soqTu0UOqubjIEnM4Y4/SCD8/nRHiZ64GIKuX4gCcysUu5Anc/HkVqEhjxidw
         EOW9o76j1CYEbDn1Eew6P3PNlPF6IkqqOz0I8fQQT/K3wBjEt9vG4/VgfYmY3wimDZ9S
         ttIQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=2E9lnRJNmKjai1CIMZDErJ15p/JF+kjk/cqnxyQj4Dg=;
        b=phd2UmfAKu1u4CSkuo0ezhxtrBgftnfbOH6npVJ4W4oAz/DiU+h40VHSmB/k1c9Xjw
         cTIrHG2Pi8GUOSw67uO5YBDDcJw6KQOlazDN+nTPjT0QaBTuJKV06FEKerI3n1EwQ86G
         jXCi6Fa4mFviHL4E7kRcNW1fsJ1cq5kd9TCidypflhfkfDJNjcxovItMF5C7c03F8eBA
         SljBXd1YGPPEZ8G+GjCZa9f32V3RVrzuEnqBuRW8EaY3MsHV6MBwa61fSlvcKug3XZqR
         4UVA2KGK3IcK1YJWj2OmhYO+H9OC769qvlZjgDOCm9kOAKgyLcZyzm1Myg5y+UHFBrPh
         BToQ==
X-Gm-Message-State: AOAM531RvU4i8kHuICbR3ktbe0VGft0W3j52V0xpk8I1V2idvUfTTaAw
	+8p2VJxug1aiIs39a5lQpPWtzmnCXAV2lBMnU7HS2Q==
X-Google-Smtp-Source: ABdhPJy4uaPybRyNu7M8RrLozSmYT+Qwt4yF5yxQSb1/0NkgevgBErArzujQXpmZiKS3OBl0PE4C3Qi2TwkDPG47X1M=
X-Received: by 2002:a19:8384:: with SMTP id f126mr3904234lfd.649.1607674946619;
 Fri, 11 Dec 2020 00:22:26 -0800 (PST)
MIME-Version: 1.0
References: <20201210192536.118432146@linutronix.de> <20201210194044.157283633@linutronix.de>
In-Reply-To: <20201210194044.157283633@linutronix.de>
From: Linus Walleij <linus.walleij@linaro.org>
Date: Fri, 11 Dec 2020 09:22:15 +0100
Message-ID: <CACRpkdZuPp0KN1BCJ26vWH1=nopaD-ctv6bh-rt2X9bJczZE-Q@mail.gmail.com>
Subject: Re: [patch 16/30] mfd: ab8500-debugfs: Remove the racy fiddling with irq_desc
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, Peter Zijlstra <peterz@infradead.org>, 
	Marc Zyngier <maz@kernel.org>, Lee Jones <lee.jones@linaro.org>, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, 
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller <deller@gmx.de>, 
	afzal mohammed <afzal.mohd.ma@gmail.com>, linux-parisc@vger.kernel.org, 
	Russell King <linux@armlinux.org.uk>, Mark Rutland <mark.rutland@arm.com>, 
	Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, 
	Christian Borntraeger <borntraeger@de.ibm.com>, Heiko Carstens <hca@linux.ibm.com>, linux-s390@vger.kernel.org, 
	Jani Nikula <jani.nikula@linux.intel.com>, 
	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>, Rodrigo Vivi <rodrigo.vivi@intel.com>, 
	David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>, 
	Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>, 
	Chris Wilson <chris@chris-wilson.co.uk>, Wambui Karuga <wambui.karugax@gmail.com>, 
	intel-gfx <intel-gfx@lists.freedesktop.org>, 
	"open list:DRM PANEL DRIVERS" <dri-devel@lists.freedesktop.org>, 
	Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>, 
	"open list:GPIO SUBSYSTEM" <linux-gpio@vger.kernel.org>, Jon Mason <jdmason@kudzu.us>, 
	Dave Jiang <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>, linux-ntb@googlegroups.com, 
	Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>, Rob Herring <robh@kernel.org>, 
	Bjorn Helgaas <bhelgaas@google.com>, Michal Simek <michal.simek@xilinx.com>, 
	linux-pci <linux-pci@vger.kernel.org>, 
	Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>, Hou Zhiqiang <Zhiqiang.Hou@nxp.com>, 
	Tariq Toukan <tariqt@nvidia.com>, "David S. Miller" <davem@davemloft.net>, 
	Jakub Kicinski <kuba@kernel.org>, netdev <netdev@vger.kernel.org>, linux-rdma@vger.kernel.org, 
	Saeed Mahameed <saeedm@nvidia.com>, Leon Romanovsky <leon@kernel.org>, 
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

On Thu, Dec 10, 2020 at 8:42 PM Thomas Gleixner <tglx@linutronix.de> wrote:

> First of all drivers have absolutely no business to dig into the internals
> of an irq descriptor. That's core code and subject to change. All of this
> information is readily available via /proc/interrupts in a safe and race
> free way.
>
> Remove the inspection code which is a blatant violation of subsystem
> boundaries and racy against concurrent modifications of the interrupt
> descriptor.
>
> Print the irq line instead so the information can be looked up in a sane
> way in /proc/interrupts.
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Linus Walleij <linus.walleij@linaro.org>
> Cc: Lee Jones <lee.jones@linaro.org>
> Cc: linux-arm-kernel@lists.infradead.org

Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

Yours,
Linus Walleij


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 08:51:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 08:51:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50202.88773 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kne8w-0004ao-Q7; Fri, 11 Dec 2020 08:51:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50202.88773; Fri, 11 Dec 2020 08:51:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kne8w-0004ah-Mn; Fri, 11 Dec 2020 08:51:14 +0000
Received: by outflank-mailman (input) for mailman id 50202;
 Fri, 11 Dec 2020 08:51:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kne8v-0004aZ-7Q; Fri, 11 Dec 2020 08:51:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kne8v-0000IV-1c; Fri, 11 Dec 2020 08:51:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kne8u-00017r-PB; Fri, 11 Dec 2020 08:51:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kne8u-0006lK-Of; Fri, 11 Dec 2020 08:51:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sKaNeiDOoisEf4wBpBc5/M2vHXOwVBQzeuyfFtwP8cM=; b=XPelqUmM6EyGinzmjq7eAhm5c2
	rveL01QgNiNwMRyW45A5XMEQ1V54TeaehRED3Zvku2oZ0cBtROyhTAf4nMLp3yyfbQLkxzzELb+hr
	BRgPBziyLnz+T5e6QhxT7sA9QOdOqtkWIjvNpIAe0z0JIQnF45fonAz7O5IDtZOSpJCM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157412-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157412: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Dec 2020 08:51:12 +0000

flight 157412 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157412/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    1 days
Failing since        157348  2020-12-09 15:39:39 Z    1 days   11 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 08:53:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 08:53:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50209.88789 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kneAq-0004mX-Du; Fri, 11 Dec 2020 08:53:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50209.88789; Fri, 11 Dec 2020 08:53:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kneAq-0004mQ-A7; Fri, 11 Dec 2020 08:53:12 +0000
Received: by outflank-mailman (input) for mailman id 50209;
 Fri, 11 Dec 2020 08:53:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XUOP=FP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kneAp-0004mK-Pv
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 08:53:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1e82309c-62e5-46ab-8ccc-55d6c4e42427;
 Fri, 11 Dec 2020 08:53:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 36BC8ACBD;
 Fri, 11 Dec 2020 08:53:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1e82309c-62e5-46ab-8ccc-55d6c4e42427
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607676790; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=W1if7SktlQyaw0k06QJfTAiDbXDHkI8vZQRt4T89FBQ=;
	b=I64aHuRUrWqVzr2kNc9mF1V9MEivQ/+7hRCyNHqCCt7QbNWKcQc5CHwWJdwu3l19JdHdZ1
	nrqyLMLMHhs4FIO7BZt8lnmKNPRGFQCnJ9pVe7q2aO1vnbuBLtfDiD09z6n1v5+2CkMa2y
	RAGw+7KCkQ5qaLQx8RfMSuTJKyQnnqA=
From: Juergen Gross <jgross@suse.com>
To: torvalds@linux-foundation.org
Cc: linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: [GIT PULL] xen: branch for v5.10-rc8
Date: Fri, 11 Dec 2020 09:53:09 +0100
Message-Id: <20201211085309.8128-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.10c-rc8-tag

xen: branch for v5.10-rc8

It contains a short series fixing a regression introduced in 5.9 for
running as Xen dom0 on a system with NVMe backed storage.

Thanks.

Juergen

 drivers/block/xen-blkback/blkback.c |  89 +++++---------------------
 drivers/block/xen-blkback/common.h  |   4 +-
 drivers/block/xen-blkback/xenbus.c  |   6 +-
 drivers/xen/grant-table.c           | 123 ++++++++++++++++++++++++++++++++++++
 drivers/xen/unpopulated-alloc.c     |  20 +++---
 drivers/xen/xen-scsiback.c          |  60 ++++--------------
 include/xen/grant_table.h           |  17 +++++
 7 files changed, 182 insertions(+), 137 deletions(-)

Juergen Gross (2):
      xen: add helpers for caching grant mapping pages
      xen: don't use page->lru for ZONE_DEVICE memory


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 08:59:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 08:59:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50224.88800 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kneGY-00050W-2m; Fri, 11 Dec 2020 08:59:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50224.88800; Fri, 11 Dec 2020 08:59:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kneGX-00050P-W9; Fri, 11 Dec 2020 08:59:05 +0000
Received: by outflank-mailman (input) for mailman id 50224;
 Fri, 11 Dec 2020 08:59:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9HZb=FP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kneGW-0004zc-KT
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 08:59:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dbc167af-a1d5-4605-9898-55368be07aab;
 Fri, 11 Dec 2020 08:58:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 20D7AACBD;
 Fri, 11 Dec 2020 08:58:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dbc167af-a1d5-4605-9898-55368be07aab
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607677135; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=mPnSiuGj9ULrNN2R6pZ+vGFndCcen1k29TTiIOfk5s0=;
	b=YIqKRN5LcOeGyyMkGW22KyiSgKQ+ZGMbaZKUlN2aq15eiP2w8ujsFIedeFm0PMIkItukNY
	ERbH2HjfuoVlbcwv5wJefGtbJ8HwwBroo6xQ6J6Jts6jI71Q8LWRBmP9GI0BUMb/nSZ7vg
	RlsbgQq8Ah8COmXvA1CfE2xOatEwGuo=
Subject: Re: dom0 PV looping on search_pre_exception_table()
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>
References: <20201209101512.GA1299@antioche.eu.org>
 <3f7e50bb-24ad-1e32-9ea1-ba87007d3796@citrix.com>
 <20201209135908.GA4269@antioche.eu.org>
 <c612616a-3fcd-be93-7594-20c0c3b71b7a@citrix.com>
 <20201209154431.GA4913@antioche.eu.org>
 <52e1b10d-75d4-63ac-f91e-cb8f0dcca493@citrix.com>
 <20201209163049.GA6158@antioche.eu.org>
 <30a71c9d-3eff-3727-9c61-e387b5bccc95@citrix.com>
 <20201209185714.GS1469@antioche.eu.org>
 <6c06abf1-7efe-f02c-536a-337a2704e265@citrix.com>
 <20201210095139.GA455@antioche.eu.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2c345ef9-1f05-f883-d294-7ac1b3851f08@suse.com>
Date: Fri, 11 Dec 2020 09:58:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201210095139.GA455@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 10.12.2020 10:51, Manuel Bouyer wrote:
> On Wed, Dec 09, 2020 at 07:08:41PM +0000, Andrew Cooper wrote:
>> Oh of course - we don't follow the exit-to-guest path on the way out here.
>>
>> As a gross hack to check that we've at least diagnosed the issue
>> appropriately, could you modify NetBSD to explicitly load the %ss
>> selector into %es (or any other free segment) before first entering user
>> context?
> 
> If I understood it properly, the user %ss is loaded by Xen from the
> trapframe when the guest switches from kernel to user mode, isn't it?
> So you mean setting %es to the same value in the trapframe?
> 
> Actually I used %fs because %es is set equal to %ds.
> Xen 4.13 boots fine with this change, but with 4.15 I get a loop of:
> 
> 
> (XEN) *** LDT: gl1e 0000000000000000 not present                               
> (XEN) *** pv_map_ldt_shadow_page(0x40) failed                                  

Could you please revert 9ff970564764 ("x86/mm: drop guest_get_eff_l1e()")?
I think there was a thinko there in that the change can't be split from
the bigger one which was part of the originally planned set for XSA-286.
We mustn't avoid the switching of page tables as long as
guest_get_eff{,_kern}_l1e() makes use of the linear page tables.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 09:35:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 09:35:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50241.88816 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kneps-0000gD-35; Fri, 11 Dec 2020 09:35:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50241.88816; Fri, 11 Dec 2020 09:35:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knepr-0000g6-VW; Fri, 11 Dec 2020 09:35:35 +0000
Received: by outflank-mailman (input) for mailman id 50241;
 Fri, 11 Dec 2020 09:35:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=byGL=FP=gmail.com=bmeng.cn@srs-us1.protection.inumbo.net>)
 id 1knepq-0000g1-7Q
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 09:35:34 +0000
Received: from mail-il1-x142.google.com (unknown [2607:f8b0:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id de204c78-ac46-4529-bb81-a3fbbcc83f23;
 Fri, 11 Dec 2020 09:35:33 +0000 (UTC)
Received: by mail-il1-x142.google.com with SMTP id 2so8192356ilg.9
 for <xen-devel@lists.xenproject.org>; Fri, 11 Dec 2020 01:35:32 -0800 (PST)
Received: from pek-vx-bsp2.wrs.com (unknown-124-94.windriver.com.
 [147.11.124.94])
 by smtp.gmail.com with ESMTPSA id g1sm4065362ioh.39.2020.12.11.01.35.25
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 11 Dec 2020 01:35:32 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: de204c78-ac46-4529-bb81-a3fbbcc83f23
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=A/cB+UmI9E0j+nJVr2oYBsjH7YKCVvR2Pxv7tFSl0zw=;
        b=STLAf7gl4jww/Y/EmM3KDbzh19tbx+o3mypDCyNXNlXCidHBThhWZcmNdusW7ktDOi
         vbjZvfoaGg62/JV5pqfI6H7Fn1RpvdkNxEAJHks2pEpV6jQQlL8HA/RdRLfrwSl3+vQM
         SVMxI2HuzJNy1pa8JNIztJLaV2+ePJSF8dexpVwYg/EExa6GYu5z5KRNTxyTN3Hg0C4S
         QeQloeUwxKXkvDNwhNRt3O3S6kRtb5ZSJGedvtp6n768WWjQPx3bQRtcnR8RMSfLIcmX
         Kmz01NksaYJWZIJ2DEM/9P7Pel2XLFnajZTWEVheM1V7fM2Gi/6+5lrBA3vjq7dj3fQx
         ZSRQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=A/cB+UmI9E0j+nJVr2oYBsjH7YKCVvR2Pxv7tFSl0zw=;
        b=nocLolxo/UwRiWU7z3+y4Q9setSw28TuxYd9uQwxKAHOXXBg4GVqb0U2mh4GbwvE+u
         AjFCZ02nOHEiJcTvnNv5re0DhJ7yPXJGUiAAjIWL6eWRkdNxq8qA3dKQByeUPRh9mVuH
         Cvd5z9urthY5mISYcCO0QNexAwAlab9Ts3G7MYqO76IcdOPe+xfOeBwvFWAEJVZ5JQL5
         putDv0qkZ99KOzcmVdbMo6H//aOjUojZfnucslSiIWk2gfRqbcFZ/ghFPSTn+IXI932W
         OROHW7BPsssLOHTv20l4dWkjc9C1juByxp1HT1vnQA7a+Y8kH0StAN2lcRZhJepT8x7H
         a+/g==
X-Gm-Message-State: AOAM531kUkJEfA2PXZ172/BjNk4UennC9Laa1jja5XREA/vGrajGQMED
	Wy5bpX791nMOihY53gvipeI=
X-Google-Smtp-Source: ABdhPJwVfhlAvs/EzRx/jVUtjRoK8Ceqg5JPndS+7DZoIvG8b2oUCVojGSmtF0TvrgKOfufEBw0vpA==
X-Received: by 2002:a92:dccb:: with SMTP id b11mr14709054ilr.36.1607679332523;
        Fri, 11 Dec 2020 01:35:32 -0800 (PST)
From: Bin Meng <bmeng.cn@gmail.com>
To: qemu-devel@nongnu.org
Cc: Bin Meng <bin.meng@windriver.com>,
	Alistair Francis <alistair@alistair23.me>,
	Andrew Jeffery <andrew@aj.id.au>,
	Anthony Perard <anthony.perard@citrix.com>,
	Beniamino Galvani <b.galvani@gmail.com>,
	=?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
	David Gibson <david@gibson.dropbear.id.au>,
	"Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
	Jason Wang <jasowang@redhat.com>,
	Joel Stanley <joel@jms.id.au>,
	Li Zhijian <lizhijian@cn.fujitsu.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Paul Durrant <paul@xen.org>,
	Peter Chubb <peter.chubb@nicta.com.au>,
	Peter Maydell <peter.maydell@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Zhang Chen <chen.zhang@intel.com>,
	qemu-arm@nongnu.org,
	qemu-ppc@nongnu.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v2 3/3] net: checksum: Introduce fine control over checksum type
Date: Fri, 11 Dec 2020 17:35:12 +0800
Message-Id: <1607679312-51325-3-git-send-email-bmeng.cn@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1607679312-51325-1-git-send-email-bmeng.cn@gmail.com>
References: <1607679312-51325-1-git-send-email-bmeng.cn@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Bin Meng <bin.meng@windriver.com>

At present net_checksum_calculate() blindly calculates all types of
checksums (IP, TCP, UDP). Some NICs may have a per type setting in
their BDs to control what checksum should be offloaded. To support
such hardware behavior, introduce a 'csum_flag' parameter to the
net_checksum_calculate() API to allow fine control over what type
checksum is calculated.

Existing users of this API are updated accordingly.

Signed-off-by: Bin Meng <bin.meng@windriver.com>

---

Changes in v2:
- update ftgmac100.c per Cédric Le Goater's suggestion
- simplify fsl_etsec and imx_fec checksum logic

 include/net/checksum.h        |  7 ++++++-
 hw/net/allwinner-sun8i-emac.c |  2 +-
 hw/net/cadence_gem.c          |  2 +-
 hw/net/fsl_etsec/rings.c      | 18 +++++++++---------
 hw/net/ftgmac100.c            | 13 ++++++++++++-
 hw/net/imx_fec.c              | 20 ++++++++------------
 hw/net/virtio-net.c           |  2 +-
 hw/net/xen_nic.c              |  2 +-
 net/checksum.c                | 18 ++++++++++++++----
 net/filter-rewriter.c         |  4 ++--
 10 files changed, 55 insertions(+), 33 deletions(-)

diff --git a/include/net/checksum.h b/include/net/checksum.h
index 05a0d27..7dec37e 100644
--- a/include/net/checksum.h
+++ b/include/net/checksum.h
@@ -21,11 +21,16 @@
 #include "qemu/bswap.h"
 struct iovec;
 
+#define CSUM_IP     0x01
+#define CSUM_TCP    0x02
+#define CSUM_UDP    0x04
+#define CSUM_ALL    (CSUM_IP | CSUM_TCP | CSUM_UDP)
+
 uint32_t net_checksum_add_cont(int len, uint8_t *buf, int seq);
 uint16_t net_checksum_finish(uint32_t sum);
 uint16_t net_checksum_tcpudp(uint16_t length, uint16_t proto,
                              uint8_t *addrs, uint8_t *buf);
-void net_checksum_calculate(uint8_t *data, int length);
+void net_checksum_calculate(uint8_t *data, int length, int csum_flag);
 
 static inline uint32_t
 net_checksum_add(int len, uint8_t *buf)
diff --git a/hw/net/allwinner-sun8i-emac.c b/hw/net/allwinner-sun8i-emac.c
index 38d3285..0427689 100644
--- a/hw/net/allwinner-sun8i-emac.c
+++ b/hw/net/allwinner-sun8i-emac.c
@@ -514,7 +514,7 @@ static void allwinner_sun8i_emac_transmit(AwSun8iEmacState *s)
         /* After the last descriptor, send the packet */
         if (desc.status2 & TX_DESC_STATUS2_LAST_DESC) {
             if (desc.status2 & TX_DESC_STATUS2_CHECKSUM_MASK) {
-                net_checksum_calculate(packet_buf, packet_bytes);
+                net_checksum_calculate(packet_buf, packet_bytes, CSUM_ALL);
             }
 
             qemu_send_packet(nc, packet_buf, packet_bytes);
diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
index 7a53469..9a4474a 100644
--- a/hw/net/cadence_gem.c
+++ b/hw/net/cadence_gem.c
@@ -1266,7 +1266,7 @@ static void gem_transmit(CadenceGEMState *s)
 
                 /* Is checksum offload enabled? */
                 if (s->regs[GEM_DMACFG] & GEM_DMACFG_TXCSUM_OFFL) {
-                    net_checksum_calculate(s->tx_packet, total_bytes);
+                    net_checksum_calculate(s->tx_packet, total_bytes, CSUM_ALL);
                 }
 
                 /* Update MAC statistics */
diff --git a/hw/net/fsl_etsec/rings.c b/hw/net/fsl_etsec/rings.c
index 628648a..121415a 100644
--- a/hw/net/fsl_etsec/rings.c
+++ b/hw/net/fsl_etsec/rings.c
@@ -183,13 +183,11 @@ static void process_tx_fcb(eTSEC *etsec)
     uint8_t *l3_header = etsec->tx_buffer + 8 + l3_header_offset;
     /* L4 header */
     uint8_t *l4_header = l3_header + l4_header_offset;
+    int csum = 0;
 
     /* if packet is IP4 and IP checksum is requested */
     if (flags & FCB_TX_IP && flags & FCB_TX_CIP) {
-        /* do IP4 checksum (TODO This function does TCP/UDP checksum
-         * but not sure if it also does IP4 checksum.) */
-        net_checksum_calculate(etsec->tx_buffer + 8,
-                etsec->tx_buffer_len - 8);
+        csum |= CSUM_IP;
     }
     /* TODO Check the correct usage of the PHCS field of the FCB in case the NPH
      * flag is on */
@@ -201,9 +199,7 @@ static void process_tx_fcb(eTSEC *etsec)
             /* if checksum is requested */
             if (flags & FCB_TX_CTU) {
                 /* do UDP checksum */
-
-                net_checksum_calculate(etsec->tx_buffer + 8,
-                        etsec->tx_buffer_len - 8);
+                csum |= CSUM_UDP;
             } else {
                 /* set checksum field to 0 */
                 l4_header[6] = 0;
@@ -211,10 +207,14 @@ static void process_tx_fcb(eTSEC *etsec)
             }
         } else if (flags & FCB_TX_CTU) { /* if TCP and checksum is requested */
             /* do TCP checksum */
-            net_checksum_calculate(etsec->tx_buffer + 8,
-                                   etsec->tx_buffer_len - 8);
+            csum |= CSUM_TCP;
         }
     }
+
+    if (csum) {
+        net_checksum_calculate(etsec->tx_buffer + 8,
+                               etsec->tx_buffer_len - 8, csum);
+    }
 }
 
 static void process_tx_bd(eTSEC         *etsec,
diff --git a/hw/net/ftgmac100.c b/hw/net/ftgmac100.c
index 782ff19..25685ba 100644
--- a/hw/net/ftgmac100.c
+++ b/hw/net/ftgmac100.c
@@ -564,6 +564,7 @@ static void ftgmac100_do_tx(FTGMAC100State *s, uint32_t tx_ring,
         ptr += len;
         frame_size += len;
         if (bd.des0 & FTGMAC100_TXDES0_LTS) {
+            int csum = 0;
 
             /* Check for VLAN */
             if (flags & FTGMAC100_TXDES1_INS_VLANTAG &&
@@ -573,8 +574,18 @@ static void ftgmac100_do_tx(FTGMAC100State *s, uint32_t tx_ring,
             }
 
             if (flags & FTGMAC100_TXDES1_IP_CHKSUM) {
-                net_checksum_calculate(s->frame, frame_size);
+                csum |= CSUM_IP;
             }
+            if (flags & FTGMAC100_TXDES1_TCP_CHKSUM) {
+                csum |= CSUM_TCP;
+            }
+            if (flags & FTGMAC100_TXDES1_UDP_CHKSUM) {
+                csum |= CSUM_UDP;
+            }
+            if (csum) {
+                net_checksum_calculate(s->frame, frame_size, csum);
+            }
+
             /* Last buffer in frame.  */
             qemu_send_packet(qemu_get_queue(s->nic), s->frame, frame_size);
             ptr = s->frame;
diff --git a/hw/net/imx_fec.c b/hw/net/imx_fec.c
index 2c14804..f03450c 100644
--- a/hw/net/imx_fec.c
+++ b/hw/net/imx_fec.c
@@ -561,22 +561,18 @@ static void imx_enet_do_tx(IMXFECState *s, uint32_t index)
         ptr += len;
         frame_size += len;
         if (bd.flags & ENET_BD_L) {
+            int csum = 0;
+
             if (bd.option & ENET_BD_PINS) {
-                struct ip_header *ip_hd = PKT_GET_IP_HDR(s->frame);
-                if (IP_HEADER_VERSION(ip_hd) == 4) {
-                    net_checksum_calculate(s->frame, frame_size);
-                }
+                csum |= (CSUM_TCP | CSUM_UDP);
             }
             if (bd.option & ENET_BD_IINS) {
-                struct ip_header *ip_hd = PKT_GET_IP_HDR(s->frame);
-                /* We compute checksum only for IPv4 frames */
-                if (IP_HEADER_VERSION(ip_hd) == 4) {
-                    uint16_t csum;
-                    ip_hd->ip_sum = 0;
-                    csum = net_raw_checksum((uint8_t *)ip_hd, sizeof(*ip_hd));
-                    ip_hd->ip_sum = cpu_to_be16(csum);
-                }
+                csum |= CSUM_IP;
+            }
+            if (csum) {
+                net_checksum_calculate(s->frame, frame_size, csum);
             }
+
             /* Last buffer in frame.  */
 
             qemu_send_packet(qemu_get_queue(s->nic), s->frame, frame_size);
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 044ac95..4082be3 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -1471,7 +1471,7 @@ static void work_around_broken_dhclient(struct virtio_net_hdr *hdr,
         (buf[12] == 0x08 && buf[13] == 0x00) && /* ethertype == IPv4 */
         (buf[23] == 17) && /* ip.protocol == UDP */
         (buf[34] == 0 && buf[35] == 67)) { /* udp.srcport == bootps */
-        net_checksum_calculate(buf, size);
+        net_checksum_calculate(buf, size, CSUM_UDP);
         hdr->flags &= ~VIRTIO_NET_HDR_F_NEEDS_CSUM;
     }
 }
diff --git a/hw/net/xen_nic.c b/hw/net/xen_nic.c
index 00a7fdf..5c815b4 100644
--- a/hw/net/xen_nic.c
+++ b/hw/net/xen_nic.c
@@ -174,7 +174,7 @@ static void net_tx_packets(struct XenNetDev *netdev)
                     tmpbuf = g_malloc(XC_PAGE_SIZE);
                 }
                 memcpy(tmpbuf, page + txreq.offset, txreq.size);
-                net_checksum_calculate(tmpbuf, txreq.size);
+                net_checksum_calculate(tmpbuf, txreq.size, CSUM_ALL);
                 qemu_send_packet(qemu_get_queue(netdev->nic), tmpbuf,
                                  txreq.size);
             } else {
diff --git a/net/checksum.c b/net/checksum.c
index dabd290..70f4eae 100644
--- a/net/checksum.c
+++ b/net/checksum.c
@@ -57,7 +57,7 @@ uint16_t net_checksum_tcpudp(uint16_t length, uint16_t proto,
     return net_checksum_finish(sum);
 }
 
-void net_checksum_calculate(uint8_t *data, int length)
+void net_checksum_calculate(uint8_t *data, int length, int csum_flag)
 {
     int mac_hdr_len, ip_len;
     struct ip_header *ip;
@@ -108,9 +108,11 @@ void net_checksum_calculate(uint8_t *data, int length)
     }
 
     /* Calculate IP checksum */
-    stw_he_p(&ip->ip_sum, 0);
-    csum = net_raw_checksum((uint8_t *)ip, IP_HDR_GET_LEN(ip));
-    stw_be_p(&ip->ip_sum, csum);
+    if (csum_flag & CSUM_IP) {
+        stw_he_p(&ip->ip_sum, 0);
+        csum = net_raw_checksum((uint8_t *)ip, IP_HDR_GET_LEN(ip));
+        stw_be_p(&ip->ip_sum, csum);
+    }
 
     if (IP4_IS_FRAGMENT(ip)) {
         return; /* a fragmented IP packet */
@@ -128,6 +130,10 @@ void net_checksum_calculate(uint8_t *data, int length)
     switch (ip->ip_p) {
     case IP_PROTO_TCP:
     {
+        if (!(csum_flag & CSUM_TCP)) {
+            return;
+        }
+
         tcp_header *tcp = (tcp_header *)(ip + 1);
 
         if (ip_len < sizeof(tcp_header)) {
@@ -148,6 +154,10 @@ void net_checksum_calculate(uint8_t *data, int length)
     }
     case IP_PROTO_UDP:
     {
+        if (!(csum_flag & CSUM_UDP)) {
+            return;
+        }
+
         udp_header *udp = (udp_header *)(ip + 1);
 
         if (ip_len < sizeof(udp_header)) {
diff --git a/net/filter-rewriter.c b/net/filter-rewriter.c
index e063a81..80caac5 100644
--- a/net/filter-rewriter.c
+++ b/net/filter-rewriter.c
@@ -114,7 +114,7 @@ static int handle_primary_tcp_pkt(RewriterState *rf,
             tcp_pkt->th_ack = htonl(ntohl(tcp_pkt->th_ack) + conn->offset);
 
             net_checksum_calculate((uint8_t *)pkt->data + pkt->vnet_hdr_len,
-                                   pkt->size - pkt->vnet_hdr_len);
+                                   pkt->size - pkt->vnet_hdr_len, CSUM_TCP);
         }
 
         /*
@@ -216,7 +216,7 @@ static int handle_secondary_tcp_pkt(RewriterState *rf,
             tcp_pkt->th_seq = htonl(ntohl(tcp_pkt->th_seq) - conn->offset);
 
             net_checksum_calculate((uint8_t *)pkt->data + pkt->vnet_hdr_len,
-                                   pkt->size - pkt->vnet_hdr_len);
+                                   pkt->size - pkt->vnet_hdr_len, CSUM_TCP);
         }
     }
 
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Fri Dec 11 09:43:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 09:43:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50250.88828 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knex7-0001iv-Rl; Fri, 11 Dec 2020 09:43:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50250.88828; Fri, 11 Dec 2020 09:43:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knex7-0001io-O5; Fri, 11 Dec 2020 09:43:05 +0000
Received: by outflank-mailman (input) for mailman id 50250;
 Fri, 11 Dec 2020 09:43:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9HZb=FP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1knex5-0001ij-SF
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 09:43:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9902b670-8dfe-4c8a-8bc7-d7ae342bf8c8;
 Fri, 11 Dec 2020 09:43:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9B8BBAD5A;
 Fri, 11 Dec 2020 09:43:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9902b670-8dfe-4c8a-8bc7-d7ae342bf8c8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607679780; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=MPNDFknFct4DE2OXIeQwMKd20dtrKuTG5KpVP1Qra+I=;
	b=agW2rTLblAgeGwkpaRGwKeGidQEsnkVC/OxBwxkV+vPYEFQ/XENhljH6ITmLlEyrpf+Gva
	aH0b7vTv2/PvQooWyWxR9WZb9TB6PwUwYXzhd+i3CbhmnSfEAzi8Cy+IpxcBa4dyDLVV54
	qDKZ30XU5I9weF9AwaBndbaiySvPHlI=
Subject: Re: [PATCH] x86/HVM: refine when to send mapcache invalidation
 request to qemu
To: Hongyan Xia <hx242@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>,
 "olekstysh@gmail.com" <olekstysh@gmail.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>
References: <f92f62bf-2f8d-34db-4be5-d3e6a4b9d580@suse.com>
 <c6bcaecf71f9e51bdac15c7f97c8ce8460bef306.camel@xen.org>
 <d522f01e-af5f-fc65-2888-2573dbcefcf5@suse.com>
 <e484c21ccda8c0c8049655288d7bf72f74f0de38.camel@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <af2d462e-be7c-9a8d-8421-69e3d6e0d948@suse.com>
Date: Fri, 11 Dec 2020 10:43:00 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <e484c21ccda8c0c8049655288d7bf72f74f0de38.camel@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 10.12.2020 16:55, Hongyan Xia wrote:
> On Thu, 2020-12-10 at 14:37 +0100, Jan Beulich wrote:
>> On 10.12.2020 14:09, Hongyan Xia wrote:
>>> On Mon, 2020-09-28 at 12:44 +0200, Jan Beulich wrote:
>>>> Plus finally there's no point sending the request for the local
>>>> domain
>>>> when the domain acted upon is a different one. If anything that
>>>> domain's
>>>> qemu's mapcache may need invalidating, but it's unclear how
>>>> useful
>>>> this
>>>> would be: That remote domain may not execute hypercalls at all,
>>>> and
>>>> hence may never make it to the point where the request actually
>>>> gets
>>>> issued. I guess the assumption is that such manipulation is not
>>>> supposed
>>>> to happen anymore once the guest has been started?
>>>
>>> I may still want to set the invalidation signal to true even if the
>>> domain acted on is not the local domain. I know the remote domain
>>> may
>>> never reach the point to issue the invalidate, but it sounds to me
>>> that
>>> the problem is not whether we should set the signal but whether we
>>> can
>>> change where the signal is checked to make sure the point of issue
>>> can
>>> be reliably triggered, and the latter can be done in a future
>>> patch.
>>
>> One of Paul's replies was quite helpful here: The main thing to
> 
> Hmm, I seem to not be able to see the whole thread...

This may have been on the thread which had prompted the creation
of this patch.

>> worry about is for the vCPU to not continue running before the
>> invalidation request was signaled (or else, aiui, qemu may serve
>> a subsequent emulation request by the guest incorrectly, because
>> of using the stale mapping). Hence I believe for a non-paused
>> guest remote operations simply cannot be allowed when they may
>> lead to the need for invalidation. Therefore yes, if we assume
>> the guest is paused in such cases, we could drop the "is current"
>> check, but we'd then still need to arrange for actual signaling
>> before the guest gets to run again. I wonder whether
>> handle_hvm_io_completion() (or its caller, hvm_do_resume(),
>> right after that other call) wouldn't be a good place to do so.
> 
> Actually, the existing code must assume that when QEMU is up, the only
> one that manipulates the p2m is the guest itself like you said.

Not sure what "that" you mean existing code must assume, and why
you would outright exclude external p2m manipulation when the
guest is up. At the very least in a hypothetical memory-hot-
unplug scenario manipulation would still necessarily occur from
outside the guest (yet without real need for pausing it). And
obviously mem-sharing and mem-paging are also doing external
manipulations, albeit maybe they pause the guest for every
change.

> If the
> caller is XENMEM_decrease_reservation, the code does not even check
> which p2m this is for and unconditionally sets the QEMU invalidate flag
> for the current domain.

This observation is part of what had prompted the change here.

> Although this assumption may simply be wrong
> now, so I agree care should be taken for remote p2m ops (I may need to
> read the code more to know how this should be done).

I believe the assumption stems from the time where the controlling
domain would necessarily be PV, and hence a decrease-reservation
request by a HVM domain could only have been for itself.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 09:52:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 09:52:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50261.88839 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knf62-0002nz-Nw; Fri, 11 Dec 2020 09:52:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50261.88839; Fri, 11 Dec 2020 09:52:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knf62-0002nr-Ks; Fri, 11 Dec 2020 09:52:18 +0000
Received: by outflank-mailman (input) for mailman id 50261;
 Fri, 11 Dec 2020 09:52:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d0pw=FP=intel.com=jani.nikula@srs-us1.protection.inumbo.net>)
 id 1knf61-0002nm-Bq
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 09:52:17 +0000
Received: from mga12.intel.com (unknown [192.55.52.136])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ed09696d-d007-410e-996f-ce74805d79dc;
 Fri, 11 Dec 2020 09:52:15 +0000 (UTC)
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
 by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 11 Dec 2020 01:52:14 -0800
Received: from dkreft-mobl1.ger.corp.intel.com (HELO localhost)
 ([10.249.158.206])
 by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 11 Dec 2020 01:52:00 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ed09696d-d007-410e-996f-ce74805d79dc
IronPort-SDR: s2Qni574t8YgehTm1gtnv3CLeXL4biW8f6va+2fW2T0jWbJkUwAyRi47+ynMTKOytygnmCDDB8
 AS3VIAzDE4Rw==
X-IronPort-AV: E=McAfee;i="6000,8403,9831"; a="153640765"
X-IronPort-AV: E=Sophos;i="5.78,411,1599548400"; 
   d="scan'208";a="153640765"
IronPort-SDR: PVl8OU8ROdRYNRcuuI1cwHnm0Pahwqqsf4O0coU6GZWsYYiN/hRTouvAiYE5Ccs0+xvEJJqBLa
 InJ3eAdxnHfQ==
X-IronPort-AV: E=Sophos;i="5.78,411,1599548400"; 
   d="scan'208";a="440808660"
From: Jani Nikula <jani.nikula@linux.intel.com>
To: Ville Syrjälä <ville.syrjala@linux.intel.com>, Thomas
 Gleixner <tglx@linutronix.de>
Cc: Mark Rutland <mark.rutland@arm.com>, Karthikeyan Mitran
 <m.karthikeyan@mobiveil.co.in>, Peter Zijlstra <peterz@infradead.org>,
 Catalin Marinas <catalin.marinas@arm.com>, Linus Walleij
 <linus.walleij@linaro.org>, dri-devel@lists.freedesktop.org, Chris Wilson
 <chris@chris-wilson.co.uk>, "James E.J. Bottomley"
 <James.Bottomley@hansenpartnership.com>, Russell King
 <linux@armlinux.org.uk>, afzal mohammed <afzal.mohd.ma@gmail.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, Rob Herring <robh@kernel.org>,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>, Dave Jiang
 <dave.jiang@intel.com>, Leon Romanovsky <leon@kernel.org>,
 linux-rdma@vger.kernel.org, Marc Zyngier <maz@kernel.org>, Helge Deller
 <deller@gmx.de>, Michal Simek <michal.simek@xilinx.com>, Christian
 Borntraeger <borntraeger@de.ibm.com>, linux-pci@vger.kernel.org,
 xen-devel@lists.xenproject.org, intel-gfx@lists.freedesktop.org, Wambui
 Karuga <wambui.karugax@gmail.com>, Allen Hubbe <allenbh@gmail.com>, Will
 Deacon <will@kernel.org>, linux-s390@vger.kernel.org, Heiko Carstens
 <hca@linux.ibm.com>, linux-gpio@vger.kernel.org, Stefano Stabellini
 <sstabellini@kernel.org>, Jakub Kicinski <kuba@kernel.org>, Bjorn Helgaas
 <bhelgaas@google.com>, Lee Jones <lee.jones@linaro.org>,
 linux-arm-kernel@lists.infradead.org, Juergen Gross <jgross@suse.com>,
 David Airlie <airlied@linux.ie>, linux-parisc@vger.kernel.org,
 netdev@vger.kernel.org, Hou Zhiqiang <Zhiqiang.Hou@nxp.com>, LKML
 <linux-kernel@vger.kernel.org>, Tariq Toukan <tariqt@nvidia.com>, Jon
 Mason <jdmason@kudzu.us>, linux-ntb@googlegroups.com, Saeed Mahameed
 <saeedm@nvidia.com>, "David S. Miller" <davem@davemloft.net>
Subject: Re: [Intel-gfx] [patch 13/30] drm/i915/lpe_audio: Remove pointless irq_to_desc() usage
In-Reply-To: <X9J7h+myHaraeoKH@intel.com>
Organization: Intel Finland Oy - BIC 0357606-4 - Westendinkatu 7, 02160 Espoo
References: <20201210192536.118432146@linutronix.de> <20201210194043.862572239@linutronix.de> <X9J7h+myHaraeoKH@intel.com>
Date: Fri, 11 Dec 2020 11:51:57 +0200
Message-ID: <87zh2k7jr6.fsf@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

On Thu, 10 Dec 2020, Ville Syrjälä <ville.syrjala@linux.intel.com> wrote:
> On Thu, Dec 10, 2020 at 08:25:49PM +0100, Thomas Gleixner wrote:
>> Nothing uses the result and nothing should ever use it in driver code.
>>
>> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
>> Cc: Jani Nikula <jani.nikula@linux.intel.com>
>> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
>> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
>> Cc: David Airlie <airlied@linux.ie>
>> Cc: Daniel Vetter <daniel@ffwll.ch>
>> Cc: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>
>> Cc: Chris Wilson <chris@chris-wilson.co.uk>
>> Cc: Wambui Karuga <wambui.karugax@gmail.com>
>> Cc: intel-gfx@lists.freedesktop.org
>> Cc: dri-devel@lists.freedesktop.org
>
> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>

Thomas, I presume you want to merge this series as a whole.

Acked-by: Jani Nikula <jani.nikula@intel.com>

for merging via whichever tree makes most sense. Please let us know if
you want us to pick this up via drm-intel instead.

>
>> ---
>>  drivers/gpu/drm/i915/display/intel_lpe_audio.c |    4 ----
>>  1 file changed, 4 deletions(-)
>>
>> --- a/drivers/gpu/drm/i915/display/intel_lpe_audio.c
>> +++ b/drivers/gpu/drm/i915/display/intel_lpe_audio.c
>> @@ -297,13 +297,9 @@ int intel_lpe_audio_init(struct drm_i915
>>   */
>>  void intel_lpe_audio_teardown(struct drm_i915_private *dev_priv)
>>  {
>> -	struct irq_desc *desc;
>> -
>>  	if (!HAS_LPE_AUDIO(dev_priv))
>>  		return;
>>
>> -	desc = irq_to_desc(dev_priv->lpe_audio.irq);
>> -
>>  	lpe_audio_platdev_destroy(dev_priv);
>>
>>  	irq_free_desc(dev_priv->lpe_audio.irq);
>>
>> _______________________________________________
>> Intel-gfx mailing list
>> Intel-gfx@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/intel-gfx

-- 
Jani Nikula, Intel Open Source Graphics Center


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 09:54:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 09:54:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50267.88852 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knf88-0002yO-8n; Fri, 11 Dec 2020 09:54:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50267.88852; Fri, 11 Dec 2020 09:54:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knf88-0002yH-5f; Fri, 11 Dec 2020 09:54:28 +0000
Received: by outflank-mailman (input) for mailman id 50267;
 Fri, 11 Dec 2020 09:54:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d0pw=FP=intel.com=jani.nikula@srs-us1.protection.inumbo.net>)
 id 1knf87-0002yC-5A
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 09:54:27 +0000
Received: from mga07.intel.com (unknown [134.134.136.100])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f0597931-65b3-4250-a916-54a0b1f037c0;
 Fri, 11 Dec 2020 09:54:24 +0000 (UTC)
Received: from orsmga003.jf.intel.com ([10.7.209.27])
 by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 11 Dec 2020 01:54:23 -0800
Received: from dkreft-mobl1.ger.corp.intel.com (HELO localhost)
 ([10.249.158.206])
 by orsmga003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 11 Dec 2020 01:54:06 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f0597931-65b3-4250-a916-54a0b1f037c0
IronPort-SDR: +amFNZ7mxAlYsnds9z7Y9ZRN81FoSbKLWQl6Loom5v8Sh7Dz9hTA03N88DAOFBs/oTWAfw/iDd
 N9szeEbi+ntw==
X-IronPort-AV: E=McAfee;i="6000,8403,9831"; a="238510584"
X-IronPort-AV: E=Sophos;i="5.78,411,1599548400"; 
   d="scan'208";a="238510584"
IronPort-SDR: /ufv3vGNGQifQf4WI2wmyr1gYEo9JLcct/eEa85fh974wAfL2ugGQDoLz9o6bLFhFYPLXeOH+B
 J5isAR9V2pBA==
X-IronPort-AV: E=Sophos;i="5.78,411,1599548400"; 
   d="scan'208";a="333982962"
From: Jani Nikula <jani.nikula@linux.intel.com>
To: Thomas Gleixner <tglx@linutronix.de>, LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>, Joonas Lahtinen
 <joonas.lahtinen@linux.intel.com>, Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>,
 intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, "James
 E.J. Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller
 <deller@gmx.de>, afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org, Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>, Heiko Carstens
 <hca@linux.ibm.com>, linux-s390@vger.kernel.org, Pankaj Bharadiya
 <pankaj.laxminarayan.bharadiya@intel.com>, Chris Wilson
 <chris@chris-wilson.co.uk>, Wambui Karuga <wambui.karugax@gmail.com>,
 Linus Walleij <linus.walleij@linaro.org>, linux-gpio@vger.kernel.org, Lee
 Jones <lee.jones@linaro.org>, Jon Mason <jdmason@kudzu.us>, Dave Jiang
 <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>, Michal
 Simek <michal.simek@xilinx.com>, linux-pci@vger.kernel.org, Karthikeyan
 Mitran <m.karthikeyan@mobiveil.co.in>, Hou Zhiqiang
 <Zhiqiang.Hou@nxp.com>, Tariq Toukan <tariqt@nvidia.com>, "David S.
 Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org, linux-rdma@vger.kernel.org, Saeed Mahameed
 <saeedm@nvidia.com>, Leon Romanovsky <leon@kernel.org>, Boris Ostrovsky
 <boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Subject: Re: [patch 14/30] drm/i915/pmu: Replace open coded kstat_irqs() copy
In-Reply-To: <20201210194043.957046529@linutronix.de>
Organization: Intel Finland Oy - BIC 0357606-4 - Westendinkatu 7, 02160 Espoo
References: <20201210192536.118432146@linutronix.de> <20201210194043.957046529@linutronix.de>
Date: Fri, 11 Dec 2020 11:54:03 +0200
Message-ID: <87wnxo7jno.fsf@intel.com>
MIME-Version: 1.0
Content-Type: text/plain

On Thu, 10 Dec 2020, Thomas Gleixner <tglx@linutronix.de> wrote:
> Driver code has no business with the internals of the irq descriptor.
>
> Aside of that the count is per interrupt line and therefore takes
> interrupts from other devices into account which share the interrupt line
> and are not handled by the graphics driver.
>
> Replace it with a pmu private count which only counts interrupts which
> originate from the graphics card.
>
> To avoid atomics or heuristics of some sort make the counter field
> 'unsigned long'. That limits the count to 4e9 on 32bit which is a lot and
> postprocessing can easily deal with the occasional wraparound.

I'll let Tvrtko and Chris review the substance here, but assuming they
don't object,

Acked-by: Jani Nikula <jani.nikula@intel.com>

for merging via whichever tree makes most sense.

>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
> Cc: Jani Nikula <jani.nikula@linux.intel.com>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Cc: David Airlie <airlied@linux.ie>
> Cc: Daniel Vetter <daniel@ffwll.ch>
> Cc: intel-gfx@lists.freedesktop.org
> Cc: dri-devel@lists.freedesktop.org
> ---
>  drivers/gpu/drm/i915/i915_irq.c |   34 ++++++++++++++++++++++++++++++++++
>  drivers/gpu/drm/i915/i915_pmu.c |   18 +-----------------
>  drivers/gpu/drm/i915/i915_pmu.h |    8 ++++++++
>  3 files changed, 43 insertions(+), 17 deletions(-)
>
> --- a/drivers/gpu/drm/i915/i915_irq.c
> +++ b/drivers/gpu/drm/i915/i915_irq.c
> @@ -60,6 +60,24 @@
>   * and related files, but that will be described in separate chapters.
>   */
>  
> +/*
> + * Interrupt statistic for PMU. Increments the counter only if the
> + * interrupt originated from the GPU so interrupts from a device which
> + * shares the interrupt line are not accounted.
> + */
> +static inline void pmu_irq_stats(struct drm_i915_private *priv,
> +				 irqreturn_t res)
> +{
> +	if (unlikely(res != IRQ_HANDLED))
> +		return;
> +
> +	/*
> +	 * A clever compiler translates that into INC. A not so clever one
> +	 * should at least prevent store tearing.
> +	 */
> +	WRITE_ONCE(priv->pmu.irq_count, priv->pmu.irq_count + 1);
> +}
> +
>  typedef bool (*long_pulse_detect_func)(enum hpd_pin pin, u32 val);
>  
>  static const u32 hpd_ilk[HPD_NUM_PINS] = {
> @@ -1599,6 +1617,8 @@ static irqreturn_t valleyview_irq_handle
>  		valleyview_pipestat_irq_handler(dev_priv, pipe_stats);
>  	} while (0);
>  
> +	pmu_irq_stats(dev_priv, ret);
> +
>  	enable_rpm_wakeref_asserts(&dev_priv->runtime_pm);
>  
>  	return ret;
> @@ -1676,6 +1696,8 @@ static irqreturn_t cherryview_irq_handle
>  		valleyview_pipestat_irq_handler(dev_priv, pipe_stats);
>  	} while (0);
>  
> +	pmu_irq_stats(dev_priv, ret);
> +
>  	enable_rpm_wakeref_asserts(&dev_priv->runtime_pm);
>  
>  	return ret;
> @@ -2103,6 +2125,8 @@ static irqreturn_t ilk_irq_handler(int i
>  	if (sde_ier)
>  		raw_reg_write(regs, SDEIER, sde_ier);
>  
> +	pmu_irq_stats(i915, ret);
> +
>  	/* IRQs are synced during runtime_suspend, we don't require a wakeref */
>  	enable_rpm_wakeref_asserts(&i915->runtime_pm);
>  
> @@ -2419,6 +2443,8 @@ static irqreturn_t gen8_irq_handler(int
>  
>  	gen8_master_intr_enable(regs);
>  
> +	pmu_irq_stats(dev_priv, IRQ_HANDLED);
> +
>  	return IRQ_HANDLED;
>  }
>  
> @@ -2514,6 +2540,8 @@ static __always_inline irqreturn_t
>  
>  	gen11_gu_misc_irq_handler(gt, gu_misc_iir);
>  
> +	pmu_irq_stats(i915, IRQ_HANDLED);
> +
>  	return IRQ_HANDLED;
>  }
>  
> @@ -3688,6 +3716,8 @@ static irqreturn_t i8xx_irq_handler(int
>  		i8xx_pipestat_irq_handler(dev_priv, iir, pipe_stats);
>  	} while (0);
>  
> +	pmu_irq_stats(dev_priv, ret);
> +
>  	enable_rpm_wakeref_asserts(&dev_priv->runtime_pm);
>  
>  	return ret;
> @@ -3796,6 +3826,8 @@ static irqreturn_t i915_irq_handler(int
>  		i915_pipestat_irq_handler(dev_priv, iir, pipe_stats);
>  	} while (0);
>  
> +	pmu_irq_stats(dev_priv, ret);
> +
>  	enable_rpm_wakeref_asserts(&dev_priv->runtime_pm);
>  
>  	return ret;
> @@ -3941,6 +3973,8 @@ static irqreturn_t i965_irq_handler(int
>  		i965_pipestat_irq_handler(dev_priv, iir, pipe_stats);
>  	} while (0);
>  
> +	pmu_irq_stats(dev_priv, IRQ_HANDLED);
> +
>  	enable_rpm_wakeref_asserts(&dev_priv->runtime_pm);
>  
>  	return ret;
> --- a/drivers/gpu/drm/i915/i915_pmu.c
> +++ b/drivers/gpu/drm/i915/i915_pmu.c
> @@ -423,22 +423,6 @@ static enum hrtimer_restart i915_sample(
>  	return HRTIMER_RESTART;
>  }
>  
> -static u64 count_interrupts(struct drm_i915_private *i915)
> -{
> -	/* open-coded kstat_irqs() */
> -	struct irq_desc *desc = irq_to_desc(i915->drm.pdev->irq);
> -	u64 sum = 0;
> -	int cpu;
> -
> -	if (!desc || !desc->kstat_irqs)
> -		return 0;
> -
> -	for_each_possible_cpu(cpu)
> -		sum += *per_cpu_ptr(desc->kstat_irqs, cpu);
> -
> -	return sum;
> -}
> -
>  static void i915_pmu_event_destroy(struct perf_event *event)
>  {
>  	struct drm_i915_private *i915 =
> @@ -581,7 +565,7 @@ static u64 __i915_pmu_event_read(struct
>  				   USEC_PER_SEC /* to MHz */);
>  			break;
>  		case I915_PMU_INTERRUPTS:
> -			val = count_interrupts(i915);
> +			val = READ_ONCE(pmu->irq_count);
>  			break;
>  		case I915_PMU_RC6_RESIDENCY:
>  			val = get_rc6(&i915->gt);
> --- a/drivers/gpu/drm/i915/i915_pmu.h
> +++ b/drivers/gpu/drm/i915/i915_pmu.h
> @@ -108,6 +108,14 @@ struct i915_pmu {
>  	 */
>  	ktime_t sleep_last;
>  	/**
> +	 * @irq_count: Number of interrupts
> +	 *
> +	 * Intentionally unsigned long to avoid atomics or heuristics on 32bit.
> +	 * 4e9 interrupts are a lot and postprocessing can really deal with an
> +	 * occasional wraparound easily. It's 32bit after all.
> +	 */
> +	unsigned long irq_count;
> +	/**
>  	 * @events_attr_group: Device events attribute group.
>  	 */
>  	struct attribute_group events_attr_group;
>

-- 
Jani Nikula, Intel Open Source Graphics Center


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 09:56:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 09:56:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50274.88864 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knf9k-000376-Lh; Fri, 11 Dec 2020 09:56:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50274.88864; Fri, 11 Dec 2020 09:56:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knf9k-00036z-Hf; Fri, 11 Dec 2020 09:56:08 +0000
Received: by outflank-mailman (input) for mailman id 50274;
 Fri, 11 Dec 2020 09:56:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knf9j-00036r-5k; Fri, 11 Dec 2020 09:56:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knf9i-0001f2-T8; Fri, 11 Dec 2020 09:56:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knf9i-0003Vm-Jv; Fri, 11 Dec 2020 09:56:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1knf9i-0003hE-JT; Fri, 11 Dec 2020 09:56:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BOQy4Hv4A8atRKGaWGrTZtmoDlQ9OghSAvF4IEoxltA=; b=40cLeUApUs88CMMEOm/ZR3XV8J
	VvmBFob8q9g3u0c6ZgPQmVmrI+GiLjh3seqIu+tlztDMhlV3CpHMR0tG+2wQmcJkALVniIHCz3Rew
	yuvbrRcO7s4c1JGFBHw8TeBF6GHjrQcBUU6FOoZTHEHepoj0SPCknKCjQbghDfzzI46M=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157398-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157398: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:build-amd64-xsm:xen-build:fail:regression
    xen-unstable:build-i386:xen-build:fail:regression
    xen-unstable:build-i386-xsm:xen-build:fail:regression
    xen-unstable:build-amd64:xen-build:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=777e3590f154e6a8af560dd318b9465fa168db20
X-Osstest-Versions-That:
    xen=777e3590f154e6a8af560dd318b9465fa168db20
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Dec 2020 09:56:06 +0000

flight 157398 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157398/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 157365
 build-i386                    6 xen-build                fail REGR. vs. 157365
 build-i386-xsm                6 xen-build                fail REGR. vs. 157365
 build-amd64                   6 xen-build                fail REGR. vs. 157365

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 157335
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157365
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157365
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  777e3590f154e6a8af560dd318b9465fa168db20
baseline version:
 xen                  777e3590f154e6a8af560dd318b9465fa168db20

Last test of basis   157398  2020-12-11 01:51:23 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64-xtf                                              pass    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Fri Dec 11 10:00:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 10:00:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50285.88879 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knfDq-00045B-DX; Fri, 11 Dec 2020 10:00:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50285.88879; Fri, 11 Dec 2020 10:00:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knfDq-000454-AT; Fri, 11 Dec 2020 10:00:22 +0000
Received: by outflank-mailman (input) for mailman id 50285;
 Fri, 11 Dec 2020 10:00:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9HZb=FP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1knfDp-00044z-BX
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 10:00:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 16ee8f74-13de-448a-ae1b-8a0c953f4ab0;
 Fri, 11 Dec 2020 10:00:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9BD5EACF1;
 Fri, 11 Dec 2020 10:00:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 16ee8f74-13de-448a-ae1b-8a0c953f4ab0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607680819; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JKr0IU1OHPnTj92qY4obtwJ5XOYnO0ZBGpisAWTb8pU=;
	b=GQS6a5T2PJwKxC88/EptTmt84koBd3kSBG9w2c9LN6wld1KHhjEEgNzM0oSR140o2jBC0B
	FkKrXsHPGvDT6vPyVZVSWsF7WD7M3mZx2+b3ZNYC+0wKSEvPfV4YUiaCGjSGVtOXXDPVvD
	hxkCZTVR8lWMOQt05wYRPf8kbQSxKC0=
Subject: Re: [PATCH v3 2/8] lib: collect library files in an archive
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
 <21714b83-8619-5aa9-be5b-3015d05a26a4@suse.com>
 <X9I1GCAM2nn8W8eN@perard.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <65b94fd1-c840-cb1b-51f7-c9a5b158cc1e@suse.com>
Date: Fri, 11 Dec 2020 11:00:19 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <X9I1GCAM2nn8W8eN@perard.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 10.12.2020 15:47, Anthony PERARD wrote:
> On Mon, Nov 23, 2020 at 04:21:19PM +0100, Jan Beulich wrote:
>> --- a/xen/Rules.mk
>> +++ b/xen/Rules.mk
>> @@ -60,7 +64,14 @@ include Makefile
>>  # ---------------------------------------------------------------------------
>>  
>>  quiet_cmd_ld = LD      $@
>> -cmd_ld = $(LD) $(XEN_LDFLAGS) -r -o $@ $(real-prereqs)
>> +cmd_ld = $(LD) $(XEN_LDFLAGS) -r -o $@ $(filter-out %.a,$(real-prereqs)) \
>> +               --start-group $(filter %.a,$(real-prereqs)) --end-group
> 
> It might be a bit weird to modify the generic LD command for the benefit
> of only prelink.o objects but it's probably fine as long as we only use
> archives for lib.a. libelf and libfdt will just have --start/end-group
> added to their ld command line. So I guess the change is fine.

I'm afraid I don't understand what the concern is. Neither libelf
nor libfdt use any %.a right now. Or are you referring to them
merely because it's just them which have got converted to using
$(call if-changed ...), and your remark would eventually apply to
e.g. built_in.o as well? And then further is all you're worried
about the fact that there may be "--start-group  --end-group" on
the command line, i.e. with nothing in between? If so, besides
possibly looking a little odd if someone inspected the command
lines closely, what possible issue do you see? (If there is one,
making the addition of both options conditional upon there being
any/multiple %.a in the first place wouldn't be a big problem,
albeit Linux also doesn't care whether ${KBUILD_VMLINUX_LIBS} is
empty.)
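For illustration only, a sketch (assuming GNU Make's $(if)/$(filter) functions, and the $(real-prereqs) variable from the quoted Rules.mk hunk) of how the group markers could be made conditional on any archives actually being present:

```make
# Hypothetical variant of cmd_ld: emit --start-group/--end-group only
# when at least one %.a archive is among the prerequisites, so the
# command line never carries an empty group.
cmd_ld = $(LD) $(XEN_LDFLAGS) -r -o $@ $(filter-out %.a,$(real-prereqs)) \
               $(if $(filter %.a,$(real-prereqs)), \
                    --start-group $(filter %.a,$(real-prereqs)) --end-group)
```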

> The rest looks good,
> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks, but I'd prefer the above clarified.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 10:04:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 10:04:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50295.88891 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knfI2-0004Pl-W7; Fri, 11 Dec 2020 10:04:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50295.88891; Fri, 11 Dec 2020 10:04:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knfI2-0004Pe-So; Fri, 11 Dec 2020 10:04:42 +0000
Received: by outflank-mailman (input) for mailman id 50295;
 Fri, 11 Dec 2020 10:04:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RkSQ=FP=linaro.org=lee.jones@srs-us1.protection.inumbo.net>)
 id 1knfI2-0004PZ-B6
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 10:04:42 +0000
Received: from mail-wr1-x444.google.com (unknown [2a00:1450:4864:20::444])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f5967531-dcf0-4cf9-96dc-5d72e79b4b80;
 Fri, 11 Dec 2020 10:04:41 +0000 (UTC)
Received: by mail-wr1-x444.google.com with SMTP id c5so4770098wrp.6
 for <xen-devel@lists.xenproject.org>; Fri, 11 Dec 2020 02:04:41 -0800 (PST)
Received: from dell ([91.110.221.240])
 by smtp.gmail.com with ESMTPSA id 125sm14307876wmc.27.2020.12.11.02.04.38
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 11 Dec 2020 02:04:39 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f5967531-dcf0-4cf9-96dc-5d72e79b4b80
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:content-transfer-encoding:in-reply-to;
        bh=1m1097L8Z65yNEwPVsqCFAhMTO7JcIQztqm0XMNCEPQ=;
        b=QxFc8GC7E8T+U1gL6XJZvjeXxiWbuXfRqv6+3/ZJOEoQYn8YiRuOLDZAWueGTraddr
         xPqz2bb/8UoAjcvd5vsoN8LW5Iq1Lp0AqSabcEPsXCi+l/VkgtRG+OzcSz2lDjE/Pdix
         jshzcxstN+qosX02zQCGS+/I2FYEYzEAEBcOtJSJ1mr8h1MxoB17ALXOASyS2RwWORLD
         sYyHzWywWpvht4yy3yfrgCqZgrWTC9ga2KkmmQ+oI2PP2v+Qn9n0ORXRDwcKBkqiCMKf
         N0XznwYuSkPJfUeKcTzGZ2pjw4N1X+j5jdnHapfeS7fwZVeyxQTBUj+e7/XfSTgXR3Ap
         ykzw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to;
        bh=1m1097L8Z65yNEwPVsqCFAhMTO7JcIQztqm0XMNCEPQ=;
        b=uGIfWrH1tv6o94axE00EL+tx6RWBN1ZhPM1+3qERSqYiFY1kPXL3QBQFEGETiYbPen
         vfH7bgvgMQQqs+DqbgvSh6H0VkIy/9T6fzLZQ7IZ9xUsxsVak5VK7QzsakalPijGfN4j
         +4kUnY6MMGHeZsqTUYEptK9wjdZDL5W9NzaZEFTwSXR4ko812anN4gtT/NVZE3p2X66B
         kc+pmWyZzWaOKxScmeQAsk4vL1lsVibuGf8szAGP1fA2st0vLfvW+gdZ/X57rTnoYQQQ
         /whJo1/cIuiz96MltxCdKxaEfeeJa0sSvM3a0T91Y2p8W11ATqeCAkyqFLSZaZiVf1lx
         8UbA==
X-Gm-Message-State: AOAM531ZSA+kP+e4a0T9kbFObPFT91AB/lkeSrU+v8L1VbW48u48D7aF
	l7Yc5T838qIywHmzou5xaDU58g==
X-Google-Smtp-Source: ABdhPJwPXfTZPDeiEu60EOOVLfjPDebdQDfkcu30zCcF3k07tbHpbRDSiAPsDeGxO9u3bWjkQ8cNJw==
X-Received: by 2002:a5d:6ccc:: with SMTP id c12mr13142414wrc.4.1607681080467;
        Fri, 11 Dec 2020 02:04:40 -0800 (PST)
Date: Fri, 11 Dec 2020 10:04:36 +0000
From: Lee Jones <lee.jones@linaro.org>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Marc Zyngier <maz@kernel.org>,
	Linus Walleij <linus.walleij@linaro.org>,
	linux-arm-kernel@lists.infradead.org,
	"James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
	Helge Deller <deller@gmx.de>,
	afzal mohammed <afzal.mohd.ma@gmail.com>,
	linux-parisc@vger.kernel.org, Russell King <linux@armlinux.org.uk>,
	Mark Rutland <mark.rutland@arm.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Heiko Carstens <hca@linux.ibm.com>, linux-s390@vger.kernel.org,
	Jani Nikula <jani.nikula@linux.intel.com>,
	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>,
	Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
	Chris Wilson <chris@chris-wilson.co.uk>,
	Wambui Karuga <wambui.karugax@gmail.com>,
	intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
	linux-gpio@vger.kernel.org, Jon Mason <jdmason@kudzu.us>,
	Dave Jiang <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>,
	linux-ntb@googlegroups.com,
	Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
	Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>,
	Michal Simek <michal.simek@xilinx.com>, linux-pci@vger.kernel.org,
	Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
	Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
	Tariq Toukan <tariqt@nvidia.com>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>, netdev@vger.kernel.org,
	linux-rdma@vger.kernel.org, Saeed Mahameed <saeedm@nvidia.com>,
	Leon Romanovsky <leon@kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [patch 16/30] mfd: ab8500-debugfs: Remove the racy fiddling with
 irq_desc
Message-ID: <20201211100436.GC5029@dell>
References: <20201210192536.118432146@linutronix.de>
 <20201210194044.157283633@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201210194044.157283633@linutronix.de>

On Thu, 10 Dec 2020, Thomas Gleixner wrote:

> First of all drivers have absolutely no business to dig into the internals
> of an irq descriptor. That's core code and subject to change. All of this
> information is readily available via /proc/interrupts in a safe and race
> free way.
> 
> Remove the inspection code which is a blatant violation of subsystem
> boundaries and racy against concurrent modifications of the interrupt
> descriptor.
> 
> Print the irq line instead so the information can be looked up in a sane
> way in /proc/interrupts.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Linus Walleij <linus.walleij@linaro.org>
> Cc: Lee Jones <lee.jones@linaro.org>
> Cc: linux-arm-kernel@lists.infradead.org
> ---
>  drivers/mfd/ab8500-debugfs.c |   16 +++-------------
>  1 file changed, 3 insertions(+), 13 deletions(-)

Acked-by: Lee Jones <lee.jones@linaro.org>

-- 
Lee Jones [李琼斯]
Senior Technical Lead - Developer Services
Linaro.org │ Open source software for Arm SoCs
Follow Linaro: Facebook | Twitter | Blog


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 10:13:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 10:13:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50305.88906 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knfQd-0005W9-Ua; Fri, 11 Dec 2020 10:13:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50305.88906; Fri, 11 Dec 2020 10:13:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knfQd-0005W2-RV; Fri, 11 Dec 2020 10:13:35 +0000
Received: by outflank-mailman (input) for mailman id 50305;
 Fri, 11 Dec 2020 10:13:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YvCS=FP=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knfQc-0005Vx-Km
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 10:13:34 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 05c504c2-a916-4904-9a79-4ff2af0040cd;
 Fri, 11 Dec 2020 10:13:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 05c504c2-a916-4904-9a79-4ff2af0040cd
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607681612;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=4TDbx5/eRxAHfo/4EOFfTsBBr8/iJ4NkSA5alVnInbw=;
	b=Ebn4XFcCDWYTM7X6Ch/mHznY5CAClfBbnhfLrO3Y6Kuikr6oFUaWSYQxhfOCU5ekEgEuk5
	ABxuPFqFDpUr/6fqEOmv/vOsnXIUuOejIYPyy0GJ0t4RBjN90FMlPlJ+tVDpM1qv0p7R8a
	xeZwcfSm29wJ/oRRI9rOhGLtzHCBhA3bpzeRWeG0Z24sHj1GQq05/L9FR4mQ32JHOYHSXu
	dDKuIj780YnhAu3jYewZAvLFsgNKDUqY7qYYXiJp4nkUDKsOaT+FgTS0xQYjkM48kJqr48
	89TXlGkoZBhTjLzmOxl42qVBK3QR64G1yObP38Wf6bryGtmLISA8jiMWLoDDjQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607681612;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=4TDbx5/eRxAHfo/4EOFfTsBBr8/iJ4NkSA5alVnInbw=;
	b=xExbb1LasawAWe5q2U6DGt3Co7x6OFVa9wgrktZhoDZT1DZdHSUTeF92DjXx94ng108/hg
	9efMO3KmpdkXHQDQ==
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 boris.ostrovsky@oracle.com, LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, "James E.J. Bottomley"
 <James.Bottomley@HansenPartnership.com>, Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>, linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>, Heiko Carstens
 <hca@linux.ibm.com>, linux-s390@vger.kernel.org, Jani Nikula
 <jani.nikula@linux.intel.com>, Joonas Lahtinen
 <joonas.lahtinen@linux.intel.com>, Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>, Pankaj
 Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>, Chris Wilson
 <chris@chris-wilson.co.uk>, Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, Tvrtko
 Ursulin <tvrtko.ursulin@linux.intel.com>, Linus Walleij
 <linus.walleij@linaro.org>, linux-gpio@vger.kernel.org, Lee Jones
 <lee.jones@linaro.org>, Jon Mason <jdmason@kudzu.us>, Dave Jiang
 <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>, Michal
 Simek <michal.simek@xilinx.com>, linux-pci@vger.kernel.org, Karthikeyan
 Mitran <m.karthikeyan@mobiveil.co.in>, Hou Zhiqiang
 <Zhiqiang.Hou@nxp.com>, Tariq Toukan <tariqt@nvidia.com>, "David S.
 Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org, linux-rdma@vger.kernel.org, Saeed Mahameed
 <saeedm@nvidia.com>, Leon Romanovsky <leon@kernel.org>
Subject: Re: [patch 27/30] xen/events: Only force affinity mask for percpu interrupts
In-Reply-To: <a4bce428-4420-6064-c7cc-7136a7544a52@suse.com>
References: <20201210192536.118432146@linutronix.de> <20201210194045.250321315@linutronix.de> <7f7af60f-567f-cdef-f8db-8062a44758ce@oracle.com> <a4bce428-4420-6064-c7cc-7136a7544a52@suse.com>
Date: Fri, 11 Dec 2020 11:13:31 +0100
Message-ID: <874kksiras.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

On Fri, Dec 11 2020 at 07:17, Jürgen Groß wrote:
> On 11.12.20 00:20, boris.ostrovsky@oracle.com wrote:
>>
>> On 12/10/20 2:26 PM, Thomas Gleixner wrote:
>>> All event channel setups bind the interrupt on CPU0 or the target CPU
>>> for percpu interrupts and overwrite the affinity mask with the
>>> corresponding cpumask. That does not make sense.
>>>
>>> The XEN implementation of irqchip::irq_set_affinity() already picks a
>>> single target CPU out of the affinity mask and the actual target is
>>> stored in the effective CPU mask, so destroying the user chosen
>>> affinity mask which might contain more than one CPU is wrong.
>>>
>>> Change the implementation so that the channel is bound to CPU0 at the
>>> XEN level and leave the affinity mask alone. At startup of the
>>> interrupt affinity will be assigned out of the affinity mask and the
>>> XEN binding will be updated.
>>
>>
>> If that's the case then I wonder whether we need this call at all and
>> instead bind at startup time.
>
> This binding to cpu0 was introduced with commit 97253eeeb792d61ed2
> and I have no reason to believe the underlying problem has been
> eliminated.

    "The kernel-side VCPU binding was not being correctly set for newly
     allocated or bound interdomain events.  In ARM guests where 2-level
     events were used, this would result in no interdomain events being
     handled because the kernel-side VCPU masks would all be clear.

     x86 guests would work because the irq affinity was set during irq
     setup and this would set the correct kernel-side VCPU binding."

I'm not convinced that this is really correctly analyzed because affinity
setting is done at irq startup.

        switch (__irq_startup_managed(desc, aff, force)) {
        case IRQ_STARTUP_NORMAL:
                ret = __irq_startup(desc);
                irq_setup_affinity(desc);
                break;

which is completely architecture agnostic. So why should this magically
work on x86 and not on ARM if both are using the same XEN irqchip with
the same irqchip callbacks?

Thanks,

        tglx




From xen-devel-bounces@lists.xenproject.org Fri Dec 11 10:13:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 10:13:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50306.88918 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knfQl-0005Ye-7z; Fri, 11 Dec 2020 10:13:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50306.88918; Fri, 11 Dec 2020 10:13:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knfQl-0005YW-42; Fri, 11 Dec 2020 10:13:43 +0000
Received: by outflank-mailman (input) for mailman id 50306;
 Fri, 11 Dec 2020 10:13:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=riDw=FP=linux.intel.com=tvrtko.ursulin@srs-us1.protection.inumbo.net>)
 id 1knfQj-0005Xv-5B
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 10:13:41 +0000
Received: from mga14.intel.com (unknown [192.55.52.115])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c62624c6-125c-4083-ae3a-8b632c09ba5d;
 Fri, 11 Dec 2020 10:13:40 +0000 (UTC)
Received: from fmsmga008.fm.intel.com ([10.253.24.58])
 by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 11 Dec 2020 02:13:38 -0800
Received: from ynaki-mobl1.ger.corp.intel.com (HELO [10.214.252.46])
 ([10.214.252.46])
 by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 11 Dec 2020 02:13:24 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c62624c6-125c-4083-ae3a-8b632c09ba5d
IronPort-SDR: MgrogHbCDcqKd4MJcKL5V92Lj0EAhxjNUtf16Ksen8CIJC2PbPspAJEp6RSXlKS1bIqBbDWp3Z
 /5x64WaKNpFw==
X-IronPort-AV: E=McAfee;i="6000,8403,9831"; a="173644269"
X-IronPort-AV: E=Sophos;i="5.78,411,1599548400"; 
   d="scan'208";a="173644269"
IronPort-SDR: RsQ7lmI/Ay9vaPccFC88f8JfxCXrqcVSXV/MMPaui8twEB+RtpNdJLZX44RdUN4vBcYtBFso8b
 xcHG9HzNR9pw==
X-IronPort-AV: E=Sophos;i="5.78,411,1599548400"; 
   d="scan'208";a="321689328"
Subject: Re: [patch 14/30] drm/i915/pmu: Replace open coded kstat_irqs() copy
To: Thomas Gleixner <tglx@linutronix.de>, LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>, David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>, intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>, afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org, Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>, linux-s390@vger.kernel.org,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>,
 Linus Walleij <linus.walleij@linaro.org>, linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>, Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>, linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>, Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>, Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross
 <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
References: <20201210192536.118432146@linutronix.de>
 <20201210194043.957046529@linutronix.de>
From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Organization: Intel Corporation UK Plc
Message-ID: <ad05af1a-5463-2a80-0887-7629721d6863@linux.intel.com>
Date: Fri, 11 Dec 2020 10:13:21 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201210194043.957046529@linutronix.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit


On 10/12/2020 19:25, Thomas Gleixner wrote:
> Driver code has no business with the internals of the irq descriptor.
> 
> Aside of that the count is per interrupt line and therefore takes
> interrupts from other devices into account which share the interrupt line
> and are not handled by the graphics driver.
> 
> Replace it with a pmu private count which only counts interrupts which
> originate from the graphics card.
> 
> To avoid atomics or heuristics of some sort make the counter field
> 'unsigned long'. That limits the count to 4e9 on 32bit which is a lot and
> postprocessing can easily deal with the occasional wraparound.

After my failed hasty sketch from last night I had a different one which 
was kind of heuristics based (re-reading the upper dword on 32-bit and 
retrying if it changed). But you are right - it is okay to start like 
this today, and if there is a need later we can either do that or deal 
with the wraparound at PMU read time.

So thanks for dealing with it; some small comments below, but overall it 
is fine.

> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
> Cc: Jani Nikula <jani.nikula@linux.intel.com>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Cc: David Airlie <airlied@linux.ie>
> Cc: Daniel Vetter <daniel@ffwll.ch>
> Cc: intel-gfx@lists.freedesktop.org
> Cc: dri-devel@lists.freedesktop.org
> ---
>   drivers/gpu/drm/i915/i915_irq.c |   34 ++++++++++++++++++++++++++++++++++
>   drivers/gpu/drm/i915/i915_pmu.c |   18 +-----------------
>   drivers/gpu/drm/i915/i915_pmu.h |    8 ++++++++
>   3 files changed, 43 insertions(+), 17 deletions(-)
> 
> --- a/drivers/gpu/drm/i915/i915_irq.c
> +++ b/drivers/gpu/drm/i915/i915_irq.c
> @@ -60,6 +60,24 @@
>    * and related files, but that will be described in separate chapters.
>    */
>   
> +/*
> + * Interrupt statistic for PMU. Increments the counter only if the
> + * interrupt originated from the GPU so interrupts from a device which
> + * shares the interrupt line are not accounted.
> + */
> +static inline void pmu_irq_stats(struct drm_i915_private *priv,

We never use priv as a local name, it should be either i915 or dev_priv.

> +				 irqreturn_t res)
> +{
> +	if (unlikely(res != IRQ_HANDLED))
> +		return;
> +
> +	/*
> +	 * A clever compiler translates that into INC. A not so clever one
> +	 * should at least prevent store tearing.
> +	 */
> +	WRITE_ONCE(priv->pmu.irq_count, priv->pmu.irq_count + 1);

Curious, probably more educational for me - given x86_32 and x86_64, and 
the context of it getting called, what is the difference from just doing 
irq_count++?

> +}
> +
>   typedef bool (*long_pulse_detect_func)(enum hpd_pin pin, u32 val);
>   
>   static const u32 hpd_ilk[HPD_NUM_PINS] = {
> @@ -1599,6 +1617,8 @@ static irqreturn_t valleyview_irq_handle
>   		valleyview_pipestat_irq_handler(dev_priv, pipe_stats);
>   	} while (0);
>   
> +	pmu_irq_stats(dev_priv, ret);
> +
>   	enable_rpm_wakeref_asserts(&dev_priv->runtime_pm);
>   
>   	return ret;
> @@ -1676,6 +1696,8 @@ static irqreturn_t cherryview_irq_handle
>   		valleyview_pipestat_irq_handler(dev_priv, pipe_stats);
>   	} while (0);
>   
> +	pmu_irq_stats(dev_priv, ret);
> +
>   	enable_rpm_wakeref_asserts(&dev_priv->runtime_pm);
>   
>   	return ret;
> @@ -2103,6 +2125,8 @@ static irqreturn_t ilk_irq_handler(int i
>   	if (sde_ier)
>   		raw_reg_write(regs, SDEIER, sde_ier);
>   
> +	pmu_irq_stats(i915, ret);
> +
>   	/* IRQs are synced during runtime_suspend, we don't require a wakeref */
>   	enable_rpm_wakeref_asserts(&i915->runtime_pm);
>   
> @@ -2419,6 +2443,8 @@ static irqreturn_t gen8_irq_handler(int
>   
>   	gen8_master_intr_enable(regs);
>   
> +	pmu_irq_stats(dev_priv, IRQ_HANDLED);
> +
>   	return IRQ_HANDLED;
>   }
>   
> @@ -2514,6 +2540,8 @@ static __always_inline irqreturn_t
>   
>   	gen11_gu_misc_irq_handler(gt, gu_misc_iir);
>   
> +	pmu_irq_stats(i915, IRQ_HANDLED);
> +
>   	return IRQ_HANDLED;
>   }
>   
> @@ -3688,6 +3716,8 @@ static irqreturn_t i8xx_irq_handler(int
>   		i8xx_pipestat_irq_handler(dev_priv, iir, pipe_stats);
>   	} while (0);
>   
> +	pmu_irq_stats(dev_priv, ret);
> +
>   	enable_rpm_wakeref_asserts(&dev_priv->runtime_pm);
>   
>   	return ret;
> @@ -3796,6 +3826,8 @@ static irqreturn_t i915_irq_handler(int
>   		i915_pipestat_irq_handler(dev_priv, iir, pipe_stats);
>   	} while (0);
>   
> +	pmu_irq_stats(dev_priv, ret);
> +
>   	enable_rpm_wakeref_asserts(&dev_priv->runtime_pm);
>   
>   	return ret;
> @@ -3941,6 +3973,8 @@ static irqreturn_t i965_irq_handler(int
>   		i965_pipestat_irq_handler(dev_priv, iir, pipe_stats);
>   	} while (0);
>   
> +	pmu_irq_stats(dev_priv, IRQ_HANDLED);
> +
>   	enable_rpm_wakeref_asserts(&dev_priv->runtime_pm);
>   
>   	return ret;
> --- a/drivers/gpu/drm/i915/i915_pmu.c
> +++ b/drivers/gpu/drm/i915/i915_pmu.c
> @@ -423,22 +423,6 @@ static enum hrtimer_restart i915_sample(
>   	return HRTIMER_RESTART;
>   }

In this file you can also drop the #include <linux/irq.h> line.

>   
> -static u64 count_interrupts(struct drm_i915_private *i915)
> -{
> -	/* open-coded kstat_irqs() */
> -	struct irq_desc *desc = irq_to_desc(i915->drm.pdev->irq);
> -	u64 sum = 0;
> -	int cpu;
> -
> -	if (!desc || !desc->kstat_irqs)
> -		return 0;
> -
> -	for_each_possible_cpu(cpu)
> -		sum += *per_cpu_ptr(desc->kstat_irqs, cpu);
> -
> -	return sum;
> -}
> -
>   static void i915_pmu_event_destroy(struct perf_event *event)
>   {
>   	struct drm_i915_private *i915 =
> @@ -581,7 +565,7 @@ static u64 __i915_pmu_event_read(struct
>   				   USEC_PER_SEC /* to MHz */);
>   			break;
>   		case I915_PMU_INTERRUPTS:
> -			val = count_interrupts(i915);
> +			val = READ_ONCE(pmu->irq_count);

I guess the same curiosity applies to READ_ONCE here as at the increment site.

>   			break;
>   		case I915_PMU_RC6_RESIDENCY:
>   			val = get_rc6(&i915->gt);
> --- a/drivers/gpu/drm/i915/i915_pmu.h
> +++ b/drivers/gpu/drm/i915/i915_pmu.h
> @@ -108,6 +108,14 @@ struct i915_pmu {
>   	 */
>   	ktime_t sleep_last;
>   	/**
> +	 * @irq_count: Number of interrupts
> +	 *
> +	 * Intentionally unsigned long to avoid atomics or heuristics on 32bit.
> +	 * 4e9 interrupts are a lot and postprocessing can really deal with an
> +	 * occasional wraparound easily. It's 32bit after all.
> +	 */
> +	unsigned long irq_count;
> +	/**
>   	 * @events_attr_group: Device events attribute group.
>   	 */
>   	struct attribute_group events_attr_group;
> 

Regards,

Tvrtko


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 10:22:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 10:22:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50321.88929 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knfZ3-0006cI-5b; Fri, 11 Dec 2020 10:22:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50321.88929; Fri, 11 Dec 2020 10:22:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knfZ3-0006cB-2Y; Fri, 11 Dec 2020 10:22:17 +0000
Received: by outflank-mailman (input) for mailman id 50321;
 Fri, 11 Dec 2020 10:22:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1knfZ1-0006c5-W2
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 10:22:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1knfZ0-0002Jh-Tn; Fri, 11 Dec 2020 10:22:14 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1knfZ0-0006Jt-IO; Fri, 11 Dec 2020 10:22:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=9D9zK/DCg+E/AQjA1XGoo+bvPPsN7UhLV3z8DTBQOis=; b=jlJ9D2VpCAPHcxXt8crEyZOnza
	40GtPeFW0GLNbSE5iBuvIWni8+nl+OrmCtTTwNcGBSax63emB3kwzrtA72XYBe2REZBcS1Gd5HpvN
	3R1YHDZj9834fdiLOaCvt4gqL8CUEHl3vzScxfdk81Vk6BQM92D24ZGMh3O9w2lhBMr4=;
Subject: Re: [PATCH v3] xen: add support for automatic debug key actions in
 case of crash
To: Jan Beulich <jbeulich@suse.com>, =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org
References: <20201126080340.6154-1-jgross@suse.com>
 <22190c77-eb35-5b72-7d72-34800c3f052f@suse.com>
 <98c45abd-8796-088c-e2a6-9ad494beeb9e@xen.org>
 <59f126a3-f716-345b-b464-746e6156c15a@suse.com>
 <1e305cf6-aa14-54cc-a77d-88bb38ba4c6e@xen.org>
 <7271b2f4-816a-5541-5402-50ea29218d81@suse.com>
 <077f3e02-0e07-1549-cc41-62b42177e19c@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <699e48ea-8807-a1f3-d2b9-dc918913ede8@xen.org>
Date: Fri, 11 Dec 2020 10:22:12 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <077f3e02-0e07-1549-cc41-62b42177e19c@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi,

On 11/12/2020 07:24, Jan Beulich wrote:
> On 11.12.2020 08:02, Jürgen Groß wrote:
>> On 10.12.20 21:51, Julien Grall wrote:
>>> Hi Jan,
>>>
>>> On 09/12/2020 14:29, Jan Beulich wrote:
>>>> On 09.12.2020 13:11, Julien Grall wrote:
>>>>> On 26/11/2020 11:20, Jan Beulich wrote:
>>>>>> On 26.11.2020 09:03, Juergen Gross wrote:
>>>>>>> When the host crashes it would sometimes be nice to have additional
>>>>>>> debug data available which could be produced via debug keys, but
>>>>>>> halting the server for manual intervention might be impossible due to
>>>>>>> the need to reboot/kexec rather sooner than later.
>>>>>>>
>>>>>>> Add support for automatic debug key actions in case of crashes which
>>>>>>> can be activated via boot- or runtime-parameter.
>>>>>>>
>>>>>>> Depending on the type of crash the desired data might be different, so
>>>>>>> support different settings for the possible types of crashes.
>>>>>>>
>>>>>>> The parameter is "crash-debug" with the following syntax:
>>>>>>>
>>>>>>>      crash-debug-<type>=<string>
>>>>>>>
>>>>>>> with <type> being one of:
>>>>>>>
>>>>>>>      panic, hwdom, watchdog, kexeccmd, debugkey
>>>>>>>
>>>>>>> and <string> a sequence of debug key characters with '+' having the
>>>>>>> special semantics of a 10 millisecond pause.
>>>>>>>
>>>>>>> So "crash-debug-watchdog=0+0qr" would result in special output in case
>>>>>>> of watchdog triggered crash (dom0 state, 10 ms pause, dom0 state,
>>>>>>> domain info, run queues).
>>>>>>>
>>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>>>> ---
>>>>>>> V2:
>>>>>>> - switched special character '.' to '+' (Jan Beulich)
>>>>>>> - 10 ms instead of 1 s pause (Jan Beulich)
>>>>>>> - added more text to the boot parameter description (Jan Beulich)
>>>>>>>
>>>>>>> V3:
>>>>>>> - added const (Jan Beulich)
>>>>>>> - thorough test of crash reason parameter (Jan Beulich)
>>>>>>> - kexeccmd case should depend on CONFIG_KEXEC (Jan Beulich)
>>>>>>> - added dummy get_irq_regs() helper on Arm
>>>>>>>
>>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>>>
>>>>>> Except for the Arm aspect, where I'm not sure using
>>>>>> guest_cpu_user_regs() is correct in all cases,
>>>>>
>>>>> I am not entirely sure I understand what get_irq_regs() is supposed to
>>>>> return on x86. Is it the registers saved from the most recent
>>>>> exception?
>>>>
>>>> An interrupt (not an exception) sets the underlying per-CPU
>>>> variable, such that interested parties will know the real
>>>> context is not guest or "normal" Xen code, but an IRQ.
>>>
>>> Thanks for the explanation. I am a bit confused as to why we need to pass
>>> regs to handle_keypress(), because no-one seems to use it. Do you have an
>>> explanation?
>>
>> dump_registers() (key 'd') is using it.
>>
>>>
>>> To add to the confusion, it looks like get_irq_regs() may return
>>> NULL. So sometimes we may pass guest_cpu_user_regs() (which may contain
>>> garbage or a stale register set).
>>
>> I guess this is a best effort approach.
> 
> Indeed. If there are ways to make it "more best", we should of
> course follow them. (Except before Dom0 starts, I'm afraid I
> don't see though where garbage would come from. And even then,
> just like for the idle vCPU-s, it shouldn't really be garbage,
> or else this suggests missing initialization somewhere.)

So I decided to mimic what 'd' does to see what happens if this is
called during early boot.


diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 7fcff9af2a7e..9d33507a26eb 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -857,6 +857,8 @@ void __init start_xen(unsigned long boot_phys_offset,
       */
      system_state = SYS_STATE_boot;

+    dump_execstate(guest_cpu_user_regs());
+
      vm_init();

      if ( acpi_disabled )
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 30d6f375a3af..50fcf2e8d70e 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1678,6 +1678,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
          end_boot_allocator();

      system_state = SYS_STATE_boot;
+    dump_execstate(guest_cpu_user_regs());
      /*
       * No calls involving ACPI code should go between the setting of
       * SYS_STATE_boot and vm_init() (or else acpi_os_{,un}map_memory()

It leads to a crash on both Arm and x86.

For the Arm crash:

(XEN) Data Abort Trap. Syndrome=0x1c08006
(XEN) Walking Hypervisor VA 0x10 on CPU0 via TTBR 0x0000000065a7f000
(XEN) 0TH[0x0] = 0x0000000065a7ef7f
(XEN) 1ST[0x0] = 0x0000000065a7bf7f
(XEN) 2ND[0x0] = 0x0000000000000000
(XEN) CPU0: Unexpected Trap: Data Abort
(XEN) ----[ Xen-4.15-unstable  arm64  debug=y   Not tainted ]----
(XEN) CPU:    0
(XEN) PC:     0000000000219674 dump_execstate+0x58/0x1ec
(XEN) LR:     00000000002d77dc
(XEN) SP:     000000000030fdc0
(XEN) CPSR:   800003c9 MODE:64-bit EL2h (Hypervisor, handler)
(XEN)      X0: 0000000000000000  X1: 0000000000000000  X2: 0000000000007fff
(XEN)      X3: 00000000002b7198  X4: 0000000000000080  X5: 00000000002e9a68
(XEN)      X6: 0080808080808080  X7: fefefefefefeff09  X8: 7f7f7f7f7f7f7f7f
(XEN)      X9: 717164616f726051 X10: 7f7f7f7f7f7f7f7f X11: 0101010101010101
(XEN)     X12: 0000000000000008 X13: 00000000002b9a48 X14: 0000000000000000
(XEN)     X15: 0000000000400000 X16: 00000000002ba000 X17: 00000000002b9000
(XEN)     X18: 00000000002b9000 X19: 0000000000000000 X20: 000000000030feb0
(XEN)     X21: 0000000080000000 X22: 00000000002f0d30 X23: 00000000002f1d68
(XEN)     X24: 00000000002f0eb8 X25: 0000000040000000 X26: 0000000080000000
(XEN)     X27: 0000000000000018 X28: 000000000003f970  FP: 000000000030fdc0
(XEN)
(XEN)   VTCR_EL2: 00000000
(XEN)  VTTBR_EL2: 0000000000000000
(XEN)
(XEN)  SCTLR_EL2: 30cd183d
(XEN)    HCR_EL2: 0000000000000038
(XEN)  TTBR0_EL2: 0000000065a7f000
(XEN)
(XEN)    ESR_EL2: 97c08006
(XEN)  HPFAR_EL2: 0000000000000000
(XEN)    FAR_EL2: 0000000000000010
(XEN)
(XEN) Xen stack trace from sp=000000000030fdc0:
(XEN)    000000000030fdf0 00000000002d77dc 0000000000080000 000000007fffc000
(XEN)    0000000080000000 00000000002f0d30 000000007f68b250 00000000002001b8
(XEN)    0000000065932000 0000000065732000 00000000784f9000 0000000000000000
(XEN)    0000000000400000 0000000065a2ad30 0000000000000630 0000000000000001
(XEN)    0000000000000001 0000000000000001 0000000000000000 0000000000003000
(XEN)    00000000784f9000 00000000002bc8e4 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000300000000 0000000000000000 00000040ffffffff
(XEN)    00000000ffffffff 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) Xen call trace:
(XEN)    [<0000000000219674>] dump_execstate+0x58/0x1ec (PC)
(XEN)    [<00000000002d77dc>] start_xen+0x3d0/0xcf8 (LR)
(XEN)    [<00000000002d77dc>] start_xen+0x3d0/0xcf8
(XEN)    [<00000000002001b8>] arm64/head.o#primary_switched+0x10/0x30
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) CPU0: Unexpected Trap: Data Abort
(XEN) ****************************************

For the x86 crash:

(XEN) Early fatal page fault at e008:ffff82d0402188b4 (cr2=0000000000000010, ec=0000)
(XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:   C   ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82d0402188b4>] dump_execstate+0x42/0x167
(XEN) RFLAGS: 0000000000010086   CONTEXT: hypervisor
(XEN) rax: 0000000000000000   rbx: 0000000000000000   rcx: 0000000000000000
(XEN) rdx: ffff82d0404affff   rsi: 000000000000000a   rdi: ffff82d0404afef8
(XEN) rbp: ffff82d0404afd90   rsp: ffff82d0404afd80   r8:  0000000000000004
(XEN) r9:  0101010101010101   r10: 0f0f0f0f0f0f0f0f   r11: 5555555555555555
(XEN) r12: ffff82d0404afef8   r13: 0000000000800163   r14: ffff83000009dfb0
(XEN) r15: 0000000000000002   cr0: 0000000080050033   cr4: 00000000000000a0
(XEN) cr3: 00000000bfa9e000   cr2: 0000000000000010
(XEN) fsb: 0000000000000000   gsb: 0000000000000000   gss: 0000000000000000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen code around <ffff82d0402188b4> (dump_execstate+0x42/0x167):
(XEN)  ff 7f 00 00 48 8b 40 c9 <48> 8b 40 10 66 81 38 ff 7f 75 49 3b 1d 23 18 27
(XEN) Xen stack trace from rsp=ffff82d0404afd80:
(XEN)    000000000023ffff 00000000000005ed ffff82d0404afee8 ffff82d0404378cb
(XEN)    0000000000000002 0000000000000002 0000000000000002 0000000000000001
(XEN)    0000000000000001 0000000000000001 0000000000000001 0000000000000000
(XEN)    00000000000001ff 0000000002a45fff 0000000000240000 0000000002a45000
(XEN)    0000000000100000 0000000000000000 00000000000001ff ffff82d040475c80
(XEN)    ffff82d000800163 ffff83000009dee0 ffff83000009dfb0 0000000000200001
(XEN)    0000000100000000 0000000100000000 ffff83000009df80 642ded38bf9fe4f3
(XEN)    bf9fed3500000000 bfaafe980009df73 0009df73bf9fe7ea 00000004bf9fed31
(XEN)    bfaafeb00009df01 0000000800000000 000000010000006e 0000000000000003
(XEN)    00000000000002f8 ffff82d0405b0000 ffff82d0404b0000 0000000000000002
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 ffff82d04020012f 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000e01000000000 0000000000000000 0000000000000000 00000000000000a0
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82d0402188b4>] R dump_execstate+0x42/0x167
(XEN)    [<ffff82d0404378cb>] F __start_xen+0x1e10/0x2906
(XEN)    [<ffff82d04020012f>] F __high_start+0x8f/0x91
(XEN)
(XEN) Pagetable walk from 0000000000000010:
(XEN)  L4[0x000] = 00000000bfa54063 ffffffffffffffff
(XEN)  L3[0x000] = 00000000bfa50063 ffffffffffffffff
(XEN)  L2[0x000] = 00000000bfa4f063 ffffffffffffffff
(XEN)  L1[0x000] = 0000000000000000 ffffffffffffffff
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) FATAL TRAP: vec 14, #PF[0000] IN INTERRUPT CONTEXT
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...

So I think guest_cpu_user_regs() is not quite ready yet to be called
from panic().

A different approach may be to generate an exception and call the
keyhandler from there. At least you would know that the registers are
always accurate.
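
As an aside, the action-string semantics discussed in the patch ('+'
being a 10 ms pause, every other character a debug key) could be
sketched roughly as below. This is a minimal, hypothetical model for
illustration only — run_crash_debug_actions(), handle_keypress() and
the pause stub are made-up names, not the actual Xen code:

```c
#include <stdio.h>
#include <string.h>

/*
 * Hypothetical sketch of the "crash-debug" action string semantics:
 * each character is handed to a debug-key handler, while '+' inserts
 * a 10 ms pause.  Instead of really invoking key handlers, this model
 * records what would happen into a trace buffer.
 */

static char trace[64];

static void handle_keypress(char key)
{
    char buf[8];

    snprintf(buf, sizeof(buf), "k%c;", key);  /* "pressed key <key>" */
    strcat(trace, buf);
}

static void pause_10ms(void)
{
    strcat(trace, "P;");  /* a real implementation would mdelay(10) */
}

/* Walk the action string, dispatching keys and pauses in order. */
static const char *run_crash_debug_actions(const char *actions)
{
    trace[0] = '\0';
    for ( ; *actions; actions++ )
    {
        if ( *actions == '+' )
            pause_10ms();
        else
            handle_keypress(*actions);
    }
    return trace;
}
```

So "crash-debug-watchdog=0+0qr" would dispatch '0', pause, then '0',
'q' and 'r', matching the example in the commit message.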

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 10:23:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 10:23:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50327.88942 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knfa2-0006lb-Jy; Fri, 11 Dec 2020 10:23:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50327.88942; Fri, 11 Dec 2020 10:23:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knfa2-0006lU-Gu; Fri, 11 Dec 2020 10:23:18 +0000
Received: by outflank-mailman (input) for mailman id 50327;
 Fri, 11 Dec 2020 10:23:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XUOP=FP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1knfa1-0006lN-Le
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 10:23:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8387df57-19cc-4493-b584-3ca9a63f0df5;
 Fri, 11 Dec 2020 10:23:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 86CDFADA2;
 Fri, 11 Dec 2020 10:23:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8387df57-19cc-4493-b584-3ca9a63f0df5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607682195; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Ylj5Ou5EGxXoY8HzgwHM13TBu2785E6dSVq65gzanwU=;
	b=JX3YEGDayzn/l2cWmgQW18vQXaGMPfebAVghI4mXIKoPFWr46I+1iXrHo1/89wbKgnsUQw
	fJfjpQHih3nDIAzyPuBktA46WbSRZFyDOHCZ4ClWbuCsmJrqvPgr1yruvflf2j4L0wRo2h
	gXPAph/J2UIJAu9VFo9bE4J63Z3Pprg=
Subject: Re: [patch 27/30] xen/events: Only force affinity mask for percpu
 interrupts
To: Thomas Gleixner <tglx@linutronix.de>, boris.ostrovsky@oracle.com,
 LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>, afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org, Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>, linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>, David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>, intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>, linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>, Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>, linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>, Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>, Leon Romanovsky <leon@kernel.org>
References: <20201210192536.118432146@linutronix.de>
 <20201210194045.250321315@linutronix.de>
 <7f7af60f-567f-cdef-f8db-8062a44758ce@oracle.com>
 <a4bce428-4420-6064-c7cc-7136a7544a52@suse.com>
 <874kksiras.fsf@nanos.tec.linutronix.de>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <83b596c7-453b-34b5-a6a5-6c04d20e818a@suse.com>
Date: Fri, 11 Dec 2020 11:23:13 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <874kksiras.fsf@nanos.tec.linutronix.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="JLZob3e3TcGZaffU2V6GYg9zw25iTy30k"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--JLZob3e3TcGZaffU2V6GYg9zw25iTy30k
Content-Type: multipart/mixed; boundary="W7MOwpcHMEZDHVpnNeRdRpRviypq0k3b2";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Thomas Gleixner <tglx@linutronix.de>, boris.ostrovsky@oracle.com,
 LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>, afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org, Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>, linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>, David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>, intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>, linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>, Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>, linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>, Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>, Leon Romanovsky <leon@kernel.org>
Message-ID: <83b596c7-453b-34b5-a6a5-6c04d20e818a@suse.com>
Subject: Re: [patch 27/30] xen/events: Only force affinity mask for percpu
 interrupts
References: <20201210192536.118432146@linutronix.de>
 <20201210194045.250321315@linutronix.de>
 <7f7af60f-567f-cdef-f8db-8062a44758ce@oracle.com>
 <a4bce428-4420-6064-c7cc-7136a7544a52@suse.com>
 <874kksiras.fsf@nanos.tec.linutronix.de>
In-Reply-To: <874kksiras.fsf@nanos.tec.linutronix.de>

--W7MOwpcHMEZDHVpnNeRdRpRviypq0k3b2
Content-Type: multipart/mixed;
 boundary="------------C0A97F76D1E2DDEBCBB7C379"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------C0A97F76D1E2DDEBCBB7C379
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 11.12.20 11:13, Thomas Gleixner wrote:
> On Fri, Dec 11 2020 at 07:17, Jürgen Groß wrote:
>> On 11.12.20 00:20, boris.ostrovsky@oracle.com wrote:
>>>
>>> On 12/10/20 2:26 PM, Thomas Gleixner wrote:
>>>> All event channel setups bind the interrupt on CPU0 or the target CPU for
>>>> percpu interrupts and overwrite the affinity mask with the corresponding
>>>> cpumask. That does not make sense.
>>>>
>>>> The XEN implementation of irqchip::irq_set_affinity() already picks a
>>>> single target CPU out of the affinity mask and the actual target is stored
>>>> in the effective CPU mask, so destroying the user chosen affinity mask
>>>> which might contain more than one CPU is wrong.
>>>>
>>>> Change the implementation so that the channel is bound to CPU0 at the XEN
>>>> level and leave the affinity mask alone. At startup of the interrupt
>>>> affinity will be assigned out of the affinity mask and the XEN binding will
>>>> be updated.
>>>
>>>
>>> If that's the case then I wonder whether we need this call at all and instead bind at startup time.
>>
>> This binding to cpu0 was introduced with commit 97253eeeb792d61ed2
>> and I have no reason to believe the underlying problem has been
>> eliminated.
>
>      "The kernel-side VCPU binding was not being correctly set for newly
>       allocated or bound interdomain events.  In ARM guests where 2-level
>       events were used, this would result in no interdomain events being
>       handled because the kernel-side VCPU masks would all be clear.
>
>       x86 guests would work because the irq affinity was set during irq
>       setup and this would set the correct kernel-side VCPU binding."
>
> I'm not convinced that this is really correctly analyzed because affinity
> setting is done at irq startup.
>
>                  switch (__irq_startup_managed(desc, aff, force)) {
> 	        case IRQ_STARTUP_NORMAL:
> 	                ret = __irq_startup(desc);
>                          irq_setup_affinity(desc);
> 			break;
>
> which is completely architecture agnostic. So why should this magically
> work on x86 and not on ARM if both are using the same XEN irqchip with
> the same irqchip callbacks.

I think this might be related to _initial_ cpu binding of events and
changing the binding later. This might be handled differently in the
hypervisor.


Juergen

--------------C0A97F76D1E2DDEBCBB7C379
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------C0A97F76D1E2DDEBCBB7C379--

--W7MOwpcHMEZDHVpnNeRdRpRviypq0k3b2--

--JLZob3e3TcGZaffU2V6GYg9zw25iTy30k
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/TSJEFAwAAAAAACgkQsN6d1ii/Ey+l
mgf/d2+/6FRTlqTIKtgTaI9zWYXHFZUVv6aYW6iBE6VxANXvSWq9HMk6KnsreMEFtZ6/gFqr/hEu
oLKI4zId8FjcveupY8yiEXvBkWDXXQHXm2vw2fO6Fe2D0RCcR0QLeFpvolQBAp0s4pGQNCWixekr
Q6YyAWOKOmAjWmLsSsyend9GfjL+BFR6pObB4CLRdm5rvQHbPW6pNHBTTX2bxeszchEibXqmy+eX
K7jbAijkr/Vq+9h84FBdAOZQRgXCIrWI14ae2O+8CGs9w5v/Yczedv+z4knUyDtXHly4+zuhkPSW
tmVXo5DD0Oh4PaNITuXRo6Y9sSrWG1w22F+SVn8vKg==
=c494
-----END PGP SIGNATURE-----

--JLZob3e3TcGZaffU2V6GYg9zw25iTy30k--


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 10:32:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 10:32:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50343.88956 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knfjI-0007vT-Il; Fri, 11 Dec 2020 10:32:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50343.88956; Fri, 11 Dec 2020 10:32:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knfjI-0007vM-FO; Fri, 11 Dec 2020 10:32:52 +0000
Received: by outflank-mailman (input) for mailman id 50343;
 Fri, 11 Dec 2020 10:32:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9HZb=FP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1knfjH-0007vG-8I
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 10:32:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 00046348-9ca6-4994-bafd-9171c6227e53;
 Fri, 11 Dec 2020 10:32:48 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C3397AE30;
 Fri, 11 Dec 2020 10:32:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 00046348-9ca6-4994-bafd-9171c6227e53
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607682767; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0V2lVIbqbq9QYvDmJ9XN2D3RTwxAY+d7SpZ+n/u/Xkw=;
	b=kZfVF57fwQJPdXwoJ/AqRG9lDWpctHgu2OfC5PXnNFefgjwqlFd3Nc7jZwsCLVONf391DW
	pTDjNRDp6UmJs4oAti7D2YbQV723YOaHevNO5fG0Go9b/JwHDy5+OHzjrf535QK1NxGeUz
	gRPa7TW+UDVZ5HSb0E73gQQ/FD5/Jxg=
Subject: Re: [PATCH v3 4/5] evtchn: convert domain event lock to an r/w one
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <a333387e-f9e5-7051-569a-1a9a37da53ca@suse.com>
 <074be931-54b0-1b0f-72d8-5bd577884814@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6e34fd25-14a2-f655-b019-aca94ce086c8@suse.com>
Date: Fri, 11 Dec 2020 11:32:47 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <074be931-54b0-1b0f-72d8-5bd577884814@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.12.2020 12:54, Julien Grall wrote:
> On 23/11/2020 13:29, Jan Beulich wrote:
>> @@ -620,7 +620,7 @@ int evtchn_close(struct domain *d1, int
>>       long           rc = 0;
>>   
>>    again:
>> -    spin_lock(&d1->event_lock);
>> +    write_lock(&d1->event_lock);
>>   
>>       if ( !port_is_valid(d1, port1) )
>>       {
>> @@ -690,13 +690,11 @@ int evtchn_close(struct domain *d1, int
>>                   BUG();
>>   
>>               if ( d1 < d2 )
>> -            {
>> -                spin_lock(&d2->event_lock);
>> -            }
>> +                read_lock(&d2->event_lock);
> 
> This change made me realize that I don't quite understand how the 
> rwlock is meant to work for event_lock. I was actually expecting this to 
> be a write_lock() given that state is changed in the d2 events.

Well, the protection needs to be against racing changes, i.e.
parallel invocations of this same function, or evtchn_close().
It is debatable whether evtchn_status() and
domain_dump_evtchn_info() would better also be locked out
(other read_lock() uses aren't applicable to interdomain
channels).

> Could you outline how a developer can find out whether he/she should 
> use read_lock or write_lock?

I could try to, but it would again be a port-type-dependent
model, just like for the per-channel locks. So I'd like it to
be clarified first whether you aren't instead indirectly
asking for these to become write_lock().

>> --- a/xen/common/rwlock.c
>> +++ b/xen/common/rwlock.c
>> @@ -102,6 +102,14 @@ void queue_write_lock_slowpath(rwlock_t
>>       spin_unlock(&lock->lock);
>>   }
>>   
>> +void _rw_barrier(rwlock_t *lock)
>> +{
>> +    check_barrier(&lock->lock.debug);
>> +    smp_mb();
>> +    while ( _rw_is_locked(lock) )
>> +        arch_lock_relax();
>> +    smp_mb();
>> +}
> 
> As I pointed out when this implementation was first proposed (see [1]), 
> there is a risk that the loop will never exit.

The [1] reference was missing, but I recall you saying so.

> I think the following implementation would be better (although it is ugly):
> 
> write_lock();
> /* do nothing */
> write_unlock();
> 
> This will act as a barrier between lock held before and after the call.

Right, and back then I indicated agreement. When I actually got around
to carrying out the change, though, I realized that the less restrictive
check_barrier() can then no longer be used (or, to be precise, it could
be used, but the stronger check_lock() would subsequently still come
into play). This isn't a problem here, but it would be for any IRQ-safe
r/w lock that the barrier may want to be used on down the road.

Thinking about it, a read_lock() / read_unlock() pair would suffice
though. But this would then still have check_lock() involved.

Given all of this, maybe it's better not to introduce the function
at all and instead open-code the read_lock() / read_unlock() pair at
the use site.

> As an aside, I think the introduction of rw_barrier() deserves to be in 
> a separate patch to help the review.

I'm aware there are differing views on this - to me, putting this in
a separate patch would be introducing dead code. But as per the above,
maybe the function now won't get introduced anymore anyway.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 10:38:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 10:38:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50350.88971 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knfoa-0008AK-6r; Fri, 11 Dec 2020 10:38:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50350.88971; Fri, 11 Dec 2020 10:38:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knfoa-0008AD-3k; Fri, 11 Dec 2020 10:38:20 +0000
Received: by outflank-mailman (input) for mailman id 50350;
 Fri, 11 Dec 2020 10:38:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XUOP=FP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1knfoY-0008A6-4E
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 10:38:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1d9373ea-fe01-4dea-99db-3bf25ff07ede;
 Fri, 11 Dec 2020 10:38:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7DBFBAC94;
 Fri, 11 Dec 2020 10:38:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d9373ea-fe01-4dea-99db-3bf25ff07ede
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607683094; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=hjbBJphr/kFU11tWqq7rkklx4zkUiG5axoLhXCAaJFM=;
	b=lBZIHvMmG5cZGSZSej/+yTxPeYgp03b9ZCZ4lpoyJkbj8x3VUQ0NtIIAJQUIcvdfxH1cYB
	wegobII6MaVvjE1JIkSU9CE0OBSR76RA3Urk4r5IhIcQeMSXUhjrYe4pi2b++LdQ4v2FMU
	SaYK8caf9hYMHtKGjOvjBbMtkijmLzg=
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org
References: <20201126080340.6154-1-jgross@suse.com>
 <22190c77-eb35-5b72-7d72-34800c3f052f@suse.com>
 <98c45abd-8796-088c-e2a6-9ad494beeb9e@xen.org>
 <59f126a3-f716-345b-b464-746e6156c15a@suse.com>
 <1e305cf6-aa14-54cc-a77d-88bb38ba4c6e@xen.org>
 <7271b2f4-816a-5541-5402-50ea29218d81@suse.com>
 <077f3e02-0e07-1549-cc41-62b42177e19c@suse.com>
 <699e48ea-8807-a1f3-d2b9-dc918913ede8@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v3] xen: add support for automatic debug key actions in
 case of crash
Message-ID: <18959d53-30d9-b702-81df-8a4051d61fb2@suse.com>
Date: Fri, 11 Dec 2020 11:38:13 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <699e48ea-8807-a1f3-d2b9-dc918913ede8@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="2MqDj9lGwb0uIiyblMH9VyR9dEXKCrSXx"

On 11.12.20 11:22, Julien Grall wrote:
> Hi,
> 
> On 11/12/2020 07:24, Jan Beulich wrote:
>> On 11.12.2020 08:02, Jürgen Groß wrote:
>>> On 10.12.20 21:51, Julien Grall wrote:
>>>> Hi Jan,
>>>>
>>>> On 09/12/2020 14:29, Jan Beulich wrote:
>>>>> On 09.12.2020 13:11, Julien Grall wrote:
>>>>>> On 26/11/2020 11:20, Jan Beulich wrote:
>>>>>>> On 26.11.2020 09:03, Juergen Gross wrote:
>>>>>>>> When the host crashes it would sometimes be nice to have additional
>>>>>>>> debug data available which could be produced via debug keys, but
>>>>>>>> halting the server for manual intervention might be impossible due to
>>>>>>>> the need to reboot/kexec rather sooner than later.
>>>>>>>>
>>>>>>>> Add support for automatic debug key actions in case of crashes which
>>>>>>>> can be activated via boot- or runtime-parameter.
>>>>>>>>
>>>>>>>> Depending on the type of crash the desired data might be different, so
>>>>>>>> support different settings for the possible types of crashes.
>>>>>>>>
>>>>>>>> The parameter is "crash-debug" with the following syntax:
>>>>>>>>
>>>>>>>>      crash-debug-<type>=<string>
>>>>>>>>
>>>>>>>> with <type> being one of:
>>>>>>>>
>>>>>>>>      panic, hwdom, watchdog, kexeccmd, debugkey
>>>>>>>>
>>>>>>>> and <string> a sequence of debug key characters with '+' having the
>>>>>>>> special semantics of a 10 millisecond pause.
>>>>>>>>
>>>>>>>> So "crash-debug-watchdog=0+0qr" would result in special output in case
>>>>>>>> of watchdog triggered crash (dom0 state, 10 ms pause, dom0 state,
>>>>>>>> domain info, run queues).
>>>>>>>>
>>>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>>>>> ---
>>>>>>>> V2:
>>>>>>>> - switched special character '.' to '+' (Jan Beulich)
>>>>>>>> - 10 ms instead of 1 s pause (Jan Beulich)
>>>>>>>> - added more text to the boot parameter description (Jan Beulich)
>>>>>>>>
>>>>>>>> V3:
>>>>>>>> - added const (Jan Beulich)
>>>>>>>> - thorough test of crash reason parameter (Jan Beulich)
>>>>>>>> - kexeccmd case should depend on CONFIG_KEXEC (Jan Beulich)
>>>>>>>> - added dummy get_irq_regs() helper on Arm
>>>>>>>>
>>>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>>>>
>>>>>>> Except for the Arm aspect, where I'm not sure using
>>>>>>> guest_cpu_user_regs() is correct in all cases,
>>>>>>
>>>>>> I am not entirely sure I understand what get_irq_regs() is supposed to
>>>>>> return on x86. Is it the registers saved from the most recent
>>>>>> exception?
>>>>>
>>>>> An interrupt (not an exception) sets the underlying per-CPU
>>>>> variable, such that interested parties will know the real
>>>>> context is not guest or "normal" Xen code, but an IRQ.
>>>>
>>>> Thanks for the explanation. I am a bit confused as to why we need to give
>>>> regs to handle_keypress(), because no-one seems to use it. Do you have an
>>>> explanation?
>>>
>>> dump_registers() (key 'd') is using it.
>>>
>>>>
>>>> To add to the confusion, it looks like get_irq_regs() may return
>>>> NULL. So sometimes we may pass guest_cpu_user_regs() (which may contain
>>>> garbage or a stale set).
>>>
>>> I guess this is a best effort approach.
>>
>> Indeed. If there are ways to make it "more best", we should of
>> course follow them. (Except before Dom0 starts, I'm afraid I
>> don't see though where garbage would come from. And even then,
>> just like for the idle vCPU-s, it shouldn't really be garbage,
>> or else this suggests missing initialization somewhere.)
> 
> So I decided to mimic what 'd' does to see what happens if this is
> called during early boot.
> 
> 
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 7fcff9af2a7e..9d33507a26eb 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -857,6 +857,8 @@ void __init start_xen(unsigned long boot_phys_offset,
>       */
>      system_state = SYS_STATE_boot;
> 
> +    dump_execstate(guest_cpu_user_regs());
> +
>      vm_init();
> 
>      if ( acpi_disabled )
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index 30d6f375a3af..50fcf2e8d70e 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -1678,6 +1678,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>          end_boot_allocator();
> 
>      system_state = SYS_STATE_boot;
> +    dump_execstate(guest_cpu_user_regs());
>      /*
>       * No calls involving ACPI code should go between the setting of
>       * SYS_STATE_boot and vm_init() (or else acpi_os_{,un}map_memory()
> 
> It leads to a crash on both Arm and x86.
> 
> For the Arm crash:
> 
> (XEN) Data Abort Trap. Syndrome=0x1c08006
> (XEN) Walking Hypervisor VA 0x10 on CPU0 via TTBR 0x0000000065a7f000
> (XEN) 0TH[0x0] = 0x0000000065a7ef7f
> (XEN) 1ST[0x0] = 0x0000000065a7bf7f
> (XEN) 2ND[0x0] = 0x0000000000000000
> (XEN) CPU0: Unexpected Trap: Data Abort
> (XEN) ----[ Xen-4.15-unstable  arm64  debug=y   Not tainted ]----
> (XEN) CPU:    0
> (XEN) PC:     0000000000219674 dump_execstate+0x58/0x1ec
> (XEN) LR:     00000000002d77dc
> (XEN) SP:     000000000030fdc0
> (XEN) CPSR:   800003c9 MODE:64-bit EL2h (Hypervisor, handler)
> (XEN)      X0: 0000000000000000  X1: 0000000000000000  X2: 0000000000007fff
> (XEN)      X3: 00000000002b7198  X4: 0000000000000080  X5: 00000000002e9a68
> (XEN)      X6: 0080808080808080  X7: fefefefefefeff09  X8: 7f7f7f7f7f7f7f7f
> (XEN)      X9: 717164616f726051 X10: 7f7f7f7f7f7f7f7f X11: 0101010101010101
> (XEN)     X12: 0000000000000008 X13: 00000000002b9a48 X14: 0000000000000000
> (XEN)     X15: 0000000000400000 X16: 00000000002ba000 X17: 00000000002b9000
> (XEN)     X18: 00000000002b9000 X19: 0000000000000000 X20: 000000000030feb0
> (XEN)     X21: 0000000080000000 X22: 00000000002f0d30 X23: 00000000002f1d68
> (XEN)     X24: 00000000002f0eb8 X25: 0000000040000000 X26: 0000000080000000
> (XEN)     X27: 0000000000000018 X28: 000000000003f970  FP: 000000000030fdc0
> (XEN)
> (XEN)   VTCR_EL2: 00000000
> (XEN)  VTTBR_EL2: 0000000000000000
> (XEN)
> (XEN)  SCTLR_EL2: 30cd183d
> (XEN)    HCR_EL2: 0000000000000038
> (XEN)  TTBR0_EL2: 0000000065a7f000
> (XEN)
> (XEN)    ESR_EL2: 97c08006
> (XEN)  HPFAR_EL2: 0000000000000000
> (XEN)    FAR_EL2: 0000000000000010
> (XEN)
> (XEN) Xen stack trace from sp=000000000030fdc0:
> (XEN)    000000000030fdf0 00000000002d77dc 0000000000080000 000000007fffc000
> (XEN)    0000000080000000 00000000002f0d30 000000007f68b250 00000000002001b8
> (XEN)    0000000065932000 0000000065732000 00000000784f9000 0000000000000000
> (XEN)    0000000000400000 0000000065a2ad30 0000000000000630 0000000000000001
> (XEN)    0000000000000001 0000000000000001 0000000000000000 0000000000003000
> (XEN)    00000000784f9000 00000000002bc8e4 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000300000000 0000000000000000 00000040ffffffff
> (XEN)    00000000ffffffff 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN) Xen call trace:
> (XEN)    [<0000000000219674>] dump_execstate+0x58/0x1ec (PC)
> (XEN)    [<00000000002d77dc>] start_xen+0x3d0/0xcf8 (LR)
> (XEN)    [<00000000002d77dc>] start_xen+0x3d0/0xcf8
> (XEN)    [<00000000002001b8>] arm64/head.o#primary_switched+0x10/0x30
> (XEN)
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) CPU0: Unexpected Trap: Data Abort
> (XEN) ****************************************
> 
> For the x86 crash:
> 
> (XEN) Early fatal page fault at e008:ffff82d0402188b4 (cr2=0000000000000010, ec=0000)
> (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:   C   ]----
> (XEN) CPU:    0
> (XEN) RIP:    e008:[<ffff82d0402188b4>] dump_execstate+0x42/0x167
> (XEN) RFLAGS: 0000000000010086   CONTEXT: hypervisor
> (XEN) rax: 0000000000000000   rbx: 0000000000000000   rcx: 0000000000000000
> (XEN) rdx: ffff82d0404affff   rsi: 000000000000000a   rdi: ffff82d0404afef8
> (XEN) rbp: ffff82d0404afd90   rsp: ffff82d0404afd80   r8:  0000000000000004
> (XEN) r9:  0101010101010101   r10: 0f0f0f0f0f0f0f0f   r11: 5555555555555555
> (XEN) r12: ffff82d0404afef8   r13: 0000000000800163   r14: ffff83000009dfb0
> (XEN) r15: 0000000000000002   cr0: 0000000080050033   cr4: 00000000000000a0
> (XEN) cr3: 00000000bfa9e000   cr2: 0000000000000010
> (XEN) fsb: 0000000000000000   gsb: 0000000000000000   gss: 0000000000000000
> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
> (XEN) Xen code around <ffff82d0402188b4> (dump_execstate+0x42/0x167):
> (XEN)  ff 7f 00 00 48 8b 40 c9 <48> 8b 40 10 66 81 38 ff 7f 75 49 3b 1d 23 18 27
> (XEN) Xen stack trace from rsp=ffff82d0404afd80:
> (XEN)    000000000023ffff 00000000000005ed ffff82d0404afee8 ffff82d0404378cb
> (XEN)    0000000000000002 0000000000000002 0000000000000002 0000000000000001
> (XEN)    0000000000000001 0000000000000001 0000000000000001 0000000000000000
> (XEN)    00000000000001ff 0000000002a45fff 0000000000240000 0000000002a45000
> (XEN)    0000000000100000 0000000000000000 00000000000001ff ffff82d040475c80
> (XEN)    ffff82d000800163 ffff83000009dee0 ffff83000009dfb0 0000000000200001
> (XEN)    0000000100000000 0000000100000000 ffff83000009df80 642ded38bf9fe4f3
> (XEN)    bf9fed3500000000 bfaafe980009df73 0009df73bf9fe7ea 00000004bf9fed31
> (XEN)    bfaafeb00009df01 0000000800000000 000000010000006e 0000000000000003
> (XEN)    00000000000002f8 ffff82d0405b0000 ffff82d0404b0000 0000000000000002
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 ffff82d04020012f 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000e01000000000 0000000000000000 0000000000000000 00000000000000a0
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN) Xen call trace:
> (XEN)    [<ffff82d0402188b4>] R dump_execstate+0x42/0x167
> (XEN)    [<ffff82d0404378cb>] F __start_xen+0x1e10/0x2906
> (XEN)    [<ffff82d04020012f>] F __high_start+0x8f/0x91
> (XEN)
> (XEN) Pagetable walk from 0000000000000010:
> (XEN)  L4[0x000] = 00000000bfa54063 ffffffffffffffff
> (XEN)  L3[0x000] = 00000000bfa50063 ffffffffffffffff
> (XEN)  L2[0x000] = 00000000bfa4f063 ffffffffffffffff
> (XEN)  L1[0x000] = 0000000000000000 ffffffffffffffff
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) FATAL TRAP: vec 14, #PF[0000] IN INTERRUPT CONTEXT
> (XEN) ****************************************
> (XEN)
> (XEN) Reboot in five seconds...
> 
> So I think guest_cpu_user_regs() is not quite yet ready to be called
> from panic().

guest_cpu_user_regs() isn't the problem, but dump_execstate().

This is one of the caveats from the added boot parameter doc: some debug
keys might lead to problems. 'd' seems to be such a key when used for
the panic() case and the panic() happens in early boot.

> 
> A different approach may be to generate an exception and call the
> keyhandler from there. At least you know that the registers would
> always be accurate.

Or dump_execstate() could be modified to accept NULL for regs and to do
nothing in case guest_cpu_user_regs() isn't valid (a test for the idle
vcpu might be the easiest way to determine that).
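
A sketch of what such a guard could look like (purely illustrative: the
names fake_regs, dump_execstate_guarded and the on_idle_vcpu flag are
made up, standing in for struct cpu_user_regs and the real idle-vcpu
test):

```c
#include <stddef.h>

/* Stand-in for Xen's struct cpu_user_regs, just for the sketch. */
struct fake_regs { unsigned long pc; };

/* Returns 1 if a dump was produced, 0 if it was (safely) skipped
 * because no trustworthy register state exists yet. */
int dump_execstate_guarded(const struct fake_regs *regs, int on_idle_vcpu)
{
    if (regs == NULL || on_idle_vcpu)
        return 0;               /* early boot / idle vcpu: do nothing */
    /* ... the actual register/stack dumping would go here ... */
    return 1;
}

/* Helper so the guard can be exercised with a valid pointer, too. */
int demo_with_valid_regs(void)
{
    struct fake_regs r = { 0 };
    return dump_execstate_guarded(&r, 0);
}
```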


Juergen



From xen-devel-bounces@lists.xenproject.org Fri Dec 11 10:44:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 10:44:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50368.88986 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knfu9-0000nU-0a; Fri, 11 Dec 2020 10:44:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50368.88986; Fri, 11 Dec 2020 10:44:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knfu8-0000nN-Tx; Fri, 11 Dec 2020 10:44:04 +0000
Received: by outflank-mailman (input) for mailman id 50368;
 Fri, 11 Dec 2020 10:44:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9HZb=FP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1knfu8-0000nI-8Y
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 10:44:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e3b31482-37fe-4b26-b446-1148c34d401a;
 Fri, 11 Dec 2020 10:44:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 285D1AC94;
 Fri, 11 Dec 2020 10:44:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e3b31482-37fe-4b26-b446-1148c34d401a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607683442; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=n27DzJ9lj/58T4Dlyk07z/5g/H0Ws6ihMzbTX8GAOps=;
	b=tSHxly/hrZ96yHIc1jT834Ej1qTU6u7G/TQ4vKV3bTrkysTMcD9MJFZdmChW0Yg+wAcvoe
	o5W+JSDdQmJA8muKsTQdyYTdSwOZAyukFKTBYhpPCZqP/+GivpjHuu5OOa74bjHwjNmZ2x
	WHFY8oT9bRG6hlKIj8BQJOIgUgx1IqA=
Subject: Re: [PATCH v3] xen: add support for automatic debug key actions in
 case of crash
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org, =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>
References: <20201126080340.6154-1-jgross@suse.com>
 <22190c77-eb35-5b72-7d72-34800c3f052f@suse.com>
 <98c45abd-8796-088c-e2a6-9ad494beeb9e@xen.org>
 <59f126a3-f716-345b-b464-746e6156c15a@suse.com>
 <1e305cf6-aa14-54cc-a77d-88bb38ba4c6e@xen.org>
 <7271b2f4-816a-5541-5402-50ea29218d81@suse.com>
 <077f3e02-0e07-1549-cc41-62b42177e19c@suse.com>
 <699e48ea-8807-a1f3-d2b9-dc918913ede8@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9f377d5e-da9c-1300-8010-099ea57b020c@suse.com>
Date: Fri, 11 Dec 2020 11:44:01 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <699e48ea-8807-a1f3-d2b9-dc918913ede8@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11.12.2020 11:22, Julien Grall wrote:
> Hi,
> 
> On 11/12/2020 07:24, Jan Beulich wrote:
>> On 11.12.2020 08:02, Jürgen Groß wrote:
>>> On 10.12.20 21:51, Julien Grall wrote:
>>>> Hi Jan,
>>>>
>>>> On 09/12/2020 14:29, Jan Beulich wrote:
>>>>> On 09.12.2020 13:11, Julien Grall wrote:
>>>>>> On 26/11/2020 11:20, Jan Beulich wrote:
>>>>>>> On 26.11.2020 09:03, Juergen Gross wrote:
>>>>>>>> When the host crashes it would sometimes be nice to have additional
>>>>>>>> debug data available which could be produced via debug keys, but
>>>>>>>> halting the server for manual intervention might be impossible due to
>>>>>>>> the need to reboot/kexec rather sooner than later.
>>>>>>>>
>>>>>>>> Add support for automatic debug key actions in case of crashes which
>>>>>>>> can be activated via boot- or runtime-parameter.
>>>>>>>>
>>>>>>>> Depending on the type of crash the desired data might be different, so
>>>>>>>> support different settings for the possible types of crashes.
>>>>>>>>
>>>>>>>> The parameter is "crash-debug" with the following syntax:
>>>>>>>>
>>>>>>>>      crash-debug-<type>=<string>
>>>>>>>>
>>>>>>>> with <type> being one of:
>>>>>>>>
>>>>>>>>      panic, hwdom, watchdog, kexeccmd, debugkey
>>>>>>>>
>>>>>>>> and <string> a sequence of debug key characters with '+' having the
>>>>>>>> special semantics of a 10 millisecond pause.
>>>>>>>>
>>>>>>>> So "crash-debug-watchdog=0+0qr" would result in special output in case
>>>>>>>> of watchdog triggered crash (dom0 state, 10 ms pause, dom0 state,
>>>>>>>> domain info, run queues).
>>>>>>>>
>>>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>>>>> ---
>>>>>>>> V2:
>>>>>>>> - switched special character '.' to '+' (Jan Beulich)
>>>>>>>> - 10 ms instead of 1 s pause (Jan Beulich)
>>>>>>>> - added more text to the boot parameter description (Jan Beulich)
>>>>>>>>
>>>>>>>> V3:
>>>>>>>> - added const (Jan Beulich)
>>>>>>>> - thorough test of crash reason parameter (Jan Beulich)
>>>>>>>> - kexeccmd case should depend on CONFIG_KEXEC (Jan Beulich)
>>>>>>>> - added dummy get_irq_regs() helper on Arm
>>>>>>>>
>>>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>>>>
>>>>>>> Except for the Arm aspect, where I'm not sure using
>>>>>>> guest_cpu_user_regs() is correct in all cases,
>>>>>>
>>>>>> I am not entirely sure I understand what get_irq_regs() is supposed to
>>>>>> return on x86. Is it the registers saved from the most recent
>>>>>> exception?
>>>>>
>>>>> An interrupt (not an exception) sets the underlying per-CPU
>>>>> variable, such that interested parties will know the real
>>>>> context is not guest or "normal" Xen code, but an IRQ.
>>>>
>>>> Thanks for the explanation. I am a bit confused as to why we need to
>>>> pass regs to handle_keypress(), because no-one seems to use it. Do you
>>>> have an explanation?
>>>
>>> dump_registers() (key 'd') is using it.
>>>
>>>>
>>>> To add to the confusion, it looks like get_irq_regs() may return
>>>> NULL. So sometimes we may pass guest_cpu_user_regs() (which may contain
>>>> garbage or a stale set).
>>>
>>> I guess this is a best effort approach.
>>
>> Indeed. If there are ways to make it "more best", we should of
>> course follow them. (Except before Dom0 starts, I'm afraid I
>> don't see though where garbage would come from. And even then,
>> just like for the idle vCPU-s, it shouldn't really be garbage,
>> or else this suggests missing initialization somewhere.)
> 
> So I decided to mimic what 'd' does to see what happens if this is 
> called during early boot.

But this isn't really relevant here: If you need to deal with a
crash during boot, just don't specify these command line options
(and that's on top of Jürgen's indication that 'd' may not be
very useful to specify here anyway, albeit now that I think
about this I'm not so sure anymore - panic() only logs the local
CPU's registers iirc, while 'd' would log everyone's). Of course
Jürgen could go and limit honoring of the option to sufficiently
high SYS_STATE_*. In particular at least the x86 crash you've
observed is - afaict - from the is_idle_vcpu(current) check in
dump_execstate(), which requires init_idle_domain() to have run
before.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 10:47:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 10:47:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50373.88998 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knfxM-0000wm-Gw; Fri, 11 Dec 2020 10:47:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50373.88998; Fri, 11 Dec 2020 10:47:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knfxM-0000wf-Dw; Fri, 11 Dec 2020 10:47:24 +0000
Received: by outflank-mailman (input) for mailman id 50373;
 Fri, 11 Dec 2020 10:47:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b3T3=FP=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1knfxL-0000wa-85
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 10:47:23 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cfca412b-e517-43ac-89d2-b59cd233735d;
 Fri, 11 Dec 2020 10:47:21 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0BBAlDDN005781
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Fri, 11 Dec 2020 11:47:14 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 233432E946C; Fri, 11 Dec 2020 11:47:08 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cfca412b-e517-43ac-89d2-b59cd233735d
Date: Fri, 11 Dec 2020 11:47:08 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: dom0 PV looping on search_pre_exception_table()
Message-ID: <20201211104708.GD1423@antioche.eu.org>
References: <20201209163049.GA6158@antioche.eu.org>
 <30a71c9d-3eff-3727-9c61-e387b5bccc95@citrix.com>
 <20201209185714.GS1469@antioche.eu.org>
 <6c06abf1-7efe-f02c-536a-337a2704e265@citrix.com>
 <20201210095139.GA455@antioche.eu.org>
 <4c3bff12-821b-83fb-e054-61b07b97fa70@citrix.com>
 <20201210170319.GG455@antioche.eu.org>
 <ed06a0f4-8468-addf-2797-be3ba3a2d607@citrix.com>
 <20201210173551.GJ455@antioche.eu.org>
 <b60639d9-5c27-ab86-eb97-f8627b3b32d2@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <b60639d9-5c27-ab86-eb97-f8627b3b32d2@citrix.com>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Fri, 11 Dec 2020 11:47:14 +0100 (MET)

On Thu, Dec 10, 2020 at 09:01:12PM +0000, Andrew Cooper wrote:
> I've repro'd the problem.
> 
> When I modify Xen to explicitly demand-map the LDT in the MMUEXT_SET_LDT
> hypercall, everything works fine.
> 
> Specifically, this delta:
> 
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index 723cc1070f..71a791d877 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -3742,12 +3742,31 @@ long do_mmuext_op(
>          else if ( (curr->arch.pv.ldt_ents != ents) ||
>                    (curr->arch.pv.ldt_base != ptr) )
>          {
> +            unsigned int err = 0, tmp;
> +
>              if ( pv_destroy_ldt(curr) )
>                  flush_tlb_local();
> 
>              curr->arch.pv.ldt_base = ptr;
>              curr->arch.pv.ldt_ents = ents;
>              load_LDT(curr);
> +
> +            printk("Probe new LDT\n");
> +            asm volatile (
> +                "mov %%es, %[tmp];\n\t"
> +                "1: mov %[sel], %%es;\n\t"
> +                "mov %[tmp], %%es;\n\t"
> +                "2:\n\t"
> +                ".section .fixup,\"ax\"\n"
> +                "3: mov $1, %[err];\n\t"
> +                "jmp 2b\n\t"
> +                ".previous\n\t"
> +                _ASM_EXTABLE(1b, 3b)
> +                : [err] "+r" (err),
> +                  [tmp] "=&r" (tmp)
> +                : [sel] "r" (0x3f)
> +                : "memory");
> +            printk(" => err %u\n", err);
>          }
>          break;
>      }
> 
> Which stashes %es, explicitly loads init's %ss selector to trigger the
> #PF and Xen's lazy mapping, then restores %es.

Yes, this works for dom0 too; I have it running multiuser.

> [...]
> 
> Presumably you've got no Meltdown mitigations going on within the NetBSD
> kernel? (I suspect not, seeing as changing Xen changes the behaviour,
> but it is worth asking).

No, there are no Meltdown mitigations for PV in NetBSD. As I see it,
for amd64 at least, the Xen kernel has to do it anyway, so it's not useful
to implement it in the guest's kernel. Did I miss something?

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 10:47:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 10:47:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50375.89011 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knfxU-0000zn-QY; Fri, 11 Dec 2020 10:47:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50375.89011; Fri, 11 Dec 2020 10:47:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knfxU-0000ze-Ms; Fri, 11 Dec 2020 10:47:32 +0000
Received: by outflank-mailman (input) for mailman id 50375;
 Fri, 11 Dec 2020 10:47:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mmUI=FP=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1knfxT-0000zP-LS
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 10:47:31 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.74]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0ffe559c-faa2-41a2-8854-333585e44b95;
 Fri, 11 Dec 2020 10:47:30 +0000 (UTC)
Received: from AS8PR04CA0059.eurprd04.prod.outlook.com (2603:10a6:20b:312::34)
 by DBBPR08MB4346.eurprd08.prod.outlook.com (2603:10a6:10:ca::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.13; Fri, 11 Dec
 2020 10:47:28 +0000
Received: from VE1EUR03FT056.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:312:cafe::b3) by AS8PR04CA0059.outlook.office365.com
 (2603:10a6:20b:312::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.13 via Frontend
 Transport; Fri, 11 Dec 2020 10:47:28 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT056.mail.protection.outlook.com (10.152.19.28) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12 via Frontend Transport; Fri, 11 Dec 2020 10:47:28 +0000
Received: ("Tessian outbound 6af064f543d4:v71");
 Fri, 11 Dec 2020 10:47:27 +0000
Received: from ea805283bef2.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C6E95099-EF90-498D-B83E-AF5D53533BB4.1; 
 Fri, 11 Dec 2020 10:46:50 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ea805283bef2.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 11 Dec 2020 10:46:50 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DBBPR08MB4629.eurprd08.prod.outlook.com (2603:10a6:10:f4::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.12; Fri, 11 Dec
 2020 10:46:48 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::11cb:318b:f0a0:f125]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::11cb:318b:f0a0:f125%5]) with mapi id 15.20.3654.013; Fri, 11 Dec 2020
 10:46:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0ffe559c-faa2-41a2-8854-333585e44b95
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5zLMZpv/HzrZx5MO2wX7WXWcoDxwiaK0i66zyzHvxlM=;
 b=MxWq9U2gBwp+03VgqYJwJxqHcXf8mJUUsD0+MZ+BvfURFoxilYkKvs88n0zhbiq/WFhlco/zTmBKox+oe1jZt/CKuBRKQsSFYwUWJOJ7BWN3DDZOPQyx+qIi0+UBLFBL7PwbBbQXKTGsuvScH4h3ZiZPM9PFckuXg57Mx82PFKU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 5db55475e6605dc3
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NAq2xfMuabPgpev0p8URFxz5/xeHeEI/Ex07pMxUREFNKxBIp4ZPPYQz/joVYjlMgWBhy6rW6P/1fs21GHEDZKF3IbFmi9VXEkj9XMQpfQPmXFs7I9rJTryF9/C3bwms0MraIcbyBKcG+YvRwJWk8gqGlRx/4gIH/HJhGK2h9wrIqXMy6k0Q2Ys3sjLIWU9b3DQhRHkyJIrUSE1DurtwfqtVxq1yH/MfD23nrvEgvOsh2sX+ifxLUbGFz71CRMztD5g3dt7HR+fyfmfmMCfOdKNzMM7fc6UBJjZVVxY4YhJaWv07XyJCaFnvY9XKqaaBxUIGimAqtQx/Ygd5Msp7jQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5zLMZpv/HzrZx5MO2wX7WXWcoDxwiaK0i66zyzHvxlM=;
 b=WWoPgKPBTzOXPFct3aqAJdGIspt+hspK6GsLCO3X3J0a+D8dg/PEzmAQsGkp3BahQMJ3bT7kCW+MCmlD78FwVHjrAt5o59svTg+WNcCz01hWjEayU5Ku1re6XiVMT6FesBkLZ1myvX1mLW4W7Klw15EnULFa5z+nPcALgO7mgtm6lEUQQkDOqnwYSzrZxnncsVCMnvHoxpirbdKGHH1FLR9PbI6uDYJ33xlMlRf155r6E4nARuNCVSnLPuFZcmPr6Owehi3MOGo7Qe6lJWtQpMD1AMtijD1vNyksUmjaYWAd03n6JlTEbr1BTHo5s1wUxySvk7JYN+b8HrogmOwVIw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5zLMZpv/HzrZx5MO2wX7WXWcoDxwiaK0i66zyzHvxlM=;
 b=MxWq9U2gBwp+03VgqYJwJxqHcXf8mJUUsD0+MZ+BvfURFoxilYkKvs88n0zhbiq/WFhlco/zTmBKox+oe1jZt/CKuBRKQsSFYwUWJOJ7BWN3DDZOPQyx+qIi0+UBLFBL7PwbBbQXKTGsuvScH4h3ZiZPM9PFckuXg57Mx82PFKU=
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 4/8] xen/arm: Remove support for Stage-1 translation on
 SMMUv3.
Thread-Topic: [PATCH v3 4/8] xen/arm: Remove support for Stage-1 translation
 on SMMUv3.
Thread-Index: AQHWzxXY77Y6xwXFHEep3JN+7VkcoanxG7eAgACb9YA=
Date: Fri, 11 Dec 2020 10:46:48 +0000
Message-ID: <0904E964-5987-4972-A643-A0FE43F5A99E@arm.com>
References: <cover.1607617848.git.rahul.singh@arm.com>
 <a5e3509bbc4ce21e0d9d176d7a2984ef40ad0ae3.1607617848.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2012101532110.6285@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2012101532110.6285@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 56e3bae8-81f0-456e-61f9-08d89dc2274b
x-ms-traffictypediagnostic: DBBPR08MB4629:|DBBPR08MB4346:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DBBPR08MB4346966ED762BB53D8A8D2CEFCCA0@DBBPR08MB4346.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8273;OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 gLg8oquWBQ6OQ9gymBK+vsEdZeLu2+9tZeSCecUXHJMx6h856XPM+T1TpgHz21xZ0csNVoluBczmuEIkgpvULy2aCYQA9+D6A/2xWASE7GNRxgl8FrMasXyksjEk2MlCnFMarITP39I0Ijw7DrjkkaZS3pnJt2zBAZMZv02VpzKfBw7skopQtt681O+ySL1K3b64r9cqeoMNG4vP1/KtX7SXI4xuHQzS6P6CNlSUNbdN/BlJ6xK3xLB1lAobCzXbHxbPiC1mmqhOwzlFCwrlNRm8vYKa0aA+2+gewQOnxbf74IPUdlI2o86ICcQEiolt0Yz5WJRj1oCXZiRMJmSx0gv9f9Hnrr8oZTxlgdlziXXFj3fqUqEC7RWe0hTtEXiU
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3500.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(366004)(396003)(346002)(39850400004)(136003)(2616005)(26005)(186003)(64756008)(66556008)(316002)(66446008)(6916009)(5660300002)(83380400001)(91956017)(8676002)(66476007)(6486002)(66946007)(71200400001)(54906003)(6506007)(53546011)(36756003)(86362001)(4326008)(8936002)(76116006)(2906002)(478600001)(33656002)(6512007)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?us-ascii?Q?YvSkXY9EVbILUu29chIexfNaycfwKQr5ulMELIRA8oVGTZQluVglp5Ytub53?=
 =?us-ascii?Q?Ksyq3vpziXYlSHljvGPcSYqtcYHPOL7Z3AnnNjOCf4G7fjp2qSc26CljJVR/?=
 =?us-ascii?Q?daFp4x2TTLonFy435ssxo8/yCwGQalylh2A7306hQPM50qKlA/7pS4FPnHgh?=
 =?us-ascii?Q?d4VwW0Ca6+TAggdY2SsnhXCVqgv6owS/Tg1SHkhf6LnKEulnUyQVzX93i3Ig?=
 =?us-ascii?Q?3QBGl1sKM+eP9bXMzgSP9e80nLcsnPhX/kp8iQLF82QVUL37tgQAiOZqL/cJ?=
 =?us-ascii?Q?+td8OOqg5x+23LZd70QlPPmWHcafVcBdzSTK02yeGH3Kyyx3AtTX+AOygrr8?=
 =?us-ascii?Q?zccRNEd7raSdUwJob/8jDTjAIZuvc3H2vKiCC642Aojugv+A8iR4cXhP7otp?=
 =?us-ascii?Q?FWdXZF7GqqetcZqJ7xo8cJaIewcGVPflAVhwMmG7K+kkYOiGIEdeAWLyXb1I?=
 =?us-ascii?Q?zLaQ2mabMwIW0gGRAZ9KKhidIw7WpnzBpEAmcqIykE38IeLI8ZQYFaefjcdr?=
 =?us-ascii?Q?GY168Y+KQCs6IFGZbGwKB3rmmMsusNkm8/WPjhd2VhhFQPYeSgNtNiOEEpBA?=
 =?us-ascii?Q?OJN8jhPEwjTbSLTJx1r6xqlphTZ8bCGCXU6mxeYolQ/+zrt6z/Bep0HcdSza?=
 =?us-ascii?Q?/x4p6PUPkCCHCMM21KxmFdBHZQbNR0jQj2P9t1BO7I7kpfSMeB1j8AMyKUHI?=
 =?us-ascii?Q?9jpTn71JiNx62qOzehLrc61g1iSqTIWILjGFRLQI3WB7lfZ9tmfb703mqHEW?=
 =?us-ascii?Q?Gz3DQYPmS2EerQxJ/yxe6d7ztZWIImou15QY4LSc/5niLsPYRsjCKLDjiksb?=
 =?us-ascii?Q?LWH80R2KfS4ez9dTppLcRZtCpaGGArICBXTqkyfThdUfWJ2rG+jgIy/An1yn?=
 =?us-ascii?Q?QbEGaUZNfIONIGui2kpA9uZAhCZGP3l5odr6KlK+7rKP41moJLVYYvnQSgiE?=
 =?us-ascii?Q?bg1+TcX0CdmSboQATFbJVkrFeoUPnUy4x+NjcvpbVYPy7O32QQ/ayDNbNT4R?=
 =?us-ascii?Q?tMFY?=
Content-Type: text/plain; charset="us-ascii"
Content-ID: <EFAFE71D63E0B54E933B0FA654546243@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4629
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT056.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	18a42812-31bf-4bee-5d25-08d89dc20fab
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Dmdg6zBFZcBzBC/itgBJ/WQbDcdOUly9BlnTJSdw6p3yP7/PuoZx2oS4i+eOEgH2Evzs4KlnnJDCKfZZU5WtAv4xzvVTvOJ6wITJqIfz6Up8Tn/LlnnlRMr7lhWAGehsoX1892w1EKOjhtkqlnUoZLxtAdn0kev4aUAF52GKvbNhIC/ZcpOPDv1OG9/MNyAxkSfxuJP957ScHUXFaX0Y2U2IsbuC9CF9F7XRrcdTyfMC7QeieDzzv/mQzeqcYlyflsrUF0+kVyMs2hyWwxAAyWe0Fdpv3zsxVdmvz54DF4/vl11OMj9G1pidTgTM2XVSuEfhRt5N0tdJ4G8Gk7EIBB0Ueiu+A2OLVOdTYmfFoYjuVtEHkAVIs1uN5rk+jyfgpxeOsy8jobI1JfONgN8Xmw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(346002)(39850400004)(396003)(376002)(46966005)(82740400003)(53546011)(82310400003)(81166007)(83380400001)(70206006)(6506007)(2616005)(8936002)(107886003)(70586007)(2906002)(6486002)(186003)(8676002)(6512007)(33656002)(54906003)(36756003)(26005)(86362001)(4326008)(356005)(6862004)(478600001)(316002)(5660300002)(47076004)(336012);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Dec 2020 10:47:28.0143
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 56e3bae8-81f0-456e-61f9-08d89dc2274b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT056.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4346

Hello Stefano,

> On 11 Dec 2020, at 1:28 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Thu, 10 Dec 2020, Rahul Singh wrote:
>> @@ -2087,29 +1693,8 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
>> 	}
>> 
>> 	/* Restrict the stage to what we can actually support */
>> -	if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S1))
>> -		smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
>> -	if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S2))
>> -		smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
>> -
>> -	switch (smmu_domain->stage) {
>> -	case ARM_SMMU_DOMAIN_S1:
>> -		ias = (smmu->features & ARM_SMMU_FEAT_VAX) ? 52 : 48;
>> -		ias = min_t(unsigned long, ias, VA_BITS);
>> -		oas = smmu->ias;
>> -		fmt = ARM_64_LPAE_S1;
>> -		finalise_stage_fn = arm_smmu_domain_finalise_s1;
>> -		break;
>> -	case ARM_SMMU_DOMAIN_NESTED:
>> -	case ARM_SMMU_DOMAIN_S2:
>> -		ias = smmu->ias;
>> -		oas = smmu->oas;
>> -		fmt = ARM_64_LPAE_S2;
>> -		finalise_stage_fn = arm_smmu_domain_finalise_s2;
>> -		break;
>> -	default:
>> -		return -EINVAL;
>> -	}
>> +	smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
>> +
> 
> Last time we agreed on adding an error message?

Yes, I added one in the next patch to report an error if Stage-2 is not supported:

    if (reg & IDR0_S2P)
        smmu->features |= ARM_SMMU_FEAT_TRANS_S2;

    if (!(reg & IDR0_S2P)) {
        dev_err(smmu->dev, "no stage-2 translation support!\n");
        return -ENXIO;
    }


Regards,
Rahul



From xen-devel-bounces@lists.xenproject.org Fri Dec 11 10:49:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 10:49:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50384.89022 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knfzd-0001Dl-87; Fri, 11 Dec 2020 10:49:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50384.89022; Fri, 11 Dec 2020 10:49:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knfzd-0001De-5C; Fri, 11 Dec 2020 10:49:45 +0000
Received: by outflank-mailman (input) for mailman id 50384;
 Fri, 11 Dec 2020 10:49:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1knfzb-0001DZ-26
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 10:49:43 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1knfza-0002sk-69; Fri, 11 Dec 2020 10:49:42 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1knfzZ-0007xN-N2; Fri, 11 Dec 2020 10:49:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=kUtO6OUIZS+BF3kaegfDuyN/BVTedtpv0m//7v4jA3s=; b=gFguw8J3IuHCrGphGyv9BWyyC8
	R4ZhdNhuYMpmLXXgMyhILmnJpz9z87Egn6h0VFpN0r7UUoa/Oid0mENFK2rqn9TfmQ3++VqSNDgvg
	y2XJ6fEMTTGadR61HcjpXlTFHD8Y6B1VJ7sGksj6AMrzeCTThscR11WPxSpbn1b3598Y=;
Subject: Re: [PATCH v3] xen: add support for automatic debug key actions in
 case of crash
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org
References: <20201126080340.6154-1-jgross@suse.com>
 <22190c77-eb35-5b72-7d72-34800c3f052f@suse.com>
 <98c45abd-8796-088c-e2a6-9ad494beeb9e@xen.org>
 <59f126a3-f716-345b-b464-746e6156c15a@suse.com>
 <1e305cf6-aa14-54cc-a77d-88bb38ba4c6e@xen.org>
 <7271b2f4-816a-5541-5402-50ea29218d81@suse.com>
 <077f3e02-0e07-1549-cc41-62b42177e19c@suse.com>
 <699e48ea-8807-a1f3-d2b9-dc918913ede8@xen.org>
 <18959d53-30d9-b702-81df-8a4051d61fb2@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <ece0eb78-4e5f-2c2d-598d-aaf126fbcd23@xen.org>
Date: Fri, 11 Dec 2020 10:49:39 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <18959d53-30d9-b702-81df-8a4051d61fb2@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 11/12/2020 10:38, Jürgen Groß wrote:
> On 11.12.20 11:22, Julien Grall wrote:
>> Hi,
>>
>> On 11/12/2020 07:24, Jan Beulich wrote:
>>> On 11.12.2020 08:02, Jürgen Groß wrote:
>>>> On 10.12.20 21:51, Julien Grall wrote:
>>>>> Hi Jan,
>>>>>
>>>>> On 09/12/2020 14:29, Jan Beulich wrote:
>>>>>> On 09.12.2020 13:11, Julien Grall wrote:
>>>>>>> On 26/11/2020 11:20, Jan Beulich wrote:
>>>>>>>> On 26.11.2020 09:03, Juergen Gross wrote:
>>>>>>>>> When the host crashes it would sometimes be nice to have 
>>>>>>>>> additional
>>>>>>>>> debug data available which could be produced via debug keys, but
>>>>>>>>> halting the server for manual intervention might be impossible 
>>>>>>>>> due to
>>>>>>>>> the need to reboot/kexec rather sooner than later.
>>>>>>>>>
>>>>>>>>> Add support for automatic debug key actions in case of crashes 
>>>>>>>>> which
>>>>>>>>> can be activated via boot- or runtime-parameter.
>>>>>>>>>
>>>>>>>>> Depending on the type of crash the desired data might be 
>>>>>>>>> different, so
>>>>>>>>> support different settings for the possible types of crashes.
>>>>>>>>>
>>>>>>>>> The parameter is "crash-debug" with the following syntax:
>>>>>>>>>
>>>>>>>>>      crash-debug-<type>=<string>
>>>>>>>>>
>>>>>>>>> with <type> being one of:
>>>>>>>>>
>>>>>>>>>      panic, hwdom, watchdog, kexeccmd, debugkey
>>>>>>>>>
>>>>>>>>> and <string> a sequence of debug key characters with '+' having 
>>>>>>>>> the
>>>>>>>>> special semantics of a 10 millisecond pause.
>>>>>>>>>
>>>>>>>>> So "crash-debug-watchdog=0+0qr" would result in special output 
>>>>>>>>> in case
>>>>>>>>> of watchdog triggered crash (dom0 state, 10 ms pause, dom0 state,
>>>>>>>>> domain info, run queues).
>>>>>>>>>
>>>>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>>>>>> ---
>>>>>>>>> V2:
>>>>>>>>> - switched special character '.' to '+' (Jan Beulich)
>>>>>>>>> - 10 ms instead of 1 s pause (Jan Beulich)
>>>>>>>>> - added more text to the boot parameter description (Jan Beulich)
>>>>>>>>>
>>>>>>>>> V3:
>>>>>>>>> - added const (Jan Beulich)
>>>>>>>>> - thorough test of crash reason parameter (Jan Beulich)
>>>>>>>>> - kexeccmd case should depend on CONFIG_KEXEC (Jan Beulich)
>>>>>>>>> - added dummy get_irq_regs() helper on Arm
>>>>>>>>>
>>>>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>>>>>
>>>>>>>> Except for the Arm aspect, where I'm not sure using
>>>>>>>> guest_cpu_user_regs() is correct in all cases,
>>>>>>>
>>>>>>> I am not entirely sure I understand what get_irq_regs() is supposed
>>>>>>> to return on x86. Is it the registers saved from the most recent
>>>>>>> exception?
>>>>>>
>>>>>> An interrupt (not an exception) sets the underlying per-CPU
>>>>>> variable, such that interested parties will know the real
>>>>>> context is not guest or "normal" Xen code, but an IRQ.
>>>>>
>>>>> Thanks for the explanation. I am a bit confused as to why we need to
>>>>> pass regs to handle_keypress(), because no-one seems to use it. Do you
>>>>> have an explanation?
>>>>
>>>> dump_registers() (key 'd') is using it.
>>>>
>>>>>
>>>>> To add to the confusion, it looks like get_irq_regs() may return
>>>>> NULL. So sometimes we may pass guest_cpu_user_regs() (which may contain
>>>>> garbage or a stale set).
>>>>
>>>> I guess this is a best effort approach.
>>>
>>> Indeed. If there are ways to make it "more best", we should of
>>> course follow them. (Except before Dom0 starts, I'm afraid I
>>> don't see though where garbage would come from. And even then,
>>> just like for the idle vCPU-s, it shouldn't really be garbage,
>>> or else this suggests missing initialization somewhere.)
>>
>> So I decided to mimic what 'd' does to see what happens if this is
>> called during early boot.
>>
>>
>> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
>> index 7fcff9af2a7e..9d33507a26eb 100644
>> --- a/xen/arch/arm/setup.c
>> +++ b/xen/arch/arm/setup.c
>> @@ -857,6 +857,8 @@ void __init start_xen(unsigned long boot_phys_offset,
>>        */
>>       system_state = SYS_STATE_boot;
>>
>> +    dump_execstate(guest_cpu_user_regs());
>> +
>>       vm_init();
>>
>>       if ( acpi_disabled )
>> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
>> index 30d6f375a3af..50fcf2e8d70e 100644
>> --- a/xen/arch/x86/setup.c
>> +++ b/xen/arch/x86/setup.c
>> @@ -1678,6 +1678,7 @@ void __init noreturn __start_xen(unsigned long 
>> mbi_p)
>>           end_boot_allocator();
>>
>>       system_state = SYS_STATE_boot;
>> +    dump_execstate(guest_cpu_user_regs());
>>       /*
>>        * No calls involving ACPI code should go between the setting of
>>        * SYS_STATE_boot and vm_init() (or else acpi_os_{,un}map_memory()
>>
>> It leads to crash on both Arm and x86.
>>
>> For the Arm crash:
>>
>> (XEN) Data Abort Trap. Syndrome=0x1c08006
>> (XEN) Walking Hypervisor VA 0x10 on CPU0 via TTBR 0x0000000065a7f000
>> (XEN) 0TH[0x0] = 0x0000000065a7ef7f
>> (XEN) 1ST[0x0] = 0x0000000065a7bf7f
>> (XEN) 2ND[0x0] = 0x0000000000000000
>> (XEN) CPU0: Unexpected Trap: Data Abort
>> (XEN) ----[ Xen-4.15-unstable  arm64  debug=y   Not tainted ]----
>> (XEN) CPU:    0
>> (XEN) PC:     0000000000219674 dump_execstate+0x58/0x1ec
>> (XEN) LR:     00000000002d77dc
>> (XEN) SP:     000000000030fdc0
>> (XEN) CPSR:   800003c9 MODE:64-bit EL2h (Hypervisor, handler)
>> (XEN)      X0: 0000000000000000  X1: 0000000000000000  X2: 0000000000007fff
>> (XEN)      X3: 00000000002b7198  X4: 0000000000000080  X5: 00000000002e9a68
>> (XEN)      X6: 0080808080808080  X7: fefefefefefeff09  X8: 7f7f7f7f7f7f7f7f
>> (XEN)      X9: 717164616f726051 X10: 7f7f7f7f7f7f7f7f X11: 0101010101010101
>> (XEN)     X12: 0000000000000008 X13: 00000000002b9a48 X14: 0000000000000000
>> (XEN)     X15: 0000000000400000 X16: 00000000002ba000 X17: 00000000002b9000
>> (XEN)     X18: 00000000002b9000 X19: 0000000000000000 X20: 000000000030feb0
>> (XEN)     X21: 0000000080000000 X22: 00000000002f0d30 X23: 00000000002f1d68
>> (XEN)     X24: 00000000002f0eb8 X25: 0000000040000000 X26: 0000000080000000
>> (XEN)     X27: 0000000000000018 X28: 000000000003f970  FP: 000000000030fdc0
>> (XEN)
>> (XEN)   VTCR_EL2: 00000000
>> (XEN)  VTTBR_EL2: 0000000000000000
>> (XEN)
>> (XEN)  SCTLR_EL2: 30cd183d
>> (XEN)    HCR_EL2: 0000000000000038
>> (XEN)  TTBR0_EL2: 0000000065a7f000
>> (XEN)
>> (XEN)    ESR_EL2: 97c08006
>> (XEN)  HPFAR_EL2: 0000000000000000
>> (XEN)    FAR_EL2: 0000000000000010
>> (XEN)
>> (XEN) Xen stack trace from sp=000000000030fdc0:
>> (XEN)    000000000030fdf0 00000000002d77dc 0000000000080000 000000007fffc000
>> (XEN)    0000000080000000 00000000002f0d30 000000007f68b250 00000000002001b8
>> (XEN)    0000000065932000 0000000065732000 00000000784f9000 0000000000000000
>> (XEN)    0000000000400000 0000000065a2ad30 0000000000000630 0000000000000001
>> (XEN)    0000000000000001 0000000000000001 0000000000000000 0000000000003000
>> (XEN)    00000000784f9000 00000000002bc8e4 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000300000000 0000000000000000 00000040ffffffff
>> (XEN)    00000000ffffffff 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN) Xen call trace:
>> (XEN)    [<0000000000219674>] dump_execstate+0x58/0x1ec (PC)
>> (XEN)    [<00000000002d77dc>] start_xen+0x3d0/0xcf8 (LR)
>> (XEN)    [<00000000002d77dc>] start_xen+0x3d0/0xcf8
>> (XEN)    [<00000000002001b8>] arm64/head.o#primary_switched+0x10/0x30
>> (XEN)
>> (XEN)
>> (XEN) ****************************************
>> (XEN) Panic on CPU 0:
>> (XEN) CPU0: Unexpected Trap: Data Abort
>> (XEN) ****************************************
>>
>> For the x86 crash:
>>
>> (XEN) Early fatal page fault at e008:ffff82d0402188b4 (cr2=0000000000000010, ec=0000)
>> (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:   C   ]----
>> (XEN) CPU:    0
>> (XEN) RIP:    e008:[<ffff82d0402188b4>] dump_execstate+0x42/0x167
>> (XEN) RFLAGS: 0000000000010086   CONTEXT: hypervisor
>> (XEN) rax: 0000000000000000   rbx: 0000000000000000   rcx: 0000000000000000
>> (XEN) rdx: ffff82d0404affff   rsi: 000000000000000a   rdi: ffff82d0404afef8
>> (XEN) rbp: ffff82d0404afd90   rsp: ffff82d0404afd80   r8:  0000000000000004
>> (XEN) r9:  0101010101010101   r10: 0f0f0f0f0f0f0f0f   r11: 5555555555555555
>> (XEN) r12: ffff82d0404afef8   r13: 0000000000800163   r14: ffff83000009dfb0
>> (XEN) r15: 0000000000000002   cr0: 0000000080050033   cr4: 00000000000000a0
>> (XEN) cr3: 00000000bfa9e000   cr2: 0000000000000010
>> (XEN) fsb: 0000000000000000   gsb: 0000000000000000   gss: 0000000000000000
>> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
>> (XEN) Xen code around <ffff82d0402188b4> (dump_execstate+0x42/0x167):
>> (XEN)  ff 7f 00 00 48 8b 40 c9 <48> 8b 40 10 66 81 38 ff 7f 75 49 3b 1d 23 18 27
>> (XEN) Xen stack trace from rsp=ffff82d0404afd80:
>> (XEN)    000000000023ffff 00000000000005ed ffff82d0404afee8 ffff82d0404378cb
>> (XEN)    0000000000000002 0000000000000002 0000000000000002 0000000000000001
>> (XEN)    0000000000000001 0000000000000001 0000000000000001 0000000000000000
>> (XEN)    00000000000001ff 0000000002a45fff 0000000000240000 0000000002a45000
>> (XEN)    0000000000100000 0000000000000000 00000000000001ff ffff82d040475c80
>> (XEN)    ffff82d000800163 ffff83000009dee0 ffff83000009dfb0 0000000000200001
>> (XEN)    0000000100000000 0000000100000000 ffff83000009df80 642ded38bf9fe4f3
>> (XEN)    bf9fed3500000000 bfaafe980009df73 0009df73bf9fe7ea 00000004bf9fed31
>> (XEN)    bfaafeb00009df01 0000000800000000 000000010000006e 0000000000000003
>> (XEN)    00000000000002f8 ffff82d0405b0000 ffff82d0404b0000 0000000000000002
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 ffff82d04020012f 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000e01000000000 0000000000000000 0000000000000000 00000000000000a0
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN) Xen call trace:
>> (XEN)    [<ffff82d0402188b4>] R dump_execstate+0x42/0x167
>> (XEN)    [<ffff82d0404378cb>] F __start_xen+0x1e10/0x2906
>> (XEN)    [<ffff82d04020012f>] F __high_start+0x8f/0x91
>> (XEN)
>> (XEN) Pagetable walk from 0000000000000010:
>> (XEN)  L4[0x000] = 00000000bfa54063 ffffffffffffffff
>> (XEN)  L3[0x000] = 00000000bfa50063 ffffffffffffffff
>> (XEN)  L2[0x000] = 00000000bfa4f063 ffffffffffffffff
>> (XEN)  L1[0x000] = 0000000000000000 ffffffffffffffff
>> (XEN)
>> (XEN) ****************************************
>> (XEN) Panic on CPU 0:
>> (XEN) FATAL TRAP: vec 14, #PF[0000] IN INTERRUPT CONTEXT
>> (XEN) ****************************************
>> (XEN)
>> (XEN) Reboot in five seconds...
>>
>> So I think guest_cpu_user_regs() is not quite ready yet to be called 
>> from panic().
> 
> guest_cpu_user_regs() isn't the problem, but dump_execstate().
> 
> This is one of the caveats from the added boot parameter doc: some debug
> keys might lead to problems. 'd' seems to be such a key when used for
> the panic() case and the panic() happens in early boot.

Right, I think we should be clearer in the documentation about the keys 
we know work.

> 
>>
>> A different approach may be to generate an exception and call the 
>> keyhandler from there. At least you know that the registers would 
>> always be accurate.
> 
> Or dump_execstate() is modified to accept NULL for regs and it will do
> nothing in case guest_cpu_user_regs() isn't valid (a test for idle vcpu
> might be the easiest way to determine that).

So Jan pointed out that current may not be initialized properly during 
early boot, so we may want to use "system_state" instead.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 10:51:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 10:51:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50392.89035 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kng1Z-00026n-N4; Fri, 11 Dec 2020 10:51:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50392.89035; Fri, 11 Dec 2020 10:51:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kng1Z-00026g-Jh; Fri, 11 Dec 2020 10:51:45 +0000
Received: by outflank-mailman (input) for mailman id 50392;
 Fri, 11 Dec 2020 10:51:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XUOP=FP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kng1Y-00026Z-KH
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 10:51:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 836895c9-967a-4368-b2ad-930048613977;
 Fri, 11 Dec 2020 10:51:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E0791AE30;
 Fri, 11 Dec 2020 10:51:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 836895c9-967a-4368-b2ad-930048613977
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607683898; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=cEPs72V6kbjOf2vtS1YWBD94WU9sx8YmyOMjFajxa18=;
	b=Il2kLVWQ+rZg9SGG2p2gb9CWyF/pAIdSTXJ/0TZdRsvxjk+YYHGvmRYvrSq21EqBFT5Rvo
	nMZq1y0jvaXBt9GqmE2yGjvjG4fMMPwWFRStQ5z5DkvOhnPw4PyucGiUYfTD9rxgqWP3p/
	aSo96o+TKPH8nsOZ6+Ag0rNGS+0EJgc=
Subject: Re: [PATCH v3] xen: add support for automatic debug key actions in
 case of crash
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org
References: <20201126080340.6154-1-jgross@suse.com>
 <22190c77-eb35-5b72-7d72-34800c3f052f@suse.com>
 <98c45abd-8796-088c-e2a6-9ad494beeb9e@xen.org>
 <59f126a3-f716-345b-b464-746e6156c15a@suse.com>
 <1e305cf6-aa14-54cc-a77d-88bb38ba4c6e@xen.org>
 <7271b2f4-816a-5541-5402-50ea29218d81@suse.com>
 <077f3e02-0e07-1549-cc41-62b42177e19c@suse.com>
 <699e48ea-8807-a1f3-d2b9-dc918913ede8@xen.org>
 <18959d53-30d9-b702-81df-8a4051d61fb2@suse.com>
 <ece0eb78-4e5f-2c2d-598d-aaf126fbcd23@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <683a9587-1e24-be6f-fa66-9c9050a00f66@suse.com>
Date: Fri, 11 Dec 2020 11:51:36 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <ece0eb78-4e5f-2c2d-598d-aaf126fbcd23@xen.org>

On 11.12.20 11:49, Julien Grall wrote:
>
>
> On 11/12/2020 10:38, Jürgen Groß wrote:
>> On 11.12.20 11:22, Julien Grall wrote:
>>> Hi,
>>>
>>> On 11/12/2020 07:24, Jan Beulich wrote:
>>>> On 11.12.2020 08:02, Jürgen Groß wrote:
>>>>> On 10.12.20 21:51, Julien Grall wrote:
>>>>>> Hi Jan,
>>>>>>
>>>>>> On 09/12/2020 14:29, Jan Beulich wrote:
>>>>>>> On 09.12.2020 13:11, Julien Grall wrote:
>>>>>>>> On 26/11/2020 11:20, Jan Beulich wrote:
>>>>>>>>> On 26.11.2020 09:03, Juergen Gross wrote:
>>>>>>>>>> When the host crashes it would sometimes be nice to have additional
>>>>>>>>>> debug data available which could be produced via debug keys, but
>>>>>>>>>> halting the server for manual intervention might be impossible due to
>>>>>>>>>> the need to reboot/kexec rather sooner than later.
>>>>>>>>>>
>>>>>>>>>> Add support for automatic debug key actions in case of crashes which
>>>>>>>>>> can be activated via boot- or runtime-parameter.
>>>>>>>>>>
>>>>>>>>>> Depending on the type of crash the desired data might be different, so
>>>>>>>>>> support different settings for the possible types of crashes.
>>>>>>>>>>
>>>>>>>>>> The parameter is "crash-debug" with the following syntax:
>>>>>>>>>>
>>>>>>>>>>     crash-debug-<type>=<string>
>>>>>>>>>>
>>>>>>>>>> with <type> being one of:
>>>>>>>>>>
>>>>>>>>>>     panic, hwdom, watchdog, kexeccmd, debugkey
>>>>>>>>>>
>>>>>>>>>> and <string> a sequence of debug key characters with '+' having the
>>>>>>>>>> special semantics of a 10 millisecond pause.
>>>>>>>>>>
>>>>>>>>>> So "crash-debug-watchdog=0+0qr" would result in special output in case
>>>>>>>>>> of a watchdog triggered crash (dom0 state, 10 ms pause, dom0 state,
>>>>>>>>>> domain info, run queues).
>>>>>>>>>>
>>>>>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>>>>>>> ---
>>>>>>>>>> V2:
>>>>>>>>>> - switched special character '.' to '+' (Jan Beulich)
>>>>>>>>>> - 10 ms instead of 1 s pause (Jan Beulich)
>>>>>>>>>> - added more text to the boot parameter description (Jan Beulich)
>>>>>>>>>>
>>>>>>>>>> V3:
>>>>>>>>>> - added const (Jan Beulich)
>>>>>>>>>> - thorough test of crash reason parameter (Jan Beulich)
>>>>>>>>>> - kexeccmd case should depend on CONFIG_KEXEC (Jan Beulich)
>>>>>>>>>> - added dummy get_irq_regs() helper on Arm
>>>>>>>>>>
>>>>>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>>>>>>
>>>>>>>>> Except for the Arm aspect, where I'm not sure using
>>>>>>>>> guest_cpu_user_regs() is correct in all cases,
>>>>>>>>
>>>>>>>> I am not entirely sure I understand what get_irq_regs() is supposed to
>>>>>>>> return on x86. Is it the registers saved from the most recent
>>>>>>>> exception?
>>>>>>>
>>>>>>> An interrupt (not an exception) sets the underlying per-CPU
>>>>>>> variable, such that interested parties will know the real
>>>>>>> context is not guest or "normal" Xen code, but an IRQ.
>>>>>>
>>>>>> Thanks for the explanation. I am a bit confused as to why we need to
>>>>>> give regs to handle_keypress() because no-one seems to use it. Do you
>>>>>> have an explanation?
>>>>>
>>>>> dump_registers() (key 'd') is using it.
>>>>>
>>>>>>
>>>>>> To add to the confusion, it looks like get_irq_regs() may return
>>>>>> NULL. So sometimes we may pass guest_cpu_user_regs() (which may
>>>>>> contain garbage or a stale set).
>>>>>
>>>>> I guess this is a best effort approach.
>>>>
>>>> Indeed. If there are ways to make it "more best", we should of
>>>> course follow them. (Except before Dom0 starts, I'm afraid I
>>>> don't see though where garbage would come from. And even then,
>>>> just like for the idle vCPU-s, it shouldn't really be garbage,
>>>> or else this suggests missing initialization somewhere.)
>>>
>>> So I decided to mimic what 'd' does to see what happens if this is
>>> called during early boot.
>>>
>>> [...]
>>>
>>> So I think guest_cpu_user_regs() is not quite ready yet to be called
>>> from panic().
>>
>> guest_cpu_user_regs() isn't the problem, but dump_execstate().
>>
>> This is one of the caveats from the added boot parameter doc: some debug
>> keys might lead to problems. 'd' seems to be such a key when used for
>> the panic() case and the panic() happens in early boot.
>
> Right, I think we should be clearer in the documentation about the keys
> we know work.
>
>>
>>>
>>> A different approach may be to generate an exception and call the
>>> keyhandler from there. At least you know that the registers would
>>> always be accurate.
>>
>> Or dump_execstate() is modified to accept NULL for regs and it will do
>> nothing in case guest_cpu_user_regs() isn't valid (a test for idle vcpu
>> might be the easiest way to determine that).
>
> So Jan pointed out that current may not be initialized properly during
> early boot. So we possibly want to use "system_state" instead.

Fine with me. Will send an updated patch.


Juergen




From xen-devel-bounces@lists.xenproject.org Fri Dec 11 10:57:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 10:57:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50403.89055 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kng71-0002Rl-ET; Fri, 11 Dec 2020 10:57:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50403.89055; Fri, 11 Dec 2020 10:57:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kng71-0002Re-Bb; Fri, 11 Dec 2020 10:57:23 +0000
Received: by outflank-mailman (input) for mailman id 50403;
 Fri, 11 Dec 2020 10:57:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kng70-0002RZ-2T
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 10:57:22 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kng6z-00033w-98; Fri, 11 Dec 2020 10:57:21 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kng6y-0008Q5-UC; Fri, 11 Dec 2020 10:57:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=ikNlRQpmBNuZOebDCi85Oyf+XUjW1ufYuBgwijt74HE=; b=YOB+seYxdEv+7XuAuLaz8z0hO7
	E50OGv0RVIv6cFSz+W7ubLUlF4xEkLB7cOQMHQ+Hw57kbpY/QjEn228ux3jlR6O0gWd4rpfBH621y
	UscS/9qIIubXNgoKnJzbBArS5tYGovDV52v8WPet5DXYGBNzHjSRmsm+arpvz8wDncjo=;
Subject: Re: [PATCH v3 4/5] evtchn: convert domain event lock to an r/w one
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <a333387e-f9e5-7051-569a-1a9a37da53ca@suse.com>
 <074be931-54b0-1b0f-72d8-5bd577884814@xen.org>
 <6e34fd25-14a2-f655-b019-aca94ce086c8@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <55dc24b4-88c6-1b22-411e-267231632377@xen.org>
Date: Fri, 11 Dec 2020 10:57:19 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <6e34fd25-14a2-f655-b019-aca94ce086c8@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 11/12/2020 10:32, Jan Beulich wrote:
> On 09.12.2020 12:54, Julien Grall wrote:
>> On 23/11/2020 13:29, Jan Beulich wrote:
>>> @@ -620,7 +620,7 @@ int evtchn_close(struct domain *d1, int
>>>        long           rc = 0;
>>>    
>>>     again:
>>> -    spin_lock(&d1->event_lock);
>>> +    write_lock(&d1->event_lock);
>>>    
>>>        if ( !port_is_valid(d1, port1) )
>>>        {
>>> @@ -690,13 +690,11 @@ int evtchn_close(struct domain *d1, int
>>>                    BUG();
>>>    
>>>                if ( d1 < d2 )
>>> -            {
>>> -                spin_lock(&d2->event_lock);
>>> -            }
>>> +                read_lock(&d2->event_lock);
>>
>> This change made me realize that I don't quite understand how the
>> rwlock is meant to work for event_lock. I was actually expecting this to
>> be a write_lock() given that state is changed in d2's events.
> 
> Well, the protection needs to be against racing changes, i.e.
> parallel invocations of this same function, or evtchn_close().
> It is debatable whether evtchn_status() and
> domain_dump_evtchn_info() would better also be locked out
> (other read_lock() uses aren't applicable to interdomain
> channels).
> 
>> Could you outline how a developer can find out whether he/she should
>> use read_lock or write_lock?
> 
> I could try to, but it would again be a port type dependent
> model, just like for the per-channel locks.

It is quite important to have a clear locking strategy (in particular 
for rwlocks) so we can make the correct decision on when to use 
read_lock() or write_lock().

> So I'd like it to
> be clarified first whether you aren't instead indirectly
> asking for these to become write_lock()

Well, I don't understand why this is a read_lock() (even with your 
previous explanation). I am not suggesting to switch to a write_lock(), 
but instead asking for the reasoning behind the decision.

>>> --- a/xen/common/rwlock.c
>>> +++ b/xen/common/rwlock.c
>>> @@ -102,6 +102,14 @@ void queue_write_lock_slowpath(rwlock_t
>>>        spin_unlock(&lock->lock);
>>>    }
>>>    
>>> +void _rw_barrier(rwlock_t *lock)
>>> +{
>>> +    check_barrier(&lock->lock.debug);
>>> +    smp_mb();
>>> +    while ( _rw_is_locked(lock) )
>>> +        arch_lock_relax();
>>> +    smp_mb();
>>> +}
>>
>> As I pointed out when this implementation was first proposed (see [1]),
>> there is a risk that the loop will never exit.
> 
> The [1] reference was missing, but I recall you saying so.
> 
>> I think the following implementation would be better (although it is ugly):
>>
>> write_lock();
>> /* do nothing */
>> write_unlock();
>>
>> This will act as a barrier between lock held before and after the call.
> 
> Right, and back then I indicated agreement. When getting to
> actually carry out the change, I realized though that then the less
> restrictive check_barrier() can't be used anymore (or to be precise,
> it could be used, but the stronger check_lock() would subsequently
> still come into play). This isn't a problem here, but would be for
> any IRQ-safe r/w lock that the barrier may want to be used on down
> the road.
> 
> Thinking about it, a read_lock() / read_unlock() pair would suffice
> though. But this would then still have check_lock() involved.
> 
> Given all of this, maybe it's better not to introduce the function
> at all and instead open-code the read_lock() / read_unlock() pair at
> the use site.

IIUC, the read_lock() would be sufficient because we only care about 
the "write" side and not the read side. Is that correct?

> 
>> As an aside, I think the introduction of rw_barrier() deserves to be in
>> a separate patch to help the review.
> 
> I'm aware there are differing views on this - to me, putting this in
> a separate patch would be the introduction of dead code. 

This is only dead code if we decide not to use rw_barrier() :).

The idea behind introducing rw_barrier() in its own patch is so you can 
explain why it was implemented like that. Arguably, this explanation 
can be added in the same patch...

There are other added benefits, such as giving a hint to the reviewer 
that this part will require more careful review. I am sure one will say 
that a reviewer should always be careful...

But, personally, my level of carefulness will depend on the author and 
the type of the patch.

Anyway, I am happy with the open-coded version with an explanation in 
the code/commit message.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 11:16:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 11:16:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50419.89078 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kngP5-0004Zu-EQ; Fri, 11 Dec 2020 11:16:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50419.89078; Fri, 11 Dec 2020 11:16:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kngP5-0004Zn-BT; Fri, 11 Dec 2020 11:16:03 +0000
Received: by outflank-mailman (input) for mailman id 50419;
 Fri, 11 Dec 2020 11:16:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b3T3=FP=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kngP4-0004Zi-4E
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 11:16:02 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 23b849e2-5745-4cc8-a9bf-4ee5f1c77afd;
 Fri, 11 Dec 2020 11:16:00 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0BBBFpcd012536
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Fri, 11 Dec 2020 12:15:52 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 3057D2E946C; Fri, 11 Dec 2020 12:15:46 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 23b849e2-5745-4cc8-a9bf-4ee5f1c77afd
Date: Fri, 11 Dec 2020 12:15:46 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: dom0 PV looping on search_pre_exception_table()
Message-ID: <20201211111546.GE1423@antioche.eu.org>
References: <20201209135908.GA4269@antioche.eu.org>
 <c612616a-3fcd-be93-7594-20c0c3b71b7a@citrix.com>
 <20201209154431.GA4913@antioche.eu.org>
 <52e1b10d-75d4-63ac-f91e-cb8f0dcca493@citrix.com>
 <20201209163049.GA6158@antioche.eu.org>
 <30a71c9d-3eff-3727-9c61-e387b5bccc95@citrix.com>
 <20201209185714.GS1469@antioche.eu.org>
 <6c06abf1-7efe-f02c-536a-337a2704e265@citrix.com>
 <20201210095139.GA455@antioche.eu.org>
 <2c345ef9-1f05-f883-d294-7ac1b3851f08@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <2c345ef9-1f05-f883-d294-7ac1b3851f08@suse.com>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Fri, 11 Dec 2020 12:15:52 +0100 (MET)

On Fri, Dec 11, 2020 at 09:58:54AM +0100, Jan Beulich wrote:
> Could you please revert 9ff970564764 ("x86/mm: drop guest_get_eff_l1e()")?
> I think there was a thinko there in that the change can't be split from
> the bigger one which was part of the originally planned set for XSA-286.
> We mustn't avoid the switching of page tables as long as
> guest_get_eff{,_kern}_l1e() makes use of the linear page tables.

Yes, reverting this commit also makes dom0 boot.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 11:19:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 11:19:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50425.89090 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kngSL-0004lK-UN; Fri, 11 Dec 2020 11:19:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50425.89090; Fri, 11 Dec 2020 11:19:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kngSL-0004lD-RQ; Fri, 11 Dec 2020 11:19:25 +0000
Received: by outflank-mailman (input) for mailman id 50425;
 Fri, 11 Dec 2020 11:19:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kngSK-0004l2-HT; Fri, 11 Dec 2020 11:19:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kngSK-0003Xk-8k; Fri, 11 Dec 2020 11:19:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kngSJ-0007Xw-Vt; Fri, 11 Dec 2020 11:19:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kngSJ-0002zO-VR; Fri, 11 Dec 2020 11:19:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0pNCnLOg0yYAAi4Tv5qA6+/ZYVD/H/F6BoRjSN0Omdo=; b=sTpV2ZzFIBR8tLUHEUw8EZJWPK
	pvvMRxTwwftS83Q8MmAEuPt0hA4krVUrQrh+Tt8UkEboA92FLqqGeC+BO2t04qWPpl3oo20/V/ZaD
	19kf80RY0ibyhpfkQqrotfrL2mpaRceNETscGZwn4cWPDueFpNa0ynKvWezP3U9QcMJc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157404-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157404: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-xsm:xen-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64:xen-build:fail:regression
    libvirt:build-i386-xsm:xen-build:fail:regression
    libvirt:build-i386:xen-build:fail:regression
    libvirt:build-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:build-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=641fd93de163988eca43990a2431a80750eea991
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Dec 2020 11:19:23 +0000

flight 157404 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157404/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-xsm               6 xen-build                fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64                   6 xen-build                fail REGR. vs. 151777
 build-i386-xsm                6 xen-build                fail REGR. vs. 151777
 build-i386                    6 xen-build                fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              641fd93de163988eca43990a2431a80750eea991
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  154 days
Failing since        151818  2020-07-11 04:18:52 Z  153 days  148 attempts
Testing same since   157404  2020-12-11 04:19:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu<tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 32541 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 11:21:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 11:21:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50433.89106 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kngUH-0005cR-Ci; Fri, 11 Dec 2020 11:21:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50433.89106; Fri, 11 Dec 2020 11:21:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kngUH-0005cK-8h; Fri, 11 Dec 2020 11:21:25 +0000
Received: by outflank-mailman (input) for mailman id 50433;
 Fri, 11 Dec 2020 11:21:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EbM9=FP=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kngUG-0005cF-AG
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 11:21:24 +0000
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5f6ccc01-fabb-4671-95f5-68f6f7db957e;
 Fri, 11 Dec 2020 11:21:22 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id y16so10497049ljk.1
 for <xen-devel@lists.xenproject.org>; Fri, 11 Dec 2020 03:21:22 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id v4sm851013lfa.55.2020.12.11.03.21.20
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 11 Dec 2020 03:21:21 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f6ccc01-fabb-4671-95f5-68f6f7db957e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=NfgAmYukiKDGaGUnrKC9VRycpt8PM73t5JHXPw/EDhU=;
        b=SbN2nrinVk1r0H7Rr0v3tBz+ppGdqmRX4FBiB30gjNsOlbQ+y4yWYlnJnePdYx0hJE
         810tHr2+/0Jx6tOE0yeUbZCY9nwZl/d/XBKuQNQjrxIby5b6bsuqMsMRQrjCEtFk/FEV
         LmeSVD/1rIvttY9OhJ4TwF8q71fTtdZeIzU4qMHZ8xCkXAEPALeGk2KSB5dqImwomd4F
         q60/ZMvlcNWyn9hTQvEWurLmA7ceg1zkt1qIp9xpNeYk4u2w/u0NiCPvqEvyTgQFom0/
         CH1xN6US0lzIsGTWRdqGP6bk6pvvMRi9f6GCO5zb28f3ixhohqJ9obRcFcXE2NncQ9lt
         YJRA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=NfgAmYukiKDGaGUnrKC9VRycpt8PM73t5JHXPw/EDhU=;
        b=IEcTzS7NrGZc9Pp3pH+QksP2QAeYrq5xWkkgTZXTsADKXU2Pg45gCq16j0IJSxTjAG
         NamY7TO0jYVcu/YqtOZdvGETEVVuNZEt505RaXTBRcshc9SGRYi+bPwlx4WrEHTWcNGI
         AwkVoNaa5RTYdwRdMyoqzZKbHi0FpHJslmWNGPup4e4824U+XBpUlWURrk1VI2o3eia4
         BvGcIeHBy31LCBr+vQHN390b7YD8CGyYlUwszeciUIfMXkW/7NtwlUuaapRJCNc0BNSB
         iyW7u+d/AJF5LKVtsm7UhiIxLwei4LSs31wZ9wdRbkXBqJVwCrXuJ3xLo5sw9Toir86L
         H7Bw==
X-Gm-Message-State: AOAM531Qu3oOAs8OTZuno3VTpfUdwdH52G+S7dUaYNAZO3r5UC9eKyg0
	m4AosRJXcgOFAGhAULWtk3Q=
X-Google-Smtp-Source: ABdhPJx8Lei+p6xROvk8lCR6Y61AenLOX+GFCaP+VAo8S31G//IG/ivW/kw9LaM5mksHdZg2zQDB/Q==
X-Received: by 2002:a2e:86c4:: with SMTP id n4mr4622683ljj.208.1607685681586;
        Fri, 11 Dec 2020 03:21:21 -0800 (PST)
Subject: Re: [PATCH V3 21/23] xen/arm: Add mapcache invalidation handling
To: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <julien.grall@arm.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-22-git-send-email-olekstysh@gmail.com>
 <alpine.DEB.2.21.2012091822300.20986@sstabellini-ThinkPad-T480s>
 <a6897469-f031-e49d-0b4c-b1aa10d66d6d@xen.org>
 <alpine.DEB.2.21.2012101443060.20986@sstabellini-ThinkPad-T480s>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <7c8a9ad9-2b18-7028-17bc-20ee5a341323@gmail.com>
Date: Fri, 11 Dec 2020 13:21:15 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2012101443060.20986@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 11.12.20 03:28, Stefano Stabellini wrote:

Hi Julien, Stefano

> On Thu, 10 Dec 2020, Julien Grall wrote:
>> On 10/12/2020 02:30, Stefano Stabellini wrote:
>>> On Mon, 30 Nov 2020, Oleksandr Tyshchenko wrote:
>>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>>
>>>> We need to send a mapcache invalidation request to qemu/demu every
>>>> time a page gets removed from a guest.
>>>>
>>>> At the moment, the Arm code doesn't explicitly remove the existing
>>>> mapping before inserting the new mapping. Instead, this is done
>>>> implicitly by __p2m_set_entry().
>>>>
>>>> So we need to recognize the case when the old entry is a RAM page *and*
>>>> the new MFN is different, in order to set the corresponding flag.
>>>> The most suitable place to do this is p2m_free_entry(), where
>>>> we can find the correct leaf type. The invalidation request
>>>> will be sent in do_trap_hypercall() later on.
>>> Why is it sent in do_trap_hypercall() ?
>> I believe this is following the approach used by x86. There have actually
>> been some discussions about it (see [1]).
>>
>> Leaving aside the toolstack case for now, AFAIK, the only way a guest can
>> modify its p2m is via a hypercall. Do you have an example otherwise?
> OK this is a very important assumption. We should write it down for sure.
> I think it is true today on ARM.
>
>
>> When sending the invalidation request, the vCPU will be blocked until all
>> the IOREQ servers have acknowledged the invalidation. So the hypercall
>> seems to be the best position to do it.
>>
>> Alternatively, we could use check_for_vcpu_work() to check if the mapcache
>> needs to be invalidated. The inconvenience is we would execute a few more
>> instructions in each entry/exit path.
> Yeah it would be more natural to call it from check_for_vcpu_work(). If
> we put it between #ifdef CONFIG_IOREQ_SERVER it wouldn't be bad. But I
> am not a fan of increasing the instructions on the exit path either.
>  From this point of view, putting it at the end of do_trap_hypercall is a
> nice trick actually. Let's just make sure it has a good comment on top.
>
>
>>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>> CC: Julien Grall <julien.grall@arm.com>
>>>>
>>>> ---
>>>> Please note, this is a split/cleanup/hardening of Julien's PoC:
>>>> "Add support for Guest IO forwarding to a device emulator"
>>>>
>>>> Changes V1 -> V2:
>>>>      - new patch, some changes were derived from (+ new explanation):
>>>>        xen/ioreq: Make x86's invalidate qemu mapcache handling common
>>>>      - put setting of the flag into __p2m_set_entry()
>>>>      - clarify the conditions when the flag should be set
>>>>      - use domain_has_ioreq_server()
>>>>      - update do_trap_hypercall() by adding local variable
>>>>
>>>> Changes V2 -> V3:
>>>>      - update patch description
>>>>      - move check to p2m_free_entry()
>>>>      - add a comment
>>>>      - use "curr" instead of "v" in do_trap_hypercall()
>>>> ---
>>>> ---
>>>>    xen/arch/arm/p2m.c   | 24 ++++++++++++++++--------
>>>>    xen/arch/arm/traps.c | 13 ++++++++++---
>>>>    2 files changed, 26 insertions(+), 11 deletions(-)
>>>>
>>>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>>> index 5b8d494..9674f6f 100644
>>>> --- a/xen/arch/arm/p2m.c
>>>> +++ b/xen/arch/arm/p2m.c
>>>> @@ -1,6 +1,7 @@
>>>>    #include <xen/cpu.h>
>>>>    #include <xen/domain_page.h>
>>>>    #include <xen/iocap.h>
>>>> +#include <xen/ioreq.h>
>>>>    #include <xen/lib.h>
>>>>    #include <xen/sched.h>
>>>>    #include <xen/softirq.h>
>>>> @@ -749,17 +750,24 @@ static void p2m_free_entry(struct p2m_domain *p2m,
>>>>        if ( !p2m_is_valid(entry) )
>>>>            return;
>>>> -    /* Nothing to do but updating the stats if the entry is a super-page. */
>>>> -    if ( p2m_is_superpage(entry, level) )
>>>> +    if ( p2m_is_superpage(entry, level) || (level == 3) )
>>>>        {
>>>> -        p2m->stats.mappings[level]--;
>>>> -        return;
>>>> -    }
>>>> +#ifdef CONFIG_IOREQ_SERVER
>>>> +        /*
>>>> +         * If this gets called (non-recursively) then either the entry
>>>> +         * was replaced by an entry with a different base (valid case) or
>>>> +         * the shattering of a superpage failed (error case).
>>>> +         * So, at worst, a spurious mapcache invalidation might be sent.
>>>> +         */
>>>> +        if ( domain_has_ioreq_server(p2m->domain) &&
>>>> +             (p2m->domain == current->domain) && p2m_is_ram(entry.p2m.type) )
>>>> +            p2m->domain->mapcache_invalidate = true;
>>> Why the (p2m->domain == current->domain) check? Shouldn't we set
>>> mapcache_invalidate to true anyway? What happens if p2m->domain !=
>>> current->domain? We wouldn't want the domain to lose the
>>> mapcache_invalidate notification.
>> This is also discussed in [1]. :) The main question is why would a
>> toolstack/device model modify the guest memory after boot?
>>
>> If we assume it does, then the device model would need to pause the domain
>> before modifying the RAM.
>>
>> We also need to make sure that all the IOREQ servers have invalidated
>> the mapcache before the domain run again.
>>
>> This would require quite a bit of work. I am not sure the effort is worth if
>> there are no active users today.
> OK, that explains why we think p2m->domain == current->domain, but why
> do we need to have a check for it right here?
>
> In other words, we don't think it is realistic to get here with
> p2m->domain != current->domain, but let's say that we do somehow. What's
> the best course of action? Probably, set mapcache_invalidate to true and
> possibly print a warning?
>
> Leaving mapcache_invalidate to false doesn't seem to be what we want to
> do?
>
>   
>>>>        BUILD_BUG_ON(NR_hypercalls < ARRAY_SIZE(arm_hypercall_table) );
>>>> @@ -1459,7 +1460,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
>>>>            return;
>>>>        }
>>>>    -    current->hcall_preempted = false;
>>>> +    curr->hcall_preempted = false;
>>>>          perfc_incra(hypercalls, *nr);
>>>>        call = arm_hypercall_table[*nr].fn;
>>>> @@ -1472,7 +1473,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
>>>>        HYPERCALL_RESULT_REG(regs) = call(HYPERCALL_ARGS(regs));
>>>>      #ifndef NDEBUG
>>>> -    if ( !current->hcall_preempted )
>>>> +    if ( !curr->hcall_preempted )
>>>>        {
>>>>            /* Deliberately corrupt parameter regs used by this hypercall. */
>>>>            switch ( arm_hypercall_table[*nr].nr_args ) {
>>>> @@ -1489,8 +1490,14 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
>>>>    #endif
>>>>          /* Ensure the hypercall trap instruction is re-executed. */
>>>> -    if ( current->hcall_preempted )
>>>> +    if ( curr->hcall_preempted )
>>>>            regs->pc -= 4;  /* re-execute 'hvc #XEN_HYPERCALL_TAG' */
>>>> +
>>>> +#ifdef CONFIG_IOREQ_SERVER
>>>> +    if ( unlikely(curr->domain->mapcache_invalidate) &&
>>>> +         test_and_clear_bool(curr->domain->mapcache_invalidate) )
>>>> +        ioreq_signal_mapcache_invalidate();
>>> Why not just:
>>>
>>> if ( unlikely(test_and_clear_bool(curr->domain->mapcache_invalidate)) )
>>>       ioreq_signal_mapcache_invalidate();
>>>
>> This seems to match the x86 code. My guess is they tried to prevent the cost
>> of the atomic operation if there is no chance mapcache_invalidate is true.
>>
>> I am split on whether the first check is worth it. The atomic operation should be
>> uncontended most of the time, so it should be quick. But it will always be
>> slower than just a read because there is always a store involved.
> I am not a fan of optimizations with unclear benefits :-)
>
>
>> On a related topic, Jan pointed out that the invalidation would not work
>> properly if you have multiple vCPU modifying the P2M at the same time.
>>
Thanks, Julien, for explaining all the bits in detail. Indeed, I followed 
how it was done on x86 (the place from which to send the invalidation 
request, the code that checks whether the flag is set, which at first 
glance appears odd, etc.)
and the review comments (latching current into a local variable, and 
making sure that a domain only sends the invalidation request for itself).
Regarding what to do if p2m->domain != current->domain in 
p2m_free_entry(): perhaps we could set the flag only if the guest is 
paused, and otherwise just print a warning. Thoughts?


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Fri Dec 11 11:45:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 11:45:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50467.89136 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kngr8-0007zI-Hi; Fri, 11 Dec 2020 11:45:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50467.89136; Fri, 11 Dec 2020 11:45:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kngr8-0007zB-EC; Fri, 11 Dec 2020 11:45:02 +0000
Received: by outflank-mailman (input) for mailman id 50467;
 Fri, 11 Dec 2020 11:45:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=23LR=FP=amazon.com=prvs=6077e6b67=havanur@srs-us1.protection.inumbo.net>)
 id 1kngr6-0007z6-Uu
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 11:45:01 +0000
Received: from smtp-fw-9103.amazon.com (unknown [207.171.188.200])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aba78b82-178c-45bc-99d3-79849b36a793;
 Fri, 11 Dec 2020 11:44:59 +0000 (UTC)
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-2a-538b0bfb.us-west-2.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-9103.sea19.amazon.com with ESMTP;
 11 Dec 2020 11:44:52 +0000
Received: from EX13D36EUA002.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan3.pdx.amazon.com [10.236.137.198])
 by email-inbound-relay-2a-538b0bfb.us-west-2.amazon.com (Postfix) with ESMTPS
 id 3A097A1E9A
 for <xen-devel@lists.xenproject.org>; Fri, 11 Dec 2020 11:44:52 +0000 (UTC)
Received: from EX13MTAUWB001.ant.amazon.com (10.43.161.207) by
 EX13D36EUA002.ant.amazon.com (10.43.165.193) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 11 Dec 2020 11:44:50 +0000
Received: from dev-dsk-havanur-1a-5f065856.eu-west-1.amazon.com
 (172.19.122.179) by mail-relay.amazon.com (10.43.161.249) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Fri, 11 Dec 2020 11:44:49 +0000
Received: by dev-dsk-havanur-1a-5f065856.eu-west-1.amazon.com (Postfix,
 from userid 11119479)
 id 1024C85312; Fri, 11 Dec 2020 11:44:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aba78b82-178c-45bc-99d3-79849b36a793
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
  t=1607687099; x=1639223099;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=bWTBNMn9wmpyCukdPNpbfEWEuuOq3TCX8H0gkb2gEpI=;
  b=tuAOI22QuflZNcXZX16UggofKUISr+5DqbdNbibYfmmv/yfIVOiZXGyU
   q0FYbS94ewRgeydE5WM7qWzSy0v+D9kSz1T3jUoiRsfxh6O+TzH8baGX5
   Y408m5A50Xr3qV5/Cczj0EPlDeNdQglVY1nfzjfefJSNiYbkNpo+Kqp4V
   A=;
X-IronPort-AV: E=Sophos;i="5.78,411,1599523200"; 
   d="scan'208";a="902266083"
From: Harsha Shamsundara Havanur <havanur@amazon.com>
To: <xen-devel@lists.xenproject.org>
CC: Harsha Shamsundara Havanur <havanur@amazon.com>
Subject: [XEN PATCH v1 1/1] Invalidate cache for cpus affinitized to the domain
Date: Fri, 11 Dec 2020 11:44:36 +0000
Message-ID: <aad47c43b7cd7a391492b8be7b881cd37e9764c7.1607686878.git.havanur@amazon.com>
X-Mailer: git-send-email 2.16.6
In-Reply-To: <cover.1607686878.git.havanur@amazon.com>
References: <cover.1607686878.git.havanur@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain
Precedence: Bulk

An HVM domain flushes the cache on all CPUs using the `flush_all`
macro, which operates on cpu_online_map, during
i) creation of a new domain,
ii) device-model operations, and
iii) domain destruction.

This triggers an IPI on all CPUs, thus affecting other
domains that are pinned to different pCPUs. This patch
restricts the cache flush to the set of CPUs affinitized to
the current domain, using `domain->dirty_cpumask`.

Signed-off-by: Harsha Shamsundara Havanur <havanur@amazon.com>
---
 xen/arch/x86/hvm/hvm.c     | 2 +-
 xen/arch/x86/hvm/mtrr.c    | 6 +++---
 xen/arch/x86/hvm/svm/svm.c | 2 +-
 xen/arch/x86/hvm/vmx/vmx.c | 2 +-
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 54e32e4fe8..ec247c7010 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2219,7 +2219,7 @@ void hvm_shadow_handle_cd(struct vcpu *v, unsigned long value)
             domain_pause_nosync(v->domain);
 
             /* Flush physical caches. */
-            flush_all(FLUSH_CACHE);
+            flush_mask(v->domain->dirty_cpumask, FLUSH_CACHE);
             hvm_set_uc_mode(v, 1);
 
             domain_unpause(v->domain);
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index fb051d59c3..0d804c1fa0 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -631,7 +631,7 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
                         break;
                     /* fall through */
                 default:
-                    flush_all(FLUSH_CACHE);
+                    flush_mask(d->dirty_cpumask, FLUSH_CACHE);
                     break;
                 }
                 return 0;
@@ -683,7 +683,7 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
     list_add_rcu(&range->list, &d->arch.hvm.pinned_cacheattr_ranges);
     p2m_memory_type_changed(d);
     if ( type != PAT_TYPE_WRBACK )
-        flush_all(FLUSH_CACHE);
+        flush_mask(d->dirty_cpumask, FLUSH_CACHE);
 
     return 0;
 }
@@ -785,7 +785,7 @@ void memory_type_changed(struct domain *d)
          d->vcpu && d->vcpu[0] )
     {
         p2m_memory_type_changed(d);
-        flush_all(FLUSH_CACHE);
+        flush_mask(d->dirty_cpumask, FLUSH_CACHE);
     }
 }
 
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index cfea5b5523..383e763d7d 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2395,7 +2395,7 @@ static void svm_vmexit_mce_intercept(
 static void svm_wbinvd_intercept(void)
 {
     if ( cache_flush_permitted(current->domain) )
-        flush_all(FLUSH_CACHE);
+        flush_mask(current->domain->dirty_cpumask, FLUSH_CACHE);
 }
 
 static void svm_vmexit_do_invalidate_cache(struct cpu_user_regs *regs,
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 86b8916a5d..a05c7036c4 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3349,7 +3349,7 @@ static void vmx_wbinvd_intercept(void)
         return;
 
     if ( cpu_has_wbinvd_exiting )
-        flush_all(FLUSH_CACHE);
+        flush_mask(current->domain->dirty_cpumask, FLUSH_CACHE);
     else
         wbinvd();
 }
-- 
2.16.6



From xen-devel-bounces@lists.xenproject.org Fri Dec 11 12:11:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 12:11:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50489.89166 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knhG6-0002ao-3R; Fri, 11 Dec 2020 12:10:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50489.89166; Fri, 11 Dec 2020 12:10:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knhG5-0002ah-Vd; Fri, 11 Dec 2020 12:10:49 +0000
Received: by outflank-mailman (input) for mailman id 50489;
 Fri, 11 Dec 2020 12:10:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XUOP=FP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1knhG4-0002Xw-H1
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 12:10:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0c2754de-0181-407e-a1ab-13766338e4c3;
 Fri, 11 Dec 2020 12:10:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 41E96AE87;
 Fri, 11 Dec 2020 12:10:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0c2754de-0181-407e-a1ab-13766338e4c3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607688646; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=aJZQMJEqjqCeKHnTDA0IPhE5vARBQoDmh6z4taBza3s=;
	b=P1ENv6ghYnMSOzdKwSZDHOniw7i+gHM9SWvjsPS+nXcpf4uhfjZJjCfiznixdHS7UHUGUl
	a34b0cvwxyIOGXHl1JLWLzB64bZDoXsOpneHkxjbVOOIAwHRwzjzh7nkMxX9MEEiCvDvNr
	htx20xTSyXpVilz7Cd7mz1QsfdAQFf8=
Subject: Re: [patch 27/30] xen/events: Only force affinity mask for percpu
 interrupts
To: boris.ostrovsky@oracle.com, Thomas Gleixner <tglx@linutronix.de>,
 LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>, afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org, Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>, linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>, David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>, intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>, linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>, Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>, linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>, Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>, Leon Romanovsky <leon@kernel.org>
References: <20201210192536.118432146@linutronix.de>
 <20201210194045.250321315@linutronix.de>
 <7f7af60f-567f-cdef-f8db-8062a44758ce@oracle.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <2164a0ce-0e0d-c7dc-ac97-87c8f384ad82@suse.com>
Date: Fri, 11 Dec 2020 13:10:43 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <7f7af60f-567f-cdef-f8db-8062a44758ce@oracle.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="qMVMfsvlAyIOVuhurM0AhbsuX1abhQOZS"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--qMVMfsvlAyIOVuhurM0AhbsuX1abhQOZS
Content-Type: multipart/mixed; boundary="1SKmisPo0wwQx0jet3HDKXRMk6SQ4aMth";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: boris.ostrovsky@oracle.com, Thomas Gleixner <tglx@linutronix.de>,
 LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>, afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org, Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>, linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>, David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>, intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>, linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>, Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>, linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>, Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>, Leon Romanovsky <leon@kernel.org>
Message-ID: <2164a0ce-0e0d-c7dc-ac97-87c8f384ad82@suse.com>
Subject: Re: [patch 27/30] xen/events: Only force affinity mask for percpu
 interrupts
References: <20201210192536.118432146@linutronix.de>
 <20201210194045.250321315@linutronix.de>
 <7f7af60f-567f-cdef-f8db-8062a44758ce@oracle.com>
In-Reply-To: <7f7af60f-567f-cdef-f8db-8062a44758ce@oracle.com>

--1SKmisPo0wwQx0jet3HDKXRMk6SQ4aMth
Content-Type: multipart/mixed;
 boundary="------------8ECAEB8E864B85BAB060713C"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------8ECAEB8E864B85BAB060713C
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit

On 11.12.20 00:20, boris.ostrovsky@oracle.com wrote:
> 
> On 12/10/20 2:26 PM, Thomas Gleixner wrote:
>> All event channel setups bind the interrupt on CPU0 or the target CPU for
>> percpu interrupts and overwrite the affinity mask with the corresponding
>> cpumask. That does not make sense.
>>
>> The XEN implementation of irqchip::irq_set_affinity() already picks a
>> single target CPU out of the affinity mask and the actual target is stored
>> in the effective CPU mask, so destroying the user chosen affinity mask
>> which might contain more than one CPU is wrong.
>>
>> Change the implementation so that the channel is bound to CPU0 at the XEN
>> level and leave the affinity mask alone. At startup of the interrupt
>> affinity will be assigned out of the affinity mask and the XEN binding will
>> be updated.
> 
> 
> If that's the case then I wonder whether we need this call at all and instead bind at startup time.

After some discussion with Thomas on IRC and xen-devel archaeology the
result is: this will be needed especially for systems running on a
single vcpu (e.g. small guests), as the .irq_set_affinity() callback
won't be called in this case when starting the irq.


Juergen

--------------8ECAEB8E864B85BAB060713C
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------8ECAEB8E864B85BAB060713C--

--1SKmisPo0wwQx0jet3HDKXRMk6SQ4aMth--

--qMVMfsvlAyIOVuhurM0AhbsuX1abhQOZS
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/TYcMFAwAAAAAACgkQsN6d1ii/Ey+q
kwgAhqGwSjkPCHuD6iXs+izA0i+SRbhYcA5DS/prsjTsYrIr31Nv0iAWAuq87gH+Uo5StBRXaRlR
Vh9HiOFFv8ScTgdoiZDUycGN07TFuj9NJGJp/TvD+OZN17OQt2w1Pw1JeRI5RNsVTm22OMUH4Om8
D5t0xrU0zymXmndnx8OZEQ/j0W+hCRjIoNpmjegRa1p8q12pzI9FJByuAhVVTqmcfucWD2sIXlFk
ZYAwwiA5sMnSj7UYTiR6lkIWMPv4D0FJYC1GwAMI6EONFeO6SBjMqZsWhymL1P1AU1WoSAe19C/e
DRzPDV1x+jKSYVArD4THJwjqoa7QDXngm7UxnYCYdg==
=Bajp
-----END PGP SIGNATURE-----

--qMVMfsvlAyIOVuhurM0AhbsuX1abhQOZS--


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 12:19:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 12:19:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50501.89178 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knhOT-0002t8-1h; Fri, 11 Dec 2020 12:19:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50501.89178; Fri, 11 Dec 2020 12:19:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knhOS-0002t1-TC; Fri, 11 Dec 2020 12:19:28 +0000
Received: by outflank-mailman (input) for mailman id 50501;
 Fri, 11 Dec 2020 12:19:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Reg8=FP=kaod.org=clg@srs-us1.protection.inumbo.net>)
 id 1knhOR-0002sw-Df
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 12:19:27 +0000
Received: from 3.mo52.mail-out.ovh.net (unknown [178.33.254.192])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3b88c49c-0ecf-4dd3-8f97-6e899678f2a0;
 Fri, 11 Dec 2020 12:19:25 +0000 (UTC)
Received: from mxplan5.mail.ovh.net (unknown [10.109.138.22])
 by mo52.mail-out.ovh.net (Postfix) with ESMTPS id 93D2921BC55;
 Fri, 11 Dec 2020 13:19:19 +0100 (CET)
Received: from kaod.org (37.59.142.104) by DAG4EX1.mxp5.local (172.16.2.31)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2044.4; Fri, 11 Dec
 2020 13:19:18 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3b88c49c-0ecf-4dd3-8f97-6e899678f2a0
Authentication-Results: garm.ovh; auth=pass (GARM-104R005826530b9-912c-480b-b60c-73d675feaa08,
                    9EEBF3925B94F143FF93F47EDF07ACA53746D722) smtp.auth=clg@kaod.org
X-OVh-ClientIp: 82.64.250.170
Subject: Re: [PATCH v2 3/3] net: checksum: Introduce fine control over
 checksum type
To: Bin Meng <bmeng.cn@gmail.com>, <qemu-devel@nongnu.org>
CC: Bin Meng <bin.meng@windriver.com>, Alistair Francis
	<alistair@alistair23.me>, Andrew Jeffery <andrew@aj.id.au>, Anthony Perard
	<anthony.perard@citrix.com>, Beniamino Galvani <b.galvani@gmail.com>, David
 Gibson <david@gibson.dropbear.id.au>, "Edgar E. Iglesias"
	<edgar.iglesias@gmail.com>, Jason Wang <jasowang@redhat.com>, Joel Stanley
	<joel@jms.id.au>, Li Zhijian <lizhijian@cn.fujitsu.com>, "Michael S. Tsirkin"
	<mst@redhat.com>, Paul Durrant <paul@xen.org>, Peter Chubb
	<peter.chubb@nicta.com.au>, Peter Maydell <peter.maydell@linaro.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Zhang Chen <chen.zhang@intel.com>,
	<qemu-arm@nongnu.org>, <qemu-ppc@nongnu.org>,
	<xen-devel@lists.xenproject.org>
References: <1607679312-51325-1-git-send-email-bmeng.cn@gmail.com>
 <1607679312-51325-3-git-send-email-bmeng.cn@gmail.com>
From: =?UTF-8?Q?C=c3=a9dric_Le_Goater?= <clg@kaod.org>
Message-ID: <74ef44be-2fbe-002b-a0da-0185e87ccea8@kaod.org>
Date: Fri, 11 Dec 2020 13:19:17 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <1607679312-51325-3-git-send-email-bmeng.cn@gmail.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Originating-IP: [37.59.142.104]
X-ClientProxiedBy: DAG3EX2.mxp5.local (172.16.2.22) To DAG4EX1.mxp5.local
 (172.16.2.31)
X-Ovh-Tracer-GUID: a13324fb-f895-43ae-934d-836cd71c89fa
X-Ovh-Tracer-Id: 7189715332854156243
X-VR-SPAMSTATE: OK
X-VR-SPAMSCORE: -100
X-VR-SPAMCAUSE: gggruggvucftvghtrhhoucdtuddrgedujedrudekvddgfeelucetufdoteggodetrfdotffvucfrrhhofhhilhgvmecuqfggjfdpvefjgfevmfevgfenuceurghilhhouhhtmecuhedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmnecujfgurhepuffvfhfhkffffgggjggtgfhisehtkeertddtfeejnecuhfhrohhmpeevrogurhhitggpnfgvpgfiohgrthgvrhcuoegtlhhgsehkrghougdrohhrgheqnecuggftrfgrthhtvghrnhepjeekudeuudevleegudeugeekleffveeludejteffiedvledvgfekueefudehheefnecukfhppedtrddtrddtrddtpdefjedrheelrddugedvrddutdegnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmohguvgepshhmthhpqdhouhhtpdhhvghlohepmhigphhlrghnhedrmhgrihhlrdhovhhhrdhnvghtpdhinhgvtheptddrtddrtddrtddpmhgrihhlfhhrohhmpegtlhhgsehkrghougdrohhrghdprhgtphhtthhopegsmhgvnhhgrdgtnhesghhmrghilhdrtghomh

On 12/11/20 10:35 AM, Bin Meng wrote:
> From: Bin Meng <bin.meng@windriver.com>
> 
> At present net_checksum_calculate() blindly calculates all types of
> checksums (IP, TCP, UDP). Some NICs may have a per-type setting in
> their BDs to control which checksums should be offloaded. To support
> such hardware behavior, introduce a 'csum_flag' parameter to the
> net_checksum_calculate() API to allow fine control over what type of
> checksum is calculated.
> 
> Existing users of this API are updated accordingly.
> 
> Signed-off-by: Bin Meng <bin.meng@windriver.com>

For the ftgmac100 part,

Reviewed-by: Cédric Le Goater <clg@kaod.org>

Thanks,

C.


> ---
> 
> Changes in v2:
> - update ftgmac100.c per Cédric Le Goater's suggestion
> - simplify fsl_etsec and imx_fec checksum logic
> 
>  include/net/checksum.h        |  7 ++++++-
>  hw/net/allwinner-sun8i-emac.c |  2 +-
>  hw/net/cadence_gem.c          |  2 +-
>  hw/net/fsl_etsec/rings.c      | 18 +++++++++---------
>  hw/net/ftgmac100.c            | 13 ++++++++++++-
>  hw/net/imx_fec.c              | 20 ++++++++------------
>  hw/net/virtio-net.c           |  2 +-
>  hw/net/xen_nic.c              |  2 +-
>  net/checksum.c                | 18 ++++++++++++++----
>  net/filter-rewriter.c         |  4 ++--
>  10 files changed, 55 insertions(+), 33 deletions(-)
> 
> diff --git a/include/net/checksum.h b/include/net/checksum.h
> index 05a0d27..7dec37e 100644
> --- a/include/net/checksum.h
> +++ b/include/net/checksum.h
> @@ -21,11 +21,16 @@
>  #include "qemu/bswap.h"
>  struct iovec;
>  
> +#define CSUM_IP     0x01
> +#define CSUM_TCP    0x02
> +#define CSUM_UDP    0x04
> +#define CSUM_ALL    (CSUM_IP | CSUM_TCP | CSUM_UDP)
> +
>  uint32_t net_checksum_add_cont(int len, uint8_t *buf, int seq);
>  uint16_t net_checksum_finish(uint32_t sum);
>  uint16_t net_checksum_tcpudp(uint16_t length, uint16_t proto,
>                               uint8_t *addrs, uint8_t *buf);
> -void net_checksum_calculate(uint8_t *data, int length);
> +void net_checksum_calculate(uint8_t *data, int length, int csum_flag);
>  
>  static inline uint32_t
>  net_checksum_add(int len, uint8_t *buf)
> diff --git a/hw/net/allwinner-sun8i-emac.c b/hw/net/allwinner-sun8i-emac.c
> index 38d3285..0427689 100644
> --- a/hw/net/allwinner-sun8i-emac.c
> +++ b/hw/net/allwinner-sun8i-emac.c
> @@ -514,7 +514,7 @@ static void allwinner_sun8i_emac_transmit(AwSun8iEmacState *s)
>          /* After the last descriptor, send the packet */
>          if (desc.status2 & TX_DESC_STATUS2_LAST_DESC) {
>              if (desc.status2 & TX_DESC_STATUS2_CHECKSUM_MASK) {
> -                net_checksum_calculate(packet_buf, packet_bytes);
> +                net_checksum_calculate(packet_buf, packet_bytes, CSUM_ALL);
>              }
>  
>              qemu_send_packet(nc, packet_buf, packet_bytes);
> diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
> index 7a53469..9a4474a 100644
> --- a/hw/net/cadence_gem.c
> +++ b/hw/net/cadence_gem.c
> @@ -1266,7 +1266,7 @@ static void gem_transmit(CadenceGEMState *s)
>  
>                  /* Is checksum offload enabled? */
>                  if (s->regs[GEM_DMACFG] & GEM_DMACFG_TXCSUM_OFFL) {
> -                    net_checksum_calculate(s->tx_packet, total_bytes);
> +                    net_checksum_calculate(s->tx_packet, total_bytes, CSUM_ALL);
>                  }
>  
>                  /* Update MAC statistics */
> diff --git a/hw/net/fsl_etsec/rings.c b/hw/net/fsl_etsec/rings.c
> index 628648a..121415a 100644
> --- a/hw/net/fsl_etsec/rings.c
> +++ b/hw/net/fsl_etsec/rings.c
> @@ -183,13 +183,11 @@ static void process_tx_fcb(eTSEC *etsec)
>      uint8_t *l3_header = etsec->tx_buffer + 8 + l3_header_offset;
>      /* L4 header */
>      uint8_t *l4_header = l3_header + l4_header_offset;
> +    int csum = 0;
>  
>      /* if packet is IP4 and IP checksum is requested */
>      if (flags & FCB_TX_IP && flags & FCB_TX_CIP) {
> -        /* do IP4 checksum (TODO This function does TCP/UDP checksum
> -         * but not sure if it also does IP4 checksum.) */
> -        net_checksum_calculate(etsec->tx_buffer + 8,
> -                etsec->tx_buffer_len - 8);
> +        csum |= CSUM_IP;
>      }
>      /* TODO Check the correct usage of the PHCS field of the FCB in case the NPH
>       * flag is on */
> @@ -201,9 +199,7 @@ static void process_tx_fcb(eTSEC *etsec)
>              /* if checksum is requested */
>              if (flags & FCB_TX_CTU) {
>                  /* do UDP checksum */
> -
> -                net_checksum_calculate(etsec->tx_buffer + 8,
> -                        etsec->tx_buffer_len - 8);
> +                csum |= CSUM_UDP;
>              } else {
>                  /* set checksum field to 0 */
>                  l4_header[6] = 0;
> @@ -211,10 +207,14 @@ static void process_tx_fcb(eTSEC *etsec)
>              }
>          } else if (flags & FCB_TX_CTU) { /* if TCP and checksum is requested */
>              /* do TCP checksum */
> -            net_checksum_calculate(etsec->tx_buffer + 8,
> -                                   etsec->tx_buffer_len - 8);
> +            csum |= CSUM_TCP;
>          }
>      }
> +
> +    if (csum) {
> +        net_checksum_calculate(etsec->tx_buffer + 8,
> +                               etsec->tx_buffer_len - 8, csum);
> +    }
>  }
>  
>  static void process_tx_bd(eTSEC         *etsec,
> diff --git a/hw/net/ftgmac100.c b/hw/net/ftgmac100.c
> index 782ff19..25685ba 100644
> --- a/hw/net/ftgmac100.c
> +++ b/hw/net/ftgmac100.c
> @@ -564,6 +564,7 @@ static void ftgmac100_do_tx(FTGMAC100State *s, uint32_t tx_ring,
>          ptr += len;
>          frame_size += len;
>          if (bd.des0 & FTGMAC100_TXDES0_LTS) {
> +            int csum = 0;
>  
>              /* Check for VLAN */
>              if (flags & FTGMAC100_TXDES1_INS_VLANTAG &&
> @@ -573,8 +574,18 @@ static void ftgmac100_do_tx(FTGMAC100State *s, uint32_t tx_ring,
>              }
>  
>              if (flags & FTGMAC100_TXDES1_IP_CHKSUM) {
> -                net_checksum_calculate(s->frame, frame_size);
> +                csum |= CSUM_IP;
>              }
> +            if (flags & FTGMAC100_TXDES1_TCP_CHKSUM) {
> +                csum |= CSUM_TCP;
> +            }
> +            if (flags & FTGMAC100_TXDES1_UDP_CHKSUM) {
> +                csum |= CSUM_UDP;
> +            }
> +            if (csum) {
> +                net_checksum_calculate(s->frame, frame_size, csum);
> +            }
> +
>              /* Last buffer in frame.  */
>              qemu_send_packet(qemu_get_queue(s->nic), s->frame, frame_size);
>              ptr = s->frame;
> diff --git a/hw/net/imx_fec.c b/hw/net/imx_fec.c
> index 2c14804..f03450c 100644
> --- a/hw/net/imx_fec.c
> +++ b/hw/net/imx_fec.c
> @@ -561,22 +561,18 @@ static void imx_enet_do_tx(IMXFECState *s, uint32_t index)
>          ptr += len;
>          frame_size += len;
>          if (bd.flags & ENET_BD_L) {
> +            int csum = 0;
> +
>              if (bd.option & ENET_BD_PINS) {
> -                struct ip_header *ip_hd = PKT_GET_IP_HDR(s->frame);
> -                if (IP_HEADER_VERSION(ip_hd) == 4) {
> -                    net_checksum_calculate(s->frame, frame_size);
> -                }
> +                csum |= (CSUM_TCP | CSUM_UDP);
>              }
>              if (bd.option & ENET_BD_IINS) {
> -                struct ip_header *ip_hd = PKT_GET_IP_HDR(s->frame);
> -                /* We compute checksum only for IPv4 frames */
> -                if (IP_HEADER_VERSION(ip_hd) == 4) {
> -                    uint16_t csum;
> -                    ip_hd->ip_sum = 0;
> -                    csum = net_raw_checksum((uint8_t *)ip_hd, sizeof(*ip_hd));
> -                    ip_hd->ip_sum = cpu_to_be16(csum);
> -                }
> +                csum |= CSUM_IP;
> +            }
> +            if (csum) {
> +                net_checksum_calculate(s->frame, frame_size, csum);
>              }
> +
>              /* Last buffer in frame.  */
>  
>              qemu_send_packet(qemu_get_queue(s->nic), s->frame, frame_size);
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 044ac95..4082be3 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -1471,7 +1471,7 @@ static void work_around_broken_dhclient(struct virtio_net_hdr *hdr,
>          (buf[12] == 0x08 && buf[13] == 0x00) && /* ethertype == IPv4 */
>          (buf[23] == 17) && /* ip.protocol == UDP */
>          (buf[34] == 0 && buf[35] == 67)) { /* udp.srcport == bootps */
> -        net_checksum_calculate(buf, size);
> +        net_checksum_calculate(buf, size, CSUM_UDP);
>          hdr->flags &= ~VIRTIO_NET_HDR_F_NEEDS_CSUM;
>      }
>  }
> diff --git a/hw/net/xen_nic.c b/hw/net/xen_nic.c
> index 00a7fdf..5c815b4 100644
> --- a/hw/net/xen_nic.c
> +++ b/hw/net/xen_nic.c
> @@ -174,7 +174,7 @@ static void net_tx_packets(struct XenNetDev *netdev)
>                      tmpbuf = g_malloc(XC_PAGE_SIZE);
>                  }
>                  memcpy(tmpbuf, page + txreq.offset, txreq.size);
> -                net_checksum_calculate(tmpbuf, txreq.size);
> +                net_checksum_calculate(tmpbuf, txreq.size, CSUM_ALL);
>                  qemu_send_packet(qemu_get_queue(netdev->nic), tmpbuf,
>                                   txreq.size);
>              } else {
> diff --git a/net/checksum.c b/net/checksum.c
> index dabd290..70f4eae 100644
> --- a/net/checksum.c
> +++ b/net/checksum.c
> @@ -57,7 +57,7 @@ uint16_t net_checksum_tcpudp(uint16_t length, uint16_t proto,
>      return net_checksum_finish(sum);
>  }
>  
> -void net_checksum_calculate(uint8_t *data, int length)
> +void net_checksum_calculate(uint8_t *data, int length, int csum_flag)
>  {
>      int mac_hdr_len, ip_len;
>      struct ip_header *ip;
> @@ -108,9 +108,11 @@ void net_checksum_calculate(uint8_t *data, int length)
>      }
>  
>      /* Calculate IP checksum */
> -    stw_he_p(&ip->ip_sum, 0);
> -    csum = net_raw_checksum((uint8_t *)ip, IP_HDR_GET_LEN(ip));
> -    stw_be_p(&ip->ip_sum, csum);
> +    if (csum_flag & CSUM_IP) {
> +        stw_he_p(&ip->ip_sum, 0);
> +        csum = net_raw_checksum((uint8_t *)ip, IP_HDR_GET_LEN(ip));
> +        stw_be_p(&ip->ip_sum, csum);
> +    }
>  
>      if (IP4_IS_FRAGMENT(ip)) {
>          return; /* a fragmented IP packet */
> @@ -128,6 +130,10 @@ void net_checksum_calculate(uint8_t *data, int length)
>      switch (ip->ip_p) {
>      case IP_PROTO_TCP:
>      {
> +        if (!(csum_flag & CSUM_TCP)) {
> +            return;
> +        }
> +
>          tcp_header *tcp = (tcp_header *)(ip + 1);
>  
>          if (ip_len < sizeof(tcp_header)) {
> @@ -148,6 +154,10 @@ void net_checksum_calculate(uint8_t *data, int length)
>      }
>      case IP_PROTO_UDP:
>      {
> +        if (!(csum_flag & CSUM_UDP)) {
> +            return;
> +        }
> +
>          udp_header *udp = (udp_header *)(ip + 1);
>  
>          if (ip_len < sizeof(udp_header)) {
> diff --git a/net/filter-rewriter.c b/net/filter-rewriter.c
> index e063a81..80caac5 100644
> --- a/net/filter-rewriter.c
> +++ b/net/filter-rewriter.c
> @@ -114,7 +114,7 @@ static int handle_primary_tcp_pkt(RewriterState *rf,
>              tcp_pkt->th_ack = htonl(ntohl(tcp_pkt->th_ack) + conn->offset);
>  
>              net_checksum_calculate((uint8_t *)pkt->data + pkt->vnet_hdr_len,
> -                                   pkt->size - pkt->vnet_hdr_len);
> +                                   pkt->size - pkt->vnet_hdr_len, CSUM_TCP);
>          }
>  
>          /*
> @@ -216,7 +216,7 @@ static int handle_secondary_tcp_pkt(RewriterState *rf,
>              tcp_pkt->th_seq = htonl(ntohl(tcp_pkt->th_seq) - conn->offset);
>  
>              net_checksum_calculate((uint8_t *)pkt->data + pkt->vnet_hdr_len,
> -                                   pkt->size - pkt->vnet_hdr_len);
> +                                   pkt->size - pkt->vnet_hdr_len, CSUM_TCP);
>          }
>      }
>  
> 



From xen-devel-bounces@lists.xenproject.org Fri Dec 11 12:37:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 12:37:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50522.89202 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knhfe-0004xi-Lk; Fri, 11 Dec 2020 12:37:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50522.89202; Fri, 11 Dec 2020 12:37:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knhfe-0004xb-IZ; Fri, 11 Dec 2020 12:37:14 +0000
Received: by outflank-mailman (input) for mailman id 50522;
 Fri, 11 Dec 2020 12:37:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YvCS=FP=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knhfd-0004xW-3y
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 12:37:13 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a214fe2f-cfd5-4381-a1b4-7a7728f98cc4;
 Fri, 11 Dec 2020 12:37:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a214fe2f-cfd5-4381-a1b4-7a7728f98cc4
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607690230;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=xeKMIUUY7UN7nseclqb6capkWQKCLYLlFPvEsM/avAA=;
	b=3520JJ+cuub+0iUSOK2WXjhkvCbFqeJNnWTo2fJ47aLTUyErKzNfEkULJPsX3Qa7RkBVwA
	4q6jw5fcFuYyZr/ciqFHaAnQVfC1Alp3kCIzCyANWRIH5wkikSXOxsM9LH0a3lH+xiAa45
	f0/UcyAfqGDvjb7eJDdGBHj9hJNdTg1RDGpqvbSImr23oJWz9hqAB7upD/U8t27689mDeg
	F/Tz1FEjoqnkk1C0t4mKGVaykVWwH0Pu53ShxAoYAYxl3sBd5Q86Au5G3wKiLThho8mDJo
	PrspxrgGCxFQ2+rt5HHBAUWr3eImu1NL/lsNiVCRl0iT/SNo3u6jwsn7CYROBw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607690230;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=xeKMIUUY7UN7nseclqb6capkWQKCLYLlFPvEsM/avAA=;
	b=BclOyliY9++BXeKdFZCu5z1BEMLnNc8fqr8CcstXRZSRH4lrLjensrS/wg9nxh2q1FTvD5
	HDIGPJaNNIgQp/DQ==
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 boris.ostrovsky@oracle.com, LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, "James E.J. Bottomley"
 <James.Bottomley@HansenPartnership.com>, Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>, linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>, Heiko Carstens
 <hca@linux.ibm.com>, linux-s390@vger.kernel.org, Jani Nikula
 <jani.nikula@linux.intel.com>, Joonas Lahtinen
 <joonas.lahtinen@linux.intel.com>, Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>, Pankaj
 Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>, Chris Wilson
 <chris@chris-wilson.co.uk>, Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, Tvrtko
 Ursulin <tvrtko.ursulin@linux.intel.com>, Linus Walleij
 <linus.walleij@linaro.org>, linux-gpio@vger.kernel.org, Lee Jones
 <lee.jones@linaro.org>, Jon Mason <jdmason@kudzu.us>, Dave Jiang
 <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>, Michal
 Simek <michal.simek@xilinx.com>, linux-pci@vger.kernel.org, Karthikeyan
 Mitran <m.karthikeyan@mobiveil.co.in>, Hou Zhiqiang
 <Zhiqiang.Hou@nxp.com>, Tariq Toukan <tariqt@nvidia.com>, "David S.
 Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org, linux-rdma@vger.kernel.org, Saeed Mahameed
 <saeedm@nvidia.com>, Leon Romanovsky <leon@kernel.org>
Subject: Re: [patch 27/30] xen/events: Only force affinity mask for percpu interrupts
In-Reply-To: <2164a0ce-0e0d-c7dc-ac97-87c8f384ad82@suse.com>
References: <20201210192536.118432146@linutronix.de> <20201210194045.250321315@linutronix.de> <7f7af60f-567f-cdef-f8db-8062a44758ce@oracle.com> <2164a0ce-0e0d-c7dc-ac97-87c8f384ad82@suse.com>
Date: Fri, 11 Dec 2020 13:37:10 +0100
Message-ID: <871rfwiknd.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

On Fri, Dec 11 2020 at 13:10, Jürgen Groß wrote:
> On 11.12.20 00:20, boris.ostrovsky@oracle.com wrote:
>>
>> On 12/10/20 2:26 PM, Thomas Gleixner wrote:
>>> All event channel setups bind the interrupt on CPU0 or the target CPU for
>>> percpu interrupts and overwrite the affinity mask with the corresponding
>>> cpumask. That does not make sense.
>>>
>>> The XEN implementation of irqchip::irq_set_affinity() already picks a
>>> single target CPU out of the affinity mask and the actual target is stored
>>> in the effective CPU mask, so destroying the user chosen affinity mask
>>> which might contain more than one CPU is wrong.
>>>
>>> Change the implementation so that the channel is bound to CPU0 at the XEN
>>> level and leave the affinity mask alone. At startup of the interrupt
>>> affinity will be assigned out of the affinity mask and the XEN binding will
>>> be updated.
>>
>>
>> If that's the case then I wonder whether we need this call at all and
>> instead bind at startup time.
>
> After some discussion with Thomas on IRC and xen-devel archaeology the
> result is: this will be needed especially for systems running on a
> single vcpu (e.g. small guests), as the .irq_set_affinity() callback
> won't be called in this case when starting the irq.

That's right, but not limited to ARM. The same problem exists on x86 UP.
So yes, the call makes sense, but the changelog is not really useful.
Let me add a comment to this.

Thanks,

        tglx


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 12:58:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 12:58:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50551.89237 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knhzd-00079K-UO; Fri, 11 Dec 2020 12:57:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50551.89237; Fri, 11 Dec 2020 12:57:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knhzd-00079D-R8; Fri, 11 Dec 2020 12:57:53 +0000
Received: by outflank-mailman (input) for mailman id 50551;
 Fri, 11 Dec 2020 12:57:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YvCS=FP=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knhzc-000798-Rz
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 12:57:52 +0000
Received: from galois.linutronix.de (unknown [2a0a:51c0:0:12e:550::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7a42b8ea-ef21-49d6-a180-34d5154b8e0c;
 Fri, 11 Dec 2020 12:57:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a42b8ea-ef21-49d6-a180-34d5154b8e0c
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607691469;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=JI1elAtJ+5DtZ6K623jhzzi7pTI2yY5tK5xHv6JaNRg=;
	b=IsUihC8x41gfVuZ56mJpZCBvWH6AGSSnlAtCqFu+Y6vHBUWXmKQx9YvU5B8lrG6utDrQMP
	8canZDEEsNDVhNNdoKfnFOQzTFOpz9W+psXaARAvFsUFtyOebsqoT9kLX8kBnNz5EacFR5
	rK53uTCJCAI+CQzV9BLaBuU/gNEQZy9UASonSM4NSF0ZsDg2qgrh9flw6frSO8WIrRyHsw
	fF1cIAuhmb0wJbfH0peqshua47BZpnwfG0pDc4EWPJORiwGLQuagVjgVxqHnf+tQ9+0hUT
	H0SNNDNOI9Tui6mw4dUpmVwGgPTgPo14TX1s4jNlbb7LIWdUurk9RmWQuaaNkw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607691469;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=JI1elAtJ+5DtZ6K623jhzzi7pTI2yY5tK5xHv6JaNRg=;
	b=9Ep94KYblAfQy2iSeDDH/NSIpD2gqoyh3N3s+6g4ZgyvdTP4pfDpIIp6ve3zDE8y9b1XGh
	NoOqscBMvs3mS3Bw==
To: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>, LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>,
 Jani Nikula <jani.nikula@linux.intel.com>, Joonas Lahtinen
 <joonas.lahtinen@linux.intel.com>, Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>,
 intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, "James
 E.J. Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller
 <deller@gmx.de>, afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org, Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>, Heiko Carstens
 <hca@linux.ibm.com>, linux-s390@vger.kernel.org, Pankaj Bharadiya
 <pankaj.laxminarayan.bharadiya@intel.com>, Chris Wilson
 <chris@chris-wilson.co.uk>, Wambui Karuga <wambui.karugax@gmail.com>,
 Linus Walleij <linus.walleij@linaro.org>, linux-gpio@vger.kernel.org, Lee
 Jones <lee.jones@linaro.org>, Jon Mason <jdmason@kudzu.us>, Dave Jiang
 <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>, Michal
 Simek <michal.simek@xilinx.com>, linux-pci@vger.kernel.org, Karthikeyan
 Mitran <m.karthikeyan@mobiveil.co.in>, Hou Zhiqiang
 <Zhiqiang.Hou@nxp.com>, Tariq Toukan <tariqt@nvidia.com>, "David S.
 Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org, linux-rdma@vger.kernel.org, Saeed Mahameed
 <saeedm@nvidia.com>, Leon Romanovsky <leon@kernel.org>, Boris Ostrovsky
 <boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Subject: Re: [patch 14/30] drm/i915/pmu: Replace open coded kstat_irqs() copy
In-Reply-To: <ad05af1a-5463-2a80-0887-7629721d6863@linux.intel.com>
References: <20201210192536.118432146@linutronix.de> <20201210194043.957046529@linutronix.de> <ad05af1a-5463-2a80-0887-7629721d6863@linux.intel.com>
Date: Fri, 11 Dec 2020 13:57:49 +0100
Message-ID: <87y2i4h54i.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain

On Fri, Dec 11 2020 at 10:13, Tvrtko Ursulin wrote:
> On 10/12/2020 19:25, Thomas Gleixner wrote:

>> 
>> Aside of that the count is per interrupt line and therefore takes
>> interrupts from other devices into account which share the interrupt line
>> and are not handled by the graphics driver.
>> 
>> Replace it with a pmu private count which only counts interrupts which
>> originate from the graphics card.
>> 
>> To avoid atomics or heuristics of some sort make the counter field
>> 'unsigned long'. That limits the count to 4e9 on 32bit which is a lot and
>> postprocessing can easily deal with the occasional wraparound.
>
> After my failed hasty sketch from last night I had a different one which 
> was kind of heuristics based (re-reading the upper dword and retrying if 
> it changed on 32-bit).

The problem is that there will be two separate modifications for the low
and high word. There are several ways the compiler can translate this,
but the problem is the same for all of them:

CPU 0                           CPU 1
        load low
        load high
        add  low, 1
        addc high, 0            
        store low               load high
--> NMI                         load low
                                load high and compare
        store high

You can't catch that. If this really becomes an issue you need a
sequence counter around it.
      

> But you are right - it is okay to at least start 
> like this today and if later there is a need we can either do that or 
> deal with wrap at PMU read time.

Right.

>> +/*
>> + * Interrupt statistic for PMU. Increments the counter only if the
>> + * interrupt originated from the GPU so interrupts from a device which
>> + * shares the interrupt line are not accounted.
>> + */
>> +static inline void pmu_irq_stats(struct drm_i915_private *priv,
>
> We never use priv as a local name, it should be either i915 or
> dev_priv.

Sure, will fix.

>> +	/*
>> +	 * A clever compiler translates that into INC. A not so clever one
>> +	 * should at least prevent store tearing.
>> +	 */
>> +	WRITE_ONCE(priv->pmu.irq_count, priv->pmu.irq_count + 1);
>
> Curious, probably more educational for me - given x86_32 and x86_64, and 
> the context of it getting called, what is the difference from just doing 
> irq_count++?

Several reasons:

    1) The compiler can pretty much do what it wants with cnt++
       including tearing and whatever. https://lwn.net/Articles/816850/
       for the full set of insanities.

       Not really a problem here, but

    2) It's annotating the reader and the writer side and documenting
       that this is subject to concurrency

    3) It will prevent KCSAN from complaining about the data race,
       i.e. concurrent modification while reading.

Thanks,

        tglx

>> --- a/drivers/gpu/drm/i915/i915_pmu.c
>> +++ b/drivers/gpu/drm/i915/i915_pmu.c
>> @@ -423,22 +423,6 @@ static enum hrtimer_restart i915_sample(
>>   	return HRTIMER_RESTART;
>>   }
>
> In this file you can also drop the #include <linux/irq.h> line.

Indeed.

Thanks,

        tglx


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 13:07:28 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157421-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157421: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4
X-Osstest-Versions-That:
    xen=777e3590f154e6a8af560dd318b9465fa168db20
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Dec 2020 13:07:23 +0000

flight 157421 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157421/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4
baseline version:
 xen                  777e3590f154e6a8af560dd318b9465fa168db20

Last test of basis   157293  2020-12-08 08:00:25 Z    3 days
Testing same since   157421  2020-12-11 11:03:57 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   777e3590f1..8e0fe4fe5f  8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 13:14:07 2020
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Rahul Singh <Rahul.Singh@arm.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Subject: Re: [PATCH v3 5/8] xen/device-tree: Add dt_property_match_string
 helper
Date: Fri, 11 Dec 2020 13:13:37 +0000
Message-ID: <04BD75B6-D4C7-425C-93CF-0C1572260F47@arm.com>
References: <cover.1607617848.git.rahul.singh@arm.com>
 <2cf4c10d0ce81290af96e29ee364df87c06ef849.1607617848.git.rahul.singh@arm.com>
In-Reply-To:
 <2cf4c10d0ce81290af96e29ee364df87c06ef849.1607617848.git.rahul.singh@arm.com>

Hi,

> On 10 Dec 2020, at 16:57, Rahul Singh <Rahul.Singh@arm.com> wrote:
>
> Import the Linux helper of_property_match_string. This function searches
> a string list property and returns the index of a specific string value.
>
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Thanks
Bertrand

> ---
> Changes in v3:
> - This patch is introduced in this version.
>
> ---
> xen/common/device_tree.c      | 27 +++++++++++++++++++++++++++
> xen/include/xen/device_tree.h | 12 ++++++++++++
> 2 files changed, 39 insertions(+)
>
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index e107c6f89f..18825e333e 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -208,6 +208,33 @@ int dt_property_read_string(const struct dt_device_node *np,
>     return 0;
> }
>
> +int dt_property_match_string(const struct dt_device_node *np,
> +                             const char *propname, const char *string)
> +{
> +    const struct dt_property *dtprop = dt_find_property(np, propname, NULL);
> +    size_t l;
> +    int i;
> +    const char *p, *end;
> +
> +    if ( !dtprop )
> +        return -EINVAL;
> +    if ( !dtprop->value )
> +        return -ENODATA;
> +
> +    p = dtprop->value;
> +    end = p + dtprop->length;
> +
> +    for ( i = 0; p < end; i++, p += l )
> +    {
> +        l = strnlen(p, end - p) + 1;
> +        if ( p + l > end )
> +            return -EILSEQ;
> +        if ( strcmp(string, p) == 0 )
> +            return i; /* Found it; return index */
> +    }
> +    return -ENODATA;
> +}
> +
> bool_t dt_device_is_compatible(const struct dt_device_node *device,
>                                const char *compat)
> {
> diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
> index f2ad22b79c..b02696be94 100644
> --- a/xen/include/xen/device_tree.h
> +++ b/xen/include/xen/device_tree.h
> @@ -400,6 +400,18 @@ static inline bool_t dt_property_read_bool(const struct dt_device_node *np,
> int dt_property_read_string(const struct dt_device_node *np,
>                             const char *propname, const char **out_string);
>
> +/**
> + * dt_property_match_string() - Find string in a list and return index
> + * @np: pointer to node containing string list property
> + * @propname: string list property name
> + * @string: pointer to string to search for in string list
> + *
> + * This function searches a string list property and returns the index
> + * of a specific string value.
> + */
> +int dt_property_match_string(const struct dt_device_node *np,
> +                             const char *propname, const char *string);
> +
> /**
>  * Checks if the given "compat" string matches one of the strings in
>  * the device's "compatible" property
> --
> 2.17.1
>



From xen-devel-bounces@lists.xenproject.org Fri Dec 11 13:15:50 2020
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Rahul Singh <Rahul.Singh@arm.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 8/8] xen/arm: smmuv3: Remove linux compatibility
 functions.
Date: Fri, 11 Dec 2020 13:15:04 +0000
Message-ID: <88AA5EB2-928C-42E7-9A57-DB430036D48E@arm.com>
References: <cover.1607617848.git.rahul.singh@arm.com>
 <c38df3122a9e74e2324936c8bd36d372cdc3009a.1607617848.git.rahul.singh@arm.com>
In-Reply-To:
 <c38df3122a9e74e2324936c8bd36d372cdc3009a.1607617848.git.rahul.singh@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 603256a8-b8c3-475d-3423-08d89dd6de71
x-ms-traffictypediagnostic: DB6PR08MB2693:|DBBPR08MB4553:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DBBPR08MB4553D0C5C763BF2C019502149DCA0@DBBPR08MB4553.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:873;OLM:873;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
Content-Type: text/plain; charset="us-ascii"
Content-ID: <79753DA0668DBF45A952618DCB218046@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Dec 2020 13:15:45.2118
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 603256a8-b8c3-475d-3423-08d89dd6de71

Hi Rahul,

> On 10 Dec 2020, at 16:57, Rahul Singh <Rahul.Singh@arm.com> wrote:
> 
> Replace all Linux-compatible device tree handling functions with the
> Xen functions.
> 
> Replace all Linux ktime functions with the Xen time functions.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> Changes in v3:
> - This patch is introduced in this version.
> 
> ---
> xen/drivers/passthrough/arm/smmu-v3.c | 32 +++++++--------------------
> 1 file changed, 8 insertions(+), 24 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> index 65b3db94ad..c19c56ebc8 100644
> --- a/xen/drivers/passthrough/arm/smmu-v3.c
> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> @@ -101,22 +101,6 @@ typedef unsigned int		gfp_t;
> 
> #define GFP_KERNEL		0
> 
> -/* Alias to Xen device tree helpers */
> -#define device_node			dt_device_node
> -#define of_phandle_args		dt_phandle_args
> -#define of_device_id		dt_device_match
> -#define of_match_node		dt_match_node
> -#define of_property_read_u32(np, pname, out)	\
> -		(!dt_property_read_u32(np, pname, out))
> -#define of_property_read_bool		dt_property_read_bool
> -#define of_parse_phandle_with_args	dt_parse_phandle_with_args
> -
> -/* Alias to Xen time functions */
> -#define ktime_t s_time_t
> -#define ktime_get()			(NOW())
> -#define ktime_add_us(t, i)		(t + MICROSECS(i))
> -#define ktime_compare(t, i)		(t > (i))
> -
> /* Alias to Xen allocation helpers */
> #define kzalloc(size, flags)	_xzalloc(size, sizeof(void *))
> #define kfree	xfree
> @@ -922,7 +906,7 @@ static void parse_driver_options(struct arm_smmu_device *smmu)
> 	int i = 0;
> 
> 	do {
> -		if (of_property_read_bool(smmu->dev->of_node,
> +		if (dt_property_read_bool(smmu->dev->of_node,
> 						arm_smmu_options[i].prop)) {
> 			smmu->options |= arm_smmu_options[i].opt;
> 			dev_notice(smmu->dev, "option %s\n",
> @@ -994,17 +978,17 @@ static void queue_inc_prod(struct arm_smmu_ll_queue *q)
>  */
> static int queue_poll_cons(struct arm_smmu_queue *q, bool sync, bool wfe)
> {
> -	ktime_t timeout;
> +	s_time_t timeout;
> 	unsigned int delay = 1, spin_cnt = 0;
> 
> 	/* Wait longer if it's a CMD_SYNC */
> -	timeout = ktime_add_us(ktime_get(), sync ?
> +	timeout = NOW() + MICROSECS(sync ?
> 					    ARM_SMMU_CMDQ_SYNC_TIMEOUT_US :
> 					    ARM_SMMU_POLL_TIMEOUT_US);
> 
> 	while (queue_sync_cons_in(q),
> 	      (sync ? !queue_empty(&q->llq) : queue_full(&q->llq))) {
> -		if (ktime_compare(ktime_get(), timeout) > 0)
> +		if ((NOW() > timeout) > 0)
> 			return -ETIMEDOUT;
> 
> 		if (wfe) {
> @@ -1232,13 +1216,13 @@ static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
>  */
> static int __arm_smmu_sync_poll_msi(struct arm_smmu_device *smmu, u32 sync_idx)
> {
> -	ktime_t timeout;
> +	s_time_t timeout;
> 	u32 val;
> 
> -	timeout = ktime_add_us(ktime_get(), ARM_SMMU_CMDQ_SYNC_TIMEOUT_US);
> +	timeout = NOW() + MICROSECS(ARM_SMMU_CMDQ_SYNC_TIMEOUT_US);
> 	val = smp_cond_load_acquire(&smmu->sync_count,
> 				    (int)(VAL - sync_idx) >= 0 ||
> -				    !ktime_before(ktime_get(), timeout));
> +				    !(NOW() < timeout));
> 
> 	return (int)(val - sync_idx) < 0 ? -ETIMEDOUT : 0;
> }
> @@ -2969,7 +2953,7 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev,
> 	u32 cells;
> 	int ret = -EINVAL;
> 
> -	if (of_property_read_u32(dev->of_node, "#iommu-cells", &cells))
> +	if (!dt_property_read_u32(dev->of_node, "#iommu-cells", &cells))
> 		dev_err(dev, "missing #iommu-cells property\n");
> 	else if (cells != 1)
> 		dev_err(dev, "invalid #iommu-cells value (%d)\n", cells);
> -- 
> 2.17.1
> 



From xen-devel-bounces@lists.xenproject.org Fri Dec 11 13:16:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 13:16:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50582.89294 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kniHc-00018q-Ix; Fri, 11 Dec 2020 13:16:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50582.89294; Fri, 11 Dec 2020 13:16:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kniHc-00018j-FO; Fri, 11 Dec 2020 13:16:28 +0000
Received: by outflank-mailman (input) for mailman id 50582;
 Fri, 11 Dec 2020 13:16:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Df1x=FP=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kniHb-00018Z-Ct
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 13:16:27 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com (unknown
 [40.107.15.87]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8addfdb6-df57-4ca3-b5a5-97dd1ed240c1;
 Fri, 11 Dec 2020 13:16:25 +0000 (UTC)
Received: from AS8PR04CA0262.eurprd04.prod.outlook.com (2603:10a6:20b:330::27)
 by DB6PR0801MB1752.eurprd08.prod.outlook.com (2603:10a6:4:3c::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.12; Fri, 11 Dec
 2020 13:16:23 +0000
Received: from AM5EUR03FT040.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:330:cafe::f2) by AS8PR04CA0262.outlook.office365.com
 (2603:10a6:20b:330::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.12 via Frontend
 Transport; Fri, 11 Dec 2020 13:16:23 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT040.mail.protection.outlook.com (10.152.17.148) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12 via Frontend Transport; Fri, 11 Dec 2020 13:16:22 +0000
Received: ("Tessian outbound 76bd5a04122f:v71");
 Fri, 11 Dec 2020 13:16:22 +0000
Received: from a45c0b59d6f5.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 F3D06D0D-D44A-4336-A657-AC0CCBD08270.1; 
 Fri, 11 Dec 2020 13:16:02 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a45c0b59d6f5.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 11 Dec 2020 13:16:02 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR08MB2693.eurprd08.prod.outlook.com (2603:10a6:6:1c::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.17; Fri, 11 Dec
 2020 13:16:00 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3654.018; Fri, 11 Dec 2020
 13:16:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8addfdb6-df57-4ca3-b5a5-97dd1ed240c1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7x7W0wue6NMEBQWtTlpsdnXGUyktP2p2dn+tYcWezpU=;
 b=Ho8aZRBILnmP7WRPqvLCCts6Xi+1VZh9+MffvGabsW+1o4VABogq5FsK2N43kaS/+rUCJiVQ7R/bwxiWtueS8fhrMnhE2kr3JotFi5ANHA+4isZV/q02E0lWWmrYjqnlqS/htAmkDYmtVogJn+ZpW0giR9Dvkg8xjFAzY6SkeH4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 19b4176c48bab463
X-CR-MTA-TID: 64aa7808
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Rahul Singh <Rahul.Singh@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 6/8] xen/arm: Remove Linux specific code that is not
 usable in XEN
Thread-Topic: [PATCH v3 6/8] xen/arm: Remove Linux specific code that is not
 usable in XEN
Thread-Index: AQHWzxYNKr+sDTzECkSEC1bUGUlHeKnx4V0A
Date: Fri, 11 Dec 2020 13:16:00 +0000
Message-ID: <7803EA32-61B6-443D-9E1D-32789659DBF0@arm.com>
References: <cover.1607617848.git.rahul.singh@arm.com>
 <91b9845a03068d92aeaaa86fa67d4d06b2824652.1607617848.git.rahul.singh@arm.com>
In-Reply-To:
 <91b9845a03068d92aeaaa86fa67d4d06b2824652.1607617848.git.rahul.singh@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
Content-Type: text/plain; charset="us-ascii"
Content-ID: <2E722718FDC41C4E95104B2D317B55FD@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Dec 2020 13:16:22.5842
 (UTC)

Hi,

> On 10 Dec 2020, at 16:57, Rahul Singh <Rahul.Singh@arm.com> wrote:
> 
> Remove code that is related to the following functionality:
> 1. struct io_pgtable_ops
> 2. struct io_pgtable_cfg
> 3. struct iommu_flush_ops
> 4. struct iommu_ops
> 5. module_param_named, MODULE_PARM_DESC, module_platform_driver,
>    MODULE_*
> 6. IOMMU domain-types
> 7. arm_smmu_set_bus_ops
> 8. iommu_device_sysfs_add, iommu_device_register,
>    iommu_device_set_fwnode
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> Changes in v3:
> - Commit message is updated to add more detail about what is removed in
>  this patch.
> - Remove instances of io_pgtable_cfg.
> - Added back the ARM_SMMU_FEAT_COHERENCY feature.
> 
> ---
> xen/drivers/passthrough/arm/smmu-v3.c | 475 ++------------------------
> 1 file changed, 21 insertions(+), 454 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> index 0f16c63c49..2966015e5d 100644
> --- a/xen/drivers/passthrough/arm/smmu-v3.c
> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> @@ -394,13 +394,7 @@
> #define ARM_SMMU_CMDQ_SYNC_TIMEOUT_US	1000000 /* 1s! */
> #define ARM_SMMU_CMDQ_SYNC_SPIN_COUNT	10
> 
> -#define MSI_IOVA_BASE			0x8000000
> -#define MSI_IOVA_LENGTH			0x100000
> -
> static bool disable_bypass = 1;
> -module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO);
> -MODULE_PARM_DESC(disable_bypass,
> -	"Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");
> 
> enum pri_resp {
> 	PRI_RESP_DENY = 0,
> @@ -552,6 +546,19 @@ struct arm_smmu_strtab_cfg {
> 	u32				strtab_base_cfg;
> };
> 
> +struct arm_lpae_s2_cfg {
> +	u64			vttbr;
> +	struct {
> +		u32			ps:3;
> +		u32			tg:2;
> +		u32			sh:2;
> +		u32			orgn:2;
> +		u32			irgn:2;
> +		u32			sl:2;
> +		u32			tsz:6;
> +	} vtcr;
> +};
> +
> /* An SMMUv3 instance */
> struct arm_smmu_device {
> 	struct device			*dev;
> @@ -633,7 +640,6 @@ struct arm_smmu_domain {
> 	struct arm_smmu_device		*smmu;
> 	struct mutex			init_mutex; /* Protects smmu pointer */
> 
> -	struct io_pgtable_ops		*pgtbl_ops;
> 	bool				non_strict;
> 	atomic_t			nr_ats_masters;
> 
> @@ -1493,7 +1499,6 @@ static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
> 	return ret ? -ETIMEDOUT : 0;
> }
> 
> -/* IO_PGTABLE API */
> static void arm_smmu_tlb_inv_context(void *cookie)
> {
> 	struct arm_smmu_domain *smmu_domain = cookie;
> @@ -1514,86 +1519,10 @@ static void arm_smmu_tlb_inv_context(void *cookie)
> 	arm_smmu_cmdq_issue_sync(smmu);
> }
> 
> -static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
> -					  size_t granule, bool leaf, void *cookie)
> -{
> -	struct arm_smmu_domain *smmu_domain = cookie;
> -	struct arm_smmu_device *smmu = smmu_domain->smmu;
> -	struct arm_smmu_cmdq_ent cmd = {
> -		.tlbi = {
> -			.leaf	= leaf,
> -			.addr	= iova,
> -		},
> -	};
> -
> -	if (!size)
> -		return;
> -
> -	cmd.opcode	= CMDQ_OP_TLBI_S2_IPA;
> -	cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
> -
> -	do {
> -		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> -		cmd.tlbi.addr += granule;
> -	} while (size -= granule);
> -}
> -
> -static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather,
> -					 unsigned long iova, size_t granule,
> -					 void *cookie)
> -{
> -	arm_smmu_tlb_inv_range_nosync(iova, granule, granule, true, cookie);
> -}
> -
> -static void arm_smmu_tlb_inv_walk(unsigned long iova, size_t size,
> -				  size_t granule, void *cookie)
> -{
> -	struct arm_smmu_domain *smmu_domain = cookie;
> -	struct arm_smmu_device *smmu = smmu_domain->smmu;
> -
> -	arm_smmu_tlb_inv_range_nosync(iova, size, granule, false, cookie);
> -	arm_smmu_cmdq_issue_sync(smmu);
> -}
> -
> -static void arm_smmu_tlb_inv_leaf(unsigned long iova, size_t size,
> -				  size_t granule, void *cookie)
> -{
> -	struct arm_smmu_domain *smmu_domain = cookie;
> -	struct arm_smmu_device *smmu = smmu_domain->smmu;
> -
> -	arm_smmu_tlb_inv_range_nosync(iova, size, granule, true, cookie);
> -	arm_smmu_cmdq_issue_sync(smmu);
> -}
> -
> -static const struct iommu_flush_ops arm_smmu_flush_ops = {
> -	.tlb_flush_all	= arm_smmu_tlb_inv_context,
> -	.tlb_flush_walk = arm_smmu_tlb_inv_walk,
> -	.tlb_flush_leaf = arm_smmu_tlb_inv_leaf,
> -	.tlb_add_page	= arm_smmu_tlb_inv_page_nosync,
> -};
> -
> -/* IOMMU API */
> -static bool arm_smmu_capable(enum iommu_cap cap)
> -{
> -	switch (cap) {
> -	case IOMMU_CAP_CACHE_COHERENCY:
> -		return true;
> -	case IOMMU_CAP_NOEXEC:
> -		return true;
> -	default:
> -		return false;
> -	}
> -}
> -
> -static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
> +static struct iommu_domain *arm_smmu_domain_alloc(void)
> {
> 	struct arm_smmu_domain *smmu_domain;
> 
> -	if (type != IOMMU_DOMAIN_UNMANAGED &&
> -	    type != IOMMU_DOMAIN_DMA &&
> -	    type != IOMMU_DOMAIN_IDENTITY)
> -		return NULL;
> -
> 	/*
> 	 * Allocate the domain and initialise some of its data structures.
> 	 * We can't really do anything meaningful until we've added a
> @@ -1603,12 +1532,6 @@ static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
> 	if (!smmu_domain)
> 		return NULL;
> 
> -	if (type == IOMMU_DOMAIN_DMA &&
> -	    iommu_get_dma_cookie(&smmu_domain->domain)) {
> -		kfree(smmu_domain);
> -		return NULL;
> -	}
> -
> 	mutex_init(&smmu_domain->init_mutex);
> 	INIT_LIST_HEAD(&smmu_domain->devices);
> 	spin_lock_init(&smmu_domain->devices_lock);
> @@ -1640,9 +1563,6 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
> 	struct arm_smmu_device *smmu = smmu_domain->smmu;
> 	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
> 
> -	iommu_put_dma_cookie(domain);
> -	free_io_pgtable_ops(smmu_domain->pgtbl_ops);
> -
> 	if (cfg->vmid)
> 		arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid);
> 
> @@ -1651,21 +1571,20 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
> 
> 
> static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
> -				       struct arm_smmu_master *master,
> -				       struct io_pgtable_cfg *pgtbl_cfg)
> +				       struct arm_smmu_master *master)
> {
> 	int vmid;
> +	struct arm_lpae_s2_cfg arm_lpae_s2_cfg;
> 	struct arm_smmu_device *smmu = smmu_domain->smmu;
> 	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
> -	typeof(&pgtbl_cfg->arm_lpae_s2_cfg.vtcr) vtcr;
> +	typeof(&arm_lpae_s2_cfg.vtcr) vtcr = &arm_lpae_s2_cfg.vtcr;
> 
> 	vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
> 	if (vmid < 0)
> 		return vmid;
> 
> -	vtcr = &pgtbl_cfg->arm_lpae_s2_cfg.vtcr;
> 	cfg->vmid	= (u16)vmid;
> -	cfg->vttbr	= pgtbl_cfg->arm_lpae_s2_cfg.vttbr;
> +	cfg->vttbr	= arm_lpae_s2_cfg.vttbr;
> 	cfg->vtcr	= FIELD_PREP(STRTAB_STE_2_VTCR_S2T0SZ, vtcr->tsz) |
> 			  FIELD_PREP(STRTAB_STE_2_VTCR_S2SL0, vtcr->sl) |
> 			  FIELD_PREP(STRTAB_STE_2_VTCR_S2IR0, vtcr->irgn) |
> @@ -1680,49 +1599,15 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
> 				    struct arm_smmu_master *master)
> {
> 	int ret;
> -	unsigned long ias, oas;
> -	enum io_pgtable_fmt fmt;
> -	struct io_pgtable_cfg pgtbl_cfg;
> -	struct io_pgtable_ops *pgtbl_ops;
> 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> -	struct arm_smmu_device *smmu = smmu_domain->smmu;
> -
> -	if (domain->type == IOMMU_DOMAIN_IDENTITY) {
> -		smmu_domain->stage = ARM_SMMU_DOMAIN_BYPASS;
> -		return 0;
> -	}
> 
> 	/* Restrict the stage to what we can actually support */
> 	smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
> 
> -
> -	pgtbl_cfg = (struct io_pgtable_cfg) {
> -		.pgsize_bitmap	= smmu->pgsize_bitmap,
> -		.ias		= ias,
> -		.oas		= oas,
> -		.coherent_walk	= smmu->features & ARM_SMMU_FEAT_COHERENCY,
> -		.tlb		= &arm_smmu_flush_ops,
> -		.iommu_dev	= smmu->dev,
> -	};
> -
> -	if (smmu_domain->non_strict)
> -		pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
> -
> -	pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain);
> -	if (!pgtbl_ops)
> -		return -ENOMEM;
> -
> -	domain->pgsize_bitmap = pgtbl_cfg.pgsize_bitmap;
> -	domain->geometry.aperture_end = (1UL << pgtbl_cfg.ias) - 1;
> -	domain->geometry.force_aperture = true;
> -
> -	ret = arm_smmu_domain_finalise_s2(smmu_domain, master, &pgtbl_cfg);
> -	if (ret < 0) {
> -		free_io_pgtable_ops(pgtbl_ops);
> +	ret = arm_smmu_domain_finalise_s2(smmu_domain, master);
> +	if (ret < 0)
> 		return ret;
> -	}
> 
> -	smmu_domain->pgtbl_ops = pgtbl_ops;
> 	return 0;
> }
> 
> @@ -1939,76 +1824,6 @@ out_unlock:
> 	return ret;
> }
> 
> -static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
> -			phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
> -{
> -	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
> -
> -	if (!ops)
> -		return -ENODEV;
> -
> -	return ops->map(ops, iova, paddr, size, prot);
> -}
> -
> -static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
> -			     size_t size, struct iommu_iotlb_gather *gather)
> -{
> -	int ret;
> -	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> -	struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
> -
> -	if (!ops)
> -		return 0;
> -
> -	ret = ops->unmap(ops, iova, size, gather);
> -	if (ret && arm_smmu_atc_inv_domain(smmu_domain, 0, iova, size))
> -		return 0;
> -
> -	return ret;
> -}
> -
> -static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain)
> -{
> -	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> -
> -	if (smmu_domain->smmu)
> -		arm_smmu_tlb_inv_context(smmu_domain);
> -}
> -
> -static void arm_smmu_iotlb_sync(struct iommu_domain *domain,
> -				struct iommu_iotlb_gather *gather)
> -{
> -	struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu;
> -
> -	if (smmu)
> -		arm_smmu_cmdq_issue_sync(smmu);
> -}
> -
> -static phys_addr_t
> -arm_smmu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
> -{
> -	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
> -
> -	if (domain->type == IOMMU_DOMAIN_IDENTITY)
> -		return iova;
> -
> -	if (!ops)
> -		return 0;
> -
> -	return ops->iova_to_phys(ops, iova);
> -}
> -
> -static struct platform_driver arm_smmu_driver;
> -
> -static
> -struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
> -{
> -	struct device *dev = driver_find_device_by_fwnode(&arm_smmu_driver.driver,
> -							  fwnode);
> -	put_device(dev);
> -	return dev ? dev_get_drvdata(dev) : NULL;
> -}
> -
> static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
> {
> 	unsigned long limit = smmu->strtab_cfg.num_l1_ents;
> @@ -2019,8 +1834,6 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
> 	return sid < limit;
> }
> 
> -static struct iommu_ops arm_smmu_ops;
> -
> static struct iommu_device *arm_smmu_probe_device(struct device *dev)
> {
> 	int i, ret;
> @@ -2028,16 +1841,12 @@ static struct iommu_device *arm_smmu_probe_device(struct device *dev)
> 	struct arm_smmu_master *master;
> 	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> 
> -	if (!fwspec || fwspec->ops != &arm_smmu_ops)
> +	if (!fwspec)
> 		return ERR_PTR(-ENODEV);
> 
> 	if (WARN_ON_ONCE(dev_iommu_priv_get(dev)))
> 		return ERR_PTR(-EBUSY);
> 
> -	smmu = arm_smmu_get_by_fwnode(fwspec->iommu_fwnode);
> -	if (!smmu)
> -		return ERR_PTR(-ENODEV);
> -
> 	master = kzalloc(sizeof(*master), GFP_KERNEL);
> 	if (!master)
> 		return ERR_PTR(-ENOMEM);
> @@ -2083,153 +1892,11 @@ err_free_master:
> 	return ERR_PTR(ret);
> }
> 
> -static void arm_smmu_release_device(struct device *dev)
> -{
> -	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> -	struct arm_smmu_master *master;
> -
> -	if (!fwspec || fwspec->ops != &arm_smmu_ops)
> -		return;
> -
> -	master = dev_iommu_priv_get(dev);
> -	arm_smmu_detach_dev(master);
> -	arm_smmu_disable_pasid(master);
> -	kfree(master);
> -	iommu_fwspec_free(dev);
> -}
> -
> -static struct iommu_group *arm_smmu_device_group(struct device *dev)
> -{
> -	struct iommu_group *group;
> -
> -	/*
> -	 * We don't support devices sharing stream IDs other than PCI RID
> -	 * aliases, since the necessary ID-to-device lookup becomes rather
> -	 * impractical given a potential sparse 32-bit stream ID space.
> -	 */
> -	if (dev_is_pci(dev))
> -		group = pci_device_group(dev);
> -	else
> -		group = generic_device_group(dev);
> -
> -	return group;
> -}
> -
> -static int arm_smmu_domain_get_attr(struct iommu_domain *domain,
> -				    enum iommu_attr attr, void *data)
> -{
> -	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> -
> -	switch (domain->type) {
> -	case IOMMU_DOMAIN_UNMANAGED:
> -		switch (attr) {
> -		case DOMAIN_ATTR_NESTING:
> -			*(int *)data = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED);
> -			return 0;
> -		default:
> -			return -ENODEV;
> -		}
> -		break;
> -	case IOMMU_DOMAIN_DMA:
> -		switch (attr) {
> -		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
> -			*(int *)data = smmu_domain->non_strict;
> -			return 0;
> -		default:
> -			return -ENODEV;
> -		}
> -		break;
> -	default:
> -		return -EINVAL;
> -	}
> -}
> -
> -static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
> -				    enum iommu_attr attr, void *data)
> -{
> -	int ret = 0;
> -	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> -
> -	mutex_lock(&smmu_domain->init_mutex);
> -
> -	switch (domain->type) {
> -	case IOMMU_DOMAIN_UNMANAGED:
> -		switch (attr) {
> -		case DOMAIN_ATTR_NESTING:
> -			if (smmu_domain->smmu) {
> -				ret = -EPERM;
> -				goto out_unlock;
> -			}
> -
> -			if (*(int *)data)
> -				smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED;
> -			else
> -				smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
> -			break;
> -		default:
> -			ret = -ENODEV;
> -		}
> -		break;
> -	case IOMMU_DOMAIN_DMA:
> -		switch(attr) {
> -		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
> -			smmu_domain->non_strict = *(int *)data;
> -			break;
> -		default:
> -			ret = -ENODEV;
> -		}
> -		break;
> -	default:
> -		ret = -EINVAL;
> -	}
> -
> -out_unlock:
> -	mutex_unlock(&smmu_domain->init_mutex);
> -	return ret;
> -}
> -
> static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args)
> {
> 	return iommu_fwspec_add_ids(dev, args->args, 1);
> }
> 
> -static void arm_smmu_get_resv_regions(struct device *dev,
> -				      struct list_head *head)
> -{
> -	struct iommu_resv_region *region;
> -	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> -
> -	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
> -					 prot, IOMMU_RESV_SW_MSI);
> -	if (!region)
> -		return;
> -
> -	list_add_tail(&region->list, head);
> -
> -	iommu_dma_get_resv_regions(dev, head);
> -}
> -
> -static struct iommu_ops arm_smmu_ops =3D {
> -	.capable		=3D arm_smmu_capable,
> -	.domain_alloc		=3D arm_smmu_domain_alloc,
> -	.domain_free		=3D arm_smmu_domain_free,
> -	.attach_dev		=3D arm_smmu_attach_dev,
> -	.map			=3D arm_smmu_map,
> -	.unmap			=3D arm_smmu_unmap,
> -	.flush_iotlb_all	=3D arm_smmu_flush_iotlb_all,
> -	.iotlb_sync		=3D arm_smmu_iotlb_sync,
> -	.iova_to_phys		=3D arm_smmu_iova_to_phys,
> -	.probe_device		=3D arm_smmu_probe_device,
> -	.release_device		=3D arm_smmu_release_device,
> -	.device_group		=3D arm_smmu_device_group,
> -	.domain_get_attr	=3D arm_smmu_domain_get_attr,
> -	.domain_set_attr	=3D arm_smmu_domain_set_attr,
> -	.of_xlate		=3D arm_smmu_of_xlate,
> -	.get_resv_regions	=3D arm_smmu_get_resv_regions,
> -	.put_resv_regions	=3D generic_iommu_put_resv_regions,
> -	.pgsize_bitmap		=3D -1UL, /* Restricted during device attach */
> -};
> -
> /* Probing and initialisation functions */
> static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
> 				   struct arm_smmu_queue *q,
> @@ -2929,16 +2596,6 @@ static int arm_smmu_device_hw_probe(struct arm_smm=
u_device *smmu)
> 		smmu->oas =3D 48;
> 	}
>=20
> -	if (arm_smmu_ops.pgsize_bitmap =3D=3D -1UL)
> -		arm_smmu_ops.pgsize_bitmap =3D smmu->pgsize_bitmap;
> -	else
> -		arm_smmu_ops.pgsize_bitmap |=3D smmu->pgsize_bitmap;
> -
> -	/* Set the DMA mask for our table walker */
> -	if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(smmu->oas)))
> -		dev_warn(smmu->dev,
> -			 "failed to set DMA mask for table walker\n");
> -
> 	smmu->ias =3D max(smmu->ias, smmu->oas);
>=20
> 	dev_info(smmu->dev, "ias %lu-bit, oas %lu-bit (features 0x%08x)\n",
> @@ -3018,43 +2675,6 @@ static unsigned long arm_smmu_resource_size(struct=
 arm_smmu_device *smmu)
> 		return SZ_128K;
> }
>=20
> -static int arm_smmu_set_bus_ops(struct iommu_ops *ops)
> -{
> -	int err;
> -
> -#ifdef CONFIG_PCI
> -	if (pci_bus_type.iommu_ops !=3D ops) {
> -		err =3D bus_set_iommu(&pci_bus_type, ops);
> -		if (err)
> -			return err;
> -	}
> -#endif
> -#ifdef CONFIG_ARM_AMBA
> -	if (amba_bustype.iommu_ops !=3D ops) {
> -		err =3D bus_set_iommu(&amba_bustype, ops);
> -		if (err)
> -			goto err_reset_pci_ops;
> -	}
> -#endif
> -	if (platform_bus_type.iommu_ops !=3D ops) {
> -		err =3D bus_set_iommu(&platform_bus_type, ops);
> -		if (err)
> -			goto err_reset_amba_ops;
> -	}
> -
> -	return 0;
> -
> -err_reset_amba_ops:
> -#ifdef CONFIG_ARM_AMBA
> -	bus_set_iommu(&amba_bustype, NULL);
> -#endif
> -err_reset_pci_ops: __maybe_unused;
> -#ifdef CONFIG_PCI
> -	bus_set_iommu(&pci_bus_type, NULL);
> -#endif
> -	return err;
> -}
> -
> static void __iomem *arm_smmu_ioremap(struct device *dev, resource_size_t=
 start,
> 				      resource_size_t size)
> {
> @@ -3147,68 +2767,15 @@ static int arm_smmu_device_probe(struct platform_=
device *pdev)
> 	if (ret)
> 		return ret;
>=20
> -	/* Record our private device structure */
> -	platform_set_drvdata(pdev, smmu);
> -
> 	/* Reset the device */
> 	ret =3D arm_smmu_device_reset(smmu, bypass);
> 	if (ret)
> 		return ret;
>=20
> -	/* And we're up. Go go go! */
> -	ret =3D iommu_device_sysfs_add(&smmu->iommu, dev, NULL,
> -				     "smmu3.%pa", &ioaddr);
> -	if (ret)
> -		return ret;
> -
> -	iommu_device_set_ops(&smmu->iommu, &arm_smmu_ops);
> -	iommu_device_set_fwnode(&smmu->iommu, dev->fwnode);
> -
> -	ret =3D iommu_device_register(&smmu->iommu);
> -	if (ret) {
> -		dev_err(dev, "Failed to register iommu\n");
> -		return ret;
> -	}
> -
> -	return arm_smmu_set_bus_ops(&arm_smmu_ops);
> -}
> -
> -static int arm_smmu_device_remove(struct platform_device *pdev)
> -{
> -	struct arm_smmu_device *smmu =3D platform_get_drvdata(pdev);
> -
> -	arm_smmu_set_bus_ops(NULL);
> -	iommu_device_unregister(&smmu->iommu);
> -	iommu_device_sysfs_remove(&smmu->iommu);
> -	arm_smmu_device_disable(smmu);
> -
> 	return 0;
> }
>=20
> -static void arm_smmu_device_shutdown(struct platform_device *pdev)
> -{
> -	arm_smmu_device_remove(pdev);
> -}
> -
> static const struct of_device_id arm_smmu_of_match[] =3D {
> 	{ .compatible =3D "arm,smmu-v3", },
> 	{ },
> };
> -MODULE_DEVICE_TABLE(of, arm_smmu_of_match);
> -
> -static struct platform_driver arm_smmu_driver =3D {
> -	.driver	=3D {
> -		.name			=3D "arm-smmu-v3",
> -		.of_match_table		=3D arm_smmu_of_match,
> -		.suppress_bind_attrs	=3D true,
> -	},
> -	.probe	=3D arm_smmu_device_probe,
> -	.remove	=3D arm_smmu_device_remove,
> -	.shutdown =3D arm_smmu_device_shutdown,
> -};
> -module_platform_driver(arm_smmu_driver);
> -
> -MODULE_DESCRIPTION("IOMMU API for ARM architected SMMUv3 implementations=
");
> -MODULE_AUTHOR("Will Deacon <will@kernel.org>");
> -MODULE_ALIAS("platform:arm-smmu-v3");
> -MODULE_LICENSE("GPL v2");
> --=20
> 2.17.1
>=20



From xen-devel-bounces@lists.xenproject.org Fri Dec 11 13:57:03 2020
Subject: Re: dom0 PV looping on search_pre_exception_table()
To: Manuel Bouyer <bouyer@antioche.eu.org>, Jan Beulich <jbeulich@suse.com>
CC: <xen-devel@lists.xenproject.org>
References: <20201209135908.GA4269@antioche.eu.org>
 <c612616a-3fcd-be93-7594-20c0c3b71b7a@citrix.com>
 <20201209154431.GA4913@antioche.eu.org>
 <52e1b10d-75d4-63ac-f91e-cb8f0dcca493@citrix.com>
 <20201209163049.GA6158@antioche.eu.org>
 <30a71c9d-3eff-3727-9c61-e387b5bccc95@citrix.com>
 <20201209185714.GS1469@antioche.eu.org>
 <6c06abf1-7efe-f02c-536a-337a2704e265@citrix.com>
 <20201210095139.GA455@antioche.eu.org>
 <2c345ef9-1f05-f883-d294-7ac1b3851f08@suse.com>
 <20201211111546.GE1423@antioche.eu.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <cfb58d3a-e74e-77a5-9974-6782f5b500af@citrix.com>
Date: Fri, 11 Dec 2020 13:56:43 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201211111546.GE1423@antioche.eu.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB

On 11/12/2020 11:15, Manuel Bouyer wrote:
> On Fri, Dec 11, 2020 at 09:58:54AM +0100, Jan Beulich wrote:
>> Could you please revert 9ff970564764 ("x86/mm: drop guest_get_eff_l1e()")?
>> I think there was a thinko there in that the change can't be split from
>> the bigger one which was part of the originally planned set for XSA-286.
>> We mustn't avoid the switching of page tables as long as
>> guest_get_eff{,_kern}_l1e() makes use of the linear page tables.
> Yes, reverting this commit also makes the dom0 boot.
>

This was going to be my next area of investigation.  Thanks for confirming.

In hindsight, the bug is very obvious...

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 13:57:33 2020
Subject: Re: [PATCH v3 8/8] xen/arm: smmuv3: Remove linux compatibility
 functions.
To: Rahul Singh <rahul.singh@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1607617848.git.rahul.singh@arm.com>
 <c38df3122a9e74e2324936c8bd36d372cdc3009a.1607617848.git.rahul.singh@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <aa6fbfa5-3afa-776e-287c-177932fd4764@xen.org>
Date: Fri, 11 Dec 2020 13:57:28 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <c38df3122a9e74e2324936c8bd36d372cdc3009a.1607617848.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Rahul,

On 10/12/2020 16:57, Rahul Singh wrote:
> Replace all Linux compatible device tree handling function with the XEN
> functions.

Right, but they were introduced in the previous patch. I dislike the
idea of adding code only to remove it afterwards (in some cases you
actually do the renaming directly...).

So I would rather move this patch before patch #7 so we don't undo what 
we just did.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 14:16:57 2020
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Manuel Bouyer
	<bouyer@antioche.eu.org>
Subject: [PATCH] Revert "x86/mm: drop guest_get_eff_l1e()"
Date: Fri, 11 Dec 2020 14:16:15 +0000
Message-ID: <20201211141615.12489-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

This reverts commit 9ff9705647646aa937b5f5c1426a64c69a62b3bd.

The change is only correct in the original context of XSA-286, where Xen's use
of the linear pagetables was dropped.  However, performance problems
interfered with that plan, and XSA-286 was fixed differently.

This broke Xen's lazy faulting of the LDT for 64bit PV guests when an access
was first encountered in user context.  Xen would proceed to read the
registered LDT virtual address out of the user pagetables, not the kernel
pagetables.

Given the nature of the bug, it would have also interfered with the IO
permission bitmap functionality of userspace, which similarly needs to read
data using the kernel pagetables.

Reported-by: Manuel Bouyer <bouyer@antioche.eu.org>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Tested-by: Manuel Bouyer <bouyer@antioche.eu.org>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Manuel Bouyer <bouyer@antioche.eu.org>

There is also a bug with Xen's IRET handling, but that has been broken forever
and is much more complicated to fix.  I'll put it on my TODO list, but no idea
when I'll get around to addressing it.
---
 xen/arch/x86/pv/mm.c            | 21 +++++++++++++++++++++
 xen/arch/x86/pv/mm.h            |  7 ++-----
 xen/arch/x86/pv/ro-page-fault.c |  2 +-
 3 files changed, 24 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/pv/mm.c b/xen/arch/x86/pv/mm.c
index 5d74d11cba..14cb0f2d4e 100644
--- a/xen/arch/x86/pv/mm.c
+++ b/xen/arch/x86/pv/mm.c
@@ -56,6 +56,27 @@ l1_pgentry_t *map_guest_l1e(unsigned long linear, mfn_t *gl1mfn)
 }
 
 /*
+ * Read the guest's l1e that maps this address, from the kernel-mode
+ * page tables.
+ */
+static l1_pgentry_t guest_get_eff_kern_l1e(unsigned long linear)
+{
+    struct vcpu *curr = current;
+    const bool user_mode = !(curr->arch.flags & TF_kernel_mode);
+    l1_pgentry_t l1e;
+
+    if ( user_mode )
+        toggle_guest_pt(curr);
+
+    l1e = guest_get_eff_l1e(linear);
+
+    if ( user_mode )
+        toggle_guest_pt(curr);
+
+    return l1e;
+}
+
+/*
  * Map a guest's LDT page (covering the byte at @offset from start of the LDT)
  * into Xen's virtual range.  Returns true if the mapping changed, false
  * otherwise.
diff --git a/xen/arch/x86/pv/mm.h b/xen/arch/x86/pv/mm.h
index 2a21859dd4..b1b66e46c8 100644
--- a/xen/arch/x86/pv/mm.h
+++ b/xen/arch/x86/pv/mm.h
@@ -5,11 +5,8 @@ l1_pgentry_t *map_guest_l1e(unsigned long linear, mfn_t *gl1mfn);
 
 int new_guest_cr3(mfn_t mfn);
 
-/*
- * Read the guest's l1e that maps this address, from the kernel-mode
- * page tables.
- */
-static inline l1_pgentry_t guest_get_eff_kern_l1e(unsigned long linear)
+/* Read a PV guest's l1e that maps this linear address. */
+static inline l1_pgentry_t guest_get_eff_l1e(unsigned long linear)
 {
     l1_pgentry_t l1e;
 
diff --git a/xen/arch/x86/pv/ro-page-fault.c b/xen/arch/x86/pv/ro-page-fault.c
index 8d0007ede5..7f6fbc92fb 100644
--- a/xen/arch/x86/pv/ro-page-fault.c
+++ b/xen/arch/x86/pv/ro-page-fault.c
@@ -342,7 +342,7 @@ int pv_ro_page_fault(unsigned long addr, struct cpu_user_regs *regs)
     bool mmio_ro;
 
     /* Attempt to read the PTE that maps the VA being accessed. */
-    pte = guest_get_eff_kern_l1e(addr);
+    pte = guest_get_eff_l1e(addr);
 
     /* We are only looking for read-only mappings */
     if ( ((l1e_get_flags(pte) & (_PAGE_PRESENT | _PAGE_RW)) != _PAGE_PRESENT) )
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Dec 11 14:19:13 2020
From: David Laight <David.Laight@ACULAB.COM>
To: 'Thomas Gleixner' <tglx@linutronix.de>, Tvrtko Ursulin
	<tvrtko.ursulin@linux.intel.com>, LKML <linux-kernel@vger.kernel.org>
CC: Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>, Jani
 Nikula <jani.nikula@linux.intel.com>, Joonas Lahtinen
	<joonas.lahtinen@linux.intel.com>, Rodrigo Vivi <rodrigo.vivi@intel.com>,
	David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>,
	"intel-gfx@lists.freedesktop.org" <intel-gfx@lists.freedesktop.org>,
	"dri-devel@lists.freedesktop.org" <dri-devel@lists.freedesktop.org>, "James
 E.J. Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller
	<deller@gmx.de>, afzal mohammed <afzal.mohd.ma@gmail.com>,
	"linux-parisc@vger.kernel.org" <linux-parisc@vger.kernel.org>, Russell King
	<linux@armlinux.org.uk>, "linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>, Mark Rutland <mark.rutland@arm.com>,
	Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>, Heiko Carstens
	<hca@linux.ibm.com>, "linux-s390@vger.kernel.org"
	<linux-s390@vger.kernel.org>, Pankaj Bharadiya
	<pankaj.laxminarayan.bharadiya@intel.com>, Chris Wilson
	<chris@chris-wilson.co.uk>, Wambui Karuga <wambui.karugax@gmail.com>, "Linus
 Walleij" <linus.walleij@linaro.org>, "linux-gpio@vger.kernel.org"
	<linux-gpio@vger.kernel.org>, Lee Jones <lee.jones@linaro.org>, Jon Mason
	<jdmason@kudzu.us>, Dave Jiang <dave.jiang@intel.com>, Allen Hubbe
	<allenbh@gmail.com>, "linux-ntb@googlegroups.com"
	<linux-ntb@googlegroups.com>, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
	Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>, "Michal
 Simek" <michal.simek@xilinx.com>, "linux-pci@vger.kernel.org"
	<linux-pci@vger.kernel.org>, Karthikeyan Mitran
	<m.karthikeyan@mobiveil.co.in>, Hou Zhiqiang <Zhiqiang.Hou@nxp.com>, "Tariq
 Toukan" <tariqt@nvidia.com>, "David S. Miller" <davem@davemloft.net>, "Jakub
 Kicinski" <kuba@kernel.org>, "netdev@vger.kernel.org"
	<netdev@vger.kernel.org>, "linux-rdma@vger.kernel.org"
	<linux-rdma@vger.kernel.org>, Saeed Mahameed <saeedm@nvidia.com>, "Leon
 Romanovsky" <leon@kernel.org>, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [patch 14/30] drm/i915/pmu: Replace open coded kstat_irqs() copy
Date: Fri, 11 Dec 2020 14:19:05 +0000
Message-ID: <d6cbfa118490459bb0671394f00323fc@AcuMS.aculab.com>
References: <20201210192536.118432146@linutronix.de>
 <20201210194043.957046529@linutronix.de>
 <ad05af1a-5463-2a80-0887-7629721d6863@linux.intel.com>
 <87y2i4h54i.fsf@nanos.tec.linutronix.de>
In-Reply-To: <87y2i4h54i.fsf@nanos.tec.linutronix.de>

From: Thomas Gleixner
> Sent: 11 December 2020 12:58
..
> > After my failed hasty sketch from last night I had a different one which
> > was kind of heuristics based (re-reading the upper dword and retrying if
> > it changed on 32-bit).
>
> The problem is that there will be two separate modifications for the low
> and high word. Several ways how the compiler can translate this, but the
> problem is the same for all of them:
>
> CPU 0                           CPU 1
>         load low
>         load high
>         add  low, 1
>         addc high, 0
>         store low               load high
> --> NMI                         load low
>                                 load high and compare
>         store high
>
> You can't catch that. If this really becomes an issue you need a
> sequence counter around it.

Or just two copies of the high word.
Provided the accesses are sequenced:
writer:
	load high:low
	add small_value,high:low
	store high
	store low
	store high_copy
reader:
	load high_copy
	load low
	load high
	if (high != high_copy)
		low = 0;

The read value is always stale, so it probably doesn't
matter that the value you have is one that is between the
value when you started and that when you finished.
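The two-copies scheme above can be sketched in C. This is an illustrative
sketch, not code from any of the patches under discussion: the `split64`
type and function names are hypothetical, and real kernel code would need
`WRITE_ONCE()`/`READ_ONCE()` or barriers to guarantee the store/load ordering
that the comments assume.

```c
#include <assert.h>
#include <stdint.h>

/* A 64-bit counter kept as two 32-bit halves plus a duplicate of the
 * high word, readable without a 64-bit atomic load. */
struct split64 {
	uint32_t lo;
	uint32_t hi;
	uint32_t hi_copy;
};

/* Writer: store high, then low, then the high copy, in that order.
 * (A real implementation must stop the compiler/CPU reordering these.) */
static void split64_add(struct split64 *c, uint32_t small_value)
{
	uint64_t v = (((uint64_t)c->hi << 32) | c->lo) + small_value;

	c->hi = (uint32_t)(v >> 32);
	c->lo = (uint32_t)v;
	c->hi_copy = c->hi;
}

/* Reader: load high_copy, then low, then high.  If the two high words
 * disagree, a carry raced with us; zeroing the low word still yields a
 * value between the counter's start and end values, which is acceptable
 * for an inherently stale read. */
static uint64_t split64_read(const struct split64 *c)
{
	uint32_t hi_copy = c->hi_copy;
	uint32_t lo = c->lo;
	uint32_t hi = c->hi;

	if (hi != hi_copy)
		lo = 0;
	return ((uint64_t)hi << 32) | lo;
}
```

The trade-off versus a sequence counter is one extra 32-bit store per update
and no retry loop on the reader side.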

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)



From xen-devel-bounces@lists.xenproject.org Fri Dec 11 14:25:35 2020
Subject: Re: [PATCH v3 7/8] xen/arm: Add support for SMMUv3 driver
To: Rahul Singh <rahul.singh@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1607617848.git.rahul.singh@arm.com>
 <33645b592bc5935a3b28ad576a819d06ed81e8dd.1607617848.git.rahul.singh@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <e26c96cb-245b-6927-c4a7-224c2114df42@xen.org>
Date: Fri, 11 Dec 2020 14:25:20 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <33645b592bc5935a3b28ad576a819d06ed81e8dd.1607617848.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Rahul,

On 10/12/2020 16:57, Rahul Singh wrote:
>   struct arm_smmu_strtab_cfg {
> @@ -613,8 +847,13 @@ struct arm_smmu_device {
>   		u64			padding;
>   	};
>   
> -	/* IOMMU core code handle */
> -	struct iommu_device		iommu;
> +	/* Need to keep a list of SMMU devices */
> +	struct list_head		devices;
> +
> +	/* Tasklets for handling evts/faults and pci page request IRQs*/
> +	struct tasklet		evtq_irq_tasklet;
> +	struct tasklet		priq_irq_tasklet;
> +	struct tasklet		combined_irq_tasklet;
>   };
>   
>   /* SMMU private data for each master */
> @@ -638,7 +877,6 @@ enum arm_smmu_domain_stage {
>   
>   struct arm_smmu_domain {
>   	struct arm_smmu_device		*smmu;
> -	struct mutex			init_mutex; /* Protects smmu pointer */

Hmmm... Your commit message says the mutex would be replaced by a 
spinlock. However, you are dropping the lock entirely. What did I miss?

[...]

> @@ -1578,6 +1841,17 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
>   	struct arm_smmu_device *smmu = smmu_domain->smmu;
>   	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
>   	typeof(&arm_lpae_s2_cfg.vtcr) vtcr = &arm_lpae_s2_cfg.vtcr;
> +	uint64_t reg = READ_SYSREG64(VTCR_EL2);

Please don't use VTCR_EL2 here. You should be able to infer the 
parameter from the p2m_ipa_bits.

Also, I still don't see the code that will restrict the IPA bits based 
on what the SMMU supports.

> +
> +	vtcr->tsz	= FIELD_GET(STRTAB_STE_2_VTCR_S2T0SZ, reg);
> +	vtcr->sl	= FIELD_GET(STRTAB_STE_2_VTCR_S2SL0, reg);
> +	vtcr->irgn	= FIELD_GET(STRTAB_STE_2_VTCR_S2IR0, reg);
> +	vtcr->orgn	= FIELD_GET(STRTAB_STE_2_VTCR_S2OR0, reg);
> +	vtcr->sh	= FIELD_GET(STRTAB_STE_2_VTCR_S2SH0, reg);
> +	vtcr->tg	= FIELD_GET(STRTAB_STE_2_VTCR_S2TG, reg);
> +	vtcr->ps	= FIELD_GET(STRTAB_STE_2_VTCR_S2PS, reg);
> +
> +	arm_lpae_s2_cfg.vttbr  = page_to_maddr(cfg->domain->arch.p2m.root);
>   
>   	vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
>   	if (vmid < 0)
> @@ -1592,6 +1866,11 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
>   			  FIELD_PREP(STRTAB_STE_2_VTCR_S2SH0, vtcr->sh) |
>   			  FIELD_PREP(STRTAB_STE_2_VTCR_S2TG, vtcr->tg) |
>   			  FIELD_PREP(STRTAB_STE_2_VTCR_S2PS, vtcr->ps);
> +
> +	printk(XENLOG_DEBUG
> +		   "SMMUv3: d%u: vmid 0x%x vtcr 0x%"PRIpaddr" p2maddr 0x%"PRIpaddr"\n",
> +		   cfg->domain->domain_id, cfg->vmid, cfg->vtcr, cfg->vttbr);
> +
>   	return 0;
>   }

[...]

> @@ -1923,8 +2239,8 @@ static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
>   		return -ENOMEM;
>   	}
>   
> -	if (!WARN_ON(q->base_dma & (qsz - 1))) {
> -		dev_info(smmu->dev, "allocated %u entries for %s\n",
> +	if (unlikely(q->base_dma & (qsz - 1))) {
> +		dev_warn(smmu->dev, "allocated %u entries for %s\n",

dev_warn() is not the same as WARN_ON(). But really, the first step is 
for you to try to change the behaviour of WARN_ON() in Xen.

If this doesn't go through, then we can discuss approaches to 
mitigate it.
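The distinction matters because Linux's WARN_ON(cond) is an expression: it evaluates to cond (emitting a warning and backtrace as a side effect when cond is true), so the original code both warned *and* gated the success path, while dev_warn() only logs. A minimal illustrative stand-in (check_alignment() is a hypothetical helper, and this mimics rather than reproduces the Linux macro):

```c
/* Sketch: why the original `if (!WARN_ON(...))` cannot simply be
 * replaced by dev_warn().  WARN_ON(cond) warns when cond is true
 * AND yields cond, so it can drive a branch.  This stand-in uses
 * a GCC statement expression, like the Linux implementation. */
#include <stdio.h>

#define WARN_ON(cond) ({                                    \
    int __ret_warn_on = !!(cond);                           \
    if (__ret_warn_on)                                      \
        fprintf(stderr, "WARNING at %s:%d\n",               \
                __FILE__, __LINE__);                        \
    __ret_warn_on;                                          \
})

/* Hypothetical helper mirroring the queue-alignment check. */
static int check_alignment(unsigned long base_dma, unsigned long qsz)
{
    if (!WARN_ON(base_dma & (qsz - 1)))
        return 0;   /* aligned: proceed with the success path */
    return -1;      /* misaligned: warning was already emitted */
}
```

Dropping the WARN_ON() semantics silently loses the backtrace that makes such misalignment bugs diagnosable, which is why porting (or stubbing) WARN_ON() itself is the cleaner first step.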

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 14:29:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 14:29:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50663.89397 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knjQN-0000cE-5a; Fri, 11 Dec 2020 14:29:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50663.89397; Fri, 11 Dec 2020 14:29:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knjQN-0000c7-2M; Fri, 11 Dec 2020 14:29:35 +0000
Received: by outflank-mailman (input) for mailman id 50663;
 Fri, 11 Dec 2020 14:29:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1knjQL-0000bz-Na
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 14:29:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1knjQH-0007YB-VR; Fri, 11 Dec 2020 14:29:29 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1knjQH-0006jH-Nv; Fri, 11 Dec 2020 14:29:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=QAvuEyVoXxjXe/JQfwr5Wkh2jPuI7x0e83APCvioKjY=; b=CnplNqXsDHJwa2Dh5oeDrTlf3J
	Wee/RRdilYzOSJd9RXDtXPw2Gu+a6zqhUvWmIrav8/E64/eKjdHOJ+Ba2ENrf6JaZm4uhXN0egqpq
	59zmG4T2h2KI67Pp68BwIuWRta2Pp4S7Q7AXmqWtld/iODgw1Mh8CCu6mQPzI9/JBvPY=;
Subject: Re: [PATCH v3 0/8] xen/arm: Add support for SMMUv3 driver
To: Rahul Singh <rahul.singh@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>
References: <cover.1607617848.git.rahul.singh@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <ea121c23-4deb-c566-4d1d-bb9dd4959015@xen.org>
Date: Fri, 11 Dec 2020 14:29:27 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <cover.1607617848.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Rahul,

On 10/12/2020 16:56, Rahul Singh wrote:
> This patch series is v3 of the work to add support for the SMMUv3 driver.
> 
> Approach taken is to first merge the Linux copy of the SMMUv3 driver
> (tag v5.8.18) and then modify the driver to build on XEN.
> 
> MSI and PCI ATS functionality are not supported. That code is neither
> tested nor compiled; it is guarded by the CONFIG_PCI_ATS and CONFIG_MSI
> flags so the driver builds without it.
> 
> Code specific to Linux is removed from the driver to avoid dead code.
> 
> Driver is currently supported as tech preview.
> 
> The following functionality should be supported before the driver moves
> out of tech preview:
> 1. Investigate the timing analysis of using spin lock in place of mutex when
>     attaching a  device to SMMU.
> 2. Merge the latest Linux SMMUv3 driver code once atomic operations are
>     available in XEN.
> 3. PCI ATS and MSI interrupts should be supported.
> 4. Investigate side-effect of using tasklet in place of threaded IRQ and fix
>     if any.
In your last e-mail, you wrote that you would investigate and then come 
back to us. It wasn't clear that this meant you would not deal with it 
in this series.

> 5. The fallthrough keyword should be supported.

This one should really be done now... It is not a complicated one...

> 6. Implement the ffsll function in bitops.h file.

... same here.
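For item 6, the shape of such a helper is indeed simple. A hedged sketch of a portable ffsll() (find first set bit in a 64-bit value, 1-based, 0 if none) — illustrative only, since Xen's bitops.h would build it from its own arch primitives rather than this generic binary-search form:

```c
/* Sketch of a generic ffsll(): returns the 1-based index of the
 * least significant set bit of x, or 0 if x == 0.  Binary search
 * over halves, widths 32/16/8/4/2/1. */
static inline unsigned int ffsll(unsigned long long x)
{
    unsigned int r = 1;

    if (!x)
        return 0;
    if (!(x & 0xffffffffULL)) { x >>= 32; r += 32; }
    if (!(x & 0xffffULL))     { x >>= 16; r += 16; }
    if (!(x & 0xffULL))       { x >>= 8;  r += 8;  }
    if (!(x & 0xfULL))        { x >>= 4;  r += 4;  }
    if (!(x & 0x3ULL))        { x >>= 2;  r += 2;  }
    if (!(x & 0x1ULL))        {           r += 1;  }
    return r;
}
```

On GCC-built targets this could equally be a one-line wrapper around __builtin_ffsll().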

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 14:31:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 14:31:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50676.89427 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knjSH-0001eS-Vb; Fri, 11 Dec 2020 14:31:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50676.89427; Fri, 11 Dec 2020 14:31:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knjSH-0001eL-RT; Fri, 11 Dec 2020 14:31:33 +0000
Received: by outflank-mailman (input) for mailman id 50676;
 Fri, 11 Dec 2020 14:31:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cAXP=FP=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1knjSG-0001bD-DI
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 14:31:32 +0000
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e159b767-0166-4098-b978-6e322948493a;
 Fri, 11 Dec 2020 14:31:31 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0BBEK3q4153543;
 Fri, 11 Dec 2020 14:29:21 GMT
Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71])
 by userp2130.oracle.com with ESMTP id 3581mratkd-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Fri, 11 Dec 2020 14:29:21 +0000
Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1])
 by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0BBEPIII069767;
 Fri, 11 Dec 2020 14:29:20 GMT
Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75])
 by aserp3030.oracle.com with ESMTP id 358kstfcjw-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 11 Dec 2020 14:29:20 +0000
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
 by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 0BBETD7i006093;
 Fri, 11 Dec 2020 14:29:13 GMT
Received: from [10.39.222.144] (/10.39.222.144)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Fri, 11 Dec 2020 06:29:13 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e159b767-0166-4098-b978-6e322948493a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=XwmNJXsce0biO7XLe+zGaIsjtrmRfK/vTz0mYn6nQBU=;
 b=bR7yH4+1kTFS/uaRb9qwsHTyFm2EoNVmWnQB/6uOrO708u+eqNtBobFpmn1ttrRdH1ys
 eWZmcv+9rBDGS0S9dxVH7WJEjVIq3kan/8A2J+2JEfl4cmi6SdTHDxhFZ9FOjgghK0i/
 gSlqjyBs3FtwP3+ZAOMx//Xrh2s4GGZVNmZZvw8Dg7A8ucgqyFuZ1jZz6HYR0R/QX3KB
 jq1go8jZfLC3JdZILS9AqGFwicAYPALMmwsEwRd+5bZO9goG03a6o19lTH5pgGTEdXEs
 I4ECgSmCCiTYqeMDSavCaEpGT9QkWsYwEp7lVPsTzWiLDfHkK38lg3Q5ATAnYCJ8Q3TZ zQ== 
Subject: Re: [patch 27/30] xen/events: Only force affinity mask for percpu
 interrupts
To: Thomas Gleixner <tglx@linutronix.de>,
        =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>,
        LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>,
        Stefano Stabellini <sstabellini@kernel.org>,
        xen-devel@lists.xenproject.org,
        "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
        Helge Deller <deller@gmx.de>, afzal mohammed <afzal.mohd.ma@gmail.com>,
        linux-parisc@vger.kernel.org, Russell King <linux@armlinux.org.uk>,
        linux-arm-kernel@lists.infradead.org,
        Mark Rutland <mark.rutland@arm.com>,
        Catalin Marinas <catalin.marinas@arm.com>,
        Will Deacon <will@kernel.org>,
        Christian Borntraeger <borntraeger@de.ibm.com>,
        Heiko Carstens <hca@linux.ibm.com>, linux-s390@vger.kernel.org,
        Jani Nikula <jani.nikula@linux.intel.com>,
        Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
        Rodrigo Vivi <rodrigo.vivi@intel.com>, David Airlie <airlied@linux.ie>,
        Daniel Vetter <daniel@ffwll.ch>,
        Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
        Chris Wilson <chris@chris-wilson.co.uk>,
        Wambui Karuga <wambui.karugax@gmail.com>,
        intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
        Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
        Linus Walleij <linus.walleij@linaro.org>, linux-gpio@vger.kernel.org,
        Lee Jones <lee.jones@linaro.org>, Jon Mason <jdmason@kudzu.us>,
        Dave Jiang <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>,
        linux-ntb@googlegroups.com,
        Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
        Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>,
        Michal Simek <michal.simek@xilinx.com>, linux-pci@vger.kernel.org,
        Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
        Hou Zhiqiang <Zhiqiang.Hou@nxp.com>, Tariq Toukan <tariqt@nvidia.com>,
        "David S. Miller" <davem@davemloft.net>,
        Jakub Kicinski <kuba@kernel.org>, netdev@vger.kernel.org,
        linux-rdma@vger.kernel.org, Saeed Mahameed <saeedm@nvidia.com>,
        Leon Romanovsky <leon@kernel.org>
References: <20201210192536.118432146@linutronix.de>
 <20201210194045.250321315@linutronix.de>
 <7f7af60f-567f-cdef-f8db-8062a44758ce@oracle.com>
 <2164a0ce-0e0d-c7dc-ac97-87c8f384ad82@suse.com>
 <871rfwiknd.fsf@nanos.tec.linutronix.de>
From: boris.ostrovsky@oracle.com
Organization: Oracle Corporation
Message-ID: <9806692f-24a3-4b6f-ae55-86bd66481271@oracle.com>
Date: Fri, 11 Dec 2020 09:29:09 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <871rfwiknd.fsf@nanos.tec.linutronix.de>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9831 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxlogscore=999 suspectscore=0
 bulkscore=0 malwarescore=0 phishscore=0 mlxscore=0 spamscore=0
 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2012110094
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9831 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0 mlxlogscore=999
 clxscore=1015 malwarescore=0 priorityscore=1501 adultscore=0
 lowpriorityscore=0 phishscore=0 spamscore=0 impostorscore=0 mlxscore=0
 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2012110093


On 12/11/20 7:37 AM, Thomas Gleixner wrote:
> On Fri, Dec 11 2020 at 13:10, Jürgen Groß wrote:
>> On 11.12.20 00:20, boris.ostrovsky@oracle.com wrote:
>>> On 12/10/20 2:26 PM, Thomas Gleixner wrote:
>>>> All event channel setups bind the interrupt on CPU0 or the target CPU for
>>>> percpu interrupts and overwrite the affinity mask with the corresponding
>>>> cpumask. That does not make sense.
>>>>
>>>> The XEN implementation of irqchip::irq_set_affinity() already picks a
>>>> single target CPU out of the affinity mask and the actual target is stored
>>>> in the effective CPU mask, so destroying the user chosen affinity mask
>>>> which might contain more than one CPU is wrong.
>>>>
>>>> Change the implementation so that the channel is bound to CPU0 at the XEN
>>>> level and leave the affinity mask alone. At startup of the interrupt
>>>> affinity will be assigned out of the affinity mask and the XEN binding will
>>>> be updated.
>>>
>>> If that's the case, then I wonder whether we need this call at all, and could instead bind at startup time.
>> After some discussion with Thomas on IRC and xen-devel archaeology the
>> result is: this will be needed especially for systems running on a
>> single vcpu (e.g. small guests), as the .irq_set_affinity() callback
>> won't be called in this case when starting the irq.


On UP, are we not then going to end up with an empty affinity mask? Or are we guaranteed to have it set to 1 by the generic interrupt code?


This is actually why I brought this up in the first place --- a potential mismatch between the affinity mask and Xen-specific data (e.g. info->cpu, and then protocol-specific data in the event channel code), even if they are re-synchronized later, at startup time (for SMP).


I don't see anything that would cause a problem right now but I worry that this inconsistency may come up at some point.


-boris


> That's right, but not limited to ARM. The same problem exists on x86 UP.
> So yes, the call makes sense, but the changelog is not really useful.
> Let me add a comment to this.
>
> Thanks,
>
>         tglx
>


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 14:39:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 14:39:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50696.89450 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knjaL-00027m-45; Fri, 11 Dec 2020 14:39:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50696.89450; Fri, 11 Dec 2020 14:39:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knjaL-00027f-1A; Fri, 11 Dec 2020 14:39:53 +0000
Received: by outflank-mailman (input) for mailman id 50696;
 Fri, 11 Dec 2020 14:39:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knjaJ-00027X-Mx; Fri, 11 Dec 2020 14:39:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knjaJ-0007lB-HD; Fri, 11 Dec 2020 14:39:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knjaJ-0008FQ-9V; Fri, 11 Dec 2020 14:39:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1knjaJ-0000Xl-8a; Fri, 11 Dec 2020 14:39:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EBGm4vB9uFQav9uYvV8AujFowBK/dJ0aIDQVne3Qslg=; b=jdkwdeFqimR9Zo2cdCTgu7vR/e
	pRxaQgGmHBNJ+PDeA9E8kQV+MvVbStpvpFiYgyIazNVZ9J1dRmqMdiKFzGhyc0N8N34ml4q+zvFmo
	ZiR/0a/FVzkkubGOYXTD/Tc+9eMZ/pFUpPwpx7tbDHlwjLe5fcxa0YKy7v2PXfzQUu0s=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157416-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157416: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Dec 2020 14:39:51 +0000

flight 157416 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157416/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    2 days
Failing since        157348  2020-12-09 15:39:39 Z    1 days   12 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    0 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 15:40:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 15:40:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50725.89475 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knkWr-0000NT-Ps; Fri, 11 Dec 2020 15:40:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50725.89475; Fri, 11 Dec 2020 15:40:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knkWr-0000NM-M0; Fri, 11 Dec 2020 15:40:21 +0000
Received: by outflank-mailman (input) for mailman id 50725;
 Fri, 11 Dec 2020 15:40:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knkWq-0000NE-Sk; Fri, 11 Dec 2020 15:40:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knkWq-0000YF-In; Fri, 11 Dec 2020 15:40:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knkWq-0003kb-8O; Fri, 11 Dec 2020 15:40:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1knkWq-0007U5-7t; Fri, 11 Dec 2020 15:40:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cmYH/FCp6XoOqJvJVjyq31+U4seOaXHLwEfsdnNy2fE=; b=dEdbRayM4AW3o/NQ5MlgtnsC1L
	Qqo/SaFGy0XGESv4MGvjX8GHTLqF4n57TZyWUkfsW6shqMhGY5Gvv4CrgNxoQoe2lXYrk8lxtEa4i
	xNGCHZLG6OhUwZa6LwdP7HZq0CTC29l3VwBeRw2k3nZ6IMruAlOgvXAgIlq3Z8lNmniE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157411-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157411: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=2ecfc0657afa5d29a373271b342f704a1a3c6737
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Dec 2020 15:40:20 +0000

flight 157411 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157411/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                2ecfc0657afa5d29a373271b342f704a1a3c6737
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  113 days
Failing since        152659  2020-08-21 14:07:39 Z  112 days  234 attempts
Testing same since   157392  2020-12-10 20:40:10 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erich McMillan <erich.mcmillan@hp.com>
  Erich-McMillan <erich.mcmillan@hp.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiahui Cen <cenjiahui@huawei.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Levon <john.levon@nutanix.com>
  John Snow <jsnow@redhat.com>
  John Wang <wangzhiqiang.bj@bytedance.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Juan Quintela <quintela@redhat.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Kunkun Jiang <jiangkunkun@huawei.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vikram Garhwal <fnu.vikram@xilinx.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yubo Miao <miaoyubo@huawei.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zihao Chang <changzihao1@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 72149 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 15:50:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 15:50:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50739.89489 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knkgC-0000o2-11; Fri, 11 Dec 2020 15:50:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50739.89489; Fri, 11 Dec 2020 15:49:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knkgB-0000nv-US; Fri, 11 Dec 2020 15:49:59 +0000
Received: by outflank-mailman (input) for mailman id 50739;
 Fri, 11 Dec 2020 15:49:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FkE0=FP=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1knkgA-0000nq-Am
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 15:49:58 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1827a939-1510-4203-9c69-e247b60fa3b5;
 Fri, 11 Dec 2020 15:49:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1827a939-1510-4203-9c69-e247b60fa3b5
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1607701797;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=HXzTfy3peath7yRLaZn9TW89udAT16dHtncFLJ3VpKg=;
  b=bG8B7/upISNdWKom9r8abGzjYriXh4XqUTngJB6BV3llXRJKDzqn4Q9M
   6jJC0Q6INNpDBSm+chbV7/SK6dkOJ42NQ4UQLfb8QHX7uee29MB2PFFuw
   suuvh2giGhW/uwAu7DdgOV49ucSd+5f1OZszb6oU3tDM9kK2nEii7l95y
   M=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: q4fG5HaXf2FgSuNaHkpsHh4khpNkmWCUkJYwlzY9LM+hFG1971iZxTjqCSjNRHZxVSF491GMJq
 8gKghUbMHToYeBj+DLRPbS3bHsIATEx/5gH36HYbhXg+RXXto6mOKsCiAiXEIZUgfAgZp0BG24
 HCSiAz9x4da5Z5t1wDnooyB2lyxGD7ClfL4poULbmfRv6a04WcnsdtxysB2UTvCu2C3rwAvF30
 RgghDWFDrR5tb/OBC3duU2l+MWRNvjH/ZsVYmn4FMvsfcmn2sPvqTySqoo7ChEbxVKHItOleoq
 Qho=
X-SBRS: 5.2
X-MesageID: 33385861
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,411,1599537600"; 
   d="scan'208";a="33385861"
Date: Fri, 11 Dec 2020 15:49:52 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [PATCH v3 2/8] lib: collect library files in an archive
Message-ID: <X9OVIGVdEteBNRIn@perard.uk.xensource.com>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
 <21714b83-8619-5aa9-be5b-3015d05a26a4@suse.com>
 <X9I1GCAM2nn8W8eN@perard.uk.xensource.com>
 <65b94fd1-c840-cb1b-51f7-c9a5b158cc1e@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <65b94fd1-c840-cb1b-51f7-c9a5b158cc1e@suse.com>

On Fri, Dec 11, 2020 at 11:00:19AM +0100, Jan Beulich wrote:
> On 10.12.2020 15:47, Anthony PERARD wrote:
> > On Mon, Nov 23, 2020 at 04:21:19PM +0100, Jan Beulich wrote:
> >> --- a/xen/Rules.mk
> >> +++ b/xen/Rules.mk
> >> @@ -60,7 +64,14 @@ include Makefile
> >>  # ---------------------------------------------------------------------------
> >>  
> >>  quiet_cmd_ld = LD      $@
> >> -cmd_ld = $(LD) $(XEN_LDFLAGS) -r -o $@ $(real-prereqs)
> >> +cmd_ld = $(LD) $(XEN_LDFLAGS) -r -o $@ $(filter-out %.a,$(real-prereqs)) \
> >> +               --start-group $(filter %.a,$(real-prereqs)) --end-group
> > 
> > It might be a bit weird to modify the generic LD command for the benefit
> > of only prelink.o objects but it's probably fine as long as we only use
> > archives for lib.a. libelf and libfdt will just have --start/end-group
> > added to there ld command line. So I guess the change is fine.
> 
> I'm afraid I don't understand what the concern is. Neither libelf
> nor libfdt use any %.a right now. Or are you referring to them
> merely because it's just them which have got converted to using
> $(call if-changed ...), and your remark would eventually apply to
> e.g. built_in.o as well? And then further is all you're worried
> about the fact that there may be "--start-group  --end-group" on
> the command line, i.e. with nothing inbetween? If so, besides
> possibly looking a little odd if someone inspected the command
> lines closely, what possible issue do you see? (If there is one,
> making the addition of both options conditional upon there being
> any/multiple %.a in the first place wouldn't be a big problem,
> albeit Linux also doesn't care whether ${KBUILD_VMLINUX_LIBS} is
> empty.)

Well, maybe one day we might want to collect objects in built_in.a archives
rather than built_in.o, like Linux does, but Xen is probably too small
for that to have any impact on build time. And if that
happens, we will have to change the linker command anyway to use
--whole-archive. So the change is fine.

As for "--start-group --end-group", yes, it would just look odd, but I don't
think there would be any issue with the empty pair added to the command line.
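
For illustration, the conditional form Jan alludes to could be sketched
like this; this is only a hedged sketch against the variable names in the
quoted patch (the ld-objs/ld-libs helpers are made up), not the actual
Rules.mk change:

```make
# Sketch (not the real Rules.mk): wrap archives in a
# --start-group/--end-group pair only when archives are present at all,
# so links with no %.a prerequisites keep their current command line.
ld-objs = $(filter-out %.a,$(real-prereqs))
ld-libs = $(filter %.a,$(real-prereqs))
cmd_ld = $(LD) $(XEN_LDFLAGS) -r -o $@ $(ld-objs) \
         $(if $(ld-libs),--start-group $(ld-libs) --end-group)
```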

> > The rest looks good,
> > Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
> 
> Thanks, but I'd prefer the above clarified.
> 
> Jan

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 17:00:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 17:00:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50778.89537 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knlmR-0000en-Ih; Fri, 11 Dec 2020 17:00:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50778.89537; Fri, 11 Dec 2020 17:00:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knlmR-0000eg-Ev; Fri, 11 Dec 2020 17:00:31 +0000
Received: by outflank-mailman (input) for mailman id 50778;
 Fri, 11 Dec 2020 17:00:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Df1x=FP=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1knlmP-0000eY-NB
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 17:00:29 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.54]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 55299032-eb98-4d12-9ffa-bfe0d22833d5;
 Fri, 11 Dec 2020 17:00:26 +0000 (UTC)
Received: from MR2P264CA0014.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500:1::26) by
 HE1PR0802MB2267.eurprd08.prod.outlook.com (2603:10a6:3:ce::20) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12; Fri, 11 Dec 2020 17:00:24 +0000
Received: from VE1EUR03FT037.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:500:1:cafe::a9) by MR2P264CA0014.outlook.office365.com
 (2603:10a6:500:1::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.21 via Frontend
 Transport; Fri, 11 Dec 2020 17:00:24 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT037.mail.protection.outlook.com (10.152.19.70) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12 via Frontend Transport; Fri, 11 Dec 2020 17:00:23 +0000
Received: ("Tessian outbound fc5cc0046d61:v71");
 Fri, 11 Dec 2020 17:00:23 +0000
Received: from cd2b36560a49.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 9BE0F955-F7EE-4CF9-9CE8-0BF6BE1E286F.1; 
 Fri, 11 Dec 2020 17:00:10 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id cd2b36560a49.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 11 Dec 2020 17:00:10 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3387.eurprd08.prod.outlook.com (2603:10a6:10:45::30) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3632.22; Fri, 11 Dec
 2020 17:00:09 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3654.018; Fri, 11 Dec 2020
 17:00:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 55299032-eb98-4d12-9ffa-bfe0d22833d5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Hm+xZweXO57IT1cyV/up89eBv4of/vIUESJuZJppyw0=;
 b=JZNp0tMbJ47AHhfxvcDbstpgND045q45ib6MiqOa3ZaiR4UgBmEoe3kxS2fZfLwHH3d319ZLu4rlaGFSy71wcRvgZC//M+pmk5Qkpudx1g0y+RYV4NAXwgz0QFfLmyNQLaWVVrTV/ejOxeu/vRpobcbOay7JEaXt4egM5e49NJ0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: a3a446d976d65dad
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KYJdqkxELPysEyUEGa6AwBUc+H3yFZc+vDDGVU2OYdJR1xJ80oGXNEPDeYdOTF+5cqxU5SwfjuFHvumox2dmgmjCYfbKaZ4H1UE7Q8Wq8g2ZxmbPAIW7tunHVgwHCc9vkCYZtJU/MwO8h3kD5s1JAt1lzrLzgvNGxrCeKzdNKmj2nWYiBctEvIF2fj2LeZII2Cawnvewi0gBdbnP9RU5he7arOki+FUfNBcawCT1w72fWZbDiPUio2Hp8UNO4VRSunjoZitSrzVi6maqRE8LUGOmqxFI7Bpcbz7rNrTZxjs1kL4ul7bFE2DGhZeAeFpS5PorvIv6GUzUzwcOS1tExQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Hm+xZweXO57IT1cyV/up89eBv4of/vIUESJuZJppyw0=;
 b=TicDNRWk3a9VEL9w/NQGKWMG8rb+xqlZ8VNLVtuUwzwtA8+xuW+EOS9pmAFcfqwOjkWejdzlY6T3g1AoTxcPnFGgqRLwtQVpuqz0i1wqWv5VKL4Vlxz87Tw5fRxHwdd6bC/BtrdtX/CtBk0leOyc6eszbqs3bOD5RxbkMRmHXGkblPBVXlZHSl0ir27/bpQX0Kt8eauMGPmgX2iKogFSDQ3ESKv8Ho2+Z5EZ3CkSFAiV6UoD4XfOfT7lS/zBBLVekYyxOCZz0kZKyrMOZ08Y4UXW8xR/O1xl7uZnWQAfSw7UEd06vPFHERoVAhZiCwUtH5Z2GeipZBYIi8TWH1WC6g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Hm+xZweXO57IT1cyV/up89eBv4of/vIUESJuZJppyw0=;
 b=JZNp0tMbJ47AHhfxvcDbstpgND045q45ib6MiqOa3ZaiR4UgBmEoe3kxS2fZfLwHH3d319ZLu4rlaGFSy71wcRvgZC//M+pmk5Qkpudx1g0y+RYV4NAXwgz0QFfLmyNQLaWVVrTV/ejOxeu/vRpobcbOay7JEaXt4egM5e49NJ0=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Julien
 Grall <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 4/7] xen/arm: Add handler for ID registers on arm64
Thread-Topic: [PATCH v3 4/7] xen/arm: Add handler for ID registers on arm64
Thread-Index: AQHWzkloLLvBmqE820eVrJEh4IHR0KnvKSEAgAFJwQCAAHhlAIABNlCA
Date: Fri, 11 Dec 2020 17:00:09 +0000
Message-ID: <5C3F0DF9-417B-4946-A906-FE2A9CD4A38F@arm.com>
References: <cover.1607524536.git.bertrand.marquis@arm.com>
 <e991b05af11d00627709caf847c5de99f487cab0.1607524536.git.bertrand.marquis@arm.com>
 <alpine.DEB.2.21.2012091131350.20986@sstabellini-ThinkPad-T480s>
 <4B26BDEE-DA30-4B5B-A428-9D8D4659B581@arm.com>
 <alpine.DEB.2.21.2012101428030.20986@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2012101428030.20986@sstabellini-ThinkPad-T480s>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 79335acb-ac39-473c-dbb3-08d89df64045
x-ms-traffictypediagnostic: DB7PR08MB3387:|HE1PR0802MB2267:
X-Microsoft-Antispam-PRVS:
	<HE1PR0802MB226777F7341658F5FDB1E3419DCA0@HE1PR0802MB2267.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:6790;OLM:6790;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 CMKIr+/webm9Az8gRmmzDBrPV6ZV8eFHAXaQx4AecaujZ7p3kP5Fk68Zg9+a1oR7RaVwF4UiVefcJ0mqdHgMWYOrrRA7hP9TK1Ru3Ep8pOkOCA+eOBmQRJafH4nNLcipiW8iFcYoV0+y8T2veq/zmwrbQHaWIgODvidp8DXbgNl9tOuBOWpIyxHYp3uLSuHZCBU39FBfU1o9DP8redpDKzZlEz7l7lfFKpeZqrl02JKsrVVS81jtYgcTrs2F95LT1OK0MbSH+GaqUYdlEWlp5rKPJAzBWCsqHNHm+nomAr845mdEGfs2YL3kkKhtnV4w8WYtvPNWFjQNcsyIvXQWgyhWNbbKjgsoHDQff4+p2V/XSOtPwZxcJHzTtrZgj9X3
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(376002)(136003)(346002)(396003)(39860400002)(66446008)(2616005)(66476007)(36756003)(6512007)(86362001)(6916009)(91956017)(2906002)(66946007)(64756008)(76116006)(26005)(6486002)(71200400001)(53546011)(6506007)(8676002)(4326008)(33656002)(186003)(8936002)(478600001)(316002)(54906003)(83380400001)(66556008)(5660300002)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?us-ascii?Q?/VHPKKuSAD1zk1GMAkOXJFXjdyiUT9g9AkDf4gMTy9fqe8C+8PZn6cFvasS6?=
 =?us-ascii?Q?I4qU9UVQHyu/EFMniswdfKujIfg8TyxFfI/w6C4Y7KoXz1uAj2NDLlaFHDNr?=
 =?us-ascii?Q?jcMoj/ZAXoA6s4HBEeJONrJnF6DSW9zrNB85CbuFt19GhVS5gczJ+6jvAhTo?=
 =?us-ascii?Q?UiSm5zDur1IxHBJRs3c74YaLHMQZXH2DhDsO3RAvwqe5nyLf8VBq+6nzC75j?=
 =?us-ascii?Q?uWkw6bMSAcdnEaewbo/HDwLjz78ThTp+3NjbK5a0xuaop/dJrvuNwcECHt/f?=
 =?us-ascii?Q?4A2/BaRyc1p9We0UaCCxMSUM+2N1J3TU4VaVFpO7Dp1VUFLZH0AOhosnDhix?=
 =?us-ascii?Q?IVlDKz/XiPrb6yywhmIYOqW2fEA3wY4i/BmExKsoVhMULyfSnosyh6rMYdkG?=
 =?us-ascii?Q?zb3gMGDpLo3XYG25jwIe2t/kG2q2wK8H6mHlycoCpaROlvHtH1M/zIdd7r3w?=
 =?us-ascii?Q?vnIyOZsmIqhhTR4YQvRZwIdWNXt65UxNtYYThHTRz7Mme3BD9cgsHaHScBch?=
 =?us-ascii?Q?uovevXpq0EdR8WomTmITvmgPAY7FKqrQA3AqvujGsjh4Km3P3ndFIN5An4qf?=
 =?us-ascii?Q?+jQ+zORxn6uInbK8HMUXQGeGw6yDUTH3qv2hGi7x8kSzRakgHkuwfPaEA5jH?=
 =?us-ascii?Q?tQKKhyjOQmueYt1B3NM5UUrRr/7I6CE2oQE6frBbbTVs/o8QEXxVA1a21TpH?=
 =?us-ascii?Q?0P8ZhegwOVt6EtWi1+PSf9USWIOCEDn8CU3/6mPmiCIhIxPTg9mo6T3MrgF/?=
 =?us-ascii?Q?x3cMwMfeljZpLPm/+yddnVkgz/Lo3jQxn1UOUwSZ1B8ZcOntd7qu+dRrRUAh?=
 =?us-ascii?Q?1nTeCBxRPse/V4qJ2wmyTBarjo07CCbIQoeLrGov/+dwG3SSI3tPdoay+lRX?=
 =?us-ascii?Q?RVLQsoYxIrVdcT3ywRRc47uCw/7p5gImZA5EwzwNvVtsJoDoSlMd5Io/LryF?=
 =?us-ascii?Q?v0gxwealz2jRNmRJR9BF1S0lf13CtIPe180v+YIitWXCdttRyPGgKN9KZLDU?=
 =?us-ascii?Q?3HiE?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <F17DAC197F0E694DA85183D123E3928D@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3387
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	9ed1ff25-a378-4a9a-fdaa-08d89df637f1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	MWrWM26SjyHl9PbhJS8+kwaUIR8F+TB9Uzdjo3HgWfzcRa4xd4QjEdS2ao/LjgRiuDg8ozfVupHNFA/3xwhjg/C8oUJhk+l1YptubBX9c6Z0t3YxUmqo8JJIMJx1UHJe7sHCbo5Hzj++3jWHw1bkNZKn8Wl4VasrCf0MX8hN8X+GSv+/peef3h6yRMHxIsb2ngPtp44mX59FxfdVH4HCZtnM1T09+0+IQOL711fUB9s+ODECKos5bl4tnMs5cgZHmObZ0567HVG1xonx4NTo2EvcRTLlke9Rh9xz0yNQJmdfLKKqqhBKSqMjbcjvjdTX/hO84ixqvTZwYZu8Jsw+ypooB76z7Dng1XTMf5FosEQcPtpUqWvhOjSHOCK87v3JFUwMbtRs5iSGGEhXxXZFnw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39860400002)(396003)(376002)(346002)(136003)(46966005)(54906003)(316002)(4326008)(8676002)(478600001)(6862004)(5660300002)(8936002)(336012)(2616005)(6512007)(83380400001)(6506007)(53546011)(70206006)(26005)(107886003)(2906002)(33656002)(82310400003)(70586007)(47076004)(81166007)(186003)(356005)(86362001)(6486002)(82740400003)(36756003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Dec 2020 17:00:23.7441
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 79335acb-ac39-473c-dbb3-08d89df64045
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0802MB2267

Hi Stefano,

> On 10 Dec 2020, at 22:29, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Thu, 10 Dec 2020, Bertrand Marquis wrote:
>> Hi Stefano,
>>
>>> On 9 Dec 2020, at 19:38, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>
>>> On Wed, 9 Dec 2020, Bertrand Marquis wrote:
>>>> Add vsysreg emulation for registers trapped when TID3 bit is activated
>>>> in HSR.
>>>> The emulation returns the value stored in the cpuinfo_guest structure
>>>> for known registers and handles reserved registers as RAZ.
>>>>
>>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>> ---
>>>> Changes in V2: Rebase
>>>> Changes in V3:
>>>> Fix commit message
>>>> Fix code style for GENERATE_TID3_INFO declaration
>>>> Add handling of reserved registers as RAZ.
>>>>
>>>> ---
>>>> xen/arch/arm/arm64/vsysreg.c | 53 ++++++++++++++++++++++++++++++++++++
>>>> 1 file changed, 53 insertions(+)
>>>>
>>>> diff --git a/xen/arch/arm/arm64/vsysreg.c b/xen/arch/arm/arm64/vsysreg.c
>>>> index 8a85507d9d..ef7a11dbdd 100644
>>>> --- a/xen/arch/arm/arm64/vsysreg.c
>>>> +++ b/xen/arch/arm/arm64/vsysreg.c
>>>> @@ -69,6 +69,14 @@ TVM_REG(CONTEXTIDR_EL1)
>>>>        break;                                                          \
>>>>    }
>>>>
>>>> +/* Macro to generate easily case for ID co-processor emulation */
>>>> +#define GENERATE_TID3_INFO(reg, field, offset)                          \
>>>> +    case HSR_SYSREG_##reg:                                              \
>>>> +    {                                                                   \
>>>> +        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr,   \
>>>> +                          1, guest_cpuinfo.field.bits[offset]);         \
>>>
>>> [...]
>>>
>>>> +    HSR_SYSREG_TID3_RESERVED_CASE:
>>>> +        /* Handle all reserved registers as RAZ */
>>>> +        return handle_ro_raz(regs, regidx, hsr.sysreg.read, hsr, 1);
>>>
>>>=20
>>> We are implementing both the known and the implementation defined
>>> registers as read-as-zero. On write, we inject an exception.
>>>
>>> However, reading the manual, it looks like the implementation defined
>>> registers should be read-as-zero/write-ignore, is that right?
>>
>> In the documentation, I found all those defined as RO (Arm Architecture
>> reference manual, chapter D12.3.1). Do you think we should handle read-only
>> registers as write-ignore? Now that I think of it, RO does not explicitly
>> say whether writes are ignored or should generate an exception.
>>
>>>
>>> I couldn't easily find in the manual if it is OK to inject an exception
>>> on write to a known register.
>>
>> I am actually unsure whether it should or not.
>> I will try to run a test to check what happens if this is done on the
>> real hardware and come back to you on this one.
>
> Yeah, that's the best way to do it: if writes are ignored on real
> hardware, let's turn this into read-only/write-ignore, otherwise if they
> generate an exception then let's keep the code as is.
>
> Also you might want to do that both for a known register and also for an
> unknown register to see if it makes a difference.

I did a test with the following:
- WRITE_SYSREG64(0xf, S3_0_C0_C3_3)
- WRITE_SYSREG64(0xf, ID_MMFR0_EL1)
- WRITE_SYSREG64(0xf, ID_AA64MMFR0_EL1)

All generate exceptions like:
Hypervisor Trap. HSR=0x2000000 EC=0x0 IL=1 Syndrome=0x0

So I think it is right to generate an exception when one of them is written.

Regards
Bertrand

>
> Thank you!



From xen-devel-bounces@lists.xenproject.org Fri Dec 11 17:41:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 17:41:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50794.89548 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knmPm-0004mH-Oe; Fri, 11 Dec 2020 17:41:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50794.89548; Fri, 11 Dec 2020 17:41:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knmPm-0004mA-Le; Fri, 11 Dec 2020 17:41:10 +0000
Received: by outflank-mailman (input) for mailman id 50794;
 Fri, 11 Dec 2020 17:41:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knmPm-0004m2-3u; Fri, 11 Dec 2020 17:41:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knmPl-0003c2-S4; Fri, 11 Dec 2020 17:41:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knmPl-00025g-LC; Fri, 11 Dec 2020 17:41:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1knmPl-0003wD-Kh; Fri, 11 Dec 2020 17:41:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jA9rb5VQ98/9V36RWJpWD0MOcaRu6nqslXj9yaP2wd4=; b=p921X16YhnHeFLT8pYr1ds+aZv
	3gp1pFP0ULSc2C2JaXr1wmcgogiW0Fix0MgBCsSnIXg2K0y0lWVMp26E0Bbfems+pd4H+uLb7vkXT
	2Tx7/Kd6WUmkD/WlqN5INJjO5utm/VK4My5PpePH5gz3Lo/r8lcnkfdVcMbwtmOA/q5I=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157414-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157414: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-install:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=33dc9614dc208291d0c4bcdeb5d30d481dcd2c4c
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Dec 2020 17:41:09 +0000

flight 157414 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157414/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  12 debian-install           fail REGR. vs. 152332
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-xl          10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                33dc9614dc208291d0c4bcdeb5d30d481dcd2c4c
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  132 days
Failing since        152366  2020-08-01 20:49:34 Z  131 days  228 attempts
Testing same since   157414  2020-12-11 08:12:48 Z    0 days    1 attempts

------------------------------------------------------------
3677 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 704276 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 17:53:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 17:53:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50804.89570 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knmbq-0005wO-54; Fri, 11 Dec 2020 17:53:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50804.89570; Fri, 11 Dec 2020 17:53:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knmbq-0005wH-1X; Fri, 11 Dec 2020 17:53:38 +0000
Received: by outflank-mailman (input) for mailman id 50804;
 Fri, 11 Dec 2020 17:53:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pcPH=FP=gmail.com=andy.shevchenko@srs-us1.protection.inumbo.net>)
 id 1knmbo-0005wB-Ux
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 17:53:37 +0000
Received: from mail-pj1-x1036.google.com (unknown [2607:f8b0:4864:20::1036])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2b274a65-555a-4fa5-ac9d-4c4b3e6705a4;
 Fri, 11 Dec 2020 17:53:36 +0000 (UTC)
Received: by mail-pj1-x1036.google.com with SMTP id p21so2119767pjv.0
 for <xen-devel@lists.xenproject.org>; Fri, 11 Dec 2020 09:53:36 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b274a65-555a-4fa5-ac9d-4c4b3e6705a4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=qJMuReBBloytY5lBy9ILiGRjMb37kF5Bm1u/sykMtGI=;
        b=pbcP9ZyMd8cecm/Ac29HxHof8LYE8oIxJsfNxtiP4g+O1ufaXL9wqRka/RSo4Tovrl
         0+y5AKL9oCVabUsoTOFqz/Yq9QfdTj0Typ4F+2X8XEQKxDDPDKl8NptHh9zo30qru18d
         QGlaxM7GEPdID0olwlCiRofo+G16Jl5uy1wTEIGaTXATmtYB0SKJ7RfAvwDsjDcWrkzF
         kKjuZM4tWxnD1BBGBwlxpereCqs+GnyIsKzQ1aCY79/NwQxDiafuQ2m4U5xGO28vG9vA
         fsrh/72iZb6gHAJp4ifwj1MfQl9QfYGQYjUntoAEcq5P8hce+uphA5O/Y7Ku0HiAUL7i
         H9Mg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=qJMuReBBloytY5lBy9ILiGRjMb37kF5Bm1u/sykMtGI=;
        b=gHjtm7ourVH/5zKhY2xJDu0dY80OVAqDX/i9YdJ0xO5wFAgcshKvFLuvQUqlAxIIrk
         SvxyOd5ECNJLIiUB31TN5ehZhync2uFs2ziWEpCOHhncNW/w0mT42LuzO6ZivIX6GlL0
         f8CcCjSylZA9nSxUN1EcFuzXCG0DTi7OnEirqiw/GB5hBMavJWDhTLPFHZW2ryrrsDzv
         llSNbxEHCUtAHYpUnwguGnMlhqIjmsC1B0vjFIYEbUx3GhM733N9reE5EGRrMHVFofdG
         yGRaNjobpk7IFeQAf4I6x1LcLBIc+mURdvLZ2+ZgOmV4Rskn14+nBww68nU86AtaSXZe
         xjSQ==
X-Gm-Message-State: AOAM530jOFKnjnW2MU8NI5vYyy/eBFles/fH+1CJm5mDk3rVustnSjlQ
	e/c7XyBFiENX5Sd4oJ8ZMXA6JWsLZS1qE2mKRwA=
X-Google-Smtp-Source: ABdhPJzHUDKz30vqqKYAobJsngcYE8DjK4RijmUtwtDAm93E/clqz0VBoWDrwvzZzwrJPWtEJfA5ahq579HFtI6a1+g=
X-Received: by 2002:a17:90a:34cb:: with SMTP id m11mr14313128pjf.181.1607709215341;
 Fri, 11 Dec 2020 09:53:35 -0800 (PST)
MIME-Version: 1.0
References: <20201210192536.118432146@linutronix.de> <20201210194042.860029489@linutronix.de>
In-Reply-To: <20201210194042.860029489@linutronix.de>
From: Andy Shevchenko <andy.shevchenko@gmail.com>
Date: Fri, 11 Dec 2020 19:53:07 +0200
Message-ID: <CAHp75Vc-2OjE2uwvNRiyLMQ8GSN3P7SehKD-yf229_7ocaktiw@mail.gmail.com>
Subject: Re: [patch 03/30] genirq: Move irq_set_lockdep_class() to core
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, Peter Zijlstra <peterz@infradead.org>, 
	Marc Zyngier <maz@kernel.org>, 
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller <deller@gmx.de>, 
	afzal mohammed <afzal.mohd.ma@gmail.com>, linux-parisc@vger.kernel.org, 
	Russell King <linux@armlinux.org.uk>, 
	linux-arm Mailing List <linux-arm-kernel@lists.infradead.org>, Mark Rutland <mark.rutland@arm.com>, 
	Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, 
	Christian Borntraeger <borntraeger@de.ibm.com>, Heiko Carstens <hca@linux.ibm.com>, linux-s390@vger.kernel.org, 
	Jani Nikula <jani.nikula@linux.intel.com>, 
	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>, Rodrigo Vivi <rodrigo.vivi@intel.com>, 
	David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>, 
	Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>, 
	Chris Wilson <chris@chris-wilson.co.uk>, Wambui Karuga <wambui.karugax@gmail.com>, 
	intel-gfx <intel-gfx@lists.freedesktop.org>, 
	dri-devel <dri-devel@lists.freedesktop.org>, 
	Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>, Linus Walleij <linus.walleij@linaro.org>, 
	"open list:GPIO SUBSYSTEM" <linux-gpio@vger.kernel.org>, Lee Jones <lee.jones@linaro.org>, 
	Jon Mason <jdmason@kudzu.us>, Dave Jiang <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>, 
	linux-ntb@googlegroups.com, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>, 
	Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>, 
	Michal Simek <michal.simek@xilinx.com>, linux-pci <linux-pci@vger.kernel.org>, 
	Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>, Hou Zhiqiang <Zhiqiang.Hou@nxp.com>, 
	Tariq Toukan <tariqt@nvidia.com>, "David S. Miller" <davem@davemloft.net>, 
	Jakub Kicinski <kuba@kernel.org>, netdev <netdev@vger.kernel.org>, 
	"open list:HFI1 DRIVER" <linux-rdma@vger.kernel.org>, Saeed Mahameed <saeedm@nvidia.com>, 
	Leon Romanovsky <leon@kernel.org>, Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
	Juergen Gross <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

On Thu, Dec 10, 2020 at 10:14 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> irq_set_lockdep_class() is used from modules and requires irq_to_desc() to
> be exported. Move it into the core code which lifts another requirement for
> the export.

...

> +       if (IS_ENABLED(CONFIG_LOCKDEP))
> +               __irq_set_lockdep_class(irq, lock_class, request_class);

Maybe I missed something, but even if the compiler does not warn, the
use of IS_ENABLED() together with a complementary #ifdef seems inconsistent.

> +#ifdef CONFIG_LOCKDEP
...
> +EXPORT_SYMBOL_GPL(irq_set_lockdep_class);
> +#endif


-- 
With Best Regards,
Andy Shevchenko


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 18:08:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 18:08:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50810.89582 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knmqC-0007Aq-FC; Fri, 11 Dec 2020 18:08:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50810.89582; Fri, 11 Dec 2020 18:08:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knmqC-0007Aj-Bw; Fri, 11 Dec 2020 18:08:28 +0000
Received: by outflank-mailman (input) for mailman id 50810;
 Fri, 11 Dec 2020 18:08:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ekOa=FP=kernel.org=maz@srs-us1.protection.inumbo.net>)
 id 1knmqA-0007AZ-9H
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 18:08:26 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cce3f81d-5a97-4485-b75b-ef5169fba09a;
 Fri, 11 Dec 2020 18:08:25 +0000 (UTC)
Received: from disco-boy.misterjones.org (disco-boy.misterjones.org
 [51.254.78.96])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 891FF23D3C;
 Fri, 11 Dec 2020 18:08:24 +0000 (UTC)
Received: from 78.163-31-62.static.virginmediabusiness.co.uk ([62.31.163.78]
 helo=wait-a-minute.misterjones.org)
 by disco-boy.misterjones.org with esmtpsa (TLS1.3) tls
 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.94)
 (envelope-from <maz@kernel.org>)
 id 1knmq6-000WxV-AL; Fri, 11 Dec 2020 18:08:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cce3f81d-5a97-4485-b75b-ef5169fba09a
Date: Fri, 11 Dec 2020 18:08:19 +0000
Message-ID: <87zh2k43n0.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Russell King <linux@armlinux.org.uk>,
	linux-arm-kernel@lists.infradead.org,
	"James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
	Helge Deller <deller@gmx.de>,
	afzal mohammed <afzal.mohd.ma@gmail.com>,
	linux-parisc@vger.kernel.org,
	Mark Rutland <mark.rutland@arm.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	linux-s390@vger.kernel.org,
	Jani Nikula <jani.nikula@linux.intel.com>,
	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	David Airlie <airlied@linux.ie>,
	Daniel Vetter <daniel@ffwll.ch>,
	Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
	Chris Wilson <chris@chris-wilson.co.uk>,
	Wambui Karuga <wambui.karugax@gmail.com>,
	intel-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
	Linus Walleij <linus.walleij@linaro.org>,
	linux-gpio@vger.kernel.org,
	Lee Jones <lee.jones@linaro.org>,
	Jon Mason <jdmason@kudzu.us>,
	Dave Jiang <dave.jiang@intel.com>,
	Allen Hubbe <allenbh@gmail.com>,
	linux-ntb@googlegroups.com,
	Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
	Rob Herring <robh@kernel.org>,
	Bjorn Helgaas <bhelgaas@google.com>,
	Michal Simek <michal.simek@xilinx.com>,
	linux-pci@vger.kernel.org,
	Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
	Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
	Tariq Toukan <tariqt@nvidia.com>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	netdev@vger.kernel.org,
	linux-rdma@vger.kernel.org,
	Saeed Mahameed <saeedm@nvidia.com>,
	Leon Romanovsky <leon@kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [patch 09/30] ARM: smp: Use irq_desc_kstat_cpu() in show_ipi_list()
In-Reply-To: <20201210194043.454288890@linutronix.de>
References: <20201210192536.118432146@linutronix.de>
	<20201210194043.454288890@linutronix.de>
User-Agent: Wanderlust/2.15.9 (Almost Unreal) SEMI-EPG/1.14.7 (Harue)
 FLIM-LB/1.14.9 (=?UTF-8?B?R29qxY0=?=) APEL-LB/10.8 Emacs/27.1
 (x86_64-pc-linux-gnu) MULE/6.0 (HANACHIRUSATO)
MIME-Version: 1.0 (generated by SEMI-EPG 1.14.7 - "Harue")
Content-Type: text/plain; charset=US-ASCII
X-SA-Exim-Connect-IP: 62.31.163.78
X-SA-Exim-Rcpt-To: tglx@linutronix.de, linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org, James.Bottomley@HansenPartnership.com, deller@gmx.de, afzal.mohd.ma@gmail.com, linux-parisc@vger.kernel.org, mark.rutland@arm.com, catalin.marinas@arm.com, will@kernel.org, borntraeger@de.ibm.com, hca@linux.ibm.com, linux-s390@vger.kernel.org, jani.nikula@linux.intel.com, joonas.lahtinen@linux.intel.com, rodrigo.vivi@intel.com, airlied@linux.ie, daniel@ffwll.ch, pankaj.laxminarayan.bharadiya@intel.com, chris@chris-wilson.co.uk, wambui.karugax@gmail.com, intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, tvrtko.ursulin@linux.intel.com, linus.walleij@linaro.org, linux-gpio@vger.kernel.org, lee.jones@linaro.org, jdmason@kudzu.us, dave.jiang@intel.com, allenbh@gmail.com, linux-ntb@googlegroups.com, lorenzo.pieralisi@arm.com, robh@kernel.org, bhelgaas@google.com, michal.simek@xilinx.com, linux-pci@vger.kernel.org, m.karthike
 yan@mobiveil.co.in, Zhiqiang.Hou@nxp.com, tariqt@nvidia.com, davem@davemloft.net, kuba@kernel.org, netdev@vger.kernel.org, linux-rdma@vger.kernel.org, saeedm@nvidia.com, leon@kernel.org, boris.ostrovsky@oracle.com, jgross@suse.com, sstabellini@kernel.org, xen-devel@lists.xenproject.org
X-SA-Exim-Mail-From: maz@kernel.org
X-SA-Exim-Scanned: No (on disco-boy.misterjones.org); SAEximRunCond expanded to false

On Thu, 10 Dec 2020 19:25:45 +0000,
Thomas Gleixner <tglx@linutronix.de> wrote:
> 
> The irq descriptor is already there, no need to look it up again.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Russell King <linux@armlinux.org.uk>
> Cc: linux-arm-kernel@lists.infradead.org
> ---
>  arch/arm/kernel/smp.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> --- a/arch/arm/kernel/smp.c
> +++ b/arch/arm/kernel/smp.c
> @@ -550,7 +550,7 @@ void show_ipi_list(struct seq_file *p, i
>  		seq_printf(p, "%*s%u: ", prec - 1, "IPI", i);
>  
>  		for_each_online_cpu(cpu)
> -			seq_printf(p, "%10u ", kstat_irqs_cpu(irq, cpu));
> +			seq_printf(p, "%10u ", irq_desc_kstat_cpu(ipi_desc[i], cpu));
>  
>  		seq_printf(p, " %s\n", ipi_types[i]);
>  	}
> 
> 

Acked-by: Marc Zyngier <maz@kernel.org>

-- 
Without deviation from the norm, progress is not possible.


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 18:08:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 18:08:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50811.89594 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knmqP-0007DV-OT; Fri, 11 Dec 2020 18:08:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50811.89594; Fri, 11 Dec 2020 18:08:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knmqP-0007DN-Kt; Fri, 11 Dec 2020 18:08:41 +0000
Received: by outflank-mailman (input) for mailman id 50811;
 Fri, 11 Dec 2020 18:08:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ekOa=FP=kernel.org=maz@srs-us1.protection.inumbo.net>)
 id 1knmqP-0007DB-14
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 18:08:41 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2a9d3618-424e-44ba-9941-7085f118400d;
 Fri, 11 Dec 2020 18:08:40 +0000 (UTC)
Received: from disco-boy.misterjones.org (disco-boy.misterjones.org
 [51.254.78.96])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 3300923EF3;
 Fri, 11 Dec 2020 18:08:39 +0000 (UTC)
Received: from 78.163-31-62.static.virginmediabusiness.co.uk ([62.31.163.78]
 helo=wait-a-minute.misterjones.org)
 by disco-boy.misterjones.org with esmtpsa (TLS1.3) tls
 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.94)
 (envelope-from <maz@kernel.org>)
 id 1knmqL-000Wxd-8Z; Fri, 11 Dec 2020 18:08:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a9d3618-424e-44ba-9941-7085f118400d
Date: Fri, 11 Dec 2020 18:08:34 +0000
Message-ID: <87y2i443ml.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>,
	linux-arm-kernel@lists.infradead.org,
	"James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
	Helge Deller <deller@gmx.de>,
	afzal mohammed <afzal.mohd.ma@gmail.com>,
	linux-parisc@vger.kernel.org,
	Russell King <linux@armlinux.org.uk>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	linux-s390@vger.kernel.org,
	Jani Nikula <jani.nikula@linux.intel.com>,
	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	David Airlie <airlied@linux.ie>,
	Daniel Vetter <daniel@ffwll.ch>,
	Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
	Chris Wilson <chris@chris-wilson.co.uk>,
	Wambui Karuga <wambui.karugax@gmail.com>,
	intel-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
	Linus Walleij <linus.walleij@linaro.org>,
	linux-gpio@vger.kernel.org,
	Lee Jones <lee.jones@linaro.org>,
	Jon Mason <jdmason@kudzu.us>,
	Dave Jiang <dave.jiang@intel.com>,
	Allen Hubbe <allenbh@gmail.com>,
	linux-ntb@googlegroups.com,
	Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
	Rob Herring <robh@kernel.org>,
	Bjorn Helgaas <bhelgaas@google.com>,
	Michal Simek <michal.simek@xilinx.com>,
	linux-pci@vger.kernel.org,
	Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
	Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
	Tariq Toukan <tariqt@nvidia.com>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	netdev@vger.kernel.org,
	linux-rdma@vger.kernel.org,
	Saeed Mahameed <saeedm@nvidia.com>,
	Leon Romanovsky <leon@kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [patch 10/30] arm64/smp: Use irq_desc_kstat_cpu() in arch_show_interrupts()
In-Reply-To: <20201210194043.546326568@linutronix.de>
References: <20201210192536.118432146@linutronix.de>
	<20201210194043.546326568@linutronix.de>
User-Agent: Wanderlust/2.15.9 (Almost Unreal) SEMI-EPG/1.14.7 (Harue)
 FLIM-LB/1.14.9 (=?UTF-8?B?R29qxY0=?=) APEL-LB/10.8 Emacs/27.1
 (x86_64-pc-linux-gnu) MULE/6.0 (HANACHIRUSATO)
MIME-Version: 1.0 (generated by SEMI-EPG 1.14.7 - "Harue")
Content-Type: text/plain; charset=US-ASCII
X-SA-Exim-Connect-IP: 62.31.163.78
X-SA-Exim-Rcpt-To: tglx@linutronix.de, linux-kernel@vger.kernel.org, peterz@infradead.org, mark.rutland@arm.com, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org, James.Bottomley@HansenPartnership.com, deller@gmx.de, afzal.mohd.ma@gmail.com, linux-parisc@vger.kernel.org, linux@armlinux.org.uk, borntraeger@de.ibm.com, hca@linux.ibm.com, linux-s390@vger.kernel.org, jani.nikula@linux.intel.com, joonas.lahtinen@linux.intel.com, rodrigo.vivi@intel.com, airlied@linux.ie, daniel@ffwll.ch, pankaj.laxminarayan.bharadiya@intel.com, chris@chris-wilson.co.uk, wambui.karugax@gmail.com, intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, tvrtko.ursulin@linux.intel.com, linus.walleij@linaro.org, linux-gpio@vger.kernel.org, lee.jones@linaro.org, jdmason@kudzu.us, dave.jiang@intel.com, allenbh@gmail.com, linux-ntb@googlegroups.com, lorenzo.pieralisi@arm.com, robh@kernel.org, bhelgaas@google.com, michal.simek@xilinx.com, linux-pci@vger.kernel.org, m.karthike
 yan@mobiveil.co.in, Zhiqiang.Hou@nxp.com, tariqt@nvidia.com, davem@davemloft.net, kuba@kernel.org, netdev@vger.kernel.org, linux-rdma@vger.kernel.org, saeedm@nvidia.com, leon@kernel.org, boris.ostrovsky@oracle.com, jgross@suse.com, sstabellini@kernel.org, xen-devel@lists.xenproject.org
X-SA-Exim-Mail-From: maz@kernel.org
X-SA-Exim-Scanned: No (on disco-boy.misterjones.org); SAEximRunCond expanded to false

On Thu, 10 Dec 2020 19:25:46 +0000,
Thomas Gleixner <tglx@linutronix.de> wrote:
> 
> The irq descriptor is already there, no need to look it up again.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: linux-arm-kernel@lists.infradead.org
> ---
>  arch/arm64/kernel/smp.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> --- a/arch/arm64/kernel/smp.c
> +++ b/arch/arm64/kernel/smp.c
> @@ -809,7 +809,7 @@ int arch_show_interrupts(struct seq_file
>  		seq_printf(p, "%*s%u:%s", prec - 1, "IPI", i,
>  			   prec >= 4 ? " " : "");
>  		for_each_online_cpu(cpu)
> -			seq_printf(p, "%10u ", kstat_irqs_cpu(irq, cpu));
> +			seq_printf(p, "%10u ", irq_desc_kstat_cpu(ipi_desc[i], cpu));
>  		seq_printf(p, "      %s\n", ipi_types[i]);
>  	}
>  
> 
> 

Acked-by: Marc Zyngier <maz@kernel.org>

-- 
Without deviation from the norm, progress is not possible.


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 18:12:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 18:12:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50823.89606 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knmu2-0008FM-8V; Fri, 11 Dec 2020 18:12:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50823.89606; Fri, 11 Dec 2020 18:12:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knmu2-0008FF-5C; Fri, 11 Dec 2020 18:12:26 +0000
Received: by outflank-mailman (input) for mailman id 50823;
 Fri, 11 Dec 2020 18:12:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pcPH=FP=gmail.com=andy.shevchenko@srs-us1.protection.inumbo.net>)
 id 1knmu1-0008FA-FL
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 18:12:25 +0000
Received: from mail-pg1-x52a.google.com (unknown [2607:f8b0:4864:20::52a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 65348a7e-327b-40e1-9d88-b9f5d3ecd324;
 Fri, 11 Dec 2020 18:12:24 +0000 (UTC)
Received: by mail-pg1-x52a.google.com with SMTP id w16so7648879pga.9
 for <xen-devel@lists.xenproject.org>; Fri, 11 Dec 2020 10:12:24 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65348a7e-327b-40e1-9d88-b9f5d3ecd324
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=q/1zdMH3DpxGtZ8QgtITmolgonObU/7AjOPUuWKrWNo=;
        b=Pm8buLXst2LcV9x4pk4vIxyybuwSbvL97o6tQP/6i0ZC0zOJ6xhhPvOmmJHnVFkhgw
         sQfgcxJImHA+DnVZbZxb64zhatDbthp68cZ4iRaEqM/K8XSe9UZnfE94uYtkDbOOpabu
         L1rj/JPmJET4DAk/lcrawpEkrx9i3ul2i6qGlaQuFoK2UkPf+SMWdOeOPPS8QAdKSG2Q
         g24KIZVw1nm3kfJCayecl50J7bltlRshvQcWiGMUSmazoa8KfAW/iwqyejRvRxGzsh3A
         q/OBzRHzggVvK36H+lQNs9ICk+Z16dfzwLBUVGvlOwEUqmBMnCQtHV8wDQuwIxn+Jo8v
         0Yfw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=q/1zdMH3DpxGtZ8QgtITmolgonObU/7AjOPUuWKrWNo=;
        b=HUzHojBGCWGhBPpgV/zA3WQ2D6vjaZLwtQ3nqx60IOUJIG1dIcSX2jWSw9mkk3fDuo
         AdxvNqi5rafwV+07Iykgn01kcb/VxzlQ6sqSte6R1NhoG7VDg/tNXEyuanEdlyKl+Bs2
         7y4iVe7zZDGiuxI2PpqdLnIm4JoCA0c0tVyEGBiFb2+uwGw2wNRm1UStSWHzwKRdgDwn
         iEFXSsoYDUpMH7u5DNCcUz0Ywg5xEoa/WqLpuFB8xZV80DYA8E3a6OIChVCtyqLXKEEt
         kzPSebDKzYHRsbF5sRFNGLuuokbP5USWyYE1tgcu9mCysWUu6cnK7+PuG9QM1gNSBcJQ
         w7tw==
X-Gm-Message-State: AOAM5317bKwsAx64UINu5ehzbA84dB2HfW4tJctwkR0GVf/JWvZKkT8w
	0p9vqKU5t8W2quZW2Qyaco0iYtrSsO4J01uWeSs=
X-Google-Smtp-Source: ABdhPJzPNFm8swDHOmLl35eT5h+YQB676NTtT2BWFgfALxfYZ8r2UnWa+iJwNQSLiKZOMQcPIak2nNSo7eC4Ue1vtXA=
X-Received: by 2002:a63:4002:: with SMTP id n2mr13054398pga.4.1607710343875;
 Fri, 11 Dec 2020 10:12:23 -0800 (PST)
MIME-Version: 1.0
References: <20201210192536.118432146@linutronix.de> <20201210194044.157283633@linutronix.de>
In-Reply-To: <20201210194044.157283633@linutronix.de>
From: Andy Shevchenko <andy.shevchenko@gmail.com>
Date: Fri, 11 Dec 2020 20:12:07 +0200
Message-ID: <CAHp75Veo9aQLCp9ZuCcoexPLHM=R_PEu6uhP_P2bSpsVzyUaNQ@mail.gmail.com>
Subject: Re: [patch 16/30] mfd: ab8500-debugfs: Remove the racy fiddling with irq_desc
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, Peter Zijlstra <peterz@infradead.org>, 
	Marc Zyngier <maz@kernel.org>, Linus Walleij <linus.walleij@linaro.org>, 
	Lee Jones <lee.jones@linaro.org>, 
	linux-arm Mailing List <linux-arm-kernel@lists.infradead.org>, 
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller <deller@gmx.de>, 
	afzal mohammed <afzal.mohd.ma@gmail.com>, linux-parisc@vger.kernel.org, 
	Russell King <linux@armlinux.org.uk>, Mark Rutland <mark.rutland@arm.com>, 
	Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, 
	Christian Borntraeger <borntraeger@de.ibm.com>, Heiko Carstens <hca@linux.ibm.com>, linux-s390@vger.kernel.org, 
	Jani Nikula <jani.nikula@linux.intel.com>, 
	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>, Rodrigo Vivi <rodrigo.vivi@intel.com>, 
	David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>, 
	Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>, 
	Chris Wilson <chris@chris-wilson.co.uk>, Wambui Karuga <wambui.karugax@gmail.com>, 
	intel-gfx <intel-gfx@lists.freedesktop.org>, 
	dri-devel <dri-devel@lists.freedesktop.org>, 
	Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>, 
	"open list:GPIO SUBSYSTEM" <linux-gpio@vger.kernel.org>, Jon Mason <jdmason@kudzu.us>, 
	Dave Jiang <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>, linux-ntb@googlegroups.com, 
	Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>, Rob Herring <robh@kernel.org>, 
	Bjorn Helgaas <bhelgaas@google.com>, Michal Simek <michal.simek@xilinx.com>, 
	linux-pci <linux-pci@vger.kernel.org>, 
	Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>, Hou Zhiqiang <Zhiqiang.Hou@nxp.com>, 
	Tariq Toukan <tariqt@nvidia.com>, "David S. Miller" <davem@davemloft.net>, 
	Jakub Kicinski <kuba@kernel.org>, netdev <netdev@vger.kernel.org>, 
	"open list:HFI1 DRIVER" <linux-rdma@vger.kernel.org>, Saeed Mahameed <saeedm@nvidia.com>, 
	Leon Romanovsky <leon@kernel.org>, Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
	Juergen Gross <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

On Thu, Dec 10, 2020 at 9:57 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> First of all, drivers have absolutely no business digging into the internals
> of an irq descriptor. That's core code and subject to change. All of this
> information is readily available via /proc/interrupts in a safe and race-free
> way.
>
> Remove the inspection code which is a blatant violation of subsystem
> boundaries and racy against concurrent modifications of the interrupt
> descriptor.
>
> Print the irq line instead so the information can be looked up in a sane
> way in /proc/interrupts.

...

> -               seq_printf(s, "%3i:  %6i %4i",
> +               seq_printf(s, "%3i:  %6i %4i %4i\n",

The width specifiers seem mismatched to the arguments; I think the intention was something like
               seq_printf(s, "%3i:  %4i %6i %4i\n",

>                            line,
> +                          line + irq_first,
>                            num_interrupts[line],
>                            num_wake_interrupts[line]);


-- 
With Best Regards,
Andy Shevchenko


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 19:00:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 19:00:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50835.89623 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knneR-0004cH-4l; Fri, 11 Dec 2020 19:00:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50835.89623; Fri, 11 Dec 2020 19:00:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knneR-0004cA-1Z; Fri, 11 Dec 2020 19:00:23 +0000
Received: by outflank-mailman (input) for mailman id 50835;
 Fri, 11 Dec 2020 19:00:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZKjA=FP=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1knneP-0004c5-14
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 19:00:21 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cd98cb23-33b2-4ebe-aa99-567cf0d7e846;
 Fri, 11 Dec 2020 19:00:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd98cb23-33b2-4ebe-aa99-567cf0d7e846
Date: Fri, 11 Dec 2020 11:00:17 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607713219;
	bh=oyG2ewwl/dtGy3sv7RnoOIWcrm8Zb6PwqCrc4VzsygU=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=iArBkGo0IWe4TkIBzvZKsM0vgEiDOgUMbMjPzlqAxDK5zH0umTJJGxLmVgyHMhgpq
	 +/GBuQP7r3wt/ozQFln2VNYgD9AukCkjspqXD5fKB45mokEAZcI2s8oZK3p+CjZ2TK
	 rXGGVci+VdWvuujEY9PFHmgAPuJXYH/aGg11x/YqzVkOcskf3BKiqBZo/T5TpFCPUn
	 gIP0GTVoUxrnRWqu5fGFIrfzQsKO+cJdG3MPYO0czN9YwTRi9WItKBhjLOZ5kFMF5u
	 O3TJ7prJyRLU2hI6LCmjiqG8eaIB4J4k8KQhpDDV+vNKPnWzwkqC6ypDm8So+ajVK8
	 PZs05XHODxuOg==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 4/7] xen/arm: Add handler for ID registers on arm64
In-Reply-To: <5C3F0DF9-417B-4946-A906-FE2A9CD4A38F@arm.com>
Message-ID: <alpine.DEB.2.21.2012111058410.10222@sstabellini-ThinkPad-T480s>
References: <cover.1607524536.git.bertrand.marquis@arm.com> <e991b05af11d00627709caf847c5de99f487cab0.1607524536.git.bertrand.marquis@arm.com> <alpine.DEB.2.21.2012091131350.20986@sstabellini-ThinkPad-T480s> <4B26BDEE-DA30-4B5B-A428-9D8D4659B581@arm.com>
 <alpine.DEB.2.21.2012101428030.20986@sstabellini-ThinkPad-T480s> <5C3F0DF9-417B-4946-A906-FE2A9CD4A38F@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 11 Dec 2020, Bertrand Marquis wrote:
> Hi Stefano,
> 
> > On 10 Dec 2020, at 22:29, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > 
> > On Thu, 10 Dec 2020, Bertrand Marquis wrote:
> >> Hi Stefano,
> >> 
> >>> On 9 Dec 2020, at 19:38, Stefano Stabellini <sstabellini@kernel.org> wrote:
> >>> 
> >>> On Wed, 9 Dec 2020, Bertrand Marquis wrote:
> >>>> Add vsysreg emulation for registers trapped when TID3 bit is activated
> >>>> in HSR.
> >>>> The emulation returns the value stored in the cpuinfo_guest structure
> >>>> for known registers and handles reserved registers as RAZ.
> >>>> 
> >>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> >>>> ---
> >>>> Changes in V2: Rebase
> >>>> Changes in V3:
> >>>> Fix commit message
> >>>> Fix code style for GENERATE_TID3_INFO declaration
> >>>> Add handling of reserved registers as RAZ.
> >>>> 
> >>>> ---
> >>>> xen/arch/arm/arm64/vsysreg.c | 53 ++++++++++++++++++++++++++++++++++++
> >>>> 1 file changed, 53 insertions(+)
> >>>> 
> >>>> diff --git a/xen/arch/arm/arm64/vsysreg.c b/xen/arch/arm/arm64/vsysreg.c
> >>>> index 8a85507d9d..ef7a11dbdd 100644
> >>>> --- a/xen/arch/arm/arm64/vsysreg.c
> >>>> +++ b/xen/arch/arm/arm64/vsysreg.c
> >>>> @@ -69,6 +69,14 @@ TVM_REG(CONTEXTIDR_EL1)
> >>>>        break;                                                          \
> >>>>    }
> >>>> 
> >>>> +/* Macro to generate easily case for ID co-processor emulation */
> >>>> +#define GENERATE_TID3_INFO(reg, field, offset)                          \
> >>>> +    case HSR_SYSREG_##reg:                                              \
> >>>> +    {                                                                   \
> >>>> +        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr,   \
> >>>> +                          1, guest_cpuinfo.field.bits[offset]);         \
> >>> 
> >>> [...]
> >>> 
> >>>> +    HSR_SYSREG_TID3_RESERVED_CASE:
> >>>> +        /* Handle all reserved registers as RAZ */
> >>>> +        return handle_ro_raz(regs, regidx, hsr.sysreg.read, hsr, 1);
> >>> 
> >>> 
> >>> We are implementing both the known and the implementation defined
> >>> registers as read-as-zero. On write, we inject an exception.
> >>> 
> >>> However, reading the manual, it looks like the implementation defined
> >>> registers should be read-as-zero/write-ignore, is that right?
> >> 
> >> In the documentation, I found all of those defined as RO (Arm Architecture
> >> Reference Manual, chapter D12.3.1). Do you think we should handle read-only
> >> registers as write-ignore? Now that I think of it, RO does not explicitly
> >> say whether writes are ignored or should generate an exception.
> >> 
> >>> 
> >>> I couldn't easily find in the manual if it is OK to inject an exception
> >>> on write to a known register.
> >> 
> >> I am actually unsure whether it should or not.
> >> I will run a test to check what happens when this is done on real
> >> hardware and come back to you on this one.
> > 
> > Yeah, that's the best way to do it: if writes are ignored on real
> > hardware, let's turn this into read-only/write-ignore, otherwise if they
> > generate an exception then let's keep the code as is.
> > 
> > Also you might want to do that both for a known register and also for an
> > unknown register to see if it makes a difference.
> 
> I did a test with the following:
> - WRITE_SYSREG64(0xf, S3_0_C0_C3_3)
> - WRITE_SYSREG64(0xf, ID_MMFR0_EL1)
> - WRITE_SYSREG64(0xf, ID_AA64MMFR0_EL1)
> 
> All generate exceptions like:
> Hypervisor Trap. HSR=0x2000000 EC=0x0 IL=1 Syndrome=0x0
> 
> So I think it is right to generate an exception if one of them is accessed.

Great, thanks for checking. In that case the patch is fine as is.


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 19:03:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 19:03:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50840.89635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knnhi-0004w8-OS; Fri, 11 Dec 2020 19:03:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50840.89635; Fri, 11 Dec 2020 19:03:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knnhi-0004w1-LZ; Fri, 11 Dec 2020 19:03:46 +0000
Received: by outflank-mailman (input) for mailman id 50840;
 Fri, 11 Dec 2020 19:03:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knnhh-0004vt-UE; Fri, 11 Dec 2020 19:03:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knnhh-0005Qi-Ke; Fri, 11 Dec 2020 19:03:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knnhh-00073m-B2; Fri, 11 Dec 2020 19:03:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1knnhh-00025p-AZ; Fri, 11 Dec 2020 19:03:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+jmCiIAUL9jzWpjBtLDPE6VoukUmVQsxvJanalf+pkU=; b=BMQdoJjVGMUPD5zSEV68EKPW+I
	x1X3tsP/ACIdU+QxtZx2DCzoe7hmiHysfiG3ZZXrYWVTizAbwplWrNlrQ6z2WbyKdGNbO1kmH6U4J
	XqaSHhBwcIUcVj8YOwYPU/UqgxhoZqCaI1QznLjyqJl9THb1gjOSAeBt+ECU6HiRVJ6Y=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157431-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 157431: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=2bff021f53b211386abad8cd661e6bb38d0fd524
X-Osstest-Versions-That:
    linux=ec274ecd62f9e0404c935ff073346d243d5082e6
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Dec 2020 19:03:45 +0000

flight 157431 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157431/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157303
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157303
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157303
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157303
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157303
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157303
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157303
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157303
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157303
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157303
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157303
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                2bff021f53b211386abad8cd661e6bb38d0fd524
baseline version:
 linux                ec274ecd62f9e0404c935ff073346d243d5082e6

Last test of basis   157303  2020-12-08 10:10:11 Z    3 days
Testing same since   157431  2020-12-11 12:40:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andreas Gruenbacher <agruenba@redhat.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Bjørn Mork <bjorn@mork.no>
  Bob Peterson <rpeterso@redhat.com>
  Borislav Petkov <bp@suse.de>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christian Eggers <ceggers@arri.de>
  Dan Carpenter <dan.carpenter@oracle.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Eric Dumazet <edumazet@google.com>
  Florian Westphal <fw@strlen.de>
  Giacinto Cifelli <gciofono@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hans de Goede <hdegoede@redhat.com>
  Hoang Huu Le <hoang.h.le@dektech.com.au>
  Jack Pham <jackp@codeaurora.org>
  Jakub Kicinski <kuba@kernel.org>
  Jan-Niklas Burfeind <kernel@aiyionpri.me>
  Jann Horn <jannh@google.com>
  Jian-Hong Pan <jhp@endlessos.org>
  Johan Hovold <johan@kernel.org>
  Jon Hunter <jonathanh@nvidia.com>
  Jon Maloy <jmaloy@redhat.com>
  Jozsef Kadlecsik <kadlec@netfilter.org>
  Kailang Yang <kailang@realtek.com>
  Kalle Valo <kvalo@codeaurora.org>
  Kirill Tkhai <ktkhai@virtuozzo.com>
  Krzysztof Kozlowski <krzk@kernel.org>
  Laurent Vivier <lvivier@redhat.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Lukas Wunner <lukas@wunner.de>
  Luo Meng <luomeng12@huawei.com>
  Mahesh Salgaonkar <mahesh@linux.ibm.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Mark Brown <broonie@kernel.org>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masami Hiramatsu <mhiramat@kernel.org>
  Menglong Dong <dong.menglong@zte.com.cn>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael Ellerman <mpe@ellerman.id.au> (ppc32)
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Mike Snitzer <snitzer@redhat.com>
  Mikulas Patocka <mpatocka@redhat.com>
  Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
  Nicholas Piggin <npiggin@gmail.com>
  Oleksij Rempel <o.rempel@pengutronix.de>
  Oliver Hartkopp <socketcan@hartkopp.net>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paulo Alcantara (SUSE) <pc@cjr.nz>
  Paulo Alcantara <pc@cjr.nz>
  Peter Ujfalusi <peter.ujfalusi@ti.com>
  Qian Cai <qcai@redhat.com>
  Richard Fitzgerald <rf@opensource.cirrus.com>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sasha Levin <sashal@kernel.org>
  Sergei Shtepa <sergei.shtepa@veeam.com>
  Shisong Qin <qinshisong1205@gmail.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sudip Mukherjee <sudipm.mukherjee@gmail.com>
  Suganath Prabu S <suganath-prabu.subramani@broadcom.com>
  Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
  Takashi Iwai <tiwai@suse.de>
  Thomas Gleixner <tglx@linutronix.de>
  Vamsi Krishna Samavedam <vskrishn@codeaurora.org>
  Vincent Palatin <vpalatin@chromium.org>
  Will Deacon <will@kernel.org>
  Willy Tarreau <w@1wt.eu>
  Wolfram Sang <wsa@kernel.org>
  Yang Shi <shy828301@gmail.com>
  Zhihao Cheng <chengzhihao1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   ec274ecd62f9..2bff021f53b2  2bff021f53b211386abad8cd661e6bb38d0fd524 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 19:07:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 19:07:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50850.89654 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knnlA-000579-BQ; Fri, 11 Dec 2020 19:07:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50850.89654; Fri, 11 Dec 2020 19:07:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knnlA-000572-7f; Fri, 11 Dec 2020 19:07:20 +0000
Received: by outflank-mailman (input) for mailman id 50850;
 Fri, 11 Dec 2020 19:07:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZKjA=FP=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1knnl8-00056u-SV
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 19:07:18 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0fbf4fb7-2e9f-433c-863f-4af03b10cce3;
 Fri, 11 Dec 2020 19:07:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0fbf4fb7-2e9f-433c-863f-4af03b10cce3
Date: Fri, 11 Dec 2020 11:07:16 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607713637;
	bh=us27i/PgVGRjrUlAWmDNg4p1UJDImH59CMjHTYYyrDM=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=seJCm0m9UYFXbJQcvdwgPgKBL2KBYaG1G0tSZQBZJe9bQ4Jt66fQt2vuZ24Tczli3
	 pfXfLQggA52Rc7TxMzwzUOGym9YYduifmu6dm6XcjnsCmZTUo+DS/UKSZUINIlzexI
	 IgU2pbuOpVSVQxsg7hA9zV2kjbXPBhAbXeZJmFi4QQ/p53lR7mhvrLCjK6O1FzScc4
	 K8RMF66jI+OvKd0pGDVQx3w6ZuyiaUXUdMtS+YEgZmMCDq5KT1kbSKqveqpmRiGc4a
	 +KotSaE0uhuxMFpW4IN7BRdFTms37ZZXuGEJLnrvJO2o8frNeFBH/t/GlHGtQKF5xs
	 zG7qOjP9/t3Zw==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Oleksandr <olekstysh@gmail.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    xen-devel@lists.xenproject.org, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Julien Grall <julien.grall@arm.com>
Subject: Re: [PATCH V3 21/23] xen/arm: Add mapcache invalidation handling
In-Reply-To: <7c8a9ad9-2b18-7028-17bc-20ee5a341323@gmail.com>
Message-ID: <alpine.DEB.2.21.2012111105520.10222@sstabellini-ThinkPad-T480s>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com> <1606732298-22107-22-git-send-email-olekstysh@gmail.com> <alpine.DEB.2.21.2012091822300.20986@sstabellini-ThinkPad-T480s> <a6897469-f031-e49d-0b4c-b1aa10d66d6d@xen.org>
 <alpine.DEB.2.21.2012101443060.20986@sstabellini-ThinkPad-T480s> <7c8a9ad9-2b18-7028-17bc-20ee5a341323@gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 11 Dec 2020, Oleksandr wrote:
> On 11.12.20 03:28, Stefano Stabellini wrote:
> > On Thu, 10 Dec 2020, Julien Grall wrote:
> > > On 10/12/2020 02:30, Stefano Stabellini wrote:
> > > > On Mon, 30 Nov 2020, Oleksandr Tyshchenko wrote:
> > > > > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> > > > > 
> > > > > We need to send a mapcache invalidation request to qemu/demu every time
> > > > > a page gets removed from a guest.
> > > > > 
> > > > > At the moment, the Arm code doesn't explicitly remove the existing
> > > > > mapping before inserting the new mapping. Instead, this is done
> > > > > implicitly by __p2m_set_entry().
> > > > > 
> > > > > So we need to recognize the case when the old entry is a RAM page *and*
> > > > > the new MFN is different in order to set the corresponding flag.
> > > > > The most suitable place to do this is p2m_free_entry(), as there
> > > > > we can find the correct leaf type. The invalidation request
> > > > > will be sent in do_trap_hypercall() later on.
> > > > Why is it sent in do_trap_hypercall() ?
> > > I believe this is following the approach used by x86. There is actually
> > > some discussion about it (see [1]).
> > > 
> > > Leaving aside the toolstack case for now, AFAIK, the only way a guest can
> > > modify its p2m is via a hypercall. Do you have an example otherwise?
> > OK this is a very important assumption. We should write it down for sure.
> > I think it is true today on ARM.
> > 
> > 
> > > When sending the invalidation request, the vCPU will be blocked until all
> > > the IOREQ servers have acknowledged the invalidation. So the hypercall seems
> > > to be the best position to do it.
> > > 
> > > Alternatively, we could use check_for_vcpu_work() to check if the mapcache
> > > needs to be invalidated. The inconvenience is we would execute a few more
> > > instructions in each entry/exit path.
> > Yeah it would be more natural to call it from check_for_vcpu_work(). If
> > we put it inside #ifdef CONFIG_IOREQ_SERVER it wouldn't be bad. But I
> > am not a fan of adding instructions to the exit path either.
> > From this point of view, putting it at the end of do_trap_hypercall() is a
> > nice trick actually. Let's just make sure it has a good comment on top.
> > 
> > 
> > > > > Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> > > > > CC: Julien Grall <julien.grall@arm.com>
> > > > > 
> > > > > ---
> > > > > Please note, this is a split/cleanup/hardening of Julien's PoC:
> > > > > "Add support for Guest IO forwarding to a device emulator"
> > > > > 
> > > > > Changes V1 -> V2:
> > > > >      - new patch, some changes were derived from (+ new explanation):
> > > > >        xen/ioreq: Make x86's invalidate qemu mapcache handling common
> > > > >      - put setting of the flag into __p2m_set_entry()
> > > > >      - clarify the conditions when the flag should be set
> > > > >      - use domain_has_ioreq_server()
> > > > >      - update do_trap_hypercall() by adding local variable
> > > > > 
> > > > > Changes V2 -> V3:
> > > > >      - update patch description
> > > > >      - move check to p2m_free_entry()
> > > > >      - add a comment
> > > > >      - use "curr" instead of "v" in do_trap_hypercall()
> > > > > ---
> > > > > ---
> > > > >    xen/arch/arm/p2m.c   | 24 ++++++++++++++++--------
> > > > >    xen/arch/arm/traps.c | 13 ++++++++++---
> > > > >    2 files changed, 26 insertions(+), 11 deletions(-)
> > > > > 
> > > > > diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> > > > > index 5b8d494..9674f6f 100644
> > > > > --- a/xen/arch/arm/p2m.c
> > > > > +++ b/xen/arch/arm/p2m.c
> > > > > @@ -1,6 +1,7 @@
> > > > >    #include <xen/cpu.h>
> > > > >    #include <xen/domain_page.h>
> > > > >    #include <xen/iocap.h>
> > > > > +#include <xen/ioreq.h>
> > > > >    #include <xen/lib.h>
> > > > >    #include <xen/sched.h>
> > > > >    #include <xen/softirq.h>
> > > > > @@ -749,17 +750,24 @@ static void p2m_free_entry(struct p2m_domain *p2m,
> > > > >        if ( !p2m_is_valid(entry) )
> > > > >            return;
> > > > >    -    /* Nothing to do but updating the stats if the entry is a super-page. */
> > > > > -    if ( p2m_is_superpage(entry, level) )
> > > > > +    if ( p2m_is_superpage(entry, level) || (level == 3) )
> > > > >        {
> > > > > -        p2m->stats.mappings[level]--;
> > > > > -        return;
> > > > > -    }
> > > > > +#ifdef CONFIG_IOREQ_SERVER
> > > > > +        /*
> > > > > +         * If this gets called (non-recursively) then either the entry
> > > > > +         * was replaced by an entry with a different base (valid case) or
> > > > > +         * the shattering of a superpage failed (error case).
> > > > > +         * So, at worst, a spurious mapcache invalidation might be sent.
> > > > > +         */
> > > > > +        if ( domain_has_ioreq_server(p2m->domain) &&
> > > > > +             (p2m->domain == current->domain) && p2m_is_ram(entry.p2m.type) )
> > > > > +            p2m->domain->mapcache_invalidate = true;
> > > > Why the (p2m->domain == current->domain) check? Shouldn't we set
> > > > mapcache_invalidate to true anyway? What happens if p2m->domain !=
> > > > current->domain? We wouldn't want the domain to lose the
> > > > mapcache_invalidate notification.
> > > This is also discussed in [1]. :) The main question is why would a
> > > toolstack/device model modify the guest memory after boot?
> > > 
> > > If we assume it does, then the device model would need to pause the domain
> > > before modifying the RAM.
> > > 
> > > We also need to make sure that all the IOREQ servers have invalidated
> > > the mapcache before the domain run again.
> > > 
> > > This would require quite a bit of work. I am not sure the effort is worth it
> > > if there are no active users today.
> > OK, that explains why we think p2m->domain == current->domain, but why
> > do we need to have a check for it right here?
> > 
> > In other words, we don't think it is realistic to get here with
> > p2m->domain != current->domain, but let's say that we do somehow. What's
> > the best course of action? Probably, set mapcache_invalidate to true and
> > possibly print a warning?
> > 
> > Leaving mapcache_invalidate as false doesn't seem to be what we want to
> > do?
> > 
> >   
> > > > >        BUILD_BUG_ON(NR_hypercalls < ARRAY_SIZE(arm_hypercall_table) );
> > > > >    @@ -1459,7 +1460,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
> > > > >            return;
> > > > >        }
> > > > >    -    current->hcall_preempted = false;
> > > > > +    curr->hcall_preempted = false;
> > > > >          perfc_incra(hypercalls, *nr);
> > > > >        call = arm_hypercall_table[*nr].fn;
> > > > > @@ -1472,7 +1473,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
> > > > >        HYPERCALL_RESULT_REG(regs) = call(HYPERCALL_ARGS(regs));
> > > > >      #ifndef NDEBUG
> > > > > -    if ( !current->hcall_preempted )
> > > > > +    if ( !curr->hcall_preempted )
> > > > >        {
> > > > >            /* Deliberately corrupt parameter regs used by this hypercall. */
> > > > >            switch ( arm_hypercall_table[*nr].nr_args ) {
> > > > > @@ -1489,8 +1490,14 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
> > > > >    #endif
> > > > >          /* Ensure the hypercall trap instruction is re-executed. */
> > > > > -    if ( current->hcall_preempted )
> > > > > +    if ( curr->hcall_preempted )
> > > > >            regs->pc -= 4;  /* re-execute 'hvc #XEN_HYPERCALL_TAG' */
> > > > > +
> > > > > +#ifdef CONFIG_IOREQ_SERVER
> > > > > +    if ( unlikely(curr->domain->mapcache_invalidate) &&
> > > > > +         test_and_clear_bool(curr->domain->mapcache_invalidate) )
> > > > > +        ioreq_signal_mapcache_invalidate();
> > > > Why not just:
> > > > 
> > > > if ( unlikely(test_and_clear_bool(curr->domain->mapcache_invalidate)) )
> > > >       ioreq_signal_mapcache_invalidate();
> > > > 
> > > This seems to match the x86 code. My guess is they tried to avoid the cost
> > > of the atomic operation when there is no chance mapcache_invalidate is true.
> > > 
> > > I am split on whether the first check is worth it. The atomic operation
> > > should be uncontended most of the time, so it should be quick. But it will
> > > always be slower than just a read because there is always a store involved.
> > I am not a fan of optimizations with unclear benefits :-)
> > 
> > 
> > > On a related topic, Jan pointed out that the invalidation would not work
> > > properly if you have multiple vCPUs modifying the P2M at the same time.
> > > 
> Thanks to Julien, who explained all the bits in detail. Indeed, I followed how
> it was done on x86 (the place from which to send the invalidation request, and
> the code that checks whether the flag is set, which at first glance appears odd)
> and the review comments (to latch current into a local variable, and to make
> sure that a domain only sends the invalidation request on itself).
> Regarding what to do if p2m->domain != current->domain in p2m_free_entry():
> perhaps we could set the flag only if the guest is paused, and otherwise just
> print a warning. Thoughts?

I'd do something like:

if ( domain_has_ioreq_server(p2m->domain) && p2m_is_ram(entry.p2m.type) )
{
    WARN_ON(p2m->domain != current->domain);
    p2m->domain->mapcache_invalidate = true;
}

but maybe Julien has a better idea.



From xen-devel-bounces@lists.xenproject.org Fri Dec 11 19:27:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 19:27:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50859.89669 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kno4b-0007BC-6b; Fri, 11 Dec 2020 19:27:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50859.89669; Fri, 11 Dec 2020 19:27:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kno4b-0007B5-3K; Fri, 11 Dec 2020 19:27:25 +0000
Received: by outflank-mailman (input) for mailman id 50859;
 Fri, 11 Dec 2020 19:27:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kno4Z-0007B0-PP
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 19:27:23 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kno4X-0005wj-6i; Fri, 11 Dec 2020 19:27:21 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kno4W-0000TP-Pd; Fri, 11 Dec 2020 19:27:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=gVVjjCCI/pCicmPL4u6pBHMA3vdJbw4qNPi8EGaFJUU=; b=3GyQ5Ze5VMUMkjQOQaBJi5qGgE
	oqwXFk87VgujU6iKsD+8g7mwm8cy2yCacQ8uNfSTLPTRI9y9sBNH6yk50ORtMIWHBzDvtqTc63n42
	H01DYDNvRtYpeoj5i+UHJOSEc55BcmhOoeXhFG+jl8a/lnME5Gux7JiyB9iBpomUigy8=;
Subject: Re: [PATCH V3 21/23] xen/arm: Add mapcache invalidation handling
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Oleksandr Tyshchenko <olekstysh@gmail.com>,
 xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <julien.grall@arm.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-22-git-send-email-olekstysh@gmail.com>
 <alpine.DEB.2.21.2012091822300.20986@sstabellini-ThinkPad-T480s>
 <a6897469-f031-e49d-0b4c-b1aa10d66d6d@xen.org>
 <alpine.DEB.2.21.2012101443060.20986@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <388a7e92-1881-c68e-dd7c-8c9c119c1be4@xen.org>
Date: Fri, 11 Dec 2020 19:27:18 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2012101443060.20986@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 11/12/2020 01:28, Stefano Stabellini wrote:
>>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>> CC: Julien Grall <julien.grall@arm.com>
>>>>
>>>> ---
>>>> Please note, this is a split/cleanup/hardening of Julien's PoC:
>>>> "Add support for Guest IO forwarding to a device emulator"
>>>>
>>>> Changes V1 -> V2:
>>>>      - new patch, some changes were derived from (+ new explanation):
>>>>        xen/ioreq: Make x86's invalidate qemu mapcache handling common
>>>>      - put setting of the flag into __p2m_set_entry()
>>>>      - clarify the conditions when the flag should be set
>>>>      - use domain_has_ioreq_server()
>>>>      - update do_trap_hypercall() by adding local variable
>>>>
>>>> Changes V2 -> V3:
>>>>      - update patch description
>>>>      - move check to p2m_free_entry()
>>>>      - add a comment
>>>>      - use "curr" instead of "v" in do_trap_hypercall()
>>>> ---
>>>> ---
>>>>    xen/arch/arm/p2m.c   | 24 ++++++++++++++++--------
>>>>    xen/arch/arm/traps.c | 13 ++++++++++---
>>>>    2 files changed, 26 insertions(+), 11 deletions(-)
>>>>
>>>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>>> index 5b8d494..9674f6f 100644
>>>> --- a/xen/arch/arm/p2m.c
>>>> +++ b/xen/arch/arm/p2m.c
>>>> @@ -1,6 +1,7 @@
>>>>    #include <xen/cpu.h>
>>>>    #include <xen/domain_page.h>
>>>>    #include <xen/iocap.h>
>>>> +#include <xen/ioreq.h>
>>>>    #include <xen/lib.h>
>>>>    #include <xen/sched.h>
>>>>    #include <xen/softirq.h>
>>>> @@ -749,17 +750,24 @@ static void p2m_free_entry(struct p2m_domain *p2m,
>>>>        if ( !p2m_is_valid(entry) )
>>>>            return;
>>>>    -    /* Nothing to do but updating the stats if the entry is a super-page. */
>>>> -    if ( p2m_is_superpage(entry, level) )
>>>> +    if ( p2m_is_superpage(entry, level) || (level == 3) )
>>>>        {
>>>> -        p2m->stats.mappings[level]--;
>>>> -        return;
>>>> -    }
>>>> +#ifdef CONFIG_IOREQ_SERVER
>>>> +        /*
>>>> +         * If this gets called (non-recursively) then either the entry
>>>> +         * was replaced by an entry with a different base (valid case) or
>>>> +         * the shattering of a superpage failed (error case).
>>>> +         * So, at worst, a spurious mapcache invalidation might be sent.
>>>> +         */
>>>> +        if ( domain_has_ioreq_server(p2m->domain) &&

Hmmm... I didn't realize that you were going to call 
domain_has_ioreq_server() here. Per your comment, this can only be 
called when p2m->domain == current->domain.

One way would be to swap the two checks. However, I am not entirely 
sure this is necessary. I see no issue with always setting 
mapcache_invalidate even if there is no IOREQ server available.

>>>> +             (p2m->domain == current->domain) &&
>>>> p2m_is_ram(entry.p2m.type) )
>>>> +            p2m->domain->mapcache_invalidate = true;
>>>
>>> Why the (p2m->domain == current->domain) check? Shouldn't we set
>>> mapcache_invalidate to true anyway? What happens if p2m->domain !=
>>> current->domain? We wouldn't want the domain to lose the
>>> mapcache_invalidate notification.
>>
>> This is also discussed in [1]. :) The main question is why would a
>> toolstack/device model modify the guest memory after boot?
>>
>> If we assume it does, then the device model would need to pause the domain
>> before modifying the RAM.
>>
>> We also need to make sure that all the IOREQ servers have invalidated
>> the mapcache before the domain run again.
>>
>> This would require quite a bit of work. I am not sure the effort is worth if
>> there are no active users today.
> 
> OK, that explains why we think p2m->domain == current->domain, but why
> do we need to have a check for it right here?
> 
> In other words, we don't think it is realistic to get here with
> p2m->domain != current->domain, but let's say that we do somehow.

I am guessing by "here", you mean the situation where a RAM entry 
would be removed. Is that correct? If so, yes, I don't believe this 
should happen today (even at domain creation/destruction).

> What's
> the best course of action?

The best course of action would be to forward the invalidation to *all* 
the IOREQ servers and wait for it before the domain can run again.

> Probably, set mapcache_invalidate to true and
> possibly print a warning?

So if the toolstack (or an IOREQ server) ends up using it, then we need 
to make sure all the IOREQ servers have invalidated the mapcache before 
the domain can run again.

> 
> Leaving mapcache_invalidate to false doesn't seem to be what we want to
> do?

Setting it to true/false is not going to be very helpful because the 
guest may never issue a hypercall.

Without any more work, the guest may get corrupted. So I would suggest 
either preventing the P2M from being modified after the domain has been 
created and before it is destroyed (more of a stopgap) or fixing it properly.

>>>>        BUILD_BUG_ON(NR_hypercalls < ARRAY_SIZE(arm_hypercall_table) );
>>>>    @@ -1459,7 +1460,7 @@ static void do_trap_hypercall(struct cpu_user_regs
>>>> *regs, register_t *nr,
>>>>            return;
>>>>        }
>>>>    -    current->hcall_preempted = false;
>>>> +    curr->hcall_preempted = false;
>>>>          perfc_incra(hypercalls, *nr);
>>>>        call = arm_hypercall_table[*nr].fn;
>>>> @@ -1472,7 +1473,7 @@ static void do_trap_hypercall(struct cpu_user_regs
>>>> *regs, register_t *nr,
>>>>        HYPERCALL_RESULT_REG(regs) = call(HYPERCALL_ARGS(regs));
>>>>      #ifndef NDEBUG
>>>> -    if ( !current->hcall_preempted )
>>>> +    if ( !curr->hcall_preempted )
>>>>        {
>>>>            /* Deliberately corrupt parameter regs used by this hypercall.
>>>> */
>>>>            switch ( arm_hypercall_table[*nr].nr_args ) {
>>>> @@ -1489,8 +1490,14 @@ static void do_trap_hypercall(struct cpu_user_regs
>>>> *regs, register_t *nr,
>>>>    #endif
>>>>          /* Ensure the hypercall trap instruction is re-executed. */
>>>> -    if ( current->hcall_preempted )
>>>> +    if ( curr->hcall_preempted )
>>>>            regs->pc -= 4;  /* re-execute 'hvc #XEN_HYPERCALL_TAG' */
>>>> +
>>>> +#ifdef CONFIG_IOREQ_SERVER
>>>> +    if ( unlikely(curr->domain->mapcache_invalidate) &&
>>>> +         test_and_clear_bool(curr->domain->mapcache_invalidate) )
>>>> +        ioreq_signal_mapcache_invalidate();
>>>
>>> Why not just:
>>>
>>> if ( unlikely(test_and_clear_bool(curr->domain->mapcache_invalidate)) )
>>>       ioreq_signal_mapcache_invalidate();
>>>
>>
>> This seems to match the x86 code. My guess is they tried to prevent the cost
>> of the atomic operation if there is no chance mapcache_invalidate is true.
>>
>> I am torn on whether the first check is worth it. The atomic operation should be
>> uncontended most of the time, so it should be quick. But it will always be
>> slower than just a read because there is always a store involved.
> 
> I am not a fan of optimizations with unclear benefits :-)

I thought a bit more about it and I am actually leaning towards keeping 
the first check.

The common implementation of the hypercall path is mostly (if not all) 
accessing per-vCPU variables. So the hypercalls can mostly work 
independently (at least in the common part).

Assuming we drop the first check, we would now be writing to a 
per-domain variable on every hypercall. You would probably notice a 
performance impact if you benchmarked concurrent no-op hypercalls, 
because the cache line is going to bounce (because of the write).

Arguably, this may become noise if you execute a full hypercall. But I 
would still like to treat the common hypercall path like the entry/exit 
path. IOW, I would like to be careful about what we add there.

The main reason is that hypercalls may be used quite a lot by PV 
backends or device emulators (if we think about Virtio).

If we decided to move the change to the entry/exit path, then it would
definitely be an issue for the reasons I explained above. So I would 
also like to avoid the write to a shared variable if we can.

FAOD, I am not saying this optimization will save the world :). I am sure 
there will be more (in particular in the vGIC part) in order to get 
Virtio performance on par with PV backends on Xen.

This discussion would also be moot if ...

> 
>> On a related topic, Jan pointed out that the invalidation would not work
>> properly if you have multiple vCPU modifying the P2M at the same time.

... we had a per-vCPU flag instead.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 19:37:41 2020
Subject: Re: [PATCH V3 21/23] xen/arm: Add mapcache invalidation handling
To: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr <olekstysh@gmail.com>
Cc: xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <julien.grall@arm.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-22-git-send-email-olekstysh@gmail.com>
 <alpine.DEB.2.21.2012091822300.20986@sstabellini-ThinkPad-T480s>
 <a6897469-f031-e49d-0b4c-b1aa10d66d6d@xen.org>
 <alpine.DEB.2.21.2012101443060.20986@sstabellini-ThinkPad-T480s>
 <7c8a9ad9-2b18-7028-17bc-20ee5a341323@gmail.com>
 <alpine.DEB.2.21.2012111105520.10222@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <945a5444-dc96-5f47-854c-b42b1d17ce0b@xen.org>
Date: Fri, 11 Dec 2020 19:37:32 +0000



On 11/12/2020 19:07, Stefano Stabellini wrote:
> On Fri, 11 Dec 2020, Oleksandr wrote:
>> On 11.12.20 03:28, Stefano Stabellini wrote:
>>> On Thu, 10 Dec 2020, Julien Grall wrote:
>>>> On 10/12/2020 02:30, Stefano Stabellini wrote:
>>>>> On Mon, 30 Nov 2020, Oleksandr Tyshchenko wrote:
>>>>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>>>>
>>>>>> We need to send a mapcache invalidation request to qemu/demu every time
>>>>>> a page gets removed from a guest.
>>>>>>
>>>>>> At the moment, the Arm code doesn't explicitly remove the existing
>>>>>> mapping before inserting the new mapping. Instead, this is done
>>>>>> implicitly by __p2m_set_entry().
>>>>>>
>>>>>> So we need to recognize the case when the old entry is a RAM page *and*
>>>>>> the new MFN is different in order to set the corresponding flag.
>>>>>> The most suitable place to do this is p2m_free_entry(), there
>>>>>> we can find the correct leaf type. The invalidation request
>>>>>> will be sent in do_trap_hypercall() later on.
>>>>> Why is it sent in do_trap_hypercall() ?
>>>> I believe this is following the approach used by x86. There are actually
>>>> some
>>>> discussion about it (see [1]).
>>>>
>>>> Leaving aside the toolstack case for now, AFAIK, the only way a guest can
>>>> modify its p2m is via a hypercall. Do you have an example otherwise?
>>> OK this is a very important assumption. We should write it down for sure.
>>> I think it is true today on ARM.
>>>
>>>
>>>> When sending the invalidation request, the vCPU will be blocked until all
>>>> the
>>>> IOREQ server have acknowledged the invalidation. So the hypercall seems to
>>>> be
>>>> the best position to do it.
>>>>
>>>> Alternatively, we could use check_for_vcpu_work() to check if the mapcache
>>>> needs to be invalidated. The inconvenience is we would execute a few more
>>>> instructions in each entry/exit path.
>>> Yeah it would be more natural to call it from check_for_vcpu_work(). If
>>> we put it between #ifdef CONFIG_IOREQ_SERVER it wouldn't be bad. But I
>>> am not a fan of increasing the instructions on the exit path either.
>>>   From this point of view, putting it at the end of do_trap_hypercall is a
>>> nice trick actually. Let's just make sure it has a good comment on top.
>>>
>>>
>>>>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>>>> CC: Julien Grall <julien.grall@arm.com>
>>>>>>
>>>>>> ---
>>>>>> Please note, this is a split/cleanup/hardening of Julien's PoC:
>>>>>> "Add support for Guest IO forwarding to a device emulator"
>>>>>>
>>>>>> Changes V1 -> V2:
>>>>>>       - new patch, some changes were derived from (+ new explanation):
>>>>>>         xen/ioreq: Make x86's invalidate qemu mapcache handling common
>>>>>>       - put setting of the flag into __p2m_set_entry()
>>>>>>       - clarify the conditions when the flag should be set
>>>>>>       - use domain_has_ioreq_server()
>>>>>>       - update do_trap_hypercall() by adding local variable
>>>>>>
>>>>>> Changes V2 -> V3:
>>>>>>       - update patch description
>>>>>>       - move check to p2m_free_entry()
>>>>>>       - add a comment
>>>>>>       - use "curr" instead of "v" in do_trap_hypercall()
>>>>>> ---
>>>>>> ---
>>>>>>     xen/arch/arm/p2m.c   | 24 ++++++++++++++++--------
>>>>>>     xen/arch/arm/traps.c | 13 ++++++++++---
>>>>>>     2 files changed, 26 insertions(+), 11 deletions(-)
>>>>>>
>>>>>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>>>>> index 5b8d494..9674f6f 100644
>>>>>> --- a/xen/arch/arm/p2m.c
>>>>>> +++ b/xen/arch/arm/p2m.c
>>>>>> @@ -1,6 +1,7 @@
>>>>>>     #include <xen/cpu.h>
>>>>>>     #include <xen/domain_page.h>
>>>>>>     #include <xen/iocap.h>
>>>>>> +#include <xen/ioreq.h>
>>>>>>     #include <xen/lib.h>
>>>>>>     #include <xen/sched.h>
>>>>>>     #include <xen/softirq.h>
>>>>>> @@ -749,17 +750,24 @@ static void p2m_free_entry(struct p2m_domain
>>>>>> *p2m,
>>>>>>         if ( !p2m_is_valid(entry) )
>>>>>>             return;
>>>>>>     -    /* Nothing to do but updating the stats if the entry is a
>>>>>> super-page. */
>>>>>> -    if ( p2m_is_superpage(entry, level) )
>>>>>> +    if ( p2m_is_superpage(entry, level) || (level == 3) )
>>>>>>         {
>>>>>> -        p2m->stats.mappings[level]--;
>>>>>> -        return;
>>>>>> -    }
>>>>>> +#ifdef CONFIG_IOREQ_SERVER
>>>>>> +        /*
>>>>>> +         * If this gets called (non-recursively) then either the
>>>>>> entry
>>>>>> +         * was replaced by an entry with a different base (valid
>>>>>> case) or
>>>>>> +         * the shattering of a superpage was failed (error case).
>>>>>> +         * So, at worst, the spurious mapcache invalidation might be
>>>>>> sent.
>>>>>> +         */
>>>>>> +        if ( domain_has_ioreq_server(p2m->domain) &&
>>>>>> +             (p2m->domain == current->domain) &&
>>>>>> p2m_is_ram(entry.p2m.type) )
>>>>>> +            p2m->domain->mapcache_invalidate = true;
>>>>> Why the (p2m->domain == current->domain) check? Shouldn't we set
>>>>> mapcache_invalidate to true anyway? What happens if p2m->domain !=
>>>>> current->domain? We wouldn't want the domain to lose the
>>>>> mapcache_invalidate notification.
>>>> This is also discussed in [1]. :) The main question is why would a
>>>> toolstack/device model modify the guest memory after boot?
>>>>
>>>> If we assume it does, then the device model would need to pause the domain
>>>> before modifying the RAM.
>>>>
>>>> We also need to make sure that all the IOREQ servers have invalidated
>>>> the mapcache before the domain run again.
>>>>
>>>> This would require quite a bit of work. I am not sure the effort is worth
>>>> if
>>>> there are no active users today.
>>> OK, that explains why we think p2m->domain == current->domain, but why
>>> do we need to have a check for it right here?
>>>
>>> In other words, we don't think it is realistic to get here with
>>> p2m->domain != current->domain, but let's say that we do somehow. What's
>>> the best course of action? Probably, set mapcache_invalidate to true and
>>> possibly print a warning?
>>>
>>> Leaving mapcache_invalidate to false doesn't seem to be what we want to
>>> do?
>>>
>>>    
>>>>>>         BUILD_BUG_ON(NR_hypercalls < ARRAY_SIZE(arm_hypercall_table) );
>>>>>>     @@ -1459,7 +1460,7 @@ static void do_trap_hypercall(struct
>>>>>> cpu_user_regs
>>>>>> *regs, register_t *nr,
>>>>>>             return;
>>>>>>         }
>>>>>>     -    current->hcall_preempted = false;
>>>>>> +    curr->hcall_preempted = false;
>>>>>>           perfc_incra(hypercalls, *nr);
>>>>>>         call = arm_hypercall_table[*nr].fn;
>>>>>> @@ -1472,7 +1473,7 @@ static void do_trap_hypercall(struct
>>>>>> cpu_user_regs
>>>>>> *regs, register_t *nr,
>>>>>>         HYPERCALL_RESULT_REG(regs) = call(HYPERCALL_ARGS(regs));
>>>>>>       #ifndef NDEBUG
>>>>>> -    if ( !current->hcall_preempted )
>>>>>> +    if ( !curr->hcall_preempted )
>>>>>>         {
>>>>>>             /* Deliberately corrupt parameter regs used by this
>>>>>> hypercall.
>>>>>> */
>>>>>>             switch ( arm_hypercall_table[*nr].nr_args ) {
>>>>>> @@ -1489,8 +1490,14 @@ static void do_trap_hypercall(struct
>>>>>> cpu_user_regs
>>>>>> *regs, register_t *nr,
>>>>>>     #endif
>>>>>>           /* Ensure the hypercall trap instruction is re-executed. */
>>>>>> -    if ( current->hcall_preempted )
>>>>>> +    if ( curr->hcall_preempted )
>>>>>>             regs->pc -= 4;  /* re-execute 'hvc #XEN_HYPERCALL_TAG' */
>>>>>> +
>>>>>> +#ifdef CONFIG_IOREQ_SERVER
>>>>>> +    if ( unlikely(curr->domain->mapcache_invalidate) &&
>>>>>> +         test_and_clear_bool(curr->domain->mapcache_invalidate) )
>>>>>> +        ioreq_signal_mapcache_invalidate();
>>>>> Why not just:
>>>>>
>>>>> if ( unlikely(test_and_clear_bool(curr->domain->mapcache_invalidate)) )
>>>>>        ioreq_signal_mapcache_invalidate();
>>>>>
>>>> This seems to match the x86 code. My guess is they tried to prevent the
>>>> cost
>>>> of the atomic operation if there is no chance mapcache_invalidate is true.
>>>>
>>>> I am torn on whether the first check is worth it. The atomic operation
>>>> should be
>>>> uncontended most of the time, so it should be quick. But it will always be
>>>> slower than just a read because there is always a store involved.
>>> I am not a fan of optimizations with unclear benefits :-)
>>>
>>>
>>>> On a related topic, Jan pointed out that the invalidation would not work
>>>> properly if you have multiple vCPU modifying the P2M at the same time.
>>>>
>> Thanks to Julien, who explained all the bits in detail. Indeed, I followed how
>> it was done on x86 (the place where the invalidation request is sent, the code
>> that checks whether the flag is set, which at first glance appears odd, etc.)
>> and the review comments (to latch current into a local variable, and to make
>> sure that a domain sends the invalidation request on itself).
>> Regarding what to do if p2m->domain != current->domain in p2m_free_entry():
>> probably we could set the flag only if the guest is paused, and otherwise
>> just print a warning. Thoughts?
> 
> I'd do something like:
> 
> if ( domain_has_ioreq_server(p2m->domain) && p2m_is_ram(entry.p2m.type) )
> {
>      WARN_ON(p2m->domain != current->domain);

IOREQ servers are not trusted. Yet they will be able to reach this path 
if one reuses the stubdomain model (they are allowed to modify the guest 
layout).

So this change would hand a DoS attack to the IOREQ server on a silver 
platter :).

In general, we should avoid using WARN_ON() for things that can be 
triggered by a domain. Instead we should use gprintk(XENLOG_WARNING, 
"...") to allow rate-limiting.

On the downside, it would be more difficult to spot any misuse with a 
gprintk().

>      p2m->domain->mapcache_invalidate = true;
> }
> 
> but maybe Julien has a better idea.

I suggested a different approach and some rationale in my answer to your 
e-mail. Although I am not sure we could call it a better approach :). 
We can continue the discussion there.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 19:43:51 2020
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 7/8] xen/arm: Add support for SMMUv3 driver
Date: Fri, 11 Dec 2020 19:43:14 +0000
Message-ID: <4D66FAE7-CD0F-4005-9887-3194EA202C41@arm.com>
References: <cover.1607617848.git.rahul.singh@arm.com>
 <33645b592bc5935a3b28ad576a819d06ed81e8dd.1607617848.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2012101602530.6285@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2012101602530.6285@sstabellini-ThinkPad-T480s>

Hello Stefano,

Thanks for reviewing the code.

> On 11 Dec 2020, at 1:28 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Thu, 10 Dec 2020, Rahul Singh wrote:
>> Add support for ARM architected SMMUv3 implementation. It is based on
>> the Linux SMMUv3 driver.
>>
>> Driver is currently supported as Tech Preview.
>>
>> Major differences with regard to Linux driver are as follows:
>> 2. Only Stage-2 translation is supported as compared to the Linux driver
>>   that supports both Stage-1 and Stage-2 translations.
>> 3. Use P2M  page table instead of creating one as SMMUv3 has the
>>   capability to share the page tables with the CPU.
>> 4. Tasklets are used in place of threaded IRQ's in Linux for event queue
>>   and priority queue IRQ handling.
>> 5. Latest version of the Linux SMMUv3 code implements the commands queue
>>   access functions based on atomic operations implemented in Linux.
>>   Atomic functions used by the commands queue access functions are not
>>   implemented in XEN therefore we decided to port the earlier version
>>   of the code. Atomic operations are introduced to fix the bottleneck
>>   of the SMMU command queue insertion operation. A new algorithm for
>>   inserting commands into the queue is introduced, which is lock-free
>>   on the fast-path.
>>   Consequence of reverting the patch is that the command queue
>>   insertion will be slow for large systems, as a spinlock will be used
>>   to serialize accesses from all CPUs to the single queue supported by
>>   the hardware. Once the proper atomic operations are available in
>>   XEN the driver can be updated.
>> 6. Spin lock is used in place of mutex when attaching a device to the
>>   SMMU, as there is no blocking lock implementation available in XEN.
>>   This might introduce latency in XEN. Need to investigate before
>>   driver is out for tech preview.
>> 7. PCI ATS functionality is not supported, as there is no support
>>   available in XEN to test the functionality. Code is not tested and
>>   compiled. Code is guarded by the flag CONFIG_PCI_ATS.
>> 8. MSI interrupts are not supported as there is no support available in
>>   XEN to request MSI interrupts. Code is not tested and compiled. Code
>>   is guarded by the flag CONFIG_MSI.
>
> This is much better Rahul, great work!

Thanks.

>
>
>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>> ---
>> Changes in v3:
>> - added return statement for readx_poll_timeout function.
>> - remove iommu_get_dma_cookie and iommu_put_dma_cookie.
>> - remove struct arm_smmu_xen_device as not required.
>> - move dt_property_match_string to device_tree.c file.
>> - replace arm_smmu_*_thread to arm_smmu_*_tasklet to avoid confusion.
>> - use ARM_SMMU_REG_SZ as size when map memory to XEN.
>> - remove bypass keyword to make sure that when the device-tree probe
>>  fails we report an error and do not continue to configure the SMMU
>>  in bypass mode.
>> - fixed minor comments.
>>=20
>> ---
>> MAINTAINERS                           |   6 +
>> SUPPORT.md                            |   1 +
>> xen/drivers/passthrough/Kconfig       |  11 +
>> xen/drivers/passthrough/arm/Makefile  |   1 +
>> xen/drivers/passthrough/arm/smmu-v3.c | 777 ++++++++++++++++++++++----
>> 5 files changed, 683 insertions(+), 113 deletions(-)
>>=20
>> diff --git a/MAINTAINERS b/MAINTAINERS
>> index dab38a6a14..1d63489eec 100644
>> --- a/MAINTAINERS
>> +++ b/MAINTAINERS
>> @@ -249,6 +249,12 @@ F:	xen/include/asm-arm/
>> F:	xen/include/public/arch-arm/
>> F:	xen/include/public/arch-arm.h
>>=20
>> +ARM SMMUv3
>> +M:	Bertrand Marquis <bertrand.marquis@arm.com>
>> +M:	Rahul Singh <rahul.singh@arm.com>
>> +S:	Supported
>> +F:	xen/drivers/passthrough/arm/smmu-v3.c
>> +
>> Change Log
>> M:	Paul Durrant <paul@xen.org>
>> R:	Community Manager <community.manager@xenproject.org>
>> diff --git a/SUPPORT.md b/SUPPORT.md
>> index ab02aca5f4..5ee3c8651a 100644
>> --- a/SUPPORT.md
>> +++ b/SUPPORT.md
>> @@ -67,6 +67,7 @@ For the Cortex A57 r0p0 - r1p1, see Errata 832075.
>>     Status, Intel VT-d: Supported
>>     Status, ARM SMMUv1: Supported, not security supported
>>     Status, ARM SMMUv2: Supported, not security supported
>> +    Status, ARM SMMUv3: Tech Preview
>>     Status, Renesas IPMMU-VMSA: Supported, not security supported
>>=20
>> ### ARM/GICv3 ITS
>> diff --git a/xen/drivers/passthrough/Kconfig b/xen/drivers/passthrough/Kconfig
>> index 0036007ec4..341ba92b30 100644
>> --- a/xen/drivers/passthrough/Kconfig
>> +++ b/xen/drivers/passthrough/Kconfig
>> @@ -13,6 +13,17 @@ config ARM_SMMU
>> 	  Say Y here if your SoC includes an IOMMU device implementing the
>> 	  ARM SMMU architecture.
>>=20
>> +config ARM_SMMU_V3
>> +	bool "ARM Ltd. System MMU Version 3 (SMMUv3) Support" if EXPERT
>> +	depends on ARM_64
>> +	---help---
>> +	 Support for implementations of the ARM System MMU architecture
>> +	 version 3. The driver is in an experimental stage and should not
>> +	 be used in production.
>> +
>> +	 Say Y here if your system includes an IOMMU device implementing
>> +	 the ARM SMMUv3 architecture.
>> +
>> config IPMMU_VMSA
>> 	bool "Renesas IPMMU-VMSA found in R-Car Gen3 SoCs"
>> 	depends on ARM_64
>> diff --git a/xen/drivers/passthrough/arm/Makefile b/xen/drivers/passthrough/arm/Makefile
>> index fcd918ea3e..c5fb3b58a5 100644
>> --- a/xen/drivers/passthrough/arm/Makefile
>> +++ b/xen/drivers/passthrough/arm/Makefile
>> @@ -1,3 +1,4 @@
>> obj-y += iommu.o iommu_helpers.o iommu_fwspec.o
>> obj-$(CONFIG_ARM_SMMU) += smmu.o
>> obj-$(CONFIG_IPMMU_VMSA) += ipmmu-vmsa.o
>> +obj-$(CONFIG_ARM_SMMU_V3) += smmu-v3.o
>> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
>> index 2966015e5d..65b3db94ad 100644
>> --- a/xen/drivers/passthrough/arm/smmu-v3.c
>> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
>> @@ -2,37 +2,268 @@
>> /*
>>  * IOMMU API for ARM architected SMMUv3 implementations.
>>  *
>> + * Based on Linux's SMMUv3 driver:
>> + *    drivers/iommu/arm-smmu-v3.c
>> + *    commit: ab435ce49bd1d02e33dfec24f76955dc1196970b
>> + * and Xen's SMMU driver:
>> + *    xen/drivers/passthrough/arm/smmu.c
>> + *
>> + * Major differences with regard to the Linux driver are as follows:
>> + *  1. Driver is currently supported as Tech Preview.
>> + *  2. Only Stage-2 translation is supported as compared to the Linux driver
>> + *     that supports both Stage-1 and Stage-2 translations.
>> + *  3. Use P2M page table instead of creating one as SMMUv3 has the
>> + *     capability to share the page tables with the CPU.
>> + *  4. Tasklets are used in place of threaded IRQs in Linux for event queue
>> + *     and priority queue IRQ handling.
>> + *  5. Latest version of the Linux SMMUv3 code implements the commands queue
>> + *     access functions based on atomic operations implemented in Linux.
>> + *     Atomic functions used by the commands queue access functions are not
>> + *     implemented in XEN, therefore we decided to port the earlier version
>> + *     of the code. Atomic operations are introduced to fix the bottleneck of
>> + *     the SMMU command queue insertion operation. A new algorithm for
>> + *     inserting commands into the queue is introduced, which is
>> + *     lock-free on the fast-path.
>> + *     Consequence of reverting the patch is that the command queue insertion
>> + *     will be slow for large systems, as a spinlock will be used to serialize
>> + *     accesses from all CPUs to the single queue supported by the hardware.
>> + *     Once the proper atomic operations are available in XEN the driver
>> + *     can be updated.
>> + *  6. Spin lock is used in place of mutex when attaching a device to the SMMU,
>> + *     as there is no blocking lock implementation available in XEN. This might
>> + *     introduce latency in XEN. Need to investigate before driver is out for
>> + *     Tech Preview.
>> + *  7. PCI ATS functionality is not supported, as there is no support available
>> + *     in XEN to test the functionality. Code is not tested and compiled. Code
>> + *     is guarded by the flag CONFIG_PCI_ATS.
>> + *  8. MSI interrupts are not supported as there is no support available
>> + *     in XEN to request MSI interrupts. Code is not tested and compiled. Code
>> + *     is guarded by the flag CONFIG_MSI.
>> + *
>> + * Following functionality should be supported before driver is out for tech
>> + * preview:
>> + *
>> + *  1. Investigate the timing analysis of using spin lock in place of mutex
>> + *     when attaching a device to SMMU.
>> + *  2. Merge the latest Linux SMMUv3 driver code once atomic operations are
>> + *     available in XEN.
>> + *  3. PCI ATS and MSI interrupts should be supported.
>> + *  4. Investigate side-effect of using tasklet in place of threaded IRQ and
>> + *     fix if any.
>> + *  5. fallthrough keyword should be supported.
>> + *  6. Implement the ffsll function in bitops.h file.
>> + *
>>  * Copyright (C) 2015 ARM Limited
>>  *
>>  * Author: Will Deacon <will.deacon@arm.com>
>>  *
>> - * This driver is powered by bad coffee and bombay mix.
>> + * Copyright (C) 2020 Arm Ltd
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>> + *
>>  */
>>=20
>> -#include <linux/acpi.h>
>> -#include <linux/acpi_iort.h>
>> -#include <linux/bitfield.h>
>> -#include <linux/bitops.h>
>> -#include <linux/crash_dump.h>
>> -#include <linux/delay.h>
>> -#include <linux/dma-iommu.h>
>> -#include <linux/err.h>
>> -#include <linux/interrupt.h>
>> -#include <linux/io-pgtable.h>
>> -#include <linux/iommu.h>
>> -#include <linux/iopoll.h>
>> -#include <linux/module.h>
>> -#include <linux/msi.h>
>> -#include <linux/of.h>
>> -#include <linux/of_address.h>
>> -#include <linux/of_iommu.h>
>> -#include <linux/of_platform.h>
>> -#include <linux/pci.h>
>> -#include <linux/pci-ats.h>
>> -#include <linux/platform_device.h>
>> -
>> -#include <linux/amba/bus.h>
>> +#include <xen/acpi.h>
>> +#include <xen/config.h>
>> +#include <xen/delay.h>
>> +#include <xen/errno.h>
>> +#include <xen/err.h>
>> +#include <xen/irq.h>
>> +#include <xen/lib.h>
>> +#include <xen/list.h>
>> +#include <xen/mm.h>
>> +#include <xen/rbtree.h>
>> +#include <xen/sched.h>
>> +#include <xen/sizes.h>
>> +#include <xen/vmap.h>
>> +#include <asm/atomic.h>
>> +#include <asm/device.h>
>> +#include <asm/io.h>
>> +#include <asm/iommu_fwspec.h>
>> +#include <asm/platform.h>
>> +
>> +/* Linux compatibility functions. */
>> +typedef paddr_t		dma_addr_t;
>> +typedef paddr_t		phys_addr_t;
>> +typedef unsigned int		gfp_t;
>> +
>> +#define platform_device		device
>> +
>> +#define GFP_KERNEL		0
>> +
>> +/* Alias to Xen device tree helpers */
>> +#define device_node			dt_device_node
>> +#define of_phandle_args		dt_phandle_args
>> +#define of_device_id		dt_device_match
>> +#define of_match_node		dt_match_node
>> +#define of_property_read_u32(np, pname, out)	\
>> +		(!dt_property_read_u32(np, pname, out))
>> +#define of_property_read_bool		dt_property_read_bool
>> +#define of_parse_phandle_with_args	dt_parse_phandle_with_args
>> +
>> +/* Alias to Xen time functions */
>> +#define ktime_t s_time_t
>> +#define ktime_get()			(NOW())
>> +#define ktime_add_us(t, i)		(t + MICROSECS(i))
>> +#define ktime_compare(t, i)		(t > (i))
>> +
>> +/* Alias to Xen allocation helpers */
>> +#define kzalloc(size, flags)	_xzalloc(size, sizeof(void *))
>> +#define kfree	xfree
>> +#define devm_kzalloc(dev, size, flags)	 _xzalloc(size, sizeof(void *))
>> +
>> +/* Device logger functions */
>> +#define dev_name(dev)	dt_node_full_name(dev->of_node)
>> +#define dev_dbg(dev, fmt, ...)			\
>> +	printk(XENLOG_DEBUG "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
>> +#define dev_notice(dev, fmt, ...)		\
>> +	printk(XENLOG_INFO "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
>> +#define dev_warn(dev, fmt, ...)			\
>> +	printk(XENLOG_WARNING "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
>> +#define dev_err(dev, fmt, ...)			\
>> +	printk(XENLOG_ERR "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
>> +#define dev_info(dev, fmt, ...)			\
>> +	printk(XENLOG_INFO "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
>> +#define dev_err_ratelimited(dev, fmt, ...)			\
>> +	printk(XENLOG_ERR "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
>> +
>> +/*
>> + * Periodically poll an address and wait between reads in us until a
>> + * condition is met or a timeout occurs.
>> + *
>> + * @return: 0 when cond met, -ETIMEDOUT upon timeout
>> + */
>> +#define readx_poll_timeout(op, addr, val, cond, sleep_us, timeout_us)	\
>> +({		\
>> +	s_time_t deadline = NOW() + MICROSECS(timeout_us);		\
>> +	for (;;) {		\
>> +		(val) = op(addr);		\
>> +		if (cond)		\
>> +			break;		\
>> +		if (NOW() > deadline) {		\
>> +			(val) = op(addr);		\
>> +			break;		\
>> +		}		\
>> +		udelay(sleep_us);		\
>> +	}		\
>> +	(cond) ? 0 : -ETIMEDOUT;		\
>> +})
>
> NIT: alignment of the '\'

Ack.
>
>
>> +#define readl_relaxed_poll_timeout(addr, val, cond, delay_us, timeout_us)	\
>> +	readx_poll_timeout(readl_relaxed, addr, val, cond, delay_us, timeout_us)
>> +
>> +#define FIELD_PREP(_mask, _val)			\
>> +	(((typeof(_mask))(_val) << (__builtin_ffsll(_mask) - 1)) & (_mask))
>> +
>> +#define FIELD_GET(_mask, _reg)			\
>> +	(typeof(_mask))(((_reg) & (_mask)) >> (__builtin_ffsll(_mask) - 1))
>
> let's add ffsll to bitops.h

Ok. I will implement ffsll and add it to include/asm-arm/bitops.h.
>
>
>> +/*
>> + * Helpers for DMA allocation. Just the function name is reused for
>> + * porting code, these allocation are not managed allocations
>> + */
>> +static void *dmam_alloc_coherent(struct device *dev, size_t size,
>> +				paddr_t *dma_handle, gfp_t gfp)
>> +{
>> +	void *vaddr;
>> +	unsigned long alignment = size;
>> +
>> +	/*
>> +	 * _xzalloc requires that (align & (align - 1)) == 0. Most of the
>> +	 * allocations in SMMU code should send the right value for size. In
>> +	 * case this is not true print a warning and align to the size of a
>> +	 * (void *)
>> +	 */
>> +	if (size & (size - 1)) {
>> +		printk(XENLOG_WARNING "SMMUv3: Fixing alignment for the DMA buffer\n");
>> +		alignment = sizeof(void *);
>> +	}
>> +
>> +	vaddr = _xzalloc(size, alignment);
>> +	if (!vaddr) {
>> +		printk(XENLOG_ERR "SMMUv3: DMA allocation failed\n");
>> +		return NULL;
>> +	}
>> +
>> +	*dma_handle = virt_to_maddr(vaddr);
>> +
>> +	return vaddr;
>> +}
>> +
>> +
>> +/* Xen specific code. */
>> +struct iommu_domain {
>> +	/* Runtime SMMU configuration for this iommu_domain */
>> +	atomic_t		ref;
>> +	/*
>> +	 * Used to link iommu_domain contexts for the same domain.
>> +	 * There is at least one per SMMU used by the domain.
>> +	 */
>> +	struct list_head		list;
>> +};
>>=20
>> +/* Describes information required for a Xen domain */
>> +struct arm_smmu_xen_domain {
>> +	spinlock_t		lock;
>> +
>> +	/* List of iommu domains associated to this domain */
>> +	struct list_head	contexts;
>> +};
>> +
>> +
>> +/* Keep a list of devices associated with this driver */
>> +static DEFINE_SPINLOCK(arm_smmu_devices_lock);
>> +static LIST_HEAD(arm_smmu_devices);
>> +
>> +static inline void *dev_iommu_priv_get(struct device *dev)
>> +{
>> +	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
>> +
>> +	return fwspec && fwspec->iommu_priv ? fwspec->iommu_priv : NULL;
>> +}
>> +
>> +static inline void dev_iommu_priv_set(struct device *dev, void *priv)
>> +{
>> +	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
>> +
>> +	fwspec->iommu_priv = priv;
>> +}
>> +
>> +static int platform_get_irq_byname_optional(struct device *dev,
>> +				const char *name)
>> +{
>> +	int index, ret;
>> +	struct dt_device_node *np = dev_to_dt(dev);
>> +
>> +	if (unlikely(!name))
>> +		return -EINVAL;
>> +
>> +	index = dt_property_match_string(np, "interrupt-names", name);
>> +	if (index < 0) {
>> +		dev_info(dev, "IRQ %s not found\n", name);
>> +		return index;
>> +	}
>> +
>> +	ret = platform_get_irq(np, index);
>> +	if (ret < 0) {
>> +		dev_err(dev, "failed to get irq index %d\n", index);
>> +		return -ENODEV;
>> +	}
>> +
>> +	return ret;
>> +}
>> +
>> +/* Start of Linux SMMUv3 code */
>> /* MMIO registers */
>> #define ARM_SMMU_IDR0			0x0
>> #define IDR0_ST_LVL			GENMASK(28, 27)
>> @@ -402,6 +633,7 @@ enum pri_resp {
>> 	PRI_RESP_SUCC =3D 2,
>> };
>>=20
>> +#ifdef CONFIF_MSI
>
> CONFIG_MSI

Ack. I will fix all the typos for CONFIG_MSI.
>
>
>> enum arm_smmu_msi_index {
>> 	EVTQ_MSI_INDEX,
>> 	GERROR_MSI_INDEX,
>> @@ -426,6 +658,7 @@ static phys_addr_t arm_smmu_msi_cfg[ARM_SMMU_MAX_MSIS][3] = {
>> 		ARM_SMMU_PRIQ_IRQ_CFG2,
>> 	},
>> };
>> +#endif
>>=20
>> struct arm_smmu_cmdq_ent {
>> 	/* Common fields */
>> @@ -534,6 +767,7 @@ struct arm_smmu_s2_cfg {
>> 	u16				vmid;
>> 	u64				vttbr;
>> 	u64				vtcr;
>> +	struct domain		*domain;
>> };
>=20
> This looks like a strange struct to add a pointer back to *domain. Maybe
> struct arm_smmu_domain would be a better place for it?

Ok, yes, you are right. I will modify.
>
>
>> struct arm_smmu_strtab_cfg {
>> @@ -613,8 +847,13 @@ struct arm_smmu_device {
>> 		u64			padding;
>> 	};
>>=20
>> -	/* IOMMU core code handle */
>> -	struct iommu_device		iommu;
>> +	/* Need to keep a list of SMMU devices */
>> +	struct list_head		devices;
>> +
>> +	/* Tasklets for handling evts/faults and pci page request IRQs*/
>> +	struct tasklet		evtq_irq_tasklet;
>> +	struct tasklet		priq_irq_tasklet;
>> +	struct tasklet		combined_irq_tasklet;
>> };
>>=20
>> /* SMMU private data for each master */
>> @@ -638,7 +877,6 @@ enum arm_smmu_domain_stage {
>>=20
>> struct arm_smmu_domain {
>> 	struct arm_smmu_device		*smmu;
>> -	struct mutex			init_mutex; /* Protects smmu pointer */
>>=20
>> 	bool				non_strict;
>> 	atomic_t			nr_ats_masters;
>> @@ -987,6 +1225,7 @@ static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
>> 	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
>> }
>>=20
>> +#ifdef CONFIF_MSI
>
> CONFIG_MSI

Ack.
>
>
>> /*
>>  * The difference between val and sync_idx is bounded by the maximum size of
>>  * a queue at 2^20 entries, so 32 bits is plenty for wrap-safe arithmetic.
>> @@ -1030,6 +1269,13 @@ static int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
>>=20
>> 	return __arm_smmu_sync_poll_msi(smmu, ent.sync.msidata);
>> }
>> +#else
>> +static inline int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
>> +{
>> +	return 0;
>> +}
>> +#endif /* CONFIG_MSI */
>> +
>>=20
>> static int __arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
>> {
>> @@ -1072,7 +1318,7 @@ arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc *desc)
>> 	val |=3D desc->l2ptr_dma & STRTAB_L1_DESC_L2PTR_MASK;
>>=20
>> 	/* See comment in arm_smmu_write_ctx_desc() */
>> -	WRITE_ONCE(*dst, cpu_to_le64(val));
>> +	write_atomic(dst, cpu_to_le64(val));
>> }
>>=20
>> static void arm_smmu_sync_ste_for_sid(struct arm_smmu_device *smmu, u32 sid)
>> @@ -1187,7 +1433,7 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
>> 						 STRTAB_STE_1_EATS_TRANS));
>>=20
>> 	arm_smmu_sync_ste_for_sid(smmu, sid);
>> -	WRITE_ONCE(dst[0], cpu_to_le64(val));
>> +	write_atomic(&dst[0], cpu_to_le64(val));
>> 	arm_smmu_sync_ste_for_sid(smmu, sid);
>>=20
>> 	/* It's likely that we'll want to use the new STE soon */
>> @@ -1234,7 +1480,7 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
>> }
>>=20
>> /* IRQ and event handlers */
>> -static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
>> +static void arm_smmu_evtq_tasklet(void *dev)
>> {
>> 	int i;
>> 	struct arm_smmu_device *smmu =3D dev;
>> @@ -1264,7 +1510,6 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
>> 	/* Sync our overflow flag, as we believe we're up to speed */
>> 	llq->cons =3D Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
>> 		    Q_IDX(llq, llq->cons);
>> -	return IRQ_HANDLED;
>> }
>>=20
>> static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
>> @@ -1305,7 +1550,7 @@ static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
>> 	}
>> }
>>=20
>> -static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
>> +static void arm_smmu_priq_tasklet(void *dev)
>> {
>> 	struct arm_smmu_device *smmu =3D dev;
>> 	struct arm_smmu_queue *q =3D &smmu->priq.q;
>> @@ -1324,12 +1569,12 @@ static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
>> 	llq->cons =3D Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
>> 		      Q_IDX(llq, llq->cons);
>> 	queue_sync_cons_out(q);
>> -	return IRQ_HANDLED;
>> }
>>=20
>> static int arm_smmu_device_disable(struct arm_smmu_device *smmu);
>>=20
>> -static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
>> +static void arm_smmu_gerror_handler(int irq, void *dev,
>> +				struct cpu_user_regs *regs)
>> {
>> 	u32 gerror, gerrorn, active;
>> 	struct arm_smmu_device *smmu =3D dev;
>> @@ -1339,7 +1584,7 @@ static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
>>=20
>> 	active =3D gerror ^ gerrorn;
>> 	if (!(active & GERROR_ERR_MASK))
>> -		return IRQ_NONE; /* No errors pending */
>> +		return; /* No errors pending */
>>=20
>> 	dev_warn(smmu->dev,
>> 		 "unexpected global error reported (0x%08x), this could be serious\n",
>> @@ -1372,26 +1617,44 @@ static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
>> 		arm_smmu_cmdq_skip_err(smmu);
>>=20
>> 	writel(gerror, smmu->base + ARM_SMMU_GERRORN);
>> -	return IRQ_HANDLED;
>> }
>>=20
>> -static irqreturn_t arm_smmu_combined_irq_thread(int irq, void *dev)
>> +static void arm_smmu_combined_irq_handler(int irq, void *dev,
>> +				struct cpu_user_regs *regs)
>> +{
>> +	struct arm_smmu_device *smmu =3D dev;
>> +
>> +	arm_smmu_gerror_handler(irq, dev, regs);
>> +
>> +	tasklet_schedule(&(smmu->combined_irq_tasklet));
>> +}
>> +
>> +static void arm_smmu_combined_irq_tasklet(void *dev)
>> {
>> 	struct arm_smmu_device *smmu =3D dev;
>>=20
>> -	arm_smmu_evtq_thread(irq, dev);
>> +	arm_smmu_evtq_tasklet(dev);
>> 	if (smmu->features & ARM_SMMU_FEAT_PRI)
>> -		arm_smmu_priq_thread(irq, dev);
>> +		arm_smmu_priq_tasklet(dev);
>> +}
>>=20
>> -	return IRQ_HANDLED;
>> +static void arm_smmu_evtq_irq_tasklet(int irq, void *dev,
>> +				struct cpu_user_regs *regs)
>> +{
>> +	struct arm_smmu_device *smmu =3D dev;
>> +
>> +	tasklet_schedule(&(smmu->evtq_irq_tasklet));
>> }
>>=20
>> -static irqreturn_t arm_smmu_combined_irq_handler(int irq, void *dev)
>> +static void arm_smmu_priq_irq_tasklet(int irq, void *dev,
>> +				struct cpu_user_regs *regs)
>> {
>> -	arm_smmu_gerror_handler(irq, dev);
>> -	return IRQ_WAKE_THREAD;
>> +	struct arm_smmu_device *smmu =3D dev;
>> +
>> +	tasklet_schedule(&(smmu->priq_irq_tasklet));
>> }
>>=20
>> +#ifdef CONFIG_PCI_ATS
>> static void
>> arm_smmu_atc_inv_to_cmd(int ssid, unsigned long iova, size_t size,
>> 			struct arm_smmu_cmdq_ent *cmd)
>> @@ -1498,6 +1761,7 @@ static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
>>=20
>> 	return ret ? -ETIMEDOUT : 0;
>> }
>> +#endif
>>=20
>> static void arm_smmu_tlb_inv_context(void *cookie)
>> {
>> @@ -1532,7 +1796,6 @@ static struct iommu_domain *arm_smmu_domain_alloc(void)
>> 	if (!smmu_domain)
>> 		return NULL;
>>=20
>> -	mutex_init(&smmu_domain->init_mutex);
>> 	INIT_LIST_HEAD(&smmu_domain->devices);
>> 	spin_lock_init(&smmu_domain->devices_lock);
>>=20
>> @@ -1578,6 +1841,17 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
>> 	struct arm_smmu_device *smmu =3D smmu_domain->smmu;
>> 	struct arm_smmu_s2_cfg *cfg =3D &smmu_domain->s2_cfg;
>> 	typeof(&arm_lpae_s2_cfg.vtcr) vtcr =3D &arm_lpae_s2_cfg.vtcr;
>> +	uint64_t reg =3D READ_SYSREG64(VTCR_EL2);
>> +
>> +	vtcr->tsz	=3D FIELD_GET(STRTAB_STE_2_VTCR_S2T0SZ, reg);
>> +	vtcr->sl	=3D FIELD_GET(STRTAB_STE_2_VTCR_S2SL0, reg);
>> +	vtcr->irgn	=3D FIELD_GET(STRTAB_STE_2_VTCR_S2IR0, reg);
>> +	vtcr->orgn	=3D FIELD_GET(STRTAB_STE_2_VTCR_S2OR0, reg);
>> +	vtcr->sh	=3D FIELD_GET(STRTAB_STE_2_VTCR_S2SH0, reg);
>> +	vtcr->tg	=3D FIELD_GET(STRTAB_STE_2_VTCR_S2TG, reg);
>> +	vtcr->ps	=3D FIELD_GET(STRTAB_STE_2_VTCR_S2PS, reg);
>> +
>> +	arm_lpae_s2_cfg.vttbr  =3D page_to_maddr(cfg->domain->arch.p2m.root);
>>=20
>> 	vmid =3D arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
>> 	if (vmid < 0)
>> @@ -1592,6 +1866,11 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
>> 			  FIELD_PREP(STRTAB_STE_2_VTCR_S2SH0, vtcr->sh) |
>> 			  FIELD_PREP(STRTAB_STE_2_VTCR_S2TG, vtcr->tg) |
>> 			  FIELD_PREP(STRTAB_STE_2_VTCR_S2PS, vtcr->ps);
>> +
>> +	printk(XENLOG_DEBUG
>> +		   "SMMUv3: d%u: vmid 0x%x vtcr 0x%"PRIpaddr" p2maddr 0x%"PRIpaddr"\n",
>> +		   cfg->domain->domain_id, cfg->vmid, cfg->vtcr, cfg->vttbr);
>> +
>> 	return 0;
>> }
>>=20
>> @@ -1653,6 +1932,7 @@ static void arm_smmu_install_ste_for_dev(struct arm_smmu_master *master)
>> 	}
>> }
>>=20
>> +#ifdef CONFIG_PCI_ATS
>> static bool arm_smmu_ats_supported(struct arm_smmu_master *master)
>> {
>> 	struct device *dev =3D master->dev;
>> @@ -1751,6 +2031,23 @@ static void arm_smmu_disable_pasid(struct arm_smmu_master *master)
>>=20
>> 	pci_disable_pasid(pdev);
>> }
>> +#else
>> +static inline bool arm_smmu_ats_supported(struct arm_smmu_master *master)
>> +{
>> +	return false;
>> +}
>> +
>> +static inline void arm_smmu_enable_ats(struct arm_smmu_master *master) { }
>> +
>> +static inline void arm_smmu_disable_ats(struct arm_smmu_master *master) { }
>> +
>> +static inline int arm_smmu_enable_pasid(struct arm_smmu_master *master)
>> +{
>> +	return 0;
>> +}
>> +
>> +static inline void arm_smmu_disable_pasid(struct arm_smmu_master *master) { }
>> +#endif
>>=20
>> static void arm_smmu_detach_dev(struct arm_smmu_master *master)
>> {
>> @@ -1788,8 +2085,6 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
>>=20
>> 	arm_smmu_detach_dev(master);
>>=20
>> -	mutex_lock(&smmu_domain->init_mutex);
>> -
>> 	if (!smmu_domain->smmu) {
>> 		smmu_domain->smmu =3D smmu;
>> 		ret =3D arm_smmu_domain_finalise(domain, master);
>> @@ -1820,7 +2115,6 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
>> 	arm_smmu_enable_ats(master);
>>=20
>> out_unlock:
>> -	mutex_unlock(&smmu_domain->init_mutex);
>> 	return ret;
>> }
>>=20
>> @@ -1833,8 +2127,10 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
>>=20
>> 	return sid < limit;
>> }
>> +/* Forward declaration */
>> +static struct arm_smmu_device *arm_smmu_get_by_dev(struct device *dev);
>>=20
>> -static struct iommu_device *arm_smmu_probe_device(struct device *dev)
>> +static int arm_smmu_add_device(u8 devfn, struct device *dev)
>> {
>> 	int i, ret;
>> 	struct arm_smmu_device *smmu;
>> @@ -1842,14 +2138,15 @@ static struct iommu_device *arm_smmu_probe_device(struct device *dev)
>> 	struct iommu_fwspec *fwspec =3D dev_iommu_fwspec_get(dev);
>>=20
>> 	if (!fwspec)
>> -		return ERR_PTR(-ENODEV);
>> +		return -ENODEV;
>>=20
>> -	if (WARN_ON_ONCE(dev_iommu_priv_get(dev)))
>> -		return ERR_PTR(-EBUSY);
>> +	smmu =3D arm_smmu_get_by_dev(fwspec->iommu_dev);
>> +	if (!smmu)
>> +		return -ENODEV;
>>=20
>> 	master =3D kzalloc(sizeof(*master), GFP_KERNEL);
>> 	if (!master)
>> -		return ERR_PTR(-ENOMEM);
>> +		return -ENOMEM;
>>=20
>> 	master->dev =3D dev;
>> 	master->smmu =3D smmu;
>> @@ -1884,17 +2181,36 @@ static struct iommu_device *arm_smmu_probe_device(struct device *dev)
>> 	 */
>> 	arm_smmu_enable_pasid(master);
>>=20
>> -	return &smmu->iommu;
>> +	return 0;
>>=20
>> err_free_master:
>> 	kfree(master);
>> 	dev_iommu_priv_set(dev, NULL);
>> -	return ERR_PTR(ret);
>> +	return ret;
>> }
>>=20
>> -static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args)
>> +static int arm_smmu_dt_xlate(struct device *dev,
>> +				const struct dt_phandle_args *args)
>> {
>> -	return iommu_fwspec_add_ids(dev, args->args, 1);
>> +	int ret;
>> +	struct iommu_fwspec *fwspec =3D dev_iommu_fwspec_get(dev);
>> +
>> +	ret =3D iommu_fwspec_add_ids(dev, args->args, 1);
>> +	if (ret)
>> +		return ret;
>> +
>> +	if (dt_device_is_protected(dev_to_dt(dev))) {
>> +		dev_err(dev, "Already added to SMMUv3\n");
>> +		return -EEXIST;
>> +	}
>> +
>> +	/* Let Xen know that the master device is protected by an IOMMU. */
>> +	dt_device_set_protected(dev_to_dt(dev));
>> +
>> +	dev_info(dev, "Added master device (SMMUv3 %s StreamIds %u)\n",
>> +			dev_name(fwspec->iommu_dev), fwspec->num_ids);
>> +
>> +	return 0;
>> }
>>=20
>> /* Probing and initialisation functions */
>> @@ -1923,8 +2239,8 @@ static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
>> 		return -ENOMEM;
>> 	}
>>=20
>> -	if (!WARN_ON(q->base_dma & (qsz - 1))) {
>> -		dev_info(smmu->dev, "allocated %u entries for %s\n",
>> +	if (unlikely(q->base_dma & (qsz - 1))) {
>> +		dev_warn(smmu->dev, "allocated %u entries for %s\n",
>> 			 1 << q->llq.max_n_shift, name);
>> 	}
>>=20
>> @@ -2121,6 +2437,7 @@ static int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr)
>> 	return ret;
>> }
>>=20
>> +#ifdef CONFIF_MSI
>
> CONFIG_MSI

Ack.
>
>
>> static void arm_smmu_free_msis(void *data)
>> {
>> 	struct device *dev =3D data;
>> @@ -2191,6 +2508,9 @@ static void arm_smmu_setup_msis(struct arm_smmu_device *smmu)
>> 	/* Add callback to free MSIs on teardown */
>> 	devm_add_action(dev, arm_smmu_free_msis, dev);
>> }
>> +#else
>> +static inline void arm_smmu_setup_msis(struct arm_smmu_device *smmu) { }
>> +#endif /* CONFIG_MSI */
>>=20
>> static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
>> {
>> @@ -2201,9 +2521,7 @@ static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
>> 	/* Request interrupt lines */
>> 	irq =3D smmu->evtq.q.irq;
>> 	if (irq) {
>> -		ret =3D devm_request_threaded_irq(smmu->dev, irq, NULL,
>> -						arm_smmu_evtq_thread,
>> -						IRQF_ONESHOT,
>> +		ret =3D request_irq(irq, 0, arm_smmu_evtq_irq_tasklet,
>> 						"arm-smmu-v3-evtq", smmu);
>> 		if (ret < 0)
>> 			dev_warn(smmu->dev, "failed to enable evtq irq\n");
>> @@ -2213,8 +2531,8 @@ static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
>>=20
>> 	irq = smmu->gerr_irq;
>> 	if (irq) {
>> -		ret = devm_request_irq(smmu->dev, irq, arm_smmu_gerror_handler,
>> -				       0, "arm-smmu-v3-gerror", smmu);
>> +		ret = request_irq(irq, 0, arm_smmu_gerror_handler,
>> +						"arm-smmu-v3-gerror", smmu);
>> 		if (ret < 0)
>> 			dev_warn(smmu->dev, "failed to enable gerror irq\n");
>> 	} else {
>> @@ -2224,11 +2542,8 @@ static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
>> 	if (smmu->features & ARM_SMMU_FEAT_PRI) {
>> 		irq = smmu->priq.q.irq;
>> 		if (irq) {
>> -			ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
>> -							arm_smmu_priq_thread,
>> -							IRQF_ONESHOT,
>> -							"arm-smmu-v3-priq",
>> -							smmu);
>> +			ret = request_irq(irq, 0, arm_smmu_priq_irq_tasklet,
>> +							"arm-smmu-v3-priq", smmu);
>> 			if (ret < 0)
>> 				dev_warn(smmu->dev,
>> 					 "failed to enable priq irq\n");
>> @@ -2257,11 +2572,8 @@ static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
>> 		 * Cavium ThunderX2 implementation doesn't support unique irq
>> 		 * lines. Use a single irq line for all the SMMUv3 interrupts.
>> 		 */
>> -		ret = devm_request_threaded_irq(smmu->dev, irq,
>> -					arm_smmu_combined_irq_handler,
>> -					arm_smmu_combined_irq_thread,
>> -					IRQF_ONESHOT,
>> -					"arm-smmu-v3-combined-irq", smmu);
>> +		ret = request_irq(irq, 0, arm_smmu_combined_irq_handler,
>> +						"arm-smmu-v3-combined-irq", smmu);
>> 		if (ret < 0)
>> 			dev_warn(smmu->dev, "failed to enable combined irq\n");
>> 	} else
>> @@ -2290,7 +2602,7 @@ static int arm_smmu_device_disable(struct arm_smmu_device *smmu)
>> 	return ret;
>> }
>>
>> -static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
>> +static int arm_smmu_device_reset(struct arm_smmu_device *smmu)
>> {
>> 	int ret;
>> 	u32 reg, enables;
>> @@ -2300,7 +2612,7 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
>> 	reg = readl_relaxed(smmu->base + ARM_SMMU_CR0);
>> 	if (reg & CR0_SMMUEN) {
>> 		dev_warn(smmu->dev, "SMMU currently enabled! Resetting...\n");
>> -		WARN_ON(is_kdump_kernel() && !disable_bypass);
>> +		WARN_ON(!disable_bypass);
>> 		arm_smmu_update_gbpa(smmu, GBPA_ABORT, 0);
>> 	}
>>
>> @@ -2404,11 +2716,14 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
>> 		return ret;
>> 	}
>>
>> -	if (is_kdump_kernel())
>> -		enables &= ~(CR0_EVTQEN | CR0_PRIQEN);
>> +	/* Initialize tasklets for threaded IRQs */
>> +	tasklet_init(&smmu->evtq_irq_tasklet, arm_smmu_evtq_tasklet, smmu);
>> +	tasklet_init(&smmu->priq_irq_tasklet, arm_smmu_priq_tasklet, smmu);
>> +	tasklet_init(&smmu->combined_irq_tasklet, arm_smmu_combined_irq_tasklet,
>> +				 smmu);
>>
>> 	/* Enable the SMMU interface, or ensure bypass */
>> -	if (!bypass || disable_bypass) {
>> +	if (disable_bypass) {
>> 		enables |= CR0_SMMUEN;
>> 	} else {
>> 		ret = arm_smmu_update_gbpa(smmu, 0, GBPA_ABORT);
>> @@ -2473,8 +2788,10 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>> 	if (reg & IDR0_SEV)
>> 		smmu->features |= ARM_SMMU_FEAT_SEV;
>>
>> +#ifdef CONFIF_MSI
>
> CONFIG_MSI

Ack.

>
>
>> 	if (reg & IDR0_MSI)
>> 		smmu->features |= ARM_SMMU_FEAT_MSI;
>> +#endif
>>
>> 	if (reg & IDR0_HYP)
>> 		smmu->features |= ARM_SMMU_FEAT_HYP;
>> @@ -2499,7 +2816,7 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
>> 		smmu->features |= ARM_SMMU_FEAT_TRANS_S2;
>>
>> 	if (!(reg & IDR0_S2P)) {
>> -		dev_err(smmu->dev, "no translation support!\n");
>> +		dev_err(smmu->dev, "no stage-2 translation support!\n");
>> 		return -ENXIO;
>> 	}
>>
>> @@ -2648,7 +2965,7 @@ static inline int arm_smmu_device_acpi_probe(struct platform_device *pdev,
>> static int arm_smmu_device_dt_probe(struct platform_device *pdev,
>> 				    struct arm_smmu_device *smmu)
>> {
>> -	struct device *dev = &pdev->dev;
>> +	struct device *dev = pdev;
>> 	u32 cells;
>> 	int ret = -EINVAL;
>>
>> @@ -2661,7 +2978,7 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev,
>>
>> 	parse_driver_options(smmu);
>>
>> -	if (of_dma_is_coherent(dev->of_node))
>> +	if (dt_get_property(dev->of_node, "dma-coherent", NULL))
>> 		smmu->features |= ARM_SMMU_FEAT_COHERENCY;
>>
>> 	return ret;
>> @@ -2675,63 +2992,49 @@ static unsigned long arm_smmu_resource_size(struct arm_smmu_device *smmu)
>> 		return SZ_128K;
>> }
>>
>> -static void __iomem *arm_smmu_ioremap(struct device *dev, resource_size_t start,
>> -				      resource_size_t size)
>> -{
>> -	struct resource res = {
>> -		.flags = IORESOURCE_MEM,
>> -		.start = start,
>> -		.end = start + size - 1,
>> -	};
>> -
>> -	return devm_ioremap_resource(dev, &res);
>> -}
>> -
>> static int arm_smmu_device_probe(struct platform_device *pdev)
>> {
>> 	int irq, ret;
>> -	struct resource *res;
>> -	resource_size_t ioaddr;
>> +	paddr_t ioaddr, iosize;
>> 	struct arm_smmu_device *smmu;
>> -	struct device *dev = &pdev->dev;
>> -	bool bypass;
>>
>> -	smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL);
>> +	smmu = devm_kzalloc(pdev, sizeof(*smmu), GFP_KERNEL);
>> 	if (!smmu) {
>> -		dev_err(dev, "failed to allocate arm_smmu_device\n");
>> +		dev_err(pdev, "failed to allocate arm_smmu_device\n");
>> 		return -ENOMEM;
>> 	}
>> -	smmu->dev = dev;
>> +	smmu->dev = pdev;
>>
>> -	if (dev->of_node) {
>> +	if (pdev->of_node) {
>> 		ret = arm_smmu_device_dt_probe(pdev, smmu);
>> +		if (ret)
>> +			return -EINVAL;
>> 	} else {
>> 		ret = arm_smmu_device_acpi_probe(pdev, smmu);
>> 		if (ret == -ENODEV)
>> 			return ret;
>> 	}
>>
>> -	/* Set bypass mode according to firmware probing result */
>> -	bypass = !!ret;
>> -
>> 	/* Base address */
>> -	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
>> -	if (resource_size(res) < arm_smmu_resource_size(smmu)) {
>> -		dev_err(dev, "MMIO region too small (%pr)\n", res);
>> +	ret = dt_device_get_address(dev_to_dt(pdev), 0, &ioaddr, &iosize);
>> +	if (ret)
>> +		return -ENODEV;
>> +
>> +	if (iosize < arm_smmu_resource_size(smmu)) {
>> +		dev_err(pdev, "MMIO region too small (%lx)\n", iosize);
>> 		return -EINVAL;
>> 	}
>> -	ioaddr = res->start;
>>
>> 	/*
>> 	 * Don't map the IMPLEMENTATION DEFINED regions, since they may contain
>> 	 * the PMCG registers which are reserved by the PMU driver.
>> 	 */
>
> Which PMU driver?

I wanted to remove this in this patch but somehow missed it. I will modify the comment in the next version.

>
>
>> -	smmu->base = arm_smmu_ioremap(dev, ioaddr, ARM_SMMU_REG_SZ);
>> +	smmu->base = ioremap_nocache(ioaddr, ARM_SMMU_REG_SZ);
>> 	if (IS_ERR(smmu->base))
>> 		return PTR_ERR(smmu->base);
>>
>> -	if (arm_smmu_resource_size(smmu) > SZ_64K) {
>> -		smmu->page1 = arm_smmu_ioremap(dev, ioaddr + SZ_64K,
>> +	if (iosize > SZ_64K) {
>> +		smmu->page1 = ioremap_nocache(ioaddr + SZ_64K,
>> 					       ARM_SMMU_REG_SZ);
>> 		if (IS_ERR(smmu->page1))
>> 			return PTR_ERR(smmu->page1);
>> @@ -2768,14 +3071,262 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
>> 		return ret;
>>
>> 	/* Reset the device */
>> -	ret = arm_smmu_device_reset(smmu, bypass);
>> +	ret = arm_smmu_device_reset(smmu);
>> 	if (ret)
>> 		return ret;
>>
>> +	/*
>> +	 * Keep a list of all probed devices. This will be used to query
>> +	 * the smmu devices based on the fwnode.
>> +	 */
>> +	INIT_LIST_HEAD(&smmu->devices);
>> +
>> +	spin_lock(&arm_smmu_devices_lock);
>> +	list_add(&smmu->devices, &arm_smmu_devices);
>> +	spin_unlock(&arm_smmu_devices_lock);
>> +
>> 	return 0;
>> }
>>
>> -static const struct of_device_id arm_smmu_of_match[] = {
>> +static const struct dt_device_match arm_smmu_of_match[] = {
>> 	{ .compatible = "arm,smmu-v3", },
>> 	{ },
>> };
>> +
>> +/* Start of Xen specific code. */
>> +static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
>> +{
>> +	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
>> +	struct iommu_domain *io_domain;
>> +
>> +	spin_lock(&xen_domain->lock);
>> +
>> +	list_for_each_entry(io_domain, &xen_domain->contexts, list) {
>> +		/*
>> +		 * Only invalidate the context when SMMU is present.
>> +		 * This is because the context initialization is delayed
>> +		 * until a master has been added.
>> +		 */
>> +		if (unlikely(!ACCESS_ONCE(to_smmu_domain(io_domain)->smmu)))
>> +			continue;
>> +
>> +		arm_smmu_tlb_inv_context(to_smmu_domain(io_domain));
>> +	}
>> +
>> +	spin_unlock(&xen_domain->lock);
>> +
>> +	return 0;
>> +}
>> +
>> +static int __must_check arm_smmu_iotlb_flush(struct domain *d, dfn_t dfn,
>> +				unsigned long page_count, unsigned int flush_flags)
>> +{
>> +	return arm_smmu_iotlb_flush_all(d);
>> +}
>> +
>> +static struct arm_smmu_device *arm_smmu_get_by_dev(struct device *dev)
>> +{
>> +	struct arm_smmu_device *smmu = NULL;
>> +
>> +	spin_lock(&arm_smmu_devices_lock);
>> +
>> +	list_for_each_entry(smmu, &arm_smmu_devices, devices) {
>> +		if (smmu->dev == dev) {
>> +			spin_unlock(&arm_smmu_devices_lock);
>> +			return smmu;
>> +		}
>> +	}
>> +
>> +	spin_unlock(&arm_smmu_devices_lock);
>> +
>> +	return NULL;
>> +}
>> +
>> +static struct iommu_domain *arm_smmu_get_domain(struct domain *d,
>> +				struct device *dev)
>> +{
>> +	struct iommu_domain *io_domain;
>> +	struct arm_smmu_domain *smmu_domain;
>> +	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
>> +	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
>> +	struct arm_smmu_device *smmu = arm_smmu_get_by_dev(fwspec->iommu_dev);
>> +
>> +	if (!smmu)
>> +		return NULL;
>> +
>> +	/*
>> +	 * Loop through the &xen_domain->contexts to locate a context
>> +	 * assigned to this SMMU
>> +	 */
>> +	list_for_each_entry(io_domain, &xen_domain->contexts, list) {
>> +		smmu_domain = to_smmu_domain(io_domain);
>> +		if (smmu_domain->smmu == smmu)
>> +			return io_domain;
>> +	}
>> +	return NULL;
>> +}
>> +
>> +static void arm_smmu_destroy_iommu_domain(struct iommu_domain *io_domain)
>> +{
>> +	list_del(&io_domain->list);
>> +	arm_smmu_domain_free(io_domain);
>> +}
>> +
>> +static int arm_smmu_assign_dev(struct domain *d, u8 devfn,
>> +		struct device *dev, u32 flag)
>> +{
>> +	int ret = 0;
>> +	struct iommu_domain *io_domain;
>> +	struct arm_smmu_domain *smmu_domain;
>> +	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
>> +
>> +	spin_lock(&xen_domain->lock);
>> +
>> +	/*
>> +	 * Check to see if an iommu_domain already exists for this xen domain
>> +	 * under the same SMMU
>> +	 */
>> +	io_domain = arm_smmu_get_domain(d, dev);
>> +	if (!io_domain) {
>> +		io_domain = arm_smmu_domain_alloc();
>> +		if (!io_domain) {
>> +			ret = -ENOMEM;
>> +			goto out;
>> +		}
>> +		smmu_domain = to_smmu_domain(io_domain);
>> +		smmu_domain->s2_cfg.domain = d;
>> +
>> +		/* Chain the new context to the domain */
>> +		list_add(&io_domain->list, &xen_domain->contexts);
>> +	}
>> +
>> +	ret = arm_smmu_attach_dev(io_domain, dev);
>> +	if (ret) {
>> +		if (io_domain->ref.counter == 0)
>> +			arm_smmu_destroy_iommu_domain(io_domain);
>> +	} else {
>> +		atomic_inc(&io_domain->ref);
>> +	}
>> +
>> +out:
>> +	spin_unlock(&xen_domain->lock);
>> +	return ret;
>> +}
>> +
>> +static int arm_smmu_deassign_dev(struct domain *d, struct device *dev)
>> +{
>> +	struct iommu_domain *io_domain = arm_smmu_get_domain(d, dev);
>> +	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
>> +	struct arm_smmu_domain *arm_smmu = to_smmu_domain(io_domain);
>> +	struct arm_smmu_master *master = dev_iommu_priv_get(dev);
>> +
>> +	if (!arm_smmu || arm_smmu->s2_cfg.domain != d) {
>> +		dev_err(dev, " not attached to domain %d\n", d->domain_id);
>> +		return -ESRCH;
>> +	}
>> +
>> +	spin_lock(&xen_domain->lock);
>> +
>> +	arm_smmu_detach_dev(master);
>> +	atomic_dec(&io_domain->ref);
>> +
>> +	if (io_domain->ref.counter == 0)
>> +		arm_smmu_destroy_iommu_domain(io_domain);
>> +
>> +	spin_unlock(&xen_domain->lock);
>> +
>> +	return 0;
>> +}
>> +
>> +static int arm_smmu_reassign_dev(struct domain *s, struct domain *t,
>> +				u8 devfn,  struct device *dev)
>> +{
>> +	int ret = 0;
>> +
>> +	/* Don't allow remapping on other domain than hwdom */
>> +	if (t && t != hardware_domain)
>> +		return -EPERM;
>> +
>> +	if (t == s)
>> +		return 0;
>> +
>> +	ret = arm_smmu_deassign_dev(s, dev);
>> +	if (ret)
>> +		return ret;
>> +
>> +	if (t) {
>> +		/* No flags are defined for ARM. */
>> +		ret = arm_smmu_assign_dev(t, devfn, dev, 0);
>> +		if (ret)
>> +			return ret;
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +static int arm_smmu_iommu_xen_domain_init(struct domain *d)
>> +{
>> +	struct arm_smmu_xen_domain *xen_domain;
>> +
>> +	xen_domain = xzalloc(struct arm_smmu_xen_domain);
>> +	if (!xen_domain)
>> +		return -ENOMEM;
>> +
>> +	spin_lock_init(&xen_domain->lock);
>> +	INIT_LIST_HEAD(&xen_domain->contexts);
>> +
>> +	dom_iommu(d)->arch.priv = xen_domain;
>> +	return 0;
>> +
>> +}
>> +
>> +static void __hwdom_init arm_smmu_iommu_hwdom_init(struct domain *d)
>> +{
>> +}
>> +
>> +static void arm_smmu_iommu_xen_domain_teardown(struct domain *d)
>> +{
>> +	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
>> +
>> +	ASSERT(list_empty(&xen_domain->contexts));
>> +	xfree(xen_domain);
>> +}
>> +
>> +static const struct iommu_ops arm_smmu_iommu_ops = {
>> +	.init		= arm_smmu_iommu_xen_domain_init,
>> +	.hwdom_init		= arm_smmu_iommu_hwdom_init,
>> +	.teardown		= arm_smmu_iommu_xen_domain_teardown,
>> +	.iotlb_flush		= arm_smmu_iotlb_flush,
>> +	.iotlb_flush_all	= arm_smmu_iotlb_flush_all,
>> +	.assign_device		= arm_smmu_assign_dev,
>> +	.reassign_device	= arm_smmu_reassign_dev,
>> +	.map_page		= arm_iommu_map_page,
>> +	.unmap_page		= arm_iommu_unmap_page,
>> +	.dt_xlate		= arm_smmu_dt_xlate,
>> +	.add_device		= arm_smmu_add_device,
>> +};
>> +
>> +static __init int arm_smmu_dt_init(struct dt_device_node *dev,
>> +				const void *data)
>> +{
>> +	int rc;
>> +
>> +	/*
>> +	 * Even if the device can't be initialized, we don't want to
>> +	 * give the SMMU device to dom0.
>> +	 */
>> +	dt_device_set_used_by(dev, DOMID_XEN);
>> +
>> +	rc = arm_smmu_device_probe(dt_to_dev(dev));
>> +	if (rc)
>> +		return rc;
>> +
>> +	iommu_set_ops(&arm_smmu_iommu_ops);
>> +
>> +	return 0;
>> +}
>> +
>> +DT_DEVICE_START(smmuv3, "ARM SMMU V3", DEVICE_IOMMU)
>> +.dt_match = arm_smmu_of_match,
>> +.init = arm_smmu_dt_init,
>> +DT_DEVICE_END
>> --
>> 2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Dec 11 20:11:41 2020
Date: Fri, 11 Dec 2020 20:11:11 +0000
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: marcandre.lureau@redhat.com
Cc: qemu-devel@nongnu.org, philmd@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Laurent Vivier <laurent@vivier.eu>, Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>, qemu-arm@nongnu.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Peter Maydell <peter.maydell@linaro.org>
Subject: Re: [PATCH v3 06/13] virtiofsd: replace _Static_assert with
 QEMU_BUILD_BUG_ON
Message-ID: <20201211201111.GI3380@work-vm>
References: <20201210134752.780923-1-marcandre.lureau@redhat.com>
 <20201210134752.780923-7-marcandre.lureau@redhat.com>
MIME-Version: 1.0
In-Reply-To: <20201210134752.780923-7-marcandre.lureau@redhat.com>
User-Agent: Mutt/1.14.6 (2020-07-11)

* marcandre.lureau@redhat.com (marcandre.lureau@redhat.com) wrote:
> From: Marc-André Lureau <marcandre.lureau@redhat.com>
> 
> This allows us to get rid of a check for older GCC versions (which was a bit
> bogus too, since it was falling back on the C++ version...)
> 
> Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>

Yes I think that's OK; this is an imported file, but we've already
mangled it into QEMU's style and added includes etc.


Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  tools/virtiofsd/fuse_common.h | 11 +----------
>  1 file changed, 1 insertion(+), 10 deletions(-)
> 
> diff --git a/tools/virtiofsd/fuse_common.h b/tools/virtiofsd/fuse_common.h
> index 5aee5193eb..a2484060b6 100644
> --- a/tools/virtiofsd/fuse_common.h
> +++ b/tools/virtiofsd/fuse_common.h
> @@ -809,15 +809,6 @@ void fuse_remove_signal_handlers(struct fuse_session *se);
>   *
>   * On 32bit systems please add -D_FILE_OFFSET_BITS=64 to your compile flags!
>   */
> -
> -#if defined(__GNUC__) &&                                      \
> -    (__GNUC__ > 4 || __GNUC__ == 4 && __GNUC_MINOR__ >= 6) && \
> -    !defined __cplusplus
> -_Static_assert(sizeof(off_t) == 8, "fuse: off_t must be 64bit");
> -#else
> -struct _fuse_off_t_must_be_64bit_dummy_struct {
> -    unsigned _fuse_off_t_must_be_64bit:((sizeof(off_t) == 8) ? 1 : -1);
> -};
> -#endif
> +QEMU_BUILD_BUG_ON(sizeof(off_t) != 8);
>  
>  #endif /* FUSE_COMMON_H_ */
> -- 
> 2.29.0
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



From xen-devel-bounces@lists.xenproject.org Fri Dec 11 21:08:29 2020
From: Thomas Gleixner <tglx@linutronix.de>
To: Andy Shevchenko <andy.shevchenko@gmail.com>
Cc: LKML <linux-kernel@vger.kernel.org>, Peter Zijlstra
 <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>, "James E.J.
 Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller
 <deller@gmx.de>, afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org, Russell King <linux@armlinux.org.uk>,
 linux-arm Mailing List <linux-arm-kernel@lists.infradead.org>, Mark
 Rutland <mark.rutland@arm.com>, Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>, Christian Borntraeger
 <borntraeger@de.ibm.com>, Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org, Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>, Rodrigo Vivi
 <rodrigo.vivi@intel.com>, David Airlie <airlied@linux.ie>, Daniel Vetter
 <daniel@ffwll.ch>, Pankaj Bharadiya
 <pankaj.laxminarayan.bharadiya@intel.com>, Chris Wilson
 <chris@chris-wilson.co.uk>, Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx <intel-gfx@lists.freedesktop.org>, dri-devel
 <dri-devel@lists.freedesktop.org>, Tvrtko Ursulin
 <tvrtko.ursulin@linux.intel.com>, Linus Walleij
 <linus.walleij@linaro.org>, "open list\:GPIO SUBSYSTEM"
 <linux-gpio@vger.kernel.org>, Lee Jones <lee.jones@linaro.org>, Jon Mason
 <jdmason@kudzu.us>, Dave Jiang <dave.jiang@intel.com>, Allen Hubbe
 <allenbh@gmail.com>, linux-ntb@googlegroups.com, Lorenzo Pieralisi
 <lorenzo.pieralisi@arm.com>, Rob Herring <robh@kernel.org>, Bjorn Helgaas
 <bhelgaas@google.com>, Michal Simek <michal.simek@xilinx.com>, linux-pci
 <linux-pci@vger.kernel.org>, Karthikeyan Mitran
 <m.karthikeyan@mobiveil.co.in>, Hou Zhiqiang <Zhiqiang.Hou@nxp.com>, Tariq
 Toukan <tariqt@nvidia.com>, "David S. Miller" <davem@davemloft.net>, Jakub
 Kicinski <kuba@kernel.org>, netdev <netdev@vger.kernel.org>, "open
 list\:HFI1 DRIVER" <linux-rdma@vger.kernel.org>, Saeed Mahameed
 <saeedm@nvidia.com>, Leon Romanovsky <leon@kernel.org>, Boris Ostrovsky
 <boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Subject: Re: [patch 03/30] genirq: Move irq_set_lockdep_class() to core
In-Reply-To: <CAHp75Vc-2OjE2uwvNRiyLMQ8GSN3P7SehKD-yf229_7ocaktiw@mail.gmail.com>
References: <20201210192536.118432146@linutronix.de> <20201210194042.860029489@linutronix.de> <CAHp75Vc-2OjE2uwvNRiyLMQ8GSN3P7SehKD-yf229_7ocaktiw@mail.gmail.com>
Date: Fri, 11 Dec 2020 22:08:07 +0100
Message-ID: <87h7osgifc.fsf@nanos.tec.linutronix.de>

On Fri, Dec 11 2020 at 19:53, Andy Shevchenko wrote:

> On Thu, Dec 10, 2020 at 10:14 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>>
>> irq_set_lockdep_class() is used from modules and requires irq_to_desc() to
>> be exported. Move it into the core code which lifts another requirement for
>> the export.
>
> ...
>
>> +       if (IS_ENABLED(CONFIG_LOCKDEP))
>> +               __irq_set_lockdep_class(irq, lock_class, request_class);

You are right. Let me fix that.
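[Editorial note: the IS_ENABLED() idiom under discussion keeps the call site compiled and type-checked in every configuration while letting the compiler drop the call when the option is off. A minimal userspace sketch of the same pattern, with hypothetical `_demo` names standing in for the kernel symbols:]

```c
#include <stdio.h>

/* Stand-in for Kconfig: CONFIG_* is defined to 1 when enabled, and
 * IS_ENABLED() turns it into a constant usable in an ordinary `if`. */
#define CONFIG_LOCKDEP_DEMO 1
#define IS_ENABLED(option) (option)

static int lockdep_calls; /* counts how often the real work ran */

static void __irq_set_lockdep_class_demo(int irq)
{
    lockdep_calls++;
    printf("lockdep class set for irq %d\n", irq);
}

static inline void irq_set_lockdep_class_demo(int irq)
{
    /* The condition is a compile-time constant, so the call is
     * eliminated entirely when the option is 0 -- but unlike #ifdef,
     * the body is still parsed and type-checked in both configs. */
    if (IS_ENABLED(CONFIG_LOCKDEP_DEMO))
        __irq_set_lockdep_class_demo(irq);
}
```

This is why the inline wrapper in the patch can live in a header without dragging the lockdep internals into every module.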


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 21:10:41 2020
From: Thomas Gleixner <tglx@linutronix.de>
To: David Laight <David.Laight@ACULAB.COM>, Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>, LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>,
 Jani Nikula <jani.nikula@linux.intel.com>, Joonas Lahtinen
 <joonas.lahtinen@linux.intel.com>, Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>,
 "intel-gfx\@lists.freedesktop.org" <intel-gfx@lists.freedesktop.org>,
 "dri-devel\@lists.freedesktop.org" <dri-devel@lists.freedesktop.org>,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>, Helge
 Deller <deller@gmx.de>, afzal mohammed <afzal.mohd.ma@gmail.com>,
 "linux-parisc\@vger.kernel.org" <linux-parisc@vger.kernel.org>, Russell
 King <linux@armlinux.org.uk>, "linux-arm-kernel\@lists.infradead.org"
 <linux-arm-kernel@lists.infradead.org>, Mark Rutland
 <mark.rutland@arm.com>, Catalin Marinas <catalin.marinas@arm.com>, Will
 Deacon <will@kernel.org>, Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>, "linux-s390\@vger.kernel.org"
 <linux-s390@vger.kernel.org>, Pankaj Bharadiya
 <pankaj.laxminarayan.bharadiya@intel.com>, Chris Wilson
 <chris@chris-wilson.co.uk>, Wambui Karuga <wambui.karugax@gmail.com>,
 Linus Walleij <linus.walleij@linaro.org>, "linux-gpio\@vger.kernel.org"
 <linux-gpio@vger.kernel.org>, Lee Jones <lee.jones@linaro.org>, Jon Mason
 <jdmason@kudzu.us>, Dave Jiang <dave.jiang@intel.com>, Allen Hubbe
 <allenbh@gmail.com>, "linux-ntb\@googlegroups.com"
 <linux-ntb@googlegroups.com>, Lorenzo Pieralisi
 <lorenzo.pieralisi@arm.com>, Rob Herring <robh@kernel.org>, Bjorn Helgaas
 <bhelgaas@google.com>, Michal Simek <michal.simek@xilinx.com>,
 "linux-pci\@vger.kernel.org" <linux-pci@vger.kernel.org>, Karthikeyan
 Mitran <m.karthikeyan@mobiveil.co.in>, Hou Zhiqiang
 <Zhiqiang.Hou@nxp.com>, Tariq Toukan <tariqt@nvidia.com>, "David S.
 Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>,
 "netdev\@vger.kernel.org" <netdev@vger.kernel.org>,
 "linux-rdma\@vger.kernel.org" <linux-rdma@vger.kernel.org>, Saeed Mahameed
 <saeedm@nvidia.com>, Leon Romanovsky <leon@kernel.org>, Boris Ostrovsky
 <boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, "xen-devel\@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Subject: RE: [patch 14/30] drm/i915/pmu: Replace open coded kstat_irqs() copy
In-Reply-To: <d6cbfa118490459bb0671394f00323fc@AcuMS.aculab.com>
References: <20201210192536.118432146@linutronix.de> <20201210194043.957046529@linutronix.de> <ad05af1a-5463-2a80-0887-7629721d6863@linux.intel.com> <87y2i4h54i.fsf@nanos.tec.linutronix.de> <d6cbfa118490459bb0671394f00323fc@AcuMS.aculab.com>
Date: Fri, 11 Dec 2020 22:10:36 +0100
Message-ID: <87eejwgib7.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain

On Fri, Dec 11 2020 at 14:19, David Laight wrote:
> From: Thomas Gleixner
>> You can't catch that. If this really becomes an issue you need a
>> sequence counter around it.
>
> Or just two copies of the high word.
> Provided the accesses are sequenced:
> writer:
> 	load high:low
> 	add small_value,high:low
> 	store high
> 	store low
> 	store high_copy
> reader:
> 	load high_copy
> 	load low
> 	load high
> 	if (high != high_copy)
> 		low = 0;

And what is low = 0 solving? You need to loop back and retry until it's
consistent, and then it's nothing other than an open-coded sequence count.
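The retry Thomas describes is exactly a sequence count: the writer makes the count odd while the split 64-bit value is inconsistent, and the reader loops until it observes the same even count before and after its two 32-bit loads. A minimal single-threaded sketch with hypothetical names (the memory barriers and READ_ONCE-style accessors real SMP code needs are only noted as comments):

```c
#include <stdint.h>

/* Hypothetical layout: a 64-bit counter split into two 32-bit halves
 * that a 32-bit reader cannot load atomically, protected by an
 * open-coded sequence count. */
struct counter64 {
    unsigned int seq;   /* even: stable, odd: writer in progress */
    uint32_t hi;
    uint32_t lo;
};

/* Writer: make the sequence odd around the non-atomic update. */
static void counter_add(struct counter64 *c, uint32_t delta)
{
    uint64_t v = ((uint64_t)c->hi << 32) | c->lo;

    c->seq++;                       /* odd: update in progress */
    /* smp_wmb() in real kernel code */
    v += delta;
    c->hi = (uint32_t)(v >> 32);
    c->lo = (uint32_t)v;
    /* smp_wmb() */
    c->seq++;                       /* even again: stable */
}

/* Reader: retry until an even sequence is seen unchanged across both
 * 32-bit loads, i.e. no writer interleaved with the snapshot. */
static uint64_t counter_read(const struct counter64 *c)
{
    for (;;) {
        unsigned int start = c->seq;

        if (start & 1)
            continue;               /* writer active, retry */
        /* smp_rmb() */
        uint64_t v = ((uint64_t)c->hi << 32) | c->lo;
        /* smp_rmb() */
        if (c->seq == start)
            return v;               /* consistent snapshot */
    }
}
```

This is what the kernel's seqcount primitives package up; the high-word-copy trick is the same idea with the high word doubling as the sequence, minus the retry that makes it correct.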

Thanks,

        tglx


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 21:27:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 21:27:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50914.89753 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knpx5-0002fm-Fc; Fri, 11 Dec 2020 21:27:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50914.89753; Fri, 11 Dec 2020 21:27:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knpx5-0002ff-C1; Fri, 11 Dec 2020 21:27:47 +0000
Received: by outflank-mailman (input) for mailman id 50914;
 Fri, 11 Dec 2020 21:27:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YvCS=FP=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knpx4-0002fa-GS
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 21:27:46 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id afc1cfc5-c3b3-47b4-8507-7ddc1076cf16;
 Fri, 11 Dec 2020 21:27:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: afc1cfc5-c3b3-47b4-8507-7ddc1076cf16
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607722064;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RPMMCIZpTkbwn/vpPbbWIvtIfsdUJZbFWGvj4XBXX/8=;
	b=Kc/FVw++R6e2tySXtGXcjxUCfxzjuTQFTglr5qIAHuhSnqpTp+Ikj+7Hq7mXQTlJIhQso1
	GzueGJq666vDoGnyl7z+R0hqgzUE1QvEz7r/C1zoUW/uJEMWoH29CdDJSk0WUoP2avCyFO
	CRX7sZOyAR4dgW8urx4t099te3vzsLBebYBFzSZHvt/ir/v+j+DUtH3KLF65BJZt70WJkw
	AE4vdTRQhanAKvsPxu5Ss601VFRMIT6sTuiXwTfT17QUlJMwNoTmBl7avya6WCOOi1pn/V
	ZDVRxzcxQEEpw+Wd6+ymYN074dQBF+mj2DWvKS0m9PZ0lY8tuhofO6ggRZ9GJg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607722064;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RPMMCIZpTkbwn/vpPbbWIvtIfsdUJZbFWGvj4XBXX/8=;
	b=tFwThmGQdDL1Ql7epJregGVf1InSk/8mJJwiaGupYZNmlO1tti86n4xUB3jqi7xZFrdYET
	SNdr5SpWVlT7QnDA==
To: boris.ostrovsky@oracle.com, Jürgen Groß
 <jgross@suse.com>, LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, "James E.J. Bottomley"
 <James.Bottomley@HansenPartnership.com>, Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>, linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>, Heiko Carstens
 <hca@linux.ibm.com>, linux-s390@vger.kernel.org, Jani Nikula
 <jani.nikula@linux.intel.com>, Joonas Lahtinen
 <joonas.lahtinen@linux.intel.com>, Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>, Pankaj
 Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>, Chris Wilson
 <chris@chris-wilson.co.uk>, Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, Tvrtko
 Ursulin <tvrtko.ursulin@linux.intel.com>, Linus Walleij
 <linus.walleij@linaro.org>, linux-gpio@vger.kernel.org, Lee Jones
 <lee.jones@linaro.org>, Jon Mason <jdmason@kudzu.us>, Dave Jiang
 <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>, Michal
 Simek <michal.simek@xilinx.com>, linux-pci@vger.kernel.org, Karthikeyan
 Mitran <m.karthikeyan@mobiveil.co.in>, Hou Zhiqiang
 <Zhiqiang.Hou@nxp.com>, Tariq Toukan <tariqt@nvidia.com>, "David S.
 Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org, linux-rdma@vger.kernel.org, Saeed Mahameed
 <saeedm@nvidia.com>, Leon Romanovsky <leon@kernel.org>
Subject: Re: [patch 27/30] xen/events: Only force affinity mask for percpu interrupts
In-Reply-To: <9806692f-24a3-4b6f-ae55-86bd66481271@oracle.com>
References: <20201210192536.118432146@linutronix.de> <20201210194045.250321315@linutronix.de> <7f7af60f-567f-cdef-f8db-8062a44758ce@oracle.com> <2164a0ce-0e0d-c7dc-ac97-87c8f384ad82@suse.com> <871rfwiknd.fsf@nanos.tec.linutronix.de> <9806692f-24a3-4b6f-ae55-86bd66481271@oracle.com>
Date: Fri, 11 Dec 2020 22:27:43 +0100
Message-ID: <877dpoghio.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

On Fri, Dec 11 2020 at 09:29, boris ostrovsky wrote:

> On 12/11/20 7:37 AM, Thomas Gleixner wrote:
>> On Fri, Dec 11 2020 at 13:10, Jürgen Groß wrote:
>>> On 11.12.20 00:20, boris.ostrovsky@oracle.com wrote:
>>>> On 12/10/20 2:26 PM, Thomas Gleixner wrote:
>>>>> Change the implementation so that the channel is bound to CPU0 at the XEN
>>>>> level and leave the affinity mask alone. At startup of the interrupt
>>>>> affinity will be assigned out of the affinity mask and the XEN binding will
>>>>> be updated.
>>>>
>>>> If that's the case then I wonder whether we need this call at all and instead bind at startup time.
>>> After some discussion with Thomas on IRC and xen-devel archaeology the
>>> result is: this will be needed especially for systems running on a
>>> single vcpu (e.g. small guests), as the .irq_set_affinity() callback
>>> won't be called in this case when starting the irq.
>
> On UP are we not then going to end up with an empty affinity mask? Or
> are we guaranteed to have it set to 1 by interrupt generic code?

A UP kernel never looks at the affinity mask. The
chip::irq_set_affinity() callback is not invoked, so the mask is
irrelevant.

An SMP kernel on a UP machine sets CPU0 in the mask, so all is good.

> This is actually why I brought this up in the first place --- a
> potential mismatch between the affinity mask and Xen-specific data
> (e.g. info->cpu and then protocol-specific data in event channel
> code). Even if they are re-synchronized later, at startup time (for
> SMP).

Which is not a problem either. The affinity mask is only relevant for
setting the affinity, but it's not relevant for delivery and never can
be.

> I don't see anything that would cause a problem right now but I worry
> that this inconsistency may come up at some point.

As long as the affinity mask does not become part of the event channel
magic, this should never matter.

Look at it from hardware:

interrupt is affine to CPU0

     CPU0 runs:

     set_affinity(CPU0 -> CPU1)
        local_irq_disable()

 --> interrupt is raised in hardware and pending on CPU0

        irq hardware is reconfigured to be affine to CPU1

        local_irq_enable()

 --> interrupt is handled on CPU0

the next interrupt will be raised on CPU1

So info->cpu, which is registered via the hypercall, binds the 'hardware
delivery'; whenever a new affinity is written, the channel is rebound to
some other CPU and the next interrupt is then raised on that CPU.

It's not any different from the hardware example, at least not as far as
I understand the code.

Thanks,

        tglx


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 22:06:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 22:06:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50923.89765 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knqYQ-0006dm-DC; Fri, 11 Dec 2020 22:06:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50923.89765; Fri, 11 Dec 2020 22:06:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knqYQ-0006dZ-A6; Fri, 11 Dec 2020 22:06:22 +0000
Received: by outflank-mailman (input) for mailman id 50923;
 Fri, 11 Dec 2020 22:06:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8Vdm=FP=redhat.com=ehabkost@srs-us1.protection.inumbo.net>)
 id 1knqYP-0006dS-CZ
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 22:06:21 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 96540e95-8500-4555-88be-5e621f09e7c8;
 Fri, 11 Dec 2020 22:06:18 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-250-SvvGBz7CO8qmThcdqBMh3g-1; Fri, 11 Dec 2020 17:06:17 -0500
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 9442119251A1;
 Fri, 11 Dec 2020 22:06:14 +0000 (UTC)
Received: from localhost (ovpn-116-160.rdu2.redhat.com [10.10.116.160])
 by smtp.corp.redhat.com (Postfix) with ESMTP id D346B19C78;
 Fri, 11 Dec 2020 22:06:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 96540e95-8500-4555-88be-5e621f09e7c8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607724378;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ZJe/QLookReovEWCweH6zzebkyNL6c/FJv8O55OglUc=;
	b=MoaxH5iX+Bu8BkYmJlhmk7VuB2ZGjFtrYilecMwHMUJ71ftVp8ZuvHKyvq01Bk6mdvXxXa
	fGZ4WrJssB6vPbKhXuaqoWMuirg11Kwa+DXkfVYXJ+jFITuzf3hvlBncopXAddST+rCBUm
	93uzUskPRo3y6eIJfMoE5QFUW3iEpZ4=
X-MC-Unique: SvvGBz7CO8qmThcdqBMh3g-1
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Cc: Markus Armbruster <armbru@redhat.com>,
	Igor Mammedov <imammedo@redhat.com>,
	Stefan Berger <stefanb@linux.ibm.com>,
	Marc-André Lureau <marcandre.lureau@redhat.com>,
	"Daniel P. Berrange" <berrange@redhat.com>,
	Philippe Mathieu-Daudé <philmd@redhat.com>,
	John Snow <jsnow@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Eric Blake <eblake@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Stefan Berger <stefanb@linux.vnet.ibm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Max Reitz <mreitz@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Richard Henderson <rth@twiddle.net>,
	David Hildenbrand <david@redhat.com>,
	Halil Pasic <pasic@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Matthew Rosato <mjrosato@linux.ibm.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	qemu-s390x@nongnu.org
Subject: [PATCH v4 09/32] qdev: Make qdev_get_prop_ptr() get Object* arg
Date: Fri, 11 Dec 2020 17:05:06 -0500
Message-Id: <20201211220529.2290218-10-ehabkost@redhat.com>
In-Reply-To: <20201211220529.2290218-1-ehabkost@redhat.com>
References: <20201211220529.2290218-1-ehabkost@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=ehabkost@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Make the code more generic and not specific to TYPE_DEVICE.

Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com> #s390 parts
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
Changes v1 -> v2:
- Fix build error with CONFIG_XEN
  I took the liberty of keeping the Reviewed-by line from
  Marc-André as the build fix is a trivial one-line change
---
Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Thomas Huth <thuth@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: David Hildenbrand <david@redhat.com>
Cc: Halil Pasic <pasic@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Matthew Rosato <mjrosato@linux.ibm.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org
Cc: qemu-block@nongnu.org
Cc: qemu-s390x@nongnu.org
---
 include/hw/qdev-properties.h     |  2 +-
 backends/tpm/tpm_util.c          |  8 ++--
 hw/block/xen-block.c             |  5 +-
 hw/core/qdev-properties-system.c | 57 +++++++++-------------
 hw/core/qdev-properties.c        | 82 +++++++++++++-------------------
 hw/s390x/css.c                   |  5 +-
 hw/s390x/s390-pci-bus.c          |  4 +-
 hw/vfio/pci-quirks.c             |  5 +-
 8 files changed, 68 insertions(+), 100 deletions(-)

diff --git a/include/hw/qdev-properties.h b/include/hw/qdev-properties.h
index 0ea822e6a7..0b92cfc761 100644
--- a/include/hw/qdev-properties.h
+++ b/include/hw/qdev-properties.h
@@ -302,7 +302,7 @@ void qdev_prop_set_macaddr(DeviceState *dev, const char *name,
                            const uint8_t *value);
 void qdev_prop_set_enum(DeviceState *dev, const char *name, int value);
 
-void *qdev_get_prop_ptr(DeviceState *dev, Property *prop);
+void *qdev_get_prop_ptr(Object *obj, Property *prop);
 
 void qdev_prop_register_global(GlobalProperty *prop);
 const GlobalProperty *qdev_find_global_prop(DeviceState *dev,
diff --git a/backends/tpm/tpm_util.c b/backends/tpm/tpm_util.c
index e6aeb63587..3973105658 100644
--- a/backends/tpm/tpm_util.c
+++ b/backends/tpm/tpm_util.c
@@ -35,8 +35,7 @@
 static void get_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
-    TPMBackend **be = qdev_get_prop_ptr(dev, opaque);
+    TPMBackend **be = qdev_get_prop_ptr(obj, opaque);
     char *p;
 
     p = g_strdup(*be ? (*be)->id : "");
@@ -49,7 +48,7 @@ static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    TPMBackend *s, **be = qdev_get_prop_ptr(dev, prop);
+    TPMBackend *s, **be = qdev_get_prop_ptr(obj, prop);
     char *str;
 
     if (dev->realized) {
@@ -73,9 +72,8 @@ static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
 
 static void release_tpm(Object *obj, const char *name, void *opaque)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    TPMBackend **be = qdev_get_prop_ptr(dev, prop);
+    TPMBackend **be = qdev_get_prop_ptr(obj, prop);
 
     if (*be) {
         tpm_backend_reset(*be);
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index 8a7a3f5452..905e4acd97 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -335,9 +335,8 @@ static char *disk_to_vbd_name(unsigned int disk)
 static void xen_block_get_vdev(Object *obj, Visitor *v, const char *name,
                                void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    XenBlockVdev *vdev = qdev_get_prop_ptr(dev, prop);
+    XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
     char *str;
 
     switch (vdev->type) {
@@ -398,7 +397,7 @@ static void xen_block_set_vdev(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    XenBlockVdev *vdev = qdev_get_prop_ptr(dev, prop);
+    XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
     char *str, *p;
     const char *end;
 
diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
index 77b31eb9dc..9ac9b95852 100644
--- a/hw/core/qdev-properties-system.c
+++ b/hw/core/qdev-properties-system.c
@@ -59,9 +59,8 @@ static bool check_prop_still_unset(DeviceState *dev, const char *name,
 static void get_drive(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    void **ptr = qdev_get_prop_ptr(dev, prop);
+    void **ptr = qdev_get_prop_ptr(obj, prop);
     const char *value;
     char *p;
 
@@ -87,7 +86,7 @@ static void set_drive_helper(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    void **ptr = qdev_get_prop_ptr(dev, prop);
+    void **ptr = qdev_get_prop_ptr(obj, prop);
     char *str;
     BlockBackend *blk;
     bool blk_created = false;
@@ -185,7 +184,7 @@ static void release_drive(Object *obj, const char *name, void *opaque)
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    BlockBackend **ptr = qdev_get_prop_ptr(dev, prop);
+    BlockBackend **ptr = qdev_get_prop_ptr(obj, prop);
 
     if (*ptr) {
         AioContext *ctx = blk_get_aio_context(*ptr);
@@ -218,8 +217,7 @@ const PropertyInfo qdev_prop_drive_iothread = {
 static void get_chr(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
-    CharBackend *be = qdev_get_prop_ptr(dev, opaque);
+    CharBackend *be = qdev_get_prop_ptr(obj, opaque);
     char *p;
 
     p = g_strdup(be->chr && be->chr->label ? be->chr->label : "");
@@ -232,7 +230,7 @@ static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    CharBackend *be = qdev_get_prop_ptr(dev, prop);
+    CharBackend *be = qdev_get_prop_ptr(obj, prop);
     Chardev *s;
     char *str;
 
@@ -272,9 +270,8 @@ static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
 
 static void release_chr(Object *obj, const char *name, void *opaque)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    CharBackend *be = qdev_get_prop_ptr(dev, prop);
+    CharBackend *be = qdev_get_prop_ptr(obj, prop);
 
     qemu_chr_fe_deinit(be, false);
 }
@@ -297,9 +294,8 @@ const PropertyInfo qdev_prop_chr = {
 static void get_mac(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    MACAddr *mac = qdev_get_prop_ptr(dev, prop);
+    MACAddr *mac = qdev_get_prop_ptr(obj, prop);
     char buffer[2 * 6 + 5 + 1];
     char *p = buffer;
 
@@ -315,7 +311,7 @@ static void set_mac(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    MACAddr *mac = qdev_get_prop_ptr(dev, prop);
+    MACAddr *mac = qdev_get_prop_ptr(obj, prop);
     int i, pos;
     char *str;
     const char *p;
@@ -381,9 +377,8 @@ void qdev_prop_set_macaddr(DeviceState *dev, const char *name,
 static void get_netdev(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    NICPeers *peers_ptr = qdev_get_prop_ptr(dev, prop);
+    NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
     char *p = g_strdup(peers_ptr->ncs[0] ? peers_ptr->ncs[0]->name : "");
 
     visit_type_str(v, name, &p, errp);
@@ -395,7 +390,7 @@ static void set_netdev(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    NICPeers *peers_ptr = qdev_get_prop_ptr(dev, prop);
+    NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
     NetClientState **ncs = peers_ptr->ncs;
     NetClientState *peers[MAX_QUEUE_NUM];
     int queues, err = 0, i = 0;
@@ -461,9 +456,8 @@ const PropertyInfo qdev_prop_netdev = {
 static void get_audiodev(Object *obj, Visitor *v, const char* name,
                          void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    QEMUSoundCard *card = qdev_get_prop_ptr(dev, prop);
+    QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
     char *p = g_strdup(audio_get_id(card));
 
     visit_type_str(v, name, &p, errp);
@@ -475,7 +469,7 @@ static void set_audiodev(Object *obj, Visitor *v, const char* name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    QEMUSoundCard *card = qdev_get_prop_ptr(dev, prop);
+    QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
     AudioState *state;
     int err = 0;
     char *str;
@@ -582,7 +576,7 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
     uint64_t value;
     Error *local_err = NULL;
 
@@ -674,9 +668,8 @@ const PropertyInfo qdev_prop_multifd_compression = {
 static void get_reserved_region(Object *obj, Visitor *v, const char *name,
                                 void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    ReservedRegion *rr = qdev_get_prop_ptr(dev, prop);
+    ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
     char buffer[64];
     char *p = buffer;
     int rc;
@@ -693,7 +686,7 @@ static void set_reserved_region(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    ReservedRegion *rr = qdev_get_prop_ptr(dev, prop);
+    ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
     Error *local_err = NULL;
     const char *endptr;
     char *str;
@@ -761,7 +754,7 @@ static void set_pci_devfn(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int32_t value, *ptr = qdev_get_prop_ptr(dev, prop);
+    int32_t value, *ptr = qdev_get_prop_ptr(obj, prop);
     unsigned int slot, fn, n;
     char *str;
 
@@ -804,8 +797,7 @@ invalid:
 static int print_pci_devfn(Object *obj, Property *prop, char *dest,
                            size_t len)
 {
-    DeviceState *dev = DEVICE(obj);
-    int32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (*ptr == -1) {
         return snprintf(dest, len, "<unset>");
@@ -828,9 +820,8 @@ const PropertyInfo qdev_prop_pci_devfn = {
 static void get_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
                                  void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(dev, prop);
+    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
     char buffer[] = "ffff:ff:ff.f";
     char *p = buffer;
     int rc = 0;
@@ -857,7 +848,7 @@ static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(dev, prop);
+    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
     char *str, *p;
     char *e;
     unsigned long val;
@@ -951,9 +942,8 @@ const PropertyInfo qdev_prop_off_auto_pcibar = {
 static void get_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    PCIExpLinkSpeed *p = qdev_get_prop_ptr(dev, prop);
+    PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
     int speed;
 
     switch (*p) {
@@ -982,7 +972,7 @@ static void set_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    PCIExpLinkSpeed *p = qdev_get_prop_ptr(dev, prop);
+    PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
     int speed;
 
     if (dev->realized) {
@@ -1028,9 +1018,8 @@ const PropertyInfo qdev_prop_pcie_link_speed = {
 static void get_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    PCIExpLinkWidth *p = qdev_get_prop_ptr(dev, prop);
+    PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
     int width;
 
     switch (*p) {
@@ -1068,7 +1057,7 @@ static void set_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    PCIExpLinkWidth *p = qdev_get_prop_ptr(dev, prop);
+    PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
     int width;
 
     if (dev->realized) {
diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index 3a4638f4de..0a54a922c8 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -38,9 +38,9 @@ void qdev_prop_allow_set_link_before_realize(const Object *obj,
     }
 }
 
-void *qdev_get_prop_ptr(DeviceState *dev, Property *prop)
+void *qdev_get_prop_ptr(Object *obj, Property *prop)
 {
-    void *ptr = dev;
+    void *ptr = obj;
     ptr += prop->offset;
     return ptr;
 }
@@ -48,9 +48,8 @@ void *qdev_get_prop_ptr(DeviceState *dev, Property *prop)
 void qdev_propinfo_get_enum(Object *obj, Visitor *v, const char *name,
                             void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int *ptr = qdev_get_prop_ptr(dev, prop);
+    int *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_enum(v, prop->name, ptr, prop->info->enum_table, errp);
 }
@@ -60,7 +59,7 @@ void qdev_propinfo_set_enum(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int *ptr = qdev_get_prop_ptr(dev, prop);
+    int *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -94,8 +93,7 @@ static uint32_t qdev_get_prop_mask(Property *prop)
 
 static void bit_prop_set(Object *obj, Property *props, bool val)
 {
-    DeviceState *dev = DEVICE(obj);
-    uint32_t *p = qdev_get_prop_ptr(dev, props);
+    uint32_t *p = qdev_get_prop_ptr(obj, props);
     uint32_t mask = qdev_get_prop_mask(props);
     if (val) {
         *p |= mask;
@@ -107,9 +105,8 @@ static void bit_prop_set(Object *obj, Property *props, bool val)
 static void prop_get_bit(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *p = qdev_get_prop_ptr(dev, prop);
+    uint32_t *p = qdev_get_prop_ptr(obj, prop);
     bool value = (*p & qdev_get_prop_mask(prop)) != 0;
 
     visit_type_bool(v, name, &value, errp);
@@ -156,8 +153,7 @@ static uint64_t qdev_get_prop_mask64(Property *prop)
 
 static void bit64_prop_set(Object *obj, Property *props, bool val)
 {
-    DeviceState *dev = DEVICE(obj);
-    uint64_t *p = qdev_get_prop_ptr(dev, props);
+    uint64_t *p = qdev_get_prop_ptr(obj, props);
     uint64_t mask = qdev_get_prop_mask64(props);
     if (val) {
         *p |= mask;
@@ -169,9 +165,8 @@ static void bit64_prop_set(Object *obj, Property *props, bool val)
 static void prop_get_bit64(Object *obj, Visitor *v, const char *name,
                            void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint64_t *p = qdev_get_prop_ptr(dev, prop);
+    uint64_t *p = qdev_get_prop_ptr(obj, prop);
     bool value = (*p & qdev_get_prop_mask64(prop)) != 0;
 
     visit_type_bool(v, name, &value, errp);
@@ -208,9 +203,8 @@ const PropertyInfo qdev_prop_bit64 = {
 static void get_bool(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    bool *ptr = qdev_get_prop_ptr(dev, prop);
+    bool *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_bool(v, name, ptr, errp);
 }
@@ -220,7 +214,7 @@ static void set_bool(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    bool *ptr = qdev_get_prop_ptr(dev, prop);
+    bool *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -242,9 +236,8 @@ const PropertyInfo qdev_prop_bool = {
 static void get_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint8_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_uint8(v, name, ptr, errp);
 }
@@ -254,7 +247,7 @@ static void set_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint8_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -288,9 +281,8 @@ const PropertyInfo qdev_prop_uint8 = {
 void qdev_propinfo_get_uint16(Object *obj, Visitor *v, const char *name,
                               void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint16_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_uint16(v, name, ptr, errp);
 }
@@ -300,7 +292,7 @@ static void set_uint16(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint16_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -322,9 +314,8 @@ const PropertyInfo qdev_prop_uint16 = {
 static void get_uint32(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_uint32(v, name, ptr, errp);
 }
@@ -334,7 +325,7 @@ static void set_uint32(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -347,9 +338,8 @@ static void set_uint32(Object *obj, Visitor *v, const char *name,
 void qdev_propinfo_get_int32(Object *obj, Visitor *v, const char *name,
                              void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_int32(v, name, ptr, errp);
 }
@@ -359,7 +349,7 @@ static void set_int32(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -388,9 +378,8 @@ const PropertyInfo qdev_prop_int32 = {
 static void get_uint64(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_uint64(v, name, ptr, errp);
 }
@@ -400,7 +389,7 @@ static void set_uint64(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -413,9 +402,8 @@ static void set_uint64(Object *obj, Visitor *v, const char *name,
 static void get_int64(Object *obj, Visitor *v, const char *name,
                       void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int64_t *ptr = qdev_get_prop_ptr(dev, prop);
+    int64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_int64(v, name, ptr, errp);
 }
@@ -425,7 +413,7 @@ static void set_int64(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int64_t *ptr = qdev_get_prop_ptr(dev, prop);
+    int64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -454,15 +442,14 @@ const PropertyInfo qdev_prop_int64 = {
 static void release_string(Object *obj, const char *name, void *opaque)
 {
     Property *prop = opaque;
-    g_free(*(char **)qdev_get_prop_ptr(DEVICE(obj), prop));
+    g_free(*(char **)qdev_get_prop_ptr(obj, prop));
 }
 
 static void get_string(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    char **ptr = qdev_get_prop_ptr(dev, prop);
+    char **ptr = qdev_get_prop_ptr(obj, prop);
 
     if (!*ptr) {
         char *str = (char *)"";
@@ -477,7 +464,7 @@ static void set_string(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    char **ptr = qdev_get_prop_ptr(dev, prop);
+    char **ptr = qdev_get_prop_ptr(obj, prop);
     char *str;
 
     if (dev->realized) {
@@ -515,9 +502,8 @@ const PropertyInfo qdev_prop_on_off_auto = {
 void qdev_propinfo_get_size32(Object *obj, Visitor *v, const char *name,
                               void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
     uint64_t value = *ptr;
 
     visit_type_size(v, name, &value, errp);
@@ -528,7 +514,7 @@ static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
     uint64_t value;
 
     if (dev->realized) {
@@ -563,9 +549,8 @@ const PropertyInfo qdev_prop_size32 = {
 static void get_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    QemuUUID *uuid = qdev_get_prop_ptr(dev, prop);
+    QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
     char buffer[UUID_FMT_LEN + 1];
     char *p = buffer;
 
@@ -581,7 +566,7 @@ static void set_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    QemuUUID *uuid = qdev_get_prop_ptr(dev, prop);
+    QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
     char *str;
 
     if (dev->realized) {
@@ -653,7 +638,7 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
      */
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *alenptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *alenptr = qdev_get_prop_ptr(obj, prop);
     void **arrayptr = (void *)dev + prop->arrayoffset;
     void *eltptr;
     const char *arrayname;
@@ -699,7 +684,7 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
          * being inside the device struct.
          */
         arrayprop->prop.offset = eltptr - (void *)dev;
-        assert(qdev_get_prop_ptr(dev, &arrayprop->prop) == eltptr);
+        assert(qdev_get_prop_ptr(obj, &arrayprop->prop) == eltptr);
         object_property_add(obj, propname,
                             arrayprop->prop.info->name,
                             arrayprop->prop.info->get,
@@ -893,9 +878,8 @@ void qdev_prop_set_globals(DeviceState *dev)
 static void get_size(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_size(v, name, ptr, errp);
 }
@@ -905,7 +889,7 @@ static void set_size(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
diff --git a/hw/s390x/css.c b/hw/s390x/css.c
index 9961cfe7bf..2b8f33fec2 100644
--- a/hw/s390x/css.c
+++ b/hw/s390x/css.c
@@ -2343,9 +2343,8 @@ void css_reset(void)
 static void get_css_devid(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    CssDevId *dev_id = qdev_get_prop_ptr(dev, prop);
+    CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
     char buffer[] = "xx.x.xxxx";
     char *p = buffer;
     int r;
@@ -2375,7 +2374,7 @@ static void set_css_devid(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    CssDevId *dev_id = qdev_get_prop_ptr(dev, prop);
+    CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
     char *str;
     int num, n1, n2;
     unsigned int cssid, ssid, devid;
diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
index 05f7460aec..8b6be1197b 100644
--- a/hw/s390x/s390-pci-bus.c
+++ b/hw/s390x/s390-pci-bus.c
@@ -1330,7 +1330,7 @@ static void s390_pci_get_fid(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(DEVICE(obj), prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_uint32(v, name, ptr, errp);
 }
@@ -1341,7 +1341,7 @@ static void s390_pci_set_fid(Object *obj, Visitor *v, const char *name,
     DeviceState *dev = DEVICE(obj);
     S390PCIBusDevice *zpci = S390_PCI_DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
index 57150913b7..53569925a2 100644
--- a/hw/vfio/pci-quirks.c
+++ b/hw/vfio/pci-quirks.c
@@ -1488,9 +1488,8 @@ static void get_nv_gpudirect_clique_id(Object *obj, Visitor *v,
                                        const char *name, void *opaque,
                                        Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint8_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_uint8(v, name, ptr, errp);
 }
@@ -1501,7 +1500,7 @@ static void set_nv_gpudirect_clique_id(Object *obj, Visitor *v,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint8_t value, *ptr = qdev_get_prop_ptr(dev, prop);
+    uint8_t value, *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
-- 
2.28.0
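The mechanical change in the patch above is that `qdev_get_prop_ptr()` now takes the `Object *` directly instead of a `DeviceState *`, so each getter/setter can drop its `DEVICE(obj)` downcast. A minimal, self-contained sketch of that pattern (struct layouts and field names here are invented stand-ins, not QEMU's real definitions):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy model of qdev properties: a Property records a byte offset into
 * the owning object; qdev_get_prop_ptr() maps (object, property) to a
 * pointer at that offset.  After the patch it operates on Object*
 * directly, which is what this sketch shows. */
typedef struct Object { char storage[64]; } Object;

typedef struct Property {
    ptrdiff_t offset;   /* offset of the field within the object */
    unsigned bitnr;     /* bit number, for bit-typed properties */
} Property;

static void *qdev_get_prop_ptr(Object *obj, Property *prop)
{
    return (char *)obj + prop->offset;
}

static uint32_t qdev_get_prop_mask(Property *prop)
{
    return 1u << prop->bitnr;
}

/* A bit-property setter in the new style: no DEVICE(obj) cast needed. */
static void bit_prop_set(Object *obj, Property *props, int val)
{
    uint32_t *p = qdev_get_prop_ptr(obj, props);
    uint32_t mask = qdev_get_prop_mask(props);
    if (val) {
        *p |= mask;
    } else {
        *p &= ~mask;
    }
}
```

The setters that still need `dev->realized` keep their `DEVICE(obj)` cast for that check alone, which a later patch in the series removes as well.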



From xen-devel-bounces@lists.xenproject.org Fri Dec 11 22:06:30 2020
From: David Laight <David.Laight@ACULAB.COM>
To: 'Thomas Gleixner' <tglx@linutronix.de>, Tvrtko Ursulin
	<tvrtko.ursulin@linux.intel.com>, LKML <linux-kernel@vger.kernel.org>
Subject: RE: [patch 14/30] drm/i915/pmu: Replace open coded kstat_irqs() copy
Date: Fri, 11 Dec 2020 22:06:24 +0000
Message-ID: <6dd2eb7de7ad4a5893f057b90662718f@AcuMS.aculab.com>
References: <20201210192536.118432146@linutronix.de>
 <20201210194043.957046529@linutronix.de>
 <ad05af1a-5463-2a80-0887-7629721d6863@linux.intel.com>
 <87y2i4h54i.fsf@nanos.tec.linutronix.de>
 <d6cbfa118490459bb0671394f00323fc@AcuMS.aculab.com>
 <87eejwgib7.fsf@nanos.tec.linutronix.de>
In-Reply-To: <87eejwgib7.fsf@nanos.tec.linutronix.de>

From: Thomas Gleixner
> Sent: 11 December 2020 21:11
>
> On Fri, Dec 11 2020 at 14:19, David Laight wrote:
> > From: Thomas Gleixner
> >> You can't catch that. If this really becomes an issue you need a
> >> sequence counter around it.
> >
> > Or just two copies of the high word.
> > Provided the accesses are sequenced:
> > writer:
> >     load high:low
> >     add small_value,high:low
> >     store high
> >     store low
> >     store high_copy
> > reader:
> >     load high_copy
> >     load low
> >     load high
> >     if (high != high_copy)
> >         low = 0;
>
> And low = 0 is solving what? You need to loop back and retry until it's
> consistent and then it's nothing else than an open coded sequence count.

If it is a counter or timestamp, then the value high:0 genuinely existed
at some instant between when you started reading it and when you
finished.

As such it is a perfectly reasonable value to return.
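The two-copies-of-the-high-word scheme being discussed can be sketched in C. All names here (`val64`, `write_counter`, `read_counter`) are invented for illustration, and a real implementation would also need compiler/CPU memory barriers between the stores and loads, which this single-threaded sketch omits:

```c
#include <assert.h>
#include <stdint.h>

/* Split 64-bit value: the writer stores the high word both before and
 * after the low word.  A reader that sees the two high copies disagree
 * knows the low word straddled an update and substitutes 0, i.e. it
 * returns high:0, a value the counter did pass through during the read. */
struct val64 {
    uint32_t high;
    uint32_t low;
    uint32_t high_copy;
};

static void write_counter(struct val64 *v, uint64_t x)
{
    v->high = (uint32_t)(x >> 32);  /* store high      */
    v->low = (uint32_t)x;           /* store low       */
    v->high_copy = v->high;         /* store high_copy */
}

static uint64_t read_counter(const struct val64 *v)
{
    uint32_t hc = v->high_copy;     /* load high_copy */
    uint32_t lo = v->low;           /* load low       */
    uint32_t hi = v->high;          /* load high      */
    if (hi != hc) {
        lo = 0;                     /* torn read: fall back to high:0 */
    }
    return ((uint64_t)hi << 32) | lo;
}
```

Unlike a sequence counter, the reader never loops: on a detected tear it settles for high:0 rather than retrying, which is the trade-off debated above.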

    David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)



From xen-devel-bounces@lists.xenproject.org Fri Dec 11 22:07:22 2020
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v4 23/32] qdev: Move dev->realized check to qdev_property_set()
Date: Fri, 11 Dec 2020 17:05:20 -0500
Message-Id: <20201211220529.2290218-24-ehabkost@redhat.com>
In-Reply-To: <20201211220529.2290218-1-ehabkost@redhat.com>
References: <20201211220529.2290218-1-ehabkost@redhat.com>

Every single qdev property setter function manually checks
dev->realized.  We can just check dev->realized inside
qdev_property_set() instead.

The check is being added as a separate function
(qdev_prop_allow_set()) because it will become a callback later.
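A minimal model of the refactor this commit message describes, with the check hoisted into the generic set path. The struct layout and `qdev_property_set()` signature are simplified stand-ins; only the `qdev_prop_allow_set()` name comes from the patch:

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct DeviceState {
    bool realized;
    int some_prop;   /* stand-in for an arbitrary qdev property */
} DeviceState;

/* The centralized guard: refuse property writes after realize.
 * In the series this later becomes a callback. */
static bool qdev_prop_allow_set(DeviceState *dev, const char *name)
{
    if (dev->realized) {
        fprintf(stderr, "cannot set property '%s' after realize\n", name);
        return false;
    }
    return true;
}

/* Generic set path: the realized check runs once, here, so the
 * per-type setter below no longer needs its own copy of it. */
static bool qdev_property_set(DeviceState *dev, const char *name, int value)
{
    if (!qdev_prop_allow_set(dev, name)) {
        return false;
    }
    dev->some_prop = value;  /* stand-in for the type-specific setter */
    return true;
}
```

This is why every hunk in the diff below is a pure deletion: each setter's hand-rolled `if (dev->realized)` block becomes redundant.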

Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
Changes v1 -> v2:
* Removed unused variable at xen_block_set_vdev()
* Redone patch after changes in the previous patches in the
  series
---
Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Halil Pasic <pasic@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: David Hildenbrand <david@redhat.com>
Cc: Thomas Huth <thuth@redhat.com>
Cc: Matthew Rosato <mjrosato@linux.ibm.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Artyom Tarasenko <atar4qemu@gmail.com>
Cc: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org
Cc: qemu-block@nongnu.org
Cc: qemu-s390x@nongnu.org
---
 backends/tpm/tpm_util.c          |   6 --
 hw/block/xen-block.c             |   6 --
 hw/core/qdev-properties-system.c |  70 ----------------------
 hw/core/qdev-properties.c        | 100 ++++++-------------------------
 hw/s390x/css.c                   |   6 --
 hw/s390x/s390-pci-bus.c          |   6 --
 hw/vfio/pci-quirks.c             |   6 --
 target/sparc/cpu.c               |   6 --
 8 files changed, 18 insertions(+), 188 deletions(-)

diff --git a/backends/tpm/tpm_util.c b/backends/tpm/tpm_util.c
index a5d997e7dc..39b45fa46d 100644
--- a/backends/tpm/tpm_util.c
+++ b/backends/tpm/tpm_util.c
@@ -46,16 +46,10 @@ static void get_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     TPMBackend *s, **be = qdev_get_prop_ptr(obj, prop);
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index 905e4acd97..bd1aef63a7 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -395,17 +395,11 @@ static int vbd_name_to_disk(const char *name, const char **endp,
 static void xen_block_set_vdev(Object *obj, Visitor *v, const char *name,
                                void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
     char *str, *p;
     const char *end;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
index 42529c3b65..f31aea3de1 100644
--- a/hw/core/qdev-properties-system.c
+++ b/hw/core/qdev-properties-system.c
@@ -94,11 +94,6 @@ static void set_drive_helper(Object *obj, Visitor *v, const char *name,
     bool blk_created = false;
     int ret;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -230,17 +225,11 @@ static void get_chr(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     CharBackend *be = qdev_get_prop_ptr(obj, prop);
     Chardev *s;
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -311,18 +300,12 @@ static void get_mac(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_mac(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     MACAddr *mac = qdev_get_prop_ptr(obj, prop);
     int i, pos;
     char *str;
     const char *p;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -390,7 +373,6 @@ static void get_netdev(Object *obj, Visitor *v, const char *name,
 static void set_netdev(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
     NetClientState **ncs = peers_ptr->ncs;
@@ -398,11 +380,6 @@ static void set_netdev(Object *obj, Visitor *v, const char *name,
     int queues, err = 0, i = 0;
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -469,18 +446,12 @@ static void get_audiodev(Object *obj, Visitor *v, const char* name,
 static void set_audiodev(Object *obj, Visitor *v, const char* name,
                          void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
     AudioState *state;
     int err = 0;
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -582,11 +553,6 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
     uint64_t value;
     Error *local_err = NULL;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_size(v, name, &value, errp)) {
         return;
     }
@@ -686,7 +652,6 @@ static void get_reserved_region(Object *obj, Visitor *v, const char *name,
 static void set_reserved_region(Object *obj, Visitor *v, const char *name,
                                 void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
     Error *local_err = NULL;
@@ -694,11 +659,6 @@ static void set_reserved_region(Object *obj, Visitor *v, const char *name,
     char *str;
     int ret;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_str(v, name, &str, &local_err);
     if (local_err) {
         error_propagate(errp, local_err);
@@ -754,17 +714,11 @@ const PropertyInfo qdev_prop_reserved_region = {
 static void set_pci_devfn(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     int32_t value, *ptr = qdev_get_prop_ptr(obj, prop);
     unsigned int slot, fn, n;
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, NULL)) {
         if (!visit_type_int32(v, name, &value, errp)) {
             return;
@@ -848,7 +802,6 @@ static void get_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
 static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
                                  void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
     char *str, *p;
@@ -857,11 +810,6 @@ static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
     unsigned long dom = 0, bus = 0;
     unsigned int slot = 0, func = 0;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -972,16 +920,10 @@ static void get_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
 static void set_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
     int speed;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_enum(v, name, &speed, prop->info->enum_table,
                          errp)) {
         return;
@@ -1057,16 +999,10 @@ static void get_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
 static void set_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
     int width;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_enum(v, name, &width, prop->info->enum_table,
                          errp)) {
         return;
@@ -1129,16 +1065,10 @@ static void get_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index b924f13d58..92f48ecbf2 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -24,6 +24,19 @@ void qdev_prop_set_after_realize(DeviceState *dev, const char *name,
     }
 }
 
+/* returns: true if property is allowed to be set, false otherwise */
+static bool qdev_prop_allow_set(Object *obj, const char *name,
+                                Error **errp)
+{
+    DeviceState *dev = DEVICE(obj);
+
+    if (dev->realized) {
+        qdev_prop_set_after_realize(dev, name, errp);
+        return false;
+    }
+    return true;
+}
+
 void qdev_prop_allow_set_link_before_realize(const Object *obj,
                                              const char *name,
                                              Object *val, Error **errp)
@@ -65,6 +78,11 @@ static void field_prop_set(Object *obj, Visitor *v, const char *name,
                            void *opaque, Error **errp)
 {
     Property *prop = opaque;
+
+    if (!qdev_prop_allow_set(obj, name, errp)) {
+        return;
+    }
+
     return prop->info->set(obj, v, name, opaque, errp);
 }
 
@@ -90,15 +108,9 @@ void qdev_propinfo_get_enum(Object *obj, Visitor *v, const char *name,
 void qdev_propinfo_set_enum(Object *obj, Visitor *v, const char *name,
                             void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     int *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_enum(v, name, ptr, prop->info->enum_table, errp);
 }
 
@@ -148,15 +160,9 @@ static void prop_get_bit(Object *obj, Visitor *v, const char *name,
 static void prop_set_bit(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     bool value;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_bool(v, name, &value, errp)) {
         return;
     }
@@ -208,15 +214,9 @@ static void prop_get_bit64(Object *obj, Visitor *v, const char *name,
 static void prop_set_bit64(Object *obj, Visitor *v, const char *name,
                            void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     bool value;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_bool(v, name, &value, errp)) {
         return;
     }
@@ -245,15 +245,9 @@ static void get_bool(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_bool(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     bool *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_bool(v, name, ptr, errp);
 }
 
@@ -278,15 +272,9 @@ static void get_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_uint8(v, name, ptr, errp);
 }
 
@@ -323,15 +311,9 @@ static void get_uint16(Object *obj, Visitor *v, const char *name,
 static void set_uint16(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_uint16(v, name, ptr, errp);
 }
 
@@ -356,15 +338,9 @@ static void get_uint32(Object *obj, Visitor *v, const char *name,
 static void set_uint32(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_uint32(v, name, ptr, errp);
 }
 
@@ -380,15 +356,9 @@ void qdev_propinfo_get_int32(Object *obj, Visitor *v, const char *name,
 static void set_int32(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     int32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_int32(v, name, ptr, errp);
 }
 
@@ -420,15 +390,9 @@ static void get_uint64(Object *obj, Visitor *v, const char *name,
 static void set_uint64(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_uint64(v, name, ptr, errp);
 }
 
@@ -444,15 +408,9 @@ static void get_int64(Object *obj, Visitor *v, const char *name,
 static void set_int64(Object *obj, Visitor *v, const char *name,
                       void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     int64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_int64(v, name, ptr, errp);
 }
 
@@ -495,16 +453,10 @@ static void get_string(Object *obj, Visitor *v, const char *name,
 static void set_string(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     char **ptr = qdev_get_prop_ptr(obj, prop);
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -545,16 +497,10 @@ void qdev_propinfo_get_size32(Object *obj, Visitor *v, const char *name,
 static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
                        Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
     uint64_t value;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_size(v, name, &value, errp)) {
         return;
     }
@@ -621,10 +567,6 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
     const char *arrayname;
     int i;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
     if (*alenptr) {
         error_setg(errp, "array size property %s may not be set more than once",
                    name);
@@ -864,15 +806,9 @@ static void get_size(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_size(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_size(v, name, ptr, errp);
 }
 
diff --git a/hw/s390x/css.c b/hw/s390x/css.c
index 7a44320d12..496e2c5801 100644
--- a/hw/s390x/css.c
+++ b/hw/s390x/css.c
@@ -2372,18 +2372,12 @@ static void get_css_devid(Object *obj, Visitor *v, const char *name,
 static void set_css_devid(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
     char *str;
     int num, n1, n2;
     unsigned int cssid, ssid, devid;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
index 8b6be1197b..30511f620e 100644
--- a/hw/s390x/s390-pci-bus.c
+++ b/hw/s390x/s390-pci-bus.c
@@ -1338,16 +1338,10 @@ static void s390_pci_get_fid(Object *obj, Visitor *v, const char *name,
 static void s390_pci_set_fid(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     S390PCIBusDevice *zpci = S390_PCI_DEVICE(obj);
     Property *prop = opaque;
     uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_uint32(v, name, ptr, errp)) {
         return;
     }
diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
index 53569925a2..802979635c 100644
--- a/hw/vfio/pci-quirks.c
+++ b/hw/vfio/pci-quirks.c
@@ -1498,15 +1498,9 @@ static void set_nv_gpudirect_clique_id(Object *obj, Visitor *v,
                                        const char *name, void *opaque,
                                        Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint8_t value, *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_uint8(v, name, &value, errp)) {
         return;
     }
diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
index 92534bcd18..b730146bbe 100644
--- a/target/sparc/cpu.c
+++ b/target/sparc/cpu.c
@@ -798,17 +798,11 @@ static void sparc_get_nwindows(Object *obj, Visitor *v, const char *name,
 static void sparc_set_nwindows(Object *obj, Visitor *v, const char *name,
                                void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     const int64_t min = MIN_NWINDOWS;
     const int64_t max = MAX_NWINDOWS;
     SPARCCPU *cpu = SPARC_CPU(obj);
     int64_t value;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_int(v, name, &value, errp)) {
         return;
     }
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Dec 11 22:07:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 22:07:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50932.89801 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knqZS-0006tZ-L9; Fri, 11 Dec 2020 22:07:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50932.89801; Fri, 11 Dec 2020 22:07:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knqZS-0006tQ-Gl; Fri, 11 Dec 2020 22:07:26 +0000
Received: by outflank-mailman (input) for mailman id 50932;
 Fri, 11 Dec 2020 22:07:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YvCS=FP=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1knqZR-0006s6-K5
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 22:07:25 +0000
Received: from galois.linutronix.de (unknown [2a0a:51c0:0:12e:550::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1c146a60-1cb4-4cb1-8ed7-f25d7d461d55;
 Fri, 11 Dec 2020 22:07:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c146a60-1cb4-4cb1-8ed7-f25d7d461d55
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607724443;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=uE5KR5qlfz+ztx9s8pbRCB1Be+kxvrwztlDzBEoJCeU=;
	b=upIVkJhj5g1RmAalIreFExmj0QWc7xxC7dfgYyTKQ7/s1CPaLrvswCn2cis9a5ELIHv+rz
	IPyhjuoIqfAMI+60hoe1U1U27IhweRJLvNYpN7cTAqQ0o0KlM2VdD+/ibGw4UCrbJtLWLY
	inuTJUAvrZhdYHuTfLNeogwgdoyl5SGN/dy3kNNRxVqcfza2kw+U/H2LlJBjtaANpZFTIZ
	ptr78MOU90DKS42tb76OuG8X418bmY7SaqmPbyqibyrcTnWO7/ljKjMmuQI0HfaTAZd7Ho
	gSP1o6so90u917yflugfQ7VOzMuRufMAcgTvFJSwBRyBFBjjIC3oXcfJ628ySQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607724443;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=uE5KR5qlfz+ztx9s8pbRCB1Be+kxvrwztlDzBEoJCeU=;
	b=6l4EGHeIGgEoz1BM8xIprj71LYQ/8TNyn+KsbwNoVJfPjiraGOnwYmma/niMdHbbcB5BbA
	SooB9xEZT4EUdpBw==
To: Andy Shevchenko <andy.shevchenko@gmail.com>
Cc: LKML <linux-kernel@vger.kernel.org>, Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>, "James E.J.
 Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller <deller@gmx.de>, afzal mohammed <afzal.mohd.ma@gmail.com>, linux-parisc@vger.kernel.org, Russell King <linux@armlinux.org.uk>, linux-arm Mailing List <linux-arm-kernel@lists.infradead.org>, Mark
 Rutland <mark.rutland@arm.com>, Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>, Christian Borntraeger
 <borntraeger@de.ibm.com>, Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org, Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>, Rodrigo Vivi
 <rodrigo.vivi@intel.com>, David Airlie <airlied@linux.ie>, Daniel Vetter
 <daniel@ffwll.ch>, Pankaj Bharadiya
 <pankaj.laxminarayan.bharadiya@intel.com>, Chris Wilson
 <chris@chris-wilson.co.uk>, Wambui Karuga <wambui.karugax@gmail.com>,
 intel-gfx <intel-gfx@lists.freedesktop.org>, dri-devel
 <dri-devel@lists.freedesktop.org>, Tvrtko Ursulin
 <tvrtko.ursulin@linux.intel.com>, Linus Walleij
 <linus.walleij@linaro.org>, "open list\:GPIO SUBSYSTEM"
 <linux-gpio@vger.kernel.org>, Lee Jones <lee.jones@linaro.org>, Jon Mason
 <jdmason@kudzu.us>, Dave Jiang <dave.jiang@intel.com>, Allen Hubbe
 <allenbh@gmail.com>, linux-ntb@googlegroups.com, Lorenzo Pieralisi
 <lorenzo.pieralisi@arm.com>, Rob Herring <robh@kernel.org>, Bjorn Helgaas
 <bhelgaas@google.com>, Michal Simek <michal.simek@xilinx.com>, linux-pci
 <linux-pci@vger.kernel.org>, Karthikeyan Mitran
 <m.karthikeyan@mobiveil.co.in>, Hou Zhiqiang <Zhiqiang.Hou@nxp.com>, Tariq
 Toukan <tariqt@nvidia.com>, "David S. Miller" <davem@davemloft.net>, Jakub
 Kicinski <kuba@kernel.org>, netdev <netdev@vger.kernel.org>, "open
 list\:HFI1 DRIVER" <linux-rdma@vger.kernel.org>, Saeed Mahameed
 <saeedm@nvidia.com>, Leon Romanovsky <leon@kernel.org>, Boris Ostrovsky
 <boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Subject: Re: [patch 03/30] genirq: Move irq_set_lockdep_class() to core
In-Reply-To: <87h7osgifc.fsf@nanos.tec.linutronix.de>
References: <20201210192536.118432146@linutronix.de> <20201210194042.860029489@linutronix.de> <CAHp75Vc-2OjE2uwvNRiyLMQ8GSN3P7SehKD-yf229_7ocaktiw@mail.gmail.com> <87h7osgifc.fsf@nanos.tec.linutronix.de>
Date: Fri, 11 Dec 2020 23:07:22 +0100
Message-ID: <87360cgfol.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain

On Fri, Dec 11 2020 at 22:08, Thomas Gleixner wrote:

> On Fri, Dec 11 2020 at 19:53, Andy Shevchenko wrote:
>
>> On Thu, Dec 10, 2020 at 10:14 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>>>
>>> irq_set_lockdep_class() is used from modules and requires irq_to_desc() to
>>> be exported. Move it into the core code which lifts another requirement for
>>> the export.
>>
>> ...
>>
>>> +       if (IS_ENABLED(CONFIG_LOCKDEP))
>>> +               __irq_set_lockdep_class(irq, lock_class, request_class);
>
> You are right. Let me fix that.

No. I have to correct myself. You're wrong.

The inline is evaluated in the compilation units which include that
header, and because the function declaration is unconditional the
compiler is happy.

Now the optimizer stage makes the whole thing a NOOP if CONFIG_LOCKDEP=n
and thereby drops the reference to the function, which means it is not
required for linking.

So in the file where the function is implemented:

#ifdef CONFIG_LOCKDEP
void __irq_set_lockdep_class(....)
{
}
#endif

The whole block is either discarded because CONFIG_LOCKDEP is not
defined, or compiled if it is defined, which makes it available for the
linker.

And in the latter case the optimizer keeps the call in the inline (it
optimizes the condition away because it's always true).

So in both cases the compiler and the linker are happy and everything
works as expected.

It would fail if the header file had the following:

#ifdef CONFIG_LOCKDEP
void __irq_set_lockdep_class(....);
#endif

Because then it would complain about the missing function prototype when
it evaluates the inline.

Thanks,

        tglx


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 22:07:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 22:07:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50940.89813 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knqZw-00073P-UN; Fri, 11 Dec 2020 22:07:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50940.89813; Fri, 11 Dec 2020 22:07:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knqZw-00073I-RE; Fri, 11 Dec 2020 22:07:56 +0000
Received: by outflank-mailman (input) for mailman id 50940;
 Fri, 11 Dec 2020 22:07:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8Vdm=FP=redhat.com=ehabkost@srs-us1.protection.inumbo.net>)
 id 1knqZv-000732-1m
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 22:07:55 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id b1795c04-b247-4fdf-a045-784b1937d0b7;
 Fri, 11 Dec 2020 22:07:53 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-536-YzaAqyL8PFaykowR_ziKuw-1; Fri, 11 Dec 2020 17:07:49 -0500
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com
 [10.5.11.16])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 20688C294;
 Fri, 11 Dec 2020 22:07:47 +0000 (UTC)
Received: from localhost (ovpn-116-160.rdu2.redhat.com [10.10.116.160])
 by smtp.corp.redhat.com (Postfix) with ESMTP id CFBD25C8AA;
 Fri, 11 Dec 2020 22:07:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b1795c04-b247-4fdf-a045-784b1937d0b7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607724472;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/5ZoGD/Z4aeOrYeqXoenztMGPJ1MAS8OAd9A0/6z3M4=;
	b=Kzlg6zy20qY8nNf59l6EzwhV0gwmr1h7YHS6AYVDI8KU0SdWhxcURDDRIhyjFeSnJIz0we
	coxYo89iCn5TRbDyCFaEwFEVbVUqxdnl5jTE8O11AiZkglk7a+5FK66SU8QMTLStKATgI8
	b+XbR9VT5r377b+qIci4xJNTScaLnPs=
X-MC-Unique: YzaAqyL8PFaykowR_ziKuw-1
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Cc: Markus Armbruster <armbru@redhat.com>,
	Igor Mammedov <imammedo@redhat.com>,
	Stefan Berger <stefanb@linux.ibm.com>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
	"Daniel P. Berrange" <berrange@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	John Snow <jsnow@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Eric Blake <eblake@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Berger <stefanb@linux.vnet.ibm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Max Reitz <mreitz@redhat.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Halil Pasic <pasic@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Richard Henderson <rth@twiddle.net>,
	David Hildenbrand <david@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Matthew Rosato <mjrosato@linux.ibm.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	qemu-s390x@nongnu.org
Subject: [PATCH v4 30/32] qdev: Rename qdev_get_prop_ptr() to object_field_prop_ptr()
Date: Fri, 11 Dec 2020 17:05:27 -0500
Message-Id: <20201211220529.2290218-31-ehabkost@redhat.com>
In-Reply-To: <20201211220529.2290218-1-ehabkost@redhat.com>
References: <20201211220529.2290218-1-ehabkost@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=ehabkost@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The function will be moved to common QOM code, as it is not
specific to TYPE_DEVICE anymore.

Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
Changes v1 -> v2:
* Rename to object_field_prop_ptr() instead of object_static_prop_ptr()
---
Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Halil Pasic <pasic@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: David Hildenbrand <david@redhat.com>
Cc: Thomas Huth <thuth@redhat.com>
Cc: Matthew Rosato <mjrosato@linux.ibm.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org
Cc: qemu-block@nongnu.org
Cc: qemu-s390x@nongnu.org
---
 include/hw/qdev-properties.h     |  2 +-
 backends/tpm/tpm_util.c          |  6 ++--
 hw/block/xen-block.c             |  4 +--
 hw/core/qdev-properties-system.c | 50 +++++++++++++-------------
 hw/core/qdev-properties.c        | 60 ++++++++++++++++----------------
 hw/s390x/css.c                   |  4 +--
 hw/s390x/s390-pci-bus.c          |  4 +--
 hw/vfio/pci-quirks.c             |  4 +--
 8 files changed, 67 insertions(+), 67 deletions(-)

diff --git a/include/hw/qdev-properties.h b/include/hw/qdev-properties.h
index 90222822f1..97bb9494ae 100644
--- a/include/hw/qdev-properties.h
+++ b/include/hw/qdev-properties.h
@@ -193,7 +193,7 @@ void qdev_prop_set_macaddr(DeviceState *dev, const char *name,
                            const uint8_t *value);
 void qdev_prop_set_enum(DeviceState *dev, const char *name, int value);
 
-void *qdev_get_prop_ptr(Object *obj, Property *prop);
+void *object_field_prop_ptr(Object *obj, Property *prop);
 
 void qdev_prop_register_global(GlobalProperty *prop);
 const GlobalProperty *qdev_find_global_prop(Object *obj,
diff --git a/backends/tpm/tpm_util.c b/backends/tpm/tpm_util.c
index 39b45fa46d..a6e6d3e72f 100644
--- a/backends/tpm/tpm_util.c
+++ b/backends/tpm/tpm_util.c
@@ -35,7 +35,7 @@
 static void get_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    TPMBackend **be = qdev_get_prop_ptr(obj, opaque);
+    TPMBackend **be = object_field_prop_ptr(obj, opaque);
     char *p;
 
     p = g_strdup(*be ? (*be)->id : "");
@@ -47,7 +47,7 @@ static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
     Property *prop = opaque;
-    TPMBackend *s, **be = qdev_get_prop_ptr(obj, prop);
+    TPMBackend *s, **be = object_field_prop_ptr(obj, prop);
     char *str;
 
     if (!visit_type_str(v, name, &str, errp)) {
@@ -67,7 +67,7 @@ static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
 static void release_tpm(Object *obj, const char *name, void *opaque)
 {
     Property *prop = opaque;
-    TPMBackend **be = qdev_get_prop_ptr(obj, prop);
+    TPMBackend **be = object_field_prop_ptr(obj, prop);
 
     if (*be) {
         tpm_backend_reset(*be);
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index bd1aef63a7..718d886e5c 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -336,7 +336,7 @@ static void xen_block_get_vdev(Object *obj, Visitor *v, const char *name,
                                void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
+    XenBlockVdev *vdev = object_field_prop_ptr(obj, prop);
     char *str;
 
     switch (vdev->type) {
@@ -396,7 +396,7 @@ static void xen_block_set_vdev(Object *obj, Visitor *v, const char *name,
                                void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
+    XenBlockVdev *vdev = object_field_prop_ptr(obj, prop);
     char *str, *p;
     const char *end;
 
diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
index 590c5f3d97..e6d378a34e 100644
--- a/hw/core/qdev-properties-system.c
+++ b/hw/core/qdev-properties-system.c
@@ -62,7 +62,7 @@ static void get_drive(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
     Property *prop = opaque;
-    void **ptr = qdev_get_prop_ptr(obj, prop);
+    void **ptr = object_field_prop_ptr(obj, prop);
     const char *value;
     char *p;
 
@@ -88,7 +88,7 @@ static void set_drive_helper(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    void **ptr = qdev_get_prop_ptr(obj, prop);
+    void **ptr = object_field_prop_ptr(obj, prop);
     char *str;
     BlockBackend *blk;
     bool blk_created = false;
@@ -181,7 +181,7 @@ static void release_drive(Object *obj, const char *name, void *opaque)
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    BlockBackend **ptr = qdev_get_prop_ptr(obj, prop);
+    BlockBackend **ptr = object_field_prop_ptr(obj, prop);
 
     if (*ptr) {
         AioContext *ctx = blk_get_aio_context(*ptr);
@@ -214,7 +214,7 @@ const PropertyInfo qdev_prop_drive_iothread = {
 static void get_chr(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    CharBackend *be = qdev_get_prop_ptr(obj, opaque);
+    CharBackend *be = object_field_prop_ptr(obj, opaque);
     char *p;
 
     p = g_strdup(be->chr && be->chr->label ? be->chr->label : "");
@@ -226,7 +226,7 @@ static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
     Property *prop = opaque;
-    CharBackend *be = qdev_get_prop_ptr(obj, prop);
+    CharBackend *be = object_field_prop_ptr(obj, prop);
     Chardev *s;
     char *str;
 
@@ -262,7 +262,7 @@ static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
 static void release_chr(Object *obj, const char *name, void *opaque)
 {
     Property *prop = opaque;
-    CharBackend *be = qdev_get_prop_ptr(obj, prop);
+    CharBackend *be = object_field_prop_ptr(obj, prop);
 
     qemu_chr_fe_deinit(be, false);
 }
@@ -286,7 +286,7 @@ static void get_mac(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
     Property *prop = opaque;
-    MACAddr *mac = qdev_get_prop_ptr(obj, prop);
+    MACAddr *mac = object_field_prop_ptr(obj, prop);
     char buffer[2 * 6 + 5 + 1];
     char *p = buffer;
 
@@ -301,7 +301,7 @@ static void set_mac(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
     Property *prop = opaque;
-    MACAddr *mac = qdev_get_prop_ptr(obj, prop);
+    MACAddr *mac = object_field_prop_ptr(obj, prop);
     int i, pos;
     char *str;
     const char *p;
@@ -363,7 +363,7 @@ static void get_netdev(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
+    NICPeers *peers_ptr = object_field_prop_ptr(obj, prop);
     char *p = g_strdup(peers_ptr->ncs[0] ? peers_ptr->ncs[0]->name : "");
 
     visit_type_str(v, name, &p, errp);
@@ -374,7 +374,7 @@ static void set_netdev(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
+    NICPeers *peers_ptr = object_field_prop_ptr(obj, prop);
     NetClientState **ncs = peers_ptr->ncs;
     NetClientState *peers[MAX_QUEUE_NUM];
     int queues, err = 0, i = 0;
@@ -436,7 +436,7 @@ static void get_audiodev(Object *obj, Visitor *v, const char* name,
                          void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
+    QEMUSoundCard *card = object_field_prop_ptr(obj, prop);
     char *p = g_strdup(audio_get_id(card));
 
     visit_type_str(v, name, &p, errp);
@@ -447,7 +447,7 @@ static void set_audiodev(Object *obj, Visitor *v, const char* name,
                          void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
+    QEMUSoundCard *card = object_field_prop_ptr(obj, prop);
     AudioState *state;
     int err = 0;
     char *str;
@@ -549,7 +549,7 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_field_prop_ptr(obj, prop);
     uint64_t value;
     Error *local_err = NULL;
 
@@ -637,7 +637,7 @@ static void get_reserved_region(Object *obj, Visitor *v, const char *name,
                                 void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
+    ReservedRegion *rr = object_field_prop_ptr(obj, prop);
     char buffer[64];
     char *p = buffer;
     int rc;
@@ -653,7 +653,7 @@ static void set_reserved_region(Object *obj, Visitor *v, const char *name,
                                 void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
+    ReservedRegion *rr = object_field_prop_ptr(obj, prop);
     Error *local_err = NULL;
     const char *endptr;
     char *str;
@@ -715,7 +715,7 @@ static void set_pci_devfn(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    int32_t value, *ptr = qdev_get_prop_ptr(obj, prop);
+    int32_t value, *ptr = object_field_prop_ptr(obj, prop);
     unsigned int slot, fn, n;
     char *str;
 
@@ -753,7 +753,7 @@ invalid:
 static int print_pci_devfn(Object *obj, Property *prop, char *dest,
                            size_t len)
 {
-    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    int32_t *ptr = object_field_prop_ptr(obj, prop);
 
     if (*ptr == -1) {
         return snprintf(dest, len, "<unset>");
@@ -777,7 +777,7 @@ static void get_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
                                  void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
+    PCIHostDeviceAddress *addr = object_field_prop_ptr(obj, prop);
     char buffer[] = "ffff:ff:ff.f";
     char *p = buffer;
     int rc = 0;
@@ -803,7 +803,7 @@ static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
                                  void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
+    PCIHostDeviceAddress *addr = object_field_prop_ptr(obj, prop);
     char *str, *p;
     char *e;
     unsigned long val;
@@ -893,7 +893,7 @@ static void get_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
+    PCIExpLinkSpeed *p = object_field_prop_ptr(obj, prop);
     int speed;
 
     switch (*p) {
@@ -921,7 +921,7 @@ static void set_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
+    PCIExpLinkSpeed *p = object_field_prop_ptr(obj, prop);
     int speed;
 
     if (!visit_type_enum(v, name, &speed, prop->info->enum_table,
@@ -963,7 +963,7 @@ static void get_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
+    PCIExpLinkWidth *p = object_field_prop_ptr(obj, prop);
     int width;
 
     switch (*p) {
@@ -1000,7 +1000,7 @@ static void set_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
+    PCIExpLinkWidth *p = object_field_prop_ptr(obj, prop);
     int width;
 
     if (!visit_type_enum(v, name, &width, prop->info->enum_table,
@@ -1051,7 +1051,7 @@ static void get_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
     Property *prop = opaque;
-    QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
+    QemuUUID *uuid = object_field_prop_ptr(obj, prop);
     char buffer[UUID_FMT_LEN + 1];
     char *p = buffer;
 
@@ -1066,7 +1066,7 @@ static void set_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
     Property *prop = opaque;
-    QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
+    QemuUUID *uuid = object_field_prop_ptr(obj, prop);
     char *str;
 
     if (!visit_type_str(v, name, &str, errp)) {
diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index c1dd4ae71b..3d648b088d 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -50,7 +50,7 @@ void qdev_prop_allow_set_link_before_realize(const Object *obj,
     }
 }
 
-void *qdev_get_prop_ptr(Object *obj, Property *prop)
+void *object_field_prop_ptr(Object *obj, Property *prop)
 {
     void *ptr = obj;
     ptr += prop->offset;
@@ -100,7 +100,7 @@ void field_prop_get_enum(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    int *ptr = qdev_get_prop_ptr(obj, prop);
+    int *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_enum(v, name, ptr, prop->info->enum_table, errp);
 }
@@ -109,7 +109,7 @@ void field_prop_set_enum(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    int *ptr = qdev_get_prop_ptr(obj, prop);
+    int *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_enum(v, name, ptr, prop->info->enum_table, errp);
 }
@@ -138,7 +138,7 @@ static uint32_t qdev_get_prop_mask(Property *prop)
 
 static void bit_prop_set(Object *obj, Property *props, bool val)
 {
-    uint32_t *p = qdev_get_prop_ptr(obj, props);
+    uint32_t *p = object_field_prop_ptr(obj, props);
     uint32_t mask = qdev_get_prop_mask(props);
     if (val) {
         *p |= mask;
@@ -151,7 +151,7 @@ static void prop_get_bit(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *p = qdev_get_prop_ptr(obj, prop);
+    uint32_t *p = object_field_prop_ptr(obj, prop);
     bool value = (*p & qdev_get_prop_mask(prop)) != 0;
 
     visit_type_bool(v, name, &value, errp);
@@ -192,7 +192,7 @@ static uint64_t qdev_get_prop_mask64(Property *prop)
 
 static void bit64_prop_set(Object *obj, Property *props, bool val)
 {
-    uint64_t *p = qdev_get_prop_ptr(obj, props);
+    uint64_t *p = object_field_prop_ptr(obj, props);
     uint64_t mask = qdev_get_prop_mask64(props);
     if (val) {
         *p |= mask;
@@ -205,7 +205,7 @@ static void prop_get_bit64(Object *obj, Visitor *v, const char *name,
                            void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint64_t *p = qdev_get_prop_ptr(obj, prop);
+    uint64_t *p = object_field_prop_ptr(obj, prop);
     bool value = (*p & qdev_get_prop_mask64(prop)) != 0;
 
     visit_type_bool(v, name, &value, errp);
@@ -237,7 +237,7 @@ static void get_bool(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
     Property *prop = opaque;
-    bool *ptr = qdev_get_prop_ptr(obj, prop);
+    bool *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_bool(v, name, ptr, errp);
 }
@@ -246,7 +246,7 @@ static void set_bool(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
     Property *prop = opaque;
-    bool *ptr = qdev_get_prop_ptr(obj, prop);
+    bool *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_bool(v, name, ptr, errp);
 }
@@ -264,7 +264,7 @@ static void get_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
     Property *prop = opaque;
-    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint8_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint8(v, name, ptr, errp);
 }
@@ -273,7 +273,7 @@ static void set_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
     Property *prop = opaque;
-    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint8_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint8(v, name, ptr, errp);
 }
@@ -303,7 +303,7 @@ static void get_uint16(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint16_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint16(v, name, ptr, errp);
 }
@@ -312,7 +312,7 @@ static void set_uint16(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint16_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint16(v, name, ptr, errp);
 }
@@ -330,7 +330,7 @@ static void get_uint32(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint32(v, name, ptr, errp);
 }
@@ -339,7 +339,7 @@ static void set_uint32(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint32(v, name, ptr, errp);
 }
@@ -348,7 +348,7 @@ void field_prop_get_int32(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    int32_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_int32(v, name, ptr, errp);
 }
@@ -357,7 +357,7 @@ static void set_int32(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
     Property *prop = opaque;
-    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    int32_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_int32(v, name, ptr, errp);
 }
@@ -382,7 +382,7 @@ static void get_uint64(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint64_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint64(v, name, ptr, errp);
 }
@@ -391,7 +391,7 @@ static void set_uint64(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint64_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint64(v, name, ptr, errp);
 }
@@ -400,7 +400,7 @@ static void get_int64(Object *obj, Visitor *v, const char *name,
                       void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    int64_t *ptr = qdev_get_prop_ptr(obj, prop);
+    int64_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_int64(v, name, ptr, errp);
 }
@@ -409,7 +409,7 @@ static void set_int64(Object *obj, Visitor *v, const char *name,
                       void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    int64_t *ptr = qdev_get_prop_ptr(obj, prop);
+    int64_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_int64(v, name, ptr, errp);
 }
@@ -433,14 +433,14 @@ const PropertyInfo prop_info_int64 = {
 static void release_string(Object *obj, const char *name, void *opaque)
 {
     Property *prop = opaque;
-    g_free(*(char **)qdev_get_prop_ptr(obj, prop));
+    g_free(*(char **)object_field_prop_ptr(obj, prop));
 }
 
 static void get_string(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    char **ptr = qdev_get_prop_ptr(obj, prop);
+    char **ptr = object_field_prop_ptr(obj, prop);
 
     if (!*ptr) {
         char *str = (char *)"";
@@ -454,7 +454,7 @@ static void set_string(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    char **ptr = qdev_get_prop_ptr(obj, prop);
+    char **ptr = object_field_prop_ptr(obj, prop);
     char *str;
 
     if (!visit_type_str(v, name, &str, errp)) {
@@ -488,7 +488,7 @@ void field_prop_get_size32(Object *obj, Visitor *v, const char *name,
                            void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_field_prop_ptr(obj, prop);
     uint64_t value = *ptr;
 
     visit_type_size(v, name, &value, errp);
@@ -498,7 +498,7 @@ static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
                        Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_field_prop_ptr(obj, prop);
     uint64_t value;
 
     if (!visit_type_size(v, name, &value, errp)) {
@@ -561,7 +561,7 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
      */
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *alenptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *alenptr = object_field_prop_ptr(obj, prop);
     void **arrayptr = (void *)dev + prop->arrayoffset;
     void *eltptr;
     const char *arrayname;
@@ -603,7 +603,7 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
          * being inside the device struct.
          */
         arrayprop->prop.offset = eltptr - (void *)dev;
-        assert(qdev_get_prop_ptr(obj, &arrayprop->prop) == eltptr);
+        assert(object_field_prop_ptr(obj, &arrayprop->prop) == eltptr);
         object_property_add(obj, propname,
                             arrayprop->prop.info->name,
                             field_prop_getter(arrayprop->prop.info),
@@ -798,7 +798,7 @@ static void get_size(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint64_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_size(v, name, ptr, errp);
 }
@@ -807,7 +807,7 @@ static void set_size(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint64_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_size(v, name, ptr, errp);
 }
diff --git a/hw/s390x/css.c b/hw/s390x/css.c
index 496e2c5801..fe47751df4 100644
--- a/hw/s390x/css.c
+++ b/hw/s390x/css.c
@@ -2344,7 +2344,7 @@ static void get_css_devid(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
+    CssDevId *dev_id = object_field_prop_ptr(obj, prop);
     char buffer[] = "xx.x.xxxx";
     char *p = buffer;
     int r;
@@ -2373,7 +2373,7 @@ static void set_css_devid(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
+    CssDevId *dev_id = object_field_prop_ptr(obj, prop);
     char *str;
     int num, n1, n2;
     unsigned int cssid, ssid, devid;
diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
index 30511f620e..dd138dae94 100644
--- a/hw/s390x/s390-pci-bus.c
+++ b/hw/s390x/s390-pci-bus.c
@@ -1330,7 +1330,7 @@ static void s390_pci_get_fid(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint32(v, name, ptr, errp);
 }
@@ -1340,7 +1340,7 @@ static void s390_pci_set_fid(Object *obj, Visitor *v, const char *name,
 {
     S390PCIBusDevice *zpci = S390_PCI_DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_field_prop_ptr(obj, prop);
 
     if (!visit_type_uint32(v, name, ptr, errp)) {
         return;
diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
index 802979635c..fc8d63c850 100644
--- a/hw/vfio/pci-quirks.c
+++ b/hw/vfio/pci-quirks.c
@@ -1489,7 +1489,7 @@ static void get_nv_gpudirect_clique_id(Object *obj, Visitor *v,
                                        Error **errp)
 {
     Property *prop = opaque;
-    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint8_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint8(v, name, ptr, errp);
 }
@@ -1499,7 +1499,7 @@ static void set_nv_gpudirect_clique_id(Object *obj, Visitor *v,
                                        Error **errp)
 {
     Property *prop = opaque;
-    uint8_t value, *ptr = qdev_get_prop_ptr(obj, prop);
+    uint8_t value, *ptr = object_field_prop_ptr(obj, prop);
 
     if (!visit_type_uint8(v, name, &value, errp)) {
         return;
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Dec 11 22:21:58 2020
Subject: Re: [patch 27/30] xen/events: Only force affinity mask for percpu
 interrupts
To: Thomas Gleixner <tglx@linutronix.de>, <boris.ostrovsky@oracle.com>,
	=?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>, LKML
	<linux-kernel@vger.kernel.org>
CC: Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	<xen-devel@lists.xenproject.org>, "James E.J. Bottomley"
	<James.Bottomley@HansenPartnership.com>, Helge Deller <deller@gmx.de>, "afzal
 mohammed" <afzal.mohd.ma@gmail.com>, <linux-parisc@vger.kernel.org>, "Russell
 King" <linux@armlinux.org.uk>, <linux-arm-kernel@lists.infradead.org>, "Mark
 Rutland" <mark.rutland@arm.com>, Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, Christian Borntraeger
	<borntraeger@de.ibm.com>, Heiko Carstens <hca@linux.ibm.com>,
	<linux-s390@vger.kernel.org>, Jani Nikula <jani.nikula@linux.intel.com>,
	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>, Rodrigo Vivi
	<rodrigo.vivi@intel.com>, David Airlie <airlied@linux.ie>, Daniel Vetter
	<daniel@ffwll.ch>, Pankaj Bharadiya
	<pankaj.laxminarayan.bharadiya@intel.com>, Chris Wilson
	<chris@chris-wilson.co.uk>, Wambui Karuga <wambui.karugax@gmail.com>,
	<intel-gfx@lists.freedesktop.org>, <dri-devel@lists.freedesktop.org>, "Tvrtko
 Ursulin" <tvrtko.ursulin@linux.intel.com>, Linus Walleij
	<linus.walleij@linaro.org>, <linux-gpio@vger.kernel.org>, Lee Jones
	<lee.jones@linaro.org>, Jon Mason <jdmason@kudzu.us>, Dave Jiang
	<dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>,
	<linux-ntb@googlegroups.com>, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
	Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>, "Michal
 Simek" <michal.simek@xilinx.com>, <linux-pci@vger.kernel.org>, "Karthikeyan
 Mitran" <m.karthikeyan@mobiveil.co.in>, Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
	Tariq Toukan <tariqt@nvidia.com>, "David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>, <netdev@vger.kernel.org>,
	<linux-rdma@vger.kernel.org>, Saeed Mahameed <saeedm@nvidia.com>, "Leon
 Romanovsky" <leon@kernel.org>
References: <20201210192536.118432146@linutronix.de>
 <20201210194045.250321315@linutronix.de>
 <7f7af60f-567f-cdef-f8db-8062a44758ce@oracle.com>
 <2164a0ce-0e0d-c7dc-ac97-87c8f384ad82@suse.com>
 <871rfwiknd.fsf@nanos.tec.linutronix.de>
 <9806692f-24a3-4b6f-ae55-86bd66481271@oracle.com>
 <877dpoghio.fsf@nanos.tec.linutronix.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <edbedd7a-4463-d934-73c9-fa046c19cf6d@citrix.com>
Date: Fri, 11 Dec 2020 22:21:19 +0000
In-Reply-To: <877dpoghio.fsf@nanos.tec.linutronix.de>

On 11/12/2020 21:27, Thomas Gleixner wrote:
> On Fri, Dec 11 2020 at 09:29, boris ostrovsky wrote:
>
>> On 12/11/20 7:37 AM, Thomas Gleixner wrote:
>>> On Fri, Dec 11 2020 at 13:10, Jürgen Groß wrote:
>>>> On 11.12.20 00:20, boris.ostrovsky@oracle.com wrote:
>>>>> On 12/10/20 2:26 PM, Thomas Gleixner wrote:
>>>>>> Change the implementation so that the channel is bound to CPU0 at the XEN
>>>>>> level and leave the affinity mask alone. At startup of the interrupt
>>>>>> affinity will be assigned out of the affinity mask and the XEN binding will
>>>>>> be updated.
>>>>> If that's the case then I wonder whether we need this call at all and instead bind at startup time.
>>>> After some discussion with Thomas on IRC and xen-devel archaeology the
>>>> result is: this will be needed especially for systems running on a
>>>> single vcpu (e.g. small guests), as the .irq_set_affinity() callback
>>>> won't be called in this case when starting the irq.
>> On UP are we not then going to end up with an empty affinity mask? Or
>> are we guaranteed to have it set to 1 by interrupt generic code?
> An UP kernel does not ever look on the affinity mask. The
> chip::irq_set_affinity() callback is not invoked so the mask is
> irrelevant.
>
> A SMP kernel on a UP machine sets CPU0 in the mask so all is good.
>
>> This is actually why I brought this up in the first place --- a
>> potential mismatch between the affinity mask and Xen-specific data
>> (e.g. info->cpu and then protocol-specific data in event channel
>> code). Even if they are re-synchronized later, at startup time (for
>> SMP).
> Which is not a problem either. The affinity mask is only relevant for
> setting the affinity, but it's not relevant for delivery and never can
> be.
>
>> I don't see anything that would cause a problem right now but I worry
>> that this inconsistency may come up at some point.
> As long as the affinity mask becomes not part of the event channel magic
> this should never matter.
>
> Look at it from hardware:
>
> interrupt is affine to CPU0
>
>      CPU0 runs:
>      
>      set_affinity(CPU0 -> CPU1)
>         local_irq_disable()
>         
>  --> interrupt is raised in hardware and pending on CPU0
>
>         irq hardware is reconfigured to be affine to CPU1
>
>         local_irq_enable()
>
>  --> interrupt is handled on CPU0
>
> the next interrupt will be raised on CPU1
>
> So info->cpu which is registered via the hypercall binds the 'hardware
> delivery' and whenever the new affinity is written it is rebound to some
> other CPU and the next interrupt is then raised on this other CPU.
>
> It's not any different from the hardware example at least not as far as
> I understood the code.

Xen's event channels do have a couple of quirks.

Binding an event channel always results in one spurious event being
delivered.  This is to cover notifications which can get lost during the
bidirectional setup, or re-setups in certain configurations.

Binding an interdomain or pirq event channel always defaults to vCPU0.
There is no way to atomically set the affinity while binding.  I believe
the API predates SMP guest support in Xen, and no one has fixed it up since.

As a consequence, the guest will observe the event raised on vCPU0 as
part of setting up the event, even if it attempts to set a different
affinity immediately afterwards.  A little bit of care needs to be taken
when binding an event channel on vCPUs other than 0, to ensure that the
callback is safe with respect to any remaining state needing initialisation.

Beyond this, there is nothing magic I'm aware of.

We have seen soft lockups before in certain scenarios, simply due to the
quantity of events hitting vCPU0 before irqbalance gets around to
spreading the load.  This is why there is an attempt to round-robin the
userspace event channel affinities by default, but I still don't see why
this would need custom affinity logic itself.

Thanks,

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 22:56:54 2020
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1607727400;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RMdJ5cNpvG94MqmN/2d6QVEWUP/YrLyBgV8MUkzsNlA=;
	b=0WcDMUhLUE5DmQJjljMHI2zWwird6I5Vzppg/zn6AZ6e1OM3STU+JeGE05b5oqLFOvFfmh
	CDXWZX+LKXvz41MQk+jw0mabsf/xLxFNSlsc7D+IqZgppXn+tyA6NewoUrmvCeXtmt2AST
	EViRmHdbhCgpLeje9DpFYTQlqeoEtc+eq14ZwN+Y4G4vhAzpgACbffqabmLR4OWIgQ5TkH
	0732UhUwEAjs3Gs9iSFScNlfSc9uVrpQKD7eElRAgICAQIyVgSIk6IrtHhKDjUooShlbLw
	6lnH9jzEFDJiTz3F4zTjotppmf5h8vHSXnbS356QRRnsZMAn1kegUSilRX0RVQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1607727400;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RMdJ5cNpvG94MqmN/2d6QVEWUP/YrLyBgV8MUkzsNlA=;
	b=ljcajSXupjVbHPbFKhUP57PPTY2aSW+D5nnua5xYh57kSUlRXkx3yzKeC3jH07q5vSS64l
	BGsVaGm3g05BhRAQ==
To: Andrew Cooper <andrew.cooper3@citrix.com>, boris.ostrovsky@oracle.com,
 =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>, LKML
 <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>, Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org, "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller <deller@gmx.de>, afzal
 mohammed <afzal.mohd.ma@gmail.com>, linux-parisc@vger.kernel.org, Russell
 King <linux@armlinux.org.uk>, linux-arm-kernel@lists.infradead.org, Mark
 Rutland <mark.rutland@arm.com>, Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Christian Borntraeger <borntraeger@de.ibm.com>, Heiko Carstens <hca@linux.ibm.com>, linux-s390@vger.kernel.org, Jani Nikula <jani.nikula@linux.intel.com>, Joonas Lahtinen <joonas.lahtinen@linux.intel.com>, Rodrigo Vivi <rodrigo.vivi@intel.com>, David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>, Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>, Chris Wilson <chris@chris-wilson.co.uk>, Wambui Karuga <wambui.karugax@gmail.com>, intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, Tvrtko
 Ursulin <tvrtko.ursulin@linux.intel.com>, Linus Walleij <linus.walleij@linaro.org>, linux-gpio@vger.kernel.org, Lee Jones <lee.jones@linaro.org>, Jon Mason <jdmason@kudzu.us>, Dave Jiang <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>, linux-ntb@googlegroups.com, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>, Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>, Michal
 Simek <michal.simek@xilinx.com>, linux-pci@vger.kernel.org, Karthikeyan
 Mitran <m.karthikeyan@mobiveil.co.in>, Hou Zhiqiang <Zhiqiang.Hou@nxp.com>, Tariq Toukan <tariqt@nvidia.com>, "David S. Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>, netdev@vger.kernel.org, linux-rdma@vger.kernel.org, Saeed Mahameed <saeedm@nvidia.com>, Leon
 Romanovsky <leon@kernel.org>
Subject: Re: [patch 27/30] xen/events: Only force affinity mask for percpu interrupts
In-Reply-To: <edbedd7a-4463-d934-73c9-fa046c19cf6d@citrix.com>
References: <20201210192536.118432146@linutronix.de> <20201210194045.250321315@linutronix.de> <7f7af60f-567f-cdef-f8db-8062a44758ce@oracle.com> <2164a0ce-0e0d-c7dc-ac97-87c8f384ad82@suse.com> <871rfwiknd.fsf@nanos.tec.linutronix.de> <9806692f-24a3-4b6f-ae55-86bd66481271@oracle.com> <877dpoghio.fsf@nanos.tec.linutronix.de> <edbedd7a-4463-d934-73c9-fa046c19cf6d@citrix.com>
Date: Fri, 11 Dec 2020 23:56:40 +0100
Message-ID: <87y2i4eytz.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

Andrew,

On Fri, Dec 11 2020 at 22:21, Andrew Cooper wrote:
> On 11/12/2020 21:27, Thomas Gleixner wrote:
>> It's not any different from the hardware example, at least not as far
>> as I understood the code.
>
> Xen's event channels do have a couple of quirks.

Why am I not surprised?

> Binding an event channel always results in one spurious event being
> delivered.  This is to cover notifications which can get lost during the
> bidirectional setup, or re-setups in certain configurations.
>
> Binding an interdomain or pirq event channel always defaults to vCPU0.
> There is no way to atomically set the affinity while binding.  I believe
> the API predates SMP guest support in Xen, and no one has fixed it up
> since.

That's fine. I'm not changing that.

What I'm changing is the unwanted and unnecessary overwriting of the
actual affinity mask.

We have a similar issue on real hardware where we can only target _one_
CPU and not all CPUs in the affinity mask. So we can still preserve the
(user) requested mask and just affine the interrupt to one CPU, which is
reflected in the effective affinity mask. This is the right thing to do
for two reasons:

   1) It allows proper interrupt distribution

   2) It does not break the (user) requested affinity when the effective
      target CPU goes offline and the affinity mask still contains
      online CPUs. If you overwrite it, you lose track of the requested
      broader mask.

> As a consequence, the guest will observe the event raised on vCPU0 as
> part of setting up the event, even if it attempts to set a different
> affinity immediately afterwards.  A little bit of care needs to be taken
> when binding an event channel on vCPUs other than 0, to ensure that the
> callback is safe with respect to any remaining state needing
> initialisation.

That's preserved for all non-percpu interrupts. The percpu variants of
VIRQ and IPIs were already bound to vCPU != 0 before this change.

> Beyond this, there is nothing magic I'm aware of.
>
> We have seen soft lockups before in certain scenarios, simply due to the
> quantity of events hitting vCPU0 before irqbalance gets around to
> spreading the load.=C2=A0 This is why there is an attempt to round-robin =
the
> userspace event channel affinities by default, but I still don't see why
> this would need custom affinity logic itself.

The previous attempt just makes no sense, for the reasons I outlined in
the changelog. With the new spreading mechanics you now get proper
distribution in all cases:

  1) Post-setup, using and respecting the default affinity mask, which
     can be set as a kernel command-line parameter.

  2) Runtime (user) requested affinity changes with a mask which
     contains more than one vCPU. The previous logic always chose the
     first one in the mask.

     So assume userspace affines 4 irqs to CPUs 0-3 and 4 irqs to CPUs
     4-7: then 4 irqs end up on CPU0 and 4 on CPU4.

     The new algorithm, which is similar to what we have on x86 (minus
     the vector space limitation), picks the CPU which has the least
     number of channels affine to it at that moment. If, e.g., all 8
     CPUs have the same number of vectors before the change, then in
     the example above the first 4 irqs are spread to CPUs 0-3 and the
     second 4 to CPUs 4-7.

Thanks,

        tglx


From xen-devel-bounces@lists.xenproject.org Fri Dec 11 23:57:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 11 Dec 2020 23:57:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50974.89860 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knsHs-0001FM-Uk; Fri, 11 Dec 2020 23:57:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50974.89860; Fri, 11 Dec 2020 23:57:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knsHs-0001FF-RG; Fri, 11 Dec 2020 23:57:24 +0000
Received: by outflank-mailman (input) for mailman id 50974;
 Fri, 11 Dec 2020 23:57:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knsHr-0001F7-KX; Fri, 11 Dec 2020 23:57:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knsHr-00035Y-8V; Fri, 11 Dec 2020 23:57:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knsHr-0005or-1v; Fri, 11 Dec 2020 23:57:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1knsHr-0001pa-1N; Fri, 11 Dec 2020 23:57:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=3ld2xvypCZxJvMWa5rduiip4qnouATZwstt2kV2XV20=; b=QE6z1orn8Y8oMHCYN1AAcsGkTY
	CqDnzEW3dt5JBnMt5uAJ4Dc6Fanc+f/9jXn3yVieIfmcSU/NgxS2DKK/VUYtGp2xQzKcCM/Mqcxpx
	WyFSzWYhBRcpZIoOZ/p1I4xdN4V0P+qxIyn4093X8d2sf+cHd8W+85/uqCNee/sct2dY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [ovmf bisection] complete test-amd64-amd64-xl-qemuu-ovmf-amd64
Message-Id: <E1knsHr-0001pa-1N@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 11 Dec 2020 23:57:23 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-xl-qemuu-ovmf-amd64
testid debian-hvm-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf https://github.com/tianocore/edk2.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  ovmf https://github.com/tianocore/edk2.git
  Bug introduced:  cee5b0441af39dd6f76cc4e0447a1c7f788cbb00
  Bug not present: 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/157448/


  commit cee5b0441af39dd6f76cc4e0447a1c7f788cbb00
  Author: Guo Dong <guo.dong@intel.com>
  Date:   Wed Dec 2 14:18:18 2020 -0700
  
      UefiCpuPkg/CpuDxe: Fix boot error
      
      REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3084
      
      When DXE drivers are dispatched above 4GB memory and
      the system is already in 64bit mode, the address
      setCodeSelectorLongJump in stack will be override
      by parameter. so change to use 64bit address and
      jump to qword address.
      
      Signed-off-by: Guo Dong <guo.dong@intel.com>
      Reviewed-by: Ray Ni <ray.ni@intel.com>
      Reviewed-by: Eric Dong <eric.dong@intel.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/ovmf/test-amd64-amd64-xl-qemuu-ovmf-amd64.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/ovmf/test-amd64-amd64-xl-qemuu-ovmf-amd64.debian-hvm-install --summary-out=tmp/157448.bisection-summary --basis-template=157345 --blessings=real,real-bisect,real-retry ovmf test-amd64-amd64-xl-qemuu-ovmf-amd64 debian-hvm-install
Searching for failure / basis pass:
 157416 fail [host=pinot0] / 157394 [host=elbling1] 157390 [host=elbling0] 157383 [host=albana0] 157366 [host=godello1] 157354 [host=godello0] 157348 [host=fiano0] 157345 [host=huxelrebe0] 157338 [host=chardonnay1] 157333 [host=chardonnay0] 157323 [host=rimava1] 157255 [host=huxelrebe1] 157214 [host=elbling1] 157204 [host=godello0] 157194 [host=elbling0] 157191 [host=fiano1] 157184 [host=albana0] 157178 [host=albana1] 157167 [host=godello1] 157117 [host=pinot1] 157104 [host=huxelrebe1] 157060 [h\
 ost=huxelrebe0] 157055 [host=godello0] 157042 ok.
Failure / basis pass flights: 157416 / 157042
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf https://github.com/tianocore/edk2.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d4633b36b94f7b4a1f41901657cbbff452173d35 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 872f953262d68a11da7bc2fb3ded16df234b8700 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 181f2c224ccd0a2900d6ae94ec390a546731f593
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 https://github.com/tianocore/edk2.git#872f953262d68a11da7bc2fb3ded16df234b8700-d4633b36b94f7b4a1f41901657cbbff452173d35 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c743\
 7ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/osstest/seabios.git#748d619be3282fba35f99446098ac2d0579f6063-748d619be3282fba35f99446098ac2d0579f6063 git://xenbits.xen.org/xen.git#181f2c224ccd0a2900d6ae94ec390a546731f593-777e3590f154e6a8af560dd318b9465fa168db20
Loaded 10001 nodes in revision graph
Searching for test results:
 157042 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 872f953262d68a11da7bc2fb3ded16df234b8700 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 181f2c224ccd0a2900d6ae94ec390a546731f593
 157055 [host=godello0]
 157060 [host=huxelrebe0]
 157104 [host=huxelrebe1]
 157117 [host=pinot1]
 157167 [host=godello1]
 157178 [host=albana1]
 157184 [host=albana0]
 157191 [host=fiano1]
 157194 [host=elbling0]
 157204 [host=godello0]
 157214 [host=elbling1]
 157255 [host=huxelrebe1]
 157323 [host=rimava1]
 157333 [host=chardonnay0]
 157338 [host=chardonnay1]
 157345 [host=huxelrebe0]
 157348 [host=fiano0]
 157354 [host=godello0]
 157366 [host=godello1]
 157383 [host=albana0]
 157390 [host=elbling0]
 157394 [host=elbling1]
 157399 []
 157402 []
 157406 []
 157410 []
 157412 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d4633b36b94f7b4a1f41901657cbbff452173d35 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
 157415 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 872f953262d68a11da7bc2fb3ded16df234b8700 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 181f2c224ccd0a2900d6ae94ec390a546731f593
 157419 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d4633b36b94f7b4a1f41901657cbbff452173d35 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
 157427 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 97e2b622d1f32ba35194dbca104c3bf918bf3271 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 728acba1ba4ad6f9b69fd6929362a9750fe4dbe8
 157432 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
 157416 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d4633b36b94f7b4a1f41901657cbbff452173d35 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
 157434 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c5d970a01e76c1a20f6bb009b32e479ad2444548 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
 157436 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d315bd2286cde306f1ef5256026038e610505cca 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
 157440 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f95e80d832e923046c92cd6f0b8208cec147138e 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
 157441 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cee5b0441af39dd6f76cc4e0447a1c7f788cbb00 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
 157444 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
 157445 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cee5b0441af39dd6f76cc4e0447a1c7f788cbb00 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
 157446 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
 157448 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cee5b0441af39dd6f76cc4e0447a1c7f788cbb00 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
Searching for interesting versions
 Result found: flight 157042 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20, results HASH(0x55d11c590470) HASH(0x55d11c58de68) HASH(0x55d11c598938) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1\
 e6a472b0eb9558310b518f0dfcd8860 97e2b622d1f32ba35194dbca104c3bf918bf3271 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 728acba1ba4ad6f9b69fd6929362a9750fe4dbe8, results HASH(0x55d11c592c20) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 872f953262d68a11da7bc2fb3ded16df234b8700 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0\
 bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 181f2c224ccd0a2900d6ae94ec390a546731f593, results HASH(0x55d11c5970b0) HASH(0x55d11c5953a8) Result found: flight 157412 (fail), for basis failure (at ancestor ~5611)
 Repro found: flight 157415 (pass), for basis pass
 Repro found: flight 157416 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 748d619be3282fba35f99446098ac2d0579f6063 777e3590f154e6a8af560dd318b9465fa168db20
No revisions left to test, checking graph state.
 Result found: flight 157432 (pass), for last pass
 Result found: flight 157441 (fail), for first failure
 Repro found: flight 157444 (pass), for last pass
 Repro found: flight 157445 (fail), for first failure
 Repro found: flight 157446 (pass), for last pass
 Repro found: flight 157448 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  ovmf https://github.com/tianocore/edk2.git
  Bug introduced:  cee5b0441af39dd6f76cc4e0447a1c7f788cbb00
  Bug not present: 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/157448/


  commit cee5b0441af39dd6f76cc4e0447a1c7f788cbb00
  Author: Guo Dong <guo.dong@intel.com>
  Date:   Wed Dec 2 14:18:18 2020 -0700
  
      UefiCpuPkg/CpuDxe: Fix boot error
      
      REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3084
      
      When DXE drivers are dispatched above 4GB memory and
      the system is already in 64bit mode, the address
      setCodeSelectorLongJump in stack will be override
      by parameter. so change to use 64bit address and
      jump to qword address.
      
      Signed-off-by: Guo Dong <guo.dong@intel.com>
      Reviewed-by: Ray Ni <ray.ni@intel.com>
      Reviewed-by: Eric Dong <eric.dong@intel.com>

Revision graph left in /home/logs/results/bisect/ovmf/test-amd64-amd64-xl-qemuu-ovmf-amd64.debian-hvm-install.{dot,ps,png,html,svg}.
----------------------------------------
157448: tolerable ALL FAIL

flight 157448 ovmf real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/157448/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail baseline untested


jobs:
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sat Dec 12 00:23:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Dec 2020 00:23:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50987.89875 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knsgy-0004qF-4O; Sat, 12 Dec 2020 00:23:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50987.89875; Sat, 12 Dec 2020 00:23:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knsgy-0004q8-16; Sat, 12 Dec 2020 00:23:20 +0000
Received: by outflank-mailman (input) for mailman id 50987;
 Sat, 12 Dec 2020 00:23:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knsgw-0004q0-SH; Sat, 12 Dec 2020 00:23:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knsgw-0004FU-Lc; Sat, 12 Dec 2020 00:23:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knsgw-0007Cn-ET; Sat, 12 Dec 2020 00:23:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1knsgw-0000xG-Dy; Sat, 12 Dec 2020 00:23:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SEHrRWu0NS+Zl+w1Ex5ezFWMpdLSVlXKKs5X5O8AcwU=; b=4Z5296YEmGcwuFDPQhNI7+AJYG
	iWkUbi5BTLfaLX+6qb1gB5o3+tSK88MEmXStoPQbBcoDG+pc7A4DcyohhlI0A/FHLS8Kc+gvy0acC
	a7OPzUE9jd/qtB16lV4V32axHhYeZCW9gPdwo96nPdgpmYYNTz9F9GNXCIw0t5N46u18=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157437-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157437: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 12 Dec 2020 00:23:18 +0000

flight 157437 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157437/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    2 days
Failing since        157348  2020-12-09 15:39:39 Z    2 days   13 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    0 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Dec 12 00:46:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Dec 2020 00:46:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50995.89891 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knt34-0006vj-2y; Sat, 12 Dec 2020 00:46:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50995.89891; Sat, 12 Dec 2020 00:46:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knt33-0006vc-VH; Sat, 12 Dec 2020 00:46:09 +0000
Received: by outflank-mailman (input) for mailman id 50995;
 Sat, 12 Dec 2020 00:46:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LQi2=FQ=linaro.org=linus.walleij@srs-us1.protection.inumbo.net>)
 id 1knt33-0006vX-4M
 for xen-devel@lists.xenproject.org; Sat, 12 Dec 2020 00:46:09 +0000
Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4e356da6-78e4-4518-9378-652cf0adb003;
 Sat, 12 Dec 2020 00:46:05 +0000 (UTC)
Received: by mail-lf1-x142.google.com with SMTP id o17so12969551lfg.4
 for <xen-devel@lists.xenproject.org>; Fri, 11 Dec 2020 16:46:05 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4e356da6-78e4-4518-9378-652cf0adb003
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=1p8BkEtCdcNBP/TpCqa88TfNrEdwTBnXM61elkmQDlU=;
        b=vPmJzs+nnMzMYiLeSl4C70AwwAUmtSXcRtz7EHFhXOG3+HSYoFdEGzQYdbnIfuLhe1
         cpLA6/bKDco/ZW56K0yojauLy12p9RMJ4MYQUMAZkQKn1JGQikvuaZfR0PCAwR/VHwqu
         l8KSlmgfW9XQH81QbMWcZWDg6fTfP5u9gInsHFToPISthtmCvix2Zi4MyAVk8WOEDUbJ
         6rVaVkrA8lMaHHNzl8tzPLP3FKzwRwCbx4UWePv66IKRrK3BWVszh2/HuHa0/BXzK05/
         +RmxfJa60vjGUHEMlcdmB6LQFBStfB7tIZX79j1mfET935nYYecl+y81TGJ83zzfgTnD
         hTzg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=1p8BkEtCdcNBP/TpCqa88TfNrEdwTBnXM61elkmQDlU=;
        b=nbxWiA5Aef52js6Bi4Q/c0Z2LNkPj1Gn8iMf4MBWHlBaVoInSrcFyqwIJ+BAFgcmVc
         4ju4mmPY9IGTMxyATXutHFp7KcqBlb6dMgk3oCNi4/WKcoO4vkxmFskTHpYAZXnR8/dz
         G48xowDq78JLNDg9jVae6EjjK/aK73L0eP7QAa6g3h8Yo0mZ0sNNQdWhtAHprNDGTst7
         BNrj9i83Mc9m8+QTJ730w7OmpCPqfU1KoIddimcVHvfVO7ci2dfIZhGxCnULaIuMl9w+
         WrnNtf5dC3UzYJIz2C6LwVr5YkzTG74Bs4J/XwCOddd+ENUoPo+HUs5A921/7ZpmzJkk
         QV4g==
X-Gm-Message-State: AOAM533/Z1SDCokQGKmqCpNOxESdw7w6DcM6j3WAZZYWAF79QdzOe4OQ
	vLs0P1mwygNTxF0oJ7m45ftighCyRv1d8om2g92Q/Q==
X-Google-Smtp-Source: ABdhPJw+G3KfJG3M7IoDnPiDCepv2YgkNV/jArpapqMifWmjCRwgJbTiMDpguzIYWZxJHM0zdf1udksfd+uvUSH+FdI=
X-Received: by 2002:a05:651c:205b:: with SMTP id t27mr2692550ljo.368.1607733964618;
 Fri, 11 Dec 2020 16:46:04 -0800 (PST)
MIME-Version: 1.0
References: <20201210192536.118432146@linutronix.de> <20201210194044.065003856@linutronix.de>
In-Reply-To: <20201210194044.065003856@linutronix.de>
From: Linus Walleij <linus.walleij@linaro.org>
Date: Sat, 12 Dec 2020 01:45:53 +0100
Message-ID: <CACRpkdbKZzaTq+Am6q38Ya5wuUjiMbLE5g2i8bb_mJEWTkXgCg@mail.gmail.com>
Subject: Re: [patch 15/30] pinctrl: nomadik: Use irq_has_action()
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, Peter Zijlstra <peterz@infradead.org>, 
	Marc Zyngier <maz@kernel.org>, Linux ARM <linux-arm-kernel@lists.infradead.org>, 
	"open list:GPIO SUBSYSTEM" <linux-gpio@vger.kernel.org>, 
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller <deller@gmx.de>, 
	afzal mohammed <afzal.mohd.ma@gmail.com>, linux-parisc@vger.kernel.org, 
	Russell King <linux@armlinux.org.uk>, Mark Rutland <mark.rutland@arm.com>, 
	Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, 
	Christian Borntraeger <borntraeger@de.ibm.com>, Heiko Carstens <hca@linux.ibm.com>, linux-s390@vger.kernel.org, 
	Jani Nikula <jani.nikula@linux.intel.com>, 
	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>, Rodrigo Vivi <rodrigo.vivi@intel.com>, 
	David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>, 
	Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>, 
	Chris Wilson <chris@chris-wilson.co.uk>, Wambui Karuga <wambui.karugax@gmail.com>, 
	intel-gfx <intel-gfx@lists.freedesktop.org>, 
	"open list:DRM PANEL DRIVERS" <dri-devel@lists.freedesktop.org>, 
	Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>, Lee Jones <lee.jones@linaro.org>, 
	Jon Mason <jdmason@kudzu.us>, Dave Jiang <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>, 
	linux-ntb@googlegroups.com, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>, 
	Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>, 
	Michal Simek <michal.simek@xilinx.com>, linux-pci <linux-pci@vger.kernel.org>, 
	Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>, Hou Zhiqiang <Zhiqiang.Hou@nxp.com>, 
	Tariq Toukan <tariqt@nvidia.com>, "David S. Miller" <davem@davemloft.net>, 
	Jakub Kicinski <kuba@kernel.org>, netdev <netdev@vger.kernel.org>, linux-rdma@vger.kernel.org, 
	Saeed Mahameed <saeedm@nvidia.com>, Leon Romanovsky <leon@kernel.org>, 
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

On Thu, Dec 10, 2020 at 8:42 PM Thomas Gleixner <tglx@linutronix.de> wrote:

> Let the core code do the fiddling with irq_desc.
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Linus Walleij <linus.walleij@linaro.org>
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: linux-gpio@vger.kernel.org

Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

I suppose you will funnel this directly to Torvalds; otherwise tell me and
I'll apply it to my tree.

Yours,
Linus Walleij


From xen-devel-bounces@lists.xenproject.org Sat Dec 12 01:58:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Dec 2020 01:58:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51010.89917 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knuAd-0005M6-Rn; Sat, 12 Dec 2020 01:58:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51010.89917; Sat, 12 Dec 2020 01:58:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knuAd-0005Lx-Ku; Sat, 12 Dec 2020 01:58:03 +0000
Received: by outflank-mailman (input) for mailman id 51010;
 Sat, 12 Dec 2020 01:58:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knuAd-0005Lp-6n; Sat, 12 Dec 2020 01:58:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knuAd-0005eA-0h; Sat, 12 Dec 2020 01:58:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knuAc-00059G-LE; Sat, 12 Dec 2020 01:58:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1knuAc-0005FB-ID; Sat, 12 Dec 2020 01:58:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=R7ZPaWQrhj5dgp+ppilGuPP3Y5zV+smYLxAIbaA1ak4=; b=CVcPAPo+QFujqQvxWYqhtWL7Ho
	UcJ1iq/2SU/NbmAEszVdtnONEho6oflKFKAC//1vzbZAfpNzGqlHzZ2lCir+4s2BohdB0iZ0hUP8b
	o49UEM/lpyJmGrnj6QcQyE2JNxN4k9DXFl1HiSvFzP4HBIUU+t3T4fIJEPyX4qMAqkkA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157433-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157433: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4
X-Osstest-Versions-That:
    xen=777e3590f154e6a8af560dd318b9465fa168db20
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 12 Dec 2020 01:58:02 +0000

flight 157433 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157433/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157365
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157365
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157365
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157365
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157365
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157365
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157365
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157365
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157365
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157398
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157398
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4
baseline version:
 xen                  777e3590f154e6a8af560dd318b9465fa168db20

Last test of basis   157398  2020-12-11 01:51:23 Z    1 days
Testing same since   157433  2020-12-11 13:36:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   777e3590f1..8e0fe4fe5f  8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4 -> master


From xen-devel-bounces@lists.xenproject.org Sat Dec 12 05:44:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Dec 2020 05:44:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50783.89944 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knxhf-0003LW-LZ; Sat, 12 Dec 2020 05:44:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50783.89944; Sat, 12 Dec 2020 05:44:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knxhf-0003LP-I9; Sat, 12 Dec 2020 05:44:23 +0000
Received: by outflank-mailman (input) for mailman id 50783;
 Fri, 11 Dec 2020 17:03:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o6lN=FP=redhat.com=wainersm@srs-us1.protection.inumbo.net>)
 id 1knlpf-0000wk-Iu
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 17:03:51 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id a81dd4fe-f9fd-4ad7-ad25-1476d485ed22;
 Fri, 11 Dec 2020 17:03:48 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-336-j0uHi-AaORKFbfD_QMHVaQ-1; Fri, 11 Dec 2020 12:03:44 -0500
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com
 [10.5.11.14])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id DDC751087D78;
 Fri, 11 Dec 2020 17:03:40 +0000 (UTC)
Received: from wainer-laptop.localdomain (ovpn-114-123.rdu2.redhat.com
 [10.10.114.123])
 by smtp.corp.redhat.com (Postfix) with ESMTP id E77FB5D9E8;
 Fri, 11 Dec 2020 17:03:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a81dd4fe-f9fd-4ad7-ad25-1476d485ed22
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607706228;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Leog1T7cuecv5R6fqJMs7ncd85agCnp/s8z42hrWKr0=;
	b=eVAKegW8IydJQr3d398gO9kE7OusJBSimgGMoJwdpli45Dxuy/vQVDtyn3H6UtPL9EiJij
	UHx1e4lUSwLpPdej8E9PlP9N1yrDrAzJx1VXGb6mdJODEmv6wTXqqX21199/xA7DVbpN/9
	wGWrJTYXccckuvAEn70DkeMSFareqlU=
X-MC-Unique: j0uHi-AaORKFbfD_QMHVaQ-1
Subject: Re: [PATCH v3 1/5] gitlab-ci: Document 'build-tcg-disabled' is a KVM
 X86 job
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 qemu-devel@nongnu.org
Cc: Thomas Huth <thuth@redhat.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Marcelo Tosatti <mtosatti@redhat.com>, kvm@vger.kernel.org,
 Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, qemu-s390x@nongnu.org,
 Halil Pasic <pasic@linux.ibm.com>, Willian Rampazzo <wrampazz@redhat.com>,
 Paul Durrant <paul@xen.org>, Cornelia Huck <cohuck@redhat.com>,
 xen-devel@lists.xenproject.org, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, Claudio Fontana <cfontana@suse.de>
References: <20201207131503.3858889-1-philmd@redhat.com>
 <20201207131503.3858889-2-philmd@redhat.com>
From: Wainer dos Santos Moschetta <wainersm@redhat.com>
Message-ID: <41db8ee1-23bf-bbde-f99a-5a314fac215d@redhat.com>
Date: Fri, 11 Dec 2020 14:03:30 -0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201207131503.3858889-2-philmd@redhat.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14

Hi,

On 12/7/20 10:14 AM, Philippe Mathieu-Daudé wrote:
> Document what this job covers (building X86 targets with
> KVM as the single accelerator available).
>
> Reviewed-by: Thomas Huth <thuth@redhat.com>
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>   .gitlab-ci.yml | 5 +++++
>   1 file changed, 5 insertions(+)

Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>

>
> diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
> index d0173e82b16..ee31b1020fe 100644
> --- a/.gitlab-ci.yml
> +++ b/.gitlab-ci.yml
> @@ -220,6 +220,11 @@ build-disabled:
>         s390x-softmmu i386-linux-user
>       MAKE_CHECK_ARGS: check-qtest SPEED=slow
>   
> +# This job explicitly disables TCG (--disable-tcg); KVM is detected by
> +# the configure script. The container doesn't include the Xen headers,
> +# so the Xen accelerator is not detected / selected. As a result, it
> +# builds the i386-softmmu and x86_64-softmmu targets with KVM as the
> +# only accelerator available.
>   build-tcg-disabled:
>     <<: *native_build_job_definition
>     variables:
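For context, a minimal sketch of how such a KVM-only job could be expressed in `.gitlab-ci.yml`, based solely on the description in the comment above (the `IMAGE` value and `script` steps are assumptions, not taken from QEMU's actual configuration):

```yaml
# Hypothetical sketch of a TCG-disabled, KVM-only build job.
# IMAGE and the script steps are illustrative assumptions; the real
# job in QEMU's .gitlab-ci.yml may differ.
build-tcg-disabled:
  <<: *native_build_job_definition
  variables:
    IMAGE: centos8                      # assumed: a container without Xen headers
    CONFIGURE_ARGS: --disable-tcg       # TCG off; configure auto-detects KVM
    TARGETS: i386-softmmu x86_64-softmmu
    MAKE_CHECK_ARGS: check
```

Because the container lacks the Xen headers, configure's accelerator probe leaves only KVM enabled for the two x86 softmmu targets.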



From xen-devel-bounces@lists.xenproject.org Sat Dec 12 05:45:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Dec 2020 05:45:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.50912.89955 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knxiQ-0003RW-Vp; Sat, 12 Dec 2020 05:45:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 50912.89955; Sat, 12 Dec 2020 05:45:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knxiQ-0003RP-Sa; Sat, 12 Dec 2020 05:45:10 +0000
Received: by outflank-mailman (input) for mailman id 50912;
 Fri, 11 Dec 2020 21:27:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tmXl=FP=redhat.com=wrampazz@srs-us1.protection.inumbo.net>)
 id 1knpwZ-0002f2-I9
 for xen-devel@lists.xenproject.org; Fri, 11 Dec 2020 21:27:15 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 90ff68e5-2648-47d5-b9af-f152df81388b;
 Fri, 11 Dec 2020 21:27:14 +0000 (UTC)
Received: from mail-ua1-f69.google.com (mail-ua1-f69.google.com
 [209.85.222.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-72-3dGxt_KeNhCxA8d0MZeqwQ-1; Fri, 11 Dec 2020 16:27:13 -0500
Received: by mail-ua1-f69.google.com with SMTP id b3so1934601uas.10
 for <xen-devel@lists.xenproject.org>; Fri, 11 Dec 2020 13:27:13 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 90ff68e5-2648-47d5-b9af-f152df81388b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607722034;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2baaQLSNBEeqb8s4I052o+MuhTpKsGsOVMcltvWRcFE=;
	b=MS/6rZbU/buc9zcnquU9rlpJ7y6a29qJAmOsKrzkVtmTZyxPFJ0NPAiB0rgEoJFTJvx2V1
	Z9YEI0TC0K0ndOiqNDziy+/tOtgq41Zpah5T2qcJLr10T/LTEsUh6KFqu4MK8Yty+J+5DT
	UQLIzxVDLGm2q9BGWkNvwWa/gAsIOsc=
X-MC-Unique: 3dGxt_KeNhCxA8d0MZeqwQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=2baaQLSNBEeqb8s4I052o+MuhTpKsGsOVMcltvWRcFE=;
        b=LxlaL3RPN8X0OgE9FJ9vcv4FTLJLCChyOuIfyVra8D12XbVIeE9Ee8hiFTklSHICNy
         blW8bkfjrZh7g6fj39jdEgjqVZxcRvPGtF2N8Jqeju1WEGgIUrKyFimFZSCxg+CyEoTO
         0YOwieTZ6yfc1/zMB4wJpmr6jjsbNdTxAODggnixbZsqFKcCxSXaSYrg0PU9kvl2iFyr
         MA2Ho2Ihm6BNNnHtmkG6lFp6YKBUTgrhd3RN0NIehJ6mJFDT1/Ftnj2hrLD/qoDKPuDL
         0a9Jn5rQwQg605EPF71YKTiTDBKSGJ3Ao3ky5QtzikF2pRDjRVj2MUDALoCCxJkwR1QF
         yCWg==
X-Gm-Message-State: AOAM530aD05i+5Z43ToDV6zIxG5pHFsyO2y/uZufmmpbVXIRZqrYbv2U
	w60VUCC0IoX5wha0i+Y7WOZO70Pz0lNwoP+J53YjXiOQbq2hxcQAavgWM7KI15yX9/VBonTtV4W
	eOqrjsmDpJTc8j4aFBDbE2s5+chgTDJSgZxKUZsxVnp4=
X-Received: by 2002:a1f:3216:: with SMTP id y22mr16017730vky.1.1607722032623;
        Fri, 11 Dec 2020 13:27:12 -0800 (PST)
X-Google-Smtp-Source: ABdhPJzoAxYS7LmWjUd46I3qBH/wmX8+rEnzXAY9WUS8yNPb88aaCCmUw66PGF9HvP9W43g/thyTvGXx+ar4+M0xIEI=
X-Received: by 2002:a1f:3216:: with SMTP id y22mr16017712vky.1.1607722032420;
 Fri, 11 Dec 2020 13:27:12 -0800 (PST)
MIME-Version: 1.0
References: <20201207131503.3858889-1-philmd@redhat.com> <20201207131503.3858889-2-philmd@redhat.com>
In-Reply-To: <20201207131503.3858889-2-philmd@redhat.com>
From: Willian Rampazzo <wrampazz@redhat.com>
Date: Fri, 11 Dec 2020 18:27:01 -0300
Message-ID: <CAKJDGDYwUdGxHC4ctzqO6JfrsGQDv7uwdCC29x5Ty61=fzV2RA@mail.gmail.com>
Subject: Re: [PATCH v3 1/5] gitlab-ci: Document 'build-tcg-disabled' is a KVM
 X86 job
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>
Cc: qemu-devel <qemu-devel@nongnu.org>, Thomas Huth <thuth@redhat.com>, 
	Christian Borntraeger <borntraeger@de.ibm.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Marcelo Tosatti <mtosatti@redhat.com>, kvm@vger.kernel.org, 
	Paolo Bonzini <pbonzini@redhat.com>, Anthony Perard <anthony.perard@citrix.com>, qemu-s390x@nongnu.org, 
	Halil Pasic <pasic@linux.ibm.com>, Paul Durrant <paul@xen.org>, Cornelia Huck <cohuck@redhat.com>, 
	xen-devel@lists.xenproject.org, =?UTF-8?B?QWxleCBCZW5uw6ll?= <alex.bennee@linaro.org>, 
	Claudio Fontana <cfontana@suse.de>, Wainer dos Santos Moschetta <wainersm@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=wrampazz@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, Dec 7, 2020 at 10:15 AM Philippe Mathieu-Daudé
<philmd@redhat.com> wrote:
>
> Document what this job covers (building X86 targets with
> KVM as the only accelerator available).
>
> Reviewed-by: Thomas Huth <thuth@redhat.com>
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  .gitlab-ci.yml | 5 +++++
>  1 file changed, 5 insertions(+)

Reviewed-by: Willian Rampazzo <willianr@redhat.com>



From xen-devel-bounces@lists.xenproject.org Sat Dec 12 06:18:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Dec 2020 06:18:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51046.89968 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knyEj-0006cr-Jw; Sat, 12 Dec 2020 06:18:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51046.89968; Sat, 12 Dec 2020 06:18:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knyEj-0006ck-G6; Sat, 12 Dec 2020 06:18:33 +0000
Received: by outflank-mailman (input) for mailman id 51046;
 Sat, 12 Dec 2020 06:18:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knyEh-0006cc-LW; Sat, 12 Dec 2020 06:18:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knyEh-0003UF-FW; Sat, 12 Dec 2020 06:18:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knyEh-00009Y-6j; Sat, 12 Dec 2020 06:18:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1knyEh-00063V-6E; Sat, 12 Dec 2020 06:18:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=L4u19LnAu5MSMvhFD7m3/vTliHOAC0HXJSbj3TfYLmA=; b=idC9Lzfd0mp4oFEW7bh1OlWOkz
	j3mi4gTKpfMzm8mpp3LtyQheWbX6ew17tmI7Xt1h0N+5UCzukXHW8JF0RWMJnazBk1eqYrxkEnX+J
	rIEdPRBhMXDd0k8nJdMi31Jk8UVCDgEqe71gOz2hcj8S+O944rECKvIRrmzr4omaXQ2Q=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157438-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157438: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=b785d25e91718a660546a6550f64b3c543af7754
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 12 Dec 2020 06:18:31 +0000

flight 157438 qemu-mainline real [real]
flight 157454 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157438/
http://logs.test-lab.xenproject.org/osstest/logs/157454/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                b785d25e91718a660546a6550f64b3c543af7754
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  113 days
Failing since        152659  2020-08-21 14:07:39 Z  112 days  235 attempts
Testing same since   157438  2020-12-11 16:06:47 Z    0 days    1 attempts

------------------------------------------------------------
307 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 74209 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Dec 12 07:04:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Dec 2020 07:04:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51057.89982 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knywr-00036a-5m; Sat, 12 Dec 2020 07:04:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51057.89982; Sat, 12 Dec 2020 07:04:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knywr-00036T-2l; Sat, 12 Dec 2020 07:04:09 +0000
Received: by outflank-mailman (input) for mailman id 51057;
 Sat, 12 Dec 2020 07:04:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knywp-00036L-8k; Sat, 12 Dec 2020 07:04:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knywo-0004Pn-Vf; Sat, 12 Dec 2020 07:04:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knywo-000334-Nr; Sat, 12 Dec 2020 07:04:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1knywo-0004MQ-NN; Sat, 12 Dec 2020 07:04:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ULXahn7Si4WcU3LRqwvR8L1Wwt0NCa1ZsINJtfRI0o8=; b=bNyeXzYBuwVf/WbIDyoRbQib0b
	BOrwJOSKISk0nKGZ8ylWeuQ1DlbIrD9M9g2p3mWvcqwFHIWhhGUVlxdlQ6aAoLD1ahosWkEfP82cA
	8WwGWDZ72aEhjfUdZ/eJtpX3nf2MEKIWiRXbSfmtB26bXs7UkKtASbeROJfSgf2tsNJE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157453-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157453: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=e3b9d3002a8bb8a752ef26c6451ed5aea7eaee32
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 12 Dec 2020 07:04:06 +0000

flight 157453 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157453/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              e3b9d3002a8bb8a752ef26c6451ed5aea7eaee32
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  155 days
Failing since        151818  2020-07-11 04:18:52 Z  154 days  149 attempts
Testing same since   157453  2020-12-12 04:19:16 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 32583 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Dec 12 07:34:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Dec 2020 07:34:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51069.90004 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knzQI-00068p-0Z; Sat, 12 Dec 2020 07:34:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51069.90004; Sat, 12 Dec 2020 07:34:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knzQH-00068i-TG; Sat, 12 Dec 2020 07:34:33 +0000
Received: by outflank-mailman (input) for mailman id 51069;
 Sat, 12 Dec 2020 07:34:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knzQG-00068a-CB; Sat, 12 Dec 2020 07:34:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knzQF-00050m-UM; Sat, 12 Dec 2020 07:34:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knzQF-0005pW-N5; Sat, 12 Dec 2020 07:34:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1knzQF-0004cs-Ma; Sat, 12 Dec 2020 07:34:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XQryAXj5aiQT4IWrjgBJXeAl90BIcbZvA9jaQfa0lJY=; b=d4Rrfhw8YLBMFp7c77Zja4qhLf
	EBpAUKZID+d6/utHNmBf6/lCD6VhULa+nYcMKa7cMw66+cEZdCaEF4poJ41VAHwrOaBXLcSwGWs2c
	yedARVBKpGtqY5k9+2C0IUbufxu+EYaus+QnhzYj+JJDL9Ct+yvjPOr1jH8PhecyLDA0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157449-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157449: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 12 Dec 2020 07:34:31 +0000

flight 157449 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157449/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    2 days
Failing since        157348  2020-12-09 15:39:39 Z    2 days   14 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    1 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Dec 12 07:56:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Dec 2020 07:56:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51079.90019 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knzky-0008Cr-IM; Sat, 12 Dec 2020 07:55:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51079.90019; Sat, 12 Dec 2020 07:55:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1knzky-0008Ck-FK; Sat, 12 Dec 2020 07:55:56 +0000
Received: by outflank-mailman (input) for mailman id 51079;
 Sat, 12 Dec 2020 07:55:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knzkx-0008Cc-Fw; Sat, 12 Dec 2020 07:55:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knzkx-0005Q6-7K; Sat, 12 Dec 2020 07:55:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1knzkw-0007Io-Us; Sat, 12 Dec 2020 07:55:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1knzkw-0006sj-UL; Sat, 12 Dec 2020 07:55:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cJlarOWRplhcZwIoTmNIbULMS7bcWVUzRjZk9DD2QGQ=; b=VRcJyMurPo+sDL9uq4FyZ5d1Dh
	FMYeT1zGBnB95BOEPNrVTyt3GRiQtGdHzoc8XWdKXW6nZ2wcrbczof5X+Xsa5QbWm9GcvL7W1QxjB
	ACnpEPBqgVu8Uh3A2JmaDM9Bx25DB+PpDp7WkvR70pFa0ZH1HcO2H+ejUH1l9q0wHESI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157442-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157442: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-install:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=33dc9614dc208291d0c4bcdeb5d30d481dcd2c4c
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 12 Dec 2020 07:55:54 +0000

flight 157442 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157442/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1 10 host-ping-check-xen fail in 157414 REGR. vs. 152332
 test-arm64-arm64-xl-credit2  12 debian-install fail in 157414 REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop    fail in 157414 REGR. vs. 152332
 test-arm64-arm64-xl     10 host-ping-check-xen fail in 157414 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-seattle   8 xen-boot         fail in 157414 pass in 157442
 test-arm64-arm64-libvirt-xsm  8 xen-boot                   fail pass in 157414
 test-arm64-arm64-xl-credit1   8 xen-boot                   fail pass in 157414
 test-amd64-amd64-amd64-pvgrub 19 guest-localmigrate/x10    fail pass in 157414
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen        fail pass in 157414
 test-arm64-arm64-xl           8 xen-boot                   fail pass in 157414
 test-armhf-armhf-libvirt      8 xen-boot                   fail pass in 157414

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11) fail in 157414 blocked in 152332
 test-armhf-armhf-libvirt 16 saverestore-support-check fail in 157414 like 152332
 test-armhf-armhf-libvirt    15 migrate-support-check fail in 157414 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                33dc9614dc208291d0c4bcdeb5d30d481dcd2c4c
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  133 days
Failing since        152366  2020-08-01 20:49:34 Z  132 days  229 attempts
Testing same since   157414  2020-12-11 08:12:48 Z    0 days    2 attempts

------------------------------------------------------------
3677 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 704276 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Dec 12 10:51:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Dec 2020 10:51:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51120.90033 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ko2Uj-0000r9-AN; Sat, 12 Dec 2020 10:51:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51120.90033; Sat, 12 Dec 2020 10:51:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ko2Uj-0000r2-7F; Sat, 12 Dec 2020 10:51:21 +0000
Received: by outflank-mailman (input) for mailman id 51120;
 Sat, 12 Dec 2020 10:51:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ko2Ui-0000qu-NY; Sat, 12 Dec 2020 10:51:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ko2Ui-00017Y-Hl; Sat, 12 Dec 2020 10:51:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ko2Ui-0008BW-6H; Sat, 12 Dec 2020 10:51:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ko2Ui-0007bS-5o; Sat, 12 Dec 2020 10:51:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OzAunM2CTQyyX2EV/5jRvo2lSGYeyH0SN8bmBZyPrec=; b=YJVQnYlg+aNJTgfx2tbCLJqyj+
	K/J3Vm28NSXEFOiQ0YNAL5GJ1alNmKh0Antymv9ovq/0TPyL/HdioEq3ICfFMGNd1BIU103MOtotg
	fI+4QLIacLGPvKso5j5pJ3elk+IvCc35XVCZm+bR/MXilX7QaDK5e5Dt7MvuCuuyTVHc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157451-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157451: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4
X-Osstest-Versions-That:
    xen=8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 12 Dec 2020 10:51:20 +0000

flight 157451 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157451/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat  fail pass in 157433

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157433
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157433
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157433
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157433
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157433
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157433
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157433
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157433
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157433
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157433
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157433
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4
baseline version:
 xen                  8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4

Last test of basis   157451  2020-12-12 02:00:53 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sat Dec 12 13:22:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Dec 2020 13:22:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51154.90055 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ko4r4-00074a-Pm; Sat, 12 Dec 2020 13:22:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51154.90055; Sat, 12 Dec 2020 13:22:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ko4r4-00074T-ML; Sat, 12 Dec 2020 13:22:34 +0000
Received: by outflank-mailman (input) for mailman id 51154;
 Sat, 12 Dec 2020 13:22:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uJ6z=FQ=gmail.com=andy.shevchenko@srs-us1.protection.inumbo.net>)
 id 1ko4r2-00074O-Gx
 for xen-devel@lists.xenproject.org; Sat, 12 Dec 2020 13:22:32 +0000
Received: from mail-pf1-x430.google.com (unknown [2607:f8b0:4864:20::430])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8b26b176-c3df-47d5-b858-c07763c636dd;
 Sat, 12 Dec 2020 13:22:31 +0000 (UTC)
Received: by mail-pf1-x430.google.com with SMTP id f9so8917878pfc.11
 for <xen-devel@lists.xenproject.org>; Sat, 12 Dec 2020 05:22:31 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8b26b176-c3df-47d5-b858-c07763c636dd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=VTZpp0n/Om/Jf11Mz+DqRKpiaIjN4mJ897+taVy59Ug=;
        b=gs+DafSa6tm6ZhcmtOMAV2zOpZsgVdlfnJO3Cc8h6uZ7zEIkJ//kiGJiG610YJYd9c
         hi7RAK/gnCn0RzWdwMYCDo5iWNdBQbtvJKYrsQI5ZcoVP15/FlhWsHbcStHoxRpLv5K2
         7tZggztOJ5OrWHKSc7C0B5gdsPwG3Jh4XQNGDZzJ5uC91i9eb/FOwSgLNZwxuhvbJlm1
         NZRNGWscN+BvdnS2hdE/2kf8TIOM+A9TYoTVx3z4X0kR366gEuSlj/jqKKPOYQFh4Kj3
         5afdtXgK8N3xa39LgKIpCZwsKgk5qgska42v8wTbiGBbB+guXeCE2Frkke6P3b+dq3Ir
         zifA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=VTZpp0n/Om/Jf11Mz+DqRKpiaIjN4mJ897+taVy59Ug=;
        b=ms5m0BjK7zSIF2hRB06tK7MzvxZravSQpGaN0NUOfPA1YP3lKOqAQLsY/maCsB0m7j
         24jwD0Nqe7U3UYpzKB28spK0ZDH7OsSXfJgW1ZoD1rJ1DSfvXwaKrvXScJVXxpHU1BTq
         Ld7c7SR2dNorhilkedX/qcTsQ20Qb2+h1YxDDWO/2R+TpUAYUzs1Q4FknKhWlYvBLgfy
         8po4sX4FAPRNqj7NF4rMc3oizL0pC/y/V4ZnMeccEhk78HH43x9iaPQqwocxKUz9G9sJ
         HdfXOOP0oM6EvqYo08wTAEfpEUQaLiiTwogPvjVLVfvo1fd7kt75bncnVCwxAEOpCa0t
         8B/g==
X-Gm-Message-State: AOAM532j01pxkI8CuuCo0hiRajqCXyGUJeHc9zyztSUua0ue9Qpg570v
	p65lVwuwiCMuc3tAPchI1m/A70MOITiRx+RP530=
X-Google-Smtp-Source: ABdhPJz6qkGZaIbcK80z8oNTbmCmaNucZ+fxbrx52JjiY59cFl32wS2FAI0NK0C4i2f9nCp4IxJVJq78J8uiblnFRTg=
X-Received: by 2002:a05:6a00:170a:b029:19d:afca:4704 with SMTP id
 h10-20020a056a00170ab029019dafca4704mr15887538pfc.7.1607779350726; Sat, 12
 Dec 2020 05:22:30 -0800 (PST)
MIME-Version: 1.0
References: <20201210192536.118432146@linutronix.de> <20201210194042.860029489@linutronix.de>
 <CAHp75Vc-2OjE2uwvNRiyLMQ8GSN3P7SehKD-yf229_7ocaktiw@mail.gmail.com>
 <87h7osgifc.fsf@nanos.tec.linutronix.de> <87360cgfol.fsf@nanos.tec.linutronix.de>
In-Reply-To: <87360cgfol.fsf@nanos.tec.linutronix.de>
From: Andy Shevchenko <andy.shevchenko@gmail.com>
Date: Sat, 12 Dec 2020 15:22:14 +0200
Message-ID: <CAHp75Ve5zzeQw8P2wD083WW5+KGehETTy810wksfpXbj+3GBug@mail.gmail.com>
Subject: Re: [patch 03/30] genirq: Move irq_set_lockdep_class() to core
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, Peter Zijlstra <peterz@infradead.org>, 
	Marc Zyngier <maz@kernel.org>, 
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller <deller@gmx.de>, 
	afzal mohammed <afzal.mohd.ma@gmail.com>, linux-parisc@vger.kernel.org, 
	Russell King <linux@armlinux.org.uk>, 
	linux-arm Mailing List <linux-arm-kernel@lists.infradead.org>, Mark Rutland <mark.rutland@arm.com>, 
	Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, 
	Christian Borntraeger <borntraeger@de.ibm.com>, Heiko Carstens <hca@linux.ibm.com>, linux-s390@vger.kernel.org, 
	Jani Nikula <jani.nikula@linux.intel.com>, 
	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>, Rodrigo Vivi <rodrigo.vivi@intel.com>, 
	David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>, 
	Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>, 
	Chris Wilson <chris@chris-wilson.co.uk>, Wambui Karuga <wambui.karugax@gmail.com>, 
	intel-gfx <intel-gfx@lists.freedesktop.org>, 
	dri-devel <dri-devel@lists.freedesktop.org>, 
	Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>, Linus Walleij <linus.walleij@linaro.org>, 
	"open list:GPIO SUBSYSTEM" <linux-gpio@vger.kernel.org>, Lee Jones <lee.jones@linaro.org>, 
	Jon Mason <jdmason@kudzu.us>, Dave Jiang <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>, 
	linux-ntb@googlegroups.com, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>, 
	Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>, 
	Michal Simek <michal.simek@xilinx.com>, linux-pci <linux-pci@vger.kernel.org>, 
	Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>, Hou Zhiqiang <Zhiqiang.Hou@nxp.com>, 
	Tariq Toukan <tariqt@nvidia.com>, "David S. Miller" <davem@davemloft.net>, 
	Jakub Kicinski <kuba@kernel.org>, netdev <netdev@vger.kernel.org>, 
	"open list:HFI1 DRIVER" <linux-rdma@vger.kernel.org>, Saeed Mahameed <saeedm@nvidia.com>, 
	Leon Romanovsky <leon@kernel.org>, Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
	Juergen Gross <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

On Sat, Dec 12, 2020 at 12:07 AM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> On Fri, Dec 11 2020 at 22:08, Thomas Gleixner wrote:
>
> > On Fri, Dec 11 2020 at 19:53, Andy Shevchenko wrote:
> >
> >> On Thu, Dec 10, 2020 at 10:14 PM Thomas Gleixner <tglx@linutronix.de> wrote:
> >>>
> >>> irq_set_lockdep_class() is used from modules and requires irq_to_desc() to
> >>> be exported. Move it into the core code which lifts another requirement for
> >>> the export.
> >>
> >> ...
> >>
> >>> +       if (IS_ENABLED(CONFIG_LOCKDEP))
> >>> +               __irq_set_lockdep_class(irq, lock_class, request_class);
> >
> > You are right. Let me fix that.
>
> No. I have to correct myself. You're wrong.
>
> The inline is evaluated in the compilation units which include that
> header and because the function declaration is unconditional it is
> happy.
>
> Now the optimizer stage makes the whole thing a NOOP if CONFIG_LOCKDEP=n
> and thereby drops the reference to the function which makes it not
> required for linking.
>
> So in the file where the function is implemented:
>
> #ifdef CONFIG_LOCKDEP
> void __irq_set_lockdep_class(....)
> {
> }
> #endif
>
> The whole block is either discarded because CONFIG_LOCKDEP is not
> defined, or compiled if it is defined, which makes it available to the
> linker.
>
> And in the latter case the optimizer keeps the call in the inline (it
> optimizes the condition away because it's always true).
>
> So in both cases the compiler and the linker are happy and everything
> works as expected.
>
> It would fail if the header file had the following:
>
> #ifdef CONFIG_LOCKDEP
> void __irq_set_lockdep_class(....);
> #endif
>
> Because then it would complain about the missing function prototype when
> it evaluates the inline.

I understand that (that's why I put "if even no warning"); what I'm
talking about is the purpose of IS_ENABLED(). It's usually good for
compile-testing the !CONFIG_FOO case, but here it seems inconsistent.

The pattern I usually see in cases like this is

 #ifdef CONFIG_LOCKDEP
 void __irq_set_lockdep_class(....);
 #else
 static inline void ... {}
 #endif

and call it directly in the caller.

It's not a big deal, so up to you.

-- 
With Best Regards,
Andy Shevchenko


From xen-devel-bounces@lists.xenproject.org Sat Dec 12 13:36:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Dec 2020 13:36:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51160.90067 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ko54D-0008Cp-Ss; Sat, 12 Dec 2020 13:36:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51160.90067; Sat, 12 Dec 2020 13:36:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ko54D-0008Ci-Pp; Sat, 12 Dec 2020 13:36:09 +0000
Received: by outflank-mailman (input) for mailman id 51160;
 Sat, 12 Dec 2020 13:36:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EAKn=FQ=amazon.com=prvs=608be7199=havanur@srs-us1.protection.inumbo.net>)
 id 1ko54C-0008Cd-Qg
 for xen-devel@lists.xenproject.org; Sat, 12 Dec 2020 13:36:08 +0000
Received: from smtp-fw-4101.amazon.com (unknown [72.21.198.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 119c40f5-ac2c-4216-8802-d3b0a41e5e9c;
 Sat, 12 Dec 2020 13:36:05 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-1a-16acd5e0.us-east-1.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-4101.iad4.amazon.com with ESMTP;
 12 Dec 2020 13:35:58 +0000
Received: from EX13D36EUC004.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan2.iad.amazon.com [10.40.163.34])
 by email-inbound-relay-1a-16acd5e0.us-east-1.amazon.com (Postfix) with ESMTPS
 id 2DC16A216A; Sat, 12 Dec 2020 13:35:57 +0000 (UTC)
Received: from EX13D36EUC004.ant.amazon.com (10.43.164.126) by
 EX13D36EUC004.ant.amazon.com (10.43.164.126) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Sat, 12 Dec 2020 13:35:41 +0000
Received: from EX13D36EUC004.ant.amazon.com ([10.43.164.126]) by
 EX13D36EUC004.ant.amazon.com ([10.43.164.126]) with mapi id 15.00.1497.006;
 Sat, 12 Dec 2020 13:35:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 119c40f5-ac2c-4216-8802-d3b0a41e5e9c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
  t=1607780165; x=1639316165;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=vkrXKhuFgTTLv1TCPVyui6xJb0oRVKSQmEf5fGFEub0=;
  b=vXUvXs5yd3B+ef1kzeRl9rhGozU+8bqnljUN8+dv4lt+UOeilaLTUBYB
   3FMN0uA5B2DXZ9nHhZEuJjJQL6Hzg2qPz+7qawihsK7dMIquUPMxiQWSh
   ZSAfHCWwGu/mCuysH7tj+3IWUUUtSQjqt6ptCZ9K9U/xUFDxw3mFiU+v1
   8=;
X-IronPort-AV: E=Sophos;i="5.78,414,1599523200"; 
   d="scan'208";a="68942740"
From: "Shamsundara Havanur, Harsha" <havanur@amazon.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: "jbeulich@suse.com" <jbeulich@suse.com>, "andrew.cooper3@citrix.com"
	<andrew.cooper3@citrix.com>
Subject: Re: [XEN PATCH v1 1/1] Invalidate cache for cpus affinitized to the
 domain
Thread-Topic: [XEN PATCH v1 1/1] Invalidate cache for cpus affinitized to the
 domain
Thread-Index: AQHWz7MJSRGqvamGtEii6iRZ4MXyYqnzd/YA
Date: Sat, 12 Dec 2020 13:35:41 +0000
Message-ID: <40c8b5378f1075e9b40eafbae61932e75acbf327.camel@amazon.com>
References: <cover.1607686878.git.havanur@amazon.com>
	 <aad47c43b7cd7a391492b8be7b881cd37e9764c7.1607686878.git.havanur@amazon.com>
In-Reply-To: <aad47c43b7cd7a391492b8be7b881cd37e9764c7.1607686878.git.havanur@amazon.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.164.29]
Content-Type: text/plain; charset="utf-8"
Content-ID: <1087129A2472B74AB1C1671859BCD3FE@amazon.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Precedence: Bulk

CCing Andy and Jan
Is restricting cache flush to set of cpus bound to the domain, a
right thing to do?

On Fri, 2020-12-11 at 11:44 +0000, Harsha Shamsundara Havanur wrote:
> A HVM domain flushes cache on all the cpus using
> `flush_all` macro which uses cpu_online_map, during
> i) creation of a new domain
> ii) when device-model op is performed
> iii) when domain is destructed.
> 
> This triggers IPI on all the cpus, thus affecting other
> domains that are pinned to different pcpus. This patch
> restricts cache flush to the set of cpus affinitized to
> the current domain using `domain->dirty_cpumask`.
> 
> Signed-off-by: Harsha Shamsundara Havanur <havanur@amazon.com>
> ---
>  xen/arch/x86/hvm/hvm.c     | 2 +-
>  xen/arch/x86/hvm/mtrr.c    | 6 +++---
>  xen/arch/x86/hvm/svm/svm.c | 2 +-
>  xen/arch/x86/hvm/vmx/vmx.c | 2 +-
>  4 files changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 54e32e4fe8..ec247c7010 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -2219,7 +2219,7 @@ void hvm_shadow_handle_cd(struct vcpu *v,
> unsigned long value)
>              domain_pause_nosync(v->domain);
>  
>              /* Flush physical caches. */
> -            flush_all(FLUSH_CACHE);
> +            flush_mask(v->domain->dirty_cpumask, FLUSH_CACHE);
>              hvm_set_uc_mode(v, 1);
>  
>              domain_unpause(v->domain);
> diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
> index fb051d59c3..0d804c1fa0 100644
> --- a/xen/arch/x86/hvm/mtrr.c
> +++ b/xen/arch/x86/hvm/mtrr.c
> @@ -631,7 +631,7 @@ int hvm_set_mem_pinned_cacheattr(struct domain
> *d, uint64_t gfn_start,
>                          break;
>                      /* fall through */
>                  default:
> -                    flush_all(FLUSH_CACHE);
> +                    flush_mask(d->dirty_cpumask, FLUSH_CACHE);
>                      break;
>                  }
>                  return 0;
> @@ -683,7 +683,7 @@ int hvm_set_mem_pinned_cacheattr(struct domain
> *d, uint64_t gfn_start,
>     list_add_rcu(&range->list, &d-
> >arch.hvm.pinned_cacheattr_ranges);
>     p2m_memory_type_changed(d);
>     if ( type != PAT_TYPE_WRBACK )
> -        flush_all(FLUSH_CACHE);
> +        flush_mask(d->dirty_cpumask, FLUSH_CACHE);
>  
>     return 0;
>  }
> @@ -785,7 +785,7 @@ void memory_type_changed(struct domain *d)
>          d->vcpu && d->vcpu[0] )
>      {
>          p2m_memory_type_changed(d);
> -        flush_all(FLUSH_CACHE);
> +        flush_mask(d->dirty_cpumask, FLUSH_CACHE);
>      }
>  }
>  
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index cfea5b5523..383e763d7d 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -2395,7 +2395,7 @@ static void svm_vmexit_mce_intercept(
>  static void svm_wbinvd_intercept(void)
>  {
>      if ( cache_flush_permitted(current->domain) )
> -        flush_all(FLUSH_CACHE);
> +        flush_mask(current->domain->dirty_cpumask, FLUSH_CACHE);
>  }
>  
>  static void svm_vmexit_do_invalidate_cache(struct cpu_user_regs
> *regs,
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 86b8916a5d..a05c7036c4 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -3349,7 +3349,7 @@ static void vmx_wbinvd_intercept(void)
>          return;
>  
>      if ( cpu_has_wbinvd_exiting )
> -        flush_all(FLUSH_CACHE);
> +        flush_mask(current->domain->dirty_cpumask, FLUSH_CACHE);
>      else
>          wbinvd();
>  }


From xen-devel-bounces@lists.xenproject.org Sat Dec 12 13:53:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Dec 2020 13:53:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51171.90079 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ko5Kv-0001nC-Ca; Sat, 12 Dec 2020 13:53:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51171.90079; Sat, 12 Dec 2020 13:53:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ko5Kv-0001n5-9N; Sat, 12 Dec 2020 13:53:25 +0000
Received: by outflank-mailman (input) for mailman id 51171;
 Sat, 12 Dec 2020 13:53:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BfRh=FQ=gmail.com=marcandre.lureau@srs-us1.protection.inumbo.net>)
 id 1ko5Ku-0001n0-Cz
 for xen-devel@lists.xenproject.org; Sat, 12 Dec 2020 13:53:24 +0000
Received: from mail-ed1-x544.google.com (unknown [2a00:1450:4864:20::544])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ab2a89e8-f81f-455e-98ba-0e7b79a8f4c6;
 Sat, 12 Dec 2020 13:53:23 +0000 (UTC)
Received: by mail-ed1-x544.google.com with SMTP id v22so12359642edt.9
 for <xen-devel@lists.xenproject.org>; Sat, 12 Dec 2020 05:53:23 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ab2a89e8-f81f-455e-98ba-0e7b79a8f4c6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=aqJp4gKfamwnwqe8EMieEJaV+n0fzH9AmnYkNTme3fQ=;
        b=TC+B4GNRTpn/r+SvL+bvNGYRqF3e8Vj1nDNqdPPO4ItwyM9NESVl6FUy+NYJbvv6op
         hOB08xCYyw0iQP0A8lF2jmnF9pqVq+GqWhzpAJoYg9ISZu1z9cxQCw5zD7hSBPmXvfpi
         tEg3reKaZg4RwHzdNRXuZebgzfDbWmpxhQxXqFPRsjd8yi4+drOgBBry/98/eAWGaKJw
         8zmHMVfoiMIEURI/0g8m9ib9djl4YfB93MOBvy2NN9o0d9TWELP0HbyX4GyUhh3vDcDA
         J+Z6Wdh8eynJ5jj4PW8EXBfNdMTqNoD/wg1EpmTfGv5zaEgohAt8iGwbx8oG8sfGZe/6
         WI+w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=aqJp4gKfamwnwqe8EMieEJaV+n0fzH9AmnYkNTme3fQ=;
        b=G+f6u2FknhmX9T0UZ/bQno7Kq5M8pXvJGYDWaklgf4F8ShVvg+f4zW20F1QBlbV6OU
         5gpADZ7CE0qy1bhq4gVf/57ogslpDOxiAorDXo1+iKklAKnDQO9/GloGW7yzXf5Zuf1n
         yhqXQrcbKSzxIr5nB34HlbOFEI2+n0392B7QicCFlDDjXtRmfpPQdVpQfBTg36X8bGi5
         wc0QfxYw5ti7kwyW+AsfDpaJWF3RanmetXeiHK12JxGkcizsC2lVupmQRJp9xcsXHlJw
         KXiCmMX8DZ9oKszjv5Gu3pEtAY9fYh/mFbSon4wLsAsQV0P3DIB0tAIXFAExAg+yqVuH
         O9pA==
X-Gm-Message-State: AOAM530g+7HArpiOfVjE3GnaYnAbVE0ZWN4Olt+su6Alx2pxafEoYpEo
	z6Sx0+2APygqCoOIv8dEdXM6FTFxIWbxe1nCwPc=
X-Google-Smtp-Source: ABdhPJwRYgIUgihzry5rDxdOpf6xvg2ddglMC+Ji3KvWM3mWc1/FcIcg0uP8pe4SFDAao5orqB2pkFnuroGhvbtV1rg=
X-Received: by 2002:a05:6402:1155:: with SMTP id g21mr17219102edw.53.1607781202707;
 Sat, 12 Dec 2020 05:53:22 -0800 (PST)
MIME-Version: 1.0
References: <20201210134752.780923-1-marcandre.lureau@redhat.com> <20201210134752.780923-12-marcandre.lureau@redhat.com>
In-Reply-To: <20201210134752.780923-12-marcandre.lureau@redhat.com>
From: =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@gmail.com>
Date: Sat, 12 Dec 2020 17:53:10 +0400
Message-ID: <CAJ+F1CL=m4bLdCaKHYuVNTzBdGZnK-_q5pGNoV8N37-H51u+Dw@mail.gmail.com>
Subject: Re: [PATCH v3 11/13] compiler: remove GNUC check
To: QEMU <qemu-devel@nongnu.org>
Cc: Peter Maydell <peter.maydell@linaro.org>, Stefano Stabellini <sstabellini@kernel.org>, 
	Paul Durrant <paul@xen.org>, Richard Henderson <richard.henderson@linaro.org>, 
	Laurent Vivier <laurent@vivier.eu>, "Dr. David Alan Gilbert" <dgilbert@redhat.com>, 
	"open list:ARM" <qemu-arm@nongnu.org>, Gerd Hoffmann <kraxel@redhat.com>, 
	Stefan Hajnoczi <stefanha@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>, 
	Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org, 
	=?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>
Content-Type: multipart/alternative; boundary="00000000000076cb0505b644bb52"

--00000000000076cb0505b644bb52
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

On Thu, Dec 10, 2020 at 6:14 PM <marcandre.lureau@redhat.com> wrote:

> From: Marc-André Lureau <marcandre.lureau@redhat.com>
>
> QEMU requires Clang or GCC, that define and support __GNUC__ extensions.
>
> Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
> ---
>  include/qemu/compiler.h | 8 +-------
>  1 file changed, 1 insertion(+), 7 deletions(-)
>
> diff --git a/include/qemu/compiler.h b/include/qemu/compiler.h
> index 6212295e52..5e6cf2c8e8 100644
> --- a/include/qemu/compiler.h
> +++ b/include/qemu/compiler.h
> @@ -64,14 +64,10 @@
>      (offsetof(container, field) + sizeof_field(container, field))
>
>  /* Convert from a base type to a parent type, with compile time
> checking.  */
> -#ifdef __GNUC__
>  #define DO_UPCAST(type, field, dev) ( __extension__ ( { \
>      char __attribute__((unused)) offset_must_be_zero[ \
>          -offsetof(type, field)]; \
>      container_of(dev, type, field);}))
> -#else
> -#define DO_UPCAST(type, field, dev) container_of(dev, type, field)
> -#endif
>
>  #define typeof_field(type, field) typeof(((type *)0)->field)
>  #define type_check(t1,t2) ((t1*)0 - (t2*)0)
> @@ -102,7 +98,7 @@
>  #if defined(__clang__)
>  /* clang doesn't support gnu_printf, so use printf. */
>  # define GCC_FMT_ATTR(n, m) __attribute__((format(printf, n, m)))
> -#elif defined(__GNUC__)
> +#else
>  /* Use gnu_printf (qemu uses standard format strings). */
>  # define GCC_FMT_ATTR(n, m) __attribute__((format(gnu_printf, n, m)))
>  # if defined(_WIN32)
> @@ -112,8 +108,6 @@
>   */
>  #  define __printf__ __gnu_printf__
>  # endif
> -#else
> -#define GCC_FMT_ATTR(n, m)
> -#endif
>  #endif
>
>  #ifndef __has_warning
> --
> 2.29.0
>
>
Peter, Paolo, anyone to give a review?
thanks


-- 
Marc-André Lureau

--00000000000076cb0505b644bb52--


From xen-devel-bounces@lists.xenproject.org Sat Dec 12 14:01:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Dec 2020 14:01:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51179.90091 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ko5SE-0002uE-58; Sat, 12 Dec 2020 14:00:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51179.90091; Sat, 12 Dec 2020 14:00:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ko5SE-0002u7-27; Sat, 12 Dec 2020 14:00:58 +0000
Received: by outflank-mailman (input) for mailman id 51179;
 Sat, 12 Dec 2020 14:00:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=twOw=FQ=linaro.org=peter.maydell@srs-us1.protection.inumbo.net>)
 id 1ko5SC-0002u2-4A
 for xen-devel@lists.xenproject.org; Sat, 12 Dec 2020 14:00:56 +0000
Received: from mail-ed1-x542.google.com (unknown [2a00:1450:4864:20::542])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2fb17729-8f50-472c-bd9f-3f358ec2a00c;
 Sat, 12 Dec 2020 14:00:55 +0000 (UTC)
Received: by mail-ed1-x542.google.com with SMTP id h16so12382703edt.7
 for <xen-devel@lists.xenproject.org>; Sat, 12 Dec 2020 06:00:54 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2fb17729-8f50-472c-bd9f-3f358ec2a00c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=tzUsb1oAOgAwkhpFJliDtvmjm9NTDD2Shb+dSAyDTts=;
        b=n9AuUX2UaXmUg0AQdVSCooAu338lwKCj9TI2vXMFYlfT+MrTniTIB3Ov4IeJJ5EASR
         fbuhpAaxcZ3RGcMtOTpL0vuS7vQfIonVhunxKGjpdkHffC8pDVeLM+InDVDm6o6XpkbL
         P6mG/3LV2OG3XF25pE3QaRjGvYuLBCnnBACJy2Dm+Y04ZS8ugdVDQSP+lDYK/HgVSMgZ
         29A4LIoUdyE3oh2fqxw4yUP5njhz0KUSLA5tl6Um9hjkmo+W55NmseDlfeyJ7sFCzJt1
         CnKA/jTtQxx8LpkVLOx9uTfZIrZeUYOjPn0FxC1TP9zqMDzGn71l0mcZNCBuVxM6ZVxE
         Tk0g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=tzUsb1oAOgAwkhpFJliDtvmjm9NTDD2Shb+dSAyDTts=;
        b=CVgijQYcVm2B710/mz9IMa+dgqbd8TXMjtrQahJzSZj/Lwv+tMy+IgB5xn6CsGYuzQ
         o/2Y3FIUSC1fLqwUu2R56KcV0Vyo34F5qW+ROevxnFgffSpoN2E5CdzFciFLpjfdBjOu
         JTt11XTBCJVSI2BqnQ8E1O2ziZfFCvjgbdrCEneTfzEtgA3aRAXe6qgste9hCMuHGyub
         b3fKMK5Crj52uH5yR/qToIqwpbX7r5kB0CNByZ5GHrvXSXzzmtL/a0cgYWFEzHK3KNO3
         2Qru1SrDqNkPa/Fpl5Oyr6K0Bj1fe78t7tV0TaaSoqcvoQAhHpVCT04omRqKSsTUOOIK
         llPQ==
X-Gm-Message-State: AOAM531al2L2Jqd9IVT1DPInztyj75aDjQggnANf8F1PjmtYfcPMs/vt
	h3hTXWpfd6wN+D8aymV/0WYF1WrcV5unwHS/YoNGJw==
X-Google-Smtp-Source: ABdhPJx7XdlDKAFWotIqaFdp3hZice5M/x2dBsqEs5YhnYm27HfNvYl5CAYYbew51YQQZuw/X321TEP+/lm/MX+yv4c=
X-Received: by 2002:aa7:c388:: with SMTP id k8mr16509651edq.36.1607781654038;
 Sat, 12 Dec 2020 06:00:54 -0800 (PST)
MIME-Version: 1.0
References: <20201210134752.780923-1-marcandre.lureau@redhat.com> <20201210134752.780923-12-marcandre.lureau@redhat.com>
In-Reply-To: <20201210134752.780923-12-marcandre.lureau@redhat.com>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Sat, 12 Dec 2020 14:00:42 +0000
Message-ID: <CAFEAcA_3eSKuAZj=pwV33csLdbVnsAhkm4ZNehinn7YYUkJ44A@mail.gmail.com>
Subject: Re: [PATCH v3 11/13] compiler: remove GNUC check
To: =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>
Cc: QEMU Developers <qemu-devel@nongnu.org>, =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>, 
	Richard Henderson <richard.henderson@linaro.org>, Laurent Vivier <laurent@vivier.eu>, 
	Paul Durrant <paul@xen.org>, "open list:X86" <xen-devel@lists.xenproject.org>, 
	Stefan Hajnoczi <stefanha@redhat.com>, Gerd Hoffmann <kraxel@redhat.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, 
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-arm <qemu-arm@nongnu.org>, 
	Paolo Bonzini <pbonzini@redhat.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

On Thu, 10 Dec 2020 at 13:50, <marcandre.lureau@redhat.com> wrote:
>
> From: Marc-André Lureau <marcandre.lureau@redhat.com>
>
> QEMU requires Clang or GCC, that define and support __GNUC__ extensions.
>
> Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
> ---
>  include/qemu/compiler.h | 8 +-------
>  1 file changed, 1 insertion(+), 7 deletions(-)


Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM


From xen-devel-bounces@lists.xenproject.org Sat Dec 12 14:13:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Dec 2020 14:13:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51185.90102 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ko5eF-00041K-9Q; Sat, 12 Dec 2020 14:13:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51185.90102; Sat, 12 Dec 2020 14:13:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ko5eF-00041D-6F; Sat, 12 Dec 2020 14:13:23 +0000
Received: by outflank-mailman (input) for mailman id 51185;
 Sat, 12 Dec 2020 14:13:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ko5eD-000415-BF; Sat, 12 Dec 2020 14:13:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ko5eD-0005QN-2l; Sat, 12 Dec 2020 14:13:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ko5eC-00026u-Og; Sat, 12 Dec 2020 14:13:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ko5eC-000615-OD; Sat, 12 Dec 2020 14:13:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4hvDCcAnFNjmoNA+q0iSWswyaYDh/GlbuupxgDnGnHg=; b=ECWdC87MFDba2k5yd8LR7rL8I3
	DlKJHJWvBs4Oz18EoR1QJCgnGfezVETqOSF3hLquUmtEAcA5WHnPbcilveCXe+Dym1oAC71lbcl4O
	o2Rrv7VPpnU6McGlCakaXd5Jy99+e1aEke8oxYWrmZgAZDK8ib3Iy6b1hNXferaYZVEc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157458-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157458: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 12 Dec 2020 14:13:20 +0000

flight 157458 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157458/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    3 days
Failing since        157348  2020-12-09 15:39:39 Z    2 days   15 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    1 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Dec 12 14:57:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Dec 2020 14:57:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51203.90118 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ko6Ky-00080p-Le; Sat, 12 Dec 2020 14:57:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51203.90118; Sat, 12 Dec 2020 14:57:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ko6Ky-00080i-IQ; Sat, 12 Dec 2020 14:57:32 +0000
Received: by outflank-mailman (input) for mailman id 51203;
 Sat, 12 Dec 2020 14:57:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ko6Kx-00080a-Bb; Sat, 12 Dec 2020 14:57:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ko6Kx-0006Hx-4o; Sat, 12 Dec 2020 14:57:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ko6Kw-0003Tj-RW; Sat, 12 Dec 2020 14:57:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ko6Kw-0005FZ-R3; Sat, 12 Dec 2020 14:57:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8IJFIMJUN3gLbXDBpNuI7WfCBsi5GKpWeRkXqMAEHgQ=; b=MIGTCZLon8wPjvC5tX2gQM9XP6
	HBMhK/bJxZUqVvCdWKsJMAvW6X3gzrI0VckrNRrhCgW12Uy43J4qB3qt5o0qJ7r3mmDY7rZ+eqE70
	H1NHJY4dGOzyGvlApiAasxsZMVFuOgXB5ODvlMiKX/MRM27fGDJH19aTufyRiWujlfRE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157456-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157456: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a4b307b0eaf44530cf03934e4db161db1ea7389f
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 12 Dec 2020 14:57:30 +0000

flight 157456 qemu-mainline real [real]
flight 157461 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157456/
http://logs.test-lab.xenproject.org/osstest/logs/157461/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                a4b307b0eaf44530cf03934e4db161db1ea7389f
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  114 days
Failing since        152659  2020-08-21 14:07:39 Z  113 days  236 attempts
Testing same since   157456  2020-12-12 06:20:23 Z    0 days    1 attempts

------------------------------------------------------------
308 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 74572 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Dec 12 17:45:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Dec 2020 17:45:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51255.90145 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ko8xf-00086j-Rl; Sat, 12 Dec 2020 17:45:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51255.90145; Sat, 12 Dec 2020 17:45:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ko8xf-00086c-OJ; Sat, 12 Dec 2020 17:45:39 +0000
Received: by outflank-mailman (input) for mailman id 51255;
 Sat, 12 Dec 2020 17:45:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ko8xe-00086U-9p; Sat, 12 Dec 2020 17:45:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ko8xe-0001rC-2V; Sat, 12 Dec 2020 17:45:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ko8xd-00011v-QS; Sat, 12 Dec 2020 17:45:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ko8xd-0006Iy-Q0; Sat, 12 Dec 2020 17:45:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EPCmTOidoNI/gRGbjEisbY3QLOOAqt23L6z1j/X9jyU=; b=sEH6PertbpqewVlzbgJWdA3x2L
	FRez4q9WmrOaLFyB/21Oaumn8T9WyEgzuFy7aIcK9oPaMkMymSMG+Pj6BxZBdnof5o6/QRbyV5B2n
	KtWKV7vngFGp9ZUFmzfURH33PJVIJvVqeN+saDXGAz7iWXqJsUu8zmvf3TwUem7HUJdM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157459-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157459: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-xl-credit1:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-install(5):broken:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=7f376f1917d7461e05b648983e8d2aea9d0712b2
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 12 Dec 2020 17:45:37 +0000

flight 157459 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157459/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit1     <job status>                 broken
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1   5 host-install(5)       broken blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                7f376f1917d7461e05b648983e8d2aea9d0712b2
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  133 days
Failing since        152366  2020-08-01 20:49:34 Z  132 days  230 attempts
Testing same since   157459  2020-12-12 07:58:35 Z    0 days    1 attempts

------------------------------------------------------------
3685 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  broken  
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-credit1 broken
broken-step test-arm64-arm64-xl-credit1 host-install(5)

Not pushing.

(No revision log; it would be 706083 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Dec 12 18:05:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Dec 2020 18:05:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51267.90159 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ko9H0-0001oI-M5; Sat, 12 Dec 2020 18:05:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51267.90159; Sat, 12 Dec 2020 18:05:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ko9H0-0001oB-JI; Sat, 12 Dec 2020 18:05:38 +0000
Received: by outflank-mailman (input) for mailman id 51267;
 Sat, 12 Dec 2020 18:05:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ko9Gz-0001o3-19; Sat, 12 Dec 2020 18:05:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ko9Gy-0002M3-Ny; Sat, 12 Dec 2020 18:05:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ko9Gy-0002Pq-G6; Sat, 12 Dec 2020 18:05:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ko9Gy-0007M8-Fb; Sat, 12 Dec 2020 18:05:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3tcksIC1QRwuMSTQgnDmlwCwyblfwakBtjDGnLL3VgM=; b=YdL2nloWKTKySQJlmCPTEq8LIO
	2ihBTZvIst//Lolh7FtX2u+3+z0oTRFP+DB8kJ+YhtxEQ/2nRQmxvH5rmrNxtxKDkbDLEnJCeiaCY
	v8QK6Xw5oSyu3qrrAGHHBfo6GNFGupu/f1MYxxf9rz4IKL7Pg49ZxVajliVusIkoavUI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157462-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157462: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 12 Dec 2020 18:05:36 +0000

flight 157462 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157462/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    3 days
Failing since        157348  2020-12-09 15:39:39 Z    3 days   16 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    1 days    9 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Dec 12 19:01:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Dec 2020 19:01:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51283.90175 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koA8n-0007b9-OD; Sat, 12 Dec 2020 19:01:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51283.90175; Sat, 12 Dec 2020 19:01:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koA8n-0007b2-L9; Sat, 12 Dec 2020 19:01:13 +0000
Received: by outflank-mailman (input) for mailman id 51283;
 Sat, 12 Dec 2020 19:01:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GRo8=FQ=kernel.org=pr-tracker-bot@srs-us1.protection.inumbo.net>)
 id 1koA8n-0007ax-6T
 for xen-devel@lists.xenproject.org; Sat, 12 Dec 2020 19:01:13 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 913e356a-4934-4f40-b0c9-836062720428;
 Sat, 12 Dec 2020 19:01:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 913e356a-4934-4f40-b0c9-836062720428
Subject: Re: [GIT PULL] xen: branch for v5.10-rc8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607799671;
	bh=/z4e4OBG9dX34wwm48LbjXS2Jefc/mywFk+XdssA/Ws=;
	h=From:In-Reply-To:References:Date:To:Cc:From;
	b=COim9VWaDQNCkRwjVETWa6JIi88ouDhkK0sLAqw9k4SxZaJfC2DbE5z+3tpvY6nje
	 lx7hjbWu8CYxXnQcYFpwHXNkgZwDiqGqYHJBPNB8Eq37+8o/v3FJhIDtemKO1CWbGJ
	 3P31QoluUQ9XWyuDs/Y2edDVkxF+AgmA6eiuIQB3CCaqbLZ5NMyAgxc1/QZU+9+2eJ
	 6EAbcFJ/4VOL9hM8xUswWWQSsi1lyVhWLaVJFFLf2SSaVPmtetKeKXOU0yir8VJDMo
	 RTV2YvVAT/rHM8wy9U0UaXgzwqkNzuSiL7HtitZjH6soF3ZftZNvk/IDcesuCn/z9s
	 kLziimxS04E+Q==
From: pr-tracker-bot@kernel.org
In-Reply-To: <20201211085309.8128-1-jgross@suse.com>
References: <20201211085309.8128-1-jgross@suse.com>
X-PR-Tracked-List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
X-PR-Tracked-Message-Id: <20201211085309.8128-1-jgross@suse.com>
X-PR-Tracked-Remote: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.10c-rc8-tag
X-PR-Tracked-Commit-Id: ee32f32335e8c7f6154bf397f4ac9b6175b488a8
X-PR-Merge-Tree: torvalds/linux.git
X-PR-Merge-Refname: refs/heads/master
X-PR-Merge-Commit-Id: b53966ffd4c0676c02987d4fc33b99bdfc548cf0
Message-Id: <160779967131.16081.10288142971358980370.pr-tracker-bot@kernel.org>
Date: Sat, 12 Dec 2020 19:01:11 +0000
To: Juergen Gross <jgross@suse.com>
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com

The pull request you sent on Fri, 11 Dec 2020 09:53:09 +0100:

> git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.10c-rc8-tag

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/b53966ffd4c0676c02987d4fc33b99bdfc548cf0

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


From xen-devel-bounces@lists.xenproject.org Sat Dec 12 23:31:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 12 Dec 2020 23:31:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51332.90193 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koEM9-0008CJ-2x; Sat, 12 Dec 2020 23:31:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51332.90193; Sat, 12 Dec 2020 23:31:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koEM8-0008CC-Vu; Sat, 12 Dec 2020 23:31:16 +0000
Received: by outflank-mailman (input) for mailman id 51332;
 Sat, 12 Dec 2020 23:31:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koEM7-0008C4-Te; Sat, 12 Dec 2020 23:31:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koEM7-0000gg-Kk; Sat, 12 Dec 2020 23:31:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koEM7-0000LZ-D8; Sat, 12 Dec 2020 23:31:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1koEM7-0006yd-Ce; Sat, 12 Dec 2020 23:31:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=d3cBJRekGG8L6ysNqURe91MUKYVLcWHrNIgZgcbZ4YM=; b=5lnuXbn5mhOb+vSlH+8o46lR0h
	k9DCNii3GMPJYlIGNkXHOd2vi6OzckcMscaOPf4GP9MdtU5d9+37LihOxdDK+R6ITLfcJAyNZC2KG
	D8iQgH6raTp5iO61hwnvOnLuhMbScLPP+JHbqSATQ4K7OZ2Nx/fjjwTcm/QwPVPZ1j1M=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157467-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157467: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 12 Dec 2020 23:31:15 +0000

flight 157467 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157467/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    3 days
Failing since        157348  2020-12-09 15:39:39 Z    3 days   17 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    1 days   10 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 00:28:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Dec 2020 00:28:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51344.90214 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koFEx-0005O5-QP; Sun, 13 Dec 2020 00:27:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51344.90214; Sun, 13 Dec 2020 00:27:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koFEx-0005Ny-N6; Sun, 13 Dec 2020 00:27:55 +0000
Received: by outflank-mailman (input) for mailman id 51344;
 Sun, 13 Dec 2020 00:27:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koFEx-0005NP-11; Sun, 13 Dec 2020 00:27:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koFEw-0002T0-IG; Sun, 13 Dec 2020 00:27:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koFEw-0002Fs-3O; Sun, 13 Dec 2020 00:27:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1koFEw-0005HN-2x; Sun, 13 Dec 2020 00:27:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ANuWrw29m8fBZBXeVqwmYiU91pjzHQU8gOsZU/ZG1IQ=; b=Rj6L1bEZlFLmiNj+PjQM3/+uXz
	XH8FmgLxdDnL7zbVFlqvOqkUZDgFltaF4hSXTu9c47g7sEqb11YdqSfrJwlNoT227WiLfHMzfI6VS
	EZ0YcnBFnZA6GAoxi4DIxORx20f70b5qjfqCg2SzoaNwQZC9HR9HUA1sD/r5GGcDl4Og=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157469-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157469: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Dec 2020 00:27:54 +0000

flight 157469 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157469/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 157345
 build-i386                    6 xen-build                fail REGR. vs. 157345
 build-amd64-xsm               6 xen-build                fail REGR. vs. 157345
 build-i386-xsm                6 xen-build                fail REGR. vs. 157345

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    3 days
Failing since        157348  2020-12-09 15:39:39 Z    3 days   18 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    1 days   11 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 00:57:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Dec 2020 00:57:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51354.90229 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koFhg-0008KA-7d; Sun, 13 Dec 2020 00:57:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51354.90229; Sun, 13 Dec 2020 00:57:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koFhg-0008K3-4b; Sun, 13 Dec 2020 00:57:36 +0000
Received: by outflank-mailman (input) for mailman id 51354;
 Sun, 13 Dec 2020 00:57:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koFhe-0008Jv-Oi; Sun, 13 Dec 2020 00:57:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koFhe-00033Q-Gv; Sun, 13 Dec 2020 00:57:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koFhe-00031O-8M; Sun, 13 Dec 2020 00:57:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1koFhe-0004ON-7t; Sun, 13 Dec 2020 00:57:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YCmil/m2CLuOStKQWd0HB4x5GSX/aSi1icfzMYrp87E=; b=tm7pGwdjEFyIYb65QON8juFpXg
	r1onfosTV4yM40RkMaioGvXecQca0EHUeAis+E2Vb5s2/7VU0YeemXUtNvwW9eeddDI1J/YKeBkQ1
	z7TrL6Ol63I2HtTKZA/5pHH8eAOzjgtNHpjueP49cDBn+CY9IKGspBhV85CSsTu6OoPc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157472-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157472: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Dec 2020 00:57:34 +0000

flight 157472 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157472/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 157345
 build-i386                    6 xen-build                fail REGR. vs. 157345
 build-amd64-xsm               6 xen-build                fail REGR. vs. 157345
 build-i386-xsm                6 xen-build                fail REGR. vs. 157345

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    3 days
Failing since        157348  2020-12-09 15:39:39 Z    3 days   19 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    1 days   12 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 01:09:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Dec 2020 01:09:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51363.90243 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koFtU-00010z-D3; Sun, 13 Dec 2020 01:09:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51363.90243; Sun, 13 Dec 2020 01:09:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koFtU-00010s-9D; Sun, 13 Dec 2020 01:09:48 +0000
Received: by outflank-mailman (input) for mailman id 51363;
 Sun, 13 Dec 2020 01:09:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koFtS-00010k-QR; Sun, 13 Dec 2020 01:09:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koFtS-0002pf-Ig; Sun, 13 Dec 2020 01:09:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koFtS-0003Id-6u; Sun, 13 Dec 2020 01:09:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1koFtS-0005EP-6M; Sun, 13 Dec 2020 01:09:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/qu+quxucznjGipKQ2K8ECKoMyWrvZtRhgapFgSm6P0=; b=4jr+ezsmhbdl/8ZlDeeZLpwnBE
	rhzZ8XVCxZLCRU+Ylo8TCowEdjpabfJ6M+tWL4aPhWLN2k8IQRyM8mDhoETMugZNOpYylLK1nElVO
	IMdi1NzgMf0nY744S/vg0dIW6PQELhpbGkO2jRr3WcyUqFRbUJlqVa8yc05hjXZYuE1g=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157463-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157463: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a4b307b0eaf44530cf03934e4db161db1ea7389f
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Dec 2020 01:09:46 +0000

flight 157463 qemu-mainline real [real]
flight 157471 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157463/
http://logs.test-lab.xenproject.org/osstest/logs/157471/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                a4b307b0eaf44530cf03934e4db161db1ea7389f
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  114 days
Failing since        152659  2020-08-21 14:07:39 Z  113 days  237 attempts
Testing same since   157456  2020-12-12 06:20:23 Z    0 days    2 attempts

------------------------------------------------------------
308 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 74572 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 01:55:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Dec 2020 01:55:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51377.90265 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koGc0-0005q6-BZ; Sun, 13 Dec 2020 01:55:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51377.90265; Sun, 13 Dec 2020 01:55:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koGc0-0005py-5O; Sun, 13 Dec 2020 01:55:48 +0000
Received: by outflank-mailman (input) for mailman id 51377;
 Sun, 13 Dec 2020 01:55:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koGby-0005pq-TN; Sun, 13 Dec 2020 01:55:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koGby-0003j4-Ho; Sun, 13 Dec 2020 01:55:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koGby-0004Jq-AW; Sun, 13 Dec 2020 01:55:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1koGby-0000AI-A1; Sun, 13 Dec 2020 01:55:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2NFoLEcGv5X+wEgLvIoaubxB4UukChOVFduBGSr3tzQ=; b=xeRDTQ+/t2R4W6/GYtVLA5dvGf
	6WOSCPf8aem4mJZvfhVhenLd73P3HOnOiMZLjzX3eZkTS6XSDA3WzNXZSWsfnsDJ6HuUm6zPvn/wJ
	cpiLqmmGph1WJRegJyN3sCZMgZDnC92FA+bLBIuwoab1k0TFS/6fIcq2s8mHfmMj2hik=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157473-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157473: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Dec 2020 01:55:46 +0000

flight 157473 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157473/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 157345
 build-i386                    6 xen-build                fail REGR. vs. 157345
 build-amd64-xsm               6 xen-build                fail REGR. vs. 157345
 build-i386-xsm                6 xen-build                fail REGR. vs. 157345

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    3 days
Failing since        157348  2020-12-09 15:39:39 Z    3 days   20 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    1 days   13 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 02:52:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Dec 2020 02:52:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51388.90279 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koHUY-0003Xx-Az; Sun, 13 Dec 2020 02:52:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51388.90279; Sun, 13 Dec 2020 02:52:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koHUY-0003Xq-80; Sun, 13 Dec 2020 02:52:10 +0000
Received: by outflank-mailman (input) for mailman id 51388;
 Sun, 13 Dec 2020 02:52:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koHUW-0003Xi-UX; Sun, 13 Dec 2020 02:52:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koHUW-0005KP-RT; Sun, 13 Dec 2020 02:52:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koHUW-0005Xj-Hv; Sun, 13 Dec 2020 02:52:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1koHUW-0005HU-HP; Sun, 13 Dec 2020 02:52:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=m0TdEOyT2Y5JpmQ+cQ9gd+C43Mm/dSfMoS84appiUMo=; b=L2Tb+rj7IMkStUAgwfbRWCYn3R
	8fdVtlcVockc5FshRXOZqa8YNx2N7GBJK0GsZ5lGle/lCu0PAtT9yR8blmFfwsgBWnbw1PI4czaxy
	kjLUUdH1FAi7IoNO3gqIsqnB8pmuSZMYjDp0gwFaUQH4h4LX8n+k8vuiVJyT/jwbvO9E=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157477-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157477: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Dec 2020 02:52:08 +0000

flight 157477 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157477/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 157345
 build-i386                    6 xen-build                fail REGR. vs. 157345
 build-amd64-xsm               6 xen-build                fail REGR. vs. 157345
 build-i386-xsm                6 xen-build                fail REGR. vs. 157345

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    3 days
Failing since        157348  2020-12-09 15:39:39 Z    3 days   21 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    1 days   14 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 03:24:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Dec 2020 03:24:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51397.90295 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koI07-0006bd-Tn; Sun, 13 Dec 2020 03:24:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51397.90295; Sun, 13 Dec 2020 03:24:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koI07-0006bW-Pm; Sun, 13 Dec 2020 03:24:47 +0000
Received: by outflank-mailman (input) for mailman id 51397;
 Sun, 13 Dec 2020 03:24:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koI07-0006bO-6g; Sun, 13 Dec 2020 03:24:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koI07-00060C-08; Sun, 13 Dec 2020 03:24:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koI06-0006U1-LM; Sun, 13 Dec 2020 03:24:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1koI06-0007VZ-Ks; Sun, 13 Dec 2020 03:24:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sEwKivZnpdAnczHRZj0QDI93KumiQ5aMZkzH13hfLZk=; b=sauLBABhgp7OvAv5KJYmlI/7yE
	TT1ItE2VmRP1n2NmeQ2RvQN4x03hoMP2XPvzla1nWcUX6Em9ibg+zK+nLFLL0+IpZLNPFRSn4/49V
	z/RmPz1ZcD/cjuZHyGqjIwyg0gRrda/sJcXNsMTDHN20KL29yEWQk5nF+bwoPSRIyJTY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157466-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157466: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-xl-credit1:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:host-install(5):broken:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=7f376f1917d7461e05b648983e8d2aea9d0712b2
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Dec 2020 03:24:46 +0000

flight 157466 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157466/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit1     <job status>                 broken  in 157459
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-seattle   8 xen-boot         fail in 157459 pass in 157466
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 157459

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1 5 host-install(5) broken in 157459 blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                7f376f1917d7461e05b648983e8d2aea9d0712b2
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  134 days
Failing since        152366  2020-08-01 20:49:34 Z  133 days  231 attempts
Testing same since   157459  2020-12-12 07:58:35 Z    0 days    2 attempts

------------------------------------------------------------
3685 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-credit1 broken

Not pushing.

(No revision log; it would be 706083 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 03:53:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Dec 2020 03:53:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51408.90309 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koIRy-00019Z-82; Sun, 13 Dec 2020 03:53:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51408.90309; Sun, 13 Dec 2020 03:53:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koIRy-00019S-54; Sun, 13 Dec 2020 03:53:34 +0000
Received: by outflank-mailman (input) for mailman id 51408;
 Sun, 13 Dec 2020 03:53:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koIRx-00019K-4K; Sun, 13 Dec 2020 03:53:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koIRx-0006aA-0J; Sun, 13 Dec 2020 03:53:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koIRw-0007DX-PW; Sun, 13 Dec 2020 03:53:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1koIRw-0007ZQ-P1; Sun, 13 Dec 2020 03:53:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PE+7iloyWr59ZZwGRyz2VcybQHZiMdAynLKoVgZZAzE=; b=JSdey9HRG8YyHl8/ChHik7HoI8
	HLHlLriSvxOoxr670xCmZAjzZhbQq8BC2NqNMiu5/y6qBCG0nbsSbNoDpOPEk1eEh4rXxuVpU7Zl6
	7GplUWWeElGXgtAQNwthmLW2v+A3gLBw7gIaVHXimlc8OcLgMt/gnfcej9LfpnxZA0Ww=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157478-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157478: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Dec 2020 03:53:32 +0000

flight 157478 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157478/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 157345
 build-i386                    6 xen-build                fail REGR. vs. 157345
 build-amd64-xsm               6 xen-build                fail REGR. vs. 157345
 build-i386-xsm                6 xen-build                fail REGR. vs. 157345

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    3 days
Failing since        157348  2020-12-09 15:39:39 Z    3 days   22 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    2 days   15 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 




Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 04:37:35 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157480-mainreport@xen.org>
Subject: [ovmf test] 157480: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Dec 2020 04:37:27 +0000

flight 157480 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157480/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 157345
 build-i386                    6 xen-build                fail REGR. vs. 157345
 build-amd64-xsm               6 xen-build                fail REGR. vs. 157345
 build-i386-xsm                6 xen-build                fail REGR. vs. 157345

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    3 days
Failing since        157348  2020-12-09 15:39:39 Z    3 days   23 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    2 days   16 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 




Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 05:16:11 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157482-mainreport@xen.org>
Subject: [ovmf test] 157482: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Dec 2020 05:15:53 +0000

flight 157482 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157482/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 157345
 build-i386                    6 xen-build                fail REGR. vs. 157345
 build-amd64-xsm               6 xen-build                fail REGR. vs. 157345
 build-i386-xsm                6 xen-build                fail REGR. vs. 157345

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    3 days
Failing since        157348  2020-12-09 15:39:39 Z    3 days   24 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    2 days   17 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 




Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 06:04:50 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157484-mainreport@xen.org>
Subject: [ovmf test] 157484: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Dec 2020 06:04:40 +0000

flight 157484 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157484/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 157345
 build-i386                    6 xen-build                fail REGR. vs. 157345
 build-amd64-xsm               6 xen-build                fail REGR. vs. 157345
 build-i386-xsm                6 xen-build                fail REGR. vs. 157345

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    3 days
Failing since        157348  2020-12-09 15:39:39 Z    3 days   25 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    2 days   18 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 06:27:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Dec 2020 06:27:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51452.90376 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koKqV-0008As-LG; Sun, 13 Dec 2020 06:27:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51452.90376; Sun, 13 Dec 2020 06:27:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koKqV-0008Al-IH; Sun, 13 Dec 2020 06:27:03 +0000
Received: by outflank-mailman (input) for mailman id 51452;
 Sun, 13 Dec 2020 06:27:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koKqU-0008Ad-Bn; Sun, 13 Dec 2020 06:27:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koKqU-0001so-6X; Sun, 13 Dec 2020 06:27:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koKqT-0002pk-Vj; Sun, 13 Dec 2020 06:27:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1koKqT-0006ev-VD; Sun, 13 Dec 2020 06:27:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=d3yJbEb/qLoBBE3i6bULX/+inwJHbmiJ4RnpZBcdTsk=; b=Xivoe5kV3WcNorOd2pwZ6+h/3b
	ETXQjHbMuBSqo218hff22Kcs+HZvUAWUAqvILB9Mm0GGqDtmVJdfwhcFq1Wre5wLaX3kqkpdC8W7j
	h//kKU8jNHmdQ4fHiIMXFNgAQioGO+CmDt7+ZlN2bwf4V7jd7j7gn42HOQd8a/mayzAM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157485-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157485: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Dec 2020 06:27:01 +0000

flight 157485 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157485/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 157345
 build-i386                    6 xen-build                fail REGR. vs. 157345
 build-amd64-xsm               6 xen-build                fail REGR. vs. 157345
 build-i386-xsm                6 xen-build                fail REGR. vs. 157345

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    3 days
Failing since        157348  2020-12-09 15:39:39 Z    3 days   26 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    2 days   19 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 06:45:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Dec 2020 06:45:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51461.90391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koL8S-0001m1-7l; Sun, 13 Dec 2020 06:45:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51461.90391; Sun, 13 Dec 2020 06:45:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koL8S-0001lu-4c; Sun, 13 Dec 2020 06:45:36 +0000
Received: by outflank-mailman (input) for mailman id 51461;
 Sun, 13 Dec 2020 06:45:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koL8Q-0001lm-V5; Sun, 13 Dec 2020 06:45:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koL8Q-0002FM-LE; Sun, 13 Dec 2020 06:45:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koL8Q-0003JM-Ap; Sun, 13 Dec 2020 06:45:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1koL8Q-0001DI-AM; Sun, 13 Dec 2020 06:45:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FJ2SPpR15Jpx9AX5FGhsdLgM/H8rbRK+S9OWEklYAIY=; b=yxbnybmHBtHxeXxKNoM4VBY4RO
	2r4bUYqWQM/gvTdOQbeRc8N7SgSEIej0PgO4A9CXJpPKdmHAbh9z1BkIbXhL5Pzk7pyHui4OXiTbg
	bOwU813SGj4qRURH/qEWeyq9Vd+ec66p0yxJ2+OxKj21MWA/Tp2eF/5vIwGar/OUu2pI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157474-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157474: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=17584289af1aaa72c932e7e47c25d583b329dc45
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Dec 2020 06:45:34 +0000

flight 157474 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157474/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                17584289af1aaa72c932e7e47c25d583b329dc45
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  114 days
Failing since        152659  2020-08-21 14:07:39 Z  113 days  238 attempts
Testing same since   157474  2020-12-13 01:37:19 Z    0 days    1 attempts

------------------------------------------------------------
308 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 75406 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 07:06:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Dec 2020 07:06:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51471.90405 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koLSR-0003tc-7b; Sun, 13 Dec 2020 07:06:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51471.90405; Sun, 13 Dec 2020 07:06:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koLSR-0003tV-4Y; Sun, 13 Dec 2020 07:06:15 +0000
Received: by outflank-mailman (input) for mailman id 51471;
 Sun, 13 Dec 2020 07:06:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koLSQ-0003tN-FB; Sun, 13 Dec 2020 07:06:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koLSQ-0002h0-5A; Sun, 13 Dec 2020 07:06:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koLSP-0003lE-TS; Sun, 13 Dec 2020 07:06:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1koLSP-0001P3-T0; Sun, 13 Dec 2020 07:06:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FSmkYiF31BCXSKuSyMhQLXVRPLx5BDCTbeKYXWA44BI=; b=lULFFl/Lxj+n3SMxz8ONwL2Wg1
	CRjMMwtGn0uDFT8182EPzRyp4g0D7cCDqfvr1LAtqlhGuwBqllIS8OzLdt/fGfIdztacGSwVOXLT3
	zSfnKDTOb4DoH6ojTEPtrJUC3gP0xf2dZTldYzWHFFO/2tH54RSEvYJqRe6jikhGPqjc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157481-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157481: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-xsm:xen-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64:xen-build:fail:regression
    libvirt:build-i386:xen-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-xsm:xen-build:fail:regression
    libvirt:build-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:build-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=cd338954b7056ba5c98fa860ce120358cbb74566
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Dec 2020 07:06:13 +0000

flight 157481 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157481/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64                   6 xen-build                fail REGR. vs. 151777
 build-i386                    6 xen-build                fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-xsm                6 xen-build                fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              cd338954b7056ba5c98fa860ce120358cbb74566
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  156 days
Failing since        151818  2020-07-11 04:18:52 Z  155 days  150 attempts
Testing same since   157481  2020-12-13 04:19:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 32723 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 07:25:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Dec 2020 07:25:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51482.90421 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koLkx-0005vI-TY; Sun, 13 Dec 2020 07:25:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51482.90421; Sun, 13 Dec 2020 07:25:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koLkx-0005vB-QA; Sun, 13 Dec 2020 07:25:23 +0000
Received: by outflank-mailman (input) for mailman id 51482;
 Sun, 13 Dec 2020 07:25:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koLkw-0005v3-K0; Sun, 13 Dec 2020 07:25:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koLkw-00033v-Bq; Sun, 13 Dec 2020 07:25:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koLkw-0004BG-3g; Sun, 13 Dec 2020 07:25:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1koLkw-0004Od-3C; Sun, 13 Dec 2020 07:25:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=TCIvHygaGmK9zMcpdfNRKwLHT+AkoUZFFTPqserummY=; b=jFTP/NEE3UEgsabdAMIk33u05z
	9tLS+Bvo870g6jacIm48/oboujfFBbOEhQcdwa6kFIjSDIUIZ/iok7xmE+AnTYSAqjz7oHYBQZtwI
	zN5q8DzeXQLnAD/EjYFCbAXSr8rGU105F370GEl5HFt8gHRBzfYTDHspKhu2EbdJro7o=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157486-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157486: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Dec 2020 07:25:22 +0000

flight 157486 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157486/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 157345
 build-i386                    6 xen-build                fail REGR. vs. 157345
 build-amd64-xsm               6 xen-build                fail REGR. vs. 157345
 build-i386-xsm                6 xen-build                fail REGR. vs. 157345

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    3 days
Failing since        157348  2020-12-09 15:39:39 Z    3 days   27 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    2 days   20 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 08:27:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Dec 2020 08:27:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51501.90441 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koMik-0003yG-PN; Sun, 13 Dec 2020 08:27:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51501.90441; Sun, 13 Dec 2020 08:27:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koMik-0003y9-MS; Sun, 13 Dec 2020 08:27:10 +0000
Received: by outflank-mailman (input) for mailman id 51501;
 Sun, 13 Dec 2020 08:27:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koMij-0003y1-2i; Sun, 13 Dec 2020 08:27:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koMii-0004rB-T7; Sun, 13 Dec 2020 08:27:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koMii-0005qI-Kz; Sun, 13 Dec 2020 08:27:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1koMii-0006BC-Ka; Sun, 13 Dec 2020 08:27:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=j1160LEsQJ1knliO2OfQWvS4UAE8MDdD5XGyKeDUn8Q=; b=mVOQmx7unnnwP836nDSZGRab/s
	yBt2Zys7JdtI//hav77CZxlR8ANVE1DM3cC5pUqD4g/B48lvZlp7lOGgMgOCrzLXhXPght+Dr9gUR
	MIxkVeps3aaZNEweEIk7sbKbOh089v7j5Una64g/6/POyMQCEoWIiEDrVYGfN9msb7R0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157488-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157488: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Dec 2020 08:27:08 +0000

flight 157488 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157488/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    3 days
Failing since        157348  2020-12-09 15:39:39 Z    3 days   28 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    2 days   21 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 09:41:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Dec 2020 09:41:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51525.90463 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koNsJ-0003JL-EW; Sun, 13 Dec 2020 09:41:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51525.90463; Sun, 13 Dec 2020 09:41:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koNsJ-0003JE-BK; Sun, 13 Dec 2020 09:41:07 +0000
Received: by outflank-mailman (input) for mailman id 51525;
 Sun, 13 Dec 2020 09:41:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koNsI-0003J6-8l; Sun, 13 Dec 2020 09:41:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koNsI-0006KY-0t; Sun, 13 Dec 2020 09:41:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koNsH-0008Ij-NM; Sun, 13 Dec 2020 09:41:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1koNsH-0003is-Mu; Sun, 13 Dec 2020 09:41:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=myJAk68myMmknP7huauIPgWOTKcD0pyR+NFcWE0iaqk=; b=492JEDLE8914mOysyLnFLZsWZT
	aQiKK1lsPPo9ufHFHm0MnxxUNLNKliuqLzk0Hood3MHvMGz7pc+MDwPM891sV9W2fWy28e+KL4Doa
	3EfMxGmB6ZGdr4v+wBBXNfro//B94g04kvXdWyc9ZI2QoUj2OgLf9CBu8/9ODzrO5qiI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157490-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157490: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Dec 2020 09:41:05 +0000

flight 157490 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157490/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    3 days
Failing since        157348  2020-12-09 15:39:39 Z    3 days   29 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    2 days   22 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 09:57:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Dec 2020 09:57:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51539.90478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koO7s-0004Ve-Oh; Sun, 13 Dec 2020 09:57:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51539.90478; Sun, 13 Dec 2020 09:57:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koO7s-0004VX-Ld; Sun, 13 Dec 2020 09:57:12 +0000
Received: by outflank-mailman (input) for mailman id 51539;
 Sun, 13 Dec 2020 09:57:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koO7q-0004VP-Pb; Sun, 13 Dec 2020 09:57:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koO7q-0006ez-HI; Sun, 13 Dec 2020 09:57:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koO7q-0000LA-Am; Sun, 13 Dec 2020 09:57:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1koO7q-0001G8-AI; Sun, 13 Dec 2020 09:57:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SqfapOl+wpHecExeXDJruARrhuGLcZddnrOYu5beqZ0=; b=w+oUUI0uIg2kdTVVHbn7Kb5qUR
	l3woCNt8eim+IFoynuIpDsUeBOk2SKgZQnU/+vgWfti7UrFIGhKrin0fbPmHE2Qv5HSA3xhk3LzXI
	R3sMP1GN0gZSu5h58XPCAbgxxu1fRev3V4po7VmbLgMD0LGPjy1iyVy3KABkdta++Fq8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157492-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 157492: all pass - PUSHED
X-Osstest-Versions-This:
    xen=8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4
X-Osstest-Versions-That:
    xen=777e3590f154e6a8af560dd318b9465fa168db20
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Dec 2020 09:57:10 +0000

flight 157492 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157492/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4
baseline version:
 xen                  777e3590f154e6a8af560dd318b9465fa168db20

Last test of basis   157343  2020-12-09 09:19:25 Z    4 days
Testing same since   157492  2020-12-13 09:18:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   777e3590f1..8e0fe4fe5f  8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 11:01:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Dec 2020 11:01:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51560.90493 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koP7a-0002xE-Gl; Sun, 13 Dec 2020 11:00:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51560.90493; Sun, 13 Dec 2020 11:00:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koP7a-0002x7-CW; Sun, 13 Dec 2020 11:00:58 +0000
Received: by outflank-mailman (input) for mailman id 51560;
 Sun, 13 Dec 2020 11:00:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koP7Y-0002wz-Lo; Sun, 13 Dec 2020 11:00:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koP7Y-000828-E9; Sun, 13 Dec 2020 11:00:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koP7Y-0002UW-5k; Sun, 13 Dec 2020 11:00:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1koP7X-0000sy-Qa; Sun, 13 Dec 2020 11:00:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=p1pRzPIm6uTb/NBr+2rlcH/bvicxI1zlgaryBRJOKWo=; b=fSAZfvMCHG1D2703fv980yGNmo
	qPspyXOzgt5spPlhFuOgTHQY2PAHe/ldwNYZohmIM5dfMsSzVdUai+mRr4VevRGToUgffYEKuahgv
	e5jGyXi5wxwn0hFFg6PNCv7ETH/QAZ9J+FjAGq586BM1emT39wRuu+4whof5wU4XNjso=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157493-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157493: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Dec 2020 11:00:55 +0000

flight 157493 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157493/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    3 days
Failing since        157348  2020-12-09 15:39:39 Z    3 days   30 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    2 days   23 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 11:09:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Dec 2020 11:09:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51569.90508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koPFm-0003Ea-Ah; Sun, 13 Dec 2020 11:09:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51569.90508; Sun, 13 Dec 2020 11:09:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koPFm-0003ET-6t; Sun, 13 Dec 2020 11:09:26 +0000
Received: by outflank-mailman (input) for mailman id 51569;
 Sun, 13 Dec 2020 11:09:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koPFk-0003EL-83; Sun, 13 Dec 2020 11:09:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koPFj-0008Cy-Ub; Sun, 13 Dec 2020 11:09:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koPFj-0002hi-H5; Sun, 13 Dec 2020 11:09:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1koPFj-0000y1-Ga; Sun, 13 Dec 2020 11:09:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oW/cCWwZaJUVbbN62hF/2eC/8Y4PYZs+VhGnPlkFANQ=; b=1qivrtAoGMk4iREdm9AbF6DORa
	L1gQnif/nR0PA/R+lERv9DzA4G0Vqaqwoqu5O5BAVhIbgno3yNNbx2Dgb7PXb5nL19vbK9Fw+PDHU
	9FpjJL7iedxoxEWIY6+7gLZX9razs2BH0jwNGrST6E7SApNJtW+yOKzwFZlRPryfnmow=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157476-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157476: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:build-amd64-xsm:xen-build:fail:regression
    xen-unstable:build-i386:xen-build:fail:regression
    xen-unstable:build-amd64:xen-build:fail:regression
    xen-unstable:build-i386-xsm:xen-build:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4
X-Osstest-Versions-That:
    xen=8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Dec 2020 11:09:23 +0000

flight 157476 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157476/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 157451
 build-i386                    6 xen-build                fail REGR. vs. 157451
 build-amd64                   6 xen-build                fail REGR. vs. 157451
 build-i386-xsm                6 xen-build                fail REGR. vs. 157451

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157451
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157451
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4
baseline version:
 xen                  8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4

Last test of basis   157476  2020-12-13 01:53:26 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64-xtf                                              pass    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Dec 13 11:55:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Dec 2020 11:55:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51600.90541 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koPyW-0008A0-R4; Sun, 13 Dec 2020 11:55:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51600.90541; Sun, 13 Dec 2020 11:55:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koPyW-00089t-Ns; Sun, 13 Dec 2020 11:55:40 +0000
Received: by outflank-mailman (input) for mailman id 51600;
 Sun, 13 Dec 2020 11:55:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koPyV-00089l-Cq; Sun, 13 Dec 2020 11:55:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koPyV-0000gh-7I; Sun, 13 Dec 2020 11:55:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koPyV-00048F-0T; Sun, 13 Dec 2020 11:55:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1koPyV-00051V-01; Sun, 13 Dec 2020 11:55:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qACdZKBzMFqEeV3evAFOTgI/bcJHw1li1FDa94efNiI=; b=sxNewmN0CTLUjPFdlHjKm60Udc
	P3oGU3ejYp0BvC6xcTCnRWcAtAKTmPDUJ8ZgsoHGBZggrQDFvSABSjtoYryQx/AJDFTAdHzUjQiLK
	K/A5okgV+xIOjnxaIkCVIZ/lvNlOxGymhhggb1yH0XhGOEQWKp4YEQn7Hou6D4S35FLU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157495-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157495: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Dec 2020 11:55:39 +0000

flight 157495 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157495/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    3 days
Failing since        157348  2020-12-09 15:39:39 Z    3 days   31 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    2 days   24 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 12:07:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Dec 2020 12:07:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51614.90556 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koQ9w-0000sP-3o; Sun, 13 Dec 2020 12:07:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51614.90556; Sun, 13 Dec 2020 12:07:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koQ9w-0000sI-05; Sun, 13 Dec 2020 12:07:28 +0000
Received: by outflank-mailman (input) for mailman id 51614;
 Sun, 13 Dec 2020 12:07:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koQ9v-0000sA-IE; Sun, 13 Dec 2020 12:07:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koQ9v-0000yl-83; Sun, 13 Dec 2020 12:07:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koQ9v-0004OD-1N; Sun, 13 Dec 2020 12:07:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1koQ9v-0003KR-0s; Sun, 13 Dec 2020 12:07:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WNPM1lbmJzh2Z+45GDKeYqWNuJMXlvcithe5mRVy8So=; b=Ak01NY5JdM4r081QFQYncuLqMp
	v46CG1XQ6FuiZFaFpFy2dvSDym9d3hvnE0CLDf3ca7T7M0yRlLc6LwhJrMAoe4T8+/z+dNCmlbQ5C
	ldn2SbBmljfL38Nw63c5Ot6sfwm4qHxyALMQqnI5jMCuKhdu52yqZy59kZwm4znEWOtU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157479-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157479: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:build-amd64:xen-build:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:build-amd64-xsm:xen-build:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:build-i386:xen-build:fail:regression
    linux-linus:build-i386-xsm:xen-build:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:build-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:build-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=6bff9bb8a292668e7da3e740394b061e5201f683
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Dec 2020 12:07:27 +0000

flight 157479 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157479/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen      fail REGR. vs. 152332
 build-amd64                   6 xen-build                fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 build-i386                    6 xen-build                fail REGR. vs. 152332
 build-i386-xsm                6 xen-build                fail REGR. vs. 152332
 test-arm64-arm64-xl          10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-xsm      11 leak-check/basis(11)    fail blocked in 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                6bff9bb8a292668e7da3e740394b061e5201f683
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  134 days
Failing since        152366  2020-08-01 20:49:34 Z  133 days  232 attempts
Testing same since   157479  2020-12-13 03:27:28 Z    0 days    1 attempts

------------------------------------------------------------
3694 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 706779 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 12:09:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Dec 2020 12:09:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51583.90574 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koQC1-00012n-J9; Sun, 13 Dec 2020 12:09:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51583.90574; Sun, 13 Dec 2020 12:09:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koQC1-00012g-FR; Sun, 13 Dec 2020 12:09:37 +0000
Received: by outflank-mailman (input) for mailman id 51583;
 Sun, 13 Dec 2020 11:24:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Nq2=FR=gmail.com=ttoukan.linux@srs-us1.protection.inumbo.net>)
 id 1koPU3-0005D3-AJ
 for xen-devel@lists.xenproject.org; Sun, 13 Dec 2020 11:24:11 +0000
Received: from mail-ej1-x642.google.com (unknown [2a00:1450:4864:20::642])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6f4dcfac-4207-437f-a444-4abf79e277e2;
 Sun, 13 Dec 2020 11:24:10 +0000 (UTC)
Received: by mail-ej1-x642.google.com with SMTP id g20so18639388ejb.1
 for <xen-devel@lists.xenproject.org>; Sun, 13 Dec 2020 03:24:10 -0800 (PST)
Received: from [192.168.0.107] ([77.127.34.194])
 by smtp.gmail.com with ESMTPSA id de12sm12533753edb.82.2020.12.13.03.24.02
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sun, 13 Dec 2020 03:24:08 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6f4dcfac-4207-437f-a444-4abf79e277e2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=IW66fMc7VTs1PtxS+22ry7ms1/9mizOXCdVTfKV5hpo=;
        b=bQn+oB1JoyeFOF0nG/mlAluMS+9oNaqRxA17ZBM9Oe3AOE8/JG6goH/EB1c4QkEda4
         T1lTzWqlfslNfkdjp202c07yMWPNGbIBbpqQZ+AzfJmqs+Kgj+dLxGZ+wWZGA5mc0NdB
         BijFlgEZnvzEJb3MGQPxQa2/dgeJuYhDvxuwz6QAxAIoRHfwl22d5quEoZXOhJp6a+qE
         ETivnglunAiRLTbK3ng7taHfwUqrNTJL3Mef8dDWPaAGgN2enm8bDppfomcBBDdM6fOw
         jt/C5/9LwS2YQBDgMwgsJ0z4C3ZaAE2+qiQJ4375nfTotPFrKA5zulfu7Q8I9D/Awk7+
         1yaQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=IW66fMc7VTs1PtxS+22ry7ms1/9mizOXCdVTfKV5hpo=;
        b=fYuwQNwZpAFqycgRUxR3vGff2Pbk4h7lh+vUcQwXP2j1HaAWxLN7o39qw6Oadyke1y
         E2K2G+sE4GBaJPm3ioUT+/sdhHW/yEQD11g5hdR/RmEQ9rXEQkVg2N17e27hyqFSh485
         AwRhh4W/fcRCRmPGSSihCI3aR0OXufBzOZFfcprt/Pn1pAaSfvswdwKTytyRf9j2aSHK
         4vWfjrZXdZptzOddR/DSKudShAyHTA6R65l34wXGD2xeL4QCI88WmcfjJq9L016eHZtg
         Bnu7CtmR8nccXbwvMaLL7zRZuZH+89N1P3c+mqgvWM37oNMJgmmbcqvFyqQ8r1crxNkL
         XHPg==
X-Gm-Message-State: AOAM532OUATXIkkDxQvthKow/a0sZYcnT3ffF0TEpqe1cLrF96J9U0Ra
	QqFPguek1PzHWLqQ87Ytd8OIlmSB/DvZ8w==
X-Google-Smtp-Source: ABdhPJynlxi2QKNBVsllMFmw4nuD/o/QttB0CiO5OzyzZ7oeGscU5QYtBXumKqp3eM2r1PK4665luQ==
X-Received: by 2002:a17:906:a2d0:: with SMTP id by16mr18015254ejb.207.1607858649230;
        Sun, 13 Dec 2020 03:24:09 -0800 (PST)
Subject: Re: [patch 20/30] net/mlx4: Replace irq_to_desc() abuse
To: Thomas Gleixner <tglx@linutronix.de>, LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>,
 Tariq Toukan <tariqt@nvidia.com>, "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>, netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>, afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org, Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>, linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>, David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>, intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>, linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>, Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>, linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>, Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross
 <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
References: <20201210192536.118432146@linutronix.de>
 <20201210194044.580936243@linutronix.de>
From: Tariq Toukan <ttoukan.linux@gmail.com>
Message-ID: <01e427f9-7238-d6a8-25ec-8585914d32df@gmail.com>
Date: Sun, 13 Dec 2020 13:24:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201210194044.580936243@linutronix.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit



On 12/10/2020 9:25 PM, Thomas Gleixner wrote:
> No driver has any business with the internals of an interrupt
> descriptor. Storing a pointer to it just to use yet another helper at the
> actual usage site to retrieve the affinity mask is creative at best. Just
> because C does not allow encapsulation does not mean that the kernel has no
> limits.
> 
> Retrieve a pointer to the affinity mask itself and use that. It's still
> using an interface which is usually not for random drivers, but definitely
> less hideous than the previous hack.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Tariq Toukan <tariqt@nvidia.com>
> Cc: "David S. Miller" <davem@davemloft.net>
> Cc: Jakub Kicinski <kuba@kernel.org>
> Cc: netdev@vger.kernel.org
> Cc: linux-rdma@vger.kernel.org
> ---
>   drivers/net/ethernet/mellanox/mlx4/en_cq.c   |    8 +++-----
>   drivers/net/ethernet/mellanox/mlx4/en_rx.c   |    6 +-----
>   drivers/net/ethernet/mellanox/mlx4/mlx4_en.h |    3 ++-
>   3 files changed, 6 insertions(+), 11 deletions(-)
> 

Reviewed-by: Tariq Toukan <tariqt@nvidia.com>

Thanks for your patch.


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 12:09:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Dec 2020 12:09:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51585.90581 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koQC1-00013I-TW; Sun, 13 Dec 2020 12:09:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51585.90581; Sun, 13 Dec 2020 12:09:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koQC1-000137-Nh; Sun, 13 Dec 2020 12:09:37 +0000
Received: by outflank-mailman (input) for mailman id 51585;
 Sun, 13 Dec 2020 11:31:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Nq2=FR=gmail.com=ttoukan.linux@srs-us1.protection.inumbo.net>)
 id 1koPbC-00067w-5c
 for xen-devel@lists.xenproject.org; Sun, 13 Dec 2020 11:31:34 +0000
Received: from mail-ej1-x641.google.com (unknown [2a00:1450:4864:20::641])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 528af690-637e-4352-bf2a-d1f32315e7ee;
 Sun, 13 Dec 2020 11:31:33 +0000 (UTC)
Received: by mail-ej1-x641.google.com with SMTP id w1so13947694ejf.11
 for <xen-devel@lists.xenproject.org>; Sun, 13 Dec 2020 03:31:33 -0800 (PST)
Received: from [192.168.0.107] ([77.127.34.194])
 by smtp.gmail.com with ESMTPSA id d6sm11014971ejy.114.2020.12.13.03.31.25
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sun, 13 Dec 2020 03:31:31 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Subject: Re: [patch 21/30] net/mlx4: Use effective interrupt affinity
To: Thomas Gleixner <tglx@linutronix.de>, LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>,
 Tariq Toukan <tariqt@nvidia.com>, "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>, netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>, afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org, Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>, linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>, David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>, intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>, linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>, Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>, linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>, Saeed Mahameed <saeedm@nvidia.com>,
 Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross
 <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
References: <20201210192536.118432146@linutronix.de>
 <20201210194044.672935978@linutronix.de>
From: Tariq Toukan <ttoukan.linux@gmail.com>
Message-ID: <57c3f9d3-7262-9916-626b-c2234de763f0@gmail.com>
Date: Sun, 13 Dec 2020 13:31:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201210194044.672935978@linutronix.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit



On 12/10/2020 9:25 PM, Thomas Gleixner wrote:
> Using the interrupt affinity mask for checking locality is not really
> working well on architectures which support effective affinity masks.
> 
> The affinity mask is either the system wide default or set by user space,
> but the architecture can or even must reduce the mask to the effective set,
> which means that checking the affinity mask itself does not really tell
> about the actual target CPUs.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Tariq Toukan <tariqt@nvidia.com>
> Cc: "David S. Miller" <davem@davemloft.net>
> Cc: Jakub Kicinski <kuba@kernel.org>
> Cc: netdev@vger.kernel.org
> Cc: linux-rdma@vger.kernel.org
> ---
>   drivers/net/ethernet/mellanox/mlx4/en_cq.c |    2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> --- a/drivers/net/ethernet/mellanox/mlx4/en_cq.c
> +++ b/drivers/net/ethernet/mellanox/mlx4/en_cq.c
> @@ -117,7 +117,7 @@ int mlx4_en_activate_cq(struct mlx4_en_p
>   			assigned_eq = true;
>   		}
>   		irq = mlx4_eq_get_irq(mdev->dev, cq->vector);
> -		cq->aff_mask = irq_get_affinity_mask(irq);
> +		cq->aff_mask = irq_get_effective_affinity_mask(irq);
>   	} else {
>   		/* For TX we use the same irq per
>   		ring we assigned for the RX    */
> 

Reviewed-by: Tariq Toukan <tariqt@nvidia.com>

Thanks.


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 12:09:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Dec 2020 12:09:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51588.90591 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koQC2-000144-Av; Sun, 13 Dec 2020 12:09:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51588.90591; Sun, 13 Dec 2020 12:09:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koQC2-00013r-2l; Sun, 13 Dec 2020 12:09:38 +0000
Received: by outflank-mailman (input) for mailman id 51588;
 Sun, 13 Dec 2020 11:34:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Nq2=FR=gmail.com=ttoukan.linux@srs-us1.protection.inumbo.net>)
 id 1koPdi-0006Bt-El
 for xen-devel@lists.xenproject.org; Sun, 13 Dec 2020 11:34:10 +0000
Received: from mail-ej1-x644.google.com (unknown [2a00:1450:4864:20::644])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3fc9ef54-38a7-43ef-b2ce-9294d4b6af47;
 Sun, 13 Dec 2020 11:34:09 +0000 (UTC)
Received: by mail-ej1-x644.google.com with SMTP id ce23so18611951ejb.8
 for <xen-devel@lists.xenproject.org>; Sun, 13 Dec 2020 03:34:09 -0800 (PST)
Received: from [192.168.0.107] ([77.127.34.194])
 by smtp.gmail.com with ESMTPSA id r21sm1242331eds.91.2020.12.13.03.34.02
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sun, 13 Dec 2020 03:34:08 -0800 (PST)
Subject: Re: [patch 22/30] net/mlx5: Replace irq_to_desc() abuse
To: Thomas Gleixner <tglx@linutronix.de>, LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>, afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org, Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>, linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>, David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>, intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>, linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>, Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>, linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>, Tariq Toukan <tariqt@nvidia.com>,
 "David S. Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
 Saeed Mahameed <saeedm@nvidia.com>, Leon Romanovsky <leon@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross
 <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
References: <20201210192536.118432146@linutronix.de>
 <20201210194044.769458162@linutronix.de>
From: Tariq Toukan <ttoukan.linux@gmail.com>
Message-ID: <02be0e10-f2b5-7cbb-3271-4d872616ffd4@gmail.com>
Date: Sun, 13 Dec 2020 13:34:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201210194044.769458162@linutronix.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit



On 12/10/2020 9:25 PM, Thomas Gleixner wrote:
> No driver has any business with the internals of an interrupt
> descriptor. Storing a pointer to it just to use yet another helper at the
> actual usage site to retrieve the affinity mask is creative at best. Just
> because C does not allow encapsulation does not mean that the kernel has no
> limits.
> 
> Retrieve a pointer to the affinity mask itself and use that. It's still
> using an interface which is usually not for random drivers, but definitely
> less hideous than the previous hack.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>   drivers/net/ethernet/mellanox/mlx5/core/en.h      |    2 +-
>   drivers/net/ethernet/mellanox/mlx5/core/en_main.c |    2 +-
>   drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c |    6 +-----
>   3 files changed, 3 insertions(+), 7 deletions(-)
> 

Reviewed-by: Tariq Toukan <tariqt@nvidia.com>

Thanks.


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 12:09:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Dec 2020 12:09:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51590.90600 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koQC2-00015b-QC; Sun, 13 Dec 2020 12:09:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51590.90600; Sun, 13 Dec 2020 12:09:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koQC2-00014x-FY; Sun, 13 Dec 2020 12:09:38 +0000
Received: by outflank-mailman (input) for mailman id 51590;
 Sun, 13 Dec 2020 11:36:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Nq2=FR=gmail.com=ttoukan.linux@srs-us1.protection.inumbo.net>)
 id 1koPfZ-0006Do-Tn
 for xen-devel@lists.xenproject.org; Sun, 13 Dec 2020 11:36:05 +0000
Received: from mail-ed1-x544.google.com (unknown [2a00:1450:4864:20::544])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bd2c99b0-0959-44a4-96e4-bab4bb363a6e;
 Sun, 13 Dec 2020 11:36:05 +0000 (UTC)
Received: by mail-ed1-x544.google.com with SMTP id cm17so14164755edb.4
 for <xen-devel@lists.xenproject.org>; Sun, 13 Dec 2020 03:36:05 -0800 (PST)
Received: from [192.168.0.107] ([77.127.34.194])
 by smtp.gmail.com with ESMTPSA id ef11sm11222266ejb.15.2020.12.13.03.35.58
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sun, 13 Dec 2020 03:36:03 -0800 (PST)
Subject: Re: [patch 23/30] net/mlx5: Use effective interrupt affinity
To: Thomas Gleixner <tglx@linutronix.de>, LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>,
 Saeed Mahameed <saeedm@nvidia.com>, Leon Romanovsky <leon@kernel.org>,
 "David S. Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>, afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org, Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Heiko Carstens <hca@linux.ibm.com>, linux-s390@vger.kernel.org,
 Jani Nikula <jani.nikula@linux.intel.com>,
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
 Rodrigo Vivi <rodrigo.vivi@intel.com>, David Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>,
 Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Wambui Karuga <wambui.karugax@gmail.com>, intel-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org,
 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
 Linus Walleij <linus.walleij@linaro.org>, linux-gpio@vger.kernel.org,
 Lee Jones <lee.jones@linaro.org>, Jon Mason <jdmason@kudzu.us>,
 Dave Jiang <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>,
 Michal Simek <michal.simek@xilinx.com>, linux-pci@vger.kernel.org,
 Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
 Hou Zhiqiang <Zhiqiang.Hou@nxp.com>, Tariq Toukan <tariqt@nvidia.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross
 <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
References: <20201210192536.118432146@linutronix.de>
 <20201210194044.876342330@linutronix.de>
From: Tariq Toukan <ttoukan.linux@gmail.com>
Message-ID: <f0a01d6e-0333-e929-eabb-28cb444effe0@gmail.com>
Date: Sun, 13 Dec 2020 13:35:57 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201210194044.876342330@linutronix.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit



On 12/10/2020 9:25 PM, Thomas Gleixner wrote:
> Using the interrupt affinity mask for checking locality is not really
> working well on architectures which support effective affinity masks.
> 
> The affinity mask is either the system wide default or set by user space,
> but the architecture can or even must reduce the mask to the effective set,
> which means that checking the affinity mask itself does not really tell
> about the actual target CPUs.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Saeed Mahameed <saeedm@nvidia.com>
> Cc: Leon Romanovsky <leon@kernel.org>
> Cc: "David S. Miller" <davem@davemloft.net>
> Cc: Jakub Kicinski <kuba@kernel.org>
> Cc: netdev@vger.kernel.org
> Cc: linux-rdma@vger.kernel.org
> ---
>   drivers/net/ethernet/mellanox/mlx5/core/en_main.c |    2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> @@ -1998,7 +1998,7 @@ static int mlx5e_open_channel(struct mlx
>   	c->num_tc   = params->num_tc;
>   	c->xdp      = !!params->xdp_prog;
>   	c->stats    = &priv->channel_stats[ix].ch;
> -	c->aff_mask = irq_get_affinity_mask(irq);
> +	c->aff_mask = irq_get_effective_affinity_mask(irq);
>   	c->lag_port = mlx5e_enumerate_lag_port(priv->mdev, ix);
>   
>   	netif_napi_add(netdev, &c->napi, mlx5e_napi_poll, 64);
> 

Reviewed-by: Tariq Toukan <tariqt@nvidia.com>

Thanks.


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 14:56:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Dec 2020 14:56:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51691.90636 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koSnV-0000yE-KE; Sun, 13 Dec 2020 14:56:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51691.90636; Sun, 13 Dec 2020 14:56:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koSnV-0000y7-HD; Sun, 13 Dec 2020 14:56:29 +0000
Received: by outflank-mailman (input) for mailman id 51691;
 Sun, 13 Dec 2020 14:56:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koSnU-0000xz-GR; Sun, 13 Dec 2020 14:56:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koSnU-0004S3-7v; Sun, 13 Dec 2020 14:56:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koSnT-0002R1-VQ; Sun, 13 Dec 2020 14:56:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1koSnT-0002tg-Uu; Sun, 13 Dec 2020 14:56:27 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157487-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157487: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:xen-boot:fail:heisenbug
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=17584289af1aaa72c932e7e47c25d583b329dc45
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Dec 2020 14:56:27 +0000

flight 157487 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157487/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd 17 guest-start/debian.repeat fail in 157474 REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-vhd       8 xen-boot                   fail pass in 157474

Tests which did not succeed, but are not blocking:
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 157474 like 152631
 test-armhf-armhf-xl-vhd     14 migrate-support-check fail in 157474 never pass
 test-armhf-armhf-xl-vhd 15 saverestore-support-check fail in 157474 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                17584289af1aaa72c932e7e47c25d583b329dc45
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  115 days
Failing since        152659  2020-08-21 14:07:39 Z  114 days  239 attempts
Testing same since   157474  2020-12-13 01:37:19 Z    0 days    2 attempts

------------------------------------------------------------
308 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 75406 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 16:01:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Dec 2020 16:01:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51735.90657 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koToO-0008Ih-Rq; Sun, 13 Dec 2020 16:01:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51735.90657; Sun, 13 Dec 2020 16:01:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koToO-0008Ia-Oi; Sun, 13 Dec 2020 16:01:28 +0000
Received: by outflank-mailman (input) for mailman id 51735;
 Sun, 13 Dec 2020 16:01:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gAG0=FR=vivier.eu=laurent@srs-us1.protection.inumbo.net>)
 id 1koToN-0008IR-6G
 for xen-devel@lists.xenproject.org; Sun, 13 Dec 2020 16:01:27 +0000
Received: from mout.kundenserver.de (unknown [212.227.126.131])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ab1a7c94-60d6-49f7-a406-ba55eed1563e;
 Sun, 13 Dec 2020 16:01:26 +0000 (UTC)
Received: from [192.168.100.1] ([82.252.152.214]) by mrelayeu.kundenserver.de
 (mreue010 [213.165.67.103]) with ESMTPSA (Nemesis) id
 1MekrN-1kEbDv4Bxl-00akgO; Sun, 13 Dec 2020 17:01:11 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ab1a7c94-60d6-49f7-a406-ba55eed1563e
Subject: Re: [PATCH] hw/xen: Don't use '#' flag of printf format
To: Xinhao Zhang <zhangxinhao1@huawei.com>, qemu-devel@nongnu.org,
 xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, paul@xen.org, qemu-trivial@nongnu.org,
 alex.chen@huawei.com, anthony.perard@citrix.com, dengkai1@huawei.com
References: <20201104133709.3326630-1-zhangxinhao1@huawei.com>
From: Laurent Vivier <laurent@vivier.eu>
Message-ID: <f6eeb660-4263-f136-537c-17b691f97b3f@vivier.eu>
Date: Sun, 13 Dec 2020 17:01:08 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201104133709.3326630-1-zhangxinhao1@huawei.com>
Content-Type: text/plain; charset=utf-8
Content-Language: fr
Content-Transfer-Encoding: 8bit
X-Provags-ID: V03:K1:2y9IBdWYBnBhpPGMqUQaHTv2LpCTGAwRqyt+pUxnQ45+V86rzLj
 lrvTZfo+eBUICKMGbqYjBuDFh/PccXZwFUbeK8gAuqUPzx/nJjDUEXYKk3IsHSPoW4F4MyY
 YQHBj3JE2bd+SudANMUP8t7ErUsmnM/C/P1REVATB7O0aKyZQV6P6VaXt2RtS6JtwK8DFbU
 6Fje3V41z7HMn8btEEzjw==
X-Spam-Flag: NO
X-UI-Out-Filterresults: notjunk:1;V03:K0:Zqai2xthBTc=:mOGWGHOED5IrljIXxod8ci
 vPn6Sr6Jm1sDdIqvEvUy5IgG4BVq1GOZISwnBoanQHLMYvxi/piUoZ2KcIMbwdUPFjI8O3TX8
 EFUdZSqB0EyHc+pqlyjYgLm7J0r9G8MW6uSTe9fNuxCAj6YwGrtpsnc5KTCkfWAXNVjHhx8v8
 Pue+4fBh6CyuejyhWPOlRs6caEl9gVSfwzef4QKsgEhoA9hJxniGo6V85bETEa0UN8TD8aRug
 NFfneyRSKJEAvgFKBBWYoqnLuvCwKjq8YOvYLxpAUi60lSWNP0YIiLbVfT1xSNvUYQ4OMWS1p
 hurVaMaY6PrFrElRT9VdoOPra7VX+qAvQgyQuamA/9B8XOgDo5Lq0zmp95MmkEx9Bu6ZUx9NE
 qc022BekhVMV8XVUgIMeN0chIfeEjSKm39UnHlayAB2JhUQ+31PdKvWogX+CD6Z5ZVeZMV3kt
 xyDpKc2DtA==

Le 04/11/2020 à 14:37, Xinhao Zhang a écrit :
> Fix code style. Don't use '#' flag of printf format ('%#') in
> format strings, use '0x' prefix instead
> 
> Signed-off-by: Xinhao Zhang <zhangxinhao1@huawei.com>
> Signed-off-by: Kai Deng <dengkai1@huawei.com>
> ---
>  hw/xen/xen_pt.c             | 10 +++++-----
>  hw/xen/xen_pt_config_init.c |  6 +++---
>  hw/xen/xen_pt_msi.c         | 16 ++++++++--------
>  3 files changed, 16 insertions(+), 16 deletions(-)
> 
> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> index 6d359ee486..a5f3dd590c 100644
> --- a/hw/xen/xen_pt.c
> +++ b/hw/xen/xen_pt.c
> @@ -489,7 +489,7 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s, uint16_t *cmd)
>          pci_register_bar(&s->dev, i, type, &s->bar[i]);
>  
>          XEN_PT_LOG(&s->dev, "IO region %i registered (size=0x%08"PRIx64
> -                   " base_addr=0x%08"PRIx64" type: %#x)\n",
> +                   " base_addr=0x%08"PRIx64" type: 0x%x)\n",
>                     i, r->size, r->base_addr, type);
>      }
>  
> @@ -578,7 +578,7 @@ static void xen_pt_check_bar_overlap(PCIBus *bus, PCIDevice *d, void *opaque)
>          if (ranges_overlap(arg->addr, arg->size, r->addr, r->size)) {
>              XEN_PT_WARN(&s->dev,
>                          "Overlapped to device [%02x:%02x.%d] Region: %i"
> -                        " (addr: %#"FMT_PCIBUS", len: %#"FMT_PCIBUS")\n",
> +                        " (addr: 0x%"FMT_PCIBUS", len: 0x%"FMT_PCIBUS")\n",
>                          pci_bus_num(bus), PCI_SLOT(d->devfn),
>                          PCI_FUNC(d->devfn), i, r->addr, r->size);
>              arg->rc = true;
> @@ -618,8 +618,8 @@ static void xen_pt_region_update(XenPCIPassthroughState *s,
>      pci_for_each_device(pci_get_bus(d), pci_dev_bus_num(d),
>                          xen_pt_check_bar_overlap, &args);
>      if (args.rc) {
> -        XEN_PT_WARN(d, "Region: %d (addr: %#"FMT_PCIBUS
> -                    ", len: %#"FMT_PCIBUS") is overlapped.\n",
> +        XEN_PT_WARN(d, "Region: %d (addr: 0x%"FMT_PCIBUS
> +                    ", len: 0x%"FMT_PCIBUS") is overlapped.\n",
>                      bar, sec->offset_within_address_space,
>                      int128_get64(sec->size));
>      }
> @@ -786,7 +786,7 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
>  
>      /* register real device */
>      XEN_PT_LOG(d, "Assigning real physical device %02x:%02x.%d"
> -               " to devfn %#x\n",
> +               " to devfn 0x%x\n",
>                 s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>                 s->dev.devfn);
>  
> diff --git a/hw/xen/xen_pt_config_init.c b/hw/xen/xen_pt_config_init.c
> index c8724cc7c8..c5c4e943a8 100644
> --- a/hw/xen/xen_pt_config_init.c
> +++ b/hw/xen/xen_pt_config_init.c
> @@ -1622,7 +1622,7 @@ static int xen_pt_pcie_size_init(XenPCIPassthroughState *s,
>          case PCI_EXP_TYPE_PCIE_BRIDGE:
>          case PCI_EXP_TYPE_RC_EC:
>          default:
> -            XEN_PT_ERR(d, "Unsupported device/port type %#x.\n", type);
> +            XEN_PT_ERR(d, "Unsupported device/port type 0x%x.\n", type);
>              return -1;
>          }
>      }
> @@ -1645,11 +1645,11 @@ static int xen_pt_pcie_size_init(XenPCIPassthroughState *s,
>          case PCI_EXP_TYPE_PCIE_BRIDGE:
>          case PCI_EXP_TYPE_RC_EC:
>          default:
> -            XEN_PT_ERR(d, "Unsupported device/port type %#x.\n", type);
> +            XEN_PT_ERR(d, "Unsupported device/port type 0x%x.\n", type);
>              return -1;
>          }
>      } else {
> -        XEN_PT_ERR(d, "Unsupported capability version %#x.\n", version);
> +        XEN_PT_ERR(d, "Unsupported capability version 0x%x.\n", version);
>          return -1;
>      }
>  
> diff --git a/hw/xen/xen_pt_msi.c b/hw/xen/xen_pt_msi.c
> index fb4b887b92..b71563f98a 100644
> --- a/hw/xen/xen_pt_msi.c
> +++ b/hw/xen/xen_pt_msi.c
> @@ -123,7 +123,7 @@ static int msi_msix_setup(XenPCIPassthroughState *s,
>              *ppirq = XEN_PT_UNASSIGNED_PIRQ;
>          } else {
>              XEN_PT_LOG(&s->dev, "requested pirq %d for MSI%s"
> -                       " (vec: %#x, entry: %#x)\n",
> +                       " (vec: 0x%x, entry: 0x%x)\n",
>                         *ppirq, is_msix ? "-X" : "", gvec, msix_entry);
>          }
>      }
> @@ -142,7 +142,7 @@ static int msi_msix_setup(XenPCIPassthroughState *s,
>                                       msix_entry, table_base);
>          if (rc) {
>              XEN_PT_ERR(&s->dev,
> -                       "Mapping of MSI%s (err: %i, vec: %#x, entry %#x)\n",
> +                       "Mapping of MSI%s (err: %i, vec: 0x%x, entry 0x%x)\n",
>                         is_msix ? "-X" : "", errno, gvec, msix_entry);
>              return rc;
>          }
> @@ -165,8 +165,8 @@ static int msi_msix_update(XenPCIPassthroughState *s,
>      int rc = 0;
>      uint64_t table_addr = 0;
>  
> -    XEN_PT_LOG(d, "Updating MSI%s with pirq %d gvec %#x gflags %#x"
> -               " (entry: %#x)\n",
> +    XEN_PT_LOG(d, "Updating MSI%s with pirq %d gvec 0x%x gflags 0x%x"
> +               " (entry: 0x%x)\n",
>                 is_msix ? "-X" : "", pirq, gvec, gflags, msix_entry);
>  
>      if (is_msix) {
> @@ -208,11 +208,11 @@ static int msi_msix_disable(XenPCIPassthroughState *s,
>      }
>  
>      if (is_binded) {
> -        XEN_PT_LOG(d, "Unbind MSI%s with pirq %d, gvec %#x\n",
> +        XEN_PT_LOG(d, "Unbind MSI%s with pirq %d, gvec 0x%x\n",
>                     is_msix ? "-X" : "", pirq, gvec);
>          rc = xc_domain_unbind_msi_irq(xen_xc, xen_domid, gvec, pirq, gflags);
>          if (rc) {
> -            XEN_PT_ERR(d, "Unbinding of MSI%s failed. (err: %d, pirq: %d, gvec: %#x)\n",
> +            XEN_PT_ERR(d, "Unbinding of MSI%s failed. (err: %d, pirq: %d, gvec: 0x%x)\n",
>                         is_msix ? "-X" : "", errno, pirq, gvec);
>              return rc;
>          }
> @@ -539,7 +539,7 @@ int xen_pt_msix_init(XenPCIPassthroughState *s, uint32_t base)
>      }
>  
>      if (id != PCI_CAP_ID_MSIX) {
> -        XEN_PT_ERR(d, "Invalid id %#x base %#x\n", id, base);
> +        XEN_PT_ERR(d, "Invalid id 0x%x base 0x%x\n", id, base);
>          return -1;
>      }
>  
> @@ -582,7 +582,7 @@ int xen_pt_msix_init(XenPCIPassthroughState *s, uint32_t base)
>          XEN_PT_ERR(d, "Can't open /dev/mem: %s\n", strerror(errno));
>          goto error_out;
>      }
> -    XEN_PT_LOG(d, "table_off = %#x, total_entries = %d\n",
> +    XEN_PT_LOG(d, "table_off = 0x%x, total_entries = %d\n",
>                 table_off, total_entries);
>      msix->table_offset_adjust = table_off & 0x0fff;
>      msix->phys_iomem_base =
> 

Applied to my trivial-patches branch.

Thanks,
Laurent


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 18:22:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Dec 2020 18:22:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51786.90676 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koW0a-0004vG-Gl; Sun, 13 Dec 2020 18:22:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51786.90676; Sun, 13 Dec 2020 18:22:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koW0a-0004v9-D7; Sun, 13 Dec 2020 18:22:12 +0000
Received: by outflank-mailman (input) for mailman id 51786;
 Sun, 13 Dec 2020 18:22:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koW0Z-0004v1-FH; Sun, 13 Dec 2020 18:22:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koW0Z-0000pH-Ab; Sun, 13 Dec 2020 18:22:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koW0Z-0003nM-1p; Sun, 13 Dec 2020 18:22:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1koW0Z-0000Ir-1L; Sun, 13 Dec 2020 18:22:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cw2koTru7PYQ9U8Hz7krVHR6POlWFpThBuNi+oYdWl8=; b=hH7y7wI6HRsD5nAl6HkuzTxBnL
	V1J98gBtZfKB9Gvc8kXwoYlD8dnZZ5FdhoCYAXk0YoJdluSsPrG8n2lVFKE772FzVvEnFjs50pLmS
	S5wMqpc7q6TUWtklXB6PVHEAXZrRrjYM8eRPfqC4cQg0CLVUA/wkubRf7J2dxQIQLrfI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157501-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157501: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Dec 2020 18:22:11 +0000

flight 157501 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157501/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    4 days
Failing since        157348  2020-12-09 15:39:39 Z    4 days   32 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    2 days   25 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 20:45:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Dec 2020 20:45:28 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157500-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157500: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-install:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:build-amd64:xen-build:fail:regression
    linux-linus:build-amd64-xsm:xen-build:fail:regression
    linux-linus:build-i386:xen-build:fail:regression
    linux-linus:build-i386-xsm:xen-build:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:build-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:build-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=6bff9bb8a292668e7da3e740394b061e5201f683
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Dec 2020 20:45:07 +0000

flight 157500 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157500/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      12 debian-install           fail REGR. vs. 152332
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl          10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu  fail in 157479 REGR. vs. 152332
 build-amd64                   6 xen-build      fail in 157479 REGR. vs. 152332
 build-amd64-xsm               6 xen-build      fail in 157479 REGR. vs. 152332
 build-i386                    6 xen-build      fail in 157479 REGR. vs. 152332
 build-i386-xsm                6 xen-build      fail in 157479 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen fail in 157479 pass in 157500
 test-arm64-arm64-xl-credit2   8 xen-boot         fail in 157479 pass in 157500
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen        fail pass in 157479
 test-arm64-arm64-examine      8 reboot                     fail pass in 157479

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 1 build-check(1) blocked in 157479 n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)          blocked in 157479 n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)     blocked in 157479 n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)           blocked in 157479 n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)          blocked in 157479 n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)          blocked in 157479 n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)   blocked in 157479 n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)   blocked in 157479 n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)        blocked in 157479 n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)     blocked in 157479 n/a
 test-amd64-i386-xl-raw        1 build-check(1)           blocked in 157479 n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)        blocked in 157479 n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64 1 build-check(1) blocked in 157479 n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)           blocked in 157479 n/a
 test-amd64-i386-xl-xsm        1 build-check(1)           blocked in 157479 n/a
 test-amd64-i386-libvirt       1 build-check(1)           blocked in 157479 n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64 1 build-check(1) blocked in 157479 n/a
 test-amd64-i386-pair          1 build-check(1)           blocked in 157479 n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)    blocked in 157479 n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 1 build-check(1) blocked in 157479 n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)      blocked in 157479 n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)    blocked in 157479 n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked in 157479 n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)           blocked in 157479 n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)    blocked in 157479 n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked in 157479 n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)           blocked in 157479 n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)           blocked in 157479 n/a
 test-amd64-amd64-pair         1 build-check(1)           blocked in 157479 n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)           blocked in 157479 n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked in 157479 n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)          blocked in 157479 n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 1 build-check(1) blocked in 157479 n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 1 build-check(1) blocked in 157479 n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)           blocked in 157479 n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)          blocked in 157479 n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)           blocked in 157479 n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)      blocked in 157479 n/a
 test-amd64-amd64-xl           1 build-check(1)           blocked in 157479 n/a
 test-amd64-i386-examine       1 build-check(1)           blocked in 157479 n/a
 test-amd64-coresched-i386-xl  1 build-check(1)           blocked in 157479 n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 1 build-check(1) blocked in 157479 n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)           blocked in 157479 n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)   blocked in 157479 n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)    blocked in 157479 n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked in 157479 n/a
 test-amd64-amd64-libvirt      1 build-check(1)           blocked in 157479 n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)   blocked in 157479 n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)           blocked in 157479 n/a
 test-amd64-amd64-examine      1 build-check(1)           blocked in 157479 n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)    blocked in 157479 n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)   blocked in 157479 n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64 1 build-check(1) blocked in 157479 n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)        blocked in 157479 n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)    blocked in 157479 n/a
 test-amd64-i386-xl            1 build-check(1)           blocked in 157479 n/a
 build-i386-libvirt            1 build-check(1)           blocked in 157479 n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked in 157479 n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 1 build-check(1) blocked in 157479 n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked in 157479 n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)           blocked in 157479 n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)         blocked in 157479 n/a
 test-amd64-i386-xl-shadow     1 build-check(1)           blocked in 157479 n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1) blocked in 157479 n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)   blocked in 157479 n/a
 build-amd64-libvirt           1 build-check(1)           blocked in 157479 n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)           blocked in 157479 n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 1 build-check(1) blocked in 157479 n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)   blocked in 157479 n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)           blocked in 157479 n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1) blocked in 157479 n/a
 test-amd64-amd64-pygrub       1 build-check(1)           blocked in 157479 n/a
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit1 11 leak-check/basis(11) fail in 157479 blocked in 152332
 test-arm64-arm64-xl-xsm 11 leak-check/basis(11) fail in 157479 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                6bff9bb8a292668e7da3e740394b061e5201f683
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  135 days
Failing since        152366  2020-08-01 20:49:34 Z  133 days  233 attempts
Testing same since   157479  2020-12-13 03:27:28 Z    0 days    2 attempts

------------------------------------------------------------
3694 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 706779 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 13 22:57:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 13 Dec 2020 22:57:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51840.90712 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koaIr-0005Zg-4B; Sun, 13 Dec 2020 22:57:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51840.90712; Sun, 13 Dec 2020 22:57:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koaIr-0005ZZ-0c; Sun, 13 Dec 2020 22:57:21 +0000
Received: by outflank-mailman (input) for mailman id 51840;
 Sun, 13 Dec 2020 22:57:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koaIq-0005ZQ-4d; Sun, 13 Dec 2020 22:57:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koaIp-0006Z9-UB; Sun, 13 Dec 2020 22:57:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koaIp-0005LX-Lv; Sun, 13 Dec 2020 22:57:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1koaIp-0008TR-LC; Sun, 13 Dec 2020 22:57:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=J/o3/I6224LLthBLRrPhSu+tKebwEZgYgGlm637LNuk=; b=lyN0choQKKOOv8gl0GCViWm+5x
	tvAqBfsnPzU4bSLzgqa6hxkbOFG8okz0pVSoR7mjvdOUUakQAOlF6KO6WhtC8bwF0yZGwgM+me0al
	508gKmtdjpgvPCbK2CTN5mzFvWmo06DYn71B5GzI1FF25seRMPYsLQFbjZUROM+76SkQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157507-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157507: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 13 Dec 2020 22:57:19 +0000

flight 157507 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157507/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    4 days
Failing since        157348  2020-12-09 15:39:39 Z    4 days   33 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    2 days   26 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 01:43:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 01:43:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51866.90733 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kocsu-0005Aw-3p; Mon, 14 Dec 2020 01:42:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51866.90733; Mon, 14 Dec 2020 01:42:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kocst-0005An-SU; Mon, 14 Dec 2020 01:42:43 +0000
Received: by outflank-mailman (input) for mailman id 51866;
 Mon, 14 Dec 2020 01:42:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kocst-0005Af-03; Mon, 14 Dec 2020 01:42:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kocss-0001fi-Q7; Mon, 14 Dec 2020 01:42:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kocss-0004GA-Ep; Mon, 14 Dec 2020 01:42:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kocss-0003jO-EN; Mon, 14 Dec 2020 01:42:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VLVyUEv3hzxKfXMOJUg1KwQZ9P8WX8IIrHG6ItjD0wY=; b=pPz08sZhHe2Ow/19ehgwv3aY6b
	vnJ3bgI8vpCg8p5o+gCixXZcOMAoA+rHRUbumwzAbjG1Vd40Bggg2Z5AlckEpzZUYFO0we+lvyHlI
	IpsdF6biWAG0Icr+uxOHU1ukeV1IgRySEXPlFYXjlR2qiRaaxsUau7h3/cU1O1pvdkfY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157510-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157510: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 14 Dec 2020 01:42:42 +0000

flight 157510 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157510/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    4 days
Failing since        157348  2020-12-09 15:39:39 Z    4 days   34 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    2 days   27 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 02:19:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 02:19:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51875.90747 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kodSa-000089-PV; Mon, 14 Dec 2020 02:19:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51875.90747; Mon, 14 Dec 2020 02:19:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kodSa-000082-MW; Mon, 14 Dec 2020 02:19:36 +0000
Received: by outflank-mailman (input) for mailman id 51875;
 Mon, 14 Dec 2020 02:19:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kodSZ-00007A-9B; Mon, 14 Dec 2020 02:19:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kodSY-0002rI-WF; Mon, 14 Dec 2020 02:19:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kodSY-0005EP-Id; Mon, 14 Dec 2020 02:19:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kodSY-00079J-I7; Mon, 14 Dec 2020 02:19:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JAbM4zMrt/gcK/1TRRerFr3HRblZu+ec9lTJd4hqVP8=; b=p1K/Np5ylIy3QNcp0/o1BOYr/T
	4h1IX+5+otiiKIZPwpGfZeBX2PLfhe5ieBqFHjbg+tyi9pOzIy0WO6Af8IxTA7A0wSUhoye5NrfGH
	e2r1JpVr9F4qTR0tSwcxmD1T3zUEEUOcZ/48GsDJj5mI4xhgojG2EO0Ehk14ibHnu3qQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157504-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157504: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:xen-boot:fail:heisenbug
    qemu-mainline:test-armhf-armhf-libvirt:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=17584289af1aaa72c932e7e47c25d583b329dc45
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 14 Dec 2020 02:19:34 +0000

flight 157504 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157504/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631
 build-amd64                   6 xen-build      fail in 157487 REGR. vs. 152631
 build-amd64-xsm               6 xen-build      fail in 157487 REGR. vs. 152631
 build-i386                    6 xen-build      fail in 157487 REGR. vs. 152631
 build-i386-xsm                6 xen-build      fail in 157487 REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-vhd       8 xen-boot         fail in 157487 pass in 157504
 test-armhf-armhf-libvirt     18 guest-start/debian.repeat  fail pass in 157487

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)    blocked in 157487 n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1) blocked in 157487 n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)   blocked in 157487 n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)   blocked in 157487 n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)      blocked in 157487 n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)          blocked in 157487 n/a
 test-amd64-i386-xl            1 build-check(1)           blocked in 157487 n/a
 test-amd64-i386-libvirt       1 build-check(1)           blocked in 157487 n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)      blocked in 157487 n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)           blocked in 157487 n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)          blocked in 157487 n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)           blocked in 157487 n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)         blocked in 157487 n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)           blocked in 157487 n/a
 test-amd64-i386-xl-raw        1 build-check(1)           blocked in 157487 n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)           blocked in 157487 n/a
 test-amd64-amd64-libvirt      1 build-check(1)           blocked in 157487 n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)           blocked in 157487 n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked in 157487 n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)           blocked in 157487 n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)        blocked in 157487 n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)          blocked in 157487 n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked in 157487 n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 1 build-check(1) blocked in 157487 n/a
 test-amd64-i386-xl-shadow     1 build-check(1)           blocked in 157487 n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)           blocked in 157487 n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 1 build-check(1) blocked in 157487 n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 1 build-check(1) blocked in 157487 n/a
 test-amd64-i386-xl-xsm        1 build-check(1)           blocked in 157487 n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 1 build-check(1) blocked in 157487 n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)    blocked in 157487 n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1) blocked in 157487 n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)           blocked in 157487 n/a
 test-amd64-i386-pair          1 build-check(1)           blocked in 157487 n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)   blocked in 157487 n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)           blocked in 157487 n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked in 157487 n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)           blocked in 157487 n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked in 157487 n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)          blocked in 157487 n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)    blocked in 157487 n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64 1 build-check(1) blocked in 157487 n/a
 test-amd64-amd64-pygrub       1 build-check(1)           blocked in 157487 n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)   blocked in 157487 n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)     blocked in 157487 n/a
 test-amd64-amd64-xl           1 build-check(1)           blocked in 157487 n/a
 test-amd64-coresched-i386-xl  1 build-check(1)           blocked in 157487 n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)    blocked in 157487 n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)           blocked in 157487 n/a
 build-amd64-libvirt           1 build-check(1)           blocked in 157487 n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)           blocked in 157487 n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)        blocked in 157487 n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)        blocked in 157487 n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)          blocked in 157487 n/a
 build-i386-libvirt            1 build-check(1)           blocked in 157487 n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 1 build-check(1) blocked in 157487 n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)           blocked in 157487 n/a
 test-amd64-amd64-pair         1 build-check(1)           blocked in 157487 n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                17584289af1aaa72c932e7e47c25d583b329dc45
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  115 days
Failing since        152659  2020-08-21 14:07:39 Z  114 days  240 attempts
Testing same since   157474  2020-12-13 01:37:19 Z    1 days    3 attempts

------------------------------------------------------------
308 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 75406 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 06:52:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 06:52:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51897.90774 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kohiA-0001uJ-2P; Mon, 14 Dec 2020 06:51:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51897.90774; Mon, 14 Dec 2020 06:51:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kohi9-0001uC-Vm; Mon, 14 Dec 2020 06:51:57 +0000
Received: by outflank-mailman (input) for mailman id 51897;
 Mon, 14 Dec 2020 06:51:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kohi8-0001u4-Qr; Mon, 14 Dec 2020 06:51:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kohi8-00012t-KK; Mon, 14 Dec 2020 06:51:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kohi8-00041r-9U; Mon, 14 Dec 2020 06:51:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kohi8-0005sb-8y; Mon, 14 Dec 2020 06:51:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157516-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157516: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:build-armhf-libvirt:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    libvirt:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    libvirt=cd338954b7056ba5c98fa860ce120358cbb74566
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 14 Dec 2020 06:51:56 +0000

flight 157516 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157516/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 libvirt              cd338954b7056ba5c98fa860ce120358cbb74566
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  157 days
Failing since        151818  2020-07-11 04:18:52 Z  156 days  151 attempts
Testing same since   157481  2020-12-13 04:19:15 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 32723 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 07:46:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 07:46:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51906.90790 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koiYZ-0006nT-1K; Mon, 14 Dec 2020 07:46:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51906.90790; Mon, 14 Dec 2020 07:46:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koiYY-0006nM-U6; Mon, 14 Dec 2020 07:46:06 +0000
Received: by outflank-mailman (input) for mailman id 51906;
 Mon, 14 Dec 2020 07:46:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fVEM=FS=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1koiYX-0006nH-O5
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 07:46:05 +0000
Received: from mail-wr1-x430.google.com (unknown [2a00:1450:4864:20::430])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e0c14762-5a1a-4d20-ab84-6b5de9c60426;
 Mon, 14 Dec 2020 07:46:04 +0000 (UTC)
Received: by mail-wr1-x430.google.com with SMTP id r7so15397540wrc.5
 for <xen-devel@lists.xenproject.org>; Sun, 13 Dec 2020 23:46:04 -0800 (PST)
Received: from CBGR90WXYV0
 (host109-146-187-221.range109-146.btcentralplus.com. [109.146.187.221])
 by smtp.gmail.com with ESMTPSA id h9sm29004797wre.24.2020.12.13.23.46.01
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Sun, 13 Dec 2020 23:46:02 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0c14762-5a1a-4d20-ab84-6b5de9c60426
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=7CLvZWlJGE9pAbmF1ZeVYqzKa4QzcerK0aitwu3LRYA=;
        b=jLOYyiw7BE4Nml9SAwuHRbX77K8ATp5KM8rg8TWHBoiM6Xcz3Nq42GqrBUgJdHn82c
         q7s/9NFh96P6pnGgmpcBWTo9rDXnCqiwbB97RJjEP11JfqEpzYUV/ioqShXNmrlz/vwo
         xD4SCySmndb6kow6jKi4h+88o+ut3dvlXeXwxRSoggJenIfH7uGAH3O+fibgOEPYUhff
         kqc3so27WcWHDj9I5RNUeNEb5aBn4wDwwA4dhDtKGGybK3/vsai8ZlvMRI9ZnXYPJiKv
         SuEiadp6qV2s6HxLqm6kP/ORdMnoU+PpT2Km8RbLrFEe76cfa4s6trQh23aoYy55+nkr
         SIkw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=7CLvZWlJGE9pAbmF1ZeVYqzKa4QzcerK0aitwu3LRYA=;
        b=s42AcP8nWUrl5R96Mytn2SA+j/4uwXJ26Jp5TKbv8Mp3EoOgs2CG/jA7ukIYkQAAG2
         7/sYGXwjtYMZl14Y4+j+4VkOfl7pYcADOspV5GPTqBE0WCHk/sCV4TKn2p3IGKZTloEM
         t9cI/bRrdPrelLdAYCGcLwzZT++hrzdc+k2FurygQMw3jasVFrx5hBVfNWMw1SSw8Q8v
         sBh4xPi+jZP1g2t1b0qHWUARi+QK5E8zDpD4Hkx12CHzLg6Wa0Ou147JzAtjTJAXSOIi
         85HmryTguP/b/r+uwcP8k/1HdrWxw9YmfWXOxPyyqfeTRMG/9jZ5yYtd750CF/KeMK+D
         CVRg==
X-Gm-Message-State: AOAM531+cUhOG30l8Luz5FAlBae7IY2k68VmdEnaQPn3sQ6lykdN8v+1
	/99CiDziTgYoEOsliE7y0NM=
X-Google-Smtp-Source: ABdhPJxNeGQl/cu2GYjlWH7t8lH884fbYLCIL+vwErYrctSCVkJrX7E48+6a7CHyHCMcluRZXgBEzg==
X-Received: by 2002:a5d:6607:: with SMTP id n7mr24256141wru.206.1607931963536;
        Sun, 13 Dec 2020 23:46:03 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Eduardo Habkost'" <ehabkost@redhat.com>,
	<qemu-devel@nongnu.org>
Cc: "'Markus Armbruster'" <armbru@redhat.com>,
	"'Igor Mammedov'" <imammedo@redhat.com>,
	"'Stefan Berger'" <stefanb@linux.ibm.com>,
	=?UTF-8?Q?'Marc-Andr=C3=A9_Lureau'?= <marcandre.lureau@redhat.com>,
	"'Daniel P. Berrange'" <berrange@redhat.com>,
	=?UTF-8?Q?'Philippe_Mathieu-Daud=C3=A9'?= <philmd@redhat.com>,
	"'John Snow'" <jsnow@redhat.com>,
	"'Kevin Wolf'" <kwolf@redhat.com>,
	"'Eric Blake'" <eblake@redhat.com>,
	"'Paolo Bonzini'" <pbonzini@redhat.com>,
	"'Cornelia Huck'" <cohuck@redhat.com>,
	"'Stefan Berger'" <stefanb@linux.vnet.ibm.com>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Anthony Perard'" <anthony.perard@citrix.com>,
	"'Max Reitz'" <mreitz@redhat.com>,
	"'Thomas Huth'" <thuth@redhat.com>,
	"'Richard Henderson'" <rth@twiddle.net>,
	"'David Hildenbrand'" <david@redhat.com>,
	"'Halil Pasic'" <pasic@linux.ibm.com>,
	"'Christian Borntraeger'" <borntraeger@de.ibm.com>,
	"'Matthew Rosato'" <mjrosato@linux.ibm.com>,
	"'Alex Williamson'" <alex.williamson@redhat.com>,
	<xen-devel@lists.xenproject.org>,
	<qemu-block@nongnu.org>,
	<qemu-s390x@nongnu.org>
References: <20201211220529.2290218-1-ehabkost@redhat.com> <20201211220529.2290218-10-ehabkost@redhat.com>
In-Reply-To: <20201211220529.2290218-10-ehabkost@redhat.com>
Subject: RE: [PATCH v4 09/32] qdev: Make qdev_get_prop_ptr() get Object* arg
Date: Mon, 14 Dec 2020 07:46:03 -0000
Message-ID: <009b01d6d1ed$2d415900$87c40b00$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQLXjZRr+qbM/S7ZJ9VMMfP1SWcevwM8aMtWp9rbvdA=

> -----Original Message-----
> From: Eduardo Habkost <ehabkost@redhat.com>
> Sent: 11 December 2020 22:05
> To: qemu-devel@nongnu.org
> Cc: Markus Armbruster <armbru@redhat.com>; Igor Mammedov <imammedo@redhat.com>; Stefan Berger
> <stefanb@linux.ibm.com>; Marc-André Lureau <marcandre.lureau@redhat.com>; Daniel P. Berrange
> <berrange@redhat.com>; Philippe Mathieu-Daudé <philmd@redhat.com>; John Snow <jsnow@redhat.com>; Kevin
> Wolf <kwolf@redhat.com>; Eric Blake <eblake@redhat.com>; Paolo Bonzini <pbonzini@redhat.com>; Cornelia
> Huck <cohuck@redhat.com>; Stefan Berger <stefanb@linux.vnet.ibm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Anthony Perard <anthony.perard@citrix.com>; Paul Durrant <paul@xen.org>; Max
> Reitz <mreitz@redhat.com>; Thomas Huth <thuth@redhat.com>; Richard Henderson <rth@twiddle.net>; David
> Hildenbrand <david@redhat.com>; Halil Pasic <pasic@linux.ibm.com>; Christian Borntraeger
> <borntraeger@de.ibm.com>; Matthew Rosato <mjrosato@linux.ibm.com>; Alex Williamson
> <alex.williamson@redhat.com>; xen-devel@lists.xenproject.org; qemu-block@nongnu.org; qemu-
> s390x@nongnu.org
> Subject: [PATCH v4 09/32] qdev: Make qdev_get_prop_ptr() get Object* arg
>
> Make the code more generic and not specific to TYPE_DEVICE.
>
> Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
> Reviewed-by: Cornelia Huck <cohuck@redhat.com> #s390 parts
> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>

Xen parts...

Acked-by: Paul Durrant <paul@xen.org>



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 07:46:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 07:46:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51910.90801 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koiZ5-0006sg-AC; Mon, 14 Dec 2020 07:46:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51910.90801; Mon, 14 Dec 2020 07:46:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koiZ5-0006sZ-76; Mon, 14 Dec 2020 07:46:39 +0000
Received: by outflank-mailman (input) for mailman id 51910;
 Mon, 14 Dec 2020 07:46:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fVEM=FS=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1koiZ3-0006sO-Ob
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 07:46:37 +0000
Received: from mail-wr1-x431.google.com (unknown [2a00:1450:4864:20::431])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 09ee6bb4-1e0a-4f04-8e39-c84790cbcc61;
 Mon, 14 Dec 2020 07:46:36 +0000 (UTC)
Received: by mail-wr1-x431.google.com with SMTP id i9so15409176wrc.4
 for <xen-devel@lists.xenproject.org>; Sun, 13 Dec 2020 23:46:36 -0800 (PST)
Received: from CBGR90WXYV0
 (host109-146-187-221.range109-146.btcentralplus.com. [109.146.187.221])
 by smtp.gmail.com with ESMTPSA id l8sm22757751wrb.73.2020.12.13.23.46.34
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Sun, 13 Dec 2020 23:46:35 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 09ee6bb4-1e0a-4f04-8e39-c84790cbcc61
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=BHdVfpz+ocsNz74gAxwIP81g3moH3iC8iFGO8Kw/XoA=;
        b=EqIYR0tThu0qgNT9T9xuUohREN7uU3/nYv2o9PZnwkOoy01/33JaIyu7qjOqsBcuBt
         ttZXZnErtJvmoG+OdZZuREtbZKdtqk1cClVT5cU94+1MAhbExhdJUJ+wLJrr0OPNsnds
         WnljGaZ+nCGc0loroMGOehmrR4mhGjlb/nyZz2nzFZ2I5EwLb0JQZ4tgue3wh9DA8wBQ
         LPCqWQ1saSnEsV2iewf7sACgyqhbZtNSpil3n9GJA+y+9HIJSZ9aBAmIrNkpBRmm5bki
         cdHT6QneiqzWsET313O/4GTy/f5RqgWPwTsgQw7dbD1zUi5olGdsus9Mp1k8WVixPPth
         ymCg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=BHdVfpz+ocsNz74gAxwIP81g3moH3iC8iFGO8Kw/XoA=;
        b=nBz0zMzVbs8dl3NN+6HHKQR7Av1VFc/Raxhw7KKr+2P6oTRNm4sLeTrFuA82LOh0GT
         iXXJShnXI6qvU7uLEbtL/lSaFZ83c1VATzqkTS9xyKVrycX3O2CyS8vpZ2/5oMiEIALD
         uxBGEz4CMAzx7JnAJe27WGao4bQ9Wfo3Njsbcq0Z1zw/pwXZ5rmcBu6C235dsK69zMmN
         jlBkmjeEWhtNgdODgXMRMFQSRoD41+7IkmGFCy7UGb9IU4NagySDFZPX2iU80I33M3Lf
         R6ygRFrtTtXPee3XCqOkW+EayGxJQrcZMSPz057m1DXvVXiwi0M5wP3d5bPdDPRXuAJ8
         +xEA==
X-Gm-Message-State: AOAM533sLqO5ttm5WH0WnFyz5iJI7eZHsUric8yQzYtZMey4HZGzh+HS
	KkcL89Z984yPlVE5hFTMKJA=
X-Google-Smtp-Source: ABdhPJx1o2F74k/LxZ5tJsRx38QeGjW5/fG5PrI1cslg1GHy7K/GNOmK8lswMQAkQ3sLyeXEW4rXaA==
X-Received: by 2002:adf:e688:: with SMTP id r8mr27357468wrm.20.1607931996233;
        Sun, 13 Dec 2020 23:46:36 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Eduardo Habkost'" <ehabkost@redhat.com>,
	<qemu-devel@nongnu.org>
Cc: "'Markus Armbruster'" <armbru@redhat.com>,
	"'Igor Mammedov'" <imammedo@redhat.com>,
	"'Stefan Berger'" <stefanb@linux.ibm.com>,
	=?UTF-8?Q?'Marc-Andr=C3=A9_Lureau'?= <marcandre.lureau@redhat.com>,
	"'Daniel P. Berrange'" <berrange@redhat.com>,
	=?UTF-8?Q?'Philippe_Mathieu-Daud=C3=A9'?= <philmd@redhat.com>,
	"'John Snow'" <jsnow@redhat.com>,
	"'Kevin Wolf'" <kwolf@redhat.com>,
	"'Eric Blake'" <eblake@redhat.com>,
	"'Paolo Bonzini'" <pbonzini@redhat.com>,
	"'Stefan Berger'" <stefanb@linux.vnet.ibm.com>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Anthony Perard'" <anthony.perard@citrix.com>,
	"'Max Reitz'" <mreitz@redhat.com>,
	"'Cornelia Huck'" <cohuck@redhat.com>,
	"'Halil Pasic'" <pasic@linux.ibm.com>,
	"'Christian Borntraeger'" <borntraeger@de.ibm.com>,
	"'Richard Henderson'" <rth@twiddle.net>,
	"'David Hildenbrand'" <david@redhat.com>,
	"'Thomas Huth'" <thuth@redhat.com>,
	"'Matthew Rosato'" <mjrosato@linux.ibm.com>,
	"'Alex Williamson'" <alex.williamson@redhat.com>,
	"'Mark Cave-Ayland'" <mark.cave-ayland@ilande.co.uk>,
	"'Artyom Tarasenko'" <atar4qemu@gmail.com>,
	<xen-devel@lists.xenproject.org>,
	<qemu-block@nongnu.org>,
	<qemu-s390x@nongnu.org>
References: <20201211220529.2290218-1-ehabkost@redhat.com> <20201211220529.2290218-24-ehabkost@redhat.com>
In-Reply-To: <20201211220529.2290218-24-ehabkost@redhat.com>
Subject: RE: [PATCH v4 23/32] qdev: Move dev->realized check to qdev_property_set()
Date: Mon, 14 Dec 2020 07:46:36 -0000
Message-ID: <009c01d6d1ed$40f216b0$c2d64410$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQLXjZRr+qbM/S7ZJ9VMMfP1SWcevwGTHZYap+gmSaA=

> -----Original Message-----
> From: Eduardo Habkost <ehabkost@redhat.com>
> Sent: 11 December 2020 22:05
> To: qemu-devel@nongnu.org
> Cc: Markus Armbruster <armbru@redhat.com>; Igor Mammedov <imammedo@redhat.com>; Stefan Berger
> <stefanb@linux.ibm.com>; Marc-André Lureau <marcandre.lureau@redhat.com>; Daniel P. Berrange
> <berrange@redhat.com>; Philippe Mathieu-Daudé <philmd@redhat.com>; John Snow <jsnow@redhat.com>; Kevin
> Wolf <kwolf@redhat.com>; Eric Blake <eblake@redhat.com>; Paolo Bonzini <pbonzini@redhat.com>; Stefan
> Berger <stefanb@linux.vnet.ibm.com>; Stefano Stabellini <sstabellini@kernel.org>; Anthony Perard
> <anthony.perard@citrix.com>; Paul Durrant <paul@xen.org>; Max Reitz <mreitz@redhat.com>; Cornelia Huck
> <cohuck@redhat.com>; Halil Pasic <pasic@linux.ibm.com>; Christian Borntraeger
> <borntraeger@de.ibm.com>; Richard Henderson <rth@twiddle.net>; David Hildenbrand <david@redhat.com>;
> Thomas Huth <thuth@redhat.com>; Matthew Rosato <mjrosato@linux.ibm.com>; Alex Williamson
> <alex.williamson@redhat.com>; Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>; Artyom Tarasenko
> <atar4qemu@gmail.com>; xen-devel@lists.xenproject.org; qemu-block@nongnu.org; qemu-s390x@nongnu.org
> Subject: [PATCH v4 23/32] qdev: Move dev->realized check to qdev_property_set()
>
> Every single qdev property setter function manually checks
> dev->realized.  We can just check dev->realized inside
> qdev_property_set() instead.
>
> The check is being added as a separate function
> (qdev_prop_allow_set()) because it will become a callback later.
>
> Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>

Xen parts...

Acked-by: Paul Durrant <paul@xen.org>



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 07:47:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 07:47:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51916.90813 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koiZr-00070E-Kw; Mon, 14 Dec 2020 07:47:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51916.90813; Mon, 14 Dec 2020 07:47:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koiZr-000707-HB; Mon, 14 Dec 2020 07:47:27 +0000
Received: by outflank-mailman (input) for mailman id 51916;
 Mon, 14 Dec 2020 07:47:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fVEM=FS=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1koiZp-0006zx-Rx
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 07:47:25 +0000
Received: from mail-wm1-x334.google.com (unknown [2a00:1450:4864:20::334])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 138eebf9-10a6-48e3-b5c0-3c474a0619b5;
 Mon, 14 Dec 2020 07:47:25 +0000 (UTC)
Received: by mail-wm1-x334.google.com with SMTP id v14so12784677wml.1
 for <xen-devel@lists.xenproject.org>; Sun, 13 Dec 2020 23:47:25 -0800 (PST)
Received: from CBGR90WXYV0
 (host109-146-187-221.range109-146.btcentralplus.com. [109.146.187.221])
 by smtp.gmail.com with ESMTPSA id x66sm27844024wmg.26.2020.12.13.23.47.23
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Sun, 13 Dec 2020 23:47:23 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 138eebf9-10a6-48e3-b5c0-3c474a0619b5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=q9f0dLrgKME11QwP+Lz1f8G042/xNt7QORwsmse8W30=;
        b=Z8fdp8uibo5J7i6A5eXX0hVDeL/FKch5Y6Uz+fhYFztuVApB2vT3A+AIAZzChqJFLn
         Br84u47JhMVvg4OeLycAsmc4zEXKi4lOhMxbJbb005dTFLs7u1a4e5+PwJQh2jFqiOBY
         22aFebb+8xibP+rQonKnqKDLB6jSK48r7YcDlzZJFLrLdUOB2mVMQLznYW/1swZjE4NP
         tqMfb290URo2+yTH+n6CMeBpO610UsNIPvTifnY76RjBoAC7teDOHURK0r8PjqwRUmfv
         zmooEKMqub62XwLcdV1s91g+zy19S3lqMBRY6q3rgTMEwF2Fc8NrDo/9uwACGqhiHYOh
         VrEg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=q9f0dLrgKME11QwP+Lz1f8G042/xNt7QORwsmse8W30=;
        b=f4G7cq4obCwQVf6YXy3mmu/m/ffgR3hDDjc6vhEri3/K/2FKiJ3GIKZw1dqetByBmK
         Xd6gA3o4kqtvbGZEms1XHg2vnFfUsuJ/2yIS7lq+9iga1qH4WuXcDmEbI2iPLb9Y44r4
         IkxKxJ9mAVYceS0cP2PVXDw/bboDhdYM/uYKlgHKGpBgwTd5Ak3fmu7y927avTip7eVh
         k92JyCwqeKzpmxXqbSW3Q66XCkhywMugKbhEPDD8ukZ1qkug819wHuxpLDOHkCMjlcx+
         bs5x63o5Kn0hs5UAZnpJyH3ZBJyUD8vQnyiKlGWbG8S0mNjcvEJvPnmUPmMcsAW3Y4Df
         Cipw==
X-Gm-Message-State: AOAM530780EOfAg65FkgINIO8qEfEaL3W2ywNklPVn5N7691ZGl/sOqG
	vM0E1miQBN9AEALkZvB8ALc=
X-Google-Smtp-Source: ABdhPJzu+zNeTRFRPazIHL+E4xY5IryYTfv2xhHMgRDdh02X4K6G7ujVJ7rk339hArcq0uo/hSscZw==
X-Received: by 2002:a7b:c145:: with SMTP id z5mr26334263wmi.164.1607932044358;
        Sun, 13 Dec 2020 23:47:24 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Eduardo Habkost'" <ehabkost@redhat.com>,
	<qemu-devel@nongnu.org>
Cc: "'Markus Armbruster'" <armbru@redhat.com>,
	"'Igor Mammedov'" <imammedo@redhat.com>,
	"'Stefan Berger'" <stefanb@linux.ibm.com>,
	=?UTF-8?Q?'Marc-Andr=C3=A9_Lureau'?= <marcandre.lureau@redhat.com>,
	"'Daniel P. Berrange'" <berrange@redhat.com>,
	=?UTF-8?Q?'Philippe_Mathieu-Daud=C3=A9'?= <philmd@redhat.com>,
	"'John Snow'" <jsnow@redhat.com>,
	"'Kevin Wolf'" <kwolf@redhat.com>,
	"'Eric Blake'" <eblake@redhat.com>,
	"'Paolo Bonzini'" <pbonzini@redhat.com>,
	"'Stefan Berger'" <stefanb@linux.vnet.ibm.com>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Anthony Perard'" <anthony.perard@citrix.com>,
	"'Max Reitz'" <mreitz@redhat.com>,
	"'Cornelia Huck'" <cohuck@redhat.com>,
	"'Halil Pasic'" <pasic@linux.ibm.com>,
	"'Christian Borntraeger'" <borntraeger@de.ibm.com>,
	"'Richard Henderson'" <rth@twiddle.net>,
	"'David Hildenbrand'" <david@redhat.com>,
	"'Thomas Huth'" <thuth@redhat.com>,
	"'Matthew Rosato'" <mjrosato@linux.ibm.com>,
	"'Alex Williamson'" <alex.williamson@redhat.com>,
	<xen-devel@lists.xenproject.org>,
	<qemu-block@nongnu.org>,
	<qemu-s390x@nongnu.org>
References: <20201211220529.2290218-1-ehabkost@redhat.com> <20201211220529.2290218-31-ehabkost@redhat.com>
In-Reply-To: <20201211220529.2290218-31-ehabkost@redhat.com>
Subject: RE: [PATCH v4 30/32] qdev: Rename qdev_get_prop_ptr() to object_field_prop_ptr()
Date: Mon, 14 Dec 2020 07:47:25 -0000
Message-ID: <009d01d6d1ed$5da99ee0$18fcdca0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQLXjZRr+qbM/S7ZJ9VMMfP1SWcevwIpMTjHp+N10QA=

> -----Original Message-----
> From: Eduardo Habkost <ehabkost@redhat.com>
> Sent: 11 December 2020 22:05
> To: qemu-devel@nongnu.org
> Cc: Markus Armbruster <armbru@redhat.com>; Igor Mammedov <imammedo@redhat.com>; Stefan Berger
> <stefanb@linux.ibm.com>; Marc-André Lureau <marcandre.lureau@redhat.com>; Daniel P. Berrange
> <berrange@redhat.com>; Philippe Mathieu-Daudé <philmd@redhat.com>; John Snow <jsnow@redhat.com>; Kevin
> Wolf <kwolf@redhat.com>; Eric Blake <eblake@redhat.com>; Paolo Bonzini <pbonzini@redhat.com>; Stefan
> Berger <stefanb@linux.vnet.ibm.com>; Stefano Stabellini <sstabellini@kernel.org>; Anthony Perard
> <anthony.perard@citrix.com>; Paul Durrant <paul@xen.org>; Max Reitz <mreitz@redhat.com>; Cornelia Huck
> <cohuck@redhat.com>; Halil Pasic <pasic@linux.ibm.com>; Christian Borntraeger
> <borntraeger@de.ibm.com>; Richard Henderson <rth@twiddle.net>; David Hildenbrand <david@redhat.com>;
> Thomas Huth <thuth@redhat.com>; Matthew Rosato <mjrosato@linux.ibm.com>; Alex Williamson
> <alex.williamson@redhat.com>; xen-devel@lists.xenproject.org; qemu-block@nongnu.org; qemu-
> s390x@nongnu.org
> Subject: [PATCH v4 30/32] qdev: Rename qdev_get_prop_ptr() to object_field_prop_ptr()
>
> The function will be moved to common QOM code, as it is not
> specific to TYPE_DEVICE anymore.
>
> Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>

Xen parts...

Acked-by: Paul Durrant <paul@xen.org>



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 07:56:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 07:56:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51924.90853 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koiiX-000883-9n; Mon, 14 Dec 2020 07:56:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51924.90853; Mon, 14 Dec 2020 07:56:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koiiX-00087q-5y; Mon, 14 Dec 2020 07:56:25 +0000
Received: by outflank-mailman (input) for mailman id 51924;
 Mon, 14 Dec 2020 07:56:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XC/h=FS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1koiiW-00084u-0o
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 07:56:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 77c91e7b-82b9-4efa-a6bd-8a7d300e751c;
 Mon, 14 Dec 2020 07:56:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7AE46ACA5;
 Mon, 14 Dec 2020 07:56:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 77c91e7b-82b9-4efa-a6bd-8a7d300e751c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607932577; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=eg1c78s9LNkI41yc1G5ej9Fm++xiGV/QggtLqwnFVz8=;
	b=XLkzyl+dE9NarKGUaMHvMul9Fl1T4yp6CCztRcmk0fsg+gGxEv4060P69tXZgIWyffoF64
	HwVNiyJscfgeTkPTlp8o4VAF4T7NOyhjIr8aZTHB8PQ+4zjcIqEt0ryoicJDfG9z25V3vq
	MtnddzbiYNUnhLKx9KwfsE2eK29sxaw=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 1/3] xen/arm: add support for run_in_exception_handler()
Date: Mon, 14 Dec 2020 08:56:13 +0100
Message-Id: <20201214075615.25038-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201214075615.25038-1-jgross@suse.com>
References: <20201214075615.25038-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add support for running a function in an exception handler on Arm. Do
it the same way as on x86, via a bug_frame.

Unfortunately, inline assembly on Arm seems to be less capable than on
x86, so functions called via run_in_exception_handler() have to be
globally visible.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V4:
- new patch

I have verified that the generated bug_frame is correct by inspecting
the resulting binary.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/arch/arm/traps.c       | 10 +++++++++-
 xen/drivers/char/ns16550.c |  3 ++-
 xen/include/asm-arm/bug.h  | 32 +++++++++++++++++++++-----------
 3 files changed, 32 insertions(+), 13 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 22bd1bd4c6..6e677affe2 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1236,8 +1236,16 @@ int do_bug_frame(const struct cpu_user_regs *regs, vaddr_t pc)
     if ( !bug )
         return -ENOENT;
 
+    if ( id == BUGFRAME_run_fn )
+    {
+        void (*fn)(const struct cpu_user_regs *) = bug_ptr(bug);
+
+        fn(regs);
+        return 0;
+    }
+
     /* WARN, BUG or ASSERT: decode the filename pointer and line number. */
-    filename = bug_file(bug);
+    filename = bug_ptr(bug);
     if ( !is_kernel(filename) )
         return -EINVAL;
     fixup = strlen(filename);
diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
index 9235d854fe..dd6500acc8 100644
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -192,7 +192,8 @@ static void ns16550_interrupt(
 /* Safe: ns16550_poll() runs as softirq so not reentrant on a given CPU. */
 static DEFINE_PER_CPU(struct serial_port *, poll_port);
 
-static void __ns16550_poll(struct cpu_user_regs *regs)
+/* run_in_exception_handler() on Arm requires globally visible symbol. */
+void __ns16550_poll(struct cpu_user_regs *regs)
 {
     struct serial_port *port = this_cpu(poll_port);
     struct ns16550 *uart = port->uart;
diff --git a/xen/include/asm-arm/bug.h b/xen/include/asm-arm/bug.h
index 36c803357c..a7da2c306f 100644
--- a/xen/include/asm-arm/bug.h
+++ b/xen/include/asm-arm/bug.h
@@ -15,34 +15,38 @@
 
 struct bug_frame {
     signed int loc_disp;    /* Relative address to the bug address */
-    signed int file_disp;   /* Relative address to the filename */
+    signed int ptr_disp;    /* Relative address to the filename or function */
     signed int msg_disp;    /* Relative address to the predicate (for ASSERT) */
     uint16_t line;          /* Line number */
     uint32_t pad0:16;       /* Padding for 8-bytes align */
 };
 
 #define bug_loc(b) ((const void *)(b) + (b)->loc_disp)
-#define bug_file(b) ((const void *)(b) + (b)->file_disp);
+#define bug_ptr(b) ((const void *)(b) + (b)->ptr_disp);
 #define bug_line(b) ((b)->line)
 #define bug_msg(b) ((const char *)(b) + (b)->msg_disp)
 
-#define BUGFRAME_warn   0
-#define BUGFRAME_bug    1
-#define BUGFRAME_assert 2
+#define BUGFRAME_run_fn 0
+#define BUGFRAME_warn   1
+#define BUGFRAME_bug    2
+#define BUGFRAME_assert 3
 
-#define BUGFRAME_NR     3
+#define BUGFRAME_NR     4
 
 /* Many versions of GCC doesn't support the asm %c parameter which would
  * be preferable to this unpleasantness. We use mergeable string
  * sections to avoid multiple copies of the string appearing in the
  * Xen image.
  */
-#define BUG_FRAME(type, line, file, has_msg, msg) do {                      \
+#define BUG_FRAME(type, line, ptr, ptr_is_file, has_msg, msg) do {          \
     BUILD_BUG_ON((line) >> 16);                                             \
     BUILD_BUG_ON((type) >= BUGFRAME_NR);                                    \
     asm ("1:"BUG_INSTR"\n"                                                  \
          ".pushsection .rodata.str, \"aMS\", %progbits, 1\n"                \
-         "2:\t.asciz " __stringify(file) "\n"                               \
+         "2:\n"                                                             \
+         ".if " #ptr_is_file "\n"                                           \
+         "\t.asciz " __stringify(ptr) "\n"                                  \
+         ".endif\n"                                                         \
          "3:\n"                                                             \
          ".if " #has_msg "\n"                                               \
          "\t.asciz " #msg "\n"                                              \
@@ -52,21 +56,27 @@ struct bug_frame {
          "4:\n"                                                             \
          ".p2align 2\n"                                                     \
          ".long (1b - 4b)\n"                                                \
+         ".if " #ptr_is_file "\n"                                           \
          ".long (2b - 4b)\n"                                                \
+         ".else\n"                                                          \
+         ".long (" #ptr " - 4b)\n"                                          \
+         ".endif\n"                                                         \
          ".long (3b - 4b)\n"                                                \
          ".hword " __stringify(line) ", 0\n"                                \
          ".popsection");                                                    \
 } while (0)
 
-#define WARN() BUG_FRAME(BUGFRAME_warn, __LINE__, __FILE__, 0, "")
+#define run_in_exception_handler(fn) BUG_FRAME(BUGFRAME_run_fn, 0, fn, 0, 0, "")
+
+#define WARN() BUG_FRAME(BUGFRAME_warn, __LINE__, __FILE__, 1, 0, "")
 
 #define BUG() do {                                              \
-    BUG_FRAME(BUGFRAME_bug,  __LINE__, __FILE__, 0, "");        \
+    BUG_FRAME(BUGFRAME_bug,  __LINE__, __FILE__, 1, 0, "");        \
     unreachable();                                              \
 } while (0)
 
 #define assert_failed(msg) do {                                 \
-    BUG_FRAME(BUGFRAME_assert, __LINE__, __FILE__, 1, msg);     \
+    BUG_FRAME(BUGFRAME_assert, __LINE__, __FILE__, 1, 1, msg);     \
     unreachable();                                              \
 } while (0)
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 07:56:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 07:56:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51923.90841 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koiiU-000863-0j; Mon, 14 Dec 2020 07:56:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51923.90841; Mon, 14 Dec 2020 07:56:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koiiT-00085w-TL; Mon, 14 Dec 2020 07:56:21 +0000
Received: by outflank-mailman (input) for mailman id 51923;
 Mon, 14 Dec 2020 07:56:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XC/h=FS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1koiiS-00084z-9R
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 07:56:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9baafd8a-bc33-4ffc-8299-c22e92bc3d06;
 Mon, 14 Dec 2020 07:56:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B1377AD4D;
 Mon, 14 Dec 2020 07:56:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9baafd8a-bc33-4ffc-8299-c22e92bc3d06
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607932577; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=KnYqlf/P+S7NcIzAGFzhLoCYbM4Ex5RUcFQo+09bQv0=;
	b=JMIzCIoqvl+4Kk85zJ96fL3dk1LmsPf7Eh90am/VZ8i3hIbfNMeKcneb9ggiJJlXwZqy9a
	5mdT6doWSYcEioBQeFwH+n+Ukf3MXE53xbmkeT+fHY/gn7OI/soD0d+F3La1R9/3W1sM5N
	tAjNq8oW73Uxicw76ymhKMK82ifFKZE=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 2/3] xen: enable keyhandlers to work without register set specified
Date: Mon, 14 Dec 2020 08:56:14 +0100
Message-Id: <20201214075615.25038-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201214075615.25038-1-jgross@suse.com>
References: <20201214075615.25038-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

There are only two keyhandlers which make use of the cpu_user_regs
struct passed to them. In order to be able to call any keyhandler in
non-interrupt contexts, too, modify those two handlers to cope with a
NULL regs pointer by using run_in_exception_handler() in that case.

Suggested-by: Julien Grall <julien@xen.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V4:
- new patch
---
 xen/common/keyhandler.c | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
index 68364e987d..de120fa092 100644
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -181,7 +181,10 @@ static void dump_registers(unsigned char key, struct cpu_user_regs *regs)
     cpumask_copy(&dump_execstate_mask, &cpu_online_map);
 
     /* Get local execution state out immediately, in case we get stuck. */
-    dump_execstate(regs);
+    if ( regs )
+        dump_execstate(regs);
+    else
+        run_in_exception_handler(dump_execstate);
 
     /* Alt. handling: remaining CPUs are dumped asynchronously one-by-one. */
     if ( alt_key_handling )
@@ -481,15 +484,24 @@ static void run_all_keyhandlers(unsigned char key, struct cpu_user_regs *regs)
     tasklet_schedule(&run_all_keyhandlers_tasklet);
 }
 
-static void do_debug_key(unsigned char key, struct cpu_user_regs *regs)
+/* run_in_exception_handler() on Arm requires a globally visible symbol. */
+void do_debugger_trap_fatal(struct cpu_user_regs *regs)
 {
-    printk("'%c' pressed -> trapping into debugger\n", key);
     (void)debugger_trap_fatal(0xf001, regs);
 
     /* Prevent tail call optimisation, which confuses xendbg. */
     barrier();
 }
 
+static void do_debug_key(unsigned char key, struct cpu_user_regs *regs)
+{
+    printk("'%c' pressed -> trapping into debugger\n", key);
+    if ( regs )
+        do_debugger_trap_fatal(regs);
+    else
+        run_in_exception_handler(do_debugger_trap_fatal);
+}
+
 static void do_toggle_alt_key(unsigned char key, struct cpu_user_regs *regs)
 {
     alt_key_handling = !alt_key_handling;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 07:56:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 07:56:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51925.90865 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koiib-0008CD-KC; Mon, 14 Dec 2020 07:56:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51925.90865; Mon, 14 Dec 2020 07:56:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koiib-0008C4-GW; Mon, 14 Dec 2020 07:56:29 +0000
Received: by outflank-mailman (input) for mailman id 51925;
 Mon, 14 Dec 2020 07:56:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XC/h=FS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1koiib-00084u-13
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 07:56:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cc7a59b2-24d4-49b6-bf31-c7e39c17db19;
 Mon, 14 Dec 2020 07:56:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id ED18CADE1;
 Mon, 14 Dec 2020 07:56:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cc7a59b2-24d4-49b6-bf31-c7e39c17db19
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607932578; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=suR4RG6T/UXhQCsJzrD78qjsYpHI44RyxZHYiYIhyhw=;
	b=ZtZ4p22y1wIsbwX5UkkNEa8JLGpdr1YGd8rGp9CFD2oinxfY/TU+5H5Hx5w23+vQYkcrRY
	P7dM1MX9lxk3rDH6NP1hUB+tsFy03al58u66BlH5dufaxtLvrkSF/dNmeC62k+Pm5Mxr0t
	iGtCsuCU2EyLHcugQwD3yMiBOtkr3nk=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 3/3] xen: add support for automatic debug key actions in case of crash
Date: Mon, 14 Dec 2020 08:56:15 +0100
Message-Id: <20201214075615.25038-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201214075615.25038-1-jgross@suse.com>
References: <20201214075615.25038-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When the host crashes it would sometimes be nice to have additional
debug data available which could be produced via debug keys, but
halting the server for manual intervention might be impossible due to
the need to reboot/kexec sooner rather than later.

Add support for automatic debug key actions in case of crashes which
can be activated via a boot or runtime parameter.

Depending on the type of crash the desired data might be different, so
support different settings for the possible types of crashes.

The parameters share the "crash-debug" prefix and have the following syntax:

  crash-debug-<type>=<string>

with <type> being one of:

  panic, hwdom, watchdog, kexeccmd, debugkey

and <string> a sequence of debug key characters, with '+' having the
special meaning of a 10 millisecond pause.

So "crash-debug-watchdog=0+0qr" would result in special output in case
of a watchdog-triggered crash (dom0 state, 10 ms pause, dom0 state,
domain info, run queues).

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- switched special character '.' to '+' (Jan Beulich)
- 10 ms instead of 1 s pause (Jan Beulich)
- added more text to the boot parameter description (Jan Beulich)

V3:
- added const (Jan Beulich)
- thorough test of crash reason parameter (Jan Beulich)
- kexeccmd case should depend on CONFIG_KEXEC (Jan Beulich)
- added dummy get_irq_regs() helper on Arm

V4:
- call keyhandlers with NULL for regs
- use ARRAY_SIZE() (Jan Beulich)
- don't activate handlers in early boot (Jan Beulich)
- avoid recursion
- extend documentation a bit

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 docs/misc/xen-command-line.pandoc | 41 +++++++++++++++++++++++
 xen/common/kexec.c                |  8 +++--
 xen/common/keyhandler.c           | 55 +++++++++++++++++++++++++++++++
 xen/common/shutdown.c             |  4 +--
 xen/drivers/char/console.c        |  2 +-
 xen/include/xen/kexec.h           | 10 ++++--
 xen/include/xen/keyhandler.h      | 10 ++++++
 7 files changed, 122 insertions(+), 8 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index b4a0d60c11..e4c0a144fc 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -574,6 +574,47 @@ reduction of features at Xen's disposal to manage guests.
 ### cpuinfo (x86)
 > `= <boolean>`
 
+### crash-debug-debugkey
+### crash-debug-hwdom
+### crash-debug-kexeccmd
+### crash-debug-panic
+### crash-debug-watchdog
+> `= <string>`
+
+> Can be modified at runtime
+
+Specify debug-key actions in case of a crash. Each of the parameters applies
+to a different crash reason. The `<string>` is a sequence of debug key
+characters, with `+` having the special meaning of a 10 millisecond pause.
+
+`crash-debug-debugkey` will be used for crashes induced by the `C` debug
+key (i.e. manually induced crash).
+
+`crash-debug-hwdom` denotes a crash of dom0.
+
+`crash-debug-kexeccmd` is an explicit request by dom0 to continue with the
+kdump kernel via kexec. Only available on hypervisors built with CONFIG_KEXEC.
+
+`crash-debug-panic` is a crash of the hypervisor.
+
+`crash-debug-watchdog` is a crash due to the watchdog timer expiring.
+
+Note that dumping diagnostic data to the console can fail in multiple
+ways (missing data, a hanging system, ...) depending on the reason for
+the crash, which might have left the hypervisor in a bad state. If a
+debug-key action itself leads to another crash, recursion is avoided:
+no further debug-key actions will be performed in that case. A crash in
+the early boot phase will not trigger any debug-key action, as the
+system might not yet be in a state where the handlers can work.
+
+So e.g. `crash-debug-watchdog=0+0r` would dump dom0 state twice with 10
+milliseconds between the two state dumps, followed by the run queues of the
+hypervisor, if the system crashes due to a watchdog timeout.
+
+Depending on the reason for the system crash, triggering some debug-key
+action might result in a hang instead of dumping data and then doing a
+reboot or crash dump.
+
 ### crashinfo_maxaddr
 > `= <size>`
 
diff --git a/xen/common/kexec.c b/xen/common/kexec.c
index 52cdc4ebc3..ebeee6405a 100644
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -373,10 +373,12 @@ static int kexec_common_shutdown(void)
     return 0;
 }
 
-void kexec_crash(void)
+void kexec_crash(enum crash_reason reason)
 {
     int pos;
 
+    keyhandler_crash_action(reason);
+
     pos = (test_bit(KEXEC_FLAG_CRASH_POS, &kexec_flags) != 0);
     if ( !test_bit(KEXEC_IMAGE_CRASH_BASE + pos, &kexec_flags) )
         return;
@@ -409,7 +411,7 @@ static long kexec_reboot(void *_image)
 static void do_crashdump_trigger(unsigned char key)
 {
     printk("'%c' pressed -> triggering crashdump\n", key);
-    kexec_crash();
+    kexec_crash(CRASHREASON_DEBUGKEY);
     printk(" * no crash kernel loaded!\n");
 }
 
@@ -840,7 +842,7 @@ static int kexec_exec(XEN_GUEST_HANDLE_PARAM(void) uarg)
         ret = continue_hypercall_on_cpu(0, kexec_reboot, image);
         break;
     case KEXEC_TYPE_CRASH:
-        kexec_crash(); /* Does not return */
+        kexec_crash(CRASHREASON_KEXECCMD); /* Does not return */
         break;
     }
 
diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
index de120fa092..806355ed8b 100644
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -3,7 +3,9 @@
  */
 
 #include <asm/regs.h>
+#include <xen/delay.h>
 #include <xen/keyhandler.h>
+#include <xen/param.h>
 #include <xen/shutdown.h>
 #include <xen/event.h>
 #include <xen/console.h>
@@ -519,6 +521,59 @@ void __init initialize_keytable(void)
     }
 }
 
+#define CRASHACTION_SIZE  32
+static char crash_debug_panic[CRASHACTION_SIZE];
+string_runtime_param("crash-debug-panic", crash_debug_panic);
+static char crash_debug_hwdom[CRASHACTION_SIZE];
+string_runtime_param("crash-debug-hwdom", crash_debug_hwdom);
+static char crash_debug_watchdog[CRASHACTION_SIZE];
+string_runtime_param("crash-debug-watchdog", crash_debug_watchdog);
+#ifdef CONFIG_KEXEC
+static char crash_debug_kexeccmd[CRASHACTION_SIZE];
+string_runtime_param("crash-debug-kexeccmd", crash_debug_kexeccmd);
+#else
+#define crash_debug_kexeccmd NULL
+#endif
+static char crash_debug_debugkey[CRASHACTION_SIZE];
+string_runtime_param("crash-debug-debugkey", crash_debug_debugkey);
+
+void keyhandler_crash_action(enum crash_reason reason)
+{
+    static const char *const crash_action[] = {
+        [CRASHREASON_PANIC] = crash_debug_panic,
+        [CRASHREASON_HWDOM] = crash_debug_hwdom,
+        [CRASHREASON_WATCHDOG] = crash_debug_watchdog,
+        [CRASHREASON_KEXECCMD] = crash_debug_kexeccmd,
+        [CRASHREASON_DEBUGKEY] = crash_debug_debugkey,
+    };
+    static bool ignore;
+    const char *action;
+
+    /* Some handlers are not functional too early. */
+    if ( system_state < SYS_STATE_smp_boot )
+        return;
+
+    /* Avoid recursion. */
+    if ( ignore )
+        return;
+    ignore = true;
+
+    if ( (unsigned int)reason >= ARRAY_SIZE(crash_action) )
+        return;
+    action = crash_action[reason];
+    if ( !action )
+        return;
+
+    while ( *action )
+    {
+        if ( *action == '+' )
+            mdelay(10);
+        else
+            handle_keypress(*action, NULL);
+        action++;
+    }
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/common/shutdown.c b/xen/common/shutdown.c
index 912593915b..abde48aa4c 100644
--- a/xen/common/shutdown.c
+++ b/xen/common/shutdown.c
@@ -43,7 +43,7 @@ void hwdom_shutdown(u8 reason)
     case SHUTDOWN_crash:
         debugger_trap_immediate();
         printk("Hardware Dom%u crashed: ", hardware_domain->domain_id);
-        kexec_crash();
+        kexec_crash(CRASHREASON_HWDOM);
         maybe_reboot();
         break; /* not reached */
 
@@ -56,7 +56,7 @@ void hwdom_shutdown(u8 reason)
     case SHUTDOWN_watchdog:
         printk("Hardware Dom%u shutdown: watchdog rebooting machine\n",
                hardware_domain->domain_id);
-        kexec_crash();
+        kexec_crash(CRASHREASON_WATCHDOG);
         machine_restart(0);
         break; /* not reached */
 
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 861ad53a8f..acec277f5e 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -1271,7 +1271,7 @@ void panic(const char *fmt, ...)
 
     debugger_trap_immediate();
 
-    kexec_crash();
+    kexec_crash(CRASHREASON_PANIC);
 
     if ( opt_noreboot )
         machine_halt();
diff --git a/xen/include/xen/kexec.h b/xen/include/xen/kexec.h
index e85ba16405..9f7a912e97 100644
--- a/xen/include/xen/kexec.h
+++ b/xen/include/xen/kexec.h
@@ -1,6 +1,8 @@
 #ifndef __XEN_KEXEC_H__
 #define __XEN_KEXEC_H__
 
+#include <xen/keyhandler.h>
+
 #ifdef CONFIG_KEXEC
 
 #include <public/kexec.h>
@@ -48,7 +50,7 @@ void machine_kexec_unload(struct kexec_image *image);
 void machine_kexec_reserved(xen_kexec_reserve_t *reservation);
 void machine_reboot_kexec(struct kexec_image *image);
 void machine_kexec(struct kexec_image *image);
-void kexec_crash(void);
+void kexec_crash(enum crash_reason reason);
 void kexec_crash_save_cpu(void);
 struct crash_xen_info *kexec_crash_save_info(void);
 void machine_crash_shutdown(void);
@@ -82,7 +84,11 @@ void vmcoreinfo_append_str(const char *fmt, ...)
 #define kexecing 0
 
 static inline void kexec_early_calculations(void) {}
-static inline void kexec_crash(void) {}
+static inline void kexec_crash(enum crash_reason reason)
+{
+    keyhandler_crash_action(reason);
+}
+
 static inline void kexec_crash_save_cpu(void) {}
 static inline void set_kexec_crash_area_size(u64 system_ram) {}
 
diff --git a/xen/include/xen/keyhandler.h b/xen/include/xen/keyhandler.h
index 5131e86cbc..9c5830a037 100644
--- a/xen/include/xen/keyhandler.h
+++ b/xen/include/xen/keyhandler.h
@@ -48,4 +48,14 @@ void register_irq_keyhandler(unsigned char key,
 /* Inject a keypress into the key-handling subsystem. */
 extern void handle_keypress(unsigned char key, struct cpu_user_regs *regs);
 
+enum crash_reason {
+    CRASHREASON_PANIC,
+    CRASHREASON_HWDOM,
+    CRASHREASON_WATCHDOG,
+    CRASHREASON_KEXECCMD,
+    CRASHREASON_DEBUGKEY,
+};
+
+void keyhandler_crash_action(enum crash_reason reason);
+
 #endif /* __XEN_KEYHANDLER_H__ */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 07:56:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 07:56:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51922.90829 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koiiS-00085B-PO; Mon, 14 Dec 2020 07:56:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51922.90829; Mon, 14 Dec 2020 07:56:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koiiS-000854-Kx; Mon, 14 Dec 2020 07:56:20 +0000
Received: by outflank-mailman (input) for mailman id 51922;
 Mon, 14 Dec 2020 07:56:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XC/h=FS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1koiiR-00084u-39
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 07:56:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 04d04060-3d0f-4ddf-96bd-56a524aa508e;
 Mon, 14 Dec 2020 07:56:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 41D90AC10;
 Mon, 14 Dec 2020 07:56:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 04d04060-3d0f-4ddf-96bd-56a524aa508e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607932577; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=M+3a3E8bow1XpCbEoYeJd2DUDpP7T6AQXwF6GP582zA=;
	b=Qvs2RwsUE39y6vI3cPpHrlaAursFOniDfvKEMCna0UL/iCcmwyCplpS+cx3+5LKjJ876+2
	A6WWSKDSyHk/Rw2emh1aLL9gcUNbrus7SsPqGyt2sUOFJD/XPazuCjqyK2AUdR2LxgB9I3
	15lk7CxfG0dd+FvqUc5wJ6Oyc6931aA=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 0/3] xen: add support for automatic debug key actions in case of crash
Date: Mon, 14 Dec 2020 08:56:12 +0100
Message-Id: <20201214075615.25038-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When the host crashes it would sometimes be nice to have additional
debug data available which could be produced via debug keys, but
halting the server for manual intervention might be impossible due to
the need to reboot/kexec sooner rather than later.

Add support for automatic debug key actions in case of crashes which
can be activated via a boot or runtime parameter.

Changes in V4:
- addressed comments (now patch 3)
- added patches 1 and 2

Some further remarks to the new patches added:

Patch 1 adds Arm support for run_in_exception_handler(). Constructing
the related bug_frame is unfortunately not as easy as on x86.

I have verified that %c in inline assembly isn't supported by gcc 7 for
arm64, so this was the only way I've found to support this feature. In
theory it might be possible to add a variable referencing the called
function and to discard that variable again when linking, but I'd like
to add this more intrusive modification only if really wanted.

There is more potential for unifying struct bug_frame between x86 and
Arm, either by:
- using the Arm layout on x86, too (resulting in a grow of the bugframe
  data for the cases without messages)
- trying to construct the data in C instead of inline assembly, which
  will need to either keep the x86 assembler BUG_FRAME construction, or
  to add a few functions issuing the ASSERT/BUG/WARN which would then
  need to be called from *.S files.

Patch 2 opens up more potential for simplification: in theory there is
no longer any need to call any key handler with the regs parameter,
allowing the same prototype to be used for all handlers. The downside would
be to have an additional irq frame on the stack for the dump_registers()
and the do_debug_key() handlers. In case this is acceptable I'd be
happy to send a related cleanup patch.

Juergen Gross (3):
  xen/arm: add support for run_in_exception_handler()
  xen: enable keyhandlers to work without register set specified
  xen: add support for automatic debug key actions in case of crash

 docs/misc/xen-command-line.pandoc | 41 +++++++++++++++++
 xen/arch/arm/traps.c              | 10 ++++-
 xen/common/kexec.c                |  8 ++--
 xen/common/keyhandler.c           | 73 +++++++++++++++++++++++++++++--
 xen/common/shutdown.c             |  4 +-
 xen/drivers/char/console.c        |  2 +-
 xen/drivers/char/ns16550.c        |  3 +-
 xen/include/asm-arm/bug.h         | 32 +++++++++-----
 xen/include/xen/kexec.h           | 10 ++++-
 xen/include/xen/keyhandler.h      | 10 +++++
 10 files changed, 169 insertions(+), 24 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 08:05:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 08:05:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51946.90880 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koirC-0001bl-10; Mon, 14 Dec 2020 08:05:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51946.90880; Mon, 14 Dec 2020 08:05:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koirB-0001be-Tl; Mon, 14 Dec 2020 08:05:21 +0000
Received: by outflank-mailman (input) for mailman id 51946;
 Mon, 14 Dec 2020 08:05:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K590=FS=gmail.com=marcandre.lureau@srs-us1.protection.inumbo.net>)
 id 1koirA-0001bZ-5M
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 08:05:20 +0000
Received: from mail-ej1-x644.google.com (unknown [2a00:1450:4864:20::644])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9972717d-5d78-4853-ae96-9d20773f3514;
 Mon, 14 Dec 2020 08:05:19 +0000 (UTC)
Received: by mail-ej1-x644.google.com with SMTP id q22so3509953eja.2
 for <xen-devel@lists.xenproject.org>; Mon, 14 Dec 2020 00:05:19 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9972717d-5d78-4853-ae96-9d20773f3514
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=TiB2F/QWVAlfD2IsKZXhLFiZ75QHq0FFdQ/Lizli0g0=;
        b=F315ig/6/Gu7lB+D3/eCEGRGZzsyMTap3fAtk2tQ6P6Vkm6DEJK15FNnVvp+jcjujD
         liTiZGjVI7uNM6WbG2ZYdCjObA9P6KKY9/b0f59rKrPIkStxpyIh/NncDp242+VgBmvd
         /KJlqgHxBNhrr4y/RVZyDMBjYn2aU3JMZa0J7xmyhVvBe9crJTCIjg39wsbnTFBgl89J
         nWn7w83o/TFQhoXB313BkvWEgzVRDFgSWDBGg83JL/JbTKtsOexswDFm/K2wTqOyRCCv
         MqIzh6tZuacUZEokI5s1Wwx9/4kNP8vr4/csavrGM9gm61kIO19ZHdpjQXH9ueNQNxXN
         /lgw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=TiB2F/QWVAlfD2IsKZXhLFiZ75QHq0FFdQ/Lizli0g0=;
        b=HvlDIeTB2pUtLPQi81tqiGtIelknn6BgqfX6mqTV1AHBdgNWgfAdi3x/A4x85vL+ek
         D+9Y/+bA03Oj4+C8MTjknNLyiMnlQe/VD9qen+u/nFe2Jbw34UojuYa6wN3poZIBVy8o
         Mr2qrT/n9K6gYdsbx3bRbxj0u/wpmoNV9sqoInL4yhEoYJElVEWVqk/AuoxBwSdKgiwb
         pmHOlpGm8ITHXe53a503ASAuvKjwHGn5g94/CJwlf4iUOx9Q3Rnr+XQgEHDgh9YLV2AM
         R51OG1GqhBTqPotpF/ZZ09BcHL2QIp4QS+GvP+spwD5GSZPZ7lMkHuVbWf01M+01MMSp
         pjMA==
X-Gm-Message-State: AOAM532iHADdkzDIlHiIAbUlrVU5ji/c3ERAr/mlJgl5Dlb0e36XyMoW
	zfdM2H3pK6PzjytWSEHbhKKtDBrWUTSwIj9oLI4=
X-Google-Smtp-Source: ABdhPJznfNt7s53g5IQ7FIyvHkE8ul5p+FofIGYKbfcP6yJUlWHy0cPXR1+relJpFBG9qlDzAUw/X9v5sYr7YLru7eo=
X-Received: by 2002:a17:906:9452:: with SMTP id z18mr12709453ejx.389.1607933118217;
 Mon, 14 Dec 2020 00:05:18 -0800 (PST)
MIME-Version: 1.0
References: <20201210134752.780923-1-marcandre.lureau@redhat.com> <20201210134752.780923-14-marcandre.lureau@redhat.com>
In-Reply-To: <20201210134752.780923-14-marcandre.lureau@redhat.com>
From: =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@gmail.com>
Date: Mon, 14 Dec 2020 12:05:05 +0400
Message-ID: <CAJ+F1C+_CE5uaQ7QMkaca498WFcRWSb+zez2zwi_BqUMCTK2zA@mail.gmail.com>
Subject: Re: [PATCH v3 13/13] compiler.h: remove QEMU_GNUC_PREREQ
To: QEMU <qemu-devel@nongnu.org>
Cc: Peter Maydell <peter.maydell@linaro.org>, Stefano Stabellini <sstabellini@kernel.org>, 
	Paul Durrant <paul@xen.org>, Richard Henderson <richard.henderson@linaro.org>, 
	Laurent Vivier <laurent@vivier.eu>, "Dr. David Alan Gilbert" <dgilbert@redhat.com>, 
	"open list:ARM" <qemu-arm@nongnu.org>, Gerd Hoffmann <kraxel@redhat.com>, 
	Stefan Hajnoczi <stefanha@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>, 
	Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org, 
	=?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>
Content-Type: multipart/alternative; boundary="0000000000005583ba05b6681a42"

--0000000000005583ba05b6681a42
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Hi

On Thu, Dec 10, 2020 at 6:07 PM <marcandre.lureau@redhat.com> wrote:

> From: Marc-André Lureau <marcandre.lureau@redhat.com>
>
> When needed, the G_GNUC_CHECK_VERSION() glib macro can be used instead.
>
> Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
> ---
>  include/qemu/compiler.h    | 11 -----------
>  scripts/cocci-macro-file.h |  1 -
>  2 files changed, 12 deletions(-)
>
> diff --git a/include/qemu/compiler.h b/include/qemu/compiler.h
> index 5e6cf2c8e8..1b9e58e82b 100644
> --- a/include/qemu/compiler.h
> +++ b/include/qemu/compiler.h
> @@ -11,17 +11,6 @@
>  #define QEMU_STATIC_ANALYSIS 1
>  #endif
>
> -/*----------------------------------------------------------------------------
> -| The macro QEMU_GNUC_PREREQ tests for minimum version of the GNU C compiler.
> -| The code is a copy of SOFTFLOAT_GNUC_PREREQ, see softfloat-macros.h.
> -*----------------------------------------------------------------------------*/
> -#if defined(__GNUC__) && defined(__GNUC_MINOR__)
> -# define QEMU_GNUC_PREREQ(maj, min) \
> -         ((__GNUC__ << 16) + __GNUC_MINOR__ >= ((maj) << 16) + (min))
> -#else
> -# define QEMU_GNUC_PREREQ(maj, min) 0
> -#endif
> -
>  #define QEMU_NORETURN __attribute__ ((__noreturn__))
>
>  #define QEMU_WARN_UNUSED_RESULT __attribute__((warn_unused_result))
> diff --git a/scripts/cocci-macro-file.h b/scripts/cocci-macro-file.h
> index c6bbc05ba3..20eea6b708 100644
> --- a/scripts/cocci-macro-file.h
> +++ b/scripts/cocci-macro-file.h
> @@ -19,7 +19,6 @@
>   */
>
>  /* From qemu/compiler.h */
> -#define QEMU_GNUC_PREREQ(maj, min) 1
>  #define QEMU_NORETURN __attribute__ ((__noreturn__))
>  #define QEMU_WARN_UNUSED_RESULT __attribute__((warn_unused_result))
>  #define QEMU_SENTINEL __attribute__((sentinel))
>

ping, thanks

-- 
Marc-André Lureau



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 08:27:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 08:27:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51953.90892 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kojCC-0003dd-Q5; Mon, 14 Dec 2020 08:27:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51953.90892; Mon, 14 Dec 2020 08:27:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kojCC-0003dW-Lo; Mon, 14 Dec 2020 08:27:04 +0000
Received: by outflank-mailman (input) for mailman id 51953;
 Mon, 14 Dec 2020 08:27:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kojCA-0003dO-Qb; Mon, 14 Dec 2020 08:27:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kojCA-0003Zz-E9; Mon, 14 Dec 2020 08:27:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kojCA-000841-06; Mon, 14 Dec 2020 08:27:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kojC9-0007XF-Vr; Mon, 14 Dec 2020 08:27:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/Cjt4Z/aILWBZmHpCrbwTUWorSlzLxdqh+budnAlSzY=; b=gpgpbfGJuxxy6fPI6yougXzk3y
	CII0tSKiECvd9X+wB9qDGzQDZ++C1JMhTzKZ7vHHUTKrDuAl00N0CotxAgQk7N5VY+sLul3Grd+Ro
	8rToQK84TDoslSknx1QDfVBteu/sf1gTa3LpEThYEJet202CU9o4YMeQ6H1YkKaJPGyc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157509-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157509: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate/x10:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ec6f5e0e5ca0764b4bc522c9f9d5abf876a0e3e3
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 14 Dec 2020 08:27:01 +0000

flight 157509 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157509/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 19 guest-localmigrate/x10  fail REGR. vs. 152332
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl          10 host-ping-check-xen      fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                ec6f5e0e5ca0764b4bc522c9f9d5abf876a0e3e3
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  135 days
Failing since        152366  2020-08-01 20:49:34 Z  134 days  234 attempts
Testing same since   157509  2020-12-13 21:08:53 Z    0 days    1 attempts

------------------------------------------------------------
3694 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 707297 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 08:27:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 08:27:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51959.90907 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kojCr-0003kL-Ac; Mon, 14 Dec 2020 08:27:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51959.90907; Mon, 14 Dec 2020 08:27:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kojCr-0003kE-7R; Mon, 14 Dec 2020 08:27:45 +0000
Received: by outflank-mailman (input) for mailman id 51959;
 Mon, 14 Dec 2020 08:27:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MGmN=FS=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kojCq-0003js-3y
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 08:27:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8f003da8-f11f-4198-96ca-d6bc9ab4c1dc;
 Mon, 14 Dec 2020 08:27:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8E409B2C5;
 Mon, 14 Dec 2020 08:27:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8f003da8-f11f-4198-96ca-d6bc9ab4c1dc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607934458; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=udJGY57Ul0HuXMWkcLcPHpsVPrbRTIZSixFgD7KUKHE=;
	b=mZOgQUeqw93i9TXXYq7ggqBjB+5Wrkx3YuprLXC9Afe3c+8Hy3E0Oe0yhe6RrN630YoPof
	neTvjkXA6XoWdLm23wefprTavvqo+iSJPLsFtzRCxdyAif/DJQEcZiK0V4F7NMSrR7AY+l
	IDMeAsoE+J2q6kXAoA01kJPOXD+/05c=
Subject: Re: [PATCH] Revert "x86/mm: drop guest_get_eff_l1e()"
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Manuel Bouyer <bouyer@antioche.eu.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20201211141615.12489-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <454ec720-b823-c2aa-7de4-84c14db2b96f@suse.com>
Date: Mon, 14 Dec 2020 09:27:33 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201211141615.12489-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11.12.2020 15:16, Andrew Cooper wrote:
> This reverts commit 9ff9705647646aa937b5f5c1426a64c69a62b3bd.
> 
> The change is only correct in the original context of XSA-286, where Xen's use
> of the linear pagetables was dropped.  However, performance problems
> interfered with that plan, and XSA-286 was fixed differently.
> 
> This broke Xen's lazy faulting of the LDT for 64bit PV guests when an access
> was first encountered in user context.  Xen would proceed to read the
> registered LDT virtual address out of the user pagetables, not the kernel
> pagetables.
> 
> Given the nature of the bug, it would have also interfered with the IO
> permission bitmap functionality of userspace, which similarly needs to read
> data using the kernel pagetables.

This paragraph wants dropping afaict - guest_io_okay() has its own
explicit calls to toggle_guest_pt(), and hence is unaffected by the
bug here (even though page table reading is indeed involved there).
Then ...

> Reported-by: Manuel Bouyer <bouyer@antioche.eu.org>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Tested-by: Manuel Bouyer <bouyer@antioche.eu.org>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

I was wondering however whether we really want a full revert. I've
locally made the change below as an alternative.

Jan

x86/PV: guest_get_eff_kern_l1e() may still need to switch page tables

While indeed unnecessary for pv_ro_page_fault(), pv_map_ldt_shadow_page()
may run when guest user mode is active, and hence may need to switch to
the kernel page tables in order to retrieve an LDT page mapping.

Fixes: 9ff970564764 ("x86/mm: drop guest_get_eff_l1e()")
Reported-by: Manuel Bouyer <bouyer@antioche.eu.org>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
This is the alternative to fully reverting the offending commit.

I've also been considering dropping the paging-mode-translate ASSERT(),
now that we always have translate == external.

--- a/xen/arch/x86/pv/mm.h
+++ b/xen/arch/x86/pv/mm.h
@@ -11,10 +11,14 @@ int new_guest_cr3(mfn_t mfn);
  */
 static inline l1_pgentry_t guest_get_eff_kern_l1e(unsigned long linear)
 {
+    struct vcpu *curr = current;
     l1_pgentry_t l1e;
 
-    ASSERT(!paging_mode_translate(current->domain));
-    ASSERT(!paging_mode_external(current->domain));
+    ASSERT(!paging_mode_translate(curr->domain));
+    ASSERT(!paging_mode_external(curr->domain));
+
+    if ( !(curr->arch.flags & TF_kernel_mode) )
+        toggle_guest_pt(curr);
 
     if ( unlikely(!__addr_ok(linear)) ||
          __copy_from_user(&l1e,
@@ -22,6 +26,9 @@ static inline l1_pgentry_t guest_get_eff
                           sizeof(l1_pgentry_t)) )
         l1e = l1e_empty();
 
+    if ( !(curr->arch.flags & TF_kernel_mode) )
+        toggle_guest_pt(curr);
+
     return l1e;
 }
 


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 08:52:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 08:52:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51976.90923 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kojaX-0006e1-FD; Mon, 14 Dec 2020 08:52:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51976.90923; Mon, 14 Dec 2020 08:52:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kojaX-0006du-Bj; Mon, 14 Dec 2020 08:52:13 +0000
Received: by outflank-mailman (input) for mailman id 51976;
 Mon, 14 Dec 2020 08:52:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MGmN=FS=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kojaV-0006dp-Sz
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 08:52:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e84d4b78-b7f9-4168-841e-7944cd537383;
 Mon, 14 Dec 2020 08:52:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 51583AF31;
 Mon, 14 Dec 2020 08:52:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e84d4b78-b7f9-4168-841e-7944cd537383
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607935929; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=tRuCku03BruteHIQNSgq95gXrnU8F7jNNIn6OX3RNQk=;
	b=I0AlMAOrm7Y8fuM6zufXNSDl+VJtMpB46B9cr9A0qBiZ3z7aesnc6N9nzRMQgLzRjEUhRT
	nYqQil0681Vt9EVMf/o4Fj7okLK/TdGtkSUhhW1Jr9WD+sJwVvvoeGANiSPlvgKQHBvG5m
	UQqokTD7A3qk9eABeOVZ7SaJBnMOrxM=
Subject: Re: [XEN PATCH v1 1/1] Invalidate cache for cpus affinitized to the
 domain
To: Harsha Shamsundara Havanur <havanur@amazon.com>
References: <cover.1607686878.git.havanur@amazon.com>
 <aad47c43b7cd7a391492b8be7b881cd37e9764c7.1607686878.git.havanur@amazon.com>
Cc: xen-devel@lists.xenproject.org
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <149f7f6e-0ff4-affc-b65d-0f880fa27b13@suse.com>
Date: Mon, 14 Dec 2020 09:52:10 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <aad47c43b7cd7a391492b8be7b881cd37e9764c7.1607686878.git.havanur@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11.12.2020 12:44, Harsha Shamsundara Havanur wrote:
> An HVM domain flushes the cache on all the CPUs using the
> `flush_all` macro, which uses cpu_online_map, during
> i) creation of a new domain,
> ii) when a device-model op is performed, and
> iii) when the domain is destroyed.
> 
> This triggers an IPI on all the CPUs, thus affecting other
> domains that are pinned to different pCPUs. This patch
> restricts the cache flush to the set of CPUs affinitized to
> the current domain, using `domain->dirty_cpumask`.

But then you need to effect cache flushing when a CPU gets
taken out of domain->dirty_cpumask. I don't think you/we want
to do that.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 09:03:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 09:03:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51984.90935 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kojlV-0007lM-IW; Mon, 14 Dec 2020 09:03:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51984.90935; Mon, 14 Dec 2020 09:03:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kojlV-0007lF-Dn; Mon, 14 Dec 2020 09:03:33 +0000
Received: by outflank-mailman (input) for mailman id 51984;
 Mon, 14 Dec 2020 09:03:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MGmN=FS=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kojlT-0007lA-JP
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 09:03:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8b7e670b-fad6-473f-b56a-93f25bab5c0f;
 Mon, 14 Dec 2020 09:03:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 09240AC10;
 Mon, 14 Dec 2020 09:03:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8b7e670b-fad6-473f-b56a-93f25bab5c0f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607936610; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0NgiftfljNLjlF2FZgtFopk9t52PfwgGAjN3ponFFHQ=;
	b=PKsWk1tWU3PV/r61wFreN0yf8jSgRvOd5srjlYx3/w1HGoIODIYpNwjSRfC6XzTcs76M18
	0HRmLVYZXu+R+OR1LqR+sUySL4DGVOaEFC+Zd8O45MiDqqllmiZQzkvtwxRotFyy5Tzmb1
	mJZ6oBesDP4OIyDRYdptOk0THP9wgOY=
Subject: Re: [PATCH v4 1/3] xen/arm: add support for
 run_in_exception_handler()
To: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201214075615.25038-1-jgross@suse.com>
 <20201214075615.25038-2-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <74be05c2-375e-6b7e-ef87-31d4f7338a03@suse.com>
Date: Mon, 14 Dec 2020 10:03:31 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201214075615.25038-2-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 14.12.2020 08:56, Juergen Gross wrote:
> Add support to run a function in an exception handler for Arm. Do it
> the same way as on x86 via a bug_frame.
> 
> Unfortunately inline assembly on Arm seems to be less capable than on
> x86, leading to functions called via run_in_exception_handler() having
> to be globally visible.

Could you expand on this? I don't understand what the relevant
difference is, from just looking at the changes.

> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V4:
> - new patch
> 
> I have verified the created bugframe is correct by inspecting the
> created binary.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  xen/arch/arm/traps.c       | 10 +++++++++-
>  xen/drivers/char/ns16550.c |  3 ++-
>  xen/include/asm-arm/bug.h  | 32 +++++++++++++++++++++-----------
>  3 files changed, 32 insertions(+), 13 deletions(-)

Aiui you also need to modify xen.lds.S to cover the new (or really
the last renamed) section.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 09:10:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 09:10:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.51990.90947 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kojrZ-00014o-C6; Mon, 14 Dec 2020 09:09:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 51990.90947; Mon, 14 Dec 2020 09:09:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kojrZ-00014f-4d; Mon, 14 Dec 2020 09:09:49 +0000
Received: by outflank-mailman (input) for mailman id 51990;
 Mon, 14 Dec 2020 09:09:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MGmN=FS=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kojrY-00014a-7b
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 09:09:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7cc5a363-7e46-404b-80da-ccf278866996;
 Mon, 14 Dec 2020 09:09:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D642AAC10;
 Mon, 14 Dec 2020 09:09:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7cc5a363-7e46-404b-80da-ccf278866996
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607936986; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=slZrT1atWRvMCFjihXwUTflP2Ple2Ww37AqLliJnGWc=;
	b=uOPmyjFd++NixAg8Oklvu+Unr9Lu7u5C9X9OQG9FU1A9FjC7lmF2WCLNeM9x/bLhImk1G2
	b+f54qzGG48SpVggDPOtOwqDEmWPLQQH2f08x5lnDtRcE3DbaUYbiCfaZwl7J4WW1hP6gs
	0pb89zN6/aqflOcrxVHt6W9N5xdhs7Q=
Subject: Re: [PATCH v4 0/3] xen: add support for automatic debug key actions
 in case of crash
To: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201214075615.25038-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <247d9c9c-5ab1-c733-6960-e406040c28ac@suse.com>
Date: Mon, 14 Dec 2020 10:09:47 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201214075615.25038-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 14.12.2020 08:56, Juergen Gross wrote:
> Patch 2 opens up more potential for simplification: in theory there is
> no longer any need to call any key handler with the regs parameter,
> allowing the same prototype to be used for all handlers. The downside
> would be an additional irq frame on the stack for the dump_registers()
> and do_debug_key() handlers.

This isn't the only downside, is it? We'd then also need to be able
to (sufficiently cleanly) unwind through the new frame to reach the
prior one, in order to avoid logging less reliable information. Plus
we'd need to decompose the prior frame as well, to avoid logging data
that is less easy to consume.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 09:15:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 09:15:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52001.90967 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kojx4-0003HY-0q; Mon, 14 Dec 2020 09:15:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52001.90967; Mon, 14 Dec 2020 09:15:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kojx3-0003HR-TY; Mon, 14 Dec 2020 09:15:29 +0000
Received: by outflank-mailman (input) for mailman id 52001;
 Mon, 14 Dec 2020 09:15:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XC/h=FS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kojx2-0003HM-DK
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 09:15:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8862acc8-6dfc-4097-8db8-47eb01a88409;
 Mon, 14 Dec 2020 09:15:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0ACD6AC10;
 Mon, 14 Dec 2020 09:15:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8862acc8-6dfc-4097-8db8-47eb01a88409
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607937326; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=UHGCfTLOPjCY/McK9x/0re8zCcMcni+TF2YqgBzEjzM=;
	b=kHzQTFRHzG6vR5qnYh9GLbpHpu5SBkygcCWLjK24nU7zaygi/VnLcjjHSmDwiQYO8MvWGy
	zCTnTviqxQR2fg7B5U1VYKABf7qyS9QUxfqQVmQ1qAL6vD8pizEtMnBd/X0CgkLvZyzqED
	46SGBwP3rOUiR/bo2xD7QwBQGsUU9kg=
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201214075615.25038-1-jgross@suse.com>
 <20201214075615.25038-2-jgross@suse.com>
 <74be05c2-375e-6b7e-ef87-31d4f7338a03@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v4 1/3] xen/arm: add support for
 run_in_exception_handler()
Message-ID: <9a6a397d-2c4c-acdc-d3ff-b286e522c9bc@suse.com>
Date: Mon, 14 Dec 2020 10:15:25 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <74be05c2-375e-6b7e-ef87-31d4f7338a03@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="jVt40A1CgEqV3Al9qu9o3BMc3dvWCQbTa"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--jVt40A1CgEqV3Al9qu9o3BMc3dvWCQbTa
Content-Type: multipart/mixed; boundary="muiStdmww0KAaLVZdw4RuhQdbvYxTrpjZ";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <9a6a397d-2c4c-acdc-d3ff-b286e522c9bc@suse.com>
Subject: Re: [PATCH v4 1/3] xen/arm: add support for
 run_in_exception_handler()
References: <20201214075615.25038-1-jgross@suse.com>
 <20201214075615.25038-2-jgross@suse.com>
 <74be05c2-375e-6b7e-ef87-31d4f7338a03@suse.com>
In-Reply-To: <74be05c2-375e-6b7e-ef87-31d4f7338a03@suse.com>

--muiStdmww0KAaLVZdw4RuhQdbvYxTrpjZ
Content-Type: multipart/mixed;
 boundary="------------79B6D6EC00D52FF6048695D4"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------79B6D6EC00D52FF6048695D4
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 14.12.20 10:03, Jan Beulich wrote:
> On 14.12.2020 08:56, Juergen Gross wrote:
>> Add support to run a function in an exception handler for Arm. Do it
>> the same way as on x86 via a bug_frame.
>>
>> Unfortunately inline assembly on Arm seems to be less capable than on
>> x86, leading to functions called via run_in_exception_handler() having
>> to be globally visible.
>
> Could you expand on this? I don't understand what the relevant
> difference is, from just looking at the changes.

The problem seems to be that a static symbol referenced only from the
inline asm string doesn't silence the error about that static symbol
not being used. On x86 the bug_frame is constructed using the %c
operand modifier, which is not supported for Arm (at least the gcc 7
used in my compile test complained), but which seems to be enough for
gcc on x86 not to complain.

>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> V4:
>> - new patch
>>
>> I have verified the created bugframe is correct by inspecting the
>> created binary.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   xen/arch/arm/traps.c       | 10 +++++++++-
>>   xen/drivers/char/ns16550.c |  3 ++-
>>   xen/include/asm-arm/bug.h  | 32 +++++++++++++++++++++-----------
>>   3 files changed, 32 insertions(+), 13 deletions(-)
>
> Aiui you also need to modify xen.lds.S to cover the new (or really
> the last renamed) section.

Oh, right. I thought of that before, but forgot again. Thanks for the
reminder.


Juergen

--------------79B6D6EC00D52FF6048695D4
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------79B6D6EC00D52FF6048695D4--

--muiStdmww0KAaLVZdw4RuhQdbvYxTrpjZ--

--jVt40A1CgEqV3Al9qu9o3BMc3dvWCQbTa
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB4BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/XLS0FAwAAAAAACgkQsN6d1ii/Ey+7
fwf46shmySkTXfiaH6z7+fAhQCwVWfUgTxOX15W5Kh1bbzWTre79HrVLX43wYlPl05Asfp7P2kvg
x6CJHaZK7aRrQp+SF5lvyYM1DOkl33B9J8l3w2cb64G5+IgQe7yEaIB/hABYUzcl+fVcmxRtlfaI
IrCXR8Wcy0R3vB0KR77wubiY+J5W5FbC5BNSAbM3UjKKEbqfhUn4u9LfSJrxsyARnqK9S4spwZ05
fE1F8+hvG3VWCxz2DhEBGW8O7OzitEwYaXRkCEPTxGlZhVp/9rajjVmT3vS+wuSYLzfeSp11/13j
6GZ8RQVdA+D+SL4bQgLzFccBzFO2yKSr4FqWsRAM
=VASE
-----END PGP SIGNATURE-----

--jVt40A1CgEqV3Al9qu9o3BMc3dvWCQbTa--


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 09:16:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 09:16:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52006.90979 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kojy3-0003OJ-Ag; Mon, 14 Dec 2020 09:16:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52006.90979; Mon, 14 Dec 2020 09:16:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kojy3-0003OC-7i; Mon, 14 Dec 2020 09:16:31 +0000
Received: by outflank-mailman (input) for mailman id 52006;
 Mon, 14 Dec 2020 09:16:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MGmN=FS=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kojy1-0003O6-NM
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 09:16:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e491a757-f6ee-48a3-b21b-d0b2d6d5f93a;
 Mon, 14 Dec 2020 09:16:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 235BAAC90;
 Mon, 14 Dec 2020 09:16:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e491a757-f6ee-48a3-b21b-d0b2d6d5f93a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607937388; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JUTLxvfNb2x61zLlroHHPCUhp4WT7na87ACH224Q9a8=;
	b=LkSAjiB6wvKctiXwWrMQU7J5DpUOq19WWCPX+ba0ihQajEqPAmCv9+ofpvcU0lKEoyAX2X
	7hCclX27wUQV4OiJgDYRceJ1rqFyDNOHmXYbkKOLcDde7M24be8k0k5z4n53rnhJpDWxCt
	iIVN8NiG4+rtI/5VDaGXyMvKKBF92mU=
Subject: Re: [PATCH v4 3/3] xen: add support for automatic debug key actions
 in case of crash
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201214075615.25038-1-jgross@suse.com>
 <20201214075615.25038-4-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d1b33e7f-2dd8-9d44-62c9-86ec46d919fe@suse.com>
Date: Mon, 14 Dec 2020 10:16:29 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201214075615.25038-4-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 14.12.2020 08:56, Juergen Gross wrote:
> @@ -519,6 +521,59 @@ void __init initialize_keytable(void)
>      }
>  }
>  
> +#define CRASHACTION_SIZE  32
> +static char crash_debug_panic[CRASHACTION_SIZE];
> +string_runtime_param("crash-debug-panic", crash_debug_panic);
> +static char crash_debug_hwdom[CRASHACTION_SIZE];
> +string_runtime_param("crash-debug-hwdom", crash_debug_hwdom);
> +static char crash_debug_watchdog[CRASHACTION_SIZE];
> +string_runtime_param("crash-debug-watchdog", crash_debug_watchdog);
> +#ifdef CONFIG_KEXEC
> +static char crash_debug_kexeccmd[CRASHACTION_SIZE];
> +string_runtime_param("crash-debug-kexeccmd", crash_debug_kexeccmd);
> +#else
> +#define crash_debug_kexeccmd NULL
> +#endif
> +static char crash_debug_debugkey[CRASHACTION_SIZE];
> +string_runtime_param("crash-debug-debugkey", crash_debug_debugkey);
> +
> +void keyhandler_crash_action(enum crash_reason reason)
> +{
> +    static const char *const crash_action[] = {
> +        [CRASHREASON_PANIC] = crash_debug_panic,
> +        [CRASHREASON_HWDOM] = crash_debug_hwdom,
> +        [CRASHREASON_WATCHDOG] = crash_debug_watchdog,
> +        [CRASHREASON_KEXECCMD] = crash_debug_kexeccmd,
> +        [CRASHREASON_DEBUGKEY] = crash_debug_debugkey,
> +    };
> +    static bool ignore;
> +    const char *action;
> +
> +    /* Some handlers are not functional too early. */
> +    if ( system_state < SYS_STATE_smp_boot )
> +        return;
> +
> +    /* Avoid recursion. */
> +    if ( ignore )
> +        return;
> +    ignore = true;
> +
> +    if ( (unsigned int)reason >= ARRAY_SIZE(crash_action) )
> +        return;
> +    action = crash_action[reason];
> +    if ( !action )
> +        return;

If we consider either of the last two "return"s to possibly be
taken, I think the "ignore" logic wants to live below them, not
above, so that a recursive invocation which turns out to be a
"good" one still produces output. Then, as before,
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 09:19:59 2020
Subject: Re: [PATCH v4 3/3] xen: add support for automatic debug key actions
 in case of crash
From: Jürgen Groß <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Date: Mon, 14 Dec 2020 10:19:54 +0100
Message-ID: <858755cd-ea76-c537-00ba-8670465871be@suse.com>
In-Reply-To: <d1b33e7f-2dd8-9d44-62c9-86ec46d919fe@suse.com>
References: <20201214075615.25038-1-jgross@suse.com>
 <20201214075615.25038-4-jgross@suse.com>
 <d1b33e7f-2dd8-9d44-62c9-86ec46d919fe@suse.com>

On 14.12.20 10:16, Jan Beulich wrote:
> On 14.12.2020 08:56, Juergen Gross wrote:
>> @@ -519,6 +521,59 @@ void __init initialize_keytable(void)
>>       }
>>   }
>>
>> +#define CRASHACTION_SIZE  32
>> +static char crash_debug_panic[CRASHACTION_SIZE];
>> +string_runtime_param("crash-debug-panic", crash_debug_panic);
>> +static char crash_debug_hwdom[CRASHACTION_SIZE];
>> +string_runtime_param("crash-debug-hwdom", crash_debug_hwdom);
>> +static char crash_debug_watchdog[CRASHACTION_SIZE];
>> +string_runtime_param("crash-debug-watchdog", crash_debug_watchdog);
>> +#ifdef CONFIG_KEXEC
>> +static char crash_debug_kexeccmd[CRASHACTION_SIZE];
>> +string_runtime_param("crash-debug-kexeccmd", crash_debug_kexeccmd);
>> +#else
>> +#define crash_debug_kexeccmd NULL
>> +#endif
>> +static char crash_debug_debugkey[CRASHACTION_SIZE];
>> +string_runtime_param("crash-debug-debugkey", crash_debug_debugkey);
>> +
>> +void keyhandler_crash_action(enum crash_reason reason)
>> +{
>> +    static const char *const crash_action[] = {
>> +        [CRASHREASON_PANIC] = crash_debug_panic,
>> +        [CRASHREASON_HWDOM] = crash_debug_hwdom,
>> +        [CRASHREASON_WATCHDOG] = crash_debug_watchdog,
>> +        [CRASHREASON_KEXECCMD] = crash_debug_kexeccmd,
>> +        [CRASHREASON_DEBUGKEY] = crash_debug_debugkey,
>> +    };
>> +    static bool ignore;
>> +    const char *action;
>> +
>> +    /* Some handlers are not functional too early. */
>> +    if ( system_state < SYS_STATE_smp_boot )
>> +        return;
>> +
>> +    /* Avoid recursion. */
>> +    if ( ignore )
>> +        return;
>> +    ignore = true;
>> +
>> +    if ( (unsigned int)reason >= ARRAY_SIZE(crash_action) )
>> +        return;
>> +    action = crash_action[reason];
>> +    if ( !action )
>> +        return;
>
> If we consider either of the last two "return"s to possibly be
> taken, I think the "ignore" logic wants to live below them, not
> above, avoiding no output at all when a recursive invocation
> turns out to be a "good" one. Then, as before,
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Fine with me.


Juergen
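For illustration, the reordering Jan asks for can be sketched in a minimal, self-contained form (the enum values, table contents, and function name below are stand-ins for this sketch, not Xen's actual code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Stand-ins for the real crash reasons and configured actions. */
enum crash_reason { CRASHREASON_PANIC, CRASHREASON_DEBUGKEY };

static const char *const crash_action[] = {
    [CRASHREASON_PANIC]    = "0+d",
    [CRASHREASON_DEBUGKEY] = NULL,   /* no action configured */
};

const char *crash_action_lookup(enum crash_reason reason)
{
    static bool ignore;
    const char *action;

    /* Validity checks first: bailing out here must not arm the guard,
     * or a later recursive invocation that *is* valid would produce no
     * output at all. */
    if ( (unsigned int)reason >= ARRAY_SIZE(crash_action) )
        return NULL;
    action = crash_action[reason];
    if ( !action )
        return NULL;

    /* Only a genuinely actionable invocation arms the recursion guard. */
    if ( ignore )
        return NULL;
    ignore = true;

    return action;
}
```

With this ordering, a call that fails the validity checks never consumes the guard, so a subsequent "good" recursive invocation still gets its action.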




From xen-devel-bounces@lists.xenproject.org Mon Dec 14 09:21:18 2020
Subject: Re: [PATCH v4 0/3] xen: add support for automatic debug key actions
 in case of crash
From: Jürgen Groß <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Date: Mon, 14 Dec 2020 10:21:09 +0100
Message-ID: <dd39f92b-e0b0-135b-faf2-379c21652df3@suse.com>
In-Reply-To: <247d9c9c-5ab1-c733-6960-e406040c28ac@suse.com>
References: <20201214075615.25038-1-jgross@suse.com>
 <247d9c9c-5ab1-c733-6960-e406040c28ac@suse.com>


On 14.12.20 10:09, Jan Beulich wrote:
> On 14.12.2020 08:56, Juergen Gross wrote:
>> Patch 2 opens up more potential for simplification: in theory there is
>> no need any more to call any key handler with the regs parameter,
>> allowing to use the same prototype for all handlers. The downside would
>> be to have an additional irq frame on the stack for the dump_registers()
>> and the do_debug_key() handlers.
>
> This isn't the only downside, is it? We'd then also need to be able
> to (sufficiently cleanly) unwind through the new frame to reach the
> prior one, in order to avoid logging less reliable information. Plus
> decompose the prior frame as well to avoid logging less easy to
> consume data.

Yes, this was implied by the "additional irq frame on the stack".


Juergen
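As a hedged illustration of the "same prototype for all handlers" idea (the struct, the stash, and every name below are invented for this sketch; they are not Xen's actual interface): a handler wanting register state could fetch the interrupted context from a stashed pointer instead of taking it as a parameter:

```c
#include <stddef.h>

/* Invented stand-ins: a register frame and a stashed pointer to it. */
struct cpu_user_regs { unsigned long ip; };

static struct cpu_user_regs *stashed_regs;

/* Unified handler prototype: no regs parameter. */
typedef void keyhandler_fn(unsigned char key);

unsigned long last_dumped_ip;

void dump_registers(unsigned char key)
{
    (void)key;
    /* A handler needing register state reads the stashed frame. */
    if ( stashed_regs )
        last_dumped_ip = stashed_regs->ip;
}

void handle_keypress(unsigned char key, struct cpu_user_regs *regs,
                     keyhandler_fn *fn)
{
    struct cpu_user_regs *old = stashed_regs;

    stashed_regs = regs;    /* stash for the duration of the handler */
    fn(key);
    stashed_regs = old;
}
```

The trade-offs the thread discusses remain untouched by this sketch: handlers such as dump_registers() that want a genuine exception frame would still need one generated for them, with the unwinding consequences Jan describes.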



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 09:22:51 2020
Subject: Re: [PATCH v4 1/3] xen/arm: add support for
 run_in_exception_handler()
From: Jan Beulich <jbeulich@suse.com>
To: Jürgen Groß <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Date: Mon, 14 Dec 2020 10:22:49 +0100
Message-ID: <8c86ccf6-0384-f91f-1fd5-6a179158d91e@suse.com>
In-Reply-To: <9a6a397d-2c4c-acdc-d3ff-b286e522c9bc@suse.com>
References: <20201214075615.25038-1-jgross@suse.com>
 <20201214075615.25038-2-jgross@suse.com>
 <74be05c2-375e-6b7e-ef87-31d4f7338a03@suse.com>
 <9a6a397d-2c4c-acdc-d3ff-b286e522c9bc@suse.com>

On 14.12.2020 10:15, Jürgen Groß wrote:
> On 14.12.20 10:03, Jan Beulich wrote:
>> On 14.12.2020 08:56, Juergen Gross wrote:
>>> Add support to run a function in an exception handler for Arm. Do it
>>> the same way as on x86 via a bug_frame.
>>>
>>> Unfortunately inline assembly on Arm seems to be less capable than on
>>> x86, leading to functions called via run_in_exception_handler() having
>>> to be globally visible.
>>
>> Could you extend on this? I don't understand what the relevant
>> difference is, from just looking at the changes.
> 
> The problem seems to be that referencing a static symbol from the inline
> asm is not enough to silence the error that this static symbol isn't
> being used. On x86 the bug_frame is constructed using the %c modifier,
> which is not supported for Arm (at least gcc 7 used in my compile test
> complained about it), but seems to be enough for gcc on x86 not to
> complain.

But this isn't tied to %c not working on older gcc, is it? It looks
instead to be tied to you not specifying the function pointer as an
input in the asm(). The compiler would know the symbol is referenced
even if the input wasn't used at all in any of the asm()'s operands
(but of course it would be better to use the operand once you have
it, if it can be used in some way despite %c not being possible to
use).

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 09:24:23 2020
Subject: Re: [PATCH v4 0/3] xen: add support for automatic debug key actions
 in case of crash
From: Jan Beulich <jbeulich@suse.com>
To: Jürgen Groß <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Date: Mon, 14 Dec 2020 10:24:21 +0100
Message-ID: <9de26f40-87e9-aab6-01d7-720217e23721@suse.com>
In-Reply-To: <dd39f92b-e0b0-135b-faf2-379c21652df3@suse.com>
References: <20201214075615.25038-1-jgross@suse.com>
 <247d9c9c-5ab1-c733-6960-e406040c28ac@suse.com>
 <dd39f92b-e0b0-135b-faf2-379c21652df3@suse.com>

On 14.12.2020 10:21, Jürgen Groß wrote:
> On 14.12.20 10:09, Jan Beulich wrote:
>> On 14.12.2020 08:56, Juergen Gross wrote:
>>> Patch 2 opens up more potential for simplification: in theory there is
>>> no need any more to call any key handler with the regs parameter,
>>> allowing to use the same prototype for all handlers. The downside would
>>> be to have an additional irq frame on the stack for the dump_registers()
>>> and the do_debug_key() handlers.
>>
>> This isn't the only downside, is it? We'd then also need to be able
>> to (sufficiently cleanly) unwind through the new frame to reach the
>> prior one, in order to avoid logging less reliable information. Plus
>> decompose the prior frame as well to avoid logging less easy to
>> consume data.
> 
> Yes, this was implied by the "additional irq frame on the stack".

Oh, okay - I read it as just referring to the possible concern of
more of the not overly large stack to get used.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 09:24:57 2020
Subject: Re: [PATCH v4 1/3] xen/arm: add support for
 run_in_exception_handler()
From: Jürgen Groß <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Date: Mon, 14 Dec 2020 10:24:53 +0100
Message-ID: <1935b802-e121-4678-4653-a5503def7f72@suse.com>
In-Reply-To: <8c86ccf6-0384-f91f-1fd5-6a179158d91e@suse.com>
References: <20201214075615.25038-1-jgross@suse.com>
 <20201214075615.25038-2-jgross@suse.com>
 <74be05c2-375e-6b7e-ef87-31d4f7338a03@suse.com>
 <9a6a397d-2c4c-acdc-d3ff-b286e522c9bc@suse.com>
 <8c86ccf6-0384-f91f-1fd5-6a179158d91e@suse.com>

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--Dc9aT2nPf0PPwCxdP7D24kIN4iq2WLueu
Content-Type: multipart/mixed; boundary="xmopCqjAWIlTx21Fxnnl8RDjI5Kw02Ea7";
 protected-headers="v1"

--xmopCqjAWIlTx21Fxnnl8RDjI5Kw02Ea7
Content-Type: multipart/mixed;
 boundary="------------73D03E1E121A34B35F433949"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------73D03E1E121A34B35F433949
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 14.12.20 10:22, Jan Beulich wrote:
> On 14.12.2020 10:15, Jürgen Groß wrote:
>> On 14.12.20 10:03, Jan Beulich wrote:
>>> On 14.12.2020 08:56, Juergen Gross wrote:
>>>> Add support to run a function in an exception handler for Arm. Do it
>>>> the same way as on x86 via a bug_frame.
>>>>
>>>> Unfortunately inline assembly on Arm seems to be less capable than on
>>>> x86, leading to functions called via run_in_exception_handler() having
>>>> to be globally visible.
>>>
>>> Could you extend on this? I don't understand what the relevant
>>> difference is, from just looking at the changes.
>>
>> The problem seems to be that a static symbol referenced from the inline
>> asm seems not to silence the error that this static symbol isn't being
>> used. On x86 the bug_frame is constructed using the %c modifier, which
>> is not supported for Arm (at least gcc 7 used in my compile test
>> complained), but seems to be enough for gcc on x86 to not complain.
> 
> But this isn't tied to %c not working on older gcc, is it? It looks
> instead to be tied to you not specifying the function pointer as an
> input in the asm(). The compiler would know the symbol is referenced
> even if the input wasn't used at all in any of the asm()'s operands
> (but of course it would be better to use the operand once you have
> it, if it can be used in some way despite %c not being possible to
> use).

Let me check that.


Juergen

--------------73D03E1E121A34B35F433949--

--xmopCqjAWIlTx21Fxnnl8RDjI5Kw02Ea7--

--Dc9aT2nPf0PPwCxdP7D24kIN4iq2WLueu--


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 09:26:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 09:26:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52036.91050 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kok7h-00050W-Pd; Mon, 14 Dec 2020 09:26:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52036.91050; Mon, 14 Dec 2020 09:26:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kok7h-00050P-Mi; Mon, 14 Dec 2020 09:26:29 +0000
Received: by outflank-mailman (input) for mailman id 52036;
 Mon, 14 Dec 2020 09:26:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WSiX=FS=amazon.com=prvs=61050d9d8=havanur@srs-us1.protection.inumbo.net>)
 id 1kok7g-00050K-E7
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 09:26:28 +0000
Received: from smtp-fw-9101.amazon.com (unknown [207.171.184.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cc78b409-2c44-4424-990a-ce9bf15fde16;
 Mon, 14 Dec 2020 09:26:27 +0000 (UTC)
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-1e-42f764a0.us-east-1.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-9101.sea19.amazon.com with ESMTP;
 14 Dec 2020 09:26:19 +0000
Received: from EX13D36EUC002.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan2.iad.amazon.com [10.40.163.34])
 by email-inbound-relay-1e-42f764a0.us-east-1.amazon.com (Postfix) with ESMTPS
 id 24C42E0E99; Mon, 14 Dec 2020 09:26:19 +0000 (UTC)
Received: from EX13D36EUC004.ant.amazon.com (10.43.164.126) by
 EX13D36EUC002.ant.amazon.com (10.43.164.99) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Mon, 14 Dec 2020 09:26:18 +0000
Received: from EX13D36EUC004.ant.amazon.com ([10.43.164.126]) by
 EX13D36EUC004.ant.amazon.com ([10.43.164.126]) with mapi id 15.00.1497.006;
 Mon, 14 Dec 2020 09:26:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cc78b409-2c44-4424-990a-ce9bf15fde16
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
  t=1607937987; x=1639473987;
  h=from:to:cc:date:message-id:references:in-reply-to:
   content-id:content-transfer-encoding:mime-version:subject;
  bh=n6clB3GoZQv+CZQo8AmhQkhJkKd7F62DVSjTX3yClHo=;
  b=LV8a1C96Y2XjrGcFYAbQpr4B0Wsk3fFcQWPbsFP5JrZi2e2m0V4kA8FJ
   yOQQynosCkxR1cV6ef6lbnAV9Ubefd/o/EifsNYmypbIZ4h+bIlqrel1Y
   QBVB8wq3eLZGN+hFpxg4oIhKdQZhU+BKpp4s/YJQaJfguNEeAYWM1yLUm
   I=;
X-IronPort-AV: E=Sophos;i="5.78,418,1599523200"; 
   d="scan'208";a="95702571"
Subject: Re: [XEN PATCH v1 1/1] Invalidate cache for cpus affinitized to the domain
Thread-Topic: [XEN PATCH v1 1/1] Invalidate cache for cpus affinitized to the domain
From: "Shamsundara Havanur, Harsha" <havanur@amazon.com>
To: "jbeulich@suse.com" <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Thread-Index: AQHWz7MJSRGqvamGtEii6iRZ4MXyYqn2TWsAgAAJiIA=
Date: Mon, 14 Dec 2020 09:26:17 +0000
Message-ID: <81b5d64b0a08d217e0ae53606cd1b8afd59283e4.camel@amazon.com>
References: <cover.1607686878.git.havanur@amazon.com>
	 <aad47c43b7cd7a391492b8be7b881cd37e9764c7.1607686878.git.havanur@amazon.com>
	 <149f7f6e-0ff4-affc-b65d-0f880fa27b13@suse.com>
In-Reply-To: <149f7f6e-0ff4-affc-b65d-0f880fa27b13@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.164.78]
Content-Type: text/plain; charset="utf-8"
Content-ID: <32E7025488A69E4689051D4A87B5827A@amazon.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Precedence: Bulk

On Mon, 2020-12-14 at 09:52 +0100, Jan Beulich wrote:
> CAUTION: This email originated from outside of the organization. Do
> not click links or open attachments unless you can confirm the sender
> and know the content is safe.
> 
> 
> 
> On 11.12.2020 12:44, Harsha Shamsundara Havanur wrote:
> > A HVM domain flushes cache on all the cpus using
> > `flush_all` macro which uses cpu_online_map, during
> > i) creation of a new domain
> > ii) when device-model op is performed
> > iii) when domain is destructed.
> > 
> > This triggers IPI on all the cpus, thus affecting other
> > domains that are pinned to different pcpus. This patch
> > restricts cache flush to the set of cpus affinitized to
> > the current domain using `domain->dirty_cpumask`.
> 
> But then you need to effect cache flushing when a CPU gets
> taken out of domain->dirty_cpumask. I don't think you/we want
> to do that.
> 
If we do not restrict, it could lead to DoS attack, where a malicious
guest could keep writing to MTRR registers or do a cache flush through
DM Op and keep sending IPIs to other neighboring guests.

-Harsha
> Jan
> 


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 09:40:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 09:40:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52047.91063 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kokL7-0006sT-2C; Mon, 14 Dec 2020 09:40:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52047.91063; Mon, 14 Dec 2020 09:40:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kokL6-0006sM-VL; Mon, 14 Dec 2020 09:40:20 +0000
Received: by outflank-mailman (input) for mailman id 52047;
 Mon, 14 Dec 2020 09:40:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MGmN=FS=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kokL6-0006sH-2W
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 09:40:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7b68e3b1-666a-48c2-9162-a32995cbf4bf;
 Mon, 14 Dec 2020 09:40:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 01F46AE87;
 Mon, 14 Dec 2020 09:40:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7b68e3b1-666a-48c2-9162-a32995cbf4bf
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607938818; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Vi+x2SPWeGDmWZlef6Aw3glcFvJM18v/vG+w+YwbMrM=;
	b=vRrtWqXbroA8bOH1X0Ajz/f2mSpsCF8cKK11PMTLxAKgJhw/gcW2y5/FYCtQqlQDUO1OCF
	NLEvIM4bmZ99T0Ik/JJ+rpcC2FGnuy7bIGr5YBFsKuWh8gooQpHx+NOvgQZqHKZhz6xPFx
	aYjKg6W8G33H0/LfEy7LHBGgrL4NETo=
Subject: Re: [PATCH v3 4/5] evtchn: convert domain event lock to an r/w one
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <a333387e-f9e5-7051-569a-1a9a37da53ca@suse.com>
 <074be931-54b0-1b0f-72d8-5bd577884814@xen.org>
 <6e34fd25-14a2-f655-b019-aca94ce086c8@suse.com>
 <55dc24b4-88c6-1b22-411e-267231632377@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <cf3faa68-ba4a-b864-66e0-f379a24a48ce@suse.com>
Date: Mon, 14 Dec 2020 10:40:19 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <55dc24b4-88c6-1b22-411e-267231632377@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11.12.2020 11:57, Julien Grall wrote:
> On 11/12/2020 10:32, Jan Beulich wrote:
>> On 09.12.2020 12:54, Julien Grall wrote:
>>> On 23/11/2020 13:29, Jan Beulich wrote:
>>>> @@ -620,7 +620,7 @@ int evtchn_close(struct domain *d1, int
>>>>        long           rc = 0;
>>>>    
>>>>     again:
>>>> -    spin_lock(&d1->event_lock);
>>>> +    write_lock(&d1->event_lock);
>>>>    
>>>>        if ( !port_is_valid(d1, port1) )
>>>>        {
>>>> @@ -690,13 +690,11 @@ int evtchn_close(struct domain *d1, int
>>>>                    BUG();
>>>>    
>>>>                if ( d1 < d2 )
>>>> -            {
>>>> -                spin_lock(&d2->event_lock);
>>>> -            }
>>>> +                read_lock(&d2->event_lock);
>>>
>>> This change made me realized that I don't quite understand how the
>>> rwlock is meant to work for event_lock. I was actually expecting this to
>>> be a write_lock() given there are state changed in the d2 events.
>>
>> Well, the protection needs to be against racing changes, i.e.
>> parallel invocations of this same function, or evtchn_close().
>> It is debatable whether evtchn_status() and
>> domain_dump_evtchn_info() would better also be locked out
>> (other read_lock() uses aren't applicable to interdomain
>> channels).
>>
>>> Could you outline how a developper can find out whether he/she should
>>> use read_lock or write_lock?
>>
>> I could try to, but it would again be a port type dependent
>> model, just like for the per-channel locks.
> 
> It is quite important to have clear locking strategy (in particular 
> rwlock) so we can make correct decision when to use read_lock or write_lock.
> 
>> So I'd like it to
>> be clarified first whether you aren't instead indirectly
>> asking for these to become write_lock()
> 
> Well, I don't understand why this is a read_lock() (even with your 
> previous explanation). I am not suggesting to switch to a write_lock(), 
> but instead asking for the reasoning behind the decision.

So if what I've said in my previous reply isn't enough (including the
argument towards using two write_lock() here), I'm struggling to
figure what else to say. The primary goal is to exclude changes to
the same ports. For this it is sufficient to hold just one of the two
locks in writer mode, as the other (racing) one will acquire that
same lock for at least reading. The question whether both need to use
writer mode can only be decided when looking at the sites acquiring
just one of the locks in reader mode (hence the reference to
evtchn_status() and domain_dump_evtchn_info()) - if races with them
are deemed to be a problem, switching to both-writers will be needed.

>>>> --- a/xen/common/rwlock.c
>>>> +++ b/xen/common/rwlock.c
>>>> @@ -102,6 +102,14 @@ void queue_write_lock_slowpath(rwlock_t
>>>>        spin_unlock(&lock->lock);
>>>>    }
>>>>    
>>>> +void _rw_barrier(rwlock_t *lock)
>>>> +{
>>>> +    check_barrier(&lock->lock.debug);
>>>> +    smp_mb();
>>>> +    while ( _rw_is_locked(lock) )
>>>> +        arch_lock_relax();
>>>> +    smp_mb();
>>>> +}
>>>
>>> As I pointed out when this implementation was first proposed (see [1]),
>>> there is a risk that the loop will never exit.
>>
>> The [1] reference was missing, but I recall you saying so.
>>
>>> I think the following implementation would be better (although it is ugly):
>>>
>>> write_lock();
>>> /* do nothing */
>>> write_unlock();
>>>
>>> This will act as a barrier between lock held before and after the call.
>>
>> Right, and back then I indicated agreement. When getting to
>> actually carry out the change, I realized though that then the less
>> restrictive check_barrier() can't be used anymore (or to be precise,
>> it could be used, but the stronger check_lock() would subsequently
>> still come into play). This isn't a problem here, but would be for
>> any IRQ-safe r/w lock that the barrier may want to be used on down
>> the road.
>>
>> Thinking about it, a read_lock() / read_unlock() pair would suffice
>> though. But this would then still have check_lock() involved.
>>
>> Given all of this, maybe it's better not to introduce the function
>> at all and instead open-code the read_lock() / read_unlock() pair at
>> the use site.
> 
> IIUC, the read_lock() would be sufficient because we only care about 
> "write" side and not read. Is that correct?

Correct - as the comment says, what we need to guard against is only
the allocation of new ports (which isn't even all "write" sides, but
exactly one of them).

>>> As an aside, I think the introduction of rw_barrier() deserve to be a in
>>> separate patch to help the review.
>>
>> I'm aware there are differing views on this - to me, putting this in
>> a separate patch would be introduction of dead code. 
> 
> This is only dead code if we decide to not use rw_barrier() :).
> 
> The idea behind introducing rw_barrier() in its own patch is so you can 
> explanation why it was implemented like that. Arguably, this explanation 
> can be added in the same patch...
> 
> There are other added benefits such as making a hint to the reviewer 
> that this part will require more careful review. I am sure one will say 
> that reviewer should always be careful...
> 
> But, personally, my level of carefulness will depend on the author and 
> the type of the patch.
> 
> Anyway, I am happy with the open-coded version with an explanation in 
> the code/commit message.

Okay, will change to that then.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 09:56:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 09:56:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52075.91110 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koka8-0008AF-Uc; Mon, 14 Dec 2020 09:55:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52075.91110; Mon, 14 Dec 2020 09:55:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koka8-0008A8-QS; Mon, 14 Dec 2020 09:55:52 +0000
Received: by outflank-mailman (input) for mailman id 52075;
 Mon, 14 Dec 2020 09:55:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MGmN=FS=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1koka7-0008A3-4Q
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 09:55:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a639bae2-21b8-4800-8dd0-b11dfcd442cd;
 Mon, 14 Dec 2020 09:55:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4FD3FAC10;
 Mon, 14 Dec 2020 09:55:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a639bae2-21b8-4800-8dd0-b11dfcd442cd
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607939748; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/M9Woq0xrSX0Ph47W4TRzOAiFvA1qaoHwi4ZzNqc5Y8=;
	b=CcDQcVMEvv3mZYF5XdIO7EP8b1JQOD8q1hJpqHZ1heMPkd2uI3IEinElz+E6fMQrMJry5S
	R3YYR87yxhBw/sJhs0OApoiKH2OSc1A1cJ1rw1dd5ykej1MgXsfn67xM/ynJ2+m21qsIaj
	ouEzlDu4uizKTSdKU4ywavnpbCsh2z4=
Subject: Re: [XEN PATCH v1 1/1] Invalidate cache for cpus affinitized to the
 domain
To: "Shamsundara Havanur, Harsha" <havanur@amazon.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <cover.1607686878.git.havanur@amazon.com>
 <aad47c43b7cd7a391492b8be7b881cd37e9764c7.1607686878.git.havanur@amazon.com>
 <149f7f6e-0ff4-affc-b65d-0f880fa27b13@suse.com>
 <81b5d64b0a08d217e0ae53606cd1b8afd59283e4.camel@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <db7fdadc-91de-aa5f-51cd-f2c78f88d034@suse.com>
Date: Mon, 14 Dec 2020 10:55:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <81b5d64b0a08d217e0ae53606cd1b8afd59283e4.camel@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 14.12.2020 10:26, Shamsundara Havanur, Harsha wrote:
> On Mon, 2020-12-14 at 09:52 +0100, Jan Beulich wrote:
>> On 11.12.2020 12:44, Harsha Shamsundara Havanur wrote:
>>> A HVM domain flushes cache on all the cpus using
>>> `flush_all` macro which uses cpu_online_map, during
>>> i) creation of a new domain
>>> ii) when device-model op is performed
>>> iii) when domain is destructed.
>>>
>>> This triggers IPI on all the cpus, thus affecting other
>>> domains that are pinned to different pcpus. This patch
>>> restricts cache flush to the set of cpus affinitized to
>>> the current domain using `domain->dirty_cpumask`.
>>
>> But then you need to effect cache flushing when a CPU gets
>> taken out of domain->dirty_cpumask. I don't think you/we want
>> to do that.
>>
> If we do not restrict, it could lead to DoS attack, where a malicious
> guest could keep writing to MTRR registers or do a cache flush through
> DM Op and keep sending IPIs to other neighboring guests.

Could you outline how this can become a DoS? Throughput may be
(heavily) impacted, yes, but I don't see how this could suppress
forward progress altogether. Improved accounting may be desirable
here, such that the time spent in the flushes gets subtracted
from the initiator's credits rather than the vCPU's which happens
to run on the subject pCPU at the time. This is a more general
topic though, which was previously brought up: Time spent in
servicing interrupts should in general not be accounted to the
vCPU running of which happened to be interrupted. It's just that
for the majority of interrupts the amount of time needed to
handle them is pretty well bounded, albeit very high interrupt
rates could have the same effect as a single interrupt taking
very long to service.

An intermediate option (albeit presumably somewhat intrusive)
might be to use e.g. a tasklet on each CPU to effect the
flushing. This wouldn't reduce the overall hit on the system,
but would at least avoid penalizing other vCPU-s as to their
scheduling time slices. The issuing vCPU would then need
pausing until all of the flushes got carried out.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 10:18:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 10:18:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52084.91124 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kokvO-0001nd-SX; Mon, 14 Dec 2020 10:17:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52084.91124; Mon, 14 Dec 2020 10:17:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kokvO-0001nW-PS; Mon, 14 Dec 2020 10:17:50 +0000
Received: by outflank-mailman (input) for mailman id 52084;
 Mon, 14 Dec 2020 10:17:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cKns=FS=redhat.com=pbonzini@srs-us1.protection.inumbo.net>)
 id 1kokvN-0001nR-Ks
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 10:17:49 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 5ddc1232-acaf-481a-844d-23643d0c3141;
 Mon, 14 Dec 2020 10:17:45 +0000 (UTC)
Received: from mail-wr1-f70.google.com (mail-wr1-f70.google.com
 [209.85.221.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-503-2-QNSaV0Nc2_MadRlkiCng-1; Mon, 14 Dec 2020 05:17:43 -0500
Received: by mail-wr1-f70.google.com with SMTP id w5so4137335wrl.9
 for <xen-devel@lists.xenproject.org>; Mon, 14 Dec 2020 02:17:43 -0800 (PST)
Received: from ?IPv6:2001:b07:6468:f312:5e2c:eb9a:a8b6:fd3e?
 ([2001:b07:6468:f312:5e2c:eb9a:a8b6:fd3e])
 by smtp.gmail.com with ESMTPSA id q73sm31034403wme.44.2020.12.14.02.17.39
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 14 Dec 2020 02:17:40 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5ddc1232-acaf-481a-844d-23643d0c3141
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607941064;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=rABuQ01f4xN4/9IJkTF6M26zuMCOEKUzY7Y45Txfisg=;
	b=XPe4bUF6Q5AJiRxV8MTWUVP47TJYRyg/NXwZ8eGI78uAS3rF+XZey7PfulNYZEvtxMeMWf
	rhfpbee+KA6spAF6e78XOINq2gpZ2KB5lJL4FLNhmT/xzwRysJeaTuaai4WM6TvgM1mUL2
	WESljk80cCgLyk5TtTHeOfLSNMCiJn8=
X-MC-Unique: 2-QNSaV0Nc2_MadRlkiCng-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=rABuQ01f4xN4/9IJkTF6M26zuMCOEKUzY7Y45Txfisg=;
        b=Zrd1KqSWKl7pcIuMq+/kMZKDNFB0Wbh7CjapqhkwAPIxaCZcnPw1g1eP8OnC5R2GPO
         urEf1hpMe2clNlVnT4on4Fn7/uJegKMSHArAUbFnVSMtTQDZEWFJBd/zFLwmPmZxt/VD
         9bAWK4miy4yOZEc5VxtsjvExsapb4OdETnstbkJVvKHDZQFMlgQmcbgu8yl+R/9uiCyv
         OSM1cQbIKHOKc1qBfmHWgfc55rLfJsW9FcRQ5VZyc5G9RSiyFUR1n9ZQpQ68sjaYuh00
         2oidyuI5Y5pbVQ42LQ9ULhWNL7QW3extDinJ2QKofB2CMYoSoLqwo1PIO1YkcUzdN5Jd
         zSvA==
X-Gm-Message-State: AOAM532dH1j/3tJCTad8WKD/G6JcyO86P7ncl59YsBsOiVlh0LWzVDeI
	GtyAOX09MoD6GCL4dXuylC/WSvDsFPvb/jSrYS3I8AjME4FPFh6EouGVgrf62+a8hcPlubAg6ZQ
	m9p+8ILkIJkzET2rx1oY6OOryZHbLBAkNREOL7FGdpLT8Qa1ImmzTHKWUL5PFA352BaDB1wAf0F
	SxlAI=
X-Received: by 2002:a05:6000:124e:: with SMTP id j14mr22150164wrx.310.1607941061891;
        Mon, 14 Dec 2020 02:17:41 -0800 (PST)
X-Google-Smtp-Source: ABdhPJxqJL+ArEmh2h1Fbd/p0DwTxOhnlMBoZKGQ8PSaiO0CapA4/tcA4p4VfRG8DdYnnBgO/YwSfw==
X-Received: by 2002:a05:6000:124e:: with SMTP id j14mr22150132wrx.310.1607941061678;
        Mon, 14 Dec 2020 02:17:41 -0800 (PST)
Subject: Re: [PATCH v3 03/13] compiler.h: remove GCC < 3 __builtin_expect
 fallback
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 marcandre.lureau@redhat.com, qemu-devel@nongnu.org
Cc: Peter Maydell <peter.maydell@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Richard Henderson <richard.henderson@linaro.org>,
 Laurent Vivier <laurent@vivier.eu>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-arm@nongnu.org,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
References: <20201210134752.780923-1-marcandre.lureau@redhat.com>
 <20201210134752.780923-4-marcandre.lureau@redhat.com>
 <fead8bf1-7848-8809-c67a-e6354e7b5cf7@redhat.com>
From: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <e70d683d-7eb6-5717-eea4-02115935d232@redhat.com>
Date: Mon, 14 Dec 2020 11:17:39 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <fead8bf1-7848-8809-c67a-e6354e7b5cf7@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=pbonzini@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 10/12/20 15:32, Philippe Mathieu-Daudé wrote:
> On 12/10/20 2:47 PM, marcandre.lureau@redhat.com wrote:
>> From: Marc-André Lureau <marcandre.lureau@redhat.com>
>>
>> Since commit efc6c07 ("configure: Add a test for the minimum compiler
>> version"), QEMU explicitly depends on GCC >= 4.8.
>>
>> (clang >= 3.4 advertises itself as GCC >= 4.2 compatible and supports
>> __builtin_expect too)
>>
>> Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
>> ---
>>   include/qemu/compiler.h | 4 ----
>>   1 file changed, 4 deletions(-)
>>
>> diff --git a/include/qemu/compiler.h b/include/qemu/compiler.h
>> index c76281f354..226ead6c90 100644
>> --- a/include/qemu/compiler.h
>> +++ b/include/qemu/compiler.h
>> @@ -44,10 +44,6 @@
>>   #endif
>>   
>>   #ifndef likely
>> -#if __GNUC__ < 3
>> -#define __builtin_expect(x, n) (x)
>> -#endif
>> -
>>   #define likely(x)   __builtin_expect(!!(x), 1)
>>   #define unlikely(x)   __builtin_expect(!!(x), 0)
>>   #endif
>>
> 
> Trying with GCC 10:
> warning: implicit declaration of function ‘likely’
> [-Wimplicit-function-declaration]
> 
> Clang 10:
> warning: implicit declaration of function 'likely' is invalid in C99
> [-Wimplicit-function-declaration]
> 
> Wouldn't it be cleaner to test in the configure script or Meson that
> likely() and unlikely() are not defined, and define them here
> unconditionally?

I think the point of the "#ifndef likely" is that some header file 
(maybe something from Linux?) might be defining them unexpectedly.  So 
it's difficult to do the test at configure/meson time.  I would also 
tend towards removing the #ifndef and seeing if something breaks, but 
not as part of this series.

Paolo



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 10:18:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 10:18:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52086.91137 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kokve-0001rK-5D; Mon, 14 Dec 2020 10:18:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52086.91137; Mon, 14 Dec 2020 10:18:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kokve-0001rD-29; Mon, 14 Dec 2020 10:18:06 +0000
Received: by outflank-mailman (input) for mailman id 52086;
 Mon, 14 Dec 2020 10:18:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kokvc-0001qd-L0
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 10:18:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kokvZ-0002rT-RI; Mon, 14 Dec 2020 10:18:01 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kokvZ-0005Wl-DA; Mon, 14 Dec 2020 10:18:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=H1AcoxOBU4DYwP+eHZO1DuAmYUYc/viR9r4DPmmJ4nU=; b=vnhC/YsmIeVUpBOx6kIpVky1Uw
	0hZWClr6ECKsM3sOkNwvMnoRf2rwTxB+t4yYAnYYKQXtKmW87B0kiowwI+oLb1snDX1QQfrxLdxvp
	O6TBaocVqavRBVKXIsoqU7VvoS64EG+wRjGJ9uf2Ewk5FvGMorkCwiAXWGfFHdLELhP4=;
Subject: Re: [PATCH v4 1/3] xen/arm: add support for
 run_in_exception_handler()
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>
References: <20201214075615.25038-1-jgross@suse.com>
 <20201214075615.25038-2-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <9d3f8583-cfba-0174-3275-b418648f3f31@xen.org>
Date: Mon, 14 Dec 2020 10:17:58 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201214075615.25038-2-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 14/12/2020 07:56, Juergen Gross wrote:
> Add support to run a function in an exception handler for Arm. Do it
> the same way as on x86 via a bug_frame.
> 
> Unfortunately inline assembly on Arm seems to be less capable than on
> x86, leading to functions called via run_in_exception_handler() having
> to be globally visible.

Jan already commented on this, so I am not going to comment again.

> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V4:
> - new patch
> 
> I have verified the created bugframe is correct by inspecting the
> created binary.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>   xen/arch/arm/traps.c       | 10 +++++++++-
>   xen/drivers/char/ns16550.c |  3 ++-
>   xen/include/asm-arm/bug.h  | 32 +++++++++++++++++++++-----------
>   3 files changed, 32 insertions(+), 13 deletions(-)
> 
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 22bd1bd4c6..6e677affe2 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -1236,8 +1236,16 @@ int do_bug_frame(const struct cpu_user_regs *regs, vaddr_t pc)
>       if ( !bug )
>           return -ENOENT;
>   
> +    if ( id == BUGFRAME_run_fn )
> +    {
> +        void (*fn)(const struct cpu_user_regs *) = bug_ptr(bug);
> +
> +        fn(regs);
> +        return 0;
> +    }
> +
>       /* WARN, BUG or ASSERT: decode the filename pointer and line number. */
> -    filename = bug_file(bug);
> +    filename = bug_ptr(bug);
>       if ( !is_kernel(filename) )
>           return -EINVAL;
>       fixup = strlen(filename);
> diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
> index 9235d854fe..dd6500acc8 100644
> --- a/xen/drivers/char/ns16550.c
> +++ b/xen/drivers/char/ns16550.c
> @@ -192,7 +192,8 @@ static void ns16550_interrupt(
>   /* Safe: ns16550_poll() runs as softirq so not reentrant on a given CPU. */
>   static DEFINE_PER_CPU(struct serial_port *, poll_port);
>   
> -static void __ns16550_poll(struct cpu_user_regs *regs)
> +/* run_in_exception_handler() on Arm requires globally visible symbol. */
> +void __ns16550_poll(struct cpu_user_regs *regs)
>   {
>       struct serial_port *port = this_cpu(poll_port);
>       struct ns16550 *uart = port->uart;
> diff --git a/xen/include/asm-arm/bug.h b/xen/include/asm-arm/bug.h
> index 36c803357c..a7da2c306f 100644
> --- a/xen/include/asm-arm/bug.h
> +++ b/xen/include/asm-arm/bug.h
> @@ -15,34 +15,38 @@
>   
>   struct bug_frame {
>       signed int loc_disp;    /* Relative address to the bug address */
> -    signed int file_disp;   /* Relative address to the filename */
> +    signed int ptr_disp;    /* Relative address to the filename or function */
>       signed int msg_disp;    /* Relative address to the predicate (for ASSERT) */
>       uint16_t line;          /* Line number */
>       uint32_t pad0:16;       /* Padding for 8-bytes align */
>   };
>   
>   #define bug_loc(b) ((const void *)(b) + (b)->loc_disp)
> -#define bug_file(b) ((const void *)(b) + (b)->file_disp);
> +#define bug_ptr(b) ((const void *)(b) + (b)->ptr_disp);
>   #define bug_line(b) ((b)->line)
>   #define bug_msg(b) ((const char *)(b) + (b)->msg_disp)
>   
> -#define BUGFRAME_warn   0
> -#define BUGFRAME_bug    1
> -#define BUGFRAME_assert 2
> +#define BUGFRAME_run_fn 0
> +#define BUGFRAME_warn   1
> +#define BUGFRAME_bug    2
> +#define BUGFRAME_assert 3

Why did you renumber it? IOW, why can't BUGFRAME_run_fn be defined as 3?

>   
> -#define BUGFRAME_NR     3
> +#define BUGFRAME_NR     4
>   
>   /* Many versions of GCC doesn't support the asm %c parameter which would
>    * be preferable to this unpleasantness. We use mergeable string
>    * sections to avoid multiple copies of the string appearing in the
>    * Xen image.
>    */
> -#define BUG_FRAME(type, line, file, has_msg, msg) do {                      \
> +#define BUG_FRAME(type, line, ptr, ptr_is_file, has_msg, msg) do {          \
>       BUILD_BUG_ON((line) >> 16);                                             \
>       BUILD_BUG_ON((type) >= BUGFRAME_NR);                                    \
>       asm ("1:"BUG_INSTR"\n"                                                  \
>            ".pushsection .rodata.str, \"aMS\", %progbits, 1\n"                \
> -         "2:\t.asciz " __stringify(file) "\n"                               \
> +         "2:\n"                                                             \
> +         ".if " #ptr_is_file "\n"                                           \
> +         "\t.asciz " __stringify(ptr) "\n"                                  \
> +         ".endif\n"                                                         \
>            "3:\n"                                                             \
>            ".if " #has_msg "\n"                                               \
>            "\t.asciz " #msg "\n"                                              \
> @@ -52,21 +56,27 @@ struct bug_frame {
>            "4:\n"                                                             \
>            ".p2align 2\n"                                                     \
>            ".long (1b - 4b)\n"                                                \
> +         ".if " #ptr_is_file "\n"                                           \
>            ".long (2b - 4b)\n"                                                \
> +         ".else\n"                                                          \
> +         ".long (" #ptr " - 4b)\n"                                          \
> +         ".endif\n"                                                         \
>            ".long (3b - 4b)\n"                                                \
>            ".hword " __stringify(line) ", 0\n"                                \
>            ".popsection");                                                    \
>   } while (0)
>   
> -#define WARN() BUG_FRAME(BUGFRAME_warn, __LINE__, __FILE__, 0, "")
> +#define run_in_exception_handler(fn) BUG_FRAME(BUGFRAME_run_fn, 0, fn, 0, 0, "")
> +
> +#define WARN() BUG_FRAME(BUGFRAME_warn, __LINE__, __FILE__, 1, 0, "")
>   
>   #define BUG() do {                                              \
> -    BUG_FRAME(BUGFRAME_bug,  __LINE__, __FILE__, 0, "");        \
> +    BUG_FRAME(BUGFRAME_bug,  __LINE__, __FILE__, 1, 0, "");        \
>       unreachable();                                              \
>   } while (0)
>   
>   #define assert_failed(msg) do {                                 \
> -    BUG_FRAME(BUGFRAME_assert, __LINE__, __FILE__, 1, msg);     \
> +    BUG_FRAME(BUGFRAME_assert, __LINE__, __FILE__, 1, 1, msg);     \
>       unreachable();                                              \
>   } while (0)
>   
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 10:20:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 10:20:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52096.91149 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kokxz-0002q3-Ow; Mon, 14 Dec 2020 10:20:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52096.91149; Mon, 14 Dec 2020 10:20:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kokxz-0002pw-M0; Mon, 14 Dec 2020 10:20:31 +0000
Received: by outflank-mailman (input) for mailman id 52096;
 Mon, 14 Dec 2020 10:20:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cKns=FS=redhat.com=pbonzini@srs-us1.protection.inumbo.net>)
 id 1kokxx-0002po-Hf
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 10:20:29 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 1f0ca9d2-b0e9-4233-9ccd-2f953a2c85c6;
 Mon, 14 Dec 2020 10:20:28 +0000 (UTC)
Received: from mail-wm1-f72.google.com (mail-wm1-f72.google.com
 [209.85.128.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-74-8QHRx1grOmGir5UfEw_xSg-1; Mon, 14 Dec 2020 05:20:26 -0500
Received: by mail-wm1-f72.google.com with SMTP id g198so3243840wme.7
 for <xen-devel@lists.xenproject.org>; Mon, 14 Dec 2020 02:20:26 -0800 (PST)
Received: from ?IPv6:2001:b07:6468:f312:5e2c:eb9a:a8b6:fd3e?
 ([2001:b07:6468:f312:5e2c:eb9a:a8b6:fd3e])
 by smtp.gmail.com with ESMTPSA id r2sm31320917wrn.83.2020.12.14.02.20.23
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 14 Dec 2020 02:20:24 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1f0ca9d2-b0e9-4233-9ccd-2f953a2c85c6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607941228;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=VKJuWEjZ7HEqn+md8jv6dUOfhVZW6quQNiqaNPkOsHc=;
	b=T5PxpI14sCZ1XsiI3RPPOiiU3L9q5vn69dpEIkHiuS0X/jHDrNKwJnbpyCWbN+Au/8LesD
	pTBGQZj9fQ3acRrH4cDXrXOOQB0LWEHqf70y18LsyorLShTawKywkFK7FUAQZ/uO7Fnh3v
	87XUbPaakiZyITQZ0vkKMuvC8yta5E0=
X-MC-Unique: 8QHRx1grOmGir5UfEw_xSg-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=VKJuWEjZ7HEqn+md8jv6dUOfhVZW6quQNiqaNPkOsHc=;
        b=q4spFrjo2UEmVMdM0Ygyjjd6ZrwN/G+kylKwWHUpVsJ30mISIVMqtqJblkRBNkMizd
         PrcqrXnUelLNNsT4eSSEUY4Ch2rMg88iVXHi2CHKCkg4YzAtvrw3507U1GwQaNB11ga7
         eeMRnxOiycHTBMLVLBd9Kk18+9ZGs89Jtwyu5PCIYB13w69J12Z5gW4fP+z9NP0MIww9
         UmHDQG+JMEIjJWnGtfPRaL//b9UMwUEMn6ZiuWm6fh/W0BssAbbK85EGTX0STQL05Pmw
         o/yXT6EOZc0zsxOKk9Tg5D0gn1ZR1Bet8drXWrFXhw6fXUWRvYb3oeqwLHyZyIFPIz2D
         5REQ==
X-Gm-Message-State: AOAM530fZqkWzMyij7CkBvSBGv6b7fRYT7Mc2pOEON7z/msZ598TLRJm
	LnG9OklAvPy4Gc1fEC/LqIUBMRD8r4/agIrKx1fvF6/rZycjG8Ze8W1HXCnDTbF74jHXu8KzQfl
	8n4gaMQo76Chf5WRW2AyguecUuuQ=
X-Received: by 2002:a1c:f017:: with SMTP id a23mr26187757wmb.56.1607941225266;
        Mon, 14 Dec 2020 02:20:25 -0800 (PST)
X-Google-Smtp-Source: ABdhPJzf3/oGm0fU19mZH+puOGsoNRp7IG0cdANi1T7twa2ceqU7+cBZmSBiH5yH7AKXpx6CW0C0kg==
X-Received: by 2002:a1c:f017:: with SMTP id a23mr26187738wmb.56.1607941225059;
        Mon, 14 Dec 2020 02:20:25 -0800 (PST)
Subject: Re: [PATCH v3 00/13] Remove GCC < 4.8 checks
To: marcandre.lureau@redhat.com, qemu-devel@nongnu.org
Cc: Peter Maydell <peter.maydell@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Richard Henderson <richard.henderson@linaro.org>,
 Laurent Vivier <laurent@vivier.eu>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-arm@nongnu.org,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 philmd@redhat.com
References: <20201210134752.780923-1-marcandre.lureau@redhat.com>
From: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <9df914a2-cf0a-6cf5-76ee-502a75873825@redhat.com>
Date: Mon, 14 Dec 2020 11:20:22 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201210134752.780923-1-marcandre.lureau@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=pbonzini@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 10/12/20 14:47, marcandre.lureau@redhat.com wrote:
> From: Marc-André Lureau <marcandre.lureau@redhat.com>
> 
> Hi,
> 
> Since commit efc6c07 ("configure: Add a test for the minimum compiler version"),
> QEMU explicitly depends on GCC >= 4.8.
> 
> v3:
>   - drop first patch replacing QEMU_GNUC_PREREQ with G_GNUC_CHECK_VERSION
>   - add last patch to remove QEMU_GNUC_PREREQ
>   - tweak commit messages to replace clang 3.8 with clang 3.4
>   - fix some extra coding style
>   - collect r-b/a-b tags
> 
> v2:
>   - include Philippe's earlier reviewed series
>   - drop problematic patch to replace GCC_FMT_ATTR, but tweak the check to be clang
>   - replace QEMU_GNUC_PREREQ with G_GNUC_CHECK_VERSION
>   - split changes
>   - add patches to drop __GNUC__ checks (clang advertises itself as 4.2.1, unless
>     -fgnuc-version=0)
> 
> Marc-André Lureau (11):
>    compiler.h: remove GCC < 3 __builtin_expect fallback
>    qemu-plugin.h: remove GCC < 4
>    tests: remove GCC < 4 fallbacks
>    virtiofsd: replace _Static_assert with QEMU_BUILD_BUG_ON
>    compiler.h: explicit case for Clang printf attribute
>    audio: remove GNUC & MSVC check
>    poison: remove GNUC check
>    xen: remove GNUC check
>    compiler: remove GNUC check
>    linux-user: remove GNUC check
>    compiler.h: remove QEMU_GNUC_PREREQ
> 
> Philippe Mathieu-Daudé (2):
>    qemu/atomic: Drop special case for unsupported compiler
>    accel/tcg: Remove special case for GCC < 4.6
> 
>   include/exec/poison.h              |  2 --
>   include/hw/xen/interface/io/ring.h |  9 ------
>   include/qemu/atomic.h              | 17 -----------
>   include/qemu/compiler.h            | 45 ++++++++----------------------
>   include/qemu/qemu-plugin.h         |  9 ++----
>   scripts/cocci-macro-file.h         |  1 -
>   tools/virtiofsd/fuse_common.h      | 11 +-------
>   accel/tcg/cpu-exec.c               |  2 +-
>   audio/audio.c                      |  8 +-----
>   linux-user/strace.c                |  4 ---
>   tests/tcg/arm/fcvt.c               |  8 ++----
>   11 files changed, 20 insertions(+), 96 deletions(-)
> 

Queued, thanks.

Paolo



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 10:24:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 10:24:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52101.91160 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kol1o-00030h-9k; Mon, 14 Dec 2020 10:24:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52101.91160; Mon, 14 Dec 2020 10:24:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kol1o-00030a-6l; Mon, 14 Dec 2020 10:24:28 +0000
Received: by outflank-mailman (input) for mailman id 52101;
 Mon, 14 Dec 2020 10:24:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kol1n-00030V-Hy
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 10:24:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kol1l-0002y2-GG; Mon, 14 Dec 2020 10:24:25 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kol1l-0005xh-6X; Mon, 14 Dec 2020 10:24:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=kUF7z5Be0I5ppXEQuBWcHR5q6dAPx7UEU72KWXIwxGY=; b=bbHy+Xn/PbzVrw4aB+CtVmZD9W
	9teNFFKO1TiNXiB4gOvEIuhaf8snjhp/53yJKc7LrWpPpFUiV0ceDoIr+LE1zMJpefWBvypfyP9JL
	flysVVb4sr2xY5u8En65l6EosXKCtfsPsmAcRw16jf4Ole9G5lIM1Z1YcvHdNY5u1zYw=;
Subject: Re: [PATCH v4 3/3] xen: add support for automatic debug key actions
 in case of crash
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201214075615.25038-1-jgross@suse.com>
 <20201214075615.25038-4-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <8dc62a3f-db2d-51b9-1264-28af3a13052d@xen.org>
Date: Mon, 14 Dec 2020 10:24:23 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201214075615.25038-4-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 14/12/2020 07:56, Juergen Gross wrote:
> diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
> index de120fa092..806355ed8b 100644
> --- a/xen/common/keyhandler.c
> +++ b/xen/common/keyhandler.c
> @@ -3,7 +3,9 @@
>    */
>   
>   #include <asm/regs.h>
> +#include <xen/delay.h>
>   #include <xen/keyhandler.h>
> +#include <xen/param.h>
>   #include <xen/shutdown.h>
>   #include <xen/event.h>
>   #include <xen/console.h>
> @@ -519,6 +521,59 @@ void __init initialize_keytable(void)
>       }
>   }
>   
> +#define CRASHACTION_SIZE  32
> +static char crash_debug_panic[CRASHACTION_SIZE];
> +string_runtime_param("crash-debug-panic", crash_debug_panic);
> +static char crash_debug_hwdom[CRASHACTION_SIZE];
> +string_runtime_param("crash-debug-hwdom", crash_debug_hwdom);
> +static char crash_debug_watchdog[CRASHACTION_SIZE];
> +string_runtime_param("crash-debug-watchdog", crash_debug_watchdog);
> +#ifdef CONFIG_KEXEC
> +static char crash_debug_kexeccmd[CRASHACTION_SIZE];
> +string_runtime_param("crash-debug-kexeccmd", crash_debug_kexeccmd);
> +#else
> +#define crash_debug_kexeccmd NULL
> +#endif
> +static char crash_debug_debugkey[CRASHACTION_SIZE];
> +string_runtime_param("crash-debug-debugkey", crash_debug_debugkey);
> +
> +void keyhandler_crash_action(enum crash_reason reason)
> +{
> +    static const char *const crash_action[] = {
> +        [CRASHREASON_PANIC] = crash_debug_panic,
> +        [CRASHREASON_HWDOM] = crash_debug_hwdom,
> +        [CRASHREASON_WATCHDOG] = crash_debug_watchdog,
> +        [CRASHREASON_KEXECCMD] = crash_debug_kexeccmd,
> +        [CRASHREASON_DEBUGKEY] = crash_debug_debugkey,
> +    };
> +    static bool ignore;
> +    const char *action;
> +
> +    /* Some handlers are not functional too early. */

Can you explain in the commit message why this is necessary (an example 
would be useful)?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 10:46:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 10:46:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52108.91172 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kolN8-0004x7-5k; Mon, 14 Dec 2020 10:46:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52108.91172; Mon, 14 Dec 2020 10:46:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kolN8-0004x0-2U; Mon, 14 Dec 2020 10:46:30 +0000
Received: by outflank-mailman (input) for mailman id 52108;
 Mon, 14 Dec 2020 10:46:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UiKs=FS=redhat.com=cohuck@srs-us1.protection.inumbo.net>)
 id 1kolN6-0004wv-Td
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 10:46:28 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id fa86bfc6-a847-453f-a7bb-a02a2ff208fa;
 Mon, 14 Dec 2020 10:46:27 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-30-xWMsxTH2P-Cw0hCX1-vmVw-1; Mon, 14 Dec 2020 05:46:26 -0500
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com
 [10.5.11.13])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id C45CE180A092;
 Mon, 14 Dec 2020 10:46:23 +0000 (UTC)
Received: from gondolin (ovpn-113-171.ams2.redhat.com [10.36.113.171])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 492F036FA;
 Mon, 14 Dec 2020 10:46:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa86bfc6-a847-453f-a7bb-a02a2ff208fa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607942787;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=F0weGPvVnxCMZMQQMsPnBuU6LsncWqvHjw8ElN8KahE=;
	b=Lefayf+Qulq87N6AYyRox0UY/h0M5cd/f8evMWuCyg//RVJWNTO1DkqulDg8LevRDL5nLq
	bH+OZEA91kIL3Ndw1o9LOKSFJzEGkOifHZFWAnp/UTMkCs5yeVXcYIG4WG6n8TObIuEdb1
	jlCVHGSP0l8TUKibeGXtxgwFZRawUZ0=
X-MC-Unique: xWMsxTH2P-Cw0hCX1-vmVw-1
Date: Mon, 14 Dec 2020 11:46:04 +0100
From: Cornelia Huck <cohuck@redhat.com>
To: Eduardo Habkost <ehabkost@redhat.com>
Cc: qemu-devel@nongnu.org, Markus Armbruster <armbru@redhat.com>, Igor
 Mammedov <imammedo@redhat.com>, Stefan Berger <stefanb@linux.ibm.com>,
 =?UTF-8?B?TWFyYy1BbmRyw6k=?= Lureau <marcandre.lureau@redhat.com>, "Daniel
 P. Berrange" <berrange@redhat.com>, Philippe =?UTF-8?B?TWF0aGlldS1EYXVk?=
 =?UTF-8?B?w6k=?= <philmd@redhat.com>, John Snow <jsnow@redhat.com>, Kevin
 Wolf <kwolf@redhat.com>, Eric Blake <eblake@redhat.com>, Paolo Bonzini
 <pbonzini@redhat.com>, Stefan Berger <stefanb@linux.vnet.ibm.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Anthony Perard
 <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>, Max Reitz
 <mreitz@redhat.com>, Halil Pasic <pasic@linux.ibm.com>, Christian
 Borntraeger <borntraeger@de.ibm.com>, Richard Henderson <rth@twiddle.net>,
 David Hildenbrand <david@redhat.com>, Thomas Huth <thuth@redhat.com>,
 Matthew Rosato <mjrosato@linux.ibm.com>, Alex Williamson
 <alex.williamson@redhat.com>, Mark Cave-Ayland
 <mark.cave-ayland@ilande.co.uk>, Artyom Tarasenko <atar4qemu@gmail.com>,
 xen-devel@lists.xenproject.org, qemu-block@nongnu.org,
 qemu-s390x@nongnu.org
Subject: Re: [PATCH v4 23/32] qdev: Move dev->realized check to
 qdev_property_set()
Message-ID: <20201214114604.2b439baf.cohuck@redhat.com>
In-Reply-To: <20201211220529.2290218-24-ehabkost@redhat.com>
References: <20201211220529.2290218-1-ehabkost@redhat.com>
	<20201211220529.2290218-24-ehabkost@redhat.com>
Organization: Red Hat GmbH
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=cohuck@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Fri, 11 Dec 2020 17:05:20 -0500
Eduardo Habkost <ehabkost@redhat.com> wrote:

> Every single qdev property setter function manually checks
> dev->realized.  We can just check dev->realized inside
> qdev_property_set() instead.
>
> The check is being added as a separate function
> (qdev_prop_allow_set()) because it will become a callback later.
>
> Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
> ---
> Changes v1 -> v2:
> * Removed unused variable at xen_block_set_vdev()
> * Redone patch after changes in the previous patches in the
>   series
> ---
> Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Anthony Perard <anthony.perard@citrix.com>
> Cc: Paul Durrant <paul@xen.org>
> Cc: Kevin Wolf <kwolf@redhat.com>
> Cc: Max Reitz <mreitz@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Daniel P. Berrangé" <berrange@redhat.com>
> Cc: Eduardo Habkost <ehabkost@redhat.com>
> Cc: Cornelia Huck <cohuck@redhat.com>
> Cc: Halil Pasic <pasic@linux.ibm.com>
> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> Cc: Richard Henderson <rth@twiddle.net>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Thomas Huth <thuth@redhat.com>
> Cc: Matthew Rosato <mjrosato@linux.ibm.com>
> Cc: Alex Williamson <alex.williamson@redhat.com>
> Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
> Cc: Artyom Tarasenko <atar4qemu@gmail.com>
> Cc: qemu-devel@nongnu.org
> Cc: xen-devel@lists.xenproject.org
> Cc: qemu-block@nongnu.org
> Cc: qemu-s390x@nongnu.org
> ---
>  backends/tpm/tpm_util.c          |   6 --
>  hw/block/xen-block.c             |   6 --
>  hw/core/qdev-properties-system.c |  70 ----------------------
>  hw/core/qdev-properties.c        | 100 ++++++-------------------------
>  hw/s390x/css.c                   |   6 --
>  hw/s390x/s390-pci-bus.c          |   6 --
>  hw/vfio/pci-quirks.c             |   6 --
>  target/sparc/cpu.c               |   6 --
>  8 files changed, 18 insertions(+), 188 deletions(-)

Reviewed-by: Cornelia Huck <cohuck@redhat.com>



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 10:51:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 10:51:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52114.91184 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kolRe-0005uO-Pd; Mon, 14 Dec 2020 10:51:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52114.91184; Mon, 14 Dec 2020 10:51:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kolRe-0005uH-M1; Mon, 14 Dec 2020 10:51:10 +0000
Received: by outflank-mailman (input) for mailman id 52114;
 Mon, 14 Dec 2020 10:51:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XC/h=FS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kolRd-0005uC-EH
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 10:51:09 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 66c3c5be-216d-4b3e-88f2-2ac38e7de304;
 Mon, 14 Dec 2020 10:51:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 58536AC10;
 Mon, 14 Dec 2020 10:51:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66c3c5be-216d-4b3e-88f2-2ac38e7de304
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607943065; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Z/krnIO45mVa8W6f4tFNy2JjQuPq7jNuJfyAFOaS6OQ=;
	b=qNj+l9hFCGwfvX1B2XdyiU+CJ4uIGDQhZd4zpCQzfC400/Fc1ACbW4slI1VL/m1DUpKOm+
	ISOVpNHCCJb5Scd1+lnOEwx6Jbdp7AM2Bohb98pBd8z9UvWErEcS8nUfhsH+nJ6wJD9TJy
	gZoj00WSwXNMehFgXk2zyJSN6FnNnuk=
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>
References: <20201214075615.25038-1-jgross@suse.com>
 <20201214075615.25038-2-jgross@suse.com>
 <9d3f8583-cfba-0174-3275-b418648f3f31@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v4 1/3] xen/arm: add support for
 run_in_exception_handler()
Message-ID: <3042ff2f-5d55-a132-a5fc-b214ec53e7a1@suse.com>
Date: Mon, 14 Dec 2020 11:51:04 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <9d3f8583-cfba-0174-3275-b418648f3f31@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="WWFYp3kwM8FW2yRDgI9sNxqyWg3j2mXny"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--WWFYp3kwM8FW2yRDgI9sNxqyWg3j2mXny
Content-Type: multipart/mixed; boundary="tjzsdnnAot89f7bwP9evAucCozeIb7ZPO";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>
Message-ID: <3042ff2f-5d55-a132-a5fc-b214ec53e7a1@suse.com>
Subject: Re: [PATCH v4 1/3] xen/arm: add support for
 run_in_exception_handler()
References: <20201214075615.25038-1-jgross@suse.com>
 <20201214075615.25038-2-jgross@suse.com>
 <9d3f8583-cfba-0174-3275-b418648f3f31@xen.org>
In-Reply-To: <9d3f8583-cfba-0174-3275-b418648f3f31@xen.org>

--tjzsdnnAot89f7bwP9evAucCozeIb7ZPO
Content-Type: multipart/mixed;
 boundary="------------15B64D5D3C81ECAD90C8B8A2"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------15B64D5D3C81ECAD90C8B8A2
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 14.12.20 11:17, Julien Grall wrote:
> Hi Juergen,
>
> On 14/12/2020 07:56, Juergen Gross wrote:
>> Add support to run a function in an exception handler for Arm. Do it
>> the same way as on x86 via a bug_frame.
>>
>> Unfortunately inline assembly on Arm seems to be less capable than on
>> x86, leading to functions called via run_in_exception_handler() having
>> to be globally visible.
>
> Jan already commented on this, so I am not going to comment again.

Maybe I can ask an Arm-specific question related to this:

In my experiments the only working solution was using the "i" constraint
for the function pointer. Do you know whether this is supported for all
gcc versions we care about?

Or is there another way to achieve the desired functionality? I'm now
using the following macros:

#define BUG_FRAME_run_fn(fn) do {                                      \
     asm ("1:"BUG_INSTR"\n"                                             \
          ".pushsection .bug_frames." __stringify(BUGFRAME_run_fn)      \
                        ", \"a\", %%progbits\n"                         \
          "2:\n"                                                        \
          ".p2align 2\n"                                                \
          ".long (1b - 2b)\n"                                           \
          ".long (%0 - 2b)\n"                                           \
          ".long 0\n"                                                   \
          ".hword 0, 0\n"                                               \
          ".popsection" :: "i" (fn));                                   \
} while (0)

#define run_in_exception_handler(fn) BUG_FRAME_run_fn(fn)

>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> V4:
>> - new patch
>>
>> I have verified the created bugframe is correct by inspecting the
>> created binary.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   xen/arch/arm/traps.c       | 10 +++++++++-
>>   xen/drivers/char/ns16550.c |  3 ++-
>>   xen/include/asm-arm/bug.h  | 32 +++++++++++++++++++++-----------
>>   3 files changed, 32 insertions(+), 13 deletions(-)
>>
>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>> index 22bd1bd4c6..6e677affe2 100644
>> --- a/xen/arch/arm/traps.c
>> +++ b/xen/arch/arm/traps.c
>> @@ -1236,8 +1236,16 @@ int do_bug_frame(const struct cpu_user_regs *regs, vaddr_t pc)
>>       if ( !bug )
>>           return -ENOENT;
>> +    if ( id == BUGFRAME_run_fn )
>> +    {
>> +        void (*fn)(const struct cpu_user_regs *) = bug_ptr(bug);
>> +
>> +        fn(regs);
>> +        return 0;
>> +    }
>> +
>>       /* WARN, BUG or ASSERT: decode the filename pointer and line number. */
>> -    filename = bug_file(bug);
>> +    filename = bug_ptr(bug);
>>       if ( !is_kernel(filename) )
>>           return -EINVAL;
>>       fixup = strlen(filename);
>> diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
>> index 9235d854fe..dd6500acc8 100644
>> --- a/xen/drivers/char/ns16550.c
>> +++ b/xen/drivers/char/ns16550.c
>> @@ -192,7 +192,8 @@ static void ns16550_interrupt(
>>   /* Safe: ns16550_poll() runs as softirq so not reentrant on a given CPU. */
>>   static DEFINE_PER_CPU(struct serial_port *, poll_port);
>> -static void __ns16550_poll(struct cpu_user_regs *regs)
>> +/* run_in_exception_handler() on Arm requires globally visible symbol. */
>> +void __ns16550_poll(struct cpu_user_regs *regs)
>>   {
>>       struct serial_port *port = this_cpu(poll_port);
>>       struct ns16550 *uart = port->uart;
>> diff --git a/xen/include/asm-arm/bug.h b/xen/include/asm-arm/bug.h
>> index 36c803357c..a7da2c306f 100644
>> --- a/xen/include/asm-arm/bug.h
>> +++ b/xen/include/asm-arm/bug.h
>> @@ -15,34 +15,38 @@
>>   struct bug_frame {
>>       signed int loc_disp;    /* Relative address to the bug address */
>> -    signed int file_disp;   /* Relative address to the filename */
>> +    signed int ptr_disp;    /* Relative address to the filename or function */
>>       signed int msg_disp;    /* Relative address to the predicate (for ASSERT) */
>>       uint16_t line;          /* Line number */
>>       uint32_t pad0:16;       /* Padding for 8-bytes align */
>>   };
>>   #define bug_loc(b) ((const void *)(b) + (b)->loc_disp)
>> -#define bug_file(b) ((const void *)(b) + (b)->file_disp);
>> +#define bug_ptr(b) ((const void *)(b) + (b)->ptr_disp);
>>   #define bug_line(b) ((b)->line)
>>   #define bug_msg(b) ((const char *)(b) + (b)->msg_disp)
>> -#define BUGFRAME_warn   0
>> -#define BUGFRAME_bug    1
>> -#define BUGFRAME_assert 2
>> +#define BUGFRAME_run_fn 0
>> +#define BUGFRAME_warn   1
>> +#define BUGFRAME_bug    2
>> +#define BUGFRAME_assert 3
>
> Why did you renumber it? IOW, why can't BUGFRAME_run_fn be defined as 3?

This matches the x86 definition. IMO there is no reason to have a
different definition, and this will make it more obvious that it might
be a good idea to have a common include/xen/bug.h header.


Juergen

--------------15B64D5D3C81ECAD90C8B8A2--

--tjzsdnnAot89f7bwP9evAucCozeIb7ZPO--

--WWFYp3kwM8FW2yRDgI9sNxqyWg3j2mXny
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/XQ5gFAwAAAAAACgkQsN6d1ii/Ey8x
YwgAnwkthDUEVYv4N4BHB/GdlZ+5LsoaLs38S4jbvWbtJe7jbUcRqHRCIkc2PWSUkZryCwZXiLcX
ZigTC1l0V6yggvLNTfAwzMywSYZf+Yd01OFz+zVvIG8UkQWLHI3qdyyJHaOjOjdS4CtsmT24yEGw
qEu/44W0baRBhqa0gB0Bjkvra9/REbbgRmwwjSqwx49i+42hE6tEDSSqitZWxNqmlQJYlOaWItDz
qV6mhPW32Wnj4rihX2eAlKXsaMkLuu3oYFqnLh4HIYgZhEgg5FaqRwiFHBjj2Gdaq1PFb2IJ9fub
XkD8RWVyGnTBVCTSu46U9EmKqv9qTrkzvNgnlM3axw==
=OA7Z
-----END PGP SIGNATURE-----

--WWFYp3kwM8FW2yRDgI9sNxqyWg3j2mXny--


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 10:56:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 10:56:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52121.91197 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kolX8-00066G-Ez; Mon, 14 Dec 2020 10:56:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52121.91197; Mon, 14 Dec 2020 10:56:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kolX8-000669-Bk; Mon, 14 Dec 2020 10:56:50 +0000
Received: by outflank-mailman (input) for mailman id 52121;
 Mon, 14 Dec 2020 10:56:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kolX6-000664-Ej
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 10:56:48 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kolX5-0003VM-Ml; Mon, 14 Dec 2020 10:56:47 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kolX5-0008Hg-Fa; Mon, 14 Dec 2020 10:56:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=6+RzVgD5q5DCgFXmvhUasC2LankJp1uJjdkB4X1yTZY=; b=6y7FngxccCVmAdcZeaASuuqYqa
	7uXiYI8B2KLd0fynhadj8GDtluQJQzi6HtDeNxyvfUsx4H03pcEW2AiWDHb7EgRO1LJYXUrnWJ3Ni
	77ZNGu+I9f/KqPjTua1NDphzsEfipNb0xBDXl+WWU2spVy8R3i5uwG5f7LWIpER/yWPU=;
Subject: Re: [XEN PATCH v1 1/1] Invalidate cache for cpus affinitized to the
 domain
To: "Shamsundara Havanur, Harsha" <havanur@amazon.com>,
 "jbeulich@suse.com" <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, "Wieczorkiewicz, Pawel" <wipawel@amazon.de>
References: <cover.1607686878.git.havanur@amazon.com>
 <aad47c43b7cd7a391492b8be7b881cd37e9764c7.1607686878.git.havanur@amazon.com>
 <149f7f6e-0ff4-affc-b65d-0f880fa27b13@suse.com>
 <81b5d64b0a08d217e0ae53606cd1b8afd59283e4.camel@amazon.com>
From: Julien Grall <julien@xen.org>
Message-ID: <bf70db2d-cf03-11cb-887e-aa38094b3d5f@xen.org>
Date: Mon, 14 Dec 2020 10:56:45 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <81b5d64b0a08d217e0ae53606cd1b8afd59283e4.camel@amazon.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Harsha,

On 14/12/2020 09:26, Shamsundara Havanur, Harsha wrote:
> On Mon, 2020-12-14 at 09:52 +0100, Jan Beulich wrote:
>> CAUTION: This email originated from outside of the organization. Do
>> not click links or open attachments unless you can confirm the sender
>> and know the content is safe.
>>
>>
>>
>> On 11.12.2020 12:44, Harsha Shamsundara Havanur wrote:
>>> A HVM domain flushes cache on all the cpus using
>>> `flush_all` macro which uses cpu_online_map, during
>>> i) creation of a new domain
>>> ii) when device-model op is performed
>>> iii) when domain is destructed.
>>>
>>> This triggers IPI on all the cpus, thus affecting other
>>> domains that are pinned to different pcpus. This patch
>>> restricts cache flush to the set of cpus affinitized to
>>> the current domain using `domain->dirty_cpumask`.
>>
>> But then you need to effect cache flushing when a CPU gets
>> taken out of domain->dirty_cpumask. I don't think you/we want
>> to do that.
>>
> If we do not restrict, it could lead to DoS attack, where a malicious
> guest could keep writing to MTRR registers or do a cache flush through
> DM Op and keep sending IPIs to other neighboring guests.

I saw Jan already answered about the alleged DoS, so I will just focus 
on the resolution.

I agree that in the ideal situation we want to limit the impact on the 
other vCPUs. However, we also need to make sure the cure is not worse 
than the symptoms.

The cache flush cannot be restricted in all pinning situations, because 
pinning doesn't imply the pCPU will be dedicated to a given vCPU, or 
even that the vCPU will stick to a pCPU (we may allow floating on a NUMA 
socket). Although your setup may offer this guarantee.

My knowledge in this area is quite limited. But below are a few 
questions that will hopefully help to make a decision.

The first question to answer is: can the flush be restricted in a setup 
where each vCPU runs on a dedicated pCPU (i.e. a partitioned system)?

If the answer is yes, then we should figure out whether using 
domain->dirty_cpumask would always be correct. For instance, a vCPU may 
not have run yet, so can we consider the associated pCPU cache to be 
consistent?

Another line of questioning is what we can do on systems supporting 
self-snooping. IOW, would it be possible to restrict the flush for all 
setups?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 10:57:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 10:57:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52123.91209 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kolXU-0006Cx-NH; Mon, 14 Dec 2020 10:57:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52123.91209; Mon, 14 Dec 2020 10:57:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kolXU-0006Cq-KA; Mon, 14 Dec 2020 10:57:12 +0000
Received: by outflank-mailman (input) for mailman id 52123;
 Mon, 14 Dec 2020 10:57:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UiKs=FS=redhat.com=cohuck@srs-us1.protection.inumbo.net>)
 id 1kolXT-0006Cd-FC
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 10:57:11 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 90338825-5843-47a3-97aa-cbf92c8ef737;
 Mon, 14 Dec 2020 10:57:09 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-301-0FiotJxVMLufwGePF1JPOQ-1; Mon, 14 Dec 2020 05:57:07 -0500
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com
 [10.5.11.16])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 57EC818C89C4;
 Mon, 14 Dec 2020 10:57:05 +0000 (UTC)
Received: from gondolin (ovpn-113-171.ams2.redhat.com [10.36.113.171])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 71D1671C94;
 Mon, 14 Dec 2020 10:56:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 90338825-5843-47a3-97aa-cbf92c8ef737
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607943429;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ihKJQubl2WVLBfE3VRV687ZDrqvVtE5WofmHPnkT1gY=;
	b=btSuYfTF4wTpftgYXh/YNlCyco5d5MX283/MGnPIT9aj1ZEFX9ILEKAqi9+Oto9qCpJ09a
	WGFlTyicsYEYCRu8WnZ3XfqwUlRWHo/UnPjkMAohoxYbEkRZdZHphCNgt4RAlROns8naK1
	7ohlZMqnxZKquzD2pVLxTFPf0rMKUNU=
X-MC-Unique: 0FiotJxVMLufwGePF1JPOQ-1
Date: Mon, 14 Dec 2020 11:56:46 +0100
From: Cornelia Huck <cohuck@redhat.com>
To: Eduardo Habkost <ehabkost@redhat.com>
Cc: qemu-devel@nongnu.org, Markus Armbruster <armbru@redhat.com>, Igor
 Mammedov <imammedo@redhat.com>, Stefan Berger <stefanb@linux.ibm.com>,
 =?UTF-8?B?TWFyYy1BbmRyw6k=?= Lureau <marcandre.lureau@redhat.com>, "Daniel
 P. Berrange" <berrange@redhat.com>, Philippe =?UTF-8?B?TWF0aGlldS1EYXVk?=
 =?UTF-8?B?w6k=?= <philmd@redhat.com>, John Snow <jsnow@redhat.com>, Kevin
 Wolf <kwolf@redhat.com>, Eric Blake <eblake@redhat.com>, Paolo Bonzini
 <pbonzini@redhat.com>, Stefan Berger <stefanb@linux.vnet.ibm.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Anthony Perard
 <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>, Max Reitz
 <mreitz@redhat.com>, Halil Pasic <pasic@linux.ibm.com>, Christian
 Borntraeger <borntraeger@de.ibm.com>, Richard Henderson <rth@twiddle.net>,
 David Hildenbrand <david@redhat.com>, Thomas Huth <thuth@redhat.com>,
 Matthew Rosato <mjrosato@linux.ibm.com>, Alex Williamson
 <alex.williamson@redhat.com>, xen-devel@lists.xenproject.org,
 qemu-block@nongnu.org, qemu-s390x@nongnu.org
Subject: Re: [PATCH v4 30/32] qdev: Rename qdev_get_prop_ptr() to
 object_field_prop_ptr()
Message-ID: <20201214115646.42998a6e.cohuck@redhat.com>
In-Reply-To: <20201211220529.2290218-31-ehabkost@redhat.com>
References: <20201211220529.2290218-1-ehabkost@redhat.com>
	<20201211220529.2290218-31-ehabkost@redhat.com>
Organization: Red Hat GmbH
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=cohuck@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Fri, 11 Dec 2020 17:05:27 -0500
Eduardo Habkost <ehabkost@redhat.com> wrote:

> The function will be moved to common QOM code, as it is not
> specific to TYPE_DEVICE anymore.
>
> Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
> ---
> Changes v1 -> v2:
> * Rename to object_field_prop_ptr() instead of object_static_prop_ptr()
> ---
> Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Anthony Perard <anthony.perard@citrix.com>
> Cc: Paul Durrant <paul@xen.org>
> Cc: Kevin Wolf <kwolf@redhat.com>
> Cc: Max Reitz <mreitz@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Daniel P. Berrangé" <berrange@redhat.com>
> Cc: Eduardo Habkost <ehabkost@redhat.com>
> Cc: Cornelia Huck <cohuck@redhat.com>
> Cc: Halil Pasic <pasic@linux.ibm.com>
> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> Cc: Richard Henderson <rth@twiddle.net>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Thomas Huth <thuth@redhat.com>
> Cc: Matthew Rosato <mjrosato@linux.ibm.com>
> Cc: Alex Williamson <alex.williamson@redhat.com>
> Cc: qemu-devel@nongnu.org
> Cc: xen-devel@lists.xenproject.org
> Cc: qemu-block@nongnu.org
> Cc: qemu-s390x@nongnu.org
> ---
>  include/hw/qdev-properties.h     |  2 +-
>  backends/tpm/tpm_util.c          |  6 ++--
>  hw/block/xen-block.c             |  4 +--
>  hw/core/qdev-properties-system.c | 50 +++++++++++++-------------
>  hw/core/qdev-properties.c        | 60 ++++++++++++++++----------------
>  hw/s390x/css.c                   |  4 +--
>  hw/s390x/s390-pci-bus.c          |  4 +--
>  hw/vfio/pci-quirks.c             |  4 +--
>  8 files changed, 67 insertions(+), 67 deletions(-)

Reviewed-by: Cornelia Huck <cohuck@redhat.com>



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 11:06:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 11:06:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52131.91220 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kolgY-0007GB-Lp; Mon, 14 Dec 2020 11:06:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52131.91220; Mon, 14 Dec 2020 11:06:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kolgY-0007G4-Ig; Mon, 14 Dec 2020 11:06:34 +0000
Received: by outflank-mailman (input) for mailman id 52131;
 Mon, 14 Dec 2020 11:06:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kolgX-0007Fw-9J; Mon, 14 Dec 2020 11:06:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kolgX-0003h5-4D; Mon, 14 Dec 2020 11:06:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kolgW-0007Yj-Qs; Mon, 14 Dec 2020 11:06:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kolgW-0004D2-QI; Mon, 14 Dec 2020 11:06:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rIhtYNifRNtqqQrRiGXOXZfPrstV2PStkwLr33WoguA=; b=NCcWqUeVHieqcYDGFxtKvi3R70
	Kmglto0mkeemeWM/4AQ+w4hGnsLEM5DNwKpww8dTrcO5UG1fpbJyvrIZSAWWOSg7jZ68jZJ0fJLD7
	YE1g7KhS6I+eBOwg18IMiiIgVAjoxUJDR//oxu+aIK56mlSRyyTkKiF/E17Wuws8Owog=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157513-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157513: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=d4633b36b94f7b4a1f41901657cbbff452173d35
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 14 Dec 2020 11:06:32 +0000

flight 157513 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157513/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 d4633b36b94f7b4a1f41901657cbbff452173d35
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    4 days
Failing since        157348  2020-12-09 15:39:39 Z    4 days   35 attempts
Testing same since   157402  2020-12-11 03:39:45 Z    3 days   28 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 360 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 11:11:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 11:11:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52139.91236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kollI-0008EK-97; Mon, 14 Dec 2020 11:11:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52139.91236; Mon, 14 Dec 2020 11:11:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kollI-0008ED-5o; Mon, 14 Dec 2020 11:11:28 +0000
Received: by outflank-mailman (input) for mailman id 52139;
 Mon, 14 Dec 2020 11:11:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XC/h=FS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kollG-0008E8-DL
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 11:11:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ab620cb7-f03d-42ab-9152-9c1195cf0725;
 Mon, 14 Dec 2020 11:11:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A023CAC10;
 Mon, 14 Dec 2020 11:11:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ab620cb7-f03d-42ab-9152-9c1195cf0725
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607944284; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=emJZN2S2qXQhrmBMSpLQuldS8IQJcUyy+pQmjM0bZu0=;
	b=IACYz4Db6DT/EV5dDwRA0N7mLhoub7jhdkNj8d7WouS5Ko1H/6733h45EOR2JpK83E/ow2
	uia2bjaEJA9TfTVKeZVY789KBQb9v5R7oYl8fm0Bz7rjH+uYMKjWREsyVuWss0HGNloa0C
	mIuTXOKJgV4/KtvOFYOTIz/fCebSFZY=
Subject: Re: [PATCH v4 3/3] xen: add support for automatic debug key actions
 in case of crash
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201214075615.25038-1-jgross@suse.com>
 <20201214075615.25038-4-jgross@suse.com>
 <8dc62a3f-db2d-51b9-1264-28af3a13052d@xen.org>
From: Jürgen Groß <jgross@suse.com>
Message-ID: <08c72f15-c89b-5887-b828-a7cfdeb4834a@suse.com>
Date: Mon, 14 Dec 2020 12:11:23 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <8dc62a3f-db2d-51b9-1264-28af3a13052d@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="9U3WsiJymg4oJTBtMr0uOCQM3lKl1Fs1k"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--9U3WsiJymg4oJTBtMr0uOCQM3lKl1Fs1k
Content-Type: multipart/mixed; boundary="83vukAr0ktrel4PgkgiFLSgP8sMRwveNy";
 protected-headers="v1"
From: Jürgen Groß <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Message-ID: <08c72f15-c89b-5887-b828-a7cfdeb4834a@suse.com>
Subject: Re: [PATCH v4 3/3] xen: add support for automatic debug key actions
 in case of crash
References: <20201214075615.25038-1-jgross@suse.com>
 <20201214075615.25038-4-jgross@suse.com>
 <8dc62a3f-db2d-51b9-1264-28af3a13052d@xen.org>
In-Reply-To: <8dc62a3f-db2d-51b9-1264-28af3a13052d@xen.org>

--83vukAr0ktrel4PgkgiFLSgP8sMRwveNy
Content-Type: multipart/mixed;
 boundary="------------C3C4FBCB5B66B8C291814506"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------C3C4FBCB5B66B8C291814506
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 14.12.20 11:24, Julien Grall wrote:
> Hi Juergen,
>
> On 14/12/2020 07:56, Juergen Gross wrote:
>> diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
>> index de120fa092..806355ed8b 100644
>> --- a/xen/common/keyhandler.c
>> +++ b/xen/common/keyhandler.c
>> @@ -3,7 +3,9 @@
>>   */
>>  #include <asm/regs.h>
>> +#include <xen/delay.h>
>>  #include <xen/keyhandler.h>
>> +#include <xen/param.h>
>>  #include <xen/shutdown.h>
>>  #include <xen/event.h>
>>  #include <xen/console.h>
>> @@ -519,6 +521,59 @@ void __init initialize_keytable(void)
>>      }
>>  }
>> +#define CRASHACTION_SIZE  32
>> +static char crash_debug_panic[CRASHACTION_SIZE];
>> +string_runtime_param("crash-debug-panic", crash_debug_panic);
>> +static char crash_debug_hwdom[CRASHACTION_SIZE];
>> +string_runtime_param("crash-debug-hwdom", crash_debug_hwdom);
>> +static char crash_debug_watchdog[CRASHACTION_SIZE];
>> +string_runtime_param("crash-debug-watchdog", crash_debug_watchdog);
>> +#ifdef CONFIG_KEXEC
>> +static char crash_debug_kexeccmd[CRASHACTION_SIZE];
>> +string_runtime_param("crash-debug-kexeccmd", crash_debug_kexeccmd);
>> +#else
>> +#define crash_debug_kexeccmd NULL
>> +#endif
>> +static char crash_debug_debugkey[CRASHACTION_SIZE];
>> +string_runtime_param("crash-debug-debugkey", crash_debug_debugkey);
>> +
>> +void keyhandler_crash_action(enum crash_reason reason)
>> +{
>> +    static const char *const crash_action[] = {
>> +        [CRASHREASON_PANIC] = crash_debug_panic,
>> +        [CRASHREASON_HWDOM] = crash_debug_hwdom,
>> +        [CRASHREASON_WATCHDOG] = crash_debug_watchdog,
>> +        [CRASHREASON_KEXECCMD] = crash_debug_kexeccmd,
>> +        [CRASHREASON_DEBUGKEY] = crash_debug_debugkey,
>> +    };
>> +    static bool ignore;
>> +    const char *action;
>> +
>> +    /* Some handlers are not functional too early. */
>
> Can you explain in the commit message why this is necessary (An example
> would be useful)?

Okay.


Juergen

--------------C3C4FBCB5B66B8C291814506
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

[OpenPGP public key block (quoted-printable-mangled base64) omitted]

--------------C3C4FBCB5B66B8C291814506--

--83vukAr0ktrel4PgkgiFLSgP8sMRwveNy--

--9U3WsiJymg4oJTBtMr0uOCQM3lKl1Fs1k
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/XSFsFAwAAAAAACgkQsN6d1ii/Ey+2
Cgf5AZZ6mmzodkEf0x9u02teHJYOe+U9e4jZ7BbR99T8F9O3HNPDDATzflxznFEa3OjzvJOh1hXN
tgiLLOnTJsRnJ87Mjuj/qAnWNDQ7Gaxd235ToFslR00jjbe12u29mOoZ7Xem1atZ7oYLjdFuxqmk
oDG2Un+oIe0v22zjQU/edMWEkCv0BfBaVrin4bLjedxZQXfyMowmWvPxo/uEOlqvXVkqmIrv+xpc
+O12R4oJzaZLqt5+IwXkeLsbWm8NIda6OJM7V7kAwxieZftj7EFm4GA1fzU+YqSC1OU8ou6P0Nd+
Sdtgx996QUqmgfD4sd2w/dT3+2+zxI7mC7Owd9BdQQ==
=BbKb
-----END PGP SIGNATURE-----

--9U3WsiJymg4oJTBtMr0uOCQM3lKl1Fs1k--


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 11:14:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 11:14:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52145.91247 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koloa-0008O9-Q2; Mon, 14 Dec 2020 11:14:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52145.91247; Mon, 14 Dec 2020 11:14:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koloa-0008O2-Mz; Mon, 14 Dec 2020 11:14:52 +0000
Received: by outflank-mailman (input) for mailman id 52145;
 Mon, 14 Dec 2020 11:14:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1koloZ-0008Nx-0o
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 11:14:51 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1koloW-0003q3-9B; Mon, 14 Dec 2020 11:14:48 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1koloV-0001Ar-TR; Mon, 14 Dec 2020 11:14:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=zGBYvUDhrp4vJoOWJ+Hmfmk6AM3uCACamJcia3BpCSo=; b=I1bJFLQIgLkmewfbGnhIxnw2hQ
	WyitygjzISJGrr8L3xbEKSDWKFNGjkka7YnEs0ihFSlKithJdFeBqHP8SKCjC4a94C9Iz+9HuFoI7
	87fegHPOnIjaaIAvCCinoPO5BxJOML74H6qRWB4cBEiCPaovhuZW4mIvo1O3h575a03s=;
Subject: Re: [PATCH v4 1/3] xen/arm: add support for
 run_in_exception_handler()
To: Jürgen Groß <jgross@suse.com>,
 xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>
References: <20201214075615.25038-1-jgross@suse.com>
 <20201214075615.25038-2-jgross@suse.com>
 <9d3f8583-cfba-0174-3275-b418648f3f31@xen.org>
 <3042ff2f-5d55-a132-a5fc-b214ec53e7a1@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <4a632e73-87ea-c037-09e1-dfc88d19d9b2@xen.org>
Date: Mon, 14 Dec 2020 11:14:45 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <3042ff2f-5d55-a132-a5fc-b214ec53e7a1@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 14/12/2020 10:51, Jürgen Groß wrote:
> On 14.12.20 11:17, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 14/12/2020 07:56, Juergen Gross wrote:
>>> Add support to run a function in an exception handler for Arm. Do it
>>> the same way as on x86 via a bug_frame.
>>>
>>> Unfortunately inline assembly on Arm seems to be less capable than on
>>> x86, leading to functions called via run_in_exception_handler() having
>>> to be globally visible.
>>
>> Jan already commented on this, so I am not going to comment again.
> 
> Maybe I can ask some Arm specific question related to this:
> 
> In my experiments the only working solution was using the "i" constraint
> for the function pointer. Do you know whether this is supported for all
> gcc versions we care about?

I don't know for sure. However, Linux has been using "i" since 2012. So 
I would assume it ought to be fine for all the versions we care about.

> 
> Or is there another way to achieve the desired functionality? I'm using
> now the following macros:
> 
> #define BUG_FRAME_run_fn(fn) do {                                      \
>      asm ("1:"BUG_INSTR"\n"                                             \
>           ".pushsection .bug_frames." __stringify(BUGFRAME_run_fn)      \
>                         ", \"a\", %%progbits\n"                         \
>           "2:\n"                                                        \
>           ".p2align 2\n"                                                \
>           ".long (1b - 2b)\n"                                           \
>           ".long (%0 - 2b)\n"                                           \
>           ".long 0\n"                                                   \
>           ".hword 0, 0\n"                                               \
>           ".popsection" :: "i" (fn));                                   \
> } while (0)

May I ask why we need a new macro?

> 
> #define run_in_exception_handler(fn) BUG_FRAME_run_fn(fn)
> 
>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>> V4:
>>> - new patch
>>>
>>> I have verified the created bugframe is correct by inspecting the
>>> created binary.
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>>   xen/arch/arm/traps.c       | 10 +++++++++-
>>>   xen/drivers/char/ns16550.c |  3 ++-
>>>   xen/include/asm-arm/bug.h  | 32 +++++++++++++++++++++-----------
>>>   3 files changed, 32 insertions(+), 13 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>>> index 22bd1bd4c6..6e677affe2 100644
>>> --- a/xen/arch/arm/traps.c
>>> +++ b/xen/arch/arm/traps.c
>>> @@ -1236,8 +1236,16 @@ int do_bug_frame(const struct cpu_user_regs 
>>> *regs, vaddr_t pc)
>>>       if ( !bug )
>>>           return -ENOENT;
>>> +    if ( id == BUGFRAME_run_fn )
>>> +    {
>>> +        void (*fn)(const struct cpu_user_regs *) = bug_ptr(bug);
>>> +
>>> +        fn(regs);
>>> +        return 0;
>>> +    }
>>> +
>>>       /* WARN, BUG or ASSERT: decode the filename pointer and line 
>>> number. */
>>> -    filename = bug_file(bug);
>>> +    filename = bug_ptr(bug);
>>>       if ( !is_kernel(filename) )
>>>           return -EINVAL;
>>>       fixup = strlen(filename);
>>> diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
>>> index 9235d854fe..dd6500acc8 100644
>>> --- a/xen/drivers/char/ns16550.c
>>> +++ b/xen/drivers/char/ns16550.c
>>> @@ -192,7 +192,8 @@ static void ns16550_interrupt(
>>>   /* Safe: ns16550_poll() runs as softirq so not reentrant on a given 
>>> CPU. */
>>>   static DEFINE_PER_CPU(struct serial_port *, poll_port);
>>> -static void __ns16550_poll(struct cpu_user_regs *regs)
>>> +/* run_in_exception_handler() on Arm requires globally visible 
>>> symbol. */
>>> +void __ns16550_poll(struct cpu_user_regs *regs)
>>>   {
>>>       struct serial_port *port = this_cpu(poll_port);
>>>       struct ns16550 *uart = port->uart;
>>> diff --git a/xen/include/asm-arm/bug.h b/xen/include/asm-arm/bug.h
>>> index 36c803357c..a7da2c306f 100644
>>> --- a/xen/include/asm-arm/bug.h
>>> +++ b/xen/include/asm-arm/bug.h
>>> @@ -15,34 +15,38 @@
>>>   struct bug_frame {
>>>       signed int loc_disp;    /* Relative address to the bug address */
>>> -    signed int file_disp;   /* Relative address to the filename */
>>> +    signed int ptr_disp;    /* Relative address to the filename or 
>>> function */
>>>       signed int msg_disp;    /* Relative address to the predicate 
>>> (for ASSERT) */
>>>       uint16_t line;          /* Line number */
>>>       uint32_t pad0:16;       /* Padding for 8-bytes align */
>>>   };
>>>   #define bug_loc(b) ((const void *)(b) + (b)->loc_disp)
>>> -#define bug_file(b) ((const void *)(b) + (b)->file_disp);
>>> +#define bug_ptr(b) ((const void *)(b) + (b)->ptr_disp);
>>>   #define bug_line(b) ((b)->line)
>>>   #define bug_msg(b) ((const char *)(b) + (b)->msg_disp)
>>> -#define BUGFRAME_warn   0
>>> -#define BUGFRAME_bug    1
>>> -#define BUGFRAME_assert 2
>>> +#define BUGFRAME_run_fn 0
>>> +#define BUGFRAME_warn   1
>>> +#define BUGFRAME_bug    2
>>> +#define BUGFRAME_assert 3
>>
>> Why did you renumber it? IOW, why can't BUGFRAME_run_fn be defined as 3?
> 
> This matches x86 definition. IMO there is no reason to have a different
> definition and this will make it more obvious that it might be a good
> idea to have a common include/xen/bug.h header.

I agree that a common header would be nice, although I am not sure it is 
achievable. However, my point here is that this change would have 
deserved half a sentence in the commit message, because to me it looks 
like unwanted churn.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 11:22:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 11:22:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52156.91260 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kolvN-0000yK-Nf; Mon, 14 Dec 2020 11:21:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52156.91260; Mon, 14 Dec 2020 11:21:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kolvN-0000yD-KQ; Mon, 14 Dec 2020 11:21:53 +0000
Received: by outflank-mailman (input) for mailman id 52156;
 Mon, 14 Dec 2020 11:21:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XC/h=FS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kolvN-0000y8-45
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 11:21:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4563709c-3621-443d-92ee-5b89aa8bc2ee;
 Mon, 14 Dec 2020 11:21:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 93F8EAD60;
 Mon, 14 Dec 2020 11:21:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4563709c-3621-443d-92ee-5b89aa8bc2ee
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607944910; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=DNYIxy+r7yeoJ7MK2nSf2PZmeT9Cry+FFG0QZQaOnAU=;
	b=HpCmy6enjvJ6RrL5UCLYMa9mirIls1kQH13ATI6WiUJef/iTDTZu3BoL2IE4JquundiZxf
	4b1bvvnaA9eLU+H7/WzoTRT46fpQCijNrTmCAJuOdaq5KNQ6lTVMwie+JbwgWOzDb34U0R
	WVWH3WMzcw7vWTMbsRqeuvffl2ZrZXY=
Subject: Re: [PATCH v4 1/3] xen/arm: add support for
 run_in_exception_handler()
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>
References: <20201214075615.25038-1-jgross@suse.com>
 <20201214075615.25038-2-jgross@suse.com>
 <9d3f8583-cfba-0174-3275-b418648f3f31@xen.org>
 <3042ff2f-5d55-a132-a5fc-b214ec53e7a1@suse.com>
 <4a632e73-87ea-c037-09e1-dfc88d19d9b2@xen.org>
From: Jürgen Groß <jgross@suse.com>
Message-ID: <3f49eb17-0b2a-5b4f-81db-66454f13cf90@suse.com>
Date: Mon, 14 Dec 2020 12:21:49 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <4a632e73-87ea-c037-09e1-dfc88d19d9b2@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed

On 14.12.20 12:14, Julien Grall wrote:
> Hi Juergen,
>
> On 14/12/2020 10:51, Jürgen Groß wrote:
>> On 14.12.20 11:17, Julien Grall wrote:
>>> Hi Juergen,
>>>
>>> On 14/12/2020 07:56, Juergen Gross wrote:
>>>> Add support to run a function in an exception handler for Arm. Do it
>>>> the same way as on x86 via a bug_frame.
>>>>
>>>> Unfortunately inline assembly on Arm seems to be less capable than on
>>>> x86, leading to functions called via run_in_exception_handler() having
>>>> to be globally visible.
>>>
>>> Jan already commented on this, so I am not going to comment again.
>>
>> Maybe I can ask an Arm-specific question related to this:
>>
>> In my experiments the only working solution was using the "i" constraint
>> for the function pointer. Do you know whether this is supported for all
>> gcc versions we care about?
>
> I don't know for sure. However, Linux has been using "i" since 2012. So
> I would assume it ought to be fine for all the versions we care about.
>
>>
>> Or is there another way to achieve the desired functionality? I'm using
>> now the following macros:
>>
>> #define BUG_FRAME_run_fn(fn) do {                                      \
>>     asm ("1:"BUG_INSTR"\n"                                             \
>>          ".pushsection .bug_frames." __stringify(BUGFRAME_run_fn)      \
>>                        ", \"a\", %%progbits\n"                         \
>>          "2:\n"                                                        \
>>          ".p2align 2\n"                                                \
>>          ".long (1b - 2b)\n"                                           \
>>          ".long (%0 - 2b)\n"                                           \
>>          ".long 0\n"                                                   \
>>          ".hword 0, 0\n"                                               \
>>          ".popsection" :: "i" (fn));                                   \
>> } while (0)
>
> May I ask why we need a new macro?

Using a common one might be possible, but not with the way BUG_FRAME()
is currently defined: gcc complained about the input parameter in the
ASSERT() and WARN() cases.

I might be missing something, but this was the fastest way to at least
confirm the scheme is working for Arm.

>
>>
>> #define run_in_exception_handler(fn) BUG_FRAME_run_fn(fn)
>>
>>>
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>> ---
>>>> V4:
>>>> - new patch
>>>>
>>>> I have verified the created bugframe is correct by inspecting the
>>>> created binary.
>>>>
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>> ---
>>>>  xen/arch/arm/traps.c       | 10 +++++++++-
>>>>  xen/drivers/char/ns16550.c |  3 ++-
>>>>  xen/include/asm-arm/bug.h  | 32 +++++++++++++++++++++-----------
>>>>  3 files changed, 32 insertions(+), 13 deletions(-)
>>>>
>>>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>>>> index 22bd1bd4c6..6e677affe2 100644
>>>> --- a/xen/arch/arm/traps.c
>>>> +++ b/xen/arch/arm/traps.c
>>>> @@ -1236,8 +1236,16 @@ int do_bug_frame(const struct cpu_user_regs *regs, vaddr_t pc)
>>>>      if ( !bug )
>>>>          return -ENOENT;
>>>> +    if ( id == BUGFRAME_run_fn )
>>>> +    {
>>>> +        void (*fn)(const struct cpu_user_regs *) = bug_ptr(bug);
>>>> +
>>>> +        fn(regs);
>>>> +        return 0;
>>>> +    }
>>>> +
>>>>      /* WARN, BUG or ASSERT: decode the filename pointer and line number. */
>>>> -    filename = bug_file(bug);
>>>> +    filename = bug_ptr(bug);
>>>>      if ( !is_kernel(filename) )
>>>>          return -EINVAL;
>>>>      fixup = strlen(filename);
>>>> diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
>>>> index 9235d854fe..dd6500acc8 100644
>>>> --- a/xen/drivers/char/ns16550.c
>>>> +++ b/xen/drivers/char/ns16550.c
>>>> @@ -192,7 +192,8 @@ static void ns16550_interrupt(
>>>>  /* Safe: ns16550_poll() runs as softirq so not reentrant on a given CPU. */
>>>>  static DEFINE_PER_CPU(struct serial_port *, poll_port);
>>>> -static void __ns16550_poll(struct cpu_user_regs *regs)
>>>> +/* run_in_exception_handler() on Arm requires globally visible symbol. */
>>>> +void __ns16550_poll(struct cpu_user_regs *regs)
>>>>  {
>>>>      struct serial_port *port = this_cpu(poll_port);
>>>>      struct ns16550 *uart = port->uart;
>>>> diff --git a/xen/include/asm-arm/bug.h b/xen/include/asm-arm/bug.h
>>>> index 36c803357c..a7da2c306f 100644
>>>> --- a/xen/include/asm-arm/bug.h
>>>> +++ b/xen/include/asm-arm/bug.h
>>>> @@ -15,34 +15,38 @@
>>>>  struct bug_frame {
>>>>      signed int loc_disp;    /* Relative address to the bug address */
>>>> -    signed int file_disp;   /* Relative address to the filename */
>>>> +    signed int ptr_disp;    /* Relative address to the filename or function */
>>>>      signed int msg_disp;    /* Relative address to the predicate (for ASSERT) */
>>>>      uint16_t line;          /* Line number */
>>>>      uint32_t pad0:16;       /* Padding for 8-bytes align */
>>>>  };
>>>>  #define bug_loc(b) ((const void *)(b) + (b)->loc_disp)
>>>> -#define bug_file(b) ((const void *)(b) + (b)->file_disp);
>>>> +#define bug_ptr(b) ((const void *)(b) + (b)->ptr_disp);
>>>>  #define bug_line(b) ((b)->line)
>>>>  #define bug_msg(b) ((const char *)(b) + (b)->msg_disp)
>>>> -#define BUGFRAME_warn   0
>>>> -#define BUGFRAME_bug    1
>>>> -#define BUGFRAME_assert 2
>>>> +#define BUGFRAME_run_fn 0
>>>> +#define BUGFRAME_warn   1
>>>> +#define BUGFRAME_bug    2
>>>> +#define BUGFRAME_assert 3
>>>
>>> Why did you renumber it? IOW, why can't BUGFRAME_run_fn be defined as 3?
>>
>> This matches the x86 definition. IMO there is no reason to have a different
>> definition, and this will make it more obvious that it might be a good
>> idea to have a common include/xen/bug.h header.
>
> I agree that a common header would be nice. Although, I am not sure if
> this is achievable. However, my point here is this change would have
> deserved half a sentence in the commit message, because to me this looks
> like unwanted churn.

Okay.


Juergen



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 11:47:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 11:47:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52167.91272 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1komKC-000308-QY; Mon, 14 Dec 2020 11:47:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52167.91272; Mon, 14 Dec 2020 11:47:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1komKC-000301-Ne; Mon, 14 Dec 2020 11:47:32 +0000
Received: by outflank-mailman (input) for mailman id 52167;
 Mon, 14 Dec 2020 11:47:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1komKB-0002zw-RL
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 11:47:31 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1komK9-0004QA-BV; Mon, 14 Dec 2020 11:47:29 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1komK9-0003L5-38; Mon, 14 Dec 2020 11:47:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=DYCenHSEbbBZ7Dfw8uI9vo1283A7/U21yPi3ldUppck=; b=uWvbjGrQpQM54Rc8v7Etvuw5V7
	oohSH/Vtv4XCEK0qARLaHOmsdEFPFIPt8Y2DBXnmr6CeW8Ox7QhK2TdvLSplE9I0hSRJIYDWgXe3Z
	5Los1ozZ3qQeLhPh0DuT8IAWYBnVLO9TETy4R5Xa+xWjfS8jmzYPriCHHU2TbPZXl6E8=;
Subject: Re: [PATCH v4 1/3] xen/arm: add support for
 run_in_exception_handler()
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>
References: <20201214075615.25038-1-jgross@suse.com>
 <20201214075615.25038-2-jgross@suse.com>
 <9d3f8583-cfba-0174-3275-b418648f3f31@xen.org>
 <3042ff2f-5d55-a132-a5fc-b214ec53e7a1@suse.com>
 <4a632e73-87ea-c037-09e1-dfc88d19d9b2@xen.org>
 <3f49eb17-0b2a-5b4f-81db-66454f13cf90@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <f3f54c23-7b4f-57d8-29d1-99019a02b824@xen.org>
Date: Mon, 14 Dec 2020 11:47:26 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <3f49eb17-0b2a-5b4f-81db-66454f13cf90@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 14/12/2020 11:21, Jürgen Groß wrote:
> On 14.12.20 12:14, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 14/12/2020 10:51, Jürgen Groß wrote:
>>> On 14.12.20 11:17, Julien Grall wrote:
>>>> Hi Juergen,
>>>>
>>>> On 14/12/2020 07:56, Juergen Gross wrote:
>>>>> Add support to run a function in an exception handler for Arm. Do it
>>>>> the same way as on x86 via a bug_frame.
>>>>>
>>>>> Unfortunately inline assembly on Arm seems to be less capable than on
>>>>> x86, leading to functions called via run_in_exception_handler() having
>>>>> to be globally visible.
>>>>
>>>> Jan already commented on this, so I am not going to comment again.
>>>
>>> Maybe I can ask an Arm-specific question related to this:
>>>
>>> In my experiments the only working solution was using the "i" constraint
>>> for the function pointer. Do you know whether this is supported for all
>>> gcc versions we care about?
>>
>> I don't know for sure. However, Linux has been using "i" since 2012.
>> So I would assume it ought to be fine for all the versions we care about.
>>
>>>
>>> Or is there another way to achieve the desired functionality? I'm using
>>> now the following macros:
>>>
>>> #define BUG_FRAME_run_fn(fn) do {                                      \
>>>     asm ("1:"BUG_INSTR"\n"                                             \
>>>          ".pushsection .bug_frames." __stringify(BUGFRAME_run_fn)      \
>>>                        ", \"a\", %%progbits\n"                         \
>>>          "2:\n"                                                        \
>>>          ".p2align 2\n"                                                \
>>>          ".long (1b - 2b)\n"                                           \
>>>          ".long (%0 - 2b)\n"                                           \
>>>          ".long 0\n"                                                   \
>>>          ".hword 0, 0\n"                                               \
>>>          ".popsection" :: "i" (fn));                                   \
>>> } while (0)
>>
>> May I ask why we need a new macro?
> 
> Using a common one might be possible, but not with the current way how
> BUG_FRAME() is defined: gcc complained about the input parameter in case
> of ASSERT() and WARN().

Could you share the code and the error message?

> 
> I might be missing something, but this was the fastest way to at least
> confirm the scheme is working for Arm.

Makes sense. I also don't have much time to invest in trying to have a
common macro, so I am happy with a new macro.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 11:59:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 11:59:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52188.91284 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1komVq-0004Cb-Sx; Mon, 14 Dec 2020 11:59:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52188.91284; Mon, 14 Dec 2020 11:59:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1komVq-0004CU-Px; Mon, 14 Dec 2020 11:59:34 +0000
Received: by outflank-mailman (input) for mailman id 52188;
 Mon, 14 Dec 2020 11:59:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XC/h=FS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1komVp-0004CP-Kv
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 11:59:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e3a35a4c-ade5-4238-92d1-2de59ce397d5;
 Mon, 14 Dec 2020 11:59:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5C8BDAD2B;
 Mon, 14 Dec 2020 11:59:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e3a35a4c-ade5-4238-92d1-2de59ce397d5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607947171; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=m2nDYuI605UdOo8dGxCEMaTM8HZdQ5wFPeXMlUV71SE=;
	b=p8G6RNn3A+HrrjS+HI51104brh2Zt1/46RDKtqgx7GN6xqhViHpeNIXCaVUUryGsBiL6Lk
	82WYw1D5UMVos8wJbzAqnDXabU+MdOxGY8/UF7p1IEbM/+DCaZvrc51Rvmv7ovA3SMX/n5
	ZsB0PK2jX1dX4SfKyRJeKiSo03Rk2us=
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>
References: <20201214075615.25038-1-jgross@suse.com>
 <20201214075615.25038-2-jgross@suse.com>
 <9d3f8583-cfba-0174-3275-b418648f3f31@xen.org>
 <3042ff2f-5d55-a132-a5fc-b214ec53e7a1@suse.com>
 <4a632e73-87ea-c037-09e1-dfc88d19d9b2@xen.org>
 <3f49eb17-0b2a-5b4f-81db-66454f13cf90@suse.com>
Subject: Re: [PATCH v4 1/3] xen/arm: add support for
 run_in_exception_handler()
Message-ID: <fb1c2a7e-e14a-e982-f145-e34f79fd746b@suse.com>
Date: Mon, 14 Dec 2020 12:59:30 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <3f49eb17-0b2a-5b4f-81db-66454f13cf90@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed

On 14.12.20 12:21, J=C3=BCrgen Gro=C3=9F wrote:
> On 14.12.20 12:14, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 14/12/2020 10:51, J=C3=BCrgen Gro=C3=9F wrote:
>>> On 14.12.20 11:17, Julien Grall wrote:
>>>> Hi Juergen,
>>>>
>>>> On 14/12/2020 07:56, Juergen Gross wrote:
>>>>> Add support to run a function in an exception handler for Arm. Do i=
t
>>>>> the same way as on x86 via a bug_frame.
>>>>>
>>>>> Unfortunately inline assembly on Arm seems to be less capable than =
on
>>>>> x86, leading to functions called via run_in_exception_handler() hav=
ing
>>>>> to be globally visible.
>>>>
>>>> Jan already commented on this, so I am not going to comment again.
>>>
>>> Maybe I can ask some Arm specific question related to this:
>>>
>>> In my experiments the only working solution was using the "i" constra=
int
>>> for the function pointer. Do you know whether this is supported for a=
ll
>>> gcc versions we care about?
>>
>> I don't know for sure. However, Linux has been using "i" since 2012.=20
>> So I would assume it ought to be fine for all the version we care.
>>
>>>
>>> Or is there another way to achieve the desired functionality? I'm usi=
ng
>>> now the following macros:
>>>
>>> #define BUG_FRAME_run_fn(fn) do {=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 \
>>> =C2=A0=C2=A0=C2=A0=C2=A0 asm=20
>>> ("1:"BUG_INSTR"\n"=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 \
>>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 ".pushsection =
=2Ebug_frames."=20
>>> __stringify(BUGFRAME_run_fn)=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 \
>>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 ", =
\"a\",=20
>>> %%progbits\n"=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
=C2=A0=C2=A0 \
>>>          =20
>>> "2:\n"=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 \
>>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 ".p2align=20
>>> 2\n"                                                           \
>>>          ".long (1b - 2b)\n"                                   \
>>>          ".long (%0 - 2b)\n"                                   \
>>>          ".long 0\n"                                           \
>>>          ".hword 0, 0\n"                                       \
>>>          ".popsection" :: "i" (fn));                           \
>>> } while (0)
>>
>> May I ask why we need a new macro?
> 
> Using a common one might be possible, but not with the way BUG_FRAME()
> is currently defined: gcc complained about the input parameter in case
> of ASSERT() and WARN().
> 
> I might be missing something, but this was the fastest way to at least
> confirm the scheme is working for Arm.

Okay, I think I have found a way to use a common macro, which seems
even simpler than the original one:

#define BUG_FRAME(type, line, ptr, msg) do {                        \
     BUILD_BUG_ON((line) >> 16);                                    \
     BUILD_BUG_ON((type) >= BUGFRAME_NR);                           \
     asm ("1:"BUG_INSTR"\n"                                         \
          ".pushsection .bug_frames." __stringify(type)             \
                        ", \"a\", %%progbits\n"                     \
          "2:\n"                                                    \
          ".p2align 2\n"                                            \
          ".long (1b - 2b)\n"                                       \
          ".long (%0 - 2b)\n"                                       \
          ".long (%1 - 2b)\n"                                       \
          ".hword " __stringify(line) ", 0\n"                       \
          ".popsection" :: "i" (ptr), "i" (msg));                   \
} while (0)


Juergen



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 12:27:26 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157512-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157512: tolerable FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 14 Dec 2020 12:27:06 +0000

flight 157512 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157512/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 157451 pass in 157512
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 157451

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157451
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157451
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157451
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157451
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157451
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157451
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157451
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157451
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157451
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157476
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157476
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4
baseline version:
 xen                  8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4

Last test of basis   157512  2020-12-14 01:51:26 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 13:22:43 2020
Subject: Re: [PATCH] Revert "x86/mm: drop guest_get_eff_l1e()"
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Manuel Bouyer <bouyer@antioche.eu.org>, Xen-devel
	<xen-devel@lists.xenproject.org>
References: <20201211141615.12489-1-andrew.cooper3@citrix.com>
 <454ec720-b823-c2aa-7de4-84c14db2b96f@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <3ab84773-6fec-b653-0d5b-a9374ef336c9@citrix.com>
Date: Mon, 14 Dec 2020 13:21:27 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <454ec720-b823-c2aa-7de4-84c14db2b96f@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB

On 14/12/2020 08:27, Jan Beulich wrote:
> On 11.12.2020 15:16, Andrew Cooper wrote:
>> This reverts commit 9ff9705647646aa937b5f5c1426a64c69a62b3bd.
>>
>> The change is only correct in the original context of XSA-286, where Xen's use
>> of the linear pagetables was dropped.  However, performance problems
>> interfered with that plan, and XSA-286 was fixed differently.
>>
>> This broke Xen's lazy faulting of the LDT for 64bit PV guests when an access
>> was first encountered in user context.  Xen would proceed to read the
>> registered LDT virtual address out of the user pagetables, not the kernel
>> pagetables.
>>
>> Given the nature of the bug, it would have also interfered with the IO
>> permission bitmap functionality of userspace, which similarly needs to read
>> data using the kernel pagetables.
> This paragraph wants dropping afaict - guest_io_okay() has
> explicit calls to toggle_guest_pt(), and hence is unaffected by
> the bug here (and there is in particular page-table reading
> involved there). Then ...
>
>> Reported-by: Manuel Bouyer <bouyer@antioche.eu.org>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Tested-by: Manuel Bouyer <bouyer@antioche.eu.org>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>
> I was wondering however whether we really want a full revert. I've
> locally made the change below as an alternative.
>
> Jan
>
> x86/PV: guest_get_eff_kern_l1e() may still need to switch page tables
>
> While indeed unnecessary for pv_ro_page_fault(), pv_map_ldt_shadow_page()
> may run when guest user mode is active, and hence may need to switch to
> the kernel page tables in order to retrieve an LDT page mapping.
>
> Fixes: 9ff970564764 ("x86/mm: drop guest_get_eff_l1e()")
> Reported-by: Manuel Bouyer <bouyer@antioche.eu.org>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> This is the alternative to fully reverting the offending commit.

Hmm yes - I think I prefer this, because we don't really want to keep
the non-kern alternative.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> however ...

> I've also been considering dropping the paging-mode-translate ASSERT(),
> now that we always have translate == external.

I'd suggest not making that change here in this bugfix.  I think we do
want to alter how we do asserts like this, and there are other similarly
impacted code blocks.

>
> --- a/xen/arch/x86/pv/mm.h
> +++ b/xen/arch/x86/pv/mm.h
> @@ -11,10 +11,14 @@ int new_guest_cr3(mfn_t mfn);
>   */
>  static inline l1_pgentry_t guest_get_eff_kern_l1e(unsigned long linear)
>  {
> +    struct vcpu *curr = current;
>      l1_pgentry_t l1e;
>  
> -    ASSERT(!paging_mode_translate(current->domain));
> -    ASSERT(!paging_mode_external(current->domain));
> +    ASSERT(!paging_mode_translate(curr->domain));
> +    ASSERT(!paging_mode_external(curr->domain));
> +
> +    if ( !(curr->arch.flags & TF_kernel_mode) )

... pull this out into a variable, like the original code used to do.

bool user_mode = !(curr->arch.flags & TF_kernel_mode);

I've forgotten which static checker tripped up over this form, but one
did IIRC.

~Andrew

> +        toggle_guest_pt(curr);
>  
>      if ( unlikely(!__addr_ok(linear)) ||
>           __copy_from_user(&l1e,
> @@ -22,6 +26,9 @@ static inline l1_pgentry_t guest_get_eff
>                            sizeof(l1_pgentry_t)) )
>          l1e = l1e_empty();
>  
> +    if ( !(curr->arch.flags & TF_kernel_mode) )
> +        toggle_guest_pt(curr);
> +
>      return l1e;
>  }
>  



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 13:56:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 13:56:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52257.91328 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kooKP-0007Jw-9m; Mon, 14 Dec 2020 13:55:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52257.91328; Mon, 14 Dec 2020 13:55:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kooKP-0007Jp-6n; Mon, 14 Dec 2020 13:55:53 +0000
Received: by outflank-mailman (input) for mailman id 52257;
 Mon, 14 Dec 2020 13:55:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MGmN=FS=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kooKN-0007Jk-Qj
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 13:55:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b22d7080-a990-43c1-b495-ed87001f7a22;
 Mon, 14 Dec 2020 13:55:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CF6BBAC7F;
 Mon, 14 Dec 2020 13:55:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b22d7080-a990-43c1-b495-ed87001f7a22
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607954149; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=lNYpPEClLjxLbkfVQ8T387bhQE14P7Zufj3BdmP/yaU=;
	b=YJf09CE4zXHqZGHDKo/uzjDk+ssnVwbXtY02AJlEggvBn63y1TMAWyAymmi4Igwm0oZ/bR
	jEdiecSRmdtIfq1BJPQxbu+EG1LsRwKzntqWc23DE8ala/UryqMyRnFMWeQDYjGJLy76BI
	Cjk1xbsbsi9E3ri86M1uqVARi+cr97Y=
Subject: Re: [PATCH] Revert "x86/mm: drop guest_get_eff_l1e()"
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Manuel Bouyer <bouyer@antioche.eu.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20201211141615.12489-1-andrew.cooper3@citrix.com>
 <454ec720-b823-c2aa-7de4-84c14db2b96f@suse.com>
 <3ab84773-6fec-b653-0d5b-a9374ef336c9@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1b29a6c3-cd72-9fe2-dbef-076db891bdda@suse.com>
Date: Mon, 14 Dec 2020 14:55:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <3ab84773-6fec-b653-0d5b-a9374ef336c9@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 14.12.2020 14:21, Andrew Cooper wrote:
> On 14/12/2020 08:27, Jan Beulich wrote:
>> On 11.12.2020 15:16, Andrew Cooper wrote:
>>> This reverts commit 9ff9705647646aa937b5f5c1426a64c69a62b3bd.
>>>
>>> The change is only correct in the original context of XSA-286, where Xen's use
>>> of the linear pagetables was dropped.  However, performance problems
>>> interfered with that plan, and XSA-286 was fixed differently.
>>>
>>> This broke Xen's lazy faulting of the LDT for 64bit PV guests when an access
>>> was first encountered in user context.  Xen would proceed to read the
>>> registered LDT virtual address out of the user pagetables, not the kernel
>>> pagetables.
>>>
>>> Given the nature of the bug, it would have also interfered with the IO
>>> permission bitmap functionality of userspace, which similarly needs to read
>>> data using the kernel pagetables.
>> This paragraph wants dropping afaict - guest_io_okay() has
>> explicit calls to toggle_guest_pt(), and hence is unaffected by
>> the bug here (and there is in particular page-table reading
>> involved there). Then ...
>>
>>> Reported-by: Manuel Bouyer <bouyer@antioche.eu.org>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> Tested-by: Manuel Bouyer <bouyer@antioche.eu.org>
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>
>> I was wondering however whether we really want a full revert. I've
>> locally made the change below as an alternative.
>>
>> Jan
>>
>> x86/PV: guest_get_eff_kern_l1e() may still need to switch page tables
>>
>> While indeed unnecessary for pv_ro_page_fault(), pv_map_ldt_shadow_page()
>> may run when guest user mode is active, and hence may need to switch to
>> the kernel page tables in order to retrieve an LDT page mapping.
>>
>> Fixes: 9ff970564764 ("x86/mm: drop guest_get_eff_l1e()")
>> Reported-by: Manuel Bouyer <bouyer@antioche.eu.org>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> This is the alternative to fully reverting the offending commit.
> 
> Hmm yes - I think I prefer this, because we don't really want to keep
> the non-kern alternative.
> 
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> however ...

Thanks.

>> I've also been considering dropping the paging-mode-translate ASSERT(),
>> now that we always have translate == external.
> 
> I'd suggest not making that change here in this bugfix.  I think we do
> want to alter how we do asserts like this, and there are other similarly
> impacted code blocks.

Okay, I'll look forward to learning what exactly you have in mind.

>> --- a/xen/arch/x86/pv/mm.h
>> +++ b/xen/arch/x86/pv/mm.h
>> @@ -11,10 +11,14 @@ int new_guest_cr3(mfn_t mfn);
>>   */
>>  static inline l1_pgentry_t guest_get_eff_kern_l1e(unsigned long linear)
>>  {
>> +    struct vcpu *curr = current;
>>      l1_pgentry_t l1e;
>>  
>> -    ASSERT(!paging_mode_translate(current->domain));
>> -    ASSERT(!paging_mode_external(current->domain));
>> +    ASSERT(!paging_mode_translate(curr->domain));
>> +    ASSERT(!paging_mode_external(curr->domain));
>> +
>> +    if ( !(curr->arch.flags & TF_kernel_mode) )
> 
> ... pull this out into a variable, like the original code used to do.
> 
> bool user_mode = !(curr->arch.flags & TF_kernel_mode);
> 
> I've forgotten which static checker tripped up over this form, but one
> did IIRC.

I've made the change (will send the result in a minute), but I'm curious
not so much about which checker might have taken issue here as about why.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 13:57:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 13:57:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52262.91341 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kooMP-0007Td-Pd; Mon, 14 Dec 2020 13:57:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52262.91341; Mon, 14 Dec 2020 13:57:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kooMP-0007TW-Mo; Mon, 14 Dec 2020 13:57:57 +0000
Received: by outflank-mailman (input) for mailman id 52262;
 Mon, 14 Dec 2020 13:57:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MGmN=FS=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kooMO-0007TQ-KG
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 13:57:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a34039fc-e278-45f7-807f-2f2d9c2d5d75;
 Mon, 14 Dec 2020 13:57:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 67511AC10;
 Mon, 14 Dec 2020 13:57:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a34039fc-e278-45f7-807f-2f2d9c2d5d75
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1607954274; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=711qclcSFHYF04Ohd+l0igifzO77pqWLewjPtkweRvM=;
	b=SBRKDTPGMAD81xiqa+FTY/jQ591SISgNQ4rTMalDsraVMewoYOIU5RRCsKV02E5GBOcIWK
	Aaz8073+tdbbyvacLaCZb14m6JPzirxBEWJjSYOxaeBuu2L2J4V0GzjvrFLpu4b8GzX0oA
	BbKh0BSjTuWt4AXFgBGc2ppbgsvnLDg=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Manuel Bouyer <bouyer@antioche.eu.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/PV: guest_get_eff_kern_l1e() may still need to switch
 page tables
Message-ID: <89ae6a3b-bfbf-a701-53f5-4dfc80065924@suse.com>
Date: Mon, 14 Dec 2020 14:57:53 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

While indeed unnecessary for pv_ro_page_fault(), pv_map_ldt_shadow_page()
may run when guest user mode is active, and hence may need to switch to
the kernel page tables in order to retrieve an LDT page mapping.

Fixes: 9ff970564764 ("x86/mm: drop guest_get_eff_l1e()")
Reported-by: Manuel Bouyer <bouyer@antioche.eu.org>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
Manuel, could you test this again, just to be on the safe side
before we throw it in (at which point we could then also re-add
a Tested-by)? Thanks.

--- a/xen/arch/x86/pv/mm.h
+++ b/xen/arch/x86/pv/mm.h
@@ -11,10 +11,15 @@ int new_guest_cr3(mfn_t mfn);
  */
 static inline l1_pgentry_t guest_get_eff_kern_l1e(unsigned long linear)
 {
+    struct vcpu *curr = current;
+    bool user_mode = !(curr->arch.flags & TF_kernel_mode);
     l1_pgentry_t l1e;
 
-    ASSERT(!paging_mode_translate(current->domain));
-    ASSERT(!paging_mode_external(current->domain));
+    ASSERT(!paging_mode_translate(curr->domain));
+    ASSERT(!paging_mode_external(curr->domain));
+
+    if ( user_mode )
+        toggle_guest_pt(curr);
 
     if ( unlikely(!__addr_ok(linear)) ||
          __copy_from_user(&l1e,
@@ -22,6 +27,9 @@ static inline l1_pgentry_t guest_get_eff
                           sizeof(l1_pgentry_t)) )
         l1e = l1e_empty();
 
+    if ( user_mode )
+        toggle_guest_pt(curr);
+
     return l1e;
 }
 


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 14:37:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 14:37:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52288.91356 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kooym-0002vi-0h; Mon, 14 Dec 2020 14:37:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52288.91356; Mon, 14 Dec 2020 14:37:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kooyl-0002vb-T3; Mon, 14 Dec 2020 14:37:35 +0000
Received: by outflank-mailman (input) for mailman id 52288;
 Mon, 14 Dec 2020 14:37:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oe/o=FS=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kooyk-0002vW-IW
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 14:37:34 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6323ce07-4dab-422d-a998-5e1b712a4c68;
 Mon, 14 Dec 2020 14:37:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6323ce07-4dab-422d-a998-5e1b712a4c68
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1607956653;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=nmNIe/ATa0jzxlCrAQ4P1sJQEdCuhqzkG5l6Bd27eYY=;
  b=LJpelrdM43gOT83VqO6YhDpT6TcP9kVsWREtFvYAhXthl0ac1akz2zQT
   wr7Yv+cyQnvD75uzeb3LwrEj94wEgA8SgJoh4Jb3p/qb8DNnyVn4H28W0
   n4gaCL4cV092/hBaw51w+gsvwEgZXIBAsVvKG0/3n1r+BABBs+I7sdmrZ
   k=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: dko5nyLuNZiqVXaAGA9niPDUim9pTpy9+1cOSEfvAozg2DPcGfy4bhYqIl7w9ruz7Ni3VBPmQ/
 DQfIX1z94jGlAlN9F2yQXp61BOjZ1MOHGUhXVtgGrU7sCabF5ymXWq5f8Lwjl6Tl9z8YoZTMDL
 TRogQBlZ1mHi4LHDY/XtLcK6amROih9m2a2lyjTggaXwYWHvu5vILYK5+SZmOY7EdXrYjDMzdm
 VCP1ikvBdZVxta4OkwiSm8Chu/5jUaMhfpWRgtFcZSdgUn8onPwvXNdhQ8HJSVXgJFFopv0fzg
 4zk=
X-SBRS: 5.2
X-MesageID: 33141525
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,419,1599537600"; 
   d="scan'208";a="33141525"
Subject: Re: [PATCH] x86/PV: guest_get_eff_kern_l1e() may still need to switch
 page tables
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Manuel Bouyer <bouyer@antioche.eu.org>
References: <89ae6a3b-bfbf-a701-53f5-4dfc80065924@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <76e5b234-9469-c0b0-c237-ac4d538edbbb@citrix.com>
Date: Mon, 14 Dec 2020 14:37:27 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <89ae6a3b-bfbf-a701-53f5-4dfc80065924@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 14/12/2020 13:57, Jan Beulich wrote:
> While indeed unnecessary for pv_ro_page_fault(), pv_map_ldt_shadow_page()
> may run when guest user mode is active, and hence may need to switch to
> the kernel page tables in order to retrieve an LDT page mapping.
>
> Fixes: 9ff970564764 ("x86/mm: drop guest_get_eff_l1e()")
> Reported-by: Manuel Bouyer <bouyer@antioche.eu.org>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> Manuel, could you test this again, just to be on the safe side
> before we throw it in (at which point we could then also re-add
> a Tested-by)? Thanks.

I've got a repro of the issue (literally - just booting the
netinstaller), and this does fix it.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 14:56:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 14:56:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52300.91373 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kopGe-0004pu-Md; Mon, 14 Dec 2020 14:56:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52300.91373; Mon, 14 Dec 2020 14:56:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kopGe-0004pn-JL; Mon, 14 Dec 2020 14:56:04 +0000
Received: by outflank-mailman (input) for mailman id 52300;
 Mon, 14 Dec 2020 14:56:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Q8e0=FS=redhat.com=imammedo@srs-us1.protection.inumbo.net>)
 id 1kopGd-0004pi-5U
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 14:56:03 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id e3f625a2-965b-49ae-9099-188c6a461dff;
 Mon, 14 Dec 2020 14:56:00 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-333-vSORlgemOjCyIva5SGouQg-1; Mon, 14 Dec 2020 09:55:58 -0500
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com
 [10.5.11.11])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id AB08910054FF;
 Mon, 14 Dec 2020 14:55:55 +0000 (UTC)
Received: from localhost (unknown [10.40.208.9])
 by smtp.corp.redhat.com (Postfix) with ESMTP id CB03718B5E;
 Mon, 14 Dec 2020 14:55:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e3f625a2-965b-49ae-9099-188c6a461dff
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607957760;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=AFj8Qphsj1DvLlA6c+EHeqCWHkgizY5fVaZLxxpViPo=;
	b=iLKfFxY1Q0hUMOTv0JEuBaVBaaQzBzmbDMhLNWbYZxwJhj7DUkHiZNiaLuYLrTiv2kykbj
	u+zH2d0cn1OKsp6DPA1pgdaTShsjzw3ZXX4G2kysTpvF1V6alNQ6ByuL16HcRHT7MnYZOk
	zmxKw20AYpLkmK0DIt9aHR2BgAl441Q=
X-MC-Unique: vSORlgemOjCyIva5SGouQg-1
Date: Mon, 14 Dec 2020 15:55:30 +0100
From: Igor Mammedov <imammedo@redhat.com>
To: Eduardo Habkost <ehabkost@redhat.com>
Cc: qemu-devel@nongnu.org, Markus Armbruster <armbru@redhat.com>, Stefan
 Berger <stefanb@linux.ibm.com>, =?UTF-8?B?TWFyYy1BbmRyw6k=?= Lureau
 <marcandre.lureau@redhat.com>, "Daniel P. Berrange" <berrange@redhat.com>,
 Philippe =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?= <philmd@redhat.com>, John Snow
 <jsnow@redhat.com>, Kevin Wolf <kwolf@redhat.com>, Eric Blake
 <eblake@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>, Stefan Berger
 <stefanb@linux.vnet.ibm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 Max Reitz <mreitz@redhat.com>, Cornelia Huck <cohuck@redhat.com>, Halil
 Pasic <pasic@linux.ibm.com>, Christian Borntraeger
 <borntraeger@de.ibm.com>, Richard Henderson <rth@twiddle.net>, David
 Hildenbrand <david@redhat.com>, Thomas Huth <thuth@redhat.com>, Matthew
 Rosato <mjrosato@linux.ibm.com>, Alex Williamson
 <alex.williamson@redhat.com>, Mark Cave-Ayland
 <mark.cave-ayland@ilande.co.uk>, Artyom Tarasenko <atar4qemu@gmail.com>,
 xen-devel@lists.xenproject.org, qemu-block@nongnu.org,
 qemu-s390x@nongnu.org
Subject: Re: [PATCH v4 23/32] qdev: Move dev->realized check to
 qdev_property_set()
Message-ID: <20201214155530.55f80cd6@redhat.com>
In-Reply-To: <20201211220529.2290218-24-ehabkost@redhat.com>
References: <20201211220529.2290218-1-ehabkost@redhat.com>
	<20201211220529.2290218-24-ehabkost@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=imammedo@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Fri, 11 Dec 2020 17:05:20 -0500
Eduardo Habkost <ehabkost@redhat.com> wrote:

> Every single qdev property setter function manually checks
> dev->realized.  We can just check dev->realized inside
> qdev_property_set() instead.
>
> The check is being added as a separate function
> (qdev_prop_allow_set()) because it will become a callback later.

Is the callback added within this series?
I'd also mention here what its purpose is.

>
> Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
> ---
> Changes v1 -> v2:
> * Removed unused variable at xen_block_set_vdev()
> * Redone patch after changes in the previous patches in the
>   series
> ---
> Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Anthony Perard <anthony.perard@citrix.com>
> Cc: Paul Durrant <paul@xen.org>
> Cc: Kevin Wolf <kwolf@redhat.com>
> Cc: Max Reitz <mreitz@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Daniel P. Berrangé" <berrange@redhat.com>
> Cc: Eduardo Habkost <ehabkost@redhat.com>
> Cc: Cornelia Huck <cohuck@redhat.com>
> Cc: Halil Pasic <pasic@linux.ibm.com>
> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> Cc: Richard Henderson <rth@twiddle.net>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Thomas Huth <thuth@redhat.com>
> Cc: Matthew Rosato <mjrosato@linux.ibm.com>
> Cc: Alex Williamson <alex.williamson@redhat.com>
> Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
> Cc: Artyom Tarasenko <atar4qemu@gmail.com>
> Cc: qemu-devel@nongnu.org
> Cc: xen-devel@lists.xenproject.org
> Cc: qemu-block@nongnu.org
> Cc: qemu-s390x@nongnu.org
> ---
>  backends/tpm/tpm_util.c          |   6 --
>  hw/block/xen-block.c             |   6 --
>  hw/core/qdev-properties-system.c |  70 ----------------------
>  hw/core/qdev-properties.c        | 100 ++++++-------------------------
>  hw/s390x/css.c                   |   6 --
>  hw/s390x/s390-pci-bus.c          |   6 --
>  hw/vfio/pci-quirks.c             |   6 --
>  target/sparc/cpu.c               |   6 --
>  8 files changed, 18 insertions(+), 188 deletions(-)
>=20
> diff --git a/backends/tpm/tpm_util.c b/backends/tpm/tpm_util.c
> index a5d997e7dc..39b45fa46d 100644
> --- a/backends/tpm/tpm_util.c
> +++ b/backends/tpm/tpm_util.c
> @@ -46,16 +46,10 @@ static void get_tpm(Object *obj, Visitor *v, const ch=
ar *name, void *opaque,
>  static void set_tpm(Object *obj, Visitor *v, const char *name, void *opa=
que,
>                      Error **errp)
>  {
> -    DeviceState *dev =3D DEVICE(obj);
>      Property *prop =3D opaque;
>      TPMBackend *s, **be =3D qdev_get_prop_ptr(obj, prop);
>      char *str;
> =20
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      if (!visit_type_str(v, name, &str, errp)) {
>          return;
>      }
> diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
> index 905e4acd97..bd1aef63a7 100644
> --- a/hw/block/xen-block.c
> +++ b/hw/block/xen-block.c
> @@ -395,17 +395,11 @@ static int vbd_name_to_disk(const char *name, const=
 char **endp,
>  static void xen_block_set_vdev(Object *obj, Visitor *v, const char *name=
,
>                                 void *opaque, Error **errp)
>  {
> -    DeviceState *dev =3D DEVICE(obj);
>      Property *prop =3D opaque;
>      XenBlockVdev *vdev =3D qdev_get_prop_ptr(obj, prop);
>      char *str, *p;
>      const char *end;
> =20
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      if (!visit_type_str(v, name, &str, errp)) {
>          return;
>      }
> diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-s=
ystem.c
> index 42529c3b65..f31aea3de1 100644
> --- a/hw/core/qdev-properties-system.c
> +++ b/hw/core/qdev-properties-system.c
> @@ -94,11 +94,6 @@ static void set_drive_helper(Object *obj, Visitor *v, =
const char *name,
>      bool blk_created =3D false;
>      int ret;
> =20
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      if (!visit_type_str(v, name, &str, errp)) {
>          return;
>      }
> @@ -230,17 +225,11 @@ static void get_chr(Object *obj, Visitor *v, const =
char *name, void *opaque,
>  static void set_chr(Object *obj, Visitor *v, const char *name, void *opa=
que,
>                      Error **errp)
>  {
> -    DeviceState *dev =3D DEVICE(obj);
>      Property *prop =3D opaque;
>      CharBackend *be =3D qdev_get_prop_ptr(obj, prop);
>      Chardev *s;
>      char *str;
> =20
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      if (!visit_type_str(v, name, &str, errp)) {
>          return;
>      }
> @@ -311,18 +300,12 @@ static void get_mac(Object *obj, Visitor *v, const =
char *name, void *opaque,
>  static void set_mac(Object *obj, Visitor *v, const char *name, void *opa=
que,
>                      Error **errp)
>  {
> -    DeviceState *dev =3D DEVICE(obj);
>      Property *prop =3D opaque;
>      MACAddr *mac =3D qdev_get_prop_ptr(obj, prop);
>      int i, pos;
>      char *str;
>      const char *p;
> =20
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      if (!visit_type_str(v, name, &str, errp)) {
>          return;
>      }
> @@ -390,7 +373,6 @@ static void get_netdev(Object *obj, Visitor *v, const=
 char *name,
>  static void set_netdev(Object *obj, Visitor *v, const char *name,
>                         void *opaque, Error **errp)
>  {
> -    DeviceState *dev =3D DEVICE(obj);
>      Property *prop =3D opaque;
>      NICPeers *peers_ptr =3D qdev_get_prop_ptr(obj, prop);
>      NetClientState **ncs =3D peers_ptr->ncs;
> @@ -398,11 +380,6 @@ static void set_netdev(Object *obj, Visitor *v, cons=
t char *name,
>      int queues, err =3D 0, i =3D 0;
>      char *str;
> =20
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      if (!visit_type_str(v, name, &str, errp)) {
>          return;
>      }
> @@ -469,18 +446,12 @@ static void get_audiodev(Object *obj, Visitor *v, c=
onst char* name,
>  static void set_audiodev(Object *obj, Visitor *v, const char* name,
>                           void *opaque, Error **errp)
>  {
> -    DeviceState *dev =3D DEVICE(obj);
>      Property *prop =3D opaque;
>      QEMUSoundCard *card =3D qdev_get_prop_ptr(obj, prop);
>      AudioState *state;
>      int err =3D 0;
>      char *str;
> =20
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      if (!visit_type_str(v, name, &str, errp)) {
>          return;
>      }
> @@ -582,11 +553,6 @@ static void set_blocksize(Object *obj, Visitor *v, c=
onst char *name,
>      uint64_t value;
>      Error *local_err =3D NULL;
> =20
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      if (!visit_type_size(v, name, &value, errp)) {
>          return;
>      }
> @@ -686,7 +652,6 @@ static void get_reserved_region(Object *obj, Visitor *v, const char *name,
>  static void set_reserved_region(Object *obj, Visitor *v, const char *name,
>                                  void *opaque, Error **errp)
>  {
> -    DeviceState *dev = DEVICE(obj);
>      Property *prop = opaque;
>      ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
>      Error *local_err = NULL;
> @@ -694,11 +659,6 @@ static void set_reserved_region(Object *obj, Visitor *v, const char *name,
>      char *str;
>      int ret;
> 
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      visit_type_str(v, name, &str, &local_err);
>      if (local_err) {
>          error_propagate(errp, local_err);
> @@ -754,17 +714,11 @@ const PropertyInfo qdev_prop_reserved_region = {
>  static void set_pci_devfn(Object *obj, Visitor *v, const char *name,
>                            void *opaque, Error **errp)
>  {
> -    DeviceState *dev = DEVICE(obj);
>      Property *prop = opaque;
>      int32_t value, *ptr = qdev_get_prop_ptr(obj, prop);
>      unsigned int slot, fn, n;
>      char *str;
> 
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      if (!visit_type_str(v, name, &str, NULL)) {
>          if (!visit_type_int32(v, name, &value, errp)) {
>              return;
> @@ -848,7 +802,6 @@ static void get_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
>  static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
>                                   void *opaque, Error **errp)
>  {
> -    DeviceState *dev = DEVICE(obj);
>      Property *prop = opaque;
>      PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
>      char *str, *p;
> @@ -857,11 +810,6 @@ static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
>      unsigned long dom = 0, bus = 0;
>      unsigned int slot = 0, func = 0;
> 
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      if (!visit_type_str(v, name, &str, errp)) {
>          return;
>      }
> @@ -972,16 +920,10 @@ static void get_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
>  static void set_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
>                                     void *opaque, Error **errp)
>  {
> -    DeviceState *dev = DEVICE(obj);
>      Property *prop = opaque;
>      PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
>      int speed;
> 
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      if (!visit_type_enum(v, name, &speed, prop->info->enum_table,
>                           errp)) {
>          return;
> @@ -1057,16 +999,10 @@ static void get_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
>  static void set_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
>                                     void *opaque, Error **errp)
>  {
> -    DeviceState *dev = DEVICE(obj);
>      Property *prop = opaque;
>      PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
>      int width;
> 
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      if (!visit_type_enum(v, name, &width, prop->info->enum_table,
>                           errp)) {
>          return;
> @@ -1129,16 +1065,10 @@ static void get_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
>  static void set_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
>                      Error **errp)
>  {
> -    DeviceState *dev = DEVICE(obj);
>      Property *prop = opaque;
>      QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
>      char *str;
> 
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      if (!visit_type_str(v, name, &str, errp)) {
>          return;
>      }
> diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
> index b924f13d58..92f48ecbf2 100644
> --- a/hw/core/qdev-properties.c
> +++ b/hw/core/qdev-properties.c
> @@ -24,6 +24,19 @@ void qdev_prop_set_after_realize(DeviceState *dev, const char *name,
>      }
>  }
> 
> +/* returns: true if property is allowed to be set, false otherwise */
> +static bool qdev_prop_allow_set(Object *obj, const char *name,
> +                                Error **errp)
> +{
> +    DeviceState *dev = DEVICE(obj);
> +
> +    if (dev->realized) {
> +        qdev_prop_set_after_realize(dev, name, errp);
> +        return false;
> +    }
> +    return true;
> +}
> +
>  void qdev_prop_allow_set_link_before_realize(const Object *obj,
>                                               const char *name,
>                                               Object *val, Error **errp)
> @@ -65,6 +78,11 @@ static void field_prop_set(Object *obj, Visitor *v, const char *name,
>                             void *opaque, Error **errp)
>  {
>      Property *prop = opaque;
> +
> +    if (!qdev_prop_allow_set(obj, name, errp)) {
> +        return;
> +    }
> +
>      return prop->info->set(obj, v, name, opaque, errp);
>  }
> 
> @@ -90,15 +108,9 @@ void qdev_propinfo_get_enum(Object *obj, Visitor *v, const char *name,
>  void qdev_propinfo_set_enum(Object *obj, Visitor *v, const char *name,
>                              void *opaque, Error **errp)
>  {
> -    DeviceState *dev = DEVICE(obj);
>      Property *prop = opaque;
>      int *ptr = qdev_get_prop_ptr(obj, prop);
> 
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      visit_type_enum(v, name, ptr, prop->info->enum_table, errp);
>  }
> 
> @@ -148,15 +160,9 @@ static void prop_get_bit(Object *obj, Visitor *v, const char *name,
>  static void prop_set_bit(Object *obj, Visitor *v, const char *name,
>                           void *opaque, Error **errp)
>  {
> -    DeviceState *dev = DEVICE(obj);
>      Property *prop = opaque;
>      bool value;
> 
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      if (!visit_type_bool(v, name, &value, errp)) {
>          return;
>      }
> @@ -208,15 +214,9 @@ static void prop_get_bit64(Object *obj, Visitor *v, const char *name,
>  static void prop_set_bit64(Object *obj, Visitor *v, const char *name,
>                             void *opaque, Error **errp)
>  {
> -    DeviceState *dev = DEVICE(obj);
>      Property *prop = opaque;
>      bool value;
> 
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      if (!visit_type_bool(v, name, &value, errp)) {
>          return;
>      }
> @@ -245,15 +245,9 @@ static void get_bool(Object *obj, Visitor *v, const char *name, void *opaque,
>  static void set_bool(Object *obj, Visitor *v, const char *name, void *opaque,
>                       Error **errp)
>  {
> -    DeviceState *dev = DEVICE(obj);
>      Property *prop = opaque;
>      bool *ptr = qdev_get_prop_ptr(obj, prop);
> 
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      visit_type_bool(v, name, ptr, errp);
>  }
> 
> @@ -278,15 +272,9 @@ static void get_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
>  static void set_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
>                        Error **errp)
>  {
> -    DeviceState *dev = DEVICE(obj);
>      Property *prop = opaque;
>      uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
> 
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      visit_type_uint8(v, name, ptr, errp);
>  }
> 
> @@ -323,15 +311,9 @@ static void get_uint16(Object *obj, Visitor *v, const char *name,
>  static void set_uint16(Object *obj, Visitor *v, const char *name,
>                         void *opaque, Error **errp)
>  {
> -    DeviceState *dev = DEVICE(obj);
>      Property *prop = opaque;
>      uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
> 
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      visit_type_uint16(v, name, ptr, errp);
>  }
> 
> @@ -356,15 +338,9 @@ static void get_uint32(Object *obj, Visitor *v, const char *name,
>  static void set_uint32(Object *obj, Visitor *v, const char *name,
>                         void *opaque, Error **errp)
>  {
> -    DeviceState *dev = DEVICE(obj);
>      Property *prop = opaque;
>      uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
> 
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      visit_type_uint32(v, name, ptr, errp);
>  }
> 
> @@ -380,15 +356,9 @@ void qdev_propinfo_get_int32(Object *obj, Visitor *v, const char *name,
>  static void set_int32(Object *obj, Visitor *v, const char *name, void *opaque,
>                        Error **errp)
>  {
> -    DeviceState *dev = DEVICE(obj);
>      Property *prop = opaque;
>      int32_t *ptr = qdev_get_prop_ptr(obj, prop);
> 
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      visit_type_int32(v, name, ptr, errp);
>  }
> 
> @@ -420,15 +390,9 @@ static void get_uint64(Object *obj, Visitor *v, const char *name,
>  static void set_uint64(Object *obj, Visitor *v, const char *name,
>                         void *opaque, Error **errp)
>  {
> -    DeviceState *dev = DEVICE(obj);
>      Property *prop = opaque;
>      uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
> 
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      visit_type_uint64(v, name, ptr, errp);
>  }
> 
> @@ -444,15 +408,9 @@ static void get_int64(Object *obj, Visitor *v, const char *name,
>  static void set_int64(Object *obj, Visitor *v, const char *name,
>                        void *opaque, Error **errp)
>  {
> -    DeviceState *dev = DEVICE(obj);
>      Property *prop = opaque;
>      int64_t *ptr = qdev_get_prop_ptr(obj, prop);
> 
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      visit_type_int64(v, name, ptr, errp);
>  }
> 
> @@ -495,16 +453,10 @@ static void get_string(Object *obj, Visitor *v, const char *name,
>  static void set_string(Object *obj, Visitor *v, const char *name,
>                         void *opaque, Error **errp)
>  {
> -    DeviceState *dev = DEVICE(obj);
>      Property *prop = opaque;
>      char **ptr = qdev_get_prop_ptr(obj, prop);
>      char *str;
> 
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      if (!visit_type_str(v, name, &str, errp)) {
>          return;
>      }
> @@ -545,16 +497,10 @@ void qdev_propinfo_get_size32(Object *obj, Visitor *v, const char *name,
>  static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
>                         Error **errp)
>  {
> -    DeviceState *dev = DEVICE(obj);
>      Property *prop = opaque;
>      uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
>      uint64_t value;
> 
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      if (!visit_type_size(v, name, &value, errp)) {
>          return;
>      }
> @@ -621,10 +567,6 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
>      const char *arrayname;
>      int i;
> 
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
>      if (*alenptr) {
>          error_setg(errp, "array size property %s may not be set more than once",
>                     name);
> @@ -864,15 +806,9 @@ static void get_size(Object *obj, Visitor *v, const char *name, void *opaque,
>  static void set_size(Object *obj, Visitor *v, const char *name, void *opaque,
>                       Error **errp)
>  {
> -    DeviceState *dev = DEVICE(obj);
>      Property *prop = opaque;
>      uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
> 
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      visit_type_size(v, name, ptr, errp);
>  }
> 
> diff --git a/hw/s390x/css.c b/hw/s390x/css.c
> index 7a44320d12..496e2c5801 100644
> --- a/hw/s390x/css.c
> +++ b/hw/s390x/css.c
> @@ -2372,18 +2372,12 @@ static void get_css_devid(Object *obj, Visitor *v, const char *name,
>  static void set_css_devid(Object *obj, Visitor *v, const char *name,
>                            void *opaque, Error **errp)
>  {
> -    DeviceState *dev = DEVICE(obj);
>      Property *prop = opaque;
>      CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
>      char *str;
>      int num, n1, n2;
>      unsigned int cssid, ssid, devid;
> 
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      if (!visit_type_str(v, name, &str, errp)) {
>          return;
>      }
> diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
> index 8b6be1197b..30511f620e 100644
> --- a/hw/s390x/s390-pci-bus.c
> +++ b/hw/s390x/s390-pci-bus.c
> @@ -1338,16 +1338,10 @@ static void s390_pci_get_fid(Object *obj, Visitor *v, const char *name,
>  static void s390_pci_set_fid(Object *obj, Visitor *v, const char *name,
>                           void *opaque, Error **errp)
>  {
> -    DeviceState *dev = DEVICE(obj);
>      S390PCIBusDevice *zpci = S390_PCI_DEVICE(obj);
>      Property *prop = opaque;
>      uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
> 
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      if (!visit_type_uint32(v, name, ptr, errp)) {
>          return;
>      }
> diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
> index 53569925a2..802979635c 100644
> --- a/hw/vfio/pci-quirks.c
> +++ b/hw/vfio/pci-quirks.c
> @@ -1498,15 +1498,9 @@ static void set_nv_gpudirect_clique_id(Object *obj, Visitor *v,
>                                         const char *name, void *opaque,
>                                         Error **errp)
>  {
> -    DeviceState *dev = DEVICE(obj);
>      Property *prop = opaque;
>      uint8_t value, *ptr = qdev_get_prop_ptr(obj, prop);
> 
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      if (!visit_type_uint8(v, name, &value, errp)) {
>          return;
>      }
> diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
> index 92534bcd18..b730146bbe 100644
> --- a/target/sparc/cpu.c
> +++ b/target/sparc/cpu.c
> @@ -798,17 +798,11 @@ static void sparc_get_nwindows(Object *obj, Visitor *v, const char *name,
>  static void sparc_set_nwindows(Object *obj, Visitor *v, const char *name,
>                                 void *opaque, Error **errp)
>  {
> -    DeviceState *dev = DEVICE(obj);
>      const int64_t min = MIN_NWINDOWS;
>      const int64_t max = MAX_NWINDOWS;
>      SPARCCPU *cpu = SPARC_CPU(obj);
>      int64_t value;
> 
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>      if (!visit_type_int(v, name, &value, errp)) {
>          return;
>      }


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 15:13:38 2020
Date: Mon, 14 Dec 2020 16:13:02 +0100
From: Igor Mammedov <imammedo@redhat.com>
To: Eduardo Habkost <ehabkost@redhat.com>
Cc: qemu-devel@nongnu.org, Matthew Rosato <mjrosato@linux.ibm.com>, Paul
 Durrant <paul@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, qemu-block@nongnu.org, Stefan Berger
 <stefanb@linux.vnet.ibm.com>, David Hildenbrand <david@redhat.com>, Markus
 Armbruster <armbru@redhat.com>, Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>, Anthony Perard
 <anthony.perard@citrix.com>, Marc-André Lureau
 <marcandre.lureau@redhat.com>, Philippe Mathieu-Daudé
 <philmd@redhat.com>, Thomas Huth <thuth@redhat.com>, Alex Williamson
 <alex.williamson@redhat.com>, John Snow <jsnow@redhat.com>, Richard
 Henderson <rth@twiddle.net>, Kevin Wolf <kwolf@redhat.com>, "Daniel P.
 Berrange" <berrange@redhat.com>, Cornelia Huck <cohuck@redhat.com>,
 qemu-s390x@nongnu.org, Max Reitz <mreitz@redhat.com>, Paolo Bonzini
 <pbonzini@redhat.com>, Stefan Berger <stefanb@linux.ibm.com>
Subject: Re: [PATCH v4 30/32] qdev: Rename qdev_get_prop_ptr() to
 object_field_prop_ptr()
Message-ID: <20201214161302.1c4de090@redhat.com>
In-Reply-To: <20201211220529.2290218-31-ehabkost@redhat.com>
References: <20201211220529.2290218-1-ehabkost@redhat.com>
	<20201211220529.2290218-31-ehabkost@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

On Fri, 11 Dec 2020 17:05:27 -0500
Eduardo Habkost <ehabkost@redhat.com> wrote:

> The function will be moved to common QOM code, as it is not
> specific to TYPE_DEVICE anymore.
>
> Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>

Reviewed-by: Igor Mammedov <imammedo@redhat.com>

> ---
> Changes v1 -> v2:
> * Rename to object_field_prop_ptr() instead of object_static_prop_ptr()
> ---
> Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Anthony Perard <anthony.perard@citrix.com>
> Cc: Paul Durrant <paul@xen.org>
> Cc: Kevin Wolf <kwolf@redhat.com>
> Cc: Max Reitz <mreitz@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Daniel P. Berrangé" <berrange@redhat.com>
> Cc: Eduardo Habkost <ehabkost@redhat.com>
> Cc: Cornelia Huck <cohuck@redhat.com>
> Cc: Halil Pasic <pasic@linux.ibm.com>
> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> Cc: Richard Henderson <rth@twiddle.net>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Thomas Huth <thuth@redhat.com>
> Cc: Matthew Rosato <mjrosato@linux.ibm.com>
> Cc: Alex Williamson <alex.williamson@redhat.com>
> Cc: qemu-devel@nongnu.org
> Cc: xen-devel@lists.xenproject.org
> Cc: qemu-block@nongnu.org
> Cc: qemu-s390x@nongnu.org
> ---
>  include/hw/qdev-properties.h     |  2 +-
>  backends/tpm/tpm_util.c          |  6 ++--
>  hw/block/xen-block.c             |  4 +--
>  hw/core/qdev-properties-system.c | 50 +++++++++++++-------------
>  hw/core/qdev-properties.c        | 60 ++++++++++++++++----------------
>  hw/s390x/css.c                   |  4 +--
>  hw/s390x/s390-pci-bus.c          |  4 +--
>  hw/vfio/pci-quirks.c             |  4 +--
>  8 files changed, 67 insertions(+), 67 deletions(-)
>
> diff --git a/include/hw/qdev-properties.h b/include/hw/qdev-properties.h
> index 90222822f1..97bb9494ae 100644
> --- a/include/hw/qdev-properties.h
> +++ b/include/hw/qdev-properties.h
> @@ -193,7 +193,7 @@ void qdev_prop_set_macaddr(DeviceState *dev, const char *name,
>                             const uint8_t *value);
>  void qdev_prop_set_enum(DeviceState *dev, const char *name, int value);
> 
> -void *qdev_get_prop_ptr(Object *obj, Property *prop);
> +void *object_field_prop_ptr(Object *obj, Property *prop);
> 
>  void qdev_prop_register_global(GlobalProperty *prop);
>  const GlobalProperty *qdev_find_global_prop(Object *obj,
> diff --git a/backends/tpm/tpm_util.c b/backends/tpm/tpm_util.c
> index 39b45fa46d..a6e6d3e72f 100644
> --- a/backends/tpm/tpm_util.c
> +++ b/backends/tpm/tpm_util.c
> @@ -35,7 +35,7 @@
>  static void get_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
>                      Error **errp)
>  {
> -    TPMBackend **be = qdev_get_prop_ptr(obj, opaque);
> +    TPMBackend **be = object_field_prop_ptr(obj, opaque);
>      char *p;
> 
>      p = g_strdup(*be ? (*be)->id : "");
> @@ -47,7 +47,7 @@ static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
>                      Error **errp)
>  {
>      Property *prop = opaque;
> -    TPMBackend *s, **be = qdev_get_prop_ptr(obj, prop);
> +    TPMBackend *s, **be = object_field_prop_ptr(obj, prop);
>      char *str;
> 
>      if (!visit_type_str(v, name, &str, errp)) {
> @@ -67,7 +67,7 @@ static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
>  static void release_tpm(Object *obj, const char *name, void *opaque)
>  {
>      Property *prop = opaque;
> -    TPMBackend **be = qdev_get_prop_ptr(obj, prop);
> +    TPMBackend **be = object_field_prop_ptr(obj, prop);
> 
>      if (*be) {
>          tpm_backend_reset(*be);
> diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
> index bd1aef63a7..718d886e5c 100644
> --- a/hw/block/xen-block.c
> +++ b/hw/block/xen-block.c
> @@ -336,7 +336,7 @@ static void xen_block_get_vdev(Object *obj, Visitor *v, const char *name,
>                                 void *opaque, Error **errp)
>  {
>      Property *prop = opaque;
> -    XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
> +    XenBlockVdev *vdev = object_field_prop_ptr(obj, prop);
>      char *str;
> 
>      switch (vdev->type) {
> @@ -396,7 +396,7 @@ static void xen_block_set_vdev(Object *obj, Visitor *v, const char *name,
>                                 void *opaque, Error **errp)
>  {
>      Property *prop = opaque;
> -    XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
> +    XenBlockVdev *vdev = object_field_prop_ptr(obj, prop);
>      char *str, *p;
>      const char *end;
> 
> diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
> index 590c5f3d97..e6d378a34e 100644
> --- a/hw/core/qdev-properties-system.c
> +++ b/hw/core/qdev-properties-system.c
> @@ -62,7 +62,7 @@ static void get_drive(Object *obj, Visitor *v, const char *name, void *opaque,
>                        Error **errp)
>  {
>      Property *prop = opaque;
> -    void **ptr = qdev_get_prop_ptr(obj, prop);
> +    void **ptr = object_field_prop_ptr(obj, prop);
>      const char *value;
>      char *p;
> 
> @@ -88,7 +88,7 @@ static void set_drive_helper(Object *obj, Visitor *v, const char *name,
>  {
>      DeviceState *dev = DEVICE(obj);
>      Property *prop = opaque;
> -    void **ptr = qdev_get_prop_ptr(obj, prop);
> +    void **ptr = object_field_prop_ptr(obj, prop);
>      char *str;
>      BlockBackend *blk;
>      bool blk_created = false;
> @@ -181,7 +181,7 @@ static void release_drive(Object *obj, const char *name, void *opaque)
>  {
>      DeviceState *dev = DEVICE(obj);
>      Property *prop = opaque;
> -    BlockBackend **ptr = qdev_get_prop_ptr(obj, prop);
> +    BlockBackend **ptr = object_field_prop_ptr(obj, prop);
> 
>      if (*ptr) {
>          AioContext *ctx = blk_get_aio_context(*ptr);
> @@ -214,7 +214,7 @@ const PropertyInfo qdev_prop_drive_iothread = {
>  static void get_chr(Object *obj, Visitor *v, const char *name, void *opaque,
>                      Error **errp)
>  {
> -    CharBackend *be = qdev_get_prop_ptr(obj, opaque);
> +    CharBackend *be = object_field_prop_ptr(obj, opaque);
>      char *p;
> 
>      p = g_strdup(be->chr && be->chr->label ? be->chr->label : "");
> @@ -226,7 +226,7 @@ static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
>                      Error **errp)
>  {
>      Property *prop = opaque;
> -    CharBackend *be = qdev_get_prop_ptr(obj, prop);
> +    CharBackend *be = object_field_prop_ptr(obj, prop);
>      Chardev *s;
>      char *str;
> 
> @@ -262,7 +262,7 @@ static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
>  static void release_chr(Object *obj, const char *name, void *opaque)
>  {
>      Property *prop = opaque;
> -    CharBackend *be = qdev_get_prop_ptr(obj, prop);
> +    CharBackend *be = object_field_prop_ptr(obj, prop);
> 
>      qemu_chr_fe_deinit(be, false);
>  }
> @@ -286,7 +286,7 @@ static void get_mac(Object *obj, Visitor *v, const char *name, void *opaque,
>                      Error **errp)
>  {
>      Property *prop = opaque;
> -    MACAddr *mac = qdev_get_prop_ptr(obj, prop);
> +    MACAddr *mac = object_field_prop_ptr(obj, prop);
>      char buffer[2 * 6 + 5 + 1];
>      char *p = buffer;
> 
> @@ -301,7 +301,7 @@ static void set_mac(Object *obj, Visitor *v, const char *name, void *opaque,
>                      Error **errp)
>  {
>      Property *prop = opaque;
> -    MACAddr *mac = qdev_get_prop_ptr(obj, prop);
> +    MACAddr *mac = object_field_prop_ptr(obj, prop);
>      int i, pos;
>      char *str;
>      const char *p;
> @@ -363,7 +363,7 @@ static void get_netdev(Object *obj, Visitor *v, const char *name,
>                         void *opaque, Error **errp)
>  {
>      Property *prop = opaque;
> -    NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
> +    NICPeers *peers_ptr = object_field_prop_ptr(obj, prop);
>      char *p = g_strdup(peers_ptr->ncs[0] ? peers_ptr->ncs[0]->name : "");
> 
>      visit_type_str(v, name, &p, errp);
> @@ -374,7 +374,7 @@ static void set_netdev(Object *obj, Visitor *v, const char *name,
>                         void *opaque, Error **errp)
>  {
>      Property *prop = opaque;
> -    NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
> +    NICPeers *peers_ptr = object_field_prop_ptr(obj, prop);
>      NetClientState **ncs = peers_ptr->ncs;
>      NetClientState *peers[MAX_QUEUE_NUM];
>      int queues, err = 0, i = 0;
> @@ -436,7 +436,7 @@ static void get_audiodev(Object *obj, Visitor *v, const char* name,
>                           void *opaque, Error **errp)
>  {
>      Property *prop = opaque;
> -    QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
> +    QEMUSoundCard *card = object_field_prop_ptr(obj, prop);
>      char *p = g_strdup(audio_get_id(card));
> 
>      visit_type_str(v, name, &p, errp);
> @@ -447,7 +447,7 @@ static void set_audiodev(Object *obj, Visitor *v, const char* name,
>                           void *opaque, Error **errp)
>  {
>      Property *prop = opaque;
> -    QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
> +    QEMUSoundCard *card = object_field_prop_ptr(obj, prop);
>      AudioState *state;
>      int err = 0;
>      char *str;
> @@ -549,7 +549,7 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
>  {
>      DeviceState *dev = DEVICE(obj);
>      Property *prop = opaque;
> -    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint32_t *ptr = object_field_prop_ptr(obj, prop);
>      uint64_t value;
>      Error *local_err = NULL;
> 
> @@ -637,7 +637,7 @@ static void get_reserved_region(Object *obj, Visitor *v, const char *name,
>                                  void *opaque, Error **errp)
>  {
>      Property *prop = opaque;
> -    ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
> +    ReservedRegion *rr = object_field_prop_ptr(obj, prop);
>      char buffer[64];
>      char *p = buffer;
>      int rc;
> @@ -653,7 +653,7 @@ static void set_reserved_region(Object *obj, Visitor *v, const char *name,
>                                  void *opaque, Error **errp)
>  {
>      Property *prop = opaque;
> -    ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
> +    ReservedRegion *rr = object_field_prop_ptr(obj, prop);
>      Error *local_err = NULL;
>      const char *endptr;
>      char *str;
> @@ -715,7 +715,7 @@ static void set_pci_devfn(Object *obj, Visitor *v, const char *name,
>                            void *opaque, Error **errp)
>  {
>      Property *prop = opaque;
> -    int32_t value, *ptr = qdev_get_prop_ptr(obj, prop);
> +    int32_t value, *ptr = object_field_prop_ptr(obj, prop);
>      unsigned int slot, fn, n;
>      char *str;
> 
> @@ -753,7 +753,7 @@ invalid:
>  static int print_pci_devfn(Object *obj, Property *prop, char *dest,
>                             size_t len)
>  {
> -    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    int32_t *ptr = object_field_prop_ptr(obj, prop);
> 
>      if (*ptr == -1) {
>          return snprintf(dest, len, "<unset>");
> @@ -777,7 +777,7 @@ static void get_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
>                                   void *opaque, Error **errp)
>  {
>      Property *prop = opaque;
> -    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
> +    PCIHostDeviceAddress *addr = object_field_prop_ptr(obj, prop);
>      char buffer[] = "ffff:ff:ff.f";
>      char *p = buffer;
>      int rc = 0;
> @@ -803,7 +803,7 @@ static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
>                                   void *opaque, Error **errp)
>  {
>      Property *prop = opaque;
> -    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
> +    PCIHostDeviceAddress *addr = object_field_prop_ptr(obj, prop);
>      char *str, *p;
>      char *e;
>      unsigned long val;
> @@ -893,7 +893,7 @@ static void get_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
>                                     void *opaque, Error **errp)
>  {
>      Property *prop = opaque;
> -    PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
> +    PCIExpLinkSpeed *p = object_field_prop_ptr(obj, prop);
>      int speed;
> 
>      switch (*p) {
> @@ -921,7 +921,7 @@ static void set_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
>                                     void *opaque, Error **errp)
>  {
>      Property *prop = opaque;
> -    PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
> +    PCIExpLinkSpeed *p = object_field_prop_ptr(obj, prop);
>      int speed;
> 
>      if (!visit_type_enum(v, name, &speed, prop->info->enum_table,
> @@ -963,7 +963,7 @@ static void get_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
>                                     void *opaque, Error **errp)
>  {
>      Property *prop = opaque;
> -    PCIExpLinkWidth *p =3D qdev_get_prop_ptr(obj, prop);
> +    PCIExpLinkWidth *p =3D object_field_prop_ptr(obj, prop);
>      int width;
> =20
>      switch (*p) {
> @@ -1000,7 +1000,7 @@ static void set_prop_pcielinkwidth(Object *obj, Vis=
itor *v, const char *name,
>                                     void *opaque, Error **errp)
>  {
>      Property *prop =3D opaque;
> -    PCIExpLinkWidth *p =3D qdev_get_prop_ptr(obj, prop);
> +    PCIExpLinkWidth *p =3D object_field_prop_ptr(obj, prop);
>      int width;
> =20
>      if (!visit_type_enum(v, name, &width, prop->info->enum_table,
> @@ -1051,7 +1051,7 @@ static void get_uuid(Object *obj, Visitor *v, const=
 char *name, void *opaque,
>                       Error **errp)
>  {
>      Property *prop =3D opaque;
> -    QemuUUID *uuid =3D qdev_get_prop_ptr(obj, prop);
> +    QemuUUID *uuid =3D object_field_prop_ptr(obj, prop);
>      char buffer[UUID_FMT_LEN + 1];
>      char *p =3D buffer;
> =20
> @@ -1066,7 +1066,7 @@ static void set_uuid(Object *obj, Visitor *v, const=
 char *name, void *opaque,
>                      Error **errp)
>  {
>      Property *prop =3D opaque;
> -    QemuUUID *uuid =3D qdev_get_prop_ptr(obj, prop);
> +    QemuUUID *uuid =3D object_field_prop_ptr(obj, prop);
>      char *str;
> =20
>      if (!visit_type_str(v, name, &str, errp)) {
> diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
> index c1dd4ae71b..3d648b088d 100644
> --- a/hw/core/qdev-properties.c
> +++ b/hw/core/qdev-properties.c
> @@ -50,7 +50,7 @@ void qdev_prop_allow_set_link_before_realize(const Obje=
ct *obj,
>      }
>  }
> =20
> -void *qdev_get_prop_ptr(Object *obj, Property *prop)
> +void *object_field_prop_ptr(Object *obj, Property *prop)
>  {
>      void *ptr =3D obj;
>      ptr +=3D prop->offset;
> @@ -100,7 +100,7 @@ void field_prop_get_enum(Object *obj, Visitor *v, con=
st char *name,
>                           void *opaque, Error **errp)
>  {
>      Property *prop =3D opaque;
> -    int *ptr =3D qdev_get_prop_ptr(obj, prop);
> +    int *ptr =3D object_field_prop_ptr(obj, prop);
> =20
>      visit_type_enum(v, name, ptr, prop->info->enum_table, errp);
>  }
> @@ -109,7 +109,7 @@ void field_prop_set_enum(Object *obj, Visitor *v, con=
st char *name,
>                           void *opaque, Error **errp)
>  {
>      Property *prop =3D opaque;
> -    int *ptr =3D qdev_get_prop_ptr(obj, prop);
> +    int *ptr =3D object_field_prop_ptr(obj, prop);
> =20
>      visit_type_enum(v, name, ptr, prop->info->enum_table, errp);
>  }
> @@ -138,7 +138,7 @@ static uint32_t qdev_get_prop_mask(Property *prop)
> =20
>  static void bit_prop_set(Object *obj, Property *props, bool val)
>  {
> -    uint32_t *p =3D qdev_get_prop_ptr(obj, props);
> +    uint32_t *p =3D object_field_prop_ptr(obj, props);
>      uint32_t mask =3D qdev_get_prop_mask(props);
>      if (val) {
>          *p |=3D mask;
> @@ -151,7 +151,7 @@ static void prop_get_bit(Object *obj, Visitor *v, con=
st char *name,
>                           void *opaque, Error **errp)
>  {
>      Property *prop =3D opaque;
> -    uint32_t *p =3D qdev_get_prop_ptr(obj, prop);
> +    uint32_t *p =3D object_field_prop_ptr(obj, prop);
>      bool value =3D (*p & qdev_get_prop_mask(prop)) !=3D 0;
> =20
>      visit_type_bool(v, name, &value, errp);
> @@ -192,7 +192,7 @@ static uint64_t qdev_get_prop_mask64(Property *prop)
> =20
>  static void bit64_prop_set(Object *obj, Property *props, bool val)
>  {
> -    uint64_t *p =3D qdev_get_prop_ptr(obj, props);
> +    uint64_t *p =3D object_field_prop_ptr(obj, props);
>      uint64_t mask =3D qdev_get_prop_mask64(props);
>      if (val) {
>          *p |=3D mask;
> @@ -205,7 +205,7 @@ static void prop_get_bit64(Object *obj, Visitor *v, c=
onst char *name,
>                             void *opaque, Error **errp)
>  {
>      Property *prop =3D opaque;
> -    uint64_t *p =3D qdev_get_prop_ptr(obj, prop);
> +    uint64_t *p =3D object_field_prop_ptr(obj, prop);
>      bool value =3D (*p & qdev_get_prop_mask64(prop)) !=3D 0;
> =20
>      visit_type_bool(v, name, &value, errp);
> @@ -237,7 +237,7 @@ static void get_bool(Object *obj, Visitor *v, const c=
har *name, void *opaque,
>                       Error **errp)
>  {
>      Property *prop =3D opaque;
> -    bool *ptr =3D qdev_get_prop_ptr(obj, prop);
> +    bool *ptr =3D object_field_prop_ptr(obj, prop);
> =20
>      visit_type_bool(v, name, ptr, errp);
>  }
> @@ -246,7 +246,7 @@ static void set_bool(Object *obj, Visitor *v, const c=
har *name, void *opaque,
>                       Error **errp)
>  {
>      Property *prop =3D opaque;
> -    bool *ptr =3D qdev_get_prop_ptr(obj, prop);
> +    bool *ptr =3D object_field_prop_ptr(obj, prop);
> =20
>      visit_type_bool(v, name, ptr, errp);
>  }
> @@ -264,7 +264,7 @@ static void get_uint8(Object *obj, Visitor *v, const =
char *name, void *opaque,
>                        Error **errp)
>  {
>      Property *prop =3D opaque;
> -    uint8_t *ptr =3D qdev_get_prop_ptr(obj, prop);
> +    uint8_t *ptr =3D object_field_prop_ptr(obj, prop);
> =20
>      visit_type_uint8(v, name, ptr, errp);
>  }
> @@ -273,7 +273,7 @@ static void set_uint8(Object *obj, Visitor *v, const =
char *name, void *opaque,
>                        Error **errp)
>  {
>      Property *prop =3D opaque;
> -    uint8_t *ptr =3D qdev_get_prop_ptr(obj, prop);
> +    uint8_t *ptr =3D object_field_prop_ptr(obj, prop);
> =20
>      visit_type_uint8(v, name, ptr, errp);
>  }
> @@ -303,7 +303,7 @@ static void get_uint16(Object *obj, Visitor *v, const=
 char *name,
>                         void *opaque, Error **errp)
>  {
>      Property *prop =3D opaque;
> -    uint16_t *ptr =3D qdev_get_prop_ptr(obj, prop);
> +    uint16_t *ptr =3D object_field_prop_ptr(obj, prop);
> =20
>      visit_type_uint16(v, name, ptr, errp);
>  }
> @@ -312,7 +312,7 @@ static void set_uint16(Object *obj, Visitor *v, const=
 char *name,
>                         void *opaque, Error **errp)
>  {
>      Property *prop =3D opaque;
> -    uint16_t *ptr =3D qdev_get_prop_ptr(obj, prop);
> +    uint16_t *ptr =3D object_field_prop_ptr(obj, prop);
> =20
>      visit_type_uint16(v, name, ptr, errp);
>  }
> @@ -330,7 +330,7 @@ static void get_uint32(Object *obj, Visitor *v, const=
 char *name,
>                         void *opaque, Error **errp)
>  {
>      Property *prop =3D opaque;
> -    uint32_t *ptr =3D qdev_get_prop_ptr(obj, prop);
> +    uint32_t *ptr =3D object_field_prop_ptr(obj, prop);
> =20
>      visit_type_uint32(v, name, ptr, errp);
>  }
> @@ -339,7 +339,7 @@ static void set_uint32(Object *obj, Visitor *v, const=
 char *name,
>                         void *opaque, Error **errp)
>  {
>      Property *prop =3D opaque;
> -    uint32_t *ptr =3D qdev_get_prop_ptr(obj, prop);
> +    uint32_t *ptr =3D object_field_prop_ptr(obj, prop);
> =20
>      visit_type_uint32(v, name, ptr, errp);
>  }
> @@ -348,7 +348,7 @@ void field_prop_get_int32(Object *obj, Visitor *v, co=
nst char *name,
>                            void *opaque, Error **errp)
>  {
>      Property *prop =3D opaque;
> -    int32_t *ptr =3D qdev_get_prop_ptr(obj, prop);
> +    int32_t *ptr =3D object_field_prop_ptr(obj, prop);
> =20
>      visit_type_int32(v, name, ptr, errp);
>  }
> @@ -357,7 +357,7 @@ static void set_int32(Object *obj, Visitor *v, const =
char *name, void *opaque,
>                        Error **errp)
>  {
>      Property *prop =3D opaque;
> -    int32_t *ptr =3D qdev_get_prop_ptr(obj, prop);
> +    int32_t *ptr =3D object_field_prop_ptr(obj, prop);
> =20
>      visit_type_int32(v, name, ptr, errp);
>  }
> @@ -382,7 +382,7 @@ static void get_uint64(Object *obj, Visitor *v, const=
 char *name,
>                         void *opaque, Error **errp)
>  {
>      Property *prop =3D opaque;
> -    uint64_t *ptr =3D qdev_get_prop_ptr(obj, prop);
> +    uint64_t *ptr =3D object_field_prop_ptr(obj, prop);
> =20
>      visit_type_uint64(v, name, ptr, errp);
>  }
> @@ -391,7 +391,7 @@ static void set_uint64(Object *obj, Visitor *v, const=
 char *name,
>                         void *opaque, Error **errp)
>  {
>      Property *prop =3D opaque;
> -    uint64_t *ptr =3D qdev_get_prop_ptr(obj, prop);
> +    uint64_t *ptr =3D object_field_prop_ptr(obj, prop);
> =20
>      visit_type_uint64(v, name, ptr, errp);
>  }
> @@ -400,7 +400,7 @@ static void get_int64(Object *obj, Visitor *v, const =
char *name,
>                        void *opaque, Error **errp)
>  {
>      Property *prop =3D opaque;
> -    int64_t *ptr =3D qdev_get_prop_ptr(obj, prop);
> +    int64_t *ptr =3D object_field_prop_ptr(obj, prop);
> =20
>      visit_type_int64(v, name, ptr, errp);
>  }
> @@ -409,7 +409,7 @@ static void set_int64(Object *obj, Visitor *v, const =
char *name,
>                        void *opaque, Error **errp)
>  {
>      Property *prop =3D opaque;
> -    int64_t *ptr =3D qdev_get_prop_ptr(obj, prop);
> +    int64_t *ptr =3D object_field_prop_ptr(obj, prop);
> =20
>      visit_type_int64(v, name, ptr, errp);
>  }
> @@ -433,14 +433,14 @@ const PropertyInfo prop_info_int64 =3D {
>  static void release_string(Object *obj, const char *name, void *opaque)
>  {
>      Property *prop =3D opaque;
> -    g_free(*(char **)qdev_get_prop_ptr(obj, prop));
> +    g_free(*(char **)object_field_prop_ptr(obj, prop));
>  }
> =20
>  static void get_string(Object *obj, Visitor *v, const char *name,
>                         void *opaque, Error **errp)
>  {
>      Property *prop =3D opaque;
> -    char **ptr =3D qdev_get_prop_ptr(obj, prop);
> +    char **ptr =3D object_field_prop_ptr(obj, prop);
> =20
>      if (!*ptr) {
>          char *str =3D (char *)"";
> @@ -454,7 +454,7 @@ static void set_string(Object *obj, Visitor *v, const=
 char *name,
>                         void *opaque, Error **errp)
>  {
>      Property *prop =3D opaque;
> -    char **ptr =3D qdev_get_prop_ptr(obj, prop);
> +    char **ptr =3D object_field_prop_ptr(obj, prop);
>      char *str;
> =20
>      if (!visit_type_str(v, name, &str, errp)) {
> @@ -488,7 +488,7 @@ void field_prop_get_size32(Object *obj, Visitor *v, c=
onst char *name,
>                             void *opaque, Error **errp)
>  {
>      Property *prop =3D opaque;
> -    uint32_t *ptr =3D qdev_get_prop_ptr(obj, prop);
> +    uint32_t *ptr =3D object_field_prop_ptr(obj, prop);
>      uint64_t value =3D *ptr;
> =20
>      visit_type_size(v, name, &value, errp);
> @@ -498,7 +498,7 @@ static void set_size32(Object *obj, Visitor *v, const=
 char *name, void *opaque,
>                         Error **errp)
>  {
>      Property *prop =3D opaque;
> -    uint32_t *ptr =3D qdev_get_prop_ptr(obj, prop);
> +    uint32_t *ptr =3D object_field_prop_ptr(obj, prop);
>      uint64_t value;
> =20
>      if (!visit_type_size(v, name, &value, errp)) {
> @@ -561,7 +561,7 @@ static void set_prop_arraylen(Object *obj, Visitor *v=
, const char *name,
>       */
>      DeviceState *dev =3D DEVICE(obj);
>      Property *prop =3D opaque;
> -    uint32_t *alenptr =3D qdev_get_prop_ptr(obj, prop);
> +    uint32_t *alenptr =3D object_field_prop_ptr(obj, prop);
>      void **arrayptr =3D (void *)dev + prop->arrayoffset;
>      void *eltptr;
>      const char *arrayname;
> @@ -603,7 +603,7 @@ static void set_prop_arraylen(Object *obj, Visitor *v=
, const char *name,
>           * being inside the device struct.
>           */
>          arrayprop->prop.offset =3D eltptr - (void *)dev;
> -        assert(qdev_get_prop_ptr(obj, &arrayprop->prop) =3D=3D eltptr);
> +        assert(object_field_prop_ptr(obj, &arrayprop->prop) =3D=3D eltpt=
r);
>          object_property_add(obj, propname,
>                              arrayprop->prop.info->name,
>                              field_prop_getter(arrayprop->prop.info),
> @@ -798,7 +798,7 @@ static void get_size(Object *obj, Visitor *v, const c=
har *name, void *opaque,
>                       Error **errp)
>  {
>      Property *prop =3D opaque;
> -    uint64_t *ptr =3D qdev_get_prop_ptr(obj, prop);
> +    uint64_t *ptr =3D object_field_prop_ptr(obj, prop);
> =20
>      visit_type_size(v, name, ptr, errp);
>  }
> @@ -807,7 +807,7 @@ static void set_size(Object *obj, Visitor *v, const c=
har *name, void *opaque,
>                       Error **errp)
>  {
>      Property *prop =3D opaque;
> -    uint64_t *ptr =3D qdev_get_prop_ptr(obj, prop);
> +    uint64_t *ptr =3D object_field_prop_ptr(obj, prop);
> =20
>      visit_type_size(v, name, ptr, errp);
>  }
> diff --git a/hw/s390x/css.c b/hw/s390x/css.c
> index 496e2c5801..fe47751df4 100644
> --- a/hw/s390x/css.c
> +++ b/hw/s390x/css.c
> @@ -2344,7 +2344,7 @@ static void get_css_devid(Object *obj, Visitor *v, =
const char *name,
>                            void *opaque, Error **errp)
>  {
>      Property *prop =3D opaque;
> -    CssDevId *dev_id =3D qdev_get_prop_ptr(obj, prop);
> +    CssDevId *dev_id =3D object_field_prop_ptr(obj, prop);
>      char buffer[] =3D "xx.x.xxxx";
>      char *p =3D buffer;
>      int r;
> @@ -2373,7 +2373,7 @@ static void set_css_devid(Object *obj, Visitor *v, =
const char *name,
>                            void *opaque, Error **errp)
>  {
>      Property *prop =3D opaque;
> -    CssDevId *dev_id =3D qdev_get_prop_ptr(obj, prop);
> +    CssDevId *dev_id =3D object_field_prop_ptr(obj, prop);
>      char *str;
>      int num, n1, n2;
>      unsigned int cssid, ssid, devid;
> diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
> index 30511f620e..dd138dae94 100644
> --- a/hw/s390x/s390-pci-bus.c
> +++ b/hw/s390x/s390-pci-bus.c
> @@ -1330,7 +1330,7 @@ static void s390_pci_get_fid(Object *obj, Visitor *=
v, const char *name,
>                           void *opaque, Error **errp)
>  {
>      Property *prop =3D opaque;
> -    uint32_t *ptr =3D qdev_get_prop_ptr(obj, prop);
> +    uint32_t *ptr =3D object_field_prop_ptr(obj, prop);
> =20
>      visit_type_uint32(v, name, ptr, errp);
>  }
> @@ -1340,7 +1340,7 @@ static void s390_pci_set_fid(Object *obj, Visitor *=
v, const char *name,
>  {
>      S390PCIBusDevice *zpci =3D S390_PCI_DEVICE(obj);
>      Property *prop =3D opaque;
> -    uint32_t *ptr =3D qdev_get_prop_ptr(obj, prop);
> +    uint32_t *ptr =3D object_field_prop_ptr(obj, prop);
> =20
>      if (!visit_type_uint32(v, name, ptr, errp)) {
>          return;
> diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
> index 802979635c..fc8d63c850 100644
> --- a/hw/vfio/pci-quirks.c
> +++ b/hw/vfio/pci-quirks.c
> @@ -1489,7 +1489,7 @@ static void get_nv_gpudirect_clique_id(Object *obj,=
 Visitor *v,
>                                         Error **errp)
>  {
>      Property *prop =3D opaque;
> -    uint8_t *ptr =3D qdev_get_prop_ptr(obj, prop);
> +    uint8_t *ptr =3D object_field_prop_ptr(obj, prop);
> =20
>      visit_type_uint8(v, name, ptr, errp);
>  }
> @@ -1499,7 +1499,7 @@ static void set_nv_gpudirect_clique_id(Object *obj,=
 Visitor *v,
>                                         Error **errp)
>  {
>      Property *prop =3D opaque;
> -    uint8_t value, *ptr =3D qdev_get_prop_ptr(obj, prop);
> +    uint8_t value, *ptr =3D object_field_prop_ptr(obj, prop);
> =20
>      if (!visit_type_uint8(v, name, &value, errp)) {
>          return;



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 15:34:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 15:34:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52323.91398 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koprl-0000K7-RP; Mon, 14 Dec 2020 15:34:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52323.91398; Mon, 14 Dec 2020 15:34:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koprl-0000K0-ON; Mon, 14 Dec 2020 15:34:25 +0000
Received: by outflank-mailman (input) for mailman id 52323;
 Mon, 14 Dec 2020 15:34:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koprk-0000Js-8b; Mon, 14 Dec 2020 15:34:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koprj-0008H9-Tp; Mon, 14 Dec 2020 15:34:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koprj-00012X-MJ; Mon, 14 Dec 2020 15:34:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1koprj-0004AE-Li; Mon, 14 Dec 2020 15:34:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JYDmIZu5DYJb56mgdSHVhMYxmhWBPwqZeEMNCtYOA5k=; b=lMlXq2Z2sj4RkYjzrvfUY+LjV8
	p8Cc5PFBZtRAujPwm7iu9B1VbDMF/T5HJWKYvFfyprUStpyfcio+AsYpyXLYWN2R6yeoW8sNKYHYp
	IspgtbUumUSOP8ylfh1E2hCYktuCjlYNEt/iqCxQQ2s7996VZ4qquVdm/mA9IlNFH4NA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157521-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157521: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=793c59da135be023b02ff93a57a3bb6b34044906
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 14 Dec 2020 15:34:23 +0000

flight 157521 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157521/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 793c59da135be023b02ff93a57a3bb6b34044906
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    5 days
Failing since        157348  2020-12-09 15:39:39 Z    4 days   36 attempts
Testing same since   157521  2020-12-14 11:09:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Baraneedharan Anbazhagan <anbazhagan@hp.com>
  Baraneedharan Anbazhagan <anbazhgan@hp.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Star Zeng <star.zeng@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 428 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 15:45:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 15:45:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52333.91412 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koq2q-0001MX-TL; Mon, 14 Dec 2020 15:45:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52333.91412; Mon, 14 Dec 2020 15:45:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koq2q-0001MQ-QL; Mon, 14 Dec 2020 15:45:52 +0000
Received: by outflank-mailman (input) for mailman id 52333;
 Mon, 14 Dec 2020 15:45:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koq2p-0001MI-Je; Mon, 14 Dec 2020 15:45:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koq2p-0008T0-8K; Mon, 14 Dec 2020 15:45:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koq2p-0001cU-1X; Mon, 14 Dec 2020 15:45:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1koq2p-0006nY-13; Mon, 14 Dec 2020 15:45:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UrNmpvyqTjhSuDBnGvAybWHGZhga0z2mY8k9tC1l1AE=; b=g1Qk8lenCTJ8scGo1YOsIb/G93
	GWzOnrVlWJ0J6MlJV67j+v/sy9ukzJrz1pFdftuNvIj4eeQ8m9IcuylQ+nufIq4BYl9zEI0xUUQ7i
	7PxXo67lziWR2aD9s7rnDY4gDK5IkpRbKbQBeAf5cJhSKAc982GyP+UufBDM5fk+KmxU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157514-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157514: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=17584289af1aaa72c932e7e47c25d583b329dc45
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 14 Dec 2020 15:45:51 +0000

flight 157514 qemu-mainline real [real]
flight 157523 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157514/
http://logs.test-lab.xenproject.org/osstest/logs/157523/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                17584289af1aaa72c932e7e47c25d583b329dc45
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  116 days
Failing since        152659  2020-08-21 14:07:39 Z  115 days  241 attempts
Testing same since   157474  2020-12-13 01:37:19 Z    1 days    4 attempts

------------------------------------------------------------
308 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 75406 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 16:02:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 16:02:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52351.91428 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koqIJ-0003nA-C3; Mon, 14 Dec 2020 16:01:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52351.91428; Mon, 14 Dec 2020 16:01:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koqIJ-0003n3-90; Mon, 14 Dec 2020 16:01:51 +0000
Received: by outflank-mailman (input) for mailman id 52351;
 Mon, 14 Dec 2020 16:01:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oe/o=FS=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1koqIH-0003my-L8
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:01:49 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a5cbb4e2-9970-4e6e-b705-3d848d932767;
 Mon, 14 Dec 2020 16:01:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a5cbb4e2-9970-4e6e-b705-3d848d932767
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1607961708;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=Auc/p++hk2V3I9MRwZtYTdF1+rZwGFxV40etKNGTYvQ=;
  b=a/CS6Y4OoOA2bEY0ADKBok1RiqwgmuLFnecMm7vE9yOQjtjAirBjAN4/
   qupyVBXL0uVvxGdWhCf3AIe855nTrqS3zdpBaopkikL0svYmRnoIkG9N9
   cXQX4u/cY/YkOKoHR6VbMrwZPjFWQ3mvag44RptpRv07OG/J6uo4RNw3N
   4=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: bgyobZM7MZ8Epmh7+ScRzaeNZCtWUHuwtJRp0Cy1mnFvT+NnA5rOZlZ035GdK9nTKq2D/JOvge
 Imw91bjvMsPP2Md4q/LaUYelRGlLRWlXFM6impPcjuOECJ2AnR27NL5OT8ueU1H0JVOy36zOzs
 dKQhsQLStdCRuWoVHfFZ3UEzQV1zatJG7ovEHV3lZjZQ0WrD8SxCIDJ6415nwm+QvbPz/Wvv2D
 rw08EYEanmUfBedR1BwxmcxWcp0vuU21dtSP+8dVgCo0kroYrqKONpp62OzpFefweIJxSXiiV5
 gEg=
X-SBRS: 5.2
X-MesageID: 33191580
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,420,1599537600"; 
   d="scan'208";a="33191580"
Subject: Re: [XEN PATCH v1 1/1] Invalidate cache for cpus affinitized to the
 domain
To: Julien Grall <julien@xen.org>, "Shamsundara Havanur, Harsha"
	<havanur@amazon.com>, "jbeulich@suse.com" <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Paul
 Durrant" <paul@xen.org>, "Wieczorkiewicz, Pawel" <wipawel@amazon.de>
References: <cover.1607686878.git.havanur@amazon.com>
 <aad47c43b7cd7a391492b8be7b881cd37e9764c7.1607686878.git.havanur@amazon.com>
 <149f7f6e-0ff4-affc-b65d-0f880fa27b13@suse.com>
 <81b5d64b0a08d217e0ae53606cd1b8afd59283e4.camel@amazon.com>
 <bf70db2d-cf03-11cb-887e-aa38094b3d5f@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <607cba7c-15b6-0197-6000-cc823038d320@citrix.com>
Date: Mon, 14 Dec 2020 16:01:41 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <bf70db2d-cf03-11cb-887e-aa38094b3d5f@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 14/12/2020 10:56, Julien Grall wrote:
> Hi Harsha,
>
> On 14/12/2020 09:26, Shamsundara Havanur, Harsha wrote:
>> On Mon, 2020-12-14 at 09:52 +0100, Jan Beulich wrote:
>>>
>>> On 11.12.2020 12:44, Harsha Shamsundara Havanur wrote:
>>>> An HVM domain flushes the cache on all CPUs using the
>>>> `flush_all` macro, which uses cpu_online_map, during
>>>> i) creation of a new domain,
>>>> ii) when a device-model op is performed,
>>>> iii) when the domain is destroyed.
>>>>
>>>> This triggers an IPI on all CPUs, thus affecting other
>>>> domains that are pinned to different pCPUs. This patch
>>>> restricts the cache flush to the set of CPUs affinitized to
>>>> the current domain, using `domain->dirty_cpumask`.
>>>
>>> But then you need to effect cache flushing when a CPU gets
>>> taken out of domain->dirty_cpumask. I don't think you/we want
>>> to do that.
>>>
>> If we do not restrict it, this could lead to a DoS attack, where a
>> malicious guest could keep writing to MTRR registers or doing cache
>> flushes through the DM op, and keep sending IPIs to other neighboring
>> guests.
>
> I saw Jan already answered about the alleged DoS, so I will just focus
> on the resolution.
>
> I agree that in the ideal situation we want to limit the impact on the
> other vCPUs. However, we also need to make sure the cure is not worse
> than the symptoms.

And specifically, only a change which is correct.  This patch very
definitely isn't.

Lines can get cached on other CPUs from, e.g., qemu mappings and PV backends.

>
> The cache flush cannot be restricted in all pinning situations,
> because pinning doesn't imply the pCPU will be dedicated to a given
> vCPU, or even that the vCPU will stick to a pCPU (we may allow
> floating on a NUMA socket). Although your setup may offer this
> guarantee.
>
> My knowledge in this area is quite limited. But below are a few
> questions that will hopefully help to make a decision.
>
> The first question to answer is: can the flush be restricted in a
> setup where each vCPU runs on a dedicated pCPU (i.e. a partitioned
> system)?

Not really.  Lines can become cached even from speculation in the directmap.

If you need to flush the caches (and don't have a virtual mapping to
issue clflush/clflushopt/clwb over), it must be on all CPUs.

> If the answer is yes, then we should figure out whether using
> domain->dirty_cpumask would always be correct. For instance, a vCPU
> may not have run yet, so can we consider the associated pCPU's cache
> to be consistent?
>
> Another line of questioning is: what can we do on systems supporting
> self-snooping? IOW, would it be possible to restrict the flush for
> all setups?

Right - this is the avenue which ought to be investigated.  (Working)
self-snoop ought to remove the need for some of these cache flushes.
Others look like they're not correct to begin with.  Others, such as the
wbinvd intercepts, absolutely must stay as they are.

~Andrew
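
The IPI-targeting question debated in this thread can be sketched abstractly. This is an illustrative model only, not Xen code; the `online_cpus` and `dirty_cpumask` names are borrowed from the discussion, and the comments restate the thread's objection:

```python
# Illustrative model of the flush-IPI target sets discussed above:
# flushing over the full online-CPU set (flush_all over cpu_online_map)
# versus only a domain's dirty_cpumask.  NOT Xen code.

def flush_targets(online_cpus, dirty_cpumask, restrict):
    """Return the set of CPUs that would receive a cache-flush IPI."""
    if restrict:
        # The patch's proposal: only pCPUs the domain has recently run on.
        return set(dirty_cpumask)
    # Current behaviour: every online CPU is IPI'd.
    return set(online_cpus)

online = {0, 1, 2, 3, 4, 5, 6, 7}
dirty = {2, 3}  # e.g. a domain pinned to pCPUs 2-3

# The objection raised in the thread: cache lines belonging to the domain
# can live on CPUs *outside* dirty_cpumask (qemu mappings, PV backends,
# speculative fills via the directmap), so the restricted set can miss
# CPUs still holding stale lines -- the smaller target set is not safe.
assert flush_targets(online, dirty, restrict=False) == online
assert flush_targets(online, dirty, restrict=True) == {2, 3}
```

The model makes the trade-off concrete: restricting shrinks the IPI fan-out (the DoS concern), but correctness requires that no relevant line be cached outside the chosen set, which the thread argues does not hold.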


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 16:08:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 16:08:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52357.91440 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koqOT-00040J-4V; Mon, 14 Dec 2020 16:08:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52357.91440; Mon, 14 Dec 2020 16:08:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koqOT-00040C-0p; Mon, 14 Dec 2020 16:08:13 +0000
Received: by outflank-mailman (input) for mailman id 52357;
 Mon, 14 Dec 2020 16:08:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C0N7=FS=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1koqOS-000407-D8
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:08:12 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c264d67d-0e2c-434d-b970-2024800322d5;
 Mon, 14 Dec 2020 16:08:10 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0BEG82mf024267
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Mon, 14 Dec 2020 17:08:03 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id BF20C2E936F; Mon, 14 Dec 2020 17:07:57 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c264d67d-0e2c-434d-b970-2024800322d5
Date: Mon, 14 Dec 2020 17:07:57 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
        Roger Pau Monné <roger.pau@citrix.com>
Subject: Re: [PATCH] x86/PV: guest_get_eff_kern_l1e() may still need to
 switch page tables
Message-ID: <20201214160757.GA5165@antioche.eu.org>
References: <89ae6a3b-bfbf-a701-53f5-4dfc80065924@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <89ae6a3b-bfbf-a701-53f5-4dfc80065924@suse.com>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Mon, 14 Dec 2020 17:08:04 +0100 (MET)

On Mon, Dec 14, 2020 at 02:57:53PM +0100, Jan Beulich wrote:
> While indeed unnecessary for pv_ro_page_fault(), pv_map_ldt_shadow_page()
> may run when guest user mode is active, and hence may need to switch to
> the kernel page tables in order to retrieve an LDT page mapping.
> 
> Fixes: 9ff970564764 ("x86/mm: drop guest_get_eff_l1e()")
> Reported-by: Manuel Bouyer <bouyer@antioche.eu.org>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> Manuel, could you test this again, just to be on the safe side
> before we throw it in (at which point we could then also again
> add a Tested-by)? Thanks.

Yes, this works for me, thanks!

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 16:28:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 16:28:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52365.91452 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koqhn-0005wN-TO; Mon, 14 Dec 2020 16:28:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52365.91452; Mon, 14 Dec 2020 16:28:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koqhn-0005wG-QK; Mon, 14 Dec 2020 16:28:11 +0000
Received: by outflank-mailman (input) for mailman id 52365;
 Mon, 14 Dec 2020 16:28:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koqhm-0005w7-5r; Mon, 14 Dec 2020 16:28:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koqhl-0001IZ-UD; Mon, 14 Dec 2020 16:28:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koqhl-0002zE-Lp; Mon, 14 Dec 2020 16:28:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1koqhl-0003we-LM; Mon, 14 Dec 2020 16:28:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OZZ6uVho4OT6ln3uhkORbuiRyIGhOVe+kBFqgQGyP4U=; b=Y1p2IRz3E/Jzgl4U0UV9bjj4w/
	tXMtWZ7Nhl3V23QK6v5R3p/7pcRFVqJqHYt0sfYLMXW+AKIw/ZjcYiFfqILnAZwc81JWKkTFRljcT
	qHII7Hk59rbKK9sy8cLfJW04e0TflRH5pRhVcKtfjQChcOZhU16JLsgoDXvt2+0t+jm8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157525-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157525: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=793c59da135be023b02ff93a57a3bb6b34044906
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 14 Dec 2020 16:28:09 +0000

flight 157525 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157525/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 793c59da135be023b02ff93a57a3bb6b34044906
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    5 days
Failing since        157348  2020-12-09 15:39:39 Z    5 days   37 attempts
Testing same since   157521  2020-12-14 11:09:46 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Baraneedharan Anbazhagan <anbazhagan@hp.com>
  Baraneedharan Anbazhagan <anbazhgan@hp.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Star Zeng <star.zeng@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 428 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 16:57:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 16:57:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52442.91485 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kor9k-0000mS-Eh; Mon, 14 Dec 2020 16:57:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52442.91485; Mon, 14 Dec 2020 16:57:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kor9k-0000mL-Bq; Mon, 14 Dec 2020 16:57:04 +0000
Received: by outflank-mailman (input) for mailman id 52442;
 Mon, 14 Dec 2020 16:57:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Yspo=FS=redhat.com=stefanha@srs-us1.protection.inumbo.net>)
 id 1kor9j-0000mG-EK
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:57:03 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id c89a3f52-eed2-49d5-9abc-09d5230136e7;
 Mon, 14 Dec 2020 16:57:02 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-573-UWI2jJIwONaDWDAti9D55g-1; Mon, 14 Dec 2020 11:57:00 -0500
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com
 [10.5.11.13])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 54F83800400;
 Mon, 14 Dec 2020 16:56:58 +0000 (UTC)
Received: from localhost (ovpn-113-200.ams2.redhat.com [10.36.113.200])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 5116977BE1;
 Mon, 14 Dec 2020 16:56:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c89a3f52-eed2-49d5-9abc-09d5230136e7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607965022;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Vy11eL4SPTpFH8ha19SW6rVhq3mxfKeo6GeSoh9uImw=;
	b=Shi8lbdXJZmo2UuY9w55nS4kZkJQcCrUUtGrleD9p0liMR7q6ZMYo9/T5XgXUY2bZ/0Vmc
	AsgjuY8nruRQ+ilzNVs7rU5xDBbgeXEFUJOgLTkokT+v1ch1IamCFe4q/fs/y199vKcrdO
	14B2pewe7InDRoWZ/+EGhhYyl/iaB/U=
X-MC-Unique: UWI2jJIwONaDWDAti9D55g-1
Date: Mon, 14 Dec 2020 16:56:50 +0000
From: Stefan Hajnoczi <stefanha@redhat.com>
To: marcandre.lureau@redhat.com
Cc: qemu-devel@nongnu.org, philmd@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Laurent Vivier <laurent@vivier.eu>, Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org, Gerd Hoffmann <kraxel@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-arm@nongnu.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Peter Maydell <peter.maydell@linaro.org>
Subject: Re: [PATCH v3 01/13] qemu/atomic: Drop special case for unsupported
 compiler
Message-ID: <20201214165650.GG620320@stefanha-x1.localdomain>
References: <20201210134752.780923-1-marcandre.lureau@redhat.com>
 <20201210134752.780923-2-marcandre.lureau@redhat.com>
MIME-Version: 1.0
In-Reply-To: <20201210134752.780923-2-marcandre.lureau@redhat.com>
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=stefanha@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="iJXiJc/TAIT2rh2r"
Content-Disposition: inline

--iJXiJc/TAIT2rh2r
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Thu, Dec 10, 2020 at 05:47:40PM +0400, marcandre.lureau@redhat.com wrote:
> From: Philippe Mathieu-Daudé <philmd@redhat.com>
> 
> Since commit efc6c070aca ("configure: Add a test for the
> minimum compiler version") the minimum compiler version
> required for GCC is 4.8, which has the GCC BZ#36793 bug fixed.
> 
> We can safely remove the special case introduced in commit
> a281ebc11a6 ("virtio: add missing mb() on notification").
> 
> With clang 3.4, __ATOMIC_RELAXED is defined, so the chunk to
> remove (which is x86-specific) isn't reached either.
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
> ---
>  include/qemu/atomic.h | 17 -----------------
>  1 file changed, 17 deletions(-)

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>

--iJXiJc/TAIT2rh2r
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAl/XmVEACgkQnKSrs4Gr
c8jjJQgAo8d7/t5LGnL7hwWSSveWGkFFsOOGo/SVqa4OJ7XBEP26SZ7KCL5QIHYz
WGIKZI+jjROvOYI0wqtxkv/4VVxbD8Dbd5XsndSAKWBq/LPt18XYuFhmO2pLWOGy
r6zWizooyUsOPqvkOt4Oud3AWqCiWyDykKtnRhYOV07sv2TAnaR0LpoB6c0khohS
6hjCjj2GK2KuajUtwaVGbF/C12RYeAbnpy0bwzU+rKDFNcqII4VCEhkmYXZ5hhd5
aPASx3NU4L3cCK3RM7yhlHNsyk8fP/eeQhSarqTQy/uBQQvFHudI9xpk2fkDWbo5
7+NeGO0qXKCzVz6wmVob7FXMYCJK0Q==
=BDgQ
-----END PGP SIGNATURE-----

--iJXiJc/TAIT2rh2r--



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 16:57:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 16:57:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52447.91497 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1korAS-0000sK-Py; Mon, 14 Dec 2020 16:57:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52447.91497; Mon, 14 Dec 2020 16:57:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1korAS-0000sD-Lg; Mon, 14 Dec 2020 16:57:48 +0000
Received: by outflank-mailman (input) for mailman id 52447;
 Mon, 14 Dec 2020 16:57:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Yspo=FS=redhat.com=stefanha@srs-us1.protection.inumbo.net>)
 id 1korAR-0000s6-2Q
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:57:47 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id a11fc9f2-d671-4fbf-ad33-77c01b78d4ec;
 Mon, 14 Dec 2020 16:57:46 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-51-dOTt1Uv3MgeXrCvhXQnztA-1; Mon, 14 Dec 2020 11:57:42 -0500
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 007DB1006C85;
 Mon, 14 Dec 2020 16:57:41 +0000 (UTC)
Received: from localhost (ovpn-113-200.ams2.redhat.com [10.36.113.200])
 by smtp.corp.redhat.com (Postfix) with ESMTP id E6D352BCD0;
 Mon, 14 Dec 2020 16:57:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a11fc9f2-d671-4fbf-ad33-77c01b78d4ec
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607965066;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=nzkkrxNsf/tKSpJy0Ol/DYhuNPJEQyRqILQRkCfzVFw=;
	b=OKK/22HEkZddUhpVI8gVNgUxzULUXeTxVv5VsJNn4NTD38eTo/2ird+4hsow3bojQmk7yX
	HbnylFKtpr1NuTIP2qW3otBjWniwyljBnV+T5Z8cC4gDoVvvUY3FjCgAUVVE/ARgYnDtyl
	+ABU5mG15gwtO5BhkPfhN0acSUgFOr8=
X-MC-Unique: dOTt1Uv3MgeXrCvhXQnztA-1
Date: Mon, 14 Dec 2020 16:57:31 +0000
From: Stefan Hajnoczi <stefanha@redhat.com>
To: marcandre.lureau@redhat.com
Cc: qemu-devel@nongnu.org, philmd@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Laurent Vivier <laurent@vivier.eu>, Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org, Gerd Hoffmann <kraxel@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-arm@nongnu.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Peter Maydell <peter.maydell@linaro.org>
Subject: Re: [PATCH v3 06/13] virtiofsd: replace _Static_assert with
 QEMU_BUILD_BUG_ON
Message-ID: <20201214165731.GH620320@stefanha-x1.localdomain>
References: <20201210134752.780923-1-marcandre.lureau@redhat.com>
 <20201210134752.780923-7-marcandre.lureau@redhat.com>
MIME-Version: 1.0
In-Reply-To: <20201210134752.780923-7-marcandre.lureau@redhat.com>
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=stefanha@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="Enx9fNJ0XV5HaWRu"
Content-Disposition: inline

--Enx9fNJ0XV5HaWRu
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Thu, Dec 10, 2020 at 05:47:45PM +0400, marcandre.lureau@redhat.com wrote:
> From: Marc-André Lureau <marcandre.lureau@redhat.com>
> 
> This allows us to get rid of a check for older GCC versions (which was
> a bit bogus too, since it was falling back on the C++ version..)
> 
> Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
> ---
>  tools/virtiofsd/fuse_common.h | 11 +----------
>  1 file changed, 1 insertion(+), 10 deletions(-)

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>

--Enx9fNJ0XV5HaWRu
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAl/XmXsACgkQnKSrs4Gr
c8hM/QgAo7goQSbSjtniQApqy5STJw15VfReJdpV17jhRtfEYUPuLzhVUULS8G8b
WI0xMh1L83QCQmaFxogrLPGI+zXM+slDrmn/zPcX3tyVXcs6UfJw6hbV2gk1y8fA
kOCUVF1aTRe5M2SezgchbA6badCZ+Wv28xdAqvVbARJXOKKDWP3lZwaKsUESu9Os
JyzoFCOajcEZru/pMKpd3DYKKjmHyr+AWMaY3+LEXAjJYi2SZfuTNDb30aiomP9N
9ArM4OFuww0DmiKzYaOA8IurGv3KSXFbxMyckl05OKHW18iAjX2e9o7lXe3OFFOO
LI+MK8lFJx1vbKX7lBL8Li1tsThXnA==
=jtpu
-----END PGP SIGNATURE-----

--Enx9fNJ0XV5HaWRu--



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 17:24:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 17:24:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52476.91509 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koraN-0003mv-9I; Mon, 14 Dec 2020 17:24:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52476.91509; Mon, 14 Dec 2020 17:24:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koraN-0003mo-5p; Mon, 14 Dec 2020 17:24:35 +0000
Received: by outflank-mailman (input) for mailman id 52476;
 Mon, 14 Dec 2020 17:24:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hKoR=FS=redhat.com=ehabkost@srs-us1.protection.inumbo.net>)
 id 1koraM-0003mj-2G
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 17:24:34 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id cb2bce35-7541-4f67-9b90-134b5b10151c;
 Mon, 14 Dec 2020 17:24:33 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-583-5mqhwgx1OAqCYa9M4ykgwA-1; Mon, 14 Dec 2020 12:24:31 -0500
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com
 [10.5.11.22])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id E043E107ACF5;
 Mon, 14 Dec 2020 17:24:28 +0000 (UTC)
Received: from localhost (ovpn-116-160.rdu2.redhat.com [10.10.116.160])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 6BE2410021AA;
 Mon, 14 Dec 2020 17:24:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb2bce35-7541-4f67-9b90-134b5b10151c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607966672;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=tVrw542DX2KLO2TPzENouPrv7sTLbHUO9KuI1dV/ApQ=;
	b=YjZVnEhMg8/zcRnSWqPQqai/Fmz+TJeCXp2n0b44Wp4AwGn3giDjSfNdCzElgPc8bDHHXz
	M9q3H4tJsByuKozGDnxwQV7dPOCNOVp3s4Sp0iVHyWhR5HCYLAAjDvoRy53PQKqO87x3cG
	P+xx5BDLRyy7O1/e/a7QbHfEf7f/VVM=
X-MC-Unique: 5mqhwgx1OAqCYa9M4ykgwA-1
Date: Mon, 14 Dec 2020 12:24:18 -0500
From: Eduardo Habkost <ehabkost@redhat.com>
To: Igor Mammedov <imammedo@redhat.com>
Cc: qemu-devel@nongnu.org, Markus Armbruster <armbru@redhat.com>,
	Stefan Berger <stefanb@linux.ibm.com>,
	Marc-André Lureau <marcandre.lureau@redhat.com>,
	"Daniel P. Berrange" <berrange@redhat.com>,
	Philippe Mathieu-Daudé <philmd@redhat.com>,
	John Snow <jsnow@redhat.com>, Kevin Wolf <kwolf@redhat.com>,
	Eric Blake <eblake@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Berger <stefanb@linux.vnet.ibm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>, Max Reitz <mreitz@redhat.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Halil Pasic <pasic@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Richard Henderson <rth@twiddle.net>,
	David Hildenbrand <david@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Matthew Rosato <mjrosato@linux.ibm.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Artyom Tarasenko <atar4qemu@gmail.com>,
	xen-devel@lists.xenproject.org, qemu-block@nongnu.org,
	qemu-s390x@nongnu.org
Subject: Re: [PATCH v4 23/32] qdev: Move dev->realized check to
 qdev_property_set()
Message-ID: <20201214172418.GK1289986@habkost.net>
References: <20201211220529.2290218-1-ehabkost@redhat.com>
 <20201211220529.2290218-24-ehabkost@redhat.com>
 <20201214155530.55f80cd6@redhat.com>
MIME-Version: 1.0
In-Reply-To: <20201214155530.55f80cd6@redhat.com>
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=ehabkost@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline

On Mon, Dec 14, 2020 at 03:55:30PM +0100, Igor Mammedov wrote:
> On Fri, 11 Dec 2020 17:05:20 -0500
> Eduardo Habkost <ehabkost@redhat.com> wrote:
> 
> > Every single qdev property setter function manually checks
> > dev->realized.  We can just check dev->realized inside
> > qdev_property_set() instead.
> > 
> > The check is being added as a separate function
> > (qdev_prop_allow_set()) because it will become a callback later.
> 
> is callback added within this series?
> and I'd add here what's the purpose of it.

It will be added in part 2 of the series.  See v3:
https://lore.kernel.org/qemu-devel/20201112214350.872250-35-ehabkost@redhat.com/

I don't know what else I could say about its purpose, in addition
to what I wrote above, and the comment below[1].

If you are just curious about the callback and confused because
it is not anywhere in this series, I can just remove the
paragraph above from the commit message.  Would that be enough?

> 
[...]
> > +/* returns: true if property is allowed to be set, false otherwise */

[1] ^^^

> > +static bool qdev_prop_allow_set(Object *obj, const char *name,
> > +                                Error **errp)
> > +{
> > +    DeviceState *dev = DEVICE(obj);
> > +
> > +    if (dev->realized) {
> > +        qdev_prop_set_after_realize(dev, name, errp);
> > +        return false;
> > +    }
> > +    return true;
> > +}
> > +

-- 
Eduardo
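[Editorial note: the patch above centralizes the dev->realized check in one gate function instead of repeating it in every property setter. A minimal self-contained sketch of that pattern follows; the struct, names, and signatures here are illustrative stand-ins, not QEMU's actual Object/Error API.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-in for QEMU's DeviceState; only the field the
 * check cares about is modeled. */
typedef struct DeviceState {
    bool realized;
} DeviceState;

/* Mirrors the check quoted above: a property may only be set before
 * the device is realized. Returns true if the set is allowed. */
static bool qdev_prop_allow_set(DeviceState *dev, const char *name)
{
    if (dev->realized) {
        fprintf(stderr, "cannot set property '%s' after realize\n", name);
        return false;
    }
    return true;
}

/* Sketch of a centralized setter: the gate runs once here, so the
 * individual per-property setters no longer need their own
 * dev->realized checks. */
static bool qdev_property_set(DeviceState *dev, const char *name,
                              int value, int *storage)
{
    if (!qdev_prop_allow_set(dev, name)) {
        return false;
    }
    *storage = value;
    return true;
}
```

Routing every write through one gate also makes the later step mentioned in the thread natural: the gate can be turned into a replaceable callback without touching any setter.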



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 17:26:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 17:26:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52481.91521 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1korbk-0003tm-KK; Mon, 14 Dec 2020 17:26:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52481.91521; Mon, 14 Dec 2020 17:26:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1korbk-0003tf-HF; Mon, 14 Dec 2020 17:26:00 +0000
Received: by outflank-mailman (input) for mailman id 52481;
 Mon, 14 Dec 2020 17:25:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1korbj-0003tW-1S; Mon, 14 Dec 2020 17:25:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1korbi-0002I7-R3; Mon, 14 Dec 2020 17:25:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1korbi-0005EJ-FL; Mon, 14 Dec 2020 17:25:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1korbi-0002gS-Er; Mon, 14 Dec 2020 17:25:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7aLJgDWHzvQ+ezvj4q5GHJuMkFbsK3cmDKRvf8vh7p8=; b=OIPS30mJIqlpJblPlXyc9EXQkG
	7VyIpEZEYMJfIuhK8Obc3BzlGCT5nMcx/9GLs171VDxWUaMtvL35TbcjDeavOKigqpo+X39qB48Vh
	29F9lcg7Ua15CKqo2Tje0Y7iRxmV5D7FcRioFw8tWkhxRseBvETcfZGPmOzaKsOYt57c=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157519-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157519: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=2c85ebc57b3e1817b6ce1a6b703928e113a90442
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 14 Dec 2020 17:25:58 +0000

flight 157519 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157519/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                2c85ebc57b3e1817b6ce1a6b703928e113a90442
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  135 days
Failing since        152366  2020-08-01 20:49:34 Z  134 days  235 attempts
Testing same since   157519  2020-12-14 08:29:24 Z    0 days    1 attempts

------------------------------------------------------------
3694 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 707303 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:35:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:35:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52390.91627 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshH-0001v6-Vc; Mon, 14 Dec 2020 18:35:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52390.91627; Mon, 14 Dec 2020 18:35:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshH-0001uT-HJ; Mon, 14 Dec 2020 18:35:47 +0000
Received: by outflank-mailman (input) for mailman id 52390;
 Mon, 14 Dec 2020 16:37:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nuho=FS=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1koqqZ-0006vN-JH
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:37:15 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 529ef58c-2875-4c18-8456-429b71c1dbf1;
 Mon, 14 Dec 2020 16:36:34 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 0BEGaX4U008860;
 Mon, 14 Dec 2020 17:36:33 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 0BEGaX6J007339;
 Mon, 14 Dec 2020 17:36:33 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id 34CCCAAC68; Mon, 14 Dec 2020 17:36:33 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 529ef58c-2875-4c18-8456-429b71c1dbf1
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>
Subject: [PATCH 08/24] Make libs/call build on NetBSD
Date: Mon, 14 Dec 2020 17:36:07 +0100
Message-Id: <20201214163623.2127-9-bouyer@netbsd.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201214163623.2127-1-bouyer@netbsd.org>
References: <20201214163623.2127-1-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Mon, 14 Dec 2020 17:36:33 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

---
 tools/libs/call/netbsd.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/tools/libs/call/netbsd.c b/tools/libs/call/netbsd.c
index a5502da377..1a771e9928 100644
--- a/tools/libs/call/netbsd.c
+++ b/tools/libs/call/netbsd.c
@@ -19,12 +19,14 @@
  * Split from xc_netbsd.c
  */
 
-#include "xc_private.h"
 
 #include <unistd.h>
 #include <fcntl.h>
 #include <malloc.h>
+#include <errno.h>
 #include <sys/mman.h>
+#include <sys/ioctl.h>
+#include "private.h"
 
 int osdep_xencall_open(xencall_handle *xcall)
 {
@@ -69,12 +71,13 @@ int osdep_xencall_close(xencall_handle *xcall)
     return close(fd);
 }
 
-void *osdep_alloc_hypercall_buffer(xencall_handle *xcall, size_t npages)
+void *osdep_alloc_pages(xencall_handle *xcall, size_t npages)
 {
-    size_t size = npages * XC_PAGE_SIZE;
+    size_t size = npages * PAGE_SIZE;
     void *p;
+    int ret;
 
-    ret = posix_memalign(&p, XC_PAGE_SIZE, size);
+    ret = posix_memalign(&p, PAGE_SIZE, size);
     if ( ret != 0 || !p )
         return NULL;
 
@@ -86,14 +89,13 @@ void *osdep_alloc_hypercall_buffer(xencall_handle *xcall, size_t npages)
     return p;
 }
 
-void osdep_free_hypercall_buffer(xencall_handle *xcall, void *ptr,
-                                 size_t npages)
+void osdep_free_pages(xencall_handle *xcall, void *ptr, size_t npages)
 {
-    (void) munlock(ptr, npages * XC_PAGE_SIZE);
+    (void) munlock(ptr, npages * PAGE_SIZE);
     free(ptr);
 }
 
-int do_xen_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall)
+int osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall)
 {
     int fd = xcall->fd;
     int error = ioctl(fd, IOCTL_PRIVCMD_HYPERCALL, hypercall);
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:35:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:35:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52396.91678 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshM-00025H-0U; Mon, 14 Dec 2020 18:35:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52396.91678; Mon, 14 Dec 2020 18:35:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshL-00024m-51; Mon, 14 Dec 2020 18:35:51 +0000
Received: by outflank-mailman (input) for mailman id 52396;
 Mon, 14 Dec 2020 16:39:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nuho=FS=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1koqrS-0006vN-LG
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:38:10 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3300b86f-0b1d-4ac4-8ba9-60faeafc2272;
 Mon, 14 Dec 2020 16:36:34 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 0BEGaXKp004605;
 Mon, 14 Dec 2020 17:36:33 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 0BEGaWlD002558;
 Mon, 14 Dec 2020 17:36:33 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id 24E1CAAC65; Mon, 14 Dec 2020 17:36:33 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3300b86f-0b1d-4ac4-8ba9-60faeafc2272
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>
Subject: [PATCH 07/24] Remove NetBSD's system headers. We'll use the system-provided ones, which are up to date.
Date: Mon, 14 Dec 2020 17:36:06 +0100
Message-Id: <20201214163623.2127-8-bouyer@netbsd.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201214163623.2127-1-bouyer@netbsd.org>
References: <20201214163623.2127-1-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Mon, 14 Dec 2020 17:36:33 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

---
 tools/include/Makefile                 |   2 +-
 tools/include/xen-sys/NetBSD/evtchn.h  |  86 --------------------
 tools/include/xen-sys/NetBSD/privcmd.h | 106 -------------------------
 3 files changed, 1 insertion(+), 193 deletions(-)
 delete mode 100644 tools/include/xen-sys/NetBSD/evtchn.h
 delete mode 100644 tools/include/xen-sys/NetBSD/privcmd.h

diff --git a/tools/include/Makefile b/tools/include/Makefile
index 4d4ec5f974..5e90179e66 100644
--- a/tools/include/Makefile
+++ b/tools/include/Makefile
@@ -68,7 +68,7 @@ install: all
 	$(INSTALL_DATA) xen/foreign/*.h $(DESTDIR)$(includedir)/xen/foreign
 	$(INSTALL_DATA) xen/hvm/*.h $(DESTDIR)$(includedir)/xen/hvm
 	$(INSTALL_DATA) xen/io/*.h $(DESTDIR)$(includedir)/xen/io
-	$(INSTALL_DATA) xen/sys/*.h $(DESTDIR)$(includedir)/xen/sys
+	$(INSTALL_DATA) xen/sys/*.h $(DESTDIR)$(includedir)/xen/sys || true
 	$(INSTALL_DATA) xen/xsm/*.h $(DESTDIR)$(includedir)/xen/xsm
 
 .PHONY: uninstall
diff --git a/tools/include/xen-sys/NetBSD/evtchn.h b/tools/include/xen-sys/NetBSD/evtchn.h
deleted file mode 100644
index 2d8a1f9164..0000000000
--- a/tools/include/xen-sys/NetBSD/evtchn.h
+++ /dev/null
@@ -1,86 +0,0 @@
-/* $NetBSD: evtchn.h,v 1.1.1.1 2007/06/14 19:39:45 bouyer Exp $ */
-/******************************************************************************
- * evtchn.h
- * 
- * Interface to /dev/xen/evtchn.
- * 
- * Copyright (c) 2003-2005, K A Fraser
- * 
- * This file may be distributed separately from the Linux kernel, or
- * incorporated into other software packages, subject to the following license:
- * 
- * Permission is hereby granted, free of charge, to any person obtaining a copy
- * of this source file (the "Software"), to deal in the Software without
- * restriction, including without limitation the rights to use, copy, modify,
- * merge, publish, distribute, sublicense, and/or sell copies of the Software,
- * and to permit persons to whom the Software is furnished to do so, subject to
- * the following conditions:
- * 
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- * 
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
- * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- */
-
-#ifndef __NetBSD_EVTCHN_H__
-#define __NetBSD_EVTCHN_H__
-
-/*
- * Bind a fresh port to VIRQ @virq.
- */
-#define IOCTL_EVTCHN_BIND_VIRQ				\
-	_IOWR('E', 4, struct ioctl_evtchn_bind_virq)
-struct ioctl_evtchn_bind_virq {
-	unsigned int virq;
-	unsigned int port;
-};
-
-/*
- * Bind a fresh port to remote <@remote_domain, @remote_port>.
- */
-#define IOCTL_EVTCHN_BIND_INTERDOMAIN			\
-	_IOWR('E', 5, struct ioctl_evtchn_bind_interdomain)
-struct ioctl_evtchn_bind_interdomain {
-	unsigned int remote_domain, remote_port;
-	unsigned int port;
-};
-
-/*
- * Allocate a fresh port for binding to @remote_domain.
- */
-#define IOCTL_EVTCHN_BIND_UNBOUND_PORT			\
-	_IOWR('E', 6, struct ioctl_evtchn_bind_unbound_port)
-struct ioctl_evtchn_bind_unbound_port {
-	unsigned int remote_domain;
-	unsigned int port;
-};
-
-/*
- * Unbind previously allocated @port.
- */
-#define IOCTL_EVTCHN_UNBIND				\
-	_IOW('E', 7, struct ioctl_evtchn_unbind)
-struct ioctl_evtchn_unbind {
-	unsigned int port;
-};
-
-/*
- * Send event to previously allocated @port.
- */
-#define IOCTL_EVTCHN_NOTIFY				\
-	_IOW('E', 8, struct ioctl_evtchn_notify)
-struct ioctl_evtchn_notify {
-	unsigned int port;
-};
-
-/* Clear and reinitialise the event buffer. Clear error condition. */
-#define IOCTL_EVTCHN_RESET				\
-	_IO('E', 9)
-
-#endif /* __NetBSD_EVTCHN_H__ */
diff --git a/tools/include/xen-sys/NetBSD/privcmd.h b/tools/include/xen-sys/NetBSD/privcmd.h
deleted file mode 100644
index 555bad973e..0000000000
--- a/tools/include/xen-sys/NetBSD/privcmd.h
+++ /dev/null
@@ -1,106 +0,0 @@
-/*	NetBSD: xenio.h,v 1.3 2005/05/24 12:07:12 yamt Exp $	*/
-
-/******************************************************************************
- * privcmd.h
- * 
- * Copyright (c) 2003-2004, K A Fraser
- * 
- * This file may be distributed separately from the Linux kernel, or
- * incorporated into other software packages, subject to the following license:
- * 
- * Permission is hereby granted, free of charge, to any person obtaining a copy
- * of this source file (the "Software"), to deal in the Software without
- * restriction, including without limitation the rights to use, copy, modify,
- * merge, publish, distribute, sublicense, and/or sell copies of the Software,
- * and to permit persons to whom the Software is furnished to do so, subject to
- * the following conditions:
- * 
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- * 
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
- * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- */
-
-#ifndef __NetBSD_PRIVCMD_H__
-#define __NetBSD_PRIVCMD_H__
-
-/* Interface to /dev/xen/privcmd */
-
-typedef struct privcmd_hypercall
-{
-    unsigned long op;
-    unsigned long arg[5];
-    long retval;
-} privcmd_hypercall_t;
-
-typedef struct privcmd_mmap_entry {
-    unsigned long va;
-    unsigned long mfn;
-    unsigned long npages;
-} privcmd_mmap_entry_t; 
-
-typedef struct privcmd_mmap {
-    int num;
-    domid_t dom; /* target domain */
-    privcmd_mmap_entry_t *entry;
-} privcmd_mmap_t; 
-
-typedef struct privcmd_mmapbatch {
-    int num;     /* number of pages to populate */
-    domid_t dom; /* target domain */
-    unsigned long addr;  /* virtual address */
-    unsigned long *arr; /* array of mfns - top nibble set on err */
-} privcmd_mmapbatch_t; 
-
-typedef struct privcmd_blkmsg
-{
-    unsigned long op;
-    void         *buf;
-    int           buf_size;
-} privcmd_blkmsg_t;
-
-/*
- * @cmd: IOCTL_PRIVCMD_HYPERCALL
- * @arg: &privcmd_hypercall_t
- * Return: Value returned from execution of the specified hypercall.
- */
-#define IOCTL_PRIVCMD_HYPERCALL         \
-    _IOWR('P', 0, privcmd_hypercall_t)
-
-#if defined(_KERNEL)
-/* compat */
-#define IOCTL_PRIVCMD_INITDOMAIN_EVTCHN_OLD \
-    _IO('P', 1)
-#endif /* defined(_KERNEL) */
-    
-#define IOCTL_PRIVCMD_MMAP             \
-    _IOW('P', 2, privcmd_mmap_t)
-#define IOCTL_PRIVCMD_MMAPBATCH        \
-    _IOW('P', 3, privcmd_mmapbatch_t)
-#define IOCTL_PRIVCMD_GET_MACH2PHYS_START_MFN \
-    _IOR('P', 4, unsigned long)
-
-/*
- * @cmd: IOCTL_PRIVCMD_INITDOMAIN_EVTCHN
- * @arg: n/a
- * Return: Port associated with domain-controller end of control event channel
- *         for the initial domain.
- */
-#define IOCTL_PRIVCMD_INITDOMAIN_EVTCHN \
-    _IOR('P', 5, int)
-
-/* Interface to /dev/xenevt */
-/* EVTCHN_RESET: Clear and reinit the event buffer. Clear error condition. */
-#define EVTCHN_RESET  _IO('E', 1)
-/* EVTCHN_BIND: Bind to the specified event-channel port. */
-#define EVTCHN_BIND   _IOW('E', 2, unsigned long)
-/* EVTCHN_UNBIND: Unbind from the specified event-channel port. */
-#define EVTCHN_UNBIND _IOW('E', 3, unsigned long)
-
-#endif /* __NetBSD_PRIVCMD_H__ */
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:35:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:35:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52382.91581 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshF-0001q8-On; Mon, 14 Dec 2020 18:35:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52382.91581; Mon, 14 Dec 2020 18:35:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshF-0001pm-E8; Mon, 14 Dec 2020 18:35:45 +0000
Received: by outflank-mailman (input) for mailman id 52382;
 Mon, 14 Dec 2020 16:36:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nuho=FS=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1koqqF-0006vN-IW
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:36:55 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 48725c45-d918-4625-8df9-1a2d00cdb5b0;
 Mon, 14 Dec 2020 16:36:34 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 0BEGaW4U006564;
 Mon, 14 Dec 2020 17:36:32 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 0BEGaWqO005623;
 Mon, 14 Dec 2020 17:36:32 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id 8BA1FAAC66; Mon, 14 Dec 2020 17:36:32 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 48725c45-d918-4625-8df9-1a2d00cdb5b0
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>
Subject: [PATCH 00/24] NetBSD fixes
Date: Mon, 14 Dec 2020 17:35:59 +0100
Message-Id: <20201214163623.2127-1-bouyer@netbsd.org>
X-Mailer: git-send-email 2.28.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Mon, 14 Dec 2020 17:36:32 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

Hello,
here is a set of 24 patches needed to build and run the tools on
NetBSD. They are extracted from NetBSD's pkgsrc repository for
Xen 4.13 and ported to 4.15.

Manuel Bouyer (24):
  Fix lock directory path for NetBSD
  NetBSD doesn't need xenbackendd with xl toolstack
  Fix lock directory path for NetBSD
  Make it build on NetBSD
  Introduce locking functions for block device setup on NetBSD
  Handle the case where vifname is not present in xenstore.
  Remove NetBSD's system headers. We'll use the system-provided ones,
    which are up to date.
  Make it build on NetBSD
  Use xen/xenio.h on NetBSD
  Make it build on NetBSD
  Implement foreignmemory on NetBSD
  Implement gnttab on NetBSD
  Don't assume tv_sec is an unsigned long (for NetBSD)
  Pass bridge name to qemu. When starting qemu, set an environment
    variable XEN_DOMAIN_ID, to be used by qemu helper scripts
  Make it build on NetBSD
  Switch NetBSD to QEMU_XEN (!traditional)
  Make it build on NetBSD
  This doesn't need xen/sys/evtchn.h (NetBSD fix)
  errno may not be a global R/W variable, use a local variable instead
    (fix build on NetBSD)
  If FILENAME_MAX is defined, use it instead of arbitrary value (fix
    format-truncation errors with GCC >= 7)
  Fix unused functions/variables error
  If PTHREAD_STACK_MIN is not defined, use DEFAULT_THREAD_STACKSIZE
  Use xen/xenio.h on NetBSD
  Fix error: array subscript has type 'char' [-Werror=char-subscripts]

 m4/paths.m4                                   |   2 +-
 tools/Makefile                                |   1 -
 tools/configure                               |   2 +-
 tools/debugger/gdbsx/xg/xg_main.c             |  11 +
 tools/hotplug/NetBSD/Makefile                 |   1 +
 tools/hotplug/NetBSD/block                    |   5 +-
 tools/hotplug/NetBSD/locking.sh               |  72 +++++
 tools/hotplug/NetBSD/vif-bridge               |   5 +-
 tools/hotplug/NetBSD/vif-ip                   |   4 +
 tools/include/Makefile                        |   2 +-
 tools/include/xen-sys/NetBSD/evtchn.h         |  86 ------
 tools/include/xen-sys/NetBSD/privcmd.h        | 106 -------
 tools/libs/call/netbsd.c                      |  18 +-
 tools/libs/call/private.h                     |   8 +-
 tools/libs/ctrl/xc_private.h                  |   4 +
 tools/libs/evtchn/netbsd.c                    |   8 +-
 tools/libs/foreignmemory/Makefile             |   2 +-
 tools/libs/foreignmemory/netbsd.c             |  76 ++++-
 tools/libs/foreignmemory/private.h            |  10 +-
 tools/libs/gnttab/Makefile                    |   2 +-
 tools/libs/gnttab/netbsd.c                    | 267 ++++++++++++++++++
 tools/libs/light/libxl_create.c               |   8 +-
 tools/libs/light/libxl_dm.c                   |  19 ++
 tools/libs/light/libxl_netbsd.c               |   2 +-
 tools/libs/light/libxl_qmp.c                  |   2 +-
 tools/libs/light/libxl_uuid.c                 |   4 +-
 tools/libs/stat/xenstat_netbsd.c              |  11 -
 tools/libs/store/xs.c                         |   4 +
 tools/ocaml/libs/eventchn/xeneventchn_stubs.c |   1 -
 tools/xenpaging/xenpaging.c                   |   5 +-
 tools/xenpmd/xenpmd.c                         |   4 +
 tools/xentrace/xentrace.c                     |   2 +-
 xen/tools/symbols.c                           |   4 +-
 33 files changed, 508 insertions(+), 250 deletions(-)
 create mode 100644 tools/hotplug/NetBSD/locking.sh
 delete mode 100644 tools/include/xen-sys/NetBSD/evtchn.h
 delete mode 100644 tools/include/xen-sys/NetBSD/privcmd.h
 create mode 100644 tools/libs/gnttab/netbsd.c

-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:35:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:35:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52378.91555 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshE-0001oQ-Ny; Mon, 14 Dec 2020 18:35:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52378.91555; Mon, 14 Dec 2020 18:35:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshE-0001o8-Fc; Mon, 14 Dec 2020 18:35:44 +0000
Received: by outflank-mailman (input) for mailman id 52378;
 Mon, 14 Dec 2020 16:36:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nuho=FS=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1koqq5-0006vN-JV
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:36:45 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1bf79bc0-6229-4603-89c1-3e683c759290;
 Mon, 14 Dec 2020 16:36:34 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 0BEGaXcQ012841;
 Mon, 14 Dec 2020 17:36:33 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 0BEGaXMp002012;
 Mon, 14 Dec 2020 17:36:33 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id 8B14CAAC67; Mon, 14 Dec 2020 17:36:33 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1bf79bc0-6229-4603-89c1-3e683c759290
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>
Subject: [PATCH 13/24] Don't assume tv_sec is an unsigned long (for NetBSD)
Date: Mon, 14 Dec 2020 17:36:12 +0100
Message-Id: <20201214163623.2127-14-bouyer@netbsd.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201214163623.2127-1-bouyer@netbsd.org>
References: <20201214163623.2127-1-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Mon, 14 Dec 2020 17:36:33 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

---
 tools/libs/light/libxl_create.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index 321a13e519..44691010bc 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -496,7 +496,7 @@ int libxl__domain_build(libxl__gc *gc,
         vments[2] = "image/ostype";
         vments[3] = "hvm";
         vments[4] = "start_time";
-        vments[5] = GCSPRINTF("%lu.%02d", start_time.tv_sec,(int)start_time.tv_usec/10000);
+        vments[5] = GCSPRINTF("%jd.%02d", (intmax_t)start_time.tv_sec,(int)start_time.tv_usec/10000);
 
         localents = libxl__calloc(gc, 13, sizeof(char *));
         i = 0;
@@ -535,7 +535,7 @@ int libxl__domain_build(libxl__gc *gc,
         vments[i++] = "image/kernel";
         vments[i++] = (char *) state->pv_kernel.path;
         vments[i++] = "start_time";
-        vments[i++] = GCSPRINTF("%lu.%02d", start_time.tv_sec,(int)start_time.tv_usec/10000);
+        vments[i++] = GCSPRINTF("%jd.%02d", (intmax_t)start_time.tv_sec,(int)start_time.tv_usec/10000);
         if (state->pv_ramdisk.path) {
             vments[i++] = "image/ramdisk";
             vments[i++] = (char *) state->pv_ramdisk.path;
@@ -1502,7 +1502,7 @@ static void domcreate_stream_done(libxl__egc *egc,
         vments[2] = "image/ostype";
         vments[3] = "hvm";
         vments[4] = "start_time";
-        vments[5] = GCSPRINTF("%lu.%02d", start_time.tv_sec,(int)start_time.tv_usec/10000);
+        vments[5] = GCSPRINTF("%jd.%02d", (intmax_t)start_time.tv_sec,(int)start_time.tv_usec/10000);
         break;
     case LIBXL_DOMAIN_TYPE_PV:
         vments = libxl__calloc(gc, 11, sizeof(char *));
@@ -1512,7 +1512,7 @@ static void domcreate_stream_done(libxl__egc *egc,
         vments[i++] = "image/kernel";
         vments[i++] = (char *) state->pv_kernel.path;
         vments[i++] = "start_time";
-        vments[i++] = GCSPRINTF("%lu.%02d", start_time.tv_sec,(int)start_time.tv_usec/10000);
+        vments[i++] = GCSPRINTF("%jd.%02d", (intmax_t)start_time.tv_sec,(int)start_time.tv_usec/10000);
         if (state->pv_ramdisk.path) {
             vments[i++] = "image/ramdisk";
             vments[i++] = (char *) state->pv_ramdisk.path;
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:35:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:35:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52392.91638 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshI-0001xQ-Mb; Mon, 14 Dec 2020 18:35:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52392.91638; Mon, 14 Dec 2020 18:35:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshI-0001wh-Ak; Mon, 14 Dec 2020 18:35:48 +0000
Received: by outflank-mailman (input) for mailman id 52392;
 Mon, 14 Dec 2020 16:37:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nuho=FS=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1koqqe-0006vN-JC
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:37:20 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3ee0e64c-6f61-4b85-982b-7d787e7a9922;
 Mon, 14 Dec 2020 16:36:34 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 0BEGaXek012603;
 Mon, 14 Dec 2020 17:36:33 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 0BEGaXpP018804;
 Mon, 14 Dec 2020 17:36:33 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id 0C78EAAC66; Mon, 14 Dec 2020 17:36:32 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3ee0e64c-6f61-4b85-982b-7d787e7a9922
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>
Subject: [PATCH 05/24] Introduce locking functions for block device setup on NetBSD
Date: Mon, 14 Dec 2020 17:36:04 +0100
Message-Id: <20201214163623.2127-6-bouyer@netbsd.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201214163623.2127-1-bouyer@netbsd.org>
References: <20201214163623.2127-1-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Mon, 14 Dec 2020 17:36:33 +0100 (CET)
X-Spam-Score: 3.083 (***) BAYES_00,HEADER_FROM_DIFFERENT_DOMAINS,MEDICAL,SPF_HELO_SOFTFAIL,SPF_NONE
X-Spam-Status: Yes, hits=3.083 required=3
X-Spam-Report: Content analysis details:   (3.1 points, 3.0 required)
                pts rule name              description
               --- ---------              -----------
               -1.9 BAYES_00               BODY: Bayes spam probability is 0 to 1%
                                           [score: 0.0000]
                0.2 HEADER_FROM_DIFFERENT_DOMAINS From and EnvelopeFrom 2nd level
                                           mail domains are different
                0.0 SPF_NONE               SPF: sender does not publish an SPF Record
                0.7 SPF_HELO_SOFTFAIL      SPF: HELO does not match SPF record (softfail)
                4.0 MEDICAL                Medical or commercial database
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

---
 tools/hotplug/NetBSD/Makefile   |  1 +
 tools/hotplug/NetBSD/block      |  5 ++-
 tools/hotplug/NetBSD/locking.sh | 72 +++++++++++++++++++++++++++++++++
 3 files changed, 77 insertions(+), 1 deletion(-)
 create mode 100644 tools/hotplug/NetBSD/locking.sh

diff --git a/tools/hotplug/NetBSD/Makefile b/tools/hotplug/NetBSD/Makefile
index 6926885ab8..114b223207 100644
--- a/tools/hotplug/NetBSD/Makefile
+++ b/tools/hotplug/NetBSD/Makefile
@@ -3,6 +3,7 @@ include $(XEN_ROOT)/tools/Rules.mk
 
 # Xen script dir and scripts to go there.
 XEN_SCRIPTS =
+XEN_SCRIPTS += locking.sh
 XEN_SCRIPTS += block
 XEN_SCRIPTS += vif-bridge
 XEN_SCRIPTS += vif-ip
diff --git a/tools/hotplug/NetBSD/block b/tools/hotplug/NetBSD/block
index 32c20b6c89..23c8e38ebf 100644
--- a/tools/hotplug/NetBSD/block
+++ b/tools/hotplug/NetBSD/block
@@ -6,6 +6,7 @@
 
 DIR=$(dirname "$0")
 . "${DIR}/hotplugpath.sh"
+. "${DIR}/locking.sh"
 
 PATH=${bindir}:${sbindir}:${LIBEXEC_BIN}:/bin:/usr/bin:/sbin:/usr/sbin
 export PATH
@@ -62,6 +63,7 @@ case $xstatus in
 			available_disks="$available_disks $disk"
 			eval $disk=free
 		done
+		claim_lock block
 		# Mark the used vnd(4) devices as ``used''.
 		for disk in `sysctl hw.disknames`; do
 			case $disk in
@@ -77,6 +79,7 @@ case $xstatus in
 				break	
 			fi
 		done
+		release_lock block
 		if [ x$device = x ] ; then
 			error "no available vnd device"
 		fi
@@ -86,7 +89,7 @@ case $xstatus in
 		device=$xparams
 		;;
 	esac
-	physical_device=$(stat -f '%r' "$device")
+	physical_device=$(stat -L -f '%r' "$device")
 	xenstore-write $xpath/physical-device $physical_device
 	xenstore-write $xpath/hotplug-status connected
 	exit 0
diff --git a/tools/hotplug/NetBSD/locking.sh b/tools/hotplug/NetBSD/locking.sh
new file mode 100644
index 0000000000..88257f62b7
--- /dev/null
+++ b/tools/hotplug/NetBSD/locking.sh
@@ -0,0 +1,72 @@
+#!/bin/sh
+#
+# Copyright (c) 2016, Christoph Badura.  All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in the
+#    documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE AUTHOR(S) ``AS IS'' AND ANY EXPRESS
+# OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+# DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR(S) BE LIABLE FOR ANY DIRECT,
+# INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+# SUCH DAMAGE.
+#
+
+LOCK_BASEDIR="$XEN_LOCK_DIR/xen-hotplug"
+
+_lockfd=9
+_have_lock=0	# lock not taken yet.
+
+SHLOCK="shlock ${_shlock_debug-}"
+
+_lock_set_vars() {
+	_lockfile="$LOCK_BASEDIR/$1.lock"
+	_lockfifo="$LOCK_BASEDIR/$1.fifo"
+}
+
+_lock_init() {
+	mkdir -p "$LOCK_BASEDIR" 2>/dev/null || true
+	mkfifo $_lockfifo 2>/dev/null || true
+}
+
+#
+# use a named pipe as condition variable
+# opening for read-only blocks when there's no writer.
+# opening for read-write never blocks but unblocks any waiting readers.
+# 
+_lock_wait_cv() {
+	eval "exec $_lockfd<  $_lockfifo ; exec $_lockfd<&-"
+}
+_lock_signal_cv() {
+	eval "exec $_lockfd<> $_lockfifo ; exec $_lockfd<&-"
+}
+
+claim_lock() {
+	_lock_set_vars $1
+	_lock_init
+	until $SHLOCK -f $_lockfile -p $$; do
+		_lock_wait_cv
+	done
+	_have_lock=1
+	# be sure to release the lock when the shell exits
+	trap "release_lock $1" 0 1 2 15
+}
+
+release_lock() {
+	_lock_set_vars $1
+	[ "$_have_lock" != 0 -a -f $_lockfile ] && rm $_lockfile
+	_have_lock=0
+	_lock_signal_cv;
+}
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:35:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:35:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52384.91594 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshG-0001r1-Aj; Mon, 14 Dec 2020 18:35:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52384.91594; Mon, 14 Dec 2020 18:35:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshF-0001qp-Ub; Mon, 14 Dec 2020 18:35:45 +0000
Received: by outflank-mailman (input) for mailman id 52384;
 Mon, 14 Dec 2020 16:37:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nuho=FS=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1koqqK-0006vN-Ii
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:37:00 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 850d1873-3bc1-4c17-8819-5aba59513026;
 Mon, 14 Dec 2020 16:36:34 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 0BEGaXNY025686;
 Mon, 14 Dec 2020 17:36:33 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 0BEGaXTu001450;
 Mon, 14 Dec 2020 17:36:33 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id 59050AAC67; Mon, 14 Dec 2020 17:36:33 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 850d1873-3bc1-4c17-8819-5aba59513026
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>
Subject: [PATCH 10/24] Make libs/evtchn build on NetBSD
Date: Mon, 14 Dec 2020 17:36:09 +0100
Message-Id: <20201214163623.2127-11-bouyer@netbsd.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201214163623.2127-1-bouyer@netbsd.org>
References: <20201214163623.2127-1-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Mon, 14 Dec 2020 17:36:33 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

---
 tools/libs/evtchn/netbsd.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/tools/libs/evtchn/netbsd.c b/tools/libs/evtchn/netbsd.c
index 8b8545d2f9..6d4ce28011 100644
--- a/tools/libs/evtchn/netbsd.c
+++ b/tools/libs/evtchn/netbsd.c
@@ -25,10 +25,10 @@
 
 #include <sys/ioctl.h>
 
-#include <xen/sys/evtchn.h>
-
 #include "private.h"
 
+#include <xen/xenio3.h>
+
 #define EVTCHN_DEV_NAME  "/dev/xenevt"
 
 int osdep_evtchn_open(xenevtchn_handle *xce)
@@ -131,7 +131,7 @@ xenevtchn_port_or_error_t xenevtchn_pending(xenevtchn_handle *xce)
     int fd = xce->fd;
     evtchn_port_t port;
 
-    if ( read_exact(fd, (char *)&port, sizeof(port)) == -1 )
+    if ( read(fd, (char *)&port, sizeof(port)) == -1 )
         return -1;
 
     return port;
@@ -140,7 +140,7 @@ xenevtchn_port_or_error_t xenevtchn_pending(xenevtchn_handle *xce)
 int xenevtchn_unmask(xenevtchn_handle *xce, evtchn_port_t port)
 {
     int fd = xce->fd;
-    return write_exact(fd, (char *)&port, sizeof(port));
+    return write(fd, (char *)&port, sizeof(port));
 }
 
 /*
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:35:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:35:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52388.91618 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshH-0001tF-DQ; Mon, 14 Dec 2020 18:35:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52388.91618; Mon, 14 Dec 2020 18:35:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshG-0001sh-Rh; Mon, 14 Dec 2020 18:35:46 +0000
Received: by outflank-mailman (input) for mailman id 52388;
 Mon, 14 Dec 2020 16:37:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nuho=FS=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1koqqU-0006vN-JD
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:37:10 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id de156a27-9ec4-4675-91d3-84c974aff5bf;
 Mon, 14 Dec 2020 16:36:34 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 0BEGaWYu006863;
 Mon, 14 Dec 2020 17:36:32 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 0BEGaWeD021269;
 Mon, 14 Dec 2020 17:36:32 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id B9906AAC67; Mon, 14 Dec 2020 17:36:32 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: de156a27-9ec4-4675-91d3-84c974aff5bf
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>
Subject: [PATCH 02/24] NetBSD doesn't need xenbackendd with xl toolstack
Date: Mon, 14 Dec 2020 17:36:01 +0100
Message-Id: <20201214163623.2127-3-bouyer@netbsd.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201214163623.2127-1-bouyer@netbsd.org>
References: <20201214163623.2127-1-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Mon, 14 Dec 2020 17:36:33 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

---
 tools/Makefile | 1 -
 1 file changed, 1 deletion(-)

diff --git a/tools/Makefile b/tools/Makefile
index ed71474421..757a560be0 100644
--- a/tools/Makefile
+++ b/tools/Makefile
@@ -18,7 +18,6 @@ SUBDIRS-$(CONFIG_X86) += firmware
 SUBDIRS-y += console
 SUBDIRS-y += xenmon
 SUBDIRS-y += xentop
-SUBDIRS-$(CONFIG_NetBSD) += xenbackendd
 SUBDIRS-y += libfsimage
 SUBDIRS-$(CONFIG_Linux) += vchan
 
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:35:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:35:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52386.91604 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshG-0001rz-NQ; Mon, 14 Dec 2020 18:35:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52386.91604; Mon, 14 Dec 2020 18:35:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshG-0001rW-Cw; Mon, 14 Dec 2020 18:35:46 +0000
Received: by outflank-mailman (input) for mailman id 52386;
 Mon, 14 Dec 2020 16:37:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nuho=FS=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1koqqP-0006vN-If
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:37:05 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 67e0c9a6-fb09-457e-b95d-83f6c8056612;
 Mon, 14 Dec 2020 16:36:34 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 0BEGaXKv022303;
 Mon, 14 Dec 2020 17:36:33 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 0BEGaXZt022112;
 Mon, 14 Dec 2020 17:36:33 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id E245AAAC68; Mon, 14 Dec 2020 17:36:32 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 67e0c9a6-fb09-457e-b95d-83f6c8056612
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>
Subject: [PATCH 04/24] Make xg_main.c build on NetBSD
Date: Mon, 14 Dec 2020 17:36:03 +0100
Message-Id: <20201214163623.2127-5-bouyer@netbsd.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201214163623.2127-1-bouyer@netbsd.org>
References: <20201214163623.2127-1-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Mon, 14 Dec 2020 17:36:33 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

---
 tools/debugger/gdbsx/xg/xg_main.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/tools/debugger/gdbsx/xg/xg_main.c b/tools/debugger/gdbsx/xg/xg_main.c
index a4e8653168..fa2741ccf8 100644
--- a/tools/debugger/gdbsx/xg/xg_main.c
+++ b/tools/debugger/gdbsx/xg/xg_main.c
@@ -49,7 +49,11 @@
 #include "xg_public.h"
 #include <xen/version.h>
 #include <xen/domctl.h>
+#ifdef __NetBSD__
+#include <xen/xenio.h>
+#else
 #include <xen/sys/privcmd.h>
+#endif
 #include <xen/foreign/x86_32.h>
 #include <xen/foreign/x86_64.h>
 
@@ -126,12 +130,19 @@ xg_init()
     int flags, saved_errno;
 
     XGTRC("E\n");
+#ifdef __NetBSD__
+    if ((_dom0_fd=open("/kern/xen/privcmd", O_RDWR)) == -1) {
+        perror("Failed to open /kern/xen/privcmd\n");
+        return -1;
+    }
+#else
     if ((_dom0_fd=open("/dev/xen/privcmd", O_RDWR)) == -1) {
         if ((_dom0_fd=open("/proc/xen/privcmd", O_RDWR)) == -1) {
             perror("Failed to open /dev/xen/privcmd or /proc/xen/privcmd\n");
             return -1;
         }
     }
+#endif
     /* Although we return the file handle as the 'xc handle' the API
      * does not specify / guarentee that this integer is in fact
      * a file handle. Thus we must take responsiblity to ensure
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:35:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:35:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52395.91668 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshL-00022k-05; Mon, 14 Dec 2020 18:35:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52395.91668; Mon, 14 Dec 2020 18:35:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshK-00021l-5H; Mon, 14 Dec 2020 18:35:50 +0000
Received: by outflank-mailman (input) for mailman id 52395;
 Mon, 14 Dec 2020 16:39:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nuho=FS=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1koqqj-0006vN-JB
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:37:25 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3cc2c6c1-de36-4084-bf41-baca4af23928;
 Mon, 14 Dec 2020 16:36:34 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 0BEGaXZL001596;
 Mon, 14 Dec 2020 17:36:33 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 0BEGaX6L007339;
 Mon, 14 Dec 2020 17:36:33 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id ADC85AAC68; Mon, 14 Dec 2020 17:36:33 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3cc2c6c1-de36-4084-bf41-baca4af23928
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>
Subject: [PATCH 15/24] Make libs/light build on NetBSD
Date: Mon, 14 Dec 2020 17:36:14 +0100
Message-Id: <20201214163623.2127-16-bouyer@netbsd.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201214163623.2127-1-bouyer@netbsd.org>
References: <20201214163623.2127-1-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Mon, 14 Dec 2020 17:36:33 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

---
 tools/libs/light/libxl_dm.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 5948ace60d..c93bdf2cc9 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -3659,6 +3659,14 @@ static int kill_device_model_uid_child(libxl__destroy_devicemodel_state *ddms,
 
     LOGD(DEBUG, domid, "DM reaper: calling setresuid(%d, %d, 0)",
          reaper_uid, dm_kill_uid);
+#ifdef __NetBSD__
+    r = setuid(dm_kill_uid);
+    if (r) {
+        LOGED(ERROR, domid, "setuid to %d", dm_kill_uid);
+        rc = rc ?: ERROR_FAIL;
+        goto out;
+    }
+#else /* __NetBSD__ */
     r = setresuid(reaper_uid, dm_kill_uid, 0);
     if (r) {
         LOGED(ERROR, domid, "setresuid to (%d, %d, 0)",
@@ -3666,6 +3674,7 @@ static int kill_device_model_uid_child(libxl__destroy_devicemodel_state *ddms,
         rc = rc ?: ERROR_FAIL;
         goto out;
     }
+#endif /* __NetBSD__ */
 
     /*
      * And kill everyone but me.
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:35:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:35:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52394.91658 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshJ-0001zy-U4; Mon, 14 Dec 2020 18:35:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52394.91658; Mon, 14 Dec 2020 18:35:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshJ-0001yw-8y; Mon, 14 Dec 2020 18:35:49 +0000
Received: by outflank-mailman (input) for mailman id 52394;
 Mon, 14 Dec 2020 16:39:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nuho=FS=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1koqrD-0006vN-KY
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:37:55 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cb74707c-f3c6-477f-b60f-5f9713a046bb;
 Mon, 14 Dec 2020 16:36:35 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 0BEGaY9l012882;
 Mon, 14 Dec 2020 17:36:34 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 0BEGaYu8020075;
 Mon, 14 Dec 2020 17:36:34 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id 11BAFAAC66; Mon, 14 Dec 2020 17:36:34 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb74707c-f3c6-477f-b60f-5f9713a046bb
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>
Subject: [PATCH 20/24] If FILENAME_MAX is defined, use it instead of arbitrary value (fix format-truncation errors with GCC >= 7)
Date: Mon, 14 Dec 2020 17:36:19 +0100
Message-Id: <20201214163623.2127-21-bouyer@netbsd.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201214163623.2127-1-bouyer@netbsd.org>
References: <20201214163623.2127-1-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Mon, 14 Dec 2020 17:36:34 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

---
 tools/xenpmd/xenpmd.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/tools/xenpmd/xenpmd.c b/tools/xenpmd/xenpmd.c
index 12b82cf43e..cfd22e64e3 100644
--- a/tools/xenpmd/xenpmd.c
+++ b/tools/xenpmd/xenpmd.c
@@ -101,7 +101,11 @@ FILE *get_next_battery_file(DIR *battery_dir,
 {
     FILE *file = 0;
     struct dirent *dir_entries;
+#ifdef FILENAME_MAX
+    char file_name[FILENAME_MAX];
+#else
     char file_name[284];
+#endif
     int ret;
     
     do 
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:35:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:35:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52380.91569 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshF-0001pC-84; Mon, 14 Dec 2020 18:35:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52380.91569; Mon, 14 Dec 2020 18:35:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshE-0001ox-U4; Mon, 14 Dec 2020 18:35:44 +0000
Received: by outflank-mailman (input) for mailman id 52380;
 Mon, 14 Dec 2020 16:36:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nuho=FS=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1koqqA-0006vN-IE
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:36:50 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0b64aa07-f16c-437c-8ee1-cbd63850eb72;
 Mon, 14 Dec 2020 16:36:34 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 0BEGaW2U004851;
 Mon, 14 Dec 2020 17:36:32 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 0BEGaWEe015623;
 Mon, 14 Dec 2020 17:36:32 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id AA212AAC65; Mon, 14 Dec 2020 17:36:32 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0b64aa07-f16c-437c-8ee1-cbd63850eb72
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>
Subject: [PATCH 01/24] Fix lock directory path for NetBSD
Date: Mon, 14 Dec 2020 17:36:00 +0100
Message-Id: <20201214163623.2127-2-bouyer@netbsd.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201214163623.2127-1-bouyer@netbsd.org>
References: <20201214163623.2127-1-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Mon, 14 Dec 2020 17:36:32 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

---
 m4/paths.m4 | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/m4/paths.m4 b/m4/paths.m4
index 89d3bb8312..1c107b1a61 100644
--- a/m4/paths.m4
+++ b/m4/paths.m4
@@ -142,7 +142,7 @@ AC_SUBST(XEN_SCRIPT_DIR)
 
 case "$host_os" in
 *freebsd*) XEN_LOCK_DIR=$localstatedir/lib ;;
-*netbsd*) XEN_LOCK_DIR=$localstatedir/lib ;;
+*netbsd*) XEN_LOCK_DIR=$rundir_path ;;
 *) XEN_LOCK_DIR=$localstatedir/lock ;;
 esac
 AC_SUBST(XEN_LOCK_DIR)
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:35:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:35:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52397.91683 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshM-00027P-D5; Mon, 14 Dec 2020 18:35:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52397.91683; Mon, 14 Dec 2020 18:35:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshL-00026r-Tk; Mon, 14 Dec 2020 18:35:51 +0000
Received: by outflank-mailman (input) for mailman id 52397;
 Mon, 14 Dec 2020 16:39:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nuho=FS=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1koqrr-0006vN-MO
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:38:35 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c384c0fc-48f1-4fa8-8421-2f70ab499674;
 Mon, 14 Dec 2020 16:36:35 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 0BEGaXYK008956;
 Mon, 14 Dec 2020 17:36:33 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 0BEGaXQ1024496;
 Mon, 14 Dec 2020 17:36:33 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id 7F396AAC66; Mon, 14 Dec 2020 17:36:33 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c384c0fc-48f1-4fa8-8421-2f70ab499674
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>
Subject: [PATCH 12/24] Implement gnttab on NetBSD
Date: Mon, 14 Dec 2020 17:36:11 +0100
Message-Id: <20201214163623.2127-13-bouyer@netbsd.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201214163623.2127-1-bouyer@netbsd.org>
References: <20201214163623.2127-1-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Mon, 14 Dec 2020 17:36:33 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

---
 tools/libs/gnttab/Makefile |   2 +-
 tools/libs/gnttab/netbsd.c | 267 +++++++++++++++++++++++++++++++++++++
 2 files changed, 268 insertions(+), 1 deletion(-)
 create mode 100644 tools/libs/gnttab/netbsd.c

diff --git a/tools/libs/gnttab/Makefile b/tools/libs/gnttab/Makefile
index d86c49d243..ae390ce60f 100644
--- a/tools/libs/gnttab/Makefile
+++ b/tools/libs/gnttab/Makefile
@@ -10,7 +10,7 @@ SRCS-GNTSHR            += gntshr_core.c
 SRCS-$(CONFIG_Linux)   += $(SRCS-GNTTAB) $(SRCS-GNTSHR) linux.c
 SRCS-$(CONFIG_MiniOS)  += $(SRCS-GNTTAB) gntshr_unimp.c minios.c
 SRCS-$(CONFIG_FreeBSD) += $(SRCS-GNTTAB) $(SRCS-GNTSHR) freebsd.c
+SRCS-$(CONFIG_NetBSD)  += $(SRCS-GNTTAB) $(SRCS-GNTSHR) netbsd.c
 SRCS-$(CONFIG_SunOS)   += gnttab_unimp.c gntshr_unimp.c
-SRCS-$(CONFIG_NetBSD)  += gnttab_unimp.c gntshr_unimp.c
 
 include $(XEN_ROOT)/tools/libs/libs.mk
diff --git a/tools/libs/gnttab/netbsd.c b/tools/libs/gnttab/netbsd.c
new file mode 100644
index 0000000000..2df7058cd7
--- /dev/null
+++ b/tools/libs/gnttab/netbsd.c
@@ -0,0 +1,267 @@
+/*
+ * Copyright (c) 2007-2008, D G Murray <Derek.Murray@cl.cam.ac.uk>
+ * Copyright (c) 2016-2017, Akshay Jaggi <jaggi@FreeBSD.org>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation;
+ * version 2.1 of the License.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; If not, see <http://www.gnu.org/licenses/>.
+ *
+ * Split out from linux.c
+ */
+
+#include <fcntl.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <string.h>
+
+#include <sys/ioctl.h>
+#include <sys/mman.h>
+
+#include <xen/xen.h>
+#include <xen/xenio.h>
+
+#include "private.h"
+
+#define PAGE_SHIFT           12
+#define PAGE_SIZE            (1UL << PAGE_SHIFT)
+#define PAGE_MASK            (~(PAGE_SIZE-1))
+
+#define DEVXEN "/kern/xen/privcmd"
+
+int osdep_gnttab_open(xengnttab_handle *xgt)
+{
+    int fd = open(DEVXEN, O_RDWR|O_CLOEXEC);
+
+    if ( fd == -1 )
+        return -1;
+    xgt->fd = fd;
+
+    return 0;
+}
+
+int osdep_gnttab_close(xengnttab_handle *xgt)
+{
+    if ( xgt->fd == -1 )
+        return 0;
+
+    return close(xgt->fd);
+}
+
+int osdep_gnttab_set_max_grants(xengnttab_handle *xgt, uint32_t count)
+{
+    return 0;
+}
+
+void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
+                             uint32_t count, int flags, int prot,
+                             uint32_t *domids, uint32_t *refs,
+                             uint32_t notify_offset,
+                             evtchn_port_t notify_port)
+{
+    uint32_t i;
+    int fd = xgt->fd;
+    struct ioctl_gntdev_mmap_grant_ref map;
+    void *addr = NULL;
+    int domids_stride;
+    unsigned int refs_size = count * sizeof(struct ioctl_gntdev_grant_ref);
+    int rv;
+
+    domids_stride = (flags & XENGNTTAB_GRANT_MAP_SINGLE_DOMAIN) ? 0 : 1;
+    map.refs = malloc(refs_size);
+
+    for ( i = 0; i < count; i++ )
+    {
+        map.refs[i].domid = domids[i * domids_stride];
+        map.refs[i].ref = refs[i];
+    }
+
+    map.count = count;
+    addr = mmap(NULL, count * PAGE_SIZE,
+                prot, flags | MAP_ANON | MAP_SHARED, -1, 0);
+
+    if ( addr == MAP_FAILED ) {
+        GTERROR(xgt->logger, "osdep_gnttab_grant_map: mmap failed");
+        free(map.refs);
+        return MAP_FAILED;
+    }
+    map.va = addr;
+
+    map.notify.offset = 0;
+    map.notify.action = 0;
+    if ( notify_offset < PAGE_SIZE * count )
+    {
+        map.notify.offset = notify_offset;
+        map.notify.action |= UNMAP_NOTIFY_CLEAR_BYTE;
+    }
+    if ( notify_port != -1 )
+    {
+        map.notify.event_channel_port = notify_port;
+        map.notify.action |= UNMAP_NOTIFY_SEND_EVENT;
+    }
+
+    rv = ioctl(fd, IOCTL_GNTDEV_MMAP_GRANT_REF, &map);
+    if ( rv )
+    {
+        GTERROR(xgt->logger,
+                "ioctl IOCTL_GNTDEV_MMAP_GRANT_REF failed: %d", rv);
+        munmap(addr, count * PAGE_SIZE);
+        addr = MAP_FAILED;
+    }
+    free(map.refs);
+    return addr;
+}
+
+int osdep_gnttab_unmap(xengnttab_handle *xgt,
+                       void *start_address,
+                       uint32_t count)
+{
+    int rc;
+    if ( start_address == NULL )
+    {
+        errno = EINVAL;
+        return -1;
+    }
+
+    /* Next, unmap the memory. */
+    rc = munmap(start_address, count * PAGE_SIZE);
+
+    return rc;
+}
+
+int osdep_gnttab_grant_copy(xengnttab_handle *xgt,
+                            uint32_t count,
+                            xengnttab_grant_copy_segment_t *segs)
+{
+    errno = ENOSYS;
+    return -1;
+}
+
+int osdep_gntshr_open(xengntshr_handle *xgs)
+{
+
+    int fd = open(DEVXEN, O_RDWR|O_CLOEXEC);
+
+    if ( fd == -1 )
+        return -1;
+    xgs->fd = fd;
+
+    return 0;
+}
+
+int osdep_gntshr_close(xengntshr_handle *xgs)
+{
+    if ( xgs->fd == -1 )
+        return 0;
+
+    return close(xgs->fd);
+}
+
+void *osdep_gntshr_share_pages(xengntshr_handle *xgs,
+                               uint32_t domid, int count,
+                               uint32_t *refs, int writable,
+                               uint32_t notify_offset,
+                               evtchn_port_t notify_port)
+{
+    int err;
+    int fd = xgs->fd;
+    void *area = NULL;
+    struct ioctl_gntdev_alloc_grant_ref alloc;
+
+    alloc.gref_ids = malloc(count * sizeof(uint32_t));
+    if ( alloc.gref_ids == NULL )
+        return NULL;
+    alloc.domid = domid;
+    alloc.flags = writable ? GNTDEV_ALLOC_FLAG_WRITABLE : 0;
+    alloc.count = count;
+    area = mmap(NULL, count * PAGE_SIZE,
+                PROT_READ | PROT_WRITE, MAP_ANON | MAP_SHARED, -1, 0);
+
+    if ( area == MAP_FAILED )
+    {
+        GSERROR(xgs->logger, "osdep_gntshr_share_pages: mmap failed");
+        goto out;
+    }
+    alloc.va = area;
+
+    alloc.notify.offset = 0;
+    alloc.notify.action = 0;
+    if ( notify_offset < PAGE_SIZE * count )
+    {
+        alloc.notify.offset = notify_offset;
+        alloc.notify.action |= UNMAP_NOTIFY_CLEAR_BYTE;
+    }
+    if ( notify_port != -1 )
+    {
+        alloc.notify.event_channel_port = notify_port;
+        alloc.notify.action |= UNMAP_NOTIFY_SEND_EVENT;
+    }
+    err = ioctl(fd, IOCTL_GNTDEV_ALLOC_GRANT_REF, &alloc);
+    if ( err )
+    {
+        GSERROR(xgs->logger, "IOCTL_GNTDEV_ALLOC_GRANT_REF failed");
+        munmap(area, count * PAGE_SIZE);
+        area = MAP_FAILED;
+        goto out;
+    }
+    memcpy(refs, alloc.gref_ids, count * sizeof(uint32_t));
+
+ out:
+    free(alloc.gref_ids);
+    return area;
+}
+
+int osdep_gntshr_unshare(xengntshr_handle *xgs,
+                         void *start_address, uint32_t count)
+{
+    return munmap(start_address, count * PAGE_SIZE);
+}
+
+/*
+ * The functions below are Linux-isms that will likely never be implemented
+ * on NetBSD unless NetBSD also implements something akin to Linux dmabuf.
+ */
+int osdep_gnttab_dmabuf_exp_from_refs(xengnttab_handle *xgt, uint32_t domid,
+                                      uint32_t flags, uint32_t count,
+                                      const uint32_t *refs,
+                                      uint32_t *dmabuf_fd)
+{
+    abort();
+}
+
+int osdep_gnttab_dmabuf_exp_wait_released(xengnttab_handle *xgt,
+                                          uint32_t fd, uint32_t wait_to_ms)
+{
+    abort();
+}
+
+int osdep_gnttab_dmabuf_imp_to_refs(xengnttab_handle *xgt, uint32_t domid,
+                                    uint32_t fd, uint32_t count, uint32_t *refs)
+{
+    abort();
+}
+
+int osdep_gnttab_dmabuf_imp_release(xengnttab_handle *xgt, uint32_t fd)
+{
+    abort();
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:35:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:35:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52398.91696 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshO-0002CM-68; Mon, 14 Dec 2020 18:35:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52398.91696; Mon, 14 Dec 2020 18:35:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshM-00029u-Rf; Mon, 14 Dec 2020 18:35:52 +0000
Received: by outflank-mailman (input) for mailman id 52398;
 Mon, 14 Dec 2020 16:39:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nuho=FS=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1koqr8-0006vN-KF
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:37:50 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a833fa82-3c8c-441c-9cab-68892c90c994;
 Mon, 14 Dec 2020 16:36:35 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 0BEGaYlm002086;
 Mon, 14 Dec 2020 17:36:34 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 0BEGaY9S010884;
 Mon, 14 Dec 2020 17:36:34 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id 354D6AAC66; Mon, 14 Dec 2020 17:36:34 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a833fa82-3c8c-441c-9cab-68892c90c994
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>
Subject: [PATCH 22/24] If PTHREAD_STACK_MIN is not defined, use DEFAULT_THREAD_STACKSIZE
Date: Mon, 14 Dec 2020 17:36:21 +0100
Message-Id: <20201214163623.2127-23-bouyer@netbsd.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201214163623.2127-1-bouyer@netbsd.org>
References: <20201214163623.2127-1-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Mon, 14 Dec 2020 17:36:34 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

---
 tools/libs/store/xs.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/tools/libs/store/xs.c b/tools/libs/store/xs.c
index 4ac73ec317..8e646b98d6 100644
--- a/tools/libs/store/xs.c
+++ b/tools/libs/store/xs.c
@@ -811,9 +811,13 @@ bool xs_watch(struct xs_handle *h, const char *path, const char *token)
 
 #ifdef USE_PTHREAD
 #define DEFAULT_THREAD_STACKSIZE (16 * 1024)
+#ifndef PTHREAD_STACK_MIN
+#define READ_THREAD_STACKSIZE DEFAULT_THREAD_STACKSIZE
+#else
 #define READ_THREAD_STACKSIZE 					\
 	((DEFAULT_THREAD_STACKSIZE < PTHREAD_STACK_MIN) ? 	\
 	PTHREAD_STACK_MIN : DEFAULT_THREAD_STACKSIZE)
+#endif
 
 	/* We dynamically create a reader thread on demand. */
 	mutex_lock(&h->request_mutex);
-- 
2.28.0
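
[Editor's note] The clamp in the patch above can be read in isolation: when PTHREAD_STACK_MIN exists, the reader-thread stack is the larger of it and the 16 KiB default; when it does not (as on NetBSD), the default is used unchanged. A self-contained sketch of that macro logic; `read_thread_stacksize()` is a hypothetical accessor added here for illustration, not a function in xs.c:

```c
#include <limits.h>   /* defines PTHREAD_STACK_MIN on some platforms only */
#include <stddef.h>

#define DEFAULT_THREAD_STACKSIZE (16 * 1024)

#ifndef PTHREAD_STACK_MIN
#define READ_THREAD_STACKSIZE DEFAULT_THREAD_STACKSIZE
#else
#define READ_THREAD_STACKSIZE \
    ((DEFAULT_THREAD_STACKSIZE < PTHREAD_STACK_MIN) ? \
     PTHREAD_STACK_MIN : DEFAULT_THREAD_STACKSIZE)
#endif

/* The stack size the reader thread would be created with: never smaller
 * than the default, and never smaller than the platform minimum if the
 * platform declares one. */
static size_t read_thread_stacksize(void)
{
    return READ_THREAD_STACKSIZE;
}
```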



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:35:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:35:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52374.91542 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshE-0001nG-0E; Mon, 14 Dec 2020 18:35:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52374.91542; Mon, 14 Dec 2020 18:35:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshD-0001n9-SK; Mon, 14 Dec 2020 18:35:43 +0000
Received: by outflank-mailman (input) for mailman id 52374;
 Mon, 14 Dec 2020 16:36:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nuho=FS=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1koqpv-0006vN-Ng
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:36:35 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2def1db3-153e-4a2f-8db9-8c5861d6e445;
 Mon, 14 Dec 2020 16:36:34 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 0BEGaXr1007897;
 Mon, 14 Dec 2020 17:36:33 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 0BEGaXTs001450;
 Mon, 14 Dec 2020 17:36:33 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id 43F21AAC69; Mon, 14 Dec 2020 17:36:33 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2def1db3-153e-4a2f-8db9-8c5861d6e445
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>
Subject: [PATCH 09/24] Use xen/xenio.h on NetBSD
Date: Mon, 14 Dec 2020 17:36:08 +0100
Message-Id: <20201214163623.2127-10-bouyer@netbsd.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201214163623.2127-1-bouyer@netbsd.org>
References: <20201214163623.2127-1-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Mon, 14 Dec 2020 17:36:33 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

---
 tools/libs/call/private.h | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/tools/libs/call/private.h b/tools/libs/call/private.h
index 21f992b37e..96922e03d5 100644
--- a/tools/libs/call/private.h
+++ b/tools/libs/call/private.h
@@ -7,13 +7,19 @@
 #include <xencall.h>
 
 #include <xen/xen.h>
+#ifdef __NetBSD__
+#include <xen/xenio.h>
+#else
 #include <xen/sys/privcmd.h>
+#endif
 
 #ifndef PAGE_SHIFT /* Mini-os, Yukk */
 #define PAGE_SHIFT           12
 #endif
-#ifndef __MINIOS__ /* Yukk */
+#ifndef PAGE_SIZE
 #define PAGE_SIZE            (1UL << PAGE_SHIFT)
+#endif
+#ifndef PAGE_MASK
 #define PAGE_MASK            (~(PAGE_SIZE-1))
 #endif
 
-- 
2.28.0
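
[Editor's note] The three fallback macros guarded by #ifndef in the patch above are the usual page-arithmetic trio. A short sketch of what PAGE_MASK is for, splitting an address into its page base and in-page offset; `page_base`/`page_offset` are illustrative helpers assuming the patch's 4 KiB (PAGE_SHIFT 12) pages, not functions from libxencall:

```c
#include <stdint.h>

/* Same definitions the patch supplies when the headers do not. */
#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

/* Round an address down to the start of its page. */
static uintptr_t page_base(uintptr_t addr)
{
    return addr & PAGE_MASK;
}

/* Offset of an address within its page. */
static uintptr_t page_offset(uintptr_t addr)
{
    return addr & ~PAGE_MASK;
}
```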



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:35:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:35:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52376.91547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshE-0001ni-Aw; Mon, 14 Dec 2020 18:35:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52376.91547; Mon, 14 Dec 2020 18:35:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshE-0001nX-59; Mon, 14 Dec 2020 18:35:44 +0000
Received: by outflank-mailman (input) for mailman id 52376;
 Mon, 14 Dec 2020 16:36:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nuho=FS=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1koqq0-0006vN-Hq
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:36:40 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id af584279-f0a1-4295-b802-776de9f7ff87;
 Mon, 14 Dec 2020 16:36:34 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 0BEGaWHF022880;
 Mon, 14 Dec 2020 17:36:32 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 0BEGaWlB002558;
 Mon, 14 Dec 2020 17:36:32 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id CBC52AAC67; Mon, 14 Dec 2020 17:36:32 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: af584279-f0a1-4295-b802-776de9f7ff87
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>
Subject: [PATCH 03/24] Fix lock directory path for NetBSD
Date: Mon, 14 Dec 2020 17:36:02 +0100
Message-Id: <20201214163623.2127-4-bouyer@netbsd.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201214163623.2127-1-bouyer@netbsd.org>
References: <20201214163623.2127-1-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Mon, 14 Dec 2020 17:36:33 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

---
 tools/configure | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/configure b/tools/configure
index 8a708e9baa..131112c41e 100755
--- a/tools/configure
+++ b/tools/configure
@@ -4030,7 +4030,7 @@ XEN_SCRIPT_DIR=$XEN_CONFIG_DIR/scripts
 
 case "$host_os" in
 *freebsd*) XEN_LOCK_DIR=$localstatedir/lib ;;
-*netbsd*) XEN_LOCK_DIR=$localstatedir/lib ;;
+*netbsd*) XEN_LOCK_DIR=$localstatedir/run ;;
 *) XEN_LOCK_DIR=$localstatedir/lock ;;
 esac
 
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:36:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:36:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52399.91708 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshR-0002I7-AX; Mon, 14 Dec 2020 18:35:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52399.91708; Mon, 14 Dec 2020 18:35:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshO-0002G2-RK; Mon, 14 Dec 2020 18:35:54 +0000
Received: by outflank-mailman (input) for mailman id 52399;
 Mon, 14 Dec 2020 16:39:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nuho=FS=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1koqqy-0006vN-Jn
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:37:40 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 36f209d1-9377-407d-8c57-ce478a8fd182;
 Mon, 14 Dec 2020 16:36:34 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 0BEGaXAO005133;
 Mon, 14 Dec 2020 17:36:33 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 0BEGaXTw001450;
 Mon, 14 Dec 2020 17:36:33 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id 6A40CAAC65; Mon, 14 Dec 2020 17:36:33 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 36f209d1-9377-407d-8c57-ce478a8fd182
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>
Subject: [PATCH 11/24] Implement foreignmemory on NetBSD
Date: Mon, 14 Dec 2020 17:36:10 +0100
Message-Id: <20201214163623.2127-12-bouyer@netbsd.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201214163623.2127-1-bouyer@netbsd.org>
References: <20201214163623.2127-1-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Mon, 14 Dec 2020 17:36:33 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

---
 tools/libs/foreignmemory/Makefile  |  2 +-
 tools/libs/foreignmemory/netbsd.c  | 76 ++++++++++++++++++++++++++----
 tools/libs/foreignmemory/private.h | 10 +++-
 3 files changed, 75 insertions(+), 13 deletions(-)

diff --git a/tools/libs/foreignmemory/Makefile b/tools/libs/foreignmemory/Makefile
index 13850f7988..f191cdbed0 100644
--- a/tools/libs/foreignmemory/Makefile
+++ b/tools/libs/foreignmemory/Makefile
@@ -8,7 +8,7 @@ SRCS-y                 += core.c
 SRCS-$(CONFIG_Linux)   += linux.c
 SRCS-$(CONFIG_FreeBSD) += freebsd.c
 SRCS-$(CONFIG_SunOS)   += compat.c solaris.c
-SRCS-$(CONFIG_NetBSD)  += compat.c netbsd.c
+SRCS-$(CONFIG_NetBSD)  += netbsd.c
 SRCS-$(CONFIG_MiniOS)  += minios.c
 
 include $(XEN_ROOT)/tools/libs/libs.mk
diff --git a/tools/libs/foreignmemory/netbsd.c b/tools/libs/foreignmemory/netbsd.c
index 54a418ebd6..6d740ec2a3 100644
--- a/tools/libs/foreignmemory/netbsd.c
+++ b/tools/libs/foreignmemory/netbsd.c
@@ -19,7 +19,9 @@
 
 #include <unistd.h>
 #include <fcntl.h>
+#include <errno.h>
 #include <sys/mman.h>
+#include <sys/ioctl.h>
 
 #include "private.h"
 
@@ -66,15 +68,17 @@ int osdep_xenforeignmemory_close(xenforeignmemory_handle *fmem)
     return close(fd);
 }
 
-void *osdep_map_foreign_batch(xenforeignmem_handle *fmem, uint32_t dom,
-                              void *addr, int prot, int flags,
-                              xen_pfn_t *arr, int num)
+void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
+                                 uint32_t dom, void *addr,
+                                 int prot, int flags, size_t num,
+                                 const xen_pfn_t arr[/*num*/], int err[/*num*/])
+
 {
     int fd = fmem->fd;
-    privcmd_mmapbatch_t ioctlx;
-    addr = mmap(addr, num*XC_PAGE_SIZE, prot, flags | MAP_ANON | MAP_SHARED, -1, 0);
+    privcmd_mmapbatch_v2_t ioctlx;
+    addr = mmap(addr, num*PAGE_SIZE, prot, flags | MAP_ANON | MAP_SHARED, -1, 0);
     if ( addr == MAP_FAILED ) {
-        PERROR("osdep_map_foreign_batch: mmap failed");
+        PERROR("osdep_xenforeignmemory_map: mmap failed");
         return NULL;
     }
 
@@ -82,11 +86,12 @@ void *osdep_map_foreign_batch(xenforeignmem_handle *fmem, uint32_t dom,
     ioctlx.dom=dom;
     ioctlx.addr=(unsigned long)addr;
     ioctlx.arr=arr;
-    if ( ioctl(fd, IOCTL_PRIVCMD_MMAPBATCH, &ioctlx) < 0 )
+    ioctlx.err=err;
+    if ( ioctl(fd, IOCTL_PRIVCMD_MMAPBATCH_V2, &ioctlx) < 0 )
     {
         int saved_errno = errno;
-        PERROR("osdep_map_foreign_batch: ioctl failed");
-        (void)munmap(addr, num*XC_PAGE_SIZE);
+        PERROR("osdep_xenforeignmemory_map: ioctl failed");
+        (void)munmap(addr, num*PAGE_SIZE);
         errno = saved_errno;
         return NULL;
     }
@@ -97,7 +102,58 @@ void *osdep_map_foreign_batch(xenforeignmem_handle *fmem, uint32_t dom,
 int osdep_xenforeignmemory_unmap(xenforeignmemory_handle *fmem,
                                  void *addr, size_t num)
 {
-    return munmap(addr, num*XC_PAGE_SIZE);
+    return munmap(addr, num*PAGE_SIZE);
+}
+
+int osdep_xenforeignmemory_restrict(xenforeignmemory_handle *fmem,
+                                    domid_t domid)
+{
+    return 0;
+}
+
+int osdep_xenforeignmemory_unmap_resource(
+    xenforeignmemory_handle *fmem, xenforeignmemory_resource_handle *fres)
+{
+    return fres ? munmap(fres->addr, fres->nr_frames << PAGE_SHIFT) : 0;
+}
+
+int osdep_xenforeignmemory_map_resource(
+    xenforeignmemory_handle *fmem, xenforeignmemory_resource_handle *fres)
+{
+    privcmd_mmap_resource_t mr = {
+        .dom = fres->domid,
+        .type = fres->type,
+        .id = fres->id,
+        .idx = fres->frame,
+        .num = fres->nr_frames,
+    };
+    int rc;
+
+    fres->addr = mmap(fres->addr, fres->nr_frames << PAGE_SHIFT,
+                      fres->prot, fres->flags | MAP_ANON | MAP_SHARED, -1, 0);
+    if ( fres->addr == MAP_FAILED )
+        return -1;
+
+    mr.addr = (uintptr_t)fres->addr;
+
+    rc = ioctl(fmem->fd, IOCTL_PRIVCMD_MMAP_RESOURCE, &mr);
+    if ( rc )
+    {
+        int saved_errno;
+
+        if ( errno != fmem->unimpl_errno && errno != EOPNOTSUPP )
+            PERROR("ioctl failed");
+        else
+            errno = EOPNOTSUPP;
+
+        saved_errno = errno;
+        (void)osdep_xenforeignmemory_unmap_resource(fmem, fres);
+        errno = saved_errno;
+
+        return -1;
+    }
+
+    return 0;
 }
 
 /*
diff --git a/tools/libs/foreignmemory/private.h b/tools/libs/foreignmemory/private.h
index 8f1bf081ed..abeceb8720 100644
--- a/tools/libs/foreignmemory/private.h
+++ b/tools/libs/foreignmemory/private.h
@@ -8,7 +8,12 @@
 #include <xentoolcore_internal.h>
 
 #include <xen/xen.h>
+
+#ifdef __NetBSD__
+#include <xen/xenio.h>
+#else
 #include <xen/sys/privcmd.h>
+#endif
 
 #ifndef PAGE_SHIFT /* Mini-os, Yukk */
 #define PAGE_SHIFT           12
@@ -38,7 +44,7 @@ int osdep_xenforeignmemory_unmap(xenforeignmemory_handle *fmem,
 
 #if defined(__NetBSD__) || defined(__sun__)
 /* Strictly compat for those two only only */
-void *compat_mapforeign_batch(xenforeignmem_handle *fmem, uint32_t dom,
+void *osdep_map_foreign_batch(xenforeignmemory_handle *fmem, uint32_t dom,
                               void *addr, int prot, int flags,
                               xen_pfn_t *arr, int num);
 #endif
@@ -54,7 +60,7 @@ struct xenforeignmemory_resource_handle {
     int flags;
 };
 
-#ifndef __linux__
+#if !defined(__linux__) && !defined(__NetBSD__)
 static inline int osdep_xenforeignmemory_restrict(xenforeignmemory_handle *fmem,
                                                   domid_t domid)
 {
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:36:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:36:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52400.91724 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshV-0002V6-Ux; Mon, 14 Dec 2020 18:36:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52400.91724; Mon, 14 Dec 2020 18:36:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshS-0002Qo-Ab; Mon, 14 Dec 2020 18:35:58 +0000
Received: by outflank-mailman (input) for mailman id 52400;
 Mon, 14 Dec 2020 16:39:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nuho=FS=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1koqqo-0006vN-JO
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:37:30 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 518c5fc0-b439-4fb1-9fa8-41c5df3b2e58;
 Mon, 14 Dec 2020 16:36:34 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 0BEGaXl0021417;
 Mon, 14 Dec 2020 17:36:33 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 0BEGaX6N007339;
 Mon, 14 Dec 2020 17:36:33 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id D0DE8AAC66; Mon, 14 Dec 2020 17:36:33 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 518c5fc0-b439-4fb1-9fa8-41c5df3b2e58
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>
Subject: [PATCH 17/24] Make libs/light build on NetBSD
Date: Mon, 14 Dec 2020 17:36:16 +0100
Message-Id: <20201214163623.2127-18-bouyer@netbsd.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201214163623.2127-1-bouyer@netbsd.org>
References: <20201214163623.2127-1-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Mon, 14 Dec 2020 17:36:33 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

---
 tools/libs/light/libxl_uuid.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/libs/light/libxl_uuid.c b/tools/libs/light/libxl_uuid.c
index dadb79bad8..a8ee5f253e 100644
--- a/tools/libs/light/libxl_uuid.c
+++ b/tools/libs/light/libxl_uuid.c
@@ -82,7 +82,7 @@ void libxl_uuid_generate(libxl_uuid *uuid)
     uuid_enc_be(uuid->uuid, &nat_uuid);
 }
 
-#ifdef __FreeBSD__
+#if defined(__FreeBSD__) || defined(__NetBSD__)
 int libxl_uuid_from_string(libxl_uuid *uuid, const char *in)
 {
     uint32_t status;
@@ -120,7 +120,7 @@ void libxl_uuid_clear(libxl_uuid *uuid)
     memset(&uuid->uuid, 0, sizeof(uuid->uuid));
 }
 
-#ifdef __FreeBSD__
+#if defined(__FreeBSD__) || defined(__NetBSD__)
 int libxl_uuid_compare(const libxl_uuid *uuid1, const libxl_uuid *uuid2)
 {
     uuid_t nat_uuid1, nat_uuid2;
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:36:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:36:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52401.91735 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshY-0002fc-Q7; Mon, 14 Dec 2020 18:36:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52401.91735; Mon, 14 Dec 2020 18:36:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshW-0002by-C9; Mon, 14 Dec 2020 18:36:02 +0000
Received: by outflank-mailman (input) for mailman id 52401;
 Mon, 14 Dec 2020 16:39:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nuho=FS=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1koqrN-0006vN-Kv
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:38:05 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c42ed7b-9363-41a8-b7e8-ed1062ae1223;
 Mon, 14 Dec 2020 16:36:34 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 0BEGaYbe026385;
 Mon, 14 Dec 2020 17:36:34 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 0BEGaYbT014229;
 Mon, 14 Dec 2020 17:36:34 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id 0101BAAC66; Mon, 14 Dec 2020 17:36:34 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c42ed7b-9363-41a8-b7e8-ed1062ae1223
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>
Subject: [PATCH 19/24] errno may not be a global R/W variable, use a local variable instead (fix build on NetBSD)
Date: Mon, 14 Dec 2020 17:36:18 +0100
Message-Id: <20201214163623.2127-20-bouyer@netbsd.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201214163623.2127-1-bouyer@netbsd.org>
References: <20201214163623.2127-1-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Mon, 14 Dec 2020 17:36:34 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

---
 tools/xenpaging/xenpaging.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/tools/xenpaging/xenpaging.c b/tools/xenpaging/xenpaging.c
index 33098046c2..39c8c83b4b 100644
--- a/tools/xenpaging/xenpaging.c
+++ b/tools/xenpaging/xenpaging.c
@@ -180,10 +180,11 @@ static int xenpaging_get_tot_pages(struct xenpaging *paging)
 static void *init_page(void)
 {
     void *buffer;
+    int rc;
 
     /* Allocated page memory */
-    errno = posix_memalign(&buffer, XC_PAGE_SIZE, XC_PAGE_SIZE);
-    if ( errno != 0 )
+    rc = posix_memalign(&buffer, XC_PAGE_SIZE, XC_PAGE_SIZE);
+    if ( rc != 0 )
         return NULL;
 
     /* Lock buffer in memory so it can't be paged out */
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:36:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:36:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52402.91747 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshb-0002oV-Lg; Mon, 14 Dec 2020 18:36:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52402.91747; Mon, 14 Dec 2020 18:36:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshZ-0002lg-Jq; Mon, 14 Dec 2020 18:36:05 +0000
Received: by outflank-mailman (input) for mailman id 52402;
 Mon, 14 Dec 2020 16:39:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nuho=FS=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1koqrh-0006vN-Lq
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:38:25 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5f338a86-e757-4555-9dbd-e383e782b2e1;
 Mon, 14 Dec 2020 16:36:35 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 0BEGaXW1000190;
 Mon, 14 Dec 2020 17:36:33 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 0BEGaWlF002558;
 Mon, 14 Dec 2020 17:36:33 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id 9F8C5AAC67; Mon, 14 Dec 2020 17:36:33 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f338a86-e757-4555-9dbd-e383e782b2e1
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>
Subject: [PATCH 14/24] Pass bridge name to qemu and set XEN_DOMAIN_ID
Date: Mon, 14 Dec 2020 17:36:13 +0100
Message-Id: <20201214163623.2127-15-bouyer@netbsd.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201214163623.2127-1-bouyer@netbsd.org>
References: <20201214163623.2127-1-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Mon, 14 Dec 2020 17:36:34 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

Pass the bridge name to qemu.
When starting qemu, set an environment variable XEN_DOMAIN_ID,
to be used by qemu helper scripts.

---
 tools/libs/light/libxl_dm.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 3da83259c0..5948ace60d 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -761,6 +761,10 @@ static int libxl__build_device_model_args_old(libxl__gc *gc,
         int nr_set_cpus = 0;
         char *s;
 
+        static char buf[12];
+        snprintf(buf, sizeof(buf), "%d", domid);
+        flexarray_append_pair(dm_envs, "XEN_DOMAIN_ID", buf);
+
         if (b_info->kernel) {
             LOGD(ERROR, domid, "HVM direct kernel boot is not supported by "
                  "qemu-xen-traditional");
@@ -1547,8 +1551,10 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
                 flexarray_append(dm_args, "-netdev");
                 flexarray_append(dm_args,
                                  GCSPRINTF("type=tap,id=net%d,ifname=%s,"
+                                          "br=%s,"
                                            "script=%s,downscript=%s",
                                            nics[i].devid, ifname,
+                                          nics[i].bridge,
                                            libxl_tapif_script(gc),
                                            libxl_tapif_script(gc)));
 
@@ -1825,6 +1831,10 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     flexarray_append(dm_args, GCSPRINTF("%"PRId64, ram_size));
 
     if (b_info->type == LIBXL_DOMAIN_TYPE_HVM) {
+        static char buf[12];
+        snprintf(buf, sizeof(buf), "%d", guest_domid);
+        flexarray_append_pair(dm_envs, "XEN_DOMAIN_ID", buf);
+
         if (b_info->u.hvm.hdtype == LIBXL_HDTYPE_AHCI)
             flexarray_append_pair(dm_args, "-device", "ahci,id=ahci0");
         for (i = 0; i < num_disks; i++) {
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:36:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:36:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52403.91760 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshe-0002vN-2f; Mon, 14 Dec 2020 18:36:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52403.91760; Mon, 14 Dec 2020 18:36:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshc-0002tE-8S; Mon, 14 Dec 2020 18:36:08 +0000
Received: by outflank-mailman (input) for mailman id 52403;
 Mon, 14 Dec 2020 16:39:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nuho=FS=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1koqrm-0006vN-Lw
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:38:30 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aa988afb-11e3-4515-8642-5030b4b9fb4a;
 Mon, 14 Dec 2020 16:36:35 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 0BEGaYF2021664;
 Mon, 14 Dec 2020 17:36:34 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 0BEGaY2G005438;
 Mon, 14 Dec 2020 17:36:34 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id 555F5AAC66; Mon, 14 Dec 2020 17:36:34 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aa988afb-11e3-4515-8642-5030b4b9fb4a
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>
Subject: [PATCH 24/24] Fix error: array subscript has type 'char' [-Werror=char-subscripts]
Date: Mon, 14 Dec 2020 17:36:23 +0100
Message-Id: <20201214163623.2127-25-bouyer@netbsd.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201214163623.2127-1-bouyer@netbsd.org>
References: <20201214163623.2127-1-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Mon, 14 Dec 2020 17:36:34 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

---
 tools/libs/light/libxl_qmp.c | 2 +-
 tools/xentrace/xentrace.c    | 2 +-
 xen/tools/symbols.c          | 4 ++--
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/tools/libs/light/libxl_qmp.c b/tools/libs/light/libxl_qmp.c
index c394000ea9..9b638e6f54 100644
--- a/tools/libs/light/libxl_qmp.c
+++ b/tools/libs/light/libxl_qmp.c
@@ -1249,7 +1249,7 @@ static int qmp_error_class_to_libxl_error_code(libxl__gc *gc,
                 se++;
                 continue;
             }
-            if (tolower(*s) != tolower(*se))
+            if (tolower((unsigned char)*s) != tolower((unsigned char)*se))
                 break;
             s++, se++;
         }
diff --git a/tools/xentrace/xentrace.c b/tools/xentrace/xentrace.c
index 4b50b8a53e..a8903ebf46 100644
--- a/tools/xentrace/xentrace.c
+++ b/tools/xentrace/xentrace.c
@@ -957,7 +957,7 @@ static int parse_cpumask_range(const char *mask_str, xc_cpumap_t map)
 {
     unsigned int a, b;
     int nmaskbits;
-    char c;
+    unsigned char c;
     int in_range;
     const char *s;
 
diff --git a/xen/tools/symbols.c b/xen/tools/symbols.c
index 9f9e2c9900..0b12452616 100644
--- a/xen/tools/symbols.c
+++ b/xen/tools/symbols.c
@@ -173,11 +173,11 @@ static int read_symbol(FILE *in, struct sym_entry *s)
 	/* include the type field in the symbol name, so that it gets
 	 * compressed together */
 	s->len = strlen(str) + 1;
-	if (islower(stype) && filename)
+	if (islower((unsigned char)stype) && filename)
 		s->len += strlen(filename) + 1;
 	s->sym = malloc(s->len + 1);
 	sym = SYMBOL_NAME(s);
-	if (islower(stype) && filename) {
+	if (islower((unsigned char)stype) && filename) {
 		sym = stpcpy(sym, filename);
 		*sym++ = '#';
 	}
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:36:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:36:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52415.91773 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshg-00031u-VX; Mon, 14 Dec 2020 18:36:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52415.91773; Mon, 14 Dec 2020 18:36:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshe-0002zv-VH; Mon, 14 Dec 2020 18:36:10 +0000
Received: by outflank-mailman (input) for mailman id 52415;
 Mon, 14 Dec 2020 16:39:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nuho=FS=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1koqr3-0006vN-K2
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:37:45 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5a4bbe2a-34ba-49bf-b439-89a8eb3a5f4a;
 Mon, 14 Dec 2020 16:36:34 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 0BEGaXws020700;
 Mon, 14 Dec 2020 17:36:33 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 0BEGaXU0001450;
 Mon, 14 Dec 2020 17:36:33 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id BE55BAAC66; Mon, 14 Dec 2020 17:36:33 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a4bbe2a-34ba-49bf-b439-89a8eb3a5f4a
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>
Subject: [PATCH 16/24] Switch NetBSD to QEMU_XEN (!traditional)
Date: Mon, 14 Dec 2020 17:36:15 +0100
Message-Id: <20201214163623.2127-17-bouyer@netbsd.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201214163623.2127-1-bouyer@netbsd.org>
References: <20201214163623.2127-1-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Mon, 14 Dec 2020 17:36:34 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

---
 tools/libs/light/libxl_netbsd.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/libs/light/libxl_netbsd.c b/tools/libs/light/libxl_netbsd.c
index e66a393d7f..31334f932c 100644
--- a/tools/libs/light/libxl_netbsd.c
+++ b/tools/libs/light/libxl_netbsd.c
@@ -110,7 +110,7 @@ out:
 
 libxl_device_model_version libxl__default_device_model(libxl__gc *gc)
 {
-    return LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL;
+    return LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN;
 }
 
 int libxl__pci_numdevs(libxl__gc *gc)
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:36:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:36:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52418.91786 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshj-0003A9-LI; Mon, 14 Dec 2020 18:36:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52418.91786; Mon, 14 Dec 2020 18:36:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshi-000381-15; Mon, 14 Dec 2020 18:36:14 +0000
Received: by outflank-mailman (input) for mailman id 52418;
 Mon, 14 Dec 2020 16:39:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nuho=FS=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1koqrI-0006vN-Kd
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:38:00 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 66502db2-5ab0-4a6e-a577-5b48f32b7801;
 Mon, 14 Dec 2020 16:36:35 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 0BEGaXBI008620;
 Mon, 14 Dec 2020 17:36:33 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 0BEGaXb6020782;
 Mon, 14 Dec 2020 17:36:33 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id E6B77AAC66; Mon, 14 Dec 2020 17:36:33 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66502db2-5ab0-4a6e-a577-5b48f32b7801
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>
Subject: [PATCH 18/24] xeneventchn_stubs.c doesn't need xen/sys/evtchn.h (NetBSD fix)
Date: Mon, 14 Dec 2020 17:36:17 +0100
Message-Id: <20201214163623.2127-19-bouyer@netbsd.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201214163623.2127-1-bouyer@netbsd.org>
References: <20201214163623.2127-1-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Mon, 14 Dec 2020 17:36:34 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

---
 tools/ocaml/libs/eventchn/xeneventchn_stubs.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/tools/ocaml/libs/eventchn/xeneventchn_stubs.c b/tools/ocaml/libs/eventchn/xeneventchn_stubs.c
index ba40078d09..f889a7a2e4 100644
--- a/tools/ocaml/libs/eventchn/xeneventchn_stubs.c
+++ b/tools/ocaml/libs/eventchn/xeneventchn_stubs.c
@@ -22,7 +22,6 @@
 #include <stdint.h>
 #include <sys/ioctl.h>
 #include <xen/xen.h>
-#include <xen/sys/evtchn.h>
 #include <xenevtchn.h>
 
 #define CAML_NAME_SPACE
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:36:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:36:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52419.91798 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshl-0003F6-KP; Mon, 14 Dec 2020 18:36:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52419.91798; Mon, 14 Dec 2020 18:36:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshk-0003Di-75; Mon, 14 Dec 2020 18:36:16 +0000
Received: by outflank-mailman (input) for mailman id 52419;
 Mon, 14 Dec 2020 16:39:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nuho=FS=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1koqqt-0006vN-JV
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:37:35 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 43a2499d-851f-42d2-b0b1-c68e681c84f0;
 Mon, 14 Dec 2020 16:36:34 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 0BEGaXng010884;
 Mon, 14 Dec 2020 17:36:33 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 0BEGaXBX017131;
 Mon, 14 Dec 2020 17:36:33 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id 1CC31AAC68; Mon, 14 Dec 2020 17:36:33 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43a2499d-851f-42d2-b0b1-c68e681c84f0
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>
Subject: [PATCH 06/24] Handle the case where vifname is not present in xenstore.
Date: Mon, 14 Dec 2020 17:36:05 +0100
Message-Id: <20201214163623.2127-7-bouyer@netbsd.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201214163623.2127-1-bouyer@netbsd.org>
References: <20201214163623.2127-1-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Mon, 14 Dec 2020 17:36:33 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

---
 tools/hotplug/NetBSD/vif-bridge | 5 ++++-
 tools/hotplug/NetBSD/vif-ip     | 5 ++++-
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/tools/hotplug/NetBSD/vif-bridge b/tools/hotplug/NetBSD/vif-bridge
index b58e922601..cd428b5936 100644
--- a/tools/hotplug/NetBSD/vif-bridge
+++ b/tools/hotplug/NetBSD/vif-bridge
@@ -23,7 +23,10 @@ case $xstatus in
 	xbridge=$(xenstore-read "$xpath/bridge")
 	xfid=$(xenstore-read "$xpath/frontend-id")
 	xhandle=$(xenstore-read "$xpath/handle")
-	iface=$(xenstore-read "$xpath/vifname")
+	iface=$(xenstore-read "$xpath/vifname") || true
+	if [ "x${iface}" = "x" ] ; then
+		iface=xvif$xfid.$xhandle
+	fi
 	ifconfig $iface up
 	brconfig $xbridge add $iface
 	xenstore-write $xpath/hotplug-status connected
diff --git a/tools/hotplug/NetBSD/vif-ip b/tools/hotplug/NetBSD/vif-ip
index 83cbfe20e2..944f50f881 100644
--- a/tools/hotplug/NetBSD/vif-ip
+++ b/tools/hotplug/NetBSD/vif-ip
@@ -24,6 +24,10 @@ case $xstatus in
 	xfid=$(xenstore-read "$xpath/frontend-id")
 	xhandle=$(xenstore-read "$xpath/handle")
 	iface=$(xenstore-read "$xpath/vifname")
+	iface=$(xenstore-read "$xpath/vifname") || true
+	if [ x${iface} = "x" ] ; then
+		iface=xvif$xfid.$xhandle
+	fi
 	ifconfig $iface $xip up
 	xenstore-write $xpath/hotplug-status connected
 	exit 0
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:36:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:36:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52420.91809 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshn-0003Ke-N4; Mon, 14 Dec 2020 18:36:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52420.91809; Mon, 14 Dec 2020 18:36:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshm-0003Ie-04; Mon, 14 Dec 2020 18:36:18 +0000
Received: by outflank-mailman (input) for mailman id 52420;
 Mon, 14 Dec 2020 16:39:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nuho=FS=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1koqrc-0006vN-Lp
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:38:20 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8bcad583-616e-4839-92f7-87b1ad9ad5fa;
 Mon, 14 Dec 2020 16:36:35 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 0BEGaYZu023577;
 Mon, 14 Dec 2020 17:36:34 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 0BEGaYxu022690;
 Mon, 14 Dec 2020 17:36:34 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id 22A26AAC66; Mon, 14 Dec 2020 17:36:34 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8bcad583-616e-4839-92f7-87b1ad9ad5fa
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>
Subject: [PATCH 21/24] Fix unused functions/variables error
Date: Mon, 14 Dec 2020 17:36:20 +0100
Message-Id: <20201214163623.2127-22-bouyer@netbsd.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201214163623.2127-1-bouyer@netbsd.org>
References: <20201214163623.2127-1-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Mon, 14 Dec 2020 17:36:34 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

---
 tools/libs/stat/xenstat_netbsd.c | 11 -----------
 1 file changed, 11 deletions(-)

diff --git a/tools/libs/stat/xenstat_netbsd.c b/tools/libs/stat/xenstat_netbsd.c
index 6e9d6aee10..64eda9e1ae 100644
--- a/tools/libs/stat/xenstat_netbsd.c
+++ b/tools/libs/stat/xenstat_netbsd.c
@@ -55,11 +55,6 @@ get_priv_data(xenstat_handle *handle)
 }
 
 /* Expected format of /proc/net/dev */
-static const char PROCNETDEV_HEADER[] =
-    "Inter-|   Receive                                                |"
-    "  Transmit\n"
-    " face |bytes    packets errs drop fifo frame compressed multicast|"
-    "bytes    packets errs drop fifo colls carrier compressed\n";
 
 /* Collect information about networks */
 int xenstat_collect_networks(xenstat_node * node)
@@ -76,12 +71,6 @@ void xenstat_uninit_networks(xenstat_handle * handle)
 		fclose(priv->procnetdev);
 }
 
-static int read_attributes_vbd(const char *vbd_directory, const char *what, char *ret, int cap)
-{
-	/* XXX implement */
-	return 0;
-}
-
 /* Collect information about VBDs */
 int xenstat_collect_vbds(xenstat_node * node)
 {
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:36:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:36:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52421.91821 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshq-0003RX-3D; Mon, 14 Dec 2020 18:36:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52421.91821; Mon, 14 Dec 2020 18:36:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kosho-0003QQ-Fp; Mon, 14 Dec 2020 18:36:20 +0000
Received: by outflank-mailman (input) for mailman id 52421;
 Mon, 14 Dec 2020 16:39:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nuho=FS=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1koqrX-0006vN-LY
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 16:38:15 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fcbac439-46ba-41f8-8fb9-3b1a9d40e0c3;
 Mon, 14 Dec 2020 16:36:35 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 0BEGaYjL018346;
 Mon, 14 Dec 2020 17:36:34 +0100 (CET)
Received: from borneo.soc.lip6.fr (borneo [132.227.103.47])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 0BEGaYxR010544;
 Mon, 14 Dec 2020 17:36:34 +0100 (MET)
Received: by borneo.soc.lip6.fr (Postfix, from userid 373)
 id 44A4CAAC66; Mon, 14 Dec 2020 17:36:34 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fcbac439-46ba-41f8-8fb9-3b1a9d40e0c3
From: Manuel Bouyer <bouyer@netbsd.org>
To: xen-devel@lists.xenproject.org
Cc: Manuel Bouyer <bouyer@netbsd.org>
Subject: [PATCH 23/24] Use xen/xenio.h on NetBSD
Date: Mon, 14 Dec 2020 17:36:22 +0100
Message-Id: <20201214163623.2127-24-bouyer@netbsd.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201214163623.2127-1-bouyer@netbsd.org>
References: <20201214163623.2127-1-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Mon, 14 Dec 2020 17:36:34 +0100 (CET)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

---
 tools/libs/ctrl/xc_private.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/tools/libs/ctrl/xc_private.h b/tools/libs/ctrl/xc_private.h
index f0b5f83ac8..68e388f488 100644
--- a/tools/libs/ctrl/xc_private.h
+++ b/tools/libs/ctrl/xc_private.h
@@ -39,7 +39,11 @@
 #include <xenforeignmemory.h>
 #include <xendevicemodel.h>
 
+#ifdef __NetBSD__
+#include <xen/xenio.h>
+#else
 #include <xen/sys/privcmd.h>
+#endif
 
 #include <xen-tools/libs.h>
 
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:36:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:36:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52459.91834 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshs-0003Xa-Hp; Mon, 14 Dec 2020 18:36:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52459.91834; Mon, 14 Dec 2020 18:36:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshq-0003VZ-Bq; Mon, 14 Dec 2020 18:36:22 +0000
Received: by outflank-mailman (input) for mailman id 52459;
 Mon, 14 Dec 2020 17:05:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=O6EJ=FS=redhat.com=slp@srs-us1.protection.inumbo.net>)
 id 1korI3-0001vO-4j
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 17:05:39 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 6889cd7e-a680-41df-9e47-d6ff21a639d1;
 Mon, 14 Dec 2020 17:05:38 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-490-gNP_Li97NzyoS-gY09zc6A-1; Mon, 14 Dec 2020 12:05:35 -0500
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com
 [10.5.11.15])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id F0CE11074640;
 Mon, 14 Dec 2020 17:05:33 +0000 (UTC)
Received: from toolbox.redhat.com (ovpn-112-231.rdu2.redhat.com
 [10.10.112.231])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 02D775D6AB;
 Mon, 14 Dec 2020 17:05:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6889cd7e-a680-41df-9e47-d6ff21a639d1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607965537;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=Ofxiw979EA5hQ5retJsvejrszYOaT4OA8K7Ptz7ohEw=;
	b=bJv4+7bcsGHslvrR2SpcOQv+eqAXGpjE4cZ2NqR9DF2TQ4UQ1YOkVPy/uhg0tBjHNF2oFq
	zwUoQ++i3hOQ8yFmhrjtldlxwKT3vbEzbzi+wQNLuMYGKbWvYK7IYPixOYrANgCQU/uXch
	7vTFl0exfGBqiXEi/V2YrnkXVZYJTXE=
X-MC-Unique: gNP_Li97NzyoS-gY09zc6A-1
From: Sergio Lopez <slp@redhat.com>
To: qemu-devel@nongnu.org
Cc: Kevin Wolf <kwolf@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	qemu-block@nongnu.org,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Eric Blake <eblake@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Max Reitz <mreitz@redhat.com>,
	Sergio Lopez <slp@redhat.com>
Subject: [PATCH v2 0/4] nbd/server: Quiesce coroutines on context switch
Date: Mon, 14 Dec 2020 18:05:15 +0100
Message-Id: <20201214170519.223781-1-slp@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=slp@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

This series allows the NBD server to properly switch between AIO contexts,
having quiesced recv_coroutine and send_coroutine before doing the transition.

We need this because we send back devices running in IO Thread owned contexts
to the main context when stopping the data plane, something that can happen
multiple times during the lifetime of a VM (usually during the boot sequence or
on a reboot), and we drag the NBD server of the corresponding export with it.

While there, also fix a problem caused by a cross-dependency between
closing the export's client connections and draining the block
layer. The visible effect of this problem was QEMU getting hung when
the guest requests a power off while there's an active NBD client.

v2:
 - Replace "virtio-blk: Acquire context while switching them on
   dataplane start" with "block: Honor blk_set_aio_context() context
   requirements" (Kevin Wolf)
 - Add "block: Avoid processing BDS twice in
   bdrv_set_aio_context_ignore()"
 - Add "block: Close block exports in two steps"
 - Rename nbd_read_eof() to nbd_server_read_eof() (Eric Blake)
 - Fix double space and typo in comment. (Eric Blake)

Sergio Lopez (4):
  block: Honor blk_set_aio_context() context requirements
  block: Avoid processing BDS twice in bdrv_set_aio_context_ignore()
  nbd/server: Quiesce coroutines on context switch
  block: Close block exports in two steps

 block.c                         |  27 ++++++-
 block/export/export.c           |  10 +--
 blockdev-nbd.c                  |   2 +-
 hw/block/dataplane/virtio-blk.c |   4 ++
 hw/block/dataplane/xen-block.c  |   7 +-
 hw/scsi/virtio-scsi.c           |   6 +-
 include/block/export.h          |   4 +-
 nbd/server.c                    | 120 ++++++++++++++++++++++++++++----
 qemu-nbd.c                      |   2 +-
 stubs/blk-exp-close-all.c       |   2 +-
 10 files changed, 156 insertions(+), 28 deletions(-)

-- 
2.26.2




From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:36:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:36:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52461.91844 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kosht-0003d8-S2; Mon, 14 Dec 2020 18:36:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52461.91844; Mon, 14 Dec 2020 18:36:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshs-0003bh-Fx; Mon, 14 Dec 2020 18:36:24 +0000
Received: by outflank-mailman (input) for mailman id 52461;
 Mon, 14 Dec 2020 17:05:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=O6EJ=FS=redhat.com=slp@srs-us1.protection.inumbo.net>)
 id 1korI8-0001vO-1f
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 17:05:44 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 76bc8c39-8d8b-419e-a334-5f0d38c62da0;
 Mon, 14 Dec 2020 17:05:42 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-580-s8NqP3znNFanUNxldhXiHw-1; Mon, 14 Dec 2020 12:05:38 -0500
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com
 [10.5.11.15])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 548C3B8122;
 Mon, 14 Dec 2020 17:05:37 +0000 (UTC)
Received: from toolbox.redhat.com (ovpn-112-231.rdu2.redhat.com
 [10.10.112.231])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 50A455D6AB;
 Mon, 14 Dec 2020 17:05:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 76bc8c39-8d8b-419e-a334-5f0d38c62da0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607965542;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+Eya5ky4YkXegn79a2KsK7rMFFfw80cpB9O81T1hhcs=;
	b=D8nfXyPen1o7Wnf4yvy+Rd7RUlMUAVWGCfe9xBxy2E3ybg/9xGw3zfDtsfFHNtmg3enqjU
	3Oz+MUgAS/NFfi74odgd45oEm9ZN17vaI04tpuPxGAeWM+ymkXjSas/nvkknBq3I2RazsO
	s2Qv27XHqZrZg4S5HIMoRwzGsPEBhj0=
X-MC-Unique: s8NqP3znNFanUNxldhXiHw-1
From: Sergio Lopez <slp@redhat.com>
To: qemu-devel@nongnu.org
Cc: Kevin Wolf <kwolf@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	qemu-block@nongnu.org,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Eric Blake <eblake@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Max Reitz <mreitz@redhat.com>,
	Sergio Lopez <slp@redhat.com>
Subject: [PATCH v2 1/4] block: Honor blk_set_aio_context() context requirements
Date: Mon, 14 Dec 2020 18:05:16 +0100
Message-Id: <20201214170519.223781-2-slp@redhat.com>
In-Reply-To: <20201214170519.223781-1-slp@redhat.com>
References: <20201214170519.223781-1-slp@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=slp@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="US-ASCII"

The documentation for bdrv_set_aio_context_ignore() states this:

 * The caller must own the AioContext lock for the old AioContext of bs, but it
 * must not own the AioContext lock for new_context (unless new_context is the
 * same as the current context of bs).

As blk_set_aio_context() makes use of this function, this rule also
applies to it.

Fix all occurrences where this rule wasn't honored.

Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Sergio Lopez <slp@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 4 ++++
 hw/block/dataplane/xen-block.c  | 7 ++++++-
 hw/scsi/virtio-scsi.c           | 6 ++++--
 3 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 37499c5564..e9050c8987 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -172,6 +172,7 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
     VirtIOBlockDataPlane *s = vblk->dataplane;
     BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vblk)));
     VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
+    AioContext *old_context;
     unsigned i;
     unsigned nvqs = s->conf->num_queues;
     Error *local_err = NULL;
@@ -214,7 +215,10 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
     vblk->dataplane_started = true;
     trace_virtio_blk_data_plane_start(s);
 
+    old_context = blk_get_aio_context(s->conf->conf.blk);
+    aio_context_acquire(old_context);
     r = blk_set_aio_context(s->conf->conf.blk, s->ctx, &local_err);
+    aio_context_release(old_context);
     if (r < 0) {
         error_report_err(local_err);
         goto fail_guest_notifiers;
diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
index 71c337c7b7..3675f8deaf 100644
--- a/hw/block/dataplane/xen-block.c
+++ b/hw/block/dataplane/xen-block.c
@@ -725,6 +725,7 @@ void xen_block_dataplane_start(XenBlockDataPlane *dataplane,
 {
     ERRP_GUARD();
     XenDevice *xendev = dataplane->xendev;
+    AioContext *old_context;
     unsigned int ring_size;
     unsigned int i;
 
@@ -808,10 +809,14 @@ void xen_block_dataplane_start(XenBlockDataPlane *dataplane,
         goto stop;
     }
 
-    aio_context_acquire(dataplane->ctx);
+    old_context = blk_get_aio_context(dataplane->blk);
+    aio_context_acquire(old_context);
     /* If other users keep the BlockBackend in the iothread, that's ok */
     blk_set_aio_context(dataplane->blk, dataplane->ctx, NULL);
+    aio_context_release(old_context);
+
     /* Only reason for failure is a NULL channel */
+    aio_context_acquire(dataplane->ctx);
     xen_device_set_event_channel_context(xendev, dataplane->event_channel,
                                          dataplane->ctx, &error_abort);
     aio_context_release(dataplane->ctx);
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 3db9a8aae9..7a347ceac5 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -821,15 +821,17 @@ static void virtio_scsi_hotplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     VirtIODevice *vdev = VIRTIO_DEVICE(hotplug_dev);
     VirtIOSCSI *s = VIRTIO_SCSI(vdev);
     SCSIDevice *sd = SCSI_DEVICE(dev);
+    AioContext *old_context;
     int ret;
 
     if (s->ctx && !s->dataplane_fenced) {
         if (blk_op_is_blocked(sd->conf.blk, BLOCK_OP_TYPE_DATAPLANE, errp)) {
             return;
         }
-        virtio_scsi_acquire(s);
+        old_context = blk_get_aio_context(sd->conf.blk);
+        aio_context_acquire(old_context);
         ret = blk_set_aio_context(sd->conf.blk, s->ctx, errp);
-        virtio_scsi_release(s);
+        aio_context_release(old_context);
         if (ret < 0) {
             return;
         }
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:36:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:36:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52463.91851 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshv-0003he-1M; Mon, 14 Dec 2020 18:36:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52463.91851; Mon, 14 Dec 2020 18:36:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshu-0003g8-5h; Mon, 14 Dec 2020 18:36:26 +0000
Received: by outflank-mailman (input) for mailman id 52463;
 Mon, 14 Dec 2020 17:06:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=O6EJ=FS=redhat.com=slp@srs-us1.protection.inumbo.net>)
 id 1korJ0-0001x2-3B
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 17:06:38 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 9ad1bd6e-18cc-453b-bac9-306d71786958;
 Mon, 14 Dec 2020 17:06:37 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-244-D0CmhjvTOY6pEL_0tz-PPA-1; Mon, 14 Dec 2020 12:06:33 -0500
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com
 [10.5.11.15])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 7C6511012EA3;
 Mon, 14 Dec 2020 17:06:19 +0000 (UTC)
Received: from toolbox.redhat.com (ovpn-112-231.rdu2.redhat.com
 [10.10.112.231])
 by smtp.corp.redhat.com (Postfix) with ESMTP id A9AEB5D6AB;
 Mon, 14 Dec 2020 17:05:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ad1bd6e-18cc-453b-bac9-306d71786958
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607965597;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/WpOYjMEI+uu6Ezu621PcgD7x1p7LtSJUlUJzLyqhGc=;
	b=GDKALjNQdTUuK4lCz8sOK7Ds7EM8Kw3WgME4H6+oOwUCaDYsXjzGjQ6rxn81lQy3+saEZ3
	rfdLTGdkOZMOh7w9JHKG5sSnKtqupk0VUeCHYo5HcJ1tjZ4PE2KE1vlZv+aPVirnLBGGyv
	NuwUIM78Rklg4sjogSAFOTZzTeFjTRI=
X-MC-Unique: D0CmhjvTOY6pEL_0tz-PPA-1
From: Sergio Lopez <slp@redhat.com>
To: qemu-devel@nongnu.org
Cc: Kevin Wolf <kwolf@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	qemu-block@nongnu.org,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Eric Blake <eblake@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Max Reitz <mreitz@redhat.com>,
	Sergio Lopez <slp@redhat.com>
Subject: [PATCH v2 2/4] block: Avoid processing BDS twice in bdrv_set_aio_context_ignore()
Date: Mon, 14 Dec 2020 18:05:17 +0100
Message-Id: <20201214170519.223781-3-slp@redhat.com>
In-Reply-To: <20201214170519.223781-1-slp@redhat.com>
References: <20201214170519.223781-1-slp@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=slp@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="US-ASCII"

While processing the parents of a BDS, one of the parents may process
the child that's doing the tail recursion, which leads to a BDS being
processed twice. This is especially problematic for the aio_notifiers,
as they might attempt to work on both the old and the new AIO
contexts.

To avoid this, add the BDS pointer to the ignore list, and check the
child BDS pointer while iterating over the children.

Signed-off-by: Sergio Lopez <slp@redhat.com>
---
 block.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/block.c b/block.c
index f1cedac362..bc8a66ab6e 100644
--- a/block.c
+++ b/block.c
@@ -6465,12 +6465,17 @@ void bdrv_set_aio_context_ignore(BlockDriverState *bs,
     bdrv_drained_begin(bs);
 
     QLIST_FOREACH(child, &bs->children, next) {
-        if (g_slist_find(*ignore, child)) {
+        if (g_slist_find(*ignore, child) || g_slist_find(*ignore, child->bs)) {
             continue;
         }
         *ignore = g_slist_prepend(*ignore, child);
         bdrv_set_aio_context_ignore(child->bs, new_context, ignore);
     }
+    /*
+     * Add a reference to this BS to the ignore list, so its
+     * parents won't attempt to process it again.
+     */
+    *ignore = g_slist_prepend(*ignore, bs);
     QLIST_FOREACH(child, &bs->parents, next_parent) {
         if (g_slist_find(*ignore, child)) {
             continue;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:36:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:36:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52465.91866 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshw-0003mG-LT; Mon, 14 Dec 2020 18:36:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52465.91866; Mon, 14 Dec 2020 18:36:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshv-0003ks-PK; Mon, 14 Dec 2020 18:36:27 +0000
Received: by outflank-mailman (input) for mailman id 52465;
 Mon, 14 Dec 2020 17:06:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=O6EJ=FS=redhat.com=slp@srs-us1.protection.inumbo.net>)
 id 1korJ6-0001xX-BH
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 17:06:44 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id d7c0dce5-848f-4709-b476-4e994209fe41;
 Mon, 14 Dec 2020 17:06:43 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-499-761DnAutO9qHxmbuLzXzFA-1; Mon, 14 Dec 2020 12:06:39 -0500
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com
 [10.5.11.15])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id A75508AB3B9;
 Mon, 14 Dec 2020 17:06:22 +0000 (UTC)
Received: from toolbox.redhat.com (ovpn-112-231.rdu2.redhat.com
 [10.10.112.231])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 72BDC62A25;
 Mon, 14 Dec 2020 17:06:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d7c0dce5-848f-4709-b476-4e994209fe41
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607965603;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=399Y9fIGtHICB7zQVl2y/q4KwqXDdZ2nv2mnaFw6NX8=;
	b=ZChTdDjNcRvbYd+NUF68br9MV+RSDoVzS55RPFrBdR8jNQHTk+MBVi35kLHmGvR25RIBIv
	1qxJ0eN90i8PlZO+mz2B0OAAi4Ga89Y55VtLcCRvG9J5CPgeIssPk2UZlXyTVRM5YDyZDj
	DJrDhtNGwu/JqgExCOyRsS+UcFbCFiU=
X-MC-Unique: 761DnAutO9qHxmbuLzXzFA-1
From: Sergio Lopez <slp@redhat.com>
To: qemu-devel@nongnu.org
Cc: Kevin Wolf <kwolf@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	qemu-block@nongnu.org,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Eric Blake <eblake@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Max Reitz <mreitz@redhat.com>,
	Sergio Lopez <slp@redhat.com>
Subject: [PATCH v2 3/4] nbd/server: Quiesce coroutines on context switch
Date: Mon, 14 Dec 2020 18:05:18 +0100
Message-Id: <20201214170519.223781-4-slp@redhat.com>
In-Reply-To: <20201214170519.223781-1-slp@redhat.com>
References: <20201214170519.223781-1-slp@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=slp@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="US-ASCII"

When switching between AIO contexts we need to make sure that both
recv_coroutine and send_coroutine are not scheduled to run. Otherwise,
QEMU may crash while attaching the new context with an error like
this one:

aio_co_schedule: Co-routine was already scheduled in 'aio_co_schedule'

To achieve this we need a local implementation of
'qio_channel_readv_all_eof' named 'nbd_read_eof' (a trick 'nbd/client.c'
already uses) that allows us to interrupt the operation and to know
when recv_coroutine is yielding.

With this in place, we delegate detaching the AIO context to the
owning context with a BH ('nbd_aio_detach_bh') scheduled using
'aio_wait_bh_oneshot'. This BH signals that we need to quiesce the
channel by setting 'client->quiescing' to 'true', and either waits for
the coroutine to finish using AIO_WAIT_WHILE or, if it's yielding in
'nbd_read_eof', actively enters the coroutine to interrupt it.

RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1900326
Signed-off-by: Sergio Lopez <slp@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
---
 nbd/server.c | 120 +++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 106 insertions(+), 14 deletions(-)

diff --git a/nbd/server.c b/nbd/server.c
index 613ed2634a..7229f487d2 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -132,6 +132,9 @@ struct NBDClient {
     CoMutex send_lock;
     Coroutine *send_coroutine;
 
+    bool read_yielding;
+    bool quiescing;
+
     QTAILQ_ENTRY(NBDClient) next;
     int nb_requests;
     bool closing;
@@ -1352,14 +1355,60 @@ static coroutine_fn int nbd_negotiate(NBDClient *client, Error **errp)
     return 0;
 }
 
-static int nbd_receive_request(QIOChannel *ioc, NBDRequest *request,
+/* nbd_read_eof
+ * Tries to read @size bytes from client->ioc. This is a local
+ * implementation of qio_channel_readv_all_eof. We have it here because we
+ * need it to be interruptible and to know when the coroutine is yielding.
+ * Returns 1 on success
+ *         0 on eof, when no data was read (errp is not set)
+ *         negative errno on failure (errp is set)
+ */
+static inline int coroutine_fn
+nbd_read_eof(NBDClient *client, void *buffer, size_t size, Error **errp)
+{
+    bool partial = false;
+
+    assert(size);
+    while (size > 0) {
+        struct iovec iov = { .iov_base = buffer, .iov_len = size };
+        ssize_t len;
+
+        len = qio_channel_readv(client->ioc, &iov, 1, errp);
+        if (len == QIO_CHANNEL_ERR_BLOCK) {
+            client->read_yielding = true;
+            qio_channel_yield(client->ioc, G_IO_IN);
+            client->read_yielding = false;
+            if (client->quiescing) {
+                return -EAGAIN;
+            }
+            continue;
+        } else if (len < 0) {
+            return -EIO;
+        } else if (len == 0) {
+            if (partial) {
+                error_setg(errp,
+                           "Unexpected end-of-file before all bytes were read");
+                return -EIO;
+            } else {
+                return 0;
+            }
+        }
+
+        partial = true;
+        size -= len;
+        buffer = (uint8_t *) buffer + len;
+    }
+    return 1;
+}
+
+static int nbd_receive_request(NBDClient *client, NBDRequest *request,
                                Error **errp)
 {
     uint8_t buf[NBD_REQUEST_SIZE];
     uint32_t magic;
     int ret;
 
-    ret = nbd_read(ioc, buf, sizeof(buf), "request", errp);
+    ret = nbd_read_eof(client, buf, sizeof(buf), errp);
     if (ret < 0) {
         return ret;
     }
@@ -1480,11 +1529,37 @@ static void blk_aio_attached(AioContext *ctx, void *opaque)
 
     QTAILQ_FOREACH(client, &exp->clients, next) {
         qio_channel_attach_aio_context(client->ioc, ctx);
+
+        assert(client->recv_coroutine == NULL);
+        assert(client->send_coroutine == NULL);
+
+        if (client->quiescing) {
+            client->quiescing = false;
+            nbd_client_receive_next_request(client);
+        }
+    }
+}
+
+static void nbd_aio_detach_bh(void *opaque)
+{
+    NBDExport *exp = opaque;
+    NBDClient *client;
+
+    QTAILQ_FOREACH(client, &exp->clients, next) {
+        qio_channel_detach_aio_context(client->ioc);
+        client->quiescing = true;
+
         if (client->recv_coroutine) {
-            aio_co_schedule(ctx, client->recv_coroutine);
+            if (client->read_yielding) {
+                qemu_aio_coroutine_enter(exp->common.ctx,
+                                         client->recv_coroutine);
+            } else {
+                AIO_WAIT_WHILE(exp->common.ctx, client->recv_coroutine != NULL);
+            }
         }
+
         if (client->send_coroutine) {
-            aio_co_schedule(ctx, client->send_coroutine);
+            AIO_WAIT_WHILE(exp->common.ctx, client->send_coroutine != NULL);
         }
     }
 }
@@ -1492,13 +1567,10 @@ static void blk_aio_attached(AioContext *ctx, void *opaque)
 static void blk_aio_detach(void *opaque)
 {
     NBDExport *exp = opaque;
-    NBDClient *client;
 
     trace_nbd_blk_aio_detach(exp->name, exp->common.ctx);
 
-    QTAILQ_FOREACH(client, &exp->clients, next) {
-        qio_channel_detach_aio_context(client->ioc);
-    }
+    aio_wait_bh_oneshot(exp->common.ctx, nbd_aio_detach_bh, exp);
 
     exp->common.ctx = NULL;
 }
@@ -2151,20 +2223,23 @@ static int nbd_co_send_bitmap(NBDClient *client, uint64_t handle,
 
 /* nbd_co_receive_request
  * Collect a client request. Return 0 if request looks valid, -EIO to drop
- * connection right away, and any other negative value to report an error to
- * the client (although the caller may still need to disconnect after reporting
- * the error).
+ * connection right away, -EAGAIN to indicate we were interrupted and the
+ * channel should be quiesced, and any other negative value to report an error
+ * to the client (although the caller may still need to disconnect after
+ * reporting the error).
  */
 static int nbd_co_receive_request(NBDRequestData *req, NBDRequest *request,
                                   Error **errp)
 {
     NBDClient *client = req->client;
     int valid_flags;
+    int ret;
 
     g_assert(qemu_in_coroutine());
     assert(client->recv_coroutine == qemu_coroutine_self());
-    if (nbd_receive_request(client->ioc, request, errp) < 0) {
-        return -EIO;
+    ret = nbd_receive_request(client, request, errp);
+    if (ret < 0) {
+        return ret;
     }
 
     trace_nbd_co_receive_request_decode_type(request->handle, request->type,
@@ -2507,6 +2582,17 @@ static coroutine_fn void nbd_trip(void *opaque)
         return;
     }
 
+    if (client->quiescing) {
+        /*
+         * We're switching between AIO contexts. Don't attempt to receive a new
+         * request and kick the main context which may be waiting for us.
+         */
+        nbd_client_put(client);
+        client->recv_coroutine = NULL;
+        aio_wait_kick();
+        return;
+    }
+
     req = nbd_request_get(client);
     ret = nbd_co_receive_request(req, &request, &local_err);
     client->recv_coroutine = NULL;
@@ -2519,6 +2605,11 @@ static coroutine_fn void nbd_trip(void *opaque)
         goto done;
     }
 
+    if (ret == -EAGAIN) {
+        assert(client->quiescing);
+        goto done;
+    }
+
     nbd_client_receive_next_request(client);
     if (ret == -EIO) {
         goto disconnect;
@@ -2565,7 +2656,8 @@ disconnect:
 
 static void nbd_client_receive_next_request(NBDClient *client)
 {
-    if (!client->recv_coroutine && client->nb_requests < MAX_NBD_REQUESTS) {
+    if (!client->recv_coroutine && client->nb_requests < MAX_NBD_REQUESTS &&
+        !client->quiescing) {
         nbd_client_get(client);
         client->recv_coroutine = qemu_coroutine_create(nbd_trip, client);
         aio_co_schedule(client->exp->common.ctx, client->recv_coroutine);
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 18:36:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 18:36:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52467.91878 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshy-0003sS-JE; Mon, 14 Dec 2020 18:36:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52467.91878; Mon, 14 Dec 2020 18:36:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koshx-0003r9-Sd; Mon, 14 Dec 2020 18:36:29 +0000
Received: by outflank-mailman (input) for mailman id 52467;
 Mon, 14 Dec 2020 17:06:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=O6EJ=FS=redhat.com=slp@srs-us1.protection.inumbo.net>)
 id 1korJD-0001y4-RW
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 17:06:51 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 94297bc7-4b2c-40b3-8999-002e654fcffe;
 Mon, 14 Dec 2020 17:06:50 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-557-_8-ouXvpNqK8vTFc4U2Pjg-1; Mon, 14 Dec 2020 12:06:47 -0500
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com
 [10.5.11.15])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 0BAA0800685;
 Mon, 14 Dec 2020 17:06:26 +0000 (UTC)
Received: from toolbox.redhat.com (ovpn-112-231.rdu2.redhat.com
 [10.10.112.231])
 by smtp.corp.redhat.com (Postfix) with ESMTP id F3A39669FC;
 Mon, 14 Dec 2020 17:06:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 94297bc7-4b2c-40b3-8999-002e654fcffe
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1607965610;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=bR0d1JArrAy1qnHujAAvCo8e9rCfTiOxzdxX6U9l434=;
	b=A17hCsS2dLfOAdeLek1rUNWKlSiON6lRiCofKLHx/32gKFAprl3Q7BCsU4qXnAFxpiuCvX
	w+llJLsolyD6t1Ci3p8KhnAtf7i4+RWdwmXlLOvEil/OOOpZ+t2LdrX6n5b1i64KIQjkGS
	se/SAZIwC8lWaQ+osAjiAtu2+3aZ2oM=
X-MC-Unique: _8-ouXvpNqK8vTFc4U2Pjg-1
From: Sergio Lopez <slp@redhat.com>
To: qemu-devel@nongnu.org
Cc: Kevin Wolf <kwolf@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	qemu-block@nongnu.org,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Eric Blake <eblake@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Max Reitz <mreitz@redhat.com>,
	Sergio Lopez <slp@redhat.com>
Subject: [PATCH v2 4/4] block: Close block exports in two steps
Date: Mon, 14 Dec 2020 18:05:19 +0100
Message-Id: <20201214170519.223781-5-slp@redhat.com>
In-Reply-To: <20201214170519.223781-1-slp@redhat.com>
References: <20201214170519.223781-1-slp@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=slp@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="US-ASCII"

There's a cross-dependency between closing the block exports and
draining the block layer. The latter requires that we first close all
the exports' client connections to ensure they won't queue more
requests, but the exports may have coroutines yielding in the block
layer, which implies they can't be fully closed until we drain it.

To break this cross-dependency, this change adds a "bool wait"
argument to blk_exp_close_all() and blk_exp_close_all_type(), so
callers can decide whether they want to wait for the exports to be
fully quiesced, or just return after requesting them to shut down.

Then, in bdrv_close_all() we make two calls: one without waiting, to
close all client connections, and another after draining the block
layer, this time waiting for the exports to be fully quiesced.

RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1900505
Signed-off-by: Sergio Lopez <slp@redhat.com>
---
 block.c                   | 20 +++++++++++++++++++-
 block/export/export.c     | 10 ++++++----
 blockdev-nbd.c            |  2 +-
 include/block/export.h    |  4 ++--
 qemu-nbd.c                |  2 +-
 stubs/blk-exp-close-all.c |  2 +-
 6 files changed, 30 insertions(+), 10 deletions(-)

diff --git a/block.c b/block.c
index bc8a66ab6e..41db70ac07 100644
--- a/block.c
+++ b/block.c
@@ -4472,13 +4472,31 @@ static void bdrv_close(BlockDriverState *bs)
 void bdrv_close_all(void)
 {
     assert(job_next(NULL) == NULL);
-    blk_exp_close_all();
+
+    /*
+     * There's a cross-dependency between closing the block exports and
+     * draining the block layer. The latter requires that we first close
+     * all the exports' client connections to ensure they won't queue more
+     * requests, but the exports may have coroutines yielding in the block
+     * layer, which implies they can't be fully closed until we drain it.
+     *
+     * Make a first call to close all exports' client connections, without
+     * waiting for each export to be fully quiesced.
+     */
+    blk_exp_close_all(false);
 
     /* Drop references from requests still in flight, such as canceled block
      * jobs whose AIO context has not been polled yet */
     bdrv_drain_all();
 
     blk_remove_all_bs();
+
+    /*
+     * Make a second call to shut down the exports, this time waiting for them
+     * to be fully quiesced.
+     */
+    blk_exp_close_all(true);
+
     blockdev_close_all_bdrv_states();
 
     assert(QTAILQ_EMPTY(&all_bdrv_states));
diff --git a/block/export/export.c b/block/export/export.c
index bad6f21b1c..0124ebd9f9 100644
--- a/block/export/export.c
+++ b/block/export/export.c
@@ -280,7 +280,7 @@ static bool blk_exp_has_type(BlockExportType type)
 }
 
 /* type == BLOCK_EXPORT_TYPE__MAX for all types */
-void blk_exp_close_all_type(BlockExportType type)
+void blk_exp_close_all_type(BlockExportType type, bool wait)
 {
     BlockExport *exp, *next;
 
@@ -293,12 +293,14 @@ void blk_exp_close_all_type(BlockExportType type)
         blk_exp_request_shutdown(exp);
     }
 
-    AIO_WAIT_WHILE(NULL, blk_exp_has_type(type));
+    if (wait) {
+        AIO_WAIT_WHILE(NULL, blk_exp_has_type(type));
+    }
 }
 
-void blk_exp_close_all(void)
+void blk_exp_close_all(bool wait)
 {
-    blk_exp_close_all_type(BLOCK_EXPORT_TYPE__MAX);
+    blk_exp_close_all_type(BLOCK_EXPORT_TYPE__MAX, wait);
 }
 
 void qmp_block_export_add(BlockExportOptions *export, Error **errp)
diff --git a/blockdev-nbd.c b/blockdev-nbd.c
index d8443d235b..d71d4da7c2 100644
--- a/blockdev-nbd.c
+++ b/blockdev-nbd.c
@@ -266,7 +266,7 @@ void qmp_nbd_server_stop(Error **errp)
         return;
     }
 
-    blk_exp_close_all_type(BLOCK_EXPORT_TYPE_NBD);
+    blk_exp_close_all_type(BLOCK_EXPORT_TYPE_NBD, true);
 
     nbd_server_free(nbd_server);
     nbd_server = NULL;
diff --git a/include/block/export.h b/include/block/export.h
index 7feb02e10d..71c25928ce 100644
--- a/include/block/export.h
+++ b/include/block/export.h
@@ -83,7 +83,7 @@ BlockExport *blk_exp_find(const char *id);
 void blk_exp_ref(BlockExport *exp);
 void blk_exp_unref(BlockExport *exp);
 void blk_exp_request_shutdown(BlockExport *exp);
-void blk_exp_close_all(void);
-void blk_exp_close_all_type(BlockExportType type);
+void blk_exp_close_all(bool wait);
+void blk_exp_close_all_type(BlockExportType type, bool wait);
 
 #endif
diff --git a/qemu-nbd.c b/qemu-nbd.c
index a7075c5419..928f4466f6 100644
--- a/qemu-nbd.c
+++ b/qemu-nbd.c
@@ -1122,7 +1122,7 @@ int main(int argc, char **argv)
     do {
         main_loop_wait(false);
         if (state == TERMINATE) {
-            blk_exp_close_all();
+            blk_exp_close_all(true);
             state = TERMINATED;
         }
     } while (state != TERMINATED);
diff --git a/stubs/blk-exp-close-all.c b/stubs/blk-exp-close-all.c
index 1c71316763..ecd0ce611f 100644
--- a/stubs/blk-exp-close-all.c
+++ b/stubs/blk-exp-close-all.c
@@ -2,6 +2,6 @@
 #include "block/export.h"
 
 /* Only used in programs that support block exports (libblockdev.fa) */
-void blk_exp_close_all(void)
+void blk_exp_close_all(bool wait)
 {
 }
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 19:05:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 19:05:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52623.91902 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kot9p-00089u-Kt; Mon, 14 Dec 2020 19:05:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52623.91902; Mon, 14 Dec 2020 19:05:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kot9p-00089n-H6; Mon, 14 Dec 2020 19:05:17 +0000
Received: by outflank-mailman (input) for mailman id 52623;
 Mon, 14 Dec 2020 19:05:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WSiX=FS=amazon.com=prvs=61050d9d8=havanur@srs-us1.protection.inumbo.net>)
 id 1kot9o-00089a-B7
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 19:05:16 +0000
Received: from smtp-fw-9103.amazon.com (unknown [207.171.188.200])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 531771dc-ed0b-4804-9dec-e1b215fc8a0d;
 Mon, 14 Dec 2020 19:05:15 +0000 (UTC)
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-1e-c7c08562.us-east-1.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-9103.sea19.amazon.com with ESMTP;
 14 Dec 2020 19:05:07 +0000
Received: from EX13D05EUC004.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan2.iad.amazon.com [10.40.163.34])
 by email-inbound-relay-1e-c7c08562.us-east-1.amazon.com (Postfix) with ESMTPS
 id 71B74240CB5; Mon, 14 Dec 2020 19:05:06 +0000 (UTC)
Received: from EX13D36EUC004.ant.amazon.com (10.43.164.126) by
 EX13D05EUC004.ant.amazon.com (10.43.164.38) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Mon, 14 Dec 2020 19:05:05 +0000
Received: from EX13D36EUC004.ant.amazon.com ([10.43.164.126]) by
 EX13D36EUC004.ant.amazon.com ([10.43.164.126]) with mapi id 15.00.1497.006;
 Mon, 14 Dec 2020 19:05:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 531771dc-ed0b-4804-9dec-e1b215fc8a0d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
  t=1607972715; x=1639508715;
  h=from:to:cc:date:message-id:references:in-reply-to:
   content-id:content-transfer-encoding:mime-version:subject;
  bh=5MmXiqh799VlqIO+guYJDr322v3Amqz5ACpbnNbbumc=;
  b=s7afBT8nyVGraVf51+CjQxMo0k1/a3NMSYshHuHmxHM/qc7fRG/u15Jd
   PkCEWT1vVZ2lZn1F1NqRcYwPjlRrq75UU2b+U8BReqJq49turSuKe5LJ9
   U92qraFJ//DNmVrkKu0oEA6tbM/tmrMWoulAD9WbP6zcBCaNvWvTkDDTU
   0=;
X-IronPort-AV: E=Sophos;i="5.78,420,1599523200"; 
   d="scan'208";a="902983114"
Subject: Re: [XEN PATCH v1 1/1] Invalidate cache for cpus affinitized to the domain
Thread-Topic: [XEN PATCH v1 1/1] Invalidate cache for cpus affinitized to the domain
From: "Shamsundara Havanur, Harsha" <havanur@amazon.com>
To: "jbeulich@suse.com" <jbeulich@suse.com>, "julien@xen.org"
	<julien@xen.org>, "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
CC: "Wieczorkiewicz, Pawel" <wipawel@amazon.de>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"paul@xen.org" <paul@xen.org>
Thread-Index: AQHWz7MJSRGqvamGtEii6iRZ4MXyYqn2TWsAgAAJiICAABlGgIAAVTOAgAAzPQA=
Date: Mon, 14 Dec 2020 19:05:05 +0000
Message-ID: <eef19ecad32ac9379b6535ec2a4b444e78b29058.camel@amazon.com>
References: <cover.1607686878.git.havanur@amazon.com>
	 <aad47c43b7cd7a391492b8be7b881cd37e9764c7.1607686878.git.havanur@amazon.com>
	 <149f7f6e-0ff4-affc-b65d-0f880fa27b13@suse.com>
	 <81b5d64b0a08d217e0ae53606cd1b8afd59283e4.camel@amazon.com>
	 <bf70db2d-cf03-11cb-887e-aa38094b3d5f@xen.org>
	 <607cba7c-15b6-0197-6000-cc823038d320@citrix.com>
In-Reply-To: <607cba7c-15b6-0197-6000-cc823038d320@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.166.16]
Content-Type: text/plain; charset="utf-8"
Content-ID: <E9D2D7CF32CAFC408ACAFB4DD175F725@amazon.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Precedence: Bulk

On Mon, 2020-12-14 at 16:01 +0000, Andrew Cooper wrote:
> On 14/12/2020 10:56, Julien Grall wrote:
> > Hi Harsha,
> > 
> > On 14/12/2020 09:26, Shamsundara Havanur, Harsha wrote:
> > > On Mon, 2020-12-14 at 09:52 +0100, Jan Beulich wrote:
> > > > On 11.12.2020 12:44, Harsha Shamsundara Havanur wrote:
> > > > > A HVM domain flushes cache on all the cpus using
> > > > > `flush_all` macro which uses cpu_online_map, during
> > > > > i) creation of a new domain
> > > > > ii) when device-model op is performed
> > > > > iii) when domain is destructed.
> > > > > 
> > > > > This triggers IPI on all the cpus, thus affecting other
> > > > > domains that are pinned to different pcpus. This patch
> > > > > restricts cache flush to the set of cpus affinitized to
> > > > > the current domain using `domain->dirty_cpumask`.
> > > > 
> > > > But then you need to effect cache flushing when a CPU gets
> > > > taken out of domain->dirty_cpumask. I don't think you/we want
> > > > to do that.
> > > 
> > > If we do not restrict, it could lead to DoS attack, where a
> > > malicious guest could keep writing to MTRR registers or do a
> > > cache flush through DM Op and keep sending IPIs to other
> > > neighboring guests.
> > 
> > I saw Jan already answered about the alleged DoS, so I will just
> > focus on the resolution.
> > 
> > I agree that in the ideal situation we want to limit the impact on
> > the other vCPUs. However, we also need to make sure the cure is not
> > worse than the symptoms.
> 
> And specifically, only a change which is correct. This patch very
> definitely isn't.
> 
> Lines can get cached on other cpus from, e.g. qemu mappings and PV
> backends.
> 
> > The cache flush cannot be restricted in all the pinning situations
> > because pinning doesn't imply the pCPU will be dedicated to a given
> > vCPU or even that the vCPU will stick to the pCPU (we may allow
> > floating on a NUMA socket). Although your setup may offer this
> > guarantee.
> > 
> > My knowledge in this area is quite limited. But below are a few
> > questions that hopefully will help to make a decision.
> > 
> > The first question to answer is: can the flush be restricted in a
> > setup where each vCPU is running on a dedicated pCPU (i.e. a
> > partitioned system)?
> 
> Not really. Lines can become cached even from speculation in the
> directmap.
> 
> If you need to flush the caches (and don't have a virtual mapping to
> issue clflush/clflushopt/clwb over), it must be on all CPUs.

If lines are cached due to aggressive speculation from a different
guest, wouldn't they be invalidated at the speculation boundary, since
it's a wrong speculation? Would they still require to be flushed
explicitly?

-Harsha

> > If the answer is yes, then we should figure out whether using
> > domain->dirty_cpumask would always be correct. For instance, a vCPU
> > may not have run yet, so can we consider the associated pCPU cache
> > would be consistent?
> > 
> > Another line of questioning is what can we do on systems supporting
> > self-snooping? IOW, would it be possible to restrict the flush for
> > all the setups?
> 
> Right - this is the avenue which ought to be investigated. (Working)
> self-snoop ought to remove the need for some of these cache flushes.
> Others look like they're not correct to begin with. Others, such as
> the wbinvd intercepts, absolutely must stay as they are.
> 
> ~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 19:09:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 19:09:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52629.91913 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kotDm-0008UO-5E; Mon, 14 Dec 2020 19:09:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52629.91913; Mon, 14 Dec 2020 19:09:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kotDm-0008UH-1s; Mon, 14 Dec 2020 19:09:22 +0000
Received: by outflank-mailman (input) for mailman id 52629;
 Mon, 14 Dec 2020 19:09:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PveR=FS=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kotDk-0008UC-EY
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 19:09:20 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.13.53]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a5f9ab2e-5f81-495c-ad74-1d5281a5bc69;
 Mon, 14 Dec 2020 19:09:17 +0000 (UTC)
Received: from AM6P192CA0002.EURP192.PROD.OUTLOOK.COM (2603:10a6:209:83::15)
 by DB6PR0801MB1911.eurprd08.prod.outlook.com (2603:10a6:4:74::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.21; Mon, 14 Dec
 2020 19:09:14 +0000
Received: from VE1EUR03FT039.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:83:cafe::a5) by AM6P192CA0002.outlook.office365.com
 (2603:10a6:209:83::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.12 via Frontend
 Transport; Mon, 14 Dec 2020 19:09:14 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT039.mail.protection.outlook.com (10.152.19.196) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12 via Frontend Transport; Mon, 14 Dec 2020 19:09:13 +0000
Received: ("Tessian outbound eeda57fffe7b:v71");
 Mon, 14 Dec 2020 19:09:13 +0000
Received: from 64f83b45dbc7.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 4D23F0E5-2F07-4FF7-848C-703C6BD5EC3F.1; 
 Mon, 14 Dec 2020 19:08:58 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 64f83b45dbc7.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 14 Dec 2020 19:08:58 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DB7PR08MB3611.eurprd08.prod.outlook.com (2603:10a6:10:4d::26) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.17; Mon, 14 Dec
 2020 19:08:55 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::11cb:318b:f0a0:f125]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::11cb:318b:f0a0:f125%5]) with mapi id 15.20.3654.025; Mon, 14 Dec 2020
 19:08:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a5f9ab2e-5f81-495c-ad74-1d5281a5bc69
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rSGFIJzYlh9dzy54T2HBNgmjSVHU+KuivH0e2qBhJxQ=;
 b=9FZj5GpyEoV4zpkgwwrAJOPfR5KXRoCoFxk7f3xnEcFI0eAMswXHGyAujnUp6dt2gVzw5GtErPnRdq2d8x1pzIYR6tiCh8/PRl+7HWF8r1MqIfgExTP76/haiKZdzIYOWigJ04gk8qLspFwiGmj9Q2Q7/j101At3C5VqB23v2sA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 50d3ff5db2d9c6d4
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rSGFIJzYlh9dzy54T2HBNgmjSVHU+KuivH0e2qBhJxQ=;
 b=9FZj5GpyEoV4zpkgwwrAJOPfR5KXRoCoFxk7f3xnEcFI0eAMswXHGyAujnUp6dt2gVzw5GtErPnRdq2d8x1pzIYR6tiCh8/PRl+7HWF8r1MqIfgExTP76/haiKZdzIYOWigJ04gk8qLspFwiGmj9Q2Q7/j101At3C5VqB23v2sA=
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Paul Durrant
	<paul@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 7/8] xen/arm: Add support for SMMUv3 driver
Thread-Topic: [PATCH v3 7/8] xen/arm: Add support for SMMUv3 driver
Thread-Index: AQHWzxYvTK5u+0PVw0yI12PrfGSPTKnx9LwAgAUFtoA=
Date: Mon, 14 Dec 2020 19:08:55 +0000
Message-ID: <1660236F-7BB0-4F3E-8CDD-10AE9282E2A3@arm.com>
References: <cover.1607617848.git.rahul.singh@arm.com>
 <33645b592bc5935a3b28ad576a819d06ed81e8dd.1607617848.git.rahul.singh@arm.com>
 <e26c96cb-245b-6927-c4a7-224c2114df42@xen.org>
In-Reply-To: <e26c96cb-245b-6927-c4a7-224c2114df42@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 3c10d454-3622-467d-7ca1-08d8a063bf07
x-ms-traffictypediagnostic: DB7PR08MB3611:|DB6PR0801MB1911:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DB6PR0801MB19116558D21E0C843EE5511CFCC70@DB6PR0801MB1911.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:1417;OLM:639;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <F2FD3B9CA8B76B458CCB44B11CE14D4F@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3611
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT039.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ba525c9d-106c-4e13-4808-08d8a063b3e8
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Dec 2020 19:09:13.8681
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3c10d454-3622-467d-7ca1-08d8a063bf07
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT039.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1911

Hello Julien,

> On 11 Dec 2020, at 2:25 pm, Julien Grall <julien@xen.org> wrote:
> 
> Hi Rahul,
> 
> On 10/12/2020 16:57, Rahul Singh wrote:
>>  struct arm_smmu_strtab_cfg {
>> @@ -613,8 +847,13 @@ struct arm_smmu_device {
>>  		u64			padding;
>>  	};
>>  -	/* IOMMU core code handle */
>> -	struct iommu_device		iommu;
>> +	/* Need to keep a list of SMMU devices */
>> +	struct list_head		devices;
>> +
>> +	/* Tasklets for handling evts/faults and pci page request IRQs*/
>> +	struct tasklet		evtq_irq_tasklet;
>> +	struct tasklet		priq_irq_tasklet;
>> +	struct tasklet		combined_irq_tasklet;
>>  };
>>    /* SMMU private data for each master */
>> @@ -638,7 +877,6 @@ enum arm_smmu_domain_stage {
>>    struct arm_smmu_domain {
>>  	struct arm_smmu_device		*smmu;
>> -	struct mutex			init_mutex; /* Protects smmu pointer */
> 
> Hmmm... Your commit message says the mutex would be replaced by a spinlock. However, you are dropping the lock. What did I miss?

The Linux code uses the mutex in arm_smmu_attach_dev(), but in Xen this function is called from arm_smmu_assign_dev(), which already holds the spinlock when arm_smmu_attach_dev() is called, so I dropped the mutex to avoid a nested spinlock.
The timing analysis of using a spinlock in place of a mutex, compared to Linux, when attaching a device to the SMMU is still valid.

> 
> [...]
> 
>> @@ -1578,6 +1841,17 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
>>  	struct arm_smmu_device *smmu = smmu_domain->smmu;
>>  	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
>>  	typeof(&arm_lpae_s2_cfg.vtcr) vtcr = &arm_lpae_s2_cfg.vtcr;
>> +	uint64_t reg = READ_SYSREG64(VTCR_EL2);
> 
> Please don't use VTCR_EL2 here. You should be able to infer the parameter from the p2m_ipa_bits.

Ok.

> 
> Also, I still don't see code that will restrict the IPA bits based on what the SMMU supports.

Ok, I will add the code to restrict the IPA bits in the next version.
> 
>> +
>> +	vtcr->tsz	= FIELD_GET(STRTAB_STE_2_VTCR_S2T0SZ, reg);
>> +	vtcr->sl	= FIELD_GET(STRTAB_STE_2_VTCR_S2SL0, reg);
>> +	vtcr->irgn	= FIELD_GET(STRTAB_STE_2_VTCR_S2IR0, reg);
>> +	vtcr->orgn	= FIELD_GET(STRTAB_STE_2_VTCR_S2OR0, reg);
>> +	vtcr->sh	= FIELD_GET(STRTAB_STE_2_VTCR_S2SH0, reg);
>> +	vtcr->tg	= FIELD_GET(STRTAB_STE_2_VTCR_S2TG, reg);
>> +	vtcr->ps	= FIELD_GET(STRTAB_STE_2_VTCR_S2PS, reg);
>> +
>> +	arm_lpae_s2_cfg.vttbr  = page_to_maddr(cfg->domain->arch.p2m.root);
>>    	vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
>>  	if (vmid < 0)
>> @@ -1592,6 +1866,11 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
>>  			  FIELD_PREP(STRTAB_STE_2_VTCR_S2SH0, vtcr->sh) |
>>  			  FIELD_PREP(STRTAB_STE_2_VTCR_S2TG, vtcr->tg) |
>>  			  FIELD_PREP(STRTAB_STE_2_VTCR_S2PS, vtcr->ps);
>> +
>> +	printk(XENLOG_DEBUG
>> +		   "SMMUv3: d%u: vmid 0x%x vtcr 0x%"PRIpaddr" p2maddr 0x%"PRIpaddr"\n",
>> +		   cfg->domain->domain_id, cfg->vmid, cfg->vtcr, cfg->vttbr);
>> +
>>  	return 0;
>>  }
> 
> [...]
> 
>> @@ -1923,8 +2239,8 @@ static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
>>  		return -ENOMEM;
>>  	}
>> -	if (!WARN_ON(q->base_dma & (qsz - 1))) {
>> -		dev_info(smmu->dev, "allocated %u entries for %s\n",
>> +	if (unlikely(q->base_dma & (qsz - 1))) {
>> +		dev_warn(smmu->dev, "allocated %u entries for %s\n",
> dev_warn() is not the same as WARN_ON(). But really, the first step is for you to try to change the behavior of WARN_ON() in Xen.

Ok. I will make sure we have the same behaviour as Linux by modifying the code as below.

WARN_ON(q->base_dma & (qsz - 1));
if (likely(!(q->base_dma & (qsz - 1)))) {
	dev_info(smmu->dev, "allocated %u entries for %s\n",
		1 << q->llq.max_n_shift, name);
}

Regards,
Rahul

> 
> If this doesn't go through then we can discuss an approach to mitigate it.
> 
> Cheers,
> 
> -- 
> Julien Grall
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 19:35:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 19:35:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52635.91926 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kotcp-0002id-7y; Mon, 14 Dec 2020 19:35:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52635.91926; Mon, 14 Dec 2020 19:35:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kotcp-0002iW-4i; Mon, 14 Dec 2020 19:35:15 +0000
Received: by outflank-mailman (input) for mailman id 52635;
 Mon, 14 Dec 2020 19:35:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kotco-0002iR-6F
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 19:35:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kotcj-0004cA-Vb; Mon, 14 Dec 2020 19:35:09 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kotcj-0003q3-NF; Mon, 14 Dec 2020 19:35:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=QC56/L8dZ8gMmsCuXnU4A4bTJ9SlqYisdqCPjcMZX7A=; b=nrvCYkD1StrE1F7l4AJVc4UFxV
	LOZpPUalC4cvPWK2GlNwCGKWM9Vhgkvuud7RaKfsaO9X0cDmcismLb9Pnv//j9XPAFpe0GOhldOi5
	4lK7eSeQNuzGzvuX9MXumGPyMv7HGvRUBEPpQVfP25KgDmdNjS5WlsEaQ/MYeZ00Edf4=;
Subject: Re: [PATCH v3 7/8] xen/arm: Add support for SMMUv3 driver
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1607617848.git.rahul.singh@arm.com>
 <33645b592bc5935a3b28ad576a819d06ed81e8dd.1607617848.git.rahul.singh@arm.com>
 <e26c96cb-245b-6927-c4a7-224c2114df42@xen.org>
 <1660236F-7BB0-4F3E-8CDD-10AE9282E2A3@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <6d693361-220c-fa1b-a04f-12a80f0aec4a@xen.org>
Date: Mon, 14 Dec 2020 19:35:06 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <1660236F-7BB0-4F3E-8CDD-10AE9282E2A3@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 14/12/2020 19:08, Rahul Singh wrote:
> Hello Julien,

Hi Rahul,

> 
>> On 11 Dec 2020, at 2:25 pm, Julien Grall <julien@xen.org> wrote:
>>
>> Hi Rahul,
>>
>> On 10/12/2020 16:57, Rahul Singh wrote:
>>>   struct arm_smmu_strtab_cfg {
>>> @@ -613,8 +847,13 @@ struct arm_smmu_device {
>>>   		u64			padding;
>>>   	};
>>>   -	/* IOMMU core code handle */
>>> -	struct iommu_device		iommu;
>>> +	/* Need to keep a list of SMMU devices */
>>> +	struct list_head		devices;
>>> +
>>> +	/* Tasklets for handling evts/faults and pci page request IRQs*/
>>> +	struct tasklet		evtq_irq_tasklet;
>>> +	struct tasklet		priq_irq_tasklet;
>>> +	struct tasklet		combined_irq_tasklet;
>>>   };
>>>     /* SMMU private data for each master */
>>> @@ -638,7 +877,6 @@ enum arm_smmu_domain_stage {
>>>     struct arm_smmu_domain {
>>>   	struct arm_smmu_device		*smmu;
>>> -	struct mutex			init_mutex; /* Protects smmu pointer */
>>
>> Hmmm... Your commit message says the mutex would be replaced by a spinlock. However, you are dropping the lock. What did I miss?
> 
> The Linux code uses the mutex in arm_smmu_attach_dev(), but in Xen this function is called from arm_smmu_assign_dev(), which already holds the spinlock when arm_smmu_attach_dev() is called, so I dropped the mutex to avoid a nested spinlock.
> The timing analysis of using a spinlock in place of a mutex, compared to Linux, when attaching a device to the SMMU is still valid.

I think it would be better to keep the current locking until the 
investigation is done.

But if you still want to make this change, then you should explain in 
the commit message why the lock is dropped.

[...]

> WARN_ON(q->base_dma & (qsz - 1));
> if (likely(!(q->base_dma & (qsz - 1)))) {
> 	dev_info(smmu->dev, "allocated %u entries for %s\n",
> 		1 << q->llq.max_n_shift, name);
> }

Right, but this doesn't address the second part of my comment.

This change would *not* be necessary if the implementation of WARN_ON() 
in Xen returned whether the warning was triggered.

Before considering changing the SMMU code, you should first attempt to 
modify the implementation of WARN_ON(). We can discuss other approaches 
if the discussion goes nowhere.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 19:37:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 19:37:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52640.91938 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koteO-0002q9-K4; Mon, 14 Dec 2020 19:36:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52640.91938; Mon, 14 Dec 2020 19:36:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koteO-0002q2-GI; Mon, 14 Dec 2020 19:36:52 +0000
Received: by outflank-mailman (input) for mailman id 52640;
 Mon, 14 Dec 2020 19:36:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oe/o=FS=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1koteN-0002pt-2u
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 19:36:51 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f7fe405d-4cb2-423a-acae-cbb1c81e465c;
 Mon, 14 Dec 2020 19:36:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f7fe405d-4cb2-423a-acae-cbb1c81e465c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1607974608;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=TIChKZk7bNLsqNN8smKs6e/5++JrlSASiflxKxbc3Nc=;
  b=WDEOpY2X+GPlEWLoYPO3j6mYmCOaWSNb7UgBGqd+2pg+DqqMo8IgJ3MG
   2gj+ZLsI6UeLXYBT02eQoO8JHjVCCZlXfULcgEOBaGSGkoaW8SY4fZtk/
   IMHuvGuIi+LbZkWhVoMYX3vllQkxI0J21w16tEecMyLZJhg57XGUNYUP6
   U=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.2
X-MesageID: 33172448
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,420,1599537600"; 
   d="scan'208";a="33172448"
Subject: Re: [XEN PATCH v1 1/1] Invalidate cache for cpus affinitized to the
 domain
To: "Shamsundara Havanur, Harsha" <havanur@amazon.com>, "jbeulich@suse.com"
	<jbeulich@suse.com>, "julien@xen.org" <julien@xen.org>
CC: "Wieczorkiewicz, Pawel" <wipawel@amazon.de>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"paul@xen.org" <paul@xen.org>
References: <cover.1607686878.git.havanur@amazon.com>
 <aad47c43b7cd7a391492b8be7b881cd37e9764c7.1607686878.git.havanur@amazon.com>
 <149f7f6e-0ff4-affc-b65d-0f880fa27b13@suse.com>
 <81b5d64b0a08d217e0ae53606cd1b8afd59283e4.camel@amazon.com>
 <bf70db2d-cf03-11cb-887e-aa38094b3d5f@xen.org>
 <607cba7c-15b6-0197-6000-cc823038d320@citrix.com>
 <eef19ecad32ac9379b6535ec2a4b444e78b29058.camel@amazon.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <d41f91ae-4df5-abe1-e58e-92a2424c077a@citrix.com>
Date: Mon, 14 Dec 2020 19:36:42 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <eef19ecad32ac9379b6535ec2a4b444e78b29058.camel@amazon.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 14/12/2020 19:05, Shamsundara Havanur, Harsha wrote:
> On Mon, 2020-12-14 at 16:01 +0000, Andrew Cooper wrote:
>>
>> On 14/12/2020 10:56, Julien Grall wrote:
>>> Hi Harsha,
>>>
>>> On 14/12/2020 09:26, Shamsundara Havanur, Harsha wrote:
>>>> On Mon, 2020-12-14 at 09:52 +0100, Jan Beulich wrote:
>>>>>
>>>>> On 11.12.2020 12:44, Harsha Shamsundara Havanur wrote:
>>>>>> A HVM domain flushes cache on all the cpus using
>>>>>> `flush_all` macro which uses cpu_online_map, during
>>>>>> i) creation of a new domain
>>>>>> ii) when device-model op is performed
>>>>>> iii) when domain is destructed.
>>>>>>
>>>>>> This triggers IPI on all the cpus, thus affecting other
>>>>>> domains that are pinned to different pcpus. This patch
>>>>>> restricts cache flush to the set of cpus affinitized to
>>>>>> the current domain using `domain->dirty_cpumask`.
>>>>> But then you need to effect cache flushing when a CPU gets
>>>>> taken out of domain->dirty_cpumask. I don't think you/we want
>>>>> to do that.
>>>>>
>>>> If we do not restrict, it could lead to a DoS attack, where a
>>>> malicious guest could keep writing to MTRR registers or do a cache
>>>> flush through a DM op and keep sending IPIs to other neighboring
>>>> guests.
>>> I saw Jan already answered about the alleged DoS, so I will just
>>> focus on the resolution.
>>>
>>> I agree that in the ideal situation we want to limit the impact on
>>> the other vCPUs. However, we also need to make sure the cure is not
>>> worse than the symptoms.
>> And specifically, only a change which is correct.  This patch very
>> definitely isn't.
>>
>> Lines can get cached on other cpus from, e.g. qemu mappings and PV
>> backends.
>>
>>> The cache flush cannot be restricted in all pinning situations
>>> because pinning doesn't imply the pCPU will be dedicated to a given
>>> vCPU, or even that the vCPU will stick to a pCPU (we may allow
>>> floating on a NUMA socket). Although your setup may offer this
>>> guarantee.
>>>
>>> My knowledge in this area is quite limited. But below are a few
>>> questions that hopefully will help to make a decision.
>>>
>>> The first question to answer is: can the flush be restricted in a
>>> setup where each vCPU is running on a dedicated pCPU (i.e. a
>>> partitioned system)?
>> Not really.  Lines can become cached even from speculation in the
>> directmap.
>>
>> If you need to flush the caches (and don't have a virtual mapping to
>> issue clflush/clflushopt/clwb over), it must be on all CPUs.
> If lines are cached due to aggressive speculation from a different
> guest, wouldn't they be invalidated at the speculation boundary, since
> it's a wrong speculation? Would they still need to be flushed
> explicitly?

No.  Caches are microarchitectural state (just like TLBs, linefill
buffers, etc.)

The entire mess surrounding speculative security issues is that
perturbations from bad speculation survive, and can be recovered at a
later point.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 20:01:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 20:01:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52678.91950 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kou2I-0005gp-FT; Mon, 14 Dec 2020 20:01:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52678.91950; Mon, 14 Dec 2020 20:01:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kou2I-0005gi-CR; Mon, 14 Dec 2020 20:01:34 +0000
Received: by outflank-mailman (input) for mailman id 52678;
 Mon, 14 Dec 2020 20:01:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PveR=FS=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kou2G-0005gd-Ou
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 20:01:32 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe05::605])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fd96f3cf-05c5-4800-86f2-eceeaa17ccae;
 Mon, 14 Dec 2020 20:01:29 +0000 (UTC)
Received: from MR2P264CA0160.FRAP264.PROD.OUTLOOK.COM (2603:10a6:501:1::23) by
 AM7PR08MB5414.eurprd08.prod.outlook.com (2603:10a6:20b:105::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.17; Mon, 14 Dec
 2020 20:01:26 +0000
Received: from VE1EUR03FT008.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:501:1:cafe::f9) by MR2P264CA0160.outlook.office365.com
 (2603:10a6:501:1::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.19 via Frontend
 Transport; Mon, 14 Dec 2020 20:01:25 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT008.mail.protection.outlook.com (10.152.18.75) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12 via Frontend Transport; Mon, 14 Dec 2020 20:01:25 +0000
Received: ("Tessian outbound 39646a0fd094:v71");
 Mon, 14 Dec 2020 20:01:24 +0000
Received: from b49702949785.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B1BF0D58-AA54-47B9-9D19-E6CCCAEB6608.1; 
 Mon, 14 Dec 2020 20:01:09 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id b49702949785.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 14 Dec 2020 20:01:09 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DBBPR08MB4632.eurprd08.prod.outlook.com (2603:10a6:10:db::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.12; Mon, 14 Dec
 2020 20:01:07 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::11cb:318b:f0a0:f125]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::11cb:318b:f0a0:f125%5]) with mapi id 15.20.3654.025; Mon, 14 Dec 2020
 20:01:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fd96f3cf-05c5-4800-86f2-eceeaa17ccae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=k8OenHzXt6Q1vtDdwB5EEOrKrVTMPFSG0ASsxJqAA3U=;
 b=9xdAmvNc1R4FZZZyB6pQs43OYEgE24Ue3JCujkwUf+L8s1ceyKAzDjeN0CdmJpAPdBRbsFY7DKO/cdalLryrdShF9A0Hzg0yxnibx5evgTvHOABK1ytqmKrHdB17ZYJhVd/wUG6b4bzw/bS9PrxnYLoEb5usDqUmNgGLIjAc3cw=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: f70ac79d4693f58d
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OBmM6FgfzDFddi+nFbYWbiUzJQ6NBl4/IED4OqxE3mU/nEtbp2Hb0nnKelzt8YiLzsTixYwOtopMELdTZm8c0/NUBGHt9PyE7D1JyEaAWX53GfnfgdfWU6SKElynpdV7AGGzejS9tySSBu7bPUDCzunIeUabxb9Kk5GeS46JYV+xDMm+79Ow2yJGFPk7kyBR5igb8QVzxt7Rnm9fkl77cD+A9uBhj6XwKG8XQ7cBOmUxix0RQ6p/WGhWZYw6bc1vDLpz8qLnss5zOCkhjrxiGVVk2ywCsnRriF93ZuOI9dxsdGMUsiGgbInLdmkewq4jSFA8PBoYBYufZMcxOAtGNQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=k8OenHzXt6Q1vtDdwB5EEOrKrVTMPFSG0ASsxJqAA3U=;
 b=AfWTuBGwpMfXhDnbnDTf31qcpo7UrrbKKQKz4EUrDkqNBq6LBSoRhlumLuNYA2dsA3hRGtd1kJPasfKWJRRNMwxloovL+EURop2Bv6dQ8QnNIrXrbCGUNQ7ANVtYRchJQPAvoXeqSMicric/lltvT4PCgTiJje3ACA8WcD3S8Ad9VhTyXsaNR5rDM0vIXCahvxrRMoLcnd32F9x89pE9nR6urp8wf+yz7jR+h69VHAboOzO4O9o1JgdTAsNcXOaTrbdTThwn4CHJkYgtnQQKltQ4wYjFOJKCJsr8XkVDxs89VoX4lB46GW3cQmEXiTvkEqQtSmpxJlCf3R93fwxV6Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=k8OenHzXt6Q1vtDdwB5EEOrKrVTMPFSG0ASsxJqAA3U=;
 b=9xdAmvNc1R4FZZZyB6pQs43OYEgE24Ue3JCujkwUf+L8s1ceyKAzDjeN0CdmJpAPdBRbsFY7DKO/cdalLryrdShF9A0Hzg0yxnibx5evgTvHOABK1ytqmKrHdB17ZYJhVd/wUG6b4bzw/bS9PrxnYLoEb5usDqUmNgGLIjAc3cw=
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<jbeulich@suse.com>, Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>
Subject: Re: [PATCH v3 0/8] xen/arm: Add support for SMMUv3 driver
Thread-Topic: [PATCH v3 0/8] xen/arm: Add support for SMMUv3 driver
Thread-Index: AQHWzxWmv96s8jlSw0aSETONfL1Dbanx9eOAgAUTqAA=
Date: Mon, 14 Dec 2020 20:01:07 +0000
Message-ID: <8ED5EAAF-48B0-4289-BCB0-232F70001134@arm.com>
References: <cover.1607617848.git.rahul.singh@arm.com>
 <ea121c23-4deb-c566-4d1d-bb9dd4959015@xen.org>
In-Reply-To: <ea121c23-4deb-c566-4d1d-bb9dd4959015@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 94781a9d-408a-4fe4-23c5-08d8a06b098d
x-ms-traffictypediagnostic: DBBPR08MB4632:|AM7PR08MB5414:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM7PR08MB5414F330F5ABC34E1DD0BDA6FCC70@AM7PR08MB5414.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 yCoi0nuTtXt0q23IAoS4zkITEhhkO6TydrrVUVauZiG5NKfUXDqaIgezNWRVOs0oYzVs1vyqTRTLvLY62NPPY/b9bw0Lyg8n1OAG7XxP7Zheatz8Gov+udCeWd8KXQHbQEk/R84uI6z1YakpjjCNI/yIXzPbPgnM68tLzumzSdLYYBOUvxGWlSa9TL8MQPGKw1EF3DuD0vO60Y51O+h0l8e2j+ipBuBTTvon6a//cWVB/wLzfPav0u2QAYQXXJIoNw6rma/1mSPrg+bSqBxEz3AfRdOuGzH3SeA59Z5+yc/CG9ewUNtaOQGFlsDntruJvISC5nw6/8RKCrWhsMJVfo4ZFWtpJ99vcp3ayofcMl+bMmf4DHuZFUAznuGUU1G1
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3500.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(366004)(346002)(376002)(136003)(39850400004)(71200400001)(5660300002)(83380400001)(478600001)(33656002)(6486002)(6506007)(91956017)(8936002)(26005)(86362001)(53546011)(76116006)(66946007)(186003)(4326008)(6512007)(64756008)(8676002)(7416002)(66446008)(66476007)(36756003)(2616005)(54906003)(66556008)(316002)(6916009)(2906002)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?utf-8?B?cXliME9HRm16alNlTXVFNHdMOE00ZzRqWVNpNmFGZWF5ajgxSUNQaDlWdERS?=
 =?utf-8?B?YURUNWhzTHZ6QlhXbCtMM3JMY0NOdm9Zb21tUEVub1JhVmhxdmVndTczOHdr?=
 =?utf-8?B?NmJJVG5JdkZQai8wSHVPL2hkVzltVWI5aFp4Ny9SZ3kzS25hUkMrRSt0a3dK?=
 =?utf-8?B?R2VHNFp5eURSWlZmdk9Bd29mNXhVamsrNWFhVFo2allVNHBjTis4Y0Q1MDND?=
 =?utf-8?B?N2tvc2FCWGZReGkzRmpGUnU0bzR2a2IweXdpcHdLZDlYUXFzTnhZTzBGV2RP?=
 =?utf-8?B?QVIxSXg4cUFDSzdZSDZGaG1xRmRGWlJHL2xkL0J4bjIramFrKy9QcUs0MERp?=
 =?utf-8?B?ZjNSR240TXB1S20yMzc4N1RlSnRCZjBBOXVVOFVzbkNmZzAvbFI0Nm0rY293?=
 =?utf-8?B?S0dpbzVoWXNjNkZkZUhLdWN3OGNaZHNqRHRHa0ErSlBHdFRSZnRTdGwvN3pF?=
 =?utf-8?B?Nzg1QTBwQ0tYOTJLczZLeVdJTDJRajdlNTBIWldWZWxsM0hhOEFzeTdPbURP?=
 =?utf-8?B?d1BIZUpwR20zbDRaRlVNRTYzQk5ubTd1TkdZNUtQblBraGR4dFR4TStLRjUr?=
 =?utf-8?B?dVhoVGJsUFp6WVlJdjdkU1pqZHVQc3Y0c1pWb2RZL1h2SzdKVDVqQXg1R0h1?=
 =?utf-8?B?QnZzbXFHemFFQXBUQk8zVDVwSy8vb1pXRENucFN4akx4b1pSNlk4UEg3Ny84?=
 =?utf-8?B?NGhXYWN6Rk5yZW0rV1pQTGpIeVE5NkNqV2ZhZFErczEzZTR2WTN2OXdYK1N5?=
 =?utf-8?B?cmdrU3lpd1lzWitNREVIcStjaHB5T28xdXR3dmQrSDNVMzcySkNNQU5nVmxv?=
 =?utf-8?B?L215VkQvVExud1hhcHQ2TjhFY2JITEhTQTMxU2c5cjM5cFhjcnc4aXovSGpK?=
 =?utf-8?B?UHJFZ3NySk84YVpmM29DSmZQUkVnMnMrdDFnRGlHVzhqVFpzNTBjVzNUZ1JN?=
 =?utf-8?B?RzJzSVo2QzFxUVpxMkxpdlhsRjlXblA4cFllTFdyNVNnMXZVZloweUxwUlBG?=
 =?utf-8?B?TWRSc2EvOGlPbXB2OW4zSURMRVdMNGdqcm50VGVEaUxhY2dOUjVKUGIxSXRW?=
 =?utf-8?B?Y0xYQTl4TkIwaWgxWVRkSzdxODVaa2NsNGVTUEdSQjZNdXlyVXozU0k1c2J6?=
 =?utf-8?B?di9nM0lFZmdrOGNlL2trcUR6WCtjaE1WVUR5cTNjM2luc2o5V2g1c3hHa2NS?=
 =?utf-8?B?VzZZUUhLRENmcGR3a2hXWnFuZkhKcWkzNkRDMkNGMXYvRDdkb3JwN0dLOFBk?=
 =?utf-8?B?bCtzQnpIejJQOHVJdlptQm50OFVMQS9jZFZhMWxCTDZEcWhuOGZJdytabG0y?=
 =?utf-8?Q?fpFLTscblnfM+Uu5Ocie/x7/FmIXL4WaHy?=
Content-Type: text/plain; charset="utf-8"
Content-ID: <FAEA1E316DC2174CA124B93A8311D1BB@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4632
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT008.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	50f44467-c899-4ac6-937c-08d8a06afec9
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Fn2goNbYig/lKcSyRJmbJNaLaOOG6Vc/6JgQkUyB5AhlHYZOx243dBgHhEQgP+O4bOcpCJS3HX/+bArqh39zz2/Aon+yo7cmpPtYmLpQUXGKUtJgaIKVMROaDuEsyBRMwiRjj/+M/NF5aKn+wCHnf+zBBHUt44G3HU8RkEM0rLOthOtfdPiXALTjXMwnE3wYLH3KM2pNpwZdO2e67XWm99Yew6Gsgp8riDNtFFjjXK7+SYCDH/MBLDqnCo+olAPGuGLynSp8Hvg9iKnQMqEAtyFHL2XBp+oi3tw+1Z0FXpLByzIDVCBe18VEGm0imWKyt3lk0bO5tGJTgdxxoAkB2bHw2WoPbm+HbDFx0s4AfmKvZimeh5lLW3cZg1XH49kijMLDcBkZpf5a4LFy+7bFfA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(39850400004)(376002)(396003)(136003)(46966005)(6506007)(186003)(36756003)(6486002)(82310400003)(53546011)(86362001)(316002)(82740400003)(6862004)(81166007)(2906002)(47076004)(478600001)(5660300002)(4326008)(70206006)(356005)(26005)(8676002)(8936002)(70586007)(336012)(83380400001)(33656002)(54906003)(6512007)(2616005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Dec 2020 20:01:25.3775
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 94781a9d-408a-4fe4-23c5-08d8a06b098d
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT008.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR08MB5414

Hello Julien, Stefano

> On 11 Dec 2020, at 2:29 pm, Julien Grall <julien@xen.org> wrote:
> 
> Hi Rahul,
> 
> On 10/12/2020 16:56, Rahul Singh wrote:
>> This patch series is v3 of the work to add support for the SMMUv3 driver.
>> Approach taken is to first merge the Linux copy of the SMMUv3 driver
>> (tag v5.8.18) and then modify the driver to build on XEN.
>> MSI and PCI ATS functionality are not supported. Code is not tested and
>> compiled. Code is guarded by the flag CONFIG_PCI_ATS and CONFIG_MSI to compile
>> the driver.
>> Code specific to Linux is removed from the driver to avoid dead code.
>> Driver is currently supported as tech preview.
>> Following functionality should be supported before driver is out for tech
>> preview
>> 1. Investigate the timing analysis of using spin lock in place of mutex when
>>    attaching a  device to SMMU.
>> 2. Merged the latest Linux SMMUv3 driver code once atomic operation is
>>    available in XEN.
>> 3. PCI ATS and MSI interrupts should be supported.
>> 4. Investigate side-effect of using tasklet in place of threaded IRQ and fix
>>    if any.
> In your last e-mail, you wrote that you would investigate and then come back to use. It wasn't clear that this meant you will not deal with it in this series.
> 

Yes, I will investigate the side-effect of using tasklets, but not as part of this patch series. It would be great if we could proceed with this patch series as it is (using a tasklet in place of a threaded IRQ).

>> 5. fallthorugh keyword should be supported.
> 
> This one should really be done now... It is not a complicated one...

Ok. I will implement it in the next version.

> 
>> 6. Implement the ffsll function in bitops.h file.

While implementing the code I found out that Linux uses the built-in function __builtin_ffsll() for ffsll, and there is no implementation of ffsll available in Linux.
If we implement ffsll in XEN we will diverge from Linux. I am thinking of adding the below code in "xen/include/asm-arm/bitops.h".
Please suggest if it is okay.

static always_inline int ffsll(long long x)
{
    return __builtin_ffsll(x);
}

Regards,
Rahul

> 
> ... same here.
> 
> Cheers,
> 
> -- 
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 21:25:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 21:25:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52706.91988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kovLX-0004e1-5U; Mon, 14 Dec 2020 21:25:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52706.91988; Mon, 14 Dec 2020 21:25:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kovLX-0004du-1O; Mon, 14 Dec 2020 21:25:31 +0000
Received: by outflank-mailman (input) for mailman id 52706;
 Mon, 14 Dec 2020 21:25:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kovLV-0004dp-Fd
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 21:25:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kovLV-0006YJ-2Z; Mon, 14 Dec 2020 21:25:29 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kovLU-0002kx-RO; Mon, 14 Dec 2020 21:25:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:Date:
	Message-ID:Subject:From:Cc:To;
	bh=W3CUiMegB4BVkOevPrxxwlz9k2sF0VIgjinR7mJO+zE=; b=GJjfM94GNzkD7aFMOC4OuSnRby
	vXidysS/3DsaeOhU9ZvF6l5ynXOe0eNB78/m57sdxESt2jdQw40qybnQEycOhEH0vBKy/faEEpkmu
	uP6u5mG7GM+15WpbQTUs1v6awvkCX41TyEbguIREKlGEZK8s+gfpmBAxxjXWCwAbQdBM=;
To: aams@amazon.de, Juergen Gross <jgross@suse.com>
Cc: linux-kernel@vger.kernel.org,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 foersleo@amazon.de
From: Julien Grall <julien@xen.org>
Subject: xen/evtchn: Interrupt for port 34, but apparently not enabled;
 per-user 00000000a86a4c1b on 5.10
Message-ID: <ce881240-284f-8470-10f1-5cce353ee903@xen.org>
Date: Mon, 14 Dec 2020 21:25:27 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

When testing a Linux 5.10 dom0, I could reliably hit the following warning 
when using the event 2L ABI:

[  589.591737] Interrupt for port 34, but apparently not enabled; per-user 00000000a86a4c1b
[  589.593259] WARNING: CPU: 0 PID: 1111 at /home/ANT.AMAZON.COM/jgrall/works/oss/linux/drivers/xen/evtchn.c:170 evtchn_interrupt+0xeb/0x100
[  589.595514] Modules linked in:
[  589.596145] CPU: 0 PID: 1111 Comm: qemu-system-i38 Tainted: G W         5.10.0+ #180
[  589.597708] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.12.0-59-gc9ba5276e321-prebuilt.qemu.org 04/01/2014
[  589.599782] RIP: e030:evtchn_interrupt+0xeb/0x100
[  589.600698] Code: 48 8d bb d8 01 00 00 ba 01 00 00 00 be 1d 00 00 00 e8 d9 10 ca ff eb b2 8b 75 20 48 89 da 48 c7 c7 a8 31 3d 82 e8 65 29 a0 ff <0f> 0b e9 42 ff ff ff 0f 1f 40 00 66 2e 0f 1f 84 00 00 00 00 00 0f
[  589.604087] RSP: e02b:ffffc90040003e70 EFLAGS: 00010086
[  589.605102] RAX: 0000000000000000 RBX: ffff888102091800 RCX: 0000000000000027
[  589.606445] RDX: 0000000000000000 RSI: ffff88817fe19150 RDI: ffff88817fe19158
[  589.607790] RBP: ffff88810f5ab980 R08: 0000000000000001 R09: 0000000000328980
[  589.609134] R10: 0000000000000000 R11: ffffc90040003c70 R12: ffff888107fd3c00
[  589.610484] R13: ffffc90040003ed4 R14: 0000000000000000 R15: ffff88810f5ffd80
[  589.611828] FS:  00007f960c4b8ac0(0000) GS:ffff88817fe00000(0000) knlGS:0000000000000000
[  589.613348] CS:  10000e030 DS: 0000 ES: 0000 CR0: 0000000080050033
[  589.614525] CR2: 00007f17ee72e000 CR3: 000000010f5b6000 CR4: 0000000000050660
[  589.615874] Call Trace:
[  589.616402]  <IRQ>
[  589.616855]  __handle_irq_event_percpu+0x4e/0x2c0
[  589.617784]  handle_irq_event_percpu+0x30/0x80
[  589.618660]  handle_irq_event+0x3a/0x60
[  589.619428]  handle_edge_irq+0x9b/0x1f0
[  589.620209]  generic_handle_irq+0x4f/0x60
[  589.621008]  evtchn_2l_handle_events+0x160/0x280
[  589.621913]  __xen_evtchn_do_upcall+0x66/0xb0
[  589.622767]  __xen_pv_evtchn_do_upcall+0x11/0x20
[  589.623665]  asm_call_irq_on_stack+0x12/0x20
[  589.624511]  </IRQ>
[  589.624978]  xen_pv_evtchn_do_upcall+0x77/0xf0
[  589.625848]  exc_xen_hypervisor_callback+0x8/0x10

This can be reproduced by creating/destroying guests in a loop, although 
I have struggled to reproduce it on vanilla Xen.

After several hours of debugging, I think I have found the root cause.

While we only expect the unmask to happen when the event channel is 
EOIed, there is an unmask happening as part of handle_edge_irq() because 
the interrupt was seen as pending by another vCPU (IRQS_PENDING is set).

It turns out that the event channel is set for multiple vCPUs in 
cpu_evtchn_mask. This is happening because the affinity is not cleared 
when freeing an event channel.

The implementation of evtchn_2l_handle_events() will look for all the 
active interrupts for the current vCPU and later on clear the pending 
bit (via the ack() callback). IOW, I believe, this is not an atomic 
operation.

Even if Xen will notify the event to a single vCPU, evtchn_pending_sel 
may still be set on the other vCPU (thanks to a different event 
channel). Therefore, there is a chance that two vCPUs will try to handle 
the same interrupt.

The IRQ handler handle_edge_irq() is able to deal with that and will 
mask/unmask the interrupt. This will mess with the lateeoi logic 
(although I managed to reproduce it once without XSA-332).

My initial idea to fix the problem was to switch the affinity from CPU X 
to CPU0 when the event channel is freed.

However, I am not sure this is enough because I haven't found anything 
yet preventing a race between evtchn_2l_handle_events() and 
evtchn_2l_bind_vcpu().

So maybe we want to introduce refcounting (if there is nothing 
provided by the IRQ framework) and only unmask when the counter drops to 0.

Any opinions?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 21:43:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 21:43:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52621.92000 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kovcl-0006Vr-Ii; Mon, 14 Dec 2020 21:43:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52621.92000; Mon, 14 Dec 2020 21:43:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kovcl-0006Vk-FR; Mon, 14 Dec 2020 21:43:19 +0000
Received: by outflank-mailman (input) for mailman id 52621;
 Mon, 14 Dec 2020 18:53:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V0bF=FS=canonical.com=guilherme.piccoli@srs-us1.protection.inumbo.net>)
 id 1kosyS-0007CE-Tm
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 18:53:32 +0000
Received: from youngberry.canonical.com (unknown [91.189.89.112])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 30e49d43-231a-4219-9fb1-1a396327b478;
 Mon, 14 Dec 2020 18:53:32 +0000 (UTC)
Received: from mail-ed1-f69.google.com ([209.85.208.69])
 by youngberry.canonical.com with esmtps
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.86_2)
 (envelope-from <guilherme.piccoli@canonical.com>) id 1kosyR-0001Ba-El
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 18:53:31 +0000
Received: by mail-ed1-f69.google.com with SMTP id cm4so8722326edb.0
 for <xen-devel@lists.xenproject.org>; Mon, 14 Dec 2020 10:53:31 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 30e49d43-231a-4219-9fb1-1a396327b478
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=JG5ef47m2BX+tEst66UNr3S/GTFXEHs77QPAKIP06kc=;
        b=qc/G0RuwmIYK/5sXGAIAovTJZ4RfmfvkxuA9IdqHSl090aV1Kxkn//4KTh6277K25M
         d7lPnvC78kQ5Hb5Clh9UUSZmd+tjNzxi00Y1Ep0Vmlj+qFKmxP3rb06etyDtpWFfxxWI
         qr6bvxp2sVeWUn5t0C09reE5aUEz8KDL4n9efPFRZH39O+ZIy5J6J9TPBsHjYvCRYJPU
         AwdDCDgCNX2lCwncD+93TxTXYJW8J6OPh0+mkjXaEdtdfGPcukmgsku23DUQOYfqPnnk
         1R/bVZsPSUXRu2grS7O/fY21ysiUrVmkqGNZ8thKkYQ/IengU4SfZ0YAFjc4Rwnrwbov
         X8rA==
X-Gm-Message-State: AOAM53313Dttsuz6W1NcXF9Tfdrr2t0V+3Ym64eryZZmZQCXymOVrGXD
	GmAPBBax2jgzcP1EG1LcDk89l54Fi1OVD9B62EvDtiTw5TPOrBk69ZtHITFjF989CgMImNFgpfW
	OC6+StC9uuOGkXwQzoy0Nnr4GRPs6XAC8feTXtFJdZrWi5EBBHqX2IOED149N
X-Received: by 2002:a17:906:af49:: with SMTP id ly9mr23184948ejb.38.1607972010996;
        Mon, 14 Dec 2020 10:53:30 -0800 (PST)
X-Google-Smtp-Source: ABdhPJw43xBOl1hR1+1GkFo7KHmuG2ixIL07q9pTifoNj7cWg8JF3A9PBHyJJ95TRUo2ZmCc7oxgmsLixgh/hpH644U=
X-Received: by 2002:a17:906:af49:: with SMTP id ly9mr23184940ejb.38.1607972010806;
 Mon, 14 Dec 2020 10:53:30 -0800 (PST)
MIME-Version: 1.0
References: <87h7oudcbx.fsf@vps.thesusis.net>
In-Reply-To: <87h7oudcbx.fsf@vps.thesusis.net>
From: "Guilherme G. Piccoli" <guilherme.piccoli@canonical.com>
Date: Mon, 14 Dec 2020 15:52:55 -0300
Message-ID: <CAHD1Q_zcruQ6KVHApvhb=0+mG0m80T+tmg1UzjQBki8j+aR51A@mail.gmail.com>
Subject: Re: kexec not working in xen domU?
To: Phillip Susi <phill@thesusis.net>
Cc: kexec mailing list <kexec@lists.infradead.org>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

On Wed, Dec 9, 2020 at 4:13 PM Phillip Susi <phill@thesusis.net> wrote:
>
> Whenever I try to use kexec in a xen domU, the domain just reboots all
> the way through the bios rather than loading the kexec'ed kernel with
> the given command line.  Is this a known issue?  I've tried with both
> systemctl kexec and kexec -e.
>

Can you capture the serial console output in a pastebin? Maybe add something
like "earlyprintk=ttySX", where ttySX is your known-to-work serial
console. This helps to determine whether it's a shutdown issue or an
early boot problem.
Also, it's worth CCing the Xen mailing list on this discussion, I guess, so I'm CCing them.
Cheers,


Guilherme


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 21:43:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 21:43:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52702.92027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kovcm-0006Xs-KQ; Mon, 14 Dec 2020 21:43:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52702.92027; Mon, 14 Dec 2020 21:43:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kovcm-0006XS-Cn; Mon, 14 Dec 2020 21:43:20 +0000
Received: by outflank-mailman (input) for mailman id 52702;
 Mon, 14 Dec 2020 21:13:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6luh=FS=kernel.org=saeed@srs-us1.protection.inumbo.net>)
 id 1kov9c-0003j1-Pw
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 21:13:12 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 43b336df-5a5c-4021-b07e-fe6cca59fadb;
 Mon, 14 Dec 2020 21:13:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43b336df-5a5c-4021-b07e-fe6cca59fadb
Message-ID: <0f8eda3bbed1100c1c1f7015dd5c172f8d735c94.camel@kernel.org>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607980391;
	bh=Pzvi/5ISTUEc02BC83hB6nniQmkTHPAhNCBBNcfKAUY=;
	h=Subject:From:To:Cc:Date:In-Reply-To:References:From;
	b=Ks3P3xnvXkJSp+uh7CcZoik6MJkEKtx2PvE0VYPE/pepC3FjT65YOm45Ebhka542Q
	 KDRZnMtG6PVk8azMh03nN0D62HLqcKsXaM3NGfl0cnBfDn4M/eCG3Bapi5UEiWJojE
	 Wz5xGjdq/ndTqDY7tNprPuvOQMJrfpTrREHq07hUDedim+JiPcYSm9y/a+CVpdgIGp
	 tK5qViUVIaC1nIaet1VMQYFfwNA6Uv7knO5wHsw+J306Af6r5YPeiXS/tptiFYTABJ
	 2yXmYvRm101qHL41Iu7+0L42D5JI7GLbb235mT9lCcvuGwm/DUlfBNloCq8GKc5F9/
	 KjhGS5BzrCEJg==
Subject: Re: [patch 22/30] net/mlx5: Replace irq_to_desc() abuse
From: Saeed Mahameed <saeed@kernel.org>
To: Thomas Gleixner <tglx@linutronix.de>, LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>, 
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>, Helge
 Deller <deller@gmx.de>, afzal mohammed <afzal.mohd.ma@gmail.com>,
 linux-parisc@vger.kernel.org, Russell King <linux@armlinux.org.uk>,
 linux-arm-kernel@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Christian Borntraeger <borntraeger@de.ibm.com>, Heiko Carstens
 <hca@linux.ibm.com>, linux-s390@vger.kernel.org, Jani Nikula
 <jani.nikula@linux.intel.com>,  Joonas Lahtinen
 <joonas.lahtinen@linux.intel.com>, Rodrigo Vivi <rodrigo.vivi@intel.com>,
 David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>, Pankaj
 Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>, Chris Wilson
 <chris@chris-wilson.co.uk>, Wambui Karuga <wambui.karugax@gmail.com>, 
 intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, Tvrtko
 Ursulin <tvrtko.ursulin@linux.intel.com>, Linus Walleij
 <linus.walleij@linaro.org>,  linux-gpio@vger.kernel.org, Lee Jones
 <lee.jones@linaro.org>, Jon Mason <jdmason@kudzu.us>, Dave Jiang
 <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>, Michal
 Simek <michal.simek@xilinx.com>,  linux-pci@vger.kernel.org, Karthikeyan
 Mitran <m.karthikeyan@mobiveil.co.in>,  Hou Zhiqiang
 <Zhiqiang.Hou@nxp.com>, Tariq Toukan <tariqt@nvidia.com>, "David S. Miller"
 <davem@davemloft.net>,  Jakub Kicinski <kuba@kernel.org>,
 netdev@vger.kernel.org, linux-rdma@vger.kernel.org, Leon Romanovsky
 <leon@kernel.org>, Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen
 Gross <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, 
 xen-devel@lists.xenproject.org
Date: Mon, 14 Dec 2020 13:13:07 -0800
In-Reply-To: <20201210194044.769458162@linutronix.de>
References: <20201210192536.118432146@linutronix.de>
	 <20201210194044.769458162@linutronix.de>
Content-Type: text/plain; charset="UTF-8"
User-Agent: Evolution 3.36.5 (3.36.5-1.fc32) 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Thu, 2020-12-10 at 20:25 +0100, Thomas Gleixner wrote:
> No driver has any business with the internals of an interrupt
> descriptor. Storing a pointer to it just to use yet another helper at
> the
> actual usage site to retrieve the affinity mask is creative at best.
> Just
> because C does not allow encapsulation does not mean that the kernel
> has no
> limits.
> 

You can't blame developers for using stuff from include/linux/.
Not all developers are the same, and sometimes we don't read between
the lines; you can't assume all driver developers are experts in IRQ
API discipline.

Your rules must be expressed programmatically: for instance, you can
just hide struct irq_desc and irq_to_desc() in kernel/irq/ and remove
them from the include/linux/ header files. If you want privacy in your
subsystem, don't put all your header files on display under
include/linux.


> Retrieve a pointer to the affinity mask itself and use that. It's
> still
> using an interface which is usually not for random drivers, but
> definitely
> less hideous than the previous hack.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  drivers/net/ethernet/mellanox/mlx5/core/en.h      |    2 +-
>  drivers/net/ethernet/mellanox/mlx5/core/en_main.c |    2 +-
>  drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c |    6 +-----
>  3 files changed, 3 insertions(+), 7 deletions(-)
> 
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
> @@ -669,7 +669,7 @@ struct mlx5e_channel {
>  	spinlock_t                 async_icosq_lock;
>  
>  	/* data path - accessed per napi poll */
> -	struct irq_desc *irq_desc;
> +	const struct cpumask	  *aff_mask;
>  	struct mlx5e_ch_stats     *stats;
>  
>  	/* control */
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> @@ -1998,7 +1998,7 @@ static int mlx5e_open_channel(struct mlx
>  	c->num_tc   = params->num_tc;
>  	c->xdp      = !!params->xdp_prog;
>  	c->stats    = &priv->channel_stats[ix].ch;
> -	c->irq_desc = irq_to_desc(irq);
> +	c->aff_mask = irq_get_affinity_mask(irq);

As long as the affinity mask pointer stays the same for the lifetime of
the IRQ vector.

Assuming that:
Acked-by: Saeed Mahameed <saeedm@nvidia.com>




From xen-devel-bounces@lists.xenproject.org Mon Dec 14 21:43:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 21:43:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52695.92007 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kovcl-0006WN-U8; Mon, 14 Dec 2020 21:43:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52695.92007; Mon, 14 Dec 2020 21:43:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kovcl-0006WB-NY; Mon, 14 Dec 2020 21:43:19 +0000
Received: by outflank-mailman (input) for mailman id 52695;
 Mon, 14 Dec 2020 20:43:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1VlC=FS=vps.thesusis.net=psusi@srs-us1.protection.inumbo.net>)
 id 1kouh3-00012R-AX
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 20:43:41 +0000
Received: from vps.thesusis.net (unknown [34.202.238.73])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eb657a85-d9a4-4d5e-8fb9-319b6080a827;
 Mon, 14 Dec 2020 20:43:40 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by vps.thesusis.net (Postfix) with ESMTP id 5F65126E7D;
 Mon, 14 Dec 2020 15:25:48 -0500 (EST)
Received: from vps.thesusis.net ([127.0.0.1])
 by localhost (vps.thesusis.net [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id SzwiUz0EGBcR; Mon, 14 Dec 2020 15:25:48 -0500 (EST)
Received: by vps.thesusis.net (Postfix, from userid 1000)
 id 2294826E79; Mon, 14 Dec 2020 15:25:48 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb657a85-d9a4-4d5e-8fb9-319b6080a827
References: <87h7oudcbx.fsf@vps.thesusis.net> <CAHD1Q_zcruQ6KVHApvhb=0+mG0m80T+tmg1UzjQBki8j+aR51A@mail.gmail.com>
User-agent: mu4e 1.5.7; emacs 26.3
From: Phillip Susi <phill@thesusis.net>
To: "Guilherme G. Piccoli" <guilherme.piccoli@canonical.com>
Cc: kexec mailing list <kexec@lists.infradead.org>, xen-devel@lists.xenproject.org
Subject: Re: kexec not working in xen domU?
Date: Mon, 14 Dec 2020 15:08:57 -0500
In-reply-to: <CAHD1Q_zcruQ6KVHApvhb=0+mG0m80T+tmg1UzjQBki8j+aR51A@mail.gmail.com>
Message-ID: <87czzcdtir.fsf@vps.thesusis.net>
MIME-Version: 1.0
Content-Type: text/plain


Guilherme G. Piccoli writes:

> Can you capture the serial console in a pastebin? Maybe add something
> like "earlyprintk=ttySX", where ttySX is your known-to-work serial
> console output. This helps to determine if it's a shutdown issue or an
> early boot problem.

The regular Xen console should work for this, shouldn't it?  So
earlyprintk=hvc0, I guess?  I also threw in console=hvc0 and loglevel=7:

[  184.734810] systemd-shutdown[1]: Syncing filesystems and block devices.
[  185.772511] systemd-shutdown[1]: Sending SIGTERM to remaining processes...
[  185.896957] systemd-shutdown[1]: Sending SIGKILL to remaining processes...
[  185.901111] systemd-shutdown[1]: Unmounting file systems.
[  185.902180] [1035]: Remounting '/' read-only in with options 'errors=remount-ro'.
[  185.990634] EXT4-fs (xvda1): re-mounted. Opts: errors=remount-ro
[  186.002373] systemd-shutdown[1]: All filesystems unmounted.
[  186.002411] systemd-shutdown[1]: Deactivating swaps.
[  186.002502] systemd-shutdown[1]: All swaps deactivated.
[  186.002529] systemd-shutdown[1]: Detaching loop devices.
[  186.002699] systemd-shutdown[1]: All loop devices detached.
[  186.002727] systemd-shutdown[1]: Stopping MD devices.
[  186.002814] systemd-shutdown[1]: All MD devices stopped.
[  186.002840] systemd-shutdown[1]: Detaching DM devices.
[  186.002974] systemd-shutdown[1]: All DM devices detached.
[  186.003017] systemd-shutdown[1]: All filesystems, swaps, loop devices, MD devices and DM devices detached.
[  186.168475] systemd-shutdown[1]: Syncing filesystems and block devices.
[  186.169150] systemd-shutdown[1]: Rebooting with kexec.
[  186.418653] xenbus_probe_frontend: xenbus_frontend_dev_shutdown: device/vbd/5632: Initialising != Connected, skipping
[  186.427377] kexec_core: Starting new kernel
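For reference, the command-line options discussed above would typically be passed to the domU kernel through the guest config; this is a sketch assuming an xl-style config file (the file path is hypothetical, and the option values simply mirror the ones tried in this thread):

```shell
# Hypothetical xl domU config fragment (e.g. /etc/xen/domu.cfg):
# route early and regular console output to the Xen console and
# raise the log level so shutdown/kexec messages are visible.
extra = "console=hvc0 earlyprintk=hvc0 loglevel=7"
```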



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 21:43:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 21:43:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52697.92016 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kovcm-0006X5-9d; Mon, 14 Dec 2020 21:43:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52697.92016; Mon, 14 Dec 2020 21:43:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kovcm-0006Wq-16; Mon, 14 Dec 2020 21:43:20 +0000
Received: by outflank-mailman (input) for mailman id 52697;
 Mon, 14 Dec 2020 20:58:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6luh=FS=kernel.org=saeed@srs-us1.protection.inumbo.net>)
 id 1kouvZ-0001zU-Fj
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 20:58:41 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dbe7f83e-25cb-4c26-a5be-c834ffa7c5b1;
 Mon, 14 Dec 2020 20:58:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dbe7f83e-25cb-4c26-a5be-c834ffa7c5b1
Message-ID: <8035075adf8738792f4fa39032eeeb997bc1e653.camel@kernel.org>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607979519;
	bh=cldqRzZxWGs3Mp0GWam6BN3UIEF7254f3nfHMyN7cv4=;
	h=Subject:From:To:Cc:Date:In-Reply-To:References:From;
	b=Gtxqgx/zUgAV/HC4HUDaTWWErUYbIXoTgkKUYMD/ELlewv6ngbmFefFbLOnnQE37B
	 fwW9w8lNXB0jkS0zaOY6KiQuNNvzR6i/TAi+fgG1AW/EykMErJidLU/39fKuq2JQ6H
	 cYZqxR/cWcdcz2hfp8VGfziH+pOYPr8oSmVaMVsoIT/cKvofZxClMOcYe7qD5WQeeL
	 NIffLKExOZStE6EtkCW8G8ODK2nn1z9f76JCt7Kq0Z4tPHOOMdZSDtSFcABv2Uu75T
	 yhfntlEnCdaAYrYdYHk5OeIYZ35zyWy0f9d7SzNO3zktL7NiFTX5VRmoH1k+TTpoKc
	 hhNH7an3hKyYw==
Subject: Re: [patch 23/30] net/mlx5: Use effective interrupt affinity
From: Saeed Mahameed <saeed@kernel.org>
To: Thomas Gleixner <tglx@linutronix.de>, LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>, Marc Zyngier <maz@kernel.org>, 
 Leon Romanovsky <leon@kernel.org>, "David S. Miller" <davem@davemloft.net>,
 Jakub Kicinski <kuba@kernel.org>, netdev@vger.kernel.org,
 linux-rdma@vger.kernel.org, "James E.J. Bottomley"
 <James.Bottomley@HansenPartnership.com>, Helge Deller <deller@gmx.de>,
 afzal mohammed <afzal.mohd.ma@gmail.com>, linux-parisc@vger.kernel.org,
 Russell King <linux@armlinux.org.uk>, linux-arm-kernel@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Christian
 Borntraeger <borntraeger@de.ibm.com>, Heiko Carstens <hca@linux.ibm.com>,
 linux-s390@vger.kernel.org, Jani Nikula <jani.nikula@linux.intel.com>, 
 Joonas Lahtinen <joonas.lahtinen@linux.intel.com>, Rodrigo Vivi
 <rodrigo.vivi@intel.com>, David Airlie <airlied@linux.ie>, Daniel Vetter
 <daniel@ffwll.ch>, Pankaj Bharadiya
 <pankaj.laxminarayan.bharadiya@intel.com>, Chris Wilson
 <chris@chris-wilson.co.uk>, Wambui Karuga <wambui.karugax@gmail.com>, 
 intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, Tvrtko
 Ursulin <tvrtko.ursulin@linux.intel.com>, Linus Walleij
 <linus.walleij@linaro.org>,  linux-gpio@vger.kernel.org, Lee Jones
 <lee.jones@linaro.org>, Jon Mason <jdmason@kudzu.us>, Dave Jiang
 <dave.jiang@intel.com>, Allen Hubbe <allenbh@gmail.com>,
 linux-ntb@googlegroups.com, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Rob Herring <robh@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>, Michal
 Simek <michal.simek@xilinx.com>,  linux-pci@vger.kernel.org, Karthikeyan
 Mitran <m.karthikeyan@mobiveil.co.in>,  Hou Zhiqiang
 <Zhiqiang.Hou@nxp.com>, Tariq Toukan <tariqt@nvidia.com>, Boris Ostrovsky
 <boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Date: Mon, 14 Dec 2020 12:58:36 -0800
In-Reply-To: <20201210194044.876342330@linutronix.de>
References: <20201210192536.118432146@linutronix.de>
	 <20201210194044.876342330@linutronix.de>
Content-Type: text/plain; charset="UTF-8"
User-Agent: Evolution 3.36.5 (3.36.5-1.fc32) 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Thu, 2020-12-10 at 20:25 +0100, Thomas Gleixner wrote:
> Using the interrupt affinity mask for checking locality is not really
> working well on architectures which support effective affinity masks.
> 
> The affinity mask is either the system wide default or set by user
> space, but the architecture can or even must reduce the mask to the
> effective set, which means that checking the affinity mask itself does
> not really tell about the actual target CPUs.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Saeed Mahameed <saeedm@nvidia.com>
> Cc: Leon Romanovsky <leon@kernel.org>
> Cc: "David S. Miller" <davem@davemloft.net>
> Cc: Jakub Kicinski <kuba@kernel.org>
> Cc: netdev@vger.kernel.org
> Cc: linux-rdma@vger.kernel.org
> 

Acked-by: Saeed Mahameed <saeedm@nvidia.com>



From xen-devel-bounces@lists.xenproject.org Mon Dec 14 22:12:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 22:12:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52731.92047 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kow4R-0001Cq-Pm; Mon, 14 Dec 2020 22:11:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52731.92047; Mon, 14 Dec 2020 22:11:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kow4R-0001Cj-Ms; Mon, 14 Dec 2020 22:11:55 +0000
Received: by outflank-mailman (input) for mailman id 52731;
 Mon, 14 Dec 2020 22:11:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nAAc=FS=gmail.com=persaur@srs-us1.protection.inumbo.net>)
 id 1kow4P-0001B7-Rp
 for xen-devel@lists.xenproject.org; Mon, 14 Dec 2020 22:11:53 +0000
Received: from mail-il1-x12c.google.com (unknown [2607:f8b0:4864:20::12c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7aea62ea-7663-44e8-8099-1bc20991033c;
 Mon, 14 Dec 2020 22:11:52 +0000 (UTC)
Received: by mail-il1-x12c.google.com with SMTP id p5so17321779iln.8
 for <xen-devel@lists.xenproject.org>; Mon, 14 Dec 2020 14:11:52 -0800 (PST)
Received: from [100.64.72.3] ([173.245.215.240])
 by smtp.gmail.com with ESMTPSA id l78sm12417573ild.30.2020.12.14.14.11.51
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 14 Dec 2020 14:11:51 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7aea62ea-7663-44e8-8099-1bc20991033c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=content-transfer-encoding:from:mime-version:subject:date:message-id
         :references:cc:in-reply-to:to;
        bh=Ehqbtw+xjl33wXScdfoTBtwSesAfNQAHAc1Tvg1WS/A=;
        b=bhDZlepEmyvUsTHOGP1ik251oafXHlXFWOowM7IaOoqg0iS/8Icif5UA/xTazcBvJq
         1JwCHqmgvAHo4BaG8hhclCPdJbpBuZVF17Z2I4fJMDLX3MD3LpOwrG284On/N5xLm097
         lb9sKFV1cH8ZlK+bUVDu1Je7HPc7uEHZYsR0vdbkc2A00ZEVzwzBNywrbj2asvHZ6gqV
         tyJ2x8mRcsLt+yFR9DmWRa2MbHEuxOnHauD4bINQjJFNL2uMy0smD2TGPLAX0eKtqCBf
         I+88KM04l6lDjr5tSxhzAc4YNXzfTrSoagXE9oHuGQu6MXT7bi07kWZpQb+2+LUFLtaO
         bGvg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:content-transfer-encoding:from:mime-version
         :subject:date:message-id:references:cc:in-reply-to:to;
        bh=Ehqbtw+xjl33wXScdfoTBtwSesAfNQAHAc1Tvg1WS/A=;
        b=aUE5MQ5U0iVEtPan/aYzB4R1aI+boaQkEmo4GNZ6OXF82DwJVtYn0x+htgB3DDFfvU
         4Us80fM4Q2GKGX3ikhgtbhdc9TmTYbAk6ggO/bcQpcnINlcvJKXYZA25IyqVJKAisGfT
         dDCGoqjp8X0TOcAwH5/tzdq2vlF1/jGiLTi0KQZatGSybtJJ7O/3Wg+4HilLu+9eV1tM
         nMcA2iRHSpKIP492BRaxTDuaWQFD1+IanwvfEWrTsvFWNHVswGVpYJBD4wBQliXfQn8v
         ZTQbY5KNs0Kdc0UhMDgUqKVZb3GdKaYs+ZHDNZcRBdhq8xyFOtpTAvHmU9cejE/3XM2l
         jJrg==
X-Gm-Message-State: AOAM533mzUZJo1FJ1gePV9OfZ2ACp9yBhqwC6q2WOdD0soBX1ungDJZ3
	mpkMvADVq5iT1+NmpTSQ1N8=
X-Google-Smtp-Source: ABdhPJyNHNP//MQMvEkBPFkEEZAWMHVXNF+rWzY7xlXJMdRAxwhhXwmaBf3gvE9hKilmEQePahazEg==
X-Received: by 2002:a92:da46:: with SMTP id p6mr24745755ilq.136.1607983912338;
        Mon, 14 Dec 2020 14:11:52 -0800 (PST)
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
From: Rich Persaud <persaur@gmail.com>
Mime-Version: 1.0 (1.0)
Subject: Re: [openxt-dev] Re: Follow up on libxl-fix-reboot.patch
Date: Mon, 14 Dec 2020 17:11:50 -0500
Message-Id: <3ACCFEC6-A8B7-48E6-AA3F-48D4CDE75FA4@gmail.com>
References: <CAKf6xps-nM13E19SVS3NJwq6LwOJLUwN+FC6k_Sp9-_YaRt-EA@mail.gmail.com>
Cc: Chris Rogers <crogers122@gmail.com>,
 Jason Andryuk <jandryuk@gmail.com>
In-Reply-To: <CAKf6xps-nM13E19SVS3NJwq6LwOJLUwN+FC6k_Sp9-_YaRt-EA@mail.gmail.com>
To: openxt <openxt@googlegroups.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 =?utf-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
 Olivier Lambert <olivier.lambert@vates.fr>,
 Andrew Cooper <andrew.cooper3@citrix.com>
X-Mailer: iPhone Mail (18C66)

(adding xen-devel & toolstack devs)

On Dec 14, 2020, at 16:12, Jason Andryuk <jandryuk@gmail.com> wrote:
> 
> On Fri, Dec 11, 2020 at 3:56 PM Chris Rogers <crogers122@gmail.com> wrote:
>> 
>> This is a follow up to a request during our roadmapping meeting to
>> clarify the purpose of libxl-fix-reboot.patch on the current version
>> of Xen in OpenXT (4.12).  It's pretty simple.  While the domctl API
>> does define a trigger for reset in xen/include/public/domctl.h:
>> 
> 
>> The call stack looks like this:
>>> libxl_send_trigger(ctx, domid, LIBXL_TRIGGER_RESET, 0);
>>> xc_domain_send_trigger(ctx->xch, domid, XEN_DOMCTL_SENDTRIGGER_RESET, vcpuid);
>>> do_domctl()
>>> arch_do_domctl()
>> and reaching the case statement in arch_do_domctl() for
>> XEN_DOMCTL_sendtrigger, with RESET, we get -ENOSYS as illustrated above.
> 
> Thanks, Chris.  It's surprising that xl trigger reset exists, but
> isn't wired through to do anything.  And that reboot has a fallback
> command to something that doesn't work.
> 
> If we have to turn reboot into shutdown + start, it seems like that
> could be done in xenmgr instead of libxl.  Similarly, this may avoid
> the signaling between xenmgr and libxl.

If upstream Xen's libxl cannot support VM reset, can we drop/hide reset
support from the OpenXT CLI and UIVM? That would avoid incurring costs
for a fake feature with no long-term future. A reset is not the same as
shutdown + start.  If reset is not supportable, the user can perform
shutdown + reboot manually. Then they would at least be aware of the
consequences, e.g. temporary storage snapshots will be deleted and
changes lost immediately.

OpenXT derivatives which need reset support can use another Xen
toolstack which provides this capability, e.g. the Citrix XenServer xapi
ocaml toolstack, for this single function.  Or the old XenClient xenops
fork of xapi.

The long-term direction, based on an upstream prototype in Rust, is a
low level toolstack daemon that accepts input over an RPC protocol that
is stable and versioned, which will drive a stable hypercall ABI for
Xen. We can ask for reset support to be prioritized in the Rust
prototype, which would then enable testing of OpenXT integration.

Hopefully an upstream Xen LibXL developer will recall why the reset
logic isn't yet fully wired up.

Rich


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 22:21:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 22:21:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52737.92060 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kowDE-0002Az-K7; Mon, 14 Dec 2020 22:21:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52737.92060; Mon, 14 Dec 2020 22:21:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kowDE-0002As-H3; Mon, 14 Dec 2020 22:21:00 +0000
Received: by outflank-mailman (input) for mailman id 52737;
 Mon, 14 Dec 2020 22:20:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kowDD-0002Ak-37; Mon, 14 Dec 2020 22:20:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kowDC-0007UZ-Uy; Mon, 14 Dec 2020 22:20:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kowDC-00020c-Ml; Mon, 14 Dec 2020 22:20:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kowDC-0006kL-MI; Mon, 14 Dec 2020 22:20:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YpSXiWvHcyBS2e7cYhd6sqLmyR9l/+avvqIBYHiHYkg=; b=piK7n7VZdDEEHpegpXLtSHFSaU
	0iJCDhuNBTDhatxv8+NoP51hpC42KZ324kNt/V/sa9dJ8hQrMrAM76YeXd2D/HE4zX4HiuHcD07Aw
	vgSpqLPdNKZyZ1MFfR6jzx3mHc5yXLudjTtwAQ7hQAoDuPZ55bt4sNMx5SohiP3bUxjg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157527-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157527: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=793c59da135be023b02ff93a57a3bb6b34044906
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 14 Dec 2020 22:20:58 +0000

flight 157527 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157527/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 793c59da135be023b02ff93a57a3bb6b34044906
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    5 days
Failing since        157348  2020-12-09 15:39:39 Z    5 days   38 attempts
Testing same since   157521  2020-12-14 11:09:46 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Baraneedharan Anbazhagan <anbazhagan@hp.com>
  Baraneedharan Anbazhagan <anbazhgan@hp.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Star Zeng <star.zeng@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 428 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 14 23:58:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 14 Dec 2020 23:58:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52751.92075 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koxj1-0001lE-Ru; Mon, 14 Dec 2020 23:57:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52751.92075; Mon, 14 Dec 2020 23:57:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koxj1-0001l7-OU; Mon, 14 Dec 2020 23:57:55 +0000
Received: by outflank-mailman (input) for mailman id 52751;
 Mon, 14 Dec 2020 23:57:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koxj0-0001kz-BA; Mon, 14 Dec 2020 23:57:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koxj0-0000fd-4d; Mon, 14 Dec 2020 23:57:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koxiz-0005fy-P4; Mon, 14 Dec 2020 23:57:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1koxiz-0003hg-OY; Mon, 14 Dec 2020 23:57:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wY9+yKWtfvRMiUc/mmsbNAQ70Z6VwOCtNzT8Ncv6Ij0=; b=ktO3zP4bW2W8dVS0YJWEdvv9Hc
	C0zgC/uxCu8pqXfTaHU3E8ift6QAp5a4GpEj1k6TYyYblE1M6QFXUBwOPAZQ9qzMuEGh4DAcB07oy
	Nojj2BtcR1OIcW0EXJvCHM+ojBvI/abTfGD/WPmKv1U3FEETFTTe5rGQkl5BP0i5H7ho=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157526-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157526: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=17584289af1aaa72c932e7e47c25d583b329dc45
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 14 Dec 2020 23:57:53 +0000

flight 157526 qemu-mainline real [real]
flight 157532 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157526/
http://logs.test-lab.xenproject.org/osstest/logs/157532/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                17584289af1aaa72c932e7e47c25d583b329dc45
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  116 days
Failing since        152659  2020-08-21 14:07:39 Z  115 days  242 attempts
Testing same since   157474  2020-12-13 01:37:19 Z    1 days    5 attempts

------------------------------------------------------------
308 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 75406 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 00:42:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 00:42:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52769.92095 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koyPj-0006jM-Ki; Tue, 15 Dec 2020 00:42:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52769.92095; Tue, 15 Dec 2020 00:42:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koyPj-0006jF-Hm; Tue, 15 Dec 2020 00:42:03 +0000
Received: by outflank-mailman (input) for mailman id 52769;
 Tue, 15 Dec 2020 00:42:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0G7T=FT=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1koyPi-0006jA-85
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 00:42:02 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 04669d94-49b6-45ff-8187-df3ccd557880;
 Tue, 15 Dec 2020 00:42:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 04669d94-49b6-45ff-8187-df3ccd557880
Date: Mon, 14 Dec 2020 16:41:51 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607992920;
	bh=T8CNXlo77Y0QR1I+LGgqPsErHuBpnB0uR8H9yqE0Kbk=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=VQGpDX17bPkFVRl6ftvtU6SGSVGP/Gn3mcfMYSFMPni3Th4hTMKMR0lMWt+3HWjE2
	 BqMMOG7vbmN3SjLlJxTfDtw0uto3a8iRHvNwVmXCNaLSA/erSzxHMGRl4ivFDIuEeJ
	 lRET5bdqmEHazpOhtID11yfnb5mGx779HgpKThUn9Inh27S+CdSJ9ZWMDg9Yyec+Oi
	 zzQ6wy4ZiVZcgL62LZGiW/p1s21sH8QEJppZQaGcODlbIKWRjwGsTDGu2gOHLVJQIG
	 HGYjA/Zrac40EkS6X+9+kDSsCriXqGjEEpHpz+A/5w+P7bLJh7iXFuwnLy2cayYo37
	 V8lVYyINhEsQQ==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rich Persaud <persaur@gmail.com>
cc: openxt <openxt@googlegroups.com>, xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Anthony PERARD <anthony.perard@citrix.com>, 
    =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>, 
    Olivier Lambert <olivier.lambert@vates.fr>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    Chris Rogers <crogers122@gmail.com>, Jason Andryuk <jandryuk@gmail.com>, 
    wl@xen.org, jbeulich@suse.com, andrew.cooper3@citrix.com, 
    roger.pau@citrix.com
Subject: Re: [openxt-dev] Re: Follow up on libxl-fix-reboot.patch
In-Reply-To: <3ACCFEC6-A8B7-48E6-AA3F-48D4CDE75FA4@gmail.com>
Message-ID: <alpine.DEB.2.21.2012141632020.4040@sstabellini-ThinkPad-T480s>
References: <CAKf6xps-nM13E19SVS3NJwq6LwOJLUwN+FC6k_Sp9-_YaRt-EA@mail.gmail.com> <3ACCFEC6-A8B7-48E6-AA3F-48D4CDE75FA4@gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-754717913-1607992920=:4040"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-754717913-1607992920=:4040
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Mon, 14 Dec 2020, Rich Persaud wrote:
> (adding xen-devel & toolstack devs)
> 
> On Dec 14, 2020, at 16:12, Jason Andryuk <jandryuk@gmail.com> wrote:
> > 
> > ﻿On Fri, Dec 11, 2020 at 3:56 PM Chris Rogers <crogers122@gmail.com> wrote:
> >> 
> >> This is a follow up to a request during our roadmapping meeting to clarify the purpose of libxl-fix-reboot.patch on the current version of Xen in OpenXT (4.12).  It's pretty simple.  While the domctl API does define a trigger for reset in xen/include/public/domctl.h:
> >> 
> > 
> >> The call stack looks like this:
> >>> libxl_send_trigger(ctx, domid, LIBXL_TRIGGER_RESET, 0);
> >>> xc_domain_send_trigger(ctx->xch, domid, XEN_DOMCTL_SENDTRIGGER_RESET, vcpuid);
> >>> do_domctl()
> >>> arch_do_domctl()
> >> and, on reaching the case statement in arch_do_domctl() for XEN_DOMCTL_sendtrigger with RESET, we get -ENOSYS as illustrated above.
> > 
> > Thanks, Chris.  It's surprising that "xl trigger reset" exists but
> > isn't wired through to do anything, and that reboot falls back to a
> > command that doesn't work.

I'm missing some of the context of this thread -- let me try to understand
the issue properly.

It looks like HVM reboot doesn't work properly -- or is it HVM reset
(in-guest reset)? It appears to be implemented by calling "xl trigger
reset", which goes through libxl_send_trigger. The call chain leads to a
XEN_DOMCTL_sendtrigger domctl with XEN_DOMCTL_SENDTRIGGER_RESET as a
parameter, which is not implemented on x86.

That looks like a pretty serious bug :-)


I imagine it is in that state because the main way to reboot is to call
"xl reboot", which is implemented with the PV protocol's "reboot" write
to xenstore? Either way, the bug should be fixed.

What does your libxl-fix-reboot.patch do? Does it add an
implementation of XEN_DOMCTL_SENDTRIGGER_RESET?
--8323329-754717913-1607992920=:4040--


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 00:50:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 00:50:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52774.92108 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koyXv-0007fE-GV; Tue, 15 Dec 2020 00:50:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52774.92108; Tue, 15 Dec 2020 00:50:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1koyXv-0007f7-Dc; Tue, 15 Dec 2020 00:50:31 +0000
Received: by outflank-mailman (input) for mailman id 52774;
 Tue, 15 Dec 2020 00:50:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koyXu-0007ez-Ah; Tue, 15 Dec 2020 00:50:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koyXt-0002Eh-W3; Tue, 15 Dec 2020 00:50:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1koyXt-00085q-NC; Tue, 15 Dec 2020 00:50:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1koyXt-00010I-Mh; Tue, 15 Dec 2020 00:50:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qzzzjI0+kWBSbD8yjY+lpoOSXq+5ub99n9vrR7SwCCU=; b=yRvh3rJ6L/oceNtwJf13mAqFUY
	6tHFQnRGYSLn3SCiUAA454Um56TVzkJz1kP7hf/fx39EgnZbLoyBOV4Jf9xsTL5fAEGCNm209qndG
	6Xv4ZftJdvt57XpcggCnaQnQEDwJlNehyjPCtJiNajpqjXymXtacUgVrF8lQrar531dU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157531-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157531: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=01726b6d23d4c8a870dbd5b96c0b9e3caf38ef3c
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Dec 2020 00:50:29 +0000

flight 157531 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157531/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345
 build-i386                    6 xen-build                fail REGR. vs. 157345
 build-i386-xsm                6 xen-build                fail REGR. vs. 157345

Tests which did not succeed, but are not blocking:
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 01726b6d23d4c8a870dbd5b96c0b9e3caf38ef3c
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    5 days
Failing since        157348  2020-12-09 15:39:39 Z    5 days   39 attempts
Testing same since   157531  2020-12-14 22:40:42 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Ard Biesheuvel <ard.biesheuvel@arm.com>
  Baraneedharan Anbazhagan <anbazhagan@hp.com>
  Baraneedharan Anbazhagan <anbazhgan@hp.com>
  Fan Wang <fan.wang@intel.com>
  James Bottomley <jejb@linux.ibm.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Michael Kubacki <michael.kubacki@microsoft.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Star Zeng <star.zeng@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 561 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 01:53:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 01:53:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52795.92123 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kozWI-0002lG-3p; Tue, 15 Dec 2020 01:52:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52795.92123; Tue, 15 Dec 2020 01:52:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kozWH-0002l9-Vm; Tue, 15 Dec 2020 01:52:53 +0000
Received: by outflank-mailman (input) for mailman id 52795;
 Tue, 15 Dec 2020 01:52:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0G7T=FT=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kozWG-0002l4-JB
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 01:52:52 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d17f2660-9a22-4df9-982a-656cc2eea155;
 Tue, 15 Dec 2020 01:52:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d17f2660-9a22-4df9-982a-656cc2eea155
Date: Mon, 14 Dec 2020 17:52:49 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1607997170;
	bh=ViCdukLfbIAXEKuyIdxNNST56FI+jT/9Fj7edXZyNco=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=iRcSeeWm/8SKr3BqBxA4U1mzN+7jkn0zvgak2NxvIcWQ06AmapyiUsEZJ8CQSh9mT
	 W/D4gVi04ICXs8EsW0z5RCKNkT8ctwJrPW9Yvo8Sr0XIxXYiYLJ1tM4rUQHuUP0eEe
	 Owcc19VElxSyg6ms+H8cSZ45tzogs8ni0BhSWSjbkRaPHc1iF2kFnhFSqSRIp9FXZh
	 vzrlMWvlT9anbvdkH5UAtYNhYm61OVyhIZ8H4W2hYhWcwj0Mz9o4AHCmadgLT9u7Ti
	 ELJCYpMAG4zRXOmFebEAZCJO/KaAi+igJAhzyHtP+MFjEd5k/LiswYhtF9/FLCbAPX
	 npKhrQ1QLsFuw==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <Rahul.Singh@arm.com>
cc: Julien Grall <julien@xen.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>, 
    Paul Durrant <paul@xen.org>
Subject: Re: [PATCH v3 0/8] xen/arm: Add support for SMMUv3 driver
In-Reply-To: <8ED5EAAF-48B0-4289-BCB0-232F70001134@arm.com>
Message-ID: <alpine.DEB.2.21.2012141752330.4040@sstabellini-ThinkPad-T480s>
References: <cover.1607617848.git.rahul.singh@arm.com> <ea121c23-4deb-c566-4d1d-bb9dd4959015@xen.org> <8ED5EAAF-48B0-4289-BCB0-232F70001134@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1339817675-1607997170=:4040"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1339817675-1607997170=:4040
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Mon, 14 Dec 2020, Rahul Singh wrote:
> Hello Julien, Stefano
> 
> > On 11 Dec 2020, at 2:29 pm, Julien Grall <julien@xen.org> wrote:
> > 
> > Hi Rahul,
> > 
> > On 10/12/2020 16:56, Rahul Singh wrote:
> >> This patch series is v3 of the work to add support for the SMMUv3 driver.
> >> The approach taken is to first merge the Linux copy of the SMMUv3 driver
> >> (tag v5.8.18) and then modify the driver to build on Xen.
> >> MSI and PCI ATS functionality are not supported; that code is neither
> >> compiled nor tested, and is guarded by the CONFIG_PCI_ATS and CONFIG_MSI
> >> flags.
> >> Code specific to Linux is removed from the driver to avoid dead code.
> >> The driver is currently supported as a tech preview.
> >> The following functionality should be supported before the driver moves
> >> out of tech preview:
> >> 1. Investigate the timing analysis of using a spin lock in place of a
> >>    mutex when attaching a device to the SMMU.
> >> 2. Merge the latest Linux SMMUv3 driver code once atomic operations are
> >>    available in Xen.
> >> 3. PCI ATS and MSI interrupts should be supported.
> >> 4. Investigate side-effects of using a tasklet in place of a threaded
> >>    IRQ and fix any that are found.
> > In your last e-mail, you wrote that you would investigate and then come back to us. It wasn't clear that this meant you would not deal with it in this series.
> > 
> 
> Yes, I will investigate the side-effects of using a tasklet, but not as part of this patch series. It would be great if we could proceed with this patch series as it is (using a tasklet in place of a threaded IRQ).
> 
> >> 5. The fallthrough keyword should be supported.
> > 
> > This one should really be done now... It is not a complicated one...
> 
> OK. I will implement it in the next version.
>  
> > 
> >> 6. Implement the ffsll function in bitops.h file.
> 
> While implementing the code I found out that Linux uses the built-in
> function __builtin_ffsll() for ffsll, and there is no open-coded
> implementation of ffsll in Linux. If we implement ffsll in Xen we will
> diverge from Linux. I am thinking of adding the code below to
> "xen/include/asm-arm/bitops.h".
> Please let me know if it is okay:
> 
> static always_inline int ffsll(long long x)
> {
>     return __builtin_ffsll(x);
> }

I think that's OK if it builds with clang too.
--8323329-1339817675-1607997170=:4040--


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 01:57:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 01:57:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52800.92135 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kozaH-0002wO-Os; Tue, 15 Dec 2020 01:57:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52800.92135; Tue, 15 Dec 2020 01:57:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kozaH-0002wH-LR; Tue, 15 Dec 2020 01:57:01 +0000
Received: by outflank-mailman (input) for mailman id 52800;
 Tue, 15 Dec 2020 01:56:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kozaF-0002w9-Nk; Tue, 15 Dec 2020 01:56:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kozaF-0001Ph-IN; Tue, 15 Dec 2020 01:56:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kozaF-0001j1-8U; Tue, 15 Dec 2020 01:56:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kozaF-00051c-7y; Tue, 15 Dec 2020 01:56:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KyHO7OF3+hFGU6GVN9oREnNek32CXUCwgPqtceQGEK0=; b=IFdy0wYUefjddaptiPuIs1FcVb
	MyE6LGVS1OBD99PVefLLfWHPVxhQa8vvtXiSbUOhyBr87QIf+N+R76KsaY+WR4HAyC7kzocRw/cU6
	/au467/HBocIwH6veEj8XGtgeJSI/BNf4fGpJbnCmX4wdjCrWOt1NfknNobFyVSHLna0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157535-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157535: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=01726b6d23d4c8a870dbd5b96c0b9e3caf38ef3c
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Dec 2020 01:56:59 +0000

flight 157535 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157535/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    6 xen-build                fail REGR. vs. 157345
 build-amd64                   6 xen-build                fail REGR. vs. 157345
 build-amd64-xsm               6 xen-build                fail REGR. vs. 157345
 build-i386-xsm                6 xen-build                fail REGR. vs. 157345

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 01726b6d23d4c8a870dbd5b96c0b9e3caf38ef3c
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    5 days
Failing since        157348  2020-12-09 15:39:39 Z    5 days   40 attempts
Testing same since   157531  2020-12-14 22:40:42 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Ard Biesheuvel <ard.biesheuvel@arm.com>
  Baraneedharan Anbazhagan <anbazhagan@hp.com>
  Baraneedharan Anbazhagan <anbazhgan@hp.com>
  Fan Wang <fan.wang@intel.com>
  James Bottomley <jejb@linux.ibm.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Michael Kubacki <michael.kubacki@microsoft.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Star Zeng <star.zeng@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 561 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 02:16:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 02:16:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52810.92150 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kozsx-00057m-CT; Tue, 15 Dec 2020 02:16:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52810.92150; Tue, 15 Dec 2020 02:16:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kozsx-00057f-8O; Tue, 15 Dec 2020 02:16:19 +0000
Received: by outflank-mailman (input) for mailman id 52810;
 Tue, 15 Dec 2020 02:16:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9m1Y=FT=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kozsw-00057a-N6
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 02:16:18 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7b173889-a453-4dec-afb5-c9e1d5981d74;
 Tue, 15 Dec 2020 02:16:17 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 0BF2G6LU027013
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Mon, 14 Dec 2020 21:16:12 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 0BF2G67J027012;
 Mon, 14 Dec 2020 18:16:06 -0800 (PST) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7b173889-a453-4dec-afb5-c9e1d5981d74
Date: Mon, 14 Dec 2020 18:16:06 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Roman Shaposhnik <roman@zededa.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Julien Grall <julien@xen.org>,
        Xen-devel <xen-devel@lists.xenproject.org>
Subject: Xen-ARM DomUs
Message-ID: <X9gcZu5uJpXx8wNn@mattapan.m5p.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

Finally getting to the truly productive stages of my project with Xen on
ARM.

How many of the OSes which function as x86 DomUs for Xen also function as
ARM DomUs?  Getting Linux operational was straightforward, but what of
the others?

The available examples seem geared towards Linux DomUs.  I'm looking at a
FreeBSD installation image and it appears to expect EFI firmware.
Beyond noting that it contains a bunch of files oriented towards booting
on EFI, I can't say much about (booting) FreeBSD/ARM DomUs.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Tue Dec 15 02:35:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 02:35:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52815.92162 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp0BT-0006vm-UY; Tue, 15 Dec 2020 02:35:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52815.92162; Tue, 15 Dec 2020 02:35:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp0BT-0006vf-Rb; Tue, 15 Dec 2020 02:35:27 +0000
Received: by outflank-mailman (input) for mailman id 52815;
 Tue, 15 Dec 2020 02:35:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1f4I=FT=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1kp0BS-0006va-Kd
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 02:35:26 +0000
Received: from mail-qk1-x729.google.com (unknown [2607:f8b0:4864:20::729])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 76d2e616-f61a-4945-9ead-b31f519c853f;
 Tue, 15 Dec 2020 02:35:25 +0000 (UTC)
Received: by mail-qk1-x729.google.com with SMTP id c7so17825539qke.1
 for <xen-devel@lists.xenproject.org>; Mon, 14 Dec 2020 18:35:25 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 76d2e616-f61a-4945-9ead-b31f519c853f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=zededa.com; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=FPaR7CVrzngNMt+LKvxWdO+M2sng7xDIlm/Ym2ZB9wI=;
        b=bBeoEX7PQRKWF5zld0qC4gpyVMjY9/5NA8+oYLI7AqXug23ER9p3LZKypQeXwETCTR
         GokDoKqaQt8yOCGLdxEB0hjijRfl0+Jv8VR5jlRUSJ19Tfmwg/ppEBN1gPtkDP7vdF4w
         mTHEfriUgq4Q0unnUSQvM/Qfw/OGNawdBprmB+j93h8IQMcfs5/pE2JF8ISex43s4aOu
         budF7iMaU/lRYIYyeowk6C2rrTXRMSH2FhoFTY0Uy9krGatwSZ7hhKBCJnEehuG6nah8
         C2m48zyHYdJrsvEcUd8e4WRBVr7FmNFPHgx5UQ51WeOlsaTeb7XUN4/H3kNVQjjSwNjx
         9g6A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=FPaR7CVrzngNMt+LKvxWdO+M2sng7xDIlm/Ym2ZB9wI=;
        b=PebRCYzO8Oxfz1PoJmjhNsG2HtZ2k3mvfhKKvfmY94cZY9q2mEF2e/Ebd5rpqYTu+5
         v5Z3V5O3DB/Y8axbRAH4AEhHf9w3ufOWNITrhnt2BjByC1P4M5bBETX8CQFwTwoqwBS1
         ihcSdhW4DnYRz3bLSJDk6p9OuD4nVFmqYRm+OfrwZ3EzYjpV8k7gNIe+IZ3OAN0GAjcx
         +8exgXhsPj1PYlRVXyO/xA9F2lr1Mfh607CzwRhIidfna+n3VWeLLXyDBFgxOrVmO7QO
         cRU0zTAIZZLGsWZ/9tiVYLF/kQFCRp9P6ouds1XBmY1UDzOnY8jrnu7mMqMAK/248B9t
         kgTA==
X-Gm-Message-State: AOAM532rvbmcJxJTwHvrgobBybQI6Kw7bSvqHNgXZd/TEY6N61ZMGFHU
	CgW+ceLsxfAHeNvJT4N8jR2Jp8r9PPnXzQPJk+UtHg==
X-Google-Smtp-Source: ABdhPJwz50lTMXTem38wwR1TpX1qxg3CP2Je0lRujFtDDfQ6VuSzDgo+W1fpfRS5l2/b3jC7wcS7v6x07EZLmf6N4hk=
X-Received: by 2002:a37:8081:: with SMTP id b123mr36385936qkd.157.1607999725376;
 Mon, 14 Dec 2020 18:35:25 -0800 (PST)
MIME-Version: 1.0
References: <X9gcZu5uJpXx8wNn@mattapan.m5p.com>
In-Reply-To: <X9gcZu5uJpXx8wNn@mattapan.m5p.com>
From: Roman Shaposhnik <roman@zededa.com>
Date: Mon, 14 Dec 2020 18:35:14 -0800
Message-ID: <CAMmSBy_8+PRWiSQxwRN2oB9mLmOnyoCr0mH4L-uUYhm=1GK7Xg@mail.gmail.com>
Subject: Re: Xen-ARM DomUs
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Mon, Dec 14, 2020 at 6:16 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
>
> Finally getting to the truly productive stages of my project with Xen on
> ARM.
>
> How many of the OSes which function as x86 DomUs for Xen also function as
> ARM DomUs?  Getting Linux operational was straightforward, but what of
> the others?

On EVE we have Windows running as pretty much a customer-facing demo:
    https://wiki.lfedge.org/display/EVE/How+get+Windows+10+running+on+a+Raspberry+Pi

> The available examples seem geared towards Linux DomUs.  I'm looking at a
> FreeBSD installation image and it appears to expect EFI firmware.
> Beyond noting that it contains a bunch of files oriented towards booting
> on EFI, I can't say much about (booting) FreeBSD/ARM DomUs.

Personally I'm about to make Plan9 (well 9front really) run as well ;-)

Thanks,
Roman.


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 02:36:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 02:36:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52819.92173 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp0Bs-00070W-7a; Tue, 15 Dec 2020 02:35:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52819.92173; Tue, 15 Dec 2020 02:35:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp0Bs-00070P-46; Tue, 15 Dec 2020 02:35:52 +0000
Received: by outflank-mailman (input) for mailman id 52819;
 Tue, 15 Dec 2020 02:35:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9m1Y=FT=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kp0Br-0006za-C7
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 02:35:51 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9bb1ed0f-9380-42f9-a352-4a127f62628b;
 Tue, 15 Dec 2020 02:35:47 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 0BF2ZXLZ027095
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Mon, 14 Dec 2020 21:35:39 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 0BF2ZW8L027094;
 Mon, 14 Dec 2020 18:35:32 -0800 (PST) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9bb1ed0f-9380-42f9-a352-4a127f62628b
Date: Mon, 14 Dec 2020 18:35:32 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] examples: Add PVH example to config example list
Message-ID: <X9gg9Ph2na22YKdj@mattapan.m5p.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

Somewhat helpful to actually install the example configurations.

Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>
---
 tools/examples/Makefile | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tools/examples/Makefile b/tools/examples/Makefile
index f86ed3a271..fd8fba757d 100644
--- a/tools/examples/Makefile
+++ b/tools/examples/Makefile
@@ -7,6 +7,7 @@ XEN_READMES += README.incompatibilities
 
 XEN_CONFIGS += xlexample.hvm
 XEN_CONFIGS += xlexample.pvlinux
+XEN_CONFIGS += xlexample.pvhlinux
 XEN_CONFIGS += xl.conf
 XEN_CONFIGS += cpupool
 
-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Tue Dec 15 02:48:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 02:48:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52825.92186 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp0O2-00083e-C3; Tue, 15 Dec 2020 02:48:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52825.92186; Tue, 15 Dec 2020 02:48:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp0O2-00083X-84; Tue, 15 Dec 2020 02:48:26 +0000
Received: by outflank-mailman (input) for mailman id 52825;
 Tue, 15 Dec 2020 02:48:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp0O0-00083P-Gc; Tue, 15 Dec 2020 02:48:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp0O0-0002ur-93; Tue, 15 Dec 2020 02:48:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp0O0-00033j-13; Tue, 15 Dec 2020 02:48:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kp0O0-0001i2-0W; Tue, 15 Dec 2020 02:48:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=J1ME+qo4IzO/Nu0YvcLp0CSRS7Drndh3XLT60eOWnmM=; b=EMbaObzA1XvclYIpDp3BFogPBf
	bgyltzDO3qeaRmxiafMdm5/dqoZdvS8NtBnptCC5csbbGkbY9mvQ4CxxU/9FJHEoxsjiwxg5PVYVU
	WxgfRKQlFIVvtafXguquAe+0FDRZFgvnl9CmxTHO4TADK/iawdRAvU0mhtfQmjm7lAzQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157537-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157537: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=01726b6d23d4c8a870dbd5b96c0b9e3caf38ef3c
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Dec 2020 02:48:24 +0000

flight 157537 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157537/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    6 xen-build                fail REGR. vs. 157345
 build-amd64                   6 xen-build                fail REGR. vs. 157345
 build-amd64-xsm               6 xen-build                fail REGR. vs. 157345
 build-i386-xsm                6 xen-build                fail REGR. vs. 157345

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 01726b6d23d4c8a870dbd5b96c0b9e3caf38ef3c
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    5 days
Failing since        157348  2020-12-09 15:39:39 Z    5 days   41 attempts
Testing same since   157531  2020-12-14 22:40:42 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Ard Biesheuvel <ard.biesheuvel@arm.com>
  Baraneedharan Anbazhagan <anbazhagan@hp.com>
  Baraneedharan Anbazhagan <anbazhgan@hp.com>
  Fan Wang <fan.wang@intel.com>
  James Bottomley <jejb@linux.ibm.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Michael Kubacki <michael.kubacki@microsoft.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Star Zeng <star.zeng@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 561 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 02:56:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 02:56:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52833.92201 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp0Vt-0000ZX-5z; Tue, 15 Dec 2020 02:56:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52833.92201; Tue, 15 Dec 2020 02:56:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp0Vt-0000ZQ-2y; Tue, 15 Dec 2020 02:56:33 +0000
Received: by outflank-mailman (input) for mailman id 52833;
 Tue, 15 Dec 2020 02:56:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp0Vq-0000ZI-Um; Tue, 15 Dec 2020 02:56:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp0Vq-00035A-NM; Tue, 15 Dec 2020 02:56:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp0Vq-0003F4-CQ; Tue, 15 Dec 2020 02:56:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kp0Vq-0005LH-Bx; Tue, 15 Dec 2020 02:56:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6vWAF0ncdkO2mczOsmCXo+mLKa84AiYeuY2dStMKSCs=; b=FkISaFYb7XuSBntouC9FHwqU9b
	E7Hvv6LJpY/r51g2pH8RCRDpYBXzizPAmJOvNocVi7IVpw5MZCyVcXDQIe7FNRJzobEZd0YzkYdhC
	m1leNI7V0lTuZbb3FSv2PnQApDaDiNxKGULmKlAbu87lDV66GBxb9J/TD/YVptpIqBZk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157529-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157529: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=2c85ebc57b3e1817b6ce1a6b703928e113a90442
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Dec 2020 02:56:30 +0000

flight 157529 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157529/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-xsm       8 xen-boot         fail in 157519 pass in 157529
 test-arm64-arm64-xl-credit1   8 xen-boot                   fail pass in 157519

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit1 11 leak-check/basis(11) fail in 157519 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                2c85ebc57b3e1817b6ce1a6b703928e113a90442
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  136 days
Failing since        152366  2020-08-01 20:49:34 Z  135 days  236 attempts
Testing same since   157519  2020-12-14 08:29:24 Z    0 days    2 attempts

------------------------------------------------------------
3694 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 707303 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 02:59:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 02:59:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52841.92216 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp0Yu-0000of-UZ; Tue, 15 Dec 2020 02:59:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52841.92216; Tue, 15 Dec 2020 02:59:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp0Yu-0000oY-RY; Tue, 15 Dec 2020 02:59:40 +0000
Received: by outflank-mailman (input) for mailman id 52841;
 Tue, 15 Dec 2020 02:59:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9m1Y=FT=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kp0Yt-0000oT-Uy
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 02:59:39 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 068110e1-8c94-4a12-8cd0-15602c1c0cca;
 Tue, 15 Dec 2020 02:59:39 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 0BF2xTKv027184
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Mon, 14 Dec 2020 21:59:34 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 0BF2xSaA027183;
 Mon, 14 Dec 2020 18:59:28 -0800 (PST) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 068110e1-8c94-4a12-8cd0-15602c1c0cca
Date: Mon, 14 Dec 2020 18:59:28 -0800
From: Elliott Mitchell <ehem+undef@m5p.com>
To: Roman Shaposhnik <roman@zededa.com>
Cc: Elliott Mitchell <ehem+xen@m5p.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Julien Grall <julien@xen.org>,
        Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: Xen-ARM DomUs
Message-ID: <X9gmkGhQQBOmmBe5@mattapan.m5p.com>
References: <X9gcZu5uJpXx8wNn@mattapan.m5p.com>
 <CAMmSBy_8+PRWiSQxwRN2oB9mLmOnyoCr0mH4L-uUYhm=1GK7Xg@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAMmSBy_8+PRWiSQxwRN2oB9mLmOnyoCr0mH4L-uUYhm=1GK7Xg@mail.gmail.com>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Mon, Dec 14, 2020 at 06:35:14PM -0800, Roman Shaposhnik wrote:
> On Mon, Dec 14, 2020 at 6:16 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
> >
> > Finally getting to the truly productive stages of my project with Xen on
> > ARM.
> >
> > How many of the OSes which function as x86 DomUs for Xen, function as
> > ARM DomUs?  Getting Linux operational was straightforward, but what of
> > others?
> 
> On EVE we have Windows running as a pretty much a customer-facing demo:
>     https://wiki.lfedge.org/display/EVE/How+get+Windows+10+running+on+a+Raspberry+Pi
> 

Sorry to spoil the achievement, but Tianocore beat you to having
Windows on a RP4 by 4 months:
https://rpi4-uefi.dev/alternate-guide-running-windows-10-on-the-pi-4/

> > The available examples seem geared towards Linux DomUs.  I'm looking at a
> > FreeBSD installation image and it appears to expect an EFI firmware.
> > Beyond having a bunch of files appearing oriented towards booting on EFI
> > I can't say much about (booting) FreeBSD/ARM DomUs.
> 
> Personally I'm about to make Plan9 (well 9front really) run as well ;-)

Some people may like those types of instructions, but I really hate them.
I like Tianocore's better, since I can do my type of adjustment better.
(using different amount of storage or other virtual devices)

I've already got FreeBSD installation media, issue is setting up a xl.cfg
file and/or figuring out which bits I need to extract off their media
(ah, actual kernel is /boot/kernel/kernel; an ELF file using the
interpreter /red/herring).
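
[Archival note: the xl.cfg being described would look roughly like the sketch below. This is a minimal guess, not from the thread -- all paths, the guest name, and the bridge name are placeholders, and whether FreeBSD's /boot/kernel/kernel direct-boots this way on Arm is precisely the open question here.]

```
# Minimal Arm64 domU config sketch -- every path/name here is a placeholder
name    = "freebsd"
kernel  = "/path/to/boot/kernel/kernel"   # the ELF kernel extracted off the media
memory  = 1024
vcpus   = 2
disk    = [ 'format=raw, vdev=xvda, access=rw, target=/path/to/freebsd.img' ]
vif     = [ 'bridge=xenbr0' ]
```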


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Tue Dec 15 03:03:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 03:03:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52848.92231 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp0cz-0001hL-Il; Tue, 15 Dec 2020 03:03:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52848.92231; Tue, 15 Dec 2020 03:03:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp0cz-0001hE-Fj; Tue, 15 Dec 2020 03:03:53 +0000
Received: by outflank-mailman (input) for mailman id 52848;
 Tue, 15 Dec 2020 03:03:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1f4I=FT=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1kp0cx-0001h9-O1
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 03:03:51 +0000
Received: from mail-qt1-x82f.google.com (unknown [2607:f8b0:4864:20::82f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e55b74dd-a02b-4dc6-8deb-0b039bfd4178;
 Tue, 15 Dec 2020 03:03:50 +0000 (UTC)
Received: by mail-qt1-x82f.google.com with SMTP id u21so13560520qtw.11
 for <xen-devel@lists.xenproject.org>; Mon, 14 Dec 2020 19:03:50 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e55b74dd-a02b-4dc6-8deb-0b039bfd4178
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=zededa.com; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=ggYZD+SM3a3cNaKWBnWmpkSzvr816tA57UspaGTVGlY=;
        b=eQkbVRf3FGEyfkNY89DlkWxcAnOdAmU82TXGScLnqc1Nk5kmUB1VI+olhELuYORfJT
         r3z4K0fcvay0YCPjOxqTS1XnWSMwzpruF+hD4v3XU94z/mtRoW5pg4dHZedUiXBbYxtg
         dkHzOtz452+G1xut+MlCrPhEg9P2cpG52kysIXmV8ntQ/Qr5zaY+mKrdBfvbFdDZQIse
         EVD1ZuULicQKwtgTd/LO3zX+TEXaUnQ1pQPvGE02oyBOnNo9f45GI4BWtLYiwohZ0SG5
         krS+NyoIZnbiNwg764dKAkTobZ6YLtUen93eHsn63ODuLaf56zOjH3zWJNWkj3n781o5
         cRuQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=ggYZD+SM3a3cNaKWBnWmpkSzvr816tA57UspaGTVGlY=;
        b=Nrogw82++41V+WAAYWfFNFeoQgGCC9IlS3DyOnX5Uw/TtH2ltFcwf+CBnZZ2H1dLWc
         WKc5aUTvdvN4BRV86EpJWEjwBE+/YzlqCtJOfg9t8jvvc3zkt5JL5AcKDxWXPluUPHsh
         HferTjafj8jNs4u3BSYEh21mc2kCbNk9Ym/A7XoMkD0soW1DSFZxj1Bb3UyPjlDCFprU
         vzOtE/GBa6l2vR5A75Hz+Rk5eTMwsnD+t/qJOpJcC8UFCF75iJVNc/DmtlMquyfzrylT
         E3ByC7BJ2HswhEXY6B5lK+iii0ivreX4pdcWvh5gve4pLTeTNNJZHKqSb4qJFxae+LQ8
         /jSQ==
X-Gm-Message-State: AOAM530M1xWmzBpOUQUQ9crjRQfE6ROIOzxB+bvI5gWfCaj3uFyOzYii
	55fpf1O0JALFejHqT8UBjuzwIdTmZN45CTAx+qHmjRbCUUg=
X-Google-Smtp-Source: ABdhPJwIKdMJMIdCiR6utTsmb8zT6ut1Nlm7Hm/UCaozFHHtczKspI3WBxkqzsLe/W7XzxTZQRQmBPl6gAynKs1xdDk=
X-Received: by 2002:ac8:4e39:: with SMTP id d25mr34409481qtw.266.1608001430531;
 Mon, 14 Dec 2020 19:03:50 -0800 (PST)
MIME-Version: 1.0
References: <X9gcZu5uJpXx8wNn@mattapan.m5p.com> <CAMmSBy_8+PRWiSQxwRN2oB9mLmOnyoCr0mH4L-uUYhm=1GK7Xg@mail.gmail.com>
 <X9gmkGhQQBOmmBe5@mattapan.m5p.com>
In-Reply-To: <X9gmkGhQQBOmmBe5@mattapan.m5p.com>
From: Roman Shaposhnik <roman@zededa.com>
Date: Mon, 14 Dec 2020 19:03:39 -0800
Message-ID: <CAMmSBy-NbZnfifROX8-BRLCSWt8OYUKHdW9S5ob+k9QJ4w_30g@mail.gmail.com>
Subject: Re: Xen-ARM DomUs
To: Elliott Mitchell <ehem+undef@m5p.com>
Cc: Elliott Mitchell <ehem+xen@m5p.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Mon, Dec 14, 2020 at 6:59 PM Elliott Mitchell <ehem+undef@m5p.com> wrote:
>
> On Mon, Dec 14, 2020 at 06:35:14PM -0800, Roman Shaposhnik wrote:
> > On Mon, Dec 14, 2020 at 6:16 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
> > >
> > > Finally getting to the truly productive stages of my project with Xen on
> > > ARM.
> > >
> > > How many of the OSes which function as x86 DomUs for Xen, function as
> > > ARM DomUs?  Getting Linux operational was straightforward, but what of
> > > others?
> >
> > On EVE we have Windows running as a pretty much a customer-facing demo:
> >     https://wiki.lfedge.org/display/EVE/How+get+Windows+10+running+on+a+Raspberry+Pi
> >
>
> Sorry to spoil the achievement, but Tianocore beat you to having
> Windows on a RP4 by 4 months:
> https://rpi4-uefi.dev/alternate-guide-running-windows-10-on-the-pi-4/

Not to be pedantic, but Stefano and I beat them -- we made it possible
around August ;-)

> > > The available examples seem geared towards Linux DomUs.  I'm looking at a
> > > FreeBSD installation image and it appears to expect an EFI firmware.
> > > Beyond having a bunch of files appearing oriented towards booting on EFI
> > > I can't say much about (booting) FreeBSD/ARM DomUs.
> >
> > Personally I'm about to make Plan9 (well 9front really) run as well ;-)
>
> Some people may like those types of instructions, but I really hate them.
> I like Tianocore's better, since I can do my type of adjustment better.
> (using different amount of storage or other virtual devices)
>
> I've already got FreeBSD installation media, issue is setting up a xl.cfg
> file and/or figuring out which bits I need to extract off their media
> (ah, actual kernel is /boot/kernel/kernel; an ELF file using the
> interpreter /red/herring).

Well, Xen requires some kind of a management solution underneath, so until
Xen/RPi4 support shows up in Raspbian -- the choice is to either stick with
EVE or follow long lists of instructions.

Thanks,
Roman.


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 03:30:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 03:30:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52855.92246 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp134-0004MK-OY; Tue, 15 Dec 2020 03:30:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52855.92246; Tue, 15 Dec 2020 03:30:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp134-0004MD-Kv; Tue, 15 Dec 2020 03:30:50 +0000
Received: by outflank-mailman (input) for mailman id 52855;
 Tue, 15 Dec 2020 03:30:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp133-0004M5-U9; Tue, 15 Dec 2020 03:30:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp133-0003mH-I5; Tue, 15 Dec 2020 03:30:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp133-00042i-Al; Tue, 15 Dec 2020 03:30:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kp133-0000IR-AK; Tue, 15 Dec 2020 03:30:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VvZ4hFoF5+VMbA9BVYhuhsxNJQofFVJDzJ4dWznjfuU=; b=IWOYrowmbRBHcxcBdfufqV4Ron
	qlLSmI9LN1AYcgFX2QvtXQLd3JROFIwDL4d7lhVK8ggeXPw39EpecKMoCSCmvwPOXVviL/qB8PphH
	vTMp/c/SCfHvx2Wpmuc5EKqEL3vy9/PmQBDuB7LXx9NeI9wh3fXxhKnst9xKbG1yyyPw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157540-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157540: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=01726b6d23d4c8a870dbd5b96c0b9e3caf38ef3c
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Dec 2020 03:30:49 +0000

flight 157540 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157540/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    6 xen-build                fail REGR. vs. 157345
 build-amd64                   6 xen-build                fail REGR. vs. 157345
 build-amd64-xsm               6 xen-build                fail REGR. vs. 157345
 build-i386-xsm                6 xen-build                fail REGR. vs. 157345

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 01726b6d23d4c8a870dbd5b96c0b9e3caf38ef3c
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    5 days
Failing since        157348  2020-12-09 15:39:39 Z    5 days   42 attempts
Testing same since   157531  2020-12-14 22:40:42 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Ard Biesheuvel <ard.biesheuvel@arm.com>
  Baraneedharan Anbazhagan <anbazhagan@hp.com>
  Baraneedharan Anbazhagan <anbazhgan@hp.com>
  Fan Wang <fan.wang@intel.com>
  James Bottomley <jejb@linux.ibm.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Michael Kubacki <michael.kubacki@microsoft.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Star Zeng <star.zeng@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 561 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 04:02:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 04:02:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52865.92261 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp1Xv-0007AD-BL; Tue, 15 Dec 2020 04:02:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52865.92261; Tue, 15 Dec 2020 04:02:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp1Xv-0007A6-7Q; Tue, 15 Dec 2020 04:02:43 +0000
Received: by outflank-mailman (input) for mailman id 52865;
 Tue, 15 Dec 2020 04:02:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp1Xt-00079y-Mg; Tue, 15 Dec 2020 04:02:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp1Xt-0004XT-HA; Tue, 15 Dec 2020 04:02:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp1Xt-0004oB-8m; Tue, 15 Dec 2020 04:02:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kp1Xt-0008J0-8K; Tue, 15 Dec 2020 04:02:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fkkV5J/t02BPgJiMGXMuFUhoF0Xy5e80JzlI1IuvsV4=; b=NEIe/Pz9ncQEcP0djVKfOmMvBQ
	Q9Hb70SwJ+dxK25BEdrq/iCuxBbTxsiTx7lToHRdufeDdxnwTcWGwa8xPsqtIDRx7SnNR1KuNxEQ0
	6W39+CsRv08AEeDUDoJVyEfrnXgRL/DxiRpPWVM7Im02oHUAx5vnSbM88VwONJimaBUk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157541-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157541: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=01726b6d23d4c8a870dbd5b96c0b9e3caf38ef3c
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Dec 2020 04:02:41 +0000

flight 157541 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157541/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    6 xen-build                fail REGR. vs. 157345
 build-amd64                   6 xen-build                fail REGR. vs. 157345
 build-amd64-xsm               6 xen-build                fail REGR. vs. 157345
 build-i386-xsm                6 xen-build                fail REGR. vs. 157345

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 01726b6d23d4c8a870dbd5b96c0b9e3caf38ef3c
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    5 days
Failing since        157348  2020-12-09 15:39:39 Z    5 days   43 attempts
Testing same since   157531  2020-12-14 22:40:42 Z    0 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Ard Biesheuvel <ard.biesheuvel@arm.com>
  Baraneedharan Anbazhagan <anbazhagan@hp.com>
  Baraneedharan Anbazhagan <anbazhgan@hp.com>
  Fan Wang <fan.wang@intel.com>
  James Bottomley <jejb@linux.ibm.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Michael Kubacki <michael.kubacki@microsoft.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Star Zeng <star.zeng@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 561 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 04:42:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 04:42:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52874.92276 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp2Aa-0002Hj-H2; Tue, 15 Dec 2020 04:42:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52874.92276; Tue, 15 Dec 2020 04:42:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp2Aa-0002Hc-E0; Tue, 15 Dec 2020 04:42:40 +0000
Received: by outflank-mailman (input) for mailman id 52874;
 Tue, 15 Dec 2020 04:42:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp2AY-0002HU-RN; Tue, 15 Dec 2020 04:42:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp2AY-0005Pn-Ka; Tue, 15 Dec 2020 04:42:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp2AY-0005qP-CG; Tue, 15 Dec 2020 04:42:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kp2AY-0003aF-Bo; Tue, 15 Dec 2020 04:42:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=b6aJuk+mCQTjLB1u522RrimxDdMppRV6PWbYivi5HYg=; b=ngKhXf4UPyi+F1d3/n0jFXo+u0
	bM4+mioBr1pDVhI7QOHmSPEXb767B6nBF09tWftOXutXn2SvCuUjI/Ntm3cm28xhRNkHeYFVe6CMh
	ZfezgPJe54H9qiEemAsE2iOQuAInxWwVndcipGyPxMJ8SVJqpIDmR79CgHkA626YCF3M=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157542-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157542: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=01726b6d23d4c8a870dbd5b96c0b9e3caf38ef3c
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Dec 2020 04:42:38 +0000

flight 157542 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157542/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    6 xen-build                fail REGR. vs. 157345
 build-amd64                   6 xen-build                fail REGR. vs. 157345
 build-amd64-xsm               6 xen-build                fail REGR. vs. 157345
 build-i386-xsm                6 xen-build                fail REGR. vs. 157345

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 01726b6d23d4c8a870dbd5b96c0b9e3caf38ef3c
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    5 days
Failing since        157348  2020-12-09 15:39:39 Z    5 days   44 attempts
Testing same since   157531  2020-12-14 22:40:42 Z    0 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Ard Biesheuvel <ard.biesheuvel@arm.com>
  Baraneedharan Anbazhagan <anbazhagan@hp.com>
  Baraneedharan Anbazhagan <anbazhgan@hp.com>
  Fan Wang <fan.wang@intel.com>
  James Bottomley <jejb@linux.ibm.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Michael Kubacki <michael.kubacki@microsoft.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Star Zeng <star.zeng@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 561 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 05:29:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 05:29:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52885.92291 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp2tO-0006Nd-0y; Tue, 15 Dec 2020 05:28:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52885.92291; Tue, 15 Dec 2020 05:28:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp2tN-0006NW-UK; Tue, 15 Dec 2020 05:28:57 +0000
Received: by outflank-mailman (input) for mailman id 52885;
 Tue, 15 Dec 2020 05:28:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp2tL-0006NO-UM; Tue, 15 Dec 2020 05:28:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp2tL-0006Tc-Mj; Tue, 15 Dec 2020 05:28:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp2tL-00071V-Ep; Tue, 15 Dec 2020 05:28:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kp2tL-0001l1-EN; Tue, 15 Dec 2020 05:28:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0pLDOVGpaUKXfsOPvtSTjDubAMPpKTb6UU7/1GLBwiQ=; b=J19QYm30dH0T5qMMRnIZiWP5Lt
	d25YC14k22SArk7+Wy5LnI4JaPYXKEZDOukXeBkjdZhfplr+zux6gaKBU2O3slkwug2xhzQ6QeFy+
	8/0VCrY5YUNl2Z6xwXwxQX742bf8TEfnuVgvbN9ZgDRfox00k2MHIT7nwHDfj72mwY0E=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157544-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157544: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=01726b6d23d4c8a870dbd5b96c0b9e3caf38ef3c
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Dec 2020 05:28:55 +0000

flight 157544 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157544/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    6 xen-build                fail REGR. vs. 157345
 build-amd64                   6 xen-build                fail REGR. vs. 157345
 build-amd64-xsm               6 xen-build                fail REGR. vs. 157345
 build-i386-xsm                6 xen-build                fail REGR. vs. 157345

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 01726b6d23d4c8a870dbd5b96c0b9e3caf38ef3c
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    5 days
Failing since        157348  2020-12-09 15:39:39 Z    5 days   45 attempts
Testing same since   157531  2020-12-14 22:40:42 Z    0 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Ard Biesheuvel <ard.biesheuvel@arm.com>
  Baraneedharan Anbazhagan <anbazhagan@hp.com>
  Baraneedharan Anbazhagan <anbazhgan@hp.com>
  Fan Wang <fan.wang@intel.com>
  James Bottomley <jejb@linux.ibm.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Michael Kubacki <michael.kubacki@microsoft.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Star Zeng <star.zeng@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 561 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 06:33:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 06:33:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52901.92348 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp3tr-0004Bq-UK; Tue, 15 Dec 2020 06:33:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52901.92348; Tue, 15 Dec 2020 06:33:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp3tr-0004Be-Qk; Tue, 15 Dec 2020 06:33:31 +0000
Received: by outflank-mailman (input) for mailman id 52901;
 Tue, 15 Dec 2020 06:33:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kp3tq-00047p-Qi
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 06:33:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id aaa6d120-d33e-4982-9359-de6796891a5a;
 Tue, 15 Dec 2020 06:33:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 28241AFF3;
 Tue, 15 Dec 2020 06:33:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aaa6d120-d33e-4982-9359-de6796891a5a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608014002; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=yYxq5tlJRRgMbOimOLfmKJHCkbwkZMHcBMXnNxQyCg0=;
	b=qyWdFktIzoKvfdjdsGmaEWftuIPlEL80U4WOcRt0DnPl/KuH1A3TesxazxEhajQugY6V1r
	cFU0kKMbjO4UpB9Cti9JsSEwrZnS47UGTbilubTAiiyWGT5P1sd2PU3mv9eSfo1WWSg0Fe
	8yJPtWMBQkCDe/jobOMupPENVo9/zwg=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 1/3] xen/arm: add support for run_in_exception_handler()
Date: Tue, 15 Dec 2020 07:33:17 +0100
Message-Id: <20201215063319.23290-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215063319.23290-1-jgross@suse.com>
References: <20201215063319.23290-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add support for running a function in an exception handler on Arm. Do it
the same way as on x86, via a bug_frame.

Use the same BUGFRAME_* #defines as on x86 in order to make a future
common header file more easily achievable.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V4:
- new patch

V5:
- adjust BUG_FRAME() macro (Jan Beulich)
- adjust arm linker script (Jan Beulich)
- drop #ifdef x86 in common/virtual_region.c

I have verified that the generated bug frames are correct by inspecting
the resulting binary.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/arch/arm/traps.c        | 10 ++++++++-
 xen/arch/arm/xen.lds.S      |  2 ++
 xen/common/virtual_region.c |  2 --
 xen/include/asm-arm/bug.h   | 45 +++++++++++++++++--------------------
 4 files changed, 32 insertions(+), 27 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 22bd1bd4c6..912b9a3d77 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1236,8 +1236,16 @@ int do_bug_frame(const struct cpu_user_regs *regs, vaddr_t pc)
     if ( !bug )
         return -ENOENT;
 
+    if ( id == BUGFRAME_run_fn )
+    {
+        void (*fn)(const struct cpu_user_regs *) = bug_ptr(bug);
+
+        fn(regs);
+        return 0;
+    }
+
     /* WARN, BUG or ASSERT: decode the filename pointer and line number. */
-    filename = bug_file(bug);
+    filename = bug_ptr(bug);
     if ( !is_kernel(filename) )
         return -EINVAL;
     fixup = strlen(filename);
diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
index 6342ac4ead..004b182acb 100644
--- a/xen/arch/arm/xen.lds.S
+++ b/xen/arch/arm/xen.lds.S
@@ -49,6 +49,8 @@ SECTIONS
        __stop_bug_frames_1 = .;
        *(.bug_frames.2)
        __stop_bug_frames_2 = .;
+       *(.bug_frames.3)
+       __stop_bug_frames_3 = .;
        *(.rodata)
        *(.rodata.*)
        *(.data.rel.ro)
diff --git a/xen/common/virtual_region.c b/xen/common/virtual_region.c
index 4fbc02e35a..30b0b4ab9c 100644
--- a/xen/common/virtual_region.c
+++ b/xen/common/virtual_region.c
@@ -123,9 +123,7 @@ void __init setup_virtual_regions(const struct exception_table_entry *start,
         __stop_bug_frames_0,
         __stop_bug_frames_1,
         __stop_bug_frames_2,
-#ifdef CONFIG_X86
         __stop_bug_frames_3,
-#endif
         NULL
     };
 
diff --git a/xen/include/asm-arm/bug.h b/xen/include/asm-arm/bug.h
index 36c803357c..4610835ac3 100644
--- a/xen/include/asm-arm/bug.h
+++ b/xen/include/asm-arm/bug.h
@@ -15,65 +15,62 @@
 
 struct bug_frame {
     signed int loc_disp;    /* Relative address to the bug address */
-    signed int file_disp;   /* Relative address to the filename */
+    signed int ptr_disp;    /* Relative address to the filename or function */
     signed int msg_disp;    /* Relative address to the predicate (for ASSERT) */
     uint16_t line;          /* Line number */
     uint32_t pad0:16;       /* Padding for 8-bytes align */
 };
 
 #define bug_loc(b) ((const void *)(b) + (b)->loc_disp)
-#define bug_file(b) ((const void *)(b) + (b)->file_disp);
+#define bug_ptr(b) ((const void *)(b) + (b)->ptr_disp);
 #define bug_line(b) ((b)->line)
 #define bug_msg(b) ((const char *)(b) + (b)->msg_disp)
 
-#define BUGFRAME_warn   0
-#define BUGFRAME_bug    1
-#define BUGFRAME_assert 2
+#define BUGFRAME_run_fn 0
+#define BUGFRAME_warn   1
+#define BUGFRAME_bug    2
+#define BUGFRAME_assert 3
 
-#define BUGFRAME_NR     3
+#define BUGFRAME_NR     4
 
 /* Many versions of GCC doesn't support the asm %c parameter which would
  * be preferable to this unpleasantness. We use mergeable string
  * sections to avoid multiple copies of the string appearing in the
  * Xen image.
  */
-#define BUG_FRAME(type, line, file, has_msg, msg) do {                      \
+#define BUG_FRAME(type, line, ptr, msg) do {                                \
     BUILD_BUG_ON((line) >> 16);                                             \
     BUILD_BUG_ON((type) >= BUGFRAME_NR);                                    \
     asm ("1:"BUG_INSTR"\n"                                                  \
-         ".pushsection .rodata.str, \"aMS\", %progbits, 1\n"                \
-         "2:\t.asciz " __stringify(file) "\n"                               \
-         "3:\n"                                                             \
-         ".if " #has_msg "\n"                                               \
-         "\t.asciz " #msg "\n"                                              \
-         ".endif\n"                                                         \
-         ".popsection\n"                                                    \
-         ".pushsection .bug_frames." __stringify(type) ", \"a\", %progbits\n"\
-         "4:\n"                                                             \
+         ".pushsection .bug_frames." __stringify(type) ", \"a\", %%progbits\n"\
+         "2:\n"                                                             \
          ".p2align 2\n"                                                     \
-         ".long (1b - 4b)\n"                                                \
-         ".long (2b - 4b)\n"                                                \
-         ".long (3b - 4b)\n"                                                \
+         ".long (1b - 2b)\n"                                                \
+         ".long (%0 - 2b)\n"                                                \
+         ".long (%1 - 2b)\n"                                                \
          ".hword " __stringify(line) ", 0\n"                                \
-         ".popsection");                                                    \
+         ".popsection" :: "i" (ptr), "i" (msg));                            \
 } while (0)
 
-#define WARN() BUG_FRAME(BUGFRAME_warn, __LINE__, __FILE__, 0, "")
+#define run_in_exception_handler(fn) BUG_FRAME(BUGFRAME_run_fn, 0, fn, "")
+
+#define WARN() BUG_FRAME(BUGFRAME_warn, __LINE__, __FILE__, "")
 
 #define BUG() do {                                              \
-    BUG_FRAME(BUGFRAME_bug,  __LINE__, __FILE__, 0, "");        \
+    BUG_FRAME(BUGFRAME_bug,  __LINE__, __FILE__, "");           \
     unreachable();                                              \
 } while (0)
 
 #define assert_failed(msg) do {                                 \
-    BUG_FRAME(BUGFRAME_assert, __LINE__, __FILE__, 1, msg);     \
+    BUG_FRAME(BUGFRAME_assert, __LINE__, __FILE__, msg);        \
     unreachable();                                              \
 } while (0)
 
 extern const struct bug_frame __start_bug_frames[],
                               __stop_bug_frames_0[],
                               __stop_bug_frames_1[],
-                              __stop_bug_frames_2[];
+                              __stop_bug_frames_2[],
+                              __stop_bug_frames_3[];
 
 #endif /* __ARM_BUG_H__ */
 /*
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 06:33:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 06:33:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52900.92336 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp3tq-00049s-KH; Tue, 15 Dec 2020 06:33:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52900.92336; Tue, 15 Dec 2020 06:33:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp3tq-00049l-Fy; Tue, 15 Dec 2020 06:33:30 +0000
Received: by outflank-mailman (input) for mailman id 52900;
 Tue, 15 Dec 2020 06:33:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kp3to-00046r-V1
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 06:33:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aad726b5-3b04-400d-8f2b-7790371e2d99;
 Tue, 15 Dec 2020 06:33:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9C931B04C;
 Tue, 15 Dec 2020 06:33:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aad726b5-3b04-400d-8f2b-7790371e2d99
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608014002; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=rJ4WIaQxn+EBTheKZn327hvdQLsMJ2iX0SOvU8K5LL0=;
	b=qlOwO5u5JcUqBrozIMDZsGXFo9m+i1iEQuzCVrnpd8J+zQ4uCYnqONxuOO6QBkAbj2bYCb
	vIOjEt227sQwa8aQXy3XC46Fd9omlJX3DeiPxAxeFAxDFQ9Dtb1gCYIEkZVnqNswg/Sdzy
	SP+JsXLuU33tFC8vYdvnD7FWyUc+q28=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 3/3] xen: add support for automatic debug key actions in case of crash
Date: Tue, 15 Dec 2020 07:33:19 +0100
Message-Id: <20201215063319.23290-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215063319.23290-1-jgross@suse.com>
References: <20201215063319.23290-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When the host crashes it would sometimes be nice to have additional
debug data available which could be produced via debug keys, but
halting the server for manual intervention might be impossible due to
the need to reboot/kexec sooner rather than later.

Add support for automatic debug key actions in case of crashes which
can be activated via boot- or runtime-parameter.

Depending on the type of crash the desired data might be different, so
support different settings for the possible types of crashes.

The parameter is "crash-debug" with the following syntax:

  crash-debug-<type>=<string>

with <type> being one of:

  panic, hwdom, watchdog, kexeccmd, debugkey

and <string> a sequence of debug key characters with '+' having the
special semantics of a 10 millisecond pause.

So "crash-debug-watchdog=0+0qr" would result in special output in case
of a watchdog-triggered crash (dom0 state, 10 ms pause, dom0 state,
domain info, run queues).

Don't call key handlers in early boot, as some (e.g. the one for 'd')
require certain initializations to be finished, like the scheduler or
idle domain.
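
The action-string interpretation above can be sketched stand-alone, with
Xen's mdelay()/handle_keypress() replaced by stubs that record a trace
(stub names are hypothetical): every character is injected as a debug key,
except '+', which stands for a 10 ms pause.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

static char trace[64];
static size_t trace_len;

/* Append one event to the trace (bounded, always NUL-terminated). */
static void record(char c)
{
    if ( trace_len < sizeof(trace) - 1 )
        trace[trace_len++] = c;
    trace[trace_len] = '\0';
}

/* Mirrors the character loop in keyhandler_crash_action(). */
static const char *run_crash_action(const char *action)
{
    trace_len = 0;
    trace[0] = '\0';

    for ( ; *action; action++ )
    {
        if ( *action == '+' )
            record('P');       /* would be mdelay(10) in Xen */
        else
            record(*action);   /* would be handle_keypress(*action, NULL) */
    }

    return trace;
}
```

For "0+0qr" this yields key, pause, key, key, key, matching the
watchdog example in the commit message.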

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
V2:
- switched special character '.' to '+' (Jan Beulich)
- 10 ms instead of 1 s pause (Jan Beulich)
- added more text to the boot parameter description (Jan Beulich)

V3:
- added const (Jan Beulich)
- thorough test of crash reason parameter (Jan Beulich)
- kexeccmd case should depend on CONFIG_KEXEC (Jan Beulich)
- added dummy get_irq_regs() helper on Arm

V4:
- call keyhandlers with NULL for regs
- use ARRAY_SIZE() (Jan Beulich)
- don't activate handlers in early boot (Jan Beulich)
- avoid recursion
- extend documentation a bit

V5:
- move recursion check down a little bit (Jan Beulich)
---
 docs/misc/xen-command-line.pandoc | 41 +++++++++++++++++++++++
 xen/common/kexec.c                |  8 +++--
 xen/common/keyhandler.c           | 55 +++++++++++++++++++++++++++++++
 xen/common/shutdown.c             |  4 +--
 xen/drivers/char/console.c        |  2 +-
 xen/include/xen/kexec.h           | 10 ++++--
 xen/include/xen/keyhandler.h      | 10 ++++++
 7 files changed, 122 insertions(+), 8 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index b4a0d60c11..e4c0a144fc 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -574,6 +574,43 @@ reduction of features at Xen's disposal to manage guests.
 ### cpuinfo (x86)
 > `= <boolean>`
 
+### crash-debug-debugkey
+### crash-debug-hwdom
+### crash-debug-kexeccmd
+### crash-debug-panic
+### crash-debug-watchdog
+> `= <string>`
+
+> Can be modified at runtime
+
+Specify debug-key actions in case of crashes. Each of the parameters applies
+to a different crash reason. The `<string>` is a sequence of debug key
+characters, with `+` having the special meaning of a 10 millisecond pause.
+
+`crash-debug-debugkey` will be used for crashes induced by the `C` debug
+key (i.e. manually induced crash).
+
+`crash-debug-hwdom` denotes a crash of dom0.
+
+`crash-debug-kexeccmd` is an explicit request of dom0 to continue with the
+kdump kernel via kexec. Only available on hypervisors built with CONFIG_KEXEC.
+
+`crash-debug-panic` is a crash of the hypervisor.
+
+`crash-debug-watchdog` is a crash due to the watchdog timer expiring.
+
+It should be noted that dumping diagnostic data to the console can fail in
+multiple ways (missing data, hanging system, ...) depending on the reason
+for the crash, which might have left the hypervisor in a bad state. If a
+debug-key action leads to another crash, recursion is avoided, so no
+additional debug-key actions will be performed. A crash in the early boot
+phase will not result in any debug-key action, as the system might not yet
+be in a state where the handlers can work.
+
+So e.g. `crash-debug-watchdog=0+0r` would dump dom0 state twice with 10
+milliseconds between the two state dumps, followed by the run queues of the
+hypervisor, if the system crashes due to a watchdog timeout.
+
 ### crashinfo_maxaddr
 > `= <size>`
 
diff --git a/xen/common/kexec.c b/xen/common/kexec.c
index 52cdc4ebc3..ebeee6405a 100644
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -373,10 +373,12 @@ static int kexec_common_shutdown(void)
     return 0;
 }
 
-void kexec_crash(void)
+void kexec_crash(enum crash_reason reason)
 {
     int pos;
 
+    keyhandler_crash_action(reason);
+
     pos = (test_bit(KEXEC_FLAG_CRASH_POS, &kexec_flags) != 0);
     if ( !test_bit(KEXEC_IMAGE_CRASH_BASE + pos, &kexec_flags) )
         return;
@@ -409,7 +411,7 @@ static long kexec_reboot(void *_image)
 static void do_crashdump_trigger(unsigned char key)
 {
     printk("'%c' pressed -> triggering crashdump\n", key);
-    kexec_crash();
+    kexec_crash(CRASHREASON_DEBUGKEY);
     printk(" * no crash kernel loaded!\n");
 }
 
@@ -840,7 +842,7 @@ static int kexec_exec(XEN_GUEST_HANDLE_PARAM(void) uarg)
         ret = continue_hypercall_on_cpu(0, kexec_reboot, image);
         break;
     case KEXEC_TYPE_CRASH:
-        kexec_crash(); /* Does not return */
+        kexec_crash(CRASHREASON_KEXECCMD); /* Does not return */
         break;
     }
 
diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
index 38020a1360..8b9f378371 100644
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -3,7 +3,9 @@
  */
 
 #include <asm/regs.h>
+#include <xen/delay.h>
 #include <xen/keyhandler.h>
+#include <xen/param.h>
 #include <xen/shutdown.h>
 #include <xen/event.h>
 #include <xen/console.h>
@@ -518,6 +520,59 @@ void __init initialize_keytable(void)
     }
 }
 
+#define CRASHACTION_SIZE  32
+static char crash_debug_panic[CRASHACTION_SIZE];
+string_runtime_param("crash-debug-panic", crash_debug_panic);
+static char crash_debug_hwdom[CRASHACTION_SIZE];
+string_runtime_param("crash-debug-hwdom", crash_debug_hwdom);
+static char crash_debug_watchdog[CRASHACTION_SIZE];
+string_runtime_param("crash-debug-watchdog", crash_debug_watchdog);
+#ifdef CONFIG_KEXEC
+static char crash_debug_kexeccmd[CRASHACTION_SIZE];
+string_runtime_param("crash-debug-kexeccmd", crash_debug_kexeccmd);
+#else
+#define crash_debug_kexeccmd NULL
+#endif
+static char crash_debug_debugkey[CRASHACTION_SIZE];
+string_runtime_param("crash-debug-debugkey", crash_debug_debugkey);
+
+void keyhandler_crash_action(enum crash_reason reason)
+{
+    static const char *const crash_action[] = {
+        [CRASHREASON_PANIC] = crash_debug_panic,
+        [CRASHREASON_HWDOM] = crash_debug_hwdom,
+        [CRASHREASON_WATCHDOG] = crash_debug_watchdog,
+        [CRASHREASON_KEXECCMD] = crash_debug_kexeccmd,
+        [CRASHREASON_DEBUGKEY] = crash_debug_debugkey,
+    };
+    static bool ignore;
+    const char *action;
+
+    /* Some handlers are not functional too early. */
+    if ( system_state < SYS_STATE_smp_boot )
+        return;
+
+    if ( (unsigned int)reason >= ARRAY_SIZE(crash_action) )
+        return;
+    action = crash_action[reason];
+    if ( !action )
+        return;
+
+    /* Avoid recursion. */
+    if ( ignore )
+        return;
+    ignore = true;
+
+    while ( *action )
+    {
+        if ( *action == '+' )
+            mdelay(10);
+        else
+            handle_keypress(*action, NULL);
+        action++;
+    }
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/common/shutdown.c b/xen/common/shutdown.c
index 912593915b..abde48aa4c 100644
--- a/xen/common/shutdown.c
+++ b/xen/common/shutdown.c
@@ -43,7 +43,7 @@ void hwdom_shutdown(u8 reason)
     case SHUTDOWN_crash:
         debugger_trap_immediate();
         printk("Hardware Dom%u crashed: ", hardware_domain->domain_id);
-        kexec_crash();
+        kexec_crash(CRASHREASON_HWDOM);
         maybe_reboot();
         break; /* not reached */
 
@@ -56,7 +56,7 @@ void hwdom_shutdown(u8 reason)
     case SHUTDOWN_watchdog:
         printk("Hardware Dom%u shutdown: watchdog rebooting machine\n",
                hardware_domain->domain_id);
-        kexec_crash();
+        kexec_crash(CRASHREASON_WATCHDOG);
         machine_restart(0);
         break; /* not reached */
 
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 861ad53a8f..acec277f5e 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -1271,7 +1271,7 @@ void panic(const char *fmt, ...)
 
     debugger_trap_immediate();
 
-    kexec_crash();
+    kexec_crash(CRASHREASON_PANIC);
 
     if ( opt_noreboot )
         machine_halt();
diff --git a/xen/include/xen/kexec.h b/xen/include/xen/kexec.h
index e85ba16405..9f7a912e97 100644
--- a/xen/include/xen/kexec.h
+++ b/xen/include/xen/kexec.h
@@ -1,6 +1,8 @@
 #ifndef __XEN_KEXEC_H__
 #define __XEN_KEXEC_H__
 
+#include <xen/keyhandler.h>
+
 #ifdef CONFIG_KEXEC
 
 #include <public/kexec.h>
@@ -48,7 +50,7 @@ void machine_kexec_unload(struct kexec_image *image);
 void machine_kexec_reserved(xen_kexec_reserve_t *reservation);
 void machine_reboot_kexec(struct kexec_image *image);
 void machine_kexec(struct kexec_image *image);
-void kexec_crash(void);
+void kexec_crash(enum crash_reason reason);
 void kexec_crash_save_cpu(void);
 struct crash_xen_info *kexec_crash_save_info(void);
 void machine_crash_shutdown(void);
@@ -82,7 +84,11 @@ void vmcoreinfo_append_str(const char *fmt, ...)
 #define kexecing 0
 
 static inline void kexec_early_calculations(void) {}
-static inline void kexec_crash(void) {}
+static inline void kexec_crash(enum crash_reason reason)
+{
+    keyhandler_crash_action(reason);
+}
+
 static inline void kexec_crash_save_cpu(void) {}
 static inline void set_kexec_crash_area_size(u64 system_ram) {}
 
diff --git a/xen/include/xen/keyhandler.h b/xen/include/xen/keyhandler.h
index 5131e86cbc..9c5830a037 100644
--- a/xen/include/xen/keyhandler.h
+++ b/xen/include/xen/keyhandler.h
@@ -48,4 +48,14 @@ void register_irq_keyhandler(unsigned char key,
 /* Inject a keypress into the key-handling subsystem. */
 extern void handle_keypress(unsigned char key, struct cpu_user_regs *regs);
 
+enum crash_reason {
+    CRASHREASON_PANIC,
+    CRASHREASON_HWDOM,
+    CRASHREASON_WATCHDOG,
+    CRASHREASON_KEXECCMD,
+    CRASHREASON_DEBUGKEY,
+};
+
+void keyhandler_crash_action(enum crash_reason reason);
+
 #endif /* __XEN_KEYHANDLER_H__ */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 06:33:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 06:33:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52899.92323 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp3tn-000481-AW; Tue, 15 Dec 2020 06:33:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52899.92323; Tue, 15 Dec 2020 06:33:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp3tn-00047u-7b; Tue, 15 Dec 2020 06:33:27 +0000
Received: by outflank-mailman (input) for mailman id 52899;
 Tue, 15 Dec 2020 06:33:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kp3tl-00047p-Ur
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 06:33:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e06a5c88-f9b1-40dc-8a91-4d75e497c552;
 Tue, 15 Dec 2020 06:33:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1024BAF4C;
 Tue, 15 Dec 2020 06:33:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e06a5c88-f9b1-40dc-8a91-4d75e497c552
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608014002; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=FZ5UAMInL92tsy7p0fKAS4wnHgF6N3gV+HXzR4TUUhA=;
	b=kAea/Wdq/pzknwBoRRrBRNHqcVMoUlIaDn0qElGeEKc5AlKD3cz1Me49pNpviWtdbgPye+
	tUWACvotwXajtfVWyo27HT61QRyk+a8tmq0BckW84DenH6Gd+rfcIzhOfCEddK7Z/B6+65
	PO7LxJjRs6ewqwGx0yxc+z10A0/jgd8=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 0/3] xen: add support for automatic debug key actions in case of crash
Date: Tue, 15 Dec 2020 07:33:16 +0100
Message-Id: <20201215063319.23290-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When the host crashes it would sometimes be nice to have additional
debug data available which could be produced via debug keys, but
halting the server for manual intervention might be impossible due to
the need to reboot/kexec sooner rather than later.

Add support for automatic debug key actions in case of crashes which
can be activated via boot- or runtime-parameter.

Changes in V4:
- addressed comments (now patch 3)
- added patches 1 and 2

Changes in V5:
- better bug frame construction on Arm (patch 1)
- addressed comments

Juergen Gross (3):
  xen/arm: add support for run_in_exception_handler()
  xen: enable keyhandlers to work without register set specified
  xen: add support for automatic debug key actions in case of crash

 docs/misc/xen-command-line.pandoc | 41 ++++++++++++++++++
 xen/arch/arm/traps.c              | 10 ++++-
 xen/arch/arm/xen.lds.S            |  2 +
 xen/common/kexec.c                |  8 ++--
 xen/common/keyhandler.c           | 72 +++++++++++++++++++++++++++++--
 xen/common/shutdown.c             |  4 +-
 xen/common/virtual_region.c       |  2 -
 xen/drivers/char/console.c        |  2 +-
 xen/include/asm-arm/bug.h         | 45 +++++++++----------
 xen/include/xen/kexec.h           | 10 ++++-
 xen/include/xen/keyhandler.h      | 10 +++++
 11 files changed, 168 insertions(+), 38 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 06:33:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 06:33:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52898.92312 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp3tl-000473-2r; Tue, 15 Dec 2020 06:33:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52898.92312; Tue, 15 Dec 2020 06:33:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp3tk-00046w-Vv; Tue, 15 Dec 2020 06:33:24 +0000
Received: by outflank-mailman (input) for mailman id 52898;
 Tue, 15 Dec 2020 06:33:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kp3tk-00046r-5L
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 06:33:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 138b3650-0177-4f7b-98cf-ded85c10019b;
 Tue, 15 Dec 2020 06:33:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 609B7AC90;
 Tue, 15 Dec 2020 06:33:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 138b3650-0177-4f7b-98cf-ded85c10019b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608014002; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=G+FLQAaWCnWATda39k47mXPhSQZGTuR/oZbC6qRnuMI=;
	b=Gw3pGzd139wdCkMm/9aLspEJFkuERVBdp7bR1ykYginclAc4whDUuo9szWH62hPL8NCg7g
	iFvvig0wcf8FNyhWhCCnj0yjrwEiV39qyexQJVeLWG8bRoK4YWbQ2RdsssPklcbo+bA0hy
	m5p2cGNiyQVsuM1UD283PhnWbVsrtsQ=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 2/3] xen: enable keyhandlers to work without register set specified
Date: Tue, 15 Dec 2020 07:33:18 +0100
Message-Id: <20201215063319.23290-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215063319.23290-1-jgross@suse.com>
References: <20201215063319.23290-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

There are only two keyhandlers which make use of the cpu_user_regs
struct passed to them. In order to be able to call any keyhandler in
non-interrupt contexts, too, modify those two handlers to cope with a
NULL regs pointer by using run_in_exception_handler() in that case.
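
The dispatch pattern can be sketched as follows (self-contained, with a
stand-in type and a stubbed fallback path; in Xen the else branch is
run_in_exception_handler(), which takes a deliberate trap to produce a
real register set):

```c
#include <assert.h>
#include <stddef.h>

struct cpu_user_regs { unsigned long pc; };   /* stand-in type */

/* Use the caller's regs when running in interrupt context; otherwise
 * fall back to obtaining a register set ourselves. Returns 1 when the
 * caller-supplied regs were used, 0 when the fallback path was taken. */
static int handle_with_optional_regs(const struct cpu_user_regs *regs)
{
    int used_caller_regs = (regs != NULL);
    struct cpu_user_regs fallback = { 0 };

    if ( !regs )
        regs = &fallback;   /* stand-in for the trap-built regs in Xen */

    /* ... act on *regs here, as dump_execstate() does ... */
    (void)regs;

    return used_caller_regs;
}

static int demo_interrupt_context(void)
{
    struct cpu_user_regs r = { 0x1234 };

    return handle_with_optional_regs(&r);
}

static int demo_no_regs(void)
{
    return handle_with_optional_regs(NULL);
}
```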

Suggested-by: Julien Grall <julien@xen.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V4:
- new patch
---
 xen/common/keyhandler.c | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
index 68364e987d..38020a1360 100644
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -181,7 +181,10 @@ static void dump_registers(unsigned char key, struct cpu_user_regs *regs)
     cpumask_copy(&dump_execstate_mask, &cpu_online_map);
 
     /* Get local execution state out immediately, in case we get stuck. */
-    dump_execstate(regs);
+    if ( regs )
+        dump_execstate(regs);
+    else
+        run_in_exception_handler(dump_execstate);
 
     /* Alt. handling: remaining CPUs are dumped asynchronously one-by-one. */
     if ( alt_key_handling )
@@ -481,15 +484,23 @@ static void run_all_keyhandlers(unsigned char key, struct cpu_user_regs *regs)
     tasklet_schedule(&run_all_keyhandlers_tasklet);
 }
 
-static void do_debug_key(unsigned char key, struct cpu_user_regs *regs)
+static void do_debugger_trap_fatal(struct cpu_user_regs *regs)
 {
-    printk("'%c' pressed -> trapping into debugger\n", key);
     (void)debugger_trap_fatal(0xf001, regs);
 
     /* Prevent tail call optimisation, which confuses xendbg. */
     barrier();
 }
 
+static void do_debug_key(unsigned char key, struct cpu_user_regs *regs)
+{
+    printk("'%c' pressed -> trapping into debugger\n", key);
+    if ( regs )
+        do_debugger_trap_fatal(regs);
+    else
+        run_in_exception_handler(do_debugger_trap_fatal);
+}
+
 static void do_toggle_alt_key(unsigned char key, struct cpu_user_regs *regs)
 {
     alt_key_handling = !alt_key_handling;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 06:39:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 06:39:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52919.92360 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp3zz-0004iT-PH; Tue, 15 Dec 2020 06:39:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52919.92360; Tue, 15 Dec 2020 06:39:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp3zz-0004iM-LT; Tue, 15 Dec 2020 06:39:51 +0000
Received: by outflank-mailman (input) for mailman id 52919;
 Tue, 15 Dec 2020 06:39:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp3zy-0004iE-Ph; Tue, 15 Dec 2020 06:39:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp3zy-0007kE-I7; Tue, 15 Dec 2020 06:39:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp3zy-0000t6-9u; Tue, 15 Dec 2020 06:39:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kp3zy-000584-9Q; Tue, 15 Dec 2020 06:39:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=S4+BEVJwzaP6XuiDHFSI1nF6sGujztyM+FoIsIW+YAc=; b=4dy5dDsgDWEsFXeAefsGgMV/LA
	p0I6hwlJnz0jrctvW0QhDEG5ppw0SMduDbZzyVdBgWuAlvTYiqfW+MAmfHgbwa5zP8qjGSznGFPZ0
	TXIHYH4Y7pnigne3ureMN7kuIyYNBYPyk0KnuzYANHUDE1TOfXu2Ximxgwg3TdemsRY8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157545-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157545: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=01726b6d23d4c8a870dbd5b96c0b9e3caf38ef3c
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Dec 2020 06:39:50 +0000

flight 157545 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157545/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 01726b6d23d4c8a870dbd5b96c0b9e3caf38ef3c
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    5 days
Failing since        157348  2020-12-09 15:39:39 Z    5 days   46 attempts
Testing same since   157531  2020-12-14 22:40:42 Z    0 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Ard Biesheuvel <ard.biesheuvel@arm.com>
  Baraneedharan Anbazhagan <anbazhagan@hp.com>
  Baraneedharan Anbazhagan <anbazhgan@hp.com>
  Fan Wang <fan.wang@intel.com>
  James Bottomley <jejb@linux.ibm.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Michael Kubacki <michael.kubacki@microsoft.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Star Zeng <star.zeng@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 561 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 07:27:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 07:27:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52931.92375 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp4kB-0000h9-G1; Tue, 15 Dec 2020 07:27:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52931.92375; Tue, 15 Dec 2020 07:27:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp4kB-0000h2-Cw; Tue, 15 Dec 2020 07:27:35 +0000
Received: by outflank-mailman (input) for mailman id 52931;
 Tue, 15 Dec 2020 07:27:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kp4kA-0000gw-25
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 07:27:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0a59419f-79ff-438e-8231-ca57484818ca;
 Tue, 15 Dec 2020 07:27:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 59F13AC7F;
 Tue, 15 Dec 2020 07:27:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0a59419f-79ff-438e-8231-ca57484818ca
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608017251; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=nj9ihax1/LrptfS9HSC1+VTxcZq+NVNUTCtfqAvgkPo=;
	b=FozNUbfhAKc8ap0AvXU0MWeqHO0YYsbgxBpCqBLfwmZdg26YtFCRWG+CzdTZ+voDttAXG4
	Cgea/51PqVE0bNbQ9w6SSKS327Oocp2np39zl9AApNdl+Ucec2sjKOJSyHImcnzP6+x9dw
	sPWSaMCky4j69xCMPosIrJmduYpkEec=
To: Julien Grall <julien@xen.org>, aams@amazon.de
Cc: linux-kernel@vger.kernel.org,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 foersleo@amazon.de
References: <ce881240-284f-8470-10f1-5cce353ee903@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: xen/evtchn: Interrupt for port 34, but apparently not enabled;
 per-user 00000000a86a4c1b on 5.10
Message-ID: <b5c32c48-3e74-2045-62ec-560b19766389@suse.com>
Date: Tue, 15 Dec 2020 08:27:30 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <ce881240-284f-8470-10f1-5cce353ee903@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="aZ8yJ9HMcgUBjSDsb6zsumgc5RgbItXS0"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--aZ8yJ9HMcgUBjSDsb6zsumgc5RgbItXS0
Content-Type: multipart/mixed; boundary="ykQYKCMfBobgZ4ASufJVLpgcjHbZd6Ss7";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, aams@amazon.de
Cc: linux-kernel@vger.kernel.org,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 foersleo@amazon.de
Message-ID: <b5c32c48-3e74-2045-62ec-560b19766389@suse.com>
Subject: Re: xen/evtchn: Interrupt for port 34, but apparently not enabled;
 per-user 00000000a86a4c1b on 5.10
References: <ce881240-284f-8470-10f1-5cce353ee903@xen.org>
In-Reply-To: <ce881240-284f-8470-10f1-5cce353ee903@xen.org>

--ykQYKCMfBobgZ4ASufJVLpgcjHbZd6Ss7
Content-Type: multipart/mixed;
 boundary="------------BFD7A9977F6A02DCE726E85E"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------BFD7A9977F6A02DCE726E85E
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 14.12.20 22:25, Julien Grall wrote:
> Hi Juergen,
>
> When testing Linux 5.10 dom0, I could reliably hit the following warning
> when using the event 2L ABI:
>
> [  589.591737] Interrupt for port 34, but apparently not enabled;
> per-user 00000000a86a4c1b
> [  589.593259] WARNING: CPU: 0 PID: 1111 at
> /home/ANT.AMAZON.COM/jgrall/works/oss/linux/drivers/xen/evtchn.c:170
> evtchn_interrupt+0xeb/0x100
> [  589.595514] Modules linked in:
> [  589.596145] CPU: 0 PID: 1111 Comm: qemu-system-i38 Tainted: G
> W         5.10.0+ #180
> [  589.597708] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS
> rel-1.12.0-59-gc9ba5276e321-prebuilt.qemu.org 04/01/2014
> [  589.599782] RIP: e030:evtchn_interrupt+0xeb/0x100
> [  589.600698] Code: 48 8d bb d8 01 00 00 ba 01 00 00 00 be 1d 00 00 00
> e8 d9 10 ca ff eb b2 8b 75 20 48 89 da 48 c7 c7 a8 31 3d 82 e8 65 29 a0
> ff <0f> 0b e9 42 ff ff ff 0f 1f 40 00 66 2e 0f 1f 84 00 00 00 00 00 0f
> [  589.604087] RSP: e02b:ffffc90040003e70 EFLAGS: 00010086
> [  589.605102] RAX: 0000000000000000 RBX: ffff888102091800 RCX: 0000000000000027
> [  589.606445] RDX: 0000000000000000 RSI: ffff88817fe19150 RDI: ffff88817fe19158
> [  589.607790] RBP: ffff88810f5ab980 R08: 0000000000000001 R09: 0000000000328980
> [  589.609134] R10: 0000000000000000 R11: ffffc90040003c70 R12: ffff888107fd3c00
> [  589.610484] R13: ffffc90040003ed4 R14: 0000000000000000 R15: ffff88810f5ffd80
> [  589.611828] FS:  00007f960c4b8ac0(0000) GS:ffff88817fe00000(0000) knlGS:0000000000000000
> [  589.613348] CS:  10000e030 DS: 0000 ES: 0000 CR0: 0000000080050033
> [  589.614525] CR2: 00007f17ee72e000 CR3: 000000010f5b6000 CR4: 0000000000050660
> [  589.615874] Call Trace:
> [  589.616402]  <IRQ>
> [  589.616855]  __handle_irq_event_percpu+0x4e/0x2c0
> [  589.617784]  handle_irq_event_percpu+0x30/0x80
> [  589.618660]  handle_irq_event+0x3a/0x60
> [  589.619428]  handle_edge_irq+0x9b/0x1f0
> [  589.620209]  generic_handle_irq+0x4f/0x60
> [  589.621008]  evtchn_2l_handle_events+0x160/0x280
> [  589.621913]  __xen_evtchn_do_upcall+0x66/0xb0
> [  589.622767]  __xen_pv_evtchn_do_upcall+0x11/0x20
> [  589.623665]  asm_call_irq_on_stack+0x12/0x20
> [  589.624511]  </IRQ>
> [  589.624978]  xen_pv_evtchn_do_upcall+0x77/0xf0
> [  589.625848]  exc_xen_hypervisor_callback+0x8/0x10
>
> This can be reproduced by creating/destroying guests in a loop, although
> I have struggled to reproduce it on vanilla Xen.
>
> After several hours of debugging, I think I have found the root cause.
>
> While we only expect the unmask to happen when the event channel is
> EOIed, there is an unmask happening as part of handle_edge_irq() because
> the interrupt was seen as pending by another vCPU (IRQS_PENDING is set).
>
> It turns out that the event channel is set for multiple vCPUs in
> cpu_evtchn_mask. This is happening because the affinity is not cleared
> when freeing an event channel.
>
> The implementation of evtchn_2l_handle_events() will look for all the
> active interrupts for the current vCPU and only later clear the pending
> bit (via the ack() callback). IOW, I believe this is not an atomic
> operation.
>
> Even if Xen will notify the event to a single vCPU, evtchn_pending_sel
> may still be set on the other vCPU (thanks to a different event
> channel). Therefore, there is a chance that two vCPUs will try to handle
> the same interrupt.
>
> The IRQ handler handle_edge_irq() is able to deal with that and will
> mask/unmask the interrupt. This will mess with the lateeoi logic
> (although I managed to reproduce it once without XSA-332).

Thanks for the analysis!

> My initial idea to fix the problem was to switch the affinity from CPU X
> to CPU0 when the event channel is freed.
>
> However, I am not sure this is enough because I haven't found anything
> yet preventing a race between evtchn_2l_handle_events() and
> evtchn_2l_bind_vcpu().
>
> So maybe we want to introduce refcounting (if there is nothing provided
> by the IRQ framework) and only unmask when the counter drops to 0.
>
> Any opinions?

I think we don't need a refcount, but just the two internal states "masked"
and "eoi_pending", with an unmask issued only when both are false. "masked"
is set when the event is being masked. When delivering a lateeoi irq,
"eoi_pending" is set and "masked" is reset. "masked" is also reset when a
normal unmask happens, and "eoi_pending" is reset when a lateeoi is
signaled. Any reset of "masked" or "eoi_pending" checks the other flag and
performs an unmask if both are false.
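The flag logic above can be sketched as a small state machine. This is only
a model of the rule "unmask iff !masked && !eoi_pending", not the eventual
patch: the helper names and the boolean stand-in for the real hypervisor
unmask are hypothetical.

```c
#include <stdbool.h>

/* Hypothetical per-event state, modelling the proposal above. */
struct evtchn_state {
    bool masked;       /* set while the event is explicitly masked */
    bool eoi_pending;  /* set between irq delivery and the lateeoi */
    bool unmasked;     /* stand-in for the actual hypervisor unmask */
};

/* Unmask only when neither flag is set. */
static void maybe_unmask(struct evtchn_state *s)
{
    if (!s->masked && !s->eoi_pending)
        s->unmasked = true;
}

static void do_mask(struct evtchn_state *s)
{
    s->masked = true;
    s->unmasked = false;
}

/* Delivering a lateeoi irq: set "eoi_pending", reset "masked". */
static void deliver_lateeoi_irq(struct evtchn_state *s)
{
    s->eoi_pending = true;
    s->masked = false;
    maybe_unmask(s);   /* blocked: eoi_pending is now set */
}

/* A normal unmask (e.g. from handle_edge_irq()) resets "masked". */
static void do_unmask(struct evtchn_state *s)
{
    s->masked = false;
    maybe_unmask(s);
}

/* A signalled lateeoi resets "eoi_pending". */
static void do_lateeoi(struct evtchn_state *s)
{
    s->eoi_pending = false;
    maybe_unmask(s);
}
```

In this model the spurious unmask coming from handle_edge_irq() is
harmless: it only clears "masked", and the still-set "eoi_pending" keeps
the event masked until the lateeoi arrives.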

I'll write a patch.


Juergen

--------------BFD7A9977F6A02DCE726E85E--

--ykQYKCMfBobgZ4ASufJVLpgcjHbZd6Ss7--

--aZ8yJ9HMcgUBjSDsb6zsumgc5RgbItXS0--


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 07:37:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 07:37:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52938.92387 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp4tJ-0001hH-E8; Tue, 15 Dec 2020 07:37:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52938.92387; Tue, 15 Dec 2020 07:37:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp4tJ-0001hA-Ax; Tue, 15 Dec 2020 07:37:01 +0000
Received: by outflank-mailman (input) for mailman id 52938;
 Tue, 15 Dec 2020 07:36:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp4tH-0001h2-Cz; Tue, 15 Dec 2020 07:36:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp4tH-0000JG-1q; Tue, 15 Dec 2020 07:36:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp4tG-0002P8-PD; Tue, 15 Dec 2020 07:36:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kp4tG-00089W-OU; Tue, 15 Dec 2020 07:36:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9/mCxcU61H3VOarhcyxygELPAkhgS0VLI+cEzhrm9gk=; b=GUY3oGdukW+akUGuU06dxrYKgn
	wuHgTD4DzCr4Tc4yynSV10DZQapm96n4Y3ihz6PBtYAZ9qP3T3+9HfvcE+aPIP16KwoL0P1sVI5Sr
	PD6w+pdcj+Zv1v9jbZ6Rg/2XJkn8ES1fkr6vi3f1buLh5QmOewsg+/wf9R6fvRrEGjOA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157533-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157533: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=aa14de086675280206dbc1849da6f85b75f62f1b
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Dec 2020 07:36:58 +0000

flight 157533 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157533/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                aa14de086675280206dbc1849da6f85b75f62f1b
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  116 days
Failing since        152659  2020-08-21 14:07:39 Z  115 days  243 attempts
Testing same since   157533  2020-12-15 00:07:52 Z    0 days    1 attempts

------------------------------------------------------------
310 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 76448 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 08:09:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 08:09:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52952.92402 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp5On-000539-Fe; Tue, 15 Dec 2020 08:09:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52952.92402; Tue, 15 Dec 2020 08:09:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp5On-000532-BV; Tue, 15 Dec 2020 08:09:33 +0000
Received: by outflank-mailman (input) for mailman id 52952;
 Tue, 15 Dec 2020 08:09:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp5Ol-00052u-Vf; Tue, 15 Dec 2020 08:09:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp5Ol-0001PW-OB; Tue, 15 Dec 2020 08:09:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp5Ol-0003YX-Ex; Tue, 15 Dec 2020 08:09:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kp5Ol-0008VE-ET; Tue, 15 Dec 2020 08:09:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=szcN2JpBv4iOluukdXqd2rixzNz2Ip1jRVbBYKLaSLU=; b=hzM4ewHOj5ELC9S7JcdijTSBfD
	EMWbb+AZaDVo2uLU/JX43clUkD5NBlVttoMs1sUWRy+/eb0bMgxsgwJiiQBGCtqJo6zsm0ezE4cmd
	qaVOMkyz8y9ihX48DV1WpPgYQyA6Qu4MatLhJffOCFZnH/Me6+wRcHx/eH73z3MR2+Hk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157547-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157547: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=dcaa93936591883aa7826eb45ef00416ad82ef08
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Dec 2020 08:09:31 +0000

flight 157547 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157547/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 dcaa93936591883aa7826eb45ef00416ad82ef08
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    5 days
Failing since        157348  2020-12-09 15:39:39 Z    5 days   47 attempts
Testing same since   157547  2020-12-15 07:10:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Ard Biesheuvel <ard.biesheuvel@arm.com>
  Baraneedharan Anbazhagan <anbazhagan@hp.com>
  Baraneedharan Anbazhagan <anbazhgan@hp.com>
  Bret Barkelew <Bret.Barkelew@microsoft.com>
  Fan Wang <fan.wang@intel.com>
  James Bottomley <jejb@linux.ibm.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Michael D Kinney <michael.d.kinney@intel.com>
  Michael Kubacki <michael.kubacki@microsoft.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Star Zeng <star.zeng@intel.com>
  Ting Ye <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 623 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 08:44:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 08:44:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52972.92433 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp5wh-00007q-DK; Tue, 15 Dec 2020 08:44:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52972.92433; Tue, 15 Dec 2020 08:44:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp5wh-00007j-AG; Tue, 15 Dec 2020 08:44:35 +0000
Received: by outflank-mailman (input) for mailman id 52972;
 Tue, 15 Dec 2020 08:44:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C2hg=FT=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kp5wg-00007e-MH
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 08:44:34 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7fae789a-7964-4171-855c-b65bf073a094;
 Tue, 15 Dec 2020 08:44:32 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0BF8iUdu022065
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK)
 for <xen-devel@lists.xenproject.org>; Tue, 15 Dec 2020 09:44:31 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 499E22E992E; Tue, 15 Dec 2020 09:44:25 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7fae789a-7964-4171-855c-b65bf073a094
Date: Tue, 15 Dec 2020 09:44:25 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: xen-devel@lists.xenproject.org
Subject: Re: [PATCH 14/24] Pass bridge name to qemu and set XEN_DOMAIN_ID
Message-ID: <20201215084425.GA1447@antioche.eu.org>
References: <20201214163623.2127-1-bouyer@netbsd.org>
 <20201214163623.2127-15-bouyer@netbsd.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201214163623.2127-15-bouyer@netbsd.org>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Tue, 15 Dec 2020 09:44:31 +0100 (MET)

On Mon, Dec 14, 2020 at 05:36:13PM +0100, Manuel Bouyer wrote:
> Pass bridge name to qemu
> When starting qemu, set an environment variable XEN_DOMAIN_ID,
> to be used by qemu helper scripts

This one is not NetBSD-related; I should have sent it as a separate
git email, I guess (I'm not familiar with git, sorry).

But I think it can be useful for the community.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 09:02:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 09:02:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52983.92451 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp6E1-00020I-63; Tue, 15 Dec 2020 09:02:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52983.92451; Tue, 15 Dec 2020 09:02:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp6E1-00020B-2X; Tue, 15 Dec 2020 09:02:29 +0000
Received: by outflank-mailman (input) for mailman id 52983;
 Tue, 15 Dec 2020 09:02:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Vckb=FT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kp6Dz-000206-Tt
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 09:02:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1142cad0-147c-4da7-8e4b-719e062fa78a;
 Tue, 15 Dec 2020 09:02:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D0A88ACC6;
 Tue, 15 Dec 2020 09:02:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1142cad0-147c-4da7-8e4b-719e062fa78a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608022946; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=bAmMwBy6AtYV+lM/clxgOraoypqXS/jSPLvVclLrz+Y=;
	b=nbnw1fETW4MN2c2OKoXOOFmuOBn4MmZlBrLrcs4r10WJ/tSvY67Aai702Bnf1lViczM/tT
	lQu0doBiBl3VSgudnZXpQZD36+IBL4jK6Y0pssbHdQFwflZEOIRNbi6JeYdfj9Vr8isWM9
	p7UpUh5q0Uhl822nsRHSq8yk42DwJ0M=
Subject: Re: [PATCH v5 1/3] xen/arm: add support for
 run_in_exception_handler()
To: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201215063319.23290-1-jgross@suse.com>
 <20201215063319.23290-2-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <94e85d88-b0f0-01f6-99e0-386326bc044a@suse.com>
Date: Tue, 15 Dec 2020 10:02:24 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201215063319.23290-2-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 15.12.2020 07:33, Juergen Gross wrote:
> --- a/xen/include/asm-arm/bug.h
> +++ b/xen/include/asm-arm/bug.h
> @@ -15,65 +15,62 @@
>  
>  struct bug_frame {
>      signed int loc_disp;    /* Relative address to the bug address */
> -    signed int file_disp;   /* Relative address to the filename */
> +    signed int ptr_disp;    /* Relative address to the filename or function */
>      signed int msg_disp;    /* Relative address to the predicate (for ASSERT) */
>      uint16_t line;          /* Line number */
>      uint32_t pad0:16;       /* Padding for 8-bytes align */
>  };
>  
>  #define bug_loc(b) ((const void *)(b) + (b)->loc_disp)
> -#define bug_file(b) ((const void *)(b) + (b)->file_disp);
> +#define bug_ptr(b) ((const void *)(b) + (b)->ptr_disp);
>  #define bug_line(b) ((b)->line)
>  #define bug_msg(b) ((const char *)(b) + (b)->msg_disp)
>  
> -#define BUGFRAME_warn   0
> -#define BUGFRAME_bug    1
> -#define BUGFRAME_assert 2
> +#define BUGFRAME_run_fn 0
> +#define BUGFRAME_warn   1
> +#define BUGFRAME_bug    2
> +#define BUGFRAME_assert 3
>  
> -#define BUGFRAME_NR     3
> +#define BUGFRAME_NR     4
>  
>  /* Many versions of GCC doesn't support the asm %c parameter which would
>   * be preferable to this unpleasantness. We use mergeable string
>   * sections to avoid multiple copies of the string appearing in the
>   * Xen image.
>   */
> -#define BUG_FRAME(type, line, file, has_msg, msg) do {                      \
> +#define BUG_FRAME(type, line, ptr, msg) do {                                \
>      BUILD_BUG_ON((line) >> 16);                                             \
>      BUILD_BUG_ON((type) >= BUGFRAME_NR);                                    \
>      asm ("1:"BUG_INSTR"\n"                                                  \
> -         ".pushsection .rodata.str, \"aMS\", %progbits, 1\n"                \
> -         "2:\t.asciz " __stringify(file) "\n"                               \
> -         "3:\n"                                                             \
> -         ".if " #has_msg "\n"                                               \
> -         "\t.asciz " #msg "\n"                                              \
> -         ".endif\n"                                                         \
> -         ".popsection\n"                                                    \
> -         ".pushsection .bug_frames." __stringify(type) ", \"a\", %progbits\n"\
> -         "4:\n"                                                             \
> +         ".pushsection .bug_frames." __stringify(type) ", \"a\", %%progbits\n"\
> +         "2:\n"                                                             \
>           ".p2align 2\n"                                                     \
> -         ".long (1b - 4b)\n"                                                \
> -         ".long (2b - 4b)\n"                                                \
> -         ".long (3b - 4b)\n"                                                \
> +         ".long (1b - 2b)\n"                                                \
> +         ".long (%0 - 2b)\n"                                                \
> +         ".long (%1 - 2b)\n"                                                \
>           ".hword " __stringify(line) ", 0\n"                                \
> -         ".popsection");                                                    \
> +         ".popsection" :: "i" (ptr), "i" (msg));                            \
>  } while (0)

The comment ahead of the construct now looks to be at best stale, if
not entirely pointless. The reference to %c looks quite strange here
to me anyway - I can only guess it appeared because on x86 one has
to use %c to output constants as operands for .long and the like,
and this was then carried over to Arm as well without there really
being a need.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 09:21:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 09:21:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52989.92463 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp6WH-0003oJ-OR; Tue, 15 Dec 2020 09:21:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52989.92463; Tue, 15 Dec 2020 09:21:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp6WH-0003o6-Kn; Tue, 15 Dec 2020 09:21:21 +0000
Received: by outflank-mailman (input) for mailman id 52989;
 Tue, 15 Dec 2020 09:21:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp6WG-0003ny-3l; Tue, 15 Dec 2020 09:21:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp6WF-0002bJ-Qr; Tue, 15 Dec 2020 09:21:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp6WF-00075E-Hf; Tue, 15 Dec 2020 09:21:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kp6WF-00025M-HA; Tue, 15 Dec 2020 09:21:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8pcLy8kPy91e6GgDFtxyQQh78VePCJJAm6f1eTyl+uw=; b=LPSOJLM7PDNE8c1+rsIRW/B6gS
	zgPKNy4pt9p6nola8WiaBlB+1zY8ObD2mJOQaSrYBnGCaFBXi0pQpAsopP0S5LgUhdIhemdnhC9hS
	NPsFbcCrpsWNcAfD83PvfMbA35NUwI+9/N8y3nKjODBZ/LvsM0Y1auYpZXcFz4phu+mk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157543-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157543: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-xsm:xen-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64:xen-build:fail:regression
    libvirt:build-i386:xen-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-xsm:xen-build:fail:regression
    libvirt:build-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:build-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=bff2ad5d6b1f25da02802273934d2a519159fec7
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Dec 2020 09:21:19 +0000

flight 157543 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157543/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64                   6 xen-build                fail REGR. vs. 151777
 build-i386                    6 xen-build                fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-xsm                6 xen-build                fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              bff2ad5d6b1f25da02802273934d2a519159fec7
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  158 days
Failing since        151818  2020-07-11 04:18:52 Z  157 days  152 attempts
Testing same since   157543  2020-12-15 04:19:14 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu<tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 32988 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 09:23:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 09:23:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.52995.92478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp6YH-0003xf-6U; Tue, 15 Dec 2020 09:23:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 52995.92478; Tue, 15 Dec 2020 09:23:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp6YH-0003xY-23; Tue, 15 Dec 2020 09:23:25 +0000
Received: by outflank-mailman (input) for mailman id 52995;
 Tue, 15 Dec 2020 09:23:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kp6YF-0003xN-EL
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 09:23:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0775729c-c8d0-4f28-8434-c4793c185158;
 Tue, 15 Dec 2020 09:23:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7F769AC7F;
 Tue, 15 Dec 2020 09:23:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0775729c-c8d0-4f28-8434-c4793c185158
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608024201; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=MgAHZ0iX5Zkd6J19lMj90flPiXHuCowYOehgKczB4TA=;
	b=Yoj3aZDkzkGiWcxOYyyr9ypYYQHdkGQRzEMLn23bbdrdUYx/zcUMe+14nsElulAydQ3w6E
	m7BVKetqkdi/E7tl+sv8tpS3rnqewJsPNrrizhxMX8flBorn9W4SV4j11wStMNL5dnbAxX
	bM9hrSyg/+oQ27Z5RAeq5JCwfUiEoHo=
Subject: Re: [PATCH v5 1/3] xen/arm: add support for
 run_in_exception_handler()
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201215063319.23290-1-jgross@suse.com>
 <20201215063319.23290-2-jgross@suse.com>
 <94e85d88-b0f0-01f6-99e0-386326bc044a@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <d2e1bff5-9630-3a50-4149-cdf9b3a4e091@suse.com>
Date: Tue, 15 Dec 2020 10:23:20 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <94e85d88-b0f0-01f6-99e0-386326bc044a@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="VMEoU8Xs3ayWh9XpXgQF1J6tVSumVh6D0"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--VMEoU8Xs3ayWh9XpXgQF1J6tVSumVh6D0
Content-Type: multipart/mixed; boundary="VjGm9D2P4KBNFStnLj1qCQRRG53IqUz1r";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <d2e1bff5-9630-3a50-4149-cdf9b3a4e091@suse.com>
Subject: Re: [PATCH v5 1/3] xen/arm: add support for
 run_in_exception_handler()
References: <20201215063319.23290-1-jgross@suse.com>
 <20201215063319.23290-2-jgross@suse.com>
 <94e85d88-b0f0-01f6-99e0-386326bc044a@suse.com>
In-Reply-To: <94e85d88-b0f0-01f6-99e0-386326bc044a@suse.com>

--VjGm9D2P4KBNFStnLj1qCQRRG53IqUz1r
Content-Type: multipart/mixed;
 boundary="------------730045B961B9A5E060E398DE"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------730045B961B9A5E060E398DE
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 15.12.20 10:02, Jan Beulich wrote:
> On 15.12.2020 07:33, Juergen Gross wrote:
>> --- a/xen/include/asm-arm/bug.h
>> +++ b/xen/include/asm-arm/bug.h
>> @@ -15,65 +15,62 @@
>>  
>>   struct bug_frame {
>>       signed int loc_disp;    /* Relative address to the bug address */
>> -    signed int file_disp;   /* Relative address to the filename */
>> +    signed int ptr_disp;    /* Relative address to the filename or function */
>>       signed int msg_disp;    /* Relative address to the predicate (for ASSERT) */
>>       uint16_t line;          /* Line number */
>>       uint32_t pad0:16;       /* Padding for 8-bytes align */
>>   };
>>  
>>   #define bug_loc(b) ((const void *)(b) + (b)->loc_disp)
>> -#define bug_file(b) ((const void *)(b) + (b)->file_disp);
>> +#define bug_ptr(b) ((const void *)(b) + (b)->ptr_disp);
>>   #define bug_line(b) ((b)->line)
>>   #define bug_msg(b) ((const char *)(b) + (b)->msg_disp)
>>  
>> -#define BUGFRAME_warn   0
>> -#define BUGFRAME_bug    1
>> -#define BUGFRAME_assert 2
>> +#define BUGFRAME_run_fn 0
>> +#define BUGFRAME_warn   1
>> +#define BUGFRAME_bug    2
>> +#define BUGFRAME_assert 3
>>  
>> -#define BUGFRAME_NR     3
>> +#define BUGFRAME_NR     4
>>  
>>   /* Many versions of GCC doesn't support the asm %c parameter which would
>>    * be preferable to this unpleasantness. We use mergeable string
>>    * sections to avoid multiple copies of the string appearing in the
>>    * Xen image.
>>    */
>> -#define BUG_FRAME(type, line, file, has_msg, msg) do {                      \
>> +#define BUG_FRAME(type, line, ptr, msg) do {                                \
>>       BUILD_BUG_ON((line) >> 16);                                            \
>>       BUILD_BUG_ON((type) >= BUGFRAME_NR);                                   \
>>       asm ("1:"BUG_INSTR"\n"                                                 \
>> -         ".pushsection .rodata.str, \"aMS\", %progbits, 1\n"                \
>> -         "2:\t.asciz " __stringify(file) "\n"                               \
>> -         "3:\n"                                                             \
>> -         ".if " #has_msg "\n"                                               \
>> -         "\t.asciz " #msg "\n"                                              \
>> -         ".endif\n"                                                         \
>> -         ".popsection\n"                                                    \
>> -         ".pushsection .bug_frames." __stringify(type) ", \"a\", %progbits\n"\
>> -         "4:\n"                                                             \
>> +         ".pushsection .bug_frames." __stringify(type) ", \"a\", %%progbits\n"\
>> +         "2:\n"                                                             \
>>           ".p2align 2\n"                                                     \
>> -         ".long (1b - 4b)\n"                                                \
>> -         ".long (2b - 4b)\n"                                                \
>> -         ".long (3b - 4b)\n"                                                \
>> +         ".long (1b - 2b)\n"                                                \
>> +         ".long (%0 - 2b)\n"                                                \
>> +         ".long (%1 - 2b)\n"                                                \
>>           ".hword " __stringify(line) ", 0\n"                                \
>> -         ".popsection");                                                    \
>> +         ".popsection" :: "i" (ptr), "i" (msg));                            \
>>   } while (0)
> 
> The comment ahead of the construct now looks to be at best stale, if
> not entirely pointless. The reference to %c looks quite strange here
> to me anyway - I can only guess it appeared here because on x86 one
> has to use %c to output constants as operands for .long and alike,
> and this was then tried to use on Arm as well without there really
> being a need.

Probably so.

I can remove the comment, but would like an Arm maintainer to confirm.


Juergen

--------------730045B961B9A5E060E398DE
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------730045B961B9A5E060E398DE--

--VjGm9D2P4KBNFStnLj1qCQRRG53IqUz1r--

--VMEoU8Xs3ayWh9XpXgQF1J6tVSumVh6D0
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/YgIgFAwAAAAAACgkQsN6d1ii/Ey+D
qAgAj4vCgzlr8pfvdMMHrA4Op7OqFpui67VwxjLOaC1ucQQAaJYwVdiA+/5r1gIUimsA/dL6xIIc
lkox2BFnAbBMBU8jqgHU671Sh3gDUAxp9cw4vaDkeUN3KZHFegpX+6k/QyEmboP42pp2Pzigzwpj
TEDG6s1hGQEUBRc0FOITuC2f2dOznf4w9uLOEXBR1Lyjc/8IkolbnyP63of3sCmn6zwRAUTyr5iF
pTF74A5db3VlGt3o/QIBIiXZ8rmaCGgnuWyUOpPie8gnaRUJSK4mWbYXqxRjra/0mmWDsbSzq8zU
GOCI+W2nIAu4vemrI3pdPyCJ/cqtc0FX9PtSZPYZ+Q==
=qy3B
-----END PGP SIGNATURE-----

--VMEoU8Xs3ayWh9XpXgQF1J6tVSumVh6D0--


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 09:43:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 09:43:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53003.92490 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp6rs-0005p7-1l; Tue, 15 Dec 2020 09:43:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53003.92490; Tue, 15 Dec 2020 09:43:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp6rr-0005p0-Up; Tue, 15 Dec 2020 09:43:39 +0000
Received: by outflank-mailman (input) for mailman id 53003;
 Tue, 15 Dec 2020 09:43:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kWjD=FT=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kp6rq-0005ov-H1
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 09:43:38 +0000
Received: from EUR03-DB5-obe.outbound.protection.outlook.com (unknown
 [40.107.4.56]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d027654b-4a39-495c-a82b-6a57fde61bdd;
 Tue, 15 Dec 2020 09:43:37 +0000 (UTC)
Received: from AM5PR0502CA0007.eurprd05.prod.outlook.com
 (2603:10a6:203:91::17) by VI1PR08MB2765.eurprd08.prod.outlook.com
 (2603:10a6:802:18::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.22; Tue, 15 Dec
 2020 09:43:34 +0000
Received: from AM5EUR03FT007.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:91:cafe::96) by AM5PR0502CA0007.outlook.office365.com
 (2603:10a6:203:91::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.12 via Frontend
 Transport; Tue, 15 Dec 2020 09:43:34 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT007.mail.protection.outlook.com (10.152.16.145) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12 via Frontend Transport; Tue, 15 Dec 2020 09:43:33 +0000
Received: ("Tessian outbound 6ec21dac9dd3:v71");
 Tue, 15 Dec 2020 09:43:33 +0000
Received: from 0a9899d5fa4a.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 4F511974-48C0-47E9-8073-4E4D9278C132.1; 
 Tue, 15 Dec 2020 09:42:55 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 0a9899d5fa4a.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 15 Dec 2020 09:42:55 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0802MB2166.eurprd08.prod.outlook.com (2603:10a6:4:85::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.15; Tue, 15 Dec
 2020 09:42:54 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3654.025; Tue, 15 Dec 2020
 09:42:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Rahul Singh <Rahul.Singh@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 7/8] xen/arm: Add support for SMMUv3 driver
Thread-Topic: [PATCH v3 7/8] xen/arm: Add support for SMMUv3 driver
Thread-Index: AQHWzxYvVX92i8NXPkeNMctPP72KNqnx9LwAgAUGOoCAAAdQAIAA7OAA
Date: Tue, 15 Dec 2020 09:42:54 +0000
Message-ID: <CD549B7A-97C8-40F6-B762-6661A7EFAED1@arm.com>
References: <cover.1607617848.git.rahul.singh@arm.com>
 <33645b592bc5935a3b28ad576a819d06ed81e8dd.1607617848.git.rahul.singh@arm.com>
 <e26c96cb-245b-6927-c4a7-224c2114df42@xen.org>
 <1660236F-7BB0-4F3E-8CDD-10AE9282E2A3@arm.com>
 <6d693361-220c-fa1b-a04f-12a80f0aec4a@xen.org>
In-Reply-To: <6d693361-220c-fa1b-a04f-12a80f0aec4a@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
Content-ID: <68AB3AEB0E46974595D7AEC76A2F7A45@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Hi Julien,

> On 14 Dec 2020, at 19:35, Julien Grall <julien@xen.org> wrote:
>
>
>
> On 14/12/2020 19:08, Rahul Singh wrote:
>> Hello Julien,
>
> Hi Rahul,
>
>>> On 11 Dec 2020, at 2:25 pm, Julien Grall <julien@xen.org> wrote:
>>>
>>> Hi Rahul,
>>>
>>> On 10/12/2020 16:57, Rahul Singh wrote:
>>>>  struct arm_smmu_strtab_cfg {
>>>> @@ -613,8 +847,13 @@ struct arm_smmu_device {
>>>>  		u64			padding;
>>>>  	};
>>>>  -	/* IOMMU core code handle */
>>>> -	struct iommu_device		iommu;
>>>> +	/* Need to keep a list of SMMU devices */
>>>> +	struct list_head		devices;
>>>> +
>>>> +	/* Tasklets for handling evts/faults and pci page request IRQs*/
>>>> +	struct tasklet		evtq_irq_tasklet;
>>>> +	struct tasklet		priq_irq_tasklet;
>>>> +	struct tasklet		combined_irq_tasklet;
>>>>  };
>>>>    /* SMMU private data for each master */
>>>> @@ -638,7 +877,6 @@ enum arm_smmu_domain_stage {
>>>>    struct arm_smmu_domain {
>>>>  	struct arm_smmu_device		*smmu;
>>>> -	struct mutex			init_mutex; /* Protects smmu pointer */
>>>
>>> Hmmm... Your commit message says the mutex would be replaced by a spinlock. However, you are dropping the lock. What did I miss?
>> The Linux code uses the mutex in arm_smmu_attach_dev(), but in Xen this function is called from arm_smmu_assign_dev(), which already holds the spinlock when arm_smmu_attach_dev() is called, so I dropped the mutex to avoid a nested spinlock.
>> The timing analysis of using a spinlock in place of the mutex, compared to Linux, when attaching a device to the SMMU is still valid.
>
> I think it would be better to keep the current locking until the investigation is done.
>
> But if you still want to make this change, then you should explain in the commit message why the lock is dropped.
>
> [...]
>
>> WARN_ON(q->base_dma & (qsz - 1));
>> if (!unlikely(q->base_dma & (qsz - 1))) {
>> 	dev_info(smmu->dev, "allocated %u entries for %s\n",
>> 		1 << q->llq.max_n_shift, name);
>> }
>
> Right, but this doesn't address the second part of my comment.
>
> This change would *not* be necessary if the implementation of WARN_ON() in Xen returned whether the warning was triggered.
>
> Before considering changing the SMMU code, you should first attempt to modify the implementation of WARN_ON(). We can discuss other approaches if that discussion goes nowhere.

The code proposed by Rahul provides functionality equivalent to what Linux does.

Modifying the WARN_ON implementation in Xen to match how the Linux version works would make sense, but it should be done in its own patch, as it will imply modifying more Xen code, some of which will not be related to the SMMU and will need some validation.
So I do not think it would be fair to ask Rahul to also do this in the scope of this series.

Cheers
Bertrand

>
> Cheers,
>
> --
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 10:13:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 10:13:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53022.92516 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp7KW-0000Ck-MY; Tue, 15 Dec 2020 10:13:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53022.92516; Tue, 15 Dec 2020 10:13:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp7KW-0000Cd-Ic; Tue, 15 Dec 2020 10:13:16 +0000
Received: by outflank-mailman (input) for mailman id 53022;
 Tue, 15 Dec 2020 10:13:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kp7KV-0000CY-45
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 10:13:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kp7KQ-0003Y4-TD; Tue, 15 Dec 2020 10:13:10 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kp7KQ-0005O1-KO; Tue, 15 Dec 2020 10:13:10 +0000
Subject: Re: [PATCH v3 7/8] xen/arm: Add support for SMMUv3 driver
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1607617848.git.rahul.singh@arm.com>
 <33645b592bc5935a3b28ad576a819d06ed81e8dd.1607617848.git.rahul.singh@arm.com>
 <e26c96cb-245b-6927-c4a7-224c2114df42@xen.org>
 <1660236F-7BB0-4F3E-8CDD-10AE9282E2A3@arm.com>
 <6d693361-220c-fa1b-a04f-12a80f0aec4a@xen.org>
 <CD549B7A-97C8-40F6-B762-6661A7EFAED1@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <da26c36c-97ec-d9f6-abfd-642017c3df5c@xen.org>
Date: Tue, 15 Dec 2020 10:13:08 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <CD549B7A-97C8-40F6-B762-6661A7EFAED1@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 15/12/2020 09:42, Bertrand Marquis wrote:
> Hi Julien,

Hi,

>> On 14 Dec 2020, at 19:35, Julien Grall <julien@xen.org> wrote:
>>
>>
>>
>> On 14/12/2020 19:08, Rahul Singh wrote:
>>> Hello Julien,
>>
>> Hi Rahul,
>>
>>>> On 11 Dec 2020, at 2:25 pm, Julien Grall <julien@xen.org> wrote:
>>>>
>>>> Hi Rahul,
>>>>
>>>> On 10/12/2020 16:57, Rahul Singh wrote:
>>>>>   struct arm_smmu_strtab_cfg {
>>>>> @@ -613,8 +847,13 @@ struct arm_smmu_device {
>>>>>   		u64			padding;
>>>>>   	};
>>>>>   -	/* IOMMU core code handle */
>>>>> -	struct iommu_device		iommu;
>>>>> +	/* Need to keep a list of SMMU devices */
>>>>> +	struct list_head		devices;
>>>>> +
>>>>> +	/* Tasklets for handling evts/faults and pci page request IRQs*/
>>>>> +	struct tasklet		evtq_irq_tasklet;
>>>>> +	struct tasklet		priq_irq_tasklet;
>>>>> +	struct tasklet		combined_irq_tasklet;
>>>>>   };
>>>>>     /* SMMU private data for each master */
>>>>> @@ -638,7 +877,6 @@ enum arm_smmu_domain_stage {
>>>>>     struct arm_smmu_domain {
>>>>>   	struct arm_smmu_device		*smmu;
>>>>> -	struct mutex			init_mutex; /* Protects smmu pointer */
>>>>
>>>> Hmmm... Your commit message says the mutex would be replaced by a spinlock. However, you are dropping the lock. What did I miss?
>>> The Linux code uses the mutex in arm_smmu_attach_dev(), but in Xen this function is called from arm_smmu_assign_dev(), which already holds the spinlock when arm_smmu_attach_dev() is called, so I dropped the mutex to avoid a nested spinlock.
>>> The timing analysis of using a spinlock in place of the mutex, compared to Linux, when attaching a device to the SMMU is still valid.
>>
>> I think it would be better to keep the current locking until the investigation is done.
>>
>> But if you still want to make this change, then you should explain in the commit message why the lock is dropped.
>>
>> [...]
>>
>>> WARN_ON(q->base_dma & (qsz - 1));
>>> if (!unlikely(q->base_dma & (qsz - 1))) {
>>> 	dev_info(smmu->dev, "allocated %u entries for %s\n",
>>> 		1 << q->llq.max_n_shift, name);
>>> }
>>
>> Right, but this doesn't address the second part of my comment.
>>
>> This change would *not* be necessary if the implementation of WARN_ON() in Xen returned whether the warning was triggered.
>>
>> Before considering changing the SMMU code, you should first attempt to modify the implementation of WARN_ON(). We can discuss other approaches if that discussion goes nowhere.
> 
> The code proposed by Rahul provides functionality equivalent to what Linux does.
> 
> Modifying the WARN_ON implementation in Xen to match how the Linux version works would make sense, but it should be done in its own patch, as it will imply modifying more Xen code, some of which will not be related to the SMMU and will need some validation.

Let me start by saying that this was a request I already made on v2, and 
Rahul agreed. I saw no pushback on the request until now, so to me this 
meant it would be addressed in v3.

Further, validation seems to be a common argument every time I ask for a 
change in this series... Yes, validation is important, but it often 
doesn't require a lot of effort when the changes are simple... 
TBH, you are probably spending more time arguing against it.

> So I do not think it would be fair to ask Rahul to also do this in the scope of this series.

I would have agreed with this statement if the change were difficult. 
That is not the case here.

The first step when working upstream should always be to improve the 
existing helpers rather than working around them.

If that is not possible, because it is either too complex or there is 
pushback from the maintainers, then we can discuss workarounds.
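
[Editorial note: the Linux-style behaviour under discussion, WARN_ON() evaluating to whether the warning fired so callers can branch on it, can be sketched roughly as below. This is a standalone illustration, not the actual Xen or Linux implementation: WARN() here is a hypothetical placeholder for the real backtrace-printing macro, and queue_base_aligned() is an invented helper mirroring the queue-setup check quoted above.]

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for Xen's WARN(); the real macro dumps a backtrace. */
#define WARN() fprintf(stderr, "WARN at %s:%d\n", __FILE__, __LINE__)

/*
 * Linux-style WARN_ON(): warn when the condition holds, and evaluate to
 * that condition so callers can branch on it directly (implemented here
 * with a GNU statement expression, as in Linux and Xen code).
 */
#define WARN_ON(cond) ({                \
    bool warn_on_ret_ = !!(cond);       \
    if (warn_on_ret_)                   \
        WARN();                         \
    warn_on_ret_;                       \
})

/*
 * With such a WARN_ON(), the alignment check from the quoted snippet
 * collapses to a single test; this illustrative helper returns true
 * only when the queue base address is suitably aligned.
 */
static bool queue_base_aligned(unsigned long base_dma, unsigned long qsz)
{
    return !WARN_ON(base_dma & (qsz - 1));
}
```

A caller could then keep the Linux shape, e.g. `if (!WARN_ON(q->base_dma & (qsz - 1))) dev_info(...);`, without evaluating the condition twice.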

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 10:19:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 10:19:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53029.92527 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp7QW-0000Sh-CI; Tue, 15 Dec 2020 10:19:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53029.92527; Tue, 15 Dec 2020 10:19:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp7QW-0000Sa-8v; Tue, 15 Dec 2020 10:19:28 +0000
Received: by outflank-mailman (input) for mailman id 53029;
 Tue, 15 Dec 2020 10:19:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp7QU-0000SS-OM; Tue, 15 Dec 2020 10:19:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp7QU-0003eT-G7; Tue, 15 Dec 2020 10:19:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp7QU-0001ex-3f; Tue, 15 Dec 2020 10:19:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kp7QU-0004hO-38; Tue, 15 Dec 2020 10:19:26 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157536-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157536: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:build-amd64-xsm:xen-build:fail:regression
    xen-unstable:build-i386:xen-build:fail:regression
    xen-unstable:build-amd64:xen-build:fail:regression
    xen-unstable:build-i386-xsm:xen-build:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4
X-Osstest-Versions-That:
    xen=8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Dec 2020 10:19:26 +0000

flight 157536 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157536/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 157512
 build-i386                    6 xen-build                fail REGR. vs. 157512
 build-amd64                   6 xen-build                fail REGR. vs. 157512
 build-i386-xsm                6 xen-build                fail REGR. vs. 157512

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157512
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157512
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4
baseline version:
 xen                  8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4

Last test of basis   157536  2020-12-15 01:52:24 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64-xtf                                              pass    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 10:20:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 10:20:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53035.92542 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp7RJ-0001Fe-TX; Tue, 15 Dec 2020 10:20:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53035.92542; Tue, 15 Dec 2020 10:20:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp7RJ-0001FX-QV; Tue, 15 Dec 2020 10:20:17 +0000
Received: by outflank-mailman (input) for mailman id 53035;
 Tue, 15 Dec 2020 10:20:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kp7RI-0001FN-K2
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 10:20:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f6cd6360-53aa-43c9-a104-416a59602eae;
 Tue, 15 Dec 2020 10:20:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 22073ACC6;
 Tue, 15 Dec 2020 10:20:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f6cd6360-53aa-43c9-a104-416a59602eae
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608027614; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=v6E/FuAqBRtjEWpUkLOQQHYyECJd8V2OJmTVeXYYdiI=;
	b=tBSbALlU+ae9nzPimjHlRpbm1OEolSVZe4SfTzRrUs6vAL1RNq2pAXGKe4affFJewCp8IK
	0cfALj8ie7PTriy8jRWtyK+gj+h32rqv54aoH6eLM1cD2gJ3yMrHK7vgKJhCITgDhGyfTS
	xYkZTmaPa7SbT3yV4kJwG9H63PTk10w=
Subject: Re: xen/evtchn: Interrupt for port 34, but apparently not enabled;
 per-user 00000000a86a4c1b on 5.10
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, aams@amazon.de
Cc: linux-kernel@vger.kernel.org,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 foersleo@amazon.de
References: <ce881240-284f-8470-10f1-5cce353ee903@xen.org>
 <b5c32c48-3e74-2045-62ec-560b19766389@suse.com>
Message-ID: <da65a69e-389b-1602-1479-6799ce10c101@suse.com>
Date: Tue, 15 Dec 2020 11:20:13 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <b5c32c48-3e74-2045-62ec-560b19766389@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="pJJnF1VJDpZDxjIP1XrTf4fIpeQCpWAAD"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--pJJnF1VJDpZDxjIP1XrTf4fIpeQCpWAAD
Content-Type: multipart/mixed; boundary="1cw9j9rNmL20humd3iacRPzPxKY2HzWmm";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, aams@amazon.de
Cc: linux-kernel@vger.kernel.org,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 foersleo@amazon.de
Message-ID: <da65a69e-389b-1602-1479-6799ce10c101@suse.com>
Subject: Re: xen/evtchn: Interrupt for port 34, but apparently not enabled;
 per-user 00000000a86a4c1b on 5.10
References: <ce881240-284f-8470-10f1-5cce353ee903@xen.org>
 <b5c32c48-3e74-2045-62ec-560b19766389@suse.com>
In-Reply-To: <b5c32c48-3e74-2045-62ec-560b19766389@suse.com>

--1cw9j9rNmL20humd3iacRPzPxKY2HzWmm
Content-Type: multipart/mixed;
 boundary="------------4EFFBD265BCE690D097DF724"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------4EFFBD265BCE690D097DF724
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 15.12.20 08:27, Jürgen Groß wrote:
> On 14.12.20 22:25, Julien Grall wrote:
>> Hi Juergen,
>>
>> When testing Linux 5.10 dom0, I could reliably hit the following
>> warning when using the event 2L ABI:
>>
>> [  589.591737] Interrupt for port 34, but apparently not enabled;
>> per-user 00000000a86a4c1b
>> [  589.593259] WARNING: CPU: 0 PID: 1111 at
>> /home/ANT.AMAZON.COM/jgrall/works/oss/linux/drivers/xen/evtchn.c:170
>> evtchn_interrupt+0xeb/0x100
>> [  589.595514] Modules linked in:
>> [  589.596145] CPU: 0 PID: 1111 Comm: qemu-system-i38 Tainted: G W        5.10.0+ #180
>> [  589.597708] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009),
>> BIOS rel-1.12.0-59-gc9ba5276e321-prebuilt.qemu.org 04/01/2014
>> [  589.599782] RIP: e030:evtchn_interrupt+0xeb/0x100
>> [  589.600698] Code: 48 8d bb d8 01 00 00 ba 01 00 00 00 be 1d 00 00
>> 00 e8 d9 10 ca ff eb b2 8b 75 20 48 89 da 48 c7 c7 a8 31 3d 82 e8 65
>> 29 a0 ff <0f> 0b e9 42 ff ff ff 0f 1f 40 00 66 2e 0f 1f 84 00 00 00 00
>> 00 0f
>> [  589.604087] RSP: e02b:ffffc90040003e70 EFLAGS: 00010086
>> [  589.605102] RAX: 0000000000000000 RBX: ffff888102091800 RCX: 0000000000000027
>> [  589.606445] RDX: 0000000000000000 RSI: ffff88817fe19150 RDI: ffff88817fe19158
>> [  589.607790] RBP: ffff88810f5ab980 R08: 0000000000000001 R09: 0000000000328980
>> [  589.609134] R10: 0000000000000000 R11: ffffc90040003c70 R12: ffff888107fd3c00
>> [  589.610484] R13: ffffc90040003ed4 R14: 0000000000000000 R15: ffff88810f5ffd80
>> [  589.611828] FS:  00007f960c4b8ac0(0000) GS:ffff88817fe00000(0000) knlGS:0000000000000000
>> [  589.613348] CS:  10000e030 DS: 0000 ES: 0000 CR0: 0000000080050033
>> [  589.614525] CR2: 00007f17ee72e000 CR3: 000000010f5b6000 CR4: 0000000000050660
>> [  589.615874] Call Trace:
>> [  589.616402]  <IRQ>
>> [  589.616855]  __handle_irq_event_percpu+0x4e/0x2c0
>> [  589.617784]  handle_irq_event_percpu+0x30/0x80
>> [  589.618660]  handle_irq_event+0x3a/0x60
>> [  589.619428]  handle_edge_irq+0x9b/0x1f0
>> [  589.620209]  generic_handle_irq+0x4f/0x60
>> [  589.621008]  evtchn_2l_handle_events+0x160/0x280
>> [  589.621913]  __xen_evtchn_do_upcall+0x66/0xb0
>> [  589.622767]  __xen_pv_evtchn_do_upcall+0x11/0x20
>> [  589.623665]  asm_call_irq_on_stack+0x12/0x20
>> [  589.624511]  </IRQ>
>> [  589.624978]  xen_pv_evtchn_do_upcall+0x77/0xf0
>> [  589.625848]  exc_xen_hypervisor_callback+0x8/0x10
>>
>> This can be reproduced by creating/destroying guests in a loop,
>> although I have struggled to reproduce it on vanilla Xen.
>>
>> After several hours of debugging, I think I have found the root cause.
>>
>> While we only expect the unmask to happen when the event channel is
>> EOIed, there is an unmask happening as part of handle_edge_irq()
>> because the interrupt was seen as pending by another vCPU
>> (IRQS_PENDING is set).
>>
>> It turns out that the event channel is set for multiple vCPUs in
>> cpu_evtchn_mask. This is happening because the affinity is not cleared
>> when freeing an event channel.
>>
>> The implementation of evtchn_2l_handle_events() will look for all the
>> active interrupts for the current vCPU and only later clear the
>> pending bit (via the ack() callback). IOW, I believe this is not an
>> atomic operation.
>>
>> Even if Xen notifies the event to a single vCPU, evtchn_pending_sel
>> may still be set on the other vCPU (thanks to a different event
>> channel). Therefore, there is a chance that two vCPUs will try to
>> handle the same interrupt.
>>
>> The IRQ handler handle_edge_irq() is able to deal with that and will
>> mask/unmask the interrupt. This will mess with the lateeoi logic
>> (although I managed to reproduce it once without XSA-332).
>
> Thanks for the analysis!
>
>> My initial idea to fix the problem was to switch the affinity from CPU
>> X to CPU0 when the event channel is freed.
>>
>> However, I am not sure this is enough, because I haven't found
>> anything yet preventing a race between evtchn_2l_handle_events() and
>> evtchn_2l_bind_vcpu().
>>
>> So maybe we want to introduce refcounting (if there is nothing
>> provided by the IRQ framework) and only unmask when the counter drops
>> to 0.
>>
>> Any opinions?
>
> I think we don't need a refcount, but just the internal states "masked"
> and "eoi_pending", and an unmask only if both are false. "masked" will
> be set when the event is being masked. When delivering a lateeoi irq,
> "eoi_pending" will be set and "masked" reset. "masked" will be reset
> when a normal unmask is happening. And "eoi_pending" will be reset
> when a lateeoi is signaled. Any reset of "masked" or "eoi_pending"
> will check the other flag and do an unmask if both are false.
>
> I'll write a patch.

Julien, could you please test the attached (only build tested) patch?


Juergen

--------------4EFFBD265BCE690D097DF724
Content-Type: text/x-patch; charset=UTF-8;
 name="0001-xen-events-don-t-unmask-an-event-channel-when-an-eoi.patch"
Content-Transfer-Encoding: 8bit
Content-Disposition: attachment;
 filename*0="0001-xen-events-don-t-unmask-an-event-channel-when-an-eoi.pa";
 filename*1="tch"

From 2ce5786fd6f29ec09ad653e30e089042ea62b309 Mon Sep 17 00:00:00 2001
From: Juergen Gross <jgross@suse.com>
Date: Tue, 15 Dec 2020 10:37:11 +0100
Subject: [PATCH] xen/events: don't unmask an event channel when an eoi is
 pending

An event channel should be kept masked when an eoi is pending for it.
When being migrated to another cpu it might be unmasked, though.

In order to avoid this keep two different flags for each event channel
to be able to distinguish "normal" masking/unmasking from eoi related
masking/unmasking. The event channel should only be able to generate
an interrupt if both flags are cleared.

Cc: stable@vger.kernel.org
Fixes: 54c9de89895e0a36047 ("xen/events: add a new late EOI evtchn framework")
Reported-by: Julien Grall <julien@xen.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/events/events_base.c | 64 +++++++++++++++++++++++++++-----
 1 file changed, 54 insertions(+), 10 deletions(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 6038c4c35db5..b024200f1677 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -96,7 +96,9 @@ struct irq_info {
 	struct list_head eoi_list;
 	short refcnt;
 	short spurious_cnt;
-	enum xen_irq_type type; /* type */
+	short type;		/* type: IRQT_* */
+	bool masked;		/* Is event explicitly masked? */
+	bool eoi_pending;	/* Is EOI pending? */
 	unsigned irq;
 	evtchn_port_t evtchn;   /* event channel */
 	unsigned short cpu;     /* cpu bound */
@@ -272,6 +274,8 @@ static int xen_irq_info_common_setup(struct irq_info *info,
 	info->irq = irq;
 	info->evtchn = evtchn;
 	info->cpu = cpu;
+	info->masked = true;
+	info->eoi_pending = false;
 
 	ret = set_evtchn_to_irq(evtchn, irq);
 	if (ret < 0)
@@ -545,7 +549,10 @@ static void xen_irq_lateeoi_locked(struct irq_info *info, bool spurious)
 	}
 
 	info->eoi_time = 0;
-	unmask_evtchn(evtchn);
+	info->eoi_pending = false;
+
+	if (!info->masked)
+		unmask_evtchn(evtchn);
 }
 
 static void xen_irq_lateeoi_worker(struct work_struct *work)
@@ -801,7 +808,11 @@ static unsigned int __startup_pirq(unsigned int irq)
 		goto err;
 
 out:
-	unmask_evtchn(evtchn);
+	info->masked = false;
+
+	if (!info->eoi_pending)
+		unmask_evtchn(evtchn);
+
 	eoi_pirq(irq_get_irq_data(irq));
 
 	return 0;
@@ -828,6 +839,7 @@ static void shutdown_pirq(struct irq_data *data)
 	if (!VALID_EVTCHN(evtchn))
 		return;
 
+	info->masked = true;
 	mask_evtchn(evtchn);
 	xen_evtchn_close(evtchn);
 	xen_irq_info_cleanup(info);
@@ -1713,18 +1725,26 @@ EXPORT_SYMBOL_GPL(xen_set_affinity_evtchn);
 
 static void enable_dynirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
-	if (VALID_EVTCHN(evtchn))
-		unmask_evtchn(evtchn);
+	if (VALID_EVTCHN(evtchn)) {
+		info->masked = false;
+
+		if (!info->eoi_pending)
+			unmask_evtchn(evtchn);
+	}
 }
 
 static void disable_dynirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
-	if (VALID_EVTCHN(evtchn))
+	if (VALID_EVTCHN(evtchn)) {
+		info->masked = true;
 		mask_evtchn(evtchn);
+	}
 }
 
 static void ack_dynirq(struct irq_data *data)
@@ -1754,6 +1774,30 @@ static void mask_ack_dynirq(struct irq_data *data)
 	ack_dynirq(data);
 }
 
+static void lateeoi_ack_dynirq(struct irq_data *data)
+{
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
+
+	if (VALID_EVTCHN(evtchn)) {
+		info->masked = false;
+		info->eoi_pending = true;
+		mask_evtchn(evtchn);
+	}
+}
+
+static void lateeoi_mask_ack_dynirq(struct irq_data *data)
+{
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
+
+	if (VALID_EVTCHN(evtchn)) {
+		info->masked = true;
+		info->eoi_pending = true;
+		mask_evtchn(evtchn);
+	}
+}
+
 static int retrigger_dynirq(struct irq_data *data)
 {
 	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
@@ -1973,8 +2017,8 @@ static struct irq_chip xen_lateeoi_chip __read_mostly = {
 	.irq_mask		= disable_dynirq,
 	.irq_unmask		= enable_dynirq,
 
-	.irq_ack		= mask_ack_dynirq,
-	.irq_mask_ack		= mask_ack_dynirq,
+	.irq_ack		= lateeoi_ack_dynirq,
+	.irq_mask_ack		= lateeoi_mask_ack_dynirq,
 
 	.irq_set_affinity	= set_affinity_irq,
 	.irq_retrigger		= retrigger_dynirq,
-- 
2.26.2





From xen-devel-bounces@lists.xenproject.org Tue Dec 15 10:52:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 10:52:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53051.92561 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp7vw-00043G-J2; Tue, 15 Dec 2020 10:51:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53051.92561; Tue, 15 Dec 2020 10:51:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp7vw-000439-FC; Tue, 15 Dec 2020 10:51:56 +0000
Received: by outflank-mailman (input) for mailman id 53051;
 Tue, 15 Dec 2020 10:51:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kWjD=FT=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kp7vu-00042I-U5
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 10:51:55 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe08::612])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f7a000eb-2963-4d24-a371-2c58a4570b0e;
 Tue, 15 Dec 2020 10:51:52 +0000 (UTC)
Received: from DB6PR0202CA0040.eurprd02.prod.outlook.com (2603:10a6:4:a5::26)
 by VI1PR08MB3838.eurprd08.prod.outlook.com (2603:10a6:803:b8::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.14; Tue, 15 Dec
 2020 10:51:49 +0000
Received: from DB5EUR03FT026.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:a5:cafe::e0) by DB6PR0202CA0040.outlook.office365.com
 (2603:10a6:4:a5::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.12 via Frontend
 Transport; Tue, 15 Dec 2020 10:51:48 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT026.mail.protection.outlook.com (10.152.20.159) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12 via Frontend Transport; Tue, 15 Dec 2020 10:51:48 +0000
Received: ("Tessian outbound 6ec21dac9dd3:v71");
 Tue, 15 Dec 2020 10:51:48 +0000
Received: from 9a9c661c9ebe.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 7218F7FF-D947-42D2-A1CD-ED1FFF6ABD9B.1; 
 Tue, 15 Dec 2020 10:51:11 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 9a9c661c9ebe.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 15 Dec 2020 10:51:11 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0801MB1909.eurprd08.prod.outlook.com (2603:10a6:4:72::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.21; Tue, 15 Dec
 2020 10:51:10 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3654.025; Tue, 15 Dec 2020
 10:51:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f7a000eb-2963-4d24-a371-2c58a4570b0e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3cZP+bG6ufanOe/QG9Shy+mV6coSZ8lZ3AJAYjtc1rI=;
 b=j1KMRSq1u9hJmEbxJ6vTRMknki8dYSmVO/TMTQO/XA3+hq+IQvCat/wFBC/waVCVlZHptTUceljYMAkgiIfZ3S8Yr0qIiEAvOuwl5mI2kSYcHoJA6o3Zl6/8dOcoUz5z50A2Uc0TJkdA8jc0E+5BrGEq8MFPE2V9Wo6xYTvqoUs=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 6a16aaf9e87e4bb3
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Rahul Singh <Rahul.Singh@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 7/8] xen/arm: Add support for SMMUv3 driver
Thread-Topic: [PATCH v3 7/8] xen/arm: Add support for SMMUv3 driver
Thread-Index:
 AQHWzxYvVX92i8NXPkeNMctPP72KNqnx9LwAgAUGOoCAAAdQAIAA7OAAgAAIcgCAAAqfgA==
Date: Tue, 15 Dec 2020 10:51:10 +0000
Message-ID: <99C334D2-B77B-4B8A-8294-00A811CFB80B@arm.com>
References: <cover.1607617848.git.rahul.singh@arm.com>
 <33645b592bc5935a3b28ad576a819d06ed81e8dd.1607617848.git.rahul.singh@arm.com>
 <e26c96cb-245b-6927-c4a7-224c2114df42@xen.org>
 <1660236F-7BB0-4F3E-8CDD-10AE9282E2A3@arm.com>
 <6d693361-220c-fa1b-a04f-12a80f0aec4a@xen.org>
 <CD549B7A-97C8-40F6-B762-6661A7EFAED1@arm.com>
 <da26c36c-97ec-d9f6-abfd-642017c3df5c@xen.org>
In-Reply-To: <da26c36c-97ec-d9f6-abfd-642017c3df5c@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: e808acf1-12da-4ae7-eae5-08d8a0e76c3e
x-ms-traffictypediagnostic: DB6PR0801MB1909:|VI1PR08MB3838:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB3838E2F2934D217D26ACDDF59DC60@VI1PR08MB3838.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <C2CE6A0117DE5C46AFBEF5281587279B@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1909
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT026.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	9a1eb9e2-0daa-46d9-e40a-08d8a0e7554a
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Dec 2020 10:51:48.6777
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e808acf1-12da-4ae7-eae5-08d8a0e76c3e
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT026.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3838

Hi Julien,

> On 15 Dec 2020, at 10:13, Julien Grall <julien@xen.org> wrote:
>
>
>
> On 15/12/2020 09:42, Bertrand Marquis wrote:
>> Hi Julien,
>
> Hi,
>
>>> On 14 Dec 2020, at 19:35, Julien Grall <julien@xen.org> wrote:
>>>
>>>
>>>
>>> On 14/12/2020 19:08, Rahul Singh wrote:
>>>> Hello Julien,
>>>
>>> Hi Rahul,
>>>
>>>>> On 11 Dec 2020, at 2:25 pm, Julien Grall <julien@xen.org> wrote:
>>>>>
>>>>> Hi Rahul,
>>>>>
>>>>> On 10/12/2020 16:57, Rahul Singh wrote:
>>>>>>  struct arm_smmu_strtab_cfg {
>>>>>> @@ -613,8 +847,13 @@ struct arm_smmu_device {
>>>>>>  		u64			padding;
>>>>>>  	};
>>>>>>  -	/* IOMMU core code handle */
>>>>>> -	struct iommu_device		iommu;
>>>>>> +	/* Need to keep a list of SMMU devices */
>>>>>> +	struct list_head		devices;
>>>>>> +
>>>>>> +	/* Tasklets for handling evts/faults and pci page request IRQs*/
>>>>>> +	struct tasklet		evtq_irq_tasklet;
>>>>>> +	struct tasklet		priq_irq_tasklet;
>>>>>> +	struct tasklet		combined_irq_tasklet;
>>>>>>  };
>>>>>>    /* SMMU private data for each master */
>>>>>> @@ -638,7 +877,6 @@ enum arm_smmu_domain_stage {
>>>>>>    struct arm_smmu_domain {
>>>>>>  	struct arm_smmu_device		*smmu;
>>>>>> -	struct mutex			init_mutex; /* Protects smmu pointer */
>>>>>
>>>>> Hmmm... Your commit message says the mutex would be replaced by a
>>>>> spinlock. However, you are dropping the lock. What did I miss?
>>>> The Linux code uses the mutex in arm_smmu_attach_dev(), but in Xen this
>>>> function is called from arm_smmu_assign_dev(), which already holds the
>>>> spinlock when arm_smmu_attach_dev() is called, so I dropped the mutex
>>>> to avoid nested locking.
>>>> The timing analysis of using a spinlock in place of the mutex, compared
>>>> to Linux, when attaching a device to the SMMU is still valid.
>>>
>>> I think it would be better to keep the current locking until the
>>> investigation is done.
>>>
>>> But if you still want to make this change, then you should explain in
>>> the commit message why the lock is dropped.
>>>
>>> [...]
>>>
>>>> WARN_ON(q->base_dma & (qsz - 1));
>>>> if (!unlikely(q->base_dma & (qsz - 1))) {
>>>> 	dev_info(smmu->dev, "allocated %u entries for %s\n",
>>>> 		1 << q->llq.max_n_shift, name);
>>>> }
>>>
>>> Right, but this doesn't address the second part of my comment.
>>>
>>> This change would *not* be necessary if the implementation of WARN_ON()
>>> in Xen returned whether the warning was triggered.
>>>
>>> Before considering changing the SMMU code, you should first attempt to
>>> modify the implementation of WARN_ON(). We can discuss other approaches
>>> if the discussion goes nowhere.
>> The code proposed by Rahul provides functionality equivalent to what
>> Linux does.
>> Modifying the WARN_ON() implementation in Xen to match how the Linux
>> version works would make sense, but it should be done in its own patch,
>> as it will imply modifying more Xen code, some of it unrelated to the
>> SMMU, and it will need some validation.
>
> Let me start by saying that this was a request I already made on v2 and
> Rahul agreed. I saw no pushback on the request until now, so to me this
> meant it would be addressed in v3.

I think he agreed with the analysis, but he did not say he was going to do it.

>
> Further, validation seems to be a common argument every time I ask for a
> change in this series... Yes, validation is important, but it often
> doesn't require a lot of effort when the changes are simple... TBH, you
> are probably spending more time arguing against it.

Testing is important, and the effort evaluation also depends on the other
priorities we have.

There are 20 uses of WARN_ON() in Xen, and most of them are in x86 code.
If we make this change, the series will impact a lot more code than it
originally did.

I am not saying it should not be done; I am saying it should not be done
in this series.
Such a change would need a series upfront, with this series then rebased
on top of it, to avoid mixing things up too much.

>
>> So I do not think it would be fair to ask Rahul to also do this in the
>> scope of this series.
>
> I would have agreed with this statement if the change were difficult.
> This is not the case here.
>
> The first step when working upstream should always be to improve
> existing helpers rather than working around them.

I agree with that statement, but we should be careful not to ask too much
of people who are trying to contribute, so that they do not feel the
requested changes are too much to handle.
I am open to creating new tasks on our side for the future when things to
improve, like this one, are revealed by a series.

If this is a blocker from your point of view, we will evaluate the effort
for this extra work, and the series will wait until January to be pushed
again.

Please tell us what you would like, and we will check how we can plan it.

Regards
Bertrand

>
> If it is not possible, because it is either too complex or there is
> pushback from the maintainers, then we can discuss a workaround.
>
> Cheers,
>
> -- 
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 10:57:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 10:57:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53056.92573 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp81G-0004FC-8v; Tue, 15 Dec 2020 10:57:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53056.92573; Tue, 15 Dec 2020 10:57:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp81G-0004F5-5e; Tue, 15 Dec 2020 10:57:26 +0000
Received: by outflank-mailman (input) for mailman id 53056;
 Tue, 15 Dec 2020 10:57:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp81E-0004Ex-E2; Tue, 15 Dec 2020 10:57:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp81E-0004GR-3T; Tue, 15 Dec 2020 10:57:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kp81D-00034G-Or; Tue, 15 Dec 2020 10:57:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kp81D-0002K2-ON; Tue, 15 Dec 2020 10:57:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gCYzIdBQf6dhexjyBCkzZbFY4q7mH9ZBxU6ATD9mFvg=; b=xmwvT69OZJXmdZ5IbGGreqDCip
	1eLQP13RNv44VoPKuY9tm9L2vTFzwozYoQLpfrah/IObe8hz9JxzH0axqRMdwC6O2cmnlhhPTzhD/
	GDAZs2IKJ0Xfu7Qd3SCVvXYs0tmnnpMaxfAY/FXhUFieN5NxTT9jCH5LKgACXF5mJMAI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157538-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157538: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:build-amd64-xsm:xen-build:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:build-amd64:xen-build:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:build-i386:xen-build:fail:regression
    linux-linus:build-i386-xsm:xen-build:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:build-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:build-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=edd7ab76847442e299af64a761febd180d71f98d
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Dec 2020 10:57:23 +0000

flight 157538 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157538/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 build-amd64                   6 xen-build                fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332
 build-i386                    6 xen-build                fail REGR. vs. 152332
 build-i386-xsm                6 xen-build                fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                edd7ab76847442e299af64a761febd180d71f98d
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  136 days
Failing since        152366  2020-08-01 20:49:34 Z  135 days  237 attempts
Testing same since   157538  2020-12-15 03:00:46 Z    0 days    1 attempts

------------------------------------------------------------
3825 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 770297 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 11:11:09 2020
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2] xen/xenbus: make xs_talkv() interruptible
Date: Tue, 15 Dec 2020 12:10:55 +0100
Message-Id: <20201215111055.3810-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When a process waits for a Xenstore action in the xenbus driver, the
wait should be interruptible by signals.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- don't special case SIGKILL as libxenstore is handling -EINTR fine
---
 drivers/xen/xenbus/xenbus_xs.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/xenbus/xenbus_xs.c b/drivers/xen/xenbus/xenbus_xs.c
index 3a06eb699f33..17c8f8a155fd 100644
--- a/drivers/xen/xenbus/xenbus_xs.c
+++ b/drivers/xen/xenbus/xenbus_xs.c
@@ -205,8 +205,15 @@ static bool test_reply(struct xb_req_data *req)
 
 static void *read_reply(struct xb_req_data *req)
 {
+	int ret;
+
 	do {
-		wait_event(req->wq, test_reply(req));
+		ret = wait_event_interruptible(req->wq, test_reply(req));
+
+		if (ret == -ERESTARTSYS && signal_pending(current)) {
+			req->msg.type = XS_ERROR;
+			return ERR_PTR(-EINTR);
+		}
 
 		if (!xenbus_ok())
 			/*
-- 
2.26.2
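
The control flow the patch adds to read_reply() can be sketched in plain user-space C: a wait loop that retries until the reply arrives, but bails out with -EINTR when a signal interrupts the wait. The stub functions and the two flags below are assumptions standing in for the kernel's wait_event_interruptible(), test_reply() and signal_pending(current); this is an illustration of the pattern, not the driver code itself.

```c
#include <assert.h>
#include <stdbool.h>

#define ERESTARTSYS 512         /* kernel-internal value */
#ifndef EINTR
#define EINTR 4
#endif

/* Stubs standing in for the kernel state consulted by read_reply(). */
static bool signal_arrived;     /* would be signal_pending(current) */
static bool reply_ready;        /* would be test_reply(req) */

/* Returns 0 once the condition holds, or -ERESTARTSYS on a signal,
 * mimicking wait_event_interruptible(). */
static int wait_event_interruptible_stub(void)
{
    if (signal_arrived)
        return -ERESTARTSYS;
    reply_ready = true;         /* pretend the reply arrived */
    return 0;
}

/* Mirrors the patched loop: retry until the reply is ready, but give
 * up with -EINTR if a signal interrupted the wait. */
static int read_reply_sketch(void)
{
    int ret;

    do {
        ret = wait_event_interruptible_stub();

        if (ret == -ERESTARTSYS && signal_arrived)
            return -EINTR;      /* caller sees the interruption */
    } while (!reply_ready);

    return 0;
}
```

As the V2 note says, no special-casing of SIGKILL is needed: the caller (ultimately libxenstore) already copes with -EINTR.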



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 11:26:30 2020
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	Rahul.Singh@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] xen: Rework WARN_ON() to return whether a warning was triggered
Date: Tue, 15 Dec 2020 11:26:10 +0000
Message-Id: <20201215112610.1986-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

So far, our implementation of WARN_ON() cannot be used in the following
situation:

if ( WARN_ON(...) )
    ...

This is because WARN_ON() doesn't return whether a warning was
triggered. Such a construction can be handy when you want to print
more information, and not just the stack trace.

Rework the WARN_ON() implementation to return whether a warning was
triggered. The idea was borrowed from Linux.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---

This will be used in the SMMUv3 driver (see [1]).
---
 xen/include/xen/lib.h | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index a9679c913d5c..d10c68aa3c07 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -23,7 +23,13 @@
 #include <asm/bug.h>
 
 #define BUG_ON(p)  do { if (unlikely(p)) BUG();  } while (0)
-#define WARN_ON(p) do { if (unlikely(p)) WARN(); } while (0)
+#define WARN_ON(p)  ({                  \
+    bool __ret_warn_on = (p);           \
+                                        \
+    if ( unlikely(__ret_warn_on) )      \
+        WARN();                         \
+    unlikely(__ret_warn_on);            \
+})
 
 /* All clang versions supported by Xen have _Static_assert. */
 #if defined(__clang__) || \
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 11:31:16 2020
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=fa62A7nEbkIDrc4YDc0RsgpTwVxVHjUkKzGCyZBXX78=; b=GpNV6Y3SiY/Tz12x+Qfbcr5B+7
	0pyzmOt8DX6FOc2bOxqbF1wCkGPPhwB/9zoJVSp4WYtqP2/ISA/yR7Xrghqe+f53ZMg0wjrIgFZ0H
	z6gSYOZwHR7dctpNIC7EjD1QfgN9zSIjayKEMar2V0YGd2j9brL7tAmW1WiYialEUExg=;
Subject: Re: [PATCH v3 7/8] xen/arm: Add support for SMMUv3 driver
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1607617848.git.rahul.singh@arm.com>
 <33645b592bc5935a3b28ad576a819d06ed81e8dd.1607617848.git.rahul.singh@arm.com>
 <e26c96cb-245b-6927-c4a7-224c2114df42@xen.org>
 <1660236F-7BB0-4F3E-8CDD-10AE9282E2A3@arm.com>
 <6d693361-220c-fa1b-a04f-12a80f0aec4a@xen.org>
 <CD549B7A-97C8-40F6-B762-6661A7EFAED1@arm.com>
 <da26c36c-97ec-d9f6-abfd-642017c3df5c@xen.org>
 <99C334D2-B77B-4B8A-8294-00A811CFB80B@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <f90df909-8815-878e-88ad-077b55a9ce1e@xen.org>
Date: Tue, 15 Dec 2020 11:31:07 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <99C334D2-B77B-4B8A-8294-00A811CFB80B@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 15/12/2020 10:51, Bertrand Marquis wrote:
> Hi Julien,

Hi Bertrand,

> 
>> On 15 Dec 2020, at 10:13, Julien Grall <julien@xen.org> wrote:
>>
>>
>>
>> On 15/12/2020 09:42, Bertrand Marquis wrote:
>>> Hi Julien,
>>
>> Hi,
>>
>>>> On 14 Dec 2020, at 19:35, Julien Grall <julien@xen.org> wrote:
>>>>
>>>>
>>>>
>>>> On 14/12/2020 19:08, Rahul Singh wrote:
>>>>> Hello Julien,
>>>>
>>>> Hi Rahul,
>>>>
>>>>>> On 11 Dec 2020, at 2:25 pm, Julien Grall <julien@xen.org> wrote:
>>>>>>
>>>>>> Hi Rahul,
>>>>>>
>>>>>> On 10/12/2020 16:57, Rahul Singh wrote:
>>>>>>>   struct arm_smmu_strtab_cfg {
>>>>>>> @@ -613,8 +847,13 @@ struct arm_smmu_device {
>>>>>>>   		u64			padding;
>>>>>>>   	};
>>>>>>>   -	/* IOMMU core code handle */
>>>>>>> -	struct iommu_device		iommu;
>>>>>>> +	/* Need to keep a list of SMMU devices */
>>>>>>> +	struct list_head		devices;
>>>>>>> +
>>>>>>> +	/* Tasklets for handling evts/faults and pci page request IRQs*/
>>>>>>> +	struct tasklet		evtq_irq_tasklet;
>>>>>>> +	struct tasklet		priq_irq_tasklet;
>>>>>>> +	struct tasklet		combined_irq_tasklet;
>>>>>>>   };
>>>>>>>     /* SMMU private data for each master */
>>>>>>> @@ -638,7 +877,6 @@ enum arm_smmu_domain_stage {
>>>>>>>     struct arm_smmu_domain {
>>>>>>>   	struct arm_smmu_device		*smmu;
>>>>>>> -	struct mutex			init_mutex; /* Protects smmu pointer */
>>>>>>
>>>>>> Hmmm... Your commit message says the mutex would be replaced by a spinlock. However, you are dropping the lock. What did I miss?
>>>>> Linux uses the mutex in arm_smmu_attach_dev(), but in Xen this function is called from arm_smmu_assign_dev(), which already holds the spinlock when arm_smmu_attach_dev() is called, so I dropped the mutex to avoid nested locking.
>>>>> The timing analysis of using a spinlock in place of the mutex, compared to Linux, when attaching a device to the SMMU is still valid.
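The calling convention described above can be sketched as a tiny model of the locking shape (the names and helpers here are hypothetical illustrations, not the actual Xen driver code):

```c
#include <assert.h>
#include <stdbool.h>

static bool domain_lock_held;   /* stand-in for the spinlock's state */
static bool device_attached;

static void spin_lock_demo(void)   { assert(!domain_lock_held); domain_lock_held = true; }
static void spin_unlock_demo(void) { assert(domain_lock_held);  domain_lock_held = false; }

/* Mirrors arm_smmu_attach_dev(): it runs with the lock already held by
 * the caller, so re-acquiring a lock here (as the Linux mutex did) would
 * deadlock under a non-recursive spinlock. */
static void attach_dev_demo(void)
{
    assert(domain_lock_held);   /* locking is the caller's responsibility */
    device_attached = true;
}

/* Mirrors arm_smmu_assign_dev(): the single place the lock is taken. */
static void assign_dev_demo(void)
{
    spin_lock_demo();
    attach_dev_demo();
    spin_unlock_demo();
}
```

In this model the mutex is dropped rather than converted because the caller's lock already serializes the attach path.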
>>>>
>>>> I think it would be better to keep the current locking until the investigation is done.
>>>>
>>>> But if you still want to make this change, then you should explain in the commit message why the lock is dropped.
>>>>
>>>> [...]
>>>>
>>>>> WARN_ON(q->base_dma & (qsz - 1));
>>>>> if (!unlikely(q->base_dma & (qsz - 1))) {
>>>>> 	dev_info(smmu->dev, "allocated %u entries for %s\n",
>>>>> 		1 << q->llq.max_n_shift, name);
>>>>> }
>>>>
>>>> Right, but this doesn't address the second part of my comment.
>>>>
>>>> This change would *not* be necessary if the implementation of WARN_ON() in Xen returned whether the warning was triggered.
>>>>
>>>> Before considering changes to the SMMU code, you should first attempt to modify the implementation of WARN_ON(). We can discuss other approaches if the discussion goes nowhere.
>>> The code proposed by Rahul provides functionality equivalent to what Linux does.
>>> Modifying the WARN_ON() implementation in Xen to match how the Linux version works would make sense, but it should be done in its own patch, as it will imply modifications to more Xen code, some of which will not be related to the SMMU and will need validation.
>>
>> Let me start by saying that this was a request I already made on v2, and Rahul agreed. I saw no pushback on the request until now, so to me this meant it would be addressed in v3.
> 
> I think he agreed with the analysis, but he did not say he was going to do it.
> 
>>
>> Further, validation seems to be a common argument every time I ask for a change in this series... Yes, validation is important, but it often doesn't require a lot of effort when the changes are simple... TBH, you are probably spending more time arguing against it.
> 
> Testing is important, and the effort evaluation also depends on the other priorities we have.
> 
> There are 20 uses of WARN_ON() in Xen, and most of them are in x86 code.
> If we make this change, the series will impact a lot more code than it originally did.

What's the problem?

> 
> I am not saying it should not be done; I am saying it should not be done in this series.
> Such a change would need a series upfront, with this series then rebased on top of it, to avoid mixing things too much.

It is trivial enough to be part of this series. But if you prefer to 
create a separate series then so be it.

> 
>>
>>> So I do not think it would be fair to ask Rahul to also do this in the scope of this series.
>>
>> I would have agreed with this statement if the change were difficult. That is not the case here.
>>
>> The first step when working upstream should always be to improve existing helpers rather than working around them.
> 
> I agree with that statement, but we should be careful not to ask too much of people who try to contribute, so that they do not feel the requested changes are too much to handle.

I am well aware of that and I don't think this request is asking a lot.

> I am open to creating new tasks on our side for the future when things to improve, like this one, are revealed by a series.
> 
> If this is a blocker from your point of view, we will evaluate the effort for this extra work, and the series will wait until January to be pushed again.
This sounds like it would require more effort than is actually 
necessary. In fact...

... it took me one minute to check the existing uses of WARN_ON() (none 
of them care about the return value so far), another 2 minutes to write 
it, an extra 5 minutes to test it, and 2 minutes to write the commit message.

So a grand total of 10 minutes.

Anyway, please see the patch [1].
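For illustration, a WARN_ON() that evaluates to whether it fired (the Linux-style shape discussed above) can be sketched roughly as follows. This is an assumption-laden sketch, not the actual patch: the `check_queue_alignment()` helper and the printf-based warning are hypothetical stand-ins mirroring the SMMU queue check quoted earlier.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define unlikely(x) __builtin_expect(!!(x), 0)

/* Sketch: evaluate the condition once, print a warning when it holds,
 * and make the whole macro yield that result so callers can branch on
 * it with `if ( WARN_ON(cond) ) ...`. */
#define WARN_ON(p) ({                                          \
    bool ret_warn_on_ = (p);                                   \
    if ( unlikely(ret_warn_on_) )                              \
        printf("Xen WARN at %s:%d\n", __FILE__, __LINE__);     \
    ret_warn_on_;                                              \
})

/* Hypothetical use mirroring the queue check: warn on misalignment, and
 * let the caller print its dev_info() only when no warning fired. */
static bool check_queue_alignment(unsigned long base_dma, unsigned long qsz)
{
    if ( WARN_ON(base_dma & (qsz - 1)) )
        return false;   /* misaligned: warning already printed */
    return true;        /* aligned: safe to log the allocation */
}
```

With this shape, the SMMU code keeps a single evaluation of the condition instead of repeating it around the WARN_ON() call.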

Cheers,

[1] 
https://lore.kernel.org/xen-devel/20201215112610.1986-1-julien@xen.org/T/#u

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 11:31:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 11:31:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53089.92639 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp8Yb-00086B-6x; Tue, 15 Dec 2020 11:31:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53089.92639; Tue, 15 Dec 2020 11:31:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp8Yb-000864-2G; Tue, 15 Dec 2020 11:31:53 +0000
Received: by outflank-mailman (input) for mailman id 53089;
 Tue, 15 Dec 2020 11:31:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kp8YZ-00085w-P5
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 11:31:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 337d60bc-7d51-472e-8f57-d0bebd1f617a;
 Tue, 15 Dec 2020 11:31:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CFA93AE38;
 Tue, 15 Dec 2020 11:31:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 337d60bc-7d51-472e-8f57-d0bebd1f617a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608031910; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=BhslU5HnQL11cIwqOa/xsL50tHb2iIeBnxeaHPwDS/A=;
	b=BdswMdK5GpiFicl3mUfnLPxBQE7BoABhtraIzpriKPJtxv340GzP28S9robbR7rhnzaHUd
	gU3Kpm45HznWvDrOD+rxLqkjrPZvFGFNOz1JjcJ5AZB2GHTcQRVeFreklSDqU1fkBoNTg/
	k6G9FGmCvMhAFAwqyfDFrGfhUNujf7I=
Subject: Re: [PATCH] xen: Rework WARN_ON() to return whether a warning was
 triggered
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, Rahul.Singh@arm.com,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201215112610.1986-1-julien@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <c45407e5-3173-4f0d-453b-1a01969b667c@suse.com>
Date: Tue, 15 Dec 2020 12:31:47 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201215112610.1986-1-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="vxhhM3mvZ3lGmVuY6ETQt8NWXjDamrYDY"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--vxhhM3mvZ3lGmVuY6ETQt8NWXjDamrYDY
Content-Type: multipart/mixed; boundary="kRn0onduX7tFInVsyyOIdzQx95od2Bhi2";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, Rahul.Singh@arm.com,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Message-ID: <c45407e5-3173-4f0d-453b-1a01969b667c@suse.com>
Subject: Re: [PATCH] xen: Rework WARN_ON() to return whether a warning was
 triggered
References: <20201215112610.1986-1-julien@xen.org>
In-Reply-To: <20201215112610.1986-1-julien@xen.org>

--kRn0onduX7tFInVsyyOIdzQx95od2Bhi2
Content-Type: multipart/mixed;
 boundary="------------662EA40C00E26BD25FA8EB24"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------662EA40C00E26BD25FA8EB24
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 15.12.20 12:26, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> So far, our implementation of WARN_ON() cannot be used in the following
> situation:
> 
> if ( WARN_ON() )
>      ...
> 
> This is because the WARN_ON() doesn't return whether a warning. Such

... warning has been triggered.

> construction can be handy to have if you have to print more information
> and now the stack track.

Sorry, I'm not able to parse that sentence.

> 
> Rework the WARN_ON() implementation to return whether a warning was
> triggered. The idea was borrowed from Linux.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Juergen

--------------662EA40C00E26BD25FA8EB24--

--kRn0onduX7tFInVsyyOIdzQx95od2Bhi2--

--vxhhM3mvZ3lGmVuY6ETQt8NWXjDamrYDY--


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 11:36:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 11:36:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53095.92651 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp8ce-0008I1-Og; Tue, 15 Dec 2020 11:36:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53095.92651; Tue, 15 Dec 2020 11:36:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp8ce-0008Hu-L4; Tue, 15 Dec 2020 11:36:04 +0000
Received: by outflank-mailman (input) for mailman id 53095;
 Tue, 15 Dec 2020 11:36:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kWjD=FT=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kp8cd-0008Hp-3I
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 11:36:03 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.49]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4cc8be07-c3c6-4cbd-b770-c5a6a3eab845;
 Tue, 15 Dec 2020 11:36:02 +0000 (UTC)
Received: from AM0PR10CA0073.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:208:15::26)
 by VI1PR08MB3917.eurprd08.prod.outlook.com (2603:10a6:803:c1::28)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.14; Tue, 15 Dec
 2020 11:35:57 +0000
Received: from AM5EUR03FT005.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:208:15:cafe::61) by AM0PR10CA0073.outlook.office365.com
 (2603:10a6:208:15::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.12 via Frontend
 Transport; Tue, 15 Dec 2020 11:35:57 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT005.mail.protection.outlook.com (10.152.16.146) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12 via Frontend Transport; Tue, 15 Dec 2020 11:35:56 +0000
Received: ("Tessian outbound 6ec21dac9dd3:v71");
 Tue, 15 Dec 2020 11:35:56 +0000
Received: from d61d644f091e.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 93FB9DF2-9855-48BA-BC9E-4E423CBC8AC0.1; 
 Tue, 15 Dec 2020 11:35:50 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id d61d644f091e.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 15 Dec 2020 11:35:50 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3515.eurprd08.prod.outlook.com (2603:10a6:10:50::24) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.14; Tue, 15 Dec
 2020 11:35:48 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3654.025; Tue, 15 Dec 2020
 11:35:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4cc8be07-c3c6-4cbd-b770-c5a6a3eab845
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6S+sumwMhV7vwb5FPrnwzi2Mt0E0bIqP0QfNLsZalRs=;
 b=W5SGgqAubS+EPhDcGSUNiYGDIJB8EFVCUdCIXeYgxxxH4mMy+XsqnrrcGp1Me2Wyw6NDoCJG9yk0j8XZYM4JzqYAaR3yn+9jSij9cdRnc1pdYLt8KE+9u5PaHEbsM+ucjkiRY/mOHZ874WKnqLYjRSfYhDtEsmU7TZnK2Jhszhs=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 9563fd1642f84854
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lUIAxPi8xBQUqN+gAa/dIonK7DciA8EXUrM+k2BKjSZV0f1QOwvnCaapF4KsAmbBg7Zp2dbxysGD1qhu13OUpjLLDVOj8/k4ArFze3qmmk0Dj0koJTvH9o064CBQEn0WT2cXlFGT0JiTAUT2Iwr1C6CTN2Cwa5aDE0D8XA1j8fH8vjzM8Fz9vsIGg5L3SN/yxnD+gzF2dnUBTcOvrlc8JxioRdePW/T2VBgF0GtFgM57M8aRwKoH//LmHIt1ojc4i8M7R6bGezc16T6qG0IYBufeDA+qjdTq8vB8paAnAvpvwW+h3TokVIbUZUKj4lFLlIA4g3reS6fWmJeR7E0SBA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6S+sumwMhV7vwb5FPrnwzi2Mt0E0bIqP0QfNLsZalRs=;
 b=gVPnT+twnGcx+ZEx8IiaAGvy2U+AF6xUAJjTBdpWzf36+rb7aDtZstKitrM8zFhrgPYgot97d50I5aJpH7WAtkJPG7BtfHPOOE6CBGysZykf8vSWlyLO09/JRMNVuuYv+APzL1eK6LIHjxXBBHDDHNWBnSNaDRoBzRTC7LnMAC0PbEmfiLn0N7V2eU8A2sUz/2WBD59cuJrN0e5qCNT+qkbtK8r3rWd76lcJPWlOB9l4MpIQbh7vRYKM+36Z8LNiDVrPhkdSZtwnZ4VFAdS56N82G0tfR9HUxZoMcozqk0YEonasEoDfPnTEnIlxYCRPh6L83Swk8jY2lGLKRDztnA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6S+sumwMhV7vwb5FPrnwzi2Mt0E0bIqP0QfNLsZalRs=;
 b=W5SGgqAubS+EPhDcGSUNiYGDIJB8EFVCUdCIXeYgxxxH4mMy+XsqnrrcGp1Me2Wyw6NDoCJG9yk0j8XZYM4JzqYAaR3yn+9jSij9cdRnc1pdYLt8KE+9u5PaHEbsM+ucjkiRY/mOHZ874WKnqLYjRSfYhDtEsmU7TZnK2Jhszhs=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Rahul Singh <Rahul.Singh@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 7/8] xen/arm: Add support for SMMUv3 driver
Thread-Topic: [PATCH v3 7/8] xen/arm: Add support for SMMUv3 driver
Thread-Index:
 AQHWzxYvVX92i8NXPkeNMctPP72KNqnx9LwAgAUGOoCAAAdQAIAA7OAAgAAIcgCAAAqfgIAACyuAgAABTQA=
Date: Tue, 15 Dec 2020 11:35:48 +0000
Message-ID: <E6B7D158-F767-4D69-AC54-CC2BDEAE4A72@arm.com>
References: <cover.1607617848.git.rahul.singh@arm.com>
 <33645b592bc5935a3b28ad576a819d06ed81e8dd.1607617848.git.rahul.singh@arm.com>
 <e26c96cb-245b-6927-c4a7-224c2114df42@xen.org>
 <1660236F-7BB0-4F3E-8CDD-10AE9282E2A3@arm.com>
 <6d693361-220c-fa1b-a04f-12a80f0aec4a@xen.org>
 <CD549B7A-97C8-40F6-B762-6661A7EFAED1@arm.com>
 <da26c36c-97ec-d9f6-abfd-642017c3df5c@xen.org>
 <99C334D2-B77B-4B8A-8294-00A811CFB80B@arm.com>
 <f90df909-8815-878e-88ad-077b55a9ce1e@xen.org>
In-Reply-To: <f90df909-8815-878e-88ad-077b55a9ce1e@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 3a658e96-4aec-4852-39db-08d8a0ed96bb
x-ms-traffictypediagnostic: DB7PR08MB3515:|VI1PR08MB3917:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB391727CD8756EA9F2BB367CB9DC60@VI1PR08MB3917.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 TDBTkhKJyRgryQzh4rk/9npmHpaVj+VbfhG7G7cdynRjkm9Il+pFAYqtAf3rcPoAzG01Uy+mmbLzwiLcIxZY7ZQh1JEWouaG9db3zDWufnO6zJlLKcm9IZEViwGkrqzHUpR91JlkbFlTBsckIWfJh2TpZWlcFZ1r179qIxT9gG6hwHWOBJ3pY4hQIfdlW1NUhTGW6CkRWlBxdZyoZq99GwX9/nCKUmBbUoX3Y3QgpgJ0rTMQBMnyyqyua/+5GlXRf4u62JW63wBm4pO9b1RseIu5NZ5LXNCWaqDeqxm/MWc8RBXfrXJtZAxfsAKhhzsfDv66EhD9XZkIBmokoM4TH7esr6zLN6FbFuJCuhs5sXD0VObRvudQvWmO/UHr8irGUsI1oPH3Iyzc2mcxRtgTc00VXC6LwRIDHqIkTletGImPrASpNZ/t5ETaRKtnO/d0fD1rlyJqi68xOnNGcMFSVQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(39850400004)(366004)(346002)(376002)(136003)(66946007)(86362001)(66446008)(76116006)(6512007)(53546011)(4326008)(91956017)(5660300002)(478600001)(8936002)(66476007)(966005)(66556008)(64756008)(2906002)(26005)(7416002)(54906003)(6916009)(36756003)(83380400001)(8676002)(186003)(71200400001)(33656002)(316002)(6486002)(6506007)(2616005)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?us-ascii?Q?mcwqKF2PG32PrlJyrYHD+WsYE2UbS4qZM4X4SKJ0J4wssi5+7sC1V1At8vO6?=
 =?us-ascii?Q?GW+4+ETZE19oQQV7XGsMiN3nKKDM88R35WZUgrUqTVMTQqyEzQlI+Wqr5pAz?=
 =?us-ascii?Q?jW9sCHjcD7nKEPPUappQxty3NXk7ppe5lFKu3k0tQqEJWINeJaYhn3jHBb94?=
 =?us-ascii?Q?LcnKiy+m+ySbfyYnZxNmkOIfgthGeZRm//2gd65KNCfjdohtuSBRgbbZeBsF?=
 =?us-ascii?Q?gENoJVmUwDt04ZNrrCCZEVkqlJ8oiIy0k4lo2VfK6LRF3wkr+KsfP4gYU24A?=
 =?us-ascii?Q?YBWAC7E0M45neqjUP3C2+oiSNmi1OeERrOPHxkPEmM2O2+MMHbFDl7rOKfKA?=
 =?us-ascii?Q?FQLYdqEzgxnpIIyhJJkt4AQhDMSCtpBb5lOMe3NfFqvUoKJ8Qo9vZt6gkcRu?=
 =?us-ascii?Q?7D7k+A01NwYPMMZi3+tfmPTg/xmQ8yIVsu6GsyljfESsEnYLIRV3Dq0GQEGj?=
 =?us-ascii?Q?e14zh2jlsgn5YCZq316c7geyNG6CbF74bu9J7RvGYxL5ars3yCMETw8j8PKG?=
 =?us-ascii?Q?3j6GFCcz7VhW1GHYmc3kLOEfu9fx1hFXJ4dKOcutnQSeEfT6pgtw4Z8Bu31N?=
 =?us-ascii?Q?+Vopo52HYnO04FIZJFuM0FFHuL3Tx31y+Y48CMhGbHTTgVGFIWijryHBPuSM?=
 =?us-ascii?Q?LAqZH9oZIXGhPkXQ8etqitJkAPXVwGPLPKMrvaU/KaIkwRpVeCMjtpBVzTrr?=
 =?us-ascii?Q?qOz9PdnUQa2NGpbZAznM0lSllkAO7SUGFyNgVj9OSeZCmDPePlEn003IRGrQ?=
 =?us-ascii?Q?HFi6518pg51F+rrzSbH4CSvDCgIXeZW8b8vzbhws7pzIK8Wtzpfgj9n6VWMe?=
 =?us-ascii?Q?1QCIpqZnQTURNMaPvzxeYUHFVLmaVL4xLMhcjqyaMUNQoC+ad6XqePgOaecl?=
 =?us-ascii?Q?+/nexJbs7Wr5xl7wFejbokKwvzgUNHZDgivqVRF0uUjhmNB5C9rJODdICPS7?=
 =?us-ascii?Q?QB20AaRDdZqJ6JwVvo25Pj732y9jiQej6VYiygYt+PcLT4X6iZ9KVoQaaBwU?=
 =?us-ascii?Q?n8LV?=
Content-Type: text/plain; charset="us-ascii"
Content-ID: <4B07A0B4265ABF44AD1E8881D0A3AEFF@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3515
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT005.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	2744eadf-e06a-402b-678a-08d8a0ed91e5
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	XmyI544D0PdQ+23E/LhUzF/TLQuEizN4SYm/erIbpg81WFuRVqiNDkQeB01Iy+fhivrCRwS8j3u9bWSyjmFaemST6gn2cqyIdDfFCe4ZY92/EPrLqfuB7oTSK97r3oyCnFhkxYamvPY26sWceEfG+2YvAt/HHGTdBjN117jN26HqthwdFxGQmTswIpiIe1qk0tzcnlM5d7NzXenF2CEBRml6rgBVfdHDxXpvvtG2JPHTzvoLVK3onvw3tPnH9B7+vSNbFt9keENU/Z0GeEurIafQ7t6NJvWbijp7xnF/tEEBghx9viqi5S5LTt7Ogj7lXGA6kdTXqZ+ooOXwa41Vt2qUakPJ1ERK9UPEB+UvCeW33VIHitxB/bi9Vr7oPQBrQsO3zH2Y1pchJUhOLSKS10Bb6YBRcx9F3Mb5fbYw9g3bLK6yTtvnQw56bPC3uCT3hJPNieV+YiQ5zMiPW+rXRhMC+dffJf2bPamkHiwt+kI=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(376002)(346002)(39850400004)(136003)(46966005)(8676002)(186003)(82740400003)(316002)(6512007)(70206006)(336012)(356005)(86362001)(81166007)(54906003)(2906002)(4326008)(5660300002)(2616005)(33656002)(6506007)(36756003)(6486002)(107886003)(83380400001)(26005)(82310400003)(6862004)(70586007)(47076004)(478600001)(8936002)(53546011)(966005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Dec 2020 11:35:56.8691
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3a658e96-4aec-4852-39db-08d8a0ed96bb
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT005.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3917

Hi Julien,

> On 15 Dec 2020, at 11:31, Julien Grall <julien@xen.org> wrote:
>
>
>
> On 15/12/2020 10:51, Bertrand Marquis wrote:
>> Hi Julien,
>
> Hi Bertrand,
>
>>> On 15 Dec 2020, at 10:13, Julien Grall <julien@xen.org> wrote:
>>>
>>>
>>>
>>> On 15/12/2020 09:42, Bertrand Marquis wrote:
>>>> Hi Julien,
>>>
>>> Hi,
>>>
>>>>> On 14 Dec 2020, at 19:35, Julien Grall <julien@xen.org> wrote:
>>>>>
>>>>>
>>>>>
>>>>> On 14/12/2020 19:08, Rahul Singh wrote:
>>>>>> Hello Julien,
>>>>>
>>>>> Hi Rahul,
>>>>>
>>>>>>> On 11 Dec 2020, at 2:25 pm, Julien Grall <julien@xen.org> wrote:
>>>>>>>
>>>>>>> Hi Rahul,
>>>>>>>
>>>>>>> On 10/12/2020 16:57, Rahul Singh wrote:
>>>>>>>>  struct arm_smmu_strtab_cfg {
>>>>>>>> @@ -613,8 +847,13 @@ struct arm_smmu_device {
>>>>>>>>  		u64			padding;
>>>>>>>>  	};
>>>>>>>>  -	/* IOMMU core code handle */
>>>>>>>> -	struct iommu_device		iommu;
>>>>>>>> +	/* Need to keep a list of SMMU devices */
>>>>>>>> +	struct list_head		devices;
>>>>>>>> +
>>>>>>>> +	/* Tasklets for handling evts/faults and PCI page request IRQs */
>>>>>>>> +	struct tasklet		evtq_irq_tasklet;
>>>>>>>> +	struct tasklet		priq_irq_tasklet;
>>>>>>>> +	struct tasklet		combined_irq_tasklet;
>>>>>>>>  };
>>>>>>>>    /* SMMU private data for each master */
>>>>>>>> @@ -638,7 +877,6 @@ enum arm_smmu_domain_stage {
>>>>>>>>    struct arm_smmu_domain {
>>>>>>>>  	struct arm_smmu_device		*smmu;
>>>>>>>> -	struct mutex			init_mutex; /* Protects smmu pointer */
>>>>>>>
>>>>>>> Hmmm... Your commit message says the mutex would be replaced by a spinlock. However, you are dropping the lock. What did I miss?
>>>>>> The Linux code uses the mutex in arm_smmu_attach_dev(), but in Xen this function is called from arm_smmu_assign_dev(), which already holds the spinlock when arm_smmu_attach_dev() is called, so I dropped the mutex to avoid taking a nested lock.
>>>>>> The timing analysis of using a spinlock in place of a mutex, as compared to Linux, when attaching a device to the SMMU is still valid.
>>>>>
>>>>> I think it would be better to keep the current locking until the investigation is done.
>>>>>
>>>>> But if you still want to make this change, then you should explain in the commit message why the lock is dropped.
>>>>>
>>>>> [...]
>>>>>
>>>>>> WARN_ON(q->base_dma & (qsz - 1));
>>>>>> if (!unlikely(q->base_dma & (qsz - 1))) {
>>>>>> 	dev_info(smmu->dev, "allocated %u entries for %s\n",
>>>>>> 		1 << q->llq.max_n_shift, name);
>>>>>> }
>>>>>
>>>>> Right, but this doesn't address the second part of my comment.
>>>>>
>>>>> This change would *not* be necessary if the implementation of WARN_ON() in Xen returned whether the warning was triggered.
>>>>>
>>>>> Before considering changing the SMMU code, you should first attempt to modify the implementation of WARN_ON(). We can discuss other approaches if the discussion goes nowhere.
>>>> The code proposed by Rahul provides functionality equivalent to what Linux does.
>>>> Modifying the WARN_ON() implementation in Xen to match how the Linux version works would make sense, but it should be done in its own patch, as it implies modifications to more Xen code, some of it unrelated to the SMMU, and will need some validation.
>>>
>>> Let me start by saying that this was a request I already made on v2 and Rahul agreed to. I saw no pushback on the request until now, so to me this meant it would be addressed in v3.
>> I think he agreed with the analysis, but he did not say he was going to do it.
>>>
>>> Further, validation seems to be a common argument every time I ask for a change in this series... Yes, validation is important, but it often doesn't require a lot of effort when the changes are simple... TBH, you are probably spending more time arguing against it.
>> Testing is important, and effort evaluation also depends on the other priorities we have.
>> There are 20 uses of WARN_ON in Xen, and most of them are in x86 code.
>> If we do this change, the series will impact a lot more code than it originally did.
>
> What's the problem?
>
>> I am not saying it should not be done; I am saying it should not be done in this series.
>> Such a change would need a series upfront, and then rebasing this series on top of it, to avoid mixing things up too much.
>
> It is trivial enough to be part of this series. But if you prefer to create a separate series then so be it.
>
>>>
>>>> So I do not think it would be fair to ask Rahul to also do this within the scope of this series.
>>>
>>> I would have agreed with this statement if the change were difficult. That is not the case here.
>>>
>>> The first step when working upstream should always be to improve existing helpers rather than working around them.
>> I agree with that statement, but we should be careful not to ask too much of people who try to contribute, so that they do not feel that the changes asked of them are too much to handle.
>
> I am well aware of that, and I don't think this request is asking a lot.
>
>> I am open to creating new tasks on our side for the future when things to be improved, like this one, are revealed by a series.
>> If this is a blocker from your point of view, we will evaluate the effort to do this extra work, and the series will wait until January to be pushed again.
> This sounds like it would require more effort than is actually necessary. In fact...
>
> ... it took me one minute to check the existing uses of WARN_ON() (none of them care about the return value so far), another 2 minutes to write it, an extra 5 minutes to test it and 2 minutes to write the commit message.
>
> So a grand total of 10 minutes.

Sadly, what takes 10 minutes on your side does not take the same effort on our side for now (due to internal review, validation and required tests).

>
> Anyway, please see the patch [1].

Thanks for that.
We will review it and rebase the SMMU series on top of it.

Cheers
Bertrand

>
> Cheers,
>
> [1] https://lore.kernel.org/xen-devel/20201215112610.1986-1-julien@xen.org/T/#u
>
> --
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 11:42:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 11:42:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53100.92663 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp8jD-0000oi-MB; Tue, 15 Dec 2020 11:42:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53100.92663; Tue, 15 Dec 2020 11:42:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp8jD-0000ob-Iz; Tue, 15 Dec 2020 11:42:51 +0000
Received: by outflank-mailman (input) for mailman id 53100;
 Tue, 15 Dec 2020 11:42:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kp8jC-0000oW-Ad
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 11:42:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 97e0ddf6-115f-4439-b2b0-8fe4934b8692;
 Tue, 15 Dec 2020 11:42:48 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 88697AE47;
 Tue, 15 Dec 2020 11:42:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 97e0ddf6-115f-4439-b2b0-8fe4934b8692
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608032567; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=8JgTs45MuhZxp31unKb1wCSq9R0yfQ1p132Smb6fNbE=;
	b=eJxV41g0cLnyuFzF7+jvLZgSq72DDtjDi69jcCuJm38/OdETrwGMcJmFBHJXr7D/bpNwDc
	XnlFiyx38G3FK0y/VWeDFQjNYzetAYxoLQfYVHbu+aVmvayMvPLUXHVc2GG7X6zsMhGIU+
	N5elEp2XyFiRMzWokHOth1GTVb1gs9c=
Subject: Re: [PATCH v2 00/12] x86: major paravirt cleanup
To: Peter Zijlstra <peterz@infradead.org>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
 linux-hyperv@vger.kernel.org, kvm@vger.kernel.org, luto@kernel.org,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Deep Shah <sdeep@vmware.com>,
 "VMware, Inc." <pv-drivers@vmware.com>, "K. Y. Srinivasan"
 <kys@microsoft.com>, Haiyang Zhang <haiyangz@microsoft.com>,
 Stephen Hemminger <sthemmin@microsoft.com>, Wei Liu <wei.liu@kernel.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Sean Christopherson <sean.j.christopherson@intel.com>,
 Vitaly Kuznetsov <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>,
 Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
 Daniel Lezcano <daniel.lezcano@linaro.org>,
 Juri Lelli <juri.lelli@redhat.com>,
 Vincent Guittot <vincent.guittot@linaro.org>,
 Dietmar Eggemann <dietmar.eggemann@arm.com>,
 Steven Rostedt <rostedt@goodmis.org>, Ben Segall <bsegall@google.com>,
 Mel Gorman <mgorman@suse.de>, Daniel Bristot de Oliveira
 <bristot@redhat.com>, Josh Poimboeuf <jpoimboe@redhat.com>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120125342.GC3040@hirez.programming.kicks-ass.net>
 <20201123134317.GE3092@hirez.programming.kicks-ass.net>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <6771a12c-051d-1655-fb3a-cc45a3c82e29@suse.com>
Date: Tue, 15 Dec 2020 12:42:45 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201123134317.GE3092@hirez.programming.kicks-ass.net>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="H2kwefz8uIE15eQoOkpjLVA8XJ0Q8kIsD"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--H2kwefz8uIE15eQoOkpjLVA8XJ0Q8kIsD
Content-Type: multipart/mixed; boundary="eJf0I1odMv2scnFFsZ16khArxsjn0GCwk";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
 linux-hyperv@vger.kernel.org, kvm@vger.kernel.org, luto@kernel.org,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Deep Shah <sdeep@vmware.com>,
 "VMware, Inc." <pv-drivers@vmware.com>, "K. Y. Srinivasan"
 <kys@microsoft.com>, Haiyang Zhang <haiyangz@microsoft.com>,
 Stephen Hemminger <sthemmin@microsoft.com>, Wei Liu <wei.liu@kernel.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Sean Christopherson <sean.j.christopherson@intel.com>,
 Vitaly Kuznetsov <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>,
 Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
 Daniel Lezcano <daniel.lezcano@linaro.org>,
 Juri Lelli <juri.lelli@redhat.com>,
 Vincent Guittot <vincent.guittot@linaro.org>,
 Dietmar Eggemann <dietmar.eggemann@arm.com>,
 Steven Rostedt <rostedt@goodmis.org>, Ben Segall <bsegall@google.com>,
 Mel Gorman <mgorman@suse.de>, Daniel Bristot de Oliveira
 <bristot@redhat.com>, Josh Poimboeuf <jpoimboe@redhat.com>
Message-ID: <6771a12c-051d-1655-fb3a-cc45a3c82e29@suse.com>
Subject: Re: [PATCH v2 00/12] x86: major paravirt cleanup
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120125342.GC3040@hirez.programming.kicks-ass.net>
 <20201123134317.GE3092@hirez.programming.kicks-ass.net>
In-Reply-To: <20201123134317.GE3092@hirez.programming.kicks-ass.net>

--eJf0I1odMv2scnFFsZ16khArxsjn0GCwk
Content-Type: multipart/mixed;
 boundary="------------F4E81710D117794268EEB261"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------F4E81710D117794268EEB261
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

Peter,

On 23.11.20 14:43, Peter Zijlstra wrote:
> On Fri, Nov 20, 2020 at 01:53:42PM +0100, Peter Zijlstra wrote:
>> On Fri, Nov 20, 2020 at 12:46:18PM +0100, Juergen Gross wrote:
>>>   30 files changed, 325 insertions(+), 598 deletions(-)
>>
>> Much awesome! I'll try and get that objtool thing sorted.
>
> This seems to work for me. It isn't 100% accurate, because it doesn't
> know about the direct call instruction, but I can either fudge that or
> switching to static_call() will cure that.
>
> It's not exactly pretty, but it should be straightforward.

Are you planning to send this out as an "official" patch, or should I
include it in my series (in which case I'd need a variant with a proper
commit message)?

I'd like to have this settled soon, as I'm going to send V2 of my
series hopefully this week.


Juergen

>
> Index: linux-2.6/tools/objtool/check.c
> ===================================================================
> --- linux-2.6.orig/tools/objtool/check.c
> +++ linux-2.6/tools/objtool/check.c
> @@ -1090,6 +1090,32 @@ static int handle_group_alt(struct objto
>   		return -1;
>   	}
>
> +	/*
> +	 * Add the filler NOP, required for alternative CFI.
> +	 */
> +	if (special_alt->group && special_alt->new_len < special_alt->orig_len) {
> +		struct instruction *nop = malloc(sizeof(*nop));
> +		if (!nop) {
> +			WARN("malloc failed");
> +			return -1;
> +		}
> +		memset(nop, 0, sizeof(*nop));
> +		INIT_LIST_HEAD(&nop->alts);
> +		INIT_LIST_HEAD(&nop->stack_ops);
> +		init_cfi_state(&nop->cfi);
> +
> +		nop->sec = last_new_insn->sec;
> +		nop->ignore = last_new_insn->ignore;
> +		nop->func = last_new_insn->func;
> +		nop->alt_group = alt_group;
> +		nop->offset = last_new_insn->offset + last_new_insn->len;
> +		nop->type = INSN_NOP;
> +		nop->len = special_alt->orig_len - special_alt->new_len;
> +
> +		list_add(&nop->list, &last_new_insn->list);
> +		last_new_insn = nop;
> +	}
> +
>   	if (fake_jump)
>   		list_add(&fake_jump->list, &last_new_insn->list);
>
> @@ -2190,18 +2216,12 @@ static int handle_insn_ops(struct instru
>   	struct stack_op *op;
>
>   	list_for_each_entry(op, &insn->stack_ops, list) {
> -		struct cfi_state old_cfi = state->cfi;
>   		int res;
>
>   		res = update_cfi_state(insn, &state->cfi, op);
>   		if (res)
>   			return res;
>
> -		if (insn->alt_group && memcmp(&state->cfi, &old_cfi, sizeof(struct cfi_state))) {
> -			WARN_FUNC("alternative modifies stack", insn->sec, insn->offset);
> -			return -1;
> -		}
> -
>   		if (op->dest.type == OP_DEST_PUSHF) {
>   			if (!state->uaccess_stack) {
>   				state->uaccess_stack = 1;
> @@ -2399,19 +2419,137 @@ static int validate_return(struct symbol
>    * unreported (because they're NOPs), such holes would result in CFI_UNDEFINED
>    * states which then results in ORC entries, which we just said we didn't want.
>    *
> - * Avoid them by copying the CFI entry of the first instruction into the whole
> - * alternative.
> + * Avoid them by copying the CFI entry of the first instruction into the hole.
>    */
> -static void fill_alternative_cfi(struct objtool_file *file, struct instruction *insn)
> +static void __fill_alt_cfi(struct objtool_file *file, struct instruction *insn)
>   {
>   	struct instruction *first_insn = insn;
>   	int alt_group = insn->alt_group;
>
> -	sec_for_each_insn_continue(file, insn) {
> +	sec_for_each_insn_from(file, insn) {
>   		if (insn->alt_group != alt_group)
>   			break;
> -		insn->cfi = first_insn->cfi;
> +
> +		if (!insn->visited)
> +			insn->cfi = first_insn->cfi;
> +	}
> +}
> +
> +static void fill_alt_cfi(struct objtool_file *file, struct instruction *alt_insn)
> +{
> +	struct alternative *alt;
> +
> +	__fill_alt_cfi(file, alt_insn);
> +
> +	list_for_each_entry(alt, &alt_insn->alts, list)
> +		__fill_alt_cfi(file, alt->insn);
> +}
> +
> +static struct instruction *
> +__find_unwind(struct objtool_file *file,
> +	      struct instruction *insn, unsigned long offset)
> +{
> +	int alt_group = insn->alt_group;
> +	struct instruction *next;
> +	unsigned long off = 0;
> +
> +	while ((off + insn->len) <= offset) {
> +		next = next_insn_same_sec(file, insn);
> +		if (next && next->alt_group != alt_group)
> +			next = NULL;
> +
> +		if (!next)
> +			break;
> +
> +		off += insn->len;
> +		insn = next;
>   	}
> +
> +	return insn;
> +}
> +
> +struct instruction *
> +find_alt_unwind(struct objtool_file *file,
> +		struct instruction *alt_insn, unsigned long offset)
> +{
> +	struct instruction *fit;
> +	struct alternative *alt;
> +	unsigned long fit_off;
> +
> +	fit = __find_unwind(file, alt_insn, offset);
> +	fit_off = (fit->offset - alt_insn->offset);
> +
> +	list_for_each_entry(alt, &alt_insn->alts, list) {
> +		struct instruction *x;
> +		unsigned long x_off;
> +
> +		x = __find_unwind(file, alt->insn, offset);
> +		x_off = (x->offset - alt->insn->offset);
> +
> +		if (fit_off < x_off) {
> +			fit = x;
> +			fit_off = x_off;
> +
> +		} else if (fit_off == x_off &&
> +			   memcmp(&fit->cfi, &x->cfi, sizeof(struct cfi_state))) {
> +
> +			char *_str1 = offstr(fit->sec, fit->offset);
> +			char *_str2 = offstr(x->sec, x->offset);
> +			WARN("%s: equal-offset incompatible alternative: %s\n", _str1, _str2);
> +			free(_str1);
> +			free(_str2);
> +			return fit;
> +		}
> +	}
> +
> +	return fit;
> +}
> +
> +static int __validate_unwind(struct objtool_file *file,
> +			     struct instruction *alt_insn,
> +			     struct instruction *insn)
> +{
> +	int alt_group = insn->alt_group;
> +	struct instruction *unwind;
> +	unsigned long offset = 0;
> +
> +	sec_for_each_insn_from(file, insn) {
> +		if (insn->alt_group != alt_group)
> +			break;
> +
> +		unwind = find_alt_unwind(file, alt_insn, offset);
> +
> +		if (memcmp(&insn->cfi, &unwind->cfi, sizeof(struct cfi_state))) {
> +
> +			char *_str1 = offstr(insn->sec, insn->offset);
> +			char *_str2 = offstr(unwind->sec, unwind->offset);
> +			WARN("%s: unwind incompatible alternative: %s (%ld)\n",
> +			     _str1, _str2, offset);
> +			free(_str1);
> +			free(_str2);
> +			return 1;
> +		}
> +
> +		offset += insn->len;
> +	}
> +
> +	return 0;
> +}
> +
> +static int validate_alt_unwind(struct objtool_file *file,
> +			       struct instruction *alt_insn)
> +{
> +	struct alternative *alt;
> +
> +	if (__validate_unwind(file, alt_insn, alt_insn))
> +		return 1;
> +
> +	list_for_each_entry(alt, &alt_insn->alts, list) {
> +		if (__validate_unwind(file, alt_insn, alt->insn))
> +			return 1;
> +	}
> +
> +	return 0;
>   }
>
>   /*
> @@ -2423,9 +2561,10 @@ static void fill_alternative_cfi(struct
>   static int validate_branch(struct objtool_file *file, struct symbol *func,
>   			   struct instruction *insn, struct insn_state state)
>   {
> +	struct instruction *next_insn, *alt_insn = NULL;
>   	struct alternative *alt;
> -	struct instruction *next_insn;
>   	struct section *sec;
> +	int alt_group = 0;
>   	u8 visited;
>   	int ret;
>
> @@ -2480,8 +2619,10 @@ static int validate_branch(struct objtoo
>   				}
>   			}
>
> -			if (insn->alt_group)
> -				fill_alternative_cfi(file, insn);
> +			if (insn->alt_group) {
> +				alt_insn = insn;
> +				alt_group = insn->alt_group;
> +			}
>
>   			if (skip_orig)
>   				return 0;
> @@ -2613,6 +2754,17 @@ static int validate_branch(struct objtoo
>   		}
>
>   		insn = next_insn;
> +
> +		if (alt_insn && insn->alt_group != alt_group) {
> +			alt_insn->alt_end = insn;
> +
> +			fill_alt_cfi(file, alt_insn);
> +
> +			if (validate_alt_unwind(file, alt_insn))
> +				return 1;
> +
> +			alt_insn = NULL;
> +		}
>   	}
>
>   	return 0;
> Index: linux-2.6/tools/objtool/check.h
> ===================================================================
> --- linux-2.6.orig/tools/objtool/check.h
> +++ linux-2.6/tools/objtool/check.h
> @@ -40,6 +40,7 @@ struct instruction {
>   	struct instruction *first_jump_src;
>   	struct reloc *jump_table;
>   	struct list_head alts;
> +	struct instruction *alt_end;
>   	struct symbol *func;
>   	struct list_head stack_ops;
>   	struct cfi_state cfi;
> @@ -54,6 +55,10 @@ static inline bool is_static_jump(struct
>   	       insn->type == INSN_JUMP_UNCONDITIONAL;
>   }
>
> +struct instruction *
> +find_alt_unwind(struct objtool_file *file,
> +		struct instruction *alt_insn, unsigned long offset);
> +
>   struct instruction *find_insn(struct objtool_file *file,
>   			      struct section *sec, unsigned long offset);
>
> Index: linux-2.6/tools/objtool/orc_gen.c
> ===================================================================
> --- linux-2.6.orig/tools/objtool/orc_gen.c
> +++ linux-2.6/tools/objtool/orc_gen.c
> @@ -12,75 +12,86 @@
>   #include "check.h"
>   #include "warn.h"
>
> -int create_orc(struct objtool_file *file)
> +static int create_orc_insn(struct objtool_file *file, struct instruction *insn)
>   {
> -	struct instruction *insn;
> +	struct orc_entry *orc = &insn->orc;
> +	struct cfi_reg *cfa = &insn->cfi.cfa;
> +	struct cfi_reg *bp = &insn->cfi.regs[CFI_BP];
> +
> +	orc->end = insn->cfi.end;
> +
> +	if (cfa->base == CFI_UNDEFINED) {
> +		orc->sp_reg = ORC_REG_UNDEFINED;
> +		return 0;
> +	}
>
> -	for_each_insn(file, insn) {
> -		struct orc_entry *orc = &insn->orc;
> -		struct cfi_reg *cfa = &insn->cfi.cfa;
> -		struct cfi_reg *bp = &insn->cfi.regs[CFI_BP];
> +	switch (cfa->base) {
> +	case CFI_SP:
> +		orc->sp_reg = ORC_REG_SP;
> +		break;
> +	case CFI_SP_INDIRECT:
> +		orc->sp_reg = ORC_REG_SP_INDIRECT;
> +		break;
> +	case CFI_BP:
> +		orc->sp_reg = ORC_REG_BP;
> +		break;
> +	case CFI_BP_INDIRECT:
> +		orc->sp_reg = ORC_REG_BP_INDIRECT;
> +		break;
> +	case CFI_R10:
> +		orc->sp_reg = ORC_REG_R10;
> +		break;
> +	case CFI_R13:
> +		orc->sp_reg = ORC_REG_R13;
> +		break;
> +	case CFI_DI:
> +		orc->sp_reg = ORC_REG_DI;
> +		break;
> +	case CFI_DX:
> +		orc->sp_reg = ORC_REG_DX;
> +		break;
> +	default:
> +		WARN_FUNC("unknown CFA base reg %d",
> +			  insn->sec, insn->offset, cfa->base);
> +		return -1;
> +	}
>
> -		if (!insn->sec->text)
> -			continue;
> +	switch(bp->base) {
> +	case CFI_UNDEFINED:
> +		orc->bp_reg = ORC_REG_UNDEFINED;
> +		break;
> +	case CFI_CFA:
> +		orc->bp_reg = ORC_REG_PREV_SP;
> +		break;
> +	case CFI_BP:
> +		orc->bp_reg = ORC_REG_BP;
> +		break;
> +	default:
> +		WARN_FUNC("unknown BP base reg %d",
> +			  insn->sec, insn->offset, bp->base);
> +		return -1;
> +	}
>
> -		orc->end = insn->cfi.end;
> +	orc->sp_offset = cfa->offset;
> +	orc->bp_offset = bp->offset;
> +	orc->type = insn->cfi.type;
>
> -		if (cfa->base == CFI_UNDEFINED) {
> -			orc->sp_reg = ORC_REG_UNDEFINED;
> -			continue;
> -		}
> +	return 0;
> +}
>
> -		switch (cfa->base) {
> -		case CFI_SP:
> -			orc->sp_reg = ORC_REG_SP;
> -			break;
> -		case CFI_SP_INDIRECT:
> -			orc->sp_reg = ORC_REG_SP_INDIRECT;
> -			break;
> -		case CFI_BP:
> -			orc->sp_reg = ORC_REG_BP;
> -			break;
> -		case CFI_BP_INDIRECT:
> -			orc->sp_reg = ORC_REG_BP_INDIRECT;
> -			break;
> -		case CFI_R10:
> -			orc->sp_reg = ORC_REG_R10;
> -			break;
> -		case CFI_R13:
> -			orc->sp_reg = ORC_REG_R13;
> -			break;
> -		case CFI_DI:
> -			orc->sp_reg = ORC_REG_DI;
> -			break;
> -		case CFI_DX:
> -			orc->sp_reg = ORC_REG_DX;
> -			break;
> -		default:
> -			WARN_FUNC("unknown CFA base reg %d",
> -				  insn->sec, insn->offset, cfa->base);
> -			return -1;
> -		}
> +int create_orc(struct objtool_file *file)
> +{
> +	struct instruction *insn;
>
> -		switch(bp->base) {
> -		case CFI_UNDEFINED:
> -			orc->bp_reg = ORC_REG_UNDEFINED;
> -			break;
> -		case CFI_CFA:
> -			orc->bp_reg = ORC_REG_PREV_SP;
> -			break;
> -		case CFI_BP:
> -			orc->bp_reg = ORC_REG_BP;
> -			break;
> -		default:
> -			WARN_FUNC("unknown BP base reg %d",
> -				  insn->sec, insn->offset, bp->base);
> -			return -1;
> -		}
> +	for_each_insn(file, insn) {
> +		int ret;
> +
> +		if (!insn->sec->text)
> +			continue;
>
> -		orc->sp_offset = cfa->offset;
> -		orc->bp_offset = bp->offset;
> -		orc->type = insn->cfi.type;
> +		ret = create_orc_insn(file, insn);
> +		if (ret)
> +			return ret;
>   	}
>
>   	return 0;
> @@ -166,6 +177,28 @@ int create_orc_sections(struct objtool_f
>
>   		prev_insn = NULL;
>   		sec_for_each_insn(file, sec, insn) {
> +
> +			if (insn->alt_end) {
> +				unsigned int offset, alt_len;
> +				struct instruction *unwind;
> +
> +				alt_len = insn->alt_end->offset - insn->offset;
> +				for (offset = 0; offset < alt_len; offset++) {
> +					unwind = find_alt_unwind(file, insn, offset);
> +					/* XXX: skipped earlier ! */
> +					create_orc_insn(file, unwind);
> +					if (!prev_insn ||
> +					    memcmp(&unwind->orc, &prev_insn->orc,
> +						   sizeof(struct orc_entry))) {
> +						idx++;
> +//						WARN_FUNC("ORC @ %d/%d", sec, insn->offset+offset, offset, alt_len);
> +					}
> +					prev_insn = unwind;
> +				}
> +
> +				insn = insn->alt_end;
> +			}
> +
>   			if (!prev_insn ||
>   			    memcmp(&insn->orc, &prev_insn->orc,
>   				   sizeof(struct orc_entry))) {
> @@ -203,6 +236,31 @@ int create_orc_sections(struct objtool_f
>
>   		prev_insn = NULL;
>   		sec_for_each_insn(file, sec, insn) {
> +
> +			if (insn->alt_end) {
> +				unsigned int offset, alt_len;
> +				struct instruction *unwind;
> +
> +				alt_len = insn->alt_end->offset - insn->offset;
> +				for (offset = 0; offset < alt_len; offset++) {
> +					unwind = find_alt_unwind(file, insn, offset);
> +					if (!prev_insn ||
> +					    memcmp(&unwind->orc, &prev_insn->orc,
> +						   sizeof(struct orc_entry))) {
> +
> +						if (create_orc_entry(file->elf, u_sec, ip_relocsec, idx,
> +								     insn->sec, insn->offset + offset,
> +								     &unwind->orc))
> +							return -1;
> +
> +						idx++;
> +					}
> +					prev_insn = unwind;
> +				}
> +
> +				insn = insn->alt_end;
> +			}
> +
>   			if (!prev_insn || memcmp(&insn->orc, &prev_insn->orc,
>   						 sizeof(struct orc_entry))) {
>
>


--------------F4E81710D117794268EEB261
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------F4E81710D117794268EEB261--

--eJf0I1odMv2scnFFsZ16khArxsjn0GCwk--

--H2kwefz8uIE15eQoOkpjLVA8XJ0Q8kIsD
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/YoTUFAwAAAAAACgkQsN6d1ii/Ey8c
4Af9EPguE36mExOqZBOb5ZgFrhbSqrWQDZmE1KbHgbM1ziiQCfiXQD6+EBEmtuQ0oUDN8hJc/CIg
f0IHhweVykhon4Z9R3Fv9q6z3ZryFRzygP14ZNye2GW6cspH0bp8c2v9vG+0dQKxAUee3cS2hJVn
XHeibQug+fGgdI2D90U6xryasC5CgTFYEttF/tEJNMGbVsZyTzQHHbWtPf93baVeEBfjd2cPe54R
Asv8rjRxGivpEtCA8+mK7Up57mLT6oLLX9CBzpzvfoMz7iQI8tj9gP5WO8NLcqGLF0K4PxREJEzC
UD6+h6ew8YijxTQPm3EL8bqbOZQT5pL+DHnrUoVMcQ==
=Kt/D
-----END PGP SIGNATURE-----

--H2kwefz8uIE15eQoOkpjLVA8XJ0Q8kIsD--


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 11:46:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 11:46:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53105.92675 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp8nB-0000z5-7R; Tue, 15 Dec 2020 11:46:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53105.92675; Tue, 15 Dec 2020 11:46:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp8nB-0000yy-3W; Tue, 15 Dec 2020 11:46:57 +0000
Received: by outflank-mailman (input) for mailman id 53105;
 Tue, 15 Dec 2020 11:46:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Vckb=FT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kp8nA-0000yt-M0
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 11:46:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6c7695e8-3ac9-45f2-87ef-d39ae0952a22;
 Tue, 15 Dec 2020 11:46:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 31166AE47;
 Tue, 15 Dec 2020 11:46:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c7695e8-3ac9-45f2-87ef-d39ae0952a22
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608032815; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+SdqzB7SHYA0yvvwrceSWNi9LB3HOCaaAdfNrtqXBsk=;
	b=XLMwb8g2HuTM2hZslwYUXMJXySEX7rsBgtReNBTQsQeAHpvARXgfGXK94xPFLeklcEynQ1
	WfDz+AfvnHrmP3hNxYJ64l/BfMf/neURNYOvEDUCX5lrItmng92Q0wCQP57XIn3jYULk2N
	MMKIOsD2tNLbQDh0UbSg4GRGuU/p5cM=
Subject: Re: [PATCH] xen: Rework WARN_ON() to return whether a warning was
 triggered
To: Julien Grall <julien@xen.org>
Cc: bertrand.marquis@arm.com, Rahul.Singh@arm.com,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20201215112610.1986-1-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c5ac88e6-4e06-553d-2996-d2b027acd782@suse.com>
Date: Tue, 15 Dec 2020 12:46:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201215112610.1986-1-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 15.12.2020 12:26, Julien Grall wrote:
> --- a/xen/include/xen/lib.h
> +++ b/xen/include/xen/lib.h
> @@ -23,7 +23,13 @@
>  #include <asm/bug.h>
>  
>  #define BUG_ON(p)  do { if (unlikely(p)) BUG();  } while (0)
> -#define WARN_ON(p) do { if (unlikely(p)) WARN(); } while (0)
> +#define WARN_ON(p)  ({                  \
> +    bool __ret_warn_on = (p);           \

Please can you avoid leading underscores here?

> +                                        \
> +    if ( unlikely(__ret_warn_on) )      \
> +        WARN();                         \
> +    unlikely(__ret_warn_on);            \
> +})

Does this latter unlikely() have any effect? So far I thought it
would need to be immediately inside a control construct or be an
operand to && or ||.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 11:58:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 11:58:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53111.92686 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp8yK-0001xo-9J; Tue, 15 Dec 2020 11:58:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53111.92686; Tue, 15 Dec 2020 11:58:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp8yK-0001xh-6H; Tue, 15 Dec 2020 11:58:28 +0000
Received: by outflank-mailman (input) for mailman id 53111;
 Tue, 15 Dec 2020 11:58:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sb05=FT=redhat.com=kwolf@srs-us1.protection.inumbo.net>)
 id 1kp8yI-0001xc-Ra
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 11:58:27 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 643bcf3b-e98c-4f86-83e4-897b92f4d13e;
 Tue, 15 Dec 2020 11:58:25 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-64-IpJsG-4QN5y60YuRWvO2tg-1; Tue, 15 Dec 2020 06:58:23 -0500
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com
 [10.5.11.22])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id AD3371922961;
 Tue, 15 Dec 2020 11:58:21 +0000 (UTC)
Received: from merkur.fritz.box (ovpn-117-65.ams2.redhat.com [10.36.117.65])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id A94AB10074EF;
 Tue, 15 Dec 2020 11:58:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 643bcf3b-e98c-4f86-83e4-897b92f4d13e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1608033504;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=xQncgaDlvRlYMFtHSjAoEgnBxR0gZjfPlDQMbTR0GW8=;
	b=Vtf2QS7zXIeMqfWFTrp3Ypd1Dkn0zi727Ud279DitTMh7Q013ESDBK37hJ6T7ksPzUF568
	y6ew2gI3OguIRSYrtv5J1rtrfeVCwsVyykIS543jK9S+pQ2upRTNb70t/NPkrMm1fjstX1
	HQbAZg7KMSUs4F1eA7hgx5MVWZsZmv0=
X-MC-Unique: IpJsG-4QN5y60YuRWvO2tg-1
Date: Tue, 15 Dec 2020 12:58:04 +0100
From: Kevin Wolf <kwolf@redhat.com>
To: Sergio Lopez <slp@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
	qemu-block@nongnu.org, Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>, Fam Zheng <fam@euphon.net>,
	Eric Blake <eblake@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
	Max Reitz <mreitz@redhat.com>
Subject: Re: [PATCH v2 1/4] block: Honor blk_set_aio_context() context
 requirements
Message-ID: <20201215115804.GC8185@merkur.fritz.box>
References: <20201214170519.223781-1-slp@redhat.com>
 <20201214170519.223781-2-slp@redhat.com>
MIME-Version: 1.0
In-Reply-To: <20201214170519.223781-2-slp@redhat.com>
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=kwolf@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Am 14.12.2020 um 18:05 hat Sergio Lopez geschrieben:
> The documentation for bdrv_set_aio_context_ignore() states this:
> 
>  * The caller must own the AioContext lock for the old AioContext of bs, but it
>  * must not own the AioContext lock for new_context (unless new_context is the
>  * same as the current context of bs).
> 
> As blk_set_aio_context() makes use of this function, this rule also
> applies to it.
> 
> Fix all occurrences where this rule wasn't honored.
> 
> Suggested-by: Kevin Wolf <kwolf@redhat.com>
> Signed-off-by: Sergio Lopez <slp@redhat.com>

Reviewed-by: Kevin Wolf <kwolf@redhat.com>



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 12:12:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 12:12:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53124.92705 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9CI-0003qm-0f; Tue, 15 Dec 2020 12:12:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53124.92705; Tue, 15 Dec 2020 12:12:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9CH-0003qf-SZ; Tue, 15 Dec 2020 12:12:53 +0000
Received: by outflank-mailman (input) for mailman id 53124;
 Tue, 15 Dec 2020 12:12:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sb05=FT=redhat.com=kwolf@srs-us1.protection.inumbo.net>)
 id 1kp9CH-0003qa-Be
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 12:12:53 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 1e2194ff-25f3-4905-8daf-c337e1419cc2;
 Tue, 15 Dec 2020 12:12:52 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-469-DEshVlCCPQyfZLbZd14zsA-1; Tue, 15 Dec 2020 07:12:48 -0500
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 6AB0E107ACE8;
 Tue, 15 Dec 2020 12:12:46 +0000 (UTC)
Received: from merkur.fritz.box (ovpn-117-65.ams2.redhat.com [10.36.117.65])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id A22F819C44;
 Tue, 15 Dec 2020 12:12:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1e2194ff-25f3-4905-8daf-c337e1419cc2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1608034371;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=45SVj6ixW5BIwfPuD+NTgrB4YHCHNS+cRbjYLlCLdog=;
	b=bWZW8y9vqINuSHIE7ENz+xNkfRCMbSLg1e+CKvyDOe1kzsb30r+LAmloVqmLcjjeEf78co
	oicfYh+Ecd89iJasNRhwFx6dqHRPRZ4LNZHVRCq8m19mTcpIXlOmOgMEgZANT3rHIU9dyh
	5yYEiUrVTxM/faCdbMNwp+fVwGQH46Y=
X-MC-Unique: DEshVlCCPQyfZLbZd14zsA-1
Date: Tue, 15 Dec 2020 13:12:33 +0100
From: Kevin Wolf <kwolf@redhat.com>
To: Sergio Lopez <slp@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
	qemu-block@nongnu.org, Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>, Fam Zheng <fam@euphon.net>,
	Eric Blake <eblake@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
	Max Reitz <mreitz@redhat.com>
Subject: Re: [PATCH v2 2/4] block: Avoid processing BDS twice in
 bdrv_set_aio_context_ignore()
Message-ID: <20201215121233.GD8185@merkur.fritz.box>
References: <20201214170519.223781-1-slp@redhat.com>
 <20201214170519.223781-3-slp@redhat.com>
MIME-Version: 1.0
In-Reply-To: <20201214170519.223781-3-slp@redhat.com>
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=kwolf@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Am 14.12.2020 um 18:05 hat Sergio Lopez geschrieben:
> While processing the parents of a BDS, one of the parents may process
> the child that's doing the tail recursion, which leads to a BDS being
> processed twice. This is especially problematic for the aio_notifiers,
> as they might attempt to work on both the old and the new AIO
> contexts.
> 
> To avoid this, add the BDS pointer to the ignore list, and check the
> child BDS pointer while iterating over the children.
> 
> Signed-off-by: Sergio Lopez <slp@redhat.com>

Ugh, so we get a mixed list of BdrvChild and BlockDriverState? :-/

What is the specific scenario where you saw this breaking? Did you have
multiple BdrvChild connections between two nodes so that we would go to
the parent node through one and then come back to the child node through
the other?

Maybe, if what we really need is to process every node once rather
than every edge once, the list should be changed to contain _only_ BDS
objects. But then blk_do_set_aio_context() probably won't work any
more, because it can't have blk->root ignored any more...

Anyway, if we end up changing what the list contains, the comment needs
an update, too. Currently it says:

 * @ignore will accumulate all visited BdrvChild object. The caller is
 * responsible for freeing the list afterwards.

Another option: Split the parents QLIST_FOREACH loop in two. First add
all parent BdrvChild objects to the ignore list, remember which of them
were newly added, and only after adding all of them call
child->klass->set_aio_ctx() for each parent that was previously not on
the ignore list. This will avoid that we come back to the same node
because all of its incoming edges are ignored now.

Kevin

>  block.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/block.c b/block.c
> index f1cedac362..bc8a66ab6e 100644
> --- a/block.c
> +++ b/block.c
> @@ -6465,12 +6465,17 @@ void bdrv_set_aio_context_ignore(BlockDriverState *bs,
>      bdrv_drained_begin(bs);
>  
>      QLIST_FOREACH(child, &bs->children, next) {
> -        if (g_slist_find(*ignore, child)) {
> +        if (g_slist_find(*ignore, child) || g_slist_find(*ignore, child->bs)) {
>              continue;
>          }
>          *ignore = g_slist_prepend(*ignore, child);
>          bdrv_set_aio_context_ignore(child->bs, new_context, ignore);
>      }
> +    /*
> +     * Add a reference to this BS to the ignore list, so its
> +     * parents won't attempt to process it again.
> +     */
> +    *ignore = g_slist_prepend(*ignore, bs);
>      QLIST_FOREACH(child, &bs->parents, next_parent) {
>          if (g_slist_find(*ignore, child)) {
>              continue;
> -- 
> 2.26.2
> 



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 12:18:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 12:18:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53132.92720 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9Hz-000431-VT; Tue, 15 Dec 2020 12:18:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53132.92720; Tue, 15 Dec 2020 12:18:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9Hz-00042u-QZ; Tue, 15 Dec 2020 12:18:47 +0000
Received: by outflank-mailman (input) for mailman id 53132;
 Tue, 15 Dec 2020 12:18:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tdgx=FT=xenbits.xen.org=gdunlap@srs-us1.protection.inumbo.net>)
 id 1kp9Hy-00042T-UH
 for xen-devel@lists.xen.org; Tue, 15 Dec 2020 12:18:47 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9ecaa08d-0a08-459f-ae6c-fcfde9caadb9;
 Tue, 15 Dec 2020 12:18:36 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9Hc-0005ev-Mw; Tue, 15 Dec 2020 12:18:24 +0000
Received: from gdunlap by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9Hc-0005BK-Ic; Tue, 15 Dec 2020 12:18:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ecaa08d-0a08-459f-ae6c-fcfde9caadb9
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=Xw/J//QrLWVyECrsYjYS67TF+HT7Gfo/HQDHpSw+N+U=; b=TupUUcOlieFwLtzzdP0J6btXbe
	ibtgxOicSxbax95AEMB5TgP13nGPGOmyNKsEFQdWHgx5+CHNm07KgHScNyQ1NQI7v7XZ6o17GeSHD
	IysNqrviuIDb/z31R0ocBKgqSiMJ76Ffb/LYBou0tan1xXoV8neSwYeorkZan1xYmxyA=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 115 v4 (CVE-2020-29480) - xenstore watch
 notifications lacking permission checks
Message-Id: <E1kp9Hc-0005BK-Ic@xenbits.xenproject.org>
Date: Tue, 15 Dec 2020 12:18:24 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2020-29480 / XSA-115
                               version 4

         xenstore watch notifications lacking permission checks

UPDATES IN VERSION 4
====================

Public release.

ISSUE DESCRIPTION
=================

Neither xenstore implementation does any permissions checks when
reporting a xenstore watch event.

A guest administrator can watch the root xenstored node, which will
cause notifications for every created, modified and deleted key.

A guest administrator can also use the special watches, which will
cause a notification every time a domain is created and destroyed.

Data may include:
 - number, type and domids of other VMs
 - existence and domids of driver domains
 - numbers of virtual interfaces, block devices, vcpus
 - existence of virtual framebuffers and their backend style (eg,
   existence of VNC service)
 - Xen VM UUIDs for other domains
 - timing information about domain creation and device setup
 - some hints at the backend provisioning of VMs and their devices

The watch events do not contain values stored in xenstore, only key
names.

IMPACT
======

A guest administrator can observe non-sensitive domain and device
lifecycle events relating to other guests.  This information allows
some insight into overall system configuration (including number and
general nature of other guests), and configuration of other guests
(including number and general nature of other guests' devices).  This
information might be commercially interesting or might make other
attacks easier.

There is not believed to be exposure of sensitive data.  Specifically,
there is no exposure of: VNC passwords; port numbers; pathnames in host
and guest filesystems; cryptographic keys; or within-guest data.

VULNERABLE SYSTEMS
==================

All Xen systems are vulnerable.

Both Xenstore implementations (C and Ocaml) are vulnerable.

MITIGATION
==========

There is no mitigation available.

CREDITS
=======

This issue was discovered by Andrew Reimers and Alex Sharp from
OrionVM.

RESOLUTION
==========

Applying the appropriate attached patches resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

Note that the Ocaml patches depend on XSA-353.

xsa115-c/*.patch           xen-unstable        [C xenstored]
xsa115-4.14-c/*.patch      Xen 4.14            [C xenstored]
xsa115-4.13-c/*.patch      Xen 4.13 - 4.10     [C xenstored]

xsa115-o/*.patch           xen-unstable - 4.12 [Ocaml xenstored, needs 353]
xsa115-4.11-o/*.patch      Xen 4.11            [Ocaml xenstored, needs 353]
xsa115-4.10-o/*.patch      Xen 4.10            [Ocaml xenstored, needs 353]

$ sha256sum xsa115* xsa115*/*
b2cc3bfbfb48b60e8623b276d823599bc6a33065a340fbc79804bad7ffee48be  xsa115.meta
ced68edb7da44f3e7233120c34a343ee392a4bf094a61775d54d3ea7dc920837  xsa115-4.10-o/0001-tools-ocaml-xenstored-ignore-transaction-id-for-un-w.patch
21d0e3aff4c696875b9db02d6ba3fc683ba05b768d4716e1a197f4c5475ed324  xsa115-4.10-o/0002-tools-ocaml-xenstored-check-privilege-for-XS_IS_DOMA.patch
28249e3f48c255bbc1e87f6e4b70f5b832b50fa8028f44924c6308a9492a1cf2  xsa115-4.10-o/0003-tools-ocaml-xenstored-unify-watch-firing.patch
219f111181cc8ddcdbca73823688b33f86a2e4bddeb06dcc7dc84c63fc9e9053  xsa115-4.10-o/0004-tools-ocaml-xenstored-introduce-permissions-for-spec.patch
0cb14326baedd44650ce59a3da5ab6daa4a7f18f1e1440b6eda5d1a5d414233b  xsa115-4.10-o/0005-tools-ocaml-xenstored-avoid-watch-events-for-nodes-w.patch
b84be5a85c1dadbf77fa1ea1157a293408052d9628fc9cb1f343cd3a1dcd687c  xsa115-4.10-o/0006-tools-ocaml-xenstored-add-xenstored.conf-flag-to-tur.patch
ced68edb7da44f3e7233120c34a343ee392a4bf094a61775d54d3ea7dc920837  xsa115-4.11-o/0001-tools-ocaml-xenstored-ignore-transaction-id-for-un-w.patch
21d0e3aff4c696875b9db02d6ba3fc683ba05b768d4716e1a197f4c5475ed324  xsa115-4.11-o/0002-tools-ocaml-xenstored-check-privilege-for-XS_IS_DOMA.patch
28249e3f48c255bbc1e87f6e4b70f5b832b50fa8028f44924c6308a9492a1cf2  xsa115-4.11-o/0003-tools-ocaml-xenstored-unify-watch-firing.patch
046d6d9044c41481071760c54e0ad2f66db70ea720c8d39056cedfd51fda56b8  xsa115-4.11-o/0004-tools-ocaml-xenstored-introduce-permissions-for-spec.patch
a0042d3524f83ac2514d4040cc049108c3db1fe398f26d86b309dda1c1444472  xsa115-4.11-o/0005-tools-ocaml-xenstored-avoid-watch-events-for-nodes-w.patch
b84be5a85c1dadbf77fa1ea1157a293408052d9628fc9cb1f343cd3a1dcd687c  xsa115-4.11-o/0006-tools-ocaml-xenstored-add-xenstored.conf-flag-to-tur.patch
383b1f8ae592f5330832962e98c02cf18b566ed090f9e96338536619ab1bd889  xsa115-4.13-c/0001-tools-xenstore-allow-removing-child-of-a-node-exceed.patch
0c96d9c27bc0031f2e72170c453aca5677d8f7469b15468dc797aef4bd1d67d6  xsa115-4.13-c/0002-tools-xenstore-ignore-transaction-id-for-un-watch.patch
11ec359a426abaa71b7eda4a5bf319d73b14b3cbfeac483206c134b0e3ad5391  xsa115-4.13-c/0003-tools-xenstore-fix-node-accounting-after-failed-node.patch
5fd6461cc96fd787a81a625b9b7e230a5c9092201a54976de088703305e86dd6  xsa115-4.13-c/0004-tools-xenstore-simplify-and-rename-check_event_node.patch
55bfaa3674fb355a2ed5830e0a7197ede0a5b9168f93889d7fa08044b312ab52  xsa115-4.13-c/0005-tools-xenstore-check-privilege-for-XS_IS_DOMAIN_INTR.patch
0013ad062ee5f2dd79f500e2c829a9534677282ed4a2d596cf16e6b362fd29af  xsa115-4.13-c/0006-tools-xenstore-rework-node-removal.patch
e5ed745da88dd195b03f788f255d0d752eb9e801c39c6905707c0b5fa60e8ddf  xsa115-4.13-c/0007-tools-xenstore-fire-watches-only-when-removing-a-spe.patch
83e6b4312be4b7fe651f680e5428d47e71a0fd7fdbff5d39433f48b0f4484ad4  xsa115-4.13-c/0008-tools-xenstore-introduce-node_perms-structure.patch
8fa565f136b1fab33f6a06eebad5da9bed571dcac030dcd0b85078817b5adc75  xsa115-4.13-c/0009-tools-xenstore-allow-special-watches-for-privileged-.patch
4038e76a3a8748b748811e06b91d87d01c3d3d3ae5fead4b123065cfe35eb81a  xsa115-4.13-c/0010-tools-xenstore-avoid-watch-events-for-nodes-without-.patch
797772d456b194a7cdad1eedbcf61499d2c5c2a71a6ba9a11e4789ac7eda602f  xsa115-4.14-c/0001-tools-xenstore-allow-removing-child-of-a-node-exceed.patch
2f37019e0d0ca3e425da0ab272a9afae749de963bf89c6a65696b0f134257011  xsa115-4.14-c/0002-tools-xenstore-ignore-transaction-id-for-un-watch.patch
7a7b63884dfbea232a14b7ff49f14d1bf89edd638bf738643676504aab6ef5f2  xsa115-4.14-c/0003-tools-xenstore-fix-node-accounting-after-failed-node.patch
52f2c03e318720b7ccf55c9cb11f5d33a46feb922dfed656c7c6db1e5f813d91  xsa115-4.14-c/0004-tools-xenstore-simplify-and-rename-check_event_node.patch
1db253543e2387abed872c6d94ac8915ce55f38e95d59f24cd0d19d173b8eadb  xsa115-4.14-c/0005-tools-xenstore-check-privilege-for-XS_IS_DOMAIN_INTR.patch
4bd75552186793cbc8bc1567b5952990e41651c1ccbdc2c55b14bbe62b707ac0  xsa115-4.14-c/0006-tools-xenstore-rework-node-removal.patch
22d0a1bc7b413ff9689b06ee7833baf970f54c678da47db3a941801c79339464  xsa115-4.14-c/0007-tools-xenstore-fire-watches-only-when-removing-a-spe.patch
8d4a53c74d0ce42f8134b073acadf0550552da5a827840517cbae55628e5b4a9  xsa115-4.14-c/0008-tools-xenstore-introduce-node_perms-structure.patch
10a066d28b14ae667d11a9fc3c9113569fa16df4e6039380b13907886551a970  xsa115-4.14-c/0009-tools-xenstore-allow-special-watches-for-privileged-.patch
9731273b7b096326e28caad8d75b2f87e391131fe40f0952dbb8f974e6b3b298  xsa115-4.14-c/0010-tools-xenstore-avoid-watch-events-for-nodes-without-.patch
db1b0b44aad333cc8331a3b34199b052fad3897db5386d1f1b9e02247ff72106  xsa115-c/0001-tools-xenstore-allow-removing-child-of-a-node-exceed.patch
d052bff6d7971500bbed047f914b45fa95cc29b914a024f1d3da9bb151239432  xsa115-c/0002-tools-xenstore-ignore-transaction-id-for-un-watch.patch
cb016c3669b0d650d33dbfd6246545a79e75f605bbfe42f8851702a4848f71db  xsa115-c/0003-tools-xenstore-fix-node-accounting-after-failed-node.patch
289beb0917e2554d3c3b6be90e2dd9215ac1aefd3e4fb0ed86e690abbd73b669  xsa115-c/0004-tools-xenstore-simplify-and-rename-check_event_node.patch
8a61a189987e88dbf4c7bdf4b247f1117c82cfe6ac308302753146b11802a670  xsa115-c/0005-tools-xenstore-check-privilege-for-XS_IS_DOMAIN_INTR.patch
6af64fa35e823fff2f47b11421409f2f21f8ecf853583ac70054907ad3ce83c7  xsa115-c/0006-tools-xenstore-rework-node-removal.patch
4fb7af8330e85f267235a05cce0758473326ddb5d39d47450a5492060209f0c0  xsa115-c/0007-tools-xenstore-fire-watches-only-when-removing-a-spe.patch
ff1af7e9d36dc8d3c423a3736e82c2e4ab2a595f3fc6622c57096c7a3a1dce59  xsa115-c/0008-tools-xenstore-introduce-node_perms-structure.patch
8895fbef5ab0b8bdf303becd809c848acd85249a53e0e414d1a9c4d917402ec3  xsa115-c/0009-tools-xenstore-allow-special-watches-for-privileged-.patch
a611598bc76874d69449c23aa43d8b6f1331595e64eb5746731f4ee64301441c  xsa115-c/0010-tools-xenstore-avoid-watch-events-for-nodes-without-.patch
46c317b0975fe975162dc4b4bd61f82bf9a6b102e7edcd3cd0dccaad84165ed6  xsa115-o/0001-tools-ocaml-xenstored-ignore-transaction-id-for-un-w.patch
5d0f8c8901196715ed60593bf239caf39b168814ea01ed18c2e3789fb7879790  xsa115-o/0002-tools-ocaml-xenstored-check-privilege-for-XS_IS_DOMA.patch
002cb251a1dcde811dd5998a53a37afe67653361320316eaff9df2d9c5369f8d  xsa115-o/0003-tools-ocaml-xenstored-unify-watch-firing.patch
f640ff6f2e86bc0c4074629a80d17328d7494da3f2fdc2c8d99d0018c36c28dc  xsa115-o/0004-tools-ocaml-xenstored-introduce-permissions-for-spec.patch
fcc0d36ab9e27a2ab3dd2de8b54495676a454298ca1203d3d424cd4498e03321  xsa115-o/0005-tools-ocaml-xenstored-avoid-watch-events-for-nodes-w.patch
62aeb42ae0a5a93de246aed259b4fe5850a33eb001f03b8d183a70c9c5617618  xsa115-o/0006-tools-ocaml-xenstored-add-xenstored.conf-flag-to-tur.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.


(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl/YqMsMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZRJUIAJ66U75O7Pf5tmu9s4vLrrG/n7rCo6qp+TZ1hcio
PNd2xYJaiVfr39m2JByoUyIgBbb3C7R03pXgM15Vbvk0/v6b3QySxzSBbqdIOn3H
yQtOJlNY4OnQh7n0Svs0HV1aCbd/81wIKZ5aCxn/X3ZBjBHOIQGMAdSZ/lkh8g0p
7CTkTZB//gbuR8QZV2KYqFYsKlwhhGCueOFYlnqIs/HWmAL2wnsacF/K7xffVw0S
Fu8pATp1jWXGYc3S1J9o+C77vF4Ai8x2OLw5TCSG8grmPAuojbmB5UuT+ez4VB5q
3KbpqkJSoyuOvWOPHxydb9Z/ExbpZUMgO0c1FmZ2opXRBoA=
=OtzN
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa115.meta"
Content-Disposition: attachment; filename="xsa115.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAxMTUsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIs
CiAgICAiNC4xMSIsCiAgICAiNC4xMCIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjEwIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICIxZDcyZDk5MTVlZGZmMGRkNDFmNjAxYmJiMGIxZjgzYzAy
ZmYxNjg5IiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
NTMKICAgICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAg
ICAgICAgInhzYTExNS00LjEzLWMvKi5wYXRjaCIsCiAgICAgICAgICAgICJ4
c2ExMTUtNC4xMC1vLyoucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAgfQog
ICAgICB9CiAgICB9LAogICAgIjQuMTEiOiB7CiAgICAgICJSZWNpcGVzIjog
ewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAiU3RhYmxlUmVmIjogIjQx
YTgyMmMzOTI2MzUwZjI2OTE3ZDc0N2M4ZGZlZDFjNDRhMmNmNDIiLAogICAg
ICAgICAgIlByZXJlcXMiOiBbCiAgICAgICAgICAgIDM1MwogICAgICAgICAg
XSwKICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNhMTE1
LTQuMTMtYy8qLnBhdGNoIiwKICAgICAgICAgICAgInhzYTExNS00LjExLW8v
Ki5wYXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0s
CiAgICAiNC4xMiI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAgInhl
biI6IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiODE0NWQzOGI0ODAwOTI1
NWEzMmFiODdhMDJlNDgxY2QwOWM4MTFmOSIsCiAgICAgICAgICAiUHJlcmVx
cyI6IFsKICAgICAgICAgICAgMzUzCiAgICAgICAgICBdLAogICAgICAgICAg
IlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2ExMTUtNC4xMy1jLyoucGF0
Y2giLAogICAgICAgICAgICAieHNhMTE1LW8vKi5wYXRjaCIKICAgICAgICAg
IF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAgICAiNC4xMyI6IHsKICAg
ICAgIlJlY2lwZXMiOiB7CiAgICAgICAgInhlbiI6IHsKICAgICAgICAgICJT
dGFibGVSZWYiOiAiYjUzMDIyNzNlMmM1MTk0MDE3MjQwMDQ4NjY0NDYzNmYy
ZjRmYzY0YSIsCiAgICAgICAgICAiUHJlcmVxcyI6IFsKICAgICAgICAgICAg
MzUzCiAgICAgICAgICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAg
ICAgICAgICJ4c2ExMTUtNC4xMy1jLyoucGF0Y2giLAogICAgICAgICAgICAi
eHNhMTE1LW8vKi5wYXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAgICAg
IH0KICAgIH0sCiAgICAiNC4xNCI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAg
ICAgICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiMWQxZDFm
NTM5MTk3NjQ1NmE3OWRhYWMwZGNmZTcxNTdkYTFlNTRmNyIsCiAgICAgICAg
ICAiUHJlcmVxcyI6IFsKICAgICAgICAgICAgMzUzCiAgICAgICAgICBdLAog
ICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2ExMTUtNC4x
NC1jLyoucGF0Y2giLAogICAgICAgICAgICAieHNhMTE1LW8vKi5wYXRjaCIK
ICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAgICAibWFz
dGVyIjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewog
ICAgICAgICAgIlN0YWJsZVJlZiI6ICIzYWU0NjlhZjhlNjgwZGYzMWVlY2Qw
YTJhYzZhODNiNThhZDdjZTUzIiwKICAgICAgICAgICJQcmVyZXFzIjogWwog
ICAgICAgICAgICAzNTMKICAgICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hl
cyI6IFsKICAgICAgICAgICAgInhzYTExNS1jLyoucGF0Y2giLAogICAgICAg
ICAgICAieHNhMTE1LW8vKi5wYXRjaCIKICAgICAgICAgIF0KICAgICAgICB9
CiAgICAgIH0KICAgIH0KICB9Cn0=

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.10-o/0001-tools-ocaml-xenstored-ignore-transaction-id-for-un-w.patch"
Content-Disposition: attachment;
 filename="xsa115-4.10-o/0001-tools-ocaml-xenstored-ignore-transaction-id-for-un-w.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogaWdub3JlIHRyYW5zYWN0aW9uIGlkIGZvciBbdW5dd2F0Y2gK
TUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UeXBlOiB0ZXh0L3BsYWluOyBj
aGFyc2V0PVVURi04CkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDhiaXQK
Ckluc3RlYWQgb2YgaWdub3JpbmcgdGhlIHRyYW5zYWN0aW9uIGlkIGZvciBY
U19XQVRDSCBhbmQgWFNfVU5XQVRDSApjb21tYW5kcyBhcyBpdCBpcyBkb2N1
bWVudGVkIGluIGRvY3MvbWlzYy94ZW5zdG9yZS50eHQsIGl0IGlzIHRlc3Rl
ZApmb3IgdmFsaWRpdHkgdG9kYXkuCgpSZWFsbHkgaWdub3JlIHRoZSB0cmFu
c2FjdGlvbiBpZCBmb3IgWFNfV0FUQ0ggYW5kIFhTX1VOV0FUQ0guCgpUaGlz
IGlzIHBhcnQgb2YgWFNBLTExNS4KClNpZ25lZC1vZmYtYnk6IEVkd2luIFTD
tnLDtmsgPGVkdmluLnRvcm9rQGNpdHJpeC5jb20+CkFja2VkLWJ5OiBDaHJp
c3RpYW4gTGluZGlnIDxjaHJpc3RpYW4ubGluZGlnQGNpdHJpeC5jb20+ClJl
dmlld2VkLWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVyM0BjaXRy
aXguY29tPgoKZGlmZiAtLWdpdCBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9w
cm9jZXNzLm1sIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3Byb2Nlc3MubWwK
aW5kZXggNzRjNjlmODY5Yy4uMGEwZTQzZDFmMCAxMDA2NDQKLS0tIGEvdG9v
bHMvb2NhbWwveGVuc3RvcmVkL3Byb2Nlc3MubWwKKysrIGIvdG9vbHMvb2Nh
bWwveGVuc3RvcmVkL3Byb2Nlc3MubWwKQEAgLTQ5MiwxMiArNDkyLDE5IEBA
IGxldCByZXRhaW5fb3BfaW5faGlzdG9yeSB0eSA9CiAJfCBYZW5idXMuWGIu
T3AuUmVzZXRfd2F0Y2hlcwogCXwgWGVuYnVzLlhiLk9wLkludmFsaWQgICAg
ICAgICAgIC0+IGZhbHNlCiAKK2xldCBtYXliZV9pZ25vcmVfdHJhbnNhY3Rp
b24gPSBmdW5jdGlvbgorCXwgWGVuYnVzLlhiLk9wLldhdGNoIHwgWGVuYnVz
LlhiLk9wLlVud2F0Y2ggLT4gZnVuIHRpZCAtPgorCQlpZiB0aWQgPD4gVHJh
bnNhY3Rpb24ubm9uZSB0aGVuCisJCQlkZWJ1ZyAiSWdub3JpbmcgdHJhbnNh
Y3Rpb24gSUQgJWQgZm9yIHdhdGNoL3Vud2F0Y2giIHRpZDsKKwkJVHJhbnNh
Y3Rpb24ubm9uZQorCXwgXyAtPiBmdW4geCAtPiB4CisKICgqKgogICogTm90
aHJvdyBndWFyYW50ZWUuCiAgKikKIGxldCBwcm9jZXNzX3BhY2tldCB+c3Rv
cmUgfmNvbnMgfmRvbXMgfmNvbiB+cmVxID0KIAlsZXQgdHkgPSByZXEuUGFj
a2V0LnR5IGluCi0JbGV0IHRpZCA9IHJlcS5QYWNrZXQudGlkIGluCisJbGV0
IHRpZCA9IG1heWJlX2lnbm9yZV90cmFuc2FjdGlvbiB0eSByZXEuUGFja2V0
LnRpZCBpbgogCWxldCByaWQgPSByZXEuUGFja2V0LnJpZCBpbgogCXRyeQog
CQlsZXQgZmN0ID0gZnVuY3Rpb25fb2ZfdHlwZSB0eSBpbgo=

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.10-o/0002-tools-ocaml-xenstored-check-privilege-for-XS_IS_DOMA.patch"
Content-Disposition: attachment;
 filename="xsa115-4.10-o/0002-tools-ocaml-xenstored-check-privilege-for-XS_IS_DOMA.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogY2hlY2sgcHJpdmlsZWdlIGZvciBYU19JU19ET01BSU5fSU5U
Uk9EVUNFRApNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVR5cGU6IHRleHQv
cGxhaW47IGNoYXJzZXQ9VVRGLTgKQ29udGVudC1UcmFuc2Zlci1FbmNvZGlu
ZzogOGJpdAoKVGhlIFhlbnN0b3JlIGNvbW1hbmQgWFNfSVNfRE9NQUlOX0lO
VFJPRFVDRUQgc2hvdWxkIGJlIHBvc3NpYmxlIGZvciBwcml2aWxlZ2VkCmRv
bWFpbnMgb25seSAodGhlIG9ubHkgdXNlciBpbiB0aGUgdHJlZSBpcyB0aGUg
eGVucGFnaW5nIGRhZW1vbikuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTExNS4K
ClNpZ25lZC1vZmYtYnk6IEVkd2luIFTDtnLDtmsgPGVkdmluLnRvcm9rQGNp
dHJpeC5jb20+CkFja2VkLWJ5OiBDaHJpc3RpYW4gTGluZGlnIDxjaHJpc3Rp
YW4ubGluZGlnQGNpdHJpeC5jb20+ClJldmlld2VkLWJ5OiBBbmRyZXcgQ29v
cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgoKZGlmZiAtLWdpdCBh
L3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wcm9jZXNzLm1sIGIvdG9vbHMvb2Nh
bWwveGVuc3RvcmVkL3Byb2Nlc3MubWwKaW5kZXggMGEwZTQzZDFmMC4uZjM3
NGFiZTk5OCAxMDA2NDQKLS0tIGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3By
b2Nlc3MubWwKKysrIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3Byb2Nlc3Mu
bWwKQEAgLTE2Niw3ICsxNjYsOSBAQCBsZXQgZG9fc2V0cGVybXMgY29uIHQg
ZG9tYWlucyBjb25zIGRhdGEgPQogbGV0IGRvX2Vycm9yIGNvbiB0IGRvbWFp
bnMgY29ucyBkYXRhID0KIAlyYWlzZSBEZWZpbmUuVW5rbm93bl9vcGVyYXRp
b24KIAotbGV0IGRvX2lzaW50cm9kdWNlZCBjb24gdCBkb21haW5zIGNvbnMg
ZGF0YSA9CitsZXQgZG9faXNpbnRyb2R1Y2VkIGNvbiBfdCBkb21haW5zIF9j
b25zIGRhdGEgPQorCWlmIG5vdCAoQ29ubmVjdGlvbi5pc19kb20wIGNvbikK
Kwl0aGVuIHJhaXNlIERlZmluZS5QZXJtaXNzaW9uX2RlbmllZDsKIAlsZXQg
ZG9taWQgPQogCQltYXRjaCAoc3BsaXQgTm9uZSAnXDAwMCcgZGF0YSkgd2l0
aAogCQl8IGRvbWlkIDo6IF8gLT4gaW50X29mX3N0cmluZyBkb21pZAo=

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.10-o/0003-tools-ocaml-xenstored-unify-watch-firing.patch"
Content-Disposition: attachment;
 filename="xsa115-4.10-o/0003-tools-ocaml-xenstored-unify-watch-firing.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogdW5pZnkgd2F0Y2ggZmlyaW5nCk1JTUUtVmVyc2lvbjogMS4w
CkNvbnRlbnQtVHlwZTogdGV4dC9wbGFpbjsgY2hhcnNldD1VVEYtOApDb250
ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA4Yml0CgpUaGlzIHdpbGwgbWFrZSBp
dCBlYXNpZXIgaW5zZXJ0IGFkZGl0aW9uYWwgY2hlY2tzIGluIGEgZm9sbG93
LXVwIHBhdGNoLgpBbGwgd2F0Y2hlcyBhcmUgbm93IGZpcmVkIGZyb20gYSBz
aW5nbGUgZnVuY3Rpb24uCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTExNS4KClNp
Z25lZC1vZmYtYnk6IEVkd2luIFTDtnLDtmsgPGVkdmluLnRvcm9rQGNpdHJp
eC5jb20+CkFja2VkLWJ5OiBDaHJpc3RpYW4gTGluZGlnIDxjaHJpc3RpYW4u
bGluZGlnQGNpdHJpeC5jb20+ClJldmlld2VkLWJ5OiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgoKZGlmZiAtLWdpdCBhL3Rv
b2xzL29jYW1sL3hlbnN0b3JlZC9jb25uZWN0aW9uLm1sIGIvdG9vbHMvb2Nh
bWwveGVuc3RvcmVkL2Nvbm5lY3Rpb24ubWwKaW5kZXggYmU5YzYyZjI3Zi4u
ZDc0MzJjNjU5NyAxMDA2NDQKLS0tIGEvdG9vbHMvb2NhbWwveGVuc3RvcmVk
L2Nvbm5lY3Rpb24ubWwKKysrIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL2Nv
bm5lY3Rpb24ubWwKQEAgLTIxMCw4ICsyMTAsNyBAQCBsZXQgZmlyZV93YXRj
aCB3YXRjaCBwYXRoID0KIAkJZW5kIGVsc2UKIAkJCXBhdGgKIAlpbgotCWxl
dCBkYXRhID0gVXRpbHMuam9pbl9ieV9udWxsIFsgbmV3X3BhdGg7IHdhdGNo
LnRva2VuOyAiIiBdIGluCi0Jc2VuZF9yZXBseSB3YXRjaC5jb24gVHJhbnNh
Y3Rpb24ubm9uZSAwIFhlbmJ1cy5YYi5PcC5XYXRjaGV2ZW50IGRhdGEKKwlm
aXJlX3NpbmdsZV93YXRjaCB7IHdhdGNoIHdpdGggcGF0aCA9IG5ld19wYXRo
IH0KIAogKCogU2VhcmNoIGZvciBhIHZhbGlkIHVudXNlZCB0cmFuc2FjdGlv
biBpZC4gKikKIGxldCByZWMgdmFsaWRfdHJhbnNhY3Rpb25faWQgY29uIHBy
b3Bvc2VkX2lkID0K

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.10-o/0004-tools-ocaml-xenstored-introduce-permissions-for-spec.patch"
Content-Disposition: attachment;
 filename="xsa115-4.10-o/0004-tools-ocaml-xenstored-introduce-permissions-for-spec.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogaW50cm9kdWNlIHBlcm1pc3Npb25zIGZvciBzcGVjaWFsIHdh
dGNoZXMKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UeXBlOiB0ZXh0L3Bs
YWluOyBjaGFyc2V0PVVURi04CkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6
IDhiaXQKClRoZSBzcGVjaWFsIHdhdGNoZXMgIkBpbnRyb2R1Y2VEb21haW4i
IGFuZCAiQHJlbGVhc2VEb21haW4iIHNob3VsZCBiZQphbGxvd2VkIGZvciBw
cml2aWxlZ2VkIGNhbGxlcnMgb25seSwgYXMgdGhleSBhbGxvdyB0byBnYWlu
IGluZm9ybWF0aW9uCmFib3V0IHByZXNlbmNlIG9mIG90aGVyIGd1ZXN0cyBv
biB0aGUgaG9zdC4gU28gc2VuZCB3YXRjaCBldmVudHMgZm9yCnRob3NlIHdh
dGNoZXMgdmlhIHByaXZpbGVnZWQgY29ubmVjdGlvbnMgb25seS4KClN0YXJ0
IHRvIGFkZHJlc3MgdGhpcyBieSB0cmVhdGluZyB0aGUgc3BlY2lhbCB3YXRj
aGVzIGFzIHJlZ3VsYXIgbm9kZXMKaW4gdGhlIHRyZWUsIHdoaWNoIGdpdmVz
IHRoZW0gbm9ybWFsIHNlbWFudGljcyBmb3IgcGVybWlzc2lvbnMuICBBIGxh
dGVyCmNoYW5nZSB3aWxsIHJlc3RyaWN0IHRoZSBoYW5kbGluZywgc28gdGhh
dCB0aGV5IGNhbid0IGJlIGxpc3RlZCwgZXRjLgoKVGhpcyBpcyBwYXJ0IG9m
IFhTQS0xMTUuCgpTaWduZWQtb2ZmLWJ5OiBFZHdpbiBUw7Zyw7ZrIDxlZHZp
bi50b3Jva0BjaXRyaXguY29tPgpBY2tlZC1ieTogQ2hyaXN0aWFuIExpbmRp
ZyA8Y2hyaXN0aWFuLmxpbmRpZ0BjaXRyaXguY29tPgpSZXZpZXdlZC1ieTog
QW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNvbT4KCmRp
ZmYgLS1naXQgYS90b29scy9vY2FtbC94ZW5zdG9yZWQvcHJvY2Vzcy5tbCBi
L3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wcm9jZXNzLm1sCmluZGV4IGYzNzRh
YmU5OTguLmMzYzhlYTJmNGIgMTAwNjQ0Ci0tLSBhL3Rvb2xzL29jYW1sL3hl
bnN0b3JlZC9wcm9jZXNzLm1sCisrKyBiL3Rvb2xzL29jYW1sL3hlbnN0b3Jl
ZC9wcm9jZXNzLm1sCkBAIC00MTQsNyArNDE0LDcgQEAgbGV0IGRvX2ludHJv
ZHVjZSBjb24gdCBkb21haW5zIGNvbnMgZGF0YSA9CiAJCWVsc2UgdHJ5CiAJ
CQlsZXQgbmRvbSA9IERvbWFpbnMuY3JlYXRlIGRvbWFpbnMgZG9taWQgbWZu
IHBvcnQgaW4KIAkJCUNvbm5lY3Rpb25zLmFkZF9kb21haW4gY29ucyBuZG9t
OwotCQkJQ29ubmVjdGlvbnMuZmlyZV9zcGVjX3dhdGNoZXMgY29ucyAiQGlu
dHJvZHVjZURvbWFpbiI7CisJCQlDb25uZWN0aW9ucy5maXJlX3NwZWNfd2F0
Y2hlcyBjb25zIFN0b3JlLlBhdGguaW50cm9kdWNlX2RvbWFpbjsKIAkJCW5k
b20KIAkJd2l0aCBfIC0+IHJhaXNlIEludmFsaWRfQ21kX0FyZ3MKIAlpbgpA
QCAtNDMzLDcgKzQzMyw3IEBAIGxldCBkb19yZWxlYXNlIGNvbiB0IGRvbWFp
bnMgY29ucyBkYXRhID0KIAlEb21haW5zLmRlbCBkb21haW5zIGRvbWlkOwog
CUNvbm5lY3Rpb25zLmRlbF9kb21haW4gY29ucyBkb21pZDsKIAlpZiBmaXJl
X3NwZWNfd2F0Y2hlcyAKLQl0aGVuIENvbm5lY3Rpb25zLmZpcmVfc3BlY193
YXRjaGVzIGNvbnMgIkByZWxlYXNlRG9tYWluIgorCXRoZW4gQ29ubmVjdGlv
bnMuZmlyZV9zcGVjX3dhdGNoZXMgY29ucyBTdG9yZS5QYXRoLnJlbGVhc2Vf
ZG9tYWluCiAJZWxzZSByYWlzZSBJbnZhbGlkX0NtZF9BcmdzCiAKIGxldCBk
b19yZXN1bWUgY29uIHQgZG9tYWlucyBjb25zIGRhdGEgPQpkaWZmIC0tZ2l0
IGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3N0b3JlLm1sIGIvdG9vbHMvb2Nh
bWwveGVuc3RvcmVkL3N0b3JlLm1sCmluZGV4IDYzNzVhMWM4ODkuLjk4ZDM2
OGQ1MmYgMTAwNjQ0Ci0tLSBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9zdG9y
ZS5tbAorKysgYi90b29scy9vY2FtbC94ZW5zdG9yZWQvc3RvcmUubWwKQEAg
LTIxNCw2ICsyMTQsMTEgQEAgbGV0IHJlYyBsb29rdXAgbm9kZSBwYXRoIGZj
dCA9CiAKIGxldCBhcHBseSBybm9kZSBwYXRoIGZjdCA9CiAJbG9va3VwIHJu
b2RlIHBhdGggZmN0CisKK2xldCBpbnRyb2R1Y2VfZG9tYWluID0gIkBpbnRy
b2R1Y2VEb21haW4iCitsZXQgcmVsZWFzZV9kb21haW4gPSAiQHJlbGVhc2VE
b21haW4iCitsZXQgc3BlY2lhbHMgPSBMaXN0Lm1hcCBvZl9zdHJpbmcgWyBp
bnRyb2R1Y2VfZG9tYWluOyByZWxlYXNlX2RvbWFpbiBdCisKIGVuZAogCiAo
KiBUaGUgU3RvcmUudCB0eXBlICopCmRpZmYgLS1naXQgYS90b29scy9vY2Ft
bC94ZW5zdG9yZWQvdXRpbHMubWwgYi90b29scy9vY2FtbC94ZW5zdG9yZWQv
dXRpbHMubWwKaW5kZXggZTg5YzFhZmYwNC4uZjNkOTVlODg5NyAxMDA2NDQK
LS0tIGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3V0aWxzLm1sCisrKyBiL3Rv
b2xzL29jYW1sL3hlbnN0b3JlZC91dGlscy5tbApAQCAtODksMTkgKzg5LDE3
IEBAIGxldCByZWFkX2ZpbGVfc2luZ2xlX2ludGVnZXIgZmlsZW5hbWUgPQog
CVVuaXguY2xvc2UgZmQ7CiAJaW50X29mX3N0cmluZyAoU3RyaW5nLnN1YiBi
dWYgMCBzeikKIAotbGV0IHBhdGhfY29tcGxldGUgcGF0aCBjb25uZWN0aW9u
X3BhdGggPQotCWlmIFN0cmluZy5nZXQgcGF0aCAwIDw+ICcvJyB0aGVuCi0J
CWNvbm5lY3Rpb25fcGF0aCBeIHBhdGgKLQllbHNlCi0JCXBhdGgKLQorKCog
QHBhdGggbWF5IGJlIGd1ZXN0IGRhdGEgYW5kIG5lZWRzIGl0cyBsZW5ndGgg
dmFsaWRhdGluZy4gIEBjb25uZWN0aW9uX3BhdGgKKyAqIGlzIGdlbmVyYXRl
ZCBsb2NhbGx5IGluIHhlbnN0b3JlZCBhbmQgYWx3YXlzIG9mIHRoZSBmb3Jt
ICIvbG9jYWwvZG9tYWluLyROLyIgKikKIGxldCBwYXRoX3ZhbGlkYXRlIHBh
dGggY29ubmVjdGlvbl9wYXRoID0KLQlpZiBTdHJpbmcubGVuZ3RoIHBhdGgg
PSAwIHx8IFN0cmluZy5sZW5ndGggcGF0aCA+IDEwMjQgdGhlbgotCQlyYWlz
ZSBEZWZpbmUuSW52YWxpZF9wYXRoCi0JZWxzZQotCQlsZXQgY3BhdGggPSBw
YXRoX2NvbXBsZXRlIHBhdGggY29ubmVjdGlvbl9wYXRoIGluCi0JCWlmIFN0
cmluZy5nZXQgY3BhdGggMCA8PiAnLycgdGhlbgotCQkJcmFpc2UgRGVmaW5l
LkludmFsaWRfcGF0aAotCQllbHNlCi0JCQljcGF0aAorCWxldCBsZW4gPSBT
dHJpbmcubGVuZ3RoIHBhdGggaW4KKworCWlmIGxlbiA9IDAgfHwgbGVuID4g
MTAyNCB0aGVuIHJhaXNlIERlZmluZS5JbnZhbGlkX3BhdGg7CisKKwlsZXQg
YWJzX3BhdGggPQorCQltYXRjaCBTdHJpbmcuZ2V0IHBhdGggMCB3aXRoCisJ
CXwgJy8nIHwgJ0AnIC0+IHBhdGgKKwkJfCBfICAgLT4gY29ubmVjdGlvbl9w
YXRoIF4gcGF0aAorCWluCiAKKwlhYnNfcGF0aApkaWZmIC0tZ2l0IGEvdG9v
bHMvb2NhbWwveGVuc3RvcmVkL3hlbnN0b3JlZC5tbCBiL3Rvb2xzL29jYW1s
L3hlbnN0b3JlZC94ZW5zdG9yZWQubWwKaW5kZXggNDlmYzE4YmYxOS4uMzJj
M2IxYzBmMSAxMDA2NDQKLS0tIGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3hl
bnN0b3JlZC5tbAorKysgYi90b29scy9vY2FtbC94ZW5zdG9yZWQveGVuc3Rv
cmVkLm1sCkBAIC0yODcsNiArMjg3LDggQEAgbGV0IF8gPQogCWxldCBxdWl0
ID0gcmVmIGZhbHNlIGluCiAKIAlMb2dnaW5nLmluaXRfeGVuc3RvcmVkX2xv
ZygpOworCUxpc3QuaXRlciAoZnVuIHBhdGggLT4KKwkJU3RvcmUud3JpdGUg
c3RvcmUgUGVybXMuQ29ubmVjdGlvbi5mdWxsX3JpZ2h0cyBwYXRoICIiKSBT
dG9yZS5QYXRoLnNwZWNpYWxzOwogCiAJbGV0IGZpbGVuYW1lID0gUGF0aHMu
eGVuX3J1bl9zdG9yZWQgXiAiL2RiIiBpbgogCWlmIGNmLnJlc3RhcnQgJiYg
U3lzLmZpbGVfZXhpc3RzIGZpbGVuYW1lIHRoZW4gKApAQCAtMzM5LDcgKzM0
MSw3IEBAIGxldCBfID0KIAkJCQkJbGV0IChub3RpZnksIGRlYWRkb20pID0g
RG9tYWlucy5jbGVhbnVwIGRvbWFpbnMgaW4KIAkJCQkJTGlzdC5pdGVyIChD
b25uZWN0aW9ucy5kZWxfZG9tYWluIGNvbnMpIGRlYWRkb207CiAJCQkJCWlm
IGRlYWRkb20gPD4gW10gfHwgbm90aWZ5IHRoZW4KLQkJCQkJCUNvbm5lY3Rp
b25zLmZpcmVfc3BlY193YXRjaGVzIGNvbnMgIkByZWxlYXNlRG9tYWluIgor
CQkJCQkJQ29ubmVjdGlvbnMuZmlyZV9zcGVjX3dhdGNoZXMgY29ucyBTdG9y
ZS5QYXRoLnJlbGVhc2VfZG9tYWluCiAJCQkJKQogCQkJCWVsc2UKIAkJCQkJ
bGV0IGMgPSBDb25uZWN0aW9ucy5maW5kX2RvbWFpbl9ieV9wb3J0IGNvbnMg
cG9ydCBpbgo=

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.10-o/0005-tools-ocaml-xenstored-avoid-watch-events-for-nodes-w.patch"
Content-Disposition: attachment;
 filename="xsa115-4.10-o/0005-tools-ocaml-xenstored-avoid-watch-events-for-nodes-w.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogYXZvaWQgd2F0Y2ggZXZlbnRzIGZvciBub2RlcyB3aXRob3V0
IGFjY2VzcwpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVR5cGU6IHRleHQv
cGxhaW47IGNoYXJzZXQ9VVRGLTgKQ29udGVudC1UcmFuc2Zlci1FbmNvZGlu
ZzogOGJpdAoKVG9kYXkgd2F0Y2ggZXZlbnRzIGFyZSBzZW50IHJlZ2FyZGxl
c3Mgb2YgdGhlIGFjY2VzcyByaWdodHMgb2YgdGhlCm5vZGUgdGhlIGV2ZW50
IGlzIHNlbnQgZm9yLiBUaGlzIGVuYWJsZXMgYW55IGd1ZXN0IHRvIGUuZy4g
c2V0dXAgYQp3YXRjaCBmb3IgIi8iIGluIG9yZGVyIHRvIGhhdmUgYSBkZXRh
aWxlZCByZWNvcmQgb2YgYWxsIFhlbnN0b3JlCm1vZGlmaWNhdGlvbnMuCgpN
b2RpZnkgdGhhdCBieSBzZW5kaW5nIG9ubHkgd2F0Y2ggZXZlbnRzIGZvciBu
b2RlcyB0aGF0IHRoZSB3YXRjaGVyCmhhcyBhIGNoYW5jZSB0byBzZWUgb3Ro
ZXJ3aXNlIChlaXRoZXIgdmlhIGRpcmVjdCByZWFkcyBvciBieSBxdWVyeWlu
Zwp0aGUgY2hpbGRyZW4gb2YgYSBub2RlKS4gVGhpcyBpbmNsdWRlcyBjYXNl
cyB3aGVyZSB0aGUgdmlzaWJpbGl0eSBvZgphIG5vZGUgZm9yIGEgd2F0Y2hl
ciBpcyBjaGFuZ2luZyAocGVybWlzc2lvbnMgYmVpbmcgcmVtb3ZlZCkuCgpQ
ZXJtaXNzaW9ucyBmb3Igbm9kZXMgYXJlIGxvb2tlZCB1cCBlaXRoZXIgaW4g
dGhlIG9sZCAocHJlCnRyYW5zYWN0aW9uL2NvbW1hbmQpIG9yIGN1cnJlbnQg
dHJlZXMgKHBvc3QgdHJhbnNhY3Rpb24pLiAgSWYKcGVybWlzc2lvbnMgYXJl
IGNoYW5nZWQgbXVsdGlwbGUgdGltZXMgaW4gYSB0cmFuc2FjdGlvbiBvbmx5
IHRoZSBmaW5hbAp2ZXJzaW9uIGlzIGNoZWNrZWQsIGJlY2F1c2UgY29uc2lk
ZXJpbmcgYSB0cmFuc2FjdGlvbiBhdG9taWMgdGhlCmluZGl2aWR1YWwgcGVy
bWlzc2lvbiBjaGFuZ2VzIHdvdWxkIG5vdCBiZSBub3RpY2FibGUgdG8gYW4g
b3V0c2lkZQpvYnNlcnZlci4KClR3byB0cmVlcyBhcmUgb25seSBuZWVkZWQg
Zm9yIHNldF9wZXJtczogaGVyZSB3ZSBjYW4gZWl0aGVyIG5vdGljZSB0aGUK
bm9kZSBkaXNhcHBlYXJpbmcgKGlmIHdlIGxvb3NlIHBlcm1pc3Npb24pLCBh
cHBlYXJpbmcKKGlmIHdlIGdhaW4gcGVybWlzc2lvbiksIG9yIGNoYW5naW5n
IChpZiB3ZSBwcmVzZXJ2ZSBwZXJtaXNzaW9uKS4KClJNIG5lZWRzIHRvIG9u
bHkgbG9vayBhdCB0aGUgb2xkIHRyZWU6IGluIHRoZSBuZXcgdHJlZSB0aGUg
bm9kZSB3b3VsZCBiZQpnb25lLCBvciBjb3VsZCBoYXZlIGRpZmZlcmVudCBw
ZXJtaXNzaW9ucyBpZiBpdCB3YXMgcmVjcmVhdGVkICh0aGUKcmVjcmVhdGlv
biB3b3VsZCBnZXQgaXRzIG93biB3YXRjaCBmaXJlZCkuCgpJbnNpZGUgYSB0
cmVlIHdlIGxvb2t1cCB0aGUgd2F0Y2ggcGF0aCdzIHBhcmVudCwgYW5kIHRo
ZW4gdGhlIHdhdGNoIHBhdGgKY2hpbGQgaXRzZWxmLiAgVGhpcyBnZXRzIHVz
IDQgc2V0cyBvZiBwZXJtaXNzaW9ucyBpbiB3b3JzdCBjYXNlLCBhbmQgaWYK
ZWl0aGVyIG9mIHRoZXNlIGFsbG93cyBhIHdhdGNoLCB0aGVuIHdlIHBlcm1p
dCBpdCB0byBmaXJlLiAgVGhlCnBlcm1pc3Npb24gbG9va3VwcyBhcmUgZG9u
ZSB3aXRob3V0IGxvZ2dpbmcgdGhlIGZhaWx1cmVzLCBvdGhlcndpc2Ugd2Un
ZApnZXQgY29uZnVzaW5nIGVycm9ycyBhYm91dCBwZXJtaXNzaW9uIGRlbmll
ZCBmb3Igc29tZSBwYXRocywgYnV0IGEgd2F0Y2gKc3RpbGwgZmlyaW5nLiBU
aGUgYWN0dWFsIHJlc3VsdCBpcyBsb2dnZWQgaW4geGVuc3RvcmVkLWFjY2Vz
cyBsb2c6CgogICd3IGV2ZW50IC4uLicgYXMgdXN1YWwgaWYgd2F0Y2ggd2Fz
IGZpcmVkCiAgJ3cgbm90ZmlyZWQuLi4nIGlmIHRoZSB3YXRjaCB3YXMgbm90
IGZpcmVkLCB0b2dldGhlciB3aXRoIHBhdGggYW5kCiAgcGVybWlzc2lvbiBz
ZXQgdG8gaGVscCBpbiB0cm91Ymxlc2hvb3RpbmcKCkFkZGluZyBhIHdhdGNo
IGJ5cGFzc2VzIHBlcm1pc3Npb24gY2hlY2tzIGFuZCBhbHdheXMgZmlyZXMg
dGhlIHdhdGNoCm9uY2UgaW1tZWRpYXRlbHkuIFRoaXMgaXMgY29uc2lzdGVu
dCB3aXRoIHRoZSBzcGVjaWZpY2F0aW9uLCBhbmQgbm8KaW5mb3JtYXRpb24g
aXMgZ2FpbmVkICh0aGUgd2F0Y2ggaXMgZmlyZWQgYm90aCBpZiB0aGUgcGF0
aCBleGlzdHMgb3IKZG9lc24ndCwgYW5kIGJvdGggaWYgeW91IGhhdmUgb3Ig
ZG9uJ3QgaGF2ZSBhY2Nlc3MsIGkuZS4gaXQgcmVmbGVjdHMgdGhlCnBhdGgg
YSBkb21haW4gZ2F2ZSBpdCBiYWNrIHRvIHRoYXQgZG9tYWluKS4KClRoZXJl
IGFyZSBzb21lIHNlbWFudGljIGNoYW5nZXMgaGVyZToKCiAgKiBXcml0ZSty
bSBpbiBhIHNpbmdsZSB0cmFuc2FjdGlvbiBvZiB0aGUgc2FtZSBwYXRoIGlz
IHVub2JzZXJ2YWJsZQogICAgbm93IHZpYSB3YXRjaGVzOiBib3RoIGJlZm9y
ZSBhbmQgYWZ0ZXIgYSB0cmFuc2FjdGlvbiB0aGUgcGF0aAogICAgZG9lc24n
dCBleGlzdCwgdGh1cyBib3RoIHRyZWUgbG9va3VwcyBjb21lIHVwIHdpdGgg
dGhlIGVtcHR5CiAgICBwZXJtaXNzaW9uIHNldCwgYW5kIG5vb25lLCBub3Qg
ZXZlbiBEb20wIGNhbiBzZWUgdGhpcy4gVGhpcyBpcwogICAgY29uc2lzdGVu
dCB3aXRoIHRyYW5zYWN0aW9uIGF0b21pY2l0eSB0aG91Z2guCiAgKiBTaW1p
bGFyIHRvIGFib3ZlIGlmIHdlIHRlbXBvcmFyaWx5IGdyYW50IGFuZCB0aGVu
IHJldm9rZSBwZXJtaXNzaW9uCiAgICBvbiBhIHBhdGggYW55IHdhdGNoZXMg
ZmlyZWQgaW5iZXR3ZWVuIGFyZSBpZ25vcmVkIGFzIHdlbGwKICAqIFRoZXJl
IGlzIGEgbmV3IGxvZyBldmVudCAodyBub3RmaXJlZCkgd2hpY2ggc2hvd3Mg
dGhlIHBlcm1pc3Npb24gc2V0CiAgICBvZiB0aGUgcGF0aCwgYW5kIHRoZSBw
YXRoLgogICogV2F0Y2hlcyBvbiBwYXRocyB0aGF0IGEgZG9tYWluIGRvZXNu
J3QgaGF2ZSBhY2Nlc3MgdG8gYXJlIG5vdyBub3QKICAgIHNlZW4sIHdoaWNo
IGlzIHRoZSBwdXJwb3NlIG9mIHRoZSBzZWN1cml0eSBmaXguCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTExNS4KClNpZ25lZC1vZmYtYnk6IEVkd2luIFTDtnLD
tmsgPGVkdmluLnRvcm9rQGNpdHJpeC5jb20+CkFja2VkLWJ5OiBDaHJpc3Rp
YW4gTGluZGlnIDxjaHJpc3RpYW4ubGluZGlnQGNpdHJpeC5jb20+ClJldmll
d2VkLWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXgu
Y29tPgoKZGlmZiAtLWdpdCBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9jb25u
ZWN0aW9uLm1sIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL2Nvbm5lY3Rpb24u
bWwKaW5kZXggZDc0MzJjNjU5Ny4uMTM4OWQ5NzFjMiAxMDA2NDQKLS0tIGEv
dG9vbHMvb2NhbWwveGVuc3RvcmVkL2Nvbm5lY3Rpb24ubWwKKysrIGIvdG9v
bHMvb2NhbWwveGVuc3RvcmVkL2Nvbm5lY3Rpb24ubWwKQEAgLTE5NiwxMSAr
MTk2LDM2IEBAIGxldCBsaXN0X3dhdGNoZXMgY29uID0KIAkJY29uLndhdGNo
ZXMgW10gaW4KIAlMaXN0LmNvbmNhdCBsbAogCi1sZXQgZmlyZV9zaW5nbGVf
d2F0Y2ggd2F0Y2ggPQorbGV0IGRiZyBmbXQgPSBMb2dnaW5nLmRlYnVnICJj
b25uZWN0aW9uIiBmbXQKK2xldCBpbmZvIGZtdCA9IExvZ2dpbmcuaW5mbyAi
Y29ubmVjdGlvbiIgZm10CisKK2xldCBsb29rdXBfd2F0Y2hfcGVybSBwYXRo
ID0gZnVuY3Rpb24KK3wgTm9uZSAtPiBbXQorfCBTb21lIHJvb3QgLT4KKwl0
cnkgU3RvcmUuUGF0aC5hcHBseSByb290IHBhdGggQEAgZnVuIHBhcmVudCBu
YW1lIC0+CisJCVN0b3JlLk5vZGUuZ2V0X3Blcm1zIHBhcmVudCA6OgorCQl0
cnkgW1N0b3JlLk5vZGUuZ2V0X3Blcm1zIChTdG9yZS5Ob2RlLmZpbmQgcGFy
ZW50IG5hbWUpXQorCQl3aXRoIE5vdF9mb3VuZCAtPiBbXQorCXdpdGggRGVm
aW5lLkludmFsaWRfcGF0aCB8IE5vdF9mb3VuZCAtPiBbXQorCitsZXQgbG9v
a3VwX3dhdGNoX3Blcm1zIG9sZHJvb3Qgcm9vdCBwYXRoID0KKwlsb29rdXBf
d2F0Y2hfcGVybSBwYXRoIG9sZHJvb3QgQCBsb29rdXBfd2F0Y2hfcGVybSBw
YXRoIChTb21lIHJvb3QpCisKK2xldCBmaXJlX3NpbmdsZV93YXRjaF91bmNo
ZWNrZWQgd2F0Y2ggPQogCWxldCBkYXRhID0gVXRpbHMuam9pbl9ieV9udWxs
IFt3YXRjaC5wYXRoOyB3YXRjaC50b2tlbjsgIiJdIGluCiAJc2VuZF9yZXBs
eSB3YXRjaC5jb24gVHJhbnNhY3Rpb24ubm9uZSAwIFhlbmJ1cy5YYi5PcC5X
YXRjaGV2ZW50IGRhdGEKIAotbGV0IGZpcmVfd2F0Y2ggd2F0Y2ggcGF0aCA9
CitsZXQgZmlyZV9zaW5nbGVfd2F0Y2ggKG9sZHJvb3QsIHJvb3QpIHdhdGNo
ID0KKwlsZXQgYWJzcGF0aCA9IGdldF93YXRjaF9wYXRoIHdhdGNoLmNvbiB3
YXRjaC5wYXRoIHw+IFN0b3JlLlBhdGgub2Zfc3RyaW5nIGluCisJbGV0IHBl
cm1zID0gbG9va3VwX3dhdGNoX3Blcm1zIG9sZHJvb3Qgcm9vdCBhYnNwYXRo
IGluCisJaWYgTGlzdC5leGlzdHMgKFBlcm1zLmhhcyB3YXRjaC5jb24ucGVy
bSBSRUFEKSBwZXJtcyB0aGVuCisJCWZpcmVfc2luZ2xlX3dhdGNoX3VuY2hl
Y2tlZCB3YXRjaAorCWVsc2UKKwkJbGV0IHBlcm1zID0gcGVybXMgfD4gTGlz
dC5tYXAgKFBlcm1zLk5vZGUudG9fc3RyaW5nIH5zZXA6IiAiKSB8PiBTdHJp
bmcuY29uY2F0ICIsICIgaW4KKwkJbGV0IGNvbiA9IGdldF9kb21zdHIgd2F0
Y2guY29uIGluCisJCUxvZ2dpbmcud2F0Y2hfbm90X2ZpcmVkIH5jb24gcGVy
bXMgKFN0b3JlLlBhdGgudG9fc3RyaW5nIGFic3BhdGgpCisKK2xldCBmaXJl
X3dhdGNoIHJvb3RzIHdhdGNoIHBhdGggPQogCWxldCBuZXdfcGF0aCA9CiAJ
CWlmIHdhdGNoLmlzX3JlbGF0aXZlICYmIHBhdGguWzBdID0gJy8nCiAJCXRo
ZW4gYmVnaW4KQEAgLTIxMCw3ICsyMzUsNyBAQCBsZXQgZmlyZV93YXRjaCB3
YXRjaCBwYXRoID0KIAkJZW5kIGVsc2UKIAkJCXBhdGgKIAlpbgotCWZpcmVf
c2luZ2xlX3dhdGNoIHsgd2F0Y2ggd2l0aCBwYXRoID0gbmV3X3BhdGggfQor
CWZpcmVfc2luZ2xlX3dhdGNoIHJvb3RzIHsgd2F0Y2ggd2l0aCBwYXRoID0g
bmV3X3BhdGggfQogCiAoKiBTZWFyY2ggZm9yIGEgdmFsaWQgdW51c2VkIHRy
YW5zYWN0aW9uIGlkLiAqKQogbGV0IHJlYyB2YWxpZF90cmFuc2FjdGlvbl9p
ZCBjb24gcHJvcG9zZWRfaWQgPQpkaWZmIC0tZ2l0IGEvdG9vbHMvb2NhbWwv
eGVuc3RvcmVkL2Nvbm5lY3Rpb25zLm1sIGIvdG9vbHMvb2NhbWwveGVuc3Rv
cmVkL2Nvbm5lY3Rpb25zLm1sCmluZGV4IGFlNzY5MjgxOWQuLjAyMGI4NzVk
Y2QgMTAwNjQ0Ci0tLSBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9jb25uZWN0
aW9ucy5tbAorKysgYi90b29scy9vY2FtbC94ZW5zdG9yZWQvY29ubmVjdGlv
bnMubWwKQEAgLTEzNSwyNSArMTM1LDI2IEBAIGxldCBkZWxfd2F0Y2ggY29u
cyBjb24gcGF0aCB0b2tlbiA9CiAgCXdhdGNoCiAKICgqIHBhdGggaXMgYWJz
b2x1dGUgKikKLWxldCBmaXJlX3dhdGNoZXMgY29ucyBwYXRoIHJlY3Vyc2Ug
PQorbGV0IGZpcmVfd2F0Y2hlcyA/b2xkcm9vdCByb290IGNvbnMgcGF0aCBy
ZWN1cnNlID0KIAlsZXQga2V5ID0ga2V5X29mX3BhdGggcGF0aCBpbgogCWxl
dCBwYXRoID0gU3RvcmUuUGF0aC50b19zdHJpbmcgcGF0aCBpbgorCWxldCBy
b290cyA9IG9sZHJvb3QsIHJvb3QgaW4KIAlsZXQgZmlyZV93YXRjaCBfID0g
ZnVuY3Rpb24KIAkJfCBOb25lICAgICAgICAgLT4gKCkKLQkJfCBTb21lIHdh
dGNoZXMgLT4gTGlzdC5pdGVyIChmdW4gdyAtPiBDb25uZWN0aW9uLmZpcmVf
d2F0Y2ggdyBwYXRoKSB3YXRjaGVzCisJCXwgU29tZSB3YXRjaGVzIC0+IExp
c3QuaXRlciAoZnVuIHcgLT4gQ29ubmVjdGlvbi5maXJlX3dhdGNoIHJvb3Rz
IHcgcGF0aCkgd2F0Y2hlcwogCWluCiAJbGV0IGZpcmVfcmVjIHggPSBmdW5j
dGlvbgogCQl8IE5vbmUgICAgICAgICAtPiAoKQogCQl8IFNvbWUgd2F0Y2hl
cyAtPiAKLQkJCSAgTGlzdC5pdGVyIChmdW4gdyAtPiBDb25uZWN0aW9uLmZp
cmVfc2luZ2xlX3dhdGNoIHcpIHdhdGNoZXMKKwkJCUxpc3QuaXRlciAoQ29u
bmVjdGlvbi5maXJlX3NpbmdsZV93YXRjaCByb290cykgd2F0Y2hlcwogCWlu
CiAJVHJpZS5pdGVyX3BhdGggZmlyZV93YXRjaCBjb25zLndhdGNoZXMga2V5
OwogCWlmIHJlY3Vyc2UgdGhlbgogCQlUcmllLml0ZXIgZmlyZV9yZWMgKFRy
aWUuc3ViIGNvbnMud2F0Y2hlcyBrZXkpCiAKLWxldCBmaXJlX3NwZWNfd2F0
Y2hlcyBjb25zIHNwZWNwYXRoID0KK2xldCBmaXJlX3NwZWNfd2F0Y2hlcyBy
b290IGNvbnMgc3BlY3BhdGggPQogCWl0ZXIgY29ucyAoZnVuIGNvbiAtPgot
CQlMaXN0Lml0ZXIgKGZ1biB3IC0+IENvbm5lY3Rpb24uZmlyZV9zaW5nbGVf
d2F0Y2ggdykgKENvbm5lY3Rpb24uZ2V0X3dhdGNoZXMgY29uIHNwZWNwYXRo
KSkKKwkJTGlzdC5pdGVyIChDb25uZWN0aW9uLmZpcmVfc2luZ2xlX3dhdGNo
IChOb25lLCByb290KSkgKENvbm5lY3Rpb24uZ2V0X3dhdGNoZXMgY29uIHNw
ZWNwYXRoKSkKIAogbGV0IHNldF90YXJnZXQgY29ucyBkb21haW4gdGFyZ2V0
X2RvbWFpbiA9CiAJbGV0IGNvbiA9IGZpbmRfZG9tYWluIGNvbnMgZG9tYWlu
IGluCmRpZmYgLS1naXQgYS90b29scy9vY2FtbC94ZW5zdG9yZWQvbG9nZ2lu
Zy5tbCBiL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9sb2dnaW5nLm1sCmluZGV4
IDBjMGQwM2QwYzQuLmZhYjc2ODI5Y2YgMTAwNjQ0Ci0tLSBhL3Rvb2xzL29j
YW1sL3hlbnN0b3JlZC9sb2dnaW5nLm1sCisrKyBiL3Rvb2xzL29jYW1sL3hl
bnN0b3JlZC9sb2dnaW5nLm1sCkBAIC0xNjEsNiArMTYxLDggQEAgbGV0IHhl
bnN0b3JlZF9sb2dfbmJfbGluZXMgPSByZWYgMTMyMTUKIGxldCB4ZW5zdG9y
ZWRfbG9nX25iX2NoYXJzID0gcmVmICgtMSkKIGxldCB4ZW5zdG9yZWRfbG9n
Z2VyID0gcmVmIChOb25lOiBsb2dnZXIgb3B0aW9uKQogCitsZXQgZGVidWdf
ZW5hYmxlZCAoKSA9ICF4ZW5zdG9yZWRfbG9nX2xldmVsID0gRGVidWcKKwog
bGV0IHNldF94ZW5zdG9yZWRfbG9nX2Rlc3RpbmF0aW9uIHMgPQogCXhlbnN0
b3JlZF9sb2dfZGVzdGluYXRpb24gOj0gbG9nX2Rlc3RpbmF0aW9uX29mX3N0
cmluZyBzCiAKQEAgLTIwNCw2ICsyMDYsNyBAQCB0eXBlIGFjY2Vzc190eXBl
ID0KIAl8IENvbW1pdAogCXwgTmV3Y29ubgogCXwgRW5kY29ubgorCXwgV2F0
Y2hfbm90X2ZpcmVkCiAJfCBYYk9wIG9mIFhlbmJ1cy5YYi5PcC5vcGVyYXRp
b24KIAogbGV0IHN0cmluZ19vZl90aWQgfmNvbiB0aWQgPQpAQCAtMjE3LDYg
KzIyMCw3IEBAIGxldCBzdHJpbmdfb2ZfYWNjZXNzX3R5cGUgPSBmdW5jdGlv
bgogCXwgQ29tbWl0ICAgICAgICAgICAgICAgICAgLT4gImNvbW1pdCAgICIK
IAl8IE5ld2Nvbm4gICAgICAgICAgICAgICAgIC0+ICJuZXdjb25uICAiCiAJ
fCBFbmRjb25uICAgICAgICAgICAgICAgICAtPiAiZW5kY29ubiAgIgorCXwg
V2F0Y2hfbm90X2ZpcmVkICAgICAgICAgLT4gIncgbm90ZmlyZWQiCiAKIAl8
IFhiT3Agb3AgLT4gbWF0Y2ggb3Agd2l0aAogCXwgWGVuYnVzLlhiLk9wLkRl
YnVnICAgICAgICAgICAgIC0+ICJkZWJ1ZyAgICAiCkBAIC0zMzMsMyArMzM3
LDcgQEAgbGV0IHhiX2Fuc3dlciB+dGlkIH5jb24gfnR5IGRhdGEgPQogCQl8
IF8gLT4gZmFsc2UsIERlYnVnCiAJaW4KIAlpZiBwcmludCB0aGVuIGFjY2Vz
c19sb2dnaW5nIH50aWQgfmNvbiB+ZGF0YSAoWGJPcCB0eSkgfmxldmVsCisK
K2xldCB3YXRjaF9ub3RfZmlyZWQgfmNvbiBwZXJtcyBwYXRoID0KKwlsZXQg
ZGF0YSA9IFByaW50Zi5zcHJpbnRmICJFUEVSTSBwZXJtcz1bJXNdIHBhdGg9
JXMiIHBlcm1zIHBhdGggaW4KKwlhY2Nlc3NfbG9nZ2luZyB+dGlkOjAgfmNv
biB+ZGF0YSBXYXRjaF9ub3RfZmlyZWQgfmxldmVsOkluZm8KZGlmZiAtLWdp
dCBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wZXJtcy5tbCBiL3Rvb2xzL29j
YW1sL3hlbnN0b3JlZC9wZXJtcy5tbAppbmRleCAzZWExOTNlYTE0Li4yM2I4
MGFiYTNkIDEwMDY0NAotLS0gYS90b29scy9vY2FtbC94ZW5zdG9yZWQvcGVy
bXMubWwKKysrIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3Blcm1zLm1sCkBA
IC03OSw5ICs3OSw5IEBAIGxldCBvZl9zdHJpbmcgcyA9CiBsZXQgc3RyaW5n
X29mX3Blcm0gcGVybSA9CiAJUHJpbnRmLnNwcmludGYgIiVjJXUiIChjaGFy
X29mX3Blcm10eSAoc25kIHBlcm0pKSAoZnN0IHBlcm0pCiAKLWxldCB0b19z
dHJpbmcgcGVybXZlYyA9CitsZXQgdG9fc3RyaW5nID8oc2VwPSJcMDAwIikg
cGVybXZlYyA9CiAJbGV0IGwgPSAoKHBlcm12ZWMub3duZXIsIHBlcm12ZWMu
b3RoZXIpIDo6IHBlcm12ZWMuYWNsKSBpbgotCVN0cmluZy5jb25jYXQgIlww
MDAiIChMaXN0Lm1hcCBzdHJpbmdfb2ZfcGVybSBsKQorCVN0cmluZy5jb25j
YXQgc2VwIChMaXN0Lm1hcCBzdHJpbmdfb2ZfcGVybSBsKQogCiBlbmQKIApA
QCAtMTMyLDggKzEzMiw4IEBAIGxldCBjaGVja19vd25lciAoY29ubmVjdGlv
bjpDb25uZWN0aW9uLnQpIChub2RlOk5vZGUudCkgPQogCXRoZW4gQ29ubmVj
dGlvbi5pc19vd25lciBjb25uZWN0aW9uIChOb2RlLmdldF9vd25lciBub2Rl
KQogCWVsc2UgdHJ1ZQogCi0oKiBjaGVjayBpZiB0aGUgY3VycmVudCBjb25u
ZWN0aW9uIGhhcyB0aGUgcmVxdWVzdGVkIHBlcm0gb24gdGhlIGN1cnJlbnQg
bm9kZSAqKQotbGV0IGNoZWNrIChjb25uZWN0aW9uOkNvbm5lY3Rpb24udCkg
cmVxdWVzdCAobm9kZTpOb2RlLnQpID0KKygqIGNoZWNrIGlmIHRoZSBjdXJy
ZW50IGNvbm5lY3Rpb24gbGFja3MgdGhlIHJlcXVlc3RlZCBwZXJtIG9uIHRo
ZSBjdXJyZW50IG5vZGUgKikKK2xldCBsYWNrcyAoY29ubmVjdGlvbjpDb25u
ZWN0aW9uLnQpIHJlcXVlc3QgKG5vZGU6Tm9kZS50KSA9CiAJbGV0IGNoZWNr
X2FjbCBkb21haW5pZCA9CiAJCWxldCBwZXJtID0KIAkJCWlmIExpc3QubWVt
X2Fzc29jIGRvbWFpbmlkIChOb2RlLmdldF9hY2wgbm9kZSkKQEAgLTE1NCwx
MSArMTU0LDE5IEBAIGxldCBjaGVjayAoY29ubmVjdGlvbjpDb25uZWN0aW9u
LnQpIHJlcXVlc3QgKG5vZGU6Tm9kZS50KSA9CiAJCQlpbmZvICJQZXJtaXNz
aW9uIGRlbmllZDogRG9tYWluICVkIGhhcyB3cml0ZSBvbmx5IGFjY2VzcyIg
ZG9tYWluaWQ7CiAJCQlmYWxzZQogCWluCi0JaWYgIWFjdGl2YXRlCisJIWFj
dGl2YXRlCiAJJiYgbm90IChDb25uZWN0aW9uLmlzX2RvbTAgY29ubmVjdGlv
bikKIAkmJiBub3QgKGNoZWNrX293bmVyIGNvbm5lY3Rpb24gbm9kZSkKIAkm
JiBub3QgKExpc3QuZXhpc3RzIGNoZWNrX2FjbCAoQ29ubmVjdGlvbi5nZXRf
b3duZXJzIGNvbm5lY3Rpb24pKQorCisoKiBjaGVjayBpZiB0aGUgY3VycmVu
dCBjb25uZWN0aW9uIGhhcyB0aGUgcmVxdWVzdGVkIHBlcm0gb24gdGhlIGN1
cnJlbnQgbm9kZS4KKyogIFJhaXNlcyBhbiBleGNlcHRpb24gaWYgaXQgZG9l
c24ndC4gKikKK2xldCBjaGVjayBjb25uZWN0aW9uIHJlcXVlc3Qgbm9kZSA9
CisJaWYgbGFja3MgY29ubmVjdGlvbiByZXF1ZXN0IG5vZGUKIAl0aGVuIHJh
aXNlIERlZmluZS5QZXJtaXNzaW9uX2RlbmllZAogCisoKiBjaGVjayBpZiB0
aGUgY3VycmVudCBjb25uZWN0aW9uIGhhcyB0aGUgcmVxdWVzdGVkIHBlcm0g
b24gdGhlIGN1cnJlbnQgbm9kZSAqKQorbGV0IGhhcyBjb25uZWN0aW9uIHJl
cXVlc3Qgbm9kZSA9IG5vdCAobGFja3MgY29ubmVjdGlvbiByZXF1ZXN0IG5v
ZGUpCisKIGxldCBlcXVpdiBwZXJtMSBwZXJtMiA9CiAJKE5vZGUudG9fc3Ry
aW5nIHBlcm0xKSA9IChOb2RlLnRvX3N0cmluZyBwZXJtMikKZGlmZiAtLWdp
dCBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wcm9jZXNzLm1sIGIvdG9vbHMv
b2NhbWwveGVuc3RvcmVkL3Byb2Nlc3MubWwKaW5kZXggYzNjOGVhMmY0Yi4u
M2NkMDA5N2RiOSAxMDA2NDQKLS0tIGEvdG9vbHMvb2NhbWwveGVuc3RvcmVk
L3Byb2Nlc3MubWwKKysrIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3Byb2Nl
c3MubWwKQEAgLTU2LDE1ICs1NiwxNyBAQCBsZXQgc3BsaXRfb25lX3BhdGgg
ZGF0YSBjb24gPQogCXwgcGF0aCA6OiAiIiA6OiBbXSAtPiBTdG9yZS5QYXRo
LmNyZWF0ZSBwYXRoIChDb25uZWN0aW9uLmdldF9wYXRoIGNvbikKIAl8IF8g
ICAgICAgICAgICAgICAgLT4gcmFpc2UgSW52YWxpZF9DbWRfQXJncwogCi1s
ZXQgcHJvY2Vzc193YXRjaCBvcHMgY29ucyA9CitsZXQgcHJvY2Vzc193YXRj
aCB0IGNvbnMgPQorCWxldCBvbGRyb290ID0gdC5UcmFuc2FjdGlvbi5vbGRy
b290IGluCisJbGV0IG5ld3Jvb3QgPSBTdG9yZS5nZXRfcm9vdCB0LnN0b3Jl
IGluCisJbGV0IG9wcyA9IFRyYW5zYWN0aW9uLmdldF9wYXRocyB0IHw+IExp
c3QucmV2IGluCiAJbGV0IGRvX29wX3dhdGNoIG9wIGNvbnMgPQotCQlsZXQg
cmVjdXJzZSA9IG1hdGNoIChmc3Qgb3ApIHdpdGgKLQkJfCBYZW5idXMuWGIu
T3AuV3JpdGUgICAgLT4gZmFsc2UKLQkJfCBYZW5idXMuWGIuT3AuTWtkaXIg
ICAgLT4gZmFsc2UKLQkJfCBYZW5idXMuWGIuT3AuUm0gICAgICAgLT4gdHJ1
ZQotCQl8IFhlbmJ1cy5YYi5PcC5TZXRwZXJtcyAtPiBmYWxzZQorCQlsZXQg
cmVjdXJzZSwgb2xkcm9vdCwgcm9vdCA9IG1hdGNoIChmc3Qgb3ApIHdpdGgK
KwkJfCBYZW5idXMuWGIuT3AuV3JpdGV8WGVuYnVzLlhiLk9wLk1rZGlyIC0+
IGZhbHNlLCBOb25lLCBuZXdyb290CisJCXwgWGVuYnVzLlhiLk9wLlJtICAg
ICAgIC0+IHRydWUsIE5vbmUsIG9sZHJvb3QKKwkJfCBYZW5idXMuWGIuT3Au
U2V0cGVybXMgLT4gZmFsc2UsIFNvbWUgb2xkcm9vdCwgbmV3cm9vdAogCQl8
IF8gICAgICAgICAgICAgIC0+IHJhaXNlIChGYWlsdXJlICJodWggPyIpIGlu
Ci0JCUNvbm5lY3Rpb25zLmZpcmVfd2F0Y2hlcyBjb25zIChzbmQgb3ApIHJl
Y3Vyc2UgaW4KKwkJQ29ubmVjdGlvbnMuZmlyZV93YXRjaGVzID9vbGRyb290
IHJvb3QgY29ucyAoc25kIG9wKSByZWN1cnNlIGluCiAJTGlzdC5pdGVyIChm
dW4gb3AgLT4gZG9fb3Bfd2F0Y2ggb3AgY29ucykgb3BzCiAKIGxldCBjcmVh
dGVfaW1wbGljaXRfcGF0aCB0IHBlcm0gcGF0aCA9CkBAIC0yMDUsNyArMjA3
LDcgQEAgbGV0IHJlcGx5X2FjayBmY3QgY29uIHQgZG9tcyBjb25zIGRhdGEg
PQogCWZjdCBjb24gdCBkb21zIGNvbnMgZGF0YTsKIAlQYWNrZXQuQWNrIChm
dW4gKCkgLT4KIAkJaWYgVHJhbnNhY3Rpb24uZ2V0X2lkIHQgPSBUcmFuc2Fj
dGlvbi5ub25lIHRoZW4KLQkJCXByb2Nlc3Nfd2F0Y2ggKFRyYW5zYWN0aW9u
LmdldF9wYXRocyB0KSBjb25zCisJCQlwcm9jZXNzX3dhdGNoIHQgY29ucwog
CSkKIAogbGV0IHJlcGx5X2RhdGEgZmN0IGNvbiB0IGRvbXMgY29ucyBkYXRh
ID0KQEAgLTM1MywxNCArMzU1LDE3IEBAIGxldCB0cmFuc2FjdGlvbl9yZXBs
YXkgYyB0IGRvbXMgY29ucyA9CiAJCQlDb25uZWN0aW9uLmVuZF90cmFuc2Fj
dGlvbiBjIHRpZCBOb25lCiAJCSkKIAotbGV0IGRvX3dhdGNoIGNvbiB0IGRv
bWFpbnMgY29ucyBkYXRhID0KK2xldCBkb193YXRjaCBjb24gdCBfZG9tYWlu
cyBjb25zIGRhdGEgPQogCWxldCAobm9kZSwgdG9rZW4pID0gCiAJCW1hdGNo
IChzcGxpdCBOb25lICdcMDAwJyBkYXRhKSB3aXRoCiAJCXwgW25vZGU7IHRv
a2VuOyAiIl0gICAtPiBub2RlLCB0b2tlbgogCQl8IF8gICAgICAgICAgICAg
ICAgICAgLT4gcmFpc2UgSW52YWxpZF9DbWRfQXJncwogCQlpbgogCWxldCB3
YXRjaCA9IENvbm5lY3Rpb25zLmFkZF93YXRjaCBjb25zIGNvbiBub2RlIHRv
a2VuIGluCi0JUGFja2V0LkFjayAoZnVuICgpIC0+IENvbm5lY3Rpb24uZmly
ZV9zaW5nbGVfd2F0Y2ggd2F0Y2gpCisJUGFja2V0LkFjayAoZnVuICgpIC0+
CisJCSgqIHhlbnN0b3JlLnR4dCBzYXlzIHRoaXMgd2F0Y2ggaXMgZmlyZWQg
aW1tZWRpYXRlbHksCisJCSAgIGltcGx5aW5nIGV2ZW4gaWYgcGF0aCBkb2Vz
bid0IGV4aXN0IG9yIGlzIHVucmVhZGFibGUgKikKKwkJQ29ubmVjdGlvbi5m
aXJlX3NpbmdsZV93YXRjaF91bmNoZWNrZWQgd2F0Y2gpCiAKIGxldCBkb191
bndhdGNoIGNvbiB0IGRvbWFpbnMgY29ucyBkYXRhID0KIAlsZXQgKG5vZGUs
IHRva2VuKSA9CkBAIC0zOTEsNyArMzk2LDcgQEAgbGV0IGRvX3RyYW5zYWN0
aW9uX2VuZCBjb24gdCBkb21haW5zIGNvbnMgZGF0YSA9CiAJaWYgbm90IHN1
Y2Nlc3MgdGhlbgogCQlyYWlzZSBUcmFuc2FjdGlvbl9hZ2FpbjsKIAlpZiBj
b21taXQgdGhlbiBiZWdpbgotCQlwcm9jZXNzX3dhdGNoIChMaXN0LnJldiAo
VHJhbnNhY3Rpb24uZ2V0X3BhdGhzIHQpKSBjb25zOworCQlwcm9jZXNzX3dh
dGNoIHQgY29uczsKIAkJbWF0Y2ggdC5UcmFuc2FjdGlvbi50eSB3aXRoCiAJ
CXwgVHJhbnNhY3Rpb24uTm8gLT4KIAkJCSgpICgqIG5vIG5lZWQgdG8gcmVj
b3JkIGFueXRoaW5nICopCkBAIC00MTQsNyArNDE5LDcgQEAgbGV0IGRvX2lu
dHJvZHVjZSBjb24gdCBkb21haW5zIGNvbnMgZGF0YSA9CiAJCWVsc2UgdHJ5
CiAJCQlsZXQgbmRvbSA9IERvbWFpbnMuY3JlYXRlIGRvbWFpbnMgZG9taWQg
bWZuIHBvcnQgaW4KIAkJCUNvbm5lY3Rpb25zLmFkZF9kb21haW4gY29ucyBu
ZG9tOwotCQkJQ29ubmVjdGlvbnMuZmlyZV9zcGVjX3dhdGNoZXMgY29ucyBT
dG9yZS5QYXRoLmludHJvZHVjZV9kb21haW47CisJCQlDb25uZWN0aW9ucy5m
aXJlX3NwZWNfd2F0Y2hlcyAoVHJhbnNhY3Rpb24uZ2V0X3Jvb3QgdCkgY29u
cyBTdG9yZS5QYXRoLmludHJvZHVjZV9kb21haW47CiAJCQluZG9tCiAJCXdp
dGggXyAtPiByYWlzZSBJbnZhbGlkX0NtZF9BcmdzCiAJaW4KQEAgLTQzMyw3
ICs0MzgsNyBAQCBsZXQgZG9fcmVsZWFzZSBjb24gdCBkb21haW5zIGNvbnMg
ZGF0YSA9CiAJRG9tYWlucy5kZWwgZG9tYWlucyBkb21pZDsKIAlDb25uZWN0
aW9ucy5kZWxfZG9tYWluIGNvbnMgZG9taWQ7CiAJaWYgZmlyZV9zcGVjX3dh
dGNoZXMgCi0JdGhlbiBDb25uZWN0aW9ucy5maXJlX3NwZWNfd2F0Y2hlcyBj
b25zIFN0b3JlLlBhdGgucmVsZWFzZV9kb21haW4KKwl0aGVuIENvbm5lY3Rp
b25zLmZpcmVfc3BlY193YXRjaGVzIChUcmFuc2FjdGlvbi5nZXRfcm9vdCB0
KSBjb25zIFN0b3JlLlBhdGgucmVsZWFzZV9kb21haW4KIAllbHNlIHJhaXNl
IEludmFsaWRfQ21kX0FyZ3MKIAogbGV0IGRvX3Jlc3VtZSBjb24gdCBkb21h
aW5zIGNvbnMgZGF0YSA9CkBAIC01MDEsNiArNTA2LDggQEAgbGV0IG1heWJl
X2lnbm9yZV90cmFuc2FjdGlvbiA9IGZ1bmN0aW9uCiAJCVRyYW5zYWN0aW9u
Lm5vbmUKIAl8IF8gLT4gZnVuIHggLT4geAogCisKK2xldCAoKSA9IFByaW50
ZXhjLnJlY29yZF9iYWNrdHJhY2UgdHJ1ZQogKCoqCiAgKiBOb3Rocm93IGd1
YXJhbnRlZS4KICAqKQpAQCAtNTQyLDcgKzU0OSw4IEBAIGxldCBwcm9jZXNz
X3BhY2tldCB+c3RvcmUgfmNvbnMgfmRvbXMgfmNvbiB+cmVxID0KIAkJKCog
UHV0IHRoZSByZXNwb25zZSBvbiB0aGUgd2lyZSAqKQogCQlzZW5kX3Jlc3Bv
bnNlIHR5IGNvbiB0IHJpZCByZXNwb25zZQogCXdpdGggZXhuIC0+Ci0JCWVy
cm9yICJwcm9jZXNzIHBhY2tldDogJXMiIChQcmludGV4Yy50b19zdHJpbmcg
ZXhuKTsKKwkJbGV0IGJ0ID0gUHJpbnRleGMuZ2V0X2JhY2t0cmFjZSAoKSBp
bgorCQllcnJvciAicHJvY2VzcyBwYWNrZXQ6ICVzLiAlcyIgKFByaW50ZXhj
LnRvX3N0cmluZyBleG4pIGJ0OwogCQlDb25uZWN0aW9uLnNlbmRfZXJyb3Ig
Y29uIHRpZCByaWQgIkVJTyIKIAogbGV0IGRvX2lucHV0IHN0b3JlIGNvbnMg
ZG9tcyBjb24gPQpkaWZmIC0tZ2l0IGEvdG9vbHMvb2NhbWwveGVuc3RvcmVk
L3RyYW5zYWN0aW9uLm1sIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3RyYW5z
YWN0aW9uLm1sCmluZGV4IDIzZTdjY2ZmMWIuLjllOWUyOGRiOWIgMTAwNjQ0
Ci0tLSBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC90cmFuc2FjdGlvbi5tbAor
KysgYi90b29scy9vY2FtbC94ZW5zdG9yZWQvdHJhbnNhY3Rpb24ubWwKQEAg
LTgyLDYgKzgyLDcgQEAgdHlwZSB0ID0gewogCXN0YXJ0X2NvdW50OiBpbnQ2
NDsKIAlzdG9yZTogU3RvcmUudDsgKCogVGhpcyBpcyB0aGUgc3RvcmUgdGhh
dCB3ZSBjaGFuZ2UgaW4gd3JpdGUgb3BlcmF0aW9ucy4gKikKIAlxdW90YTog
UXVvdGEudDsKKwlvbGRyb290OiBTdG9yZS5Ob2RlLnQ7CiAJbXV0YWJsZSBw
YXRoczogKFhlbmJ1cy5YYi5PcC5vcGVyYXRpb24gKiBTdG9yZS5QYXRoLnQp
IGxpc3Q7CiAJbXV0YWJsZSBvcGVyYXRpb25zOiAoUGFja2V0LnJlcXVlc3Qg
KiBQYWNrZXQucmVzcG9uc2UpIGxpc3Q7CiAJbXV0YWJsZSByZWFkX2xvd3Bh
dGg6IFN0b3JlLlBhdGgudCBvcHRpb247CkBAIC0xMjMsNiArMTI0LDcgQEAg
bGV0IG1ha2UgPyhpbnRlcm5hbD1mYWxzZSkgaWQgc3RvcmUgPQogCQlzdGFy
dF9jb3VudCA9ICFjb3VudGVyOwogCQlzdG9yZSA9IGlmIGlkID0gbm9uZSB0
aGVuIHN0b3JlIGVsc2UgU3RvcmUuY29weSBzdG9yZTsKIAkJcXVvdGEgPSBR
dW90YS5jb3B5IHN0b3JlLlN0b3JlLnF1b3RhOworCQlvbGRyb290ID0gU3Rv
cmUuZ2V0X3Jvb3Qgc3RvcmU7CiAJCXBhdGhzID0gW107CiAJCW9wZXJhdGlv
bnMgPSBbXTsKIAkJcmVhZF9sb3dwYXRoID0gTm9uZTsKQEAgLTEzNyw2ICsx
MzksOCBAQCBsZXQgbWFrZSA/KGludGVybmFsPWZhbHNlKSBpZCBzdG9yZSA9
CiBsZXQgZ2V0X3N0b3JlIHQgPSB0LnN0b3JlCiBsZXQgZ2V0X3BhdGhzIHQg
PSB0LnBhdGhzCiAKK2xldCBnZXRfcm9vdCB0ID0gU3RvcmUuZ2V0X3Jvb3Qg
dC5zdG9yZQorCiBsZXQgaXNfcmVhZF9vbmx5IHQgPSB0LnBhdGhzID0gW10K
IGxldCBhZGRfd29wIHQgdHkgcGF0aCA9IHQucGF0aHMgPC0gKHR5LCBwYXRo
KSA6OiB0LnBhdGhzCiBsZXQgYWRkX29wZXJhdGlvbiB+cGVybSB0IHJlcXVl
c3QgcmVzcG9uc2UgPQpkaWZmIC0tZ2l0IGEvdG9vbHMvb2NhbWwveGVuc3Rv
cmVkL3hlbnN0b3JlZC5tbCBiL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC94ZW5z
dG9yZWQubWwKaW5kZXggMzJjM2IxYzBmMS4uZTlmNDcxODQ2ZiAxMDA2NDQK
LS0tIGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3hlbnN0b3JlZC5tbAorKysg
Yi90b29scy9vY2FtbC94ZW5zdG9yZWQveGVuc3RvcmVkLm1sCkBAIC0zNDEs
NyArMzQxLDkgQEAgbGV0IF8gPQogCQkJCQlsZXQgKG5vdGlmeSwgZGVhZGRv
bSkgPSBEb21haW5zLmNsZWFudXAgZG9tYWlucyBpbgogCQkJCQlMaXN0Lml0
ZXIgKENvbm5lY3Rpb25zLmRlbF9kb21haW4gY29ucykgZGVhZGRvbTsKIAkJ
CQkJaWYgZGVhZGRvbSA8PiBbXSB8fCBub3RpZnkgdGhlbgotCQkJCQkJQ29u
bmVjdGlvbnMuZmlyZV9zcGVjX3dhdGNoZXMgY29ucyBTdG9yZS5QYXRoLnJl
bGVhc2VfZG9tYWluCisJCQkJCQlDb25uZWN0aW9ucy5maXJlX3NwZWNfd2F0
Y2hlcworCQkJCQkJCShTdG9yZS5nZXRfcm9vdCBzdG9yZSkKKwkJCQkJCQlj
b25zIFN0b3JlLlBhdGgucmVsZWFzZV9kb21haW4KIAkJCQkpCiAJCQkJZWxz
ZQogCQkJCQlsZXQgYyA9IENvbm5lY3Rpb25zLmZpbmRfZG9tYWluX2J5X3Bv
cnQgY29ucyBwb3J0IGluCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.10-o/0006-tools-ocaml-xenstored-add-xenstored.conf-flag-to-tur.patch"
Content-Disposition: attachment;
 filename="xsa115-4.10-o/0006-tools-ocaml-xenstored-add-xenstored.conf-flag-to-tur.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogYWRkIHhlbnN0b3JlZC5jb25mIGZsYWcgdG8gdHVybiBvZmYg
d2F0Y2gKIHBlcm1pc3Npb24gY2hlY2tzCk1JTUUtVmVyc2lvbjogMS4wCkNv
bnRlbnQtVHlwZTogdGV4dC9wbGFpbjsgY2hhcnNldD1VVEYtOApDb250ZW50
LVRyYW5zZmVyLUVuY29kaW5nOiA4Yml0CgpUaGVyZSBhcmUgZmxhZ3MgdG8g
dHVybiBvZmYgcXVvdGFzIGFuZCB0aGUgcGVybWlzc2lvbiBzeXN0ZW0sIHNv
IGFkZCBvbmUKdGhhdCB0dXJucyBvZmYgdGhlIG5ld2x5IGludHJvZHVjZWQg
d2F0Y2ggcGVybWlzc2lvbiBjaGVja3MgYXMgd2VsbC4KClRoaXMgaXMgcGFy
dCBvZiBYU0EtMTE1LgoKU2lnbmVkLW9mZi1ieTogRWR3aW4gVMO2csO2ayA8
ZWR2aW4udG9yb2tAY2l0cml4LmNvbT4KQWNrZWQtYnk6IENocmlzdGlhbiBM
aW5kaWcgPGNocmlzdGlhbi5saW5kaWdAY2l0cml4LmNvbT4KUmV2aWV3ZWQt
Ynk6IEFuZHJldyBDb29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+
CgpkaWZmIC0tZ2l0IGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL2Nvbm5lY3Rp
b24ubWwgYi90b29scy9vY2FtbC94ZW5zdG9yZWQvY29ubmVjdGlvbi5tbApp
bmRleCAxMzg5ZDk3MWMyLi42OThmNzIxMzQ1IDEwMDY0NAotLS0gYS90b29s
cy9vY2FtbC94ZW5zdG9yZWQvY29ubmVjdGlvbi5tbAorKysgYi90b29scy9v
Y2FtbC94ZW5zdG9yZWQvY29ubmVjdGlvbi5tbApAQCAtMjE4LDcgKzIxOCw3
IEBAIGxldCBmaXJlX3NpbmdsZV93YXRjaF91bmNoZWNrZWQgd2F0Y2ggPQog
bGV0IGZpcmVfc2luZ2xlX3dhdGNoIChvbGRyb290LCByb290KSB3YXRjaCA9
CiAJbGV0IGFic3BhdGggPSBnZXRfd2F0Y2hfcGF0aCB3YXRjaC5jb24gd2F0
Y2gucGF0aCB8PiBTdG9yZS5QYXRoLm9mX3N0cmluZyBpbgogCWxldCBwZXJt
cyA9IGxvb2t1cF93YXRjaF9wZXJtcyBvbGRyb290IHJvb3QgYWJzcGF0aCBp
bgotCWlmIExpc3QuZXhpc3RzIChQZXJtcy5oYXMgd2F0Y2guY29uLnBlcm0g
UkVBRCkgcGVybXMgdGhlbgorCWlmIFBlcm1zLmNhbl9maXJlX3dhdGNoIHdh
dGNoLmNvbi5wZXJtIHBlcm1zIHRoZW4KIAkJZmlyZV9zaW5nbGVfd2F0Y2hf
dW5jaGVja2VkIHdhdGNoCiAJZWxzZQogCQlsZXQgcGVybXMgPSBwZXJtcyB8
PiBMaXN0Lm1hcCAoUGVybXMuTm9kZS50b19zdHJpbmcgfnNlcDoiICIpIHw+
IFN0cmluZy5jb25jYXQgIiwgIiBpbgpkaWZmIC0tZ2l0IGEvdG9vbHMvb2Nh
bWwveGVuc3RvcmVkL294ZW5zdG9yZWQuY29uZi5pbiBiL3Rvb2xzL29jYW1s
L3hlbnN0b3JlZC9veGVuc3RvcmVkLmNvbmYuaW4KaW5kZXggNjU3OWI4NDQ0
OC4uZDVkNGYwMGRlOCAxMDA2NDQKLS0tIGEvdG9vbHMvb2NhbWwveGVuc3Rv
cmVkL294ZW5zdG9yZWQuY29uZi5pbgorKysgYi90b29scy9vY2FtbC94ZW5z
dG9yZWQvb3hlbnN0b3JlZC5jb25mLmluCkBAIC00NCw2ICs0NCwxNiBAQCBj
b25mbGljdC1yYXRlLWxpbWl0LWlzLWFnZ3JlZ2F0ZSA9IHRydWUKICMgQWN0
aXZhdGUgbm9kZSBwZXJtaXNzaW9uIHN5c3RlbQogcGVybXMtYWN0aXZhdGUg
PSB0cnVlCiAKKyMgQWN0aXZhdGUgdGhlIHdhdGNoIHBlcm1pc3Npb24gc3lz
dGVtCisjIFdoZW4gdGhpcyBpcyBlbmFibGVkIHVucHJpdmlsZWdlZCBndWVz
dHMgY2FuIG9ubHkgZ2V0IHdhdGNoIGV2ZW50cworIyBmb3IgeGVuc3RvcmUg
ZW50cmllcyB0aGF0IHRoZXkgd291bGQndmUgYmVlbiBhYmxlIHRvIHJlYWQu
CisjCisjIFdoZW4gdGhpcyBpcyBkaXNhYmxlZCB1bnByaXZpbGVnZWQgZ3Vl
c3RzIG1heSBnZXQgd2F0Y2ggZXZlbnRzCisjIGZvciB4ZW5zdG9yZSBlbnRy
aWVzIHRoYXQgdGhleSBjYW5ub3QgcmVhZC4gVGhlIHdhdGNoIGV2ZW50IGNv
bnRhaW5zCisjIG9ubHkgdGhlIGVudHJ5IG5hbWUsIG5vdCB0aGUgdmFsdWUu
CisjIFRoaXMgcmVzdG9yZXMgYmVoYXZpb3VyIHByaW9yIHRvIFhTQS0xMTUu
CitwZXJtcy13YXRjaC1hY3RpdmF0ZSA9IHRydWUKKwogIyBBY3RpdmF0ZSBx
dW90YQogcXVvdGEtYWN0aXZhdGUgPSB0cnVlCiBxdW90YS1tYXhlbnRpdHkg
PSAxMDAwCmRpZmYgLS1naXQgYS90b29scy9vY2FtbC94ZW5zdG9yZWQvcGVy
bXMubWwgYi90b29scy9vY2FtbC94ZW5zdG9yZWQvcGVybXMubWwKaW5kZXgg
MjNiODBhYmEzZC4uZWU3ZmVlNmJkYSAxMDA2NDQKLS0tIGEvdG9vbHMvb2Nh
bWwveGVuc3RvcmVkL3Blcm1zLm1sCisrKyBiL3Rvb2xzL29jYW1sL3hlbnN0
b3JlZC9wZXJtcy5tbApAQCAtMjAsNiArMjAsNyBAQCBsZXQgaW5mbyBmbXQg
PSBMb2dnaW5nLmluZm8gInBlcm1zIiBmbXQKIG9wZW4gU3RkZXh0CiAKIGxl
dCBhY3RpdmF0ZSA9IHJlZiB0cnVlCitsZXQgd2F0Y2hfYWN0aXZhdGUgPSBy
ZWYgdHJ1ZQogCiB0eXBlIHBlcm10eSA9IFJFQUQgfCBXUklURSB8IFJEV1Ig
fCBOT05FCiAKQEAgLTE2OCw1ICsxNjksOSBAQCBsZXQgY2hlY2sgY29ubmVj
dGlvbiByZXF1ZXN0IG5vZGUgPQogKCogY2hlY2sgaWYgdGhlIGN1cnJlbnQg
Y29ubmVjdGlvbiBoYXMgdGhlIHJlcXVlc3RlZCBwZXJtIG9uIHRoZSBjdXJy
ZW50IG5vZGUgKikKIGxldCBoYXMgY29ubmVjdGlvbiByZXF1ZXN0IG5vZGUg
PSBub3QgKGxhY2tzIGNvbm5lY3Rpb24gcmVxdWVzdCBub2RlKQogCitsZXQg
Y2FuX2ZpcmVfd2F0Y2ggY29ubmVjdGlvbiBwZXJtcyA9CisJbm90ICF3YXRj
aF9hY3RpdmF0ZQorCXx8IExpc3QuZXhpc3RzIChoYXMgY29ubmVjdGlvbiBS
RUFEKSBwZXJtcworCiBsZXQgZXF1aXYgcGVybTEgcGVybTIgPQogCShOb2Rl
LnRvX3N0cmluZyBwZXJtMSkgPSAoTm9kZS50b19zdHJpbmcgcGVybTIpCmRp
ZmYgLS1naXQgYS90b29scy9vY2FtbC94ZW5zdG9yZWQveGVuc3RvcmVkLm1s
IGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3hlbnN0b3JlZC5tbAppbmRleCBl
OWY0NzE4NDZmLi4zMGZjODc0MzI3IDEwMDY0NAotLS0gYS90b29scy9vY2Ft
bC94ZW5zdG9yZWQveGVuc3RvcmVkLm1sCisrKyBiL3Rvb2xzL29jYW1sL3hl
bnN0b3JlZC94ZW5zdG9yZWQubWwKQEAgLTk1LDYgKzk1LDcgQEAgbGV0IHBh
cnNlX2NvbmZpZyBmaWxlbmFtZSA9CiAJCSgiY29uZmxpY3QtbWF4LWhpc3Rv
cnktc2Vjb25kcyIsIENvbmZpZy5TZXRfZmxvYXQgRGVmaW5lLmNvbmZsaWN0
X21heF9oaXN0b3J5X3NlY29uZHMpOwogCQkoImNvbmZsaWN0LXJhdGUtbGlt
aXQtaXMtYWdncmVnYXRlIiwgQ29uZmlnLlNldF9ib29sIERlZmluZS5jb25m
bGljdF9yYXRlX2xpbWl0X2lzX2FnZ3JlZ2F0ZSk7CiAJCSgicGVybXMtYWN0
aXZhdGUiLCBDb25maWcuU2V0X2Jvb2wgUGVybXMuYWN0aXZhdGUpOworCQko
InBlcm1zLXdhdGNoLWFjdGl2YXRlIiwgQ29uZmlnLlNldF9ib29sIFBlcm1z
LndhdGNoX2FjdGl2YXRlKTsKIAkJKCJxdW90YS1hY3RpdmF0ZSIsIENvbmZp
Zy5TZXRfYm9vbCBRdW90YS5hY3RpdmF0ZSk7CiAJCSgicXVvdGEtbWF4d2F0
Y2giLCBDb25maWcuU2V0X2ludCBEZWZpbmUubWF4d2F0Y2gpOwogCQkoInF1
b3RhLXRyYW5zYWN0aW9uIiwgQ29uZmlnLlNldF9pbnQgRGVmaW5lLm1heHRy
YW5zYWN0aW9uKTsK

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.11-o/0001-tools-ocaml-xenstored-ignore-transaction-id-for-un-w.patch"
Content-Disposition: attachment;
 filename="xsa115-4.11-o/0001-tools-ocaml-xenstored-ignore-transaction-id-for-un-w.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogaWdub3JlIHRyYW5zYWN0aW9uIGlkIGZvciBbdW5dd2F0Y2gK
TUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UeXBlOiB0ZXh0L3BsYWluOyBj
aGFyc2V0PVVURi04CkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDhiaXQK
Ckluc3RlYWQgb2YgaWdub3JpbmcgdGhlIHRyYW5zYWN0aW9uIGlkIGZvciBY
U19XQVRDSCBhbmQgWFNfVU5XQVRDSApjb21tYW5kcyBhcyBpdCBpcyBkb2N1
bWVudGVkIGluIGRvY3MvbWlzYy94ZW5zdG9yZS50eHQsIGl0IGlzIHRlc3Rl
ZApmb3IgdmFsaWRpdHkgdG9kYXkuCgpSZWFsbHkgaWdub3JlIHRoZSB0cmFu
c2FjdGlvbiBpZCBmb3IgWFNfV0FUQ0ggYW5kIFhTX1VOV0FUQ0guCgpUaGlz
IGlzIHBhcnQgb2YgWFNBLTExNS4KClNpZ25lZC1vZmYtYnk6IEVkd2luIFTD
tnLDtmsgPGVkdmluLnRvcm9rQGNpdHJpeC5jb20+CkFja2VkLWJ5OiBDaHJp
c3RpYW4gTGluZGlnIDxjaHJpc3RpYW4ubGluZGlnQGNpdHJpeC5jb20+ClJl
dmlld2VkLWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVyM0BjaXRy
aXguY29tPgoKZGlmZiAtLWdpdCBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9w
cm9jZXNzLm1sIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3Byb2Nlc3MubWwK
aW5kZXggNzRjNjlmODY5Yy4uMGEwZTQzZDFmMCAxMDA2NDQKLS0tIGEvdG9v
bHMvb2NhbWwveGVuc3RvcmVkL3Byb2Nlc3MubWwKKysrIGIvdG9vbHMvb2Nh
bWwveGVuc3RvcmVkL3Byb2Nlc3MubWwKQEAgLTQ5MiwxMiArNDkyLDE5IEBA
IGxldCByZXRhaW5fb3BfaW5faGlzdG9yeSB0eSA9CiAJfCBYZW5idXMuWGIu
T3AuUmVzZXRfd2F0Y2hlcwogCXwgWGVuYnVzLlhiLk9wLkludmFsaWQgICAg
ICAgICAgIC0+IGZhbHNlCiAKK2xldCBtYXliZV9pZ25vcmVfdHJhbnNhY3Rp
b24gPSBmdW5jdGlvbgorCXwgWGVuYnVzLlhiLk9wLldhdGNoIHwgWGVuYnVz
LlhiLk9wLlVud2F0Y2ggLT4gZnVuIHRpZCAtPgorCQlpZiB0aWQgPD4gVHJh
bnNhY3Rpb24ubm9uZSB0aGVuCisJCQlkZWJ1ZyAiSWdub3JpbmcgdHJhbnNh
Y3Rpb24gSUQgJWQgZm9yIHdhdGNoL3Vud2F0Y2giIHRpZDsKKwkJVHJhbnNh
Y3Rpb24ubm9uZQorCXwgXyAtPiBmdW4geCAtPiB4CisKICgqKgogICogTm90
aHJvdyBndWFyYW50ZWUuCiAgKikKIGxldCBwcm9jZXNzX3BhY2tldCB+c3Rv
cmUgfmNvbnMgfmRvbXMgfmNvbiB+cmVxID0KIAlsZXQgdHkgPSByZXEuUGFj
a2V0LnR5IGluCi0JbGV0IHRpZCA9IHJlcS5QYWNrZXQudGlkIGluCisJbGV0
IHRpZCA9IG1heWJlX2lnbm9yZV90cmFuc2FjdGlvbiB0eSByZXEuUGFja2V0
LnRpZCBpbgogCWxldCByaWQgPSByZXEuUGFja2V0LnJpZCBpbgogCXRyeQog
CQlsZXQgZmN0ID0gZnVuY3Rpb25fb2ZfdHlwZSB0eSBpbgo=

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.11-o/0002-tools-ocaml-xenstored-check-privilege-for-XS_IS_DOMA.patch"
Content-Disposition: attachment;
 filename="xsa115-4.11-o/0002-tools-ocaml-xenstored-check-privilege-for-XS_IS_DOMA.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogY2hlY2sgcHJpdmlsZWdlIGZvciBYU19JU19ET01BSU5fSU5U
Uk9EVUNFRApNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVR5cGU6IHRleHQv
cGxhaW47IGNoYXJzZXQ9VVRGLTgKQ29udGVudC1UcmFuc2Zlci1FbmNvZGlu
ZzogOGJpdAoKVGhlIFhlbnN0b3JlIGNvbW1hbmQgWFNfSVNfRE9NQUlOX0lO
VFJPRFVDRUQgc2hvdWxkIGJlIHBvc3NpYmxlIGZvciBwcml2aWxlZ2VkCmRv
bWFpbnMgb25seSAodGhlIG9ubHkgdXNlciBpbiB0aGUgdHJlZSBpcyB0aGUg
eGVucGFnaW5nIGRhZW1vbikuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTExNS4K
ClNpZ25lZC1vZmYtYnk6IEVkd2luIFTDtnLDtmsgPGVkdmluLnRvcm9rQGNp
dHJpeC5jb20+CkFja2VkLWJ5OiBDaHJpc3RpYW4gTGluZGlnIDxjaHJpc3Rp
YW4ubGluZGlnQGNpdHJpeC5jb20+ClJldmlld2VkLWJ5OiBBbmRyZXcgQ29v
cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgoKZGlmZiAtLWdpdCBh
L3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wcm9jZXNzLm1sIGIvdG9vbHMvb2Nh
bWwveGVuc3RvcmVkL3Byb2Nlc3MubWwKaW5kZXggMGEwZTQzZDFmMC4uZjM3
NGFiZTk5OCAxMDA2NDQKLS0tIGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3By
b2Nlc3MubWwKKysrIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3Byb2Nlc3Mu
bWwKQEAgLTE2Niw3ICsxNjYsOSBAQCBsZXQgZG9fc2V0cGVybXMgY29uIHQg
ZG9tYWlucyBjb25zIGRhdGEgPQogbGV0IGRvX2Vycm9yIGNvbiB0IGRvbWFp
bnMgY29ucyBkYXRhID0KIAlyYWlzZSBEZWZpbmUuVW5rbm93bl9vcGVyYXRp
b24KIAotbGV0IGRvX2lzaW50cm9kdWNlZCBjb24gdCBkb21haW5zIGNvbnMg
ZGF0YSA9CitsZXQgZG9faXNpbnRyb2R1Y2VkIGNvbiBfdCBkb21haW5zIF9j
b25zIGRhdGEgPQorCWlmIG5vdCAoQ29ubmVjdGlvbi5pc19kb20wIGNvbikK
Kwl0aGVuIHJhaXNlIERlZmluZS5QZXJtaXNzaW9uX2RlbmllZDsKIAlsZXQg
ZG9taWQgPQogCQltYXRjaCAoc3BsaXQgTm9uZSAnXDAwMCcgZGF0YSkgd2l0
aAogCQl8IGRvbWlkIDo6IF8gLT4gaW50X29mX3N0cmluZyBkb21pZAo=

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.11-o/0003-tools-ocaml-xenstored-unify-watch-firing.patch"
Content-Disposition: attachment;
 filename="xsa115-4.11-o/0003-tools-ocaml-xenstored-unify-watch-firing.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogdW5pZnkgd2F0Y2ggZmlyaW5nCk1JTUUtVmVyc2lvbjogMS4w
CkNvbnRlbnQtVHlwZTogdGV4dC9wbGFpbjsgY2hhcnNldD1VVEYtOApDb250
ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA4Yml0CgpUaGlzIHdpbGwgbWFrZSBp
dCBlYXNpZXIgaW5zZXJ0IGFkZGl0aW9uYWwgY2hlY2tzIGluIGEgZm9sbG93
LXVwIHBhdGNoLgpBbGwgd2F0Y2hlcyBhcmUgbm93IGZpcmVkIGZyb20gYSBz
aW5nbGUgZnVuY3Rpb24uCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTExNS4KClNp
Z25lZC1vZmYtYnk6IEVkd2luIFTDtnLDtmsgPGVkdmluLnRvcm9rQGNpdHJp
eC5jb20+CkFja2VkLWJ5OiBDaHJpc3RpYW4gTGluZGlnIDxjaHJpc3RpYW4u
bGluZGlnQGNpdHJpeC5jb20+ClJldmlld2VkLWJ5OiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgoKZGlmZiAtLWdpdCBhL3Rv
b2xzL29jYW1sL3hlbnN0b3JlZC9jb25uZWN0aW9uLm1sIGIvdG9vbHMvb2Nh
bWwveGVuc3RvcmVkL2Nvbm5lY3Rpb24ubWwKaW5kZXggYmU5YzYyZjI3Zi4u
ZDc0MzJjNjU5NyAxMDA2NDQKLS0tIGEvdG9vbHMvb2NhbWwveGVuc3RvcmVk
L2Nvbm5lY3Rpb24ubWwKKysrIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL2Nv
bm5lY3Rpb24ubWwKQEAgLTIxMCw4ICsyMTAsNyBAQCBsZXQgZmlyZV93YXRj
aCB3YXRjaCBwYXRoID0KIAkJZW5kIGVsc2UKIAkJCXBhdGgKIAlpbgotCWxl
dCBkYXRhID0gVXRpbHMuam9pbl9ieV9udWxsIFsgbmV3X3BhdGg7IHdhdGNo
LnRva2VuOyAiIiBdIGluCi0Jc2VuZF9yZXBseSB3YXRjaC5jb24gVHJhbnNh
Y3Rpb24ubm9uZSAwIFhlbmJ1cy5YYi5PcC5XYXRjaGV2ZW50IGRhdGEKKwlm
aXJlX3NpbmdsZV93YXRjaCB7IHdhdGNoIHdpdGggcGF0aCA9IG5ld19wYXRo
IH0KIAogKCogU2VhcmNoIGZvciBhIHZhbGlkIHVudXNlZCB0cmFuc2FjdGlv
biBpZC4gKikKIGxldCByZWMgdmFsaWRfdHJhbnNhY3Rpb25faWQgY29uIHBy
b3Bvc2VkX2lkID0K

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.11-o/0004-tools-ocaml-xenstored-introduce-permissions-for-spec.patch"
Content-Disposition: attachment;
 filename="xsa115-4.11-o/0004-tools-ocaml-xenstored-introduce-permissions-for-spec.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogaW50cm9kdWNlIHBlcm1pc3Npb25zIGZvciBzcGVjaWFsIHdh
dGNoZXMKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UeXBlOiB0ZXh0L3Bs
YWluOyBjaGFyc2V0PVVURi04CkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6
IDhiaXQKClRoZSBzcGVjaWFsIHdhdGNoZXMgIkBpbnRyb2R1Y2VEb21haW4i
IGFuZCAiQHJlbGVhc2VEb21haW4iIHNob3VsZCBiZQphbGxvd2VkIGZvciBw
cml2aWxlZ2VkIGNhbGxlcnMgb25seSwgYXMgdGhleSBhbGxvdyB0byBnYWlu
IGluZm9ybWF0aW9uCmFib3V0IHByZXNlbmNlIG9mIG90aGVyIGd1ZXN0cyBv
biB0aGUgaG9zdC4gU28gc2VuZCB3YXRjaCBldmVudHMgZm9yCnRob3NlIHdh
dGNoZXMgdmlhIHByaXZpbGVnZWQgY29ubmVjdGlvbnMgb25seS4KClN0YXJ0
IHRvIGFkZHJlc3MgdGhpcyBieSB0cmVhdGluZyB0aGUgc3BlY2lhbCB3YXRj
aGVzIGFzIHJlZ3VsYXIgbm9kZXMKaW4gdGhlIHRyZWUsIHdoaWNoIGdpdmVz
IHRoZW0gbm9ybWFsIHNlbWFudGljcyBmb3IgcGVybWlzc2lvbnMuICBBIGxh
dGVyCmNoYW5nZSB3aWxsIHJlc3RyaWN0IHRoZSBoYW5kbGluZywgc28gdGhh
dCB0aGV5IGNhbid0IGJlIGxpc3RlZCwgZXRjLgoKVGhpcyBpcyBwYXJ0IG9m
IFhTQS0xMTUuCgpTaWduZWQtb2ZmLWJ5OiBFZHdpbiBUw7Zyw7ZrIDxlZHZp
bi50b3Jva0BjaXRyaXguY29tPgpBY2tlZC1ieTogQ2hyaXN0aWFuIExpbmRp
ZyA8Y2hyaXN0aWFuLmxpbmRpZ0BjaXRyaXguY29tPgpSZXZpZXdlZC1ieTog
QW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNvbT4KCmRp
ZmYgLS1naXQgYS90b29scy9vY2FtbC94ZW5zdG9yZWQvcHJvY2Vzcy5tbCBi
L3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wcm9jZXNzLm1sCmluZGV4IGYzNzRh
YmU5OTguLmMzYzhlYTJmNGIgMTAwNjQ0Ci0tLSBhL3Rvb2xzL29jYW1sL3hl
bnN0b3JlZC9wcm9jZXNzLm1sCisrKyBiL3Rvb2xzL29jYW1sL3hlbnN0b3Jl
ZC9wcm9jZXNzLm1sCkBAIC00MTQsNyArNDE0LDcgQEAgbGV0IGRvX2ludHJv
ZHVjZSBjb24gdCBkb21haW5zIGNvbnMgZGF0YSA9CiAJCWVsc2UgdHJ5CiAJ
CQlsZXQgbmRvbSA9IERvbWFpbnMuY3JlYXRlIGRvbWFpbnMgZG9taWQgbWZu
IHBvcnQgaW4KIAkJCUNvbm5lY3Rpb25zLmFkZF9kb21haW4gY29ucyBuZG9t
OwotCQkJQ29ubmVjdGlvbnMuZmlyZV9zcGVjX3dhdGNoZXMgY29ucyAiQGlu
dHJvZHVjZURvbWFpbiI7CisJCQlDb25uZWN0aW9ucy5maXJlX3NwZWNfd2F0
Y2hlcyBjb25zIFN0b3JlLlBhdGguaW50cm9kdWNlX2RvbWFpbjsKIAkJCW5k
b20KIAkJd2l0aCBfIC0+IHJhaXNlIEludmFsaWRfQ21kX0FyZ3MKIAlpbgpA
QCAtNDMzLDcgKzQzMyw3IEBAIGxldCBkb19yZWxlYXNlIGNvbiB0IGRvbWFp
bnMgY29ucyBkYXRhID0KIAlEb21haW5zLmRlbCBkb21haW5zIGRvbWlkOwog
CUNvbm5lY3Rpb25zLmRlbF9kb21haW4gY29ucyBkb21pZDsKIAlpZiBmaXJl
X3NwZWNfd2F0Y2hlcyAKLQl0aGVuIENvbm5lY3Rpb25zLmZpcmVfc3BlY193
YXRjaGVzIGNvbnMgIkByZWxlYXNlRG9tYWluIgorCXRoZW4gQ29ubmVjdGlv
bnMuZmlyZV9zcGVjX3dhdGNoZXMgY29ucyBTdG9yZS5QYXRoLnJlbGVhc2Vf
ZG9tYWluCiAJZWxzZSByYWlzZSBJbnZhbGlkX0NtZF9BcmdzCiAKIGxldCBk
b19yZXN1bWUgY29uIHQgZG9tYWlucyBjb25zIGRhdGEgPQpkaWZmIC0tZ2l0
IGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3N0b3JlLm1sIGIvdG9vbHMvb2Nh
bWwveGVuc3RvcmVkL3N0b3JlLm1sCmluZGV4IDYzNzVhMWM4ODkuLjk4ZDM2
OGQ1MmYgMTAwNjQ0Ci0tLSBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9zdG9y
ZS5tbAorKysgYi90b29scy9vY2FtbC94ZW5zdG9yZWQvc3RvcmUubWwKQEAg
LTIxNCw2ICsyMTQsMTEgQEAgbGV0IHJlYyBsb29rdXAgbm9kZSBwYXRoIGZj
dCA9CiAKIGxldCBhcHBseSBybm9kZSBwYXRoIGZjdCA9CiAJbG9va3VwIHJu
b2RlIHBhdGggZmN0CisKK2xldCBpbnRyb2R1Y2VfZG9tYWluID0gIkBpbnRy
b2R1Y2VEb21haW4iCitsZXQgcmVsZWFzZV9kb21haW4gPSAiQHJlbGVhc2VE
b21haW4iCitsZXQgc3BlY2lhbHMgPSBMaXN0Lm1hcCBvZl9zdHJpbmcgWyBp
bnRyb2R1Y2VfZG9tYWluOyByZWxlYXNlX2RvbWFpbiBdCisKIGVuZAogCiAo
KiBUaGUgU3RvcmUudCB0eXBlICopCmRpZmYgLS1naXQgYS90b29scy9vY2Ft
bC94ZW5zdG9yZWQvdXRpbHMubWwgYi90b29scy9vY2FtbC94ZW5zdG9yZWQv
dXRpbHMubWwKaW5kZXggYjI1MmRiNzk5Yi4uZThjOWZlNGU5NCAxMDA2NDQK
LS0tIGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3V0aWxzLm1sCisrKyBiL3Rv
b2xzL29jYW1sL3hlbnN0b3JlZC91dGlscy5tbApAQCAtODgsMTkgKzg4LDE3
IEBAIGxldCByZWFkX2ZpbGVfc2luZ2xlX2ludGVnZXIgZmlsZW5hbWUgPQog
CVVuaXguY2xvc2UgZmQ7CiAJaW50X29mX3N0cmluZyAoQnl0ZXMuc3ViX3N0
cmluZyBidWYgMCBzeikKIAotbGV0IHBhdGhfY29tcGxldGUgcGF0aCBjb25u
ZWN0aW9uX3BhdGggPQotCWlmIFN0cmluZy5nZXQgcGF0aCAwIDw+ICcvJyB0
aGVuCi0JCWNvbm5lY3Rpb25fcGF0aCBeIHBhdGgKLQllbHNlCi0JCXBhdGgK
LQorKCogQHBhdGggbWF5IGJlIGd1ZXN0IGRhdGEgYW5kIG5lZWRzIGl0cyBs
ZW5ndGggdmFsaWRhdGluZy4gIEBjb25uZWN0aW9uX3BhdGgKKyAqIGlzIGdl
bmVyYXRlZCBsb2NhbGx5IGluIHhlbnN0b3JlZCBhbmQgYWx3YXlzIG9mIHRo
ZSBmb3JtICIvbG9jYWwvZG9tYWluLyROLyIgKikKIGxldCBwYXRoX3ZhbGlk
YXRlIHBhdGggY29ubmVjdGlvbl9wYXRoID0KLQlpZiBTdHJpbmcubGVuZ3Ro
IHBhdGggPSAwIHx8IFN0cmluZy5sZW5ndGggcGF0aCA+IDEwMjQgdGhlbgot
CQlyYWlzZSBEZWZpbmUuSW52YWxpZF9wYXRoCi0JZWxzZQotCQlsZXQgY3Bh
dGggPSBwYXRoX2NvbXBsZXRlIHBhdGggY29ubmVjdGlvbl9wYXRoIGluCi0J
CWlmIFN0cmluZy5nZXQgY3BhdGggMCA8PiAnLycgdGhlbgotCQkJcmFpc2Ug
RGVmaW5lLkludmFsaWRfcGF0aAotCQllbHNlCi0JCQljcGF0aAorCWxldCBs
ZW4gPSBTdHJpbmcubGVuZ3RoIHBhdGggaW4KKworCWlmIGxlbiA9IDAgfHwg
bGVuID4gMTAyNCB0aGVuIHJhaXNlIERlZmluZS5JbnZhbGlkX3BhdGg7CisK
KwlsZXQgYWJzX3BhdGggPQorCQltYXRjaCBTdHJpbmcuZ2V0IHBhdGggMCB3
aXRoCisJCXwgJy8nIHwgJ0AnIC0+IHBhdGgKKwkJfCBfICAgLT4gY29ubmVj
dGlvbl9wYXRoIF4gcGF0aAorCWluCiAKKwlhYnNfcGF0aApkaWZmIC0tZ2l0
IGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3hlbnN0b3JlZC5tbCBiL3Rvb2xz
L29jYW1sL3hlbnN0b3JlZC94ZW5zdG9yZWQubWwKaW5kZXggNDlmYzE4YmYx
OS4uMzJjM2IxYzBmMSAxMDA2NDQKLS0tIGEvdG9vbHMvb2NhbWwveGVuc3Rv
cmVkL3hlbnN0b3JlZC5tbAorKysgYi90b29scy9vY2FtbC94ZW5zdG9yZWQv
eGVuc3RvcmVkLm1sCkBAIC0yODcsNiArMjg3LDggQEAgbGV0IF8gPQogCWxl
dCBxdWl0ID0gcmVmIGZhbHNlIGluCiAKIAlMb2dnaW5nLmluaXRfeGVuc3Rv
cmVkX2xvZygpOworCUxpc3QuaXRlciAoZnVuIHBhdGggLT4KKwkJU3RvcmUu
d3JpdGUgc3RvcmUgUGVybXMuQ29ubmVjdGlvbi5mdWxsX3JpZ2h0cyBwYXRo
ICIiKSBTdG9yZS5QYXRoLnNwZWNpYWxzOwogCiAJbGV0IGZpbGVuYW1lID0g
UGF0aHMueGVuX3J1bl9zdG9yZWQgXiAiL2RiIiBpbgogCWlmIGNmLnJlc3Rh
cnQgJiYgU3lzLmZpbGVfZXhpc3RzIGZpbGVuYW1lIHRoZW4gKApAQCAtMzM5
LDcgKzM0MSw3IEBAIGxldCBfID0KIAkJCQkJbGV0IChub3RpZnksIGRlYWRk
b20pID0gRG9tYWlucy5jbGVhbnVwIGRvbWFpbnMgaW4KIAkJCQkJTGlzdC5p
dGVyIChDb25uZWN0aW9ucy5kZWxfZG9tYWluIGNvbnMpIGRlYWRkb207CiAJ
CQkJCWlmIGRlYWRkb20gPD4gW10gfHwgbm90aWZ5IHRoZW4KLQkJCQkJCUNv
bm5lY3Rpb25zLmZpcmVfc3BlY193YXRjaGVzIGNvbnMgIkByZWxlYXNlRG9t
YWluIgorCQkJCQkJQ29ubmVjdGlvbnMuZmlyZV9zcGVjX3dhdGNoZXMgY29u
cyBTdG9yZS5QYXRoLnJlbGVhc2VfZG9tYWluCiAJCQkJKQogCQkJCWVsc2UK
IAkJCQkJbGV0IGMgPSBDb25uZWN0aW9ucy5maW5kX2RvbWFpbl9ieV9wb3J0
IGNvbnMgcG9ydCBpbgo=

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.11-o/0005-tools-ocaml-xenstored-avoid-watch-events-for-nodes-w.patch"
Content-Disposition: attachment;
 filename="xsa115-4.11-o/0005-tools-ocaml-xenstored-avoid-watch-events-for-nodes-w.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogYXZvaWQgd2F0Y2ggZXZlbnRzIGZvciBub2RlcyB3aXRob3V0
IGFjY2VzcwpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVR5cGU6IHRleHQv
cGxhaW47IGNoYXJzZXQ9VVRGLTgKQ29udGVudC1UcmFuc2Zlci1FbmNvZGlu
ZzogOGJpdAoKVG9kYXkgd2F0Y2ggZXZlbnRzIGFyZSBzZW50IHJlZ2FyZGxl
c3Mgb2YgdGhlIGFjY2VzcyByaWdodHMgb2YgdGhlCm5vZGUgdGhlIGV2ZW50
IGlzIHNlbnQgZm9yLiBUaGlzIGVuYWJsZXMgYW55IGd1ZXN0IHRvIGUuZy4g
c2V0dXAgYQp3YXRjaCBmb3IgIi8iIGluIG9yZGVyIHRvIGhhdmUgYSBkZXRh
aWxlZCByZWNvcmQgb2YgYWxsIFhlbnN0b3JlCm1vZGlmaWNhdGlvbnMuCgpN
b2RpZnkgdGhhdCBieSBzZW5kaW5nIG9ubHkgd2F0Y2ggZXZlbnRzIGZvciBu
b2RlcyB0aGF0IHRoZSB3YXRjaGVyCmhhcyBhIGNoYW5jZSB0byBzZWUgb3Ro
ZXJ3aXNlIChlaXRoZXIgdmlhIGRpcmVjdCByZWFkcyBvciBieSBxdWVyeWlu
Zwp0aGUgY2hpbGRyZW4gb2YgYSBub2RlKS4gVGhpcyBpbmNsdWRlcyBjYXNl
cyB3aGVyZSB0aGUgdmlzaWJpbGl0eSBvZgphIG5vZGUgZm9yIGEgd2F0Y2hl
ciBpcyBjaGFuZ2luZyAocGVybWlzc2lvbnMgYmVpbmcgcmVtb3ZlZCkuCgpQ
ZXJtaXNzaW9ucyBmb3Igbm9kZXMgYXJlIGxvb2tlZCB1cCBlaXRoZXIgaW4g
dGhlIG9sZCAocHJlCnRyYW5zYWN0aW9uL2NvbW1hbmQpIG9yIGN1cnJlbnQg
dHJlZXMgKHBvc3QgdHJhbnNhY3Rpb24pLiAgSWYKcGVybWlzc2lvbnMgYXJl
IGNoYW5nZWQgbXVsdGlwbGUgdGltZXMgaW4gYSB0cmFuc2FjdGlvbiBvbmx5
IHRoZSBmaW5hbAp2ZXJzaW9uIGlzIGNoZWNrZWQsIGJlY2F1c2UgY29uc2lk
ZXJpbmcgYSB0cmFuc2FjdGlvbiBhdG9taWMgdGhlCmluZGl2aWR1YWwgcGVy
bWlzc2lvbiBjaGFuZ2VzIHdvdWxkIG5vdCBiZSBub3RpY2FibGUgdG8gYW4g
b3V0c2lkZQpvYnNlcnZlci4KClR3byB0cmVlcyBhcmUgb25seSBuZWVkZWQg
Zm9yIHNldF9wZXJtczogaGVyZSB3ZSBjYW4gZWl0aGVyIG5vdGljZSB0aGUK
bm9kZSBkaXNhcHBlYXJpbmcgKGlmIHdlIGxvb3NlIHBlcm1pc3Npb24pLCBh
cHBlYXJpbmcKKGlmIHdlIGdhaW4gcGVybWlzc2lvbiksIG9yIGNoYW5naW5n
IChpZiB3ZSBwcmVzZXJ2ZSBwZXJtaXNzaW9uKS4KClJNIG5lZWRzIHRvIG9u
bHkgbG9vayBhdCB0aGUgb2xkIHRyZWU6IGluIHRoZSBuZXcgdHJlZSB0aGUg
bm9kZSB3b3VsZCBiZQpnb25lLCBvciBjb3VsZCBoYXZlIGRpZmZlcmVudCBw
ZXJtaXNzaW9ucyBpZiBpdCB3YXMgcmVjcmVhdGVkICh0aGUKcmVjcmVhdGlv
biB3b3VsZCBnZXQgaXRzIG93biB3YXRjaCBmaXJlZCkuCgpJbnNpZGUgYSB0
cmVlIHdlIGxvb2t1cCB0aGUgd2F0Y2ggcGF0aCdzIHBhcmVudCwgYW5kIHRo
ZW4gdGhlIHdhdGNoIHBhdGgKY2hpbGQgaXRzZWxmLiAgVGhpcyBnZXRzIHVz
IDQgc2V0cyBvZiBwZXJtaXNzaW9ucyBpbiB3b3JzdCBjYXNlLCBhbmQgaWYK
ZWl0aGVyIG9mIHRoZXNlIGFsbG93cyBhIHdhdGNoLCB0aGVuIHdlIHBlcm1p
dCBpdCB0byBmaXJlLiAgVGhlCnBlcm1pc3Npb24gbG9va3VwcyBhcmUgZG9u
ZSB3aXRob3V0IGxvZ2dpbmcgdGhlIGZhaWx1cmVzLCBvdGhlcndpc2Ugd2Un
ZApnZXQgY29uZnVzaW5nIGVycm9ycyBhYm91dCBwZXJtaXNzaW9uIGRlbmll
ZCBmb3Igc29tZSBwYXRocywgYnV0IGEgd2F0Y2gKc3RpbGwgZmlyaW5nLiBU
aGUgYWN0dWFsIHJlc3VsdCBpcyBsb2dnZWQgaW4geGVuc3RvcmVkLWFjY2Vz
cyBsb2c6CgogICd3IGV2ZW50IC4uLicgYXMgdXN1YWwgaWYgd2F0Y2ggd2Fz
IGZpcmVkCiAgJ3cgbm90ZmlyZWQuLi4nIGlmIHRoZSB3YXRjaCB3YXMgbm90
IGZpcmVkLCB0b2dldGhlciB3aXRoIHBhdGggYW5kCiAgcGVybWlzc2lvbiBz
ZXQgdG8gaGVscCBpbiB0cm91Ymxlc2hvb3RpbmcKCkFkZGluZyBhIHdhdGNo
IGJ5cGFzc2VzIHBlcm1pc3Npb24gY2hlY2tzIGFuZCBhbHdheXMgZmlyZXMg
dGhlIHdhdGNoCm9uY2UgaW1tZWRpYXRlbHkuIFRoaXMgaXMgY29uc2lzdGVu
dCB3aXRoIHRoZSBzcGVjaWZpY2F0aW9uLCBhbmQgbm8KaW5mb3JtYXRpb24g
aXMgZ2FpbmVkICh0aGUgd2F0Y2ggaXMgZmlyZWQgYm90aCBpZiB0aGUgcGF0
aCBleGlzdHMgb3IKZG9lc24ndCwgYW5kIGJvdGggaWYgeW91IGhhdmUgb3Ig
ZG9uJ3QgaGF2ZSBhY2Nlc3MsIGkuZS4gaXQgcmVmbGVjdHMgdGhlCnBhdGgg
YSBkb21haW4gZ2F2ZSBpdCBiYWNrIHRvIHRoYXQgZG9tYWluKS4KClRoZXJl
IGFyZSBzb21lIHNlbWFudGljIGNoYW5nZXMgaGVyZToKCiAgKiBXcml0ZSty
bSBpbiBhIHNpbmdsZSB0cmFuc2FjdGlvbiBvZiB0aGUgc2FtZSBwYXRoIGlz
IHVub2JzZXJ2YWJsZQogICAgbm93IHZpYSB3YXRjaGVzOiBib3RoIGJlZm9y
ZSBhbmQgYWZ0ZXIgYSB0cmFuc2FjdGlvbiB0aGUgcGF0aAogICAgZG9lc24n
dCBleGlzdCwgdGh1cyBib3RoIHRyZWUgbG9va3VwcyBjb21lIHVwIHdpdGgg
dGhlIGVtcHR5CiAgICBwZXJtaXNzaW9uIHNldCwgYW5kIG5vb25lLCBub3Qg
ZXZlbiBEb20wIGNhbiBzZWUgdGhpcy4gVGhpcyBpcwogICAgY29uc2lzdGVu
dCB3aXRoIHRyYW5zYWN0aW9uIGF0b21pY2l0eSB0aG91Z2guCiAgKiBTaW1p
bGFyIHRvIGFib3ZlIGlmIHdlIHRlbXBvcmFyaWx5IGdyYW50IGFuZCB0aGVu
IHJldm9rZSBwZXJtaXNzaW9uCiAgICBvbiBhIHBhdGggYW55IHdhdGNoZXMg
ZmlyZWQgaW5iZXR3ZWVuIGFyZSBpZ25vcmVkIGFzIHdlbGwKICAqIFRoZXJl
IGlzIGEgbmV3IGxvZyBldmVudCAodyBub3RmaXJlZCkgd2hpY2ggc2hvd3Mg
dGhlIHBlcm1pc3Npb24gc2V0CiAgICBvZiB0aGUgcGF0aCwgYW5kIHRoZSBw
YXRoLgogICogV2F0Y2hlcyBvbiBwYXRocyB0aGF0IGEgZG9tYWluIGRvZXNu
J3QgaGF2ZSBhY2Nlc3MgdG8gYXJlIG5vdyBub3QKICAgIHNlZW4sIHdoaWNo
IGlzIHRoZSBwdXJwb3NlIG9mIHRoZSBzZWN1cml0eSBmaXguCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTExNS4KClNpZ25lZC1vZmYtYnk6IEVkd2luIFTDtnLD
tmsgPGVkdmluLnRvcm9rQGNpdHJpeC5jb20+CkFja2VkLWJ5OiBDaHJpc3Rp
YW4gTGluZGlnIDxjaHJpc3RpYW4ubGluZGlnQGNpdHJpeC5jb20+ClJldmll
d2VkLWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXgu
Y29tPgoKZGlmZiAtLWdpdCBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9jb25u
ZWN0aW9uLm1sIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL2Nvbm5lY3Rpb24u
bWwKaW5kZXggZDc0MzJjNjU5Ny4uMTM4OWQ5NzFjMiAxMDA2NDQKLS0tIGEv
dG9vbHMvb2NhbWwveGVuc3RvcmVkL2Nvbm5lY3Rpb24ubWwKKysrIGIvdG9v
bHMvb2NhbWwveGVuc3RvcmVkL2Nvbm5lY3Rpb24ubWwKQEAgLTE5NiwxMSAr
MTk2LDM2IEBAIGxldCBsaXN0X3dhdGNoZXMgY29uID0KIAkJY29uLndhdGNo
ZXMgW10gaW4KIAlMaXN0LmNvbmNhdCBsbAogCi1sZXQgZmlyZV9zaW5nbGVf
d2F0Y2ggd2F0Y2ggPQorbGV0IGRiZyBmbXQgPSBMb2dnaW5nLmRlYnVnICJj
b25uZWN0aW9uIiBmbXQKK2xldCBpbmZvIGZtdCA9IExvZ2dpbmcuaW5mbyAi
Y29ubmVjdGlvbiIgZm10CisKK2xldCBsb29rdXBfd2F0Y2hfcGVybSBwYXRo
ID0gZnVuY3Rpb24KK3wgTm9uZSAtPiBbXQorfCBTb21lIHJvb3QgLT4KKwl0
cnkgU3RvcmUuUGF0aC5hcHBseSByb290IHBhdGggQEAgZnVuIHBhcmVudCBu
YW1lIC0+CisJCVN0b3JlLk5vZGUuZ2V0X3Blcm1zIHBhcmVudCA6OgorCQl0
cnkgW1N0b3JlLk5vZGUuZ2V0X3Blcm1zIChTdG9yZS5Ob2RlLmZpbmQgcGFy
ZW50IG5hbWUpXQorCQl3aXRoIE5vdF9mb3VuZCAtPiBbXQorCXdpdGggRGVm
aW5lLkludmFsaWRfcGF0aCB8IE5vdF9mb3VuZCAtPiBbXQorCitsZXQgbG9v
a3VwX3dhdGNoX3Blcm1zIG9sZHJvb3Qgcm9vdCBwYXRoID0KKwlsb29rdXBf
d2F0Y2hfcGVybSBwYXRoIG9sZHJvb3QgQCBsb29rdXBfd2F0Y2hfcGVybSBw
YXRoIChTb21lIHJvb3QpCisKK2xldCBmaXJlX3NpbmdsZV93YXRjaF91bmNo
ZWNrZWQgd2F0Y2ggPQogCWxldCBkYXRhID0gVXRpbHMuam9pbl9ieV9udWxs
IFt3YXRjaC5wYXRoOyB3YXRjaC50b2tlbjsgIiJdIGluCiAJc2VuZF9yZXBs
eSB3YXRjaC5jb24gVHJhbnNhY3Rpb24ubm9uZSAwIFhlbmJ1cy5YYi5PcC5X
YXRjaGV2ZW50IGRhdGEKIAotbGV0IGZpcmVfd2F0Y2ggd2F0Y2ggcGF0aCA9
CitsZXQgZmlyZV9zaW5nbGVfd2F0Y2ggKG9sZHJvb3QsIHJvb3QpIHdhdGNo
ID0KKwlsZXQgYWJzcGF0aCA9IGdldF93YXRjaF9wYXRoIHdhdGNoLmNvbiB3
YXRjaC5wYXRoIHw+IFN0b3JlLlBhdGgub2Zfc3RyaW5nIGluCisJbGV0IHBl
cm1zID0gbG9va3VwX3dhdGNoX3Blcm1zIG9sZHJvb3Qgcm9vdCBhYnNwYXRo
IGluCisJaWYgTGlzdC5leGlzdHMgKFBlcm1zLmhhcyB3YXRjaC5jb24ucGVy
bSBSRUFEKSBwZXJtcyB0aGVuCisJCWZpcmVfc2luZ2xlX3dhdGNoX3VuY2hl
Y2tlZCB3YXRjaAorCWVsc2UKKwkJbGV0IHBlcm1zID0gcGVybXMgfD4gTGlz
dC5tYXAgKFBlcm1zLk5vZGUudG9fc3RyaW5nIH5zZXA6IiAiKSB8PiBTdHJp
bmcuY29uY2F0ICIsICIgaW4KKwkJbGV0IGNvbiA9IGdldF9kb21zdHIgd2F0
Y2guY29uIGluCisJCUxvZ2dpbmcud2F0Y2hfbm90X2ZpcmVkIH5jb24gcGVy
bXMgKFN0b3JlLlBhdGgudG9fc3RyaW5nIGFic3BhdGgpCisKK2xldCBmaXJl
X3dhdGNoIHJvb3RzIHdhdGNoIHBhdGggPQogCWxldCBuZXdfcGF0aCA9CiAJ
CWlmIHdhdGNoLmlzX3JlbGF0aXZlICYmIHBhdGguWzBdID0gJy8nCiAJCXRo
ZW4gYmVnaW4KQEAgLTIxMCw3ICsyMzUsNyBAQCBsZXQgZmlyZV93YXRjaCB3
YXRjaCBwYXRoID0KIAkJZW5kIGVsc2UKIAkJCXBhdGgKIAlpbgotCWZpcmVf
c2luZ2xlX3dhdGNoIHsgd2F0Y2ggd2l0aCBwYXRoID0gbmV3X3BhdGggfQor
CWZpcmVfc2luZ2xlX3dhdGNoIHJvb3RzIHsgd2F0Y2ggd2l0aCBwYXRoID0g
bmV3X3BhdGggfQogCiAoKiBTZWFyY2ggZm9yIGEgdmFsaWQgdW51c2VkIHRy
YW5zYWN0aW9uIGlkLiAqKQogbGV0IHJlYyB2YWxpZF90cmFuc2FjdGlvbl9p
ZCBjb24gcHJvcG9zZWRfaWQgPQpkaWZmIC0tZ2l0IGEvdG9vbHMvb2NhbWwv
eGVuc3RvcmVkL2Nvbm5lY3Rpb25zLm1sIGIvdG9vbHMvb2NhbWwveGVuc3Rv
cmVkL2Nvbm5lY3Rpb25zLm1sCmluZGV4IGFlNzY5MjgxOWQuLjAyMGI4NzVk
Y2QgMTAwNjQ0Ci0tLSBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9jb25uZWN0
aW9ucy5tbAorKysgYi90b29scy9vY2FtbC94ZW5zdG9yZWQvY29ubmVjdGlv
bnMubWwKQEAgLTEzNSwyNSArMTM1LDI2IEBAIGxldCBkZWxfd2F0Y2ggY29u
cyBjb24gcGF0aCB0b2tlbiA9CiAgCXdhdGNoCiAKICgqIHBhdGggaXMgYWJz
b2x1dGUgKikKLWxldCBmaXJlX3dhdGNoZXMgY29ucyBwYXRoIHJlY3Vyc2Ug
PQorbGV0IGZpcmVfd2F0Y2hlcyA/b2xkcm9vdCByb290IGNvbnMgcGF0aCBy
ZWN1cnNlID0KIAlsZXQga2V5ID0ga2V5X29mX3BhdGggcGF0aCBpbgogCWxl
dCBwYXRoID0gU3RvcmUuUGF0aC50b19zdHJpbmcgcGF0aCBpbgorCWxldCBy
b290cyA9IG9sZHJvb3QsIHJvb3QgaW4KIAlsZXQgZmlyZV93YXRjaCBfID0g
ZnVuY3Rpb24KIAkJfCBOb25lICAgICAgICAgLT4gKCkKLQkJfCBTb21lIHdh
dGNoZXMgLT4gTGlzdC5pdGVyIChmdW4gdyAtPiBDb25uZWN0aW9uLmZpcmVf
d2F0Y2ggdyBwYXRoKSB3YXRjaGVzCisJCXwgU29tZSB3YXRjaGVzIC0+IExp
c3QuaXRlciAoZnVuIHcgLT4gQ29ubmVjdGlvbi5maXJlX3dhdGNoIHJvb3Rz
IHcgcGF0aCkgd2F0Y2hlcwogCWluCiAJbGV0IGZpcmVfcmVjIHggPSBmdW5j
dGlvbgogCQl8IE5vbmUgICAgICAgICAtPiAoKQogCQl8IFNvbWUgd2F0Y2hl
cyAtPiAKLQkJCSAgTGlzdC5pdGVyIChmdW4gdyAtPiBDb25uZWN0aW9uLmZp
cmVfc2luZ2xlX3dhdGNoIHcpIHdhdGNoZXMKKwkJCUxpc3QuaXRlciAoQ29u
bmVjdGlvbi5maXJlX3NpbmdsZV93YXRjaCByb290cykgd2F0Y2hlcwogCWlu
CiAJVHJpZS5pdGVyX3BhdGggZmlyZV93YXRjaCBjb25zLndhdGNoZXMga2V5
OwogCWlmIHJlY3Vyc2UgdGhlbgogCQlUcmllLml0ZXIgZmlyZV9yZWMgKFRy
aWUuc3ViIGNvbnMud2F0Y2hlcyBrZXkpCiAKLWxldCBmaXJlX3NwZWNfd2F0
Y2hlcyBjb25zIHNwZWNwYXRoID0KK2xldCBmaXJlX3NwZWNfd2F0Y2hlcyBy
b290IGNvbnMgc3BlY3BhdGggPQogCWl0ZXIgY29ucyAoZnVuIGNvbiAtPgot
CQlMaXN0Lml0ZXIgKGZ1biB3IC0+IENvbm5lY3Rpb24uZmlyZV9zaW5nbGVf
d2F0Y2ggdykgKENvbm5lY3Rpb24uZ2V0X3dhdGNoZXMgY29uIHNwZWNwYXRo
KSkKKwkJTGlzdC5pdGVyIChDb25uZWN0aW9uLmZpcmVfc2luZ2xlX3dhdGNo
IChOb25lLCByb290KSkgKENvbm5lY3Rpb24uZ2V0X3dhdGNoZXMgY29uIHNw
ZWNwYXRoKSkKIAogbGV0IHNldF90YXJnZXQgY29ucyBkb21haW4gdGFyZ2V0
X2RvbWFpbiA9CiAJbGV0IGNvbiA9IGZpbmRfZG9tYWluIGNvbnMgZG9tYWlu
IGluCmRpZmYgLS1naXQgYS90b29scy9vY2FtbC94ZW5zdG9yZWQvbG9nZ2lu
Zy5tbCBiL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9sb2dnaW5nLm1sCmluZGV4
IGVhNjAzMzE5NWQuLjk5YzdiYzVlMTMgMTAwNjQ0Ci0tLSBhL3Rvb2xzL29j
YW1sL3hlbnN0b3JlZC9sb2dnaW5nLm1sCisrKyBiL3Rvb2xzL29jYW1sL3hl
bnN0b3JlZC9sb2dnaW5nLm1sCkBAIC0xNjEsNiArMTYxLDggQEAgbGV0IHhl
bnN0b3JlZF9sb2dfbmJfbGluZXMgPSByZWYgMTMyMTUKIGxldCB4ZW5zdG9y
ZWRfbG9nX25iX2NoYXJzID0gcmVmICgtMSkKIGxldCB4ZW5zdG9yZWRfbG9n
Z2VyID0gcmVmIChOb25lOiBsb2dnZXIgb3B0aW9uKQogCitsZXQgZGVidWdf
ZW5hYmxlZCAoKSA9ICF4ZW5zdG9yZWRfbG9nX2xldmVsID0gRGVidWcKKwog
bGV0IHNldF94ZW5zdG9yZWRfbG9nX2Rlc3RpbmF0aW9uIHMgPQogCXhlbnN0
b3JlZF9sb2dfZGVzdGluYXRpb24gOj0gbG9nX2Rlc3RpbmF0aW9uX29mX3N0
cmluZyBzCiAKQEAgLTIwNCw2ICsyMDYsNyBAQCB0eXBlIGFjY2Vzc190eXBl
ID0KIAl8IENvbW1pdAogCXwgTmV3Y29ubgogCXwgRW5kY29ubgorCXwgV2F0
Y2hfbm90X2ZpcmVkCiAJfCBYYk9wIG9mIFhlbmJ1cy5YYi5PcC5vcGVyYXRp
b24KIAogbGV0IHN0cmluZ19vZl90aWQgfmNvbiB0aWQgPQpAQCAtMjE3LDYg
KzIyMCw3IEBAIGxldCBzdHJpbmdfb2ZfYWNjZXNzX3R5cGUgPSBmdW5jdGlv
bgogCXwgQ29tbWl0ICAgICAgICAgICAgICAgICAgLT4gImNvbW1pdCAgICIK
IAl8IE5ld2Nvbm4gICAgICAgICAgICAgICAgIC0+ICJuZXdjb25uICAiCiAJ
fCBFbmRjb25uICAgICAgICAgICAgICAgICAtPiAiZW5kY29ubiAgIgorCXwg
V2F0Y2hfbm90X2ZpcmVkICAgICAgICAgLT4gIncgbm90ZmlyZWQiCiAKIAl8
IFhiT3Agb3AgLT4gbWF0Y2ggb3Agd2l0aAogCXwgWGVuYnVzLlhiLk9wLkRl
YnVnICAgICAgICAgICAgIC0+ICJkZWJ1ZyAgICAiCkBAIC0zMzEsMyArMzM1
LDcgQEAgbGV0IHhiX2Fuc3dlciB+dGlkIH5jb24gfnR5IGRhdGEgPQogCQl8
IF8gLT4gZmFsc2UsIERlYnVnCiAJaW4KIAlpZiBwcmludCB0aGVuIGFjY2Vz
c19sb2dnaW5nIH50aWQgfmNvbiB+ZGF0YSAoWGJPcCB0eSkgfmxldmVsCisK
K2xldCB3YXRjaF9ub3RfZmlyZWQgfmNvbiBwZXJtcyBwYXRoID0KKwlsZXQg
ZGF0YSA9IFByaW50Zi5zcHJpbnRmICJFUEVSTSBwZXJtcz1bJXNdIHBhdGg9
JXMiIHBlcm1zIHBhdGggaW4KKwlhY2Nlc3NfbG9nZ2luZyB+dGlkOjAgfmNv
biB+ZGF0YSBXYXRjaF9ub3RfZmlyZWQgfmxldmVsOkluZm8KZGlmZiAtLWdp
dCBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wZXJtcy5tbCBiL3Rvb2xzL29j
YW1sL3hlbnN0b3JlZC9wZXJtcy5tbAppbmRleCAzZWExOTNlYTE0Li4yM2I4
MGFiYTNkIDEwMDY0NAotLS0gYS90b29scy9vY2FtbC94ZW5zdG9yZWQvcGVy
bXMubWwKKysrIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3Blcm1zLm1sCkBA
IC03OSw5ICs3OSw5IEBAIGxldCBvZl9zdHJpbmcgcyA9CiBsZXQgc3RyaW5n
X29mX3Blcm0gcGVybSA9CiAJUHJpbnRmLnNwcmludGYgIiVjJXUiIChjaGFy
X29mX3Blcm10eSAoc25kIHBlcm0pKSAoZnN0IHBlcm0pCiAKLWxldCB0b19z
dHJpbmcgcGVybXZlYyA9CitsZXQgdG9fc3RyaW5nID8oc2VwPSJcMDAwIikg
cGVybXZlYyA9CiAJbGV0IGwgPSAoKHBlcm12ZWMub3duZXIsIHBlcm12ZWMu
b3RoZXIpIDo6IHBlcm12ZWMuYWNsKSBpbgotCVN0cmluZy5jb25jYXQgIlww
MDAiIChMaXN0Lm1hcCBzdHJpbmdfb2ZfcGVybSBsKQorCVN0cmluZy5jb25j
YXQgc2VwIChMaXN0Lm1hcCBzdHJpbmdfb2ZfcGVybSBsKQogCiBlbmQKIApA
QCAtMTMyLDggKzEzMiw4IEBAIGxldCBjaGVja19vd25lciAoY29ubmVjdGlv
bjpDb25uZWN0aW9uLnQpIChub2RlOk5vZGUudCkgPQogCXRoZW4gQ29ubmVj
dGlvbi5pc19vd25lciBjb25uZWN0aW9uIChOb2RlLmdldF9vd25lciBub2Rl
KQogCWVsc2UgdHJ1ZQogCi0oKiBjaGVjayBpZiB0aGUgY3VycmVudCBjb25u
ZWN0aW9uIGhhcyB0aGUgcmVxdWVzdGVkIHBlcm0gb24gdGhlIGN1cnJlbnQg
bm9kZSAqKQotbGV0IGNoZWNrIChjb25uZWN0aW9uOkNvbm5lY3Rpb24udCkg
cmVxdWVzdCAobm9kZTpOb2RlLnQpID0KKygqIGNoZWNrIGlmIHRoZSBjdXJy
ZW50IGNvbm5lY3Rpb24gbGFja3MgdGhlIHJlcXVlc3RlZCBwZXJtIG9uIHRo
ZSBjdXJyZW50IG5vZGUgKikKK2xldCBsYWNrcyAoY29ubmVjdGlvbjpDb25u
ZWN0aW9uLnQpIHJlcXVlc3QgKG5vZGU6Tm9kZS50KSA9CiAJbGV0IGNoZWNr
X2FjbCBkb21haW5pZCA9CiAJCWxldCBwZXJtID0KIAkJCWlmIExpc3QubWVt
X2Fzc29jIGRvbWFpbmlkIChOb2RlLmdldF9hY2wgbm9kZSkKQEAgLTE1NCwx
MSArMTU0LDE5IEBAIGxldCBjaGVjayAoY29ubmVjdGlvbjpDb25uZWN0aW9u
LnQpIHJlcXVlc3QgKG5vZGU6Tm9kZS50KSA9CiAJCQlpbmZvICJQZXJtaXNz
aW9uIGRlbmllZDogRG9tYWluICVkIGhhcyB3cml0ZSBvbmx5IGFjY2VzcyIg
ZG9tYWluaWQ7CiAJCQlmYWxzZQogCWluCi0JaWYgIWFjdGl2YXRlCisJIWFj
dGl2YXRlCiAJJiYgbm90IChDb25uZWN0aW9uLmlzX2RvbTAgY29ubmVjdGlv
bikKIAkmJiBub3QgKGNoZWNrX293bmVyIGNvbm5lY3Rpb24gbm9kZSkKIAkm
JiBub3QgKExpc3QuZXhpc3RzIGNoZWNrX2FjbCAoQ29ubmVjdGlvbi5nZXRf
b3duZXJzIGNvbm5lY3Rpb24pKQorCisoKiBjaGVjayBpZiB0aGUgY3VycmVu
dCBjb25uZWN0aW9uIGhhcyB0aGUgcmVxdWVzdGVkIHBlcm0gb24gdGhlIGN1
cnJlbnQgbm9kZS4KKyogIFJhaXNlcyBhbiBleGNlcHRpb24gaWYgaXQgZG9l
c24ndC4gKikKK2xldCBjaGVjayBjb25uZWN0aW9uIHJlcXVlc3Qgbm9kZSA9
CisJaWYgbGFja3MgY29ubmVjdGlvbiByZXF1ZXN0IG5vZGUKIAl0aGVuIHJh
aXNlIERlZmluZS5QZXJtaXNzaW9uX2RlbmllZAogCisoKiBjaGVjayBpZiB0
aGUgY3VycmVudCBjb25uZWN0aW9uIGhhcyB0aGUgcmVxdWVzdGVkIHBlcm0g
b24gdGhlIGN1cnJlbnQgbm9kZSAqKQorbGV0IGhhcyBjb25uZWN0aW9uIHJl
cXVlc3Qgbm9kZSA9IG5vdCAobGFja3MgY29ubmVjdGlvbiByZXF1ZXN0IG5v
ZGUpCisKIGxldCBlcXVpdiBwZXJtMSBwZXJtMiA9CiAJKE5vZGUudG9fc3Ry
aW5nIHBlcm0xKSA9IChOb2RlLnRvX3N0cmluZyBwZXJtMikKZGlmZiAtLWdp
dCBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wcm9jZXNzLm1sIGIvdG9vbHMv
b2NhbWwveGVuc3RvcmVkL3Byb2Nlc3MubWwKaW5kZXggYzNjOGVhMmY0Yi4u
M2NkMDA5N2RiOSAxMDA2NDQKLS0tIGEvdG9vbHMvb2NhbWwveGVuc3RvcmVk
L3Byb2Nlc3MubWwKKysrIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3Byb2Nl
c3MubWwKQEAgLTU2LDE1ICs1NiwxNyBAQCBsZXQgc3BsaXRfb25lX3BhdGgg
ZGF0YSBjb24gPQogCXwgcGF0aCA6OiAiIiA6OiBbXSAtPiBTdG9yZS5QYXRo
LmNyZWF0ZSBwYXRoIChDb25uZWN0aW9uLmdldF9wYXRoIGNvbikKIAl8IF8g
ICAgICAgICAgICAgICAgLT4gcmFpc2UgSW52YWxpZF9DbWRfQXJncwogCi1s
ZXQgcHJvY2Vzc193YXRjaCBvcHMgY29ucyA9CitsZXQgcHJvY2Vzc193YXRj
aCB0IGNvbnMgPQorCWxldCBvbGRyb290ID0gdC5UcmFuc2FjdGlvbi5vbGRy
b290IGluCisJbGV0IG5ld3Jvb3QgPSBTdG9yZS5nZXRfcm9vdCB0LnN0b3Jl
IGluCisJbGV0IG9wcyA9IFRyYW5zYWN0aW9uLmdldF9wYXRocyB0IHw+IExp
c3QucmV2IGluCiAJbGV0IGRvX29wX3dhdGNoIG9wIGNvbnMgPQotCQlsZXQg
cmVjdXJzZSA9IG1hdGNoIChmc3Qgb3ApIHdpdGgKLQkJfCBYZW5idXMuWGIu
T3AuV3JpdGUgICAgLT4gZmFsc2UKLQkJfCBYZW5idXMuWGIuT3AuTWtkaXIg
ICAgLT4gZmFsc2UKLQkJfCBYZW5idXMuWGIuT3AuUm0gICAgICAgLT4gdHJ1
ZQotCQl8IFhlbmJ1cy5YYi5PcC5TZXRwZXJtcyAtPiBmYWxzZQorCQlsZXQg
cmVjdXJzZSwgb2xkcm9vdCwgcm9vdCA9IG1hdGNoIChmc3Qgb3ApIHdpdGgK
KwkJfCBYZW5idXMuWGIuT3AuV3JpdGV8WGVuYnVzLlhiLk9wLk1rZGlyIC0+
IGZhbHNlLCBOb25lLCBuZXdyb290CisJCXwgWGVuYnVzLlhiLk9wLlJtICAg
ICAgIC0+IHRydWUsIE5vbmUsIG9sZHJvb3QKKwkJfCBYZW5idXMuWGIuT3Au
U2V0cGVybXMgLT4gZmFsc2UsIFNvbWUgb2xkcm9vdCwgbmV3cm9vdAogCQl8
IF8gICAgICAgICAgICAgIC0+IHJhaXNlIChGYWlsdXJlICJodWggPyIpIGlu
Ci0JCUNvbm5lY3Rpb25zLmZpcmVfd2F0Y2hlcyBjb25zIChzbmQgb3ApIHJl
Y3Vyc2UgaW4KKwkJQ29ubmVjdGlvbnMuZmlyZV93YXRjaGVzID9vbGRyb290
IHJvb3QgY29ucyAoc25kIG9wKSByZWN1cnNlIGluCiAJTGlzdC5pdGVyIChm
dW4gb3AgLT4gZG9fb3Bfd2F0Y2ggb3AgY29ucykgb3BzCiAKIGxldCBjcmVh
dGVfaW1wbGljaXRfcGF0aCB0IHBlcm0gcGF0aCA9CkBAIC0yMDUsNyArMjA3
LDcgQEAgbGV0IHJlcGx5X2FjayBmY3QgY29uIHQgZG9tcyBjb25zIGRhdGEg
PQogCWZjdCBjb24gdCBkb21zIGNvbnMgZGF0YTsKIAlQYWNrZXQuQWNrIChm
dW4gKCkgLT4KIAkJaWYgVHJhbnNhY3Rpb24uZ2V0X2lkIHQgPSBUcmFuc2Fj
dGlvbi5ub25lIHRoZW4KLQkJCXByb2Nlc3Nfd2F0Y2ggKFRyYW5zYWN0aW9u
LmdldF9wYXRocyB0KSBjb25zCisJCQlwcm9jZXNzX3dhdGNoIHQgY29ucwog
CSkKIAogbGV0IHJlcGx5X2RhdGEgZmN0IGNvbiB0IGRvbXMgY29ucyBkYXRh
ID0KQEAgLTM1MywxNCArMzU1LDE3IEBAIGxldCB0cmFuc2FjdGlvbl9yZXBs
YXkgYyB0IGRvbXMgY29ucyA9CiAJCQlDb25uZWN0aW9uLmVuZF90cmFuc2Fj
dGlvbiBjIHRpZCBOb25lCiAJCSkKIAotbGV0IGRvX3dhdGNoIGNvbiB0IGRv
bWFpbnMgY29ucyBkYXRhID0KK2xldCBkb193YXRjaCBjb24gdCBfZG9tYWlu
cyBjb25zIGRhdGEgPQogCWxldCAobm9kZSwgdG9rZW4pID0gCiAJCW1hdGNo
IChzcGxpdCBOb25lICdcMDAwJyBkYXRhKSB3aXRoCiAJCXwgW25vZGU7IHRv
a2VuOyAiIl0gICAtPiBub2RlLCB0b2tlbgogCQl8IF8gICAgICAgICAgICAg
ICAgICAgLT4gcmFpc2UgSW52YWxpZF9DbWRfQXJncwogCQlpbgogCWxldCB3
YXRjaCA9IENvbm5lY3Rpb25zLmFkZF93YXRjaCBjb25zIGNvbiBub2RlIHRv
a2VuIGluCi0JUGFja2V0LkFjayAoZnVuICgpIC0+IENvbm5lY3Rpb24uZmly
ZV9zaW5nbGVfd2F0Y2ggd2F0Y2gpCisJUGFja2V0LkFjayAoZnVuICgpIC0+
CisJCSgqIHhlbnN0b3JlLnR4dCBzYXlzIHRoaXMgd2F0Y2ggaXMgZmlyZWQg
aW1tZWRpYXRlbHksCisJCSAgIGltcGx5aW5nIGV2ZW4gaWYgcGF0aCBkb2Vz
bid0IGV4aXN0IG9yIGlzIHVucmVhZGFibGUgKikKKwkJQ29ubmVjdGlvbi5m
aXJlX3NpbmdsZV93YXRjaF91bmNoZWNrZWQgd2F0Y2gpCiAKIGxldCBkb191
bndhdGNoIGNvbiB0IGRvbWFpbnMgY29ucyBkYXRhID0KIAlsZXQgKG5vZGUs
IHRva2VuKSA9CkBAIC0zOTEsNyArMzk2LDcgQEAgbGV0IGRvX3RyYW5zYWN0
aW9uX2VuZCBjb24gdCBkb21haW5zIGNvbnMgZGF0YSA9CiAJaWYgbm90IHN1
Y2Nlc3MgdGhlbgogCQlyYWlzZSBUcmFuc2FjdGlvbl9hZ2FpbjsKIAlpZiBj
b21taXQgdGhlbiBiZWdpbgotCQlwcm9jZXNzX3dhdGNoIChMaXN0LnJldiAo
VHJhbnNhY3Rpb24uZ2V0X3BhdGhzIHQpKSBjb25zOworCQlwcm9jZXNzX3dh
dGNoIHQgY29uczsKIAkJbWF0Y2ggdC5UcmFuc2FjdGlvbi50eSB3aXRoCiAJ
CXwgVHJhbnNhY3Rpb24uTm8gLT4KIAkJCSgpICgqIG5vIG5lZWQgdG8gcmVj
b3JkIGFueXRoaW5nICopCkBAIC00MTQsNyArNDE5LDcgQEAgbGV0IGRvX2lu
dHJvZHVjZSBjb24gdCBkb21haW5zIGNvbnMgZGF0YSA9CiAJCWVsc2UgdHJ5
CiAJCQlsZXQgbmRvbSA9IERvbWFpbnMuY3JlYXRlIGRvbWFpbnMgZG9taWQg
bWZuIHBvcnQgaW4KIAkJCUNvbm5lY3Rpb25zLmFkZF9kb21haW4gY29ucyBu
ZG9tOwotCQkJQ29ubmVjdGlvbnMuZmlyZV9zcGVjX3dhdGNoZXMgY29ucyBT
dG9yZS5QYXRoLmludHJvZHVjZV9kb21haW47CisJCQlDb25uZWN0aW9ucy5m
aXJlX3NwZWNfd2F0Y2hlcyAoVHJhbnNhY3Rpb24uZ2V0X3Jvb3QgdCkgY29u
cyBTdG9yZS5QYXRoLmludHJvZHVjZV9kb21haW47CiAJCQluZG9tCiAJCXdp
dGggXyAtPiByYWlzZSBJbnZhbGlkX0NtZF9BcmdzCiAJaW4KQEAgLTQzMyw3
ICs0MzgsNyBAQCBsZXQgZG9fcmVsZWFzZSBjb24gdCBkb21haW5zIGNvbnMg
ZGF0YSA9CiAJRG9tYWlucy5kZWwgZG9tYWlucyBkb21pZDsKIAlDb25uZWN0
aW9ucy5kZWxfZG9tYWluIGNvbnMgZG9taWQ7CiAJaWYgZmlyZV9zcGVjX3dh
dGNoZXMgCi0JdGhlbiBDb25uZWN0aW9ucy5maXJlX3NwZWNfd2F0Y2hlcyBj
b25zIFN0b3JlLlBhdGgucmVsZWFzZV9kb21haW4KKwl0aGVuIENvbm5lY3Rp
b25zLmZpcmVfc3BlY193YXRjaGVzIChUcmFuc2FjdGlvbi5nZXRfcm9vdCB0
KSBjb25zIFN0b3JlLlBhdGgucmVsZWFzZV9kb21haW4KIAllbHNlIHJhaXNl
IEludmFsaWRfQ21kX0FyZ3MKIAogbGV0IGRvX3Jlc3VtZSBjb24gdCBkb21h
aW5zIGNvbnMgZGF0YSA9CkBAIC01MDEsNiArNTA2LDggQEAgbGV0IG1heWJl
X2lnbm9yZV90cmFuc2FjdGlvbiA9IGZ1bmN0aW9uCiAJCVRyYW5zYWN0aW9u
Lm5vbmUKIAl8IF8gLT4gZnVuIHggLT4geAogCisKK2xldCAoKSA9IFByaW50
ZXhjLnJlY29yZF9iYWNrdHJhY2UgdHJ1ZQogKCoqCiAgKiBOb3Rocm93IGd1
YXJhbnRlZS4KICAqKQpAQCAtNTQyLDcgKzU0OSw4IEBAIGxldCBwcm9jZXNz
X3BhY2tldCB+c3RvcmUgfmNvbnMgfmRvbXMgfmNvbiB+cmVxID0KIAkJKCog
UHV0IHRoZSByZXNwb25zZSBvbiB0aGUgd2lyZSAqKQogCQlzZW5kX3Jlc3Bv
bnNlIHR5IGNvbiB0IHJpZCByZXNwb25zZQogCXdpdGggZXhuIC0+Ci0JCWVy
cm9yICJwcm9jZXNzIHBhY2tldDogJXMiIChQcmludGV4Yy50b19zdHJpbmcg
ZXhuKTsKKwkJbGV0IGJ0ID0gUHJpbnRleGMuZ2V0X2JhY2t0cmFjZSAoKSBp
bgorCQllcnJvciAicHJvY2VzcyBwYWNrZXQ6ICVzLiAlcyIgKFByaW50ZXhj
LnRvX3N0cmluZyBleG4pIGJ0OwogCQlDb25uZWN0aW9uLnNlbmRfZXJyb3Ig
Y29uIHRpZCByaWQgIkVJTyIKIAogbGV0IGRvX2lucHV0IHN0b3JlIGNvbnMg
ZG9tcyBjb24gPQpkaWZmIC0tZ2l0IGEvdG9vbHMvb2NhbWwveGVuc3RvcmVk
L3RyYW5zYWN0aW9uLm1sIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3RyYW5z
YWN0aW9uLm1sCmluZGV4IDIzZTdjY2ZmMWIuLjllOWUyOGRiOWIgMTAwNjQ0
Ci0tLSBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC90cmFuc2FjdGlvbi5tbAor
KysgYi90b29scy9vY2FtbC94ZW5zdG9yZWQvdHJhbnNhY3Rpb24ubWwKQEAg
LTgyLDYgKzgyLDcgQEAgdHlwZSB0ID0gewogCXN0YXJ0X2NvdW50OiBpbnQ2
NDsKIAlzdG9yZTogU3RvcmUudDsgKCogVGhpcyBpcyB0aGUgc3RvcmUgdGhh
dCB3ZSBjaGFuZ2UgaW4gd3JpdGUgb3BlcmF0aW9ucy4gKikKIAlxdW90YTog
UXVvdGEudDsKKwlvbGRyb290OiBTdG9yZS5Ob2RlLnQ7CiAJbXV0YWJsZSBw
YXRoczogKFhlbmJ1cy5YYi5PcC5vcGVyYXRpb24gKiBTdG9yZS5QYXRoLnQp
IGxpc3Q7CiAJbXV0YWJsZSBvcGVyYXRpb25zOiAoUGFja2V0LnJlcXVlc3Qg
KiBQYWNrZXQucmVzcG9uc2UpIGxpc3Q7CiAJbXV0YWJsZSByZWFkX2xvd3Bh
dGg6IFN0b3JlLlBhdGgudCBvcHRpb247CkBAIC0xMjMsNiArMTI0LDcgQEAg
bGV0IG1ha2UgPyhpbnRlcm5hbD1mYWxzZSkgaWQgc3RvcmUgPQogCQlzdGFy
dF9jb3VudCA9ICFjb3VudGVyOwogCQlzdG9yZSA9IGlmIGlkID0gbm9uZSB0
aGVuIHN0b3JlIGVsc2UgU3RvcmUuY29weSBzdG9yZTsKIAkJcXVvdGEgPSBR
dW90YS5jb3B5IHN0b3JlLlN0b3JlLnF1b3RhOworCQlvbGRyb290ID0gU3Rv
cmUuZ2V0X3Jvb3Qgc3RvcmU7CiAJCXBhdGhzID0gW107CiAJCW9wZXJhdGlv
bnMgPSBbXTsKIAkJcmVhZF9sb3dwYXRoID0gTm9uZTsKQEAgLTEzNyw2ICsx
MzksOCBAQCBsZXQgbWFrZSA/KGludGVybmFsPWZhbHNlKSBpZCBzdG9yZSA9
CiBsZXQgZ2V0X3N0b3JlIHQgPSB0LnN0b3JlCiBsZXQgZ2V0X3BhdGhzIHQg
PSB0LnBhdGhzCiAKK2xldCBnZXRfcm9vdCB0ID0gU3RvcmUuZ2V0X3Jvb3Qg
dC5zdG9yZQorCiBsZXQgaXNfcmVhZF9vbmx5IHQgPSB0LnBhdGhzID0gW10K
IGxldCBhZGRfd29wIHQgdHkgcGF0aCA9IHQucGF0aHMgPC0gKHR5LCBwYXRo
KSA6OiB0LnBhdGhzCiBsZXQgYWRkX29wZXJhdGlvbiB+cGVybSB0IHJlcXVl
c3QgcmVzcG9uc2UgPQpkaWZmIC0tZ2l0IGEvdG9vbHMvb2NhbWwveGVuc3Rv
cmVkL3hlbnN0b3JlZC5tbCBiL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC94ZW5z
dG9yZWQubWwKaW5kZXggMzJjM2IxYzBmMS4uZTlmNDcxODQ2ZiAxMDA2NDQK
LS0tIGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3hlbnN0b3JlZC5tbAorKysg
Yi90b29scy9vY2FtbC94ZW5zdG9yZWQveGVuc3RvcmVkLm1sCkBAIC0zNDEs
NyArMzQxLDkgQEAgbGV0IF8gPQogCQkJCQlsZXQgKG5vdGlmeSwgZGVhZGRv
bSkgPSBEb21haW5zLmNsZWFudXAgZG9tYWlucyBpbgogCQkJCQlMaXN0Lml0
ZXIgKENvbm5lY3Rpb25zLmRlbF9kb21haW4gY29ucykgZGVhZGRvbTsKIAkJ
CQkJaWYgZGVhZGRvbSA8PiBbXSB8fCBub3RpZnkgdGhlbgotCQkJCQkJQ29u
bmVjdGlvbnMuZmlyZV9zcGVjX3dhdGNoZXMgY29ucyBTdG9yZS5QYXRoLnJl
bGVhc2VfZG9tYWluCisJCQkJCQlDb25uZWN0aW9ucy5maXJlX3NwZWNfd2F0
Y2hlcworCQkJCQkJCShTdG9yZS5nZXRfcm9vdCBzdG9yZSkKKwkJCQkJCQlj
b25zIFN0b3JlLlBhdGgucmVsZWFzZV9kb21haW4KIAkJCQkpCiAJCQkJZWxz
ZQogCQkJCQlsZXQgYyA9IENvbm5lY3Rpb25zLmZpbmRfZG9tYWluX2J5X3Bv
cnQgY29ucyBwb3J0IGluCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.11-o/0006-tools-ocaml-xenstored-add-xenstored.conf-flag-to-tur.patch"
Content-Disposition: attachment;
 filename="xsa115-4.11-o/0006-tools-ocaml-xenstored-add-xenstored.conf-flag-to-tur.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogYWRkIHhlbnN0b3JlZC5jb25mIGZsYWcgdG8gdHVybiBvZmYg
d2F0Y2gKIHBlcm1pc3Npb24gY2hlY2tzCk1JTUUtVmVyc2lvbjogMS4wCkNv
bnRlbnQtVHlwZTogdGV4dC9wbGFpbjsgY2hhcnNldD1VVEYtOApDb250ZW50
LVRyYW5zZmVyLUVuY29kaW5nOiA4Yml0CgpUaGVyZSBhcmUgZmxhZ3MgdG8g
dHVybiBvZmYgcXVvdGFzIGFuZCB0aGUgcGVybWlzc2lvbiBzeXN0ZW0sIHNv
IGFkZCBvbmUKdGhhdCB0dXJucyBvZmYgdGhlIG5ld2x5IGludHJvZHVjZWQg
d2F0Y2ggcGVybWlzc2lvbiBjaGVja3MgYXMgd2VsbC4KClRoaXMgaXMgcGFy
dCBvZiBYU0EtMTE1LgoKU2lnbmVkLW9mZi1ieTogRWR3aW4gVMO2csO2ayA8
ZWR2aW4udG9yb2tAY2l0cml4LmNvbT4KQWNrZWQtYnk6IENocmlzdGlhbiBM
aW5kaWcgPGNocmlzdGlhbi5saW5kaWdAY2l0cml4LmNvbT4KUmV2aWV3ZWQt
Ynk6IEFuZHJldyBDb29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+
CgpkaWZmIC0tZ2l0IGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL2Nvbm5lY3Rp
b24ubWwgYi90b29scy9vY2FtbC94ZW5zdG9yZWQvY29ubmVjdGlvbi5tbApp
bmRleCAxMzg5ZDk3MWMyLi42OThmNzIxMzQ1IDEwMDY0NAotLS0gYS90b29s
cy9vY2FtbC94ZW5zdG9yZWQvY29ubmVjdGlvbi5tbAorKysgYi90b29scy9v
Y2FtbC94ZW5zdG9yZWQvY29ubmVjdGlvbi5tbApAQCAtMjE4LDcgKzIxOCw3
IEBAIGxldCBmaXJlX3NpbmdsZV93YXRjaF91bmNoZWNrZWQgd2F0Y2ggPQog
bGV0IGZpcmVfc2luZ2xlX3dhdGNoIChvbGRyb290LCByb290KSB3YXRjaCA9
CiAJbGV0IGFic3BhdGggPSBnZXRfd2F0Y2hfcGF0aCB3YXRjaC5jb24gd2F0
Y2gucGF0aCB8PiBTdG9yZS5QYXRoLm9mX3N0cmluZyBpbgogCWxldCBwZXJt
cyA9IGxvb2t1cF93YXRjaF9wZXJtcyBvbGRyb290IHJvb3QgYWJzcGF0aCBp
bgotCWlmIExpc3QuZXhpc3RzIChQZXJtcy5oYXMgd2F0Y2guY29uLnBlcm0g
UkVBRCkgcGVybXMgdGhlbgorCWlmIFBlcm1zLmNhbl9maXJlX3dhdGNoIHdh
dGNoLmNvbi5wZXJtIHBlcm1zIHRoZW4KIAkJZmlyZV9zaW5nbGVfd2F0Y2hf
dW5jaGVja2VkIHdhdGNoCiAJZWxzZQogCQlsZXQgcGVybXMgPSBwZXJtcyB8
PiBMaXN0Lm1hcCAoUGVybXMuTm9kZS50b19zdHJpbmcgfnNlcDoiICIpIHw+
IFN0cmluZy5jb25jYXQgIiwgIiBpbgpkaWZmIC0tZ2l0IGEvdG9vbHMvb2Nh
bWwveGVuc3RvcmVkL294ZW5zdG9yZWQuY29uZi5pbiBiL3Rvb2xzL29jYW1s
L3hlbnN0b3JlZC9veGVuc3RvcmVkLmNvbmYuaW4KaW5kZXggNjU3OWI4NDQ0
OC4uZDVkNGYwMGRlOCAxMDA2NDQKLS0tIGEvdG9vbHMvb2NhbWwveGVuc3Rv
cmVkL294ZW5zdG9yZWQuY29uZi5pbgorKysgYi90b29scy9vY2FtbC94ZW5z
dG9yZWQvb3hlbnN0b3JlZC5jb25mLmluCkBAIC00NCw2ICs0NCwxNiBAQCBj
b25mbGljdC1yYXRlLWxpbWl0LWlzLWFnZ3JlZ2F0ZSA9IHRydWUKICMgQWN0
aXZhdGUgbm9kZSBwZXJtaXNzaW9uIHN5c3RlbQogcGVybXMtYWN0aXZhdGUg
PSB0cnVlCiAKKyMgQWN0aXZhdGUgdGhlIHdhdGNoIHBlcm1pc3Npb24gc3lz
dGVtCisjIFdoZW4gdGhpcyBpcyBlbmFibGVkIHVucHJpdmlsZWdlZCBndWVz
dHMgY2FuIG9ubHkgZ2V0IHdhdGNoIGV2ZW50cworIyBmb3IgeGVuc3RvcmUg
ZW50cmllcyB0aGF0IHRoZXkgd291bGQndmUgYmVlbiBhYmxlIHRvIHJlYWQu
CisjCisjIFdoZW4gdGhpcyBpcyBkaXNhYmxlZCB1bnByaXZpbGVnZWQgZ3Vl
c3RzIG1heSBnZXQgd2F0Y2ggZXZlbnRzCisjIGZvciB4ZW5zdG9yZSBlbnRy
aWVzIHRoYXQgdGhleSBjYW5ub3QgcmVhZC4gVGhlIHdhdGNoIGV2ZW50IGNv
bnRhaW5zCisjIG9ubHkgdGhlIGVudHJ5IG5hbWUsIG5vdCB0aGUgdmFsdWUu
CisjIFRoaXMgcmVzdG9yZXMgYmVoYXZpb3VyIHByaW9yIHRvIFhTQS0xMTUu
CitwZXJtcy13YXRjaC1hY3RpdmF0ZSA9IHRydWUKKwogIyBBY3RpdmF0ZSBx
dW90YQogcXVvdGEtYWN0aXZhdGUgPSB0cnVlCiBxdW90YS1tYXhlbnRpdHkg
PSAxMDAwCmRpZmYgLS1naXQgYS90b29scy9vY2FtbC94ZW5zdG9yZWQvcGVy
bXMubWwgYi90b29scy9vY2FtbC94ZW5zdG9yZWQvcGVybXMubWwKaW5kZXgg
MjNiODBhYmEzZC4uZWU3ZmVlNmJkYSAxMDA2NDQKLS0tIGEvdG9vbHMvb2Nh
bWwveGVuc3RvcmVkL3Blcm1zLm1sCisrKyBiL3Rvb2xzL29jYW1sL3hlbnN0
b3JlZC9wZXJtcy5tbApAQCAtMjAsNiArMjAsNyBAQCBsZXQgaW5mbyBmbXQg
PSBMb2dnaW5nLmluZm8gInBlcm1zIiBmbXQKIG9wZW4gU3RkZXh0CiAKIGxl
dCBhY3RpdmF0ZSA9IHJlZiB0cnVlCitsZXQgd2F0Y2hfYWN0aXZhdGUgPSBy
ZWYgdHJ1ZQogCiB0eXBlIHBlcm10eSA9IFJFQUQgfCBXUklURSB8IFJEV1Ig
fCBOT05FCiAKQEAgLTE2OCw1ICsxNjksOSBAQCBsZXQgY2hlY2sgY29ubmVj
dGlvbiByZXF1ZXN0IG5vZGUgPQogKCogY2hlY2sgaWYgdGhlIGN1cnJlbnQg
Y29ubmVjdGlvbiBoYXMgdGhlIHJlcXVlc3RlZCBwZXJtIG9uIHRoZSBjdXJy
ZW50IG5vZGUgKikKIGxldCBoYXMgY29ubmVjdGlvbiByZXF1ZXN0IG5vZGUg
PSBub3QgKGxhY2tzIGNvbm5lY3Rpb24gcmVxdWVzdCBub2RlKQogCitsZXQg
Y2FuX2ZpcmVfd2F0Y2ggY29ubmVjdGlvbiBwZXJtcyA9CisJbm90ICF3YXRj
aF9hY3RpdmF0ZQorCXx8IExpc3QuZXhpc3RzIChoYXMgY29ubmVjdGlvbiBS
RUFEKSBwZXJtcworCiBsZXQgZXF1aXYgcGVybTEgcGVybTIgPQogCShOb2Rl
LnRvX3N0cmluZyBwZXJtMSkgPSAoTm9kZS50b19zdHJpbmcgcGVybTIpCmRp
ZmYgLS1naXQgYS90b29scy9vY2FtbC94ZW5zdG9yZWQveGVuc3RvcmVkLm1s
IGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3hlbnN0b3JlZC5tbAppbmRleCBl
OWY0NzE4NDZmLi4zMGZjODc0MzI3IDEwMDY0NAotLS0gYS90b29scy9vY2Ft
bC94ZW5zdG9yZWQveGVuc3RvcmVkLm1sCisrKyBiL3Rvb2xzL29jYW1sL3hl
bnN0b3JlZC94ZW5zdG9yZWQubWwKQEAgLTk1LDYgKzk1LDcgQEAgbGV0IHBh
cnNlX2NvbmZpZyBmaWxlbmFtZSA9CiAJCSgiY29uZmxpY3QtbWF4LWhpc3Rv
cnktc2Vjb25kcyIsIENvbmZpZy5TZXRfZmxvYXQgRGVmaW5lLmNvbmZsaWN0
X21heF9oaXN0b3J5X3NlY29uZHMpOwogCQkoImNvbmZsaWN0LXJhdGUtbGlt
aXQtaXMtYWdncmVnYXRlIiwgQ29uZmlnLlNldF9ib29sIERlZmluZS5jb25m
bGljdF9yYXRlX2xpbWl0X2lzX2FnZ3JlZ2F0ZSk7CiAJCSgicGVybXMtYWN0
aXZhdGUiLCBDb25maWcuU2V0X2Jvb2wgUGVybXMuYWN0aXZhdGUpOworCQko
InBlcm1zLXdhdGNoLWFjdGl2YXRlIiwgQ29uZmlnLlNldF9ib29sIFBlcm1z
LndhdGNoX2FjdGl2YXRlKTsKIAkJKCJxdW90YS1hY3RpdmF0ZSIsIENvbmZp
Zy5TZXRfYm9vbCBRdW90YS5hY3RpdmF0ZSk7CiAJCSgicXVvdGEtbWF4d2F0
Y2giLCBDb25maWcuU2V0X2ludCBEZWZpbmUubWF4d2F0Y2gpOwogCQkoInF1
b3RhLXRyYW5zYWN0aW9uIiwgQ29uZmlnLlNldF9pbnQgRGVmaW5lLm1heHRy
YW5zYWN0aW9uKTsK

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.13-c/0001-tools-xenstore-allow-removing-child-of-a-node-exceed.patch"
Content-Disposition: attachment;
 filename="xsa115-4.13-c/0001-tools-xenstore-allow-removing-child-of-a-node-exceed.patch"
Content-Transfer-Encoding: base64

RnJvbSBlOTJmM2RmZWFhZTIxYTMzNWU2NjZjOTI0Nzk1NDQyNGUzNGU1YzU2
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IFRodSwgMTEgSnVuIDIwMjAgMTY6
MTI6MzcgKzAyMDAKU3ViamVjdDogW1BBVENIIDAxLzEwXSB0b29scy94ZW5z
dG9yZTogYWxsb3cgcmVtb3ZpbmcgY2hpbGQgb2YgYSBub2RlCiBleGNlZWRp
bmcgcXVvdGEKCkFuIHVucHJpdmlsZWdlZCB1c2VyIG9mIFhlbnN0b3JlIGlz
IG5vdCBhbGxvd2VkIHRvIHdyaXRlIG5vZGVzIHdpdGggYQpzaXplIGV4Y2Vl
ZGluZyBhIGdsb2JhbCBxdW90YSwgd2hpbGUgcHJpdmlsZWdlZCB1c2VycyBs
aWtlIGRvbTAgYXJlCmFsbG93ZWQgdG8gd3JpdGUgc3VjaCBub2Rlcy4gVGhl
IHNpemUgb2YgYSBub2RlIGlzIHRoZSBuZWVkZWQgc3BhY2UKdG8gc3RvcmUg
YWxsIG5vZGUgc3BlY2lmaWMgZGF0YSwgdGhpcyBpbmNsdWRlcyB0aGUgbmFt
ZXMgb2YgYWxsCmNoaWxkcmVuIG9mIHRoZSBub2RlLgoKV2hlbiBkZWxldGlu
ZyBhIG5vZGUgaXRzIHBhcmVudCBoYXMgdG8gYmUgbW9kaWZpZWQgYnkgcmVt
b3ZpbmcgdGhlCm5hbWUgb2YgdGhlIHRvIGJlIGRlbGV0ZWQgY2hpbGQgZnJv
bSBpdC4KClRoaXMgcmVzdWx0cyBpbiB0aGUgc3RyYW5nZSBzaXR1YXRpb24g
dGhhdCBhbiB1bnByaXZpbGVnZWQgb3duZXIgb2YgYQpub2RlIG1pZ2h0IG5v
dCBzdWNjZWVkIGluIGRlbGV0aW5nIHRoYXQgbm9kZSBpbiBjYXNlIGl0cyBw
YXJlbnQgaXMKZXhjZWVkaW5nIHRoZSBxdW90YSBvZiB0aGF0IHVucHJpdmls
ZWdlZCB1c2VyIChpdCBtaWdodCBoYXZlIGJlZW4Kd3JpdHRlbiBieSBkb20w
KSwgYXMgdGhlIHVzZXIgaXMgbm90IGFsbG93ZWQgdG8gd3JpdGUgdGhlIHVw
ZGF0ZWQKcGFyZW50IG5vZGUuCgpGaXggdGhhdCBieSBub3QgY2hlY2tpbmcg
dGhlIHF1b3RhIHdoZW4gd3JpdGluZyBhIG5vZGUgZm9yIHRoZQpwdXJwb3Nl
IG9mIHJlbW92aW5nIGEgY2hpbGQncyBuYW1lIG9ubHkuCgpUaGUgc2FtZSBh
cHBsaWVzIHRvIHRyYW5zYWN0aW9uIGhhbmRsaW5nOiBhIG5vZGUgYmVpbmcg
cmVhZCBkdXJpbmcgYQp0cmFuc2FjdGlvbiBpcyB3cml0dGVuIHRvIHRoZSB0
cmFuc2FjdGlvbiBzcGVjaWZpYyBhcmVhIGFuZCBpdCBzaG91bGQKbm90IGJl
IHRlc3RlZCBmb3IgZXhjZWVkaW5nIHRoZSBxdW90YSwgYXMgaXQgbWlnaHQg
bm90IGJlIG93bmVkIGJ5CnRoZSByZWFkZXIgYW5kIHByZXN1bWFibHkgdGhl
IG9yaWdpbmFsIHdyaXRlIHdvdWxkIGhhdmUgZmFpbGVkIGlmIHRoZQpub2Rl
IGlzIG93bmVkIGJ5IHRoZSByZWFkZXIuCgpUaGlzIGlzIHBhcnQgb2YgWFNB
LTExNS4KClNpZ25lZC1vZmYtYnk6IEp1ZXJnZW4gR3Jvc3MgPGpncm9zc0Bz
dXNlLmNvbT4KUmV2aWV3ZWQtYnk6IEp1bGllbiBHcmFsbCA8amdyYWxsQGFt
YXpvbi5jb20+ClJldmlld2VkLWJ5OiBQYXVsIER1cnJhbnQgPHBhdWxAeGVu
Lm9yZz4KLS0tCiB0b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jICAg
ICAgICB8IDIwICsrKysrKysrKysrLS0tLS0tLS0tCiB0b29scy94ZW5zdG9y
ZS94ZW5zdG9yZWRfY29yZS5oICAgICAgICB8ICAzICsrLQogdG9vbHMveGVu
c3RvcmUveGVuc3RvcmVkX3RyYW5zYWN0aW9uLmMgfCAgMiArLQogMyBmaWxl
cyBjaGFuZ2VkLCAxNCBpbnNlcnRpb25zKCspLCAxMSBkZWxldGlvbnMoLSkK
CmRpZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5j
IGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYwppbmRleCA5N2Nl
YWJmOTY0MmQuLmI0M2UxMDE4YmFiZCAxMDA2NDQKLS0tIGEvdG9vbHMveGVu
c3RvcmUveGVuc3RvcmVkX2NvcmUuYworKysgYi90b29scy94ZW5zdG9yZS94
ZW5zdG9yZWRfY29yZS5jCkBAIC00MTcsNyArNDE3LDggQEAgc3RhdGljIHN0
cnVjdCBub2RlICpyZWFkX25vZGUoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4s
IGNvbnN0IHZvaWQgKmN0eCwKIAlyZXR1cm4gbm9kZTsKIH0KIAotaW50IHdy
aXRlX25vZGVfcmF3KHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBUREJfREFU
QSAqa2V5LCBzdHJ1Y3Qgbm9kZSAqbm9kZSkKK2ludCB3cml0ZV9ub2RlX3Jh
dyhzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgVERCX0RBVEEgKmtleSwgc3Ry
dWN0IG5vZGUgKm5vZGUsCisJCSAgIGJvb2wgbm9fcXVvdGFfY2hlY2spCiB7
CiAJVERCX0RBVEEgZGF0YTsKIAl2b2lkICpwOwpAQCAtNDI3LDcgKzQyOCw3
IEBAIGludCB3cml0ZV9ub2RlX3JhdyhzdHJ1Y3QgY29ubmVjdGlvbiAqY29u
biwgVERCX0RBVEEgKmtleSwgc3RydWN0IG5vZGUgKm5vZGUpCiAJCSsgbm9k
ZS0+bnVtX3Blcm1zKnNpemVvZihub2RlLT5wZXJtc1swXSkKIAkJKyBub2Rl
LT5kYXRhbGVuICsgbm9kZS0+Y2hpbGRsZW47CiAKLQlpZiAoZG9tYWluX2lz
X3VucHJpdmlsZWdlZChjb25uKSAmJgorCWlmICghbm9fcXVvdGFfY2hlY2sg
JiYgZG9tYWluX2lzX3VucHJpdmlsZWdlZChjb25uKSAmJgogCSAgICBkYXRh
LmRzaXplID49IHF1b3RhX21heF9lbnRyeV9zaXplKSB7CiAJCWVycm5vID0g
RU5PU1BDOwogCQlyZXR1cm4gZXJybm87CkBAIC00NTUsMTQgKzQ1NiwxNSBA
QCBpbnQgd3JpdGVfbm9kZV9yYXcoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4s
IFREQl9EQVRBICprZXksIHN0cnVjdCBub2RlICpub2RlKQogCXJldHVybiAw
OwogfQogCi1zdGF0aWMgaW50IHdyaXRlX25vZGUoc3RydWN0IGNvbm5lY3Rp
b24gKmNvbm4sIHN0cnVjdCBub2RlICpub2RlKQorc3RhdGljIGludCB3cml0
ZV9ub2RlKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3Qgbm9kZSAq
bm9kZSwKKwkJICAgICAgYm9vbCBub19xdW90YV9jaGVjaykKIHsKIAlUREJf
REFUQSBrZXk7CiAKIAlpZiAoYWNjZXNzX25vZGUoY29ubiwgbm9kZSwgTk9E
RV9BQ0NFU1NfV1JJVEUsICZrZXkpKQogCQlyZXR1cm4gZXJybm87CiAKLQly
ZXR1cm4gd3JpdGVfbm9kZV9yYXcoY29ubiwgJmtleSwgbm9kZSk7CisJcmV0
dXJuIHdyaXRlX25vZGVfcmF3KGNvbm4sICZrZXksIG5vZGUsIG5vX3F1b3Rh
X2NoZWNrKTsKIH0KIAogc3RhdGljIGVudW0geHNfcGVybV90eXBlIHBlcm1f
Zm9yX2Nvbm4oc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sCkBAIC05OTksNyAr
MTAwMSw3IEBAIHN0YXRpYyBzdHJ1Y3Qgbm9kZSAqY3JlYXRlX25vZGUoc3Ry
dWN0IGNvbm5lY3Rpb24gKmNvbm4sIGNvbnN0IHZvaWQgKmN0eCwKIAkvKiBX
ZSB3cml0ZSBvdXQgdGhlIG5vZGVzIGRvd24sIHNldHRpbmcgZGVzdHJ1Y3Rv
ciBpbiBjYXNlCiAJICogc29tZXRoaW5nIGdvZXMgd3JvbmcuICovCiAJZm9y
IChpID0gbm9kZTsgaTsgaSA9IGktPnBhcmVudCkgewotCQlpZiAod3JpdGVf
bm9kZShjb25uLCBpKSkgeworCQlpZiAod3JpdGVfbm9kZShjb25uLCBpLCBm
YWxzZSkpIHsKIAkJCWRvbWFpbl9lbnRyeV9kZWMoY29ubiwgaSk7CiAJCQly
ZXR1cm4gTlVMTDsKIAkJfQpAQCAtMTAzOSw3ICsxMDQxLDcgQEAgc3RhdGlj
IGludCBkb193cml0ZShzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgc3RydWN0
IGJ1ZmZlcmVkX2RhdGEgKmluKQogCX0gZWxzZSB7CiAJCW5vZGUtPmRhdGEg
PSBpbi0+YnVmZmVyICsgb2Zmc2V0OwogCQlub2RlLT5kYXRhbGVuID0gZGF0
YWxlbjsKLQkJaWYgKHdyaXRlX25vZGUoY29ubiwgbm9kZSkpCisJCWlmICh3
cml0ZV9ub2RlKGNvbm4sIG5vZGUsIGZhbHNlKSkKIAkJCXJldHVybiBlcnJu
bzsKIAl9CiAKQEAgLTExMTUsNyArMTExNyw3IEBAIHN0YXRpYyBpbnQgcmVt
b3ZlX2NoaWxkX2VudHJ5KHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1
Y3Qgbm9kZSAqbm9kZSwKIAlzaXplX3QgY2hpbGRsZW4gPSBzdHJsZW4obm9k
ZS0+Y2hpbGRyZW4gKyBvZmZzZXQpOwogCW1lbWRlbChub2RlLT5jaGlsZHJl
biwgb2Zmc2V0LCBjaGlsZGxlbiArIDEsIG5vZGUtPmNoaWxkbGVuKTsKIAlu
b2RlLT5jaGlsZGxlbiAtPSBjaGlsZGxlbiArIDE7Ci0JcmV0dXJuIHdyaXRl
X25vZGUoY29ubiwgbm9kZSk7CisJcmV0dXJuIHdyaXRlX25vZGUoY29ubiwg
bm9kZSwgdHJ1ZSk7CiB9CiAKIApAQCAtMTI1NCw3ICsxMjU2LDcgQEAgc3Rh
dGljIGludCBkb19zZXRfcGVybXMoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4s
IHN0cnVjdCBidWZmZXJlZF9kYXRhICppbikKIAlub2RlLT5udW1fcGVybXMg
PSBudW07CiAJZG9tYWluX2VudHJ5X2luYyhjb25uLCBub2RlKTsKIAotCWlm
ICh3cml0ZV9ub2RlKGNvbm4sIG5vZGUpKQorCWlmICh3cml0ZV9ub2RlKGNv
bm4sIG5vZGUsIGZhbHNlKSkKIAkJcmV0dXJuIGVycm5vOwogCiAJZmlyZV93
YXRjaGVzKGNvbm4sIGluLCBuYW1lLCBmYWxzZSk7CkBAIC0xNTE0LDcgKzE1
MTYsNyBAQCBzdGF0aWMgdm9pZCBtYW51YWxfbm9kZShjb25zdCBjaGFyICpu
YW1lLCBjb25zdCBjaGFyICpjaGlsZCkKIAlpZiAoY2hpbGQpCiAJCW5vZGUt
PmNoaWxkbGVuID0gc3RybGVuKGNoaWxkKSArIDE7CiAKLQlpZiAod3JpdGVf
bm9kZShOVUxMLCBub2RlKSkKKwlpZiAod3JpdGVfbm9kZShOVUxMLCBub2Rl
LCBmYWxzZSkpCiAJCWJhcmZfcGVycm9yKCJDb3VsZCBub3QgY3JlYXRlIGlu
aXRpYWwgbm9kZSAlcyIsIG5hbWUpOwogCXRhbGxvY19mcmVlKG5vZGUpOwog
fQpkaWZmIC0tZ2l0IGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUu
aCBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmgKaW5kZXggNTZh
Mjc5Y2ZiYjQ3Li4zY2IxYzIzNWExMDEgMTAwNjQ0Ci0tLSBhL3Rvb2xzL3hl
bnN0b3JlL3hlbnN0b3JlZF9jb3JlLmgKKysrIGIvdG9vbHMveGVuc3RvcmUv
eGVuc3RvcmVkX2NvcmUuaApAQCAtMTQ5LDcgKzE0OSw4IEBAIHZvaWQgc2Vu
ZF9hY2soc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIGVudW0geHNkX3NvY2tt
c2dfdHlwZSB0eXBlKTsKIGNoYXIgKmNhbm9uaWNhbGl6ZShzdHJ1Y3QgY29u
bmVjdGlvbiAqY29ubiwgY29uc3Qgdm9pZCAqY3R4LCBjb25zdCBjaGFyICpu
b2RlKTsKIAogLyogV3JpdGUgYSBub2RlIHRvIHRoZSB0ZGIgZGF0YSBiYXNl
LiAqLwotaW50IHdyaXRlX25vZGVfcmF3KHN0cnVjdCBjb25uZWN0aW9uICpj
b25uLCBUREJfREFUQSAqa2V5LCBzdHJ1Y3Qgbm9kZSAqbm9kZSk7CitpbnQg
d3JpdGVfbm9kZV9yYXcoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIFREQl9E
QVRBICprZXksIHN0cnVjdCBub2RlICpub2RlLAorCQkgICBib29sIG5vX3F1
b3RhX2NoZWNrKTsKIAogLyogR2V0IHRoaXMgbm9kZSwgY2hlY2tpbmcgd2Ug
aGF2ZSBwZXJtaXNzaW9ucy4gKi8KIHN0cnVjdCBub2RlICpnZXRfbm9kZShz
dHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwKZGlmZiAtLWdpdCBhL3Rvb2xzL3hl
bnN0b3JlL3hlbnN0b3JlZF90cmFuc2FjdGlvbi5jIGIvdG9vbHMveGVuc3Rv
cmUveGVuc3RvcmVkX3RyYW5zYWN0aW9uLmMKaW5kZXggMjgyNGY3YjM1OWI4
Li5lODc4OTc1NzM0NjkgMTAwNjQ0Ci0tLSBhL3Rvb2xzL3hlbnN0b3JlL3hl
bnN0b3JlZF90cmFuc2FjdGlvbi5jCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hl
bnN0b3JlZF90cmFuc2FjdGlvbi5jCkBAIC0yNzYsNyArMjc2LDcgQEAgaW50
IGFjY2Vzc19ub2RlKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3Qg
bm9kZSAqbm9kZSwKIAkJCWktPmNoZWNrX2dlbiA9IHRydWU7CiAJCQlpZiAo
bm9kZS0+Z2VuZXJhdGlvbiAhPSBOT19HRU5FUkFUSU9OKSB7CiAJCQkJc2V0
X3RkYl9rZXkodHJhbnNfbmFtZSwgJmxvY2FsX2tleSk7Ci0JCQkJcmV0ID0g
d3JpdGVfbm9kZV9yYXcoY29ubiwgJmxvY2FsX2tleSwgbm9kZSk7CisJCQkJ
cmV0ID0gd3JpdGVfbm9kZV9yYXcoY29ubiwgJmxvY2FsX2tleSwgbm9kZSwg
dHJ1ZSk7CiAJCQkJaWYgKHJldCkKIAkJCQkJZ290byBlcnI7CiAJCQkJaS0+
dGFfbm9kZSA9IHRydWU7Ci0tIAoyLjE3LjEKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.13-c/0002-tools-xenstore-ignore-transaction-id-for-un-watch.patch"
Content-Disposition: attachment;
 filename="xsa115-4.13-c/0002-tools-xenstore-ignore-transaction-id-for-un-watch.patch"
Content-Transfer-Encoding: base64

RnJvbSBlODA3NmY3M2RlNjVjNDgxNmY2OWQ2ZWJmNzU4MzljNzA2MTQ1ZmNk
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IFRodSwgMTEgSnVuIDIwMjAgMTY6
MTI6MzggKzAyMDAKU3ViamVjdDogW1BBVENIIDAyLzEwXSB0b29scy94ZW5z
dG9yZTogaWdub3JlIHRyYW5zYWN0aW9uIGlkIGZvciBbdW5dd2F0Y2gKCklu
c3RlYWQgb2YgaWdub3JpbmcgdGhlIHRyYW5zYWN0aW9uIGlkIGZvciBYU19X
QVRDSCBhbmQgWFNfVU5XQVRDSApjb21tYW5kcyBhcyBpdCBpcyBkb2N1bWVu
dGVkIGluIGRvY3MvbWlzYy94ZW5zdG9yZS50eHQsIGl0IGlzIHRlc3RlZApm
b3IgdmFsaWRpdHkgdG9kYXkuCgpSZWFsbHkgaWdub3JlIHRoZSB0cmFuc2Fj
dGlvbiBpZCBmb3IgWFNfV0FUQ0ggYW5kIFhTX1VOV0FUQ0guCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTExNS4KClNpZ25lZC1vZmYtYnk6IEp1ZXJnZW4gR3Jv
c3MgPGpncm9zc0BzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IEp1bGllbiBHcmFs
bCA8amdyYWxsQGFtYXpvbi5jb20+ClJldmlld2VkLWJ5OiBQYXVsIER1cnJh
bnQgPHBhdWxAeGVuLm9yZz4KLS0tCiB0b29scy94ZW5zdG9yZS94ZW5zdG9y
ZWRfY29yZS5jIHwgMjYgKysrKysrKysrKysrKysrKy0tLS0tLS0tLS0KIDEg
ZmlsZSBjaGFuZ2VkLCAxNiBpbnNlcnRpb25zKCspLCAxMCBkZWxldGlvbnMo
LSkKCmRpZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29y
ZS5jIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYwppbmRleCBi
NDNlMTAxOGJhYmQuLmJiMmY5ZmQ0ZTc2ZSAxMDA2NDQKLS0tIGEvdG9vbHMv
eGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYworKysgYi90b29scy94ZW5zdG9y
ZS94ZW5zdG9yZWRfY29yZS5jCkBAIC0xMjY4LDEzICsxMjY4LDE3IEBAIHN0
YXRpYyBpbnQgZG9fc2V0X3Blcm1zKHN0cnVjdCBjb25uZWN0aW9uICpjb25u
LCBzdHJ1Y3QgYnVmZmVyZWRfZGF0YSAqaW4pCiBzdGF0aWMgc3RydWN0IHsK
IAljb25zdCBjaGFyICpzdHI7CiAJaW50ICgqZnVuYykoc3RydWN0IGNvbm5l
Y3Rpb24gKmNvbm4sIHN0cnVjdCBidWZmZXJlZF9kYXRhICppbik7CisJdW5z
aWduZWQgaW50IGZsYWdzOworI2RlZmluZSBYU19GTEFHX05PVElECQkoMVUg
PDwgMCkJLyogSWdub3JlIHRyYW5zYWN0aW9uIGlkLiAqLwogfSBjb25zdCB3
aXJlX2Z1bmNzW1hTX1RZUEVfQ09VTlRdID0gewogCVtYU19DT05UUk9MXSAg
ICAgICAgICAgPSB7ICJDT05UUk9MIiwgICAgICAgICAgIGRvX2NvbnRyb2wg
fSwKIAlbWFNfRElSRUNUT1JZXSAgICAgICAgID0geyAiRElSRUNUT1JZIiwg
ICAgICAgICBzZW5kX2RpcmVjdG9yeSB9LAogCVtYU19SRUFEXSAgICAgICAg
ICAgICAgPSB7ICJSRUFEIiwgICAgICAgICAgICAgIGRvX3JlYWQgfSwKIAlb
WFNfR0VUX1BFUk1TXSAgICAgICAgID0geyAiR0VUX1BFUk1TIiwgICAgICAg
ICBkb19nZXRfcGVybXMgfSwKLQlbWFNfV0FUQ0hdICAgICAgICAgICAgID0g
eyAiV0FUQ0giLCAgICAgICAgICAgICBkb193YXRjaCB9LAotCVtYU19VTldB
VENIXSAgICAgICAgICAgPSB7ICJVTldBVENIIiwgICAgICAgICAgIGRvX3Vu
d2F0Y2ggfSwKKwlbWFNfV0FUQ0hdICAgICAgICAgICAgID0KKwkgICAgeyAi
V0FUQ0giLCAgICAgICAgIGRvX3dhdGNoLCAgICAgICAgWFNfRkxBR19OT1RJ
RCB9LAorCVtYU19VTldBVENIXSAgICAgICAgICAgPQorCSAgICB7ICJVTldB
VENIIiwgICAgICAgZG9fdW53YXRjaCwgICAgICBYU19GTEFHX05PVElEIH0s
CiAJW1hTX1RSQU5TQUNUSU9OX1NUQVJUXSA9IHsgIlRSQU5TQUNUSU9OX1NU
QVJUIiwgZG9fdHJhbnNhY3Rpb25fc3RhcnQgfSwKIAlbWFNfVFJBTlNBQ1RJ
T05fRU5EXSAgID0geyAiVFJBTlNBQ1RJT05fRU5EIiwgICBkb190cmFuc2Fj
dGlvbl9lbmQgfSwKIAlbWFNfSU5UUk9EVUNFXSAgICAgICAgID0geyAiSU5U
Uk9EVUNFIiwgICAgICAgICBkb19pbnRyb2R1Y2UgfSwKQEAgLTEyOTYsNyAr
MTMwMCw3IEBAIHN0YXRpYyBzdHJ1Y3QgewogCiBzdGF0aWMgY29uc3QgY2hh
ciAqc29ja21zZ19zdHJpbmcoZW51bSB4c2Rfc29ja21zZ190eXBlIHR5cGUp
CiB7Ci0JaWYgKCh1bnNpZ25lZCl0eXBlIDwgWFNfVFlQRV9DT1VOVCAmJiB3
aXJlX2Z1bmNzW3R5cGVdLnN0cikKKwlpZiAoKHVuc2lnbmVkIGludCl0eXBl
IDwgQVJSQVlfU0laRSh3aXJlX2Z1bmNzKSAmJiB3aXJlX2Z1bmNzW3R5cGVd
LnN0cikKIAkJcmV0dXJuIHdpcmVfZnVuY3NbdHlwZV0uc3RyOwogCiAJcmV0
dXJuICIqKlVOS05PV04qKiI7CkBAIC0xMzExLDcgKzEzMTUsMTQgQEAgc3Rh
dGljIHZvaWQgcHJvY2Vzc19tZXNzYWdlKHN0cnVjdCBjb25uZWN0aW9uICpj
b25uLCBzdHJ1Y3QgYnVmZmVyZWRfZGF0YSAqaW4pCiAJZW51bSB4c2Rfc29j
a21zZ190eXBlIHR5cGUgPSBpbi0+aGRyLm1zZy50eXBlOwogCWludCByZXQ7
CiAKLQl0cmFucyA9IHRyYW5zYWN0aW9uX2xvb2t1cChjb25uLCBpbi0+aGRy
Lm1zZy50eF9pZCk7CisJaWYgKCh1bnNpZ25lZCBpbnQpdHlwZSA+PSBYU19U
WVBFX0NPVU5UIHx8ICF3aXJlX2Z1bmNzW3R5cGVdLmZ1bmMpIHsKKwkJZXBy
aW50ZigiQ2xpZW50IHVua25vd24gb3BlcmF0aW9uICVpIiwgdHlwZSk7CisJ
CXNlbmRfZXJyb3IoY29ubiwgRU5PU1lTKTsKKwkJcmV0dXJuOworCX0KKwor
CXRyYW5zID0gKHdpcmVfZnVuY3NbdHlwZV0uZmxhZ3MgJiBYU19GTEFHX05P
VElEKQorCQk/IE5VTEwgOiB0cmFuc2FjdGlvbl9sb29rdXAoY29ubiwgaW4t
Pmhkci5tc2cudHhfaWQpOwogCWlmIChJU19FUlIodHJhbnMpKSB7CiAJCXNl
bmRfZXJyb3IoY29ubiwgLVBUUl9FUlIodHJhbnMpKTsKIAkJcmV0dXJuOwpA
QCAtMTMyMCwxMiArMTMzMSw3IEBAIHN0YXRpYyB2b2lkIHByb2Nlc3NfbWVz
c2FnZShzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgc3RydWN0IGJ1ZmZlcmVk
X2RhdGEgKmluKQogCWFzc2VydChjb25uLT50cmFuc2FjdGlvbiA9PSBOVUxM
KTsKIAljb25uLT50cmFuc2FjdGlvbiA9IHRyYW5zOwogCi0JaWYgKCh1bnNp
Z25lZCl0eXBlIDwgWFNfVFlQRV9DT1VOVCAmJiB3aXJlX2Z1bmNzW3R5cGVd
LmZ1bmMpCi0JCXJldCA9IHdpcmVfZnVuY3NbdHlwZV0uZnVuYyhjb25uLCBp
bik7Ci0JZWxzZSB7Ci0JCWVwcmludGYoIkNsaWVudCB1bmtub3duIG9wZXJh
dGlvbiAlaSIsIHR5cGUpOwotCQlyZXQgPSBFTk9TWVM7Ci0JfQorCXJldCA9
IHdpcmVfZnVuY3NbdHlwZV0uZnVuYyhjb25uLCBpbik7CiAJaWYgKHJldCkK
IAkJc2VuZF9lcnJvcihjb25uLCByZXQpOwogCi0tIAoyLjE3LjEKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.13-c/0003-tools-xenstore-fix-node-accounting-after-failed-node.patch"
Content-Disposition: attachment;
 filename="xsa115-4.13-c/0003-tools-xenstore-fix-node-accounting-after-failed-node.patch"
Content-Transfer-Encoding: base64

RnJvbSBiOGM2ZGJiNjdlYmI0NDkxMjYwMjM0NDZhN2QyMDllZWRmOTY2NTM3
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IFRodSwgMTEgSnVuIDIwMjAgMTY6
MTI6MzkgKzAyMDAKU3ViamVjdDogW1BBVENIIDAzLzEwXSB0b29scy94ZW5z
dG9yZTogZml4IG5vZGUgYWNjb3VudGluZyBhZnRlciBmYWlsZWQgbm9kZQog
Y3JlYXRpb24KCldoZW4gYSBub2RlIGNyZWF0aW9uIGZhaWxzIHRoZSBudW1i
ZXIgb2Ygbm9kZXMgb2YgdGhlIGRvbWFpbiBzaG91bGQgYmUKdGhlIHNhbWUg
YXMgYmVmb3JlIHRoZSBmYWlsZWQgbm9kZSBjcmVhdGlvbi4gSW4gY2FzZSBv
ZiBmYWlsdXJlIHdoZW4KdHJ5aW5nIHRvIGNyZWF0ZSBhIG5vZGUgcmVxdWly
aW5nIHRvIGNyZWF0ZSBvbmUgb3IgbW9yZSBpbnRlcm1lZGlhdGUKbm9kZXMg
YXMgd2VsbCAoZS5nLiB3aGVuIC9hL2IvYy9kIGlzIHRvIGJlIGNyZWF0ZWQs
IGJ1dCAvYS9iIGlzbid0CmV4aXN0aW5nIHlldCkgaXQgbWlnaHQgaGFwcGVu
IHRoYXQgdGhlIG51bWJlciBvZiBub2RlcyBvZiB0aGUgY3JlYXRpbmcKZG9t
YWluIGlzIG5vdCByZXNldCB0byB0aGUgdmFsdWUgaXQgaGFkIGJlZm9yZS4K
ClNvIG1vdmUgdGhlIHF1b3RhIGFjY291bnRpbmcgb3V0IG9mIGNvbnN0cnVj
dF9ub2RlKCkgYW5kIGludG8gdGhlIG5vZGUKd3JpdGUgbG9vcCBpbiBjcmVh
dGVfbm9kZSgpIGluIG9yZGVyIHRvIGJlIGFibGUgdG8gdW5kbyB0aGUgYWNj
b3VudGluZwppbiBjYXNlIG9mIGFuIGVycm9yIGluIHRoZSBpbnRlcm1lZGlh
dGUgbm9kZSBkZXN0cnVjdG9yLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0xMTUu
CgpTaWduZWQtb2ZmLWJ5OiBKdWVyZ2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5j
b20+ClJldmlld2VkLWJ5OiBQYXVsIER1cnJhbnQgPHBhdWxAeGVuLm9yZz4K
QWNrZWQtYnk6IEp1bGllbiBHcmFsbCA8amdyYWxsQGFtYXpvbi5jb20+Ci0t
LQogdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYyB8IDM3ICsrKysr
KysrKysrKysrKysrKysrKystLS0tLS0tLS0tLQogMSBmaWxlIGNoYW5nZWQs
IDI1IGluc2VydGlvbnMoKyksIDEyIGRlbGV0aW9ucygtKQoKZGlmZiAtLWdp
dCBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMgYi90b29scy94
ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jCmluZGV4IGJiMmY5ZmQ0ZTc2ZS4u
ZGI5YjljYTc5NTdkIDEwMDY0NAotLS0gYS90b29scy94ZW5zdG9yZS94ZW5z
dG9yZWRfY29yZS5jCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9j
b3JlLmMKQEAgLTkyNSwxMSArOTI1LDYgQEAgc3RhdGljIHN0cnVjdCBub2Rl
ICpjb25zdHJ1Y3Rfbm9kZShzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgY29u
c3Qgdm9pZCAqY3R4LAogCWlmICghcGFyZW50KQogCQlyZXR1cm4gTlVMTDsK
IAotCWlmIChkb21haW5fZW50cnkoY29ubikgPj0gcXVvdGFfbmJfZW50cnlf
cGVyX2RvbWFpbikgewotCQllcnJubyA9IEVOT1NQQzsKLQkJcmV0dXJuIE5V
TEw7Ci0JfQotCiAJLyogQWRkIGNoaWxkIHRvIHBhcmVudC4gKi8KIAliYXNl
ID0gYmFzZW5hbWUobmFtZSk7CiAJYmFzZWxlbiA9IHN0cmxlbihiYXNlKSAr
IDE7CkBAIC05NjIsNyArOTU3LDYgQEAgc3RhdGljIHN0cnVjdCBub2RlICpj
b25zdHJ1Y3Rfbm9kZShzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgY29uc3Qg
dm9pZCAqY3R4LAogCW5vZGUtPmNoaWxkcmVuID0gbm9kZS0+ZGF0YSA9IE5V
TEw7CiAJbm9kZS0+Y2hpbGRsZW4gPSBub2RlLT5kYXRhbGVuID0gMDsKIAlu
b2RlLT5wYXJlbnQgPSBwYXJlbnQ7Ci0JZG9tYWluX2VudHJ5X2luYyhjb25u
LCBub2RlKTsKIAlyZXR1cm4gbm9kZTsKIAogbm9tZW06CkBAIC05ODIsNiAr
OTc2LDkgQEAgc3RhdGljIGludCBkZXN0cm95X25vZGUodm9pZCAqX25vZGUp
CiAJa2V5LmRzaXplID0gc3RybGVuKG5vZGUtPm5hbWUpOwogCiAJdGRiX2Rl
bGV0ZSh0ZGJfY3R4LCBrZXkpOworCisJZG9tYWluX2VudHJ5X2RlYyh0YWxs
b2NfcGFyZW50KG5vZGUpLCBub2RlKTsKKwogCXJldHVybiAwOwogfQogCkBA
IC05OTgsMTggKzk5NSwzNCBAQCBzdGF0aWMgc3RydWN0IG5vZGUgKmNyZWF0
ZV9ub2RlKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBjb25zdCB2b2lkICpj
dHgsCiAJbm9kZS0+ZGF0YSA9IGRhdGE7CiAJbm9kZS0+ZGF0YWxlbiA9IGRh
dGFsZW47CiAKLQkvKiBXZSB3cml0ZSBvdXQgdGhlIG5vZGVzIGRvd24sIHNl
dHRpbmcgZGVzdHJ1Y3RvciBpbiBjYXNlCi0JICogc29tZXRoaW5nIGdvZXMg
d3JvbmcuICovCisJLyoKKwkgKiBXZSB3cml0ZSBvdXQgdGhlIG5vZGVzIGJv
dHRvbSB1cC4KKwkgKiBBbGwgbmV3IGNyZWF0ZWQgbm9kZXMgd2lsbCBoYXZl
IGktPnBhcmVudCBzZXQsIHdoaWxlIHRoZSBmaW5hbAorCSAqIG5vZGUgd2ls
bCBiZSBhbHJlYWR5IGV4aXN0aW5nIGFuZCB3b24ndCBoYXZlIGktPnBhcmVu
dCBzZXQuCisJICogTmV3IG5vZGVzIGFyZSBzdWJqZWN0IHRvIHF1b3RhIGhh
bmRsaW5nLgorCSAqIEluaXRpYWxseSBzZXQgYSBkZXN0cnVjdG9yIGZvciBh
bGwgbmV3IG5vZGVzIHJlbW92aW5nIHRoZW0gZnJvbQorCSAqIFREQiBhZ2Fp
biBhbmQgdW5kb2luZyBxdW90YSBhY2NvdW50aW5nIGZvciB0aGUgY2FzZSBv
ZiBhbiBlcnJvcgorCSAqIGR1cmluZyB0aGUgd3JpdGUgbG9vcC4KKwkgKi8K
IAlmb3IgKGkgPSBub2RlOyBpOyBpID0gaS0+cGFyZW50KSB7Ci0JCWlmICh3
cml0ZV9ub2RlKGNvbm4sIGksIGZhbHNlKSkgewotCQkJZG9tYWluX2VudHJ5
X2RlYyhjb25uLCBpKTsKKwkJLyogaS0+cGFyZW50IGlzIHNldCBmb3IgZWFj
aCBuZXcgbm9kZSwgc28gY2hlY2sgcXVvdGEuICovCisJCWlmIChpLT5wYXJl
bnQgJiYKKwkJICAgIGRvbWFpbl9lbnRyeShjb25uKSA+PSBxdW90YV9uYl9l
bnRyeV9wZXJfZG9tYWluKSB7CisJCQllcnJubyA9IEVOT1NQQzsKIAkJCXJl
dHVybiBOVUxMOwogCQl9Ci0JCXRhbGxvY19zZXRfZGVzdHJ1Y3RvcihpLCBk
ZXN0cm95X25vZGUpOworCQlpZiAod3JpdGVfbm9kZShjb25uLCBpLCBmYWxz
ZSkpCisJCQlyZXR1cm4gTlVMTDsKKworCQkvKiBBY2NvdW50IGZvciBuZXcg
bm9kZSwgc2V0IGRlc3RydWN0b3IgZm9yIGVycm9yIGNhc2UuICovCisJCWlm
IChpLT5wYXJlbnQpIHsKKwkJCWRvbWFpbl9lbnRyeV9pbmMoY29ubiwgaSk7
CisJCQl0YWxsb2Nfc2V0X2Rlc3RydWN0b3IoaSwgZGVzdHJveV9ub2RlKTsK
KwkJfQogCX0KIAogCS8qIE9LLCBub3cgcmVtb3ZlIGRlc3RydWN0b3JzIHNv
IHRoZXkgc3RheSBhcm91bmQgKi8KLQlmb3IgKGkgPSBub2RlOyBpOyBpID0g
aS0+cGFyZW50KQorCWZvciAoaSA9IG5vZGU7IGktPnBhcmVudDsgaSA9IGkt
PnBhcmVudCkKIAkJdGFsbG9jX3NldF9kZXN0cnVjdG9yKGksIE5VTEwpOwog
CXJldHVybiBub2RlOwogfQotLSAKMi4xNy4xCgo=

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.13-c/0004-tools-xenstore-simplify-and-rename-check_event_node.patch"
Content-Disposition: attachment;
 filename="xsa115-4.13-c/0004-tools-xenstore-simplify-and-rename-check_event_node.patch"
Content-Transfer-Encoding: base64

RnJvbSAzMThhYTc1YmQwYzA1NDIzZTcxN2FkMGI2NGFkYjIwNDI4MjAyNWRi
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IFRodSwgMTEgSnVuIDIwMjAgMTY6
MTI6NDAgKzAyMDAKU3ViamVjdDogW1BBVENIIDA0LzEwXSB0b29scy94ZW5z
dG9yZTogc2ltcGxpZnkgYW5kIHJlbmFtZSBjaGVja19ldmVudF9ub2RlKCkK
ClRoZXJlIGlzIG5vIHBhdGggd2hpY2ggYWxsb3dzIHRvIGNhbGwgY2hlY2tf
ZXZlbnRfbm9kZSgpIHdpdGhvdXQgYQpldmVudCBuYW1lLiBTbyBkb24ndCBs
ZXQgdGhlIHJlc3VsdCBkZXBlbmQgb24gdGhlIG5hbWUgYmVpbmcgTlVMTCBh
bmQKYWRkIGFuIGFzc2VydCgpIGNvdmVyaW5nIHRoYXQgY2FzZS4KClJlbmFt
ZSB0aGUgZnVuY3Rpb24gdG8gY2hlY2tfc3BlY2lhbF9ldmVudCgpIHRvIGJl
dHRlciBtYXRjaCB0aGUKc2VtYW50aWNzLgoKVGhpcyBpcyBwYXJ0IG9mIFhT
QS0xMTUuCgpTaWduZWQtb2ZmLWJ5OiBKdWVyZ2VuIEdyb3NzIDxqZ3Jvc3NA
c3VzZS5jb20+ClJldmlld2VkLWJ5OiBKdWxpZW4gR3JhbGwgPGpncmFsbEBh
bWF6b24uY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVsQHhl
bi5vcmc+Ci0tLQogdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3dhdGNoLmMg
fCAxMiArKysrKy0tLS0tLS0KIDEgZmlsZSBjaGFuZ2VkLCA1IGluc2VydGlv
bnMoKyksIDcgZGVsZXRpb25zKC0pCgpkaWZmIC0tZ2l0IGEvdG9vbHMveGVu
c3RvcmUveGVuc3RvcmVkX3dhdGNoLmMgYi90b29scy94ZW5zdG9yZS94ZW5z
dG9yZWRfd2F0Y2guYwppbmRleCA3ZGVkY2E2MGRmZDYuLmYyZjFiZWQ0N2Nj
NiAxMDA2NDQKLS0tIGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3dhdGNo
LmMKKysrIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3dhdGNoLmMKQEAg
LTQ3LDEzICs0NywxMSBAQCBzdHJ1Y3Qgd2F0Y2gKIAljaGFyICpub2RlOwog
fTsKIAotc3RhdGljIGJvb2wgY2hlY2tfZXZlbnRfbm9kZShjb25zdCBjaGFy
ICpub2RlKQorc3RhdGljIGJvb2wgY2hlY2tfc3BlY2lhbF9ldmVudChjb25z
dCBjaGFyICpuYW1lKQogewotCWlmICghbm9kZSB8fCAhc3Ryc3RhcnRzKG5v
ZGUsICJAIikpIHsKLQkJZXJybm8gPSBFSU5WQUw7Ci0JCXJldHVybiBmYWxz
ZTsKLQl9Ci0JcmV0dXJuIHRydWU7CisJYXNzZXJ0KG5hbWUpOworCisJcmV0
dXJuIHN0cnN0YXJ0cyhuYW1lLCAiQCIpOwogfQogCiAvKiBJcyBjaGlsZCBh
IHN1Ym5vZGUgb2YgcGFyZW50LCBvciBlcXVhbD8gKi8KQEAgLTg3LDcgKzg1
LDcgQEAgc3RhdGljIHZvaWQgYWRkX2V2ZW50KHN0cnVjdCBjb25uZWN0aW9u
ICpjb25uLAogCXVuc2lnbmVkIGludCBsZW47CiAJY2hhciAqZGF0YTsKIAot
CWlmICghY2hlY2tfZXZlbnRfbm9kZShuYW1lKSkgeworCWlmICghY2hlY2tf
c3BlY2lhbF9ldmVudChuYW1lKSkgewogCQkvKiBDYW4gdGhpcyBjb25uIGxv
YWQgbm9kZSwgb3Igc2VlIHRoYXQgaXQgZG9lc24ndCBleGlzdD8gKi8KIAkJ
c3RydWN0IG5vZGUgKm5vZGUgPSBnZXRfbm9kZShjb25uLCBjdHgsIG5hbWUs
IFhTX1BFUk1fUkVBRCk7CiAJCS8qCi0tIAoyLjE3LjEKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.13-c/0005-tools-xenstore-check-privilege-for-XS_IS_DOMAIN_INTR.patch"
Content-Disposition: attachment;
 filename="xsa115-4.13-c/0005-tools-xenstore-check-privilege-for-XS_IS_DOMAIN_INTR.patch"
Content-Transfer-Encoding: base64

RnJvbSBjNjI1ZmFlNDRhZWRjMjQ2Nzc2YjUyZWIxMTczY2Y4NDdhM2Q0ZDgw
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IFRodSwgMTEgSnVuIDIwMjAgMTY6
MTI6NDEgKzAyMDAKU3ViamVjdDogW1BBVENIIDA1LzEwXSB0b29scy94ZW5z
dG9yZTogY2hlY2sgcHJpdmlsZWdlIGZvcgogWFNfSVNfRE9NQUlOX0lOVFJP
RFVDRUQKClRoZSBYZW5zdG9yZSBjb21tYW5kIFhTX0lTX0RPTUFJTl9JTlRS
T0RVQ0VEIHNob3VsZCBiZSBwb3NzaWJsZSBmb3IKcHJpdmlsZWdlZCBkb21h
aW5zIG9ubHkgKHRoZSBvbmx5IHVzZXIgaW4gdGhlIHRyZWUgaXMgdGhlIHhl
bnBhZ2luZwpkYWVtb24pLgoKSW5zdGVhZCBvZiBoYXZpbmcgdGhlIHByaXZp
bGVnZSB0ZXN0IGZvciBlYWNoIGNvbW1hbmQgaW50cm9kdWNlIGEKcGVyLWNv
bW1hbmQgZmxhZyBmb3IgdGhhdCBwdXJwb3NlLgoKVGhpcyBpcyBwYXJ0IG9m
IFhTQS0xMTUuCgpTaWduZWQtb2ZmLWJ5OiBKdWVyZ2VuIEdyb3NzIDxqZ3Jv
c3NAc3VzZS5jb20+ClJldmlld2VkLWJ5OiBKdWxpZW4gR3JhbGwgPGpncmFs
bEBhbWF6b24uY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVs
QHhlbi5vcmc+Ci0tLQogdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUu
YyAgIHwgMjQgKysrKysrKysrKysrKysrKysrLS0tLS0tCiB0b29scy94ZW5z
dG9yZS94ZW5zdG9yZWRfZG9tYWluLmMgfCAgNyArKy0tLS0tCiAyIGZpbGVz
IGNoYW5nZWQsIDIwIGluc2VydGlvbnMoKyksIDExIGRlbGV0aW9ucygtKQoK
ZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMg
Yi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jCmluZGV4IGRiOWI5
Y2E3OTU3ZC4uNmFmZDU4NDMxMTExIDEwMDY0NAotLS0gYS90b29scy94ZW5z
dG9yZS94ZW5zdG9yZWRfY29yZS5jCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hl
bnN0b3JlZF9jb3JlLmMKQEAgLTEyODMsOCArMTI4MywxMCBAQCBzdGF0aWMg
c3RydWN0IHsKIAlpbnQgKCpmdW5jKShzdHJ1Y3QgY29ubmVjdGlvbiAqY29u
biwgc3RydWN0IGJ1ZmZlcmVkX2RhdGEgKmluKTsKIAl1bnNpZ25lZCBpbnQg
ZmxhZ3M7CiAjZGVmaW5lIFhTX0ZMQUdfTk9USUQJCSgxVSA8PCAwKQkvKiBJ
Z25vcmUgdHJhbnNhY3Rpb24gaWQuICovCisjZGVmaW5lIFhTX0ZMQUdfUFJJ
VgkJKDFVIDw8IDEpCS8qIFByaXZpbGVnZWQgZG9tYWluIG9ubHkuICovCiB9
IGNvbnN0IHdpcmVfZnVuY3NbWFNfVFlQRV9DT1VOVF0gPSB7Ci0JW1hTX0NP
TlRST0xdICAgICAgICAgICA9IHsgIkNPTlRST0wiLCAgICAgICAgICAgZG9f
Y29udHJvbCB9LAorCVtYU19DT05UUk9MXSAgICAgICAgICAgPQorCSAgICB7
ICJDT05UUk9MIiwgICAgICAgZG9fY29udHJvbCwgICAgICBYU19GTEFHX1BS
SVYgfSwKIAlbWFNfRElSRUNUT1JZXSAgICAgICAgID0geyAiRElSRUNUT1JZ
IiwgICAgICAgICBzZW5kX2RpcmVjdG9yeSB9LAogCVtYU19SRUFEXSAgICAg
ICAgICAgICAgPSB7ICJSRUFEIiwgICAgICAgICAgICAgIGRvX3JlYWQgfSwK
IAlbWFNfR0VUX1BFUk1TXSAgICAgICAgID0geyAiR0VUX1BFUk1TIiwgICAg
ICAgICBkb19nZXRfcGVybXMgfSwKQEAgLTEyOTQsOCArMTI5NiwxMCBAQCBz
dGF0aWMgc3RydWN0IHsKIAkgICAgeyAiVU5XQVRDSCIsICAgICAgIGRvX3Vu
d2F0Y2gsICAgICAgWFNfRkxBR19OT1RJRCB9LAogCVtYU19UUkFOU0FDVElP
Tl9TVEFSVF0gPSB7ICJUUkFOU0FDVElPTl9TVEFSVCIsIGRvX3RyYW5zYWN0
aW9uX3N0YXJ0IH0sCiAJW1hTX1RSQU5TQUNUSU9OX0VORF0gICA9IHsgIlRS
QU5TQUNUSU9OX0VORCIsICAgZG9fdHJhbnNhY3Rpb25fZW5kIH0sCi0JW1hT
X0lOVFJPRFVDRV0gICAgICAgICA9IHsgIklOVFJPRFVDRSIsICAgICAgICAg
ZG9faW50cm9kdWNlIH0sCi0JW1hTX1JFTEVBU0VdICAgICAgICAgICA9IHsg
IlJFTEVBU0UiLCAgICAgICAgICAgZG9fcmVsZWFzZSB9LAorCVtYU19JTlRS
T0RVQ0VdICAgICAgICAgPQorCSAgICB7ICJJTlRST0RVQ0UiLCAgICAgZG9f
aW50cm9kdWNlLCAgICBYU19GTEFHX1BSSVYgfSwKKwlbWFNfUkVMRUFTRV0g
ICAgICAgICAgID0KKwkgICAgeyAiUkVMRUFTRSIsICAgICAgIGRvX3JlbGVh
c2UsICAgICAgWFNfRkxBR19QUklWIH0sCiAJW1hTX0dFVF9ET01BSU5fUEFU
SF0gICA9IHsgIkdFVF9ET01BSU5fUEFUSCIsICAgZG9fZ2V0X2RvbWFpbl9w
YXRoIH0sCiAJW1hTX1dSSVRFXSAgICAgICAgICAgICA9IHsgIldSSVRFIiwg
ICAgICAgICAgICAgZG9fd3JpdGUgfSwKIAlbWFNfTUtESVJdICAgICAgICAg
ICAgID0geyAiTUtESVIiLCAgICAgICAgICAgICBkb19ta2RpciB9LApAQCAt
MTMwNCw5ICsxMzA4LDExIEBAIHN0YXRpYyBzdHJ1Y3QgewogCVtYU19XQVRD
SF9FVkVOVF0gICAgICAgPSB7ICJXQVRDSF9FVkVOVCIsICAgICAgIE5VTEwg
fSwKIAlbWFNfRVJST1JdICAgICAgICAgICAgID0geyAiRVJST1IiLCAgICAg
ICAgICAgICBOVUxMIH0sCiAJW1hTX0lTX0RPTUFJTl9JTlRST0RVQ0VEXSA9
Ci0JCQl7ICJJU19ET01BSU5fSU5UUk9EVUNFRCIsIGRvX2lzX2RvbWFpbl9p
bnRyb2R1Y2VkIH0sCi0JW1hTX1JFU1VNRV0gICAgICAgICAgICA9IHsgIlJF
U1VNRSIsICAgICAgICAgICAgZG9fcmVzdW1lIH0sCi0JW1hTX1NFVF9UQVJH
RVRdICAgICAgICA9IHsgIlNFVF9UQVJHRVQiLCAgICAgICAgZG9fc2V0X3Rh
cmdldCB9LAorCSAgICB7ICJJU19ET01BSU5fSU5UUk9EVUNFRCIsIGRvX2lz
X2RvbWFpbl9pbnRyb2R1Y2VkLCBYU19GTEFHX1BSSVYgfSwKKwlbWFNfUkVT
VU1FXSAgICAgICAgICAgID0KKwkgICAgeyAiUkVTVU1FIiwgICAgICAgIGRv
X3Jlc3VtZSwgICAgICAgWFNfRkxBR19QUklWIH0sCisJW1hTX1NFVF9UQVJH
RVRdICAgICAgICA9CisJICAgIHsgIlNFVF9UQVJHRVQiLCAgICBkb19zZXRf
dGFyZ2V0LCAgIFhTX0ZMQUdfUFJJViB9LAogCVtYU19SRVNFVF9XQVRDSEVT
XSAgICAgPSB7ICJSRVNFVF9XQVRDSEVTIiwgICAgIGRvX3Jlc2V0X3dhdGNo
ZXMgfSwKIAlbWFNfRElSRUNUT1JZX1BBUlRdICAgID0geyAiRElSRUNUT1JZ
X1BBUlQiLCAgICBzZW5kX2RpcmVjdG9yeV9wYXJ0IH0sCiB9OwpAQCAtMTMz
NCw2ICsxMzQwLDEyIEBAIHN0YXRpYyB2b2lkIHByb2Nlc3NfbWVzc2FnZShz
dHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgc3RydWN0IGJ1ZmZlcmVkX2RhdGEg
KmluKQogCQlyZXR1cm47CiAJfQogCisJaWYgKCh3aXJlX2Z1bmNzW3R5cGVd
LmZsYWdzICYgWFNfRkxBR19QUklWKSAmJgorCSAgICBkb21haW5faXNfdW5w
cml2aWxlZ2VkKGNvbm4pKSB7CisJCXNlbmRfZXJyb3IoY29ubiwgRUFDQ0VT
KTsKKwkJcmV0dXJuOworCX0KKwogCXRyYW5zID0gKHdpcmVfZnVuY3NbdHlw
ZV0uZmxhZ3MgJiBYU19GTEFHX05PVElEKQogCQk/IE5VTEwgOiB0cmFuc2Fj
dGlvbl9sb29rdXAoY29ubiwgaW4tPmhkci5tc2cudHhfaWQpOwogCWlmIChJ
U19FUlIodHJhbnMpKSB7CmRpZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94
ZW5zdG9yZWRfZG9tYWluLmMgYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRf
ZG9tYWluLmMKaW5kZXggMWVhZTcwM2VmNjgwLi4wZTI5MjZlMmEzZDAgMTAw
NjQ0Ci0tLSBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9kb21haW4uYwor
KysgYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfZG9tYWluLmMKQEAgLTM3
Nyw3ICszNzcsNyBAQCBpbnQgZG9faW50cm9kdWNlKHN0cnVjdCBjb25uZWN0
aW9uICpjb25uLCBzdHJ1Y3QgYnVmZmVyZWRfZGF0YSAqaW4pCiAJaWYgKGdl
dF9zdHJpbmdzKGluLCB2ZWMsIEFSUkFZX1NJWkUodmVjKSkgPCBBUlJBWV9T
SVpFKHZlYykpCiAJCXJldHVybiBFSU5WQUw7CiAKLQlpZiAoZG9tYWluX2lz
X3VucHJpdmlsZWdlZChjb25uKSB8fCAhY29ubi0+Y2FuX3dyaXRlKQorCWlm
ICghY29ubi0+Y2FuX3dyaXRlKQogCQlyZXR1cm4gRUFDQ0VTOwogCiAJZG9t
aWQgPSBhdG9pKHZlY1swXSk7CkBAIC00NDUsNyArNDQ1LDcgQEAgaW50IGRv
X3NldF90YXJnZXQoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBi
dWZmZXJlZF9kYXRhICppbikKIAlpZiAoZ2V0X3N0cmluZ3MoaW4sIHZlYywg
QVJSQVlfU0laRSh2ZWMpKSA8IEFSUkFZX1NJWkUodmVjKSkKIAkJcmV0dXJu
IEVJTlZBTDsKIAotCWlmIChkb21haW5faXNfdW5wcml2aWxlZ2VkKGNvbm4p
IHx8ICFjb25uLT5jYW5fd3JpdGUpCisJaWYgKCFjb25uLT5jYW5fd3JpdGUp
CiAJCXJldHVybiBFQUNDRVM7CiAKIAlkb21pZCA9IGF0b2kodmVjWzBdKTsK
QEAgLTQ4MCw5ICs0ODAsNiBAQCBzdGF0aWMgc3RydWN0IGRvbWFpbiAqb25l
YXJnX2RvbWFpbihzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwKIAlpZiAoIWRv
bWlkKQogCQlyZXR1cm4gRVJSX1BUUigtRUlOVkFMKTsKIAotCWlmIChkb21h
aW5faXNfdW5wcml2aWxlZ2VkKGNvbm4pKQotCQlyZXR1cm4gRVJSX1BUUigt
RUFDQ0VTKTsKLQogCXJldHVybiBmaW5kX2Nvbm5lY3RlZF9kb21haW4oZG9t
aWQpOwogfQogCi0tIAoyLjE3LjEKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.13-c/0006-tools-xenstore-rework-node-removal.patch"
Content-Disposition: attachment;
 filename="xsa115-4.13-c/0006-tools-xenstore-rework-node-removal.patch"
Content-Transfer-Encoding: base64

RnJvbSA0NjFjODgwNjAwMTc1YzA2ZTIzYTYzZTYyZDlmMWNjYWI3NTVkNzA4
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IFRodSwgMTEgSnVuIDIwMjAgMTY6
MTI6NDIgKzAyMDAKU3ViamVjdDogW1BBVENIIDA2LzEwXSB0b29scy94ZW5z
dG9yZTogcmV3b3JrIG5vZGUgcmVtb3ZhbAoKVG9kYXkgYSBYZW5zdG9yZSBu
b2RlIGlzIGJlaW5nIHJlbW92ZWQgYnkgZGVsZXRpbmcgaXQgZnJvbSB0aGUg
cGFyZW50CmZpcnN0IGFuZCB0aGVuIGRlbGV0aW5nIGl0c2VsZiBhbmQgYWxs
IGl0cyBjaGlsZHJlbi4gVGhpcyByZXN1bHRzIGluCnN0YWxlIGVudHJpZXMg
cmVtYWluaW5nIGluIHRoZSBkYXRhIGJhc2UgaW4gY2FzZSBlLmcuIGEgbWVt
b3J5CmFsbG9jYXRpb24gaXMgZmFpbGluZyBkdXJpbmcgcHJvY2Vzc2luZy4g
VGhpcyB3b3VsZCByZXN1bHQgaW4gdGhlCnJhdGhlciBzdHJhbmdlIGJlaGF2
aW9yIHRvIGJlIGFibGUgdG8gcmVhZCBhIG5vZGUgKGFzIGl0cyBzdGlsbCBp
biB0aGUKZGF0YSBiYXNlKSB3aGlsZSBub3QgYmVpbmcgdmlzaWJsZSBpbiB0
aGUgdHJlZSB2aWV3IG9mIFhlbnN0b3JlLgoKRml4IHRoYXQgYnkgZGVsZXRp
bmcgdGhlIG5vZGVzIGZyb20gdGhlIGxlYWYgc2lkZSBpbnN0ZWFkIG9mIHN0
YXJ0aW5nCmF0IHRoZSByb290LgoKQXMgZmlyZV93YXRjaGVzKCkgaXMgbm93
IGNhbGxlZCBmcm9tIF9ybSgpIHRoZSBjdHggcGFyYW1ldGVyIG5lZWRzIGEK
Y29uc3QgYXR0cmlidXRlLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0xMTUuCgpT
aWduZWQtb2ZmLWJ5OiBKdWVyZ2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+
ClJldmlld2VkLWJ5OiBKdWxpZW4gR3JhbGwgPGpncmFsbEBhbWF6b24uY29t
PgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVsQHhlbi5vcmc+Ci0t
LQogdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYyAgfCA5OSArKysr
KysrKysrKysrKysrLS0tLS0tLS0tLS0tLS0tLQogdG9vbHMveGVuc3RvcmUv
eGVuc3RvcmVkX3dhdGNoLmMgfCAgNCArLQogdG9vbHMveGVuc3RvcmUveGVu
c3RvcmVkX3dhdGNoLmggfCAgMiArLQogMyBmaWxlcyBjaGFuZ2VkLCA1NCBp
bnNlcnRpb25zKCspLCA1MSBkZWxldGlvbnMoLSkKCmRpZmYgLS1naXQgYS90
b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jIGIvdG9vbHMveGVuc3Rv
cmUveGVuc3RvcmVkX2NvcmUuYwppbmRleCA2YWZkNTg0MzExMTEuLjFjYjcy
OWEyY2Q1ZiAxMDA2NDQKLS0tIGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVk
X2NvcmUuYworKysgYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5j
CkBAIC0xMDg3LDc0ICsxMDg3LDc2IEBAIHN0YXRpYyBpbnQgZG9fbWtkaXIo
c3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBidWZmZXJlZF9kYXRh
ICppbikKIAlyZXR1cm4gMDsKIH0KIAotc3RhdGljIHZvaWQgZGVsZXRlX25v
ZGUoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBub2RlICpub2Rl
KQotewotCXVuc2lnbmVkIGludCBpOwotCWNoYXIgKm5hbWU7Ci0KLQkvKiBE
ZWxldGUgc2VsZiwgdGhlbiBkZWxldGUgY2hpbGRyZW4uICBJZiB3ZSBjcmFz
aCwgdGhlbiB0aGUgd29yc3QKLQkgICB0aGF0IGNhbiBoYXBwZW4gaXMgdGhl
IGNoaWxkcmVuIHdpbGwgY29udGludWUgdG8gdGFrZSB1cCBzcGFjZSwgYnV0
Ci0JICAgd2lsbCBvdGhlcndpc2UgYmUgdW5yZWFjaGFibGUuICovCi0JZGVs
ZXRlX25vZGVfc2luZ2xlKGNvbm4sIG5vZGUpOwotCi0JLyogRGVsZXRlIGNo
aWxkcmVuLCB0b28uICovCi0JZm9yIChpID0gMDsgaSA8IG5vZGUtPmNoaWxk
bGVuOyBpICs9IHN0cmxlbihub2RlLT5jaGlsZHJlbitpKSArIDEpIHsKLQkJ
c3RydWN0IG5vZGUgKmNoaWxkOwotCi0JCW5hbWUgPSB0YWxsb2NfYXNwcmlu
dGYobm9kZSwgIiVzLyVzIiwgbm9kZS0+bmFtZSwKLQkJCQkgICAgICAgbm9k
ZS0+Y2hpbGRyZW4gKyBpKTsKLQkJY2hpbGQgPSBuYW1lID8gcmVhZF9ub2Rl
KGNvbm4sIG5vZGUsIG5hbWUpIDogTlVMTDsKLQkJaWYgKGNoaWxkKSB7Ci0J
CQlkZWxldGVfbm9kZShjb25uLCBjaGlsZCk7Ci0JCX0KLQkJZWxzZSB7Ci0J
CQl0cmFjZSgiZGVsZXRlX25vZGU6IEVycm9yIGRlbGV0aW5nIGNoaWxkICcl
cy8lcychXG4iLAotCQkJICAgICAgbm9kZS0+bmFtZSwgbm9kZS0+Y2hpbGRy
ZW4gKyBpKTsKLQkJCS8qIFNraXAgaXQsIHdlJ3ZlIGFscmVhZHkgZGVsZXRl
ZCB0aGUgcGFyZW50LiAqLwotCQl9Ci0JCXRhbGxvY19mcmVlKG5hbWUpOwot
CX0KLX0KLQotCiAvKiBEZWxldGUgbWVtb3J5IHVzaW5nIG1lbW1vdmUuICov
CiBzdGF0aWMgdm9pZCBtZW1kZWwodm9pZCAqbWVtLCB1bnNpZ25lZCBvZmYs
IHVuc2lnbmVkIGxlbiwgdW5zaWduZWQgdG90YWwpCiB7CiAJbWVtbW92ZSht
ZW0gKyBvZmYsIG1lbSArIG9mZiArIGxlbiwgdG90YWwgLSBvZmYgLSBsZW4p
OwogfQogCi0KLXN0YXRpYyBpbnQgcmVtb3ZlX2NoaWxkX2VudHJ5KHN0cnVj
dCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3Qgbm9kZSAqbm9kZSwKLQkJCSAg
ICAgIHNpemVfdCBvZmZzZXQpCitzdGF0aWMgdm9pZCByZW1vdmVfY2hpbGRf
ZW50cnkoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBub2RlICpu
b2RlLAorCQkJICAgICAgIHNpemVfdCBvZmZzZXQpCiB7CiAJc2l6ZV90IGNo
aWxkbGVuID0gc3RybGVuKG5vZGUtPmNoaWxkcmVuICsgb2Zmc2V0KTsKKwog
CW1lbWRlbChub2RlLT5jaGlsZHJlbiwgb2Zmc2V0LCBjaGlsZGxlbiArIDEs
IG5vZGUtPmNoaWxkbGVuKTsKIAlub2RlLT5jaGlsZGxlbiAtPSBjaGlsZGxl
biArIDE7Ci0JcmV0dXJuIHdyaXRlX25vZGUoY29ubiwgbm9kZSwgdHJ1ZSk7
CisJaWYgKHdyaXRlX25vZGUoY29ubiwgbm9kZSwgdHJ1ZSkpCisJCWNvcnJ1
cHQoY29ubiwgIkNhbid0IHVwZGF0ZSBwYXJlbnQgbm9kZSAnJXMnIiwgbm9k
ZS0+bmFtZSk7CiB9CiAKLQotc3RhdGljIGludCBkZWxldGVfY2hpbGQoc3Ry
dWN0IGNvbm5lY3Rpb24gKmNvbm4sCi0JCQlzdHJ1Y3Qgbm9kZSAqbm9kZSwg
Y29uc3QgY2hhciAqY2hpbGRuYW1lKQorc3RhdGljIHZvaWQgZGVsZXRlX2No
aWxkKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLAorCQkJIHN0cnVjdCBub2Rl
ICpub2RlLCBjb25zdCBjaGFyICpjaGlsZG5hbWUpCiB7CiAJdW5zaWduZWQg
aW50IGk7CiAKIAlmb3IgKGkgPSAwOyBpIDwgbm9kZS0+Y2hpbGRsZW47IGkg
Kz0gc3RybGVuKG5vZGUtPmNoaWxkcmVuK2kpICsgMSkgewogCQlpZiAoc3Ry
ZXEobm9kZS0+Y2hpbGRyZW4raSwgY2hpbGRuYW1lKSkgewotCQkJcmV0dXJu
IHJlbW92ZV9jaGlsZF9lbnRyeShjb25uLCBub2RlLCBpKTsKKwkJCXJlbW92
ZV9jaGlsZF9lbnRyeShjb25uLCBub2RlLCBpKTsKKwkJCXJldHVybjsKIAkJ
fQogCX0KIAljb3JydXB0KGNvbm4sICJDYW4ndCBmaW5kIGNoaWxkICclcycg
aW4gJXMiLCBjaGlsZG5hbWUsIG5vZGUtPm5hbWUpOwotCXJldHVybiBFTk9F
TlQ7CiB9CiAKK3N0YXRpYyBpbnQgZGVsZXRlX25vZGUoc3RydWN0IGNvbm5l
Y3Rpb24gKmNvbm4sIHN0cnVjdCBub2RlICpwYXJlbnQsCisJCSAgICAgICBz
dHJ1Y3Qgbm9kZSAqbm9kZSkKK3sKKwljaGFyICpuYW1lOworCisJLyogRGVs
ZXRlIGNoaWxkcmVuLiAqLworCXdoaWxlIChub2RlLT5jaGlsZGxlbikgewor
CQlzdHJ1Y3Qgbm9kZSAqY2hpbGQ7CisKKwkJbmFtZSA9IHRhbGxvY19hc3By
aW50Zihub2RlLCAiJXMvJXMiLCBub2RlLT5uYW1lLAorCQkJCSAgICAgICBu
b2RlLT5jaGlsZHJlbik7CisJCWNoaWxkID0gbmFtZSA/IHJlYWRfbm9kZShj
b25uLCBub2RlLCBuYW1lKSA6IE5VTEw7CisJCWlmIChjaGlsZCkgeworCQkJ
aWYgKGRlbGV0ZV9ub2RlKGNvbm4sIG5vZGUsIGNoaWxkKSkKKwkJCQlyZXR1
cm4gZXJybm87CisJCX0gZWxzZSB7CisJCQl0cmFjZSgiZGVsZXRlX25vZGU6
IEVycm9yIGRlbGV0aW5nIGNoaWxkICclcy8lcychXG4iLAorCQkJICAgICAg
bm9kZS0+bmFtZSwgbm9kZS0+Y2hpbGRyZW4pOworCQkJLyogUXVpdCBkZWxl
dGluZy4gKi8KKwkJCWVycm5vID0gRU5PTUVNOworCQkJcmV0dXJuIGVycm5v
OworCQl9CisJCXRhbGxvY19mcmVlKG5hbWUpOworCX0KKworCWRlbGV0ZV9u
b2RlX3NpbmdsZShjb25uLCBub2RlKTsKKwlkZWxldGVfY2hpbGQoY29ubiwg
cGFyZW50LCBiYXNlbmFtZShub2RlLT5uYW1lKSk7CisJdGFsbG9jX2ZyZWUo
bm9kZSk7CisKKwlyZXR1cm4gMDsKK30KIAogc3RhdGljIGludCBfcm0oc3Ry
dWN0IGNvbm5lY3Rpb24gKmNvbm4sIGNvbnN0IHZvaWQgKmN0eCwgc3RydWN0
IG5vZGUgKm5vZGUsCiAJICAgICAgIGNvbnN0IGNoYXIgKm5hbWUpCiB7Ci0J
LyogRGVsZXRlIGZyb20gcGFyZW50IGZpcnN0LCB0aGVuIGlmIHdlIGNyYXNo
LCB0aGUgd29yc3QgdGhhdCBjYW4KLQkgICBoYXBwZW4gaXMgdGhlIGNoaWxk
IHdpbGwgY29udGludWUgdG8gdGFrZSB1cCBzcGFjZSwgYnV0IHdpbGwKLQkg
ICBvdGhlcndpc2UgYmUgdW5yZWFjaGFibGUuICovCisJLyoKKwkgKiBEZWxl
dGluZyBub2RlIGJ5IG5vZGUsIHNvIHRoZSByZXN1bHQgaXMgYWx3YXlzIGNv
bnNpc3RlbnQgZXZlbiBpbgorCSAqIGNhc2Ugb2YgYSBmYWlsdXJlLgorCSAq
LwogCXN0cnVjdCBub2RlICpwYXJlbnQ7CiAJY2hhciAqcGFyZW50bmFtZSA9
IGdldF9wYXJlbnQoY3R4LCBuYW1lKTsKIApAQCAtMTE2NSwxMSArMTE2Nywx
MyBAQCBzdGF0aWMgaW50IF9ybShzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwg
Y29uc3Qgdm9pZCAqY3R4LCBzdHJ1Y3Qgbm9kZSAqbm9kZSwKIAlpZiAoIXBh
cmVudCkKIAkJcmV0dXJuIChlcnJubyA9PSBFTk9NRU0pID8gRU5PTUVNIDog
RUlOVkFMOwogCi0JaWYgKGRlbGV0ZV9jaGlsZChjb25uLCBwYXJlbnQsIGJh
c2VuYW1lKG5hbWUpKSkKLQkJcmV0dXJuIEVJTlZBTDsKLQotCWRlbGV0ZV9u
b2RlKGNvbm4sIG5vZGUpOwotCXJldHVybiAwOworCS8qCisJICogRmlyZSB0
aGUgd2F0Y2hlcyBub3csIHdoZW4gd2UgY2FuIHN0aWxsIHNlZSB0aGUgbm9k
ZSBwZXJtaXNzaW9ucy4KKwkgKiBUaGlzIGZpbmUgYXMgd2UgYXJlIHNpbmds
ZSB0aHJlYWRlZCBhbmQgdGhlIG5leHQgcG9zc2libGUgcmVhZCB3aWxsCisJ
ICogYmUgaGFuZGxlZCBvbmx5IGFmdGVyIHRoZSBub2RlIGhhcyBiZWVuIHJl
YWxseSByZW1vdmVkLgorCSAqLworCWZpcmVfd2F0Y2hlcyhjb25uLCBjdHgs
IG5hbWUsIHRydWUpOworCXJldHVybiBkZWxldGVfbm9kZShjb25uLCBwYXJl
bnQsIG5vZGUpOwogfQogCiAKQEAgLTEyMDcsNyArMTIxMSw2IEBAIHN0YXRp
YyBpbnQgZG9fcm0oc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBi
dWZmZXJlZF9kYXRhICppbikKIAlpZiAocmV0KQogCQlyZXR1cm4gcmV0Owog
Ci0JZmlyZV93YXRjaGVzKGNvbm4sIGluLCBuYW1lLCB0cnVlKTsKIAlzZW5k
X2Fjayhjb25uLCBYU19STSk7CiAKIAlyZXR1cm4gMDsKZGlmZiAtLWdpdCBh
L3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF93YXRjaC5jIGIvdG9vbHMveGVu
c3RvcmUveGVuc3RvcmVkX3dhdGNoLmMKaW5kZXggZjJmMWJlZDQ3Y2M2Li5m
MGJiZmU3YTZkYzYgMTAwNjQ0Ci0tLSBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0
b3JlZF93YXRjaC5jCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF93
YXRjaC5jCkBAIC03Nyw3ICs3Nyw3IEBAIHN0YXRpYyBib29sIGlzX2NoaWxk
KGNvbnN0IGNoYXIgKmNoaWxkLCBjb25zdCBjaGFyICpwYXJlbnQpCiAgKiBU
ZW1wb3JhcnkgbWVtb3J5IGFsbG9jYXRpb25zIGFyZSBkb25lIHdpdGggY3R4
LgogICovCiBzdGF0aWMgdm9pZCBhZGRfZXZlbnQoc3RydWN0IGNvbm5lY3Rp
b24gKmNvbm4sCi0JCSAgICAgIHZvaWQgKmN0eCwKKwkJICAgICAgY29uc3Qg
dm9pZCAqY3R4LAogCQkgICAgICBzdHJ1Y3Qgd2F0Y2ggKndhdGNoLAogCQkg
ICAgICBjb25zdCBjaGFyICpuYW1lKQogewpAQCAtMTIxLDcgKzEyMSw3IEBA
IHN0YXRpYyB2b2lkIGFkZF9ldmVudChzdHJ1Y3QgY29ubmVjdGlvbiAqY29u
biwKICAqIENoZWNrIHdoZXRoZXIgYW55IHdhdGNoIGV2ZW50cyBhcmUgdG8g
YmUgc2VudC4KICAqIFRlbXBvcmFyeSBtZW1vcnkgYWxsb2NhdGlvbnMgYXJl
IGRvbmUgd2l0aCBjdHguCiAgKi8KLXZvaWQgZmlyZV93YXRjaGVzKHN0cnVj
dCBjb25uZWN0aW9uICpjb25uLCB2b2lkICpjdHgsIGNvbnN0IGNoYXIgKm5h
bWUsCit2b2lkIGZpcmVfd2F0Y2hlcyhzdHJ1Y3QgY29ubmVjdGlvbiAqY29u
biwgY29uc3Qgdm9pZCAqY3R4LCBjb25zdCBjaGFyICpuYW1lLAogCQkgIGJv
b2wgcmVjdXJzZSkKIHsKIAlzdHJ1Y3QgY29ubmVjdGlvbiAqaTsKZGlmZiAt
LWdpdCBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF93YXRjaC5oIGIvdG9v
bHMveGVuc3RvcmUveGVuc3RvcmVkX3dhdGNoLmgKaW5kZXggYzcyZWE2YTY4
NTQyLi41NGQ0ZWE3ZTBkNDEgMTAwNjQ0Ci0tLSBhL3Rvb2xzL3hlbnN0b3Jl
L3hlbnN0b3JlZF93YXRjaC5oCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0
b3JlZF93YXRjaC5oCkBAIC0yNSw3ICsyNSw3IEBAIGludCBkb193YXRjaChz
dHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgc3RydWN0IGJ1ZmZlcmVkX2RhdGEg
KmluKTsKIGludCBkb191bndhdGNoKHN0cnVjdCBjb25uZWN0aW9uICpjb25u
LCBzdHJ1Y3QgYnVmZmVyZWRfZGF0YSAqaW4pOwogCiAvKiBGaXJlIGFsbCB3
YXRjaGVzOiByZWN1cnNlIG1lYW5zIGFsbCB0aGUgY2hpbGRyZW4gYXJlIGFm
ZmVjdGVkIChpZS4gcm0pLiAqLwotdm9pZCBmaXJlX3dhdGNoZXMoc3RydWN0
IGNvbm5lY3Rpb24gKmNvbm4sIHZvaWQgKnRtcCwgY29uc3QgY2hhciAqbmFt
ZSwKK3ZvaWQgZmlyZV93YXRjaGVzKHN0cnVjdCBjb25uZWN0aW9uICpjb25u
LCBjb25zdCB2b2lkICp0bXAsIGNvbnN0IGNoYXIgKm5hbWUsCiAJCSAgYm9v
bCByZWN1cnNlKTsKIAogdm9pZCBjb25uX2RlbGV0ZV9hbGxfd2F0Y2hlcyhz
dHJ1Y3QgY29ubmVjdGlvbiAqY29ubik7Ci0tIAoyLjE3LjEKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.13-c/0007-tools-xenstore-fire-watches-only-when-removing-a-spe.patch"
Content-Disposition: attachment;
 filename="xsa115-4.13-c/0007-tools-xenstore-fire-watches-only-when-removing-a-spe.patch"
Content-Transfer-Encoding: base64

RnJvbSA2Y2EyZTE0YjQzYWVjYzc5ZWZmYzFhMGNkNTI4YTRhY2VlZjQ0ZDQy
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IFRodSwgMTEgSnVuIDIwMjAgMTY6
MTI6NDMgKzAyMDAKU3ViamVjdDogW1BBVENIIDA3LzEwXSB0b29scy94ZW5z
dG9yZTogZmlyZSB3YXRjaGVzIG9ubHkgd2hlbiByZW1vdmluZyBhCiBzcGVj
aWZpYyBub2RlCgpJbnN0ZWFkIG9mIGZpcmluZyBhbGwgd2F0Y2hlcyBmb3Ig
cmVtb3ZpbmcgYSBzdWJ0cmVlIGluIG9uZSBnbywgZG8gc28Kb25seSB3aGVu
IHRoZSByZWxhdGVkIG5vZGUgaXMgYmVpbmcgcmVtb3ZlZC4KClRoZSB3YXRj
aGVzIGZvciB0aGUgdG9wLW1vc3Qgbm9kZSBiZWluZyByZW1vdmVkIGluY2x1
ZGUgYWxsIHdhdGNoZXMKaW5jbHVkaW5nIHRoYXQgbm9kZSwgd2hpbGUgd2F0
Y2hlcyBmb3Igbm9kZXMgYmVsb3cgdGhhdCBhcmUgb25seSBmaXJlZAppZiB0
aGV5IGFyZSBtYXRjaGluZyBleGFjdGx5LiBUaGlzIGF2b2lkcyBmaXJpbmcg
YW55IHdhdGNoIG1vcmUgdGhhbgpvbmNlIHdoZW4gcmVtb3ZpbmcgYSBzdWJ0
cmVlLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0xMTUuCgpTaWduZWQtb2ZmLWJ5
OiBKdWVyZ2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+ClJldmlld2VkLWJ5
OiBKdWxpZW4gR3JhbGwgPGpncmFsbEBhbWF6b24uY29tPgpSZXZpZXdlZC1i
eTogUGF1bCBEdXJyYW50IDxwYXVsQHhlbi5vcmc+Ci0tLQogdG9vbHMveGVu
c3RvcmUveGVuc3RvcmVkX2NvcmUuYyAgfCAxMSArKysrKystLS0tLQogdG9v
bHMveGVuc3RvcmUveGVuc3RvcmVkX3dhdGNoLmMgfCAxMyArKysrKysrKy0t
LS0tCiB0b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfd2F0Y2guaCB8ICA0ICsr
LS0KIDMgZmlsZXMgY2hhbmdlZCwgMTYgaW5zZXJ0aW9ucygrKSwgMTIgZGVs
ZXRpb25zKC0pCgpkaWZmIC0tZ2l0IGEvdG9vbHMveGVuc3RvcmUveGVuc3Rv
cmVkX2NvcmUuYyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMK
aW5kZXggMWNiNzI5YTJjZDVmLi5kN2MwMjU2MTZlYWQgMTAwNjQ0Ci0tLSBh
L3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMKKysrIGIvdG9vbHMv
eGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYwpAQCAtMTExOCw4ICsxMTE4LDgg
QEAgc3RhdGljIHZvaWQgZGVsZXRlX2NoaWxkKHN0cnVjdCBjb25uZWN0aW9u
ICpjb25uLAogCWNvcnJ1cHQoY29ubiwgIkNhbid0IGZpbmQgY2hpbGQgJyVz
JyBpbiAlcyIsIGNoaWxkbmFtZSwgbm9kZS0+bmFtZSk7CiB9CiAKLXN0YXRp
YyBpbnQgZGVsZXRlX25vZGUoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0
cnVjdCBub2RlICpwYXJlbnQsCi0JCSAgICAgICBzdHJ1Y3Qgbm9kZSAqbm9k
ZSkKK3N0YXRpYyBpbnQgZGVsZXRlX25vZGUoc3RydWN0IGNvbm5lY3Rpb24g
KmNvbm4sIGNvbnN0IHZvaWQgKmN0eCwKKwkJICAgICAgIHN0cnVjdCBub2Rl
ICpwYXJlbnQsIHN0cnVjdCBub2RlICpub2RlKQogewogCWNoYXIgKm5hbWU7
CiAKQEAgLTExMzEsNyArMTEzMSw3IEBAIHN0YXRpYyBpbnQgZGVsZXRlX25v
ZGUoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBub2RlICpwYXJl
bnQsCiAJCQkJICAgICAgIG5vZGUtPmNoaWxkcmVuKTsKIAkJY2hpbGQgPSBu
YW1lID8gcmVhZF9ub2RlKGNvbm4sIG5vZGUsIG5hbWUpIDogTlVMTDsKIAkJ
aWYgKGNoaWxkKSB7Ci0JCQlpZiAoZGVsZXRlX25vZGUoY29ubiwgbm9kZSwg
Y2hpbGQpKQorCQkJaWYgKGRlbGV0ZV9ub2RlKGNvbm4sIGN0eCwgbm9kZSwg
Y2hpbGQpKQogCQkJCXJldHVybiBlcnJubzsKIAkJfSBlbHNlIHsKIAkJCXRy
YWNlKCJkZWxldGVfbm9kZTogRXJyb3IgZGVsZXRpbmcgY2hpbGQgJyVzLyVz
JyFcbiIsCkBAIC0xMTQzLDYgKzExNDMsNyBAQCBzdGF0aWMgaW50IGRlbGV0
ZV9ub2RlKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3Qgbm9kZSAq
cGFyZW50LAogCQl0YWxsb2NfZnJlZShuYW1lKTsKIAl9CiAKKwlmaXJlX3dh
dGNoZXMoY29ubiwgY3R4LCBub2RlLT5uYW1lLCB0cnVlKTsKIAlkZWxldGVf
bm9kZV9zaW5nbGUoY29ubiwgbm9kZSk7CiAJZGVsZXRlX2NoaWxkKGNvbm4s
IHBhcmVudCwgYmFzZW5hbWUobm9kZS0+bmFtZSkpOwogCXRhbGxvY19mcmVl
KG5vZGUpOwpAQCAtMTE3Miw4ICsxMTczLDggQEAgc3RhdGljIGludCBfcm0o
c3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIGNvbnN0IHZvaWQgKmN0eCwgc3Ry
dWN0IG5vZGUgKm5vZGUsCiAJICogVGhpcyBmaW5lIGFzIHdlIGFyZSBzaW5n
bGUgdGhyZWFkZWQgYW5kIHRoZSBuZXh0IHBvc3NpYmxlIHJlYWQgd2lsbAog
CSAqIGJlIGhhbmRsZWQgb25seSBhZnRlciB0aGUgbm9kZSBoYXMgYmVlbiBy
ZWFsbHkgcmVtb3ZlZC4KIAkgKi8KLQlmaXJlX3dhdGNoZXMoY29ubiwgY3R4
LCBuYW1lLCB0cnVlKTsKLQlyZXR1cm4gZGVsZXRlX25vZGUoY29ubiwgcGFy
ZW50LCBub2RlKTsKKwlmaXJlX3dhdGNoZXMoY29ubiwgY3R4LCBuYW1lLCBm
YWxzZSk7CisJcmV0dXJuIGRlbGV0ZV9ub2RlKGNvbm4sIGN0eCwgcGFyZW50
LCBub2RlKTsKIH0KIAogCmRpZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94
ZW5zdG9yZWRfd2F0Y2guYyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF93
YXRjaC5jCmluZGV4IGYwYmJmZTdhNmRjNi4uMzgzNjY3NTQ1OWZhIDEwMDY0
NAotLS0gYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfd2F0Y2guYworKysg
Yi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfd2F0Y2guYwpAQCAtMTIyLDcg
KzEyMiw3IEBAIHN0YXRpYyB2b2lkIGFkZF9ldmVudChzdHJ1Y3QgY29ubmVj
dGlvbiAqY29ubiwKICAqIFRlbXBvcmFyeSBtZW1vcnkgYWxsb2NhdGlvbnMg
YXJlIGRvbmUgd2l0aCBjdHguCiAgKi8KIHZvaWQgZmlyZV93YXRjaGVzKHN0
cnVjdCBjb25uZWN0aW9uICpjb25uLCBjb25zdCB2b2lkICpjdHgsIGNvbnN0
IGNoYXIgKm5hbWUsCi0JCSAgYm9vbCByZWN1cnNlKQorCQkgIGJvb2wgZXhh
Y3QpCiB7CiAJc3RydWN0IGNvbm5lY3Rpb24gKmk7CiAJc3RydWN0IHdhdGNo
ICp3YXRjaDsKQEAgLTEzNCwxMCArMTM0LDEzIEBAIHZvaWQgZmlyZV93YXRj
aGVzKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBjb25zdCB2b2lkICpjdHgs
IGNvbnN0IGNoYXIgKm5hbWUsCiAJLyogQ3JlYXRlIGFuIGV2ZW50IGZvciBl
YWNoIHdhdGNoLiAqLwogCWxpc3RfZm9yX2VhY2hfZW50cnkoaSwgJmNvbm5l
Y3Rpb25zLCBsaXN0KSB7CiAJCWxpc3RfZm9yX2VhY2hfZW50cnkod2F0Y2gs
ICZpLT53YXRjaGVzLCBsaXN0KSB7Ci0JCQlpZiAoaXNfY2hpbGQobmFtZSwg
d2F0Y2gtPm5vZGUpKQotCQkJCWFkZF9ldmVudChpLCBjdHgsIHdhdGNoLCBu
YW1lKTsKLQkJCWVsc2UgaWYgKHJlY3Vyc2UgJiYgaXNfY2hpbGQod2F0Y2gt
Pm5vZGUsIG5hbWUpKQotCQkJCWFkZF9ldmVudChpLCBjdHgsIHdhdGNoLCB3
YXRjaC0+bm9kZSk7CisJCQlpZiAoZXhhY3QpIHsKKwkJCQlpZiAoc3RyZXEo
bmFtZSwgd2F0Y2gtPm5vZGUpKQorCQkJCQlhZGRfZXZlbnQoaSwgY3R4LCB3
YXRjaCwgbmFtZSk7CisJCQl9IGVsc2UgeworCQkJCWlmIChpc19jaGlsZChu
YW1lLCB3YXRjaC0+bm9kZSkpCisJCQkJCWFkZF9ldmVudChpLCBjdHgsIHdh
dGNoLCBuYW1lKTsKKwkJCX0KIAkJfQogCX0KIH0KZGlmZiAtLWdpdCBhL3Rv
b2xzL3hlbnN0b3JlL3hlbnN0b3JlZF93YXRjaC5oIGIvdG9vbHMveGVuc3Rv
cmUveGVuc3RvcmVkX3dhdGNoLmgKaW5kZXggNTRkNGVhN2UwZDQxLi4xYjNj
ODBkM2RkYTEgMTAwNjQ0Ci0tLSBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3Jl
ZF93YXRjaC5oCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF93YXRj
aC5oCkBAIC0yNCw5ICsyNCw5IEBACiBpbnQgZG9fd2F0Y2goc3RydWN0IGNv
bm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBidWZmZXJlZF9kYXRhICppbik7CiBp
bnQgZG9fdW53YXRjaChzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgc3RydWN0
IGJ1ZmZlcmVkX2RhdGEgKmluKTsKIAotLyogRmlyZSBhbGwgd2F0Y2hlczog
cmVjdXJzZSBtZWFucyBhbGwgdGhlIGNoaWxkcmVuIGFyZSBhZmZlY3RlZCAo
aWUuIHJtKS4gKi8KKy8qIEZpcmUgYWxsIHdhdGNoZXM6ICFleGFjdCBtZWFu
cyBhbGwgdGhlIGNoaWxkcmVuIGFyZSBhZmZlY3RlZCAoaWUuIHJtKS4gKi8K
IHZvaWQgZmlyZV93YXRjaGVzKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBj
b25zdCB2b2lkICp0bXAsIGNvbnN0IGNoYXIgKm5hbWUsCi0JCSAgYm9vbCBy
ZWN1cnNlKTsKKwkJICBib29sIGV4YWN0KTsKIAogdm9pZCBjb25uX2RlbGV0
ZV9hbGxfd2F0Y2hlcyhzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubik7CiAKLS0g
CjIuMTcuMQoK

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.13-c/0008-tools-xenstore-introduce-node_perms-structure.patch"
Content-Disposition: attachment;
 filename="xsa115-4.13-c/0008-tools-xenstore-introduce-node_perms-structure.patch"
Content-Transfer-Encoding: base64

RnJvbSAyZDRmNDEwODk5YmY1OWUxMTJjMTA3ZjM3MWMzZDE2NGY4YTU5MmY4
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IFRodSwgMTEgSnVuIDIwMjAgMTY6
MTI6NDQgKzAyMDAKU3ViamVjdDogW1BBVENIIDA4LzEwXSB0b29scy94ZW5z
dG9yZTogaW50cm9kdWNlIG5vZGVfcGVybXMgc3RydWN0dXJlCgpUaGVyZSBh
cmUgc2V2ZXJhbCBwbGFjZXMgaW4geGVuc3RvcmVkIHVzaW5nIGEgcGVybWlz
c2lvbiBhcnJheSBhbmQgdGhlCnNpemUgb2YgdGhhdCBhcnJheS4gSW50cm9k
dWNlIGEgbmV3IHN0cnVjdCBub2RlX3Blcm1zIGNvbnRhaW5pbmcgYm90aC4K
ClRoaXMgaXMgcGFydCBvZiBYU0EtMTE1LgoKU2lnbmVkLW9mZi1ieTogSnVl
cmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpBY2tlZC1ieTogSnVsaWVu
IEdyYWxsIDxqZ3JhbGxAYW1hem9uLmNvbT4KUmV2aWV3ZWQtYnk6IFBhdWwg
RHVycmFudCA8cGF1bEB4ZW4ub3JnPgotLS0KIHRvb2xzL3hlbnN0b3JlL3hl
bnN0b3JlZF9jb3JlLmMgICB8IDc5ICsrKysrKysrKysrKysrKy0tLS0tLS0t
LS0tLS0tLS0KIHRvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmggICB8
ICA4ICsrKy0KIHRvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9kb21haW4uYyB8
IDEyICsrLS0tCiAzIGZpbGVzIGNoYW5nZWQsIDUwIGluc2VydGlvbnMoKyks
IDQ5IGRlbGV0aW9ucygtKQoKZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3Jl
L3hlbnN0b3JlZF9jb3JlLmMgYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRf
Y29yZS5jCmluZGV4IGQ3YzAyNTYxNmVhZC4uZmU5OTQzMTEzYjlmIDEwMDY0
NAotLS0gYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jCisrKyBi
L3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMKQEAgLTQwMSwxNCAr
NDAxLDE0IEBAIHN0YXRpYyBzdHJ1Y3Qgbm9kZSAqcmVhZF9ub2RlKHN0cnVj
dCBjb25uZWN0aW9uICpjb25uLCBjb25zdCB2b2lkICpjdHgsCiAJLyogRGF0
YWxlbiwgY2hpbGRsZW4sIG51bWJlciBvZiBwZXJtaXNzaW9ucyAqLwogCWhk
ciA9ICh2b2lkICopZGF0YS5kcHRyOwogCW5vZGUtPmdlbmVyYXRpb24gPSBo
ZHItPmdlbmVyYXRpb247Ci0Jbm9kZS0+bnVtX3Blcm1zID0gaGRyLT5udW1f
cGVybXM7CisJbm9kZS0+cGVybXMubnVtID0gaGRyLT5udW1fcGVybXM7CiAJ
bm9kZS0+ZGF0YWxlbiA9IGhkci0+ZGF0YWxlbjsKIAlub2RlLT5jaGlsZGxl
biA9IGhkci0+Y2hpbGRsZW47CiAKIAkvKiBQZXJtaXNzaW9ucyBhcmUgc3Ry
dWN0IHhzX3Blcm1pc3Npb25zLiAqLwotCW5vZGUtPnBlcm1zID0gaGRyLT5w
ZXJtczsKKwlub2RlLT5wZXJtcy5wID0gaGRyLT5wZXJtczsKIAkvKiBEYXRh
IGlzIGJpbmFyeSBibG9iICh1c3VhbGx5IGFzY2lpLCBubyBudWwpLiAqLwot
CW5vZGUtPmRhdGEgPSBub2RlLT5wZXJtcyArIG5vZGUtPm51bV9wZXJtczsK
Kwlub2RlLT5kYXRhID0gbm9kZS0+cGVybXMucCArIG5vZGUtPnBlcm1zLm51
bTsKIAkvKiBDaGlsZHJlbiBpcyBzdHJpbmdzLCBudWwgc2VwYXJhdGVkLiAq
LwogCW5vZGUtPmNoaWxkcmVuID0gbm9kZS0+ZGF0YSArIG5vZGUtPmRhdGFs
ZW47CiAKQEAgLTQyNSw3ICs0MjUsNyBAQCBpbnQgd3JpdGVfbm9kZV9yYXco
c3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIFREQl9EQVRBICprZXksIHN0cnVj
dCBub2RlICpub2RlLAogCXN0cnVjdCB4c190ZGJfcmVjb3JkX2hkciAqaGRy
OwogCiAJZGF0YS5kc2l6ZSA9IHNpemVvZigqaGRyKQotCQkrIG5vZGUtPm51
bV9wZXJtcypzaXplb2Yobm9kZS0+cGVybXNbMF0pCisJCSsgbm9kZS0+cGVy
bXMubnVtICogc2l6ZW9mKG5vZGUtPnBlcm1zLnBbMF0pCiAJCSsgbm9kZS0+
ZGF0YWxlbiArIG5vZGUtPmNoaWxkbGVuOwogCiAJaWYgKCFub19xdW90YV9j
aGVjayAmJiBkb21haW5faXNfdW5wcml2aWxlZ2VkKGNvbm4pICYmCkBAIC00
MzcsMTIgKzQzNywxMyBAQCBpbnQgd3JpdGVfbm9kZV9yYXcoc3RydWN0IGNv
bm5lY3Rpb24gKmNvbm4sIFREQl9EQVRBICprZXksIHN0cnVjdCBub2RlICpu
b2RlLAogCWRhdGEuZHB0ciA9IHRhbGxvY19zaXplKG5vZGUsIGRhdGEuZHNp
emUpOwogCWhkciA9ICh2b2lkICopZGF0YS5kcHRyOwogCWhkci0+Z2VuZXJh
dGlvbiA9IG5vZGUtPmdlbmVyYXRpb247Ci0JaGRyLT5udW1fcGVybXMgPSBu
b2RlLT5udW1fcGVybXM7CisJaGRyLT5udW1fcGVybXMgPSBub2RlLT5wZXJt
cy5udW07CiAJaGRyLT5kYXRhbGVuID0gbm9kZS0+ZGF0YWxlbjsKIAloZHIt
PmNoaWxkbGVuID0gbm9kZS0+Y2hpbGRsZW47CiAKLQltZW1jcHkoaGRyLT5w
ZXJtcywgbm9kZS0+cGVybXMsIG5vZGUtPm51bV9wZXJtcypzaXplb2Yobm9k
ZS0+cGVybXNbMF0pKTsKLQlwID0gaGRyLT5wZXJtcyArIG5vZGUtPm51bV9w
ZXJtczsKKwltZW1jcHkoaGRyLT5wZXJtcywgbm9kZS0+cGVybXMucCwKKwkg
ICAgICAgbm9kZS0+cGVybXMubnVtICogc2l6ZW9mKCpub2RlLT5wZXJtcy5w
KSk7CisJcCA9IGhkci0+cGVybXMgKyBub2RlLT5wZXJtcy5udW07CiAJbWVt
Y3B5KHAsIG5vZGUtPmRhdGEsIG5vZGUtPmRhdGFsZW4pOwogCXAgKz0gbm9k
ZS0+ZGF0YWxlbjsKIAltZW1jcHkocCwgbm9kZS0+Y2hpbGRyZW4sIG5vZGUt
PmNoaWxkbGVuKTsKQEAgLTQ2OCw4ICs0NjksNyBAQCBzdGF0aWMgaW50IHdy
aXRlX25vZGUoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBub2Rl
ICpub2RlLAogfQogCiBzdGF0aWMgZW51bSB4c19wZXJtX3R5cGUgcGVybV9m
b3JfY29ubihzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwKLQkJCQkgICAgICAg
c3RydWN0IHhzX3Blcm1pc3Npb25zICpwZXJtcywKLQkJCQkgICAgICAgdW5z
aWduZWQgaW50IG51bSkKKwkJCQkgICAgICAgY29uc3Qgc3RydWN0IG5vZGVf
cGVybXMgKnBlcm1zKQogewogCXVuc2lnbmVkIGludCBpOwogCWVudW0geHNf
cGVybV90eXBlIG1hc2sgPSBYU19QRVJNX1JFQUR8WFNfUEVSTV9XUklURXxY
U19QRVJNX09XTkVSOwpAQCAtNDc4LDE2ICs0NzgsMTYgQEAgc3RhdGljIGVu
dW0geHNfcGVybV90eXBlIHBlcm1fZm9yX2Nvbm4oc3RydWN0IGNvbm5lY3Rp
b24gKmNvbm4sCiAJCW1hc2sgJj0gflhTX1BFUk1fV1JJVEU7CiAKIAkvKiBP
d25lcnMgYW5kIHRvb2xzIGdldCBpdCBhbGwuLi4gKi8KLQlpZiAoIWRvbWFp
bl9pc191bnByaXZpbGVnZWQoY29ubikgfHwgcGVybXNbMF0uaWQgPT0gY29u
bi0+aWQKLSAgICAgICAgICAgICAgICB8fCAoY29ubi0+dGFyZ2V0ICYmIHBl
cm1zWzBdLmlkID09IGNvbm4tPnRhcmdldC0+aWQpKQorCWlmICghZG9tYWlu
X2lzX3VucHJpdmlsZWdlZChjb25uKSB8fCBwZXJtcy0+cFswXS5pZCA9PSBj
b25uLT5pZAorICAgICAgICAgICAgICAgIHx8IChjb25uLT50YXJnZXQgJiYg
cGVybXMtPnBbMF0uaWQgPT0gY29ubi0+dGFyZ2V0LT5pZCkpCiAJCXJldHVy
biAoWFNfUEVSTV9SRUFEfFhTX1BFUk1fV1JJVEV8WFNfUEVSTV9PV05FUikg
JiBtYXNrOwogCi0JZm9yIChpID0gMTsgaSA8IG51bTsgaSsrKQotCQlpZiAo
cGVybXNbaV0uaWQgPT0gY29ubi0+aWQKLSAgICAgICAgICAgICAgICAgICAg
ICAgIHx8IChjb25uLT50YXJnZXQgJiYgcGVybXNbaV0uaWQgPT0gY29ubi0+
dGFyZ2V0LT5pZCkpCi0JCQlyZXR1cm4gcGVybXNbaV0ucGVybXMgJiBtYXNr
OworCWZvciAoaSA9IDE7IGkgPCBwZXJtcy0+bnVtOyBpKyspCisJCWlmIChw
ZXJtcy0+cFtpXS5pZCA9PSBjb25uLT5pZAorICAgICAgICAgICAgICAgICAg
ICAgICAgfHwgKGNvbm4tPnRhcmdldCAmJiBwZXJtcy0+cFtpXS5pZCA9PSBj
b25uLT50YXJnZXQtPmlkKSkKKwkJCXJldHVybiBwZXJtcy0+cFtpXS5wZXJt
cyAmIG1hc2s7CiAKLQlyZXR1cm4gcGVybXNbMF0ucGVybXMgJiBtYXNrOwor
CXJldHVybiBwZXJtcy0+cFswXS5wZXJtcyAmIG1hc2s7CiB9CiAKIC8qCkBA
IC01MzQsNyArNTM0LDcgQEAgc3RhdGljIGludCBhc2tfcGFyZW50cyhzdHJ1
Y3QgY29ubmVjdGlvbiAqY29ubiwgY29uc3Qgdm9pZCAqY3R4LAogCQlyZXR1
cm4gMDsKIAl9CiAKLQkqcGVybSA9IHBlcm1fZm9yX2Nvbm4oY29ubiwgbm9k
ZS0+cGVybXMsIG5vZGUtPm51bV9wZXJtcyk7CisJKnBlcm0gPSBwZXJtX2Zv
cl9jb25uKGNvbm4sICZub2RlLT5wZXJtcyk7CiAJcmV0dXJuIDA7CiB9CiAK
QEAgLTU4MCw4ICs1ODAsNyBAQCBzdHJ1Y3Qgbm9kZSAqZ2V0X25vZGUoc3Ry
dWN0IGNvbm5lY3Rpb24gKmNvbm4sCiAJbm9kZSA9IHJlYWRfbm9kZShjb25u
LCBjdHgsIG5hbWUpOwogCS8qIElmIHdlIGRvbid0IGhhdmUgcGVybWlzc2lv
biwgd2UgZG9uJ3QgaGF2ZSBub2RlLiAqLwogCWlmIChub2RlKSB7Ci0JCWlm
ICgocGVybV9mb3JfY29ubihjb25uLCBub2RlLT5wZXJtcywgbm9kZS0+bnVt
X3Blcm1zKSAmIHBlcm0pCi0JCSAgICAhPSBwZXJtKSB7CisJCWlmICgocGVy
bV9mb3JfY29ubihjb25uLCAmbm9kZS0+cGVybXMpICYgcGVybSkgIT0gcGVy
bSkgewogCQkJZXJybm8gPSBFQUNDRVM7CiAJCQlub2RlID0gTlVMTDsKIAkJ
fQpAQCAtNzU3LDE2ICs3NTYsMTUgQEAgY29uc3QgY2hhciAqb25lYXJnKHN0
cnVjdCBidWZmZXJlZF9kYXRhICppbikKIAlyZXR1cm4gaW4tPmJ1ZmZlcjsK
IH0KIAotc3RhdGljIGNoYXIgKnBlcm1zX3RvX3N0cmluZ3MoY29uc3Qgdm9p
ZCAqY3R4LAotCQkJICAgICAgc3RydWN0IHhzX3Blcm1pc3Npb25zICpwZXJt
cywgdW5zaWduZWQgaW50IG51bSwKK3N0YXRpYyBjaGFyICpwZXJtc190b19z
dHJpbmdzKGNvbnN0IHZvaWQgKmN0eCwgY29uc3Qgc3RydWN0IG5vZGVfcGVy
bXMgKnBlcm1zLAogCQkJICAgICAgdW5zaWduZWQgaW50ICpsZW4pCiB7CiAJ
dW5zaWduZWQgaW50IGk7CiAJY2hhciAqc3RyaW5ncyA9IE5VTEw7CiAJY2hh
ciBidWZmZXJbTUFYX1NUUkxFTih1bnNpZ25lZCBpbnQpICsgMV07CiAKLQlm
b3IgKCpsZW4gPSAwLCBpID0gMDsgaSA8IG51bTsgaSsrKSB7Ci0JCWlmICgh
eHNfcGVybV90b19zdHJpbmcoJnBlcm1zW2ldLCBidWZmZXIsIHNpemVvZihi
dWZmZXIpKSkKKwlmb3IgKCpsZW4gPSAwLCBpID0gMDsgaSA8IHBlcm1zLT5u
dW07IGkrKykgeworCQlpZiAoIXhzX3Blcm1fdG9fc3RyaW5nKCZwZXJtcy0+
cFtpXSwgYnVmZmVyLCBzaXplb2YoYnVmZmVyKSkpCiAJCQlyZXR1cm4gTlVM
TDsKIAogCQlzdHJpbmdzID0gdGFsbG9jX3JlYWxsb2MoY3R4LCBzdHJpbmdz
LCBjaGFyLApAQCAtOTQ1LDEzICs5NDMsMTMgQEAgc3RhdGljIHN0cnVjdCBu
b2RlICpjb25zdHJ1Y3Rfbm9kZShzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwg
Y29uc3Qgdm9pZCAqY3R4LAogCQlnb3RvIG5vbWVtOwogCiAJLyogSW5oZXJp
dCBwZXJtaXNzaW9ucywgZXhjZXB0IHVucHJpdmlsZWdlZCBkb21haW5zIG93
biB3aGF0IHRoZXkgY3JlYXRlICovCi0Jbm9kZS0+bnVtX3Blcm1zID0gcGFy
ZW50LT5udW1fcGVybXM7Ci0Jbm9kZS0+cGVybXMgPSB0YWxsb2NfbWVtZHVw
KG5vZGUsIHBhcmVudC0+cGVybXMsCi0JCQkJICAgIG5vZGUtPm51bV9wZXJt
cyAqIHNpemVvZihub2RlLT5wZXJtc1swXSkpOwotCWlmICghbm9kZS0+cGVy
bXMpCisJbm9kZS0+cGVybXMubnVtID0gcGFyZW50LT5wZXJtcy5udW07CisJ
bm9kZS0+cGVybXMucCA9IHRhbGxvY19tZW1kdXAobm9kZSwgcGFyZW50LT5w
ZXJtcy5wLAorCQkJCSAgICAgIG5vZGUtPnBlcm1zLm51bSAqIHNpemVvZigq
bm9kZS0+cGVybXMucCkpOworCWlmICghbm9kZS0+cGVybXMucCkKIAkJZ290
byBub21lbTsKIAlpZiAoZG9tYWluX2lzX3VucHJpdmlsZWdlZChjb25uKSkK
LQkJbm9kZS0+cGVybXNbMF0uaWQgPSBjb25uLT5pZDsKKwkJbm9kZS0+cGVy
bXMucFswXS5pZCA9IGNvbm4tPmlkOwogCiAJLyogTm8gY2hpbGRyZW4sIG5v
IGRhdGEgKi8KIAlub2RlLT5jaGlsZHJlbiA9IG5vZGUtPmRhdGEgPSBOVUxM
OwpAQCAtMTIyOCw3ICsxMjI2LDcgQEAgc3RhdGljIGludCBkb19nZXRfcGVy
bXMoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBidWZmZXJlZF9k
YXRhICppbikKIAlpZiAoIW5vZGUpCiAJCXJldHVybiBlcnJubzsKIAotCXN0
cmluZ3MgPSBwZXJtc190b19zdHJpbmdzKG5vZGUsIG5vZGUtPnBlcm1zLCBu
b2RlLT5udW1fcGVybXMsICZsZW4pOworCXN0cmluZ3MgPSBwZXJtc190b19z
dHJpbmdzKG5vZGUsICZub2RlLT5wZXJtcywgJmxlbik7CiAJaWYgKCFzdHJp
bmdzKQogCQlyZXR1cm4gZXJybm87CiAKQEAgLTEyMzksMTMgKzEyMzcsMTIg
QEAgc3RhdGljIGludCBkb19nZXRfcGVybXMoc3RydWN0IGNvbm5lY3Rpb24g
KmNvbm4sIHN0cnVjdCBidWZmZXJlZF9kYXRhICppbikKIAogc3RhdGljIGlu
dCBkb19zZXRfcGVybXMoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVj
dCBidWZmZXJlZF9kYXRhICppbikKIHsKLQl1bnNpZ25lZCBpbnQgbnVtOwot
CXN0cnVjdCB4c19wZXJtaXNzaW9ucyAqcGVybXM7CisJc3RydWN0IG5vZGVf
cGVybXMgcGVybXM7CiAJY2hhciAqbmFtZSwgKnBlcm1zdHI7CiAJc3RydWN0
IG5vZGUgKm5vZGU7CiAKLQludW0gPSB4c19jb3VudF9zdHJpbmdzKGluLT5i
dWZmZXIsIGluLT51c2VkKTsKLQlpZiAobnVtIDwgMikKKwlwZXJtcy5udW0g
PSB4c19jb3VudF9zdHJpbmdzKGluLT5idWZmZXIsIGluLT51c2VkKTsKKwlp
ZiAocGVybXMubnVtIDwgMikKIAkJcmV0dXJuIEVJTlZBTDsKIAogCS8qIEZp
cnN0IGFyZyBpcyBub2RlIG5hbWUuICovCkBAIC0xMjU2LDIxICsxMjUzLDIx
IEBAIHN0YXRpYyBpbnQgZG9fc2V0X3Blcm1zKHN0cnVjdCBjb25uZWN0aW9u
ICpjb25uLCBzdHJ1Y3QgYnVmZmVyZWRfZGF0YSAqaW4pCiAJCXJldHVybiBl
cnJubzsKIAogCXBlcm1zdHIgPSBpbi0+YnVmZmVyICsgc3RybGVuKGluLT5i
dWZmZXIpICsgMTsKLQludW0tLTsKKwlwZXJtcy5udW0tLTsKIAotCXBlcm1z
ID0gdGFsbG9jX2FycmF5KG5vZGUsIHN0cnVjdCB4c19wZXJtaXNzaW9ucywg
bnVtKTsKLQlpZiAoIXBlcm1zKQorCXBlcm1zLnAgPSB0YWxsb2NfYXJyYXko
bm9kZSwgc3RydWN0IHhzX3Blcm1pc3Npb25zLCBwZXJtcy5udW0pOworCWlm
ICghcGVybXMucCkKIAkJcmV0dXJuIEVOT01FTTsKLQlpZiAoIXhzX3N0cmlu
Z3NfdG9fcGVybXMocGVybXMsIG51bSwgcGVybXN0cikpCisJaWYgKCF4c19z
dHJpbmdzX3RvX3Blcm1zKHBlcm1zLnAsIHBlcm1zLm51bSwgcGVybXN0cikp
CiAJCXJldHVybiBlcnJubzsKIAogCS8qIFVucHJpdmlsZWdlZCBkb21haW5z
IG1heSBub3QgY2hhbmdlIHRoZSBvd25lci4gKi8KLQlpZiAoZG9tYWluX2lz
X3VucHJpdmlsZWdlZChjb25uKSAmJiBwZXJtc1swXS5pZCAhPSBub2RlLT5w
ZXJtc1swXS5pZCkKKwlpZiAoZG9tYWluX2lzX3VucHJpdmlsZWdlZChjb25u
KSAmJgorCSAgICBwZXJtcy5wWzBdLmlkICE9IG5vZGUtPnBlcm1zLnBbMF0u
aWQpCiAJCXJldHVybiBFUEVSTTsKIAogCWRvbWFpbl9lbnRyeV9kZWMoY29u
biwgbm9kZSk7CiAJbm9kZS0+cGVybXMgPSBwZXJtczsKLQlub2RlLT5udW1f
cGVybXMgPSBudW07CiAJZG9tYWluX2VudHJ5X2luYyhjb25uLCBub2RlKTsK
IAogCWlmICh3cml0ZV9ub2RlKGNvbm4sIG5vZGUsIGZhbHNlKSkKQEAgLTE1
NDUsOCArMTU0Miw4IEBAIHN0YXRpYyB2b2lkIG1hbnVhbF9ub2RlKGNvbnN0
IGNoYXIgKm5hbWUsIGNvbnN0IGNoYXIgKmNoaWxkKQogCQliYXJmX3BlcnJv
cigiQ291bGQgbm90IGFsbG9jYXRlIGluaXRpYWwgbm9kZSAlcyIsIG5hbWUp
OwogCiAJbm9kZS0+bmFtZSA9IG5hbWU7Ci0Jbm9kZS0+cGVybXMgPSAmcGVy
bXM7Ci0Jbm9kZS0+bnVtX3Blcm1zID0gMTsKKwlub2RlLT5wZXJtcy5wID0g
JnBlcm1zOworCW5vZGUtPnBlcm1zLm51bSA9IDE7CiAJbm9kZS0+Y2hpbGRy
ZW4gPSAoY2hhciAqKWNoaWxkOwogCWlmIChjaGlsZCkKIAkJbm9kZS0+Y2hp
bGRsZW4gPSBzdHJsZW4oY2hpbGQpICsgMTsKZGlmZiAtLWdpdCBhL3Rvb2xz
L3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmggYi90b29scy94ZW5zdG9yZS94
ZW5zdG9yZWRfY29yZS5oCmluZGV4IDNjYjFjMjM1YTEwMS4uMTkzZDkzMTQy
NjM2IDEwMDY0NAotLS0gYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29y
ZS5oCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmgKQEAg
LTEwOSw2ICsxMDksMTEgQEAgc3RydWN0IGNvbm5lY3Rpb24KIH07CiBleHRl
cm4gc3RydWN0IGxpc3RfaGVhZCBjb25uZWN0aW9uczsKIAorc3RydWN0IG5v
ZGVfcGVybXMgeworCXVuc2lnbmVkIGludCBudW07CisJc3RydWN0IHhzX3Bl
cm1pc3Npb25zICpwOworfTsKKwogc3RydWN0IG5vZGUgewogCWNvbnN0IGNo
YXIgKm5hbWU7CiAKQEAgLTEyMCw4ICsxMjUsNyBAQCBzdHJ1Y3Qgbm9kZSB7
CiAjZGVmaW5lIE5PX0dFTkVSQVRJT04gfigodWludDY0X3QpMCkKIAogCS8q
IFBlcm1pc3Npb25zLiAqLwotCXVuc2lnbmVkIGludCBudW1fcGVybXM7Ci0J
c3RydWN0IHhzX3Blcm1pc3Npb25zICpwZXJtczsKKwlzdHJ1Y3Qgbm9kZV9w
ZXJtcyBwZXJtczsKIAogCS8qIENvbnRlbnRzLiAqLwogCXVuc2lnbmVkIGlu
dCBkYXRhbGVuOwpkaWZmIC0tZ2l0IGEvdG9vbHMveGVuc3RvcmUveGVuc3Rv
cmVkX2RvbWFpbi5jIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2RvbWFp
bi5jCmluZGV4IDBlMjkyNmUyYTNkMC4uZGM1MWNkZmE5YWE3IDEwMDY0NAot
LS0gYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfZG9tYWluLmMKKysrIGIv
dG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2RvbWFpbi5jCkBAIC02NTcsMTIg
KzY1NywxMiBAQCB2b2lkIGRvbWFpbl9lbnRyeV9pbmMoc3RydWN0IGNvbm5l
Y3Rpb24gKmNvbm4sIHN0cnVjdCBub2RlICpub2RlKQogCWlmICghY29ubikK
IAkJcmV0dXJuOwogCi0JaWYgKG5vZGUtPnBlcm1zICYmIG5vZGUtPnBlcm1z
WzBdLmlkICE9IGNvbm4tPmlkKSB7CisJaWYgKG5vZGUtPnBlcm1zLnAgJiYg
bm9kZS0+cGVybXMucFswXS5pZCAhPSBjb25uLT5pZCkgewogCQlpZiAoY29u
bi0+dHJhbnNhY3Rpb24pIHsKIAkJCXRyYW5zYWN0aW9uX2VudHJ5X2luYyhj
b25uLT50cmFuc2FjdGlvbiwKLQkJCQlub2RlLT5wZXJtc1swXS5pZCk7CisJ
CQkJbm9kZS0+cGVybXMucFswXS5pZCk7CiAJCX0gZWxzZSB7Ci0JCQlkID0g
ZmluZF9kb21haW5fYnlfZG9taWQobm9kZS0+cGVybXNbMF0uaWQpOworCQkJ
ZCA9IGZpbmRfZG9tYWluX2J5X2RvbWlkKG5vZGUtPnBlcm1zLnBbMF0uaWQp
OwogCQkJaWYgKGQpCiAJCQkJZC0+bmJlbnRyeSsrOwogCQl9CkBAIC02ODMs
MTIgKzY4MywxMiBAQCB2b2lkIGRvbWFpbl9lbnRyeV9kZWMoc3RydWN0IGNv
bm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBub2RlICpub2RlKQogCWlmICghY29u
bikKIAkJcmV0dXJuOwogCi0JaWYgKG5vZGUtPnBlcm1zICYmIG5vZGUtPnBl
cm1zWzBdLmlkICE9IGNvbm4tPmlkKSB7CisJaWYgKG5vZGUtPnBlcm1zLnAg
JiYgbm9kZS0+cGVybXMucFswXS5pZCAhPSBjb25uLT5pZCkgewogCQlpZiAo
Y29ubi0+dHJhbnNhY3Rpb24pIHsKIAkJCXRyYW5zYWN0aW9uX2VudHJ5X2Rl
Yyhjb25uLT50cmFuc2FjdGlvbiwKLQkJCQlub2RlLT5wZXJtc1swXS5pZCk7
CisJCQkJbm9kZS0+cGVybXMucFswXS5pZCk7CiAJCX0gZWxzZSB7Ci0JCQlk
ID0gZmluZF9kb21haW5fYnlfZG9taWQobm9kZS0+cGVybXNbMF0uaWQpOwor
CQkJZCA9IGZpbmRfZG9tYWluX2J5X2RvbWlkKG5vZGUtPnBlcm1zLnBbMF0u
aWQpOwogCQkJaWYgKGQgJiYgZC0+bmJlbnRyeSkKIAkJCQlkLT5uYmVudHJ5
LS07CiAJCX0KLS0gCjIuMTcuMQoK

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.13-c/0009-tools-xenstore-allow-special-watches-for-privileged-.patch"
Content-Disposition: attachment;
 filename="xsa115-4.13-c/0009-tools-xenstore-allow-special-watches-for-privileged-.patch"
Content-Transfer-Encoding: base64

RnJvbSBjZGRmNzQwMzFiM2M4YTEwOGU4ZmQ3ZGIwYmY1NmU5YzI4MDlkM2Uy
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IFRodSwgMTEgSnVuIDIwMjAgMTY6
MTI6NDUgKzAyMDAKU3ViamVjdDogW1BBVENIIDA5LzEwXSB0b29scy94ZW5z
dG9yZTogYWxsb3cgc3BlY2lhbCB3YXRjaGVzIGZvciBwcml2aWxlZ2VkCiBj
YWxsZXJzIG9ubHkKClRoZSBzcGVjaWFsIHdhdGNoZXMgIkBpbnRyb2R1Y2VE
b21haW4iIGFuZCAiQHJlbGVhc2VEb21haW4iIHNob3VsZCBiZQphbGxvd2Vk
IGZvciBwcml2aWxlZ2VkIGNhbGxlcnMgb25seSwgYXMgdGhleSBhbGxvdyB0
byBnYWluIGluZm9ybWF0aW9uCmFib3V0IHByZXNlbmNlIG9mIG90aGVyIGd1
ZXN0cyBvbiB0aGUgaG9zdC4gU28gc2VuZCB3YXRjaCBldmVudHMgZm9yCnRo
b3NlIHdhdGNoZXMgdmlhIHByaXZpbGVnZWQgY29ubmVjdGlvbnMgb25seS4K
CkluIG9yZGVyIHRvIGFsbG93IGZvciBkaXNhZ2dyZWdhdGVkIHNldHVwcyB3
aGVyZSBlLmcuIGRyaXZlciBkb21haW5zCm5lZWQgdG8gbWFrZSB1c2Ugb2Yg
dGhvc2Ugc3BlY2lhbCB3YXRjaGVzIGFkZCBzdXBwb3J0IGZvciBjYWxsaW5n
CiJzZXQgcGVybWlzc2lvbnMiIGZvciB0aG9zZSBzcGVjaWFsIG5vZGVzLCB0
b28uCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTExNS4KClNpZ25lZC1vZmYtYnk6
IEp1ZXJnZW4gR3Jvc3MgPGpncm9zc0BzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6
IEp1bGllbiBHcmFsbCA8amdyYWxsQGFtYXpvbi5jb20+ClJldmlld2VkLWJ5
OiBQYXVsIER1cnJhbnQgPHBhdWxAeGVuLm9yZz4KLS0tCiBkb2NzL21pc2Mv
eGVuc3RvcmUudHh0ICAgICAgICAgICAgfCAgNSArKysKIHRvb2xzL3hlbnN0
b3JlL3hlbnN0b3JlZF9jb3JlLmMgICB8IDI3ICsrKysrKysrLS0tLS0tCiB0
b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5oICAgfCAgMiArKwogdG9v
bHMveGVuc3RvcmUveGVuc3RvcmVkX2RvbWFpbi5jIHwgNjAgKysrKysrKysr
KysrKysrKysrKysrKysrKysrKysrKwogdG9vbHMveGVuc3RvcmUveGVuc3Rv
cmVkX2RvbWFpbi5oIHwgIDUgKysrCiB0b29scy94ZW5zdG9yZS94ZW5zdG9y
ZWRfd2F0Y2guYyAgfCAgNCArKysKIDYgZmlsZXMgY2hhbmdlZCwgOTMgaW5z
ZXJ0aW9ucygrKSwgMTAgZGVsZXRpb25zKC0pCgpkaWZmIC0tZ2l0IGEvZG9j
cy9taXNjL3hlbnN0b3JlLnR4dCBiL2RvY3MvbWlzYy94ZW5zdG9yZS50eHQK
aW5kZXggNmY4NTY5ZDU3NjBmLi4zMjk2OWViM2ZlY2QgMTAwNjQ0Ci0tLSBh
L2RvY3MvbWlzYy94ZW5zdG9yZS50eHQKKysrIGIvZG9jcy9taXNjL3hlbnN0
b3JlLnR4dApAQCAtMTcwLDYgKzE3MCw5IEBAIFNFVF9QRVJNUwkJPHBhdGg+
fDxwZXJtLWFzLXN0cmluZz58Kz8KIAkJbjxkb21pZD4Jbm8gYWNjZXNzCiAJ
U2VlIGh0dHA6Ly93aWtpLnhlbi5vcmcvd2lraS9YZW5CdXMgc2VjdGlvbgog
CWBQZXJtaXNzaW9ucycgZm9yIGRldGFpbHMgb2YgdGhlIHBlcm1pc3Npb25z
IHN5c3RlbS4KKwlJdCBpcyBwb3NzaWJsZSB0byBzZXQgcGVybWlzc2lvbnMg
Zm9yIHRoZSBzcGVjaWFsIHdhdGNoIHBhdGhzCisJIkBpbnRyb2R1Y2VEb21h
aW4iIGFuZCAiQHJlbGVhc2VEb21haW4iIHRvIGVuYWJsZSByZWNlaXZpbmcg
dGhvc2UKKwl3YXRjaGVzIGluIHVucHJpdmlsZWdlZCBkb21haW5zLgogCiAt
LS0tLS0tLS0tIFdhdGNoZXMgLS0tLS0tLS0tLQogCkBAIC0xOTQsNiArMTk3
LDggQEAgV0FUQ0gJCQk8d3BhdGg+fDx0b2tlbj58PwogCSAgICBAcmVsZWFz
ZURvbWFpbiAJb2NjdXJzIG9uIGFueSBkb21haW4gY3Jhc2ggb3IKIAkJCQlz
aHV0ZG93biwgYW5kIGFsc28gb24gUkVMRUFTRQogCQkJCWFuZCBkb21haW4g
ZGVzdHJ1Y3Rpb24KKwk8d3NwZWNpYWw+IGV2ZW50cyBhcmUgc2VudCB0byBw
cml2aWxlZ2VkIGNhbGxlcnMgb3IgZXhwbGljaXRseQorCXZpYSBTRVRfUEVS
TVMgZW5hYmxlZCBkb21haW5zIG9ubHkuCiAKIAlXaGVuIGEgd2F0Y2ggaXMg
Zmlyc3Qgc2V0IHVwIGl0IGlzIHRyaWdnZXJlZCBvbmNlIHN0cmFpZ2h0CiAJ
YXdheSwgd2l0aCA8cGF0aD4gZXF1YWwgdG8gPHdwYXRoPi4gIFdhdGNoZXMg
bWF5IGJlIHRyaWdnZXJlZApkaWZmIC0tZ2l0IGEvdG9vbHMveGVuc3RvcmUv
eGVuc3RvcmVkX2NvcmUuYyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9j
b3JlLmMKaW5kZXggZmU5OTQzMTEzYjlmLi43MjBiZWMyNjlkZDMgMTAwNjQ0
Ci0tLSBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMKKysrIGIv
dG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYwpAQCAtNDY4LDggKzQ2
OCw4IEBAIHN0YXRpYyBpbnQgd3JpdGVfbm9kZShzdHJ1Y3QgY29ubmVjdGlv
biAqY29ubiwgc3RydWN0IG5vZGUgKm5vZGUsCiAJcmV0dXJuIHdyaXRlX25v
ZGVfcmF3KGNvbm4sICZrZXksIG5vZGUsIG5vX3F1b3RhX2NoZWNrKTsKIH0K
IAotc3RhdGljIGVudW0geHNfcGVybV90eXBlIHBlcm1fZm9yX2Nvbm4oc3Ry
dWN0IGNvbm5lY3Rpb24gKmNvbm4sCi0JCQkJICAgICAgIGNvbnN0IHN0cnVj
dCBub2RlX3Blcm1zICpwZXJtcykKK2VudW0geHNfcGVybV90eXBlIHBlcm1f
Zm9yX2Nvbm4oc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sCisJCQkJY29uc3Qg
c3RydWN0IG5vZGVfcGVybXMgKnBlcm1zKQogewogCXVuc2lnbmVkIGludCBp
OwogCWVudW0geHNfcGVybV90eXBlIG1hc2sgPSBYU19QRVJNX1JFQUR8WFNf
UEVSTV9XUklURXxYU19QRVJNX09XTkVSOwpAQCAtMTI0NSwyMiArMTI0NSwy
OSBAQCBzdGF0aWMgaW50IGRvX3NldF9wZXJtcyhzdHJ1Y3QgY29ubmVjdGlv
biAqY29ubiwgc3RydWN0IGJ1ZmZlcmVkX2RhdGEgKmluKQogCWlmIChwZXJt
cy5udW0gPCAyKQogCQlyZXR1cm4gRUlOVkFMOwogCi0JLyogRmlyc3QgYXJn
IGlzIG5vZGUgbmFtZS4gKi8KLQkvKiBXZSBtdXN0IG93biBub2RlIHRvIGRv
IHRoaXMgKHRvb2xzIGNhbiBkbyB0aGlzIHRvbykuICovCi0Jbm9kZSA9IGdl
dF9ub2RlX2Nhbm9uaWNhbGl6ZWQoY29ubiwgaW4sIGluLT5idWZmZXIsICZu
YW1lLAotCQkJCSAgICAgIFhTX1BFUk1fV1JJVEUgfCBYU19QRVJNX09XTkVS
KTsKLQlpZiAoIW5vZGUpCi0JCXJldHVybiBlcnJubzsKLQogCXBlcm1zdHIg
PSBpbi0+YnVmZmVyICsgc3RybGVuKGluLT5idWZmZXIpICsgMTsKIAlwZXJt
cy5udW0tLTsKIAotCXBlcm1zLnAgPSB0YWxsb2NfYXJyYXkobm9kZSwgc3Ry
dWN0IHhzX3Blcm1pc3Npb25zLCBwZXJtcy5udW0pOworCXBlcm1zLnAgPSB0
YWxsb2NfYXJyYXkoaW4sIHN0cnVjdCB4c19wZXJtaXNzaW9ucywgcGVybXMu
bnVtKTsKIAlpZiAoIXBlcm1zLnApCiAJCXJldHVybiBFTk9NRU07CiAJaWYg
KCF4c19zdHJpbmdzX3RvX3Blcm1zKHBlcm1zLnAsIHBlcm1zLm51bSwgcGVy
bXN0cikpCiAJCXJldHVybiBlcnJubzsKIAorCS8qIEZpcnN0IGFyZyBpcyBu
b2RlIG5hbWUuICovCisJaWYgKHN0cnN0YXJ0cyhpbi0+YnVmZmVyLCAiQCIp
KSB7CisJCWlmIChzZXRfcGVybXNfc3BlY2lhbChjb25uLCBpbi0+YnVmZmVy
LCAmcGVybXMpKQorCQkJcmV0dXJuIGVycm5vOworCQlzZW5kX2Fjayhjb25u
LCBYU19TRVRfUEVSTVMpOworCQlyZXR1cm4gMDsKKwl9CisKKwkvKiBXZSBt
dXN0IG93biBub2RlIHRvIGRvIHRoaXMgKHRvb2xzIGNhbiBkbyB0aGlzIHRv
bykuICovCisJbm9kZSA9IGdldF9ub2RlX2Nhbm9uaWNhbGl6ZWQoY29ubiwg
aW4sIGluLT5idWZmZXIsICZuYW1lLAorCQkJCSAgICAgIFhTX1BFUk1fV1JJ
VEUgfCBYU19QRVJNX09XTkVSKTsKKwlpZiAoIW5vZGUpCisJCXJldHVybiBl
cnJubzsKKwogCS8qIFVucHJpdmlsZWdlZCBkb21haW5zIG1heSBub3QgY2hh
bmdlIHRoZSBvd25lci4gKi8KIAlpZiAoZG9tYWluX2lzX3VucHJpdmlsZWdl
ZChjb25uKSAmJgogCSAgICBwZXJtcy5wWzBdLmlkICE9IG5vZGUtPnBlcm1z
LnBbMF0uaWQpCmRpZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94ZW5zdG9y
ZWRfY29yZS5oIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuaApp
bmRleCAxOTNkOTMxNDI2MzYuLmYzZGE2YmJjOTQzZCAxMDA2NDQKLS0tIGEv
dG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuaAorKysgYi90b29scy94
ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5oCkBAIC0xNjUsNiArMTY1LDggQEAg
c3RydWN0IG5vZGUgKmdldF9ub2RlKHN0cnVjdCBjb25uZWN0aW9uICpjb25u
LAogc3RydWN0IGNvbm5lY3Rpb24gKm5ld19jb25uZWN0aW9uKGNvbm53cml0
ZWZuX3QgKndyaXRlLCBjb25ucmVhZGZuX3QgKnJlYWQpOwogdm9pZCBjaGVj
a19zdG9yZSh2b2lkKTsKIHZvaWQgY29ycnVwdChzdHJ1Y3QgY29ubmVjdGlv
biAqY29ubiwgY29uc3QgY2hhciAqZm10LCAuLi4pOworZW51bSB4c19wZXJt
X3R5cGUgcGVybV9mb3JfY29ubihzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwK
KwkJCQljb25zdCBzdHJ1Y3Qgbm9kZV9wZXJtcyAqcGVybXMpOwogCiAvKiBJ
cyB0aGlzIGEgdmFsaWQgbm9kZSBuYW1lPyAqLwogYm9vbCBpc192YWxpZF9u
b2RlbmFtZShjb25zdCBjaGFyICpub2RlKTsKZGlmZiAtLWdpdCBhL3Rvb2xz
L3hlbnN0b3JlL3hlbnN0b3JlZF9kb21haW4uYyBiL3Rvb2xzL3hlbnN0b3Jl
L3hlbnN0b3JlZF9kb21haW4uYwppbmRleCBkYzUxY2RmYTlhYTcuLjdhZmFi
ZTBhZTA4NCAxMDA2NDQKLS0tIGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVk
X2RvbWFpbi5jCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9kb21h
aW4uYwpAQCAtNDEsNiArNDEsOSBAQCBzdGF0aWMgZXZ0Y2huX3BvcnRfdCB2
aXJxX3BvcnQ7CiAKIHhlbmV2dGNobl9oYW5kbGUgKnhjZV9oYW5kbGUgPSBO
VUxMOwogCitzdGF0aWMgc3RydWN0IG5vZGVfcGVybXMgZG9tX3JlbGVhc2Vf
cGVybXM7CitzdGF0aWMgc3RydWN0IG5vZGVfcGVybXMgZG9tX2ludHJvZHVj
ZV9wZXJtczsKKwogc3RydWN0IGRvbWFpbgogewogCXN0cnVjdCBsaXN0X2hl
YWQgbGlzdDsKQEAgLTU4OSw2ICs1OTIsNTkgQEAgdm9pZCByZXN0b3JlX2V4
aXN0aW5nX2Nvbm5lY3Rpb25zKHZvaWQpCiB7CiB9CiAKK3N0YXRpYyBpbnQg
c2V0X2RvbV9wZXJtc19kZWZhdWx0KHN0cnVjdCBub2RlX3Blcm1zICpwZXJt
cykKK3sKKwlwZXJtcy0+bnVtID0gMTsKKwlwZXJtcy0+cCA9IHRhbGxvY19h
cnJheShOVUxMLCBzdHJ1Y3QgeHNfcGVybWlzc2lvbnMsIHBlcm1zLT5udW0p
OworCWlmICghcGVybXMtPnApCisJCXJldHVybiAtMTsKKwlwZXJtcy0+cC0+
aWQgPSAwOworCXBlcm1zLT5wLT5wZXJtcyA9IFhTX1BFUk1fTk9ORTsKKwor
CXJldHVybiAwOworfQorCitzdGF0aWMgc3RydWN0IG5vZGVfcGVybXMgKmdl
dF9wZXJtc19zcGVjaWFsKGNvbnN0IGNoYXIgKm5hbWUpCit7CisJaWYgKCFz
dHJjbXAobmFtZSwgIkByZWxlYXNlRG9tYWluIikpCisJCXJldHVybiAmZG9t
X3JlbGVhc2VfcGVybXM7CisJaWYgKCFzdHJjbXAobmFtZSwgIkBpbnRyb2R1
Y2VEb21haW4iKSkKKwkJcmV0dXJuICZkb21faW50cm9kdWNlX3Blcm1zOwor
CXJldHVybiBOVUxMOworfQorCitpbnQgc2V0X3Blcm1zX3NwZWNpYWwoc3Ry
dWN0IGNvbm5lY3Rpb24gKmNvbm4sIGNvbnN0IGNoYXIgKm5hbWUsCisJCSAg
ICAgIHN0cnVjdCBub2RlX3Blcm1zICpwZXJtcykKK3sKKwlzdHJ1Y3Qgbm9k
ZV9wZXJtcyAqcDsKKworCXAgPSBnZXRfcGVybXNfc3BlY2lhbChuYW1lKTsK
KwlpZiAoIXApCisJCXJldHVybiBFSU5WQUw7CisKKwlpZiAoKHBlcm1fZm9y
X2Nvbm4oY29ubiwgcCkgJiAoWFNfUEVSTV9XUklURSB8IFhTX1BFUk1fT1dO
RVIpKSAhPQorCSAgICAoWFNfUEVSTV9XUklURSB8IFhTX1BFUk1fT1dORVIp
KQorCQlyZXR1cm4gRUFDQ0VTOworCisJcC0+bnVtID0gcGVybXMtPm51bTsK
Kwl0YWxsb2NfZnJlZShwLT5wKTsKKwlwLT5wID0gcGVybXMtPnA7CisJdGFs
bG9jX3N0ZWFsKE5VTEwsIHBlcm1zLT5wKTsKKworCXJldHVybiAwOworfQor
Citib29sIGNoZWNrX3Blcm1zX3NwZWNpYWwoY29uc3QgY2hhciAqbmFtZSwg
c3RydWN0IGNvbm5lY3Rpb24gKmNvbm4pCit7CisJc3RydWN0IG5vZGVfcGVy
bXMgKnA7CisKKwlwID0gZ2V0X3Blcm1zX3NwZWNpYWwobmFtZSk7CisJaWYg
KCFwKQorCQlyZXR1cm4gZmFsc2U7CisKKwlyZXR1cm4gcGVybV9mb3JfY29u
bihjb25uLCBwKSAmIFhTX1BFUk1fUkVBRDsKK30KKwogc3RhdGljIGludCBk
b20wX2luaXQodm9pZCkgCiB7IAogCWV2dGNobl9wb3J0X3QgcG9ydDsKQEAg
LTYxMCw2ICs2NjYsMTAgQEAgc3RhdGljIGludCBkb20wX2luaXQodm9pZCkK
IAogCXhlbmV2dGNobl9ub3RpZnkoeGNlX2hhbmRsZSwgZG9tMC0+cG9ydCk7
CiAKKwlpZiAoc2V0X2RvbV9wZXJtc19kZWZhdWx0KCZkb21fcmVsZWFzZV9w
ZXJtcykgfHwKKwkgICAgc2V0X2RvbV9wZXJtc19kZWZhdWx0KCZkb21faW50
cm9kdWNlX3Blcm1zKSkKKwkJcmV0dXJuIC0xOworCiAJcmV0dXJuIDA7IAog
fQogCmRpZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfZG9t
YWluLmggYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfZG9tYWluLmgKaW5k
ZXggNTZhZTAxNTk3NDc1Li4yNTkxODM5NjJhOWMgMTAwNjQ0Ci0tLSBhL3Rv
b2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9kb21haW4uaAorKysgYi90b29scy94
ZW5zdG9yZS94ZW5zdG9yZWRfZG9tYWluLmgKQEAgLTY1LDYgKzY1LDExIEBA
IHZvaWQgZG9tYWluX3dhdGNoX2luYyhzdHJ1Y3QgY29ubmVjdGlvbiAqY29u
bik7CiB2b2lkIGRvbWFpbl93YXRjaF9kZWMoc3RydWN0IGNvbm5lY3Rpb24g
KmNvbm4pOwogaW50IGRvbWFpbl93YXRjaChzdHJ1Y3QgY29ubmVjdGlvbiAq
Y29ubik7CiAKKy8qIFNwZWNpYWwgbm9kZSBwZXJtaXNzaW9uIGhhbmRsaW5n
LiAqLworaW50IHNldF9wZXJtc19zcGVjaWFsKHN0cnVjdCBjb25uZWN0aW9u
ICpjb25uLCBjb25zdCBjaGFyICpuYW1lLAorCQkgICAgICBzdHJ1Y3Qgbm9k
ZV9wZXJtcyAqcGVybXMpOworYm9vbCBjaGVja19wZXJtc19zcGVjaWFsKGNv
bnN0IGNoYXIgKm5hbWUsIHN0cnVjdCBjb25uZWN0aW9uICpjb25uKTsKKwog
LyogV3JpdGUgcmF0ZSBsaW1pdGluZyAqLwogCiAjZGVmaW5lIFdSTF9GQUNU
T1IgICAxMDAwIC8qIGZvciBmaXhlZC1wb2ludCBhcml0aG1ldGljICovCmRp
ZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfd2F0Y2guYyBi
L3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF93YXRjaC5jCmluZGV4IDM4MzY2
NzU0NTlmYS4uZjRlMjg5MzYyZWI2IDEwMDY0NAotLS0gYS90b29scy94ZW5z
dG9yZS94ZW5zdG9yZWRfd2F0Y2guYworKysgYi90b29scy94ZW5zdG9yZS94
ZW5zdG9yZWRfd2F0Y2guYwpAQCAtMTMzLDYgKzEzMywxMCBAQCB2b2lkIGZp
cmVfd2F0Y2hlcyhzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgY29uc3Qgdm9p
ZCAqY3R4LCBjb25zdCBjaGFyICpuYW1lLAogCiAJLyogQ3JlYXRlIGFuIGV2
ZW50IGZvciBlYWNoIHdhdGNoLiAqLwogCWxpc3RfZm9yX2VhY2hfZW50cnko
aSwgJmNvbm5lY3Rpb25zLCBsaXN0KSB7CisJCS8qIGludHJvZHVjZS9yZWxl
YXNlIGRvbWFpbiB3YXRjaGVzICovCisJCWlmIChjaGVja19zcGVjaWFsX2V2
ZW50KG5hbWUpICYmICFjaGVja19wZXJtc19zcGVjaWFsKG5hbWUsIGkpKQor
CQkJY29udGludWU7CisKIAkJbGlzdF9mb3JfZWFjaF9lbnRyeSh3YXRjaCwg
JmktPndhdGNoZXMsIGxpc3QpIHsKIAkJCWlmIChleGFjdCkgewogCQkJCWlm
IChzdHJlcShuYW1lLCB3YXRjaC0+bm9kZSkpCi0tIAoyLjE3LjEKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.13-c/0010-tools-xenstore-avoid-watch-events-for-nodes-without-.patch"
Content-Disposition: attachment;
 filename="xsa115-4.13-c/0010-tools-xenstore-avoid-watch-events-for-nodes-without-.patch"
Content-Transfer-Encoding: base64

RnJvbSBlNTdiNzY4N2I0M2IwMzNmZTQ1ZTc1NWUyODVlZmJlNjdiYzcxOTIx
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IFRodSwgMTEgSnVuIDIwMjAgMTY6
MTI6NDYgKzAyMDAKU3ViamVjdDogW1BBVENIIDEwLzEwXSB0b29scy94ZW5z
dG9yZTogYXZvaWQgd2F0Y2ggZXZlbnRzIGZvciBub2RlcyB3aXRob3V0CiBh
Y2Nlc3MKClRvZGF5IHdhdGNoIGV2ZW50cyBhcmUgc2VudCByZWdhcmRsZXNz
IG9mIHRoZSBhY2Nlc3MgcmlnaHRzIG9mIHRoZQpub2RlIHRoZSBldmVudCBp
cyBzZW50IGZvci4gVGhpcyBlbmFibGVzIGFueSBndWVzdCB0byBlLmcuIHNl
dHVwIGEKd2F0Y2ggZm9yICIvIiBpbiBvcmRlciB0byBoYXZlIGEgZGV0YWls
ZWQgcmVjb3JkIG9mIGFsbCBYZW5zdG9yZQptb2RpZmljYXRpb25zLgoKTW9k
aWZ5IHRoYXQgYnkgc2VuZGluZyBvbmx5IHdhdGNoIGV2ZW50cyBmb3Igbm9k
ZXMgdGhhdCB0aGUgd2F0Y2hlcgpoYXMgYSBjaGFuY2UgdG8gc2VlIG90aGVy
d2lzZSAoZWl0aGVyIHZpYSBkaXJlY3QgcmVhZHMgb3IgYnkgcXVlcnlpbmcK
dGhlIGNoaWxkcmVuIG9mIGEgbm9kZSkuIFRoaXMgaW5jbHVkZXMgY2FzZXMg
d2hlcmUgdGhlIHZpc2liaWxpdHkgb2YKYSBub2RlIGZvciBhIHdhdGNoZXIg
aXMgY2hhbmdpbmcgKHBlcm1pc3Npb25zIGJlaW5nIHJlbW92ZWQpLgoKVGhp
cyBpcyBwYXJ0IG9mIFhTQS0xMTUuCgpTaWduZWQtb2ZmLWJ5OiBKdWVyZ2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+CltqdWxpZW5nOiBIYW5kbGUgcmVi
YXNlIGNvbmZsaWN0XQpSZXZpZXdlZC1ieTogSnVsaWVuIEdyYWxsIDxqZ3Jh
bGxAYW1hem9uLmNvbT4KUmV2aWV3ZWQtYnk6IFBhdWwgRHVycmFudCA8cGF1
bEB4ZW4ub3JnPgotLS0KIHRvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3Jl
LmMgICAgICAgIHwgMjggKysrKystLS0tLQogdG9vbHMveGVuc3RvcmUveGVu
c3RvcmVkX2NvcmUuaCAgICAgICAgfCAxNSArKysrLS0KIHRvb2xzL3hlbnN0
b3JlL3hlbnN0b3JlZF9kb21haW4uYyAgICAgIHwgIDYgKy0tCiB0b29scy94
ZW5zdG9yZS94ZW5zdG9yZWRfdHJhbnNhY3Rpb24uYyB8IDIxICsrKysrKyst
CiB0b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfd2F0Y2guYyAgICAgICB8IDc1
ICsrKysrKysrKysrKysrKysrKystLS0tLS0tCiB0b29scy94ZW5zdG9yZS94
ZW5zdG9yZWRfd2F0Y2guaCAgICAgICB8ICAyICstCiA2IGZpbGVzIGNoYW5n
ZWQsIDEwNCBpbnNlcnRpb25zKCspLCA0MyBkZWxldGlvbnMoLSkKCmRpZmYg
LS1naXQgYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jIGIvdG9v
bHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYwppbmRleCA3MjBiZWMyNjlk
ZDMuLjFjMjg0NTQ1NDU2MCAxMDA2NDQKLS0tIGEvdG9vbHMveGVuc3RvcmUv
eGVuc3RvcmVkX2NvcmUuYworKysgYi90b29scy94ZW5zdG9yZS94ZW5zdG9y
ZWRfY29yZS5jCkBAIC0zNTgsOCArMzU4LDggQEAgc3RhdGljIHZvaWQgaW5p
dGlhbGl6ZV9mZHMoaW50IHNvY2ssIGludCAqcF9zb2NrX3BvbGxmZF9pZHgs
CiAgKiBJZiBpdCBmYWlscywgcmV0dXJucyBOVUxMIGFuZCBzZXRzIGVycm5v
LgogICogVGVtcG9yYXJ5IG1lbW9yeSBhbGxvY2F0aW9ucyB3aWxsIGJlIGRv
bmUgd2l0aCBjdHguCiAgKi8KLXN0YXRpYyBzdHJ1Y3Qgbm9kZSAqcmVhZF9u
b2RlKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBjb25zdCB2b2lkICpjdHgs
Ci0JCQkgICAgICBjb25zdCBjaGFyICpuYW1lKQorc3RydWN0IG5vZGUgKnJl
YWRfbm9kZShzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgY29uc3Qgdm9pZCAq
Y3R4LAorCQkgICAgICAgY29uc3QgY2hhciAqbmFtZSkKIHsKIAlUREJfREFU
QSBrZXksIGRhdGE7CiAJc3RydWN0IHhzX3RkYl9yZWNvcmRfaGRyICpoZHI7
CkBAIC00OTQsNyArNDk0LDcgQEAgZW51bSB4c19wZXJtX3R5cGUgcGVybV9m
b3JfY29ubihzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwKICAqIEdldCBuYW1l
IG9mIG5vZGUgcGFyZW50LgogICogVGVtcG9yYXJ5IG1lbW9yeSBhbGxvY2F0
aW9ucyBhcmUgZG9uZSB3aXRoIGN0eC4KICAqLwotc3RhdGljIGNoYXIgKmdl
dF9wYXJlbnQoY29uc3Qgdm9pZCAqY3R4LCBjb25zdCBjaGFyICpub2RlKQor
Y2hhciAqZ2V0X3BhcmVudChjb25zdCB2b2lkICpjdHgsIGNvbnN0IGNoYXIg
Km5vZGUpCiB7CiAJY2hhciAqcGFyZW50OwogCWNoYXIgKnNsYXNoID0gc3Ry
cmNocihub2RlICsgMSwgJy8nKTsKQEAgLTU2NiwxMCArNTY2LDEwIEBAIHN0
YXRpYyBpbnQgZXJybm9fZnJvbV9wYXJlbnRzKHN0cnVjdCBjb25uZWN0aW9u
ICpjb25uLCBjb25zdCB2b2lkICpjdHgsCiAgKiBJZiBpdCBmYWlscywgcmV0
dXJucyBOVUxMIGFuZCBzZXRzIGVycm5vLgogICogVGVtcG9yYXJ5IG1lbW9y
eSBhbGxvY2F0aW9ucyBhcmUgZG9uZSB3aXRoIGN0eC4KICAqLwotc3RydWN0
IG5vZGUgKmdldF9ub2RlKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLAotCQkg
ICAgICBjb25zdCB2b2lkICpjdHgsCi0JCSAgICAgIGNvbnN0IGNoYXIgKm5h
bWUsCi0JCSAgICAgIGVudW0geHNfcGVybV90eXBlIHBlcm0pCitzdGF0aWMg
c3RydWN0IG5vZGUgKmdldF9ub2RlKHN0cnVjdCBjb25uZWN0aW9uICpjb25u
LAorCQkJICAgICBjb25zdCB2b2lkICpjdHgsCisJCQkgICAgIGNvbnN0IGNo
YXIgKm5hbWUsCisJCQkgICAgIGVudW0geHNfcGVybV90eXBlIHBlcm0pCiB7
CiAJc3RydWN0IG5vZGUgKm5vZGU7CiAKQEAgLTEwNTYsNyArMTA1Niw3IEBA
IHN0YXRpYyBpbnQgZG9fd3JpdGUoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4s
IHN0cnVjdCBidWZmZXJlZF9kYXRhICppbikKIAkJCXJldHVybiBlcnJubzsK
IAl9CiAKLQlmaXJlX3dhdGNoZXMoY29ubiwgaW4sIG5hbWUsIGZhbHNlKTsK
KwlmaXJlX3dhdGNoZXMoY29ubiwgaW4sIG5hbWUsIG5vZGUsIGZhbHNlLCBO
VUxMKTsKIAlzZW5kX2Fjayhjb25uLCBYU19XUklURSk7CiAKIAlyZXR1cm4g
MDsKQEAgLTEwNzgsNyArMTA3OCw3IEBAIHN0YXRpYyBpbnQgZG9fbWtkaXIo
c3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBidWZmZXJlZF9kYXRh
ICppbikKIAkJbm9kZSA9IGNyZWF0ZV9ub2RlKGNvbm4sIGluLCBuYW1lLCBO
VUxMLCAwKTsKIAkJaWYgKCFub2RlKQogCQkJcmV0dXJuIGVycm5vOwotCQlm
aXJlX3dhdGNoZXMoY29ubiwgaW4sIG5hbWUsIGZhbHNlKTsKKwkJZmlyZV93
YXRjaGVzKGNvbm4sIGluLCBuYW1lLCBub2RlLCBmYWxzZSwgTlVMTCk7CiAJ
fQogCXNlbmRfYWNrKGNvbm4sIFhTX01LRElSKTsKIApAQCAtMTE0MSw3ICsx
MTQxLDcgQEAgc3RhdGljIGludCBkZWxldGVfbm9kZShzdHJ1Y3QgY29ubmVj
dGlvbiAqY29ubiwgY29uc3Qgdm9pZCAqY3R4LAogCQl0YWxsb2NfZnJlZShu
YW1lKTsKIAl9CiAKLQlmaXJlX3dhdGNoZXMoY29ubiwgY3R4LCBub2RlLT5u
YW1lLCB0cnVlKTsKKwlmaXJlX3dhdGNoZXMoY29ubiwgY3R4LCBub2RlLT5u
YW1lLCBub2RlLCB0cnVlLCBOVUxMKTsKIAlkZWxldGVfbm9kZV9zaW5nbGUo
Y29ubiwgbm9kZSk7CiAJZGVsZXRlX2NoaWxkKGNvbm4sIHBhcmVudCwgYmFz
ZW5hbWUobm9kZS0+bmFtZSkpOwogCXRhbGxvY19mcmVlKG5vZGUpOwpAQCAt
MTE2NSwxMyArMTE2NSwxNCBAQCBzdGF0aWMgaW50IF9ybShzdHJ1Y3QgY29u
bmVjdGlvbiAqY29ubiwgY29uc3Qgdm9pZCAqY3R4LCBzdHJ1Y3Qgbm9kZSAq
bm9kZSwKIAlwYXJlbnQgPSByZWFkX25vZGUoY29ubiwgY3R4LCBwYXJlbnRu
YW1lKTsKIAlpZiAoIXBhcmVudCkKIAkJcmV0dXJuIChlcnJubyA9PSBFTk9N
RU0pID8gRU5PTUVNIDogRUlOVkFMOworCW5vZGUtPnBhcmVudCA9IHBhcmVu
dDsKIAogCS8qCiAJICogRmlyZSB0aGUgd2F0Y2hlcyBub3csIHdoZW4gd2Ug
Y2FuIHN0aWxsIHNlZSB0aGUgbm9kZSBwZXJtaXNzaW9ucy4KIAkgKiBUaGlz
IGZpbmUgYXMgd2UgYXJlIHNpbmdsZSB0aHJlYWRlZCBhbmQgdGhlIG5leHQg
cG9zc2libGUgcmVhZCB3aWxsCiAJICogYmUgaGFuZGxlZCBvbmx5IGFmdGVy
IHRoZSBub2RlIGhhcyBiZWVuIHJlYWxseSByZW1vdmVkLgogCSAqLwotCWZp
cmVfd2F0Y2hlcyhjb25uLCBjdHgsIG5hbWUsIGZhbHNlKTsKKwlmaXJlX3dh
dGNoZXMoY29ubiwgY3R4LCBuYW1lLCBub2RlLCBmYWxzZSwgTlVMTCk7CiAJ
cmV0dXJuIGRlbGV0ZV9ub2RlKGNvbm4sIGN0eCwgcGFyZW50LCBub2RlKTsK
IH0KIApAQCAtMTIzNyw3ICsxMjM4LDcgQEAgc3RhdGljIGludCBkb19nZXRf
cGVybXMoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBidWZmZXJl
ZF9kYXRhICppbikKIAogc3RhdGljIGludCBkb19zZXRfcGVybXMoc3RydWN0
IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBidWZmZXJlZF9kYXRhICppbikK
IHsKLQlzdHJ1Y3Qgbm9kZV9wZXJtcyBwZXJtczsKKwlzdHJ1Y3Qgbm9kZV9w
ZXJtcyBwZXJtcywgb2xkX3Blcm1zOwogCWNoYXIgKm5hbWUsICpwZXJtc3Ry
OwogCXN0cnVjdCBub2RlICpub2RlOwogCkBAIC0xMjczLDYgKzEyNzQsNyBA
QCBzdGF0aWMgaW50IGRvX3NldF9wZXJtcyhzdHJ1Y3QgY29ubmVjdGlvbiAq
Y29ubiwgc3RydWN0IGJ1ZmZlcmVkX2RhdGEgKmluKQogCSAgICBwZXJtcy5w
WzBdLmlkICE9IG5vZGUtPnBlcm1zLnBbMF0uaWQpCiAJCXJldHVybiBFUEVS
TTsKIAorCW9sZF9wZXJtcyA9IG5vZGUtPnBlcm1zOwogCWRvbWFpbl9lbnRy
eV9kZWMoY29ubiwgbm9kZSk7CiAJbm9kZS0+cGVybXMgPSBwZXJtczsKIAlk
b21haW5fZW50cnlfaW5jKGNvbm4sIG5vZGUpOwpAQCAtMTI4MCw3ICsxMjgy
LDcgQEAgc3RhdGljIGludCBkb19zZXRfcGVybXMoc3RydWN0IGNvbm5lY3Rp
b24gKmNvbm4sIHN0cnVjdCBidWZmZXJlZF9kYXRhICppbikKIAlpZiAod3Jp
dGVfbm9kZShjb25uLCBub2RlLCBmYWxzZSkpCiAJCXJldHVybiBlcnJubzsK
IAotCWZpcmVfd2F0Y2hlcyhjb25uLCBpbiwgbmFtZSwgZmFsc2UpOworCWZp
cmVfd2F0Y2hlcyhjb25uLCBpbiwgbmFtZSwgbm9kZSwgZmFsc2UsICZvbGRf
cGVybXMpOwogCXNlbmRfYWNrKGNvbm4sIFhTX1NFVF9QRVJNUyk7CiAKIAly
ZXR1cm4gMDsKZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3Jl
ZF9jb3JlLmggYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5oCmlu
ZGV4IGYzZGE2YmJjOTQzZC4uZTA1MGIyN2NiZGRlIDEwMDY0NAotLS0gYS90
b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5oCisrKyBiL3Rvb2xzL3hl
bnN0b3JlL3hlbnN0b3JlZF9jb3JlLmgKQEAgLTE1MiwxNSArMTUyLDE3IEBA
IHZvaWQgc2VuZF9hY2soc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIGVudW0g
eHNkX3NvY2ttc2dfdHlwZSB0eXBlKTsKIC8qIENhbm9uaWNhbGl6ZSB0aGlz
IHBhdGggaWYgcG9zc2libGUuICovCiBjaGFyICpjYW5vbmljYWxpemUoc3Ry
dWN0IGNvbm5lY3Rpb24gKmNvbm4sIGNvbnN0IHZvaWQgKmN0eCwgY29uc3Qg
Y2hhciAqbm9kZSk7CiAKKy8qIEdldCBhY2Nlc3MgcGVybWlzc2lvbnMuICov
CitlbnVtIHhzX3Blcm1fdHlwZSBwZXJtX2Zvcl9jb25uKHN0cnVjdCBjb25u
ZWN0aW9uICpjb25uLAorCQkJCWNvbnN0IHN0cnVjdCBub2RlX3Blcm1zICpw
ZXJtcyk7CisKIC8qIFdyaXRlIGEgbm9kZSB0byB0aGUgdGRiIGRhdGEgYmFz
ZS4gKi8KIGludCB3cml0ZV9ub2RlX3JhdyhzdHJ1Y3QgY29ubmVjdGlvbiAq
Y29ubiwgVERCX0RBVEEgKmtleSwgc3RydWN0IG5vZGUgKm5vZGUsCiAJCSAg
IGJvb2wgbm9fcXVvdGFfY2hlY2spOwogCi0vKiBHZXQgdGhpcyBub2RlLCBj
aGVja2luZyB3ZSBoYXZlIHBlcm1pc3Npb25zLiAqLwotc3RydWN0IG5vZGUg
KmdldF9ub2RlKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLAotCQkgICAgICBj
b25zdCB2b2lkICpjdHgsCi0JCSAgICAgIGNvbnN0IGNoYXIgKm5hbWUsCi0J
CSAgICAgIGVudW0geHNfcGVybV90eXBlIHBlcm0pOworLyogR2V0IGEgbm9k
ZSBmcm9tIHRoZSB0ZGIgZGF0YSBiYXNlLiAqLworc3RydWN0IG5vZGUgKnJl
YWRfbm9kZShzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgY29uc3Qgdm9pZCAq
Y3R4LAorCQkgICAgICAgY29uc3QgY2hhciAqbmFtZSk7CiAKIHN0cnVjdCBj
b25uZWN0aW9uICpuZXdfY29ubmVjdGlvbihjb25ud3JpdGVmbl90ICp3cml0
ZSwgY29ubnJlYWRmbl90ICpyZWFkKTsKIHZvaWQgY2hlY2tfc3RvcmUodm9p
ZCk7CkBAIC0xNzEsNiArMTczLDkgQEAgZW51bSB4c19wZXJtX3R5cGUgcGVy
bV9mb3JfY29ubihzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwKIC8qIElzIHRo
aXMgYSB2YWxpZCBub2RlIG5hbWU/ICovCiBib29sIGlzX3ZhbGlkX25vZGVu
YW1lKGNvbnN0IGNoYXIgKm5vZGUpOwogCisvKiBHZXQgbmFtZSBvZiBwYXJl
bnQgbm9kZS4gKi8KK2NoYXIgKmdldF9wYXJlbnQoY29uc3Qgdm9pZCAqY3R4
LCBjb25zdCBjaGFyICpub2RlKTsKKwogLyogVHJhY2luZyBpbmZyYXN0cnVj
dHVyZS4gKi8KIHZvaWQgdHJhY2VfY3JlYXRlKGNvbnN0IHZvaWQgKmRhdGEs
IGNvbnN0IGNoYXIgKnR5cGUpOwogdm9pZCB0cmFjZV9kZXN0cm95KGNvbnN0
IHZvaWQgKmRhdGEsIGNvbnN0IGNoYXIgKnR5cGUpOwpkaWZmIC0tZ2l0IGEv
dG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2RvbWFpbi5jIGIvdG9vbHMveGVu
c3RvcmUveGVuc3RvcmVkX2RvbWFpbi5jCmluZGV4IDdhZmFiZTBhZTA4NC4u
NzExYTExYjE4YWQ2IDEwMDY0NAotLS0gYS90b29scy94ZW5zdG9yZS94ZW5z
dG9yZWRfZG9tYWluLmMKKysrIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVk
X2RvbWFpbi5jCkBAIC0yMDYsNyArMjA2LDcgQEAgc3RhdGljIGludCBkZXN0
cm95X2RvbWFpbih2b2lkICpfZG9tYWluKQogCQkJdW5tYXBfaW50ZXJmYWNl
KGRvbWFpbi0+aW50ZXJmYWNlKTsKIAl9CiAKLQlmaXJlX3dhdGNoZXMoTlVM
TCwgZG9tYWluLCAiQHJlbGVhc2VEb21haW4iLCBmYWxzZSk7CisJZmlyZV93
YXRjaGVzKE5VTEwsIGRvbWFpbiwgIkByZWxlYXNlRG9tYWluIiwgTlVMTCwg
ZmFsc2UsIE5VTEwpOwogCiAJd3JsX2RvbWFpbl9kZXN0cm95KGRvbWFpbik7
CiAKQEAgLTI0NCw3ICsyNDQsNyBAQCBzdGF0aWMgdm9pZCBkb21haW5fY2xl
YW51cCh2b2lkKQogCX0KIAogCWlmIChub3RpZnkpCi0JCWZpcmVfd2F0Y2hl
cyhOVUxMLCBOVUxMLCAiQHJlbGVhc2VEb21haW4iLCBmYWxzZSk7CisJCWZp
cmVfd2F0Y2hlcyhOVUxMLCBOVUxMLCAiQHJlbGVhc2VEb21haW4iLCBOVUxM
LCBmYWxzZSwgTlVMTCk7CiB9CiAKIC8qIFdlIHNjYW4gYWxsIGRvbWFpbnMg
cmF0aGVyIHRoYW4gdXNlIHRoZSBpbmZvcm1hdGlvbiBnaXZlbiBoZXJlLiAq
LwpAQCAtNDEwLDcgKzQxMCw3IEBAIGludCBkb19pbnRyb2R1Y2Uoc3RydWN0
IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBidWZmZXJlZF9kYXRhICppbikK
IAkJLyogTm93IGRvbWFpbiBiZWxvbmdzIHRvIGl0cyBjb25uZWN0aW9uLiAq
LwogCQl0YWxsb2Nfc3RlYWwoZG9tYWluLT5jb25uLCBkb21haW4pOwogCi0J
CWZpcmVfd2F0Y2hlcyhOVUxMLCBpbiwgIkBpbnRyb2R1Y2VEb21haW4iLCBm
YWxzZSk7CisJCWZpcmVfd2F0Y2hlcyhOVUxMLCBpbiwgIkBpbnRyb2R1Y2VE
b21haW4iLCBOVUxMLCBmYWxzZSwgTlVMTCk7CiAJfSBlbHNlIGlmICgoZG9t
YWluLT5tZm4gPT0gbWZuKSAmJiAoZG9tYWluLT5jb25uICE9IGNvbm4pKSB7
CiAJCS8qIFVzZSBYU19JTlRST0RVQ0UgZm9yIHJlY3JlYXRpbmcgdGhlIHhl
bmJ1cyBldmVudC1jaGFubmVsLiAqLwogCQlpZiAoZG9tYWluLT5wb3J0KQpk
aWZmIC0tZ2l0IGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3RyYW5zYWN0
aW9uLmMgYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfdHJhbnNhY3Rpb24u
YwppbmRleCBlODc4OTc1NzM0NjkuLmE3ZDhjNWQ0NzVlYyAxMDA2NDQKLS0t
IGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3RyYW5zYWN0aW9uLmMKKysr
IGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3RyYW5zYWN0aW9uLmMKQEAg
LTExNCw2ICsxMTQsOSBAQCBzdHJ1Y3QgYWNjZXNzZWRfbm9kZQogCS8qIEdl
bmVyYXRpb24gY291bnQgKG9yIE5PX0dFTkVSQVRJT04pIGZvciBjb25mbGlj
dCBjaGVja2luZy4gKi8KIAl1aW50NjRfdCBnZW5lcmF0aW9uOwogCisJLyog
T3JpZ2luYWwgbm9kZSBwZXJtaXNzaW9ucy4gKi8KKwlzdHJ1Y3Qgbm9kZV9w
ZXJtcyBwZXJtczsKKwogCS8qIEdlbmVyYXRpb24gY291bnQgY2hlY2tpbmcg
cmVxdWlyZWQ/ICovCiAJYm9vbCBjaGVja19nZW47CiAKQEAgLTI2MCw2ICsy
NjMsMTUgQEAgaW50IGFjY2Vzc19ub2RlKHN0cnVjdCBjb25uZWN0aW9uICpj
b25uLCBzdHJ1Y3Qgbm9kZSAqbm9kZSwKIAkJaS0+bm9kZSA9IHRhbGxvY19z
dHJkdXAoaSwgbm9kZS0+bmFtZSk7CiAJCWlmICghaS0+bm9kZSkKIAkJCWdv
dG8gbm9tZW07CisJCWlmIChub2RlLT5nZW5lcmF0aW9uICE9IE5PX0dFTkVS
QVRJT04gJiYgbm9kZS0+cGVybXMubnVtKSB7CisJCQlpLT5wZXJtcy5wID0g
dGFsbG9jX2FycmF5KGksIHN0cnVjdCB4c19wZXJtaXNzaW9ucywKKwkJCQkJ
CSAgbm9kZS0+cGVybXMubnVtKTsKKwkJCWlmICghaS0+cGVybXMucCkKKwkJ
CQlnb3RvIG5vbWVtOworCQkJaS0+cGVybXMubnVtID0gbm9kZS0+cGVybXMu
bnVtOworCQkJbWVtY3B5KGktPnBlcm1zLnAsIG5vZGUtPnBlcm1zLnAsCisJ
CQkgICAgICAgaS0+cGVybXMubnVtICogc2l6ZW9mKCppLT5wZXJtcy5wKSk7
CisJCX0KIAogCQlpbnRyb2R1Y2UgPSB0cnVlOwogCQlpLT50YV9ub2RlID0g
ZmFsc2U7CkBAIC0zNjgsOSArMzgwLDE0IEBAIHN0YXRpYyBpbnQgZmluYWxp
emVfdHJhbnNhY3Rpb24oc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sCiAJCQkJ
dGFsbG9jX2ZyZWUoZGF0YS5kcHRyKTsKIAkJCQlpZiAocmV0KQogCQkJCQln
b3RvIGVycjsKLQkJCX0gZWxzZSBpZiAodGRiX2RlbGV0ZSh0ZGJfY3R4LCBr
ZXkpKQorCQkJCWZpcmVfd2F0Y2hlcyhjb25uLCB0cmFucywgaS0+bm9kZSwg
TlVMTCwgZmFsc2UsCisJCQkJCSAgICAgaS0+cGVybXMucCA/ICZpLT5wZXJt
cyA6IE5VTEwpOworCQkJfSBlbHNlIHsKKwkJCQlmaXJlX3dhdGNoZXMoY29u
biwgdHJhbnMsIGktPm5vZGUsIE5VTEwsIGZhbHNlLAorCQkJCQkgICAgIGkt
PnBlcm1zLnAgPyAmaS0+cGVybXMgOiBOVUxMKTsKKwkJCQlpZiAodGRiX2Rl
bGV0ZSh0ZGJfY3R4LCBrZXkpKQogCQkJCQlnb3RvIGVycjsKLQkJCWZpcmVf
d2F0Y2hlcyhjb25uLCB0cmFucywgaS0+bm9kZSwgZmFsc2UpOworCQkJfQog
CQl9CiAKIAkJaWYgKGktPnRhX25vZGUgJiYgdGRiX2RlbGV0ZSh0ZGJfY3R4
LCB0YV9rZXkpKQpkaWZmIC0tZ2l0IGEvdG9vbHMveGVuc3RvcmUveGVuc3Rv
cmVkX3dhdGNoLmMgYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfd2F0Y2gu
YwppbmRleCBmNGUyODkzNjJlYjYuLjcxYzEwOGVhOTlmMSAxMDA2NDQKLS0t
IGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3dhdGNoLmMKKysrIGIvdG9v
bHMveGVuc3RvcmUveGVuc3RvcmVkX3dhdGNoLmMKQEAgLTg1LDIyICs4NSw2
IEBAIHN0YXRpYyB2b2lkIGFkZF9ldmVudChzdHJ1Y3QgY29ubmVjdGlvbiAq
Y29ubiwKIAl1bnNpZ25lZCBpbnQgbGVuOwogCWNoYXIgKmRhdGE7CiAKLQlp
ZiAoIWNoZWNrX3NwZWNpYWxfZXZlbnQobmFtZSkpIHsKLQkJLyogQ2FuIHRo
aXMgY29ubiBsb2FkIG5vZGUsIG9yIHNlZSB0aGF0IGl0IGRvZXNuJ3QgZXhp
c3Q/ICovCi0JCXN0cnVjdCBub2RlICpub2RlID0gZ2V0X25vZGUoY29ubiwg
Y3R4LCBuYW1lLCBYU19QRVJNX1JFQUQpOwotCQkvKgotCQkgKiBYWFggV2Ug
YWxsb3cgRUFDQ0VTIGhlcmUgYmVjYXVzZSBvdGhlcndpc2UgYSBub24tZG9t
MAotCQkgKiBiYWNrZW5kIGRyaXZlciBjYW5ub3Qgd2F0Y2ggZm9yIGRpc2Fw
cGVhcmFuY2Ugb2YgYSBmcm9udGVuZAotCQkgKiB4ZW5zdG9yZSBkaXJlY3Rv
cnkuIFdoZW4gdGhlIGRpcmVjdG9yeSBkaXNhcHBlYXJzLCB3ZQotCQkgKiBy
ZXZlcnQgdG8gcGVybWlzc2lvbnMgb2YgdGhlIHBhcmVudCBkaXJlY3Rvcnkg
Zm9yIHRoYXQgcGF0aCwKLQkJICogd2hpY2ggd2lsbCB0eXBpY2FsbHkgZGlz
YWxsb3cgYWNjZXNzIGZvciB0aGUgYmFja2VuZC4KLQkJICogQnV0IHRoaXMg
YnJlYWtzIGRldmljZS1jaGFubmVsIHRlYXJkb3duIQotCQkgKiBSZWFsbHkg
d2Ugc2hvdWxkIGZpeCB0aGlzIGJldHRlci4uLgotCQkgKi8KLQkJaWYgKCFu
b2RlICYmIGVycm5vICE9IEVOT0VOVCAmJiBlcnJubyAhPSBFQUNDRVMpCi0J
CQlyZXR1cm47Ci0JfQotCiAJaWYgKHdhdGNoLT5yZWxhdGl2ZV9wYXRoKSB7
CiAJCW5hbWUgKz0gc3RybGVuKHdhdGNoLT5yZWxhdGl2ZV9wYXRoKTsKIAkJ
aWYgKCpuYW1lID09ICcvJykgLyogQ291bGQgYmUgIiIgKi8KQEAgLTExNywx
MiArMTAxLDYwIEBAIHN0YXRpYyB2b2lkIGFkZF9ldmVudChzdHJ1Y3QgY29u
bmVjdGlvbiAqY29ubiwKIAl0YWxsb2NfZnJlZShkYXRhKTsKIH0KIAorLyoK
KyAqIENoZWNrIHBlcm1pc3Npb25zIG9mIGEgc3BlY2lmaWMgd2F0Y2ggdG8g
ZmlyZToKKyAqIEVpdGhlciB0aGUgbm9kZSBpdHNlbGYgb3IgaXRzIHBhcmVu
dCBoYXZlIHRvIGJlIHJlYWRhYmxlIGJ5IHRoZSBjb25uZWN0aW9uCisgKiB0
aGUgd2F0Y2ggaGFzIGJlZW4gc2V0dXAgZm9yLiBJbiBjYXNlIGEgd2F0Y2gg
ZXZlbnQgaXMgY3JlYXRlZCBkdWUgdG8KKyAqIGNoYW5nZWQgcGVybWlzc2lv
bnMgd2UgbmVlZCB0byB0YWtlIHRoZSBvbGQgcGVybWlzc2lvbnMgaW50byBh
Y2NvdW50LCB0b28uCisgKi8KK3N0YXRpYyBib29sIHdhdGNoX3Blcm1pdHRl
ZChzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgY29uc3Qgdm9pZCAqY3R4LAor
CQkJICAgIGNvbnN0IGNoYXIgKm5hbWUsIHN0cnVjdCBub2RlICpub2RlLAor
CQkJICAgIHN0cnVjdCBub2RlX3Blcm1zICpwZXJtcykKK3sKKwllbnVtIHhz
X3Blcm1fdHlwZSBwZXJtOworCXN0cnVjdCBub2RlICpwYXJlbnQ7CisJY2hh
ciAqcGFyZW50X25hbWU7CisKKwlpZiAocGVybXMpIHsKKwkJcGVybSA9IHBl
cm1fZm9yX2Nvbm4oY29ubiwgcGVybXMpOworCQlpZiAocGVybSAmIFhTX1BF
Uk1fUkVBRCkKKwkJCXJldHVybiB0cnVlOworCX0KKworCWlmICghbm9kZSkg
eworCQlub2RlID0gcmVhZF9ub2RlKGNvbm4sIGN0eCwgbmFtZSk7CisJCWlm
ICghbm9kZSkKKwkJCXJldHVybiBmYWxzZTsKKwl9CisKKwlwZXJtID0gcGVy
bV9mb3JfY29ubihjb25uLCAmbm9kZS0+cGVybXMpOworCWlmIChwZXJtICYg
WFNfUEVSTV9SRUFEKQorCQlyZXR1cm4gdHJ1ZTsKKworCXBhcmVudCA9IG5v
ZGUtPnBhcmVudDsKKwlpZiAoIXBhcmVudCkgeworCQlwYXJlbnRfbmFtZSA9
IGdldF9wYXJlbnQoY3R4LCBub2RlLT5uYW1lKTsKKwkJaWYgKCFwYXJlbnRf
bmFtZSkKKwkJCXJldHVybiBmYWxzZTsKKwkJcGFyZW50ID0gcmVhZF9ub2Rl
KGNvbm4sIGN0eCwgcGFyZW50X25hbWUpOworCQlpZiAoIXBhcmVudCkKKwkJ
CXJldHVybiBmYWxzZTsKKwl9CisKKwlwZXJtID0gcGVybV9mb3JfY29ubihj
b25uLCAmcGFyZW50LT5wZXJtcyk7CisKKwlyZXR1cm4gcGVybSAmIFhTX1BF
Uk1fUkVBRDsKK30KKwogLyoKICAqIENoZWNrIHdoZXRoZXIgYW55IHdhdGNo
IGV2ZW50cyBhcmUgdG8gYmUgc2VudC4KICAqIFRlbXBvcmFyeSBtZW1vcnkg
YWxsb2NhdGlvbnMgYXJlIGRvbmUgd2l0aCBjdHguCisgKiBXZSBuZWVkIHRv
IHRha2UgdGhlIChwb3RlbnRpYWwpIG9sZCBwZXJtaXNzaW9ucyBvZiB0aGUg
bm9kZSBpbnRvIGFjY291bnQKKyAqIGFzIGEgd2F0Y2hlciBsb3NpbmcgcGVy
bWlzc2lvbnMgdG8gYWNjZXNzIGEgbm9kZSBzaG91bGQgcmVjZWl2ZSB0aGUK
KyAqIHdhdGNoIGV2ZW50LCB0b28uCiAgKi8KIHZvaWQgZmlyZV93YXRjaGVz
KHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBjb25zdCB2b2lkICpjdHgsIGNv
bnN0IGNoYXIgKm5hbWUsCi0JCSAgYm9vbCBleGFjdCkKKwkJICBzdHJ1Y3Qg
bm9kZSAqbm9kZSwgYm9vbCBleGFjdCwgc3RydWN0IG5vZGVfcGVybXMgKnBl
cm1zKQogewogCXN0cnVjdCBjb25uZWN0aW9uICppOwogCXN0cnVjdCB3YXRj
aCAqd2F0Y2g7CkBAIC0xMzQsOCArMTY2LDEzIEBAIHZvaWQgZmlyZV93YXRj
aGVzKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBjb25zdCB2b2lkICpjdHgs
IGNvbnN0IGNoYXIgKm5hbWUsCiAJLyogQ3JlYXRlIGFuIGV2ZW50IGZvciBl
YWNoIHdhdGNoLiAqLwogCWxpc3RfZm9yX2VhY2hfZW50cnkoaSwgJmNvbm5l
Y3Rpb25zLCBsaXN0KSB7CiAJCS8qIGludHJvZHVjZS9yZWxlYXNlIGRvbWFp
biB3YXRjaGVzICovCi0JCWlmIChjaGVja19zcGVjaWFsX2V2ZW50KG5hbWUp
ICYmICFjaGVja19wZXJtc19zcGVjaWFsKG5hbWUsIGkpKQotCQkJY29udGlu
dWU7CisJCWlmIChjaGVja19zcGVjaWFsX2V2ZW50KG5hbWUpKSB7CisJCQlp
ZiAoIWNoZWNrX3Blcm1zX3NwZWNpYWwobmFtZSwgaSkpCisJCQkJY29udGlu
dWU7CisJCX0gZWxzZSB7CisJCQlpZiAoIXdhdGNoX3Blcm1pdHRlZChpLCBj
dHgsIG5hbWUsIG5vZGUsIHBlcm1zKSkKKwkJCQljb250aW51ZTsKKwkJfQog
CiAJCWxpc3RfZm9yX2VhY2hfZW50cnkod2F0Y2gsICZpLT53YXRjaGVzLCBs
aXN0KSB7CiAJCQlpZiAoZXhhY3QpIHsKZGlmZiAtLWdpdCBhL3Rvb2xzL3hl
bnN0b3JlL3hlbnN0b3JlZF93YXRjaC5oIGIvdG9vbHMveGVuc3RvcmUveGVu
c3RvcmVkX3dhdGNoLmgKaW5kZXggMWIzYzgwZDNkZGExLi4wMzA5NDM3NGYz
NzkgMTAwNjQ0Ci0tLSBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF93YXRj
aC5oCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF93YXRjaC5oCkBA
IC0yNiw3ICsyNiw3IEBAIGludCBkb191bndhdGNoKHN0cnVjdCBjb25uZWN0
aW9uICpjb25uLCBzdHJ1Y3QgYnVmZmVyZWRfZGF0YSAqaW4pOwogCiAvKiBG
aXJlIGFsbCB3YXRjaGVzOiAhZXhhY3QgbWVhbnMgYWxsIHRoZSBjaGlsZHJl
biBhcmUgYWZmZWN0ZWQgKGllLiBybSkuICovCiB2b2lkIGZpcmVfd2F0Y2hl
cyhzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgY29uc3Qgdm9pZCAqdG1wLCBj
b25zdCBjaGFyICpuYW1lLAotCQkgIGJvb2wgZXhhY3QpOworCQkgIHN0cnVj
dCBub2RlICpub2RlLCBib29sIGV4YWN0LCBzdHJ1Y3Qgbm9kZV9wZXJtcyAq
cGVybXMpOwogCiB2b2lkIGNvbm5fZGVsZXRlX2FsbF93YXRjaGVzKHN0cnVj
dCBjb25uZWN0aW9uICpjb25uKTsKIAotLSAKMi4xNy4xCgo=

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.14-c/0001-tools-xenstore-allow-removing-child-of-a-node-exceed.patch"
Content-Disposition: attachment;
 filename="xsa115-4.14-c/0001-tools-xenstore-allow-removing-child-of-a-node-exceed.patch"
Content-Transfer-Encoding: base64

RnJvbSA3MTYyMzQ5MmY3YjFiNmQ2M2VkNzZlMmJmOTcwYzExM2I4OGZmYTBi
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IFRodSwgMTEgSnVuIDIwMjAgMTY6
MTI6MzcgKzAyMDAKU3ViamVjdDogW1BBVENIIDAxLzEwXSB0b29scy94ZW5z
dG9yZTogYWxsb3cgcmVtb3ZpbmcgY2hpbGQgb2YgYSBub2RlCiBleGNlZWRp
bmcgcXVvdGEKCkFuIHVucHJpdmlsZWdlZCB1c2VyIG9mIFhlbnN0b3JlIGlz
IG5vdCBhbGxvd2VkIHRvIHdyaXRlIG5vZGVzIHdpdGggYQpzaXplIGV4Y2Vl
ZGluZyBhIGdsb2JhbCBxdW90YSwgd2hpbGUgcHJpdmlsZWdlZCB1c2VycyBs
aWtlIGRvbTAgYXJlCmFsbG93ZWQgdG8gd3JpdGUgc3VjaCBub2Rlcy4gVGhl
IHNpemUgb2YgYSBub2RlIGlzIHRoZSBuZWVkZWQgc3BhY2UKdG8gc3RvcmUg
YWxsIG5vZGUgc3BlY2lmaWMgZGF0YSwgdGhpcyBpbmNsdWRlcyB0aGUgbmFt
ZXMgb2YgYWxsCmNoaWxkcmVuIG9mIHRoZSBub2RlLgoKV2hlbiBkZWxldGlu
ZyBhIG5vZGUgaXRzIHBhcmVudCBoYXMgdG8gYmUgbW9kaWZpZWQgYnkgcmVt
b3ZpbmcgdGhlCm5hbWUgb2YgdGhlIHRvIGJlIGRlbGV0ZWQgY2hpbGQgZnJv
bSBpdC4KClRoaXMgcmVzdWx0cyBpbiB0aGUgc3RyYW5nZSBzaXR1YXRpb24g
dGhhdCBhbiB1bnByaXZpbGVnZWQgb3duZXIgb2YgYQpub2RlIG1pZ2h0IG5v
dCBzdWNjZWVkIGluIGRlbGV0aW5nIHRoYXQgbm9kZSBpbiBjYXNlIGl0cyBw
YXJlbnQgaXMKZXhjZWVkaW5nIHRoZSBxdW90YSBvZiB0aGF0IHVucHJpdmls
ZWdlZCB1c2VyIChpdCBtaWdodCBoYXZlIGJlZW4Kd3JpdHRlbiBieSBkb20w
KSwgYXMgdGhlIHVzZXIgaXMgbm90IGFsbG93ZWQgdG8gd3JpdGUgdGhlIHVw
ZGF0ZWQKcGFyZW50IG5vZGUuCgpGaXggdGhhdCBieSBub3QgY2hlY2tpbmcg
dGhlIHF1b3RhIHdoZW4gd3JpdGluZyBhIG5vZGUgZm9yIHRoZQpwdXJwb3Nl
IG9mIHJlbW92aW5nIGEgY2hpbGQncyBuYW1lIG9ubHkuCgpUaGUgc2FtZSBh
cHBsaWVzIHRvIHRyYW5zYWN0aW9uIGhhbmRsaW5nOiBhIG5vZGUgYmVpbmcg
cmVhZCBkdXJpbmcgYQp0cmFuc2FjdGlvbiBpcyB3cml0dGVuIHRvIHRoZSB0
cmFuc2FjdGlvbiBzcGVjaWZpYyBhcmVhIGFuZCBpdCBzaG91bGQKbm90IGJl
IHRlc3RlZCBmb3IgZXhjZWVkaW5nIHRoZSBxdW90YSwgYXMgaXQgbWlnaHQg
bm90IGJlIG93bmVkIGJ5CnRoZSByZWFkZXIgYW5kIHByZXN1bWFibHkgdGhl
IG9yaWdpbmFsIHdyaXRlIHdvdWxkIGhhdmUgZmFpbGVkIGlmIHRoZQpub2Rl
IGlzIG93bmVkIGJ5IHRoZSByZWFkZXIuCgpUaGlzIGlzIHBhcnQgb2YgWFNB
LTExNS4KClNpZ25lZC1vZmYtYnk6IEp1ZXJnZW4gR3Jvc3MgPGpncm9zc0Bz
dXNlLmNvbT4KUmV2aWV3ZWQtYnk6IEp1bGllbiBHcmFsbCA8amdyYWxsQGFt
YXpvbi5jb20+ClJldmlld2VkLWJ5OiBQYXVsIER1cnJhbnQgPHBhdWxAeGVu
Lm9yZz4KLS0tCiB0b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jICAg
ICAgICB8IDIwICsrKysrKysrKysrLS0tLS0tLS0tCiB0b29scy94ZW5zdG9y
ZS94ZW5zdG9yZWRfY29yZS5oICAgICAgICB8ICAzICsrLQogdG9vbHMveGVu
c3RvcmUveGVuc3RvcmVkX3RyYW5zYWN0aW9uLmMgfCAgMiArLQogMyBmaWxl
cyBjaGFuZ2VkLCAxNCBpbnNlcnRpb25zKCspLCAxMSBkZWxldGlvbnMoLSkK
CmRpZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5j
IGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYwppbmRleCA3YmQ5
NTlmMjhiMzkuLjYyYTE3YTY4NmVkYyAxMDA2NDQKLS0tIGEvdG9vbHMveGVu
c3RvcmUveGVuc3RvcmVkX2NvcmUuYworKysgYi90b29scy94ZW5zdG9yZS94
ZW5zdG9yZWRfY29yZS5jCkBAIC00MTksNyArNDE5LDggQEAgc3RhdGljIHN0
cnVjdCBub2RlICpyZWFkX25vZGUoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4s
IGNvbnN0IHZvaWQgKmN0eCwKIAlyZXR1cm4gbm9kZTsKIH0KIAotaW50IHdy
aXRlX25vZGVfcmF3KHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBUREJfREFU
QSAqa2V5LCBzdHJ1Y3Qgbm9kZSAqbm9kZSkKK2ludCB3cml0ZV9ub2RlX3Jh
dyhzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgVERCX0RBVEEgKmtleSwgc3Ry
dWN0IG5vZGUgKm5vZGUsCisJCSAgIGJvb2wgbm9fcXVvdGFfY2hlY2spCiB7
CiAJVERCX0RBVEEgZGF0YTsKIAl2b2lkICpwOwpAQCAtNDI5LDcgKzQzMCw3
IEBAIGludCB3cml0ZV9ub2RlX3JhdyhzdHJ1Y3QgY29ubmVjdGlvbiAqY29u
biwgVERCX0RBVEEgKmtleSwgc3RydWN0IG5vZGUgKm5vZGUpCiAJCSsgbm9k
ZS0+bnVtX3Blcm1zKnNpemVvZihub2RlLT5wZXJtc1swXSkKIAkJKyBub2Rl
LT5kYXRhbGVuICsgbm9kZS0+Y2hpbGRsZW47CiAKLQlpZiAoZG9tYWluX2lz
X3VucHJpdmlsZWdlZChjb25uKSAmJgorCWlmICghbm9fcXVvdGFfY2hlY2sg
JiYgZG9tYWluX2lzX3VucHJpdmlsZWdlZChjb25uKSAmJgogCSAgICBkYXRh
LmRzaXplID49IHF1b3RhX21heF9lbnRyeV9zaXplKSB7CiAJCWVycm5vID0g
RU5PU1BDOwogCQlyZXR1cm4gZXJybm87CkBAIC00NTcsMTQgKzQ1OCwxNSBA
QCBpbnQgd3JpdGVfbm9kZV9yYXcoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4s
IFREQl9EQVRBICprZXksIHN0cnVjdCBub2RlICpub2RlKQogCXJldHVybiAw
OwogfQogCi1zdGF0aWMgaW50IHdyaXRlX25vZGUoc3RydWN0IGNvbm5lY3Rp
b24gKmNvbm4sIHN0cnVjdCBub2RlICpub2RlKQorc3RhdGljIGludCB3cml0
ZV9ub2RlKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3Qgbm9kZSAq
bm9kZSwKKwkJICAgICAgYm9vbCBub19xdW90YV9jaGVjaykKIHsKIAlUREJf
REFUQSBrZXk7CiAKIAlpZiAoYWNjZXNzX25vZGUoY29ubiwgbm9kZSwgTk9E
RV9BQ0NFU1NfV1JJVEUsICZrZXkpKQogCQlyZXR1cm4gZXJybm87CiAKLQly
ZXR1cm4gd3JpdGVfbm9kZV9yYXcoY29ubiwgJmtleSwgbm9kZSk7CisJcmV0
dXJuIHdyaXRlX25vZGVfcmF3KGNvbm4sICZrZXksIG5vZGUsIG5vX3F1b3Rh
X2NoZWNrKTsKIH0KIAogc3RhdGljIGVudW0geHNfcGVybV90eXBlIHBlcm1f
Zm9yX2Nvbm4oc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sCkBAIC0xMDAxLDcg
KzEwMDMsNyBAQCBzdGF0aWMgc3RydWN0IG5vZGUgKmNyZWF0ZV9ub2RlKHN0
cnVjdCBjb25uZWN0aW9uICpjb25uLCBjb25zdCB2b2lkICpjdHgsCiAJLyog
V2Ugd3JpdGUgb3V0IHRoZSBub2RlcyBkb3duLCBzZXR0aW5nIGRlc3RydWN0
b3IgaW4gY2FzZQogCSAqIHNvbWV0aGluZyBnb2VzIHdyb25nLiAqLwogCWZv
ciAoaSA9IG5vZGU7IGk7IGkgPSBpLT5wYXJlbnQpIHsKLQkJaWYgKHdyaXRl
X25vZGUoY29ubiwgaSkpIHsKKwkJaWYgKHdyaXRlX25vZGUoY29ubiwgaSwg
ZmFsc2UpKSB7CiAJCQlkb21haW5fZW50cnlfZGVjKGNvbm4sIGkpOwogCQkJ
cmV0dXJuIE5VTEw7CiAJCX0KQEAgLTEwNDEsNyArMTA0Myw3IEBAIHN0YXRp
YyBpbnQgZG9fd3JpdGUoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVj
dCBidWZmZXJlZF9kYXRhICppbikKIAl9IGVsc2UgewogCQlub2RlLT5kYXRh
ID0gaW4tPmJ1ZmZlciArIG9mZnNldDsKIAkJbm9kZS0+ZGF0YWxlbiA9IGRh
dGFsZW47Ci0JCWlmICh3cml0ZV9ub2RlKGNvbm4sIG5vZGUpKQorCQlpZiAo
d3JpdGVfbm9kZShjb25uLCBub2RlLCBmYWxzZSkpCiAJCQlyZXR1cm4gZXJy
bm87CiAJfQogCkBAIC0xMTE3LDcgKzExMTksNyBAQCBzdGF0aWMgaW50IHJl
bW92ZV9jaGlsZF9lbnRyeShzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgc3Ry
dWN0IG5vZGUgKm5vZGUsCiAJc2l6ZV90IGNoaWxkbGVuID0gc3RybGVuKG5v
ZGUtPmNoaWxkcmVuICsgb2Zmc2V0KTsKIAltZW1kZWwobm9kZS0+Y2hpbGRy
ZW4sIG9mZnNldCwgY2hpbGRsZW4gKyAxLCBub2RlLT5jaGlsZGxlbik7CiAJ
bm9kZS0+Y2hpbGRsZW4gLT0gY2hpbGRsZW4gKyAxOwotCXJldHVybiB3cml0
ZV9ub2RlKGNvbm4sIG5vZGUpOworCXJldHVybiB3cml0ZV9ub2RlKGNvbm4s
IG5vZGUsIHRydWUpOwogfQogCiAKQEAgLTEyNTYsNyArMTI1OCw3IEBAIHN0
YXRpYyBpbnQgZG9fc2V0X3Blcm1zKHN0cnVjdCBjb25uZWN0aW9uICpjb25u
LCBzdHJ1Y3QgYnVmZmVyZWRfZGF0YSAqaW4pCiAJbm9kZS0+bnVtX3Blcm1z
ID0gbnVtOwogCWRvbWFpbl9lbnRyeV9pbmMoY29ubiwgbm9kZSk7CiAKLQlp
ZiAod3JpdGVfbm9kZShjb25uLCBub2RlKSkKKwlpZiAod3JpdGVfbm9kZShj
b25uLCBub2RlLCBmYWxzZSkpCiAJCXJldHVybiBlcnJubzsKIAogCWZpcmVf
d2F0Y2hlcyhjb25uLCBpbiwgbmFtZSwgZmFsc2UpOwpAQCAtMTUxNiw3ICsx
NTE4LDcgQEAgc3RhdGljIHZvaWQgbWFudWFsX25vZGUoY29uc3QgY2hhciAq
bmFtZSwgY29uc3QgY2hhciAqY2hpbGQpCiAJaWYgKGNoaWxkKQogCQlub2Rl
LT5jaGlsZGxlbiA9IHN0cmxlbihjaGlsZCkgKyAxOwogCi0JaWYgKHdyaXRl
X25vZGUoTlVMTCwgbm9kZSkpCisJaWYgKHdyaXRlX25vZGUoTlVMTCwgbm9k
ZSwgZmFsc2UpKQogCQliYXJmX3BlcnJvcigiQ291bGQgbm90IGNyZWF0ZSBp
bml0aWFsIG5vZGUgJXMiLCBuYW1lKTsKIAl0YWxsb2NfZnJlZShub2RlKTsK
IH0KZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3Jl
LmggYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5oCmluZGV4IGM0
YzMyYmM4OGYwYy4uMjlkNjM4ZmJjNWEwIDEwMDY0NAotLS0gYS90b29scy94
ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5oCisrKyBiL3Rvb2xzL3hlbnN0b3Jl
L3hlbnN0b3JlZF9jb3JlLmgKQEAgLTE0OSw3ICsxNDksOCBAQCB2b2lkIHNl
bmRfYWNrKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBlbnVtIHhzZF9zb2Nr
bXNnX3R5cGUgdHlwZSk7CiBjaGFyICpjYW5vbmljYWxpemUoc3RydWN0IGNv
bm5lY3Rpb24gKmNvbm4sIGNvbnN0IHZvaWQgKmN0eCwgY29uc3QgY2hhciAq
bm9kZSk7CiAKIC8qIFdyaXRlIGEgbm9kZSB0byB0aGUgdGRiIGRhdGEgYmFz
ZS4gKi8KLWludCB3cml0ZV9ub2RlX3JhdyhzdHJ1Y3QgY29ubmVjdGlvbiAq
Y29ubiwgVERCX0RBVEEgKmtleSwgc3RydWN0IG5vZGUgKm5vZGUpOworaW50
IHdyaXRlX25vZGVfcmF3KHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBUREJf
REFUQSAqa2V5LCBzdHJ1Y3Qgbm9kZSAqbm9kZSwKKwkJICAgYm9vbCBub19x
dW90YV9jaGVjayk7CiAKIC8qIEdldCB0aGlzIG5vZGUsIGNoZWNraW5nIHdl
IGhhdmUgcGVybWlzc2lvbnMuICovCiBzdHJ1Y3Qgbm9kZSAqZ2V0X25vZGUo
c3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sCmRpZmYgLS1naXQgYS90b29scy94
ZW5zdG9yZS94ZW5zdG9yZWRfdHJhbnNhY3Rpb24uYyBiL3Rvb2xzL3hlbnN0
b3JlL3hlbnN0b3JlZF90cmFuc2FjdGlvbi5jCmluZGV4IDI4MjRmN2IzNTli
OC4uZTg3ODk3NTczNDY5IDEwMDY0NAotLS0gYS90b29scy94ZW5zdG9yZS94
ZW5zdG9yZWRfdHJhbnNhY3Rpb24uYworKysgYi90b29scy94ZW5zdG9yZS94
ZW5zdG9yZWRfdHJhbnNhY3Rpb24uYwpAQCAtMjc2LDcgKzI3Niw3IEBAIGlu
dCBhY2Nlc3Nfbm9kZShzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgc3RydWN0
IG5vZGUgKm5vZGUsCiAJCQlpLT5jaGVja19nZW4gPSB0cnVlOwogCQkJaWYg
KG5vZGUtPmdlbmVyYXRpb24gIT0gTk9fR0VORVJBVElPTikgewogCQkJCXNl
dF90ZGJfa2V5KHRyYW5zX25hbWUsICZsb2NhbF9rZXkpOwotCQkJCXJldCA9
IHdyaXRlX25vZGVfcmF3KGNvbm4sICZsb2NhbF9rZXksIG5vZGUpOworCQkJ
CXJldCA9IHdyaXRlX25vZGVfcmF3KGNvbm4sICZsb2NhbF9rZXksIG5vZGUs
IHRydWUpOwogCQkJCWlmIChyZXQpCiAJCQkJCWdvdG8gZXJyOwogCQkJCWkt
PnRhX25vZGUgPSB0cnVlOwotLSAKMi4xNy4xCgo=

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.14-c/0002-tools-xenstore-ignore-transaction-id-for-un-watch.patch"
Content-Disposition: attachment;
 filename="xsa115-4.14-c/0002-tools-xenstore-ignore-transaction-id-for-un-watch.patch"
Content-Transfer-Encoding: base64

RnJvbSAwNzJjNzI5Y2ZlOTBiNGIwOWNhY2IxMmQ5MTJiYTA4OGRiODI3NGZl
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IFRodSwgMTEgSnVuIDIwMjAgMTY6
MTI6MzggKzAyMDAKU3ViamVjdDogW1BBVENIIDAyLzEwXSB0b29scy94ZW5z
dG9yZTogaWdub3JlIHRyYW5zYWN0aW9uIGlkIGZvciBbdW5dd2F0Y2gKCklu
c3RlYWQgb2YgaWdub3JpbmcgdGhlIHRyYW5zYWN0aW9uIGlkIGZvciBYU19X
QVRDSCBhbmQgWFNfVU5XQVRDSApjb21tYW5kcyBhcyBpdCBpcyBkb2N1bWVu
dGVkIGluIGRvY3MvbWlzYy94ZW5zdG9yZS50eHQsIGl0IGlzIHRlc3RlZApm
b3IgdmFsaWRpdHkgdG9kYXkuCgpSZWFsbHkgaWdub3JlIHRoZSB0cmFuc2Fj
dGlvbiBpZCBmb3IgWFNfV0FUQ0ggYW5kIFhTX1VOV0FUQ0guCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTExNS4KClNpZ25lZC1vZmYtYnk6IEp1ZXJnZW4gR3Jv
c3MgPGpncm9zc0BzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IEp1bGllbiBHcmFs
bCA8amdyYWxsQGFtYXpvbi5jb20+ClJldmlld2VkLWJ5OiBQYXVsIER1cnJh
bnQgPHBhdWxAeGVuLm9yZz4KLS0tCiB0b29scy94ZW5zdG9yZS94ZW5zdG9y
ZWRfY29yZS5jIHwgMjYgKysrKysrKysrKysrKysrKy0tLS0tLS0tLS0KIDEg
ZmlsZSBjaGFuZ2VkLCAxNiBpbnNlcnRpb25zKCspLCAxMCBkZWxldGlvbnMo
LSkKCmRpZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29y
ZS5jIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYwppbmRleCA2
MmExN2E2ODZlZGMuLjJmOTg5NTI0YjQ5NyAxMDA2NDQKLS0tIGEvdG9vbHMv
eGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYworKysgYi90b29scy94ZW5zdG9y
ZS94ZW5zdG9yZWRfY29yZS5jCkBAIC0xMjcwLDEzICsxMjcwLDE3IEBAIHN0
YXRpYyBpbnQgZG9fc2V0X3Blcm1zKHN0cnVjdCBjb25uZWN0aW9uICpjb25u
LCBzdHJ1Y3QgYnVmZmVyZWRfZGF0YSAqaW4pCiBzdGF0aWMgc3RydWN0IHsK
IAljb25zdCBjaGFyICpzdHI7CiAJaW50ICgqZnVuYykoc3RydWN0IGNvbm5l
Y3Rpb24gKmNvbm4sIHN0cnVjdCBidWZmZXJlZF9kYXRhICppbik7CisJdW5z
aWduZWQgaW50IGZsYWdzOworI2RlZmluZSBYU19GTEFHX05PVElECQkoMVUg
PDwgMCkJLyogSWdub3JlIHRyYW5zYWN0aW9uIGlkLiAqLwogfSBjb25zdCB3
aXJlX2Z1bmNzW1hTX1RZUEVfQ09VTlRdID0gewogCVtYU19DT05UUk9MXSAg
ICAgICAgICAgPSB7ICJDT05UUk9MIiwgICAgICAgICAgIGRvX2NvbnRyb2wg
fSwKIAlbWFNfRElSRUNUT1JZXSAgICAgICAgID0geyAiRElSRUNUT1JZIiwg
ICAgICAgICBzZW5kX2RpcmVjdG9yeSB9LAogCVtYU19SRUFEXSAgICAgICAg
ICAgICAgPSB7ICJSRUFEIiwgICAgICAgICAgICAgIGRvX3JlYWQgfSwKIAlb
WFNfR0VUX1BFUk1TXSAgICAgICAgID0geyAiR0VUX1BFUk1TIiwgICAgICAg
ICBkb19nZXRfcGVybXMgfSwKLQlbWFNfV0FUQ0hdICAgICAgICAgICAgID0g
eyAiV0FUQ0giLCAgICAgICAgICAgICBkb193YXRjaCB9LAotCVtYU19VTldB
VENIXSAgICAgICAgICAgPSB7ICJVTldBVENIIiwgICAgICAgICAgIGRvX3Vu
d2F0Y2ggfSwKKwlbWFNfV0FUQ0hdICAgICAgICAgICAgID0KKwkgICAgeyAi
V0FUQ0giLCAgICAgICAgIGRvX3dhdGNoLCAgICAgICAgWFNfRkxBR19OT1RJ
RCB9LAorCVtYU19VTldBVENIXSAgICAgICAgICAgPQorCSAgICB7ICJVTldB
VENIIiwgICAgICAgZG9fdW53YXRjaCwgICAgICBYU19GTEFHX05PVElEIH0s
CiAJW1hTX1RSQU5TQUNUSU9OX1NUQVJUXSA9IHsgIlRSQU5TQUNUSU9OX1NU
QVJUIiwgZG9fdHJhbnNhY3Rpb25fc3RhcnQgfSwKIAlbWFNfVFJBTlNBQ1RJ
T05fRU5EXSAgID0geyAiVFJBTlNBQ1RJT05fRU5EIiwgICBkb190cmFuc2Fj
dGlvbl9lbmQgfSwKIAlbWFNfSU5UUk9EVUNFXSAgICAgICAgID0geyAiSU5U
Uk9EVUNFIiwgICAgICAgICBkb19pbnRyb2R1Y2UgfSwKQEAgLTEyOTgsNyAr
MTMwMiw3IEBAIHN0YXRpYyBzdHJ1Y3QgewogCiBzdGF0aWMgY29uc3QgY2hh
ciAqc29ja21zZ19zdHJpbmcoZW51bSB4c2Rfc29ja21zZ190eXBlIHR5cGUp
CiB7Ci0JaWYgKCh1bnNpZ25lZCl0eXBlIDwgWFNfVFlQRV9DT1VOVCAmJiB3
aXJlX2Z1bmNzW3R5cGVdLnN0cikKKwlpZiAoKHVuc2lnbmVkIGludCl0eXBl
IDwgQVJSQVlfU0laRSh3aXJlX2Z1bmNzKSAmJiB3aXJlX2Z1bmNzW3R5cGVd
LnN0cikKIAkJcmV0dXJuIHdpcmVfZnVuY3NbdHlwZV0uc3RyOwogCiAJcmV0
dXJuICIqKlVOS05PV04qKiI7CkBAIC0xMzEzLDcgKzEzMTcsMTQgQEAgc3Rh
dGljIHZvaWQgcHJvY2Vzc19tZXNzYWdlKHN0cnVjdCBjb25uZWN0aW9uICpj
b25uLCBzdHJ1Y3QgYnVmZmVyZWRfZGF0YSAqaW4pCiAJZW51bSB4c2Rfc29j
a21zZ190eXBlIHR5cGUgPSBpbi0+aGRyLm1zZy50eXBlOwogCWludCByZXQ7
CiAKLQl0cmFucyA9IHRyYW5zYWN0aW9uX2xvb2t1cChjb25uLCBpbi0+aGRy
Lm1zZy50eF9pZCk7CisJaWYgKCh1bnNpZ25lZCBpbnQpdHlwZSA+PSBYU19U
WVBFX0NPVU5UIHx8ICF3aXJlX2Z1bmNzW3R5cGVdLmZ1bmMpIHsKKwkJZXBy
aW50ZigiQ2xpZW50IHVua25vd24gb3BlcmF0aW9uICVpIiwgdHlwZSk7CisJ
CXNlbmRfZXJyb3IoY29ubiwgRU5PU1lTKTsKKwkJcmV0dXJuOworCX0KKwor
CXRyYW5zID0gKHdpcmVfZnVuY3NbdHlwZV0uZmxhZ3MgJiBYU19GTEFHX05P
VElEKQorCQk/IE5VTEwgOiB0cmFuc2FjdGlvbl9sb29rdXAoY29ubiwgaW4t
Pmhkci5tc2cudHhfaWQpOwogCWlmIChJU19FUlIodHJhbnMpKSB7CiAJCXNl
bmRfZXJyb3IoY29ubiwgLVBUUl9FUlIodHJhbnMpKTsKIAkJcmV0dXJuOwpA
QCAtMTMyMiwxMiArMTMzMyw3IEBAIHN0YXRpYyB2b2lkIHByb2Nlc3NfbWVz
c2FnZShzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgc3RydWN0IGJ1ZmZlcmVk
X2RhdGEgKmluKQogCWFzc2VydChjb25uLT50cmFuc2FjdGlvbiA9PSBOVUxM
KTsKIAljb25uLT50cmFuc2FjdGlvbiA9IHRyYW5zOwogCi0JaWYgKCh1bnNp
Z25lZCl0eXBlIDwgWFNfVFlQRV9DT1VOVCAmJiB3aXJlX2Z1bmNzW3R5cGVd
LmZ1bmMpCi0JCXJldCA9IHdpcmVfZnVuY3NbdHlwZV0uZnVuYyhjb25uLCBp
bik7Ci0JZWxzZSB7Ci0JCWVwcmludGYoIkNsaWVudCB1bmtub3duIG9wZXJh
dGlvbiAlaSIsIHR5cGUpOwotCQlyZXQgPSBFTk9TWVM7Ci0JfQorCXJldCA9
IHdpcmVfZnVuY3NbdHlwZV0uZnVuYyhjb25uLCBpbik7CiAJaWYgKHJldCkK
IAkJc2VuZF9lcnJvcihjb25uLCByZXQpOwogCi0tIAoyLjE3LjEKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.14-c/0003-tools-xenstore-fix-node-accounting-after-failed-node.patch"
Content-Disposition: attachment;
 filename="xsa115-4.14-c/0003-tools-xenstore-fix-node-accounting-after-failed-node.patch"
Content-Transfer-Encoding: base64

RnJvbSBhMTMzNjI3NDUzODk4NzU5Y2E3M2RkNWMxYzE4NWMzODMwZmVkNzU0
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IFRodSwgMTEgSnVuIDIwMjAgMTY6
MTI6MzkgKzAyMDAKU3ViamVjdDogW1BBVENIIDAzLzEwXSB0b29scy94ZW5z
dG9yZTogZml4IG5vZGUgYWNjb3VudGluZyBhZnRlciBmYWlsZWQgbm9kZQog
Y3JlYXRpb24KCldoZW4gYSBub2RlIGNyZWF0aW9uIGZhaWxzIHRoZSBudW1i
ZXIgb2Ygbm9kZXMgb2YgdGhlIGRvbWFpbiBzaG91bGQgYmUKdGhlIHNhbWUg
YXMgYmVmb3JlIHRoZSBmYWlsZWQgbm9kZSBjcmVhdGlvbi4gSW4gY2FzZSBv
ZiBmYWlsdXJlIHdoZW4KdHJ5aW5nIHRvIGNyZWF0ZSBhIG5vZGUgcmVxdWly
aW5nIHRvIGNyZWF0ZSBvbmUgb3IgbW9yZSBpbnRlcm1lZGlhdGUKbm9kZXMg
YXMgd2VsbCAoZS5nLiB3aGVuIC9hL2IvYy9kIGlzIHRvIGJlIGNyZWF0ZWQs
IGJ1dCAvYS9iIGlzbid0CmV4aXN0aW5nIHlldCkgaXQgbWlnaHQgaGFwcGVu
IHRoYXQgdGhlIG51bWJlciBvZiBub2RlcyBvZiB0aGUgY3JlYXRpbmcKZG9t
YWluIGlzIG5vdCByZXNldCB0byB0aGUgdmFsdWUgaXQgaGFkIGJlZm9yZS4K
ClNvIG1vdmUgdGhlIHF1b3RhIGFjY291bnRpbmcgb3V0IG9mIGNvbnN0cnVj
dF9ub2RlKCkgYW5kIGludG8gdGhlIG5vZGUKd3JpdGUgbG9vcCBpbiBjcmVh
dGVfbm9kZSgpIGluIG9yZGVyIHRvIGJlIGFibGUgdG8gdW5kbyB0aGUgYWNj
b3VudGluZwppbiBjYXNlIG9mIGFuIGVycm9yIGluIHRoZSBpbnRlcm1lZGlh
dGUgbm9kZSBkZXN0cnVjdG9yLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0xMTUu
CgpTaWduZWQtb2ZmLWJ5OiBKdWVyZ2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5j
b20+ClJldmlld2VkLWJ5OiBQYXVsIER1cnJhbnQgPHBhdWxAeGVuLm9yZz4K
QWNrZWQtYnk6IEp1bGllbiBHcmFsbCA8amdyYWxsQGFtYXpvbi5jb20+Ci0t
LQogdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYyB8IDM3ICsrKysr
KysrKysrKysrKysrKysrKystLS0tLS0tLS0tLQogMSBmaWxlIGNoYW5nZWQs
IDI1IGluc2VydGlvbnMoKyksIDEyIGRlbGV0aW9ucygtKQoKZGlmZiAtLWdp
dCBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMgYi90b29scy94
ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jCmluZGV4IDJmOTg5NTI0YjQ5Ny4u
Yzk3MTUxOWU1NDJhIDEwMDY0NAotLS0gYS90b29scy94ZW5zdG9yZS94ZW5z
dG9yZWRfY29yZS5jCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9j
b3JlLmMKQEAgLTkyNywxMSArOTI3LDYgQEAgc3RhdGljIHN0cnVjdCBub2Rl
ICpjb25zdHJ1Y3Rfbm9kZShzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgY29u
c3Qgdm9pZCAqY3R4LAogCWlmICghcGFyZW50KQogCQlyZXR1cm4gTlVMTDsK
IAotCWlmIChkb21haW5fZW50cnkoY29ubikgPj0gcXVvdGFfbmJfZW50cnlf
cGVyX2RvbWFpbikgewotCQllcnJubyA9IEVOT1NQQzsKLQkJcmV0dXJuIE5V
TEw7Ci0JfQotCiAJLyogQWRkIGNoaWxkIHRvIHBhcmVudC4gKi8KIAliYXNl
ID0gYmFzZW5hbWUobmFtZSk7CiAJYmFzZWxlbiA9IHN0cmxlbihiYXNlKSAr
IDE7CkBAIC05NjQsNyArOTU5LDYgQEAgc3RhdGljIHN0cnVjdCBub2RlICpj
b25zdHJ1Y3Rfbm9kZShzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgY29uc3Qg
dm9pZCAqY3R4LAogCW5vZGUtPmNoaWxkcmVuID0gbm9kZS0+ZGF0YSA9IE5V
TEw7CiAJbm9kZS0+Y2hpbGRsZW4gPSBub2RlLT5kYXRhbGVuID0gMDsKIAlu
b2RlLT5wYXJlbnQgPSBwYXJlbnQ7Ci0JZG9tYWluX2VudHJ5X2luYyhjb25u
LCBub2RlKTsKIAlyZXR1cm4gbm9kZTsKIAogbm9tZW06CkBAIC05ODQsNiAr
OTc4LDkgQEAgc3RhdGljIGludCBkZXN0cm95X25vZGUodm9pZCAqX25vZGUp
CiAJa2V5LmRzaXplID0gc3RybGVuKG5vZGUtPm5hbWUpOwogCiAJdGRiX2Rl
bGV0ZSh0ZGJfY3R4LCBrZXkpOworCisJZG9tYWluX2VudHJ5X2RlYyh0YWxs
b2NfcGFyZW50KG5vZGUpLCBub2RlKTsKKwogCXJldHVybiAwOwogfQogCkBA
IC0xMDAwLDE4ICs5OTcsMzQgQEAgc3RhdGljIHN0cnVjdCBub2RlICpjcmVh
dGVfbm9kZShzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgY29uc3Qgdm9pZCAq
Y3R4LAogCW5vZGUtPmRhdGEgPSBkYXRhOwogCW5vZGUtPmRhdGFsZW4gPSBk
YXRhbGVuOwogCi0JLyogV2Ugd3JpdGUgb3V0IHRoZSBub2RlcyBkb3duLCBz
ZXR0aW5nIGRlc3RydWN0b3IgaW4gY2FzZQotCSAqIHNvbWV0aGluZyBnb2Vz
IHdyb25nLiAqLworCS8qCisJICogV2Ugd3JpdGUgb3V0IHRoZSBub2RlcyBi
b3R0b20gdXAuCisJICogQWxsIG5ldyBjcmVhdGVkIG5vZGVzIHdpbGwgaGF2
ZSBpLT5wYXJlbnQgc2V0LCB3aGlsZSB0aGUgZmluYWwKKwkgKiBub2RlIHdp
bGwgYmUgYWxyZWFkeSBleGlzdGluZyBhbmQgd29uJ3QgaGF2ZSBpLT5wYXJl
bnQgc2V0LgorCSAqIE5ldyBub2RlcyBhcmUgc3ViamVjdCB0byBxdW90YSBo
YW5kbGluZy4KKwkgKiBJbml0aWFsbHkgc2V0IGEgZGVzdHJ1Y3RvciBmb3Ig
YWxsIG5ldyBub2RlcyByZW1vdmluZyB0aGVtIGZyb20KKwkgKiBUREIgYWdh
aW4gYW5kIHVuZG9pbmcgcXVvdGEgYWNjb3VudGluZyBmb3IgdGhlIGNhc2Ug
b2YgYW4gZXJyb3IKKwkgKiBkdXJpbmcgdGhlIHdyaXRlIGxvb3AuCisJICov
CiAJZm9yIChpID0gbm9kZTsgaTsgaSA9IGktPnBhcmVudCkgewotCQlpZiAo
d3JpdGVfbm9kZShjb25uLCBpLCBmYWxzZSkpIHsKLQkJCWRvbWFpbl9lbnRy
eV9kZWMoY29ubiwgaSk7CisJCS8qIGktPnBhcmVudCBpcyBzZXQgZm9yIGVh
Y2ggbmV3IG5vZGUsIHNvIGNoZWNrIHF1b3RhLiAqLworCQlpZiAoaS0+cGFy
ZW50ICYmCisJCSAgICBkb21haW5fZW50cnkoY29ubikgPj0gcXVvdGFfbmJf
ZW50cnlfcGVyX2RvbWFpbikgeworCQkJZXJybm8gPSBFTk9TUEM7CiAJCQly
ZXR1cm4gTlVMTDsKIAkJfQotCQl0YWxsb2Nfc2V0X2Rlc3RydWN0b3IoaSwg
ZGVzdHJveV9ub2RlKTsKKwkJaWYgKHdyaXRlX25vZGUoY29ubiwgaSwgZmFs
c2UpKQorCQkJcmV0dXJuIE5VTEw7CisKKwkJLyogQWNjb3VudCBmb3IgbmV3
IG5vZGUsIHNldCBkZXN0cnVjdG9yIGZvciBlcnJvciBjYXNlLiAqLworCQlp
ZiAoaS0+cGFyZW50KSB7CisJCQlkb21haW5fZW50cnlfaW5jKGNvbm4sIGkp
OworCQkJdGFsbG9jX3NldF9kZXN0cnVjdG9yKGksIGRlc3Ryb3lfbm9kZSk7
CisJCX0KIAl9CiAKIAkvKiBPSywgbm93IHJlbW92ZSBkZXN0cnVjdG9ycyBz
byB0aGV5IHN0YXkgYXJvdW5kICovCi0JZm9yIChpID0gbm9kZTsgaTsgaSA9
IGktPnBhcmVudCkKKwlmb3IgKGkgPSBub2RlOyBpLT5wYXJlbnQ7IGkgPSBp
LT5wYXJlbnQpCiAJCXRhbGxvY19zZXRfZGVzdHJ1Y3RvcihpLCBOVUxMKTsK
IAlyZXR1cm4gbm9kZTsKIH0KLS0gCjIuMTcuMQoK

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.14-c/0004-tools-xenstore-simplify-and-rename-check_event_node.patch"
Content-Disposition: attachment;
 filename="xsa115-4.14-c/0004-tools-xenstore-simplify-and-rename-check_event_node.patch"
Content-Transfer-Encoding: base64

RnJvbSBkYzZjZjM4MWJkZWNhNDAxM2I2YmZlMjVjMjdlNTdmMDEwZTdjYTg0
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IFRodSwgMTEgSnVuIDIwMjAgMTY6
MTI6NDAgKzAyMDAKU3ViamVjdDogW1BBVENIIDA0LzEwXSB0b29scy94ZW5z
dG9yZTogc2ltcGxpZnkgYW5kIHJlbmFtZSBjaGVja19ldmVudF9ub2RlKCkK
ClRoZXJlIGlzIG5vIHBhdGggd2hpY2ggYWxsb3dzIHRvIGNhbGwgY2hlY2tf
ZXZlbnRfbm9kZSgpIHdpdGhvdXQgYQpldmVudCBuYW1lLiBTbyBkb24ndCBs
ZXQgdGhlIHJlc3VsdCBkZXBlbmQgb24gdGhlIG5hbWUgYmVpbmcgTlVMTCBh
bmQKYWRkIGFuIGFzc2VydCgpIGNvdmVyaW5nIHRoYXQgY2FzZS4KClJlbmFt
ZSB0aGUgZnVuY3Rpb24gdG8gY2hlY2tfc3BlY2lhbF9ldmVudCgpIHRvIGJl
dHRlciBtYXRjaCB0aGUKc2VtYW50aWNzLgoKVGhpcyBpcyBwYXJ0IG9mIFhT
QS0xMTUuCgpTaWduZWQtb2ZmLWJ5OiBKdWVyZ2VuIEdyb3NzIDxqZ3Jvc3NA
c3VzZS5jb20+ClJldmlld2VkLWJ5OiBKdWxpZW4gR3JhbGwgPGpncmFsbEBh
bWF6b24uY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVsQHhl
bi5vcmc+Ci0tLQogdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3dhdGNoLmMg
fCAxMiArKysrKy0tLS0tLS0KIDEgZmlsZSBjaGFuZ2VkLCA1IGluc2VydGlv
bnMoKyksIDcgZGVsZXRpb25zKC0pCgpkaWZmIC0tZ2l0IGEvdG9vbHMveGVu
c3RvcmUveGVuc3RvcmVkX3dhdGNoLmMgYi90b29scy94ZW5zdG9yZS94ZW5z
dG9yZWRfd2F0Y2guYwppbmRleCA3ZGVkY2E2MGRmZDYuLmYyZjFiZWQ0N2Nj
NiAxMDA2NDQKLS0tIGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3dhdGNo
LmMKKysrIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3dhdGNoLmMKQEAg
LTQ3LDEzICs0NywxMSBAQCBzdHJ1Y3Qgd2F0Y2gKIAljaGFyICpub2RlOwog
fTsKIAotc3RhdGljIGJvb2wgY2hlY2tfZXZlbnRfbm9kZShjb25zdCBjaGFy
ICpub2RlKQorc3RhdGljIGJvb2wgY2hlY2tfc3BlY2lhbF9ldmVudChjb25z
dCBjaGFyICpuYW1lKQogewotCWlmICghbm9kZSB8fCAhc3Ryc3RhcnRzKG5v
ZGUsICJAIikpIHsKLQkJZXJybm8gPSBFSU5WQUw7Ci0JCXJldHVybiBmYWxz
ZTsKLQl9Ci0JcmV0dXJuIHRydWU7CisJYXNzZXJ0KG5hbWUpOworCisJcmV0
dXJuIHN0cnN0YXJ0cyhuYW1lLCAiQCIpOwogfQogCiAvKiBJcyBjaGlsZCBh
IHN1Ym5vZGUgb2YgcGFyZW50LCBvciBlcXVhbD8gKi8KQEAgLTg3LDcgKzg1
LDcgQEAgc3RhdGljIHZvaWQgYWRkX2V2ZW50KHN0cnVjdCBjb25uZWN0aW9u
ICpjb25uLAogCXVuc2lnbmVkIGludCBsZW47CiAJY2hhciAqZGF0YTsKIAot
CWlmICghY2hlY2tfZXZlbnRfbm9kZShuYW1lKSkgeworCWlmICghY2hlY2tf
c3BlY2lhbF9ldmVudChuYW1lKSkgewogCQkvKiBDYW4gdGhpcyBjb25uIGxv
YWQgbm9kZSwgb3Igc2VlIHRoYXQgaXQgZG9lc24ndCBleGlzdD8gKi8KIAkJ
c3RydWN0IG5vZGUgKm5vZGUgPSBnZXRfbm9kZShjb25uLCBjdHgsIG5hbWUs
IFhTX1BFUk1fUkVBRCk7CiAJCS8qCi0tIAoyLjE3LjEKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.14-c/0005-tools-xenstore-check-privilege-for-XS_IS_DOMAIN_INTR.patch"
Content-Disposition: attachment;
 filename="xsa115-4.14-c/0005-tools-xenstore-check-privilege-for-XS_IS_DOMAIN_INTR.patch"
Content-Transfer-Encoding: base64

RnJvbSBjZDQ1NmRkN2UzYzRiYmUyMjlhMDMwN2E0NjljMmZjM2I4ZTdiNTkw
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IFRodSwgMTEgSnVuIDIwMjAgMTY6
MTI6NDEgKzAyMDAKU3ViamVjdDogW1BBVENIIDA1LzEwXSB0b29scy94ZW5z
dG9yZTogY2hlY2sgcHJpdmlsZWdlIGZvcgogWFNfSVNfRE9NQUlOX0lOVFJP
RFVDRUQKClRoZSBYZW5zdG9yZSBjb21tYW5kIFhTX0lTX0RPTUFJTl9JTlRS
T0RVQ0VEIHNob3VsZCBiZSBwb3NzaWJsZSBmb3IKcHJpdmlsZWdlZCBkb21h
aW5zIG9ubHkgKHRoZSBvbmx5IHVzZXIgaW4gdGhlIHRyZWUgaXMgdGhlIHhl
bnBhZ2luZwpkYWVtb24pLgoKSW5zdGVhZCBvZiBoYXZpbmcgdGhlIHByaXZp
bGVnZSB0ZXN0IGZvciBlYWNoIGNvbW1hbmQgaW50cm9kdWNlIGEKcGVyLWNv
bW1hbmQgZmxhZyBmb3IgdGhhdCBwdXJwb3NlLgoKVGhpcyBpcyBwYXJ0IG9m
IFhTQS0xMTUuCgpTaWduZWQtb2ZmLWJ5OiBKdWVyZ2VuIEdyb3NzIDxqZ3Jv
c3NAc3VzZS5jb20+ClJldmlld2VkLWJ5OiBKdWxpZW4gR3JhbGwgPGpncmFs
bEBhbWF6b24uY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVs
QHhlbi5vcmc+Ci0tLQogdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUu
YyAgIHwgMjQgKysrKysrKysrKysrKysrKysrLS0tLS0tCiB0b29scy94ZW5z
dG9yZS94ZW5zdG9yZWRfZG9tYWluLmMgfCAgNyArKy0tLS0tCiAyIGZpbGVz
IGNoYW5nZWQsIDIwIGluc2VydGlvbnMoKyksIDExIGRlbGV0aW9ucygtKQoK
ZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMg
Yi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jCmluZGV4IGM5NzE1
MTllNTQyYS4uZjM4MTk2YWUyODI1IDEwMDY0NAotLS0gYS90b29scy94ZW5z
dG9yZS94ZW5zdG9yZWRfY29yZS5jCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hl
bnN0b3JlZF9jb3JlLmMKQEAgLTEyODUsOCArMTI4NSwxMCBAQCBzdGF0aWMg
c3RydWN0IHsKIAlpbnQgKCpmdW5jKShzdHJ1Y3QgY29ubmVjdGlvbiAqY29u
biwgc3RydWN0IGJ1ZmZlcmVkX2RhdGEgKmluKTsKIAl1bnNpZ25lZCBpbnQg
ZmxhZ3M7CiAjZGVmaW5lIFhTX0ZMQUdfTk9USUQJCSgxVSA8PCAwKQkvKiBJ
Z25vcmUgdHJhbnNhY3Rpb24gaWQuICovCisjZGVmaW5lIFhTX0ZMQUdfUFJJ
VgkJKDFVIDw8IDEpCS8qIFByaXZpbGVnZWQgZG9tYWluIG9ubHkuICovCiB9
IGNvbnN0IHdpcmVfZnVuY3NbWFNfVFlQRV9DT1VOVF0gPSB7Ci0JW1hTX0NP
TlRST0xdICAgICAgICAgICA9IHsgIkNPTlRST0wiLCAgICAgICAgICAgZG9f
Y29udHJvbCB9LAorCVtYU19DT05UUk9MXSAgICAgICAgICAgPQorCSAgICB7
ICJDT05UUk9MIiwgICAgICAgZG9fY29udHJvbCwgICAgICBYU19GTEFHX1BS
SVYgfSwKIAlbWFNfRElSRUNUT1JZXSAgICAgICAgID0geyAiRElSRUNUT1JZ
IiwgICAgICAgICBzZW5kX2RpcmVjdG9yeSB9LAogCVtYU19SRUFEXSAgICAg
ICAgICAgICAgPSB7ICJSRUFEIiwgICAgICAgICAgICAgIGRvX3JlYWQgfSwK
IAlbWFNfR0VUX1BFUk1TXSAgICAgICAgID0geyAiR0VUX1BFUk1TIiwgICAg
ICAgICBkb19nZXRfcGVybXMgfSwKQEAgLTEyOTYsOCArMTI5OCwxMCBAQCBz
dGF0aWMgc3RydWN0IHsKIAkgICAgeyAiVU5XQVRDSCIsICAgICAgIGRvX3Vu
d2F0Y2gsICAgICAgWFNfRkxBR19OT1RJRCB9LAogCVtYU19UUkFOU0FDVElP
Tl9TVEFSVF0gPSB7ICJUUkFOU0FDVElPTl9TVEFSVCIsIGRvX3RyYW5zYWN0
aW9uX3N0YXJ0IH0sCiAJW1hTX1RSQU5TQUNUSU9OX0VORF0gICA9IHsgIlRS
QU5TQUNUSU9OX0VORCIsICAgZG9fdHJhbnNhY3Rpb25fZW5kIH0sCi0JW1hT
X0lOVFJPRFVDRV0gICAgICAgICA9IHsgIklOVFJPRFVDRSIsICAgICAgICAg
ZG9faW50cm9kdWNlIH0sCi0JW1hTX1JFTEVBU0VdICAgICAgICAgICA9IHsg
IlJFTEVBU0UiLCAgICAgICAgICAgZG9fcmVsZWFzZSB9LAorCVtYU19JTlRS
T0RVQ0VdICAgICAgICAgPQorCSAgICB7ICJJTlRST0RVQ0UiLCAgICAgZG9f
aW50cm9kdWNlLCAgICBYU19GTEFHX1BSSVYgfSwKKwlbWFNfUkVMRUFTRV0g
ICAgICAgICAgID0KKwkgICAgeyAiUkVMRUFTRSIsICAgICAgIGRvX3JlbGVh
c2UsICAgICAgWFNfRkxBR19QUklWIH0sCiAJW1hTX0dFVF9ET01BSU5fUEFU
SF0gICA9IHsgIkdFVF9ET01BSU5fUEFUSCIsICAgZG9fZ2V0X2RvbWFpbl9w
YXRoIH0sCiAJW1hTX1dSSVRFXSAgICAgICAgICAgICA9IHsgIldSSVRFIiwg
ICAgICAgICAgICAgZG9fd3JpdGUgfSwKIAlbWFNfTUtESVJdICAgICAgICAg
ICAgID0geyAiTUtESVIiLCAgICAgICAgICAgICBkb19ta2RpciB9LApAQCAt
MTMwNiw5ICsxMzEwLDExIEBAIHN0YXRpYyBzdHJ1Y3QgewogCVtYU19XQVRD
SF9FVkVOVF0gICAgICAgPSB7ICJXQVRDSF9FVkVOVCIsICAgICAgIE5VTEwg
fSwKIAlbWFNfRVJST1JdICAgICAgICAgICAgID0geyAiRVJST1IiLCAgICAg
ICAgICAgICBOVUxMIH0sCiAJW1hTX0lTX0RPTUFJTl9JTlRST0RVQ0VEXSA9
Ci0JCQl7ICJJU19ET01BSU5fSU5UUk9EVUNFRCIsIGRvX2lzX2RvbWFpbl9p
bnRyb2R1Y2VkIH0sCi0JW1hTX1JFU1VNRV0gICAgICAgICAgICA9IHsgIlJF
U1VNRSIsICAgICAgICAgICAgZG9fcmVzdW1lIH0sCi0JW1hTX1NFVF9UQVJH
RVRdICAgICAgICA9IHsgIlNFVF9UQVJHRVQiLCAgICAgICAgZG9fc2V0X3Rh
cmdldCB9LAorCSAgICB7ICJJU19ET01BSU5fSU5UUk9EVUNFRCIsIGRvX2lz
X2RvbWFpbl9pbnRyb2R1Y2VkLCBYU19GTEFHX1BSSVYgfSwKKwlbWFNfUkVT
VU1FXSAgICAgICAgICAgID0KKwkgICAgeyAiUkVTVU1FIiwgICAgICAgIGRv
X3Jlc3VtZSwgICAgICAgWFNfRkxBR19QUklWIH0sCisJW1hTX1NFVF9UQVJH
RVRdICAgICAgICA9CisJICAgIHsgIlNFVF9UQVJHRVQiLCAgICBkb19zZXRf
dGFyZ2V0LCAgIFhTX0ZMQUdfUFJJViB9LAogCVtYU19SRVNFVF9XQVRDSEVT
XSAgICAgPSB7ICJSRVNFVF9XQVRDSEVTIiwgICAgIGRvX3Jlc2V0X3dhdGNo
ZXMgfSwKIAlbWFNfRElSRUNUT1JZX1BBUlRdICAgID0geyAiRElSRUNUT1JZ
X1BBUlQiLCAgICBzZW5kX2RpcmVjdG9yeV9wYXJ0IH0sCiB9OwpAQCAtMTMz
Niw2ICsxMzQyLDEyIEBAIHN0YXRpYyB2b2lkIHByb2Nlc3NfbWVzc2FnZShz
dHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgc3RydWN0IGJ1ZmZlcmVkX2RhdGEg
KmluKQogCQlyZXR1cm47CiAJfQogCisJaWYgKCh3aXJlX2Z1bmNzW3R5cGVd
LmZsYWdzICYgWFNfRkxBR19QUklWKSAmJgorCSAgICBkb21haW5faXNfdW5w
cml2aWxlZ2VkKGNvbm4pKSB7CisJCXNlbmRfZXJyb3IoY29ubiwgRUFDQ0VT
KTsKKwkJcmV0dXJuOworCX0KKwogCXRyYW5zID0gKHdpcmVfZnVuY3NbdHlw
ZV0uZmxhZ3MgJiBYU19GTEFHX05PVElEKQogCQk/IE5VTEwgOiB0cmFuc2Fj
dGlvbl9sb29rdXAoY29ubiwgaW4tPmhkci5tc2cudHhfaWQpOwogCWlmIChJ
U19FUlIodHJhbnMpKSB7CmRpZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94
ZW5zdG9yZWRfZG9tYWluLmMgYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRf
ZG9tYWluLmMKaW5kZXggMDYzNTk1MDNmMDkxLi4yZDBkODdlZTg5ZTEgMTAw
NjQ0Ci0tLSBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9kb21haW4uYwor
KysgYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfZG9tYWluLmMKQEAgLTM3
Miw3ICszNzIsNyBAQCBpbnQgZG9faW50cm9kdWNlKHN0cnVjdCBjb25uZWN0
aW9uICpjb25uLCBzdHJ1Y3QgYnVmZmVyZWRfZGF0YSAqaW4pCiAJaWYgKGdl
dF9zdHJpbmdzKGluLCB2ZWMsIEFSUkFZX1NJWkUodmVjKSkgPCBBUlJBWV9T
SVpFKHZlYykpCiAJCXJldHVybiBFSU5WQUw7CiAKLQlpZiAoZG9tYWluX2lz
X3VucHJpdmlsZWdlZChjb25uKSB8fCAhY29ubi0+Y2FuX3dyaXRlKQorCWlm
ICghY29ubi0+Y2FuX3dyaXRlKQogCQlyZXR1cm4gRUFDQ0VTOwogCiAJZG9t
aWQgPSBhdG9pKHZlY1swXSk7CkBAIC00MzgsNyArNDM4LDcgQEAgaW50IGRv
X3NldF90YXJnZXQoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBi
dWZmZXJlZF9kYXRhICppbikKIAlpZiAoZ2V0X3N0cmluZ3MoaW4sIHZlYywg
QVJSQVlfU0laRSh2ZWMpKSA8IEFSUkFZX1NJWkUodmVjKSkKIAkJcmV0dXJu
IEVJTlZBTDsKIAotCWlmIChkb21haW5faXNfdW5wcml2aWxlZ2VkKGNvbm4p
IHx8ICFjb25uLT5jYW5fd3JpdGUpCisJaWYgKCFjb25uLT5jYW5fd3JpdGUp
CiAJCXJldHVybiBFQUNDRVM7CiAKIAlkb21pZCA9IGF0b2kodmVjWzBdKTsK
QEAgLTQ3Myw5ICs0NzMsNiBAQCBzdGF0aWMgc3RydWN0IGRvbWFpbiAqb25l
YXJnX2RvbWFpbihzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwKIAlpZiAoIWRv
bWlkKQogCQlyZXR1cm4gRVJSX1BUUigtRUlOVkFMKTsKIAotCWlmIChkb21h
aW5faXNfdW5wcml2aWxlZ2VkKGNvbm4pKQotCQlyZXR1cm4gRVJSX1BUUigt
RUFDQ0VTKTsKLQogCXJldHVybiBmaW5kX2Nvbm5lY3RlZF9kb21haW4oZG9t
aWQpOwogfQogCi0tIAoyLjE3LjEKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.14-c/0006-tools-xenstore-rework-node-removal.patch"
Content-Disposition: attachment;
 filename="xsa115-4.14-c/0006-tools-xenstore-rework-node-removal.patch"
Content-Transfer-Encoding: base64

RnJvbSBhM2Q4MDg5NTMyYWU1NzNjMDNlMWNkYjJmYzNjNWVlNWViYjUyYTYw
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IFRodSwgMTEgSnVuIDIwMjAgMTY6
MTI6NDIgKzAyMDAKU3ViamVjdDogW1BBVENIIDA2LzEwXSB0b29scy94ZW5z
dG9yZTogcmV3b3JrIG5vZGUgcmVtb3ZhbAoKVG9kYXkgYSBYZW5zdG9yZSBu
b2RlIGlzIGJlaW5nIHJlbW92ZWQgYnkgZGVsZXRpbmcgaXQgZnJvbSB0aGUg
cGFyZW50CmZpcnN0IGFuZCB0aGVuIGRlbGV0aW5nIGl0c2VsZiBhbmQgYWxs
IGl0cyBjaGlsZHJlbi4gVGhpcyByZXN1bHRzIGluCnN0YWxlIGVudHJpZXMg
cmVtYWluaW5nIGluIHRoZSBkYXRhIGJhc2UgaW4gY2FzZSBlLmcuIGEgbWVt
b3J5CmFsbG9jYXRpb24gaXMgZmFpbGluZyBkdXJpbmcgcHJvY2Vzc2luZy4g
VGhpcyB3b3VsZCByZXN1bHQgaW4gdGhlCnJhdGhlciBzdHJhbmdlIGJlaGF2
aW9yIHRvIGJlIGFibGUgdG8gcmVhZCBhIG5vZGUgKGFzIGl0cyBzdGlsbCBp
biB0aGUKZGF0YSBiYXNlKSB3aGlsZSBub3QgYmVpbmcgdmlzaWJsZSBpbiB0
aGUgdHJlZSB2aWV3IG9mIFhlbnN0b3JlLgoKRml4IHRoYXQgYnkgZGVsZXRp
bmcgdGhlIG5vZGVzIGZyb20gdGhlIGxlYWYgc2lkZSBpbnN0ZWFkIG9mIHN0
YXJ0aW5nCmF0IHRoZSByb290LgoKQXMgZmlyZV93YXRjaGVzKCkgaXMgbm93
IGNhbGxlZCBmcm9tIF9ybSgpIHRoZSBjdHggcGFyYW1ldGVyIG5lZWRzIGEK
Y29uc3QgYXR0cmlidXRlLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0xMTUuCgpT
aWduZWQtb2ZmLWJ5OiBKdWVyZ2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+
ClJldmlld2VkLWJ5OiBKdWxpZW4gR3JhbGwgPGpncmFsbEBhbWF6b24uY29t
PgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVsQHhlbi5vcmc+Ci0t
LQogdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYyAgfCA5OSArKysr
KysrKysrKysrKysrLS0tLS0tLS0tLS0tLS0tLQogdG9vbHMveGVuc3RvcmUv
eGVuc3RvcmVkX3dhdGNoLmMgfCAgNCArLQogdG9vbHMveGVuc3RvcmUveGVu
c3RvcmVkX3dhdGNoLmggfCAgMiArLQogMyBmaWxlcyBjaGFuZ2VkLCA1NCBp
bnNlcnRpb25zKCspLCA1MSBkZWxldGlvbnMoLSkKCmRpZmYgLS1naXQgYS90
b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jIGIvdG9vbHMveGVuc3Rv
cmUveGVuc3RvcmVkX2NvcmUuYwppbmRleCBmMzgxOTZhZTI4MjUuLmRmZGI2
NGYzZWU2MCAxMDA2NDQKLS0tIGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVk
X2NvcmUuYworKysgYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5j
CkBAIC0xMDg5LDc0ICsxMDg5LDc2IEBAIHN0YXRpYyBpbnQgZG9fbWtkaXIo
c3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBidWZmZXJlZF9kYXRh
ICppbikKIAlyZXR1cm4gMDsKIH0KIAotc3RhdGljIHZvaWQgZGVsZXRlX25v
ZGUoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBub2RlICpub2Rl
KQotewotCXVuc2lnbmVkIGludCBpOwotCWNoYXIgKm5hbWU7Ci0KLQkvKiBE
ZWxldGUgc2VsZiwgdGhlbiBkZWxldGUgY2hpbGRyZW4uICBJZiB3ZSBjcmFz
aCwgdGhlbiB0aGUgd29yc3QKLQkgICB0aGF0IGNhbiBoYXBwZW4gaXMgdGhl
IGNoaWxkcmVuIHdpbGwgY29udGludWUgdG8gdGFrZSB1cCBzcGFjZSwgYnV0
Ci0JICAgd2lsbCBvdGhlcndpc2UgYmUgdW5yZWFjaGFibGUuICovCi0JZGVs
ZXRlX25vZGVfc2luZ2xlKGNvbm4sIG5vZGUpOwotCi0JLyogRGVsZXRlIGNo
aWxkcmVuLCB0b28uICovCi0JZm9yIChpID0gMDsgaSA8IG5vZGUtPmNoaWxk
bGVuOyBpICs9IHN0cmxlbihub2RlLT5jaGlsZHJlbitpKSArIDEpIHsKLQkJ
c3RydWN0IG5vZGUgKmNoaWxkOwotCi0JCW5hbWUgPSB0YWxsb2NfYXNwcmlu
dGYobm9kZSwgIiVzLyVzIiwgbm9kZS0+bmFtZSwKLQkJCQkgICAgICAgbm9k
ZS0+Y2hpbGRyZW4gKyBpKTsKLQkJY2hpbGQgPSBuYW1lID8gcmVhZF9ub2Rl
KGNvbm4sIG5vZGUsIG5hbWUpIDogTlVMTDsKLQkJaWYgKGNoaWxkKSB7Ci0J
CQlkZWxldGVfbm9kZShjb25uLCBjaGlsZCk7Ci0JCX0KLQkJZWxzZSB7Ci0J
CQl0cmFjZSgiZGVsZXRlX25vZGU6IEVycm9yIGRlbGV0aW5nIGNoaWxkICcl
cy8lcychXG4iLAotCQkJICAgICAgbm9kZS0+bmFtZSwgbm9kZS0+Y2hpbGRy
ZW4gKyBpKTsKLQkJCS8qIFNraXAgaXQsIHdlJ3ZlIGFscmVhZHkgZGVsZXRl
ZCB0aGUgcGFyZW50LiAqLwotCQl9Ci0JCXRhbGxvY19mcmVlKG5hbWUpOwot
CX0KLX0KLQotCiAvKiBEZWxldGUgbWVtb3J5IHVzaW5nIG1lbW1vdmUuICov
CiBzdGF0aWMgdm9pZCBtZW1kZWwodm9pZCAqbWVtLCB1bnNpZ25lZCBvZmYs
IHVuc2lnbmVkIGxlbiwgdW5zaWduZWQgdG90YWwpCiB7CiAJbWVtbW92ZSht
ZW0gKyBvZmYsIG1lbSArIG9mZiArIGxlbiwgdG90YWwgLSBvZmYgLSBsZW4p
OwogfQogCi0KLXN0YXRpYyBpbnQgcmVtb3ZlX2NoaWxkX2VudHJ5KHN0cnVj
dCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3Qgbm9kZSAqbm9kZSwKLQkJCSAg
ICAgIHNpemVfdCBvZmZzZXQpCitzdGF0aWMgdm9pZCByZW1vdmVfY2hpbGRf
ZW50cnkoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBub2RlICpu
b2RlLAorCQkJICAgICAgIHNpemVfdCBvZmZzZXQpCiB7CiAJc2l6ZV90IGNo
aWxkbGVuID0gc3RybGVuKG5vZGUtPmNoaWxkcmVuICsgb2Zmc2V0KTsKKwog
CW1lbWRlbChub2RlLT5jaGlsZHJlbiwgb2Zmc2V0LCBjaGlsZGxlbiArIDEs
IG5vZGUtPmNoaWxkbGVuKTsKIAlub2RlLT5jaGlsZGxlbiAtPSBjaGlsZGxl
biArIDE7Ci0JcmV0dXJuIHdyaXRlX25vZGUoY29ubiwgbm9kZSwgdHJ1ZSk7
CisJaWYgKHdyaXRlX25vZGUoY29ubiwgbm9kZSwgdHJ1ZSkpCisJCWNvcnJ1
cHQoY29ubiwgIkNhbid0IHVwZGF0ZSBwYXJlbnQgbm9kZSAnJXMnIiwgbm9k
ZS0+bmFtZSk7CiB9CiAKLQotc3RhdGljIGludCBkZWxldGVfY2hpbGQoc3Ry
dWN0IGNvbm5lY3Rpb24gKmNvbm4sCi0JCQlzdHJ1Y3Qgbm9kZSAqbm9kZSwg
Y29uc3QgY2hhciAqY2hpbGRuYW1lKQorc3RhdGljIHZvaWQgZGVsZXRlX2No
aWxkKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLAorCQkJIHN0cnVjdCBub2Rl
ICpub2RlLCBjb25zdCBjaGFyICpjaGlsZG5hbWUpCiB7CiAJdW5zaWduZWQg
aW50IGk7CiAKIAlmb3IgKGkgPSAwOyBpIDwgbm9kZS0+Y2hpbGRsZW47IGkg
Kz0gc3RybGVuKG5vZGUtPmNoaWxkcmVuK2kpICsgMSkgewogCQlpZiAoc3Ry
ZXEobm9kZS0+Y2hpbGRyZW4raSwgY2hpbGRuYW1lKSkgewotCQkJcmV0dXJu
IHJlbW92ZV9jaGlsZF9lbnRyeShjb25uLCBub2RlLCBpKTsKKwkJCXJlbW92
ZV9jaGlsZF9lbnRyeShjb25uLCBub2RlLCBpKTsKKwkJCXJldHVybjsKIAkJ
fQogCX0KIAljb3JydXB0KGNvbm4sICJDYW4ndCBmaW5kIGNoaWxkICclcycg
aW4gJXMiLCBjaGlsZG5hbWUsIG5vZGUtPm5hbWUpOwotCXJldHVybiBFTk9F
TlQ7CiB9CiAKK3N0YXRpYyBpbnQgZGVsZXRlX25vZGUoc3RydWN0IGNvbm5l
Y3Rpb24gKmNvbm4sIHN0cnVjdCBub2RlICpwYXJlbnQsCisJCSAgICAgICBz
dHJ1Y3Qgbm9kZSAqbm9kZSkKK3sKKwljaGFyICpuYW1lOworCisJLyogRGVs
ZXRlIGNoaWxkcmVuLiAqLworCXdoaWxlIChub2RlLT5jaGlsZGxlbikgewor
CQlzdHJ1Y3Qgbm9kZSAqY2hpbGQ7CisKKwkJbmFtZSA9IHRhbGxvY19hc3By
aW50Zihub2RlLCAiJXMvJXMiLCBub2RlLT5uYW1lLAorCQkJCSAgICAgICBu
b2RlLT5jaGlsZHJlbik7CisJCWNoaWxkID0gbmFtZSA/IHJlYWRfbm9kZShj
b25uLCBub2RlLCBuYW1lKSA6IE5VTEw7CisJCWlmIChjaGlsZCkgeworCQkJ
aWYgKGRlbGV0ZV9ub2RlKGNvbm4sIG5vZGUsIGNoaWxkKSkKKwkJCQlyZXR1
cm4gZXJybm87CisJCX0gZWxzZSB7CisJCQl0cmFjZSgiZGVsZXRlX25vZGU6
IEVycm9yIGRlbGV0aW5nIGNoaWxkICclcy8lcychXG4iLAorCQkJICAgICAg
bm9kZS0+bmFtZSwgbm9kZS0+Y2hpbGRyZW4pOworCQkJLyogUXVpdCBkZWxl
dGluZy4gKi8KKwkJCWVycm5vID0gRU5PTUVNOworCQkJcmV0dXJuIGVycm5v
OworCQl9CisJCXRhbGxvY19mcmVlKG5hbWUpOworCX0KKworCWRlbGV0ZV9u
b2RlX3NpbmdsZShjb25uLCBub2RlKTsKKwlkZWxldGVfY2hpbGQoY29ubiwg
cGFyZW50LCBiYXNlbmFtZShub2RlLT5uYW1lKSk7CisJdGFsbG9jX2ZyZWUo
bm9kZSk7CisKKwlyZXR1cm4gMDsKK30KIAogc3RhdGljIGludCBfcm0oc3Ry
dWN0IGNvbm5lY3Rpb24gKmNvbm4sIGNvbnN0IHZvaWQgKmN0eCwgc3RydWN0
IG5vZGUgKm5vZGUsCiAJICAgICAgIGNvbnN0IGNoYXIgKm5hbWUpCiB7Ci0J
LyogRGVsZXRlIGZyb20gcGFyZW50IGZpcnN0LCB0aGVuIGlmIHdlIGNyYXNo
LCB0aGUgd29yc3QgdGhhdCBjYW4KLQkgICBoYXBwZW4gaXMgdGhlIGNoaWxk
IHdpbGwgY29udGludWUgdG8gdGFrZSB1cCBzcGFjZSwgYnV0IHdpbGwKLQkg
ICBvdGhlcndpc2UgYmUgdW5yZWFjaGFibGUuICovCisJLyoKKwkgKiBEZWxl
dGluZyBub2RlIGJ5IG5vZGUsIHNvIHRoZSByZXN1bHQgaXMgYWx3YXlzIGNv
bnNpc3RlbnQgZXZlbiBpbgorCSAqIGNhc2Ugb2YgYSBmYWlsdXJlLgorCSAq
LwogCXN0cnVjdCBub2RlICpwYXJlbnQ7CiAJY2hhciAqcGFyZW50bmFtZSA9
IGdldF9wYXJlbnQoY3R4LCBuYW1lKTsKIApAQCAtMTE2NywxMSArMTE2OSwx
MyBAQCBzdGF0aWMgaW50IF9ybShzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwg
Y29uc3Qgdm9pZCAqY3R4LCBzdHJ1Y3Qgbm9kZSAqbm9kZSwKIAlpZiAoIXBh
cmVudCkKIAkJcmV0dXJuIChlcnJubyA9PSBFTk9NRU0pID8gRU5PTUVNIDog
RUlOVkFMOwogCi0JaWYgKGRlbGV0ZV9jaGlsZChjb25uLCBwYXJlbnQsIGJh
c2VuYW1lKG5hbWUpKSkKLQkJcmV0dXJuIEVJTlZBTDsKLQotCWRlbGV0ZV9u
b2RlKGNvbm4sIG5vZGUpOwotCXJldHVybiAwOworCS8qCisJICogRmlyZSB0
aGUgd2F0Y2hlcyBub3csIHdoZW4gd2UgY2FuIHN0aWxsIHNlZSB0aGUgbm9k
ZSBwZXJtaXNzaW9ucy4KKwkgKiBUaGlzIGZpbmUgYXMgd2UgYXJlIHNpbmds
ZSB0aHJlYWRlZCBhbmQgdGhlIG5leHQgcG9zc2libGUgcmVhZCB3aWxsCisJ
ICogYmUgaGFuZGxlZCBvbmx5IGFmdGVyIHRoZSBub2RlIGhhcyBiZWVuIHJl
YWxseSByZW1vdmVkLgorCSAqLworCWZpcmVfd2F0Y2hlcyhjb25uLCBjdHgs
IG5hbWUsIHRydWUpOworCXJldHVybiBkZWxldGVfbm9kZShjb25uLCBwYXJl
bnQsIG5vZGUpOwogfQogCiAKQEAgLTEyMDksNyArMTIxMyw2IEBAIHN0YXRp
YyBpbnQgZG9fcm0oc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBi
dWZmZXJlZF9kYXRhICppbikKIAlpZiAocmV0KQogCQlyZXR1cm4gcmV0Owog
Ci0JZmlyZV93YXRjaGVzKGNvbm4sIGluLCBuYW1lLCB0cnVlKTsKIAlzZW5k
X2Fjayhjb25uLCBYU19STSk7CiAKIAlyZXR1cm4gMDsKZGlmZiAtLWdpdCBh
L3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF93YXRjaC5jIGIvdG9vbHMveGVu
c3RvcmUveGVuc3RvcmVkX3dhdGNoLmMKaW5kZXggZjJmMWJlZDQ3Y2M2Li5m
MGJiZmU3YTZkYzYgMTAwNjQ0Ci0tLSBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0
b3JlZF93YXRjaC5jCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF93
YXRjaC5jCkBAIC03Nyw3ICs3Nyw3IEBAIHN0YXRpYyBib29sIGlzX2NoaWxk
KGNvbnN0IGNoYXIgKmNoaWxkLCBjb25zdCBjaGFyICpwYXJlbnQpCiAgKiBU
ZW1wb3JhcnkgbWVtb3J5IGFsbG9jYXRpb25zIGFyZSBkb25lIHdpdGggY3R4
LgogICovCiBzdGF0aWMgdm9pZCBhZGRfZXZlbnQoc3RydWN0IGNvbm5lY3Rp
b24gKmNvbm4sCi0JCSAgICAgIHZvaWQgKmN0eCwKKwkJICAgICAgY29uc3Qg
dm9pZCAqY3R4LAogCQkgICAgICBzdHJ1Y3Qgd2F0Y2ggKndhdGNoLAogCQkg
ICAgICBjb25zdCBjaGFyICpuYW1lKQogewpAQCAtMTIxLDcgKzEyMSw3IEBA
IHN0YXRpYyB2b2lkIGFkZF9ldmVudChzdHJ1Y3QgY29ubmVjdGlvbiAqY29u
biwKICAqIENoZWNrIHdoZXRoZXIgYW55IHdhdGNoIGV2ZW50cyBhcmUgdG8g
YmUgc2VudC4KICAqIFRlbXBvcmFyeSBtZW1vcnkgYWxsb2NhdGlvbnMgYXJl
IGRvbmUgd2l0aCBjdHguCiAgKi8KLXZvaWQgZmlyZV93YXRjaGVzKHN0cnVj
dCBjb25uZWN0aW9uICpjb25uLCB2b2lkICpjdHgsIGNvbnN0IGNoYXIgKm5h
bWUsCit2b2lkIGZpcmVfd2F0Y2hlcyhzdHJ1Y3QgY29ubmVjdGlvbiAqY29u
biwgY29uc3Qgdm9pZCAqY3R4LCBjb25zdCBjaGFyICpuYW1lLAogCQkgIGJv
b2wgcmVjdXJzZSkKIHsKIAlzdHJ1Y3QgY29ubmVjdGlvbiAqaTsKZGlmZiAt
LWdpdCBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF93YXRjaC5oIGIvdG9v
bHMveGVuc3RvcmUveGVuc3RvcmVkX3dhdGNoLmgKaW5kZXggYzcyZWE2YTY4
NTQyLi41NGQ0ZWE3ZTBkNDEgMTAwNjQ0Ci0tLSBhL3Rvb2xzL3hlbnN0b3Jl
L3hlbnN0b3JlZF93YXRjaC5oCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0
b3JlZF93YXRjaC5oCkBAIC0yNSw3ICsyNSw3IEBAIGludCBkb193YXRjaChz
dHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgc3RydWN0IGJ1ZmZlcmVkX2RhdGEg
KmluKTsKIGludCBkb191bndhdGNoKHN0cnVjdCBjb25uZWN0aW9uICpjb25u
LCBzdHJ1Y3QgYnVmZmVyZWRfZGF0YSAqaW4pOwogCiAvKiBGaXJlIGFsbCB3
YXRjaGVzOiByZWN1cnNlIG1lYW5zIGFsbCB0aGUgY2hpbGRyZW4gYXJlIGFm
ZmVjdGVkIChpZS4gcm0pLiAqLwotdm9pZCBmaXJlX3dhdGNoZXMoc3RydWN0
IGNvbm5lY3Rpb24gKmNvbm4sIHZvaWQgKnRtcCwgY29uc3QgY2hhciAqbmFt
ZSwKK3ZvaWQgZmlyZV93YXRjaGVzKHN0cnVjdCBjb25uZWN0aW9uICpjb25u
LCBjb25zdCB2b2lkICp0bXAsIGNvbnN0IGNoYXIgKm5hbWUsCiAJCSAgYm9v
bCByZWN1cnNlKTsKIAogdm9pZCBjb25uX2RlbGV0ZV9hbGxfd2F0Y2hlcyhz
dHJ1Y3QgY29ubmVjdGlvbiAqY29ubik7Ci0tIAoyLjE3LjEKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.14-c/0007-tools-xenstore-fire-watches-only-when-removing-a-spe.patch"
Content-Disposition: attachment;
 filename="xsa115-4.14-c/0007-tools-xenstore-fire-watches-only-when-removing-a-spe.patch"
Content-Transfer-Encoding: base64

RnJvbSAzZDRlM2ZkNmM3ODc5NWJmNDI2OTQ3ZmJmYmZhOWFmNjU2OGVjZTlm
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IFRodSwgMTEgSnVuIDIwMjAgMTY6
MTI6NDMgKzAyMDAKU3ViamVjdDogW1BBVENIIDA3LzEwXSB0b29scy94ZW5z
dG9yZTogZmlyZSB3YXRjaGVzIG9ubHkgd2hlbiByZW1vdmluZyBhCiBzcGVj
aWZpYyBub2RlCgpJbnN0ZWFkIG9mIGZpcmluZyBhbGwgd2F0Y2hlcyBmb3Ig
cmVtb3ZpbmcgYSBzdWJ0cmVlIGluIG9uZSBnbywgZG8gc28Kb25seSB3aGVu
IHRoZSByZWxhdGVkIG5vZGUgaXMgYmVpbmcgcmVtb3ZlZC4KClRoZSB3YXRj
aGVzIGZvciB0aGUgdG9wLW1vc3Qgbm9kZSBiZWluZyByZW1vdmVkIGluY2x1
ZGUgYWxsIHdhdGNoZXMKaW5jbHVkaW5nIHRoYXQgbm9kZSwgd2hpbGUgd2F0
Y2hlcyBmb3Igbm9kZXMgYmVsb3cgdGhhdCBhcmUgb25seSBmaXJlZAppZiB0
aGV5IGFyZSBtYXRjaGluZyBleGFjdGx5LiBUaGlzIGF2b2lkcyBmaXJpbmcg
YW55IHdhdGNoIG1vcmUgdGhhbgpvbmNlIHdoZW4gcmVtb3ZpbmcgYSBzdWJ0
cmVlLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0xMTUuCgpTaWduZWQtb2ZmLWJ5
OiBKdWVyZ2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+ClJldmlld2VkLWJ5
OiBKdWxpZW4gR3JhbGwgPGpncmFsbEBhbWF6b24uY29tPgpSZXZpZXdlZC1i
eTogUGF1bCBEdXJyYW50IDxwYXVsQHhlbi5vcmc+Ci0tLQogdG9vbHMveGVu
c3RvcmUveGVuc3RvcmVkX2NvcmUuYyAgfCAxMSArKysrKystLS0tLQogdG9v
bHMveGVuc3RvcmUveGVuc3RvcmVkX3dhdGNoLmMgfCAxMyArKysrKysrKy0t
LS0tCiB0b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfd2F0Y2guaCB8ICA0ICsr
LS0KIDMgZmlsZXMgY2hhbmdlZCwgMTYgaW5zZXJ0aW9ucygrKSwgMTIgZGVs
ZXRpb25zKC0pCgpkaWZmIC0tZ2l0IGEvdG9vbHMveGVuc3RvcmUveGVuc3Rv
cmVkX2NvcmUuYyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMK
aW5kZXggZGZkYjY0ZjNlZTYwLi4yMGE3YTM1ODE1NTUgMTAwNjQ0Ci0tLSBh
L3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMKKysrIGIvdG9vbHMv
eGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYwpAQCAtMTEyMCw4ICsxMTIwLDgg
QEAgc3RhdGljIHZvaWQgZGVsZXRlX2NoaWxkKHN0cnVjdCBjb25uZWN0aW9u
ICpjb25uLAogCWNvcnJ1cHQoY29ubiwgIkNhbid0IGZpbmQgY2hpbGQgJyVz
JyBpbiAlcyIsIGNoaWxkbmFtZSwgbm9kZS0+bmFtZSk7CiB9CiAKLXN0YXRp
YyBpbnQgZGVsZXRlX25vZGUoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0
cnVjdCBub2RlICpwYXJlbnQsCi0JCSAgICAgICBzdHJ1Y3Qgbm9kZSAqbm9k
ZSkKK3N0YXRpYyBpbnQgZGVsZXRlX25vZGUoc3RydWN0IGNvbm5lY3Rpb24g
KmNvbm4sIGNvbnN0IHZvaWQgKmN0eCwKKwkJICAgICAgIHN0cnVjdCBub2Rl
ICpwYXJlbnQsIHN0cnVjdCBub2RlICpub2RlKQogewogCWNoYXIgKm5hbWU7
CiAKQEAgLTExMzMsNyArMTEzMyw3IEBAIHN0YXRpYyBpbnQgZGVsZXRlX25v
ZGUoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBub2RlICpwYXJl
bnQsCiAJCQkJICAgICAgIG5vZGUtPmNoaWxkcmVuKTsKIAkJY2hpbGQgPSBu
YW1lID8gcmVhZF9ub2RlKGNvbm4sIG5vZGUsIG5hbWUpIDogTlVMTDsKIAkJ
aWYgKGNoaWxkKSB7Ci0JCQlpZiAoZGVsZXRlX25vZGUoY29ubiwgbm9kZSwg
Y2hpbGQpKQorCQkJaWYgKGRlbGV0ZV9ub2RlKGNvbm4sIGN0eCwgbm9kZSwg
Y2hpbGQpKQogCQkJCXJldHVybiBlcnJubzsKIAkJfSBlbHNlIHsKIAkJCXRy
YWNlKCJkZWxldGVfbm9kZTogRXJyb3IgZGVsZXRpbmcgY2hpbGQgJyVzLyVz
JyFcbiIsCkBAIC0xMTQ1LDYgKzExNDUsNyBAQCBzdGF0aWMgaW50IGRlbGV0
ZV9ub2RlKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3Qgbm9kZSAq
cGFyZW50LAogCQl0YWxsb2NfZnJlZShuYW1lKTsKIAl9CiAKKwlmaXJlX3dh
dGNoZXMoY29ubiwgY3R4LCBub2RlLT5uYW1lLCB0cnVlKTsKIAlkZWxldGVf
bm9kZV9zaW5nbGUoY29ubiwgbm9kZSk7CiAJZGVsZXRlX2NoaWxkKGNvbm4s
IHBhcmVudCwgYmFzZW5hbWUobm9kZS0+bmFtZSkpOwogCXRhbGxvY19mcmVl
KG5vZGUpOwpAQCAtMTE3NCw4ICsxMTc1LDggQEAgc3RhdGljIGludCBfcm0o
c3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIGNvbnN0IHZvaWQgKmN0eCwgc3Ry
dWN0IG5vZGUgKm5vZGUsCiAJICogVGhpcyBmaW5lIGFzIHdlIGFyZSBzaW5n
bGUgdGhyZWFkZWQgYW5kIHRoZSBuZXh0IHBvc3NpYmxlIHJlYWQgd2lsbAog
CSAqIGJlIGhhbmRsZWQgb25seSBhZnRlciB0aGUgbm9kZSBoYXMgYmVlbiBy
ZWFsbHkgcmVtb3ZlZC4KIAkgKi8KLQlmaXJlX3dhdGNoZXMoY29ubiwgY3R4
LCBuYW1lLCB0cnVlKTsKLQlyZXR1cm4gZGVsZXRlX25vZGUoY29ubiwgcGFy
ZW50LCBub2RlKTsKKwlmaXJlX3dhdGNoZXMoY29ubiwgY3R4LCBuYW1lLCBm
YWxzZSk7CisJcmV0dXJuIGRlbGV0ZV9ub2RlKGNvbm4sIGN0eCwgcGFyZW50
LCBub2RlKTsKIH0KIAogCmRpZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94
ZW5zdG9yZWRfd2F0Y2guYyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF93
YXRjaC5jCmluZGV4IGYwYmJmZTdhNmRjNi4uMzgzNjY3NTQ1OWZhIDEwMDY0
NAotLS0gYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfd2F0Y2guYworKysg
Yi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfd2F0Y2guYwpAQCAtMTIyLDcg
KzEyMiw3IEBAIHN0YXRpYyB2b2lkIGFkZF9ldmVudChzdHJ1Y3QgY29ubmVj
dGlvbiAqY29ubiwKICAqIFRlbXBvcmFyeSBtZW1vcnkgYWxsb2NhdGlvbnMg
YXJlIGRvbmUgd2l0aCBjdHguCiAgKi8KIHZvaWQgZmlyZV93YXRjaGVzKHN0
cnVjdCBjb25uZWN0aW9uICpjb25uLCBjb25zdCB2b2lkICpjdHgsIGNvbnN0
IGNoYXIgKm5hbWUsCi0JCSAgYm9vbCByZWN1cnNlKQorCQkgIGJvb2wgZXhh
Y3QpCiB7CiAJc3RydWN0IGNvbm5lY3Rpb24gKmk7CiAJc3RydWN0IHdhdGNo
ICp3YXRjaDsKQEAgLTEzNCwxMCArMTM0LDEzIEBAIHZvaWQgZmlyZV93YXRj
aGVzKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBjb25zdCB2b2lkICpjdHgs
IGNvbnN0IGNoYXIgKm5hbWUsCiAJLyogQ3JlYXRlIGFuIGV2ZW50IGZvciBl
YWNoIHdhdGNoLiAqLwogCWxpc3RfZm9yX2VhY2hfZW50cnkoaSwgJmNvbm5l
Y3Rpb25zLCBsaXN0KSB7CiAJCWxpc3RfZm9yX2VhY2hfZW50cnkod2F0Y2gs
ICZpLT53YXRjaGVzLCBsaXN0KSB7Ci0JCQlpZiAoaXNfY2hpbGQobmFtZSwg
d2F0Y2gtPm5vZGUpKQotCQkJCWFkZF9ldmVudChpLCBjdHgsIHdhdGNoLCBu
YW1lKTsKLQkJCWVsc2UgaWYgKHJlY3Vyc2UgJiYgaXNfY2hpbGQod2F0Y2gt
Pm5vZGUsIG5hbWUpKQotCQkJCWFkZF9ldmVudChpLCBjdHgsIHdhdGNoLCB3
YXRjaC0+bm9kZSk7CisJCQlpZiAoZXhhY3QpIHsKKwkJCQlpZiAoc3RyZXEo
bmFtZSwgd2F0Y2gtPm5vZGUpKQorCQkJCQlhZGRfZXZlbnQoaSwgY3R4LCB3
YXRjaCwgbmFtZSk7CisJCQl9IGVsc2UgeworCQkJCWlmIChpc19jaGlsZChu
YW1lLCB3YXRjaC0+bm9kZSkpCisJCQkJCWFkZF9ldmVudChpLCBjdHgsIHdh
dGNoLCBuYW1lKTsKKwkJCX0KIAkJfQogCX0KIH0KZGlmZiAtLWdpdCBhL3Rv
b2xzL3hlbnN0b3JlL3hlbnN0b3JlZF93YXRjaC5oIGIvdG9vbHMveGVuc3Rv
cmUveGVuc3RvcmVkX3dhdGNoLmgKaW5kZXggNTRkNGVhN2UwZDQxLi4xYjNj
ODBkM2RkYTEgMTAwNjQ0Ci0tLSBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3Jl
ZF93YXRjaC5oCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF93YXRj
aC5oCkBAIC0yNCw5ICsyNCw5IEBACiBpbnQgZG9fd2F0Y2goc3RydWN0IGNv
bm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBidWZmZXJlZF9kYXRhICppbik7CiBp
bnQgZG9fdW53YXRjaChzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgc3RydWN0
IGJ1ZmZlcmVkX2RhdGEgKmluKTsKIAotLyogRmlyZSBhbGwgd2F0Y2hlczog
cmVjdXJzZSBtZWFucyBhbGwgdGhlIGNoaWxkcmVuIGFyZSBhZmZlY3RlZCAo
aWUuIHJtKS4gKi8KKy8qIEZpcmUgYWxsIHdhdGNoZXM6ICFleGFjdCBtZWFu
cyBhbGwgdGhlIGNoaWxkcmVuIGFyZSBhZmZlY3RlZCAoaWUuIHJtKS4gKi8K
IHZvaWQgZmlyZV93YXRjaGVzKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBj
b25zdCB2b2lkICp0bXAsIGNvbnN0IGNoYXIgKm5hbWUsCi0JCSAgYm9vbCBy
ZWN1cnNlKTsKKwkJICBib29sIGV4YWN0KTsKIAogdm9pZCBjb25uX2RlbGV0
ZV9hbGxfd2F0Y2hlcyhzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubik7CiAKLS0g
CjIuMTcuMQoK

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.14-c/0008-tools-xenstore-introduce-node_perms-structure.patch"
Content-Disposition: attachment;
 filename="xsa115-4.14-c/0008-tools-xenstore-introduce-node_perms-structure.patch"
Content-Transfer-Encoding: base64

RnJvbSAxMDY5YzYwMGY4NWZmNTgzYzQ2MWNmYmZlZTFhZmIxYTA3MzE3OTZl
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IFRodSwgMTEgSnVuIDIwMjAgMTY6
MTI6NDQgKzAyMDAKU3ViamVjdDogW1BBVENIIDA4LzEwXSB0b29scy94ZW5z
dG9yZTogaW50cm9kdWNlIG5vZGVfcGVybXMgc3RydWN0dXJlCgpUaGVyZSBh
cmUgc2V2ZXJhbCBwbGFjZXMgaW4geGVuc3RvcmVkIHVzaW5nIGEgcGVybWlz
c2lvbiBhcnJheSBhbmQgdGhlCnNpemUgb2YgdGhhdCBhcnJheS4gSW50cm9k
dWNlIGEgbmV3IHN0cnVjdCBub2RlX3Blcm1zIGNvbnRhaW5pbmcgYm90aC4K
ClRoaXMgaXMgcGFydCBvZiBYU0EtMTE1LgoKU2lnbmVkLW9mZi1ieTogSnVl
cmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpBY2tlZC1ieTogSnVsaWVu
IEdyYWxsIDxqZ3JhbGxAYW1hem9uLmNvbT4KUmV2aWV3ZWQtYnk6IFBhdWwg
RHVycmFudCA8cGF1bEB4ZW4ub3JnPgotLS0KIHRvb2xzL3hlbnN0b3JlL3hl
bnN0b3JlZF9jb3JlLmMgICB8IDc5ICsrKysrKysrKysrKysrKy0tLS0tLS0t
LS0tLS0tLS0KIHRvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmggICB8
ICA4ICsrKy0KIHRvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9kb21haW4uYyB8
IDEyICsrLS0tCiAzIGZpbGVzIGNoYW5nZWQsIDUwIGluc2VydGlvbnMoKyks
IDQ5IGRlbGV0aW9ucygtKQoKZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3Jl
L3hlbnN0b3JlZF9jb3JlLmMgYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRf
Y29yZS5jCmluZGV4IDIwYTdhMzU4MTU1NS4uNzlkMzA1ZmJiZTU4IDEwMDY0
NAotLS0gYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jCisrKyBi
L3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMKQEAgLTQwMywxNCAr
NDAzLDE0IEBAIHN0YXRpYyBzdHJ1Y3Qgbm9kZSAqcmVhZF9ub2RlKHN0cnVj
dCBjb25uZWN0aW9uICpjb25uLCBjb25zdCB2b2lkICpjdHgsCiAJLyogRGF0
YWxlbiwgY2hpbGRsZW4sIG51bWJlciBvZiBwZXJtaXNzaW9ucyAqLwogCWhk
ciA9ICh2b2lkICopZGF0YS5kcHRyOwogCW5vZGUtPmdlbmVyYXRpb24gPSBo
ZHItPmdlbmVyYXRpb247Ci0Jbm9kZS0+bnVtX3Blcm1zID0gaGRyLT5udW1f
cGVybXM7CisJbm9kZS0+cGVybXMubnVtID0gaGRyLT5udW1fcGVybXM7CiAJ
bm9kZS0+ZGF0YWxlbiA9IGhkci0+ZGF0YWxlbjsKIAlub2RlLT5jaGlsZGxl
biA9IGhkci0+Y2hpbGRsZW47CiAKIAkvKiBQZXJtaXNzaW9ucyBhcmUgc3Ry
dWN0IHhzX3Blcm1pc3Npb25zLiAqLwotCW5vZGUtPnBlcm1zID0gaGRyLT5w
ZXJtczsKKwlub2RlLT5wZXJtcy5wID0gaGRyLT5wZXJtczsKIAkvKiBEYXRh
IGlzIGJpbmFyeSBibG9iICh1c3VhbGx5IGFzY2lpLCBubyBudWwpLiAqLwot
CW5vZGUtPmRhdGEgPSBub2RlLT5wZXJtcyArIG5vZGUtPm51bV9wZXJtczsK
Kwlub2RlLT5kYXRhID0gbm9kZS0+cGVybXMucCArIG5vZGUtPnBlcm1zLm51
bTsKIAkvKiBDaGlsZHJlbiBpcyBzdHJpbmdzLCBudWwgc2VwYXJhdGVkLiAq
LwogCW5vZGUtPmNoaWxkcmVuID0gbm9kZS0+ZGF0YSArIG5vZGUtPmRhdGFs
ZW47CiAKQEAgLTQyNyw3ICs0MjcsNyBAQCBpbnQgd3JpdGVfbm9kZV9yYXco
c3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIFREQl9EQVRBICprZXksIHN0cnVj
dCBub2RlICpub2RlLAogCXN0cnVjdCB4c190ZGJfcmVjb3JkX2hkciAqaGRy
OwogCiAJZGF0YS5kc2l6ZSA9IHNpemVvZigqaGRyKQotCQkrIG5vZGUtPm51
bV9wZXJtcypzaXplb2Yobm9kZS0+cGVybXNbMF0pCisJCSsgbm9kZS0+cGVy
bXMubnVtICogc2l6ZW9mKG5vZGUtPnBlcm1zLnBbMF0pCiAJCSsgbm9kZS0+
ZGF0YWxlbiArIG5vZGUtPmNoaWxkbGVuOwogCiAJaWYgKCFub19xdW90YV9j
aGVjayAmJiBkb21haW5faXNfdW5wcml2aWxlZ2VkKGNvbm4pICYmCkBAIC00
MzksMTIgKzQzOSwxMyBAQCBpbnQgd3JpdGVfbm9kZV9yYXcoc3RydWN0IGNv
bm5lY3Rpb24gKmNvbm4sIFREQl9EQVRBICprZXksIHN0cnVjdCBub2RlICpu
b2RlLAogCWRhdGEuZHB0ciA9IHRhbGxvY19zaXplKG5vZGUsIGRhdGEuZHNp
emUpOwogCWhkciA9ICh2b2lkICopZGF0YS5kcHRyOwogCWhkci0+Z2VuZXJh
dGlvbiA9IG5vZGUtPmdlbmVyYXRpb247Ci0JaGRyLT5udW1fcGVybXMgPSBu
b2RlLT5udW1fcGVybXM7CisJaGRyLT5udW1fcGVybXMgPSBub2RlLT5wZXJt
cy5udW07CiAJaGRyLT5kYXRhbGVuID0gbm9kZS0+ZGF0YWxlbjsKIAloZHIt
PmNoaWxkbGVuID0gbm9kZS0+Y2hpbGRsZW47CiAKLQltZW1jcHkoaGRyLT5w
ZXJtcywgbm9kZS0+cGVybXMsIG5vZGUtPm51bV9wZXJtcypzaXplb2Yobm9k
ZS0+cGVybXNbMF0pKTsKLQlwID0gaGRyLT5wZXJtcyArIG5vZGUtPm51bV9w
ZXJtczsKKwltZW1jcHkoaGRyLT5wZXJtcywgbm9kZS0+cGVybXMucCwKKwkg
ICAgICAgbm9kZS0+cGVybXMubnVtICogc2l6ZW9mKCpub2RlLT5wZXJtcy5w
KSk7CisJcCA9IGhkci0+cGVybXMgKyBub2RlLT5wZXJtcy5udW07CiAJbWVt
Y3B5KHAsIG5vZGUtPmRhdGEsIG5vZGUtPmRhdGFsZW4pOwogCXAgKz0gbm9k
ZS0+ZGF0YWxlbjsKIAltZW1jcHkocCwgbm9kZS0+Y2hpbGRyZW4sIG5vZGUt
PmNoaWxkbGVuKTsKQEAgLTQ3MCw4ICs0NzEsNyBAQCBzdGF0aWMgaW50IHdy
aXRlX25vZGUoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBub2Rl
ICpub2RlLAogfQogCiBzdGF0aWMgZW51bSB4c19wZXJtX3R5cGUgcGVybV9m
b3JfY29ubihzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwKLQkJCQkgICAgICAg
c3RydWN0IHhzX3Blcm1pc3Npb25zICpwZXJtcywKLQkJCQkgICAgICAgdW5z
aWduZWQgaW50IG51bSkKKwkJCQkgICAgICAgY29uc3Qgc3RydWN0IG5vZGVf
cGVybXMgKnBlcm1zKQogewogCXVuc2lnbmVkIGludCBpOwogCWVudW0geHNf
cGVybV90eXBlIG1hc2sgPSBYU19QRVJNX1JFQUR8WFNfUEVSTV9XUklURXxY
U19QRVJNX09XTkVSOwpAQCAtNDgwLDE2ICs0ODAsMTYgQEAgc3RhdGljIGVu
dW0geHNfcGVybV90eXBlIHBlcm1fZm9yX2Nvbm4oc3RydWN0IGNvbm5lY3Rp
b24gKmNvbm4sCiAJCW1hc2sgJj0gflhTX1BFUk1fV1JJVEU7CiAKIAkvKiBP
d25lcnMgYW5kIHRvb2xzIGdldCBpdCBhbGwuLi4gKi8KLQlpZiAoIWRvbWFp
bl9pc191bnByaXZpbGVnZWQoY29ubikgfHwgcGVybXNbMF0uaWQgPT0gY29u
bi0+aWQKLSAgICAgICAgICAgICAgICB8fCAoY29ubi0+dGFyZ2V0ICYmIHBl
cm1zWzBdLmlkID09IGNvbm4tPnRhcmdldC0+aWQpKQorCWlmICghZG9tYWlu
X2lzX3VucHJpdmlsZWdlZChjb25uKSB8fCBwZXJtcy0+cFswXS5pZCA9PSBj
b25uLT5pZAorICAgICAgICAgICAgICAgIHx8IChjb25uLT50YXJnZXQgJiYg
cGVybXMtPnBbMF0uaWQgPT0gY29ubi0+dGFyZ2V0LT5pZCkpCiAJCXJldHVy
biAoWFNfUEVSTV9SRUFEfFhTX1BFUk1fV1JJVEV8WFNfUEVSTV9PV05FUikg
JiBtYXNrOwogCi0JZm9yIChpID0gMTsgaSA8IG51bTsgaSsrKQotCQlpZiAo
cGVybXNbaV0uaWQgPT0gY29ubi0+aWQKLSAgICAgICAgICAgICAgICAgICAg
ICAgIHx8IChjb25uLT50YXJnZXQgJiYgcGVybXNbaV0uaWQgPT0gY29ubi0+
dGFyZ2V0LT5pZCkpCi0JCQlyZXR1cm4gcGVybXNbaV0ucGVybXMgJiBtYXNr
OworCWZvciAoaSA9IDE7IGkgPCBwZXJtcy0+bnVtOyBpKyspCisJCWlmIChw
ZXJtcy0+cFtpXS5pZCA9PSBjb25uLT5pZAorICAgICAgICAgICAgICAgICAg
ICAgICAgfHwgKGNvbm4tPnRhcmdldCAmJiBwZXJtcy0+cFtpXS5pZCA9PSBj
b25uLT50YXJnZXQtPmlkKSkKKwkJCXJldHVybiBwZXJtcy0+cFtpXS5wZXJt
cyAmIG1hc2s7CiAKLQlyZXR1cm4gcGVybXNbMF0ucGVybXMgJiBtYXNrOwor
CXJldHVybiBwZXJtcy0+cFswXS5wZXJtcyAmIG1hc2s7CiB9CiAKIC8qCkBA
IC01MzYsNyArNTM2LDcgQEAgc3RhdGljIGludCBhc2tfcGFyZW50cyhzdHJ1
Y3QgY29ubmVjdGlvbiAqY29ubiwgY29uc3Qgdm9pZCAqY3R4LAogCQlyZXR1
cm4gMDsKIAl9CiAKLQkqcGVybSA9IHBlcm1fZm9yX2Nvbm4oY29ubiwgbm9k
ZS0+cGVybXMsIG5vZGUtPm51bV9wZXJtcyk7CisJKnBlcm0gPSBwZXJtX2Zv
cl9jb25uKGNvbm4sICZub2RlLT5wZXJtcyk7CiAJcmV0dXJuIDA7CiB9CiAK
QEAgLTU4Miw4ICs1ODIsNyBAQCBzdHJ1Y3Qgbm9kZSAqZ2V0X25vZGUoc3Ry
dWN0IGNvbm5lY3Rpb24gKmNvbm4sCiAJbm9kZSA9IHJlYWRfbm9kZShjb25u
LCBjdHgsIG5hbWUpOwogCS8qIElmIHdlIGRvbid0IGhhdmUgcGVybWlzc2lv
biwgd2UgZG9uJ3QgaGF2ZSBub2RlLiAqLwogCWlmIChub2RlKSB7Ci0JCWlm
ICgocGVybV9mb3JfY29ubihjb25uLCBub2RlLT5wZXJtcywgbm9kZS0+bnVt
X3Blcm1zKSAmIHBlcm0pCi0JCSAgICAhPSBwZXJtKSB7CisJCWlmICgocGVy
bV9mb3JfY29ubihjb25uLCAmbm9kZS0+cGVybXMpICYgcGVybSkgIT0gcGVy
bSkgewogCQkJZXJybm8gPSBFQUNDRVM7CiAJCQlub2RlID0gTlVMTDsKIAkJ
fQpAQCAtNzU5LDE2ICs3NTgsMTUgQEAgY29uc3QgY2hhciAqb25lYXJnKHN0
cnVjdCBidWZmZXJlZF9kYXRhICppbikKIAlyZXR1cm4gaW4tPmJ1ZmZlcjsK
IH0KIAotc3RhdGljIGNoYXIgKnBlcm1zX3RvX3N0cmluZ3MoY29uc3Qgdm9p
ZCAqY3R4LAotCQkJICAgICAgc3RydWN0IHhzX3Blcm1pc3Npb25zICpwZXJt
cywgdW5zaWduZWQgaW50IG51bSwKK3N0YXRpYyBjaGFyICpwZXJtc190b19z
dHJpbmdzKGNvbnN0IHZvaWQgKmN0eCwgY29uc3Qgc3RydWN0IG5vZGVfcGVy
bXMgKnBlcm1zLAogCQkJICAgICAgdW5zaWduZWQgaW50ICpsZW4pCiB7CiAJ
dW5zaWduZWQgaW50IGk7CiAJY2hhciAqc3RyaW5ncyA9IE5VTEw7CiAJY2hh
ciBidWZmZXJbTUFYX1NUUkxFTih1bnNpZ25lZCBpbnQpICsgMV07CiAKLQlm
b3IgKCpsZW4gPSAwLCBpID0gMDsgaSA8IG51bTsgaSsrKSB7Ci0JCWlmICgh
eHNfcGVybV90b19zdHJpbmcoJnBlcm1zW2ldLCBidWZmZXIsIHNpemVvZihi
dWZmZXIpKSkKKwlmb3IgKCpsZW4gPSAwLCBpID0gMDsgaSA8IHBlcm1zLT5u
dW07IGkrKykgeworCQlpZiAoIXhzX3Blcm1fdG9fc3RyaW5nKCZwZXJtcy0+
cFtpXSwgYnVmZmVyLCBzaXplb2YoYnVmZmVyKSkpCiAJCQlyZXR1cm4gTlVM
TDsKIAogCQlzdHJpbmdzID0gdGFsbG9jX3JlYWxsb2MoY3R4LCBzdHJpbmdz
LCBjaGFyLApAQCAtOTQ3LDEzICs5NDUsMTMgQEAgc3RhdGljIHN0cnVjdCBu
b2RlICpjb25zdHJ1Y3Rfbm9kZShzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwg
Y29uc3Qgdm9pZCAqY3R4LAogCQlnb3RvIG5vbWVtOwogCiAJLyogSW5oZXJp
dCBwZXJtaXNzaW9ucywgZXhjZXB0IHVucHJpdmlsZWdlZCBkb21haW5zIG93
biB3aGF0IHRoZXkgY3JlYXRlICovCi0Jbm9kZS0+bnVtX3Blcm1zID0gcGFy
ZW50LT5udW1fcGVybXM7Ci0Jbm9kZS0+cGVybXMgPSB0YWxsb2NfbWVtZHVw
KG5vZGUsIHBhcmVudC0+cGVybXMsCi0JCQkJICAgIG5vZGUtPm51bV9wZXJt
cyAqIHNpemVvZihub2RlLT5wZXJtc1swXSkpOwotCWlmICghbm9kZS0+cGVy
bXMpCisJbm9kZS0+cGVybXMubnVtID0gcGFyZW50LT5wZXJtcy5udW07CisJ
bm9kZS0+cGVybXMucCA9IHRhbGxvY19tZW1kdXAobm9kZSwgcGFyZW50LT5w
ZXJtcy5wLAorCQkJCSAgICAgIG5vZGUtPnBlcm1zLm51bSAqIHNpemVvZigq
bm9kZS0+cGVybXMucCkpOworCWlmICghbm9kZS0+cGVybXMucCkKIAkJZ290
byBub21lbTsKIAlpZiAoZG9tYWluX2lzX3VucHJpdmlsZWdlZChjb25uKSkK
LQkJbm9kZS0+cGVybXNbMF0uaWQgPSBjb25uLT5pZDsKKwkJbm9kZS0+cGVy
bXMucFswXS5pZCA9IGNvbm4tPmlkOwogCiAJLyogTm8gY2hpbGRyZW4sIG5v
IGRhdGEgKi8KIAlub2RlLT5jaGlsZHJlbiA9IG5vZGUtPmRhdGEgPSBOVUxM
OwpAQCAtMTIzMCw3ICsxMjI4LDcgQEAgc3RhdGljIGludCBkb19nZXRfcGVy
bXMoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBidWZmZXJlZF9k
YXRhICppbikKIAlpZiAoIW5vZGUpCiAJCXJldHVybiBlcnJubzsKIAotCXN0
cmluZ3MgPSBwZXJtc190b19zdHJpbmdzKG5vZGUsIG5vZGUtPnBlcm1zLCBu
b2RlLT5udW1fcGVybXMsICZsZW4pOworCXN0cmluZ3MgPSBwZXJtc190b19z
dHJpbmdzKG5vZGUsICZub2RlLT5wZXJtcywgJmxlbik7CiAJaWYgKCFzdHJp
bmdzKQogCQlyZXR1cm4gZXJybm87CiAKQEAgLTEyNDEsMTMgKzEyMzksMTIg
QEAgc3RhdGljIGludCBkb19nZXRfcGVybXMoc3RydWN0IGNvbm5lY3Rpb24g
KmNvbm4sIHN0cnVjdCBidWZmZXJlZF9kYXRhICppbikKIAogc3RhdGljIGlu
dCBkb19zZXRfcGVybXMoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVj
dCBidWZmZXJlZF9kYXRhICppbikKIHsKLQl1bnNpZ25lZCBpbnQgbnVtOwot
CXN0cnVjdCB4c19wZXJtaXNzaW9ucyAqcGVybXM7CisJc3RydWN0IG5vZGVf
cGVybXMgcGVybXM7CiAJY2hhciAqbmFtZSwgKnBlcm1zdHI7CiAJc3RydWN0
IG5vZGUgKm5vZGU7CiAKLQludW0gPSB4c19jb3VudF9zdHJpbmdzKGluLT5i
dWZmZXIsIGluLT51c2VkKTsKLQlpZiAobnVtIDwgMikKKwlwZXJtcy5udW0g
PSB4c19jb3VudF9zdHJpbmdzKGluLT5idWZmZXIsIGluLT51c2VkKTsKKwlp
ZiAocGVybXMubnVtIDwgMikKIAkJcmV0dXJuIEVJTlZBTDsKIAogCS8qIEZp
cnN0IGFyZyBpcyBub2RlIG5hbWUuICovCkBAIC0xMjU4LDIxICsxMjU1LDIx
IEBAIHN0YXRpYyBpbnQgZG9fc2V0X3Blcm1zKHN0cnVjdCBjb25uZWN0aW9u
ICpjb25uLCBzdHJ1Y3QgYnVmZmVyZWRfZGF0YSAqaW4pCiAJCXJldHVybiBl
cnJubzsKIAogCXBlcm1zdHIgPSBpbi0+YnVmZmVyICsgc3RybGVuKGluLT5i
dWZmZXIpICsgMTsKLQludW0tLTsKKwlwZXJtcy5udW0tLTsKIAotCXBlcm1z
ID0gdGFsbG9jX2FycmF5KG5vZGUsIHN0cnVjdCB4c19wZXJtaXNzaW9ucywg
bnVtKTsKLQlpZiAoIXBlcm1zKQorCXBlcm1zLnAgPSB0YWxsb2NfYXJyYXko
bm9kZSwgc3RydWN0IHhzX3Blcm1pc3Npb25zLCBwZXJtcy5udW0pOworCWlm
ICghcGVybXMucCkKIAkJcmV0dXJuIEVOT01FTTsKLQlpZiAoIXhzX3N0cmlu
Z3NfdG9fcGVybXMocGVybXMsIG51bSwgcGVybXN0cikpCisJaWYgKCF4c19z
dHJpbmdzX3RvX3Blcm1zKHBlcm1zLnAsIHBlcm1zLm51bSwgcGVybXN0cikp
CiAJCXJldHVybiBlcnJubzsKIAogCS8qIFVucHJpdmlsZWdlZCBkb21haW5z
IG1heSBub3QgY2hhbmdlIHRoZSBvd25lci4gKi8KLQlpZiAoZG9tYWluX2lz
X3VucHJpdmlsZWdlZChjb25uKSAmJiBwZXJtc1swXS5pZCAhPSBub2RlLT5w
ZXJtc1swXS5pZCkKKwlpZiAoZG9tYWluX2lzX3VucHJpdmlsZWdlZChjb25u
KSAmJgorCSAgICBwZXJtcy5wWzBdLmlkICE9IG5vZGUtPnBlcm1zLnBbMF0u
aWQpCiAJCXJldHVybiBFUEVSTTsKIAogCWRvbWFpbl9lbnRyeV9kZWMoY29u
biwgbm9kZSk7CiAJbm9kZS0+cGVybXMgPSBwZXJtczsKLQlub2RlLT5udW1f
cGVybXMgPSBudW07CiAJZG9tYWluX2VudHJ5X2luYyhjb25uLCBub2RlKTsK
IAogCWlmICh3cml0ZV9ub2RlKGNvbm4sIG5vZGUsIGZhbHNlKSkKQEAgLTE1
NDcsOCArMTU0NCw4IEBAIHN0YXRpYyB2b2lkIG1hbnVhbF9ub2RlKGNvbnN0
IGNoYXIgKm5hbWUsIGNvbnN0IGNoYXIgKmNoaWxkKQogCQliYXJmX3BlcnJv
cigiQ291bGQgbm90IGFsbG9jYXRlIGluaXRpYWwgbm9kZSAlcyIsIG5hbWUp
OwogCiAJbm9kZS0+bmFtZSA9IG5hbWU7Ci0Jbm9kZS0+cGVybXMgPSAmcGVy
bXM7Ci0Jbm9kZS0+bnVtX3Blcm1zID0gMTsKKwlub2RlLT5wZXJtcy5wID0g
JnBlcm1zOworCW5vZGUtPnBlcm1zLm51bSA9IDE7CiAJbm9kZS0+Y2hpbGRy
ZW4gPSAoY2hhciAqKWNoaWxkOwogCWlmIChjaGlsZCkKIAkJbm9kZS0+Y2hp
bGRsZW4gPSBzdHJsZW4oY2hpbGQpICsgMTsKZGlmZiAtLWdpdCBhL3Rvb2xz
L3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmggYi90b29scy94ZW5zdG9yZS94
ZW5zdG9yZWRfY29yZS5oCmluZGV4IDI5ZDYzOGZiYzVhMC4uNDdiYTA5MTZk
YmUyIDEwMDY0NAotLS0gYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29y
ZS5oCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmgKQEAg
LTEwOSw2ICsxMDksMTEgQEAgc3RydWN0IGNvbm5lY3Rpb24KIH07CiBleHRl
cm4gc3RydWN0IGxpc3RfaGVhZCBjb25uZWN0aW9uczsKIAorc3RydWN0IG5v
ZGVfcGVybXMgeworCXVuc2lnbmVkIGludCBudW07CisJc3RydWN0IHhzX3Bl
cm1pc3Npb25zICpwOworfTsKKwogc3RydWN0IG5vZGUgewogCWNvbnN0IGNo
YXIgKm5hbWU7CiAKQEAgLTEyMCw4ICsxMjUsNyBAQCBzdHJ1Y3Qgbm9kZSB7
CiAjZGVmaW5lIE5PX0dFTkVSQVRJT04gfigodWludDY0X3QpMCkKIAogCS8q
IFBlcm1pc3Npb25zLiAqLwotCXVuc2lnbmVkIGludCBudW1fcGVybXM7Ci0J
c3RydWN0IHhzX3Blcm1pc3Npb25zICpwZXJtczsKKwlzdHJ1Y3Qgbm9kZV9w
ZXJtcyBwZXJtczsKIAogCS8qIENvbnRlbnRzLiAqLwogCXVuc2lnbmVkIGlu
dCBkYXRhbGVuOwpkaWZmIC0tZ2l0IGEvdG9vbHMveGVuc3RvcmUveGVuc3Rv
cmVkX2RvbWFpbi5jIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2RvbWFp
bi5jCmluZGV4IDJkMGQ4N2VlODllMS4uYWE5OTQyZmNjMjY3IDEwMDY0NAot
LS0gYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfZG9tYWluLmMKKysrIGIv
dG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2RvbWFpbi5jCkBAIC02NTAsMTIg
KzY1MCwxMiBAQCB2b2lkIGRvbWFpbl9lbnRyeV9pbmMoc3RydWN0IGNvbm5l
Y3Rpb24gKmNvbm4sIHN0cnVjdCBub2RlICpub2RlKQogCWlmICghY29ubikK
IAkJcmV0dXJuOwogCi0JaWYgKG5vZGUtPnBlcm1zICYmIG5vZGUtPnBlcm1z
WzBdLmlkICE9IGNvbm4tPmlkKSB7CisJaWYgKG5vZGUtPnBlcm1zLnAgJiYg
bm9kZS0+cGVybXMucFswXS5pZCAhPSBjb25uLT5pZCkgewogCQlpZiAoY29u
bi0+dHJhbnNhY3Rpb24pIHsKIAkJCXRyYW5zYWN0aW9uX2VudHJ5X2luYyhj
b25uLT50cmFuc2FjdGlvbiwKLQkJCQlub2RlLT5wZXJtc1swXS5pZCk7CisJ
CQkJbm9kZS0+cGVybXMucFswXS5pZCk7CiAJCX0gZWxzZSB7Ci0JCQlkID0g
ZmluZF9kb21haW5fYnlfZG9taWQobm9kZS0+cGVybXNbMF0uaWQpOworCQkJ
ZCA9IGZpbmRfZG9tYWluX2J5X2RvbWlkKG5vZGUtPnBlcm1zLnBbMF0uaWQp
OwogCQkJaWYgKGQpCiAJCQkJZC0+bmJlbnRyeSsrOwogCQl9CkBAIC02NzYs
MTIgKzY3NiwxMiBAQCB2b2lkIGRvbWFpbl9lbnRyeV9kZWMoc3RydWN0IGNv
bm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBub2RlICpub2RlKQogCWlmICghY29u
bikKIAkJcmV0dXJuOwogCi0JaWYgKG5vZGUtPnBlcm1zICYmIG5vZGUtPnBl
cm1zWzBdLmlkICE9IGNvbm4tPmlkKSB7CisJaWYgKG5vZGUtPnBlcm1zLnAg
JiYgbm9kZS0+cGVybXMucFswXS5pZCAhPSBjb25uLT5pZCkgewogCQlpZiAo
Y29ubi0+dHJhbnNhY3Rpb24pIHsKIAkJCXRyYW5zYWN0aW9uX2VudHJ5X2Rl
Yyhjb25uLT50cmFuc2FjdGlvbiwKLQkJCQlub2RlLT5wZXJtc1swXS5pZCk7
CisJCQkJbm9kZS0+cGVybXMucFswXS5pZCk7CiAJCX0gZWxzZSB7Ci0JCQlk
ID0gZmluZF9kb21haW5fYnlfZG9taWQobm9kZS0+cGVybXNbMF0uaWQpOwor
CQkJZCA9IGZpbmRfZG9tYWluX2J5X2RvbWlkKG5vZGUtPnBlcm1zLnBbMF0u
aWQpOwogCQkJaWYgKGQgJiYgZC0+bmJlbnRyeSkKIAkJCQlkLT5uYmVudHJ5
LS07CiAJCX0KLS0gCjIuMTcuMQoK

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.14-c/0009-tools-xenstore-allow-special-watches-for-privileged-.patch"
Content-Disposition: attachment;
 filename="xsa115-4.14-c/0009-tools-xenstore-allow-special-watches-for-privileged-.patch"
Content-Transfer-Encoding: base64

RnJvbSBiOWZmZjRiN2FkNmI0MWRiODYwYTQzZDM1YzQwMTg0N2ZlZjc4OWNi
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IFRodSwgMTEgSnVuIDIwMjAgMTY6
MTI6NDUgKzAyMDAKU3ViamVjdDogW1BBVENIIDA5LzEwXSB0b29scy94ZW5z
dG9yZTogYWxsb3cgc3BlY2lhbCB3YXRjaGVzIGZvciBwcml2aWxlZ2VkCiBj
YWxsZXJzIG9ubHkKClRoZSBzcGVjaWFsIHdhdGNoZXMgIkBpbnRyb2R1Y2VE
b21haW4iIGFuZCAiQHJlbGVhc2VEb21haW4iIHNob3VsZCBiZQphbGxvd2Vk
IGZvciBwcml2aWxlZ2VkIGNhbGxlcnMgb25seSwgYXMgdGhleSBhbGxvdyB0
byBnYWluIGluZm9ybWF0aW9uCmFib3V0IHByZXNlbmNlIG9mIG90aGVyIGd1
ZXN0cyBvbiB0aGUgaG9zdC4gU28gc2VuZCB3YXRjaCBldmVudHMgZm9yCnRo
b3NlIHdhdGNoZXMgdmlhIHByaXZpbGVnZWQgY29ubmVjdGlvbnMgb25seS4K
CkluIG9yZGVyIHRvIGFsbG93IGZvciBkaXNhZ2dyZWdhdGVkIHNldHVwcyB3
aGVyZSBlLmcuIGRyaXZlciBkb21haW5zCm5lZWQgdG8gbWFrZSB1c2Ugb2Yg
dGhvc2Ugc3BlY2lhbCB3YXRjaGVzIGFkZCBzdXBwb3J0IGZvciBjYWxsaW5n
CiJzZXQgcGVybWlzc2lvbnMiIGZvciB0aG9zZSBzcGVjaWFsIG5vZGVzLCB0
b28uCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTExNS4KClNpZ25lZC1vZmYtYnk6
IEp1ZXJnZW4gR3Jvc3MgPGpncm9zc0BzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6
IEp1bGllbiBHcmFsbCA8amdyYWxsQGFtYXpvbi5jb20+ClJldmlld2VkLWJ5
OiBQYXVsIER1cnJhbnQgPHBhdWxAeGVuLm9yZz4KLS0tCiBkb2NzL21pc2Mv
eGVuc3RvcmUudHh0ICAgICAgICAgICAgfCAgNSArKysKIHRvb2xzL3hlbnN0
b3JlL3hlbnN0b3JlZF9jb3JlLmMgICB8IDI3ICsrKysrKysrLS0tLS0tCiB0
b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5oICAgfCAgMiArKwogdG9v
bHMveGVuc3RvcmUveGVuc3RvcmVkX2RvbWFpbi5jIHwgNjAgKysrKysrKysr
KysrKysrKysrKysrKysrKysrKysrKwogdG9vbHMveGVuc3RvcmUveGVuc3Rv
cmVkX2RvbWFpbi5oIHwgIDUgKysrCiB0b29scy94ZW5zdG9yZS94ZW5zdG9y
ZWRfd2F0Y2guYyAgfCAgNCArKysKIDYgZmlsZXMgY2hhbmdlZCwgOTMgaW5z
ZXJ0aW9ucygrKSwgMTAgZGVsZXRpb25zKC0pCgpkaWZmIC0tZ2l0IGEvZG9j
cy9taXNjL3hlbnN0b3JlLnR4dCBiL2RvY3MvbWlzYy94ZW5zdG9yZS50eHQK
aW5kZXggY2I4MDA5Y2I2ODZkLi4yMDgxZjIwZjU1ZTQgMTAwNjQ0Ci0tLSBh
L2RvY3MvbWlzYy94ZW5zdG9yZS50eHQKKysrIGIvZG9jcy9taXNjL3hlbnN0
b3JlLnR4dApAQCAtMTcwLDYgKzE3MCw5IEBAIFNFVF9QRVJNUwkJPHBhdGg+
fDxwZXJtLWFzLXN0cmluZz58Kz8KIAkJbjxkb21pZD4Jbm8gYWNjZXNzCiAJ
U2VlIGh0dHBzOi8vd2lraS54ZW4ub3JnL3dpa2kvWGVuQnVzIHNlY3Rpb24K
IAlgUGVybWlzc2lvbnMnIGZvciBkZXRhaWxzIG9mIHRoZSBwZXJtaXNzaW9u
cyBzeXN0ZW0uCisJSXQgaXMgcG9zc2libGUgdG8gc2V0IHBlcm1pc3Npb25z
IGZvciB0aGUgc3BlY2lhbCB3YXRjaCBwYXRocworCSJAaW50cm9kdWNlRG9t
YWluIiBhbmQgIkByZWxlYXNlRG9tYWluIiB0byBlbmFibGUgcmVjZWl2aW5n
IHRob3NlCisJd2F0Y2hlcyBpbiB1bnByaXZpbGVnZWQgZG9tYWlucy4KIAog
LS0tLS0tLS0tLSBXYXRjaGVzIC0tLS0tLS0tLS0KIApAQCAtMTk0LDYgKzE5
Nyw4IEBAIFdBVENICQkJPHdwYXRoPnw8dG9rZW4+fD8KIAkgICAgQHJlbGVh
c2VEb21haW4gCW9jY3VycyBvbiBhbnkgZG9tYWluIGNyYXNoIG9yCiAJCQkJ
c2h1dGRvd24sIGFuZCBhbHNvIG9uIFJFTEVBU0UKIAkJCQlhbmQgZG9tYWlu
IGRlc3RydWN0aW9uCisJPHdzcGVjaWFsPiBldmVudHMgYXJlIHNlbnQgdG8g
cHJpdmlsZWdlZCBjYWxsZXJzIG9yIGV4cGxpY2l0bHkKKwl2aWEgU0VUX1BF
Uk1TIGVuYWJsZWQgZG9tYWlucyBvbmx5LgogCiAJV2hlbiBhIHdhdGNoIGlz
IGZpcnN0IHNldCB1cCBpdCBpcyB0cmlnZ2VyZWQgb25jZSBzdHJhaWdodAog
CWF3YXksIHdpdGggPHBhdGg+IGVxdWFsIHRvIDx3cGF0aD4uICBXYXRjaGVz
IG1heSBiZSB0cmlnZ2VyZWQKZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3Jl
L3hlbnN0b3JlZF9jb3JlLmMgYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRf
Y29yZS5jCmluZGV4IDc5ZDMwNWZiYmU1OC4uMTVmZmJlYjMwZjE5IDEwMDY0
NAotLS0gYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jCisrKyBi
L3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMKQEAgLTQ3MCw4ICs0
NzAsOCBAQCBzdGF0aWMgaW50IHdyaXRlX25vZGUoc3RydWN0IGNvbm5lY3Rp
b24gKmNvbm4sIHN0cnVjdCBub2RlICpub2RlLAogCXJldHVybiB3cml0ZV9u
b2RlX3Jhdyhjb25uLCAma2V5LCBub2RlLCBub19xdW90YV9jaGVjayk7CiB9
CiAKLXN0YXRpYyBlbnVtIHhzX3Blcm1fdHlwZSBwZXJtX2Zvcl9jb25uKHN0
cnVjdCBjb25uZWN0aW9uICpjb25uLAotCQkJCSAgICAgICBjb25zdCBzdHJ1
Y3Qgbm9kZV9wZXJtcyAqcGVybXMpCitlbnVtIHhzX3Blcm1fdHlwZSBwZXJt
X2Zvcl9jb25uKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLAorCQkJCWNvbnN0
IHN0cnVjdCBub2RlX3Blcm1zICpwZXJtcykKIHsKIAl1bnNpZ25lZCBpbnQg
aTsKIAllbnVtIHhzX3Blcm1fdHlwZSBtYXNrID0gWFNfUEVSTV9SRUFEfFhT
X1BFUk1fV1JJVEV8WFNfUEVSTV9PV05FUjsKQEAgLTEyNDcsMjIgKzEyNDcs
MjkgQEAgc3RhdGljIGludCBkb19zZXRfcGVybXMoc3RydWN0IGNvbm5lY3Rp
b24gKmNvbm4sIHN0cnVjdCBidWZmZXJlZF9kYXRhICppbikKIAlpZiAocGVy
bXMubnVtIDwgMikKIAkJcmV0dXJuIEVJTlZBTDsKIAotCS8qIEZpcnN0IGFy
ZyBpcyBub2RlIG5hbWUuICovCi0JLyogV2UgbXVzdCBvd24gbm9kZSB0byBk
byB0aGlzICh0b29scyBjYW4gZG8gdGhpcyB0b28pLiAqLwotCW5vZGUgPSBn
ZXRfbm9kZV9jYW5vbmljYWxpemVkKGNvbm4sIGluLCBpbi0+YnVmZmVyLCAm
bmFtZSwKLQkJCQkgICAgICBYU19QRVJNX1dSSVRFIHwgWFNfUEVSTV9PV05F
Uik7Ci0JaWYgKCFub2RlKQotCQlyZXR1cm4gZXJybm87Ci0KIAlwZXJtc3Ry
ID0gaW4tPmJ1ZmZlciArIHN0cmxlbihpbi0+YnVmZmVyKSArIDE7CiAJcGVy
bXMubnVtLS07CiAKLQlwZXJtcy5wID0gdGFsbG9jX2FycmF5KG5vZGUsIHN0
cnVjdCB4c19wZXJtaXNzaW9ucywgcGVybXMubnVtKTsKKwlwZXJtcy5wID0g
dGFsbG9jX2FycmF5KGluLCBzdHJ1Y3QgeHNfcGVybWlzc2lvbnMsIHBlcm1z
Lm51bSk7CiAJaWYgKCFwZXJtcy5wKQogCQlyZXR1cm4gRU5PTUVNOwogCWlm
ICgheHNfc3RyaW5nc190b19wZXJtcyhwZXJtcy5wLCBwZXJtcy5udW0sIHBl
cm1zdHIpKQogCQlyZXR1cm4gZXJybm87CiAKKwkvKiBGaXJzdCBhcmcgaXMg
bm9kZSBuYW1lLiAqLworCWlmIChzdHJzdGFydHMoaW4tPmJ1ZmZlciwgIkAi
KSkgeworCQlpZiAoc2V0X3Blcm1zX3NwZWNpYWwoY29ubiwgaW4tPmJ1ZmZl
ciwgJnBlcm1zKSkKKwkJCXJldHVybiBlcnJubzsKKwkJc2VuZF9hY2soY29u
biwgWFNfU0VUX1BFUk1TKTsKKwkJcmV0dXJuIDA7CisJfQorCisJLyogV2Ug
bXVzdCBvd24gbm9kZSB0byBkbyB0aGlzICh0b29scyBjYW4gZG8gdGhpcyB0
b28pLiAqLworCW5vZGUgPSBnZXRfbm9kZV9jYW5vbmljYWxpemVkKGNvbm4s
IGluLCBpbi0+YnVmZmVyLCAmbmFtZSwKKwkJCQkgICAgICBYU19QRVJNX1dS
SVRFIHwgWFNfUEVSTV9PV05FUik7CisJaWYgKCFub2RlKQorCQlyZXR1cm4g
ZXJybm87CisKIAkvKiBVbnByaXZpbGVnZWQgZG9tYWlucyBtYXkgbm90IGNo
YW5nZSB0aGUgb3duZXIuICovCiAJaWYgKGRvbWFpbl9pc191bnByaXZpbGVn
ZWQoY29ubikgJiYKIAkgICAgcGVybXMucFswXS5pZCAhPSBub2RlLT5wZXJt
cy5wWzBdLmlkKQpkaWZmIC0tZ2l0IGEvdG9vbHMveGVuc3RvcmUveGVuc3Rv
cmVkX2NvcmUuaCBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmgK
aW5kZXggNDdiYTA5MTZkYmUyLi41M2YxMDUwODU5ZmMgMTAwNjQ0Ci0tLSBh
L3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmgKKysrIGIvdG9vbHMv
eGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuaApAQCAtMTY1LDYgKzE2NSw4IEBA
IHN0cnVjdCBub2RlICpnZXRfbm9kZShzdHJ1Y3QgY29ubmVjdGlvbiAqY29u
biwKIHN0cnVjdCBjb25uZWN0aW9uICpuZXdfY29ubmVjdGlvbihjb25ud3Jp
dGVmbl90ICp3cml0ZSwgY29ubnJlYWRmbl90ICpyZWFkKTsKIHZvaWQgY2hl
Y2tfc3RvcmUodm9pZCk7CiB2b2lkIGNvcnJ1cHQoc3RydWN0IGNvbm5lY3Rp
b24gKmNvbm4sIGNvbnN0IGNoYXIgKmZtdCwgLi4uKTsKK2VudW0geHNfcGVy
bV90eXBlIHBlcm1fZm9yX2Nvbm4oc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4s
CisJCQkJY29uc3Qgc3RydWN0IG5vZGVfcGVybXMgKnBlcm1zKTsKIAogLyog
SXMgdGhpcyBhIHZhbGlkIG5vZGUgbmFtZT8gKi8KIGJvb2wgaXNfdmFsaWRf
bm9kZW5hbWUoY29uc3QgY2hhciAqbm9kZSk7CmRpZmYgLS1naXQgYS90b29s
cy94ZW5zdG9yZS94ZW5zdG9yZWRfZG9tYWluLmMgYi90b29scy94ZW5zdG9y
ZS94ZW5zdG9yZWRfZG9tYWluLmMKaW5kZXggYWE5OTQyZmNjMjY3Li5hMGQx
YTExYzgzN2YgMTAwNjQ0Ci0tLSBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3Jl
ZF9kb21haW4uYworKysgYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfZG9t
YWluLmMKQEAgLTQxLDYgKzQxLDkgQEAgc3RhdGljIGV2dGNobl9wb3J0X3Qg
dmlycV9wb3J0OwogCiB4ZW5ldnRjaG5faGFuZGxlICp4Y2VfaGFuZGxlID0g
TlVMTDsKIAorc3RhdGljIHN0cnVjdCBub2RlX3Blcm1zIGRvbV9yZWxlYXNl
X3Blcm1zOworc3RhdGljIHN0cnVjdCBub2RlX3Blcm1zIGRvbV9pbnRyb2R1
Y2VfcGVybXM7CisKIHN0cnVjdCBkb21haW4KIHsKIAlzdHJ1Y3QgbGlzdF9o
ZWFkIGxpc3Q7CkBAIC01ODIsNiArNTg1LDU5IEBAIHZvaWQgcmVzdG9yZV9l
eGlzdGluZ19jb25uZWN0aW9ucyh2b2lkKQogewogfQogCitzdGF0aWMgaW50
IHNldF9kb21fcGVybXNfZGVmYXVsdChzdHJ1Y3Qgbm9kZV9wZXJtcyAqcGVy
bXMpCit7CisJcGVybXMtPm51bSA9IDE7CisJcGVybXMtPnAgPSB0YWxsb2Nf
YXJyYXkoTlVMTCwgc3RydWN0IHhzX3Blcm1pc3Npb25zLCBwZXJtcy0+bnVt
KTsKKwlpZiAoIXBlcm1zLT5wKQorCQlyZXR1cm4gLTE7CisJcGVybXMtPnAt
PmlkID0gMDsKKwlwZXJtcy0+cC0+cGVybXMgPSBYU19QRVJNX05PTkU7CisK
KwlyZXR1cm4gMDsKK30KKworc3RhdGljIHN0cnVjdCBub2RlX3Blcm1zICpn
ZXRfcGVybXNfc3BlY2lhbChjb25zdCBjaGFyICpuYW1lKQoreworCWlmICgh
c3RyY21wKG5hbWUsICJAcmVsZWFzZURvbWFpbiIpKQorCQlyZXR1cm4gJmRv
bV9yZWxlYXNlX3Blcm1zOworCWlmICghc3RyY21wKG5hbWUsICJAaW50cm9k
dWNlRG9tYWluIikpCisJCXJldHVybiAmZG9tX2ludHJvZHVjZV9wZXJtczsK
KwlyZXR1cm4gTlVMTDsKK30KKworaW50IHNldF9wZXJtc19zcGVjaWFsKHN0
cnVjdCBjb25uZWN0aW9uICpjb25uLCBjb25zdCBjaGFyICpuYW1lLAorCQkg
ICAgICBzdHJ1Y3Qgbm9kZV9wZXJtcyAqcGVybXMpCit7CisJc3RydWN0IG5v
ZGVfcGVybXMgKnA7CisKKwlwID0gZ2V0X3Blcm1zX3NwZWNpYWwobmFtZSk7
CisJaWYgKCFwKQorCQlyZXR1cm4gRUlOVkFMOworCisJaWYgKChwZXJtX2Zv
cl9jb25uKGNvbm4sIHApICYgKFhTX1BFUk1fV1JJVEUgfCBYU19QRVJNX09X
TkVSKSkgIT0KKwkgICAgKFhTX1BFUk1fV1JJVEUgfCBYU19QRVJNX09XTkVS
KSkKKwkJcmV0dXJuIEVBQ0NFUzsKKworCXAtPm51bSA9IHBlcm1zLT5udW07
CisJdGFsbG9jX2ZyZWUocC0+cCk7CisJcC0+cCA9IHBlcm1zLT5wOworCXRh
bGxvY19zdGVhbChOVUxMLCBwZXJtcy0+cCk7CisKKwlyZXR1cm4gMDsKK30K
KworYm9vbCBjaGVja19wZXJtc19zcGVjaWFsKGNvbnN0IGNoYXIgKm5hbWUs
IHN0cnVjdCBjb25uZWN0aW9uICpjb25uKQoreworCXN0cnVjdCBub2RlX3Bl
cm1zICpwOworCisJcCA9IGdldF9wZXJtc19zcGVjaWFsKG5hbWUpOworCWlm
ICghcCkKKwkJcmV0dXJuIGZhbHNlOworCisJcmV0dXJuIHBlcm1fZm9yX2Nv
bm4oY29ubiwgcCkgJiBYU19QRVJNX1JFQUQ7Cit9CisKIHN0YXRpYyBpbnQg
ZG9tMF9pbml0KHZvaWQpIAogeyAKIAlldnRjaG5fcG9ydF90IHBvcnQ7CkBA
IC02MDMsNiArNjU5LDEwIEBAIHN0YXRpYyBpbnQgZG9tMF9pbml0KHZvaWQp
CiAKIAl4ZW5ldnRjaG5fbm90aWZ5KHhjZV9oYW5kbGUsIGRvbTAtPnBvcnQp
OwogCisJaWYgKHNldF9kb21fcGVybXNfZGVmYXVsdCgmZG9tX3JlbGVhc2Vf
cGVybXMpIHx8CisJICAgIHNldF9kb21fcGVybXNfZGVmYXVsdCgmZG9tX2lu
dHJvZHVjZV9wZXJtcykpCisJCXJldHVybiAtMTsKKwogCXJldHVybiAwOyAK
IH0KIApkaWZmIC0tZ2l0IGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2Rv
bWFpbi5oIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2RvbWFpbi5oCmlu
ZGV4IDU2YWUwMTU5NzQ3NS4uMjU5MTgzOTYyYTljIDEwMDY0NAotLS0gYS90
b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfZG9tYWluLmgKKysrIGIvdG9vbHMv
eGVuc3RvcmUveGVuc3RvcmVkX2RvbWFpbi5oCkBAIC02NSw2ICs2NSwxMSBA
QCB2b2lkIGRvbWFpbl93YXRjaF9pbmMoc3RydWN0IGNvbm5lY3Rpb24gKmNv
bm4pOwogdm9pZCBkb21haW5fd2F0Y2hfZGVjKHN0cnVjdCBjb25uZWN0aW9u
ICpjb25uKTsKIGludCBkb21haW5fd2F0Y2goc3RydWN0IGNvbm5lY3Rpb24g
KmNvbm4pOwogCisvKiBTcGVjaWFsIG5vZGUgcGVybWlzc2lvbiBoYW5kbGlu
Zy4gKi8KK2ludCBzZXRfcGVybXNfc3BlY2lhbChzdHJ1Y3QgY29ubmVjdGlv
biAqY29ubiwgY29uc3QgY2hhciAqbmFtZSwKKwkJICAgICAgc3RydWN0IG5v
ZGVfcGVybXMgKnBlcm1zKTsKK2Jvb2wgY2hlY2tfcGVybXNfc3BlY2lhbChj
b25zdCBjaGFyICpuYW1lLCBzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubik7CisK
IC8qIFdyaXRlIHJhdGUgbGltaXRpbmcgKi8KIAogI2RlZmluZSBXUkxfRkFD
VE9SICAgMTAwMCAvKiBmb3IgZml4ZWQtcG9pbnQgYXJpdGhtZXRpYyAqLwpk
aWZmIC0tZ2l0IGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3dhdGNoLmMg
Yi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfd2F0Y2guYwppbmRleCAzODM2
Njc1NDU5ZmEuLmY0ZTI4OTM2MmViNiAxMDA2NDQKLS0tIGEvdG9vbHMveGVu
c3RvcmUveGVuc3RvcmVkX3dhdGNoLmMKKysrIGIvdG9vbHMveGVuc3RvcmUv
eGVuc3RvcmVkX3dhdGNoLmMKQEAgLTEzMyw2ICsxMzMsMTAgQEAgdm9pZCBm
aXJlX3dhdGNoZXMoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIGNvbnN0IHZv
aWQgKmN0eCwgY29uc3QgY2hhciAqbmFtZSwKIAogCS8qIENyZWF0ZSBhbiBl
dmVudCBmb3IgZWFjaCB3YXRjaC4gKi8KIAlsaXN0X2Zvcl9lYWNoX2VudHJ5
KGksICZjb25uZWN0aW9ucywgbGlzdCkgeworCQkvKiBpbnRyb2R1Y2UvcmVs
ZWFzZSBkb21haW4gd2F0Y2hlcyAqLworCQlpZiAoY2hlY2tfc3BlY2lhbF9l
dmVudChuYW1lKSAmJiAhY2hlY2tfcGVybXNfc3BlY2lhbChuYW1lLCBpKSkK
KwkJCWNvbnRpbnVlOworCiAJCWxpc3RfZm9yX2VhY2hfZW50cnkod2F0Y2gs
ICZpLT53YXRjaGVzLCBsaXN0KSB7CiAJCQlpZiAoZXhhY3QpIHsKIAkJCQlp
ZiAoc3RyZXEobmFtZSwgd2F0Y2gtPm5vZGUpKQotLSAKMi4xNy4xCgo=

--=separator
Content-Type: application/octet-stream;
 name="xsa115-4.14-c/0010-tools-xenstore-avoid-watch-events-for-nodes-without-.patch"
Content-Disposition: attachment;
 filename="xsa115-4.14-c/0010-tools-xenstore-avoid-watch-events-for-nodes-without-.patch"
Content-Transfer-Encoding: base64

RnJvbSBmMWNjNDdiMDU3MmIzMzcyNjlhZjdlMzRiZDAxOTU4NGY0YjhjOThl
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IFRodSwgMTEgSnVuIDIwMjAgMTY6
MTI6NDYgKzAyMDAKU3ViamVjdDogW1BBVENIIDEwLzEwXSB0b29scy94ZW5z
dG9yZTogYXZvaWQgd2F0Y2ggZXZlbnRzIGZvciBub2RlcyB3aXRob3V0CiBh
Y2Nlc3MKClRvZGF5IHdhdGNoIGV2ZW50cyBhcmUgc2VudCByZWdhcmRsZXNz
IG9mIHRoZSBhY2Nlc3MgcmlnaHRzIG9mIHRoZQpub2RlIHRoZSBldmVudCBp
cyBzZW50IGZvci4gVGhpcyBlbmFibGVzIGFueSBndWVzdCB0byBlLmcuIHNl
dHVwIGEKd2F0Y2ggZm9yICIvIiBpbiBvcmRlciB0byBoYXZlIGEgZGV0YWls
ZWQgcmVjb3JkIG9mIGFsbCBYZW5zdG9yZQptb2RpZmljYXRpb25zLgoKTW9k
aWZ5IHRoYXQgYnkgc2VuZGluZyBvbmx5IHdhdGNoIGV2ZW50cyBmb3Igbm9k
ZXMgdGhhdCB0aGUgd2F0Y2hlcgpoYXMgYSBjaGFuY2UgdG8gc2VlIG90aGVy
d2lzZSAoZWl0aGVyIHZpYSBkaXJlY3QgcmVhZHMgb3IgYnkgcXVlcnlpbmcK
dGhlIGNoaWxkcmVuIG9mIGEgbm9kZSkuIFRoaXMgaW5jbHVkZXMgY2FzZXMg
d2hlcmUgdGhlIHZpc2liaWxpdHkgb2YKYSBub2RlIGZvciBhIHdhdGNoZXIg
aXMgY2hhbmdpbmcgKHBlcm1pc3Npb25zIGJlaW5nIHJlbW92ZWQpLgoKVGhp
cyBpcyBwYXJ0IG9mIFhTQS0xMTUuCgpTaWduZWQtb2ZmLWJ5OiBKdWVyZ2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+ClJldmlld2VkLWJ5OiBKdWxpZW4g
R3JhbGwgPGpncmFsbEBhbWF6b24uY29tPgpSZXZpZXdlZC1ieTogUGF1bCBE
dXJyYW50IDxwYXVsQHhlbi5vcmc+Ci0tLQogdG9vbHMveGVuc3RvcmUveGVu
c3RvcmVkX2NvcmUuYyAgICAgICAgfCAyOCArKysrKy0tLS0tCiB0b29scy94
ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5oICAgICAgICB8IDE1ICsrKystLQog
dG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2RvbWFpbi5jICAgICAgfCAgNiAr
LS0KIHRvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF90cmFuc2FjdGlvbi5jIHwg
MjEgKysrKysrKy0KIHRvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF93YXRjaC5j
ICAgICAgIHwgNzUgKysrKysrKysrKysrKysrKysrKy0tLS0tLS0KIHRvb2xz
L3hlbnN0b3JlL3hlbnN0b3JlZF93YXRjaC5oICAgICAgIHwgIDIgKy0KIDYg
ZmlsZXMgY2hhbmdlZCwgMTA0IGluc2VydGlvbnMoKyksIDQzIGRlbGV0aW9u
cygtKQoKZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9j
b3JlLmMgYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jCmluZGV4
IDE1ZmZiZWIzMGYxOS4uOTJiZmQ1NGNmZjYyIDEwMDY0NAotLS0gYS90b29s
cy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jCisrKyBiL3Rvb2xzL3hlbnN0
b3JlL3hlbnN0b3JlZF9jb3JlLmMKQEAgLTM2MCw4ICszNjAsOCBAQCBzdGF0
aWMgdm9pZCBpbml0aWFsaXplX2ZkcyhpbnQgKnBfc29ja19wb2xsZmRfaWR4
LCBpbnQgKnBfcm9fc29ja19wb2xsZmRfaWR4LAogICogSWYgaXQgZmFpbHMs
IHJldHVybnMgTlVMTCBhbmQgc2V0cyBlcnJuby4KICAqIFRlbXBvcmFyeSBt
ZW1vcnkgYWxsb2NhdGlvbnMgd2lsbCBiZSBkb25lIHdpdGggY3R4LgogICov
Ci1zdGF0aWMgc3RydWN0IG5vZGUgKnJlYWRfbm9kZShzdHJ1Y3QgY29ubmVj
dGlvbiAqY29ubiwgY29uc3Qgdm9pZCAqY3R4LAotCQkJICAgICAgY29uc3Qg
Y2hhciAqbmFtZSkKK3N0cnVjdCBub2RlICpyZWFkX25vZGUoc3RydWN0IGNv
bm5lY3Rpb24gKmNvbm4sIGNvbnN0IHZvaWQgKmN0eCwKKwkJICAgICAgIGNv
bnN0IGNoYXIgKm5hbWUpCiB7CiAJVERCX0RBVEEga2V5LCBkYXRhOwogCXN0
cnVjdCB4c190ZGJfcmVjb3JkX2hkciAqaGRyOwpAQCAtNDk2LDcgKzQ5Niw3
IEBAIGVudW0geHNfcGVybV90eXBlIHBlcm1fZm9yX2Nvbm4oc3RydWN0IGNv
bm5lY3Rpb24gKmNvbm4sCiAgKiBHZXQgbmFtZSBvZiBub2RlIHBhcmVudC4K
ICAqIFRlbXBvcmFyeSBtZW1vcnkgYWxsb2NhdGlvbnMgYXJlIGRvbmUgd2l0
aCBjdHguCiAgKi8KLXN0YXRpYyBjaGFyICpnZXRfcGFyZW50KGNvbnN0IHZv
aWQgKmN0eCwgY29uc3QgY2hhciAqbm9kZSkKK2NoYXIgKmdldF9wYXJlbnQo
Y29uc3Qgdm9pZCAqY3R4LCBjb25zdCBjaGFyICpub2RlKQogewogCWNoYXIg
KnBhcmVudDsKIAljaGFyICpzbGFzaCA9IHN0cnJjaHIobm9kZSArIDEsICcv
Jyk7CkBAIC01NjgsMTAgKzU2OCwxMCBAQCBzdGF0aWMgaW50IGVycm5vX2Zy
b21fcGFyZW50cyhzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgY29uc3Qgdm9p
ZCAqY3R4LAogICogSWYgaXQgZmFpbHMsIHJldHVybnMgTlVMTCBhbmQgc2V0
cyBlcnJuby4KICAqIFRlbXBvcmFyeSBtZW1vcnkgYWxsb2NhdGlvbnMgYXJl
IGRvbmUgd2l0aCBjdHguCiAgKi8KLXN0cnVjdCBub2RlICpnZXRfbm9kZShz
dHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwKLQkJICAgICAgY29uc3Qgdm9pZCAq
Y3R4LAotCQkgICAgICBjb25zdCBjaGFyICpuYW1lLAotCQkgICAgICBlbnVt
IHhzX3Blcm1fdHlwZSBwZXJtKQorc3RhdGljIHN0cnVjdCBub2RlICpnZXRf
bm9kZShzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwKKwkJCSAgICAgY29uc3Qg
dm9pZCAqY3R4LAorCQkJICAgICBjb25zdCBjaGFyICpuYW1lLAorCQkJICAg
ICBlbnVtIHhzX3Blcm1fdHlwZSBwZXJtKQogewogCXN0cnVjdCBub2RlICpu
b2RlOwogCkBAIC0xMDU4LDcgKzEwNTgsNyBAQCBzdGF0aWMgaW50IGRvX3dy
aXRlKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3QgYnVmZmVyZWRf
ZGF0YSAqaW4pCiAJCQlyZXR1cm4gZXJybm87CiAJfQogCi0JZmlyZV93YXRj
aGVzKGNvbm4sIGluLCBuYW1lLCBmYWxzZSk7CisJZmlyZV93YXRjaGVzKGNv
bm4sIGluLCBuYW1lLCBub2RlLCBmYWxzZSwgTlVMTCk7CiAJc2VuZF9hY2so
Y29ubiwgWFNfV1JJVEUpOwogCiAJcmV0dXJuIDA7CkBAIC0xMDgwLDcgKzEw
ODAsNyBAQCBzdGF0aWMgaW50IGRvX21rZGlyKHN0cnVjdCBjb25uZWN0aW9u
ICpjb25uLCBzdHJ1Y3QgYnVmZmVyZWRfZGF0YSAqaW4pCiAJCW5vZGUgPSBj
cmVhdGVfbm9kZShjb25uLCBpbiwgbmFtZSwgTlVMTCwgMCk7CiAJCWlmICgh
bm9kZSkKIAkJCXJldHVybiBlcnJubzsKLQkJZmlyZV93YXRjaGVzKGNvbm4s
IGluLCBuYW1lLCBmYWxzZSk7CisJCWZpcmVfd2F0Y2hlcyhjb25uLCBpbiwg
bmFtZSwgbm9kZSwgZmFsc2UsIE5VTEwpOwogCX0KIAlzZW5kX2Fjayhjb25u
LCBYU19NS0RJUik7CiAKQEAgLTExNDMsNyArMTE0Myw3IEBAIHN0YXRpYyBp
bnQgZGVsZXRlX25vZGUoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIGNvbnN0
IHZvaWQgKmN0eCwKIAkJdGFsbG9jX2ZyZWUobmFtZSk7CiAJfQogCi0JZmly
ZV93YXRjaGVzKGNvbm4sIGN0eCwgbm9kZS0+bmFtZSwgdHJ1ZSk7CisJZmly
ZV93YXRjaGVzKGNvbm4sIGN0eCwgbm9kZS0+bmFtZSwgbm9kZSwgdHJ1ZSwg
TlVMTCk7CiAJZGVsZXRlX25vZGVfc2luZ2xlKGNvbm4sIG5vZGUpOwogCWRl
bGV0ZV9jaGlsZChjb25uLCBwYXJlbnQsIGJhc2VuYW1lKG5vZGUtPm5hbWUp
KTsKIAl0YWxsb2NfZnJlZShub2RlKTsKQEAgLTExNjcsMTMgKzExNjcsMTQg
QEAgc3RhdGljIGludCBfcm0oc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIGNv
bnN0IHZvaWQgKmN0eCwgc3RydWN0IG5vZGUgKm5vZGUsCiAJcGFyZW50ID0g
cmVhZF9ub2RlKGNvbm4sIGN0eCwgcGFyZW50bmFtZSk7CiAJaWYgKCFwYXJl
bnQpCiAJCXJldHVybiAoZXJybm8gPT0gRU5PTUVNKSA/IEVOT01FTSA6IEVJ
TlZBTDsKKwlub2RlLT5wYXJlbnQgPSBwYXJlbnQ7CiAKIAkvKgogCSAqIEZp
cmUgdGhlIHdhdGNoZXMgbm93LCB3aGVuIHdlIGNhbiBzdGlsbCBzZWUgdGhl
IG5vZGUgcGVybWlzc2lvbnMuCiAJICogVGhpcyBmaW5lIGFzIHdlIGFyZSBz
aW5nbGUgdGhyZWFkZWQgYW5kIHRoZSBuZXh0IHBvc3NpYmxlIHJlYWQgd2ls
bAogCSAqIGJlIGhhbmRsZWQgb25seSBhZnRlciB0aGUgbm9kZSBoYXMgYmVl
biByZWFsbHkgcmVtb3ZlZC4KIAkgKi8KLQlmaXJlX3dhdGNoZXMoY29ubiwg
Y3R4LCBuYW1lLCBmYWxzZSk7CisJZmlyZV93YXRjaGVzKGNvbm4sIGN0eCwg
bmFtZSwgbm9kZSwgZmFsc2UsIE5VTEwpOwogCXJldHVybiBkZWxldGVfbm9k
ZShjb25uLCBjdHgsIHBhcmVudCwgbm9kZSk7CiB9CiAKQEAgLTEyMzksNyAr
MTI0MCw3IEBAIHN0YXRpYyBpbnQgZG9fZ2V0X3Blcm1zKHN0cnVjdCBjb25u
ZWN0aW9uICpjb25uLCBzdHJ1Y3QgYnVmZmVyZWRfZGF0YSAqaW4pCiAKIHN0
YXRpYyBpbnQgZG9fc2V0X3Blcm1zKHN0cnVjdCBjb25uZWN0aW9uICpjb25u
LCBzdHJ1Y3QgYnVmZmVyZWRfZGF0YSAqaW4pCiB7Ci0Jc3RydWN0IG5vZGVf
cGVybXMgcGVybXM7CisJc3RydWN0IG5vZGVfcGVybXMgcGVybXMsIG9sZF9w
ZXJtczsKIAljaGFyICpuYW1lLCAqcGVybXN0cjsKIAlzdHJ1Y3Qgbm9kZSAq
bm9kZTsKIApAQCAtMTI3NSw2ICsxMjc2LDcgQEAgc3RhdGljIGludCBkb19z
ZXRfcGVybXMoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBidWZm
ZXJlZF9kYXRhICppbikKIAkgICAgcGVybXMucFswXS5pZCAhPSBub2RlLT5w
ZXJtcy5wWzBdLmlkKQogCQlyZXR1cm4gRVBFUk07CiAKKwlvbGRfcGVybXMg
PSBub2RlLT5wZXJtczsKIAlkb21haW5fZW50cnlfZGVjKGNvbm4sIG5vZGUp
OwogCW5vZGUtPnBlcm1zID0gcGVybXM7CiAJZG9tYWluX2VudHJ5X2luYyhj
b25uLCBub2RlKTsKQEAgLTEyODIsNyArMTI4NCw3IEBAIHN0YXRpYyBpbnQg
ZG9fc2V0X3Blcm1zKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3Qg
YnVmZmVyZWRfZGF0YSAqaW4pCiAJaWYgKHdyaXRlX25vZGUoY29ubiwgbm9k
ZSwgZmFsc2UpKQogCQlyZXR1cm4gZXJybm87CiAKLQlmaXJlX3dhdGNoZXMo
Y29ubiwgaW4sIG5hbWUsIGZhbHNlKTsKKwlmaXJlX3dhdGNoZXMoY29ubiwg
aW4sIG5hbWUsIG5vZGUsIGZhbHNlLCAmb2xkX3Blcm1zKTsKIAlzZW5kX2Fj
ayhjb25uLCBYU19TRVRfUEVSTVMpOwogCiAJcmV0dXJuIDA7CmRpZmYgLS1n
aXQgYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5oIGIvdG9vbHMv
eGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuaAppbmRleCA1M2YxMDUwODU5ZmMu
LmViMTliNzFmNWY0NiAxMDA2NDQKLS0tIGEvdG9vbHMveGVuc3RvcmUveGVu
c3RvcmVkX2NvcmUuaAorKysgYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRf
Y29yZS5oCkBAIC0xNTIsMTUgKzE1MiwxNyBAQCB2b2lkIHNlbmRfYWNrKHN0
cnVjdCBjb25uZWN0aW9uICpjb25uLCBlbnVtIHhzZF9zb2NrbXNnX3R5cGUg
dHlwZSk7CiAvKiBDYW5vbmljYWxpemUgdGhpcyBwYXRoIGlmIHBvc3NpYmxl
LiAqLwogY2hhciAqY2Fub25pY2FsaXplKHN0cnVjdCBjb25uZWN0aW9uICpj
b25uLCBjb25zdCB2b2lkICpjdHgsIGNvbnN0IGNoYXIgKm5vZGUpOwogCisv
KiBHZXQgYWNjZXNzIHBlcm1pc3Npb25zLiAqLworZW51bSB4c19wZXJtX3R5
cGUgcGVybV9mb3JfY29ubihzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwKKwkJ
CQljb25zdCBzdHJ1Y3Qgbm9kZV9wZXJtcyAqcGVybXMpOworCiAvKiBXcml0
ZSBhIG5vZGUgdG8gdGhlIHRkYiBkYXRhIGJhc2UuICovCiBpbnQgd3JpdGVf
bm9kZV9yYXcoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIFREQl9EQVRBICpr
ZXksIHN0cnVjdCBub2RlICpub2RlLAogCQkgICBib29sIG5vX3F1b3RhX2No
ZWNrKTsKIAotLyogR2V0IHRoaXMgbm9kZSwgY2hlY2tpbmcgd2UgaGF2ZSBw
ZXJtaXNzaW9ucy4gKi8KLXN0cnVjdCBub2RlICpnZXRfbm9kZShzdHJ1Y3Qg
Y29ubmVjdGlvbiAqY29ubiwKLQkJICAgICAgY29uc3Qgdm9pZCAqY3R4LAot
CQkgICAgICBjb25zdCBjaGFyICpuYW1lLAotCQkgICAgICBlbnVtIHhzX3Bl
cm1fdHlwZSBwZXJtKTsKKy8qIEdldCBhIG5vZGUgZnJvbSB0aGUgdGRiIGRh
dGEgYmFzZS4gKi8KK3N0cnVjdCBub2RlICpyZWFkX25vZGUoc3RydWN0IGNv
bm5lY3Rpb24gKmNvbm4sIGNvbnN0IHZvaWQgKmN0eCwKKwkJICAgICAgIGNv
bnN0IGNoYXIgKm5hbWUpOwogCiBzdHJ1Y3QgY29ubmVjdGlvbiAqbmV3X2Nv
bm5lY3Rpb24oY29ubndyaXRlZm5fdCAqd3JpdGUsIGNvbm5yZWFkZm5fdCAq
cmVhZCk7CiB2b2lkIGNoZWNrX3N0b3JlKHZvaWQpOwpAQCAtMTcxLDYgKzE3
Myw5IEBAIGVudW0geHNfcGVybV90eXBlIHBlcm1fZm9yX2Nvbm4oc3RydWN0
IGNvbm5lY3Rpb24gKmNvbm4sCiAvKiBJcyB0aGlzIGEgdmFsaWQgbm9kZSBu
YW1lPyAqLwogYm9vbCBpc192YWxpZF9ub2RlbmFtZShjb25zdCBjaGFyICpu
b2RlKTsKIAorLyogR2V0IG5hbWUgb2YgcGFyZW50IG5vZGUuICovCitjaGFy
ICpnZXRfcGFyZW50KGNvbnN0IHZvaWQgKmN0eCwgY29uc3QgY2hhciAqbm9k
ZSk7CisKIC8qIFRyYWNpbmcgaW5mcmFzdHJ1Y3R1cmUuICovCiB2b2lkIHRy
YWNlX2NyZWF0ZShjb25zdCB2b2lkICpkYXRhLCBjb25zdCBjaGFyICp0eXBl
KTsKIHZvaWQgdHJhY2VfZGVzdHJveShjb25zdCB2b2lkICpkYXRhLCBjb25z
dCBjaGFyICp0eXBlKTsKZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3JlL3hl
bnN0b3JlZF9kb21haW4uYyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9k
b21haW4uYwppbmRleCBhMGQxYTExYzgzN2YuLjlmYWQ0NzBmODMzMSAxMDA2
NDQKLS0tIGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2RvbWFpbi5jCisr
KyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9kb21haW4uYwpAQCAtMjAy
LDcgKzIwMiw3IEBAIHN0YXRpYyBpbnQgZGVzdHJveV9kb21haW4odm9pZCAq
X2RvbWFpbikKIAkJCXVubWFwX2ludGVyZmFjZShkb21haW4tPmludGVyZmFj
ZSk7CiAJfQogCi0JZmlyZV93YXRjaGVzKE5VTEwsIGRvbWFpbiwgIkByZWxl
YXNlRG9tYWluIiwgZmFsc2UpOworCWZpcmVfd2F0Y2hlcyhOVUxMLCBkb21h
aW4sICJAcmVsZWFzZURvbWFpbiIsIE5VTEwsIGZhbHNlLCBOVUxMKTsKIAog
CXdybF9kb21haW5fZGVzdHJveShkb21haW4pOwogCkBAIC0yNDAsNyArMjQw
LDcgQEAgc3RhdGljIHZvaWQgZG9tYWluX2NsZWFudXAodm9pZCkKIAl9CiAK
IAlpZiAobm90aWZ5KQotCQlmaXJlX3dhdGNoZXMoTlVMTCwgTlVMTCwgIkBy
ZWxlYXNlRG9tYWluIiwgZmFsc2UpOworCQlmaXJlX3dhdGNoZXMoTlVMTCwg
TlVMTCwgIkByZWxlYXNlRG9tYWluIiwgTlVMTCwgZmFsc2UsIE5VTEwpOwog
fQogCiAvKiBXZSBzY2FuIGFsbCBkb21haW5zIHJhdGhlciB0aGFuIHVzZSB0
aGUgaW5mb3JtYXRpb24gZ2l2ZW4gaGVyZS4gKi8KQEAgLTQwNCw3ICs0MDQs
NyBAQCBpbnQgZG9faW50cm9kdWNlKHN0cnVjdCBjb25uZWN0aW9uICpjb25u
LCBzdHJ1Y3QgYnVmZmVyZWRfZGF0YSAqaW4pCiAJCS8qIE5vdyBkb21haW4g
YmVsb25ncyB0byBpdHMgY29ubmVjdGlvbi4gKi8KIAkJdGFsbG9jX3N0ZWFs
KGRvbWFpbi0+Y29ubiwgZG9tYWluKTsKIAotCQlmaXJlX3dhdGNoZXMoTlVM
TCwgaW4sICJAaW50cm9kdWNlRG9tYWluIiwgZmFsc2UpOworCQlmaXJlX3dh
dGNoZXMoTlVMTCwgaW4sICJAaW50cm9kdWNlRG9tYWluIiwgTlVMTCwgZmFs
c2UsIE5VTEwpOwogCX0gZWxzZSB7CiAJCS8qIFVzZSBYU19JTlRST0RVQ0Ug
Zm9yIHJlY3JlYXRpbmcgdGhlIHhlbmJ1cyBldmVudC1jaGFubmVsLiAqLwog
CQlpZiAoZG9tYWluLT5wb3J0KQpkaWZmIC0tZ2l0IGEvdG9vbHMveGVuc3Rv
cmUveGVuc3RvcmVkX3RyYW5zYWN0aW9uLmMgYi90b29scy94ZW5zdG9yZS94
ZW5zdG9yZWRfdHJhbnNhY3Rpb24uYwppbmRleCBlODc4OTc1NzM0NjkuLmE3
ZDhjNWQ0NzVlYyAxMDA2NDQKLS0tIGEvdG9vbHMveGVuc3RvcmUveGVuc3Rv
cmVkX3RyYW5zYWN0aW9uLmMKKysrIGIvdG9vbHMveGVuc3RvcmUveGVuc3Rv
cmVkX3RyYW5zYWN0aW9uLmMKQEAgLTExNCw2ICsxMTQsOSBAQCBzdHJ1Y3Qg
YWNjZXNzZWRfbm9kZQogCS8qIEdlbmVyYXRpb24gY291bnQgKG9yIE5PX0dF
TkVSQVRJT04pIGZvciBjb25mbGljdCBjaGVja2luZy4gKi8KIAl1aW50NjRf
dCBnZW5lcmF0aW9uOwogCisJLyogT3JpZ2luYWwgbm9kZSBwZXJtaXNzaW9u
cy4gKi8KKwlzdHJ1Y3Qgbm9kZV9wZXJtcyBwZXJtczsKKwogCS8qIEdlbmVy
YXRpb24gY291bnQgY2hlY2tpbmcgcmVxdWlyZWQ/ICovCiAJYm9vbCBjaGVj
a19nZW47CiAKQEAgLTI2MCw2ICsyNjMsMTUgQEAgaW50IGFjY2Vzc19ub2Rl
KHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3Qgbm9kZSAqbm9kZSwK
IAkJaS0+bm9kZSA9IHRhbGxvY19zdHJkdXAoaSwgbm9kZS0+bmFtZSk7CiAJ
CWlmICghaS0+bm9kZSkKIAkJCWdvdG8gbm9tZW07CisJCWlmIChub2RlLT5n
ZW5lcmF0aW9uICE9IE5PX0dFTkVSQVRJT04gJiYgbm9kZS0+cGVybXMubnVt
KSB7CisJCQlpLT5wZXJtcy5wID0gdGFsbG9jX2FycmF5KGksIHN0cnVjdCB4
c19wZXJtaXNzaW9ucywKKwkJCQkJCSAgbm9kZS0+cGVybXMubnVtKTsKKwkJ
CWlmICghaS0+cGVybXMucCkKKwkJCQlnb3RvIG5vbWVtOworCQkJaS0+cGVy
bXMubnVtID0gbm9kZS0+cGVybXMubnVtOworCQkJbWVtY3B5KGktPnBlcm1z
LnAsIG5vZGUtPnBlcm1zLnAsCisJCQkgICAgICAgaS0+cGVybXMubnVtICog
c2l6ZW9mKCppLT5wZXJtcy5wKSk7CisJCX0KIAogCQlpbnRyb2R1Y2UgPSB0
cnVlOwogCQlpLT50YV9ub2RlID0gZmFsc2U7CkBAIC0zNjgsOSArMzgwLDE0
IEBAIHN0YXRpYyBpbnQgZmluYWxpemVfdHJhbnNhY3Rpb24oc3RydWN0IGNv
bm5lY3Rpb24gKmNvbm4sCiAJCQkJdGFsbG9jX2ZyZWUoZGF0YS5kcHRyKTsK
IAkJCQlpZiAocmV0KQogCQkJCQlnb3RvIGVycjsKLQkJCX0gZWxzZSBpZiAo
dGRiX2RlbGV0ZSh0ZGJfY3R4LCBrZXkpKQorCQkJCWZpcmVfd2F0Y2hlcyhj
b25uLCB0cmFucywgaS0+bm9kZSwgTlVMTCwgZmFsc2UsCisJCQkJCSAgICAg
aS0+cGVybXMucCA/ICZpLT5wZXJtcyA6IE5VTEwpOworCQkJfSBlbHNlIHsK
KwkJCQlmaXJlX3dhdGNoZXMoY29ubiwgdHJhbnMsIGktPm5vZGUsIE5VTEws
IGZhbHNlLAorCQkJCQkgICAgIGktPnBlcm1zLnAgPyAmaS0+cGVybXMgOiBO
VUxMKTsKKwkJCQlpZiAodGRiX2RlbGV0ZSh0ZGJfY3R4LCBrZXkpKQogCQkJ
CQlnb3RvIGVycjsKLQkJCWZpcmVfd2F0Y2hlcyhjb25uLCB0cmFucywgaS0+
bm9kZSwgZmFsc2UpOworCQkJfQogCQl9CiAKIAkJaWYgKGktPnRhX25vZGUg
JiYgdGRiX2RlbGV0ZSh0ZGJfY3R4LCB0YV9rZXkpKQpkaWZmIC0tZ2l0IGEv
dG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3dhdGNoLmMgYi90b29scy94ZW5z
dG9yZS94ZW5zdG9yZWRfd2F0Y2guYwppbmRleCBmNGUyODkzNjJlYjYuLjcx
YzEwOGVhOTlmMSAxMDA2NDQKLS0tIGEvdG9vbHMveGVuc3RvcmUveGVuc3Rv
cmVkX3dhdGNoLmMKKysrIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3dh
dGNoLmMKQEAgLTg1LDIyICs4NSw2IEBAIHN0YXRpYyB2b2lkIGFkZF9ldmVu
dChzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwKIAl1bnNpZ25lZCBpbnQgbGVu
OwogCWNoYXIgKmRhdGE7CiAKLQlpZiAoIWNoZWNrX3NwZWNpYWxfZXZlbnQo
bmFtZSkpIHsKLQkJLyogQ2FuIHRoaXMgY29ubiBsb2FkIG5vZGUsIG9yIHNl
ZSB0aGF0IGl0IGRvZXNuJ3QgZXhpc3Q/ICovCi0JCXN0cnVjdCBub2RlICpu
b2RlID0gZ2V0X25vZGUoY29ubiwgY3R4LCBuYW1lLCBYU19QRVJNX1JFQUQp
OwotCQkvKgotCQkgKiBYWFggV2UgYWxsb3cgRUFDQ0VTIGhlcmUgYmVjYXVz
ZSBvdGhlcndpc2UgYSBub24tZG9tMAotCQkgKiBiYWNrZW5kIGRyaXZlciBj
YW5ub3Qgd2F0Y2ggZm9yIGRpc2FwcGVhcmFuY2Ugb2YgYSBmcm9udGVuZAot
CQkgKiB4ZW5zdG9yZSBkaXJlY3RvcnkuIFdoZW4gdGhlIGRpcmVjdG9yeSBk
aXNhcHBlYXJzLCB3ZQotCQkgKiByZXZlcnQgdG8gcGVybWlzc2lvbnMgb2Yg
dGhlIHBhcmVudCBkaXJlY3RvcnkgZm9yIHRoYXQgcGF0aCwKLQkJICogd2hp
Y2ggd2lsbCB0eXBpY2FsbHkgZGlzYWxsb3cgYWNjZXNzIGZvciB0aGUgYmFj
a2VuZC4KLQkJICogQnV0IHRoaXMgYnJlYWtzIGRldmljZS1jaGFubmVsIHRl
YXJkb3duIQotCQkgKiBSZWFsbHkgd2Ugc2hvdWxkIGZpeCB0aGlzIGJldHRl
ci4uLgotCQkgKi8KLQkJaWYgKCFub2RlICYmIGVycm5vICE9IEVOT0VOVCAm
JiBlcnJubyAhPSBFQUNDRVMpCi0JCQlyZXR1cm47Ci0JfQotCiAJaWYgKHdh
dGNoLT5yZWxhdGl2ZV9wYXRoKSB7CiAJCW5hbWUgKz0gc3RybGVuKHdhdGNo
LT5yZWxhdGl2ZV9wYXRoKTsKIAkJaWYgKCpuYW1lID09ICcvJykgLyogQ291
bGQgYmUgIiIgKi8KQEAgLTExNywxMiArMTAxLDYwIEBAIHN0YXRpYyB2b2lk
IGFkZF9ldmVudChzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwKIAl0YWxsb2Nf
ZnJlZShkYXRhKTsKIH0KIAorLyoKKyAqIENoZWNrIHBlcm1pc3Npb25zIG9m
IGEgc3BlY2lmaWMgd2F0Y2ggdG8gZmlyZToKKyAqIEVpdGhlciB0aGUgbm9k
ZSBpdHNlbGYgb3IgaXRzIHBhcmVudCBoYXZlIHRvIGJlIHJlYWRhYmxlIGJ5
IHRoZSBjb25uZWN0aW9uCisgKiB0aGUgd2F0Y2ggaGFzIGJlZW4gc2V0dXAg
Zm9yLiBJbiBjYXNlIGEgd2F0Y2ggZXZlbnQgaXMgY3JlYXRlZCBkdWUgdG8K
KyAqIGNoYW5nZWQgcGVybWlzc2lvbnMgd2UgbmVlZCB0byB0YWtlIHRoZSBv
bGQgcGVybWlzc2lvbnMgaW50byBhY2NvdW50LCB0b28uCisgKi8KK3N0YXRp
YyBib29sIHdhdGNoX3Blcm1pdHRlZChzdHJ1Y3QgY29ubmVjdGlvbiAqY29u
biwgY29uc3Qgdm9pZCAqY3R4LAorCQkJICAgIGNvbnN0IGNoYXIgKm5hbWUs
IHN0cnVjdCBub2RlICpub2RlLAorCQkJICAgIHN0cnVjdCBub2RlX3Blcm1z
ICpwZXJtcykKK3sKKwllbnVtIHhzX3Blcm1fdHlwZSBwZXJtOworCXN0cnVj
dCBub2RlICpwYXJlbnQ7CisJY2hhciAqcGFyZW50X25hbWU7CisKKwlpZiAo
cGVybXMpIHsKKwkJcGVybSA9IHBlcm1fZm9yX2Nvbm4oY29ubiwgcGVybXMp
OworCQlpZiAocGVybSAmIFhTX1BFUk1fUkVBRCkKKwkJCXJldHVybiB0cnVl
OworCX0KKworCWlmICghbm9kZSkgeworCQlub2RlID0gcmVhZF9ub2RlKGNv
bm4sIGN0eCwgbmFtZSk7CisJCWlmICghbm9kZSkKKwkJCXJldHVybiBmYWxz
ZTsKKwl9CisKKwlwZXJtID0gcGVybV9mb3JfY29ubihjb25uLCAmbm9kZS0+
cGVybXMpOworCWlmIChwZXJtICYgWFNfUEVSTV9SRUFEKQorCQlyZXR1cm4g
dHJ1ZTsKKworCXBhcmVudCA9IG5vZGUtPnBhcmVudDsKKwlpZiAoIXBhcmVu
dCkgeworCQlwYXJlbnRfbmFtZSA9IGdldF9wYXJlbnQoY3R4LCBub2RlLT5u
YW1lKTsKKwkJaWYgKCFwYXJlbnRfbmFtZSkKKwkJCXJldHVybiBmYWxzZTsK
KwkJcGFyZW50ID0gcmVhZF9ub2RlKGNvbm4sIGN0eCwgcGFyZW50X25hbWUp
OworCQlpZiAoIXBhcmVudCkKKwkJCXJldHVybiBmYWxzZTsKKwl9CisKKwlw
ZXJtID0gcGVybV9mb3JfY29ubihjb25uLCAmcGFyZW50LT5wZXJtcyk7CisK
KwlyZXR1cm4gcGVybSAmIFhTX1BFUk1fUkVBRDsKK30KKwogLyoKICAqIENo
ZWNrIHdoZXRoZXIgYW55IHdhdGNoIGV2ZW50cyBhcmUgdG8gYmUgc2VudC4K
ICAqIFRlbXBvcmFyeSBtZW1vcnkgYWxsb2NhdGlvbnMgYXJlIGRvbmUgd2l0
aCBjdHguCisgKiBXZSBuZWVkIHRvIHRha2UgdGhlIChwb3RlbnRpYWwpIG9s
ZCBwZXJtaXNzaW9ucyBvZiB0aGUgbm9kZSBpbnRvIGFjY291bnQKKyAqIGFz
IGEgd2F0Y2hlciBsb3NpbmcgcGVybWlzc2lvbnMgdG8gYWNjZXNzIGEgbm9k
ZSBzaG91bGQgcmVjZWl2ZSB0aGUKKyAqIHdhdGNoIGV2ZW50LCB0b28uCiAg
Ki8KIHZvaWQgZmlyZV93YXRjaGVzKHN0cnVjdCBjb25uZWN0aW9uICpjb25u
LCBjb25zdCB2b2lkICpjdHgsIGNvbnN0IGNoYXIgKm5hbWUsCi0JCSAgYm9v
bCBleGFjdCkKKwkJICBzdHJ1Y3Qgbm9kZSAqbm9kZSwgYm9vbCBleGFjdCwg
c3RydWN0IG5vZGVfcGVybXMgKnBlcm1zKQogewogCXN0cnVjdCBjb25uZWN0
aW9uICppOwogCXN0cnVjdCB3YXRjaCAqd2F0Y2g7CkBAIC0xMzQsOCArMTY2
LDEzIEBAIHZvaWQgZmlyZV93YXRjaGVzKHN0cnVjdCBjb25uZWN0aW9uICpj
b25uLCBjb25zdCB2b2lkICpjdHgsIGNvbnN0IGNoYXIgKm5hbWUsCiAJLyog
Q3JlYXRlIGFuIGV2ZW50IGZvciBlYWNoIHdhdGNoLiAqLwogCWxpc3RfZm9y
X2VhY2hfZW50cnkoaSwgJmNvbm5lY3Rpb25zLCBsaXN0KSB7CiAJCS8qIGlu
dHJvZHVjZS9yZWxlYXNlIGRvbWFpbiB3YXRjaGVzICovCi0JCWlmIChjaGVj
a19zcGVjaWFsX2V2ZW50KG5hbWUpICYmICFjaGVja19wZXJtc19zcGVjaWFs
KG5hbWUsIGkpKQotCQkJY29udGludWU7CisJCWlmIChjaGVja19zcGVjaWFs
X2V2ZW50KG5hbWUpKSB7CisJCQlpZiAoIWNoZWNrX3Blcm1zX3NwZWNpYWwo
bmFtZSwgaSkpCisJCQkJY29udGludWU7CisJCX0gZWxzZSB7CisJCQlpZiAo
IXdhdGNoX3Blcm1pdHRlZChpLCBjdHgsIG5hbWUsIG5vZGUsIHBlcm1zKSkK
KwkJCQljb250aW51ZTsKKwkJfQogCiAJCWxpc3RfZm9yX2VhY2hfZW50cnko
d2F0Y2gsICZpLT53YXRjaGVzLCBsaXN0KSB7CiAJCQlpZiAoZXhhY3QpIHsK
ZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF93YXRjaC5o
IGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3dhdGNoLmgKaW5kZXggMWIz
YzgwZDNkZGExLi4wMzA5NDM3NGYzNzkgMTAwNjQ0Ci0tLSBhL3Rvb2xzL3hl
bnN0b3JlL3hlbnN0b3JlZF93YXRjaC5oCisrKyBiL3Rvb2xzL3hlbnN0b3Jl
L3hlbnN0b3JlZF93YXRjaC5oCkBAIC0yNiw3ICsyNiw3IEBAIGludCBkb191
bndhdGNoKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3QgYnVmZmVy
ZWRfZGF0YSAqaW4pOwogCiAvKiBGaXJlIGFsbCB3YXRjaGVzOiAhZXhhY3Qg
bWVhbnMgYWxsIHRoZSBjaGlsZHJlbiBhcmUgYWZmZWN0ZWQgKGllLiBybSku
ICovCiB2b2lkIGZpcmVfd2F0Y2hlcyhzdHJ1Y3QgY29ubmVjdGlvbiAqY29u
biwgY29uc3Qgdm9pZCAqdG1wLCBjb25zdCBjaGFyICpuYW1lLAotCQkgIGJv
b2wgZXhhY3QpOworCQkgIHN0cnVjdCBub2RlICpub2RlLCBib29sIGV4YWN0
LCBzdHJ1Y3Qgbm9kZV9wZXJtcyAqcGVybXMpOwogCiB2b2lkIGNvbm5fZGVs
ZXRlX2FsbF93YXRjaGVzKHN0cnVjdCBjb25uZWN0aW9uICpjb25uKTsKIAot
LSAKMi4xNy4xCgo=

--=separator
Content-Type: application/octet-stream;
 name="xsa115-c/0001-tools-xenstore-allow-removing-child-of-a-node-exceed.patch"
Content-Disposition: attachment;
 filename="xsa115-c/0001-tools-xenstore-allow-removing-child-of-a-node-exceed.patch"
Content-Transfer-Encoding: base64

RnJvbTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpTdWJqZWN0
OiB0b29scy94ZW5zdG9yZTogYWxsb3cgcmVtb3ZpbmcgY2hpbGQgb2YgYSBu
b2RlIGV4Y2VlZGluZyBxdW90YQoKQW4gdW5wcml2aWxlZ2VkIHVzZXIgb2Yg
WGVuc3RvcmUgaXMgbm90IGFsbG93ZWQgdG8gd3JpdGUgbm9kZXMgd2l0aCBh
CnNpemUgZXhjZWVkaW5nIGEgZ2xvYmFsIHF1b3RhLCB3aGlsZSBwcml2aWxl
Z2VkIHVzZXJzIGxpa2UgZG9tMCBhcmUKYWxsb3dlZCB0byB3cml0ZSBzdWNo
IG5vZGVzLiBUaGUgc2l6ZSBvZiBhIG5vZGUgaXMgdGhlIG5lZWRlZCBzcGFj
ZQp0byBzdG9yZSBhbGwgbm9kZSBzcGVjaWZpYyBkYXRhLCB0aGlzIGluY2x1
ZGVzIHRoZSBuYW1lcyBvZiBhbGwKY2hpbGRyZW4gb2YgdGhlIG5vZGUuCgpX
aGVuIGRlbGV0aW5nIGEgbm9kZSBpdHMgcGFyZW50IGhhcyB0byBiZSBtb2Rp
ZmllZCBieSByZW1vdmluZyB0aGUKbmFtZSBvZiB0aGUgdG8gYmUgZGVsZXRl
ZCBjaGlsZCBmcm9tIGl0LgoKVGhpcyByZXN1bHRzIGluIHRoZSBzdHJhbmdl
IHNpdHVhdGlvbiB0aGF0IGFuIHVucHJpdmlsZWdlZCBvd25lciBvZiBhCm5v
ZGUgbWlnaHQgbm90IHN1Y2NlZWQgaW4gZGVsZXRpbmcgdGhhdCBub2RlIGlu
IGNhc2UgaXRzIHBhcmVudCBpcwpleGNlZWRpbmcgdGhlIHF1b3RhIG9mIHRo
YXQgdW5wcml2aWxlZ2VkIHVzZXIgKGl0IG1pZ2h0IGhhdmUgYmVlbgp3cml0
dGVuIGJ5IGRvbTApLCBhcyB0aGUgdXNlciBpcyBub3QgYWxsb3dlZCB0byB3
cml0ZSB0aGUgdXBkYXRlZApwYXJlbnQgbm9kZS4KCkZpeCB0aGF0IGJ5IG5v
dCBjaGVja2luZyB0aGUgcXVvdGEgd2hlbiB3cml0aW5nIGEgbm9kZSBmb3Ig
dGhlCnB1cnBvc2Ugb2YgcmVtb3ZpbmcgYSBjaGlsZCdzIG5hbWUgb25seS4K
ClRoZSBzYW1lIGFwcGxpZXMgdG8gdHJhbnNhY3Rpb24gaGFuZGxpbmc6IGEg
bm9kZSBiZWluZyByZWFkIGR1cmluZyBhCnRyYW5zYWN0aW9uIGlzIHdyaXR0
ZW4gdG8gdGhlIHRyYW5zYWN0aW9uIHNwZWNpZmljIGFyZWEgYW5kIGl0IHNo
b3VsZApub3QgYmUgdGVzdGVkIGZvciBleGNlZWRpbmcgdGhlIHF1b3RhLCBh
cyBpdCBtaWdodCBub3QgYmUgb3duZWQgYnkKdGhlIHJlYWRlciBhbmQgcHJl
c3VtYWJseSB0aGUgb3JpZ2luYWwgd3JpdGUgd291bGQgaGF2ZSBmYWlsZWQg
aWYgdGhlCm5vZGUgaXMgb3duZWQgYnkgdGhlIHJlYWRlci4KClRoaXMgaXMg
cGFydCBvZiBYU0EtMTE1LgoKU2lnbmVkLW9mZi1ieTogSnVlcmdlbiBHcm9z
cyA8amdyb3NzQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogSnVsaWVuIEdyYWxs
IDxqZ3JhbGxAYW1hem9uLmNvbT4KUmV2aWV3ZWQtYnk6IFBhdWwgRHVycmFu
dCA8cGF1bEB4ZW4ub3JnPgoKZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3Jl
L3hlbnN0b3JlZF9jb3JlLmMgYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRf
Y29yZS5jCmluZGV4IGI0YmUzNzRkM2YuLjdiZjExMjNkYTMgMTAwNjQ0Ci0t
LSBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMKKysrIGIvdG9v
bHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYwpAQCAtNDEzLDcgKzQxMyw4
IEBAIHN0YXRpYyBzdHJ1Y3Qgbm9kZSAqcmVhZF9ub2RlKHN0cnVjdCBjb25u
ZWN0aW9uICpjb25uLCBjb25zdCB2b2lkICpjdHgsCiAJcmV0dXJuIG5vZGU7
CiB9CiAKLWludCB3cml0ZV9ub2RlX3JhdyhzdHJ1Y3QgY29ubmVjdGlvbiAq
Y29ubiwgVERCX0RBVEEgKmtleSwgc3RydWN0IG5vZGUgKm5vZGUpCitpbnQg
d3JpdGVfbm9kZV9yYXcoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIFREQl9E
QVRBICprZXksIHN0cnVjdCBub2RlICpub2RlLAorCQkgICBib29sIG5vX3F1
b3RhX2NoZWNrKQogewogCVREQl9EQVRBIGRhdGE7CiAJdm9pZCAqcDsKQEAg
LTQyMyw3ICs0MjQsNyBAQCBpbnQgd3JpdGVfbm9kZV9yYXcoc3RydWN0IGNv
bm5lY3Rpb24gKmNvbm4sIFREQl9EQVRBICprZXksIHN0cnVjdCBub2RlICpu
b2RlKQogCQkrIG5vZGUtPm51bV9wZXJtcypzaXplb2Yobm9kZS0+cGVybXNb
MF0pCiAJCSsgbm9kZS0+ZGF0YWxlbiArIG5vZGUtPmNoaWxkbGVuOwogCi0J
aWYgKGRvbWFpbl9pc191bnByaXZpbGVnZWQoY29ubikgJiYKKwlpZiAoIW5v
X3F1b3RhX2NoZWNrICYmIGRvbWFpbl9pc191bnByaXZpbGVnZWQoY29ubikg
JiYKIAkgICAgZGF0YS5kc2l6ZSA+PSBxdW90YV9tYXhfZW50cnlfc2l6ZSkg
ewogCQllcnJubyA9IEVOT1NQQzsKIAkJcmV0dXJuIGVycm5vOwpAQCAtNDUx
LDE0ICs0NTIsMTUgQEAgaW50IHdyaXRlX25vZGVfcmF3KHN0cnVjdCBjb25u
ZWN0aW9uICpjb25uLCBUREJfREFUQSAqa2V5LCBzdHJ1Y3Qgbm9kZSAqbm9k
ZSkKIAlyZXR1cm4gMDsKIH0KIAotc3RhdGljIGludCB3cml0ZV9ub2RlKHN0
cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3Qgbm9kZSAqbm9kZSkKK3N0
YXRpYyBpbnQgd3JpdGVfbm9kZShzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwg
c3RydWN0IG5vZGUgKm5vZGUsCisJCSAgICAgIGJvb2wgbm9fcXVvdGFfY2hl
Y2spCiB7CiAJVERCX0RBVEEga2V5OwogCiAJaWYgKGFjY2Vzc19ub2RlKGNv
bm4sIG5vZGUsIE5PREVfQUNDRVNTX1dSSVRFLCAma2V5KSkKIAkJcmV0dXJu
IGVycm5vOwogCi0JcmV0dXJuIHdyaXRlX25vZGVfcmF3KGNvbm4sICZrZXks
IG5vZGUpOworCXJldHVybiB3cml0ZV9ub2RlX3Jhdyhjb25uLCAma2V5LCBu
b2RlLCBub19xdW90YV9jaGVjayk7CiB9CiAKIHN0YXRpYyBlbnVtIHhzX3Bl
cm1fdHlwZSBwZXJtX2Zvcl9jb25uKHN0cnVjdCBjb25uZWN0aW9uICpjb25u
LApAQCAtOTkyLDcgKzk5NCw3IEBAIHN0YXRpYyBzdHJ1Y3Qgbm9kZSAqY3Jl
YXRlX25vZGUoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIGNvbnN0IHZvaWQg
KmN0eCwKIAkvKiBXZSB3cml0ZSBvdXQgdGhlIG5vZGVzIGRvd24sIHNldHRp
bmcgZGVzdHJ1Y3RvciBpbiBjYXNlCiAJICogc29tZXRoaW5nIGdvZXMgd3Jv
bmcuICovCiAJZm9yIChpID0gbm9kZTsgaTsgaSA9IGktPnBhcmVudCkgewot
CQlpZiAod3JpdGVfbm9kZShjb25uLCBpKSkgeworCQlpZiAod3JpdGVfbm9k
ZShjb25uLCBpLCBmYWxzZSkpIHsKIAkJCWRvbWFpbl9lbnRyeV9kZWMoY29u
biwgaSk7CiAJCQlyZXR1cm4gTlVMTDsKIAkJfQpAQCAtMTAzMiw3ICsxMDM0
LDcgQEAgc3RhdGljIGludCBkb193cml0ZShzdHJ1Y3QgY29ubmVjdGlvbiAq
Y29ubiwgc3RydWN0IGJ1ZmZlcmVkX2RhdGEgKmluKQogCX0gZWxzZSB7CiAJ
CW5vZGUtPmRhdGEgPSBpbi0+YnVmZmVyICsgb2Zmc2V0OwogCQlub2RlLT5k
YXRhbGVuID0gZGF0YWxlbjsKLQkJaWYgKHdyaXRlX25vZGUoY29ubiwgbm9k
ZSkpCisJCWlmICh3cml0ZV9ub2RlKGNvbm4sIG5vZGUsIGZhbHNlKSkKIAkJ
CXJldHVybiBlcnJubzsKIAl9CiAKQEAgLTExMDgsNyArMTExMCw3IEBAIHN0
YXRpYyBpbnQgcmVtb3ZlX2NoaWxkX2VudHJ5KHN0cnVjdCBjb25uZWN0aW9u
ICpjb25uLCBzdHJ1Y3Qgbm9kZSAqbm9kZSwKIAlzaXplX3QgY2hpbGRsZW4g
PSBzdHJsZW4obm9kZS0+Y2hpbGRyZW4gKyBvZmZzZXQpOwogCW1lbWRlbChu
b2RlLT5jaGlsZHJlbiwgb2Zmc2V0LCBjaGlsZGxlbiArIDEsIG5vZGUtPmNo
aWxkbGVuKTsKIAlub2RlLT5jaGlsZGxlbiAtPSBjaGlsZGxlbiArIDE7Ci0J
cmV0dXJuIHdyaXRlX25vZGUoY29ubiwgbm9kZSk7CisJcmV0dXJuIHdyaXRl
X25vZGUoY29ubiwgbm9kZSwgdHJ1ZSk7CiB9CiAKIApAQCAtMTI0Nyw3ICsx
MjQ5LDcgQEAgc3RhdGljIGludCBkb19zZXRfcGVybXMoc3RydWN0IGNvbm5l
Y3Rpb24gKmNvbm4sIHN0cnVjdCBidWZmZXJlZF9kYXRhICppbikKIAlub2Rl
LT5udW1fcGVybXMgPSBudW07CiAJZG9tYWluX2VudHJ5X2luYyhjb25uLCBu
b2RlKTsKIAotCWlmICh3cml0ZV9ub2RlKGNvbm4sIG5vZGUpKQorCWlmICh3
cml0ZV9ub2RlKGNvbm4sIG5vZGUsIGZhbHNlKSkKIAkJcmV0dXJuIGVycm5v
OwogCiAJZmlyZV93YXRjaGVzKGNvbm4sIGluLCBuYW1lLCBmYWxzZSk7CkBA
IC0xNTA1LDcgKzE1MDcsNyBAQCBzdGF0aWMgdm9pZCBtYW51YWxfbm9kZShj
b25zdCBjaGFyICpuYW1lLCBjb25zdCBjaGFyICpjaGlsZCkKIAlpZiAoY2hp
bGQpCiAJCW5vZGUtPmNoaWxkbGVuID0gc3RybGVuKGNoaWxkKSArIDE7CiAK
LQlpZiAod3JpdGVfbm9kZShOVUxMLCBub2RlKSkKKwlpZiAod3JpdGVfbm9k
ZShOVUxMLCBub2RlLCBmYWxzZSkpCiAJCWJhcmZfcGVycm9yKCJDb3VsZCBu
b3QgY3JlYXRlIGluaXRpYWwgbm9kZSAlcyIsIG5hbWUpOwogCXRhbGxvY19m
cmVlKG5vZGUpOwogfQpkaWZmIC0tZ2l0IGEvdG9vbHMveGVuc3RvcmUveGVu
c3RvcmVkX2NvcmUuaCBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3Jl
LmgKaW5kZXggMWRmNmFkOTRhYi4uNTNhYWZhMWQ5YiAxMDA2NDQKLS0tIGEv
dG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuaAorKysgYi90b29scy94
ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5oCkBAIC0xNDYsNyArMTQ2LDggQEAg
dm9pZCBzZW5kX2FjayhzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgZW51bSB4
c2Rfc29ja21zZ190eXBlIHR5cGUpOwogY2hhciAqY2Fub25pY2FsaXplKHN0
cnVjdCBjb25uZWN0aW9uICpjb25uLCBjb25zdCB2b2lkICpjdHgsIGNvbnN0
IGNoYXIgKm5vZGUpOwogCiAvKiBXcml0ZSBhIG5vZGUgdG8gdGhlIHRkYiBk
YXRhIGJhc2UuICovCi1pbnQgd3JpdGVfbm9kZV9yYXcoc3RydWN0IGNvbm5l
Y3Rpb24gKmNvbm4sIFREQl9EQVRBICprZXksIHN0cnVjdCBub2RlICpub2Rl
KTsKK2ludCB3cml0ZV9ub2RlX3JhdyhzdHJ1Y3QgY29ubmVjdGlvbiAqY29u
biwgVERCX0RBVEEgKmtleSwgc3RydWN0IG5vZGUgKm5vZGUsCisJCSAgIGJv
b2wgbm9fcXVvdGFfY2hlY2spOwogCiAvKiBHZXQgdGhpcyBub2RlLCBjaGVj
a2luZyB3ZSBoYXZlIHBlcm1pc3Npb25zLiAqLwogc3RydWN0IG5vZGUgKmdl
dF9ub2RlKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLApkaWZmIC0tZ2l0IGEv
dG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3RyYW5zYWN0aW9uLmMgYi90b29s
cy94ZW5zdG9yZS94ZW5zdG9yZWRfdHJhbnNhY3Rpb24uYwppbmRleCAyODI0
ZjdiMzU5Li5lODc4OTc1NzM0IDEwMDY0NAotLS0gYS90b29scy94ZW5zdG9y
ZS94ZW5zdG9yZWRfdHJhbnNhY3Rpb24uYworKysgYi90b29scy94ZW5zdG9y
ZS94ZW5zdG9yZWRfdHJhbnNhY3Rpb24uYwpAQCAtMjc2LDcgKzI3Niw3IEBA
IGludCBhY2Nlc3Nfbm9kZShzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgc3Ry
dWN0IG5vZGUgKm5vZGUsCiAJCQlpLT5jaGVja19nZW4gPSB0cnVlOwogCQkJ
aWYgKG5vZGUtPmdlbmVyYXRpb24gIT0gTk9fR0VORVJBVElPTikgewogCQkJ
CXNldF90ZGJfa2V5KHRyYW5zX25hbWUsICZsb2NhbF9rZXkpOwotCQkJCXJl
dCA9IHdyaXRlX25vZGVfcmF3KGNvbm4sICZsb2NhbF9rZXksIG5vZGUpOwor
CQkJCXJldCA9IHdyaXRlX25vZGVfcmF3KGNvbm4sICZsb2NhbF9rZXksIG5v
ZGUsIHRydWUpOwogCQkJCWlmIChyZXQpCiAJCQkJCWdvdG8gZXJyOwogCQkJ
CWktPnRhX25vZGUgPSB0cnVlOwo=

--=separator
Content-Type: application/octet-stream;
 name="xsa115-c/0002-tools-xenstore-ignore-transaction-id-for-un-watch.patch"
Content-Disposition: attachment;
 filename="xsa115-c/0002-tools-xenstore-ignore-transaction-id-for-un-watch.patch"
Content-Transfer-Encoding: base64

RnJvbTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpTdWJqZWN0
OiB0b29scy94ZW5zdG9yZTogaWdub3JlIHRyYW5zYWN0aW9uIGlkIGZvciBb
dW5dd2F0Y2gKCkluc3RlYWQgb2YgaWdub3JpbmcgdGhlIHRyYW5zYWN0aW9u
IGlkIGZvciBYU19XQVRDSCBhbmQgWFNfVU5XQVRDSApjb21tYW5kcyBhcyBp
dCBpcyBkb2N1bWVudGVkIGluIGRvY3MvbWlzYy94ZW5zdG9yZS50eHQsIGl0
IGlzIHRlc3RlZApmb3IgdmFsaWRpdHkgdG9kYXkuCgpSZWFsbHkgaWdub3Jl
IHRoZSB0cmFuc2FjdGlvbiBpZCBmb3IgWFNfV0FUQ0ggYW5kIFhTX1VOV0FU
Q0guCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTExNS4KClNpZ25lZC1vZmYtYnk6
IEp1ZXJnZW4gR3Jvc3MgPGpncm9zc0BzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6
IEp1bGllbiBHcmFsbCA8amdyYWxsQGFtYXpvbi5jb20+ClJldmlld2VkLWJ5
OiBQYXVsIER1cnJhbnQgPHBhdWxAeGVuLm9yZz4KCmRpZmYgLS1naXQgYS90
b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jIGIvdG9vbHMveGVuc3Rv
cmUveGVuc3RvcmVkX2NvcmUuYwppbmRleCA3YmYxMTIzZGEzLi4zNDcxY2Ux
NTkyIDEwMDY0NAotLS0gYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29y
ZS5jCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMKQEAg
LTEyNjEsMTMgKzEyNjEsMTcgQEAgc3RhdGljIGludCBkb19zZXRfcGVybXMo
c3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBidWZmZXJlZF9kYXRh
ICppbikKIHN0YXRpYyBzdHJ1Y3QgewogCWNvbnN0IGNoYXIgKnN0cjsKIAlp
bnQgKCpmdW5jKShzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgc3RydWN0IGJ1
ZmZlcmVkX2RhdGEgKmluKTsKKwl1bnNpZ25lZCBpbnQgZmxhZ3M7CisjZGVm
aW5lIFhTX0ZMQUdfTk9USUQJCSgxVSA8PCAwKQkvKiBJZ25vcmUgdHJhbnNh
Y3Rpb24gaWQuICovCiB9IGNvbnN0IHdpcmVfZnVuY3NbWFNfVFlQRV9DT1VO
VF0gPSB7CiAJW1hTX0NPTlRST0xdICAgICAgICAgICA9IHsgIkNPTlRST0wi
LCAgICAgICAgICAgZG9fY29udHJvbCB9LAogCVtYU19ESVJFQ1RPUlldICAg
ICAgICAgPSB7ICJESVJFQ1RPUlkiLCAgICAgICAgIHNlbmRfZGlyZWN0b3J5
IH0sCiAJW1hTX1JFQURdICAgICAgICAgICAgICA9IHsgIlJFQUQiLCAgICAg
ICAgICAgICAgZG9fcmVhZCB9LAogCVtYU19HRVRfUEVSTVNdICAgICAgICAg
PSB7ICJHRVRfUEVSTVMiLCAgICAgICAgIGRvX2dldF9wZXJtcyB9LAotCVtY
U19XQVRDSF0gICAgICAgICAgICAgPSB7ICJXQVRDSCIsICAgICAgICAgICAg
IGRvX3dhdGNoIH0sCi0JW1hTX1VOV0FUQ0hdICAgICAgICAgICA9IHsgIlVO
V0FUQ0giLCAgICAgICAgICAgZG9fdW53YXRjaCB9LAorCVtYU19XQVRDSF0g
ICAgICAgICAgICAgPQorCSAgICB7ICJXQVRDSCIsICAgICAgICAgZG9fd2F0
Y2gsICAgICAgICBYU19GTEFHX05PVElEIH0sCisJW1hTX1VOV0FUQ0hdICAg
ICAgICAgICA9CisJICAgIHsgIlVOV0FUQ0giLCAgICAgICBkb191bndhdGNo
LCAgICAgIFhTX0ZMQUdfTk9USUQgfSwKIAlbWFNfVFJBTlNBQ1RJT05fU1RB
UlRdID0geyAiVFJBTlNBQ1RJT05fU1RBUlQiLCBkb190cmFuc2FjdGlvbl9z
dGFydCB9LAogCVtYU19UUkFOU0FDVElPTl9FTkRdICAgPSB7ICJUUkFOU0FD
VElPTl9FTkQiLCAgIGRvX3RyYW5zYWN0aW9uX2VuZCB9LAogCVtYU19JTlRS
T0RVQ0VdICAgICAgICAgPSB7ICJJTlRST0RVQ0UiLCAgICAgICAgIGRvX2lu
dHJvZHVjZSB9LApAQCAtMTI4OSw3ICsxMjkzLDcgQEAgc3RhdGljIHN0cnVj
dCB7CiAKIHN0YXRpYyBjb25zdCBjaGFyICpzb2NrbXNnX3N0cmluZyhlbnVt
IHhzZF9zb2NrbXNnX3R5cGUgdHlwZSkKIHsKLQlpZiAoKHVuc2lnbmVkKXR5
cGUgPCBYU19UWVBFX0NPVU5UICYmIHdpcmVfZnVuY3NbdHlwZV0uc3RyKQor
CWlmICgodW5zaWduZWQgaW50KXR5cGUgPCBBUlJBWV9TSVpFKHdpcmVfZnVu
Y3MpICYmIHdpcmVfZnVuY3NbdHlwZV0uc3RyKQogCQlyZXR1cm4gd2lyZV9m
dW5jc1t0eXBlXS5zdHI7CiAKIAlyZXR1cm4gIioqVU5LTk9XTioqIjsKQEAg
LTEzMDQsNyArMTMwOCwxNCBAQCBzdGF0aWMgdm9pZCBwcm9jZXNzX21lc3Nh
Z2Uoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBidWZmZXJlZF9k
YXRhICppbikKIAllbnVtIHhzZF9zb2NrbXNnX3R5cGUgdHlwZSA9IGluLT5o
ZHIubXNnLnR5cGU7CiAJaW50IHJldDsKIAotCXRyYW5zID0gdHJhbnNhY3Rp
b25fbG9va3VwKGNvbm4sIGluLT5oZHIubXNnLnR4X2lkKTsKKwlpZiAoKHVu
c2lnbmVkIGludCl0eXBlID49IFhTX1RZUEVfQ09VTlQgfHwgIXdpcmVfZnVu
Y3NbdHlwZV0uZnVuYykgeworCQllcHJpbnRmKCJDbGllbnQgdW5rbm93biBv
cGVyYXRpb24gJWkiLCB0eXBlKTsKKwkJc2VuZF9lcnJvcihjb25uLCBFTk9T
WVMpOworCQlyZXR1cm47CisJfQorCisJdHJhbnMgPSAod2lyZV9mdW5jc1t0
eXBlXS5mbGFncyAmIFhTX0ZMQUdfTk9USUQpCisJCT8gTlVMTCA6IHRyYW5z
YWN0aW9uX2xvb2t1cChjb25uLCBpbi0+aGRyLm1zZy50eF9pZCk7CiAJaWYg
KElTX0VSUih0cmFucykpIHsKIAkJc2VuZF9lcnJvcihjb25uLCAtUFRSX0VS
Uih0cmFucykpOwogCQlyZXR1cm47CkBAIC0xMzEzLDEyICsxMzI0LDcgQEAg
c3RhdGljIHZvaWQgcHJvY2Vzc19tZXNzYWdlKHN0cnVjdCBjb25uZWN0aW9u
ICpjb25uLCBzdHJ1Y3QgYnVmZmVyZWRfZGF0YSAqaW4pCiAJYXNzZXJ0KGNv
bm4tPnRyYW5zYWN0aW9uID09IE5VTEwpOwogCWNvbm4tPnRyYW5zYWN0aW9u
ID0gdHJhbnM7CiAKLQlpZiAoKHVuc2lnbmVkKXR5cGUgPCBYU19UWVBFX0NP
VU5UICYmIHdpcmVfZnVuY3NbdHlwZV0uZnVuYykKLQkJcmV0ID0gd2lyZV9m
dW5jc1t0eXBlXS5mdW5jKGNvbm4sIGluKTsKLQllbHNlIHsKLQkJZXByaW50
ZigiQ2xpZW50IHVua25vd24gb3BlcmF0aW9uICVpIiwgdHlwZSk7Ci0JCXJl
dCA9IEVOT1NZUzsKLQl9CisJcmV0ID0gd2lyZV9mdW5jc1t0eXBlXS5mdW5j
KGNvbm4sIGluKTsKIAlpZiAocmV0KQogCQlzZW5kX2Vycm9yKGNvbm4sIHJl
dCk7CiAK

--=separator
Content-Type: application/octet-stream;
 name="xsa115-c/0003-tools-xenstore-fix-node-accounting-after-failed-node.patch"
Content-Disposition: attachment;
 filename="xsa115-c/0003-tools-xenstore-fix-node-accounting-after-failed-node.patch"
Content-Transfer-Encoding: base64

RnJvbTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpTdWJqZWN0
OiB0b29scy94ZW5zdG9yZTogZml4IG5vZGUgYWNjb3VudGluZyBhZnRlciBm
YWlsZWQgbm9kZSBjcmVhdGlvbgoKV2hlbiBhIG5vZGUgY3JlYXRpb24gZmFp
bHMgdGhlIG51bWJlciBvZiBub2RlcyBvZiB0aGUgZG9tYWluIHNob3VsZCBi
ZQp0aGUgc2FtZSBhcyBiZWZvcmUgdGhlIGZhaWxlZCBub2RlIGNyZWF0aW9u
LiBJbiBjYXNlIG9mIGZhaWx1cmUgd2hlbgp0cnlpbmcgdG8gY3JlYXRlIGEg
bm9kZSByZXF1aXJpbmcgdG8gY3JlYXRlIG9uZSBvciBtb3JlIGludGVybWVk
aWF0ZQpub2RlcyBhcyB3ZWxsIChlLmcuIHdoZW4gL2EvYi9jL2QgaXMgdG8g
YmUgY3JlYXRlZCwgYnV0IC9hL2IgaXNuJ3QKZXhpc3RpbmcgeWV0KSBpdCBt
aWdodCBoYXBwZW4gdGhhdCB0aGUgbnVtYmVyIG9mIG5vZGVzIG9mIHRoZSBj
cmVhdGluZwpkb21haW4gaXMgbm90IHJlc2V0IHRvIHRoZSB2YWx1ZSBpdCBo
YWQgYmVmb3JlLgoKU28gbW92ZSB0aGUgcXVvdGEgYWNjb3VudGluZyBvdXQg
b2YgY29uc3RydWN0X25vZGUoKSBhbmQgaW50byB0aGUgbm9kZQp3cml0ZSBs
b29wIGluIGNyZWF0ZV9ub2RlKCkgaW4gb3JkZXIgdG8gYmUgYWJsZSB0byB1
bmRvIHRoZSBhY2NvdW50aW5nCmluIGNhc2Ugb2YgYW4gZXJyb3IgaW4gdGhl
IGludGVybWVkaWF0ZSBub2RlIGRlc3RydWN0b3IuCgpUaGlzIGlzIHBhcnQg
b2YgWFNBLTExNS4KClNpZ25lZC1vZmYtYnk6IEp1ZXJnZW4gR3Jvc3MgPGpn
cm9zc0BzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IFBhdWwgRHVycmFudCA8cGF1
bEB4ZW4ub3JnPgpBY2tlZC1ieTogSnVsaWVuIEdyYWxsIDxqZ3JhbGxAYW1h
em9uLmNvbT4KCmRpZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94ZW5zdG9y
ZWRfY29yZS5jIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYwpp
bmRleCAzNDcxY2UxNTkyLi40NzZlNjlkNjU4IDEwMDY0NAotLS0gYS90b29s
cy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jCisrKyBiL3Rvb2xzL3hlbnN0
b3JlL3hlbnN0b3JlZF9jb3JlLmMKQEAgLTkxOCwxMSArOTE4LDYgQEAgc3Rh
dGljIHN0cnVjdCBub2RlICpjb25zdHJ1Y3Rfbm9kZShzdHJ1Y3QgY29ubmVj
dGlvbiAqY29ubiwgY29uc3Qgdm9pZCAqY3R4LAogCWlmICghcGFyZW50KQog
CQlyZXR1cm4gTlVMTDsKIAotCWlmIChkb21haW5fZW50cnkoY29ubikgPj0g
cXVvdGFfbmJfZW50cnlfcGVyX2RvbWFpbikgewotCQllcnJubyA9IEVOT1NQ
QzsKLQkJcmV0dXJuIE5VTEw7Ci0JfQotCiAJLyogQWRkIGNoaWxkIHRvIHBh
cmVudC4gKi8KIAliYXNlID0gYmFzZW5hbWUobmFtZSk7CiAJYmFzZWxlbiA9
IHN0cmxlbihiYXNlKSArIDE7CkBAIC05NTUsNyArOTUwLDYgQEAgc3RhdGlj
IHN0cnVjdCBub2RlICpjb25zdHJ1Y3Rfbm9kZShzdHJ1Y3QgY29ubmVjdGlv
biAqY29ubiwgY29uc3Qgdm9pZCAqY3R4LAogCW5vZGUtPmNoaWxkcmVuID0g
bm9kZS0+ZGF0YSA9IE5VTEw7CiAJbm9kZS0+Y2hpbGRsZW4gPSBub2RlLT5k
YXRhbGVuID0gMDsKIAlub2RlLT5wYXJlbnQgPSBwYXJlbnQ7Ci0JZG9tYWlu
X2VudHJ5X2luYyhjb25uLCBub2RlKTsKIAlyZXR1cm4gbm9kZTsKIAogbm9t
ZW06CkBAIC05NzUsNiArOTY5LDkgQEAgc3RhdGljIGludCBkZXN0cm95X25v
ZGUodm9pZCAqX25vZGUpCiAJa2V5LmRzaXplID0gc3RybGVuKG5vZGUtPm5h
bWUpOwogCiAJdGRiX2RlbGV0ZSh0ZGJfY3R4LCBrZXkpOworCisJZG9tYWlu
X2VudHJ5X2RlYyh0YWxsb2NfcGFyZW50KG5vZGUpLCBub2RlKTsKKwogCXJl
dHVybiAwOwogfQogCkBAIC05OTEsMTggKzk4OCwzNCBAQCBzdGF0aWMgc3Ry
dWN0IG5vZGUgKmNyZWF0ZV9ub2RlKHN0cnVjdCBjb25uZWN0aW9uICpjb25u
LCBjb25zdCB2b2lkICpjdHgsCiAJbm9kZS0+ZGF0YSA9IGRhdGE7CiAJbm9k
ZS0+ZGF0YWxlbiA9IGRhdGFsZW47CiAKLQkvKiBXZSB3cml0ZSBvdXQgdGhl
IG5vZGVzIGRvd24sIHNldHRpbmcgZGVzdHJ1Y3RvciBpbiBjYXNlCi0JICog
c29tZXRoaW5nIGdvZXMgd3JvbmcuICovCisJLyoKKwkgKiBXZSB3cml0ZSBv
dXQgdGhlIG5vZGVzIGJvdHRvbSB1cC4KKwkgKiBBbGwgbmV3IGNyZWF0ZWQg
bm9kZXMgd2lsbCBoYXZlIGktPnBhcmVudCBzZXQsIHdoaWxlIHRoZSBmaW5h
bAorCSAqIG5vZGUgd2lsbCBiZSBhbHJlYWR5IGV4aXN0aW5nIGFuZCB3b24n
dCBoYXZlIGktPnBhcmVudCBzZXQuCisJICogTmV3IG5vZGVzIGFyZSBzdWJq
ZWN0IHRvIHF1b3RhIGhhbmRsaW5nLgorCSAqIEluaXRpYWxseSBzZXQgYSBk
ZXN0cnVjdG9yIGZvciBhbGwgbmV3IG5vZGVzIHJlbW92aW5nIHRoZW0gZnJv
bQorCSAqIFREQiBhZ2FpbiBhbmQgdW5kb2luZyBxdW90YSBhY2NvdW50aW5n
IGZvciB0aGUgY2FzZSBvZiBhbiBlcnJvcgorCSAqIGR1cmluZyB0aGUgd3Jp
dGUgbG9vcC4KKwkgKi8KIAlmb3IgKGkgPSBub2RlOyBpOyBpID0gaS0+cGFy
ZW50KSB7Ci0JCWlmICh3cml0ZV9ub2RlKGNvbm4sIGksIGZhbHNlKSkgewot
CQkJZG9tYWluX2VudHJ5X2RlYyhjb25uLCBpKTsKKwkJLyogaS0+cGFyZW50
IGlzIHNldCBmb3IgZWFjaCBuZXcgbm9kZSwgc28gY2hlY2sgcXVvdGEuICov
CisJCWlmIChpLT5wYXJlbnQgJiYKKwkJICAgIGRvbWFpbl9lbnRyeShjb25u
KSA+PSBxdW90YV9uYl9lbnRyeV9wZXJfZG9tYWluKSB7CisJCQllcnJubyA9
IEVOT1NQQzsKIAkJCXJldHVybiBOVUxMOwogCQl9Ci0JCXRhbGxvY19zZXRf
ZGVzdHJ1Y3RvcihpLCBkZXN0cm95X25vZGUpOworCQlpZiAod3JpdGVfbm9k
ZShjb25uLCBpLCBmYWxzZSkpCisJCQlyZXR1cm4gTlVMTDsKKworCQkvKiBB
Y2NvdW50IGZvciBuZXcgbm9kZSwgc2V0IGRlc3RydWN0b3IgZm9yIGVycm9y
IGNhc2UuICovCisJCWlmIChpLT5wYXJlbnQpIHsKKwkJCWRvbWFpbl9lbnRy
eV9pbmMoY29ubiwgaSk7CisJCQl0YWxsb2Nfc2V0X2Rlc3RydWN0b3IoaSwg
ZGVzdHJveV9ub2RlKTsKKwkJfQogCX0KIAogCS8qIE9LLCBub3cgcmVtb3Zl
IGRlc3RydWN0b3JzIHNvIHRoZXkgc3RheSBhcm91bmQgKi8KLQlmb3IgKGkg
PSBub2RlOyBpOyBpID0gaS0+cGFyZW50KQorCWZvciAoaSA9IG5vZGU7IGkt
PnBhcmVudDsgaSA9IGktPnBhcmVudCkKIAkJdGFsbG9jX3NldF9kZXN0cnVj
dG9yKGksIE5VTEwpOwogCXJldHVybiBub2RlOwogfQo=

--=separator
Content-Type: application/octet-stream;
 name="xsa115-c/0004-tools-xenstore-simplify-and-rename-check_event_node.patch"
Content-Disposition: attachment;
 filename="xsa115-c/0004-tools-xenstore-simplify-and-rename-check_event_node.patch"
Content-Transfer-Encoding: base64

RnJvbTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpTdWJqZWN0
OiB0b29scy94ZW5zdG9yZTogc2ltcGxpZnkgYW5kIHJlbmFtZSBjaGVja19l
dmVudF9ub2RlKCkKClRoZXJlIGlzIG5vIHBhdGggd2hpY2ggYWxsb3dzIHRv
IGNhbGwgY2hlY2tfZXZlbnRfbm9kZSgpIHdpdGhvdXQgYQpldmVudCBuYW1l
LiBTbyBkb24ndCBsZXQgdGhlIHJlc3VsdCBkZXBlbmQgb24gdGhlIG5hbWUg
YmVpbmcgTlVMTCBhbmQKYWRkIGFuIGFzc2VydCgpIGNvdmVyaW5nIHRoYXQg
Y2FzZS4KClJlbmFtZSB0aGUgZnVuY3Rpb24gdG8gY2hlY2tfc3BlY2lhbF9l
dmVudCgpIHRvIGJldHRlciBtYXRjaCB0aGUKc2VtYW50aWNzLgoKVGhpcyBp
cyBwYXJ0IG9mIFhTQS0xMTUuCgpTaWduZWQtb2ZmLWJ5OiBKdWVyZ2VuIEdy
b3NzIDxqZ3Jvc3NAc3VzZS5jb20+ClJldmlld2VkLWJ5OiBKdWxpZW4gR3Jh
bGwgPGpncmFsbEBhbWF6b24uY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJy
YW50IDxwYXVsQHhlbi5vcmc+CgpkaWZmIC0tZ2l0IGEvdG9vbHMveGVuc3Rv
cmUveGVuc3RvcmVkX3dhdGNoLmMgYi90b29scy94ZW5zdG9yZS94ZW5zdG9y
ZWRfd2F0Y2guYwppbmRleCA3ZGVkY2E2MGRmLi5mMmYxYmVkNDdjIDEwMDY0
NAotLS0gYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfd2F0Y2guYworKysg
Yi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfd2F0Y2guYwpAQCAtNDcsMTMg
KzQ3LDExIEBAIHN0cnVjdCB3YXRjaAogCWNoYXIgKm5vZGU7CiB9OwogCi1z
dGF0aWMgYm9vbCBjaGVja19ldmVudF9ub2RlKGNvbnN0IGNoYXIgKm5vZGUp
CitzdGF0aWMgYm9vbCBjaGVja19zcGVjaWFsX2V2ZW50KGNvbnN0IGNoYXIg
Km5hbWUpCiB7Ci0JaWYgKCFub2RlIHx8ICFzdHJzdGFydHMobm9kZSwgIkAi
KSkgewotCQllcnJubyA9IEVJTlZBTDsKLQkJcmV0dXJuIGZhbHNlOwotCX0K
LQlyZXR1cm4gdHJ1ZTsKKwlhc3NlcnQobmFtZSk7CisKKwlyZXR1cm4gc3Ry
c3RhcnRzKG5hbWUsICJAIik7CiB9CiAKIC8qIElzIGNoaWxkIGEgc3Vibm9k
ZSBvZiBwYXJlbnQsIG9yIGVxdWFsPyAqLwpAQCAtODcsNyArODUsNyBAQCBz
dGF0aWMgdm9pZCBhZGRfZXZlbnQoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4s
CiAJdW5zaWduZWQgaW50IGxlbjsKIAljaGFyICpkYXRhOwogCi0JaWYgKCFj
aGVja19ldmVudF9ub2RlKG5hbWUpKSB7CisJaWYgKCFjaGVja19zcGVjaWFs
X2V2ZW50KG5hbWUpKSB7CiAJCS8qIENhbiB0aGlzIGNvbm4gbG9hZCBub2Rl
LCBvciBzZWUgdGhhdCBpdCBkb2Vzbid0IGV4aXN0PyAqLwogCQlzdHJ1Y3Qg
bm9kZSAqbm9kZSA9IGdldF9ub2RlKGNvbm4sIGN0eCwgbmFtZSwgWFNfUEVS
TV9SRUFEKTsKIAkJLyoK

--=separator
Content-Type: application/octet-stream;
 name="xsa115-c/0005-tools-xenstore-check-privilege-for-XS_IS_DOMAIN_INTR.patch"
Content-Disposition: attachment;
 filename="xsa115-c/0005-tools-xenstore-check-privilege-for-XS_IS_DOMAIN_INTR.patch"
Content-Transfer-Encoding: base64

RnJvbTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpTdWJqZWN0
OiB0b29scy94ZW5zdG9yZTogY2hlY2sgcHJpdmlsZWdlIGZvciBYU19JU19E
T01BSU5fSU5UUk9EVUNFRAoKVGhlIFhlbnN0b3JlIGNvbW1hbmQgWFNfSVNf
RE9NQUlOX0lOVFJPRFVDRUQgc2hvdWxkIGJlIHBvc3NpYmxlIGZvcgpwcml2
aWxlZ2VkIGRvbWFpbnMgb25seSAodGhlIG9ubHkgdXNlciBpbiB0aGUgdHJl
ZSBpcyB0aGUgeGVucGFnaW5nCmRhZW1vbikuCgpJbnN0ZWFkIG9mIGhhdmlu
ZyB0aGUgcHJpdmlsZWdlIHRlc3QgZm9yIGVhY2ggY29tbWFuZCBpbnRyb2R1
Y2UgYQpwZXItY29tbWFuZCBmbGFnIGZvciB0aGF0IHB1cnBvc2UuCgpUaGlz
IGlzIHBhcnQgb2YgWFNBLTExNS4KClNpZ25lZC1vZmYtYnk6IEp1ZXJnZW4g
R3Jvc3MgPGpncm9zc0BzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IEp1bGllbiBH
cmFsbCA8amdyYWxsQGFtYXpvbi5jb20+ClJldmlld2VkLWJ5OiBQYXVsIER1
cnJhbnQgPHBhdWxAeGVuLm9yZz4KCmRpZmYgLS1naXQgYS90b29scy94ZW5z
dG9yZS94ZW5zdG9yZWRfY29yZS5jIGIvdG9vbHMveGVuc3RvcmUveGVuc3Rv
cmVkX2NvcmUuYwppbmRleCA0NzZlNjlkNjU4Li4zZDBlN2IzOTE3IDEwMDY0
NAotLS0gYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jCisrKyBi
L3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMKQEAgLTEyNzYsOCAr
MTI3NiwxMCBAQCBzdGF0aWMgc3RydWN0IHsKIAlpbnQgKCpmdW5jKShzdHJ1
Y3QgY29ubmVjdGlvbiAqY29ubiwgc3RydWN0IGJ1ZmZlcmVkX2RhdGEgKmlu
KTsKIAl1bnNpZ25lZCBpbnQgZmxhZ3M7CiAjZGVmaW5lIFhTX0ZMQUdfTk9U
SUQJCSgxVSA8PCAwKQkvKiBJZ25vcmUgdHJhbnNhY3Rpb24gaWQuICovCisj
ZGVmaW5lIFhTX0ZMQUdfUFJJVgkJKDFVIDw8IDEpCS8qIFByaXZpbGVnZWQg
ZG9tYWluIG9ubHkuICovCiB9IGNvbnN0IHdpcmVfZnVuY3NbWFNfVFlQRV9D
T1VOVF0gPSB7Ci0JW1hTX0NPTlRST0xdICAgICAgICAgICA9IHsgIkNPTlRS
T0wiLCAgICAgICAgICAgZG9fY29udHJvbCB9LAorCVtYU19DT05UUk9MXSAg
ICAgICAgICAgPQorCSAgICB7ICJDT05UUk9MIiwgICAgICAgZG9fY29udHJv
bCwgICAgICBYU19GTEFHX1BSSVYgfSwKIAlbWFNfRElSRUNUT1JZXSAgICAg
ICAgID0geyAiRElSRUNUT1JZIiwgICAgICAgICBzZW5kX2RpcmVjdG9yeSB9
LAogCVtYU19SRUFEXSAgICAgICAgICAgICAgPSB7ICJSRUFEIiwgICAgICAg
ICAgICAgIGRvX3JlYWQgfSwKIAlbWFNfR0VUX1BFUk1TXSAgICAgICAgID0g
eyAiR0VUX1BFUk1TIiwgICAgICAgICBkb19nZXRfcGVybXMgfSwKQEAgLTEy
ODcsOCArMTI4OSwxMCBAQCBzdGF0aWMgc3RydWN0IHsKIAkgICAgeyAiVU5X
QVRDSCIsICAgICAgIGRvX3Vud2F0Y2gsICAgICAgWFNfRkxBR19OT1RJRCB9
LAogCVtYU19UUkFOU0FDVElPTl9TVEFSVF0gPSB7ICJUUkFOU0FDVElPTl9T
VEFSVCIsIGRvX3RyYW5zYWN0aW9uX3N0YXJ0IH0sCiAJW1hTX1RSQU5TQUNU
SU9OX0VORF0gICA9IHsgIlRSQU5TQUNUSU9OX0VORCIsICAgZG9fdHJhbnNh
Y3Rpb25fZW5kIH0sCi0JW1hTX0lOVFJPRFVDRV0gICAgICAgICA9IHsgIklO
VFJPRFVDRSIsICAgICAgICAgZG9faW50cm9kdWNlIH0sCi0JW1hTX1JFTEVB
U0VdICAgICAgICAgICA9IHsgIlJFTEVBU0UiLCAgICAgICAgICAgZG9fcmVs
ZWFzZSB9LAorCVtYU19JTlRST0RVQ0VdICAgICAgICAgPQorCSAgICB7ICJJ
TlRST0RVQ0UiLCAgICAgZG9faW50cm9kdWNlLCAgICBYU19GTEFHX1BSSVYg
fSwKKwlbWFNfUkVMRUFTRV0gICAgICAgICAgID0KKwkgICAgeyAiUkVMRUFT
RSIsICAgICAgIGRvX3JlbGVhc2UsICAgICAgWFNfRkxBR19QUklWIH0sCiAJ
W1hTX0dFVF9ET01BSU5fUEFUSF0gICA9IHsgIkdFVF9ET01BSU5fUEFUSCIs
ICAgZG9fZ2V0X2RvbWFpbl9wYXRoIH0sCiAJW1hTX1dSSVRFXSAgICAgICAg
ICAgICA9IHsgIldSSVRFIiwgICAgICAgICAgICAgZG9fd3JpdGUgfSwKIAlb
WFNfTUtESVJdICAgICAgICAgICAgID0geyAiTUtESVIiLCAgICAgICAgICAg
ICBkb19ta2RpciB9LApAQCAtMTI5Nyw5ICsxMzAxLDExIEBAIHN0YXRpYyBz
dHJ1Y3QgewogCVtYU19XQVRDSF9FVkVOVF0gICAgICAgPSB7ICJXQVRDSF9F
VkVOVCIsICAgICAgIE5VTEwgfSwKIAlbWFNfRVJST1JdICAgICAgICAgICAg
ID0geyAiRVJST1IiLCAgICAgICAgICAgICBOVUxMIH0sCiAJW1hTX0lTX0RP
TUFJTl9JTlRST0RVQ0VEXSA9Ci0JCQl7ICJJU19ET01BSU5fSU5UUk9EVUNF
RCIsIGRvX2lzX2RvbWFpbl9pbnRyb2R1Y2VkIH0sCi0JW1hTX1JFU1VNRV0g
ICAgICAgICAgICA9IHsgIlJFU1VNRSIsICAgICAgICAgICAgZG9fcmVzdW1l
IH0sCi0JW1hTX1NFVF9UQVJHRVRdICAgICAgICA9IHsgIlNFVF9UQVJHRVQi
LCAgICAgICAgZG9fc2V0X3RhcmdldCB9LAorCSAgICB7ICJJU19ET01BSU5f
SU5UUk9EVUNFRCIsIGRvX2lzX2RvbWFpbl9pbnRyb2R1Y2VkLCBYU19GTEFH
X1BSSVYgfSwKKwlbWFNfUkVTVU1FXSAgICAgICAgICAgID0KKwkgICAgeyAi
UkVTVU1FIiwgICAgICAgIGRvX3Jlc3VtZSwgICAgICAgWFNfRkxBR19QUklW
IH0sCisJW1hTX1NFVF9UQVJHRVRdICAgICAgICA9CisJICAgIHsgIlNFVF9U
QVJHRVQiLCAgICBkb19zZXRfdGFyZ2V0LCAgIFhTX0ZMQUdfUFJJViB9LAog
CVtYU19SRVNFVF9XQVRDSEVTXSAgICAgPSB7ICJSRVNFVF9XQVRDSEVTIiwg
ICAgIGRvX3Jlc2V0X3dhdGNoZXMgfSwKIAlbWFNfRElSRUNUT1JZX1BBUlRd
ICAgID0geyAiRElSRUNUT1JZX1BBUlQiLCAgICBzZW5kX2RpcmVjdG9yeV9w
YXJ0IH0sCiB9OwpAQCAtMTMyNyw2ICsxMzMzLDEyIEBAIHN0YXRpYyB2b2lk
IHByb2Nlc3NfbWVzc2FnZShzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgc3Ry
dWN0IGJ1ZmZlcmVkX2RhdGEgKmluKQogCQlyZXR1cm47CiAJfQogCisJaWYg
KCh3aXJlX2Z1bmNzW3R5cGVdLmZsYWdzICYgWFNfRkxBR19QUklWKSAmJgor
CSAgICBkb21haW5faXNfdW5wcml2aWxlZ2VkKGNvbm4pKSB7CisJCXNlbmRf
ZXJyb3IoY29ubiwgRUFDQ0VTKTsKKwkJcmV0dXJuOworCX0KKwogCXRyYW5z
ID0gKHdpcmVfZnVuY3NbdHlwZV0uZmxhZ3MgJiBYU19GTEFHX05PVElEKQog
CQk/IE5VTEwgOiB0cmFuc2FjdGlvbl9sb29rdXAoY29ubiwgaW4tPmhkci5t
c2cudHhfaWQpOwogCWlmIChJU19FUlIodHJhbnMpKSB7CmRpZmYgLS1naXQg
YS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfZG9tYWluLmMgYi90b29scy94
ZW5zdG9yZS94ZW5zdG9yZWRfZG9tYWluLmMKaW5kZXggYTJmMTQ0ZjZkZC4u
MzY0YWQ4ZWE2MyAxMDA2NDQKLS0tIGEvdG9vbHMveGVuc3RvcmUveGVuc3Rv
cmVkX2RvbWFpbi5jCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9k
b21haW4uYwpAQCAtMzcyLDkgKzM3Miw2IEBAIGludCBkb19pbnRyb2R1Y2Uo
c3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBidWZmZXJlZF9kYXRh
ICppbikKIAlpZiAoZ2V0X3N0cmluZ3MoaW4sIHZlYywgQVJSQVlfU0laRSh2
ZWMpKSA8IEFSUkFZX1NJWkUodmVjKSkKIAkJcmV0dXJuIEVJTlZBTDsKIAot
CWlmIChkb21haW5faXNfdW5wcml2aWxlZ2VkKGNvbm4pKQotCQlyZXR1cm4g
RUFDQ0VTOwotCiAJZG9taWQgPSBhdG9pKHZlY1swXSk7CiAJLyogSWdub3Jl
IHRoZSBnZm4sIHdlIGRvbid0IG5lZWQgaXQuICovCiAJcG9ydCA9IGF0b2ko
dmVjWzJdKTsKQEAgLTQzOCw5ICs0MzUsNiBAQCBpbnQgZG9fc2V0X3Rhcmdl
dChzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgc3RydWN0IGJ1ZmZlcmVkX2Rh
dGEgKmluKQogCWlmIChnZXRfc3RyaW5ncyhpbiwgdmVjLCBBUlJBWV9TSVpF
KHZlYykpIDwgQVJSQVlfU0laRSh2ZWMpKQogCQlyZXR1cm4gRUlOVkFMOwog
Ci0JaWYgKGRvbWFpbl9pc191bnByaXZpbGVnZWQoY29ubikpCi0JCXJldHVy
biBFQUNDRVM7Ci0KIAlkb21pZCA9IGF0b2kodmVjWzBdKTsKIAl0ZG9taWQg
PSBhdG9pKHZlY1sxXSk7CiAKQEAgLTQ3Myw5ICs0NjcsNiBAQCBzdGF0aWMg
c3RydWN0IGRvbWFpbiAqb25lYXJnX2RvbWFpbihzdHJ1Y3QgY29ubmVjdGlv
biAqY29ubiwKIAlpZiAoIWRvbWlkKQogCQlyZXR1cm4gRVJSX1BUUigtRUlO
VkFMKTsKIAotCWlmIChkb21haW5faXNfdW5wcml2aWxlZ2VkKGNvbm4pKQot
CQlyZXR1cm4gRVJSX1BUUigtRUFDQ0VTKTsKLQogCXJldHVybiBmaW5kX2Nv
bm5lY3RlZF9kb21haW4oZG9taWQpOwogfQogCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa115-c/0006-tools-xenstore-rework-node-removal.patch"
Content-Disposition: attachment;
 filename="xsa115-c/0006-tools-xenstore-rework-node-removal.patch"
Content-Transfer-Encoding: base64

RnJvbTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpTdWJqZWN0
OiB0b29scy94ZW5zdG9yZTogcmV3b3JrIG5vZGUgcmVtb3ZhbAoKVG9kYXkg
YSBYZW5zdG9yZSBub2RlIGlzIGJlaW5nIHJlbW92ZWQgYnkgZGVsZXRpbmcg
aXQgZnJvbSB0aGUgcGFyZW50CmZpcnN0IGFuZCB0aGVuIGRlbGV0aW5nIGl0
c2VsZiBhbmQgYWxsIGl0cyBjaGlsZHJlbi4gVGhpcyByZXN1bHRzIGluCnN0
YWxlIGVudHJpZXMgcmVtYWluaW5nIGluIHRoZSBkYXRhIGJhc2UgaW4gY2Fz
ZSBlLmcuIGEgbWVtb3J5CmFsbG9jYXRpb24gaXMgZmFpbGluZyBkdXJpbmcg
cHJvY2Vzc2luZy4gVGhpcyB3b3VsZCByZXN1bHQgaW4gdGhlCnJhdGhlciBz
dHJhbmdlIGJlaGF2aW9yIHRvIGJlIGFibGUgdG8gcmVhZCBhIG5vZGUgKGFz
IGl0cyBzdGlsbCBpbiB0aGUKZGF0YSBiYXNlKSB3aGlsZSBub3QgYmVpbmcg
dmlzaWJsZSBpbiB0aGUgdHJlZSB2aWV3IG9mIFhlbnN0b3JlLgoKRml4IHRo
YXQgYnkgZGVsZXRpbmcgdGhlIG5vZGVzIGZyb20gdGhlIGxlYWYgc2lkZSBp
bnN0ZWFkIG9mIHN0YXJ0aW5nCmF0IHRoZSByb290LgoKQXMgZmlyZV93YXRj
aGVzKCkgaXMgbm93IGNhbGxlZCBmcm9tIF9ybSgpIHRoZSBjdHggcGFyYW1l
dGVyIG5lZWRzIGEKY29uc3QgYXR0cmlidXRlLgoKVGhpcyBpcyBwYXJ0IG9m
IFhTQS0xMTUuCgpTaWduZWQtb2ZmLWJ5OiBKdWVyZ2VuIEdyb3NzIDxqZ3Jv
c3NAc3VzZS5jb20+ClJldmlld2VkLWJ5OiBKdWxpZW4gR3JhbGwgPGpncmFs
bEBhbWF6b24uY29tPgpSZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVs
QHhlbi5vcmc+CgpkaWZmIC0tZ2l0IGEvdG9vbHMveGVuc3RvcmUveGVuc3Rv
cmVkX2NvcmUuYyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMK
aW5kZXggM2QwZTdiMzkxNy4uYzc0NDEzYmRhMiAxMDA2NDQKLS0tIGEvdG9v
bHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYworKysgYi90b29scy94ZW5z
dG9yZS94ZW5zdG9yZWRfY29yZS5jCkBAIC0xMDgwLDc0ICsxMDgwLDc2IEBA
IHN0YXRpYyBpbnQgZG9fbWtkaXIoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4s
IHN0cnVjdCBidWZmZXJlZF9kYXRhICppbikKIAlyZXR1cm4gMDsKIH0KIAot
c3RhdGljIHZvaWQgZGVsZXRlX25vZGUoc3RydWN0IGNvbm5lY3Rpb24gKmNv
bm4sIHN0cnVjdCBub2RlICpub2RlKQotewotCXVuc2lnbmVkIGludCBpOwot
CWNoYXIgKm5hbWU7Ci0KLQkvKiBEZWxldGUgc2VsZiwgdGhlbiBkZWxldGUg
Y2hpbGRyZW4uICBJZiB3ZSBjcmFzaCwgdGhlbiB0aGUgd29yc3QKLQkgICB0
aGF0IGNhbiBoYXBwZW4gaXMgdGhlIGNoaWxkcmVuIHdpbGwgY29udGludWUg
dG8gdGFrZSB1cCBzcGFjZSwgYnV0Ci0JICAgd2lsbCBvdGhlcndpc2UgYmUg
dW5yZWFjaGFibGUuICovCi0JZGVsZXRlX25vZGVfc2luZ2xlKGNvbm4sIG5v
ZGUpOwotCi0JLyogRGVsZXRlIGNoaWxkcmVuLCB0b28uICovCi0JZm9yIChp
ID0gMDsgaSA8IG5vZGUtPmNoaWxkbGVuOyBpICs9IHN0cmxlbihub2RlLT5j
aGlsZHJlbitpKSArIDEpIHsKLQkJc3RydWN0IG5vZGUgKmNoaWxkOwotCi0J
CW5hbWUgPSB0YWxsb2NfYXNwcmludGYobm9kZSwgIiVzLyVzIiwgbm9kZS0+
bmFtZSwKLQkJCQkgICAgICAgbm9kZS0+Y2hpbGRyZW4gKyBpKTsKLQkJY2hp
bGQgPSBuYW1lID8gcmVhZF9ub2RlKGNvbm4sIG5vZGUsIG5hbWUpIDogTlVM
TDsKLQkJaWYgKGNoaWxkKSB7Ci0JCQlkZWxldGVfbm9kZShjb25uLCBjaGls
ZCk7Ci0JCX0KLQkJZWxzZSB7Ci0JCQl0cmFjZSgiZGVsZXRlX25vZGU6IEVy
cm9yIGRlbGV0aW5nIGNoaWxkICclcy8lcychXG4iLAotCQkJICAgICAgbm9k
ZS0+bmFtZSwgbm9kZS0+Y2hpbGRyZW4gKyBpKTsKLQkJCS8qIFNraXAgaXQs
IHdlJ3ZlIGFscmVhZHkgZGVsZXRlZCB0aGUgcGFyZW50LiAqLwotCQl9Ci0J
CXRhbGxvY19mcmVlKG5hbWUpOwotCX0KLX0KLQotCiAvKiBEZWxldGUgbWVt
b3J5IHVzaW5nIG1lbW1vdmUuICovCiBzdGF0aWMgdm9pZCBtZW1kZWwodm9p
ZCAqbWVtLCB1bnNpZ25lZCBvZmYsIHVuc2lnbmVkIGxlbiwgdW5zaWduZWQg
dG90YWwpCiB7CiAJbWVtbW92ZShtZW0gKyBvZmYsIG1lbSArIG9mZiArIGxl
biwgdG90YWwgLSBvZmYgLSBsZW4pOwogfQogCi0KLXN0YXRpYyBpbnQgcmVt
b3ZlX2NoaWxkX2VudHJ5KHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1
Y3Qgbm9kZSAqbm9kZSwKLQkJCSAgICAgIHNpemVfdCBvZmZzZXQpCitzdGF0
aWMgdm9pZCByZW1vdmVfY2hpbGRfZW50cnkoc3RydWN0IGNvbm5lY3Rpb24g
KmNvbm4sIHN0cnVjdCBub2RlICpub2RlLAorCQkJICAgICAgIHNpemVfdCBv
ZmZzZXQpCiB7CiAJc2l6ZV90IGNoaWxkbGVuID0gc3RybGVuKG5vZGUtPmNo
aWxkcmVuICsgb2Zmc2V0KTsKKwogCW1lbWRlbChub2RlLT5jaGlsZHJlbiwg
b2Zmc2V0LCBjaGlsZGxlbiArIDEsIG5vZGUtPmNoaWxkbGVuKTsKIAlub2Rl
LT5jaGlsZGxlbiAtPSBjaGlsZGxlbiArIDE7Ci0JcmV0dXJuIHdyaXRlX25v
ZGUoY29ubiwgbm9kZSwgdHJ1ZSk7CisJaWYgKHdyaXRlX25vZGUoY29ubiwg
bm9kZSwgdHJ1ZSkpCisJCWNvcnJ1cHQoY29ubiwgIkNhbid0IHVwZGF0ZSBw
YXJlbnQgbm9kZSAnJXMnIiwgbm9kZS0+bmFtZSk7CiB9CiAKLQotc3RhdGlj
IGludCBkZWxldGVfY2hpbGQoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sCi0J
CQlzdHJ1Y3Qgbm9kZSAqbm9kZSwgY29uc3QgY2hhciAqY2hpbGRuYW1lKQor
c3RhdGljIHZvaWQgZGVsZXRlX2NoaWxkKHN0cnVjdCBjb25uZWN0aW9uICpj
b25uLAorCQkJIHN0cnVjdCBub2RlICpub2RlLCBjb25zdCBjaGFyICpjaGls
ZG5hbWUpCiB7CiAJdW5zaWduZWQgaW50IGk7CiAKIAlmb3IgKGkgPSAwOyBp
IDwgbm9kZS0+Y2hpbGRsZW47IGkgKz0gc3RybGVuKG5vZGUtPmNoaWxkcmVu
K2kpICsgMSkgewogCQlpZiAoc3RyZXEobm9kZS0+Y2hpbGRyZW4raSwgY2hp
bGRuYW1lKSkgewotCQkJcmV0dXJuIHJlbW92ZV9jaGlsZF9lbnRyeShjb25u
LCBub2RlLCBpKTsKKwkJCXJlbW92ZV9jaGlsZF9lbnRyeShjb25uLCBub2Rl
LCBpKTsKKwkJCXJldHVybjsKIAkJfQogCX0KIAljb3JydXB0KGNvbm4sICJD
YW4ndCBmaW5kIGNoaWxkICclcycgaW4gJXMiLCBjaGlsZG5hbWUsIG5vZGUt
Pm5hbWUpOwotCXJldHVybiBFTk9FTlQ7CiB9CiAKK3N0YXRpYyBpbnQgZGVs
ZXRlX25vZGUoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBub2Rl
ICpwYXJlbnQsCisJCSAgICAgICBzdHJ1Y3Qgbm9kZSAqbm9kZSkKK3sKKwlj
aGFyICpuYW1lOworCisJLyogRGVsZXRlIGNoaWxkcmVuLiAqLworCXdoaWxl
IChub2RlLT5jaGlsZGxlbikgeworCQlzdHJ1Y3Qgbm9kZSAqY2hpbGQ7CisK
KwkJbmFtZSA9IHRhbGxvY19hc3ByaW50Zihub2RlLCAiJXMvJXMiLCBub2Rl
LT5uYW1lLAorCQkJCSAgICAgICBub2RlLT5jaGlsZHJlbik7CisJCWNoaWxk
ID0gbmFtZSA/IHJlYWRfbm9kZShjb25uLCBub2RlLCBuYW1lKSA6IE5VTEw7
CisJCWlmIChjaGlsZCkgeworCQkJaWYgKGRlbGV0ZV9ub2RlKGNvbm4sIG5v
ZGUsIGNoaWxkKSkKKwkJCQlyZXR1cm4gZXJybm87CisJCX0gZWxzZSB7CisJ
CQl0cmFjZSgiZGVsZXRlX25vZGU6IEVycm9yIGRlbGV0aW5nIGNoaWxkICcl
cy8lcychXG4iLAorCQkJICAgICAgbm9kZS0+bmFtZSwgbm9kZS0+Y2hpbGRy
ZW4pOworCQkJLyogUXVpdCBkZWxldGluZy4gKi8KKwkJCWVycm5vID0gRU5P
TUVNOworCQkJcmV0dXJuIGVycm5vOworCQl9CisJCXRhbGxvY19mcmVlKG5h
bWUpOworCX0KKworCWRlbGV0ZV9ub2RlX3NpbmdsZShjb25uLCBub2RlKTsK
KwlkZWxldGVfY2hpbGQoY29ubiwgcGFyZW50LCBiYXNlbmFtZShub2RlLT5u
YW1lKSk7CisJdGFsbG9jX2ZyZWUobm9kZSk7CisKKwlyZXR1cm4gMDsKK30K
IAogc3RhdGljIGludCBfcm0oc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIGNv
bnN0IHZvaWQgKmN0eCwgc3RydWN0IG5vZGUgKm5vZGUsCiAJICAgICAgIGNv
bnN0IGNoYXIgKm5hbWUpCiB7Ci0JLyogRGVsZXRlIGZyb20gcGFyZW50IGZp
cnN0LCB0aGVuIGlmIHdlIGNyYXNoLCB0aGUgd29yc3QgdGhhdCBjYW4KLQkg
ICBoYXBwZW4gaXMgdGhlIGNoaWxkIHdpbGwgY29udGludWUgdG8gdGFrZSB1
cCBzcGFjZSwgYnV0IHdpbGwKLQkgICBvdGhlcndpc2UgYmUgdW5yZWFjaGFi
bGUuICovCisJLyoKKwkgKiBEZWxldGluZyBub2RlIGJ5IG5vZGUsIHNvIHRo
ZSByZXN1bHQgaXMgYWx3YXlzIGNvbnNpc3RlbnQgZXZlbiBpbgorCSAqIGNh
c2Ugb2YgYSBmYWlsdXJlLgorCSAqLwogCXN0cnVjdCBub2RlICpwYXJlbnQ7
CiAJY2hhciAqcGFyZW50bmFtZSA9IGdldF9wYXJlbnQoY3R4LCBuYW1lKTsK
IApAQCAtMTE1OCwxMSArMTE2MCwxMyBAQCBzdGF0aWMgaW50IF9ybShzdHJ1
Y3QgY29ubmVjdGlvbiAqY29ubiwgY29uc3Qgdm9pZCAqY3R4LCBzdHJ1Y3Qg
bm9kZSAqbm9kZSwKIAlpZiAoIXBhcmVudCkKIAkJcmV0dXJuIChlcnJubyA9
PSBFTk9NRU0pID8gRU5PTUVNIDogRUlOVkFMOwogCi0JaWYgKGRlbGV0ZV9j
aGlsZChjb25uLCBwYXJlbnQsIGJhc2VuYW1lKG5hbWUpKSkKLQkJcmV0dXJu
IEVJTlZBTDsKLQotCWRlbGV0ZV9ub2RlKGNvbm4sIG5vZGUpOwotCXJldHVy
biAwOworCS8qCisJICogRmlyZSB0aGUgd2F0Y2hlcyBub3csIHdoZW4gd2Ug
Y2FuIHN0aWxsIHNlZSB0aGUgbm9kZSBwZXJtaXNzaW9ucy4KKwkgKiBUaGlz
IGZpbmUgYXMgd2UgYXJlIHNpbmdsZSB0aHJlYWRlZCBhbmQgdGhlIG5leHQg
cG9zc2libGUgcmVhZCB3aWxsCisJICogYmUgaGFuZGxlZCBvbmx5IGFmdGVy
IHRoZSBub2RlIGhhcyBiZWVuIHJlYWxseSByZW1vdmVkLgorCSAqLworCWZp
cmVfd2F0Y2hlcyhjb25uLCBjdHgsIG5hbWUsIHRydWUpOworCXJldHVybiBk
ZWxldGVfbm9kZShjb25uLCBwYXJlbnQsIG5vZGUpOwogfQogCiAKQEAgLTEy
MDAsNyArMTIwNCw2IEBAIHN0YXRpYyBpbnQgZG9fcm0oc3RydWN0IGNvbm5l
Y3Rpb24gKmNvbm4sIHN0cnVjdCBidWZmZXJlZF9kYXRhICppbikKIAlpZiAo
cmV0KQogCQlyZXR1cm4gcmV0OwogCi0JZmlyZV93YXRjaGVzKGNvbm4sIGlu
LCBuYW1lLCB0cnVlKTsKIAlzZW5kX2Fjayhjb25uLCBYU19STSk7CiAKIAly
ZXR1cm4gMDsKZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3Jl
ZF93YXRjaC5jIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3dhdGNoLmMK
aW5kZXggZjJmMWJlZDQ3Yy4uZjBiYmZlN2E2ZCAxMDA2NDQKLS0tIGEvdG9v
bHMveGVuc3RvcmUveGVuc3RvcmVkX3dhdGNoLmMKKysrIGIvdG9vbHMveGVu
c3RvcmUveGVuc3RvcmVkX3dhdGNoLmMKQEAgLTc3LDcgKzc3LDcgQEAgc3Rh
dGljIGJvb2wgaXNfY2hpbGQoY29uc3QgY2hhciAqY2hpbGQsIGNvbnN0IGNo
YXIgKnBhcmVudCkKICAqIFRlbXBvcmFyeSBtZW1vcnkgYWxsb2NhdGlvbnMg
YXJlIGRvbmUgd2l0aCBjdHguCiAgKi8KIHN0YXRpYyB2b2lkIGFkZF9ldmVu
dChzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwKLQkJICAgICAgdm9pZCAqY3R4
LAorCQkgICAgICBjb25zdCB2b2lkICpjdHgsCiAJCSAgICAgIHN0cnVjdCB3
YXRjaCAqd2F0Y2gsCiAJCSAgICAgIGNvbnN0IGNoYXIgKm5hbWUpCiB7CkBA
IC0xMjEsNyArMTIxLDcgQEAgc3RhdGljIHZvaWQgYWRkX2V2ZW50KHN0cnVj
dCBjb25uZWN0aW9uICpjb25uLAogICogQ2hlY2sgd2hldGhlciBhbnkgd2F0
Y2ggZXZlbnRzIGFyZSB0byBiZSBzZW50LgogICogVGVtcG9yYXJ5IG1lbW9y
eSBhbGxvY2F0aW9ucyBhcmUgZG9uZSB3aXRoIGN0eC4KICAqLwotdm9pZCBm
aXJlX3dhdGNoZXMoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHZvaWQgKmN0
eCwgY29uc3QgY2hhciAqbmFtZSwKK3ZvaWQgZmlyZV93YXRjaGVzKHN0cnVj
dCBjb25uZWN0aW9uICpjb25uLCBjb25zdCB2b2lkICpjdHgsIGNvbnN0IGNo
YXIgKm5hbWUsCiAJCSAgYm9vbCByZWN1cnNlKQogewogCXN0cnVjdCBjb25u
ZWN0aW9uICppOwpkaWZmIC0tZ2l0IGEvdG9vbHMveGVuc3RvcmUveGVuc3Rv
cmVkX3dhdGNoLmggYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfd2F0Y2gu
aAppbmRleCBjNzJlYTZhNjg1Li41NGQ0ZWE3ZTBkIDEwMDY0NAotLS0gYS90
b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfd2F0Y2guaAorKysgYi90b29scy94
ZW5zdG9yZS94ZW5zdG9yZWRfd2F0Y2guaApAQCAtMjUsNyArMjUsNyBAQCBp
bnQgZG9fd2F0Y2goc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBi
dWZmZXJlZF9kYXRhICppbik7CiBpbnQgZG9fdW53YXRjaChzdHJ1Y3QgY29u
bmVjdGlvbiAqY29ubiwgc3RydWN0IGJ1ZmZlcmVkX2RhdGEgKmluKTsKIAog
LyogRmlyZSBhbGwgd2F0Y2hlczogcmVjdXJzZSBtZWFucyBhbGwgdGhlIGNo
aWxkcmVuIGFyZSBhZmZlY3RlZCAoaWUuIHJtKS4gKi8KLXZvaWQgZmlyZV93
YXRjaGVzKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCB2b2lkICp0bXAsIGNv
bnN0IGNoYXIgKm5hbWUsCit2b2lkIGZpcmVfd2F0Y2hlcyhzdHJ1Y3QgY29u
bmVjdGlvbiAqY29ubiwgY29uc3Qgdm9pZCAqdG1wLCBjb25zdCBjaGFyICpu
YW1lLAogCQkgIGJvb2wgcmVjdXJzZSk7CiAKIHZvaWQgY29ubl9kZWxldGVf
YWxsX3dhdGNoZXMoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4pOwo=

--=separator
Content-Type: application/octet-stream;
 name="xsa115-c/0007-tools-xenstore-fire-watches-only-when-removing-a-spe.patch"
Content-Disposition: attachment;
 filename="xsa115-c/0007-tools-xenstore-fire-watches-only-when-removing-a-spe.patch"
Content-Transfer-Encoding: base64

RnJvbTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpTdWJqZWN0
OiB0b29scy94ZW5zdG9yZTogZmlyZSB3YXRjaGVzIG9ubHkgd2hlbiByZW1v
dmluZyBhIHNwZWNpZmljIG5vZGUKCkluc3RlYWQgb2YgZmlyaW5nIGFsbCB3
YXRjaGVzIGZvciByZW1vdmluZyBhIHN1YnRyZWUgaW4gb25lIGdvLCBkbyBz
bwpvbmx5IHdoZW4gdGhlIHJlbGF0ZWQgbm9kZSBpcyBiZWluZyByZW1vdmVk
LgoKVGhlIHdhdGNoZXMgZm9yIHRoZSB0b3AtbW9zdCBub2RlIGJlaW5nIHJl
bW92ZWQgaW5jbHVkZSBhbGwgd2F0Y2hlcwppbmNsdWRpbmcgdGhhdCBub2Rl
LCB3aGlsZSB3YXRjaGVzIGZvciBub2RlcyBiZWxvdyB0aGF0IGFyZSBvbmx5
IGZpcmVkCmlmIHRoZXkgYXJlIG1hdGNoaW5nIGV4YWN0bHkuIFRoaXMgYXZv
aWRzIGZpcmluZyBhbnkgd2F0Y2ggbW9yZSB0aGFuCm9uY2Ugd2hlbiByZW1v
dmluZyBhIHN1YnRyZWUuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTExNS4KClNp
Z25lZC1vZmYtYnk6IEp1ZXJnZW4gR3Jvc3MgPGpncm9zc0BzdXNlLmNvbT4K
UmV2aWV3ZWQtYnk6IEp1bGllbiBHcmFsbCA8amdyYWxsQGFtYXpvbi5jb20+
ClJldmlld2VkLWJ5OiBQYXVsIER1cnJhbnQgPHBhdWxAeGVuLm9yZz4KCmRp
ZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jIGIv
dG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYwppbmRleCBjNzQ0MTNi
ZGEyLi5iMzk2MTBjODc2IDEwMDY0NAotLS0gYS90b29scy94ZW5zdG9yZS94
ZW5zdG9yZWRfY29yZS5jCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3Jl
ZF9jb3JlLmMKQEAgLTExMTEsOCArMTExMSw4IEBAIHN0YXRpYyB2b2lkIGRl
bGV0ZV9jaGlsZChzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwKIAljb3JydXB0
KGNvbm4sICJDYW4ndCBmaW5kIGNoaWxkICclcycgaW4gJXMiLCBjaGlsZG5h
bWUsIG5vZGUtPm5hbWUpOwogfQogCi1zdGF0aWMgaW50IGRlbGV0ZV9ub2Rl
KHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3Qgbm9kZSAqcGFyZW50
LAotCQkgICAgICAgc3RydWN0IG5vZGUgKm5vZGUpCitzdGF0aWMgaW50IGRl
bGV0ZV9ub2RlKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBjb25zdCB2b2lk
ICpjdHgsCisJCSAgICAgICBzdHJ1Y3Qgbm9kZSAqcGFyZW50LCBzdHJ1Y3Qg
bm9kZSAqbm9kZSkKIHsKIAljaGFyICpuYW1lOwogCkBAIC0xMTI0LDcgKzEx
MjQsNyBAQCBzdGF0aWMgaW50IGRlbGV0ZV9ub2RlKHN0cnVjdCBjb25uZWN0
aW9uICpjb25uLCBzdHJ1Y3Qgbm9kZSAqcGFyZW50LAogCQkJCSAgICAgICBu
b2RlLT5jaGlsZHJlbik7CiAJCWNoaWxkID0gbmFtZSA/IHJlYWRfbm9kZShj
b25uLCBub2RlLCBuYW1lKSA6IE5VTEw7CiAJCWlmIChjaGlsZCkgewotCQkJ
aWYgKGRlbGV0ZV9ub2RlKGNvbm4sIG5vZGUsIGNoaWxkKSkKKwkJCWlmIChk
ZWxldGVfbm9kZShjb25uLCBjdHgsIG5vZGUsIGNoaWxkKSkKIAkJCQlyZXR1
cm4gZXJybm87CiAJCX0gZWxzZSB7CiAJCQl0cmFjZSgiZGVsZXRlX25vZGU6
IEVycm9yIGRlbGV0aW5nIGNoaWxkICclcy8lcychXG4iLApAQCAtMTEzNiw2
ICsxMTM2LDcgQEAgc3RhdGljIGludCBkZWxldGVfbm9kZShzdHJ1Y3QgY29u
bmVjdGlvbiAqY29ubiwgc3RydWN0IG5vZGUgKnBhcmVudCwKIAkJdGFsbG9j
X2ZyZWUobmFtZSk7CiAJfQogCisJZmlyZV93YXRjaGVzKGNvbm4sIGN0eCwg
bm9kZS0+bmFtZSwgdHJ1ZSk7CiAJZGVsZXRlX25vZGVfc2luZ2xlKGNvbm4s
IG5vZGUpOwogCWRlbGV0ZV9jaGlsZChjb25uLCBwYXJlbnQsIGJhc2VuYW1l
KG5vZGUtPm5hbWUpKTsKIAl0YWxsb2NfZnJlZShub2RlKTsKQEAgLTExNjUs
OCArMTE2Niw4IEBAIHN0YXRpYyBpbnQgX3JtKHN0cnVjdCBjb25uZWN0aW9u
ICpjb25uLCBjb25zdCB2b2lkICpjdHgsIHN0cnVjdCBub2RlICpub2RlLAog
CSAqIFRoaXMgZmluZSBhcyB3ZSBhcmUgc2luZ2xlIHRocmVhZGVkIGFuZCB0
aGUgbmV4dCBwb3NzaWJsZSByZWFkIHdpbGwKIAkgKiBiZSBoYW5kbGVkIG9u
bHkgYWZ0ZXIgdGhlIG5vZGUgaGFzIGJlZW4gcmVhbGx5IHJlbW92ZWQuCiAJ
ICovCi0JZmlyZV93YXRjaGVzKGNvbm4sIGN0eCwgbmFtZSwgdHJ1ZSk7Ci0J
cmV0dXJuIGRlbGV0ZV9ub2RlKGNvbm4sIHBhcmVudCwgbm9kZSk7CisJZmly
ZV93YXRjaGVzKGNvbm4sIGN0eCwgbmFtZSwgZmFsc2UpOworCXJldHVybiBk
ZWxldGVfbm9kZShjb25uLCBjdHgsIHBhcmVudCwgbm9kZSk7CiB9CiAKIApk
aWZmIC0tZ2l0IGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3dhdGNoLmMg
Yi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfd2F0Y2guYwppbmRleCBmMGJi
ZmU3YTZkLi4zODM2Njc1NDU5IDEwMDY0NAotLS0gYS90b29scy94ZW5zdG9y
ZS94ZW5zdG9yZWRfd2F0Y2guYworKysgYi90b29scy94ZW5zdG9yZS94ZW5z
dG9yZWRfd2F0Y2guYwpAQCAtMTIyLDcgKzEyMiw3IEBAIHN0YXRpYyB2b2lk
IGFkZF9ldmVudChzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwKICAqIFRlbXBv
cmFyeSBtZW1vcnkgYWxsb2NhdGlvbnMgYXJlIGRvbmUgd2l0aCBjdHguCiAg
Ki8KIHZvaWQgZmlyZV93YXRjaGVzKHN0cnVjdCBjb25uZWN0aW9uICpjb25u
LCBjb25zdCB2b2lkICpjdHgsIGNvbnN0IGNoYXIgKm5hbWUsCi0JCSAgYm9v
bCByZWN1cnNlKQorCQkgIGJvb2wgZXhhY3QpCiB7CiAJc3RydWN0IGNvbm5l
Y3Rpb24gKmk7CiAJc3RydWN0IHdhdGNoICp3YXRjaDsKQEAgLTEzNCwxMCAr
MTM0LDEzIEBAIHZvaWQgZmlyZV93YXRjaGVzKHN0cnVjdCBjb25uZWN0aW9u
ICpjb25uLCBjb25zdCB2b2lkICpjdHgsIGNvbnN0IGNoYXIgKm5hbWUsCiAJ
LyogQ3JlYXRlIGFuIGV2ZW50IGZvciBlYWNoIHdhdGNoLiAqLwogCWxpc3Rf
Zm9yX2VhY2hfZW50cnkoaSwgJmNvbm5lY3Rpb25zLCBsaXN0KSB7CiAJCWxp
c3RfZm9yX2VhY2hfZW50cnkod2F0Y2gsICZpLT53YXRjaGVzLCBsaXN0KSB7
Ci0JCQlpZiAoaXNfY2hpbGQobmFtZSwgd2F0Y2gtPm5vZGUpKQotCQkJCWFk
ZF9ldmVudChpLCBjdHgsIHdhdGNoLCBuYW1lKTsKLQkJCWVsc2UgaWYgKHJl
Y3Vyc2UgJiYgaXNfY2hpbGQod2F0Y2gtPm5vZGUsIG5hbWUpKQotCQkJCWFk
ZF9ldmVudChpLCBjdHgsIHdhdGNoLCB3YXRjaC0+bm9kZSk7CisJCQlpZiAo
ZXhhY3QpIHsKKwkJCQlpZiAoc3RyZXEobmFtZSwgd2F0Y2gtPm5vZGUpKQor
CQkJCQlhZGRfZXZlbnQoaSwgY3R4LCB3YXRjaCwgbmFtZSk7CisJCQl9IGVs
c2UgeworCQkJCWlmIChpc19jaGlsZChuYW1lLCB3YXRjaC0+bm9kZSkpCisJ
CQkJCWFkZF9ldmVudChpLCBjdHgsIHdhdGNoLCBuYW1lKTsKKwkJCX0KIAkJ
fQogCX0KIH0KZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3Jl
ZF93YXRjaC5oIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3dhdGNoLmgK
aW5kZXggNTRkNGVhN2UwZC4uMWIzYzgwZDNkZCAxMDA2NDQKLS0tIGEvdG9v
bHMveGVuc3RvcmUveGVuc3RvcmVkX3dhdGNoLmgKKysrIGIvdG9vbHMveGVu
c3RvcmUveGVuc3RvcmVkX3dhdGNoLmgKQEAgLTI0LDkgKzI0LDkgQEAKIGlu
dCBkb193YXRjaChzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgc3RydWN0IGJ1
ZmZlcmVkX2RhdGEgKmluKTsKIGludCBkb191bndhdGNoKHN0cnVjdCBjb25u
ZWN0aW9uICpjb25uLCBzdHJ1Y3QgYnVmZmVyZWRfZGF0YSAqaW4pOwogCi0v
KiBGaXJlIGFsbCB3YXRjaGVzOiByZWN1cnNlIG1lYW5zIGFsbCB0aGUgY2hp
bGRyZW4gYXJlIGFmZmVjdGVkIChpZS4gcm0pLiAqLworLyogRmlyZSBhbGwg
d2F0Y2hlczogIWV4YWN0IG1lYW5zIGFsbCB0aGUgY2hpbGRyZW4gYXJlIGFm
ZmVjdGVkIChpZS4gcm0pLiAqLwogdm9pZCBmaXJlX3dhdGNoZXMoc3RydWN0
IGNvbm5lY3Rpb24gKmNvbm4sIGNvbnN0IHZvaWQgKnRtcCwgY29uc3QgY2hh
ciAqbmFtZSwKLQkJICBib29sIHJlY3Vyc2UpOworCQkgIGJvb2wgZXhhY3Qp
OwogCiB2b2lkIGNvbm5fZGVsZXRlX2FsbF93YXRjaGVzKHN0cnVjdCBjb25u
ZWN0aW9uICpjb25uKTsKIAo=

--=separator
Content-Type: application/octet-stream;
 name="xsa115-c/0008-tools-xenstore-introduce-node_perms-structure.patch"
Content-Disposition: attachment;
 filename="xsa115-c/0008-tools-xenstore-introduce-node_perms-structure.patch"
Content-Transfer-Encoding: base64

RnJvbTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpTdWJqZWN0
OiB0b29scy94ZW5zdG9yZTogaW50cm9kdWNlIG5vZGVfcGVybXMgc3RydWN0
dXJlCgpUaGVyZSBhcmUgc2V2ZXJhbCBwbGFjZXMgaW4geGVuc3RvcmVkIHVz
aW5nIGEgcGVybWlzc2lvbiBhcnJheSBhbmQgdGhlCnNpemUgb2YgdGhhdCBh
cnJheS4gSW50cm9kdWNlIGEgbmV3IHN0cnVjdCBub2RlX3Blcm1zIGNvbnRh
aW5pbmcgYm90aC4KClRoaXMgaXMgcGFydCBvZiBYU0EtMTE1LgoKU2lnbmVk
LW9mZi1ieTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpBY2tl
ZC1ieTogSnVsaWVuIEdyYWxsIDxqZ3JhbGxAYW1hem9uLmNvbT4KUmV2aWV3
ZWQtYnk6IFBhdWwgRHVycmFudCA8cGF1bEB4ZW4ub3JnPgoKZGlmZiAtLWdp
dCBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMgYi90b29scy94
ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jCmluZGV4IGIzOTYxMGM4NzYuLjA2
ZTkzN2RlNjYgMTAwNjQ0Ci0tLSBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3Jl
ZF9jb3JlLmMKKysrIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUu
YwpAQCAtMzk3LDE0ICszOTcsMTQgQEAgc3RhdGljIHN0cnVjdCBub2RlICpy
ZWFkX25vZGUoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIGNvbnN0IHZvaWQg
KmN0eCwKIAkvKiBEYXRhbGVuLCBjaGlsZGxlbiwgbnVtYmVyIG9mIHBlcm1p
c3Npb25zICovCiAJaGRyID0gKHZvaWQgKilkYXRhLmRwdHI7CiAJbm9kZS0+
Z2VuZXJhdGlvbiA9IGhkci0+Z2VuZXJhdGlvbjsKLQlub2RlLT5udW1fcGVy
bXMgPSBoZHItPm51bV9wZXJtczsKKwlub2RlLT5wZXJtcy5udW0gPSBoZHIt
Pm51bV9wZXJtczsKIAlub2RlLT5kYXRhbGVuID0gaGRyLT5kYXRhbGVuOwog
CW5vZGUtPmNoaWxkbGVuID0gaGRyLT5jaGlsZGxlbjsKIAogCS8qIFBlcm1p
c3Npb25zIGFyZSBzdHJ1Y3QgeHNfcGVybWlzc2lvbnMuICovCi0Jbm9kZS0+
cGVybXMgPSBoZHItPnBlcm1zOworCW5vZGUtPnBlcm1zLnAgPSBoZHItPnBl
cm1zOwogCS8qIERhdGEgaXMgYmluYXJ5IGJsb2IgKHVzdWFsbHkgYXNjaWks
IG5vIG51bCkuICovCi0Jbm9kZS0+ZGF0YSA9IG5vZGUtPnBlcm1zICsgbm9k
ZS0+bnVtX3Blcm1zOworCW5vZGUtPmRhdGEgPSBub2RlLT5wZXJtcy5wICsg
bm9kZS0+cGVybXMubnVtOwogCS8qIENoaWxkcmVuIGlzIHN0cmluZ3MsIG51
bCBzZXBhcmF0ZWQuICovCiAJbm9kZS0+Y2hpbGRyZW4gPSBub2RlLT5kYXRh
ICsgbm9kZS0+ZGF0YWxlbjsKIApAQCAtNDIxLDcgKzQyMSw3IEBAIGludCB3
cml0ZV9ub2RlX3JhdyhzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgVERCX0RB
VEEgKmtleSwgc3RydWN0IG5vZGUgKm5vZGUsCiAJc3RydWN0IHhzX3RkYl9y
ZWNvcmRfaGRyICpoZHI7CiAKIAlkYXRhLmRzaXplID0gc2l6ZW9mKCpoZHIp
Ci0JCSsgbm9kZS0+bnVtX3Blcm1zKnNpemVvZihub2RlLT5wZXJtc1swXSkK
KwkJKyBub2RlLT5wZXJtcy5udW0gKiBzaXplb2Yobm9kZS0+cGVybXMucFsw
XSkKIAkJKyBub2RlLT5kYXRhbGVuICsgbm9kZS0+Y2hpbGRsZW47CiAKIAlp
ZiAoIW5vX3F1b3RhX2NoZWNrICYmIGRvbWFpbl9pc191bnByaXZpbGVnZWQo
Y29ubikgJiYKQEAgLTQzMywxMiArNDMzLDEzIEBAIGludCB3cml0ZV9ub2Rl
X3JhdyhzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgVERCX0RBVEEgKmtleSwg
c3RydWN0IG5vZGUgKm5vZGUsCiAJZGF0YS5kcHRyID0gdGFsbG9jX3NpemUo
bm9kZSwgZGF0YS5kc2l6ZSk7CiAJaGRyID0gKHZvaWQgKilkYXRhLmRwdHI7
CiAJaGRyLT5nZW5lcmF0aW9uID0gbm9kZS0+Z2VuZXJhdGlvbjsKLQloZHIt
Pm51bV9wZXJtcyA9IG5vZGUtPm51bV9wZXJtczsKKwloZHItPm51bV9wZXJt
cyA9IG5vZGUtPnBlcm1zLm51bTsKIAloZHItPmRhdGFsZW4gPSBub2RlLT5k
YXRhbGVuOwogCWhkci0+Y2hpbGRsZW4gPSBub2RlLT5jaGlsZGxlbjsKIAot
CW1lbWNweShoZHItPnBlcm1zLCBub2RlLT5wZXJtcywgbm9kZS0+bnVtX3Bl
cm1zKnNpemVvZihub2RlLT5wZXJtc1swXSkpOwotCXAgPSBoZHItPnBlcm1z
ICsgbm9kZS0+bnVtX3Blcm1zOworCW1lbWNweShoZHItPnBlcm1zLCBub2Rl
LT5wZXJtcy5wLAorCSAgICAgICBub2RlLT5wZXJtcy5udW0gKiBzaXplb2Yo
Km5vZGUtPnBlcm1zLnApKTsKKwlwID0gaGRyLT5wZXJtcyArIG5vZGUtPnBl
cm1zLm51bTsKIAltZW1jcHkocCwgbm9kZS0+ZGF0YSwgbm9kZS0+ZGF0YWxl
bik7CiAJcCArPSBub2RlLT5kYXRhbGVuOwogCW1lbWNweShwLCBub2RlLT5j
aGlsZHJlbiwgbm9kZS0+Y2hpbGRsZW4pOwpAQCAtNDY0LDIzICs0NjUsMjIg
QEAgc3RhdGljIGludCB3cml0ZV9ub2RlKHN0cnVjdCBjb25uZWN0aW9uICpj
b25uLCBzdHJ1Y3Qgbm9kZSAqbm9kZSwKIH0KIAogc3RhdGljIGVudW0geHNf
cGVybV90eXBlIHBlcm1fZm9yX2Nvbm4oc3RydWN0IGNvbm5lY3Rpb24gKmNv
bm4sCi0JCQkJICAgICAgIHN0cnVjdCB4c19wZXJtaXNzaW9ucyAqcGVybXMs
Ci0JCQkJICAgICAgIHVuc2lnbmVkIGludCBudW0pCisJCQkJICAgICAgIGNv
bnN0IHN0cnVjdCBub2RlX3Blcm1zICpwZXJtcykKIHsKIAl1bnNpZ25lZCBp
bnQgaTsKIAllbnVtIHhzX3Blcm1fdHlwZSBtYXNrID0gWFNfUEVSTV9SRUFE
fFhTX1BFUk1fV1JJVEV8WFNfUEVSTV9PV05FUjsKIAogCS8qIE93bmVycyBh
bmQgdG9vbHMgZ2V0IGl0IGFsbC4uLiAqLwotCWlmICghZG9tYWluX2lzX3Vu
cHJpdmlsZWdlZChjb25uKSB8fCBwZXJtc1swXS5pZCA9PSBjb25uLT5pZAot
ICAgICAgICAgICAgICAgIHx8IChjb25uLT50YXJnZXQgJiYgcGVybXNbMF0u
aWQgPT0gY29ubi0+dGFyZ2V0LT5pZCkpCisJaWYgKCFkb21haW5faXNfdW5w
cml2aWxlZ2VkKGNvbm4pIHx8IHBlcm1zLT5wWzBdLmlkID09IGNvbm4tPmlk
CisgICAgICAgICAgICAgICAgfHwgKGNvbm4tPnRhcmdldCAmJiBwZXJtcy0+
cFswXS5pZCA9PSBjb25uLT50YXJnZXQtPmlkKSkKIAkJcmV0dXJuIChYU19Q
RVJNX1JFQUR8WFNfUEVSTV9XUklURXxYU19QRVJNX09XTkVSKSAmIG1hc2s7
CiAKLQlmb3IgKGkgPSAxOyBpIDwgbnVtOyBpKyspCi0JCWlmIChwZXJtc1tp
XS5pZCA9PSBjb25uLT5pZAotICAgICAgICAgICAgICAgICAgICAgICAgfHwg
KGNvbm4tPnRhcmdldCAmJiBwZXJtc1tpXS5pZCA9PSBjb25uLT50YXJnZXQt
PmlkKSkKLQkJCXJldHVybiBwZXJtc1tpXS5wZXJtcyAmIG1hc2s7CisJZm9y
IChpID0gMTsgaSA8IHBlcm1zLT5udW07IGkrKykKKwkJaWYgKHBlcm1zLT5w
W2ldLmlkID09IGNvbm4tPmlkCisgICAgICAgICAgICAgICAgICAgICAgICB8
fCAoY29ubi0+dGFyZ2V0ICYmIHBlcm1zLT5wW2ldLmlkID09IGNvbm4tPnRh
cmdldC0+aWQpKQorCQkJcmV0dXJuIHBlcm1zLT5wW2ldLnBlcm1zICYgbWFz
azsKIAotCXJldHVybiBwZXJtc1swXS5wZXJtcyAmIG1hc2s7CisJcmV0dXJu
IHBlcm1zLT5wWzBdLnBlcm1zICYgbWFzazsKIH0KIAogLyoKQEAgLTUyNyw3
ICs1MjcsNyBAQCBzdGF0aWMgaW50IGFza19wYXJlbnRzKHN0cnVjdCBjb25u
ZWN0aW9uICpjb25uLCBjb25zdCB2b2lkICpjdHgsCiAJCXJldHVybiAwOwog
CX0KIAotCSpwZXJtID0gcGVybV9mb3JfY29ubihjb25uLCBub2RlLT5wZXJt
cywgbm9kZS0+bnVtX3Blcm1zKTsKKwkqcGVybSA9IHBlcm1fZm9yX2Nvbm4o
Y29ubiwgJm5vZGUtPnBlcm1zKTsKIAlyZXR1cm4gMDsKIH0KIApAQCAtNTcz
LDggKzU3Myw3IEBAIHN0cnVjdCBub2RlICpnZXRfbm9kZShzdHJ1Y3QgY29u
bmVjdGlvbiAqY29ubiwKIAlub2RlID0gcmVhZF9ub2RlKGNvbm4sIGN0eCwg
bmFtZSk7CiAJLyogSWYgd2UgZG9uJ3QgaGF2ZSBwZXJtaXNzaW9uLCB3ZSBk
b24ndCBoYXZlIG5vZGUuICovCiAJaWYgKG5vZGUpIHsKLQkJaWYgKChwZXJt
X2Zvcl9jb25uKGNvbm4sIG5vZGUtPnBlcm1zLCBub2RlLT5udW1fcGVybXMp
ICYgcGVybSkKLQkJICAgICE9IHBlcm0pIHsKKwkJaWYgKChwZXJtX2Zvcl9j
b25uKGNvbm4sICZub2RlLT5wZXJtcykgJiBwZXJtKSAhPSBwZXJtKSB7CiAJ
CQllcnJubyA9IEVBQ0NFUzsKIAkJCW5vZGUgPSBOVUxMOwogCQl9CkBAIC03
NTAsMTYgKzc0OSwxNSBAQCBjb25zdCBjaGFyICpvbmVhcmcoc3RydWN0IGJ1
ZmZlcmVkX2RhdGEgKmluKQogCXJldHVybiBpbi0+YnVmZmVyOwogfQogCi1z
dGF0aWMgY2hhciAqcGVybXNfdG9fc3RyaW5ncyhjb25zdCB2b2lkICpjdHgs
Ci0JCQkgICAgICBzdHJ1Y3QgeHNfcGVybWlzc2lvbnMgKnBlcm1zLCB1bnNp
Z25lZCBpbnQgbnVtLAorc3RhdGljIGNoYXIgKnBlcm1zX3RvX3N0cmluZ3Mo
Y29uc3Qgdm9pZCAqY3R4LCBjb25zdCBzdHJ1Y3Qgbm9kZV9wZXJtcyAqcGVy
bXMsCiAJCQkgICAgICB1bnNpZ25lZCBpbnQgKmxlbikKIHsKIAl1bnNpZ25l
ZCBpbnQgaTsKIAljaGFyICpzdHJpbmdzID0gTlVMTDsKIAljaGFyIGJ1ZmZl
cltNQVhfU1RSTEVOKHVuc2lnbmVkIGludCkgKyAxXTsKIAotCWZvciAoKmxl
biA9IDAsIGkgPSAwOyBpIDwgbnVtOyBpKyspIHsKLQkJaWYgKCF4c19wZXJt
X3RvX3N0cmluZygmcGVybXNbaV0sIGJ1ZmZlciwgc2l6ZW9mKGJ1ZmZlcikp
KQorCWZvciAoKmxlbiA9IDAsIGkgPSAwOyBpIDwgcGVybXMtPm51bTsgaSsr
KSB7CisJCWlmICgheHNfcGVybV90b19zdHJpbmcoJnBlcm1zLT5wW2ldLCBi
dWZmZXIsIHNpemVvZihidWZmZXIpKSkKIAkJCXJldHVybiBOVUxMOwogCiAJ
CXN0cmluZ3MgPSB0YWxsb2NfcmVhbGxvYyhjdHgsIHN0cmluZ3MsIGNoYXIs
CkBAIC05MzgsMTMgKzkzNiwxMyBAQCBzdGF0aWMgc3RydWN0IG5vZGUgKmNv
bnN0cnVjdF9ub2RlKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBjb25zdCB2
b2lkICpjdHgsCiAJCWdvdG8gbm9tZW07CiAKIAkvKiBJbmhlcml0IHBlcm1p
c3Npb25zLCBleGNlcHQgdW5wcml2aWxlZ2VkIGRvbWFpbnMgb3duIHdoYXQg
dGhleSBjcmVhdGUgKi8KLQlub2RlLT5udW1fcGVybXMgPSBwYXJlbnQtPm51
bV9wZXJtczsKLQlub2RlLT5wZXJtcyA9IHRhbGxvY19tZW1kdXAobm9kZSwg
cGFyZW50LT5wZXJtcywKLQkJCQkgICAgbm9kZS0+bnVtX3Blcm1zICogc2l6
ZW9mKG5vZGUtPnBlcm1zWzBdKSk7Ci0JaWYgKCFub2RlLT5wZXJtcykKKwlu
b2RlLT5wZXJtcy5udW0gPSBwYXJlbnQtPnBlcm1zLm51bTsKKwlub2RlLT5w
ZXJtcy5wID0gdGFsbG9jX21lbWR1cChub2RlLCBwYXJlbnQtPnBlcm1zLnAs
CisJCQkJICAgICAgbm9kZS0+cGVybXMubnVtICogc2l6ZW9mKCpub2RlLT5w
ZXJtcy5wKSk7CisJaWYgKCFub2RlLT5wZXJtcy5wKQogCQlnb3RvIG5vbWVt
OwogCWlmIChkb21haW5faXNfdW5wcml2aWxlZ2VkKGNvbm4pKQotCQlub2Rl
LT5wZXJtc1swXS5pZCA9IGNvbm4tPmlkOworCQlub2RlLT5wZXJtcy5wWzBd
LmlkID0gY29ubi0+aWQ7CiAKIAkvKiBObyBjaGlsZHJlbiwgbm8gZGF0YSAq
LwogCW5vZGUtPmNoaWxkcmVuID0gbm9kZS0+ZGF0YSA9IE5VTEw7CkBAIC0x
MjIxLDcgKzEyMTksNyBAQCBzdGF0aWMgaW50IGRvX2dldF9wZXJtcyhzdHJ1
Y3QgY29ubmVjdGlvbiAqY29ubiwgc3RydWN0IGJ1ZmZlcmVkX2RhdGEgKmlu
KQogCWlmICghbm9kZSkKIAkJcmV0dXJuIGVycm5vOwogCi0Jc3RyaW5ncyA9
IHBlcm1zX3RvX3N0cmluZ3Mobm9kZSwgbm9kZS0+cGVybXMsIG5vZGUtPm51
bV9wZXJtcywgJmxlbik7CisJc3RyaW5ncyA9IHBlcm1zX3RvX3N0cmluZ3Mo
bm9kZSwgJm5vZGUtPnBlcm1zLCAmbGVuKTsKIAlpZiAoIXN0cmluZ3MpCiAJ
CXJldHVybiBlcnJubzsKIApAQCAtMTIzMiwxMyArMTIzMCwxMiBAQCBzdGF0
aWMgaW50IGRvX2dldF9wZXJtcyhzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwg
c3RydWN0IGJ1ZmZlcmVkX2RhdGEgKmluKQogCiBzdGF0aWMgaW50IGRvX3Nl
dF9wZXJtcyhzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgc3RydWN0IGJ1ZmZl
cmVkX2RhdGEgKmluKQogewotCXVuc2lnbmVkIGludCBudW07Ci0Jc3RydWN0
IHhzX3Blcm1pc3Npb25zICpwZXJtczsKKwlzdHJ1Y3Qgbm9kZV9wZXJtcyBw
ZXJtczsKIAljaGFyICpuYW1lLCAqcGVybXN0cjsKIAlzdHJ1Y3Qgbm9kZSAq
bm9kZTsKIAotCW51bSA9IHhzX2NvdW50X3N0cmluZ3MoaW4tPmJ1ZmZlciwg
aW4tPnVzZWQpOwotCWlmIChudW0gPCAyKQorCXBlcm1zLm51bSA9IHhzX2Nv
dW50X3N0cmluZ3MoaW4tPmJ1ZmZlciwgaW4tPnVzZWQpOworCWlmIChwZXJt
cy5udW0gPCAyKQogCQlyZXR1cm4gRUlOVkFMOwogCiAJLyogRmlyc3QgYXJn
IGlzIG5vZGUgbmFtZS4gKi8KQEAgLTEyNDksMjEgKzEyNDYsMjEgQEAgc3Rh
dGljIGludCBkb19zZXRfcGVybXMoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4s
IHN0cnVjdCBidWZmZXJlZF9kYXRhICppbikKIAkJcmV0dXJuIGVycm5vOwog
CiAJcGVybXN0ciA9IGluLT5idWZmZXIgKyBzdHJsZW4oaW4tPmJ1ZmZlcikg
KyAxOwotCW51bS0tOworCXBlcm1zLm51bS0tOwogCi0JcGVybXMgPSB0YWxs
b2NfYXJyYXkobm9kZSwgc3RydWN0IHhzX3Blcm1pc3Npb25zLCBudW0pOwot
CWlmICghcGVybXMpCisJcGVybXMucCA9IHRhbGxvY19hcnJheShub2RlLCBz
dHJ1Y3QgeHNfcGVybWlzc2lvbnMsIHBlcm1zLm51bSk7CisJaWYgKCFwZXJt
cy5wKQogCQlyZXR1cm4gRU5PTUVNOwotCWlmICgheHNfc3RyaW5nc190b19w
ZXJtcyhwZXJtcywgbnVtLCBwZXJtc3RyKSkKKwlpZiAoIXhzX3N0cmluZ3Nf
dG9fcGVybXMocGVybXMucCwgcGVybXMubnVtLCBwZXJtc3RyKSkKIAkJcmV0
dXJuIGVycm5vOwogCiAJLyogVW5wcml2aWxlZ2VkIGRvbWFpbnMgbWF5IG5v
dCBjaGFuZ2UgdGhlIG93bmVyLiAqLwotCWlmIChkb21haW5faXNfdW5wcml2
aWxlZ2VkKGNvbm4pICYmIHBlcm1zWzBdLmlkICE9IG5vZGUtPnBlcm1zWzBd
LmlkKQorCWlmIChkb21haW5faXNfdW5wcml2aWxlZ2VkKGNvbm4pICYmCisJ
ICAgIHBlcm1zLnBbMF0uaWQgIT0gbm9kZS0+cGVybXMucFswXS5pZCkKIAkJ
cmV0dXJuIEVQRVJNOwogCiAJZG9tYWluX2VudHJ5X2RlYyhjb25uLCBub2Rl
KTsKIAlub2RlLT5wZXJtcyA9IHBlcm1zOwotCW5vZGUtPm51bV9wZXJtcyA9
IG51bTsKIAlkb21haW5fZW50cnlfaW5jKGNvbm4sIG5vZGUpOwogCiAJaWYg
KHdyaXRlX25vZGUoY29ubiwgbm9kZSwgZmFsc2UpKQpAQCAtMTUzNiw4ICsx
NTMzLDggQEAgc3RhdGljIHZvaWQgbWFudWFsX25vZGUoY29uc3QgY2hhciAq
bmFtZSwgY29uc3QgY2hhciAqY2hpbGQpCiAJCWJhcmZfcGVycm9yKCJDb3Vs
ZCBub3QgYWxsb2NhdGUgaW5pdGlhbCBub2RlICVzIiwgbmFtZSk7CiAKIAlu
b2RlLT5uYW1lID0gbmFtZTsKLQlub2RlLT5wZXJtcyA9ICZwZXJtczsKLQlu
b2RlLT5udW1fcGVybXMgPSAxOworCW5vZGUtPnBlcm1zLnAgPSAmcGVybXM7
CisJbm9kZS0+cGVybXMubnVtID0gMTsKIAlub2RlLT5jaGlsZHJlbiA9IChj
aGFyICopY2hpbGQ7CiAJaWYgKGNoaWxkKQogCQlub2RlLT5jaGlsZGxlbiA9
IHN0cmxlbihjaGlsZCkgKyAxOwpkaWZmIC0tZ2l0IGEvdG9vbHMveGVuc3Rv
cmUveGVuc3RvcmVkX2NvcmUuaCBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3Jl
ZF9jb3JlLmgKaW5kZXggNTNhYWZhMWQ5Yi4uYTI5MWYxNWNlNyAxMDA2NDQK
LS0tIGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuaAorKysgYi90
b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5oCkBAIC0xMDYsNiArMTA2
LDExIEBAIHN0cnVjdCBjb25uZWN0aW9uCiB9OwogZXh0ZXJuIHN0cnVjdCBs
aXN0X2hlYWQgY29ubmVjdGlvbnM7CiAKK3N0cnVjdCBub2RlX3Blcm1zIHsK
Kwl1bnNpZ25lZCBpbnQgbnVtOworCXN0cnVjdCB4c19wZXJtaXNzaW9ucyAq
cDsKK307CisKIHN0cnVjdCBub2RlIHsKIAljb25zdCBjaGFyICpuYW1lOwog
CkBAIC0xMTcsOCArMTIyLDcgQEAgc3RydWN0IG5vZGUgewogI2RlZmluZSBO
T19HRU5FUkFUSU9OIH4oKHVpbnQ2NF90KTApCiAKIAkvKiBQZXJtaXNzaW9u
cy4gKi8KLQl1bnNpZ25lZCBpbnQgbnVtX3Blcm1zOwotCXN0cnVjdCB4c19w
ZXJtaXNzaW9ucyAqcGVybXM7CisJc3RydWN0IG5vZGVfcGVybXMgcGVybXM7
CiAKIAkvKiBDb250ZW50cy4gKi8KIAl1bnNpZ25lZCBpbnQgZGF0YWxlbjsK
ZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9kb21haW4u
YyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9kb21haW4uYwppbmRleCAz
NjRhZDhlYTYzLi43NmJkZDQ2YzhkIDEwMDY0NAotLS0gYS90b29scy94ZW5z
dG9yZS94ZW5zdG9yZWRfZG9tYWluLmMKKysrIGIvdG9vbHMveGVuc3RvcmUv
eGVuc3RvcmVkX2RvbWFpbi5jCkBAIC02NTAsMTIgKzY1MCwxMiBAQCB2b2lk
IGRvbWFpbl9lbnRyeV9pbmMoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0
cnVjdCBub2RlICpub2RlKQogCWlmICghY29ubikKIAkJcmV0dXJuOwogCi0J
aWYgKG5vZGUtPnBlcm1zICYmIG5vZGUtPnBlcm1zWzBdLmlkICE9IGNvbm4t
PmlkKSB7CisJaWYgKG5vZGUtPnBlcm1zLnAgJiYgbm9kZS0+cGVybXMucFsw
XS5pZCAhPSBjb25uLT5pZCkgewogCQlpZiAoY29ubi0+dHJhbnNhY3Rpb24p
IHsKIAkJCXRyYW5zYWN0aW9uX2VudHJ5X2luYyhjb25uLT50cmFuc2FjdGlv
biwKLQkJCQlub2RlLT5wZXJtc1swXS5pZCk7CisJCQkJbm9kZS0+cGVybXMu
cFswXS5pZCk7CiAJCX0gZWxzZSB7Ci0JCQlkID0gZmluZF9kb21haW5fYnlf
ZG9taWQobm9kZS0+cGVybXNbMF0uaWQpOworCQkJZCA9IGZpbmRfZG9tYWlu
X2J5X2RvbWlkKG5vZGUtPnBlcm1zLnBbMF0uaWQpOwogCQkJaWYgKGQpCiAJ
CQkJZC0+bmJlbnRyeSsrOwogCQl9CkBAIC02NzYsMTIgKzY3NiwxMiBAQCB2
b2lkIGRvbWFpbl9lbnRyeV9kZWMoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4s
IHN0cnVjdCBub2RlICpub2RlKQogCWlmICghY29ubikKIAkJcmV0dXJuOwog
Ci0JaWYgKG5vZGUtPnBlcm1zICYmIG5vZGUtPnBlcm1zWzBdLmlkICE9IGNv
bm4tPmlkKSB7CisJaWYgKG5vZGUtPnBlcm1zLnAgJiYgbm9kZS0+cGVybXMu
cFswXS5pZCAhPSBjb25uLT5pZCkgewogCQlpZiAoY29ubi0+dHJhbnNhY3Rp
b24pIHsKIAkJCXRyYW5zYWN0aW9uX2VudHJ5X2RlYyhjb25uLT50cmFuc2Fj
dGlvbiwKLQkJCQlub2RlLT5wZXJtc1swXS5pZCk7CisJCQkJbm9kZS0+cGVy
bXMucFswXS5pZCk7CiAJCX0gZWxzZSB7Ci0JCQlkID0gZmluZF9kb21haW5f
YnlfZG9taWQobm9kZS0+cGVybXNbMF0uaWQpOworCQkJZCA9IGZpbmRfZG9t
YWluX2J5X2RvbWlkKG5vZGUtPnBlcm1zLnBbMF0uaWQpOwogCQkJaWYgKGQg
JiYgZC0+bmJlbnRyeSkKIAkJCQlkLT5uYmVudHJ5LS07CiAJCX0K

--=separator
Content-Type: application/octet-stream;
 name="xsa115-c/0009-tools-xenstore-allow-special-watches-for-privileged-.patch"
Content-Disposition: attachment;
 filename="xsa115-c/0009-tools-xenstore-allow-special-watches-for-privileged-.patch"
Content-Transfer-Encoding: base64

RnJvbTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpTdWJqZWN0
OiB0b29scy94ZW5zdG9yZTogYWxsb3cgc3BlY2lhbCB3YXRjaGVzIGZvciBw
cml2aWxlZ2VkIGNhbGxlcnMgb25seQoKVGhlIHNwZWNpYWwgd2F0Y2hlcyAi
QGludHJvZHVjZURvbWFpbiIgYW5kICJAcmVsZWFzZURvbWFpbiIgc2hvdWxk
IGJlCmFsbG93ZWQgZm9yIHByaXZpbGVnZWQgY2FsbGVycyBvbmx5LCBhcyB0
aGV5IGFsbG93IHRvIGdhaW4gaW5mb3JtYXRpb24KYWJvdXQgcHJlc2VuY2Ug
b2Ygb3RoZXIgZ3Vlc3RzIG9uIHRoZSBob3N0LiBTbyBzZW5kIHdhdGNoIGV2
ZW50cyBmb3IKdGhvc2Ugd2F0Y2hlcyB2aWEgcHJpdmlsZWdlZCBjb25uZWN0
aW9ucyBvbmx5LgoKSW4gb3JkZXIgdG8gYWxsb3cgZm9yIGRpc2FnZ3JlZ2F0
ZWQgc2V0dXBzIHdoZXJlIGUuZy4gZHJpdmVyIGRvbWFpbnMKbmVlZCB0byBt
YWtlIHVzZSBvZiB0aG9zZSBzcGVjaWFsIHdhdGNoZXMgYWRkIHN1cHBvcnQg
Zm9yIGNhbGxpbmcKInNldCBwZXJtaXNzaW9ucyIgZm9yIHRob3NlIHNwZWNp
YWwgbm9kZXMsIHRvby4KClRoaXMgaXMgcGFydCBvZiBYU0EtMTE1LgoKU2ln
bmVkLW9mZi1ieTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpS
ZXZpZXdlZC1ieTogSnVsaWVuIEdyYWxsIDxqZ3JhbGxAYW1hem9uLmNvbT4K
UmV2aWV3ZWQtYnk6IFBhdWwgRHVycmFudCA8cGF1bEB4ZW4ub3JnPgoKZGlm
ZiAtLWdpdCBhL2RvY3MvbWlzYy94ZW5zdG9yZS50eHQgYi9kb2NzL21pc2Mv
eGVuc3RvcmUudHh0CmluZGV4IGNiODAwOWNiNjguLjIwODFmMjBmNTUgMTAw
NjQ0Ci0tLSBhL2RvY3MvbWlzYy94ZW5zdG9yZS50eHQKKysrIGIvZG9jcy9t
aXNjL3hlbnN0b3JlLnR4dApAQCAtMTcwLDYgKzE3MCw5IEBAIFNFVF9QRVJN
UwkJPHBhdGg+fDxwZXJtLWFzLXN0cmluZz58Kz8KIAkJbjxkb21pZD4Jbm8g
YWNjZXNzCiAJU2VlIGh0dHBzOi8vd2lraS54ZW4ub3JnL3dpa2kvWGVuQnVz
IHNlY3Rpb24KIAlgUGVybWlzc2lvbnMnIGZvciBkZXRhaWxzIG9mIHRoZSBw
ZXJtaXNzaW9ucyBzeXN0ZW0uCisJSXQgaXMgcG9zc2libGUgdG8gc2V0IHBl
cm1pc3Npb25zIGZvciB0aGUgc3BlY2lhbCB3YXRjaCBwYXRocworCSJAaW50
cm9kdWNlRG9tYWluIiBhbmQgIkByZWxlYXNlRG9tYWluIiB0byBlbmFibGUg
cmVjZWl2aW5nIHRob3NlCisJd2F0Y2hlcyBpbiB1bnByaXZpbGVnZWQgZG9t
YWlucy4KIAogLS0tLS0tLS0tLSBXYXRjaGVzIC0tLS0tLS0tLS0KIApAQCAt
MTk0LDYgKzE5Nyw4IEBAIFdBVENICQkJPHdwYXRoPnw8dG9rZW4+fD8KIAkg
ICAgQHJlbGVhc2VEb21haW4gCW9jY3VycyBvbiBhbnkgZG9tYWluIGNyYXNo
IG9yCiAJCQkJc2h1dGRvd24sIGFuZCBhbHNvIG9uIFJFTEVBU0UKIAkJCQlh
bmQgZG9tYWluIGRlc3RydWN0aW9uCisJPHdzcGVjaWFsPiBldmVudHMgYXJl
IHNlbnQgdG8gcHJpdmlsZWdlZCBjYWxsZXJzIG9yIGV4cGxpY2l0bHkKKwl2
aWEgU0VUX1BFUk1TIGVuYWJsZWQgZG9tYWlucyBvbmx5LgogCiAJV2hlbiBh
IHdhdGNoIGlzIGZpcnN0IHNldCB1cCBpdCBpcyB0cmlnZ2VyZWQgb25jZSBz
dHJhaWdodAogCWF3YXksIHdpdGggPHBhdGg+IGVxdWFsIHRvIDx3cGF0aD4u
ICBXYXRjaGVzIG1heSBiZSB0cmlnZ2VyZWQKZGlmZiAtLWdpdCBhL3Rvb2xz
L3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMgYi90b29scy94ZW5zdG9yZS94
ZW5zdG9yZWRfY29yZS5jCmluZGV4IDA2ZTkzN2RlNjYuLjFkYjlkMGNjMzEg
MTAwNjQ0Ci0tLSBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMK
KysrIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYwpAQCAtNDY0
LDggKzQ2NCw4IEBAIHN0YXRpYyBpbnQgd3JpdGVfbm9kZShzdHJ1Y3QgY29u
bmVjdGlvbiAqY29ubiwgc3RydWN0IG5vZGUgKm5vZGUsCiAJcmV0dXJuIHdy
aXRlX25vZGVfcmF3KGNvbm4sICZrZXksIG5vZGUsIG5vX3F1b3RhX2NoZWNr
KTsKIH0KIAotc3RhdGljIGVudW0geHNfcGVybV90eXBlIHBlcm1fZm9yX2Nv
bm4oc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sCi0JCQkJICAgICAgIGNvbnN0
IHN0cnVjdCBub2RlX3Blcm1zICpwZXJtcykKK2VudW0geHNfcGVybV90eXBl
IHBlcm1fZm9yX2Nvbm4oc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sCisJCQkJ
Y29uc3Qgc3RydWN0IG5vZGVfcGVybXMgKnBlcm1zKQogewogCXVuc2lnbmVk
IGludCBpOwogCWVudW0geHNfcGVybV90eXBlIG1hc2sgPSBYU19QRVJNX1JF
QUR8WFNfUEVSTV9XUklURXxYU19QRVJNX09XTkVSOwpAQCAtMTIzOCwyMiAr
MTIzOCwyOSBAQCBzdGF0aWMgaW50IGRvX3NldF9wZXJtcyhzdHJ1Y3QgY29u
bmVjdGlvbiAqY29ubiwgc3RydWN0IGJ1ZmZlcmVkX2RhdGEgKmluKQogCWlm
IChwZXJtcy5udW0gPCAyKQogCQlyZXR1cm4gRUlOVkFMOwogCi0JLyogRmly
c3QgYXJnIGlzIG5vZGUgbmFtZS4gKi8KLQkvKiBXZSBtdXN0IG93biBub2Rl
IHRvIGRvIHRoaXMgKHRvb2xzIGNhbiBkbyB0aGlzIHRvbykuICovCi0Jbm9k
ZSA9IGdldF9ub2RlX2Nhbm9uaWNhbGl6ZWQoY29ubiwgaW4sIGluLT5idWZm
ZXIsICZuYW1lLAotCQkJCSAgICAgIFhTX1BFUk1fV1JJVEUgfCBYU19QRVJN
X09XTkVSKTsKLQlpZiAoIW5vZGUpCi0JCXJldHVybiBlcnJubzsKLQogCXBl
cm1zdHIgPSBpbi0+YnVmZmVyICsgc3RybGVuKGluLT5idWZmZXIpICsgMTsK
IAlwZXJtcy5udW0tLTsKIAotCXBlcm1zLnAgPSB0YWxsb2NfYXJyYXkobm9k
ZSwgc3RydWN0IHhzX3Blcm1pc3Npb25zLCBwZXJtcy5udW0pOworCXBlcm1z
LnAgPSB0YWxsb2NfYXJyYXkoaW4sIHN0cnVjdCB4c19wZXJtaXNzaW9ucywg
cGVybXMubnVtKTsKIAlpZiAoIXBlcm1zLnApCiAJCXJldHVybiBFTk9NRU07
CiAJaWYgKCF4c19zdHJpbmdzX3RvX3Blcm1zKHBlcm1zLnAsIHBlcm1zLm51
bSwgcGVybXN0cikpCiAJCXJldHVybiBlcnJubzsKIAorCS8qIEZpcnN0IGFy
ZyBpcyBub2RlIG5hbWUuICovCisJaWYgKHN0cnN0YXJ0cyhpbi0+YnVmZmVy
LCAiQCIpKSB7CisJCWlmIChzZXRfcGVybXNfc3BlY2lhbChjb25uLCBpbi0+
YnVmZmVyLCAmcGVybXMpKQorCQkJcmV0dXJuIGVycm5vOworCQlzZW5kX2Fj
ayhjb25uLCBYU19TRVRfUEVSTVMpOworCQlyZXR1cm4gMDsKKwl9CisKKwkv
KiBXZSBtdXN0IG93biBub2RlIHRvIGRvIHRoaXMgKHRvb2xzIGNhbiBkbyB0
aGlzIHRvbykuICovCisJbm9kZSA9IGdldF9ub2RlX2Nhbm9uaWNhbGl6ZWQo
Y29ubiwgaW4sIGluLT5idWZmZXIsICZuYW1lLAorCQkJCSAgICAgIFhTX1BF
Uk1fV1JJVEUgfCBYU19QRVJNX09XTkVSKTsKKwlpZiAoIW5vZGUpCisJCXJl
dHVybiBlcnJubzsKKwogCS8qIFVucHJpdmlsZWdlZCBkb21haW5zIG1heSBu
b3QgY2hhbmdlIHRoZSBvd25lci4gKi8KIAlpZiAoZG9tYWluX2lzX3VucHJp
dmlsZWdlZChjb25uKSAmJgogCSAgICBwZXJtcy5wWzBdLmlkICE9IG5vZGUt
PnBlcm1zLnBbMF0uaWQpCmRpZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94
ZW5zdG9yZWRfY29yZS5oIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2Nv
cmUuaAppbmRleCBhMjkxZjE1Y2U3Li4zZjk1OGMyOWFiIDEwMDY0NAotLS0g
YS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5oCisrKyBiL3Rvb2xz
L3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmgKQEAgLTE2Miw2ICsxNjIsOCBA
QCBzdHJ1Y3Qgbm9kZSAqZ2V0X25vZGUoc3RydWN0IGNvbm5lY3Rpb24gKmNv
bm4sCiBzdHJ1Y3QgY29ubmVjdGlvbiAqbmV3X2Nvbm5lY3Rpb24oY29ubndy
aXRlZm5fdCAqd3JpdGUsIGNvbm5yZWFkZm5fdCAqcmVhZCk7CiB2b2lkIGNo
ZWNrX3N0b3JlKHZvaWQpOwogdm9pZCBjb3JydXB0KHN0cnVjdCBjb25uZWN0
aW9uICpjb25uLCBjb25zdCBjaGFyICpmbXQsIC4uLik7CitlbnVtIHhzX3Bl
cm1fdHlwZSBwZXJtX2Zvcl9jb25uKHN0cnVjdCBjb25uZWN0aW9uICpjb25u
LAorCQkJCWNvbnN0IHN0cnVjdCBub2RlX3Blcm1zICpwZXJtcyk7CiAKIC8q
IElzIHRoaXMgYSB2YWxpZCBub2RlIG5hbWU/ICovCiBib29sIGlzX3ZhbGlk
X25vZGVuYW1lKGNvbnN0IGNoYXIgKm5vZGUpOwpkaWZmIC0tZ2l0IGEvdG9v
bHMveGVuc3RvcmUveGVuc3RvcmVkX2RvbWFpbi5jIGIvdG9vbHMveGVuc3Rv
cmUveGVuc3RvcmVkX2RvbWFpbi5jCmluZGV4IDc2YmRkNDZjOGQuLmUxMTA2
ZDkwYjYgMTAwNjQ0Ci0tLSBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9k
b21haW4uYworKysgYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfZG9tYWlu
LmMKQEAgLTQxLDYgKzQxLDkgQEAgc3RhdGljIGV2dGNobl9wb3J0X3Qgdmly
cV9wb3J0OwogCiB4ZW5ldnRjaG5faGFuZGxlICp4Y2VfaGFuZGxlID0gTlVM
TDsKIAorc3RhdGljIHN0cnVjdCBub2RlX3Blcm1zIGRvbV9yZWxlYXNlX3Bl
cm1zOworc3RhdGljIHN0cnVjdCBub2RlX3Blcm1zIGRvbV9pbnRyb2R1Y2Vf
cGVybXM7CisKIHN0cnVjdCBkb21haW4KIHsKIAlzdHJ1Y3QgbGlzdF9oZWFk
IGxpc3Q7CkBAIC01NzYsNiArNTc5LDU5IEBAIHZvaWQgcmVzdG9yZV9leGlz
dGluZ19jb25uZWN0aW9ucyh2b2lkKQogewogfQogCitzdGF0aWMgaW50IHNl
dF9kb21fcGVybXNfZGVmYXVsdChzdHJ1Y3Qgbm9kZV9wZXJtcyAqcGVybXMp
Cit7CisJcGVybXMtPm51bSA9IDE7CisJcGVybXMtPnAgPSB0YWxsb2NfYXJy
YXkoTlVMTCwgc3RydWN0IHhzX3Blcm1pc3Npb25zLCBwZXJtcy0+bnVtKTsK
KwlpZiAoIXBlcm1zLT5wKQorCQlyZXR1cm4gLTE7CisJcGVybXMtPnAtPmlk
ID0gMDsKKwlwZXJtcy0+cC0+cGVybXMgPSBYU19QRVJNX05PTkU7CisKKwly
ZXR1cm4gMDsKK30KKworc3RhdGljIHN0cnVjdCBub2RlX3Blcm1zICpnZXRf
cGVybXNfc3BlY2lhbChjb25zdCBjaGFyICpuYW1lKQoreworCWlmICghc3Ry
Y21wKG5hbWUsICJAcmVsZWFzZURvbWFpbiIpKQorCQlyZXR1cm4gJmRvbV9y
ZWxlYXNlX3Blcm1zOworCWlmICghc3RyY21wKG5hbWUsICJAaW50cm9kdWNl
RG9tYWluIikpCisJCXJldHVybiAmZG9tX2ludHJvZHVjZV9wZXJtczsKKwly
ZXR1cm4gTlVMTDsKK30KKworaW50IHNldF9wZXJtc19zcGVjaWFsKHN0cnVj
dCBjb25uZWN0aW9uICpjb25uLCBjb25zdCBjaGFyICpuYW1lLAorCQkgICAg
ICBzdHJ1Y3Qgbm9kZV9wZXJtcyAqcGVybXMpCit7CisJc3RydWN0IG5vZGVf
cGVybXMgKnA7CisKKwlwID0gZ2V0X3Blcm1zX3NwZWNpYWwobmFtZSk7CisJ
aWYgKCFwKQorCQlyZXR1cm4gRUlOVkFMOworCisJaWYgKChwZXJtX2Zvcl9j
b25uKGNvbm4sIHApICYgKFhTX1BFUk1fV1JJVEUgfCBYU19QRVJNX09XTkVS
KSkgIT0KKwkgICAgKFhTX1BFUk1fV1JJVEUgfCBYU19QRVJNX09XTkVSKSkK
KwkJcmV0dXJuIEVBQ0NFUzsKKworCXAtPm51bSA9IHBlcm1zLT5udW07CisJ
dGFsbG9jX2ZyZWUocC0+cCk7CisJcC0+cCA9IHBlcm1zLT5wOworCXRhbGxv
Y19zdGVhbChOVUxMLCBwZXJtcy0+cCk7CisKKwlyZXR1cm4gMDsKK30KKwor
Ym9vbCBjaGVja19wZXJtc19zcGVjaWFsKGNvbnN0IGNoYXIgKm5hbWUsIHN0
cnVjdCBjb25uZWN0aW9uICpjb25uKQoreworCXN0cnVjdCBub2RlX3Blcm1z
ICpwOworCisJcCA9IGdldF9wZXJtc19zcGVjaWFsKG5hbWUpOworCWlmICgh
cCkKKwkJcmV0dXJuIGZhbHNlOworCisJcmV0dXJuIHBlcm1fZm9yX2Nvbm4o
Y29ubiwgcCkgJiBYU19QRVJNX1JFQUQ7Cit9CisKIHN0YXRpYyBpbnQgZG9t
MF9pbml0KHZvaWQpIAogeyAKIAlldnRjaG5fcG9ydF90IHBvcnQ7CkBAIC01
OTcsNiArNjUzLDEwIEBAIHN0YXRpYyBpbnQgZG9tMF9pbml0KHZvaWQpCiAK
IAl4ZW5ldnRjaG5fbm90aWZ5KHhjZV9oYW5kbGUsIGRvbTAtPnBvcnQpOwog
CisJaWYgKHNldF9kb21fcGVybXNfZGVmYXVsdCgmZG9tX3JlbGVhc2VfcGVy
bXMpIHx8CisJICAgIHNldF9kb21fcGVybXNfZGVmYXVsdCgmZG9tX2ludHJv
ZHVjZV9wZXJtcykpCisJCXJldHVybiAtMTsKKwogCXJldHVybiAwOyAKIH0K
IApkaWZmIC0tZ2l0IGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2RvbWFp
bi5oIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2RvbWFpbi5oCmluZGV4
IDU2YWUwMTU5NzQuLjI1OTE4Mzk2MmEgMTAwNjQ0Ci0tLSBhL3Rvb2xzL3hl
bnN0b3JlL3hlbnN0b3JlZF9kb21haW4uaAorKysgYi90b29scy94ZW5zdG9y
ZS94ZW5zdG9yZWRfZG9tYWluLmgKQEAgLTY1LDYgKzY1LDExIEBAIHZvaWQg
ZG9tYWluX3dhdGNoX2luYyhzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubik7CiB2
b2lkIGRvbWFpbl93YXRjaF9kZWMoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4p
OwogaW50IGRvbWFpbl93YXRjaChzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubik7
CiAKKy8qIFNwZWNpYWwgbm9kZSBwZXJtaXNzaW9uIGhhbmRsaW5nLiAqLwor
aW50IHNldF9wZXJtc19zcGVjaWFsKHN0cnVjdCBjb25uZWN0aW9uICpjb25u
LCBjb25zdCBjaGFyICpuYW1lLAorCQkgICAgICBzdHJ1Y3Qgbm9kZV9wZXJt
cyAqcGVybXMpOworYm9vbCBjaGVja19wZXJtc19zcGVjaWFsKGNvbnN0IGNo
YXIgKm5hbWUsIHN0cnVjdCBjb25uZWN0aW9uICpjb25uKTsKKwogLyogV3Jp
dGUgcmF0ZSBsaW1pdGluZyAqLwogCiAjZGVmaW5lIFdSTF9GQUNUT1IgICAx
MDAwIC8qIGZvciBmaXhlZC1wb2ludCBhcml0aG1ldGljICovCmRpZmYgLS1n
aXQgYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfd2F0Y2guYyBiL3Rvb2xz
L3hlbnN0b3JlL3hlbnN0b3JlZF93YXRjaC5jCmluZGV4IDM4MzY2NzU0NTku
LmY0ZTI4OTM2MmUgMTAwNjQ0Ci0tLSBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0
b3JlZF93YXRjaC5jCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF93
YXRjaC5jCkBAIC0xMzMsNiArMTMzLDEwIEBAIHZvaWQgZmlyZV93YXRjaGVz
KHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBjb25zdCB2b2lkICpjdHgsIGNv
bnN0IGNoYXIgKm5hbWUsCiAKIAkvKiBDcmVhdGUgYW4gZXZlbnQgZm9yIGVh
Y2ggd2F0Y2guICovCiAJbGlzdF9mb3JfZWFjaF9lbnRyeShpLCAmY29ubmVj
dGlvbnMsIGxpc3QpIHsKKwkJLyogaW50cm9kdWNlL3JlbGVhc2UgZG9tYWlu
IHdhdGNoZXMgKi8KKwkJaWYgKGNoZWNrX3NwZWNpYWxfZXZlbnQobmFtZSkg
JiYgIWNoZWNrX3Blcm1zX3NwZWNpYWwobmFtZSwgaSkpCisJCQljb250aW51
ZTsKKwogCQlsaXN0X2Zvcl9lYWNoX2VudHJ5KHdhdGNoLCAmaS0+d2F0Y2hl
cywgbGlzdCkgewogCQkJaWYgKGV4YWN0KSB7CiAJCQkJaWYgKHN0cmVxKG5h
bWUsIHdhdGNoLT5ub2RlKSkK

--=separator
Content-Type: application/octet-stream;
 name="xsa115-c/0010-tools-xenstore-avoid-watch-events-for-nodes-without-.patch"
Content-Disposition: attachment;
 filename="xsa115-c/0010-tools-xenstore-avoid-watch-events-for-nodes-without-.patch"
Content-Transfer-Encoding: base64

RnJvbTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpTdWJqZWN0
OiB0b29scy94ZW5zdG9yZTogYXZvaWQgd2F0Y2ggZXZlbnRzIGZvciBub2Rl
cyB3aXRob3V0IGFjY2VzcwoKVG9kYXkgd2F0Y2ggZXZlbnRzIGFyZSBzZW50
IHJlZ2FyZGxlc3Mgb2YgdGhlIGFjY2VzcyByaWdodHMgb2YgdGhlCm5vZGUg
dGhlIGV2ZW50IGlzIHNlbnQgZm9yLiBUaGlzIGVuYWJsZXMgYW55IGd1ZXN0
IHRvIGUuZy4gc2V0dXAgYQp3YXRjaCBmb3IgIi8iIGluIG9yZGVyIHRvIGhh
dmUgYSBkZXRhaWxlZCByZWNvcmQgb2YgYWxsIFhlbnN0b3JlCm1vZGlmaWNh
dGlvbnMuCgpNb2RpZnkgdGhhdCBieSBzZW5kaW5nIG9ubHkgd2F0Y2ggZXZl
bnRzIGZvciBub2RlcyB0aGF0IHRoZSB3YXRjaGVyCmhhcyBhIGNoYW5jZSB0
byBzZWUgb3RoZXJ3aXNlIChlaXRoZXIgdmlhIGRpcmVjdCByZWFkcyBvciBi
eSBxdWVyeWluZwp0aGUgY2hpbGRyZW4gb2YgYSBub2RlKS4gVGhpcyBpbmNs
dWRlcyBjYXNlcyB3aGVyZSB0aGUgdmlzaWJpbGl0eSBvZgphIG5vZGUgZm9y
IGEgd2F0Y2hlciBpcyBjaGFuZ2luZyAocGVybWlzc2lvbnMgYmVpbmcgcmVt
b3ZlZCkuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTExNS4KClNpZ25lZC1vZmYt
Ynk6IEp1ZXJnZW4gR3Jvc3MgPGpncm9zc0BzdXNlLmNvbT4KUmV2aWV3ZWQt
Ynk6IEp1bGllbiBHcmFsbCA8amdyYWxsQGFtYXpvbi5jb20+ClJldmlld2Vk
LWJ5OiBQYXVsIER1cnJhbnQgPHBhdWxAeGVuLm9yZz4KCmRpZmYgLS1naXQg
YS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jIGIvdG9vbHMveGVu
c3RvcmUveGVuc3RvcmVkX2NvcmUuYwppbmRleCAxZGI5ZDBjYzMxLi5hZDE5
MDNjNTU1IDEwMDY0NAotLS0gYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRf
Y29yZS5jCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMK
QEAgLTM1NCw4ICszNTQsOCBAQCBzdGF0aWMgdm9pZCBpbml0aWFsaXplX2Zk
cyhpbnQgKnBfc29ja19wb2xsZmRfaWR4LCBpbnQgKnB0aW1lb3V0KQogICog
SWYgaXQgZmFpbHMsIHJldHVybnMgTlVMTCBhbmQgc2V0cyBlcnJuby4KICAq
IFRlbXBvcmFyeSBtZW1vcnkgYWxsb2NhdGlvbnMgd2lsbCBiZSBkb25lIHdp
dGggY3R4LgogICovCi1zdGF0aWMgc3RydWN0IG5vZGUgKnJlYWRfbm9kZShz
dHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgY29uc3Qgdm9pZCAqY3R4LAotCQkJ
ICAgICAgY29uc3QgY2hhciAqbmFtZSkKK3N0cnVjdCBub2RlICpyZWFkX25v
ZGUoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIGNvbnN0IHZvaWQgKmN0eCwK
KwkJICAgICAgIGNvbnN0IGNoYXIgKm5hbWUpCiB7CiAJVERCX0RBVEEga2V5
LCBkYXRhOwogCXN0cnVjdCB4c190ZGJfcmVjb3JkX2hkciAqaGRyOwpAQCAt
NDg3LDcgKzQ4Nyw3IEBAIGVudW0geHNfcGVybV90eXBlIHBlcm1fZm9yX2Nv
bm4oc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sCiAgKiBHZXQgbmFtZSBvZiBu
b2RlIHBhcmVudC4KICAqIFRlbXBvcmFyeSBtZW1vcnkgYWxsb2NhdGlvbnMg
YXJlIGRvbmUgd2l0aCBjdHguCiAgKi8KLXN0YXRpYyBjaGFyICpnZXRfcGFy
ZW50KGNvbnN0IHZvaWQgKmN0eCwgY29uc3QgY2hhciAqbm9kZSkKK2NoYXIg
KmdldF9wYXJlbnQoY29uc3Qgdm9pZCAqY3R4LCBjb25zdCBjaGFyICpub2Rl
KQogewogCWNoYXIgKnBhcmVudDsKIAljaGFyICpzbGFzaCA9IHN0cnJjaHIo
bm9kZSArIDEsICcvJyk7CkBAIC01NTksMTAgKzU1OSwxMCBAQCBzdGF0aWMg
aW50IGVycm5vX2Zyb21fcGFyZW50cyhzdHJ1Y3QgY29ubmVjdGlvbiAqY29u
biwgY29uc3Qgdm9pZCAqY3R4LAogICogSWYgaXQgZmFpbHMsIHJldHVybnMg
TlVMTCBhbmQgc2V0cyBlcnJuby4KICAqIFRlbXBvcmFyeSBtZW1vcnkgYWxs
b2NhdGlvbnMgYXJlIGRvbmUgd2l0aCBjdHguCiAgKi8KLXN0cnVjdCBub2Rl
ICpnZXRfbm9kZShzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwKLQkJICAgICAg
Y29uc3Qgdm9pZCAqY3R4LAotCQkgICAgICBjb25zdCBjaGFyICpuYW1lLAot
CQkgICAgICBlbnVtIHhzX3Blcm1fdHlwZSBwZXJtKQorc3RhdGljIHN0cnVj
dCBub2RlICpnZXRfbm9kZShzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwKKwkJ
CSAgICAgY29uc3Qgdm9pZCAqY3R4LAorCQkJICAgICBjb25zdCBjaGFyICpu
YW1lLAorCQkJICAgICBlbnVtIHhzX3Blcm1fdHlwZSBwZXJtKQogewogCXN0
cnVjdCBub2RlICpub2RlOwogCkBAIC0xMDQ5LDcgKzEwNDksNyBAQCBzdGF0
aWMgaW50IGRvX3dyaXRlKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1
Y3QgYnVmZmVyZWRfZGF0YSAqaW4pCiAJCQlyZXR1cm4gZXJybm87CiAJfQog
Ci0JZmlyZV93YXRjaGVzKGNvbm4sIGluLCBuYW1lLCBmYWxzZSk7CisJZmly
ZV93YXRjaGVzKGNvbm4sIGluLCBuYW1lLCBub2RlLCBmYWxzZSwgTlVMTCk7
CiAJc2VuZF9hY2soY29ubiwgWFNfV1JJVEUpOwogCiAJcmV0dXJuIDA7CkBA
IC0xMDcxLDcgKzEwNzEsNyBAQCBzdGF0aWMgaW50IGRvX21rZGlyKHN0cnVj
dCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3QgYnVmZmVyZWRfZGF0YSAqaW4p
CiAJCW5vZGUgPSBjcmVhdGVfbm9kZShjb25uLCBpbiwgbmFtZSwgTlVMTCwg
MCk7CiAJCWlmICghbm9kZSkKIAkJCXJldHVybiBlcnJubzsKLQkJZmlyZV93
YXRjaGVzKGNvbm4sIGluLCBuYW1lLCBmYWxzZSk7CisJCWZpcmVfd2F0Y2hl
cyhjb25uLCBpbiwgbmFtZSwgbm9kZSwgZmFsc2UsIE5VTEwpOwogCX0KIAlz
ZW5kX2Fjayhjb25uLCBYU19NS0RJUik7CiAKQEAgLTExMzQsNyArMTEzNCw3
IEBAIHN0YXRpYyBpbnQgZGVsZXRlX25vZGUoc3RydWN0IGNvbm5lY3Rpb24g
KmNvbm4sIGNvbnN0IHZvaWQgKmN0eCwKIAkJdGFsbG9jX2ZyZWUobmFtZSk7
CiAJfQogCi0JZmlyZV93YXRjaGVzKGNvbm4sIGN0eCwgbm9kZS0+bmFtZSwg
dHJ1ZSk7CisJZmlyZV93YXRjaGVzKGNvbm4sIGN0eCwgbm9kZS0+bmFtZSwg
bm9kZSwgdHJ1ZSwgTlVMTCk7CiAJZGVsZXRlX25vZGVfc2luZ2xlKGNvbm4s
IG5vZGUpOwogCWRlbGV0ZV9jaGlsZChjb25uLCBwYXJlbnQsIGJhc2VuYW1l
KG5vZGUtPm5hbWUpKTsKIAl0YWxsb2NfZnJlZShub2RlKTsKQEAgLTExNTgs
MTMgKzExNTgsMTQgQEAgc3RhdGljIGludCBfcm0oc3RydWN0IGNvbm5lY3Rp
b24gKmNvbm4sIGNvbnN0IHZvaWQgKmN0eCwgc3RydWN0IG5vZGUgKm5vZGUs
CiAJcGFyZW50ID0gcmVhZF9ub2RlKGNvbm4sIGN0eCwgcGFyZW50bmFtZSk7
CiAJaWYgKCFwYXJlbnQpCiAJCXJldHVybiAoZXJybm8gPT0gRU5PTUVNKSA/
IEVOT01FTSA6IEVJTlZBTDsKKwlub2RlLT5wYXJlbnQgPSBwYXJlbnQ7CiAK
IAkvKgogCSAqIEZpcmUgdGhlIHdhdGNoZXMgbm93LCB3aGVuIHdlIGNhbiBz
dGlsbCBzZWUgdGhlIG5vZGUgcGVybWlzc2lvbnMuCiAJICogVGhpcyBmaW5l
IGFzIHdlIGFyZSBzaW5nbGUgdGhyZWFkZWQgYW5kIHRoZSBuZXh0IHBvc3Np
YmxlIHJlYWQgd2lsbAogCSAqIGJlIGhhbmRsZWQgb25seSBhZnRlciB0aGUg
bm9kZSBoYXMgYmVlbiByZWFsbHkgcmVtb3ZlZC4KIAkgKi8KLQlmaXJlX3dh
dGNoZXMoY29ubiwgY3R4LCBuYW1lLCBmYWxzZSk7CisJZmlyZV93YXRjaGVz
KGNvbm4sIGN0eCwgbmFtZSwgbm9kZSwgZmFsc2UsIE5VTEwpOwogCXJldHVy
biBkZWxldGVfbm9kZShjb25uLCBjdHgsIHBhcmVudCwgbm9kZSk7CiB9CiAK
QEAgLTEyMzAsNyArMTIzMSw3IEBAIHN0YXRpYyBpbnQgZG9fZ2V0X3Blcm1z
KHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3QgYnVmZmVyZWRfZGF0
YSAqaW4pCiAKIHN0YXRpYyBpbnQgZG9fc2V0X3Blcm1zKHN0cnVjdCBjb25u
ZWN0aW9uICpjb25uLCBzdHJ1Y3QgYnVmZmVyZWRfZGF0YSAqaW4pCiB7Ci0J
c3RydWN0IG5vZGVfcGVybXMgcGVybXM7CisJc3RydWN0IG5vZGVfcGVybXMg
cGVybXMsIG9sZF9wZXJtczsKIAljaGFyICpuYW1lLCAqcGVybXN0cjsKIAlz
dHJ1Y3Qgbm9kZSAqbm9kZTsKIApAQCAtMTI2Niw2ICsxMjY3LDcgQEAgc3Rh
dGljIGludCBkb19zZXRfcGVybXMoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4s
IHN0cnVjdCBidWZmZXJlZF9kYXRhICppbikKIAkgICAgcGVybXMucFswXS5p
ZCAhPSBub2RlLT5wZXJtcy5wWzBdLmlkKQogCQlyZXR1cm4gRVBFUk07CiAK
KwlvbGRfcGVybXMgPSBub2RlLT5wZXJtczsKIAlkb21haW5fZW50cnlfZGVj
KGNvbm4sIG5vZGUpOwogCW5vZGUtPnBlcm1zID0gcGVybXM7CiAJZG9tYWlu
X2VudHJ5X2luYyhjb25uLCBub2RlKTsKQEAgLTEyNzMsNyArMTI3NSw3IEBA
IHN0YXRpYyBpbnQgZG9fc2V0X3Blcm1zKHN0cnVjdCBjb25uZWN0aW9uICpj
b25uLCBzdHJ1Y3QgYnVmZmVyZWRfZGF0YSAqaW4pCiAJaWYgKHdyaXRlX25v
ZGUoY29ubiwgbm9kZSwgZmFsc2UpKQogCQlyZXR1cm4gZXJybm87CiAKLQlm
aXJlX3dhdGNoZXMoY29ubiwgaW4sIG5hbWUsIGZhbHNlKTsKKwlmaXJlX3dh
dGNoZXMoY29ubiwgaW4sIG5hbWUsIG5vZGUsIGZhbHNlLCAmb2xkX3Blcm1z
KTsKIAlzZW5kX2Fjayhjb25uLCBYU19TRVRfUEVSTVMpOwogCiAJcmV0dXJu
IDA7CmRpZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29y
ZS5oIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuaAppbmRleCAz
Zjk1OGMyOWFiLi42YzIxZDViYjlhIDEwMDY0NAotLS0gYS90b29scy94ZW5z
dG9yZS94ZW5zdG9yZWRfY29yZS5oCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hl
bnN0b3JlZF9jb3JlLmgKQEAgLTE0OSwxNSArMTQ5LDE3IEBAIHZvaWQgc2Vu
ZF9hY2soc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIGVudW0geHNkX3NvY2tt
c2dfdHlwZSB0eXBlKTsKIC8qIENhbm9uaWNhbGl6ZSB0aGlzIHBhdGggaWYg
cG9zc2libGUuICovCiBjaGFyICpjYW5vbmljYWxpemUoc3RydWN0IGNvbm5l
Y3Rpb24gKmNvbm4sIGNvbnN0IHZvaWQgKmN0eCwgY29uc3QgY2hhciAqbm9k
ZSk7CiAKKy8qIEdldCBhY2Nlc3MgcGVybWlzc2lvbnMuICovCitlbnVtIHhz
X3Blcm1fdHlwZSBwZXJtX2Zvcl9jb25uKHN0cnVjdCBjb25uZWN0aW9uICpj
b25uLAorCQkJCWNvbnN0IHN0cnVjdCBub2RlX3Blcm1zICpwZXJtcyk7CisK
IC8qIFdyaXRlIGEgbm9kZSB0byB0aGUgdGRiIGRhdGEgYmFzZS4gKi8KIGlu
dCB3cml0ZV9ub2RlX3JhdyhzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgVERC
X0RBVEEgKmtleSwgc3RydWN0IG5vZGUgKm5vZGUsCiAJCSAgIGJvb2wgbm9f
cXVvdGFfY2hlY2spOwogCi0vKiBHZXQgdGhpcyBub2RlLCBjaGVja2luZyB3
ZSBoYXZlIHBlcm1pc3Npb25zLiAqLwotc3RydWN0IG5vZGUgKmdldF9ub2Rl
KHN0cnVjdCBjb25uZWN0aW9uICpjb25uLAotCQkgICAgICBjb25zdCB2b2lk
ICpjdHgsCi0JCSAgICAgIGNvbnN0IGNoYXIgKm5hbWUsCi0JCSAgICAgIGVu
dW0geHNfcGVybV90eXBlIHBlcm0pOworLyogR2V0IGEgbm9kZSBmcm9tIHRo
ZSB0ZGIgZGF0YSBiYXNlLiAqLworc3RydWN0IG5vZGUgKnJlYWRfbm9kZShz
dHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgY29uc3Qgdm9pZCAqY3R4LAorCQkg
ICAgICAgY29uc3QgY2hhciAqbmFtZSk7CiAKIHN0cnVjdCBjb25uZWN0aW9u
ICpuZXdfY29ubmVjdGlvbihjb25ud3JpdGVmbl90ICp3cml0ZSwgY29ubnJl
YWRmbl90ICpyZWFkKTsKIHZvaWQgY2hlY2tfc3RvcmUodm9pZCk7CkBAIC0x
NjgsNiArMTcwLDkgQEAgZW51bSB4c19wZXJtX3R5cGUgcGVybV9mb3JfY29u
bihzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwKIC8qIElzIHRoaXMgYSB2YWxp
ZCBub2RlIG5hbWU/ICovCiBib29sIGlzX3ZhbGlkX25vZGVuYW1lKGNvbnN0
IGNoYXIgKm5vZGUpOwogCisvKiBHZXQgbmFtZSBvZiBwYXJlbnQgbm9kZS4g
Ki8KK2NoYXIgKmdldF9wYXJlbnQoY29uc3Qgdm9pZCAqY3R4LCBjb25zdCBj
aGFyICpub2RlKTsKKwogLyogVHJhY2luZyBpbmZyYXN0cnVjdHVyZS4gKi8K
IHZvaWQgdHJhY2VfY3JlYXRlKGNvbnN0IHZvaWQgKmRhdGEsIGNvbnN0IGNo
YXIgKnR5cGUpOwogdm9pZCB0cmFjZV9kZXN0cm95KGNvbnN0IHZvaWQgKmRh
dGEsIGNvbnN0IGNoYXIgKnR5cGUpOwpkaWZmIC0tZ2l0IGEvdG9vbHMveGVu
c3RvcmUveGVuc3RvcmVkX2RvbWFpbi5jIGIvdG9vbHMveGVuc3RvcmUveGVu
c3RvcmVkX2RvbWFpbi5jCmluZGV4IGUxMTA2ZDkwYjYuLmNmMjM5YzA0NGIg
MTAwNjQ0Ci0tLSBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9kb21haW4u
YworKysgYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfZG9tYWluLmMKQEAg
LTIwMiw3ICsyMDIsNyBAQCBzdGF0aWMgaW50IGRlc3Ryb3lfZG9tYWluKHZv
aWQgKl9kb21haW4pCiAJCQl1bm1hcF9pbnRlcmZhY2UoZG9tYWluLT5pbnRl
cmZhY2UpOwogCX0KIAotCWZpcmVfd2F0Y2hlcyhOVUxMLCBkb21haW4sICJA
cmVsZWFzZURvbWFpbiIsIGZhbHNlKTsKKwlmaXJlX3dhdGNoZXMoTlVMTCwg
ZG9tYWluLCAiQHJlbGVhc2VEb21haW4iLCBOVUxMLCBmYWxzZSwgTlVMTCk7
CiAKIAl3cmxfZG9tYWluX2Rlc3Ryb3koZG9tYWluKTsKIApAQCAtMjQwLDcg
KzI0MCw3IEBAIHN0YXRpYyB2b2lkIGRvbWFpbl9jbGVhbnVwKHZvaWQpCiAJ
fQogCiAJaWYgKG5vdGlmeSkKLQkJZmlyZV93YXRjaGVzKE5VTEwsIE5VTEws
ICJAcmVsZWFzZURvbWFpbiIsIGZhbHNlKTsKKwkJZmlyZV93YXRjaGVzKE5V
TEwsIE5VTEwsICJAcmVsZWFzZURvbWFpbiIsIE5VTEwsIGZhbHNlLCBOVUxM
KTsKIH0KIAogLyogV2Ugc2NhbiBhbGwgZG9tYWlucyByYXRoZXIgdGhhbiB1
c2UgdGhlIGluZm9ybWF0aW9uIGdpdmVuIGhlcmUuICovCkBAIC00MDEsNyAr
NDAxLDcgQEAgaW50IGRvX2ludHJvZHVjZShzdHJ1Y3QgY29ubmVjdGlvbiAq
Y29ubiwgc3RydWN0IGJ1ZmZlcmVkX2RhdGEgKmluKQogCQkvKiBOb3cgZG9t
YWluIGJlbG9uZ3MgdG8gaXRzIGNvbm5lY3Rpb24uICovCiAJCXRhbGxvY19z
dGVhbChkb21haW4tPmNvbm4sIGRvbWFpbik7CiAKLQkJZmlyZV93YXRjaGVz
KE5VTEwsIGluLCAiQGludHJvZHVjZURvbWFpbiIsIGZhbHNlKTsKKwkJZmly
ZV93YXRjaGVzKE5VTEwsIGluLCAiQGludHJvZHVjZURvbWFpbiIsIE5VTEws
IGZhbHNlLCBOVUxMKTsKIAl9IGVsc2UgewogCQkvKiBVc2UgWFNfSU5UUk9E
VUNFIGZvciByZWNyZWF0aW5nIHRoZSB4ZW5idXMgZXZlbnQtY2hhbm5lbC4g
Ki8KIAkJaWYgKGRvbWFpbi0+cG9ydCkKZGlmZiAtLWdpdCBhL3Rvb2xzL3hl
bnN0b3JlL3hlbnN0b3JlZF90cmFuc2FjdGlvbi5jIGIvdG9vbHMveGVuc3Rv
cmUveGVuc3RvcmVkX3RyYW5zYWN0aW9uLmMKaW5kZXggZTg3ODk3NTczNC4u
YTdkOGM1ZDQ3NSAxMDA2NDQKLS0tIGEvdG9vbHMveGVuc3RvcmUveGVuc3Rv
cmVkX3RyYW5zYWN0aW9uLmMKKysrIGIvdG9vbHMveGVuc3RvcmUveGVuc3Rv
cmVkX3RyYW5zYWN0aW9uLmMKQEAgLTExNCw2ICsxMTQsOSBAQCBzdHJ1Y3Qg
YWNjZXNzZWRfbm9kZQogCS8qIEdlbmVyYXRpb24gY291bnQgKG9yIE5PX0dF
TkVSQVRJT04pIGZvciBjb25mbGljdCBjaGVja2luZy4gKi8KIAl1aW50NjRf
dCBnZW5lcmF0aW9uOwogCisJLyogT3JpZ2luYWwgbm9kZSBwZXJtaXNzaW9u
cy4gKi8KKwlzdHJ1Y3Qgbm9kZV9wZXJtcyBwZXJtczsKKwogCS8qIEdlbmVy
YXRpb24gY291bnQgY2hlY2tpbmcgcmVxdWlyZWQ/ICovCiAJYm9vbCBjaGVj
a19nZW47CiAKQEAgLTI2MCw2ICsyNjMsMTUgQEAgaW50IGFjY2Vzc19ub2Rl
KHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3Qgbm9kZSAqbm9kZSwK
IAkJaS0+bm9kZSA9IHRhbGxvY19zdHJkdXAoaSwgbm9kZS0+bmFtZSk7CiAJ
CWlmICghaS0+bm9kZSkKIAkJCWdvdG8gbm9tZW07CisJCWlmIChub2RlLT5n
ZW5lcmF0aW9uICE9IE5PX0dFTkVSQVRJT04gJiYgbm9kZS0+cGVybXMubnVt
KSB7CisJCQlpLT5wZXJtcy5wID0gdGFsbG9jX2FycmF5KGksIHN0cnVjdCB4
c19wZXJtaXNzaW9ucywKKwkJCQkJCSAgbm9kZS0+cGVybXMubnVtKTsKKwkJ
CWlmICghaS0+cGVybXMucCkKKwkJCQlnb3RvIG5vbWVtOworCQkJaS0+cGVy
bXMubnVtID0gbm9kZS0+cGVybXMubnVtOworCQkJbWVtY3B5KGktPnBlcm1z
LnAsIG5vZGUtPnBlcm1zLnAsCisJCQkgICAgICAgaS0+cGVybXMubnVtICog
c2l6ZW9mKCppLT5wZXJtcy5wKSk7CisJCX0KIAogCQlpbnRyb2R1Y2UgPSB0
cnVlOwogCQlpLT50YV9ub2RlID0gZmFsc2U7CkBAIC0zNjgsOSArMzgwLDE0
IEBAIHN0YXRpYyBpbnQgZmluYWxpemVfdHJhbnNhY3Rpb24oc3RydWN0IGNv
bm5lY3Rpb24gKmNvbm4sCiAJCQkJdGFsbG9jX2ZyZWUoZGF0YS5kcHRyKTsK
IAkJCQlpZiAocmV0KQogCQkJCQlnb3RvIGVycjsKLQkJCX0gZWxzZSBpZiAo
dGRiX2RlbGV0ZSh0ZGJfY3R4LCBrZXkpKQorCQkJCWZpcmVfd2F0Y2hlcyhj
b25uLCB0cmFucywgaS0+bm9kZSwgTlVMTCwgZmFsc2UsCisJCQkJCSAgICAg
aS0+cGVybXMucCA/ICZpLT5wZXJtcyA6IE5VTEwpOworCQkJfSBlbHNlIHsK
KwkJCQlmaXJlX3dhdGNoZXMoY29ubiwgdHJhbnMsIGktPm5vZGUsIE5VTEws
IGZhbHNlLAorCQkJCQkgICAgIGktPnBlcm1zLnAgPyAmaS0+cGVybXMgOiBO
VUxMKTsKKwkJCQlpZiAodGRiX2RlbGV0ZSh0ZGJfY3R4LCBrZXkpKQogCQkJ
CQlnb3RvIGVycjsKLQkJCWZpcmVfd2F0Y2hlcyhjb25uLCB0cmFucywgaS0+
bm9kZSwgZmFsc2UpOworCQkJfQogCQl9CiAKIAkJaWYgKGktPnRhX25vZGUg
JiYgdGRiX2RlbGV0ZSh0ZGJfY3R4LCB0YV9rZXkpKQpkaWZmIC0tZ2l0IGEv
dG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3dhdGNoLmMgYi90b29scy94ZW5z
dG9yZS94ZW5zdG9yZWRfd2F0Y2guYwppbmRleCBmNGUyODkzNjJlLi43MWMx
MDhlYTk5IDEwMDY0NAotLS0gYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRf
d2F0Y2guYworKysgYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfd2F0Y2gu
YwpAQCAtODUsMjIgKzg1LDYgQEAgc3RhdGljIHZvaWQgYWRkX2V2ZW50KHN0
cnVjdCBjb25uZWN0aW9uICpjb25uLAogCXVuc2lnbmVkIGludCBsZW47CiAJ
Y2hhciAqZGF0YTsKIAotCWlmICghY2hlY2tfc3BlY2lhbF9ldmVudChuYW1l
KSkgewotCQkvKiBDYW4gdGhpcyBjb25uIGxvYWQgbm9kZSwgb3Igc2VlIHRo
YXQgaXQgZG9lc24ndCBleGlzdD8gKi8KLQkJc3RydWN0IG5vZGUgKm5vZGUg
PSBnZXRfbm9kZShjb25uLCBjdHgsIG5hbWUsIFhTX1BFUk1fUkVBRCk7Ci0J
CS8qCi0JCSAqIFhYWCBXZSBhbGxvdyBFQUNDRVMgaGVyZSBiZWNhdXNlIG90
aGVyd2lzZSBhIG5vbi1kb20wCi0JCSAqIGJhY2tlbmQgZHJpdmVyIGNhbm5v
dCB3YXRjaCBmb3IgZGlzYXBwZWFyYW5jZSBvZiBhIGZyb250ZW5kCi0JCSAq
IHhlbnN0b3JlIGRpcmVjdG9yeS4gV2hlbiB0aGUgZGlyZWN0b3J5IGRpc2Fw
cGVhcnMsIHdlCi0JCSAqIHJldmVydCB0byBwZXJtaXNzaW9ucyBvZiB0aGUg
cGFyZW50IGRpcmVjdG9yeSBmb3IgdGhhdCBwYXRoLAotCQkgKiB3aGljaCB3
aWxsIHR5cGljYWxseSBkaXNhbGxvdyBhY2Nlc3MgZm9yIHRoZSBiYWNrZW5k
LgotCQkgKiBCdXQgdGhpcyBicmVha3MgZGV2aWNlLWNoYW5uZWwgdGVhcmRv
d24hCi0JCSAqIFJlYWxseSB3ZSBzaG91bGQgZml4IHRoaXMgYmV0dGVyLi4u
Ci0JCSAqLwotCQlpZiAoIW5vZGUgJiYgZXJybm8gIT0gRU5PRU5UICYmIGVy
cm5vICE9IEVBQ0NFUykKLQkJCXJldHVybjsKLQl9Ci0KIAlpZiAod2F0Y2gt
PnJlbGF0aXZlX3BhdGgpIHsKIAkJbmFtZSArPSBzdHJsZW4od2F0Y2gtPnJl
bGF0aXZlX3BhdGgpOwogCQlpZiAoKm5hbWUgPT0gJy8nKSAvKiBDb3VsZCBi
ZSAiIiAqLwpAQCAtMTE4LDExICsxMDIsNTkgQEAgc3RhdGljIHZvaWQgYWRk
X2V2ZW50KHN0cnVjdCBjb25uZWN0aW9uICpjb25uLAogfQogCiAvKgorICog
Q2hlY2sgcGVybWlzc2lvbnMgb2YgYSBzcGVjaWZpYyB3YXRjaCB0byBmaXJl
OgorICogRWl0aGVyIHRoZSBub2RlIGl0c2VsZiBvciBpdHMgcGFyZW50IGhh
dmUgdG8gYmUgcmVhZGFibGUgYnkgdGhlIGNvbm5lY3Rpb24KKyAqIHRoZSB3
YXRjaCBoYXMgYmVlbiBzZXR1cCBmb3IuIEluIGNhc2UgYSB3YXRjaCBldmVu
dCBpcyBjcmVhdGVkIGR1ZSB0bworICogY2hhbmdlZCBwZXJtaXNzaW9ucyB3
ZSBuZWVkIHRvIHRha2UgdGhlIG9sZCBwZXJtaXNzaW9ucyBpbnRvIGFjY291
bnQsIHRvby4KKyAqLworc3RhdGljIGJvb2wgd2F0Y2hfcGVybWl0dGVkKHN0
cnVjdCBjb25uZWN0aW9uICpjb25uLCBjb25zdCB2b2lkICpjdHgsCisJCQkg
ICAgY29uc3QgY2hhciAqbmFtZSwgc3RydWN0IG5vZGUgKm5vZGUsCisJCQkg
ICAgc3RydWN0IG5vZGVfcGVybXMgKnBlcm1zKQoreworCWVudW0geHNfcGVy
bV90eXBlIHBlcm07CisJc3RydWN0IG5vZGUgKnBhcmVudDsKKwljaGFyICpw
YXJlbnRfbmFtZTsKKworCWlmIChwZXJtcykgeworCQlwZXJtID0gcGVybV9m
b3JfY29ubihjb25uLCBwZXJtcyk7CisJCWlmIChwZXJtICYgWFNfUEVSTV9S
RUFEKQorCQkJcmV0dXJuIHRydWU7CisJfQorCisJaWYgKCFub2RlKSB7CisJ
CW5vZGUgPSByZWFkX25vZGUoY29ubiwgY3R4LCBuYW1lKTsKKwkJaWYgKCFu
b2RlKQorCQkJcmV0dXJuIGZhbHNlOworCX0KKworCXBlcm0gPSBwZXJtX2Zv
cl9jb25uKGNvbm4sICZub2RlLT5wZXJtcyk7CisJaWYgKHBlcm0gJiBYU19Q
RVJNX1JFQUQpCisJCXJldHVybiB0cnVlOworCisJcGFyZW50ID0gbm9kZS0+
cGFyZW50OworCWlmICghcGFyZW50KSB7CisJCXBhcmVudF9uYW1lID0gZ2V0
X3BhcmVudChjdHgsIG5vZGUtPm5hbWUpOworCQlpZiAoIXBhcmVudF9uYW1l
KQorCQkJcmV0dXJuIGZhbHNlOworCQlwYXJlbnQgPSByZWFkX25vZGUoY29u
biwgY3R4LCBwYXJlbnRfbmFtZSk7CisJCWlmICghcGFyZW50KQorCQkJcmV0
dXJuIGZhbHNlOworCX0KKworCXBlcm0gPSBwZXJtX2Zvcl9jb25uKGNvbm4s
ICZwYXJlbnQtPnBlcm1zKTsKKworCXJldHVybiBwZXJtICYgWFNfUEVSTV9S
RUFEOworfQorCisvKgogICogQ2hlY2sgd2hldGhlciBhbnkgd2F0Y2ggZXZl
bnRzIGFyZSB0byBiZSBzZW50LgogICogVGVtcG9yYXJ5IG1lbW9yeSBhbGxv
Y2F0aW9ucyBhcmUgZG9uZSB3aXRoIGN0eC4KKyAqIFdlIG5lZWQgdG8gdGFr
ZSB0aGUgKHBvdGVudGlhbCkgb2xkIHBlcm1pc3Npb25zIG9mIHRoZSBub2Rl
IGludG8gYWNjb3VudAorICogYXMgYSB3YXRjaGVyIGxvc2luZyBwZXJtaXNz
aW9ucyB0byBhY2Nlc3MgYSBub2RlIHNob3VsZCByZWNlaXZlIHRoZQorICog
d2F0Y2ggZXZlbnQsIHRvby4KICAqLwogdm9pZCBmaXJlX3dhdGNoZXMoc3Ry
dWN0IGNvbm5lY3Rpb24gKmNvbm4sIGNvbnN0IHZvaWQgKmN0eCwgY29uc3Qg
Y2hhciAqbmFtZSwKLQkJICBib29sIGV4YWN0KQorCQkgIHN0cnVjdCBub2Rl
ICpub2RlLCBib29sIGV4YWN0LCBzdHJ1Y3Qgbm9kZV9wZXJtcyAqcGVybXMp
CiB7CiAJc3RydWN0IGNvbm5lY3Rpb24gKmk7CiAJc3RydWN0IHdhdGNoICp3
YXRjaDsKQEAgLTEzNCw4ICsxNjYsMTMgQEAgdm9pZCBmaXJlX3dhdGNoZXMo
c3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIGNvbnN0IHZvaWQgKmN0eCwgY29u
c3QgY2hhciAqbmFtZSwKIAkvKiBDcmVhdGUgYW4gZXZlbnQgZm9yIGVhY2gg
d2F0Y2guICovCiAJbGlzdF9mb3JfZWFjaF9lbnRyeShpLCAmY29ubmVjdGlv
bnMsIGxpc3QpIHsKIAkJLyogaW50cm9kdWNlL3JlbGVhc2UgZG9tYWluIHdh
dGNoZXMgKi8KLQkJaWYgKGNoZWNrX3NwZWNpYWxfZXZlbnQobmFtZSkgJiYg
IWNoZWNrX3Blcm1zX3NwZWNpYWwobmFtZSwgaSkpCi0JCQljb250aW51ZTsK
KwkJaWYgKGNoZWNrX3NwZWNpYWxfZXZlbnQobmFtZSkpIHsKKwkJCWlmICgh
Y2hlY2tfcGVybXNfc3BlY2lhbChuYW1lLCBpKSkKKwkJCQljb250aW51ZTsK
KwkJfSBlbHNlIHsKKwkJCWlmICghd2F0Y2hfcGVybWl0dGVkKGksIGN0eCwg
bmFtZSwgbm9kZSwgcGVybXMpKQorCQkJCWNvbnRpbnVlOworCQl9CiAKIAkJ
bGlzdF9mb3JfZWFjaF9lbnRyeSh3YXRjaCwgJmktPndhdGNoZXMsIGxpc3Qp
IHsKIAkJCWlmIChleGFjdCkgewpkaWZmIC0tZ2l0IGEvdG9vbHMveGVuc3Rv
cmUveGVuc3RvcmVkX3dhdGNoLmggYi90b29scy94ZW5zdG9yZS94ZW5zdG9y
ZWRfd2F0Y2guaAppbmRleCAxYjNjODBkM2RkLi4wMzA5NDM3NGYzIDEwMDY0
NAotLS0gYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfd2F0Y2guaAorKysg
Yi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfd2F0Y2guaApAQCAtMjYsNyAr
MjYsNyBAQCBpbnQgZG9fdW53YXRjaChzdHJ1Y3QgY29ubmVjdGlvbiAqY29u
biwgc3RydWN0IGJ1ZmZlcmVkX2RhdGEgKmluKTsKIAogLyogRmlyZSBhbGwg
d2F0Y2hlczogIWV4YWN0IG1lYW5zIGFsbCB0aGUgY2hpbGRyZW4gYXJlIGFm
ZmVjdGVkIChpZS4gcm0pLiAqLwogdm9pZCBmaXJlX3dhdGNoZXMoc3RydWN0
IGNvbm5lY3Rpb24gKmNvbm4sIGNvbnN0IHZvaWQgKnRtcCwgY29uc3QgY2hh
ciAqbmFtZSwKLQkJICBib29sIGV4YWN0KTsKKwkJICBzdHJ1Y3Qgbm9kZSAq
bm9kZSwgYm9vbCBleGFjdCwgc3RydWN0IG5vZGVfcGVybXMgKnBlcm1zKTsK
IAogdm9pZCBjb25uX2RlbGV0ZV9hbGxfd2F0Y2hlcyhzdHJ1Y3QgY29ubmVj
dGlvbiAqY29ubik7CiAK

--=separator
Content-Type: application/octet-stream;
 name="xsa115-o/0001-tools-ocaml-xenstored-ignore-transaction-id-for-un-w.patch"
Content-Disposition: attachment;
 filename="xsa115-o/0001-tools-ocaml-xenstored-ignore-transaction-id-for-un-w.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogaWdub3JlIHRyYW5zYWN0aW9uIGlkIGZvciBbdW5dd2F0Y2gK
TUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UeXBlOiB0ZXh0L3BsYWluOyBj
aGFyc2V0PVVURi04CkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDhiaXQK
Ckluc3RlYWQgb2YgaWdub3JpbmcgdGhlIHRyYW5zYWN0aW9uIGlkIGZvciBY
U19XQVRDSCBhbmQgWFNfVU5XQVRDSApjb21tYW5kcyBhcyBpdCBpcyBkb2N1
bWVudGVkIGluIGRvY3MvbWlzYy94ZW5zdG9yZS50eHQsIGl0IGlzIHRlc3Rl
ZApmb3IgdmFsaWRpdHkgdG9kYXkuCgpSZWFsbHkgaWdub3JlIHRoZSB0cmFu
c2FjdGlvbiBpZCBmb3IgWFNfV0FUQ0ggYW5kIFhTX1VOV0FUQ0guCgpUaGlz
IGlzIHBhcnQgb2YgWFNBLTExNS4KClNpZ25lZC1vZmYtYnk6IEVkd2luIFTD
tnLDtmsgPGVkdmluLnRvcm9rQGNpdHJpeC5jb20+CkFja2VkLWJ5OiBDaHJp
c3RpYW4gTGluZGlnIDxjaHJpc3RpYW4ubGluZGlnQGNpdHJpeC5jb20+ClJl
dmlld2VkLWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVyM0BjaXRy
aXguY29tPgoKZGlmZiAtLWdpdCBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9w
cm9jZXNzLm1sIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3Byb2Nlc3MubWwK
aW5kZXggZmY1Yzk0ODRmYy4uMmZhNjc5OGUzYiAxMDA2NDQKLS0tIGEvdG9v
bHMvb2NhbWwveGVuc3RvcmVkL3Byb2Nlc3MubWwKKysrIGIvdG9vbHMvb2Nh
bWwveGVuc3RvcmVkL3Byb2Nlc3MubWwKQEAgLTQ5OCwxMiArNDk4LDE5IEBA
IGxldCByZXRhaW5fb3BfaW5faGlzdG9yeSB0eSA9CiAJfCBYZW5idXMuWGIu
T3AuUmVzZXRfd2F0Y2hlcwogCXwgWGVuYnVzLlhiLk9wLkludmFsaWQgICAg
ICAgICAgIC0+IGZhbHNlCiAKK2xldCBtYXliZV9pZ25vcmVfdHJhbnNhY3Rp
b24gPSBmdW5jdGlvbgorCXwgWGVuYnVzLlhiLk9wLldhdGNoIHwgWGVuYnVz
LlhiLk9wLlVud2F0Y2ggLT4gZnVuIHRpZCAtPgorCQlpZiB0aWQgPD4gVHJh
bnNhY3Rpb24ubm9uZSB0aGVuCisJCQlkZWJ1ZyAiSWdub3JpbmcgdHJhbnNh
Y3Rpb24gSUQgJWQgZm9yIHdhdGNoL3Vud2F0Y2giIHRpZDsKKwkJVHJhbnNh
Y3Rpb24ubm9uZQorCXwgXyAtPiBmdW4geCAtPiB4CisKICgqKgogICogTm90
aHJvdyBndWFyYW50ZWUuCiAgKikKIGxldCBwcm9jZXNzX3BhY2tldCB+c3Rv
cmUgfmNvbnMgfmRvbXMgfmNvbiB+cmVxID0KIAlsZXQgdHkgPSByZXEuUGFj
a2V0LnR5IGluCi0JbGV0IHRpZCA9IHJlcS5QYWNrZXQudGlkIGluCisJbGV0
IHRpZCA9IG1heWJlX2lnbm9yZV90cmFuc2FjdGlvbiB0eSByZXEuUGFja2V0
LnRpZCBpbgogCWxldCByaWQgPSByZXEuUGFja2V0LnJpZCBpbgogCXRyeQog
CQlsZXQgZmN0ID0gZnVuY3Rpb25fb2ZfdHlwZSB0eSBpbgo=

--=separator
Content-Type: application/octet-stream;
 name="xsa115-o/0002-tools-ocaml-xenstored-check-privilege-for-XS_IS_DOMA.patch"
Content-Disposition: attachment;
 filename="xsa115-o/0002-tools-ocaml-xenstored-check-privilege-for-XS_IS_DOMA.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogY2hlY2sgcHJpdmlsZWdlIGZvciBYU19JU19ET01BSU5fSU5U
Uk9EVUNFRApNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVR5cGU6IHRleHQv
cGxhaW47IGNoYXJzZXQ9VVRGLTgKQ29udGVudC1UcmFuc2Zlci1FbmNvZGlu
ZzogOGJpdAoKVGhlIFhlbnN0b3JlIGNvbW1hbmQgWFNfSVNfRE9NQUlOX0lO
VFJPRFVDRUQgc2hvdWxkIGJlIHBvc3NpYmxlIGZvciBwcml2aWxlZ2VkCmRv
bWFpbnMgb25seSAodGhlIG9ubHkgdXNlciBpbiB0aGUgdHJlZSBpcyB0aGUg
eGVucGFnaW5nIGRhZW1vbikuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTExNS4K
ClNpZ25lZC1vZmYtYnk6IEVkd2luIFTDtnLDtmsgPGVkdmluLnRvcm9rQGNp
dHJpeC5jb20+CkFja2VkLWJ5OiBDaHJpc3RpYW4gTGluZGlnIDxjaHJpc3Rp
YW4ubGluZGlnQGNpdHJpeC5jb20+ClJldmlld2VkLWJ5OiBBbmRyZXcgQ29v
cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgoKZGlmZiAtLWdpdCBh
L3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wcm9jZXNzLm1sIGIvdG9vbHMvb2Nh
bWwveGVuc3RvcmVkL3Byb2Nlc3MubWwKaW5kZXggMmZhNjc5OGUzYi4uZmQ3
OWVmNTY0ZiAxMDA2NDQKLS0tIGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3By
b2Nlc3MubWwKKysrIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3Byb2Nlc3Mu
bWwKQEAgLTE2Niw3ICsxNjYsOSBAQCBsZXQgZG9fc2V0cGVybXMgY29uIHQg
X2RvbWFpbnMgX2NvbnMgZGF0YSA9CiBsZXQgZG9fZXJyb3IgX2NvbiBfdCBf
ZG9tYWlucyBfY29ucyBfZGF0YSA9CiAJcmFpc2UgRGVmaW5lLlVua25vd25f
b3BlcmF0aW9uCiAKLWxldCBkb19pc2ludHJvZHVjZWQgX2NvbiBfdCBkb21h
aW5zIF9jb25zIGRhdGEgPQorbGV0IGRvX2lzaW50cm9kdWNlZCBjb24gX3Qg
ZG9tYWlucyBfY29ucyBkYXRhID0KKwlpZiBub3QgKENvbm5lY3Rpb24uaXNf
ZG9tMCBjb24pCisJdGhlbiByYWlzZSBEZWZpbmUuUGVybWlzc2lvbl9kZW5p
ZWQ7CiAJbGV0IGRvbWlkID0KIAkJbWF0Y2ggKHNwbGl0IE5vbmUgJ1wwMDAn
IGRhdGEpIHdpdGgKIAkJfCBkb21pZCA6OiBfIC0+IGludF9vZl9zdHJpbmcg
ZG9taWQK

--=separator
Content-Type: application/octet-stream;
 name="xsa115-o/0003-tools-ocaml-xenstored-unify-watch-firing.patch"
Content-Disposition: attachment;
 filename="xsa115-o/0003-tools-ocaml-xenstored-unify-watch-firing.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogdW5pZnkgd2F0Y2ggZmlyaW5nCk1JTUUtVmVyc2lvbjogMS4w
CkNvbnRlbnQtVHlwZTogdGV4dC9wbGFpbjsgY2hhcnNldD1VVEYtOApDb250
ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA4Yml0CgpUaGlzIHdpbGwgbWFrZSBp
dCBlYXNpZXIgaW5zZXJ0IGFkZGl0aW9uYWwgY2hlY2tzIGluIGEgZm9sbG93
LXVwIHBhdGNoLgpBbGwgd2F0Y2hlcyBhcmUgbm93IGZpcmVkIGZyb20gYSBz
aW5nbGUgZnVuY3Rpb24uCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTExNS4KClNp
Z25lZC1vZmYtYnk6IEVkd2luIFTDtnLDtmsgPGVkdmluLnRvcm9rQGNpdHJp
eC5jb20+CkFja2VkLWJ5OiBDaHJpc3RpYW4gTGluZGlnIDxjaHJpc3RpYW4u
bGluZGlnQGNpdHJpeC5jb20+ClJldmlld2VkLWJ5OiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgoKZGlmZiAtLWdpdCBhL3Rv
b2xzL29jYW1sL3hlbnN0b3JlZC9jb25uZWN0aW9uLm1sIGIvdG9vbHMvb2Nh
bWwveGVuc3RvcmVkL2Nvbm5lY3Rpb24ubWwKaW5kZXggMjQ3NTBhZGE0My4u
ZTVkZjYyZDllNyAxMDA2NDQKLS0tIGEvdG9vbHMvb2NhbWwveGVuc3RvcmVk
L2Nvbm5lY3Rpb24ubWwKKysrIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL2Nv
bm5lY3Rpb24ubWwKQEAgLTIxMCw4ICsyMTAsNyBAQCBsZXQgZmlyZV93YXRj
aCB3YXRjaCBwYXRoID0KIAkJZW5kIGVsc2UKIAkJCXBhdGgKIAlpbgotCWxl
dCBkYXRhID0gVXRpbHMuam9pbl9ieV9udWxsIFsgbmV3X3BhdGg7IHdhdGNo
LnRva2VuOyAiIiBdIGluCi0Jc2VuZF9yZXBseSB3YXRjaC5jb24gVHJhbnNh
Y3Rpb24ubm9uZSAwIFhlbmJ1cy5YYi5PcC5XYXRjaGV2ZW50IGRhdGEKKwlm
aXJlX3NpbmdsZV93YXRjaCB7IHdhdGNoIHdpdGggcGF0aCA9IG5ld19wYXRo
IH0KIAogKCogU2VhcmNoIGZvciBhIHZhbGlkIHVudXNlZCB0cmFuc2FjdGlv
biBpZC4gKikKIGxldCByZWMgdmFsaWRfdHJhbnNhY3Rpb25faWQgY29uIHBy
b3Bvc2VkX2lkID0K

--=separator
Content-Type: application/octet-stream;
 name="xsa115-o/0004-tools-ocaml-xenstored-introduce-permissions-for-spec.patch"
Content-Disposition: attachment;
 filename="xsa115-o/0004-tools-ocaml-xenstored-introduce-permissions-for-spec.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogaW50cm9kdWNlIHBlcm1pc3Npb25zIGZvciBzcGVjaWFsIHdh
dGNoZXMKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UeXBlOiB0ZXh0L3Bs
YWluOyBjaGFyc2V0PVVURi04CkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6
IDhiaXQKClRoZSBzcGVjaWFsIHdhdGNoZXMgIkBpbnRyb2R1Y2VEb21haW4i
IGFuZCAiQHJlbGVhc2VEb21haW4iIHNob3VsZCBiZQphbGxvd2VkIGZvciBw
cml2aWxlZ2VkIGNhbGxlcnMgb25seSwgYXMgdGhleSBhbGxvdyB0byBnYWlu
IGluZm9ybWF0aW9uCmFib3V0IHByZXNlbmNlIG9mIG90aGVyIGd1ZXN0cyBv
biB0aGUgaG9zdC4gU28gc2VuZCB3YXRjaCBldmVudHMgZm9yCnRob3NlIHdh
dGNoZXMgdmlhIHByaXZpbGVnZWQgY29ubmVjdGlvbnMgb25seS4KClN0YXJ0
IHRvIGFkZHJlc3MgdGhpcyBieSB0cmVhdGluZyB0aGUgc3BlY2lhbCB3YXRj
aGVzIGFzIHJlZ3VsYXIgbm9kZXMKaW4gdGhlIHRyZWUsIHdoaWNoIGdpdmVz
IHRoZW0gbm9ybWFsIHNlbWFudGljcyBmb3IgcGVybWlzc2lvbnMuICBBIGxh
dGVyCmNoYW5nZSB3aWxsIHJlc3RyaWN0IHRoZSBoYW5kbGluZywgc28gdGhh
dCB0aGV5IGNhbid0IGJlIGxpc3RlZCwgZXRjLgoKVGhpcyBpcyBwYXJ0IG9m
IFhTQS0xMTUuCgpTaWduZWQtb2ZmLWJ5OiBFZHdpbiBUw7Zyw7ZrIDxlZHZp
bi50b3Jva0BjaXRyaXguY29tPgpBY2tlZC1ieTogQ2hyaXN0aWFuIExpbmRp
ZyA8Y2hyaXN0aWFuLmxpbmRpZ0BjaXRyaXguY29tPgpSZXZpZXdlZC1ieTog
QW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNvbT4KCmRp
ZmYgLS1naXQgYS90b29scy9vY2FtbC94ZW5zdG9yZWQvcHJvY2Vzcy5tbCBi
L3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wcm9jZXNzLm1sCmluZGV4IGZkNzll
ZjU2NGYuLmU1MjhkMWVjYjIgMTAwNjQ0Ci0tLSBhL3Rvb2xzL29jYW1sL3hl
bnN0b3JlZC9wcm9jZXNzLm1sCisrKyBiL3Rvb2xzL29jYW1sL3hlbnN0b3Jl
ZC9wcm9jZXNzLm1sCkBAIC00MjAsNyArNDIwLDcgQEAgbGV0IGRvX2ludHJv
ZHVjZSBjb24gX3QgZG9tYWlucyBjb25zIGRhdGEgPQogCQllbHNlIHRyeQog
CQkJbGV0IG5kb20gPSBEb21haW5zLmNyZWF0ZSBkb21haW5zIGRvbWlkIG1m
biBwb3J0IGluCiAJCQlDb25uZWN0aW9ucy5hZGRfZG9tYWluIGNvbnMgbmRv
bTsKLQkJCUNvbm5lY3Rpb25zLmZpcmVfc3BlY193YXRjaGVzIGNvbnMgIkBp
bnRyb2R1Y2VEb21haW4iOworCQkJQ29ubmVjdGlvbnMuZmlyZV9zcGVjX3dh
dGNoZXMgY29ucyBTdG9yZS5QYXRoLmludHJvZHVjZV9kb21haW47CiAJCQlu
ZG9tCiAJCXdpdGggXyAtPiByYWlzZSBJbnZhbGlkX0NtZF9BcmdzCiAJaW4K
QEAgLTQzOSw3ICs0MzksNyBAQCBsZXQgZG9fcmVsZWFzZSBjb24gX3QgZG9t
YWlucyBjb25zIGRhdGEgPQogCURvbWFpbnMuZGVsIGRvbWFpbnMgZG9taWQ7
CiAJQ29ubmVjdGlvbnMuZGVsX2RvbWFpbiBjb25zIGRvbWlkOwogCWlmIGZp
cmVfc3BlY193YXRjaGVzCi0JdGhlbiBDb25uZWN0aW9ucy5maXJlX3NwZWNf
d2F0Y2hlcyBjb25zICJAcmVsZWFzZURvbWFpbiIKKwl0aGVuIENvbm5lY3Rp
b25zLmZpcmVfc3BlY193YXRjaGVzIGNvbnMgU3RvcmUuUGF0aC5yZWxlYXNl
X2RvbWFpbgogCWVsc2UgcmFpc2UgSW52YWxpZF9DbWRfQXJncwogCiBsZXQg
ZG9fcmVzdW1lIGNvbiBfdCBkb21haW5zIF9jb25zIGRhdGEgPQpkaWZmIC0t
Z2l0IGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3N0b3JlLm1sIGIvdG9vbHMv
b2NhbWwveGVuc3RvcmVkL3N0b3JlLm1sCmluZGV4IDkyYjYyODliNWUuLjUy
Yjg4YjNlZTEgMTAwNjQ0Ci0tLSBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9z
dG9yZS5tbAorKysgYi90b29scy9vY2FtbC94ZW5zdG9yZWQvc3RvcmUubWwK
QEAgLTIxNCw2ICsyMTQsMTEgQEAgbGV0IHJlYyBsb29rdXAgbm9kZSBwYXRo
IGZjdCA9CiAKIGxldCBhcHBseSBybm9kZSBwYXRoIGZjdCA9CiAJbG9va3Vw
IHJub2RlIHBhdGggZmN0CisKK2xldCBpbnRyb2R1Y2VfZG9tYWluID0gIkBp
bnRyb2R1Y2VEb21haW4iCitsZXQgcmVsZWFzZV9kb21haW4gPSAiQHJlbGVh
c2VEb21haW4iCitsZXQgc3BlY2lhbHMgPSBMaXN0Lm1hcCBvZl9zdHJpbmcg
WyBpbnRyb2R1Y2VfZG9tYWluOyByZWxlYXNlX2RvbWFpbiBdCisKIGVuZAog
CiAoKiBUaGUgU3RvcmUudCB0eXBlICopCmRpZmYgLS1naXQgYS90b29scy9v
Y2FtbC94ZW5zdG9yZWQvdXRpbHMubWwgYi90b29scy9vY2FtbC94ZW5zdG9y
ZWQvdXRpbHMubWwKaW5kZXggYjI1MmRiNzk5Yi4uZThjOWZlNGU5NCAxMDA2
NDQKLS0tIGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3V0aWxzLm1sCisrKyBi
L3Rvb2xzL29jYW1sL3hlbnN0b3JlZC91dGlscy5tbApAQCAtODgsMTkgKzg4
LDE3IEBAIGxldCByZWFkX2ZpbGVfc2luZ2xlX2ludGVnZXIgZmlsZW5hbWUg
PQogCVVuaXguY2xvc2UgZmQ7CiAJaW50X29mX3N0cmluZyAoQnl0ZXMuc3Vi
X3N0cmluZyBidWYgMCBzeikKIAotbGV0IHBhdGhfY29tcGxldGUgcGF0aCBj
b25uZWN0aW9uX3BhdGggPQotCWlmIFN0cmluZy5nZXQgcGF0aCAwIDw+ICcv
JyB0aGVuCi0JCWNvbm5lY3Rpb25fcGF0aCBeIHBhdGgKLQllbHNlCi0JCXBh
dGgKLQorKCogQHBhdGggbWF5IGJlIGd1ZXN0IGRhdGEgYW5kIG5lZWRzIGl0
cyBsZW5ndGggdmFsaWRhdGluZy4gIEBjb25uZWN0aW9uX3BhdGgKKyAqIGlz
IGdlbmVyYXRlZCBsb2NhbGx5IGluIHhlbnN0b3JlZCBhbmQgYWx3YXlzIG9m
IHRoZSBmb3JtICIvbG9jYWwvZG9tYWluLyROLyIgKikKIGxldCBwYXRoX3Zh
bGlkYXRlIHBhdGggY29ubmVjdGlvbl9wYXRoID0KLQlpZiBTdHJpbmcubGVu
Z3RoIHBhdGggPSAwIHx8IFN0cmluZy5sZW5ndGggcGF0aCA+IDEwMjQgdGhl
bgotCQlyYWlzZSBEZWZpbmUuSW52YWxpZF9wYXRoCi0JZWxzZQotCQlsZXQg
Y3BhdGggPSBwYXRoX2NvbXBsZXRlIHBhdGggY29ubmVjdGlvbl9wYXRoIGlu
Ci0JCWlmIFN0cmluZy5nZXQgY3BhdGggMCA8PiAnLycgdGhlbgotCQkJcmFp
c2UgRGVmaW5lLkludmFsaWRfcGF0aAotCQllbHNlCi0JCQljcGF0aAorCWxl
dCBsZW4gPSBTdHJpbmcubGVuZ3RoIHBhdGggaW4KKworCWlmIGxlbiA9IDAg
fHwgbGVuID4gMTAyNCB0aGVuIHJhaXNlIERlZmluZS5JbnZhbGlkX3BhdGg7
CisKKwlsZXQgYWJzX3BhdGggPQorCQltYXRjaCBTdHJpbmcuZ2V0IHBhdGgg
MCB3aXRoCisJCXwgJy8nIHwgJ0AnIC0+IHBhdGgKKwkJfCBfICAgLT4gY29u
bmVjdGlvbl9wYXRoIF4gcGF0aAorCWluCiAKKwlhYnNfcGF0aApkaWZmIC0t
Z2l0IGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3hlbnN0b3JlZC5tbCBiL3Rv
b2xzL29jYW1sL3hlbnN0b3JlZC94ZW5zdG9yZWQubWwKaW5kZXggN2U3ODI0
NzYxYi4uOGQwYzUwYmZhNCAxMDA2NDQKLS0tIGEvdG9vbHMvb2NhbWwveGVu
c3RvcmVkL3hlbnN0b3JlZC5tbAorKysgYi90b29scy9vY2FtbC94ZW5zdG9y
ZWQveGVuc3RvcmVkLm1sCkBAIC0yODYsNiArMjg2LDggQEAgbGV0IF8gPQog
CWxldCBxdWl0ID0gcmVmIGZhbHNlIGluCiAKIAlMb2dnaW5nLmluaXRfeGVu
c3RvcmVkX2xvZygpOworCUxpc3QuaXRlciAoZnVuIHBhdGggLT4KKwkJU3Rv
cmUud3JpdGUgc3RvcmUgUGVybXMuQ29ubmVjdGlvbi5mdWxsX3JpZ2h0cyBw
YXRoICIiKSBTdG9yZS5QYXRoLnNwZWNpYWxzOwogCiAJbGV0IGZpbGVuYW1l
ID0gUGF0aHMueGVuX3J1bl9zdG9yZWQgXiAiL2RiIiBpbgogCWlmIGNmLnJl
c3RhcnQgJiYgU3lzLmZpbGVfZXhpc3RzIGZpbGVuYW1lIHRoZW4gKApAQCAt
MzM1LDcgKzMzNyw3IEBAIGxldCBfID0KIAkJCQkJbGV0IChub3RpZnksIGRl
YWRkb20pID0gRG9tYWlucy5jbGVhbnVwIGRvbWFpbnMgaW4KIAkJCQkJTGlz
dC5pdGVyIChDb25uZWN0aW9ucy5kZWxfZG9tYWluIGNvbnMpIGRlYWRkb207
CiAJCQkJCWlmIGRlYWRkb20gPD4gW10gfHwgbm90aWZ5IHRoZW4KLQkJCQkJ
CUNvbm5lY3Rpb25zLmZpcmVfc3BlY193YXRjaGVzIGNvbnMgIkByZWxlYXNl
RG9tYWluIgorCQkJCQkJQ29ubmVjdGlvbnMuZmlyZV9zcGVjX3dhdGNoZXMg
Y29ucyBTdG9yZS5QYXRoLnJlbGVhc2VfZG9tYWluCiAJCQkJKQogCQkJCWVs
c2UKIAkJCQkJbGV0IGMgPSBDb25uZWN0aW9ucy5maW5kX2RvbWFpbl9ieV9w
b3J0IGNvbnMgcG9ydCBpbgo=

--=separator
Content-Type: application/octet-stream;
 name="xsa115-o/0005-tools-ocaml-xenstored-avoid-watch-events-for-nodes-w.patch"
Content-Disposition: attachment;
 filename="xsa115-o/0005-tools-ocaml-xenstored-avoid-watch-events-for-nodes-w.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogYXZvaWQgd2F0Y2ggZXZlbnRzIGZvciBub2RlcyB3aXRob3V0
IGFjY2VzcwpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVR5cGU6IHRleHQv
cGxhaW47IGNoYXJzZXQ9VVRGLTgKQ29udGVudC1UcmFuc2Zlci1FbmNvZGlu
ZzogOGJpdAoKVG9kYXkgd2F0Y2ggZXZlbnRzIGFyZSBzZW50IHJlZ2FyZGxl
c3Mgb2YgdGhlIGFjY2VzcyByaWdodHMgb2YgdGhlCm5vZGUgdGhlIGV2ZW50
IGlzIHNlbnQgZm9yLiBUaGlzIGVuYWJsZXMgYW55IGd1ZXN0IHRvIGUuZy4g
c2V0dXAgYQp3YXRjaCBmb3IgIi8iIGluIG9yZGVyIHRvIGhhdmUgYSBkZXRh
aWxlZCByZWNvcmQgb2YgYWxsIFhlbnN0b3JlCm1vZGlmaWNhdGlvbnMuCgpN
b2RpZnkgdGhhdCBieSBzZW5kaW5nIG9ubHkgd2F0Y2ggZXZlbnRzIGZvciBu
b2RlcyB0aGF0IHRoZSB3YXRjaGVyCmhhcyBhIGNoYW5jZSB0byBzZWUgb3Ro
ZXJ3aXNlIChlaXRoZXIgdmlhIGRpcmVjdCByZWFkcyBvciBieSBxdWVyeWlu
Zwp0aGUgY2hpbGRyZW4gb2YgYSBub2RlKS4gVGhpcyBpbmNsdWRlcyBjYXNl
cyB3aGVyZSB0aGUgdmlzaWJpbGl0eSBvZgphIG5vZGUgZm9yIGEgd2F0Y2hl
ciBpcyBjaGFuZ2luZyAocGVybWlzc2lvbnMgYmVpbmcgcmVtb3ZlZCkuCgpQ
ZXJtaXNzaW9ucyBmb3Igbm9kZXMgYXJlIGxvb2tlZCB1cCBlaXRoZXIgaW4g
dGhlIG9sZCAocHJlCnRyYW5zYWN0aW9uL2NvbW1hbmQpIG9yIGN1cnJlbnQg
dHJlZXMgKHBvc3QgdHJhbnNhY3Rpb24pLiAgSWYKcGVybWlzc2lvbnMgYXJl
IGNoYW5nZWQgbXVsdGlwbGUgdGltZXMgaW4gYSB0cmFuc2FjdGlvbiBvbmx5
IHRoZSBmaW5hbAp2ZXJzaW9uIGlzIGNoZWNrZWQsIGJlY2F1c2UgY29uc2lk
ZXJpbmcgYSB0cmFuc2FjdGlvbiBhdG9taWMgdGhlCmluZGl2aWR1YWwgcGVy
bWlzc2lvbiBjaGFuZ2VzIHdvdWxkIG5vdCBiZSBub3RpY2FibGUgdG8gYW4g
b3V0c2lkZQpvYnNlcnZlci4KClR3byB0cmVlcyBhcmUgb25seSBuZWVkZWQg
Zm9yIHNldF9wZXJtczogaGVyZSB3ZSBjYW4gZWl0aGVyIG5vdGljZSB0aGUK
bm9kZSBkaXNhcHBlYXJpbmcgKGlmIHdlIGxvb3NlIHBlcm1pc3Npb24pLCBh
cHBlYXJpbmcKKGlmIHdlIGdhaW4gcGVybWlzc2lvbiksIG9yIGNoYW5naW5n
IChpZiB3ZSBwcmVzZXJ2ZSBwZXJtaXNzaW9uKS4KClJNIG5lZWRzIHRvIG9u
bHkgbG9vayBhdCB0aGUgb2xkIHRyZWU6IGluIHRoZSBuZXcgdHJlZSB0aGUg
bm9kZSB3b3VsZCBiZQpnb25lLCBvciBjb3VsZCBoYXZlIGRpZmZlcmVudCBw
ZXJtaXNzaW9ucyBpZiBpdCB3YXMgcmVjcmVhdGVkICh0aGUKcmVjcmVhdGlv
biB3b3VsZCBnZXQgaXRzIG93biB3YXRjaCBmaXJlZCkuCgpJbnNpZGUgYSB0
cmVlIHdlIGxvb2t1cCB0aGUgd2F0Y2ggcGF0aCdzIHBhcmVudCwgYW5kIHRo
ZW4gdGhlIHdhdGNoIHBhdGgKY2hpbGQgaXRzZWxmLiAgVGhpcyBnZXRzIHVz
IDQgc2V0cyBvZiBwZXJtaXNzaW9ucyBpbiB3b3JzdCBjYXNlLCBhbmQgaWYK
ZWl0aGVyIG9mIHRoZXNlIGFsbG93cyBhIHdhdGNoLCB0aGVuIHdlIHBlcm1p
dCBpdCB0byBmaXJlLiAgVGhlCnBlcm1pc3Npb24gbG9va3VwcyBhcmUgZG9u
ZSB3aXRob3V0IGxvZ2dpbmcgdGhlIGZhaWx1cmVzLCBvdGhlcndpc2Ugd2Un
ZApnZXQgY29uZnVzaW5nIGVycm9ycyBhYm91dCBwZXJtaXNzaW9uIGRlbmll
ZCBmb3Igc29tZSBwYXRocywgYnV0IGEgd2F0Y2gKc3RpbGwgZmlyaW5nLiBU
aGUgYWN0dWFsIHJlc3VsdCBpcyBsb2dnZWQgaW4geGVuc3RvcmVkLWFjY2Vz
cyBsb2c6CgogICd3IGV2ZW50IC4uLicgYXMgdXN1YWwgaWYgd2F0Y2ggd2Fz
IGZpcmVkCiAgJ3cgbm90ZmlyZWQuLi4nIGlmIHRoZSB3YXRjaCB3YXMgbm90
IGZpcmVkLCB0b2dldGhlciB3aXRoIHBhdGggYW5kCiAgcGVybWlzc2lvbiBz
ZXQgdG8gaGVscCBpbiB0cm91Ymxlc2hvb3RpbmcKCkFkZGluZyBhIHdhdGNo
IGJ5cGFzc2VzIHBlcm1pc3Npb24gY2hlY2tzIGFuZCBhbHdheXMgZmlyZXMg
dGhlIHdhdGNoCm9uY2UgaW1tZWRpYXRlbHkuIFRoaXMgaXMgY29uc2lzdGVu
dCB3aXRoIHRoZSBzcGVjaWZpY2F0aW9uLCBhbmQgbm8KaW5mb3JtYXRpb24g
aXMgZ2FpbmVkICh0aGUgd2F0Y2ggaXMgZmlyZWQgYm90aCBpZiB0aGUgcGF0
aCBleGlzdHMgb3IKZG9lc24ndCwgYW5kIGJvdGggaWYgeW91IGhhdmUgb3Ig
ZG9uJ3QgaGF2ZSBhY2Nlc3MsIGkuZS4gaXQgcmVmbGVjdHMgdGhlCnBhdGgg
YSBkb21haW4gZ2F2ZSBpdCBiYWNrIHRvIHRoYXQgZG9tYWluKS4KClRoZXJl
IGFyZSBzb21lIHNlbWFudGljIGNoYW5nZXMgaGVyZToKCiAgKiBXcml0ZSty
bSBpbiBhIHNpbmdsZSB0cmFuc2FjdGlvbiBvZiB0aGUgc2FtZSBwYXRoIGlz
IHVub2JzZXJ2YWJsZQogICAgbm93IHZpYSB3YXRjaGVzOiBib3RoIGJlZm9y
ZSBhbmQgYWZ0ZXIgYSB0cmFuc2FjdGlvbiB0aGUgcGF0aAogICAgZG9lc24n
dCBleGlzdCwgdGh1cyBib3RoIHRyZWUgbG9va3VwcyBjb21lIHVwIHdpdGgg
dGhlIGVtcHR5CiAgICBwZXJtaXNzaW9uIHNldCwgYW5kIG5vb25lLCBub3Qg
ZXZlbiBEb20wIGNhbiBzZWUgdGhpcy4gVGhpcyBpcwogICAgY29uc2lzdGVu
dCB3aXRoIHRyYW5zYWN0aW9uIGF0b21pY2l0eSB0aG91Z2guCiAgKiBTaW1p
bGFyIHRvIGFib3ZlIGlmIHdlIHRlbXBvcmFyaWx5IGdyYW50IGFuZCB0aGVu
IHJldm9rZSBwZXJtaXNzaW9uCiAgICBvbiBhIHBhdGggYW55IHdhdGNoZXMg
ZmlyZWQgaW5iZXR3ZWVuIGFyZSBpZ25vcmVkIGFzIHdlbGwKICAqIFRoZXJl
IGlzIGEgbmV3IGxvZyBldmVudCAodyBub3RmaXJlZCkgd2hpY2ggc2hvd3Mg
dGhlIHBlcm1pc3Npb24gc2V0CiAgICBvZiB0aGUgcGF0aCwgYW5kIHRoZSBw
YXRoLgogICogV2F0Y2hlcyBvbiBwYXRocyB0aGF0IGEgZG9tYWluIGRvZXNu
J3QgaGF2ZSBhY2Nlc3MgdG8gYXJlIG5vdyBub3QKICAgIHNlZW4sIHdoaWNo
IGlzIHRoZSBwdXJwb3NlIG9mIHRoZSBzZWN1cml0eSBmaXguCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTExNS4KClNpZ25lZC1vZmYtYnk6IEVkd2luIFTDtnLD
tmsgPGVkdmluLnRvcm9rQGNpdHJpeC5jb20+CkFja2VkLWJ5OiBDaHJpc3Rp
YW4gTGluZGlnIDxjaHJpc3RpYW4ubGluZGlnQGNpdHJpeC5jb20+ClJldmll
d2VkLWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXgu
Y29tPgoKZGlmZiAtLWdpdCBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9jb25u
ZWN0aW9uLm1sIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL2Nvbm5lY3Rpb24u
bWwKaW5kZXggZTVkZjYyZDllNy4uNjQ0YTQ0OGYyZSAxMDA2NDQKLS0tIGEv
dG9vbHMvb2NhbWwveGVuc3RvcmVkL2Nvbm5lY3Rpb24ubWwKKysrIGIvdG9v
bHMvb2NhbWwveGVuc3RvcmVkL2Nvbm5lY3Rpb24ubWwKQEAgLTE5NiwxMSAr
MTk2LDM2IEBAIGxldCBsaXN0X3dhdGNoZXMgY29uID0KIAkJY29uLndhdGNo
ZXMgW10gaW4KIAlMaXN0LmNvbmNhdCBsbAogCi1sZXQgZmlyZV9zaW5nbGVf
d2F0Y2ggd2F0Y2ggPQorbGV0IGRiZyBmbXQgPSBMb2dnaW5nLmRlYnVnICJj
b25uZWN0aW9uIiBmbXQKK2xldCBpbmZvIGZtdCA9IExvZ2dpbmcuaW5mbyAi
Y29ubmVjdGlvbiIgZm10CisKK2xldCBsb29rdXBfd2F0Y2hfcGVybSBwYXRo
ID0gZnVuY3Rpb24KK3wgTm9uZSAtPiBbXQorfCBTb21lIHJvb3QgLT4KKwl0
cnkgU3RvcmUuUGF0aC5hcHBseSByb290IHBhdGggQEAgZnVuIHBhcmVudCBu
YW1lIC0+CisJCVN0b3JlLk5vZGUuZ2V0X3Blcm1zIHBhcmVudCA6OgorCQl0
cnkgW1N0b3JlLk5vZGUuZ2V0X3Blcm1zIChTdG9yZS5Ob2RlLmZpbmQgcGFy
ZW50IG5hbWUpXQorCQl3aXRoIE5vdF9mb3VuZCAtPiBbXQorCXdpdGggRGVm
aW5lLkludmFsaWRfcGF0aCB8IE5vdF9mb3VuZCAtPiBbXQorCitsZXQgbG9v
a3VwX3dhdGNoX3Blcm1zIG9sZHJvb3Qgcm9vdCBwYXRoID0KKwlsb29rdXBf
d2F0Y2hfcGVybSBwYXRoIG9sZHJvb3QgQCBsb29rdXBfd2F0Y2hfcGVybSBw
YXRoIChTb21lIHJvb3QpCisKK2xldCBmaXJlX3NpbmdsZV93YXRjaF91bmNo
ZWNrZWQgd2F0Y2ggPQogCWxldCBkYXRhID0gVXRpbHMuam9pbl9ieV9udWxs
IFt3YXRjaC5wYXRoOyB3YXRjaC50b2tlbjsgIiJdIGluCiAJc2VuZF9yZXBs
eSB3YXRjaC5jb24gVHJhbnNhY3Rpb24ubm9uZSAwIFhlbmJ1cy5YYi5PcC5X
YXRjaGV2ZW50IGRhdGEKIAotbGV0IGZpcmVfd2F0Y2ggd2F0Y2ggcGF0aCA9
CitsZXQgZmlyZV9zaW5nbGVfd2F0Y2ggKG9sZHJvb3QsIHJvb3QpIHdhdGNo
ID0KKwlsZXQgYWJzcGF0aCA9IGdldF93YXRjaF9wYXRoIHdhdGNoLmNvbiB3
YXRjaC5wYXRoIHw+IFN0b3JlLlBhdGgub2Zfc3RyaW5nIGluCisJbGV0IHBl
cm1zID0gbG9va3VwX3dhdGNoX3Blcm1zIG9sZHJvb3Qgcm9vdCBhYnNwYXRo
IGluCisJaWYgTGlzdC5leGlzdHMgKFBlcm1zLmhhcyB3YXRjaC5jb24ucGVy
bSBSRUFEKSBwZXJtcyB0aGVuCisJCWZpcmVfc2luZ2xlX3dhdGNoX3VuY2hl
Y2tlZCB3YXRjaAorCWVsc2UKKwkJbGV0IHBlcm1zID0gcGVybXMgfD4gTGlz
dC5tYXAgKFBlcm1zLk5vZGUudG9fc3RyaW5nIH5zZXA6IiAiKSB8PiBTdHJp
bmcuY29uY2F0ICIsICIgaW4KKwkJbGV0IGNvbiA9IGdldF9kb21zdHIgd2F0
Y2guY29uIGluCisJCUxvZ2dpbmcud2F0Y2hfbm90X2ZpcmVkIH5jb24gcGVy
bXMgKFN0b3JlLlBhdGgudG9fc3RyaW5nIGFic3BhdGgpCisKK2xldCBmaXJl
X3dhdGNoIHJvb3RzIHdhdGNoIHBhdGggPQogCWxldCBuZXdfcGF0aCA9CiAJ
CWlmIHdhdGNoLmlzX3JlbGF0aXZlICYmIHBhdGguWzBdID0gJy8nCiAJCXRo
ZW4gYmVnaW4KQEAgLTIxMCw3ICsyMzUsNyBAQCBsZXQgZmlyZV93YXRjaCB3
YXRjaCBwYXRoID0KIAkJZW5kIGVsc2UKIAkJCXBhdGgKIAlpbgotCWZpcmVf
c2luZ2xlX3dhdGNoIHsgd2F0Y2ggd2l0aCBwYXRoID0gbmV3X3BhdGggfQor
CWZpcmVfc2luZ2xlX3dhdGNoIHJvb3RzIHsgd2F0Y2ggd2l0aCBwYXRoID0g
bmV3X3BhdGggfQogCiAoKiBTZWFyY2ggZm9yIGEgdmFsaWQgdW51c2VkIHRy
YW5zYWN0aW9uIGlkLiAqKQogbGV0IHJlYyB2YWxpZF90cmFuc2FjdGlvbl9p
ZCBjb24gcHJvcG9zZWRfaWQgPQpkaWZmIC0tZ2l0IGEvdG9vbHMvb2NhbWwv
eGVuc3RvcmVkL2Nvbm5lY3Rpb25zLm1sIGIvdG9vbHMvb2NhbWwveGVuc3Rv
cmVkL2Nvbm5lY3Rpb25zLm1sCmluZGV4IGYyYzQzMThjODguLjlmOWY3ZWUy
ZjAgMTAwNjQ0Ci0tLSBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9jb25uZWN0
aW9ucy5tbAorKysgYi90b29scy9vY2FtbC94ZW5zdG9yZWQvY29ubmVjdGlv
bnMubWwKQEAgLTEzNSwyNSArMTM1LDI2IEBAIGxldCBkZWxfd2F0Y2ggY29u
cyBjb24gcGF0aCB0b2tlbiA9CiAgCXdhdGNoCiAKICgqIHBhdGggaXMgYWJz
b2x1dGUgKikKLWxldCBmaXJlX3dhdGNoZXMgY29ucyBwYXRoIHJlY3Vyc2Ug
PQorbGV0IGZpcmVfd2F0Y2hlcyA/b2xkcm9vdCByb290IGNvbnMgcGF0aCBy
ZWN1cnNlID0KIAlsZXQga2V5ID0ga2V5X29mX3BhdGggcGF0aCBpbgogCWxl
dCBwYXRoID0gU3RvcmUuUGF0aC50b19zdHJpbmcgcGF0aCBpbgorCWxldCBy
b290cyA9IG9sZHJvb3QsIHJvb3QgaW4KIAlsZXQgZmlyZV93YXRjaCBfID0g
ZnVuY3Rpb24KIAkJfCBOb25lICAgICAgICAgLT4gKCkKLQkJfCBTb21lIHdh
dGNoZXMgLT4gTGlzdC5pdGVyIChmdW4gdyAtPiBDb25uZWN0aW9uLmZpcmVf
d2F0Y2ggdyBwYXRoKSB3YXRjaGVzCisJCXwgU29tZSB3YXRjaGVzIC0+IExp
c3QuaXRlciAoZnVuIHcgLT4gQ29ubmVjdGlvbi5maXJlX3dhdGNoIHJvb3Rz
IHcgcGF0aCkgd2F0Y2hlcwogCWluCiAJbGV0IGZpcmVfcmVjIF94ID0gZnVu
Y3Rpb24KIAkJfCBOb25lICAgICAgICAgLT4gKCkKIAkJfCBTb21lIHdhdGNo
ZXMgLT4KLQkJCSAgTGlzdC5pdGVyIChmdW4gdyAtPiBDb25uZWN0aW9uLmZp
cmVfc2luZ2xlX3dhdGNoIHcpIHdhdGNoZXMKKwkJCUxpc3QuaXRlciAoQ29u
bmVjdGlvbi5maXJlX3NpbmdsZV93YXRjaCByb290cykgd2F0Y2hlcwogCWlu
CiAJVHJpZS5pdGVyX3BhdGggZmlyZV93YXRjaCBjb25zLndhdGNoZXMga2V5
OwogCWlmIHJlY3Vyc2UgdGhlbgogCQlUcmllLml0ZXIgZmlyZV9yZWMgKFRy
aWUuc3ViIGNvbnMud2F0Y2hlcyBrZXkpCiAKLWxldCBmaXJlX3NwZWNfd2F0
Y2hlcyBjb25zIHNwZWNwYXRoID0KK2xldCBmaXJlX3NwZWNfd2F0Y2hlcyBy
b290IGNvbnMgc3BlY3BhdGggPQogCWl0ZXIgY29ucyAoZnVuIGNvbiAtPgot
CQlMaXN0Lml0ZXIgKGZ1biB3IC0+IENvbm5lY3Rpb24uZmlyZV9zaW5nbGVf
d2F0Y2ggdykgKENvbm5lY3Rpb24uZ2V0X3dhdGNoZXMgY29uIHNwZWNwYXRo
KSkKKwkJTGlzdC5pdGVyIChDb25uZWN0aW9uLmZpcmVfc2luZ2xlX3dhdGNo
IChOb25lLCByb290KSkgKENvbm5lY3Rpb24uZ2V0X3dhdGNoZXMgY29uIHNw
ZWNwYXRoKSkKIAogbGV0IHNldF90YXJnZXQgY29ucyBkb21haW4gdGFyZ2V0
X2RvbWFpbiA9CiAJbGV0IGNvbiA9IGZpbmRfZG9tYWluIGNvbnMgZG9tYWlu
IGluCmRpZmYgLS1naXQgYS90b29scy9vY2FtbC94ZW5zdG9yZWQvbG9nZ2lu
Zy5tbCBiL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9sb2dnaW5nLm1sCmluZGV4
IGM1Y2JhNzllOTIuLjFlZGUxMzEzMjkgMTAwNjQ0Ci0tLSBhL3Rvb2xzL29j
YW1sL3hlbnN0b3JlZC9sb2dnaW5nLm1sCisrKyBiL3Rvb2xzL29jYW1sL3hl
bnN0b3JlZC9sb2dnaW5nLm1sCkBAIC0xNjEsNiArMTYxLDggQEAgbGV0IHhl
bnN0b3JlZF9sb2dfbmJfbGluZXMgPSByZWYgMTMyMTUKIGxldCB4ZW5zdG9y
ZWRfbG9nX25iX2NoYXJzID0gcmVmICgtMSkKIGxldCB4ZW5zdG9yZWRfbG9n
Z2VyID0gcmVmIChOb25lOiBsb2dnZXIgb3B0aW9uKQogCitsZXQgZGVidWdf
ZW5hYmxlZCAoKSA9ICF4ZW5zdG9yZWRfbG9nX2xldmVsID0gRGVidWcKKwog
bGV0IHNldF94ZW5zdG9yZWRfbG9nX2Rlc3RpbmF0aW9uIHMgPQogCXhlbnN0
b3JlZF9sb2dfZGVzdGluYXRpb24gOj0gbG9nX2Rlc3RpbmF0aW9uX29mX3N0
cmluZyBzCiAKQEAgLTIwNCw2ICsyMDYsNyBAQCB0eXBlIGFjY2Vzc190eXBl
ID0KIAl8IENvbW1pdAogCXwgTmV3Y29ubgogCXwgRW5kY29ubgorCXwgV2F0
Y2hfbm90X2ZpcmVkCiAJfCBYYk9wIG9mIFhlbmJ1cy5YYi5PcC5vcGVyYXRp
b24KIAogbGV0IHN0cmluZ19vZl90aWQgfmNvbiB0aWQgPQpAQCAtMjE3LDYg
KzIyMCw3IEBAIGxldCBzdHJpbmdfb2ZfYWNjZXNzX3R5cGUgPSBmdW5jdGlv
bgogCXwgQ29tbWl0ICAgICAgICAgICAgICAgICAgLT4gImNvbW1pdCAgICIK
IAl8IE5ld2Nvbm4gICAgICAgICAgICAgICAgIC0+ICJuZXdjb25uICAiCiAJ
fCBFbmRjb25uICAgICAgICAgICAgICAgICAtPiAiZW5kY29ubiAgIgorCXwg
V2F0Y2hfbm90X2ZpcmVkICAgICAgICAgLT4gIncgbm90ZmlyZWQiCiAKIAl8
IFhiT3Agb3AgLT4gbWF0Y2ggb3Agd2l0aAogCXwgWGVuYnVzLlhiLk9wLkRl
YnVnICAgICAgICAgICAgIC0+ICJkZWJ1ZyAgICAiCkBAIC0zMzEsMyArMzM1
LDcgQEAgbGV0IHhiX2Fuc3dlciB+dGlkIH5jb24gfnR5IGRhdGEgPQogCQl8
IF8gLT4gZmFsc2UsIERlYnVnCiAJaW4KIAlpZiBwcmludCB0aGVuIGFjY2Vz
c19sb2dnaW5nIH50aWQgfmNvbiB+ZGF0YSAoWGJPcCB0eSkgfmxldmVsCisK
K2xldCB3YXRjaF9ub3RfZmlyZWQgfmNvbiBwZXJtcyBwYXRoID0KKwlsZXQg
ZGF0YSA9IFByaW50Zi5zcHJpbnRmICJFUEVSTSBwZXJtcz1bJXNdIHBhdGg9
JXMiIHBlcm1zIHBhdGggaW4KKwlhY2Nlc3NfbG9nZ2luZyB+dGlkOjAgfmNv
biB+ZGF0YSBXYXRjaF9ub3RfZmlyZWQgfmxldmVsOkluZm8KZGlmZiAtLWdp
dCBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wZXJtcy5tbCBiL3Rvb2xzL29j
YW1sL3hlbnN0b3JlZC9wZXJtcy5tbAppbmRleCAzZWExOTNlYTE0Li4yM2I4
MGFiYTNkIDEwMDY0NAotLS0gYS90b29scy9vY2FtbC94ZW5zdG9yZWQvcGVy
bXMubWwKKysrIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3Blcm1zLm1sCkBA
IC03OSw5ICs3OSw5IEBAIGxldCBvZl9zdHJpbmcgcyA9CiBsZXQgc3RyaW5n
X29mX3Blcm0gcGVybSA9CiAJUHJpbnRmLnNwcmludGYgIiVjJXUiIChjaGFy
X29mX3Blcm10eSAoc25kIHBlcm0pKSAoZnN0IHBlcm0pCiAKLWxldCB0b19z
dHJpbmcgcGVybXZlYyA9CitsZXQgdG9fc3RyaW5nID8oc2VwPSJcMDAwIikg
cGVybXZlYyA9CiAJbGV0IGwgPSAoKHBlcm12ZWMub3duZXIsIHBlcm12ZWMu
b3RoZXIpIDo6IHBlcm12ZWMuYWNsKSBpbgotCVN0cmluZy5jb25jYXQgIlww
MDAiIChMaXN0Lm1hcCBzdHJpbmdfb2ZfcGVybSBsKQorCVN0cmluZy5jb25j
YXQgc2VwIChMaXN0Lm1hcCBzdHJpbmdfb2ZfcGVybSBsKQogCiBlbmQKIApA
QCAtMTMyLDggKzEzMiw4IEBAIGxldCBjaGVja19vd25lciAoY29ubmVjdGlv
bjpDb25uZWN0aW9uLnQpIChub2RlOk5vZGUudCkgPQogCXRoZW4gQ29ubmVj
dGlvbi5pc19vd25lciBjb25uZWN0aW9uIChOb2RlLmdldF9vd25lciBub2Rl
KQogCWVsc2UgdHJ1ZQogCi0oKiBjaGVjayBpZiB0aGUgY3VycmVudCBjb25u
ZWN0aW9uIGhhcyB0aGUgcmVxdWVzdGVkIHBlcm0gb24gdGhlIGN1cnJlbnQg
bm9kZSAqKQotbGV0IGNoZWNrIChjb25uZWN0aW9uOkNvbm5lY3Rpb24udCkg
cmVxdWVzdCAobm9kZTpOb2RlLnQpID0KKygqIGNoZWNrIGlmIHRoZSBjdXJy
ZW50IGNvbm5lY3Rpb24gbGFja3MgdGhlIHJlcXVlc3RlZCBwZXJtIG9uIHRo
ZSBjdXJyZW50IG5vZGUgKikKK2xldCBsYWNrcyAoY29ubmVjdGlvbjpDb25u
ZWN0aW9uLnQpIHJlcXVlc3QgKG5vZGU6Tm9kZS50KSA9CiAJbGV0IGNoZWNr
X2FjbCBkb21haW5pZCA9CiAJCWxldCBwZXJtID0KIAkJCWlmIExpc3QubWVt
X2Fzc29jIGRvbWFpbmlkIChOb2RlLmdldF9hY2wgbm9kZSkKQEAgLTE1NCwx
MSArMTU0LDE5IEBAIGxldCBjaGVjayAoY29ubmVjdGlvbjpDb25uZWN0aW9u
LnQpIHJlcXVlc3QgKG5vZGU6Tm9kZS50KSA9CiAJCQlpbmZvICJQZXJtaXNz
aW9uIGRlbmllZDogRG9tYWluICVkIGhhcyB3cml0ZSBvbmx5IGFjY2VzcyIg
ZG9tYWluaWQ7CiAJCQlmYWxzZQogCWluCi0JaWYgIWFjdGl2YXRlCisJIWFj
dGl2YXRlCiAJJiYgbm90IChDb25uZWN0aW9uLmlzX2RvbTAgY29ubmVjdGlv
bikKIAkmJiBub3QgKGNoZWNrX293bmVyIGNvbm5lY3Rpb24gbm9kZSkKIAkm
JiBub3QgKExpc3QuZXhpc3RzIGNoZWNrX2FjbCAoQ29ubmVjdGlvbi5nZXRf
b3duZXJzIGNvbm5lY3Rpb24pKQorCisoKiBjaGVjayBpZiB0aGUgY3VycmVu
dCBjb25uZWN0aW9uIGhhcyB0aGUgcmVxdWVzdGVkIHBlcm0gb24gdGhlIGN1
cnJlbnQgbm9kZS4KKyogIFJhaXNlcyBhbiBleGNlcHRpb24gaWYgaXQgZG9l
c24ndC4gKikKK2xldCBjaGVjayBjb25uZWN0aW9uIHJlcXVlc3Qgbm9kZSA9
CisJaWYgbGFja3MgY29ubmVjdGlvbiByZXF1ZXN0IG5vZGUKIAl0aGVuIHJh
aXNlIERlZmluZS5QZXJtaXNzaW9uX2RlbmllZAogCisoKiBjaGVjayBpZiB0
aGUgY3VycmVudCBjb25uZWN0aW9uIGhhcyB0aGUgcmVxdWVzdGVkIHBlcm0g
b24gdGhlIGN1cnJlbnQgbm9kZSAqKQorbGV0IGhhcyBjb25uZWN0aW9uIHJl
cXVlc3Qgbm9kZSA9IG5vdCAobGFja3MgY29ubmVjdGlvbiByZXF1ZXN0IG5v
ZGUpCisKIGxldCBlcXVpdiBwZXJtMSBwZXJtMiA9CiAJKE5vZGUudG9fc3Ry
aW5nIHBlcm0xKSA9IChOb2RlLnRvX3N0cmluZyBwZXJtMikKZGlmZiAtLWdp
dCBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wcm9jZXNzLm1sIGIvdG9vbHMv
b2NhbWwveGVuc3RvcmVkL3Byb2Nlc3MubWwKaW5kZXggZTUyOGQxZWNiMi4u
Zjk5YjllOTM1YyAxMDA2NDQKLS0tIGEvdG9vbHMvb2NhbWwveGVuc3RvcmVk
L3Byb2Nlc3MubWwKKysrIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3Byb2Nl
c3MubWwKQEAgLTU2LDE1ICs1NiwxNyBAQCBsZXQgc3BsaXRfb25lX3BhdGgg
ZGF0YSBjb24gPQogCXwgcGF0aCA6OiAiIiA6OiBbXSAtPiBTdG9yZS5QYXRo
LmNyZWF0ZSBwYXRoIChDb25uZWN0aW9uLmdldF9wYXRoIGNvbikKIAl8IF8g
ICAgICAgICAgICAgICAgLT4gcmFpc2UgSW52YWxpZF9DbWRfQXJncwogCi1s
ZXQgcHJvY2Vzc193YXRjaCBvcHMgY29ucyA9CitsZXQgcHJvY2Vzc193YXRj
aCB0IGNvbnMgPQorCWxldCBvbGRyb290ID0gdC5UcmFuc2FjdGlvbi5vbGRy
b290IGluCisJbGV0IG5ld3Jvb3QgPSBTdG9yZS5nZXRfcm9vdCB0LnN0b3Jl
IGluCisJbGV0IG9wcyA9IFRyYW5zYWN0aW9uLmdldF9wYXRocyB0IHw+IExp
c3QucmV2IGluCiAJbGV0IGRvX29wX3dhdGNoIG9wIGNvbnMgPQotCQlsZXQg
cmVjdXJzZSA9IG1hdGNoIChmc3Qgb3ApIHdpdGgKLQkJfCBYZW5idXMuWGIu
T3AuV3JpdGUgICAgLT4gZmFsc2UKLQkJfCBYZW5idXMuWGIuT3AuTWtkaXIg
ICAgLT4gZmFsc2UKLQkJfCBYZW5idXMuWGIuT3AuUm0gICAgICAgLT4gdHJ1
ZQotCQl8IFhlbmJ1cy5YYi5PcC5TZXRwZXJtcyAtPiBmYWxzZQorCQlsZXQg
cmVjdXJzZSwgb2xkcm9vdCwgcm9vdCA9IG1hdGNoIChmc3Qgb3ApIHdpdGgK
KwkJfCBYZW5idXMuWGIuT3AuV3JpdGV8WGVuYnVzLlhiLk9wLk1rZGlyIC0+
IGZhbHNlLCBOb25lLCBuZXdyb290CisJCXwgWGVuYnVzLlhiLk9wLlJtICAg
ICAgIC0+IHRydWUsIE5vbmUsIG9sZHJvb3QKKwkJfCBYZW5idXMuWGIuT3Au
U2V0cGVybXMgLT4gZmFsc2UsIFNvbWUgb2xkcm9vdCwgbmV3cm9vdAogCQl8
IF8gICAgICAgICAgICAgIC0+IHJhaXNlIChGYWlsdXJlICJodWggPyIpIGlu
Ci0JCUNvbm5lY3Rpb25zLmZpcmVfd2F0Y2hlcyBjb25zIChzbmQgb3ApIHJl
Y3Vyc2UgaW4KKwkJQ29ubmVjdGlvbnMuZmlyZV93YXRjaGVzID9vbGRyb290
IHJvb3QgY29ucyAoc25kIG9wKSByZWN1cnNlIGluCiAJTGlzdC5pdGVyIChm
dW4gb3AgLT4gZG9fb3Bfd2F0Y2ggb3AgY29ucykgb3BzCiAKIGxldCBjcmVh
dGVfaW1wbGljaXRfcGF0aCB0IHBlcm0gcGF0aCA9CkBAIC0yMDUsNyArMjA3
LDcgQEAgbGV0IHJlcGx5X2FjayBmY3QgY29uIHQgZG9tcyBjb25zIGRhdGEg
PQogCWZjdCBjb24gdCBkb21zIGNvbnMgZGF0YTsKIAlQYWNrZXQuQWNrIChm
dW4gKCkgLT4KIAkJaWYgVHJhbnNhY3Rpb24uZ2V0X2lkIHQgPSBUcmFuc2Fj
dGlvbi5ub25lIHRoZW4KLQkJCXByb2Nlc3Nfd2F0Y2ggKFRyYW5zYWN0aW9u
LmdldF9wYXRocyB0KSBjb25zCisJCQlwcm9jZXNzX3dhdGNoIHQgY29ucwog
CSkKIAogbGV0IHJlcGx5X2RhdGEgZmN0IGNvbiB0IGRvbXMgY29ucyBkYXRh
ID0KQEAgLTM1MywxNCArMzU1LDE3IEBAIGxldCB0cmFuc2FjdGlvbl9yZXBs
YXkgYyB0IGRvbXMgY29ucyA9CiAJCQlpZ25vcmUgQEAgQ29ubmVjdGlvbi5l
bmRfdHJhbnNhY3Rpb24gYyB0aWQgTm9uZQogCQkpCiAKLWxldCBkb193YXRj
aCBjb24gX3QgX2RvbWFpbnMgY29ucyBkYXRhID0KK2xldCBkb193YXRjaCBj
b24gdCBfZG9tYWlucyBjb25zIGRhdGEgPQogCWxldCAobm9kZSwgdG9rZW4p
ID0KIAkJbWF0Y2ggKHNwbGl0IE5vbmUgJ1wwMDAnIGRhdGEpIHdpdGgKIAkJ
fCBbbm9kZTsgdG9rZW47ICIiXSAgIC0+IG5vZGUsIHRva2VuCiAJCXwgXyAg
ICAgICAgICAgICAgICAgICAtPiByYWlzZSBJbnZhbGlkX0NtZF9BcmdzCiAJ
CWluCiAJbGV0IHdhdGNoID0gQ29ubmVjdGlvbnMuYWRkX3dhdGNoIGNvbnMg
Y29uIG5vZGUgdG9rZW4gaW4KLQlQYWNrZXQuQWNrIChmdW4gKCkgLT4gQ29u
bmVjdGlvbi5maXJlX3NpbmdsZV93YXRjaCB3YXRjaCkKKwlQYWNrZXQuQWNr
IChmdW4gKCkgLT4KKwkJKCogeGVuc3RvcmUudHh0IHNheXMgdGhpcyB3YXRj
aCBpcyBmaXJlZCBpbW1lZGlhdGVseSwKKwkJICAgaW1wbHlpbmcgZXZlbiBp
ZiBwYXRoIGRvZXNuJ3QgZXhpc3Qgb3IgaXMgdW5yZWFkYWJsZSAqKQorCQlD
b25uZWN0aW9uLmZpcmVfc2luZ2xlX3dhdGNoX3VuY2hlY2tlZCB3YXRjaCkK
IAogbGV0IGRvX3Vud2F0Y2ggY29uIF90IF9kb21haW5zIGNvbnMgZGF0YSA9
CiAJbGV0IChub2RlLCB0b2tlbikgPQpAQCAtMzkxLDcgKzM5Niw3IEBAIGxl
dCBkb190cmFuc2FjdGlvbl9lbmQgY29uIHQgZG9tYWlucyBjb25zIGRhdGEg
PQogCWlmIG5vdCBzdWNjZXNzIHRoZW4KIAkJcmFpc2UgVHJhbnNhY3Rpb25f
YWdhaW47CiAJaWYgY29tbWl0IHRoZW4gYmVnaW4KLQkJcHJvY2Vzc193YXRj
aCAoTGlzdC5yZXYgKFRyYW5zYWN0aW9uLmdldF9wYXRocyB0KSkgY29uczsK
KwkJcHJvY2Vzc193YXRjaCB0IGNvbnM7CiAJCW1hdGNoIHQuVHJhbnNhY3Rp
b24udHkgd2l0aAogCQl8IFRyYW5zYWN0aW9uLk5vIC0+CiAJCQkoKSAoKiBu
byBuZWVkIHRvIHJlY29yZCBhbnl0aGluZyAqKQpAQCAtMzk5LDcgKzQwNCw3
IEBAIGxldCBkb190cmFuc2FjdGlvbl9lbmQgY29uIHQgZG9tYWlucyBjb25z
IGRhdGEgPQogCQkJcmVjb3JkX2NvbW1pdCB+Y29uIH50aWQ6aWQgfmJlZm9y
ZTpvbGRzdG9yZSB+YWZ0ZXI6Y3N0b3JlCiAJZW5kCiAKLWxldCBkb19pbnRy
b2R1Y2UgY29uIF90IGRvbWFpbnMgY29ucyBkYXRhID0KK2xldCBkb19pbnRy
b2R1Y2UgY29uIHQgZG9tYWlucyBjb25zIGRhdGEgPQogCWlmIG5vdCAoQ29u
bmVjdGlvbi5pc19kb20wIGNvbikKIAl0aGVuIHJhaXNlIERlZmluZS5QZXJt
aXNzaW9uX2RlbmllZDsKIAlsZXQgKGRvbWlkLCBtZm4sIHBvcnQpID0KQEAg
LTQyMCwxNCArNDI1LDE0IEBAIGxldCBkb19pbnRyb2R1Y2UgY29uIF90IGRv
bWFpbnMgY29ucyBkYXRhID0KIAkJZWxzZSB0cnkKIAkJCWxldCBuZG9tID0g
RG9tYWlucy5jcmVhdGUgZG9tYWlucyBkb21pZCBtZm4gcG9ydCBpbgogCQkJ
Q29ubmVjdGlvbnMuYWRkX2RvbWFpbiBjb25zIG5kb207Ci0JCQlDb25uZWN0
aW9ucy5maXJlX3NwZWNfd2F0Y2hlcyBjb25zIFN0b3JlLlBhdGguaW50cm9k
dWNlX2RvbWFpbjsKKwkJCUNvbm5lY3Rpb25zLmZpcmVfc3BlY193YXRjaGVz
IChUcmFuc2FjdGlvbi5nZXRfcm9vdCB0KSBjb25zIFN0b3JlLlBhdGguaW50
cm9kdWNlX2RvbWFpbjsKIAkJCW5kb20KIAkJd2l0aCBfIC0+IHJhaXNlIElu
dmFsaWRfQ21kX0FyZ3MKIAlpbgogCWlmIChEb21haW4uZ2V0X3JlbW90ZV9w
b3J0IGRvbSkgPD4gcG9ydCB8fCAoRG9tYWluLmdldF9tZm4gZG9tKSA8PiBt
Zm4gdGhlbgogCQlyYWlzZSBEb21haW5fbm90X21hdGNoCiAKLWxldCBkb19y
ZWxlYXNlIGNvbiBfdCBkb21haW5zIGNvbnMgZGF0YSA9CitsZXQgZG9fcmVs
ZWFzZSBjb24gdCBkb21haW5zIGNvbnMgZGF0YSA9CiAJaWYgbm90IChDb25u
ZWN0aW9uLmlzX2RvbTAgY29uKQogCXRoZW4gcmFpc2UgRGVmaW5lLlBlcm1p
c3Npb25fZGVuaWVkOwogCWxldCBkb21pZCA9CkBAIC00MzksNyArNDQ0LDcg
QEAgbGV0IGRvX3JlbGVhc2UgY29uIF90IGRvbWFpbnMgY29ucyBkYXRhID0K
IAlEb21haW5zLmRlbCBkb21haW5zIGRvbWlkOwogCUNvbm5lY3Rpb25zLmRl
bF9kb21haW4gY29ucyBkb21pZDsKIAlpZiBmaXJlX3NwZWNfd2F0Y2hlcwot
CXRoZW4gQ29ubmVjdGlvbnMuZmlyZV9zcGVjX3dhdGNoZXMgY29ucyBTdG9y
ZS5QYXRoLnJlbGVhc2VfZG9tYWluCisJdGhlbiBDb25uZWN0aW9ucy5maXJl
X3NwZWNfd2F0Y2hlcyAoVHJhbnNhY3Rpb24uZ2V0X3Jvb3QgdCkgY29ucyBT
dG9yZS5QYXRoLnJlbGVhc2VfZG9tYWluCiAJZWxzZSByYWlzZSBJbnZhbGlk
X0NtZF9BcmdzCiAKIGxldCBkb19yZXN1bWUgY29uIF90IGRvbWFpbnMgX2Nv
bnMgZGF0YSA9CkBAIC01MDcsNiArNTEyLDggQEAgbGV0IG1heWJlX2lnbm9y
ZV90cmFuc2FjdGlvbiA9IGZ1bmN0aW9uCiAJCVRyYW5zYWN0aW9uLm5vbmUK
IAl8IF8gLT4gZnVuIHggLT4geAogCisKK2xldCAoKSA9IFByaW50ZXhjLnJl
Y29yZF9iYWNrdHJhY2UgdHJ1ZQogKCoqCiAgKiBOb3Rocm93IGd1YXJhbnRl
ZS4KICAqKQpAQCAtNTQ4LDcgKzU1NSw4IEBAIGxldCBwcm9jZXNzX3BhY2tl
dCB+c3RvcmUgfmNvbnMgfmRvbXMgfmNvbiB+cmVxID0KIAkJKCogUHV0IHRo
ZSByZXNwb25zZSBvbiB0aGUgd2lyZSAqKQogCQlzZW5kX3Jlc3BvbnNlIHR5
IGNvbiB0IHJpZCByZXNwb25zZQogCXdpdGggZXhuIC0+Ci0JCWVycm9yICJw
cm9jZXNzIHBhY2tldDogJXMiIChQcmludGV4Yy50b19zdHJpbmcgZXhuKTsK
KwkJbGV0IGJ0ID0gUHJpbnRleGMuZ2V0X2JhY2t0cmFjZSAoKSBpbgorCQll
cnJvciAicHJvY2VzcyBwYWNrZXQ6ICVzLiAlcyIgKFByaW50ZXhjLnRvX3N0
cmluZyBleG4pIGJ0OwogCQlDb25uZWN0aW9uLnNlbmRfZXJyb3IgY29uIHRp
ZCByaWQgIkVJTyIKIAogbGV0IGRvX2lucHV0IHN0b3JlIGNvbnMgZG9tcyBj
b24gPQpkaWZmIC0tZ2l0IGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3RyYW5z
YWN0aW9uLm1sIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3RyYW5zYWN0aW9u
Lm1sCmluZGV4IDk2MzczNGE2NTMuLjI1YmM4YzNiNGEgMTAwNjQ0Ci0tLSBh
L3Rvb2xzL29jYW1sL3hlbnN0b3JlZC90cmFuc2FjdGlvbi5tbAorKysgYi90
b29scy9vY2FtbC94ZW5zdG9yZWQvdHJhbnNhY3Rpb24ubWwKQEAgLTgyLDYg
KzgyLDcgQEAgdHlwZSB0ID0gewogCXN0YXJ0X2NvdW50OiBpbnQ2NDsKIAlz
dG9yZTogU3RvcmUudDsgKCogVGhpcyBpcyB0aGUgc3RvcmUgdGhhdCB3ZSBj
aGFuZ2UgaW4gd3JpdGUgb3BlcmF0aW9ucy4gKikKIAlxdW90YTogUXVvdGEu
dDsKKwlvbGRyb290OiBTdG9yZS5Ob2RlLnQ7CiAJbXV0YWJsZSBwYXRoczog
KFhlbmJ1cy5YYi5PcC5vcGVyYXRpb24gKiBTdG9yZS5QYXRoLnQpIGxpc3Q7
CiAJbXV0YWJsZSBvcGVyYXRpb25zOiAoUGFja2V0LnJlcXVlc3QgKiBQYWNr
ZXQucmVzcG9uc2UpIGxpc3Q7CiAJbXV0YWJsZSByZWFkX2xvd3BhdGg6IFN0
b3JlLlBhdGgudCBvcHRpb247CkBAIC0xMjMsNiArMTI0LDcgQEAgbGV0IG1h
a2UgPyhpbnRlcm5hbD1mYWxzZSkgaWQgc3RvcmUgPQogCQlzdGFydF9jb3Vu
dCA9ICFjb3VudGVyOwogCQlzdG9yZSA9IGlmIGlkID0gbm9uZSB0aGVuIHN0
b3JlIGVsc2UgU3RvcmUuY29weSBzdG9yZTsKIAkJcXVvdGEgPSBRdW90YS5j
b3B5IHN0b3JlLlN0b3JlLnF1b3RhOworCQlvbGRyb290ID0gU3RvcmUuZ2V0
X3Jvb3Qgc3RvcmU7CiAJCXBhdGhzID0gW107CiAJCW9wZXJhdGlvbnMgPSBb
XTsKIAkJcmVhZF9sb3dwYXRoID0gTm9uZTsKQEAgLTEzNyw2ICsxMzksOCBA
QCBsZXQgbWFrZSA/KGludGVybmFsPWZhbHNlKSBpZCBzdG9yZSA9CiBsZXQg
Z2V0X3N0b3JlIHQgPSB0LnN0b3JlCiBsZXQgZ2V0X3BhdGhzIHQgPSB0LnBh
dGhzCiAKK2xldCBnZXRfcm9vdCB0ID0gU3RvcmUuZ2V0X3Jvb3QgdC5zdG9y
ZQorCiBsZXQgaXNfcmVhZF9vbmx5IHQgPSB0LnBhdGhzID0gW10KIGxldCBh
ZGRfd29wIHQgdHkgcGF0aCA9IHQucGF0aHMgPC0gKHR5LCBwYXRoKSA6OiB0
LnBhdGhzCiBsZXQgYWRkX29wZXJhdGlvbiB+cGVybSB0IHJlcXVlc3QgcmVz
cG9uc2UgPQpkaWZmIC0tZ2l0IGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3hl
bnN0b3JlZC5tbCBiL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC94ZW5zdG9yZWQu
bWwKaW5kZXggOGQwYzUwYmZhNC4uZjdiODgwNjViYiAxMDA2NDQKLS0tIGEv
dG9vbHMvb2NhbWwveGVuc3RvcmVkL3hlbnN0b3JlZC5tbAorKysgYi90b29s
cy9vY2FtbC94ZW5zdG9yZWQveGVuc3RvcmVkLm1sCkBAIC0zMzcsNyArMzM3
LDkgQEAgbGV0IF8gPQogCQkJCQlsZXQgKG5vdGlmeSwgZGVhZGRvbSkgPSBE
b21haW5zLmNsZWFudXAgZG9tYWlucyBpbgogCQkJCQlMaXN0Lml0ZXIgKENv
bm5lY3Rpb25zLmRlbF9kb21haW4gY29ucykgZGVhZGRvbTsKIAkJCQkJaWYg
ZGVhZGRvbSA8PiBbXSB8fCBub3RpZnkgdGhlbgotCQkJCQkJQ29ubmVjdGlv
bnMuZmlyZV9zcGVjX3dhdGNoZXMgY29ucyBTdG9yZS5QYXRoLnJlbGVhc2Vf
ZG9tYWluCisJCQkJCQlDb25uZWN0aW9ucy5maXJlX3NwZWNfd2F0Y2hlcwor
CQkJCQkJCShTdG9yZS5nZXRfcm9vdCBzdG9yZSkKKwkJCQkJCQljb25zIFN0
b3JlLlBhdGgucmVsZWFzZV9kb21haW4KIAkJCQkpCiAJCQkJZWxzZQogCQkJ
CQlsZXQgYyA9IENvbm5lY3Rpb25zLmZpbmRfZG9tYWluX2J5X3BvcnQgY29u
cyBwb3J0IGluCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa115-o/0006-tools-ocaml-xenstored-add-xenstored.conf-flag-to-tur.patch"
Content-Disposition: attachment;
 filename="xsa115-o/0006-tools-ocaml-xenstored-add-xenstored.conf-flag-to-tur.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogYWRkIHhlbnN0b3JlZC5jb25mIGZsYWcgdG8gdHVybiBvZmYg
d2F0Y2gKIHBlcm1pc3Npb24gY2hlY2tzCk1JTUUtVmVyc2lvbjogMS4wCkNv
bnRlbnQtVHlwZTogdGV4dC9wbGFpbjsgY2hhcnNldD1VVEYtOApDb250ZW50
LVRyYW5zZmVyLUVuY29kaW5nOiA4Yml0CgpUaGVyZSBhcmUgZmxhZ3MgdG8g
dHVybiBvZmYgcXVvdGFzIGFuZCB0aGUgcGVybWlzc2lvbiBzeXN0ZW0sIHNv
IGFkZCBvbmUKdGhhdCB0dXJucyBvZmYgdGhlIG5ld2x5IGludHJvZHVjZWQg
d2F0Y2ggcGVybWlzc2lvbiBjaGVja3MgYXMgd2VsbC4KClRoaXMgaXMgcGFy
dCBvZiBYU0EtMTE1LgoKU2lnbmVkLW9mZi1ieTogRWR3aW4gVMO2csO2ayA8
ZWR2aW4udG9yb2tAY2l0cml4LmNvbT4KQWNrZWQtYnk6IENocmlzdGlhbiBM
aW5kaWcgPGNocmlzdGlhbi5saW5kaWdAY2l0cml4LmNvbT4KUmV2aWV3ZWQt
Ynk6IEFuZHJldyBDb29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+
CgpkaWZmIC0tZ2l0IGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL2Nvbm5lY3Rp
b24ubWwgYi90b29scy9vY2FtbC94ZW5zdG9yZWQvY29ubmVjdGlvbi5tbApp
bmRleCA2NDRhNDQ4ZjJlLi5mYTBkM2M0ZDkyIDEwMDY0NAotLS0gYS90b29s
cy9vY2FtbC94ZW5zdG9yZWQvY29ubmVjdGlvbi5tbAorKysgYi90b29scy9v
Y2FtbC94ZW5zdG9yZWQvY29ubmVjdGlvbi5tbApAQCAtMjE4LDcgKzIxOCw3
IEBAIGxldCBmaXJlX3NpbmdsZV93YXRjaF91bmNoZWNrZWQgd2F0Y2ggPQog
bGV0IGZpcmVfc2luZ2xlX3dhdGNoIChvbGRyb290LCByb290KSB3YXRjaCA9
CiAJbGV0IGFic3BhdGggPSBnZXRfd2F0Y2hfcGF0aCB3YXRjaC5jb24gd2F0
Y2gucGF0aCB8PiBTdG9yZS5QYXRoLm9mX3N0cmluZyBpbgogCWxldCBwZXJt
cyA9IGxvb2t1cF93YXRjaF9wZXJtcyBvbGRyb290IHJvb3QgYWJzcGF0aCBp
bgotCWlmIExpc3QuZXhpc3RzIChQZXJtcy5oYXMgd2F0Y2guY29uLnBlcm0g
UkVBRCkgcGVybXMgdGhlbgorCWlmIFBlcm1zLmNhbl9maXJlX3dhdGNoIHdh
dGNoLmNvbi5wZXJtIHBlcm1zIHRoZW4KIAkJZmlyZV9zaW5nbGVfd2F0Y2hf
dW5jaGVja2VkIHdhdGNoCiAJZWxzZQogCQlsZXQgcGVybXMgPSBwZXJtcyB8
PiBMaXN0Lm1hcCAoUGVybXMuTm9kZS50b19zdHJpbmcgfnNlcDoiICIpIHw+
IFN0cmluZy5jb25jYXQgIiwgIiBpbgpkaWZmIC0tZ2l0IGEvdG9vbHMvb2Nh
bWwveGVuc3RvcmVkL294ZW5zdG9yZWQuY29uZi5pbiBiL3Rvb2xzL29jYW1s
L3hlbnN0b3JlZC9veGVuc3RvcmVkLmNvbmYuaW4KaW5kZXggMTUxYjY1Yjcy
ZC4uZjg0MzQ4Mjk4MSAxMDA2NDQKLS0tIGEvdG9vbHMvb2NhbWwveGVuc3Rv
cmVkL294ZW5zdG9yZWQuY29uZi5pbgorKysgYi90b29scy9vY2FtbC94ZW5z
dG9yZWQvb3hlbnN0b3JlZC5jb25mLmluCkBAIC00NCw2ICs0NCwxNiBAQCBj
b25mbGljdC1yYXRlLWxpbWl0LWlzLWFnZ3JlZ2F0ZSA9IHRydWUKICMgQWN0
aXZhdGUgbm9kZSBwZXJtaXNzaW9uIHN5c3RlbQogcGVybXMtYWN0aXZhdGUg
PSB0cnVlCiAKKyMgQWN0aXZhdGUgdGhlIHdhdGNoIHBlcm1pc3Npb24gc3lz
dGVtCisjIFdoZW4gdGhpcyBpcyBlbmFibGVkIHVucHJpdmlsZWdlZCBndWVz
dHMgY2FuIG9ubHkgZ2V0IHdhdGNoIGV2ZW50cworIyBmb3IgeGVuc3RvcmUg
ZW50cmllcyB0aGF0IHRoZXkgd291bGQndmUgYmVlbiBhYmxlIHRvIHJlYWQu
CisjCisjIFdoZW4gdGhpcyBpcyBkaXNhYmxlZCB1bnByaXZpbGVnZWQgZ3Vl
c3RzIG1heSBnZXQgd2F0Y2ggZXZlbnRzCisjIGZvciB4ZW5zdG9yZSBlbnRy
aWVzIHRoYXQgdGhleSBjYW5ub3QgcmVhZC4gVGhlIHdhdGNoIGV2ZW50IGNv
bnRhaW5zCisjIG9ubHkgdGhlIGVudHJ5IG5hbWUsIG5vdCB0aGUgdmFsdWUu
CisjIFRoaXMgcmVzdG9yZXMgYmVoYXZpb3VyIHByaW9yIHRvIFhTQS0xMTUu
CitwZXJtcy13YXRjaC1hY3RpdmF0ZSA9IHRydWUKKwogIyBBY3RpdmF0ZSBx
dW90YQogcXVvdGEtYWN0aXZhdGUgPSB0cnVlCiBxdW90YS1tYXhlbnRpdHkg
PSAxMDAwCmRpZmYgLS1naXQgYS90b29scy9vY2FtbC94ZW5zdG9yZWQvcGVy
bXMubWwgYi90b29scy9vY2FtbC94ZW5zdG9yZWQvcGVybXMubWwKaW5kZXgg
MjNiODBhYmEzZC4uZWU3ZmVlNmJkYSAxMDA2NDQKLS0tIGEvdG9vbHMvb2Nh
bWwveGVuc3RvcmVkL3Blcm1zLm1sCisrKyBiL3Rvb2xzL29jYW1sL3hlbnN0
b3JlZC9wZXJtcy5tbApAQCAtMjAsNiArMjAsNyBAQCBsZXQgaW5mbyBmbXQg
PSBMb2dnaW5nLmluZm8gInBlcm1zIiBmbXQKIG9wZW4gU3RkZXh0CiAKIGxl
dCBhY3RpdmF0ZSA9IHJlZiB0cnVlCitsZXQgd2F0Y2hfYWN0aXZhdGUgPSBy
ZWYgdHJ1ZQogCiB0eXBlIHBlcm10eSA9IFJFQUQgfCBXUklURSB8IFJEV1Ig
fCBOT05FCiAKQEAgLTE2OCw1ICsxNjksOSBAQCBsZXQgY2hlY2sgY29ubmVj
dGlvbiByZXF1ZXN0IG5vZGUgPQogKCogY2hlY2sgaWYgdGhlIGN1cnJlbnQg
Y29ubmVjdGlvbiBoYXMgdGhlIHJlcXVlc3RlZCBwZXJtIG9uIHRoZSBjdXJy
ZW50IG5vZGUgKikKIGxldCBoYXMgY29ubmVjdGlvbiByZXF1ZXN0IG5vZGUg
PSBub3QgKGxhY2tzIGNvbm5lY3Rpb24gcmVxdWVzdCBub2RlKQogCitsZXQg
Y2FuX2ZpcmVfd2F0Y2ggY29ubmVjdGlvbiBwZXJtcyA9CisJbm90ICF3YXRj
aF9hY3RpdmF0ZQorCXx8IExpc3QuZXhpc3RzIChoYXMgY29ubmVjdGlvbiBS
RUFEKSBwZXJtcworCiBsZXQgZXF1aXYgcGVybTEgcGVybTIgPQogCShOb2Rl
LnRvX3N0cmluZyBwZXJtMSkgPSAoTm9kZS50b19zdHJpbmcgcGVybTIpCmRp
ZmYgLS1naXQgYS90b29scy9vY2FtbC94ZW5zdG9yZWQveGVuc3RvcmVkLm1s
IGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3hlbnN0b3JlZC5tbAppbmRleCBm
N2I4ODA2NWJiLi4wZDM1NWJiY2I4IDEwMDY0NAotLS0gYS90b29scy9vY2Ft
bC94ZW5zdG9yZWQveGVuc3RvcmVkLm1sCisrKyBiL3Rvb2xzL29jYW1sL3hl
bnN0b3JlZC94ZW5zdG9yZWQubWwKQEAgLTk1LDYgKzk1LDcgQEAgbGV0IHBh
cnNlX2NvbmZpZyBmaWxlbmFtZSA9CiAJCSgiY29uZmxpY3QtbWF4LWhpc3Rv
cnktc2Vjb25kcyIsIENvbmZpZy5TZXRfZmxvYXQgRGVmaW5lLmNvbmZsaWN0
X21heF9oaXN0b3J5X3NlY29uZHMpOwogCQkoImNvbmZsaWN0LXJhdGUtbGlt
aXQtaXMtYWdncmVnYXRlIiwgQ29uZmlnLlNldF9ib29sIERlZmluZS5jb25m
bGljdF9yYXRlX2xpbWl0X2lzX2FnZ3JlZ2F0ZSk7CiAJCSgicGVybXMtYWN0
aXZhdGUiLCBDb25maWcuU2V0X2Jvb2wgUGVybXMuYWN0aXZhdGUpOworCQko
InBlcm1zLXdhdGNoLWFjdGl2YXRlIiwgQ29uZmlnLlNldF9ib29sIFBlcm1z
LndhdGNoX2FjdGl2YXRlKTsKIAkJKCJxdW90YS1hY3RpdmF0ZSIsIENvbmZp
Zy5TZXRfYm9vbCBRdW90YS5hY3RpdmF0ZSk7CiAJCSgicXVvdGEtbWF4d2F0
Y2giLCBDb25maWcuU2V0X2ludCBEZWZpbmUubWF4d2F0Y2gpOwogCQkoInF1
b3RhLXRyYW5zYWN0aW9uIiwgQ29uZmlnLlNldF9pbnQgRGVmaW5lLm1heHRy
YW5zYWN0aW9uKTsK

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 12:20:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 12:20:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53144.92774 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9Jj-0004yJ-KI; Tue, 15 Dec 2020 12:20:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53144.92774; Tue, 15 Dec 2020 12:20:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9Jj-0004yB-Gc; Tue, 15 Dec 2020 12:20:35 +0000
Received: by outflank-mailman (input) for mailman id 53144;
 Tue, 15 Dec 2020 12:20:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tdgx=FT=xenbits.xen.org=gdunlap@srs-us1.protection.inumbo.net>)
 id 1kp9Ji-0004tM-1u
 for xen-devel@lists.xen.org; Tue, 15 Dec 2020 12:20:34 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4ea2daa3-55d7-49e2-950a-df2f1d832486;
 Tue, 15 Dec 2020 12:20:23 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9JR-0005h3-Ns; Tue, 15 Dec 2020 12:20:17 +0000
Received: from gdunlap by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9JR-00070O-Mj; Tue, 15 Dec 2020 12:20:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4ea2daa3-55d7-49e2-950a-df2f1d832486
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=Wzs5zATjgjk1B4GDrSagFTEbwnI7OGm/AsJfs8r44Xg=; b=QeM88wZOxKmZjwXg8xOTYl5M8K
	+b6Rd3ioL/hDLdbrxwz1IQNkdjnqNUqO2ksk3cQdey7docxmChMb+839VgqdYEYPbJe0vViy46zrP
	MCJbvDcu1RXfsaEnhlWCtCVKYhJ6LutNa+yDp0n/KaMbvnZN/LXQP9Ao8n4GqdSE7C0A=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 323 v3 (CVE-2020-29482) - Xenstore: wrong
 path length check
Message-Id: <E1kp9JR-00070O-Mj@xenbits.xenproject.org>
Date: Tue, 15 Dec 2020 12:20:17 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2020-29482 / XSA-323
                               version 3

                   Xenstore: wrong path length check

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

A guest may access xenstore paths via absolute paths containing a full
pathname, or via a relative path, which implicitly includes
/local/domain/$DOMID for their own domain id.  Management tools must
access paths in guests' namespaces, necessarily using absolute paths.

oxenstored imposes a path name limit which is applied solely to the
relative or absolute path specified by the client.  Therefore, a guest
can create paths in its own namespace which are too long for management
tools to access.
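The flawed ordering can be illustrated with a minimal OCaml sketch (hypothetical names such as `canonicalize_flawed` and `path_max`; the real check lives in oxenstored's `Utils.path_validate`, shown in the attached patch). The length limit is applied to the client-supplied string *before* the implicit `/local/domain/$DOMID/` prefix is prepended, so the canonical path a management tool must later use can exceed the limit:

```ocaml
(* Sketch of the flawed order of operations: validate the raw client
   path first, canonicalize second. *)
let path_max = 1024

let canonicalize_flawed ~domid path =
  (* Length check on the client-supplied (possibly relative) path. *)
  if String.length path = 0 || String.length path > path_max then
    failwith "Invalid_path";
  (* Canonicalization happens afterwards, lengthening relative paths. *)
  if path.[0] = '/' then path
  else Printf.sprintf "/local/domain/%d/" domid ^ path

let () =
  (* A guest writes a relative path exactly at the limit... *)
  let rel = String.make path_max 'a' in
  let abs = canonicalize_flawed ~domid:5 rel in
  (* ...whose absolute name now exceeds the limit, so tools accessing
     it by absolute path are rejected. *)
  Printf.printf "canonical length = %d (limit %d)\n"
    (String.length abs) path_max
  (* prints: canonical length = 1040 (limit 1024) *)
```

The attached fix instead applies the quota after canonicalization, measuring only the part after the `/local/domain/$DOMID/` prefix, so validity no longer depends on the length of the domid.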

IMPACT
======

Depending on the toolstack in use, a malicious guest administrator
might cause some management tools and debugging operations to fail.
For example, a guest administrator can cause `xenstore-ls -r` to fail.

However, a guest administrator cannot prevent the host administrator
from tearing down the domain.

VULNERABLE SYSTEMS
==================

All systems using oxenstored are vulnerable.  Building and using
oxenstored is the default in the upstream Xen distribution, if the
OCaml compiler is available.

Systems using C xenstored are not vulnerable.

MITIGATION
==========

There are no mitigations.

Switching to the C xenstored would avoid this vulnerability.  However,
given the other vulnerabilities being reported in both xenstored
implementations at this time, switching xenstored implementation is not
a recommended approach to mitigating individual issues.

CREDITS
=======

This issue was discovered by Edwin Török and analysed by Andrew Cooper, both
of Citrix.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa323.patch           xen-unstable - Xen 4.11.x
xsa323-4.10.patch      Xen 4.10.x

$ sha256sum xsa323*
b693f259d92033ffc568412f1ea591b63d7e8dcaa7f88b62158b3c09e65ad122  xsa323.meta
fdfefa3c064c6c5f49d666d7c3444f919777d557c8cb9c2e9ae6ac94711d37de  xsa323.patch
90ab525fad3f43b6de2858f8a58128ce0d4ca97f5960bcd2af5be55d49059c92  xsa323-4.10.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.


(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decision-making.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl/Yqd4MHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZNeEH/iXf3oBcCJAICRa2WuXXR8zs/R/GUaiQCJYvU2So
OBil7cnb6bIomVmDd7TxjYUaHL3ilMEFHPTbUq0gLLKPlaJqVJTLyxDzss6VEmqp
eyQ8GBRODHLHCGcQaS3eKtCN9e6Oyd+rEm5CIPXvcu+g+HVnxG1BCdzHOei7NEPS
P5rmOn3Gkv6LlYQpVPYzVX3baIpLe09Ha+1iCS2bflArPgAc23ajxHHffuI90TIF
cBjN93AOpgBc89wGM0G11NSPAjnVUWe69qLJp6HDW6mQRaUxHQVbpBesd0V43/9P
XCvo6TxWdfDgJFOubZRKg6jHP5CKqobL4PZZT+X1MC8ZNbI=
=ilKH
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa323.meta"
Content-Disposition: attachment; filename="xsa323.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzMjMsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIs
CiAgICAiNC4xMSIsCiAgICAiNC4xMCIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjEwIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICIxZDcyZDk5MTVlZGZmMGRkNDFmNjAxYmJiMGIxZjgzYzAy
ZmYxNjg5IiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
NTMsCiAgICAgICAgICAgIDExNSwKICAgICAgICAgICAgMzIyCiAgICAgICAg
ICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2Ez
MjMtNC4xMC5wYXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0K
ICAgIH0sCiAgICAiNC4xMSI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAg
ICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiNDFhODIyYzM5
MjYzNTBmMjY5MTdkNzQ3YzhkZmVkMWM0NGEyY2Y0MiIsCiAgICAgICAgICAi
UHJlcmVxcyI6IFsKICAgICAgICAgICAgMzUzLAogICAgICAgICAgICAxMTUs
CiAgICAgICAgICAgIDMyMgogICAgICAgICAgXSwKICAgICAgICAgICJQYXRj
aGVzIjogWwogICAgICAgICAgICAieHNhMzIzLnBhdGNoIgogICAgICAgICAg
XQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAgICI0LjEyIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICI4MTQ1ZDM4YjQ4MDA5MjU1YTMyYWI4N2EwMmU0ODFjZDA5
YzgxMWY5IiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
NTMsCiAgICAgICAgICAgIDExNSwKICAgICAgICAgICAgMzIyCiAgICAgICAg
ICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2Ez
MjMucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9
LAogICAgIjQuMTMiOiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAgICJ4
ZW4iOiB7CiAgICAgICAgICAiU3RhYmxlUmVmIjogImI1MzAyMjczZTJjNTE5
NDAxNzI0MDA0ODY2NDQ2MzZmMmY0ZmM2NGEiLAogICAgICAgICAgIlByZXJl
cXMiOiBbCiAgICAgICAgICAgIDM1MywKICAgICAgICAgICAgMTE1LAogICAg
ICAgICAgICAzMjIKICAgICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hlcyI6
IFsKICAgICAgICAgICAgInhzYTMyMy5wYXRjaCIKICAgICAgICAgIF0KICAg
ICAgICB9CiAgICAgIH0KICAgIH0sCiAgICAiNC4xNCI6IHsKICAgICAgIlJl
Y2lwZXMiOiB7CiAgICAgICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVS
ZWYiOiAiMWQxZDFmNTM5MTk3NjQ1NmE3OWRhYWMwZGNmZTcxNTdkYTFlNTRm
NyIsCiAgICAgICAgICAiUHJlcmVxcyI6IFsKICAgICAgICAgICAgMzUzLAog
ICAgICAgICAgICAxMTUsCiAgICAgICAgICAgIDMyMgogICAgICAgICAgXSwK
ICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzIzLnBh
dGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAg
ICJtYXN0ZXIiOiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4i
OiB7CiAgICAgICAgICAiU3RhYmxlUmVmIjogIjNhZTQ2OWFmOGU2ODBkZjMx
ZWVjZDBhMmFjNmE4M2I1OGFkN2NlNTMiLAogICAgICAgICAgIlByZXJlcXMi
OiBbCiAgICAgICAgICAgIDM1MywKICAgICAgICAgICAgMTE1LAogICAgICAg
ICAgICAzMjIKICAgICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsK
ICAgICAgICAgICAgInhzYTMyMy5wYXRjaCIKICAgICAgICAgIF0KICAgICAg
ICB9CiAgICAgIH0KICAgIH0KICB9Cn0=

--=separator
Content-Type: application/octet-stream; name="xsa323.patch"
Content-Disposition: attachment; filename="xsa323.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogRml4IHBhdGggbGVuZ3RoIHZhbGlkYXRpb24KTUlNRS1WZXJz
aW9uOiAxLjAKQ29udGVudC1UeXBlOiB0ZXh0L3BsYWluOyBjaGFyc2V0PVVU
Ri04CkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDhiaXQKCkN1cnJlbnRs
eSwgb3hlbnN0b3JlZCBjaGVja3MgdGhlIGxlbmd0aCBvZiBwYXRocyBhZ2Fp
bnN0IDEwMjQsIHRoZW4KcHJlcGVuZHMgIi9sb2NhbC9kb21haW4vJERPTUlE
LyIgdG8gcmVsYXRpdmUgcGF0aHMuICBUaGlzIGFsbG93cyBhIGRvbVUKdG8g
Y3JlYXRlIHBhdGhzIHdoaWNoIGNhbid0IHN1YnNlcXVlbnRseSBiZSByZWFk
IGJ5IGFueW9uZSwgZXZlbiBkb20wLgpUaGlzIGFsc28gaW50ZXJmZXJlcyB3
aXRoIGxpc3RpbmcgZGlyZWN0b3JpZXMsIGV0Yy4KCkRlZmluZSBhIG5ldyBv
eGVuc3RvcmVkLmNvbmYgZW50cnk6IHF1b3RhLXBhdGgtbWF4LCBkZWZhdWx0
aW5nIHRvIDEwMjQKYXMgYmVmb3JlLiAgRm9yIHBhdGhzIHRoYXQgYmVnaW4g
d2l0aCAiL2xvY2FsL2RvbWFpbi8kRE9NSUQvIiBjaGVjayB0aGUKcmVsYXRp
dmUgcGF0aCBsZW5ndGggYWdhaW5zdCB0aGlzIHF1b3RhLiBGb3IgYWxsIG90
aGVyIHBhdGhzIGNoZWNrIHRoZQplbnRpcmUgcGF0aCBsZW5ndGguCgpUaGlz
IGVuc3VyZXMgdGhhdCBpZiB0aGUgZG9taWQgY2hhbmdlcyAoYW5kIHRodXMg
dGhlIGxlbmd0aCBvZiBhIHByZWZpeApjaGFuZ2VzKSBhIHBhdGggdGhhdCB1
c2VkIHRvIGJlIHZhbGlkIHN0YXlzIHZhbGlkIChlLmcuIGFmdGVyIGEKbGl2
ZS1taWdyYXRpb24pLiAgSXQgYWxzbyBlbnN1cmVzIHRoYXQgcmVnYXJkbGVz
cyBob3cgdGhlIGNsaWVudCB0cmllcwp0byBhY2Nlc3MgYSBwYXRoIChkb21p
ZC1yZWxhdGl2ZSBvciBhYnNvbHV0ZSkgaXQgd2lsbCBnZXQgY29uc2lzdGVu
dApyZXN1bHRzLCBzaW5jZSB0aGUgbGltaXQgaXMgYWx3YXlzIGFwcGxpZWQg
b24gdGhlIGZpbmFsIGNhbm9uaWNhbGl6ZWQKcGF0aC4KCkRlbGV0ZSB0aGUg
dW51c2VkIERvbWFpbi5nZXRfcGF0aCB0byBhdm9pZCBpdCBiZWluZyBjb25m
dXNlZCB3aXRoCkNvbm5lY3Rpb24uZ2V0X3BhdGggKHdoaWNoIGRpZmZlcnMg
YnkgYSB0cmFpbGluZyBzbGFzaCBvbmx5KS4KClJld3JpdGUgVXRpbC5wYXRo
X3ZhbGlkYXRlIHRvIGFwcGx5IHRoZSBhcHByb3ByaWF0ZSBsZW5ndGggcmVz
dHJpY3Rpb24KYmFzZWQgb24gd2hldGhlciB0aGUgcGF0aCBpcyByZWxhdGl2
ZSBvciBub3QuICBSZW1vdmUgdGhlIGNoZWNrIGZvcgpjb25uZWN0aW9uX3Bh
dGggYmVpbmcgYWJzb2x1dGUsIGJlY2F1c2UgaXQgaXMgbm90IGd1ZXN0IGNv
bnRyb2xsZWQgZGF0YS4KClRoaXMgaXMgcGFydCBvZiBYU0EtMzIzLgoKU2ln
bmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0
cml4LmNvbT4KU2lnbmVkLW9mZi1ieTogRWR3aW4gVMO2csO2ayA8ZWR2aW4u
dG9yb2tAY2l0cml4LmNvbT4KQWNrZWQtYnk6IENocmlzdGlhbiBMaW5kaWcg
PGNocmlzdGlhbi5saW5kaWdAY2l0cml4LmNvbT4KCmRpZmYgLS1naXQgYS90
b29scy9vY2FtbC9saWJzL3hiL3BhcnRpYWwubWwgYi90b29scy9vY2FtbC9s
aWJzL3hiL3BhcnRpYWwubWwKaW5kZXggZDRkMWM3YmRlYy4uYjZlMmE3MTZl
MiAxMDA2NDQKLS0tIGEvdG9vbHMvb2NhbWwvbGlicy94Yi9wYXJ0aWFsLm1s
CisrKyBiL3Rvb2xzL29jYW1sL2xpYnMveGIvcGFydGlhbC5tbApAQCAtMjgs
NiArMjgsNyBAQCBleHRlcm5hbCBoZWFkZXJfb2Zfc3RyaW5nX2ludGVybmFs
OiBzdHJpbmcgLT4gaW50ICogaW50ICogaW50ICogaW50CiAgICAgICAgICA9
ICJzdHViX2hlYWRlcl9vZl9zdHJpbmciCiAKIGxldCB4ZW5zdG9yZV9wYXls
b2FkX21heCA9IDQwOTYgKCogeGVuL2luY2x1ZGUvcHVibGljL2lvL3hzX3dp
cmUuaCAqKQorbGV0IHhlbnN0b3JlX3JlbF9wYXRoX21heCA9IDIwNDggKCog
eGVuL2luY2x1ZGUvcHVibGljL2lvL3hzX3dpcmUuaCAqKQogCiBsZXQgb2Zf
c3RyaW5nIHMgPQogCWxldCB0aWQsIHJpZCwgb3BpbnQsIGRsZW4gPSBoZWFk
ZXJfb2Zfc3RyaW5nX2ludGVybmFsIHMgaW4KZGlmZiAtLWdpdCBhL3Rvb2xz
L29jYW1sL2xpYnMveGIvcGFydGlhbC5tbGkgYi90b29scy9vY2FtbC9saWJz
L3hiL3BhcnRpYWwubWxpCmluZGV4IDM1OWE3NWU4OGQuLmI5MjE2MDE4ZjUg
MTAwNjQ0Ci0tLSBhL3Rvb2xzL29jYW1sL2xpYnMveGIvcGFydGlhbC5tbGkK
KysrIGIvdG9vbHMvb2NhbWwvbGlicy94Yi9wYXJ0aWFsLm1saQpAQCAtOSw2
ICs5LDcgQEAgZXh0ZXJuYWwgaGVhZGVyX3NpemUgOiB1bml0IC0+IGludCA9
ICJzdHViX2hlYWRlcl9zaXplIgogZXh0ZXJuYWwgaGVhZGVyX29mX3N0cmlu
Z19pbnRlcm5hbCA6IHN0cmluZyAtPiBpbnQgKiBpbnQgKiBpbnQgKiBpbnQK
ICAgPSAic3R1Yl9oZWFkZXJfb2Zfc3RyaW5nIgogdmFsIHhlbnN0b3JlX3Bh
eWxvYWRfbWF4IDogaW50Cit2YWwgeGVuc3RvcmVfcmVsX3BhdGhfbWF4IDog
aW50CiB2YWwgb2Zfc3RyaW5nIDogc3RyaW5nIC0+IHBrdAogdmFsIGFwcGVu
ZCA6IHBrdCAtPiBzdHJpbmcgLT4gaW50IC0+IHVuaXQKIHZhbCB0b19jb21w
bGV0ZSA6IHBrdCAtPiBpbnQKZGlmZiAtLWdpdCBhL3Rvb2xzL29jYW1sL3hl
bnN0b3JlZC9kZWZpbmUubWwgYi90b29scy9vY2FtbC94ZW5zdG9yZWQvZGVm
aW5lLm1sCmluZGV4IGVhOWUxYjc2MjAuLmViZTE4YjhlMzEgMTAwNjQ0Ci0t
LSBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9kZWZpbmUubWwKKysrIGIvdG9v
bHMvb2NhbWwveGVuc3RvcmVkL2RlZmluZS5tbApAQCAtMzEsNiArMzEsOCBA
QCBsZXQgY29uZmxpY3RfcmF0ZV9saW1pdF9pc19hZ2dyZWdhdGUgPSByZWYg
dHJ1ZQogCiBsZXQgZG9taWRfc2VsZiA9IDB4N0ZGMAogCitsZXQgcGF0aF9t
YXggPSByZWYgWGVuYnVzLlBhcnRpYWwueGVuc3RvcmVfcmVsX3BhdGhfbWF4
CisKIGV4Y2VwdGlvbiBOb3RfYV9kaXJlY3Rvcnkgb2Ygc3RyaW5nCiBleGNl
cHRpb24gTm90X2FfdmFsdWUgb2Ygc3RyaW5nCiBleGNlcHRpb24gQWxyZWFk
eV9leGlzdApkaWZmIC0tZ2l0IGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL2Rv
bWFpbi5tbCBiL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9kb21haW4ubWwKaW5k
ZXggYWViMTg1ZmY3ZS4uODFjYjU5YjhmMSAxMDA2NDQKLS0tIGEvdG9vbHMv
b2NhbWwveGVuc3RvcmVkL2RvbWFpbi5tbAorKysgYi90b29scy9vY2FtbC94
ZW5zdG9yZWQvZG9tYWluLm1sCkBAIC0zOCw3ICszOCw2IEBAIHR5cGUgdCA9
CiB9CiAKIGxldCBpc19kb20wIGQgPSBkLmlkID0gMAotbGV0IGdldF9wYXRo
IGRvbSA9ICIvbG9jYWwvZG9tYWluLyIgXiAoc3ByaW50ZiAiJXUiIGRvbS5p
ZCkKIGxldCBnZXRfaWQgZG9tYWluID0gZG9tYWluLmlkCiBsZXQgZ2V0X2lu
dGVyZmFjZSBkID0gZC5pbnRlcmZhY2UKIGxldCBnZXRfbWZuIGQgPSBkLm1m
bgpkaWZmIC0tZ2l0IGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL294ZW5zdG9y
ZWQuY29uZi5pbiBiL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9veGVuc3RvcmVk
LmNvbmYuaW4KaW5kZXggZjg0MzQ4Mjk4MS4uNGFlNDhlNDJkNCAxMDA2NDQK
LS0tIGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL294ZW5zdG9yZWQuY29uZi5p
bgorKysgYi90b29scy9vY2FtbC94ZW5zdG9yZWQvb3hlbnN0b3JlZC5jb25m
LmluCkBAIC02MSw2ICs2MSw3IEBAIHF1b3RhLW1heHNpemUgPSAyMDQ4CiBx
dW90YS1tYXh3YXRjaCA9IDEwMAogcXVvdGEtdHJhbnNhY3Rpb24gPSAxMAog
cXVvdGEtbWF4cmVxdWVzdHMgPSAxMDI0CitxdW90YS1wYXRoLW1heCA9IDEw
MjQKIAogIyBBY3RpdmF0ZSBmaWxlZCBiYXNlIGJhY2tlbmQKIHBlcnNpc3Rl
bnQgPSBmYWxzZQpkaWZmIC0tZ2l0IGEvdG9vbHMvb2NhbWwveGVuc3RvcmVk
L3V0aWxzLm1sIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3V0aWxzLm1sCmlu
ZGV4IGU4YzlmZTRlOTQuLmViNzliZjAxNDYgMTAwNjQ0Ci0tLSBhL3Rvb2xz
L29jYW1sL3hlbnN0b3JlZC91dGlscy5tbAorKysgYi90b29scy9vY2FtbC94
ZW5zdG9yZWQvdXRpbHMubWwKQEAgLTkzLDcgKzkzLDcgQEAgbGV0IHJlYWRf
ZmlsZV9zaW5nbGVfaW50ZWdlciBmaWxlbmFtZSA9CiBsZXQgcGF0aF92YWxp
ZGF0ZSBwYXRoIGNvbm5lY3Rpb25fcGF0aCA9CiAJbGV0IGxlbiA9IFN0cmlu
Zy5sZW5ndGggcGF0aCBpbgogCi0JaWYgbGVuID0gMCB8fCBsZW4gPiAxMDI0
IHRoZW4gcmFpc2UgRGVmaW5lLkludmFsaWRfcGF0aDsKKwlpZiBsZW4gPSAw
IHRoZW4gcmFpc2UgRGVmaW5lLkludmFsaWRfcGF0aDsKIAogCWxldCBhYnNf
cGF0aCA9CiAJCW1hdGNoIFN0cmluZy5nZXQgcGF0aCAwIHdpdGgKQEAgLTEw
MSw0ICsxMDEsMTcgQEAgbGV0IHBhdGhfdmFsaWRhdGUgcGF0aCBjb25uZWN0
aW9uX3BhdGggPQogCQl8IF8gICAtPiBjb25uZWN0aW9uX3BhdGggXiBwYXRo
CiAJaW4KIAorCSgqIFJlZ2FyZGxlc3Mgd2hldGhlciBjbGllbnQgc3BlY2lm
aWVkIGFic29sdXRlIG9yIHJlbGF0aXZlIHBhdGgsCisJICAgY2Fub25pY2Fs
aXplIGl0IChhYm92ZSkgYW5kLCBmb3IgZG9tYWluLXJlbGF0aXZlIHBhdGhz
LCBjaGVjayB0aGUKKwkgICBsZW5ndGggb2YgdGhlIHJlbGF0aXZlIHBhcnQu
CisKKwkgICBUaGlzIHByZXZlbnRzIHBhdGhzIGJlY29taW5nIGludmFsaWQg
YWNyb3NzIG1pZ3JhdGUgd2hlbiB0aGUgbGVuZ3RoCisJICAgb2YgdGhlIGRv
bWlkIGNoYW5nZXMgaW4gQHBhcmFtIGNvbm5lY3Rpb25fcGF0aC4KKwkgKikK
KwlsZXQgbGVuID0gU3RyaW5nLmxlbmd0aCBhYnNfcGF0aCBpbgorCWxldCBv
bl9hYnNvbHV0ZSBfIF8gPSBsZW4gaW4KKwlsZXQgb25fcmVsYXRpdmUgXyBv
ZmZzZXQgPSBsZW4gLSBvZmZzZXQgaW4KKwlsZXQgbGVuID0gU2NhbmYua3Nz
Y2FuZiBhYnNfcGF0aCBvbl9hYnNvbHV0ZSAiL2xvY2FsL2RvbWFpbi8lZC8l
biIgb25fcmVsYXRpdmUgaW4KKwlpZiBsZW4gPiAhRGVmaW5lLnBhdGhfbWF4
IHRoZW4gcmFpc2UgRGVmaW5lLkludmFsaWRfcGF0aDsKKwogCWFic19wYXRo
CmRpZmYgLS1naXQgYS90b29scy9vY2FtbC94ZW5zdG9yZWQveGVuc3RvcmVk
Lm1sIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3hlbnN0b3JlZC5tbAppbmRl
eCBmZjlmYmJiYWMyLi4zOWQ2ZDc2N2U0IDEwMDY0NAotLS0gYS90b29scy9v
Y2FtbC94ZW5zdG9yZWQveGVuc3RvcmVkLm1sCisrKyBiL3Rvb2xzL29jYW1s
L3hlbnN0b3JlZC94ZW5zdG9yZWQubWwKQEAgLTEwMiw2ICsxMDIsNyBAQCBs
ZXQgcGFyc2VfY29uZmlnIGZpbGVuYW1lID0KIAkJKCJxdW90YS1tYXhlbnRp
dHkiLCBDb25maWcuU2V0X2ludCBRdW90YS5tYXhlbnQpOwogCQkoInF1b3Rh
LW1heHNpemUiLCBDb25maWcuU2V0X2ludCBRdW90YS5tYXhzaXplKTsKIAkJ
KCJxdW90YS1tYXhyZXF1ZXN0cyIsIENvbmZpZy5TZXRfaW50IERlZmluZS5t
YXhyZXF1ZXN0cyk7CisJCSgicXVvdGEtcGF0aC1tYXgiLCBDb25maWcuU2V0
X2ludCBEZWZpbmUucGF0aF9tYXgpOwogCQkoInRlc3QtZWFnYWluIiwgQ29u
ZmlnLlNldF9ib29sIFRyYW5zYWN0aW9uLnRlc3RfZWFnYWluKTsKIAkJKCJw
ZXJzaXN0ZW50IiwgQ29uZmlnLlNldF9ib29sIERpc2suZW5hYmxlKTsKIAkJ
KCJ4ZW5zdG9yZWQtbG9nLWZpbGUiLCBDb25maWcuU3RyaW5nIExvZ2dpbmcu
c2V0X3hlbnN0b3JlZF9sb2dfZGVzdGluYXRpb24pOwo=

--=separator
Content-Type: application/octet-stream; name="xsa323-4.10.patch"
Content-Disposition: attachment; filename="xsa323-4.10.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogRml4IHBhdGggbGVuZ3RoIHZhbGlkYXRpb24KTUlNRS1WZXJz
aW9uOiAxLjAKQ29udGVudC1UeXBlOiB0ZXh0L3BsYWluOyBjaGFyc2V0PVVU
Ri04CkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDhiaXQKCkN1cnJlbnRs
eSwgb3hlbnN0b3JlZCBjaGVja3MgdGhlIGxlbmd0aCBvZiBwYXRocyBhZ2Fp
bnN0IDEwMjQsIHRoZW4KcHJlcGVuZHMgIi9sb2NhbC9kb21haW4vJERPTUlE
LyIgdG8gcmVsYXRpdmUgcGF0aHMuICBUaGlzIGFsbG93cyBhIGRvbVUKdG8g
Y3JlYXRlIHBhdGhzIHdoaWNoIGNhbid0IHN1YnNlcXVlbnRseSBiZSByZWFk
IGJ5IGFueW9uZSwgZXZlbiBkb20wLgpUaGlzIGFsc28gaW50ZXJmZXJlcyB3
aXRoIGxpc3RpbmcgZGlyZWN0b3JpZXMsIGV0Yy4KCkRlZmluZSBhIG5ldyBv
eGVuc3RvcmVkLmNvbmYgZW50cnk6IHF1b3RhLXBhdGgtbWF4LCBkZWZhdWx0
aW5nIHRvIDEwMjQKYXMgYmVmb3JlLiAgRm9yIHBhdGhzIHRoYXQgYmVnaW4g
d2l0aCAiL2xvY2FsL2RvbWFpbi8kRE9NSUQvIiBjaGVjayB0aGUKcmVsYXRp
dmUgcGF0aCBsZW5ndGggYWdhaW5zdCB0aGlzIHF1b3RhLiBGb3IgYWxsIG90
aGVyIHBhdGhzIGNoZWNrIHRoZQplbnRpcmUgcGF0aCBsZW5ndGguCgpUaGlz
IGVuc3VyZXMgdGhhdCBpZiB0aGUgZG9taWQgY2hhbmdlcyAoYW5kIHRodXMg
dGhlIGxlbmd0aCBvZiBhIHByZWZpeApjaGFuZ2VzKSBhIHBhdGggdGhhdCB1
c2VkIHRvIGJlIHZhbGlkIHN0YXlzIHZhbGlkIChlLmcuIGFmdGVyIGEKbGl2
ZS1taWdyYXRpb24pLiAgSXQgYWxzbyBlbnN1cmVzIHRoYXQgcmVnYXJkbGVz
cyBob3cgdGhlIGNsaWVudCB0cmllcwp0byBhY2Nlc3MgYSBwYXRoIChkb21p
ZC1yZWxhdGl2ZSBvciBhYnNvbHV0ZSkgaXQgd2lsbCBnZXQgY29uc2lzdGVu
dApyZXN1bHRzLCBzaW5jZSB0aGUgbGltaXQgaXMgYWx3YXlzIGFwcGxpZWQg
b24gdGhlIGZpbmFsIGNhbm9uaWNhbGl6ZWQKcGF0aC4KCkRlbGV0ZSB0aGUg
dW51c2VkIERvbWFpbi5nZXRfcGF0aCB0byBhdm9pZCBpdCBiZWluZyBjb25m
dXNlZCB3aXRoCkNvbm5lY3Rpb24uZ2V0X3BhdGggKHdoaWNoIGRpZmZlcnMg
YnkgYSB0cmFpbGluZyBzbGFzaCBvbmx5KS4KClJld3JpdGUgVXRpbC5wYXRo
X3ZhbGlkYXRlIHRvIGFwcGx5IHRoZSBhcHByb3ByaWF0ZSBsZW5ndGggcmVz
dHJpY3Rpb24KYmFzZWQgb24gd2hldGhlciB0aGUgcGF0aCBpcyByZWxhdGl2
ZSBvciBub3QuICBSZW1vdmUgdGhlIGNoZWNrIGZvcgpjb25uZWN0aW9uX3Bh
dGggYmVpbmcgYWJzb2x1dGUsIGJlY2F1c2UgaXQgaXMgbm90IGd1ZXN0IGNv
bnRyb2xsZWQgZGF0YS4KClRoaXMgaXMgcGFydCBvZiBYU0EtMzIzLgoKU2ln
bmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0
cml4LmNvbT4KU2lnbmVkLW9mZi1ieTogRWR3aW4gVMO2csO2ayA8ZWR2aW4u
dG9yb2tAY2l0cml4LmNvbT4KQWNrZWQtYnk6IENocmlzdGlhbiBMaW5kaWcg
PGNocmlzdGlhbi5saW5kaWdAY2l0cml4LmNvbT4KCmRpZmYgLS1naXQgYS90
b29scy9vY2FtbC9saWJzL3hiL3BhcnRpYWwubWwgYi90b29scy9vY2FtbC9s
aWJzL3hiL3BhcnRpYWwubWwKaW5kZXggZDRkMWM3YmRlYy4uYjZlMmE3MTZl
MiAxMDA2NDQKLS0tIGEvdG9vbHMvb2NhbWwvbGlicy94Yi9wYXJ0aWFsLm1s
CisrKyBiL3Rvb2xzL29jYW1sL2xpYnMveGIvcGFydGlhbC5tbApAQCAtMjgs
NiArMjgsNyBAQCBleHRlcm5hbCBoZWFkZXJfb2Zfc3RyaW5nX2ludGVybmFs
OiBzdHJpbmcgLT4gaW50ICogaW50ICogaW50ICogaW50CiAgICAgICAgICA9
ICJzdHViX2hlYWRlcl9vZl9zdHJpbmciCiAKIGxldCB4ZW5zdG9yZV9wYXls
b2FkX21heCA9IDQwOTYgKCogeGVuL2luY2x1ZGUvcHVibGljL2lvL3hzX3dp
cmUuaCAqKQorbGV0IHhlbnN0b3JlX3JlbF9wYXRoX21heCA9IDIwNDggKCog
eGVuL2luY2x1ZGUvcHVibGljL2lvL3hzX3dpcmUuaCAqKQogCiBsZXQgb2Zf
c3RyaW5nIHMgPQogCWxldCB0aWQsIHJpZCwgb3BpbnQsIGRsZW4gPSBoZWFk
ZXJfb2Zfc3RyaW5nX2ludGVybmFsIHMgaW4KZGlmZiAtLWdpdCBhL3Rvb2xz
L29jYW1sL3hlbnN0b3JlZC9kZWZpbmUubWwgYi90b29scy9vY2FtbC94ZW5z
dG9yZWQvZGVmaW5lLm1sCmluZGV4IDI5NjVjMDg1MzQuLmY1NzQzOTdhNGMg
MTAwNjQ0Ci0tLSBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9kZWZpbmUubWwK
KysrIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL2RlZmluZS5tbApAQCAtMzIs
NiArMzIsOCBAQCBsZXQgY29uZmxpY3RfcmF0ZV9saW1pdF9pc19hZ2dyZWdh
dGUgPSByZWYgdHJ1ZQogCiBsZXQgZG9taWRfc2VsZiA9IDB4N0ZGMAogCits
ZXQgcGF0aF9tYXggPSByZWYgWGVuYnVzLlBhcnRpYWwueGVuc3RvcmVfcmVs
X3BhdGhfbWF4CisKIGV4Y2VwdGlvbiBOb3RfYV9kaXJlY3Rvcnkgb2Ygc3Ry
aW5nCiBleGNlcHRpb24gTm90X2FfdmFsdWUgb2Ygc3RyaW5nCiBleGNlcHRp
b24gQWxyZWFkeV9leGlzdApkaWZmIC0tZ2l0IGEvdG9vbHMvb2NhbWwveGVu
c3RvcmVkL2RvbWFpbi5tbCBiL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9kb21h
aW4ubWwKaW5kZXggYjBhMDFiMDZmYS4uMDgxMDc2MjcxYSAxMDA2NDQKLS0t
IGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL2RvbWFpbi5tbAorKysgYi90b29s
cy9vY2FtbC94ZW5zdG9yZWQvZG9tYWluLm1sCkBAIC0zOCw3ICszOCw2IEBA
IHR5cGUgdCA9CiB9CiAKIGxldCBpc19kb20wIGQgPSBkLmlkID0gMAotbGV0
IGdldF9wYXRoIGRvbSA9ICIvbG9jYWwvZG9tYWluLyIgXiAoc3ByaW50ZiAi
JXUiIGRvbS5pZCkKIGxldCBnZXRfaWQgZG9tYWluID0gZG9tYWluLmlkCiBs
ZXQgZ2V0X2ludGVyZmFjZSBkID0gZC5pbnRlcmZhY2UKIGxldCBnZXRfbWZu
IGQgPSBkLm1mbgpkaWZmIC0tZ2l0IGEvdG9vbHMvb2NhbWwveGVuc3RvcmVk
L294ZW5zdG9yZWQuY29uZi5pbiBiL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9v
eGVuc3RvcmVkLmNvbmYuaW4KaW5kZXggZDVkNGYwMGRlOC4uYmVmNjMzMDkw
YiAxMDA2NDQKLS0tIGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL294ZW5zdG9y
ZWQuY29uZi5pbgorKysgYi90b29scy9vY2FtbC94ZW5zdG9yZWQvb3hlbnN0
b3JlZC5jb25mLmluCkBAIC02MSw2ICs2MSw3IEBAIHF1b3RhLW1heHNpemUg
PSAyMDQ4CiBxdW90YS1tYXh3YXRjaCA9IDEwMAogcXVvdGEtdHJhbnNhY3Rp
b24gPSAxMAogcXVvdGEtbWF4cmVxdWVzdHMgPSAxMDI0CitxdW90YS1wYXRo
LW1heCA9IDEwMjQKIAogIyBBY3RpdmF0ZSBmaWxlZCBiYXNlIGJhY2tlbmQK
IHBlcnNpc3RlbnQgPSBmYWxzZQpkaWZmIC0tZ2l0IGEvdG9vbHMvb2NhbWwv
eGVuc3RvcmVkL3V0aWxzLm1sIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3V0
aWxzLm1sCmluZGV4IGYzZDk1ZTg4OTcuLjAyNjM5Yzc3ZTkgMTAwNjQ0Ci0t
LSBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC91dGlscy5tbAorKysgYi90b29s
cy9vY2FtbC94ZW5zdG9yZWQvdXRpbHMubWwKQEAgLTk0LDcgKzk0LDcgQEAg
bGV0IHJlYWRfZmlsZV9zaW5nbGVfaW50ZWdlciBmaWxlbmFtZSA9CiBsZXQg
cGF0aF92YWxpZGF0ZSBwYXRoIGNvbm5lY3Rpb25fcGF0aCA9CiAJbGV0IGxl
biA9IFN0cmluZy5sZW5ndGggcGF0aCBpbgogCi0JaWYgbGVuID0gMCB8fCBs
ZW4gPiAxMDI0IHRoZW4gcmFpc2UgRGVmaW5lLkludmFsaWRfcGF0aDsKKwlp
ZiBsZW4gPSAwIHRoZW4gcmFpc2UgRGVmaW5lLkludmFsaWRfcGF0aDsKIAog
CWxldCBhYnNfcGF0aCA9CiAJCW1hdGNoIFN0cmluZy5nZXQgcGF0aCAwIHdp
dGgKQEAgLTEwMiw0ICsxMDIsMTcgQEAgbGV0IHBhdGhfdmFsaWRhdGUgcGF0
aCBjb25uZWN0aW9uX3BhdGggPQogCQl8IF8gICAtPiBjb25uZWN0aW9uX3Bh
dGggXiBwYXRoCiAJaW4KIAorCSgqIFJlZ2FyZGxlc3Mgd2hldGhlciBjbGll
bnQgc3BlY2lmaWVkIGFic29sdXRlIG9yIHJlbGF0aXZlIHBhdGgsCisJICAg
Y2Fub25pY2FsaXplIGl0IChhYm92ZSkgYW5kLCBmb3IgZG9tYWluLXJlbGF0
aXZlIHBhdGhzLCBjaGVjayB0aGUKKwkgICBsZW5ndGggb2YgdGhlIHJlbGF0
aXZlIHBhcnQuCisKKwkgICBUaGlzIHByZXZlbnRzIHBhdGhzIGJlY29taW5n
IGludmFsaWQgYWNyb3NzIG1pZ3JhdGUgd2hlbiB0aGUgbGVuZ3RoCisJICAg
b2YgdGhlIGRvbWlkIGNoYW5nZXMgaW4gQHBhcmFtIGNvbm5lY3Rpb25fcGF0
aC4KKwkgKikKKwlsZXQgbGVuID0gU3RyaW5nLmxlbmd0aCBhYnNfcGF0aCBp
bgorCWxldCBvbl9hYnNvbHV0ZSBfIF8gPSBsZW4gaW4KKwlsZXQgb25fcmVs
YXRpdmUgXyBvZmZzZXQgPSBsZW4gLSBvZmZzZXQgaW4KKwlsZXQgbGVuID0g
U2NhbmYua3NzY2FuZiBhYnNfcGF0aCBvbl9hYnNvbHV0ZSAiL2xvY2FsL2Rv
bWFpbi8lZC8lbiIgb25fcmVsYXRpdmUgaW4KKwlpZiBsZW4gPiAhRGVmaW5l
LnBhdGhfbWF4IHRoZW4gcmFpc2UgRGVmaW5lLkludmFsaWRfcGF0aDsKKwog
CWFic19wYXRoCmRpZmYgLS1naXQgYS90b29scy9vY2FtbC94ZW5zdG9yZWQv
eGVuc3RvcmVkLm1sIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3hlbnN0b3Jl
ZC5tbAppbmRleCAxODNkZDI3NTRiLi43MGYxYmY4ZDJlIDEwMDY0NAotLS0g
YS90b29scy9vY2FtbC94ZW5zdG9yZWQveGVuc3RvcmVkLm1sCisrKyBiL3Rv
b2xzL29jYW1sL3hlbnN0b3JlZC94ZW5zdG9yZWQubWwKQEAgLTEwMiw2ICsx
MDIsNyBAQCBsZXQgcGFyc2VfY29uZmlnIGZpbGVuYW1lID0KIAkJKCJxdW90
YS1tYXhlbnRpdHkiLCBDb25maWcuU2V0X2ludCBRdW90YS5tYXhlbnQpOwog
CQkoInF1b3RhLW1heHNpemUiLCBDb25maWcuU2V0X2ludCBRdW90YS5tYXhz
aXplKTsKIAkJKCJxdW90YS1tYXhyZXF1ZXN0cyIsIENvbmZpZy5TZXRfaW50
IERlZmluZS5tYXhyZXF1ZXN0cyk7CisJCSgicXVvdGEtcGF0aC1tYXgiLCBD
b25maWcuU2V0X2ludCBEZWZpbmUucGF0aF9tYXgpOwogCQkoInRlc3QtZWFn
YWluIiwgQ29uZmlnLlNldF9ib29sIFRyYW5zYWN0aW9uLnRlc3RfZWFnYWlu
KTsKIAkJKCJwZXJzaXN0ZW50IiwgQ29uZmlnLlNldF9ib29sIERpc2suZW5h
YmxlKTsKIAkJKCJ4ZW5zdG9yZWQtbG9nLWZpbGUiLCBDb25maWcuU3RyaW5n
IExvZ2dpbmcuc2V0X3hlbnN0b3JlZF9sb2dfZGVzdGluYXRpb24pOwo=

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 12:20:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 12:20:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53147.92819 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9Jt-0005AH-Bi; Tue, 15 Dec 2020 12:20:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53147.92819; Tue, 15 Dec 2020 12:20:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9Jt-0005A5-62; Tue, 15 Dec 2020 12:20:45 +0000
Received: by outflank-mailman (input) for mailman id 53147;
 Tue, 15 Dec 2020 12:20:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tdgx=FT=xenbits.xen.org=gdunlap@srs-us1.protection.inumbo.net>)
 id 1kp9Jr-0004t1-Km
 for xen-devel@lists.xen.org; Tue, 15 Dec 2020 12:20:43 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7a8e9201-670a-4a40-bb4a-f69eee5766bc;
 Tue, 15 Dec 2020 12:20:23 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9JS-0005hC-M7; Tue, 15 Dec 2020 12:20:18 +0000
Received: from gdunlap by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9JS-00071M-L4; Tue, 15 Dec 2020 12:20:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a8e9201-670a-4a40-bb4a-f69eee5766bc
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=sFsBolSRPNjeYw9DvQPhXm69yPmYRcRSCuSvqENNVoc=; b=ocof4P2UiDxXaCf5aodT+7C73t
	WurKV3ign8eBilaYTliqqxrEcJVofiVSa4zUDBf2RR8HGsWKDDh+SIfoValM0/pFGIo7ktImghKw2
	vh29XyNP6T/T5ja+6VhALGwX0sSH2DuODv62zLeYDLSUa5GPmA0DKnTwC7lR5MMDlU7c=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 324 v3 (CVE-2020-29484) - Xenstore: guests
 can crash xenstored via watches
Message-Id: <E1kp9JS-00071M-L4@xenbits.xenproject.org>
Date: Tue, 15 Dec 2020 12:20:18 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2020-29484 / XSA-324
                               version 3

            Xenstore: guests can crash xenstored via watches

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

When a Xenstore watch fires, the xenstore client which registered the
watch will receive a Xenstore message containing the path of the
modified Xenstore entry which triggered the watch, and the tag which
was specified when registering the watch.

Any communication with xenstored is done via Xenstore messages,
consisting of a message header and the payload. The payload length is
limited to 4096 bytes. Any request to xenstored resulting in a
response with a payload longer than 4096 bytes will result in an
error.

When registering a watch, the payload length limit applies to the
combined length of the watched path and the specified tag. As watches
for a specific path are also triggered for all nodes below that path,
the payload of a watch event message can be longer than the payload
needed to register the watch.

A malicious guest which registers a watch using a very large tag (i.e.
with a registration operation payload length close to the 4096 byte
limit) can cause the generation of watch events with a payload length
larger than 4096 bytes, by writing to Xenstore entries below the
watched path.

This will result in an error condition in xenstored.  The error can in
turn cause a NULL pointer dereference, crashing xenstored.
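
The length arithmetic described above can be sketched as follows.  This
is a minimal illustration with made-up path names, not xenstored source;
XENSTORE_PAYLOAD_MAX mirrors the 4096-byte limit mentioned above:

```python
# Minimal sketch of the XSA-324 length problem (illustrative, not
# xenstored source).  A watch event payload is "<fired path>\0<tag>\0",
# and xenstored rejects any payload longer than 4096 bytes.
XENSTORE_PAYLOAD_MAX = 4096

def watch_payload_len(path: str, tag: str) -> int:
    # One NUL terminator after the path and one after the tag.
    return len(path) + 1 + len(tag) + 1

# A registration whose path + tag just fits under the limit is accepted...
watched = "/local/domain/1/data"
tag = "T" * (XENSTORE_PAYLOAD_MAX - watch_payload_len(watched, ""))
assert watch_payload_len(watched, tag) == XENSTORE_PAYLOAD_MAX

# ...but an event fired for a node *below* the watched path carries a
# longer path, so the event message exceeds the maximum payload size.
fired = watched + "/deeper/node"
assert watch_payload_len(fired, tag) > XENSTORE_PAYLOAD_MAX
```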

IMPACT
======

A malicious guest administrator can cause xenstored to crash, leading
to a denial of service.  Following a xenstored crash, domains may
continue to run, but management operations will be impossible.

VULNERABLE SYSTEMS
==================

All Xen versions are affected.

Only the C xenstored is affected; oxenstored is not affected.

MITIGATION
==========

There are no mitigations.

Switching to the OCaml xenstored would avoid this vulnerability.
However, given the other vulnerabilities in both xenstored
implementations being reported at this time, changing the xenstored
implementation is not a recommended way to mitigate individual issues.

CREDITS
=======

This issue was discovered by Jürgen Groß of SUSE.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa324.patch           xen-unstable - 4.10

$ sha256sum xsa324*
78932f0a83b479902553b1acdf601f7625b383497c03c6e834a0a2b847f1a72e  xsa324.meta
8dba79842fa913290c7043d065a50abb0efe27fa5a173e421c21c544cc1e264c  xsa324.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decision-making.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl/Yqd4MHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZBoIH/ir2NdOiUg6JFoa/DXgtMBosLXRkRRjikvlaMJTY
krz3r/aBZ0nLn8wsF5u+BctJYdHrIQDrt3N7GGv1wyvnLA18HrtupsxqrHj+CCMD
pogl6QxRmmqRina7+EzRTt8N8qe6fhi8tuVmH3TYlsL1PeHyqNurwwTZizHL9BFx
uCY10qNUV0FTY05tUhdP0FD3yiNfN8QwytARo/LRhELbUMx7D+N/CmUtCKh5uklr
KfBBHy3Vb4MDlGPN7pa5vdEjZGFVj4xHWxUP+72C+bdhvLEiDi+IKkvy/TVbjoAN
eQEfFVjBpj21MeQV+3mHJMJGknaJ8NTc00txrLM5D+WscHM=
=KypE
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa324.meta"
Content-Disposition: attachment; filename="xsa324.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzMjQsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIs
CiAgICAiNC4xMSIsCiAgICAiNC4xMCIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjEwIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICIxZDcyZDk5MTVlZGZmMGRkNDFmNjAxYmJiMGIxZjgzYzAy
ZmYxNjg5IiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
NTMsCiAgICAgICAgICAgIDExNSwKICAgICAgICAgICAgMzIyLAogICAgICAg
ICAgICAzMjMKICAgICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsK
ICAgICAgICAgICAgInhzYTMyNC5wYXRjaCIKICAgICAgICAgIF0KICAgICAg
ICB9CiAgICAgIH0KICAgIH0sCiAgICAiNC4xMSI6IHsKICAgICAgIlJlY2lw
ZXMiOiB7CiAgICAgICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYi
OiAiNDFhODIyYzM5MjYzNTBmMjY5MTdkNzQ3YzhkZmVkMWM0NGEyY2Y0MiIs
CiAgICAgICAgICAiUHJlcmVxcyI6IFsKICAgICAgICAgICAgMzUzLAogICAg
ICAgICAgICAxMTUsCiAgICAgICAgICAgIDMyMiwKICAgICAgICAgICAgMzIz
CiAgICAgICAgICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAg
ICAgICJ4c2EzMjQucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAgfQogICAg
ICB9CiAgICB9LAogICAgIjQuMTIiOiB7CiAgICAgICJSZWNpcGVzIjogewog
ICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAiU3RhYmxlUmVmIjogIjgxNDVk
MzhiNDgwMDkyNTVhMzJhYjg3YTAyZTQ4MWNkMDljODExZjkiLAogICAgICAg
ICAgIlByZXJlcXMiOiBbCiAgICAgICAgICAgIDM1MywKICAgICAgICAgICAg
MTE1LAogICAgICAgICAgICAzMjIsCiAgICAgICAgICAgIDMyMwogICAgICAg
ICAgXSwKICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNh
MzI0LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAg
fSwKICAgICI0LjEzIjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAgICAi
eGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICJiNTMwMjI3M2UyYzUx
OTQwMTcyNDAwNDg2NjQ0NjM2ZjJmNGZjNjRhIiwKICAgICAgICAgICJQcmVy
ZXFzIjogWwogICAgICAgICAgICAzNTMsCiAgICAgICAgICAgIDExNSwKICAg
ICAgICAgICAgMzIyLAogICAgICAgICAgICAzMjMKICAgICAgICAgIF0sCiAg
ICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTMyNC5wYXRj
aCIKICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAgICAi
NC4xNCI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAgInhlbiI6IHsK
ICAgICAgICAgICJTdGFibGVSZWYiOiAiMWQxZDFmNTM5MTk3NjQ1NmE3OWRh
YWMwZGNmZTcxNTdkYTFlNTRmNyIsCiAgICAgICAgICAiUHJlcmVxcyI6IFsK
ICAgICAgICAgICAgMzUzLAogICAgICAgICAgICAxMTUsCiAgICAgICAgICAg
IDMyMiwKICAgICAgICAgICAgMzIzCiAgICAgICAgICBdLAogICAgICAgICAg
IlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2EzMjQucGF0Y2giCiAgICAg
ICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9LAogICAgIm1hc3RlciI6
IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAgInhlbiI6IHsKICAgICAg
ICAgICJTdGFibGVSZWYiOiAiM2FlNDY5YWY4ZTY4MGRmMzFlZWNkMGEyYWM2
YTgzYjU4YWQ3Y2U1MyIsCiAgICAgICAgICAiUHJlcmVxcyI6IFsKICAgICAg
ICAgICAgMzUzLAogICAgICAgICAgICAxMTUsCiAgICAgICAgICAgIDMyMiwK
ICAgICAgICAgICAgMzIzCiAgICAgICAgICBdLAogICAgICAgICAgIlBhdGNo
ZXMiOiBbCiAgICAgICAgICAgICJ4c2EzMjQucGF0Y2giCiAgICAgICAgICBd
CiAgICAgICAgfQogICAgICB9CiAgICB9CiAgfQp9

--=separator
Content-Type: application/octet-stream; name="xsa324.patch"
Content-Disposition: attachment; filename="xsa324.patch"
Content-Transfer-Encoding: base64

RnJvbTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpTdWJqZWN0
OiB0b29scy94ZW5zdG9yZTogZHJvcCB3YXRjaCBldmVudCBtZXNzYWdlcyBl
eGNlZWRpbmcgbWF4aW11bSBzaXplCgpCeSBzZXR0aW5nIGEgd2F0Y2ggd2l0
aCBhIHZlcnkgbGFyZ2UgdGFnIGl0IGlzIHBvc3NpYmxlIHRvIHRyaWNrCnhl
bnN0b3JlZCB0byBzZW5kIHdhdGNoIGV2ZW50IG1lc3NhZ2VzIGV4Y2VlZGlu
ZyB0aGUgbWF4aW11bSBhbGxvd2VkCnBheWxvYWQgc2l6ZS4gVGhpcyBtaWdo
dCBpbiB0dXJuIGxlYWQgdG8gYSBjcmFzaCBvZiB4ZW5zdG9yZWQgYXMgdGhl
CnJlc3VsdGluZyBlcnJvciBjYW4gY2F1c2UgZGVyZWZlcmVuY2luZyBhIE5V
TEwgcG9pbnRlciBpbiBjYXNlIHRoZXJlCmlzIG5vIGFjdGl2ZSByZXF1ZXN0
IGJlaW5nIGhhbmRsZWQgYnkgdGhlIGd1ZXN0IHRoZSB3YXRjaCBldmVudCBp
cwpiZWluZyBzZW50IHRvLgoKRml4IHRoYXQgYnkganVzdCBkcm9wcGluZyBz
dWNoIHdhdGNoIGV2ZW50cy4gQWRkaXRpb25hbGx5IG1vZGlmeSB0aGUKZXJy
b3IgaGFuZGxpbmcgdG8gdGVzdCB0aGUgcG9pbnRlciB0byBiZSBub3QgTlVM
TCBiZWZvcmUgZGVyZWZlcmVuY2luZwppdC4KClRoaXMgaXMgWFNBLTMyNC4K
ClNpZ25lZC1vZmYtYnk6IEp1ZXJnZW4gR3Jvc3MgPGpncm9zc0BzdXNlLmNv
bT4KQWNrZWQtYnk6IEp1bGllbiBHcmFsbCA8amdyYWxsQGFtYXpvbi5jb20+
CgpkaWZmIC0tZ2l0IGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUu
YyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMKaW5kZXggMzNm
OTVkY2YzYy4uM2Q3NGRiYmI0MCAxMDA2NDQKLS0tIGEvdG9vbHMveGVuc3Rv
cmUveGVuc3RvcmVkX2NvcmUuYworKysgYi90b29scy94ZW5zdG9yZS94ZW5z
dG9yZWRfY29yZS5jCkBAIC02NzQsNiArNjc0LDkgQEAgdm9pZCBzZW5kX3Jl
cGx5KHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBlbnVtIHhzZF9zb2NrbXNn
X3R5cGUgdHlwZSwKIAkvKiBSZXBsaWVzIHJldXNlIHRoZSByZXF1ZXN0IGJ1
ZmZlciwgZXZlbnRzIG5lZWQgYSBuZXcgb25lLiAqLwogCWlmICh0eXBlICE9
IFhTX1dBVENIX0VWRU5UKSB7CiAJCWJkYXRhID0gY29ubi0+aW47CisJCS8q
IERyb3AgYXN5bmNocm9ub3VzIHJlc3BvbnNlcywgZS5nLiBlcnJvcnMgZm9y
IHdhdGNoIGV2ZW50cy4gKi8KKwkJaWYgKCFiZGF0YSkKKwkJCXJldHVybjsK
IAkJYmRhdGEtPmluaGRyID0gdHJ1ZTsKIAkJYmRhdGEtPnVzZWQgPSAwOwog
CQljb25uLT5pbiA9IE5VTEw7CmRpZmYgLS1naXQgYS90b29scy94ZW5zdG9y
ZS94ZW5zdG9yZWRfd2F0Y2guYyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3Jl
ZF93YXRjaC5jCmluZGV4IDcxYzEwOGVhOTkuLjlmZjIwNjkwYzAgMTAwNjQ0
Ci0tLSBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF93YXRjaC5jCisrKyBi
L3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF93YXRjaC5jCkBAIC05Miw2ICs5
MiwxMCBAQCBzdGF0aWMgdm9pZCBhZGRfZXZlbnQoc3RydWN0IGNvbm5lY3Rp
b24gKmNvbm4sCiAJfQogCiAJbGVuID0gc3RybGVuKG5hbWUpICsgMSArIHN0
cmxlbih3YXRjaC0+dG9rZW4pICsgMTsKKwkvKiBEb24ndCB0cnkgdG8gc2Vu
ZCBvdmVyLWxvbmcgZXZlbnRzLiAqLworCWlmIChsZW4gPiBYRU5TVE9SRV9Q
QVlMT0FEX01BWCkKKwkJcmV0dXJuOworCiAJZGF0YSA9IHRhbGxvY19hcnJh
eShjdHgsIGNoYXIsIGxlbik7CiAJaWYgKCFkYXRhKQogCQlyZXR1cm47Cg==

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 12:20:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 12:20:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53148.92829 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9Ju-0005C8-5F; Tue, 15 Dec 2020 12:20:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53148.92829; Tue, 15 Dec 2020 12:20:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9Jt-0005Bf-SV; Tue, 15 Dec 2020 12:20:45 +0000
Received: by outflank-mailman (input) for mailman id 53148;
 Tue, 15 Dec 2020 12:20:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tdgx=FT=xenbits.xen.org=gdunlap@srs-us1.protection.inumbo.net>)
 id 1kp9Js-0004tM-28
 for xen-devel@lists.xen.org; Tue, 15 Dec 2020 12:20:44 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c38019ec-e8a0-46e5-8d3f-a4bad0967cde;
 Tue, 15 Dec 2020 12:20:23 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9JQ-0005gv-O7; Tue, 15 Dec 2020 12:20:16 +0000
Received: from gdunlap by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9JQ-0006zQ-Lh; Tue, 15 Dec 2020 12:20:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c38019ec-e8a0-46e5-8d3f-a4bad0967cde
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=ERglhXApYCPf3m5c7X8OaQsbXftKIHqIEbQw1ZUOdZQ=; b=j+xd0oWYOZnPnioeLbhvir9O6j
	C2Wph6rUzDRu2j8rDYFf4m04EeFqQtuxV2xl/TXViZZ/et5Z+3UGXJlrrrIME+jVKloZExiQb1NBi
	8zexMRQsdpbCH13E3HF6J0ITj5ptQBI9B4L2ffCzDOxazuMF1fv1h8yLm5A/zlLVnkWQ=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 322 v4 (CVE-2020-29481) - Xenstore: new
 domains inheriting existing node permissions
Message-Id: <E1kp9JQ-0006zQ-Lh@xenbits.xenproject.org>
Date: Tue, 15 Dec 2020 12:20:16 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2020-29481 / XSA-322
                               version 4

       Xenstore: new domains inheriting existing node permissions

UPDATES IN VERSION 4
====================

Public release.

ISSUE DESCRIPTION
=================

Access rights of Xenstore nodes are per domid.  Unfortunately,
existing granted access rights are not removed when a domain is
destroyed.  This means that a new domain created with the same domid
will inherit the access rights to Xenstore nodes from the previous
domain(s) with the same domid.

All Xenstore entries of a guest below /local/domain/<domid> are
deleted by Xen tools when a guest is destroyed.  Therefore only
entries belonging to other guests, referring to the deleted guests,
are potentially affected.
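
A toy model of the stale-permission problem is sketched below.  The names
and data structures are hypothetical, chosen only to mirror the behaviour
described above, and are not the real xenstored internals:

```python
# Toy model of XSA-322 (illustrative only).  Node access rights are keyed
# by domid, so if they are not scrubbed when a domain is destroyed, a new
# domain reusing that domid inherits the old grants.
node_perms = {"/local/domain/0/backend/cfg": {0, 7}}  # dom0 granted domid 7 access

def destroy_domain(domid: int) -> None:
    # Buggy behaviour: the domain is gone, but grants referring to its
    # domid on other guests' nodes are left in place.
    pass

def create_domain(domid: int) -> int:
    # A fresh, unrelated guest, possibly reusing a previously used domid.
    return domid

destroy_domain(7)
new_guest = create_domain(7)
# The new guest inherits the access rights of the destroyed domain:
assert new_guest in node_perms["/local/domain/0/backend/cfg"]
```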

IMPACT
======

In some circumstances, it might be possible for a new guest domain to
access resources belonging to a previous domain.  The impact would
depend on the software in use and the configuration, but might include
any of denial of service, information leak, or privilege escalation.

VULNERABLE SYSTEMS
==================

All versions of Xen are in principle vulnerable.

Both Xenstore implementations (C and OCaml) are vulnerable.

Vulnerable systems are only those running software where one domain is
granted access to another's xenstore nodes, without complete cleanup
of those nodes on domain destruction.  No such software is enabled in
default configurations of upstream Xen.

Therefore upstream Xen, without additional management software (in
host or guest(s)), is not vulnerable in the default (host and guest)
configuration.

MITIGATION
==========

There is no mitigation available.

CREDITS
=======

This issue was discovered by Jürgen Groß of SUSE.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa322-c.patch             xen-unstable        [C xenstored]
xsa322-4.14-c.patch        Xen 4.14 - 4.13     [C xenstored]
xsa322-4.12-c.patch        Xen 4.12 - 4.10     [C xenstored]

xsa322-o.patch             xen-unstable - 4.12 [OCaml xenstored]
xsa322-4.11-o.patch        Xen 4.11 - 4.10     [OCaml xenstored]

$ sha256sum xsa322*
89e40422e41b8b2f8926ee5081da0e494e8e7312091151d31bfaa29eefa9b669  xsa322.meta
0cfeb0f8dd1c95e628e06f3402cbb5fb58c0972d6616958f5a0fbed59813dd6c  xsa322-4.11-o.patch
d4f9362b6f7ebfb7349849d4449f70b6004779c35238dc628736c541fe9e4279  xsa322-4.12-c.patch
8efe8fc39bf91a1c0cbdbf572deb2592930b757725951f4fdf0c387904ce4293  xsa322-4.14-c.patch
9275c7c36127f0e9719d4cb3162e39ce9233b2b55e9f9307b4c4d370a7b636a3  xsa322-c.patch
42c0818ceff11792517530237c4972967099c9828b4e2b5ec4bf6bfc1825cd7c  xsa322-o.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decision-making.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl/Yqd4MHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZm4QH/A4suMmviY3ihK5d97oiKhJWg/5bgt6ePoJtZwAe
28nqNX3pI3+hi09RTAUpINVXt+3ealblDs9XY4u+2trTX7yqtbdtRrMF+mhkHueK
Pnqvp3qSREDNaAJUN5gmsJ9vfgNwYTWscHqYga69cq4bHaLZJnEZC1He2qvvac67
MmKJk69go6VxCLG6ZAU59aHXzfs0EoQGKPhV6+Fw41HK9CNG8YErfdK1h2RIJ6Jg
GIf0bhgNSPxMs/0wcKJDmj4u3kmFStfDJbzsYxjmu5K0MZVMD87cQv89EHC+gCCc
e4ipgRwM6ba7pD338JT42gDHptqj2Rhg1YszmG2bQO0TQoA=
=MQnE
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa322.meta"
Content-Disposition: attachment; filename="xsa322.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzMjIsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIs
CiAgICAiNC4xMSIsCiAgICAiNC4xMCIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjEwIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICIxZDcyZDk5MTVlZGZmMGRkNDFmNjAxYmJiMGIxZjgzYzAy
ZmYxNjg5IiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
NTMsCiAgICAgICAgICAgIDExNQogICAgICAgICAgXSwKICAgICAgICAgICJQ
YXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzIyLTQuMTItYy5wYXRjaCIs
CiAgICAgICAgICAgICJ4c2EzMjItNC4xMS1vLnBhdGNoIgogICAgICAgICAg
XQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAgICI0LjExIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICI0MWE4MjJjMzkyNjM1MGYyNjkxN2Q3NDdjOGRmZWQxYzQ0
YTJjZjQyIiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
NTMsCiAgICAgICAgICAgIDExNQogICAgICAgICAgXSwKICAgICAgICAgICJQ
YXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzIyLTQuMTItYy5wYXRjaCIs
CiAgICAgICAgICAgICJ4c2EzMjItNC4xMS1vLnBhdGNoIgogICAgICAgICAg
XQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAgICI0LjEyIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICI4MTQ1ZDM4YjQ4MDA5MjU1YTMyYWI4N2EwMmU0ODFjZDA5
YzgxMWY5IiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
NTMsCiAgICAgICAgICAgIDExNQogICAgICAgICAgXSwKICAgICAgICAgICJQ
YXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzIyLTQuMTItYy5wYXRjaCIs
CiAgICAgICAgICAgICJ4c2EzMjItby5wYXRjaCIKICAgICAgICAgIF0KICAg
ICAgICB9CiAgICAgIH0KICAgIH0sCiAgICAiNC4xMyI6IHsKICAgICAgIlJl
Y2lwZXMiOiB7CiAgICAgICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVS
ZWYiOiAiYjUzMDIyNzNlMmM1MTk0MDE3MjQwMDQ4NjY0NDYzNmYyZjRmYzY0
YSIsCiAgICAgICAgICAiUHJlcmVxcyI6IFsKICAgICAgICAgICAgMzUzLAog
ICAgICAgICAgICAxMTUKICAgICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hl
cyI6IFsKICAgICAgICAgICAgInhzYTMyMi00LjE0LWMucGF0Y2giLAogICAg
ICAgICAgICAieHNhMzIyLW8ucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAg
fQogICAgICB9CiAgICB9LAogICAgIjQuMTQiOiB7CiAgICAgICJSZWNpcGVz
IjogewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAiU3RhYmxlUmVmIjog
IjFkMWQxZjUzOTE5NzY0NTZhNzlkYWFjMGRjZmU3MTU3ZGExZTU0ZjciLAog
ICAgICAgICAgIlByZXJlcXMiOiBbCiAgICAgICAgICAgIDM1MywKICAgICAg
ICAgICAgMTE1CiAgICAgICAgICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBb
CiAgICAgICAgICAgICJ4c2EzMjItNC4xNC1jLnBhdGNoIiwKICAgICAgICAg
ICAgInhzYTMyMi1vLnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAg
ICAgfQogICAgfSwKICAgICJtYXN0ZXIiOiB7CiAgICAgICJSZWNpcGVzIjog
ewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAiU3RhYmxlUmVmIjogIjNh
ZTQ2OWFmOGU2ODBkZjMxZWVjZDBhMmFjNmE4M2I1OGFkN2NlNTMiLAogICAg
ICAgICAgIlByZXJlcXMiOiBbCiAgICAgICAgICAgIDM1MywKICAgICAgICAg
ICAgMTE1CiAgICAgICAgICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAg
ICAgICAgICAgICJ4c2EzMjItYy5wYXRjaCIsCiAgICAgICAgICAgICJ4c2Ez
MjItby5wYXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAg
IH0KICB9Cn0=

--=separator
Content-Type: application/octet-stream; name="xsa322-4.11-o.patch"
Content-Disposition: attachment; filename="xsa322-4.11-o.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogY2xlYW4gdXAgcGVybWlzc2lvbnMgZm9yIGRlYWQgZG9tYWlu
cwpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVR5cGU6IHRleHQvcGxhaW47
IGNoYXJzZXQ9VVRGLTgKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogOGJp
dAoKZG9tYWluIGlkcyBhcmUgcHJvbmUgdG8gd3JhcHBpbmcgKDE1LWJpdHMp
LCBhbmQgd2l0aCBzdWZmaWNpZW50IG51bWJlcgpvZiBWTXMgaW4gYSByZWJv
b3QgbG9vcCBpdCBpcyBwb3NzaWJsZSB0byB0cmlnZ2VyIGl0LiAgWGVuc3Rv
cmUgZW50cmllcwptYXkgbGluZ2VyIGFmdGVyIGEgZG9tYWluIGRpZXMsIHVu
dGlsIGEgdG9vbHN0YWNrIGNsZWFucyBpdCB1cC4gRHVyaW5nCnRoaXMgdGlt
ZSB0aGVyZSBpcyBhIHdpbmRvdyB3aGVyZSBhIHdyYXBwZWQgZG9taWQgY291
bGQgYWNjZXNzIHRoZXNlCnhlbnN0b3JlIGtleXMgKHRoYXQgYmVsb25nZWQg
dG8gYW5vdGhlciBWTSkuCgpUbyBwcmV2ZW50IHRoaXMgZG8gYSBjbGVhbnVw
IHdoZW4gYSBkb21haW4gZGllczoKICogd2FsayB0aGUgZW50aXJlIHhlbnN0
b3JlIHRyZWUgYW5kIHVwZGF0ZSBwZXJtaXNzaW9ucyBmb3IgYWxsIG5vZGVz
CiAgICogaWYgdGhlIGRlYWQgZG9tYWluIGhhZCBhbiBBQ0wgZW50cnk6IHJl
bW92ZSBpdAogICAqIGlmIHRoZSBkZWFkIGRvbWFpbiB3YXMgdGhlIG93bmVy
OiBjaGFuZ2UgdGhlIG93bmVyIHRvIERvbTAKClRoaXMgaXMgZG9uZSB3aXRo
b3V0IHF1b3RhIGNoZWNrcyBvciBhIHRyYW5zYWN0aW9uLiBRdW90YSBjaGVj
a3Mgd291bGQKYmUgYSBuby1vcCAoZWl0aGVyIHRoZSBkb21haW4gaXMgZGVh
ZCwgb3IgaXQgaXMgRG9tMCB3aGVyZSB0aGV5IGFyZSBub3QKZW5mb3JjZWQp
LiAgVHJhbnNhY3Rpb25zIGFyZSBub3QgbmVlZGVkLCBiZWNhdXNlIHRoaXMg
aXMgYWxsIGRvbmUKYXRvbWljYWxseSBieSBveGVuc3RvcmVkJ3Mgc2luZ2xl
IHRocmVhZC4KClRoZSB4ZW5zdG9yZSBlbnRyaWVzIG93bmVkIGJ5IHRoZSBk
ZWFkIGRvbWFpbiBhcmUgbm90IGRlbGV0ZWQsIGJlY2F1c2UKdGhhdCBjb3Vs
ZCBjb25mdXNlIGEgdG9vbHN0YWNrIC8gYmFja2VuZHMgdGhhdCBhcmUgc3Rp
bGwgYm91bmQgdG8gaXQKKG9yIGdlbmVyYXRlIHVuZXhwZWN0ZWQgd2F0Y2gg
ZXZlbnRzKS4gSXQgaXMgdGhlIHJlc3BvbnNpYmlsaXR5IG9mIGEKdG9vbHN0
YWNrIHRvIHJlbW92ZSB0aGUgeGVuc3RvcmUgZW50cmllcyB0aGVtc2VsdmVz
LgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0zMjIuCgpTaWduZWQtb2ZmLWJ5OiBF
ZHdpbiBUw7Zyw7ZrIDxlZHZpbi50b3Jva0BjaXRyaXguY29tPgpBY2tlZC1i
eTogQ2hyaXN0aWFuIExpbmRpZyA8Y2hyaXN0aWFuLmxpbmRpZ0BjaXRyaXgu
Y29tPgoKZGlmZiAtLWdpdCBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wZXJt
cy5tbCBiL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wZXJtcy5tbAppbmRleCBl
ZTdmZWU2YmRhLi5lOGExNjIyMWY4IDEwMDY0NAotLS0gYS90b29scy9vY2Ft
bC94ZW5zdG9yZWQvcGVybXMubWwKKysrIGIvdG9vbHMvb2NhbWwveGVuc3Rv
cmVkL3Blcm1zLm1sCkBAIC01OCw2ICs1OCwxNSBAQCBsZXQgZ2V0X290aGVy
IHBlcm1zID0gcGVybXMub3RoZXIKIGxldCBnZXRfYWNsIHBlcm1zID0gcGVy
bXMuYWNsCiBsZXQgZ2V0X293bmVyIHBlcm0gPSBwZXJtLm93bmVyCiAKKygq
KiBbcmVtb3RlX2RvbWlkIH5kb21pZCBwZXJtXSByZW1vdmVzIGFsbCBBQ0xz
IGZvciBbZG9taWRdIGZyb20gcGVybS4KKyogSWYgW2RvbWlkXSB3YXMgdGhl
IG93bmVyIHRoZW4gaXQgaXMgY2hhbmdlZCB0byBEb20wLgorKiBUaGlzIGlz
IHVzZWQgZm9yIGNsZWFuaW5nIHVwIGFmdGVyIGRlYWQgZG9tYWlucy4KKyog
KikKK2xldCByZW1vdmVfZG9taWQgfmRvbWlkIHBlcm0gPQorCWxldCBhY2wg
PSBMaXN0LmZpbHRlciAoZnVuIChhY2xfZG9taWQsIF8pIC0+IGFjbF9kb21p
ZCA8PiBkb21pZCkgcGVybS5hY2wgaW4KKwlsZXQgb3duZXIgPSBpZiBwZXJt
Lm93bmVyID0gZG9taWQgdGhlbiAwIGVsc2UgcGVybS5vd25lciBpbgorCXsg
cGVybSB3aXRoIGFjbDsgb3duZXIgfQorCiBsZXQgZGVmYXVsdDAgPSBjcmVh
dGUgMCBOT05FIFtdCiAKIGxldCBwZXJtX29mX3N0cmluZyBzID0KZGlmZiAt
LWdpdCBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wcm9jZXNzLm1sIGIvdG9v
bHMvb2NhbWwveGVuc3RvcmVkL3Byb2Nlc3MubWwKaW5kZXggM2NkMDA5N2Ri
OS4uNmE5OThmODc2NCAxMDA2NDQKLS0tIGEvdG9vbHMvb2NhbWwveGVuc3Rv
cmVkL3Byb2Nlc3MubWwKKysrIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3By
b2Nlc3MubWwKQEAgLTQzNyw2ICs0MzcsNyBAQCBsZXQgZG9fcmVsZWFzZSBj
b24gdCBkb21haW5zIGNvbnMgZGF0YSA9CiAJbGV0IGZpcmVfc3BlY193YXRj
aGVzID0gRG9tYWlucy5leGlzdCBkb21haW5zIGRvbWlkIGluCiAJRG9tYWlu
cy5kZWwgZG9tYWlucyBkb21pZDsKIAlDb25uZWN0aW9ucy5kZWxfZG9tYWlu
IGNvbnMgZG9taWQ7CisJU3RvcmUucmVzZXRfcGVybWlzc2lvbnMgKFRyYW5z
YWN0aW9uLmdldF9zdG9yZSB0KSBkb21pZDsKIAlpZiBmaXJlX3NwZWNfd2F0
Y2hlcyAKIAl0aGVuIENvbm5lY3Rpb25zLmZpcmVfc3BlY193YXRjaGVzIChU
cmFuc2FjdGlvbi5nZXRfcm9vdCB0KSBjb25zIFN0b3JlLlBhdGgucmVsZWFz
ZV9kb21haW4KIAllbHNlIHJhaXNlIEludmFsaWRfQ21kX0FyZ3MKZGlmZiAt
LWdpdCBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9zdG9yZS5tbCBiL3Rvb2xz
L29jYW1sL3hlbnN0b3JlZC9zdG9yZS5tbAppbmRleCAwY2U2ZjY4ZThkLi4x
MDFjMDk0NzE1IDEwMDY0NAotLS0gYS90b29scy9vY2FtbC94ZW5zdG9yZWQv
c3RvcmUubWwKKysrIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3N0b3JlLm1s
CkBAIC04OSw2ICs4OSwxMyBAQCBsZXQgY2hlY2tfb3duZXIgbm9kZSBjb25u
ZWN0aW9uID0KIAogbGV0IHJlYyByZWN1cnNlIGZjdCBub2RlID0gZmN0IG5v
ZGU7IExpc3QuaXRlciAocmVjdXJzZSBmY3QpIG5vZGUuY2hpbGRyZW4KIAor
KCoqIFtyZWN1cnNlX21hcCBmIHRyZWVdIGFwcGxpZXMgW2ZdIG9uIGVhY2gg
bm9kZSBpbiB0aGUgdHJlZSByZWN1cnNpdmVseSAqKQorbGV0IHJlY3Vyc2Vf
bWFwIGYgPQorCWxldCByZWMgd2FsayBub2RlID0KKwkJZiB7IG5vZGUgd2l0
aCBjaGlsZHJlbiA9IExpc3QucmV2X21hcCB3YWxrIG5vZGUuY2hpbGRyZW4g
fD4gTGlzdC5yZXYgfQorCWluCisJd2FsaworCiBsZXQgdW5wYWNrIG5vZGUg
PSAoU3ltYm9sLnRvX3N0cmluZyBub2RlLm5hbWUsIG5vZGUucGVybXMsIG5v
ZGUudmFsdWUpCiAKIGVuZApAQCAtNDA1LDYgKzQxMiwxNSBAQCBsZXQgc2V0
cGVybXMgc3RvcmUgcGVybSBwYXRoIG5wZXJtcyA9CiAJCVF1b3RhLmRlbF9l
bnRyeSBzdG9yZS5xdW90YSBvbGRfb3duZXI7CiAJCVF1b3RhLmFkZF9lbnRy
eSBzdG9yZS5xdW90YSBuZXdfb3duZXIKIAorbGV0IHJlc2V0X3Blcm1pc3Np
b25zIHN0b3JlIGRvbWlkID0KKwlMb2dnaW5nLmluZm8gInN0b3JlfG5vZGUi
ICJDbGVhbmluZyB1cCB4ZW5zdG9yZSBBQ0xzIGZvciBkb21pZCAlZCIgZG9t
aWQ7CisJc3RvcmUucm9vdCA8LSBOb2RlLnJlY3Vyc2VfbWFwIChmdW4gbm9k
ZSAtPgorCQlsZXQgcGVybXMgPSBQZXJtcy5Ob2RlLnJlbW92ZV9kb21pZCB+
ZG9taWQgbm9kZS5wZXJtcyBpbgorCQlpZiBwZXJtcyA8PiBub2RlLnBlcm1z
IHRoZW4KKwkJCUxvZ2dpbmcuZGVidWcgInN0b3JlfG5vZGUiICJDaGFuZ2Vk
IHBlcm1pc3Npb25zIGZvciBub2RlICVzIiAoTm9kZS5nZXRfbmFtZSBub2Rl
KTsKKwkJeyBub2RlIHdpdGggcGVybXMgfQorCSkgc3RvcmUucm9vdAorCiB0
eXBlIG9wcyA9IHsKIAlzdG9yZTogdDsKIAl3cml0ZTogUGF0aC50IC0+IHN0
cmluZyAtPiB1bml0OwpkaWZmIC0tZ2l0IGEvdG9vbHMvb2NhbWwveGVuc3Rv
cmVkL3hlbnN0b3JlZC5tbCBiL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC94ZW5z
dG9yZWQubWwKaW5kZXggMzBmYzg3NDMyNy4uMTgzZGQyNzU0YiAxMDA2NDQK
LS0tIGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3hlbnN0b3JlZC5tbAorKysg
Yi90b29scy9vY2FtbC94ZW5zdG9yZWQveGVuc3RvcmVkLm1sCkBAIC0zNDAs
NiArMzQwLDcgQEAgbGV0IF8gPQogCQkJZmluYWxseSAoZnVuICgpIC0+CiAJ
CQkJaWYgU29tZSBwb3J0ID0gZXZlbnRjaG4uRXZlbnQudmlycV9wb3J0IHRo
ZW4gKAogCQkJCQlsZXQgKG5vdGlmeSwgZGVhZGRvbSkgPSBEb21haW5zLmNs
ZWFudXAgZG9tYWlucyBpbgorCQkJCQlMaXN0Lml0ZXIgKFN0b3JlLnJlc2V0
X3Blcm1pc3Npb25zIHN0b3JlKSBkZWFkZG9tOwogCQkJCQlMaXN0Lml0ZXIg
KENvbm5lY3Rpb25zLmRlbF9kb21haW4gY29ucykgZGVhZGRvbTsKIAkJCQkJ
aWYgZGVhZGRvbSA8PiBbXSB8fCBub3RpZnkgdGhlbgogCQkJCQkJQ29ubmVj
dGlvbnMuZmlyZV9zcGVjX3dhdGNoZXMK

--=separator
Content-Type: application/octet-stream; name="xsa322-4.12-c.patch"
Content-Disposition: attachment; filename="xsa322-4.12-c.patch"
Content-Transfer-Encoding: base64

RnJvbTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpTdWJqZWN0
OiB0b29scy94ZW5zdG9yZTogcmV2b2tlIGFjY2VzcyByaWdodHMgZm9yIHJl
bW92ZWQgZG9tYWlucwoKQWNjZXNzIHJpZ2h0cyBvZiBYZW5zdG9yZSBub2Rl
cyBhcmUgcGVyIGRvbWlkLiBVbmZvcnR1bmF0ZWx5IGV4aXN0aW5nCmdyYW50
ZWQgYWNjZXNzIHJpZ2h0cyBhcmUgbm90IHJlbW92ZWQgd2hlbiBhIGRvbWFp
biBpcyBiZWluZyBkZXN0cm95ZWQuClRoaXMgbWVhbnMgdGhhdCBhIG5ldyBk
b21haW4gY3JlYXRlZCB3aXRoIHRoZSBzYW1lIGRvbWlkIHdpbGwgaW5oZXJp
dAp0aGUgYWNjZXNzIHJpZ2h0cyB0byBYZW5zdG9yZSBub2RlcyBmcm9tIHRo
ZSBwcmV2aW91cyBkb21haW4ocykgd2l0aAp0aGUgc2FtZSBkb21pZC4KClRo
aXMgY2FuIGJlIGF2b2lkZWQgYnkgYWRkaW5nIGEgZ2VuZXJhdGlvbiBjb3Vu
dGVyIHRvIGVhY2ggZG9tYWluLgpUaGUgZ2VuZXJhdGlvbiBjb3VudGVyIG9m
IHRoZSBkb21haW4gaXMgc2V0IHRvIHRoZSBnbG9iYWwgZ2VuZXJhdGlvbgpj
b3VudGVyIHdoZW4gYSBkb21haW4gc3RydWN0dXJlIGlzIGJlaW5nIGFsbG9j
YXRlZC4gV2hlbiByZWFkaW5nIG9yCndyaXRpbmcgYSBub2RlIGFsbCBwZXJt
aXNzaW9ucyBvZiBkb21haW5zIHdoaWNoIGFyZSB5b3VuZ2VyIHRoYW4gdGhl
Cm5vZGUgaXRzZWxmIGFyZSBkcm9wcGVkLiBUaGlzIGlzIGRvbmUgYnkgZmxh
Z2dpbmcgdGhlIHJlbGF0ZWQgZW50cnkKYXMgaW52YWxpZCBpbiBvcmRlciB0
byBhdm9pZCBtb2RpZnlpbmcgcGVybWlzc2lvbnMgaW4gYSB3YXkgdGhlIHVz
ZXIKY291bGQgZGV0ZWN0LgoKQSBzcGVjaWFsIGNhc2UgaGFzIHRvIGJlIGNv
bnNpZGVyZWQ6IGZvciBhIG5ldyBkb21haW4gdGhlIGZpcnN0ClhlbnN0b3Jl
IGVudHJpZXMgYXJlIGFscmVhZHkgd3JpdHRlbiBiZWZvcmUgdGhlIGRvbWFp
biBpcyBvZmZpY2lhbGx5CmludHJvZHVjZWQgaW4gWGVuc3RvcmUuIEluIG9y
ZGVyIG5vdCB0byBkcm9wIHRoZSBwZXJtaXNzaW9ucyBmb3IgdGhlCm5ldyBk
b21haW4gYSBkb21haW4gc3RydWN0IGlzIGFsbG9jYXRlZCBldmVuIGJlZm9y
ZSBpbnRyb2R1Y3Rpb24gaWYKdGhlIGh5cGVydmlzb3IgaXMgYXdhcmUgb2Yg
dGhlIGRvbWFpbi4gVGhpcyByZXF1aXJlcyBhZGRpbmcgYW5vdGhlcgpib29s
ICJpbnRyb2R1Y2VkIiB0byBzdHJ1Y3QgZG9tYWluIGluIHhlbnN0b3JlZC4g
SW4gb3JkZXIgdG8gYXZvaWQKYWRkaXRpb25hbCBwYWRkaW5nIGhvbGVzIGNv
bnZlcnQgdGhlIHNodXRkb3duIGZsYWcgdG8gYm9vbCwgdG9vLgoKQXMgdmVy
aWZ5aW5nIHBlcm1pc3Npb25zIGhhcyBpdHMgcHJpY2UgcmVnYXJkaW5nIHJ1
bnRpbWUgYWRkIGEgbmV3CnF1b3RhIGZvciBsaW1pdGluZyB0aGUgbnVtYmVy
IG9mIHBlcm1pc3Npb25zIGFuIHVucHJpdmlsZWdlZCBkb21haW4KY2FuIHNl
dCBmb3IgYSBub2RlLiBUaGUgZGVmYXVsdCBmb3IgdGhhdCBuZXcgcXVvdGEg
aXMgNS4KClRoaXMgaXMgcGFydCBvZiBYU0EtMzIyLgoKU2lnbmVkLW9mZi1i
eTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpSZXZpZXdlZC1i
eTogUGF1bCBEdXJyYW50IDxwYXVsQHhlbi5vcmc+CkFja2VkLWJ5OiBKdWxp
ZW4gR3JhbGwgPGp1bGllbkBhbWF6b24uY29tPgoKZGlmZiAtLWdpdCBhL3Rv
b2xzL3hlbnN0b3JlL2luY2x1ZGUveGVuc3RvcmVfbGliLmggYi90b29scy94
ZW5zdG9yZS9pbmNsdWRlL3hlbnN0b3JlX2xpYi5oCmluZGV4IDBmZmJhZTll
YjU3NC4uNGM5YjZkMTY4NThkIDEwMDY0NAotLS0gYS90b29scy94ZW5zdG9y
ZS9pbmNsdWRlL3hlbnN0b3JlX2xpYi5oCisrKyBiL3Rvb2xzL3hlbnN0b3Jl
L2luY2x1ZGUveGVuc3RvcmVfbGliLmgKQEAgLTM0LDYgKzM0LDcgQEAgZW51
bSB4c19wZXJtX3R5cGUgewogCS8qIEludGVybmFsIHVzZS4gKi8KIAlYU19Q
RVJNX0VOT0VOVF9PSyA9IDQsCiAJWFNfUEVSTV9PV05FUiA9IDgsCisJWFNf
UEVSTV9JR05PUkUgPSAxNiwKIH07CiAKIHN0cnVjdCB4c19wZXJtaXNzaW9u
cwpkaWZmIC0tZ2l0IGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUu
YyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMKaW5kZXggMmE4
NmM0YWE1YmNlLi40ZmJlNWM3NTljMWIgMTAwNjQ0Ci0tLSBhL3Rvb2xzL3hl
bnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMKKysrIGIvdG9vbHMveGVuc3RvcmUv
eGVuc3RvcmVkX2NvcmUuYwpAQCAtMTAxLDYgKzEwMSw3IEBAIGludCBxdW90
YV9uYl9lbnRyeV9wZXJfZG9tYWluID0gMTAwMDsKIGludCBxdW90YV9uYl93
YXRjaF9wZXJfZG9tYWluID0gMTI4OwogaW50IHF1b3RhX21heF9lbnRyeV9z
aXplID0gMjA0ODsgLyogMksgKi8KIGludCBxdW90YV9tYXhfdHJhbnNhY3Rp
b24gPSAxMDsKK2ludCBxdW90YV9uYl9wZXJtc19wZXJfbm9kZSA9IDU7CiAK
IHZvaWQgdHJhY2UoY29uc3QgY2hhciAqZm10LCAuLi4pCiB7CkBAIC00MDcs
OCArNDA4LDEzIEBAIHN0cnVjdCBub2RlICpyZWFkX25vZGUoc3RydWN0IGNv
bm5lY3Rpb24gKmNvbm4sIGNvbnN0IHZvaWQgKmN0eCwKIAogCS8qIFBlcm1p
c3Npb25zIGFyZSBzdHJ1Y3QgeHNfcGVybWlzc2lvbnMuICovCiAJbm9kZS0+
cGVybXMucCA9IGhkci0+cGVybXM7CisJaWYgKGRvbWFpbl9hZGp1c3Rfbm9k
ZV9wZXJtcyhub2RlKSkgeworCQl0YWxsb2NfZnJlZShub2RlKTsKKwkJcmV0
dXJuIE5VTEw7CisJfQorCiAJLyogRGF0YSBpcyBiaW5hcnkgYmxvYiAodXN1
YWxseSBhc2NpaSwgbm8gbnVsKS4gKi8KLQlub2RlLT5kYXRhID0gbm9kZS0+
cGVybXMucCArIG5vZGUtPnBlcm1zLm51bTsKKwlub2RlLT5kYXRhID0gbm9k
ZS0+cGVybXMucCArIGhkci0+bnVtX3Blcm1zOwogCS8qIENoaWxkcmVuIGlz
IHN0cmluZ3MsIG51bCBzZXBhcmF0ZWQuICovCiAJbm9kZS0+Y2hpbGRyZW4g
PSBub2RlLT5kYXRhICsgbm9kZS0+ZGF0YWxlbjsKIApAQCAtNDI0LDYgKzQz
MCw5IEBAIGludCB3cml0ZV9ub2RlX3JhdyhzdHJ1Y3QgY29ubmVjdGlvbiAq
Y29ubiwgVERCX0RBVEEgKmtleSwgc3RydWN0IG5vZGUgKm5vZGUsCiAJdm9p
ZCAqcDsKIAlzdHJ1Y3QgeHNfdGRiX3JlY29yZF9oZHIgKmhkcjsKIAorCWlm
IChkb21haW5fYWRqdXN0X25vZGVfcGVybXMobm9kZSkpCisJCXJldHVybiBl
cnJubzsKKwogCWRhdGEuZHNpemUgPSBzaXplb2YoKmhkcikKIAkJKyBub2Rl
LT5wZXJtcy5udW0gKiBzaXplb2Yobm9kZS0+cGVybXMucFswXSkKIAkJKyBu
b2RlLT5kYXRhbGVuICsgbm9kZS0+Y2hpbGRsZW47CkBAIC00ODMsOCArNDky
LDkgQEAgZW51bSB4c19wZXJtX3R5cGUgcGVybV9mb3JfY29ubihzdHJ1Y3Qg
Y29ubmVjdGlvbiAqY29ubiwKIAkJcmV0dXJuIChYU19QRVJNX1JFQUR8WFNf
UEVSTV9XUklURXxYU19QRVJNX09XTkVSKSAmIG1hc2s7CiAKIAlmb3IgKGkg
PSAxOyBpIDwgcGVybXMtPm51bTsgaSsrKQotCQlpZiAocGVybXMtPnBbaV0u
aWQgPT0gY29ubi0+aWQKLSAgICAgICAgICAgICAgICAgICAgICAgIHx8IChj
b25uLT50YXJnZXQgJiYgcGVybXMtPnBbaV0uaWQgPT0gY29ubi0+dGFyZ2V0
LT5pZCkpCisJCWlmICghKHBlcm1zLT5wW2ldLnBlcm1zICYgWFNfUEVSTV9J
R05PUkUpICYmCisJCSAgICAocGVybXMtPnBbaV0uaWQgPT0gY29ubi0+aWQg
fHwKKwkJICAgICAoY29ubi0+dGFyZ2V0ICYmIHBlcm1zLT5wW2ldLmlkID09
IGNvbm4tPnRhcmdldC0+aWQpKSkKIAkJCXJldHVybiBwZXJtcy0+cFtpXS5w
ZXJtcyAmIG1hc2s7CiAKIAlyZXR1cm4gcGVybXMtPnBbMF0ucGVybXMgJiBt
YXNrOwpAQCAtMTI0Niw4ICsxMjU2LDEyIEBAIHN0YXRpYyBpbnQgZG9fc2V0
X3Blcm1zKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3QgYnVmZmVy
ZWRfZGF0YSAqaW4pCiAJaWYgKHBlcm1zLm51bSA8IDIpCiAJCXJldHVybiBF
SU5WQUw7CiAKLQlwZXJtc3RyID0gaW4tPmJ1ZmZlciArIHN0cmxlbihpbi0+
YnVmZmVyKSArIDE7CiAJcGVybXMubnVtLS07CisJaWYgKGRvbWFpbl9pc191
bnByaXZpbGVnZWQoY29ubikgJiYKKwkgICAgcGVybXMubnVtID4gcXVvdGFf
bmJfcGVybXNfcGVyX25vZGUpCisJCXJldHVybiBFTk9TUEM7CisKKwlwZXJt
c3RyID0gaW4tPmJ1ZmZlciArIHN0cmxlbihpbi0+YnVmZmVyKSArIDE7CiAK
IAlwZXJtcy5wID0gdGFsbG9jX2FycmF5KGluLCBzdHJ1Y3QgeHNfcGVybWlz
c2lvbnMsIHBlcm1zLm51bSk7CiAJaWYgKCFwZXJtcy5wKQpAQCAtMTkxOSw2
ICsxOTMzLDcgQEAgc3RhdGljIHZvaWQgdXNhZ2Uodm9pZCkKICIgIC1TLCAt
LWVudHJ5LXNpemUgPHNpemU+IGxpbWl0IHRoZSBzaXplIG9mIGVudHJ5IHBl
ciBkb21haW4sIGFuZFxuIgogIiAgLVcsIC0td2F0Y2gtbmIgPG5iPiAgICAg
bGltaXQgdGhlIG51bWJlciBvZiB3YXRjaGVzIHBlciBkb21haW4sXG4iCiAi
ICAtdCwgLS10cmFuc2FjdGlvbiA8bmI+ICBsaW1pdCB0aGUgbnVtYmVyIG9m
IHRyYW5zYWN0aW9uIGFsbG93ZWQgcGVyIGRvbWFpbixcbiIKKyIgIC1BLCAt
LXBlcm0tbmIgPG5iPiAgICAgIGxpbWl0IHRoZSBudW1iZXIgb2YgcGVybWlz
c2lvbnMgcGVyIG5vZGUsXG4iCiAiICAtUiwgLS1uby1yZWNvdmVyeSAgICAg
ICB0byByZXF1ZXN0IHRoYXQgbm8gcmVjb3Zlcnkgc2hvdWxkIGJlIGF0dGVt
cHRlZCB3aGVuXG4iCiAiICAgICAgICAgICAgICAgICAgICAgICAgICB0aGUg
c3RvcmUgaXMgY29ycnVwdGVkIChkZWJ1ZyBvbmx5KSxcbiIKICIgIC1JLCAt
LWludGVybmFsLWRiICAgICAgIHN0b3JlIGRhdGFiYXNlIGluIG1lbW9yeSwg
bm90IG9uIGRpc2tcbiIKQEAgLTE5MzksNiArMTk1NCw3IEBAIHN0YXRpYyBz
dHJ1Y3Qgb3B0aW9uIG9wdGlvbnNbXSA9IHsKIAl7ICJlbnRyeS1zaXplIiwg
MSwgTlVMTCwgJ1MnIH0sCiAJeyAidHJhY2UtZmlsZSIsIDEsIE5VTEwsICdU
JyB9LAogCXsgInRyYW5zYWN0aW9uIiwgMSwgTlVMTCwgJ3QnIH0sCisJeyAi
cGVybS1uYiIsIDEsIE5VTEwsICdBJyB9LAogCXsgIm5vLXJlY292ZXJ5Iiwg
MCwgTlVMTCwgJ1InIH0sCiAJeyAiaW50ZXJuYWwtZGIiLCAwLCBOVUxMLCAn
SScgfSwKIAl7ICJ2ZXJib3NlIiwgMCwgTlVMTCwgJ1YnIH0sCkBAIC0xOTYx
LDcgKzE5NzcsNyBAQCBpbnQgbWFpbihpbnQgYXJnYywgY2hhciAqYXJndltd
KQogCWludCB0aW1lb3V0OwogCiAKLQl3aGlsZSAoKG9wdCA9IGdldG9wdF9s
b25nKGFyZ2MsIGFyZ3YsICJERTpGOkhOUFM6dDpUOlJWVzoiLCBvcHRpb25z
LAorCXdoaWxlICgob3B0ID0gZ2V0b3B0X2xvbmcoYXJnYywgYXJndiwgIkRF
OkY6SE5QUzp0OkE6VDpSVlc6Iiwgb3B0aW9ucywKIAkJCQkgIE5VTEwpKSAh
PSAtMSkgewogCQlzd2l0Y2ggKG9wdCkgewogCQljYXNlICdEJzoKQEAgLTIw
MDMsNiArMjAxOSw5IEBAIGludCBtYWluKGludCBhcmdjLCBjaGFyICphcmd2
W10pCiAJCWNhc2UgJ1cnOgogCQkJcXVvdGFfbmJfd2F0Y2hfcGVyX2RvbWFp
biA9IHN0cnRvbChvcHRhcmcsIE5VTEwsIDEwKTsKIAkJCWJyZWFrOworCQlj
YXNlICdBJzoKKwkJCXF1b3RhX25iX3Blcm1zX3Blcl9ub2RlID0gc3RydG9s
KG9wdGFyZywgTlVMTCwgMTApOworCQkJYnJlYWs7CiAJCWNhc2UgJ2UnOgog
CQkJZG9tMF9ldmVudCA9IHN0cnRvbChvcHRhcmcsIE5VTEwsIDEwKTsKIAkJ
CWJyZWFrOwpkaWZmIC0tZ2l0IGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVk
X2RvbWFpbi5jIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2RvbWFpbi5j
CmluZGV4IDBiMmY0OWFjN2Q0Yy4uZjVlN2FmNDZlOGFhIDEwMDY0NAotLS0g
YS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfZG9tYWluLmMKKysrIGIvdG9v
bHMveGVuc3RvcmUveGVuc3RvcmVkX2RvbWFpbi5jCkBAIC03MSw4ICs3MSwx
NCBAQCBzdHJ1Y3QgZG9tYWluCiAJLyogVGhlIGNvbm5lY3Rpb24gYXNzb2Np
YXRlZCB3aXRoIHRoaXMuICovCiAJc3RydWN0IGNvbm5lY3Rpb24gKmNvbm47
CiAKKwkvKiBHZW5lcmF0aW9uIGNvdW50IGF0IGRvbWFpbiBpbnRyb2R1Y3Rp
b24gdGltZS4gKi8KKwl1aW50NjRfdCBnZW5lcmF0aW9uOworCiAJLyogSGF2
ZSB3ZSBub3RpY2VkIHRoYXQgdGhpcyBkb21haW4gaXMgc2h1dGRvd24/ICov
Ci0JaW50IHNodXRkb3duOworCWJvb2wgc2h1dGRvd247CisKKwkvKiBIYXMg
ZG9tYWluIGJlZW4gb2ZmaWNpYWxseSBpbnRyb2R1Y2VkPyAqLworCWJvb2wg
aW50cm9kdWNlZDsKIAogCS8qIG51bWJlciBvZiBlbnRyeSBmcm9tIHRoaXMg
ZG9tYWluIGluIHRoZSBzdG9yZSAqLwogCWludCBuYmVudHJ5OwpAQCAtMjAw
LDYgKzIwNiw5IEBAIHN0YXRpYyBpbnQgZGVzdHJveV9kb21haW4odm9pZCAq
X2RvbWFpbikKIAogCWxpc3RfZGVsKCZkb21haW4tPmxpc3QpOwogCisJaWYg
KCFkb21haW4tPmludHJvZHVjZWQpCisJCXJldHVybiAwOworCiAJaWYgKGRv
bWFpbi0+cG9ydCkgewogCQlpZiAoeGVuZXZ0Y2huX3VuYmluZCh4Y2VfaGFu
ZGxlLCBkb21haW4tPnBvcnQpID09IC0xKQogCQkJZXByaW50ZigiPiBVbmJp
bmRpbmcgcG9ydCAlaSBmYWlsZWQhXG4iLCBkb21haW4tPnBvcnQpOwpAQCAt
MjIxLDIxICsyMzAsMzQgQEAgc3RhdGljIGludCBkZXN0cm95X2RvbWFpbih2
b2lkICpfZG9tYWluKQogCXJldHVybiAwOwogfQogCitzdGF0aWMgYm9vbCBn
ZXRfZG9tYWluX2luZm8odW5zaWduZWQgaW50IGRvbWlkLCB4Y19kb21pbmZv
X3QgKmRvbWluZm8pCit7CisJcmV0dXJuIHhjX2RvbWFpbl9nZXRpbmZvKCp4
Y19oYW5kbGUsIGRvbWlkLCAxLCBkb21pbmZvKSA9PSAxICYmCisJICAgICAg
IGRvbWluZm8tPmRvbWlkID09IGRvbWlkOworfQorCiBzdGF0aWMgdm9pZCBk
b21haW5fY2xlYW51cCh2b2lkKQogewogCXhjX2RvbWluZm9fdCBkb21pbmZv
OwogCXN0cnVjdCBkb21haW4gKmRvbWFpbjsKIAlzdHJ1Y3QgY29ubmVjdGlv
biAqY29ubjsKIAlpbnQgbm90aWZ5ID0gMDsKKwlib29sIGRvbV92YWxpZDsK
IAogIGFnYWluOgogCWxpc3RfZm9yX2VhY2hfZW50cnkoZG9tYWluLCAmZG9t
YWlucywgbGlzdCkgewotCQlpZiAoeGNfZG9tYWluX2dldGluZm8oKnhjX2hh
bmRsZSwgZG9tYWluLT5kb21pZCwgMSwKLQkJCQkgICAgICAmZG9taW5mbykg
PT0gMSAmJgotCQkgICAgZG9taW5mby5kb21pZCA9PSBkb21haW4tPmRvbWlk
KSB7CisJCWRvbV92YWxpZCA9IGdldF9kb21haW5faW5mbyhkb21haW4tPmRv
bWlkLCAmZG9taW5mbyk7CisJCWlmICghZG9tYWluLT5pbnRyb2R1Y2VkKSB7
CisJCQlpZiAoIWRvbV92YWxpZCkgeworCQkJCXRhbGxvY19mcmVlKGRvbWFp
bik7CisJCQkJZ290byBhZ2FpbjsKKwkJCX0KKwkJCWNvbnRpbnVlOworCQl9
CisJCWlmIChkb21fdmFsaWQpIHsKIAkJCWlmICgoZG9taW5mby5jcmFzaGVk
IHx8IGRvbWluZm8uc2h1dGRvd24pCiAJCQkgICAgJiYgIWRvbWFpbi0+c2h1
dGRvd24pIHsKLQkJCQlkb21haW4tPnNodXRkb3duID0gMTsKKwkJCQlkb21h
aW4tPnNodXRkb3duID0gdHJ1ZTsKIAkJCQlub3RpZnkgPSAxOwogCQkJfQog
CQkJaWYgKCFkb21pbmZvLmR5aW5nKQpAQCAtMzAxLDU4ICszMjMsODQgQEAg
c3RhdGljIGNoYXIgKnRhbGxvY19kb21haW5fcGF0aCh2b2lkICpjb250ZXh0
LCB1bnNpZ25lZCBpbnQgZG9taWQpCiAJcmV0dXJuIHRhbGxvY19hc3ByaW50
Zihjb250ZXh0LCAiL2xvY2FsL2RvbWFpbi8ldSIsIGRvbWlkKTsKIH0KIAot
c3RhdGljIHN0cnVjdCBkb21haW4gKm5ld19kb21haW4odm9pZCAqY29udGV4
dCwgdW5zaWduZWQgaW50IGRvbWlkLAotCQkJCSBpbnQgcG9ydCkKK3N0YXRp
YyBzdHJ1Y3QgZG9tYWluICpmaW5kX2RvbWFpbl9zdHJ1Y3QodW5zaWduZWQg
aW50IGRvbWlkKQoreworCXN0cnVjdCBkb21haW4gKmk7CisKKwlsaXN0X2Zv
cl9lYWNoX2VudHJ5KGksICZkb21haW5zLCBsaXN0KSB7CisJCWlmIChpLT5k
b21pZCA9PSBkb21pZCkKKwkJCXJldHVybiBpOworCX0KKwlyZXR1cm4gTlVM
TDsKK30KKworc3RhdGljIHN0cnVjdCBkb21haW4gKmFsbG9jX2RvbWFpbih2
b2lkICpjb250ZXh0LCB1bnNpZ25lZCBpbnQgZG9taWQpCiB7CiAJc3RydWN0
IGRvbWFpbiAqZG9tYWluOwotCWludCByYzsKIAogCWRvbWFpbiA9IHRhbGxv
Yyhjb250ZXh0LCBzdHJ1Y3QgZG9tYWluKTsKLQlpZiAoIWRvbWFpbikKKwlp
ZiAoIWRvbWFpbikgeworCQllcnJubyA9IEVOT01FTTsKIAkJcmV0dXJuIE5V
TEw7CisJfQogCi0JZG9tYWluLT5wb3J0ID0gMDsKLQlkb21haW4tPnNodXRk
b3duID0gMDsKIAlkb21haW4tPmRvbWlkID0gZG9taWQ7Ci0JZG9tYWluLT5w
YXRoID0gdGFsbG9jX2RvbWFpbl9wYXRoKGRvbWFpbiwgZG9taWQpOwotCWlm
ICghZG9tYWluLT5wYXRoKQotCQlyZXR1cm4gTlVMTDsKKwlkb21haW4tPmdl
bmVyYXRpb24gPSBnZW5lcmF0aW9uOworCWRvbWFpbi0+aW50cm9kdWNlZCA9
IGZhbHNlOwogCi0Jd3JsX2RvbWFpbl9uZXcoZG9tYWluKTsKKwl0YWxsb2Nf
c2V0X2Rlc3RydWN0b3IoZG9tYWluLCBkZXN0cm95X2RvbWFpbik7CiAKIAls
aXN0X2FkZCgmZG9tYWluLT5saXN0LCAmZG9tYWlucyk7Ci0JdGFsbG9jX3Nl
dF9kZXN0cnVjdG9yKGRvbWFpbiwgZGVzdHJveV9kb21haW4pOworCisJcmV0
dXJuIGRvbWFpbjsKK30KKworc3RhdGljIGludCBuZXdfZG9tYWluKHN0cnVj
dCBkb21haW4gKmRvbWFpbiwgaW50IHBvcnQpCit7CisJaW50IHJjOworCisJ
ZG9tYWluLT5wb3J0ID0gMDsKKwlkb21haW4tPnNodXRkb3duID0gZmFsc2U7
CisJZG9tYWluLT5wYXRoID0gdGFsbG9jX2RvbWFpbl9wYXRoKGRvbWFpbiwg
ZG9tYWluLT5kb21pZCk7CisJaWYgKCFkb21haW4tPnBhdGgpIHsKKwkJZXJy
bm8gPSBFTk9NRU07CisJCXJldHVybiBlcnJubzsKKwl9CisKKwl3cmxfZG9t
YWluX25ldyhkb21haW4pOwogCiAJLyogVGVsbCBrZXJuZWwgd2UncmUgaW50
ZXJlc3RlZCBpbiB0aGlzIGV2ZW50LiAqLwotCXJjID0geGVuZXZ0Y2huX2Jp
bmRfaW50ZXJkb21haW4oeGNlX2hhbmRsZSwgZG9taWQsIHBvcnQpOworCXJj
ID0geGVuZXZ0Y2huX2JpbmRfaW50ZXJkb21haW4oeGNlX2hhbmRsZSwgZG9t
YWluLT5kb21pZCwgcG9ydCk7CiAJaWYgKHJjID09IC0xKQotCSAgICByZXR1
cm4gTlVMTDsKKwkJcmV0dXJuIGVycm5vOwogCWRvbWFpbi0+cG9ydCA9IHJj
OwogCisJZG9tYWluLT5pbnRyb2R1Y2VkID0gdHJ1ZTsKKwogCWRvbWFpbi0+
Y29ubiA9IG5ld19jb25uZWN0aW9uKHdyaXRlY2huLCByZWFkY2huKTsKLQlp
ZiAoIWRvbWFpbi0+Y29ubikKLQkJcmV0dXJuIE5VTEw7CisJaWYgKCFkb21h
aW4tPmNvbm4pICB7CisJCWVycm5vID0gRU5PTUVNOworCQlyZXR1cm4gZXJy
bm87CisJfQogCiAJZG9tYWluLT5jb25uLT5kb21haW4gPSBkb21haW47Ci0J
ZG9tYWluLT5jb25uLT5pZCA9IGRvbWlkOworCWRvbWFpbi0+Y29ubi0+aWQg
PSBkb21haW4tPmRvbWlkOwogCiAJZG9tYWluLT5yZW1vdGVfcG9ydCA9IHBv
cnQ7CiAJZG9tYWluLT5uYmVudHJ5ID0gMDsKIAlkb21haW4tPm5id2F0Y2gg
PSAwOwogCi0JcmV0dXJuIGRvbWFpbjsKKwlyZXR1cm4gMDsKIH0KIAogCiBz
dGF0aWMgc3RydWN0IGRvbWFpbiAqZmluZF9kb21haW5fYnlfZG9taWQodW5z
aWduZWQgaW50IGRvbWlkKQogewotCXN0cnVjdCBkb21haW4gKmk7CisJc3Ry
dWN0IGRvbWFpbiAqZDsKIAotCWxpc3RfZm9yX2VhY2hfZW50cnkoaSwgJmRv
bWFpbnMsIGxpc3QpIHsKLQkJaWYgKGktPmRvbWlkID09IGRvbWlkKQotCQkJ
cmV0dXJuIGk7Ci0JfQotCXJldHVybiBOVUxMOworCWQgPSBmaW5kX2RvbWFp
bl9zdHJ1Y3QoZG9taWQpOworCisJcmV0dXJuIChkICYmIGQtPmludHJvZHVj
ZWQpID8gZCA6IE5VTEw7CiB9CiAKIHN0YXRpYyB2b2lkIGRvbWFpbl9jb25u
X3Jlc2V0KHN0cnVjdCBkb21haW4gKmRvbWFpbikKQEAgLTM5OSwxNSArNDQ3
LDIxIEBAIGludCBkb19pbnRyb2R1Y2Uoc3RydWN0IGNvbm5lY3Rpb24gKmNv
bm4sIHN0cnVjdCBidWZmZXJlZF9kYXRhICppbikKIAlpZiAocG9ydCA8PSAw
KQogCQlyZXR1cm4gRUlOVkFMOwogCi0JZG9tYWluID0gZmluZF9kb21haW5f
YnlfZG9taWQoZG9taWQpOworCWRvbWFpbiA9IGZpbmRfZG9tYWluX3N0cnVj
dChkb21pZCk7CiAKIAlpZiAoZG9tYWluID09IE5VTEwpIHsKKwkJLyogSGFu
ZyBkb21haW4gb2ZmICJpbiIgdW50aWwgd2UncmUgZmluaXNoZWQuICovCisJ
CWRvbWFpbiA9IGFsbG9jX2RvbWFpbihpbiwgZG9taWQpOworCQlpZiAoZG9t
YWluID09IE5VTEwpCisJCQlyZXR1cm4gRU5PTUVNOworCX0KKworCWlmICgh
ZG9tYWluLT5pbnRyb2R1Y2VkKSB7CiAJCWludGVyZmFjZSA9IG1hcF9pbnRl
cmZhY2UoZG9taWQsIG1mbik7CiAJCWlmICghaW50ZXJmYWNlKQogCQkJcmV0
dXJuIGVycm5vOwogCQkvKiBIYW5nIGRvbWFpbiBvZmYgImluIiB1bnRpbCB3
ZSdyZSBmaW5pc2hlZC4gKi8KLQkJZG9tYWluID0gbmV3X2RvbWFpbihpbiwg
ZG9taWQsIHBvcnQpOwotCQlpZiAoIWRvbWFpbikgeworCQlpZiAobmV3X2Rv
bWFpbihkb21haW4sIHBvcnQpKSB7CiAJCQlyYyA9IGVycm5vOwogCQkJdW5t
YXBfaW50ZXJmYWNlKGludGVyZmFjZSk7CiAJCQlyZXR1cm4gcmM7CkBAIC01
MTgsOCArNTcyLDggQEAgaW50IGRvX3Jlc3VtZShzdHJ1Y3QgY29ubmVjdGlv
biAqY29ubiwgc3RydWN0IGJ1ZmZlcmVkX2RhdGEgKmluKQogCWlmIChJU19F
UlIoZG9tYWluKSkKIAkJcmV0dXJuIC1QVFJfRVJSKGRvbWFpbik7CiAKLQlk
b21haW4tPnNodXRkb3duID0gMDsKLQkKKwlkb21haW4tPnNodXRkb3duID0g
ZmFsc2U7CisKIAlzZW5kX2Fjayhjb25uLCBYU19SRVNVTUUpOwogCiAJcmV0
dXJuIDA7CkBAIC02NjIsOCArNzE2LDEwIEBAIHN0YXRpYyBpbnQgZG9tMF9p
bml0KHZvaWQpCiAJaWYgKHBvcnQgPT0gLTEpCiAJCXJldHVybiAtMTsKIAot
CWRvbTAgPSBuZXdfZG9tYWluKE5VTEwsIHhlbmJ1c19tYXN0ZXJfZG9taWQo
KSwgcG9ydCk7Ci0JaWYgKGRvbTAgPT0gTlVMTCkKKwlkb20wID0gYWxsb2Nf
ZG9tYWluKE5VTEwsIHhlbmJ1c19tYXN0ZXJfZG9taWQoKSk7CisJaWYgKCFk
b20wKQorCQlyZXR1cm4gLTE7CisJaWYgKG5ld19kb21haW4oZG9tMCwgcG9y
dCkpCiAJCXJldHVybiAtMTsKIAogCWRvbTAtPmludGVyZmFjZSA9IHhlbmJ1
c19tYXAoKTsKQEAgLTc0NCw2ICs4MDAsNjYgQEAgdm9pZCBkb21haW5fZW50
cnlfaW5jKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3Qgbm9kZSAq
bm9kZSkKIAl9CiB9CiAKKy8qCisgKiBDaGVjayB3aGV0aGVyIGEgZG9tYWlu
IHdhcyBjcmVhdGVkIGJlZm9yZSBvciBhZnRlciBhIHNwZWNpZmljIGdlbmVy
YXRpb24KKyAqIGNvdW50ICh1c2VkIGZvciB0ZXN0aW5nIHdoZXRoZXIgYSBu
b2RlIHBlcm1pc3Npb24gaXMgb2xkZXIgdGhhbiBhIGRvbWFpbikuCisgKgor
ICogUmV0dXJuIHZhbHVlczoKKyAqIC0xOiBlcnJvcgorICogIDA6IGRvbWFp
biBoYXMgaGlnaGVyIGdlbmVyYXRpb24gY291bnQgKGl0IGlzIHlvdW5nZXIg
dGhhbiBhIG5vZGUgd2l0aCB0aGUKKyAqICAgICBnaXZlbiBjb3VudCksIG9y
IGRvbWFpbiBpc24ndCBleGlzdGluZyBhbnkgbG9uZ2VyCisgKiAgMTogZG9t
YWluIGlzIG9sZGVyIHRoYW4gdGhlIG5vZGUKKyAqLworc3RhdGljIGludCBj
aGtfZG9tYWluX2dlbmVyYXRpb24odW5zaWduZWQgaW50IGRvbWlkLCB1aW50
NjRfdCBnZW4pCit7CisJc3RydWN0IGRvbWFpbiAqZDsKKwl4Y19kb21pbmZv
X3QgZG9taW5mbzsKKworCWlmICgheGNfaGFuZGxlICYmIGRvbWlkID09IDAp
CisJCXJldHVybiAxOworCisJZCA9IGZpbmRfZG9tYWluX3N0cnVjdChkb21p
ZCk7CisJaWYgKGQpCisJCXJldHVybiAoZC0+Z2VuZXJhdGlvbiA8PSBnZW4p
ID8gMSA6IDA7CisKKwlpZiAoIWdldF9kb21haW5faW5mbyhkb21pZCwgJmRv
bWluZm8pKQorCQlyZXR1cm4gMDsKKworCWQgPSBhbGxvY19kb21haW4oTlVM
TCwgZG9taWQpOworCXJldHVybiBkID8gMSA6IC0xOworfQorCisvKgorICog
UmVtb3ZlIHBlcm1pc3Npb25zIGZvciBubyBsb25nZXIgZXhpc3RpbmcgZG9t
YWlucyBpbiBvcmRlciB0byBhdm9pZCBhIG5ldworICogZG9tYWluIHdpdGgg
dGhlIHNhbWUgZG9taWQgaW5oZXJpdGluZyB0aGUgcGVybWlzc2lvbnMuCisg
Ki8KK2ludCBkb21haW5fYWRqdXN0X25vZGVfcGVybXMoc3RydWN0IG5vZGUg
Km5vZGUpCit7CisJdW5zaWduZWQgaW50IGk7CisJaW50IHJldDsKKworCXJl
dCA9IGNoa19kb21haW5fZ2VuZXJhdGlvbihub2RlLT5wZXJtcy5wWzBdLmlk
LCBub2RlLT5nZW5lcmF0aW9uKTsKKwlpZiAocmV0IDwgMCkKKwkJcmV0dXJu
IGVycm5vOworCisJLyogSWYgdGhlIG93bmVyIGRvZXNuJ3QgZXhpc3QgYW55
IGxvbmdlciBnaXZlIGl0IHRvIHByaXYgZG9tYWluLiAqLworCWlmICghcmV0
KQorCQlub2RlLT5wZXJtcy5wWzBdLmlkID0gcHJpdl9kb21pZDsKKworCWZv
ciAoaSA9IDE7IGkgPCBub2RlLT5wZXJtcy5udW07IGkrKykgeworCQlpZiAo
bm9kZS0+cGVybXMucFtpXS5wZXJtcyAmIFhTX1BFUk1fSUdOT1JFKQorCQkJ
Y29udGludWU7CisJCXJldCA9IGNoa19kb21haW5fZ2VuZXJhdGlvbihub2Rl
LT5wZXJtcy5wW2ldLmlkLAorCQkJCQkgICAgbm9kZS0+Z2VuZXJhdGlvbik7
CisJCWlmIChyZXQgPCAwKQorCQkJcmV0dXJuIGVycm5vOworCQlpZiAoIXJl
dCkKKwkJCW5vZGUtPnBlcm1zLnBbaV0ucGVybXMgfD0gWFNfUEVSTV9JR05P
UkU7CisJfQorCisJcmV0dXJuIDA7Cit9CisKIHZvaWQgZG9tYWluX2VudHJ5
X2RlYyhzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgc3RydWN0IG5vZGUgKm5v
ZGUpCiB7CiAJc3RydWN0IGRvbWFpbiAqZDsKZGlmZiAtLWdpdCBhL3Rvb2xz
L3hlbnN0b3JlL3hlbnN0b3JlZF9kb21haW4uaCBiL3Rvb2xzL3hlbnN0b3Jl
L3hlbnN0b3JlZF9kb21haW4uaAppbmRleCAyNTkxODM5NjJhOWMuLjVlMDAw
ODcyMDZjNyAxMDA2NDQKLS0tIGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVk
X2RvbWFpbi5oCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9kb21h
aW4uaApAQCAtNTYsNiArNTYsOSBAQCBib29sIGRvbWFpbl9jYW5fd3JpdGUo
c3RydWN0IGNvbm5lY3Rpb24gKmNvbm4pOwogCiBib29sIGRvbWFpbl9pc191
bnByaXZpbGVnZWQoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4pOwogCisvKiBS
ZW1vdmUgbm9kZSBwZXJtaXNzaW9ucyBmb3Igbm8gbG9uZ2VyIGV4aXN0aW5n
IGRvbWFpbnMuICovCitpbnQgZG9tYWluX2FkanVzdF9ub2RlX3Blcm1zKHN0
cnVjdCBub2RlICpub2RlKTsKKwogLyogUXVvdGEgbWFuaXB1bGF0aW9uICov
CiB2b2lkIGRvbWFpbl9lbnRyeV9pbmMoc3RydWN0IGNvbm5lY3Rpb24gKmNv
bm4sIHN0cnVjdCBub2RlICopOwogdm9pZCBkb21haW5fZW50cnlfZGVjKHN0
cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3Qgbm9kZSAqKTsKZGlmZiAt
LWdpdCBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF90cmFuc2FjdGlvbi5j
IGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3RyYW5zYWN0aW9uLmMKaW5k
ZXggMzY3OTNiOWIxYWYzLi45ZmNiNGM5YmE5ODYgMTAwNjQ0Ci0tLSBhL3Rv
b2xzL3hlbnN0b3JlL3hlbnN0b3JlZF90cmFuc2FjdGlvbi5jCisrKyBiL3Rv
b2xzL3hlbnN0b3JlL3hlbnN0b3JlZF90cmFuc2FjdGlvbi5jCkBAIC00Nyw3
ICs0NywxMiBAQAogICogdHJhbnNhY3Rpb24uCiAgKiBFYWNoIHRpbWUgdGhl
IGdsb2JhbCBnZW5lcmF0aW9uIGNvdW50IGlzIGNvcGllZCB0byBlaXRoZXIg
YSBub2RlIG9yIGEKICAqIHRyYW5zYWN0aW9uIGl0IGlzIGluY3JlbWVudGVk
LiBUaGlzIGVuc3VyZXMgYWxsIG5vZGVzIGFuZC9vciB0cmFuc2FjdGlvbnMK
LSAqIGFyZSBoYXZpbmcgYSB1bmlxdWUgZ2VuZXJhdGlvbiBjb3VudC4KKyAq
IGFyZSBoYXZpbmcgYSB1bmlxdWUgZ2VuZXJhdGlvbiBjb3VudC4gVGhlIGlu
Y3JlbWVudCBpcyBkb25lIF9iZWZvcmVfIHRoZQorICogY29weSBhcyB0aGF0
IGlzIG5lZWRlZCBmb3IgY2hlY2tpbmcgd2hldGhlciBhIGRvbWFpbiB3YXMg
Y3JlYXRlZCBiZWZvcmUKKyAqIG9yIGFmdGVyIGEgbm9kZSBoYXMgYmVlbiB3
cml0dGVuICh0aGUgZG9tYWluJ3MgZ2VuZXJhdGlvbiBpcyBzZXQgd2l0aCB0
aGUKKyAqIGFjdHVhbCBnZW5lcmF0aW9uIGNvdW50IHdpdGhvdXQgaW5jcmVt
ZW50aW5nIGl0LCBpbiBvcmRlciB0byBzdXBwb3J0CisgKiB3cml0aW5nIGEg
bm9kZSBmb3IgYSBkb21haW4gYmVmb3JlIHRoZSBkb21haW4gaGFzIGJlZW4g
b2ZmaWNpYWxseQorICogaW50cm9kdWNlZCkuCiAgKgogICogVHJhbnNhY3Rp
b24gY29uZmxpY3RzIGFyZSBkZXRlY3RlZCBieSBjaGVja2luZyB0aGUgZ2Vu
ZXJhdGlvbiBjb3VudCBvZiBhbGwKICAqIG5vZGVzIHJlYWQgaW4gdGhlIHRy
YW5zYWN0aW9uIHRvIG1hdGNoIHdpdGggdGhlIGdlbmVyYXRpb24gY291bnQg
aW4gdGhlCkBAIC0xNjEsNyArMTY2LDcgQEAgc3RydWN0IHRyYW5zYWN0aW9u
CiB9OwogCiBleHRlcm4gaW50IHF1b3RhX21heF90cmFuc2FjdGlvbjsKLXN0
YXRpYyB1aW50NjRfdCBnZW5lcmF0aW9uOwordWludDY0X3QgZ2VuZXJhdGlv
bjsKIAogc3RhdGljIHZvaWQgc2V0X3RkYl9rZXkoY29uc3QgY2hhciAqbmFt
ZSwgVERCX0RBVEEgKmtleSkKIHsKQEAgLTIzNyw3ICsyNDIsNyBAQCBpbnQg
YWNjZXNzX25vZGUoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBu
b2RlICpub2RlLAogCWJvb2wgaW50cm9kdWNlID0gZmFsc2U7CiAKIAlpZiAo
dHlwZSAhPSBOT0RFX0FDQ0VTU19SRUFEKSB7Ci0JCW5vZGUtPmdlbmVyYXRp
b24gPSBnZW5lcmF0aW9uKys7CisJCW5vZGUtPmdlbmVyYXRpb24gPSArK2dl
bmVyYXRpb247CiAJCWlmIChjb25uICYmICFjb25uLT50cmFuc2FjdGlvbikK
IAkJCXdybF9hcHBseV9kZWJpdF9kaXJlY3QoY29ubik7CiAJfQpAQCAtMzc0
LDcgKzM3OSw3IEBAIHN0YXRpYyBpbnQgZmluYWxpemVfdHJhbnNhY3Rpb24o
c3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sCiAJCQkJaWYgKCFkYXRhLmRwdHIp
CiAJCQkJCWdvdG8gZXJyOwogCQkJCWhkciA9ICh2b2lkICopZGF0YS5kcHRy
OwotCQkJCWhkci0+Z2VuZXJhdGlvbiA9IGdlbmVyYXRpb24rKzsKKwkJCQlo
ZHItPmdlbmVyYXRpb24gPSArK2dlbmVyYXRpb247CiAJCQkJcmV0ID0gdGRi
X3N0b3JlKHRkYl9jdHgsIGtleSwgZGF0YSwKIAkJCQkJCVREQl9SRVBMQUNF
KTsKIAkJCQl0YWxsb2NfZnJlZShkYXRhLmRwdHIpOwpAQCAtNDYyLDcgKzQ2
Nyw3IEBAIGludCBkb190cmFuc2FjdGlvbl9zdGFydChzdHJ1Y3QgY29ubmVj
dGlvbiAqY29ubiwgc3RydWN0IGJ1ZmZlcmVkX2RhdGEgKmluKQogCUlOSVRf
TElTVF9IRUFEKCZ0cmFucy0+YWNjZXNzZWQpOwogCUlOSVRfTElTVF9IRUFE
KCZ0cmFucy0+Y2hhbmdlZF9kb21haW5zKTsKIAl0cmFucy0+ZmFpbCA9IGZh
bHNlOwotCXRyYW5zLT5nZW5lcmF0aW9uID0gZ2VuZXJhdGlvbisrOworCXRy
YW5zLT5nZW5lcmF0aW9uID0gKytnZW5lcmF0aW9uOwogCiAJLyogUGljayBh
biB1bnVzZWQgdHJhbnNhY3Rpb24gaWRlbnRpZmllci4gKi8KIAlkbyB7CmRp
ZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfdHJhbnNhY3Rp
b24uaCBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF90cmFuc2FjdGlvbi5o
CmluZGV4IDMzODZiYWM1NjUwOC4uNDNhMTYyYmVhM2YzIDEwMDY0NAotLS0g
YS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfdHJhbnNhY3Rpb24uaAorKysg
Yi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfdHJhbnNhY3Rpb24uaApAQCAt
MjcsNiArMjcsOCBAQCBlbnVtIG5vZGVfYWNjZXNzX3R5cGUgewogCiBzdHJ1
Y3QgdHJhbnNhY3Rpb247CiAKK2V4dGVybiB1aW50NjRfdCBnZW5lcmF0aW9u
OworCiBpbnQgZG9fdHJhbnNhY3Rpb25fc3RhcnQoc3RydWN0IGNvbm5lY3Rp
b24gKmNvbm4sIHN0cnVjdCBidWZmZXJlZF9kYXRhICpub2RlKTsKIGludCBk
b190cmFuc2FjdGlvbl9lbmQoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0
cnVjdCBidWZmZXJlZF9kYXRhICppbik7CiAKZGlmZiAtLWdpdCBhL3Rvb2xz
L3hlbnN0b3JlL3hzX2xpYi5jIGIvdG9vbHMveGVuc3RvcmUveHNfbGliLmMK
aW5kZXggM2U0M2Y4ODA5ZDQyLi5kNDA3ZDU3MTNhZmYgMTAwNjQ0Ci0tLSBh
L3Rvb2xzL3hlbnN0b3JlL3hzX2xpYi5jCisrKyBiL3Rvb2xzL3hlbnN0b3Jl
L3hzX2xpYi5jCkBAIC0xNTIsNyArMTUyLDcgQEAgYm9vbCB4c19zdHJpbmdz
X3RvX3Blcm1zKHN0cnVjdCB4c19wZXJtaXNzaW9ucyAqcGVybXMsIHVuc2ln
bmVkIGludCBudW0sCiBib29sIHhzX3Blcm1fdG9fc3RyaW5nKGNvbnN0IHN0
cnVjdCB4c19wZXJtaXNzaW9ucyAqcGVybSwKICAgICAgICAgICAgICAgICAg
ICAgICAgY2hhciAqYnVmZmVyLCBzaXplX3QgYnVmX2xlbikKIHsKLQlzd2l0
Y2ggKChpbnQpcGVybS0+cGVybXMpIHsKKwlzd2l0Y2ggKChpbnQpcGVybS0+
cGVybXMgJiB+WFNfUEVSTV9JR05PUkUpIHsKIAljYXNlIFhTX1BFUk1fV1JJ
VEU6CiAJCSpidWZmZXIgPSAndyc7CiAJCWJyZWFrOwotLSAKMi4xNy4xCgo=

--=separator
Content-Type: application/octet-stream; name="xsa322-4.14-c.patch"
Content-Disposition: attachment; filename="xsa322-4.14-c.patch"
Content-Transfer-Encoding: base64

RnJvbTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpTdWJqZWN0
OiB0b29scy94ZW5zdG9yZTogcmV2b2tlIGFjY2VzcyByaWdodHMgZm9yIHJl
bW92ZWQgZG9tYWlucwoKQWNjZXNzIHJpZ2h0cyBvZiBYZW5zdG9yZSBub2Rl
cyBhcmUgcGVyIGRvbWlkLiBVbmZvcnR1bmF0ZWx5IGV4aXN0aW5nCmdyYW50
ZWQgYWNjZXNzIHJpZ2h0cyBhcmUgbm90IHJlbW92ZWQgd2hlbiBhIGRvbWFp
biBpcyBiZWluZyBkZXN0cm95ZWQuClRoaXMgbWVhbnMgdGhhdCBhIG5ldyBk
b21haW4gY3JlYXRlZCB3aXRoIHRoZSBzYW1lIGRvbWlkIHdpbGwgaW5oZXJp
dAp0aGUgYWNjZXNzIHJpZ2h0cyB0byBYZW5zdG9yZSBub2RlcyBmcm9tIHRo
ZSBwcmV2aW91cyBkb21haW4ocykgd2l0aAp0aGUgc2FtZSBkb21pZC4KClRo
aXMgY2FuIGJlIGF2b2lkZWQgYnkgYWRkaW5nIGEgZ2VuZXJhdGlvbiBjb3Vu
dGVyIHRvIGVhY2ggZG9tYWluLgpUaGUgZ2VuZXJhdGlvbiBjb3VudGVyIG9m
IHRoZSBkb21haW4gaXMgc2V0IHRvIHRoZSBnbG9iYWwgZ2VuZXJhdGlvbgpj
b3VudGVyIHdoZW4gYSBkb21haW4gc3RydWN0dXJlIGlzIGJlaW5nIGFsbG9j
YXRlZC4gV2hlbiByZWFkaW5nIG9yCndyaXRpbmcgYSBub2RlIGFsbCBwZXJt
aXNzaW9ucyBvZiBkb21haW5zIHdoaWNoIGFyZSB5b3VuZ2VyIHRoYW4gdGhl
Cm5vZGUgaXRzZWxmIGFyZSBkcm9wcGVkLiBUaGlzIGlzIGRvbmUgYnkgZmxh
Z2dpbmcgdGhlIHJlbGF0ZWQgZW50cnkKYXMgaW52YWxpZCBpbiBvcmRlciB0
byBhdm9pZCBtb2RpZnlpbmcgcGVybWlzc2lvbnMgaW4gYSB3YXkgdGhlIHVz
ZXIKY291bGQgZGV0ZWN0LgoKQSBzcGVjaWFsIGNhc2UgaGFzIHRvIGJlIGNv
bnNpZGVyZWQ6IGZvciBhIG5ldyBkb21haW4gdGhlIGZpcnN0ClhlbnN0b3Jl
IGVudHJpZXMgYXJlIGFscmVhZHkgd3JpdHRlbiBiZWZvcmUgdGhlIGRvbWFp
biBpcyBvZmZpY2lhbGx5CmludHJvZHVjZWQgaW4gWGVuc3RvcmUuIEluIG9y
ZGVyIG5vdCB0byBkcm9wIHRoZSBwZXJtaXNzaW9ucyBmb3IgdGhlCm5ldyBk
b21haW4gYSBkb21haW4gc3RydWN0IGlzIGFsbG9jYXRlZCBldmVuIGJlZm9y
ZSBpbnRyb2R1Y3Rpb24gaWYKdGhlIGh5cGVydmlzb3IgaXMgYXdhcmUgb2Yg
dGhlIGRvbWFpbi4gVGhpcyByZXF1aXJlcyBhZGRpbmcgYW5vdGhlcgpib29s
ICJpbnRyb2R1Y2VkIiB0byBzdHJ1Y3QgZG9tYWluIGluIHhlbnN0b3JlZC4g
SW4gb3JkZXIgdG8gYXZvaWQKYWRkaXRpb25hbCBwYWRkaW5nIGhvbGVzIGNv
bnZlcnQgdGhlIHNodXRkb3duIGZsYWcgdG8gYm9vbCwgdG9vLgoKQXMgdmVy
aWZ5aW5nIHBlcm1pc3Npb25zIGhhcyBpdHMgcHJpY2UgcmVnYXJkaW5nIHJ1
bnRpbWUgYWRkIGEgbmV3CnF1b3RhIGZvciBsaW1pdGluZyB0aGUgbnVtYmVy
IG9mIHBlcm1pc3Npb25zIGFuIHVucHJpdmlsZWdlZCBkb21haW4KY2FuIHNl
dCBmb3IgYSBub2RlLiBUaGUgZGVmYXVsdCBmb3IgdGhhdCBuZXcgcXVvdGEg
aXMgNS4KClRoaXMgaXMgcGFydCBvZiBYU0EtMzIyLgoKU2lnbmVkLW9mZi1i
eTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpSZXZpZXdlZC1i
eTogUGF1bCBEdXJyYW50IDxwYXVsQHhlbi5vcmc+CkFja2VkLWJ5OiBKdWxp
ZW4gR3JhbGwgPGp1bGllbkBhbWF6b24uY29tPgoKZGlmZiAtLWdpdCBhL3Rv
b2xzL3hlbnN0b3JlL2luY2x1ZGUveGVuc3RvcmVfbGliLmggYi90b29scy94
ZW5zdG9yZS9pbmNsdWRlL3hlbnN0b3JlX2xpYi5oCmluZGV4IDBmZmJhZTll
YjUuLjRjOWI2ZDE2ODUgMTAwNjQ0Ci0tLSBhL3Rvb2xzL3hlbnN0b3JlL2lu
Y2x1ZGUveGVuc3RvcmVfbGliLmgKKysrIGIvdG9vbHMveGVuc3RvcmUvaW5j
bHVkZS94ZW5zdG9yZV9saWIuaApAQCAtMzQsNiArMzQsNyBAQCBlbnVtIHhz
X3Blcm1fdHlwZSB7CiAJLyogSW50ZXJuYWwgdXNlLiAqLwogCVhTX1BFUk1f
RU5PRU5UX09LID0gNCwKIAlYU19QRVJNX09XTkVSID0gOCwKKwlYU19QRVJN
X0lHTk9SRSA9IDE2LAogfTsKIAogc3RydWN0IHhzX3Blcm1pc3Npb25zCmRp
ZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jIGIv
dG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYwppbmRleCA5MmJmZDU0
Y2ZmLi41MDU1NjBhNWRlIDEwMDY0NAotLS0gYS90b29scy94ZW5zdG9yZS94
ZW5zdG9yZWRfY29yZS5jCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3Jl
ZF9jb3JlLmMKQEAgLTEwNCw2ICsxMDQsNyBAQCBpbnQgcXVvdGFfbmJfZW50
cnlfcGVyX2RvbWFpbiA9IDEwMDA7CiBpbnQgcXVvdGFfbmJfd2F0Y2hfcGVy
X2RvbWFpbiA9IDEyODsKIGludCBxdW90YV9tYXhfZW50cnlfc2l6ZSA9IDIw
NDg7IC8qIDJLICovCiBpbnQgcXVvdGFfbWF4X3RyYW5zYWN0aW9uID0gMTA7
CitpbnQgcXVvdGFfbmJfcGVybXNfcGVyX25vZGUgPSA1OwogCiB2b2lkIHRy
YWNlKGNvbnN0IGNoYXIgKmZtdCwgLi4uKQogewpAQCAtNDA5LDggKzQxMCwx
MyBAQCBzdHJ1Y3Qgbm9kZSAqcmVhZF9ub2RlKHN0cnVjdCBjb25uZWN0aW9u
ICpjb25uLCBjb25zdCB2b2lkICpjdHgsCiAKIAkvKiBQZXJtaXNzaW9ucyBh
cmUgc3RydWN0IHhzX3Blcm1pc3Npb25zLiAqLwogCW5vZGUtPnBlcm1zLnAg
PSBoZHItPnBlcm1zOworCWlmIChkb21haW5fYWRqdXN0X25vZGVfcGVybXMo
bm9kZSkpIHsKKwkJdGFsbG9jX2ZyZWUobm9kZSk7CisJCXJldHVybiBOVUxM
OworCX0KKwogCS8qIERhdGEgaXMgYmluYXJ5IGJsb2IgKHVzdWFsbHkgYXNj
aWksIG5vIG51bCkuICovCi0Jbm9kZS0+ZGF0YSA9IG5vZGUtPnBlcm1zLnAg
KyBub2RlLT5wZXJtcy5udW07CisJbm9kZS0+ZGF0YSA9IG5vZGUtPnBlcm1z
LnAgKyBoZHItPm51bV9wZXJtczsKIAkvKiBDaGlsZHJlbiBpcyBzdHJpbmdz
LCBudWwgc2VwYXJhdGVkLiAqLwogCW5vZGUtPmNoaWxkcmVuID0gbm9kZS0+
ZGF0YSArIG5vZGUtPmRhdGFsZW47CiAKQEAgLTQyNiw2ICs0MzIsOSBAQCBp
bnQgd3JpdGVfbm9kZV9yYXcoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIFRE
Ql9EQVRBICprZXksIHN0cnVjdCBub2RlICpub2RlLAogCXZvaWQgKnA7CiAJ
c3RydWN0IHhzX3RkYl9yZWNvcmRfaGRyICpoZHI7CiAKKwlpZiAoZG9tYWlu
X2FkanVzdF9ub2RlX3Blcm1zKG5vZGUpKQorCQlyZXR1cm4gZXJybm87CisK
IAlkYXRhLmRzaXplID0gc2l6ZW9mKCpoZHIpCiAJCSsgbm9kZS0+cGVybXMu
bnVtICogc2l6ZW9mKG5vZGUtPnBlcm1zLnBbMF0pCiAJCSsgbm9kZS0+ZGF0
YWxlbiArIG5vZGUtPmNoaWxkbGVuOwpAQCAtNDg1LDggKzQ5NCw5IEBAIGVu
dW0geHNfcGVybV90eXBlIHBlcm1fZm9yX2Nvbm4oc3RydWN0IGNvbm5lY3Rp
b24gKmNvbm4sCiAJCXJldHVybiAoWFNfUEVSTV9SRUFEfFhTX1BFUk1fV1JJ
VEV8WFNfUEVSTV9PV05FUikgJiBtYXNrOwogCiAJZm9yIChpID0gMTsgaSA8
IHBlcm1zLT5udW07IGkrKykKLQkJaWYgKHBlcm1zLT5wW2ldLmlkID09IGNv
bm4tPmlkCi0gICAgICAgICAgICAgICAgICAgICAgICB8fCAoY29ubi0+dGFy
Z2V0ICYmIHBlcm1zLT5wW2ldLmlkID09IGNvbm4tPnRhcmdldC0+aWQpKQor
CQlpZiAoIShwZXJtcy0+cFtpXS5wZXJtcyAmIFhTX1BFUk1fSUdOT1JFKSAm
JgorCQkgICAgKHBlcm1zLT5wW2ldLmlkID09IGNvbm4tPmlkIHx8CisJCSAg
ICAgKGNvbm4tPnRhcmdldCAmJiBwZXJtcy0+cFtpXS5pZCA9PSBjb25uLT50
YXJnZXQtPmlkKSkpCiAJCQlyZXR1cm4gcGVybXMtPnBbaV0ucGVybXMgJiBt
YXNrOwogCiAJcmV0dXJuIHBlcm1zLT5wWzBdLnBlcm1zICYgbWFzazsKQEAg
LTEyNDgsOCArMTI1OCwxMiBAQCBzdGF0aWMgaW50IGRvX3NldF9wZXJtcyhz
dHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgc3RydWN0IGJ1ZmZlcmVkX2RhdGEg
KmluKQogCWlmIChwZXJtcy5udW0gPCAyKQogCQlyZXR1cm4gRUlOVkFMOwog
Ci0JcGVybXN0ciA9IGluLT5idWZmZXIgKyBzdHJsZW4oaW4tPmJ1ZmZlcikg
KyAxOwogCXBlcm1zLm51bS0tOworCWlmIChkb21haW5faXNfdW5wcml2aWxl
Z2VkKGNvbm4pICYmCisJICAgIHBlcm1zLm51bSA+IHF1b3RhX25iX3Blcm1z
X3Blcl9ub2RlKQorCQlyZXR1cm4gRU5PU1BDOworCisJcGVybXN0ciA9IGlu
LT5idWZmZXIgKyBzdHJsZW4oaW4tPmJ1ZmZlcikgKyAxOwogCiAJcGVybXMu
cCA9IHRhbGxvY19hcnJheShpbiwgc3RydWN0IHhzX3Blcm1pc3Npb25zLCBw
ZXJtcy5udW0pOwogCWlmICghcGVybXMucCkKQEAgLTE5MDQsNiArMTkxOCw3
IEBAIHN0YXRpYyB2b2lkIHVzYWdlKHZvaWQpCiAiICAtUywgLS1lbnRyeS1z
aXplIDxzaXplPiBsaW1pdCB0aGUgc2l6ZSBvZiBlbnRyeSBwZXIgZG9tYWlu
LCBhbmRcbiIKICIgIC1XLCAtLXdhdGNoLW5iIDxuYj4gICAgIGxpbWl0IHRo
ZSBudW1iZXIgb2Ygd2F0Y2hlcyBwZXIgZG9tYWluLFxuIgogIiAgLXQsIC0t
dHJhbnNhY3Rpb24gPG5iPiAgbGltaXQgdGhlIG51bWJlciBvZiB0cmFuc2Fj
dGlvbiBhbGxvd2VkIHBlciBkb21haW4sXG4iCisiICAtQSwgLS1wZXJtLW5i
IDxuYj4gICAgICBsaW1pdCB0aGUgbnVtYmVyIG9mIHBlcm1pc3Npb25zIHBl
ciBub2RlLFxuIgogIiAgLVIsIC0tbm8tcmVjb3ZlcnkgICAgICAgdG8gcmVx
dWVzdCB0aGF0IG5vIHJlY292ZXJ5IHNob3VsZCBiZSBhdHRlbXB0ZWQgd2hl
blxuIgogIiAgICAgICAgICAgICAgICAgICAgICAgICAgdGhlIHN0b3JlIGlz
IGNvcnJ1cHRlZCAoZGVidWcgb25seSksXG4iCiAiICAtSSwgLS1pbnRlcm5h
bC1kYiAgICAgICBzdG9yZSBkYXRhYmFzZSBpbiBtZW1vcnksIG5vdCBvbiBk
aXNrXG4iCkBAIC0xOTI0LDYgKzE5MzksNyBAQCBzdGF0aWMgc3RydWN0IG9w
dGlvbiBvcHRpb25zW10gPSB7CiAJeyAiZW50cnktc2l6ZSIsIDEsIE5VTEws
ICdTJyB9LAogCXsgInRyYWNlLWZpbGUiLCAxLCBOVUxMLCAnVCcgfSwKIAl7
ICJ0cmFuc2FjdGlvbiIsIDEsIE5VTEwsICd0JyB9LAorCXsgInBlcm0tbmIi
LCAxLCBOVUxMLCAnQScgfSwKIAl7ICJuby1yZWNvdmVyeSIsIDAsIE5VTEws
ICdSJyB9LAogCXsgImludGVybmFsLWRiIiwgMCwgTlVMTCwgJ0knIH0sCiAJ
eyAidmVyYm9zZSIsIDAsIE5VTEwsICdWJyB9LApAQCAtMTk0Niw3ICsxOTYy
LDcgQEAgaW50IG1haW4oaW50IGFyZ2MsIGNoYXIgKmFyZ3ZbXSkKIAlpbnQg
dGltZW91dDsKIAogCi0Jd2hpbGUgKChvcHQgPSBnZXRvcHRfbG9uZyhhcmdj
LCBhcmd2LCAiREU6RjpITlBTOnQ6VDpSVlc6Iiwgb3B0aW9ucywKKwl3aGls
ZSAoKG9wdCA9IGdldG9wdF9sb25nKGFyZ2MsIGFyZ3YsICJERTpGOkhOUFM6
dDpBOlQ6UlZXOiIsIG9wdGlvbnMsCiAJCQkJICBOVUxMKSkgIT0gLTEpIHsK
IAkJc3dpdGNoIChvcHQpIHsKIAkJY2FzZSAnRCc6CkBAIC0xOTg4LDYgKzIw
MDQsOSBAQCBpbnQgbWFpbihpbnQgYXJnYywgY2hhciAqYXJndltdKQogCQlj
YXNlICdXJzoKIAkJCXF1b3RhX25iX3dhdGNoX3Blcl9kb21haW4gPSBzdHJ0
b2wob3B0YXJnLCBOVUxMLCAxMCk7CiAJCQlicmVhazsKKwkJY2FzZSAnQSc6
CisJCQlxdW90YV9uYl9wZXJtc19wZXJfbm9kZSA9IHN0cnRvbChvcHRhcmcs
IE5VTEwsIDEwKTsKKwkJCWJyZWFrOwogCQljYXNlICdlJzoKIAkJCWRvbTBf
ZXZlbnQgPSBzdHJ0b2wob3B0YXJnLCBOVUxMLCAxMCk7CiAJCQlicmVhazsK
ZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9kb21haW4u
YyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9kb21haW4uYwppbmRleCA5
ZmFkNDcwZjgzLi5kYzYzNWU5YmUzIDEwMDY0NAotLS0gYS90b29scy94ZW5z
dG9yZS94ZW5zdG9yZWRfZG9tYWluLmMKKysrIGIvdG9vbHMveGVuc3RvcmUv
eGVuc3RvcmVkX2RvbWFpbi5jCkBAIC02Nyw4ICs2NywxNCBAQCBzdHJ1Y3Qg
ZG9tYWluCiAJLyogVGhlIGNvbm5lY3Rpb24gYXNzb2NpYXRlZCB3aXRoIHRo
aXMuICovCiAJc3RydWN0IGNvbm5lY3Rpb24gKmNvbm47CiAKKwkvKiBHZW5l
cmF0aW9uIGNvdW50IGF0IGRvbWFpbiBpbnRyb2R1Y3Rpb24gdGltZS4gKi8K
Kwl1aW50NjRfdCBnZW5lcmF0aW9uOworCiAJLyogSGF2ZSB3ZSBub3RpY2Vk
IHRoYXQgdGhpcyBkb21haW4gaXMgc2h1dGRvd24/ICovCi0JaW50IHNodXRk
b3duOworCWJvb2wgc2h1dGRvd247CisKKwkvKiBIYXMgZG9tYWluIGJlZW4g
b2ZmaWNpYWxseSBpbnRyb2R1Y2VkPyAqLworCWJvb2wgaW50cm9kdWNlZDsK
IAogCS8qIG51bWJlciBvZiBlbnRyeSBmcm9tIHRoaXMgZG9tYWluIGluIHRo
ZSBzdG9yZSAqLwogCWludCBuYmVudHJ5OwpAQCAtMTg4LDYgKzE5NCw5IEBA
IHN0YXRpYyBpbnQgZGVzdHJveV9kb21haW4odm9pZCAqX2RvbWFpbikKIAog
CWxpc3RfZGVsKCZkb21haW4tPmxpc3QpOwogCisJaWYgKCFkb21haW4tPmlu
dHJvZHVjZWQpCisJCXJldHVybiAwOworCiAJaWYgKGRvbWFpbi0+cG9ydCkg
ewogCQlpZiAoeGVuZXZ0Y2huX3VuYmluZCh4Y2VfaGFuZGxlLCBkb21haW4t
PnBvcnQpID09IC0xKQogCQkJZXByaW50ZigiPiBVbmJpbmRpbmcgcG9ydCAl
aSBmYWlsZWQhXG4iLCBkb21haW4tPnBvcnQpOwpAQCAtMjA5LDIxICsyMTgs
MzQgQEAgc3RhdGljIGludCBkZXN0cm95X2RvbWFpbih2b2lkICpfZG9tYWlu
KQogCXJldHVybiAwOwogfQogCitzdGF0aWMgYm9vbCBnZXRfZG9tYWluX2lu
Zm8odW5zaWduZWQgaW50IGRvbWlkLCB4Y19kb21pbmZvX3QgKmRvbWluZm8p
Cit7CisJcmV0dXJuIHhjX2RvbWFpbl9nZXRpbmZvKCp4Y19oYW5kbGUsIGRv
bWlkLCAxLCBkb21pbmZvKSA9PSAxICYmCisJICAgICAgIGRvbWluZm8tPmRv
bWlkID09IGRvbWlkOworfQorCiBzdGF0aWMgdm9pZCBkb21haW5fY2xlYW51
cCh2b2lkKQogewogCXhjX2RvbWluZm9fdCBkb21pbmZvOwogCXN0cnVjdCBk
b21haW4gKmRvbWFpbjsKIAlzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubjsKIAlp
bnQgbm90aWZ5ID0gMDsKKwlib29sIGRvbV92YWxpZDsKIAogIGFnYWluOgog
CWxpc3RfZm9yX2VhY2hfZW50cnkoZG9tYWluLCAmZG9tYWlucywgbGlzdCkg
ewotCQlpZiAoeGNfZG9tYWluX2dldGluZm8oKnhjX2hhbmRsZSwgZG9tYWlu
LT5kb21pZCwgMSwKLQkJCQkgICAgICAmZG9taW5mbykgPT0gMSAmJgotCQkg
ICAgZG9taW5mby5kb21pZCA9PSBkb21haW4tPmRvbWlkKSB7CisJCWRvbV92
YWxpZCA9IGdldF9kb21haW5faW5mbyhkb21haW4tPmRvbWlkLCAmZG9taW5m
byk7CisJCWlmICghZG9tYWluLT5pbnRyb2R1Y2VkKSB7CisJCQlpZiAoIWRv
bV92YWxpZCkgeworCQkJCXRhbGxvY19mcmVlKGRvbWFpbik7CisJCQkJZ290
byBhZ2FpbjsKKwkJCX0KKwkJCWNvbnRpbnVlOworCQl9CisJCWlmIChkb21f
dmFsaWQpIHsKIAkJCWlmICgoZG9taW5mby5jcmFzaGVkIHx8IGRvbWluZm8u
c2h1dGRvd24pCiAJCQkgICAgJiYgIWRvbWFpbi0+c2h1dGRvd24pIHsKLQkJ
CQlkb21haW4tPnNodXRkb3duID0gMTsKKwkJCQlkb21haW4tPnNodXRkb3du
ID0gdHJ1ZTsKIAkJCQlub3RpZnkgPSAxOwogCQkJfQogCQkJaWYgKCFkb21p
bmZvLmR5aW5nKQpAQCAtMjg5LDU4ICszMTEsODQgQEAgc3RhdGljIGNoYXIg
KnRhbGxvY19kb21haW5fcGF0aCh2b2lkICpjb250ZXh0LCB1bnNpZ25lZCBp
bnQgZG9taWQpCiAJcmV0dXJuIHRhbGxvY19hc3ByaW50Zihjb250ZXh0LCAi
L2xvY2FsL2RvbWFpbi8ldSIsIGRvbWlkKTsKIH0KIAotc3RhdGljIHN0cnVj
dCBkb21haW4gKm5ld19kb21haW4odm9pZCAqY29udGV4dCwgdW5zaWduZWQg
aW50IGRvbWlkLAotCQkJCSBpbnQgcG9ydCkKK3N0YXRpYyBzdHJ1Y3QgZG9t
YWluICpmaW5kX2RvbWFpbl9zdHJ1Y3QodW5zaWduZWQgaW50IGRvbWlkKQor
eworCXN0cnVjdCBkb21haW4gKmk7CisKKwlsaXN0X2Zvcl9lYWNoX2VudHJ5
KGksICZkb21haW5zLCBsaXN0KSB7CisJCWlmIChpLT5kb21pZCA9PSBkb21p
ZCkKKwkJCXJldHVybiBpOworCX0KKwlyZXR1cm4gTlVMTDsKK30KKworc3Rh
dGljIHN0cnVjdCBkb21haW4gKmFsbG9jX2RvbWFpbih2b2lkICpjb250ZXh0
LCB1bnNpZ25lZCBpbnQgZG9taWQpCiB7CiAJc3RydWN0IGRvbWFpbiAqZG9t
YWluOwotCWludCByYzsKIAogCWRvbWFpbiA9IHRhbGxvYyhjb250ZXh0LCBz
dHJ1Y3QgZG9tYWluKTsKLQlpZiAoIWRvbWFpbikKKwlpZiAoIWRvbWFpbikg
eworCQllcnJubyA9IEVOT01FTTsKIAkJcmV0dXJuIE5VTEw7CisJfQogCi0J
ZG9tYWluLT5wb3J0ID0gMDsKLQlkb21haW4tPnNodXRkb3duID0gMDsKIAlk
b21haW4tPmRvbWlkID0gZG9taWQ7Ci0JZG9tYWluLT5wYXRoID0gdGFsbG9j
X2RvbWFpbl9wYXRoKGRvbWFpbiwgZG9taWQpOwotCWlmICghZG9tYWluLT5w
YXRoKQotCQlyZXR1cm4gTlVMTDsKKwlkb21haW4tPmdlbmVyYXRpb24gPSBn
ZW5lcmF0aW9uOworCWRvbWFpbi0+aW50cm9kdWNlZCA9IGZhbHNlOwogCi0J
d3JsX2RvbWFpbl9uZXcoZG9tYWluKTsKKwl0YWxsb2Nfc2V0X2Rlc3RydWN0
b3IoZG9tYWluLCBkZXN0cm95X2RvbWFpbik7CiAKIAlsaXN0X2FkZCgmZG9t
YWluLT5saXN0LCAmZG9tYWlucyk7Ci0JdGFsbG9jX3NldF9kZXN0cnVjdG9y
KGRvbWFpbiwgZGVzdHJveV9kb21haW4pOworCisJcmV0dXJuIGRvbWFpbjsK
K30KKworc3RhdGljIGludCBuZXdfZG9tYWluKHN0cnVjdCBkb21haW4gKmRv
bWFpbiwgaW50IHBvcnQpCit7CisJaW50IHJjOworCisJZG9tYWluLT5wb3J0
ID0gMDsKKwlkb21haW4tPnNodXRkb3duID0gZmFsc2U7CisJZG9tYWluLT5w
YXRoID0gdGFsbG9jX2RvbWFpbl9wYXRoKGRvbWFpbiwgZG9tYWluLT5kb21p
ZCk7CisJaWYgKCFkb21haW4tPnBhdGgpIHsKKwkJZXJybm8gPSBFTk9NRU07
CisJCXJldHVybiBlcnJubzsKKwl9CisKKwl3cmxfZG9tYWluX25ldyhkb21h
aW4pOwogCiAJLyogVGVsbCBrZXJuZWwgd2UncmUgaW50ZXJlc3RlZCBpbiB0
aGlzIGV2ZW50LiAqLwotCXJjID0geGVuZXZ0Y2huX2JpbmRfaW50ZXJkb21h
aW4oeGNlX2hhbmRsZSwgZG9taWQsIHBvcnQpOworCXJjID0geGVuZXZ0Y2hu
X2JpbmRfaW50ZXJkb21haW4oeGNlX2hhbmRsZSwgZG9tYWluLT5kb21pZCwg
cG9ydCk7CiAJaWYgKHJjID09IC0xKQotCSAgICByZXR1cm4gTlVMTDsKKwkJ
cmV0dXJuIGVycm5vOwogCWRvbWFpbi0+cG9ydCA9IHJjOwogCisJZG9tYWlu
LT5pbnRyb2R1Y2VkID0gdHJ1ZTsKKwogCWRvbWFpbi0+Y29ubiA9IG5ld19j
b25uZWN0aW9uKHdyaXRlY2huLCByZWFkY2huKTsKLQlpZiAoIWRvbWFpbi0+
Y29ubikKLQkJcmV0dXJuIE5VTEw7CisJaWYgKCFkb21haW4tPmNvbm4pICB7
CisJCWVycm5vID0gRU5PTUVNOworCQlyZXR1cm4gZXJybm87CisJfQogCiAJ
ZG9tYWluLT5jb25uLT5kb21haW4gPSBkb21haW47Ci0JZG9tYWluLT5jb25u
LT5pZCA9IGRvbWlkOworCWRvbWFpbi0+Y29ubi0+aWQgPSBkb21haW4tPmRv
bWlkOwogCiAJZG9tYWluLT5yZW1vdGVfcG9ydCA9IHBvcnQ7CiAJZG9tYWlu
LT5uYmVudHJ5ID0gMDsKIAlkb21haW4tPm5id2F0Y2ggPSAwOwogCi0JcmV0
dXJuIGRvbWFpbjsKKwlyZXR1cm4gMDsKIH0KIAogCiBzdGF0aWMgc3RydWN0
IGRvbWFpbiAqZmluZF9kb21haW5fYnlfZG9taWQodW5zaWduZWQgaW50IGRv
bWlkKQogewotCXN0cnVjdCBkb21haW4gKmk7CisJc3RydWN0IGRvbWFpbiAq
ZDsKIAotCWxpc3RfZm9yX2VhY2hfZW50cnkoaSwgJmRvbWFpbnMsIGxpc3Qp
IHsKLQkJaWYgKGktPmRvbWlkID09IGRvbWlkKQotCQkJcmV0dXJuIGk7Ci0J
fQotCXJldHVybiBOVUxMOworCWQgPSBmaW5kX2RvbWFpbl9zdHJ1Y3QoZG9t
aWQpOworCisJcmV0dXJuIChkICYmIGQtPmludHJvZHVjZWQpID8gZCA6IE5V
TEw7CiB9CiAKIHN0YXRpYyB2b2lkIGRvbWFpbl9jb25uX3Jlc2V0KHN0cnVj
dCBkb21haW4gKmRvbWFpbikKQEAgLTM4NiwxNSArNDM0LDIxIEBAIGludCBk
b19pbnRyb2R1Y2Uoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBi
dWZmZXJlZF9kYXRhICppbikKIAlpZiAocG9ydCA8PSAwKQogCQlyZXR1cm4g
RUlOVkFMOwogCi0JZG9tYWluID0gZmluZF9kb21haW5fYnlfZG9taWQoZG9t
aWQpOworCWRvbWFpbiA9IGZpbmRfZG9tYWluX3N0cnVjdChkb21pZCk7CiAK
IAlpZiAoZG9tYWluID09IE5VTEwpIHsKKwkJLyogSGFuZyBkb21haW4gb2Zm
ICJpbiIgdW50aWwgd2UncmUgZmluaXNoZWQuICovCisJCWRvbWFpbiA9IGFs
bG9jX2RvbWFpbihpbiwgZG9taWQpOworCQlpZiAoZG9tYWluID09IE5VTEwp
CisJCQlyZXR1cm4gRU5PTUVNOworCX0KKworCWlmICghZG9tYWluLT5pbnRy
b2R1Y2VkKSB7CiAJCWludGVyZmFjZSA9IG1hcF9pbnRlcmZhY2UoZG9taWQp
OwogCQlpZiAoIWludGVyZmFjZSkKIAkJCXJldHVybiBlcnJubzsKIAkJLyog
SGFuZyBkb21haW4gb2ZmICJpbiIgdW50aWwgd2UncmUgZmluaXNoZWQuICov
Ci0JCWRvbWFpbiA9IG5ld19kb21haW4oaW4sIGRvbWlkLCBwb3J0KTsKLQkJ
aWYgKCFkb21haW4pIHsKKwkJaWYgKG5ld19kb21haW4oZG9tYWluLCBwb3J0
KSkgewogCQkJcmMgPSBlcnJubzsKIAkJCXVubWFwX2ludGVyZmFjZShpbnRl
cmZhY2UpOwogCQkJcmV0dXJuIHJjOwpAQCAtNTAzLDggKzU1Nyw4IEBAIGlu
dCBkb19yZXN1bWUoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBi
dWZmZXJlZF9kYXRhICppbikKIAlpZiAoSVNfRVJSKGRvbWFpbikpCiAJCXJl
dHVybiAtUFRSX0VSUihkb21haW4pOwogCi0JZG9tYWluLT5zaHV0ZG93biA9
IDA7Ci0JCisJZG9tYWluLT5zaHV0ZG93biA9IGZhbHNlOworCiAJc2VuZF9h
Y2soY29ubiwgWFNfUkVTVU1FKTsKIAogCXJldHVybiAwOwpAQCAtNjQ3LDgg
KzcwMSwxMCBAQCBzdGF0aWMgaW50IGRvbTBfaW5pdCh2b2lkKQogCWlmIChw
b3J0ID09IC0xKQogCQlyZXR1cm4gLTE7CiAKLQlkb20wID0gbmV3X2RvbWFp
bihOVUxMLCB4ZW5idXNfbWFzdGVyX2RvbWlkKCksIHBvcnQpOwotCWlmIChk
b20wID09IE5VTEwpCisJZG9tMCA9IGFsbG9jX2RvbWFpbihOVUxMLCB4ZW5i
dXNfbWFzdGVyX2RvbWlkKCkpOworCWlmICghZG9tMCkKKwkJcmV0dXJuIC0x
OworCWlmIChuZXdfZG9tYWluKGRvbTAsIHBvcnQpKQogCQlyZXR1cm4gLTE7
CiAKIAlkb20wLT5pbnRlcmZhY2UgPSB4ZW5idXNfbWFwKCk7CkBAIC03Mjks
NiArNzg1LDY2IEBAIHZvaWQgZG9tYWluX2VudHJ5X2luYyhzdHJ1Y3QgY29u
bmVjdGlvbiAqY29ubiwgc3RydWN0IG5vZGUgKm5vZGUpCiAJfQogfQogCisv
KgorICogQ2hlY2sgd2hldGhlciBhIGRvbWFpbiB3YXMgY3JlYXRlZCBiZWZv
cmUgb3IgYWZ0ZXIgYSBzcGVjaWZpYyBnZW5lcmF0aW9uCisgKiBjb3VudCAo
dXNlZCBmb3IgdGVzdGluZyB3aGV0aGVyIGEgbm9kZSBwZXJtaXNzaW9uIGlz
IG9sZGVyIHRoYW4gYSBkb21haW4pLgorICoKKyAqIFJldHVybiB2YWx1ZXM6
CisgKiAtMTogZXJyb3IKKyAqICAwOiBkb21haW4gaGFzIGhpZ2hlciBnZW5l
cmF0aW9uIGNvdW50IChpdCBpcyB5b3VuZ2VyIHRoYW4gYSBub2RlIHdpdGgg
dGhlCisgKiAgICAgZ2l2ZW4gY291bnQpLCBvciBkb21haW4gaXNuJ3QgZXhp
c3RpbmcgYW55IGxvbmdlcgorICogIDE6IGRvbWFpbiBpcyBvbGRlciB0aGFu
IHRoZSBub2RlCisgKi8KK3N0YXRpYyBpbnQgY2hrX2RvbWFpbl9nZW5lcmF0
aW9uKHVuc2lnbmVkIGludCBkb21pZCwgdWludDY0X3QgZ2VuKQoreworCXN0
cnVjdCBkb21haW4gKmQ7CisJeGNfZG9taW5mb190IGRvbWluZm87CisKKwlp
ZiAoIXhjX2hhbmRsZSAmJiBkb21pZCA9PSAwKQorCQlyZXR1cm4gMTsKKwor
CWQgPSBmaW5kX2RvbWFpbl9zdHJ1Y3QoZG9taWQpOworCWlmIChkKQorCQly
ZXR1cm4gKGQtPmdlbmVyYXRpb24gPD0gZ2VuKSA/IDEgOiAwOworCisJaWYg
KCFnZXRfZG9tYWluX2luZm8oZG9taWQsICZkb21pbmZvKSkKKwkJcmV0dXJu
IDA7CisKKwlkID0gYWxsb2NfZG9tYWluKE5VTEwsIGRvbWlkKTsKKwlyZXR1
cm4gZCA/IDEgOiAtMTsKK30KKworLyoKKyAqIFJlbW92ZSBwZXJtaXNzaW9u
cyBmb3Igbm8gbG9uZ2VyIGV4aXN0aW5nIGRvbWFpbnMgaW4gb3JkZXIgdG8g
YXZvaWQgYSBuZXcKKyAqIGRvbWFpbiB3aXRoIHRoZSBzYW1lIGRvbWlkIGlu
aGVyaXRpbmcgdGhlIHBlcm1pc3Npb25zLgorICovCitpbnQgZG9tYWluX2Fk
anVzdF9ub2RlX3Blcm1zKHN0cnVjdCBub2RlICpub2RlKQoreworCXVuc2ln
bmVkIGludCBpOworCWludCByZXQ7CisKKwlyZXQgPSBjaGtfZG9tYWluX2dl
bmVyYXRpb24obm9kZS0+cGVybXMucFswXS5pZCwgbm9kZS0+Z2VuZXJhdGlv
bik7CisJaWYgKHJldCA8IDApCisJCXJldHVybiBlcnJubzsKKworCS8qIElm
IHRoZSBvd25lciBkb2Vzbid0IGV4aXN0IGFueSBsb25nZXIgZ2l2ZSBpdCB0
byBwcml2IGRvbWFpbi4gKi8KKwlpZiAoIXJldCkKKwkJbm9kZS0+cGVybXMu
cFswXS5pZCA9IHByaXZfZG9taWQ7CisKKwlmb3IgKGkgPSAxOyBpIDwgbm9k
ZS0+cGVybXMubnVtOyBpKyspIHsKKwkJaWYgKG5vZGUtPnBlcm1zLnBbaV0u
cGVybXMgJiBYU19QRVJNX0lHTk9SRSkKKwkJCWNvbnRpbnVlOworCQlyZXQg
PSBjaGtfZG9tYWluX2dlbmVyYXRpb24obm9kZS0+cGVybXMucFtpXS5pZCwK
KwkJCQkJICAgIG5vZGUtPmdlbmVyYXRpb24pOworCQlpZiAocmV0IDwgMCkK
KwkJCXJldHVybiBlcnJubzsKKwkJaWYgKCFyZXQpCisJCQlub2RlLT5wZXJt
cy5wW2ldLnBlcm1zIHw9IFhTX1BFUk1fSUdOT1JFOworCX0KKworCXJldHVy
biAwOworfQorCiB2b2lkIGRvbWFpbl9lbnRyeV9kZWMoc3RydWN0IGNvbm5l
Y3Rpb24gKmNvbm4sIHN0cnVjdCBub2RlICpub2RlKQogewogCXN0cnVjdCBk
b21haW4gKmQ7CmRpZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94ZW5zdG9y
ZWRfZG9tYWluLmggYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfZG9tYWlu
LmgKaW5kZXggMjU5MTgzOTYyYS4uNWUwMDA4NzIwNiAxMDA2NDQKLS0tIGEv
dG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2RvbWFpbi5oCisrKyBiL3Rvb2xz
L3hlbnN0b3JlL3hlbnN0b3JlZF9kb21haW4uaApAQCAtNTYsNiArNTYsOSBA
QCBib29sIGRvbWFpbl9jYW5fd3JpdGUoc3RydWN0IGNvbm5lY3Rpb24gKmNv
bm4pOwogCiBib29sIGRvbWFpbl9pc191bnByaXZpbGVnZWQoc3RydWN0IGNv
bm5lY3Rpb24gKmNvbm4pOwogCisvKiBSZW1vdmUgbm9kZSBwZXJtaXNzaW9u
cyBmb3Igbm8gbG9uZ2VyIGV4aXN0aW5nIGRvbWFpbnMuICovCitpbnQgZG9t
YWluX2FkanVzdF9ub2RlX3Blcm1zKHN0cnVjdCBub2RlICpub2RlKTsKKwog
LyogUXVvdGEgbWFuaXB1bGF0aW9uICovCiB2b2lkIGRvbWFpbl9lbnRyeV9p
bmMoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBub2RlICopOwog
dm9pZCBkb21haW5fZW50cnlfZGVjKHN0cnVjdCBjb25uZWN0aW9uICpjb25u
LCBzdHJ1Y3Qgbm9kZSAqKTsKZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3Jl
L3hlbnN0b3JlZF90cmFuc2FjdGlvbi5jIGIvdG9vbHMveGVuc3RvcmUveGVu
c3RvcmVkX3RyYW5zYWN0aW9uLmMKaW5kZXggYTdkOGM1ZDQ3NS4uMjg4MWYz
YjJlNCAxMDA2NDQKLS0tIGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3Ry
YW5zYWN0aW9uLmMKKysrIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3Ry
YW5zYWN0aW9uLmMKQEAgLTQ3LDcgKzQ3LDEyIEBACiAgKiB0cmFuc2FjdGlv
bi4KICAqIEVhY2ggdGltZSB0aGUgZ2xvYmFsIGdlbmVyYXRpb24gY291bnQg
aXMgY29waWVkIHRvIGVpdGhlciBhIG5vZGUgb3IgYQogICogdHJhbnNhY3Rp
b24gaXQgaXMgaW5jcmVtZW50ZWQuIFRoaXMgZW5zdXJlcyBhbGwgbm9kZXMg
YW5kL29yIHRyYW5zYWN0aW9ucwotICogYXJlIGhhdmluZyBhIHVuaXF1ZSBn
ZW5lcmF0aW9uIGNvdW50LgorICogYXJlIGhhdmluZyBhIHVuaXF1ZSBnZW5l
cmF0aW9uIGNvdW50LiBUaGUgaW5jcmVtZW50IGlzIGRvbmUgX2JlZm9yZV8g
dGhlCisgKiBjb3B5IGFzIHRoYXQgaXMgbmVlZGVkIGZvciBjaGVja2luZyB3
aGV0aGVyIGEgZG9tYWluIHdhcyBjcmVhdGVkIGJlZm9yZQorICogb3IgYWZ0
ZXIgYSBub2RlIGhhcyBiZWVuIHdyaXR0ZW4gKHRoZSBkb21haW4ncyBnZW5l
cmF0aW9uIGlzIHNldCB3aXRoIHRoZQorICogYWN0dWFsIGdlbmVyYXRpb24g
Y291bnQgd2l0aG91dCBpbmNyZW1lbnRpbmcgaXQsIGluIG9yZGVyIHRvIHN1
cHBvcnQKKyAqIHdyaXRpbmcgYSBub2RlIGZvciBhIGRvbWFpbiBiZWZvcmUg
dGhlIGRvbWFpbiBoYXMgYmVlbiBvZmZpY2lhbGx5CisgKiBpbnRyb2R1Y2Vk
KS4KICAqCiAgKiBUcmFuc2FjdGlvbiBjb25mbGljdHMgYXJlIGRldGVjdGVk
IGJ5IGNoZWNraW5nIHRoZSBnZW5lcmF0aW9uIGNvdW50IG9mIGFsbAogICog
bm9kZXMgcmVhZCBpbiB0aGUgdHJhbnNhY3Rpb24gdG8gbWF0Y2ggd2l0aCB0
aGUgZ2VuZXJhdGlvbiBjb3VudCBpbiB0aGUKQEAgLTE2MSw3ICsxNjYsNyBA
QCBzdHJ1Y3QgdHJhbnNhY3Rpb24KIH07CiAKIGV4dGVybiBpbnQgcXVvdGFf
bWF4X3RyYW5zYWN0aW9uOwotc3RhdGljIHVpbnQ2NF90IGdlbmVyYXRpb247
Cit1aW50NjRfdCBnZW5lcmF0aW9uOwogCiBzdGF0aWMgdm9pZCBzZXRfdGRi
X2tleShjb25zdCBjaGFyICpuYW1lLCBUREJfREFUQSAqa2V5KQogewpAQCAt
MjM3LDcgKzI0Miw3IEBAIGludCBhY2Nlc3Nfbm9kZShzdHJ1Y3QgY29ubmVj
dGlvbiAqY29ubiwgc3RydWN0IG5vZGUgKm5vZGUsCiAJYm9vbCBpbnRyb2R1
Y2UgPSBmYWxzZTsKIAogCWlmICh0eXBlICE9IE5PREVfQUNDRVNTX1JFQUQp
IHsKLQkJbm9kZS0+Z2VuZXJhdGlvbiA9IGdlbmVyYXRpb24rKzsKKwkJbm9k
ZS0+Z2VuZXJhdGlvbiA9ICsrZ2VuZXJhdGlvbjsKIAkJaWYgKGNvbm4gJiYg
IWNvbm4tPnRyYW5zYWN0aW9uKQogCQkJd3JsX2FwcGx5X2RlYml0X2RpcmVj
dChjb25uKTsKIAl9CkBAIC0zNzQsNyArMzc5LDcgQEAgc3RhdGljIGludCBm
aW5hbGl6ZV90cmFuc2FjdGlvbihzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwK
IAkJCQlpZiAoIWRhdGEuZHB0cikKIAkJCQkJZ290byBlcnI7CiAJCQkJaGRy
ID0gKHZvaWQgKilkYXRhLmRwdHI7Ci0JCQkJaGRyLT5nZW5lcmF0aW9uID0g
Z2VuZXJhdGlvbisrOworCQkJCWhkci0+Z2VuZXJhdGlvbiA9ICsrZ2VuZXJh
dGlvbjsKIAkJCQlyZXQgPSB0ZGJfc3RvcmUodGRiX2N0eCwga2V5LCBkYXRh
LAogCQkJCQkJVERCX1JFUExBQ0UpOwogCQkJCXRhbGxvY19mcmVlKGRhdGEu
ZHB0cik7CkBAIC00NjIsNyArNDY3LDcgQEAgaW50IGRvX3RyYW5zYWN0aW9u
X3N0YXJ0KHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3QgYnVmZmVy
ZWRfZGF0YSAqaW4pCiAJSU5JVF9MSVNUX0hFQUQoJnRyYW5zLT5hY2Nlc3Nl
ZCk7CiAJSU5JVF9MSVNUX0hFQUQoJnRyYW5zLT5jaGFuZ2VkX2RvbWFpbnMp
OwogCXRyYW5zLT5mYWlsID0gZmFsc2U7Ci0JdHJhbnMtPmdlbmVyYXRpb24g
PSBnZW5lcmF0aW9uKys7CisJdHJhbnMtPmdlbmVyYXRpb24gPSArK2dlbmVy
YXRpb247CiAKIAkvKiBQaWNrIGFuIHVudXNlZCB0cmFuc2FjdGlvbiBpZGVu
dGlmaWVyLiAqLwogCWRvIHsKZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3Jl
L3hlbnN0b3JlZF90cmFuc2FjdGlvbi5oIGIvdG9vbHMveGVuc3RvcmUveGVu
c3RvcmVkX3RyYW5zYWN0aW9uLmgKaW5kZXggMzM4NmJhYzU2NS4uNDNhMTYy
YmVhMyAxMDA2NDQKLS0tIGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3Ry
YW5zYWN0aW9uLmgKKysrIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3Ry
YW5zYWN0aW9uLmgKQEAgLTI3LDYgKzI3LDggQEAgZW51bSBub2RlX2FjY2Vz
c190eXBlIHsKIAogc3RydWN0IHRyYW5zYWN0aW9uOwogCitleHRlcm4gdWlu
dDY0X3QgZ2VuZXJhdGlvbjsKKwogaW50IGRvX3RyYW5zYWN0aW9uX3N0YXJ0
KHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3QgYnVmZmVyZWRfZGF0
YSAqbm9kZSk7CiBpbnQgZG9fdHJhbnNhY3Rpb25fZW5kKHN0cnVjdCBjb25u
ZWN0aW9uICpjb25uLCBzdHJ1Y3QgYnVmZmVyZWRfZGF0YSAqaW4pOwogCmRp
ZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94c19saWIuYyBiL3Rvb2xzL3hl
bnN0b3JlL3hzX2xpYi5jCmluZGV4IDNlNDNmODgwOWQuLmQ0MDdkNTcxM2Eg
MTAwNjQ0Ci0tLSBhL3Rvb2xzL3hlbnN0b3JlL3hzX2xpYi5jCisrKyBiL3Rv
b2xzL3hlbnN0b3JlL3hzX2xpYi5jCkBAIC0xNTIsNyArMTUyLDcgQEAgYm9v
bCB4c19zdHJpbmdzX3RvX3Blcm1zKHN0cnVjdCB4c19wZXJtaXNzaW9ucyAq
cGVybXMsIHVuc2lnbmVkIGludCBudW0sCiBib29sIHhzX3Blcm1fdG9fc3Ry
aW5nKGNvbnN0IHN0cnVjdCB4c19wZXJtaXNzaW9ucyAqcGVybSwKICAgICAg
ICAgICAgICAgICAgICAgICAgY2hhciAqYnVmZmVyLCBzaXplX3QgYnVmX2xl
bikKIHsKLQlzd2l0Y2ggKChpbnQpcGVybS0+cGVybXMpIHsKKwlzd2l0Y2gg
KChpbnQpcGVybS0+cGVybXMgJiB+WFNfUEVSTV9JR05PUkUpIHsKIAljYXNl
IFhTX1BFUk1fV1JJVEU6CiAJCSpidWZmZXIgPSAndyc7CiAJCWJyZWFrOwo=

--=separator
Content-Type: application/octet-stream; name="xsa322-c.patch"
Content-Disposition: attachment; filename="xsa322-c.patch"
Content-Transfer-Encoding: base64

RnJvbTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpTdWJqZWN0
OiB0b29scy94ZW5zdG9yZTogcmV2b2tlIGFjY2VzcyByaWdodHMgZm9yIHJl
bW92ZWQgZG9tYWlucwoKQWNjZXNzIHJpZ2h0cyBvZiBYZW5zdG9yZSBub2Rl
cyBhcmUgcGVyIGRvbWlkLiBVbmZvcnR1bmF0ZWx5IGV4aXN0aW5nCmdyYW50
ZWQgYWNjZXNzIHJpZ2h0cyBhcmUgbm90IHJlbW92ZWQgd2hlbiBhIGRvbWFp
biBpcyBiZWluZyBkZXN0cm95ZWQuClRoaXMgbWVhbnMgdGhhdCBhIG5ldyBk
b21haW4gY3JlYXRlZCB3aXRoIHRoZSBzYW1lIGRvbWlkIHdpbGwgaW5oZXJp
dAp0aGUgYWNjZXNzIHJpZ2h0cyB0byBYZW5zdG9yZSBub2RlcyBmcm9tIHRo
ZSBwcmV2aW91cyBkb21haW4ocykgd2l0aAp0aGUgc2FtZSBkb21pZC4KClRo
aXMgY2FuIGJlIGF2b2lkZWQgYnkgYWRkaW5nIGEgZ2VuZXJhdGlvbiBjb3Vu
dGVyIHRvIGVhY2ggZG9tYWluLgpUaGUgZ2VuZXJhdGlvbiBjb3VudGVyIG9m
IHRoZSBkb21haW4gaXMgc2V0IHRvIHRoZSBnbG9iYWwgZ2VuZXJhdGlvbgpj
b3VudGVyIHdoZW4gYSBkb21haW4gc3RydWN0dXJlIGlzIGJlaW5nIGFsbG9j
YXRlZC4gV2hlbiByZWFkaW5nIG9yCndyaXRpbmcgYSBub2RlIGFsbCBwZXJt
aXNzaW9ucyBvZiBkb21haW5zIHdoaWNoIGFyZSB5b3VuZ2VyIHRoYW4gdGhl
Cm5vZGUgaXRzZWxmIGFyZSBkcm9wcGVkLiBUaGlzIGlzIGRvbmUgYnkgZmxh
Z2dpbmcgdGhlIHJlbGF0ZWQgZW50cnkKYXMgaW52YWxpZCBpbiBvcmRlciB0
byBhdm9pZCBtb2RpZnlpbmcgcGVybWlzc2lvbnMgaW4gYSB3YXkgdGhlIHVz
ZXIKY291bGQgZGV0ZWN0LgoKQSBzcGVjaWFsIGNhc2UgaGFzIHRvIGJlIGNv
bnNpZGVyZWQ6IGZvciBhIG5ldyBkb21haW4gdGhlIGZpcnN0ClhlbnN0b3Jl
IGVudHJpZXMgYXJlIGFscmVhZHkgd3JpdHRlbiBiZWZvcmUgdGhlIGRvbWFp
biBpcyBvZmZpY2lhbGx5CmludHJvZHVjZWQgaW4gWGVuc3RvcmUuIEluIG9y
ZGVyIG5vdCB0byBkcm9wIHRoZSBwZXJtaXNzaW9ucyBmb3IgdGhlCm5ldyBk
b21haW4gYSBkb21haW4gc3RydWN0IGlzIGFsbG9jYXRlZCBldmVuIGJlZm9y
ZSBpbnRyb2R1Y3Rpb24gaWYKdGhlIGh5cGVydmlzb3IgaXMgYXdhcmUgb2Yg
dGhlIGRvbWFpbi4gVGhpcyByZXF1aXJlcyBhZGRpbmcgYW5vdGhlcgpib29s
ICJpbnRyb2R1Y2VkIiB0byBzdHJ1Y3QgZG9tYWluIGluIHhlbnN0b3JlZC4g
SW4gb3JkZXIgdG8gYXZvaWQKYWRkaXRpb25hbCBwYWRkaW5nIGhvbGVzIGNv
bnZlcnQgdGhlIHNodXRkb3duIGZsYWcgdG8gYm9vbCwgdG9vLgoKQXMgdmVy
aWZ5aW5nIHBlcm1pc3Npb25zIGhhcyBpdHMgcHJpY2UgcmVnYXJkaW5nIHJ1
bnRpbWUgYWRkIGEgbmV3CnF1b3RhIGZvciBsaW1pdGluZyB0aGUgbnVtYmVy
IG9mIHBlcm1pc3Npb25zIGFuIHVucHJpdmlsZWdlZCBkb21haW4KY2FuIHNl
dCBmb3IgYSBub2RlLiBUaGUgZGVmYXVsdCBmb3IgdGhhdCBuZXcgcXVvdGEg
aXMgNS4KClRoaXMgaXMgcGFydCBvZiBYU0EtMzIyLgoKU2lnbmVkLW9mZi1i
eTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpSZXZpZXdlZC1i
eTogUGF1bCBEdXJyYW50IDxwYXVsQHhlbi5vcmc+CkFja2VkLWJ5OiBKdWxp
ZW4gR3JhbGwgPGp1bGllbkBhbWF6b24uY29tPgoKZGlmZiAtLWdpdCBhL3Rv
b2xzL2luY2x1ZGUveGVuc3RvcmVfbGliLmggYi90b29scy9pbmNsdWRlL3hl
bnN0b3JlX2xpYi5oCmluZGV4IDBmZmJhZTllYjUuLjRjOWI2ZDE2ODUgMTAw
NjQ0Ci0tLSBhL3Rvb2xzL2luY2x1ZGUveGVuc3RvcmVfbGliLmgKKysrIGIv
dG9vbHMvaW5jbHVkZS94ZW5zdG9yZV9saWIuaApAQCAtMzQsNiArMzQsNyBA
QCBlbnVtIHhzX3Blcm1fdHlwZSB7CiAJLyogSW50ZXJuYWwgdXNlLiAqLwog
CVhTX1BFUk1fRU5PRU5UX09LID0gNCwKIAlYU19QRVJNX09XTkVSID0gOCwK
KwlYU19QRVJNX0lHTk9SRSA9IDE2LAogfTsKIAogc3RydWN0IHhzX3Blcm1p
c3Npb25zCmRpZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRf
Y29yZS5jIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYwppbmRl
eCBhZDE5MDNjNTU1Li5jYmVmZTRjODE5IDEwMDY0NAotLS0gYS90b29scy94
ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jCisrKyBiL3Rvb2xzL3hlbnN0b3Jl
L3hlbnN0b3JlZF9jb3JlLmMKQEAgLTEwMSw2ICsxMDEsNyBAQCBpbnQgcXVv
dGFfbmJfZW50cnlfcGVyX2RvbWFpbiA9IDEwMDA7CiBpbnQgcXVvdGFfbmJf
d2F0Y2hfcGVyX2RvbWFpbiA9IDEyODsKIGludCBxdW90YV9tYXhfZW50cnlf
c2l6ZSA9IDIwNDg7IC8qIDJLICovCiBpbnQgcXVvdGFfbWF4X3RyYW5zYWN0
aW9uID0gMTA7CitpbnQgcXVvdGFfbmJfcGVybXNfcGVyX25vZGUgPSA1Owog
CiB2b2lkIHRyYWNlKGNvbnN0IGNoYXIgKmZtdCwgLi4uKQogewpAQCAtNDAz
LDggKzQwNCwxMyBAQCBzdHJ1Y3Qgbm9kZSAqcmVhZF9ub2RlKHN0cnVjdCBj
b25uZWN0aW9uICpjb25uLCBjb25zdCB2b2lkICpjdHgsCiAKIAkvKiBQZXJt
aXNzaW9ucyBhcmUgc3RydWN0IHhzX3Blcm1pc3Npb25zLiAqLwogCW5vZGUt
PnBlcm1zLnAgPSBoZHItPnBlcm1zOworCWlmIChkb21haW5fYWRqdXN0X25v
ZGVfcGVybXMobm9kZSkpIHsKKwkJdGFsbG9jX2ZyZWUobm9kZSk7CisJCXJl
dHVybiBOVUxMOworCX0KKwogCS8qIERhdGEgaXMgYmluYXJ5IGJsb2IgKHVz
dWFsbHkgYXNjaWksIG5vIG51bCkuICovCi0Jbm9kZS0+ZGF0YSA9IG5vZGUt
PnBlcm1zLnAgKyBub2RlLT5wZXJtcy5udW07CisJbm9kZS0+ZGF0YSA9IG5v
ZGUtPnBlcm1zLnAgKyBoZHItPm51bV9wZXJtczsKIAkvKiBDaGlsZHJlbiBp
cyBzdHJpbmdzLCBudWwgc2VwYXJhdGVkLiAqLwogCW5vZGUtPmNoaWxkcmVu
ID0gbm9kZS0+ZGF0YSArIG5vZGUtPmRhdGFsZW47CiAKQEAgLTQyMCw2ICs0
MjYsOSBAQCBpbnQgd3JpdGVfbm9kZV9yYXcoc3RydWN0IGNvbm5lY3Rpb24g
KmNvbm4sIFREQl9EQVRBICprZXksIHN0cnVjdCBub2RlICpub2RlLAogCXZv
aWQgKnA7CiAJc3RydWN0IHhzX3RkYl9yZWNvcmRfaGRyICpoZHI7CiAKKwlp
ZiAoZG9tYWluX2FkanVzdF9ub2RlX3Blcm1zKG5vZGUpKQorCQlyZXR1cm4g
ZXJybm87CisKIAlkYXRhLmRzaXplID0gc2l6ZW9mKCpoZHIpCiAJCSsgbm9k
ZS0+cGVybXMubnVtICogc2l6ZW9mKG5vZGUtPnBlcm1zLnBbMF0pCiAJCSsg
bm9kZS0+ZGF0YWxlbiArIG5vZGUtPmNoaWxkbGVuOwpAQCAtNDc2LDggKzQ4
NSw5IEBAIGVudW0geHNfcGVybV90eXBlIHBlcm1fZm9yX2Nvbm4oc3RydWN0
IGNvbm5lY3Rpb24gKmNvbm4sCiAJCXJldHVybiAoWFNfUEVSTV9SRUFEfFhT
X1BFUk1fV1JJVEV8WFNfUEVSTV9PV05FUikgJiBtYXNrOwogCiAJZm9yIChp
ID0gMTsgaSA8IHBlcm1zLT5udW07IGkrKykKLQkJaWYgKHBlcm1zLT5wW2ld
LmlkID09IGNvbm4tPmlkCi0gICAgICAgICAgICAgICAgICAgICAgICB8fCAo
Y29ubi0+dGFyZ2V0ICYmIHBlcm1zLT5wW2ldLmlkID09IGNvbm4tPnRhcmdl
dC0+aWQpKQorCQlpZiAoIShwZXJtcy0+cFtpXS5wZXJtcyAmIFhTX1BFUk1f
SUdOT1JFKSAmJgorCQkgICAgKHBlcm1zLT5wW2ldLmlkID09IGNvbm4tPmlk
IHx8CisJCSAgICAgKGNvbm4tPnRhcmdldCAmJiBwZXJtcy0+cFtpXS5pZCA9
PSBjb25uLT50YXJnZXQtPmlkKSkpCiAJCQlyZXR1cm4gcGVybXMtPnBbaV0u
cGVybXMgJiBtYXNrOwogCiAJcmV0dXJuIHBlcm1zLT5wWzBdLnBlcm1zICYg
bWFzazsKQEAgLTEyMzksOCArMTI0OSwxMiBAQCBzdGF0aWMgaW50IGRvX3Nl
dF9wZXJtcyhzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgc3RydWN0IGJ1ZmZl
cmVkX2RhdGEgKmluKQogCWlmIChwZXJtcy5udW0gPCAyKQogCQlyZXR1cm4g
RUlOVkFMOwogCi0JcGVybXN0ciA9IGluLT5idWZmZXIgKyBzdHJsZW4oaW4t
PmJ1ZmZlcikgKyAxOwogCXBlcm1zLm51bS0tOworCWlmIChkb21haW5faXNf
dW5wcml2aWxlZ2VkKGNvbm4pICYmCisJICAgIHBlcm1zLm51bSA+IHF1b3Rh
X25iX3Blcm1zX3Blcl9ub2RlKQorCQlyZXR1cm4gRU5PU1BDOworCisJcGVy
bXN0ciA9IGluLT5idWZmZXIgKyBzdHJsZW4oaW4tPmJ1ZmZlcikgKyAxOwog
CiAJcGVybXMucCA9IHRhbGxvY19hcnJheShpbiwgc3RydWN0IHhzX3Blcm1p
c3Npb25zLCBwZXJtcy5udW0pOwogCWlmICghcGVybXMucCkKQEAgLTE4Nzks
NiArMTg5Myw3IEBAIHN0YXRpYyB2b2lkIHVzYWdlKHZvaWQpCiAiICAtUywg
LS1lbnRyeS1zaXplIDxzaXplPiBsaW1pdCB0aGUgc2l6ZSBvZiBlbnRyeSBw
ZXIgZG9tYWluLCBhbmRcbiIKICIgIC1XLCAtLXdhdGNoLW5iIDxuYj4gICAg
IGxpbWl0IHRoZSBudW1iZXIgb2Ygd2F0Y2hlcyBwZXIgZG9tYWluLFxuIgog
IiAgLXQsIC0tdHJhbnNhY3Rpb24gPG5iPiAgbGltaXQgdGhlIG51bWJlciBv
ZiB0cmFuc2FjdGlvbiBhbGxvd2VkIHBlciBkb21haW4sXG4iCisiICAtQSwg
LS1wZXJtLW5iIDxuYj4gICAgICBsaW1pdCB0aGUgbnVtYmVyIG9mIHBlcm1p
c3Npb25zIHBlciBub2RlLFxuIgogIiAgLVIsIC0tbm8tcmVjb3ZlcnkgICAg
ICAgdG8gcmVxdWVzdCB0aGF0IG5vIHJlY292ZXJ5IHNob3VsZCBiZSBhdHRl
bXB0ZWQgd2hlblxuIgogIiAgICAgICAgICAgICAgICAgICAgICAgICAgdGhl
IHN0b3JlIGlzIGNvcnJ1cHRlZCAoZGVidWcgb25seSksXG4iCiAiICAtSSwg
LS1pbnRlcm5hbC1kYiAgICAgICBzdG9yZSBkYXRhYmFzZSBpbiBtZW1vcnks
IG5vdCBvbiBkaXNrXG4iCkBAIC0xODk5LDYgKzE5MTQsNyBAQCBzdGF0aWMg
c3RydWN0IG9wdGlvbiBvcHRpb25zW10gPSB7CiAJeyAiZW50cnktc2l6ZSIs
IDEsIE5VTEwsICdTJyB9LAogCXsgInRyYWNlLWZpbGUiLCAxLCBOVUxMLCAn
VCcgfSwKIAl7ICJ0cmFuc2FjdGlvbiIsIDEsIE5VTEwsICd0JyB9LAorCXsg
InBlcm0tbmIiLCAxLCBOVUxMLCAnQScgfSwKIAl7ICJuby1yZWNvdmVyeSIs
IDAsIE5VTEwsICdSJyB9LAogCXsgImludGVybmFsLWRiIiwgMCwgTlVMTCwg
J0knIH0sCiAJeyAidmVyYm9zZSIsIDAsIE5VTEwsICdWJyB9LApAQCAtMTky
MSw3ICsxOTM3LDcgQEAgaW50IG1haW4oaW50IGFyZ2MsIGNoYXIgKmFyZ3Zb
XSkKIAlpbnQgdGltZW91dDsKIAogCi0Jd2hpbGUgKChvcHQgPSBnZXRvcHRf
bG9uZyhhcmdjLCBhcmd2LCAiREU6RjpITlBTOnQ6VDpSVlc6Iiwgb3B0aW9u
cywKKwl3aGlsZSAoKG9wdCA9IGdldG9wdF9sb25nKGFyZ2MsIGFyZ3YsICJE
RTpGOkhOUFM6dDpBOlQ6UlZXOiIsIG9wdGlvbnMsCiAJCQkJICBOVUxMKSkg
IT0gLTEpIHsKIAkJc3dpdGNoIChvcHQpIHsKIAkJY2FzZSAnRCc6CkBAIC0x
OTYzLDYgKzE5NzksOSBAQCBpbnQgbWFpbihpbnQgYXJnYywgY2hhciAqYXJn
dltdKQogCQljYXNlICdXJzoKIAkJCXF1b3RhX25iX3dhdGNoX3Blcl9kb21h
aW4gPSBzdHJ0b2wob3B0YXJnLCBOVUxMLCAxMCk7CiAJCQlicmVhazsKKwkJ
Y2FzZSAnQSc6CisJCQlxdW90YV9uYl9wZXJtc19wZXJfbm9kZSA9IHN0cnRv
bChvcHRhcmcsIE5VTEwsIDEwKTsKKwkJCWJyZWFrOwogCQljYXNlICdlJzoK
IAkJCWRvbTBfZXZlbnQgPSBzdHJ0b2wob3B0YXJnLCBOVUxMLCAxMCk7CiAJ
CQlicmVhazsKZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3Jl
ZF9kb21haW4uYyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9kb21haW4u
YwppbmRleCBjZjIzOWMwNDRiLi43MTY5ZGE5ODUxIDEwMDY0NAotLS0gYS90
b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfZG9tYWluLmMKKysrIGIvdG9vbHMv
eGVuc3RvcmUveGVuc3RvcmVkX2RvbWFpbi5jCkBAIC02Nyw4ICs2NywxNCBA
QCBzdHJ1Y3QgZG9tYWluCiAJLyogVGhlIGNvbm5lY3Rpb24gYXNzb2NpYXRl
ZCB3aXRoIHRoaXMuICovCiAJc3RydWN0IGNvbm5lY3Rpb24gKmNvbm47CiAK
KwkvKiBHZW5lcmF0aW9uIGNvdW50IGF0IGRvbWFpbiBpbnRyb2R1Y3Rpb24g
dGltZS4gKi8KKwl1aW50NjRfdCBnZW5lcmF0aW9uOworCiAJLyogSGF2ZSB3
ZSBub3RpY2VkIHRoYXQgdGhpcyBkb21haW4gaXMgc2h1dGRvd24/ICovCi0J
aW50IHNodXRkb3duOworCWJvb2wgc2h1dGRvd247CisKKwkvKiBIYXMgZG9t
YWluIGJlZW4gb2ZmaWNpYWxseSBpbnRyb2R1Y2VkPyAqLworCWJvb2wgaW50
cm9kdWNlZDsKIAogCS8qIG51bWJlciBvZiBlbnRyeSBmcm9tIHRoaXMgZG9t
YWluIGluIHRoZSBzdG9yZSAqLwogCWludCBuYmVudHJ5OwpAQCAtMTg4LDYg
KzE5NCw5IEBAIHN0YXRpYyBpbnQgZGVzdHJveV9kb21haW4odm9pZCAqX2Rv
bWFpbikKIAogCWxpc3RfZGVsKCZkb21haW4tPmxpc3QpOwogCisJaWYgKCFk
b21haW4tPmludHJvZHVjZWQpCisJCXJldHVybiAwOworCiAJaWYgKGRvbWFp
bi0+cG9ydCkgewogCQlpZiAoeGVuZXZ0Y2huX3VuYmluZCh4Y2VfaGFuZGxl
LCBkb21haW4tPnBvcnQpID09IC0xKQogCQkJZXByaW50ZigiPiBVbmJpbmRp
bmcgcG9ydCAlaSBmYWlsZWQhXG4iLCBkb21haW4tPnBvcnQpOwpAQCAtMjA5
LDIxICsyMTgsMzQgQEAgc3RhdGljIGludCBkZXN0cm95X2RvbWFpbih2b2lk
ICpfZG9tYWluKQogCXJldHVybiAwOwogfQogCitzdGF0aWMgYm9vbCBnZXRf
ZG9tYWluX2luZm8odW5zaWduZWQgaW50IGRvbWlkLCB4Y19kb21pbmZvX3Qg
KmRvbWluZm8pCit7CisJcmV0dXJuIHhjX2RvbWFpbl9nZXRpbmZvKCp4Y19o
YW5kbGUsIGRvbWlkLCAxLCBkb21pbmZvKSA9PSAxICYmCisJICAgICAgIGRv
bWluZm8tPmRvbWlkID09IGRvbWlkOworfQorCiBzdGF0aWMgdm9pZCBkb21h
aW5fY2xlYW51cCh2b2lkKQogewogCXhjX2RvbWluZm9fdCBkb21pbmZvOwog
CXN0cnVjdCBkb21haW4gKmRvbWFpbjsKIAlzdHJ1Y3QgY29ubmVjdGlvbiAq
Y29ubjsKIAlpbnQgbm90aWZ5ID0gMDsKKwlib29sIGRvbV92YWxpZDsKIAog
IGFnYWluOgogCWxpc3RfZm9yX2VhY2hfZW50cnkoZG9tYWluLCAmZG9tYWlu
cywgbGlzdCkgewotCQlpZiAoeGNfZG9tYWluX2dldGluZm8oKnhjX2hhbmRs
ZSwgZG9tYWluLT5kb21pZCwgMSwKLQkJCQkgICAgICAmZG9taW5mbykgPT0g
MSAmJgotCQkgICAgZG9taW5mby5kb21pZCA9PSBkb21haW4tPmRvbWlkKSB7
CisJCWRvbV92YWxpZCA9IGdldF9kb21haW5faW5mbyhkb21haW4tPmRvbWlk
LCAmZG9taW5mbyk7CisJCWlmICghZG9tYWluLT5pbnRyb2R1Y2VkKSB7CisJ
CQlpZiAoIWRvbV92YWxpZCkgeworCQkJCXRhbGxvY19mcmVlKGRvbWFpbik7
CisJCQkJZ290byBhZ2FpbjsKKwkJCX0KKwkJCWNvbnRpbnVlOworCQl9CisJ
CWlmIChkb21fdmFsaWQpIHsKIAkJCWlmICgoZG9taW5mby5jcmFzaGVkIHx8
IGRvbWluZm8uc2h1dGRvd24pCiAJCQkgICAgJiYgIWRvbWFpbi0+c2h1dGRv
d24pIHsKLQkJCQlkb21haW4tPnNodXRkb3duID0gMTsKKwkJCQlkb21haW4t
PnNodXRkb3duID0gdHJ1ZTsKIAkJCQlub3RpZnkgPSAxOwogCQkJfQogCQkJ
aWYgKCFkb21pbmZvLmR5aW5nKQpAQCAtMjg5LDU4ICszMTEsODQgQEAgc3Rh
dGljIGNoYXIgKnRhbGxvY19kb21haW5fcGF0aCh2b2lkICpjb250ZXh0LCB1
bnNpZ25lZCBpbnQgZG9taWQpCiAJcmV0dXJuIHRhbGxvY19hc3ByaW50Zihj
b250ZXh0LCAiL2xvY2FsL2RvbWFpbi8ldSIsIGRvbWlkKTsKIH0KIAotc3Rh
dGljIHN0cnVjdCBkb21haW4gKm5ld19kb21haW4odm9pZCAqY29udGV4dCwg
dW5zaWduZWQgaW50IGRvbWlkLAotCQkJCSBpbnQgcG9ydCkKK3N0YXRpYyBz
dHJ1Y3QgZG9tYWluICpmaW5kX2RvbWFpbl9zdHJ1Y3QodW5zaWduZWQgaW50
IGRvbWlkKQoreworCXN0cnVjdCBkb21haW4gKmk7CisKKwlsaXN0X2Zvcl9l
YWNoX2VudHJ5KGksICZkb21haW5zLCBsaXN0KSB7CisJCWlmIChpLT5kb21p
ZCA9PSBkb21pZCkKKwkJCXJldHVybiBpOworCX0KKwlyZXR1cm4gTlVMTDsK
K30KKworc3RhdGljIHN0cnVjdCBkb21haW4gKmFsbG9jX2RvbWFpbih2b2lk
ICpjb250ZXh0LCB1bnNpZ25lZCBpbnQgZG9taWQpCiB7CiAJc3RydWN0IGRv
bWFpbiAqZG9tYWluOwotCWludCByYzsKIAogCWRvbWFpbiA9IHRhbGxvYyhj
b250ZXh0LCBzdHJ1Y3QgZG9tYWluKTsKLQlpZiAoIWRvbWFpbikKKwlpZiAo
IWRvbWFpbikgeworCQllcnJubyA9IEVOT01FTTsKIAkJcmV0dXJuIE5VTEw7
CisJfQogCi0JZG9tYWluLT5wb3J0ID0gMDsKLQlkb21haW4tPnNodXRkb3du
ID0gMDsKIAlkb21haW4tPmRvbWlkID0gZG9taWQ7Ci0JZG9tYWluLT5wYXRo
ID0gdGFsbG9jX2RvbWFpbl9wYXRoKGRvbWFpbiwgZG9taWQpOwotCWlmICgh
ZG9tYWluLT5wYXRoKQotCQlyZXR1cm4gTlVMTDsKKwlkb21haW4tPmdlbmVy
YXRpb24gPSBnZW5lcmF0aW9uOworCWRvbWFpbi0+aW50cm9kdWNlZCA9IGZh
bHNlOwogCi0Jd3JsX2RvbWFpbl9uZXcoZG9tYWluKTsKKwl0YWxsb2Nfc2V0
X2Rlc3RydWN0b3IoZG9tYWluLCBkZXN0cm95X2RvbWFpbik7CiAKIAlsaXN0
X2FkZCgmZG9tYWluLT5saXN0LCAmZG9tYWlucyk7Ci0JdGFsbG9jX3NldF9k
ZXN0cnVjdG9yKGRvbWFpbiwgZGVzdHJveV9kb21haW4pOworCisJcmV0dXJu
IGRvbWFpbjsKK30KKworc3RhdGljIGludCBuZXdfZG9tYWluKHN0cnVjdCBk
b21haW4gKmRvbWFpbiwgaW50IHBvcnQpCit7CisJaW50IHJjOworCisJZG9t
YWluLT5wb3J0ID0gMDsKKwlkb21haW4tPnNodXRkb3duID0gZmFsc2U7CisJ
ZG9tYWluLT5wYXRoID0gdGFsbG9jX2RvbWFpbl9wYXRoKGRvbWFpbiwgZG9t
YWluLT5kb21pZCk7CisJaWYgKCFkb21haW4tPnBhdGgpIHsKKwkJZXJybm8g
PSBFTk9NRU07CisJCXJldHVybiBlcnJubzsKKwl9CisKKwl3cmxfZG9tYWlu
X25ldyhkb21haW4pOwogCiAJLyogVGVsbCBrZXJuZWwgd2UncmUgaW50ZXJl
c3RlZCBpbiB0aGlzIGV2ZW50LiAqLwotCXJjID0geGVuZXZ0Y2huX2JpbmRf
aW50ZXJkb21haW4oeGNlX2hhbmRsZSwgZG9taWQsIHBvcnQpOworCXJjID0g
eGVuZXZ0Y2huX2JpbmRfaW50ZXJkb21haW4oeGNlX2hhbmRsZSwgZG9tYWlu
LT5kb21pZCwgcG9ydCk7CiAJaWYgKHJjID09IC0xKQotCSAgICByZXR1cm4g
TlVMTDsKKwkJcmV0dXJuIGVycm5vOwogCWRvbWFpbi0+cG9ydCA9IHJjOwog
CisJZG9tYWluLT5pbnRyb2R1Y2VkID0gdHJ1ZTsKKwogCWRvbWFpbi0+Y29u
biA9IG5ld19jb25uZWN0aW9uKHdyaXRlY2huLCByZWFkY2huKTsKLQlpZiAo
IWRvbWFpbi0+Y29ubikKLQkJcmV0dXJuIE5VTEw7CisJaWYgKCFkb21haW4t
PmNvbm4pICB7CisJCWVycm5vID0gRU5PTUVNOworCQlyZXR1cm4gZXJybm87
CisJfQogCiAJZG9tYWluLT5jb25uLT5kb21haW4gPSBkb21haW47Ci0JZG9t
YWluLT5jb25uLT5pZCA9IGRvbWlkOworCWRvbWFpbi0+Y29ubi0+aWQgPSBk
b21haW4tPmRvbWlkOwogCiAJZG9tYWluLT5yZW1vdGVfcG9ydCA9IHBvcnQ7
CiAJZG9tYWluLT5uYmVudHJ5ID0gMDsKIAlkb21haW4tPm5id2F0Y2ggPSAw
OwogCi0JcmV0dXJuIGRvbWFpbjsKKwlyZXR1cm4gMDsKIH0KIAogCiBzdGF0
aWMgc3RydWN0IGRvbWFpbiAqZmluZF9kb21haW5fYnlfZG9taWQodW5zaWdu
ZWQgaW50IGRvbWlkKQogewotCXN0cnVjdCBkb21haW4gKmk7CisJc3RydWN0
IGRvbWFpbiAqZDsKIAotCWxpc3RfZm9yX2VhY2hfZW50cnkoaSwgJmRvbWFp
bnMsIGxpc3QpIHsKLQkJaWYgKGktPmRvbWlkID09IGRvbWlkKQotCQkJcmV0
dXJuIGk7Ci0JfQotCXJldHVybiBOVUxMOworCWQgPSBmaW5kX2RvbWFpbl9z
dHJ1Y3QoZG9taWQpOworCisJcmV0dXJuIChkICYmIGQtPmludHJvZHVjZWQp
ID8gZCA6IE5VTEw7CiB9CiAKIHN0YXRpYyB2b2lkIGRvbWFpbl9jb25uX3Jl
c2V0KHN0cnVjdCBkb21haW4gKmRvbWFpbikKQEAgLTM4MywxNSArNDMxLDIx
IEBAIGludCBkb19pbnRyb2R1Y2Uoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4s
IHN0cnVjdCBidWZmZXJlZF9kYXRhICppbikKIAlpZiAocG9ydCA8PSAwKQog
CQlyZXR1cm4gRUlOVkFMOwogCi0JZG9tYWluID0gZmluZF9kb21haW5fYnlf
ZG9taWQoZG9taWQpOworCWRvbWFpbiA9IGZpbmRfZG9tYWluX3N0cnVjdChk
b21pZCk7CiAKIAlpZiAoZG9tYWluID09IE5VTEwpIHsKKwkJLyogSGFuZyBk
b21haW4gb2ZmICJpbiIgdW50aWwgd2UncmUgZmluaXNoZWQuICovCisJCWRv
bWFpbiA9IGFsbG9jX2RvbWFpbihpbiwgZG9taWQpOworCQlpZiAoZG9tYWlu
ID09IE5VTEwpCisJCQlyZXR1cm4gRU5PTUVNOworCX0KKworCWlmICghZG9t
YWluLT5pbnRyb2R1Y2VkKSB7CiAJCWludGVyZmFjZSA9IG1hcF9pbnRlcmZh
Y2UoZG9taWQpOwogCQlpZiAoIWludGVyZmFjZSkKIAkJCXJldHVybiBlcnJu
bzsKIAkJLyogSGFuZyBkb21haW4gb2ZmICJpbiIgdW50aWwgd2UncmUgZmlu
aXNoZWQuICovCi0JCWRvbWFpbiA9IG5ld19kb21haW4oaW4sIGRvbWlkLCBw
b3J0KTsKLQkJaWYgKCFkb21haW4pIHsKKwkJaWYgKG5ld19kb21haW4oZG9t
YWluLCBwb3J0KSkgewogCQkJcmMgPSBlcnJubzsKIAkJCXVubWFwX2ludGVy
ZmFjZShpbnRlcmZhY2UpOwogCQkJcmV0dXJuIHJjOwpAQCAtNDk3LDggKzU1
MSw4IEBAIGludCBkb19yZXN1bWUoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4s
IHN0cnVjdCBidWZmZXJlZF9kYXRhICppbikKIAlpZiAoSVNfRVJSKGRvbWFp
bikpCiAJCXJldHVybiAtUFRSX0VSUihkb21haW4pOwogCi0JZG9tYWluLT5z
aHV0ZG93biA9IDA7Ci0JCisJZG9tYWluLT5zaHV0ZG93biA9IGZhbHNlOwor
CiAJc2VuZF9hY2soY29ubiwgWFNfUkVTVU1FKTsKIAogCXJldHVybiAwOwpA
QCAtNjQxLDggKzY5NSwxMCBAQCBzdGF0aWMgaW50IGRvbTBfaW5pdCh2b2lk
KQogCWlmIChwb3J0ID09IC0xKQogCQlyZXR1cm4gLTE7CiAKLQlkb20wID0g
bmV3X2RvbWFpbihOVUxMLCB4ZW5idXNfbWFzdGVyX2RvbWlkKCksIHBvcnQp
OwotCWlmIChkb20wID09IE5VTEwpCisJZG9tMCA9IGFsbG9jX2RvbWFpbihO
VUxMLCB4ZW5idXNfbWFzdGVyX2RvbWlkKCkpOworCWlmICghZG9tMCkKKwkJ
cmV0dXJuIC0xOworCWlmIChuZXdfZG9tYWluKGRvbTAsIHBvcnQpKQogCQly
ZXR1cm4gLTE7CiAKIAlkb20wLT5pbnRlcmZhY2UgPSB4ZW5idXNfbWFwKCk7
CkBAIC03MjksNiArNzg1LDY2IEBAIHZvaWQgZG9tYWluX2VudHJ5X2luYyhz
dHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgc3RydWN0IG5vZGUgKm5vZGUpCiAJ
fQogfQogCisvKgorICogQ2hlY2sgd2hldGhlciBhIGRvbWFpbiB3YXMgY3Jl
YXRlZCBiZWZvcmUgb3IgYWZ0ZXIgYSBzcGVjaWZpYyBnZW5lcmF0aW9uCisg
KiBjb3VudCAodXNlZCBmb3IgdGVzdGluZyB3aGV0aGVyIGEgbm9kZSBwZXJt
aXNzaW9uIGlzIG9sZGVyIHRoYW4gYSBkb21haW4pLgorICoKKyAqIFJldHVy
biB2YWx1ZXM6CisgKiAtMTogZXJyb3IKKyAqICAwOiBkb21haW4gaGFzIGhp
Z2hlciBnZW5lcmF0aW9uIGNvdW50IChpdCBpcyB5b3VuZ2VyIHRoYW4gYSBu
b2RlIHdpdGggdGhlCisgKiAgICAgZ2l2ZW4gY291bnQpLCBvciBkb21haW4g
aXNuJ3QgZXhpc3RpbmcgYW55IGxvbmdlcgorICogIDE6IGRvbWFpbiBpcyBv
bGRlciB0aGFuIHRoZSBub2RlCisgKi8KK3N0YXRpYyBpbnQgY2hrX2RvbWFp
bl9nZW5lcmF0aW9uKHVuc2lnbmVkIGludCBkb21pZCwgdWludDY0X3QgZ2Vu
KQoreworCXN0cnVjdCBkb21haW4gKmQ7CisJeGNfZG9taW5mb190IGRvbWlu
Zm87CisKKwlpZiAoIXhjX2hhbmRsZSAmJiBkb21pZCA9PSAwKQorCQlyZXR1
cm4gMTsKKworCWQgPSBmaW5kX2RvbWFpbl9zdHJ1Y3QoZG9taWQpOworCWlm
IChkKQorCQlyZXR1cm4gKGQtPmdlbmVyYXRpb24gPD0gZ2VuKSA/IDEgOiAw
OworCisJaWYgKCFnZXRfZG9tYWluX2luZm8oZG9taWQsICZkb21pbmZvKSkK
KwkJcmV0dXJuIDA7CisKKwlkID0gYWxsb2NfZG9tYWluKE5VTEwsIGRvbWlk
KTsKKwlyZXR1cm4gZCA/IDEgOiAtMTsKK30KKworLyoKKyAqIFJlbW92ZSBw
ZXJtaXNzaW9ucyBmb3Igbm8gbG9uZ2VyIGV4aXN0aW5nIGRvbWFpbnMgaW4g
b3JkZXIgdG8gYXZvaWQgYSBuZXcKKyAqIGRvbWFpbiB3aXRoIHRoZSBzYW1l
IGRvbWlkIGluaGVyaXRpbmcgdGhlIHBlcm1pc3Npb25zLgorICovCitpbnQg
ZG9tYWluX2FkanVzdF9ub2RlX3Blcm1zKHN0cnVjdCBub2RlICpub2RlKQor
eworCXVuc2lnbmVkIGludCBpOworCWludCByZXQ7CisKKwlyZXQgPSBjaGtf
ZG9tYWluX2dlbmVyYXRpb24obm9kZS0+cGVybXMucFswXS5pZCwgbm9kZS0+
Z2VuZXJhdGlvbik7CisJaWYgKHJldCA8IDApCisJCXJldHVybiBlcnJubzsK
KworCS8qIElmIHRoZSBvd25lciBkb2Vzbid0IGV4aXN0IGFueSBsb25nZXIg
Z2l2ZSBpdCB0byBwcml2IGRvbWFpbi4gKi8KKwlpZiAoIXJldCkKKwkJbm9k
ZS0+cGVybXMucFswXS5pZCA9IHByaXZfZG9taWQ7CisKKwlmb3IgKGkgPSAx
OyBpIDwgbm9kZS0+cGVybXMubnVtOyBpKyspIHsKKwkJaWYgKG5vZGUtPnBl
cm1zLnBbaV0ucGVybXMgJiBYU19QRVJNX0lHTk9SRSkKKwkJCWNvbnRpbnVl
OworCQlyZXQgPSBjaGtfZG9tYWluX2dlbmVyYXRpb24obm9kZS0+cGVybXMu
cFtpXS5pZCwKKwkJCQkJICAgIG5vZGUtPmdlbmVyYXRpb24pOworCQlpZiAo
cmV0IDwgMCkKKwkJCXJldHVybiBlcnJubzsKKwkJaWYgKCFyZXQpCisJCQlu
b2RlLT5wZXJtcy5wW2ldLnBlcm1zIHw9IFhTX1BFUk1fSUdOT1JFOworCX0K
KworCXJldHVybiAwOworfQorCiB2b2lkIGRvbWFpbl9lbnRyeV9kZWMoc3Ry
dWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBub2RlICpub2RlKQogewog
CXN0cnVjdCBkb21haW4gKmQ7CmRpZmYgLS1naXQgYS90b29scy94ZW5zdG9y
ZS94ZW5zdG9yZWRfZG9tYWluLmggYi90b29scy94ZW5zdG9yZS94ZW5zdG9y
ZWRfZG9tYWluLmgKaW5kZXggMjU5MTgzOTYyYS4uNWUwMDA4NzIwNiAxMDA2
NDQKLS0tIGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2RvbWFpbi5oCisr
KyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9kb21haW4uaApAQCAtNTYs
NiArNTYsOSBAQCBib29sIGRvbWFpbl9jYW5fd3JpdGUoc3RydWN0IGNvbm5l
Y3Rpb24gKmNvbm4pOwogCiBib29sIGRvbWFpbl9pc191bnByaXZpbGVnZWQo
c3RydWN0IGNvbm5lY3Rpb24gKmNvbm4pOwogCisvKiBSZW1vdmUgbm9kZSBw
ZXJtaXNzaW9ucyBmb3Igbm8gbG9uZ2VyIGV4aXN0aW5nIGRvbWFpbnMuICov
CitpbnQgZG9tYWluX2FkanVzdF9ub2RlX3Blcm1zKHN0cnVjdCBub2RlICpu
b2RlKTsKKwogLyogUXVvdGEgbWFuaXB1bGF0aW9uICovCiB2b2lkIGRvbWFp
bl9lbnRyeV9pbmMoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBu
b2RlICopOwogdm9pZCBkb21haW5fZW50cnlfZGVjKHN0cnVjdCBjb25uZWN0
aW9uICpjb25uLCBzdHJ1Y3Qgbm9kZSAqKTsKZGlmZiAtLWdpdCBhL3Rvb2xz
L3hlbnN0b3JlL3hlbnN0b3JlZF90cmFuc2FjdGlvbi5jIGIvdG9vbHMveGVu
c3RvcmUveGVuc3RvcmVkX3RyYW5zYWN0aW9uLmMKaW5kZXggYTdkOGM1ZDQ3
NS4uMjg4MWYzYjJlNCAxMDA2NDQKLS0tIGEvdG9vbHMveGVuc3RvcmUveGVu
c3RvcmVkX3RyYW5zYWN0aW9uLmMKKysrIGIvdG9vbHMveGVuc3RvcmUveGVu
c3RvcmVkX3RyYW5zYWN0aW9uLmMKQEAgLTQ3LDcgKzQ3LDEyIEBACiAgKiB0
cmFuc2FjdGlvbi4KICAqIEVhY2ggdGltZSB0aGUgZ2xvYmFsIGdlbmVyYXRp
b24gY291bnQgaXMgY29waWVkIHRvIGVpdGhlciBhIG5vZGUgb3IgYQogICog
dHJhbnNhY3Rpb24gaXQgaXMgaW5jcmVtZW50ZWQuIFRoaXMgZW5zdXJlcyBh
bGwgbm9kZXMgYW5kL29yIHRyYW5zYWN0aW9ucwotICogYXJlIGhhdmluZyBh
IHVuaXF1ZSBnZW5lcmF0aW9uIGNvdW50LgorICogYXJlIGhhdmluZyBhIHVu
aXF1ZSBnZW5lcmF0aW9uIGNvdW50LiBUaGUgaW5jcmVtZW50IGlzIGRvbmUg
X2JlZm9yZV8gdGhlCisgKiBjb3B5IGFzIHRoYXQgaXMgbmVlZGVkIGZvciBj
aGVja2luZyB3aGV0aGVyIGEgZG9tYWluIHdhcyBjcmVhdGVkIGJlZm9yZQor
ICogb3IgYWZ0ZXIgYSBub2RlIGhhcyBiZWVuIHdyaXR0ZW4gKHRoZSBkb21h
aW4ncyBnZW5lcmF0aW9uIGlzIHNldCB3aXRoIHRoZQorICogYWN0dWFsIGdl
bmVyYXRpb24gY291bnQgd2l0aG91dCBpbmNyZW1lbnRpbmcgaXQsIGluIG9y
ZGVyIHRvIHN1cHBvcnQKKyAqIHdyaXRpbmcgYSBub2RlIGZvciBhIGRvbWFp
biBiZWZvcmUgdGhlIGRvbWFpbiBoYXMgYmVlbiBvZmZpY2lhbGx5CisgKiBp
bnRyb2R1Y2VkKS4KICAqCiAgKiBUcmFuc2FjdGlvbiBjb25mbGljdHMgYXJl
IGRldGVjdGVkIGJ5IGNoZWNraW5nIHRoZSBnZW5lcmF0aW9uIGNvdW50IG9m
IGFsbAogICogbm9kZXMgcmVhZCBpbiB0aGUgdHJhbnNhY3Rpb24gdG8gbWF0
Y2ggd2l0aCB0aGUgZ2VuZXJhdGlvbiBjb3VudCBpbiB0aGUKQEAgLTE2MSw3
ICsxNjYsNyBAQCBzdHJ1Y3QgdHJhbnNhY3Rpb24KIH07CiAKIGV4dGVybiBp
bnQgcXVvdGFfbWF4X3RyYW5zYWN0aW9uOwotc3RhdGljIHVpbnQ2NF90IGdl
bmVyYXRpb247Cit1aW50NjRfdCBnZW5lcmF0aW9uOwogCiBzdGF0aWMgdm9p
ZCBzZXRfdGRiX2tleShjb25zdCBjaGFyICpuYW1lLCBUREJfREFUQSAqa2V5
KQogewpAQCAtMjM3LDcgKzI0Miw3IEBAIGludCBhY2Nlc3Nfbm9kZShzdHJ1
Y3QgY29ubmVjdGlvbiAqY29ubiwgc3RydWN0IG5vZGUgKm5vZGUsCiAJYm9v
bCBpbnRyb2R1Y2UgPSBmYWxzZTsKIAogCWlmICh0eXBlICE9IE5PREVfQUND
RVNTX1JFQUQpIHsKLQkJbm9kZS0+Z2VuZXJhdGlvbiA9IGdlbmVyYXRpb24r
KzsKKwkJbm9kZS0+Z2VuZXJhdGlvbiA9ICsrZ2VuZXJhdGlvbjsKIAkJaWYg
KGNvbm4gJiYgIWNvbm4tPnRyYW5zYWN0aW9uKQogCQkJd3JsX2FwcGx5X2Rl
Yml0X2RpcmVjdChjb25uKTsKIAl9CkBAIC0zNzQsNyArMzc5LDcgQEAgc3Rh
dGljIGludCBmaW5hbGl6ZV90cmFuc2FjdGlvbihzdHJ1Y3QgY29ubmVjdGlv
biAqY29ubiwKIAkJCQlpZiAoIWRhdGEuZHB0cikKIAkJCQkJZ290byBlcnI7
CiAJCQkJaGRyID0gKHZvaWQgKilkYXRhLmRwdHI7Ci0JCQkJaGRyLT5nZW5l
cmF0aW9uID0gZ2VuZXJhdGlvbisrOworCQkJCWhkci0+Z2VuZXJhdGlvbiA9
ICsrZ2VuZXJhdGlvbjsKIAkJCQlyZXQgPSB0ZGJfc3RvcmUodGRiX2N0eCwg
a2V5LCBkYXRhLAogCQkJCQkJVERCX1JFUExBQ0UpOwogCQkJCXRhbGxvY19m
cmVlKGRhdGEuZHB0cik7CkBAIC00NjIsNyArNDY3LDcgQEAgaW50IGRvX3Ry
YW5zYWN0aW9uX3N0YXJ0KHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1
Y3QgYnVmZmVyZWRfZGF0YSAqaW4pCiAJSU5JVF9MSVNUX0hFQUQoJnRyYW5z
LT5hY2Nlc3NlZCk7CiAJSU5JVF9MSVNUX0hFQUQoJnRyYW5zLT5jaGFuZ2Vk
X2RvbWFpbnMpOwogCXRyYW5zLT5mYWlsID0gZmFsc2U7Ci0JdHJhbnMtPmdl
bmVyYXRpb24gPSBnZW5lcmF0aW9uKys7CisJdHJhbnMtPmdlbmVyYXRpb24g
PSArK2dlbmVyYXRpb247CiAKIAkvKiBQaWNrIGFuIHVudXNlZCB0cmFuc2Fj
dGlvbiBpZGVudGlmaWVyLiAqLwogCWRvIHsKZGlmZiAtLWdpdCBhL3Rvb2xz
L3hlbnN0b3JlL3hlbnN0b3JlZF90cmFuc2FjdGlvbi5oIGIvdG9vbHMveGVu
c3RvcmUveGVuc3RvcmVkX3RyYW5zYWN0aW9uLmgKaW5kZXggMzM4NmJhYzU2
NS4uNDNhMTYyYmVhMyAxMDA2NDQKLS0tIGEvdG9vbHMveGVuc3RvcmUveGVu
c3RvcmVkX3RyYW5zYWN0aW9uLmgKKysrIGIvdG9vbHMveGVuc3RvcmUveGVu
c3RvcmVkX3RyYW5zYWN0aW9uLmgKQEAgLTI3LDYgKzI3LDggQEAgZW51bSBu
b2RlX2FjY2Vzc190eXBlIHsKIAogc3RydWN0IHRyYW5zYWN0aW9uOwogCitl
eHRlcm4gdWludDY0X3QgZ2VuZXJhdGlvbjsKKwogaW50IGRvX3RyYW5zYWN0
aW9uX3N0YXJ0KHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3QgYnVm
ZmVyZWRfZGF0YSAqbm9kZSk7CiBpbnQgZG9fdHJhbnNhY3Rpb25fZW5kKHN0
cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3QgYnVmZmVyZWRfZGF0YSAq
aW4pOwogCmRpZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94c19saWIuYyBi
L3Rvb2xzL3hlbnN0b3JlL3hzX2xpYi5jCmluZGV4IDlmMWRjNmQ1NTkuLjgw
YzAzYWNiZWEgMTAwNjQ0Ci0tLSBhL3Rvb2xzL3hlbnN0b3JlL3hzX2xpYi5j
CisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hzX2xpYi5jCkBAIC0xNDYsNyArMTQ2
LDcgQEAgYm9vbCB4c19zdHJpbmdzX3RvX3Blcm1zKHN0cnVjdCB4c19wZXJt
aXNzaW9ucyAqcGVybXMsIHVuc2lnbmVkIGludCBudW0sCiBib29sIHhzX3Bl
cm1fdG9fc3RyaW5nKGNvbnN0IHN0cnVjdCB4c19wZXJtaXNzaW9ucyAqcGVy
bSwKICAgICAgICAgICAgICAgICAgICAgICAgY2hhciAqYnVmZmVyLCBzaXpl
X3QgYnVmX2xlbikKIHsKLQlzd2l0Y2ggKChpbnQpcGVybS0+cGVybXMpIHsK
Kwlzd2l0Y2ggKChpbnQpcGVybS0+cGVybXMgJiB+WFNfUEVSTV9JR05PUkUp
IHsKIAljYXNlIFhTX1BFUk1fV1JJVEU6CiAJCSpidWZmZXIgPSAndyc7CiAJ
CWJyZWFrOwo=

--=separator
Content-Type: application/octet-stream; name="xsa322-o.patch"
Content-Disposition: attachment; filename="xsa322-o.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogY2xlYW4gdXAgcGVybWlzc2lvbnMgZm9yIGRlYWQgZG9tYWlu
cwpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVR5cGU6IHRleHQvcGxhaW47
IGNoYXJzZXQ9VVRGLTgKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogOGJp
dAoKZG9tYWluIGlkcyBhcmUgcHJvbmUgdG8gd3JhcHBpbmcgKDE1LWJpdHMp
LCBhbmQgd2l0aCBzdWZmaWNpZW50IG51bWJlcgpvZiBWTXMgaW4gYSByZWJv
b3QgbG9vcCBpdCBpcyBwb3NzaWJsZSB0byB0cmlnZ2VyIGl0LiAgWGVuc3Rv
cmUgZW50cmllcwptYXkgbGluZ2VyIGFmdGVyIGEgZG9tYWluIGRpZXMsIHVu
dGlsIGEgdG9vbHN0YWNrIGNsZWFucyBpdCB1cC4gRHVyaW5nCnRoaXMgdGlt
ZSB0aGVyZSBpcyBhIHdpbmRvdyB3aGVyZSBhIHdyYXBwZWQgZG9taWQgY291
bGQgYWNjZXNzIHRoZXNlCnhlbnN0b3JlIGtleXMgKHRoYXQgYmVsb25nZWQg
dG8gYW5vdGhlciBWTSkuCgpUbyBwcmV2ZW50IHRoaXMgZG8gYSBjbGVhbnVw
IHdoZW4gYSBkb21haW4gZGllczoKICogd2FsayB0aGUgZW50aXJlIHhlbnN0
b3JlIHRyZWUgYW5kIHVwZGF0ZSBwZXJtaXNzaW9ucyBmb3IgYWxsIG5vZGVz
CiAgICogaWYgdGhlIGRlYWQgZG9tYWluIGhhZCBhbiBBQ0wgZW50cnk6IHJl
bW92ZSBpdAogICAqIGlmIHRoZSBkZWFkIGRvbWFpbiB3YXMgdGhlIG93bmVy
OiBjaGFuZ2UgdGhlIG93bmVyIHRvIERvbTAKClRoaXMgaXMgZG9uZSB3aXRo
b3V0IHF1b3RhIGNoZWNrcyBvciBhIHRyYW5zYWN0aW9uLiBRdW90YSBjaGVj
a3Mgd291bGQKYmUgYSBuby1vcCAoZWl0aGVyIHRoZSBkb21haW4gaXMgZGVh
ZCwgb3IgaXQgaXMgRG9tMCB3aGVyZSB0aGV5IGFyZSBub3QKZW5mb3JjZWQp
LiAgVHJhbnNhY3Rpb25zIGFyZSBub3QgbmVlZGVkLCBiZWNhdXNlIHRoaXMg
aXMgYWxsIGRvbmUKYXRvbWljYWxseSBieSBveGVuc3RvcmVkJ3Mgc2luZ2xl
IHRocmVhZC4KClRoZSB4ZW5zdG9yZSBlbnRyaWVzIG93bmVkIGJ5IHRoZSBk
ZWFkIGRvbWFpbiBhcmUgbm90IGRlbGV0ZWQsIGJlY2F1c2UKdGhhdCBjb3Vs
ZCBjb25mdXNlIGEgdG9vbHN0YWNrIC8gYmFja2VuZHMgdGhhdCBhcmUgc3Rp
bGwgYm91bmQgdG8gaXQKKG9yIGdlbmVyYXRlIHVuZXhwZWN0ZWQgd2F0Y2gg
ZXZlbnRzKS4gSXQgaXMgdGhlIHJlc3BvbnNpYmlsaXR5IG9mIGEKdG9vbHN0
YWNrIHRvIHJlbW92ZSB0aGUgeGVuc3RvcmUgZW50cmllcyB0aGVtc2VsdmVz
LgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0zMjIuCgpTaWduZWQtb2ZmLWJ5OiBF
ZHdpbiBUw7Zyw7ZrIDxlZHZpbi50b3Jva0BjaXRyaXguY29tPgpBY2tlZC1i
eTogQ2hyaXN0aWFuIExpbmRpZyA8Y2hyaXN0aWFuLmxpbmRpZ0BjaXRyaXgu
Y29tPgoKZGlmZiAtLWdpdCBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wZXJt
cy5tbCBiL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wZXJtcy5tbAppbmRleCBl
ZTdmZWU2YmRhLi5lOGExNjIyMWY4IDEwMDY0NAotLS0gYS90b29scy9vY2Ft
bC94ZW5zdG9yZWQvcGVybXMubWwKKysrIGIvdG9vbHMvb2NhbWwveGVuc3Rv
cmVkL3Blcm1zLm1sCkBAIC01OCw2ICs1OCwxNSBAQCBsZXQgZ2V0X290aGVy
IHBlcm1zID0gcGVybXMub3RoZXIKIGxldCBnZXRfYWNsIHBlcm1zID0gcGVy
bXMuYWNsCiBsZXQgZ2V0X293bmVyIHBlcm0gPSBwZXJtLm93bmVyCiAKKygq
KiBbcmVtb3RlX2RvbWlkIH5kb21pZCBwZXJtXSByZW1vdmVzIGFsbCBBQ0xz
IGZvciBbZG9taWRdIGZyb20gcGVybS4KKyogSWYgW2RvbWlkXSB3YXMgdGhl
IG93bmVyIHRoZW4gaXQgaXMgY2hhbmdlZCB0byBEb20wLgorKiBUaGlzIGlz
IHVzZWQgZm9yIGNsZWFuaW5nIHVwIGFmdGVyIGRlYWQgZG9tYWlucy4KKyog
KikKK2xldCByZW1vdmVfZG9taWQgfmRvbWlkIHBlcm0gPQorCWxldCBhY2wg
PSBMaXN0LmZpbHRlciAoZnVuIChhY2xfZG9taWQsIF8pIC0+IGFjbF9kb21p
ZCA8PiBkb21pZCkgcGVybS5hY2wgaW4KKwlsZXQgb3duZXIgPSBpZiBwZXJt
Lm93bmVyID0gZG9taWQgdGhlbiAwIGVsc2UgcGVybS5vd25lciBpbgorCXsg
cGVybSB3aXRoIGFjbDsgb3duZXIgfQorCiBsZXQgZGVmYXVsdDAgPSBjcmVh
dGUgMCBOT05FIFtdCiAKIGxldCBwZXJtX29mX3N0cmluZyBzID0KZGlmZiAt
LWdpdCBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wcm9jZXNzLm1sIGIvdG9v
bHMvb2NhbWwveGVuc3RvcmVkL3Byb2Nlc3MubWwKaW5kZXggZjk5YjllOTM1
Yy4uNzNlMDRjYzE4YiAxMDA2NDQKLS0tIGEvdG9vbHMvb2NhbWwveGVuc3Rv
cmVkL3Byb2Nlc3MubWwKKysrIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3By
b2Nlc3MubWwKQEAgLTQ0Myw2ICs0NDMsNyBAQCBsZXQgZG9fcmVsZWFzZSBj
b24gdCBkb21haW5zIGNvbnMgZGF0YSA9CiAJbGV0IGZpcmVfc3BlY193YXRj
aGVzID0gRG9tYWlucy5leGlzdCBkb21haW5zIGRvbWlkIGluCiAJRG9tYWlu
cy5kZWwgZG9tYWlucyBkb21pZDsKIAlDb25uZWN0aW9ucy5kZWxfZG9tYWlu
IGNvbnMgZG9taWQ7CisJU3RvcmUucmVzZXRfcGVybWlzc2lvbnMgKFRyYW5z
YWN0aW9uLmdldF9zdG9yZSB0KSBkb21pZDsKIAlpZiBmaXJlX3NwZWNfd2F0
Y2hlcwogCXRoZW4gQ29ubmVjdGlvbnMuZmlyZV9zcGVjX3dhdGNoZXMgKFRy
YW5zYWN0aW9uLmdldF9yb290IHQpIGNvbnMgU3RvcmUuUGF0aC5yZWxlYXNl
X2RvbWFpbgogCWVsc2UgcmFpc2UgSW52YWxpZF9DbWRfQXJncwpkaWZmIC0t
Z2l0IGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3N0b3JlLm1sIGIvdG9vbHMv
b2NhbWwveGVuc3RvcmVkL3N0b3JlLm1sCmluZGV4IDZiNmU0NDBlOTguLjNi
MDUxMjhmMWIgMTAwNjQ0Ci0tLSBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9z
dG9yZS5tbAorKysgYi90b29scy9vY2FtbC94ZW5zdG9yZWQvc3RvcmUubWwK
QEAgLTg5LDYgKzg5LDEzIEBAIGxldCBjaGVja19vd25lciBub2RlIGNvbm5l
Y3Rpb24gPQogCiBsZXQgcmVjIHJlY3Vyc2UgZmN0IG5vZGUgPSBmY3Qgbm9k
ZTsgTGlzdC5pdGVyIChyZWN1cnNlIGZjdCkgbm9kZS5jaGlsZHJlbgogCiso
KiogW3JlY3Vyc2VfbWFwIGYgdHJlZV0gYXBwbGllcyBbZl0gb24gZWFjaCBu
b2RlIGluIHRoZSB0cmVlIHJlY3Vyc2l2ZWx5ICopCitsZXQgcmVjdXJzZV9t
YXAgZiA9CisJbGV0IHJlYyB3YWxrIG5vZGUgPQorCQlmIHsgbm9kZSB3aXRo
IGNoaWxkcmVuID0gTGlzdC5yZXZfbWFwIHdhbGsgbm9kZS5jaGlsZHJlbiB8
PiBMaXN0LnJldiB9CisJaW4KKwl3YWxrCisKIGxldCB1bnBhY2sgbm9kZSA9
IChTeW1ib2wudG9fc3RyaW5nIG5vZGUubmFtZSwgbm9kZS5wZXJtcywgbm9k
ZS52YWx1ZSkKIAogZW5kCkBAIC00MDUsNiArNDEyLDE1IEBAIGxldCBzZXRw
ZXJtcyBzdG9yZSBwZXJtIHBhdGggbnBlcm1zID0KIAkJUXVvdGEuZGVsX2Vu
dHJ5IHN0b3JlLnF1b3RhIG9sZF9vd25lcjsKIAkJUXVvdGEuYWRkX2VudHJ5
IHN0b3JlLnF1b3RhIG5ld19vd25lcgogCitsZXQgcmVzZXRfcGVybWlzc2lv
bnMgc3RvcmUgZG9taWQgPQorCUxvZ2dpbmcuaW5mbyAic3RvcmV8bm9kZSIg
IkNsZWFuaW5nIHVwIHhlbnN0b3JlIEFDTHMgZm9yIGRvbWlkICVkIiBkb21p
ZDsKKwlzdG9yZS5yb290IDwtIE5vZGUucmVjdXJzZV9tYXAgKGZ1biBub2Rl
IC0+CisJCWxldCBwZXJtcyA9IFBlcm1zLk5vZGUucmVtb3ZlX2RvbWlkIH5k
b21pZCBub2RlLnBlcm1zIGluCisJCWlmIHBlcm1zIDw+IG5vZGUucGVybXMg
dGhlbgorCQkJTG9nZ2luZy5kZWJ1ZyAic3RvcmV8bm9kZSIgIkNoYW5nZWQg
cGVybWlzc2lvbnMgZm9yIG5vZGUgJXMiIChOb2RlLmdldF9uYW1lIG5vZGUp
OworCQl7IG5vZGUgd2l0aCBwZXJtcyB9CisJKSBzdG9yZS5yb290CisKIHR5
cGUgb3BzID0gewogCXN0b3JlOiB0OwogCXdyaXRlOiBQYXRoLnQgLT4gc3Ry
aW5nIC0+IHVuaXQ7CmRpZmYgLS1naXQgYS90b29scy9vY2FtbC94ZW5zdG9y
ZWQveGVuc3RvcmVkLm1sIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3hlbnN0
b3JlZC5tbAppbmRleCAwZDM1NWJiY2I4Li5mZjlmYmJiYWMyIDEwMDY0NAot
LS0gYS90b29scy9vY2FtbC94ZW5zdG9yZWQveGVuc3RvcmVkLm1sCisrKyBi
L3Rvb2xzL29jYW1sL3hlbnN0b3JlZC94ZW5zdG9yZWQubWwKQEAgLTMzNiw2
ICszMzYsNyBAQCBsZXQgXyA9CiAJCQlmaW5hbGx5IChmdW4gKCkgLT4KIAkJ
CQlpZiBTb21lIHBvcnQgPSBldmVudGNobi5FdmVudC52aXJxX3BvcnQgdGhl
biAoCiAJCQkJCWxldCAobm90aWZ5LCBkZWFkZG9tKSA9IERvbWFpbnMuY2xl
YW51cCBkb21haW5zIGluCisJCQkJCUxpc3QuaXRlciAoU3RvcmUucmVzZXRf
cGVybWlzc2lvbnMgc3RvcmUpIGRlYWRkb207CiAJCQkJCUxpc3QuaXRlciAo
Q29ubmVjdGlvbnMuZGVsX2RvbWFpbiBjb25zKSBkZWFkZG9tOwogCQkJCQlp
ZiBkZWFkZG9tIDw+IFtdIHx8IG5vdGlmeSB0aGVuCiAJCQkJCQlDb25uZWN0
aW9ucy5maXJlX3NwZWNfd2F0Y2hlcwo=

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 12:21:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 12:21:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53162.92912 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9KD-0005oe-Q1; Tue, 15 Dec 2020 12:21:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53162.92912; Tue, 15 Dec 2020 12:21:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9KD-0005oU-JL; Tue, 15 Dec 2020 12:21:05 +0000
Received: by outflank-mailman (input) for mailman id 53162;
 Tue, 15 Dec 2020 12:21:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tdgx=FT=xenbits.xen.org=gdunlap@srs-us1.protection.inumbo.net>)
 id 1kp9KC-0004tM-2d
 for xen-devel@lists.xen.org; Tue, 15 Dec 2020 12:21:04 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1d23f9e1-ab82-4fdc-b89f-7f821d98b274;
 Tue, 15 Dec 2020 12:20:24 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9JT-0005hR-Ll; Tue, 15 Dec 2020 12:20:19 +0000
Received: from gdunlap by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9JT-00072J-Kk; Tue, 15 Dec 2020 12:20:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d23f9e1-ab82-4fdc-b89f-7f821d98b274
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=1bDBiyLhlk4nGk6n3SWFRgf//RkPkraAVoSImLu4VEI=; b=nmjV25fXYBtL58vhuMbdXuNH2x
	9YVHNzENv/A93yaYS3ATPtA5oyqnselZZP/dAWH4ZxuVtwxyUrvt1Bdk6/eXnxMCjs04ePEg91sTQ
	4sjw2audUgS50RCWdHLqdcmbbCBUaEA1GcjN+H22AIu8h9uyhsF1MeGpDfRtRZo7xg/U=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 325 v3 (CVE-2020-29483) - Xenstore: guests
 can disturb domain cleanup
Message-Id: <E1kp9JT-00072J-Kk@xenbits.xenproject.org>
Date: Tue, 15 Dec 2020 12:20:19 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2020-29483 / XSA-325
                               version 3

              Xenstore: guests can disturb domain cleanup

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

Xenstored and guests communicate via a shared memory page using a
specific protocol. When a guest violates this protocol, xenstored will
drop the connection to that guest.

Unfortunately this is done by just removing the guest from xenstored's
internal management, resulting in the same actions as if the guest had
been destroyed, including sending an @releaseDomain event.

@releaseDomain events do not identify which guest has been removed.  All
watchers of this event must inspect the states of all guests to find the
guest which has been removed.  When an @releaseDomain is generated
because a domain violated the xenstored protocol, the guest is still
running, so the watchers will not react.

Later, when the guest is actually destroyed, xenstored will no longer
have it stored in its internal database, so no further @releaseDomain
event will be sent. This can lead to a zombie domain; mappings of that
guest's memory will not be removed, due to the missing event. This
zombie domain will be cleaned up only after another domain is destroyed,
as that will trigger another @releaseDomain event.
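The watcher behaviour this relies on can be sketched as below. This is a
minimal, self-contained illustration, not xenstored code: `struct dominfo`
and `find_released_domain` are hypothetical stand-ins, and a real watcher
would obtain this state via `xc_domain_getinfo()`.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for the xc_dominfo_t fields a watcher cares
 * about (for illustration only; real code fills these in with
 * xc_domain_getinfo()). */
struct dominfo {
    unsigned int domid;
    bool shutdown;
    bool dying;
    bool crashed;
};

/*
 * On @releaseDomain, scan every known domain and return the domid of
 * the first one that looks released, or -1 if all domains still appear
 * to be running.  When the event was caused by a protocol violation
 * rather than a real domain death, every domain is still running, so
 * -1 is returned and the event is effectively lost.
 */
int find_released_domain(const struct dominfo *doms, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (doms[i].shutdown || doms[i].dying || doms[i].crashed)
            return (int)doms[i].domid;
    }
    return -1; /* nothing to clean up from the watcher's point of view */
}
```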

If the device model of the guest which violated the Xenstore protocol
is running in a stub-domain, a use-after-free could occur in xenstored
after it has removed the guest from its internal database, possibly
resulting in a crash of xenstored.

IMPACT
======

A malicious guest can block resources of the host for a period after
its own death.

Guests with a stub domain device model can eventually crash xenstored,
resulting in a more serious denial of service (the prevention of any
further domain management operations).

VULNERABLE SYSTEMS
==================

All versions of Xen are affected.

Only the C variant of Xenstore is affected; the Ocaml variant is not
affected.

Only HVM guests with a stubdom device model can cause a serious DoS.

MITIGATION
==========

Using the Ocaml variant of Xenstore (oxenstored) avoids the issue.
Running HVM domains with a dom0 device model rather than a stubdom
device model will avoid the more serious DoS.

However, given the other vulnerabilities in both versions of xenstored
being reported at this time, changing the xenstored implementation, or
switching to dom0 xenstored, is not a recommended approach to
mitigating individual issues.

CREDITS
=======

This issue was discovered by Pawel Wieczorkiewicz of Amazon.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa325.patch           xen-unstable
xsa325-4.14.patch      Xen 4.14 - 4.10

$ sha256sum xsa325*
29a81606e9c0e036dcc39b2a7e6ec0b1ce7d658972a368907b02d56f2aae3dc2  xsa325.meta
56e09d92fa3d623b2896fd6e6a08805514b2ff9b1cde526968be3925fda28705  xsa325.patch
702f0f4c20e685d2e23a9c1a31c0e0fda1824c9209bd8affca9dd3489dfbd23d  xsa325-4.14.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl/Yqd4MHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZ7AEH/0fHBNU0Sd9iVVcGmZvJblI3mKy9TA3Z8vcdiN7I
j0TXOQlmjp90WPC8nYo/XtsFpCx5dhg0yLX1Unxe1R0twvt2OrXWRZTa0dbVFcou
t8yq3lSRiOqzwNK186wzS2LSyAH7yit9CpWLGsXuL6WnocL84Hb3PSsJBP4nTZzm
dcol+h85SvfQ5S+aMUTPqxdm+uE9qoSAN6rJU2Fill3jCThpJSfRUy1vIz5CDYes
oD8Oq+H1sdfzCtDHGzgRveDqkHTr6rxCmlenxAI3UCshkhM6VJypoNQ4jQpS/yfN
nrim4XntIOdy1HR4UgnHRYcnFOnn2qs7dkIU449KVzs1KCg=
=83j/
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa325.meta"
Content-Disposition: attachment; filename="xsa325.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzMjUsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIs
CiAgICAiNC4xMSIsCiAgICAiNC4xMCIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjEwIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICIxZDcyZDk5MTVlZGZmMGRkNDFmNjAxYmJiMGIxZjgzYzAy
ZmYxNjg5IiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
NTMsCiAgICAgICAgICAgIDExNSwKICAgICAgICAgICAgMzIyLAogICAgICAg
ICAgICAzMjMsCiAgICAgICAgICAgIDMyNAogICAgICAgICAgXSwKICAgICAg
ICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzI1LTQuMTQucGF0
Y2giCiAgICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9LAogICAg
IjQuMTEiOiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7
CiAgICAgICAgICAiU3RhYmxlUmVmIjogIjQxYTgyMmMzOTI2MzUwZjI2OTE3
ZDc0N2M4ZGZlZDFjNDRhMmNmNDIiLAogICAgICAgICAgIlByZXJlcXMiOiBb
CiAgICAgICAgICAgIDM1MywKICAgICAgICAgICAgMTE1LAogICAgICAgICAg
ICAzMjIsCiAgICAgICAgICAgIDMyMywKICAgICAgICAgICAgMzI0CiAgICAg
ICAgICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4
c2EzMjUtNC4xNC5wYXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAgICAg
IH0KICAgIH0sCiAgICAiNC4xMiI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAg
ICAgICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiODE0NWQz
OGI0ODAwOTI1NWEzMmFiODdhMDJlNDgxY2QwOWM4MTFmOSIsCiAgICAgICAg
ICAiUHJlcmVxcyI6IFsKICAgICAgICAgICAgMzUzLAogICAgICAgICAgICAx
MTUsCiAgICAgICAgICAgIDMyMiwKICAgICAgICAgICAgMzIzLAogICAgICAg
ICAgICAzMjQKICAgICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsK
ICAgICAgICAgICAgInhzYTMyNS00LjE0LnBhdGNoIgogICAgICAgICAgXQog
ICAgICAgIH0KICAgICAgfQogICAgfSwKICAgICI0LjEzIjogewogICAgICAi
UmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0YWJs
ZVJlZiI6ICJiNTMwMjI3M2UyYzUxOTQwMTcyNDAwNDg2NjQ0NjM2ZjJmNGZj
NjRhIiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAzNTMs
CiAgICAgICAgICAgIDExNSwKICAgICAgICAgICAgMzIyLAogICAgICAgICAg
ICAzMjMsCiAgICAgICAgICAgIDMyNAogICAgICAgICAgXSwKICAgICAgICAg
ICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzI1LTQuMTQucGF0Y2gi
CiAgICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9LAogICAgIjQu
MTQiOiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAg
ICAgICAgICAiU3RhYmxlUmVmIjogIjFkMWQxZjUzOTE5NzY0NTZhNzlkYWFj
MGRjZmU3MTU3ZGExZTU0ZjciLAogICAgICAgICAgIlByZXJlcXMiOiBbCiAg
ICAgICAgICAgIDM1MywKICAgICAgICAgICAgMTE1LAogICAgICAgICAgICAz
MjIsCiAgICAgICAgICAgIDMyMywKICAgICAgICAgICAgMzI0CiAgICAgICAg
ICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2Ez
MjUtNC4xNC5wYXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0K
ICAgIH0sCiAgICAibWFzdGVyIjogewogICAgICAiUmVjaXBlcyI6IHsKICAg
ICAgICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICIzYWU0Njlh
ZjhlNjgwZGYzMWVlY2QwYTJhYzZhODNiNThhZDdjZTUzIiwKICAgICAgICAg
ICJQcmVyZXFzIjogWwogICAgICAgICAgICAzNTMsCiAgICAgICAgICAgIDEx
NSwKICAgICAgICAgICAgMzIyLAogICAgICAgICAgICAzMjMsCiAgICAgICAg
ICAgIDMyNAogICAgICAgICAgXSwKICAgICAgICAgICJQYXRjaGVzIjogWwog
ICAgICAgICAgICAieHNhMzI1LnBhdGNoIgogICAgICAgICAgXQogICAgICAg
IH0KICAgICAgfQogICAgfQogIH0KfQ==

--=separator
Content-Type: application/octet-stream; name="xsa325.patch"
Content-Disposition: attachment; filename="xsa325.patch"
Content-Transfer-Encoding: base64

RnJvbTogSGFyc2hhIFNoYW1zdW5kYXJhIEhhdmFudXIgPGhhdmFudXJAYW1h
em9uLmNvbT4KU3ViamVjdDogdG9vbHMveGVuc3RvcmU6IFByZXNlcnZlIGJh
ZCBjbGllbnQgdW50aWwgdGhleSBhcmUgZGVzdHJveWVkCgpYZW5TdG9yZWQg
d2lsbCBraWxsIGFueSBjb25uZWN0aW9uIHRoYXQgaXQgdGhpbmtzIGhhcyBt
aXNiZWhhdmVkLAp0aGlzIGlzIGN1cnJlbnRseSBoYXBwZW5pbmcgaW4gdHdv
IHBsYWNlczoKICogSW4gYGhhbmRsZV9pbnB1dCgpYCBpZiB0aGUgc2FuaXR5
IGNoZWNrIG9uIHRoZSByaW5nIGFuZCB0aGUgbWVzc2FnZQogICBmYWlscy4K
ICogSW4gYGhhbmRsZV9vdXRwdXQoKWAgd2hlbiBmYWlsaW5nIHRvIHdyaXRl
IHRoZSByZXNwb25zZSBpbiB0aGUgcmluZy4KCkFzIHRoZSBkb21haW4gc3Ry
dWN0dXJlIGlzIGEgY2hpbGQgb2YgdGhlIGNvbm5lY3Rpb24sIFhlblN0b3Jl
ZCB3aWxsCmRlc3Ryb3kgaXRzIHZpZXcgb2YgdGhlIGRvbWFpbiB3aGVuIGtp
bGxpbmcgdGhlIGNvbm5lY3Rpb24uIFRoaXMgd2lsbApyZXN1bHQgaW4gc2Vu
ZGluZyBAcmVsZWFzZURvbWFpbiBldmVudCB0byBhbGwgdGhlIHdhdGNoZXJz
LgoKQXMgdGhlIHdhdGNoIGV2ZW50IGRvZXNuJ3QgY2Fycnkgd2hpY2ggZG9t
YWluIGhhcyBiZWVuIHJlbGVhc2VkLAp0aGUgd2F0Y2hlciAoc3VjaCBhcyBY
ZW5TdG9yZWQpIHdpbGwgZ2VuZXJhbGx5IGdvIHRocm91Z2ggdGhlIGxpc3Qg
b2YKZG9tYWlucyByZWdpc3RlcnMgYW5kIGNoZWNrIGlmIG9uZSBvZiB0aGVt
IGlzIHNodXR0aW5nIGRvd24vZHlpbmcuCkluIHRoZSBjYXNlIG9mIGEgY2xp
ZW50IG1pc2JlaGF2aW5nLCB0aGUgZG9tYWluIHdpbGwgbGlrZWx5IHRvIGJl
CnJ1bm5pbmcsIHNvIG5vIGFjdGlvbiB3aWxsIGJlIHBlcmZvcm1lZC4KCldo
ZW4gdGhlIGRvbWFpbiBpcyBlZmZlY3RpdmVseSBkZXN0cm95ZWQsIFhlblN0
b3JlZCB3aWxsIG5vdCBiZSBhd2FyZSBvZgp0aGUgZG9tYWluIGFueW1vcmUu
IFNvIHRoZSB3YXRjaCBldmVudCBpcyBub3QgZ29pbmcgdG8gYmUgc2VudC4K
QnkgY29uc2VxdWVuY2UsIHRoZSB3YXRjaGVycyBvZiB0aGUgZXZlbnQgd2ls
bCBub3QgcmVsZWFzZSBtYXBwaW5ncwp0aGV5IG1heSBoYXZlIG9uIHRoZSBk
b21haW4uIFRoaXMgd2lsbCByZXN1bHQgaW4gYSB6b21iaWUgZG9tYWluLgoK
SW4gb3JkZXIgdG8gc2VuZCBAcmVsZWFzZURvbWFpbiBldmVudCBhdCB0aGUg
Y29ycmVjdCB0aW1lLCB3ZSB3YW50CnRvIGtlZXAgdGhlIGRvbWFpbiBzdHJ1
Y3R1cmUgdW50aWwgdGhlIGRvbWFpbiBpcyBlZmZlY3RpdmVseQpzaHV0dGlu
Zy1kb3duL2R5aW5nLgoKV2UgYWxzbyB3YW50IHRvIGtlZXAgdGhlIGNvbm5l
Y3Rpb24gYXJvdW5kIHNvIHdlIGNvdWxkIHBvc3NpYmx5IHJldml2ZQp0aGUg
Y29ubmVjdGlvbiBpbiB0aGUgZnV0dXJlLgoKQSBuZXcgZmxhZyAnaXNfaWdu
b3JlZCcgaXMgYWRkZWQgdG8gbWFyayB3aGV0aGVyIGEgY29ubmVjdGlvbiBz
aG91bGQgYmUKaWdub3JlZCB3aGVuIGNoZWNraW5nIGlmIHRoZXJlIGFyZSB3
b3JrIHRvIGRvLiBBZGRpdGlvbmFsbHkgYW55CnRyYW5zYWN0aW9ucywgd2F0
Y2hlcywgYnVmZmVycyBhc3NvY2lhdGVkIHRvIHRoZSBjb25uZWN0aW9uIHdp
bGwgYmUKZnJlZWQgYXMgeW91IGNhbid0IGRvIG11Y2ggd2l0aCB0aGVtIChy
ZXN0YXJ0aW5nIHRoZSBjb25uZWN0aW9uIHdpbGwKbGlrZWx5IG5lZWQgYSBy
ZXNldCkuCgpBcyBhIHNpZGUgbm90ZSwgd2hlbiB0aGUgZGV2aWNlIG1vZGVs
IHdlcmUgcnVubmluZyBpbiBhIHN0dWJkb21haW4sIGEKZ3Vlc3Qgd291bGQg
aGF2ZSBiZWVuIGFibGUgdG8gaW50cm9kdWNlIGEgdXNlLWFmdGVyLWZyZWUg
YmVjYXVzZSB0aGVyZQppcyB0d28gcGFyZW50cyBmb3IgYSBndWVzdCBjb25u
ZWN0aW9uLgoKVGhpcyBpcyBYU0EtMzI1LgoKUmVwb3J0ZWQtYnk6IFBhd2Vs
IFdpZWN6b3JraWV3aWN6IDx3aXBhd2VsQGFtYXpvbi5kZT4KU2lnbmVkLW9m
Zi1ieTogSGFyc2hhIFNoYW1zdW5kYXJhIEhhdmFudXIgPGhhdmFudXJAYW1h
em9uLmNvbT4KU2lnbmVkLW9mZi1ieTogSnVsaWVuIEdyYWxsIDxqZ3JhbGxA
YW1hem9uLmNvbT4KUmV2aWV3ZWQtYnk6IEp1ZXJnZW4gR3Jvc3MgPGpncm9z
c0BzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IFBhdWwgRHVycmFudCA8cGF1bEB4
ZW4ub3JnPgoKZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3Jl
ZF9jb3JlLmMgYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jCmlu
ZGV4IGM5MjljYmJjM2IuLjc0NmExMjQ3YjMgMTAwNjQ0Ci0tLSBhL3Rvb2xz
L3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMKKysrIGIvdG9vbHMveGVuc3Rv
cmUveGVuc3RvcmVkX2NvcmUuYwpAQCAtMTMzNyw2ICsxMzM3LDMyIEBAIHN0
YXRpYyBzdHJ1Y3QgewogCVtYU19ESVJFQ1RPUllfUEFSVF0gICAgPSB7ICJE
SVJFQ1RPUllfUEFSVCIsICAgIHNlbmRfZGlyZWN0b3J5X3BhcnQgfSwKIH07
CiAKKy8qCisgKiBLZWVwIHRoZSBjb25uZWN0aW9uIGFsaXZlIGJ1dCBzdG9w
IHByb2Nlc3NpbmcgYW55IG5ldyByZXF1ZXN0IG9yIHNlbmRpbmcKKyAqIHJl
cG9uc2UuIFRoaXMgaXMgdG8gYWxsb3cgc2VuZGluZyBAcmVsZWFzZURvbWFp
biB3YXRjaCBldmVudCBhdCB0aGUgY29ycmVjdAorICogbW9tZW50IGFuZC9v
ciB0byBhbGxvdyB0aGUgY29ubmVjdGlvbiB0byByZXN0YXJ0IChub3QgeWV0
IGltcGxlbWVudGVkKS4KKyAqCisgKiBBbGwgd2F0Y2hlcywgdHJhbnNhY3Rp
b25zLCBidWZmZXJzIHdpbGwgYmUgZnJlZWQuCisgKi8KK3N0YXRpYyB2b2lk
IGlnbm9yZV9jb25uZWN0aW9uKHN0cnVjdCBjb25uZWN0aW9uICpjb25uKQor
eworCXN0cnVjdCBidWZmZXJlZF9kYXRhICpvdXQsICp0bXA7CisKKwl0cmFj
ZSgiQ09OTiAlcCBpZ25vcmVkXG4iLCBjb25uKTsKKworCWNvbm4tPmlzX2ln
bm9yZWQgPSB0cnVlOworCWNvbm5fZGVsZXRlX2FsbF93YXRjaGVzKGNvbm4p
OworCWNvbm5fZGVsZXRlX2FsbF90cmFuc2FjdGlvbnMoY29ubik7CisKKwls
aXN0X2Zvcl9lYWNoX2VudHJ5X3NhZmUob3V0LCB0bXAsICZjb25uLT5vdXRf
bGlzdCwgbGlzdCkgeworCQlsaXN0X2RlbCgmb3V0LT5saXN0KTsKKwkJdGFs
bG9jX2ZyZWUob3V0KTsKKwl9CisKKwl0YWxsb2NfZnJlZShjb25uLT5pbik7
CisJY29ubi0+aW4gPSBOVUxMOworfQorCiBzdGF0aWMgY29uc3QgY2hhciAq
c29ja21zZ19zdHJpbmcoZW51bSB4c2Rfc29ja21zZ190eXBlIHR5cGUpCiB7
CiAJaWYgKCh1bnNpZ25lZCBpbnQpdHlwZSA8IEFSUkFZX1NJWkUod2lyZV9m
dW5jcykgJiYgd2lyZV9mdW5jc1t0eXBlXS5zdHIpCkBAIC0xMzk1LDggKzE0
MjEsMTAgQEAgc3RhdGljIHZvaWQgY29uc2lkZXJfbWVzc2FnZShzdHJ1Y3Qg
Y29ubmVjdGlvbiAqY29ubikKIAlhc3NlcnQoY29ubi0+aW4gPT0gTlVMTCk7
CiB9CiAKLS8qIEVycm9ycyBpbiByZWFkaW5nIG9yIGFsbG9jYXRpbmcgaGVy
ZSBtZWFuIHdlIGdldCBvdXQgb2Ygc3luYywgc28gd2UKLSAqIGRyb3AgdGhl
IHdob2xlIGNsaWVudCBjb25uZWN0aW9uLiAqLworLyoKKyAqIEVycm9ycyBp
biByZWFkaW5nIG9yIGFsbG9jYXRpbmcgaGVyZSBtZWFucyB3ZSBnZXQgb3V0
IG9mIHN5bmMsIHNvIHdlIG1hcmsKKyAqIHRoZSBjb25uZWN0aW9uIGFzIGln
bm9yZWQuCisgKi8KIHN0YXRpYyB2b2lkIGhhbmRsZV9pbnB1dChzdHJ1Y3Qg
Y29ubmVjdGlvbiAqY29ubikKIHsKIAlpbnQgYnl0ZXM7CkBAIC0xNDUzLDE0
ICsxNDgxLDE0IEBAIHN0YXRpYyB2b2lkIGhhbmRsZV9pbnB1dChzdHJ1Y3Qg
Y29ubmVjdGlvbiAqY29ubikKIAlyZXR1cm47CiAKIGJhZF9jbGllbnQ6Ci0J
LyogS2lsbCBpdC4gKi8KLQl0YWxsb2NfZnJlZShjb25uKTsKKwlpZ25vcmVf
Y29ubmVjdGlvbihjb25uKTsKIH0KIAogc3RhdGljIHZvaWQgaGFuZGxlX291
dHB1dChzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubikKIHsKKwkvKiBJZ25vcmUg
dGhlIGNvbm5lY3Rpb24gaWYgYW4gZXJyb3Igb2NjdXJlZCAqLwogCWlmICgh
d3JpdGVfbWVzc2FnZXMoY29ubikpCi0JCXRhbGxvY19mcmVlKGNvbm4pOwor
CQlpZ25vcmVfY29ubmVjdGlvbihjb25uKTsKIH0KIAogc3RydWN0IGNvbm5l
Y3Rpb24gKm5ld19jb25uZWN0aW9uKGNvbm53cml0ZWZuX3QgKndyaXRlLCBj
b25ucmVhZGZuX3QgKnJlYWQpCkBAIC0xNDc1LDYgKzE1MDMsNyBAQCBzdHJ1
Y3QgY29ubmVjdGlvbiAqbmV3X2Nvbm5lY3Rpb24oY29ubndyaXRlZm5fdCAq
d3JpdGUsIGNvbm5yZWFkZm5fdCAqcmVhZCkKIAluZXctPnBvbGxmZF9pZHgg
PSAtMTsKIAluZXctPndyaXRlID0gd3JpdGU7CiAJbmV3LT5yZWFkID0gcmVh
ZDsKKwluZXctPmlzX2lnbm9yZWQgPSBmYWxzZTsKIAluZXctPnRyYW5zYWN0
aW9uX3N0YXJ0ZWQgPSAwOwogCUlOSVRfTElTVF9IRUFEKCZuZXctPm91dF9s
aXN0KTsKIAlJTklUX0xJU1RfSEVBRCgmbmV3LT53YXRjaGVzKTsKQEAgLTIx
MzYsOCArMjE2NSw5IEBAIGludCBtYWluKGludCBhcmdjLCBjaGFyICphcmd2
W10pCiAJCQkJCWlmIChmZHNbY29ubi0+cG9sbGZkX2lkeF0ucmV2ZW50cwog
CQkJCQkgICAgJiB+KFBPTExJTnxQT0xMT1VUKSkKIAkJCQkJCXRhbGxvY19m
cmVlKGNvbm4pOwotCQkJCQllbHNlIGlmIChmZHNbY29ubi0+cG9sbGZkX2lk
eF0ucmV2ZW50cwotCQkJCQkJICYgUE9MTElOKQorCQkJCQllbHNlIGlmICgo
ZmRzW2Nvbm4tPnBvbGxmZF9pZHhdLnJldmVudHMKKwkJCQkJCSAgJiBQT0xM
SU4pICYmCisJCQkJCQkgIWNvbm4tPmlzX2lnbm9yZWQpCiAJCQkJCQloYW5k
bGVfaW5wdXQoY29ubik7CiAJCQkJfQogCQkJCWlmICh0YWxsb2NfZnJlZShj
b25uKSA9PSAwKQpAQCAtMjE0OSw4ICsyMTc5LDkgQEAgaW50IG1haW4oaW50
IGFyZ2MsIGNoYXIgKmFyZ3ZbXSkKIAkJCQkJaWYgKGZkc1tjb25uLT5wb2xs
ZmRfaWR4XS5yZXZlbnRzCiAJCQkJCSAgICAmIH4oUE9MTElOfFBPTExPVVQp
KQogCQkJCQkJdGFsbG9jX2ZyZWUoY29ubik7Ci0JCQkJCWVsc2UgaWYgKGZk
c1tjb25uLT5wb2xsZmRfaWR4XS5yZXZlbnRzCi0JCQkJCQkgJiBQT0xMT1VU
KQorCQkJCQllbHNlIGlmICgoZmRzW2Nvbm4tPnBvbGxmZF9pZHhdLnJldmVu
dHMKKwkJCQkJCSAgJiBQT0xMT1VUKSAmJgorCQkJCQkJICFjb25uLT5pc19p
Z25vcmVkKQogCQkJCQkJaGFuZGxlX291dHB1dChjb25uKTsKIAkJCQl9CiAJ
CQkJaWYgKHRhbGxvY19mcmVlKGNvbm4pID09IDApCmRpZmYgLS1naXQgYS90
b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5oIGIvdG9vbHMveGVuc3Rv
cmUveGVuc3RvcmVkX2NvcmUuaAppbmRleCA2YzIxZDViYjlhLi40YzZjM2Q2
ZjIwIDEwMDY0NAotLS0gYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29y
ZS5oCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmgKQEAg
LTc3LDYgKzc3LDkgQEAgc3RydWN0IGNvbm5lY3Rpb24KIAkvKiBXaG8gYW0g
ST8gMCBmb3Igc29ja2V0IGNvbm5lY3Rpb25zLiAqLwogCXVuc2lnbmVkIGlu
dCBpZDsKIAorCS8qIElzIHRoaXMgY29ubmVjdGlvbiBpZ25vcmVkPyAqLwor
CWJvb2wgaXNfaWdub3JlZDsKKwogCS8qIEJ1ZmZlcmVkIGluY29taW5nIGRh
dGEuICovCiAJc3RydWN0IGJ1ZmZlcmVkX2RhdGEgKmluOwogCmRpZmYgLS1n
aXQgYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfZG9tYWluLmMgYi90b29s
cy94ZW5zdG9yZS94ZW5zdG9yZWRfZG9tYWluLmMKaW5kZXggNzE2OWRhOTg1
MS4uN2QzNDhkNTdmMyAxMDA2NDQKLS0tIGEvdG9vbHMveGVuc3RvcmUveGVu
c3RvcmVkX2RvbWFpbi5jCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3Jl
ZF9kb21haW4uYwpAQCAtMjg2LDYgKzI4NiwxMCBAQCBib29sIGRvbWFpbl9j
YW5fcmVhZChzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubikKIAogCWlmIChkb21h
aW5faXNfdW5wcml2aWxlZ2VkKGNvbm4pICYmIGNvbm4tPmRvbWFpbi0+d3Js
X2NyZWRpdCA8IDApCiAJCXJldHVybiBmYWxzZTsKKworCWlmIChjb25uLT5p
c19pZ25vcmVkKQorCQlyZXR1cm4gZmFsc2U7CisKIAlyZXR1cm4gKGludGYt
PnJlcV9jb25zICE9IGludGYtPnJlcV9wcm9kKTsKIH0KIApAQCAtMzAzLDYg
KzMwNywxMCBAQCBib29sIGRvbWFpbl9pc191bnByaXZpbGVnZWQoc3RydWN0
IGNvbm5lY3Rpb24gKmNvbm4pCiBib29sIGRvbWFpbl9jYW5fd3JpdGUoc3Ry
dWN0IGNvbm5lY3Rpb24gKmNvbm4pCiB7CiAJc3RydWN0IHhlbnN0b3JlX2Rv
bWFpbl9pbnRlcmZhY2UgKmludGYgPSBjb25uLT5kb21haW4tPmludGVyZmFj
ZTsKKworCWlmIChjb25uLT5pc19pZ25vcmVkKQorCQlyZXR1cm4gZmFsc2U7
CisKIAlyZXR1cm4gKChpbnRmLT5yc3BfcHJvZCAtIGludGYtPnJzcF9jb25z
KSAhPSBYRU5TVE9SRV9SSU5HX1NJWkUpOwogfQogCg==

--=separator
Content-Type: application/octet-stream; name="xsa325-4.14.patch"
Content-Disposition: attachment; filename="xsa325-4.14.patch"
Content-Transfer-Encoding: base64

RnJvbTogSGFyc2hhIFNoYW1zdW5kYXJhIEhhdmFudXIgPGhhdmFudXJAYW1h
em9uLmNvbT4KU3ViamVjdDogdG9vbHMveGVuc3RvcmU6IFByZXNlcnZlIGJh
ZCBjbGllbnQgdW50aWwgdGhleSBhcmUgZGVzdHJveWVkCgpYZW5TdG9yZWQg
d2lsbCBraWxsIGFueSBjb25uZWN0aW9uIHRoYXQgaXQgdGhpbmtzIGhhcyBt
aXNiZWhhdmVkLAp0aGlzIGlzIGN1cnJlbnRseSBoYXBwZW5pbmcgaW4gdHdv
IHBsYWNlczoKICogSW4gYGhhbmRsZV9pbnB1dCgpYCBpZiB0aGUgc2FuaXR5
IGNoZWNrIG9uIHRoZSByaW5nIGFuZCB0aGUgbWVzc2FnZQogICBmYWlscy4K
ICogSW4gYGhhbmRsZV9vdXRwdXQoKWAgd2hlbiBmYWlsaW5nIHRvIHdyaXRl
IHRoZSByZXNwb25zZSBpbiB0aGUgcmluZy4KCkFzIHRoZSBkb21haW4gc3Ry
dWN0dXJlIGlzIGEgY2hpbGQgb2YgdGhlIGNvbm5lY3Rpb24sIFhlblN0b3Jl
ZCB3aWxsCmRlc3Ryb3kgaXRzIHZpZXcgb2YgdGhlIGRvbWFpbiB3aGVuIGtp
bGxpbmcgdGhlIGNvbm5lY3Rpb24uIFRoaXMgd2lsbApyZXN1bHQgaW4gc2Vu
ZGluZyBAcmVsZWFzZURvbWFpbiBldmVudCB0byBhbGwgdGhlIHdhdGNoZXJz
LgoKQXMgdGhlIHdhdGNoIGV2ZW50IGRvZXNuJ3QgY2Fycnkgd2hpY2ggZG9t
YWluIGhhcyBiZWVuIHJlbGVhc2VkLAp0aGUgd2F0Y2hlciAoc3VjaCBhcyBY
ZW5TdG9yZWQpIHdpbGwgZ2VuZXJhbGx5IGdvIHRocm91Z2ggdGhlIGxpc3Qg
b2YKZG9tYWlucyByZWdpc3RlcnMgYW5kIGNoZWNrIGlmIG9uZSBvZiB0aGVt
IGlzIHNodXR0aW5nIGRvd24vZHlpbmcuCkluIHRoZSBjYXNlIG9mIGEgY2xp
ZW50IG1pc2JlaGF2aW5nLCB0aGUgZG9tYWluIHdpbGwgbGlrZWx5IHRvIGJl
CnJ1bm5pbmcsIHNvIG5vIGFjdGlvbiB3aWxsIGJlIHBlcmZvcm1lZC4KCldo
ZW4gdGhlIGRvbWFpbiBpcyBlZmZlY3RpdmVseSBkZXN0cm95ZWQsIFhlblN0
b3JlZCB3aWxsIG5vdCBiZSBhd2FyZSBvZgp0aGUgZG9tYWluIGFueW1vcmUu
IFNvIHRoZSB3YXRjaCBldmVudCBpcyBub3QgZ29pbmcgdG8gYmUgc2VudC4K
QnkgY29uc2VxdWVuY2UsIHRoZSB3YXRjaGVycyBvZiB0aGUgZXZlbnQgd2ls
bCBub3QgcmVsZWFzZSBtYXBwaW5ncwp0aGV5IG1heSBoYXZlIG9uIHRoZSBk
b21haW4uIFRoaXMgd2lsbCByZXN1bHQgaW4gYSB6b21iaWUgZG9tYWluLgoK
SW4gb3JkZXIgdG8gc2VuZCBAcmVsZWFzZURvbWFpbiBldmVudCBhdCB0aGUg
Y29ycmVjdCB0aW1lLCB3ZSB3YW50CnRvIGtlZXAgdGhlIGRvbWFpbiBzdHJ1
Y3R1cmUgdW50aWwgdGhlIGRvbWFpbiBpcyBlZmZlY3RpdmVseQpzaHV0dGlu
Zy1kb3duL2R5aW5nLgoKV2UgYWxzbyB3YW50IHRvIGtlZXAgdGhlIGNvbm5l
Y3Rpb24gYXJvdW5kIHNvIHdlIGNvdWxkIHBvc3NpYmx5IHJldml2ZQp0aGUg
Y29ubmVjdGlvbiBpbiB0aGUgZnV0dXJlLgoKQSBuZXcgZmxhZyAnaXNfaWdu
b3JlZCcgaXMgYWRkZWQgdG8gbWFyayB3aGV0aGVyIGEgY29ubmVjdGlvbiBz
aG91bGQgYmUKaWdub3JlZCB3aGVuIGNoZWNraW5nIGlmIHRoZXJlIGFyZSB3
b3JrIHRvIGRvLiBBZGRpdGlvbmFsbHkgYW55CnRyYW5zYWN0aW9ucywgd2F0
Y2hlcywgYnVmZmVycyBhc3NvY2lhdGVkIHRvIHRoZSBjb25uZWN0aW9uIHdp
bGwgYmUKZnJlZWQgYXMgeW91IGNhbid0IGRvIG11Y2ggd2l0aCB0aGVtIChy
ZXN0YXJ0aW5nIHRoZSBjb25uZWN0aW9uIHdpbGwKbGlrZWx5IG5lZWQgYSBy
ZXNldCkuCgpBcyBhIHNpZGUgbm90ZSwgd2hlbiB0aGUgZGV2aWNlIG1vZGVs
IHdlcmUgcnVubmluZyBpbiBhIHN0dWJkb21haW4sIGEKZ3Vlc3Qgd291bGQg
aGF2ZSBiZWVuIGFibGUgdG8gaW50cm9kdWNlIGEgdXNlLWFmdGVyLWZyZWUg
YmVjYXVzZSB0aGVyZQppcyB0d28gcGFyZW50cyBmb3IgYSBndWVzdCBjb25u
ZWN0aW9uLgoKVGhpcyBpcyBYU0EtMzI1LgoKUmVwb3J0ZWQtYnk6IFBhd2Vs
IFdpZWN6b3JraWV3aWN6IDx3aXBhd2VsQGFtYXpvbi5kZT4KU2lnbmVkLW9m
Zi1ieTogSGFyc2hhIFNoYW1zdW5kYXJhIEhhdmFudXIgPGhhdmFudXJAYW1h
em9uLmNvbT4KU2lnbmVkLW9mZi1ieTogSnVsaWVuIEdyYWxsIDxqZ3JhbGxA
YW1hem9uLmNvbT4KUmV2aWV3ZWQtYnk6IEp1ZXJnZW4gR3Jvc3MgPGpncm9z
c0BzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IFBhdWwgRHVycmFudCA8cGF1bEB4
ZW4ub3JnPgoKZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3Jl
ZF9jb3JlLmMgYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jCmlu
ZGV4IGFmM2QxNzAwNGIzZi4uMjdkOGYxNWI2Yjc2IDEwMDY0NAotLS0gYS90
b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jCisrKyBiL3Rvb2xzL3hl
bnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMKQEAgLTEzNTUsNiArMTM1NSwzMiBA
QCBzdGF0aWMgc3RydWN0IHsKIAlbWFNfRElSRUNUT1JZX1BBUlRdICAgID0g
eyAiRElSRUNUT1JZX1BBUlQiLCAgICBzZW5kX2RpcmVjdG9yeV9wYXJ0IH0s
CiB9OwogCisvKgorICogS2VlcCB0aGUgY29ubmVjdGlvbiBhbGl2ZSBidXQg
c3RvcCBwcm9jZXNzaW5nIGFueSBuZXcgcmVxdWVzdCBvciBzZW5kaW5nCisg
KiByZXBvbnNlLiBUaGlzIGlzIHRvIGFsbG93IHNlbmRpbmcgQHJlbGVhc2VE
b21haW4gd2F0Y2ggZXZlbnQgYXQgdGhlIGNvcnJlY3QKKyAqIG1vbWVudCBh
bmQvb3IgdG8gYWxsb3cgdGhlIGNvbm5lY3Rpb24gdG8gcmVzdGFydCAobm90
IHlldCBpbXBsZW1lbnRlZCkuCisgKgorICogQWxsIHdhdGNoZXMsIHRyYW5z
YWN0aW9ucywgYnVmZmVycyB3aWxsIGJlIGZyZWVkLgorICovCitzdGF0aWMg
dm9pZCBpZ25vcmVfY29ubmVjdGlvbihzdHJ1Y3QgY29ubmVjdGlvbiAqY29u
bikKK3sKKwlzdHJ1Y3QgYnVmZmVyZWRfZGF0YSAqb3V0LCAqdG1wOworCisJ
dHJhY2UoIkNPTk4gJXAgaWdub3JlZFxuIiwgY29ubik7CisKKwljb25uLT5p
c19pZ25vcmVkID0gdHJ1ZTsKKwljb25uX2RlbGV0ZV9hbGxfd2F0Y2hlcyhj
b25uKTsKKwljb25uX2RlbGV0ZV9hbGxfdHJhbnNhY3Rpb25zKGNvbm4pOwor
CisJbGlzdF9mb3JfZWFjaF9lbnRyeV9zYWZlKG91dCwgdG1wLCAmY29ubi0+
b3V0X2xpc3QsIGxpc3QpIHsKKwkJbGlzdF9kZWwoJm91dC0+bGlzdCk7CisJ
CXRhbGxvY19mcmVlKG91dCk7CisJfQorCisJdGFsbG9jX2ZyZWUoY29ubi0+
aW4pOworCWNvbm4tPmluID0gTlVMTDsKK30KKwogc3RhdGljIGNvbnN0IGNo
YXIgKnNvY2ttc2dfc3RyaW5nKGVudW0geHNkX3NvY2ttc2dfdHlwZSB0eXBl
KQogewogCWlmICgodW5zaWduZWQgaW50KXR5cGUgPCBBUlJBWV9TSVpFKHdp
cmVfZnVuY3MpICYmIHdpcmVfZnVuY3NbdHlwZV0uc3RyKQpAQCAtMTQxMyw4
ICsxNDM5LDEwIEBAIHN0YXRpYyB2b2lkIGNvbnNpZGVyX21lc3NhZ2Uoc3Ry
dWN0IGNvbm5lY3Rpb24gKmNvbm4pCiAJYXNzZXJ0KGNvbm4tPmluID09IE5V
TEwpOwogfQogCi0vKiBFcnJvcnMgaW4gcmVhZGluZyBvciBhbGxvY2F0aW5n
IGhlcmUgbWVhbiB3ZSBnZXQgb3V0IG9mIHN5bmMsIHNvIHdlCi0gKiBkcm9w
IHRoZSB3aG9sZSBjbGllbnQgY29ubmVjdGlvbi4gKi8KKy8qCisgKiBFcnJv
cnMgaW4gcmVhZGluZyBvciBhbGxvY2F0aW5nIGhlcmUgbWVhbnMgd2UgZ2V0
IG91dCBvZiBzeW5jLCBzbyB3ZSBtYXJrCisgKiB0aGUgY29ubmVjdGlvbiBh
cyBpZ25vcmVkLgorICovCiBzdGF0aWMgdm9pZCBoYW5kbGVfaW5wdXQoc3Ry
dWN0IGNvbm5lY3Rpb24gKmNvbm4pCiB7CiAJaW50IGJ5dGVzOwpAQCAtMTQ3
MSwxNCArMTQ5OSwxNCBAQCBzdGF0aWMgdm9pZCBoYW5kbGVfaW5wdXQoc3Ry
dWN0IGNvbm5lY3Rpb24gKmNvbm4pCiAJcmV0dXJuOwogCiBiYWRfY2xpZW50
OgotCS8qIEtpbGwgaXQuICovCi0JdGFsbG9jX2ZyZWUoY29ubik7CisJaWdu
b3JlX2Nvbm5lY3Rpb24oY29ubik7CiB9CiAKIHN0YXRpYyB2b2lkIGhhbmRs
ZV9vdXRwdXQoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4pCiB7CisJLyogSWdu
b3JlIHRoZSBjb25uZWN0aW9uIGlmIGFuIGVycm9yIG9jY3VyZWQgKi8KIAlp
ZiAoIXdyaXRlX21lc3NhZ2VzKGNvbm4pKQotCQl0YWxsb2NfZnJlZShjb25u
KTsKKwkJaWdub3JlX2Nvbm5lY3Rpb24oY29ubik7CiB9CiAKIHN0cnVjdCBj
b25uZWN0aW9uICpuZXdfY29ubmVjdGlvbihjb25ud3JpdGVmbl90ICp3cml0
ZSwgY29ubnJlYWRmbl90ICpyZWFkKQpAQCAtMTQ5NCw2ICsxNTIyLDcgQEAg
c3RydWN0IGNvbm5lY3Rpb24gKm5ld19jb25uZWN0aW9uKGNvbm53cml0ZWZu
X3QgKndyaXRlLCBjb25ucmVhZGZuX3QgKnJlYWQpCiAJbmV3LT53cml0ZSA9
IHdyaXRlOwogCW5ldy0+cmVhZCA9IHJlYWQ7CiAJbmV3LT5jYW5fd3JpdGUg
PSB0cnVlOworCW5ldy0+aXNfaWdub3JlZCA9IGZhbHNlOwogCW5ldy0+dHJh
bnNhY3Rpb25fc3RhcnRlZCA9IDA7CiAJSU5JVF9MSVNUX0hFQUQoJm5ldy0+
b3V0X2xpc3QpOwogCUlOSVRfTElTVF9IRUFEKCZuZXctPndhdGNoZXMpOwpA
QCAtMjE4Niw4ICsyMjE1LDkgQEAgaW50IG1haW4oaW50IGFyZ2MsIGNoYXIg
KmFyZ3ZbXSkKIAkJCQkJaWYgKGZkc1tjb25uLT5wb2xsZmRfaWR4XS5yZXZl
bnRzCiAJCQkJCSAgICAmIH4oUE9MTElOfFBPTExPVVQpKQogCQkJCQkJdGFs
bG9jX2ZyZWUoY29ubik7Ci0JCQkJCWVsc2UgaWYgKGZkc1tjb25uLT5wb2xs
ZmRfaWR4XS5yZXZlbnRzCi0JCQkJCQkgJiBQT0xMSU4pCisJCQkJCWVsc2Ug
aWYgKChmZHNbY29ubi0+cG9sbGZkX2lkeF0ucmV2ZW50cworCQkJCQkJICAm
IFBPTExJTikgJiYKKwkJCQkJCSAhY29ubi0+aXNfaWdub3JlZCkKIAkJCQkJ
CWhhbmRsZV9pbnB1dChjb25uKTsKIAkJCQl9CiAJCQkJaWYgKHRhbGxvY19m
cmVlKGNvbm4pID09IDApCkBAIC0yMTk5LDggKzIyMjksOSBAQCBpbnQgbWFp
bihpbnQgYXJnYywgY2hhciAqYXJndltdKQogCQkJCQlpZiAoZmRzW2Nvbm4t
PnBvbGxmZF9pZHhdLnJldmVudHMKIAkJCQkJICAgICYgfihQT0xMSU58UE9M
TE9VVCkpCiAJCQkJCQl0YWxsb2NfZnJlZShjb25uKTsKLQkJCQkJZWxzZSBp
ZiAoZmRzW2Nvbm4tPnBvbGxmZF9pZHhdLnJldmVudHMKLQkJCQkJCSAmIFBP
TExPVVQpCisJCQkJCWVsc2UgaWYgKChmZHNbY29ubi0+cG9sbGZkX2lkeF0u
cmV2ZW50cworCQkJCQkJICAmIFBPTExPVVQpICYmCisJCQkJCQkgIWNvbm4t
PmlzX2lnbm9yZWQpCiAJCQkJCQloYW5kbGVfb3V0cHV0KGNvbm4pOwogCQkJ
CX0KIAkJCQlpZiAodGFsbG9jX2ZyZWUoY29ubikgPT0gMCkKZGlmZiAtLWdp
dCBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmggYi90b29scy94
ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5oCmluZGV4IGViMTliNzFmNWY0Ni4u
MTk2YTZmZDJiMGJlIDEwMDY0NAotLS0gYS90b29scy94ZW5zdG9yZS94ZW5z
dG9yZWRfY29yZS5oCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9j
b3JlLmgKQEAgLTgwLDYgKzgwLDkgQEAgc3RydWN0IGNvbm5lY3Rpb24KIAkv
KiBJcyB0aGlzIGEgcmVhZC1vbmx5IGNvbm5lY3Rpb24/ICovCiAJYm9vbCBj
YW5fd3JpdGU7CiAKKwkvKiBJcyB0aGlzIGNvbm5lY3Rpb24gaWdub3JlZD8g
Ki8KKwlib29sIGlzX2lnbm9yZWQ7CisKIAkvKiBCdWZmZXJlZCBpbmNvbWlu
ZyBkYXRhLiAqLwogCXN0cnVjdCBidWZmZXJlZF9kYXRhICppbjsKIApkaWZm
IC0tZ2l0IGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2RvbWFpbi5jIGIv
dG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2RvbWFpbi5jCmluZGV4IGRjNjM1
ZTliZTMwYy4uZDVlMWUzZTlkNDJkIDEwMDY0NAotLS0gYS90b29scy94ZW5z
dG9yZS94ZW5zdG9yZWRfZG9tYWluLmMKKysrIGIvdG9vbHMveGVuc3RvcmUv
eGVuc3RvcmVkX2RvbWFpbi5jCkBAIC0yODYsNiArMjg2LDEwIEBAIGJvb2wg
ZG9tYWluX2Nhbl9yZWFkKHN0cnVjdCBjb25uZWN0aW9uICpjb25uKQogCiAJ
aWYgKGRvbWFpbl9pc191bnByaXZpbGVnZWQoY29ubikgJiYgY29ubi0+ZG9t
YWluLT53cmxfY3JlZGl0IDwgMCkKIAkJcmV0dXJuIGZhbHNlOworCisJaWYg
KGNvbm4tPmlzX2lnbm9yZWQpCisJCXJldHVybiBmYWxzZTsKKwogCXJldHVy
biAoaW50Zi0+cmVxX2NvbnMgIT0gaW50Zi0+cmVxX3Byb2QpOwogfQogCkBA
IC0zMDMsNiArMzA3LDEwIEBAIGJvb2wgZG9tYWluX2lzX3VucHJpdmlsZWdl
ZChzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubikKIGJvb2wgZG9tYWluX2Nhbl93
cml0ZShzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubikKIHsKIAlzdHJ1Y3QgeGVu
c3RvcmVfZG9tYWluX2ludGVyZmFjZSAqaW50ZiA9IGNvbm4tPmRvbWFpbi0+
aW50ZXJmYWNlOworCisJaWYgKGNvbm4tPmlzX2lnbm9yZWQpCisJCXJldHVy
biBmYWxzZTsKKwogCXJldHVybiAoKGludGYtPnJzcF9wcm9kIC0gaW50Zi0+
cnNwX2NvbnMpICE9IFhFTlNUT1JFX1JJTkdfU0laRSk7CiB9CiAKLS0gCjIu
MTcuMQoK

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 12:29:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 12:29:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53293.92955 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9S4-00086o-MF; Tue, 15 Dec 2020 12:29:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53293.92955; Tue, 15 Dec 2020 12:29:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9S4-00086g-Iw; Tue, 15 Dec 2020 12:29:12 +0000
Received: by outflank-mailman (input) for mailman id 53293;
 Tue, 15 Dec 2020 12:29:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tdgx=FT=xenbits.xen.org=gdunlap@srs-us1.protection.inumbo.net>)
 id 1kp9LE-0004t1-Np
 for xen-devel@lists.xen.org; Tue, 15 Dec 2020 12:22:08 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f9ed13a3-001c-42d7-bd7b-5df69308f6e2;
 Tue, 15 Dec 2020 12:20:29 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9JY-0005iy-HV; Tue, 15 Dec 2020 12:20:24 +0000
Received: from gdunlap by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9JY-00077n-Gf; Tue, 15 Dec 2020 12:20:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f9ed13a3-001c-42d7-bd7b-5df69308f6e2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=uYWuLcCdiUHZLGjev7Gvd/q5tS79ig4/rGX9a5apYZ0=; b=f6/CH+jpFM6gHWYcAuxqThNC0K
	i86i5vqH7m91T4pFPOm++A6y8td5XvdCLy/WbntkXWDNH4vJ6dmfdQHlesfdng3+zqvHZQci2Xluf
	rvwgBEdIJ21pOpEO8SmD7UsVct+OzwE0Yhr3MkjCLrOhEN3L0I8kjVjIrkUKvLbvuois=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 352 v3 (CVE-2020-29486) - oxenstored: node
 ownership can be changed by unprivileged clients
Message-Id: <E1kp9JY-00077n-Gf@xenbits.xenproject.org>
Date: Tue, 15 Dec 2020 12:20:24 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2020-29486 / XSA-352
                               version 3

   oxenstored: node ownership can be changed by unprivileged clients

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

Nodes in xenstore have an owner.  In oxenstored, an owner could
give a node away.  But node ownership has quota implications.

Any guest can run another guest out of quota, or create an unbounded
number of nodes owned by dom0, thus running xenstored out of memory.
IMPACT
======

A malicious guest administrator can cause denial of service, against a
specific guest or against the whole host.

VULNERABLE SYSTEMS
==================

All systems using oxenstored are vulnerable.  Building and using
oxenstored is the default in the upstream Xen distribution, if the
OCaml compiler is available.

Systems using C xenstored are not vulnerable.

MITIGATION
==========

There are no mitigations.

Switching to the C xenstored would avoid this vulnerability.  However,
given the other vulnerabilities in both versions of xenstored being
reported at this time, changing xenstored implementation is not a
recommended approach to mitigation of individual issues.

CREDITS
=======

This issue was discovered by Edwin Török of Citrix.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa352.patch           xen-unstable - 4.10

$ sha256sum xsa352*
a3b2b2bd4c6b49c472df23f88fb9a5e204d2ba3cd0c3901f8ed057566ef98c85  xsa352.meta
6f9798e20282d4e06f0a8a1abd0d147649e20b33c21559d5a1ea0b1a73a2a4e4  xsa352.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.


(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl/Yqd8MHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZ/JgH/Rb3BDBjWi+fTDsPMr21yDsrCWGzpyBabflpglQt
b3rTDEx7YlNCkb32xYvZLR9mGAGg8X01zIQVKOQ10Hnib6Vx4TvcdwPqSYGMn3U6
4g3TmWpZJZNfCIbdznXGhOmTLZzVEGDZu1+S+mE3aAdtDGEE98p9P/J43dEt/kWX
R/DcMrCe9LOHKi+MCxZqAFlbZ79QJls6G/sH6VWSUp/Bq8hCtsd/C0Jk3LIBZgnW
V3SUYLhR7Tp7Pkda4m4lVLlvCo+9jlVwevs/MmvyFulxUrDN1/9LrHpZyJ7ZMBwt
2N7zpJpdrY5JiEH6d4fuVUsH78+9+zVxs5PFDXUc7ud2QyA=
=ofMB
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa352.meta"
Content-Disposition: attachment; filename="xsa352.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzNTIsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIs
CiAgICAiNC4xMSIsCiAgICAiNC4xMCIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjEwIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICIxZDcyZDk5MTVlZGZmMGRkNDFmNjAxYmJiMGIxZjgzYzAy
ZmYxNjg5IiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
NTMsCiAgICAgICAgICAgIDExNSwKICAgICAgICAgICAgMzIyLAogICAgICAg
ICAgICAzMjMsCiAgICAgICAgICAgIDMyNCwKICAgICAgICAgICAgMzI1LAog
ICAgICAgICAgICAzMzAKICAgICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hl
cyI6IFsKICAgICAgICAgICAgInhzYTM1Mi5wYXRjaCIKICAgICAgICAgIF0K
ICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAgICAiNC4xMSI6IHsKICAgICAg
IlJlY2lwZXMiOiB7CiAgICAgICAgInhlbiI6IHsKICAgICAgICAgICJTdGFi
bGVSZWYiOiAiNDFhODIyYzM5MjYzNTBmMjY5MTdkNzQ3YzhkZmVkMWM0NGEy
Y2Y0MiIsCiAgICAgICAgICAiUHJlcmVxcyI6IFsKICAgICAgICAgICAgMzUz
LAogICAgICAgICAgICAxMTUsCiAgICAgICAgICAgIDMyMiwKICAgICAgICAg
ICAgMzIzLAogICAgICAgICAgICAzMjQsCiAgICAgICAgICAgIDMyNSwKICAg
ICAgICAgICAgMzMwCiAgICAgICAgICBdLAogICAgICAgICAgIlBhdGNoZXMi
OiBbCiAgICAgICAgICAgICJ4c2EzNTIucGF0Y2giCiAgICAgICAgICBdCiAg
ICAgICAgfQogICAgICB9CiAgICB9LAogICAgIjQuMTIiOiB7CiAgICAgICJS
ZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAiU3RhYmxl
UmVmIjogIjgxNDVkMzhiNDgwMDkyNTVhMzJhYjg3YTAyZTQ4MWNkMDljODEx
ZjkiLAogICAgICAgICAgIlByZXJlcXMiOiBbCiAgICAgICAgICAgIDM1MywK
ICAgICAgICAgICAgMTE1LAogICAgICAgICAgICAzMjIsCiAgICAgICAgICAg
IDMyMywKICAgICAgICAgICAgMzI0LAogICAgICAgICAgICAzMjUsCiAgICAg
ICAgICAgIDMzMAogICAgICAgICAgXSwKICAgICAgICAgICJQYXRjaGVzIjog
WwogICAgICAgICAgICAieHNhMzUyLnBhdGNoIgogICAgICAgICAgXQogICAg
ICAgIH0KICAgICAgfQogICAgfSwKICAgICI0LjEzIjogewogICAgICAiUmVj
aXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJl
ZiI6ICJiNTMwMjI3M2UyYzUxOTQwMTcyNDAwNDg2NjQ0NjM2ZjJmNGZjNjRh
IiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAzNTMsCiAg
ICAgICAgICAgIDExNSwKICAgICAgICAgICAgMzIyLAogICAgICAgICAgICAz
MjMsCiAgICAgICAgICAgIDMyNCwKICAgICAgICAgICAgMzI1LAogICAgICAg
ICAgICAzMzAKICAgICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsK
ICAgICAgICAgICAgInhzYTM1Mi5wYXRjaCIKICAgICAgICAgIF0KICAgICAg
ICB9CiAgICAgIH0KICAgIH0sCiAgICAiNC4xNCI6IHsKICAgICAgIlJlY2lw
ZXMiOiB7CiAgICAgICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYi
OiAiMWQxZDFmNTM5MTk3NjQ1NmE3OWRhYWMwZGNmZTcxNTdkYTFlNTRmNyIs
CiAgICAgICAgICAiUHJlcmVxcyI6IFsKICAgICAgICAgICAgMzUzLAogICAg
ICAgICAgICAxMTUsCiAgICAgICAgICAgIDMyMiwKICAgICAgICAgICAgMzIz
LAogICAgICAgICAgICAzMjQsCiAgICAgICAgICAgIDMyNSwKICAgICAgICAg
ICAgMzMwCiAgICAgICAgICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAg
ICAgICAgICAgICJ4c2EzNTIucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAg
fQogICAgICB9CiAgICB9LAogICAgIm1hc3RlciI6IHsKICAgICAgIlJlY2lw
ZXMiOiB7CiAgICAgICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYi
OiAiM2FlNDY5YWY4ZTY4MGRmMzFlZWNkMGEyYWM2YTgzYjU4YWQ3Y2U1MyIs
CiAgICAgICAgICAiUHJlcmVxcyI6IFsKICAgICAgICAgICAgMzUzLAogICAg
ICAgICAgICAxMTUsCiAgICAgICAgICAgIDMyMiwKICAgICAgICAgICAgMzIz
LAogICAgICAgICAgICAzMjQsCiAgICAgICAgICAgIDMyNSwKICAgICAgICAg
ICAgMzMwCiAgICAgICAgICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAg
ICAgICAgICAgICJ4c2EzNTIucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAg
fQogICAgICB9CiAgICB9CiAgfQp9

--=separator
Content-Type: application/octet-stream; name="xsa352.patch"
Content-Disposition: attachment; filename="xsa352.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogb25seSBEb20wIGNhbiBjaGFuZ2Ugbm9kZSBvd25lcgpNSU1F
LVZlcnNpb246IDEuMApDb250ZW50LVR5cGU6IHRleHQvcGxhaW47IGNoYXJz
ZXQ9VVRGLTgKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogOGJpdAoKT3Ro
ZXJ3aXNlIHdlIGNhbiBnaXZlIHF1b3RhIGF3YXkgdG8gYW5vdGhlciBkb21h
aW4sIGVpdGhlciBjYXVzaW5nIGl0IHRvIHJ1bgpvdXQgb2YgcXVvdGEsIG9y
IGluIGNhc2Ugb2YgRG9tMCB1c2UgdW5ib3VuZGVkIGFtb3VudHMgb2YgbWVt
b3J5IGFuZCBieXBhc3MKdGhlIHF1b3RhIHN5c3RlbSBlbnRpcmVseS4KClRo
aXMgd2FzIGZpeGVkIGluIHRoZSBDIHZlcnNpb24gb2YgeGVuc3RvcmVkIGlu
IDIwMDYgKGMvcyBkYjM0ZDJhYWE1ZjUsCnByZWRhdGluZyB0aGUgWFNBIHBy
b2Nlc3MgYnkgNSB5ZWFycykuCgpJdCB3YXMgYWxzbyBmaXhlZCBpbiB0aGUg
bWlyYWdlIHZlcnNpb24gb2YgeGVuc3RvcmUgaW4gMjAxMiwgd2l0aCBhIHVu
aXQgdGVzdApkZW1vbnN0cmF0aW5nIHRoZSB2dWxuZXJhYmlsaXR5OgoKICBo
dHRwczovL2dpdGh1Yi5jb20vbWlyYWdlL29jYW1sLXhlbnN0b3JlL2NvbW1p
dC82YjkxZjNhYzQ2Yjg4NWQwNTMwYTUxZDU3YTliM2E1N2Q2NDkyM2E3CiAg
aHR0cHM6Ly9naXRodWIuY29tL21pcmFnZS9vY2FtbC14ZW5zdG9yZS9jb21t
aXQvMjJlZTU0MTdjOTBiOGZkYTkwNWMzOGRlMGQ1MzQ1MDYxNTJlYWNlNgoK
YnV0IHBvc3NpYmx5IHdpdGhvdXQgcmVhbGlzaW5nIHRoYXQgdGhlIHZ1bG5l
cmFiaWxpdHkgc3RpbGwgYWZmZWN0ZWQgdGhlCmluLXRyZWUgb3hlbnN0b3Jl
ZCAoYWRkZWQgYy9zIGY0NGFmNjYwNDEyIGluIDIwMTApLgoKVGhpcyBpcyBY
U0EtMzUyLgoKU2lnbmVkLW9mZi1ieTogRWR3aW4gVMO2csO2ayA8ZWR2aW4u
dG9yb2tAY2l0cml4LmNvbT4KQWNrZWQtYnk6IENocmlzdGlhbiBMaW5kaWcg
PGNocmlzdGlhbi5saW5kaWdAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEFu
ZHJldyBDb29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+CgpkaWZm
IC0tZ2l0IGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3N0b3JlLm1sIGIvdG9v
bHMvb2NhbWwveGVuc3RvcmVkL3N0b3JlLm1sCmluZGV4IDNiMDUxMjhmMWIu
LjVmOTE1ZjJiYmUgMTAwNjQ0Ci0tLSBhL3Rvb2xzL29jYW1sL3hlbnN0b3Jl
ZC9zdG9yZS5tbAorKysgYi90b29scy9vY2FtbC94ZW5zdG9yZWQvc3RvcmUu
bWwKQEAgLTQwNyw3ICs0MDcsOCBAQCBsZXQgc2V0cGVybXMgc3RvcmUgcGVy
bSBwYXRoIG5wZXJtcyA9CiAJfCBTb21lIG5vZGUgLT4KIAkJbGV0IG9sZF9v
d25lciA9IE5vZGUuZ2V0X293bmVyIG5vZGUgaW4KIAkJbGV0IG5ld19vd25l
ciA9IFBlcm1zLk5vZGUuZ2V0X293bmVyIG5wZXJtcyBpbgotCQlpZiBub3Qg
KChvbGRfb3duZXIgPSBuZXdfb3duZXIpIHx8IChQZXJtcy5Db25uZWN0aW9u
LmlzX2RvbTAgcGVybSkpIHRoZW4gUXVvdGEuY2hlY2sgc3RvcmUucXVvdGEg
bmV3X293bmVyIDA7CisJCWlmIG5vdCAoKG9sZF9vd25lciA9IG5ld19vd25l
cikgfHwgKFBlcm1zLkNvbm5lY3Rpb24uaXNfZG9tMCBwZXJtKSkgdGhlbgor
CQkJcmFpc2UgRGVmaW5lLlBlcm1pc3Npb25fZGVuaWVkOwogCQlzdG9yZS5y
b290IDwtIHBhdGhfc2V0cGVybXMgc3RvcmUgcGVybSBwYXRoIG5wZXJtczsK
IAkJUXVvdGEuZGVsX2VudHJ5IHN0b3JlLnF1b3RhIG9sZF9vd25lcjsKIAkJ
UXVvdGEuYWRkX2VudHJ5IHN0b3JlLnF1b3RhIG5ld19vd25lcgo=

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 12:29:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 12:29:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53283.92943 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9S2-000858-9b; Tue, 15 Dec 2020 12:29:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53283.92943; Tue, 15 Dec 2020 12:29:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9S2-00084w-6O; Tue, 15 Dec 2020 12:29:10 +0000
Received: by outflank-mailman (input) for mailman id 53283;
 Tue, 15 Dec 2020 12:29:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tdgx=FT=xenbits.xen.org=gdunlap@srs-us1.protection.inumbo.net>)
 id 1kp9L4-0004t1-NO
 for xen-devel@lists.xen.org; Tue, 15 Dec 2020 12:21:58 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 279c1cd4-6af2-4629-b2e7-f45c24386efb;
 Tue, 15 Dec 2020 12:20:28 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9JX-0005if-Il; Tue, 15 Dec 2020 12:20:23 +0000
Received: from gdunlap by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9JX-00076t-Hu; Tue, 15 Dec 2020 12:20:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 279c1cd4-6af2-4629-b2e7-f45c24386efb
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=5n8SeiIEVb8RcosU747UsxvMRIFOSkp4JJhpJmbwYlU=; b=drj5Pcpf1lbWB+dWDTvdLE2s2D
	N79UhxvEHl0QE2VDbdg7KzxNMd3YuqPG3QaH3lRhyjD3vw1uEjEtOK1V1TuNvjjtRwyW/z0KDzpr7
	9+cZ0qx9E7Dc7LNScz+53Ynh7KHXA2hKMlYvxJqKR4w60temtoTYWqHelYPEAnn6HZBA=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 350 v4 (CVE-2020-29569) - Use after free
 triggered by block frontend in Linux blkback
Message-Id: <E1kp9JX-00076t-Hu@xenbits.xenproject.org>
Date: Tue, 15 Dec 2020 12:20:23 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2020-29569 / XSA-350
                               version 4

      Use after free triggered by block frontend in Linux blkback

UPDATES IN VERSION 4
====================

Public release.

ISSUE DESCRIPTION
=================

The Linux kernel PV block backend expects the kernel thread handler
to reset ring->xenblkd to NULL when stopped. However, the handler may
not have time to run if the frontend quickly toggles between the
connect and disconnect states.

As a consequence, the block backend may reuse a pointer after it has
been freed.

IMPACT
======

A misbehaving guest can trigger a dom0 crash by continuously
connecting and disconnecting a block frontend. Privilege escalation
and information leaks cannot be ruled out.

VULNERABLE SYSTEMS
==================

Systems using Linux blkback are vulnerable.  This includes most
systems with a Linux dom0, or Linux driver domains.

Linux versions containing a24fa22ce22a ("xen/blkback: don't use
xen_blkif_get() in xen-blkback kthread"), or its backports, are
vulnerable.  This includes all current linux-stable branches back to
at least linux-stable/linux-4.4.y.

When the Xen PV block backend is provided by userspace (e.g. qemu),
that backend is not vulnerable.  So configurations where the xl.cfg
domain configuration file specifies all disks with
backendtype="qdisk" are not vulnerable.

The Linux blkback only supports raw format images, so when all disks
have a format other than format="raw", the system is not vulnerable.

MITIGATION
==========

Switching the disk backend to qemu with backendtype="qdisk" will avoid
the vulnerability.  This mitigation is not always available, depending
on the other aspects of the configuration.

CREDITS
=======

This issue was discovered by Olivier Benjamin and Pawel Wieczorkiewicz of
Amazon.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

xsa350-linux.patch     Linux

$ sha256sum xsa350*
46e8141bcfd21629043df0af4d237d6c264b27c1137fc84d4a1127ace30926c4  xsa350-linux.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches described above (or others which are
substantially similar) is permitted during the embargo, even on
public-facing systems with untrusted guest users and administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).


Deployment of the mitigation to change the block backend is NOT
permitted (except where all the affected systems and VMs are
administered and used only by organisations which are members of the
Xen Project Security Issues Predisclosure List).  Specifically,
deployment on public cloud systems is NOT permitted.

This is because the mitigation is a guest-visible change, which would
indicate that it is the block backend which has a vulnerability.

Deployment is permitted only AFTER the embargo ends.


Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQE/BAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl/Yqd8MHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZRusH9RGJFExFzCDQ/y99mvchhcIXGf4g0V373W9YrPAF
zUIiKBGEWuE07tY9YVKV5ocNnPQNdGwsnKJXPsFJAjW4DTDyL00e0yFUNQ7c1kTl
vdRgh0D5VtzIcaiqIC/4GjRzuBTQ3d9gTSOzJGhBS0yoIsZTSr5KyJBAiw1Slz7Y
IHmLZawGdQrDF6YpGLEXPRM7TxNNLn0wPqpPTxC+qMnTThdLuogf4HWLae7xHqX+
Q8b6KYxnkouq5sOddESglf+Gh+j9JHoLCIRm3XA4LrtGtQoUrvdqeS8rklRPH7Xk
yGP99M+J++KMx02ZJJUNrJmtSExDl35liz84qRiRfcKpxQ==
=qnB/
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa350-linux.patch"
Content-Disposition: attachment; filename="xsa350-linux.patch"
Content-Transfer-Encoding: base64

RnJvbTogQXV0aG9yIFJlZGFjdGVkIDxzZWN1cml0eUB4ZW4ub3JnPgpTdWJq
ZWN0OiBbUEFUQ0hdIHhlbi1ibGtiYWNrOiBzZXQgcmluZy0+eGVuYmxrZCB0
byBOVUxMIGFmdGVyIGt0aHJlYWRfc3RvcCgpCgpXaGVuIHhlbl9ibGtpZl9k
aXNjb25uZWN0KCkgaXMgY2FsbGVkLCB0aGUga2VybmVsIHRocmVhZCBiZWhp
bmQgdGhlCmJsb2NrIGludGVyZmFjZSBpcyBzdG9wcGVkIGJ5IGNhbGxpbmcg
a3RocmVhZF9zdG9wKHJpbmctPnhlbmJsa2QpLgpUaGUgcmluZy0+eGVuYmxr
ZCB0aHJlYWQgcG9pbnRlciBiZWluZyBub24tTlVMTCBkZXRlcm1pbmVzIGlm
IHRoZQp0aHJlYWQgaGFzIGJlZW4gYWxyZWFkeSBzdG9wcGVkLgpOb3JtYWxs
eSwgdGhlIHRocmVhZCdzIGZ1bmN0aW9uIHhlbl9ibGtpZl9zY2hlZHVsZSgp
IHNldHMgdGhlCnJpbmctPnhlbmJsa2QgdG8gTlVMTCwgd2hlbiB0aGUgdGhy
ZWFkJ3MgbWFpbiBsb29wIGVuZHMuCgpIb3dldmVyLCB3aGVuIHRoZSB0aHJl
YWQgaGFzIG5vdCBiZWVuIHN0YXJ0ZWQgeWV0IChpLmUuCndha2VfdXBfcHJv
Y2VzcygpIGhhcyBub3QgYmVlbiBjYWxsZWQgb24gaXQpLCB0aGUgeGVuX2Js
a2lmX3NjaGVkdWxlKCkKZnVuY3Rpb24gd291bGQgbm90IGJlIGNhbGxlZCB5
ZXQuCgpJbiBzdWNoIGNhc2UgdGhlIGt0aHJlYWRfc3RvcCgpIGNhbGwgcmV0
dXJucyAtRUlOVFIgYW5kIHRoZQpyaW5nLT54ZW5ibGtkIHJlbWFpbnMgZGFu
Z2xpbmcuCldoZW4gdGhpcyBoYXBwZW5zLCBhbnkgY29uc2VjdXRpdmUgY2Fs
bCB0byB4ZW5fYmxraWZfZGlzY29ubmVjdCAoZm9yCmV4YW1wbGUgaW4gZnJv
bnRlbmRfY2hhbmdlZCgpIGNhbGxiYWNrKSBsZWFkcyB0byBhIGtlcm5lbCBj
cmFzaCBpbgprdGhyZWFkX3N0b3AoKSAoZS5nLiBOVUxMIHBvaW50ZXIgZGVy
ZWZlcmVuY2UgaW4gZXhpdF9jcmVkcygpKS4KClRoaXMgaXMgWFNBLTM1MC4K
ClJlcG9ydGVkLWJ5OiBPbGl2aWVyIEJlbmphbWluIDxvbGliZW5AYW1hem9u
LmNvbT4KUmVwb3J0ZWQtYnk6IFBhd2VsIFdpZWN6b3JraWV3aWN6IDx3aXBh
d2VsQGFtYXpvbi5kZT4KUmV2aWV3ZWQtYnk6IEp1ZXJnZW4gR3Jvc3MgPGpn
cm9zc0BzdXNlLmNvbT4KU2lnbmVkLW9mZi1ieTogUGF3ZWwgV2llY3pvcmtp
ZXdpY3ogPHdpcGF3ZWxAYW1hem9uLmRlPgpGaXhlczogYTI0ZmEyMmNlMjJh
ICgieGVuL2Jsa2JhY2s6IGRvbid0IHVzZSB4ZW5fYmxraWZfZ2V0KCkgaW4g
eGVuLWJsa2JhY2sga3RocmVhZCIpClNpZ25lZC1vZmYtYnk6IEF1dGhvciBS
ZWRhY3RlZCA8c2VjdXJpdHlAeGVuLm9yZz4KUmV2aWV3ZWQtYnk6IEp1bGll
biBHcmFsbCA8amdyYWxsQGFtYXpvbi5jb20+ClJldmlld2VkLWJ5OiBKdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+Ci0tLQogZHJpdmVycy9ibG9j
ay94ZW4tYmxrYmFjay94ZW5idXMuYyB8IDEgKwogMSBmaWxlIGNoYW5nZWQs
IDEgaW5zZXJ0aW9uKCspCgpkaWZmIC0tZ2l0IGEvZHJpdmVycy9ibG9jay94
ZW4tYmxrYmFjay94ZW5idXMuYyBiL2RyaXZlcnMvYmxvY2sveGVuLWJsa2Jh
Y2sveGVuYnVzLmMKaW5kZXggZjU3MDU1NjllMmE3Li5mN2I5YjFmMzg5ZmUg
MTAwNjQ0Ci0tLSBhL2RyaXZlcnMvYmxvY2sveGVuLWJsa2JhY2sveGVuYnVz
LmMKKysrIGIvZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay94ZW5idXMuYwpA
QCAtMjc1LDYgKzI3NSw3IEBAIHN0YXRpYyBpbnQgeGVuX2Jsa2lmX2Rpc2Nv
bm5lY3Qoc3RydWN0IHhlbl9ibGtpZiAqYmxraWYpCiAKIAkJaWYgKHJpbmct
PnhlbmJsa2QpIHsKIAkJCWt0aHJlYWRfc3RvcChyaW5nLT54ZW5ibGtkKTsK
KwkJCXJpbmctPnhlbmJsa2QgPSBOVUxMOwogCQkJd2FrZV91cCgmcmluZy0+
c2h1dGRvd25fd3EpOwogCQl9CiAKLS0gCjIuMTcuMQoK

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 12:29:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 12:29:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53314.93021 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9SJ-0008Ob-SN; Tue, 15 Dec 2020 12:29:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53314.93021; Tue, 15 Dec 2020 12:29:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9SJ-0008OJ-N3; Tue, 15 Dec 2020 12:29:27 +0000
Received: by outflank-mailman (input) for mailman id 53314;
 Tue, 15 Dec 2020 12:29:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tdgx=FT=xenbits.xen.org=gdunlap@srs-us1.protection.inumbo.net>)
 id 1kp9KB-0004t1-LS
 for xen-devel@lists.xen.org; Tue, 15 Dec 2020 12:21:03 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a3f95b74-d345-4c82-b3f6-7b8e6c0f5962;
 Tue, 15 Dec 2020 12:20:25 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9JU-0005hi-Ky; Tue, 15 Dec 2020 12:20:20 +0000
Received: from gdunlap by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9JU-00073K-Jw; Tue, 15 Dec 2020 12:20:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a3f95b74-d345-4c82-b3f6-7b8e6c0f5962
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=92GPzvwfdZ6eh4w1oyvyPZtRd6zHFwxAqkbJrFVbRPo=; b=sjFhDTGswDILlEuD2Dg0AJUWxL
	8rnZyq8A/MpUna7Iwg1q5yv5xmRIDs0JAjDNZNSBpv4TgWzcxH6xew643E8duO241jA56Tp3WnuiD
	fUdcqAgNfvyVlQHN7PJnhD5fGZ4/QRWnr5inLJxsBYeINspuu0JQD2JLyVl3r+MvwZYs=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 330 v3 (CVE-2020-29485) - oxenstored memory
 leak in reset_watches
Message-Id: <E1kp9JU-00073K-Jw@xenbits.xenproject.org>
Date: Tue, 15 Dec 2020 12:20:20 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2020-29485 / XSA-330
                               version 3

                oxenstored memory leak in reset_watches

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

When acting upon a guest XS_RESET_WATCHES request, not all tracking
information is freed.

IMPACT
======

A guest can cause unbounded memory usage in oxenstored.  This can lead
to a system-wide DoS.

VULNERABLE SYSTEMS
==================

All versions of Xen since 4.6 are vulnerable.

Only systems using the OCaml Xenstored implementation are vulnerable.
Systems using the C Xenstored implementation are not vulnerable.

MITIGATION
==========

There are no mitigations.

Switching to the C xenstored implementation would avoid this
vulnerability.  However, given the other vulnerabilities in both
xenstored implementations being reported at this time, changing
xenstored implementation is not a recommended approach to mitigating
individual issues.

CREDITS
=======

This issue was discovered by Edwin Török of Citrix.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa330.patch           Xen 4.12 - xen-unstable
xsa330-4.11.patch      Xen 4.10 - 4.11

$ sha256sum xsa330*
efd95a883f227d63366a745b6007aa0c59cc612573235ba72108c8f89ecef7f3  xsa330.meta
1cda4fd8c91ceb132c5770d90375626521025e078c6ac1b53b68d78815997722  xsa330.patch
87284eaf6df92a78476f49a5587e28e1f5b9ca16ace5ad2e10b4b13abf50e034  xsa330-4.11.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.


(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl/Yqd8MHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZXCMH/i2lw6MRNCz3BFqan9PSE0pWGn1LxMpd/kSV0/eH
Y/TjXaCNcvK11d4fc1x8a0Wc3A/bu3uACpFFrcRuWgG5QkMKZRyOkQv7FwW1VaVd
u2NGJVetpfiDZhcSorAdS7CCJZEEt+3a7iFjH9cZKVEwZcS5Cq82UVog05MWLE80
pJ5Cid7K/urD1Zu/v3AGWESuaVYwdvwn6RcePVAs8b0sM2osYXBuKeMwOe1bXaBO
D5qPLEfLfOgLrXi77ssUzfmfRY6Z+LuQAhfug6Lv/n06Y9lyNXewmYalsnobGQSI
FTzWs0QVmFBMY/PEuZv3cRrihTs2ygu9HW7OLO2Bt+VKfcg=
=MqjK
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa330.meta"
Content-Disposition: attachment; filename="xsa330.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzMzAsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIs
CiAgICAiNC4xMSIsCiAgICAiNC4xMCIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjEwIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICIxZDcyZDk5MTVlZGZmMGRkNDFmNjAxYmJiMGIxZjgzYzAy
ZmYxNjg5IiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
NTMsCiAgICAgICAgICAgIDExNSwKICAgICAgICAgICAgMzIyLAogICAgICAg
ICAgICAzMjMsCiAgICAgICAgICAgIDMyNCwKICAgICAgICAgICAgMzI1CiAg
ICAgICAgICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAg
ICJ4c2EzMzAtNC4xMS5wYXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAg
ICAgIH0KICAgIH0sCiAgICAiNC4xMSI6IHsKICAgICAgIlJlY2lwZXMiOiB7
CiAgICAgICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiNDFh
ODIyYzM5MjYzNTBmMjY5MTdkNzQ3YzhkZmVkMWM0NGEyY2Y0MiIsCiAgICAg
ICAgICAiUHJlcmVxcyI6IFsKICAgICAgICAgICAgMzUzLAogICAgICAgICAg
ICAxMTUsCiAgICAgICAgICAgIDMyMiwKICAgICAgICAgICAgMzIzLAogICAg
ICAgICAgICAzMjQsCiAgICAgICAgICAgIDMyNQogICAgICAgICAgXSwKICAg
ICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzMwLTQuMTEu
cGF0Y2giCiAgICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9LAog
ICAgIjQuMTIiOiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4i
OiB7CiAgICAgICAgICAiU3RhYmxlUmVmIjogIjgxNDVkMzhiNDgwMDkyNTVh
MzJhYjg3YTAyZTQ4MWNkMDljODExZjkiLAogICAgICAgICAgIlByZXJlcXMi
OiBbCiAgICAgICAgICAgIDM1MywKICAgICAgICAgICAgMTE1LAogICAgICAg
ICAgICAzMjIsCiAgICAgICAgICAgIDMyMywKICAgICAgICAgICAgMzI0LAog
ICAgICAgICAgICAzMjUKICAgICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hl
cyI6IFsKICAgICAgICAgICAgInhzYTMzMC5wYXRjaCIKICAgICAgICAgIF0K
ICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAgICAiNC4xMyI6IHsKICAgICAg
IlJlY2lwZXMiOiB7CiAgICAgICAgInhlbiI6IHsKICAgICAgICAgICJTdGFi
bGVSZWYiOiAiYjUzMDIyNzNlMmM1MTk0MDE3MjQwMDQ4NjY0NDYzNmYyZjRm
YzY0YSIsCiAgICAgICAgICAiUHJlcmVxcyI6IFsKICAgICAgICAgICAgMzUz
LAogICAgICAgICAgICAxMTUsCiAgICAgICAgICAgIDMyMiwKICAgICAgICAg
ICAgMzIzLAogICAgICAgICAgICAzMjQsCiAgICAgICAgICAgIDMyNQogICAg
ICAgICAgXSwKICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAi
eHNhMzMwLnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQog
ICAgfSwKICAgICI0LjE0IjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAg
ICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICIxZDFkMWY1Mzkx
OTc2NDU2YTc5ZGFhYzBkY2ZlNzE1N2RhMWU1NGY3IiwKICAgICAgICAgICJQ
cmVyZXFzIjogWwogICAgICAgICAgICAzNTMsCiAgICAgICAgICAgIDExNSwK
ICAgICAgICAgICAgMzIyLAogICAgICAgICAgICAzMjMsCiAgICAgICAgICAg
IDMyNCwKICAgICAgICAgICAgMzI1CiAgICAgICAgICBdLAogICAgICAgICAg
IlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2EzMzAucGF0Y2giCiAgICAg
ICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9LAogICAgIm1hc3RlciI6
IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAgInhlbiI6IHsKICAgICAg
ICAgICJTdGFibGVSZWYiOiAiM2FlNDY5YWY4ZTY4MGRmMzFlZWNkMGEyYWM2
YTgzYjU4YWQ3Y2U1MyIsCiAgICAgICAgICAiUHJlcmVxcyI6IFsKICAgICAg
ICAgICAgMzUzLAogICAgICAgICAgICAxMTUsCiAgICAgICAgICAgIDMyMiwK
ICAgICAgICAgICAgMzIzLAogICAgICAgICAgICAzMjQsCiAgICAgICAgICAg
IDMyNQogICAgICAgICAgXSwKICAgICAgICAgICJQYXRjaGVzIjogWwogICAg
ICAgICAgICAieHNhMzMwLnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0K
ICAgICAgfQogICAgfQogIH0KfQ==

--=separator
Content-Type: application/octet-stream; name="xsa330.patch"
Content-Disposition: attachment; filename="xsa330.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogZGVsZXRlIHdhdGNoIGZyb20gdHJpZSB0b28gd2hlbiByZXNl
dHRpbmcKIHdhdGNoZXMKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UeXBl
OiB0ZXh0L3BsYWluOyBjaGFyc2V0PVVURi04CkNvbnRlbnQtVHJhbnNmZXIt
RW5jb2Rpbmc6IDhiaXQKCmMvcyBmOGM3MmI1MjYxMjkgIm94ZW5zdG9yZWQ6
IGltcGxlbWVudCBYU19SRVNFVF9XQVRDSEVTIiBmcm9tIFhlbiA0LjYKaW50
cm9kdWNlZCByZXNldCB3YXRjaGVzIHN1cHBvcnQgaW4gb3hlbnN0b3JlZCBi
eSBtaXJyb3JpbmcgdGhlIGNoYW5nZQppbiBjeGVuc3RvcmVkLgoKSG93ZXZl
ciB0aGUgT0NhbWwgdmVyc2lvbiBoYXMgc29tZSBhZGRpdGlvbmFsIGRhdGEg
c3RydWN0dXJlcyB0bwpvcHRpbWl6ZSB3YXRjaCBmaXJpbmcsIGFuZCBqdXN0
IHJlc2V0dGluZyB0aGUgd2F0Y2hlcyBpbiBvbmUgb2YgdGhlIGRhdGEKc3Ry
dWN0dXJlcyBjcmVhdGVzIGEgc2VjdXJpdHkgYnVnIHdoZXJlIGEgbWFsaWNp
b3VzIGd1ZXN0IGtlcm5lbCBjYW4KZXhjZWVkIGl0cyB3YXRjaCBxdW90YSwg
ZHJpdmluZyBveGVuc3RvcmVkIGludG8gT09NOgogKiBjcmVhdGUgd2F0Y2hl
cwogKiByZXNldCB3YXRjaGVzICh0aGlzIHN0aWxsIGtlZXBzIHRoZSB3YXRj
aGVzIGxpbmdlcmluZyBpbiBhbm90aGVyIGRhdGEKICAgc3RydWN0dXJlLCB1
c2luZyBtZW1vcnkpCiAqIGNyZWF0ZSBzb21lIG1vcmUgd2F0Y2hlcwogKiBs
b29wIHVudGlsIG94ZW5zdG9yZWQgZGllcwoKVGhlIGd1ZXN0IGtlcm5lbCBk
b2Vzbid0IG5lY2Vzc2FyaWx5IGhhdmUgdG8gYmUgbWFsaWNpb3VzIHRvIHRy
aWdnZXIKdGhpczoKICogaWYgY29udHJvbC9wbGF0Zm9ybS1mZWF0dXJlLXhz
X3Jlc2V0X3dhdGNoZXMgaXMgc2V0CiAqIHRoZSBndWVzdCBrZXhlY3MgKGUu
Zy4gYmVjYXVzZSBpdCBjcmFzaGVzKQogKiBvbiBib290IG1vcmUgd2F0Y2hl
cyBhcmUgc2V0IHVwCiAqIHRoaXMgd2lsbCBzbG93bHkgImxlYWsiIG1lbW9y
eSBmb3Igd2F0Y2hlcyBpbiBveGVuc3RvcmVkLCBkcml2aW5nIGl0CiAgIHRv
d2FyZHMgT09NLgoKVGhpcyBpcyBYU0EtMzMwLgoKRml4ZXM6IGY4YzcyYjUy
NjEyOSAoIm94ZW5zdG9yZWQ6IGltcGxlbWVudCBYU19SRVNFVF9XQVRDSEVT
IikKU2lnbmVkLW9mZi1ieTogRWR3aW4gVMO2csO2ayA8ZWR2aW4udG9yb2tA
Y2l0cml4LmNvbT4KQWNrZWQtYnk6IENocmlzdGlhbiBMaW5kaWcgPGNocmlz
dGlhbi5saW5kaWdAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEFuZHJldyBD
b29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+CgpkaWZmIC0tZ2l0
IGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL2Nvbm5lY3Rpb25zLm1sIGIvdG9v
bHMvb2NhbWwveGVuc3RvcmVkL2Nvbm5lY3Rpb25zLm1sCmluZGV4IDlmOWY3
ZWUyZjAuLjZlZTM1NTJlYzIgMTAwNjQ0Ci0tLSBhL3Rvb2xzL29jYW1sL3hl
bnN0b3JlZC9jb25uZWN0aW9ucy5tbAorKysgYi90b29scy9vY2FtbC94ZW5z
dG9yZWQvY29ubmVjdGlvbnMubWwKQEAgLTEzNCw2ICsxMzQsMTAgQEAgbGV0
IGRlbF93YXRjaCBjb25zIGNvbiBwYXRoIHRva2VuID0KIAkJY29ucy53YXRj
aGVzIDwtIFRyaWUuc2V0IGNvbnMud2F0Y2hlcyBrZXkgd2F0Y2hlczsKICAJ
d2F0Y2gKIAorbGV0IGRlbF93YXRjaGVzIGNvbnMgY29uID0KKwlDb25uZWN0
aW9uLmRlbF93YXRjaGVzIGNvbjsKKwljb25zLndhdGNoZXMgPC0gVHJpZS5t
YXAgKGRlbF93YXRjaGVzX29mX2NvbiBjb24pIGNvbnMud2F0Y2hlcworCiAo
KiBwYXRoIGlzIGFic29sdXRlICopCiBsZXQgZmlyZV93YXRjaGVzID9vbGRy
b290IHJvb3QgY29ucyBwYXRoIHJlY3Vyc2UgPQogCWxldCBrZXkgPSBrZXlf
b2ZfcGF0aCBwYXRoIGluCmRpZmYgLS1naXQgYS90b29scy9vY2FtbC94ZW5z
dG9yZWQvcHJvY2Vzcy5tbCBiL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wcm9j
ZXNzLm1sCmluZGV4IDczZTA0Y2MxOGIuLjQzN2QyZGNmOWUgMTAwNjQ0Ci0t
LSBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wcm9jZXNzLm1sCisrKyBiL3Rv
b2xzL29jYW1sL3hlbnN0b3JlZC9wcm9jZXNzLm1sCkBAIC0xNzksOCArMTc5
LDggQEAgbGV0IGRvX2lzaW50cm9kdWNlZCBjb24gX3QgZG9tYWlucyBfY29u
cyBkYXRhID0KIAlpZiBkb21pZCA9IERlZmluZS5kb21pZF9zZWxmIHx8IERv
bWFpbnMuZXhpc3QgZG9tYWlucyBkb21pZCB0aGVuICJUXDAwMCIgZWxzZSAi
RlwwMDAiCiAKICgqIG9ubHkgaW4geGVuID49IDQuMiAqKQotbGV0IGRvX3Jl
c2V0X3dhdGNoZXMgY29uIF90IF9kb21haW5zIF9jb25zIF9kYXRhID0KLSAg
Q29ubmVjdGlvbi5kZWxfd2F0Y2hlcyBjb247CitsZXQgZG9fcmVzZXRfd2F0
Y2hlcyBjb24gX3QgX2RvbWFpbnMgY29ucyBfZGF0YSA9CisgIENvbm5lY3Rp
b25zLmRlbF93YXRjaGVzIGNvbnMgY29uOwogICBDb25uZWN0aW9uLmRlbF90
cmFuc2FjdGlvbnMgY29uCiAKICgqIG9ubHkgaW4gPj0geGVuMy4zICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgKikK

--=separator
Content-Type: application/octet-stream; name="xsa330-4.11.patch"
Content-Disposition: attachment; filename="xsa330-4.11.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogZGVsZXRlIHdhdGNoIGZyb20gdHJpZSB0b28gd2hlbiByZXNl
dHRpbmcKIHdhdGNoZXMKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UeXBl
OiB0ZXh0L3BsYWluOyBjaGFyc2V0PVVURi04CkNvbnRlbnQtVHJhbnNmZXIt
RW5jb2Rpbmc6IDhiaXQKCmMvcyBmOGM3MmI1MjYxMjkgIm94ZW5zdG9yZWQ6
IGltcGxlbWVudCBYU19SRVNFVF9XQVRDSEVTIiBmcm9tIFhlbiA0LjYKaW50
cm9kdWNlZCByZXNldCB3YXRjaGVzIHN1cHBvcnQgaW4gb3hlbnN0b3JlZCBi
eSBtaXJyb3JpbmcgdGhlIGNoYW5nZQppbiBjeGVuc3RvcmVkLgoKSG93ZXZl
ciB0aGUgT0NhbWwgdmVyc2lvbiBoYXMgc29tZSBhZGRpdGlvbmFsIGRhdGEg
c3RydWN0dXJlcyB0bwpvcHRpbWl6ZSB3YXRjaCBmaXJpbmcsIGFuZCBqdXN0
IHJlc2V0dGluZyB0aGUgd2F0Y2hlcyBpbiBvbmUgb2YgdGhlIGRhdGEKc3Ry
dWN0dXJlcyBjcmVhdGVzIGEgc2VjdXJpdHkgYnVnIHdoZXJlIGEgbWFsaWNp
b3VzIGd1ZXN0IGtlcm5lbCBjYW4KZXhjZWVkIGl0cyB3YXRjaCBxdW90YSwg
ZHJpdmluZyBveGVuc3RvcmVkIGludG8gT09NOgogKiBjcmVhdGUgd2F0Y2hl
cwogKiByZXNldCB3YXRjaGVzICh0aGlzIHN0aWxsIGtlZXBzIHRoZSB3YXRj
aGVzIGxpbmdlcmluZyBpbiBhbm90aGVyIGRhdGEKICAgc3RydWN0dXJlLCB1
c2luZyBtZW1vcnkpCiAqIGNyZWF0ZSBzb21lIG1vcmUgd2F0Y2hlcwogKiBs
b29wIHVudGlsIG94ZW5zdG9yZWQgZGllcwoKVGhlIGd1ZXN0IGtlcm5lbCBk
b2Vzbid0IG5lY2Vzc2FyaWx5IGhhdmUgdG8gYmUgbWFsaWNpb3VzIHRvIHRy
aWdnZXIKdGhpczoKICogaWYgY29udHJvbC9wbGF0Zm9ybS1mZWF0dXJlLXhz
X3Jlc2V0X3dhdGNoZXMgaXMgc2V0CiAqIHRoZSBndWVzdCBrZXhlY3MgKGUu
Zy4gYmVjYXVzZSBpdCBjcmFzaGVzKQogKiBvbiBib290IG1vcmUgd2F0Y2hl
cyBhcmUgc2V0IHVwCiAqIHRoaXMgd2lsbCBzbG93bHkgImxlYWsiIG1lbW9y
eSBmb3Igd2F0Y2hlcyBpbiBveGVuc3RvcmVkLCBkcml2aW5nIGl0CiAgIHRv
d2FyZHMgT09NLgoKVGhpcyBpcyBYU0EtMzMwLgoKRml4ZXM6IGY4YzcyYjUy
NjEyOSAoIm94ZW5zdG9yZWQ6IGltcGxlbWVudCBYU19SRVNFVF9XQVRDSEVT
IikKU2lnbmVkLW9mZi1ieTogRWR3aW4gVMO2csO2ayA8ZWR2aW4udG9yb2tA
Y2l0cml4LmNvbT4KQWNrZWQtYnk6IENocmlzdGlhbiBMaW5kaWcgPGNocmlz
dGlhbi5saW5kaWdAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEFuZHJldyBD
b29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+CgpkaWZmIC0tZ2l0
IGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL2Nvbm5lY3Rpb25zLm1sIGIvdG9v
bHMvb2NhbWwveGVuc3RvcmVkL2Nvbm5lY3Rpb25zLm1sCmluZGV4IDAyMGI4
NzVkY2QuLjRlNjlkZTFkNDIgMTAwNjQ0Ci0tLSBhL3Rvb2xzL29jYW1sL3hl
bnN0b3JlZC9jb25uZWN0aW9ucy5tbAorKysgYi90b29scy9vY2FtbC94ZW5z
dG9yZWQvY29ubmVjdGlvbnMubWwKQEAgLTEzNCw2ICsxMzQsMTAgQEAgbGV0
IGRlbF93YXRjaCBjb25zIGNvbiBwYXRoIHRva2VuID0KIAkJY29ucy53YXRj
aGVzIDwtIFRyaWUuc2V0IGNvbnMud2F0Y2hlcyBrZXkgd2F0Y2hlczsKICAJ
d2F0Y2gKIAorbGV0IGRlbF93YXRjaGVzIGNvbnMgY29uID0KKwlDb25uZWN0
aW9uLmRlbF93YXRjaGVzIGNvbjsKKwljb25zLndhdGNoZXMgPC0gVHJpZS5t
YXAgKGRlbF93YXRjaGVzX29mX2NvbiBjb24pIGNvbnMud2F0Y2hlcworCiAo
KiBwYXRoIGlzIGFic29sdXRlICopCiBsZXQgZmlyZV93YXRjaGVzID9vbGRy
b290IHJvb3QgY29ucyBwYXRoIHJlY3Vyc2UgPQogCWxldCBrZXkgPSBrZXlf
b2ZfcGF0aCBwYXRoIGluCmRpZmYgLS1naXQgYS90b29scy9vY2FtbC94ZW5z
dG9yZWQvcHJvY2Vzcy5tbCBiL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wcm9j
ZXNzLm1sCmluZGV4IDZhOTk4Zjg3NjQuLjEyYWQ2NmZjZTYgMTAwNjQ0Ci0t
LSBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wcm9jZXNzLm1sCisrKyBiL3Rv
b2xzL29jYW1sL3hlbnN0b3JlZC9wcm9jZXNzLm1sCkBAIC0xNzksOCArMTc5
LDggQEAgbGV0IGRvX2lzaW50cm9kdWNlZCBjb24gX3QgZG9tYWlucyBfY29u
cyBkYXRhID0KIAlpZiBkb21pZCA9IERlZmluZS5kb21pZF9zZWxmIHx8IERv
bWFpbnMuZXhpc3QgZG9tYWlucyBkb21pZCB0aGVuICJUXDAwMCIgZWxzZSAi
RlwwMDAiCiAKICgqIG9ubHkgaW4geGVuID49IDQuMiAqKQotbGV0IGRvX3Jl
c2V0X3dhdGNoZXMgY29uIHQgZG9tYWlucyBjb25zIGRhdGEgPQotICBDb25u
ZWN0aW9uLmRlbF93YXRjaGVzIGNvbjsKK2xldCBkb19yZXNldF93YXRjaGVz
IGNvbiBfdCBfZG9tYWlucyBjb25zIF9kYXRhID0KKyAgQ29ubmVjdGlvbnMu
ZGVsX3dhdGNoZXMgY29ucyBjb247CiAgIENvbm5lY3Rpb24uZGVsX3RyYW5z
YWN0aW9ucyBjb24KIAogKCogb25seSBpbiA+PSB4ZW4zLjMgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAqKQo=

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 12:29:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 12:29:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53335.93059 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9SS-0000B5-Hp; Tue, 15 Dec 2020 12:29:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53335.93059; Tue, 15 Dec 2020 12:29:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9SS-0000Ak-9g; Tue, 15 Dec 2020 12:29:36 +0000
Received: by outflank-mailman (input) for mailman id 53335;
 Tue, 15 Dec 2020 12:29:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tdgx=FT=xenbits.xen.org=gdunlap@srs-us1.protection.inumbo.net>)
 id 1kp9LK-0004tM-55
 for xen-devel@lists.xen.org; Tue, 15 Dec 2020 12:22:14 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9eead270-3558-4f71-b4c2-f23cd526df3c;
 Tue, 15 Dec 2020 12:20:32 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9Jc-0005ke-AN; Tue, 15 Dec 2020 12:20:28 +0000
Received: from gdunlap by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9Jc-0007Bp-9a; Tue, 15 Dec 2020 12:20:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9eead270-3558-4f71-b4c2-f23cd526df3c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=qwX7c8Iq4udLBN2i+1bIIl7S6k70oFw1h75yc/A/GH8=; b=WaOy0XVVA/VZUwpBih3i5GNSiH
	CJZN7Dw0VxwgSaoxeUHepeKMzzy/Q/DGubTRElChX4cjmwwjseIyUKNugI/4JN3rf7KYwjMRLHd2k
	sQORxrR1lb7IlaLD6e4fzItcF0/p8451p/HxgvEPYYYCCAUdwyxpJ33uOjsoIqMaBvF4=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 358 v4 (CVE-2020-29570) - FIFO event
 channels control block related ordering
Message-Id: <E1kp9Jc-0007Bp-9a@xenbits.xenproject.org>
Date: Tue, 15 Dec 2020 12:20:28 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2020-29570 / XSA-358
                               version 4

          FIFO event channels control block related ordering

UPDATES IN VERSION 4
====================

Public release.

ISSUE DESCRIPTION
=================

Xen records the per-vCPU control block mapping before it records the
pointers into the control block, i.e. in the reverse of the required
order.  The consumer, seeing the former initialized, assumes that the
latter are also ready for use.

IMPACT
======

Malicious or buggy guest kernels can mount a Denial of Service (DoS)
attack affecting the entire system.

VULNERABLE SYSTEMS
==================

All Xen versions from 4.4 onwards are vulnerable.  Xen versions 4.3 and
earlier are not vulnerable.

MITIGATION
==========

There is no known mitigation.

CREDITS
=======

This issue was discovered by Julien Grall of Amazon.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa358.patch           xen-unstable - 4.10

$ sha256sum xsa358*
c8392659f71ea31574f9f82ab80a37e1359e8b8178d7b060167500bfb134eecc  xsa358.meta
ee719ff8dbf30794ddac1464267cb47c1aac7e39da32d82263f4aebc1a9b509b  xsa358.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl/YqeAMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZlv0H/0tFfvZ8aKiUPFYwu/9WgNwLZIZJUgqIt1q1ooxt
6S+e8yHGhg3mBoAmfqN38sffVdD14z9DVFfIpMtrZpyfGzX2kmCPwC+MAtPliaNC
8rH7CDJHuQU35z5c/3q12pldtAFKLBhhqulg3Q5jLHi/HAKvypJFibLyqmqY+Uoo
yEMqpE1UtzhoYD4RsttcT1chGiBn8Gk8wBVcLx/SzzcU6xJ+X0F37VaIyTPW+69l
74ov4jzpt667mr4VtNOCmIAHuRZNLhValRUwzwSvGGjmiF8ACKbeKZ5IQ3m7gCBA
7fNRaRDdsKJi9amdifKfyn28u/+ltkPoCK6jAQcO1Eg/+0Q=
=lxX6
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa358.meta"
Content-Disposition: attachment; filename="xsa358.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzNTgsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIs
CiAgICAiNC4xMSIsCiAgICAiNC4xMCIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjEwIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICIxZDcyZDk5MTVlZGZmMGRkNDFmNjAxYmJiMGIxZjgzYzAy
ZmYxNjg5IiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
NTMsCiAgICAgICAgICAgIDExNSwKICAgICAgICAgICAgMzIyLAogICAgICAg
ICAgICAzMjMsCiAgICAgICAgICAgIDMyNCwKICAgICAgICAgICAgMzI1LAog
ICAgICAgICAgICAzMzAsCiAgICAgICAgICAgIDM1MiwKICAgICAgICAgICAg
MzQ4LAogICAgICAgICAgICAzNTYKICAgICAgICAgIF0sCiAgICAgICAgICAi
UGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTM1OC5wYXRjaCIKICAgICAg
ICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAgICAiNC4xMSI6IHsK
ICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAgInhlbiI6IHsKICAgICAgICAg
ICJTdGFibGVSZWYiOiAiNDFhODIyYzM5MjYzNTBmMjY5MTdkNzQ3YzhkZmVk
MWM0NGEyY2Y0MiIsCiAgICAgICAgICAiUHJlcmVxcyI6IFsKICAgICAgICAg
ICAgMzUzLAogICAgICAgICAgICAxMTUsCiAgICAgICAgICAgIDMyMiwKICAg
ICAgICAgICAgMzIzLAogICAgICAgICAgICAzMjQsCiAgICAgICAgICAgIDMy
NSwKICAgICAgICAgICAgMzMwLAogICAgICAgICAgICAzNTIsCiAgICAgICAg
ICAgIDM0OCwKICAgICAgICAgICAgMzU2CiAgICAgICAgICBdLAogICAgICAg
ICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2EzNTgucGF0Y2giCiAg
ICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9LAogICAgIjQuMTIi
OiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAgICAg
ICAgICAiU3RhYmxlUmVmIjogIjgxNDVkMzhiNDgwMDkyNTVhMzJhYjg3YTAy
ZTQ4MWNkMDljODExZjkiLAogICAgICAgICAgIlByZXJlcXMiOiBbCiAgICAg
ICAgICAgIDM1MywKICAgICAgICAgICAgMTE1LAogICAgICAgICAgICAzMjIs
CiAgICAgICAgICAgIDMyMywKICAgICAgICAgICAgMzI0LAogICAgICAgICAg
ICAzMjUsCiAgICAgICAgICAgIDMzMCwKICAgICAgICAgICAgMzUyLAogICAg
ICAgICAgICAzNDgsCiAgICAgICAgICAgIDM1NgogICAgICAgICAgXSwKICAg
ICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzU4LnBhdGNo
IgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAgICI0
LjEzIjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewog
ICAgICAgICAgIlN0YWJsZVJlZiI6ICJiNTMwMjI3M2UyYzUxOTQwMTcyNDAw
NDg2NjQ0NjM2ZjJmNGZjNjRhIiwKICAgICAgICAgICJQcmVyZXFzIjogWwog
ICAgICAgICAgICAzNTMsCiAgICAgICAgICAgIDExNSwKICAgICAgICAgICAg
MzIyLAogICAgICAgICAgICAzMjMsCiAgICAgICAgICAgIDMyNCwKICAgICAg
ICAgICAgMzI1LAogICAgICAgICAgICAzMzAsCiAgICAgICAgICAgIDM1MiwK
ICAgICAgICAgICAgMzQ4LAogICAgICAgICAgICAzNTYKICAgICAgICAgIF0s
CiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTM1OC5w
YXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAg
ICAiNC4xNCI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAgInhlbiI6
IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiMWQxZDFmNTM5MTk3NjQ1NmE3
OWRhYWMwZGNmZTcxNTdkYTFlNTRmNyIsCiAgICAgICAgICAiUHJlcmVxcyI6
IFsKICAgICAgICAgICAgMzUzLAogICAgICAgICAgICAxMTUsCiAgICAgICAg
ICAgIDMyMiwKICAgICAgICAgICAgMzIzLAogICAgICAgICAgICAzMjQsCiAg
ICAgICAgICAgIDMyNSwKICAgICAgICAgICAgMzMwLAogICAgICAgICAgICAz
NTIsCiAgICAgICAgICAgIDM0OCwKICAgICAgICAgICAgMzU2CiAgICAgICAg
ICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2Ez
NTgucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9
LAogICAgIm1hc3RlciI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAg
InhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiM2FlNDY5YWY4ZTY4
MGRmMzFlZWNkMGEyYWM2YTgzYjU4YWQ3Y2U1MyIsCiAgICAgICAgICAiUHJl
cmVxcyI6IFsKICAgICAgICAgICAgMzUzLAogICAgICAgICAgICAxMTUsCiAg
ICAgICAgICAgIDMyMiwKICAgICAgICAgICAgMzIzLAogICAgICAgICAgICAz
MjQsCiAgICAgICAgICAgIDMyNSwKICAgICAgICAgICAgMzMwLAogICAgICAg
ICAgICAzNTIsCiAgICAgICAgICAgIDM0OCwKICAgICAgICAgICAgMzU2CiAg
ICAgICAgICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAg
ICJ4c2EzNTgucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAgfQogICAgICB9
CiAgICB9CiAgfQp9

--=separator
Content-Type: application/octet-stream; name="xsa358.patch"
Content-Disposition: attachment; filename="xsa358.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBldnRjaG4vRklGTzogcmUtb3JkZXIgYW5kIHN5bmNocm9uaXplICh3aXRo
KSBtYXBfY29udHJvbF9ibG9jaygpCgpGb3IgZXZ0Y2huX2ZpZm9fc2V0X3Bl
bmRpbmcoKSdzIGNoZWNrIG9mIHRoZSBjb250cm9sIGJsb2NrIGhhdmluZyBi
ZWVuCnNldCB0byBiZSBlZmZlY3RpdmUsIG9yZGVyaW5nIG9mIHJlc3BlY3Rp
dmUgcmVhZHMgYW5kIHdyaXRlcyBuZWVkcyB0byBiZQplbnN1cmVkOiBUaGUg
Y29udHJvbCBibG9jayBwb2ludGVyIG5lZWRzIHRvIGJlIHJlY29yZGVkIHN0
cmljdGx5IGFmdGVyCnRoZSBzZXR0aW5nIG9mIGFsbCB0aGUgcXVldWUgaGVh
ZHMsIGFuZCBpdCBuZWVkcyBjaGVja2luZyBzdHJpY3RseQpiZWZvcmUgYW55
IHVzZXMgb2YgdGhlbSAodGhpcyBsYXR0ZXIgYXNwZWN0IHdhcyBhbHJlYWR5
IGd1YXJhbnRlZWQpLgoKVGhpcyBpcyBYU0EtMzU4IC8gQ1ZFLTIwMjAtMjk1
NzAuCgpSZXBvcnRlZC1ieTogSnVsaWVuIEdyYWxsIDxqZ3JhbGxAYW1hem9u
LmNvbT4KU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1
c2UuY29tPgpBY2tlZC1ieTogSnVsaWVuIEdyYWxsIDxqZ3JhbGxAYW1hem9u
LmNvbT4KLS0tCnYzOiBEcm9wIHJlYWQtc2lkZSBiYXJyaWVyIGFnYWluLCBs
ZXZlcmFnaW5nIGd1ZXN0X3Rlc3RfYW5kX3NldF9iaXQoKS4KdjI6IFJlLWJh
c2Ugb3ZlciBxdWV1ZSBsb2NraW5nIHJlLXdvcmsuCgotLS0gYS94ZW4vY29t
bW9uL2V2ZW50X2ZpZm8uYworKysgYi94ZW4vY29tbW9uL2V2ZW50X2ZpZm8u
YwpAQCAtMjQ5LDYgKzI0OSwxMCBAQCBzdGF0aWMgdm9pZCBldnRjaG5fZmlm
b19zZXRfcGVuZGluZyhzdHJ1CiAgICAgICAgICAgICBnb3RvIHVubG9jazsK
ICAgICAgICAgfQogCisgICAgICAgIC8qCisgICAgICAgICAqIFRoaXMgYWxz
byBhY3RzIGFzIHRoZSByZWFkIGNvdW50ZXJwYXJ0IG9mIHRoZSBzbXBfd21i
KCkgaW4KKyAgICAgICAgICogbWFwX2NvbnRyb2xfYmxvY2soKS4KKyAgICAg
ICAgICovCiAgICAgICAgIGlmICggZ3Vlc3RfdGVzdF9hbmRfc2V0X2JpdChk
LCBFVlRDSE5fRklGT19MSU5LRUQsIHdvcmQpICkKICAgICAgICAgICAgIGdv
dG8gdW5sb2NrOwogCkBAIC00NzQsNiArNDc4LDcgQEAgc3RhdGljIGludCBz
ZXR1cF9jb250cm9sX2Jsb2NrKHN0cnVjdCB2Ywogc3RhdGljIGludCBtYXBf
Y29udHJvbF9ibG9jayhzdHJ1Y3QgdmNwdSAqdiwgdWludDY0X3QgZ2ZuLCB1
aW50MzJfdCBvZmZzZXQpCiB7CiAgICAgdm9pZCAqdmlydDsKKyAgICBzdHJ1
Y3QgZXZ0Y2huX2ZpZm9fY29udHJvbF9ibG9jayAqY29udHJvbF9ibG9jazsK
ICAgICB1bnNpZ25lZCBpbnQgaTsKICAgICBpbnQgcmM7CiAKQEAgLTQ4NCwx
MCArNDg5LDE1IEBAIHN0YXRpYyBpbnQgbWFwX2NvbnRyb2xfYmxvY2soc3Ry
dWN0IHZjcHUKICAgICBpZiAoIHJjIDwgMCApCiAgICAgICAgIHJldHVybiBy
YzsKIAotICAgIHYtPmV2dGNobl9maWZvLT5jb250cm9sX2Jsb2NrID0gdmly
dCArIG9mZnNldDsKKyAgICBjb250cm9sX2Jsb2NrID0gdmlydCArIG9mZnNl
dDsKIAogICAgIGZvciAoIGkgPSAwOyBpIDw9IEVWVENITl9GSUZPX1BSSU9S
SVRZX01JTjsgaSsrICkKLSAgICAgICAgdi0+ZXZ0Y2huX2ZpZm8tPnF1ZXVl
W2ldLmhlYWQgPSAmdi0+ZXZ0Y2huX2ZpZm8tPmNvbnRyb2xfYmxvY2stPmhl
YWRbaV07CisgICAgICAgIHYtPmV2dGNobl9maWZvLT5xdWV1ZVtpXS5oZWFk
ID0gJmNvbnRyb2xfYmxvY2stPmhlYWRbaV07CisKKyAgICAvKiBBbGwgcXVl
dWUgaGVhZHMgbXVzdCBoYXZlIGJlZW4gc2V0IGJlZm9yZSBzZXR0aW5nIHRo
ZSBjb250cm9sIGJsb2NrLiAqLworICAgIHNtcF93bWIoKTsKKworICAgIHYt
PmV2dGNobl9maWZvLT5jb250cm9sX2Jsb2NrID0gY29udHJvbF9ibG9jazsK
IAogICAgIHJldHVybiAwOwogfQo=

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 12:29:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 12:29:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53339.93099 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9SW-0000Kf-D3; Tue, 15 Dec 2020 12:29:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53339.93099; Tue, 15 Dec 2020 12:29:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9SV-0000Jz-T8; Tue, 15 Dec 2020 12:29:39 +0000
Received: by outflank-mailman (input) for mailman id 53339;
 Tue, 15 Dec 2020 12:29:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tdgx=FT=xenbits.xen.org=gdunlap@srs-us1.protection.inumbo.net>)
 id 1kp9Kp-0004t1-Ma
 for xen-devel@lists.xen.org; Tue, 15 Dec 2020 12:21:43 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5339cac1-8fcc-452b-b200-ba3e700565eb;
 Tue, 15 Dec 2020 12:20:28 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9JW-0005iJ-Jv; Tue, 15 Dec 2020 12:20:22 +0000
Received: from gdunlap by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9JW-00075m-Iz; Tue, 15 Dec 2020 12:20:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5339cac1-8fcc-452b-b200-ba3e700565eb
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=vBB2qHJiMOXamat5KJvU0NWZIVF+rVkQdpQzwYShMd4=; b=emqasVPt36Q8ND2iOxAXTqx3JN
	VkIJvJMIPiWOGl+OJ7Fz0WLS0MIwGB11aUq2nntcIqzmKHmWKiOP/pk3z36NrzrZEHfwLZ/QRYtH0
	UbgP1pFoFFLUj/pVwBpQyPUj2VXGU7qwIYsfEz+tfjva6vsioW4WI3jZ9qa14QjczDVY=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 349 v3 (CVE-2020-29568) - Frontends can
 trigger OOM in Backends by updating a watched path
Message-Id: <E1kp9JW-00075m-Iz@xenbits.xenproject.org>
Date: Tue, 15 Dec 2020 12:20:22 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2020-29568 / XSA-349
                               version 3

 Frontends can trigger OOM in Backends by updating a watched path

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

Some OSes (such as Linux, FreeBSD, and NetBSD) process watch events
using a single thread.  If events are received faster than the thread
can handle them, they will get queued.

As the queue is unbounded, a guest may be able to trigger an OOM in
the backend.
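The fix pattern in the attached Linux patches is to let a watch filter
events before they are queued.  The following is a minimal,
self-contained sketch of that idea (the `struct watch`, `enqueue`, and
`at_most_one` names here are illustrative, not the kernel's actual
types): an optional `will_handle`-style callback is consulted before
allocation, so a watch that only cares about one path can refuse
duplicate pending events and keep its queue bounded.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Simplified model of a watched-path event queue.  In the real
 * xenbus code the queue is global and protected by a spinlock; a
 * per-watch singly linked list is enough to show the pattern. */
struct event {
    char path[64];
    struct event *next;
};

struct watch {
    /* Optional filter, called before an event is queued.
     * Returning false discards the event. */
    bool (*will_handle)(struct watch *w, const char *path);
    unsigned int nr_pending;
    struct event *head, *tail;
};

/* Queue one event for @w, unless the watch's filter rejects it.
 * Returns true if the event was queued. */
static bool enqueue(struct watch *w, const char *path)
{
    struct event *ev;

    if (w->will_handle && !w->will_handle(w, path))
        return false;           /* discarded before any allocation */

    ev = calloc(1, sizeof(*ev));
    if (!ev)
        return false;
    strncpy(ev->path, path, sizeof(ev->path) - 1);

    if (w->tail)
        w->tail->next = ev;
    else
        w->head = ev;
    w->tail = ev;
    w->nr_pending++;
    return true;
}

/* Example filter: a watch on a single path has no use for duplicate
 * pending events, so allow at most one to be queued at a time. */
static bool at_most_one(struct watch *w, const char *path)
{
    (void)path;
    return w->nr_pending == 0;
}
```

Without a filter, a guest writing the watched path in a tight loop
grows the queue without bound; with `at_most_one` installed, repeated
writes cost no additional memory in the backend.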

IMPACT
======

A malicious guest can trigger an OOM in backends.

VULNERABLE SYSTEMS
==================

All systems with a FreeBSD, Linux, or NetBSD dom0 are vulnerable.

All versions of those OSes are vulnerable.

MITIGATION
==========

There is no known mitigation.

CREDITS
=======

This issue was discovered by Michael Kurth and Pawel Wieczorkiewicz of
Amazon.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue for Linux.

Fixes for FreeBSD and NetBSD will be handled through their own
security process.

xsa349/xsa349-linux-?.patch   Linux

$ sha256sum xsa349*/*
76f69574553137af8c9c7aecca3025d135b49c4a5316cc541e9e355576a21599  xsa349/xsa349-linux-1.patch
3ce2e1a88321993a3698b4608d2332fb5d43e0d82de73bc9f1700202782eba30  xsa349/xsa349-linux-2.patch
4bbaf62ed5e3442b310f80344b9d3ccd37f0a07827ed41907b44228130a610da  xsa349/xsa349-linux-3.patch
a7648214cea5d0340a29552df224230cf214d698fe2d7a8798f57444225afe32  xsa349/xsa349-linux-4.patch
ac32d02129821ed7db1b71c39b2c708399c0af809eefdb5bf0709f00736e7959  xsa349/xsa349-linux-5.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.


(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl/Yqd8MHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZxv0IAI1ELk5Zbx9SD7obwWo7r9G0QOE2fP6DtZnlIDsL
AsD1bssyosT5L0Xkk5+8tmt6gwRN3fjpAj24QNO/DrytHFSa42ELPmpEeQ63/LJL
UJwxC+fbAwWrk8JM99WqWQbgASBka9VSktVML/yU3K+IpBk4xTPulJ5J+R96QYoe
65zCFkbkw2HHFLzUlveY03031ckNshrmfX/rP7vFrjywdKkvt0wq/jRIESjiWfln
sIC+qc/FtOWfXywpcdYZmL3uPqcZViVXnv4lOZ4Meg5+IzJDPxPnYw/T1RRKjdyy
dBZvhv3DHGtdnI5Q3BGW6KOuHC4KBsWLX5pPWm6m5MCfHak=
=XeRA
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa349/xsa349-linux-1.patch"
Content-Disposition: attachment; filename="xsa349/xsa349-linux-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogQXV0aG9yIFJlZGFjdGVkIDxzZWN1cml0eUB4ZW4ub3JnPgpTdWJq
ZWN0OiBbUEFUQ0ggMS81XSB4ZW4veGVuYnVzOiBBbGxvdyB3YXRjaGVzIGRp
c2NhcmQgZXZlbnRzIGJlZm9yZSBxdWV1ZWluZwoKSWYgaGFuZGxpbmcgbG9n
aWNzIG9mIHdhdGNoIGV2ZW50cyBhcmUgc2xvd2VyIHRoYW4gdGhlIGV2ZW50
cyBlbnF1ZXVlCmxvZ2ljIGFuZCB0aGUgZXZlbnRzIGNhbiBiZSBjcmVhdGVk
IGZyb20gdGhlIGd1ZXN0cywgdGhlIGd1ZXN0cyBjb3VsZAp0cmlnZ2VyIG1l
bW9yeSBwcmVzc3VyZSBieSBpbnRlbnNpdmVseSBpbmR1Y2luZyB0aGUgZXZl
bnRzLCBiZWNhdXNlIGl0CndpbGwgY3JlYXRlIGEgaHVnZSBudW1iZXIgb2Yg
cGVuZGluZyBldmVudHMgdGhhdCBleGhhdXN0aW5nIHRoZSBtZW1vcnkuClRo
aXMgaXMga25vd24gYXMgWFNBLTM0OS4KCkZvcnR1bmF0ZWx5LCBzb21lIHdh
dGNoIGV2ZW50cyBjb3VsZCBiZSBpZ25vcmVkLCBkZXBlbmRpbmcgb24gaXRz
CmhhbmRsZXIgY2FsbGJhY2suICBGb3IgZXhhbXBsZSwgaWYgdGhlIGNhbGxi
YWNrIGhhcyBpbnRlcmVzdCBpbiBvbmx5IG9uZQpzaW5nbGUgcGF0aCwgdGhl
IHdhdGNoIHdvdWxkbid0IHdhbnQgbXVsdGlwbGUgcGVuZGluZyBldmVudHMu
ICBPciwgc29tZQp3YXRjaGVzIGNvdWxkIGlnbm9yZSBldmVudHMgdG8gc2Ft
ZSBwYXRoLgoKVG8gbGV0IHN1Y2ggd2F0Y2hlcyB0byB2b2x1dGFyaWx5IGhl
bHAgYXZvaWRpbmcgdGhlIG1lbW9yeSBwcmVzc3VyZQpzaXR1YXRpb24sIHRo
aXMgY29tbWl0IGludHJvZHVjZXMgbmV3IHdhdGNoIGNhbGxiYWNrLCAnd2ls
bF9oYW5kbGUnLiAgSWYKaXQgaXMgbm90IE5VTEwsIGl0IHdpbGwgYmUgY2Fs
bGVkIGZvciBlYWNoIG5ldyBldmVudCBqdXN0IGJlZm9yZQplbnF1ZXVpbmcg
aXQuICBUaGVuLCBpZiB0aGUgY2FsbGJhY2sgcmV0dXJucyBmYWxzZSwgdGhl
IGV2ZW50IHdpbGwgYmUKZGlzY2FyZGVkLiAgTm8gd2F0Y2ggaXMgdXNpbmcg
dGhlIGNhbGxiYWNrIGZvciBub3csIHRob3VnaC4KClNpZ25lZC1vZmYtYnk6
IFNlb25nSmFlIFBhcmsgPHNqcGFya0BhbWF6b24uZGU+ClJlcG9ydGVkLWJ5
OiBNaWNoYWVsIEt1cnRoIDxta3VAYW1hem9uLmRlPgpSZXBvcnRlZC1ieTog
UGF3ZWwgV2llY3pvcmtpZXdpY3ogPHdpcGF3ZWxAYW1hem9uLmRlPgpTaWdu
ZWQtb2ZmLWJ5OiBBdXRob3IgUmVkYWN0ZWQgPHNlY3VyaXR5QHhlbi5vcmc+
ClJldmlld2VkLWJ5OiBKdWVyZ2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+
Ci0tLQogZHJpdmVycy9uZXQveGVuLW5ldGJhY2sveGVuYnVzLmMgICB8IDQg
KysrKwogZHJpdmVycy94ZW4veGVuYnVzL3hlbmJ1c19jbGllbnQuYyB8IDEg
KwogZHJpdmVycy94ZW4veGVuYnVzL3hlbmJ1c194cy5jICAgICB8IDUgKysr
Ky0KIGluY2x1ZGUveGVuL3hlbmJ1cy5oICAgICAgICAgICAgICAgfCA3ICsr
KysrKysKIDQgZmlsZXMgY2hhbmdlZCwgMTYgaW5zZXJ0aW9ucygrKSwgMSBk
ZWxldGlvbigtKQoKZGlmZiAtLWdpdCBhL2RyaXZlcnMvbmV0L3hlbi1uZXRi
YWNrL3hlbmJ1cy5jIGIvZHJpdmVycy9uZXQveGVuLW5ldGJhY2sveGVuYnVz
LmMKaW5kZXggZjFjMTYyNGNlYzhmLi4wMGY2ZjhkYzU2YzggMTAwNjQ0Ci0t
LSBhL2RyaXZlcnMvbmV0L3hlbi1uZXRiYWNrL3hlbmJ1cy5jCisrKyBiL2Ry
aXZlcnMvbmV0L3hlbi1uZXRiYWNrL3hlbmJ1cy5jCkBAIC01NTcsMTIgKzU1
NywxNCBAQCBzdGF0aWMgaW50IHhlbl9yZWdpc3Rlcl9jcmVkaXRfd2F0Y2go
c3RydWN0IHhlbmJ1c19kZXZpY2UgKmRldiwKIAkJcmV0dXJuIC1FTk9NRU07
CiAJc25wcmludGYobm9kZSwgbWF4bGVuLCAiJXMvcmF0ZSIsIGRldi0+bm9k
ZW5hbWUpOwogCXZpZi0+Y3JlZGl0X3dhdGNoLm5vZGUgPSBub2RlOworCXZp
Zi0+Y3JlZGl0X3dhdGNoLndpbGxfaGFuZGxlID0gTlVMTDsKIAl2aWYtPmNy
ZWRpdF93YXRjaC5jYWxsYmFjayA9IHhlbl9uZXRfcmF0ZV9jaGFuZ2VkOwog
CWVyciA9IHJlZ2lzdGVyX3hlbmJ1c193YXRjaCgmdmlmLT5jcmVkaXRfd2F0
Y2gpOwogCWlmIChlcnIpIHsKIAkJcHJfZXJyKCJGYWlsZWQgdG8gc2V0IHdh
dGNoZXIgJXNcbiIsIHZpZi0+Y3JlZGl0X3dhdGNoLm5vZGUpOwogCQlrZnJl
ZShub2RlKTsKIAkJdmlmLT5jcmVkaXRfd2F0Y2gubm9kZSA9IE5VTEw7CisJ
CXZpZi0+Y3JlZGl0X3dhdGNoLndpbGxfaGFuZGxlID0gTlVMTDsKIAkJdmlm
LT5jcmVkaXRfd2F0Y2guY2FsbGJhY2sgPSBOVUxMOwogCX0KIAlyZXR1cm4g
ZXJyOwpAQCAtNjA5LDYgKzYxMSw3IEBAIHN0YXRpYyBpbnQgeGVuX3JlZ2lz
dGVyX21jYXN0X2N0cmxfd2F0Y2goc3RydWN0IHhlbmJ1c19kZXZpY2UgKmRl
diwKIAlzbnByaW50Zihub2RlLCBtYXhsZW4sICIlcy9yZXF1ZXN0LW11bHRp
Y2FzdC1jb250cm9sIiwKIAkJIGRldi0+b3RoZXJlbmQpOwogCXZpZi0+bWNh
c3RfY3RybF93YXRjaC5ub2RlID0gbm9kZTsKKwl2aWYtPm1jYXN0X2N0cmxf
d2F0Y2gud2lsbF9oYW5kbGUgPSBOVUxMOwogCXZpZi0+bWNhc3RfY3RybF93
YXRjaC5jYWxsYmFjayA9IHhlbl9tY2FzdF9jdHJsX2NoYW5nZWQ7CiAJZXJy
ID0gcmVnaXN0ZXJfeGVuYnVzX3dhdGNoKCZ2aWYtPm1jYXN0X2N0cmxfd2F0
Y2gpOwogCWlmIChlcnIpIHsKQEAgLTYxNiw2ICs2MTksNyBAQCBzdGF0aWMg
aW50IHhlbl9yZWdpc3Rlcl9tY2FzdF9jdHJsX3dhdGNoKHN0cnVjdCB4ZW5i
dXNfZGV2aWNlICpkZXYsCiAJCSAgICAgICB2aWYtPm1jYXN0X2N0cmxfd2F0
Y2gubm9kZSk7CiAJCWtmcmVlKG5vZGUpOwogCQl2aWYtPm1jYXN0X2N0cmxf
d2F0Y2gubm9kZSA9IE5VTEw7CisJCXZpZi0+bWNhc3RfY3RybF93YXRjaC53
aWxsX2hhbmRsZSA9IE5VTEw7CiAJCXZpZi0+bWNhc3RfY3RybF93YXRjaC5j
YWxsYmFjayA9IE5VTEw7CiAJfQogCXJldHVybiBlcnI7CmRpZmYgLS1naXQg
YS9kcml2ZXJzL3hlbi94ZW5idXMveGVuYnVzX2NsaWVudC5jIGIvZHJpdmVy
cy94ZW4veGVuYnVzL3hlbmJ1c19jbGllbnQuYwppbmRleCBmZDgwZTMxOGI5
OWMuLjBhMjFhMTJkOWMzNCAxMDA2NDQKLS0tIGEvZHJpdmVycy94ZW4veGVu
YnVzL3hlbmJ1c19jbGllbnQuYworKysgYi9kcml2ZXJzL3hlbi94ZW5idXMv
eGVuYnVzX2NsaWVudC5jCkBAIC0xMzMsNiArMTMzLDcgQEAgaW50IHhlbmJ1
c193YXRjaF9wYXRoKHN0cnVjdCB4ZW5idXNfZGV2aWNlICpkZXYsIGNvbnN0
IGNoYXIgKnBhdGgsCiAJaW50IGVycjsKIAogCXdhdGNoLT5ub2RlID0gcGF0
aDsKKwl3YXRjaC0+d2lsbF9oYW5kbGUgPSBOVUxMOwogCXdhdGNoLT5jYWxs
YmFjayA9IGNhbGxiYWNrOwogCiAJZXJyID0gcmVnaXN0ZXJfeGVuYnVzX3dh
dGNoKHdhdGNoKTsKZGlmZiAtLWdpdCBhL2RyaXZlcnMveGVuL3hlbmJ1cy94
ZW5idXNfeHMuYyBiL2RyaXZlcnMveGVuL3hlbmJ1cy94ZW5idXNfeHMuYwpp
bmRleCAzYTA2ZWI2OTlmMzMuLmU4YmRiZDBhMWUyNiAxMDA2NDQKLS0tIGEv
ZHJpdmVycy94ZW4veGVuYnVzL3hlbmJ1c194cy5jCisrKyBiL2RyaXZlcnMv
eGVuL3hlbmJ1cy94ZW5idXNfeHMuYwpAQCAtNzA1LDcgKzcwNSwxMCBAQCBp
bnQgeHNfd2F0Y2hfbXNnKHN0cnVjdCB4c193YXRjaF9ldmVudCAqZXZlbnQp
CiAKIAlzcGluX2xvY2soJndhdGNoZXNfbG9jayk7CiAJZXZlbnQtPmhhbmRs
ZSA9IGZpbmRfd2F0Y2goZXZlbnQtPnRva2VuKTsKLQlpZiAoZXZlbnQtPmhh
bmRsZSAhPSBOVUxMKSB7CisJaWYgKGV2ZW50LT5oYW5kbGUgIT0gTlVMTCAm
JgorCQkJKCFldmVudC0+aGFuZGxlLT53aWxsX2hhbmRsZSB8fAorCQkJIGV2
ZW50LT5oYW5kbGUtPndpbGxfaGFuZGxlKGV2ZW50LT5oYW5kbGUsCisJCQkJ
IGV2ZW50LT5wYXRoLCBldmVudC0+dG9rZW4pKSkgewogCQlzcGluX2xvY2so
JndhdGNoX2V2ZW50c19sb2NrKTsKIAkJbGlzdF9hZGRfdGFpbCgmZXZlbnQt
Pmxpc3QsICZ3YXRjaF9ldmVudHMpOwogCQl3YWtlX3VwKCZ3YXRjaF9ldmVu
dHNfd2FpdHEpOwpkaWZmIC0tZ2l0IGEvaW5jbHVkZS94ZW4veGVuYnVzLmgg
Yi9pbmNsdWRlL3hlbi94ZW5idXMuaAppbmRleCA1YTgzMTVlNmQ4YTYuLmJh
YTg4YmYwYjliYyAxMDA2NDQKLS0tIGEvaW5jbHVkZS94ZW4veGVuYnVzLmgK
KysrIGIvaW5jbHVkZS94ZW4veGVuYnVzLmgKQEAgLTYxLDYgKzYxLDEzIEBA
IHN0cnVjdCB4ZW5idXNfd2F0Y2gKIAkvKiBQYXRoIGJlaW5nIHdhdGNoZWQu
ICovCiAJY29uc3QgY2hhciAqbm9kZTsKIAorCS8qCisJICogQ2FsbGVkIGp1
c3QgYmVmb3JlIGVucXVlaW5nIG5ldyBldmVudCB3aGlsZSBhIHNwaW5sb2Nr
IGlzIGhlbGQuCisJICogVGhlIGV2ZW50IHdpbGwgYmUgZGlzY2FyZGVkIGlm
IHRoaXMgY2FsbGJhY2sgcmV0dXJucyBmYWxzZS4KKwkgKi8KKwlib29sICgq
d2lsbF9oYW5kbGUpKHN0cnVjdCB4ZW5idXNfd2F0Y2ggKiwKKwkJCSAgICAg
IGNvbnN0IGNoYXIgKnBhdGgsIGNvbnN0IGNoYXIgKnRva2VuKTsKKwogCS8q
IENhbGxiYWNrIChleGVjdXRlZCBpbiBhIHByb2Nlc3MgY29udGV4dCB3aXRo
IG5vIGxvY2tzIGhlbGQpLiAqLwogCXZvaWQgKCpjYWxsYmFjaykoc3RydWN0
IHhlbmJ1c193YXRjaCAqLAogCQkJIGNvbnN0IGNoYXIgKnBhdGgsIGNvbnN0
IGNoYXIgKnRva2VuKTsKLS0gCjIuMTcuMQoK

--=separator
Content-Type: application/octet-stream; name="xsa349/xsa349-linux-2.patch"
Content-Disposition: attachment; filename="xsa349/xsa349-linux-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogQXV0aG9yIFJlZGFjdGVkIDxzZWN1cml0eUB4ZW4ub3JnPgpTdWJq
ZWN0OiBbUEFUQ0ggMi81XSB4ZW4veGVuYnVzOiBBZGQgJ3dpbGxfaGFuZGxl
JyBjYWxsYmFjayBzdXBwb3J0IGluCiB4ZW5idXNfd2F0Y2hfcGF0aCgpCgpT
b21lIGNvZGUgZG9lcyBub3QgZGlyZWN0bHkgbWFrZSAneGVuYnVzX3dhdGNo
JyBvYmplY3QgYW5kIGNhbGwKJ3JlZ2lzdGVyX3hlbmJ1c193YXRjaCgpJyBi
dXQgdXNlICd4ZW5idXNfd2F0Y2hfcGF0aCgpJyBpbnN0ZWFkLiAgVGhpcwpj
b21taXQgYWRkcyBzdXBwb3J0IG9mICd3aWxsX2hhbmRsZScgY2FsbGJhY2sg
aW4gdGhlCid4ZW5idXNfd2F0Y2hfcGF0aCgpJyBhbmQgaXQncyB3cmFwcGVy
LCAneGVuYnVzX3dhdGNoX3BhdGhmbXQoKScuCgpTaWduZWQtb2ZmLWJ5OiBT
ZW9uZ0phZSBQYXJrIDxzanBhcmtAYW1hem9uLmRlPgpSZXBvcnRlZC1ieTog
TWljaGFlbCBLdXJ0aCA8bWt1QGFtYXpvbi5kZT4KUmVwb3J0ZWQtYnk6IFBh
d2VsIFdpZWN6b3JraWV3aWN6IDx3aXBhd2VsQGFtYXpvbi5kZT4KU2lnbmVk
LW9mZi1ieTogQXV0aG9yIFJlZGFjdGVkIDxzZWN1cml0eUB4ZW4ub3JnPgpS
ZXZpZXdlZC1ieTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgot
LS0KIGRyaXZlcnMvYmxvY2sveGVuLWJsa2JhY2sveGVuYnVzLmMgfCAzICsr
LQogZHJpdmVycy9uZXQveGVuLW5ldGJhY2sveGVuYnVzLmMgICB8IDIgKy0K
IGRyaXZlcnMveGVuL3hlbi1wY2liYWNrL3hlbmJ1cy5jICAgfCAyICstCiBk
cml2ZXJzL3hlbi94ZW5idXMveGVuYnVzX2NsaWVudC5jIHwgOSArKysrKysr
LS0KIGRyaXZlcnMveGVuL3hlbmJ1cy94ZW5idXNfcHJvYmUuYyAgfCAyICst
CiBpbmNsdWRlL3hlbi94ZW5idXMuaCAgICAgICAgICAgICAgIHwgNiArKysr
Ky0KIDYgZmlsZXMgY2hhbmdlZCwgMTcgaW5zZXJ0aW9ucygrKSwgNyBkZWxl
dGlvbnMoLSkKCmRpZmYgLS1naXQgYS9kcml2ZXJzL2Jsb2NrL3hlbi1ibGti
YWNrL3hlbmJ1cy5jIGIvZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay94ZW5i
dXMuYwppbmRleCBmNTcwNTU2OWUyYTcuLjRiYjEwODcwMTI2NSAxMDA2NDQK
LS0tIGEvZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay94ZW5idXMuYworKysg
Yi9kcml2ZXJzL2Jsb2NrL3hlbi1ibGtiYWNrL3hlbmJ1cy5jCkBAIC02Nzcs
NyArNjc3LDggQEAgc3RhdGljIGludCB4ZW5fYmxrYmtfcHJvYmUoc3RydWN0
IHhlbmJ1c19kZXZpY2UgKmRldiwKIAkvKiBzZXR1cCBiYWNrIHBvaW50ZXIg
Ki8KIAliZS0+YmxraWYtPmJlID0gYmU7CiAKLQllcnIgPSB4ZW5idXNfd2F0
Y2hfcGF0aGZtdChkZXYsICZiZS0+YmFja2VuZF93YXRjaCwgYmFja2VuZF9j
aGFuZ2VkLAorCWVyciA9IHhlbmJ1c193YXRjaF9wYXRoZm10KGRldiwgJmJl
LT5iYWNrZW5kX3dhdGNoLCBOVUxMLAorCQkJCSAgIGJhY2tlbmRfY2hhbmdl
ZCwKIAkJCQkgICAiJXMvJXMiLCBkZXYtPm5vZGVuYW1lLCAicGh5c2ljYWwt
ZGV2aWNlIik7CiAJaWYgKGVycikKIAkJZ290byBmYWlsOwpkaWZmIC0tZ2l0
IGEvZHJpdmVycy9uZXQveGVuLW5ldGJhY2sveGVuYnVzLmMgYi9kcml2ZXJz
L25ldC94ZW4tbmV0YmFjay94ZW5idXMuYwppbmRleCAwMGY2ZjhkYzU2Yzgu
LjZmMTBlMDk5OGYxYyAxMDA2NDQKLS0tIGEvZHJpdmVycy9uZXQveGVuLW5l
dGJhY2sveGVuYnVzLmMKKysrIGIvZHJpdmVycy9uZXQveGVuLW5ldGJhY2sv
eGVuYnVzLmMKQEAgLTgyNCw3ICs4MjQsNyBAQCBzdGF0aWMgdm9pZCBjb25u
ZWN0KHN0cnVjdCBiYWNrZW5kX2luZm8gKmJlKQogCXhlbnZpZl9jYXJyaWVy
X29uKGJlLT52aWYpOwogCiAJdW5yZWdpc3Rlcl9ob3RwbHVnX3N0YXR1c193
YXRjaChiZSk7Ci0JZXJyID0geGVuYnVzX3dhdGNoX3BhdGhmbXQoZGV2LCAm
YmUtPmhvdHBsdWdfc3RhdHVzX3dhdGNoLAorCWVyciA9IHhlbmJ1c193YXRj
aF9wYXRoZm10KGRldiwgJmJlLT5ob3RwbHVnX3N0YXR1c193YXRjaCwgTlVM
TCwKIAkJCQkgICBob3RwbHVnX3N0YXR1c19jaGFuZ2VkLAogCQkJCSAgICIl
cy8lcyIsIGRldi0+bm9kZW5hbWUsICJob3RwbHVnLXN0YXR1cyIpOwogCWlm
ICghZXJyKQpkaWZmIC0tZ2l0IGEvZHJpdmVycy94ZW4veGVuLXBjaWJhY2sv
eGVuYnVzLmMgYi9kcml2ZXJzL3hlbi94ZW4tcGNpYmFjay94ZW5idXMuYwpp
bmRleCA0Yjk5ZWMzZGVjNTguLmU3YzY5MmNmYjJjZiAxMDA2NDQKLS0tIGEv
ZHJpdmVycy94ZW4veGVuLXBjaWJhY2sveGVuYnVzLmMKKysrIGIvZHJpdmVy
cy94ZW4veGVuLXBjaWJhY2sveGVuYnVzLmMKQEAgLTY4OSw3ICs2ODksNyBA
QCBzdGF0aWMgaW50IHhlbl9wY2lia194ZW5idXNfcHJvYmUoc3RydWN0IHhl
bmJ1c19kZXZpY2UgKmRldiwKIAogCS8qIHdhdGNoIHRoZSBiYWNrZW5kIG5v
ZGUgZm9yIGJhY2tlbmQgY29uZmlndXJhdGlvbiBpbmZvcm1hdGlvbiAqLwog
CWVyciA9IHhlbmJ1c193YXRjaF9wYXRoKGRldiwgZGV2LT5ub2RlbmFtZSwg
JnBkZXYtPmJlX3dhdGNoLAotCQkJCXhlbl9wY2lia19iZV93YXRjaCk7CisJ
CQkJTlVMTCwgeGVuX3BjaWJrX2JlX3dhdGNoKTsKIAlpZiAoZXJyKQogCQln
b3RvIG91dDsKIApkaWZmIC0tZ2l0IGEvZHJpdmVycy94ZW4veGVuYnVzL3hl
bmJ1c19jbGllbnQuYyBiL2RyaXZlcnMveGVuL3hlbmJ1cy94ZW5idXNfY2xp
ZW50LmMKaW5kZXggMGEyMWExMmQ5YzM0Li4wY2Q3Mjg5NjFmY2UgMTAwNjQ0
Ci0tLSBhL2RyaXZlcnMveGVuL3hlbmJ1cy94ZW5idXNfY2xpZW50LmMKKysr
IGIvZHJpdmVycy94ZW4veGVuYnVzL3hlbmJ1c19jbGllbnQuYwpAQCAtMTI3
LDE5ICsxMjcsMjIgQEAgRVhQT1JUX1NZTUJPTF9HUEwoeGVuYnVzX3N0cnN0
YXRlKTsKICAqLwogaW50IHhlbmJ1c193YXRjaF9wYXRoKHN0cnVjdCB4ZW5i
dXNfZGV2aWNlICpkZXYsIGNvbnN0IGNoYXIgKnBhdGgsCiAJCSAgICAgIHN0
cnVjdCB4ZW5idXNfd2F0Y2ggKndhdGNoLAorCQkgICAgICBib29sICgqd2ls
bF9oYW5kbGUpKHN0cnVjdCB4ZW5idXNfd2F0Y2ggKiwKKwkJCQkJICBjb25z
dCBjaGFyICosIGNvbnN0IGNoYXIgKiksCiAJCSAgICAgIHZvaWQgKCpjYWxs
YmFjaykoc3RydWN0IHhlbmJ1c193YXRjaCAqLAogCQkJCSAgICAgICBjb25z
dCBjaGFyICosIGNvbnN0IGNoYXIgKikpCiB7CiAJaW50IGVycjsKIAogCXdh
dGNoLT5ub2RlID0gcGF0aDsKLQl3YXRjaC0+d2lsbF9oYW5kbGUgPSBOVUxM
OworCXdhdGNoLT53aWxsX2hhbmRsZSA9IHdpbGxfaGFuZGxlOwogCXdhdGNo
LT5jYWxsYmFjayA9IGNhbGxiYWNrOwogCiAJZXJyID0gcmVnaXN0ZXJfeGVu
YnVzX3dhdGNoKHdhdGNoKTsKIAogCWlmIChlcnIpIHsKIAkJd2F0Y2gtPm5v
ZGUgPSBOVUxMOworCQl3YXRjaC0+d2lsbF9oYW5kbGUgPSBOVUxMOwogCQl3
YXRjaC0+Y2FsbGJhY2sgPSBOVUxMOwogCQl4ZW5idXNfZGV2X2ZhdGFsKGRl
diwgZXJyLCAiYWRkaW5nIHdhdGNoIG9uICVzIiwgcGF0aCk7CiAJfQpAQCAt
MTY2LDYgKzE2OSw4IEBAIEVYUE9SVF9TWU1CT0xfR1BMKHhlbmJ1c193YXRj
aF9wYXRoKTsKICAqLwogaW50IHhlbmJ1c193YXRjaF9wYXRoZm10KHN0cnVj
dCB4ZW5idXNfZGV2aWNlICpkZXYsCiAJCQkgc3RydWN0IHhlbmJ1c193YXRj
aCAqd2F0Y2gsCisJCQkgYm9vbCAoKndpbGxfaGFuZGxlKShzdHJ1Y3QgeGVu
YnVzX3dhdGNoICosCisJCQkJCWNvbnN0IGNoYXIgKiwgY29uc3QgY2hhciAq
KSwKIAkJCSB2b2lkICgqY2FsbGJhY2spKHN0cnVjdCB4ZW5idXNfd2F0Y2gg
KiwKIAkJCQkJICBjb25zdCBjaGFyICosIGNvbnN0IGNoYXIgKiksCiAJCQkg
Y29uc3QgY2hhciAqcGF0aGZtdCwgLi4uKQpAQCAtMTgyLDcgKzE4Nyw3IEBA
IGludCB4ZW5idXNfd2F0Y2hfcGF0aGZtdChzdHJ1Y3QgeGVuYnVzX2Rldmlj
ZSAqZGV2LAogCQl4ZW5idXNfZGV2X2ZhdGFsKGRldiwgLUVOT01FTSwgImFs
bG9jYXRpbmcgcGF0aCBmb3Igd2F0Y2giKTsKIAkJcmV0dXJuIC1FTk9NRU07
CiAJfQotCWVyciA9IHhlbmJ1c193YXRjaF9wYXRoKGRldiwgcGF0aCwgd2F0
Y2gsIGNhbGxiYWNrKTsKKwllcnIgPSB4ZW5idXNfd2F0Y2hfcGF0aChkZXYs
IHBhdGgsIHdhdGNoLCB3aWxsX2hhbmRsZSwgY2FsbGJhY2spOwogCiAJaWYg
KGVycikKIAkJa2ZyZWUocGF0aCk7CmRpZmYgLS1naXQgYS9kcml2ZXJzL3hl
bi94ZW5idXMveGVuYnVzX3Byb2JlLmMgYi9kcml2ZXJzL3hlbi94ZW5idXMv
eGVuYnVzX3Byb2JlLmMKaW5kZXggMzg3MjVkOTdkOTA5Li40YzNkMWI4NGFh
MGEgMTAwNjQ0Ci0tLSBhL2RyaXZlcnMveGVuL3hlbmJ1cy94ZW5idXNfcHJv
YmUuYworKysgYi9kcml2ZXJzL3hlbi94ZW5idXMveGVuYnVzX3Byb2JlLmMK
QEAgLTEzNiw3ICsxMzYsNyBAQCBzdGF0aWMgaW50IHdhdGNoX290aGVyZW5k
KHN0cnVjdCB4ZW5idXNfZGV2aWNlICpkZXYpCiAJCWNvbnRhaW5lcl9vZihk
ZXYtPmRldi5idXMsIHN0cnVjdCB4ZW5fYnVzX3R5cGUsIGJ1cyk7CiAKIAly
ZXR1cm4geGVuYnVzX3dhdGNoX3BhdGhmbXQoZGV2LCAmZGV2LT5vdGhlcmVu
ZF93YXRjaCwKLQkJCQkgICAgYnVzLT5vdGhlcmVuZF9jaGFuZ2VkLAorCQkJ
CSAgICBOVUxMLCBidXMtPm90aGVyZW5kX2NoYW5nZWQsCiAJCQkJICAgICIl
cy8lcyIsIGRldi0+b3RoZXJlbmQsICJzdGF0ZSIpOwogfQogCmRpZmYgLS1n
aXQgYS9pbmNsdWRlL3hlbi94ZW5idXMuaCBiL2luY2x1ZGUveGVuL3hlbmJ1
cy5oCmluZGV4IGJhYTg4YmYwYjliYy4uYzg1NzRkMWI4MTRjIDEwMDY0NAot
LS0gYS9pbmNsdWRlL3hlbi94ZW5idXMuaAorKysgYi9pbmNsdWRlL3hlbi94
ZW5idXMuaApAQCAtMjA0LDEwICsyMDQsMTQgQEAgdm9pZCB4ZW5idXNfcHJv
YmUoc3RydWN0IHdvcmtfc3RydWN0ICopOwogCiBpbnQgeGVuYnVzX3dhdGNo
X3BhdGgoc3RydWN0IHhlbmJ1c19kZXZpY2UgKmRldiwgY29uc3QgY2hhciAq
cGF0aCwKIAkJICAgICAgc3RydWN0IHhlbmJ1c193YXRjaCAqd2F0Y2gsCisJ
CSAgICAgIGJvb2wgKCp3aWxsX2hhbmRsZSkoc3RydWN0IHhlbmJ1c193YXRj
aCAqLAorCQkJCQkgIGNvbnN0IGNoYXIgKiwgY29uc3QgY2hhciAqKSwKIAkJ
ICAgICAgdm9pZCAoKmNhbGxiYWNrKShzdHJ1Y3QgeGVuYnVzX3dhdGNoICos
CiAJCQkJICAgICAgIGNvbnN0IGNoYXIgKiwgY29uc3QgY2hhciAqKSk7Ci1f
X3ByaW50Zig0LCA1KQorX19wcmludGYoNSwgNikKIGludCB4ZW5idXNfd2F0
Y2hfcGF0aGZtdChzdHJ1Y3QgeGVuYnVzX2RldmljZSAqZGV2LCBzdHJ1Y3Qg
eGVuYnVzX3dhdGNoICp3YXRjaCwKKwkJCSBib29sICgqd2lsbF9oYW5kbGUp
KHN0cnVjdCB4ZW5idXNfd2F0Y2ggKiwKKwkJCQkJICAgICBjb25zdCBjaGFy
ICosIGNvbnN0IGNoYXIgKiksCiAJCQkgdm9pZCAoKmNhbGxiYWNrKShzdHJ1
Y3QgeGVuYnVzX3dhdGNoICosCiAJCQkJCSAgY29uc3QgY2hhciAqLCBjb25z
dCBjaGFyICopLAogCQkJIGNvbnN0IGNoYXIgKnBhdGhmbXQsIC4uLik7Ci0t
IAoyLjE3LjEKCg==

--=separator
Content-Type: application/octet-stream; name="xsa349/xsa349-linux-3.patch"
Content-Disposition: attachment; filename="xsa349/xsa349-linux-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogQXV0aG9yIFJlZGFjdGVkIDxzZWN1cml0eUB4ZW4ub3JnPgpTdWJq
ZWN0OiBbUEFUQ0ggMy81XSB4ZW4veGVuYnVzL3hlbl9idXNfdHlwZTogU3Vw
cG9ydCB3aWxsX2hhbmRsZSB3YXRjaAogY2FsbGJhY2sKClRoaXMgY29tbWl0
IGFkZHMgc3VwcG9ydCBvZiB0aGUgJ3dpbGxfaGFuZGxlJyB3YXRjaCBjYWxs
YmFjayBmb3IKJ3hlbl9idXNfdHlwZScgdXNlcnMuCgpTaWduZWQtb2ZmLWJ5
OiBTZW9uZ0phZSBQYXJrIDxzanBhcmtAYW1hem9uLmRlPgpSZXBvcnRlZC1i
eTogTWljaGFlbCBLdXJ0aCA8bWt1QGFtYXpvbi5kZT4KUmVwb3J0ZWQtYnk6
IFBhd2VsIFdpZWN6b3JraWV3aWN6IDx3aXBhd2VsQGFtYXpvbi5kZT4KU2ln
bmVkLW9mZi1ieTogQXV0aG9yIFJlZGFjdGVkIDxzZWN1cml0eUB4ZW4ub3Jn
PgpSZXZpZXdlZC1ieTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29t
PgotLS0KIGRyaXZlcnMveGVuL3hlbmJ1cy94ZW5idXMuaCAgICAgICB8IDIg
KysKIGRyaXZlcnMveGVuL3hlbmJ1cy94ZW5idXNfcHJvYmUuYyB8IDMgKyst
CiAyIGZpbGVzIGNoYW5nZWQsIDQgaW5zZXJ0aW9ucygrKSwgMSBkZWxldGlv
bigtKQoKZGlmZiAtLWdpdCBhL2RyaXZlcnMveGVuL3hlbmJ1cy94ZW5idXMu
aCBiL2RyaXZlcnMveGVuL3hlbmJ1cy94ZW5idXMuaAppbmRleCA1ZjViOGE3
ZDViODAuLjJhOTNiN2M5YzE1OSAxMDA2NDQKLS0tIGEvZHJpdmVycy94ZW4v
eGVuYnVzL3hlbmJ1cy5oCisrKyBiL2RyaXZlcnMveGVuL3hlbmJ1cy94ZW5i
dXMuaApAQCAtNDQsNiArNDQsOCBAQCBzdHJ1Y3QgeGVuX2J1c190eXBlIHsK
IAlpbnQgKCpnZXRfYnVzX2lkKShjaGFyIGJ1c19pZFtYRU5fQlVTX0lEX1NJ
WkVdLCBjb25zdCBjaGFyICpub2RlbmFtZSk7CiAJaW50ICgqcHJvYmUpKHN0
cnVjdCB4ZW5fYnVzX3R5cGUgKmJ1cywgY29uc3QgY2hhciAqdHlwZSwKIAkJ
ICAgICBjb25zdCBjaGFyICpkaXIpOworCWJvb2wgKCpvdGhlcmVuZF93aWxs
X2hhbmRsZSkoc3RydWN0IHhlbmJ1c193YXRjaCAqd2F0Y2gsCisJCQkJICAg
ICBjb25zdCBjaGFyICpwYXRoLCBjb25zdCBjaGFyICp0b2tlbik7CiAJdm9p
ZCAoKm90aGVyZW5kX2NoYW5nZWQpKHN0cnVjdCB4ZW5idXNfd2F0Y2ggKndh
dGNoLCBjb25zdCBjaGFyICpwYXRoLAogCQkJCSBjb25zdCBjaGFyICp0b2tl
bik7CiAJc3RydWN0IGJ1c190eXBlIGJ1czsKZGlmZiAtLWdpdCBhL2RyaXZl
cnMveGVuL3hlbmJ1cy94ZW5idXNfcHJvYmUuYyBiL2RyaXZlcnMveGVuL3hl
bmJ1cy94ZW5idXNfcHJvYmUuYwppbmRleCA0YzNkMWI4NGFhMGEuLjQ0NjM0
ZDk3MGE1YyAxMDA2NDQKLS0tIGEvZHJpdmVycy94ZW4veGVuYnVzL3hlbmJ1
c19wcm9iZS5jCisrKyBiL2RyaXZlcnMveGVuL3hlbmJ1cy94ZW5idXNfcHJv
YmUuYwpAQCAtMTM2LDcgKzEzNiw4IEBAIHN0YXRpYyBpbnQgd2F0Y2hfb3Ro
ZXJlbmQoc3RydWN0IHhlbmJ1c19kZXZpY2UgKmRldikKIAkJY29udGFpbmVy
X29mKGRldi0+ZGV2LmJ1cywgc3RydWN0IHhlbl9idXNfdHlwZSwgYnVzKTsK
IAogCXJldHVybiB4ZW5idXNfd2F0Y2hfcGF0aGZtdChkZXYsICZkZXYtPm90
aGVyZW5kX3dhdGNoLAotCQkJCSAgICBOVUxMLCBidXMtPm90aGVyZW5kX2No
YW5nZWQsCisJCQkJICAgIGJ1cy0+b3RoZXJlbmRfd2lsbF9oYW5kbGUsCisJ
CQkJICAgIGJ1cy0+b3RoZXJlbmRfY2hhbmdlZCwKIAkJCQkgICAgIiVzLyVz
IiwgZGV2LT5vdGhlcmVuZCwgInN0YXRlIik7CiB9CiAKLS0gCjIuMTcuMQoK

--=separator
Content-Type: application/octet-stream; name="xsa349/xsa349-linux-4.patch"
Content-Disposition: attachment; filename="xsa349/xsa349-linux-4.patch"
Content-Transfer-Encoding: base64

RnJvbTogQXV0aG9yIFJlZGFjdGVkIDxzZWN1cml0eUB4ZW4ub3JnPgpTdWJq
ZWN0OiBbUEFUQ0ggNC81XSB4ZW4veGVuYnVzOiBDb3VudCBwZW5kaW5nIG1l
c3NhZ2VzIGZvciBlYWNoIHdhdGNoCgpUaGlzIGNvbW1pdCBhZGRzIGEgY291
bnRlciBvZiBwZW5kaW5nIG1lc3NhZ2VzIGZvciBlYWNoIHdhdGNoIGluIHRo
ZQpzdHJ1Y3QuICBJdCBpcyB1c2VkIHRvIHNraXAgdW5uZWNlc3NhcnkgcGVu
ZGluZyBtZXNzYWdlcyBsb29rdXAgaW4KJ3VucmVnaXN0ZXJfeGVuYnVzX3dh
dGNoKCknLiAgSXQgY291bGQgYWxzbyBiZSB1c2VkIGluICd3aWxsX2hhbmRs
ZScKY2FsbGJhY2suCgpTaWduZWQtb2ZmLWJ5OiBTZW9uZ0phZSBQYXJrIDxz
anBhcmtAYW1hem9uLmRlPgpSZXBvcnRlZC1ieTogTWljaGFlbCBLdXJ0aCA8
bWt1QGFtYXpvbi5kZT4KUmVwb3J0ZWQtYnk6IFBhd2VsIFdpZWN6b3JraWV3
aWN6IDx3aXBhd2VsQGFtYXpvbi5kZT4KU2lnbmVkLW9mZi1ieTogQXV0aG9y
IFJlZGFjdGVkIDxzZWN1cml0eUB4ZW4ub3JnPgpSZXZpZXdlZC1ieTogSnVl
cmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgotLS0KIGRyaXZlcnMveGVu
L3hlbmJ1cy94ZW5idXNfeHMuYyB8IDI5ICsrKysrKysrKysrKysrKysrKy0t
LS0tLS0tLS0tCiBpbmNsdWRlL3hlbi94ZW5idXMuaCAgICAgICAgICAgfCAg
MiArKwogMiBmaWxlcyBjaGFuZ2VkLCAyMCBpbnNlcnRpb25zKCspLCAxMSBk
ZWxldGlvbnMoLSkKCmRpZmYgLS1naXQgYS9kcml2ZXJzL3hlbi94ZW5idXMv
eGVuYnVzX3hzLmMgYi9kcml2ZXJzL3hlbi94ZW5idXMveGVuYnVzX3hzLmMK
aW5kZXggZThiZGJkMGExZTI2Li4xMmUwMmViMDFmNTkgMTAwNjQ0Ci0tLSBh
L2RyaXZlcnMveGVuL3hlbmJ1cy94ZW5idXNfeHMuYworKysgYi9kcml2ZXJz
L3hlbi94ZW5idXMveGVuYnVzX3hzLmMKQEAgLTcxMSw2ICs3MTEsNyBAQCBp
bnQgeHNfd2F0Y2hfbXNnKHN0cnVjdCB4c193YXRjaF9ldmVudCAqZXZlbnQp
CiAJCQkJIGV2ZW50LT5wYXRoLCBldmVudC0+dG9rZW4pKSkgewogCQlzcGlu
X2xvY2soJndhdGNoX2V2ZW50c19sb2NrKTsKIAkJbGlzdF9hZGRfdGFpbCgm
ZXZlbnQtPmxpc3QsICZ3YXRjaF9ldmVudHMpOworCQlldmVudC0+aGFuZGxl
LT5ucl9wZW5kaW5nKys7CiAJCXdha2VfdXAoJndhdGNoX2V2ZW50c193YWl0
cSk7CiAJCXNwaW5fdW5sb2NrKCZ3YXRjaF9ldmVudHNfbG9jayk7CiAJfSBl
bHNlCkBAIC03NjgsNiArNzY5LDggQEAgaW50IHJlZ2lzdGVyX3hlbmJ1c193
YXRjaChzdHJ1Y3QgeGVuYnVzX3dhdGNoICp3YXRjaCkKIAogCXNwcmludGYo
dG9rZW4sICIlbFgiLCAobG9uZyl3YXRjaCk7CiAKKwl3YXRjaC0+bnJfcGVu
ZGluZyA9IDA7CisKIAlkb3duX3JlYWQoJnhzX3dhdGNoX3J3c2VtKTsKIAog
CXNwaW5fbG9jaygmd2F0Y2hlc19sb2NrKTsKQEAgLTgxNywxMSArODIwLDE0
IEBAIHZvaWQgdW5yZWdpc3Rlcl94ZW5idXNfd2F0Y2goc3RydWN0IHhlbmJ1
c193YXRjaCAqd2F0Y2gpCiAKIAkvKiBDYW5jZWwgcGVuZGluZyB3YXRjaCBl
dmVudHMuICovCiAJc3Bpbl9sb2NrKCZ3YXRjaF9ldmVudHNfbG9jayk7Ci0J
bGlzdF9mb3JfZWFjaF9lbnRyeV9zYWZlKGV2ZW50LCB0bXAsICZ3YXRjaF9l
dmVudHMsIGxpc3QpIHsKLQkJaWYgKGV2ZW50LT5oYW5kbGUgIT0gd2F0Y2gp
Ci0JCQljb250aW51ZTsKLQkJbGlzdF9kZWwoJmV2ZW50LT5saXN0KTsKLQkJ
a2ZyZWUoZXZlbnQpOworCWlmICh3YXRjaC0+bnJfcGVuZGluZykgeworCQls
aXN0X2Zvcl9lYWNoX2VudHJ5X3NhZmUoZXZlbnQsIHRtcCwgJndhdGNoX2V2
ZW50cywgbGlzdCkgeworCQkJaWYgKGV2ZW50LT5oYW5kbGUgIT0gd2F0Y2gp
CisJCQkJY29udGludWU7CisJCQlsaXN0X2RlbCgmZXZlbnQtPmxpc3QpOwor
CQkJa2ZyZWUoZXZlbnQpOworCQl9CisJCXdhdGNoLT5ucl9wZW5kaW5nID0g
MDsKIAl9CiAJc3Bpbl91bmxvY2soJndhdGNoX2V2ZW50c19sb2NrKTsKIApA
QCAtODY4LDcgKzg3NCw2IEBAIHZvaWQgeHNfc3VzcGVuZF9jYW5jZWwodm9p
ZCkKIAogc3RhdGljIGludCB4ZW53YXRjaF90aHJlYWQodm9pZCAqdW51c2Vk
KQogewotCXN0cnVjdCBsaXN0X2hlYWQgKmVudDsKIAlzdHJ1Y3QgeHNfd2F0
Y2hfZXZlbnQgKmV2ZW50OwogCiAJeGVud2F0Y2hfcGlkID0gY3VycmVudC0+
cGlkOwpAQCAtODgzLDEzICs4ODgsMTUgQEAgc3RhdGljIGludCB4ZW53YXRj
aF90aHJlYWQodm9pZCAqdW51c2VkKQogCQltdXRleF9sb2NrKCZ4ZW53YXRj
aF9tdXRleCk7CiAKIAkJc3Bpbl9sb2NrKCZ3YXRjaF9ldmVudHNfbG9jayk7
Ci0JCWVudCA9IHdhdGNoX2V2ZW50cy5uZXh0OwotCQlpZiAoZW50ICE9ICZ3
YXRjaF9ldmVudHMpCi0JCQlsaXN0X2RlbChlbnQpOworCQlldmVudCA9IGxp
c3RfZmlyc3RfZW50cnlfb3JfbnVsbCgmd2F0Y2hfZXZlbnRzLAorCQkJCXN0
cnVjdCB4c193YXRjaF9ldmVudCwgbGlzdCk7CisJCWlmIChldmVudCkgewor
CQkJbGlzdF9kZWwoJmV2ZW50LT5saXN0KTsKKwkJCWV2ZW50LT5oYW5kbGUt
Pm5yX3BlbmRpbmctLTsKKwkJfQogCQlzcGluX3VubG9jaygmd2F0Y2hfZXZl
bnRzX2xvY2spOwogCi0JCWlmIChlbnQgIT0gJndhdGNoX2V2ZW50cykgewot
CQkJZXZlbnQgPSBsaXN0X2VudHJ5KGVudCwgc3RydWN0IHhzX3dhdGNoX2V2
ZW50LCBsaXN0KTsKKwkJaWYgKGV2ZW50KSB7CiAJCQlldmVudC0+aGFuZGxl
LT5jYWxsYmFjayhldmVudC0+aGFuZGxlLCBldmVudC0+cGF0aCwKIAkJCQkJ
CWV2ZW50LT50b2tlbik7CiAJCQlrZnJlZShldmVudCk7CmRpZmYgLS1naXQg
YS9pbmNsdWRlL3hlbi94ZW5idXMuaCBiL2luY2x1ZGUveGVuL3hlbmJ1cy5o
CmluZGV4IGM4NTc0ZDFiODE0Yy4uMDBjNzIzNWFlOTNlIDEwMDY0NAotLS0g
YS9pbmNsdWRlL3hlbi94ZW5idXMuaAorKysgYi9pbmNsdWRlL3hlbi94ZW5i
dXMuaApAQCAtNjEsNiArNjEsOCBAQCBzdHJ1Y3QgeGVuYnVzX3dhdGNoCiAJ
LyogUGF0aCBiZWluZyB3YXRjaGVkLiAqLwogCWNvbnN0IGNoYXIgKm5vZGU7
CiAKKwl1bnNpZ25lZCBpbnQgbnJfcGVuZGluZzsKKwogCS8qCiAJICogQ2Fs
bGVkIGp1c3QgYmVmb3JlIGVucXVlaW5nIG5ldyBldmVudCB3aGlsZSBhIHNw
aW5sb2NrIGlzIGhlbGQuCiAJICogVGhlIGV2ZW50IHdpbGwgYmUgZGlzY2Fy
ZGVkIGlmIHRoaXMgY2FsbGJhY2sgcmV0dXJucyBmYWxzZS4KLS0gCjIuMTcu
MQoK

--=separator
Content-Type: application/octet-stream; name="xsa349/xsa349-linux-5.patch"
Content-Disposition: attachment; filename="xsa349/xsa349-linux-5.patch"
Content-Transfer-Encoding: base64

RnJvbTogQXV0aG9yIFJlZGFjdGVkIDxzZWN1cml0eUB4ZW4ub3JnPgpTdWJq
ZWN0OiBbUEFUQ0ggNS81XSB4ZW5idXMveGVuYnVzX2JhY2tlbmQ6IERpc2Fs
bG93IHBlbmRpbmcgd2F0Y2ggbWVzc2FnZXMKCid4ZW5idXNfYmFja2VuZCcg
d2F0Y2hlcyAnc3RhdGUnIG9mIGRldmljZXMsIHdoaWNoIGlzIHdyaXRhYmxl
IGJ5Cmd1ZXN0cy4gIEhlbmNlLCBpZiBndWVzdHMgaW50ZW5zaXZlbHkgdXBk
YXRlcyBpdCwgZG9tMCB3aWxsIGhhdmUgbG90cyBvZgpwZW5kaW5nIGV2ZW50
cyB0aGF0IGV4aGF1c3RpbmcgbWVtb3J5IG9mIGRvbTAuICBJbiBvdGhlciB3
b3JkcywgZ3Vlc3RzCmNhbiB0cmlnZ2VyIGRvbTAgbWVtb3J5IHByZXNzdXJl
LiAgVGhpcyBpcyBrbm93biBhcyBYU0EtMzQ5LiAgSG93ZXZlciwKdGhlIHdh
dGNoIGNhbGxiYWNrIG9mIGl0LCAnZnJvbnRlbmRfY2hhbmdlZCgpJywgcmVh
ZHMgb25seSAnc3RhdGUnLCBzbwpkb2Vzbid0IG5lZWQgdG8gaGF2ZSB0aGUg
cGVuZGluZyBldmVudHMuCgpUbyBhdm9pZCB0aGUgcHJvYmxlbSwgdGhpcyBj
b21taXQgZGlzYWxsb3dzIHBlbmRpbmcgd2F0Y2ggbWVzc2FnZXMgZm9yCid4
ZW5idXNfYmFja2VuZCcgdXNpbmcgdGhlICd3aWxsX2hhbmRsZSgpJyB3YXRj
aCBjYWxsYmFjay4KClNpZ25lZC1vZmYtYnk6IFNlb25nSmFlIFBhcmsgPHNq
cGFya0BhbWF6b24uZGU+ClJlcG9ydGVkLWJ5OiBNaWNoYWVsIEt1cnRoIDxt
a3VAYW1hem9uLmRlPgpSZXBvcnRlZC1ieTogUGF3ZWwgV2llY3pvcmtpZXdp
Y3ogPHdpcGF3ZWxAYW1hem9uLmRlPgpTaWduZWQtb2ZmLWJ5OiBBdXRob3Ig
UmVkYWN0ZWQgPHNlY3VyaXR5QHhlbi5vcmc+ClJldmlld2VkLWJ5OiBKdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+Ci0tLQogZHJpdmVycy94ZW4v
eGVuYnVzL3hlbmJ1c19wcm9iZV9iYWNrZW5kLmMgfCA3ICsrKysrKysKIDEg
ZmlsZSBjaGFuZ2VkLCA3IGluc2VydGlvbnMoKykKCmRpZmYgLS1naXQgYS9k
cml2ZXJzL3hlbi94ZW5idXMveGVuYnVzX3Byb2JlX2JhY2tlbmQuYyBiL2Ry
aXZlcnMveGVuL3hlbmJ1cy94ZW5idXNfcHJvYmVfYmFja2VuZC5jCmluZGV4
IDJiYTY5OTg5N2U2ZC4uNWFiZGVkOTdlMWE3IDEwMDY0NAotLS0gYS9kcml2
ZXJzL3hlbi94ZW5idXMveGVuYnVzX3Byb2JlX2JhY2tlbmQuYworKysgYi9k
cml2ZXJzL3hlbi94ZW5idXMveGVuYnVzX3Byb2JlX2JhY2tlbmQuYwpAQCAt
MTgwLDYgKzE4MCwxMiBAQCBzdGF0aWMgaW50IHhlbmJ1c19wcm9iZV9iYWNr
ZW5kKHN0cnVjdCB4ZW5fYnVzX3R5cGUgKmJ1cywgY29uc3QgY2hhciAqdHlw
ZSwKIAlyZXR1cm4gZXJyOwogfQogCitzdGF0aWMgYm9vbCBmcm9udGVuZF93
aWxsX2hhbmRsZShzdHJ1Y3QgeGVuYnVzX3dhdGNoICp3YXRjaCwKKwkJCQkg
Y29uc3QgY2hhciAqcGF0aCwgY29uc3QgY2hhciAqdG9rZW4pCit7CisJcmV0
dXJuIHdhdGNoLT5ucl9wZW5kaW5nID09IDA7Cit9CisKIHN0YXRpYyB2b2lk
IGZyb250ZW5kX2NoYW5nZWQoc3RydWN0IHhlbmJ1c193YXRjaCAqd2F0Y2gs
CiAJCQkgICAgIGNvbnN0IGNoYXIgKnBhdGgsIGNvbnN0IGNoYXIgKnRva2Vu
KQogewpAQCAtMTkxLDYgKzE5Nyw3IEBAIHN0YXRpYyBzdHJ1Y3QgeGVuX2J1
c190eXBlIHhlbmJ1c19iYWNrZW5kID0gewogCS5sZXZlbHMgPSAzLAkJLyog
YmFja2VuZC90eXBlLzxmcm9udGVuZD4vPGlkPiAqLwogCS5nZXRfYnVzX2lk
ID0gYmFja2VuZF9idXNfaWQsCiAJLnByb2JlID0geGVuYnVzX3Byb2JlX2Jh
Y2tlbmQsCisJLm90aGVyZW5kX3dpbGxfaGFuZGxlID0gZnJvbnRlbmRfd2ls
bF9oYW5kbGUsCiAJLm90aGVyZW5kX2NoYW5nZWQgPSBmcm9udGVuZF9jaGFu
Z2VkLAogCS5idXMgPSB7CiAJCS5uYW1lCQk9ICJ4ZW4tYmFja2VuZCIsCi0t
IAoyLjE3LjEKCg==

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 12:30:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 12:30:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53428.93195 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9TW-0002QI-Ka; Tue, 15 Dec 2020 12:30:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53428.93195; Tue, 15 Dec 2020 12:30:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9TW-0002Pt-CM; Tue, 15 Dec 2020 12:30:42 +0000
Received: by outflank-mailman (input) for mailman id 53428;
 Tue, 15 Dec 2020 12:30:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kp9QP-0004tM-Mx
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 12:27:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6d6c641c-de9c-40d7-a22d-998d6435a25d;
 Tue, 15 Dec 2020 12:26:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0DD0DADB3;
 Tue, 15 Dec 2020 12:26:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6d6c641c-de9c-40d7-a22d-998d6435a25d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608035167; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=W/+NQPp9SmxciIO27W4TJTaEMziV4Us8QYmGpci5wz8=;
	b=I7pIBULyYlgvQCQb2uACyroNg/x7g7yRZRQ8osRIwFf+LwlSltp1xO9j3rtNnH/XkB83xy
	8NEbXjaNtGe6WDyPSkUp5gsdOWp0UYxvpyALuB2SJRvz1Y+GSL/Lp6p2CPub0Ivl3QzdQu
	00/E6V/N+BxbQZyoP4wEJ2Cn5mgYG/g=
From: Juergen Gross <jgross@suse.com>
To: torvalds@linux-foundation.org
Cc: linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: [GIT PULL] xen: branch for v5.11-rc1
Date: Tue, 15 Dec 2020 13:26:06 +0100
Message-Id: <20201215122606.6874-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.11-rc1-tag

xen: branch for v5.11-rc1

It contains fixes for security issues that have just been disclosed:

- a 5-patch series fixing XSA-349 (DoS via resource depletion in
  Xen dom0)

- a patch fixing XSA-350 (access of a stale pointer in Xen dom0)


Thanks.

Juergen

 drivers/block/xen-blkback/xenbus.c        |  4 +++-
 drivers/net/xen-netback/xenbus.c          |  6 +++++-
 drivers/xen/xen-pciback/xenbus.c          |  2 +-
 drivers/xen/xenbus/xenbus.h               |  2 ++
 drivers/xen/xenbus/xenbus_client.c        |  8 +++++++-
 drivers/xen/xenbus/xenbus_probe.c         |  1 +
 drivers/xen/xenbus/xenbus_probe_backend.c |  7 +++++++
 drivers/xen/xenbus/xenbus_xs.c            | 34 ++++++++++++++++++++-----------
 include/xen/xenbus.h                      | 15 +++++++++++++-
 9 files changed, 62 insertions(+), 17 deletions(-)

Pawel Wieczorkiewicz (1):
      xen-blkback: set ring->xenblkd to NULL after kthread_stop()

SeongJae Park (5):
      xen/xenbus: Allow watches discard events before queueing
      xen/xenbus: Add 'will_handle' callback support in xenbus_watch_path()
      xen/xenbus/xen_bus_type: Support will_handle watch callback
      xen/xenbus: Count pending messages for each watch
      xenbus/xenbus_backend: Disallow pending watch messages


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 12:30:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 12:30:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53421.93168 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9TT-0002Jo-W3; Tue, 15 Dec 2020 12:30:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53421.93168; Tue, 15 Dec 2020 12:30:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9TT-0002Jb-QB; Tue, 15 Dec 2020 12:30:39 +0000
Received: by outflank-mailman (input) for mailman id 53421;
 Tue, 15 Dec 2020 12:30:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tdgx=FT=xenbits.xen.org=gdunlap@srs-us1.protection.inumbo.net>)
 id 1kp9LA-0004tM-4b
 for xen-devel@lists.xen.org; Tue, 15 Dec 2020 12:22:04 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e67aa02f-9b8c-4dd9-a4d0-60106c7525b1;
 Tue, 15 Dec 2020 12:20:31 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9Jb-0005kJ-Cm; Tue, 15 Dec 2020 12:20:27 +0000
Received: from gdunlap by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9Jb-0007Au-Bq; Tue, 15 Dec 2020 12:20:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e67aa02f-9b8c-4dd9-a4d0-60106c7525b1
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=LY96pn3vSH/hD6YioZ1DWoLiSBci+jA7YDmltUbIXqE=; b=TYAyDH+o2yI6kX9ThFdwxQc+jU
	vlKEqaIzAzIqYJo1VGh2hUOgS/ja/oSEx0cocBeR4k+AWSONaEggQhvNqCnz5bELmIHn5xV5GX005
	xsW4embEE4tSzhg6mzRqQuR7LeAPtrmSN+NIT941L1u+x2nvnHMPENMYH1eFG60uODM4=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 356 v3 (CVE-2020-29567) - infinite loop
 when cleaning up IRQ vectors
Message-Id: <E1kp9Jb-0007Au-Bq@xenbits.xenproject.org>
Date: Tue, 15 Dec 2020 12:20:27 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2020-29567 / XSA-356
                               version 3

              infinite loop when cleaning up IRQ vectors

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

When moving IRQs between CPUs to distribute the load of IRQ handling,
IRQ vectors are dynamically allocated and de-allocated on the relevant
CPUs.  De-allocation has to happen when certain constraints are met.
If these constraints are not met when first checked, the checking CPU
may send an interrupt to itself, in the expectation that this IRQ will
be delivered only after the constraint preventing the cleanup has
cleared.  For two specific IRQ vectors this expectation was violated,
resulting in a continuous stream of self-interrupts, which renders the
CPU effectively unusable.

IMPACT
======

A domain with a passed through PCI device can cause lockup of a
physical CPU, resulting in a Denial of Service (DoS) to the entire
host.

VULNERABLE SYSTEMS
==================

Only Xen 4.14 is affected.  Xen versions 4.13 and older are not
affected.

Only x86 systems are vulnerable.  Arm systems are not vulnerable.

Only guests with physical PCI devices passed through to them can exploit
the vulnerability.

MITIGATION
==========

There is no known mitigation.

CREDITS
=======

This issue was discovered by Roger Pau Monné of Citrix.

RESOLUTION
==========

Applying the attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa356.patch           xen-unstable - Xen 4.14.x

$ sha256sum xsa356*
77316e3b86e2482ee9741db7484d323a399028762af1c88734f8c83e78069fb3  xsa356.meta
21c217e41549bf74d5fcc26f1d23b6d902c5c72de5e2c8490842aea9f999b036  xsa356.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl/YqeAMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZv4cIAIdqAn7O/TicwVod/L1Lktuk94g73LQlhRxMFnQ2
CoFrIBJtvyFq0m0OqRcVav3hb8wa7EdbmbJXgvoC4emKUcIcUkMA/dyvUi9SKdGP
5iQDL0Vsasq7rQN5vjuUA6KIDp4qyT87mxNLUwMzwrXDORFHT9YZO/SZLY37WU7S
UX0qaDh9FpwtdB4nDULqNimAZcy1yonXkD8bb6jDmHIeTx33cfe4BNvYqApwTPD8
fxctAlsYHLuwfnEBdQ+cadfcjF/PqkRcsGtMk6hGRn2hEscEfHWMH9I/R9lZvyj5
CjfFKzb2WpDu3KUuJJJBTavkZ97Bs+flVNGLrQ/AgKoitQs=
=vDoA
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa356.meta"
Content-Disposition: attachment; filename="xsa356.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzNTYsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNCIKICBdLAogICJUcmVlcyI6IFsKICAgICJ4
ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjE0IjogewogICAgICAi
UmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0YWJs
ZVJlZiI6ICIxZDFkMWY1MzkxOTc2NDU2YTc5ZGFhYzBkY2ZlNzE1N2RhMWU1
NGY3IiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAzNTMs
CiAgICAgICAgICAgIDExNSwKICAgICAgICAgICAgMzIyLAogICAgICAgICAg
ICAzMjMsCiAgICAgICAgICAgIDMyNCwKICAgICAgICAgICAgMzI1LAogICAg
ICAgICAgICAzMzAsCiAgICAgICAgICAgIDM1MiwKICAgICAgICAgICAgMzQ4
CiAgICAgICAgICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAg
ICAgICJ4c2EzNTYucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAgfQogICAg
ICB9CiAgICB9LAogICAgIm1hc3RlciI6IHsKICAgICAgIlJlY2lwZXMiOiB7
CiAgICAgICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiM2Fl
NDY5YWY4ZTY4MGRmMzFlZWNkMGEyYWM2YTgzYjU4YWQ3Y2U1MyIsCiAgICAg
ICAgICAiUHJlcmVxcyI6IFsKICAgICAgICAgICAgMzUzLAogICAgICAgICAg
ICAxMTUsCiAgICAgICAgICAgIDMyMiwKICAgICAgICAgICAgMzIzLAogICAg
ICAgICAgICAzMjQsCiAgICAgICAgICAgIDMyNSwKICAgICAgICAgICAgMzMw
LAogICAgICAgICAgICAzNTIsCiAgICAgICAgICAgIDM0OAogICAgICAgICAg
XSwKICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzU2
LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAgfQog
IH0KfQ==

--=separator
Content-Type: application/octet-stream; name="xsa356.patch"
Content-Disposition: attachment; filename="xsa356.patch"
Content-Transfer-Encoding: base64

RnJvbTogUm9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1QGNpdHJpeC5jb20+
ClN1YmplY3Q6IHg4Ni9pcnE6IGZpeCBpbmZpbml0ZSBsb29wIGluIGlycV9t
b3ZlX2NsZWFudXBfaW50ZXJydXB0CgpJZiBYZW4gZW50ZXJzIGlycV9tb3Zl
X2NsZWFudXBfaW50ZXJydXB0IHdpdGggYSBkeW5hbWljIHZlY3RvciBiZWxv
dwpJUlFfTU9WRV9DTEVBTlVQX1ZFQ1RPUiBwZW5kaW5nIGluIElSUiAoMHgy
MCBvciAweDIxKSB0aGF0J3MgYWxzbwpkZXNpZ25hdGVkIGZvciBhIGNsZWFu
dXAgaXQgd2lsbCBlbnRlciBhIGxvb3Agd2hlcmUKaXJxX21vdmVfY2xlYW51
cF9pbnRlcnJ1cHQgY29udGludW91c2x5IHNlbmRzIGEgY2xlYW51cCBJUEkg
KHZlY3RvcgoweDIyKSB0byBpdHNlbGYgd2hpbGUgd2FpdGluZyBmb3IgdGhl
IHZlY3RvciB3aXRoIGxvd2VyIHByaW9yaXR5IHRvIGJlCmluamVjdGVkIC0g
d2hpY2ggd2lsbCBuZXZlciBoYXBwZW4gYmVjYXVzZSBJUlFfTU9WRV9DTEVB
TlVQX1ZFQ1RPUgp0YWtlcyBwcmVjZWRlbmNlIGFuZCBpdCdzIGFsd2F5cyBp
bmplY3RlZCBmaXJzdC4KCkZpeCB0aGlzIGJ5IG1ha2luZyBzdXJlIHZlY3Rv
cnMgYmVsb3cgSVJRX01PVkVfQ0xFQU5VUF9WRUNUT1IgYXJlCm1hcmtlZCBh
cyB1c2VkIGFuZCB0aHVzIG5vdCBhdmFpbGFibGUgZm9yIEFQcy4gQWxzbyBh
ZGQgc29tZSBsb2dpYyB0bwphc3NlcnQgYW5kIHByZXZlbnQgaXJxX21vdmVf
Y2xlYW51cF9pbnRlcnJ1cHQgZnJvbSBlbnRlcmluZyBzdWNoIGFuCmluZmlu
aXRlIGxvb3AsIGFsYmVpdCB0aGF0IHNob3VsZCBuZXZlciBoYXBwZW4gZ2l2
ZW4gdGhlIGN1cnJlbnQgY29kZS4KClRoaXMgaXMgWFNBLTM1NiAvIENWRS0y
MDIwLTI5NTY3LgoKRml4ZXM6IDNmYmEwNmJhOWY4ICgneDg2L0lSUTogcmUt
dXNlIGxlZ2FjeSB2ZWN0b3IgcmFuZ2VzIG9uIEFQcycpClNpZ25lZC1vZmYt
Ynk6IFJvZ2VyIFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgpS
ZXZpZXdlZC1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgoK
LS0tIGEveGVuL2FyY2gveDg2L2lycS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9p
cnEuYwpAQCAtNDQxLDggKzQ0MSwxNSBAQCBpbnQgX19pbml0IGluaXRfaXJx
X2RhdGEodm9pZCkKICAgICBzZXRfYml0KEhZUEVSQ0FMTF9WRUNUT1IsIHVz
ZWRfdmVjdG9ycyk7CiAjZW5kaWYKICAgICAKLSAgICAvKiBJUlFfTU9WRV9D
TEVBTlVQX1ZFQ1RPUiB1c2VkIGZvciBjbGVhbiB1cCB2ZWN0b3JzICovCi0g
ICAgc2V0X2JpdChJUlFfTU9WRV9DTEVBTlVQX1ZFQ1RPUiwgdXNlZF92ZWN0
b3JzKTsKKyAgICAvKgorICAgICAqIE1hcmsgdmVjdG9ycyB1cCB0byB0aGUg
Y2xlYW51cCBvbmUgYXMgdXNlZCwgdG8gcHJldmVudCBhbiBpbmZpbml0ZSBs
b29wCisgICAgICogaW52b2tpbmcgaXJxX21vdmVfY2xlYW51cF9pbnRlcnJ1
cHQuCisgICAgICovCisgICAgQlVJTERfQlVHX09OKElSUV9NT1ZFX0NMRUFO
VVBfVkVDVE9SIDwgRklSU1RfRFlOQU1JQ19WRUNUT1IpOworICAgIGZvciAo
IHZlY3RvciA9IEZJUlNUX0RZTkFNSUNfVkVDVE9SOworICAgICAgICAgIHZl
Y3RvciA8PSBJUlFfTU9WRV9DTEVBTlVQX1ZFQ1RPUjsKKyAgICAgICAgICB2
ZWN0b3IrKyApCisgICAgICAgIF9fc2V0X2JpdCh2ZWN0b3IsIHVzZWRfdmVj
dG9ycyk7CiAKICAgICByZXR1cm4gMDsKIH0KQEAgLTcyNywxMCArNzM0LDYg
QEAgdm9pZCBpcnFfbW92ZV9jbGVhbnVwX2ludGVycnVwdChzdHJ1Y3QgY3B1
X3VzZXJfcmVncyAqcmVncykKIHsKICAgICB1bnNpZ25lZCB2ZWN0b3IsIG1l
OwogCi0gICAgLyogVGhpcyBpbnRlcnJ1cHQgc2hvdWxkIG5vdCBuZXN0IGlu
c2lkZSBvdGhlcnMuICovCi0gICAgQlVJTERfQlVHX09OKEFQSUNfUFJJT19D
TEFTUyhJUlFfTU9WRV9DTEVBTlVQX1ZFQ1RPUikgIT0KLSAgICAgICAgICAg
ICAgICAgQVBJQ19QUklPX0NMQVNTKEZJUlNUX0RZTkFNSUNfVkVDVE9SKSk7
Ci0KICAgICBhY2tfQVBJQ19pcnEoKTsKIAogICAgIG1lID0gc21wX3Byb2Nl
c3Nvcl9pZCgpOwpAQCAtNzc0LDYgKzc3NywxMSBAQCB2b2lkIGlycV9tb3Zl
X2NsZWFudXBfaW50ZXJydXB0KHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdz
KQogICAgICAgICAgKi8KICAgICAgICAgaWYgKCBpcnIgJiAoMXUgPDwgKHZl
Y3RvciAlIDMyKSkgKQogICAgICAgICB7CisgICAgICAgICAgICBpZiAoIHZl
Y3RvciA8IElSUV9NT1ZFX0NMRUFOVVBfVkVDVE9SICkKKyAgICAgICAgICAg
IHsKKyAgICAgICAgICAgICAgICBBU1NFUlRfVU5SRUFDSEFCTEUoKTsKKyAg
ICAgICAgICAgICAgICBnb3RvIHVubG9jazsKKyAgICAgICAgICAgIH0KICAg
ICAgICAgICAgIHNlbmRfSVBJX3NlbGYoSVJRX01PVkVfQ0xFQU5VUF9WRUNU
T1IpOwogICAgICAgICAgICAgVFJBQ0VfM0QoVFJDX0hXX0lSUV9NT1ZFX0NM
RUFOVVBfREVMQVksCiAgICAgICAgICAgICAgICAgICAgICBpcnEsIHZlY3Rv
ciwgc21wX3Byb2Nlc3Nvcl9pZCgpKTsK

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 12:32:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 12:32:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53490.93225 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9Ux-0003iU-Uh; Tue, 15 Dec 2020 12:32:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53490.93225; Tue, 15 Dec 2020 12:32:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9Ux-0003iH-O0; Tue, 15 Dec 2020 12:32:11 +0000
Received: by outflank-mailman (input) for mailman id 53490;
 Tue, 15 Dec 2020 12:32:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tdgx=FT=xenbits.xen.org=gdunlap@srs-us1.protection.inumbo.net>)
 id 1kp9LT-0004t1-OH
 for xen-devel@lists.xen.org; Tue, 15 Dec 2020 12:22:23 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3a059f6f-2452-4d41-9ed3-0e5c6dc26be0;
 Tue, 15 Dec 2020 12:20:30 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9JZ-0005jO-F6; Tue, 15 Dec 2020 12:20:25 +0000
Received: from gdunlap by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9JZ-00078k-EE; Tue, 15 Dec 2020 12:20:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3a059f6f-2452-4d41-9ed3-0e5c6dc26be0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=BaAJ8QKJeq1ljLB3TDgqpsTvHwMGW/R9qzkqO6VU4x0=; b=yvvPot2xwUVKOqHBgSbzzmujWu
	xcvh2Ln9bNXqRyomWSuq3/PG7iYzbE2YHQ0rnjCrqUZKHOqSD5/09hYontEHFQHHsEsupU3wznG0d
	htQTPnxA3JesQnfR4R6+Z/4qZ45cFhpAxjAO+n4HAZxyNwK5MgXUHsAjDfWHbGIkIh7M=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 353 v4 (CVE-2020-29479) - oxenstored:
 permissions not checked on root node
Message-Id: <E1kp9JZ-00078k-EE@xenbits.xenproject.org>
Date: Tue, 15 Dec 2020 12:20:25 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2020-29479 / XSA-353
                               version 4

           oxenstored: permissions not checked on root node

UPDATES IN VERSION 4
====================

Public release.

ISSUE DESCRIPTION
=================

In the Ocaml xenstored implementation, the internal representation of
the tree has special cases for the root node, because this node has no
parent.

Unfortunately, permissions were not checked for certain operations on
the root node.

Unprivileged guests can get and modify permissions, list, and delete
the root node.  Deleting the whole xenstore tree is a hostwide denial
of service.  Depending on the circumstances, the vulnerability can
also be leveraged into an ability to gain write access to any part of
xenstore.

IMPACT
======

A guest administrator can deny service to the whole system
simply by deleting the whole of xenstore.

Additionally, depending on other software in use, privilege escalation
may be possible.  With the default "xl" toolstack, a guest
administrator can escalate their privilege to that of the host.

VULNERABLE SYSTEMS
==================

All systems using oxenstored are vulnerable.  Building and using
oxenstored is the default in the upstream Xen distribution, if the
Ocaml compiler is available.

The impact depends on the toolstack and other management software in
use.  Systems using libxl (for example, via "xl" or libvirt) are
vulnerable to privilege escalation.

Systems using C xenstored are not vulnerable, no matter what toolstack
or management software is in use.

MITIGATION
==========

There are no mitigations.

Changing to use of C xenstored would avoid this vulnerability.  However,
given the other vulnerabilities in both versions of xenstored being
reported at this time, changing xenstored implementation is not a
recommended approach to mitigation of individual issues.

CREDITS
=======

This issue was discovered by Edwin Török of Citrix.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

Note that the Ocaml patches for XSA-115 depend on this patch.

xsa353.patch           xen-unstable - 4.10

$ sha256sum xsa353*
48fa1f414773ab1a4135fe62aaae25c7c543efe5a4c5dba71db9e497fa9f3362  xsa353.meta
e14922bf6b2095c1b17849b130e999726a1a31e29be1374e0cd3f9a8fa59fd3d  xsa353.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.


(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl/Yqd8MHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZmg8IALQltyH/EPk78gGNyeb/1ri3jr7IVR5lyCy1Aedg
zckh8FNaaRCplZAoa2Kc2aV2H1Lc5x/UfWtoOLaiSdcyRNXRKRFwq7LoBT7OH2SH
KSo2HK0licTOv61SL2LoJ38tXec86V0Cos89DuWtSMLQT3LUmixQlSdiTUueFidH
Fei8mqoYor5WtzjfgKjdR5KwrrPj65QFyUic3bRgdcc/t27Wr+oQU5iGg7ayeCNw
5Ylz8eyJj88rkNVw1S4jFH815lyENaJbVn56VvlEm0KDsnY7G4YAHExZ1lElrOdj
nkOXN3o6CGiHTkXPOsbPuy0WboSrXK9AZykasml/EDw41Vg=
=V1xW
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa353.meta"
Content-Disposition: attachment; filename="xsa353.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzNTMsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIs
CiAgICAiNC4xMSIsCiAgICAiNC4xMCIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjEwIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICIxZDcyZDk5MTVlZGZmMGRkNDFmNjAxYmJiMGIxZjgzYzAy
ZmYxNjg5IiwKICAgICAgICAgICJQcmVyZXFzIjogW10sCiAgICAgICAgICAi
UGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTM1My5wYXRjaCIKICAgICAg
ICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAgICAiNC4xMSI6IHsK
ICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAgInhlbiI6IHsKICAgICAgICAg
ICJTdGFibGVSZWYiOiAiNDFhODIyYzM5MjYzNTBmMjY5MTdkNzQ3YzhkZmVk
MWM0NGEyY2Y0MiIsCiAgICAgICAgICAiUHJlcmVxcyI6IFtdLAogICAgICAg
ICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2EzNTMucGF0Y2giCiAg
ICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9LAogICAgIjQuMTIi
OiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAgICAg
ICAgICAiU3RhYmxlUmVmIjogIjgxNDVkMzhiNDgwMDkyNTVhMzJhYjg3YTAy
ZTQ4MWNkMDljODExZjkiLAogICAgICAgICAgIlByZXJlcXMiOiBbXSwKICAg
ICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzUzLnBhdGNo
IgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAgICI0
LjEzIjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewog
ICAgICAgICAgIlN0YWJsZVJlZiI6ICJiNTMwMjI3M2UyYzUxOTQwMTcyNDAw
NDg2NjQ0NjM2ZjJmNGZjNjRhIiwKICAgICAgICAgICJQcmVyZXFzIjogW10s
CiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTM1My5w
YXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAg
ICAiNC4xNCI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAgInhlbiI6
IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiMWQxZDFmNTM5MTk3NjQ1NmE3
OWRhYWMwZGNmZTcxNTdkYTFlNTRmNyIsCiAgICAgICAgICAiUHJlcmVxcyI6
IFtdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2Ez
NTMucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9
LAogICAgIm1hc3RlciI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAg
InhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiM2FlNDY5YWY4ZTY4
MGRmMzFlZWNkMGEyYWM2YTgzYjU4YWQ3Y2U1MyIsCiAgICAgICAgICAiUHJl
cmVxcyI6IFtdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAg
ICJ4c2EzNTMucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAgfQogICAgICB9
CiAgICB9CiAgfQp9

--=separator
Content-Type: application/octet-stream; name="xsa353.patch"
Content-Disposition: attachment; filename="xsa353.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogZG8gcGVybWlzc2lvbiBjaGVja3Mgb24geGVuc3RvcmUgcm9v
dApNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVR5cGU6IHRleHQvcGxhaW47
IGNoYXJzZXQ9VVRGLTgKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogOGJp
dAoKVGhpcyB3YXMgbGFja2luZyBpbiBhIGRpc2FwcG9pbnRpbmcgbnVtYmVy
IG9mIHBsYWNlcy4KClRoZSB4ZW5zdG9yZSByb290IG5vZGUgaXMgdHJlYXRl
ZCBkaWZmZXJlbnRseSBmcm9tIGFsbCBvdGhlciBub2RlcywgYmVjYXVzZSBp
dApkb2Vzbid0IGhhdmUgYSBwYXJlbnQsIGFuZCBtdXRhdGlvbiByZXF1aXJl
cyBjaGFuZ2luZyB0aGUgcGFyZW50LgoKVW5mb3J0dW5hdGVseSB0aGlzIGxl
YWQgdG8gb3Blbi1jb2RpbmcgdGhlIHNwZWNpYWwgY2FzZSBmb3Igcm9vdCBp
bnRvIGV2ZXJ5CnNpbmdsZSB4ZW5zdG9yZSBvcGVyYXRpb24sIGFuZCBvdXQg
b2YgYWxsIHRoZSB4ZW5zdG9yZSBvcGVyYXRpb25zIG9ubHkgcmVhZApkaWQg
YSBwZXJtaXNzaW9uIGNoZWNrIHdoZW4gaGFuZGxpbmcgdGhlIHJvb3Qgbm9k
ZS4KClRoaXMgbWVhbnMgdGhhdCBhbiB1bnByaXZpbGVnZWQgZ3Vlc3QgY2Fu
OgoKICogeGVuc3RvcmUtY2htb2QgLyB0byBpdHMgbGlraW5nIGFuZCBzdWJz
ZXF1ZW50bHkgd3JpdGUgbmV3IGFyYml0cmFyeSBub2RlcwogICB0aGVyZSAo
c3ViamVjdCB0byBxdW90YSkKICogeGVuc3RvcmUtcm0gLXIgLyBkZWxldGVz
IGFsbW9zdCB0aGUgZW50aXJlIHhlbnN0b3JlIHRyZWUgKHhlbm9wc2QgcXVp
Y2tseQogICByZWZpbGxzIHNvbWUsIGJ1dCB5b3UgYXJlIGxlZnQgd2l0aCBh
IGJyb2tlbiBzeXN0ZW0pCiAqIERJUkVDVE9SWSBvbiAvIGxpc3RzIGFsbCBj
aGlsZHJlbiB3aGVuIGNhbGxlZCB0aHJvdWdoIHB5dGhvbgogICBiaW5kaW5n
cyAoeGVuc3RvcmUtbHMgc3RvcHMgYXQgL2xvY2FsIGJlY2F1c2UgaXQgdHJp
ZXMgdG8gbGlzdCByZWN1cnNpdmVseSkKICogZ2V0LXBlcm1zIG9uIC8gd29y
a3MgdG9vLCBidXQgdGhhdCBpcyBqdXN0IGEgbWlub3IgaW5mb3JtYXRpb24g
bGVhawoKQWRkIHRoZSBtaXNzaW5nIHBlcm1pc3Npb24gY2hlY2tzLCBidXQg
dGhpcyBzaG91bGQgcmVhbGx5IGJlIHJlZmFjdG9yZWQgdG8gZG8KdGhlIHJv
b3QgaGFuZGxpbmcgYW5kIHBlcm1pc3Npb24gY2hlY2tzIG9uIHRoZSBub2Rl
IG9ubHkgb25jZSBmcm9tIGEgc2luZ2xlCmZ1bmN0aW9uLCBpbnN0ZWFkIG9m
IGdldHRpbmcgaXQgd3JvbmcgbmVhcmx5IGV2ZXJ5d2hlcmUuCgpUaGlzIGlz
IFhTQS0zNTMuCgpTaWduZWQtb2ZmLWJ5OiBFZHdpbiBUw7Zyw7ZrIDxlZHZp
bi50b3Jva0BjaXRyaXguY29tPgpBY2tlZC1ieTogQ2hyaXN0aWFuIExpbmRp
ZyA8Y2hyaXN0aWFuLmxpbmRpZ0BjaXRyaXguY29tPgpSZXZpZXdlZC1ieTog
QW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNvbT4KCmRp
ZmYgLS1naXQgYS90b29scy9vY2FtbC94ZW5zdG9yZWQvc3RvcmUubWwgYi90
b29scy9vY2FtbC94ZW5zdG9yZWQvc3RvcmUubWwKaW5kZXggZjI5OWVjNjQ2
MS4uOTJiNjI4OWI1ZSAxMDA2NDQKLS0tIGEvdG9vbHMvb2NhbWwveGVuc3Rv
cmVkL3N0b3JlLm1sCisrKyBiL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9zdG9y
ZS5tbApAQCAtMjczLDE1ICsyNzMsMTcgQEAgbGV0IHBhdGhfcm0gc3RvcmUg
cGVybSBwYXRoID0KIAkJCU5vZGUuZGVsX2NoaWxkbmFtZSBub2RlIG5hbWUK
IAkJd2l0aCBOb3RfZm91bmQgLT4KIAkJCXJhaXNlIERlZmluZS5Eb2VzbnRf
ZXhpc3QgaW4KLQlpZiBwYXRoID0gW10gdGhlbgorCWlmIHBhdGggPSBbXSB0
aGVuICgKKwkJTm9kZS5jaGVja19wZXJtIHN0b3JlLnJvb3QgcGVybSBQZXJt
cy5XUklURTsKIAkJTm9kZS5kZWxfYWxsX2NoaWxkcmVuIHN0b3JlLnJvb3QK
LQllbHNlCisJKSBlbHNlCiAJCVBhdGguYXBwbHlfbW9kaWZ5IHN0b3JlLnJv
b3QgcGF0aCBkb19ybQogCiBsZXQgcGF0aF9zZXRwZXJtcyBzdG9yZSBwZXJt
IHBhdGggcGVybXMgPQotCWlmIHBhdGggPSBbXSB0aGVuCisJaWYgcGF0aCA9
IFtdIHRoZW4gKAorCQlOb2RlLmNoZWNrX3Blcm0gc3RvcmUucm9vdCBwZXJt
IFBlcm1zLldSSVRFOwogCQlOb2RlLnNldF9wZXJtcyBzdG9yZS5yb290IHBl
cm1zCi0JZWxzZQorCSkgZWxzZQogCQlsZXQgZG9fc2V0cGVybXMgbm9kZSBu
YW1lID0KIAkJCWxldCBjID0gTm9kZS5maW5kIG5vZGUgbmFtZSBpbgogCQkJ
Tm9kZS5jaGVja19vd25lciBjIHBlcm07CkBAIC0zMTMsOSArMzE1LDEwIEBA
IGxldCByZWFkIHN0b3JlIHBlcm0gcGF0aCA9CiAKIGxldCBscyBzdG9yZSBw
ZXJtIHBhdGggPQogCWxldCBjaGlsZHJlbiA9Ci0JCWlmIHBhdGggPSBbXSB0
aGVuCi0JCQkoTm9kZS5nZXRfY2hpbGRyZW4gc3RvcmUucm9vdCkKLQkJZWxz
ZQorCQlpZiBwYXRoID0gW10gdGhlbiAoCisJCQlOb2RlLmNoZWNrX3Blcm0g
c3RvcmUucm9vdCBwZXJtIFBlcm1zLlJFQUQ7CisJCQlOb2RlLmdldF9jaGls
ZHJlbiBzdG9yZS5yb290CisJCSkgZWxzZQogCQkJbGV0IGRvX2xzIG5vZGUg
bmFtZSA9CiAJCQkJbGV0IGNub2RlID0gTm9kZS5maW5kIG5vZGUgbmFtZSBp
bgogCQkJCU5vZGUuY2hlY2tfcGVybSBjbm9kZSBwZXJtIFBlcm1zLlJFQUQ7
CkBAIC0zMjQsOSArMzI3LDEwIEBAIGxldCBscyBzdG9yZSBwZXJtIHBhdGgg
PQogCUxpc3QucmV2IChMaXN0Lm1hcCAoZnVuIG4gLT4gU3ltYm9sLnRvX3N0
cmluZyBuLk5vZGUubmFtZSkgY2hpbGRyZW4pCiAKIGxldCBnZXRwZXJtcyBz
dG9yZSBwZXJtIHBhdGggPQotCWlmIHBhdGggPSBbXSB0aGVuCi0JCShOb2Rl
LmdldF9wZXJtcyBzdG9yZS5yb290KQotCWVsc2UKKwlpZiBwYXRoID0gW10g
dGhlbiAoCisJCU5vZGUuY2hlY2tfcGVybSBzdG9yZS5yb290IHBlcm0gUGVy
bXMuUkVBRDsKKwkJTm9kZS5nZXRfcGVybXMgc3RvcmUucm9vdAorCSkgZWxz
ZQogCQlsZXQgZmN0IG4gbmFtZSA9CiAJCQlsZXQgYyA9IE5vZGUuZmluZCBu
IG5hbWUgaW4KIAkJCU5vZGUuY2hlY2tfcGVybSBjIHBlcm0gUGVybXMuUkVB
RDsK

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 12:32:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 12:32:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53495.93240 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9Uz-0003kj-C1; Tue, 15 Dec 2020 12:32:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53495.93240; Tue, 15 Dec 2020 12:32:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9Uz-0003kV-41; Tue, 15 Dec 2020 12:32:13 +0000
Received: by outflank-mailman (input) for mailman id 53495;
 Tue, 15 Dec 2020 12:32:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tdgx=FT=xenbits.xen.org=gdunlap@srs-us1.protection.inumbo.net>)
 id 1kp9Ls-0004t1-P8
 for xen-devel@lists.xen.org; Tue, 15 Dec 2020 12:22:48 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b0473e96-39de-4f5c-b191-083d229ca08d;
 Tue, 15 Dec 2020 12:20:33 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9Jd-0005l7-8X; Tue, 15 Dec 2020 12:20:29 +0000
Received: from gdunlap by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9Jd-0007Cj-7U; Tue, 15 Dec 2020 12:20:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b0473e96-39de-4f5c-b191-083d229ca08d
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=DTGzKZvQBksUtA7KN6vQ6CiaDeJA+4XjSDthDt+VsaA=; b=MjBAuCP3VL1ehn2MvtIe6Bg2ho
	rQfAaSpxiWFCSQQdx6BOxPd0JCqhwAYxFoa/Q0USpDy5gnbgXNy6uO53PC4seslINVUlTsXjs1jrL
	bWYQUSHZFGpaHLDeh1wjURAGOzzlEc8cM2tPBBSrx9SjlMh+JGodSqLjLSsaa5qUUI54=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 359 v3 (CVE-2020-29571) - FIFO event
 channels control structure ordering
Message-Id: <E1kp9Jd-0007Cj-7U@xenbits.xenproject.org>
Date: Tue, 15 Dec 2020 12:20:29 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2020-29571 / XSA-359
                               version 3

            FIFO event channels control structure ordering

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

A bounds check common to most operation-time functions specific to FIFO
event channels depends on the CPU observing consistent state.  While the
producer side uses appropriately ordered writes, the consumer side isn't
protected against re-ordered reads, and may hence end up dereferencing
a NULL pointer.

IMPACT
======

Malicious or buggy guest kernels can mount a Denial of Service (DoS)
attack affecting the entire system.

VULNERABLE SYSTEMS
==================

All Xen versions from 4.4 onwards are vulnerable.  Xen versions 4.3 and
earlier are not vulnerable.

Only Arm systems may be vulnerable.  Whether a system is vulnerable will
depend on the specific CPU.  x86 systems are not vulnerable.

MITIGATION
==========

There is no known mitigation.

CREDITS
=======

This issue was discovered by Julien Grall of Amazon.

RESOLUTION
==========

Applying the attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa359.patch           xen-unstable - 4.10

$ sha256sum xsa359*
cb009ad77d1a3d8044431b2af568dd9dffefe07fc9f537fb6b53c2ec57aa77b7  xsa359.meta
3126d9304b68be84a89c42c223227c8f96ecbb96a0385a7e1bdc65ae5e0f344f  xsa359.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl/YqeAMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZt6wIAJhvfVB8eRr5fqCbMUjZ++KKoG0AF/hoS7YRHiDn
zCgK/ff5RkY/pHHkVnrSOQeQg88SPBp/HaYljUXhoANbhXVxlt383QxQb63JwanR
1c3Sdvv5w0HdvrDyUMV16W/Edf/DGlSgciG/2saNz8pPbqiGKzeY3Q7nj3T3vLAE
ouNlHb2NItalKB2AdC62y/BFIjsn66G/P1agxyrcGirJxdvzORBx+LY7VTFOrOEB
L7yb8Y0U6Nj1XjGUXYm4X4xCCm+940Xc0Ht9zkDJlb3xSdO5sOtBE+Cx3F4uXn1c
vTMiKziAOgEKKXWV7P3KSWR/7G1aTm2YVRMy5XWtS6GY5D0=
=uRRE
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa359.meta"
Content-Disposition: attachment; filename="xsa359.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzNTksCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIs
CiAgICAiNC4xMSIsCiAgICAiNC4xMCIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjEwIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICIxZDcyZDk5MTVlZGZmMGRkNDFmNjAxYmJiMGIxZjgzYzAy
ZmYxNjg5IiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
NTMsCiAgICAgICAgICAgIDExNSwKICAgICAgICAgICAgMzIyLAogICAgICAg
ICAgICAzMjMsCiAgICAgICAgICAgIDMyNCwKICAgICAgICAgICAgMzI1LAog
ICAgICAgICAgICAzMzAsCiAgICAgICAgICAgIDM1MiwKICAgICAgICAgICAg
MzQ4LAogICAgICAgICAgICAzNTYsCiAgICAgICAgICAgIDM1OAogICAgICAg
ICAgXSwKICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNh
MzU5LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAg
fSwKICAgICI0LjExIjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAgICAi
eGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICI0MWE4MjJjMzkyNjM1
MGYyNjkxN2Q3NDdjOGRmZWQxYzQ0YTJjZjQyIiwKICAgICAgICAgICJQcmVy
ZXFzIjogWwogICAgICAgICAgICAzNTMsCiAgICAgICAgICAgIDExNSwKICAg
ICAgICAgICAgMzIyLAogICAgICAgICAgICAzMjMsCiAgICAgICAgICAgIDMy
NCwKICAgICAgICAgICAgMzI1LAogICAgICAgICAgICAzMzAsCiAgICAgICAg
ICAgIDM1MiwKICAgICAgICAgICAgMzQ4LAogICAgICAgICAgICAzNTYsCiAg
ICAgICAgICAgIDM1OAogICAgICAgICAgXSwKICAgICAgICAgICJQYXRjaGVz
IjogWwogICAgICAgICAgICAieHNhMzU5LnBhdGNoIgogICAgICAgICAgXQog
ICAgICAgIH0KICAgICAgfQogICAgfSwKICAgICI0LjEyIjogewogICAgICAi
UmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0YWJs
ZVJlZiI6ICI4MTQ1ZDM4YjQ4MDA5MjU1YTMyYWI4N2EwMmU0ODFjZDA5Yzgx
MWY5IiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAzNTMs
CiAgICAgICAgICAgIDExNSwKICAgICAgICAgICAgMzIyLAogICAgICAgICAg
ICAzMjMsCiAgICAgICAgICAgIDMyNCwKICAgICAgICAgICAgMzI1LAogICAg
ICAgICAgICAzMzAsCiAgICAgICAgICAgIDM1MiwKICAgICAgICAgICAgMzQ4
LAogICAgICAgICAgICAzNTYsCiAgICAgICAgICAgIDM1OAogICAgICAgICAg
XSwKICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzU5
LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAgfSwK
ICAgICI0LjEzIjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVu
IjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICJiNTMwMjI3M2UyYzUxOTQw
MTcyNDAwNDg2NjQ0NjM2ZjJmNGZjNjRhIiwKICAgICAgICAgICJQcmVyZXFz
IjogWwogICAgICAgICAgICAzNTMsCiAgICAgICAgICAgIDExNSwKICAgICAg
ICAgICAgMzIyLAogICAgICAgICAgICAzMjMsCiAgICAgICAgICAgIDMyNCwK
ICAgICAgICAgICAgMzI1LAogICAgICAgICAgICAzMzAsCiAgICAgICAgICAg
IDM1MiwKICAgICAgICAgICAgMzQ4LAogICAgICAgICAgICAzNTYsCiAgICAg
ICAgICAgIDM1OAogICAgICAgICAgXSwKICAgICAgICAgICJQYXRjaGVzIjog
WwogICAgICAgICAgICAieHNhMzU5LnBhdGNoIgogICAgICAgICAgXQogICAg
ICAgIH0KICAgICAgfQogICAgfSwKICAgICI0LjE0IjogewogICAgICAiUmVj
aXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJl
ZiI6ICIxZDFkMWY1MzkxOTc2NDU2YTc5ZGFhYzBkY2ZlNzE1N2RhMWU1NGY3
IiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAzNTMsCiAg
ICAgICAgICAgIDExNSwKICAgICAgICAgICAgMzIyLAogICAgICAgICAgICAz
MjMsCiAgICAgICAgICAgIDMyNCwKICAgICAgICAgICAgMzI1LAogICAgICAg
ICAgICAzMzAsCiAgICAgICAgICAgIDM1MiwKICAgICAgICAgICAgMzQ4LAog
ICAgICAgICAgICAzNTYsCiAgICAgICAgICAgIDM1OAogICAgICAgICAgXSwK
ICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzU5LnBh
dGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAg
ICJtYXN0ZXIiOiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4i
OiB7CiAgICAgICAgICAiU3RhYmxlUmVmIjogIjNhZTQ2OWFmOGU2ODBkZjMx
ZWVjZDBhMmFjNmE4M2I1OGFkN2NlNTMiLAogICAgICAgICAgIlByZXJlcXMi
OiBbCiAgICAgICAgICAgIDM1MywKICAgICAgICAgICAgMTE1LAogICAgICAg
ICAgICAzMjIsCiAgICAgICAgICAgIDMyMywKICAgICAgICAgICAgMzI0LAog
ICAgICAgICAgICAzMjUsCiAgICAgICAgICAgIDMzMCwKICAgICAgICAgICAg
MzUyLAogICAgICAgICAgICAzNDgsCiAgICAgICAgICAgIDM1NiwKICAgICAg
ICAgICAgMzU4CiAgICAgICAgICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBb
CiAgICAgICAgICAgICJ4c2EzNTkucGF0Y2giCiAgICAgICAgICBdCiAgICAg
ICAgfQogICAgICB9CiAgICB9CiAgfQp9

--=separator
Content-Type: application/octet-stream; name="xsa359.patch"
Content-Disposition: attachment; filename="xsa359.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBldnRjaG4vRklGTzogYWRkIDJuZCBzbXBfcm1iKCkgdG8gZXZ0Y2huX2Zp
Zm9fd29yZF9mcm9tX3BvcnQoKQoKQmVzaWRlcyB3aXRoIGFkZF9wYWdlX3Rv
X2V2ZW50X2FycmF5KCkgdGhlIGZ1bmN0aW9uIGFsc28gbmVlZHMgdG8Kc3lu
Y2hyb25pemUgd2l0aCBldnRjaG5fZmlmb19pbml0X2NvbnRyb2woKSBzZXR0
aW5nIGJvdGggZC0+ZXZ0Y2huX2ZpZm8KYW5kIChzdWJzZXF1ZW50bHkpIGQt
PmV2dGNobl9wb3J0X29wcy4KClRoaXMgaXMgWFNBLTM1OSAvIENWRS0yMDIw
LTI5NTcxLgoKUmVwb3J0ZWQtYnk6IEp1bGllbiBHcmFsbCA8amdyYWxsQGFt
YXpvbi5jb20+ClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGlj
aEBzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IEp1bGllbiBHcmFsbCA8amdyYWxs
QGFtYXpvbi5jb20+CgotLS0gYS94ZW4vY29tbW9uL2V2ZW50X2ZpZm8uYwor
KysgYi94ZW4vY29tbW9uL2V2ZW50X2ZpZm8uYwpAQCAtNTUsNiArNTUsMTMg
QEAgc3RhdGljIGlubGluZSBldmVudF93b3JkX3QgKmV2dGNobl9maWZvXwog
ewogICAgIHVuc2lnbmVkIGludCBwLCB3OwogCisgICAgLyoKKyAgICAgKiBD
YWxsZXJzIGFyZW4ndCByZXF1aXJlZCB0byBob2xkIGQtPmV2ZW50X2xvY2ss
IHNvIHdlIG5lZWQgdG8gc3luY2hyb25pemUKKyAgICAgKiB3aXRoIGV2dGNo
bl9maWZvX2luaXRfY29udHJvbCgpIHNldHRpbmcgZC0+ZXZ0Y2huX3BvcnRf
b3BzIC9hZnRlci8KKyAgICAgKiBkLT5ldnRjaG5fZmlmby4KKyAgICAgKi8K
KyAgICBzbXBfcm1iKCk7CisKICAgICBpZiAoIHVubGlrZWx5KHBvcnQgPj0g
ZC0+ZXZ0Y2huX2ZpZm8tPm51bV9ldnRjaG5zKSApCiAgICAgICAgIHJldHVy
biBOVUxMOwogCkBAIC02MDYsNiArNjEzLDEwIEBAIGludCBldnRjaG5fZmlm
b19pbml0X2NvbnRyb2woc3RydWN0IGV2dGMKICAgICAgICAgaWYgKCByYyA8
IDAgKQogICAgICAgICAgICAgZ290byBlcnJvcjsKIAorICAgICAgICAvKgor
ICAgICAgICAgKiBUaGlzIGNhbGwsIGFzIGEgc2lkZSBlZmZlY3QsIHN5bmNo
cm9uaXplcyB3aXRoCisgICAgICAgICAqIGV2dGNobl9maWZvX3dvcmRfZnJv
bV9wb3J0KCkuCisgICAgICAgICAqLwogICAgICAgICByYyA9IG1hcF9jb250
cm9sX2Jsb2NrKHYsIGdmbiwgb2Zmc2V0KTsKICAgICAgICAgaWYgKCByYyA8
IDAgKQogICAgICAgICAgICAgZ290byBlcnJvcjsK

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 12:32:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 12:32:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53535.93273 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9Vj-0004J5-B2; Tue, 15 Dec 2020 12:32:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53535.93273; Tue, 15 Dec 2020 12:32:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9Vj-0004Is-3r; Tue, 15 Dec 2020 12:32:59 +0000
Received: by outflank-mailman (input) for mailman id 53535;
 Tue, 15 Dec 2020 12:32:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tdgx=FT=xenbits.xen.org=gdunlap@srs-us1.protection.inumbo.net>)
 id 1kp9Kq-0004tM-40
 for xen-devel@lists.xen.org; Tue, 15 Dec 2020 12:21:44 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1e3ca855-6e12-49e8-990f-4ab461948ad9;
 Tue, 15 Dec 2020 12:20:31 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9Ja-0005jj-E4; Tue, 15 Dec 2020 12:20:26 +0000
Received: from gdunlap by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9Ja-00079y-DC; Tue, 15 Dec 2020 12:20:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1e3ca855-6e12-49e8-990f-4ab461948ad9
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=HoqLeqcxZ5PYzHjMrPDfaYiKR0e0ht6qy6sUVCYCmCs=; b=tgYmy3N7W9HdqMLXv0ajQpDWq2
	bW1GaKJQZlP7GGqR1oWb8PyDljed1Jnq0IRvPwNFU7Lzw1x1LCnbYvBIyHdW3GT1dJpRuLqLeAVEB
	OBSRTwHE0qSmialKUZIxyxAm8oECxKKIWRLXEW+Or1fpGPSQUixU9PStmpeIkyMifc8E=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 354 v4 (CVE-2020-29487) - XAPI:
 guest-triggered excessive memory usage
Message-Id: <E1kp9Ja-00079y-DC@xenbits.xenproject.org>
Date: Tue, 15 Dec 2020 12:20:26 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2020-29487 / XSA-354
                               version 4

             XAPI: guest-triggered excessive memory usage

UPDATES IN VERSION 4
====================

Public release.

ISSUE DESCRIPTION
=================

Certain xenstore keys provide feedback from the guest, and are therefore
watched by the toolstack.  Specifically, keys are watched by xenopsd, and
data are forwarded via RPC through message-switch to xapi.

The watching logic in xenopsd sends one RPC update containing all data,
any time any single xenstore key is updated, and therefore has O(N^2)
time complexity.  Furthermore, message-switch retains recent (currently
128) RPC messages for diagnostic purposes, yielding O(M*N) space
complexity.

The quantity of memory a single guest can monopolise is bounded by
xenstored quota, but the quota is fairly large.  It is believed to be in
excess of 1G per malicious guest.

In practice this manifests as a host denial of service, either through
message-switch thrashing against swap, or OOM'ing entirely, depending on
dom0's configuration.

This series introduces quotas in xenopsd to limit the quantity of keys
which result in RPC traffic.

IMPACT
======

A buggy or malicious guest can cause unreasonable memory usage in dom0,
resulting in a host denial of service.

VULNERABLE SYSTEMS
==================

All versions of XAPI are vulnerable.

Systems which are not using the XAPI toolstack are not vulnerable.

MITIGATION
==========

There are no mitigations available.

CREDITS
=======

This issue was discovered by Edwin Török of Citrix.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa354-*.patch         xenopsd master

$ sha256sum xsa354*
66d29c38ce4fa6c77a4853a0f0345f3bf1fcbe11703090e1dbfa83257564de42  xsa354-1-ls_lR-factor-out-dir-concatenation.patch
0686465119b4442d839d59c66c41d02ce6b4cfa9c82234e0aefcaffbb7985ee4  xsa354-2-ls_lR-refactor-use-fold.patch
fb60812f1230526f9c3be77d4f0c8c08903b21aa5c449056dc16b1181720b3cb  xsa354-3-ls_lR-separate-recursion-into-separate-funct.patch
41f221007abd89c8d24dacb7b0ff96109427c1c84eae75b7245bb287a0938d81  xsa354-4-ls_lR-add-quota.patch
fcd4abddf18bc5b875ec28213f3138f1de395e91076b5b1a828353bc8b19d8ed  xsa354-5-ls_lR-limit-depth.patch
1ff82640a446407492904b50b05fc903a70d570620cd20a21493c9240b38f8be  xsa354-6-exclude-attr-os-hotfixes-from-ls_lR.patch
b1b2f96b93d41201ddfdb093660f06f8bce5461a715cfeb7110f0194b74c93cb  xsa354-7-read-important-xenstore-entries-first.patch
6908e957c299fe57dcd5c5c93162d135326221f1e66ac4b43b771ebd63bae35d  xsa354-8-refactor-attr-os-hotfixes-exclusion.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.


(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl/YqeAMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZA0MIAK9VhZjA0/adgq4TY2DXFjIZKg6Q9ZE9cBZcgv4l
XhGpAwxeYKU76KFEf1si3KCGV7xzHG0tnwkEgfpeldnGCwsgSkJPRNFvgA/7iuW0
3hCAdRioSU9Rm3h2gQdIDBAppvD0NhkkjQU/XcrB7qeOjfYrdvH5gS+NSRN/z50V
g02kUrWypShC0+lvgkJ0zXfl0CAQSs27BMd2vlj5BuOP573IrbJh6NHuRMF9Dm9J
48ny910Ctws5FSbe25ZgZHERZnwDnwe/oGP1ws12wZbU8ToP5t7tHnSQGNgwXPWT
Xpoecr5Iqek2CUHPEd8KKKS4B5frJHq+Xp8CAfnX8KT8VH8=
=y19v
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream;
 name="xsa354-1-ls_lR-factor-out-dir-concatenation.patch"
Content-Disposition: attachment;
 filename="xsa354-1-ls_lR-factor-out-dir-concatenation.patch"
Content-Transfer-Encoding: base64

RnJvbSA4MWY3YmZlODljYjExMDFmYWQ2N2U1ZGI3Nzc5ODA5MTExMzU5YTZl
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiA9P1VURi04P3E/RWR3
aW49MjBUPUMzPUI2cj1DMz1CNms/PSA8ZWR2aW4udG9yb2tAY2l0cml4LmNv
bT4KRGF0ZTogV2VkLCA0IE5vdiAyMDIwIDE5OjM5OjQzICswMDAwClN1Ympl
Y3Q6IFhTQS0zNTQ6IGxzX2xSOiBmYWN0b3Igb3V0IGRpciBjb25jYXRlbmF0
aW9uCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHlwZTogdGV4dC9wbGFp
bjsgY2hhcnNldD1VVEYtOApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA4
Yml0CgpTaWduZWQtb2ZmLWJ5OiBFZHdpbiBUw7Zyw7ZrIDxlZHZpbi50b3Jv
a0BjaXRyaXguY29tPgpBY2tlZC1ieTogQ2hyaXN0aWFuIExpbmRpZyA8Y2hy
aXN0aWFuLmxpbmRpZ0BjaXRyaXguY29tPgoKZGlmZiAtLWdpdCBhL3hjL3hl
bm9wc19zZXJ2ZXJfeGVuLm1sIGIveGMveGVub3BzX3NlcnZlcl94ZW4ubWwK
aW5kZXggNjk0MDRkZTYuLjAwOGU4ZGRhIDEwMDY0NAotLS0gYS94Yy94ZW5v
cHNfc2VydmVyX3hlbi5tbAorKysgYi94Yy94ZW5vcHNfc2VydmVyX3hlbi5t
bApAQCAtMjU5MiwxMiArMjU5MiwxMSBAQCBtb2R1bGUgVk0gPSBzdHJ1Y3QK
ICAgICAgICAgICAgICAgd2l0aCBYc19wcm90b2NvbC5Fbm9lbnQgXyAtPiAi
IgogICAgICAgICAgICAgaW4KICAgICAgICAgICAgIGxldCByZWMgbHNfbFIg
cm9vdCBkaXIgPQotICAgICAgICAgICAgICBsZXQgdGhpcyA9Ci0gICAgICAg
ICAgICAgICAgdHJ5IFsoZGlyLCB4cy5Ycy5yZWFkIChyb290IF4gIi8iIF4g
ZGlyKSldIHdpdGggXyAtPiBbXQotICAgICAgICAgICAgICBpbgorICAgICAg
ICAgICAgICBsZXQgZW50cnkgPSByb290IF4gIi8iIF4gZGlyIGluCisgICAg
ICAgICAgICAgIGxldCB0aGlzID0gdHJ5IFsoZGlyLCB4cy5Ycy5yZWFkIGVu
dHJ5KV0gd2l0aCBfIC0+IFtdIGluCiAgICAgICAgICAgICAgIGxldCBzdWJk
aXJzID0KICAgICAgICAgICAgICAgICB0cnkKLSAgICAgICAgICAgICAgICAg
IHhzLlhzLmRpcmVjdG9yeSAocm9vdCBeICIvIiBeIGRpcikKKyAgICAgICAg
ICAgICAgICAgIHhzLlhzLmRpcmVjdG9yeSBlbnRyeQogICAgICAgICAgICAg
ICAgICAgfD4gTGlzdC5maWx0ZXIgKGZ1biB4IC0+IHggPD4gIiIpCiAgICAg
ICAgICAgICAgICAgICB8PiBtYXBfdHIgKGZ1biB4IC0+IGRpciBeICIvIiBe
IHgpCiAgICAgICAgICAgICAgICAgd2l0aCBfIC0+IFtdCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa354-2-ls_lR-refactor-use-fold.patch"
Content-Disposition: attachment;
 filename="xsa354-2-ls_lR-refactor-use-fold.patch"
Content-Transfer-Encoding: base64

RnJvbSAxOTNhMDNkNzU5YjMyMGZmNDA3ODM1MjA4N2E1NzZlZDEzOGM2ZmM2
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiA9P1VURi04P3E/RWR3
aW49MjBUPUMzPUI2cj1DMz1CNms/PSA8ZWR2aW4udG9yb2tAY2l0cml4LmNv
bT4KRGF0ZTogV2VkLCA0IE5vdiAyMDIwIDE5OjQ2OjMxICswMDAwClN1Ympl
Y3Q6IFhTQS0zNTQ6IGxzX2xSOiByZWZhY3RvciwgdXNlIGZvbGQKTUlNRS1W
ZXJzaW9uOiAxLjAKQ29udGVudC1UeXBlOiB0ZXh0L3BsYWluOyBjaGFyc2V0
PVVURi04CkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDhiaXQKClNpZ25l
ZC1vZmYtYnk6IEVkd2luIFTDtnLDtmsgPGVkdmluLnRvcm9rQGNpdHJpeC5j
b20+CkFja2VkLWJ5OiBDaHJpc3RpYW4gTGluZGlnIDxjaHJpc3RpYW4ubGlu
ZGlnQGNpdHJpeC5jb20+CgpkaWZmIC0tZ2l0IGEveGMveGVub3BzX3NlcnZl
cl94ZW4ubWwgYi94Yy94ZW5vcHNfc2VydmVyX3hlbi5tbAppbmRleCAwMDhl
OGRkYS4uOTEyZTk3ZmEgMTAwNjQ0Ci0tLSBhL3hjL3hlbm9wc19zZXJ2ZXJf
eGVuLm1sCisrKyBiL3hjL3hlbm9wc19zZXJ2ZXJfeGVuLm1sCkBAIC0yNTkx
LDkgKzI1OTEsOSBAQCBtb2R1bGUgVk0gPSBzdHJ1Y3QKICAgICAgICAgICAg
ICAgICAgICAgIChVdWlkbS50b19zdHJpbmcgdXVpZCkpCiAgICAgICAgICAg
ICAgIHdpdGggWHNfcHJvdG9jb2wuRW5vZW50IF8gLT4gIiIKICAgICAgICAg
ICAgIGluCi0gICAgICAgICAgICBsZXQgcmVjIGxzX2xSIHJvb3QgZGlyID0K
KyAgICAgICAgICAgIGxldCByZWMgbHNfbFIgcm9vdCBhY2MgZGlyID0KICAg
ICAgICAgICAgICAgbGV0IGVudHJ5ID0gcm9vdCBeICIvIiBeIGRpciBpbgot
ICAgICAgICAgICAgICBsZXQgdGhpcyA9IHRyeSBbKGRpciwgeHMuWHMucmVh
ZCBlbnRyeSldIHdpdGggXyAtPiBbXSBpbgorICAgICAgICAgICAgICBsZXQg
YWNjID0gdHJ5IChkaXIsIHhzLlhzLnJlYWQgZW50cnkpIDo6IGFjYyB3aXRo
IF8gLT4gYWNjIGluCiAgICAgICAgICAgICAgIGxldCBzdWJkaXJzID0KICAg
ICAgICAgICAgICAgICB0cnkKICAgICAgICAgICAgICAgICAgIHhzLlhzLmRp
cmVjdG9yeSBlbnRyeQpAQCAtMjYwMSwyMiArMjYwMSwyMiBAQCBtb2R1bGUg
Vk0gPSBzdHJ1Y3QKICAgICAgICAgICAgICAgICAgIHw+IG1hcF90ciAoZnVu
IHggLT4gZGlyIF4gIi8iIF4geCkKICAgICAgICAgICAgICAgICB3aXRoIF8g
LT4gW10KICAgICAgICAgICAgICAgaW4KLSAgICAgICAgICAgICAgdGhpcyBA
IExpc3QuY29uY2F0IChtYXBfdHIgKGxzX2xSIHJvb3QpIHN1YmRpcnMpCisg
ICAgICAgICAgICAgIExpc3QuZm9sZF9sZWZ0IChsc19sUiByb290KSBhY2Mg
c3ViZGlycwogICAgICAgICAgICAgaW4KICAgICAgICAgICAgIGxldCBndWVz
dF9hZ2VudCA9CiAgICAgICAgICAgICAgIFsKICAgICAgICAgICAgICAgICAi
ZHJpdmVycyI7ICJhdHRyIjsgImRhdGEiOyAiY29udHJvbCI7ICJmZWF0dXJl
IjsgInhlbnNlcnZlci9hdHRyIgogICAgICAgICAgICAgICBdCi0gICAgICAg
ICAgICAgIHw+IG1hcF90cgorICAgICAgICAgICAgICB8PiBMaXN0LmZvbGRf
bGVmdAogICAgICAgICAgICAgICAgICAgIChsc19sUiAoUHJpbnRmLnNwcmlu
dGYgIi9sb2NhbC9kb21haW4vJWQiIGRpLlhlbmN0cmwuZG9taWQpKQotICAg
ICAgICAgICAgICB8PiBMaXN0LmNvbmNhdAorICAgICAgICAgICAgICAgICAg
IFtdCiAgICAgICAgICAgICAgIHw+IG1hcF90ciAoZnVuIChrLCB2KSAtPiAo
aywgWGVub3BzX3V0aWxzLnV0ZjhfcmVjb2RlIHYpKQogICAgICAgICAgICAg
aW4KICAgICAgICAgICAgIGxldCB4c2RhdGFfc3RhdGUgPQogICAgICAgICAg
ICAgICBEb21haW4uYWxsb3dlZF94c2RhdGFfcHJlZml4ZXMKLSAgICAgICAg
ICAgICAgfD4gbWFwX3RyCisgICAgICAgICAgICAgIHw+IExpc3QuZm9sZF9s
ZWZ0CiAgICAgICAgICAgICAgICAgICAgKGxzX2xSIChQcmludGYuc3ByaW50
ZiAiL2xvY2FsL2RvbWFpbi8lZCIgZGkuWGVuY3RybC5kb21pZCkpCi0gICAg
ICAgICAgICAgIHw+IExpc3QuY29uY2F0CisgICAgICAgICAgICAgICAgICAg
W10KICAgICAgICAgICAgIGluCiAgICAgICAgICAgICBsZXQgc2hhZG93X211
bHRpcGxpZXJfdGFyZ2V0ID0KICAgICAgICAgICAgICAgaWYgbm90IGRpLlhl
bmN0cmwuaHZtX2d1ZXN0IHRoZW4K

--=separator
Content-Type: application/octet-stream;
 name="xsa354-3-ls_lR-separate-recursion-into-separate-funct.patch"
Content-Disposition: attachment;
 filename="xsa354-3-ls_lR-separate-recursion-into-separate-funct.patch"
Content-Transfer-Encoding: base64

RnJvbSA5NGU0ZGU2ZmQzNDEzODI3ZGZmZWMwNmQ1YTc0YjNlNjczYTdjYzJj
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiA9P1VURi04P3E/RWR3
aW49MjBUPUMzPUI2cj1DMz1CNms/PSA8ZWR2aW4udG9yb2tAY2l0cml4LmNv
bT4KRGF0ZTogV2VkLCA0IE5vdiAyMDIwIDE5OjUyOjIwICswMDAwClN1Ympl
Y3Q6IFhTQS0zNTQ6IGxzX2xSOiBzZXBhcmF0ZSByZWN1cnNpb24gaW50byBz
ZXBhcmF0ZSBmdW5jdGlvbgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVR5
cGU6IHRleHQvcGxhaW47IGNoYXJzZXQ9VVRGLTgKQ29udGVudC1UcmFuc2Zl
ci1FbmNvZGluZzogOGJpdAoKVGhpcyB3aWxsIG1ha2UgaXQgZWFzaWVyIHRv
IGFkZCBxdW90YSBjaGVja3MuCgpTaWduZWQtb2ZmLWJ5OiBFZHdpbiBUw7Zy
w7ZrIDxlZHZpbi50b3Jva0BjaXRyaXguY29tPgpBY2tlZC1ieTogQ2hyaXN0
aWFuIExpbmRpZyA8Y2hyaXN0aWFuLmxpbmRpZ0BjaXRyaXguY29tPgoKZGlm
ZiAtLWdpdCBhL3hjL3hlbm9wc19zZXJ2ZXJfeGVuLm1sIGIveGMveGVub3Bz
X3NlcnZlcl94ZW4ubWwKaW5kZXggOTEyZTk3ZmEuLjg0YzU5ZjEzIDEwMDY0
NAotLS0gYS94Yy94ZW5vcHNfc2VydmVyX3hlbi5tbAorKysgYi94Yy94ZW5v
cHNfc2VydmVyX3hlbi5tbApAQCAtMjU5MSw5ICsyNTkxLDExIEBAIG1vZHVs
ZSBWTSA9IHN0cnVjdAogICAgICAgICAgICAgICAgICAgICAgKFV1aWRtLnRv
X3N0cmluZyB1dWlkKSkKICAgICAgICAgICAgICAgd2l0aCBYc19wcm90b2Nv
bC5Fbm9lbnQgXyAtPiAiIgogICAgICAgICAgICAgaW4KLSAgICAgICAgICAg
IGxldCByZWMgbHNfbFIgcm9vdCBhY2MgZGlyID0KKyAgICAgICAgICAgIGxl
dCBsc19sIHJvb3QgZGlyID0KICAgICAgICAgICAgICAgbGV0IGVudHJ5ID0g
cm9vdCBeICIvIiBeIGRpciBpbgotICAgICAgICAgICAgICBsZXQgYWNjID0g
dHJ5IChkaXIsIHhzLlhzLnJlYWQgZW50cnkpIDo6IGFjYyB3aXRoIF8gLT4g
YWNjIGluCisgICAgICAgICAgICAgIGxldCB2YWx1ZV9vcHQgPQorICAgICAg
ICAgICAgICAgIHRyeSBTb21lIChkaXIsIHhzLlhzLnJlYWQgZW50cnkpIHdp
dGggXyAtPiBOb25lCisgICAgICAgICAgICAgIGluCiAgICAgICAgICAgICAg
IGxldCBzdWJkaXJzID0KICAgICAgICAgICAgICAgICB0cnkKICAgICAgICAg
ICAgICAgICAgIHhzLlhzLmRpcmVjdG9yeSBlbnRyeQpAQCAtMjYwMSw2ICsy
NjAzLDEzIEBAIG1vZHVsZSBWTSA9IHN0cnVjdAogICAgICAgICAgICAgICAg
ICAgfD4gbWFwX3RyIChmdW4geCAtPiBkaXIgXiAiLyIgXiB4KQogICAgICAg
ICAgICAgICAgIHdpdGggXyAtPiBbXQogICAgICAgICAgICAgICBpbgorICAg
ICAgICAgICAgICAodmFsdWVfb3B0LCBzdWJkaXJzKQorICAgICAgICAgICAg
aW4KKyAgICAgICAgICAgIGxldCByZWMgbHNfbFIgcm9vdCBhY2MgZGlyID0K
KyAgICAgICAgICAgICAgbGV0IHZhbHVlX29wdCwgc3ViZGlycyA9IGxzX2wg
cm9vdCBkaXIgaW4KKyAgICAgICAgICAgICAgbGV0IGFjYyA9CisgICAgICAg
ICAgICAgICAgbWF0Y2ggdmFsdWVfb3B0IHdpdGggU29tZSB2IC0+IHYgOjog
YWNjIHwgTm9uZSAtPiBhY2MKKyAgICAgICAgICAgICAgaW4KICAgICAgICAg
ICAgICAgTGlzdC5mb2xkX2xlZnQgKGxzX2xSIHJvb3QpIGFjYyBzdWJkaXJz
CiAgICAgICAgICAgICBpbgogICAgICAgICAgICAgbGV0IGd1ZXN0X2FnZW50
ID0K

--=separator
Content-Type: application/octet-stream; name="xsa354-4-ls_lR-add-quota.patch"
Content-Disposition: attachment; filename="xsa354-4-ls_lR-add-quota.patch"
Content-Transfer-Encoding: base64

RnJvbSA0OTY3YjllZmYzNmY2MzdmOWU2MDg1NTliMTVlMTI1NTNiMGI5MzIy
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiA9P1VURi04P3E/RWR3
aW49MjBUPUMzPUI2cj1DMz1CNms/PSA8ZWR2aW4udG9yb2tAY2l0cml4LmNv
bT4KRGF0ZTogV2VkLCA0IE5vdiAyMDIwIDIwOjAwOjQ2ICswMDAwClN1Ympl
Y3Q6IFhTQS0zNTQ6IGxzX2xSOiBhZGQgcXVvdGEKTUlNRS1WZXJzaW9uOiAx
LjAKQ29udGVudC1UeXBlOiB0ZXh0L3BsYWluOyBjaGFyc2V0PVVURi04CkNv
bnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDhiaXQKCnhlbm9wc2Qgd2F0Y2hl
cyBjZXJ0YWluIHN1Yi10cmVlcyBpbiB0aGUgZ3Vlc3Q6CnZtLWRhdGEsIEZJ
U1QsIGRyaXZlcnMsIGF0dHIsIGRhdGEsIGNvbnRyb2wsIGZlYXR1cmUsIHhl
bnNlcnZlci9hdHRyCgp2bS1kYXRhIGlzIHJlcGxpY2F0ZWQgYXMgaXMgaW50
byB0aGUgWEFQSSBkYXRhYmFzZSwgd2hpY2ggcmFpc2VzIGEgRG9TCmNvbmNl
cm46IGVhY2gga2V5IHVwZGF0ZSBpcyBhbXBsaWZpZWQgaW4gREIgYW5kIG1l
c3NhZ2Utc3dpdGNoIHRyYWZmaWM6CnRoZSBlbnRpcmUgVk0gb2JqZWN0cyBu
ZWVkcyB0byBiZSByZWFkIGFuZCB3cml0dGVuIGJhY2ssIHdoaWNoIHJlc3Vs
dHMKaW4gTyhOXjIpIG9wZXJhdGlvbiBpZiB3ZSBpbnNlcnQgTiBrZXlzLgoK
Vk1zIGRvIGhhdmUgeGVuc3RvcmUgcXVvdGFzLCBidXQgZXZlbiB3aGVuIHRo
ZXkgd29yayB0aGV5IGFyZSBmYWlybHkKbGFyZ2UgKDgxOTIga2V5cykuIEl0
IGlzIG5vdCBlYXN5IHRvIHJlZHVjZSB0aGF0IHF1b3RhIChjYWxjdWxhdGlu
ZwpudW1iZXIgb2Yga2V5cyBsZWdpdGltYXRlbHkgbmVlZGVkIGZvciAyNTUg
ZGlza3MsIDggTklDcywgMzIgdkNQVXMsCmV0Yy4pIHJlc3VsdHMgaW4gfjcw
MDAga2V5cyBhbHJlYWR5LgoKSG93ZXZlciB2bS1kYXRhIGlzIG5vdCBtZWFu
dCBhcyBhIGdlbmVyYWwgcHVycG9zZSBkYXRhIHN0b3JlLCBhbmQKYWx0aG91
Z2ggaXQgcHJvYmFibHkgc2hvdWxkbid0IGhhdmUgZXhpc3RlZCBpbiB0aGUg
Zmlyc3QgcGxhY2UsIHdlIGNhbid0CmNoYW5nZSB0aGF0IGZvciBMQ00gcmVs
ZWFzZXMuIExldHMgYXQgbGVhc3QgYWRkIGEgc21hbGwgcXVvdGEgb24gaXQs
CnN1Y2ggdGhhdCBOXjIgaXNuJ3QgdG9vIGJpZy4KCldpdGggdGhpcyBjaGFu
Z2UgeGVub3BzZCBkb2Vzbid0IHJ1biBvdXQgb2YgbWVtb3J5IGFueW1vcmUg
KGFzIGxvbmcgYXMKb3hlbnN0b3JlZCBwcm9wZXJseSBlbmZvcmNlcyBxdW90
YSkuICBOb3cgYSBWTSBzaG91bGQgb25seSBiZSBhYmxlIHRvCnN0b3JlIH4y
LjNNaUIsIGluc3RlYWQgb2YgfjE0NE1pQiB3aGljaCBpcyBhIHNpZ25pZmlj
YW50IHJlZHVjdGlvbi4KCldlIHNob3VsZCBjb25zaWRlciBpbnRyb2R1Y2lu
ZyBhIGxpbWl0IGluIGJ5dGVzLCBhbHRob3VnaCB0aGF0IHdvdWxkIGJlCm1v
cmUgY29tcGxpY2F0ZWQgYmVjYXVzZSB0aGUgWE1MIHNlcmlhbGl6YXRpb24g
Y2FuIGFtcGxpZnkgdGhlIHNpemUKKGUuZy4gIiBnZXRzIHR1cm5lZCBpbnRv
ICZxdW90OykgYW5kIGlzIGEgWEFQSSBpbXBsZW1lbnRhdGlvbiBkZXRhaWwu
CgpTaWduZWQtb2ZmLWJ5OiBFZHdpbiBUw7Zyw7ZrIDxlZHZpbi50b3Jva0Bj
aXRyaXguY29tPgpBY2tlZC1ieTogQ2hyaXN0aWFuIExpbmRpZyA8Y2hyaXN0
aWFuLmxpbmRpZ0BjaXRyaXguY29tPgoKZGlmZiAtLWdpdCBhL2xpYi94ZW5v
cHNkLm1sIGIvbGliL3hlbm9wc2QubWwKaW5kZXggY2I0MTg1MTYuLjk0YzU1
OTMwIDEwMDY0NAotLS0gYS9saWIveGVub3BzZC5tbAorKysgYi9saWIveGVu
b3BzZC5tbApAQCAtNjYsNiArNjYsOSBAQCBsZXQgbnVtYV9wbGFjZW1lbnQg
PSByZWYgZmFsc2UKICgqIFRoaXMgaXMgZm9yIGRlYnVnZ2luZyBvbmx5ICop
CiBsZXQgbnVtYV9wbGFjZW1lbnRfc3RyaWN0ID0gcmVmIGZhbHNlCiAKKygq
IE8oTl4yKSBvcGVyYXRpb25zLCB1bnRpbCB3ZSBnZXQgYSB4ZW5zdG9yZSBj
YWNoZSwgc28gdXNlIGEgc21hbGwgbnVtYmVyIGhlcmUgKikKK2xldCB2bV94
ZW5zdG9yZV9sc19sUl9xdW90YSA9IHJlZiAxMjgKKwogbGV0IG9wdGlvbnMg
PQogICBbCiAgICAgKCAicXVldWUiCkBAIC0xNjksNiArMTcyLDEwIEBAIGxl
dCBvcHRpb25zID0KICAgICAsIChmdW4gKCkgLT4gc3RyaW5nX29mX2Jvb2wg
IXBjaV9xdWFyYW50aW5lKQogICAgICwgIlRydWUgaWYgSU9NTVUgY29udGV4
dHMgb2YgUENJIGRldmljZXMgYXJlIG5lZWRlZCB0byBiZSBwbGFjZWQgaW4g
XAogICAgICAgIHF1YXJhbnRpbmUiICkKKyAgOyAoICJ2bS14ZW5zdG9yZS1s
cy1sUi1xdW90YSIKKyAgICAsIEFyZy5TZXRfaW50IHZtX3hlbnN0b3JlX2xz
X2xSX3F1b3RhCisgICAgLCAoZnVuICgpIC0+IHN0cmluZ19vZl9pbnQgIXZt
X3hlbnN0b3JlX2xzX2xSX3F1b3RhKQorICAgICwgIk1heGltdW0gZW50cmll
cyBpbiBWTSB4ZW5zdG9yZSB0cmVlcyB3YXRjaGVkIGJ5IHhlbm9wc2QiICkK
ICAgXQogCiBsZXQgcGF0aCAoKSA9IEZpbGVuYW1lLmNvbmNhdCAhc29ja2V0
c19wYXRoICJ4ZW5vcHNkIgpkaWZmIC0tZ2l0IGEveGMveGVub3BzX3NlcnZl
cl94ZW4ubWwgYi94Yy94ZW5vcHNfc2VydmVyX3hlbi5tbAppbmRleCA4NGM1
OWYxMy4uMzFhMjIxODYgMTAwNjQ0Ci0tLSBhL3hjL3hlbm9wc19zZXJ2ZXJf
eGVuLm1sCisrKyBiL3hjL3hlbm9wc19zZXJ2ZXJfeGVuLm1sCkBAIC0yNTkx
LDQyICsyNTkxLDcxIEBAIG1vZHVsZSBWTSA9IHN0cnVjdAogICAgICAgICAg
ICAgICAgICAgICAgKFV1aWRtLnRvX3N0cmluZyB1dWlkKSkKICAgICAgICAg
ICAgICAgd2l0aCBYc19wcm90b2NvbC5Fbm9lbnQgXyAtPiAiIgogICAgICAg
ICAgICAgaW4KLSAgICAgICAgICAgIGxldCBsc19sIHJvb3QgZGlyID0KKyAg
ICAgICAgICAgIGxldCBsc19sIH5kZXB0aCByb290IGRpciA9CiAgICAgICAg
ICAgICAgIGxldCBlbnRyeSA9IHJvb3QgXiAiLyIgXiBkaXIgaW4KICAgICAg
ICAgICAgICAgbGV0IHZhbHVlX29wdCA9CiAgICAgICAgICAgICAgICAgdHJ5
IFNvbWUgKGRpciwgeHMuWHMucmVhZCBlbnRyeSkgd2l0aCBfIC0+IE5vbmUK
ICAgICAgICAgICAgICAgaW4KICAgICAgICAgICAgICAgbGV0IHN1YmRpcnMg
PQotICAgICAgICAgICAgICAgIHRyeQotICAgICAgICAgICAgICAgICAgeHMu
WHMuZGlyZWN0b3J5IGVudHJ5Ci0gICAgICAgICAgICAgICAgICB8PiBMaXN0
LmZpbHRlciAoZnVuIHggLT4geCA8PiAiIikKLSAgICAgICAgICAgICAgICAg
IHw+IG1hcF90ciAoZnVuIHggLT4gZGlyIF4gIi8iIF4geCkKLSAgICAgICAg
ICAgICAgICB3aXRoIF8gLT4gW10KKyAgICAgICAgICAgICAgICBpZiBkZXB0
aCA8IDAgdGhlbgorICAgICAgICAgICAgICAgICAgW10KKyAgICAgICAgICAg
ICAgICAoKiBkZXB0aCBsaW1pdCByZWFjaGVkLCBhdCBhIGRlcHRoIG9mIDAg
d2Ugc3RpbGwgcmVhZCBlbnRyaWVzL3ZhbHVlcywgYnV0IHN0b3AKKyAgICAg
ICAgICAgICAgICAgKiBkZXNjZW5kaW5nIGludG8gc3ViZGlycyAqKQorICAg
ICAgICAgICAgICAgIGVsc2UKKyAgICAgICAgICAgICAgICAgIHRyeQorICAg
ICAgICAgICAgICAgICAgICB4cy5Ycy5kaXJlY3RvcnkgZW50cnkKKyAgICAg
ICAgICAgICAgICAgICAgfD4gTGlzdC5maWx0ZXIgKGZ1biB4IC0+IHggPD4g
IiIpCisgICAgICAgICAgICAgICAgICAgIHw+IG1hcF90ciAoZnVuIHggLT4g
ZGlyIF4gIi8iIF4geCkKKyAgICAgICAgICAgICAgICAgIHdpdGggXyAtPiBb
XQogICAgICAgICAgICAgICBpbgogICAgICAgICAgICAgICAodmFsdWVfb3B0
LCBzdWJkaXJzKQogICAgICAgICAgICAgaW4KLSAgICAgICAgICAgIGxldCBy
ZWMgbHNfbFIgcm9vdCBhY2MgZGlyID0KLSAgICAgICAgICAgICAgbGV0IHZh
bHVlX29wdCwgc3ViZGlycyA9IGxzX2wgcm9vdCBkaXIgaW4KLSAgICAgICAg
ICAgICAgbGV0IGFjYyA9Ci0gICAgICAgICAgICAgICAgbWF0Y2ggdmFsdWVf
b3B0IHdpdGggU29tZSB2IC0+IHYgOjogYWNjIHwgTm9uZSAtPiBhY2MKLSAg
ICAgICAgICAgICAgaW4KLSAgICAgICAgICAgICAgTGlzdC5mb2xkX2xlZnQg
KGxzX2xSIHJvb3QpIGFjYyBzdWJkaXJzCisgICAgICAgICAgICBsZXQgcmVj
IGxzX2xSID8oZGVwdGggPSA1MTIpIHJvb3QgKHF1b3RhLCBhY2MpIGRpciA9
CisgICAgICAgICAgICAgIGlmIHF1b3RhIDw9IDAgdGhlbgorICAgICAgICAg
ICAgICAgIChxdW90YSwgYWNjKSAoKiBxdW90YSByZWFjaGVkLCBzdG9wIGxp
c3RpbmcvcmVhZGluZyAqKQorICAgICAgICAgICAgICBlbHNlCisgICAgICAg
ICAgICAgICAgbGV0IHZhbHVlX29wdCwgc3ViZGlycyA9IGxzX2wgfmRlcHRo
IHJvb3QgZGlyIGluCisgICAgICAgICAgICAgICAgbGV0IHF1b3RhLCBhY2Mg
PQorICAgICAgICAgICAgICAgICAgbWF0Y2ggdmFsdWVfb3B0IHdpdGgKKyAg
ICAgICAgICAgICAgICAgIHwgU29tZSB2IC0+CisgICAgICAgICAgICAgICAg
ICAgICAgKHF1b3RhIC0gMSwgdiA6OiBhY2MpCisgICAgICAgICAgICAgICAg
ICB8IE5vbmUgLT4KKyAgICAgICAgICAgICAgICAgICAgICAocXVvdGEsIGFj
YykKKyAgICAgICAgICAgICAgICBpbgorICAgICAgICAgICAgICAgIGxldCBk
ZXB0aCA9IGRlcHRoIC0gMSBpbgorICAgICAgICAgICAgICAgIExpc3QuZm9s
ZF9sZWZ0IChsc19sUiB+ZGVwdGggcm9vdCkgKHF1b3RhLCBhY2MpIHN1YmRp
cnMKICAgICAgICAgICAgIGluCi0gICAgICAgICAgICBsZXQgZ3Vlc3RfYWdl
bnQgPQorICAgICAgICAgICAgbGV0IHF1b3RhID0gIVhlbm9wc2Qudm1feGVu
c3RvcmVfbHNfbFJfcXVvdGEgaW4KKyAgICAgICAgICAgIGxldCBxdW90YSwg
Z3Vlc3RfYWdlbnQgPQogICAgICAgICAgICAgICBbCiAgICAgICAgICAgICAg
ICAgImRyaXZlcnMiOyAiYXR0ciI7ICJkYXRhIjsgImNvbnRyb2wiOyAiZmVh
dHVyZSI7ICJ4ZW5zZXJ2ZXIvYXR0ciIKICAgICAgICAgICAgICAgXQogICAg
ICAgICAgICAgICB8PiBMaXN0LmZvbGRfbGVmdAogICAgICAgICAgICAgICAg
ICAgIChsc19sUiAoUHJpbnRmLnNwcmludGYgIi9sb2NhbC9kb21haW4vJWQi
IGRpLlhlbmN0cmwuZG9taWQpKQotICAgICAgICAgICAgICAgICAgIFtdCi0g
ICAgICAgICAgICAgIHw+IG1hcF90ciAoZnVuIChrLCB2KSAtPiAoaywgWGVu
b3BzX3V0aWxzLnV0ZjhfcmVjb2RlIHYpKQorICAgICAgICAgICAgICAgICAg
IChxdW90YSwgW10pCisgICAgICAgICAgICAgIHw+IGZ1biAocXVvdGEsIGFj
YykgLT4KKyAgICAgICAgICAgICAgKHF1b3RhLCBtYXBfdHIgKGZ1biAoaywg
dikgLT4gKGssIFhlbm9wc191dGlscy51dGY4X3JlY29kZSB2KSkgYWNjKQog
ICAgICAgICAgICAgaW4KLSAgICAgICAgICAgIGxldCB4c2RhdGFfc3RhdGUg
PQorICAgICAgICAgICAgbGV0IHF1b3RhLCB4c2RhdGFfc3RhdGUgPQogICAg
ICAgICAgICAgICBEb21haW4uYWxsb3dlZF94c2RhdGFfcHJlZml4ZXMKICAg
ICAgICAgICAgICAgfD4gTGlzdC5mb2xkX2xlZnQKICAgICAgICAgICAgICAg
ICAgICAobHNfbFIgKFByaW50Zi5zcHJpbnRmICIvbG9jYWwvZG9tYWluLyVk
IiBkaS5YZW5jdHJsLmRvbWlkKSkKLSAgICAgICAgICAgICAgICAgICBbXQor
ICAgICAgICAgICAgICAgICAgIChxdW90YSwgW10pCiAgICAgICAgICAgICBp
bgorICAgICAgICAgICAgKCBpZiBxdW90YSA8PSAwIHRoZW4KKyAgICAgICAg
ICAgICAgICBsZXQgcGF0aCA9CisgICAgICAgICAgICAgICAgICBEZXZpY2Vf
Y29tbW9uLnhlbm9wc19wYXRoX29mX2RvbWFpbiBkaS5YZW5jdHJsLmRvbWlk
CisgICAgICAgICAgICAgICAgICBeICIvbHNfbFJfcXVvdGFfcmVhY2hlZCIK
KyAgICAgICAgICAgICAgICBpbgorICAgICAgICAgICAgICAgIHRyeQorICAg
ICAgICAgICAgICAgICAgbGV0IChfIDogc3RyaW5nKSA9IHhzLlhzLnJlYWQg
cGF0aCBpbgorICAgICAgICAgICAgICAgICAgKCkKKyAgICAgICAgICAgICAg
ICB3aXRoIF8gLT4gKAorICAgICAgICAgICAgICAgICAgZGVidWcgInhlbnN0
b3JlIGxzX2xSIHF1b3RhIHJlYWNoZWQgZm9yIGRvbWlkICVkIgorICAgICAg
ICAgICAgICAgICAgICBkaS5YZW5jdHJsLmRvbWlkIDsKKyAgICAgICAgICAg
ICAgICAgIHRyeSB4cy5Ycy53cml0ZSBwYXRoICJ0IiB3aXRoIF8gLT4gKCkK
KyAgICAgICAgICAgICAgICApCisgICAgICAgICAgICApIDsKICAgICAgICAg
ICAgIGxldCBzaGFkb3dfbXVsdGlwbGllcl90YXJnZXQgPQogICAgICAgICAg
ICAgICBpZiBub3QgZGkuWGVuY3RybC5odm1fZ3Vlc3QgdGhlbgogICAgICAg
ICAgICAgICAgIDEuCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa354-5-ls_lR-limit-depth.patch"
Content-Disposition: attachment; filename="xsa354-5-ls_lR-limit-depth.patch"
Content-Transfer-Encoding: base64

RnJvbSAyNDRiYWFhZWJhMGNlODQzOTE3NDQyZjY2OTdmYjA0NzAyYTNjNjZh
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiA9P1VURi04P3E/RWR3
aW49MjBUPUMzPUI2cj1DMz1CNms/PSA8ZWR2aW4udG9yb2tAY2l0cml4LmNv
bT4KRGF0ZTogV2VkLCA0IE5vdiAyMDIwIDIwOjA0OjM5ICswMDAwClN1Ympl
Y3Q6IFhTQS0zNTQ6IGxzX2xSOiBsaW1pdCBkZXB0aApNSU1FLVZlcnNpb246
IDEuMApDb250ZW50LVR5cGU6IHRleHQvcGxhaW47IGNoYXJzZXQ9VVRGLTgK
Q29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogOGJpdAoKV2Ugb25seSB3YW50
IHRvIHJlYWQgYSBmZXcgbGV2ZWxzIGRlZXAgaW50byB0aGUgeGVuc3RvcmUg
dHJlZSBvZiB0aGUKZ3Vlc3QuICBMaW1pdCB0aGUgZGVwdGggYXQgd2hpY2gg
d2UgcmVhZCBrZXlzIHRvIGZ1cnRoZXIgcmVkdWNlIERvUwpwb3RlbnRpYWwu
CgpTaWduZWQtb2ZmLWJ5OiBFZHdpbiBUw7Zyw7ZrIDxlZHZpbi50b3Jva0Bj
aXRyaXguY29tPgpBY2tlZC1ieTogQ2hyaXN0aWFuIExpbmRpZyA8Y2hyaXN0
aWFuLmxpbmRpZ0BjaXRyaXguY29tPgoKZGlmZiAtLWdpdCBhL3hjL3hlbm9w
c19zZXJ2ZXJfeGVuLm1sIGIveGMveGVub3BzX3NlcnZlcl94ZW4ubWwKaW5k
ZXggMzFhMjIxODYuLjMyMDkyZGViIDEwMDY0NAotLS0gYS94Yy94ZW5vcHNf
c2VydmVyX3hlbi5tbAorKysgYi94Yy94ZW5vcHNfc2VydmVyX3hlbi5tbApA
QCAtMjYyOCwxMCArMjYyOCwxOSBAQCBtb2R1bGUgVk0gPSBzdHJ1Y3QKICAg
ICAgICAgICAgIGxldCBxdW90YSA9ICFYZW5vcHNkLnZtX3hlbnN0b3JlX2xz
X2xSX3F1b3RhIGluCiAgICAgICAgICAgICBsZXQgcXVvdGEsIGd1ZXN0X2Fn
ZW50ID0KICAgICAgICAgICAgICAgWwotICAgICAgICAgICAgICAgICJkcml2
ZXJzIjsgImF0dHIiOyAiZGF0YSI7ICJjb250cm9sIjsgImZlYXR1cmUiOyAi
eGVuc2VydmVyL2F0dHIiCisgICAgICAgICAgICAgICAgKCJkcml2ZXJzIiwg
MCkKKyAgICAgICAgICAgICAgOyAoImF0dHIiLCAzKSAoKiBhdHRyL3ZpZi8w
L2lwdjQvMCwgYXR0ci9ldGgwL2lwdjYvMC9hZGRyICopCisgICAgICAgICAg
ICAgIDsgKCJkYXRhIiwgMCkKKyAgICAgICAgICAgICAgICAoKiBpbiBwYXJ0
aWN1bGFyIGF2b2lkIGRhdGEvdm9sdW1lcyB3aGljaCBjb250YWlucyBtYW55
IGVudHJpZXMgZm9yIGVhY2ggZGlzayAqKQorICAgICAgICAgICAgICA7ICgi
Y29udHJvbCIsIDApCisgICAgICAgICAgICAgIDsgKCJmZWF0dXJlL2hvdHBs
dWciLCAwKQorICAgICAgICAgICAgICA7ICgieGVuc2VydmVyL2F0dHIiLCAz
KSAoKiB4ZW5zZXJ2ZXIvYXR0ci9uZXQtc3Jpb3YtdmYvMC9pcHY0LzEgKikK
ICAgICAgICAgICAgICAgXQogICAgICAgICAgICAgICB8PiBMaXN0LmZvbGRf
bGVmdAotICAgICAgICAgICAgICAgICAgIChsc19sUiAoUHJpbnRmLnNwcmlu
dGYgIi9sb2NhbC9kb21haW4vJWQiIGRpLlhlbmN0cmwuZG9taWQpKQorICAg
ICAgICAgICAgICAgICAgIChmdW4gYWNjIChkaXIsIGRlcHRoKSAtPgorICAg
ICAgICAgICAgICAgICAgICAgbHNfbFIgfmRlcHRoCisgICAgICAgICAgICAg
ICAgICAgICAgIChQcmludGYuc3ByaW50ZiAiL2xvY2FsL2RvbWFpbi8lZCIg
ZGkuWGVuY3RybC5kb21pZCkKKyAgICAgICAgICAgICAgICAgICAgICAgYWNj
IGRpcikKICAgICAgICAgICAgICAgICAgICAocXVvdGEsIFtdKQogICAgICAg
ICAgICAgICB8PiBmdW4gKHF1b3RhLCBhY2MpIC0+CiAgICAgICAgICAgICAg
IChxdW90YSwgbWFwX3RyIChmdW4gKGssIHYpIC0+IChrLCBYZW5vcHNfdXRp
bHMudXRmOF9yZWNvZGUgdikpIGFjYykK

--=separator
Content-Type: application/octet-stream;
 name="xsa354-6-exclude-attr-os-hotfixes-from-ls_lR.patch"
Content-Disposition: attachment;
 filename="xsa354-6-exclude-attr-os-hotfixes-from-ls_lR.patch"
Content-Transfer-Encoding: base64

RnJvbSA4ZThmZTNlMTg5NWU5Y2NkNTEwNDc0YjBjN2U2MDMyZDNiYTQ3MWM4
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiA9P1VURi04P3E/RWR3
aW49MjBUPUMzPUI2cj1DMz1CNms/PSA8ZWR2aW4udG9yb2tAY2l0cml4LmNv
bT4KRGF0ZTogRnJpLCA2IE5vdiAyMDIwIDEwOjM4OjM2ICswMDAwClN1Ympl
Y3Q6IFhTQS0zNTQ6IGV4Y2x1ZGUgYXR0ci9vcy9ob3RmaXhlcyBmcm9tIGxz
X2xSCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHlwZTogdGV4dC9wbGFp
bjsgY2hhcnNldD1VVEYtOApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA4
Yml0CgpUaGlzIGlzIG5vdCBuZWVkZWQgYnkgWEFQSSB0byBkZXRlcm1pbmUg
dGhlIGd1ZXN0IFZNIG1ldHJpY3MgZmllbGQuICBBbmQKZm9yIG9sZCBXaW5k
b3dzIE9TZXMgKGUuZy4gV1MxMlIyLXg2NCkgd2l0aCBtYW55IGhvdGZpeGVz
IHRoaXMgY2FuIGhpdAp0aGUgeGVub3BzZCB4ZW5zdG9yZSBxdW90YSwgYnJl
YWtpbmcgb3RoZXIgZnVuY3Rpb25hbGl0eSwgYmVjYXVzZSB3ZQpjYW4ndCBy
ZWFkIHRoZSBjb250cm9sLyBlbnRyaWVzIGFueW1vcmUgdGhlbi4KClNpZ25l
ZC1vZmYtYnk6IEVkd2luIFTDtnLDtmsgPGVkdmluLnRvcm9rQGNpdHJpeC5j
b20+CkFja2VkLWJ5OiBDaHJpc3RpYW4gTGluZGlnIDxjaHJpc3RpYW4ubGlu
ZGlnQGNpdHJpeC5jb20+CgpkaWZmIC0tZ2l0IGEveGMveGVub3BzX3NlcnZl
cl94ZW4ubWwgYi94Yy94ZW5vcHNfc2VydmVyX3hlbi5tbAppbmRleCAzMjA5
MmRlYi4uYjgwYmY3NmIgMTAwNjQ0Ci0tLSBhL3hjL3hlbm9wc19zZXJ2ZXJf
eGVuLm1sCisrKyBiL3hjL3hlbm9wc19zZXJ2ZXJfeGVuLm1sCkBAIC0yNjEx
LDcgKzI2MTEsNyBAQCBtb2R1bGUgVk0gPSBzdHJ1Y3QKICAgICAgICAgICAg
ICAgKHZhbHVlX29wdCwgc3ViZGlycykKICAgICAgICAgICAgIGluCiAgICAg
ICAgICAgICBsZXQgcmVjIGxzX2xSID8oZGVwdGggPSA1MTIpIHJvb3QgKHF1
b3RhLCBhY2MpIGRpciA9Ci0gICAgICAgICAgICAgIGlmIHF1b3RhIDw9IDAg
dGhlbgorICAgICAgICAgICAgICBpZiBxdW90YSA8PSAwIHx8IGRpciA9ICJh
dHRyL29zL2hvdGZpeGVzIiB0aGVuCiAgICAgICAgICAgICAgICAgKHF1b3Rh
LCBhY2MpICgqIHF1b3RhIHJlYWNoZWQsIHN0b3AgbGlzdGluZy9yZWFkaW5n
ICopCiAgICAgICAgICAgICAgIGVsc2UKICAgICAgICAgICAgICAgICBsZXQg
dmFsdWVfb3B0LCBzdWJkaXJzID0gbHNfbCB+ZGVwdGggcm9vdCBkaXIgaW4K

--=separator
Content-Type: application/octet-stream;
 name="xsa354-7-read-important-xenstore-entries-first.patch"
Content-Disposition: attachment;
 filename="xsa354-7-read-important-xenstore-entries-first.patch"
Content-Transfer-Encoding: base64

RnJvbSAxYjU1NzkyODc1NjYwZjVjNmNlM2E1NGUwZjQ0NDY1ZGNjNjE0ZWFh
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiA9P1VURi04P3E/RWR3
aW49MjBUPUMzPUI2cj1DMz1CNms/PSA8ZWR2aW4udG9yb2tAY2l0cml4LmNv
bT4KRGF0ZTogTW9uLCAyMyBOb3YgMjAyMCAxMTowMDozNSArMDAwMApTdWJq
ZWN0OiBYU0EtMzU0OiByZWFkIGltcG9ydGFudCB4ZW5zdG9yZSBlbnRyaWVz
IGZpcnN0Ck1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHlwZTogdGV4dC9w
bGFpbjsgY2hhcnNldD1VVEYtOApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5n
OiA4Yml0CgpXaGVuIHRoZSBxdW90YSBpcyBoaXQgc29tZSBndWVzdCBhZ2Vu
dCBmdW5jdGlvbmFsaXR5IG1pZ2h0IHN0b3Agd29ya2luZy4KV2FybiBhYm91
dCB0aGlzLCBhbmQgdHJ5IHRvIHJlYWQgdGhlIG1vcmUgaW1wb3J0YW50IGVu
dHJpZXMgZmlyc3QgYXMgYQpiZXN0LWVmZm9ydCB0byBrZWVwIGl0IHdvcmtp
bmcgKGNvbnRyb2wsIGV0Yy4pLiAgUmVhZCBkYXRhLyBhbmQgdm0tZGF0YS8K
bGFzdCwgc28gaWYgdGhlIHF1b3RhIGlzIGhpdCB0aGVyZSB3ZSBjYW4gc3Rp
bGwgZG8gYSBjbGVhbiBzaHV0ZG93biBvZiBhClZNLgoKT2YgY291cnNlIGEg
cHJpdmlsZWdlZCB1c2VyIGluc2lkZSB0aGUgZ3Vlc3QgY2FuIHN0aWxsIHVz
ZSB1cCBlbnRyaWVzCmJleW9uZCB0aGUgcXVvdGEgaW4gb25lIG9mIHRoZSBv
dGhlciB4ZW5zdG9yZSBzdWJ0cmVlcyBhbmQgYnJlYWsgZ3Vlc3QKYWdlbnQg
ZnVuY3Rpb25hbGl0eSwgYnV0IGlmIGl0IGlzIHN1ZmZpY2llbnRseSBwcml2
aWxlZ2VkIGl0IGNvdWxkIGp1c3QKa2lsbCB0aGUgZ3Vlc3QgYWdlbnQsIHNv
IHRoYXQgaXMgZXhwZWN0ZWQuCgpSZW5hbWUgbHMtbFItcXVvdGEgdG8gZ3Vl
c3QtYWdlbnQtcXVvdGEuICBBZGQgY29tbWVudCBleHBsYWluaW5nIGRlcHRo
LgoKUGVyaW9kaWNhbGx5IHdhcm4gd2hlbiB0aGUgcXVvdGEgaXMgZXhjZWVk
ZWQuCgpTaWduZWQtb2ZmLWJ5OiBFZHdpbiBUw7Zyw7ZrIDxlZHZpbi50b3Jv
a0BjaXRyaXguY29tPgpBY2tlZC1ieTogQ2hyaXN0aWFuIExpbmRpZyA8Y2hy
aXN0aWFuLmxpbmRpZ0BjaXRyaXguY29tPgoKZGlmZiAtLWdpdCBhL2xpYi94
ZW5vcHNkLm1sIGIvbGliL3hlbm9wc2QubWwKaW5kZXggOTRjNTU5MzAuLmRh
MDJiNzliIDEwMDY0NAotLS0gYS9saWIveGVub3BzZC5tbAorKysgYi9saWIv
eGVub3BzZC5tbApAQCAtNjcsNyArNjcsOSBAQCBsZXQgbnVtYV9wbGFjZW1l
bnQgPSByZWYgZmFsc2UKIGxldCBudW1hX3BsYWNlbWVudF9zdHJpY3QgPSBy
ZWYgZmFsc2UKIAogKCogTyhOXjIpIG9wZXJhdGlvbnMsIHVudGlsIHdlIGdl
dCBhIHhlbnN0b3JlIGNhY2hlLCBzbyB1c2UgYSBzbWFsbCBudW1iZXIgaGVy
ZSAqKQotbGV0IHZtX3hlbnN0b3JlX2xzX2xSX3F1b3RhID0gcmVmIDEyOAor
bGV0IHZtX2d1ZXN0X2FnZW50X3hlbnN0b3JlX3F1b3RhID0gcmVmIDEyOAor
CitsZXQgdm1fZ3Vlc3RfYWdlbnRfeGVuc3RvcmVfcXVvdGFfd2Fybl9pbnRl
cnZhbCA9IHJlZiAzNjAwCiAKIGxldCBvcHRpb25zID0KICAgWwpAQCAtMTcy
LDEwICsxNzQsMTQgQEAgbGV0IG9wdGlvbnMgPQogICAgICwgKGZ1biAoKSAt
PiBzdHJpbmdfb2ZfYm9vbCAhcGNpX3F1YXJhbnRpbmUpCiAgICAgLCAiVHJ1
ZSBpZiBJT01NVSBjb250ZXh0cyBvZiBQQ0kgZGV2aWNlcyBhcmUgbmVlZGVk
IHRvIGJlIHBsYWNlZCBpbiBcCiAgICAgICAgcXVhcmFudGluZSIgKQotICA7
ICggInZtLXhlbnN0b3JlLWxzLWxSLXF1b3RhIgotICAgICwgQXJnLlNldF9p
bnQgdm1feGVuc3RvcmVfbHNfbFJfcXVvdGEKLSAgICAsIChmdW4gKCkgLT4g
c3RyaW5nX29mX2ludCAhdm1feGVuc3RvcmVfbHNfbFJfcXVvdGEpCisgIDsg
KCAidm0tZ3Vlc3QtYWdlbnQteGVuc3RvcmUtcXVvdGEiCisgICAgLCBBcmcu
U2V0X2ludCB2bV9ndWVzdF9hZ2VudF94ZW5zdG9yZV9xdW90YQorICAgICwg
KGZ1biAoKSAtPiBzdHJpbmdfb2ZfaW50ICF2bV9ndWVzdF9hZ2VudF94ZW5z
dG9yZV9xdW90YSkKICAgICAsICJNYXhpbXVtIGVudHJpZXMgaW4gVk0geGVu
c3RvcmUgdHJlZXMgd2F0Y2hlZCBieSB4ZW5vcHNkIiApCisgIDsgKCAidm0t
Z3Vlc3QtYWdlbnQteGVuc3RvcmUtcXVvdGEtd2Fybi1pbnRlcnZhbCIKKyAg
ICAsIEFyZy5TZXRfaW50IHZtX2d1ZXN0X2FnZW50X3hlbnN0b3JlX3F1b3Rh
X3dhcm5faW50ZXJ2YWwKKyAgICAsIChmdW4gKCkgLT4gc3RyaW5nX29mX2lu
dCAhdm1fZ3Vlc3RfYWdlbnRfeGVuc3RvcmVfcXVvdGFfd2Fybl9pbnRlcnZh
bCkKKyAgICAsICJIb3cgb2Z0ZW4gdG8gd2FybiB0aGF0IGEgVk0gaXMgc3Rp
bGwgb3ZlciBpdHMgeGVuc3RvcmUgcXVvdGEiICkKICAgXQogCiBsZXQgcGF0
aCAoKSA9IEZpbGVuYW1lLmNvbmNhdCAhc29ja2V0c19wYXRoICJ4ZW5vcHNk
IgpkaWZmIC0tZ2l0IGEveGMveGVub3BzX3NlcnZlcl94ZW4ubWwgYi94Yy94
ZW5vcHNfc2VydmVyX3hlbi5tbAppbmRleCBiODBiZjc2Yi4uZDUyMDUxNjUg
MTAwNjQ0Ci0tLSBhL3hjL3hlbm9wc19zZXJ2ZXJfeGVuLm1sCisrKyBiL3hj
L3hlbm9wc19zZXJ2ZXJfeGVuLm1sCkBAIC0yNjI1LDE2ICsyNjI1LDE4IEBA
IG1vZHVsZSBWTSA9IHN0cnVjdAogICAgICAgICAgICAgICAgIGxldCBkZXB0
aCA9IGRlcHRoIC0gMSBpbgogICAgICAgICAgICAgICAgIExpc3QuZm9sZF9s
ZWZ0IChsc19sUiB+ZGVwdGggcm9vdCkgKHF1b3RhLCBhY2MpIHN1YmRpcnMK
ICAgICAgICAgICAgIGluCi0gICAgICAgICAgICBsZXQgcXVvdGEgPSAhWGVu
b3BzZC52bV94ZW5zdG9yZV9sc19sUl9xdW90YSBpbgorICAgICAgICAgICAg
bGV0IHF1b3RhID0gIVhlbm9wc2Qudm1fZ3Vlc3RfYWdlbnRfeGVuc3RvcmVf
cXVvdGEgaW4KKyAgICAgICAgICAgICgqIGRlcHRoIGlzIHRoZSBudW1iZXIg
b2YgZGlyZWN0b3JpZXMgZGVzY2VuZGVkIGludG8sCisgICAgICAgICAgICAg
ICBrZXlzIGF0IGRlcHRoKzEgYXJlIHN0aWxsIHJlYWQgKikKICAgICAgICAg
ICAgIGxldCBxdW90YSwgZ3Vlc3RfYWdlbnQgPQogICAgICAgICAgICAgICBb
Ci0gICAgICAgICAgICAgICAgKCJkcml2ZXJzIiwgMCkKKyAgICAgICAgICAg
ICAgICAoImNvbnRyb2wiLCAwKQorICAgICAgICAgICAgICA7ICgiZmVhdHVy
ZS9ob3RwbHVnIiwgMCkKKyAgICAgICAgICAgICAgOyAoInhlbnNlcnZlci9h
dHRyIiwgMykgKCogeGVuc2VydmVyL2F0dHIvbmV0LXNyaW92LXZmLzAvaXB2
NC8xICopCiAgICAgICAgICAgICAgIDsgKCJhdHRyIiwgMykgKCogYXR0ci92
aWYvMC9pcHY0LzAsIGF0dHIvZXRoMC9pcHY2LzAvYWRkciAqKQorICAgICAg
ICAgICAgICA7ICgiZHJpdmVycyIsIDApCiAgICAgICAgICAgICAgIDsgKCJk
YXRhIiwgMCkKICAgICAgICAgICAgICAgICAoKiBpbiBwYXJ0aWN1bGFyIGF2
b2lkIGRhdGEvdm9sdW1lcyB3aGljaCBjb250YWlucyBtYW55IGVudHJpZXMg
Zm9yIGVhY2ggZGlzayAqKQotICAgICAgICAgICAgICA7ICgiY29udHJvbCIs
IDApCi0gICAgICAgICAgICAgIDsgKCJmZWF0dXJlL2hvdHBsdWciLCAwKQot
ICAgICAgICAgICAgICA7ICgieGVuc2VydmVyL2F0dHIiLCAzKSAoKiB4ZW5z
ZXJ2ZXIvYXR0ci9uZXQtc3Jpb3YtdmYvMC9pcHY0LzEgKikKICAgICAgICAg
ICAgICAgXQogICAgICAgICAgICAgICB8PiBMaXN0LmZvbGRfbGVmdAogICAg
ICAgICAgICAgICAgICAgIChmdW4gYWNjIChkaXIsIGRlcHRoKSAtPgpAQCAt
MjY1MSwxOSArMjY1Myw0NiBAQCBtb2R1bGUgVk0gPSBzdHJ1Y3QKICAgICAg
ICAgICAgICAgICAgICAobHNfbFIgKFByaW50Zi5zcHJpbnRmICIvbG9jYWwv
ZG9tYWluLyVkIiBkaS5YZW5jdHJsLmRvbWlkKSkKICAgICAgICAgICAgICAg
ICAgICAocXVvdGEsIFtdKQogICAgICAgICAgICAgaW4KKyAgICAgICAgICAg
IGxldCBwYXRoID0KKyAgICAgICAgICAgICAgRGV2aWNlX2NvbW1vbi54ZW5v
cHNfcGF0aF9vZl9kb21haW4gZGkuWGVuY3RybC5kb21pZAorICAgICAgICAg
ICAgICBeICIvZ3Vlc3RfYWdlbnRfcXVvdGFfcmVhY2hlZCIKKyAgICAgICAg
ICAgIGluCisgICAgICAgICAgICAoKiB3ZSBkb24ndCB3YW50IHRoZSBndWVz
dCBjb250cm9sbGluZyBob3cgb2Z0ZW4gd2Ugd2FybiAqKQorICAgICAgICAg
ICAgbGV0IHdhcm5lZF9wYXRoID0KKyAgICAgICAgICAgICAgRGV2aWNlX2Nv
bW1vbi5nZXRfcHJpdmF0ZV9wYXRoIGRpLlhlbmN0cmwuZG9taWQKKyAgICAg
ICAgICAgICAgXiAiL2d1ZXN0X2FnZW50X3F1b3RhX3dhcm5lZCIKKyAgICAg
ICAgICAgIGluCiAgICAgICAgICAgICAoIGlmIHF1b3RhIDw9IDAgdGhlbgot
ICAgICAgICAgICAgICAgIGxldCBwYXRoID0KLSAgICAgICAgICAgICAgICAg
IERldmljZV9jb21tb24ueGVub3BzX3BhdGhfb2ZfZG9tYWluIGRpLlhlbmN0
cmwuZG9taWQKLSAgICAgICAgICAgICAgICAgIF4gIi9sc19sUl9xdW90YV9y
ZWFjaGVkIgotICAgICAgICAgICAgICAgIGluCiAgICAgICAgICAgICAgICAg
dHJ5CiAgICAgICAgICAgICAgICAgICBsZXQgKF8gOiBzdHJpbmcpID0geHMu
WHMucmVhZCBwYXRoIGluCi0gICAgICAgICAgICAgICAgICAoKQorICAgICAg
ICAgICAgICAgICAgbGV0IG5vdyA9IFVuaXguZ2V0dGltZW9mZGF5ICgpIGlu
CisgICAgICAgICAgICAgICAgICBsZXQgbGFzdCA9CisgICAgICAgICAgICAg
ICAgICAgIHRyeSBmbG9hdF9vZl9zdHJpbmcgKHhzLlhzLnJlYWQgd2FybmVk
X3BhdGgpIHdpdGggXyAtPiAwLgorICAgICAgICAgICAgICAgICAgaW4KKyAg
ICAgICAgICAgICAgICAgIGlmCisgICAgICAgICAgICAgICAgICAgIG5vdyAt
LiBsYXN0CisgICAgICAgICAgICAgICAgICAgID4gZmxvYXQgIVhlbm9wc2Qu
dm1fZ3Vlc3RfYWdlbnRfeGVuc3RvcmVfcXVvdGFfd2Fybl9pbnRlcnZhbAor
ICAgICAgICAgICAgICAgICAgdGhlbiAoCisgICAgICAgICAgICAgICAgICAg
ICgqIHBlcmlvZGljYWxseSB3YXJuIGlmIHRoZSBxdW90YSBpcyBzdGlsbCBl
eGNlZWRlZCAqKQorICAgICAgICAgICAgICAgICAgICB4cy5Ycy53cml0ZSB3
YXJuZWRfcGF0aCAoc3RyaW5nX29mX2Zsb2F0IG5vdykgOworICAgICAgICAg
ICAgICAgICAgICB3YXJuCisgICAgICAgICAgICAgICAgICAgICAgInhlbnN0
b3JlIGd1ZXN0IGFnZW50IHF1b3RhIGlzIHN0aWxsIGV4Y2VlZGVkIGZvciBk
b21pZCBcCisgICAgICAgICAgICAgICAgICAgICAgICVkIgorICAgICAgICAg
ICAgICAgICAgICAgIGRpLlhlbmN0cmwuZG9taWQKKyAgICAgICAgICAgICAg
ICAgICkKICAgICAgICAgICAgICAgICB3aXRoIF8gLT4gKAotICAgICAgICAg
ICAgICAgICAgZGVidWcgInhlbnN0b3JlIGxzX2xSIHF1b3RhIHJlYWNoZWQg
Zm9yIGRvbWlkICVkIgorICAgICAgICAgICAgICAgICAgd2FybgorICAgICAg
ICAgICAgICAgICAgICAieGVuc3RvcmUgZ3Vlc3QgYWdlbnQgcXVvdGEgcmVh
Y2hlZCBmb3IgZG9taWQgJWQgKFZNIFwKKyAgICAgICAgICAgICAgICAgICAg
IG1ldHJpY3MgYW5kIGd1ZXN0IGFnZW50IGludGVyYWN0aW9uIG1pZ2h0IGJl
IGJyb2tlbiwgYW5kIFwKKyAgICAgICAgICAgICAgICAgICAgIHZtLWRhdGEg
aW5jb21wbGV0ZSEpIgogICAgICAgICAgICAgICAgICAgICBkaS5YZW5jdHJs
LmRvbWlkIDsKICAgICAgICAgICAgICAgICAgIHRyeSB4cy5Ycy53cml0ZSBw
YXRoICJ0IiB3aXRoIF8gLT4gKCkKICAgICAgICAgICAgICAgICApCisgICAg
ICAgICAgICBlbHNlCisgICAgICAgICAgICAgIHRyeQorICAgICAgICAgICAg
ICAgIGxldCAoXyA6IHN0cmluZykgPSB4cy5Ycy5yZWFkIHBhdGggaW4KKyAg
ICAgICAgICAgICAgICB4cy5Ycy5ybSBwYXRoCisgICAgICAgICAgICAgIHdp
dGggXyAtPiAoKSAoKiBkbyBub3QgUk0gdGhlICd3YXJuZWQnIHBhdGggdG8g
cHJldmVudCBmbG9vZCAqKQogICAgICAgICAgICAgKSA7CiAgICAgICAgICAg
ICBsZXQgc2hhZG93X211bHRpcGxpZXJfdGFyZ2V0ID0KICAgICAgICAgICAg
ICAgaWYgbm90IGRpLlhlbmN0cmwuaHZtX2d1ZXN0IHRoZW4K

--=separator
Content-Type: application/octet-stream;
 name="xsa354-8-refactor-attr-os-hotfixes-exclusion.patch"
Content-Disposition: attachment;
 filename="xsa354-8-refactor-attr-os-hotfixes-exclusion.patch"
Content-Transfer-Encoding: base64

RnJvbSA5NmY5ZDNjMWY3MzNiNDcxYzg0MmIwOGY5MzBiMzQxMTY5ZjJhZjY1
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiA9P1VURi04P3E/RWR3
aW49MjBUPUMzPUI2cj1DMz1CNms/PSA8ZWR2aW4udG9yb2tAY2l0cml4LmNv
bT4KRGF0ZTogTW9uLCAyMyBOb3YgMjAyMCAxNTo1MDoyOSArMDAwMApTdWJq
ZWN0OiBYU0EtMzU0OiByZWZhY3RvciBhdHRyL29zL2hvdGZpeGVzIGV4Y2x1
c2lvbgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVR5cGU6IHRleHQvcGxh
aW47IGNoYXJzZXQ9VVRGLTgKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzog
OGJpdAoKU2lnbmVkLW9mZi1ieTogRWR3aW4gVMO2csO2ayA8ZWR2aW4udG9y
b2tAY2l0cml4LmNvbT4KQWNrZWQtYnk6IENocmlzdGlhbiBMaW5kaWcgPGNo
cmlzdGlhbi5saW5kaWdAY2l0cml4LmNvbT4KCmRpZmYgLS1naXQgYS94Yy94
ZW5vcHNfc2VydmVyX3hlbi5tbCBiL3hjL3hlbm9wc19zZXJ2ZXJfeGVuLm1s
CmluZGV4IGQ1MjA1MTY1Li5jODA1Yzc1MCAxMDA2NDQKLS0tIGEveGMveGVu
b3BzX3NlcnZlcl94ZW4ubWwKKysrIGIveGMveGVub3BzX3NlcnZlcl94ZW4u
bWwKQEAgLTI1LDYgKzI1LDcgQEAgbW9kdWxlIEQgPSBEZWJ1Zy5NYWtlIChz
dHJ1Y3QgbGV0IG5hbWUgPSBzZXJ2aWNlX25hbWUgZW5kKQogCiBvcGVuIEQK
IG1vZHVsZSBSUkREID0gUnJkX2NsaWVudC5DbGllbnQKK21vZHVsZSBTdHJp
bmdTZXQgPSBTZXQuTWFrZSAoU3RyaW5nKQogCiBsZXQgZmluYWxseSA9IFhh
cGlfc3RkZXh0X3BlcnZhc2l2ZXMuUGVydmFzaXZlZXh0LmZpbmFsbHkKIApA
QCAtMjYxMCw4ICsyNjExLDkgQEAgbW9kdWxlIFZNID0gc3RydWN0CiAgICAg
ICAgICAgICAgIGluCiAgICAgICAgICAgICAgICh2YWx1ZV9vcHQsIHN1YmRp
cnMpCiAgICAgICAgICAgICBpbgotICAgICAgICAgICAgbGV0IHJlYyBsc19s
UiA/KGRlcHRoID0gNTEyKSByb290IChxdW90YSwgYWNjKSBkaXIgPQotICAg
ICAgICAgICAgICBpZiBxdW90YSA8PSAwIHx8IGRpciA9ICJhdHRyL29zL2hv
dGZpeGVzIiB0aGVuCisgICAgICAgICAgICBsZXQgcmVjIGxzX2xSID8oZXhj
bHVkZXMgPSBTdHJpbmdTZXQuZW1wdHkpID8oZGVwdGggPSA1MTIpIHJvb3QK
KyAgICAgICAgICAgICAgICAocXVvdGEsIGFjYykgZGlyID0KKyAgICAgICAg
ICAgICAgaWYgcXVvdGEgPD0gMCB8fCBTdHJpbmdTZXQubWVtIGRpciBleGNs
dWRlcyB0aGVuCiAgICAgICAgICAgICAgICAgKHF1b3RhLCBhY2MpICgqIHF1
b3RhIHJlYWNoZWQsIHN0b3AgbGlzdGluZy9yZWFkaW5nICopCiAgICAgICAg
ICAgICAgIGVsc2UKICAgICAgICAgICAgICAgICBsZXQgdmFsdWVfb3B0LCBz
dWJkaXJzID0gbHNfbCB+ZGVwdGggcm9vdCBkaXIgaW4KQEAgLTI2MjMsMjQg
KzI2MjUsMjkgQEAgbW9kdWxlIFZNID0gc3RydWN0CiAgICAgICAgICAgICAg
ICAgICAgICAgKHF1b3RhLCBhY2MpCiAgICAgICAgICAgICAgICAgaW4KICAg
ICAgICAgICAgICAgICBsZXQgZGVwdGggPSBkZXB0aCAtIDEgaW4KLSAgICAg
ICAgICAgICAgICBMaXN0LmZvbGRfbGVmdCAobHNfbFIgfmRlcHRoIHJvb3Qp
IChxdW90YSwgYWNjKSBzdWJkaXJzCisgICAgICAgICAgICAgICAgTGlzdC5m
b2xkX2xlZnQKKyAgICAgICAgICAgICAgICAgIChsc19sUiB+ZXhjbHVkZXMg
fmRlcHRoIHJvb3QpCisgICAgICAgICAgICAgICAgICAocXVvdGEsIGFjYykg
c3ViZGlycwogICAgICAgICAgICAgaW4KICAgICAgICAgICAgIGxldCBxdW90
YSA9ICFYZW5vcHNkLnZtX2d1ZXN0X2FnZW50X3hlbnN0b3JlX3F1b3RhIGlu
CiAgICAgICAgICAgICAoKiBkZXB0aCBpcyB0aGUgbnVtYmVyIG9mIGRpcmVj
dG9yaWVzIGRlc2NlbmRlZCBpbnRvLAogICAgICAgICAgICAgICAga2V5cyBh
dCBkZXB0aCsxIGFyZSBzdGlsbCByZWFkICopCiAgICAgICAgICAgICBsZXQg
cXVvdGEsIGd1ZXN0X2FnZW50ID0KICAgICAgICAgICAgICAgWwotICAgICAg
ICAgICAgICAgICgiY29udHJvbCIsIDApCi0gICAgICAgICAgICAgIDsgKCJm
ZWF0dXJlL2hvdHBsdWciLCAwKQotICAgICAgICAgICAgICA7ICgieGVuc2Vy
dmVyL2F0dHIiLCAzKSAoKiB4ZW5zZXJ2ZXIvYXR0ci9uZXQtc3Jpb3YtdmYv
MC9pcHY0LzEgKikKLSAgICAgICAgICAgICAgOyAoImF0dHIiLCAzKSAoKiBh
dHRyL3ZpZi8wL2lwdjQvMCwgYXR0ci9ldGgwL2lwdjYvMC9hZGRyICopCi0g
ICAgICAgICAgICAgIDsgKCJkcml2ZXJzIiwgMCkKLSAgICAgICAgICAgICAg
OyAoImRhdGEiLCAwKQorICAgICAgICAgICAgICAgICgiY29udHJvbCIsIE5v
bmUsIDApCisgICAgICAgICAgICAgIDsgKCJmZWF0dXJlL2hvdHBsdWciLCBO
b25lLCAwKQorICAgICAgICAgICAgICA7ICgieGVuc2VydmVyL2F0dHIiLCBO
b25lLCAzKQorICAgICAgICAgICAgICAgICgqIHhlbnNlcnZlci9hdHRyL25l
dC1zcmlvdi12Zi8wL2lwdjQvMSAqKQorICAgICAgICAgICAgICA7ICgiYXR0
ciIsIFNvbWUgKFN0cmluZ1NldC5zaW5nbGV0b24gImF0dHIvb3MvaG90Zml4
ZXMiKSwgMykKKyAgICAgICAgICAgICAgICAoKiBhdHRyL3ZpZi8wL2lwdjQv
MCwgYXR0ci9ldGgwL2lwdjYvMC9hZGRyLAorICAgICAgICAgICAgICAgICAg
IGFuZCBleGNsdWRlIGhvdGZpeGVzIHdoaWNoIGNhbiBleGNlZWQgdGhlIHF1
b3RhIG9uIHRoZWlyIG93biAqKQorICAgICAgICAgICAgICA7ICgiZHJpdmVy
cyIsIE5vbmUsIDApCisgICAgICAgICAgICAgIDsgKCJkYXRhIiwgTm9uZSwg
MCkKICAgICAgICAgICAgICAgICAoKiBpbiBwYXJ0aWN1bGFyIGF2b2lkIGRh
dGEvdm9sdW1lcyB3aGljaCBjb250YWlucyBtYW55IGVudHJpZXMgZm9yIGVh
Y2ggZGlzayAqKQogICAgICAgICAgICAgICBdCiAgICAgICAgICAgICAgIHw+
IExpc3QuZm9sZF9sZWZ0Ci0gICAgICAgICAgICAgICAgICAgKGZ1biBhY2Mg
KGRpciwgZGVwdGgpIC0+Ci0gICAgICAgICAgICAgICAgICAgICBsc19sUiB+
ZGVwdGgKKyAgICAgICAgICAgICAgICAgICAoZnVuIGFjYyAoZGlyLCBleGNs
dWRlcywgZGVwdGgpIC0+CisgICAgICAgICAgICAgICAgICAgICBsc19sUiA/
ZXhjbHVkZXMgfmRlcHRoCiAgICAgICAgICAgICAgICAgICAgICAgIChQcmlu
dGYuc3ByaW50ZiAiL2xvY2FsL2RvbWFpbi8lZCIgZGkuWGVuY3RybC5kb21p
ZCkKICAgICAgICAgICAgICAgICAgICAgICAgYWNjIGRpcikKICAgICAgICAg
ICAgICAgICAgICAocXVvdGEsIFtdKQo=

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 12:33:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 12:33:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.53545.93304 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9Vm-0004Oj-I3; Tue, 15 Dec 2020 12:33:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 53545.93304; Tue, 15 Dec 2020 12:33:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kp9Vm-0004OP-9V; Tue, 15 Dec 2020 12:33:02 +0000
Received: by outflank-mailman (input) for mailman id 53545;
 Tue, 15 Dec 2020 12:33:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tdgx=FT=xenbits.xen.org=gdunlap@srs-us1.protection.inumbo.net>)
 id 1kp9KR-0004tM-36
 for xen-devel@lists.xen.org; Tue, 15 Dec 2020 12:21:19 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bb21617b-e436-44da-a061-124a6152864a;
 Tue, 15 Dec 2020 12:20:27 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9JV-0005hv-Ki; Tue, 15 Dec 2020 12:20:21 +0000
Received: from gdunlap by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <gdunlap@xenbits.xen.org>)
 id 1kp9JV-00074c-Ji; Tue, 15 Dec 2020 12:20:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb21617b-e436-44da-a061-124a6152864a
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=TQAZLV6RXka03JfBarz+Hb4sCc9RDRSVM78JBhJXZRc=; b=x4mqTiGOdqIeg6Nnf7OAw2f08I
	YXus3R67o4vdB7jLdArYj04YYZ7DihkyHhcMFoFBlRqoTwJMwHMsmpu2OWn4L2r9o6xv8nrOO/z/3
	CbNWG9zTMT4TDSSsF9iOiB3VbBmBVJeKMh1aQ9EcCvmfBZDBWyPtN7M8SYzl18GIk+Uw=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 348 v3 (CVE-2020-29566) - undue recursion
 in x86 HVM context switch code
Message-Id: <E1kp9JV-00074c-Ji@xenbits.xenproject.org>
Date: Tue, 15 Dec 2020 12:20:21 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2020-29566 / XSA-348
                               version 3

            undue recursion in x86 HVM context switch code

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

When they require assistance from the device model, x86 HVM guests
must be temporarily de-scheduled.  The device model will signal Xen
when it has completed its operation, via an event channel, so that the
relevant vCPU is rescheduled.

If the device model were to signal Xen without having actually
completed the operation, the de-schedule / re-schedule cycle would
repeat.  If, in addition, Xen is resignalled very quickly, the
re-schedule may occur before the de-schedule is fully complete,
triggering a shortcut.  This potentially repeating process uses
ordinary recursive function calls, and so could result in a stack
overflow.

IMPACT
======

A malicious or buggy stubdomain serving an HVM guest can cause Xen to
crash, resulting in a Denial of Service (DoS) to the entire host.

VULNERABLE SYSTEMS
==================

All Xen versions are vulnerable.

Only x86 systems are affected.  Arm systems are not affected.

Only x86 stubdomains serving HVM guests can exploit the vulnerability.

MITIGATION
==========

Running only PV or PVH guests will avoid the vulnerability.

(Switching from a device model stub domain to a dom0 device model does
NOT mitigate this vulnerability.  Rather, it simply recategorises the
vulnerability as an attack by hostile management code, regarded "as
designed" and therefore "not a bug".  The security
of a Xen system using stub domains is still better than with a qemu-dm
running as a dom0 process.  Users and vendors of stub qemu dm systems
should not change their configuration to use a dom0 qemu process.)

CREDITS
=======

This issue was discovered by Julien Grall of Amazon.

RESOLUTION
==========

Applying the appropriate (set of) attached patch(es) resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa348-?.patch           xen-unstable - Xen 4.14.x
xsa348-4.13-?.patch      Xen 4.13.x
xsa348-4.12.patch        Xen 4.12.x
xsa348-4.11.patch        Xen 4.11.x
xsa348-4.10.patch        Xen 4.10.x

$ sha256sum xsa348*
f9606145cdbd3caacf6be7e5bcb62fc7d2c0b76572c1be26db608c5eac57ead0  xsa348.meta
b619dac8453daa9f85526dec67ed67d999d182ccbc39b91be122b3365a0b5cb9  xsa348-1.patch
01b11ea3be160704c992187ad727ac1f03841cc452bbe2c142b53fddfa2da844  xsa348-2.patch
2c54474da9680625717e5a61b2a3a5ac23acad6f7bc0fcb306fe181fd0a38f1d  xsa348-3.patch
e2f4cbec1a763f045e827ececf13d06dedcc7cc49b42136160c8d986778529ae  xsa348-4.10.patch
15d4f5fb894a45027f4a17a557d4fdb0a390575ab2c2d3aa2b265d3c6239c765  xsa348-4.11.patch
58b1a771dc720b1efb205a9d1baf46aea0205d4c65310e693dd2cfe7834cd8b9  xsa348-4.12.patch
1d181edd11f2437ff9298f9b5e81d75f5e5db8a79a8ce2c5aed0d75882473a0b  xsa348-4.13-1.patch
b68d3dfa2003a7444c165ab3639886b9b502c06cdfd4f43bea747d8fb14dc7cd  xsa348-4.13-2.patch
67ecb0819041bf0b20a1af42970af72a15842571beb13cd0d740b0600e1aa2fd  xsa348-4.13-3.patch
$
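
The listing above can be checked mechanically with `sha256sum --check`.
A self-contained sketch of that verification step, using a synthetic
file rather than the real XSA-348 artifacts:

```shell
# Create a synthetic file, record its SHA-256 in the same two-column
# format used above, then verify it the way a downloaded xsa348 patch
# would be checked against the published list.
printf 'synthetic patch body\n' > demo.patch
sha256sum demo.patch > demo.sha256
sha256sum --check demo.sha256
```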

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations should contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decision-making.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl/YqEoMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZHysH/RUkeyzKbsafoC4gOpdTKsbCOkR6U609yR5Gpv0G
JjoeMculUV+4q4aEJVm+FoXpK2H526akTA9iZnfhxZH224/nJ/MuK8IYdCCUxAPH
GTBa64RMTcl9lwHUZUOOWNFbEwTy7CiLBh+ccAi+o8BJGBDcXYFOtD5CerD08wFI
HJ/OKa4a36q6YDbG5ESvPK+9KL7e/VM+4BUCtvrlQFMV/4zSiBh9rKLlJEa975zB
NC4dZ6ZsM/uRV8s39WQ1ihz2ylAB0Ol/uemYCMWKZRscXxolKJdoWN5F5kpygj3n
ETmwpMQSwDcG+yhIBMbJ3CnCguQzEIVyWs8Z7wPcFMZk9QQ=
=UJMI
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa348.meta"
Content-Disposition: attachment; filename="xsa348.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzNDgsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIs
CiAgICAiNC4xMSIsCiAgICAiNC4xMCIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjEwIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICIxZDcyZDk5MTVlZGZmMGRkNDFmNjAxYmJiMGIxZjgzYzAy
ZmYxNjg5IiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
NTMsCiAgICAgICAgICAgIDExNSwKICAgICAgICAgICAgMzIyLAogICAgICAg
ICAgICAzMjMsCiAgICAgICAgICAgIDMyNCwKICAgICAgICAgICAgMzI1LAog
ICAgICAgICAgICAzMzAsCiAgICAgICAgICAgIDM1MgogICAgICAgICAgXSwK
ICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzQ4LTQu
MTAucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9
LAogICAgIjQuMTEiOiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAgICJ4
ZW4iOiB7CiAgICAgICAgICAiU3RhYmxlUmVmIjogIjQxYTgyMmMzOTI2MzUw
ZjI2OTE3ZDc0N2M4ZGZlZDFjNDRhMmNmNDIiLAogICAgICAgICAgIlByZXJl
cXMiOiBbCiAgICAgICAgICAgIDM1MywKICAgICAgICAgICAgMTE1LAogICAg
ICAgICAgICAzMjIsCiAgICAgICAgICAgIDMyMywKICAgICAgICAgICAgMzI0
LAogICAgICAgICAgICAzMjUsCiAgICAgICAgICAgIDMzMCwKICAgICAgICAg
ICAgMzUyCiAgICAgICAgICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAg
ICAgICAgICAgICJ4c2EzNDgtNC4xMS5wYXRjaCIKICAgICAgICAgIF0KICAg
ICAgICB9CiAgICAgIH0KICAgIH0sCiAgICAiNC4xMiI6IHsKICAgICAgIlJl
Y2lwZXMiOiB7CiAgICAgICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVS
ZWYiOiAiODE0NWQzOGI0ODAwOTI1NWEzMmFiODdhMDJlNDgxY2QwOWM4MTFm
OSIsCiAgICAgICAgICAiUHJlcmVxcyI6IFsKICAgICAgICAgICAgMzUzLAog
ICAgICAgICAgICAxMTUsCiAgICAgICAgICAgIDMyMiwKICAgICAgICAgICAg
MzIzLAogICAgICAgICAgICAzMjQsCiAgICAgICAgICAgIDMyNSwKICAgICAg
ICAgICAgMzMwLAogICAgICAgICAgICAzNTIKICAgICAgICAgIF0sCiAgICAg
ICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTM0OC00LjEyLnBh
dGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAg
ICI0LjEzIjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjog
ewogICAgICAgICAgIlN0YWJsZVJlZiI6ICJiNTMwMjI3M2UyYzUxOTQwMTcy
NDAwNDg2NjQ0NjM2ZjJmNGZjNjRhIiwKICAgICAgICAgICJQcmVyZXFzIjog
WwogICAgICAgICAgICAzNTMsCiAgICAgICAgICAgIDExNSwKICAgICAgICAg
ICAgMzIyLAogICAgICAgICAgICAzMjMsCiAgICAgICAgICAgIDMyNCwKICAg
ICAgICAgICAgMzI1LAogICAgICAgICAgICAzMzAsCiAgICAgICAgICAgIDM1
MgogICAgICAgICAgXSwKICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAg
ICAgICAieHNhMzQ4LTQuMTMtPy5wYXRjaCIKICAgICAgICAgIF0KICAgICAg
ICB9CiAgICAgIH0KICAgIH0sCiAgICAiNC4xNCI6IHsKICAgICAgIlJlY2lw
ZXMiOiB7CiAgICAgICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYi
OiAiMWQxZDFmNTM5MTk3NjQ1NmE3OWRhYWMwZGNmZTcxNTdkYTFlNTRmNyIs
CiAgICAgICAgICAiUHJlcmVxcyI6IFsKICAgICAgICAgICAgMzUzLAogICAg
ICAgICAgICAxMTUsCiAgICAgICAgICAgIDMyMiwKICAgICAgICAgICAgMzIz
LAogICAgICAgICAgICAzMjQsCiAgICAgICAgICAgIDMyNSwKICAgICAgICAg
ICAgMzMwLAogICAgICAgICAgICAzNTIKICAgICAgICAgIF0sCiAgICAgICAg
ICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTM0OC0/LnBhdGNoIgog
ICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAgICJtYXN0
ZXIiOiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAg
ICAgICAgICAiU3RhYmxlUmVmIjogIjNhZTQ2OWFmOGU2ODBkZjMxZWVjZDBh
MmFjNmE4M2I1OGFkN2NlNTMiLAogICAgICAgICAgIlByZXJlcXMiOiBbCiAg
ICAgICAgICAgIDM1MywKICAgICAgICAgICAgMTE1LAogICAgICAgICAgICAz
MjIsCiAgICAgICAgICAgIDMyMywKICAgICAgICAgICAgMzI0LAogICAgICAg
ICAgICAzMjUsCiAgICAgICAgICAgIDMzMCwKICAgICAgICAgICAgMzUyCiAg
ICAgICAgICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAg
ICJ4c2EzNDgtPy5wYXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAgICAg
IH0KICAgIH0KICB9Cn0=

--=separator
Content-Type: application/octet-stream; name="xsa348-1.patch"
Content-Disposition: attachment; filename="xsa348-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODY6IHJlcGxhY2UgcmVzZXRfc3RhY2tfYW5kX2p1bXBfbm9scCgpCgpN
b3ZlIHRoZSBuZWNlc3NhcnkgY2hlY2sgaW50byBjaGVja19mb3JfbGl2ZXBh
dGNoX3dvcmsoKSwgcmF0aGVyIHRoYW4KbW9zdGx5IGR1cGxpY2F0aW5nIHJl
c2V0X3N0YWNrX2FuZF9qdW1wKCkgZm9yIHRoaXMgcHVycG9zZS4gVGhpcyBp
cyB0bwpwcmV2ZW50IGFuIGluZmxhdGlvbiBvZiByZXNldF9zdGFja19hbmRf
anVtcCgpIGZsYXZvcnMuCgpTaWduZWQtb2ZmLWJ5OiBKYW4gQmV1bGljaCA8
amJldWxpY2hAc3VzZS5jb20+ClJldmlld2VkLWJ5OiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+Ci0tLQpPZiBjb3Vyc2UgaW5zdGVhZCBvZiBh
ZGRpbmcgdGhlIGNoZWNrIHJpZ2h0IGludG8KY2hlY2tfZm9yX2xpdmVwYXRj
aF93b3JrKCksIGEgd3JhcHBlciBjb3VsZCBiZSBpbnRyb2R1Y2VkLgoKLS0t
IGEveGVuL2FyY2gveDg2L2RvbWFpbi5jCisrKyBiL3hlbi9hcmNoL3g4Ni9k
b21haW4uYwpAQCAtMTkyLDcgKzE5Miw3IEBAIHN0YXRpYyB2b2lkIG5vcmV0
dXJuIGNvbnRpbnVlX2lkbGVfZG9tYWkKIHsKICAgICAvKiBJZGxlIHZjcHVz
IG1pZ2h0IGJlIGF0dGFjaGVkIHRvIG5vbi1pZGxlIHVuaXRzISAqLwogICAg
IGlmICggIWlzX2lkbGVfZG9tYWluKHYtPnNjaGVkX3VuaXQtPmRvbWFpbikg
KQotICAgICAgICByZXNldF9zdGFja19hbmRfanVtcF9ub2xwKGd1ZXN0X2lk
bGVfbG9vcCk7CisgICAgICAgIHJlc2V0X3N0YWNrX2FuZF9qdW1wKGd1ZXN0
X2lkbGVfbG9vcCk7CiAKICAgICByZXNldF9zdGFja19hbmRfanVtcChpZGxl
X2xvb3ApOwogfQotLS0gYS94ZW4vYXJjaC94ODYvaHZtL3N2bS9zdm0uYwor
KysgYi94ZW4vYXJjaC94ODYvaHZtL3N2bS9zdm0uYwpAQCAtMTAzNiw3ICsx
MDM2LDcgQEAgc3RhdGljIHZvaWQgbm9yZXR1cm4gc3ZtX2RvX3Jlc3VtZShz
dHJ1YwogCiAgICAgaHZtX2RvX3Jlc3VtZSh2KTsKIAotICAgIHJlc2V0X3N0
YWNrX2FuZF9qdW1wX25vbHAoc3ZtX2FzbV9kb19yZXN1bWUpOworICAgIHJl
c2V0X3N0YWNrX2FuZF9qdW1wKHN2bV9hc21fZG9fcmVzdW1lKTsKIH0KIAog
dm9pZCBzdm1fdm1lbnRlcl9oZWxwZXIoY29uc3Qgc3RydWN0IGNwdV91c2Vy
X3JlZ3MgKnJlZ3MpCi0tLSBhL3hlbi9hcmNoL3g4Ni9odm0vdm14L3ZtY3Mu
YworKysgYi94ZW4vYXJjaC94ODYvaHZtL3ZteC92bWNzLmMKQEAgLTE5MDks
NyArMTkwOSw3IEBAIHZvaWQgdm14X2RvX3Jlc3VtZShzdHJ1Y3QgdmNwdSAq
dikKICAgICBpZiAoIGhvc3RfY3I0ICE9IHJlYWRfY3I0KCkgKQogICAgICAg
ICBfX3Ztd3JpdGUoSE9TVF9DUjQsIHJlYWRfY3I0KCkpOwogCi0gICAgcmVz
ZXRfc3RhY2tfYW5kX2p1bXBfbm9scCh2bXhfYXNtX2RvX3ZtZW50cnkpOwor
ICAgIHJlc2V0X3N0YWNrX2FuZF9qdW1wKHZteF9hc21fZG9fdm1lbnRyeSk7
CiB9CiAKIHN0YXRpYyBpbmxpbmUgdW5zaWduZWQgbG9uZyB2bXIodW5zaWdu
ZWQgbG9uZyBmaWVsZCkKLS0tIGEveGVuL2FyY2gveDg2L3B2L2RvbWFpbi5j
CisrKyBiL3hlbi9hcmNoL3g4Ni9wdi9kb21haW4uYwpAQCAtMTEzLDcgKzEx
Myw3IEBAIHN0YXRpYyBpbnQgcGFyc2VfcGNpZChjb25zdCBjaGFyICpzKQog
c3RhdGljIHZvaWQgbm9yZXR1cm4gY29udGludWVfbm9uaWRsZV9kb21haW4o
c3RydWN0IHZjcHUgKnYpCiB7CiAgICAgY2hlY2tfd2FrZXVwX2Zyb21fd2Fp
dCgpOwotICAgIHJlc2V0X3N0YWNrX2FuZF9qdW1wX25vbHAocmV0X2Zyb21f
aW50cik7CisgICAgcmVzZXRfc3RhY2tfYW5kX2p1bXAocmV0X2Zyb21faW50
cik7CiB9CiAKIHN0YXRpYyBpbnQgc2V0dXBfY29tcGF0X2w0KHN0cnVjdCB2
Y3B1ICp2KQotLS0gYS94ZW4vYXJjaC94ODYvc2V0dXAuYworKysgYi94ZW4v
YXJjaC94ODYvc2V0dXAuYwpAQCAtNjc2LDcgKzY3Niw3IEBAIHN0YXRpYyB2
b2lkIF9faW5pdCBub3JldHVybiByZWluaXRfYnNwX3MKICAgICAgICAgYXNt
IHZvbGF0aWxlICgic2V0c3Nic3kiIDo6OiAibWVtb3J5Iik7CiAgICAgfQog
Ci0gICAgcmVzZXRfc3RhY2tfYW5kX2p1bXBfbm9scChpbml0X2RvbmUpOwor
ICAgIHJlc2V0X3N0YWNrX2FuZF9qdW1wKGluaXRfZG9uZSk7CiB9CiAKIC8q
Ci0tLSBhL3hlbi9jb21tb24vbGl2ZXBhdGNoLmMKKysrIGIveGVuL2NvbW1v
bi9saXZlcGF0Y2guYwpAQCAtMTYzNSw2ICsxNjM1LDExIEBAIHZvaWQgY2hl
Y2tfZm9yX2xpdmVwYXRjaF93b3JrKHZvaWQpCiAgICAgc190aW1lX3QgdGlt
ZW91dDsKICAgICB1bnNpZ25lZCBsb25nIGZsYWdzOwogCisgICAgLyogT25s
eSBkbyBhbnkgd29yayB3aGVuIGludm9rZWQgaW4gdHJ1bHkgaWRsZSBzdGF0
ZS4gKi8KKyAgICBpZiAoIHN5c3RlbV9zdGF0ZSAhPSBTWVNfU1RBVEVfYWN0
aXZlIHx8CisgICAgICAgICAhaXNfaWRsZV9kb21haW4oY3VycmVudC0+c2No
ZWRfdW5pdC0+ZG9tYWluKSApCisgICAgICAgIHJldHVybjsKKwogICAgIC8q
IEZhc3QgcGF0aDogbm8gd29yayB0byBkby4gKi8KICAgICBpZiAoICFwZXJf
Y3B1KHdvcmtfdG9fZG8sIGNwdSApICkKICAgICAgICAgcmV0dXJuOwotLS0g
YS94ZW4vaW5jbHVkZS9hc20teDg2L2N1cnJlbnQuaAorKysgYi94ZW4vaW5j
bHVkZS9hc20teDg2L2N1cnJlbnQuaApAQCAtMTU1LDEzICsxNTUsMTMgQEAg
dW5zaWduZWQgbG9uZyBnZXRfc3RhY2tfZHVtcF9ib3R0b20gKHVucwogIyBk
ZWZpbmUgU0hBRE9XX1NUQUNLX1dPUksgIiIKICNlbmRpZgogCi0jZGVmaW5l
IHN3aXRjaF9zdGFja19hbmRfanVtcChmbiwgaW5zdHIpICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBcCisjZGVmaW5lIHJlc2V0X3N0YWNrX2Fu
ZF9qdW1wKGZuKSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBcCiAgICAgKHsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCiAgICAgICAg
IHVuc2lnbmVkIGludCB0bXA7ICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBcCiAgICAgICAgIF9fYXNtX18gX192b2xh
dGlsZV9fICggICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBcCiAgICAgICAgICAgICBTSEFET1dfU1RBQ0tfV09SSyAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCiAgICAgICAg
ICAgICAibW92ICVbc3RrXSwgJSVyc3A7IiAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBcCi0gICAgICAgICAgICBpbnN0ciAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBcCisgICAgICAgICAgICBDSEVDS19GT1JfTElWRVBBVENIX1dPUksg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCiAgICAgICAg
ICAgICAiam1wICVjW2Z1bl07IiAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBcCiAgICAgICAgICAgICA6IFt2YWxdICI9
JnIiICh0bXApLCAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBcCiAgICAgICAgICAgICAgIFtzc3BdICI9JnIiICh0bXApICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCkBAIC0xNzYs
MTIgKzE3Niw2IEBAIHVuc2lnbmVkIGxvbmcgZ2V0X3N0YWNrX2R1bXBfYm90
dG9tICh1bnMKICAgICAgICAgdW5yZWFjaGFibGUoKTsgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKICAgICB9
KQogCi0jZGVmaW5lIHJlc2V0X3N0YWNrX2FuZF9qdW1wKGZuKSAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCi0gICAgc3dpdGNo
X3N0YWNrX2FuZF9qdW1wKGZuLCBDSEVDS19GT1JfTElWRVBBVENIX1dPUksp
Ci0KLSNkZWZpbmUgcmVzZXRfc3RhY2tfYW5kX2p1bXBfbm9scChmbikgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKLSAgICBzd2l0Y2hf
c3RhY2tfYW5kX2p1bXAoZm4sICIiKQotCiAvKgogICogV2hpY2ggVkNQVSdz
IHN0YXRlIGlzIGN1cnJlbnRseSBydW5uaW5nIG9uIGVhY2ggQ1BVPwogICog
VGhpcyBpcyBub3QgbmVjZXNhc3JpbHkgdGhlIHNhbWUgYXMgJ2N1cnJlbnQn
IGFzIGEgQ1BVIG1heSBiZQo=

--=separator
Content-Type: application/octet-stream; name="xsa348-2.patch"
Content-Disposition: attachment; filename="xsa348-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODY6IGZvbGQgZ3Vlc3RfaWRsZV9sb29wKCkgaW50byBpZGxlX2xvb3Ao
KQoKVGhlIGxhdHRlciBjYW4gZWFzaWx5IGJlIG1hZGUgY292ZXIgYm90aCBj
YXNlcy4gVGhpcyBpcyBpbiBwcmVwYXJhdGlvbgpvZiB1c2luZyBpZGxlX2xv
b3AgZGlyZWN0bHkgZm9yIHBvcHVsYXRpbmcgaWRsZV9jc3cudGFpbC4KClRh
a2UgdGhlIGxpYmVydHkgYW5kIGFsc28gYWRqdXN0IGluZGVudGF0aW9uIC8g
c3BhY2luZyBpbiBpbnZvbHZlZCBjb2RlLgoKU2lnbmVkLW9mZi1ieTogSmFu
IEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogSnVl
cmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgoKLS0tIGEveGVuL2FyY2gv
eDg2L2RvbWFpbi5jCisrKyBiL3hlbi9hcmNoL3g4Ni9kb21haW4uYwpAQCAt
MTMzLDE0ICsxMzMsMjIgQEAgdm9pZCBwbGF5X2RlYWQodm9pZCkKIHN0YXRp
YyB2b2lkIGlkbGVfbG9vcCh2b2lkKQogewogICAgIHVuc2lnbmVkIGludCBj
cHUgPSBzbXBfcHJvY2Vzc29yX2lkKCk7CisgICAgLyoKKyAgICAgKiBJZGxl
IHZjcHVzIG1pZ2h0IGJlIGF0dGFjaGVkIHRvIG5vbi1pZGxlIHVuaXRzISBX
ZSBkb24ndCBkbyBhbnkKKyAgICAgKiBzdGFuZGFyZCBpZGxlIHdvcmsgbGlr
ZSB0YXNrbGV0cyBvciBsaXZlcGF0Y2hpbmcgaW4gdGhpcyBjYXNlLgorICAg
ICAqLworICAgIGJvb2wgZ3Vlc3QgPSAhaXNfaWRsZV9kb21haW4oY3VycmVu
dC0+c2NoZWRfdW5pdC0+ZG9tYWluKTsKIAogICAgIGZvciAoIDsgOyApCiAg
ICAgewogICAgICAgICBpZiAoIGNwdV9pc19vZmZsaW5lKGNwdSkgKQorICAg
ICAgICB7CisgICAgICAgICAgICBBU1NFUlQoIWd1ZXN0KTsKICAgICAgICAg
ICAgIHBsYXlfZGVhZCgpOworICAgICAgICB9CiAKICAgICAgICAgLyogQXJl
IHdlIGhlcmUgZm9yIHJ1bm5pbmcgdmNwdSBjb250ZXh0IHRhc2tsZXRzLCBv
ciBmb3IgaWRsaW5nPyAqLwotICAgICAgICBpZiAoIHVubGlrZWx5KHRhc2ts
ZXRfd29ya190b19kbyhjcHUpKSApCisgICAgICAgIGlmICggIWd1ZXN0ICYm
IHVubGlrZWx5KHRhc2tsZXRfd29ya190b19kbyhjcHUpKSApCiAgICAgICAg
IHsKICAgICAgICAgICAgIGRvX3Rhc2tsZXQoKTsKICAgICAgICAgICAgIC8q
IExpdmVwYXRjaCB3b3JrIGlzIGFsd2F5cyBraWNrZWQgb2ZmIHZpYSBhIHRh
c2tsZXQuICovCkBAIC0xNTEsMjggKzE1OSwxNCBAQCBzdGF0aWMgdm9pZCBp
ZGxlX2xvb3Aodm9pZCkKICAgICAgICAgICogYW5kIHRoZW4sIGFmdGVyIGl0
IGlzIGRvbmUsIHdoZXRoZXIgc29mdGlycXMgYmVjYW1lIHBlbmRpbmcKICAg
ICAgICAgICogd2hpbGUgd2Ugd2VyZSBzY3J1YmJpbmcuCiAgICAgICAgICAq
LwotICAgICAgICBlbHNlIGlmICggIXNvZnRpcnFfcGVuZGluZyhjcHUpICYm
ICFzY3J1Yl9mcmVlX3BhZ2VzKCkgICYmCi0gICAgICAgICAgICAgICAgICAg
ICFzb2Z0aXJxX3BlbmRpbmcoY3B1KSApCi0gICAgICAgICAgICBwbV9pZGxl
KCk7Ci0gICAgICAgIGRvX3NvZnRpcnEoKTsKLSAgICB9Ci19Ci0KLS8qCi0g
KiBJZGxlIGxvb3AgZm9yIHNpYmxpbmdzIGluIGFjdGl2ZSBzY2hlZHVsZSB1
bml0cy4KLSAqIFdlIGRvbid0IGRvIGFueSBzdGFuZGFyZCBpZGxlIHdvcmsg
bGlrZSB0YXNrbGV0cyBvciBsaXZlcGF0Y2hpbmcuCi0gKi8KLXN0YXRpYyB2
b2lkIGd1ZXN0X2lkbGVfbG9vcCh2b2lkKQotewotICAgIHVuc2lnbmVkIGlu
dCBjcHUgPSBzbXBfcHJvY2Vzc29yX2lkKCk7Ci0KLSAgICBmb3IgKCA7IDsg
KQotICAgIHsKLSAgICAgICAgQVNTRVJUKCFjcHVfaXNfb2ZmbGluZShjcHUp
KTsKLQotICAgICAgICBpZiAoICFzb2Z0aXJxX3BlbmRpbmcoY3B1KSAmJiAh
c2NydWJfZnJlZV9wYWdlcygpICYmCi0gICAgICAgICAgICAgIXNvZnRpcnFf
cGVuZGluZyhjcHUpKQotICAgICAgICAgICAgc2NoZWRfZ3Vlc3RfaWRsZShw
bV9pZGxlLCBjcHUpOworICAgICAgICBlbHNlIGlmICggIXNvZnRpcnFfcGVu
ZGluZyhjcHUpICYmICFzY3J1Yl9mcmVlX3BhZ2VzKCkgJiYKKyAgICAgICAg
ICAgICAgICAgICFzb2Z0aXJxX3BlbmRpbmcoY3B1KSApCisgICAgICAgIHsK
KyAgICAgICAgICAgIGlmICggZ3Vlc3QgKQorICAgICAgICAgICAgICAgIHNj
aGVkX2d1ZXN0X2lkbGUocG1faWRsZSwgY3B1KTsKKyAgICAgICAgICAgIGVs
c2UKKyAgICAgICAgICAgICAgICBwbV9pZGxlKCk7CisgICAgICAgIH0KICAg
ICAgICAgZG9fc29mdGlycSgpOwogICAgIH0KIH0KQEAgLTE5MCwxMCArMTg0
LDYgQEAgdm9pZCBzdGFydHVwX2NwdV9pZGxlX2xvb3Aodm9pZCkKIAogc3Rh
dGljIHZvaWQgbm9yZXR1cm4gY29udGludWVfaWRsZV9kb21haW4oc3RydWN0
IHZjcHUgKnYpCiB7Ci0gICAgLyogSWRsZSB2Y3B1cyBtaWdodCBiZSBhdHRh
Y2hlZCB0byBub24taWRsZSB1bml0cyEgKi8KLSAgICBpZiAoICFpc19pZGxl
X2RvbWFpbih2LT5zY2hlZF91bml0LT5kb21haW4pICkKLSAgICAgICAgcmVz
ZXRfc3RhY2tfYW5kX2p1bXAoZ3Vlc3RfaWRsZV9sb29wKTsKLQogICAgIHJl
c2V0X3N0YWNrX2FuZF9qdW1wKGlkbGVfbG9vcCk7CiB9CiAK

--=separator
Content-Type: application/octet-stream; name="xsa348-3.patch"
Content-Disposition: attachment; filename="xsa348-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODY6IGF2b2lkIGNhbGxpbmcge3N2bSx2bXh9X2RvX3Jlc3VtZSgpCgpU
aGVzZSBmdW5jdGlvbnMgZm9sbG93IHRoZSBmb2xsb3dpbmcgcGF0aDogaHZt
X2RvX3Jlc3VtZSgpIC0+CmhhbmRsZV9odm1faW9fY29tcGxldGlvbigpIC0+
IGh2bV93YWl0X2Zvcl9pbygpIC0+CndhaXRfb25feGVuX2V2ZW50X2NoYW5u
ZWwoKSAtPiBkb19zb2Z0aXJxKCkgLT4gc2NoZWR1bGUoKSAtPgpzY2hlZF9j
b250ZXh0X3N3aXRjaCgpIC0+IGNvbnRpbnVlX3J1bm5pbmcoKSBhbmQgaGVu
Y2UgbWF5CnJlY3Vyc2l2ZWx5IGludm9rZSB0aGVtc2VsdmVzLiBJZiB0aGlz
IGVuZHMgdXAgaGFwcGVuaW5nIGEgY291cGxlIG9mCnRpbWVzLCBhIHN0YWNr
IG92ZXJmbG93IHdvdWxkIHJlc3VsdC4KClByZXZlbnQgdGhpcyBieSBhbHNv
IHJlc2V0dGluZyB0aGUgc3RhY2sgYXQgdGhlCi0+YXJjaC5jdHh0X3N3aXRj
aC0+dGFpbCgpIGludm9jYXRpb25zIChpbiBib3RoIHBsYWNlcyBmb3IgY29u
c2lzdGVuY3kpCmFuZCB0aHVzIGp1bXBpbmcgdG8gdGhlIGZ1bmN0aW9ucyBp
bnN0ZWFkIG9mIGNhbGxpbmcgdGhlbS4KClRoaXMgaXMgWFNBLTM0OCAvIENW
RS0yMDIwLTI5NTY2LgoKUmVwb3J0ZWQtYnk6IEp1bGllbiBHcmFsbCA8amdy
YWxsQGFtYXpvbi5jb20+ClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxq
YmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IEp1ZXJnZW4gR3Jvc3Mg
PGpncm9zc0BzdXNlLmNvbT4KLS0tCnYyOiBGaXggTElWRVBBVENIIGJ1aWxk
cyBjcmFzaGluZy4KCi0tLSBhL3hlbi9hcmNoL3g4Ni9kb21haW4uYworKysg
Yi94ZW4vYXJjaC94ODYvZG9tYWluLmMKQEAgLTEzMCw3ICsxMzAsNyBAQCB2
b2lkIHBsYXlfZGVhZCh2b2lkKQogICAgICAgICBkZWFkX2lkbGUoKTsKIH0K
IAotc3RhdGljIHZvaWQgaWRsZV9sb29wKHZvaWQpCitzdGF0aWMgdm9pZCBu
b3JldHVybiBpZGxlX2xvb3Aodm9pZCkKIHsKICAgICB1bnNpZ25lZCBpbnQg
Y3B1ID0gc21wX3Byb2Nlc3Nvcl9pZCgpOwogICAgIC8qCkBAIC0xODIsMTEg
KzE4Miw2IEBAIHZvaWQgc3RhcnR1cF9jcHVfaWRsZV9sb29wKHZvaWQpCiAg
ICAgcmVzZXRfc3RhY2tfYW5kX2p1bXAoaWRsZV9sb29wKTsKIH0KIAotc3Rh
dGljIHZvaWQgbm9yZXR1cm4gY29udGludWVfaWRsZV9kb21haW4oc3RydWN0
IHZjcHUgKnYpCi17Ci0gICAgcmVzZXRfc3RhY2tfYW5kX2p1bXAoaWRsZV9s
b29wKTsKLX0KLQogdm9pZCBpbml0X2h5cGVyY2FsbF9wYWdlKHN0cnVjdCBk
b21haW4gKmQsIHZvaWQgKnB0cikKIHsKICAgICBtZW1zZXQocHRyLCAweGNj
LCBQQUdFX1NJWkUpOwpAQCAtNzEwLDcgKzcwNSw3IEBAIGludCBhcmNoX2Rv
bWFpbl9jcmVhdGUoc3RydWN0IGRvbWFpbiAqZCwKICAgICAgICAgc3RhdGlj
IGNvbnN0IHN0cnVjdCBhcmNoX2NzdyBpZGxlX2NzdyA9IHsKICAgICAgICAg
ICAgIC5mcm9tID0gcGFyYXZpcnRfY3R4dF9zd2l0Y2hfZnJvbSwKICAgICAg
ICAgICAgIC50byAgID0gcGFyYXZpcnRfY3R4dF9zd2l0Y2hfdG8sCi0gICAg
ICAgICAgICAudGFpbCA9IGNvbnRpbnVlX2lkbGVfZG9tYWluLAorICAgICAg
ICAgICAgLnRhaWwgPSBpZGxlX2xvb3AsCiAgICAgICAgIH07CiAKICAgICAg
ICAgZC0+YXJjaC5jdHh0X3N3aXRjaCA9ICZpZGxlX2NzdzsKQEAgLTIwNDcs
MjAgKzIwNDIsMTIgQEAgdm9pZCBjb250ZXh0X3N3aXRjaChzdHJ1Y3QgdmNw
dSAqcHJldiwgcwogICAgIC8qIEVuc3VyZSB0aGF0IHRoZSB2Y3B1IGhhcyBh
biB1cC10by1kYXRlIHRpbWUgYmFzZS4gKi8KICAgICB1cGRhdGVfdmNwdV9z
eXN0ZW1fdGltZShuZXh0KTsKIAotICAgIC8qCi0gICAgICogU2NoZWR1bGUg
dGFpbCAqc2hvdWxkKiBiZSBhIHRlcm1pbmFsIGZ1bmN0aW9uIHBvaW50ZXIs
IGJ1dCBsZWF2ZSBhCi0gICAgICogYnVnIGZyYW1lIGFyb3VuZCBqdXN0IGlu
IGNhc2UgaXQgcmV0dXJucywgdG8gc2F2ZSBnb2luZyBiYWNrIGludG8gdGhl
Ci0gICAgICogY29udGV4dCBzd2l0Y2hpbmcgY29kZSBhbmQgbGVhdmluZyBh
IGZhciBtb3JlIHN1YnRsZSBjcmFzaCB0byBkaWFnbm9zZS4KLSAgICAgKi8K
LSAgICBuZXh0ZC0+YXJjaC5jdHh0X3N3aXRjaC0+dGFpbChuZXh0KTsKLSAg
ICBCVUcoKTsKKyAgICByZXNldF9zdGFja19hbmRfanVtcF9pbmQobmV4dGQt
PmFyY2guY3R4dF9zd2l0Y2gtPnRhaWwpOwogfQogCiB2b2lkIGNvbnRpbnVl
X3J1bm5pbmcoc3RydWN0IHZjcHUgKnNhbWUpCiB7Ci0gICAgLyogU2VlIHRo
ZSBjb21tZW50IGFib3ZlLiAqLwotICAgIHNhbWUtPmRvbWFpbi0+YXJjaC5j
dHh0X3N3aXRjaC0+dGFpbChzYW1lKTsKLSAgICBCVUcoKTsKKyAgICByZXNl
dF9zdGFja19hbmRfanVtcF9pbmQoc2FtZS0+ZG9tYWluLT5hcmNoLmN0eHRf
c3dpdGNoLT50YWlsKTsKIH0KIAogaW50IF9fc3luY19sb2NhbF9leGVjc3Rh
dGUodm9pZCkKLS0tIGEveGVuL2FyY2gveDg2L2h2bS9zdm0vc3ZtLmMKKysr
IGIveGVuL2FyY2gveDg2L2h2bS9zdm0vc3ZtLmMKQEAgLTk5MSw4ICs5OTEs
OSBAQCBzdGF0aWMgdm9pZCBzdm1fY3R4dF9zd2l0Y2hfdG8oc3RydWN0IHZj
CiAgICAgICAgIHdybXNyX3RzY19hdXgodi0+YXJjaC5tc3JzLT50c2NfYXV4
KTsKIH0KIAotc3RhdGljIHZvaWQgbm9yZXR1cm4gc3ZtX2RvX3Jlc3VtZShz
dHJ1Y3QgdmNwdSAqdikKK3N0YXRpYyB2b2lkIG5vcmV0dXJuIHN2bV9kb19y
ZXN1bWUodm9pZCkKIHsKKyAgICBzdHJ1Y3QgdmNwdSAqdiA9IGN1cnJlbnQ7
CiAgICAgc3RydWN0IHZtY2Jfc3RydWN0ICp2bWNiID0gdi0+YXJjaC5odm0u
c3ZtLnZtY2I7CiAgICAgYm9vbCBkZWJ1Z19zdGF0ZSA9ICh2LT5kb21haW4t
PmRlYnVnZ2VyX2F0dGFjaGVkIHx8CiAgICAgICAgICAgICAgICAgICAgICAg
ICB2LT5kb21haW4tPmFyY2gubW9uaXRvci5zb2Z0d2FyZV9icmVha3BvaW50
X2VuYWJsZWQgfHwKLS0tIGEveGVuL2FyY2gveDg2L2h2bS92bXgvdm1jcy5j
CisrKyBiL3hlbi9hcmNoL3g4Ni9odm0vdm14L3ZtY3MuYwpAQCAtMTg1MCw4
ICsxODUwLDkgQEAgdm9pZCB2bXhfdm1lbnRyeV9mYWlsdXJlKHZvaWQpCiAg
ICAgZG9tYWluX2NyYXNoKGN1cnItPmRvbWFpbik7CiB9CiAKLXZvaWQgdm14
X2RvX3Jlc3VtZShzdHJ1Y3QgdmNwdSAqdikKK3ZvaWQgdm14X2RvX3Jlc3Vt
ZSh2b2lkKQogeworICAgIHN0cnVjdCB2Y3B1ICp2ID0gY3VycmVudDsKICAg
ICBib29sX3QgZGVidWdfc3RhdGU7CiAgICAgdW5zaWduZWQgbG9uZyBob3N0
X2NyNDsKIAotLS0gYS94ZW4vYXJjaC94ODYvcHYvZG9tYWluLmMKKysrIGIv
eGVuL2FyY2gveDg2L3B2L2RvbWFpbi5jCkBAIC0xMTAsNyArMTEwLDcgQEAg
c3RhdGljIGludCBwYXJzZV9wY2lkKGNvbnN0IGNoYXIgKnMpCiAgICAgcmV0
dXJuIHJjOwogfQogCi1zdGF0aWMgdm9pZCBub3JldHVybiBjb250aW51ZV9u
b25pZGxlX2RvbWFpbihzdHJ1Y3QgdmNwdSAqdikKK3N0YXRpYyB2b2lkIG5v
cmV0dXJuIGNvbnRpbnVlX25vbmlkbGVfZG9tYWluKHZvaWQpCiB7CiAgICAg
Y2hlY2tfd2FrZXVwX2Zyb21fd2FpdCgpOwogICAgIHJlc2V0X3N0YWNrX2Fu
ZF9qdW1wKHJldF9mcm9tX2ludHIpOwotLS0gYS94ZW4vaW5jbHVkZS9hc20t
eDg2L2N1cnJlbnQuaAorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L2N1cnJl
bnQuaApAQCAtMTU1LDE4ICsxNTUsMTggQEAgdW5zaWduZWQgbG9uZyBnZXRf
c3RhY2tfZHVtcF9ib3R0b20gKHVucwogIyBkZWZpbmUgU0hBRE9XX1NUQUNL
X1dPUksgIiIKICNlbmRpZgogCi0jZGVmaW5lIHJlc2V0X3N0YWNrX2FuZF9q
dW1wKGZuKSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBcCisjZGVmaW5lIHN3aXRjaF9zdGFja19hbmRfanVtcChmbiwgaW5zdHIs
IGNvbnN0cikgICAgICAgICAgICAgICAgICAgICAgICBcCiAgICAgKHsgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICBcCiAgICAgICAgIHVuc2lnbmVkIGludCB0bXA7
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBcCiAgICAgICAgIF9fYXNtX18gX192b2xhdGlsZV9fICggICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCiAgICAgICAgICAg
ICBTSEFET1dfU1RBQ0tfV09SSyAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICBcCiAgICAgICAgICAgICAibW92ICVbc3RrXSwg
JSVyc3A7IiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBcCiAgICAgICAgICAgICBDSEVDS19GT1JfTElWRVBBVENIX1dPUksgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCi0gICAgICAgICAg
ICAiam1wICVjW2Z1bl07IiAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICBcCisgICAgICAgICAgICBpbnN0ciAiW2Z1bl0i
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBcCiAgICAgICAgICAgICA6IFt2YWxdICI9JnIiICh0bXApLCAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCiAgICAgICAgICAg
ICAgIFtzc3BdICI9JnIiICh0bXApICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICBcCiAgICAgICAgICAgICA6IFtzdGtdICJyIiAo
Z3Vlc3RfY3B1X3VzZXJfcmVncygpKSwgICAgICAgICAgICAgICAgICAgICAg
ICBcCi0gICAgICAgICAgICAgIFtmdW5dICJpIiAoZm4pLCAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgICAgICAg
ICAgIFtmdW5dIGNvbnN0ciAoZm4pLCAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICBcCiAgICAgICAgICAgICAgIFtza3N0a19iYXNl
XSAiaSIgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBcCiAgICAgICAgICAgICAgICgoUFJJTUFSWV9TSFNUS19TTE9UICsgMSkg
KiBQQUdFX1NJWkUgLSA4KSwgICAgICAgICAgICAgICBcCiAgICAgICAgICAg
ICAgIFtzdGFja19tYXNrXSAiaSIgKFNUQUNLX1NJWkUgLSAxKSwgICAgICAg
ICAgICAgICAgICAgICAgICBcCkBAIC0xNzYsNiArMTc2LDEzIEBAIHVuc2ln
bmVkIGxvbmcgZ2V0X3N0YWNrX2R1bXBfYm90dG9tICh1bnMKICAgICAgICAg
dW5yZWFjaGFibGUoKTsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgIFwKICAgICB9KQogCisjZGVmaW5lIHJlc2V0
X3N0YWNrX2FuZF9qdW1wKGZuKSAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBcCisgICAgc3dpdGNoX3N0YWNrX2FuZF9qdW1wKGZu
LCAiam1wICVjIiwgImkiKQorCisvKiBUaGUgY29uc3RyYWludCBtYXkgb25s
eSBzcGVjaWZ5IG5vbi1jYWxsLWNsb2JiZXJlZCByZWdpc3RlcnMuICovCisj
ZGVmaW5lIHJlc2V0X3N0YWNrX2FuZF9qdW1wX2luZChmbikgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgc3dpdGNoX3N0YWNr
X2FuZF9qdW1wKGZuLCAiSU5ESVJFQ1RfSk1QICUiLCAiYiIpCisKIC8qCiAg
KiBXaGljaCBWQ1BVJ3Mgc3RhdGUgaXMgY3VycmVudGx5IHJ1bm5pbmcgb24g
ZWFjaCBDUFU/CiAgKiBUaGlzIGlzIG5vdCBuZWNlc2FzcmlseSB0aGUgc2Ft
ZSBhcyAnY3VycmVudCcgYXMgYSBDUFUgbWF5IGJlCi0tLSBhL3hlbi9pbmNs
dWRlL2FzbS14ODYvZG9tYWluLmgKKysrIGIveGVuL2luY2x1ZGUvYXNtLXg4
Ni9kb21haW4uaApAQCAtMzM3LDcgKzMzNyw3IEBAIHN0cnVjdCBhcmNoX2Rv
bWFpbgogICAgIGNvbnN0IHN0cnVjdCBhcmNoX2NzdyB7CiAgICAgICAgIHZv
aWQgKCpmcm9tKShzdHJ1Y3QgdmNwdSAqKTsKICAgICAgICAgdm9pZCAoKnRv
KShzdHJ1Y3QgdmNwdSAqKTsKLSAgICAgICAgdm9pZCAoKnRhaWwpKHN0cnVj
dCB2Y3B1ICopOworICAgICAgICB2b2lkIG5vcmV0dXJuICgqdGFpbCkodm9p
ZCk7CiAgICAgfSAqY3R4dF9zd2l0Y2g7CiAKICNpZmRlZiBDT05GSUdfSFZN
Ci0tLSBhL3hlbi9pbmNsdWRlL2FzbS14ODYvaHZtL3ZteC92bXguaAorKysg
Yi94ZW4vaW5jbHVkZS9hc20teDg2L2h2bS92bXgvdm14LmgKQEAgLTk1LDcg
Kzk1LDcgQEAgdHlwZWRlZiBlbnVtIHsKIHZvaWQgdm14X2FzbV92bWV4aXRf
aGFuZGxlcihzdHJ1Y3QgY3B1X3VzZXJfcmVncyk7CiB2b2lkIHZteF9hc21f
ZG9fdm1lbnRyeSh2b2lkKTsKIHZvaWQgdm14X2ludHJfYXNzaXN0KHZvaWQp
Owotdm9pZCBub3JldHVybiB2bXhfZG9fcmVzdW1lKHN0cnVjdCB2Y3B1ICop
Owordm9pZCBub3JldHVybiB2bXhfZG9fcmVzdW1lKHZvaWQpOwogdm9pZCB2
bXhfdmxhcGljX21zcl9jaGFuZ2VkKHN0cnVjdCB2Y3B1ICp2KTsKIHN0cnVj
dCBodm1fZW11bGF0ZV9jdHh0Owogdm9pZCB2bXhfcmVhbG1vZGVfZW11bGF0
ZV9vbmUoc3RydWN0IGh2bV9lbXVsYXRlX2N0eHQgKmh2bWVtdWxfY3R4dCk7
Cg==

--=separator
Content-Type: application/octet-stream; name="xsa348-4.10.patch"
Content-Disposition: attachment; filename="xsa348-4.10.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODY6IGF2b2lkIGNhbGxpbmcge3N2bSx2bXh9X2RvX3Jlc3VtZSgpCgpU
aGVzZSBmdW5jdGlvbnMgZm9sbG93IHRoZSBmb2xsb3dpbmcgcGF0aDogaHZt
X2RvX3Jlc3VtZSgpIC0+CmhhbmRsZV9odm1faW9fY29tcGxldGlvbigpIC0+
IGh2bV93YWl0X2Zvcl9pbygpIC0+CndhaXRfb25feGVuX2V2ZW50X2NoYW5u
ZWwoKSAtPiBkb19zb2Z0aXJxKCkgLT4gc2NoZWR1bGUoKSAtPgpzY2hlZF9j
b250ZXh0X3N3aXRjaCgpIC0+IGNvbnRpbnVlX3J1bm5pbmcoKSBhbmQgaGVu
Y2UgbWF5CnJlY3Vyc2l2ZWx5IGludm9rZSB0aGVtc2VsdmVzLiBJZiB0aGlz
IGVuZHMgdXAgaGFwcGVuaW5nIGEgY291cGxlIG9mCnRpbWVzLCBhIHN0YWNr
IG92ZXJmbG93IHdvdWxkIHJlc3VsdC4KClByZXZlbnQgdGhpcyBieSBhbHNv
IHJlc2V0dGluZyB0aGUgc3RhY2sgYXQgdGhlCi0+YXJjaC5jdHh0X3N3aXRj
aC0+dGFpbCgpIGludm9jYXRpb25zIChpbiBib3RoIHBsYWNlcyBmb3IgY29u
c2lzdGVuY3kpCmFuZCB0aHVzIGp1bXBpbmcgdG8gdGhlIGZ1bmN0aW9ucyBp
bnN0ZWFkIG9mIGNhbGxpbmcgdGhlbS4KClRoaXMgaXMgWFNBLTM0OCAvIENW
RS0yMDIwLTI5NTY2LgoKUmVwb3J0ZWQtYnk6IEp1bGllbiBHcmFsbCA8amdy
YWxsQGFtYXpvbi5jb20+ClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxq
YmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IEp1ZXJnZW4gR3Jvc3Mg
PGpncm9zc0BzdXNlLmNvbT4KCi0tLSBzbGUxNS5vcmlnL3hlbi9hcmNoL3g4
Ni9kb21haW4uYwkyMDIwLTEwLTE1IDE3OjM1OjE3LjAwMDAwMDAwMCArMDIw
MAorKysgc2xlMTUveGVuL2FyY2gveDg2L2RvbWFpbi5jCTIwMjAtMTEtMTAg
MTc6NTY6NTkuMDAwMDAwMDAwICswMTAwCkBAIC0xMjEsNyArMTIxLDcgQEAg
c3RhdGljIHZvaWQgcGxheV9kZWFkKHZvaWQpCiAgICAgKCpkZWFkX2lkbGUp
KCk7CiB9CiAKLXN0YXRpYyB2b2lkIGlkbGVfbG9vcCh2b2lkKQorc3RhdGlj
IHZvaWQgbm9yZXR1cm4gaWRsZV9sb29wKHZvaWQpCiB7CiAgICAgdW5zaWdu
ZWQgaW50IGNwdSA9IHNtcF9wcm9jZXNzb3JfaWQoKTsKIApAQCAtMTYxLDEx
ICsxNjEsNiBAQCB2b2lkIHN0YXJ0dXBfY3B1X2lkbGVfbG9vcCh2b2lkKQog
ICAgIHJlc2V0X3N0YWNrX2FuZF9qdW1wKGlkbGVfbG9vcCk7CiB9CiAKLXN0
YXRpYyB2b2lkIG5vcmV0dXJuIGNvbnRpbnVlX2lkbGVfZG9tYWluKHN0cnVj
dCB2Y3B1ICp2KQotewotICAgIHJlc2V0X3N0YWNrX2FuZF9qdW1wKGlkbGVf
bG9vcCk7Ci19Ci0KIHZvaWQgZHVtcF9wYWdlZnJhbWVfaW5mbyhzdHJ1Y3Qg
ZG9tYWluICpkKQogewogICAgIHN0cnVjdCBwYWdlX2luZm8gKnBhZ2U7CkBA
IC01NjAsNyArNTU1LDcgQEAgaW50IGFyY2hfZG9tYWluX2NyZWF0ZShzdHJ1
Y3QgZG9tYWluICpkLAogICAgICAgICBzdGF0aWMgY29uc3Qgc3RydWN0IGFy
Y2hfY3N3IGlkbGVfY3N3ID0gewogICAgICAgICAgICAgLmZyb20gPSBwYXJh
dmlydF9jdHh0X3N3aXRjaF9mcm9tLAogICAgICAgICAgICAgLnRvICAgPSBw
YXJhdmlydF9jdHh0X3N3aXRjaF90bywKLSAgICAgICAgICAgIC50YWlsID0g
Y29udGludWVfaWRsZV9kb21haW4sCisgICAgICAgICAgICAudGFpbCA9IGlk
bGVfbG9vcCwKICAgICAgICAgfTsKIAogICAgICAgICBkLT5hcmNoLmN0eHRf
c3dpdGNoID0gJmlkbGVfY3N3OwpAQCAtMTc3NCwyMCArMTc2OSwxMiBAQCB2
b2lkIGNvbnRleHRfc3dpdGNoKHN0cnVjdCB2Y3B1ICpwcmV2LCBzCiAgICAg
LyogRW5zdXJlIHRoYXQgdGhlIHZjcHUgaGFzIGFuIHVwLXRvLWRhdGUgdGlt
ZSBiYXNlLiAqLwogICAgIHVwZGF0ZV92Y3B1X3N5c3RlbV90aW1lKG5leHQp
OwogCi0gICAgLyoKLSAgICAgKiBTY2hlZHVsZSB0YWlsICpzaG91bGQqIGJl
IGEgdGVybWluYWwgZnVuY3Rpb24gcG9pbnRlciwgYnV0IGxlYXZlIGEKLSAg
ICAgKiBidWcgZnJhbWUgYXJvdW5kIGp1c3QgaW4gY2FzZSBpdCByZXR1cm5z
LCB0byBzYXZlIGdvaW5nIGJhY2sgaW50byB0aGUKLSAgICAgKiBjb250ZXh0
IHN3aXRjaGluZyBjb2RlIGFuZCBsZWF2aW5nIGEgZmFyIG1vcmUgc3VidGxl
IGNyYXNoIHRvIGRpYWdub3NlLgotICAgICAqLwotICAgIG5leHRkLT5hcmNo
LmN0eHRfc3dpdGNoLT50YWlsKG5leHQpOwotICAgIEJVRygpOworICAgIHJl
c2V0X3N0YWNrX2FuZF9qdW1wX2luZChuZXh0ZC0+YXJjaC5jdHh0X3N3aXRj
aC0+dGFpbCk7CiB9CiAKIHZvaWQgY29udGludWVfcnVubmluZyhzdHJ1Y3Qg
dmNwdSAqc2FtZSkKIHsKLSAgICAvKiBTZWUgdGhlIGNvbW1lbnQgYWJvdmUu
ICovCi0gICAgc2FtZS0+ZG9tYWluLT5hcmNoLmN0eHRfc3dpdGNoLT50YWls
KHNhbWUpOwotICAgIEJVRygpOworICAgIHJlc2V0X3N0YWNrX2FuZF9qdW1w
X2luZChzYW1lLT5kb21haW4tPmFyY2guY3R4dF9zd2l0Y2gtPnRhaWwpOwog
fQogCiBpbnQgX19zeW5jX2xvY2FsX2V4ZWNzdGF0ZSh2b2lkKQotLS0gc2xl
MTUub3JpZy94ZW4vYXJjaC94ODYvaHZtL3N2bS9zdm0uYwkyMDE5LTAyLTE1
IDIzOjQwOjMxLjAwMDAwMDAwMCArMDEwMAorKysgc2xlMTUveGVuL2FyY2gv
eDg2L2h2bS9zdm0vc3ZtLmMJMjAyMC0xMS0xMCAxNzo1Njo1OS4wMDAwMDAw
MDAgKzAxMDAKQEAgLTEwODYsOCArMTA4Niw5IEBAIHN0YXRpYyB2b2lkIHN2
bV9jdHh0X3N3aXRjaF90byhzdHJ1Y3QgdmMKICAgICAgICAgd3Jtc3JfdHNj
X2F1eChodm1fbXNyX3RzY19hdXgodikpOwogfQogCi1zdGF0aWMgdm9pZCBu
b3JldHVybiBzdm1fZG9fcmVzdW1lKHN0cnVjdCB2Y3B1ICp2KQorc3RhdGlj
IHZvaWQgbm9yZXR1cm4gc3ZtX2RvX3Jlc3VtZSh2b2lkKQogeworICAgIHN0
cnVjdCB2Y3B1ICp2ID0gY3VycmVudDsKICAgICBzdHJ1Y3Qgdm1jYl9zdHJ1
Y3QgKnZtY2IgPSB2LT5hcmNoLmh2bV9zdm0udm1jYjsKICAgICBib29sX3Qg
ZGVidWdfc3RhdGUgPSB2LT5kb21haW4tPmRlYnVnZ2VyX2F0dGFjaGVkOwog
ICAgIGJvb2xfdCB2Y3B1X2d1ZXN0bW9kZSA9IDA7Ci0tLSBzbGUxNS5vcmln
L3hlbi9hcmNoL3g4Ni9odm0vdm14L3ZtY3MuYwkyMDIwLTA1LTE4IDAwOjAw
OjAwLjAwMDAwMDAwMCArMDIwMAorKysgc2xlMTUveGVuL2FyY2gveDg2L2h2
bS92bXgvdm1jcy5jCTIwMjAtMTEtMTAgMTc6NTY6NTkuMDAwMDAwMDAwICsw
MTAwCkBAIC0xNzg1LDggKzE3ODUsOSBAQCB2b2lkIHZteF92bWVudHJ5X2Zh
aWx1cmUodm9pZCkKICAgICBkb21haW5fY3Jhc2hfc3luY2hyb25vdXMoKTsK
IH0KIAotdm9pZCB2bXhfZG9fcmVzdW1lKHN0cnVjdCB2Y3B1ICp2KQordm9p
ZCB2bXhfZG9fcmVzdW1lKHZvaWQpCiB7CisgICAgc3RydWN0IHZjcHUgKnYg
PSBjdXJyZW50OwogICAgIGJvb2xfdCBkZWJ1Z19zdGF0ZTsKIAogICAgIGlm
ICggdi0+YXJjaC5odm1fdm14LmFjdGl2ZV9jcHUgPT0gc21wX3Byb2Nlc3Nv
cl9pZCgpICkKLS0tIHNsZTE1Lm9yaWcveGVuL2FyY2gveDg2L3B2L2RvbWFp
bi5jCTIwMTktMDctMDUgMTg6Mjc6MzEuMDAwMDAwMDAwICswMjAwCisrKyBz
bGUxNS94ZW4vYXJjaC94ODYvcHYvZG9tYWluLmMJMjAyMC0xMS0xMCAxNzo1
Njo1OS4wMDAwMDAwMDAgKzAxMDAKQEAgLTY0LDcgKzY0LDcgQEAgY3VzdG9t
X3J1bnRpbWVfcGFyYW0oInBjaWQiLCBwYXJzZV9wY2lkKQogI3VuZGVmIHBh
Z2VfdG9fbWZuCiAjZGVmaW5lIHBhZ2VfdG9fbWZuKHBnKSBfbWZuKF9fcGFn
ZV90b19tZm4ocGcpKQogCi1zdGF0aWMgdm9pZCBub3JldHVybiBjb250aW51
ZV9ub25pZGxlX2RvbWFpbihzdHJ1Y3QgdmNwdSAqdikKK3N0YXRpYyB2b2lk
IG5vcmV0dXJuIGNvbnRpbnVlX25vbmlkbGVfZG9tYWluKHZvaWQpCiB7CiAg
ICAgY2hlY2tfd2FrZXVwX2Zyb21fd2FpdCgpOwogICAgIHJlc2V0X3N0YWNr
X2FuZF9qdW1wKHJldF9mcm9tX2ludHIpOwotLS0gc2xlMTUub3JpZy94ZW4v
aW5jbHVkZS9hc20teDg2L2N1cnJlbnQuaAkyMDE5LTA3LTA1IDE4OjI3OjMx
LjAwMDAwMDAwMCArMDIwMAorKysgc2xlMTUveGVuL2luY2x1ZGUvYXNtLXg4
Ni9jdXJyZW50LmgJMjAyMC0xMS0xMCAxNzo1Njo1OS4wMDAwMDAwMDAgKzAx
MDAKQEAgLTEyNCwxNiArMTI0LDIzIEBAIHVuc2lnbmVkIGxvbmcgZ2V0X3N0
YWNrX2R1bXBfYm90dG9tICh1bnMKICMgZGVmaW5lIENIRUNLX0ZPUl9MSVZF
UEFUQ0hfV09SSyAiIgogI2VuZGlmCiAKLSNkZWZpbmUgcmVzZXRfc3RhY2tf
YW5kX2p1bXAoX19mbikgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIFwKKyNkZWZpbmUgc3dpdGNoX3N0YWNrX2FuZF9qdW1wKGZuLCBp
bnN0ciwgY29uc3RyKSAgICAgICAgICAgICAgICAgICAgICAgIFwKICAgICAo
eyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIFwKICAgICAgICAgX19hc21fXyBfX3Zv
bGF0aWxlX18gKCAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIFwKICAgICAgICAgICAgICJtb3YgJTAsJSUiX19PUCJzcDsiICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKLSAgICAg
ICAgICAgIENIRUNLX0ZPUl9MSVZFUEFUQ0hfV09SSyAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgXAotICAgICAgICAgICAgICJqbXAg
JWMxIiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgXAotICAgICAgICAgICAgOiA6ICJyIiAoZ3Vlc3RfY3B1X3Vz
ZXJfcmVncygpKSwgImkiIChfX2ZuKSA6ICJtZW1vcnkiICk7ICAgXAorICAg
ICAgICAgICAgQ0hFQ0tfRk9SX0xJVkVQQVRDSF9XT1JLICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgXAorICAgICAgICAgICAgaW5zdHIg
IjEiICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgXAorICAgICAgICAgICAgOiA6ICJyIiAoZ3Vlc3RfY3B1X3Vz
ZXJfcmVncygpKSwgY29uc3RyIChmbikgOiAibWVtb3J5IiApOyAgXAogICAg
ICAgICB1bnJlYWNoYWJsZSgpOyAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgXAogICAgIH0pCiAKKyNkZWZpbmUg
cmVzZXRfc3RhY2tfYW5kX2p1bXAoZm4pICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgIFwKKyAgICBzd2l0Y2hfc3RhY2tfYW5kX2p1
bXAoZm4sICJqbXAgJWMiLCAiaSIpCisKKy8qIFRoZSBjb25zdHJhaW50IG1h
eSBvbmx5IHNwZWNpZnkgbm9uLWNhbGwtY2xvYmJlcmVkIHJlZ2lzdGVycy4g
Ki8KKyNkZWZpbmUgcmVzZXRfc3RhY2tfYW5kX2p1bXBfaW5kKGZuKSAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICBzd2l0Y2hf
c3RhY2tfYW5kX2p1bXAoZm4sICJJTkRJUkVDVF9KTVAgJSIsICJiIikKKwog
LyoKICAqIFdoaWNoIFZDUFUncyBzdGF0ZSBpcyBjdXJyZW50bHkgcnVubmlu
ZyBvbiBlYWNoIENQVT8KICAqIFRoaXMgaXMgbm90IG5lY2VzYXNyaWx5IHRo
ZSBzYW1lIGFzICdjdXJyZW50JyBhcyBhIENQVSBtYXkgYmUKLS0tIHNsZTE1
Lm9yaWcveGVuL2luY2x1ZGUvYXNtLXg4Ni9kb21haW4uaAkyMDE5LTExLTI2
IDE0OjU1OjE1LjgxNzU0Nzc2MCArMDEwMAorKysgc2xlMTUveGVuL2luY2x1
ZGUvYXNtLXg4Ni9kb21haW4uaAkyMDIwLTExLTEwIDE3OjU2OjU5LjAwMDAw
MDAwMCArMDEwMApAQCAtMzI4LDcgKzMyOCw3IEBAIHN0cnVjdCBhcmNoX2Rv
bWFpbgogICAgIGNvbnN0IHN0cnVjdCBhcmNoX2NzdyB7CiAgICAgICAgIHZv
aWQgKCpmcm9tKShzdHJ1Y3QgdmNwdSAqKTsKICAgICAgICAgdm9pZCAoKnRv
KShzdHJ1Y3QgdmNwdSAqKTsKLSAgICAgICAgdm9pZCAoKnRhaWwpKHN0cnVj
dCB2Y3B1ICopOworICAgICAgICB2b2lkIG5vcmV0dXJuICgqdGFpbCkodm9p
ZCk7CiAgICAgfSAqY3R4dF9zd2l0Y2g7CiAKICAgICAvKiBuZXN0ZWRodm06
IHRyYW5zbGF0ZSBsMiBndWVzdCBwaHlzaWNhbCB0byBob3N0IHBoeXNpY2Fs
ICovCi0tLSBzbGUxNS5vcmlnL3hlbi9pbmNsdWRlL2FzbS14ODYvaHZtL3Zt
eC92bXguaAkyMDIwLTA1LTE4IDAwOjAwOjAwLjAwMDAwMDAwMCArMDIwMAor
Kysgc2xlMTUveGVuL2luY2x1ZGUvYXNtLXg4Ni9odm0vdm14L3ZteC5oCTIw
MjAtMTEtMTAgMTc6NTY6NTkuMDAwMDAwMDAwICswMTAwCkBAIC05NSw3ICs5
NSw3IEBAIHR5cGVkZWYgZW51bSB7CiB2b2lkIHZteF9hc21fdm1leGl0X2hh
bmRsZXIoc3RydWN0IGNwdV91c2VyX3JlZ3MpOwogdm9pZCB2bXhfYXNtX2Rv
X3ZtZW50cnkodm9pZCk7CiB2b2lkIHZteF9pbnRyX2Fzc2lzdCh2b2lkKTsK
LXZvaWQgbm9yZXR1cm4gdm14X2RvX3Jlc3VtZShzdHJ1Y3QgdmNwdSAqKTsK
K3ZvaWQgbm9yZXR1cm4gdm14X2RvX3Jlc3VtZSh2b2lkKTsKIHZvaWQgdm14
X3ZsYXBpY19tc3JfY2hhbmdlZChzdHJ1Y3QgdmNwdSAqdik7CiB2b2lkIHZt
eF9yZWFsbW9kZV9lbXVsYXRlX29uZShzdHJ1Y3QgaHZtX2VtdWxhdGVfY3R4
dCAqaHZtZW11bF9jdHh0KTsKIHZvaWQgdm14X3JlYWxtb2RlKHN0cnVjdCBj
cHVfdXNlcl9yZWdzICpyZWdzKTsK

--=separator
Content-Type: application/octet-stream; name="xsa348-4.11.patch"
Content-Disposition: attachment; filename="xsa348-4.11.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODY6IGF2b2lkIGNhbGxpbmcge3N2bSx2bXh9X2RvX3Jlc3VtZSgpCgpU
aGVzZSBmdW5jdGlvbnMgZm9sbG93IHRoZSBmb2xsb3dpbmcgcGF0aDogaHZt
X2RvX3Jlc3VtZSgpIC0+CmhhbmRsZV9odm1faW9fY29tcGxldGlvbigpIC0+
IGh2bV93YWl0X2Zvcl9pbygpIC0+CndhaXRfb25feGVuX2V2ZW50X2NoYW5u
ZWwoKSAtPiBkb19zb2Z0aXJxKCkgLT4gc2NoZWR1bGUoKSAtPgpzY2hlZF9j
b250ZXh0X3N3aXRjaCgpIC0+IGNvbnRpbnVlX3J1bm5pbmcoKSBhbmQgaGVu
Y2UgbWF5CnJlY3Vyc2l2ZWx5IGludm9rZSB0aGVtc2VsdmVzLiBJZiB0aGlz
IGVuZHMgdXAgaGFwcGVuaW5nIGEgY291cGxlIG9mCnRpbWVzLCBhIHN0YWNr
IG92ZXJmbG93IHdvdWxkIHJlc3VsdC4KClByZXZlbnQgdGhpcyBieSBhbHNv
IHJlc2V0dGluZyB0aGUgc3RhY2sgYXQgdGhlCi0+YXJjaC5jdHh0X3N3aXRj
aC0+dGFpbCgpIGludm9jYXRpb25zIChpbiBib3RoIHBsYWNlcyBmb3IgY29u
c2lzdGVuY3kpCmFuZCB0aHVzIGp1bXBpbmcgdG8gdGhlIGZ1bmN0aW9ucyBp
bnN0ZWFkIG9mIGNhbGxpbmcgdGhlbS4KClRoaXMgaXMgWFNBLTM0OCAvIENW
RS0yMDIwLTI5NTY2LgoKUmVwb3J0ZWQtYnk6IEp1bGllbiBHcmFsbCA8amdy
YWxsQGFtYXpvbi5jb20+ClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxq
YmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IEp1ZXJnZW4gR3Jvc3Mg
PGpncm9zc0BzdXNlLmNvbT4KCi0tLSBzbGUxMnNwNC5vcmlnL3hlbi9hcmNo
L3g4Ni9kb21haW4uYwkyMDIwLTEwLTE1IDE3OjM1OjE3LjAwMDAwMDAwMCAr
MDIwMAorKysgc2xlMTJzcDQveGVuL2FyY2gveDg2L2RvbWFpbi5jCTIwMjAt
MTEtMTAgMTc6NTY6NTkuMDAwMDAwMDAwICswMTAwCkBAIC0xMjEsNyArMTIx
LDcgQEAgc3RhdGljIHZvaWQgcGxheV9kZWFkKHZvaWQpCiAgICAgKCpkZWFk
X2lkbGUpKCk7CiB9CiAKLXN0YXRpYyB2b2lkIGlkbGVfbG9vcCh2b2lkKQor
c3RhdGljIHZvaWQgbm9yZXR1cm4gaWRsZV9sb29wKHZvaWQpCiB7CiAgICAg
dW5zaWduZWQgaW50IGNwdSA9IHNtcF9wcm9jZXNzb3JfaWQoKTsKIApAQCAt
MTYxLDExICsxNjEsNiBAQCB2b2lkIHN0YXJ0dXBfY3B1X2lkbGVfbG9vcCh2
b2lkKQogICAgIHJlc2V0X3N0YWNrX2FuZF9qdW1wKGlkbGVfbG9vcCk7CiB9
CiAKLXN0YXRpYyB2b2lkIG5vcmV0dXJuIGNvbnRpbnVlX2lkbGVfZG9tYWlu
KHN0cnVjdCB2Y3B1ICp2KQotewotICAgIHJlc2V0X3N0YWNrX2FuZF9qdW1w
KGlkbGVfbG9vcCk7Ci19Ci0KIHZvaWQgZHVtcF9wYWdlZnJhbWVfaW5mbyhz
dHJ1Y3QgZG9tYWluICpkKQogewogICAgIHN0cnVjdCBwYWdlX2luZm8gKnBh
Z2U7CkBAIC00NTYsNyArNDUxLDcgQEAgaW50IGFyY2hfZG9tYWluX2NyZWF0
ZShzdHJ1Y3QgZG9tYWluICpkLAogICAgICAgICBzdGF0aWMgY29uc3Qgc3Ry
dWN0IGFyY2hfY3N3IGlkbGVfY3N3ID0gewogICAgICAgICAgICAgLmZyb20g
PSBwYXJhdmlydF9jdHh0X3N3aXRjaF9mcm9tLAogICAgICAgICAgICAgLnRv
ICAgPSBwYXJhdmlydF9jdHh0X3N3aXRjaF90bywKLSAgICAgICAgICAgIC50
YWlsID0gY29udGludWVfaWRsZV9kb21haW4sCisgICAgICAgICAgICAudGFp
bCA9IGlkbGVfbG9vcCwKICAgICAgICAgfTsKIAogICAgICAgICBkLT5hcmNo
LmN0eHRfc3dpdGNoID0gJmlkbGVfY3N3OwpAQCAtMTc3MCwyMCArMTc2NSwx
MiBAQCB2b2lkIGNvbnRleHRfc3dpdGNoKHN0cnVjdCB2Y3B1ICpwcmV2LCBz
CiAgICAgLyogRW5zdXJlIHRoYXQgdGhlIHZjcHUgaGFzIGFuIHVwLXRvLWRh
dGUgdGltZSBiYXNlLiAqLwogICAgIHVwZGF0ZV92Y3B1X3N5c3RlbV90aW1l
KG5leHQpOwogCi0gICAgLyoKLSAgICAgKiBTY2hlZHVsZSB0YWlsICpzaG91
bGQqIGJlIGEgdGVybWluYWwgZnVuY3Rpb24gcG9pbnRlciwgYnV0IGxlYXZl
IGEKLSAgICAgKiBidWcgZnJhbWUgYXJvdW5kIGp1c3QgaW4gY2FzZSBpdCBy
ZXR1cm5zLCB0byBzYXZlIGdvaW5nIGJhY2sgaW50byB0aGUKLSAgICAgKiBj
b250ZXh0IHN3aXRjaGluZyBjb2RlIGFuZCBsZWF2aW5nIGEgZmFyIG1vcmUg
c3VidGxlIGNyYXNoIHRvIGRpYWdub3NlLgotICAgICAqLwotICAgIG5leHRk
LT5hcmNoLmN0eHRfc3dpdGNoLT50YWlsKG5leHQpOwotICAgIEJVRygpOwor
ICAgIHJlc2V0X3N0YWNrX2FuZF9qdW1wX2luZChuZXh0ZC0+YXJjaC5jdHh0
X3N3aXRjaC0+dGFpbCk7CiB9CiAKIHZvaWQgY29udGludWVfcnVubmluZyhz
dHJ1Y3QgdmNwdSAqc2FtZSkKIHsKLSAgICAvKiBTZWUgdGhlIGNvbW1lbnQg
YWJvdmUuICovCi0gICAgc2FtZS0+ZG9tYWluLT5hcmNoLmN0eHRfc3dpdGNo
LT50YWlsKHNhbWUpOwotICAgIEJVRygpOworICAgIHJlc2V0X3N0YWNrX2Fu
ZF9qdW1wX2luZChzYW1lLT5kb21haW4tPmFyY2guY3R4dF9zd2l0Y2gtPnRh
aWwpOwogfQogCiBpbnQgX19zeW5jX2xvY2FsX2V4ZWNzdGF0ZSh2b2lkKQot
LS0gc2xlMTJzcDQub3JpZy94ZW4vYXJjaC94ODYvaHZtL3N2bS9zdm0uYwky
MDIwLTA2LTE4IDE1OjEzOjEzLjAwMTc2MDA5NSArMDIwMAorKysgc2xlMTJz
cDQveGVuL2FyY2gveDg2L2h2bS9zdm0vc3ZtLmMJMjAyMC0xMS0xMCAxNzo1
Njo1OS4wMDAwMDAwMDAgKzAxMDAKQEAgLTExMTEsOCArMTExMSw5IEBAIHN0
YXRpYyB2b2lkIHN2bV9jdHh0X3N3aXRjaF90byhzdHJ1Y3QgdmMKICAgICAg
ICAgd3Jtc3JfdHNjX2F1eChodm1fbXNyX3RzY19hdXgodikpOwogfQogCi1z
dGF0aWMgdm9pZCBub3JldHVybiBzdm1fZG9fcmVzdW1lKHN0cnVjdCB2Y3B1
ICp2KQorc3RhdGljIHZvaWQgbm9yZXR1cm4gc3ZtX2RvX3Jlc3VtZSh2b2lk
KQogeworICAgIHN0cnVjdCB2Y3B1ICp2ID0gY3VycmVudDsKICAgICBzdHJ1
Y3Qgdm1jYl9zdHJ1Y3QgKnZtY2IgPSB2LT5hcmNoLmh2bV9zdm0udm1jYjsK
ICAgICBib29sIGRlYnVnX3N0YXRlID0gKHYtPmRvbWFpbi0+ZGVidWdnZXJf
YXR0YWNoZWQgfHwKICAgICAgICAgICAgICAgICAgICAgICAgIHYtPmRvbWFp
bi0+YXJjaC5tb25pdG9yLnNvZnR3YXJlX2JyZWFrcG9pbnRfZW5hYmxlZCB8
fAotLS0gc2xlMTJzcDQub3JpZy94ZW4vYXJjaC94ODYvaHZtL3ZteC92bWNz
LmMJMjAxOS0xMi0wMyAxNzo0NjoyNi4wMDAwMDAwMDAgKzAxMDAKKysrIHNs
ZTEyc3A0L3hlbi9hcmNoL3g4Ni9odm0vdm14L3ZtY3MuYwkyMDIwLTExLTEw
IDE3OjU2OjU5LjAwMDAwMDAwMCArMDEwMApAQCAtMTc4Miw4ICsxNzgyLDkg
QEAgdm9pZCB2bXhfdm1lbnRyeV9mYWlsdXJlKHZvaWQpCiAgICAgZG9tYWlu
X2NyYXNoX3N5bmNocm9ub3VzKCk7CiB9CiAKLXZvaWQgdm14X2RvX3Jlc3Vt
ZShzdHJ1Y3QgdmNwdSAqdikKK3ZvaWQgdm14X2RvX3Jlc3VtZSh2b2lkKQog
eworICAgIHN0cnVjdCB2Y3B1ICp2ID0gY3VycmVudDsKICAgICBib29sX3Qg
ZGVidWdfc3RhdGU7CiAgICAgdW5zaWduZWQgbG9uZyBob3N0X2NyNDsKIAot
LS0gc2xlMTJzcDQub3JpZy94ZW4vYXJjaC94ODYvcHYvZG9tYWluLmMJMjAx
OS0wNi0yNSAyMzo0NzoxMS4wMDAwMDAwMDAgKzAyMDAKKysrIHNsZTEyc3A0
L3hlbi9hcmNoL3g4Ni9wdi9kb21haW4uYwkyMDIwLTExLTEwIDE3OjU2OjU5
LjAwMDAwMDAwMCArMDEwMApAQCAtNTgsNyArNTgsNyBAQCBzdGF0aWMgaW50
IHBhcnNlX3BjaWQoY29uc3QgY2hhciAqcykKIH0KIGN1c3RvbV9ydW50aW1l
X3BhcmFtKCJwY2lkIiwgcGFyc2VfcGNpZCk7CiAKLXN0YXRpYyB2b2lkIG5v
cmV0dXJuIGNvbnRpbnVlX25vbmlkbGVfZG9tYWluKHN0cnVjdCB2Y3B1ICp2
KQorc3RhdGljIHZvaWQgbm9yZXR1cm4gY29udGludWVfbm9uaWRsZV9kb21h
aW4odm9pZCkKIHsKICAgICBjaGVja193YWtldXBfZnJvbV93YWl0KCk7CiAg
ICAgcmVzZXRfc3RhY2tfYW5kX2p1bXAocmV0X2Zyb21faW50cik7Ci0tLSBz
bGUxMnNwNC5vcmlnL3hlbi9pbmNsdWRlL2FzbS14ODYvY3VycmVudC5oCTIw
MTktMDYtMjUgMjM6NDc6MTEuMDAwMDAwMDAwICswMjAwCisrKyBzbGUxMnNw
NC94ZW4vaW5jbHVkZS9hc20teDg2L2N1cnJlbnQuaAkyMDIwLTExLTEwIDE3
OjU2OjU5LjAwMDAwMDAwMCArMDEwMApAQCAtMTI0LDE2ICsxMjQsMjMgQEAg
dW5zaWduZWQgbG9uZyBnZXRfc3RhY2tfZHVtcF9ib3R0b20gKHVucwogIyBk
ZWZpbmUgQ0hFQ0tfRk9SX0xJVkVQQVRDSF9XT1JLICIiCiAjZW5kaWYKIAot
I2RlZmluZSByZXNldF9zdGFja19hbmRfanVtcChfX2ZuKSAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgXAorI2RlZmluZSBzd2l0Y2hf
c3RhY2tfYW5kX2p1bXAoZm4sIGluc3RyLCBjb25zdHIpICAgICAgICAgICAg
ICAgICAgICAgICAgXAogICAgICh7ICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAog
ICAgICAgICBfX2FzbV9fIF9fdm9sYXRpbGVfXyAoICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgXAogICAgICAgICAgICAgIm1v
diAlMCwlJSJfX09QInNwOyIgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgXAotICAgICAgICAgICAgQ0hFQ0tfRk9SX0xJVkVQQVRD
SF9XT1JLICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBc
Ci0gICAgICAgICAgICAgImptcCAlYzEiICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCi0gICAgICAgICAgICA6
IDogInIiIChndWVzdF9jcHVfdXNlcl9yZWdzKCkpLCAiaSIgKF9fZm4pIDog
Im1lbW9yeSIgKTsgICBcCisgICAgICAgICAgICBDSEVDS19GT1JfTElWRVBB
VENIX1dPUksgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBc
CisgICAgICAgICAgICBpbnN0ciAiMSIgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgICAgICAgICA6
IDogInIiIChndWVzdF9jcHVfdXNlcl9yZWdzKCkpLCBjb25zdHIgKGZuKSA6
ICJtZW1vcnkiICk7ICBcCiAgICAgICAgIHVucmVhY2hhYmxlKCk7ICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBc
CiAgICAgfSkKIAorI2RlZmluZSByZXNldF9zdGFja19hbmRfanVtcChmbikg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAorICAg
IHN3aXRjaF9zdGFja19hbmRfanVtcChmbiwgImptcCAlYyIsICJpIikKKwor
LyogVGhlIGNvbnN0cmFpbnQgbWF5IG9ubHkgc3BlY2lmeSBub24tY2FsbC1j
bG9iYmVyZWQgcmVnaXN0ZXJzLiAqLworI2RlZmluZSByZXNldF9zdGFja19h
bmRfanVtcF9pbmQoZm4pICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgXAorICAgIHN3aXRjaF9zdGFja19hbmRfanVtcChmbiwgIklORElS
RUNUX0pNUCAlIiwgImIiKQorCiAvKgogICogV2hpY2ggVkNQVSdzIHN0YXRl
IGlzIGN1cnJlbnRseSBydW5uaW5nIG9uIGVhY2ggQ1BVPwogICogVGhpcyBp
cyBub3QgbmVjZXNhc3JpbHkgdGhlIHNhbWUgYXMgJ2N1cnJlbnQnIGFzIGEg
Q1BVIG1heSBiZQotLS0gc2xlMTJzcDQub3JpZy94ZW4vaW5jbHVkZS9hc20t
eDg2L2RvbWFpbi5oCTIwMTktMTItMDMgMTc6NDY6MjYuMDAwMDAwMDAwICsw
MTAwCisrKyBzbGUxMnNwNC94ZW4vaW5jbHVkZS9hc20teDg2L2RvbWFpbi5o
CTIwMjAtMTEtMTAgMTc6NTY6NTkuMDAwMDAwMDAwICswMTAwCkBAIC0zMjgs
NyArMzI4LDcgQEAgc3RydWN0IGFyY2hfZG9tYWluCiAgICAgY29uc3Qgc3Ry
dWN0IGFyY2hfY3N3IHsKICAgICAgICAgdm9pZCAoKmZyb20pKHN0cnVjdCB2
Y3B1ICopOwogICAgICAgICB2b2lkICgqdG8pKHN0cnVjdCB2Y3B1ICopOwot
ICAgICAgICB2b2lkICgqdGFpbCkoc3RydWN0IHZjcHUgKik7CisgICAgICAg
IHZvaWQgbm9yZXR1cm4gKCp0YWlsKSh2b2lkKTsKICAgICB9ICpjdHh0X3N3
aXRjaDsKIAogICAgIC8qIG5lc3RlZGh2bTogdHJhbnNsYXRlIGwyIGd1ZXN0
IHBoeXNpY2FsIHRvIGhvc3QgcGh5c2ljYWwgKi8KLS0tIHNsZTEyc3A0Lm9y
aWcveGVuL2luY2x1ZGUvYXNtLXg4Ni9odm0vdm14L3ZteC5oCTIwMTktMTIt
MDMgMTc6NDY6MjYuMDAwMDAwMDAwICswMTAwCisrKyBzbGUxMnNwNC94ZW4v
aW5jbHVkZS9hc20teDg2L2h2bS92bXgvdm14LmgJMjAyMC0xMS0xMCAxNzo1
Njo1OS4wMDAwMDAwMDAgKzAxMDAKQEAgLTk1LDcgKzk1LDcgQEAgdHlwZWRl
ZiBlbnVtIHsKIHZvaWQgdm14X2FzbV92bWV4aXRfaGFuZGxlcihzdHJ1Y3Qg
Y3B1X3VzZXJfcmVncyk7CiB2b2lkIHZteF9hc21fZG9fdm1lbnRyeSh2b2lk
KTsKIHZvaWQgdm14X2ludHJfYXNzaXN0KHZvaWQpOwotdm9pZCBub3JldHVy
biB2bXhfZG9fcmVzdW1lKHN0cnVjdCB2Y3B1ICopOwordm9pZCBub3JldHVy
biB2bXhfZG9fcmVzdW1lKHZvaWQpOwogdm9pZCB2bXhfdmxhcGljX21zcl9j
aGFuZ2VkKHN0cnVjdCB2Y3B1ICp2KTsKIHZvaWQgdm14X3JlYWxtb2RlX2Vt
dWxhdGVfb25lKHN0cnVjdCBodm1fZW11bGF0ZV9jdHh0ICpodm1lbXVsX2N0
eHQpOwogdm9pZCB2bXhfcmVhbG1vZGUoc3RydWN0IGNwdV91c2VyX3JlZ3Mg
KnJlZ3MpOwo=

--=separator
Content-Type: application/octet-stream; name="xsa348-4.12.patch"
Content-Disposition: attachment; filename="xsa348-4.12.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODY6IGF2b2lkIGNhbGxpbmcge3N2bSx2bXh9X2RvX3Jlc3VtZSgpCgpU
aGVzZSBmdW5jdGlvbnMgZm9sbG93IHRoZSBmb2xsb3dpbmcgcGF0aDogaHZt
X2RvX3Jlc3VtZSgpIC0+CmhhbmRsZV9odm1faW9fY29tcGxldGlvbigpIC0+
IGh2bV93YWl0X2Zvcl9pbygpIC0+CndhaXRfb25feGVuX2V2ZW50X2NoYW5u
ZWwoKSAtPiBkb19zb2Z0aXJxKCkgLT4gc2NoZWR1bGUoKSAtPgpzY2hlZF9j
b250ZXh0X3N3aXRjaCgpIC0+IGNvbnRpbnVlX3J1bm5pbmcoKSBhbmQgaGVu
Y2UgbWF5CnJlY3Vyc2l2ZWx5IGludm9rZSB0aGVtc2VsdmVzLiBJZiB0aGlz
IGVuZHMgdXAgaGFwcGVuaW5nIGEgY291cGxlIG9mCnRpbWVzLCBhIHN0YWNr
IG92ZXJmbG93IHdvdWxkIHJlc3VsdC4KClByZXZlbnQgdGhpcyBieSBhbHNv
IHJlc2V0dGluZyB0aGUgc3RhY2sgYXQgdGhlCi0+YXJjaC5jdHh0X3N3aXRj
aC0+dGFpbCgpIGludm9jYXRpb25zIChpbiBib3RoIHBsYWNlcyBmb3IgY29u
c2lzdGVuY3kpCmFuZCB0aHVzIGp1bXBpbmcgdG8gdGhlIGZ1bmN0aW9ucyBp
bnN0ZWFkIG9mIGNhbGxpbmcgdGhlbS4KClRoaXMgaXMgWFNBLTM0OCAvIENW
RS0yMDIwLTI5NTY2LgoKUmVwb3J0ZWQtYnk6IEp1bGllbiBHcmFsbCA8amdy
YWxsQGFtYXpvbi5jb20+ClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxq
YmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IEp1ZXJnZW4gR3Jvc3Mg
PGpncm9zc0BzdXNlLmNvbT4KCi0tLSBzbGUxNXNwMS5vcmlnL3hlbi9hcmNo
L3g4Ni9kb21haW4uYwkyMDIwLTExLTAyIDE1OjU0OjQwLjAwMDAwMDAwMCAr
MDEwMAorKysgc2xlMTVzcDEveGVuL2FyY2gveDg2L2RvbWFpbi5jCTIwMjAt
MTEtMTAgMTc6NTY6NTkuMDAwMDAwMDAwICswMTAwCkBAIC0xMjMsNyArMTIz
LDcgQEAgc3RhdGljIHZvaWQgcGxheV9kZWFkKHZvaWQpCiAgICAgKCpkZWFk
X2lkbGUpKCk7CiB9CiAKLXN0YXRpYyB2b2lkIGlkbGVfbG9vcCh2b2lkKQor
c3RhdGljIHZvaWQgbm9yZXR1cm4gaWRsZV9sb29wKHZvaWQpCiB7CiAgICAg
dW5zaWduZWQgaW50IGNwdSA9IHNtcF9wcm9jZXNzb3JfaWQoKTsKIApAQCAt
MTYzLDExICsxNjMsNiBAQCB2b2lkIHN0YXJ0dXBfY3B1X2lkbGVfbG9vcCh2
b2lkKQogICAgIHJlc2V0X3N0YWNrX2FuZF9qdW1wKGlkbGVfbG9vcCk7CiB9
CiAKLXN0YXRpYyB2b2lkIG5vcmV0dXJuIGNvbnRpbnVlX2lkbGVfZG9tYWlu
KHN0cnVjdCB2Y3B1ICp2KQotewotICAgIHJlc2V0X3N0YWNrX2FuZF9qdW1w
KGlkbGVfbG9vcCk7Ci19Ci0KIHZvaWQgZHVtcF9wYWdlZnJhbWVfaW5mbyhz
dHJ1Y3QgZG9tYWluICpkKQogewogICAgIHN0cnVjdCBwYWdlX2luZm8gKnBh
Z2U7CkBAIC00ODQsNyArNDc5LDcgQEAgaW50IGFyY2hfZG9tYWluX2NyZWF0
ZShzdHJ1Y3QgZG9tYWluICpkLAogICAgICAgICBzdGF0aWMgY29uc3Qgc3Ry
dWN0IGFyY2hfY3N3IGlkbGVfY3N3ID0gewogICAgICAgICAgICAgLmZyb20g
PSBwYXJhdmlydF9jdHh0X3N3aXRjaF9mcm9tLAogICAgICAgICAgICAgLnRv
ICAgPSBwYXJhdmlydF9jdHh0X3N3aXRjaF90bywKLSAgICAgICAgICAgIC50
YWlsID0gY29udGludWVfaWRsZV9kb21haW4sCisgICAgICAgICAgICAudGFp
bCA9IGlkbGVfbG9vcCwKICAgICAgICAgfTsKIAogICAgICAgICBkLT5hcmNo
LmN0eHRfc3dpdGNoID0gJmlkbGVfY3N3OwpAQCAtMTc4NCwyMCArMTc3OSwx
MiBAQCB2b2lkIGNvbnRleHRfc3dpdGNoKHN0cnVjdCB2Y3B1ICpwcmV2LCBz
CiAgICAgLyogRW5zdXJlIHRoYXQgdGhlIHZjcHUgaGFzIGFuIHVwLXRvLWRh
dGUgdGltZSBiYXNlLiAqLwogICAgIHVwZGF0ZV92Y3B1X3N5c3RlbV90aW1l
KG5leHQpOwogCi0gICAgLyoKLSAgICAgKiBTY2hlZHVsZSB0YWlsICpzaG91
bGQqIGJlIGEgdGVybWluYWwgZnVuY3Rpb24gcG9pbnRlciwgYnV0IGxlYXZl
IGEKLSAgICAgKiBidWcgZnJhbWUgYXJvdW5kIGp1c3QgaW4gY2FzZSBpdCBy
ZXR1cm5zLCB0byBzYXZlIGdvaW5nIGJhY2sgaW50byB0aGUKLSAgICAgKiBj
b250ZXh0IHN3aXRjaGluZyBjb2RlIGFuZCBsZWF2aW5nIGEgZmFyIG1vcmUg
c3VidGxlIGNyYXNoIHRvIGRpYWdub3NlLgotICAgICAqLwotICAgIG5leHRk
LT5hcmNoLmN0eHRfc3dpdGNoLT50YWlsKG5leHQpOwotICAgIEJVRygpOwor
ICAgIHJlc2V0X3N0YWNrX2FuZF9qdW1wX2luZChuZXh0ZC0+YXJjaC5jdHh0
X3N3aXRjaC0+dGFpbCk7CiB9CiAKIHZvaWQgY29udGludWVfcnVubmluZyhz
dHJ1Y3QgdmNwdSAqc2FtZSkKIHsKLSAgICAvKiBTZWUgdGhlIGNvbW1lbnQg
YWJvdmUuICovCi0gICAgc2FtZS0+ZG9tYWluLT5hcmNoLmN0eHRfc3dpdGNo
LT50YWlsKHNhbWUpOwotICAgIEJVRygpOworICAgIHJlc2V0X3N0YWNrX2Fu
ZF9qdW1wX2luZChzYW1lLT5kb21haW4tPmFyY2guY3R4dF9zd2l0Y2gtPnRh
aWwpOwogfQogCiBpbnQgX19zeW5jX2xvY2FsX2V4ZWNzdGF0ZSh2b2lkKQot
LS0gc2xlMTVzcDEub3JpZy94ZW4vYXJjaC94ODYvaHZtL3N2bS9zdm0uYwky
MDIwLTExLTAyIDE1OjU0OjQwLjAwMDAwMDAwMCArMDEwMAorKysgc2xlMTVz
cDEveGVuL2FyY2gveDg2L2h2bS9zdm0vc3ZtLmMJMjAyMC0xMS0xMCAxNzo1
Njo1OS4wMDAwMDAwMDAgKzAxMDAKQEAgLTEwNTUsOCArMTA1NSw5IEBAIHN0
YXRpYyB2b2lkIHN2bV9jdHh0X3N3aXRjaF90byhzdHJ1Y3QgdmMKICAgICAg
ICAgd3Jtc3JfdHNjX2F1eCh2LT5hcmNoLm1zcnMtPnRzY19hdXgpOwogfQog
Ci1zdGF0aWMgdm9pZCBub3JldHVybiBzdm1fZG9fcmVzdW1lKHN0cnVjdCB2
Y3B1ICp2KQorc3RhdGljIHZvaWQgbm9yZXR1cm4gc3ZtX2RvX3Jlc3VtZSh2
b2lkKQogeworICAgIHN0cnVjdCB2Y3B1ICp2ID0gY3VycmVudDsKICAgICBz
dHJ1Y3Qgdm1jYl9zdHJ1Y3QgKnZtY2IgPSB2LT5hcmNoLmh2bS5zdm0udm1j
YjsKICAgICBib29sIGRlYnVnX3N0YXRlID0gKHYtPmRvbWFpbi0+ZGVidWdn
ZXJfYXR0YWNoZWQgfHwKICAgICAgICAgICAgICAgICAgICAgICAgIHYtPmRv
bWFpbi0+YXJjaC5tb25pdG9yLnNvZnR3YXJlX2JyZWFrcG9pbnRfZW5hYmxl
ZCB8fAotLS0gc2xlMTVzcDEub3JpZy94ZW4vYXJjaC94ODYvaHZtL3ZteC92
bWNzLmMJMjAyMC0wMS0wNiAxOTowMzowOS4wMDAwMDAwMDAgKzAxMDAKKysr
IHNsZTE1c3AxL3hlbi9hcmNoL3g4Ni9odm0vdm14L3ZtY3MuYwkyMDIwLTEx
LTEwIDE3OjU2OjU5LjAwMDAwMDAwMCArMDEwMApAQCAtMTgzMCw4ICsxODMw
LDkgQEAgdm9pZCB2bXhfdm1lbnRyeV9mYWlsdXJlKHZvaWQpCiAgICAgZG9t
YWluX2NyYXNoKGN1cnItPmRvbWFpbik7CiB9CiAKLXZvaWQgdm14X2RvX3Jl
c3VtZShzdHJ1Y3QgdmNwdSAqdikKK3ZvaWQgdm14X2RvX3Jlc3VtZSh2b2lk
KQogeworICAgIHN0cnVjdCB2Y3B1ICp2ID0gY3VycmVudDsKICAgICBib29s
X3QgZGVidWdfc3RhdGU7CiAgICAgdW5zaWduZWQgbG9uZyBob3N0X2NyNDsK
IAotLS0gc2xlMTVzcDEub3JpZy94ZW4vYXJjaC94ODYvcHYvZG9tYWluLmMJ
MjAyMC0xMS0wMiAxNTo1NDo0MC4wMDAwMDAwMDAgKzAxMDAKKysrIHNsZTE1
c3AxL3hlbi9hcmNoL3g4Ni9wdi9kb21haW4uYwkyMDIwLTExLTEwIDE3OjU2
OjU5LjAwMDAwMDAwMCArMDEwMApAQCAtNTgsNyArNTgsNyBAQCBzdGF0aWMg
aW50IHBhcnNlX3BjaWQoY29uc3QgY2hhciAqcykKIH0KIGN1c3RvbV9ydW50
aW1lX3BhcmFtKCJwY2lkIiwgcGFyc2VfcGNpZCk7CiAKLXN0YXRpYyB2b2lk
IG5vcmV0dXJuIGNvbnRpbnVlX25vbmlkbGVfZG9tYWluKHN0cnVjdCB2Y3B1
ICp2KQorc3RhdGljIHZvaWQgbm9yZXR1cm4gY29udGludWVfbm9uaWRsZV9k
b21haW4odm9pZCkKIHsKICAgICBjaGVja193YWtldXBfZnJvbV93YWl0KCk7
CiAgICAgcmVzZXRfc3RhY2tfYW5kX2p1bXAocmV0X2Zyb21faW50cik7Ci0t
LSBzbGUxNXNwMS5vcmlnL3hlbi9pbmNsdWRlL2FzbS14ODYvY3VycmVudC5o
CTIwMTktMDgtMDkgMTc6NTE6MjAuMDAwMDAwMDAwICswMjAwCisrKyBzbGUx
NXNwMS94ZW4vaW5jbHVkZS9hc20teDg2L2N1cnJlbnQuaAkyMDIwLTExLTEw
IDE3OjU2OjU5LjAwMDAwMDAwMCArMDEwMApAQCAtMTI0LDE2ICsxMjQsMjMg
QEAgdW5zaWduZWQgbG9uZyBnZXRfc3RhY2tfZHVtcF9ib3R0b20gKHVucwog
IyBkZWZpbmUgQ0hFQ0tfRk9SX0xJVkVQQVRDSF9XT1JLICIiCiAjZW5kaWYK
IAotI2RlZmluZSByZXNldF9zdGFja19hbmRfanVtcChfX2ZuKSAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAorI2RlZmluZSBzd2l0
Y2hfc3RhY2tfYW5kX2p1bXAoZm4sIGluc3RyLCBjb25zdHIpICAgICAgICAg
ICAgICAgICAgICAgICAgXAogICAgICh7ICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
XAogICAgICAgICBfX2FzbV9fIF9fdm9sYXRpbGVfXyAoICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAogICAgICAgICAgICAg
Im1vdiAlMCwlJSJfX09QInNwOyIgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgXAotICAgICAgICAgICAgQ0hFQ0tfRk9SX0xJVkVQ
QVRDSF9XT1JLICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBcCi0gICAgICAgICAgICAgImptcCAlYzEiICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCi0gICAgICAgICAg
ICA6IDogInIiIChndWVzdF9jcHVfdXNlcl9yZWdzKCkpLCAiaSIgKF9fZm4p
IDogIm1lbW9yeSIgKTsgICBcCisgICAgICAgICAgICBDSEVDS19GT1JfTElW
RVBBVENIX1dPUksgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBcCisgICAgICAgICAgICBpbnN0ciAiMSIgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgICAgICAg
ICA6IDogInIiIChndWVzdF9jcHVfdXNlcl9yZWdzKCkpLCBjb25zdHIgKGZu
KSA6ICJtZW1vcnkiICk7ICBcCiAgICAgICAgIHVucmVhY2hhYmxlKCk7ICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBcCiAgICAgfSkKIAorI2RlZmluZSByZXNldF9zdGFja19hbmRfanVtcChm
bikgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAor
ICAgIHN3aXRjaF9zdGFja19hbmRfanVtcChmbiwgImptcCAlYyIsICJpIikK
KworLyogVGhlIGNvbnN0cmFpbnQgbWF5IG9ubHkgc3BlY2lmeSBub24tY2Fs
bC1jbG9iYmVyZWQgcmVnaXN0ZXJzLiAqLworI2RlZmluZSByZXNldF9zdGFj
a19hbmRfanVtcF9pbmQoZm4pICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgXAorICAgIHN3aXRjaF9zdGFja19hbmRfanVtcChmbiwgIklO
RElSRUNUX0pNUCAlIiwgImIiKQorCiAvKgogICogV2hpY2ggVkNQVSdzIHN0
YXRlIGlzIGN1cnJlbnRseSBydW5uaW5nIG9uIGVhY2ggQ1BVPwogICogVGhp
cyBpcyBub3QgbmVjZXNhc3JpbHkgdGhlIHNhbWUgYXMgJ2N1cnJlbnQnIGFz
IGEgQ1BVIG1heSBiZQotLS0gc2xlMTVzcDEub3JpZy94ZW4vaW5jbHVkZS9h
c20teDg2L2RvbWFpbi5oCTIwMjAtMTEtMDIgMTU6NTQ6NDAuMDAwMDAwMDAw
ICswMTAwCisrKyBzbGUxNXNwMS94ZW4vaW5jbHVkZS9hc20teDg2L2RvbWFp
bi5oCTIwMjAtMTEtMTAgMTc6NTY6NTkuMDAwMDAwMDAwICswMTAwCkBAIC0z
MTgsNyArMzE4LDcgQEAgc3RydWN0IGFyY2hfZG9tYWluCiAgICAgY29uc3Qg
c3RydWN0IGFyY2hfY3N3IHsKICAgICAgICAgdm9pZCAoKmZyb20pKHN0cnVj
dCB2Y3B1ICopOwogICAgICAgICB2b2lkICgqdG8pKHN0cnVjdCB2Y3B1ICop
OwotICAgICAgICB2b2lkICgqdGFpbCkoc3RydWN0IHZjcHUgKik7CisgICAg
ICAgIHZvaWQgbm9yZXR1cm4gKCp0YWlsKSh2b2lkKTsKICAgICB9ICpjdHh0
X3N3aXRjaDsKIAogI2lmZGVmIENPTkZJR19IVk0KLS0tIHNsZTE1c3AxLm9y
aWcveGVuL2luY2x1ZGUvYXNtLXg4Ni9odm0vdm14L3ZteC5oCTIwMjAtMDEt
MDYgMTk6MDM6MDkuMDAwMDAwMDAwICswMTAwCisrKyBzbGUxNXNwMS94ZW4v
aW5jbHVkZS9hc20teDg2L2h2bS92bXgvdm14LmgJMjAyMC0xMS0xMCAxNzo1
Njo1OS4wMDAwMDAwMDAgKzAxMDAKQEAgLTk1LDcgKzk1LDcgQEAgdHlwZWRl
ZiBlbnVtIHsKIHZvaWQgdm14X2FzbV92bWV4aXRfaGFuZGxlcihzdHJ1Y3Qg
Y3B1X3VzZXJfcmVncyk7CiB2b2lkIHZteF9hc21fZG9fdm1lbnRyeSh2b2lk
KTsKIHZvaWQgdm14X2ludHJfYXNzaXN0KHZvaWQpOwotdm9pZCBub3JldHVy
biB2bXhfZG9fcmVzdW1lKHN0cnVjdCB2Y3B1ICopOwordm9pZCBub3JldHVy
biB2bXhfZG9fcmVzdW1lKHZvaWQpOwogdm9pZCB2bXhfdmxhcGljX21zcl9j
aGFuZ2VkKHN0cnVjdCB2Y3B1ICp2KTsKIHZvaWQgdm14X3JlYWxtb2RlX2Vt
dWxhdGVfb25lKHN0cnVjdCBodm1fZW11bGF0ZV9jdHh0ICpodm1lbXVsX2N0
eHQpOwogdm9pZCB2bXhfcmVhbG1vZGUoc3RydWN0IGNwdV91c2VyX3JlZ3Mg
KnJlZ3MpOwo=

--=separator
Content-Type: application/octet-stream; name="xsa348-4.13-1.patch"
Content-Disposition: attachment; filename="xsa348-4.13-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODY6IHJlcGxhY2UgcmVzZXRfc3RhY2tfYW5kX2p1bXBfbm9scCgpCgpN
b3ZlIHRoZSBuZWNlc3NhcnkgY2hlY2sgaW50byBjaGVja19mb3JfbGl2ZXBh
dGNoX3dvcmsoKSwgcmF0aGVyIHRoYW4KbW9zdGx5IGR1cGxpY2F0aW5nIHJl
c2V0X3N0YWNrX2FuZF9qdW1wKCkgZm9yIHRoaXMgcHVycG9zZS4gVGhpcyBp
cyB0bwpwcmV2ZW50IGFuIGluZmxhdGlvbiBvZiByZXNldF9zdGFja19hbmRf
anVtcCgpIGZsYXZvcnMuCgpTaWduZWQtb2ZmLWJ5OiBKYW4gQmV1bGljaCA8
amJldWxpY2hAc3VzZS5jb20+ClJldmlld2VkLWJ5OiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CgotLS0gc2xlMTVzcDIub3JpZy94ZW4vYXJj
aC94ODYvZG9tYWluLmMJMjAyMC0xMC0zMCAxNzoyMjozOS4wMDAwMDAwMDAg
KzAxMDAKKysrIHNsZTE1c3AyL3hlbi9hcmNoL3g4Ni9kb21haW4uYwkyMDIw
LTExLTEwIDE3OjUxOjEwLjg5NDUyNTcyMSArMDEwMApAQCAtMTkyLDcgKzE5
Miw3IEBAIHN0YXRpYyB2b2lkIG5vcmV0dXJuIGNvbnRpbnVlX2lkbGVfZG9t
YWkKIHsKICAgICAvKiBJZGxlIHZjcHVzIG1pZ2h0IGJlIGF0dGFjaGVkIHRv
IG5vbi1pZGxlIHVuaXRzISAqLwogICAgIGlmICggIWlzX2lkbGVfZG9tYWlu
KHYtPnNjaGVkX3VuaXQtPmRvbWFpbikgKQotICAgICAgICByZXNldF9zdGFj
a19hbmRfanVtcF9ub2xwKGd1ZXN0X2lkbGVfbG9vcCk7CisgICAgICAgIHJl
c2V0X3N0YWNrX2FuZF9qdW1wKGd1ZXN0X2lkbGVfbG9vcCk7CiAKICAgICBy
ZXNldF9zdGFja19hbmRfanVtcChpZGxlX2xvb3ApOwogfQotLS0gc2xlMTVz
cDIub3JpZy94ZW4vYXJjaC94ODYvaHZtL3N2bS9zdm0uYwkyMDIwLTEwLTMw
IDE3OjIyOjM5LjAwMDAwMDAwMCArMDEwMAorKysgc2xlMTVzcDIveGVuL2Fy
Y2gveDg2L2h2bS9zdm0vc3ZtLmMJMjAyMC0xMS0xMCAxNzo1MToxMC44OTg1
MjU3MjMgKzAxMDAKQEAgLTEwMzIsNyArMTAzMiw3IEBAIHN0YXRpYyB2b2lk
IG5vcmV0dXJuIHN2bV9kb19yZXN1bWUoc3RydWMKIAogICAgIGh2bV9kb19y
ZXN1bWUodik7CiAKLSAgICByZXNldF9zdGFja19hbmRfanVtcF9ub2xwKHN2
bV9hc21fZG9fcmVzdW1lKTsKKyAgICByZXNldF9zdGFja19hbmRfanVtcChz
dm1fYXNtX2RvX3Jlc3VtZSk7CiB9CiAKIHZvaWQgc3ZtX3ZtZW50ZXJfaGVs
cGVyKGNvbnN0IHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQotLS0gc2xl
MTVzcDIub3JpZy94ZW4vYXJjaC94ODYvaHZtL3ZteC92bWNzLmMJMjAyMC0w
NS0xOCAxODo1MzowOS4wMDAwMDAwMDAgKzAyMDAKKysrIHNsZTE1c3AyL3hl
bi9hcmNoL3g4Ni9odm0vdm14L3ZtY3MuYwkyMDIwLTExLTEwIDE3OjUxOjEw
Ljg5ODUyNTcyMyArMDEwMApAQCAtMTg4OSw3ICsxODg5LDcgQEAgdm9pZCB2
bXhfZG9fcmVzdW1lKHN0cnVjdCB2Y3B1ICp2KQogICAgIGlmICggaG9zdF9j
cjQgIT0gcmVhZF9jcjQoKSApCiAgICAgICAgIF9fdm13cml0ZShIT1NUX0NS
NCwgcmVhZF9jcjQoKSk7CiAKLSAgICByZXNldF9zdGFja19hbmRfanVtcF9u
b2xwKHZteF9hc21fZG9fdm1lbnRyeSk7CisgICAgcmVzZXRfc3RhY2tfYW5k
X2p1bXAodm14X2FzbV9kb192bWVudHJ5KTsKIH0KIAogc3RhdGljIGlubGlu
ZSB1bnNpZ25lZCBsb25nIHZtcih1bnNpZ25lZCBsb25nIGZpZWxkKQotLS0g
c2xlMTVzcDIub3JpZy94ZW4vYXJjaC94ODYvcHYvZG9tYWluLmMJMjAyMC0x
MC0zMCAxNzoyMjozOS4wMDAwMDAwMDAgKzAxMDAKKysrIHNsZTE1c3AyL3hl
bi9hcmNoL3g4Ni9wdi9kb21haW4uYwkyMDIwLTExLTEwIDE3OjUxOjEwLjg5
ODUyNTcyMyArMDEwMApAQCAtNjEsNyArNjEsNyBAQCBjdXN0b21fcnVudGlt
ZV9wYXJhbSgicGNpZCIsIHBhcnNlX3BjaWQpCiBzdGF0aWMgdm9pZCBub3Jl
dHVybiBjb250aW51ZV9ub25pZGxlX2RvbWFpbihzdHJ1Y3QgdmNwdSAqdikK
IHsKICAgICBjaGVja193YWtldXBfZnJvbV93YWl0KCk7Ci0gICAgcmVzZXRf
c3RhY2tfYW5kX2p1bXBfbm9scChyZXRfZnJvbV9pbnRyKTsKKyAgICByZXNl
dF9zdGFja19hbmRfanVtcChyZXRfZnJvbV9pbnRyKTsKIH0KIAogc3RhdGlj
IGludCBzZXR1cF9jb21wYXRfbDQoc3RydWN0IHZjcHUgKnYpCi0tLSBzbGUx
NXNwMi5vcmlnL3hlbi9hcmNoL3g4Ni9zZXR1cC5jCTIwMjAtMDUtMTggMTg6
NTM6MDkuMDAwMDAwMDAwICswMjAwCisrKyBzbGUxNXNwMi94ZW4vYXJjaC94
ODYvc2V0dXAuYwkyMDIwLTExLTEwIDE3OjUxOjEwLjg5ODUyNTcyMyArMDEw
MApAQCAtNjMxLDcgKzYzMSw3IEBAIHN0YXRpYyB2b2lkIF9faW5pdCBub3Jl
dHVybiByZWluaXRfYnNwX3MKICAgICBzdGFja19iYXNlWzBdID0gc3RhY2s7
CiAgICAgbWVtZ3VhcmRfZ3VhcmRfc3RhY2soc3RhY2spOwogCi0gICAgcmVz
ZXRfc3RhY2tfYW5kX2p1bXBfbm9scChpbml0X2RvbmUpOworICAgIHJlc2V0
X3N0YWNrX2FuZF9qdW1wKGluaXRfZG9uZSk7CiB9CiAKIC8qCi0tLSBzbGUx
NXNwMi5vcmlnL3hlbi9jb21tb24vbGl2ZXBhdGNoLmMJMjAyMC0wNS0xOCAx
ODo1MzowOS4wMDAwMDAwMDAgKzAyMDAKKysrIHNsZTE1c3AyL3hlbi9jb21t
b24vbGl2ZXBhdGNoLmMJMjAyMC0xMS0xMCAxNzo1MToxMC44OTg1MjU3MjMg
KzAxMDAKQEAgLTEzMDAsNiArMTMwMCwxMSBAQCB2b2lkIGNoZWNrX2Zvcl9s
aXZlcGF0Y2hfd29yayh2b2lkKQogICAgIHNfdGltZV90IHRpbWVvdXQ7CiAg
ICAgdW5zaWduZWQgbG9uZyBmbGFnczsKIAorICAgIC8qIE9ubHkgZG8gYW55
IHdvcmsgd2hlbiBpbnZva2VkIGluIHRydWx5IGlkbGUgc3RhdGUuICovCisg
ICAgaWYgKCBzeXN0ZW1fc3RhdGUgIT0gU1lTX1NUQVRFX2FjdGl2ZSB8fAor
ICAgICAgICAgIWlzX2lkbGVfZG9tYWluKGN1cnJlbnQtPnNjaGVkX3VuaXQt
PmRvbWFpbikgKQorICAgICAgICByZXR1cm47CisKICAgICAvKiBGYXN0IHBh
dGg6IG5vIHdvcmsgdG8gZG8uICovCiAgICAgaWYgKCAhcGVyX2NwdSh3b3Jr
X3RvX2RvLCBjcHUgKSApCiAgICAgICAgIHJldHVybjsKLS0tIHNsZTE1c3Ay
Lm9yaWcveGVuL2luY2x1ZGUvYXNtLXg4Ni9jdXJyZW50LmgJMjAxOS0xMi0x
OCAxNjoxODo1OS4wMDAwMDAwMDAgKzAxMDAKKysrIHNsZTE1c3AyL3hlbi9p
bmNsdWRlL2FzbS14ODYvY3VycmVudC5oCTIwMjAtMTEtMTAgMTc6NTE6MTAu
OTAyNTI1NzI1ICswMTAwCkBAIC0xMjksMjIgKzEyOSwxNiBAQCB1bnNpZ25l
ZCBsb25nIGdldF9zdGFja19kdW1wX2JvdHRvbSAodW5zCiAjIGRlZmluZSBD
SEVDS19GT1JfTElWRVBBVENIX1dPUksgIiIKICNlbmRpZgogCi0jZGVmaW5l
IHN3aXRjaF9zdGFja19hbmRfanVtcChmbiwgaW5zdHIpICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBcCisjZGVmaW5lIHJlc2V0X3N0YWNrX2Fu
ZF9qdW1wKGZuKSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBcCiAgICAgKHsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCiAgICAgICAg
IF9fYXNtX18gX192b2xhdGlsZV9fICggICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBcCiAgICAgICAgICAgICAibW92ICUwLCUl
Il9fT1Aic3A7IiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBcCi0gICAgICAgICAgICBpbnN0ciAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgICAg
ICAgICBDSEVDS19GT1JfTElWRVBBVENIX1dPUksgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBcCiAgICAgICAgICAgICAgImptcCAlYzEi
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBcCiAgICAgICAgICAgICA6IDogInIiIChndWVzdF9jcHVfdXNlcl9y
ZWdzKCkpLCAiaSIgKGZuKSA6ICJtZW1vcnkiICk7ICAgICBcCiAgICAgICAg
IHVucmVhY2hhYmxlKCk7ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBcCiAgICAgfSkKIAotI2RlZmluZSByZXNl
dF9zdGFja19hbmRfanVtcChmbikgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgXAotICAgIHN3aXRjaF9zdGFja19hbmRfanVtcChm
biwgQ0hFQ0tfRk9SX0xJVkVQQVRDSF9XT1JLKQotCi0jZGVmaW5lIHJlc2V0
X3N0YWNrX2FuZF9qdW1wX25vbHAoZm4pICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBcCi0gICAgc3dpdGNoX3N0YWNrX2FuZF9qdW1wKGZu
LCAiIikKLQogLyoKICAqIFdoaWNoIFZDUFUncyBzdGF0ZSBpcyBjdXJyZW50
bHkgcnVubmluZyBvbiBlYWNoIENQVT8KICAqIFRoaXMgaXMgbm90IG5lY2Vz
YXNyaWx5IHRoZSBzYW1lIGFzICdjdXJyZW50JyBhcyBhIENQVSBtYXkgYmUK

--=separator
Content-Type: application/octet-stream; name="xsa348-4.13-2.patch"
Content-Disposition: attachment; filename="xsa348-4.13-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODY6IGZvbGQgZ3Vlc3RfaWRsZV9sb29wKCkgaW50byBpZGxlX2xvb3Ao
KQoKVGhlIGxhdHRlciBjYW4gZWFzaWx5IGJlIG1hZGUgY292ZXIgYm90aCBj
YXNlcy4gVGhpcyBpcyBpbiBwcmVwYXJhdGlvbgpvZiB1c2luZyBpZGxlX2xv
b3AgZGlyZWN0bHkgZm9yIHBvcHVsYXRpbmcgaWRsZV9jc3cudGFpbC4KClRh
a2UgdGhlIGxpYmVydHkgYW5kIGFsc28gYWRqdXN0IGluZGVudGF0aW9uIC8g
c3BhY2luZyBpbiBpbnZvbHZlZCBjb2RlLgoKU2lnbmVkLW9mZi1ieTogSmFu
IEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogSnVl
cmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgoKLS0tIHNsZTE1c3AyLm9y
aWcveGVuL2FyY2gveDg2L2RvbWFpbi5jCTIwMjAtMTEtMTAgMTc6NTE6MTAu
ODk0NTI1NzIxICswMTAwCisrKyBzbGUxNXNwMi94ZW4vYXJjaC94ODYvZG9t
YWluLmMJMjAyMC0xMS0xMCAxNzo1MTo0Ni4zNTQ1NDYzNDkgKzAxMDAKQEAg
LTEzMywxNCArMTMzLDIyIEBAIHZvaWQgcGxheV9kZWFkKHZvaWQpCiBzdGF0
aWMgdm9pZCBpZGxlX2xvb3Aodm9pZCkKIHsKICAgICB1bnNpZ25lZCBpbnQg
Y3B1ID0gc21wX3Byb2Nlc3Nvcl9pZCgpOworICAgIC8qCisgICAgICogSWRs
ZSB2Y3B1cyBtaWdodCBiZSBhdHRhY2hlZCB0byBub24taWRsZSB1bml0cyEg
V2UgZG9uJ3QgZG8gYW55CisgICAgICogc3RhbmRhcmQgaWRsZSB3b3JrIGxp
a2UgdGFza2xldHMgb3IgbGl2ZXBhdGNoaW5nIGluIHRoaXMgY2FzZS4KKyAg
ICAgKi8KKyAgICBib29sIGd1ZXN0ID0gIWlzX2lkbGVfZG9tYWluKGN1cnJl
bnQtPnNjaGVkX3VuaXQtPmRvbWFpbik7CiAKICAgICBmb3IgKCA7IDsgKQog
ICAgIHsKICAgICAgICAgaWYgKCBjcHVfaXNfb2ZmbGluZShjcHUpICkKKyAg
ICAgICAgeworICAgICAgICAgICAgQVNTRVJUKCFndWVzdCk7CiAgICAgICAg
ICAgICBwbGF5X2RlYWQoKTsKKyAgICAgICAgfQogCiAgICAgICAgIC8qIEFy
ZSB3ZSBoZXJlIGZvciBydW5uaW5nIHZjcHUgY29udGV4dCB0YXNrbGV0cywg
b3IgZm9yIGlkbGluZz8gKi8KLSAgICAgICAgaWYgKCB1bmxpa2VseSh0YXNr
bGV0X3dvcmtfdG9fZG8oY3B1KSkgKQorICAgICAgICBpZiAoICFndWVzdCAm
JiB1bmxpa2VseSh0YXNrbGV0X3dvcmtfdG9fZG8oY3B1KSkgKQogICAgICAg
ICB7CiAgICAgICAgICAgICBkb190YXNrbGV0KCk7CiAgICAgICAgICAgICAv
KiBMaXZlcGF0Y2ggd29yayBpcyBhbHdheXMga2lja2VkIG9mZiB2aWEgYSB0
YXNrbGV0LiAqLwpAQCAtMTUxLDI4ICsxNTksMTQgQEAgc3RhdGljIHZvaWQg
aWRsZV9sb29wKHZvaWQpCiAgICAgICAgICAqIGFuZCB0aGVuLCBhZnRlciBp
dCBpcyBkb25lLCB3aGV0aGVyIHNvZnRpcnFzIGJlY2FtZSBwZW5kaW5nCiAg
ICAgICAgICAqIHdoaWxlIHdlIHdlcmUgc2NydWJiaW5nLgogICAgICAgICAg
Ki8KLSAgICAgICAgZWxzZSBpZiAoICFzb2Z0aXJxX3BlbmRpbmcoY3B1KSAm
JiAhc2NydWJfZnJlZV9wYWdlcygpICAmJgotICAgICAgICAgICAgICAgICAg
ICAhc29mdGlycV9wZW5kaW5nKGNwdSkgKQotICAgICAgICAgICAgcG1faWRs
ZSgpOwotICAgICAgICBkb19zb2Z0aXJxKCk7Ci0gICAgfQotfQotCi0vKgot
ICogSWRsZSBsb29wIGZvciBzaWJsaW5ncyBpbiBhY3RpdmUgc2NoZWR1bGUg
dW5pdHMuCi0gKiBXZSBkb24ndCBkbyBhbnkgc3RhbmRhcmQgaWRsZSB3b3Jr
IGxpa2UgdGFza2xldHMgb3IgbGl2ZXBhdGNoaW5nLgotICovCi1zdGF0aWMg
dm9pZCBndWVzdF9pZGxlX2xvb3Aodm9pZCkKLXsKLSAgICB1bnNpZ25lZCBp
bnQgY3B1ID0gc21wX3Byb2Nlc3Nvcl9pZCgpOwotCi0gICAgZm9yICggOyA7
ICkKLSAgICB7Ci0gICAgICAgIEFTU0VSVCghY3B1X2lzX29mZmxpbmUoY3B1
KSk7Ci0KLSAgICAgICAgaWYgKCAhc29mdGlycV9wZW5kaW5nKGNwdSkgJiYg
IXNjcnViX2ZyZWVfcGFnZXMoKSAmJgotICAgICAgICAgICAgICFzb2Z0aXJx
X3BlbmRpbmcoY3B1KSkKLSAgICAgICAgICAgIHNjaGVkX2d1ZXN0X2lkbGUo
cG1faWRsZSwgY3B1KTsKKyAgICAgICAgZWxzZSBpZiAoICFzb2Z0aXJxX3Bl
bmRpbmcoY3B1KSAmJiAhc2NydWJfZnJlZV9wYWdlcygpICYmCisgICAgICAg
ICAgICAgICAgICAhc29mdGlycV9wZW5kaW5nKGNwdSkgKQorICAgICAgICB7
CisgICAgICAgICAgICBpZiAoIGd1ZXN0ICkKKyAgICAgICAgICAgICAgICBz
Y2hlZF9ndWVzdF9pZGxlKHBtX2lkbGUsIGNwdSk7CisgICAgICAgICAgICBl
bHNlCisgICAgICAgICAgICAgICAgcG1faWRsZSgpOworICAgICAgICB9CiAg
ICAgICAgIGRvX3NvZnRpcnEoKTsKICAgICB9CiB9CkBAIC0xOTAsMTAgKzE4
NCw2IEBAIHZvaWQgc3RhcnR1cF9jcHVfaWRsZV9sb29wKHZvaWQpCiAKIHN0
YXRpYyB2b2lkIG5vcmV0dXJuIGNvbnRpbnVlX2lkbGVfZG9tYWluKHN0cnVj
dCB2Y3B1ICp2KQogewotICAgIC8qIElkbGUgdmNwdXMgbWlnaHQgYmUgYXR0
YWNoZWQgdG8gbm9uLWlkbGUgdW5pdHMhICovCi0gICAgaWYgKCAhaXNfaWRs
ZV9kb21haW4odi0+c2NoZWRfdW5pdC0+ZG9tYWluKSApCi0gICAgICAgIHJl
c2V0X3N0YWNrX2FuZF9qdW1wKGd1ZXN0X2lkbGVfbG9vcCk7Ci0KICAgICBy
ZXNldF9zdGFja19hbmRfanVtcChpZGxlX2xvb3ApOwogfQogCg==

--=separator
Content-Type: application/octet-stream; name="xsa348-4.13-3.patch"
Content-Disposition: attachment; filename="xsa348-4.13-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODY6IGF2b2lkIGNhbGxpbmcge3N2bSx2bXh9X2RvX3Jlc3VtZSgpCgpU
aGVzZSBmdW5jdGlvbnMgZm9sbG93IHRoZSBmb2xsb3dpbmcgcGF0aDogaHZt
X2RvX3Jlc3VtZSgpIC0+CmhhbmRsZV9odm1faW9fY29tcGxldGlvbigpIC0+
IGh2bV93YWl0X2Zvcl9pbygpIC0+CndhaXRfb25feGVuX2V2ZW50X2NoYW5u
ZWwoKSAtPiBkb19zb2Z0aXJxKCkgLT4gc2NoZWR1bGUoKSAtPgpzY2hlZF9j
b250ZXh0X3N3aXRjaCgpIC0+IGNvbnRpbnVlX3J1bm5pbmcoKSBhbmQgaGVu
Y2UgbWF5CnJlY3Vyc2l2ZWx5IGludm9rZSB0aGVtc2VsdmVzLiBJZiB0aGlz
ends up happening a couple of
times, a stack overflow would result.

Prevent this by also resetting the stack at the
->arch.ctxt_switch->tail() invocations (in both places for consistency)
and thus jumping to the functions instead of calling them.

This is XSA-348 / CVE-2020-29566.

Reported-by: Julien Grall <jgrall@amazon.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Juergen Gross <jgross@suse.com>

--- sle15sp2.orig/xen/arch/x86/domain.c	2020-11-10 17:51:46.354546349 +0100
+++ sle15sp2/xen/arch/x86/domain.c	2020-11-10 17:56:58.758730088 +0100
@@ -130,7 +130,7 @@ void play_dead(void)
         dead_idle();
 }
 
-static void idle_loop(void)
+static void noreturn idle_loop(void)
 {
     unsigned int cpu = smp_processor_id();
     /*
@@ -182,11 +182,6 @@ void startup_cpu_idle_loop(void)
     reset_stack_and_jump(idle_loop);
 }
 
-static void noreturn continue_idle_domain(struct vcpu *v)
-{
-    reset_stack_and_jump(idle_loop);
-}
-
 void init_hypercall_page(struct domain *d, void *ptr)
 {
     memset(ptr, 0xcc, PAGE_SIZE);
@@ -535,7 +530,7 @@ int arch_domain_create(struct domain *d,
         static const struct arch_csw idle_csw = {
             .from = paravirt_ctxt_switch_from,
             .to   = paravirt_ctxt_switch_to,
-            .tail = continue_idle_domain,
+            .tail = idle_loop,
         };
 
         d->arch.ctxt_switch = &idle_csw;
@@ -1833,20 +1828,12 @@ void context_switch(struct vcpu *prev, s
     /* Ensure that the vcpu has an up-to-date time base. */
     update_vcpu_system_time(next);
 
-    /*
-     * Schedule tail *should* be a terminal function pointer, but leave a
-     * bug frame around just in case it returns, to save going back into the
-     * context switching code and leaving a far more subtle crash to diagnose.
-     */
-    nextd->arch.ctxt_switch->tail(next);
-    BUG();
+    reset_stack_and_jump_ind(nextd->arch.ctxt_switch->tail);
 }
 
 void continue_running(struct vcpu *same)
 {
-    /* See the comment above. */
-    same->domain->arch.ctxt_switch->tail(same);
-    BUG();
+    reset_stack_and_jump_ind(same->domain->arch.ctxt_switch->tail);
 }
 
 int __sync_local_execstate(void)
--- sle15sp2.orig/xen/arch/x86/hvm/svm/svm.c	2020-11-10 17:51:10.898525723 +0100
+++ sle15sp2/xen/arch/x86/hvm/svm/svm.c	2020-11-10 17:56:58.762730090 +0100
@@ -987,8 +987,9 @@ static void svm_ctxt_switch_to(struct vc
         wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
 }
 
-static void noreturn svm_do_resume(struct vcpu *v)
+static void noreturn svm_do_resume(void)
 {
+    struct vcpu *v = current;
     struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
     bool debug_state = (v->domain->debugger_attached ||
                         v->domain->arch.monitor.software_breakpoint_enabled ||
--- sle15sp2.orig/xen/arch/x86/hvm/vmx/vmcs.c	2020-11-10 17:51:10.898525723 +0100
+++ sle15sp2/xen/arch/x86/hvm/vmx/vmcs.c	2020-11-10 17:56:58.762730090 +0100
@@ -1830,8 +1830,9 @@ void vmx_vmentry_failure(void)
     domain_crash(curr->domain);
 }
 
-void vmx_do_resume(struct vcpu *v)
+void vmx_do_resume(void)
 {
+    struct vcpu *v = current;
     bool_t debug_state;
     unsigned long host_cr4;
 
--- sle15sp2.orig/xen/arch/x86/pv/domain.c	2020-11-10 17:51:10.898525723 +0100
+++ sle15sp2/xen/arch/x86/pv/domain.c	2020-11-10 17:56:58.762730090 +0100
@@ -58,7 +58,7 @@ static int parse_pcid(const char *s)
 }
 custom_runtime_param("pcid", parse_pcid);
 
-static void noreturn continue_nonidle_domain(struct vcpu *v)
+static void noreturn continue_nonidle_domain(void)
 {
     check_wakeup_from_wait();
     reset_stack_and_jump(ret_from_intr);
--- sle15sp2.orig/xen/include/asm-x86/current.h	2020-11-10 17:51:10.902525725 +0100
+++ sle15sp2/xen/include/asm-x86/current.h	2020-11-10 17:56:58.762730090 +0100
@@ -129,16 +129,23 @@ unsigned long get_stack_dump_bottom (uns
 # define CHECK_FOR_LIVEPATCH_WORK ""
 #endif
 
-#define reset_stack_and_jump(fn)                                        \
+#define switch_stack_and_jump(fn, instr, constr)                        \
     ({                                                                  \
         __asm__ __volatile__ (                                          \
             "mov %0,%%"__OP"sp;"                                        \
             CHECK_FOR_LIVEPATCH_WORK                                    \
-             "jmp %c1"                                                  \
-            : : "r" (guest_cpu_user_regs()), "i" (fn) : "memory" );     \
+            instr "1"                                                   \
+            : : "r" (guest_cpu_user_regs()), constr (fn) : "memory" );  \
         unreachable();                                                  \
     })
 
+#define reset_stack_and_jump(fn)                                        \
+    switch_stack_and_jump(fn, "jmp %c", "i")
+
+/* The constraint may only specify non-call-clobbered registers. */
+#define reset_stack_and_jump_ind(fn)                                    \
+    switch_stack_and_jump(fn, "INDIRECT_JMP %", "b")
+
 /*
  * Which VCPU's state is currently running on each CPU?
  * This is not necesasrily the same as 'current' as a CPU may be
--- sle15sp2.orig/xen/include/asm-x86/domain.h	2020-10-30 17:22:39.000000000 +0100
+++ sle15sp2/xen/include/asm-x86/domain.h	2020-11-10 17:56:58.762730090 +0100
@@ -313,7 +313,7 @@ struct arch_domain
     const struct arch_csw {
         void (*from)(struct vcpu *);
         void (*to)(struct vcpu *);
-        void (*tail)(struct vcpu *);
+        void noreturn (*tail)(void);
    } *ctxt_switch;
 
 #ifdef CONFIG_HVM
--- sle15sp2.orig/xen/include/asm-x86/hvm/vmx/vmx.h	2019-12-18 16:18:59.000000000 +0100
+++ sle15sp2/xen/include/asm-x86/hvm/vmx/vmx.h	2020-11-10 17:56:58.762730090 +0100
@@ -95,7 +95,7 @@ typedef enum {
 void vmx_asm_vmexit_handler(struct cpu_user_regs);
 void vmx_asm_do_vmentry(void);
 void vmx_intr_assist(void);
-void noreturn vmx_do_resume(struct vcpu *);
+void noreturn vmx_do_resume(void);
 void vmx_vlapic_msr_changed(struct vcpu *v);
 void vmx_realmode_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt);
 void vmx_realmode(struct cpu_user_regs *regs);



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 13:07:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 13:07:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54047.93572 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpA36-0002nk-Gi; Tue, 15 Dec 2020 13:07:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54047.93572; Tue, 15 Dec 2020 13:07:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpA36-0002nd-DX; Tue, 15 Dec 2020 13:07:28 +0000
Received: by outflank-mailman (input) for mailman id 54047;
 Tue, 15 Dec 2020 13:07:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpA36-0002nV-39; Tue, 15 Dec 2020 13:07:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpA35-0006kg-VE; Tue, 15 Dec 2020 13:07:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpA35-0000mG-Ku; Tue, 15 Dec 2020 13:07:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kpA35-0008Oa-KQ; Tue, 15 Dec 2020 13:07:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sHteU6LjM1FXOPVr9/wjsoqz8KT0wOIipWTcan40jVc=; b=jZteqDPW4p1t20z3lQkXir1p3O
	M9X7lO+V8pFoJgK9IwrwaPyCIMi+ppTQa6GFPo28lVCmc5tvYu6x1gE3WLmIF6A9Vj4mtlh83nwsh
	MeUblXeg9lkDJm06enwqDjhiqwRMC8Y8uHXCgBQOoX6aD5m/M4XAkUtE1V/IkcdfUIcc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157549-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157549: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=532f907b753ef99b47371629187d5f53ea3c83e2
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Dec 2020 13:07:27 +0000

flight 157549 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157549/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 532f907b753ef99b47371629187d5f53ea3c83e2
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    6 days
Failing since        157348  2020-12-09 15:39:39 Z    5 days   48 attempts
Testing same since   157549  2020-12-15 08:40:42 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Ard Biesheuvel <ard.biesheuvel@arm.com>
  Baraneedharan Anbazhagan <anbazhagan@hp.com>
  Baraneedharan Anbazhagan <anbazhgan@hp.com>
  Bret Barkelew <Bret.Barkelew@microsoft.com>
  Chen, Christine <Yuwei.Chen@intel.com>
  Fan Wang <fan.wang@intel.com>
  James Bottomley <jejb@linux.ibm.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Michael D Kinney <michael.d.kinney@intel.com>
  Michael Kubacki <michael.kubacki@microsoft.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Star Zeng <star.zeng@intel.com>
  Ting Ye <ting.ye@intel.com>
  Yuwei Chen <yuwei.chen@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 641 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 13:11:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 13:11:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54061.93587 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpA7I-0003nL-33; Tue, 15 Dec 2020 13:11:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54061.93587; Tue, 15 Dec 2020 13:11:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpA7H-0003nE-W7; Tue, 15 Dec 2020 13:11:47 +0000
Received: by outflank-mailman (input) for mailman id 54061;
 Tue, 15 Dec 2020 13:11:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kpA7H-0003n9-8A
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 13:11:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kpA7C-0006qH-R2; Tue, 15 Dec 2020 13:11:42 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kpA7C-0004a5-Hk; Tue, 15 Dec 2020 13:11:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=i6MKGm0InjprrS6tvbC8BguXeQwblFTvmDLB7gs1w5I=; b=X/a0hqmF/tEDUrf/E39Xotvrtl
	fR/iySN4mWyssZOooRWzd00te3IBjr+YEzry+qQQR1DjsThbdCVRHY/NTF+G6qirHksQUNaA+PadY
	U1hWmHR6uXgUa8jmrx940mgjk8o0alBhFaxhotvMwE+ftaSdHAERU7t50WigosHmwc80=;
Subject: Re: [PATCH] xen: Rework WARN_ON() to return whether a warning was
 triggered
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, Rahul.Singh@arm.com,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201215112610.1986-1-julien@xen.org>
 <c45407e5-3173-4f0d-453b-1a01969b667c@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <cbae7c17-829e-f48f-3a6a-7fee489711c2@xen.org>
Date: Tue, 15 Dec 2020 13:11:40 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <c45407e5-3173-4f0d-453b-1a01969b667c@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 15/12/2020 11:31, Jürgen Groß wrote:
> On 15.12.20 12:26, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> So far, our implementation of WARN_ON() cannot be used in the following
>> situation:
>>
>> if ( WARN_ON() )
>>      ...
>>
>> This is because the WARN_ON() doesn't return whether a warning. Such
> 
> ... warning has been triggered.

I will add it.

> 
>> construction can be handy to have if you have to print more information
>> and now the stack track.
> 
> Sorry, I'm not able to parse that sentence.

Urgh :/. How about the following commit message:

"So far, our implementation of WARN_ON() cannot be used in the following 
situation:

if ( WARN_ON() )
   ...

This is because WARN_ON() doesn't return whether a warning has been 
triggered. Such construction can be handy if you want to print more 
information and also dump the stack trace.

Therefore, rework the WARN_ON() implementation to return whether a 
warning was triggered. The idea was borrowed from Linux".

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 13:19:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 13:19:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54069.93599 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpAEt-00046V-0j; Tue, 15 Dec 2020 13:19:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54069.93599; Tue, 15 Dec 2020 13:19:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpAEs-00046O-T4; Tue, 15 Dec 2020 13:19:38 +0000
Received: by outflank-mailman (input) for mailman id 54069;
 Tue, 15 Dec 2020 13:19:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kpAEr-00046E-Em
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 13:19:37 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kpAEq-0006y0-7j; Tue, 15 Dec 2020 13:19:36 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kpAEp-0005BJ-Um; Tue, 15 Dec 2020 13:19:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=1qVcZQEO+1OxKDOOLU7DQI0+5rG1VyXBNOheLar7arA=; b=2F5uJt34K3RGtS2yTeogvWze+F
	IETk1Jjk2OX5zoOy1mQgdGSzilxnK1vGVGCGNhARMCmQMZRAX/UJ6PO25HQMPs1EcJjkVUG5lkwPO
	g7P93Kx27D1+k6SthAyDm3LnVO3n6pg7lymMM6NHw21mLKS147jUczp9X3lUAAGKLHdY=;
Subject: Re: [PATCH] xen: Rework WARN_ON() to return whether a warning was
 triggered
To: Jan Beulich <jbeulich@suse.com>
Cc: bertrand.marquis@arm.com, Rahul.Singh@arm.com,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20201215112610.1986-1-julien@xen.org>
 <c5ac88e6-4e06-553d-2996-d2b027acd782@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <04455739-f07f-3da8-f764-33600a9cab6f@xen.org>
Date: Tue, 15 Dec 2020 13:19:33 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <c5ac88e6-4e06-553d-2996-d2b027acd782@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 15/12/2020 11:46, Jan Beulich wrote:
> On 15.12.2020 12:26, Julien Grall wrote:
>> --- a/xen/include/xen/lib.h
>> +++ b/xen/include/xen/lib.h
>> @@ -23,7 +23,13 @@
>>   #include <asm/bug.h>
>>   
>>   #define BUG_ON(p)  do { if (unlikely(p)) BUG();  } while (0)
>> -#define WARN_ON(p) do { if (unlikely(p)) WARN(); } while (0)
>> +#define WARN_ON(p)  ({                  \
>> +    bool __ret_warn_on = (p);           \
> 
> Please can you avoid leading underscores here?

I can.

> 
>> +                                        \
>> +    if ( unlikely(__ret_warn_on) )      \
>> +        WARN();                         \
>> +    unlikely(__ret_warn_on);            \
>> +})
> 
> Is this latter unlikely() having any effect? So far I thought it
> would need to be immediately inside a control construct or be an
> operand to && or ||.

The unlikely() is directly taken from the Linux implementation.

My guess is the compiler is still able to use the information for the 
branch prediction in the case of:

if ( WARN_ON(...) )

Cheers,

> Jan
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 13:39:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 13:39:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54157.93850 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpAY5-0007Qc-H5; Tue, 15 Dec 2020 13:39:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54157.93850; Tue, 15 Dec 2020 13:39:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpAY5-0007QV-EA; Tue, 15 Dec 2020 13:39:29 +0000
Received: by outflank-mailman (input) for mailman id 54157;
 Tue, 15 Dec 2020 13:39:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kpAY3-0007QI-Lp
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 13:39:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kpAY2-0007P6-Nb; Tue, 15 Dec 2020 13:39:26 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kpAY2-0007b5-CS; Tue, 15 Dec 2020 13:39:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=1o+B0RdPAD21aM1NMCkg2a5CbhmpI/re8qs3MXAAbvY=; b=H9toXEz6Id3SyuvO7Lm6eDVa4e
	9WwzgeYpoqUcbnnekDDLyL6z+4HuENyHXrIX9RcCf9QYyH/G3LnW88rgvDGq/JvI4SuyZSMCM10+l
	/AGeO+BwQXMOUMCoCKkkb6TmO312ju5eJBPr88YnzMdHV+sbpQ+XK+rHCyaO9aonvVBg=;
Subject: Re: [PATCH v5 1/3] xen/arm: add support for
 run_in_exception_handler()
To: Jan Beulich <jbeulich@suse.com>, Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201215063319.23290-1-jgross@suse.com>
 <20201215063319.23290-2-jgross@suse.com>
 <94e85d88-b0f0-01f6-99e0-386326bc044a@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <2ffa6302-5368-61c6-8564-6d3f828e2163@xen.org>
Date: Tue, 15 Dec 2020 13:39:24 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <94e85d88-b0f0-01f6-99e0-386326bc044a@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 15/12/2020 09:02, Jan Beulich wrote:
> On 15.12.2020 07:33, Juergen Gross wrote:
>> --- a/xen/include/asm-arm/bug.h
>> +++ b/xen/include/asm-arm/bug.h
>> @@ -15,65 +15,62 @@
>>   
>>   struct bug_frame {
>>       signed int loc_disp;    /* Relative address to the bug address */
>> -    signed int file_disp;   /* Relative address to the filename */
>> +    signed int ptr_disp;    /* Relative address to the filename or function */
>>       signed int msg_disp;    /* Relative address to the predicate (for ASSERT) */
>>       uint16_t line;          /* Line number */
>>       uint32_t pad0:16;       /* Padding for 8-bytes align */
>>   };
>>   
>>   #define bug_loc(b) ((const void *)(b) + (b)->loc_disp)
>> -#define bug_file(b) ((const void *)(b) + (b)->file_disp);
>> +#define bug_ptr(b) ((const void *)(b) + (b)->ptr_disp);
>>   #define bug_line(b) ((b)->line)
>>   #define bug_msg(b) ((const char *)(b) + (b)->msg_disp)
>>   
>> -#define BUGFRAME_warn   0
>> -#define BUGFRAME_bug    1
>> -#define BUGFRAME_assert 2
>> +#define BUGFRAME_run_fn 0
>> +#define BUGFRAME_warn   1
>> +#define BUGFRAME_bug    2
>> +#define BUGFRAME_assert 3
>>   
>> -#define BUGFRAME_NR     3
>> +#define BUGFRAME_NR     4
>>   
>>   /* Many versions of GCC doesn't support the asm %c parameter which would
>>    * be preferable to this unpleasantness. We use mergeable string
>>    * sections to avoid multiple copies of the string appearing in the
>>    * Xen image.
>>    */
>> -#define BUG_FRAME(type, line, file, has_msg, msg) do {                      \
>> +#define BUG_FRAME(type, line, ptr, msg) do {                                \
>>       BUILD_BUG_ON((line) >> 16);                                             \
>>       BUILD_BUG_ON((type) >= BUGFRAME_NR);                                    \
>>       asm ("1:"BUG_INSTR"\n"                                                  \
>> -         ".pushsection .rodata.str, \"aMS\", %progbits, 1\n"                \
>> -         "2:\t.asciz " __stringify(file) "\n"                               \
>> -         "3:\n"                                                             \
>> -         ".if " #has_msg "\n"                                               \
>> -         "\t.asciz " #msg "\n"                                              \
>> -         ".endif\n"                                                         \
>> -         ".popsection\n"                                                    \
>> -         ".pushsection .bug_frames." __stringify(type) ", \"a\", %progbits\n"\
>> -         "4:\n"                                                             \
>> +         ".pushsection .bug_frames." __stringify(type) ", \"a\", %%progbits\n"\
>> +         "2:\n"                                                             \
>>            ".p2align 2\n"                                                     \
>> -         ".long (1b - 4b)\n"                                                \
>> -         ".long (2b - 4b)\n"                                                \
>> -         ".long (3b - 4b)\n"                                                \
>> +         ".long (1b - 2b)\n"                                                \
>> +         ".long (%0 - 2b)\n"                                                \
>> +         ".long (%1 - 2b)\n"                                                \
>>            ".hword " __stringify(line) ", 0\n"                                \
>> -         ".popsection");                                                    \
>> +         ".popsection" :: "i" (ptr), "i" (msg));                            \
>>   } while (0)
> 
> The comment ahead of the construct now looks to be at best stale, if
> not entirely pointless. The reference to %c looks quite strange here
> to me anyway - I can only guess it appeared here because on x86 one
> has to use %c to output constants as operands for .long and alike,
> and this was then tried to use on Arm as well without there really
> being a need.

Well, %c is one of the reasons why we can't have a common BUG_FRAME 
implementation. So I would like to retain this information before 
someone wants to consolidate the code and misses this issue.

Regarding the mergeable string section, I agree that the comment is now 
stale. However, could someone confirm that "i" will still retain the 
same behavior as using mergeable string sections?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 13:59:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 13:59:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54255.94190 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpArq-0002lm-7t; Tue, 15 Dec 2020 13:59:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54255.94190; Tue, 15 Dec 2020 13:59:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpArq-0002lf-4s; Tue, 15 Dec 2020 13:59:54 +0000
Received: by outflank-mailman (input) for mailman id 54255;
 Tue, 15 Dec 2020 13:59:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Vckb=FT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kpArp-0002la-Dv
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 13:59:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d8b385bf-66f2-45b6-8067-c2eda0b25ed4;
 Tue, 15 Dec 2020 13:59:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 75DE2AD4D;
 Tue, 15 Dec 2020 13:59:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d8b385bf-66f2-45b6-8067-c2eda0b25ed4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608040791; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Is3OGq1b6d264WC4SSSaFqG1DWJIMc+mXDDv+1Rw8R4=;
	b=IpluHQLsutV1UwLyQRGMkwdSNOZ6ekeKWXucfmz3zwOCNU2BIH6GNCOiJbKPC3SN0QDIhy
	JfRTAXIaiQnWl86GAjuXnqdMPAo1tNkl4/FB6L9W8MbfR+WTVZizvpLDpfKbxy9/3KDx15
	mMJggKL3oHY+wdblN+3PPEAwfFuhKic=
Subject: Re: [PATCH v5 1/3] xen/arm: add support for
 run_in_exception_handler()
To: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org,
 Juergen Gross <jgross@suse.com>
References: <20201215063319.23290-1-jgross@suse.com>
 <20201215063319.23290-2-jgross@suse.com>
 <94e85d88-b0f0-01f6-99e0-386326bc044a@suse.com>
 <2ffa6302-5368-61c6-8564-6d3f828e2163@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <26480338-56f4-6a61-e776-78727fce0910@suse.com>
Date: Tue, 15 Dec 2020 14:59:50 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <2ffa6302-5368-61c6-8564-6d3f828e2163@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 15.12.2020 14:39, Julien Grall wrote:
> On 15/12/2020 09:02, Jan Beulich wrote:
>> On 15.12.2020 07:33, Juergen Gross wrote:
>>> --- a/xen/include/asm-arm/bug.h
>>> +++ b/xen/include/asm-arm/bug.h
>>> @@ -15,65 +15,62 @@
>>>   
>>>   struct bug_frame {
>>>       signed int loc_disp;    /* Relative address to the bug address */
>>> -    signed int file_disp;   /* Relative address to the filename */
>>> +    signed int ptr_disp;    /* Relative address to the filename or function */
>>>       signed int msg_disp;    /* Relative address to the predicate (for ASSERT) */
>>>       uint16_t line;          /* Line number */
>>>       uint32_t pad0:16;       /* Padding for 8-bytes align */
>>>   };
>>>   
>>>   #define bug_loc(b) ((const void *)(b) + (b)->loc_disp)
>>> -#define bug_file(b) ((const void *)(b) + (b)->file_disp);
>>> +#define bug_ptr(b) ((const void *)(b) + (b)->ptr_disp);
>>>   #define bug_line(b) ((b)->line)
>>>   #define bug_msg(b) ((const char *)(b) + (b)->msg_disp)
>>>   
>>> -#define BUGFRAME_warn   0
>>> -#define BUGFRAME_bug    1
>>> -#define BUGFRAME_assert 2
>>> +#define BUGFRAME_run_fn 0
>>> +#define BUGFRAME_warn   1
>>> +#define BUGFRAME_bug    2
>>> +#define BUGFRAME_assert 3
>>>   
>>> -#define BUGFRAME_NR     3
>>> +#define BUGFRAME_NR     4
>>>   
>>>   /* Many versions of GCC doesn't support the asm %c parameter which would
>>>    * be preferable to this unpleasantness. We use mergeable string
>>>    * sections to avoid multiple copies of the string appearing in the
>>>    * Xen image.
>>>    */
>>> -#define BUG_FRAME(type, line, file, has_msg, msg) do {                      \
>>> +#define BUG_FRAME(type, line, ptr, msg) do {                                \
>>>       BUILD_BUG_ON((line) >> 16);                                             \
>>>       BUILD_BUG_ON((type) >= BUGFRAME_NR);                                    \
>>>       asm ("1:"BUG_INSTR"\n"                                                  \
>>> -         ".pushsection .rodata.str, \"aMS\", %progbits, 1\n"                \
>>> -         "2:\t.asciz " __stringify(file) "\n"                               \
>>> -         "3:\n"                                                             \
>>> -         ".if " #has_msg "\n"                                               \
>>> -         "\t.asciz " #msg "\n"                                              \
>>> -         ".endif\n"                                                         \
>>> -         ".popsection\n"                                                    \
>>> -         ".pushsection .bug_frames." __stringify(type) ", \"a\", %progbits\n"\
>>> -         "4:\n"                                                             \
>>> +         ".pushsection .bug_frames." __stringify(type) ", \"a\", %%progbits\n"\
>>> +         "2:\n"                                                             \
>>>            ".p2align 2\n"                                                     \
>>> -         ".long (1b - 4b)\n"                                                \
>>> -         ".long (2b - 4b)\n"                                                \
>>> -         ".long (3b - 4b)\n"                                                \
>>> +         ".long (1b - 2b)\n"                                                \
>>> +         ".long (%0 - 2b)\n"                                                \
>>> +         ".long (%1 - 2b)\n"                                                \
>>>            ".hword " __stringify(line) ", 0\n"                                \
>>> -         ".popsection");                                                    \
>>> +         ".popsection" :: "i" (ptr), "i" (msg));                            \
>>>   } while (0)
>>
>> The comment ahead of the construct now looks to be at best stale, if
>> not entirely pointless. The reference to %c looks quite strange here
>> to me anyway - I can only guess it appeared here because on x86 one
>> has to use %c to output constants as operands for .long and alike,
>> and this was then tried to use on Arm as well without there really
>> being a need.
> 
> Well, %c is one of the reasons why we can't have a common BUG_FRAME 
> implementation. So I would like to retain this information before 
> someone wants to consolidate the code and misses this issue.

Fair enough, albeit I guess this then could do with re-wording.

> Regarding the mergeable string section, I agree that the comment is now 
> stale. However, could someone confirm that "i" will still retain the 
> same behavior as using mergeable string sections?

That depends on compiler settings / behavior.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 14:01:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 14:01:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54260.94202 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpAtV-0003eB-K2; Tue, 15 Dec 2020 14:01:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54260.94202; Tue, 15 Dec 2020 14:01:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpAtV-0003e4-G6; Tue, 15 Dec 2020 14:01:37 +0000
Received: by outflank-mailman (input) for mailman id 54260;
 Tue, 15 Dec 2020 14:01:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Vckb=FT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kpAtU-0003dy-Fd
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 14:01:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c996c292-eabe-4c6a-908f-60b92659fa69;
 Tue, 15 Dec 2020 14:01:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EF882AD4D;
 Tue, 15 Dec 2020 14:01:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c996c292-eabe-4c6a-908f-60b92659fa69
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608040895; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6f7Cdl3fHbGV7Hhheerja6iwn1EKnzaN/GRV/WuG0AA=;
	b=gTK+Tm8Bys3AgpUs6W82kbbIimmmy/7jeAxicdwiXOQYmnBPHLDI4z3+UwS2JkG3D7KHTE
	oSHXckJzf/mPKZVXCT82cMJ1UYAzIo+5zUXlPMO8HIUZmhjL56XWMD8/dsQ+Vgj/FEr+mW
	qg4UmgbRuWk3mTiGveoDAHMRItgiKA8=
Subject: Re: [PATCH] xen: Rework WARN_ON() to return whether a warning was
 triggered
To: Julien Grall <julien@xen.org>
Cc: bertrand.marquis@arm.com, Rahul.Singh@arm.com,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20201215112610.1986-1-julien@xen.org>
 <c5ac88e6-4e06-553d-2996-d2b027acd782@suse.com>
 <04455739-f07f-3da8-f764-33600a9cab6f@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3f165cf8-88a4-590a-6e86-2435e8a7e554@suse.com>
Date: Tue, 15 Dec 2020 15:01:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <04455739-f07f-3da8-f764-33600a9cab6f@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 15.12.2020 14:19, Julien Grall wrote:
> On 15/12/2020 11:46, Jan Beulich wrote:
>> On 15.12.2020 12:26, Julien Grall wrote:
>>> --- a/xen/include/xen/lib.h
>>> +++ b/xen/include/xen/lib.h
>>> @@ -23,7 +23,13 @@
>>>   #include <asm/bug.h>
>>>   
>>>   #define BUG_ON(p)  do { if (unlikely(p)) BUG();  } while (0)
>>> -#define WARN_ON(p) do { if (unlikely(p)) WARN(); } while (0)
>>> +#define WARN_ON(p)  ({                  \
>>> +    bool __ret_warn_on = (p);           \
>>
>> Please can you avoid leading underscores here?
> 
> I can.
> 
>>
>>> +                                        \
>>> +    if ( unlikely(__ret_warn_on) )      \
>>> +        WARN();                         \
>>> +    unlikely(__ret_warn_on);            \
>>> +})
>>
>> Is this latter unlikely() having any effect? So far I thought it
>> would need to be immediately inside a control construct or be an
>> operand to && or ||.
> 
> The unlikely() is directly taken from the Linux implementation.
> 
> My guess is the compiler is still able to use the information for the 
> branch prediction in the case of:
> 
> if ( WARN_ON(...) )

Maybe. Or maybe not. I don't suppose the Linux commit introducing
it clarifies this?
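
For reference, a userspace sketch of the construct under discussion (names
illustrative; warn_count stands in for the real WARN() machinery, and
leading underscores are avoided per the review comment above): a GNU C
statement expression that fires the warning and also yields the condition,
so callers can write `if ( WARN_ON(cond) ) return;`.

```c
#include <assert.h>
#include <stdbool.h>

#define unlikely(x) __builtin_expect(!!(x), 0)

static int warn_count;
#define WARN() ((void)warn_count++)   /* stand-in for the real WARN() */

#define WARN_ON(p)  ({                  \
    bool ret_warn_on = (p);             \
                                        \
    if ( unlikely(ret_warn_on) )        \
        WARN();                         \
    unlikely(ret_warn_on);              \
})
```

The trailing unlikely() makes the macro's value carry the hint into the
caller's `if` when the compiler chooses to use it, which is presumably
what the Linux original intended.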

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 14:19:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 14:19:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54265.94213 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpBAl-0004ls-8A; Tue, 15 Dec 2020 14:19:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54265.94213; Tue, 15 Dec 2020 14:19:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpBAl-0004ll-5C; Tue, 15 Dec 2020 14:19:27 +0000
Received: by outflank-mailman (input) for mailman id 54265;
 Tue, 15 Dec 2020 14:19:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iDkq=FT=infradead.org=peterz@srs-us1.protection.inumbo.net>)
 id 1kpBAj-0004lN-5p
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 14:19:25 +0000
Received: from merlin.infradead.org (unknown [2001:8b0:10b:1231::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id db42cfca-a869-4ccb-949f-2eafe5a9e9ec;
 Tue, 15 Dec 2020 14:19:20 +0000 (UTC)
Received: from j217100.upc-j.chello.nl ([24.132.217.100]
 helo=noisy.programming.kicks-ass.net)
 by merlin.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kpB9x-0007Qz-Vq; Tue, 15 Dec 2020 14:18:38 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id C214A300446;
 Tue, 15 Dec 2020 15:18:34 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id A8CE3200CFB30; Tue, 15 Dec 2020 15:18:34 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: db42cfca-a869-4ccb-949f-2eafe5a9e9ec
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=merlin.20170209; h=In-Reply-To:Content-Transfer-Encoding:
	Content-Type:MIME-Version:References:Message-ID:Subject:Cc:To:From:Date:
	Sender:Reply-To:Content-ID:Content-Description;
	bh=n3MIZt3/znb4uRxmCaPyGVKsSFg4njtwCHLsmpqHHtc=; b=j5aJJbiLj/qdgQPvO1H9cZO69S
	cghj1tVt1MmeFUvrTyb7sI8SjpJMY/zf9c3AF09ZpaS3kpNRMq7Qcza+a+4SVPftUg92+de3j06ij
	TpvnEmE1F1M9N0xpaIyBwLxOwyzPMjM1Fuu/184ceVPJZHJ8yc8/CxeiHpKLfHNVY2JdoHHrZrv/6
	52/R56RdCA7sZvOvf5pukk87Nrmwsfm/qV5Ymb9hkIGZ805z6H/EQBg+eqwMVq1zBU/Ekh8ZN0Wyq
	bkHOj3MQ8RLBs8mNaGQ8LQey8vYZ+0uPL6pOynf1wFmF5Sb2xwez3kwxSkQndlTWxEbDpMaIXx+2n
	SfEy17IQ==;
Date: Tue, 15 Dec 2020 15:18:34 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-hyperv@vger.kernel.org, kvm@vger.kernel.org, luto@kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>, Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <sean.j.christopherson@intel.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	Daniel Lezcano <daniel.lezcano@linaro.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	Josh Poimboeuf <jpoimboe@redhat.com>
Subject: Re: [PATCH v2 00/12] x86: major paravirt cleanup
Message-ID: <20201215141834.GG3040@hirez.programming.kicks-ass.net>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120125342.GC3040@hirez.programming.kicks-ass.net>
 <20201123134317.GE3092@hirez.programming.kicks-ass.net>
 <6771a12c-051d-1655-fb3a-cc45a3c82e29@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
In-Reply-To: <6771a12c-051d-1655-fb3a-cc45a3c82e29@suse.com>

On Tue, Dec 15, 2020 at 12:42:45PM +0100, Jürgen Groß wrote:
> Peter,
> 
> On 23.11.20 14:43, Peter Zijlstra wrote:
> > On Fri, Nov 20, 2020 at 01:53:42PM +0100, Peter Zijlstra wrote:
> > > On Fri, Nov 20, 2020 at 12:46:18PM +0100, Juergen Gross wrote:
> > > >   30 files changed, 325 insertions(+), 598 deletions(-)
> > > 
> > > Much awesome ! I'll try and get that objtool thing sorted.
> > 
> > This seems to work for me. It isn't 100% accurate, because it doesn't
> > know about the direct call instruction, but I can either fudge that or
> > switching to static_call() will cure that.
> > 
> > It's not exactly pretty, but it should be straight forward.
> 
> Are you planning to send this out as an "official" patch, or should I
> include it in my series (in this case I'd need a variant with a proper
> commit message)?

Ah, I was waiting for Josh to have an opinion (and then sorta forgot
about the whole thing again). Let me refresh and provide at least a
Changelog.


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 14:24:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 14:24:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54282.94242 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpBFh-0005ih-29; Tue, 15 Dec 2020 14:24:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54282.94242; Tue, 15 Dec 2020 14:24:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpBFg-0005ia-V8; Tue, 15 Dec 2020 14:24:32 +0000
Received: by outflank-mailman (input) for mailman id 54282;
 Tue, 15 Dec 2020 14:24:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AeOo=FT=redhat.com=imammedo@srs-us1.protection.inumbo.net>)
 id 1kpBFg-0005iV-G8
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 14:24:32 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id e633bffd-4d2e-4982-b50c-96d9494dc207;
 Tue, 15 Dec 2020 14:24:30 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-463-FIzfC_48MYiPhrV1QHKixw-1; Tue, 15 Dec 2020 09:24:28 -0500
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com
 [10.5.11.11])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 6D39810054FF;
 Tue, 15 Dec 2020 14:24:26 +0000 (UTC)
Received: from localhost (ovpn-113-127.ams2.redhat.com [10.36.113.127])
 by smtp.corp.redhat.com (Postfix) with ESMTP id A633A179B3;
 Tue, 15 Dec 2020 14:24:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e633bffd-4d2e-4982-b50c-96d9494dc207
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1608042270;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=z+cQ0iEhkWfsQjRdH/t6R2th3kreynHohts1awCUh9o=;
	b=LllXFXE7rrTc8cS3G7F6KkdNKMl58YSExNmZTQznDQuVDErKoK8CkArKgggYCDljAhR1eR
	ikdmBAyC5iawQ7KM4wq+3JYLbPsKenNuKClgX7+MgiLe8UssolBQ9PCk+eZbwyhNZQbK1K
	GLvh/7XDFeSl1kF6UEZc2I1zEx6ATyY=
X-MC-Unique: FIzfC_48MYiPhrV1QHKixw-1
Date: Tue, 15 Dec 2020 15:24:18 +0100
From: Igor Mammedov <imammedo@redhat.com>
To: Eduardo Habkost <ehabkost@redhat.com>
Cc: qemu-devel@nongnu.org, Markus Armbruster <armbru@redhat.com>, Stefan
 Berger <stefanb@linux.ibm.com>, =?UTF-8?B?TWFyYy1BbmRyw6k=?= Lureau
 <marcandre.lureau@redhat.com>, "Daniel P. Berrange" <berrange@redhat.com>,
 Philippe =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?= <philmd@redhat.com>, John Snow
 <jsnow@redhat.com>, Kevin Wolf <kwolf@redhat.com>, Eric Blake
 <eblake@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>, Stefan Berger
 <stefanb@linux.vnet.ibm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 Max Reitz <mreitz@redhat.com>, Cornelia Huck <cohuck@redhat.com>, Halil
 Pasic <pasic@linux.ibm.com>, Christian Borntraeger
 <borntraeger@de.ibm.com>, Richard Henderson <rth@twiddle.net>, David
 Hildenbrand <david@redhat.com>, Thomas Huth <thuth@redhat.com>, Matthew
 Rosato <mjrosato@linux.ibm.com>, Alex Williamson
 <alex.williamson@redhat.com>, Mark Cave-Ayland
 <mark.cave-ayland@ilande.co.uk>, Artyom Tarasenko <atar4qemu@gmail.com>,
 xen-devel@lists.xenproject.org, qemu-block@nongnu.org,
 qemu-s390x@nongnu.org
Subject: Re: [PATCH v4 23/32] qdev: Move dev->realized check to
 qdev_property_set()
Message-ID: <20201215152418.0e8dff6b@redhat.com>
In-Reply-To: <20201214172418.GK1289986@habkost.net>
References: <20201211220529.2290218-1-ehabkost@redhat.com>
	<20201211220529.2290218-24-ehabkost@redhat.com>
	<20201214155530.55f80cd6@redhat.com>
	<20201214172418.GK1289986@habkost.net>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=imammedo@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Mon, 14 Dec 2020 12:24:18 -0500
Eduardo Habkost <ehabkost@redhat.com> wrote:

> On Mon, Dec 14, 2020 at 03:55:30PM +0100, Igor Mammedov wrote:
> > On Fri, 11 Dec 2020 17:05:20 -0500
> > Eduardo Habkost <ehabkost@redhat.com> wrote:
> >   
> > > Every single qdev property setter function manually checks
> > > dev->realized.  We can just check dev->realized inside
> > > qdev_property_set() instead.
> > > 
> > > The check is being added as a separate function
> > > (qdev_prop_allow_set()) because it will become a callback later.  
> > 
> > is the callback added within this series?
> > and I'd add here what the purpose of it is.  
> 
> It will be added in part 2 of the series.  See v3:
> https://lore.kernel.org/qemu-devel/20201112214350.872250-35-ehabkost@redhat.com/
> 
> I don't know what else I could say about its purpose, in addition
> to what I wrote above, and the comment below[1].
> 
> If you are just curious about the callback and confused because
> it is not anywhere in this series, I can just remove the
> paragraph above from the commit message.  Would that be enough?
Let's remove it for now to avoid confusion.

Reviewed-by: Igor Mammedov <imammedo@redhat.com>

> 
> >   
> [...]
> > > +/* returns: true if property is allowed to be set, false otherwise */  
> 
> [1] ^^^
> 
> > > +static bool qdev_prop_allow_set(Object *obj, const char *name,
> > > +                                Error **errp)
> > > +{
> > > +    DeviceState *dev = DEVICE(obj);
> > > +
> > > +    if (dev->realized) {
> > > +        qdev_prop_set_after_realize(dev, name, errp);
> > > +        return false;
> > > +    }
> > > +    return true;
> > > +}
> > > +  
> 



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 14:54:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 14:54:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54298.94257 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpBiv-000094-JZ; Tue, 15 Dec 2020 14:54:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54298.94257; Tue, 15 Dec 2020 14:54:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpBiv-00008x-GW; Tue, 15 Dec 2020 14:54:45 +0000
Received: by outflank-mailman (input) for mailman id 54298;
 Tue, 15 Dec 2020 14:54:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iDkq=FT=infradead.org=peterz@srs-us1.protection.inumbo.net>)
 id 1kpBit-00008s-Q4
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 14:54:44 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 955b7e7b-d03c-4398-9d15-ab139100852f;
 Tue, 15 Dec 2020 14:54:39 +0000 (UTC)
Received: from j217100.upc-j.chello.nl ([24.132.217.100]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kpBiO-00050k-Al; Tue, 15 Dec 2020 14:54:12 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id DF3A93059DD;
 Tue, 15 Dec 2020 15:54:08 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id C0B352C7DB779; Tue, 15 Dec 2020 15:54:08 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 955b7e7b-d03c-4398-9d15-ab139100852f
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=i7/Dmdf/9/KX5OWMc6gqbQ/znmDaWWCIUvdqdmOq2yg=; b=poBbcvBpz4LX1hoYM4wbTihWDX
	uOy0rPE2W9evHPNElzxZmHkhNfRbIlBLmxgHkpUY8gE0iDzhzDgUAouRD4qerv1VQ37nUxBOY/QS+
	iSEtiFHDyHBWEO/MHXmljfdRTMcm7xSieGyq2r/smem57sLPSUzdCkAII0fwBLX5d2KUgEFSbAI/X
	Oc1HurNdTe8wrQjE8tSrWRk815zD0WPgBYvHJRJ6PLlHQdZlcNcBDhHfCTaphqHwiaemBhk9VEqyt
	V5Jg2P6IdKLXSx2xJlOtzRE0tRdV074jsVCOx2Qr/NIRSOXmR5R7p1XsDdC3EE0yMxgE8V43+75Co
	ZNKvsQUQ==;
Date: Tue, 15 Dec 2020 15:54:08 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-hyperv@vger.kernel.org, kvm@vger.kernel.org, luto@kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>, Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <sean.j.christopherson@intel.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	Daniel Lezcano <daniel.lezcano@linaro.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	Josh Poimboeuf <jpoimboe@redhat.com>
Subject: Re: [PATCH v2 00/12] x86: major paravirt cleanup
Message-ID: <20201215145408.GR3092@hirez.programming.kicks-ass.net>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120125342.GC3040@hirez.programming.kicks-ass.net>
 <20201123134317.GE3092@hirez.programming.kicks-ass.net>
 <6771a12c-051d-1655-fb3a-cc45a3c82e29@suse.com>
 <20201215141834.GG3040@hirez.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201215141834.GG3040@hirez.programming.kicks-ass.net>

On Tue, Dec 15, 2020 at 03:18:34PM +0100, Peter Zijlstra wrote:
> Ah, I was waiting for Josh to have an opinion (and then sorta forgot
> about the whole thing again). Let me refresh and provide at least a
> Changelog.

How's this then?

---
Subject: objtool: Alternatives vs ORC, the hard way
From: Peter Zijlstra <peterz@infradead.org>
Date: Mon, 23 Nov 2020 14:43:17 +0100

Alternatives pose an interesting problem for unwinders because, from
the unwinder's PoV, we're just executing instructions; the unwinder has
no idea the text is modified, nor any way of retrieving what it was
modified with.

Therefore the stance has been that alternatives must not change stack
state, as encoded by commit: 7117f16bf460 ("objtool: Fix ORC vs
alternatives"). This obviously guarantees that whatever actual
instructions end up in the text, the unwind information is correct.

However, there is one additional source of text patching that isn't
currently visible to objtool: paravirt immediate patching. And it
turns out one of these violates the rule.

As part of cleaning that up, the unfortunate reality is that objtool
now has to deal with alternatives modifying unwind state and validate
the combination is valid and generate ORC data to match.

The problem is that a single instance of unwind information (ORC) must
capture and correctly unwind all alternatives. Since the trivially
correct mandate is out, implement the straightforward brute-force
approach:

 1) generate CFI information for each alternative

 2) unwind every alternative with the merge-sort of the previously
    generated CFI information -- O(n^2)

 3) for any possible conflict: yell.

 4) Generate ORC with merge-sort

Specifically for 3 there are two possible classes of conflicts:

 - the merge-sort itself could find conflicting CFI for the same
   offset.

 - the unwind can fail with the merged CFI.

In particular, this allows us to deal with:

	Alt1			Alt2			Alt3

 0x00	CALL *pv_ops.save_fl	CALL xen_save_fl	PUSHF
 0x01							POP %RAX
 0x02							NOP
 ...
 0x05				NOP
 ...
 0x07   <insn>

The unwind information for offset 0x00 is identical for all 3
alternatives. Similarly, offset 0x05 and higher are also identical
(and the same as 0x00). However, offset 0x01 has deviating CFI, but
that is only relevant for Alt3; neither of the other alternative
instruction streams will ever hit that offset.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 tools/objtool/check.c   |  180 ++++++++++++++++++++++++++++++++++++++++++++----
 tools/objtool/check.h   |    5 +
 tools/objtool/orc_gen.c |  180 +++++++++++++++++++++++++++++++-----------------
 3 files changed, 290 insertions(+), 75 deletions(-)

--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -1096,6 +1096,32 @@ static int handle_group_alt(struct objto
 		return -1;
 	}
 
+	/*
+	 * Add the filler NOP, required for alternative CFI.
+	 */
+	if (special_alt->group && special_alt->new_len < special_alt->orig_len) {
+		struct instruction *nop = malloc(sizeof(*nop));
+		if (!nop) {
+			WARN("malloc failed");
+			return -1;
+		}
+		memset(nop, 0, sizeof(*nop));
+		INIT_LIST_HEAD(&nop->alts);
+		INIT_LIST_HEAD(&nop->stack_ops);
+		init_cfi_state(&nop->cfi);
+
+		nop->sec = last_new_insn->sec;
+		nop->ignore = last_new_insn->ignore;
+		nop->func = last_new_insn->func;
+		nop->alt_group = alt_group;
+		nop->offset = last_new_insn->offset + last_new_insn->len;
+		nop->type = INSN_NOP;
+		nop->len = special_alt->orig_len - special_alt->new_len;
+
+		list_add(&nop->list, &last_new_insn->list);
+		last_new_insn = nop;
+	}
+
 	if (fake_jump)
 		list_add(&fake_jump->list, &last_new_insn->list);
 
@@ -2237,18 +2263,12 @@ static int handle_insn_ops(struct instru
 	struct stack_op *op;
 
 	list_for_each_entry(op, &insn->stack_ops, list) {
-		struct cfi_state old_cfi = state->cfi;
 		int res;
 
 		res = update_cfi_state(insn, &state->cfi, op);
 		if (res)
 			return res;
 
-		if (insn->alt_group && memcmp(&state->cfi, &old_cfi, sizeof(struct cfi_state))) {
-			WARN_FUNC("alternative modifies stack", insn->sec, insn->offset);
-			return -1;
-		}
-
 		if (op->dest.type == OP_DEST_PUSHF) {
 			if (!state->uaccess_stack) {
 				state->uaccess_stack = 1;
@@ -2460,19 +2480,137 @@ static int validate_return(struct symbol
  * unreported (because they're NOPs), such holes would result in CFI_UNDEFINED
  * states which then results in ORC entries, which we just said we didn't want.
  *
- * Avoid them by copying the CFI entry of the first instruction into the whole
- * alternative.
+ * Avoid them by copying the CFI entry of the first instruction into the hole.
  */
-static void fill_alternative_cfi(struct objtool_file *file, struct instruction *insn)
+static void __fill_alt_cfi(struct objtool_file *file, struct instruction *insn)
 {
 	struct instruction *first_insn = insn;
 	int alt_group = insn->alt_group;
 
-	sec_for_each_insn_continue(file, insn) {
+	sec_for_each_insn_from(file, insn) {
 		if (insn->alt_group != alt_group)
 			break;
-		insn->cfi = first_insn->cfi;
+
+		if (!insn->visited)
+			insn->cfi = first_insn->cfi;
+	}
+}
+
+static void fill_alt_cfi(struct objtool_file *file, struct instruction *alt_insn)
+{
+	struct alternative *alt;
+
+	__fill_alt_cfi(file, alt_insn);
+
+	list_for_each_entry(alt, &alt_insn->alts, list)
+		__fill_alt_cfi(file, alt->insn);
+}
+
+static struct instruction *
+__find_unwind(struct objtool_file *file,
+	      struct instruction *insn, unsigned long offset)
+{
+	int alt_group = insn->alt_group;
+	struct instruction *next;
+	unsigned long off = 0;
+
+	while ((off + insn->len) <= offset) {
+		next = next_insn_same_sec(file, insn);
+		if (next && next->alt_group != alt_group)
+			next = NULL;
+
+		if (!next)
+			break;
+
+		off += insn->len;
+		insn = next;
 	}
+
+	return insn;
+}
+
+struct instruction *
+find_alt_unwind(struct objtool_file *file,
+		struct instruction *alt_insn, unsigned long offset)
+{
+	struct instruction *fit;
+	struct alternative *alt;
+	unsigned long fit_off;
+
+	fit = __find_unwind(file, alt_insn, offset);
+	fit_off = (fit->offset - alt_insn->offset);
+
+	list_for_each_entry(alt, &alt_insn->alts, list) {
+		struct instruction *x;
+		unsigned long x_off;
+
+		x = __find_unwind(file, alt->insn, offset);
+		x_off = (x->offset - alt->insn->offset);
+
+		if (fit_off < x_off) {
+			fit = x;
+			fit_off = x_off;
+
+		} else if (fit_off == x_off &&
+			   memcmp(&fit->cfi, &x->cfi, sizeof(struct cfi_state))) {
+
+			char *_str1 = offstr(fit->sec, fit->offset);
+			char *_str2 = offstr(x->sec, x->offset);
+			WARN("%s: equal-offset incompatible alternative: %s\n", _str1, _str2);
+			free(_str1);
+			free(_str2);
+			return fit;
+		}
+	}
+
+	return fit;
+}
+
+static int __validate_unwind(struct objtool_file *file,
+			     struct instruction *alt_insn,
+			     struct instruction *insn)
+{
+	int alt_group = insn->alt_group;
+	struct instruction *unwind;
+	unsigned long offset = 0;
+
+	sec_for_each_insn_from(file, insn) {
+		if (insn->alt_group != alt_group)
+			break;
+
+		unwind = find_alt_unwind(file, alt_insn, offset);
+
+		if (memcmp(&insn->cfi, &unwind->cfi, sizeof(struct cfi_state))) {
+
+			char *_str1 = offstr(insn->sec, insn->offset);
+			char *_str2 = offstr(unwind->sec, unwind->offset);
+			WARN("%s: unwind incompatible alternative: %s (%ld)\n",
+			     _str1, _str2, offset);
+			free(_str1);
+			free(_str2);
+			return 1;
+		}
+
+		offset += insn->len;
+	}
+
+	return 0;
+}
+
+static int validate_alt_unwind(struct objtool_file *file,
+			       struct instruction *alt_insn)
+{
+	struct alternative *alt;
+
+	if (__validate_unwind(file, alt_insn, alt_insn))
+		return 1;
+
+	list_for_each_entry(alt, &alt_insn->alts, list) {
+		if (__validate_unwind(file, alt_insn, alt->insn))
+			return 1;
+	}
+
+	return 0;
 }
 
 /*
@@ -2484,9 +2622,10 @@ static void fill_alternative_cfi(struct
 static int validate_branch(struct objtool_file *file, struct symbol *func,
 			   struct instruction *insn, struct insn_state state)
 {
+	struct instruction *next_insn, *alt_insn = NULL;
 	struct alternative *alt;
-	struct instruction *next_insn;
 	struct section *sec;
+	int alt_group = 0;
 	u8 visited;
 	int ret;
 
@@ -2541,8 +2680,10 @@ static int validate_branch(struct objtoo
 				}
 			}
 
-			if (insn->alt_group)
-				fill_alternative_cfi(file, insn);
+			if (insn->alt_group) {
+				alt_insn = insn;
+				alt_group = insn->alt_group;
+			}
 
 			if (skip_orig)
 				return 0;
@@ -2697,6 +2838,17 @@ static int validate_branch(struct objtoo
 		}
 
 		insn = next_insn;
+
+		if (alt_insn && insn->alt_group != alt_group) {
+			alt_insn->alt_end = insn;
+
+			fill_alt_cfi(file, alt_insn);
+
+			if (validate_alt_unwind(file, alt_insn))
+				return 1;
+
+			alt_insn = NULL;
+		}
 	}
 
 	return 0;
--- a/tools/objtool/check.h
+++ b/tools/objtool/check.h
@@ -41,6 +41,7 @@ struct instruction {
 	struct instruction *first_jump_src;
 	struct reloc *jump_table;
 	struct list_head alts;
+	struct instruction *alt_end;
 	struct symbol *func;
 	struct list_head stack_ops;
 	struct cfi_state cfi;
@@ -55,6 +56,10 @@ static inline bool is_static_jump(struct
 	       insn->type == INSN_JUMP_UNCONDITIONAL;
 }
 
+struct instruction *
+find_alt_unwind(struct objtool_file *file,
+		struct instruction *alt_insn, unsigned long offset);
+
 struct instruction *find_insn(struct objtool_file *file,
 			      struct section *sec, unsigned long offset);
 
--- a/tools/objtool/orc_gen.c
+++ b/tools/objtool/orc_gen.c
@@ -12,75 +12,86 @@
 #include "check.h"
 #include "warn.h"
 
-int create_orc(struct objtool_file *file)
+static int create_orc_insn(struct objtool_file *file, struct instruction *insn)
 {
-	struct instruction *insn;
+	struct orc_entry *orc = &insn->orc;
+	struct cfi_reg *cfa = &insn->cfi.cfa;
+	struct cfi_reg *bp = &insn->cfi.regs[CFI_BP];
+
+	orc->end = insn->cfi.end;
+
+	if (cfa->base == CFI_UNDEFINED) {
+		orc->sp_reg = ORC_REG_UNDEFINED;
+		return 0;
+	}
 
-	for_each_insn(file, insn) {
-		struct orc_entry *orc = &insn->orc;
-		struct cfi_reg *cfa = &insn->cfi.cfa;
-		struct cfi_reg *bp = &insn->cfi.regs[CFI_BP];
+	switch (cfa->base) {
+	case CFI_SP:
+		orc->sp_reg = ORC_REG_SP;
+		break;
+	case CFI_SP_INDIRECT:
+		orc->sp_reg = ORC_REG_SP_INDIRECT;
+		break;
+	case CFI_BP:
+		orc->sp_reg = ORC_REG_BP;
+		break;
+	case CFI_BP_INDIRECT:
+		orc->sp_reg = ORC_REG_BP_INDIRECT;
+		break;
+	case CFI_R10:
+		orc->sp_reg = ORC_REG_R10;
+		break;
+	case CFI_R13:
+		orc->sp_reg = ORC_REG_R13;
+		break;
+	case CFI_DI:
+		orc->sp_reg = ORC_REG_DI;
+		break;
+	case CFI_DX:
+		orc->sp_reg = ORC_REG_DX;
+		break;
+	default:
+		WARN_FUNC("unknown CFA base reg %d",
+			  insn->sec, insn->offset, cfa->base);
+		return -1;
+	}
 
-		if (!insn->sec->text)
-			continue;
+	switch(bp->base) {
+	case CFI_UNDEFINED:
+		orc->bp_reg = ORC_REG_UNDEFINED;
+		break;
+	case CFI_CFA:
+		orc->bp_reg = ORC_REG_PREV_SP;
+		break;
+	case CFI_BP:
+		orc->bp_reg = ORC_REG_BP;
+		break;
+	default:
+		WARN_FUNC("unknown BP base reg %d",
+			  insn->sec, insn->offset, bp->base);
+		return -1;
+	}
 
-		orc->end = insn->cfi.end;
+	orc->sp_offset = cfa->offset;
+	orc->bp_offset = bp->offset;
+	orc->type = insn->cfi.type;
 
-		if (cfa->base == CFI_UNDEFINED) {
-			orc->sp_reg = ORC_REG_UNDEFINED;
-			continue;
-		}
+	return 0;
+}
 
-		switch (cfa->base) {
-		case CFI_SP:
-			orc->sp_reg = ORC_REG_SP;
-			break;
-		case CFI_SP_INDIRECT:
-			orc->sp_reg = ORC_REG_SP_INDIRECT;
-			break;
-		case CFI_BP:
-			orc->sp_reg = ORC_REG_BP;
-			break;
-		case CFI_BP_INDIRECT:
-			orc->sp_reg = ORC_REG_BP_INDIRECT;
-			break;
-		case CFI_R10:
-			orc->sp_reg = ORC_REG_R10;
-			break;
-		case CFI_R13:
-			orc->sp_reg = ORC_REG_R13;
-			break;
-		case CFI_DI:
-			orc->sp_reg = ORC_REG_DI;
-			break;
-		case CFI_DX:
-			orc->sp_reg = ORC_REG_DX;
-			break;
-		default:
-			WARN_FUNC("unknown CFA base reg %d",
-				  insn->sec, insn->offset, cfa->base);
-			return -1;
-		}
+int create_orc(struct objtool_file *file)
+{
+	struct instruction *insn;
 
-		switch(bp->base) {
-		case CFI_UNDEFINED:
-			orc->bp_reg = ORC_REG_UNDEFINED;
-			break;
-		case CFI_CFA:
-			orc->bp_reg = ORC_REG_PREV_SP;
-			break;
-		case CFI_BP:
-			orc->bp_reg = ORC_REG_BP;
-			break;
-		default:
-			WARN_FUNC("unknown BP base reg %d",
-				  insn->sec, insn->offset, bp->base);
-			return -1;
-		}
+	for_each_insn(file, insn) {
+		int ret;
+
+		if (!insn->sec->text)
+			continue;
 
-		orc->sp_offset = cfa->offset;
-		orc->bp_offset = bp->offset;
-		orc->type = insn->cfi.type;
+		ret = create_orc_insn(file, insn);
+		if (ret)
+			return ret;
 	}
 
 	return 0;
@@ -166,6 +177,28 @@ int create_orc_sections(struct objtool_f
 
 		prev_insn = NULL;
 		sec_for_each_insn(file, sec, insn) {
+
+			if (insn->alt_end) {
+				unsigned int offset, alt_len;
+				struct instruction *unwind;
+
+				alt_len = insn->alt_end->offset - insn->offset;
+				for (offset = 0; offset < alt_len; offset++) {
+					unwind = find_alt_unwind(file, insn, offset);
+					/* XXX: skipped earlier ! */
+					create_orc_insn(file, unwind);
+					if (!prev_insn ||
+					    memcmp(&unwind->orc, &prev_insn->orc,
+						   sizeof(struct orc_entry))) {
+						idx++;
+//						WARN_FUNC("ORC @ %d/%d", sec, insn->offset+offset, offset, alt_len);
+					}
+					prev_insn = unwind;
+				}
+
+				insn = insn->alt_end;
+			}
+
 			if (!prev_insn ||
 			    memcmp(&insn->orc, &prev_insn->orc,
 				   sizeof(struct orc_entry))) {
@@ -203,6 +236,31 @@ int create_orc_sections(struct objtool_f
 
 		prev_insn = NULL;
 		sec_for_each_insn(file, sec, insn) {
+
+			if (insn->alt_end) {
+				unsigned int offset, alt_len;
+				struct instruction *unwind;
+
+				alt_len = insn->alt_end->offset - insn->offset;
+				for (offset = 0; offset < alt_len; offset++) {
+					unwind = find_alt_unwind(file, insn, offset);
+					if (!prev_insn ||
+					    memcmp(&unwind->orc, &prev_insn->orc,
+						   sizeof(struct orc_entry))) {
+
+						if (create_orc_entry(file->elf, u_sec, ip_relocsec, idx,
+								     insn->sec, insn->offset + offset,
+								     &unwind->orc))
+							return -1;
+
+						idx++;
+					}
+					prev_insn = unwind;
+				}
+
+				insn = insn->alt_end;
+			}
+
 			if (!prev_insn || memcmp(&insn->orc, &prev_insn->orc,
 						 sizeof(struct orc_entry))) {
 


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 15:02:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 15:02:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54309.94274 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpBq0-0001EN-EE; Tue, 15 Dec 2020 15:02:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54309.94274; Tue, 15 Dec 2020 15:02:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpBq0-0001EG-BA; Tue, 15 Dec 2020 15:02:04 +0000
Received: by outflank-mailman (input) for mailman id 54309;
 Tue, 15 Dec 2020 15:02:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sb05=FT=redhat.com=kwolf@srs-us1.protection.inumbo.net>)
 id 1kpBpy-0001E9-Qf
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 15:02:02 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 31d492a8-d1e8-4453-aae4-9d978f5c4156;
 Tue, 15 Dec 2020 15:02:00 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-520-G2MurdU8PqS7AuSKestIwA-1; Tue, 15 Dec 2020 10:01:57 -0500
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id A6AD5192CC7A;
 Tue, 15 Dec 2020 15:01:24 +0000 (UTC)
Received: from merkur.fritz.box (ovpn-117-65.ams2.redhat.com [10.36.117.65])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id C545B19C44;
 Tue, 15 Dec 2020 15:01:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 31d492a8-d1e8-4453-aae4-9d978f5c4156
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1608044520;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Jf3sbZ3jClT4129xhsQLzjLRz8BmOvREW3TyV/kU598=;
	b=WpioA3Pxlzqo+rwf7Uf3j4S2gJ+Me2ltc3GISGL4yX/MaSOSlOYYxCNe22WBJ49W3N+FFU
	cf0hhhktrtVTVZ68zAAj8q4hWrsbdhNsGSwfWD6+0F66Ck08DAfRsSFg5Td/yiMVaWHFsY
	Fa1Mkd8fQjr2ZMqvsNQ+GzE6E5Ub2gM=
X-MC-Unique: G2MurdU8PqS7AuSKestIwA-1
Date: Tue, 15 Dec 2020 16:01:19 +0100
From: Kevin Wolf <kwolf@redhat.com>
To: Sergio Lopez <slp@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
	qemu-block@nongnu.org, Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>, Fam Zheng <fam@euphon.net>,
	Eric Blake <eblake@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
	Max Reitz <mreitz@redhat.com>
Subject: Re: [PATCH v2 2/4] block: Avoid processing BDS twice in
 bdrv_set_aio_context_ignore()
Message-ID: <20201215150119.GE8185@merkur.fritz.box>
References: <20201214170519.223781-1-slp@redhat.com>
 <20201214170519.223781-3-slp@redhat.com>
 <20201215121233.GD8185@merkur.fritz.box>
 <20201215131527.evpidxevevtfy54n@mhamilton>
MIME-Version: 1.0
In-Reply-To: <20201215131527.evpidxevevtfy54n@mhamilton>
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=kwolf@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="7JfCtLOvnd9MIVvH"
Content-Disposition: inline

--7JfCtLOvnd9MIVvH
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On 15.12.2020 at 14:15, Sergio Lopez wrote:
> On Tue, Dec 15, 2020 at 01:12:33PM +0100, Kevin Wolf wrote:
> > On 14.12.2020 at 18:05, Sergio Lopez wrote:
> > > While processing the parents of a BDS, one of the parents may process
> > > the child that's doing the tail recursion, which leads to a BDS being
> > > processed twice. This is especially problematic for the aio_notifiers,
> > > as they might attempt to work on both the old and the new AIO
> > > contexts.
> > >
> > > To avoid this, add the BDS pointer to the ignore list, and check the
> > > child BDS pointer while iterating over the children.
> > >
> > > Signed-off-by: Sergio Lopez <slp@redhat.com>
> >
> > Ugh, so we get a mixed list of BdrvChild and BlockDriverState? :-/
>
> I know, it's effective but quite ugly...
>
> > What is the specific scenario where you saw this breaking? Did you have
> > multiple BdrvChild connections between two nodes so that we would go to
> > the parent node through one and then come back to the child node through
> > the other?
>
> I don't think this is a corner case. If the graph is walked top->down,
> there's no problem since children are added to the ignore list before
> getting processed, and siblings don't process each other. But, if the
> graph is walked bottom->up, a BDS will start processing its parents
> without adding itself to the ignore list, so there's nothing
> preventing them from processing it again.

I don't understand. child is added to ignore before calling the parent
callback on it, so how can we come back through the same BdrvChild?

    QLIST_FOREACH(child, &bs->parents, next_parent) {
        if (g_slist_find(*ignore, child)) {
            continue;
        }
        assert(child->klass->set_aio_ctx);
        *ignore = g_slist_prepend(*ignore, child);
        child->klass->set_aio_ctx(child, new_context, ignore);
    }

> I'm pasting here an annotated trace of bdrv_set_aio_context_ignore I
> generated while triggering the issue:
>
> <----- begin ------>
> bdrv_set_aio_context_ignore: bs=0x555ee2e48030 enter
> bdrv_set_aio_context_ignore: bs=0x555ee2e48030 processing children
> bdrv_set_aio_context_ignore: bs=0x555ee2e5d420 enter
> bdrv_set_aio_context_ignore: bs=0x555ee2e5d420 processing children
> bdrv_set_aio_context_ignore: bs=0x555ee2e52060 enter
> bdrv_set_aio_context_ignore: bs=0x555ee2e52060 processing children
> bdrv_set_aio_context_ignore: bs=0x555ee2e52060 processing parents
> bdrv_set_aio_context_ignore: bs=0x555ee2e52060 processing itself
> bdrv_set_aio_context_ignore: bs=0x555ee2e5d420 processing parents
>
>  - We enter b_s_a_c_i with BDS 2fbf660 the first time:
>
> bdrv_set_aio_context_ignore: bs=0x555ee2fbf660 enter
> bdrv_set_aio_context_ignore: bs=0x555ee2fbf660 processing children
>
>  - We enter b_s_a_c_i with BDS 3bc0c00, a child of 2fbf660:
>
> bdrv_set_aio_context_ignore: bs=0x555ee3bc0c00 enter
> bdrv_set_aio_context_ignore: bs=0x555ee3bc0c00 processing children
> bdrv_set_aio_context_ignore: bs=0x555ee3bc0c00 processing parents
>
>  - We start processing its parents:
>
> bdrv_set_aio_context_ignore: bs=0x555ee2fbf660 processing parents
>
>  - We enter b_s_a_c_i with BDS 2e48030, a parent of 2fbf660:
>
> bdrv_set_aio_context_ignore: bs=0x555ee2e48030 enter
> bdrv_set_aio_context_ignore: bs=0x555ee2e48030 processing children
>
>  - We enter b_s_a_c_i with BDS 2fbf660 again, because parent
>    2e48030 didn't find it in the ignore list:
>
> bdrv_set_aio_context_ignore: bs=0x555ee2fbf660 enter
> bdrv_set_aio_context_ignore: bs=0x555ee2fbf660 processing children
> bdrv_set_aio_context_ignore: bs=0x555ee2fbf660 processing parents
> bdrv_set_aio_context_ignore: bs=0x555ee2fbf660 processing itself
> bdrv_set_aio_context_ignore: bs=0x555ee2e48030 processing parents
> bdrv_set_aio_context_ignore: bs=0x555ee2e48030 processing itself
>
>  - BDS 2fbf660 will be processed here a second time, triggering the
>    issue:
>
> bdrv_set_aio_context_ignore: bs=0x555ee2fbf660 processing itself
> <----- end ------>

You didn't dump the BdrvChild here. I think that would add some
information on why we re-entered 0x555ee2fbf660. Maybe you can also add
bs->drv->format_name for each node to make the scenario less abstract?

So far my reconstruction of the graph is something like this:

0x555ee2e48030 --+
   |  |          |
   |  |          +-> 0x555ee2e5d420 -> 0x555ee2e52060
   v  v          |
0x555ee2fbf660 --+
           |
           +-------> 0x555ee3bc0c00

It doesn't look quite trivial, but if 0x555ee2e48030 is the filter node
of a block job, it's not hard to imagine either.
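A minimal model of the re-entrancy being discussed may help. This is hypothetical code, not QEMU's: `set_ctx()` only mimics the shape of `bdrv_set_aio_context_ignore()` (children first, then parents, then the node itself), with edges rather than nodes on the ignore list. With two separate parent/child edges between the same pair of nodes, the bottom-up walk ignores the first edge, recurses into the parent, and comes back down through the second edge, so the child node is processed twice:

```c
#include <assert.h>

#define MAX_EDGES 8

struct node;

/* A BdrvChild-like edge: one parent/child link between two nodes. */
struct edge {
	struct node *parent;
	struct node *child;
};

struct node {
	struct edge *parents[MAX_EDGES];   /* edges where this node is the child */
	struct edge *children[MAX_EDGES];  /* edges where this node is the parent */
	int nr_parents, nr_children;
	int visits;                        /* times "processing itself" ran */
};

static int ignored(struct edge **ignore, int nr, struct edge *e)
{
	for (int i = 0; i < nr; i++)
		if (ignore[i] == e)
			return 1;
	return 0;
}

/* Children first, then parents, then self; only *edges* get ignored. */
static void set_ctx(struct node *bs, struct edge **ignore, int *nr)
{
	for (int i = 0; i < bs->nr_children; i++) {
		struct edge *e = bs->children[i];

		if (ignored(ignore, *nr, e))
			continue;
		ignore[(*nr)++] = e;
		set_ctx(e->child, ignore, nr);
	}
	for (int i = 0; i < bs->nr_parents; i++) {
		struct edge *e = bs->parents[i];

		if (ignored(ignore, *nr, e))
			continue;
		ignore[(*nr)++] = e;
		set_ctx(e->parent, ignore, nr);
	}
	bs->visits++;	/* "processing itself" */
}
```

With a single edge between the two nodes the callback runs exactly once per node; two separate links are needed to trigger the double processing, matching the observation above.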

> I suspect this has been happening for a while, and has only surfaced
> now due to the need to run an AIO context BH in an aio_notifier
> function that the "nbd/server: Quiesce coroutines on context switch"
> patch introduces. There the problem is that the first time the
> aio_notifier AIO detach function is called, it works on the old
> context (as it should be), and the second one works on the new context
> (which is wrong).
>
> > Maybe if what we really need to do is not processing every edge once,
> > but processing every node once, the list should be changed to contain
> > _only_ BDS objects. But then blk_do_set_aio_context() probably won't
> > work any more because it can't have blk->root ignored any more...
>
> I tried that in my first attempt and it broke badly. I didn't take a
> deeper look at the causes.
>
> > Anyway, if we end up changing what the list contains, the comment needs
> > an update, too. Currently it says:
> >
> >  * @ignore will accumulate all visited BdrvChild object. The caller is
> >  * responsible for freeing the list afterwards.
> >
> > Another option: Split the parents QLIST_FOREACH loop in two. First add
> > all parent BdrvChild objects to the ignore list, remember which of them
> > were newly added, and only after adding all of them call
> > child->klass->set_aio_ctx() for each parent that was previously not on
> > the ignore list. This will avoid that we come back to the same node
> > because all of its incoming edges are ignored now.
>
> I don't think this strategy will fix the issue illustrated in the
> trace above, as the BdrvChild pointer of the BDS processing its
> parents won't be on the ignore list by the time one of its parents
> starts processing its own children.

But why? We do append to the ignore list each time before we recurse
into a child or parent node. The only way I see is if you have two
separate BdrvChild links between the nodes.

Kevin

--7JfCtLOvnd9MIVvH
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE3D3rFZqa+V09dFb+fwmycsiPL9YFAl/Yz78ACgkQfwmycsiP
L9abTA/+OSf0+xD2lm5JSqzqGFhGcXKJZqW2/X3Kt1TWhZOsX6Rvlyl0oCqgwOWt
plCkHKjbFzhciuJBhA3OdkrRqHBUlzguRBG3haBITV4KlOnMLcgxvQr1kdB+MUny
ZY7+WBT+NNVFFsfYua14/q7nbjcuyvf+SZa5OYnFN3RRDkQyaKLZnrMJh/0WgQz4
/+s1vGLEyJ2Eh9ASp8N0Td+NBLoQ41nEyGHPYwag7ogpEJakoWEeGhKiNqitJRXs
5y3WWBvM+xt962D9z29lbUcxJiJ97TWpNvebOIRj+EkXUsYfAZKL0Lywaa2CJtu+
++gq0f3dlXW219D90gmWu7pd+DgUFAoZvW7Z80PICkjKMR0Z+YMT5CD2E2jhs/qF
4XYb+6cZJ6pZglqDcKQuGtqqN0i0uvEevrym1S+N0cHQzY05W9wlWZJY33Ie4Lpy
KfaRZ4tdGUryPD1xX3OQSsXh+VFwNM3YAHxgC7h5e0UiNGsN0sllWKDXcevTyxdM
SB1MjvK+AJm/j6C+lF0IAJlEJ3RZq3f77CBYq3kMWHv40ai1lCeTjTaOnTiSarhe
nvqvqbEsf7FRvuP3mnsl1zVBOLNQXxmD4nBveydDrmCid0DCz2Mg/2mGt7tpGV/t
CcMFPCs9RSDWhy5F8m7FOeD5CZZXbFaFnjTuKY6eEfmPjhFNb90=
=PR8T
-----END PGP SIGNATURE-----

--7JfCtLOvnd9MIVvH--



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 15:04:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 15:04:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54314.94285 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpBsA-0001MS-Vr; Tue, 15 Dec 2020 15:04:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54314.94285; Tue, 15 Dec 2020 15:04:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpBsA-0001ML-Sv; Tue, 15 Dec 2020 15:04:18 +0000
Received: by outflank-mailman (input) for mailman id 54314;
 Tue, 15 Dec 2020 15:04:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpBs8-0001MG-MR
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 15:04:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2348941b-46a8-412d-b803-75683c9fd45f;
 Tue, 15 Dec 2020 15:04:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D4C48AC7F;
 Tue, 15 Dec 2020 15:04:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2348941b-46a8-412d-b803-75683c9fd45f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608044655; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=1ONxP5uPtKWFefZOyd/Uymh+C2zg0SXGzfp1xar6tXg=;
	b=eT/aDkZ+5/0+ZflH9JfPI+zJmA6sc3gawUouS0XEtySeQ77MT4yCRKtUYqHUsNIkilx8+q
	AGwEOoctpwnibcMFEdrJFs+38I+ab3eY/TuQQAdg7hnjNgiGLYg4yMzIqwvL9BaApF8x6E
	caZpu2LJ7eiTqzUs9M8+0I2LerOJiTY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH] tools/xenstore: rework path length check
Date: Tue, 15 Dec 2020 16:04:11 +0100
Message-Id: <20201215150411.9987-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The different fixed limits for absolute and relative path lengths of
Xenstore nodes make it possible to create per-domain nodes via
absolute paths which are not accessible using relative paths, as the
two limits differ by 1024 characters.

Instead of these weird limits, use only one limit, which applies to
the relative path length of per-domain nodes and to the absolute path
length of all other nodes. This means the path length check is
applied to the path after removing a possible "/local/domain/<n>/"
prefix, with <n> being a domain id.

There has been a request to be able to limit path lengths even
further, so an additional quota is added which can be applied to path
lengths. It defaults to XENSTORE_REL_PATH_MAX (2048), but can be
set to lower values. This is done via the new "-M" or "--path-max"
option when invoking xenstored.
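The resulting check can be sketched as a standalone function (an illustration of the hunk in the patch below; `path_len_ok()` is a made-up name, and only the length check is modeled, not the rest of the nodename validation):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define XENSTORE_REL_PATH_MAX 2048	/* upstream default */

static int quota_max_path_len = XENSTORE_REL_PATH_MAX;

/* Apply the single limit after stripping "/local/domain/<n>/". */
static int path_len_ok(const char *node)
{
	int local_off = 0;
	unsigned int domid;

	/* %n records how many characters the matched prefix consumed. */
	if (sscanf(node, "/local/domain/%5u/%n", &domid, &local_off) != 1)
		local_off = 0;

	return strlen(node) <= (size_t)local_off + quota_max_path_len;
}
```

For example, with the quota lowered to 4 via -M, "/local/domain/5/abcd" (relative length 4) still passes, while the absolute path "/abcde" (length 6) fails.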

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Acked-by: Julien Grall <jgrall@amazon.com>
---
This patch was originally intended to be part of XSA-323, but it was
later decided not to include it, as in C Xenstored this is not a
security issue.
---
 tools/xenstore/xenstored_core.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 746a1247b3..3082a36d3a 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -102,6 +102,7 @@ int quota_nb_watch_per_domain = 128;
 int quota_max_entry_size = 2048; /* 2K */
 int quota_max_transaction = 10;
 int quota_nb_perms_per_node = 5;
+int quota_max_path_len = XENSTORE_REL_PATH_MAX;
 
 void trace(const char *fmt, ...)
 {
@@ -734,6 +735,9 @@ static bool valid_chars(const char *node)
 
 bool is_valid_nodename(const char *node)
 {
+	int local_off = 0;
+	unsigned int domid;
+
 	/* Must start in /. */
 	if (!strstarts(node, "/"))
 		return false;
@@ -746,7 +750,10 @@ bool is_valid_nodename(const char *node)
 	if (strstr(node, "//"))
 		return false;
 
-	if (strlen(node) > XENSTORE_ABS_PATH_MAX)
+	if (sscanf(node, "/local/domain/%5u/%n", &domid, &local_off) != 1)
+		local_off = 0;
+
+	if (strlen(node) > local_off + quota_max_path_len)
 		return false;
 
 	return valid_chars(node);
@@ -806,6 +813,8 @@ static struct node *get_node_canonicalized(struct connection *conn,
 	if (!canonical_name)
 		canonical_name = &tmp_name;
 	*canonical_name = canonicalize(conn, ctx, name);
+	if (!*canonical_name)
+		return NULL;
 	return get_node(conn, ctx, *canonical_name, perm);
 }
 
@@ -1926,6 +1935,7 @@ static void usage(void)
 "  -W, --watch-nb <nb>     limit the number of watches per domain,\n"
 "  -t, --transaction <nb>  limit the number of transaction allowed per domain,\n"
 "  -A, --perm-nb <nb>      limit the number of permissions per node,\n"
+"  -M, --path-max <chars>  limit the allowed Xenstore node path length,\n"
 "  -R, --no-recovery       to request that no recovery should be attempted when\n"
 "                          the store is corrupted (debug only),\n"
 "  -I, --internal-db       store database in memory, not on disk\n"
@@ -1947,6 +1957,7 @@ static struct option options[] = {
 	{ "trace-file", 1, NULL, 'T' },
 	{ "transaction", 1, NULL, 't' },
 	{ "perm-nb", 1, NULL, 'A' },
+	{ "path-max", 1, NULL, 'M' },
 	{ "no-recovery", 0, NULL, 'R' },
 	{ "internal-db", 0, NULL, 'I' },
 	{ "verbose", 0, NULL, 'V' },
@@ -1969,7 +1980,7 @@ int main(int argc, char *argv[])
 	int timeout;
 
 
-	while ((opt = getopt_long(argc, argv, "DE:F:HNPS:t:A:T:RVW:", options,
+	while ((opt = getopt_long(argc, argv, "DE:F:HNPS:t:A:M:T:RVW:", options,
 				  NULL)) != -1) {
 		switch (opt) {
 		case 'D':
@@ -2014,6 +2025,11 @@
 		case 'A':
 			quota_nb_perms_per_node = strtol(optarg, NULL, 10);
 			break;
+		case 'M':
+			quota_max_path_len = strtol(optarg, NULL, 10);
+			quota_max_path_len = min(XENSTORE_REL_PATH_MAX,
+						 quota_max_path_len);
+			break;
 		case 'e':
 			dom0_event = strtol(optarg, NULL, 10);
 			break;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 15:06:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 15:06:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54319.94298 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpBuE-0001VH-D7; Tue, 15 Dec 2020 15:06:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54319.94298; Tue, 15 Dec 2020 15:06:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpBuE-0001VA-A8; Tue, 15 Dec 2020 15:06:26 +0000
Received: by outflank-mailman (input) for mailman id 54319;
 Tue, 15 Dec 2020 15:06:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6ufw=FT=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kpBuD-0001Up-AH
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 15:06:25 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fca7ef58-875b-49d9-8b50-fdf322f7fad6;
 Tue, 15 Dec 2020 15:06:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fca7ef58-875b-49d9-8b50-fdf322f7fad6
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608044784;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=OTpugqn7zvt2lSUnB/CR9WtOPwPJgpWceKapczBjzPk=;
  b=LAZrZSK9k5bnVjS5KPy67xBR9xnIUwn8OkbxOmOMUGEute3Jx3wO8U5t
   0Tc3ybKCrt9AXLmbZGV7tx5V2aI9uOXrZqIYy6RVoa9MG5N+AGjswzQPb
   NHnEFB8e+cLU+dMc+ASb6Akv/XMIxT1zhJa+iV1lSy65V6COG5ax1t3kq
   Y=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: I3NndTN5xR6bVHc9LXuP0BUuJmsuQnM8W8oK1XFhj8+YlyyzkFFKuoBocVuePlx7vGWzpHdye2
 HIUTLi6k7Z/K66SFxMToP4+GgGapNxmMACLjubUt0rD3KGqWPgW16v0vG7iRB461qfM7AZW8yc
 vhs/Q+tCda809FBenIdk1s5qwZRCwOyMPhihkWiDNiCg4EGSFMh4BFpifSKVszb5xe3TmpC42E
 sd8Um7b1XXJV/Dw8qQHWcSY/3cdFY99ugnCzb/tpB/rQdaYjq7MSOK8hAQIIf84ZVOrDO8q4Zq
 MSs=
X-SBRS: 5.2
X-MesageID: 33240661
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,421,1599537600"; 
   d="scan'208";a="33240661"
Subject: Re: [PATCH] tools/xenstore: rework path length check
To: Juergen Gross <jgross@suse.com>, <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, Paul Durrant
	<paul@xen.org>, Julien Grall <jgrall@amazon.com>
References: <20201215150411.9987-1-jgross@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <f37930e2-10e8-69c4-5e36-4cf563b2f38e@citrix.com>
Date: Tue, 15 Dec 2020 15:06:17 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201215150411.9987-1-jgross@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 15/12/2020 15:04, Juergen Gross wrote:
> The different fixed limits for absolute and relative path lengths of
> Xenstore nodes make it possible to create per-domain nodes via
> absolute paths which are not accessible using relative paths, as the
> two limits differ by 1024 characters.
>
> Instead of these weird limits use only one limit, which applies to the
> relative path length of per-domain nodes and to the absolute path
> length of all other nodes. This means the path length check is
> applied to the path after removing a possible start of
> "/local/domain/<n>/", with <n> being a domain id.
>
> There has been a request to be able to limit the path lengths even
> more, so an additional quota is added which can be applied to path
> lengths. It defaults to XENSTORE_REL_PATH_MAX (2048), but can be
> set to lower values. This is done via the new "-M" or "--path-max"
> option when invoking xenstored.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Reviewed-by: Paul Durrant <paul@xen.org>
> Acked-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
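
The reworked length check described in the quoted commit message can be sketched in isolation. This is a toy re-implementation mirroring the patch hunk, not the real xenstored source; `path_len_ok()` and the local `XENSTORE_REL_PATH_MAX` definition are stand-in names:

```c
#include <stdio.h>
#include <string.h>

/* Stand-in for the real constant from xenstore_lib.h. */
#define XENSTORE_REL_PATH_MAX 2048

static int quota_max_path_len = XENSTORE_REL_PATH_MAX;

static int path_len_ok(const char *node)
{
	int local_off = 0;
	unsigned int domid;

	/* Strip a leading "/local/domain/<domid>/" before applying the
	 * quota, so per-domain nodes are measured by their relative path. */
	if (sscanf(node, "/local/domain/%5u/%n", &domid, &local_off) != 1)
		local_off = 0;

	return strlen(node) <= (size_t)(local_off + quota_max_path_len);
}
```

With this shape, `/local/domain/5/foo` is charged only for the part after the domain prefix, while any other absolute path is charged in full against the same single quota.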


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 15:07:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 15:07:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54323.94310 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpBvL-0001dj-Oy; Tue, 15 Dec 2020 15:07:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54323.94310; Tue, 15 Dec 2020 15:07:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpBvL-0001dc-LK; Tue, 15 Dec 2020 15:07:35 +0000
Received: by outflank-mailman (input) for mailman id 54323;
 Tue, 15 Dec 2020 15:07:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpBvK-0001dW-DO
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 15:07:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 14c798d5-4701-4bff-9af6-b662e8d52505;
 Tue, 15 Dec 2020 15:07:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C1912AF58;
 Tue, 15 Dec 2020 15:07:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 14c798d5-4701-4bff-9af6-b662e8d52505
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608044852; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=DTMgY9IAUintdaOgUwZJxqSz/rzlPjbhmR/iJKmOAsE=;
	b=kqYCIO4LOQIayrtMDj0KQ1k0dZfDtN8TGJgpIw2840lDjyfhS0Pe8gt2FhpQH9C+QaHMrh
	raFYAUlHX+w5hn7WGB6fRkIB7ywg1gbbujlMRbDHRwibmY9VGSOH8Kdg2Lwy0PuCvyyTst
	y6lkWHYnnTqrtysTc7EOIGguUpb0OpM=
Subject: Re: [PATCH v2 00/12] x86: major paravirt cleanup
To: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>, linux-hyperv@vger.kernel.org,
 Daniel Lezcano <daniel.lezcano@linaro.org>,
 Wanpeng Li <wanpengli@tencent.com>, kvm@vger.kernel.org,
 "VMware, Inc." <pv-drivers@vmware.com>,
 virtualization@lists.linux-foundation.org, Ben Segall <bsegall@google.com>,
 "H. Peter Anvin" <hpa@zytor.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Wei Liu <wei.liu@kernel.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Stephen Hemminger <sthemmin@microsoft.com>, Joerg Roedel <joro@8bytes.org>,
 x86@kernel.org, Ingo Molnar <mingo@redhat.com>, Mel Gorman
 <mgorman@suse.de>, xen-devel@lists.xenproject.org,
 Haiyang Zhang <haiyangz@microsoft.com>, Steven Rostedt
 <rostedt@goodmis.org>, Borislav Petkov <bp@alien8.de>, luto@kernel.org,
 Josh Poimboeuf <jpoimboe@redhat.com>,
 Vincent Guittot <vincent.guittot@linaro.org>,
 Thomas Gleixner <tglx@linutronix.de>,
 Dietmar Eggemann <dietmar.eggemann@arm.com>,
 Jim Mattson <jmattson@google.com>, linux-kernel@vger.kernel.org,
 Sean Christopherson <sean.j.christopherson@intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Daniel Bristot de Oliveira <bristot@redhat.com>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120125342.GC3040@hirez.programming.kicks-ass.net>
 <20201123134317.GE3092@hirez.programming.kicks-ass.net>
 <6771a12c-051d-1655-fb3a-cc45a3c82e29@suse.com>
 <20201215141834.GG3040@hirez.programming.kicks-ass.net>
 <20201215145408.GR3092@hirez.programming.kicks-ass.net>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <2eeef3e6-2f7c-82ed-f02b-acd49a47b527@suse.com>
Date: Tue, 15 Dec 2020 16:07:29 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201215145408.GR3092@hirez.programming.kicks-ass.net>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="bbE03T3vMvfPnuzIYd7blCFzPDKWu8tXK"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--bbE03T3vMvfPnuzIYd7blCFzPDKWu8tXK
Content-Type: multipart/mixed; boundary="rOMghP2aNZKArDug7s6citEJGC1oClV5D";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>, linux-hyperv@vger.kernel.org,
 Daniel Lezcano <daniel.lezcano@linaro.org>,
 Wanpeng Li <wanpengli@tencent.com>, kvm@vger.kernel.org,
 "VMware, Inc." <pv-drivers@vmware.com>,
 virtualization@lists.linux-foundation.org, Ben Segall <bsegall@google.com>,
 "H. Peter Anvin" <hpa@zytor.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Wei Liu <wei.liu@kernel.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Stephen Hemminger <sthemmin@microsoft.com>, Joerg Roedel <joro@8bytes.org>,
 x86@kernel.org, Ingo Molnar <mingo@redhat.com>, Mel Gorman
 <mgorman@suse.de>, xen-devel@lists.xenproject.org,
 Haiyang Zhang <haiyangz@microsoft.com>, Steven Rostedt
 <rostedt@goodmis.org>, Borislav Petkov <bp@alien8.de>, luto@kernel.org,
 Josh Poimboeuf <jpoimboe@redhat.com>,
 Vincent Guittot <vincent.guittot@linaro.org>,
 Thomas Gleixner <tglx@linutronix.de>,
 Dietmar Eggemann <dietmar.eggemann@arm.com>,
 Jim Mattson <jmattson@google.com>, linux-kernel@vger.kernel.org,
 Sean Christopherson <sean.j.christopherson@intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Daniel Bristot de Oliveira <bristot@redhat.com>
Message-ID: <2eeef3e6-2f7c-82ed-f02b-acd49a47b527@suse.com>
Subject: Re: [PATCH v2 00/12] x86: major paravirt cleanup
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120125342.GC3040@hirez.programming.kicks-ass.net>
 <20201123134317.GE3092@hirez.programming.kicks-ass.net>
 <6771a12c-051d-1655-fb3a-cc45a3c82e29@suse.com>
 <20201215141834.GG3040@hirez.programming.kicks-ass.net>
 <20201215145408.GR3092@hirez.programming.kicks-ass.net>
In-Reply-To: <20201215145408.GR3092@hirez.programming.kicks-ass.net>

--rOMghP2aNZKArDug7s6citEJGC1oClV5D
Content-Type: multipart/mixed;
 boundary="------------0BB9EF8F4478FA91BE4D7120"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------0BB9EF8F4478FA91BE4D7120
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 15.12.20 15:54, Peter Zijlstra wrote:
> On Tue, Dec 15, 2020 at 03:18:34PM +0100, Peter Zijlstra wrote:
>> Ah, I was waiting for Josh to have an opinion (and then sorta forgot
>> about the whole thing again). Let me refresh and provide at least a
>> Changelog.
> 
> How's this then?

Thanks, will add it into my series.


Juergen

> 
> ---
> Subject: objtool: Alternatives vs ORC, the hard way
> From: Peter Zijlstra <peterz@infradead.org>
> Date: Mon, 23 Nov 2020 14:43:17 +0100
> 
> Alternatives pose an interesting problem for unwinders because from
> the unwinder's PoV we're just executing instructions; it has no idea
> the text is modified, nor any way of retrieving what with.
> 
> Therefore the stance has been that alternatives must not change stack
> state, as encoded by commit: 7117f16bf460 ("objtool: Fix ORC vs
> alternatives"). This obviously guarantees that whatever actual
> instructions end up in the text, the unwind information is correct.
> 
> However, there is one additional source of text patching that isn't
> currently visible to objtool: paravirt immediate patching. And it
> turns out one of these violates the rule.
> 
> As part of cleaning that up, the unfortunate reality is that objtool
> now has to deal with alternatives modifying unwind state and validate
> the combination is valid and generate ORC data to match.
> 
> The problem is that a single instance of unwind information (ORC) must
> capture and correctly unwind all alternatives. Since the trivially
> correct mandate is out, implement the straightforward brute-force
> approach:
> 
>   1) generate CFI information for each alternative
> 
>   2) unwind every alternative with the merge-sort of the previously
>      generated CFI information -- O(n^2)
> 
>   3) for any possible conflict: yell.
> 
>   4) Generate ORC with merge-sort
> 
> Specifically for 3 there are two possible classes of conflicts:
> 
>   - the merge-sort itself could find conflicting CFI for the same
>     offset.
> 
>   - the unwind can fail with the merged CFI.
> 
> In particular, this allows us to deal with:
> 
> 	Alt1			Alt2			Alt3
> 
>   0x00	CALL *pv_ops.save_fl	CALL xen_save_fl	PUSHF
>   0x01							POP %RAX
>   0x02							NOP
>   ...
>   0x05				NOP
>   ...
>   0x07   <insn>
> 
> The unwind information for offset-0x00 is identical for all 3
> alternatives. Similarly offset-0x05 and higher also are identical (and
> the same as 0x00). However offset-0x01 has deviating CFI, but that is
> only relevant for Alt3, neither of the other alternative instruction
> streams will ever hit that offset.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>   tools/objtool/check.c   |  180 ++++++++++++++++++++++++++++++++++++++++++++----
>   tools/objtool/check.h   |    5 +
>   tools/objtool/orc_gen.c |  180 +++++++++++++++++++++++++++++++-----------------
>   3 files changed, 290 insertions(+), 75 deletions(-)
> 
> --- a/tools/objtool/check.c
> +++ b/tools/objtool/check.c
> @@ -1096,6 +1096,32 @@ static int handle_group_alt(struct objto
>   		return -1;
>   	}
>  
> +	/*
> +	 * Add the filler NOP, required for alternative CFI.
> +	 */
> +	if (special_alt->group && special_alt->new_len < special_alt->orig_len) {
> +		struct instruction *nop = malloc(sizeof(*nop));
> +		if (!nop) {
> +			WARN("malloc failed");
> +			return -1;
> +		}
> +		memset(nop, 0, sizeof(*nop));
> +		INIT_LIST_HEAD(&nop->alts);
> +		INIT_LIST_HEAD(&nop->stack_ops);
> +		init_cfi_state(&nop->cfi);
> +
> +		nop->sec = last_new_insn->sec;
> +		nop->ignore = last_new_insn->ignore;
> +		nop->func = last_new_insn->func;
> +		nop->alt_group = alt_group;
> +		nop->offset = last_new_insn->offset + last_new_insn->len;
> +		nop->type = INSN_NOP;
> +		nop->len = special_alt->orig_len - special_alt->new_len;
> +
> +		list_add(&nop->list, &last_new_insn->list);
> +		last_new_insn = nop;
> +	}
> +
>   	if (fake_jump)
>   		list_add(&fake_jump->list, &last_new_insn->list);
>  
> @@ -2237,18 +2263,12 @@ static int handle_insn_ops(struct instru
>   	struct stack_op *op;
>  
>   	list_for_each_entry(op, &insn->stack_ops, list) {
> -		struct cfi_state old_cfi = state->cfi;
>   		int res;
>  
>   		res = update_cfi_state(insn, &state->cfi, op);
>   		if (res)
>   			return res;
>  
> -		if (insn->alt_group && memcmp(&state->cfi, &old_cfi, sizeof(struct cfi_state))) {
> -			WARN_FUNC("alternative modifies stack", insn->sec, insn->offset);
> -			return -1;
> -		}
> -
>   		if (op->dest.type == OP_DEST_PUSHF) {
>   			if (!state->uaccess_stack) {
>   				state->uaccess_stack = 1;
> @@ -2460,19 +2480,137 @@ static int validate_return(struct symbol
>    * unreported (because they're NOPs), such holes would result in CFI_UNDEFINED
>    * states which then results in ORC entries, which we just said we didn't want.
>    *
> - * Avoid them by copying the CFI entry of the first instruction into the whole
> - * alternative.
> + * Avoid them by copying the CFI entry of the first instruction into the hole.
>    */
> -static void fill_alternative_cfi(struct objtool_file *file, struct instruction *insn)
> +static void __fill_alt_cfi(struct objtool_file *file, struct instruction *insn)
>   {
>   	struct instruction *first_insn = insn;
>   	int alt_group = insn->alt_group;
>  
> -	sec_for_each_insn_continue(file, insn) {
> +	sec_for_each_insn_from(file, insn) {
>   		if (insn->alt_group != alt_group)
>   			break;
> -		insn->cfi = first_insn->cfi;
> +
> +		if (!insn->visited)
> +			insn->cfi = first_insn->cfi;
> +	}
> +}
> +
> +static void fill_alt_cfi(struct objtool_file *file, struct instruction *alt_insn)
> +{
> +	struct alternative *alt;
> +
> +	__fill_alt_cfi(file, alt_insn);
> +
> +	list_for_each_entry(alt, &alt_insn->alts, list)
> +		__fill_alt_cfi(file, alt->insn);
> +}
> +
> +static struct instruction *
> +__find_unwind(struct objtool_file *file,
> +	      struct instruction *insn, unsigned long offset)
> +{
> +	int alt_group = insn->alt_group;
> +	struct instruction *next;
> +	unsigned long off = 0;
> +
> +	while ((off + insn->len) <= offset) {
> +		next = next_insn_same_sec(file, insn);
> +		if (next && next->alt_group != alt_group)
> +			next = NULL;
> +
> +		if (!next)
> +			break;
> +
> +		off += insn->len;
> +		insn = next;
>   	}
> +
> +	return insn;
> +}
> +
> +struct instruction *
> +find_alt_unwind(struct objtool_file *file,
> +		struct instruction *alt_insn, unsigned long offset)
> +{
> +	struct instruction *fit;
> +	struct alternative *alt;
> +	unsigned long fit_off;
> +
> +	fit = __find_unwind(file, alt_insn, offset);
> +	fit_off = (fit->offset - alt_insn->offset);
> +
> +	list_for_each_entry(alt, &alt_insn->alts, list) {
> +		struct instruction *x;
> +		unsigned long x_off;
> +
> +		x = __find_unwind(file, alt->insn, offset);
> +		x_off = (x->offset - alt->insn->offset);
> +
> +		if (fit_off < x_off) {
> +			fit = x;
> +			fit_off = x_off;
> +
> +		} else if (fit_off == x_off &&
> +			   memcmp(&fit->cfi, &x->cfi, sizeof(struct cfi_state))) {
> +
> +			char *_str1 = offstr(fit->sec, fit->offset);
> +			char *_str2 = offstr(x->sec, x->offset);
> +			WARN("%s: equal-offset incompatible alternative: %s\n", _str1, _str2);
> +			free(_str1);
> +			free(_str2);
> +			return fit;
> +		}
> +	}
> +
> +	return fit;
> +}
> +
> +static int __validate_unwind(struct objtool_file *file,
> +			     struct instruction *alt_insn,
> +			     struct instruction *insn)
> +{
> +	int alt_group = insn->alt_group;
> +	struct instruction *unwind;
> +	unsigned long offset = 0;
> +
> +	sec_for_each_insn_from(file, insn) {
> +		if (insn->alt_group != alt_group)
> +			break;
> +
> +		unwind = find_alt_unwind(file, alt_insn, offset);
> +
> +		if (memcmp(&insn->cfi, &unwind->cfi, sizeof(struct cfi_state))) {
> +
> +			char *_str1 = offstr(insn->sec, insn->offset);
> +			char *_str2 = offstr(unwind->sec, unwind->offset);
> +			WARN("%s: unwind incompatible alternative: %s (%ld)\n",
> +			     _str1, _str2, offset);
> +			free(_str1);
> +			free(_str2);
> +			return 1;
> +		}
> +
> +		offset += insn->len;
> +	}
> +
> +	return 0;
> +}
> +
> +static int validate_alt_unwind(struct objtool_file *file,
> +			       struct instruction *alt_insn)
> +{
> +	struct alternative *alt;
> +
> +	if (__validate_unwind(file, alt_insn, alt_insn))
> +		return 1;
> +
> +	list_for_each_entry(alt, &alt_insn->alts, list) {
> +		if (__validate_unwind(file, alt_insn, alt->insn))
> +			return 1;
> +	}
> +
> +	return 0;
>   }
>  
>   /*
> @@ -2484,9 +2622,10 @@ static void fill_alternative_cfi(struct
>   static int validate_branch(struct objtool_file *file, struct symbol *func,
>   			   struct instruction *insn, struct insn_state state)
>   {
> +	struct instruction *next_insn, *alt_insn = NULL;
>   	struct alternative *alt;
> -	struct instruction *next_insn;
>   	struct section *sec;
> +	int alt_group = 0;
>   	u8 visited;
>   	int ret;
>  
> @@ -2541,8 +2680,10 @@ static int validate_branch(struct objtoo
>   				}
>   			}
>  
> -			if (insn->alt_group)
> -				fill_alternative_cfi(file, insn);
> +			if (insn->alt_group) {
> +				alt_insn = insn;
> +				alt_group = insn->alt_group;
> +			}
>  
>   			if (skip_orig)
>   				return 0;
> @@ -2697,6 +2838,17 @@ static int validate_branch(struct objtoo
>   		}
>  
>   		insn = next_insn;
> +
> +		if (alt_insn && insn->alt_group != alt_group) {
> +			alt_insn->alt_end = insn;
> +
> +			fill_alt_cfi(file, alt_insn);
> +
> +			if (validate_alt_unwind(file, alt_insn))
> +				return 1;
> +
> +			alt_insn = NULL;
> +		}
>   	}
>  
>   	return 0;
> --- a/tools/objtool/check.h
> +++ b/tools/objtool/check.h
> @@ -41,6 +41,7 @@ struct instruction {
>   	struct instruction *first_jump_src;
>   	struct reloc *jump_table;
>   	struct list_head alts;
> +	struct instruction *alt_end;
>   	struct symbol *func;
>   	struct list_head stack_ops;
>   	struct cfi_state cfi;
> @@ -55,6 +56,10 @@ static inline bool is_static_jump(struct
>   	       insn->type == INSN_JUMP_UNCONDITIONAL;
>   }
>  
> +struct instruction *
> +find_alt_unwind(struct objtool_file *file,
> +		struct instruction *alt_insn, unsigned long offset);
> +
>   struct instruction *find_insn(struct objtool_file *file,
>   			      struct section *sec, unsigned long offset);
>  
> --- a/tools/objtool/orc_gen.c
> +++ b/tools/objtool/orc_gen.c
> @@ -12,75 +12,86 @@
>   #include "check.h"
>   #include "warn.h"
>  
> -int create_orc(struct objtool_file *file)
> +static int create_orc_insn(struct objtool_file *file, struct instruction *insn)
>   {
> -	struct instruction *insn;
> +	struct orc_entry *orc = &insn->orc;
> +	struct cfi_reg *cfa = &insn->cfi.cfa;
> +	struct cfi_reg *bp = &insn->cfi.regs[CFI_BP];
> +
> +	orc->end = insn->cfi.end;
> +
> +	if (cfa->base == CFI_UNDEFINED) {
> +		orc->sp_reg = ORC_REG_UNDEFINED;
> +		return 0;
> +	}
>  
> -	for_each_insn(file, insn) {
> -		struct orc_entry *orc = &insn->orc;
> -		struct cfi_reg *cfa = &insn->cfi.cfa;
> -		struct cfi_reg *bp = &insn->cfi.regs[CFI_BP];
> +	switch (cfa->base) {
> +	case CFI_SP:
> +		orc->sp_reg = ORC_REG_SP;
> +		break;
> +	case CFI_SP_INDIRECT:
> +		orc->sp_reg = ORC_REG_SP_INDIRECT;
> +		break;
> +	case CFI_BP:
> +		orc->sp_reg = ORC_REG_BP;
> +		break;
> +	case CFI_BP_INDIRECT:
> +		orc->sp_reg = ORC_REG_BP_INDIRECT;
> +		break;
> +	case CFI_R10:
> +		orc->sp_reg = ORC_REG_R10;
> +		break;
> +	case CFI_R13:
> +		orc->sp_reg = ORC_REG_R13;
> +		break;
> +	case CFI_DI:
> +		orc->sp_reg = ORC_REG_DI;
> +		break;
> +	case CFI_DX:
> +		orc->sp_reg = ORC_REG_DX;
> +		break;
> +	default:
> +		WARN_FUNC("unknown CFA base reg %d",
> +			  insn->sec, insn->offset, cfa->base);
> +		return -1;
> +	}
>  
> -		if (!insn->sec->text)
> -			continue;
> +	switch(bp->base) {
> +	case CFI_UNDEFINED:
> +		orc->bp_reg = ORC_REG_UNDEFINED;
> +		break;
> +	case CFI_CFA:
> +		orc->bp_reg = ORC_REG_PREV_SP;
> +		break;
> +	case CFI_BP:
> +		orc->bp_reg = ORC_REG_BP;
> +		break;
> +	default:
> +		WARN_FUNC("unknown BP base reg %d",
> +			  insn->sec, insn->offset, bp->base);
> +		return -1;
> +	}
>  
> -		orc->end = insn->cfi.end;
> +	orc->sp_offset = cfa->offset;
> +	orc->bp_offset = bp->offset;
> +	orc->type = insn->cfi.type;
>  
> -		if (cfa->base == CFI_UNDEFINED) {
> -			orc->sp_reg = ORC_REG_UNDEFINED;
> -			continue;
> -		}
> +	return 0;
> +}
>  
> -		switch (cfa->base) {
> -		case CFI_SP:
> -			orc->sp_reg = ORC_REG_SP;
> -			break;
> -		case CFI_SP_INDIRECT:
> -			orc->sp_reg = ORC_REG_SP_INDIRECT;
> -			break;
> -		case CFI_BP:
> -			orc->sp_reg = ORC_REG_BP;
> -			break;
> -		case CFI_BP_INDIRECT:
> -			orc->sp_reg = ORC_REG_BP_INDIRECT;
> -			break;
> -		case CFI_R10:
> -			orc->sp_reg = ORC_REG_R10;
> -			break;
> -		case CFI_R13:
> -			orc->sp_reg = ORC_REG_R13;
> -			break;
> -		case CFI_DI:
> -			orc->sp_reg = ORC_REG_DI;
> -			break;
> -		case CFI_DX:
> -			orc->sp_reg = ORC_REG_DX;
> -			break;
> -		default:
> -			WARN_FUNC("unknown CFA base reg %d",
> -				  insn->sec, insn->offset, cfa->base);
> -			return -1;
> -		}
> +int create_orc(struct objtool_file *file)
> +{
> +	struct instruction *insn;
>  
> -		switch(bp->base) {
> -		case CFI_UNDEFINED:
> -			orc->bp_reg = ORC_REG_UNDEFINED;
> -			break;
> -		case CFI_CFA:
> -			orc->bp_reg = ORC_REG_PREV_SP;
> -			break;
> -		case CFI_BP:
> -			orc->bp_reg = ORC_REG_BP;
> -			break;
> -		default:
> -			WARN_FUNC("unknown BP base reg %d",
> -				  insn->sec, insn->offset, bp->base);
> -			return -1;
> -		}
> +	for_each_insn(file, insn) {
> +		int ret;
> +
> +		if (!insn->sec->text)
> +			continue;
>  
> -		orc->sp_offset = cfa->offset;
> -		orc->bp_offset = bp->offset;
> -		orc->type = insn->cfi.type;
> +		ret = create_orc_insn(file, insn);
> +		if (ret)
> +			return ret;
>   	}
>  
>   	return 0;
> @@ -166,6 +177,28 @@ int create_orc_sections(struct objtool_f
>  
>   		prev_insn = NULL;
>   		sec_for_each_insn(file, sec, insn) {
> +
> +			if (insn->alt_end) {
> +				unsigned int offset, alt_len;
> +				struct instruction *unwind;
> +
> +				alt_len = insn->alt_end->offset - insn->offset;
> +				for (offset = 0; offset < alt_len; offset++) {
> +					unwind = find_alt_unwind(file, insn, offset);
> +					/* XXX: skipped earlier ! */
> +					create_orc_insn(file, unwind);
> +					if (!prev_insn ||
> +					    memcmp(&unwind->orc, &prev_insn->orc,
> +						   sizeof(struct orc_entry))) {
> +						idx++;
> +//						WARN_FUNC("ORC @ %d/%d", sec, insn->offset+offset, offset, alt_len);
> +					}
> +					prev_insn = unwind;
> +				}
> +
> +				insn = insn->alt_end;
> +			}
> +
>   			if (!prev_insn ||
>   			    memcmp(&insn->orc, &prev_insn->orc,
>   				   sizeof(struct orc_entry))) {
> @@ -203,6 +236,31 @@ int create_orc_sections(struct objtool_f
>  
>   		prev_insn = NULL;
>   		sec_for_each_insn(file, sec, insn) {
> +
> +			if (insn->alt_end) {
> +				unsigned int offset, alt_len;
> +				struct instruction *unwind;
> +
> +				alt_len = insn->alt_end->offset - insn->offset;
> +				for (offset = 0; offset < alt_len; offset++) {
> +					unwind = find_alt_unwind(file, insn, offset);
> +					if (!prev_insn ||
> +					    memcmp(&unwind->orc, &prev_insn->orc,
> +						   sizeof(struct orc_entry))) {
> +
> +						if (create_orc_entry(file->elf, u_sec, ip_relocsec, idx,
> +								     insn->sec, insn->offset + offset,
> +								     &unwind->orc))
> +							return -1;
> +
> +						idx++;
> +					}
> +					prev_insn = unwind;
> +				}
> +
> +				insn = insn->alt_end;
> +			}
> +
>   			if (!prev_insn || memcmp(&insn->orc, &prev_insn->orc,
>   						 sizeof(struct orc_entry))) {
>  
> _______________________________________________
> Virtualization mailing list
> Virtualization@lists.linux-foundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/virtualization
> 


--------------0BB9EF8F4478FA91BE4D7120
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------0BB9EF8F4478FA91BE4D7120--

--rOMghP2aNZKArDug7s6citEJGC1oClV5D--

--bbE03T3vMvfPnuzIYd7blCFzPDKWu8tXK
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/Y0TEFAwAAAAAACgkQsN6d1ii/Ey90
5gf/fv6ShHPDsI6KjVGI01pyEfB7zDILbgYz5I1pW/QR1lKJCIIoFLrMMH8py872tlshMice+SiY
ZBPTyyXz+h8bX//x8DGdvI5Hs482fWql/k7zjojO/vDc8uPyqQduf0WTn0umbRR5SFmF8KnPxGMG
3/uIyRCQCl9SSAMVKjkv4nG4d8YXgScYr6/40lxeKuFh6nglSlIxwyviZVB2GTNs7UNa0KiYVSOo
+nN5BHDLpZvLbe4lAFh3+SHld1uZML9OmbNq6f7SJOp4zSCEfvqPBbEXJQ8TBa7JGYuKSD3IUQpS
+fogPib/MIc4ihSryEAzmJw/MXu0djiGjMDJquBIEw==
=UqFe
-----END PGP SIGNATURE-----

--bbE03T3vMvfPnuzIYd7blCFzPDKWu8tXK--


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 15:35:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 15:35:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54329.94321 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpCLa-0004Nz-48; Tue, 15 Dec 2020 15:34:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54329.94321; Tue, 15 Dec 2020 15:34:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpCLa-0004Ns-13; Tue, 15 Dec 2020 15:34:42 +0000
Received: by outflank-mailman (input) for mailman id 54329;
 Tue, 15 Dec 2020 15:34:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sb05=FT=redhat.com=kwolf@srs-us1.protection.inumbo.net>)
 id 1kpCLY-0004Nn-2D
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 15:34:40 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 0313667e-2a1d-4491-8ab0-c5aa8646a448;
 Tue, 15 Dec 2020 15:34:37 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-266-mB2UIrCmOIiyNDQ8CYSesw-1; Tue, 15 Dec 2020 10:34:35 -0500
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com
 [10.5.11.13])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 13D2F81CB08;
 Tue, 15 Dec 2020 15:34:10 +0000 (UTC)
Received: from merkur.fritz.box (ovpn-117-65.ams2.redhat.com [10.36.117.65])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 6E72060854;
 Tue, 15 Dec 2020 15:34:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0313667e-2a1d-4491-8ab0-c5aa8646a448
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1608046477;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=JGAmSk/5NnCmnaOU3RbF+dwSsWHRiNNwBCIq+AYP6jw=;
	b=PhE6w4KWWN2LfiRWpdmF+aJF6pN0oNjuQs+Ftei7VZ7CYlEqZx3OGj9yzdrDajjRbYZX+I
	3kUq3GrHhVbAiSlTST91eZlcy3NlLDNrjGtLadOMQ5KTC4vXfSXWxI8hG/hnCtB8we2qOS
	1PGWLSwDmRDAJNtxIFEx2Na97jknEpw=
X-MC-Unique: mB2UIrCmOIiyNDQ8CYSesw-1
Date: Tue, 15 Dec 2020 16:34:05 +0100
From: Kevin Wolf <kwolf@redhat.com>
To: Sergio Lopez <slp@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
	qemu-block@nongnu.org, Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>, Fam Zheng <fam@euphon.net>,
	Eric Blake <eblake@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
	Max Reitz <mreitz@redhat.com>
Subject: Re: [PATCH v2 4/4] block: Close block exports in two steps
Message-ID: <20201215153405.GF8185@merkur.fritz.box>
References: <20201214170519.223781-1-slp@redhat.com>
 <20201214170519.223781-5-slp@redhat.com>
MIME-Version: 1.0
In-Reply-To: <20201214170519.223781-5-slp@redhat.com>
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=kwolf@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On 14.12.2020 at 18:05, Sergio Lopez wrote:
> There's a cross-dependency between closing the block exports and
> draining the block layer. The latter requires that we close all exports'
> client connections to ensure they won't queue more requests, but the
> exports may have coroutines yielding in the block layer, which implies
> they can't be fully closed until we drain it.

A coroutine that yielded must have some way to be reentered. So I guess
the question becomes why these coroutines aren't reentered until drain.
We do process events:

    AIO_WAIT_WHILE(NULL, blk_exp_has_type(type));

So in theory, anything that would finalise the block export closing
should still execute.

What is the difference that drain makes compared to a simple
AIO_WAIT_WHILE, so that coroutines are reentered during drain, but not
during AIO_WAIT_WHILE?

This is an even more interesting question because the NBD server is
neither a block node nor a BdrvChildClass implementation, so it
shouldn't even notice a drain operation.

Kevin

> To break this cross-dependency, this change adds a "bool wait"
> argument to blk_exp_close_all() and blk_exp_close_all_type(), so
> callers can decide whether they want to wait for the exports to be
> fully quiesced, or just return after requesting them to shut down.
> 
> Then, in bdrv_close_all we make two calls, one without waiting to
> close all client connections, and another after draining the block
> layer, this time waiting for the exports to be fully quiesced.
> 
> RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1900505
> Signed-off-by: Sergio Lopez <slp@redhat.com>
> ---
>  block.c                   | 20 +++++++++++++++++++-
>  block/export/export.c     | 10 ++++++----
>  blockdev-nbd.c            |  2 +-
>  include/block/export.h    |  4 ++--
>  qemu-nbd.c                |  2 +-
>  stubs/blk-exp-close-all.c |  2 +-
>  6 files changed, 30 insertions(+), 10 deletions(-)
> 
> diff --git a/block.c b/block.c
> index bc8a66ab6e..41db70ac07 100644
> --- a/block.c
> +++ b/block.c
> @@ -4472,13 +4472,31 @@ static void bdrv_close(BlockDriverState *bs)
>  void bdrv_close_all(void)
>  {
>      assert(job_next(NULL) == NULL);
> -    blk_exp_close_all();
> +
> +    /*
> +     * There's a cross-dependency between closing the block exports and
> +     * draining the block layer. The latter needs that we close all export's
> +     * client connections to ensure they won't queue more requests, but the
> +     * exports may have coroutines yielding in the block layer, which implies
> +     * they can't be fully closed until we drain it.
> +     *
> +     * Make a first call to close all export's client connections, without
> +     * waiting for each export to be fully quiesced.
> +     */
> +    blk_exp_close_all(false);
>  
>      /* Drop references from requests still in flight, such as canceled block
>       * jobs whose AIO context has not been polled yet */
>      bdrv_drain_all();
>  
>      blk_remove_all_bs();
> +
> +    /*
> +     * Make a second call to shut down the exports, this time waiting for them
> +     * to be fully quiesced.
> +     */
> +    blk_exp_close_all(true);
> +
>      blockdev_close_all_bdrv_states();
>  
>      assert(QTAILQ_EMPTY(&all_bdrv_states));
> diff --git a/block/export/export.c b/block/export/export.c
> index bad6f21b1c..0124ebd9f9 100644
> --- a/block/export/export.c
> +++ b/block/export/export.c
> @@ -280,7 +280,7 @@ static bool blk_exp_has_type(BlockExportType type)
>  }
>  
>  /* type == BLOCK_EXPORT_TYPE__MAX for all types */
> -void blk_exp_close_all_type(BlockExportType type)
> +void blk_exp_close_all_type(BlockExportType type, bool wait)
>  {
>      BlockExport *exp, *next;
>  
> @@ -293,12 +293,14 @@ void blk_exp_close_all_type(BlockExportType type)
>          blk_exp_request_shutdown(exp);
>      }
>  
> -    AIO_WAIT_WHILE(NULL, blk_exp_has_type(type));
> +    if (wait) {
> +        AIO_WAIT_WHILE(NULL, blk_exp_has_type(type));
> +    }
>  }
>  
> -void blk_exp_close_all(void)
> +void blk_exp_close_all(bool wait)
>  {
> -    blk_exp_close_all_type(BLOCK_EXPORT_TYPE__MAX);
> +    blk_exp_close_all_type(BLOCK_EXPORT_TYPE__MAX, wait);
>  }
>  
>  void qmp_block_export_add(BlockExportOptions *export, Error **errp)
> diff --git a/blockdev-nbd.c b/blockdev-nbd.c
> index d8443d235b..d71d4da7c2 100644
> --- a/blockdev-nbd.c
> +++ b/blockdev-nbd.c
> @@ -266,7 +266,7 @@ void qmp_nbd_server_stop(Error **errp)
>          return;
>      }
>  
> -    blk_exp_close_all_type(BLOCK_EXPORT_TYPE_NBD);
> +    blk_exp_close_all_type(BLOCK_EXPORT_TYPE_NBD, true);
>  
>      nbd_server_free(nbd_server);
>      nbd_server = NULL;
> diff --git a/include/block/export.h b/include/block/export.h
> index 7feb02e10d..71c25928ce 100644
> --- a/include/block/export.h
> +++ b/include/block/export.h
> @@ -83,7 +83,7 @@ BlockExport *blk_exp_find(const char *id);
>  void blk_exp_ref(BlockExport *exp);
>  void blk_exp_unref(BlockExport *exp);
>  void blk_exp_request_shutdown(BlockExport *exp);
> -void blk_exp_close_all(void);
> -void blk_exp_close_all_type(BlockExportType type);
> +void blk_exp_close_all(bool wait);
> +void blk_exp_close_all_type(BlockExportType type, bool wait);
>  
>  #endif
> diff --git a/qemu-nbd.c b/qemu-nbd.c
> index a7075c5419..928f4466f6 100644
> --- a/qemu-nbd.c
> +++ b/qemu-nbd.c
> @@ -1122,7 +1122,7 @@ int main(int argc, char **argv)
>      do {
>          main_loop_wait(false);
>          if (state == TERMINATE) {
> -            blk_exp_close_all();
> +            blk_exp_close_all(true);
>              state = TERMINATED;
>          }
>      } while (state != TERMINATED);
> diff --git a/stubs/blk-exp-close-all.c b/stubs/blk-exp-close-all.c
> index 1c71316763..ecd0ce611f 100644
> --- a/stubs/blk-exp-close-all.c
> +++ b/stubs/blk-exp-close-all.c
> @@ -2,6 +2,6 @@
>  #include "block/export.h"
>  
>  /* Only used in programs that support block exports (libblockdev.fa) */
> -void blk_exp_close_all(void)
> +void blk_exp_close_all(bool wait)
>  {
>  }
> -- 
> 2.26.2
> 



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 15:49:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 15:49:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54335.94334 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpCa0-0005So-El; Tue, 15 Dec 2020 15:49:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54335.94334; Tue, 15 Dec 2020 15:49:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpCa0-0005Sh-BD; Tue, 15 Dec 2020 15:49:36 +0000
Received: by outflank-mailman (input) for mailman id 54335;
 Tue, 15 Dec 2020 15:49:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SBK9=FT=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kpCZz-0005Sc-8C
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 15:49:35 +0000
Received: from mail-wr1-f66.google.com (unknown [209.85.221.66])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2cf23b3b-687b-48fa-b32e-96e2e03dbdef;
 Tue, 15 Dec 2020 15:49:34 +0000 (UTC)
Received: by mail-wr1-f66.google.com with SMTP id y17so20314593wrr.10
 for <xen-devel@lists.xenproject.org>; Tue, 15 Dec 2020 07:49:34 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id 90sm38267609wrl.60.2020.12.15.07.49.32
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 15 Dec 2020 07:49:32 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2cf23b3b-687b-48fa-b32e-96e2e03dbdef
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=Fc/eblxcEeudcxVbRfSaIJkDPIOYUdk8ioWRzw/Kc8s=;
        b=R/Hes2Su2goTaqNhj2G3BmINnWjZc/3evZL1N1k69YkQV5N0RzXIWScuPNmHLq29K2
         LJz2inQwaoBcyQt6+6DQISyjuRA/udPzTSLUv8DMk/o4EseVLcK49KEtpYl8giftWaEL
         vSeuXImwb7/1zKEPAuoWtU9d3uymF9OKLUDZEp3oMmYdPMnbc5C0tlufOsVYUlDUkXbU
         y6HetnFwSg1Ri08lO3w5MQ0M16o+d1WIzRUWgHdRzfWjVE/U6va+t2UUKvxZkwww7pb+
         p5AamHQ+OT6hYJBTVLf7lzUpT+0mCL5Hn22GAUhZ0i+3W4x+g+O48voG3Z2DTco5+Tr8
         aSEQ==
X-Gm-Message-State: AOAM531mdJKoG3ABrwN2v4zJZnqDvNSLy1HnBi1J3wqJnarhJTxqXfoV
	YQ2UlSMB5Ow1wdgEurzJ9rw=
X-Google-Smtp-Source: ABdhPJzauL+wWb0NvR0IddQ/VRyo5VPDepb1dUrJVQ3k7d9xvEh7Uzc23e+ln+udUaOerpqMwTircQ==
X-Received: by 2002:a5d:4491:: with SMTP id j17mr3360831wrq.78.1608047373780;
        Tue, 15 Dec 2020 07:49:33 -0800 (PST)
Date: Tue, 15 Dec 2020 15:49:31 +0000
From: Wei Liu <wl@xen.org>
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <ian.jackson@eu.citrix.com>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH] examples: Add PVH example to config example list
Message-ID: <20201215154931.xdifgqo4y24cm2ap@liuwe-devbox-debian-v2>
References: <X9gg9Ph2na22YKdj@mattapan.m5p.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <X9gg9Ph2na22YKdj@mattapan.m5p.com>
User-Agent: NeoMutt/20180716

On Mon, Dec 14, 2020 at 06:35:32PM -0800, Elliott Mitchell wrote:
> Somewhat helpful to actually install the example configurations.
> 
> Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 15:55:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 15:55:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54341.94346 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpCfX-0006M7-4F; Tue, 15 Dec 2020 15:55:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54341.94346; Tue, 15 Dec 2020 15:55:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpCfX-0006M0-14; Tue, 15 Dec 2020 15:55:19 +0000
Received: by outflank-mailman (input) for mailman id 54341;
 Tue, 15 Dec 2020 15:55:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SBK9=FT=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kpCfW-0006Lv-AK
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 15:55:18 +0000
Received: from mail-wr1-f53.google.com (unknown [209.85.221.53])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2ab1e6a3-f180-460d-a483-a1b602cf2312;
 Tue, 15 Dec 2020 15:55:17 +0000 (UTC)
Received: by mail-wr1-f53.google.com with SMTP id a12so20333576wrv.8
 for <xen-devel@lists.xenproject.org>; Tue, 15 Dec 2020 07:55:17 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id n3sm39392747wrw.61.2020.12.15.07.55.15
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 15 Dec 2020 07:55:16 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2ab1e6a3-f180-460d-a483-a1b602cf2312
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=+OD5sbekjUDQfZrpR66mCLgwBNZhOjRrfTta/GMA2Lo=;
        b=gfEF3z3faFOY5BxM/HMipLtbfht3FKhfp+HNMYxQyVmGTZvT87u0b5zxMqKi4pEu9U
         S5zIr9p2qVXckBLeUuznTXLreRtpB3kkgYT3MuhEUdsnaRoQeJOBt9UVnCyPhEK405Ak
         y19bfpS1ZT40JvR5SrQeljJE7K6o10y4efZFR6qOzKGc0PsMLjZeFjvdkxZTvsmwXNVi
         whBY8TEvbayJ9Kutk2ohgWo3BZ1ul93ZbFU0X37d0sfOj0v5Nrs+eWANsgH7tN6swmIG
         qCkNT33ViV5LEwZjuaDjKsuDQA5eJV4bQQs46jxLsiGmNH5qSNIGehRLOORUBuc1C2vG
         6g5A==
X-Gm-Message-State: AOAM531e7/i2VGsHYAOYHa2GFRjs6Jd2/qtL6N/AyfP9zZs15N6pXwnb
	M8eC3R6c1t4d01bR4ihXA4A=
X-Google-Smtp-Source: ABdhPJwWA0ZSaerNPmeoMDrvCF/3i9/BFri1QrxJiX0qH899NZSlzwSvE1NsBUq2X1/FRkuEZrB81A==
X-Received: by 2002:a5d:4c4e:: with SMTP id n14mr35035673wrt.209.1608047716642;
        Tue, 15 Dec 2020 07:55:16 -0800 (PST)
Date: Tue, 15 Dec 2020 15:55:14 +0000
From: Wei Liu <wl@xen.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH] tools/xenstore: rework path length check
Message-ID: <20201215155514.3yu2frm76vc3lxbf@liuwe-devbox-debian-v2>
References: <20201215150411.9987-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201215150411.9987-1-jgross@suse.com>
User-Agent: NeoMutt/20180716

On Tue, Dec 15, 2020 at 04:04:11PM +0100, Juergen Gross wrote:
> The different fixed limits for absolute and relative path lengths of
> Xenstore nodes make it possible to create per-domain nodes via
> absolute paths which are not accessible using relative paths, as the
> two limits differ by 1024 characters.
> 
> Instead of these weird limits, use only one limit, which applies to the
> relative path length of per-domain nodes and to the absolute path
> length of all other nodes. This means the path length check is
> applied to the path after removing a possible leading
> "/local/domain/<n>/", with <n> being a domain id.
> 
> There have been requests to be able to limit path lengths even
> further, so an additional quota is added which can be applied to path
> lengths. It defaults to XENSTORE_REL_PATH_MAX (2048), but can be
> set to lower values. This is done via the new "-M" or "--path-max"
> option when invoking xenstored.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Reviewed-by: Paul Durrant <paul@xen.org>
> Acked-by: Julien Grall <jgrall@amazon.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:00:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:00:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54347.94357 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpCk4-0006Zy-Ny; Tue, 15 Dec 2020 16:00:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54347.94357; Tue, 15 Dec 2020 16:00:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpCk4-0006Zr-KQ; Tue, 15 Dec 2020 16:00:00 +0000
Received: by outflank-mailman (input) for mailman id 54347;
 Tue, 15 Dec 2020 15:59:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SBK9=FT=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kpCk2-0006Zl-Tr
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 15:59:58 +0000
Received: from mail-wm1-f67.google.com (unknown [209.85.128.67])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c9e9bbd7-9631-42a8-b6db-c9924111c3bc;
 Tue, 15 Dec 2020 15:59:58 +0000 (UTC)
Received: by mail-wm1-f67.google.com with SMTP id k10so17341085wmi.3
 for <xen-devel@lists.xenproject.org>; Tue, 15 Dec 2020 07:59:58 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id h14sm39492140wrx.37.2020.12.15.07.59.56
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 15 Dec 2020 07:59:56 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c9e9bbd7-9631-42a8-b6db-c9924111c3bc
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=tFEk86mVqO4FH5WDNJH6ijNTLh5WOam/pXSe+O3TdoM=;
        b=oK+l2k4tiBS9JsP7dMJ6zsLxCjeVzU/Gmv1j9WSsBK9lowDnpimTTlYtuLFeeMXJyb
         TBgRjKdlENZR0hQPcAVX3qCjCM/Yo/G5I1w9WqvD1DWmr1DiijF3WgDYUR3om5/97MjK
         J2ZdtLmydvN+5Xpe5Ojh2MBAEc8NVKQBwyUorqCI24GUBovaKphW2nxKcxftEhcikcAW
         QYH8zUkSyqivuAghbDV4m/+7CxznzSyPLRToYFMdC8KOlokEjvKfnxw9kAuMc+d+WeTb
         WhNrvnuHm5wTuOyM8wd2OL/QAFpubKajJaPdR+J2o/ydiOfrRlEjbAEoUJa2DCHQv5KC
         nPOQ==
X-Gm-Message-State: AOAM533Witg9Zjj8Jyyd2UHmiPXSAB5I27TSB7y4pfvyeHXKbI8IxzvD
	W//5hi8gSrkSANoCEJi6plE=
X-Google-Smtp-Source: ABdhPJwmP+SVbZsLVElNYmrgtIPYjfxEBlPJvPBwnI/ZlHuQ65/EXhNVwVGYbokxa6yVFhVUjlEm6w==
X-Received: by 2002:a7b:c4d5:: with SMTP id g21mr33543203wmk.92.1608047997250;
        Tue, 15 Dec 2020 07:59:57 -0800 (PST)
Date: Tue, 15 Dec 2020 15:59:54 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v6 01/25] libxl: s/pcidev/pci and remove
 DEFINE_DEVICE_TYPE_STRUCT_X
Message-ID: <20201215155954.jqcozwhko2pdebgj@liuwe-devbox-debian-v2>
References: <20201208193033.11306-1-paul@xen.org>
 <20201208193033.11306-2-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201208193033.11306-2-paul@xen.org>
User-Agent: NeoMutt/20180716

On Tue, Dec 08, 2020 at 07:30:09PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> The seemingly arbitrary use of 'pci' and 'pcidev' in the code in libxl_pci.c
> is confusing and also compromises use of some macros used for other device
> types. Indeed it seems that DEFINE_DEVICE_TYPE_STRUCT_X exists solely because
> of this duality.
> 
> This patch purges use of 'pcidev' from the libxl internal code, but
> unfortunately the 'pcidevs' and 'num_pcidevs' fields in 'libxl_domain_config'
> are part of the API and need to be retained to avoid breaking callers,
> particularly libvirt.
> 
> DEFINE_DEVICE_TYPE_STRUCT_X is still removed to avoid the special case in
> libxl_pci.c but DEFINE_DEVICE_TYPE_STRUCT is given an extra 'array' argument
> which is used to identify the fields in 'libxl_domain_config' relating to
> the device type.
> 
> NOTE: Some of the more gross formatting errors (such as lack of spaces after
>       keywords) that came into context have been fixed in libxl_pci.c.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:00:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:00:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54353.94382 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpCkY-0007wJ-4A; Tue, 15 Dec 2020 16:00:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54353.94382; Tue, 15 Dec 2020 16:00:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpCkY-0007wC-0y; Tue, 15 Dec 2020 16:00:30 +0000
Received: by outflank-mailman (input) for mailman id 54353;
 Tue, 15 Dec 2020 16:00:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SBK9=FT=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kpCkW-0007vo-Rp
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:00:28 +0000
Received: from mail-wm1-f65.google.com (unknown [209.85.128.65])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 77496efb-9335-4a90-ab04-9812846fb068;
 Tue, 15 Dec 2020 16:00:28 +0000 (UTC)
Received: by mail-wm1-f65.google.com with SMTP id a6so17339075wmc.2
 for <xen-devel@lists.xenproject.org>; Tue, 15 Dec 2020 08:00:28 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id o13sm35484610wmc.44.2020.12.15.08.00.26
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 15 Dec 2020 08:00:26 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 77496efb-9335-4a90-ab04-9812846fb068
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=KyopMRbfgwAezlKBm0gOz4p+tD0YFM8rVdny59R6zWU=;
        b=WKFotyJcebdG5C0e4C2FDwUJiblZHjD2NPmiQ39ClQwor50hyzmbqrCYoJVHL8PCad
         sAjsWkLAxF6oUv4vNvPN1uiUgpqPJ6FKNvyzkencbPMp46ZneZ3XaaJHr6+rBnTBpkca
         QNGV+do/66bqQPIWzN5uqd6iiyX5TzWcSaDcO0RmR/+UV9twYqs9tR+XP3iknL8Lkqlf
         5LaTBAkbTjWna1fZqBSCwYjHvl2u/KrJdatt3kd0/M16Hwl/SUaliNpaS5MM8hL6TryL
         C8w+GNnBEvPSnfw6GM1z2Sfi4o22lENvNxebMFnB5oFob8lup59NDZ08mx9qnQLOwzi5
         OBcQ==
X-Gm-Message-State: AOAM5311oiTNaHYYj+YSTE9kg9J9kwgnLPG14zIH0utX7KGEsYmlvyDC
	E6W6LnM1aXdSoHnfiwP0Wik=
X-Google-Smtp-Source: ABdhPJy4myQNC532aK6f0lZnx6f/7kHVrjYCq8q+aor+ciAldauHLl0xpkFipCOImbKJb98jrEXxBw==
X-Received: by 2002:a1c:3cd5:: with SMTP id j204mr32942392wma.53.1608048027450;
        Tue, 15 Dec 2020 08:00:27 -0800 (PST)
Date: Tue, 15 Dec 2020 16:00:25 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v6 02/25] xl: s/pcidev/pci where possible
Message-ID: <20201215160025.hlasdl7yijl43baz@liuwe-devbox-debian-v2>
References: <20201208193033.11306-1-paul@xen.org>
 <20201208193033.11306-3-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201208193033.11306-3-paul@xen.org>
User-Agent: NeoMutt/20180716

On Tue, Dec 08, 2020 at 07:30:10PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> To improve naming consistency, this patch replaces occurrences of 'pcidev'
> with 'pci'. The only remaining use of the term should be in relation to
> 'libxl_domain_config', where there are fields named 'pcidevs' and 'num_pcidevs'.
> 
> Purely cosmetic. No functional change.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:03:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:03:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54361.94393 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpCn2-0008CT-IF; Tue, 15 Dec 2020 16:03:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54361.94393; Tue, 15 Dec 2020 16:03:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpCn2-0008CM-FH; Tue, 15 Dec 2020 16:03:04 +0000
Received: by outflank-mailman (input) for mailman id 54361;
 Tue, 15 Dec 2020 16:03:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SBK9=FT=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kpCn1-0008CH-9c
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:03:03 +0000
Received: from mail-wr1-f66.google.com (unknown [209.85.221.66])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 32c9e6d0-c5df-44d7-ac73-9696292c3384;
 Tue, 15 Dec 2020 16:03:02 +0000 (UTC)
Received: by mail-wr1-f66.google.com with SMTP id d13so2133581wrc.13
 for <xen-devel@lists.xenproject.org>; Tue, 15 Dec 2020 08:03:02 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id h5sm40769128wrp.56.2020.12.15.08.03.00
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 15 Dec 2020 08:03:01 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 32c9e6d0-c5df-44d7-ac73-9696292c3384
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=wSFx3Z50lGO0G6CS6ngFrAe0zqu2JkGjbDZ3jJk9+mw=;
        b=Bod24dKRtf7wzeGh54P7szpTRKAGKerFfl20lwlURxTElWSDZNKD1sGi6uK1a6RwA1
         bFpTck1Oa2jsxvm9TMS4wJFwtpvyTFvhFHo45tf3CF325RSnsK8bRC00x79OhSUrJ6Rt
         J7ENZZ0olWrHdjwyWqZ0rzh+7g7De3HaSeJ+kFsIMOMSl0T+t/vT8jThRkySiKaCEMrP
         wdkIokvNMaeOkEuTpBGu8KrKYYE3Ttxs6P4yQt4RVDAFPB8L9I2K73bSEVfUDPhTxbKN
         YRj/Ua7UOB2e/x6gDouXkPMHSXCn3KkYtCfxP7S6A3pam2RdsNDN7XBxEpwy7wwph3e2
         J0YA==
X-Gm-Message-State: AOAM5335jpR70Fl1OkS03sVkAx5Nb+ToZerPvHOSmV6Cze40RKQx7FQG
	7PDlHLIOsOqaAGQHZAEHIUE=
X-Google-Smtp-Source: ABdhPJwfLovFBdxbBGDN8r5bRurBcx3AA2Aax3zEXztJ3ofSGWhx1ivcgLCAjEhIYnJPiaWVCIkLHQ==
X-Received: by 2002:adf:8342:: with SMTP id 60mr35193993wrd.140.1608048181683;
        Tue, 15 Dec 2020 08:03:01 -0800 (PST)
Date: Tue, 15 Dec 2020 16:02:59 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v6 21/25] libxl: convert internal functions in
 libxl_pci.c...
Message-ID: <20201215160259.mb74mbmhjg2ioamg@liuwe-devbox-debian-v2>
References: <20201208193033.11306-1-paul@xen.org>
 <20201208193033.11306-22-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201208193033.11306-22-paul@xen.org>
User-Agent: NeoMutt/20180716

On Tue, Dec 08, 2020 at 07:30:29PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> ... to use 'libxl_pci_bdf' where appropriate.
> 
> No API change.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:03:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:03:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54363.94406 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpCnP-0008I2-SL; Tue, 15 Dec 2020 16:03:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54363.94406; Tue, 15 Dec 2020 16:03:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpCnP-0008Hu-Oi; Tue, 15 Dec 2020 16:03:27 +0000
Received: by outflank-mailman (input) for mailman id 54363;
 Tue, 15 Dec 2020 16:03:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SBK9=FT=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kpCnP-0008Hn-0H
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:03:27 +0000
Received: from mail-wm1-f66.google.com (unknown [209.85.128.66])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 019b11cd-ac16-4ceb-941d-b4b1f04dc4a9;
 Tue, 15 Dec 2020 16:03:25 +0000 (UTC)
Received: by mail-wm1-f66.google.com with SMTP id q75so19005979wme.2
 for <xen-devel@lists.xenproject.org>; Tue, 15 Dec 2020 08:03:25 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id o17sm40913983wrg.32.2020.12.15.08.03.24
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 15 Dec 2020 08:03:24 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 019b11cd-ac16-4ceb-941d-b4b1f04dc4a9
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=AxpDV0Qn7Vpcq2ewD71QdjCevENkjLkMwzAak0AJxfo=;
        b=qG4K81Gs2Qh3hTyiXKoRuEMuSBspfc7dKfiO6gxwwiiQNVtdHNH2NyPo4ea+8Jhc8j
         bmAIGwNikZClKFd1L/2xxPOW6EXWuAjxj45jan4isBODOb0k8lIRcW0n3SnffaCX2UWG
         Gx1kwUXxKT2bLUCa6ELQTj/0R2hNWCzifpMIPV9Kfo5CvXEbWLLZhCUQi3i6FEf68Q9L
         9zrwSI2jP9VV/460HRKTEEH9pHHFkBpNbv5e3amMaTIlVJXgQcTjwVBzu2uO13MN/zQD
         66iggDZpZfqM7YD3x+rrjaufhZKIfSXkxsyudGfrT5LDHX+bsZbMO6PAe+U9J2LR2b5O
         16qg==
X-Gm-Message-State: AOAM532FY8OWk9rGQxwPb7b6jsQmniwg0YBrEfH94CXKiH4BOCgI79fO
	X/t3W1YTiuJ/ZyJEkGniHbc=
X-Google-Smtp-Source: ABdhPJyj3BHvzVaNjiv28FE2WOrVaHRUJ+bdGiNQtIpzCX4LrJxfo+EIOXXoR/eLb89bvoR2+SSeLA==
X-Received: by 2002:a1c:ed13:: with SMTP id l19mr3817895wmh.141.1608048205151;
        Tue, 15 Dec 2020 08:03:25 -0800 (PST)
Date: Tue, 15 Dec 2020 16:03:23 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v6 22/25] libxl: introduce
 libxl_pci_bdf_assignable_add/remove/list/list_free(), ...
Message-ID: <20201215160323.sb42miugvdchacj2@liuwe-devbox-debian-v2>
References: <20201208193033.11306-1-paul@xen.org>
 <20201208193033.11306-23-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201208193033.11306-23-paul@xen.org>
User-Agent: NeoMutt/20180716

On Tue, Dec 08, 2020 at 07:30:30PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> which support naming and use 'libxl_pci_bdf' rather than 'libxl_device_pci',
> as replacements for libxl_device_pci_assignable_add/remove/list/list_free().
> 
> libxl_pci_bdf_assignable_add() takes a 'name' parameter which is stored in
> xenstore and facilitates two additional functions added by this patch:
> libxl_pci_bdf_assignable_name2bdf() and libxl_pci_bdf_assignable_bdf2name().
> Currently there are no callers of these two functions. They will be added in
> a subsequent patch.
> 
> libxl_device_pci_assignable_add/remove/list/list_free() are left in place
> for compatibility but are re-implemented in terms of the newly introduced
> functions.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> ---
> Cc: Ian Jackson <iwj@xenproject.org>
> Cc: Wei Liu <wl@xen.org>
> Cc: Anthony PERARD <anthony.perard@citrix.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:03:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:03:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54369.94418 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpCnl-0008OA-AG; Tue, 15 Dec 2020 16:03:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54369.94418; Tue, 15 Dec 2020 16:03:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpCnl-0008O3-79; Tue, 15 Dec 2020 16:03:49 +0000
Received: by outflank-mailman (input) for mailman id 54369;
 Tue, 15 Dec 2020 16:03:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SBK9=FT=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kpCnk-0008Nw-7b
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:03:48 +0000
Received: from mail-wm1-f66.google.com (unknown [209.85.128.66])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 77f90baf-5a0a-4be8-858f-52affeb32876;
 Tue, 15 Dec 2020 16:03:47 +0000 (UTC)
Received: by mail-wm1-f66.google.com with SMTP id 190so7118269wmz.0
 for <xen-devel@lists.xenproject.org>; Tue, 15 Dec 2020 08:03:47 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id l1sm37788545wrq.64.2020.12.15.08.03.46
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 15 Dec 2020 08:03:46 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 77f90baf-5a0a-4be8-858f-52affeb32876
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=/XS+iYso1pLvM2m44w9x3fZSoxfXT53+l9cFnhg15RY=;
        b=p0rtbXqHLDsWBcmz+/XCZdpZqLTqfVzOL537uEWU2wBL/VhQ1svYzdVV/46XcL8TRL
         NQFj5Vf6VMNvJPUip8xxQvrm3fWgOFdNWcdHJY29KZ5bSqkE3TDwR1/BuYuQawQ69NPV
         VFC65vPhXHNS+1orjnqIjCES/ta2PcWWUO7hksEN0R6a8ndM7LuvrPb2MwXBKRHhlARX
         8nw5fY142OfCzsYAFUdNtkN25VNWWyRv3KQLMVgicBCRgFbS6V1/GM0wuqWgSxlAT2/i
         i6SD6YPprgfQGh+EbHG+ImTf0tSFGPotCs/Z4LWbOELa3TCRO6CQdklE7VIovvedrNSS
         8yPg==
X-Gm-Message-State: AOAM532S3zsE3xBMmRCtxXVKaaZ9IH63APB8sQqrySt3yfsncYXokVw/
	yDiRsb/Ml5IUMvlGM4EqK+8=
X-Google-Smtp-Source: ABdhPJyo/9Njv2vs85nX4cq/IsXRnEE7iT/+fLW+8A5VW5kWQ2EyeNQ3fJUy5Pzl8ioT6/V5BgXcXQ==
X-Received: by 2002:a1c:dd07:: with SMTP id u7mr17937542wmg.51.1608048226859;
        Tue, 15 Dec 2020 08:03:46 -0800 (PST)
Date: Tue, 15 Dec 2020 16:03:44 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v6 23/25] xl: support naming of assignable devices
Message-ID: <20201215160344.4j333d23wj6egmyq@liuwe-devbox-debian-v2>
References: <20201208193033.11306-1-paul@xen.org>
 <20201208193033.11306-24-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201208193033.11306-24-paul@xen.org>
User-Agent: NeoMutt/20180716

On Tue, Dec 08, 2020 at 07:30:31PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> This patch converts libxl to use libxl_pci_bdf_assignable_add/remove/list/
> list_free() rather than libxl_device_pci_assignable_add/remove/list/
> list_free(), which then allows naming of assignable devices to be supported.
> 
> With this patch applied 'xl pci-assignable-add' will take an optional '--name'
> parameter, 'xl pci-assignable-remove' can be passed either a BDF or a name and
> 'xl pci-assignable-list' will take an optional '--show-names' flag which
> determines whether names are displayed in its output.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:09:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:09:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54377.94430 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpCtd-0000I5-0u; Tue, 15 Dec 2020 16:09:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54377.94430; Tue, 15 Dec 2020 16:09:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpCtc-0000Hy-TJ; Tue, 15 Dec 2020 16:09:52 +0000
Received: by outflank-mailman (input) for mailman id 54377;
 Tue, 15 Dec 2020 16:09:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Vckb=FT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kpCtb-0000Ht-Km
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:09:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e9c349bf-a42c-442e-8d64-94bb4bce43b3;
 Tue, 15 Dec 2020 16:09:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4E443ACE0;
 Tue, 15 Dec 2020 16:09:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e9c349bf-a42c-442e-8d64-94bb4bce43b3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608048589; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=Fs2bOX9OQpj4Kvrk0iflV9QhGelJlvbitAZKY2EqpUc=;
	b=h3klwOCA764KDvdLGiJll41rgg7q1E+Uav5Uwf8dq8i9PvrCjkavRbeJDhFqvQFPWMB2Cc
	ep4911oUKZOcKAvu8z+ylOXU8e5o4P31Bw5z7qJ0suXRsk7XspaKMooGgtdYRXJDeTnw0F
	sAFpe3wSIWpd9ZmnXOqyZbSHTkSDOJc=
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/4] x86: XSA-348 follow-up
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <f4179ee3-56e4-ab18-7aae-55281c4d4412@suse.com>
Date: Tue, 15 Dec 2020 17:09:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The changes for this XSA introduced an inefficiency and could
do with some further hardening. Addressing this as part of the
XSA itself wasn't sensible, though (but you may take this as
an explanation of why this series starts out at v2).

1: x86: verify function type (and maybe attribute) in switch_stack_and_jump()
2: x86: clobber registers in switch_stack_and_jump() when !LIVEPATCH
3: x86/PV: avoid double stack reset during schedule tail handling
4: livepatch: adjust a stale comment

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:11:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:11:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54381.94442 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpCuv-00014s-CY; Tue, 15 Dec 2020 16:11:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54381.94442; Tue, 15 Dec 2020 16:11:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpCuv-00014l-88; Tue, 15 Dec 2020 16:11:13 +0000
Received: by outflank-mailman (input) for mailman id 54381;
 Tue, 15 Dec 2020 16:11:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SBK9=FT=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kpCuu-00014f-7h
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:11:12 +0000
Received: from mail-wm1-f68.google.com (unknown [209.85.128.68])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 088da368-8401-4249-8868-31ef8191303b;
 Tue, 15 Dec 2020 16:11:11 +0000 (UTC)
Received: by mail-wm1-f68.google.com with SMTP id c133so5361356wme.4
 for <xen-devel@lists.xenproject.org>; Tue, 15 Dec 2020 08:11:11 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id z11sm39199208wmc.39.2020.12.15.08.11.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 15 Dec 2020 08:11:09 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 088da368-8401-4249-8868-31ef8191303b
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=Tk04yatt8pcAr2KqRJUeNudOQt102sFCS7Ucr7jZCsg=;
        b=TbLqtXtQ8klLP569xI5R6fP9i5AK8X8FwI9Ul/UA5QxlrHxlYlNt3wO0SoNQlO+N25
         0HmhL6nUqcSTQzgydtJmZKZGaW0Z/UWfLh4kx0rycUWb43WzOcugF2lOj5jC/WElDW9/
         4jzfZhTXNlrzBm0PTNS6WE9GbKvgWB0zT8PTyso0UoluG4xzvKCGtrtBL68dHVnqFZh9
         qrzDVTLWeyK6RNJusQWyQ6P8angapde12L8+Ba/rH6GK2UQ5EDgJN6nYS/m2hRU8CRJf
         y6u5a85TeYIfxJVGcjpXVOkENBpZyjdihS6tbZEX31dFAzKg5Yl8Magxs77o7P6x5Usm
         bNCQ==
X-Gm-Message-State: AOAM53050GTXYFKM/9uZDlQKOvoBFNDip+7plIhfACg3hXtnVn5SeW+t
	FDWAEHdfM5ybsd3e1w7v2jg=
X-Google-Smtp-Source: ABdhPJyO27aFWo3trYWQ9joXKAjSmIn8RTpBZsPp7118xQJLyyvsOhINIoC2wXiZ7KC+4njwh+Wa+Q==
X-Received: by 2002:a1c:e042:: with SMTP id x63mr33984009wmg.68.1608048670637;
        Tue, 15 Dec 2020 08:11:10 -0800 (PST)
Date: Tue, 15 Dec 2020 16:11:08 +0000
From: Wei Liu <wl@xen.org>
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH v1 1/3] tools: allocate bitmaps in units of unsigned long
Message-ID: <20201215161108.7irspc5rtl72r57o@liuwe-devbox-debian-v2>
References: <20201209155452.28376-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201209155452.28376-1-olaf@aepfle.de>
User-Agent: NeoMutt/20180716

On Wed, Dec 09, 2020 at 04:54:49PM +0100, Olaf Hering wrote:
> Allocate enough memory so that the returned pointer can be safely
> accessed as an array of unsigned long.
> 
> The actual bitmap size in units of bytes, as returned by bitmap_size,
> remains unchanged.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Wei Liu <wl@xen.org>

I can see where you're coming from. This (internal) API's returned
pointer is being assigned to unsigned long *.

> ---
>  tools/libs/ctrl/xc_bitops.h | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/libs/ctrl/xc_bitops.h b/tools/libs/ctrl/xc_bitops.h
> index 3d3a09772a..d6c5ea5138 100644
> --- a/tools/libs/ctrl/xc_bitops.h
> +++ b/tools/libs/ctrl/xc_bitops.h
> @@ -21,7 +21,10 @@ static inline unsigned long bitmap_size(unsigned long nr_bits)
>  
>  static inline void *bitmap_alloc(unsigned long nr_bits)
>  {
> -    return calloc(1, bitmap_size(nr_bits));
> +    unsigned long longs;
> +
> +    longs = (nr_bits + BITS_PER_LONG - 1) / BITS_PER_LONG;
> +    return calloc(longs, sizeof(unsigned long));
>  }
>  
>  static inline void bitmap_set(void *addr, unsigned long nr_bits)


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:11:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:11:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54386.94454 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpCvR-0001Ae-Me; Tue, 15 Dec 2020 16:11:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54386.94454; Tue, 15 Dec 2020 16:11:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpCvR-0001AX-Ih; Tue, 15 Dec 2020 16:11:45 +0000
Received: by outflank-mailman (input) for mailman id 54386;
 Tue, 15 Dec 2020 16:11:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Vckb=FT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kpCvQ-0001AR-FR
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:11:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bd129208-05c3-4fc6-b402-75b9d3a05474;
 Tue, 15 Dec 2020 16:11:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A67B5ACE0;
 Tue, 15 Dec 2020 16:11:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bd129208-05c3-4fc6-b402-75b9d3a05474
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608048702; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=S96oFG2zQFoWVyvd6bLhm9V+gi64IB8HfqT2iVAsePg=;
	b=nVg7Liwi5fbMsuWKylUhb8KgG/I5InmeJBVd3GRBpzO5sqNwxzSy9Mod9t0EhwDMddhE+P
	CfTFRuQW7AyRjcUFfFy7uZZWt/CvWW6J9nPd38UXs5MOupL+4n5tDP9cRpTMH4s3nmh1dw
	GEbNViEil7/It/K3gXvwzh5ROdy0ECI=
Subject: [PATCH v2 1/4] x86: verify function type (and maybe attribute) in
 switch_stack_and_jump()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <f4179ee3-56e4-ab18-7aae-55281c4d4412@suse.com>
Message-ID: <792c442d-c05a-7a00-c807-b94a54bca94f@suse.com>
Date: Tue, 15 Dec 2020 17:11:41 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <f4179ee3-56e4-ab18-7aae-55281c4d4412@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

It is imperative that the functions passed here take no arguments,
return no values, and don't return in the first place. While the type
can be checked uniformly, the attribute check is limited to gcc 9 and
newer (there is no clang support for this so far, as far as I can tell).

Note that I didn't want to have the "true" fallback "implementation" of
__builtin_has_attribute(..., __noreturn__) generally available, as
"true" may not be a suitable fallback in other cases.

Note further that the noreturn addition to startup_cpu_idle_loop()'s
declaration requires adding unreachable() to Arm's
switch_stack_and_jump(), or else the build would break. I suppose this
should have been there already.

For vmx_asm_do_vmentry() along with adding the attribute, also restrict
its scope.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
---
v2: Fix Arm build.

--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -63,7 +63,7 @@
 #include <asm/monitor.h>
 #include <asm/xstate.h>
 
-void svm_asm_do_resume(void);
+void noreturn svm_asm_do_resume(void);
 
 u32 svm_feature_flags;
 
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1850,6 +1850,8 @@ void vmx_vmentry_failure(void)
     domain_crash(curr->domain);
 }
 
+void noreturn vmx_asm_do_vmentry(void);
+
 void vmx_do_resume(void)
 {
     struct vcpu *v = current;
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -619,7 +619,7 @@ static inline bool using_2M_mapping(void
            !l1_table_offset((unsigned long)__2M_rwdata_end);
 }
 
-static void noinline init_done(void)
+static void noreturn init_done(void)
 {
     void *va;
     unsigned long start, end;
--- a/xen/include/asm-arm/current.h
+++ b/xen/include/asm-arm/current.h
@@ -43,8 +43,10 @@ static inline struct cpu_info *get_cpu_i
 
 #define guest_cpu_user_regs() (&get_cpu_info()->guest_cpu_user_regs)
 
-#define switch_stack_and_jump(stack, fn)                                \
-    asm volatile ("mov sp,%0; b " STR(fn) : : "r" (stack) : "memory" )
+#define switch_stack_and_jump(stack, fn) do {                           \
+    asm volatile ("mov sp,%0; b " STR(fn) : : "r" (stack) : "memory" ); \
+    unreachable();                                                      \
+} while ( false )
 
 #define reset_stack_and_jump(fn) switch_stack_and_jump(get_cpu_info(), fn)
 
--- a/xen/include/asm-x86/asm_defns.h
+++ b/xen/include/asm-x86/asm_defns.h
@@ -23,7 +23,7 @@ asm ( "\t.equ CONFIG_INDIRECT_THUNK, "
 #include <asm/indirect_thunk_asm.h>
 
 #ifndef __ASSEMBLY__
-void ret_from_intr(void);
+void noreturn ret_from_intr(void);
 
 /*
  * This output constraint should be used for any inline asm which has a "call"
--- a/xen/include/asm-x86/current.h
+++ b/xen/include/asm-x86/current.h
@@ -155,9 +155,18 @@ unsigned long get_stack_dump_bottom (uns
 # define SHADOW_STACK_WORK ""
 #endif
 
+#if __GNUC__ >= 9
+# define ssaj_has_attr_noreturn(fn) __builtin_has_attribute(fn, __noreturn__)
+#else
+/* Simply can't check the property with older gcc. */
+# define ssaj_has_attr_noreturn(fn) true
+#endif
+
 #define switch_stack_and_jump(fn, instr, constr)                        \
     ({                                                                  \
         unsigned int tmp;                                               \
+        (void)((fn) == (void (*)(void))NULL);                           \
+        BUILD_BUG_ON(!ssaj_has_attr_noreturn(fn));                      \
         __asm__ __volatile__ (                                          \
             SHADOW_STACK_WORK                                           \
             "mov %[stk], %%rsp;"                                        \
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -93,7 +93,6 @@ typedef enum {
 #define PI_xAPIC_NDST_MASK      0xFF00
 
 void vmx_asm_vmexit_handler(struct cpu_user_regs);
-void vmx_asm_do_vmentry(void);
 void vmx_intr_assist(void);
 void noreturn vmx_do_resume(void);
 void vmx_vlapic_msr_changed(struct vcpu *v);
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -736,7 +736,7 @@ void sched_context_switched(struct vcpu
 void continue_running(
     struct vcpu *same);
 
-void startup_cpu_idle_loop(void);
+void noreturn startup_cpu_idle_loop(void);
 extern void (*pm_idle) (void);
 extern void (*dead_idle) (void);
 



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:12:16 2020
Subject: [PATCH v2 2/4] x86: clobber registers in switch_stack_and_jump() when
 !LIVEPATCH
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <f4179ee3-56e4-ab18-7aae-55281c4d4412@suse.com>
Message-ID: <28db518c-da59-8e56-b8dc-ccc814f91131@suse.com>
Date: Tue, 15 Dec 2020 17:12:12 +0100
In-Reply-To: <f4179ee3-56e4-ab18-7aae-55281c4d4412@suse.com>

To mimic the effect that a call to check_for_livepatch_work() may have
on registers, clobber all call-clobbered registers in debug builds.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/include/asm-x86/current.h
+++ b/xen/include/asm-x86/current.h
@@ -120,6 +120,14 @@ unsigned long get_stack_dump_bottom (uns
 
 #ifdef CONFIG_LIVEPATCH
 # define CHECK_FOR_LIVEPATCH_WORK "call check_for_livepatch_work;"
+#elif defined(CONFIG_DEBUG)
+/* Mimic the clobbering effect a call has on registers. */
+# define CHECK_FOR_LIVEPATCH_WORK \
+    "mov $0x1234567890abcdef, %%rax\n\t" \
+    "mov %%rax, %%rcx; mov %%rax, %%rdx\n\t" \
+    "mov %%rax, %%rsi; mov %%rax, %%rdi\n\t" \
+    "mov %%rax, %%r8; mov %%rax, %%r9\n\t" \
+    "mov %%rax, %%r10; mov %%rax, %%r11\n\t"
 #else
 # define CHECK_FOR_LIVEPATCH_WORK ""
 #endif



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:12:19 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157560-mainreport@xen.org>
Subject: [xen-unstable-smoke test] 157560: tolerable all pass - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Dec 2020 16:12:17 +0000

flight 157560 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157560/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  904148ecb4a59d4c8375d8e8d38117b8605e10ac
baseline version:
 xen                  8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4

Last test of basis   157421  2020-12-11 11:03:57 Z    4 days
Testing same since   157560  2020-12-15 13:00:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Harsha Shamsundara Havanur <havanur@amazon.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien@amazon.com>
  Manuel Bouyer <bouyer@antioche.eu.org>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8e0fe4fe5f..904148ecb4  904148ecb4a59d4c8375d8e8d38117b8605e10ac -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:12:40 2020
Subject: [PATCH v2 3/4] x86/PV: avoid double stack reset during schedule tail
 handling
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <f4179ee3-56e4-ab18-7aae-55281c4d4412@suse.com>
Message-ID: <00befc54-58f7-1891-031e-cdb848fb5787@suse.com>
Date: Tue, 15 Dec 2020 17:12:36 +0100
In-Reply-To: <f4179ee3-56e4-ab18-7aae-55281c4d4412@suse.com>

Invoking check_wakeup_from_wait() from assembly allows the new
continue_pv_domain() to replace the prior continue_nonidle_domain() as
the tail hook, eliminating an extra reset_stack_and_jump().

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Juergen Gross <jgross@suse.com>

--- a/xen/arch/x86/pv/domain.c
+++ b/xen/arch/x86/pv/domain.c
@@ -110,12 +110,6 @@ static int parse_pcid(const char *s)
     return rc;
 }
 
-static void noreturn continue_nonidle_domain(void)
-{
-    check_wakeup_from_wait();
-    reset_stack_and_jump(ret_from_intr);
-}
-
 static int setup_compat_l4(struct vcpu *v)
 {
     struct page_info *pg;
@@ -341,13 +335,14 @@ void pv_domain_destroy(struct domain *d)
     FREE_XENHEAP_PAGE(d->arch.pv.gdt_ldt_l1tab);
 }
 
+void noreturn continue_pv_domain(void);
 
 int pv_domain_initialise(struct domain *d)
 {
     static const struct arch_csw pv_csw = {
         .from = paravirt_ctxt_switch_from,
         .to   = paravirt_ctxt_switch_to,
-        .tail = continue_nonidle_domain,
+        .tail = continue_pv_domain,
     };
     int rc = -ENOMEM;
 
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -557,8 +557,10 @@ ENTRY(dom_crash_sync_extable)
         .text
 
 /* No special register assumptions. */
-ENTRY(ret_from_intr)
 #ifdef CONFIG_PV
+ENTRY(continue_pv_domain)
+        call  check_wakeup_from_wait
+ret_from_intr:
         GET_CURRENT(bx)
         testb $3, UREGS_cs(%rsp)
         jz    restore_all_xen
@@ -567,6 +569,7 @@ ENTRY(ret_from_intr)
         je    test_all_events
         jmp   compat_test_all_events
 #else
+ret_from_intr:
         ASSERT_CONTEXT_IS_XEN
         jmp   restore_all_xen
 #endif
--- a/xen/include/asm-x86/asm_defns.h
+++ b/xen/include/asm-x86/asm_defns.h
@@ -23,7 +23,6 @@ asm ( "\t.equ CONFIG_INDIRECT_THUNK, "
 #include <asm/indirect_thunk_asm.h>
 
 #ifndef __ASSEMBLY__
-void noreturn ret_from_intr(void);
 
 /*
  * This output constraint should be used for any inline asm which has a "call"



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:13:47 2020
Subject: [PATCH v2 4/4] livepatch: adjust a stale comment
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Konrad Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>
References: <f4179ee3-56e4-ab18-7aae-55281c4d4412@suse.com>
Message-ID: <659b188d-d26e-3351-2285-3e75197e2c5f@suse.com>
Date: Tue, 15 Dec 2020 17:13:43 +0100
In-Reply-To: <f4179ee3-56e4-ab18-7aae-55281c4d4412@suse.com>

As of 005de45c887e ("xen: do live patching only from main idle loop")
the comment ahead of livepatch_do_action() has been stale.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/livepatch.c
+++ b/xen/common/livepatch.c
@@ -1392,8 +1392,8 @@ static inline bool was_action_consistent
 }
 
 /*
- * This function is executed having all other CPUs with no deep stack (we may
- * have cpu_idle on it) and IRQs disabled.
+ * This function is executed having all other CPUs with no deep stack (when
+ * idle) and IRQs disabled.
  */
 static void livepatch_do_action(void)
 {



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:15:50 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157548-mainreport@xen.org>
Subject: [qemu-mainline test] 157548: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Dec 2020 16:15:44 +0000

flight 157548 qemu-mainline real [real]
flight 157567 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157548/
http://logs.test-lab.xenproject.org/osstest/logs/157567/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 16 guest-saverestore fail in 157567 pass in 157548
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 157567-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                aa14de086675280206dbc1849da6f85b75f62f1b
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  117 days
Failing since        152659  2020-08-21 14:07:39 Z  116 days  244 attempts
Testing same since   157533  2020-12-15 00:07:52 Z    0 days    2 attempts

------------------------------------------------------------
310 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 76448 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:21:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:21:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54425.94544 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpD4R-0002oL-MB; Tue, 15 Dec 2020 16:21:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54425.94544; Tue, 15 Dec 2020 16:21:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpD4R-0002oE-IH; Tue, 15 Dec 2020 16:21:03 +0000
Received: by outflank-mailman (input) for mailman id 54425;
 Tue, 15 Dec 2020 16:21:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SBK9=FT=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kpD4Q-0002np-AZ
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:21:02 +0000
Received: from mail-wr1-f68.google.com (unknown [209.85.221.68])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 12d316c8-79ea-43a9-bc75-3556e7c1f6d3;
 Tue, 15 Dec 2020 16:21:01 +0000 (UTC)
Received: by mail-wr1-f68.google.com with SMTP id i9so20463162wrc.4
 for <xen-devel@lists.xenproject.org>; Tue, 15 Dec 2020 08:21:01 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id r20sm39476238wrg.66.2020.12.15.08.21.00
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 15 Dec 2020 08:21:00 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12d316c8-79ea-43a9-bc75-3556e7c1f6d3
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=GOafdAmRo79rgJtTfM0Wg9m70HMISokFlz0tvo2GS4c=;
        b=LcCZC9tx82IOik8P0OEyec78YogZa3n4352zseHI1/TwAmSq/5KaZRxx1N054iyK0j
         wKjUhZcjAX0vwNKthghMvTlV1LbgbuAtlrvaDrCqXRmHQwg5OmQVWTGMOMTjv22zPk4x
         8s/W9k/TBo5hoQTMR1r2t91uO6C4vcvp7UkkmZUnaxlae6BJe4NyMKP4iiOxFtUzvFks
         TaZKGmSQqma9HNz3no/eRySnIXXlPNjCtDu9qWWPMOZwgY+LEhOy6yqVO42gAQ9rpinA
         m/TECKasOY0EvQBEsS0JqxRFIiOk+zJRhr7XBo6KdPHr+FkSV/PEEu26W3t4jmRY/l5p
         oNxA==
X-Gm-Message-State: AOAM5302QvZJ5rPSlEiJvvxtaoNDpeLWcIMvX0T4wg4cNwxPqkGtHY0q
	esfONTzz6poot+TDQ38uoWw=
X-Google-Smtp-Source: ABdhPJy0/0MfXTCXMm/SAlhMmzBVDhrlj+64MxIdB91fL5fqZcyQVckhcW1XTos/9WxTivBWh8DzSw==
X-Received: by 2002:a5d:6209:: with SMTP id y9mr7418129wru.197.1608049260793;
        Tue, 15 Dec 2020 08:21:00 -0800 (PST)
Date: Tue, 15 Dec 2020 16:20:58 +0000
From: Wei Liu <wl@xen.org>
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH v1 2/3] tools: remove unused ORDER_LONG
Message-ID: <20201215162058.3flhliqfkzsnbvjh@liuwe-devbox-debian-v2>
References: <20201209155452.28376-1-olaf@aepfle.de>
 <20201209155452.28376-2-olaf@aepfle.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201209155452.28376-2-olaf@aepfle.de>
User-Agent: NeoMutt/20180716

On Wed, Dec 09, 2020 at 04:54:50PM +0100, Olaf Hering wrote:
> There are no users left, xenpaging has its own variant.
> The last user was removed with commit 11d0044a168994de85b9b328452292852aedc871
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:21:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:21:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54424.94532 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpD4I-0002ks-8F; Tue, 15 Dec 2020 16:20:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54424.94532; Tue, 15 Dec 2020 16:20:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpD4I-0002kl-4H; Tue, 15 Dec 2020 16:20:54 +0000
Received: by outflank-mailman (input) for mailman id 54424;
 Tue, 15 Dec 2020 16:20:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpD4H-0002kg-6H
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:20:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 916408b3-7a26-445f-8e26-8c526473c38a;
 Tue, 15 Dec 2020 16:20:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2D1E2AF73;
 Tue, 15 Dec 2020 16:20:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 916408b3-7a26-445f-8e26-8c526473c38a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608049251; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=DmZ7h/DBLCoQhOY4oQX/tAxepYegymf/YYc9L7Mu8Wc=;
	b=E/KYPHowoNLHSMLu1r2r/XkoZFxZht/XQQ1OWpERcYphdMxBz1Ixava7bltbdaiSJdVKhR
	Mkh8rSiIyjQG/vx4p+p4WXjhjNFKzuu8pUqe4XON4FL6B2FuzPIzAKn6cf4AoQAiytGqXV
	pL+Y+QJeHVgoUgdS+Uw5eHzW7J25YRY=
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, Rahul.Singh@arm.com,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201215112610.1986-1-julien@xen.org>
 <c45407e5-3173-4f0d-453b-1a01969b667c@suse.com>
 <cbae7c17-829e-f48f-3a6a-7fee489711c2@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH] xen: Rework WARN_ON() to return whether a warning was
 triggered
Message-ID: <805b2663-ca64-1e8a-6bbf-b93ecabec979@suse.com>
Date: Tue, 15 Dec 2020 17:20:50 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <cbae7c17-829e-f48f-3a6a-7fee489711c2@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="wdmixVwwhuxrZmeRQS48MzSJhdAj4DCSv"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--wdmixVwwhuxrZmeRQS48MzSJhdAj4DCSv
Content-Type: multipart/mixed; boundary="wd5TEZrMehPRZMIsv2ryB2LuRqVSKFMlJ";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, Rahul.Singh@arm.com,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Message-ID: <805b2663-ca64-1e8a-6bbf-b93ecabec979@suse.com>
Subject: Re: [PATCH] xen: Rework WARN_ON() to return whether a warning was
 triggered
References: <20201215112610.1986-1-julien@xen.org>
 <c45407e5-3173-4f0d-453b-1a01969b667c@suse.com>
 <cbae7c17-829e-f48f-3a6a-7fee489711c2@xen.org>
In-Reply-To: <cbae7c17-829e-f48f-3a6a-7fee489711c2@xen.org>

--wd5TEZrMehPRZMIsv2ryB2LuRqVSKFMlJ
Content-Type: multipart/mixed;
 boundary="------------2B0BC78ED9B355EDBE488736"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------2B0BC78ED9B355EDBE488736
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 15.12.20 14:11, Julien Grall wrote:
> Hi Juergen,
> 
> On 15/12/2020 11:31, Jürgen Groß wrote:
>> On 15.12.20 12:26, Julien Grall wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> So far, our implementation of WARN_ON() cannot be used in the following
>>> situation:
>>>
>>> if ( WARN_ON() )
>>>      ...
>>>
>>> This is because the WARN_ON() doesn't return whether a warning. Such
>>
>> ... warning has been triggered.
> 
> I will add it.
> 
>>
>>> construction can be handy to have if you have to print more information
>>> and now the stack track.
>>
>> Sorry, I'm not able to parse that sentence.
> 
> Urgh :/. How about the following commit message:
> 
> "So far, our implementation of WARN_ON() cannot be used in the following
> situation:
> 
> if ( WARN_ON() )
>    ...
> 
> This is because WARN_ON() doesn't return whether a warning has been
> triggered. Such construction can be handy if you want to print more
> information and also dump the stack trace.
> 
> Therefore, rework the WARN_ON() implementation to return whether a
> warning was triggered. The idea was borrowed from Linux".

Better :-)

With that you can add my:

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------2B0BC78ED9B355EDBE488736
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------2B0BC78ED9B355EDBE488736--

--wd5TEZrMehPRZMIsv2ryB2LuRqVSKFMlJ--

--wdmixVwwhuxrZmeRQS48MzSJhdAj4DCSv
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/Y4mIFAwAAAAAACgkQsN6d1ii/Ey+k
Wwf9Gu+Att8ikpwk4skeFzaW+XI8EJA+P4pPbr9x8BCUoO65W9l5/TdAmgjZvG6jDTL/iFx9wsJx
bQotyHTEouk5zjnM5Lz8kt1UB2y9XSkW9WS5uTMHK8H5/9+qEVVUIXJhgz2yRDeEPxTokGlfA7Qs
FOl5MKttNYMUSqj5E/F5kzbb0BcXhq02Zsy6XaRN2wwDg6/jAVkE+s8QY7oNOuIvMBwJOV2JGtlq
4VMhkdFfeZr4D10RHo36Mc6qMJI6bUUWUF/bqviXqFpFShJzKK9f4NBG/gtQJOPvr7mjJiB5eEsO
Pq6YA/7sFXCFkBH1J6U1gKamIUsw7WpBNtoj3Mqi7g==
=EHuk
-----END PGP SIGNATURE-----

--wdmixVwwhuxrZmeRQS48MzSJhdAj4DCSv--


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:22:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:22:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54437.94556 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpD69-00031l-3q; Tue, 15 Dec 2020 16:22:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54437.94556; Tue, 15 Dec 2020 16:22:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpD68-00031e-WC; Tue, 15 Dec 2020 16:22:48 +0000
Received: by outflank-mailman (input) for mailman id 54437;
 Tue, 15 Dec 2020 16:22:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SBK9=FT=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kpD67-00031X-P0
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:22:47 +0000
Received: from mail-wm1-f66.google.com (unknown [209.85.128.66])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2681c8cd-97b0-4f8f-82b5-0a06d9ca39e5;
 Tue, 15 Dec 2020 16:22:47 +0000 (UTC)
Received: by mail-wm1-f66.google.com with SMTP id 3so19065218wmg.4
 for <xen-devel@lists.xenproject.org>; Tue, 15 Dec 2020 08:22:47 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id j13sm36619380wmi.36.2020.12.15.08.22.45
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 15 Dec 2020 08:22:45 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2681c8cd-97b0-4f8f-82b5-0a06d9ca39e5
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=LPVhR5UAWejAwKJd+I2ZO/5ggdQlOv2pSnWqNxfe7XU=;
        b=lAoCyljHmWu8LGKf4VYevyDtEVfWiuEWxBkw7l4jxKhjLxhd/xEnAtojKeJ5CNs59w
         i/X/AcC7CKmWRmwVxiFf7mw6TNJlmTeqOJtHqP9btPXfH15er0g8++uOHbyNgVtP0T11
         FQlCOfgg0gLqke+UGNJGQFIRXpAQgpQXPlEo+/gcAA1vSsJ59ewkRHYFgwquPhed/x7w
         cuA4S3cRP7eVCzKwuUL0PZRUOkkEZ2QEFPf3ORvy+J77xF3zNBBth2bV4ruv50iB9VsZ
         /7o2kUc/BRMozEsbqKG9Z09fNOqaNLihTC+YWrgKjVfqDgty1s8DCjhb5rhsPPdyb+RY
         Q64Q==
X-Gm-Message-State: AOAM533osW5yPni3JMzO4cX+crEeP1osbv4d0DpkgeZt0yoRgvj/lnpA
	sifkv0l8yPgX/26xfYKtrM8=
X-Google-Smtp-Source: ABdhPJwpAwtMIm6fDqxm9jAFAoS5W2OrNEbLoYH1bx8nwOZywJZFvkTJ4QGl+Zr40CZybgvxjlx+oA==
X-Received: by 2002:a1c:1bc9:: with SMTP id b192mr33365654wmb.136.1608049366385;
        Tue, 15 Dec 2020 08:22:46 -0800 (PST)
Date: Tue, 15 Dec 2020 16:22:44 +0000
From: Wei Liu <wl@xen.org>
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH v1 3/3] tools: add API to work with several bits at once
Message-ID: <20201215162244.mln6xm5qj7pmvauc@liuwe-devbox-debian-v2>
References: <20201209155452.28376-1-olaf@aepfle.de>
 <20201209155452.28376-3-olaf@aepfle.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201209155452.28376-3-olaf@aepfle.de>
User-Agent: NeoMutt/20180716

On Wed, Dec 09, 2020 at 04:54:51PM +0100, Olaf Hering wrote:
> Introduce new API to test if a fixed number of bits is clear or set,
> and clear or set them all at once.
> 
> The caller has to make sure the input bitnumber is a multiple of BITS_PER_LONG.
> 
> This API avoids the loop over each bit in a known range just to see
> if all of them are either clear or set.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

I would rather these be introduced alongside their callers.

> ---
>  tools/libs/ctrl/xc_bitops.h | 25 +++++++++++++++++++++++++
>  1 file changed, 25 insertions(+)
> 
> diff --git a/tools/libs/ctrl/xc_bitops.h b/tools/libs/ctrl/xc_bitops.h
> index f0bac4a071..92f38872fb 100644
> --- a/tools/libs/ctrl/xc_bitops.h
> +++ b/tools/libs/ctrl/xc_bitops.h
> @@ -77,4 +77,29 @@ static inline void bitmap_or(void *_dst, const void *_other,
>          dst[i] |= other[i];
>  }
>  
> +static inline int test_bit_long_set(unsigned long nr_base, const void *_addr)

What's wrong with requiring the input addr be const unsigned long *?


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:24:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:24:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54442.94567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpD7o-0003A2-EA; Tue, 15 Dec 2020 16:24:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54442.94567; Tue, 15 Dec 2020 16:24:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpD7o-00039v-BM; Tue, 15 Dec 2020 16:24:32 +0000
Received: by outflank-mailman (input) for mailman id 54442;
 Tue, 15 Dec 2020 16:24:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Vckb=FT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kpD7m-00039g-F7
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:24:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c382944c-bec9-4204-bdde-59123e1c277b;
 Tue, 15 Dec 2020 16:24:29 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 985FFAF73;
 Tue, 15 Dec 2020 16:24:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c382944c-bec9-4204-bdde-59123e1c277b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608049468; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=gVCt0CoeLEWyBtqA7+1996CM9kjAXGCRYWC5E2fe0g0=;
	b=Th903HUBkmzuJEqNShcickwqkobSTqlH6ddfMUkgExf9FyY4X7r6CgQaQVp+3+/uMDs5rN
	K9JscCJmWA7eQGOFjr3ETvzEB2p8ddmwvNswon+Q5tJz1fKXU3dZvNbRBDgbEHlRV90hb4
	HHVuYbrizBO6e91fYGgi/QHYdxq/gB4=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/6] x86/p2m: restrict more code to build just for HVM
Message-ID: <be9ce75e-9119-2b5a-9e7b-437beb7ee446@suse.com>
Date: Tue, 15 Dec 2020 17:24:28 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

1: p2m: tidy p2m_add_foreign() a little
2: mm: p2m_add_foreign() is HVM-only
3: p2m: set_{foreign,mmio}_p2m_entry() are HVM-only
4: p2m: {,un}map_mmio_regions() are HVM-only
5: mm: the gva_to_gfn() hook is HVM-only
6: p2m: set_shared_p2m_entry() is MEM_SHARING-only

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:25:46 2020
Subject: [PATCH 1/6] x86/p2m: tidy p2m_add_foreign() a little
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
References: <be9ce75e-9119-2b5a-9e7b-437beb7ee446@suse.com>
Message-ID: <8b70c26e-7ae6-8438-67a3-99cef338ba52@suse.com>
Date: Tue, 15 Dec 2020 17:25:41 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <be9ce75e-9119-2b5a-9e7b-437beb7ee446@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Drop a bogus ASSERT() - we don't typically assert incoming domain
pointers to be non-NULL, and there's no particular reason to do so here.

Replace the open-coded DOMID_SELF check by use of
rcu_lock_remote_domain_by_id(), which at the same time covers requests
made with the current domain's actual ID.

Move the "both domains same" check into just the path where it really
is meaningful.

Swap the order of the two puts, such that
- the p2m lock isn't needlessly held across put_page(),
- a separate put_page() on an error path can be avoided,
- they're inverse to the order of the respective gets.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
The DOMID_SELF check being converted also suggests to me that there's an
implication of tdom == current->domain, which would in turn appear to
mean the "both domains same" check could as well be dropped altogether.

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2560,9 +2560,6 @@ int p2m_add_foreign(struct domain *tdom,
     int rc;
     struct domain *fdom;
 
-    ASSERT(tdom);
-    if ( foreigndom == DOMID_SELF )
-        return -EINVAL;
     /*
      * hvm fixme: until support is added to p2m teardown code to cleanup any
      * foreign entries, limit this to hardware domain only.
@@ -2573,13 +2570,15 @@ int p2m_add_foreign(struct domain *tdom,
     if ( foreigndom == DOMID_XEN )
         fdom = rcu_lock_domain(dom_xen);
     else
-        fdom = rcu_lock_domain_by_id(foreigndom);
-    if ( fdom == NULL )
-        return -ESRCH;
+    {
+        rc = rcu_lock_remote_domain_by_id(foreigndom, &fdom);
+        if ( rc )
+            return rc;
 
-    rc = -EINVAL;
-    if ( tdom == fdom )
-        goto out;
+        rc = -EINVAL;
+        if ( tdom == fdom )
+            goto out;
+    }
 
     rc = xsm_map_gmfn_foreign(XSM_TARGET, tdom, fdom);
     if ( rc )
@@ -2593,10 +2592,8 @@ int p2m_add_foreign(struct domain *tdom,
     if ( !page ||
          !p2m_is_ram(p2mt) || p2m_is_shared(p2mt) || p2m_is_hole(p2mt) )
     {
-        if ( page )
-            put_page(page);
         rc = -EINVAL;
-        goto out;
+        goto put_one;
     }
     mfn = page_to_mfn(page);
 
@@ -2625,8 +2622,6 @@ int p2m_add_foreign(struct domain *tdom,
                  gpfn, mfn_x(mfn), fgfn, tdom->domain_id, fdom->domain_id);
 
  put_both:
-    put_page(page);
-
     /*
      * This put_gfn for the above get_gfn for prev_mfn.  We must do this
      * after set_foreign_p2m_entry so another cpu doesn't populate the gpfn
@@ -2634,9 +2629,13 @@ int p2m_add_foreign(struct domain *tdom,
      */
     put_gfn(tdom, gpfn);
 
-out:
+ put_one:
+    put_page(page);
+
+ out:
     if ( fdom )
         rcu_unlock_domain(fdom);
+
     return rc;
 }
 



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:26:15 2020
Subject: [PATCH 2/6] x86/mm: p2m_add_foreign() is HVM-only
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
References: <be9ce75e-9119-2b5a-9e7b-437beb7ee446@suse.com>
Message-ID: <cf4569c5-a9c5-7b4b-d576-d1521c369418@suse.com>
Date: Tue, 15 Dec 2020 17:26:08 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <be9ce75e-9119-2b5a-9e7b-437beb7ee446@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

This is the case together with its only caller,
xenmem_add_to_physmap_one(). Move the latter next to p2m_add_foreign(),
allowing p2m_add_foreign() to become static at the same time.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -118,7 +118,6 @@
 #include <xen/vmap.h>
 #include <xen/xmalloc.h>
 #include <xen/efi.h>
-#include <xen/grant_table.h>
 #include <xen/hypercall.h>
 #include <xen/mm.h>
 #include <asm/paging.h>
@@ -142,10 +141,7 @@
 #include <asm/pci.h>
 #include <asm/guest.h>
 #include <asm/hvm/ioreq.h>
-
-#include <asm/hvm/grant_table.h>
 #include <asm/pv/domain.h>
-#include <asm/pv/grant_table.h>
 #include <asm/pv/mm.h>
 
 #ifdef CONFIG_PV
@@ -4591,114 +4587,6 @@ static int handle_iomem_range(unsigned l
     return err || s > e ? err : _handle_iomem_range(s, e, p);
 }
 
-int xenmem_add_to_physmap_one(
-    struct domain *d,
-    unsigned int space,
-    union add_to_physmap_extra extra,
-    unsigned long idx,
-    gfn_t gpfn)
-{
-    struct page_info *page = NULL;
-    unsigned long gfn = 0 /* gcc ... */, old_gpfn;
-    mfn_t prev_mfn;
-    int rc = 0;
-    mfn_t mfn = INVALID_MFN;
-    p2m_type_t p2mt;
-
-    switch ( space )
-    {
-        case XENMAPSPACE_shared_info:
-            if ( idx == 0 )
-                mfn = virt_to_mfn(d->shared_info);
-            break;
-        case XENMAPSPACE_grant_table:
-            rc = gnttab_map_frame(d, idx, gpfn, &mfn);
-            if ( rc )
-                return rc;
-            break;
-        case XENMAPSPACE_gmfn:
-        {
-            p2m_type_t p2mt;
-
-            gfn = idx;
-            mfn = get_gfn_unshare(d, gfn, &p2mt);
-            /* If the page is still shared, exit early */
-            if ( p2m_is_shared(p2mt) )
-            {
-                put_gfn(d, gfn);
-                return -ENOMEM;
-            }
-            page = get_page_from_mfn(mfn, d);
-            if ( unlikely(!page) )
-                mfn = INVALID_MFN;
-            break;
-        }
-        case XENMAPSPACE_gmfn_foreign:
-            return p2m_add_foreign(d, idx, gfn_x(gpfn), extra.foreign_domid);
-        default:
-            break;
-    }
-
-    if ( mfn_eq(mfn, INVALID_MFN) )
-    {
-        rc = -EINVAL;
-        goto put_both;
-    }
-
-    /* Remove previously mapped page if it was present. */
-    prev_mfn = get_gfn(d, gfn_x(gpfn), &p2mt);
-    if ( mfn_valid(prev_mfn) )
-    {
-        if ( is_special_page(mfn_to_page(prev_mfn)) )
-            /* Special pages are simply unhooked from this phys slot. */
-            rc = guest_physmap_remove_page(d, gpfn, prev_mfn, PAGE_ORDER_4K);
-        else if ( !mfn_eq(mfn, prev_mfn) )
-            /* Normal domain memory is freed, to avoid leaking memory. */
-            rc = guest_remove_page(d, gfn_x(gpfn));
-    }
-    /* In the XENMAPSPACE_gmfn case we still hold a ref on the old page. */
-    put_gfn(d, gfn_x(gpfn));
-
-    if ( rc )
-        goto put_both;
-
-    /* Unmap from old location, if any. */
-    old_gpfn = get_gpfn_from_mfn(mfn_x(mfn));
-    ASSERT(!SHARED_M2P(old_gpfn));
-    if ( space == XENMAPSPACE_gmfn && old_gpfn != gfn )
-    {
-        rc = -EXDEV;
-        goto put_both;
-    }
-    if ( old_gpfn != INVALID_M2P_ENTRY )
-        rc = guest_physmap_remove_page(d, _gfn(old_gpfn), mfn, PAGE_ORDER_4K);
-
-    /* Map at new location. */
-    if ( !rc )
-        rc = guest_physmap_add_page(d, gpfn, mfn, PAGE_ORDER_4K);
-
- put_both:
-    /*
-     * In the XENMAPSPACE_gmfn case, we took a ref of the gfn at the top.
-     * We also may need to transfer ownership of the page reference to our
-     * caller.
-     */
-    if ( space == XENMAPSPACE_gmfn )
-    {
-        put_gfn(d, gfn);
-        if ( !rc && extra.ppage )
-        {
-            *extra.ppage = page;
-            page = NULL;
-        }
-    }
-
-    if ( page )
-        put_page(page);
-
-    return rc;
-}
-
 int arch_acquire_resource(struct domain *d, unsigned int type,
                           unsigned int id, unsigned long frame,
                           unsigned int nr_frames, xen_pfn_t mfn_list[])
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -27,6 +27,7 @@
 #include <xen/mem_access.h>
 #include <xen/vm_event.h>
 #include <xen/event.h>
+#include <xen/grant_table.h>
 #include <xen/param.h>
 #include <public/vm_event.h>
 #include <asm/domain.h>
@@ -42,6 +43,10 @@
 
 #include "mm-locks.h"
 
+/* Override macro from asm/page.h to make work with mfn_t */
+#undef virt_to_mfn
+#define virt_to_mfn(v) _mfn(__virt_to_mfn(v))
+
 /* Turn on/off host superpage page table support for hap, default on. */
 bool_t __initdata opt_hap_1gb = 1, __initdata opt_hap_2mb = 1;
 boolean_param("hap_1gb", opt_hap_1gb);
@@ -2535,6 +2540,8 @@ out_p2m_audit:
 }
 #endif /* P2M_AUDIT */
 
+#ifdef CONFIG_HVM
+
 /*
  * Add frame from foreign domain to target domain's physmap. Similar to
  * XENMAPSPACE_gmfn but the frame is foreign being mapped into current,
@@ -2551,8 +2558,8 @@ out_p2m_audit:
  *
  * Returns: 0 ==> success
  */
-int p2m_add_foreign(struct domain *tdom, unsigned long fgfn,
-                    unsigned long gpfn, domid_t foreigndom)
+static int p2m_add_foreign(struct domain *tdom, unsigned long fgfn,
+                           unsigned long gpfn, domid_t foreigndom)
 {
     p2m_type_t p2mt, p2mt_prev;
     mfn_t prev_mfn, mfn;
@@ -2639,7 +2646,114 @@ int p2m_add_foreign(struct domain *tdom,
     return rc;
 }
 
-#ifdef CONFIG_HVM
+int xenmem_add_to_physmap_one(
+    struct domain *d,
+    unsigned int space,
+    union add_to_physmap_extra extra,
+    unsigned long idx,
+    gfn_t gpfn)
+{
+    struct page_info *page = NULL;
+    unsigned long gfn = 0 /* gcc ... */, old_gpfn;
+    mfn_t prev_mfn;
+    int rc = 0;
+    mfn_t mfn = INVALID_MFN;
+    p2m_type_t p2mt;
+
+    switch ( space )
+    {
+        case XENMAPSPACE_shared_info:
+            if ( idx == 0 )
+                mfn = virt_to_mfn(d->shared_info);
+            break;
+        case XENMAPSPACE_grant_table:
+            rc = gnttab_map_frame(d, idx, gpfn, &mfn);
+            if ( rc )
+                return rc;
+            break;
+        case XENMAPSPACE_gmfn:
+        {
+            p2m_type_t p2mt;
+
+            gfn = idx;
+            mfn = get_gfn_unshare(d, gfn, &p2mt);
+            /* If the page is still shared, exit early */
+            if ( p2m_is_shared(p2mt) )
+            {
+                put_gfn(d, gfn);
+                return -ENOMEM;
+            }
+            page = get_page_from_mfn(mfn, d);
+            if ( unlikely(!page) )
+                mfn = INVALID_MFN;
+            break;
+        }
+        case XENMAPSPACE_gmfn_foreign:
+            return p2m_add_foreign(d, idx, gfn_x(gpfn), extra.foreign_domid);
+        default:
+            break;
+    }
+
+    if ( mfn_eq(mfn, INVALID_MFN) )
+    {
+        rc = -EINVAL;
+        goto put_both;
+    }
+
+    /* Remove previously mapped page if it was present. */
+    prev_mfn = get_gfn(d, gfn_x(gpfn), &p2mt);
+    if ( mfn_valid(prev_mfn) )
+    {
+        if ( is_special_page(mfn_to_page(prev_mfn)) )
+            /* Special pages are simply unhooked from this phys slot. */
+            rc = guest_physmap_remove_page(d, gpfn, prev_mfn, PAGE_ORDER_4K);
+        else if ( !mfn_eq(mfn, prev_mfn) )
+            /* Normal domain memory is freed, to avoid leaking memory. */
+            rc = guest_remove_page(d, gfn_x(gpfn));
+    }
+    /* In the XENMAPSPACE_gmfn case we still hold a ref on the old page. */
+    put_gfn(d, gfn_x(gpfn));
+
+    if ( rc )
+        goto put_both;
+
+    /* Unmap from old location, if any. */
+    old_gpfn = get_gpfn_from_mfn(mfn_x(mfn));
+    ASSERT(!SHARED_M2P(old_gpfn));
+    if ( space == XENMAPSPACE_gmfn && old_gpfn != gfn )
+    {
+        rc = -EXDEV;
+        goto put_both;
+    }
+    if ( old_gpfn != INVALID_M2P_ENTRY )
+        rc = guest_physmap_remove_page(d, _gfn(old_gpfn), mfn, PAGE_ORDER_4K);
+
+    /* Map at new location. */
+    if ( !rc )
+        rc = guest_physmap_add_page(d, gpfn, mfn, PAGE_ORDER_4K);
+
+ put_both:
+    /*
+     * In the XENMAPSPACE_gmfn case, we took a ref of the gfn at the top.
+     * We also may need to transfer ownership of the page reference to our
+     * caller.
+     */
+    if ( space == XENMAPSPACE_gmfn )
+    {
+        put_gfn(d, gfn);
+        if ( !rc && extra.ppage )
+        {
+            *extra.ppage = page;
+            page = NULL;
+        }
+    }
+
+    if ( page )
+        put_page(page);
+
+    return rc;
+}
+
 /*
  * Set/clear the #VE suppress bit for a page.  Only available on VMX.
  */
@@ -2792,7 +2906,8 @@ int p2m_set_altp2m_view_visibility(struc
 
     return rc;
 }
-#endif
+
+#endif /* CONFIG_HVM */
 
 /*
  * Local variables:
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -661,10 +661,6 @@ int set_identity_p2m_entry(struct domain
                            p2m_access_t p2ma, unsigned int flag);
 int clear_identity_p2m_entry(struct domain *d, unsigned long gfn);
 
-/* Add foreign mapping to the guest's p2m table. */
-int p2m_add_foreign(struct domain *tdom, unsigned long fgfn,
-                    unsigned long gpfn, domid_t foreign_domid);
-
 /* 
  * Populate-on-demand
  */



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:26:42 2020
Subject: [PATCH 3/6] x86/p2m: set_{foreign,mmio}_p2m_entry() are HVM-only
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
References: <be9ce75e-9119-2b5a-9e7b-437beb7ee446@suse.com>
Message-ID: <15f41816-4814-bae5-e0bc-89e99d04a142@suse.com>
Date: Tue, 15 Dec 2020 17:26:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <be9ce75e-9119-2b5a-9e7b-437beb7ee446@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Extend the respective #ifdef from inside set_typed_p2m_entry() to around
all three functions. Add ASSERT_UNREACHABLE() to set_typed_p2m_entry()'s
safety check path.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1257,6 +1257,8 @@ int p2m_finish_type_change(struct domain
     return rc;
 }
 
+#ifdef CONFIG_HVM
+
 /*
  * Returns:
  *    0              for success
@@ -1277,7 +1279,10 @@ static int set_typed_p2m_entry(struct do
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
     if ( !paging_mode_translate(d) )
+    {
+        ASSERT_UNREACHABLE();
         return -EIO;
+    }
 
     gfn_lock(p2m, gfn, order);
     omfn = p2m->get_entry(p2m, gfn, &ot, &a, 0, &cur_order, NULL);
@@ -1308,7 +1313,6 @@ static int set_typed_p2m_entry(struct do
     if ( rc )
         gdprintk(XENLOG_ERR, "p2m_set_entry: %#lx:%u -> %d (0x%"PRI_mfn")\n",
                  gfn_l, order, rc, mfn_x(mfn));
-#ifdef CONFIG_HVM
     else if ( p2m_is_pod(ot) )
     {
         pod_lock(p2m);
@@ -1316,7 +1320,6 @@ static int set_typed_p2m_entry(struct do
         BUG_ON(p2m->pod.entry_count < 0);
         pod_unlock(p2m);
     }
-#endif
     gfn_unlock(p2m, gfn, order);
 
     return rc;
@@ -1341,6 +1344,8 @@ int set_mmio_p2m_entry(struct domain *d,
                                p2m_get_hostp2m(d)->default_access);
 }
 
+#endif /* CONFIG_HVM */
+
 int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
                            p2m_access_t p2ma, unsigned int flag)
 {



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:27:00 2020
Subject: [PATCH 4/6] x86/p2m: {,un}map_mmio_regions() are HVM-only
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
References: <be9ce75e-9119-2b5a-9e7b-437beb7ee446@suse.com>
Message-ID: <84bd7740-39f5-fb2a-aeec-4ce1cfba631c@suse.com>
Date: Tue, 15 Dec 2020 17:26:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <be9ce75e-9119-2b5a-9e7b-437beb7ee446@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Mirror the "translated" check the functions perform into do_domctl(),
allowing the calls to be dead-code-eliminated by the compiler. Add
ASSERT_UNREACHABLE() to the original checks.

Also arrange for {set,clear}_mmio_p2m_entry() and
{set,clear}_identity_p2m_entry() to respectively live next to each
other, such that clear_mmio_p2m_entry() can also be covered by the
#ifdef already covering set_mmio_p2m_entry().

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Arguably the original checks, returning success, could also be dropped
at this point.

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1344,52 +1344,6 @@ int set_mmio_p2m_entry(struct domain *d,
                                p2m_get_hostp2m(d)->default_access);
 }
 
-#endif /* CONFIG_HVM */
-
-int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
-                           p2m_access_t p2ma, unsigned int flag)
-{
-    p2m_type_t p2mt;
-    p2m_access_t a;
-    gfn_t gfn = _gfn(gfn_l);
-    mfn_t mfn;
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    int ret;
-
-    if ( !paging_mode_translate(p2m->domain) )
-    {
-        if ( !is_iommu_enabled(d) )
-            return 0;
-        return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l),
-                                1ul << PAGE_ORDER_4K,
-                                IOMMUF_readable | IOMMUF_writable);
-    }
-
-    gfn_lock(p2m, gfn, 0);
-
-    mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
-
-    if ( p2mt == p2m_invalid || p2mt == p2m_mmio_dm )
-        ret = p2m_set_entry(p2m, gfn, _mfn(gfn_l), PAGE_ORDER_4K,
-                            p2m_mmio_direct, p2ma);
-    else if ( mfn_x(mfn) == gfn_l && p2mt == p2m_mmio_direct && a == p2ma )
-        ret = 0;
-    else
-    {
-        if ( flag & XEN_DOMCTL_DEV_RDM_RELAXED )
-            ret = 0;
-        else
-            ret = -EBUSY;
-        printk(XENLOG_G_WARNING
-               "Cannot setup identity map d%d:%lx,"
-               " gfn already mapped to %lx.\n",
-               d->domain_id, gfn_l, mfn_x(mfn));
-    }
-
-    gfn_unlock(p2m, gfn, 0);
-    return ret;
-}
-
 /*
  * Returns:
  *    0        for success
@@ -1439,6 +1393,52 @@ int clear_mmio_p2m_entry(struct domain *
     return rc;
 }
 
+#endif /* CONFIG_HVM */
+
+int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
+                           p2m_access_t p2ma, unsigned int flag)
+{
+    p2m_type_t p2mt;
+    p2m_access_t a;
+    gfn_t gfn = _gfn(gfn_l);
+    mfn_t mfn;
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    int ret;
+
+    if ( !paging_mode_translate(p2m->domain) )
+    {
+        if ( !is_iommu_enabled(d) )
+            return 0;
+        return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l),
+                                1ul << PAGE_ORDER_4K,
+                                IOMMUF_readable | IOMMUF_writable);
+    }
+
+    gfn_lock(p2m, gfn, 0);
+
+    mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
+
+    if ( p2mt == p2m_invalid || p2mt == p2m_mmio_dm )
+        ret = p2m_set_entry(p2m, gfn, _mfn(gfn_l), PAGE_ORDER_4K,
+                            p2m_mmio_direct, p2ma);
+    else if ( mfn_x(mfn) == gfn_l && p2mt == p2m_mmio_direct && a == p2ma )
+        ret = 0;
+    else
+    {
+        if ( flag & XEN_DOMCTL_DEV_RDM_RELAXED )
+            ret = 0;
+        else
+            ret = -EBUSY;
+        printk(XENLOG_G_WARNING
+               "Cannot setup identity map d%d:%lx,"
+               " gfn already mapped to %lx.\n",
+               d->domain_id, gfn_l, mfn_x(mfn));
+    }
+
+    gfn_unlock(p2m, gfn, 0);
+    return ret;
+}
+
 int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
 {
     p2m_type_t p2mt;
@@ -1868,6 +1868,8 @@ void *map_domain_gfn(struct p2m_domain *
     return map_domain_page(*mfn);
 }
 
+#ifdef CONFIG_HVM
+
 static unsigned int mmio_order(const struct domain *d,
                                unsigned long start_fn, unsigned long nr)
 {
@@ -1908,7 +1910,10 @@ int map_mmio_regions(struct domain *d,
     unsigned int iter, order;
 
     if ( !paging_mode_translate(d) )
+    {
+        ASSERT_UNREACHABLE();
         return 0;
+    }
 
     for ( iter = i = 0; i < nr && iter < MAP_MMIO_MAX_ITER;
           i += 1UL << order, ++iter )
@@ -1940,7 +1945,10 @@ int unmap_mmio_regions(struct domain *d,
     unsigned int iter, order;
 
     if ( !paging_mode_translate(d) )
+    {
+        ASSERT_UNREACHABLE();
         return 0;
+    }
 
     for ( iter = i = 0; i < nr && iter < MAP_MMIO_MAX_ITER;
           i += 1UL << order, ++iter )
@@ -1962,8 +1970,6 @@ int unmap_mmio_regions(struct domain *d,
     return i == nr ? 0 : i ?: ret;
 }
 
-#ifdef CONFIG_HVM
-
 int altp2m_get_effective_entry(struct p2m_domain *ap2m, gfn_t gfn, mfn_t *mfn,
                                p2m_type_t *t, p2m_access_t *a,
                                bool prepopulate)
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -750,6 +750,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
         if ( ret )
             break;
 
+        if ( !paging_mode_translate(d) )
+            break;
+
         if ( add )
         {
             printk(XENLOG_G_DEBUG



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:27:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:27:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54466.94632 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDAk-0003pT-3P; Tue, 15 Dec 2020 16:27:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54466.94632; Tue, 15 Dec 2020 16:27:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDAj-0003pM-WB; Tue, 15 Dec 2020 16:27:33 +0000
Received: by outflank-mailman (input) for mailman id 54466;
 Tue, 15 Dec 2020 16:27:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Vckb=FT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kpDAi-0003pE-PI
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:27:32 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e24b1aca-844e-4236-8a4b-a42a3fd7108a;
 Tue, 15 Dec 2020 16:27:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0526CAF73;
 Tue, 15 Dec 2020 16:27:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e24b1aca-844e-4236-8a4b-a42a3fd7108a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608049651; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2InyiU20QXavHCYQ7QiW/K+4yItlOuaF3sK2py5UaPY=;
	b=RRmUPuqyj7Okjezg4vgHpSqG86r2xUJXyGfm/BHX17A3F8mJ2/WkyKjiPwbiRl2bJ4X3n7
	anILlklPb5y3g84Fb81vx57zn5tJ97I3eXud0b3FOf23bbHWn+59vLkzT9Mmh3qJPyMAHB
	i/ziR/q+5A/3vAqpA1c6izL+sqrbfkA=
Subject: [PATCH 5/6] x86/mm: the gva_to_gfn() hook is HVM-only
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Tim Deegan <tim@xen.org>
References: <be9ce75e-9119-2b5a-9e7b-437beb7ee446@suse.com>
Message-ID: <cc141f1f-7af8-9d23-de1d-a22ba320ca80@suse.com>
Date: Tue, 15 Dec 2020 17:27:30 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <be9ce75e-9119-2b5a-9e7b-437beb7ee446@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

As are the adjacent ga_to_gfn() hook and paging_gva_to_gfn().

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1772,7 +1772,6 @@ void np2m_schedule(int dir)
         p2m_unlock(p2m);
     }
 }
-#endif
 
 unsigned long paging_gva_to_gfn(struct vcpu *v,
                                 unsigned long va,
@@ -1820,6 +1819,8 @@ unsigned long paging_gva_to_gfn(struct v
     return hostmode->gva_to_gfn(v, hostp2m, va, pfec);
 }
 
+#endif /* CONFIG_HVM */
+
 /*
  * If the map is non-NULL, we leave this function having acquired an extra ref
  * on mfn_to_page(*mfn).  In all cases, *pfec contains appropriate
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3414,6 +3414,7 @@ static bool sh_invlpg(struct vcpu *v, un
     return true;
 }
 
+#ifdef CONFIG_HVM
 
 static unsigned long
 sh_gva_to_gfn(struct vcpu *v, struct p2m_domain *p2m,
@@ -3447,6 +3448,7 @@ sh_gva_to_gfn(struct vcpu *v, struct p2m
     return gfn_x(gfn);
 }
 
+#endif /* CONFIG_HVM */
 
 static inline void
 sh_update_linear_entries(struct vcpu *v)
@@ -4571,7 +4573,9 @@ int sh_audit_l4_table(struct vcpu *v, mf
 const struct paging_mode sh_paging_mode = {
     .page_fault                    = sh_page_fault,
     .invlpg                        = sh_invlpg,
+#ifdef CONFIG_HVM
     .gva_to_gfn                    = sh_gva_to_gfn,
+#endif
     .update_cr3                    = sh_update_cr3,
     .update_paging_modes           = shadow_update_paging_modes,
     .flush_tlb                     = shadow_flush_tlb,
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -127,6 +127,7 @@ struct paging_mode {
                                             struct cpu_user_regs *regs);
     bool          (*invlpg                )(struct vcpu *v,
                                             unsigned long linear);
+#ifdef CONFIG_HVM
     unsigned long (*gva_to_gfn            )(struct vcpu *v,
                                             struct p2m_domain *p2m,
                                             unsigned long va,
@@ -136,6 +137,7 @@ struct paging_mode {
                                             unsigned long cr3,
                                             paddr_t ga, uint32_t *pfec,
                                             unsigned int *page_order);
+#endif
     void          (*update_cr3            )(struct vcpu *v, int do_locking,
                                             bool noflush);
     void          (*update_paging_modes   )(struct vcpu *v);
@@ -286,6 +288,8 @@ unsigned long paging_gva_to_gfn(struct v
                                 unsigned long va,
                                 uint32_t *pfec);
 
+#ifdef CONFIG_HVM
+
 /* Translate a guest address using a particular CR3 value.  This is used
  * to by nested HAP code, to walk the guest-supplied NPT tables as if
  * they were pagetables.
@@ -304,6 +308,8 @@ static inline unsigned long paging_ga_to
         page_order);
 }
 
+#endif /* CONFIG_HVM */
+
 /* Update all the things that are derived from the guest's CR3.
  * Called when the guest changes CR3; the caller can then use v->arch.cr3
  * as the value to load into the host CR3 to schedule this vcpu */



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:28:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:28:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54471.94644 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDBd-0003xc-Dh; Tue, 15 Dec 2020 16:28:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54471.94644; Tue, 15 Dec 2020 16:28:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDBd-0003xU-9W; Tue, 15 Dec 2020 16:28:29 +0000
Received: by outflank-mailman (input) for mailman id 54471;
 Tue, 15 Dec 2020 16:28:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Vckb=FT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kpDBc-0003xO-K4
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:28:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 29e18333-382a-4328-8c7f-5abed8e08e0d;
 Tue, 15 Dec 2020 16:28:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AFDB1AF9E;
 Tue, 15 Dec 2020 16:28:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 29e18333-382a-4328-8c7f-5abed8e08e0d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608049707; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=aGh7mL6+YZBcigbYlw7lNc5SDhGzwaIzC2wWxunuha4=;
	b=VAs/I3NVev7ZzZ2PCs2afHBrzjHapEshm7HVklz+VUTRwfd+/MqhMsVD8tn+3cuMc2Thxs
	Su/wW43MMwU9Yc1NazikmJDJPMO0trDFNKe5cisVcxF+ngAwPHo635mLyBe0AVxcRMvRJ8
	0Bm3oL14kFvYHajtVB2oyxYiBaHB9os=
Subject: [PATCH 6/6] x86/p2m: set_shared_p2m_entry() is MEM_SHARING-only
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>
References: <be9ce75e-9119-2b5a-9e7b-437beb7ee446@suse.com>
Message-ID: <a9c24d20-0feb-42c8-ae2c-5cfd564157ee@suse.com>
Date: Tue, 15 Dec 2020 17:28:26 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <be9ce75e-9119-2b5a-9e7b-437beb7ee446@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1476,6 +1476,8 @@ int clear_identity_p2m_entry(struct doma
     return ret;
 }
 
+#ifdef CONFIG_MEM_SHARING
+
 /* Returns: 0 for success, -errno for failure */
 int set_shared_p2m_entry(struct domain *d, unsigned long gfn_l, mfn_t mfn)
 {
@@ -1514,7 +1516,10 @@ int set_shared_p2m_entry(struct domain *
     return rc;
 }
 
+#endif /* CONFIG_MEM_SHARING */
+
 #ifdef CONFIG_HVM
+
 static struct p2m_domain *
 p2m_getlru_nestedp2m(struct domain *d, struct p2m_domain *p2m)
 {



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:28:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:28:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54474.94656 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDBr-00042k-Mb; Tue, 15 Dec 2020 16:28:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54474.94656; Tue, 15 Dec 2020 16:28:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDBr-00042d-Hm; Tue, 15 Dec 2020 16:28:43 +0000
Received: by outflank-mailman (input) for mailman id 54474;
 Tue, 15 Dec 2020 16:28:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SBK9=FT=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kpDBq-00042O-GB
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:28:42 +0000
Received: from mail-wm1-f67.google.com (unknown [209.85.128.67])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c445b198-3528-46e9-9a1c-f6ac4e3b9408;
 Tue, 15 Dec 2020 16:28:41 +0000 (UTC)
Received: by mail-wm1-f67.google.com with SMTP id k10so17433170wmi.3
 for <xen-devel@lists.xenproject.org>; Tue, 15 Dec 2020 08:28:41 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id 94sm39531260wrq.22.2020.12.15.08.28.40
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 15 Dec 2020 08:28:40 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c445b198-3528-46e9-9a1c-f6ac4e3b9408
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=j4tyz0KYb1+m9dRi4JRX79o8d/DzBmaeKPiouZzpfiw=;
        b=AymfjRnC//EHiu7Vefi0Onerqg91k03TRLX2YXA+GrVZI70L6l081fFn6znm0HDZ+E
         /5j7/pKSN9/dhGweT0Q74NjJAj3YjHW3sSA2PDFmSrzHy4aYsPAO6qDY18nJvfSHfYks
         rUpayxobe/0HI53Xrcjb9YXWV+8G23VZNj/hSGW4K1vL1W4RbRiu1bkodVLr1zqHdAqr
         zeJo4ve6MSFQgatxOOeajPlml8fF2Ieg7JdO+I9l2rOAcEjKryxXO4XkM/PH98PwSJ2K
         y8OBgXBHb0pK5BHrRdYqsF9p793dw5MwuDNIMqTREyi57WnuEawxYIk1tEz51rW5wFbM
         eXSw==
X-Gm-Message-State: AOAM533/I/j+4JRG1Ys62Ibq2aJy5HN56LOUfSm63Qv1rMYFt7uVX4lU
	lUAuKUJTBhz9iJedCiQuDO4=
X-Google-Smtp-Source: ABdhPJyl+EIpqtc0c0xoqNXI313POITbese7sP6R6Su0lWEELf4cv8amYqeyVPZBhw6Y32FV+pqPRA==
X-Received: by 2002:a1c:770d:: with SMTP id t13mr34839387wmi.153.1608049721060;
        Tue, 15 Dec 2020 08:28:41 -0800 (PST)
Date: Tue, 15 Dec 2020 16:28:39 +0000
From: Wei Liu <wl@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 1/4] x86: verify function type (and maybe attribute)
 in switch_stack_and_jump()
Message-ID: <20201215162838.o7ihaoj2tirqjg5s@liuwe-devbox-debian-v2>
References: <f4179ee3-56e4-ab18-7aae-55281c4d4412@suse.com>
 <792c442d-c05a-7a00-c807-b94a54bca94f@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <792c442d-c05a-7a00-c807-b94a54bca94f@suse.com>
User-Agent: NeoMutt/20180716

On Tue, Dec 15, 2020 at 05:11:41PM +0100, Jan Beulich wrote:
> It is imperative that the functions passed here take no arguments,
> return no values, and don't return in the first place. While the type
> can be checked uniformly, the attribute check is limited to gcc 9 and
> newer (no clang support for this so far afaict).
> 
> Note that I didn't want to have the "true" fallback "implementation" of
> __builtin_has_attribute(..., __noreturn__) generally available, as
> "true" may not be a suitable fallback in other cases.
> 
> Note further that the noreturn addition to startup_cpu_idle_loop()'s
> declaration requires adding unreachable() to Arm's
> switch_stack_and_jump(), or else the build would break. I suppose this
> should have been there already.
> 
> For vmx_asm_do_vmentry() along with adding the attribute, also restrict
> its scope.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:28:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:28:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54476.94668 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDC1-00047e-Ud; Tue, 15 Dec 2020 16:28:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54476.94668; Tue, 15 Dec 2020 16:28:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDC1-00047V-Qh; Tue, 15 Dec 2020 16:28:53 +0000
Received: by outflank-mailman (input) for mailman id 54476;
 Tue, 15 Dec 2020 16:28:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SBK9=FT=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kpDC0-000475-H1
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:28:52 +0000
Received: from mail-wm1-f67.google.com (unknown [209.85.128.67])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 37fc0100-5474-4e73-bfe1-b4af6e62d9c9;
 Tue, 15 Dec 2020 16:28:51 +0000 (UTC)
Received: by mail-wm1-f67.google.com with SMTP id n16so236732wmc.0
 for <xen-devel@lists.xenproject.org>; Tue, 15 Dec 2020 08:28:51 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id s133sm38197648wmf.38.2020.12.15.08.28.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 15 Dec 2020 08:28:50 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 37fc0100-5474-4e73-bfe1-b4af6e62d9c9
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=KY5wz41Enh2tKod7R02tcFCNNgU7dvd2/e/Rsy9POQ4=;
        b=KtwI6fsRxAWc0dS2Rb2bQKtGLY+DiOocvoj/Jbx8/9uiyP4YGFjfjmzYJcD5XddWDE
         J5Xwl8JgalMa1ixuwObrkjUUj4dqd4rCTkVY5yYAw3KPvNvCohLALSl6W552NPqfY2Vw
         QKYfpt0cOIxB2zxLhryNSsLJ73QbfQgEfRpNMz7DWnz4QdvOwrjveKLLCJ4luNHK0n3h
         d2Jur5ykRSrhOXXbw/1jYDCUDPxqFHGLrfH+14z2BXm0XVktYFh2xJWF0qwXkvNItdak
         Ga2DoXdC9JeaT+wVvF3xaB4/hLGuiNIwBy0yvMN2S5FPQI9lRIYXwYmhKBuYWzB505aZ
         o05Q==
X-Gm-Message-State: AOAM531jMN9JqxiU0CZJewU+fUewVEI9pGKFGsJII9RdqZH/OJXra5X1
	7FFL9JeaKtdjQFGzv+MFIZE=
X-Google-Smtp-Source: ABdhPJyFcS62rGLLZ0EjAwY0p9t8w/nlk+P69zOcR/Ub+6SBHdiRFu9Da86aHG305c8OK9PsWgEuhw==
X-Received: by 2002:a1c:cc14:: with SMTP id h20mr14143517wmb.180.1608049731193;
        Tue, 15 Dec 2020 08:28:51 -0800 (PST)
Date: Tue, 15 Dec 2020 16:28:49 +0000
From: Wei Liu <wl@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Subject: Re: [PATCH v2 2/4] x86: clobber registers in switch_stack_and_jump()
 when !LIVEPATCH
Message-ID: <20201215162849.khcagkpxe3ccnohj@liuwe-devbox-debian-v2>
References: <f4179ee3-56e4-ab18-7aae-55281c4d4412@suse.com>
 <28db518c-da59-8e56-b8dc-ccc814f91131@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <28db518c-da59-8e56-b8dc-ccc814f91131@suse.com>
User-Agent: NeoMutt/20180716

On Tue, Dec 15, 2020 at 05:12:12PM +0100, Jan Beulich wrote:
> In order to have the same effect on registers as a call to
> check_for_livepatch_work() may have, clobber all call-clobbered
> registers in debug builds.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:29:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:29:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54484.94680 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDCj-0004P9-8Q; Tue, 15 Dec 2020 16:29:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54484.94680; Tue, 15 Dec 2020 16:29:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDCj-0004P2-5B; Tue, 15 Dec 2020 16:29:37 +0000
Received: by outflank-mailman (input) for mailman id 54484;
 Tue, 15 Dec 2020 16:29:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ADE+=FT=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kpDCi-0004Ow-MN
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:29:36 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [81.169.146.161])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 52abdeaf-96af-4caf-b3ac-4297c68be352;
 Tue, 15 Dec 2020 16:29:35 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.8.3 DYNA|AUTH)
 with ESMTPSA id D005cbwBFGTV4WN
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 15 Dec 2020 17:29:31 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 52abdeaf-96af-4caf-b3ac-4297c68be352
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1608049774;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:From:
	Subject:Sender;
	bh=46DZN/VEJ99V4ZgVjeUCEK5Kzd6YvHl26d9HkIJ5Wj0=;
	b=aQeY5xTH/kFAYMQLYcHgm2JP3uY7kNJ3tTZcKEbCWESReXPwVPe+u+WaKc4JfO4Fws
	DjvfwOtpsSX3PhWxFjpVISQ9fE1sWF0Si59sm8PnDqReNOxA+1EW0ElLNsInraVLjOG+
	CBRasayLIpTRoQWd/N8p3sI+6yBXhDZCi+fGoq/jUgGsXg+6sJDnHoLLNy++L2o/f3mw
	Ebh2qZZeEj3IbMPp648jkbN2NdZuTYun5Vm7xRdyFsq5bCn1F8MnOMv98jODd1mhtSjE
	XJQ12OOVpNtTh4QD1tFfcsbmGA/Mo8xjFzFN9VYS0pCeQpiwDFosUhKC7c78yqvBSmXl
	+bVQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDXdoX8l8pYAcz5OTWOnz/A=="
X-RZG-CLASS-ID: mo00
Date: Tue, 15 Dec 2020 17:29:17 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Wei Liu <wl@xen.org>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>
Subject: Re: [PATCH v1 3/3] tools: add API to work with several bits at once
Message-ID: <20201215172917.556071ff.olaf@aepfle.de>
In-Reply-To: <20201215162244.mln6xm5qj7pmvauc@liuwe-devbox-debian-v2>
References: <20201209155452.28376-1-olaf@aepfle.de>
	<20201209155452.28376-3-olaf@aepfle.de>
	<20201215162244.mln6xm5qj7pmvauc@liuwe-devbox-debian-v2>
X-Mailer: Claws Mail 2020.08.19 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 boundary="Sig_/pGuc=7kWgqwdQnd8HcUA8Na"; protocol="application/pgp-signature"

--Sig_/pGuc=7kWgqwdQnd8HcUA8Na
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Tue, 15 Dec 2020 16:22:44 +0000,
Wei Liu <wl@xen.org> wrote:

> What's wrong with requiring the input addr be const unsigned long *?

Probably nothing. In the end I just borrowed the prototypes from the other functions in this file.

I will resend with this change once I have the consumers ready.

Olaf

--Sig_/pGuc=7kWgqwdQnd8HcUA8Na
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAl/Y5F0ACgkQ86SN7mm1
DoAUXw/+PoC5o52SIW7bjQ+tTgh3eAQ8RU+nDiRJsV5Ym3zQfcD51XnFdQyhKlq9
dv0sKi2D4SiDObiPEesm673mF/AnOXE/PF3PMFOfu5Wi57FHy6b00L0Elj0ZzhGQ
kLCUikzkjY3bchjmnMkXKZthIXB3pVVwdW4osBBslbnB718jEYh/nHQD1U/9K1tS
Jzd9z7khJxcjSsLuHe95iva/bsXmeh1vohYSBFP8gE3sZL7z9KyJqLSHyc42NYL2
LdhY6nneTMG6O69EeazIaUUD3jdLZ2kDzGSMUaIUFHltEoKN6z+uT1CtsVVwU+ck
3rYNOaEq+MiDgxePSTg7jvwia3OrM127SNp95esBnec/yk1Ai3jjMLJLBfi/XX8B
4w3k6lX9yZG8qNDW6lXDf/z3qq9/oJ1VGwMJ8l2A5lrL3TasWlvYac/bhnWAlukt
2rxplqZP87jsIsXunGSj4OhZPL7LAY2OtOjPrCQZKMCJsNkdYFV0XHsABWd051iQ
6e6vlbwHoQnKJ6SsWHbPEAiEDoAOGdWggcV8W6nmRdPHHM7QtofKw1P0+Gb1Px4K
Bgaz4lMLD0zX9HXFgORuSHnkntNyNLbHFsTMnWb6yf0aDBJqhhH/2VWragyu58AJ
1mbhkXgSukGIjmUIoia/e55ObuYIph2UkPb18McufLE4/Q2Y3a4=
=X7kn
-----END PGP SIGNATURE-----

--Sig_/pGuc=7kWgqwdQnd8HcUA8Na--


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:29:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:29:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54485.94692 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDCo-0004S1-HR; Tue, 15 Dec 2020 16:29:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54485.94692; Tue, 15 Dec 2020 16:29:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDCo-0004Rt-Du; Tue, 15 Dec 2020 16:29:42 +0000
Received: by outflank-mailman (input) for mailman id 54485;
 Tue, 15 Dec 2020 16:29:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SBK9=FT=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kpDCn-0004Ow-LM
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:29:41 +0000
Received: from mail-wr1-f67.google.com (unknown [209.85.221.67])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 69d3f5d6-df17-4451-ba3b-4abd5894ee80;
 Tue, 15 Dec 2020 16:29:38 +0000 (UTC)
Received: by mail-wr1-f67.google.com with SMTP id r7so20488577wrc.5
 for <xen-devel@lists.xenproject.org>; Tue, 15 Dec 2020 08:29:38 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id h29sm26837793wrc.68.2020.12.15.08.29.37
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 15 Dec 2020 08:29:37 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 69d3f5d6-df17-4451-ba3b-4abd5894ee80
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=uOABWkSAvdD198SSsGdA5XhgKek8Zb4ci4zIBiaJPzE=;
        b=RCaAmS47yMmtfJ7GC3ssy5sp4kiMjAX2MGAF8540gioQ64+XH8WToqNMyxaUDl8vZQ
         8NcIe8KkkNFbI3sRcFgUlmMaFUBI26CSo4S8X1q40fRcS6lAXst2KSg3d4hFIpFwmB2r
         NuwQ1kQIV5amHKn2EJviqGN/BBh9MCb9cgXBrnzLQ0an4ku2c4SdwQaTsb/F4+Q9ommn
         iWYEexUCA1/CpVVEICuh0lodF2TukUL+LIzUO6nF6gLU3PqarmFtJGxg5KYQOf6E7JVq
         auGDXuUpk9Yai/h4+w+eY9jTgcXwFz79AwZH5ehBRA4SqRLkwl0VkQVADVWHf4AOqFhO
         Pwhg==
X-Gm-Message-State: AOAM531eFacCytgVy24y8eD3EF8NUjTJJjWI7GoR/lXTZ4hk670PANwv
	jKMVHRMdThq8GcnFUOhpAXI=
X-Google-Smtp-Source: ABdhPJzt0bdUPI5AlsVeen9j9jU1OdH+99+4LMPrnhLYVuL8ETKWJwu3VPm7TAFLUDvP1SKuxN9nNw==
X-Received: by 2002:adf:e590:: with SMTP id l16mr35018336wrm.294.1608049778235;
        Tue, 15 Dec 2020 08:29:38 -0800 (PST)
Date: Tue, 15 Dec 2020 16:29:36 +0000
From: Wei Liu <wl@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Subject: Re: [PATCH v2 3/4] x86/PV: avoid double stack reset during schedule
 tail handling
Message-ID: <20201215162936.vwex5yvwzg6nsi7u@liuwe-devbox-debian-v2>
References: <f4179ee3-56e4-ab18-7aae-55281c4d4412@suse.com>
 <00befc54-58f7-1891-031e-cdb848fb5787@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <00befc54-58f7-1891-031e-cdb848fb5787@suse.com>
User-Agent: NeoMutt/20180716

On Tue, Dec 15, 2020 at 05:12:36PM +0100, Jan Beulich wrote:
> Invoking check_wakeup_from_wait() from assembly allows the new
> continue_pv_domain() to replace the prior continue_nonidle_domain() as
> the tail hook, eliminating an extra reset_stack_and_jump().
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:30:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:30:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54492.94704 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDDV-0005JX-Vj; Tue, 15 Dec 2020 16:30:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54492.94704; Tue, 15 Dec 2020 16:30:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDDV-0005JQ-RM; Tue, 15 Dec 2020 16:30:25 +0000
Received: by outflank-mailman (input) for mailman id 54492;
 Tue, 15 Dec 2020 16:30:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SBK9=FT=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kpDDU-0005JH-Co
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:30:24 +0000
Received: from mail-wm1-f43.google.com (unknown [209.85.128.43])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 35d8356b-d5c9-46ab-a378-28ed7d16c5a2;
 Tue, 15 Dec 2020 16:30:23 +0000 (UTC)
Received: by mail-wm1-f43.google.com with SMTP id v14so17434642wml.1
 for <xen-devel@lists.xenproject.org>; Tue, 15 Dec 2020 08:30:23 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id l16sm39208219wrx.5.2020.12.15.08.30.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 15 Dec 2020 08:30:21 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35d8356b-d5c9-46ab-a378-28ed7d16c5a2
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=sol3DqdqjvPfzZLv9LxukCVxPxOlrQeDo4/RylqHliE=;
        b=RpDWkijXdZ8/1w8hH+wIgX0B3JQsy9I9e90GEsySZtSWvzXM02a2YLgq+CU+4hmm5S
         Q/S2Er+Y+qtb9xgpagzfMN0OcuFNEchRYnQDc+RH4Um4+rvzhCdkZ2ixZ14O4fbrhhuw
         E2bKy5XuBXSdPXb8DpkjzD6mJ0pp4YHUWcTL5sNG+w++AXJoZ1ByCq+7Zax6A8oNe3xa
         Ee1qu/Cf5xXR0lC+ybDN+G6hman4BrusvtIzhuwAQFqbIFgXbybRe0r7IdYSC+DSp220
         OuxHk/KU1EPnLXPVQwWgYGAxOPYftMb510IW2Hoe4dnVs6t30ffM35HeIoAtZQdP+55B
         EFOw==
X-Gm-Message-State: AOAM530huMcc9otddcdfZWk59PV9am5nwalyB105h8LxTlqvmZV0Q3KM
	TghM5m+4WMItUdCCZxWvTLsOxd75ZYQ=
X-Google-Smtp-Source: ABdhPJyy1umDtSjhcYwHQAtgqDHUbyxRLLaq6ibsWFtTBd0l0QNYKDrpF6dsqvq6zUP4lxvOKM6rXg==
X-Received: by 2002:a1c:9949:: with SMTP id b70mr8154398wme.72.1608049822385;
        Tue, 15 Dec 2020 08:30:22 -0800 (PST)
Date: Tue, 15 Dec 2020 16:30:20 +0000
From: Wei Liu <wl@xen.org>
To: Olaf Hering <olaf@aepfle.de>
Cc: Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org,
	Ian Jackson <iwj@xenproject.org>
Subject: Re: [PATCH v1 3/3] tools: add API to work with several bits at once
Message-ID: <20201215163020.tbam3pktisoyksun@liuwe-devbox-debian-v2>
References: <20201209155452.28376-1-olaf@aepfle.de>
 <20201209155452.28376-3-olaf@aepfle.de>
 <20201215162244.mln6xm5qj7pmvauc@liuwe-devbox-debian-v2>
 <20201215172917.556071ff.olaf@aepfle.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201215172917.556071ff.olaf@aepfle.de>
User-Agent: NeoMutt/20180716

On Tue, Dec 15, 2020 at 05:29:17PM +0100, Olaf Hering wrote:
> Am Tue, 15 Dec 2020 16:22:44 +0000
> schrieb Wei Liu <wl@xen.org>:
> 
> > What's wrong with requiring the input addr be const unsigned long *?
> 
> Probably nothing. In the end I just borrowed the prototypes from the other functions in this file.
> 
> I will resend with this change once I have the consumers ready.

Okay.

I will push the first two shortly.

Wei.

> 
> Olaf




From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:33:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:33:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54499.94715 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDFr-0005WQ-CY; Tue, 15 Dec 2020 16:32:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54499.94715; Tue, 15 Dec 2020 16:32:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDFr-0005WJ-9b; Tue, 15 Dec 2020 16:32:51 +0000
Received: by outflank-mailman (input) for mailman id 54499;
 Tue, 15 Dec 2020 16:32:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1gTI=FT=oracle.com=konrad.wilk@srs-us1.protection.inumbo.net>)
 id 1kpDFp-0005WD-U7
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:32:50 +0000
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ebf22e93-3046-4853-af95-96e5c538d4e1;
 Tue, 15 Dec 2020 16:32:49 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0BFGK4w8157495;
 Tue, 15 Dec 2020 16:32:47 GMT
Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71])
 by aserp2120.oracle.com with ESMTP id 35cntm3fp4-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Tue, 15 Dec 2020 16:32:47 +0000
Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1])
 by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0BFGLBOJ119059;
 Tue, 15 Dec 2020 16:32:47 GMT
Received: from aserv0121.oracle.com (aserv0121.oracle.com [141.146.126.235])
 by aserp3030.oracle.com with ESMTP id 35d7en91tn-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 15 Dec 2020 16:32:46 +0000
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
 by aserv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 0BFGWkEM011567;
 Tue, 15 Dec 2020 16:32:46 GMT
Received: from char.us.oracle.com (/10.152.32.25)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Tue, 15 Dec 2020 08:32:45 -0800
Received: by char.us.oracle.com (Postfix, from userid 1000)
 id B54586A00F4; Tue, 15 Dec 2020 11:34:52 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ebf22e93-3046-4853-af95-96e5c538d4e1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=date : from : to : cc
 : subject : message-id : references : mime-version : content-type :
 in-reply-to; s=corp-2020-01-29;
 bh=dEzuJZYdwQshX1RFzGp+3lUMPPjUeZnIpK7xkDUWxjA=;
 b=fyerFR+RzpDkgqlU8NwrZXmLK4pxCcAiv01Dk06x0j9rVF/7Febv7M9ZaRKwVg7NlK7w
 AqnJDurlP9MG3ad2Xvi0TjtduauvbNIVRGBY2FYy/JmFhZtJupxfOeYl03s6hHkJ0Gxl
 U+CUVltwIEFOPd0yxv5KBa2Uv3Ixg9+NhQ4vG2x66YnDF0uAr7EKLvMSD6dNy8MizvMQ
 V1KEmkmVdnG9pK1hUQcUoKsoDqZiD84zN0O402Mez7EBxqHTv52s/wG0Q9Oxy4D2kV28
 GKI3qnzGDqzKa6AXtoVa1W8zv54NpZB8fZmv2toNK+c724GemOZUJ65VNA9fNbY5gYEG Gw== 
Date: Tue, 15 Dec 2020 11:34:52 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Ross Lagerwall <ross.lagerwall@citrix.com>
Subject: Re: [PATCH v2 4/4] livepatch: adjust a stale comment
Message-ID: <20201215163452.GA349@char.us.oracle.com>
References: <f4179ee3-56e4-ab18-7aae-55281c4d4412@suse.com>
 <659b188d-d26e-3351-2285-3e75197e2c5f@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <659b188d-d26e-3351-2285-3e75197e2c5f@suse.com>
User-Agent: Mutt/1.9.1 (2017-09-22)
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9836 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 phishscore=0 spamscore=0 bulkscore=0
 suspectscore=0 adultscore=0 mlxscore=0 mlxlogscore=999 malwarescore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2012150111
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9836 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 phishscore=0 mlxscore=0
 lowpriorityscore=0 spamscore=0 adultscore=0 malwarescore=0 suspectscore=0
 mlxlogscore=999 impostorscore=0 priorityscore=1501 clxscore=1011
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2012150111

On Tue, Dec 15, 2020 at 05:13:43PM +0100, Jan Beulich wrote:
> As of 005de45c887e ("xen: do live patching only from main idle loop")
> the comment ahead of livepatch_do_action() has been stale.
> 
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Thank you!
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/common/livepatch.c
> +++ b/xen/common/livepatch.c
> @@ -1392,8 +1392,8 @@ static inline bool was_action_consistent
>  }
>  
>  /*
> - * This function is executed having all other CPUs with no deep stack (we may
> - * have cpu_idle on it) and IRQs disabled.
> + * This function is executed having all other CPUs with no deep stack (when
> + * idle) and IRQs disabled.
>   */
>  static void livepatch_do_action(void)
>  {
> 


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:36:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:36:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54528.94821 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJ6-000672-7z; Tue, 15 Dec 2020 16:36:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54528.94821; Tue, 15 Dec 2020 16:36:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJ6-00066u-3f; Tue, 15 Dec 2020 16:36:12 +0000
Received: by outflank-mailman (input) for mailman id 54528;
 Tue, 15 Dec 2020 16:36:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpDJ5-00066M-Ii
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:36:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4a7ee2d4-cc73-4aa5-912e-16bad8b626cb;
 Tue, 15 Dec 2020 16:36:08 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D6D18AD5C;
 Tue, 15 Dec 2020 16:36:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a7ee2d4-cc73-4aa5-912e-16bad8b626cb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608050167; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=NHjrhGiUQFsdbZDZFHw3I8KGLBEONfoM4zSWmLMKK4w=;
	b=l7F07gFpB7BMP5/TheYwzFqpGp5jK/IX4hsEdhP8iQQopOTp+ucfkpYkppi2THAj2Thh15
	Z8OwUlfeI8jweNPgbOzRlH+vu7N5acksoEF+/oNBoEnYsiSIrj9ZgYTIRZmc3RomkfHbuU
	aXTfDV3AR4WJP9wBKDpxDulZJ3xdqHY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v10 01/25] tools/xenstore: switch barf[_perror]() to use syslog()
Date: Tue, 15 Dec 2020 17:35:39 +0100
Message-Id: <20201215163603.21700-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215163603.21700-1-jgross@suse.com>
References: <20201215163603.21700-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When xenstored crashes due to an unrecoverable condition, it calls
either barf() or barf_perror() to issue a message and then exit().

Make sure the message is visible somewhere by using syslog() in
addition to xprintf(), as the latter is visible only with tracing
active.
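The dual-emission pattern the patch introduces can be sketched as follows. This is an illustrative stand-in, not the actual xenstored code: `report_fatal()` and `format_message()` are hypothetical names, and stderr stands in for xenstored's trace-only xprintf().

```c
#define _GNU_SOURCE
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <syslog.h>

/* Format the message once into an allocated string. */
static char *vformat_message(const char *fmt, va_list ap)
{
    char *str = NULL;

    if (vasprintf(&str, fmt, ap) < 0)
        return NULL;
    return str;
}

static char *format_message(const char *fmt, ...)
{
    va_list ap;
    char *str;

    va_start(ap, fmt);
    str = vformat_message(fmt, ap);
    va_end(ap);
    return str;
}

/* Emit the message via both channels, mirroring the patched barf():
 * syslog() is captured even without tracing, while the trace channel
 * (stderr here) is visible only when tracing is active. The real
 * barf() would exit() after this. */
static void report_fatal(const char *fmt, ...)
{
    va_list ap;
    char *str;

    va_start(ap, fmt);
    str = vformat_message(fmt, ap);
    va_end(ap);

    if (str) {
        syslog(LOG_CRIT, "%s", str);  /* always captured */
        fprintf(stderr, "%s\n", str); /* trace-only in xenstored */
        free(str);
    }
}
```

Formatting once and emitting twice keeps the two outputs guaranteed identical, which matters when correlating syslog entries with trace logs.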

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V4:
- new patch
---
 tools/xenstore/utils.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/tools/xenstore/utils.c b/tools/xenstore/utils.c
index a1ac12584a..633ce3b4fc 100644
--- a/tools/xenstore/utils.c
+++ b/tools/xenstore/utils.c
@@ -3,6 +3,7 @@
 #include <stdarg.h>
 #include <stdlib.h>
 #include <string.h>
+#include <syslog.h>
 #include <errno.h>
 #include <unistd.h>
 #include <fcntl.h>
@@ -35,6 +36,7 @@ void barf(const char *fmt, ...)
 	va_end(arglist);
 
  	if (bytes >= 0) {
+		syslog(LOG_CRIT, "%s\n", str);
 		xprintf("%s\n", str);
 		free(str);
 	}
@@ -54,6 +56,7 @@ void barf_perror(const char *fmt, ...)
 	va_end(arglist);
 
  	if (bytes >= 0) {
+		syslog(LOG_CRIT, "%s: %s\n", str, strerror(err));
 		xprintf("%s: %s\n", str, strerror(err));
 		free(str);
 	}
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:36:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:36:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54527.94816 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJ5-00066a-Vi; Tue, 15 Dec 2020 16:36:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54527.94816; Tue, 15 Dec 2020 16:36:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJ5-00066T-RO; Tue, 15 Dec 2020 16:36:11 +0000
Received: by outflank-mailman (input) for mailman id 54527;
 Tue, 15 Dec 2020 16:36:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpDJ4-000667-HD
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:36:10 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 149f1d9d-c46c-4705-a48a-fc83a6645843;
 Tue, 15 Dec 2020 16:36:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 69CFDAD6A;
 Tue, 15 Dec 2020 16:36:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 149f1d9d-c46c-4705-a48a-fc83a6645843
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608050168; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=iNV/KA8nEAAEkG05F5JTnuJ6pmnkrzAhlaFIwhddg8E=;
	b=YCy9BEGPS5F1VKvruRzYcdVXEV0FJVzDpOVUONqjGiynneQTGpLlQNqVGLYAOOr/x+SGNC
	KwaT/VQS4+5Xo1G88/GKds4KGuRfw+tzH897tVDe8j01cujdCoUfctAdbiMGdHob753AUZ
	QXUrBGyoLzKc7Wsxa5dcKs7eh7JghgQ=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v10 04/25] tools/libxenevtchn: add possibility to not close file descriptor on exec
Date: Tue, 15 Dec 2020 17:35:42 +0100
Message-Id: <20201215163603.21700-5-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215163603.21700-1-jgross@suse.com>
References: <20201215163603.21700-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Today the file descriptor used to access the event channel driver is
closed on exec(2). For supporting live update of a daemon using
libxenevtchn this can be problematic, so add a way to keep that file
descriptor open.

Add support for a flag XENEVTCHN_NO_CLOEXEC in xenevtchn_open() which
results in _not_ setting O_CLOEXEC when opening the event channel
driver node.

The caller can then obtain the file descriptor via xenevtchn_fd().

Add an alternative open function xenevtchn_open_fd() which takes that
file descriptor as an additional parameter. This allows allocating a
xenevtchn_handle and associating it with that file descriptor.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Wei Liu <wl@xen.org>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V7:
- new patch

V8:
- some minor comments by Julien Grall addressed

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/include/xenevtchn.h          | 16 ++++++++++-
 tools/libs/evtchn/Makefile         |  2 +-
 tools/libs/evtchn/core.c           | 45 ++++++++++++++++++++++++------
 tools/libs/evtchn/freebsd.c        |  9 ++++--
 tools/libs/evtchn/libxenevtchn.map |  4 +++
 tools/libs/evtchn/linux.c          |  9 ++++--
 tools/libs/evtchn/minios.c         |  6 +++-
 tools/libs/evtchn/netbsd.c         |  2 +-
 tools/libs/evtchn/private.h        |  2 +-
 tools/libs/evtchn/solaris.c        |  2 +-
 10 files changed, 79 insertions(+), 18 deletions(-)

diff --git a/tools/include/xenevtchn.h b/tools/include/xenevtchn.h
index 91821ee56d..dadc46ea36 100644
--- a/tools/include/xenevtchn.h
+++ b/tools/include/xenevtchn.h
@@ -64,11 +64,25 @@ struct xentoollog_logger;
  *
  * Calling xenevtchn_close() is the only safe operation on a
  * xenevtchn_handle which has been inherited.
+ *
+ * Setting XENEVTCHN_NO_CLOEXEC keeps the file descriptor used for the
+ * event channel driver open across exec(2). To be able to use that file
+ * descriptor, the new binary activated via exec(2) has to call
+ * xenevtchn_open_fd() with it as parameter in order to associate it with
+ * a new handle. The file descriptor can be obtained via xenevtchn_fd()
+ * before calling exec(2).
  */
-/* Currently no flags are defined */
+
+/* Don't set O_CLOEXEC when opening event channel driver node. */
+#define XENEVTCHN_NO_CLOEXEC 0x01
+
 xenevtchn_handle *xenevtchn_open(struct xentoollog_logger *logger,
                                  unsigned open_flags);
 
+/* Flag XENEVTCHN_NO_CLOEXEC is ignored by xenevtchn_open_fd(). */
+xenevtchn_handle *xenevtchn_open_fd(struct xentoollog_logger *logger,
+                                    int fd, unsigned open_flags);
+
 /*
  * Close a handle previously allocated with xenevtchn_open().
  */
diff --git a/tools/libs/evtchn/Makefile b/tools/libs/evtchn/Makefile
index ad01a17b3d..b8c37b5b97 100644
--- a/tools/libs/evtchn/Makefile
+++ b/tools/libs/evtchn/Makefile
@@ -2,7 +2,7 @@ XEN_ROOT = $(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
 MAJOR    = 1
-MINOR    = 1
+MINOR    = 2
 
 SRCS-y                 += core.c
 SRCS-$(CONFIG_Linux)   += linux.c
diff --git a/tools/libs/evtchn/core.c b/tools/libs/evtchn/core.c
index aff6ecfaa0..fa1e44b6ea 100644
--- a/tools/libs/evtchn/core.c
+++ b/tools/libs/evtchn/core.c
@@ -13,6 +13,7 @@
  * License along with this library; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <errno.h>
 #include <unistd.h>
 #include <stdlib.h>
 
@@ -28,10 +29,9 @@ static int all_restrict_cb(Xentoolcore__Active_Handle *ah, domid_t domid) {
     return xenevtchn_restrict(xce, domid);
 }
 
-xenevtchn_handle *xenevtchn_open(xentoollog_logger *logger, unsigned open_flags)
+static xenevtchn_handle *xenevtchn_alloc_handle(xentoollog_logger *logger)
 {
     xenevtchn_handle *xce = malloc(sizeof(*xce));
-    int rc;
 
     if (!xce) return NULL;
 
@@ -49,19 +49,48 @@ xenevtchn_handle *xenevtchn_open(xentoollog_logger *logger, unsigned open_flags)
         if (!xce->logger) goto err;
     }
 
-    rc = osdep_evtchn_open(xce);
-    if ( rc  < 0 ) goto err;
+    return xce;
+
+err:
+    xenevtchn_close(xce);
+    return NULL;
+}
+
+xenevtchn_handle *xenevtchn_open(xentoollog_logger *logger, unsigned open_flags)
+{
+    xenevtchn_handle *xce = xenevtchn_alloc_handle(logger);
+    int rc;
+
+    if (!xce) return NULL;
+
+    rc = osdep_evtchn_open(xce, open_flags);
+    if ( rc < 0 ) goto err;
 
     return xce;
 
 err:
-    xentoolcore__deregister_active_handle(&xce->tc_ah);
-    osdep_evtchn_close(xce);
-    xtl_logger_destroy(xce->logger_tofree);
-    free(xce);
+    xenevtchn_close(xce);
     return NULL;
 }
 
+xenevtchn_handle *xenevtchn_open_fd(struct xentoollog_logger *logger,
+                                    int fd, unsigned open_flags)
+{
+    xenevtchn_handle *xce;
+
+    if (open_flags & ~XENEVTCHN_NO_CLOEXEC) {
+        errno = EINVAL;
+        return NULL;
+    }
+
+    xce = xenevtchn_alloc_handle(logger);
+    if (!xce) return NULL;
+
+    xce->fd = fd;
+
+    return xce;
+}
+
 int xenevtchn_close(xenevtchn_handle *xce)
 {
     int rc;
diff --git a/tools/libs/evtchn/freebsd.c b/tools/libs/evtchn/freebsd.c
index 6564ed4c44..635c10f09f 100644
--- a/tools/libs/evtchn/freebsd.c
+++ b/tools/libs/evtchn/freebsd.c
@@ -31,9 +31,14 @@
 
 #define EVTCHN_DEV      "/dev/xen/evtchn"
 
-int osdep_evtchn_open(xenevtchn_handle *xce)
+int osdep_evtchn_open(xenevtchn_handle *xce, unsigned int open_flags)
 {
-    int fd = open(EVTCHN_DEV, O_RDWR|O_CLOEXEC);
+    int flags = O_RDWR;
+    int fd;
+
+    if ( !(open_flags & XENEVTCHN_NO_CLOEXEC) )
+        flags |= O_CLOEXEC;
+    fd = open(EVTCHN_DEV, flags);
     if ( fd == -1 )
         return -1;
     xce->fd = fd;
diff --git a/tools/libs/evtchn/libxenevtchn.map b/tools/libs/evtchn/libxenevtchn.map
index 33a38f953a..722fa026f9 100644
--- a/tools/libs/evtchn/libxenevtchn.map
+++ b/tools/libs/evtchn/libxenevtchn.map
@@ -21,3 +21,7 @@ VERS_1.1 {
 	global:
 		xenevtchn_restrict;
 } VERS_1.0;
+VERS_1.2 {
+	global:
+		xenevtchn_open_fd;
+} VERS_1.1;
diff --git a/tools/libs/evtchn/linux.c b/tools/libs/evtchn/linux.c
index 17e64aea32..2297488f88 100644
--- a/tools/libs/evtchn/linux.c
+++ b/tools/libs/evtchn/linux.c
@@ -34,9 +34,14 @@
 #define O_CLOEXEC 0
 #endif
 
-int osdep_evtchn_open(xenevtchn_handle *xce)
+int osdep_evtchn_open(xenevtchn_handle *xce, unsigned int open_flags)
 {
-    int fd = open("/dev/xen/evtchn", O_RDWR|O_CLOEXEC);
+    int flags = O_RDWR;
+    int fd;
+
+    if ( !(open_flags & XENEVTCHN_NO_CLOEXEC) )
+        flags |= O_CLOEXEC;
+    fd = open("/dev/xen/evtchn", flags);
     if ( fd == -1 )
         return -1;
     xce->fd = fd;
diff --git a/tools/libs/evtchn/minios.c b/tools/libs/evtchn/minios.c
index 9cd7636fc5..f5db021747 100644
--- a/tools/libs/evtchn/minios.c
+++ b/tools/libs/evtchn/minios.c
@@ -63,7 +63,11 @@ static void port_dealloc(struct evtchn_port_info *port_info) {
     free(port_info);
 }
 
-int osdep_evtchn_open(xenevtchn_handle *xce)
+/*
+ * XENEVTCHN_NO_CLOEXEC is being ignored, as there is no exec() call supported
+ * in Mini-OS.
+ */
+int osdep_evtchn_open(xenevtchn_handle *xce, unsigned int open_flags)
 {
     int fd = alloc_fd(FTYPE_EVTCHN);
     if ( fd == -1 )
diff --git a/tools/libs/evtchn/netbsd.c b/tools/libs/evtchn/netbsd.c
index 8b8545d2f9..7c73d1c599 100644
--- a/tools/libs/evtchn/netbsd.c
+++ b/tools/libs/evtchn/netbsd.c
@@ -31,7 +31,7 @@
 
 #define EVTCHN_DEV_NAME  "/dev/xenevt"
 
-int osdep_evtchn_open(xenevtchn_handle *xce)
+int osdep_evtchn_open(xenevtchn_handle *xce, unsigned int open_flags)
 {
     int fd = open(EVTCHN_DEV_NAME, O_NONBLOCK|O_RDWR);
     if ( fd == -1 )
diff --git a/tools/libs/evtchn/private.h b/tools/libs/evtchn/private.h
index 31e595bea2..bcac2a191d 100644
--- a/tools/libs/evtchn/private.h
+++ b/tools/libs/evtchn/private.h
@@ -14,7 +14,7 @@ struct xenevtchn_handle {
     Xentoolcore__Active_Handle tc_ah;
 };
 
-int osdep_evtchn_open(xenevtchn_handle *xce);
+int osdep_evtchn_open(xenevtchn_handle *xce, unsigned int open_flags);
 int osdep_evtchn_close(xenevtchn_handle *xce);
 int osdep_evtchn_restrict(xenevtchn_handle *xce, domid_t domid);
 
diff --git a/tools/libs/evtchn/solaris.c b/tools/libs/evtchn/solaris.c
index dd41f62a24..7e22f7906a 100644
--- a/tools/libs/evtchn/solaris.c
+++ b/tools/libs/evtchn/solaris.c
@@ -29,7 +29,7 @@
 
 #include "private.h"
 
-int osdep_evtchn_open(xenevtchn_handle *xce)
+int osdep_evtchn_open(xenevtchn_handle *xce, unsigned int open_flags)
 {
     int fd;
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:36:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:36:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54529.94840 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJA-0006At-Il; Tue, 15 Dec 2020 16:36:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54529.94840; Tue, 15 Dec 2020 16:36:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJA-0006Aa-Dy; Tue, 15 Dec 2020 16:36:16 +0000
Received: by outflank-mailman (input) for mailman id 54529;
 Tue, 15 Dec 2020 16:36:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpDJ9-000667-G7
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:36:15 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4b221cee-1501-4820-a7ad-4edc5e463149;
 Tue, 15 Dec 2020 16:36:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5FD9BB1C2;
 Tue, 15 Dec 2020 16:36:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4b221cee-1501-4820-a7ad-4edc5e463149
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608050169; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=iKglzi+e/kMABxTXOe4mkJUbHkqRBfPcECtQZuhy+sI=;
	b=rQZEyMKQtaBrARu/AF3NFyPJOgady/KeQKZZHwxeIIfTQ+l1JateQ6igqIly/Jw19A91Hy
	3Jag1SaEI0V0Zdts0K5kaj0vsTljkXwjMNFd/LPhAkQFhSWaglNClKnigir+uGdjPj2yP3
	tYU/zT4r9wl0NaO6vO1TQxAtG1mf9TU=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v10 08/25] tools/xenstore: introduce live update status block
Date: Tue, 15 Dec 2020 17:35:46 +0100
Message-Id: <20201215163603.21700-9-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215163603.21700-1-jgross@suse.com>
References: <20201215163603.21700-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Live update of Xenstore is done in multiple steps. It needs a status
block holding the current state of live update and related data. The
block is allocated as a child of the connection via which live update
was started, so that live update is aborted in case that connection
is closed.

Allocation of the block is done in lu_binary[_alloc](), freeing in
lu_abort() (and for now in lu_start(), as long as no real live update
is happening yet).

Add checks to all live-update command handlers other than lu_abort()
and lu_binary[_alloc]() that they are invoked via the same connection
that began the live update.
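The session-ownership rule described above can be sketched with plain C. This is a hedged stand-in (malloc/free instead of talloc, and `lu_end()`/`lu_check()` are illustrative names): the global lu_status points at the single active session, only the connection that began it may act on it, and ending the session resets the global, just as the talloc destructor in the patch does.

```c
#include <stdlib.h>

struct connection {
    int id; /* placeholder for the real connection state */
};

struct live_update {
    /* For verifying the correct connection is acting. */
    struct connection *conn;
};

/* At most one live-update session exists at a time. */
static struct live_update *lu_status;

static const char *lu_begin(struct connection *conn)
{
    if (lu_status)
        return "live-update session already active.";

    lu_status = calloc(1, sizeof(*lu_status));
    if (!lu_status)
        return "Allocation failure.";
    lu_status->conn = conn;
    return NULL;
}

/* In the patch this runs via talloc_free() plus the registered
 * destructor when the owning connection goes away. */
static void lu_end(void)
{
    free(lu_status);
    lu_status = NULL;
}

/* The check each command handler performs. */
static const char *lu_check(struct connection *conn)
{
    if (!lu_status || lu_status->conn != conn)
        return "Not in live-update session.";
    return NULL;
}
```

Parenting the status block on the connection means a dropped connection cannot leave a stale session behind that would block all future live-update attempts.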

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V2:
- use talloc_zero() for allocating the status area (Julien Grall)

V4:
- const (Julien Grall)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_control.c | 63 ++++++++++++++++++++++++++++++
 1 file changed, 63 insertions(+)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index e3f0d34528..7854b7f46f 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -28,6 +28,34 @@
 #include "xenstored_core.h"
 #include "xenstored_control.h"
 
+struct live_update {
+	/* For verification the correct connection is acting. */
+	struct connection *conn;
+};
+
+static struct live_update *lu_status;
+
+static int lu_destroy(void *data)
+{
+	lu_status = NULL;
+
+	return 0;
+}
+
+static const char *lu_begin(struct connection *conn)
+{
+	if (lu_status)
+		return "live-update session already active.";
+
+	lu_status = talloc_zero(conn, struct live_update);
+	if (!lu_status)
+		return "Allocation failure.";
+	lu_status->conn = conn;
+	talloc_set_destructor(lu_status, lu_destroy);
+
+	return NULL;
+}
+
 struct cmd_s {
 	char *cmd;
 	int (*func)(void *, struct connection *, char **, int);
@@ -154,6 +182,13 @@ static int do_control_print(void *ctx, struct connection *conn,
 static const char *lu_abort(const void *ctx, struct connection *conn)
 {
 	syslog(LOG_INFO, "live-update: abort\n");
+
+	if (!lu_status)
+		return "No live-update session active.";
+
+	/* Destructor will do the real abort handling. */
+	talloc_free(lu_status);
+
 	return NULL;
 }
 
@@ -161,6 +196,10 @@ static const char *lu_cmdline(const void *ctx, struct connection *conn,
 			      const char *cmdline)
 {
 	syslog(LOG_INFO, "live-update: cmdline %s\n", cmdline);
+
+	if (!lu_status || lu_status->conn != conn)
+		return "Not in live-update session.";
+
 	return NULL;
 }
 
@@ -168,13 +207,23 @@ static const char *lu_cmdline(const void *ctx, struct connection *conn,
 static const char *lu_binary_alloc(const void *ctx, struct connection *conn,
 				   unsigned long size)
 {
+	const char *ret;
+
 	syslog(LOG_INFO, "live-update: binary size %lu\n", size);
+
+	ret = lu_begin(conn);
+	if (ret)
+		return ret;
+
 	return NULL;
 }
 
 static const char *lu_binary_save(const void *ctx, struct connection *conn,
 				  unsigned int size, const char *data)
 {
+	if (!lu_status || lu_status->conn != conn)
+		return "Not in live-update session.";
+
 	return NULL;
 }
 
@@ -193,7 +242,14 @@ static const char *lu_arch(const void *ctx, struct connection *conn,
 static const char *lu_binary(const void *ctx, struct connection *conn,
 			     const char *filename)
 {
+	const char *ret;
+
 	syslog(LOG_INFO, "live-update: binary %s\n", filename);
+
+	ret = lu_begin(conn);
+	if (ret)
+		return ret;
+
 	return NULL;
 }
 
@@ -212,6 +268,13 @@ static const char *lu_start(const void *ctx, struct connection *conn,
 			    bool force, unsigned int to)
 {
 	syslog(LOG_INFO, "live-update: start, force=%d, to=%u\n", force, to);
+
+	if (!lu_status || lu_status->conn != conn)
+		return "Not in live-update session.";
+
+	/* Will be replaced by real live-update later. */
+	talloc_free(lu_status);
+
 	return NULL;
 }
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:36:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:36:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54530.94852 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJB-0006DR-Rq; Tue, 15 Dec 2020 16:36:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54530.94852; Tue, 15 Dec 2020 16:36:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJB-0006DI-NE; Tue, 15 Dec 2020 16:36:17 +0000
Received: by outflank-mailman (input) for mailman id 54530;
 Tue, 15 Dec 2020 16:36:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpDJA-00066M-HY
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:36:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2487bee8-80c9-4fc5-a19c-ec077f182862;
 Tue, 15 Dec 2020 16:36:08 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0F68EAD60;
 Tue, 15 Dec 2020 16:36:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2487bee8-80c9-4fc5-a19c-ec077f182862
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608050168; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=3PuHi28Ilah+6+tULvxDPYy5ikYx4BHQrVKKI8OOPfk=;
	b=HVWTV8zu8kcDgmNvdRRp4axdAd+XJXPUrNLtCjKyGguVtlf0u8JKk2RDWduVaLetAjKvjn
	brq5T7y+XZoZLXQC84UlFpdYkeTRz44nFn52ABubZOH7dAHodvHx6h2OABNnz/OQesw4cV
	C2ZHbbC2ALfzS9mghwruUvKL7VNd+yw=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v10 02/25] tools/xenstore: make set_tdb_key() non-static
Date: Tue, 15 Dec 2020 17:35:40 +0100
Message-Id: <20201215163603.21700-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215163603.21700-1-jgross@suse.com>
References: <20201215163603.21700-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

set_tdb_key() can be used by destroy_node(), too. So remove the static
attribute and move it to xenstored_core.c.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Paul Durrant <paul@xen.org>
---
V5:
- new patch

V6:
- add comment (Julien Grall)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c        | 14 +++++++++++---
 tools/xenstore/xenstored_core.h        |  2 ++
 tools/xenstore/xenstored_transaction.c |  6 ------
 3 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 3082a36d3a..ab1c7835b8 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -352,6 +352,16 @@ static void initialize_fds(int *p_sock_pollfd_idx, int *ptimeout)
 	}
 }
 
+void set_tdb_key(const char *name, TDB_DATA *key)
+{
+	/*
+	 * Dropping const is fine here, as the key will never be modified
+	 * by TDB.
+	 */
+	key->dptr = (char *)name;
+	key->dsize = strlen(name);
+}
+
 /*
  * If it fails, returns NULL and sets errno.
  * Temporary memory allocations will be done with ctx.
@@ -985,9 +995,7 @@ static int destroy_node(void *_node)
 	if (streq(node->name, "/"))
 		corrupt(NULL, "Destroying root node!");
 
-	key.dptr = (void *)node->name;
-	key.dsize = strlen(node->name);
-
+	set_tdb_key(node->name, &key);
 	tdb_delete(tdb_ctx, key);
 
 	domain_entry_dec(talloc_parent(node), node);
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 4c6c3d6f20..fb59d862a2 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -220,6 +220,8 @@ extern xengnttab_handle **xgt_handle;
 
 int remember_string(struct hashtable *hash, const char *str);
 
+void set_tdb_key(const char *name, TDB_DATA *key);
+
 #endif /* _XENSTORED_CORE_H */
 
 /*
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index 2881f3b2e4..52355f4ed8 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -168,12 +168,6 @@ struct transaction
 extern int quota_max_transaction;
 uint64_t generation;
 
-static void set_tdb_key(const char *name, TDB_DATA *key)
-{
-	key->dptr = (char *)name;
-	key->dsize = strlen(name);
-}
-
 static struct accessed_node *find_accessed_node(struct transaction *trans,
 						const char *name)
 {
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:36:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:36:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54532.94868 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJF-0006Ja-Hx; Tue, 15 Dec 2020 16:36:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54532.94868; Tue, 15 Dec 2020 16:36:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJF-0006JO-Cp; Tue, 15 Dec 2020 16:36:21 +0000
Received: by outflank-mailman (input) for mailman id 54532;
 Tue, 15 Dec 2020 16:36:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpDJE-000667-Ga
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:36:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c49e25d7-c81e-4f40-8a58-c74da2309399;
 Tue, 15 Dec 2020 16:36:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BFFFEB263;
 Tue, 15 Dec 2020 16:36:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c49e25d7-c81e-4f40-8a58-c74da2309399
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608050169; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=oPsWgsu0Wo7yJeT1ib74W9U6E/lawBZhnW/z8jovZm8=;
	b=s039V4RVTZgXJYfw041GnbYOSMsbkTwO/6sN/ldkOjZSIY/BGrxE/mOiJE5Xyk44ldmsv/
	CXixgGXkUbh41g+VxYF5bBnF0KQWpwBypFg+gkdNNw4wlAyTLvsIlH8k/sO3HPkLTy/ovN
	c6U0CWh/Mt2PX8EagEjIKPUS8/YwRcg=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Julien Grall <jgrall@amazon.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v10 10/25] tools/xenstore: add command line handling for live update
Date: Tue, 15 Dec 2020 17:35:48 +0100
Message-Id: <20201215163603.21700-11-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215163603.21700-1-jgross@suse.com>
References: <20201215163603.21700-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Updating an instance of Xenstore via live update requires handing over
the command line parameters to the updated instance. Those can either
be the parameters used by the current instance, or new ones supplied
when starting the live update.

So, when supplied, store the new command line parameters in lu_status.

As it is related, add a new option -U (or --live-update) to the
command line of xenstored, which will be added when starting the new
instance. This makes it possible to perform slightly different
initializations when started as a result of live update.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Paul Durrant <paul@xen.org>
---
 tools/xenstore/xenstored_control.c | 6 ++++++
 tools/xenstore/xenstored_core.c    | 7 ++++++-
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 95ac1a1648..2e0827b9ef 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -42,6 +42,8 @@ struct live_update {
 #else
 	char *filename;
 #endif
+
+	char *cmdline;
 };
 
 static struct live_update *lu_status;
@@ -211,6 +213,10 @@ static const char *lu_cmdline(const void *ctx, struct connection *conn,
 	if (!lu_status || lu_status->conn != conn)
 		return "Not in live-update session.";
 
+	lu_status->cmdline = talloc_strdup(lu_status, cmdline);
+	if (!lu_status->cmdline)
+		return "Allocation failure.";
+
 	return NULL;
 }
 
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index e1b92c3dc8..0dddf24327 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -1969,6 +1969,7 @@ static struct option options[] = {
 	{ "internal-db", 0, NULL, 'I' },
 	{ "verbose", 0, NULL, 'V' },
 	{ "watch-nb", 1, NULL, 'W' },
+	{ "live-update", 0, NULL, 'U' },
 	{ NULL, 0, NULL, 0 } };
 
 extern void dump_conn(struct connection *conn); 
@@ -1983,11 +1984,12 @@ int main(int argc, char *argv[])
 	bool dofork = true;
 	bool outputpid = false;
 	bool no_domain_init = false;
+	bool live_update = false;
 	const char *pidfile = NULL;
 	int timeout;
 
 
-	while ((opt = getopt_long(argc, argv, "DE:F:HNPS:t:A:M:T:RVW:", options,
+	while ((opt = getopt_long(argc, argv, "DE:F:HNPS:t:A:M:T:RVW:U", options,
 				  NULL)) != -1) {
 		switch (opt) {
 		case 'D':
@@ -2045,6 +2047,9 @@ int main(int argc, char *argv[])
 		case 'p':
 			priv_domid = strtol(optarg, NULL, 10);
 			break;
+		case 'U':
+			live_update = true;
+			break;
 		}
 	}
 	if (optind != argc)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:36:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:36:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54533.94880 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJG-0006Mc-VP; Tue, 15 Dec 2020 16:36:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54533.94880; Tue, 15 Dec 2020 16:36:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJG-0006MQ-QW; Tue, 15 Dec 2020 16:36:22 +0000
Received: by outflank-mailman (input) for mailman id 54533;
 Tue, 15 Dec 2020 16:36:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpDJF-00066M-HU
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:36:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0cd1ba9f-404d-4d83-925f-64f686129a6d;
 Tue, 15 Dec 2020 16:36:08 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AF608AC7F;
 Tue, 15 Dec 2020 16:36:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0cd1ba9f-404d-4d83-925f-64f686129a6d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608050167; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=kYVC+qKjmu1QH58TZR0/l7X0f2towWcVsvAHjXP8k6o=;
	b=oEofi3aD1RzuCCCnkM7DE40w5NnFiOaY5aVlC6P8XioMkDfign/z1L9h8x+v9zrqK9eEAD
	GdykhfR1mLbO9bxrN0dqZnTC0jB29oeos9EfiX1WBMRxK173cuuSe4AXHnlB+NtB3WyfdS
	d0xsr49SxsRL/N+qhy0slWpg79WA9yU=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v10 00/25] tools/xenstore: support live update for xenstored
Date: Tue, 15 Dec 2020 17:35:38 +0100
Message-Id: <20201215163603.21700-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Today Xenstore is not restartable. This means a Xen server needing an
update of xenstored has to be rebooted for the update to take effect.

This patch series changes that: the internal state of xenstored
(the contents of Xenstore, all connections to various clients like
programs or other domains, and watches) is saved in a defined format,
and a new binary is activated which consumes the old state. All
connections are restored and the new Xenstore binary continues where
the old one stopped.

This patch series has been under (secret) development for quite some
time. It hasn't been posted to xen-devel until now due to the various
Xenstore-related security issues which have become public only today.

A similar series for oxenstored will be posted.

Xenstore-stubdom is not yet supported, but I'm planning to start
working on that soon.

Changes in V10 (for the members of the security team):
- dropped patch 6 as requested by Andrew

Juergen Gross (24):
  tools/xenstore: switch barf[_perror]() to use syslog()
  tools/xenstore: make set_tdb_key() non-static
  tools/xenstore: remove unused cruft from xenstored_domain.c
  tools/libxenevtchn: add possibility to not close file descriptor on
    exec
  tools/xenstore: refactor XS_CONTROL handling
  tools/xenstore: add live update command to xenstore-control
  tools/xenstore: add basic live-update command parsing
  tools/xenstore: introduce live update status block
  tools/xenstore: save new binary for live update
  tools/xenstore: add command line handling for live update
  tools/xenstore: add the basic framework for doing the live update
  tools/xenstore: allow live update only with no transaction active
  docs: update the xenstore migration stream documentation
  tools/xenstore: add include file for state structure definitions
  tools/xenstore: dump the xenstore state for live update
  tools/xenstore: handle CLOEXEC flag for local files and pipes
  tools/xenstore: split off domain introduction from do_introduce()
  tools/xenstore: evaluate the live update flag when starting
  tools/xenstore: read internal state when doing live upgrade
  tools/xenstore: add reading global state for live update
  tools/xenstore: add read connection state for live update
  tools/xenstore: add read node state for live update
  tools/xenstore: add read watch state for live update
  tools/xenstore: activate new binary for live update

Julien Grall (1):
  tools/xenstore: handle dying domains in live update

 docs/designs/xenstore-migration.md      |  19 +-
 docs/misc/xenstore.txt                  |  21 +
 tools/include/xenevtchn.h               |  16 +-
 tools/libs/evtchn/Makefile              |   2 +-
 tools/libs/evtchn/core.c                |  45 +-
 tools/libs/evtchn/freebsd.c             |   9 +-
 tools/libs/evtchn/libxenevtchn.map      |   4 +
 tools/libs/evtchn/linux.c               |   9 +-
 tools/libs/evtchn/minios.c              |   6 +-
 tools/libs/evtchn/netbsd.c              |   2 +-
 tools/libs/evtchn/private.h             |   2 +-
 tools/libs/evtchn/solaris.c             |   2 +-
 tools/xenstore/Makefile                 |   3 +-
 tools/xenstore/include/xenstore_state.h | 131 +++++
 tools/xenstore/utils.c                  |  20 +
 tools/xenstore/utils.h                  |   6 +
 tools/xenstore/xenstore_control.c       | 332 ++++++++++++-
 tools/xenstore/xenstored_control.c      | 612 +++++++++++++++++++++++-
 tools/xenstore/xenstored_control.h      |   1 +
 tools/xenstore/xenstored_core.c         | 510 ++++++++++++++++++--
 tools/xenstore/xenstored_core.h         |  40 ++
 tools/xenstore/xenstored_domain.c       | 312 +++++++++---
 tools/xenstore/xenstored_domain.h       |  14 +-
 tools/xenstore/xenstored_posix.c        |  13 +-
 tools/xenstore/xenstored_transaction.c  |  11 +-
 tools/xenstore/xenstored_watch.c        | 171 +++++--
 tools/xenstore/xenstored_watch.h        |   5 +
 27 files changed, 2103 insertions(+), 215 deletions(-)
 create mode 100644 tools/xenstore/include/xenstore_state.h

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:36:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:36:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54535.94892 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJL-0006V0-D8; Tue, 15 Dec 2020 16:36:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54535.94892; Tue, 15 Dec 2020 16:36:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJL-0006Ug-7Z; Tue, 15 Dec 2020 16:36:27 +0000
Received: by outflank-mailman (input) for mailman id 54535;
 Tue, 15 Dec 2020 16:36:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpDJJ-000667-GR
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:36:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 38bf8d0a-1866-4834-915e-24fcd94ee086;
 Tue, 15 Dec 2020 16:36:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8CB6CB1C4;
 Tue, 15 Dec 2020 16:36:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 38bf8d0a-1866-4834-915e-24fcd94ee086
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608050169; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=aNFJAE4KqQnX5IjGOaMkCIzNeyfe1g2GU9xo9W5ZYdw=;
	b=HxoH/VKPziYS/CvlgodAP7TL52lOF7xAQhediq8DcuBxx6+7J/OmpXAidgqcRGgQr3PhbX
	IliUzW+PJ1/zyFQV+CPzjxK4aQ8yZGAMMN/1ZCLJqo4pieME0pC+jW7jyQH7Ih8vbDBoDF
	+JzLHnZnlnWy0M1SlVkYdW2Rx45V1d4=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v10 09/25] tools/xenstore: save new binary for live update
Date: Tue, 15 Dec 2020 17:35:47 +0100
Message-Id: <20201215163603.21700-10-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215163603.21700-1-jgross@suse.com>
References: <20201215163603.21700-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Save the new binary name for the daemon case, and the new kernel for
stubdom, in order to support live update of Xenstore.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_control.c | 41 +++++++++++++++++++++++++++++-
 1 file changed, 40 insertions(+), 1 deletion(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 7854b7f46f..95ac1a1648 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -1,5 +1,5 @@
 /*
-    Interactive commands for Xen Store Daemon.
+Interactive commands for Xen Store Daemon.
     Copyright (C) 2017 Juergen Gross, SUSE Linux GmbH
 
     This program is free software; you can redistribute it and/or modify
@@ -22,6 +22,9 @@
 #include <stdlib.h>
 #include <string.h>
 #include <syslog.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <unistd.h>
 
 #include "utils.h"
 #include "talloc.h"
@@ -31,6 +34,14 @@
 struct live_update {
 	/* For verification the correct connection is acting. */
 	struct connection *conn;
+
+#ifdef __MINIOS__
+	void *kernel;
+	unsigned int kernel_size;
+	unsigned int kernel_off;
+#else
+	char *filename;
+#endif
 };
 
 static struct live_update *lu_status;
@@ -215,6 +226,13 @@ static const char *lu_binary_alloc(const void *ctx, struct connection *conn,
 	if (ret)
 		return ret;
 
+	lu_status->kernel = talloc_size(lu_status, size);
+	if (!lu_status->kernel)
+		return "Allocation failure.";
+
+	lu_status->kernel_size = size;
+	lu_status->kernel_off = 0;
+
 	return NULL;
 }
 
@@ -224,6 +242,12 @@ static const char *lu_binary_save(const void *ctx, struct connection *conn,
 	if (!lu_status || lu_status->conn != conn)
 		return "Not in live-update session.";
 
+	if (lu_status->kernel_off + size > lu_status->kernel_size)
+		return "Too much kernel data.";
+
+	memcpy(lu_status->kernel + lu_status->kernel_off, data, size);
+	lu_status->kernel_off += size;
+
 	return NULL;
 }
 
@@ -243,13 +267,23 @@ static const char *lu_binary(const void *ctx, struct connection *conn,
 			     const char *filename)
 {
 	const char *ret;
+	struct stat statbuf;
 
 	syslog(LOG_INFO, "live-update: binary %s\n", filename);
 
+	if (stat(filename, &statbuf))
+		return "File not accessible.";
+	if (!(statbuf.st_mode & (S_IXOTH | S_IXGRP | S_IXUSR)))
+		return "File not executable.";
+
 	ret = lu_begin(conn);
 	if (ret)
 		return ret;
 
+	lu_status->filename = talloc_strdup(lu_status, filename);
+	if (!lu_status->filename)
+		return "Allocation failure.";
+
 	return NULL;
 }
 
@@ -272,6 +306,11 @@ static const char *lu_start(const void *ctx, struct connection *conn,
 	if (!lu_status || lu_status->conn != conn)
 		return "Not in live-update session.";
 
+#ifdef __MINIOS__
+	if (lu_status->kernel_size != lu_status->kernel_off)
+		return "Kernel not complete.";
+#endif
+
 	/* Will be replaced by real live-update later. */
 	talloc_free(lu_status);
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:36:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:36:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54536.94900 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJM-0006WQ-5q; Tue, 15 Dec 2020 16:36:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54536.94900; Tue, 15 Dec 2020 16:36:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJL-0006Vp-Nb; Tue, 15 Dec 2020 16:36:27 +0000
Received: by outflank-mailman (input) for mailman id 54536;
 Tue, 15 Dec 2020 16:36:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpDJK-00066M-Hc
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:36:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2dfe56c1-dc56-4768-a56d-03b25f5899e1;
 Tue, 15 Dec 2020 16:36:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3D0F7ADA2;
 Tue, 15 Dec 2020 16:36:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2dfe56c1-dc56-4768-a56d-03b25f5899e1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608050168; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Fvbcf3En7lYu//cGgGBUZK52QcUFkax9zugp5BLC0iQ=;
	b=lD+KAQ4aqGuvG9gAovIJkzk0naw5LkJX24Ju800aPr2HPBSwYrcWlZH3GFCGvQB4RCc+pu
	rYbgGp8SfkUyvIi6eyuVNWKGeXvplGhfPw1vojDRFw4wForo7DaJM2QUy0/+uOgdgPRdlS
	tz0Se5v7kqwJjPTpIVqZQS9m0p7gJgM=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v10 03/25] tools/xenstore: remove unused cruft from xenstored_domain.c
Date: Tue, 15 Dec 2020 17:35:41 +0100
Message-Id: <20201215163603.21700-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215163603.21700-1-jgross@suse.com>
References: <20201215163603.21700-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

domain->remote_port and restore_existing_connections() are useless and
can be removed.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V7:
- new patch
---
 tools/xenstore/xenstored_core.c   |  3 ---
 tools/xenstore/xenstored_domain.c | 11 -----------
 tools/xenstore/xenstored_domain.h |  3 ---
 3 files changed, 17 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index ab1c7835b8..50986f8b29 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2087,9 +2087,6 @@ int main(int argc, char *argv[])
 	if (!no_domain_init)
 		domain_init();
 
-	/* Restore existing connections. */
-	restore_existing_connections();
-
 	if (outputpid) {
 		printf("%ld\n", (long)getpid());
 		fflush(stdout);
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 7d348d57f3..ed8e83b06b 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -54,10 +54,6 @@ struct domain
 	/* Event channel port */
 	evtchn_port_t port;
 
-	/* The remote end of the event channel, used only to validate
-	   repeated domain introductions. */
-	evtchn_port_t remote_port;
-
 	/* Domain path in store. */
 	char *path;
 
@@ -382,7 +378,6 @@ static int new_domain(struct domain *domain, int port)
 	domain->conn->domain = domain;
 	domain->conn->id = domain->domid;
 
-	domain->remote_port = port;
 	domain->nbentry = 0;
 	domain->nbwatch = 0;
 
@@ -470,7 +465,6 @@ int do_introduce(struct connection *conn, struct buffered_data *in)
 			xenevtchn_unbind(xce_handle, domain->port);
 		rc = xenevtchn_bind_interdomain(xce_handle, domid, port);
 		domain->port = (rc == -1) ? 0 : rc;
-		domain->remote_port = port;
 	}
 
 	domain_conn_reset(domain);
@@ -636,11 +630,6 @@ const char *get_implicit_path(const struct connection *conn)
 	return conn->domain->path;
 }
 
-/* Restore existing connections. */
-void restore_existing_connections(void)
-{
-}
-
 static int set_dom_perms_default(struct node_perms *perms)
 {
 	perms->num = 1;
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 5e00087206..66e0a12654 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -47,9 +47,6 @@ void domain_init(void);
 /* Returns the implicit path of a connection (only domains have this) */
 const char *get_implicit_path(const struct connection *conn);
 
-/* Read existing connection information from store. */
-void restore_existing_connections(void);
-
 /* Can connection attached to domain read/write. */
 bool domain_can_read(struct connection *conn);
 bool domain_can_write(struct connection *conn);
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:36:32 2020
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Julien Grall <jgrall@amazon.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v10 11/25] tools/xenstore: add the basic framework for doing the live update
Date: Tue, 15 Dec 2020 17:35:49 +0100
Message-Id: <20201215163603.21700-12-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215163603.21700-1-jgross@suse.com>
References: <20201215163603.21700-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the main framework for executing the live update. For now this
only defines the basic execution steps with empty dummy functions.
The final step returning indicates failure, as on success the new
executable will have taken over.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Paul Durrant <paul@xen.org>
---
V4:
- const (Julien Grall)
---
 tools/xenstore/xenstored_control.c | 39 ++++++++++++++++++++++++++++--
 1 file changed, 37 insertions(+), 2 deletions(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 2e0827b9ef..940b717741 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -304,9 +304,27 @@ static const char *lu_arch(const void *ctx, struct connection *conn,
 }
 #endif
 
+static const char *lu_check_lu_allowed(const void *ctx, bool force,
+				       unsigned int to)
+{
+	return NULL;
+}
+
+static const char *lu_dump_state(const void *ctx, struct connection *conn)
+{
+	return NULL;
+}
+
+static const char *lu_activate_binary(const void *ctx)
+{
+	return "Not yet implemented.";
+}
+
 static const char *lu_start(const void *ctx, struct connection *conn,
 			    bool force, unsigned int to)
 {
+	const char *ret;
+
 	syslog(LOG_INFO, "live-update: start, force=%d, to=%u\n", force, to);
 
 	if (!lu_status || lu_status->conn != conn)
@@ -317,10 +335,27 @@ static const char *lu_start(const void *ctx, struct connection *conn,
 		return "Kernel not complete.";
 #endif
 
-	/* Will be replaced by real live-update later. */
+	/* Check for state to allow live update. */
+	ret = lu_check_lu_allowed(ctx, force, to);
+	if (ret) {
+		if (!strcmp(ret, "BUSY"))
+			return ret;
+		goto out;
+	}
+
+	/* Dump out internal state, including "OK" for live update. */
+	ret = lu_dump_state(ctx, conn);
+	if (ret)
+		goto out;
+
+	/* Perform the activation of new binary. */
+	ret = lu_activate_binary(ctx);
+	/* We will reach this point only in case of failure. */
+
+ out:
 	talloc_free(lu_status);
 
-	return NULL;
+	return ret;
 }
 
 static int do_control_lu(void *ctx, struct connection *conn,
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:36:33 2020
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Julien Grall <jgrall@amazon.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v10 05/25] tools/xenstore: refactor XS_CONTROL handling
Date: Tue, 15 Dec 2020 17:35:43 +0100
Message-Id: <20201215163603.21700-6-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215163603.21700-1-jgross@suse.com>
References: <20201215163603.21700-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to allow control commands with binary data, refactor the
handling of XS_CONTROL:

- get primary command first
- add maximum number of additional parameters to pass to command
  handler

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Paul Durrant <paul@xen.org>
---
V2:
- add comment regarding max_pars (Pawel Wieczorkiewicz)

V3:
- addressed Paul's comments

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_control.c | 34 ++++++++++++++++++++----------
 tools/xenstore/xenstored_core.c    |  3 +--
 tools/xenstore/xenstored_core.h    |  1 +
 3 files changed, 25 insertions(+), 13 deletions(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 8d48ab4820..8d29db8270 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -30,6 +30,14 @@ struct cmd_s {
 	char *cmd;
 	int (*func)(void *, struct connection *, char **, int);
 	char *pars;
+	/*
+	 * max_pars can be used to limit the size of the parameter vector,
+	 * e.g. in case of large binary parts in the parameters.
+	 * The command is included in the count, so 1 means just the command
+	 * without any parameter.
+	 * 0 == no limit (the default)
+	 */
+	unsigned int max_pars;
 };
 
 static int do_control_check(void *ctx, struct connection *conn,
@@ -194,25 +202,29 @@ static int do_control_help(void *ctx, struct connection *conn,
 
 int do_control(struct connection *conn, struct buffered_data *in)
 {
-	int num;
-	int cmd;
-	char **vec;
+	unsigned int cmd, num, off;
+	char **vec = NULL;
 
 	if (conn->id != 0)
 		return EACCES;
 
-	num = xs_count_strings(in->buffer, in->used);
-	if (num < 1)
+	off = get_string(in, 0);
+	if (!off)
+		return EINVAL;
+	for (cmd = 0; cmd < ARRAY_SIZE(cmds); cmd++)
+		if (streq(in->buffer, cmds[cmd].cmd))
+			break;
+	if (cmd == ARRAY_SIZE(cmds))
 		return EINVAL;
+
+	num = xs_count_strings(in->buffer, in->used);
+	if (cmds[cmd].max_pars)
+		num = min(num, cmds[cmd].max_pars);
 	vec = talloc_array(in, char *, num);
 	if (!vec)
 		return ENOMEM;
-	if (get_strings(in, vec, num) != num)
+	if (get_strings(in, vec, num) < num)
 		return EIO;
 
-	for (cmd = 0; cmd < ARRAY_SIZE(cmds); cmd++)
-		if (streq(vec[0], cmds[cmd].cmd))
-			return cmds[cmd].func(in, conn, vec + 1, num - 1);
-
-	return EINVAL;
+	return cmds[cmd].func(in, conn, vec + 1, num - 1);
 }
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 50986f8b29..e1b92c3dc8 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -620,8 +620,7 @@ static struct buffered_data *new_buffer(void *ctx)
 /* Return length of string (including nul) at this offset.
  * If there is no nul, returns 0 for failure.
  */
-static unsigned int get_string(const struct buffered_data *data,
-			       unsigned int offset)
+unsigned int get_string(const struct buffered_data *data, unsigned int offset)
 {
 	const char *nul;
 
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index fb59d862a2..27826c125c 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -142,6 +142,7 @@ const char *onearg(struct buffered_data *in);
 /* Break input into vectors, return the number, fill in up to num of them. */
 unsigned int get_strings(struct buffered_data *data,
 			 char *vec[], unsigned int num);
+unsigned int get_string(const struct buffered_data *data, unsigned int offset);
 
 void send_reply(struct connection *conn, enum xsd_sockmsg_type type,
 		const void *data, unsigned int len);
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:36:37 2020
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v10 14/25] tools/xenstore: add include file for state structure definitions
Date: Tue, 15 Dec 2020 17:35:52 +0100
Message-Id: <20201215163603.21700-15-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215163603.21700-1-jgross@suse.com>
References: <20201215163603.21700-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add an include file containing all structures and defines needed for
dumping and restoring the internal Xenstore state.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
V4:
- drop mfn from connection record (rebase to V5 of state doc patch)
- add #ifdef __MINIOS__ (Julien Grall)
- correct comments (Julien Grall)
- add data_in_len

V5:
- add data_resp_len

V6:
- add flag byte to node permissions (Julien Grall)
- update migration document

V7:
- add evtchn_fd

V8:
- remove ro-socket and read-only connection flag
- split off documentation part

V9:
- add htobe32() macro if needed (Wei Liu)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/include/xenstore_state.h | 131 ++++++++++++++++++++++++
 1 file changed, 131 insertions(+)
 create mode 100644 tools/xenstore/include/xenstore_state.h

diff --git a/tools/xenstore/include/xenstore_state.h b/tools/xenstore/include/xenstore_state.h
new file mode 100644
index 0000000000..d2a9307400
--- /dev/null
+++ b/tools/xenstore/include/xenstore_state.h
@@ -0,0 +1,131 @@
+/*
+ * Xenstore internal state dump definitions.
+ * Copyright (C) Juergen Gross, SUSE Software Solutions Germany GmbH
+ *
+ * Used for live-update and migration, possibly across Xenstore implementations.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef XENSTORE_STATE_H
+#define XENSTORE_STATE_H
+
+#include <endian.h>
+#include <sys/types.h>
+
+#ifndef htobe32
+#if __BYTE_ORDER == __LITTLE_ENDIAN
+#define htobe32(x) __builtin_bswap32(x)
+#else
+#define htobe32(x) (x)
+#endif
+#endif
+
+struct xs_state_preamble {
+    char ident[8];
+#define XS_STATE_IDENT    "xenstore"  /* To be used without the NUL byte. */
+    uint32_t version;                 /* Version in big endian format. */
+#define XS_STATE_VERSION  0x00000001
+    uint32_t flags;                   /* Endianness. */
+#if __BYTE_ORDER == __LITTLE_ENDIAN
+#define XS_STATE_FLAGS    0x00000000  /* Little endian. */
+#else
+#define XS_STATE_FLAGS    0x00000001  /* Big endian. */
+#endif
+};
+
+/*
+ * Each record starts with xs_state_record_header.
+ * All records have a length of a multiple of 8 bytes.
+ */
+
+/* Common record layout: */
+struct xs_state_record_header {
+    uint32_t type;
+#define XS_STATE_TYPE_END        0x00000000
+#define XS_STATE_TYPE_GLOBAL     0x00000001
+#define XS_STATE_TYPE_CONN       0x00000002
+#define XS_STATE_TYPE_WATCH      0x00000003
+#define XS_STATE_TYPE_TA         0x00000004
+#define XS_STATE_TYPE_NODE       0x00000005
+    uint32_t length;         /* Length of record in bytes. */
+};
+
+/* Global state of Xenstore: */
+struct xs_state_global {
+    int32_t socket_fd;      /* File descriptor for socket connections or -1. */
+    int32_t evtchn_fd;      /* File descriptor for event channel operations. */
+};
+
+/* Connection to Xenstore: */
+struct xs_state_connection {
+    uint32_t conn_id;       /* Used as reference in watch and TA records. */
+    uint16_t conn_type;
+#define XS_STATE_CONN_TYPE_RING   0
+#define XS_STATE_CONN_TYPE_SOCKET 1
+    uint16_t pad;
+    union {
+        struct {
+            uint16_t domid;  /* Domain-Id. */
+            uint16_t tdomid; /* Id of target domain or DOMID_INVALID. */
+            uint32_t evtchn; /* Event channel port. */
+        } ring;
+        int32_t socket_fd;   /* File descriptor for socket connections. */
+    } spec;
+    uint16_t data_in_len;    /* Number of unprocessed bytes read from conn. */
+    uint16_t data_resp_len;  /* Size of partial response pending for conn. */
+    uint32_t data_out_len;   /* Number of bytes not yet written to conn. */
+    uint8_t  data[];         /* Pending data (read, written) + 0-7 pad bytes. */
+};
+
+/* Watch: */
+struct xs_state_watch {
+    uint32_t conn_id;       /* Connection this watch is associated with. */
+    uint16_t path_length;   /* Number of bytes of path watched (incl. 0). */
+    uint16_t token_length;  /* Number of bytes of watch token (incl. 0). */
+    uint8_t data[];         /* Path bytes, token bytes, 0-7 pad bytes. */
+};
+
+/* Transaction: */
+struct xs_state_transaction {
+    uint32_t conn_id;       /* Connection this TA is associated with. */
+    uint32_t ta_id;         /* Transaction Id. */
+};
+
+/* Node (either XS_STATE_TYPE_NODE or XS_STATE_TYPE_TANODE[_MOD]): */
+struct xs_state_node_perm {
+    uint8_t access;         /* Access rights. */
+#define XS_STATE_NODE_PERM_NONE   'n'
+#define XS_STATE_NODE_PERM_READ   'r'
+#define XS_STATE_NODE_PERM_WRITE  'w'
+#define XS_STATE_NODE_PERM_BOTH   'b'
+    uint8_t flags;
+#define XS_STATE_NODE_PERM_IGNORE 0x01 /* Stale permission, ignore for check. */
+    uint16_t domid;         /* Domain-Id. */
+};
+struct xs_state_node {
+    uint32_t conn_id;       /* Connection in case of transaction or 0. */
+    uint32_t ta_id;         /* Transaction Id or 0. */
+    uint16_t path_len;      /* Length of path string including NUL byte. */
+    uint16_t data_len;      /* Length of node data. */
+    uint16_t ta_access;
+#define XS_STATE_NODE_TA_DELETED  0x0000
+#define XS_STATE_NODE_TA_READ     0x0001
+#define XS_STATE_NODE_TA_WRITTEN  0x0002
+    uint16_t perm_n;        /* Number of permissions (0 in TA: node deleted). */
+    /* Permissions (first is owner, has full access). */
+    struct xs_state_node_perm perms[];
+    /* Path and data follow, plus 0-7 pad bytes. */
+};
+#endif /* XENSTORE_STATE_H */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:36:38 2020
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v10 07/25] tools/xenstore: add basic live-update command parsing
Date: Tue, 15 Dec 2020 17:35:45 +0100
Message-Id: <20201215163603.21700-8-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215163603.21700-1-jgross@suse.com>
References: <20201215163603.21700-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the basic parts for parsing the live-update control command.

For now only add the parameter evaluation and calls to the appropriate
functions. Those functions only print a message for now and return
success.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V2:
- keep consistent style in lu_arch() (Pawel Wieczorkiewicz)
- fix handling of force flag (Pawel Wieczorkiewicz)
- use xprintf() instead of trace() for better stubdom diag
- add conn parameter to subfunctions

V4:
- make several parameters/variables const (Julien Grall)
- don't reject an option specified multiple times (Julien Grall)
- use syslog() for messages

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_control.c | 105 ++++++++++++++++++++++++++++-
 1 file changed, 104 insertions(+), 1 deletion(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 00fda5acdb..e3f0d34528 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -19,7 +19,9 @@
 #include <errno.h>
 #include <stdarg.h>
 #include <stdio.h>
+#include <stdlib.h>
 #include <string.h>
+#include <syslog.h>
 
 #include "utils.h"
 #include "talloc.h"
@@ -149,12 +151,113 @@ static int do_control_print(void *ctx, struct connection *conn,
 	return 0;
 }
 
+static const char *lu_abort(const void *ctx, struct connection *conn)
+{
+	syslog(LOG_INFO, "live-update: abort\n");
+	return NULL;
+}
+
+static const char *lu_cmdline(const void *ctx, struct connection *conn,
+			      const char *cmdline)
+{
+	syslog(LOG_INFO, "live-update: cmdline %s\n", cmdline);
+	return NULL;
+}
+
+#ifdef __MINIOS__
+static const char *lu_binary_alloc(const void *ctx, struct connection *conn,
+				   unsigned long size)
+{
+	syslog(LOG_INFO, "live-update: binary size %lu\n", size);
+	return NULL;
+}
+
+static const char *lu_binary_save(const void *ctx, struct connection *conn,
+				  unsigned int size, const char *data)
+{
+	return NULL;
+}
+
+static const char *lu_arch(const void *ctx, struct connection *conn,
+			   char **vec, int num)
+{
+	if (num == 2 && !strcmp(vec[0], "-b"))
+		return lu_binary_alloc(ctx, conn, atol(vec[1]));
+	if (num > 2 && !strcmp(vec[0], "-d"))
+		return lu_binary_save(ctx, conn, atoi(vec[1]), vec[2]);
+
+	errno = EINVAL;
+	return NULL;
+}
+#else
+static const char *lu_binary(const void *ctx, struct connection *conn,
+			     const char *filename)
+{
+	syslog(LOG_INFO, "live-update: binary %s\n", filename);
+	return NULL;
+}
+
+static const char *lu_arch(const void *ctx, struct connection *conn,
+			   char **vec, int num)
+{
+	if (num == 2 && !strcmp(vec[0], "-f"))
+		return lu_binary(ctx, conn, vec[1]);
+
+	errno = EINVAL;
+	return NULL;
+}
+#endif
+
+static const char *lu_start(const void *ctx, struct connection *conn,
+			    bool force, unsigned int to)
+{
+	syslog(LOG_INFO, "live-update: start, force=%d, to=%u\n", force, to);
+	return NULL;
+}
+
 static int do_control_lu(void *ctx, struct connection *conn,
 			 char **vec, int num)
 {
 	const char *resp;
+	const char *ret = NULL;
+	unsigned int i;
+	bool force = false;
+	unsigned int to = 0;
+
+	if (num < 1)
+		return EINVAL;
+
+	if (!strcmp(vec[0], "-a")) {
+		if (num == 1)
+			ret = lu_abort(ctx, conn);
+		else
+			return EINVAL;
+	} else if (!strcmp(vec[0], "-c")) {
+		if (num == 2)
+			ret = lu_cmdline(ctx, conn, vec[1]);
+		else
+			return EINVAL;
+	} else if (!strcmp(vec[0], "-s")) {
+		for (i = 1; i < num; i++) {
+			if (!strcmp(vec[i], "-F"))
+				force = true;
+			else if (!strcmp(vec[i], "-t") && i < num - 1) {
+				i++;
+				to = atoi(vec[i]);
+			} else
+				return EINVAL;
+		}
+		ret = lu_start(ctx, conn, force, to);
+	} else {
+		errno = 0;
+		ret = lu_arch(ctx, conn, vec, num);
+		if (errno)
+			return errno;
+	}
 
-	resp = talloc_strdup(ctx, "NYI");
+	if (!ret)
+		ret = "OK";
+	resp = talloc_strdup(ctx, ret);
 	send_reply(conn, XS_CONTROL, resp, strlen(resp) + 1);
 	return 0;
 }
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:36:42 2020
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v10 13/25] docs: update the xenstore migration stream documentation
Date: Tue, 15 Dec 2020 17:35:51 +0100
Message-Id: <20201215163603.21700-14-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215163603.21700-1-jgross@suse.com>
References: <20201215163603.21700-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

For live update of Xenstore some records defined in the migration
stream document need to be changed:

- Support of the read-only socket has been dropped from all Xenstore
  implementations, so ro-socket-fd in the global record can be removed.

- Some guests require the event channel to Xenstore to remain the same
  on the Xenstore side, so Xenstore has to keep the event channel
  interface open across a live update. For this purpose an evtchn-fd
  needs to be added to the global record.

- With no read-only support the flags field in the connection record
  can be dropped.

- The evtchn field in the connection record needs to be switched to
  hold the port of the Xenstore side of the event channel.

- A flags field needs to be added to permission specifiers in order to
  be able to mark a permission as stale (XSA-322).

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V8:
- split off from following patch (Julien Grall)
---
 docs/designs/xenstore-migration.md | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/docs/designs/xenstore-migration.md b/docs/designs/xenstore-migration.md
index 2ce2c836f5..1a5b94b31d 100644
--- a/docs/designs/xenstore-migration.md
+++ b/docs/designs/xenstore-migration.md
@@ -116,7 +116,7 @@ xenstored state that needs to be restored.
 +-------+-------+-------+-------+
 | rw-socket-fd                  |
 +-------------------------------+
-| ro-socket-fd                  |
+| evtchn-fd                     |
 +-------------------------------+
 ```
 
@@ -126,8 +126,8 @@ xenstored state that needs to be restored.
 | `rw-socket-fd` | The file descriptor of the socket accepting  |
 |                | read-write connections                       |
 |                |                                              |
-| `ro-socket-fd` | The file descriptor of the socket accepting  |
-|                | read-only connections                        |
+| `evtchn-fd`    | The file descriptor used to communicate with |
+|                | the event channel driver                     |
 
 xenstored will resume in the original process context. Hence `rw-socket-fd` and
 `ro-socket-fd` simply specify the file descriptors of the sockets. Sockets
@@ -147,7 +147,7 @@ the domain being migrated.
 ```
     0       1       2       3       4       5       6       7    octet
 +-------+-------+-------+-------+-------+-------+-------+-------+
-| conn-id                       | conn-type     | flags         |
+| conn-id                       | conn-type     |               |
 +-------------------------------+---------------+---------------+
 | conn-spec
 ...
@@ -169,9 +169,6 @@ the domain being migrated.
 |                | 0x0001: socket                               |
 |                | 0x0002 - 0xFFFF: reserved for future use     |
 |                |                                              |
-| `flags`        | A bit-wise OR of:                            |
-|                | 0001: read-only                              |
-|                |                                              |
 | `conn-spec`    | See below                                    |
 |                |                                              |
 | `in-data-len`  | The length (in octets) of any data read      |
@@ -216,7 +213,7 @@ For `shared ring` connections it is as follows:
 |           | operation [2] or DOMID_INVALID [3] otherwise      |
 |           |                                                   |
 | `evtchn`  | The port number of the interdomain channel used   |
-|           | by `domid` to communicate with xenstored          |
+|           | by xenstored to communicate with `domid`          |
 |           |                                                   |
 
 Since the ABI guarantees that entry 1 in `domid`'s grant table will always
@@ -386,7 +383,7 @@ A node permission specifier has the following format:
 ```
     0       1       2       3    octet
 +-------+-------+-------+-------+
-| perm  | pad   | domid         |
+| perm  | flags | domid         |
 +-------+-------+---------------+
 ```
 
@@ -395,6 +392,10 @@ A node permission specifier has the following format:
 | `perm`  | One of the ASCII values `w`, `r`, `b` or `n` as     |
 |         | specified for the `SET_PERMS` operation [2]         |
 |         |                                                     |
+| `flags` | A bit-wise OR of:                                   |
+|         | 0x01: stale permission, ignore when checking        |
+|         |       permissions                                   |
+|         |                                                     |
 | `domid` | The domain-id to which the permission relates       |
 
 Note that perm1 defines the domain owning the code. See [4] for more
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:36:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:36:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54557.94978 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJb-00074I-DL; Tue, 15 Dec 2020 16:36:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54557.94978; Tue, 15 Dec 2020 16:36:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJb-00073m-11; Tue, 15 Dec 2020 16:36:43 +0000
Received: by outflank-mailman (input) for mailman id 54557;
 Tue, 15 Dec 2020 16:36:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpDJZ-00066M-Hy
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:36:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id edb30b2c-898a-4786-a166-5a33fabf2be1;
 Tue, 15 Dec 2020 16:36:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F269CAF73;
 Tue, 15 Dec 2020 16:36:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: edb30b2c-898a-4786-a166-5a33fabf2be1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608050169; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Pqaa2EAISfoqMQYdGvAptWd4x6dr6DKrqL0LaEJ013g=;
	b=DvLlmnlO946ltBwXIbxkN5+QAUTA1nV9fl8ji2mxnxtW3FUSU67Y/rfR7kgIk/jJq8BQQf
	EkQBTUr++toDfK6MKVxRqoL/04i0qDACbBk8XjVKoR+VkC5k2L5ocESV91eNPHv8PO8Oi1
	RlFinGRUo0e3mazFPD6fUMHulqGrKJM=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v10 06/25] tools/xenstore: add live update command to xenstore-control
Date: Tue, 15 Dec 2020 17:35:44 +0100
Message-Id: <20201215163603.21700-7-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215163603.21700-1-jgross@suse.com>
References: <20201215163603.21700-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the "live-update" command to xenstore-control enabling updating
xenstored to a new version in a running Xen system.

With -c <arg> it is possible to pass a different command line to the
new instance of xenstored. It will replace the command line used to
invoke the currently running xenstored instance.

The running xenstored (or xenstore-stubdom) needs to support live
updating, of course.

For now just add a small dummy handler to the C xenstored denying any
live update action.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V2:
- add 0 byte after kernel chunk
- add comment regarding add_to_buf() semantics (Pawel Wieczorkiewicz)
- use %u for unsigned in format (Pawel Wieczorkiewicz)
- explain buffer size better (Pawel Wieczorkiewicz)
- add loop around "-s" option for client side retry in case of timeout

V3:
- add live-update command to docs/misc/xenstore.txt (Paul Durrant)
- fix indent (Paul Durrant)

V4:
- made several parameters const (Julien Grall)
- added more details to xenstore.txt (Julien Grall)

V5:
- set old_binary to NULL initially (Paul Durrant)

V6:
- use strerror(errno) in error message (Julien Grall)

V10:
- make binary specification mandatory (Andrew Cooper)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 docs/misc/xenstore.txt             |  21 ++
 tools/xenstore/Makefile            |   3 +-
 tools/xenstore/xenstore_control.c  | 332 +++++++++++++++++++++++++++--
 tools/xenstore/xenstored_control.c |  30 +++
 4 files changed, 369 insertions(+), 17 deletions(-)

diff --git a/docs/misc/xenstore.txt b/docs/misc/xenstore.txt
index 2081f20f55..1480742330 100644
--- a/docs/misc/xenstore.txt
+++ b/docs/misc/xenstore.txt
@@ -317,6 +317,27 @@ CONTROL			<command>|[<parameters>|]
 	Current commands are:
 	check
 		checks xenstored innards
+	live-update|<params>|+
+		perform a live-update of the Xenstore daemon, only to
+		be used via xenstore-control command.
+		<params> are implementation specific and are used for
+		different steps of the live-update processing. Currently
+		supported <params> are:
+		-f <file>  specify new daemon binary
+		-b <size>  specify size of new stubdom binary
+		-d <chunk-size> <binary-chunk>  transfer chunk of new
+			stubdom binary
+		-c <pars>  specify new command line to use
+		-s [-t <sec>] [-F]  start live update process (-t specifies
+			timeout in seconds to wait for active transactions
+			to finish, default is 60 seconds; -F will force
+			live update to happen even with running transactions
+			after timeout elapsed)
+		-a  abort live update handling
+		All sub-options will return "OK" in case of success or an
+		error string in case of failure. -s can return "BUSY" in case
+		of an active transaction, a retry of -s can be done in that
+		case.
 	log|on
 		turn xenstore logging on
 	log|off
diff --git a/tools/xenstore/Makefile b/tools/xenstore/Makefile
index 9a0f0d012d..ab89e22d3a 100644
--- a/tools/xenstore/Makefile
+++ b/tools/xenstore/Makefile
@@ -11,6 +11,7 @@ CFLAGS += -include $(XEN_ROOT)/tools/config.h
 CFLAGS += -I./include
 CFLAGS += $(CFLAGS_libxenevtchn)
 CFLAGS += $(CFLAGS_libxenctrl)
+CFLAGS += $(CFLAGS_libxenguest)
 CFLAGS += $(CFLAGS_libxentoolcore)
 CFLAGS += -DXEN_LIB_STORED="\"$(XEN_LIB_STORED)\""
 CFLAGS += -DXEN_RUN_STORED="\"$(XEN_RUN_STORED)\""
@@ -81,7 +82,7 @@ xenstore: xenstore_client.o
 	$(CC) $< $(LDFLAGS) $(LDLIBS_libxenstore) $(LDLIBS_libxentoolcore) $(SOCKET_LIBS) -o $@ $(APPEND_LDFLAGS)
 
 xenstore-control: xenstore_control.o
-	$(CC) $< $(LDFLAGS) $(LDLIBS_libxenstore) $(LDLIBS_libxentoolcore) $(SOCKET_LIBS) -o $@ $(APPEND_LDFLAGS)
+	$(CC) $< $(LDFLAGS) $(LDLIBS_libxenstore) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxentoolcore) $(SOCKET_LIBS) -o $@ $(APPEND_LDFLAGS)
 
 xs_tdb_dump: xs_tdb_dump.o utils.o tdb.o talloc.o
 	$(CC) $^ $(LDFLAGS) -o $@ $(APPEND_LDFLAGS)
diff --git a/tools/xenstore/xenstore_control.c b/tools/xenstore/xenstore_control.c
index afa04495a7..5ca015a07d 100644
--- a/tools/xenstore/xenstore_control.c
+++ b/tools/xenstore/xenstore_control.c
@@ -1,9 +1,311 @@
+#define _GNU_SOURCE
+#include <stdbool.h>
 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
+#include <time.h>
+#include <xenctrl.h>
+#include <xenguest.h>
 
 #include "xenstore.h"
 
+/* Add a string plus terminating 0 byte to buf, returning new len. */
+static int add_to_buf(char **buf, const char *val, int len)
+{
+    int vallen = strlen(val) + 1;
+
+    if (len < 0)
+        return -1;
+
+    *buf = realloc(*buf, len + vallen);
+    if (!*buf)
+        return -1;
+
+    strcpy(*buf + len, val);
+
+    return len + vallen;
+}
+
+static int live_update_start(struct xs_handle *xsh, bool force, unsigned int to)
+{
+    int len = 0;
+    char *buf = NULL, *ret;
+    time_t time_start;
+
+    if (asprintf(&ret, "%u", to) < 0)
+        return 1;
+    len = add_to_buf(&buf, "-s", len);
+    len = add_to_buf(&buf, "-t", len);
+    len = add_to_buf(&buf, ret, len);
+    free(ret);
+    if (force)
+        len = add_to_buf(&buf, "-F", len);
+    if (len < 0)
+        return 1;
+
+    for (time_start = time(NULL); time(NULL) - time_start < to;) {
+        ret = xs_control_command(xsh, "live-update", buf, len);
+        if (!ret)
+            goto err;
+        if (strcmp(ret, "BUSY"))
+            break;
+    }
+
+    if (strcmp(ret, "OK"))
+        goto err;
+
+    free(buf);
+    free(ret);
+
+    return 0;
+
+ err:
+    fprintf(stderr, "Starting live update failed:\n%s\n",
+            ret ? : strerror(errno));
+    free(buf);
+    free(ret);
+
+    return 3;
+}
+
+static int live_update_cmdline(struct xs_handle *xsh, const char *cmdline)
+{
+    int len = 0, rc = 0;
+    char *buf = NULL, *ret;
+
+    len = add_to_buf(&buf, "-c", len);
+    len = add_to_buf(&buf, cmdline, len);
+    if (len < 0)
+        return 1;
+
+    ret = xs_control_command(xsh, "live-update", buf, len);
+    free(buf);
+    if (!ret || strcmp(ret, "OK")) {
+        fprintf(stderr, "Setting update binary failed:\n%s\n",
+                ret ? : strerror(errno));
+        rc = 3;
+    }
+    free(ret);
+
+    return rc;
+}
+
+static int send_kernel_blob(struct xs_handle *xsh, const char *binary)
+{
+    int rc = 0, len = 0;
+    xc_interface *xch;
+    struct xc_dom_image *dom;
+    char *ret, *buf = NULL;
+    size_t off, sz;
+#define BLOB_CHUNK_SZ 2048
+
+    xch = xc_interface_open(NULL, NULL, 0);
+    if (!xch) {
+        fprintf(stderr, "xc_interface_open() failed\n");
+        return 1;
+    }
+
+    dom = xc_dom_allocate(xch, NULL, NULL);
+    if (!dom) {
+        rc = 1;
+        goto out_close;
+    }
+
+    rc = xc_dom_kernel_file(dom, binary);
+    if (rc) {
+        rc = 1;
+        goto out_rel;
+    }
+
+    if (asprintf(&ret, "%zu", dom->kernel_size) < 0) {
+        rc = 1;
+        goto out_rel;
+    }
+    len = add_to_buf(&buf, "-b", len);
+    len = add_to_buf(&buf, ret, len);
+    free(ret);
+    if (len < 0) {
+        rc = 1;
+        goto out_rel;
+    }
+    ret = xs_control_command(xsh, "live-update", buf, len);
+    free(buf);
+    if (!ret || strcmp(ret, "OK")) {
+        fprintf(stderr, "Starting live update failed:\n%s\n",
+                ret ? : strerror(errno));
+        rc = 3;
+    }
+    free(ret);
+    if (rc)
+        goto out_rel;
+
+    /* buf capable to hold "-d" <1..2048> BLOB_CHUNK_SZ and a terminating 0. */
+    buf = malloc(3 + 5 + BLOB_CHUNK_SZ + 1);
+    if (!buf) {
+        rc = 1;
+        goto out_rel;
+    }
+
+    strcpy(buf, "-d");
+    sz = BLOB_CHUNK_SZ;
+    for (off = 0; off < dom->kernel_size; off += BLOB_CHUNK_SZ) {
+        if (dom->kernel_size - off < BLOB_CHUNK_SZ)
+            sz = dom->kernel_size - off;
+        sprintf(buf + 3, "%zu", sz);
+        len = 3 + strlen(buf + 3) + 1;
+        memcpy(buf + len, dom->kernel_blob + off, sz);
+        buf[len + sz] = 0;
+        len += sz + 1;
+        ret = xs_control_command(xsh, "live-update", buf, len);
+        if (!ret || strcmp(ret, "OK")) {
+            fprintf(stderr, "Transfer of new binary failed:\n%s\n",
+                    ret ? : strerror(errno));
+            rc = 3;
+            free(ret);
+            break;
+        }
+        free(ret);
+    }
+
+    free(buf);
+
+ out_rel:
+    xc_dom_release(dom);
+
+ out_close:
+    xc_interface_close(xch);
+
+    return rc;
+}
+
+/*
+ * Live update of Xenstore stubdom
+ *
+ * Sequence of actions:
+ * 1. transfer new stubdom binary
+ *    a) specify size
+ *    b) transfer unpacked binary in chunks
+ * 2. transfer new cmdline (optional)
+ * 3. start update (includes flags)
+ */
+static int live_update_stubdom(struct xs_handle *xsh, const char *binary,
+                               const char *cmdline, bool force, unsigned int to)
+{
+    int rc;
+
+    rc = send_kernel_blob(xsh, binary);
+    if (rc)
+        goto abort;
+
+    if (cmdline) {
+        rc = live_update_cmdline(xsh, cmdline);
+        if (rc)
+            goto abort;
+    }
+
+    rc = live_update_start(xsh, force, to);
+    if (rc)
+        goto abort;
+
+    return 0;
+
+ abort:
+    xs_control_command(xsh, "live-update", "-a", 3);
+    return rc;
+}
+
+/*
+ * Live update of Xenstore daemon
+ *
+ * Sequence of actions:
+ * 1. transfer new binary filename
+ * 2. transfer new cmdline (optional)
+ * 3. start update (includes flags)
+ */
+static int live_update_daemon(struct xs_handle *xsh, const char *binary,
+                              const char *cmdline, bool force, unsigned int to)
+{
+    int len = 0, rc;
+    char *buf = NULL, *ret;
+
+    len = add_to_buf(&buf, "-f", len);
+    len = add_to_buf(&buf, binary, len);
+    if (len < 0)
+        return 1;
+    ret = xs_control_command(xsh, "live-update", buf, len);
+    free(buf);
+    if (!ret || strcmp(ret, "OK")) {
+        fprintf(stderr, "Setting update binary failed:\n%s\n",
+                ret ? : strerror(errno));
+        free(ret);
+        return 3;
+    }
+    free(ret);
+
+    if (cmdline) {
+        rc = live_update_cmdline(xsh, cmdline);
+        if (rc)
+            goto abort;
+    }
+
+    rc = live_update_start(xsh, force, to);
+    if (rc)
+        goto abort;
+
+    return 0;
+
+ abort:
+    xs_control_command(xsh, "live-update", "-a", 3);
+    return rc;
+}
+
+static int live_update(struct xs_handle *xsh, int argc, char **argv)
+{
+    int rc = 0;
+    unsigned int i, to = 60;
+    char *binary = NULL, *cmdline = NULL, *val;
+    bool force = false;
+
+    for (i = 0; i < argc; i++) {
+        if (!strcmp(argv[i], "-c")) {
+            i++;
+            if (i == argc) {
+                fprintf(stderr, "Missing command line value\n");
+                rc = 2;
+                goto out;
+            }
+            cmdline = argv[i];
+        } else if (!strcmp(argv[i], "-t")) {
+            i++;
+            if (i == argc) {
+                fprintf(stderr, "Missing timeout value\n");
+                rc = 2;
+                goto out;
+            }
+            to = atoi(argv[i]);
+        } else if (!strcmp(argv[i], "-F"))
+            force = true;
+        else
+            binary = argv[i];
+    }
+
+    if (!binary) {
+        fprintf(stderr, "Missing binary specification\n");
+        rc = 2;
+        goto out;
+    }
+
+    val = xs_read(xsh, XBT_NULL, "/tool/xenstored/domid", &i);
+    if (val)
+        rc = live_update_stubdom(xsh, binary, cmdline, force, to);
+    else
+        rc = live_update_daemon(xsh, binary, cmdline, force, to);
+
+    free(val);
+
+ out:
+    return rc;
+}
 
 int main(int argc, char **argv)
 {
@@ -20,22 +322,6 @@ int main(int argc, char **argv)
         goto out;
     }
 
-    for (p = 2; p < argc; p++)
-        len += strlen(argv[p]) + 1;
-    if (len) {
-        par = malloc(len);
-        if (!par) {
-            fprintf(stderr, "Allocation error.\n");
-            rc = 1;
-            goto out;
-        }
-        len = 0;
-        for (p = 2; p < argc; p++) {
-            memcpy(par + len, argv[p], strlen(argv[p]) + 1);
-            len += strlen(argv[p]) + 1;
-        }
-    }
-
     xsh = xs_open(0);
     if (xsh == NULL) {
         fprintf(stderr, "Failed to contact Xenstored.\n");
@@ -43,6 +329,19 @@ int main(int argc, char **argv)
         goto out;
     }
 
+    if (!strcmp(argv[1], "live-update")) {
+        rc = live_update(xsh, argc - 2, argv + 2);
+        goto out_close;
+    }
+
+    for (p = 2; p < argc; p++)
+        len = add_to_buf(&par, argv[p], len);
+    if (len < 0) {
+        fprintf(stderr, "Allocation error.\n");
+        rc = 1;
+        goto out_close;
+    }
+
     ret = xs_control_command(xsh, argv[1], par, len);
     if (!ret) {
         rc = 3;
@@ -59,6 +358,7 @@ int main(int argc, char **argv)
     } else if (strlen(ret) > 0)
         printf("%s\n", ret);
 
+ out_close:
     xs_close(xsh);
 
  out:
diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 8d29db8270..00fda5acdb 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -149,11 +149,41 @@ static int do_control_print(void *ctx, struct connection *conn,
 	return 0;
 }
 
+static int do_control_lu(void *ctx, struct connection *conn,
+			 char **vec, int num)
+{
+	const char *resp;
+
+	resp = talloc_strdup(ctx, "NYI");
+	send_reply(conn, XS_CONTROL, resp, strlen(resp) + 1);
+	return 0;
+}
+
 static int do_control_help(void *, struct connection *, char **, int);
 
 static struct cmd_s cmds[] = {
 	{ "check", do_control_check, "" },
 	{ "log", do_control_log, "on|off" },
+
+	/*
+	 * The parameters are those of the xenstore-control utility!
+	 * Depending on environment (Mini-OS or daemon) the live-update
+	 * sequence is split into several sub-operations:
+	 * 1. Specification of new binary
+	 *    daemon:  -f <filename>
+	 *    Mini-OS: -b <binary-size>
+	 *             -d <size> <data-bytes> (multiple of those)
+	 * 2. New command-line (optional): -c <cmdline>
+	 * 3. Start of update: -s [-F] [-t <timeout>]
+	 * Any sub-operation needs to respond with the string "OK" in case
+	 * of success, any other response indicates failure.
+	 * A started live-update sequence can be aborted via "-a" (not
+	 * needed in case of failure for the first or last live-update
+	 * sub-operation).
+	 */
+	{ "live-update", do_control_lu,
+		"[-c <cmdline>] [-F] [-t <timeout>] <file>\n"
+		"    Default timeout is 60 seconds.", 4 },
 #ifdef __MINIOS__
 	{ "memreport", do_control_memreport, "" },
 #else
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:36:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:36:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54559.94996 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJf-0007D6-6Y; Tue, 15 Dec 2020 16:36:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54559.94996; Tue, 15 Dec 2020 16:36:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJe-0007Cl-Vg; Tue, 15 Dec 2020 16:36:46 +0000
Received: by outflank-mailman (input) for mailman id 54559;
 Tue, 15 Dec 2020 16:36:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpDJd-000667-HH
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:36:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 38ce364d-e14c-46cd-9ae6-d9967ea60f23;
 Tue, 15 Dec 2020 16:36:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 17BD8B279;
 Tue, 15 Dec 2020 16:36:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 38ce364d-e14c-46cd-9ae6-d9967ea60f23
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608050171; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=dvZq9ug5KV3ON4auQi7iEvSAASMBEbL1LsbwM9uHXmI=;
	b=B7nClCsdmrLKxGBJ6gd7duXA25B84ZeoSzUu6hb/c+CMLVyISFs+6WNxo5kNJToq/IGVFk
	Ihz4nunmvkKBez8mZ0CY5v7y9AJNNzYUVQLsVoj/8yc54PrSJ3UJ7MD2/DNeBtETf7m3Tp
	nRnEW++D8SJzJ0z6n68OrOqmCvlTPKY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien.grall@amazon.com>
Subject: [PATCH v10 16/25] tools/xenstore: handle CLOEXEC flag for local files and pipes
Date: Tue, 15 Dec 2020 17:35:54 +0100
Message-Id: <20201215163603.21700-17-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215163603.21700-1-jgross@suse.com>
References: <20201215163603.21700-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

To support live update, the locally used files need to have the
"close on exec" flag set. Fortunately the Xen libraries in use are
already doing this, so only the logging and tdb related files and
pipes are affected. openlog() sets the close-on-exec attribute, too.

In order to keep the event channels open across the exec, specify the
XENEVTCHN_NO_CLOEXEC flag when calling xenevtchn_open().

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Julien Grall <julien.grall@amazon.com>
---
V4:
- disable LU in case of O_CLOEXEC not supported (Julien Grall)

V5:
- add comment (Paul Durrant)

V7:
- set XENEVTCHN_NO_CLOEXEC

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_control.c |  6 ++++++
 tools/xenstore/xenstored_core.c    |  6 ++++--
 tools/xenstore/xenstored_core.h    |  8 ++++++++
 tools/xenstore/xenstored_domain.c  |  2 +-
 tools/xenstore/xenstored_posix.c   | 12 ++++++++++++
 5 files changed, 31 insertions(+), 3 deletions(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 38550a559e..437276de8d 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -41,6 +41,7 @@ Interactive commands for Xen Store Daemon.
 #define MAP_ANONYMOUS MAP_ANON
 #endif
 
+#ifndef NO_LIVE_UPDATE
 struct live_update {
 	/* For verification the correct connection is acting. */
 	struct connection *conn;
@@ -85,6 +86,7 @@ static const char *lu_begin(struct connection *conn)
 
 	return NULL;
 }
+#endif
 
 struct cmd_s {
 	char *cmd;
@@ -209,6 +211,7 @@ static int do_control_print(void *ctx, struct connection *conn,
 	return 0;
 }
 
+#ifndef NO_LIVE_UPDATE
 static const char *lu_abort(const void *ctx, struct connection *conn)
 {
 	syslog(LOG_INFO, "live-update: abort\n");
@@ -522,6 +525,7 @@ static int do_control_lu(void *ctx, struct connection *conn,
 	send_reply(conn, XS_CONTROL, resp, strlen(resp) + 1);
 	return 0;
 }
+#endif
 
 static int do_control_help(void *, struct connection *, char **, int);
 
@@ -529,6 +533,7 @@ static struct cmd_s cmds[] = {
 	{ "check", do_control_check, "" },
 	{ "log", do_control_log, "on|off" },
 
+#ifndef NO_LIVE_UPDATE
 	/*
 	 * The parameters are those of the xenstore-control utility!
 	 * Depending on environment (Mini-OS or daemon) the live-update
@@ -548,6 +553,7 @@ static struct cmd_s cmds[] = {
 	{ "live-update", do_control_lu,
 		"[-c <cmdline>] [-F] [-t <timeout>] <file>\n"
 		"    Default timeout is 60 seconds.", 4 },
+#endif
 #ifdef __MINIOS__
 	{ "memreport", do_control_memreport, "" },
 #else
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 064109c393..0f4e10815a 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -197,7 +197,8 @@ void reopen_log(void)
 	if (tracefile) {
 		close_log();
 
-		tracefd = open(tracefile, O_WRONLY|O_CREAT|O_APPEND, 0600);
+		tracefd = open(tracefile,
+			       O_WRONLY | O_CREAT | O_APPEND | O_CLOEXEC, 0600);
 
 		if (tracefd < 0)
 			perror("Could not open tracefile");
@@ -1646,7 +1647,8 @@ static void setup_structure(void)
 	if (!(tdb_flags & TDB_INTERNAL))
 		unlink(tdbname);
 
-	tdb_ctx = tdb_open_ex(tdbname, 7919, tdb_flags, O_RDWR|O_CREAT|O_EXCL,
+	tdb_ctx = tdb_open_ex(tdbname, 7919, tdb_flags,
+			      O_RDWR | O_CREAT | O_EXCL | O_CLOEXEC,
 			      0640, &tdb_logger, NULL);
 	if (!tdb_ctx)
 		barf_perror("Could not create tdb file %s", tdbname);
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 32b306e161..e40e0e6806 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -35,6 +35,14 @@
 #include "tdb.h"
 #include "hashtable.h"
 
+#ifndef O_CLOEXEC
+#define O_CLOEXEC 0
+/* O_CLOEXEC support is needed for Live Update in the daemon case. */
+#ifndef __MINIOS__
+#define NO_LIVE_UPDATE
+#endif
+#endif
+
 /* DEFAULT_BUFFER_SIZE should be large enough for each errno string. */
 #define DEFAULT_BUFFER_SIZE 16
 
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 919a4d98cf..38d250fbed 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -743,7 +743,7 @@ void domain_init(void)
 
 	talloc_set_destructor(xgt_handle, close_xgt_handle);
 
-	xce_handle = xenevtchn_open(NULL, 0);
+	xce_handle = xenevtchn_open(NULL, XENEVTCHN_NO_CLOEXEC);
 
 	if (xce_handle == NULL)
 		barf_perror("Failed to open evtchn device");
diff --git a/tools/xenstore/xenstored_posix.c b/tools/xenstore/xenstored_posix.c
index 1f9603fea2..ae3e63e07f 100644
--- a/tools/xenstore/xenstored_posix.c
+++ b/tools/xenstore/xenstored_posix.c
@@ -90,9 +90,21 @@ void finish_daemonize(void)
 
 void init_pipe(int reopen_log_pipe[2])
 {
+	int flags;
+	unsigned int i;
+
 	if (pipe(reopen_log_pipe)) {
 		barf_perror("pipe");
 	}
+
+	for (i = 0; i < 2; i++) {
+		flags = fcntl(reopen_log_pipe[i], F_GETFD);
+		if (flags < 0)
+			barf_perror("pipe get flags");
+		flags |= FD_CLOEXEC;
+		if (fcntl(reopen_log_pipe[i],  F_SETFD, flags) < 0)
+			barf_perror("pipe set flags");
+	}
 }
 
 void unmap_xenbus(void *interface)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:36:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:36:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54562.95004 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJf-0007En-SP; Tue, 15 Dec 2020 16:36:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54562.95004; Tue, 15 Dec 2020 16:36:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJf-0007E7-Fl; Tue, 15 Dec 2020 16:36:47 +0000
Received: by outflank-mailman (input) for mailman id 54562;
 Tue, 15 Dec 2020 16:36:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpDJe-00066M-Hz
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:36:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 98f00c5d-0b3c-4ea8-a28f-445388c7de8c;
 Tue, 15 Dec 2020 16:36:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 315FBB271;
 Tue, 15 Dec 2020 16:36:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98f00c5d-0b3c-4ea8-a28f-445388c7de8c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608050170; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=3OGXhmDpavRgcLL+iSFnJ3xYl7jjDOEuqGoMRL/Y0AQ=;
	b=gFVrZOXtO3pkUFa6HLadEJ50eh//1ChSYICzygJvi8f2Sw4H5IAMxKGZXR9Tv4oRCsaxLl
	Y6RuHXQOcRob14xQBSWUzgBD3Ur9qvr+57mV19fUEhP4ey1flqIGrHwpvF4WLJVxTIHEt4
	zX/HYtNgraWlUi0roWnKFvcm1uSFdcw=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v10 12/25] tools/xenstore: allow live update only with no transaction active
Date: Tue, 15 Dec 2020 17:35:50 +0100
Message-Id: <20201215163603.21700-13-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215163603.21700-1-jgross@suse.com>
References: <20201215163603.21700-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to simplify live update state dumping, only allow live update
to happen when no transaction is active.

A timeout is used to detect guests which have had a transaction active
for a longer period of time. If such a guest is detected when trying
to do a live update, this is reported and the update fails.

The admin can then either use a longer timeout, or use the force flag
to just ignore the transactions of such a guest, or kill the guest
before retrying.

Transactions that have been active for less time than the timeout will
result in the live update start responding "BUSY", without aborting
the complete live update process. The xenstore-control program will
then just retry starting the live update until a different result is
returned.
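The admission logic above can be sketched as a small standalone helper. This is a simplified, hypothetical reduction of the patch's lu_check_lu_allowed() to a three-way verdict (the real function additionally builds a per-domain report string); names like lu_verdict are invented for illustration:

```c
#include <time.h>

enum lu_verdict { LU_OK, LU_BUSY, LU_BLOCKED };

/*
 * ta_start[i] is 0 if connection i has no transaction active,
 * otherwise the time the oldest transaction was started.
 * A transaction older than the timeout blocks the update (unless
 * forced); a younger one merely reports BUSY so the caller retries.
 */
static enum lu_verdict lu_verdict(const time_t *ta_start, unsigned int n,
				  time_t now, time_t timeout, int force)
{
	enum lu_verdict v = LU_OK;
	unsigned int i;

	for (i = 0; i < n; i++) {
		if (!ta_start[i])
			continue;		/* no transaction here */
		if (now - ta_start[i] >= timeout && !force)
			return LU_BLOCKED;	/* long-running: refuse */
		v = LU_BUSY;			/* short-lived: retry later */
	}
	return v;
}
```

Note that the age must be computed as `now - start`; the other way around the difference is negative and the timeout can never trigger.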

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Acked-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_control.c     | 20 +++++++++++++++++++-
 tools/xenstore/xenstored_core.h        |  1 +
 tools/xenstore/xenstored_transaction.c |  5 +++++
 3 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 940b717741..af64a9a2d4 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -22,6 +22,7 @@ Interactive commands for Xen Store Daemon.
 #include <stdlib.h>
 #include <string.h>
 #include <syslog.h>
+#include <time.h>
 #include <sys/types.h>
 #include <sys/stat.h>
 #include <unistd.h>
@@ -307,7 +308,24 @@ static const char *lu_arch(const void *ctx, struct connection *conn,
 static const char *lu_check_lu_allowed(const void *ctx, bool force,
 				       unsigned int to)
 {
-	return NULL;
+	char *ret = NULL;
+	struct connection *conn;
+	time_t now = time(NULL);
+	bool busy = false;
+
+	list_for_each_entry(conn, &connections, list) {
+		if (conn->ta_start_time && now - conn->ta_start_time >= to && !force) {
+			ret = talloc_asprintf(ctx, "%s\nDomain %u: %ld s",
+					      ret ? : "Domains with long running transactions:",
+					      conn->id,
+					      (long)(now - conn->ta_start_time));
+			if (!ret)
+				busy = true;
+		} else if (conn->ta_start_time)
+			busy = true;
+	}
+
+	return ret ? (const char *)ret : (busy ? "BUSY" : NULL);
 }
 
 static const char *lu_dump_state(const void *ctx, struct connection *conn)
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 27826c125c..a009b182fd 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -93,6 +93,7 @@ struct connection
 	struct list_head transaction_list;
 	uint32_t next_transaction_id;
 	unsigned int transaction_started;
+	time_t ta_start_time;
 
 	/* The domain I'm associated with, if any. */
 	struct domain *domain;
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index 52355f4ed8..cd07fb0f21 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -473,6 +473,8 @@ int do_transaction_start(struct connection *conn, struct buffered_data *in)
 	list_add_tail(&trans->list, &conn->transaction_list);
 	talloc_steal(conn, trans);
 	talloc_set_destructor(trans, destroy_transaction);
+	if (!conn->transaction_started)
+		conn->ta_start_time = time(NULL);
 	conn->transaction_started++;
 	wrl_ntransactions++;
 
@@ -511,6 +513,8 @@ int do_transaction_end(struct connection *conn, struct buffered_data *in)
 	conn->transaction = NULL;
 	list_del(&trans->list);
 	conn->transaction_started--;
+	if (!conn->transaction_started)
+		conn->ta_start_time = 0;
 
 	/* Attach transaction to in for auto-cleanup */
 	talloc_steal(in, trans);
@@ -589,6 +593,7 @@ void conn_delete_all_transactions(struct connection *conn)
 	assert(conn->transaction == NULL);
 
 	conn->transaction_started = 0;
+	conn->ta_start_time = 0;
 }
 
 int check_transactions(struct hashtable *hash)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:36:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:36:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54566.95024 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJj-0007Or-LL; Tue, 15 Dec 2020 16:36:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54566.95024; Tue, 15 Dec 2020 16:36:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJj-0007OZ-Fu; Tue, 15 Dec 2020 16:36:51 +0000
Received: by outflank-mailman (input) for mailman id 54566;
 Tue, 15 Dec 2020 16:36:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpDJi-000667-HS
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:36:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aa8f278a-b59a-428d-b53c-7ac49645efba;
 Tue, 15 Dec 2020 16:36:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 42CC2B27B;
 Tue, 15 Dec 2020 16:36:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aa8f278a-b59a-428d-b53c-7ac49645efba
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608050171; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/rmGIUwb7rQQbsNmDH/AGkcUN8zjvijT5247nfpjU8Q=;
	b=CihjzGYV5sKIw68KPlIyPXJBm0oU8R5xziOB7NNHgxMOHHYPLZeC0AcU57tHxXrtxgPpw0
	b0AX22F4eW7a/nY3Z1+QUEOe5ZFyqiT9pBFQEYpGuFEP5yJj9xquDp2s3iBxXnQLjSmdna
	7eqtJwkgaC31jrd02acAomBhanHos7o=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v10 17/25] tools/xenstore: split off domain introduction from do_introduce()
Date: Tue, 15 Dec 2020 17:35:55 +0100
Message-Id: <20201215163603.21700-18-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215163603.21700-1-jgross@suse.com>
References: <20201215163603.21700-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

For live update, the functionality to introduce a new domain, similar
to the XS_INTRODUCE command, is needed. Split that functionality off
into a dedicated function introduce_domain().

Switch initial dom0 initialization to use this function, too.
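The refactoring rests on a common find-or-alloc pattern, which the patch expresses as find_or_alloc_domain() using the GNU `?:` shorthand (`domain ? : alloc_domain(ctx, domid)`). A minimal, hypothetical version with a plain linked list instead of xenstored's talloc-backed structures:

```c
#include <stdlib.h>

struct dom {
	unsigned int domid;
	struct dom *next;
};

static struct dom *dom_list;

static struct dom *find_dom(unsigned int domid)
{
	struct dom *d;

	for (d = dom_list; d; d = d->next)
		if (d->domid == domid)
			return d;
	return NULL;
}

static struct dom *alloc_dom(unsigned int domid)
{
	struct dom *d = calloc(1, sizeof(*d));

	if (!d)
		return NULL;
	d->domid = domid;
	d->next = dom_list;
	dom_list = d;
	return d;
}

/* Mirrors find_or_alloc_domain(): reuse an existing entry, else allocate. */
static struct dom *find_or_alloc_dom(unsigned int domid)
{
	struct dom *d = find_dom(domid);

	return d ? d : alloc_dom(domid);
}
```

Factoring the lookup out lets both do_introduce() and dom0_init() share one entry point without duplicating the "already known?" check.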

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V8:
- new patch
---
 tools/xenstore/xenstored_domain.c | 95 ++++++++++++++++++-------------
 1 file changed, 55 insertions(+), 40 deletions(-)

diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 38d250fbed..71b078caf3 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -326,7 +326,7 @@ static struct domain *find_domain_struct(unsigned int domid)
 	return NULL;
 }
 
-static struct domain *alloc_domain(void *context, unsigned int domid)
+static struct domain *alloc_domain(const void *context, unsigned int domid)
 {
 	struct domain *domain;
 
@@ -347,6 +347,14 @@ static struct domain *alloc_domain(void *context, unsigned int domid)
 	return domain;
 }
 
+static struct domain *find_or_alloc_domain(const void *ctx, unsigned int domid)
+{
+	struct domain *domain;
+
+	domain = find_domain_struct(domid);
+	return domain ? : alloc_domain(ctx, domid);
+}
+
 static int new_domain(struct domain *domain, int port)
 {
 	int rc;
@@ -413,52 +421,41 @@ static void domain_conn_reset(struct domain *domain)
 	domain->interface->rsp_cons = domain->interface->rsp_prod = 0;
 }
 
-/* domid, gfn, evtchn, path */
-int do_introduce(struct connection *conn, struct buffered_data *in)
+static struct domain *introduce_domain(const void *ctx,
+				       unsigned int domid,
+				       evtchn_port_t port)
 {
 	struct domain *domain;
-	char *vec[3];
-	unsigned int domid;
-	evtchn_port_t port;
 	int rc;
 	struct xenstore_domain_interface *interface;
+	bool is_master_domain = (domid == xenbus_master_domid());
 
-	if (get_strings(in, vec, ARRAY_SIZE(vec)) < ARRAY_SIZE(vec))
-		return EINVAL;
-
-	domid = atoi(vec[0]);
-	/* Ignore the gfn, we don't need it. */
-	port = atoi(vec[2]);
-
-	/* Sanity check args. */
-	if (port <= 0)
-		return EINVAL;
-
-	domain = find_domain_struct(domid);
-
-	if (domain == NULL) {
-		/* Hang domain off "in" until we're finished. */
-		domain = alloc_domain(in, domid);
-		if (domain == NULL)
-			return ENOMEM;
-	}
+	domain = find_or_alloc_domain(ctx, domid);
+	if (!domain)
+		return NULL;
 
 	if (!domain->introduced) {
-		interface = map_interface(domid);
+		interface = is_master_domain ? xenbus_map()
+					     : map_interface(domid);
 		if (!interface)
-			return errno;
-		/* Hang domain off "in" until we're finished. */
+			return NULL;
 		if (new_domain(domain, port)) {
 			rc = errno;
-			unmap_interface(interface);
-			return rc;
+			if (is_master_domain)
+				unmap_xenbus(interface);
+			else
+				unmap_interface(interface);
+			errno = rc;
+			return NULL;
 		}
 		domain->interface = interface;
 
 		/* Now domain belongs to its connection. */
 		talloc_steal(domain->conn, domain);
 
-		fire_watches(NULL, in, "@introduceDomain", NULL, false, NULL);
+		if (!is_master_domain)
+			fire_watches(NULL, ctx, "@introduceDomain", NULL,
+				     false, NULL);
 	} else {
 		/* Use XS_INTRODUCE for recreating the xenbus event-channel. */
 		if (domain->port)
@@ -467,6 +464,32 @@ int do_introduce(struct connection *conn, struct buffered_data *in)
 		domain->port = (rc == -1) ? 0 : rc;
 	}
 
+	return domain;
+}
+
+/* domid, gfn, evtchn, path */
+int do_introduce(struct connection *conn, struct buffered_data *in)
+{
+	struct domain *domain;
+	char *vec[3];
+	unsigned int domid;
+	evtchn_port_t port;
+
+	if (get_strings(in, vec, ARRAY_SIZE(vec)) < ARRAY_SIZE(vec))
+		return EINVAL;
+
+	domid = atoi(vec[0]);
+	/* Ignore the gfn, we don't need it. */
+	port = atoi(vec[2]);
+
+	/* Sanity check args. */
+	if (port <= 0)
+		return EINVAL;
+
+	domain = introduce_domain(in, domid, port);
+	if (!domain)
+		return errno;
+
 	domain_conn_reset(domain);
 
 	send_ack(conn, XS_INTRODUCE);
@@ -692,17 +715,9 @@ static int dom0_init(void)
 	if (port == -1)
 		return -1;
 
-	dom0 = alloc_domain(NULL, xenbus_master_domid());
+	dom0 = introduce_domain(NULL, xenbus_master_domid(), port);
 	if (!dom0)
 		return -1;
-	if (new_domain(dom0, port))
-		return -1;
-
-	dom0->interface = xenbus_map();
-	if (dom0->interface == NULL)
-		return -1;
-
-	talloc_steal(dom0->conn, dom0); 
 
 	xenevtchn_notify(xce_handle, dom0->port);
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:36:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:36:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54568.95036 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJl-0007T4-9E; Tue, 15 Dec 2020 16:36:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54568.95036; Tue, 15 Dec 2020 16:36:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJl-0007Sj-2N; Tue, 15 Dec 2020 16:36:53 +0000
Received: by outflank-mailman (input) for mailman id 54568;
 Tue, 15 Dec 2020 16:36:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpDJj-00066M-IB
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:36:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 28d56c77-03f2-431c-867b-f6c7232bc192;
 Tue, 15 Dec 2020 16:36:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D1B29B27F;
 Tue, 15 Dec 2020 16:36:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 28d56c77-03f2-431c-867b-f6c7232bc192
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608050171; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QJPffHFIyMSKdd75zSh1AO2Awnj/+F1fuoew120PJO0=;
	b=tpTufBOz6DdHVRPPJ1z+8UZ/7CD0gbq77+pAvlUXSrOhGTgXhTgspf2cwO9+Pq+OXIxAdl
	hCbNNX+3Wla3Fb2b4gkcWBzTMlfo0HNjMl5894nx9TukfLj9iUVP9Ye+r+YqV6QvnPI3Kb
	W+X6lXqPSNgLXLQUwieVvfrf/iLq/9o=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Julien Grall <jgrall@amazon.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v10 20/25] tools/xenstore: add reading global state for live update
Date: Tue, 15 Dec 2020 17:35:58 +0100
Message-Id: <20201215163603.21700-21-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215163603.21700-1-jgross@suse.com>
References: <20201215163603.21700-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add reading the global state for live update.
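The global record carries the file descriptors that stay open across the live-update exec(): the listening socket and the event channel fd, which read_state_global() feeds back into `sock` and domain_init(). A hedged sketch of consuming such a record; the field names follow the patch's xs_state_global, but the struct layout here is illustrative only:

```c
#include <stdint.h>

/* Illustrative stand-in for the real xs_state_global layout. */
struct xs_state_global {
	int32_t socket_fd;
	int32_t evtchn_fd;
};

/* Copy the two preserved fds out of an opaque record payload. */
static void read_global_fds(const void *state, int *sock, int *evtfd)
{
	const struct xs_state_global *glb = state;

	*sock = glb->socket_fd;
	*evtfd = glb->evtchn_fd;
}
```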

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Paul Durrant <paul@xen.org>
---
 tools/xenstore/xenstored_control.c | 1 +
 tools/xenstore/xenstored_core.c    | 9 +++++++++
 tools/xenstore/xenstored_core.h    | 2 ++
 3 files changed, 12 insertions(+)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 129d2b44bb..f6c4ab3d8a 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -519,6 +519,7 @@ void lu_read_state(void)
 	     head = (void *)head + sizeof(*head) + head->length) {
 		switch (head->type) {
 		case XS_STATE_TYPE_GLOBAL:
+			read_state_global(ctx, head + 1);
 			break;
 		case XS_STATE_TYPE_CONN:
 			break;
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index d6f8373ee0..5922a03a98 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2451,6 +2451,15 @@ const char *dump_state_nodes(FILE *fp, const void *ctx)
 	return dump_state_node_tree(fp, path);
 }
 
+void read_state_global(const void *ctx, const void *state)
+{
+	const struct xs_state_global *glb = state;
+
+	sock = glb->socket_fd;
+
+	domain_init(glb->evtchn_fd);
+}
+
 /*
  * Local variables:
  *  mode: C
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index e40e0e6806..6c9d838f11 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -244,6 +244,8 @@ const char *dump_state_node_perms(FILE *fp, struct xs_state_node *sn,
 				  const struct xs_permissions *perms,
 				  unsigned int n_perms);
 
+void read_state_global(const void *ctx, const void *state);
+
 #endif /* _XENSTORED_CORE_H */
 
 /*
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:36:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:36:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54574.95047 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJo-0007cj-So; Tue, 15 Dec 2020 16:36:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54574.95047; Tue, 15 Dec 2020 16:36:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJo-0007cL-Nu; Tue, 15 Dec 2020 16:36:56 +0000
Received: by outflank-mailman (input) for mailman id 54574;
 Tue, 15 Dec 2020 16:36:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpDJn-000667-Hm
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:36:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c56c1a43-fb7d-47aa-aea8-6437f7017613;
 Tue, 15 Dec 2020 16:36:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A16DCB27E;
 Tue, 15 Dec 2020 16:36:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c56c1a43-fb7d-47aa-aea8-6437f7017613
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608050171; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pA1ZtU8C4Sqokfze6z3ZFczGnVFvmsTtw5IWTtBl4ns=;
	b=Lq/qCuVBxIfvQfzDAim/d09X3MSEaaev9nWlqYubadVL7JZdjMqiq0fReNRle8YjiGJpoh
	VvdWrH++ROiPDVN3y93VtIVebJEJurUxdtzj3ctGaE9oqgM7tO4+2ZZA0QzAFU/g748nxz
	kZJXo3/ULFeznovvUDx4g3ndfLOcWNo=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v10 19/25] tools/xenstore: read internal state when doing live upgrade
Date: Tue, 15 Dec 2020 17:35:57 +0100
Message-Id: <20201215163603.21700-20-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215163603.21700-1-jgross@suse.com>
References: <20201215163603.21700-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When started due to a live upgrade, read the internal state and apply
it to the database and internal structures.

Add the main control functions for that.

For now only handle the daemon case.
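The dump file lu_read_state() walks is a preamble followed by length-prefixed records, terminated by an END record. A simplified sketch of that record walk (hypothetical names; the real buffer comes from mmap() in lu_get_dump_state(), a plain array here, and the real header is xs_state_record_header):

```c
#include <stdint.h>
#include <string.h>

struct rec_head {		/* stands in for xs_state_record_header */
	uint32_t type;
	uint32_t length;	/* payload bytes following the header */
};

#define REC_END 0u

/* Count non-END records, stopping at END or the buffer's edge. */
static unsigned int count_records(const void *buf, size_t size)
{
	const char *p = buf;
	const char *base = buf;
	unsigned int n = 0;

	while ((size_t)(p - base) + sizeof(struct rec_head) <= size) {
		struct rec_head h;

		memcpy(&h, p, sizeof(h));	/* avoid unaligned access */
		if (h.type == REC_END)
			break;
		n++;
		/* Advance past header and payload to the next record. */
		p += sizeof(h) + h.length;
	}
	return n;
}
```

The bounds check against `size` matters because the state file is untrusted input at this point: a truncated or corrupt dump must not walk the iterator off the mapping.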

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Acked-by: Julien Grall <jgrall@amazon.com>
---
V4:
- directly mmap dump state file (Julien Grall)
- use syslog() instead of xprintf() (Julien Grall)

V8:
- remove state file after reading it (Julien Grall)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_control.c | 102 ++++++++++++++++++++++++++++-
 1 file changed, 101 insertions(+), 1 deletion(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index a539666410..129d2b44bb 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -62,6 +62,14 @@ struct live_update {
 
 static struct live_update *lu_status;
 
+struct lu_dump_state {
+	void *buf;
+	unsigned int size;
+#ifndef __MINIOS__
+	int fd;
+#endif
+};
+
 static int lu_destroy(void *data)
 {
 #ifdef __MINIOS__
@@ -313,6 +321,14 @@ static void lu_dump_close(FILE *fp)
 
 	fclose(fp);
 }
+
+static void lu_get_dump_state(struct lu_dump_state *state)
+{
+}
+
+static void lu_close_dump_state(struct lu_dump_state *state)
+{
+}
 #else
 static const char *lu_binary(const void *ctx, struct connection *conn,
 			     const char *filename)
@@ -368,6 +384,50 @@ static void lu_dump_close(FILE *fp)
 {
 	fclose(fp);
 }
+
+static void lu_get_dump_state(struct lu_dump_state *state)
+{
+	char *filename;
+	struct stat statbuf;
+
+	state->size = 0;
+
+	filename = talloc_asprintf(NULL, "%s/state_dump", xs_daemon_rootdir());
+	if (!filename)
+		barf("Allocation failure");
+
+	state->fd = open(filename, O_RDONLY);
+	talloc_free(filename);
+	if (state->fd < 0)
+		return;
+	if (fstat(state->fd, &statbuf) != 0)
+		goto out_close;
+	state->size = statbuf.st_size;
+
+	state->buf = mmap(NULL, state->size, PROT_READ, MAP_PRIVATE,
+			  state->fd, 0);
+	if (state->buf == MAP_FAILED) {
+		state->size = 0;
+		goto out_close;
+	}
+
+	return;
+
+ out_close:
+	close(state->fd);
+}
+
+static void lu_close_dump_state(struct lu_dump_state *state)
+{
+	char *filename;
+
+	munmap(state->buf, state->size);
+	close(state->fd);
+
+	filename = talloc_asprintf(NULL, "%s/state_dump", xs_daemon_rootdir());
+	unlink(filename);
+	talloc_free(filename);
+}
 #endif
 
 static const char *lu_check_lu_allowed(const void *ctx, bool force,
@@ -438,7 +498,47 @@ static const char *lu_dump_state(const void *ctx, struct connection *conn)
 
 void lu_read_state(void)
 {
-	xprintf("live-update: read state\n");
+	struct lu_dump_state state;
+	struct xs_state_record_header *head;
+	void *ctx = talloc_new(NULL); /* Work context for subfunctions. */
+	struct xs_state_preamble *pre;
+
+	syslog(LOG_INFO, "live-update: read state\n");
+	lu_get_dump_state(&state);
+	if (state.size == 0)
+		barf_perror("No state found after live-update");
+
+	pre = state.buf;
+	if (memcmp(pre->ident, XS_STATE_IDENT, sizeof(pre->ident)) ||
+	    pre->version != htobe32(XS_STATE_VERSION) ||
+	    pre->flags != XS_STATE_FLAGS)
+		barf("Unknown record identifier");
+	for (head = state.buf + sizeof(*pre);
+	     head->type != XS_STATE_TYPE_END &&
+		(void *)head - state.buf < state.size;
+	     head = (void *)head + sizeof(*head) + head->length) {
+		switch (head->type) {
+		case XS_STATE_TYPE_GLOBAL:
+			break;
+		case XS_STATE_TYPE_CONN:
+			break;
+		case XS_STATE_TYPE_WATCH:
+			break;
+		case XS_STATE_TYPE_TA:
+			xprintf("live-update: ignore transaction record\n");
+			break;
+		case XS_STATE_TYPE_NODE:
+			break;
+		default:
+			xprintf("live-update: unknown state record %08x\n",
+				head->type);
+			break;
+		}
+	}
+
+	lu_close_dump_state(&state);
+
+	talloc_free(ctx);
 }
 
 static const char *lu_activate_binary(const void *ctx)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:36:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:36:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54576.95060 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJq-0007gT-Et; Tue, 15 Dec 2020 16:36:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54576.95060; Tue, 15 Dec 2020 16:36:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDJp-0007f9-LE; Tue, 15 Dec 2020 16:36:57 +0000
Received: by outflank-mailman (input) for mailman id 54576;
 Tue, 15 Dec 2020 16:36:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpDJo-00066M-IH
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:36:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 857a6ad4-b29c-4827-a73a-71328af6fe5c;
 Tue, 15 Dec 2020 16:36:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6EE9DB27D;
 Tue, 15 Dec 2020 16:36:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 857a6ad4-b29c-4827-a73a-71328af6fe5c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608050171; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=R/YgePDHO2chAetPsjOi7ZWAx5SNUZ/vW9zQWAbOTdQ=;
	b=l6TmpsIfi+RAU/bRj31ztEcxBtN0mSauw50kqJj9NUxt1gBX8mrlwp1nsRe0tZpLgfxWah
	jLp2DbXnxoP7YyOe3xlSMaVZYM/AyME+1Mcvykid4KF9h7WPugbCSm9rBXbUBquMkEYTk+
	n4jzfo1ruRxhLs+82NSiUw9KEaKqCNo=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v10 18/25] tools/xenstore: evaluate the live update flag when starting
Date: Tue, 15 Dec 2020 17:35:56 +0100
Message-Id: <20201215163603.21700-19-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215163603.21700-1-jgross@suse.com>
References: <20201215163603.21700-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In the live update case several of Xenstore's initialization steps must
be omitted or modified. Add the proper handling for that.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V4:
- set xprintf = trace in daemon case (Julien Grall)
- only update /tool/xenstored node contents

V7:
- some restructuring to enable keeping event channel fd

V8:
- pass evtfd to domain_init() as parameter (Julien Grall)
- call dom0_init() from main()

V10:
- remove support for remembering binary name (Andrew Cooper)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_control.c |  5 ++++
 tools/xenstore/xenstored_control.h |  1 +
 tools/xenstore/xenstored_core.c    | 43 +++++++++++++++++++++---------
 tools/xenstore/xenstored_domain.c  | 26 +++++++++---------
 tools/xenstore/xenstored_domain.h  |  3 ++-
 tools/xenstore/xenstored_posix.c   |  1 -
 6 files changed, 50 insertions(+), 29 deletions(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 437276de8d..a539666410 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -436,6 +436,11 @@ static const char *lu_dump_state(const void *ctx, struct connection *conn)
 	return ret;
 }
 
+void lu_read_state(void)
+{
+	xprintf("live-update: read state\n");
+}
+
 static const char *lu_activate_binary(const void *ctx)
 {
 	return "Not yet implemented.";
diff --git a/tools/xenstore/xenstored_control.h b/tools/xenstore/xenstored_control.h
index 207e0a6fa3..aac61f0590 100644
--- a/tools/xenstore/xenstored_control.h
+++ b/tools/xenstore/xenstored_control.h
@@ -17,3 +17,4 @@
 */
 
 int do_control(struct connection *conn, struct buffered_data *in);
+void lu_read_state(void);
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 0f4e10815a..d6f8373ee0 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -1637,9 +1637,10 @@ static void tdb_logger(TDB_CONTEXT *tdb, int level, const char * fmt, ...)
 	}
 }
 
-static void setup_structure(void)
+static void setup_structure(bool live_update)
 {
 	char *tdbname;
+
 	tdbname = talloc_strdup(talloc_autofree_context(), xs_daemon_tdb());
 	if (!tdbname)
 		barf_perror("Could not create tdbname");
@@ -1653,14 +1654,16 @@ static void setup_structure(void)
 	if (!tdb_ctx)
 		barf_perror("Could not create tdb file %s", tdbname);
 
-	manual_node("/", "tool");
-	manual_node("/tool", "xenstored");
-	manual_node("/tool/xenstored", NULL);
+	if (live_update)
+		manual_node("/", NULL);
+	else {
+		manual_node("/", "tool");
+		manual_node("/tool", "xenstored");
+	}
 
 	check_store();
 }
 
-
 static unsigned int hash_from_key_fn(void *k)
 {
 	char *str = k;
@@ -2066,7 +2069,8 @@ int main(int argc, char *argv[])
 
 	if (dofork) {
 		openlog("xenstored", 0, LOG_DAEMON);
-		daemonize();
+		if (!live_update)
+			daemonize();
 	}
 	if (pidfile)
 		write_pidfile(pidfile);
@@ -2081,17 +2085,20 @@ int main(int argc, char *argv[])
 	talloc_enable_null_tracking();
 
 #ifndef NO_SOCKETS
-	init_sockets();
+	if (!live_update)
+		init_sockets();
 #endif
 
 	init_pipe(reopen_log_pipe);
 
 	/* Setup the database */
-	setup_structure();
+	setup_structure(live_update);
 
 	/* Listen to hypervisor. */
-	if (!no_domain_init)
-		domain_init();
+	if (!no_domain_init && !live_update) {
+		domain_init(-1);
+		dom0_init();
+	}
 
 	if (outputpid) {
 		printf("%ld\n", (long)getpid());
@@ -2099,13 +2106,21 @@ int main(int argc, char *argv[])
 	}
 
 	/* redirect to /dev/null now we're ready to accept connections */
-	if (dofork)
+	if (dofork && !live_update)
 		finish_daemonize();
+#ifndef __MINIOS__
+	if (dofork)
+		xprintf = trace;
+#endif
 
 	signal(SIGHUP, trigger_reopen_log);
 	if (tracefile)
 		tracefile = talloc_strdup(NULL, tracefile);
 
+	/* Read state in case of live update. */
+	if (live_update)
+		lu_read_state();
+
 	/* Get ready to listen to the tools. */
 	initialize_fds(&sock_pollfd_idx, &timeout);
 
@@ -2113,8 +2128,10 @@ int main(int argc, char *argv[])
 	xenbus_notify_running();
 
 #if defined(XEN_SYSTEMD_ENABLED)
-	sd_notify(1, "READY=1");
-	fprintf(stderr, SD_NOTICE "xenstored is ready\n");
+	if (!live_update) {
+		sd_notify(1, "READY=1");
+		fprintf(stderr, SD_NOTICE "xenstored is ready\n");
+	}
 #endif
 
 	/* Main loop. */
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 71b078caf3..94dd501a3b 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -706,29 +706,23 @@ bool check_perms_special(const char *name, struct connection *conn)
 	return perm_for_conn(conn, p) & XS_PERM_READ;
 }
 
-static int dom0_init(void) 
-{ 
+void dom0_init(void)
+{
 	evtchn_port_t port;
 	struct domain *dom0;
 
 	port = xenbus_evtchn();
 	if (port == -1)
-		return -1;
+		barf_perror("Failed to initialize dom0 port");
 
 	dom0 = introduce_domain(NULL, xenbus_master_domid(), port);
 	if (!dom0)
-		return -1;
+		barf_perror("Failed to initialize dom0");
 
 	xenevtchn_notify(xce_handle, dom0->port);
-
-	if (set_dom_perms_default(&dom_release_perms) ||
-	    set_dom_perms_default(&dom_introduce_perms))
-		return -1;
-
-	return 0; 
 }
 
-void domain_init(void)
+void domain_init(int evtfd)
 {
 	int rc;
 
@@ -758,13 +752,17 @@ void domain_init(void)
 
 	talloc_set_destructor(xgt_handle, close_xgt_handle);
 
-	xce_handle = xenevtchn_open(NULL, XENEVTCHN_NO_CLOEXEC);
+	if (evtfd < 0)
+		xce_handle = xenevtchn_open(NULL, XENEVTCHN_NO_CLOEXEC);
+	else
+		xce_handle = xenevtchn_open_fd(NULL, evtfd, 0);
 
 	if (xce_handle == NULL)
 		barf_perror("Failed to open evtchn device");
 
-	if (dom0_init() != 0) 
-		barf_perror("Failed to initialize dom0 state"); 
+	if (set_dom_perms_default(&dom_release_perms) ||
+	    set_dom_perms_default(&dom_introduce_perms))
+		barf_perror("Failed to set special permissions");
 
 	if ((rc = xenevtchn_bind_virq(xce_handle, VIRQ_DOM_EXC)) == -1)
 		barf_perror("Failed to bind to domain exception virq port");
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 413b974375..b20269b038 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -42,7 +42,8 @@ int do_get_domain_path(struct connection *conn, struct buffered_data *in);
 /* Allow guest to reset all watches */
 int do_reset_watches(struct connection *conn, struct buffered_data *in);
 
-void domain_init(void);
+void domain_init(int evtfd);
+void dom0_init(void);
 
 /* Returns the implicit path of a connection (only domains have this) */
 const char *get_implicit_path(const struct connection *conn);
diff --git a/tools/xenstore/xenstored_posix.c b/tools/xenstore/xenstored_posix.c
index ae3e63e07f..48c37ffe3e 100644
--- a/tools/xenstore/xenstored_posix.c
+++ b/tools/xenstore/xenstored_posix.c
@@ -85,7 +85,6 @@ void finish_daemonize(void)
 	dup2(devnull, STDOUT_FILENO);
 	dup2(devnull, STDERR_FILENO);
 	close(devnull);
-	xprintf = trace;
 }
 
 void init_pipe(int reopen_log_pipe[2])
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:42:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:42:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54626.95092 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDPS-00011n-0t; Tue, 15 Dec 2020 16:42:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54626.95092; Tue, 15 Dec 2020 16:42:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDPR-00011g-Ta; Tue, 15 Dec 2020 16:42:45 +0000
Received: by outflank-mailman (input) for mailman id 54626;
 Tue, 15 Dec 2020 16:42:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpDJy-00066M-J0
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:37:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c9851a69-a357-4887-a899-5172ee9468ec;
 Tue, 15 Dec 2020 16:36:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DE2F1B277;
 Tue, 15 Dec 2020 16:36:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c9851a69-a357-4887-a899-5172ee9468ec
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608050171; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jQcBPeZHc8gVTXU4ZNxERg9iWNB1Iu03Ik/xUl3PHFc=;
	b=s/yFGpZkTLVuYZy6De14h7kq31qqsqje40DCH3AGZuRI6v9kc+TpuY53w6XB3TntxrTITI
	qm7xJ4XXT0eXDwJBFTXVDsku1va2lF7GYUeG6cuyMAySjkv66jVX68b4mX5O49jc7gic3P
	5Ng44pt0jGSbTb/WBaLjrpiBGvv47xQ=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v10 15/25] tools/xenstore: dump the xenstore state for live update
Date: Tue, 15 Dec 2020 17:35:53 +0100
Message-Id: <20201215163603.21700-16-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215163603.21700-1-jgross@suse.com>
References: <20201215163603.21700-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Dump the complete Xenstore state to a file (daemon case) or to memory
(stubdom case).

As the exact size of the needed area is not known in advance, the
stubdom case uses a rather large anonymous mapping, which consumes only
virtual address space until it is accessed. Since the area is written
sequentially this is fine. As the initial size, twice the amount of
memory currently allocated via talloc() is chosen, which should be more
than enough.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Acked-by: Julien Grall <jgrall@amazon.com>
---
V4:
- directly write to state file in daemon case
- add pending read data (Julien Grall)

V5:
- fix 0 length buffered data body (Julien Grall)
- simplify dump_state_buffered_data() (Paul Durrant)
- add data_resp_len field handling
- add comments (Paul Durrant)
- move dump_state_align() call out of dump_state_buffered_data()
  (Paul Durrant)
- move constant assignments out of loops (Paul Durrant)
- use set_tdb_key() (Paul Durrant)

V6:
- rename "first" to "Partial" (Paul Durrant)
- make sure data_resp_len is written (Paul Durrant)
- don't leak node memory in dump_state_nodes()
- add permission flag byte handling (Julien Grall)
- drop global state buffer (Julien Grall)
- add and correct comments (Julien Grall)
- add const (Julien Grall)
- add path buffer overrun check (Julien Grall)
- move get_watch_path() from later patch, correct dump_state_watches()
  to use it (Julien Grall)

V7:
- add glb.evtchn_fd, switch evtchn to local port
- remove definition of ROUNDUP() from utils.h due to rebase
- create state file with 0600 permissions

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/utils.c             |  17 +++
 tools/xenstore/utils.h             |   6 +
 tools/xenstore/xenstored_control.c | 102 +++++++++++++-
 tools/xenstore/xenstored_core.c    | 213 +++++++++++++++++++++++++++++
 tools/xenstore/xenstored_core.h    |  12 ++
 tools/xenstore/xenstored_domain.c  | 105 ++++++++++++++
 tools/xenstore/xenstored_domain.h  |   3 +
 tools/xenstore/xenstored_watch.c   |  57 +++++++-
 tools/xenstore/xenstored_watch.h   |   3 +
 9 files changed, 512 insertions(+), 6 deletions(-)

diff --git a/tools/xenstore/utils.c b/tools/xenstore/utils.c
index 633ce3b4fc..0d80cb6de8 100644
--- a/tools/xenstore/utils.c
+++ b/tools/xenstore/utils.c
@@ -62,3 +62,20 @@ void barf_perror(const char *fmt, ...)
 	}
 	exit(1);
 }
+
+const char *dump_state_align(FILE *fp)
+{
+	long len;
+	static char nul[8] = {};
+
+	len = ftell(fp);
+	if (len < 0)
+		return "Dump state align error";
+	len &= 7;
+	if (!len)
+		return NULL;
+
+	if (fwrite(nul, 8 - len, 1, fp) != 1)
+		return "Dump state align error";
+	return NULL;
+}
diff --git a/tools/xenstore/utils.h b/tools/xenstore/utils.h
index 6a1b5de9bd..df1cb9a3ba 100644
--- a/tools/xenstore/utils.h
+++ b/tools/xenstore/utils.h
@@ -3,6 +3,7 @@
 #include <stdbool.h>
 #include <string.h>
 #include <stdint.h>
+#include <stdio.h>
 
 #include <xen-tools/libs.h>
 
@@ -21,6 +22,11 @@ static inline bool strends(const char *a, const char *b)
 	return streq(a + strlen(a) - strlen(b), b);
 }
 
+/*
+ * Write NUL bytes for aligning state data to 8 bytes.
+ */
+const char *dump_state_align(FILE *fp);
+
 void barf(const char *fmt, ...) __attribute__((noreturn));
 void barf_perror(const char *fmt, ...) __attribute__((noreturn));
 
diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index af64a9a2d4..38550a559e 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -25,12 +25,21 @@ Interactive commands for Xen Store Daemon.
 #include <time.h>
 #include <sys/types.h>
 #include <sys/stat.h>
+#include <sys/mman.h>
+#include <fcntl.h>
 #include <unistd.h>
+#include <xenctrl.h>
 
 #include "utils.h"
 #include "talloc.h"
 #include "xenstored_core.h"
 #include "xenstored_control.h"
+#include "xenstored_domain.h"
+
+/* Mini-OS only knows about MAP_ANON. */
+#ifndef MAP_ANONYMOUS
+#define MAP_ANONYMOUS MAP_ANON
+#endif
 
 struct live_update {
 	/* For verification the correct connection is acting. */
@@ -40,6 +49,9 @@ struct live_update {
 	void *kernel;
 	unsigned int kernel_size;
 	unsigned int kernel_off;
+
+	void *dump_state;
+	unsigned long dump_size;
 #else
 	char *filename;
 #endif
@@ -51,6 +63,10 @@ static struct live_update *lu_status;
 
 static int lu_destroy(void *data)
 {
+#ifdef __MINIOS__
+	if (lu_status->dump_state)
+		munmap(lu_status->dump_state, lu_status->dump_size);
+#endif
 	lu_status = NULL;
 
 	return 0;
@@ -269,6 +285,31 @@ static const char *lu_arch(const void *ctx, struct connection *conn,
 	errno = EINVAL;
 	return NULL;
 }
+
+static FILE *lu_dump_open(const void *ctx)
+{
+	lu_status->dump_size = ROUNDUP(talloc_total_size(NULL) * 2,
+				       XC_PAGE_SHIFT);
+	lu_status->dump_state = mmap(NULL, lu_status->dump_size,
+				     PROT_READ | PROT_WRITE,
+				     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+	if (lu_status->dump_state == MAP_FAILED)
+		return NULL;
+
+	return fmemopen(lu_status->dump_state, lu_status->dump_size, "w");
+}
+
+static void lu_dump_close(FILE *fp)
+{
+	size_t size;
+
+	size = ftell(fp);
+	size = ROUNDUP(size, XC_PAGE_SHIFT);
+	munmap(lu_status->dump_state + size, lu_status->dump_size - size);
+	lu_status->dump_size = size;
+
+	fclose(fp);
+}
 #else
 static const char *lu_binary(const void *ctx, struct connection *conn,
 			     const char *filename)
@@ -303,6 +344,27 @@ static const char *lu_arch(const void *ctx, struct connection *conn,
 	errno = EINVAL;
 	return NULL;
 }
+
+static FILE *lu_dump_open(const void *ctx)
+{
+	char *filename;
+	int fd;
+
+	filename = talloc_asprintf(ctx, "%s/state_dump", xs_daemon_rootdir());
+	if (!filename)
+		return NULL;
+
+	fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
+	if (fd < 0)
+		return NULL;
+
+	return fdopen(fd, "w");
+}
+
+static void lu_dump_close(FILE *fp)
+{
+	fclose(fp);
+}
 #endif
 
 static const char *lu_check_lu_allowed(const void *ctx, bool force,
@@ -330,7 +392,45 @@ static const char *lu_check_lu_allowed(const void *ctx, bool force,
 
 static const char *lu_dump_state(const void *ctx, struct connection *conn)
 {
-	return NULL;
+	FILE *fp;
+	const char *ret;
+	struct xs_state_record_header end;
+	struct xs_state_preamble pre;
+
+	fp = lu_dump_open(ctx);
+	if (!fp)
+		return "Dump state open error";
+
+	memcpy(pre.ident, XS_STATE_IDENT, sizeof(pre.ident));
+	pre.version = htobe32(XS_STATE_VERSION);
+	pre.flags = XS_STATE_FLAGS;
+	if (fwrite(&pre, sizeof(pre), 1, fp) != 1) {
+		ret = "Dump write error";
+		goto out;
+	}
+
+	ret = dump_state_global(fp);
+	if (ret)
+		goto out;
+	ret = dump_state_connections(fp, conn);
+	if (ret)
+		goto out;
+	ret = dump_state_special_nodes(fp);
+	if (ret)
+		goto out;
+	ret = dump_state_nodes(fp, ctx);
+	if (ret)
+		goto out;
+
+	end.type = XS_STATE_TYPE_END;
+	end.length = 0;
+	if (fwrite(&end, sizeof(end), 1, fp) != 1)
+		ret = "Dump write error";
+
+ out:
+	lu_dump_close(fp);
+
+	return ret;
 }
 
 static const char *lu_activate_binary(const void *ctx)
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 0dddf24327..064109c393 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2219,6 +2219,219 @@ int main(int argc, char *argv[])
 	}
 }
 
+const char *dump_state_global(FILE *fp)
+{
+	struct xs_state_record_header head;
+	struct xs_state_global glb;
+
+	head.type = XS_STATE_TYPE_GLOBAL;
+	head.length = sizeof(glb);
+	if (fwrite(&head, sizeof(head), 1, fp) != 1)
+		return "Dump global state error";
+	glb.socket_fd = sock;
+	glb.evtchn_fd = xenevtchn_fd(xce_handle);
+	if (fwrite(&glb, sizeof(glb), 1, fp) != 1)
+		return "Dump global state error";
+
+	return NULL;
+}
+
+/* Called twice: first with fp == NULL to get length, then for writing data. */
+const char *dump_state_buffered_data(FILE *fp, const struct connection *c,
+				     const struct connection *conn,
+				     struct xs_state_connection *sc)
+{
+	unsigned int len = 0, used;
+	struct buffered_data *out, *in = c->in;
+	bool partial = true;
+
+	if (in && c != conn) {
+		len = in->inhdr ? in->used : sizeof(in->hdr);
+		if (fp && fwrite(&in->hdr, len, 1, fp) != 1)
+			return "Dump read data error";
+		if (!in->inhdr && in->used) {
+			len += in->used;
+			if (fp && fwrite(in->buffer, in->used, 1, fp) != 1)
+				return "Dump read data error";
+		}
+	}
+
+	if (sc) {
+		sc->data_in_len = len;
+		sc->data_resp_len = 0;
+	}
+
+	len = 0;
+
+	list_for_each_entry(out, &c->out_list, list) {
+		used = out->used;
+		if (out->inhdr) {
+			if (!used)
+				partial = false;
+			if (fp && fwrite(out->hdr.raw + out->used,
+				  sizeof(out->hdr) - out->used, 1, fp) != 1)
+				return "Dump buffered data error";
+			len += sizeof(out->hdr) - out->used;
+			used = 0;
+		}
+		if (fp && out->hdr.msg.len &&
+		    fwrite(out->buffer + used, out->hdr.msg.len - used,
+			   1, fp) != 1)
+			return "Dump buffered data error";
+		len += out->hdr.msg.len - used;
+		if (partial && sc)
+			sc->data_resp_len = len;
+		partial = false;
+	}
+
+	/* Add "OK" for live-update command. */
+	if (c == conn) {
+		struct xsd_sockmsg msg = conn->in->hdr.msg;
+
+		msg.len = sizeof("OK");
+		if (fp && fwrite(&msg, sizeof(msg), 1, fp) != 1)
+			return "Dump buffered data error";
+		len += sizeof(msg);
+		if (fp && fwrite("OK", msg.len, 1, fp) != 1)
+
+			return "Dump buffered data error";
+		len += msg.len;
+	}
+
+	if (sc)
+		sc->data_out_len = len;
+
+	return NULL;
+}
+
+const char *dump_state_node_perms(FILE *fp, struct xs_state_node *sn,
+				  const struct xs_permissions *perms,
+				  unsigned int n_perms)
+{
+	unsigned int p;
+
+	for (p = 0; p < n_perms; p++) {
+		switch ((int)perms[p].perms & ~XS_PERM_IGNORE) {
+		case XS_PERM_READ:
+			sn->perms[p].access = XS_STATE_NODE_PERM_READ;
+			break;
+		case XS_PERM_WRITE:
+			sn->perms[p].access = XS_STATE_NODE_PERM_WRITE;
+			break;
+		case XS_PERM_READ | XS_PERM_WRITE:
+			sn->perms[p].access = XS_STATE_NODE_PERM_BOTH;
+			break;
+		default:
+			sn->perms[p].access = XS_STATE_NODE_PERM_NONE;
+			break;
+		}
+		sn->perms[p].flags = (perms[p].perms & XS_PERM_IGNORE)
+				     ? XS_STATE_NODE_PERM_IGNORE : 0;
+		sn->perms[p].domid = perms[p].id;
+	}
+
+	if (fwrite(sn->perms, sizeof(*sn->perms), n_perms, fp) != n_perms)
+		return "Dump node permissions error";
+
+	return NULL;
+}
+
+static const char *dump_state_node_tree(FILE *fp, char *path)
+{
+	unsigned int pathlen, childlen, p = 0;
+	struct xs_state_record_header head;
+	struct xs_state_node sn;
+	TDB_DATA key, data;
+	const struct xs_tdb_record_hdr *hdr;
+	const char *child;
+	const char *ret;
+
+	pathlen = strlen(path) + 1;
+
+	set_tdb_key(path, &key);
+	data = tdb_fetch(tdb_ctx, key);
+	if (data.dptr == NULL)
+		return "Error reading node";
+
+	/* Clean up in case of failure. */
+	talloc_steal(path, data.dptr);
+
+	hdr = (void *)data.dptr;
+
+	head.type = XS_STATE_TYPE_NODE;
+	head.length = sizeof(sn);
+	sn.conn_id = 0;
+	sn.ta_id = 0;
+	sn.ta_access = 0;
+	sn.perm_n = hdr->num_perms;
+	sn.path_len = pathlen;
+	sn.data_len = hdr->datalen;
+	head.length += hdr->num_perms * sizeof(*sn.perms);
+	head.length += pathlen;
+	head.length += hdr->datalen;
+	head.length = ROUNDUP(head.length, 3);
+
+	if (fwrite(&head, sizeof(head), 1, fp) != 1)
+		return "Dump node state error";
+	if (fwrite(&sn, sizeof(sn), 1, fp) != 1)
+		return "Dump node state error";
+
+	ret = dump_state_node_perms(fp, &sn, hdr->perms, hdr->num_perms);
+	if (ret)
+		return ret;
+
+	if (fwrite(path, pathlen, 1, fp) != 1)
+		return "Dump node path error";
+	if (hdr->datalen &&
+	    fwrite(hdr->perms + hdr->num_perms, hdr->datalen, 1, fp) != 1)
+		return "Dump node data error";
+
+	ret = dump_state_align(fp);
+	if (ret)
+		return ret;
+
+	child = (char *)(hdr->perms + hdr->num_perms) + hdr->datalen;
+
+	/*
+	 * Use path for constructing children paths.
+	 * As we don't write out nodes without having written their parent
+	 * already we will never clobber a part of the path we'll need later.
+	 */
+	pathlen--;
+	if (path[pathlen - 1] != '/') {
+		path[pathlen] = '/';
+		pathlen++;
+	}
+	while (p < hdr->childlen) {
+		childlen = strlen(child) + 1;
+		if (pathlen + childlen > XENSTORE_ABS_PATH_MAX)
+			return "Dump node path length error";
+		strcpy(path + pathlen, child);
+		ret = dump_state_node_tree(fp, path);
+		if (ret)
+			return ret;
+		p += childlen;
+		child += childlen;
+	}
+
+	talloc_free(data.dptr);
+
+	return NULL;
+}
+
+const char *dump_state_nodes(FILE *fp, const void *ctx)
+{
+	char *path;
+
+	path = talloc_size(ctx, XENSTORE_ABS_PATH_MAX);
+	if (!path)
+		return "Path buffer allocation error";
+
+	strcpy(path, "/");
+
+	return dump_state_node_tree(fp, path);
+}
+
 /*
  * Local variables:
  *  mode: C
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index a009b182fd..32b306e161 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -30,6 +30,7 @@
 #include <errno.h>
 
 #include "xenstore_lib.h"
+#include "xenstore_state.h"
 #include "list.h"
 #include "tdb.h"
 #include "hashtable.h"
@@ -41,6 +42,8 @@ typedef int32_t wrl_creditt;
 #define WRL_CREDIT_MAX (1000*1000*1000)
 /* ^ satisfies non-overflow condition for wrl_xfer_credit */
 
+struct xs_state_connection;
+
 struct buffered_data
 {
 	struct list_head list;
@@ -224,6 +227,15 @@ int remember_string(struct hashtable *hash, const char *str);
 
 void set_tdb_key(const char *name, TDB_DATA *key);
 
+const char *dump_state_global(FILE *fp);
+const char *dump_state_buffered_data(FILE *fp, const struct connection *c,
+				     const struct connection *conn,
+				     struct xs_state_connection *sc);
+const char *dump_state_nodes(FILE *fp, const void *ctx);
+const char *dump_state_node_perms(FILE *fp, struct xs_state_node *sn,
+				  const struct xs_permissions *perms,
+				  unsigned int n_perms);
+
 #endif /* _XENSTORED_CORE_H */
 
 /*
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index ed8e83b06b..919a4d98cf 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -1143,6 +1143,111 @@ void wrl_apply_debit_trans_commit(struct connection *conn)
 	wrl_apply_debit_actual(conn->domain);
 }
 
+const char *dump_state_connections(FILE *fp, struct connection *conn)
+{
+	const char *ret = NULL;
+	unsigned int conn_id = 1;
+	struct xs_state_connection sc;
+	struct xs_state_record_header head;
+	struct connection *c;
+
+	list_for_each_entry(c, &connections, list) {
+		head.type = XS_STATE_TYPE_CONN;
+		head.length = sizeof(sc);
+
+		sc.conn_id = conn_id++;
+		sc.pad = 0;
+		memset(&sc.spec, 0, sizeof(sc.spec));
+		if (c->domain) {
+			sc.conn_type = XS_STATE_CONN_TYPE_RING;
+			sc.spec.ring.domid = c->id;
+			sc.spec.ring.tdomid = c->target ? c->target->id
+						: DOMID_INVALID;
+			sc.spec.ring.evtchn = c->domain->port;
+		} else {
+			sc.conn_type = XS_STATE_CONN_TYPE_SOCKET;
+			sc.spec.socket_fd = c->fd;
+		}
+
+		ret = dump_state_buffered_data(NULL, c, conn, &sc);
+		if (ret)
+			return ret;
+		head.length += sc.data_in_len + sc.data_out_len;
+		head.length = ROUNDUP(head.length, 3);
+		if (fwrite(&head, sizeof(head), 1, fp) != 1)
+			return "Dump connection state error";
+		if (fwrite(&sc, offsetof(struct xs_state_connection, data),
+			   1, fp) != 1)
+			return "Dump connection state error";
+		ret = dump_state_buffered_data(fp, c, conn, NULL);
+		if (ret)
+			return ret;
+		ret = dump_state_align(fp);
+		if (ret)
+			return ret;
+
+		ret = dump_state_watches(fp, c, sc.conn_id);
+		if (ret)
+			return ret;
+	}
+
+	return ret;
+}
+
+static const char *dump_state_special_node(FILE *fp, const char *name,
+					   const struct node_perms *perms)
+{
+	struct xs_state_record_header head;
+	struct xs_state_node sn;
+	unsigned int pathlen;
+	const char *ret;
+
+	pathlen = strlen(name) + 1;
+
+	head.type = XS_STATE_TYPE_NODE;
+	head.length = sizeof(sn);
+
+	sn.conn_id = 0;
+	sn.ta_id = 0;
+	sn.ta_access = 0;
+	sn.perm_n = perms->num;
+	sn.path_len = pathlen;
+	sn.data_len = 0;
+	head.length += perms->num * sizeof(*sn.perms);
+	head.length += pathlen;
+	head.length = ROUNDUP(head.length, 3);
+	if (fwrite(&head, sizeof(head), 1, fp) != 1)
+		return "Dump special node error";
+	if (fwrite(&sn, sizeof(sn), 1, fp) != 1)
+		return "Dump special node error";
+
+	ret = dump_state_node_perms(fp, &sn, perms->p, perms->num);
+	if (ret)
+		return ret;
+
+	if (fwrite(name, pathlen, 1, fp) != 1)
+		return "Dump special node path error";
+
+	ret = dump_state_align(fp);
+
+	return ret;
+}
+
+const char *dump_state_special_nodes(FILE *fp)
+{
+	const char *ret;
+
+	ret = dump_state_special_node(fp, "@releaseDomain",
+				      &dom_release_perms);
+	if (ret)
+		return ret;
+
+	ret = dump_state_special_node(fp, "@introduceDomain",
+				      &dom_introduce_perms);
+
+	return ret;
+}
+
 /*
  * Local variables:
  *  mode: C
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 66e0a12654..413b974375 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -97,4 +97,7 @@ void wrl_log_periodic(struct wrl_timestampt now);
 void wrl_apply_debit_direct(struct connection *conn);
 void wrl_apply_debit_trans_commit(struct connection *conn);
 
+const char *dump_state_connections(FILE *fp, struct connection *conn);
+const char *dump_state_special_nodes(FILE *fp);
+
 #endif /* _XENSTORED_DOMAIN_H */
diff --git a/tools/xenstore/xenstored_watch.c b/tools/xenstore/xenstored_watch.c
index 9ff20690c0..9248f08bd9 100644
--- a/tools/xenstore/xenstored_watch.c
+++ b/tools/xenstore/xenstored_watch.c
@@ -72,6 +72,19 @@ static bool is_child(const char *child, const char *parent)
 	return child[len] == '/' || child[len] == '\0';
 }
 
+static const char *get_watch_path(const struct watch *watch, const char *name)
+{
+	const char *path = name;
+
+	if (watch->relative_path) {
+		path += strlen(watch->relative_path);
+		if (*path == '/') /* Could be "" */
+			path++;
+	}
+
+	return path;
+}
+
 /*
  * Send a watch event.
  * Temporary memory allocations are done with ctx.
@@ -85,11 +98,7 @@ static void add_event(struct connection *conn,
 	unsigned int len;
 	char *data;
 
-	if (watch->relative_path) {
-		name += strlen(watch->relative_path);
-		if (*name == '/') /* Could be "" */
-			name++;
-	}
+	name = get_watch_path(watch, name);
 
 	len = strlen(name) + 1 + strlen(watch->token) + 1;
 	/* Don't try to send over-long events. */
@@ -291,6 +300,44 @@ void conn_delete_all_watches(struct connection *conn)
 	}
 }
 
+const char *dump_state_watches(FILE *fp, struct connection *conn,
+			       unsigned int conn_id)
+{
+	const char *ret = NULL;
+	struct watch *watch;
+	struct xs_state_watch sw;
+	struct xs_state_record_header head;
+	const char *path;
+
+	head.type = XS_STATE_TYPE_WATCH;
+
+	list_for_each_entry(watch, &conn->watches, list) {
+		head.length = sizeof(sw);
+
+		sw.conn_id = conn_id;
+		path = get_watch_path(watch, watch->node);
+		sw.path_length = strlen(path) + 1;
+		sw.token_length = strlen(watch->token) + 1;
+		head.length += sw.path_length + sw.token_length;
+		head.length = ROUNDUP(head.length, 3);
+		if (fwrite(&head, sizeof(head), 1, fp) != 1)
+			return "Dump watch state error";
+		if (fwrite(&sw, sizeof(sw), 1, fp) != 1)
+			return "Dump watch state error";
+
+		if (fwrite(path, sw.path_length, 1, fp) != 1)
+			return "Dump watch path error";
+		if (fwrite(watch->token, sw.token_length, 1, fp) != 1)
+			return "Dump watch token error";
+
+		ret = dump_state_align(fp);
+		if (ret)
+			return ret;
+	}
+
+	return ret;
+}
+
 /*
  * Local variables:
  *  mode: C
diff --git a/tools/xenstore/xenstored_watch.h b/tools/xenstore/xenstored_watch.h
index 03094374f3..3d81645f45 100644
--- a/tools/xenstore/xenstored_watch.h
+++ b/tools/xenstore/xenstored_watch.h
@@ -30,4 +30,7 @@ void fire_watches(struct connection *conn, const void *tmp, const char *name,
 
 void conn_delete_all_watches(struct connection *conn);
 
+const char *dump_state_watches(FILE *fp, struct connection *conn,
+			       unsigned int conn_id);
+
 #endif /* _XENSTORED_WATCH_H */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:43:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:43:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54634.95104 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDQA-00019v-G3; Tue, 15 Dec 2020 16:43:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54634.95104; Tue, 15 Dec 2020 16:43:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDQA-00019o-Br; Tue, 15 Dec 2020 16:43:30 +0000
Received: by outflank-mailman (input) for mailman id 54634;
 Tue, 15 Dec 2020 16:43:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpDJs-000667-I3
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:37:00 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 11243044-b289-43c5-9894-68c2e92a0253;
 Tue, 15 Dec 2020 16:36:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 424A5B281;
 Tue, 15 Dec 2020 16:36:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 11243044-b289-43c5-9894-68c2e92a0253
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608050172; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=HtVxa77CLyXcQ6lxzjJcuCpBF5I8/PiHNgqqLa9tyHM=;
	b=TK0PlX6y3Jkvu5emSXlbw4TpyW/RtmzHmACAG7DKRnhWU9Th2Eb6jQSqbEDW/RXM6LqQFh
	wiklHF52f18E4hbMT9qV0h1E5Lveao0pCsLRj0m+HTU3skXkq1mU3GyHM/bHsR2mZQgYkg
	1VBkskMDDnkSXdDifD3Op/QElt8BXeo=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v10 22/25] tools/xenstore: add read node state for live update
Date: Tue, 15 Dec 2020 17:36:00 +0100
Message-Id: <20201215163603.21700-23-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215163603.21700-1-jgross@suse.com>
References: <20201215163603.21700-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the needed functions for reading node state for live update.

This requires some refactoring of current node handling in Xenstore in
order to avoid repeating the same code patterns multiple times.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Acked-by: Julien Grall <jgrall@amazon.com>
---
V4:
- drop local node handling (Julien Grall)

V5:
- use set_tdb_key (Paul Durrant)

V6:
- add permission flag handling (Julien Grall)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_control.c |   1 +
 tools/xenstore/xenstored_core.c    | 105 ++++++++++++++++++++++++++---
 tools/xenstore/xenstored_core.h    |   1 +
 3 files changed, 96 insertions(+), 11 deletions(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 4bf075ad79..a978ccf17e 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -530,6 +530,7 @@ void lu_read_state(void)
 			xprintf("live-update: ignore transaction record\n");
 			break;
 		case XS_STATE_TYPE_NODE:
+			read_state_node(ctx, head + 1);
 			break;
 		default:
 			xprintf("live-update: unknown state record %08x\n",
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 2ad1cc8d44..649dfb534a 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -930,13 +930,30 @@ static char *basename(const char *name)
 	return strrchr(name, '/') + 1;
 }
 
-static struct node *construct_node(struct connection *conn, const void *ctx,
-				   const char *name)
+static int add_child(const void *ctx, struct node *parent, const char *name)
 {
 	const char *base;
 	unsigned int baselen;
+	char *children;
+
+	base = basename(name);
+	baselen = strlen(base) + 1;
+	children = talloc_array(ctx, char, parent->childlen + baselen);
+	if (!children)
+		return ENOMEM;
+	memcpy(children, parent->children, parent->childlen);
+	memcpy(children + parent->childlen, base, baselen);
+	parent->children = children;
+	parent->childlen += baselen;
+
+	return 0;
+}
+
+static struct node *construct_node(struct connection *conn, const void *ctx,
+				   const char *name)
+{
 	struct node *parent, *node;
-	char *children, *parentname = get_parent(ctx, name);
+	char *parentname = get_parent(ctx, name);
 
 	if (!parentname)
 		return NULL;
@@ -949,15 +966,8 @@ static struct node *construct_node(struct connection *conn, const void *ctx,
 		return NULL;
 
 	/* Add child to parent. */
-	base = basename(name);
-	baselen = strlen(base) + 1;
-	children = talloc_array(ctx, char, parent->childlen + baselen);
-	if (!children)
+	if (add_child(ctx, parent, name))
 		goto nomem;
-	memcpy(children, parent->children, parent->childlen);
-	memcpy(children + parent->childlen, base, baselen);
-	parent->children = children;
-	parent->childlen += baselen;
 
 	/* Allocate node */
 	node = talloc(ctx, struct node);
@@ -2558,6 +2568,79 @@ void read_state_buffered_data(const void *ctx, struct connection *conn,
 	}
 }
 
+void read_state_node(const void *ctx, const void *state)
+{
+	const struct xs_state_node *sn = state;
+	struct node *node, *parent;
+	TDB_DATA key;
+	char *name, *parentname;
+	unsigned int i;
+	struct connection conn = { .id = priv_domid };
+
+	name = (char *)(sn->perms + sn->perm_n);
+	node = talloc(ctx, struct node);
+	if (!node)
+		barf("allocation error restoring node");
+
+	node->name = name;
+	node->generation = ++generation;
+	node->datalen = sn->data_len;
+	node->data = name + sn->path_len;
+	node->childlen = 0;
+	node->children = NULL;
+	node->perms.num = sn->perm_n;
+	node->perms.p = talloc_array(node, struct xs_permissions,
+				     node->perms.num);
+	if (!node->perms.p)
+		barf("allocation error restoring node");
+	for (i = 0; i < node->perms.num; i++) {
+		switch (sn->perms[i].access) {
+		case 'r':
+			node->perms.p[i].perms = XS_PERM_READ;
+			break;
+		case 'w':
+			node->perms.p[i].perms = XS_PERM_WRITE;
+			break;
+		case 'b':
+			node->perms.p[i].perms = XS_PERM_READ | XS_PERM_WRITE;
+			break;
+		default:
+			node->perms.p[i].perms = XS_PERM_NONE;
+			break;
+		}
+		if (sn->perms[i].flags & XS_STATE_NODE_PERM_IGNORE)
+			node->perms.p[i].perms |= XS_PERM_IGNORE;
+		node->perms.p[i].id = sn->perms[i].domid;
+	}
+
+	if (strstarts(name, "@")) {
+		set_perms_special(&conn, name, &node->perms);
+		talloc_free(node);
+		return;
+	}
+
+	parentname = get_parent(node, name);
+	if (!parentname)
+		barf("allocation error restoring node");
+	parent = read_node(NULL, node, parentname);
+	if (!parent)
+		barf("read parent error restoring node");
+
+	if (add_child(node, parent, name))
+		barf("allocation error restoring node");
+
+	set_tdb_key(parentname, &key);
+	if (write_node_raw(NULL, &key, parent, true))
+		barf("write parent error restoring node");
+
+	set_tdb_key(name, &key);
+	if (write_node_raw(NULL, &key, node, true))
+		barf("write node error restoring node");
+	domain_entry_inc(&conn, node);
+
+	talloc_free(node);
+}
+
 /*
  * Local variables:
  *  mode: C
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index cb256626fe..0af2f364bf 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -255,6 +255,7 @@ const char *dump_state_node_perms(FILE *fp, struct xs_state_node *sn,
 void read_state_global(const void *ctx, const void *state);
 void read_state_buffered_data(const void *ctx, struct connection *conn,
 			      const struct xs_state_connection *sc);
+void read_state_node(const void *ctx, const void *state);
 
 #endif /* _XENSTORED_CORE_H */
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:43:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:43:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54635.95116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDQG-0001DN-Nn; Tue, 15 Dec 2020 16:43:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54635.95116; Tue, 15 Dec 2020 16:43:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDQG-0001DE-KM; Tue, 15 Dec 2020 16:43:36 +0000
Received: by outflank-mailman (input) for mailman id 54635;
 Tue, 15 Dec 2020 16:43:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpDJx-000667-I9
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:37:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 65e45a5a-a94f-4d2b-a23f-d4e6af217605;
 Tue, 15 Dec 2020 16:36:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C8CA2B715;
 Tue, 15 Dec 2020 16:36:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65e45a5a-a94f-4d2b-a23f-d4e6af217605
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608050172; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Gn9Ko+vbBsxqvm4ELhEm2wkUmTVMxNSsUGTqvo8dVao=;
	b=HQOiAwmVAVhRyZkRDCOtMIU1txAyo1OIOL30TBdZ9o4wI6JPtKkxenKBYo48XWrdEghoPn
	SPO1ewuhITE3gTSfbjwt93c4bj4vYlk4Qz6A6zBby7GFPKgwUKI1pWp4XBDeUvml0TqCg+
	1jzuKAQkI6BA0oCPoA67OoIXbUTwQNk=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v10 25/25] tools/xenstore: activate new binary for live update
Date: Tue, 15 Dec 2020 17:36:03 +0100
Message-Id: <20201215163603.21700-26-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215163603.21700-1-jgross@suse.com>
References: <20201215163603.21700-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add activation of the new binary for live update. The daemon case is
handled completely, while for stubdom we only add stubs.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V7:
- added unbinding dom0 and virq event channels

V8:
- no longer close dom0 evtchn (Julien Grall)

V10:
- remember original argc and argv (taken from deleted patch)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_control.c | 61 +++++++++++++++++++++++++++++-
 tools/xenstore/xenstored_core.c    |  5 +++
 tools/xenstore/xenstored_core.h    |  3 ++
 tools/xenstore/xenstored_domain.c  |  6 +++
 tools/xenstore/xenstored_domain.h  |  1 +
 5 files changed, 75 insertions(+), 1 deletion(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index dee55de264..a5d8185c41 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -16,6 +16,7 @@ Interactive commands for Xen Store Daemon.
     along with this program; If not, see <http://www.gnu.org/licenses/>.
 */
 
+#include <ctype.h>
 #include <errno.h>
 #include <stdarg.h>
 #include <stdio.h>
@@ -330,6 +331,11 @@ static void lu_get_dump_state(struct lu_dump_state *state)
 static void lu_close_dump_state(struct lu_dump_state *state)
 {
 }
+
+static char *lu_exec(const void *ctx, int argc, char **argv)
+{
+	return "NYI";
+}
 #else
 static const char *lu_binary(const void *ctx, struct connection *conn,
 			     const char *filename)
@@ -429,6 +435,14 @@ static void lu_close_dump_state(struct lu_dump_state *state)
 	unlink(filename);
 	talloc_free(filename);
 }
+
+static char *lu_exec(const void *ctx, int argc, char **argv)
+{
+	argv[0] = lu_status->filename;
+	execvp(argv[0], argv);
+
+	return "Error activating new binary.";
+}
 #endif
 
 static const char *lu_check_lu_allowed(const void *ctx, bool force,
@@ -555,7 +569,52 @@ void lu_read_state(void)
 
 static const char *lu_activate_binary(const void *ctx)
 {
-	return "Not yet implemented.";
+	int argc;
+	char **argv;
+	unsigned int i;
+
+	if (lu_status->cmdline) {
+		argc = 4;   /* At least one arg + progname + "-U" + NULL. */
+		for (i = 0; lu_status->cmdline[i]; i++)
+			if (isspace(lu_status->cmdline[i]))
+				argc++;
+		argv = talloc_array(ctx, char *, argc);
+		if (!argv)
+			return "Allocation failure.";
+
+		i = 0;
+		argc = 1;
+		argv[1] = strtok(lu_status->cmdline, " \t");
+		while (argv[argc]) {
+			if (!strcmp(argv[argc], "-U"))
+				i = 1;
+			argc++;
+			argv[argc] = strtok(NULL, " \t");
+		}
+
+		if (!i) {
+			argv[argc++] = "-U";
+			argv[argc] = NULL;
+		}
+	} else {
+		for (i = 0; i < orig_argc; i++)
+			if (!strcmp(orig_argv[i], "-U"))
+				break;
+
+		argc = orig_argc;
+		argv = talloc_array(ctx, char *, orig_argc + 2);
+		if (!argv)
+			return "Allocation failure.";
+
+		memcpy(argv, orig_argv, orig_argc * sizeof(*argv));
+		if (i == orig_argc)
+			argv[argc++] = "-U";
+		argv[argc] = NULL;
+	}
+
+	domain_deinit();
+
+	return lu_exec(ctx, argc, argv);
 }
 
 static const char *lu_start(const void *ctx, struct connection *conn,
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 649dfb534a..7174a9288a 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -72,6 +72,9 @@ static unsigned int nr_fds;
 
 static int sock = -1;
 
+int orig_argc;
+char **orig_argv;
+
 static bool verbose = false;
 LIST_HEAD(connections);
 int tracefd = -1;
@@ -2026,6 +2029,8 @@ int main(int argc, char *argv[])
 	const char *pidfile = NULL;
 	int timeout;
 
+	orig_argc = argc;
+	orig_argv = argv;
 
 	while ((opt = getopt_long(argc, argv, "DE:F:HNPS:t:A:M:T:RVW:U", options,
 				  NULL)) != -1) {
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 0af2f364bf..91e036f49f 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -201,6 +201,9 @@ void dtrace_io(const struct connection *conn, const struct buffered_data *data,
 void reopen_log(void);
 void close_log(void);
 
+extern int orig_argc;
+extern char **orig_argv;
+
 extern char *tracefile;
 extern int tracefd;
 
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index dfda90c791..317427b7cb 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -783,6 +783,12 @@ void domain_init(int evtfd)
 	virq_port = rc;
 }
 
+void domain_deinit(void)
+{
+	if (virq_port)
+		xenevtchn_unbind(xce_handle, virq_port);
+}
+
 void domain_entry_inc(struct connection *conn, struct node *node)
 {
 	struct domain *d;
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 1cc1c03ed8..dc97591713 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -46,6 +46,7 @@ int do_reset_watches(struct connection *conn, struct buffered_data *in);
 
 void domain_init(int evtfd);
 void dom0_init(void);
+void domain_deinit(void);
 
 /* Returns the implicit path of a connection (only domains have this) */
 const char *get_implicit_path(const struct connection *conn);
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:45:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:45:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54654.95132 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDSJ-0001Wp-6N; Tue, 15 Dec 2020 16:45:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54654.95132; Tue, 15 Dec 2020 16:45:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDSJ-0001Wi-2o; Tue, 15 Dec 2020 16:45:43 +0000
Received: by outflank-mailman (input) for mailman id 54654;
 Tue, 15 Dec 2020 16:45:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kpDSI-0001Wc-0I
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:45:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kpDSH-00039m-NJ; Tue, 15 Dec 2020 16:45:41 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kpDSH-0006bJ-HZ; Tue, 15 Dec 2020 16:45:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:To:Subject;
	bh=9hPQXRVwV4EQwXQe2iL5b6tNmVUdHl4SUU3kkYA7bFA=; b=7DGp+3/xnuCcGK6GRfbnWRTqSv
	E+9QVDcXx6E4TD08u0JEzBrFj7eXtbqSvIH7j2JM+XpNIqxh4oCWDuZMIyneHGsezllhCdSAYiw34
	37N/xQIoRviDVMt3k6vudzAduwwQDCkBi8/N2O5Oe6GtZLXpfAcuAI71CsjABYz97nQc=;
Subject: Re: Xen-ARM DomUs
To: Elliott Mitchell <ehem+xen@m5p.com>, Roman Shaposhnik <roman@zededa.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <X9gcZu5uJpXx8wNn@mattapan.m5p.com>
From: Julien Grall <julien@xen.org>
Message-ID: <82c684ad-6c33-7608-9424-3bb46f58ac9c@xen.org>
Date: Tue, 15 Dec 2020 16:45:39 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <X9gcZu5uJpXx8wNn@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 15/12/2020 02:16, Elliott Mitchell wrote:
> Finally getting to the truly productive stages of my project with Xen on
> ARM.
> 
> How many of the OSes which function as x86 DomUs for Xen, function as
> ARM DomUs?  Getting Linux operational was straightforward, but what of
> others?

If you are interested in porting a new OS to Xen, I would suggest reading [1].

> 
> The available examples seem geared towards Linux DomUs.  I'm looking at a
> FreeBSD installation image and it appears to expect an EFI firmware.
> Beyond having a bunch of files appearing oriented towards booting on EFI
> I can't say much about (booting) FreeBSD/ARM DomUs.

I wrote a PoC a few years ago to boot FreeBSD on Xen on Arm (see [2]). I 
haven't touched it for quite a while, so you may need to use a different 
branch in that tree.

Cheers,

[1] 
https://events.static.linuxfound.org/sites/events/files/slides/Porting%20FreeBSD%20on%20Xen%20on%20ARM%20.pdf
[2] 
https://xenbits.xen.org/gitweb/?p=people/julieng/freebsd.git;a=shortlog;h=refs/heads/dev-arm64

> 
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:46:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:46:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54664.95144 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDSx-0001fP-F9; Tue, 15 Dec 2020 16:46:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54664.95144; Tue, 15 Dec 2020 16:46:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDSx-0001fH-Bs; Tue, 15 Dec 2020 16:46:23 +0000
Received: by outflank-mailman (input) for mailman id 54664;
 Tue, 15 Dec 2020 16:46:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0G7T=FT=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kpDK2-000667-IL
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:37:10 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id daa7a249-4b04-4f2e-b2fc-307e7cd90ff9;
 Tue, 15 Dec 2020 16:36:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: daa7a249-4b04-4f2e-b2fc-307e7cd90ff9
Date: Tue, 15 Dec 2020 08:36:34 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1608050195;
	bh=/ci9mZ+yx/ofabk1Z8BtUgyoxeNEGUxytxqspy+rg6M=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=rlUNJ7Z5XFrkrvkJZsPgmqn9UIAsKotuT1HbiOXLB/zvvh5WtA9efww/8cg9XZ0k9
	 uSdkD7bXpRSQFBnaHRBaJiHDzQLbrTHs4UPh3OoczpGqJLO1doqzK325p0hFt3NIDj
	 Z3z206QE0De+nT3NibZiD75Kch2q2oacCIm1q2bTFGAt2X9L5tZfsuk/iw+OuN1IiE
	 bikvMxx//UanYOjzQaxnhmVHWhAbCqL1ygIWpdCP7QFRBc1CYWhpaFu14A7YBT5vTo
	 IsBtAIFiMyStnP6OeGfJUwSF5x8GNgmwsroe2/I0WmTUiE5K9dLDACvYppIq9+d01r
	 XZTdXJIGm56rQ==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Elliott Mitchell <ehem+xen@m5p.com>
cc: Roman Shaposhnik <roman@zededa.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Oleksandr_Andrushchenko@epam.com, 
    Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: Xen-ARM DomUs
In-Reply-To: <X9gcZu5uJpXx8wNn@mattapan.m5p.com>
Message-ID: <alpine.DEB.2.21.2012150828170.4040@sstabellini-ThinkPad-T480s>
References: <X9gcZu5uJpXx8wNn@mattapan.m5p.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 14 Dec 2020, Elliott Mitchell wrote:
> Finally getting to the truly productive stages of my project with Xen on
> ARM.
> 
> How many of the OSes which function as x86 DomUs for Xen, function as
> ARM DomUs?  Getting Linux operational was straightforward, but what of
> others?

I know of FreeRTOS, Zephyr, VxWorks.


> The available examples seem geared towards Linux DomUs.  I'm looking at a
> FreeBSD installation image and it appears to expect an EFI firmware.
> Beyond having a bunch of files appearing oriented towards booting on EFI
> I can't say much about (booting) FreeBSD/ARM DomUs.

Running EFI firmware in a domU is possible with both Tianocore and
U-Boot. You should be able to build the firmware and pass it as a
kernel= binary in the xl file. Then the firmware will be able to load
the necessary binaries from the virtual disk.
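As a sketch, a minimal xl guest config along those lines might look as follows; the firmware path, disk image location, and bridge name are assumptions for illustration, not taken from this thread:

```
# Hypothetical xl config: boot a firmware binary as the domU "kernel".
name    = "domu-efi"
memory  = 1024
vcpus   = 2
# Pass the built firmware (Tianocore or U-Boot) via kernel=; the firmware
# then loads the OS bootloader/kernel from the virtual disk below.
kernel  = "/usr/lib/xen/boot/u-boot.bin"
disk    = [ "format=qcow2, vdev=xvda, target=/var/lib/xen/images/freebsd.qcow2" ]
vif     = [ "bridge=xenbr0" ]
```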

I ran Tianocore this way years ago. Recently, U-Boot was ported to run in
a domU by Oleksandr Andrushchenko (CCed).


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:46:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:46:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54669.95155 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDTJ-0001m3-Oa; Tue, 15 Dec 2020 16:46:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54669.95155; Tue, 15 Dec 2020 16:46:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDTJ-0001lw-Ld; Tue, 15 Dec 2020 16:46:45 +0000
Received: by outflank-mailman (input) for mailman id 54669;
 Tue, 15 Dec 2020 16:46:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpDJt-00066M-Ib
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:37:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ef3335b5-2c28-45c3-b9c3-4c4ff1e53e81;
 Tue, 15 Dec 2020 16:36:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AA173B712;
 Tue, 15 Dec 2020 16:36:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef3335b5-2c28-45c3-b9c3-4c4ff1e53e81
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608050172; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=FspAu+0zxN5f+kBxTUaFqpiXMEMEL35uaqc65DYbH+U=;
	b=WYjMGffV3cHOlieIVijpD/JpBAYGwjWoFeZF+pBfxCAeriK5IAr+ISrP0xfa/hcw6SuHCt
	jVVXGRZY/WKCML5J9L3bMHYydkYN5FEnpFZdSTQM+vHbq/qv1JjkYMn4SgqMgGmGZVsbCt
	kV3F5wAAexfDePQ4yB2IC1tJrdS4XMI=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v10 24/25] tools/xenstore: handle dying domains in live update
Date: Tue, 15 Dec 2020 17:36:02 +0100
Message-Id: <20201215163603.21700-25-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215163603.21700-1-jgross@suse.com>
References: <20201215163603.21700-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

A domain could just be dying when live updating Xenstore, so the case
of not being able to map the ring page or to connect to the event
channel must be handled gracefully.

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Paul Durrant <paul@xen.org>
---
V4:
- new patch (Julien, I hope adding the Sob: is okay?)

V10:
- removed "XXX..." comment (Julien Grall)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_control.c |  7 +++++++
 tools/xenstore/xenstored_domain.c  | 25 +++++++++++++++++--------
 tools/xenstore/xenstored_domain.h  |  2 ++
 3 files changed, 26 insertions(+), 8 deletions(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 8a1e3b35fe..dee55de264 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -544,6 +544,13 @@ void lu_read_state(void)
 	lu_close_dump_state(&state);
 
 	talloc_free(ctx);
+
+	/*
+	 * We may have missed the VIRQ_DOM_EXC notification and a domain may
+	 * have died while we were live-updating. So check all the domains are
+	 * still alive.
+	 */
+	check_domains(true);
 }
 
 static const char *lu_activate_binary(const void *ctx)
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 0cd8234bd1..dfda90c791 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -220,7 +220,7 @@ static bool get_domain_info(unsigned int domid, xc_dominfo_t *dominfo)
 	       dominfo->domid == domid;
 }
 
-static void domain_cleanup(void)
+void check_domains(bool restore)
 {
 	xc_dominfo_t dominfo;
 	struct domain *domain;
@@ -244,7 +244,14 @@ static void domain_cleanup(void)
 				domain->shutdown = true;
 				notify = 1;
 			}
-			if (!dominfo.dying)
+			/*
+			 * On Restore, we may have been unable to remap the
+			 * interface and the port. As we don't know whether
+			 * this was because of a dying domain, we need to
+			 * check if the interface and port are still valid.
+			 */
+			if (!dominfo.dying && domain->port &&
+			    domain->interface)
 				continue;
 		}
 		if (domain->conn) {
@@ -270,7 +277,7 @@ void handle_event(void)
 		barf_perror("Failed to read from event fd");
 
 	if (port == virq_port)
-		domain_cleanup();
+		check_domains(false);
 
 	if (xenevtchn_unmask(xce_handle, port) == -1)
 		barf_perror("Failed to write to event fd");
@@ -442,14 +449,16 @@ static struct domain *introduce_domain(const void *ctx,
 	if (!domain->introduced) {
 		interface = is_master_domain ? xenbus_map()
 					     : map_interface(domid);
-		if (!interface)
+		if (!interface && !restore)
 			return NULL;
 		if (new_domain(domain, port, restore)) {
 			rc = errno;
-			if (is_master_domain)
-				unmap_xenbus(interface);
-			else
-				unmap_interface(interface);
+			if (interface) {
+				if (is_master_domain)
+					unmap_xenbus(interface);
+				else
+					unmap_interface(interface);
+			}
 			errno = rc;
 			return NULL;
 		}
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 8f3b4e0f8b..1cc1c03ed8 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -21,6 +21,8 @@
 
 void handle_event(void);
 
+void check_domains(bool restore);
+
 /* domid, mfn, eventchn, path */
 int do_introduce(struct connection *conn, struct buffered_data *in);
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:46:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:46:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54671.95168 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDTS-0001qD-41; Tue, 15 Dec 2020 16:46:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54671.95168; Tue, 15 Dec 2020 16:46:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDTR-0001q5-Uq; Tue, 15 Dec 2020 16:46:53 +0000
Received: by outflank-mailman (input) for mailman id 54671;
 Tue, 15 Dec 2020 16:46:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpDTQ-0001pb-Ni; Tue, 15 Dec 2020 16:46:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpDTQ-0003By-KB; Tue, 15 Dec 2020 16:46:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpDTQ-0002a6-D9; Tue, 15 Dec 2020 16:46:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kpDTQ-0002XJ-Cd; Tue, 15 Dec 2020 16:46:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XwWEOjKjS14dVIZFeSIs0Gre7cB/KZ0qUMqAs8xjdHo=; b=1Ym1HsOSKUK46l8r+A4BUR7Bhg
	EpZT2v9W6yloCD3gevvm4jehYHov4g5r4jrVVlt0OKYTEtCnhAXORI7nZGW69RHCwor3878d2Lryc
	JdQTwI9PK8Vdnes/WSba25fUANgjhJKgF+6B7JwaUeJb8xH/EdFQGYxYK7V1axIMS83A=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157561-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157561: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=5c3cdebf95bfa32c611b8e72921277401bc90fec
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Dec 2020 16:46:52 +0000

flight 157561 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157561/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 5c3cdebf95bfa32c611b8e72921277401bc90fec
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    6 days
Failing since        157348  2020-12-09 15:39:39 Z    6 days   49 attempts
Testing same since   157561  2020-12-15 13:10:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Ard Biesheuvel <ard.biesheuvel@arm.com>
  Baraneedharan Anbazhagan <anbazhagan@hp.com>
  Baraneedharan Anbazhagan <anbazhgan@hp.com>
  Bret Barkelew <Bret.Barkelew@microsoft.com>
  Chen, Christine <Yuwei.Chen@intel.com>
  Fan Wang <fan.wang@intel.com>
  James Bottomley <jejb@linux.ibm.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Michael D Kinney <michael.d.kinney@intel.com>
  Michael Kubacki <michael.kubacki@microsoft.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sheng Wei <w.sheng@intel.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Star Zeng <star.zeng@intel.com>
  Ting Ye <ting.ye@intel.com>
  Yuwei Chen <yuwei.chen@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 664 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:47:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:47:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54678.95183 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDTf-00020V-MV; Tue, 15 Dec 2020 16:47:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54678.95183; Tue, 15 Dec 2020 16:47:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDTf-00020L-Ha; Tue, 15 Dec 2020 16:47:07 +0000
Received: by outflank-mailman (input) for mailman id 54678;
 Tue, 15 Dec 2020 16:47:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpDK8-00066M-JH
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:37:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b4fc910e-a22a-4aa1-ae90-3f3d3434ab4a;
 Tue, 15 Dec 2020 16:36:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0E8A1B282;
 Tue, 15 Dec 2020 16:36:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b4fc910e-a22a-4aa1-ae90-3f3d3434ab4a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608050172; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=MSf4+puKW484kP2G0bQfIEcisU7NAj/olnxKhB+8UZ8=;
	b=i50mjkOdP+7BifXtMXoGf6d55kFLh6ncbe1YDHsy/X3dZZclnkC3OLl2lbReNLUL65f38T
	tDSmxxL3EW2nVCqAA+X8Quk08b7kGxqy837Xc+UtmzJOwSdKyx0WvryeKyyylpmLcFTJZM
	yOxbqzoZNOml94++9HnrzXq+NxAvS4A=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v10 21/25] tools/xenstore: add read connection state for live update
Date: Tue, 15 Dec 2020 17:35:59 +0100
Message-Id: <20201215163603.21700-22-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215163603.21700-1-jgross@suse.com>
References: <20201215163603.21700-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the needed functions for reading connection state for live update.

As the connection is identified by a unique connection id in the state
records, we need to add this id to struct connection. Add a new
function to return the connection based on a connection id.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V2:
- fixed condition in introduce_domain() (Julien Grall)

V4:
- set pending data msg type to XS_INVALID (Julien Grall)
- add buffered read data (Julien Grall)

V5:
- really read buffered read data (Julien Grall)
- drop conn parameter from introduce_domain() (Paul Durrant)
- split pending write data into individual buffers

V6:
- rename "first" to "partial" (Paul Durrant)

V7:
- use local port from connection data

V8:
- remove dom0 special handling

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_control.c |   1 +
 tools/xenstore/xenstored_core.c    | 102 ++++++++++++++++++++++++++++-
 tools/xenstore/xenstored_core.h    |  10 +++
 tools/xenstore/xenstored_domain.c  |  60 +++++++++++++----
 tools/xenstore/xenstored_domain.h  |   2 +
 5 files changed, 162 insertions(+), 13 deletions(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index f6c4ab3d8a..4bf075ad79 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -522,6 +522,7 @@ void lu_read_state(void)
 			read_state_global(ctx, head + 1);
 			break;
 		case XS_STATE_TYPE_CONN:
+			read_state_connection(ctx, head + 1);
 			break;
 		case XS_STATE_TYPE_WATCH:
 			break;
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 5922a03a98..2ad1cc8d44 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -1532,12 +1532,35 @@ struct connection *new_connection(connwritefn_t *write, connreadfn_t *read)
 	return new;
 }
 
+struct connection *get_connection_by_id(unsigned int conn_id)
+{
+	struct connection *conn;
+
+	list_for_each_entry(conn, &connections, list)
+		if (conn->conn_id == conn_id)
+			return conn;
+
+	return NULL;
+}
+
 #ifdef NO_SOCKETS
 static void accept_connection(int sock)
 {
 }
+
+int writefd(struct connection *conn, const void *data, unsigned int len)
+{
+	errno = EBADF;
+	return -1;
+}
+
+int readfd(struct connection *conn, void *data, unsigned int len)
+{
+	errno = EBADF;
+	return -1;
+}
 #else
-static int writefd(struct connection *conn, const void *data, unsigned int len)
+int writefd(struct connection *conn, const void *data, unsigned int len)
 {
 	int rc;
 
@@ -1553,7 +1576,7 @@ static int writefd(struct connection *conn, const void *data, unsigned int len)
 	return rc;
 }
 
-static int readfd(struct connection *conn, void *data, unsigned int len)
+int readfd(struct connection *conn, void *data, unsigned int len)
 {
 	int rc;
 
@@ -2460,6 +2483,81 @@ void read_state_global(const void *ctx, const void *state)
 	domain_init(glb->evtchn_fd);
 }
 
+static void add_buffered_data(struct buffered_data *bdata,
+			      struct connection *conn, const uint8_t *data,
+			      unsigned int len)
+{
+	bdata->hdr.msg.len = len;
+	if (len <= DEFAULT_BUFFER_SIZE)
+		bdata->buffer = bdata->default_buffer;
+	else
+		bdata->buffer = talloc_array(bdata, char, len);
+	if (!bdata->buffer)
+		barf("error restoring buffered data");
+
+	memcpy(bdata->buffer, data, len);
+
+	/* Queue for later transmission. */
+	list_add_tail(&bdata->list, &conn->out_list);
+}
+
+void read_state_buffered_data(const void *ctx, struct connection *conn,
+			      const struct xs_state_connection *sc)
+{
+	struct buffered_data *bdata;
+	const uint8_t *data;
+	unsigned int len;
+	bool partial = sc->data_resp_len;
+
+	if (sc->data_in_len) {
+		bdata = new_buffer(conn);
+		if (!bdata)
+			barf("error restoring read data");
+		if (sc->data_in_len < sizeof(bdata->hdr)) {
+			bdata->inhdr = true;
+			memcpy(&bdata->hdr, sc->data, sc->data_in_len);
+			bdata->used = sc->data_in_len;
+		} else {
+			bdata->inhdr = false;
+			memcpy(&bdata->hdr, sc->data, sizeof(bdata->hdr));
+			if (bdata->hdr.msg.len <= DEFAULT_BUFFER_SIZE)
+				bdata->buffer = bdata->default_buffer;
+			else
+				bdata->buffer = talloc_array(bdata, char,
+							bdata->hdr.msg.len);
+			if (!bdata->buffer)
+				barf("Error allocating in buffer");
+			bdata->used = sc->data_in_len - sizeof(bdata->hdr);
+			memcpy(bdata->buffer, sc->data + sizeof(bdata->hdr),
+			       bdata->used);
+		}
+
+		conn->in = bdata;
+	}
+
+	for (data = sc->data + sc->data_in_len;
+	     data < sc->data + sc->data_in_len + sc->data_out_len;
+	     data += len) {
+		bdata = new_buffer(conn);
+		if (!bdata)
+			barf("error restoring buffered data");
+		if (partial) {
+			bdata->inhdr = false;
+			/* Make trace look nice. */
+			bdata->hdr.msg.type = XS_INVALID;
+			len = sc->data_resp_len;
+			add_buffered_data(bdata, conn, data, len);
+			partial = false;
+			continue;
+		}
+
+		memcpy(&bdata->hdr, data, sizeof(bdata->hdr));
+		data += sizeof(bdata->hdr);
+		len = bdata->hdr.msg.len;
+		add_buffered_data(bdata, conn, data, len);
+	}
+}
+
 /*
  * Local variables:
  *  mode: C
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 6c9d838f11..cb256626fe 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -118,6 +118,9 @@ struct connection
 	/* Methods for communicating over this connection: write can be NULL */
 	connwritefn_t *write;
 	connreadfn_t *read;
+
+	/* Support for live update: connection id. */
+	unsigned int conn_id;
 };
 extern struct list_head connections;
 
@@ -178,6 +181,7 @@ struct node *read_node(struct connection *conn, const void *ctx,
 		       const char *name);
 
 struct connection *new_connection(connwritefn_t *write, connreadfn_t *read);
+struct connection *get_connection_by_id(unsigned int conn_id);
 void check_store(void);
 void corrupt(struct connection *conn, const char *fmt, ...);
 enum xs_perm_type perm_for_conn(struct connection *conn,
@@ -229,6 +233,10 @@ void finish_daemonize(void);
 /* Open a pipe for signal handling */
 void init_pipe(int reopen_log_pipe[2]);
 
+int writefd(struct connection *conn, const void *data, unsigned int len);
+int readfd(struct connection *conn, void *data, unsigned int len);
+
+extern struct interface_funcs socket_funcs;
 extern xengnttab_handle **xgt_handle;
 
 int remember_string(struct hashtable *hash, const char *str);
@@ -245,6 +253,8 @@ const char *dump_state_node_perms(FILE *fp, struct xs_state_node *sn,
 				  unsigned int n_perms);
 
 void read_state_global(const void *ctx, const void *state);
+void read_state_buffered_data(const void *ctx, struct connection *conn,
+			      const struct xs_state_connection *sc);
 
 #endif /* _XENSTORED_CORE_H */
 
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 94dd501a3b..0cd8234bd1 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -355,7 +355,7 @@ static struct domain *find_or_alloc_domain(const void *ctx, unsigned int domid)
 	return domain ? : alloc_domain(ctx, domid);
 }
 
-static int new_domain(struct domain *domain, int port)
+static int new_domain(struct domain *domain, int port, bool restore)
 {
 	int rc;
 
@@ -369,11 +369,16 @@ static int new_domain(struct domain *domain, int port)
 
 	wrl_domain_new(domain);
 
-	/* Tell kernel we're interested in this event. */
-	rc = xenevtchn_bind_interdomain(xce_handle, domain->domid, port);
-	if (rc == -1)
-		return errno;
-	domain->port = rc;
+	if (restore)
+		domain->port = port;
+	else {
+		/* Tell kernel we're interested in this event. */
+		rc = xenevtchn_bind_interdomain(xce_handle, domain->domid,
+						port);
+		if (rc == -1)
+			return errno;
+		domain->port = rc;
+	}
 
 	domain->introduced = true;
 
@@ -423,7 +428,7 @@ static void domain_conn_reset(struct domain *domain)
 
 static struct domain *introduce_domain(const void *ctx,
 				       unsigned int domid,
-				       evtchn_port_t port)
+				       evtchn_port_t port, bool restore)
 {
 	struct domain *domain;
 	int rc;
@@ -439,7 +444,7 @@ static struct domain *introduce_domain(const void *ctx,
 					     : map_interface(domid);
 		if (!interface)
 			return NULL;
-		if (new_domain(domain, port)) {
+		if (new_domain(domain, port, restore)) {
 			rc = errno;
 			if (is_master_domain)
 				unmap_xenbus(interface);
@@ -453,7 +458,7 @@ static struct domain *introduce_domain(const void *ctx,
 		/* Now domain belongs to its connection. */
 		talloc_steal(domain->conn, domain);
 
-		if (!is_master_domain)
+		if (!is_master_domain && !restore)
 			fire_watches(NULL, ctx, "@introduceDomain", NULL,
 				     false, NULL);
 	} else {
@@ -486,7 +491,7 @@ int do_introduce(struct connection *conn, struct buffered_data *in)
 	if (port <= 0)
 		return EINVAL;
 
-	domain = introduce_domain(in, domid, port);
+	domain = introduce_domain(in, domid, port, false);
 	if (!domain)
 		return errno;
 
@@ -715,7 +720,7 @@ void dom0_init(void)
 	if (port == -1)
 		barf_perror("Failed to initialize dom0 port");
 
-	dom0 = introduce_domain(NULL, xenbus_master_domid(), port);
+	dom0 = introduce_domain(NULL, xenbus_master_domid(), port, false);
 	if (!dom0)
 		barf_perror("Failed to initialize dom0");
 
@@ -1261,6 +1266,39 @@ const char *dump_state_special_nodes(FILE *fp)
 	return ret;
 }
 
+void read_state_connection(const void *ctx, const void *state)
+{
+	const struct xs_state_connection *sc = state;
+	struct connection *conn;
+	struct domain *domain, *tdomain;
+
+	if (sc->conn_type == XS_STATE_CONN_TYPE_SOCKET) {
+		conn = new_connection(writefd, readfd);
+		if (!conn)
+			barf("error restoring connection");
+		conn->fd = sc->spec.socket_fd;
+	} else {
+		domain = introduce_domain(ctx, sc->spec.ring.domid,
+					  sc->spec.ring.evtchn, true);
+		if (!domain)
+			barf("domain allocation error");
+
+		if (sc->spec.ring.tdomid != DOMID_INVALID) {
+			tdomain = find_or_alloc_domain(ctx,
+						       sc->spec.ring.tdomid);
+			if (!tdomain)
+				barf("target domain allocation error");
+			talloc_reference(domain->conn, tdomain->conn);
+			domain->conn->target = tdomain->conn;
+		}
+		conn = domain->conn;
+	}
+
+	conn->conn_id = sc->conn_id;
+
+	read_state_buffered_data(ctx, conn, sc);
+}
+
 /*
  * Local variables:
  *  mode: C
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index b20269b038..8f3b4e0f8b 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -101,4 +101,6 @@ void wrl_apply_debit_trans_commit(struct connection *conn);
 const char *dump_state_connections(FILE *fp, struct connection *conn);
 const char *dump_state_special_nodes(FILE *fp);
 
+void read_state_connection(const void *ctx, const void *state);
+
 #endif /* _XENSTORED_DOMAIN_H */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:47:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:47:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54683.95195 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDTm-00025a-VH; Tue, 15 Dec 2020 16:47:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54683.95195; Tue, 15 Dec 2020 16:47:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDTm-00025T-R1; Tue, 15 Dec 2020 16:47:14 +0000
Received: by outflank-mailman (input) for mailman id 54683;
 Tue, 15 Dec 2020 16:47:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2CwE=FT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpDK3-00066M-J8
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:37:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9484f92d-f7fe-4499-b74a-209d001bc347;
 Tue, 15 Dec 2020 16:36:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 929D8B711;
 Tue, 15 Dec 2020 16:36:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9484f92d-f7fe-4499-b74a-209d001bc347
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608050172; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=NilqGYTQhpkTDabJtW0PbsSjQkzIJ61e98qt7CTEwA4=;
	b=sMeTI8HgyVysSVPzViBQusG8SejoZRzLIT8SAiJBx1tj/3vZdKYMWR3YS7EPPo7Si+K+g5
	GhOlM5rHqlvd5ESlZvtmcrVV6YOjpEExhwbU+aU4PuHd26ssYFkTTkT06M9Q8sP5DbFc8o
	7aC8Xvr0Y9svcUJLz+dOgefYWiiJr58=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v10 23/25] tools/xenstore: add read watch state for live update
Date: Tue, 15 Dec 2020 17:36:01 +0100
Message-Id: <20201215163603.21700-24-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201215163603.21700-1-jgross@suse.com>
References: <20201215163603.21700-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add reading the watch state records for live update.

This requires factoring out some of the add watch functionality into a
dedicated function.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Paul Durrant <paul@xen.org>
---
V4:
- add comment (Julien Grall)

V6:
- correct check_watch_path() (setting errno)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_control.c |   2 +
 tools/xenstore/xenstored_watch.c   | 114 +++++++++++++++++++++--------
 tools/xenstore/xenstored_watch.h   |   2 +
 3 files changed, 88 insertions(+), 30 deletions(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index a978ccf17e..8a1e3b35fe 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -35,6 +35,7 @@ Interactive commands for Xen Store Daemon.
 #include "xenstored_core.h"
 #include "xenstored_control.h"
 #include "xenstored_domain.h"
+#include "xenstored_watch.h"
 
 /* Mini-OS only knows about MAP_ANON. */
 #ifndef MAP_ANONYMOUS
@@ -525,6 +526,7 @@ void lu_read_state(void)
 			read_state_connection(ctx, head + 1);
 			break;
 		case XS_STATE_TYPE_WATCH:
+			read_state_watch(ctx, head + 1);
 			break;
 		case XS_STATE_TYPE_TA:
 			xprintf("live-update: ignore transaction record\n");
diff --git a/tools/xenstore/xenstored_watch.c b/tools/xenstore/xenstored_watch.c
index 9248f08bd9..db89e0141f 100644
--- a/tools/xenstore/xenstored_watch.c
+++ b/tools/xenstore/xenstored_watch.c
@@ -205,6 +205,62 @@ static int destroy_watch(void *_watch)
 	return 0;
 }
 
+static int check_watch_path(struct connection *conn, const void *ctx,
+			    char **path, bool *relative)
+{
+	/* Check if valid event. */
+	if (strstarts(*path, "@")) {
+		*relative = false;
+		if (strlen(*path) > XENSTORE_REL_PATH_MAX)
+			goto inval;
+	} else {
+		*relative = !strstarts(*path, "/");
+		*path = canonicalize(conn, ctx, *path);
+		if (!*path)
+			return errno;
+		if (!is_valid_nodename(*path))
+			goto inval;
+	}
+
+	return 0;
+
+ inval:
+	errno = EINVAL;
+	return errno;
+}
+
+static struct watch *add_watch(struct connection *conn, char *path, char *token,
+			       bool relative)
+{
+	struct watch *watch;
+
+	watch = talloc(conn, struct watch);
+	if (!watch)
+		goto nomem;
+	watch->node = talloc_strdup(watch, path);
+	watch->token = talloc_strdup(watch, token);
+	if (!watch->node || !watch->token)
+		goto nomem;
+
+	if (relative)
+		watch->relative_path = get_implicit_path(conn);
+	else
+		watch->relative_path = NULL;
+
+	INIT_LIST_HEAD(&watch->events);
+
+	domain_watch_inc(conn);
+	list_add_tail(&watch->list, &conn->watches);
+	talloc_set_destructor(watch, destroy_watch);
+
+	return watch;
+
+ nomem:
+	talloc_free(watch);
+	errno = ENOMEM;
+	return NULL;
+}
+
 int do_watch(struct connection *conn, struct buffered_data *in)
 {
 	struct watch *watch;
@@ -214,19 +270,9 @@ int do_watch(struct connection *conn, struct buffered_data *in)
 	if (get_strings(in, vec, ARRAY_SIZE(vec)) != ARRAY_SIZE(vec))
 		return EINVAL;
 
-	if (strstarts(vec[0], "@")) {
-		relative = false;
-		if (strlen(vec[0]) > XENSTORE_REL_PATH_MAX)
-			return EINVAL;
-		/* check if valid event */
-	} else {
-		relative = !strstarts(vec[0], "/");
-		vec[0] = canonicalize(conn, in, vec[0]);
-		if (!vec[0])
-			return ENOMEM;
-		if (!is_valid_nodename(vec[0]))
-			return EINVAL;
-	}
+	errno = check_watch_path(conn, in, &(vec[0]), &relative);
+	if (errno)
+		return errno;
 
 	/* Check for duplicates. */
 	list_for_each_entry(watch, &conn->watches, list) {
@@ -238,26 +284,11 @@ int do_watch(struct connection *conn, struct buffered_data *in)
 	if (domain_watch(conn) > quota_nb_watch_per_domain)
 		return E2BIG;
 
-	watch = talloc(conn, struct watch);
+	watch = add_watch(conn, vec[0], vec[1], relative);
 	if (!watch)
-		return ENOMEM;
-	watch->node = talloc_strdup(watch, vec[0]);
-	watch->token = talloc_strdup(watch, vec[1]);
-	if (!watch->node || !watch->token) {
-		talloc_free(watch);
-		return ENOMEM;
-	}
-	if (relative)
-		watch->relative_path = get_implicit_path(conn);
-	else
-		watch->relative_path = NULL;
+		return errno;
 
-	INIT_LIST_HEAD(&watch->events);
-
-	domain_watch_inc(conn);
-	list_add_tail(&watch->list, &conn->watches);
 	trace_create(watch, "watch");
-	talloc_set_destructor(watch, destroy_watch);
 	send_ack(conn, XS_WATCH);
 
 	/* We fire once up front: simplifies clients and restart. */
@@ -338,6 +369,29 @@ const char *dump_state_watches(FILE *fp, struct connection *conn,
 	return ret;
 }
 
+void read_state_watch(const void *ctx, const void *state)
+{
+	const struct xs_state_watch *sw = state;
+	struct connection *conn;
+	char *path, *token;
+	bool relative;
+
+	conn = get_connection_by_id(sw->conn_id);
+	if (!conn)
+		barf("connection not found for read watch");
+
+	path = (char *)sw->data;
+	token = path + sw->path_length;
+
+	/* Don't check success, we want the relative information only. */
+	check_watch_path(conn, ctx, &path, &relative);
+	if (!path)
+		barf("allocation error for read watch");
+
+	if (!add_watch(conn, path, token, relative))
+		barf("error adding watch");
+}
+
 /*
  * Local variables:
  *  mode: C
diff --git a/tools/xenstore/xenstored_watch.h b/tools/xenstore/xenstored_watch.h
index 3d81645f45..0e693f0839 100644
--- a/tools/xenstore/xenstored_watch.h
+++ b/tools/xenstore/xenstored_watch.h
@@ -33,4 +33,6 @@ void conn_delete_all_watches(struct connection *conn);
 const char *dump_state_watches(FILE *fp, struct connection *conn,
 			       unsigned int conn_id);
 
+void read_state_watch(const void *ctx, const void *state);
+
 #endif /* _XENSTORED_WATCH_H */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 16:49:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 16:49:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54701.95206 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDWG-0002Xp-E0; Tue, 15 Dec 2020 16:49:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54701.95206; Tue, 15 Dec 2020 16:49:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDWG-0002Xi-B6; Tue, 15 Dec 2020 16:49:48 +0000
Received: by outflank-mailman (input) for mailman id 54701;
 Tue, 15 Dec 2020 16:49:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kpDWF-0002Xd-2P
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 16:49:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kpDWE-0003Ef-Rt; Tue, 15 Dec 2020 16:49:46 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kpDWE-0006pU-LP; Tue, 15 Dec 2020 16:49:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Q56mA1p/aU6IJkPDzP6EJRxz8lUjnDr6+iHAne2CaUU=; b=DFJo+h+wrCNP6wQYfvf+228xoX
	sfBiFjkIw41yKU7Rfrf9ubrbHZNpIc3IF6R1N5e4YUAjpMCMkLtjKB96p0VgeOanDRvyFSgyNYGr7
	P/Y3GFkEpTbV+fN/wEgRgwyO43dk8OOQaByWEBWdCKwSe9lrJmWRz0Ils8lV1vOxt7fs=;
Subject: Re: Xen-ARM DomUs
To: Roman Shaposhnik <roman@zededa.com>, Elliott Mitchell <ehem+xen@m5p.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <X9gcZu5uJpXx8wNn@mattapan.m5p.com>
 <CAMmSBy_8+PRWiSQxwRN2oB9mLmOnyoCr0mH4L-uUYhm=1GK7Xg@mail.gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <5741302a-1703-7766-96a3-d0f606e9973a@xen.org>
Date: Tue, 15 Dec 2020 16:49:45 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <CAMmSBy_8+PRWiSQxwRN2oB9mLmOnyoCr0mH4L-uUYhm=1GK7Xg@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 15/12/2020 02:35, Roman Shaposhnik wrote:
> On Mon, Dec 14, 2020 at 6:16 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
>>
>> Finally getting to the truly productive stages of my project with Xen on
>> ARM.
>>
>> How many of the OSes which function as x86 DomUs for Xen, function as
>> ARM DomUs?  Getting Linux operational was straightforward, but what of
>> others?
> 
> On EVE we have Windows running as pretty much a customer-facing demo:
>      https://wiki.lfedge.org/display/EVE/How+get+Windows+10+running+on+a+Raspberry+Pi

Are you saying that Windows is booting on top of Xen on Arm?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 17:06:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 17:06:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54713.95223 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDls-0004Qp-UA; Tue, 15 Dec 2020 17:05:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54713.95223; Tue, 15 Dec 2020 17:05:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpDls-0004Qi-Qj; Tue, 15 Dec 2020 17:05:56 +0000
Received: by outflank-mailman (input) for mailman id 54713;
 Tue, 15 Dec 2020 17:05:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1p51=FT=tklengyel.com=tamas@srs-us1.protection.inumbo.net>)
 id 1kpDlr-0004Qd-Ol
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 17:05:55 +0000
Received: from MTA-08-4.privateemail.com (unknown [198.54.122.58])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 06bd5574-c4e5-4f42-9cb1-aab31d8c4919;
 Tue, 15 Dec 2020 17:05:54 +0000 (UTC)
Received: from MTA-08.privateemail.com (localhost [127.0.0.1])
 by MTA-08.privateemail.com (Postfix) with ESMTP id 8CC20600A7
 for <xen-devel@lists.xenproject.org>; Tue, 15 Dec 2020 12:05:53 -0500 (EST)
Received: from mail-wm1-f45.google.com (unknown [10.20.151.235])
 by MTA-08.privateemail.com (Postfix) with ESMTPA id 55951600A4
 for <xen-devel@lists.xenproject.org>; Tue, 15 Dec 2020 17:05:53 +0000 (UTC)
Received: by mail-wm1-f45.google.com with SMTP id c133so5528204wme.4
 for <xen-devel@lists.xenproject.org>; Tue, 15 Dec 2020 09:05:53 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 06bd5574-c4e5-4f42-9cb1-aab31d8c4919
X-Gm-Message-State: AOAM531fpD8zpdqXIs8FbWErLkIArskG7S0QHYjTxTaPuY1JuvzTnXEB
	MYx/SImq8GiSH823HaidTM20tiK+QMcuoCz7CrA=
X-Google-Smtp-Source: ABdhPJyhuozME1HLdtqWtG3JumBjmF74EPKVkLRP8xE8YvzICkdvMIeDsCReYkTjNo5dUEt6gKAwQ+89+dDqV+KIbwk=
X-Received: by 2002:a1c:4e0a:: with SMTP id g10mr34168015wmh.51.1608051951892;
 Tue, 15 Dec 2020 09:05:51 -0800 (PST)
MIME-Version: 1.0
References: <be9ce75e-9119-2b5a-9e7b-437beb7ee446@suse.com> <a9c24d20-0feb-42c8-ae2c-5cfd564157ee@suse.com>
In-Reply-To: <a9c24d20-0feb-42c8-ae2c-5cfd564157ee@suse.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Tue, 15 Dec 2020 12:05:15 -0500
X-Gmail-Original-Message-ID: <CABfawh==44+wb0wt_NSmR_eJGfn_17BTNYLJxKvctwrhx4Hyvg@mail.gmail.com>
Message-ID: <CABfawh==44+wb0wt_NSmR_eJGfn_17BTNYLJxKvctwrhx4Hyvg@mail.gmail.com>
Subject: Re: [PATCH 6/6] x86/p2m: set_shared_p2m_entry() is MEM_SHARING-only
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	George Dunlap <george.dunlap@citrix.com>
Content-Type: text/plain; charset="UTF-8"
X-Virus-Scanned: ClamAV using ClamSMTP

On Tue, Dec 15, 2020 at 11:28 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Tamas K Lengyel <tamas@tklengyel.com>


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 17:30:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 17:30:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54722.95241 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpE9r-0007HL-Ee; Tue, 15 Dec 2020 17:30:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54722.95241; Tue, 15 Dec 2020 17:30:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpE9r-0007H9-8e; Tue, 15 Dec 2020 17:30:43 +0000
Received: by outflank-mailman (input) for mailman id 54722;
 Tue, 15 Dec 2020 17:24:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LaZc=FT=redhat.com=slp@srs-us1.protection.inumbo.net>)
 id 1kpE3N-0006OE-GF
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 17:24:01 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 8aaf35f7-ccc3-450d-ab6c-06d8aaa7d03e;
 Tue, 15 Dec 2020 17:24:00 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-243-hCy7xXOIO3OqX7U5VVhdwg-1; Tue, 15 Dec 2020 12:23:53 -0500
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com
 [10.5.11.12])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 28CE119251A6;
 Tue, 15 Dec 2020 17:23:51 +0000 (UTC)
Received: from localhost (ovpn-114-253.rdu2.redhat.com [10.10.114.253])
 by smtp.corp.redhat.com (Postfix) with ESMTP id BB8B260C0F;
 Tue, 15 Dec 2020 17:23:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8aaf35f7-ccc3-450d-ab6c-06d8aaa7d03e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1608053040;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=rUdEUiX0emWRYmCkDrP17XYuL2iUgaq+xIQzZwTCHCQ=;
	b=BRwvzkGIMMW3Mu7V4aZ64fh6Y1MHI2QgYTctIH/KumD3D/Ff1JZbhkbB4v+npqOa01fnmL
	zVHF9T8ZNwGhjPqzI4C3yvq8tJtOufGvemBHioRg3lf6UtImmfNjf+qP7L5XY/9UUtFkPI
	DOpV+uBtcZwXl/sWwfPv40p54+z0yyE=
X-MC-Unique: hCy7xXOIO3OqX7U5VVhdwg-1
Date: Tue, 15 Dec 2020 18:23:37 +0100
From: Sergio Lopez <slp@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
	qemu-block@nongnu.org, Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>, Fam Zheng <fam@euphon.net>,
	Eric Blake <eblake@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
	Max Reitz <mreitz@redhat.com>
Subject: Re: [PATCH v2 2/4] block: Avoid processing BDS twice in
 bdrv_set_aio_context_ignore()
Message-ID: <20201215172337.w7vcn2woze2ejgco@mhamilton>
References: <20201214170519.223781-1-slp@redhat.com>
 <20201214170519.223781-3-slp@redhat.com>
 <20201215121233.GD8185@merkur.fritz.box>
 <20201215131527.evpidxevevtfy54n@mhamilton>
 <20201215150119.GE8185@merkur.fritz.box>
MIME-Version: 1.0
In-Reply-To: <20201215150119.GE8185@merkur.fritz.box>
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=slp@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="smfgrrjgqasjiwvo"
Content-Disposition: inline

--smfgrrjgqasjiwvo
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Tue, Dec 15, 2020 at 04:01:19PM +0100, Kevin Wolf wrote:
> Am 15.12.2020 um 14:15 hat Sergio Lopez geschrieben:
> > On Tue, Dec 15, 2020 at 01:12:33PM +0100, Kevin Wolf wrote:
> > > Am 14.12.2020 um 18:05 hat Sergio Lopez geschrieben:
> > > > While processing the parents of a BDS, one of the parents may process
> > > > the child that's doing the tail recursion, which leads to a BDS being
> > > > processed twice. This is especially problematic for the aio_notifiers,
> > > > as they might attempt to work on both the old and the new AIO
> > > > contexts.
> > > >
> > > > To avoid this, add the BDS pointer to the ignore list, and check the
> > > > child BDS pointer while iterating over the children.
> > > >
> > > > Signed-off-by: Sergio Lopez <slp@redhat.com>
> > >
> > > Ugh, so we get a mixed list of BdrvChild and BlockDriverState? :-/
> >
> > I know, it's effective but quite ugly...
> >
> > > What is the specific scenario where you saw this breaking? Did you have
> > > multiple BdrvChild connections between two nodes so that we would go to
> > > the parent node through one and then come back to the child node through
> > > the other?
> >
> > I don't think this is a corner case. If the graph is walked top->down,
> > there's no problem since children are added to the ignore list before
> > getting processed, and siblings don't process each other. But, if the
> > graph is walked bottom->up, a BDS will start processing its parents
> > without adding itself to the ignore list, so there's nothing
> > preventing them from processing it again.
>
> I don't understand. child is added to ignore before calling the parent
> callback on it, so how can we come back through the same BdrvChild?
>
>     QLIST_FOREACH(child, &bs->parents, next_parent) {
>         if (g_slist_find(*ignore, child)) {
>             continue;
>         }
>         assert(child->klass->set_aio_ctx);
>         *ignore = g_slist_prepend(*ignore, child);
>         child->klass->set_aio_ctx(child, new_context, ignore);
>     }

Perhaps I'm missing something, but the way I understand it, that loop
is adding the BdrvChild pointer of each of its parents, but not the
BdrvChild pointer of the BDS that was passed as an argument to
b_s_a_c_i.

> You didn't dump the BdrvChild here. I think that would add some
> information on why we re-entered 0x555ee2fbf660. Maybe you can also add
> bs->drv->format_name for each node to make the scenario less abstract?

I've generated another trace with more data:

bs=0x565505e48030 (backup-top) enter
bs=0x565505e48030 (backup-top) processing children
bs=0x565505e48030 (backup-top) calling bsaci child=0x565505e42090 (child->bs=0x565505e5d420)
bs=0x565505e5d420 (qcow2) enter
bs=0x565505e5d420 (qcow2) processing children
bs=0x565505e5d420 (qcow2) calling bsaci child=0x565505e41ea0 (child->bs=0x565505e52060)
bs=0x565505e52060 (file) enter
bs=0x565505e52060 (file) processing children
bs=0x565505e52060 (file) processing parents
bs=0x565505e52060 (file) processing itself
bs=0x565505e5d420 (qcow2) processing parents
bs=0x565505e5d420 (qcow2) calling set_aio_ctx child=0x5655066a34d0
bs=0x565505fbf660 (qcow2) enter
bs=0x565505fbf660 (qcow2) processing children
bs=0x565505fbf660 (qcow2) calling bsaci child=0x565505e41d20 (child->bs=0x565506bc0c00)
bs=0x565506bc0c00 (file) enter
bs=0x565506bc0c00 (file) processing children
bs=0x565506bc0c00 (file) processing parents
bs=0x565506bc0c00 (file) processing itself
bs=0x565505fbf660 (qcow2) processing parents
bs=0x565505fbf660 (qcow2) calling set_aio_ctx child=0x565505fc7aa0
bs=0x565505fbf660 (qcow2) calling set_aio_ctx child=0x5655068b8510
bs=0x565505e48030 (backup-top) enter
bs=0x565505e48030 (backup-top) processing children
bs=0x565505e48030 (backup-top) calling bsaci child=0x565505e3c450 (child->bs=0x565505fbf660)
bs=0x565505fbf660 (qcow2) enter
bs=0x565505fbf660 (qcow2) processing children
bs=0x565505fbf660 (qcow2) processing parents
bs=0x565505fbf660 (qcow2) processing itself
bs=0x565505e48030 (backup-top) processing parents
bs=0x565505e48030 (backup-top) calling set_aio_ctx child=0x565505e402d0
bs=0x565505e48030 (backup-top) processing itself
bs=0x565505fbf660 (qcow2) processing itself

So it seems this is happening:

backup-top (5e48030) <---------| (5)
   |    |                      |
   |    | (6) ------------> qcow2 (5fbf660)
   |                           ^    |
   |                       (3) |    | (4)
   |-> (1) qcow2 (5e5d420) -----    |-> file (6bc0c00)
   |
   |-> (2) file (5e52060)

backup-top (5e48030), the BDS that was passed as argument in the first
bdrv_set_aio_context_ignore() call, is re-entered when qcow2 (5fbf660)
is processing its parents, and the latter is also re-entered when the
first one starts processing its children again.

Sergio.

--smfgrrjgqasjiwvo
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEvtX891EthoCRQuii9GknjS8MAjUFAl/Y8RgACgkQ9GknjS8M
AjV8gA/6AzvhEg8xeUBjecxy67BEBGuQYW5n0uFMZ3eWYWc3rtBB2CEWtYIWwtc+
4D74Ez21djBsCAoS3AjQeeMQ6eGSRT7IP3A1g5FONqVqzgouny7jJMUfUgXe12lg
goBFlYA8eoYWB4lVHxLn1vbAImEyOkBg3c1DkMj+auBLkRS7cVGusvWGv8XYqQc5
Nnfu/6MYS2eT0K8noWtRMoccu8nPvAUJP6ijAhA7ktXonsS7B/+IdX0Ongfc0IdU
531DOnmQwm8P67V+EKj7pe5g/W/UKwfeeRbVvIMptUD0wktJfjQUuE9nqeG8iEm/
LH5KcLIZlM9S6EbQp2pXhYWBJR/g4jblbf8dyRYLi/Hv36LI5vQdSg+DghkyhvVp
3RSROZypQxVxBCb3W/5n4OIbpKEm87WnES2Lr5lyzy4QEKSjr6owi6EXP8WTfmU7
0+7HtStBZ51C84ZkvkneV7W9dGwCzzIrQyJ6aRnirwN6fVtrv65GUXkhi/ysln4x
N/j95DF1xZU8yawCKkrP7GY7clHOyYhhpeLvcEmn9mi9r2ypIAlRN7d2tfR5BKjH
cjkF6M5uq9gxrcLFjmLgAGo1f/D5S7T6qo3Pkt6gcuUX0R2SlP0PspN15y1aEjPF
8kiVA9FS+I/zXvLZ3R8Rt/sbZslb5/SNUP4Y1NnwYivDOmVOszI=
=8V3n
-----END PGP SIGNATURE-----

--smfgrrjgqasjiwvo--



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 17:30:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 17:30:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54724.95248 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpE9r-0007I7-PP; Tue, 15 Dec 2020 17:30:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54724.95248; Tue, 15 Dec 2020 17:30:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpE9r-0007Hk-IA; Tue, 15 Dec 2020 17:30:43 +0000
Received: by outflank-mailman (input) for mailman id 54724;
 Tue, 15 Dec 2020 17:27:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LaZc=FT=redhat.com=slp@srs-us1.protection.inumbo.net>)
 id 1kpE6U-0006Rd-8P
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 17:27:14 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id dc16ec82-ec6d-4245-929d-9f028f963ab0;
 Tue, 15 Dec 2020 17:27:12 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-525-tUoFMpo_Oh-DKH_WWSxhMg-1; Tue, 15 Dec 2020 12:27:07 -0500
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com
 [10.5.11.16])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id F28C11936B9E;
 Tue, 15 Dec 2020 17:27:00 +0000 (UTC)
Received: from localhost (ovpn-114-253.rdu2.redhat.com [10.10.114.253])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 0D5617445D;
 Tue, 15 Dec 2020 17:26:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc16ec82-ec6d-4245-929d-9f028f963ab0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1608053232;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=CRT2RnWTC2KUg0aduM+3r6rfK5Si5JdLSD0xOCp5zu4=;
	b=cd4t77TpxCGRo6RsZsgGUdjAQj405bbgZT1s5KGqddrXxIopeqUcmQlEa8M0wk/fY5ncja
	y5u9wyR68UsZfWNOT/mi0N+L15Q36zNRDwYGFCFWeuHC3AjO84W0SnP5ugs+B9RqphZSbd
	dwe2oGB0yy8gQu9r0K2VSO9EXJ9yw6Q=
X-MC-Unique: tUoFMpo_Oh-DKH_WWSxhMg-1
Date: Tue, 15 Dec 2020 18:26:28 +0100
From: Sergio Lopez <slp@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
	qemu-block@nongnu.org, Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>, Fam Zheng <fam@euphon.net>,
	Eric Blake <eblake@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
	Max Reitz <mreitz@redhat.com>
Subject: Re: [PATCH v2 4/4] block: Close block exports in two steps
Message-ID: <20201215172628.andsendtcqcgtdkr@mhamilton>
References: <20201214170519.223781-1-slp@redhat.com>
 <20201214170519.223781-5-slp@redhat.com>
 <20201215153405.GF8185@merkur.fritz.box>
MIME-Version: 1.0
In-Reply-To: <20201215153405.GF8185@merkur.fritz.box>
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=slp@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="f2qqvpnhxsbc774i"
Content-Disposition: inline

--f2qqvpnhxsbc774i
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Tue, Dec 15, 2020 at 04:34:05PM +0100, Kevin Wolf wrote:
> Am 14.12.2020 um 18:05 hat Sergio Lopez geschrieben:
> > There's a cross-dependency between closing the block exports and
> > draining the block layer. The latter needs that we close all export's
> > client connections to ensure they won't queue more requests, but the
> > exports may have coroutines yielding in the block layer, which implies
> > they can't be fully closed until we drain it.
>
> A coroutine that yielded must have some way to be reentered. So I guess
> the question becomes why they aren't reentered until drain. We do
> process events:
>
>     AIO_WAIT_WHILE(NULL, blk_exp_has_type(type));
>
> So in theory, anything that would finalise the block export closing
> should still execute.
>
> What is the difference that drain makes compared to a simple
> AIO_WAIT_WHILE, so that coroutines are reentered during drain, but not
> during AIO_WAIT_WHILE?
>
> This is an even more interesting question because the NBD server isn't a
> block node nor a BdrvChildClass implementation, so it shouldn't even
> notice a drain operation.

I agree in that this deserves a deeper analysis. I'm going to drop
this patch from the series, and will re-analyze the issue later.

Thanks,
Sergio.

--f2qqvpnhxsbc774i
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEvtX891EthoCRQuii9GknjS8MAjUFAl/Y8cMACgkQ9GknjS8M
AjUkDRAApetpCE3uZgwqDD+iYgtyPIC0m50Ehe0DFhC7yDo9Fnh5gwBkI4jetoN1
q+eNqnBcD9hjLsTbc8ffCWVU6i8AGLA5K+usJ6Erm3iBTttHcpvG+sdSRpam3hjk
dGXuBK9AE5PHIbSxDOCi2LuX9FCyUrCWiTAE/lnRnkNwhfsPnJcamDjIBRFPdE99
0kDjGuL63UpVdsTAHu5B5KvSaCyY58SZk/aQLl7PIMoVIWvYuE8xBzO/MCFK6js8
WGokHSqqIJPjoHKBLNxU2m7HCWt2E2kTxxmL21GfgdKC9vW0pxB7vXdQI8RYuM2D
jpX9w6EMnUT+WnfUldv0qfFLE0XolM8YEw0is4Y7u8oNRHeW9hgyEM1QCnv3gdc8
XAVgKB2X2/NPboPz9+pGsS/06RN1y6vMLA/aW0YWTcf8p8MGEpiIcR+nmX4LbFUH
qD0a8GrpTmVOWZGGeFdiwxxlCOzvC7NftdSCaJeLjf3BxpuW8Ok8+HE11oya7iYY
ZZvf3SHttWNdxGQCSUNCiM8Z7BkQne0sBqBw94bfRwLbh8BRhytSS6RvWxePSXKo
k7L4Er1z9/T+8RGuIZcJ/Qmc6NOoSmYyxpJ8lhFdyLe3cIXh24Fkd5yla6Qo6HQW
rEp1FBZcAZjS6nD8XeKXKIPkXYEuspjwv12XprG7BFT3zp0rcRE=
=aKje
-----END PGP SIGNATURE-----

--f2qqvpnhxsbc774i--



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 17:30:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 17:30:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54067.95235 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpE9r-0007Gw-40; Tue, 15 Dec 2020 17:30:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54067.95235; Tue, 15 Dec 2020 17:30:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpE9r-0007Gp-0z; Tue, 15 Dec 2020 17:30:43 +0000
Received: by outflank-mailman (input) for mailman id 54067;
 Tue, 15 Dec 2020 13:15:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LaZc=FT=redhat.com=slp@srs-us1.protection.inumbo.net>)
 id 1kpAB4-00041o-8o
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 13:15:42 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id f0ead8c4-cdf4-4b02-8eb7-97961e79dd96;
 Tue, 15 Dec 2020 13:15:40 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-502-udzab7_jMkuYDONNk2Yxlw-1; Tue, 15 Dec 2020 08:15:38 -0500
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com
 [10.5.11.22])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 8D398102CB29;
 Tue, 15 Dec 2020 13:15:36 +0000 (UTC)
Received: from localhost (ovpn-114-253.rdu2.redhat.com [10.10.114.253])
 by smtp.corp.redhat.com (Postfix) with ESMTP id AB18010013C1;
 Tue, 15 Dec 2020 13:15:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f0ead8c4-cdf4-4b02-8eb7-97961e79dd96
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1608038140;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Z79n8v+sbiSgNAX7d3rn1sXk7k+bJlB46c28y6qkcZQ=;
	b=AZ0fjRBv8g7/Iwp4TOjnmAUZI2YwEScVBFOWsXfnnPHHYtgwLO1KzrfHz2SONTMAKOShAe
	rne7auVEsiEtxlD/hcO0BHPb00Xw+33wYsdHM99l8NeGmLaTx84vGdpftyeNrWAaS6oth/
	qo0L3S62IB/UcPKBI9unI26RrvhYIHg=
X-MC-Unique: udzab7_jMkuYDONNk2Yxlw-1
Date: Tue, 15 Dec 2020 14:15:27 +0100
From: Sergio Lopez <slp@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
	qemu-block@nongnu.org, Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>, Fam Zheng <fam@euphon.net>,
	Eric Blake <eblake@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
	Max Reitz <mreitz@redhat.com>
Subject: Re: [PATCH v2 2/4] block: Avoid processing BDS twice in
 bdrv_set_aio_context_ignore()
Message-ID: <20201215131527.evpidxevevtfy54n@mhamilton>
References: <20201214170519.223781-1-slp@redhat.com>
 <20201214170519.223781-3-slp@redhat.com>
 <20201215121233.GD8185@merkur.fritz.box>
MIME-Version: 1.0
In-Reply-To: <20201215121233.GD8185@merkur.fritz.box>
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=slp@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="mgsiejcqqudu3t6r"
Content-Disposition: inline

--mgsiejcqqudu3t6r
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Tue, Dec 15, 2020 at 01:12:33PM +0100, Kevin Wolf wrote:
> Am 14.12.2020 um 18:05 hat Sergio Lopez geschrieben:
> > While processing the parents of a BDS, one of the parents may process
> > the child that's doing the tail recursion, which leads to a BDS being
> > processed twice. This is especially problematic for the aio_notifiers,
> > as they might attempt to work on both the old and the new AIO
> > contexts.
> >
> > To avoid this, add the BDS pointer to the ignore list, and check the
> > child BDS pointer while iterating over the children.
> >
> > Signed-off-by: Sergio Lopez <slp@redhat.com>
>
> Ugh, so we get a mixed list of BdrvChild and BlockDriverState? :-/

I know, it's effective but quite ugly...

> What is the specific scenario where you saw this breaking? Did you have
> multiple BdrvChild connections between two nodes so that we would go to
> the parent node through one and then come back to the child node through
> the other?

I don't think this is a corner case. If the graph is walked top->down,
there's no problem since children are added to the ignore list before
getting processed, and siblings don't process each other. But, if the
graph is walked bottom->up, a BDS will start processing its parents
without adding itself to the ignore list, so there's nothing
preventing them from processing it again.

I'm pasting here an annotated trace of bdrv_set_aio_context_ignore I
generated while triggering the issue:

<----- begin ------>
bdrv_set_aio_context_ignore: bs=0x555ee2e48030 enter
bdrv_set_aio_context_ignore: bs=0x555ee2e48030 processing children
bdrv_set_aio_context_ignore: bs=0x555ee2e5d420 enter
bdrv_set_aio_context_ignore: bs=0x555ee2e5d420 processing children
bdrv_set_aio_context_ignore: bs=0x555ee2e52060 enter
bdrv_set_aio_context_ignore: bs=0x555ee2e52060 processing children
bdrv_set_aio_context_ignore: bs=0x555ee2e52060 processing parents
bdrv_set_aio_context_ignore: bs=0x555ee2e52060 processing itself
bdrv_set_aio_context_ignore: bs=0x555ee2e5d420 processing parents

 - We enter b_s_a_c_i with BDS 2fbf660 the first time:

bdrv_set_aio_context_ignore: bs=0x555ee2fbf660 enter
bdrv_set_aio_context_ignore: bs=0x555ee2fbf660 processing children

 - We enter b_s_a_c_i with BDS 3bc0c00, a child of 2fbf660:

bdrv_set_aio_context_ignore: bs=0x555ee3bc0c00 enter
bdrv_set_aio_context_ignore: bs=0x555ee3bc0c00 processing children
bdrv_set_aio_context_ignore: bs=0x555ee3bc0c00 processing parents
bdrv_set_aio_context_ignore: bs=0x555ee3bc0c00 processing itself

 - We start processing its parents:

bdrv_set_aio_context_ignore: bs=0x555ee2fbf660 processing parents

 - We enter b_s_a_c_i with BDS 2e48030, a parent of 2fbf660:

bdrv_set_aio_context_ignore: bs=0x555ee2e48030 enter
bdrv_set_aio_context_ignore: bs=0x555ee2e48030 processing children

 - We enter b_s_a_c_i with BDS 2fbf660 again, because parent
   2e48030 didn't find it in the ignore list:

bdrv_set_aio_context_ignore: bs=0x555ee2fbf660 enter
bdrv_set_aio_context_ignore: bs=0x555ee2fbf660 processing children
bdrv_set_aio_context_ignore: bs=0x555ee2fbf660 processing parents
bdrv_set_aio_context_ignore: bs=0x555ee2fbf660 processing itself
bdrv_set_aio_context_ignore: bs=0x555ee2e48030 processing parents
bdrv_set_aio_context_ignore: bs=0x555ee2e48030 processing itself

 - BDS 2fbf660 will be processed here a second time, triggering the
   issue:

bdrv_set_aio_context_ignore: bs=0x555ee2fbf660 processing itself
<----- end ------>

I suspect this has been happening for a while, and has only surfaced
now due to the need to run an AIO context BH in an aio_notifier
function that the "nbd/server: Quiesce coroutines on context switch"
patch introduces. There the problem is that the first time the
aio_notifier AIO detach function is called, it works on the old
context (as it should be), and the second one works on the new context
(which is wrong).

> Maybe if what we really need to do is not processing every edge once,
> but processing every node once, the list should be changed to contain
> _only_ BDS objects. But then blk_do_set_aio_context() probably won't
> work any more because it can't have blk->root ignored any more...

I tried that in my first attempt and it broke badly. I didn't take a
deeper look at the causes.

> Anyway, if we end up changing what the list contains, the comment needs
> an update, too. Currently it says:
>
>  * @ignore will accumulate all visited BdrvChild object. The caller is
>  * responsible for freeing the list afterwards.
>
> Another option: Split the parents QLIST_FOREACH loop in two. First add
> all parent BdrvChild objects to the ignore list, remember which of them
> were newly added, and only after adding all of them call
> child->klass->set_aio_ctx() for each parent that was previously not on
> the ignore list. This will avoid that we come back to the same node
> because all of its incoming edges are ignored now.

I don't think this strategy will fix the issue illustrated in the
trace above, as the BdrvChild pointer of the BDS processing its
parents won't be on the ignore list by the time one of its parents
starts processing its own children.
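For what it's worth, the node-once idea can be shown in a toy model (illustrative C only, not QEMU code — and, as noted above, the real tree-wide switch to a node list broke blk_do_set_aio_context in practice): adding each node to the ignore list on entry makes a second visit impossible, even with parent/child back-references or parallel edges.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of "ignore nodes, not edges". All names and the fixed-size
 * arrays are illustrative; this is not the QEMU data structure. */
#define MAX 8

struct node {
    struct node *children[MAX];
    struct node *parents[MAX];
    int nchildren, nparents;
    int processed;              /* times "processing itself" ran */
};

static bool seen(struct node **ignore, int n, struct node *x)
{
    for (int i = 0; i < n; i++)
        if (ignore[i] == x)
            return true;
    return false;
}

static void set_ctx(struct node *bs, struct node **ignore, int *n)
{
    ignore[(*n)++] = bs;        /* mark the node itself up front */
    for (int i = 0; i < bs->nchildren; i++)
        if (!seen(ignore, *n, bs->children[i]))
            set_ctx(bs->children[i], ignore, n);
    for (int i = 0; i < bs->nparents; i++)
        if (!seen(ignore, *n, bs->parents[i]))
            set_ctx(bs->parents[i], ignore, n);
    bs->processed++;            /* "processing itself" */
}
```

Running this on the shape from the trace (a node with one child and one parent that points back at it) processes each node exactly once.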

Thanks,
Sergio.

> >  block.c | 7 ++++++-
> >  1 file changed, 6 insertions(+), 1 deletion(-)
> >
> > diff --git a/block.c b/block.c
> > index f1cedac362..bc8a66ab6e 100644
> > --- a/block.c
> > +++ b/block.c
> > @@ -6465,12 +6465,17 @@ void bdrv_set_aio_context_ignore(BlockDriverState *bs,
> >      bdrv_drained_begin(bs);
> >
> >      QLIST_FOREACH(child, &bs->children, next) {
> > -        if (g_slist_find(*ignore, child)) {
> > +        if (g_slist_find(*ignore, child) || g_slist_find(*ignore, child->bs)) {
> >              continue;
> >          }
> >          *ignore = g_slist_prepend(*ignore, child);
> >          bdrv_set_aio_context_ignore(child->bs, new_context, ignore);
> >      }
> > +    /*
> > +     * Add a reference to this BS to the ignore list, so its
> > +     * parents won't attempt to process it again.
> > +     */
> > +    *ignore = g_slist_prepend(*ignore, bs);
> >      QLIST_FOREACH(child, &bs->parents, next_parent) {
> >          if (g_slist_find(*ignore, child)) {
> >              continue;
> > --
> > 2.26.2
> >
>

--mgsiejcqqudu3t6r
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEvtX891EthoCRQuii9GknjS8MAjUFAl/YtuwACgkQ9GknjS8M
AjWu7xAAtCuMZM0F+PLvGZwJNfkdkzIwymByhID9/nb2Bp3OB/76uSBeJebeTJpB
/PywhpZAso5RaRygiZRn0v0Fbo8dvPSlk4nyFkqJRCHITQaAYsh/bQaznQ7LdUBo
3r0HVQ78H1QBVEmWxkQNa/XajGaPXyMj8ShWCvKMJAkNHH2FRMXYF5bz8vn5P7tD
98zQpArgME/Ai7knKEOMsbhrp4TxNlmWAw0qtUeOmFoYqJFjhgOM1ZzbI52jbA2K
SKcPh3moULw53+UOqiIyw4iUvQTR611ozA+sHxIuSXJq2D29P6c0q2ozanAxnm4c
9ejn3EPH3ynMMPMBh/c/vJcMHwgIacPQFdv7nOu5zPkwTwHhn6hOle/Spre+Sd6Y
hPSBWMB5JHFBAhsufxcsenTwXbFpXurBrhtxI/hlO+izV+bR7masSjahtlwTyJug
BnRtQu0ZRXiFZna5XicNeOGruwj09hF0kCrQ3U8x5X9BuNFwiGawZHAvrSMYpb2l
4GNwfBHBaXFthu1wjmCk6JayH9JfOk3p+JdDhFAZzNOTCPiMwJ56QQt7KCCiu38R
+RL977/C3VH/Y+Gzbdm0EtaMiNUa3jKnCgeiPiylJUAo6TXIoMdPAvzvdbMcV5ZT
kMRSR4oY5a99W9ai1fzZWqpvQ4H6O7MHgqTu0Y0coGnh4ZuHUtk=
=lne5
-----END PGP SIGNATURE-----

--mgsiejcqqudu3t6r--



From xen-devel-bounces@lists.xenproject.org Tue Dec 15 18:09:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 18:09:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54747.95277 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpElJ-0002En-5M; Tue, 15 Dec 2020 18:09:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54747.95277; Tue, 15 Dec 2020 18:09:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpElJ-0002Eg-1q; Tue, 15 Dec 2020 18:09:25 +0000
Received: by outflank-mailman (input) for mailman id 54747;
 Tue, 15 Dec 2020 18:09:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6ufw=FT=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kpElH-0002Eb-NG
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 18:09:23 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 519c9373-4cdb-40f7-bc39-7dced4e25519;
 Tue, 15 Dec 2020 18:09:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 519c9373-4cdb-40f7-bc39-7dced4e25519
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608055761;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=uRlhFGp0BOyW4m4+IOML7lSSobqn6PHuWxZvG4M2uxI=;
  b=DLEp4X9wPKhj9DoAz196Xi74MsTBsC+6a+IQL/uFQIywqaw9oV1qwh0I
   z4NfGXqX8hi0hMEKruaIQOT6ZKFCXKWesZaisxTBJwnxZ6iGgUtJvMUcp
   WFnPT0cgC9TPtcBoVCbt8tWqbVG3/q1MgZ+0k5dxc3uV1oHi9fAV0SuX5
   8=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: KDUDwgAs5IG1brpXMjRQilrzyahVUQVB574hx8NxgS+z+hbb7ZBO4OgFqkndlbiFbt/jck8L4t
 125EDEZEjqIqiId7J4UFRnX1fKvjqkI05UakjGfUhcTIXHgVSzqrf08Tp14coFYTwyEBfRzIaS
 SJc2GKepZOyGP63Y3QunqKrR74F3taI3t42l5SvGhZRoaR6uAgzQWBC4CjvEZHDYKd+Zlb7mX7
 SIy3kSZL59v9NwpKJcWgAdoHmvmbROX4eV2uc6qmev48WSjPLUMijC8WWHK2u1QuJtzFR5Qk4X
 FmM=
X-SBRS: 5.2
X-MesageID: 34499476
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,422,1599537600"; 
   d="scan'208";a="34499476"
Subject: Re: [PATCH v10 04/25] tools/libxenevtchn: add possibility to not
 close file descriptor on exec
To: Juergen Gross <jgross@suse.com>, <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, Julien Grall
	<jgrall@amazon.com>
References: <20201215163603.21700-1-jgross@suse.com>
 <20201215163603.21700-5-jgross@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <3c8ab988-725e-2823-23f6-d9286a04243e@citrix.com>
Date: Tue, 15 Dec 2020 18:09:14 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201215163603.21700-5-jgross@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 15/12/2020 16:35, Juergen Gross wrote:
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Reviewed-by: Wei Liu <wl@xen.org>
> Reviewed-by: Julien Grall <jgrall@amazon.com>
> ---
> V7:
> - new patch
>
> V8:
> - some minor comments by Julien Grall addressed
>
> Signed-off-by: Juergen Gross <jgross@suse.com>

Several of your patches still have a double SoB.  (Just a note for
anyone committing to be careful...)

> diff --git a/tools/include/xenevtchn.h b/tools/include/xenevtchn.h
> index 91821ee56d..dadc46ea36 100644
> --- a/tools/include/xenevtchn.h
> +++ b/tools/include/xenevtchn.h
> @@ -64,11 +64,25 @@ struct xentoollog_logger;
>   *
>   * Calling xenevtchn_close() is the only safe operation on a
>   * xenevtchn_handle which has been inherited.
> + *
> + * Setting XENEVTCHN_NO_CLOEXEC allows to keep the file descriptor used
> + * for the event channel driver open across exec(2). In order to be able
> + * to use that file descriptor the new binary activated via exec(2) has
> + * to call xenevtchn_open_fd() with that file descriptor as parameter in
> + * order to associate it with a new handle. The file descriptor can be
> + * obtained via xenevtchn_fd() before calling exec(2).
>   */

More of the comment block than this needs adjusting in light of the
exec() changes.

> -/* Currently no flags are defined */
> +
> +/* Don't set O_CLOEXEC when opening event channel driver node. */
> +#define XENEVTCHN_NO_CLOEXEC 0x01
> +
>  xenevtchn_handle *xenevtchn_open(struct xentoollog_logger *logger,
>                                   unsigned open_flags);
>  
> +/* Flag XENEVTCHN_NO_CLOEXEC is ignored by xenevtchn_open_fd(). */
> +xenevtchn_handle *xenevtchn_open_fd(struct xentoollog_logger *logger,
> +                                    int fd, unsigned open_flags);
> +

I spotted this before, but didn't have time to reply.

This isn't "open fd".  It is "construct a xenevtchn_handle object around
an already-open fd".  As such, open_flags appears bogus because at no
point are we making an open() call.  (I'd argue that, irrespective of
other things, it wants naming xenevtchn_fdopen() for API familiarity.)

However, the root of the problem is actually the ambiguity in the name. 
These are not flags to the open() system call, but general flags for
xenevtchn.

Therefore, I recommend a prep patch which renames open_flags to just
flags, and while at it, changes to unsigned int rather than a naked
"unsigned" type.  There are no API/ABI implications for this, but it
will help massively with code clarity.

I'd also possibly go as far as to say that plumbing 'flags' down into
osdep ought to split out into a separate patch.  There is also a wild
mix of coding styles even within the hunks here.

Additionally, something in core.c should check for unknown flags and
reject them with EINVAL.  It was buggy that this wasn't done
before, and really needs to be implemented before we start having cases
where people might plausibly pass something other than 0.
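A minimal sketch of that check (only XENEVTCHN_NO_CLOEXEC comes from the patch; the handle layout and the _sketch names are hypothetical, not the real libxenevtchn core.c):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Hypothetical sketch of rejecting unknown flags with EINVAL. */
#define XENEVTCHN_NO_CLOEXEC  0x01
#define XENEVTCHN_KNOWN_FLAGS XENEVTCHN_NO_CLOEXEC

typedef struct xenevtchn_handle_sketch {
    unsigned int flags;
} xenevtchn_handle_sketch;

static xenevtchn_handle_sketch one_handle;

xenevtchn_handle_sketch *xenevtchn_open_sketch(unsigned int flags)
{
    if (flags & ~XENEVTCHN_KNOWN_FLAGS) {
        errno = EINVAL;         /* reject unknown flags up front */
        return NULL;
    }
    one_handle.flags = flags;
    return &one_handle;
}
```

The point is simply that callers passing garbage in `flags` get a hard error today, instead of silently working until a future flag gives their garbage a meaning.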

~Andrew

P.S. if you don't fancy doing all of this, my brain could do with a
break from the complicated work, and I can see about organising this
cleanup.


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 18:20:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 18:20:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54753.95289 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpEvZ-0003Fw-5d; Tue, 15 Dec 2020 18:20:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54753.95289; Tue, 15 Dec 2020 18:20:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpEvZ-0003Fp-2R; Tue, 15 Dec 2020 18:20:01 +0000
Received: by outflank-mailman (input) for mailman id 54753;
 Tue, 15 Dec 2020 18:19:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kpEvX-0003Fk-RN
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 18:19:59 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kpEvX-0004uY-N5; Tue, 15 Dec 2020 18:19:59 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kpEvX-00011t-Bq; Tue, 15 Dec 2020 18:19:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=j4hluyl/wJZlPMsnYC4TRrDdiT8x0rl9+u5M9EYKY7A=; b=jP24mNx5s2yi+6GHYxbc0joK/H
	TWSxamamA/biYQsJdlTSf4g+Ck7HyjBXsrKgolDBjFgZNSWLUh/RKy7K3d0I6QJuWL+qXE/kWXxT9
	Kmy66Ba/8RO3zX7Ne1D5+Vf3CWl2VWEYySx53/MbIox+KZYD3eJ9xyjBEWANDC7IV5AE=;
Subject: Re: [RFC PATCH 1/6] xen/arm: Support detection of CPU features in
 other ID registers
To: Ash Wilding <ash.j.wilding@gmail.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, rahul.singh@arm.com
References: <20201105185603.24149-1-ash.j.wilding@gmail.com>
 <20201105185603.24149-2-ash.j.wilding@gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <3fb7ab46-cedd-1ce2-03d5-ee4a6c7dab85@xen.org>
Date: Tue, 15 Dec 2020 18:19:57 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201105185603.24149-2-ash.j.wilding@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Ash,

Apologies for the late reply.

On 05/11/2020 18:55, Ash Wilding wrote:
> The current Arm boot_cpu_feature64() and boot_cpu_feature32() macros
> are hardcoded to only detect features in ID_AA64PFR{0,1}_EL1 and
> ID_PFR{0,1} respectively.
> 
> This patch replaces these macros with a new macro, boot_cpu_feature(),
> which takes an explicit ID register name as an argument.
> 
> While we're here, cull cpu_feature64() and cpu_feature32() as they
> have no callers (we only ever use the boot CPU features), and update
> the printk() messages in setup.c to use the new macro.
> 
> Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
> ---
>   xen/arch/arm/setup.c             |  8 +++---
>   xen/include/asm-arm/cpufeature.h | 44 +++++++++++++++-----------------
>   2 files changed, 24 insertions(+), 28 deletions(-)
> 
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 7fcff9af2a..5121f06fc5 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -134,16 +134,16 @@ static void __init processor_id(void)
>              cpu_has_gicv3 ? " GICv3-SysReg" : "");
>   
>       /* Warn user if we find unknown floating-point features */
> -    if ( cpu_has_fp && (boot_cpu_feature64(fp) >= 2) )
> +    if ( cpu_has_fp && (boot_cpu_feature(pfr64, fp) >= 2) )
>           printk(XENLOG_WARNING "WARNING: Unknown Floating-point ID:%d, "
>                  "this may result in corruption on the platform\n",
> -               boot_cpu_feature64(fp));
> +               boot_cpu_feature(pfr64, fp));
>   
>       /* Warn user if we find unknown AdvancedSIMD features */
> -    if ( cpu_has_simd && (boot_cpu_feature64(simd) >= 2) )
> +    if ( cpu_has_simd && (boot_cpu_feature(pfr64, simd) >= 2) )
>           printk(XENLOG_WARNING "WARNING: Unknown AdvancedSIMD ID:%d, "
>                  "this may result in corruption on the platform\n",
> -               boot_cpu_feature64(simd));
> +               boot_cpu_feature(pfr64, simd));
>   
>       printk("  Debug Features: %016"PRIx64" %016"PRIx64"\n",
>              boot_cpu_data.dbg64.bits[0], boot_cpu_data.dbg64.bits[1]);
> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
> index 10878ead8a..f9281ea343 100644
> --- a/xen/include/asm-arm/cpufeature.h
> +++ b/xen/include/asm-arm/cpufeature.h
> @@ -1,39 +1,35 @@
>   #ifndef __ASM_ARM_CPUFEATURE_H
>   #define __ASM_ARM_CPUFEATURE_H
>   
> +#define boot_cpu_feature(idreg, feat) (boot_cpu_data.idreg.feat)
> +
>   #ifdef CONFIG_ARM_64
> -#define cpu_feature64(c, feat)         ((c)->pfr64.feat)
> -#define boot_cpu_feature64(feat)       (boot_cpu_data.pfr64.feat)
> -
> -#define cpu_has_el0_32    (boot_cpu_feature64(el0) == 2)
> -#define cpu_has_el0_64    (boot_cpu_feature64(el0) >= 1)
> -#define cpu_has_el1_32    (boot_cpu_feature64(el1) == 2)
> -#define cpu_has_el1_64    (boot_cpu_feature64(el1) >= 1)
> -#define cpu_has_el2_32    (boot_cpu_feature64(el2) == 2)
> -#define cpu_has_el2_64    (boot_cpu_feature64(el2) >= 1)
> -#define cpu_has_el3_32    (boot_cpu_feature64(el3) == 2)
> -#define cpu_has_el3_64    (boot_cpu_feature64(el3) >= 1)
> -#define cpu_has_fp        (boot_cpu_feature64(fp) < 8)
> -#define cpu_has_simd      (boot_cpu_feature64(simd) < 8)
> -#define cpu_has_gicv3     (boot_cpu_feature64(gic) == 1)
> +#define cpu_has_el0_32          (boot_cpu_feature(pfr64, el0) == 2)
> +#define cpu_has_el0_64          (boot_cpu_feature(pfr64, el0) >= 1)
> +#define cpu_has_el1_32          (boot_cpu_feature(pfr64, el1) == 2)
> +#define cpu_has_el1_64          (boot_cpu_feature(pfr64, el1) >= 1)
> +#define cpu_has_el2_32          (boot_cpu_feature(pfr64, el2) == 2)
> +#define cpu_has_el2_64          (boot_cpu_feature(pfr64, el2) >= 1)
> +#define cpu_has_el3_32          (boot_cpu_feature(pfr64, el3) == 2)
> +#define cpu_has_el3_64          (boot_cpu_feature(pfr64, el3) >= 1)
> +#define cpu_has_fp              (boot_cpu_feature(pfr64, fp) < 8)
> +#define cpu_has_simd            (boot_cpu_feature(pfr64, simd) < 8)
> +#define cpu_has_gicv3           (boot_cpu_feature(pfr64, gic) == 1)

May I ask why the indentation was changed here?

The rest of the patch looks good to me:

Acked-by: Julien Grall <jgrall@amazon.com>

>   #endif
>   
> -#define cpu_feature32(c, feat)         ((c)->pfr32.feat)
> -#define boot_cpu_feature32(feat)       (boot_cpu_data.pfr32.feat)
> -
> -#define cpu_has_arm       (boot_cpu_feature32(arm) == 1)
> -#define cpu_has_thumb     (boot_cpu_feature32(thumb) >= 1)
> -#define cpu_has_thumb2    (boot_cpu_feature32(thumb) >= 3)
> -#define cpu_has_jazelle   (boot_cpu_feature32(jazelle) > 0)
> -#define cpu_has_thumbee   (boot_cpu_feature32(thumbee) == 1)
> +#define cpu_has_arm       (boot_cpu_feature(pfr32, arm) == 1)
> +#define cpu_has_thumb     (boot_cpu_feature(pfr32, thumb) >= 1)
> +#define cpu_has_thumb2    (boot_cpu_feature(pfr32, thumb) >= 3)
> +#define cpu_has_jazelle   (boot_cpu_feature(pfr32, jazelle) > 0)
> +#define cpu_has_thumbee   (boot_cpu_feature(pfr32, thumbee) == 1)
>   #define cpu_has_aarch32   (cpu_has_arm || cpu_has_thumb)
>   
>   #ifdef CONFIG_ARM_32
> -#define cpu_has_gentimer  (boot_cpu_feature32(gentimer) == 1)
> +#define cpu_has_gentimer  (boot_cpu_feature(pfr32, gentimer) == 1)
>   #else
>   #define cpu_has_gentimer  (1)
>   #endif
> -#define cpu_has_security  (boot_cpu_feature32(security) > 0)
> +#define cpu_has_security  (boot_cpu_feature(pfr32, security) > 0)
>   
>   #define ARM64_WORKAROUND_CLEAN_CACHE    0
>   #define ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE    1
> 
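As an aside, the appeal of the generalized macro is easy to see in a toy model (the struct layout and field values below are invented for illustration, not Xen's actual cpuinfo):

```c
#include <assert.h>

/* Toy model: one accessor macro parameterized on the cached
 * ID-register name, instead of one macro per register. */
struct toy_pfr64 { unsigned int fp, simd, gic; };
struct toy_pfr32 { unsigned int arm, thumb; };

static struct {
    struct toy_pfr64 pfr64;
    struct toy_pfr32 pfr32;
} boot_cpu_data = { { 0, 0, 1 }, { 1, 3 } };

#define boot_cpu_feature(idreg, feat) (boot_cpu_data.idreg.feat)

/* The per-register helpers collapse into uses of the one macro: */
#define cpu_has_gicv3  (boot_cpu_feature(pfr64, gic) == 1)
#define cpu_has_thumb2 (boot_cpu_feature(pfr32, thumb) >= 3)
```

Adding a feature check against a newly cached ID register then needs no new accessor macro at all.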

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 18:38:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 18:38:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54758.95301 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpFD9-00052Z-PN; Tue, 15 Dec 2020 18:38:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54758.95301; Tue, 15 Dec 2020 18:38:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpFD9-00052S-Kz; Tue, 15 Dec 2020 18:38:11 +0000
Received: by outflank-mailman (input) for mailman id 54758;
 Tue, 15 Dec 2020 18:38:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kpFD7-00052M-NB
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 18:38:09 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kpFD7-0005Cr-Fk; Tue, 15 Dec 2020 18:38:09 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kpFD7-0002c7-50; Tue, 15 Dec 2020 18:38:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=LALxq+qUWS03eoWYacC62lznzYhwKjKvDAm+Not7hYI=; b=PzCbbQjwNFvq8huJNniOJipY31
	H9W6hJIFQoLRQwB0iI74uynj2IxZp1hZJQY1PMyVRyU1eZctpvdw5ATLaXJtOxVUxMwN3ljKLj9kr
	NC8safmcThESLIq/lCZLp9nHQq/4sig7UzPxxjXWT8VY9/gEDyWd7C4RYIYgPnwvfICU=;
Subject: Re: [RFC PATCH v2 08/15] xen/arm64: port Linux's arm64 atomic_ll_sc.h
 to Xen
To: Ash Wilding <ash.j.wilding@gmail.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, rahul.singh@arm.com
References: <20201111215203.80336-1-ash.j.wilding@gmail.com>
 <20201111215203.80336-9-ash.j.wilding@gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <17e5fe0b-cb9f-0411-1862-363d9dc4385f@xen.org>
Date: Tue, 15 Dec 2020 18:38:07 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201111215203.80336-9-ash.j.wilding@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Ash,

On 11/11/2020 21:51, Ash Wilding wrote:
> From: Ash Wilding <ash.j.wilding@gmail.com>
> 
> Most of the "work" here is simply deleting the atomic64_t helper
> definitions as we don't have an atomic64_t type in Xen.

There have been some requests to support atomic64_t in Xen. I would 
suggest keeping these helpers, as that would make it simpler to support 
in the future.

Although, we can probably just revert this part of the patch when it 
becomes necessary.

> 
> Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
> ---
>   xen/include/asm-arm/arm64/atomic_ll_sc.h | 134 +----------------------

I think we want to update README.LinuxPrimitives also with the new 
baseline for the helpers.

>   1 file changed, 6 insertions(+), 128 deletions(-)
> 
> diff --git a/xen/include/asm-arm/arm64/atomic_ll_sc.h b/xen/include/asm-arm/arm64/atomic_ll_sc.h
> index e1009c0f94..20b0cb174e 100644
> --- a/xen/include/asm-arm/arm64/atomic_ll_sc.h
> +++ b/xen/include/asm-arm/arm64/atomic_ll_sc.h
> @@ -1,16 +1,16 @@
> -/* SPDX-License-Identifier: GPL-2.0-only */
>   /*
> - * Based on arch/arm/include/asm/atomic.h
> + * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
>    *
>    * Copyright (C) 1996 Russell King.
>    * Copyright (C) 2002 Deep Blue Solutions Ltd.
>    * Copyright (C) 2012 ARM Ltd.
> + * SPDX-License-Identifier: GPL-2.0-only

May I ask why the SPDX license was moved around?

>    */
>   
> -#ifndef __ASM_ATOMIC_LL_SC_H
> -#define __ASM_ATOMIC_LL_SC_H
> +#ifndef __ASM_ARM_ARM64_ATOMIC_LL_SC_H
> +#define __ASM_ARM_ARM64_ATOMIC_LL_SC_H

I would suggest to keep the Linux guards.

>   
> -#include <linux/stringify.h>
> +#include <xen/stringify.h>
>   
>   #ifdef CONFIG_ARM64_LSE_ATOMICS
>   #define __LL_SC_FALLBACK(asm_ops)					\
> @@ -134,128 +134,6 @@ ATOMIC_OPS(andnot, bic, )
>   #undef ATOMIC_OP_RETURN
>   #undef ATOMIC_OP
>   
> -#define ATOMIC64_OP(op, asm_op, constraint)				\
> -static inline void							\
> -__ll_sc_atomic64_##op(s64 i, atomic64_t *v)				\
> -{									\
> -	s64 result;							\
> -	unsigned long tmp;						\
> -									\
> -	asm volatile("// atomic64_" #op "\n"				\
> -	__LL_SC_FALLBACK(						\
> -"	prfm	pstl1strm, %2\n"					\
> -"1:	ldxr	%0, %2\n"						\
> -"	" #asm_op "	%0, %0, %3\n"					\
> -"	stxr	%w1, %0, %2\n"						\
> -"	cbnz	%w1, 1b")						\
> -	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)		\
> -	: __stringify(constraint) "r" (i));				\
> -}
> -
> -#define ATOMIC64_OP_RETURN(name, mb, acq, rel, cl, op, asm_op, constraint)\
> -static inline long							\
> -__ll_sc_atomic64_##op##_return##name(s64 i, atomic64_t *v)		\
> -{									\
> -	s64 result;							\
> -	unsigned long tmp;						\
> -									\
> -	asm volatile("// atomic64_" #op "_return" #name "\n"		\
> -	__LL_SC_FALLBACK(						\
> -"	prfm	pstl1strm, %2\n"					\
> -"1:	ld" #acq "xr	%0, %2\n"					\
> -"	" #asm_op "	%0, %0, %3\n"					\
> -"	st" #rel "xr	%w1, %0, %2\n"					\
> -"	cbnz	%w1, 1b\n"						\
> -"	" #mb )								\
> -	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)		\
> -	: __stringify(constraint) "r" (i)				\
> -	: cl);								\
> -									\
> -	return result;							\
> -}
> -
> -#define ATOMIC64_FETCH_OP(name, mb, acq, rel, cl, op, asm_op, constraint)\
> -static inline long							\
> -__ll_sc_atomic64_fetch_##op##name(s64 i, atomic64_t *v)		\
> -{									\
> -	s64 result, val;						\
> -	unsigned long tmp;						\
> -									\
> -	asm volatile("// atomic64_fetch_" #op #name "\n"		\
> -	__LL_SC_FALLBACK(						\
> -"	prfm	pstl1strm, %3\n"					\
> -"1:	ld" #acq "xr	%0, %3\n"					\
> -"	" #asm_op "	%1, %0, %4\n"					\
> -"	st" #rel "xr	%w2, %1, %3\n"					\
> -"	cbnz	%w2, 1b\n"						\
> -"	" #mb )								\
> -	: "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter)	\
> -	: __stringify(constraint) "r" (i)				\
> -	: cl);								\
> -									\
> -	return result;							\
> -}
> -
> -#define ATOMIC64_OPS(...)						\
> -	ATOMIC64_OP(__VA_ARGS__)					\
> -	ATOMIC64_OP_RETURN(, dmb ish,  , l, "memory", __VA_ARGS__)	\
> -	ATOMIC64_OP_RETURN(_relaxed,,  ,  ,         , __VA_ARGS__)	\
> -	ATOMIC64_OP_RETURN(_acquire,, a,  , "memory", __VA_ARGS__)	\
> -	ATOMIC64_OP_RETURN(_release,,  , l, "memory", __VA_ARGS__)	\
> -	ATOMIC64_FETCH_OP (, dmb ish,  , l, "memory", __VA_ARGS__)	\
> -	ATOMIC64_FETCH_OP (_relaxed,,  ,  ,         , __VA_ARGS__)	\
> -	ATOMIC64_FETCH_OP (_acquire,, a,  , "memory", __VA_ARGS__)	\
> -	ATOMIC64_FETCH_OP (_release,,  , l, "memory", __VA_ARGS__)
> -
> -ATOMIC64_OPS(add, add, I)
> -ATOMIC64_OPS(sub, sub, J)
> -
> -#undef ATOMIC64_OPS
> -#define ATOMIC64_OPS(...)						\
> -	ATOMIC64_OP(__VA_ARGS__)					\
> -	ATOMIC64_FETCH_OP (, dmb ish,  , l, "memory", __VA_ARGS__)	\
> -	ATOMIC64_FETCH_OP (_relaxed,,  ,  ,         , __VA_ARGS__)	\
> -	ATOMIC64_FETCH_OP (_acquire,, a,  , "memory", __VA_ARGS__)	\
> -	ATOMIC64_FETCH_OP (_release,,  , l, "memory", __VA_ARGS__)
> -
> -ATOMIC64_OPS(and, and, L)
> -ATOMIC64_OPS(or, orr, L)
> -ATOMIC64_OPS(xor, eor, L)
> -/*
> - * GAS converts the mysterious and undocumented BIC (immediate) alias to
> - * an AND (immediate) instruction with the immediate inverted. We don't
> - * have a constraint for this, so fall back to register.
> - */
> -ATOMIC64_OPS(andnot, bic, )
> -
> -#undef ATOMIC64_OPS
> -#undef ATOMIC64_FETCH_OP
> -#undef ATOMIC64_OP_RETURN
> -#undef ATOMIC64_OP
> -
> -static inline s64
> -__ll_sc_atomic64_dec_if_positive(atomic64_t *v)
> -{
> -	s64 result;
> -	unsigned long tmp;
> -
> -	asm volatile("// atomic64_dec_if_positive\n"
> -	__LL_SC_FALLBACK(
> -"	prfm	pstl1strm, %2\n"
> -"1:	ldxr	%0, %2\n"
> -"	subs	%0, %0, #1\n"
> -"	b.lt	2f\n"
> -"	stlxr	%w1, %0, %2\n"
> -"	cbnz	%w1, 1b\n"
> -"	dmb	ish\n"
> -"2:")
> -	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
> -	:
> -	: "cc", "memory");
> -
> -	return result;
> -}
> -
>   #define __CMPXCHG_CASE(w, sfx, name, sz, mb, acq, rel, cl, constraint)	\
>   static inline u##sz							\
>   __ll_sc__cmpxchg_case_##name##sz(volatile void *ptr,			\
> @@ -350,4 +228,4 @@ __CMPXCHG_DBL(_mb, dmb ish, l, "memory")
>   #undef __CMPXCHG_DBL
>   #undef K
>   
> -#endif	/* __ASM_ATOMIC_LL_SC_H */
> \ No newline at end of file
> +#endif	/* __ASM_ARM_ARM64_ATOMIC_LL_SC_H */
> \ No newline at end of file
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 18:53:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 18:53:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54763.95312 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpFRn-0006tP-8D; Tue, 15 Dec 2020 18:53:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54763.95312; Tue, 15 Dec 2020 18:53:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpFRn-0006tI-5H; Tue, 15 Dec 2020 18:53:19 +0000
Received: by outflank-mailman (input) for mailman id 54763;
 Tue, 15 Dec 2020 18:53:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kpFRm-0006tD-Cd
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 18:53:18 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kpFRm-0005SK-3K; Tue, 15 Dec 2020 18:53:18 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kpFRl-0003gj-TC; Tue, 15 Dec 2020 18:53:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=ppg+aSo7YVAKml67KlLFcasfugqH7OiLRYbRXANzKpI=; b=iZ/79tnVaf+eqIrzkZwkq0ukGs
	5vS09m3DHdm3mhaS5kapY2qjI0F6/CQbLPP+ByZkRqSzQf5u+nkz2vE1SPX6jSMMP3aw1AaiSP6FM
	cnFSA1aCSRVI71Y7jEUDDfNZp9PYeHOcf02hVv8zM5N4CydpDh5jyBEoAllvCyUa2KE4=;
Subject: Re: [RFC PATCH v2 10/15] xen/arm64: port Linux's arm64 cmpxchg.h to
 Xen
To: Ash Wilding <ash.j.wilding@gmail.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, rahul.singh@arm.com
References: <20201111215203.80336-1-ash.j.wilding@gmail.com>
 <20201111215203.80336-11-ash.j.wilding@gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <c16534dd-d775-8085-a763-8ee48f380427@xen.org>
Date: Tue, 15 Dec 2020 18:53:16 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201111215203.80336-11-ash.j.wilding@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Ash,

On 11/11/2020 21:51, Ash Wilding wrote:
> From: Ash Wilding <ash.j.wilding@gmail.com>
> 
>   - s/arch_xchg/xchg/
> 
>   - s/arch_cmpxchg/cmpxchg/
> 
>   - Replace calls to BUILD_BUG() with calls to __bad_cmpxchg() as we
>     don't currently have a BUILD_BUG() macro in Xen and this will
>     equivalently cause a link-time error.

How complex would it be to import BUILD_BUG() into Xen? Would the 
following work:

#define BUILD_BUG() BUILD_BUG_ON(1)

> 
>   - Replace calls to VM_BUG_ON() with BUG_ON() as we don't currently
>     have a VM_BUG_ON() macro in Xen.
> 
>   - Pull in the timeout variants of cmpxchg from the original Xen
>     arm64 cmpxchg.h as these are required for guest atomics and are
>     not provided by Linux. Note these are always using LL/SC so we
>     should ideally write LSE versions at some point.

The main advantage of LSE support in Xen is to drop the timeout helpers. 
I guess this would require a bit more thought to still allow inlining.

In any case, it may make sense to move them into a separate header so 
you don't have to remove/add them. Anyway, I would be happy with the 
current approach as well.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 18:59:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 18:59:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54768.95324 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpFXg-0007B8-U8; Tue, 15 Dec 2020 18:59:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54768.95324; Tue, 15 Dec 2020 18:59:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpFXg-0007B1-RI; Tue, 15 Dec 2020 18:59:24 +0000
Received: by outflank-mailman (input) for mailman id 54768;
 Tue, 15 Dec 2020 18:59:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpFXf-0007At-Lx; Tue, 15 Dec 2020 18:59:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpFXf-0005Y6-CB; Tue, 15 Dec 2020 18:59:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpFXf-0002lj-3W; Tue, 15 Dec 2020 18:59:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kpFXe-0004l4-V0; Tue, 15 Dec 2020 18:59:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Kw2/Pxha90D4hGpHAnKygKujh0n5aHwcsAh/nL8OVu4=; b=mq869j6viKdyynnh0vb+7iio1Z
	ayECIU89jV88AkDtQrQFaeAT7Z/6U1LbFPPzjfyn2giGp2wNhfrGUSY+YMNX2E06kdPqCaZlVxN3X
	FDVrf+ME9Rk94P+xoUyKScLhhZ9C5hcZUoSYEcY5OvXRbXFUzoTCXSpn2oyGu4gQ/nbA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157555-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157555: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:guest-start/debianhvm.repeat:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=148842c98a24e508aecb929718818fbf4c2a6ff3
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Dec 2020 18:59:22 +0000

flight 157555 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157555/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-xl-qemut-debianhvm-amd64 20 guest-start/debianhvm.repeat fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                148842c98a24e508aecb929718818fbf4c2a6ff3
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  136 days
Failing since        152366  2020-08-01 20:49:34 Z  135 days  238 attempts
Testing same since   157555  2020-12-15 11:09:01 Z    0 days    1 attempts

------------------------------------------------------------
3825 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 771172 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 19:10:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 19:10:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54782.95340 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpFiK-0000Ug-2V; Tue, 15 Dec 2020 19:10:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54782.95340; Tue, 15 Dec 2020 19:10:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpFiJ-0000UZ-UT; Tue, 15 Dec 2020 19:10:23 +0000
Received: by outflank-mailman (input) for mailman id 54782;
 Tue, 15 Dec 2020 19:10:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gY+w=FT=gmail.com=xieliwei@srs-us1.protection.inumbo.net>)
 id 1kpFiI-0000UU-1i
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 19:10:22 +0000
Received: from mail-io1-xd44.google.com (unknown [2607:f8b0:4864:20::d44])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fee21534-b2bd-45ee-b1d0-0c36ee007a0d;
 Tue, 15 Dec 2020 19:10:21 +0000 (UTC)
Received: by mail-io1-xd44.google.com with SMTP id n4so21599814iow.12
 for <xen-devel@lists.xenproject.org>; Tue, 15 Dec 2020 11:10:21 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fee21534-b2bd-45ee-b1d0-0c36ee007a0d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=NXFjuHVE58ZaaSbU8iuQJAGjLdlVtuTQrlX/TBdKKyc=;
        b=g4MFJX6mzTnY1nL3zmX2eVqU9yUadL/ePYSG1PqrycHoHmrA0cONJJZO8MEjGnvSc/
         BIiM8zwY545N21r0WJZ73NpzZoG7nq950UU/9ALmfhqCqUqh9e9BtIcuw4eGkJg+wgeb
         IyBmrnZlefn3HaUWzJR8Wr19dRKO//a6Mm5rHVS3rY6FW87MlW+PouGVEf6owBwG6IA1
         W6rFOU6P+mPgBIC1b1p0GfxxrZpN+vnNPWXZGHEL/My+pzJs8m/HBnoqXoG5o5HRKaAF
         EwSlVDpn9mKzqoJ9nq/oKHCHtQuPI5egQxaM+G8svViYtKa9e2/B7+t2PuOyC5ZfOZmT
         6e2Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=NXFjuHVE58ZaaSbU8iuQJAGjLdlVtuTQrlX/TBdKKyc=;
        b=nL7Jqpit5YOX41DMZpvLgBpdNE4bijanDtvT77HdyQ8At7KR3EXGCrnN3dn8avCtZ1
         5ccICB6Cy5/6mIQILRlZ2nSMc2Jmc7HtUN1Mvlrt8xSVnQyVvxPurc0/PWZGZ/Cpd11m
         5v1RfdJIb0O+UGrSmG9roIFgIvSBRHDY7vcBobuBb1FD3eO1xrXC2BSNcWAt5vIXUP6E
         +8nY7zCg2f5g7U0KwkajfNqLDPu/vC5FeIEY5VUFPID9OVAMCP4so+hUCgS9JTgcCZJ4
         l+0qFZeB2lRG9WBo9Ximm+L7fum6kY8ez9kSfAce7Kybk5+QQQtluoATbjl0dA5S0CZy
         1csQ==
X-Gm-Message-State: AOAM5335Mzp5AmzZ0v7XviKY9YFN26Iap8QLuAorNYZ2JdzHe0AIKIqV
	8GzqL5zv5esoZhTbtgHFRLVF3DiFuQ/APB7a9ywYFvAYCqrCsA==
X-Google-Smtp-Source: ABdhPJzuGvIAfEMJb+Oa8oV9eCxTBWS0sVa2cW4gSaM6M4SpTDbO+WOJ5Nuo0/pVYQ/q/t7vA5/aMwDgcPrRZpJ2bC8=
X-Received: by 2002:a05:6638:1247:: with SMTP id o7mr39987241jas.31.1608059419841;
 Tue, 15 Dec 2020 11:10:19 -0800 (PST)
MIME-Version: 1.0
References: <mailman.2112.1604414193.711.xen-devel@lists.xenproject.org>
In-Reply-To: <mailman.2112.1604414193.711.xen-devel@lists.xenproject.org>
From: Liwei <xieliwei@gmail.com>
Date: Wed, 16 Dec 2020 03:08:50 +0800
Message-ID: <CAPE0SYz0be1ZOoNqDHpeJWeZS-1BM_zy50=Cmeo+4Aq1Na0eNQ@mail.gmail.com>
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>, 
	Dario Faggioli <dfaggioli@suse.com>
Content-Type: text/plain; charset="UTF-8"

Hi list,
    This is a reply to the thread of the same title (linked here:
https://www.mail-archive.com/xen-devel@lists.xenproject.org/msg84916.html
) which I could not reply to because I receive this list by digest.

    I'm unclear whether this is exactly the same cause, but I
experienced the same symptoms when upgrading to 4.14. The issue does
not occur if I downgrade to 4.11 (the previous version provided by
Debian). The kernel is 5.9.11 and unchanged between Xen versions.

    One thing I noticed is that if I disable the monitor/mwait
instructions on my CPU (Intel Xeon E5-2699 v4 ES), the stalls seem to
occur later in the boot. With the instructions enabled, the system
usually stalls within a few minutes of boot; with them disabled, it
can last for tens of minutes.

    Additionally disabling the HPET, or forcing the kernel to use the
PIT, makes the system somewhat usable: the stalls still occur tens of
minutes in, but somehow everything seems to continue chugging along fine.

    I've also verified that, in all the above cases, the stalls do not
occur if I just boot into the kernel without Xen.

    When the stalls happen, I get the "rcu: INFO: rcu_sched detected
stalls on CPUs/tasks" backtraces printed on the console periodically,
but keystrokes don't do anything on the console, and I can't spawn new
SSH sessions even though pinging the system produces a reply. The last
item in the call trace is usually "xen_safe_halt", but I've seen it
occur for other functions related to btrfs and the network adapter as
well.

    Do let me know if there's anything I can provide to help
troubleshoot this. At the moment I've reverted to 4.11, but I can
temporarily switch over to 4.14 to collect any necessary information.

Liwei


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 19:31:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 19:31:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54789.95351 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpG2f-0002Qh-Vl; Tue, 15 Dec 2020 19:31:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54789.95351; Tue, 15 Dec 2020 19:31:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpG2f-0002Qa-Sp; Tue, 15 Dec 2020 19:31:25 +0000
Received: by outflank-mailman (input) for mailman id 54789;
 Tue, 15 Dec 2020 19:31:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kpG2e-0002QV-NC
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 19:31:24 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kpG2d-00066z-J6; Tue, 15 Dec 2020 19:31:23 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kpG2d-0006Hh-B8; Tue, 15 Dec 2020 19:31:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=sW1I+YEDrMePCyLq/HFc207N4WzhJW7c49godHdP1UE=; b=CLGNByGtonUId8vtf+wG7PqSe1
	kMjq1yDpk8r+UvmOwEuO1lCDvrhXmpDRYgdfzNxtQC34aYbzNG7bCmoZv+/59hTfMC7bTsnYBA6Tk
	Us59p2QLqO2NvApBQ9YRVCJrGVNNQbg9nvQw5mTvxJDmoNwjW1nMy066qBHt3gVJvFco=;
Subject: Re: [RFC PATCH v2 00/15] xen/arm: port Linux LL/SC and LSE atomics
 helpers to Xen.
To: Ash Wilding <ash.j.wilding@gmail.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, rahul.singh@arm.com,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20201111215203.80336-1-ash.j.wilding@gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <cb0f7055-6d9a-5c39-6198-109593fd3424@xen.org>
Date: Tue, 15 Dec 2020 19:31:21 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 11/11/2020 21:51, Ash Wilding wrote:
> From: Ash Wilding <ash.j.wilding@gmail.com>
> 
> 
> Hey,

Hi Ash,

> 
> I've found some time to improve this series a bit:
> 
> 
> Changes since RFC v1
> ====================
> 
>   - Broken up patches into smaller chunks to aid in readability.
> 
>   - As per Julien's feedback I've also introduced intermediary patches
>     that first remove Xen's existing headers, then pull in the current
>     Linux versions as-is. This means we only need to review the changes
>     made while porting to Xen, rather than reviewing the existing Linux
>     code.
> 
>   - Pull in Linux's <asm-generic/rwonce.h> as <xen/rwonce.h> for
>     __READ_ONCE()/__WRITE_ONCE() instead of putting these in Xen's
>     <xen/compiler.h>. While doing this, provide justification for
>     dropping Linux's <linux/compiler_types.h> and relaxing the
>     __READ_ONCE() usage of __unqual_scalar_typeof() to just typeof()
>     (see patches #5 and #6).
> 
> 
> 
> Keeping the rest of the cover-letter unchanged as it would still be
> good to discuss these things:
> 
> 
> Arguments in favour of doing this
> =================================
> 
>      - Lets SMMUv3 driver switch to using <asm/atomic.h> rather than
>        maintaining its own implementation of the helpers.
> 
>      - Provides mitigation against XSA-295 [2], which affects both arm32
>        and arm64, across all versions of Xen, and may allow a domU to
>        maliciously or erroneously DoS the hypervisor.
> 
>      - All Armv8-A core implementations since ~2017 implement LSE so
>        there is an argument to be made we are long overdue support in
>        Xen. This is compounded by LSE atomics being more performant than
>        LL/SC atomics in most real-world applications, especially at high
>        core counts.
> 
>      - We may be able to get improved performance when using LL/SC too
>        as Linux provides helpers with relaxed ordering requirements which
>        are currently not available in Xen, though for this we would need
>        to go back through existing code to see where the more relaxed
>        versions can be safely used.
> 
>      - Anything else?
> 
> 
> Arguments against doing this
> ============================
> 
>      - Limited testing infrastructure in place to ensure use of new
>        atomics helpers does not introduce new bugs and regressions. This
>        is a particularly strong argument given how difficult it can be to
>        identify and debug malfunctioning atomics. The old adage applies,
>        "If it ain't broke, don't fix it."

I am not too concerned about the Linux atomics implementation. It is 
well tested, and your changes don't seem to touch the implementation 
itself.

However, I vaguely remember that some of the atomics helpers in Xen are 
somewhat stronger than their Linux counterparts.

Would you be able to look at all the existing helpers and see whether 
the memory ordering diverges?

Once we have a list we could decide whether we want to make them 
stronger again or check the callers.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 19:36:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 19:36:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54794.95381 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpG7B-0002eE-EH; Tue, 15 Dec 2020 19:36:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54794.95381; Tue, 15 Dec 2020 19:36:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpG7B-0002e6-BG; Tue, 15 Dec 2020 19:36:05 +0000
Received: by outflank-mailman (input) for mailman id 54794;
 Tue, 15 Dec 2020 19:36:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qx/l=FT=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1kpG79-0002dt-Nv
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 19:36:03 +0000
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 51b1f04b-56e2-42a6-8ef0-da9e77fdff67;
 Tue, 15 Dec 2020 19:36:03 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0BFJXqZR156836;
 Tue, 15 Dec 2020 19:35:58 GMT
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by aserp2120.oracle.com with ESMTP id 35cntm4dwn-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Tue, 15 Dec 2020 19:35:58 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0BFJZ9UW009806;
 Tue, 15 Dec 2020 19:35:58 GMT
Received: from userv0121.oracle.com (userv0121.oracle.com [156.151.31.72])
 by userp3020.oracle.com with ESMTP id 35e6jrmaqq-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 15 Dec 2020 19:35:57 +0000
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
 by userv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 0BFJZv03011217;
 Tue, 15 Dec 2020 19:35:57 GMT
Received: from [10.39.237.186] (/10.39.237.186)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Tue, 15 Dec 2020 11:35:56 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 51b1f04b-56e2-42a6-8ef0-da9e77fdff67
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=s0Cs1JEkSZnaprExuXHSIt4siSUpbCCqvHiWI9iK+Dc=;
 b=Ro0VZ1qVg2R7st409/BXYUjldRN15ppn5/7vDVpnV4wjD6w0NRhRg0F3GxDfUffJdrcj
 J/Sn0PRL/olk8HDRkDJses0JNQLQPMRmzEx6U5SVqz7G/HMIRhNbP6tV5eZbaIdNL8KL
 OSHjhYovQUtqDjo4Mjd7kw+2HNP4kOdZDoTVhGrNuBiPM3gr6YAjkgy/PKM+QvNXkJ9z
 ynGyBfTY56XK5Gf/ocSXBrOOfi1RUrToOBmJ+angZ7hhisKhy8gC6pErPGa9frP2FIA+
 MSzvk6Q6HhKO++64dLImSbUWzf0qet0e5d1UXVlwwMM7SbrJu1clVyWUaq5OESlfWV1m AQ== 
Subject: Re: [PATCH v2] xen/xenbus: make xs_talkv() interruptible
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
        linux-kernel@vger.kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>
References: <20201215111055.3810-1-jgross@suse.com>
From: boris.ostrovsky@oracle.com
Organization: Oracle Corporation
Message-ID: <bd15d0d7-e4d9-74e8-31d5-131434af7f1e@oracle.com>
Date: Tue, 15 Dec 2020 14:35:55 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201215111055.3810-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9836 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0 mlxscore=0 bulkscore=0
 malwarescore=0 adultscore=0 mlxlogscore=999 phishscore=0 suspectscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2012150130
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9836 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 phishscore=0 mlxscore=0
 lowpriorityscore=0 spamscore=0 adultscore=0 malwarescore=0 suspectscore=0
 mlxlogscore=999 impostorscore=0 priorityscore=1501 clxscore=1015
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2012150130


On 12/15/20 6:10 AM, Juergen Gross wrote:
> In case a process waits for any Xenstore action in the xenbus driver
> it should be interruptible by signals.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - don't special case SIGKILL as libxenstore is handling -EINTR fine


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>





From xen-devel-bounces@lists.xenproject.org Tue Dec 15 19:50:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 19:50:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54819.95393 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpGKv-0004iS-NJ; Tue, 15 Dec 2020 19:50:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54819.95393; Tue, 15 Dec 2020 19:50:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpGKv-0004iL-KQ; Tue, 15 Dec 2020 19:50:17 +0000
Received: by outflank-mailman (input) for mailman id 54819;
 Tue, 15 Dec 2020 19:50:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpGKu-0004iA-1L; Tue, 15 Dec 2020 19:50:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpGKt-0006PO-Qt; Tue, 15 Dec 2020 19:50:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpGKt-00057i-KE; Tue, 15 Dec 2020 19:50:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kpGKt-0001ST-JV; Tue, 15 Dec 2020 19:50:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JvRBNSEtIveCp/ArQb8ChjQ2odJl2iVSmzHy2Nq7PjE=; b=dpuiNRvLvwFfPrUw/vY4XFp7h7
	e2A4eywjKc1Gxfwopgo0YockOuF/HczNbkyJ5KzBTtwnTbrWZ1nlo2cenmxcQT4lJcpvR+c1aZFy9
	QaEv//s9upxG9sCFiHJlCCXM0rL+sHXyyXRDRAlyatCOGQp61qvAE80rJk5jdTDKQLd4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157570-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157570: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64-libvirt:libvirt-build:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8bf0fab14256057bbd145563151814300476bb28
X-Osstest-Versions-That:
    xen=904148ecb4a59d4c8375d8e8d38117b8605e10ac
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Dec 2020 19:50:15 +0000

flight 157570 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157570/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 157560

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8bf0fab14256057bbd145563151814300476bb28
baseline version:
 xen                  904148ecb4a59d4c8375d8e8d38117b8605e10ac

Last test of basis   157560  2020-12-15 13:00:26 Z    0 days
Testing same since   157570  2020-12-15 17:00:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <pdurrant@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 556 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 20:35:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 20:35:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54833.95412 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpH29-00008n-CT; Tue, 15 Dec 2020 20:34:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54833.95412; Tue, 15 Dec 2020 20:34:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpH29-00008g-8R; Tue, 15 Dec 2020 20:34:57 +0000
Received: by outflank-mailman (input) for mailman id 54833;
 Tue, 15 Dec 2020 20:25:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xjw9=FT=google.com=sjg@srs-us1.protection.inumbo.net>)
 id 1kpGsy-0007fk-Jz
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 20:25:28 +0000
Received: from mail-wm1-x331.google.com (unknown [2a00:1450:4864:20::331])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e4ca9329-08aa-42d1-abe6-0ece8b4e08f0;
 Tue, 15 Dec 2020 20:25:27 +0000 (UTC)
Received: by mail-wm1-x331.google.com with SMTP id x22so424021wmc.5
 for <xen-devel@lists.xenproject.org>; Tue, 15 Dec 2020 12:25:27 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e4ca9329-08aa-42d1-abe6-0ece8b4e08f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=DKgedKEtb5mnSfvh4xlmj/r1V3a4kPz3KEPIfj5KHQI=;
        b=dvYbP6TUERJsKbDFxZbdxZzIE8DNpnd6A06uozqtUq7LhX4cXVtUbBAiVtR79WfZ5w
         0HvIER4UPQWC9gGqh56/kIFXpyY4oXuDM8SZcmvrbgTwLhkE+AAsyAlfBreoUJwuBYam
         nKo6Tqps+C58xcXy69jk4lsy9StBWGLoNOOgM=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=DKgedKEtb5mnSfvh4xlmj/r1V3a4kPz3KEPIfj5KHQI=;
        b=eSSVQDR0BgfE8q5YCHLzgjoyC5UDHhmhE664HwTUz6sDWkXmdoU854PlyMqMkdAcTE
         XeBowTXwHzwjVQ/yrSPjPM+E2IpJSr7rywWXaBWd8onFNYk6QEn2qXKWFpqV2cvs5Nso
         ZniuhRwIRmdg0poCPRDHGjWs+ANbEbLbYkNcWyEoP/Sm2s7zlWaQ3oo8EaAqcKPrb+rS
         dDpw7kq82IjtHtXppWJfKqRaz34+FBzZFDi9dUxxXQ13OhJ4QcHC7aLxOJ/C2fZ0yP5Z
         7+gP7ExUzvIjQOl/lIFExswdFQiVPfZWnPXWw3zJHR/8GeNUqHok8YQKYw/4UWGovGGH
         iNnA==
X-Gm-Message-State: AOAM530p07Sx6VsErKyWsfFIjqb4whlijIi1hDRtkpT0hI6YfluLAQWS
	llHiW3ix8DeM4OzHYeJiE8seZW0rhOx7zzA8nBFRCg==
X-Google-Smtp-Source: ABdhPJxLA99SR44IbyCAzhlr5dVrSinm0a/tZD0GiKoMvdNv7FPsSRxYrHstKKbx77iIrtQuc2sgy0E5I1H64h3mosg=
X-Received: by 2002:a7b:cf0d:: with SMTP id l13mr552217wmg.168.1608063925901;
 Tue, 15 Dec 2020 12:25:25 -0800 (PST)
MIME-Version: 1.0
References: <20201113235242.k6fzlwmwm2xqhqsi@tomti.i.net-space.pl>
In-Reply-To: <20201113235242.k6fzlwmwm2xqhqsi@tomti.i.net-space.pl>
From: Simon Glass <sjg@chromium.org>
Date: Tue, 15 Dec 2020 13:25:14 -0700
Message-ID: <CAPnjgZ3zCuLd7G-bOFp38OoyboAbHG21L_5oogN_7tLmba-ShQ@mail.gmail.com>
Subject: Re: [SPECIFICATION RFC] The firmware and bootloader log specification
To: Daniel Kiper <daniel.kiper@oracle.com>
Cc: Coreboot <coreboot@coreboot.org>, grub-devel@gnu.org, 
	lk <linux-kernel@vger.kernel.org>, systemd-devel@lists.freedesktop.org, 
	trenchboot-devel@googlegroups.com, U-Boot Mailing List <u-boot@lists.denx.de>, x86@kernel.org, 
	xen-devel@lists.xenproject.org, alecb@umass.edu, 
	alexander.burmashev@oracle.com, allen.cryptic@gmail.com, 
	andrew.cooper3@citrix.com, Ard Biesheuvel <ard.biesheuvel@linaro.org>, btrotter@gmail.com, 
	dpsmith@apertussolutions.com, eric.devolder@oracle.com, 
	eric.snowberg@oracle.com, "H. Peter Anvin" <hpa@zytor.com>, hun@n-dimensional.de, 
	javierm@redhat.com, joao.m.martins@oracle.com, kanth.ghatraju@oracle.com, 
	konrad.wilk@oracle.com, krystian.hebel@3mdeb.com, 
	Leif Lindholm <leif@nuviainc.com>, lukasz.hawrylko@intel.com, luto@amacapital.net, 
	michal.zygowski@3mdeb.com, mjg59@google.com, mtottenh@akamai.com, 
	=?UTF-8?Q?Vladimir_=27=CF=86=2Dcoder=2Fphcoder=27_Serbinenko?= <phcoder@gmail.com>, 
	=?UTF-8?Q?Piotr_Kr=C3=B3l?= <piotr.krol@3mdeb.com>, 
	Peter Jones <pjones@redhat.com>, Paul Menzel <pmenzel@molgen.mpg.de>, roger.pau@citrix.com, 
	ross.philipson@oracle.com, tyhicks@linux.microsoft.com
Content-Type: text/plain; charset="UTF-8"

Hi Daniel,

On Fri, 13 Nov 2020 at 19:07, Daniel Kiper <daniel.kiper@oracle.com> wrote:
>
> Hey,
>
> This is the next attempt to create a firmware and bootloader log specification.
> Due to high interest across the industry, it is an extension of the initial
> bootloader-log-only specification. It takes into account most of the
> comments I have received so far.
>
> The goal is to pass all logs produced by various boot components to the
> running OS. The OS kernel should expose these logs to the user space
> and/or process them internally if needed. The content of these logs
> should be human readable. However, they should also contain the
> information which allows admins to do e.g. boot time analysis.
>
> The log specification should be as platform-agnostic and self-contained as
> possible. The final version of this spec should be merged into existing
> specifications, e.g. UEFI, ACPI, Multiboot2, or be a standalone spec, e.g. as
> part of the OASIS Standards. The former seems better, but it is not perfect
> either...
>
> Here is the description (pseudocode) of the structures which will be
> used to store the log data.
>
>   struct bf_log
>   {
>     uint32_t   version;
>     char       producer[64];
>     uint64_t   flags;
>     uint64_t   next_bf_log_addr;
>     uint32_t   next_msg_off;
>     bf_log_msg msgs[];
>   }
>
>   struct bf_log_msg
>   {
>     uint32_t size;
>     uint64_t ts_nsec;
>     uint32_t level;
>     uint32_t facility;
>     uint32_t msg_off;
>     char     strings[];
>   }
>
> The members of struct bf_log:
>   - version: the firmware and bootloader log format version number, 1 for now,
>   - producer: the producer/firmware/bootloader/... type; the length
>     allows ASCII UUID storage if somebody needs that functionality,
>   - flags: it can be used to store information about log state, e.g.
>     whether it was truncated (does it make sense to have information
>     about the number of lost messages?),
>   - next_bf_log_addr: address of next bf_log struct; none if zero (I think
>     newer spec versions should not change anything in first 5 bf_log members;
>     this way older log parsers will be able to traverse/copy all logs regardless
>     of version used in one log or another),
>   - next_msg_off: the offset, in bytes, from the beginning of the bf_log struct,
>     of the next byte after the last log message in the msgs[]; i.e. the offset
>     of the next available log message slot; it is equal to the total size of
>     the log buffer including the bf_log struct,
>   - msgs: the array of log messages,
>   - should we add CRC or hash or signatures here?
>
> The members of struct bf_log_msg:
>   - size: total size of bf_log_msg struct,
>   - ts_nsec: timestamp expressed in nanoseconds starting from 0,
>   - level: similar to syslog meaning; can be used to differentiate normal messages
>     from debug messages; the exact interpretation depends on the current producer
>     type specified in the bf_log.producer,
>   - facility: similar to syslog meaning; can be used to differentiate the sources of
>     the messages, e.g. message produced by networking module; the exact interpretation
>     depends on the current producer type specified in the bf_log.producer,
>   - msg_off: the log message offset in strings[],
>   - strings[0]: the beginning of log message type, similar to the facility member but
>     NUL terminated string instead of integer; this will be used by, e.g., the GRUB2
>     for messages printed using grub_dprintf(),
>   - strings[msg_off]: the beginning of log message, NUL terminated string.
>
> Note: The producers are free to use/ignore any given set of level, facility and/or
>       log type members. Though the usage of these members has to be clearly defined.
>       Ignored integer members should be set to 0. Ignored log message type should
>       contain an empty NUL terminated string. The log message is mandatory but can
>       be an empty NUL terminated string.
>
> There is still the not-fully-solved problem of how the logs should be presented
> to the OS. On UEFI platforms we can use config tables to do that, in which case
> bf_log.next_bf_log_addr probably should not be used. On ACPI and Device Tree
> platforms we can use those mechanisms to present the logs to the OSes. The
> situation gets more difficult if neither mechanism is present. However, maybe we
> should not worry too much about that, because such platforms are probably
> getting less and less common.
>
> Anyway, I am aware that this is not a specification per se. The goal of this
> email is to continue the discussion about the idea of the firmware and
> bootloader log, and to find out where the final specification should land,
> taking into account the assumptions made above.
>
> You can find previous discussions about related topics at [1], [2] and [3].
>
> Additionally, I am going to present this during GRUB mini-summit session on Tuesday,
> 17th of November at 15:45 UTC. So, if you want to discuss the log design please join
> us. You can find more details here [4].

I hesitate to add my opinions here, since it is probably more important
to settle on something than to make everyone happy.

It would be nice if the format were extensible in a simple way. As
others have mentioned, we may want to provide logs from various
sources (EC, AP firmware through various read-only/read-write paths,
trusted firmware). Each of these is presumably its own separate log,
but with a coherent timestamp. I think the log level and 'facility'
(category) that you have are important features, because they provide
hierarchy and attribution for the messages, allowing debug output to
be filtered out, etc.

It could be more compact - e.g. a byte is enough for the level, use \0
termination instead of a size field, and add a flags byte to make
things optional. Is ns resolution really necessary? Perhaps µs would
be good enough; then we could use a 32-bit timestamp and have about an
hour before wrapping.

Thinking about U-Boot TPL, where every byte counts, we would likely
store it in a different format and expand it later, but it would be
better if the format were efficient enough that it did not matter. A
flag byte indicating what fields are present? Overall, is it important
to have a simple struct for this, or is something more compact
possible?
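A compact entry along the lines discussed above could look like the
following user-space sketch. To be clear, the struct layout, field
names, and flag values here are hypothetical illustrations of the
"flags byte plus NUL-terminated strings" idea, not a proposed format:

```c
#include <stdint.h>
#include <string.h>

/*
 * Hypothetical compact log entry: a two-byte header, an optional
 * 32-bit microsecond timestamp, then two NUL-terminated strings.
 */
#define BFLOG_F_TIMESTAMP 0x01  /* 32-bit microsecond timestamp present */
#define BFLOG_F_FACILITY  0x02  /* facility string present */

/* Append one entry to buf; returns bytes written, or 0 on overflow. */
static size_t bflog_append(uint8_t *buf, size_t avail, uint8_t level,
                           uint32_t ts_us, const char *facility,
                           const char *msg)
{
    size_t need = 2 + sizeof(ts_us) + strlen(facility) + 1 + strlen(msg) + 1;
    size_t off = 0;

    if (need > avail)
        return 0;

    buf[off++] = level;                                  /* one byte for the level */
    buf[off++] = BFLOG_F_TIMESTAMP | BFLOG_F_FACILITY;   /* which fields follow */
    memcpy(buf + off, &ts_us, sizeof(ts_us));
    off += sizeof(ts_us);
    strcpy((char *)buf + off, facility);                 /* \0 instead of a size */
    off += strlen(facility) + 1;
    strcpy((char *)buf + off, msg);
    off += strlen(msg) + 1;
    return off;
}
```

With this layout, a producer that omits the timestamp and facility
simply clears the corresponding flag bits and skips those fields, so
a minimal entry costs only two header bytes plus the message.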

IMO timestamp 0 should be the time the SoC comes out of reset, so far
as it can be known / estimated.

Also if we can repurpose something existing that is extensible, that
would be nice. I'm not arguing for legacy, just for retiring old
things.

Regards,
Simon

>
> Daniel
>
> [1] https://lists.gnu.org/archive/html/grub-devel/2019-10/msg00107.html
> [2] https://lists.gnu.org/archive/html/grub-devel/2019-11/msg00079.html
> [3] https://lists.gnu.org/archive/html/grub-devel/2020-05/msg00223.html
> [4] https://twitter.com/3mdeb_com/status/1327278804100931587


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 20:59:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 20:59:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54849.95424 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpHQ6-0002Dr-Di; Tue, 15 Dec 2020 20:59:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54849.95424; Tue, 15 Dec 2020 20:59:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpHQ6-0002Dk-8k; Tue, 15 Dec 2020 20:59:42 +0000
Received: by outflank-mailman (input) for mailman id 54849;
 Tue, 15 Dec 2020 20:59:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6ufw=FT=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kpHQ4-0002Df-SV
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 20:59:40 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 258133db-4552-4f80-a05c-69c4a8a8e9b3;
 Tue, 15 Dec 2020 20:59:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 258133db-4552-4f80-a05c-69c4a8a8e9b3
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608065978;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=twnfeOO/nJh+7Yqrwe0fs8m/4jH5/Yo3yi9Jp0z/Lss=;
  b=FoiQYpDytOcoFzfBWnV4vVzqp2A2Mf57D6pAA7yJMbwxyI6POc169uNn
   UK8skzwNZmHxiW9UfAubNlI9gOvRpwQGX54dEmE7SeVQltVl/otWYEsKw
   Rk9IbJcuWJOOm+J+uR5O1avHDvzsJ7L6Uvk8Xri28In6Bfhy27lZlDOqr
   Q=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: uuatwPkBk2JlFNvnKxCJz1fCleazbEJAfO+UCKhDHk/N1fJ6YIIb3Me8H7vmAP/wC4OOPR92M3
 Ljp+e5tikssU7acjBxZrayROOdojYqfTc57bvGwSBQkDNkPh2Zg1dnVi0S0P36ChXKPTthy94K
 uBJCWhda77S2AUZc0x84cppZ/tF2VKl+CZC+FstSAd6IlZsThgc6a+3O/5t3OHTOOkB7Xxg01C
 VnRQHy+6IEWa7QpkIJfZ5+92YLk5y8JNhfhxnw6IxvaR30I8wctCDDtdNIljbssIT9vzqVm5o+
 gbQ=
X-SBRS: 5.2
X-MesageID: 33639630
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,422,1599537600"; 
   d="scan'208";a="33639630"
Subject: Re: [PATCH v2] xen/xenbus: make xs_talkv() interruptible
To: Juergen Gross <jgross@suse.com>, <xen-devel@lists.xenproject.org>,
	<linux-kernel@vger.kernel.org>
CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>, Stefano Stabellini
	<sstabellini@kernel.org>
References: <20201215111055.3810-1-jgross@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <2deac9ce-0c27-a472-7d51-b91a640d92ed@citrix.com>
Date: Tue, 15 Dec 2020 20:59:32 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201215111055.3810-1-jgross@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 15/12/2020 11:10, Juergen Gross wrote:
> In case a process waits for any Xenstore action in the xenbus driver
> it should be interruptible by signals.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - don't special case SIGKILL as libxenstore is handling -EINTR fine
> ---
>  drivers/xen/xenbus/xenbus_xs.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/xen/xenbus/xenbus_xs.c b/drivers/xen/xenbus/xenbus_xs.c
> index 3a06eb699f33..17c8f8a155fd 100644
> --- a/drivers/xen/xenbus/xenbus_xs.c
> +++ b/drivers/xen/xenbus/xenbus_xs.c
> @@ -205,8 +205,15 @@ static bool test_reply(struct xb_req_data *req)
>  
>  static void *read_reply(struct xb_req_data *req)
>  {
> +	int ret;
> +
>  	do {
> -		wait_event(req->wq, test_reply(req));
> +		ret = wait_event_interruptible(req->wq, test_reply(req));
> +
> +		if (ret == -ERESTARTSYS && signal_pending(current)) {
> +			req->msg.type = XS_ERROR;
> +			return ERR_PTR(-EINTR);
> +		}

Now that I can talk fully about the situations which lead to this, I
think there is a bit more complexity.

It turns out there are a number of issues related to running a Xen
system with no xenstored.

1) If a xenstore-write occurs during startup before init-xenstore-domain
runs, the former blocks on /dev/xen/xenbus waiting for xenstored to
reply, while the latter blocks on /dev/xen/xenbus_backend when trying to
tell the dom0 kernel that xenstored is in dom1.  This effectively
deadlocks the system.

2) If xenstore-watch is running when xenstored dies, it spins at 100%
cpu usage making no system calls at all.  This is caused by bad error
handling from xs_watch(), and attempting to debug found:

3) (this issue).  If anyone starts xenstore-watch with no xenstored
running at all, it blocks in the D state in the kernel.

The cause is the special handling for watch/unwatch commands which,
instead of just queuing up the data for xenstore, explicitly waits for
an OK for registering the watch.  This causes a write() system call to
block waiting for a non-existent entity to reply.

So while this patch does resolve the major usability issue I found (I
couldn't even SIGINT and get my terminal back), I think there are
further issues.

The reason why XS_WATCH/XS_UNWATCH are special-cased is that they do
require special handling.  The main kernel thread for processing
incoming data from xenstored needs to know how to associate each
async XS_WATCH_EVENT with the caller who watched the path.

Therefore, depending on when this cancellation hits, we might be in any
of the following states:

1) the watch is queued in the kernel, but not even sent to xenstored yet
2) the watch is queued in the xenstored ring, but not acted upon
3) the watch is queued in the xenstored ring, and the xenstored has seen
it but not replied yet
4) the watch has been processed, but the XS_WATCH reply hasn't been
received yet
5) the watch has been processed, and the XS_WATCH reply received

State 5 (and, transiently, state 4) is the normal success path, where
xenstored has acted upon the request and the internal kernel
infrastructure is set up appropriately to handle XS_WATCH_EVENTs.

States 1 and 2 can be very common if there is no xenstored (or at least,
it hasn't started up yet).  In reality, there is either no xenstored, or
it is up and running (and for a period of time during system startup,
these cases occur in sequence).

As soon as the XS_WATCH event has been written into the xenstored ring,
it is not safe to cancel.  You've committed to xenstored processing the
request (if it is up).

If xenstored is actually up and running, it's fine and necessary to
block.  The request will be processed in due course (timing subject to
the client and server load).  If xenstored isn't up, blocking isn't OK.

Therefore, I think we need to distinguish "not yet on the ring" from
"on the ring" as the criterion for whether cancelling is safe, and
ensure we don't queue anything on the ring before we're sure xenstored
has started up.
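To make that distinction concrete, here is a user-space sketch of the
proposed rule. The state names are hypothetical and deliberately
simplified (they are not the actual xenbus request states); the point
is only that everything at or past the ring must not be cancelled:

```c
#include <stdbool.h>

/* Hypothetical request states, mirroring the five situations above. */
enum req_state {
    REQ_QUEUED_LOCAL,   /* state 1: still in the kernel's own queue      */
    REQ_ON_RING,        /* states 2-4: committed to the xenstored ring   */
    REQ_REPLIED,        /* state 5: the XS_WATCH reply has been received */
};

/*
 * A signal may only cancel the request while it has not been written
 * to the ring; once it is on the ring, xenstored (if running) will act
 * on it, so the caller must wait for the reply.
 */
static bool cancel_is_safe(enum req_state s)
{
    return s == REQ_QUEUED_LOCAL;
}
```

Under this rule, the "no xenstored yet" hang is recoverable (the
request is still local and can be aborted on SIGINT), while an
in-flight request on a live ring still blocks until the reply arrives.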

Does this make sense?

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 22:23:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 22:23:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54891.95454 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpIiW-0001ur-3x; Tue, 15 Dec 2020 22:22:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54891.95454; Tue, 15 Dec 2020 22:22:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpIiW-0001uk-0g; Tue, 15 Dec 2020 22:22:48 +0000
Received: by outflank-mailman (input) for mailman id 54891;
 Tue, 15 Dec 2020 22:22:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GCyV=FT=gmail.com=crogers122@srs-us1.protection.inumbo.net>)
 id 1kpIiU-0001uf-QK
 for xen-devel@lists.xenproject.org; Tue, 15 Dec 2020 22:22:46 +0000
Received: from mail-qk1-x72d.google.com (unknown [2607:f8b0:4864:20::72d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9014b889-5974-44f0-b9aa-017223c1e4cf;
 Tue, 15 Dec 2020 22:22:45 +0000 (UTC)
Received: by mail-qk1-x72d.google.com with SMTP id c7so20831068qke.1
 for <xen-devel@lists.xenproject.org>; Tue, 15 Dec 2020 14:22:45 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9014b889-5974-44f0-b9aa-017223c1e4cf
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=wA0NS6LYMdsQ75agvCmT0E+X2ITs6qTXBs67m6lU3U4=;
        b=SamdIYmE09sby8EOWZ2uy7CskOJNNL/aFUy2sUdpWqB76z+RjpAtFrbC4Dax8yE94O
         dJOUCTwPCs7wz9LEW4KLEnJkU8xFTMOEu4lDmiS+MCSghBUQjSZerh1/lSlGKnECMSFN
         Jof8LLQHMPP6V8UdEQ3N+RGbpAT/NtM/JWcUXOidoE/m0ZpWVZ0P0EPGQ8KQt63kTF0A
         /BlereSFNYHYlWLF1ZzhttUzhc60Cgo4GA4W++khagvY6DOF60J2/2E/FjiBvI1jCwu9
         6yrYsY4msS2KzqfNmHwCSI5emXSCjBoxH6XnqRfIfgbJ5UBBnZT4i80oX6njGpkNqYVn
         z1jg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=wA0NS6LYMdsQ75agvCmT0E+X2ITs6qTXBs67m6lU3U4=;
        b=KnaKytzkN33/qE7W3QLO2y7lwsMR3qA6igMbDPRq/lc6fjiYyzQYoQc4wWqxF42w8x
         93q56vGcj5e0c/QtCiimczzCRLako5bYjNW+QRcu5i8BQ5GGzX2Usdl20v+g5cHK/mFy
         NDvDKHwAExKzhi+7I+xPbSmpqWBwS3fwzbD5nfsI54s28CSGMq1ZYlJRKnCWmvA+Ei2U
         j8Yy1JQHTizTnLNJQl91jt0QCTr/lEzL4wUajluGM6ZEONtUJ6x0dyrI6ioFkSegE1c0
         0fVtTSA93Y4YMIqaAkyhdMjxEJc0Ev6wy6ybnXzAa571GV8bNRX1qYxyNY8XGNcijC99
         1BuA==
X-Gm-Message-State: AOAM531zY3X5b0PSegSAtqdL7YycAdAkpmqFRU2A20BM/P0qwoWftdBX
	Gm+tHvLcpIRUo8a1tnopPEVDrKBpznuobB+qlxg=
X-Google-Smtp-Source: ABdhPJzpaf1KS/3u9aEsVJ6JCo3oOKSQDcucIyRGGWeRU4nIHtNWIUi5ZnoSNTm4fA58GtzH+zQ8o/LCEBBj1i72UDc=
X-Received: by 2002:a05:620a:1265:: with SMTP id b5mr40488098qkl.27.1608070965041;
 Tue, 15 Dec 2020 14:22:45 -0800 (PST)
MIME-Version: 1.0
References: <CAKf6xps-nM13E19SVS3NJwq6LwOJLUwN+FC6k_Sp9-_YaRt-EA@mail.gmail.com>
 <3ACCFEC6-A8B7-48E6-AA3F-48D4CDE75FA4@gmail.com> <alpine.DEB.2.21.2012141632020.4040@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2012141632020.4040@sstabellini-ThinkPad-T480s>
From: Chris Rogers <crogers122@gmail.com>
Date: Tue, 15 Dec 2020 17:22:33 -0500
Message-ID: <CAC4Yorgk89vaDsbygvebiBOan-3OWE=D9xKiri_JwQAVWZ19GQ@mail.gmail.com>
Subject: Re: [openxt-dev] Re: Follow up on libxl-fix-reboot.patch
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Rich Persaud <persaur@gmail.com>, openxt <openxt@googlegroups.com>, 
	xen-devel@lists.xenproject.org, Anthony PERARD <anthony.perard@citrix.com>, 
	=?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?= <marmarek@invisiblethingslab.com>, 
	Olivier Lambert <olivier.lambert@vates.fr>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Jason Andryuk <jandryuk@gmail.com>, wl@xen.org, jbeulich@suse.com, roger.pau@citrix.com
Content-Type: multipart/alternative; boundary="000000000000a51de505b6883228"

--000000000000a51de505b6883228
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Hopefully I can provide a little more context.  Here is a link to the patch:

https://github.com/OpenXT/xenclient-oe/blob/master/recipes-extended/xen/files/libxl-fix-reboot.patch

The patch is a bit mis-named.  It does not implement
XEN_DOMCTL_SENDTRIGGER_RESET.  It's just a workaround to handle the missing
RESET implementation.

Its purpose is to make an HVM guest "reboot" regardless of whether PV tools
have been installed and the xenstore interface is listening.  From the
client perspective that OpenXT is concerned with, this is for ease of use
when working with HVM guests before PV tools are installed.  To summarize
the flow of the patch:

- User input causes the high-level toolstack, xenmgr, to do xl reboot <domid>
- libxl hits "PV interface not available", so it tries the fallback ACPI
reset trigger (but that's not implemented in domctl)
- therefore, the patch changes the RESET trigger to a POWER trigger and sets
a 'reboot' flag
- when the xl create process handles the domain_death event for
LIBXL_SHUTDOWN_REASON_POWEROFF, we check for our 'reboot' flag
- it's set, so we set "action" as if we came from a real restart, which
makes the xl create process take the 'goto start' codepath to rebuild the
domain

I think we'd like to get rid of this patch, but at the moment I don't have
any code or a design to propose that would implement
XEN_DOMCTL_SENDTRIGGER_RESET.

On Mon, Dec 14, 2020 at 7:42 PM Stefano Stabellini <sstabellini@kernel.org>
wrote:

> On Mon, 14 Dec 2020, Rich Persaud wrote:
> > (adding xen-devel & toolstack devs)
> >
> > On Dec 14, 2020, at 16:12, Jason Andryuk <jandryuk@gmail.com> wrote:
> > >
> > > On Fri, Dec 11, 2020 at 3:56 PM Chris Rogers <crogers122@gmail.com>
> wrote:
> > >>
> > >> This is a follow-up to a request during our roadmapping meeting to
> clarify the purpose of libxl-fix-reboot.patch on the current version of Xen
> in OpenXT (4.12).  It's pretty simple.  While the domctl API does define a
> trigger for reset in xen/include/public/domctl.h:
> > >>
> > >
> > >> The call stack looks like this:
> > >>> libxl_send_trigger(ctx, domid, LIBXL_TRIGGER_RESET, 0);
> > >>> xc_domain_send_trigger(ctx->xch, domid,
> XEN_DOMCTL_SENDTRIGGER_RESET, vcpuid);
> > >>> do_domctl()
> > >>> arch_do_domctl()
> > >> and reaching the case statement in arch_do_domctl() for
> XEN_DOMCTL_sendtrigger, with RESET, we get -ENOSYS as illustrated above.
> > >
> > > Thanks, Chris.  It's surprising that xl trigger reset exists, but
> > > isn't wired through to do anything.  And that reboot has a fallback
> > > command to something that doesn't work.
>
> I'm missing some of the context of this thread -- let me try to understand
> the issue properly.
>
> It looks like HVM reboot doesn't work properly, or is it HVM reset
> (in-guest reset)? It looks like it is implemented by calling "xl trigger
> reset", which is implemented by libxl_send_trigger. The call chain leads
> to a XEN_DOMCTL_sendtrigger domctl with XEN_DOMCTL_SENDTRIGGER_RESET as
> a parameter that is not implemented on x86.
>
> That looks like a pretty serious bug :-)
>
>
> I imagine the reason why it is in that state is that the main way to
> reboot would be to call "xl reboot", which is implemented with the PV
> protocol "reboot" write to xenstore?  Either way, the bug should be
> fixed.
>
> What does your libxl-fix-reboot.patch patch do? Does it add an
> implementation of XEN_DOMCTL_SENDTRIGGER_RESET?

--000000000000a51de505b6883228--


From xen-devel-bounces@lists.xenproject.org Tue Dec 15 23:00:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 15 Dec 2020 23:00:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54917.95466 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpJJA-0005Zh-3y; Tue, 15 Dec 2020 23:00:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54917.95466; Tue, 15 Dec 2020 23:00:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpJJA-0005Za-0l; Tue, 15 Dec 2020 23:00:40 +0000
Received: by outflank-mailman (input) for mailman id 54917;
 Tue, 15 Dec 2020 23:00:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpJJ8-0005ZP-Ni; Tue, 15 Dec 2020 23:00:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpJJ8-0001Pg-Ij; Tue, 15 Dec 2020 23:00:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpJJ8-0000mw-9h; Tue, 15 Dec 2020 23:00:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kpJJ8-0000KB-95; Tue, 15 Dec 2020 23:00:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=aBZFOhTopP5DNsFBsbPPN3TbXEJurFkxgFsn8UI5tUA=; b=Y8jKsh3cL8ZTOEugNbn6k4J04c
	KTLC2dTrSbzUJmZtAgsy7B/m4sa5Jx+0YepjVMMpmhAqKHslgHaYKt+jSauUaUoEXDRwdsTuX4SUR
	+YfFJckPPuStK0kJWhed7zepIbNcT1MgQcQmNpAffOEoVgxME0hmDfnA9H2ApuAPZwyw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157575-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157575: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64-libvirt:libvirt-build:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8bf0fab14256057bbd145563151814300476bb28
X-Osstest-Versions-That:
    xen=904148ecb4a59d4c8375d8e8d38117b8605e10ac
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 15 Dec 2020 23:00:38 +0000

flight 157575 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157575/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 157560

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8bf0fab14256057bbd145563151814300476bb28
baseline version:
 xen                  904148ecb4a59d4c8375d8e8d38117b8605e10ac

Last test of basis   157560  2020-12-15 13:00:26 Z    0 days
Testing same since   157570  2020-12-15 17:00:30 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <pdurrant@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 556 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 00:39:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 00:39:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54944.95499 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpKqS-0005yx-VN; Wed, 16 Dec 2020 00:39:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54944.95499; Wed, 16 Dec 2020 00:39:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpKqS-0005yq-SD; Wed, 16 Dec 2020 00:39:08 +0000
Received: by outflank-mailman (input) for mailman id 54944;
 Wed, 16 Dec 2020 00:39:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IZX4=FU=redhat.com=jpoimboe@srs-us1.protection.inumbo.net>)
 id 1kpKqR-0005v7-D2
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 00:39:07 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 53cf8209-bb26-4f36-96ca-6a9ecc530da1;
 Wed, 16 Dec 2020 00:39:05 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-99-wYBKR5ESPACLwIRJr6G_JA-1; Tue, 15 Dec 2020 19:39:03 -0500
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com
 [10.5.11.14])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id B14061054F8E;
 Wed, 16 Dec 2020 00:38:10 +0000 (UTC)
Received: from treble (ovpn-112-170.rdu2.redhat.com [10.10.112.170])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 44CAF5DD87;
 Wed, 16 Dec 2020 00:38:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 53cf8209-bb26-4f36-96ca-6a9ecc530da1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1608079145;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=BCyr+zCzlSHQ1KjMAt3Zb/a+qisNyayFH7/e91jTxqk=;
	b=Wc6kB00M0aX8D/pn0V4QnbkpFZmoJeLjefwnzj6H3NGm2d/YcAhHoXJTBKsfTb9fsalhIB
	nnDVlmakQG4smw9LU4pPKD5oas3q/cfpkcpSdcNgYLGDV/lT+HuMRyO/38evLtYmyIGNqC
	ArW85v8lKLNBaqLmX+jVbExyqqjOM3k=
X-MC-Unique: wYBKR5ESPACLwIRJr6G_JA-1
Date: Tue, 15 Dec 2020 18:38:02 -0600
From: Josh Poimboeuf <jpoimboe@redhat.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
	xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-hyperv@vger.kernel.org, kvm@vger.kernel.org, luto@kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>, Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <sean.j.christopherson@intel.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	Daniel Lezcano <daniel.lezcano@linaro.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Daniel Bristot de Oliveira <bristot@redhat.com>
Subject: Re: [PATCH v2 00/12] x86: major paravirt cleanup
Message-ID: <20201216003802.5fpklvx37yuiufrt@treble>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120125342.GC3040@hirez.programming.kicks-ass.net>
 <20201123134317.GE3092@hirez.programming.kicks-ass.net>
 <6771a12c-051d-1655-fb3a-cc45a3c82e29@suse.com>
 <20201215141834.GG3040@hirez.programming.kicks-ass.net>
 <20201215145408.GR3092@hirez.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201215145408.GR3092@hirez.programming.kicks-ass.net>
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14

On Tue, Dec 15, 2020 at 03:54:08PM +0100, Peter Zijlstra wrote:
> The problem is that a single instance of unwind information (ORC) must
> capture and correctly unwind all alternatives. Since the trivially
> correct mandate is out, implement the straightforward brute-force
> approach:
> 
>  1) generate CFI information for each alternative
> 
>  2) unwind every alternative with the merge-sort of the previously
>     generated CFI information -- O(n^2)
> 
>  3) for any possible conflict: yell.
> 
>  4) Generate ORC with merge-sort
> 
> Specifically for 3 there are two possible classes of conflicts:
> 
>  - the merge-sort itself could find conflicting CFI for the same
>    offset.
> 
>  - the unwind can fail with the merged CFI.

So much algorithm.  Could we make it easier by caching the shared
per-alt-group CFI state somewhere along the way?

For example:

struct alt_group_info {

	/* first original insn in the group */
	struct instruction *orig_insn;

	/* max # of bytes in the group (cfi array size) */
	unsigned long nbytes;

	/* byte-offset-addressed array of CFI pointers */
	struct cfi_state **cfi;
};

We could change 'insn->alt_group' to be a pointer to a shared instance
of the above struct, so that all original and replacement instructions
in a group have a pointer to it.

Starting out, the 'cfi' array is all NULLs.  Then when updating CFI,
check 'insn->alt_group->cfi[offset]'.

[ 'offset' is a byte offset from the beginning of the group.  It could
  be calculated based on 'orig_insn' or 'orig_insn->alts', depending on
  whether 'insn' is an original or a replacement. ]

If the array entry is NULL, just update it with a pointer to the CFI.
If it's not NULL, make sure it matches the existing CFI, and WARN if it
doesn't.

Also, with this data structure, the ORC generation should be a lot more
straightforward: just ignore the NULL entries.

Thoughts?  This is all theoretical of course, I could try to do a patch
tomorrow.

-- 
Josh



From xen-devel-bounces@lists.xenproject.org Wed Dec 16 02:27:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 02:27:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54963.95547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpMWq-0005wJ-2s; Wed, 16 Dec 2020 02:27:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54963.95547; Wed, 16 Dec 2020 02:27:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpMWp-0005wC-VD; Wed, 16 Dec 2020 02:26:59 +0000
Received: by outflank-mailman (input) for mailman id 54963;
 Wed, 16 Dec 2020 02:26:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ehtw=FU=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kpMWo-0005w7-3u
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 02:26:58 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d707b746-9a37-4fb0-8aed-1f28fdbec66a;
 Wed, 16 Dec 2020 02:26:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d707b746-9a37-4fb0-8aed-1f28fdbec66a
Date: Tue, 15 Dec 2020 18:26:54 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1608085616;
	bh=9FG24449RUqdNEsOPLPB1M6/QEOSJ0r+dv5iGCISCto=;
	h=From:To:cc:Subject:From;
	b=fA5vTxY+tJtefLLQxBsAXTRhyDN2bkRH/6uKCM5qvzqIAZaN1CcAAYyQT3kS81/Py
	 vyiyy05npY2/G6HTB9e9wwULFagOukDqMamUFFLhB6GwYf2UgtJpyfOlsPWyatB/xO
	 vkBAKwxY/MmPanyfvQkSIbSFMnZJ8QN0ykN0AfUonwUXq9fxQ8fYE/tm8dTS0eMnH5
	 cVMm44QOawIwPzzxsHU56zOIc5a0Fv69ygXbdz0uiOgpOnr0iX+spcFGgvG5PnB4w1
	 bKUNoYme/I6Wh6uA5Vnwep/92thEuKiblEa7XYzUbf8r8rz8Ce0XBbUi4vEIbvS8pq
	 YoDSSbWT5HKfQ==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: iwj@xenproject.org, anthony.perard@citrix.com, wl@xen.org, 
    dgdegra@tycho.nsa.gov
cc: sstabellini@kernel.org, julien@xen.org, Volodymyr_Babchuk@epam.com, 
    Bertrand.Marquis@arm.com, xen-devel@lists.xenproject.org
Subject: arm32 tools/flask build failure
Message-ID: <alpine.DEB.2.21.2012151823480.4040@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Hi all,

I am building the Xen tools for ARM32 using qemu-user. I am getting the
following error building tools/flask; everything else works fine. It is
worth noting that make -j1 works fine, and it is only make -j4 that
fails.

I played with .NOTPARALLEL but couldn't get it to work. Does anyone have
any ideas?

Cheers,

Stefano


make[2]: Leaving directory '/build/tools/flask/utils'
make[1]: Leaving directory '/build/tools/flask'
make[1]: Entering directory '/build/tools/flask'
/usr/bin/make -C policy all
make[2]: Entering directory '/build/tools/flask/policy'
make[2]: warning: jobserver unavailable: using -j1.  Add '+' to parent make rule.
/build/tools/flask/policy/Makefile.common:115: *** target pattern contains no '%'.  Stop.
make[2]: Leaving directory '/build/tools/flask/policy'
make[1]: *** [/build/tools/flask/../../tools/Rules.mk:160: subdir-all-policy] Error 2
make[1]: Leaving directory '/build/tools/flask'
make: *** [/build/tools/flask/../../tools/Rules.mk:155: subdirs-all] Error 2



From xen-devel-bounces@lists.xenproject.org Wed Dec 16 02:27:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 02:27:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54968.95558 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpMXm-000628-Bt; Wed, 16 Dec 2020 02:27:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54968.95558; Wed, 16 Dec 2020 02:27:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpMXm-000621-8u; Wed, 16 Dec 2020 02:27:58 +0000
Received: by outflank-mailman (input) for mailman id 54968;
 Wed, 16 Dec 2020 02:27:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpMXl-00061t-2D; Wed, 16 Dec 2020 02:27:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpMXk-0003g2-Q3; Wed, 16 Dec 2020 02:27:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpMXk-0006tp-KD; Wed, 16 Dec 2020 02:27:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kpMXk-0006zb-Jk; Wed, 16 Dec 2020 02:27:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=8m3MaZQSLJ0PvCgUhLAf3bAVvEuKlrjTsuC7AE++B8c=; b=hM8vkdog0897MuTpmeXffte5KL
	dM5Q1tcpSbvJKWwVwslWqL0A3DAU/78AMTYTBfkN0KYAtsWKvaUbw3Mmn9sJdAUXru3wU+vPHDBVi
	A1OLsQezWZoZEMyAsRcwWzGjPujmnW2Go+qn/x+NiLMe5Ee+m/2lk1+yGGrT83UuR6AQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable-smoke bisection] complete build-amd64-libvirt
Message-Id: <E1kpMXk-0006zb-Jk@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Dec 2020 02:27:56 +0000

branch xen-unstable-smoke
xenbranch xen-unstable-smoke
job build-amd64-libvirt
testid libvirt-build

Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  929f23114061a0089e6d63d109cf6a1d03d35c71
  Bug not present: 8bc342b043a6838c03cd86039a34e3f8eea1242f
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/157589/


  commit 929f23114061a0089e6d63d109cf6a1d03d35c71
  Author: Paul Durrant <pdurrant@amazon.com>
  Date:   Tue Dec 8 19:30:26 2020 +0000
  
      libxl: introduce 'libxl_pci_bdf' in the idl...
      
      ... and use in 'libxl_device_pci'
      
      This patch is preparatory work for restricting the type passed to functions
      that only require BDF information, rather than passing a 'libxl_device_pci'
      structure which is only partially filled. In this patch only the minimal
      mechanical changes necessary to deal with the structural changes are made.
      Subsequent patches will adjust the code to make better use of the new type.
      
      Signed-off-by: Paul Durrant <pdurrant@amazon.com>
      Acked-by: Wei Liu <wl@xen.org>
      Acked-by: Nick Rosbrook <rosbrookn@ainfosec.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable-smoke/build-amd64-libvirt.libvirt-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable-smoke/build-amd64-libvirt.libvirt-build --summary-out=tmp/157589.bisection-summary --basis-template=157560 --blessings=real,real-bisect,real-retry xen-unstable-smoke build-amd64-libvirt libvirt-build
Searching for failure / basis pass:
 157575 fail [host=himrod2] / 157560 ok.
Failure / basis pass flights: 157575 / 157560
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 8bf0fab14256057bbd145563151814300476bb28
Basis pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 904148ecb4a59d4c8375d8e8d38117b8605e10ac
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/libvirt.git#2c846fa6bcc11929c9fb857a22430fb9945654ad-2c846fa6bcc11929c9fb857a22430fb9945654ad https://gitlab.com/keycodemap/keycodemapdb.git#27acf0ef828bf719b2053ba398b195829413dbdd-27acf0ef828bf719b2053ba398b195829413dbdd git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38aac308\
 308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/xen.git#904148ecb4a59d4c8375d8e8d38117b8605e10ac-8bf0fab14256057bbd145563151814300476bb28
Loaded 5001 nodes in revision graph
Searching for test results:
 157560 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 904148ecb4a59d4c8375d8e8d38117b8605e10ac
 157570 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 8bf0fab14256057bbd145563151814300476bb28
 157574 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 904148ecb4a59d4c8375d8e8d38117b8605e10ac
 157577 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 8bf0fab14256057bbd145563151814300476bb28
 157575 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 8bf0fab14256057bbd145563151814300476bb28
 157579 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 4951b9ea807d4a4e5a54798d366b2ea3d6ca5060
 157580 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 8bc342b043a6838c03cd86039a34e3f8eea1242f
 157581 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 66c2fbc6e82b1aa7b9f0fb37eecf93983c348058
 157583 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 96ed6ff29741df820217b6b744eb0fa2d76b50f3
 157585 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 929f23114061a0089e6d63d109cf6a1d03d35c71
 157586 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 8bc342b043a6838c03cd86039a34e3f8eea1242f
 157587 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 929f23114061a0089e6d63d109cf6a1d03d35c71
 157588 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 8bc342b043a6838c03cd86039a34e3f8eea1242f
 157589 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 929f23114061a0089e6d63d109cf6a1d03d35c71
Searching for interesting versions
 Result found: flight 157560 (pass), for basis pass
 For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 8bc342b043a6838c03cd86039a34e3f8eea1242f, results HASH(0x564e75a40ce8) HASH(0x564e75a40fe8) HASH(0x564e75a3ee60) For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895\
 af2840d85c524f0bd11a38aac308308 4951b9ea807d4a4e5a54798d366b2ea3d6ca5060, results HASH(0x564e75a3ece0) For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 904148ecb4a59d4c8375d8e8d38117b8605e10ac, results HASH(0x564e75a34368) HASH(0x564e75a2d5a8) Result found: flight 157570 (fail), for basis failure (at ancestor ~674)
 Repro found: flight 157574 (pass), for basis pass
 Repro found: flight 157575 (fail), for basis failure
 0 revisions at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 8bc342b043a6838c03cd86039a34e3f8eea1242f
No revisions left to test, checking graph state.
 Result found: flight 157580 (pass), for last pass
 Result found: flight 157585 (fail), for first failure
 Repro found: flight 157586 (pass), for last pass
 Repro found: flight 157587 (fail), for first failure
 Repro found: flight 157588 (pass), for last pass
 Repro found: flight 157589 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  929f23114061a0089e6d63d109cf6a1d03d35c71
  Bug not present: 8bc342b043a6838c03cd86039a34e3f8eea1242f
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/157589/


  commit 929f23114061a0089e6d63d109cf6a1d03d35c71
  Author: Paul Durrant <pdurrant@amazon.com>
  Date:   Tue Dec 8 19:30:26 2020 +0000
  
      libxl: introduce 'libxl_pci_bdf' in the idl...
      
      ... and use in 'libxl_device_pci'
      
      This patch is preparatory work for restricting the type passed to functions
      that only require BDF information, rather than passing a 'libxl_device_pci'
      structure which is only partially filled. In this patch only the minimal
      mechanical changes necessary to deal with the structural changes are made.
      Subsequent patches will adjust the code to make better use of the new type.
      
      Signed-off-by: Paul Durrant <pdurrant@amazon.com>
      Acked-by: Wei Liu <wl@xen.org>
      Acked-by: Nick Rosbrook <rosbrookn@ainfosec.com>

Revision graph left in /home/logs/results/bisect/xen-unstable-smoke/build-amd64-libvirt.libvirt-build.{dot,ps,png,html,svg}.
----------------------------------------
157589: tolerable ALL FAIL

flight 157589 xen-unstable-smoke real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/157589/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build           fail baseline untested


jobs:
 build-amd64-libvirt                                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Wed Dec 16 02:52:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 02:52:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54977.95574 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpMvB-0000KT-IC; Wed, 16 Dec 2020 02:52:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54977.95574; Wed, 16 Dec 2020 02:52:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpMvB-0000KM-F6; Wed, 16 Dec 2020 02:52:09 +0000
Received: by outflank-mailman (input) for mailman id 54977;
 Wed, 16 Dec 2020 02:52:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpMvA-0000KD-7R; Wed, 16 Dec 2020 02:52:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpMvA-00044n-0n; Wed, 16 Dec 2020 02:52:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpMv9-0000DE-Ob; Wed, 16 Dec 2020 02:52:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kpMv9-0002cm-O6; Wed, 16 Dec 2020 02:52:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=y0QNDVgS5RyVU72IN8IyOCCkD5wDR211jLjLB2w9bOY=; b=ph84Ym19py2ksskV3RhrVOoQZ9
	BRqRCalPuIYFI5nWiXG/2paXJgfihpwy9Kf+wEaGbjEvsCKNPqL8jHHQV4cxrZwXnLs84QaGIz6aR
	tlwvFpfuqmpbfMi9to7TaGfLvwjgM0glDwpVeiL92ZWMeD+V7egPO15UjVbBGcEr8XfY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157582-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157582: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64-libvirt:libvirt-build:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8bf0fab14256057bbd145563151814300476bb28
X-Osstest-Versions-That:
    xen=904148ecb4a59d4c8375d8e8d38117b8605e10ac
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Dec 2020 02:52:07 +0000

flight 157582 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157582/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 157560

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8bf0fab14256057bbd145563151814300476bb28
baseline version:
 xen                  904148ecb4a59d4c8375d8e8d38117b8605e10ac

Last test of basis   157560  2020-12-15 13:00:26 Z    0 days
Testing same since   157570  2020-12-15 17:00:30 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <pdurrant@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 556 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 04:00:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 04:00:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54987.95595 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpNyf-0005ui-Qs; Wed, 16 Dec 2020 03:59:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54987.95595; Wed, 16 Dec 2020 03:59:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpNyf-0005ub-NS; Wed, 16 Dec 2020 03:59:49 +0000
Received: by outflank-mailman (input) for mailman id 54987;
 Wed, 16 Dec 2020 03:59:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=iRal=FU=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kpNye-0005uW-Ah
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 03:59:48 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d08d3efc-737e-4c92-9806-fc6cf0d8b089;
 Wed, 16 Dec 2020 03:59:47 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 0BG3xRCV034767
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Tue, 15 Dec 2020 22:59:33 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 0BG3xRRO034766;
 Tue, 15 Dec 2020 19:59:27 -0800 (PST) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d08d3efc-737e-4c92-9806-fc6cf0d8b089
Date: Tue, 15 Dec 2020 19:59:27 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Roman Shaposhnik <roman@zededa.com>, Julien Grall <julien@xen.org>,
        Oleksandr_Andrushchenko@epam.com,
        Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: Xen-ARM DomUs
Message-ID: <X9mGH9SPoC5cfpSu@mattapan.m5p.com>
References: <X9gcZu5uJpXx8wNn@mattapan.m5p.com>
 <alpine.DEB.2.21.2012150828170.4040@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2012150828170.4040@sstabellini-ThinkPad-T480s>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Tue, Dec 15, 2020 at 08:36:34AM -0800, Stefano Stabellini wrote:
> On Mon, 14 Dec 2020, Elliott Mitchell wrote:
> > The available examples seem geared towards Linux DomUs.  I'm looking at a
> > FreeBSD installation image and it appears to expect an EFI firmware.
> > Beyond having a bunch of files appearing oriented towards booting on EFI
> > I can't say much about (booting) FreeBSD/ARM DomUs.
> 
> Running EFI firmware in a domU is possible with both Tianocore and
> U-Boot. You should be able to build the firmware and pass it as a
> kernel= binary in the xl file. Then the firmware will be able to load
> the necessary binaries from the virtual disk.

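[For reference, a minimal xl config along the lines Stefano describes might look
like the sketch below. This is an untested illustration: the guest name, the
firmware path, and the disk image path are all placeholders, and the firmware
binary itself would be whatever U-Boot or Tianocore build you produced for the
domU. On Arm there is only one guest type, so no type= line is needed.]

```
# Hypothetical xl config: boot an Arm domU through a firmware binary
# passed as the "kernel". Paths and names are illustrative only.
name   = "freebsd-arm64"
kernel = "/usr/local/lib/xen/boot/u-boot.bin"  # firmware built for the domU
memory = 2048
vcpus  = 2
disk   = [ "format=raw, vdev=xvda, access=rw, target=/var/lib/xen/images/freebsd.img" ]
vif    = [ "bridge=xenbr0" ]
```

[The firmware then loads the OS bootloader/kernel from the virtual disk, so the
guest image can stay unmodified.]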
Hmm, no mention of this on:
https://wiki.xenproject.org/wiki/OVMF

In fact that appears 100% x86.  Perhaps tools/firmware needs to be
adjusted to make it work on ARM?

Really the xlexample files in tools/examples need equivalents for ARM...

*This* reads like the approach I'm looking for, but building Tianocore
is an adventure even with a good guide.

> I ran Tianocore this way years ago. Recently, u-boot has been ported to
> be run in a domU by Oleksandr Andrushchenko (CCed).

The Xen wiki has no information on this.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Wed Dec 16 04:21:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 04:21:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.54992.95607 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpOJr-0000O7-G8; Wed, 16 Dec 2020 04:21:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 54992.95607; Wed, 16 Dec 2020 04:21:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpOJr-0000O0-D2; Wed, 16 Dec 2020 04:21:43 +0000
Received: by outflank-mailman (input) for mailman id 54992;
 Wed, 16 Dec 2020 04:21:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpOJp-0000Ns-O0; Wed, 16 Dec 2020 04:21:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpOJp-0005l2-Dn; Wed, 16 Dec 2020 04:21:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpOJp-0005BR-60; Wed, 16 Dec 2020 04:21:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kpOJp-0004Qt-5M; Wed, 16 Dec 2020 04:21:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nGBSfWRKYUG2ldWOiLzQzCKzcS6UWZ0J///DhNU0IiA=; b=OWxDuV1ojlwYv4Ffbd5OticxS4
	5M1jb/UivbkteTlkvBFTqQ/6cvXrfeh8NsI020QteBD0zw1bQTB0pqFY4ZW2dge7VbBzYEWsufDFY
	Quez9mVmYFr1zeprCXL+W5NnTI4/OQXTsnNeqeXnREeOF96Nl7tecm91c7cAHz10pPjc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157562-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 157562: regressions - FAIL
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-saverestore.2:fail:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:heisenbug
    xen-4.12-testing:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=2525a745e18bbf14b4f7b1b18209a0ab9166178d
X-Osstest-Versions-That:
    xen=8145d38b48009255a32ab87a02e481cd09c811f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Dec 2020 04:21:41 +0000

flight 157562 xen-4.12-testing real [real]
flight 157578 xen-4.12-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157562/
http://logs.test-lab.xenproject.org/osstest/logs/157578/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qcow2    18 guest-saverestore.2      fail REGR. vs. 157134

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail pass in 157578-retest
 test-armhf-armhf-xl-vhd 17 guest-start/debian.repeat fail pass in 157578-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157134
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157134
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157134
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157134
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157134
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157134
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157134
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157134
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157134
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157134
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157134
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  2525a745e18bbf14b4f7b1b18209a0ab9166178d
baseline version:
 xen                  8145d38b48009255a32ab87a02e481cd09c811f9

Last test of basis   157134  2020-12-01 15:05:58 Z   14 days
Testing same since   157562  2020-12-15 13:36:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Harsha Shamsundara Havanur <havanur@amazon.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 712 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 04:26:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 04:26:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55000.95622 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpOOH-0000a0-9x; Wed, 16 Dec 2020 04:26:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55000.95622; Wed, 16 Dec 2020 04:26:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpOOH-0000Zt-6H; Wed, 16 Dec 2020 04:26:17 +0000
Received: by outflank-mailman (input) for mailman id 55000;
 Wed, 16 Dec 2020 04:26:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpOOF-0000Zf-NZ; Wed, 16 Dec 2020 04:26:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpOOF-0005rc-FU; Wed, 16 Dec 2020 04:26:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpOOF-0005LQ-6l; Wed, 16 Dec 2020 04:26:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kpOOF-0001aH-6F; Wed, 16 Dec 2020 04:26:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OTnKL2eFEN6mh3IneTbz4J6amkFuhW0qskgUzfYkZb4=; b=jy0IGDpkb4t9pq/Ixkmr3gTsQ6
	rPm1l1F0rsWK5yG1exYB8YsEfOe6A+OSHKYrh4EaNr6g96Bu7neHDJylbxdaX+rHWmNJEreVX2r45
	RAYUrobF86smTazhiyBpHXOEscgZwCZaq4ek3boluxODHXOORDeYUzcrHgVTBWWWbtyw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157563-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 157563: regressions - FAIL
X-Osstest-Failures:
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=10c7c213bef26274684798deb3e351a6756046d2
X-Osstest-Versions-That:
    xen=b5302273e2c51940172400486644636f2f4fc64a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Dec 2020 04:26:15 +0000

flight 157563 xen-4.13-testing real [real]
flight 157592 xen-4.13-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157563/
http://logs.test-lab.xenproject.org/osstest/logs/157592/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157135

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157135
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157135
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157135
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157135
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157135
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157135
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157135
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157135
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157135
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157135
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157135
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  10c7c213bef26274684798deb3e351a6756046d2
baseline version:
 xen                  b5302273e2c51940172400486644636f2f4fc64a

Last test of basis   157135  2020-12-01 15:06:11 Z   14 days
Testing same since   157563  2020-12-15 13:36:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Harsha Shamsundara Havanur <havanur@amazon.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 743 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 05:59:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 05:59:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55014.95643 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpPq3-0000Z0-DM; Wed, 16 Dec 2020 05:59:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55014.95643; Wed, 16 Dec 2020 05:59:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpPq3-0000Yt-AH; Wed, 16 Dec 2020 05:59:03 +0000
Received: by outflank-mailman (input) for mailman id 55014;
 Wed, 16 Dec 2020 05:59:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpPq2-0000Yl-OK; Wed, 16 Dec 2020 05:59:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpPq2-0007nE-CZ; Wed, 16 Dec 2020 05:59:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpPq2-0000VV-5o; Wed, 16 Dec 2020 05:59:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kpPq2-0003bv-5H; Wed, 16 Dec 2020 05:59:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8ShqE6F4/R96HnZ6Jb09ldEbnTMlKJ4CccDejMZ8pXA=; b=Ya5aQqQaM/N30UusSYatSzrqyk
	/38oEA8kxJvvaZGSebouANHzH38l10iZVW/ufu6C89br8U5aETQlbhoCR630T/7YF2C/JUm2J0Y98
	GmT891HqshEQQPRvFGsFv8MKatExMsB2p3WqP1h1o9GnprfVgL5RZCyt9LFPVxkg7dp8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157590-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157590: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64-libvirt:libvirt-build:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8bf0fab14256057bbd145563151814300476bb28
X-Osstest-Versions-That:
    xen=904148ecb4a59d4c8375d8e8d38117b8605e10ac
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Dec 2020 05:59:02 +0000

flight 157590 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157590/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 157560

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8bf0fab14256057bbd145563151814300476bb28
baseline version:
 xen                  904148ecb4a59d4c8375d8e8d38117b8605e10ac

Last test of basis   157560  2020-12-15 13:00:26 Z    0 days
Testing same since   157570  2020-12-15 17:00:30 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <pdurrant@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 556 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 06:04:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 06:04:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55046.95754 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpPuv-00026f-6R; Wed, 16 Dec 2020 06:04:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55046.95754; Wed, 16 Dec 2020 06:04:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpPuv-00026X-3A; Wed, 16 Dec 2020 06:04:05 +0000
Received: by outflank-mailman (input) for mailman id 55046;
 Wed, 16 Dec 2020 06:04:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpPuu-00026M-Ba; Wed, 16 Dec 2020 06:04:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpPuu-00080T-3m; Wed, 16 Dec 2020 06:04:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpPut-0000qp-ON; Wed, 16 Dec 2020 06:04:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kpPut-00072V-Nn; Wed, 16 Dec 2020 06:04:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=uljVK9QdbW+3SdQX6iEUKowHkmnij7sQPYP1sH1/jwE=; b=2zuTbuEPYTDoWA9Q5oqT87o8Ie
	KrPkUIOx1ZzfFopoTnhX6UMLOkXTebUPD21JnqcVZIP3fEW/5T1u3kLNhWrAbJMVkCqqE1z9FnCLH
	93tpDlFLk4Nq97VZy/4fEcmzZK7OPOuwBHy28JsTmBv5Imhy9Zacb5TMh8KSBSwwphSc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157564-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 157564: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d17a5d5d2774601f8137984a3ee23ec28eb0793c
X-Osstest-Versions-That:
    xen=1d1d1f5391976456a79daac0dcfe7157da1e54f7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Dec 2020 06:04:03 +0000

flight 157564 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157564/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt   16 saverestore-support-check fail blocked in 157133
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157133
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157133
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157133
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157133
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157133
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157133
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157133
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157133
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157133
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157133
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  d17a5d5d2774601f8137984a3ee23ec28eb0793c
baseline version:
 xen                  1d1d1f5391976456a79daac0dcfe7157da1e54f7

Last test of basis   157133  2020-12-01 14:37:21 Z   14 days
Testing same since   157564  2020-12-15 13:36:35 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Harsha Shamsundara Havanur <havanur@amazon.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   1d1d1f5391..d17a5d5d27  d17a5d5d2774601f8137984a3ee23ec28eb0793c -> stable-4.14


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 06:06:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 06:06:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55060.95793 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpPxE-0002OV-3L; Wed, 16 Dec 2020 06:06:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55060.95793; Wed, 16 Dec 2020 06:06:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpPxE-0002OO-0J; Wed, 16 Dec 2020 06:06:28 +0000
Received: by outflank-mailman (input) for mailman id 55060;
 Wed, 16 Dec 2020 06:06:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DJND=FU=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpPxC-0002OJ-He
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 06:06:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e0ad8d84-63ff-4351-a7af-b6fad766b9a5;
 Wed, 16 Dec 2020 06:06:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E114AB113;
 Wed, 16 Dec 2020 06:06:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0ad8d84-63ff-4351-a7af-b6fad766b9a5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608098784; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=uC3ARCQsbnLz9ctv7WVMZZqqqu6Ljd+T/YxhzPh3Smc=;
	b=V48NrEo4d6hxDT6uo90Q4DmgZnk6c24mDD0YpDQGP2m7WJztnfSd2bWLQf8LiSMTvS/Dm9
	v7Q6V87psCjNl+iPu1o6WnskED2twG+IXJtHGbXBQZd56pFWxRvqcMxwLpe16cnWIWpb+w
	MyO2dtFIsNAlfWxKUHvpdk9ekhkfGsM=
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Julien Grall <jgrall@amazon.com>
References: <20201215163603.21700-1-jgross@suse.com>
 <20201215163603.21700-5-jgross@suse.com>
 <3c8ab988-725e-2823-23f6-d9286a04243e@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v10 04/25] tools/libxenevtchn: add possibility to not
 close file descriptor on exec
Message-ID: <e33205e7-11cb-e463-8c6f-92cfff2c74da@suse.com>
Date: Wed, 16 Dec 2020 07:06:23 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <3c8ab988-725e-2823-23f6-d9286a04243e@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="f7yBGInJaHJLm7JkohHoSqGRbg64YniXN"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--f7yBGInJaHJLm7JkohHoSqGRbg64YniXN
Content-Type: multipart/mixed; boundary="U5wKb6JrF9t2XrP2sxTQ3cgtgCcLv2Qnu";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Julien Grall <jgrall@amazon.com>
Message-ID: <e33205e7-11cb-e463-8c6f-92cfff2c74da@suse.com>
Subject: Re: [PATCH v10 04/25] tools/libxenevtchn: add possibility to not
 close file descriptor on exec
References: <20201215163603.21700-1-jgross@suse.com>
 <20201215163603.21700-5-jgross@suse.com>
 <3c8ab988-725e-2823-23f6-d9286a04243e@citrix.com>
In-Reply-To: <3c8ab988-725e-2823-23f6-d9286a04243e@citrix.com>

--U5wKb6JrF9t2XrP2sxTQ3cgtgCcLv2Qnu
Content-Type: multipart/mixed;
 boundary="------------F2AC585614C0DAE1BDBB8EF7"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------F2AC585614C0DAE1BDBB8EF7
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 15.12.20 19:09, Andrew Cooper wrote:
> On 15/12/2020 16:35, Juergen Gross wrote:
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> Reviewed-by: Wei Liu <wl@xen.org>
>> Reviewed-by: Julien Grall <jgrall@amazon.com>
>> ---
>> V7:
>> - new patch
>>
>> V8:
>> - some minor comments by Julien Grall addressed
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>
> Various of your patches still have double SoB.  (Just as a note to be
> careful to anyone committing...)

Yeah, this is annoying.

They are all after the first "---" mark, so they shouldn't end up in
git AFAICS.

Why git is adding those duplicates I don't know. I had this problem
before, but some git update made it disappear. Now it is back, but
not for all patches. :-(

>
>> diff --git a/tools/include/xenevtchn.h b/tools/include/xenevtchn.h
>> index 91821ee56d..dadc46ea36 100644
>> --- a/tools/include/xenevtchn.h
>> +++ b/tools/include/xenevtchn.h
>> @@ -64,11 +64,25 @@ struct xentoollog_logger;
>>    *
>>    * Calling xenevtchn_close() is the only safe operation on a
>>    * xenevtchn_handle which has been inherited.
>> + *
>> + * Setting XENEVTCHN_NO_CLOEXEC allows to keep the file descriptor used
>> + * for the event channel driver open across exec(2). In order to be able
>> + * to use that file descriptor the new binary activated via exec(2) has
>> + * to call xenevtchn_open_fd() with that file descriptor as parameter in
>> + * order to associate it with a new handle. The file descriptor can be
>> + * obtained via xenevtchn_fd() before calling exec(2).
>>    */
>
> More of the comment block than this needs adjusting in light of the
> exec() changes.
>
>> -/* Currently no flags are defined */
>> +
>> +/* Don't set O_CLOEXEC when opening event channel driver node. */
>> +#define XENEVTCHN_NO_CLOEXEC 0x01
>> +
>>   xenevtchn_handle *xenevtchn_open(struct xentoollog_logger *logger,
>>                                    unsigned open_flags);
>>
>> +/* Flag XENEVTCHN_NO_CLOEXEC is ignored by xenevtchn_open_fd(). */
>> +xenevtchn_handle *xenevtchn_open_fd(struct xentoollog_logger *logger,
>> +                                    int fd, unsigned open_flags);
>> +
>
> I spotted this before, but didn't have time to reply.
>
> This isn't "open fd".  It is "construct a xenevtchn_handle object around
> an already-open fd".  As such, open_flags appears bogus because at no
> point are we making an open() call.  (I'd argue that, irrespective of
> other things, it wants naming xenevtchn_fdopen() for API familiarity.)

Okay.

>
> However, the root of the problem is actually the ambiguity in the name.
> These are not flags to the open() system call, but general flags for
> xenevtchn.
>
> Therefore, I recommend a prep patch which renames open_flags to just
> flags, and while at it, changes to unsigned int rather than a naked
> "unsigned" type.  There are no API/ABI implications for this, but it
> will help massively with code clarity.

Okay.

>
> I'd also possibly go as far as to say that plumbing 'flags' down into
> osdep ought to be split out into a separate patch.  There is also a wild
> mix of coding styles even within the hunks here.

Fine with me.

>
> Additionally, something in core.c should check for unknown flags and
> reject them with EINVAL.  It was buggy that this wasn't done
> before, and really needs to be implemented before we start having cases
> where people might plausibly pass something other than 0.

Are you sure this is safe? I'm not arguing against it, but we considered
doing that and didn't dare to.

>
> ~Andrew
>
> P.S. if you don't fancy doing all of this, my brain could do with a
> break from the complicated work, and I can see about organising this
> cleanup.
>

I'm fine doing it. I'm sure you'll find some other no-brainer to work
on :-)


Juergen

--------------F2AC585614C0DAE1BDBB8EF7
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------F2AC585614C0DAE1BDBB8EF7--

--U5wKb6JrF9t2XrP2sxTQ3cgtgCcLv2Qnu--

--f7yBGInJaHJLm7JkohHoSqGRbg64YniXN
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/Zo98FAwAAAAAACgkQsN6d1ii/Ey8N
mAgAgUUv6lZ96bg84wX2/9npnS/PLbzuUZmCu93fhbw0z9zdvSphyB4P/LLIWOZRipOatWyGhWUR
cra4M9ykd8dmq/5waix9148w1bqhiManIJSOdMsKNDgpfjgiEvyCMbuNIlUDPpCeBDpgFJUwSULu
D9cpofBwI6ztSipzoo2VTOiyLZvzT8frWHPSo9DQPpMcboBiuYXcomKFzjeARw8B6IBRX4lD2hVH
76w9yDv1Hp1WR5Mj7elPXG11jRnDRBxELMoN6i2vjD4rmL6mppAhw9qk95ZfyHcM9qxXppEyRpBN
N249UzqtvxqhOKxUyFIUKNDIV9kN9cVQvBdlk94xuQ==
=OAZk
-----END PGP SIGNATURE-----

--f7yBGInJaHJLm7JkohHoSqGRbg64YniXN--


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 07:01:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 07:01:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55067.95805 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpQo9-0007mN-1s; Wed, 16 Dec 2020 07:01:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55067.95805; Wed, 16 Dec 2020 07:01:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpQo8-0007mG-Us; Wed, 16 Dec 2020 07:01:08 +0000
Received: by outflank-mailman (input) for mailman id 55067;
 Wed, 16 Dec 2020 07:01:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DJND=FU=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpQo7-0007mB-K9
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 07:01:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bae1d43d-4dfa-4c40-b424-7f7e24b7f89e;
 Wed, 16 Dec 2020 07:01:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BC97BAD2B;
 Wed, 16 Dec 2020 07:01:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bae1d43d-4dfa-4c40-b424-7f7e24b7f89e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608102063; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=RNPw9wGxeqKZXJZNzwqN89RLD5ziumjabqjjdT3MtS8=;
	b=Wnx9I1wvLZEBxXQCIoToaKF0qU3BHZhloZI0qQakbYXOfSD/R+ZTJWCIIIIbPuUbTWonI7
	nS9SE8VsuSNf6Z5RylD3e/tjs0AbrvjarIZCNfT6G3HkfD6G3uHqLcoFUADcHaxSEEBqrD
	jcXDoNKUKq+eUESeoeYv3dlqp4YJn0A=
Subject: Re: [PATCH -next v2] x86/xen: Convert to DEFINE_SHOW_ATTRIBUTE
To: Qinglang Miao <miaoqinglang@huawei.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, x86@kernel.org,
 "H. Peter Anvin" <hpa@zytor.com>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
References: <20200917125547.104472-1-miaoqinglang@huawei.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <08e16e36-b086-814c-20a9-9d0748c4d497@suse.com>
Date: Wed, 16 Dec 2020 08:01:01 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20200917125547.104472-1-miaoqinglang@huawei.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="u0MRvIWpSOyDyK7F8P5WPQ7Bno9KY37UB"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--u0MRvIWpSOyDyK7F8P5WPQ7Bno9KY37UB
Content-Type: multipart/mixed; boundary="qFCxlpEBAVtXCyXhiMKSMwIXwzRcosJx8";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Qinglang Miao <miaoqinglang@huawei.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, x86@kernel.org,
 "H. Peter Anvin" <hpa@zytor.com>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Message-ID: <08e16e36-b086-814c-20a9-9d0748c4d497@suse.com>
Subject: Re: [PATCH -next v2] x86/xen: Convert to DEFINE_SHOW_ATTRIBUTE
References: <20200917125547.104472-1-miaoqinglang@huawei.com>
In-Reply-To: <20200917125547.104472-1-miaoqinglang@huawei.com>

--qFCxlpEBAVtXCyXhiMKSMwIXwzRcosJx8
Content-Type: multipart/mixed;
 boundary="------------C6B20E965551A682523F7056"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------C6B20E965551A682523F7056
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 17.09.20 14:55, Qinglang Miao wrote:
> Use DEFINE_SHOW_ATTRIBUTE macro to simplify the code.
>
> Signed-off-by: Qinglang Miao <miaoqinglang@huawei.com>

Applied to: xen/tip.git for-linus-5.11


Juergen

--------------C6B20E965551A682523F7056--

--qFCxlpEBAVtXCyXhiMKSMwIXwzRcosJx8--

--u0MRvIWpSOyDyK7F8P5WPQ7Bno9KY37UB
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/ZsK0FAwAAAAAACgkQsN6d1ii/Ey/E
nwf8D3sy39qtCyZ/WHaKKMaCCo0pT8KZ5m8n4Z0x4sSMEfUWP8iqezOesALJhOsk8FlczmQOMGW+
cWF+QvB1QD5n24RqqKz3Zfhlf5xrp0NXN9f67n3wE8z+PFsnamWi79kBH3VkgPLfzNqVqTvsguoO
nt33hGvnWEjInfWALRMmX/FNoWWeFOhfNJRh7Oaa4snc1kO06ydl2QrwtmeUAdd4uxpNdTkuFCuZ
vHQaAm7Hm5xLXvcbZT9J5s3Wk2nV+ySA3Cyz3uw2qtDuzgQWYzdJYts7RFKRKRMaM1kGomAQEWPX
fQU0LpIDY0GnM8jcqFQdC/z/corvq5fiAjaBsmyjLQ==
=jvQV
-----END PGP SIGNATURE-----

--u0MRvIWpSOyDyK7F8P5WPQ7Bno9KY37UB--


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 07:01:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 07:01:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55069.95818 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpQod-0007r9-Dw; Wed, 16 Dec 2020 07:01:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55069.95818; Wed, 16 Dec 2020 07:01:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpQod-0007r2-8K; Wed, 16 Dec 2020 07:01:39 +0000
Received: by outflank-mailman (input) for mailman id 55069;
 Wed, 16 Dec 2020 07:01:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DJND=FU=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpQoc-0007qt-CO
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 07:01:38 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 68ebd9c3-ecd5-436b-ac90-6ee35073282d;
 Wed, 16 Dec 2020 07:01:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 325C3AD18;
 Wed, 16 Dec 2020 07:01:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68ebd9c3-ecd5-436b-ac90-6ee35073282d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608102092; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=7hLPFvKf0KQ27daJnHMmIn6IkafDM/HKPtdm/WK3604=;
	b=DgJMjfvVtLQrPPlGp7CIbNRypFymSBqQzHQ6whX/d5Ra5ZVv7ui2d8Jll/j54vb3lSx6LQ
	PhlUhpLghfaejmbh9K30IdcMMKoYg0ZydYNxGc+kZqljep9GQaJ5htAX3awTbCo1FOK62+
	ru1FMTj1DRPHMa3g1dwuo7L7nGMDlMo=
Subject: Re: [PATCH 0/2] Remove Xen PVH dependency on PCI
To: Jason Andryuk <jandryuk@gmail.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Cc: "H. Peter Anvin" <hpa@zytor.com>, linux-kernel@vger.kernel.org
References: <20201014175342.152712-1-jandryuk@gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <5aa9a54c-207e-6cf6-7fbb-37782c016161@suse.com>
Date: Wed, 16 Dec 2020 08:01:31 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201014175342.152712-1-jandryuk@gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="BsaiQXjF7kB1T2vSWMShDAGTM0x5njbvC"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--BsaiQXjF7kB1T2vSWMShDAGTM0x5njbvC
Content-Type: multipart/mixed; boundary="u1wK4B18JdUdPmsfAj57AHy2IhYDrXhYt";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jason Andryuk <jandryuk@gmail.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Cc: "H. Peter Anvin" <hpa@zytor.com>, linux-kernel@vger.kernel.org
Message-ID: <5aa9a54c-207e-6cf6-7fbb-37782c016161@suse.com>
Subject: Re: [PATCH 0/2] Remove Xen PVH dependency on PCI
References: <20201014175342.152712-1-jandryuk@gmail.com>
In-Reply-To: <20201014175342.152712-1-jandryuk@gmail.com>

--u1wK4B18JdUdPmsfAj57AHy2IhYDrXhYt
Content-Type: multipart/mixed;
 boundary="------------C4653AF006164DD645B3036F"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------C4653AF006164DD645B3036F
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 14.10.20 19:53, Jason Andryuk wrote:
> A Xen PVH domain doesn't have a PCI bus or devices, so it doesn't need
> PCI support built in.  Currently, XEN_PVH depends on XEN_PVHVM which
> depends on PCI.
>
> The first patch introduces XEN_PVHVM_GUEST as a toplevel item and
> changes XEN_PVHVM to a hidden variable.  This allows XEN_PVH to depend
> on XEN_PVHVM without PCI while XEN_PVHVM_GUEST depends on PCI.
>
> The second patch moves XEN_512GB to clean up the option nesting.
>
> Jason Andryuk (2):
>    xen: Remove Xen PVH/PVHVM dependency on PCI
>    xen: Kconfig: nest Xen guest options
>
>   arch/x86/xen/Kconfig | 38 ++++++++++++++++++++++----------------
>   drivers/xen/Makefile |  2 +-
>   2 files changed, 23 insertions(+), 17 deletions(-)
>

Series applied to: xen/tip.git for-linus-5.11


Juergen
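
[Editorial sketch, not part of the original mail: a hypothetical Kconfig fragment reconstructed from the cover letter's description only, not from the actual patch. It illustrates the stated structure: XEN_PVHVM becomes a hidden symbol without the PCI dependency, a new visible XEN_PVHVM_GUEST carries the PCI dependency, and XEN_PVH can then depend on XEN_PVHVM alone. Symbol bodies and dependency lists here are assumptions.]

```kconfig
# Hidden symbol: no prompt string, so it is not user-visible.
config XEN_PVHVM
	def_bool y
	depends on XEN                  # PCI dependency dropped here

# User-visible option; PCI is required only for full PVHVM guests.
config XEN_PVHVM_GUEST
	bool "Xen PVHVM guest support"
	default y
	depends on XEN_PVHVM && PCI

# PVH has no PCI bus, so it can depend on the hidden symbol alone.
config XEN_PVH
	bool "Support for running as a Xen PVH guest"
	depends on XEN_PVHVM
```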

--------------C4653AF006164DD645B3036F
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------C4653AF006164DD645B3036F--

--u1wK4B18JdUdPmsfAj57AHy2IhYDrXhYt--

--BsaiQXjF7kB1T2vSWMShDAGTM0x5njbvC
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/ZsMsFAwAAAAAACgkQsN6d1ii/Ey/A
4gf9ESVC95o0zgF1KYkEqsfbAijfO6wWfJ+jEUzU+qL0rSkdsxXW3J9tfeRQTAdZAhaEro4hg3B2
QMHD5XpFjgwc+NLftcpf3S3a3hR4giHmttHDF8rnc7PaaMZLuDJyJ2ZbRp4jGROg+iOKYfghyzS3
MIzbQUp3tH/cWhCY5kctQkpb3r1rEr6axWtThSXxAdysRLG08f9Mw7ShO0Y1NQ537VRIWjSSRMXE
soYapjCPRZ/nRQWpvpe0bSFRTHOQszD6EBh4f6RZEgSZRb7LCEwuJKTeXrO5ESrnowNJNox2Gyus
bNeru9g3AjpYJa2tltvb0SrIjQuotZlbBOoFriCplA==
=Z21z
-----END PGP SIGNATURE-----

--BsaiQXjF7kB1T2vSWMShDAGTM0x5njbvC--


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 07:02:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 07:02:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55073.95828 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpQp2-0007yZ-L4; Wed, 16 Dec 2020 07:02:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55073.95828; Wed, 16 Dec 2020 07:02:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpQp2-0007yS-Hv; Wed, 16 Dec 2020 07:02:04 +0000
Received: by outflank-mailman (input) for mailman id 55073;
 Wed, 16 Dec 2020 07:02:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DJND=FU=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpQp1-0007yF-09
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 07:02:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d4f49800-f483-4dca-a24d-bd50a474dfee;
 Wed, 16 Dec 2020 07:02:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 63288AD2B;
 Wed, 16 Dec 2020 07:02:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d4f49800-f483-4dca-a24d-bd50a474dfee
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608102121; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=QoiSX8Hvxn2fqq9nVOof5al//6vCWiVhYQsloyKMYqs=;
	b=V7RGybcqbrSuNn8RDRU4KdMeHTf8rCq9V4f0ei6fcelVjbMzqDSPQI0pA50/UWOoXrx9jp
	Dj659JTp/MJB+CDr4pUHRQuU1IhYcc/ezbkf7aQjnBjueFWtuu9sv98Her4boxx4uukNu4
	A/Sut2UcOy18cDJ3Ov9eS8Y0Hi4YgmY=
Subject: Re: [PATCH 058/141] xen-blkfront: Fix fall-through warnings for Clang
To: "Gustavo A. R. Silva" <gustavoars@kernel.org>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jens Axboe <axboe@kernel.dk>
Cc: xen-devel@lists.xenproject.org, linux-block@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-hardening@vger.kernel.org
References: <cover.1605896059.git.gustavoars@kernel.org>
 <33057688012c34dd60315ad765ff63f070e98c0c.1605896059.git.gustavoars@kernel.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <41ed666b-739e-80c2-5714-c488b17c8500@suse.com>
Date: Wed, 16 Dec 2020 08:02:00 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <33057688012c34dd60315ad765ff63f070e98c0c.1605896059.git.gustavoars@kernel.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="gvEDSSw8t26PSNBbUdz9PTbLgUC3pfy9W"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--gvEDSSw8t26PSNBbUdz9PTbLgUC3pfy9W
Content-Type: multipart/mixed; boundary="acehXDoNClKCL8Daxb7vBCy06W0iwoyi8";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: "Gustavo A. R. Silva" <gustavoars@kernel.org>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jens Axboe <axboe@kernel.dk>
Cc: xen-devel@lists.xenproject.org, linux-block@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-hardening@vger.kernel.org
Message-ID: <41ed666b-739e-80c2-5714-c488b17c8500@suse.com>
Subject: Re: [PATCH 058/141] xen-blkfront: Fix fall-through warnings for Clang
References: <cover.1605896059.git.gustavoars@kernel.org>
 <33057688012c34dd60315ad765ff63f070e98c0c.1605896059.git.gustavoars@kernel.org>
In-Reply-To: <33057688012c34dd60315ad765ff63f070e98c0c.1605896059.git.gustavoars@kernel.org>

--acehXDoNClKCL8Daxb7vBCy06W0iwoyi8
Content-Type: multipart/mixed;
 boundary="------------AC7D4EB44FC6ED9CCA3EFC5F"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------AC7D4EB44FC6ED9CCA3EFC5F
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 20.11.20 19:32, Gustavo A. R. Silva wrote:
> In preparation to enable -Wimplicit-fallthrough for Clang, fix a warning
> by explicitly adding a break statement instead of letting the code fall
> through to the next case.
>
> Link: https://github.com/KSPP/linux/issues/115
> Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>

Applied to: xen/tip.git for-linus-5.11


Juergen
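
[Editorial sketch, not part of the original mail and not the actual xen-blkfront code: an illustrative example of the kind of change the patch describes. With -Wimplicit-fallthrough enabled, a case label that silently runs into the next one draws a warning; an explicit break documents that no fall-through is intended. The function and values below are invented for illustration.]

```c
#include <assert.h>

/* Hypothetical helper: each case is meant to be independent, so every
 * case ends in an explicit break (the patch's fix for a case that
 * previously fell through). */
static int grant_flags(int mode)
{
	int flags = 0;

	switch (mode) {
	case 2:
		flags |= 4;
		break;		/* explicit break added; no fall-through intended */
	case 1:
		flags |= 2;
		break;
	default:
		flags |= 1;
		break;
	}
	return flags;
}
```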

--------------AC7D4EB44FC6ED9CCA3EFC5F--

--acehXDoNClKCL8Daxb7vBCy06W0iwoyi8--

--gvEDSSw8t26PSNBbUdz9PTbLgUC3pfy9W
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/ZsOgFAwAAAAAACgkQsN6d1ii/Ey+V
Gwf7B5XLWILGCq3WNwhMi7WFHqun6smQ5tXZ/zhc3ROrAk8C8hvonGGkapDULRVea7cFex5OJeYD
szTs2qp1NV2bACyDLl+/Z3TybRkQj8JSmehQHr3cOknMs2sBQL8aRLPNSGrYgmmyv7SKB8bJp4LG
5A633sbbylbfBUhmPB13rncb4WI0suuGoCOHqcL+uCHLHDM/+hlbudM8cg34iMc+Sv668uXp4V1B
L/KfVdUPT0Tp1ZY6KhXoWGCSM6PKz5dSbmGGlTzaG+tuegSgIRFTSrN4amrVU0mDqe74N5TIBh2B
8+R4stIvl5eTFkZ91BZyivSkQMfRZyqND3aCGhd+FQ==
=fQHh
-----END PGP SIGNATURE-----

--gvEDSSw8t26PSNBbUdz9PTbLgUC3pfy9W--


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 07:02:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 07:02:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55078.95841 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpQpS-00085N-6N; Wed, 16 Dec 2020 07:02:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55078.95841; Wed, 16 Dec 2020 07:02:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpQpS-00085G-2J; Wed, 16 Dec 2020 07:02:30 +0000
Received: by outflank-mailman (input) for mailman id 55078;
 Wed, 16 Dec 2020 07:02:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DJND=FU=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpQpQ-00084C-SX
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 07:02:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2395e5db-fbdb-4959-b0be-0a1f43c293ce;
 Wed, 16 Dec 2020 07:02:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 17B1FACF9;
 Wed, 16 Dec 2020 07:02:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2395e5db-fbdb-4959-b0be-0a1f43c293ce
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608102146; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=LwfxpmJUNoS4imRrEZDa/hsgsQBDnZ5aeqpNh/UFg8E=;
	b=Mirc5V+tUR/gfl5ALiNhaxZKI8cOb1x2TCv3Y7DHXqXwQos14+yo7107em1XhF9dnTh/gH
	m4U9kcDXbUjgxZH586Uc49/yfINb6UuRmN/2D/824V4QeIeAcPaYeWLFiqXhUoHs49GNhn
	lr6o/MI36fNfXamsixEhW4szEVuLcSo=
Subject: Re: [PATCH 138/141] xen/manage: Fix fall-through warnings for Clang
To: "Gustavo A. R. Silva" <gustavoars@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 linux-hardening@vger.kernel.org
References: <cover.1605896059.git.gustavoars@kernel.org>
 <5cfc00b1d8ed68eb2c2b6317806a0aa7e57d27f1.1605896060.git.gustavoars@kernel.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <15031ddc-e2ef-4fb0-185b-d2d8a7f45c2f@suse.com>
Date: Wed, 16 Dec 2020 08:02:25 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <5cfc00b1d8ed68eb2c2b6317806a0aa7e57d27f1.1605896060.git.gustavoars@kernel.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="vxs4o6JTSmrfNdGUcKhNa1wkMf9uuVgXl"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--vxs4o6JTSmrfNdGUcKhNa1wkMf9uuVgXl
Content-Type: multipart/mixed; boundary="rv4BO9JJTrt9PkbcxMXWgncgAJtJftkwI";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: "Gustavo A. R. Silva" <gustavoars@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 linux-hardening@vger.kernel.org
Message-ID: <15031ddc-e2ef-4fb0-185b-d2d8a7f45c2f@suse.com>
Subject: Re: [PATCH 138/141] xen/manage: Fix fall-through warnings for Clang
References: <cover.1605896059.git.gustavoars@kernel.org>
 <5cfc00b1d8ed68eb2c2b6317806a0aa7e57d27f1.1605896060.git.gustavoars@kernel.org>
In-Reply-To: <5cfc00b1d8ed68eb2c2b6317806a0aa7e57d27f1.1605896060.git.gustavoars@kernel.org>

--rv4BO9JJTrt9PkbcxMXWgncgAJtJftkwI
Content-Type: multipart/mixed;
 boundary="------------59288F99C9D4FF0AAA5C05DA"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------59288F99C9D4FF0AAA5C05DA
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 20.11.20 19:40, Gustavo A. R. Silva wrote:
> In preparation to enable -Wimplicit-fallthrough for Clang, fix a warning
> by explicitly adding a break statement instead of letting the code fall
> through to the next case.
>
> Link: https://github.com/KSPP/linux/issues/115
> Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>

Applied to: xen/tip.git for-linus-5.11


Juergen

--------------59288F99C9D4FF0AAA5C05DA--

--rv4BO9JJTrt9PkbcxMXWgncgAJtJftkwI--

--vxs4o6JTSmrfNdGUcKhNa1wkMf9uuVgXl
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/ZsQEFAwAAAAAACgkQsN6d1ii/Ey9g
XAgAn7bnr9HNcb0zfKbhmYUH1+52a8N6mfVY6l1LFvcmsuFVTt/qA9nSYKb7iQWcMPr1kMfKedOu
+gSPfJoDj+bSSRstcd9u7+CK/044jEJlWknl+tsbBVbfGxdscWHKQHb6mBLbr9eNgTwKHTmf/Afq
Ujp/d+EFon+yP4pXdqShlxnhzAhjLS/T2Jc35Vnhko8X8wMHt2QIOLjYjH5nx0jjfNxdlaaKH27K
I8IF6BMImMFvhVPBiWNOZpWNe0sDIQDzftwafjpYgq1HubtvZllt9l72F1sec30tRCSngPYdCUz8
kDGln56K+x3Ru+DMlE/AW5XFyB1F/AhhK1yCg80m5w==
=YviI
-----END PGP SIGNATURE-----

--vxs4o6JTSmrfNdGUcKhNa1wkMf9uuVgXl--


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 07:02:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 07:02:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55084.95852 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpQps-0008D4-E9; Wed, 16 Dec 2020 07:02:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55084.95852; Wed, 16 Dec 2020 07:02:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpQps-0008Cx-BE; Wed, 16 Dec 2020 07:02:56 +0000
Received: by outflank-mailman (input) for mailman id 55084;
 Wed, 16 Dec 2020 07:02:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DJND=FU=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpQpq-0008CS-6x
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 07:02:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fc2f82fa-1ef6-471a-b5c8-ef09126140d1;
 Wed, 16 Dec 2020 07:02:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A58F4AD2B;
 Wed, 16 Dec 2020 07:02:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fc2f82fa-1ef6-471a-b5c8-ef09126140d1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608102172; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=X05gtw4kgTwX17yOh6ryFaZQNp8HrbmMgmLTjKIzlq8=;
	b=PgoMg50ca2BU6FbkT16JT2azHdchaw2R7U9Nia7MJroXzwQUqvix5JtrttGw9EzaMkHuvw
	wJwE8e2OqfsNjXRrmogJ1YJST/9n3qyRqqftpZcffpRnE79iFK/yFxw8SG5kpdzXuqNXye
	kKcPYORa58LtmuweT/5Yq1rbYO/NGwU=
Subject: Re: [PATCH] xen: remove trailing semicolon in macro definition
To: trix@redhat.com, boris.ostrovsky@oracle.com, sstabellini@kernel.org,
 tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, hpa@zytor.com
Cc: x86@kernel.org, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
References: <20201127160707.2622061-1-trix@redhat.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <49ea103f-c506-2252-f17c-5488fc03988e@suse.com>
Date: Wed, 16 Dec 2020 08:02:51 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201127160707.2622061-1-trix@redhat.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="WF9Uk85fbX9051r9PAIpsoQaKUjFHYNE3"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--WF9Uk85fbX9051r9PAIpsoQaKUjFHYNE3
Content-Type: multipart/mixed; boundary="Gm9EMK5W9xjZeuzmTGz4bE0qkfDBP2Ewx";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: trix@redhat.com, boris.ostrovsky@oracle.com, sstabellini@kernel.org,
 tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, hpa@zytor.com
Cc: x86@kernel.org, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
Message-ID: <49ea103f-c506-2252-f17c-5488fc03988e@suse.com>
Subject: Re: [PATCH] xen: remove trailing semicolon in macro definition
References: <20201127160707.2622061-1-trix@redhat.com>
In-Reply-To: <20201127160707.2622061-1-trix@redhat.com>

--Gm9EMK5W9xjZeuzmTGz4bE0qkfDBP2Ewx
Content-Type: multipart/mixed;
 boundary="------------6E8CD34F2C319CA262AEFC06"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------6E8CD34F2C319CA262AEFC06
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 27.11.20 17:07, trix@redhat.com wrote:
> From: Tom Rix <trix@redhat.com>
>=20
> The macro use will already have a semicolon.
>=20
> Signed-off-by: Tom Rix <trix@redhat.com>

Applied to: xen/tip.git for-linus-5.11


Juergen

--------------6E8CD34F2C319CA262AEFC06
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------6E8CD34F2C319CA262AEFC06--

--Gm9EMK5W9xjZeuzmTGz4bE0qkfDBP2Ewx--

--WF9Uk85fbX9051r9PAIpsoQaKUjFHYNE3
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/ZsRsFAwAAAAAACgkQsN6d1ii/Ey/B
lQgAkcpAKfTn6Y+Jbp1HsZdXkanY7gB1wcou5exvBPIv434XKG5IhsClwhlXMAIYQoYJE5hGAUOj
LpmSPsIn/9tnKMy86CiaPYXprpcatYhvQPYwKwcPiXWR26JFqo7btLTwYletYqDRNl1j7sG8Ab8H
E6ZLnLu/igF7SNIyCEUe2SJss1PMmMPtKE8Wqsjla95KzJFg+XNP8nxKDfv3rzm7D95FOQlPIce5
+50h/SAGQF+XgyUXlW6ZuryJGUoJeDagWy1Aezh6q5PDBeNYAPbvK3vCRlfJpu4W+A6YkU0WfgzU
0BR7qiFv0n/1GeqcONMOPRWZVpPLFFLSUIBRtTL1IQ==
=Vt1f
-----END PGP SIGNATURE-----

--WF9Uk85fbX9051r9PAIpsoQaKUjFHYNE3--


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 08:12:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 08:12:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55101.95871 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpRvP-0006lN-UG; Wed, 16 Dec 2020 08:12:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55101.95871; Wed, 16 Dec 2020 08:12:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpRvP-0006lG-Qc; Wed, 16 Dec 2020 08:12:43 +0000
Received: by outflank-mailman (input) for mailman id 55101;
 Wed, 16 Dec 2020 08:12:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uZEz=FU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kpRvP-0006lB-5X
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 08:12:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6ff6bf1a-01bf-424c-a5be-d29a118f6f37;
 Wed, 16 Dec 2020 08:12:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id ECEBAACBD;
 Wed, 16 Dec 2020 08:12:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6ff6bf1a-01bf-424c-a5be-d29a118f6f37
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608106361; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7ValLgEYg1KF55nl6jtub6eAGN5kJomEIMPabfj9FNg=;
	b=L9k5pgKgmcGhB75QAL5J+KZORFEZIh8g3YapR0dsEfJSZaRegsqssj27mT2SDiB5+J598m
	PCvsNgorp7Wc+3U+C8gA1sPlsC3/WjxQ7ZTgCOPegrEntF9ZbunaxD1+8fOMj6mqV9M63i
	RGkQL2xUKm6ZF2YtrVwKuA25facusOA=
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
To: Liwei <xieliwei@gmail.com>
Cc: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>,
 Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
References: <mailman.2112.1604414193.711.xen-devel@lists.xenproject.org>
 <CAPE0SYz0be1ZOoNqDHpeJWeZS-1BM_zy50=Cmeo+4Aq1Na0eNQ@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4a0d9b00-5f1f-e9e1-fccf-1f26762134e8@suse.com>
Date: Wed, 16 Dec 2020 09:12:40 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <CAPE0SYz0be1ZOoNqDHpeJWeZS-1BM_zy50=Cmeo+4Aq1Na0eNQ@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 15.12.2020 20:08, Liwei wrote:
> Hi list,
>     This is a reply to the thread of the same title (linked here:
> https://www.mail-archive.com/xen-devel@lists.xenproject.org/msg84916.html
> ) which I could not reply to because I receive this list by digest.
> 
>     I'm unclear if this is exactly the reason, but I experienced the
> same symptoms when upgrading to 4.14. The issue does not occur if I
> downgrade to 4.11 (the previous version that was provided by Debian).
> Kernel is 5.9.11 and unchanged between xen versions.
> 
>     One thing I noticed is that if I disable the monitor/mwait
> instructions on my CPU (Intel Xeon E5-2699 v4 ES), the stalls seem to
> occur later into the boot. With the instructions enabled, the system
> usually stalls less than a few minutes after boot; disabled, it can
> last for tens of minutes.
> 
>     Further disabling the HPET or forcing the kernel to use PIT causes
> it to be somewhat usable. The stalls still occur tens of minutes in
> but somehow everything seems to continue chugging along fine?

By "the kernel" do you really mean the kernel, or Xen?

>     I've also verified that the stalls do not occur in all the above
> cases if I just boot into the kernel without xen.
> 
>     When the stalls happen, I get the "rcu: INFO: rcu_sched detected
> stalls on CPUs/tasks" backtraces printed on the console periodically,
> but keystrokes don't do anything on the console, and I can't spawn new
> SSH sessions even though pinging the system produces a reply. The last
> item in the call trace is usually "xen_safe_halt", but I've seen it
> occur for other functions related to btrfs and the network adapter as
> well.

The kernel log may not be the only relevant thing here - the hypervisor
log may also need looking at (with full verbosity enabled and
preferably a debug build in use).
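
For context, one way to get that full-verbosity hypervisor log (a sketch using the standard Xen command-line options; bootloader paths and variable names vary by distro, the Debian-style ones are shown as an assumption):

```shell
# Append to the Xen (hypervisor) command line in the bootloader config,
# e.g. GRUB_CMDLINE_XEN_DEFAULT in /etc/default/grub on Debian-style
# systems, then regenerate the grub config and reboot:
#   loglvl=all guest_loglvl=all

# After reproducing the stall, capture the hypervisor log from dom0:
xl dmesg > xen-dmesg.txt
```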

>     Do let me know if there's anything I can provide to help
> troubleshoot this. At the moment I've reverted to 4.11, but I can
> temporarily switch over to 4.14 to collect any necessary information.

In that earlier thread a number of things to try were suggested, iirc
(switching scheduler or disabling use of deep C states come to mind).
Did you experiment with those? If so, can you let us know of the
results, so we can see whether there's a pattern?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 08:21:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 08:21:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55106.95883 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpS45-0007kO-Qw; Wed, 16 Dec 2020 08:21:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55106.95883; Wed, 16 Dec 2020 08:21:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpS45-0007kH-Nz; Wed, 16 Dec 2020 08:21:41 +0000
Received: by outflank-mailman (input) for mailman id 55106;
 Wed, 16 Dec 2020 08:21:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DJND=FU=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpS44-0007kC-2r
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 08:21:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f8a519fc-18b3-4b28-b702-1042ac86e206;
 Wed, 16 Dec 2020 08:21:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CB288AD6A;
 Wed, 16 Dec 2020 08:21:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8a519fc-18b3-4b28-b702-1042ac86e206
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608106897; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=W1oenR6GpZ5zDvXXt7ITUbn+6gm5vPT2aasqCb5slls=;
	b=b07+yc3wArjNoG1RGjFAJzVPxrzvX6dR3Ucs1JYfW5Z/jlUkdRzRkeJcV9XHcAXtrRAiGO
	gqczg6z/dn1uFQqXhmxL6yZ0g5iN10QCwdUiA99onofVXQ4MIOCBzYfOLG2zHOz4y6DieK
	Dhz0zYWkDL+uSvjQVodAx1WI39NxzMg=
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20201215111055.3810-1-jgross@suse.com>
 <2deac9ce-0c27-a472-7d51-b91a640d92ed@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v2] xen/xenbus: make xs_talkv() interruptible
Message-ID: <8d26b752-b7ba-159f-5bed-bb015a06d819@suse.com>
Date: Wed, 16 Dec 2020 09:21:37 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <2deac9ce-0c27-a472-7d51-b91a640d92ed@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="VEsOmEl5Q0L2GQ2SPMrdoseYU2Ng77RxF"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--VEsOmEl5Q0L2GQ2SPMrdoseYU2Ng77RxF
Content-Type: multipart/mixed; boundary="Bqj9vE9hjQIUGpl7s3f4xXuzpSl9QgIui";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Message-ID: <8d26b752-b7ba-159f-5bed-bb015a06d819@suse.com>
Subject: Re: [PATCH v2] xen/xenbus: make xs_talkv() interruptible
References: <20201215111055.3810-1-jgross@suse.com>
 <2deac9ce-0c27-a472-7d51-b91a640d92ed@citrix.com>
In-Reply-To: <2deac9ce-0c27-a472-7d51-b91a640d92ed@citrix.com>

--Bqj9vE9hjQIUGpl7s3f4xXuzpSl9QgIui
Content-Type: multipart/mixed;
 boundary="------------10DC628869F4F77CEAD28FF5"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------10DC628869F4F77CEAD28FF5
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 15.12.20 21:59, Andrew Cooper wrote:
> On 15/12/2020 11:10, Juergen Gross wrote:
>> In case a process waits for any Xenstore action in the xenbus driver
>> it should be interruptible by signals.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> V2:
>> - don't special case SIGKILL as libxenstore is handling -EINTR fine
>> ---
>>   drivers/xen/xenbus/xenbus_xs.c | 9 ++++++++-
>>   1 file changed, 8 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/xen/xenbus/xenbus_xs.c b/drivers/xen/xenbus/xenbu=
s_xs.c
>> index 3a06eb699f33..17c8f8a155fd 100644
>> --- a/drivers/xen/xenbus/xenbus_xs.c
>> +++ b/drivers/xen/xenbus/xenbus_xs.c
>> @@ -205,8 +205,15 @@ static bool test_reply(struct xb_req_data *req)
>>  =20
>>   static void *read_reply(struct xb_req_data *req)
>>   {
>> +	int ret;
>> +
>>   	do {
>> -		wait_event(req->wq, test_reply(req));
>> +		ret =3D wait_event_interruptible(req->wq, test_reply(req));
>> +
>> +		if (ret =3D=3D -ERESTARTSYS && signal_pending(current)) {
>> +			req->msg.type =3D XS_ERROR;
>> +			return ERR_PTR(-EINTR);
>> +		}
>=20
> So now I can talk fully about the situations which lead to this, I thin=
k
> there is a bit more complexity.
>=20
> It turns out there are a number of issues related to running a Xen
> system with no xenstored.
>=20
> 1) If a xenstore-write occurs during startup before init-xenstore-domai=
n
> runs, the former blocks on /dev/xen/xenbus waiting for xenstored to
> reply, while the latter blocks on /dev/xen/xenbus_backend when trying t=
o
> tell the dom0 kernel that xenstored is in dom1.=C2=A0 This effectively
> deadlocks the system.

This should be easy to solve: any request to /dev/xen/xenbus should
block upfront in case xenstored isn't up yet (could e.g. wait
interruptible until xenstored_ready is non-zero).

> 2) If xenstore-watch is running when xenstored dies, it spins at 100%
> cpu usage making no system calls at all.=C2=A0 This is caused by bad er=
ror
> handling from xs_watch(), and attempting to debug found:

Can you expand on "bad error handling from xs_watch()", please?

>=20
> 3) (this issue).=C2=A0 If anyone starts xenstore-watch with no xenstore=
d
> running at all, it blocks in D in the kernel.

Should be handled with solution for 1).

>=20
> The cause is the special handling for watch/unwatch commands which,
> instead of just queuing up the data for xenstore, explicitly waits for
> an OK for registering the watch.=C2=A0 This causes a write() system cal=
l to
> block waiting for a non-existent entity to reply.
>=20
> So while this patch does resolve the major usability issue I found (I
> can't even SIGINT and get my terminal back), I think there are issues.
>=20
> The reason why XS_WATCH/XS_UNWATCH are special cased is because they do=

> require special handling.=C2=A0 The main kernel thread for processing
> incoming data from xenstored does need to know how to associate each
> async XS_WATCH_EVENT to the caller who watched the path.
>=20
> Therefore, depending on when this cancellation hits, we might be in any=

> of the following states:
>=20
> 1) the watch is queued in the kernel, but not even sent to xenstored ye=
t
> 2) the watch is queued in the xenstored ring, but not acted upon
> 3) the watch is queued in the xenstored ring, and the xenstored has see=
n
> it but not replied yet
> 4) the watch has been processed, but the XS_WATCH reply hasn't been
> received yet
> 5) the watch has been processed, and the XS_WATCH reply received
>=20
> State 5 (and a little bit) is the normal success path when xenstored ha=
s
> acted upon the request, and the internal kernel infrastructure is set u=
p
> appropriately to handle XS_WATCH_EVENTs.
>=20
> States 1 and 2 can be very common if there is no xenstored (or at least=
,
> it hasn't started up yet).=C2=A0 In reality, there is either no xenstor=
ed, or
> it is up and running (and for a period of time during system startup,
> these cases occur in sequence).

Yes, this is the reason we can't just reject a user request if xenstored
hasn't been detected yet: it could be just starting.

>=20
> As soon as the XS_WATCH event has been written into the xenstored ring,=

> it is not safe to cancel.=C2=A0 You've committed to xenstored processin=
g the
> request (if it is up).

I'm not sure this is true. Cancelling it might result in a stale watch
in xenstored, but there shouldn't be a problem related to that. In case
that watch fires the event will normally be discarded by the kernel as
no matching watch is found in the kernel's data. In case a new watch
has been setup with the same struct xenbus_watch address (which is used
as the token), then this new watch might fire without the node of the
new watch having changed, but spurious watch events are defined to be
okay (OTOH the path in the event might look strange to the handler).

>=20
> If xenstored is actually up and running, its fine and necessary to
> block.=C2=A0 The request will be processed in due course (timing subjec=
t to
> the client and server load).=C2=A0 If xenstored isn't up, blocking isn'=
t ok.
>=20
> Therefore, I think we need to distinguish "not yet on the ring" from "o=
n
> the ring", as our distinction as to whether cancelling is safe, and
> ensure we don't queue anything on the ring before we're sure xenstored
> has started up.
>=20
> Does this make sense?

Basically, yes.


Juergen

--------------10DC628869F4F77CEAD28FF5
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------10DC628869F4F77CEAD28FF5--

--Bqj9vE9hjQIUGpl7s3f4xXuzpSl9QgIui--

--VEsOmEl5Q0L2GQ2SPMrdoseYU2Ng77RxF
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/Zw5EFAwAAAAAACgkQsN6d1ii/Ey9G
iAgAlIlcZMgB8Dan2KsF5BlNOq2VL4q6YOgTK9xF9n8ihKQAvAv47VXBa4Pev6H1k4/Yj+SVxwKO
hjP4WTIPGawAZAeXCzR8kDcZnr+aPPXb1ISMT6VHPID6D6nHCiqLLoH7aGNjeL9y2c7gClHMuezi
ZWpsP+t2PR3EbwNBnLV+fuAkJcgaq7K5tvFwpCwOq94Dj+jz6GZCYwZPe5COtpE03hRzSBG0KOPj
EakRCC0u8ACQVlvHQ9P/uIDOzbtySGPVRMQwly9ZaBDTdeob/ZnkJtLx+Ln2OTWuPrXksnz9MUDz
WNVUwnKREUX9Yyr4Q/yZCrz1uLR9J6eStJn6T3YD4g==
=abTa
-----END PGP SIGNATURE-----

--VEsOmEl5Q0L2GQ2SPMrdoseYU2Ng77RxF--


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 08:28:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 08:28:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55111.95895 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpSAE-0007xs-M5; Wed, 16 Dec 2020 08:28:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55111.95895; Wed, 16 Dec 2020 08:28:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpSAE-0007xl-J5; Wed, 16 Dec 2020 08:28:02 +0000
Received: by outflank-mailman (input) for mailman id 55111;
 Wed, 16 Dec 2020 08:28:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpSAD-0007xd-8B; Wed, 16 Dec 2020 08:28:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpSAD-0002Y3-0p; Wed, 16 Dec 2020 08:28:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpSAC-0003ng-Pi; Wed, 16 Dec 2020 08:28:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kpSAC-000091-PE; Wed, 16 Dec 2020 08:28:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=iAzLUajVOUzfEva4QFmWTDDttDIdosm+LhIxgEIr5Ss=; b=AYz/VDLz+Pa9JVUyJYjV4wVdcb
	LKHh1cTacrGeWDruD4hYIQRNVEFufqvMllWH0dISqdmo7G5ciIkKEsgT+u9dy9tlERqsqCzAhjNux
	IYpIDdK/KoHBfsW+n1Bjehb9OjgZD0gnvA/F6c6fL8NpURGuzI51yKiJKm4lls6e4jeY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157598-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157598: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64-libvirt:libvirt-build:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8bf0fab14256057bbd145563151814300476bb28
X-Osstest-Versions-That:
    xen=904148ecb4a59d4c8375d8e8d38117b8605e10ac
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Dec 2020 08:28:00 +0000

flight 157598 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157598/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 157560

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8bf0fab14256057bbd145563151814300476bb28
baseline version:
 xen                  904148ecb4a59d4c8375d8e8d38117b8605e10ac

Last test of basis   157560  2020-12-15 13:00:26 Z    0 days
Testing same since   157570  2020-12-15 17:00:30 Z    0 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <pdurrant@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 556 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 08:41:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 08:41:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55119.95910 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpSNW-0001KY-V9; Wed, 16 Dec 2020 08:41:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55119.95910; Wed, 16 Dec 2020 08:41:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpSNW-0001KR-Ra; Wed, 16 Dec 2020 08:41:46 +0000
Received: by outflank-mailman (input) for mailman id 55119;
 Wed, 16 Dec 2020 08:41:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OuYO=FU=infradead.org=peterz@srs-us1.protection.inumbo.net>)
 id 1kpSNW-0001KM-8s
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 08:41:46 +0000
Received: from merlin.infradead.org (unknown [2001:8b0:10b:1231::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b9c5a53e-b926-4c2c-a8b4-9247d80b7e75;
 Wed, 16 Dec 2020 08:41:40 +0000 (UTC)
Received: from j217100.upc-j.chello.nl ([24.132.217.100]
 helo=noisy.programming.kicks-ass.net)
 by merlin.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kpSMu-0005UY-Qg; Wed, 16 Dec 2020 08:41:09 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id AB87E307697;
 Wed, 16 Dec 2020 09:40:59 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id 967842CADD880; Wed, 16 Dec 2020 09:40:59 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b9c5a53e-b926-4c2c-a8b4-9247d80b7e75
Date: Wed, 16 Dec 2020 09:40:59 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>,
	xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-hyperv@vger.kernel.org, kvm@vger.kernel.org, luto@kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>, Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <sean.j.christopherson@intel.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	Daniel Lezcano <daniel.lezcano@linaro.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Daniel Bristot de Oliveira <bristot@redhat.com>
Subject: Re: [PATCH v2 00/12] x86: major paravirt cleanup
Message-ID: <20201216084059.GL3040@hirez.programming.kicks-ass.net>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120125342.GC3040@hirez.programming.kicks-ass.net>
 <20201123134317.GE3092@hirez.programming.kicks-ass.net>
 <6771a12c-051d-1655-fb3a-cc45a3c82e29@suse.com>
 <20201215141834.GG3040@hirez.programming.kicks-ass.net>
 <20201215145408.GR3092@hirez.programming.kicks-ass.net>
 <20201216003802.5fpklvx37yuiufrt@treble>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201216003802.5fpklvx37yuiufrt@treble>

On Tue, Dec 15, 2020 at 06:38:02PM -0600, Josh Poimboeuf wrote:
> On Tue, Dec 15, 2020 at 03:54:08PM +0100, Peter Zijlstra wrote:
> > The problem is that a single instance of unwind information (ORC) must
> > capture and correctly unwind all alternatives. Since the trivially
> > correct mandate is out, implement the straightforward brute-force
> > approach:
> > 
> >  1) generate CFI information for each alternative
> > 
> >  2) unwind every alternative with the merge-sort of the previously
> >     generated CFI information -- O(n^2)
> > 
> >  3) for any possible conflict: yell.
> > 
> >  4) Generate ORC with merge-sort
> > 
> > Specifically for 3 there are two possible classes of conflicts:
> > 
> >  - the merge-sort itself could find conflicting CFI for the same
> >    offset.
> > 
> >  - the unwind can fail with the merged CFI.
> 
> So much algorithm.

:-)

It's not really hard, but it has a few pesky details (as always).

> Could we make it easier by caching the shared
> per-alt-group CFI state somewhere along the way?

Yes, but when I tried it, it grew the code required. Runtime costs would
be less, but I figured that since alternatives are typically few and
small, that wasn't a real consideration.

That is, it would basically cache the results of find_alt_unwind(), but
you still need find_alt_unwind() to generate that data, and so you gain
the code for filling and using the extra data structure.

Yes, computing it 3 times is naff, but meh.

> [ 'offset' is a byte offset from the beginning of the group.  It could
>   be calculated based on 'orig_insn' or 'orig_insn->alts', depending on
>   whether 'insn' is an original or a replacement. ]

That's exactly what it already does, of course ;-)

> If the array entry is NULL, just update it with a pointer to the CFI.
> If it's not NULL, make sure it matches the existing CFI, and WARN if it
> doesn't.
> 
> Also, with this data structure, the ORC generation should also be a lot
> more straightforward, just ignore the NULL entries.

Yeah, I suppose it gets rid of the memcmp-prev thing.

> Thoughts?  This is all theoretical of course, I could try to do a patch
> tomorrow.

No real objection, I just didn't do it because 1) it works, and 2) even
moar lines.


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 09:50:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 09:50:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55124.95922 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpTS8-0007gP-9z; Wed, 16 Dec 2020 09:50:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55124.95922; Wed, 16 Dec 2020 09:50:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpTS8-0007gI-5j; Wed, 16 Dec 2020 09:50:36 +0000
Received: by outflank-mailman (input) for mailman id 55124;
 Wed, 16 Dec 2020 09:50:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpTS7-0007gA-7z; Wed, 16 Dec 2020 09:50:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpTS7-0003uH-1E; Wed, 16 Dec 2020 09:50:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpTS6-00019Q-OB; Wed, 16 Dec 2020 09:50:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kpTS6-0006WM-NM; Wed, 16 Dec 2020 09:50:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157602-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 157602: all pass - PUSHED
X-Osstest-Versions-This:
    xen=904148ecb4a59d4c8375d8e8d38117b8605e10ac
X-Osstest-Versions-That:
    xen=8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Dec 2020 09:50:34 +0000

flight 157602 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157602/

Perfect :-)
All tests in this flight passed as required

version targeted for testing:
 xen                  904148ecb4a59d4c8375d8e8d38117b8605e10ac
baseline version:
 xen                  8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4

Last test of basis   157492  2020-12-13 09:18:28 Z    3 days
Testing same since   157602  2020-12-16 09:18:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Harsha Shamsundara Havanur <havanur@amazon.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien@amazon.com>
  Manuel Bouyer <bouyer@antioche.eu.org>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8e0fe4fe5f..904148ecb4  904148ecb4a59d4c8375d8e8d38117b8605e10ac -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 10:08:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 10:08:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55132.95937 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpTis-0000Ry-RR; Wed, 16 Dec 2020 10:07:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55132.95937; Wed, 16 Dec 2020 10:07:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpTis-0000Rr-NP; Wed, 16 Dec 2020 10:07:54 +0000
Received: by outflank-mailman (input) for mailman id 55132;
 Wed, 16 Dec 2020 10:07:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpTir-0000Rj-5f; Wed, 16 Dec 2020 10:07:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpTiq-0004HX-Rc; Wed, 16 Dec 2020 10:07:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpTiq-00029w-IA; Wed, 16 Dec 2020 10:07:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kpTiq-00009a-Hd; Wed, 16 Dec 2020 10:07:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157565-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.10-testing test] 157565: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.10-testing:test-amd64-i386-freebsd10-amd64:guest-saverestore.2:fail:heisenbug
    xen-4.10-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=6ea37c69c7d3948d9bb6f217235ae8bd767e8c46
X-Osstest-Versions-That:
    xen=1d72d9915edff0dd41f601bbb0b1f83c02ff1689
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Dec 2020 10:07:52 +0000

flight 157565 xen-4.10-testing real [real]
flight 157601 xen-4.10-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157565/
http://logs.test-lab.xenproject.org/osstest/logs/157601/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-freebsd10-amd64 18 guest-saverestore.2 fail pass in 157601-retest

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  3 hosts-allocate               fail  like 157138
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 157138
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 157138
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157138
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157138
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157138
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157138
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157138
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157138
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157138
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157138
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157138
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157138
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157138
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  6ea37c69c7d3948d9bb6f217235ae8bd767e8c46
baseline version:
 xen                  1d72d9915edff0dd41f601bbb0b1f83c02ff1689

Last test of basis   157138  2020-12-01 17:05:42 Z   14 days
Testing same since   157565  2020-12-15 14:05:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Harsha Shamsundara Havanur <havanur@amazon.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   1d72d9915e..6ea37c69c7  6ea37c69c7d3948d9bb6f217235ae8bd767e8c46 -> stable-4.10


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 10:17:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 10:17:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55167.96059 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpTsI-00021x-8t; Wed, 16 Dec 2020 10:17:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55167.96059; Wed, 16 Dec 2020 10:17:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpTsI-00021q-61; Wed, 16 Dec 2020 10:17:38 +0000
Received: by outflank-mailman (input) for mailman id 55167;
 Wed, 16 Dec 2020 10:17:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Cg29=FU=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kpTsG-00021l-SL
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 10:17:36 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7ef3197e-f381-4105-99cb-93057bee257d;
 Wed, 16 Dec 2020 10:17:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ef3197e-f381-4105-99cb-93057bee257d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608113855;
  h=subject:to:references:from:message-id:date:mime-version:
   in-reply-to:content-transfer-encoding;
  bh=qll6Wpqk4RFhNWuNJAwRuphR6lrp5io6cBTRKGbeL8A=;
  b=KXvZttbUnkSIWUS8ij3TpyjkgKHGfvOe0Sc4MQlHn63gRWtzdqkO9DlQ
   jKnoTTV6ZJAWZMBj5DPKcXh8UCObMRaAZSLlcHTLKAdrsaYPkJsfavaHF
   oL6Sb4RYkcTuIJNk4PV0oWpjSYqeCNkw6Lg0fvqTeZLBSsYJIrmwVPFA3
   I=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 9fxSQSuzMEdzIk68C8lklNKqsDjJURYTiUaqjk00LpOyd8b2KqUevD0Et7mVWM9r7DM9KfRLuk
 IZXz46O/jZUUWG/wkO3PmfTPC+Za3vAJ/pXkf1xRpcTyApuxs7yZ5lCW6fDg/Xm6Bewjh9ucf+
 fdp33JgN8lxf/TPgJyOm0cgaJGfaJ2hFSKUoeZcWFbTzWsZNbCw4YvbHPTpFuyfuFE9/50SiC+
 Gtq1NkHp/QXlQHmSOWd/Ve2SzgTaQyz3xA3uApglcVsuSDk7Bp+fbZ7a7nKfHDSUY0fJdp17h8
 dmU=
X-SBRS: 5.2
X-MesageID: 33338992
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,424,1599537600"; 
   d="scan'208";a="33338992"
Subject: Re: [xen-unstable-smoke bisection] complete build-amd64-libvirt
To: osstest service owner <osstest-admin@xenproject.org>,
	<xen-devel@lists.xenproject.org>, Paul Durrant <paul@xen.org>, Wei Liu
	<wl@xen.org>
References: <E1kpMXk-0006zb-Jk@osstest.test-lab.xenproject.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <19ed8894-23f7-0f9d-f3c4-1d5ea5bc0c02@citrix.com>
Date: Wed, 16 Dec 2020 10:17:29 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <E1kpMXk-0006zb-Jk@osstest.test-lab.xenproject.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 16/12/2020 02:27, osstest service owner wrote:
> branch xen-unstable-smoke
> xenbranch xen-unstable-smoke
> job build-amd64-libvirt
> testid libvirt-build
>
> Tree: libvirt git://xenbits.xen.org/libvirt.git
> Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
> Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
> Tree: qemuu git://xenbits.xen.org/qemu-xen.git
> Tree: xen git://xenbits.xen.org/xen.git
>
> *** Found and reproduced problem changeset ***
>
>   Bug is in tree:  xen git://xenbits.xen.org/xen.git
>   Bug introduced:  929f23114061a0089e6d63d109cf6a1d03d35c71
>   Bug not present: 8bc342b043a6838c03cd86039a34e3f8eea1242f
>   Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/157589/
>
>
>   commit 929f23114061a0089e6d63d109cf6a1d03d35c71
>   Author: Paul Durrant <pdurrant@amazon.com>
>   Date:   Tue Dec 8 19:30:26 2020 +0000
>   
>       libxl: introduce 'libxl_pci_bdf' in the idl...
>       
>       ... and use in 'libxl_device_pci'
>       
>       This patch is preparatory work for restricting the type passed to functions
>       that only require BDF information, rather than passing a 'libxl_device_pci'
>       structure which is only partially filled. In this patch only the minimal
>       mechanical changes necessary to deal with the structural changes are made.
>       Subsequent patches will adjust the code to make better use of the new type.
>       
>       Signed-off-by: Paul Durrant <pdurrant@amazon.com>
>       Acked-by: Wei Liu <wl@xen.org>
>       Acked-by: Nick Rosbrook <rosbrookn@ainfosec.com>

This breaks the API.  You can't make the following change in the IDL.

 libxl_device_pci = Struct("device_pci", [
-    ("func",      uint8),
-    ("dev",       uint8),
-    ("bus",       uint8),
-    ("domain",    integer),
-    ("vdevfn",    uint32),
+    ("bdf", libxl_pci_bdf),
+    ("vdevfn", uint32),

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 10:44:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 10:44:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55174.96077 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpUHq-0004mu-GS; Wed, 16 Dec 2020 10:44:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55174.96077; Wed, 16 Dec 2020 10:44:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpUHq-0004mn-DX; Wed, 16 Dec 2020 10:44:02 +0000
Received: by outflank-mailman (input) for mailman id 55174;
 Wed, 16 Dec 2020 10:44:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZxtY=FU=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kpUHp-0004mi-34
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 10:44:01 +0000
Received: from mail-wm1-f47.google.com (unknown [209.85.128.47])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 140776f2-1109-438f-9718-a6b4627d2729;
 Wed, 16 Dec 2020 10:44:00 +0000 (UTC)
Received: by mail-wm1-f47.google.com with SMTP id q75so1968433wme.2
 for <xen-devel@lists.xenproject.org>; Wed, 16 Dec 2020 02:44:00 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id i7sm2717598wrv.12.2020.12.16.02.43.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 16 Dec 2020 02:43:58 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 140776f2-1109-438f-9718-a6b4627d2729
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to:user-agent;
        bh=UIVKPr/2mLPS/umOv8iGKxHMJAeouS0XlPgcBGh+KpQ=;
        b=HpR+bLCssSPC64wpd0nL3W7Fcp216O6ZF+HPuhQDMmb3t1hoq1SjGq+Asj9WU5SmHS
         leDCA4/cJO7PNcoVC6ChzllYzqHH65A8kh41uXll27jhi65JSRlPcNeV8npUVA6uK47s
         8VXCzM9thRN3sjea7d+HQg14hhD56QxxOIkMEoYqaJigdLEUQUy4dlkgx2iptiTEgly+
         7ZwSzCgxxhFY00igOoyXvBUfNL0rh7dyFUIdX7D8iMWJknH5EOsG0M4oWp/rhOuYC30K
         1TYw8HFoq+MF9WpYQDXI/XFOlMna6smBQ+F10yjBn8/UaflOnHWocxY7XZwJAGFifQPp
         tUrQ==
X-Gm-Message-State: AOAM531l8RjDTPOyomjHgJo3MaWtzqbrSRMXd52Z3UM49zlG9//6g/zO
	OJhf+KjBw7ioHg1E1VuZ6DA=
X-Google-Smtp-Source: ABdhPJyRoSP2do49kmkWuSY3GUsaW1I6ApEZZNiW5I+ZqaBvpQXbc16JI35LdThmJM9pziN7bAXcxw==
X-Received: by 2002:a1c:43c6:: with SMTP id q189mr2688586wma.7.1608115439309;
        Wed, 16 Dec 2020 02:43:59 -0800 (PST)
Date: Wed, 16 Dec 2020 10:43:57 +0000
From: Wei Liu <wl@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>
Cc: osstest service owner <osstest-admin@xenproject.org>,
	xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [xen-unstable-smoke bisection] complete build-amd64-libvirt
Message-ID: <20201216104357.wcggzckdii76d4iz@liuwe-devbox-debian-v2>
References: <E1kpMXk-0006zb-Jk@osstest.test-lab.xenproject.org>
 <19ed8894-23f7-0f9d-f3c4-1d5ea5bc0c02@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <19ed8894-23f7-0f9d-f3c4-1d5ea5bc0c02@citrix.com>
User-Agent: NeoMutt/20180716

Paul, are you able to cook up a patch today? If not I will revert the
offending patch(es).

Wei.

On Wed, Dec 16, 2020 at 10:17:29AM +0000, Andrew Cooper wrote:
> On 16/12/2020 02:27, osstest service owner wrote:
> > branch xen-unstable-smoke
> > xenbranch xen-unstable-smoke
> > job build-amd64-libvirt
> > testid libvirt-build
> >
> > Tree: libvirt git://xenbits.xen.org/libvirt.git
> > Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
> > Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
> > Tree: qemuu git://xenbits.xen.org/qemu-xen.git
> > Tree: xen git://xenbits.xen.org/xen.git
> >
> > *** Found and reproduced problem changeset ***
> >
> >   Bug is in tree:  xen git://xenbits.xen.org/xen.git
> >   Bug introduced:  929f23114061a0089e6d63d109cf6a1d03d35c71
> >   Bug not present: 8bc342b043a6838c03cd86039a34e3f8eea1242f
> >   Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/157589/
> >
> >
> >   commit 929f23114061a0089e6d63d109cf6a1d03d35c71
> >   Author: Paul Durrant <pdurrant@amazon.com>
> >   Date:   Tue Dec 8 19:30:26 2020 +0000
> >   
> >       libxl: introduce 'libxl_pci_bdf' in the idl...
> >       
> >       ... and use in 'libxl_device_pci'
> >       
> >       This patch is preparatory work for restricting the type passed to functions
> >       that only require BDF information, rather than passing a 'libxl_device_pci'
> >       structure which is only partially filled. In this patch only the minimal
> >       mechanical changes necessary to deal with the structural changes are made.
> >       Subsequent patches will adjust the code to make better use of the new type.
> >       
> >       Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> >       Acked-by: Wei Liu <wl@xen.org>
> >       Acked-by: Nick Rosbrook <rosbrookn@ainfosec.com>
> 
> This breaks the API. You can't make the following change in the IDL.
> 
> libxl_device_pci = Struct("device_pci", [
> -    ("func",      uint8),
> -    ("dev",       uint8),
> -    ("bus",       uint8),
> -    ("domain",    integer),
> -    ("vdevfn",    uint32),
> +    ("bdf", libxl_pci_bdf),
> +    ("vdevfn", uint32),
> 
> ~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 11:23:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 11:23:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55180.96090 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpUtI-0008Sp-Bw; Wed, 16 Dec 2020 11:22:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55180.96090; Wed, 16 Dec 2020 11:22:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpUtI-0008Si-8S; Wed, 16 Dec 2020 11:22:44 +0000
Received: by outflank-mailman (input) for mailman id 55180;
 Wed, 16 Dec 2020 11:22:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Cg29=FU=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kpUtH-0008Sd-M6
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 11:22:43 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 550920ef-f32e-4602-bd0f-c4deb4847fea;
 Wed, 16 Dec 2020 11:22:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 550920ef-f32e-4602-bd0f-c4deb4847fea
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608117760;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=penWowEIrCCaT2sBFgkZanUNFuHURfWJOg9gJ17z3GA=;
  b=RArOC3mhCmwk+WukgVOuiHb8fqX9Q0QWs24a1AyzPjlEHaf7XHNASqcK
   IEWzZ8ZnjZGL3DtblPOpysjKGQ8/AbJATYPObpC/efbSTfoIbKerQUYBK
   3a0nEQjkEVLpUqC0AwLO6fW9bJgpLXhUfZjp1ZwiGLO8sWCltYugIswZD
   U=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: j+gInFzWtNVKXfq625TT/LdLZ7Z/jsMfLEJtcpjbKmykuFCB/fGmQgm5cL3Y1jcFlx646Mm+rb
 COsAvbr502XlI2ROmcj4cqAoLL8zQX0PWkbiytau4/DXW5IjJWYGNqJUSGqv9kmStcdbN++9ZA
 HVK2Knf0di9TKKgkSq6dCHaStl2n/Y0tcnXM4ZZ1Jbn5zrbQmg7JgLc/UN2FbVRMesyGJ3P/Vu
 YhbQcHpZZB8OW1o6Ic0Y6urncLt6/mnAEIbdHEeNNY0U/LSCoQTcvilXYLl0IuNCijbNOgr038
 dBY=
X-SBRS: 5.2
X-MesageID: 33570766
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,424,1599537600"; 
   d="scan'208";a="33570766"
Subject: Re: [PATCH v10 04/25] tools/libxenevtchn: add possibility to not
 close file descriptor on exec
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
	<xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, Julien Grall
	<jgrall@amazon.com>
References: <20201215163603.21700-1-jgross@suse.com>
 <20201215163603.21700-5-jgross@suse.com>
 <3c8ab988-725e-2823-23f6-d9286a04243e@citrix.com>
 <e33205e7-11cb-e463-8c6f-92cfff2c74da@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <0ae8f6c8-266f-98e2-df83-17eeacabeaed@citrix.com>
Date: Wed, 16 Dec 2020 11:22:35 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <e33205e7-11cb-e463-8c6f-92cfff2c74da@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 16/12/2020 06:06, Jürgen Groß wrote:
> On 15.12.20 19:09, Andrew Cooper wrote:
>>
>> Additionally, something in core.c should check for unknown flags and
>> reject them with EINVAL. It was buggy that this wasn't done
>> before, and really needs to be implemented before we start having
>> cases where people might plausibly pass something other than 0.
>
> Are you sure this is safe? I'm not arguing against it, but we
> considered to do that and didn't dare to.

Well - you're already breaking things by adding meaning to bit 0 where
it was previously ignored.

But fundamentally - any caller passing non-zero to begin with is buggy,
and it will be less bad to fix up our input validation and give them a
clean EINVAL now.

The alternative is no error and some weird side effect when we implement
whichever bit they were setting.

~Andrew



From xen-devel-bounces@lists.xenproject.org Wed Dec 16 11:33:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 11:33:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55185.96102 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpV3a-000122-D3; Wed, 16 Dec 2020 11:33:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55185.96102; Wed, 16 Dec 2020 11:33:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpV3a-00011v-98; Wed, 16 Dec 2020 11:33:22 +0000
Received: by outflank-mailman (input) for mailman id 55185;
 Wed, 16 Dec 2020 11:33:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DJND=FU=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpV3Z-00011q-Dk
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 11:33:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bfaaeef9-92c5-4f78-b723-5baab26969ae;
 Wed, 16 Dec 2020 11:33:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 58993AD89;
 Wed, 16 Dec 2020 11:33:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bfaaeef9-92c5-4f78-b723-5baab26969ae
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608118399; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=7qLSy5xwkYpjzjAI7FCDXImiebyhEClZDy/Z1mpd5T4=;
	b=lpEEecLTxZo9Rzo7MLRN51UTgtKCuf1lJpmoRyqc0N6QVXnfphbQFXFPMGgjaE4x4ioV0J
	xv+PqbtiDy08yqhpZW3LQnTEebdqsAbnYdk0NH9moq2E5kRWR7VbdPLl+lwCszTpI2CjKF
	s8P/ovVqVFpNQ5aWbXrTxdezyQ5MAaQ=
Subject: Re: [PATCH v10 04/25] tools/libxenevtchn: add possibility to not
 close file descriptor on exec
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Julien Grall <jgrall@amazon.com>
References: <20201215163603.21700-1-jgross@suse.com>
 <20201215163603.21700-5-jgross@suse.com>
 <3c8ab988-725e-2823-23f6-d9286a04243e@citrix.com>
 <e33205e7-11cb-e463-8c6f-92cfff2c74da@suse.com>
 <0ae8f6c8-266f-98e2-df83-17eeacabeaed@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <11e504a9-eab4-6dbc-ec1d-d1c8940fe56d@suse.com>
Date: Wed, 16 Dec 2020 12:33:18 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <0ae8f6c8-266f-98e2-df83-17eeacabeaed@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="HTX4G267rjK0LiJ0qSmWJPr38r05EQrKD"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--HTX4G267rjK0LiJ0qSmWJPr38r05EQrKD
Content-Type: multipart/mixed; boundary="9PZJwYakkF5NodLjmZBnbbrHnD0eiI60i";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Julien Grall <jgrall@amazon.com>
Message-ID: <11e504a9-eab4-6dbc-ec1d-d1c8940fe56d@suse.com>
Subject: Re: [PATCH v10 04/25] tools/libxenevtchn: add possibility to not
 close file descriptor on exec
References: <20201215163603.21700-1-jgross@suse.com>
 <20201215163603.21700-5-jgross@suse.com>
 <3c8ab988-725e-2823-23f6-d9286a04243e@citrix.com>
 <e33205e7-11cb-e463-8c6f-92cfff2c74da@suse.com>
 <0ae8f6c8-266f-98e2-df83-17eeacabeaed@citrix.com>
In-Reply-To: <0ae8f6c8-266f-98e2-df83-17eeacabeaed@citrix.com>

--9PZJwYakkF5NodLjmZBnbbrHnD0eiI60i
Content-Type: multipart/mixed;
 boundary="------------F27EC543E916B3D437042C85"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------F27EC543E916B3D437042C85
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 16.12.20 12:22, Andrew Cooper wrote:
> On 16/12/2020 06:06, Jürgen Groß wrote:
>> On 15.12.20 19:09, Andrew Cooper wrote:
>>>
>>> Additionally, something in core.c should check for unknown flags and
>>> reject them with EINVAL. It was buggy that this wasn't done
>>> before, and really needs to be implemented before we start having
>>> cases where people might plausibly pass something other than 0.
>>
>> Are you sure this is safe? I'm not arguing against it, but we
>> considered to do that and didn't dare to.
>
> Well - you're already breaking things by adding meaning to bit 0 where
> it was previously ignored.

Fair enough.

> But fundamentally - any caller passing non-zero to begin with is buggy,
> and it will be less bad to fix up our input validation and give them a
> clean EINVAL now.

Fine with me.

> The alternative is no error and some weird side effect when we implement
> whichever bit they were setting.

Hmm, yes. I'll add the check.


Juergen

--------------F27EC543E916B3D437042C85
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------F27EC543E916B3D437042C85--

--9PZJwYakkF5NodLjmZBnbbrHnD0eiI60i--

--HTX4G267rjK0LiJ0qSmWJPr38r05EQrKD
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/Z8H4FAwAAAAAACgkQsN6d1ii/Ey/Y
MQf/eS2HPluXdtTR7kDq40NDkLWmCaDObxS5xKVCzlbDZwE6UfxMbxzyvPHQG1B2G/VBL5TvMdEf
joRUYw18IJbhNIB6ajmgkWyaq7ELLW0pEQSdNe3bqk3mG4ha6qSYXRkuH5qWBjUKi0ZLBciD3DjL
0l0oWDEMs/N71tNbvfmE9ZcvfFCzoRcDFDC4Abrl6gfDbv5EuVN52dSX65AcFgqZmnfHdoffQsPv
kUmtL//8a7WPBgRfGdKN95HbaxrW9RfHAmrOgO1f6cIqg/TWn+kEO/sVJzX0k2IiG72Im65XFLTa
aCXtMCRoHsReYLYkGMmzILC4LR5ztjfIjdq+EWkXxg==
=Wvng
-----END PGP SIGNATURE-----

--HTX4G267rjK0LiJ0qSmWJPr38r05EQrKD--


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 12:12:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 12:12:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55198.96113 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpVfP-0004oE-S0; Wed, 16 Dec 2020 12:12:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55198.96113; Wed, 16 Dec 2020 12:12:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpVfP-0004o7-Ov; Wed, 16 Dec 2020 12:12:27 +0000
Received: by outflank-mailman (input) for mailman id 55198;
 Wed, 16 Dec 2020 12:12:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpVfP-0004nz-1I; Wed, 16 Dec 2020 12:12:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpVfO-0006OF-RM; Wed, 16 Dec 2020 12:12:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpVfO-00089e-EA; Wed, 16 Dec 2020 12:12:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kpVfO-0005sJ-Df; Wed, 16 Dec 2020 12:12:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LGAzJCJlEpvczvlBQQa9riFBNsIlksTEDOqwIUgWSlY=; b=jX0KiF9B0xpNCXbbPfZzyZzbNs
	RBtYsbMHWvaWEFWLYiXVIGgi3e/d03nFE03W4dum605/SSHZcsl/kGmpeojYCQQIlIDv90gzdVyS9
	MYwCu6paVPbufWjqg4jN4zbvwDdjVGyDEjU6WlnoZIlY/gQZQgDGgtij7/OqaR+9tjWU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157600-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157600: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64-libvirt:libvirt-build:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8bf0fab14256057bbd145563151814300476bb28
X-Osstest-Versions-That:
    xen=904148ecb4a59d4c8375d8e8d38117b8605e10ac
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Dec 2020 12:12:26 +0000

flight 157600 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157600/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 157560

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8bf0fab14256057bbd145563151814300476bb28
baseline version:
 xen                  904148ecb4a59d4c8375d8e8d38117b8605e10ac

Last test of basis   157560  2020-12-15 13:00:26 Z    0 days
Testing same since   157570  2020-12-15 17:00:30 Z    0 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <pdurrant@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 556 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 12:20:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 12:20:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55206.96129 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpVnK-0005mI-NO; Wed, 16 Dec 2020 12:20:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55206.96129; Wed, 16 Dec 2020 12:20:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpVnK-0005mB-KO; Wed, 16 Dec 2020 12:20:38 +0000
Received: by outflank-mailman (input) for mailman id 55206;
 Wed, 16 Dec 2020 12:20:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zyv+=FU=gmail.com=xieliwei@srs-us1.protection.inumbo.net>)
 id 1kpVnJ-0005m1-26
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 12:20:37 +0000
Received: from mail-il1-x135.google.com (unknown [2607:f8b0:4864:20::135])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e5c0eb69-25f4-40f1-a141-0d57d9a94e62;
 Wed, 16 Dec 2020 12:20:36 +0000 (UTC)
Received: by mail-il1-x135.google.com with SMTP id k8so22355557ilr.4
 for <xen-devel@lists.xenproject.org>; Wed, 16 Dec 2020 04:20:36 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e5c0eb69-25f4-40f1-a141-0d57d9a94e62
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=rcOJ/B9WKeoDZoU7//up+spMMrwIYlhyVLHQkdzuZa8=;
        b=LPNDy3q49dRe1Udz8UEjnB2hlMziCx/LNfq6i/SFTtWlolgRDTuJPqBMFyFT1pMNmN
         dZr4dVglM2LUshBqa/ei+tR2oAaTh1AJt+kwWEldYyzH8DLTnVdoUWp4hORAJv81t049
         VyhFY/BcP9nIcYx8IOcgxguHlQnTD8OoaEgP+JRkl+rsyFl6MEa/zAzVSl7lAnoyRXFS
         CCublE2FYRsOwOUoFSYvmUDAeFElKl856hlJJWDfD20ccSzyk7fRf5/qzdOZBiAbSrBQ
         WBoXS8wNjyBWagdh0cYtQ8Ro/1fzRXM6rYMHDSXzlGcEGGBx03unAHVOoAMiiHhHVtku
         /4fQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=rcOJ/B9WKeoDZoU7//up+spMMrwIYlhyVLHQkdzuZa8=;
        b=fNBIjuBv3q/+c2Wrem79wXWheY/A/4Qw8rUxFkheqNWmJdW5iaxwgI+40rOA0ji5nv
         92/N7+8h3rrfJU1sBuKrKmg1s51uliGGDHUzUbwWwCc9i9p4rFr9apALLSXfPUAiR7Uc
         btbGvGtzFEaYH2KVdwQ0VNyW+uPA4ml1KaPJM6qbs3tnSaHiGZ/0uUXX9onAihp5ngvE
         cclwCABbmziYP7ZL/lTmMyRDE3chjEsn4hvFzpc+si5FrvAjKyTUqtG5bRDTH9iJo678
         RGRpnhYUHEAet/lmjahdwky43D7QhmSzsRos6pU1CD8I8my9zu3/qOFv1d5hPmhsUWWE
         Utng==
X-Gm-Message-State: AOAM530IStvhwZoDQXyYHQ1ZMvu7yko27yU8WRq3Tz4X6VGnpLD+XUyO
	74O3+HR4RC8UztukNCyx0rL1adVXrecm76r6YD4w0jMm8h8=
X-Google-Smtp-Source: ABdhPJxjP0nkV0RltRY1U1qPuSYqwYt6LxKJgzgPcoJuniM0oql9Ek1ngYWsr8WiaHQnzS6QvVXP5txFxlsA2hoSwvQ=
X-Received: by 2002:a92:c54b:: with SMTP id a11mr42205377ilj.186.1608121235634;
 Wed, 16 Dec 2020 04:20:35 -0800 (PST)
MIME-Version: 1.0
References: <mailman.2112.1604414193.711.xen-devel@lists.xenproject.org>
 <CAPE0SYz0be1ZOoNqDHpeJWeZS-1BM_zy50=Cmeo+4Aq1Na0eNQ@mail.gmail.com> <4a0d9b00-5f1f-e9e1-fccf-1f26762134e8@suse.com>
In-Reply-To: <4a0d9b00-5f1f-e9e1-fccf-1f26762134e8@suse.com>
From: Liwei <xieliwei@gmail.com>
Date: Wed, 16 Dec 2020 20:19:05 +0800
Message-ID: <CAPE0SYzHL_SZpocBB_Puz7+Zm6fqSD6vYg3zZ1Tf_exYU0Cj0w@mail.gmail.com>
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>, 
	Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

Hi Jan,
    Response inline...

Liwei

On Wed, 16 Dec 2020 at 16:12, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 15.12.2020 20:08, Liwei wrote:
> > Hi list,
> >     This is a reply to the thread of the same title (linked here:
> > https://www.mail-archive.com/xen-devel@lists.xenproject.org/msg84916.html
> > ) which I could not reply to because I receive this list by digest.
> >
> >     I'm unclear if this is exactly the reason, but I experienced the
> > same symptoms when upgrading to 4.14. The issue does not occur if I
> > downgrade to 4.11 (the previous version that was provided by Debian).
> > Kernel is 5.9.11 and unchanged between xen versions.
> >
> >     One thing I noticed is that if I disable the monitor/mwait
> > instructions on my CPU (Intel Xeon E5-2699 v4 ES), the stalls seem to
> > occur later into the boot. With the instructions enabled, the system
> > usually stalls less than a few minutes after boot; disabled, it can
> > last for tens of minutes.
> >
> >     Further disabling the HPET or forcing the kernel to use PIT causes
> > it to be somewhat usable. The stalls still occur tens of minutes in
> > but somehow everything seems to continue chugging along fine?
>
> By "the kernel" do you really mean the kernel, or Xen?

Sorry, I meant Xen. I'm too used to thinking that Xen isn't there when
I'm talking about dom0.

>
> >     I've also verified that the stalls do not occur in all the above
> > cases if I just boot into the kernel without xen.
> >
> >     When the stalls happen, I get the "rcu: INFO: rcu_sched detected
> > stalls on CPUs/tasks" backtraces printed on the console periodically,
> > but keystrokes don't do anything on the console, and I can't spawn new
> > SSH sessions even though pinging the system produces a reply. The last
> > item in the call trace is usually "xen_safe_halt", but I've seen it
> > occur for other functions related to btrfs and the network adapter as
> > well.
>
> The kernel log may not be the only relevant thing here - the hypervisor
> log may also need looking at (with full verbosity enabled and
> preferably a debug build in use).

I will build a debug version and get back to you on that. Do I just
set loglvl and guest_loglvl to full, console to ring, and capture the
entire serial spew? I recall that you wanted to see the I, q and r
debug-key outputs as well.
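
For reference, this is roughly the hypervisor command line I have in mind (a sketch, assuming the standard Xen option names for verbosity and the console ring; please correct me if any of these have changed):

```shell
# GRUB entry: boot Xen with full log verbosity, log to the console ring,
# and flush console output synchronously (option names assumed from the
# Xen command-line documentation)
multiboot2 /boot/xen.gz loglvl=all guest_loglvl=all console_to_ring sync_console
```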

>
> >     Do let me know if there's anything I can provide to help
> > troubleshoot this. At the moment I've reverted to 4.11, but I can
> > temporarily switch over to 4.14 to collect any necessary information.
>
> In that earlier thread a number of things to try were suggested, iirc
> (switching scheduler or disabling use of deep C states come to mind).
> Did you experiment with those? If so, can you let us know of the
> results, so we can see whether there's a pattern?

1. Switching to the credit scheduler didn't seem to make any difference
in my case
2. I have tried cpuidle=off and max_cstate=1, and that actually gives
the same result as having mwait/monitor & HPET turned off (even when I
leave mwait & HPET on in the BIOS)
3. I could not try dom0=PVH, as my system reboots after (or while?)
the kernel is loading when I do that

I do realise, after working with the cpuidle=off and max_cstate=1
combination for a day, that the system is actually limping. Most of the
visible issues seem to stem from storage hanging or responding very
slowly, but that might be due to the btrfs tasks hanging in the
background.

>
> Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 12:35:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 12:35:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55212.96140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpW1q-0006q2-0J; Wed, 16 Dec 2020 12:35:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55212.96140; Wed, 16 Dec 2020 12:35:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpW1p-0006pv-Tb; Wed, 16 Dec 2020 12:35:37 +0000
Received: by outflank-mailman (input) for mailman id 55212;
 Wed, 16 Dec 2020 12:35:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hnmN=FU=redhat.com=kwolf@srs-us1.protection.inumbo.net>)
 id 1kpW1o-0006pl-1X
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 12:35:36 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id ad8e701d-6b5a-499b-ab13-6d2981795dec;
 Wed, 16 Dec 2020 12:35:34 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-159-hCDAhdqZMsqRsPvKKMnrMQ-1; Wed, 16 Dec 2020 07:35:31 -0500
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com
 [10.5.11.14])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 3CA8C100C600;
 Wed, 16 Dec 2020 12:35:29 +0000 (UTC)
Received: from merkur.fritz.box (ovpn-115-50.ams2.redhat.com [10.36.115.50])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 3D8DD5D9C0;
 Wed, 16 Dec 2020 12:35:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ad8e701d-6b5a-499b-ab13-6d2981795dec
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1608122134;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=UtNZYdo9xoytjzGiKwk4LzakyNj0adxhgOtO55qRA30=;
	b=hP7Sgvfg/3paJ/4nXIJJiMkYve/McVngcPYsKcHqLIySp/vRTnJXQ8yQfDUhxN5U08McsQ
	ZldUz/MrY4SqM0QdYGdgw+RnyIiQ1d0J6uFnz0JI9EsYgJmDpFn0DlARBaySUmGof3pLVy
	ud4iFU5/amsD7+1SYOiuLKiawhItahg=
X-MC-Unique: hCDAhdqZMsqRsPvKKMnrMQ-1
Date: Wed, 16 Dec 2020 13:35:14 +0100
From: Kevin Wolf <kwolf@redhat.com>
To: Sergio Lopez <slp@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
	qemu-block@nongnu.org, Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>, Fam Zheng <fam@euphon.net>,
	Eric Blake <eblake@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
	Max Reitz <mreitz@redhat.com>
Subject: Re: [PATCH v2 2/4] block: Avoid processing BDS twice in
 bdrv_set_aio_context_ignore()
Message-ID: <20201216123514.GD7548@merkur.fritz.box>
References: <20201214170519.223781-1-slp@redhat.com>
 <20201214170519.223781-3-slp@redhat.com>
 <20201215121233.GD8185@merkur.fritz.box>
 <20201215131527.evpidxevevtfy54n@mhamilton>
 <20201215150119.GE8185@merkur.fritz.box>
 <20201215172337.w7vcn2woze2ejgco@mhamilton>
MIME-Version: 1.0
In-Reply-To: <20201215172337.w7vcn2woze2ejgco@mhamilton>
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=kwolf@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="UugvWAfsgieZRqgk"
Content-Disposition: inline

--UugvWAfsgieZRqgk
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On 15.12.2020 18:23, Sergio Lopez wrote:
> On Tue, Dec 15, 2020 at 04:01:19PM +0100, Kevin Wolf wrote:
> > On 15.12.2020 14:15, Sergio Lopez wrote:
> > > On Tue, Dec 15, 2020 at 01:12:33PM +0100, Kevin Wolf wrote:
> > > > On 14.12.2020 18:05, Sergio Lopez wrote:
> > > > > While processing the parents of a BDS, one of the parents may process
> > > > > the child that's doing the tail recursion, which leads to a BDS being
> > > > > processed twice. This is especially problematic for the aio_notifiers,
> > > > > as they might attempt to work on both the old and the new AIO
> > > > > contexts.
> > > > >
> > > > > To avoid this, add the BDS pointer to the ignore list, and check the
> > > > > child BDS pointer while iterating over the children.
> > > > >
> > > > > Signed-off-by: Sergio Lopez <slp@redhat.com>
> > > >
> > > > Ugh, so we get a mixed list of BdrvChild and BlockDriverState? :-/
> > >
> > > I know, it's effective but quite ugly...
> > >
> > > > What is the specific scenario where you saw this breaking? Did you have
> > > > multiple BdrvChild connections between two nodes so that we would go to
> > > > the parent node through one and then come back to the child node through
> > > > the other?
> > >
> > > I don't think this is a corner case. If the graph is walked top->down,
> > > there's no problem since children are added to the ignore list before
> > > getting processed, and siblings don't process each other. But, if the
> > > graph is walked bottom->up, a BDS will start processing its parents
> > > without adding itself to the ignore list, so there's nothing
> > > preventing them from processing it again.
> >
> > I don't understand. child is added to ignore before calling the parent
> > callback on it, so how can we come back through the same BdrvChild?
> >
> >     QLIST_FOREACH(child, &bs->parents, next_parent) {
> >         if (g_slist_find(*ignore, child)) {
> >             continue;
> >         }
> >         assert(child->klass->set_aio_ctx);
> >         *ignore = g_slist_prepend(*ignore, child);
> >         child->klass->set_aio_ctx(child, new_context, ignore);
> >     }
>
> Perhaps I'm missing something, but the way I understand it, that loop
> is adding the BdrvChild pointer of each of its parents, but not the
> BdrvChild pointer of the BDS that was passed as an argument to
> b_s_a_c_i.

Generally, the caller has already done that.

In the theoretical case that it was the outermost call in the recursion
and it hasn't (I couldn't find any such case), I think we should still
call the callback for the passed BdrvChild like we currently do.

> > You didn't dump the BdrvChild here. I think that would add some
> > information on why we re-entered 0x555ee2fbf660. Maybe you can also add
> > bs->drv->format_name for each node to make the scenario less abstract?
>
> I've generated another trace with more data:
>
> bs=0x565505e48030 (backup-top) enter
> bs=0x565505e48030 (backup-top) processing children
> bs=0x565505e48030 (backup-top) calling bsaci child=0x565505e42090 (child->bs=0x565505e5d420)
> bs=0x565505e5d420 (qcow2) enter
> bs=0x565505e5d420 (qcow2) processing children
> bs=0x565505e5d420 (qcow2) calling bsaci child=0x565505e41ea0 (child->bs=0x565505e52060)
> bs=0x565505e52060 (file) enter
> bs=0x565505e52060 (file) processing children
> bs=0x565505e52060 (file) processing parents
> bs=0x565505e52060 (file) processing itself
> bs=0x565505e5d420 (qcow2) processing parents
> bs=0x565505e5d420 (qcow2) calling set_aio_ctx child=0x5655066a34d0
> bs=0x565505fbf660 (qcow2) enter
> bs=0x565505fbf660 (qcow2) processing children
> bs=0x565505fbf660 (qcow2) calling bsaci child=0x565505e41d20 (child->bs=0x565506bc0c00)
> bs=0x565506bc0c00 (file) enter
> bs=0x565506bc0c00 (file) processing children
> bs=0x565506bc0c00 (file) processing parents
> bs=0x565506bc0c00 (file) processing itself
> bs=0x565505fbf660 (qcow2) processing parents
> bs=0x565505fbf660 (qcow2) calling set_aio_ctx child=0x565505fc7aa0
> bs=0x565505fbf660 (qcow2) calling set_aio_ctx child=0x5655068b8510
> bs=0x565505e48030 (backup-top) enter
> bs=0x565505e48030 (backup-top) processing children
> bs=0x565505e48030 (backup-top) calling bsaci child=0x565505e3c450 (child->bs=0x565505fbf660)
> bs=0x565505fbf660 (qcow2) enter
> bs=0x565505fbf660 (qcow2) processing children
> bs=0x565505fbf660 (qcow2) processing parents
> bs=0x565505fbf660 (qcow2) processing itself
> bs=0x565505e48030 (backup-top) processing parents
> bs=0x565505e48030 (backup-top) calling set_aio_ctx child=0x565505e402d0
> bs=0x565505e48030 (backup-top) processing itself
> bs=0x565505fbf660 (qcow2) processing itself

Hm, is this complete? I see no "processing itself" for
bs=0x565505e5d420. Or is this because it crashed before getting there?

Anyway, trying to reconstruct the block graph with BdrvChild pointers
annotated at the edges:

BlockBackend
      |
      v
  backup-top ------------------------+
      |   |                          |
      |   +-----------------------+  |
      |            0x5655068b8510 |  | 0x565505e3c450
      |                           |  |
      | 0x565505e42090            |  |
      v                           |  |
    qcow2 ---------------------+  |  |
      |                        |  |  |
      | 0x565505e52060         |  |  | ??? [1]
      |                        |  |  |  |
      v         0x5655066a34d0 |  |  |  | 0x565505fc7aa0
    file                       v  v  v  v
                             qcow2 (backing)
                                    |
                                    | 0x565505e41d20
                                    v
                                  file

[1] This seems to be a BdrvChild with a non-BDS parent. Probably a
    BdrvChild directly owned by the backup job.

> So it seems this is happening:
>
> backup-top (5e48030) <---------| (5)
>    |    |                      |
>    |    | (6) ------------> qcow2 (5fbf660)
>    |                           ^    |
>    |                       (3) |    | (4)
>    |-> (1) qcow2 (5e5d420) -----    |-> file (6bc0c00)
>    |
>    |-> (2) file (5e52060)
>
> backup-top (5e48030), the BDS that was passed as argument in the first
> bdrv_set_aio_context_ignore() call, is re-entered when qcow2 (5fbf660)
> is processing its parents, and the latter is also re-entered when the
> first one starts processing its children again.

Yes, but look at the BdrvChild pointers, it is through different edges
that we come back to the same node. No BdrvChild is used twice.

If backup-top had added all of its children to the ignore list before
calling into the overlay qcow2, the backing qcow2 wouldn't eventually
have called back into backup-top.

Kevin

--UugvWAfsgieZRqgk
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE3D3rFZqa+V09dFb+fwmycsiPL9YFAl/Z/wEACgkQfwmycsiP
L9YaAhAAwCX3qwQuucObmD7XW3Qh3RFNxDtaTW8d1vXXPnMXZdogTbG2SiC5TQ4q
JciH/JgUMisvgGZLLKeNDPjOKh5lLLCoqOjd7Vp6G6R/yDQLaUqgDEtk/fOifAp0
9USYSM6Z20xIhkCGwWsgDcMiFtlkNq5UxJxJ8ObB5BzLeIBXX99hK2xntEhZ3RZk
prxmsGMhWz8XDP2smTSANz99rpszTRDqNM1r8aFhlZ6BySBntJQ0fpjcjq04Xt2X
B7B8WFkWgTP2tf6FS75f7p+uQoM7mllN03Unn9Tr7ZmnVeF3PvVo6p4G5kr1raIC
Qrdo6btc7AhJJGTLE8CYHp3Zc22VDbniNeTqDhk8xfo4u21WZ+qZLg+jQE/jRq6R
EiJVW7kS05P3+zGGV4F6JbqS1NItQFYtu41Tiop68R3YpdyFKviUkzGoXxj1DrxA
h5oQq4jv4xp8eq6dBpBveoPph377pxi76nJE/OLu3+rH/xoyqk8XHfWYiQLuvM8I
FtdYejAnuE/s2/wQpjLa/Jga8A3NSnIs6XRTOhcZ0kHkyeNMnxfYy0hd+uSy8CMw
34Ljkssf/zNnytkDJuKlbckEJTstubpkkYy+t2FUfViqbzAC50diU6MQuO8enDv5
TGcxnqhFGThQU3ZKMQEdVnWyulO2CycskOLBmTYN3jC3ZkuaK9A=
=sPmL
-----END PGP SIGNATURE-----

--UugvWAfsgieZRqgk--



From xen-devel-bounces@lists.xenproject.org Wed Dec 16 12:57:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 12:57:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55229.96201 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpWMT-0000Zk-Ba; Wed, 16 Dec 2020 12:56:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55229.96201; Wed, 16 Dec 2020 12:56:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpWMT-0000Zd-8c; Wed, 16 Dec 2020 12:56:57 +0000
Received: by outflank-mailman (input) for mailman id 55229;
 Wed, 16 Dec 2020 12:56:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpWMS-0000ZO-3M; Wed, 16 Dec 2020 12:56:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpWMR-00077M-VY; Wed, 16 Dec 2020 12:56:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpWMR-0001lQ-MH; Wed, 16 Dec 2020 12:56:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kpWMR-0003Ue-Lq; Wed, 16 Dec 2020 12:56:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gjz0y/NxrJEI5E3jVn0vvWI4wnzql1YY+YlsG9NxnxM=; b=zrXmIsmnBYWNq5SZy3+wng3E32
	6B5eAQy4fanGU80svC6z2habqF3v7ETui6qsF1FN9a6v3CKxyIiKpo/q0jYWrFb7e3t6803tlV0BN
	bx0s0DIyb8dZK5YzS3VwB1VDW5ss4frDId4qTQBtxwmM8ddh4Fr+0zMquDGUqDiMH2HA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157566-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 157566: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=310ab79875cb705cc2c7daddff412b5a4899f8c9
X-Osstest-Versions-That:
    xen=41a822c3926350f26917d747c8dfed1c44a2cf42
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Dec 2020 12:56:55 +0000

flight 157566 xen-4.11-testing real [real]
flight 157605 xen-4.11-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157566/
http://logs.test-lab.xenproject.org/osstest/logs/157605/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 157605-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 157137
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 157137
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157137
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157137
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157137
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157137
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157137
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157137
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157137
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157137
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157137
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157137
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157137
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  310ab79875cb705cc2c7daddff412b5a4899f8c9
baseline version:
 xen                  41a822c3926350f26917d747c8dfed1c44a2cf42

Last test of basis   157137  2020-12-01 16:36:48 Z   14 days
Testing same since   157566  2020-12-15 14:05:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Harsha Shamsundara Havanur <havanur@amazon.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   41a822c392..310ab79875  310ab79875cb705cc2c7daddff412b5a4899f8c9 -> stable-4.11


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 13:11:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 13:11:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55255.96282 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpWa5-0002nD-Ni; Wed, 16 Dec 2020 13:11:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55255.96282; Wed, 16 Dec 2020 13:11:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpWa5-0002n6-KV; Wed, 16 Dec 2020 13:11:01 +0000
Received: by outflank-mailman (input) for mailman id 55255;
 Wed, 16 Dec 2020 13:11:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uZEz=FU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kpWa4-0002n1-5r
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 13:11:00 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5a6cf237-d4f1-4f51-8aa0-f57940e1c79f;
 Wed, 16 Dec 2020 13:10:57 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DE49DAEC4;
 Wed, 16 Dec 2020 13:10:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a6cf237-d4f1-4f51-8aa0-f57940e1c79f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608124257; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=4MqR9A68//E0ZAJzwYMhdICzWSqTGTdLVJAy1Iu4Jfs=;
	b=NhTJS8s7eab5FeT/cj0wArcqhOfDkYUyNjSaWGyW6VdI0SQtR5uYt/oO2slhJegMGVxoDb
	miv8EjsmaH4aSL1ughM2YlwnfXuVbRjWZzoXPpeAfQL2BmWmbc6LuJIHsXtDZu5UIl7om+
	K5lT40IPTVVFYdTfnYKIjxghgOOsUu0=
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
To: Liwei <xieliwei@gmail.com>
Cc: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>,
 Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
References: <mailman.2112.1604414193.711.xen-devel@lists.xenproject.org>
 <CAPE0SYz0be1ZOoNqDHpeJWeZS-1BM_zy50=Cmeo+4Aq1Na0eNQ@mail.gmail.com>
 <4a0d9b00-5f1f-e9e1-fccf-1f26762134e8@suse.com>
 <CAPE0SYzHL_SZpocBB_Puz7+Zm6fqSD6vYg3zZ1Tf_exYU0Cj0w@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9fd52124-7f37-84a8-4ed9-d8d12c276731@suse.com>
Date: Wed, 16 Dec 2020 14:10:56 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <CAPE0SYzHL_SZpocBB_Puz7+Zm6fqSD6vYg3zZ1Tf_exYU0Cj0w@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 16.12.2020 13:19, Liwei wrote:
> On Wed, 16 Dec 2020 at 16:12, Jan Beulich <jbeulich@suse.com> wrote:
>> On 15.12.2020 20:08, Liwei wrote:
>>> Hi list,
>>>     This is a reply to the thread of the same title (linked here:
>>> https://www.mail-archive.com/xen-devel@lists.xenproject.org/msg84916.html
>>> ) which I could not reply to because I receive this list by digest.
>>>
>>>     I'm unclear if this is exactly the reason, but I experienced the
>>> same symptoms when upgrading to 4.14. The issue does not occur if I
>>> downgrade to 4.11 (the previous version that was provided by Debian).
>>> Kernel is 5.9.11 and unchanged between xen versions.
>>>
>>>     One thing I noticed is that if I disable the monitor/mwait
>>> instructions on my CPU (Intel Xeon E5-2699 v4 ES), the stalls seem to
>>> occur later into the boot. With the instructions enabled, the system
>>> usually stalls less than a few minutes after boot; disabled, it can
>>> last for tens of minutes.
>>>
>>>     Further disabling the HPET or forcing the kernel to use PIT causes
>>> it to be somewhat usable. The stalls still occur tens of minutes in
>>> but somehow everything seems to continue chugging along fine?
>>
>> By "the kernel" do you really mean the kernel, or Xen?
> 
> Sorry, I mean xen. Too used to thinking that xen isn't there when I'm
> talking about dom0.
> 
>>
>>>     I've also verified that the stalls do not occur in all the above
>>> cases if I just boot into the kernel without xen.
>>>
>>>     When the stalls happen, I get the "rcu: INFO: rcu_sched detected
>>> stalls on CPUs/tasks" backtraces printed on the console periodically,
>>> but keystrokes don't do anything on the console, and I can't spawn new
>>> SSH sessions even though pinging the system produces a reply. The last
>>> item in the call trace is usually "xen_safe_halt", but I've seen it
>>> occur for other functions related to btrfs and the network adapter as
>>> well.
>>
>> The kernel log may not be the only relevant thing here - the hypervisor
>> log may also need looking at (with full verbosity enabled and
>> preferably a debug build in use).
> 
> I will build a debug version and get back to you on that. Do I just
> have loglvl and guest_loglvl set to full, console to ring, and get the
> entire serial spew? I recall that you wanted to see the I, q and r
> outputs as well.

Yes. The debug keys are kind of optional as a first step, but it won't
hurt to include them unless your box is a really big one.

Jan
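
[Editorial note: the logging setup discussed above can be sketched as the
following commands. The com1 baud settings and file names are illustrative
assumptions, not part of the thread; adjust them to the actual hardware.]

```shell
# Xen command line (e.g. appended in the grub entry): full hypervisor and
# guest log verbosity, console mirrored to serial and kept in the ring buffer.
# loglvl=all guest_loglvl=all console=com1,vga com1=115200,8n1

# After reproducing the stall, collect the hypervisor log ring from dom0:
xl dmesg > xen-log.txt

# Inject the debug keys mentioned in the thread, then re-capture the ring
# so their output is included:
xl debug-keys i   # interrupt bindings
xl debug-keys q   # domain and vCPU state
xl debug-keys r   # scheduler run queues
xl dmesg >> xen-log.txt
```
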


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 14:09:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 14:09:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55265.96293 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpXU3-0007Xe-2P; Wed, 16 Dec 2020 14:08:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55265.96293; Wed, 16 Dec 2020 14:08:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpXU2-0007XX-Vi; Wed, 16 Dec 2020 14:08:50 +0000
Received: by outflank-mailman (input) for mailman id 55265;
 Wed, 16 Dec 2020 14:08:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vvT0=FU=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kpXU1-0007XS-Ei
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 14:08:49 +0000
Received: from mail-qv1-xf34.google.com (unknown [2607:f8b0:4864:20::f34])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 345bef11-c01a-431d-b5aa-d78300b88282;
 Wed, 16 Dec 2020 14:08:48 +0000 (UTC)
Received: by mail-qv1-xf34.google.com with SMTP id s6so11341455qvn.6
 for <xen-devel@lists.xenproject.org>; Wed, 16 Dec 2020 06:08:48 -0800 (PST)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:b150:92df:fbb7:5df0])
 by smtp.gmail.com with ESMTPSA id
 h25sm1113717qkh.122.2020.12.16.06.08.46
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 16 Dec 2020 06:08:47 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 345bef11-c01a-431d-b5aa-d78300b88282
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=hw1VUdj7oe+RFhOiOdtGs39fzND3Jk6BzojvG/18LqQ=;
        b=IYJrC+opx3RLldMMxKa+dnCdhBtLbYnq411h7iiYF9ef0/eKjQCrJCFdC/xXxARHVk
         hgZwqhemvIbeEUYojpXPZXFaec6MC4EjE6iifV5CmhRH4Uso7deUtBEcv3C48KwOe2IX
         AotAwfUYy2E0EMxePt6fwTcyKngYQULGVSl4IX7xZ/O/Rwe1FU/L6us/62Sxw6UXoeYJ
         BU8zIp3SVIBnlWZL77U4Oz1DD+DTx9/+ZFjLeDwFau2PJCBAYKnUvJLmK4v1KMzXtZgE
         4sZoG8ojDbjl79lpO6NII5VuzUB9gIvdeUTI1RUvZsA1QGDsIvKZX8O76zWgRA+o8MzL
         7D/g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=hw1VUdj7oe+RFhOiOdtGs39fzND3Jk6BzojvG/18LqQ=;
        b=rEHmUbWoOAl3HFdpmMZHLAwqmXGz9Y/fUefV2lGOlfaqhPBTqcgPswhlXwZ5u48hZO
         XcFRZBM1DoGUYgJtTh9SMU7tKPD06O94D7KJpXrwDFl/gZg8FMeAyTLwj/vlBASzFaif
         PjzOPdFhk8yh7yw4R8AE7qMtkk4XT8xk4tfInmi1r/CR1lfjcF17LHp7JevQADeGXyZy
         bnA2PyU3GgWp81BU/9Kir/bVN1jvC4N/2xhgkFPbKCLypbrqDTp+Ie8VvtdYILb8hkgs
         42renoIAcoI38k5VH0C3Y83oNlu323mTaPBgBySwqlDAg0G1Z5M6vaaO5Ob29DiKB4mx
         LMbw==
X-Gm-Message-State: AOAM533CKMZcvTZFBJ9zWZ9jb4t2CwMLty0xgN/kquoOW81Wi1EGzhYu
	vg8k8vuIS88PFIMwMzNZGqw=
X-Google-Smtp-Source: ABdhPJx2O7S7enSJUDMETKVJkTVEahKzf1Jq4arGE9gh5ELFFQGqLEFuiPeVji+PzJ9hBGu27Ah8kw==
X-Received: by 2002:a05:6214:1764:: with SMTP id et4mr39801202qvb.2.1608127728273;
        Wed, 16 Dec 2020 06:08:48 -0800 (PST)
From: Jason Andryuk <jandryuk@gmail.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jason Andryuk <jandryuk@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	x86@kernel.org,
	"H. Peter Anvin" <hpa@zytor.com>,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH] xen: Kconfig: remove X86_64 depends from XEN_512GB
Date: Wed, 16 Dec 2020 09:08:38 -0500
Message-Id: <20201216140838.16085-1-jandryuk@gmail.com>
X-Mailer: git-send-email 2.29.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

commit bfda93aee0ec ("xen: Kconfig: nest Xen guest options")
accidentally re-added X86_64 as a dependency to XEN_512GB.  It was
originally removed in commit a13f2ef168cb ("x86/xen: remove 32-bit Xen
PV guest support").  Remove it again.

Fixes: bfda93aee0ec ("xen: Kconfig: nest Xen guest options")
Reported-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
This fixes Boris's review comment.  I lost track of posting a v2 with it
fixed.
---
 arch/x86/xen/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
index 2b105888927c..afc1da68b06d 100644
--- a/arch/x86/xen/Kconfig
+++ b/arch/x86/xen/Kconfig
@@ -28,7 +28,7 @@ config XEN_PV
 
 config XEN_512GB
 	bool "Limit Xen pv-domain memory to 512GB"
-	depends on XEN_PV && X86_64
+	depends on XEN_PV
 	default y
 	help
 	  Limit paravirtualized user domains to 512GB of RAM.
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Wed Dec 16 14:14:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 14:14:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55271.96305 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpXZN-0008UH-Mt; Wed, 16 Dec 2020 14:14:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55271.96305; Wed, 16 Dec 2020 14:14:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpXZN-0008UA-K2; Wed, 16 Dec 2020 14:14:21 +0000
Received: by outflank-mailman (input) for mailman id 55271;
 Wed, 16 Dec 2020 14:14:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vvT0=FU=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kpXZL-0008U5-Km
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 14:14:19 +0000
Received: from mail-lf1-x12c.google.com (unknown [2a00:1450:4864:20::12c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id af4c21cb-67c7-4440-9643-779c25332bea;
 Wed, 16 Dec 2020 14:14:18 +0000 (UTC)
Received: by mail-lf1-x12c.google.com with SMTP id o19so23060789lfo.1
 for <xen-devel@lists.xenproject.org>; Wed, 16 Dec 2020 06:14:18 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: af4c21cb-67c7-4440-9643-779c25332bea
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=DmwvF8IgzoA7KtVGEtd76/gZKuxggmaZRur5HyvCvwE=;
        b=Ko3OAcqjdApXNL1BXkFY7dSl+PGJ603F2A74GFwxfm7vOt1Et4fhUVNdbN0B1YxodZ
         c/nIWNmYImbg2MPHraAv1jO/3MMcK8Xc1q9pOOIYBtndnawOwafSiIV6G3DfFSgSArLW
         K9gjkYHCjf76IboFmufr4q112g3ReFzUFLxs1aEMxlKOdc7FGz7GMKGPcsjlNgj2Ytd+
         JF4hMVOWPrCBINulnmrJSFX5VIBpanDgUdSjPRQb6X4xwHQWzzakf9Pc5rx7llSwimJ8
         eXmTY77EBQeAAPm4c0HjCuXWxrZRj2FLksYCzbqHx98/QcCAX0l2o+gA1wXCUS7hjrIp
         vaYw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=DmwvF8IgzoA7KtVGEtd76/gZKuxggmaZRur5HyvCvwE=;
        b=PO5RJH0qQTl1DPbTuEkrP9ErLp8PAN3h0/cp2o1uNc6V1GgZaIruHL7BpVcShRrnYV
         BM7/ZuipOGFieGZIVpUVX+MXxhFuXNShgIqgjwVv/zZ+JZ1/a6JQbwgQyMO6sL/50yhz
         Vftz4/aiyK7Gk1wrGZ36ooO/BXltj41Ui5KdfDtws8HoS54wKhQCCyXVPllOltb+QVqL
         Gtn6S+kkJxT53YIoEzZM3sHt0rjSJRR22GnQQXWX4LrWz6Wv0TC9kYjkI3wa0gG8SfeI
         kJ5+IeSJmZKSpFIL5t4PmR5f3IMa8F0X9nS+QON1i7+dB1QnEVrb9ZOiI9VJf4khvTVk
         GtDQ==
X-Gm-Message-State: AOAM5308zPRCqbc1qQzHvI6LcMVJ5/bKVo0ycX1NB4cPcHZS64CV/Qa6
	LInsMVh2TT5aVmeUswBTLXPv8X5CAbuqroroImY=
X-Google-Smtp-Source: ABdhPJzaEUDZszSAZI7uVluoFu9iNADRW+yux5PR6VdcF7NZZG/a3c8YTYmUEY9zdq8ObbVQYOeADPZ7hcomGYvuyjU=
X-Received: by 2002:a2e:8745:: with SMTP id q5mr13934227ljj.77.1608128057330;
 Wed, 16 Dec 2020 06:14:17 -0800 (PST)
MIME-Version: 1.0
References: <CAKf6xps-nM13E19SVS3NJwq6LwOJLUwN+FC6k_Sp9-_YaRt-EA@mail.gmail.com>
 <3ACCFEC6-A8B7-48E6-AA3F-48D4CDE75FA4@gmail.com> <alpine.DEB.2.21.2012141632020.4040@sstabellini-ThinkPad-T480s>
 <CAC4Yorgk89vaDsbygvebiBOan-3OWE=D9xKiri_JwQAVWZ19GQ@mail.gmail.com>
In-Reply-To: <CAC4Yorgk89vaDsbygvebiBOan-3OWE=D9xKiri_JwQAVWZ19GQ@mail.gmail.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 16 Dec 2020 09:14:05 -0500
Message-ID: <CAKf6xpvpyA6E6gC6cmZ-Ewayyue-C5WcnGtatsxf_Cefg1CxaA@mail.gmail.com>
Subject: Re: [openxt-dev] Re: Follow up on libxl-fix-reboot.patch
To: Chris Rogers <crogers122@gmail.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Rich Persaud <persaur@gmail.com>, 
	openxt <openxt@googlegroups.com>, xen-devel <xen-devel@lists.xenproject.org>, 
	Anthony PERARD <anthony.perard@citrix.com>, 
	=?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?= <marmarek@invisiblethingslab.com>, 
	Olivier Lambert <olivier.lambert@vates.fr>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, Dec 15, 2020 at 5:22 PM Chris Rogers <crogers122@gmail.com> wrote:
>
> Hopefully I can provide a little more context.  Here is a link to the patch:
>
> https://github.com/OpenXT/xenclient-oe/blob/master/recipes-extended/xen/files/libxl-fix-reboot.patch
>
> The patch is a bit mis-named.  It does not implement XEN_DOMCTL_SENDTRIGGER_RESET.  It's just a workaround to handle the missing RESET implementation.
>
> Its purpose is to make an HVM guest "reboot" regardless of whether PV tools have been installed and the xenstore interface is listening or not.  From the client perspective that OpenXT is concerned with, this is for ease-of-use for working with HVM guests before PV tools are installed.  To summarize the flow of the patch:
>
> - User input causes high level toolstack, xenmgr, to do xl reboot <domid>
> - libxl hits "PV interface not available", so it tries the fallback ACPI reset trigger (but that's not implemented in domctl)
> - therefore, the patch changes the RESET trigger to POWER trigger, and sets a 'reboot' flag
> - when the xl create process handles the domain_death event for LIBXL_SHUTDOWN_REASON_POWEROFF, we check for our 'reboot' flag.
> - It's set, so we set "action" as if we came from a real restart, which makes the xl create process take the 'goto start' codepath to rebuild the domain.
>
> I think we'd like to get rid of this patch, but at the moment I don't have any code or a design to propose that would implement the XEN_DOMCTL_SENDTRIGGER_RESET.

I'm not sure it's possible to implement one.  Does an ACPI
reset/reboot button exist?  And then you'd have the problem that the
guest needs to be configured to react to the button.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 14:51:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 14:51:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55283.96330 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpY9P-0003lA-Lg; Wed, 16 Dec 2020 14:51:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55283.96330; Wed, 16 Dec 2020 14:51:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpY9P-0003l3-Ic; Wed, 16 Dec 2020 14:51:35 +0000
Received: by outflank-mailman (input) for mailman id 55283;
 Wed, 16 Dec 2020 14:51:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Cg29=FU=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kpY9N-0003ky-Ul
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 14:51:33 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 726ce51f-154c-4ffe-b429-16d3be5518d3;
 Wed, 16 Dec 2020 14:51:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 726ce51f-154c-4ffe-b429-16d3be5518d3
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608130292;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=dQA0xUECaBfK1yRNFSy0wTT3mTIf7fofy8eoq1aXgp0=;
  b=IdYgSz7VirUNq3yOD7Rv/dVTuIcH+UpgBpilT5DfHkBUMD+X+CYoYK07
   GGPsSBFsLxc9p/MOIPvZ+bt7zGMCd5J60FNsk3jjtvza8nFbbmxP//1R2
   uE2uiBpjdJa7brCBLNSKGVXcH40kwmHQNwXujUM+fDadr1Sfoz1dqKOrK
   U=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: HRCDMPqf9GWVs7mwFgcdBvtFV8PGjqRwuH6pgkeUus2kqbzzXlXAHd8QrcmYbTi1y28pDPlcKS
 pDo6BdE4OMT5RZ4T+GtfqJjw72jx/OnqD4vPZSDFzV7S1Jr1jhTLUJe6Elnr59VQoIftjIP2N+
 myNyJgMVOTyIKzAt2ry/nbqJTGd1pgpn2Ao8NnPvny6UgMBvSsZVbQH8qHmUyXXqY7qjjS6Xc8
 PFbRG2HOg8aqdOYMya8BpMzjCLmyb+bbGBHDTm8LK6M8mA22iz0jqzmhsxQ4bFcpxyrwSEWYZX
 L5Y=
X-SBRS: 5.2
X-MesageID: 33375625
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,424,1599537600"; 
   d="scan'208";a="33375625"
Subject: Re: [openxt-dev] Re: Follow up on libxl-fix-reboot.patch
To: Jason Andryuk <jandryuk@gmail.com>, Chris Rogers <crogers122@gmail.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Rich Persaud
	<persaur@gmail.com>, openxt <openxt@googlegroups.com>, xen-devel
	<xen-devel@lists.xenproject.org>, Anthony PERARD <anthony.perard@citrix.com>,
	=?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
	<marmarek@invisiblethingslab.com>, Olivier Lambert
	<olivier.lambert@vates.fr>, Wei Liu <wl@xen.org>, Jan Beulich
	<jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
References: <CAKf6xps-nM13E19SVS3NJwq6LwOJLUwN+FC6k_Sp9-_YaRt-EA@mail.gmail.com>
 <3ACCFEC6-A8B7-48E6-AA3F-48D4CDE75FA4@gmail.com>
 <alpine.DEB.2.21.2012141632020.4040@sstabellini-ThinkPad-T480s>
 <CAC4Yorgk89vaDsbygvebiBOan-3OWE=D9xKiri_JwQAVWZ19GQ@mail.gmail.com>
 <CAKf6xpvpyA6E6gC6cmZ-Ewayyue-C5WcnGtatsxf_Cefg1CxaA@mail.gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <c7b345aa-604a-b2f3-0800-1ed445ebc213@citrix.com>
Date: Wed, 16 Dec 2020 14:51:24 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <CAKf6xpvpyA6E6gC6cmZ-Ewayyue-C5WcnGtatsxf_Cefg1CxaA@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 16/12/2020 14:14, Jason Andryuk wrote:
> On Tue, Dec 15, 2020 at 5:22 PM Chris Rogers <crogers122@gmail.com> wrote:
>> Hopefully I can provide a little more context.  Here is a link to the patch:
>>
>> https://github.com/OpenXT/xenclient-oe/blob/master/recipes-extended/xen/files/libxl-fix-reboot.patch
>>
>> The patch is a bit mis-named.  It does not implement XEN_DOMCTL_SENDTRIGGER_RESET.  It's just a workaround to handle the missing RESET implementation.
>>
>> Its purpose is to make an HVM guest "reboot" regardless of whether PV tools have been installed and the xenstore interface is listening or not.  From the client perspective that OpenXT is concerned with, this is for ease-of-use for working with HVM guests before PV tools are installed.  To summarize the flow of the patch:
>>
>> - User input causes high level toolstack, xenmgr, to do xl reboot <domid>
>> - libxl hits "PV interface not available", so it tries the fallback ACPI reset trigger (but that's not implemented in domctl)
>> - therefore, the patch changes the RESET trigger to POWER trigger, and sets a 'reboot' flag
>> - when the xl create process handles the domain_death event for LIBXL_SHUTDOWN_REASON_POWEROFF, we check for our 'reboot' flag.
>> - It's set, so we set "action" as if we came from a real restart, which makes the xl create process take the 'goto start' codepath to rebuild the domain.
>>
>> I think we'd like to get rid of this patch, but at the moment I don't have any code or a design to propose that would implement the XEN_DOMCTL_SENDTRIGGER_RESET.
> I'm not sure it's possible to implement one.  Does an ACPI
> reset/reboot button exist?  And then you'd have the problem that the
> guest needs to be configured to react to the button.

The ACPI spec has two signals as far as this goes: "the user pressed the
power button" and "the user {pressed the suspend button, closed the
laptop lid}".  Neither is typically useful for VMs, because default OS
settings do the wrong thing.

The mystery to unravel here is why xl is issuing an erroneous hypercall.

It is very unlikely that we will have dropped
XEN_DOMCTL_SENDTRIGGER_RESET from Xen, but I suppose it's possible.  It's
definitely weird that we have it in the interface and unimplemented.

It's also possible it was a copy&paste mistake when trying to implement
an interface consistent with `xm trigger`.

It is definitely concerning that we've got a piece of functionality like
this which clearly hasn't seen any testing upstream.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 14:55:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 14:55:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55289.96342 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpYD0-0003vO-7A; Wed, 16 Dec 2020 14:55:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55289.96342; Wed, 16 Dec 2020 14:55:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpYD0-0003vH-49; Wed, 16 Dec 2020 14:55:18 +0000
Received: by outflank-mailman (input) for mailman id 55289;
 Wed, 16 Dec 2020 14:55:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VqGa=FU=redhat.com=slp@srs-us1.protection.inumbo.net>)
 id 1kpYCy-0003vA-M1
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 14:55:16 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 4007072f-f154-407f-9208-4066ec0f5ce1;
 Wed, 16 Dec 2020 14:55:14 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-443-foJ0_GvjMiaI7T4WXfgDzA-1; Wed, 16 Dec 2020 09:55:11 -0500
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com
 [10.5.11.12])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id E2939107ACF6;
 Wed, 16 Dec 2020 14:55:09 +0000 (UTC)
Received: from localhost (ovpn-115-223.rdu2.redhat.com [10.10.115.223])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 09C7460C15;
 Wed, 16 Dec 2020 14:55:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4007072f-f154-407f-9208-4066ec0f5ce1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1608130514;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=IWLMB7IDWR7sP3FDJj69QwoPb+z2DnqCQ1Kbpkqm3NA=;
	b=Tbmpg0Wtr9NrgxBlnj9j+Jso5vRqUxjZhog/RmM9GacJx+ccQaTyFqxtIMAPAWul/ie8zB
	sujVeirU9qBwmZ8Wj182DhTCiCQMzVrONoB07uerOjX/Sis+/L+wwCAVC5VmUwvh11dTQK
	Zt5LSH9omHRQ73H80MzhwpXIP9PignE=
X-MC-Unique: foJ0_GvjMiaI7T4WXfgDzA-1
Date: Wed, 16 Dec 2020 15:55:02 +0100
From: Sergio Lopez <slp@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
	qemu-block@nongnu.org, Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>, Fam Zheng <fam@euphon.net>,
	Eric Blake <eblake@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
	Max Reitz <mreitz@redhat.com>
Subject: Re: [PATCH v2 2/4] block: Avoid processing BDS twice in
 bdrv_set_aio_context_ignore()
Message-ID: <20201216145502.yiejsw47q5pfbzio@mhamilton>
References: <20201214170519.223781-1-slp@redhat.com>
 <20201214170519.223781-3-slp@redhat.com>
 <20201215121233.GD8185@merkur.fritz.box>
 <20201215131527.evpidxevevtfy54n@mhamilton>
 <20201215150119.GE8185@merkur.fritz.box>
 <20201215172337.w7vcn2woze2ejgco@mhamilton>
 <20201216123514.GD7548@merkur.fritz.box>
MIME-Version: 1.0
In-Reply-To: <20201216123514.GD7548@merkur.fritz.box>
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=slp@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="yz6yiipnpvshq5nu"
Content-Disposition: inline

--yz6yiipnpvshq5nu
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Wed, Dec 16, 2020 at 01:35:14PM +0100, Kevin Wolf wrote:
> Am 15.12.2020 um 18:23 hat Sergio Lopez geschrieben:
> > On Tue, Dec 15, 2020 at 04:01:19PM +0100, Kevin Wolf wrote:
> > > Am 15.12.2020 um 14:15 hat Sergio Lopez geschrieben:
> > > > On Tue, Dec 15, 2020 at 01:12:33PM +0100, Kevin Wolf wrote:
> > > > > Am 14.12.2020 um 18:05 hat Sergio Lopez geschrieben:
> > > > > > While processing the parents of a BDS, one of the parents may process
> > > > > > the child that's doing the tail recursion, which leads to a BDS being
> > > > > > processed twice. This is especially problematic for the aio_notifiers,
> > > > > > as they might attempt to work on both the old and the new AIO
> > > > > > contexts.
> > > > > >
> > > > > > To avoid this, add the BDS pointer to the ignore list, and check the
> > > > > > child BDS pointer while iterating over the children.
> > > > > >
> > > > > > Signed-off-by: Sergio Lopez <slp@redhat.com>
> > > > >
> > > > > Ugh, so we get a mixed list of BdrvChild and BlockDriverState? :-/
> > > >
> > > > I know, it's effective but quite ugly...
> > > >
> > > > > What is the specific scenario where you saw this breaking? Did you have
> > > > > multiple BdrvChild connections between two nodes so that we would go to
> > > > > the parent node through one and then come back to the child node through
> > > > > the other?
> > > >
> > > > I don't think this is a corner case. If the graph is walked top->down,
> > > > there's no problem since children are added to the ignore list before
> > > > getting processed, and siblings don't process each other. But, if the
> > > > graph is walked bottom->up, a BDS will start processing its parents
> > > > without adding itself to the ignore list, so there's nothing
> > > > preventing them from processing it again.
> > >
> > > I don't understand. child is added to ignore before calling the parent
> > > callback on it, so how can we come back through the same BdrvChild?
> > >
> > >     QLIST_FOREACH(child, &bs->parents, next_parent) {
> > >         if (g_slist_find(*ignore, child)) {
> > >             continue;
> > >         }
> > >         assert(child->klass->set_aio_ctx);
> > >         *ignore = g_slist_prepend(*ignore, child);
> > >         child->klass->set_aio_ctx(child, new_context, ignore);
> > >     }
> >
> > Perhaps I'm missing something, but the way I understand it, that loop
> > is adding the BdrvChild pointer of each of its parents, but not the
> > BdrvChild pointer of the BDS that was passed as an argument to
> > b_s_a_c_i.
>
> Generally, the caller has already done that.
>
> In the theoretical case that it was the outermost call in the recursion
> and it hasn't (I couldn't find any such case), I think we should still
> call the callback for the passed BdrvChild like we currently do.
>
> > > You didn't dump the BdrvChild here. I think that would add some
> > > information on why we re-entered 0x555ee2fbf660. Maybe you can also add
> > > bs->drv->format_name for each node to make the scenario less abstract?
> >
> > I've generated another trace with more data:
> >
> > bs=0x565505e48030 (backup-top) enter
> > bs=0x565505e48030 (backup-top) processing children
> > bs=0x565505e48030 (backup-top) calling bsaci child=0x565505e42090 (child->bs=0x565505e5d420)
> > bs=0x565505e5d420 (qcow2) enter
> > bs=0x565505e5d420 (qcow2) processing children
> > bs=0x565505e5d420 (qcow2) calling bsaci child=0x565505e41ea0 (child->bs=0x565505e52060)
> > bs=0x565505e52060 (file) enter
> > bs=0x565505e52060 (file) processing children
> > bs=0x565505e52060 (file) processing parents
> > bs=0x565505e52060 (file) processing itself
> > bs=0x565505e5d420 (qcow2) processing parents
> > bs=0x565505e5d420 (qcow2) calling set_aio_ctx child=0x5655066a34d0
> > bs=0x565505fbf660 (qcow2) enter
> > bs=0x565505fbf660 (qcow2) processing children
> > bs=0x565505fbf660 (qcow2) calling bsaci child=0x565505e41d20 (child->bs=0x565506bc0c00)
> > bs=0x565506bc0c00 (file) enter
> > bs=0x565506bc0c00 (file) processing children
> > bs=0x565506bc0c00 (file) processing parents
> > bs=0x565506bc0c00 (file) processing itself
> > bs=0x565505fbf660 (qcow2) processing parents
> > bs=0x565505fbf660 (qcow2) calling set_aio_ctx child=0x565505fc7aa0
> > bs=0x565505fbf660 (qcow2) calling set_aio_ctx child=0x5655068b8510
> > bs=0x565505e48030 (backup-top) enter
> > bs=0x565505e48030 (backup-top) processing children
> > bs=0x565505e48030 (backup-top) calling bsaci child=0x565505e3c450 (child->bs=0x565505fbf660)
> > bs=0x565505fbf660 (qcow2) enter
> > bs=0x565505fbf660 (qcow2) processing children
> > bs=0x565505fbf660 (qcow2) processing parents
> > bs=0x565505fbf660 (qcow2) processing itself
> > bs=0x565505e48030 (backup-top) processing parents
> > bs=0x565505e48030 (backup-top) calling set_aio_ctx child=0x565505e402d0
> > bs=0x565505e48030 (backup-top) processing itself
> > bs=0x565505fbf660 (qcow2) processing itself
>
> Hm, is this complete? I see no "processing itself" for
> bs=0x565505e5d420. Or is this because it crashed before getting there?

Yes, it crashes there. I forgot to mention that, sorry.

> Anyway, trying to reconstruct the block graph with BdrvChild pointers
> annotated at the edges:
>
> BlockBackend
>       |
>       v
>   backup-top ------------------------+
>       |   |                          |
>       |   +-----------------------+  |
>       |            0x5655068b8510 |  | 0x565505e3c450
>       |                           |  |
>       | 0x565505e42090            |  |
>       v                           |  |
>     qcow2 ---------------------+  |  |
>       |                        |  |  |
>       | 0x565505e52060         |  |  | ??? [1]
>       |                        |  |  |  |
>       v         0x5655066a34d0 |  |  |  | 0x565505fc7aa0
>     file                       v  v  v  v
>                              qcow2 (backing)
>                                     |
>                                     | 0x565505e41d20
>                                     v
>                                   file
>
> [1] This seems to be a BdrvChild with a non-BDS parent. Probably a
>     BdrvChild directly owned by the backup job.
>
> > So it seems this is happening:
> >
> > backup-top (5e48030) <---------| (5)
> >    |    |                      |
> >    |    | (6) ------------> qcow2 (5fbf660)
> >    |                           ^    |
> >    |                       (3) |    | (4)
> >    |-> (1) qcow2 (5e5d420) -----    |-> file (6bc0c00)
> >    |
> >    |-> (2) file (5e52060)
> >
> > backup-top (5e48030), the BDS that was passed as argument in the first
> > bdrv_set_aio_context_ignore() call, is re-entered when qcow2 (5fbf660)
> > is processing its parents, and the latter is also re-entered when the
> > first one starts processing its children again.
>
> Yes, but look at the BdrvChild pointers, it is through different edges
> that we come back to the same node. No BdrvChild is used twice.
>
> If backup-top had added all of its children to the ignore list before
> calling into the overlay qcow2, the backing qcow2 wouldn't eventually
> have called back into backup-top.

I've tested a patch that first adds every child to the ignore list,
and then processes those that weren't there before, as you suggested
in a previous email. With that, the offending qcow2 is not re-entered,
so we avoid the crash, but backup-top is still entered twice:

bs=0x560db0e3b030 (backup-top) enter
bs=0x560db0e3b030 (backup-top) processing children
bs=0x560db0e3b030 (backup-top) calling bsaci child=0x560db0e2f450 (child->bs=0x560db0fb2660)
bs=0x560db0fb2660 (qcow2) enter
bs=0x560db0fb2660 (qcow2) processing children
bs=0x560db0fb2660 (qcow2) calling bsaci child=0x560db0e34d20 (child->bs=0x560db1bb3c00)
bs=0x560db1bb3c00 (file) enter
bs=0x560db1bb3c00 (file) processing children
bs=0x560db1bb3c00 (file) processing parents
bs=0x560db1bb3c00 (file) processing itself
bs=0x560db0fb2660 (qcow2) calling bsaci child=0x560db16964d0 (child->bs=0x560db0e50420)
bs=0x560db0e50420 (qcow2) enter
bs=0x560db0e50420 (qcow2) processing children
bs=0x560db0e50420 (qcow2) calling bsaci child=0x560db0e34ea0 (child->bs=0x560db0e45060)
bs=0x560db0e45060 (file) enter
bs=0x560db0e45060 (file) processing children
bs=0x560db0e45060 (file) processing parents
bs=0x560db0e45060 (file) processing itself
bs=0x560db0e50420 (qcow2) processing parents
bs=0x560db0e50420 (qcow2) processing itself
bs=0x560db0fb2660 (qcow2) processing parents
bs=0x560db0fb2660 (qcow2) calling set_aio_ctx child=0x560db1672860
bs=0x560db0fb2660 (qcow2) calling set_aio_ctx child=0x560db1b14a20
bs=0x560db0e3b030 (backup-top) enter
bs=0x560db0e3b030 (backup-top) processing children
bs=0x560db0e3b030 (backup-top) processing parents
bs=0x560db0e3b030 (backup-top) calling set_aio_ctx child=0x560db0e332d0
bs=0x560db0e3b030 (backup-top) processing itself
bs=0x560db0fb2660 (qcow2) processing itself
bs=0x560db0e3b030 (backup-top) calling bsaci child=0x560db0e35090 (child->bs=0x560db0e50420)
bs=0x560db0e50420 (qcow2) enter
bs=0x560db0e3b030 (backup-top) processing parents
bs=0x560db0e3b030 (backup-top) processing itself

I see that "blk_do_set_aio_context()" passes "blk->root" to
"bdrv_child_try_set_aio_context()", so it's already in the ignore list,
and I'm not sure what's happening here. Is backup-top referenced from
two different BdrvChild objects, or is "blk->root" not pointing to
backup-top's BDS?

Thanks,
Sergio.

--yz6yiipnpvshq5nu
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEvtX891EthoCRQuii9GknjS8MAjUFAl/aH74ACgkQ9GknjS8M
AjU7/w/5AXB1yOkP+qQSstZ7lYY13rY+gfBW+kR2bprvpAi3evIsPVy7rNQSUn36
/mZhlgIAtgTsmPpNd0BBIfCWRW4ZnIgcKw5XxeKvE2xfZ7wTfjSoXWE+TdbPZWZR
+u+bXcoL4+tLmXPkP4GKTN8MexF4l6MiomcczzbZlezzcWZ4E41Tuha+FuDFKG13
tjCnK7mwk/uqyAkRa7iGb2K3+4iHgUTsXEO2f8CJlsppjW6Opy8FSetQyH3M79yP
lTGV+ttQ14Y3d+HahinQKZPisl+tn5dVhUl2f3H09YkUUW3pH8edyVcX/3fcvuy/
B57eTuKlmdyhSEokmTkt21sOh/9bUgULVHi/QbfG6IqM2+bus+9y67C9+PD/SqEc
IWrO74GOiB8luQua9E1PQhAKjKTwnSQOS9YBO5Lt72c6PJsuTFZ4sifPNo2c+3LE
9+cVcOwYcbM1E+9g7twZqibYQ/9ADZGInHLKqel/h1DsGfEKOL4LN51uziftDSZe
YufQQERp/hE2EzSWSb6IB06AtDAyrzTSnDDQVcHGnh93TxJ4YXa/LSOCfSB1QTAv
4NGHa2QrhXcFBKgqVHRd6lkxr5md0e7iT0hKyaYuep0IBUXVfOUw4QNJKZJ8FSlZ
qA02t67HmJnbFRrydzzKNgJFq0eHMEN3w3VEuO/bZA3QArDDJNk=
=+Grh
-----END PGP SIGNATURE-----

--yz6yiipnpvshq5nu--



From xen-devel-bounces@lists.xenproject.org Wed Dec 16 15:08:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 15:08:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55295.96353 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpYPY-00051F-G0; Wed, 16 Dec 2020 15:08:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55295.96353; Wed, 16 Dec 2020 15:08:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpYPY-000518-D3; Wed, 16 Dec 2020 15:08:16 +0000
Received: by outflank-mailman (input) for mailman id 55295;
 Wed, 16 Dec 2020 15:08:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DJND=FU=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpYPX-000513-Em
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 15:08:15 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dc2168d6-20ad-4295-9ef9-37449fbe73a0;
 Wed, 16 Dec 2020 15:08:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9647BAC7B;
 Wed, 16 Dec 2020 15:08:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc2168d6-20ad-4295-9ef9-37449fbe73a0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608131293; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=w/LCdZXIaYEqdPWTafIHwV5LuovOn8Myg6gt0HNuGek=;
	b=RAvuIbRikVWYshQsmVw+fzBP2clgf1+1xfnUEYHdqONk093Uh4btLyY3TcLDAb4ynAxa72
	y8RCT1Bc6LZA5OkWYLeGYsXNvWUwpnaKDMKKVmdG1gQouDnkVEdiciF4egZFokTHUL4A+I
	H1c+RVl3AZDqPM7m9OPOEbyfxFUEYLo=
Subject: Re: [PATCH] xen: Kconfig: remove X86_64 depends from XEN_512GB
To: Jason Andryuk <jandryuk@gmail.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, x86@kernel.org,
 "H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
References: <20201216140838.16085-1-jandryuk@gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <19c5737a-8cb8-2cbb-a836-2e09cd6ff79f@suse.com>
Date: Wed, 16 Dec 2020 16:08:12 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201216140838.16085-1-jandryuk@gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="Qep4h0WpqmdaARs5zQrlcUILe1KLr2JQI"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--Qep4h0WpqmdaARs5zQrlcUILe1KLr2JQI
Content-Type: multipart/mixed; boundary="5brQ8hbWiGtQoYq2N0FdQc4MtWIWA1haU";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jason Andryuk <jandryuk@gmail.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, x86@kernel.org,
 "H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
Message-ID: <19c5737a-8cb8-2cbb-a836-2e09cd6ff79f@suse.com>
Subject: Re: [PATCH] xen: Kconfig: remove X86_64 depends from XEN_512GB
References: <20201216140838.16085-1-jandryuk@gmail.com>
In-Reply-To: <20201216140838.16085-1-jandryuk@gmail.com>

--5brQ8hbWiGtQoYq2N0FdQc4MtWIWA1haU
Content-Type: multipart/mixed;
 boundary="------------5C8EAC7837D2E9928E630D93"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------5C8EAC7837D2E9928E630D93
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 16.12.20 15:08, Jason Andryuk wrote:
> commit bfda93aee0ec ("xen: Kconfig: nest Xen guest options")
> accidentally re-added X86_64 as a dependency to XEN_512GB.  It was
> originally removed in commit a13f2ef168cb ("x86/xen: remove 32-bit Xen
> PV guest support").  Remove it again.
>
> Fixes: bfda93aee0ec ("xen: Kconfig: nest Xen guest options")
> Reported-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------5C8EAC7837D2E9928E630D93
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------5C8EAC7837D2E9928E630D93--

--5brQ8hbWiGtQoYq2N0FdQc4MtWIWA1haU--

--Qep4h0WpqmdaARs5zQrlcUILe1KLr2JQI
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/aItwFAwAAAAAACgkQsN6d1ii/Ey9f
EQf/ciD4Sdt+gLFHjNayQ4U8Cm0YjGZd4z5hW4kR5U6OAkMFC+n2wlmhL+M4YnE1a0r7ngl0QTV7
gpMAdB8wz8xiQGD5oAaFGYhOmEnL17L9evOPYwS4nvXsPmsGrIRy2xCURw9Bx2ldDYmR1e3jx4MA
sDMEUDlLC1ps29qQCFhBpdazLFlX3xDEnBeomfL6oCvyjBskAzc2iNrrd9JPnpyI7LIGMe6+0dE6
aS5v4uaZVwjShm/MKOCOT0JZnF2mZFPUiduasCnLeo1KFakL0G481sjXGWPaId7fBUx604PcnSv8
FL6QmN6COGPt6YVOI9uuq9E3wPF/DcCzLNmgHIMfwQ==
=E9pp
-----END PGP SIGNATURE-----

--Qep4h0WpqmdaARs5zQrlcUILe1KLr2JQI--


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 15:40:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 15:40:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55306.96376 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpYuo-0000De-Ba; Wed, 16 Dec 2020 15:40:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55306.96376; Wed, 16 Dec 2020 15:40:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpYuo-0000DX-8j; Wed, 16 Dec 2020 15:40:34 +0000
Received: by outflank-mailman (input) for mailman id 55306;
 Wed, 16 Dec 2020 15:40:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpYun-0000DP-95; Wed, 16 Dec 2020 15:40:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpYun-0001nY-5L; Wed, 16 Dec 2020 15:40:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpYum-0000io-TY; Wed, 16 Dec 2020 15:40:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kpYum-0004j1-T3; Wed, 16 Dec 2020 15:40:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=h90SOGGbjzOv2A+8b4JyuUsXAjdrFjc5E3vdmJOFPR0=; b=U7ZJMcGweaQtXLlycDDnCYR1U2
	1S68Q7d6gLqTZHSFPSEKDyw0MEDFMarP/K2VoCWL/PxOn4cPRbxOlS9uydnvZh8UFZbJrZZSeELh8
	vVLRCfwc+9HaylWf4iRiow3ymPYnhpaGTC55UAPoo2kQAZS/fE2fu78waeReP9gq0BzU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157606-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157606: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64-libvirt:libvirt-build:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8bf0fab14256057bbd145563151814300476bb28
X-Osstest-Versions-That:
    xen=904148ecb4a59d4c8375d8e8d38117b8605e10ac
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Dec 2020 15:40:32 +0000

flight 157606 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157606/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 157560

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8bf0fab14256057bbd145563151814300476bb28
baseline version:
 xen                  904148ecb4a59d4c8375d8e8d38117b8605e10ac

Last test of basis   157560  2020-12-15 13:00:26 Z    1 days
Testing same since   157570  2020-12-15 17:00:30 Z    0 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <pdurrant@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 556 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 16:08:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 16:08:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55320.96407 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpZLj-0002ss-Sn; Wed, 16 Dec 2020 16:08:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55320.96407; Wed, 16 Dec 2020 16:08:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpZLj-0002sl-Pa; Wed, 16 Dec 2020 16:08:23 +0000
Received: by outflank-mailman (input) for mailman id 55320;
 Wed, 16 Dec 2020 16:08:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uZEz=FU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kpZLi-0002sg-IV
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 16:08:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d0f2a1fe-9111-45d2-bee5-daef275fa2f3;
 Wed, 16 Dec 2020 16:08:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BC6B6AC7B;
 Wed, 16 Dec 2020 16:08:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d0f2a1fe-9111-45d2-bee5-daef275fa2f3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608134900; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=bsRCeq+rPE4IAYzCn2gq/2a9pbnYQYIqEpJ9q37JRlo=;
	b=uio/ASaCVb4BdCdy6QTxE2elZTOdujvuRtg6Vfgl6CuxoKXb1SxHuUzRxSm9Q3hIlrvbph
	t8P/6uWX6/7iuNIjJe5TssDH2mmgNReOu/1B80SgElwSwqOOxVEP1rN6GR/Ux01Jh1HffI
	3EeHvbT0G8KStEnPsIvyD6lMtxiL8Lc=
Subject: Re: [PATCH v3 2/8] xen/hypfs: switch write function handles to const
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201209160956.32456-1-jgross@suse.com>
 <20201209160956.32456-3-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <bdd553b4-968a-ff71-4bac-2824f86ba869@suse.com>
Date: Wed, 16 Dec 2020 17:08:20 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201209160956.32456-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.12.2020 17:09, Juergen Gross wrote:
> --- a/xen/include/xen/guest_access.h
> +++ b/xen/include/xen/guest_access.h
> @@ -26,6 +26,11 @@
>      type *_x = (hnd).p;                         \
>      (XEN_GUEST_HANDLE_PARAM(type)) { _x };      \
>  })
> +/* Same for casting to a const type. */
> +#define guest_handle_const_cast(hnd, type) ({       \
> +    const type *_x = (const type *)((hnd).p);       \
> +    (XEN_GUEST_HANDLE_PARAM(const_##type)) { _x };  \
> +})

Afaict this allows casting from e.g. uint to const_ulong. We
don't want to permit that (i.e. if really needed, one should
go through two explicit steps). I think all it takes is dropping
the cast:

#define guest_handle_const_cast(hnd, type) ({      \
    const type *_x = (hnd).p;                      \
    (XEN_GUEST_HANDLE_PARAM(const_##type)) { _x }; \
})

With this
Reviewed-by: Jan Beulich <jbeulich@suse.com>
and I'd be okay making the adjustment while committing
(provided it works and I didn't overlook anything).

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 16:16:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 16:16:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55325.96419 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpZTt-0003rE-Nb; Wed, 16 Dec 2020 16:16:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55325.96419; Wed, 16 Dec 2020 16:16:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpZTt-0003r7-Kb; Wed, 16 Dec 2020 16:16:49 +0000
Received: by outflank-mailman (input) for mailman id 55325;
 Wed, 16 Dec 2020 16:16:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uZEz=FU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kpZTs-0003r2-Az
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 16:16:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e8ba603e-181b-41e3-a9c7-ed6cfb341051;
 Wed, 16 Dec 2020 16:16:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8E73DAC7B;
 Wed, 16 Dec 2020 16:16:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e8ba603e-181b-41e3-a9c7-ed6cfb341051
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608135405; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7vfulspyiYTcLTUA37KSOyWnDLJHc/Y2+s317T6X6/g=;
	b=qTKa5QgPfmDDSUDd0DxmwUMmb4gnDDMvehWf3kRjIQrLvoz7fuJc+45Dv/iMn2zP1D/2Oe
	+Elee6up4ioggGHlb2g9OD+/ONMLJAnCxDMd/EXEpqWPdOi7GWQINPS8RWi1SQMj9+vxiw
	4pkUPHa7zo25IYrAK/nEjPzIRJkR4us=
Subject: Re: [PATCH v3 3/8] xen/hypfs: add new enter() and exit() per node
 callbacks
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201209160956.32456-1-jgross@suse.com>
 <20201209160956.32456-4-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <36469295-8c77-0e58-654a-35fd992c11a1@suse.com>
Date: Wed, 16 Dec 2020 17:16:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201209160956.32456-4-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.12.2020 17:09, Juergen Gross wrote:
> In order to better support resource allocation and locking for dynamic
> hypfs nodes add enter() and exit() callbacks to struct hypfs_funcs.
> 
> The enter() callback is called when entering a node during hypfs user
> actions (traversing, reading or writing it), while the exit() callback
> is called when leaving a node (accessing another node at the same or a
> higher directory level, or when returning to the user).
> 
> To avoid recursion, this requires a parent pointer in each node.
> Let the enter() callback return the entry address, which is stored as
> the last accessed node; this makes it possible to use a template entry
> for that purpose in the case of dynamic entries.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - new patch
> 
> V3:
> - add ASSERT(entry); (Jan Beulich)
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  xen/common/hypfs.c      | 80 +++++++++++++++++++++++++++++++++++++++++
>  xen/include/xen/hypfs.h |  5 +++
>  2 files changed, 85 insertions(+)
> 
> diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
> index 6f822ae097..f04934db10 100644
> --- a/xen/common/hypfs.c
> +++ b/xen/common/hypfs.c
> @@ -25,30 +25,40 @@ CHECK_hypfs_dirlistentry;
>       ROUNDUP((name_len) + 1, alignof(struct xen_hypfs_direntry)))
>  
>  const struct hypfs_funcs hypfs_dir_funcs = {
> +    .enter = hypfs_node_enter,
> +    .exit = hypfs_node_exit,
>      .read = hypfs_read_dir,
>      .write = hypfs_write_deny,
>      .getsize = hypfs_getsize,
>      .findentry = hypfs_dir_findentry,
>  };
>  const struct hypfs_funcs hypfs_leaf_ro_funcs = {
> +    .enter = hypfs_node_enter,
> +    .exit = hypfs_node_exit,
>      .read = hypfs_read_leaf,
>      .write = hypfs_write_deny,
>      .getsize = hypfs_getsize,
>      .findentry = hypfs_leaf_findentry,
>  };
>  const struct hypfs_funcs hypfs_leaf_wr_funcs = {
> +    .enter = hypfs_node_enter,
> +    .exit = hypfs_node_exit,
>      .read = hypfs_read_leaf,
>      .write = hypfs_write_leaf,
>      .getsize = hypfs_getsize,
>      .findentry = hypfs_leaf_findentry,
>  };
>  const struct hypfs_funcs hypfs_bool_wr_funcs = {
> +    .enter = hypfs_node_enter,
> +    .exit = hypfs_node_exit,
>      .read = hypfs_read_leaf,
>      .write = hypfs_write_bool,
>      .getsize = hypfs_getsize,
>      .findentry = hypfs_leaf_findentry,
>  };
>  const struct hypfs_funcs hypfs_custom_wr_funcs = {
> +    .enter = hypfs_node_enter,
> +    .exit = hypfs_node_exit,
>      .read = hypfs_read_leaf,
>      .write = hypfs_write_custom,
>      .getsize = hypfs_getsize,
> @@ -63,6 +73,8 @@ enum hypfs_lock_state {
>  };
>  static DEFINE_PER_CPU(enum hypfs_lock_state, hypfs_locked);
>  
> +static DEFINE_PER_CPU(const struct hypfs_entry *, hypfs_last_node_entered);
> +
>  HYPFS_DIR_INIT(hypfs_root, "");
>  
>  static void hypfs_read_lock(void)
> @@ -100,11 +112,59 @@ static void hypfs_unlock(void)
>      }
>  }
>  
> +const struct hypfs_entry *hypfs_node_enter(const struct hypfs_entry *entry)
> +{
> +    return entry;
> +}
> +
> +void hypfs_node_exit(const struct hypfs_entry *entry)
> +{
> +}
> +
> +static int node_enter(const struct hypfs_entry *entry)
> +{
> +    const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered);
> +
> +    entry = entry->funcs->enter(entry);
> +    if ( IS_ERR(entry) )
> +        return PTR_ERR(entry);
> +
> +    ASSERT(entry);
> +    ASSERT(!*last || *last == entry->parent);
> +
> +    *last = entry;
> +
> +    return 0;
> +}
> +
> +static void node_exit(const struct hypfs_entry *entry)
> +{
> +    const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered);
> +
> +    if ( !*last )
> +        return;

To my question regarding this in v2 you replied

"I rechecked and have found that this was a remnant from an earlier
 variant. *last won't ever be NULL, so the if can be dropped (a NULL
 will be catched by the following ASSERT())."

Now this if() is still there. Why? (My alternative suggestion was
to have ASSERT(!entry->parent) inside the if() body, since prior to
that you said this would be an indication of the root entry.)

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 16:17:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 16:17:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55328.96431 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpZUY-0003y6-2H; Wed, 16 Dec 2020 16:17:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55328.96431; Wed, 16 Dec 2020 16:17:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpZUX-0003xz-Uc; Wed, 16 Dec 2020 16:17:29 +0000
Received: by outflank-mailman (input) for mailman id 55328;
 Wed, 16 Dec 2020 16:17:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DJND=FU=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpZUV-0003xr-UY
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 16:17:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 59c38dab-fbf7-4445-9458-3f2f9547e977;
 Wed, 16 Dec 2020 16:17:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 14A49AC7F;
 Wed, 16 Dec 2020 16:17:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 59c38dab-fbf7-4445-9458-3f2f9547e977
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608135446; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=eF8iye3aJJjmodiyRNp30ALVMd3SgTP6MMfVPx1JXbo=;
	b=NtpiP/s+f+heouA/WqGKApJn6vrDzpI8PLnqLar5QEh/Kn9hJTx6zt1j2c8qDp51TYX/FA
	jY0lkblfWlOnpbiAFqfpn6ecySlDQ8y2lkAM5FybZYWWiSoYdTI5qX037IvF+eqqwKdAzH
	/rhgvOutzs5AmQ8qHprTcgXntSdJJnY=
Subject: Re: [PATCH v3 2/8] xen/hypfs: switch write function handles to const
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201209160956.32456-1-jgross@suse.com>
 <20201209160956.32456-3-jgross@suse.com>
 <bdd553b4-968a-ff71-4bac-2824f86ba869@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <be37658b-2a56-8506-70d0-e74328b61c5a@suse.com>
Date: Wed, 16 Dec 2020 17:17:25 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <bdd553b4-968a-ff71-4bac-2824f86ba869@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="MMIz031IqXfpEShhdWMREFyWWvEkpbCLG"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--MMIz031IqXfpEShhdWMREFyWWvEkpbCLG
Content-Type: multipart/mixed; boundary="wCanPexOv0N3dgKkpbmAztiNGhQswaR8X";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <be37658b-2a56-8506-70d0-e74328b61c5a@suse.com>
Subject: Re: [PATCH v3 2/8] xen/hypfs: switch write function handles to const
References: <20201209160956.32456-1-jgross@suse.com>
 <20201209160956.32456-3-jgross@suse.com>
 <bdd553b4-968a-ff71-4bac-2824f86ba869@suse.com>
In-Reply-To: <bdd553b4-968a-ff71-4bac-2824f86ba869@suse.com>

--wCanPexOv0N3dgKkpbmAztiNGhQswaR8X
Content-Type: multipart/mixed;
 boundary="------------8746C9B61E947DE71B19CCFE"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------8746C9B61E947DE71B19CCFE
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 16.12.20 17:08, Jan Beulich wrote:
> On 09.12.2020 17:09, Juergen Gross wrote:
>> --- a/xen/include/xen/guest_access.h
>> +++ b/xen/include/xen/guest_access.h
>> @@ -26,6 +26,11 @@
>>       type *_x = (hnd).p;                         \
>>       (XEN_GUEST_HANDLE_PARAM(type)) { _x };      \
>>   })
>> +/* Same for casting to a const type. */
>> +#define guest_handle_const_cast(hnd, type) ({       \
>> +    const type *_x = (const type *)((hnd).p);       \
>> +    (XEN_GUEST_HANDLE_PARAM(const_##type)) { _x };  \
>> +})
>
> Afaict this allows casting from e.g. uint to const_ulong. We
> don't want to permit that (i.e. if really needed, one should
> go through two explicit steps). I think all it takes is dropping
> the cast:
>
> #define guest_handle_const_cast(hnd, type) ({      \
>      const type *_x = (hnd).p;                      \
>      (XEN_GUEST_HANDLE_PARAM(const_##type)) { _x }; \
> })
>
> With this
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> and I'd be okay making the adjustment while committing
> (provided it works and I didn't overlook anything).

At least it is still compiling, and I guess that was the main
concern.


Juergen

--------------8746C9B61E947DE71B19CCFE--

--wCanPexOv0N3dgKkpbmAztiNGhQswaR8X--

--MMIz031IqXfpEShhdWMREFyWWvEkpbCLG
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/aMxUFAwAAAAAACgkQsN6d1ii/Ey/V
Xwf/ewhGO0C2HoSru6BAk4TZ1LYIDQL97/eLHajGPH/BGT4l1/0SDeVVTc8a4/ZNdAHiP5d/Z5iI
nEZ3OhIIhwuOuwbZ6/+qYq6vdU5oH35+4cjuYmj0K1e1x8Yd9k3xLA+MzCT7tY0WPCHfqL3ILLjp
0vVO1gClpgESsCLJ81/w1qh/gyqMWXg3UCuuNIHIhi09mYOPKw4B8fmOAistamjgeHAiZIpCz2Gc
pN+6kYtUQZ9hCklaPZ4Ms1Z2FBP+hWEvpS+ZvO1L7fDB3WmWTDHPCzax2gPO4Ki7NtUK74fGQhdw
xaVwStK83imhmcE0GROkKgInlIU/6rtdooIZ6KZbaw==
=VsSN
-----END PGP SIGNATURE-----

--MMIz031IqXfpEShhdWMREFyWWvEkpbCLG--


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 16:24:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 16:24:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55335.96443 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpZb0-0004yL-VR; Wed, 16 Dec 2020 16:24:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55335.96443; Wed, 16 Dec 2020 16:24:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpZb0-0004yE-Rb; Wed, 16 Dec 2020 16:24:10 +0000
Received: by outflank-mailman (input) for mailman id 55335;
 Wed, 16 Dec 2020 16:24:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DJND=FU=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpZaz-0004y9-Jn
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 16:24:09 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5a32b00a-2925-4195-a578-56f65e8e920e;
 Wed, 16 Dec 2020 16:24:08 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9F7E7AC7F;
 Wed, 16 Dec 2020 16:24:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a32b00a-2925-4195-a578-56f65e8e920e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608135847; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=8bCNpvRu1aaYztt7lNYPM2+/3ZIjutOceBP40ILDXYc=;
	b=Rcvkt0nPQMiHntsr/DRTPjJyI9l/up7yGk9FflhK6CC9yXPhppMXHO2WysYUzvID+Y2dr6
	6Mn5u/XH0/HY/65WLAvT5Wh+BQVrwWIUENN3y7FRjj4OMjY+1fsdedujJVgVa6y6gqCPqJ
	fwIwOTrrq/vBeTVBgCqXUePGD70mAPI=
Subject: Re: [PATCH v3 3/8] xen/hypfs: add new enter() and exit() per node
 callbacks
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201209160956.32456-1-jgross@suse.com>
 <20201209160956.32456-4-jgross@suse.com>
 <36469295-8c77-0e58-654a-35fd992c11a1@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <aad9131f-ca42-94b4-1ce2-18c6db0ac381@suse.com>
Date: Wed, 16 Dec 2020 17:24:06 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <36469295-8c77-0e58-654a-35fd992c11a1@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="ufMjG3EonlO7vLn4ZjMp1F7MEwBHxDMCm"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--ufMjG3EonlO7vLn4ZjMp1F7MEwBHxDMCm
Content-Type: multipart/mixed; boundary="s4HuKqeIBHtLZcNXJ2v91T4sw7bLFGTcp";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <aad9131f-ca42-94b4-1ce2-18c6db0ac381@suse.com>
Subject: Re: [PATCH v3 3/8] xen/hypfs: add new enter() and exit() per node
 callbacks
References: <20201209160956.32456-1-jgross@suse.com>
 <20201209160956.32456-4-jgross@suse.com>
 <36469295-8c77-0e58-654a-35fd992c11a1@suse.com>
In-Reply-To: <36469295-8c77-0e58-654a-35fd992c11a1@suse.com>

--s4HuKqeIBHtLZcNXJ2v91T4sw7bLFGTcp
Content-Type: multipart/mixed;
 boundary="------------E1DAC578A47395658EADEB73"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------E1DAC578A47395658EADEB73
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 16.12.20 17:16, Jan Beulich wrote:
> On 09.12.2020 17:09, Juergen Gross wrote:
>> In order to better support resource allocation and locking for dynamic
>> hypfs nodes add enter() and exit() callbacks to struct hypfs_funcs.
>>
>> The enter() callback is called when entering a node during hypfs user
>> actions (traversing, reading or writing it), while the exit() callback
>> is called when leaving a node (accessing another node at the same or a
>> higher directory level, or when returning to the user).
>>
>> For avoiding recursion this requires a parent pointer in each node.
>> Let the enter() callback return the entry address which is stored as
>> the last accessed node in order to be able to use a template entry for
>> that purpose in case of dynamic entries.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> V2:
>> - new patch
>>
>> V3:
>> - add ASSERT(entry); (Jan Beulich)
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   xen/common/hypfs.c      | 80 +++++++++++++++++++++++++++++++++++++++++
>>   xen/include/xen/hypfs.h |  5 +++
>>   2 files changed, 85 insertions(+)
>>
>> diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
>> index 6f822ae097..f04934db10 100644
>> --- a/xen/common/hypfs.c
>> +++ b/xen/common/hypfs.c
>> @@ -25,30 +25,40 @@ CHECK_hypfs_dirlistentry;
>>        ROUNDUP((name_len) + 1, alignof(struct xen_hypfs_direntry)))
>>
>>   const struct hypfs_funcs hypfs_dir_funcs = {
>> +    .enter = hypfs_node_enter,
>> +    .exit = hypfs_node_exit,
>>       .read = hypfs_read_dir,
>>       .write = hypfs_write_deny,
>>       .getsize = hypfs_getsize,
>>       .findentry = hypfs_dir_findentry,
>>   };
>>   const struct hypfs_funcs hypfs_leaf_ro_funcs = {
>> +    .enter = hypfs_node_enter,
>> +    .exit = hypfs_node_exit,
>>       .read = hypfs_read_leaf,
>>       .write = hypfs_write_deny,
>>       .getsize = hypfs_getsize,
>>       .findentry = hypfs_leaf_findentry,
>>   };
>>   const struct hypfs_funcs hypfs_leaf_wr_funcs = {
>> +    .enter = hypfs_node_enter,
>> +    .exit = hypfs_node_exit,
>>       .read = hypfs_read_leaf,
>>       .write = hypfs_write_leaf,
>>       .getsize = hypfs_getsize,
>>       .findentry = hypfs_leaf_findentry,
>>   };
>>   const struct hypfs_funcs hypfs_bool_wr_funcs = {
>> +    .enter = hypfs_node_enter,
>> +    .exit = hypfs_node_exit,
>>       .read = hypfs_read_leaf,
>>       .write = hypfs_write_bool,
>>       .getsize = hypfs_getsize,
>>       .findentry = hypfs_leaf_findentry,
>>   };
>>   const struct hypfs_funcs hypfs_custom_wr_funcs = {
>> +    .enter = hypfs_node_enter,
>> +    .exit = hypfs_node_exit,
>>       .read = hypfs_read_leaf,
>>       .write = hypfs_write_custom,
>>       .getsize = hypfs_getsize,
>> @@ -63,6 +73,8 @@ enum hypfs_lock_state {
>>   };
>>   static DEFINE_PER_CPU(enum hypfs_lock_state, hypfs_locked);
>>
>> +static DEFINE_PER_CPU(const struct hypfs_entry *, hypfs_last_node_entered);
>> +
>>   HYPFS_DIR_INIT(hypfs_root, "");
>>
>>   static void hypfs_read_lock(void)
>> @@ -100,11 +112,59 @@ static void hypfs_unlock(void)
>>       }
>>   }
>>
>> +const struct hypfs_entry *hypfs_node_enter(const struct hypfs_entry *entry)
>> +{
>> +    return entry;
>> +}
>> +
>> +void hypfs_node_exit(const struct hypfs_entry *entry)
>> +{
>> +}
>> +
>> +static int node_enter(const struct hypfs_entry *entry)
>> +{
>> +    const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered);
>> +
>> +    entry = entry->funcs->enter(entry);
>> +    if ( IS_ERR(entry) )
>> +        return PTR_ERR(entry);
>> +
>> +    ASSERT(entry);
>> +    ASSERT(!*last || *last == entry->parent);
>> +
>> +    *last = entry;
>> +
>> +    return 0;
>> +}
>> +
>> +static void node_exit(const struct hypfs_entry *entry)
>> +{
>> +    const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered);
>> +
>> +    if ( !*last )
>> +        return;
>
> To my question regarding this in v2 you replied
>
> "I rechecked and have found that this was a remnant from an earlier
>   variant. *last won't ever be NULL, so the if can be dropped (a NULL
>   will be catched by the following ASSERT())."
>
> Now this if() is still there. Why?

I really thought I did remove the if(). Seems as if I did that on
my test machine only and not in my git tree. Sorry for that.


Juergen

--------------E1DAC578A47395658EADEB73
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------E1DAC578A47395658EADEB73--

--s4HuKqeIBHtLZcNXJ2v91T4sw7bLFGTcp--

--ufMjG3EonlO7vLn4ZjMp1F7MEwBHxDMCm
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/aNKYFAwAAAAAACgkQsN6d1ii/Ey+8
CAf/b9Uz4HkjVMPncTvi7dMhL8CO6URBZs2uBXbZZIHzEuttx5bxIU9Kzi3H7+0csDISQLo4CMCW
VB7rJ70qa8Of2tPpCYxC5ZrtDN1qG/aszGzu7bh055NqaaluvIydU9Z/JVxCdZYmPkdInf/Jm1mZ
fiOLHK9mEEr04KRNMqhcse3jyFsv0hO5p3UMwkp8MuuUbCWEXPi3W0ZKcGmZ7HwHucu6agTUgemv
Cmvmt9tCkK1ce6XwbcAHf2yKUu0za2zPGwdAOi2EdDAG43NmnZ4io9PGi2fuXlIradHYgNbDNX/K
tiIsdrs06Q5tT6yfepNCjeGkWNEcG6U8N92Ag9DGJA==
=blsI
-----END PGP SIGNATURE-----

--ufMjG3EonlO7vLn4ZjMp1F7MEwBHxDMCm--


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 16:35:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 16:35:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55340.96455 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpZmE-00061K-38; Wed, 16 Dec 2020 16:35:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55340.96455; Wed, 16 Dec 2020 16:35:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpZmD-00061D-Uq; Wed, 16 Dec 2020 16:35:45 +0000
Received: by outflank-mailman (input) for mailman id 55340;
 Wed, 16 Dec 2020 16:35:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uZEz=FU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kpZmC-000616-Uh
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 16:35:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d3d35508-293d-484c-8971-33ee2a0a6b72;
 Wed, 16 Dec 2020 16:35:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 05A83AE86;
 Wed, 16 Dec 2020 16:35:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d3d35508-293d-484c-8971-33ee2a0a6b72
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608136543; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=bnPekGeKuPbFJ1TL9eKBy1yLcE8aaa9z6PSjb6epRNU=;
	b=Tt7oKcbJndpFvBnEYrt4yt1aWjMBMwsFIlCXROQP5AeOa88az9kJ+vGmhXhv+8oMBglHOD
	y70tRnVC6AYqHJUCoLVNptIAbBt3wgfcytQ/G9AYg+6KkzF1i5jDRmmIzehNf9UZA+wCeo
	ICZyKROYwfKPXW/erZx/VKvX+9WJCOw=
Subject: Re: [PATCH v3 2/8] xen/hypfs: switch write function handles to const
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201209160956.32456-1-jgross@suse.com>
 <20201209160956.32456-3-jgross@suse.com>
 <bdd553b4-968a-ff71-4bac-2824f86ba869@suse.com>
 <be37658b-2a56-8506-70d0-e74328b61c5a@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3b9910ee-9663-fc8f-8323-f483cd51b6f9@suse.com>
Date: Wed, 16 Dec 2020 17:35:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <be37658b-2a56-8506-70d0-e74328b61c5a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 16.12.2020 17:17, Jürgen Groß wrote:
> On 16.12.20 17:08, Jan Beulich wrote:
>> On 09.12.2020 17:09, Juergen Gross wrote:
>>> --- a/xen/include/xen/guest_access.h
>>> +++ b/xen/include/xen/guest_access.h
>>> @@ -26,6 +26,11 @@
>>>       type *_x = (hnd).p;                         \
>>>       (XEN_GUEST_HANDLE_PARAM(type)) { _x };      \
>>>   })
>>> +/* Same for casting to a const type. */
>>> +#define guest_handle_const_cast(hnd, type) ({       \
>>> +    const type *_x = (const type *)((hnd).p);       \
>>> +    (XEN_GUEST_HANDLE_PARAM(const_##type)) { _x };  \
>>> +})
>>
>> Afaict this allow casting from e.g. uint to const_ulong. We
>> don't want to permit this (i.e. if really needed one is to
>> go through two steps). I think all it takes is dropping the
>> cast:
>>
>> #define guest_handle_const_cast(hnd, type) ({      \
>>      const type *_x = (hnd).p;                      \
>>      (XEN_GUEST_HANDLE_PARAM(const_##type)) { _x }; \
>> })
>>
>> With this
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>> and I'd be okay making the adjustment while committing
>> (provided it works and I didn't overlook anything).
> 
> At least it is still compiling, and I guess that was the main
> concern.

Indeed. Thanks for checking.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 16:36:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 16:36:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55345.96467 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpZnI-00067i-CV; Wed, 16 Dec 2020 16:36:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55345.96467; Wed, 16 Dec 2020 16:36:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpZnI-00067b-9J; Wed, 16 Dec 2020 16:36:52 +0000
Received: by outflank-mailman (input) for mailman id 55345;
 Wed, 16 Dec 2020 16:36:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uZEz=FU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kpZnH-00067T-7H
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 16:36:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9affa49e-fc23-4965-a64e-119840d108ab;
 Wed, 16 Dec 2020 16:36:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7C57FACBD;
 Wed, 16 Dec 2020 16:36:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9affa49e-fc23-4965-a64e-119840d108ab
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608136608; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=VQH3sfUNlgwjFpO8zC7ngq+VaMEQ8WGlQc8Sc++NJsA=;
	b=RbhaTE/IsGVCvcRsOmEy8x+Iq/SaKJTmse/Z3uznae4Va3A8vwJM9DGmDFBn4Jfn8XwkH/
	2GghH8n7inq2uEgYxekvD7ZWFI5YjEo01jilS6HzuvK+BSM4qSldks3V5dV2HXdvbDCOuz
	Llz+qHdPYtEQR6g7OZZJtpGB43+33pw=
Subject: Re: [PATCH v3 3/8] xen/hypfs: add new enter() and exit() per node
 callbacks
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201209160956.32456-1-jgross@suse.com>
 <20201209160956.32456-4-jgross@suse.com>
 <36469295-8c77-0e58-654a-35fd992c11a1@suse.com>
 <aad9131f-ca42-94b4-1ce2-18c6db0ac381@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <227825e7-d704-fd36-a327-1dbd6aa391c8@suse.com>
Date: Wed, 16 Dec 2020 17:36:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <aad9131f-ca42-94b4-1ce2-18c6db0ac381@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 16.12.2020 17:24, Jürgen Groß wrote:
> On 16.12.20 17:16, Jan Beulich wrote:
>> On 09.12.2020 17:09, Juergen Gross wrote:
>>> In order to better support resource allocation and locking for dynamic
>>> hypfs nodes add enter() and exit() callbacks to struct hypfs_funcs.
>>>
>>> The enter() callback is called when entering a node during hypfs user
>>> actions (traversing, reading or writing it), while the exit() callback
>>> is called when leaving a node (accessing another node at the same or a
>>> higher directory level, or when returning to the user).
>>>
>>> For avoiding recursion this requires a parent pointer in each node.
>>> Let the enter() callback return the entry address which is stored as
>>> the last accessed node in order to be able to use a template entry for
>>> that purpose in case of dynamic entries.
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>> V2:
>>> - new patch
>>>
>>> V3:
>>> - add ASSERT(entry); (Jan Beulich)
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>>   xen/common/hypfs.c      | 80 +++++++++++++++++++++++++++++++++++++++++
>>>   xen/include/xen/hypfs.h |  5 +++
>>>   2 files changed, 85 insertions(+)
>>>
>>> diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
>>> index 6f822ae097..f04934db10 100644
>>> --- a/xen/common/hypfs.c
>>> +++ b/xen/common/hypfs.c
>>> @@ -25,30 +25,40 @@ CHECK_hypfs_dirlistentry;
>>>        ROUNDUP((name_len) + 1, alignof(struct xen_hypfs_direntry)))
>>>   
>>>   const struct hypfs_funcs hypfs_dir_funcs = {
>>> +    .enter = hypfs_node_enter,
>>> +    .exit = hypfs_node_exit,
>>>       .read = hypfs_read_dir,
>>>       .write = hypfs_write_deny,
>>>       .getsize = hypfs_getsize,
>>>       .findentry = hypfs_dir_findentry,
>>>   };
>>>   const struct hypfs_funcs hypfs_leaf_ro_funcs = {
>>> +    .enter = hypfs_node_enter,
>>> +    .exit = hypfs_node_exit,
>>>       .read = hypfs_read_leaf,
>>>       .write = hypfs_write_deny,
>>>       .getsize = hypfs_getsize,
>>>       .findentry = hypfs_leaf_findentry,
>>>   };
>>>   const struct hypfs_funcs hypfs_leaf_wr_funcs = {
>>> +    .enter = hypfs_node_enter,
>>> +    .exit = hypfs_node_exit,
>>>       .read = hypfs_read_leaf,
>>>       .write = hypfs_write_leaf,
>>>       .getsize = hypfs_getsize,
>>>       .findentry = hypfs_leaf_findentry,
>>>   };
>>>   const struct hypfs_funcs hypfs_bool_wr_funcs = {
>>> +    .enter = hypfs_node_enter,
>>> +    .exit = hypfs_node_exit,
>>>       .read = hypfs_read_leaf,
>>>       .write = hypfs_write_bool,
>>>       .getsize = hypfs_getsize,
>>>       .findentry = hypfs_leaf_findentry,
>>>   };
>>>   const struct hypfs_funcs hypfs_custom_wr_funcs = {
>>> +    .enter = hypfs_node_enter,
>>> +    .exit = hypfs_node_exit,
>>>       .read = hypfs_read_leaf,
>>>       .write = hypfs_write_custom,
>>>       .getsize = hypfs_getsize,
>>> @@ -63,6 +73,8 @@ enum hypfs_lock_state {
>>>   };
>>>   static DEFINE_PER_CPU(enum hypfs_lock_state, hypfs_locked);
>>>   
>>> +static DEFINE_PER_CPU(const struct hypfs_entry *, hypfs_last_node_entered);
>>> +
>>>   HYPFS_DIR_INIT(hypfs_root, "");
>>>   
>>>   static void hypfs_read_lock(void)
>>> @@ -100,11 +112,59 @@ static void hypfs_unlock(void)
>>>       }
>>>   }
>>>   
>>> +const struct hypfs_entry *hypfs_node_enter(const struct hypfs_entry *entry)
>>> +{
>>> +    return entry;
>>> +}
>>> +
>>> +void hypfs_node_exit(const struct hypfs_entry *entry)
>>> +{
>>> +}
>>> +
>>> +static int node_enter(const struct hypfs_entry *entry)
>>> +{
>>> +    const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered);
>>> +
>>> +    entry = entry->funcs->enter(entry);
>>> +    if ( IS_ERR(entry) )
>>> +        return PTR_ERR(entry);
>>> +
>>> +    ASSERT(entry);
>>> +    ASSERT(!*last || *last == entry->parent);
>>> +
>>> +    *last = entry;
>>> +
>>> +    return 0;
>>> +}
>>> +
>>> +static void node_exit(const struct hypfs_entry *entry)
>>> +{
>>> +    const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered);
>>> +
>>> +    if ( !*last )
>>> +        return;
>>
>> To my question regarding this in v2 you replied
>>
>> "I rechecked and have found that this was a remnant from an earlier
>>   variant. *last won't ever be NULL, so the if can be dropped (a NULL
>>   will be catched by the following ASSERT())."
>>
>> Now this if() is still there. Why?
> 
> I really thought I did remove the if(). Seems as if I did that on
> my test machine only and not in my git tree. Sorry for that.

So should I drop it while committing and adding
Reviewed-by: Jan Beulich <jbeulich@suse.com>
?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 16:41:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 16:41:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55351.96482 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpZrh-000760-0s; Wed, 16 Dec 2020 16:41:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55351.96482; Wed, 16 Dec 2020 16:41:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpZrg-00075t-Tn; Wed, 16 Dec 2020 16:41:24 +0000
Received: by outflank-mailman (input) for mailman id 55351;
 Wed, 16 Dec 2020 16:41:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DfXp=FU=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1kpZre-00075U-VN
 for xen-devel@lists.xen.org; Wed, 16 Dec 2020 16:41:23 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 34fbced4-cbe2-4e92-b3ed-9ecd582dc5f9;
 Wed, 16 Dec 2020 16:41:17 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1kpZrQ-0003P1-GX; Wed, 16 Dec 2020 16:41:08 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1kpZrQ-0007UB-Cc; Wed, 16 Dec 2020 16:41:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34fbced4-cbe2-4e92-b3ed-9ecd582dc5f9
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=6BMMB+qFzoJLjJCFfeVFrzbgQ8RHe3vg/jJ7YR5Cc6U=; b=LnnuTq9K4z7252HuRENdqubuT5
	quDmQgs9DLR4N/sXXTRkccH2zg3GEAiIjJLRb6ktChh+tFBnUv9hYg4FuIm1MuoM65Tf2JsV4prDQ
	NN/OaCWw4+2hYBzDVgS6/8cYw5/Ee/pbidtnZ6/b0eLeYfNZahdlh3dE5D0TnQBe4FwI=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 322 v5 (CVE-2020-29481) - Xenstore: new
 domains inheriting existing node permissions
Message-Id: <E1kpZrQ-0007UB-Cc@xenbits.xenproject.org>
Date: Wed, 16 Dec 2020 16:41:08 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2020-29481 / XSA-322
                               version 5

       Xenstore: new domains inheriting existing node permissions

UPDATES IN VERSION 5
====================

Fix the deployment information to refer to xsa322-4.12-c.patch, not
the nonexistent file xsa322-4.13-c.patch.

ISSUE DESCRIPTION
=================

Access rights of Xenstore nodes are per domid.  Unfortunately,
existing granted access rights are not removed when a domain is
destroyed.  This means that a new domain created with the same domid
will inherit the access rights to Xenstore nodes from the previous
domain(s) with the same domid.

All Xenstore entries of a guest below /local/domain/<domid> are
deleted by Xen tools when a guest is destroyed.  Therefore only
entries belonging to other guests, referring to the deleted guests,
are potentially affected.

IMPACT
======

In some circumstances, it might be possible for a new guest domain to
access resources belonging to a previous domain.  The impact would
depend on the software in use and the configuration, but might include
any of denial of service, information leak, or privilege escalation.

VULNERABLE SYSTEMS
==================

All versions of Xen are in principle vulnerable.

Both Xenstore implementations (C and OCaml) are vulnerable.

Vulnerable systems are only those running software where one domain is
granted access to another's xenstore nodes, without complete cleanup
of those nodes on domain destruction.  No such software is enabled in
default configurations of upstream Xen.

Therefore upstream Xen, without additional management software (in
host or guest(s)), is not vulnerable in the default (host and guest)
configuration.

MITIGATION
==========

There is no mitigation available.

CREDITS
=======

This issue was discovered by Jürgen Groß of SUSE.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa322-c.patch             xen-unstable        [C xenstored]
xsa322-4.14-c.patch        Xen 4.14 - 4.13     [C xenstored]
xsa322-4.12-c.patch        Xen 4.12 - 4.10     [C xenstored]

xsa322-o.patch             xen-unstable - 4.12 [OCaml xenstored]
xsa322-4.11-o.patch        Xen 4.11 - 4.10     [OCaml xenstored]

$ sha256sum xsa322*
89e40422e41b8b2f8926ee5081da0e494e8e7312091151d31bfaa29eefa9b669  xsa322.meta
0cfeb0f8dd1c95e628e06f3402cbb5fb58c0972d6616958f5a0fbed59813dd6c  xsa322-4.11-o.patch
d4f9362b6f7ebfb7349849d4449f70b6004779c35238dc628736c541fe9e4279  xsa322-4.12-c.patch
8efe8fc39bf91a1c0cbdbf572deb2592930b757725951f4fdf0c387904ce4293  xsa322-4.14-c.patch
9275c7c36127f0e9719d4cb3162e39ce9233b2b55e9f9307b4c4d370a7b636a3  xsa322-c.patch
42c0818ceff11792517530237c4972967099c9828b4e2b5ec4bf6bfc1825cd7c  xsa322-o.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decision-making.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl/aOI4MHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZHGIH/iFQ2CLj2l+CjWu0hevHuUzikJ93X5sa/Yu7DhLg
oa/JCPdiUotBSorMgZedU1aYKPLBZC7vhFQD+q4IUIQsA9sEB6Mux2C9Zs7ZXnOI
i635ZtaWpJnzX3xez5vt5AjIFQXyFZzrXhmbNB9tVFiRgA/cmqikbIhF/tVGcx1H
XtqT0hIcQpiH2GIAuslKHtfV9E9w6Uiye8kcMmm/8nUaNeHs3SGUvHceg9xBbT5M
MTarsmBvk8Usp5jtYqPkrE4WsmtL3HprXv5+U8yPzDia6/CqAF6ekMtpmGEwvwTK
YtYmbLmBRSVYw6/nXPA1AczLkvb12QWrk8eRZhsFpfgxbu4=
=gyZV
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa322.meta"
Content-Disposition: attachment; filename="xsa322.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzMjIsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIs
CiAgICAiNC4xMSIsCiAgICAiNC4xMCIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjEwIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICIxZDcyZDk5MTVlZGZmMGRkNDFmNjAxYmJiMGIxZjgzYzAy
ZmYxNjg5IiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
NTMsCiAgICAgICAgICAgIDExNQogICAgICAgICAgXSwKICAgICAgICAgICJQ
YXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzIyLTQuMTItYy5wYXRjaCIs
CiAgICAgICAgICAgICJ4c2EzMjItNC4xMS1vLnBhdGNoIgogICAgICAgICAg
XQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAgICI0LjExIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICI0MWE4MjJjMzkyNjM1MGYyNjkxN2Q3NDdjOGRmZWQxYzQ0
YTJjZjQyIiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
NTMsCiAgICAgICAgICAgIDExNQogICAgICAgICAgXSwKICAgICAgICAgICJQ
YXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzIyLTQuMTItYy5wYXRjaCIs
CiAgICAgICAgICAgICJ4c2EzMjItNC4xMS1vLnBhdGNoIgogICAgICAgICAg
XQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAgICI0LjEyIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICI4MTQ1ZDM4YjQ4MDA5MjU1YTMyYWI4N2EwMmU0ODFjZDA5
YzgxMWY5IiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
NTMsCiAgICAgICAgICAgIDExNQogICAgICAgICAgXSwKICAgICAgICAgICJQ
YXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzIyLTQuMTItYy5wYXRjaCIs
CiAgICAgICAgICAgICJ4c2EzMjItby5wYXRjaCIKICAgICAgICAgIF0KICAg
ICAgICB9CiAgICAgIH0KICAgIH0sCiAgICAiNC4xMyI6IHsKICAgICAgIlJl
Y2lwZXMiOiB7CiAgICAgICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVS
ZWYiOiAiYjUzMDIyNzNlMmM1MTk0MDE3MjQwMDQ4NjY0NDYzNmYyZjRmYzY0
YSIsCiAgICAgICAgICAiUHJlcmVxcyI6IFsKICAgICAgICAgICAgMzUzLAog
ICAgICAgICAgICAxMTUKICAgICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hl
cyI6IFsKICAgICAgICAgICAgInhzYTMyMi00LjE0LWMucGF0Y2giLAogICAg
ICAgICAgICAieHNhMzIyLW8ucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAg
fQogICAgICB9CiAgICB9LAogICAgIjQuMTQiOiB7CiAgICAgICJSZWNpcGVz
IjogewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAiU3RhYmxlUmVmIjog
IjFkMWQxZjUzOTE5NzY0NTZhNzlkYWFjMGRjZmU3MTU3ZGExZTU0ZjciLAog
ICAgICAgICAgIlByZXJlcXMiOiBbCiAgICAgICAgICAgIDM1MywKICAgICAg
ICAgICAgMTE1CiAgICAgICAgICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBb
CiAgICAgICAgICAgICJ4c2EzMjItNC4xNC1jLnBhdGNoIiwKICAgICAgICAg
ICAgInhzYTMyMi1vLnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAg
ICAgfQogICAgfSwKICAgICJtYXN0ZXIiOiB7CiAgICAgICJSZWNpcGVzIjog
ewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAiU3RhYmxlUmVmIjogIjNh
ZTQ2OWFmOGU2ODBkZjMxZWVjZDBhMmFjNmE4M2I1OGFkN2NlNTMiLAogICAg
ICAgICAgIlByZXJlcXMiOiBbCiAgICAgICAgICAgIDM1MywKICAgICAgICAg
ICAgMTE1CiAgICAgICAgICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAg
ICAgICAgICAgICJ4c2EzMjItYy5wYXRjaCIsCiAgICAgICAgICAgICJ4c2Ez
MjItby5wYXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAg
IH0KICB9Cn0=

--=separator
Content-Type: application/octet-stream; name="xsa322-4.11-o.patch"
Content-Disposition: attachment; filename="xsa322-4.11-o.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogY2xlYW4gdXAgcGVybWlzc2lvbnMgZm9yIGRlYWQgZG9tYWlu
cwpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVR5cGU6IHRleHQvcGxhaW47
IGNoYXJzZXQ9VVRGLTgKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogOGJp
dAoKZG9tYWluIGlkcyBhcmUgcHJvbmUgdG8gd3JhcHBpbmcgKDE1LWJpdHMp
LCBhbmQgd2l0aCBzdWZmaWNpZW50IG51bWJlcgpvZiBWTXMgaW4gYSByZWJv
b3QgbG9vcCBpdCBpcyBwb3NzaWJsZSB0byB0cmlnZ2VyIGl0LiAgWGVuc3Rv
cmUgZW50cmllcwptYXkgbGluZ2VyIGFmdGVyIGEgZG9tYWluIGRpZXMsIHVu
dGlsIGEgdG9vbHN0YWNrIGNsZWFucyBpdCB1cC4gRHVyaW5nCnRoaXMgdGlt
ZSB0aGVyZSBpcyBhIHdpbmRvdyB3aGVyZSBhIHdyYXBwZWQgZG9taWQgY291
bGQgYWNjZXNzIHRoZXNlCnhlbnN0b3JlIGtleXMgKHRoYXQgYmVsb25nZWQg
dG8gYW5vdGhlciBWTSkuCgpUbyBwcmV2ZW50IHRoaXMgZG8gYSBjbGVhbnVw
IHdoZW4gYSBkb21haW4gZGllczoKICogd2FsayB0aGUgZW50aXJlIHhlbnN0
b3JlIHRyZWUgYW5kIHVwZGF0ZSBwZXJtaXNzaW9ucyBmb3IgYWxsIG5vZGVz
CiAgICogaWYgdGhlIGRlYWQgZG9tYWluIGhhZCBhbiBBQ0wgZW50cnk6IHJl
bW92ZSBpdAogICAqIGlmIHRoZSBkZWFkIGRvbWFpbiB3YXMgdGhlIG93bmVy
OiBjaGFuZ2UgdGhlIG93bmVyIHRvIERvbTAKClRoaXMgaXMgZG9uZSB3aXRo
b3V0IHF1b3RhIGNoZWNrcyBvciBhIHRyYW5zYWN0aW9uLiBRdW90YSBjaGVj
a3Mgd291bGQKYmUgYSBuby1vcCAoZWl0aGVyIHRoZSBkb21haW4gaXMgZGVh
ZCwgb3IgaXQgaXMgRG9tMCB3aGVyZSB0aGV5IGFyZSBub3QKZW5mb3JjZWQp
LiAgVHJhbnNhY3Rpb25zIGFyZSBub3QgbmVlZGVkLCBiZWNhdXNlIHRoaXMg
aXMgYWxsIGRvbmUKYXRvbWljYWxseSBieSBveGVuc3RvcmVkJ3Mgc2luZ2xl
IHRocmVhZC4KClRoZSB4ZW5zdG9yZSBlbnRyaWVzIG93bmVkIGJ5IHRoZSBk
ZWFkIGRvbWFpbiBhcmUgbm90IGRlbGV0ZWQsIGJlY2F1c2UKdGhhdCBjb3Vs
ZCBjb25mdXNlIGEgdG9vbHN0YWNrIC8gYmFja2VuZHMgdGhhdCBhcmUgc3Rp
bGwgYm91bmQgdG8gaXQKKG9yIGdlbmVyYXRlIHVuZXhwZWN0ZWQgd2F0Y2gg
ZXZlbnRzKS4gSXQgaXMgdGhlIHJlc3BvbnNpYmlsaXR5IG9mIGEKdG9vbHN0
YWNrIHRvIHJlbW92ZSB0aGUgeGVuc3RvcmUgZW50cmllcyB0aGVtc2VsdmVz
LgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0zMjIuCgpTaWduZWQtb2ZmLWJ5OiBF
ZHdpbiBUw7Zyw7ZrIDxlZHZpbi50b3Jva0BjaXRyaXguY29tPgpBY2tlZC1i
eTogQ2hyaXN0aWFuIExpbmRpZyA8Y2hyaXN0aWFuLmxpbmRpZ0BjaXRyaXgu
Y29tPgoKZGlmZiAtLWdpdCBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wZXJt
cy5tbCBiL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wZXJtcy5tbAppbmRleCBl
ZTdmZWU2YmRhLi5lOGExNjIyMWY4IDEwMDY0NAotLS0gYS90b29scy9vY2Ft
bC94ZW5zdG9yZWQvcGVybXMubWwKKysrIGIvdG9vbHMvb2NhbWwveGVuc3Rv
cmVkL3Blcm1zLm1sCkBAIC01OCw2ICs1OCwxNSBAQCBsZXQgZ2V0X290aGVy
IHBlcm1zID0gcGVybXMub3RoZXIKIGxldCBnZXRfYWNsIHBlcm1zID0gcGVy
bXMuYWNsCiBsZXQgZ2V0X293bmVyIHBlcm0gPSBwZXJtLm93bmVyCiAKKygq
KiBbcmVtb3RlX2RvbWlkIH5kb21pZCBwZXJtXSByZW1vdmVzIGFsbCBBQ0xz
IGZvciBbZG9taWRdIGZyb20gcGVybS4KKyogSWYgW2RvbWlkXSB3YXMgdGhl
IG93bmVyIHRoZW4gaXQgaXMgY2hhbmdlZCB0byBEb20wLgorKiBUaGlzIGlz
IHVzZWQgZm9yIGNsZWFuaW5nIHVwIGFmdGVyIGRlYWQgZG9tYWlucy4KKyog
KikKK2xldCByZW1vdmVfZG9taWQgfmRvbWlkIHBlcm0gPQorCWxldCBhY2wg
PSBMaXN0LmZpbHRlciAoZnVuIChhY2xfZG9taWQsIF8pIC0+IGFjbF9kb21p
ZCA8PiBkb21pZCkgcGVybS5hY2wgaW4KKwlsZXQgb3duZXIgPSBpZiBwZXJt
Lm93bmVyID0gZG9taWQgdGhlbiAwIGVsc2UgcGVybS5vd25lciBpbgorCXsg
cGVybSB3aXRoIGFjbDsgb3duZXIgfQorCiBsZXQgZGVmYXVsdDAgPSBjcmVh
dGUgMCBOT05FIFtdCiAKIGxldCBwZXJtX29mX3N0cmluZyBzID0KZGlmZiAt
LWdpdCBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wcm9jZXNzLm1sIGIvdG9v
bHMvb2NhbWwveGVuc3RvcmVkL3Byb2Nlc3MubWwKaW5kZXggM2NkMDA5N2Ri
OS4uNmE5OThmODc2NCAxMDA2NDQKLS0tIGEvdG9vbHMvb2NhbWwveGVuc3Rv
cmVkL3Byb2Nlc3MubWwKKysrIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3By
b2Nlc3MubWwKQEAgLTQzNyw2ICs0MzcsNyBAQCBsZXQgZG9fcmVsZWFzZSBj
b24gdCBkb21haW5zIGNvbnMgZGF0YSA9CiAJbGV0IGZpcmVfc3BlY193YXRj
aGVzID0gRG9tYWlucy5leGlzdCBkb21haW5zIGRvbWlkIGluCiAJRG9tYWlu
cy5kZWwgZG9tYWlucyBkb21pZDsKIAlDb25uZWN0aW9ucy5kZWxfZG9tYWlu
IGNvbnMgZG9taWQ7CisJU3RvcmUucmVzZXRfcGVybWlzc2lvbnMgKFRyYW5z
YWN0aW9uLmdldF9zdG9yZSB0KSBkb21pZDsKIAlpZiBmaXJlX3NwZWNfd2F0
Y2hlcyAKIAl0aGVuIENvbm5lY3Rpb25zLmZpcmVfc3BlY193YXRjaGVzIChU
cmFuc2FjdGlvbi5nZXRfcm9vdCB0KSBjb25zIFN0b3JlLlBhdGgucmVsZWFz
ZV9kb21haW4KIAllbHNlIHJhaXNlIEludmFsaWRfQ21kX0FyZ3MKZGlmZiAt
LWdpdCBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9zdG9yZS5tbCBiL3Rvb2xz
L29jYW1sL3hlbnN0b3JlZC9zdG9yZS5tbAppbmRleCAwY2U2ZjY4ZThkLi4x
MDFjMDk0NzE1IDEwMDY0NAotLS0gYS90b29scy9vY2FtbC94ZW5zdG9yZWQv
c3RvcmUubWwKKysrIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3N0b3JlLm1s
CkBAIC04OSw2ICs4OSwxMyBAQCBsZXQgY2hlY2tfb3duZXIgbm9kZSBjb25u
ZWN0aW9uID0KIAogbGV0IHJlYyByZWN1cnNlIGZjdCBub2RlID0gZmN0IG5v
ZGU7IExpc3QuaXRlciAocmVjdXJzZSBmY3QpIG5vZGUuY2hpbGRyZW4KIAor
KCoqIFtyZWN1cnNlX21hcCBmIHRyZWVdIGFwcGxpZXMgW2ZdIG9uIGVhY2gg
bm9kZSBpbiB0aGUgdHJlZSByZWN1cnNpdmVseSAqKQorbGV0IHJlY3Vyc2Vf
bWFwIGYgPQorCWxldCByZWMgd2FsayBub2RlID0KKwkJZiB7IG5vZGUgd2l0
aCBjaGlsZHJlbiA9IExpc3QucmV2X21hcCB3YWxrIG5vZGUuY2hpbGRyZW4g
fD4gTGlzdC5yZXYgfQorCWluCisJd2FsaworCiBsZXQgdW5wYWNrIG5vZGUg
PSAoU3ltYm9sLnRvX3N0cmluZyBub2RlLm5hbWUsIG5vZGUucGVybXMsIG5v
ZGUudmFsdWUpCiAKIGVuZApAQCAtNDA1LDYgKzQxMiwxNSBAQCBsZXQgc2V0
cGVybXMgc3RvcmUgcGVybSBwYXRoIG5wZXJtcyA9CiAJCVF1b3RhLmRlbF9l
bnRyeSBzdG9yZS5xdW90YSBvbGRfb3duZXI7CiAJCVF1b3RhLmFkZF9lbnRy
eSBzdG9yZS5xdW90YSBuZXdfb3duZXIKIAorbGV0IHJlc2V0X3Blcm1pc3Np
b25zIHN0b3JlIGRvbWlkID0KKwlMb2dnaW5nLmluZm8gInN0b3JlfG5vZGUi
ICJDbGVhbmluZyB1cCB4ZW5zdG9yZSBBQ0xzIGZvciBkb21pZCAlZCIgZG9t
aWQ7CisJc3RvcmUucm9vdCA8LSBOb2RlLnJlY3Vyc2VfbWFwIChmdW4gbm9k
ZSAtPgorCQlsZXQgcGVybXMgPSBQZXJtcy5Ob2RlLnJlbW92ZV9kb21pZCB+
ZG9taWQgbm9kZS5wZXJtcyBpbgorCQlpZiBwZXJtcyA8PiBub2RlLnBlcm1z
IHRoZW4KKwkJCUxvZ2dpbmcuZGVidWcgInN0b3JlfG5vZGUiICJDaGFuZ2Vk
IHBlcm1pc3Npb25zIGZvciBub2RlICVzIiAoTm9kZS5nZXRfbmFtZSBub2Rl
KTsKKwkJeyBub2RlIHdpdGggcGVybXMgfQorCSkgc3RvcmUucm9vdAorCiB0
eXBlIG9wcyA9IHsKIAlzdG9yZTogdDsKIAl3cml0ZTogUGF0aC50IC0+IHN0
cmluZyAtPiB1bml0OwpkaWZmIC0tZ2l0IGEvdG9vbHMvb2NhbWwveGVuc3Rv
cmVkL3hlbnN0b3JlZC5tbCBiL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC94ZW5z
dG9yZWQubWwKaW5kZXggMzBmYzg3NDMyNy4uMTgzZGQyNzU0YiAxMDA2NDQK
LS0tIGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3hlbnN0b3JlZC5tbAorKysg
Yi90b29scy9vY2FtbC94ZW5zdG9yZWQveGVuc3RvcmVkLm1sCkBAIC0zNDAs
NiArMzQwLDcgQEAgbGV0IF8gPQogCQkJZmluYWxseSAoZnVuICgpIC0+CiAJ
CQkJaWYgU29tZSBwb3J0ID0gZXZlbnRjaG4uRXZlbnQudmlycV9wb3J0IHRo
ZW4gKAogCQkJCQlsZXQgKG5vdGlmeSwgZGVhZGRvbSkgPSBEb21haW5zLmNs
ZWFudXAgZG9tYWlucyBpbgorCQkJCQlMaXN0Lml0ZXIgKFN0b3JlLnJlc2V0
X3Blcm1pc3Npb25zIHN0b3JlKSBkZWFkZG9tOwogCQkJCQlMaXN0Lml0ZXIg
KENvbm5lY3Rpb25zLmRlbF9kb21haW4gY29ucykgZGVhZGRvbTsKIAkJCQkJ
aWYgZGVhZGRvbSA8PiBbXSB8fCBub3RpZnkgdGhlbgogCQkJCQkJQ29ubmVj
dGlvbnMuZmlyZV9zcGVjX3dhdGNoZXMK

--=separator
Content-Type: application/octet-stream; name="xsa322-4.12-c.patch"
Content-Disposition: attachment; filename="xsa322-4.12-c.patch"
Content-Transfer-Encoding: base64

RnJvbTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpTdWJqZWN0
OiB0b29scy94ZW5zdG9yZTogcmV2b2tlIGFjY2VzcyByaWdodHMgZm9yIHJl
bW92ZWQgZG9tYWlucwoKQWNjZXNzIHJpZ2h0cyBvZiBYZW5zdG9yZSBub2Rl
cyBhcmUgcGVyIGRvbWlkLiBVbmZvcnR1bmF0ZWx5IGV4aXN0aW5nCmdyYW50
ZWQgYWNjZXNzIHJpZ2h0cyBhcmUgbm90IHJlbW92ZWQgd2hlbiBhIGRvbWFp
biBpcyBiZWluZyBkZXN0cm95ZWQuClRoaXMgbWVhbnMgdGhhdCBhIG5ldyBk
b21haW4gY3JlYXRlZCB3aXRoIHRoZSBzYW1lIGRvbWlkIHdpbGwgaW5oZXJp
dAp0aGUgYWNjZXNzIHJpZ2h0cyB0byBYZW5zdG9yZSBub2RlcyBmcm9tIHRo
ZSBwcmV2aW91cyBkb21haW4ocykgd2l0aAp0aGUgc2FtZSBkb21pZC4KClRo
aXMgY2FuIGJlIGF2b2lkZWQgYnkgYWRkaW5nIGEgZ2VuZXJhdGlvbiBjb3Vu
dGVyIHRvIGVhY2ggZG9tYWluLgpUaGUgZ2VuZXJhdGlvbiBjb3VudGVyIG9m
IHRoZSBkb21haW4gaXMgc2V0IHRvIHRoZSBnbG9iYWwgZ2VuZXJhdGlvbgpj
b3VudGVyIHdoZW4gYSBkb21haW4gc3RydWN0dXJlIGlzIGJlaW5nIGFsbG9j
YXRlZC4gV2hlbiByZWFkaW5nIG9yCndyaXRpbmcgYSBub2RlIGFsbCBwZXJt
aXNzaW9ucyBvZiBkb21haW5zIHdoaWNoIGFyZSB5b3VuZ2VyIHRoYW4gdGhl
Cm5vZGUgaXRzZWxmIGFyZSBkcm9wcGVkLiBUaGlzIGlzIGRvbmUgYnkgZmxh
Z2dpbmcgdGhlIHJlbGF0ZWQgZW50cnkKYXMgaW52YWxpZCBpbiBvcmRlciB0
byBhdm9pZCBtb2RpZnlpbmcgcGVybWlzc2lvbnMgaW4gYSB3YXkgdGhlIHVz
ZXIKY291bGQgZGV0ZWN0LgoKQSBzcGVjaWFsIGNhc2UgaGFzIHRvIGJlIGNv
bnNpZGVyZWQ6IGZvciBhIG5ldyBkb21haW4gdGhlIGZpcnN0ClhlbnN0b3Jl
IGVudHJpZXMgYXJlIGFscmVhZHkgd3JpdHRlbiBiZWZvcmUgdGhlIGRvbWFp
biBpcyBvZmZpY2lhbGx5CmludHJvZHVjZWQgaW4gWGVuc3RvcmUuIEluIG9y
ZGVyIG5vdCB0byBkcm9wIHRoZSBwZXJtaXNzaW9ucyBmb3IgdGhlCm5ldyBk
b21haW4gYSBkb21haW4gc3RydWN0IGlzIGFsbG9jYXRlZCBldmVuIGJlZm9y
ZSBpbnRyb2R1Y3Rpb24gaWYKdGhlIGh5cGVydmlzb3IgaXMgYXdhcmUgb2Yg
dGhlIGRvbWFpbi4gVGhpcyByZXF1aXJlcyBhZGRpbmcgYW5vdGhlcgpib29s
ICJpbnRyb2R1Y2VkIiB0byBzdHJ1Y3QgZG9tYWluIGluIHhlbnN0b3JlZC4g
SW4gb3JkZXIgdG8gYXZvaWQKYWRkaXRpb25hbCBwYWRkaW5nIGhvbGVzIGNv
bnZlcnQgdGhlIHNodXRkb3duIGZsYWcgdG8gYm9vbCwgdG9vLgoKQXMgdmVy
aWZ5aW5nIHBlcm1pc3Npb25zIGhhcyBpdHMgcHJpY2UgcmVnYXJkaW5nIHJ1
bnRpbWUgYWRkIGEgbmV3CnF1b3RhIGZvciBsaW1pdGluZyB0aGUgbnVtYmVy
IG9mIHBlcm1pc3Npb25zIGFuIHVucHJpdmlsZWdlZCBkb21haW4KY2FuIHNl
dCBmb3IgYSBub2RlLiBUaGUgZGVmYXVsdCBmb3IgdGhhdCBuZXcgcXVvdGEg
aXMgNS4KClRoaXMgaXMgcGFydCBvZiBYU0EtMzIyLgoKU2lnbmVkLW9mZi1i
eTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpSZXZpZXdlZC1i
eTogUGF1bCBEdXJyYW50IDxwYXVsQHhlbi5vcmc+CkFja2VkLWJ5OiBKdWxp
ZW4gR3JhbGwgPGp1bGllbkBhbWF6b24uY29tPgoKZGlmZiAtLWdpdCBhL3Rv
b2xzL3hlbnN0b3JlL2luY2x1ZGUveGVuc3RvcmVfbGliLmggYi90b29scy94
ZW5zdG9yZS9pbmNsdWRlL3hlbnN0b3JlX2xpYi5oCmluZGV4IDBmZmJhZTll
YjU3NC4uNGM5YjZkMTY4NThkIDEwMDY0NAotLS0gYS90b29scy94ZW5zdG9y
ZS9pbmNsdWRlL3hlbnN0b3JlX2xpYi5oCisrKyBiL3Rvb2xzL3hlbnN0b3Jl
L2luY2x1ZGUveGVuc3RvcmVfbGliLmgKQEAgLTM0LDYgKzM0LDcgQEAgZW51
bSB4c19wZXJtX3R5cGUgewogCS8qIEludGVybmFsIHVzZS4gKi8KIAlYU19Q
RVJNX0VOT0VOVF9PSyA9IDQsCiAJWFNfUEVSTV9PV05FUiA9IDgsCisJWFNf
UEVSTV9JR05PUkUgPSAxNiwKIH07CiAKIHN0cnVjdCB4c19wZXJtaXNzaW9u
cwpkaWZmIC0tZ2l0IGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUu
YyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMKaW5kZXggMmE4
NmM0YWE1YmNlLi40ZmJlNWM3NTljMWIgMTAwNjQ0Ci0tLSBhL3Rvb2xzL3hl
bnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMKKysrIGIvdG9vbHMveGVuc3RvcmUv
eGVuc3RvcmVkX2NvcmUuYwpAQCAtMTAxLDYgKzEwMSw3IEBAIGludCBxdW90
YV9uYl9lbnRyeV9wZXJfZG9tYWluID0gMTAwMDsKIGludCBxdW90YV9uYl93
YXRjaF9wZXJfZG9tYWluID0gMTI4OwogaW50IHF1b3RhX21heF9lbnRyeV9z
aXplID0gMjA0ODsgLyogMksgKi8KIGludCBxdW90YV9tYXhfdHJhbnNhY3Rp
b24gPSAxMDsKK2ludCBxdW90YV9uYl9wZXJtc19wZXJfbm9kZSA9IDU7CiAK
IHZvaWQgdHJhY2UoY29uc3QgY2hhciAqZm10LCAuLi4pCiB7CkBAIC00MDcs
OCArNDA4LDEzIEBAIHN0cnVjdCBub2RlICpyZWFkX25vZGUoc3RydWN0IGNv
bm5lY3Rpb24gKmNvbm4sIGNvbnN0IHZvaWQgKmN0eCwKIAogCS8qIFBlcm1p
c3Npb25zIGFyZSBzdHJ1Y3QgeHNfcGVybWlzc2lvbnMuICovCiAJbm9kZS0+
cGVybXMucCA9IGhkci0+cGVybXM7CisJaWYgKGRvbWFpbl9hZGp1c3Rfbm9k
ZV9wZXJtcyhub2RlKSkgeworCQl0YWxsb2NfZnJlZShub2RlKTsKKwkJcmV0
dXJuIE5VTEw7CisJfQorCiAJLyogRGF0YSBpcyBiaW5hcnkgYmxvYiAodXN1
YWxseSBhc2NpaSwgbm8gbnVsKS4gKi8KLQlub2RlLT5kYXRhID0gbm9kZS0+
cGVybXMucCArIG5vZGUtPnBlcm1zLm51bTsKKwlub2RlLT5kYXRhID0gbm9k
ZS0+cGVybXMucCArIGhkci0+bnVtX3Blcm1zOwogCS8qIENoaWxkcmVuIGlz
IHN0cmluZ3MsIG51bCBzZXBhcmF0ZWQuICovCiAJbm9kZS0+Y2hpbGRyZW4g
PSBub2RlLT5kYXRhICsgbm9kZS0+ZGF0YWxlbjsKIApAQCAtNDI0LDYgKzQz
MCw5IEBAIGludCB3cml0ZV9ub2RlX3JhdyhzdHJ1Y3QgY29ubmVjdGlvbiAq
Y29ubiwgVERCX0RBVEEgKmtleSwgc3RydWN0IG5vZGUgKm5vZGUsCiAJdm9p
ZCAqcDsKIAlzdHJ1Y3QgeHNfdGRiX3JlY29yZF9oZHIgKmhkcjsKIAorCWlm
IChkb21haW5fYWRqdXN0X25vZGVfcGVybXMobm9kZSkpCisJCXJldHVybiBl
cnJubzsKKwogCWRhdGEuZHNpemUgPSBzaXplb2YoKmhkcikKIAkJKyBub2Rl
LT5wZXJtcy5udW0gKiBzaXplb2Yobm9kZS0+cGVybXMucFswXSkKIAkJKyBu
b2RlLT5kYXRhbGVuICsgbm9kZS0+Y2hpbGRsZW47CkBAIC00ODMsOCArNDky
LDkgQEAgZW51bSB4c19wZXJtX3R5cGUgcGVybV9mb3JfY29ubihzdHJ1Y3Qg
Y29ubmVjdGlvbiAqY29ubiwKIAkJcmV0dXJuIChYU19QRVJNX1JFQUR8WFNf
UEVSTV9XUklURXxYU19QRVJNX09XTkVSKSAmIG1hc2s7CiAKIAlmb3IgKGkg
PSAxOyBpIDwgcGVybXMtPm51bTsgaSsrKQotCQlpZiAocGVybXMtPnBbaV0u
aWQgPT0gY29ubi0+aWQKLSAgICAgICAgICAgICAgICAgICAgICAgIHx8IChj
b25uLT50YXJnZXQgJiYgcGVybXMtPnBbaV0uaWQgPT0gY29ubi0+dGFyZ2V0
LT5pZCkpCisJCWlmICghKHBlcm1zLT5wW2ldLnBlcm1zICYgWFNfUEVSTV9J
R05PUkUpICYmCisJCSAgICAocGVybXMtPnBbaV0uaWQgPT0gY29ubi0+aWQg
fHwKKwkJICAgICAoY29ubi0+dGFyZ2V0ICYmIHBlcm1zLT5wW2ldLmlkID09
IGNvbm4tPnRhcmdldC0+aWQpKSkKIAkJCXJldHVybiBwZXJtcy0+cFtpXS5w
ZXJtcyAmIG1hc2s7CiAKIAlyZXR1cm4gcGVybXMtPnBbMF0ucGVybXMgJiBt
YXNrOwpAQCAtMTI0Niw4ICsxMjU2LDEyIEBAIHN0YXRpYyBpbnQgZG9fc2V0
X3Blcm1zKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3QgYnVmZmVy
ZWRfZGF0YSAqaW4pCiAJaWYgKHBlcm1zLm51bSA8IDIpCiAJCXJldHVybiBF
SU5WQUw7CiAKLQlwZXJtc3RyID0gaW4tPmJ1ZmZlciArIHN0cmxlbihpbi0+
YnVmZmVyKSArIDE7CiAJcGVybXMubnVtLS07CisJaWYgKGRvbWFpbl9pc191
bnByaXZpbGVnZWQoY29ubikgJiYKKwkgICAgcGVybXMubnVtID4gcXVvdGFf
bmJfcGVybXNfcGVyX25vZGUpCisJCXJldHVybiBFTk9TUEM7CisKKwlwZXJt
c3RyID0gaW4tPmJ1ZmZlciArIHN0cmxlbihpbi0+YnVmZmVyKSArIDE7CiAK
IAlwZXJtcy5wID0gdGFsbG9jX2FycmF5KGluLCBzdHJ1Y3QgeHNfcGVybWlz
c2lvbnMsIHBlcm1zLm51bSk7CiAJaWYgKCFwZXJtcy5wKQpAQCAtMTkxOSw2
ICsxOTMzLDcgQEAgc3RhdGljIHZvaWQgdXNhZ2Uodm9pZCkKICIgIC1TLCAt
LWVudHJ5LXNpemUgPHNpemU+IGxpbWl0IHRoZSBzaXplIG9mIGVudHJ5IHBl
ciBkb21haW4sIGFuZFxuIgogIiAgLVcsIC0td2F0Y2gtbmIgPG5iPiAgICAg
bGltaXQgdGhlIG51bWJlciBvZiB3YXRjaGVzIHBlciBkb21haW4sXG4iCiAi
ICAtdCwgLS10cmFuc2FjdGlvbiA8bmI+ICBsaW1pdCB0aGUgbnVtYmVyIG9m
IHRyYW5zYWN0aW9uIGFsbG93ZWQgcGVyIGRvbWFpbixcbiIKKyIgIC1BLCAt
LXBlcm0tbmIgPG5iPiAgICAgIGxpbWl0IHRoZSBudW1iZXIgb2YgcGVybWlz
c2lvbnMgcGVyIG5vZGUsXG4iCiAiICAtUiwgLS1uby1yZWNvdmVyeSAgICAg
ICB0byByZXF1ZXN0IHRoYXQgbm8gcmVjb3Zlcnkgc2hvdWxkIGJlIGF0dGVt
cHRlZCB3aGVuXG4iCiAiICAgICAgICAgICAgICAgICAgICAgICAgICB0aGUg
c3RvcmUgaXMgY29ycnVwdGVkIChkZWJ1ZyBvbmx5KSxcbiIKICIgIC1JLCAt
LWludGVybmFsLWRiICAgICAgIHN0b3JlIGRhdGFiYXNlIGluIG1lbW9yeSwg
bm90IG9uIGRpc2tcbiIKQEAgLTE5MzksNiArMTk1NCw3IEBAIHN0YXRpYyBz
dHJ1Y3Qgb3B0aW9uIG9wdGlvbnNbXSA9IHsKIAl7ICJlbnRyeS1zaXplIiwg
MSwgTlVMTCwgJ1MnIH0sCiAJeyAidHJhY2UtZmlsZSIsIDEsIE5VTEwsICdU
JyB9LAogCXsgInRyYW5zYWN0aW9uIiwgMSwgTlVMTCwgJ3QnIH0sCisJeyAi
cGVybS1uYiIsIDEsIE5VTEwsICdBJyB9LAogCXsgIm5vLXJlY292ZXJ5Iiwg
MCwgTlVMTCwgJ1InIH0sCiAJeyAiaW50ZXJuYWwtZGIiLCAwLCBOVUxMLCAn
SScgfSwKIAl7ICJ2ZXJib3NlIiwgMCwgTlVMTCwgJ1YnIH0sCkBAIC0xOTYx
LDcgKzE5NzcsNyBAQCBpbnQgbWFpbihpbnQgYXJnYywgY2hhciAqYXJndltd
KQogCWludCB0aW1lb3V0OwogCiAKLQl3aGlsZSAoKG9wdCA9IGdldG9wdF9s
b25nKGFyZ2MsIGFyZ3YsICJERTpGOkhOUFM6dDpUOlJWVzoiLCBvcHRpb25z
LAorCXdoaWxlICgob3B0ID0gZ2V0b3B0X2xvbmcoYXJnYywgYXJndiwgIkRF
OkY6SE5QUzp0OkE6VDpSVlc6Iiwgb3B0aW9ucywKIAkJCQkgIE5VTEwpKSAh
PSAtMSkgewogCQlzd2l0Y2ggKG9wdCkgewogCQljYXNlICdEJzoKQEAgLTIw
MDMsNiArMjAxOSw5IEBAIGludCBtYWluKGludCBhcmdjLCBjaGFyICphcmd2
W10pCiAJCWNhc2UgJ1cnOgogCQkJcXVvdGFfbmJfd2F0Y2hfcGVyX2RvbWFp
biA9IHN0cnRvbChvcHRhcmcsIE5VTEwsIDEwKTsKIAkJCWJyZWFrOworCQlj
YXNlICdBJzoKKwkJCXF1b3RhX25iX3Blcm1zX3Blcl9ub2RlID0gc3RydG9s
KG9wdGFyZywgTlVMTCwgMTApOworCQkJYnJlYWs7CiAJCWNhc2UgJ2UnOgog
CQkJZG9tMF9ldmVudCA9IHN0cnRvbChvcHRhcmcsIE5VTEwsIDEwKTsKIAkJ
CWJyZWFrOwpkaWZmIC0tZ2l0IGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVk
X2RvbWFpbi5jIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2RvbWFpbi5j
CmluZGV4IDBiMmY0OWFjN2Q0Yy4uZjVlN2FmNDZlOGFhIDEwMDY0NAotLS0g
YS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfZG9tYWluLmMKKysrIGIvdG9v
bHMveGVuc3RvcmUveGVuc3RvcmVkX2RvbWFpbi5jCkBAIC03MSw4ICs3MSwx
NCBAQCBzdHJ1Y3QgZG9tYWluCiAJLyogVGhlIGNvbm5lY3Rpb24gYXNzb2Np
YXRlZCB3aXRoIHRoaXMuICovCiAJc3RydWN0IGNvbm5lY3Rpb24gKmNvbm47
CiAKKwkvKiBHZW5lcmF0aW9uIGNvdW50IGF0IGRvbWFpbiBpbnRyb2R1Y3Rp
b24gdGltZS4gKi8KKwl1aW50NjRfdCBnZW5lcmF0aW9uOworCiAJLyogSGF2
ZSB3ZSBub3RpY2VkIHRoYXQgdGhpcyBkb21haW4gaXMgc2h1dGRvd24/ICov
Ci0JaW50IHNodXRkb3duOworCWJvb2wgc2h1dGRvd247CisKKwkvKiBIYXMg
ZG9tYWluIGJlZW4gb2ZmaWNpYWxseSBpbnRyb2R1Y2VkPyAqLworCWJvb2wg
aW50cm9kdWNlZDsKIAogCS8qIG51bWJlciBvZiBlbnRyeSBmcm9tIHRoaXMg
ZG9tYWluIGluIHRoZSBzdG9yZSAqLwogCWludCBuYmVudHJ5OwpAQCAtMjAw
LDYgKzIwNiw5IEBAIHN0YXRpYyBpbnQgZGVzdHJveV9kb21haW4odm9pZCAq
X2RvbWFpbikKIAogCWxpc3RfZGVsKCZkb21haW4tPmxpc3QpOwogCisJaWYg
KCFkb21haW4tPmludHJvZHVjZWQpCisJCXJldHVybiAwOworCiAJaWYgKGRv
bWFpbi0+cG9ydCkgewogCQlpZiAoeGVuZXZ0Y2huX3VuYmluZCh4Y2VfaGFu
ZGxlLCBkb21haW4tPnBvcnQpID09IC0xKQogCQkJZXByaW50ZigiPiBVbmJp
bmRpbmcgcG9ydCAlaSBmYWlsZWQhXG4iLCBkb21haW4tPnBvcnQpOwpAQCAt
MjIxLDIxICsyMzAsMzQgQEAgc3RhdGljIGludCBkZXN0cm95X2RvbWFpbih2
b2lkICpfZG9tYWluKQogCXJldHVybiAwOwogfQogCitzdGF0aWMgYm9vbCBn
ZXRfZG9tYWluX2luZm8odW5zaWduZWQgaW50IGRvbWlkLCB4Y19kb21pbmZv
X3QgKmRvbWluZm8pCit7CisJcmV0dXJuIHhjX2RvbWFpbl9nZXRpbmZvKCp4
Y19oYW5kbGUsIGRvbWlkLCAxLCBkb21pbmZvKSA9PSAxICYmCisJICAgICAg
IGRvbWluZm8tPmRvbWlkID09IGRvbWlkOworfQorCiBzdGF0aWMgdm9pZCBk
b21haW5fY2xlYW51cCh2b2lkKQogewogCXhjX2RvbWluZm9fdCBkb21pbmZv
OwogCXN0cnVjdCBkb21haW4gKmRvbWFpbjsKIAlzdHJ1Y3QgY29ubmVjdGlv
biAqY29ubjsKIAlpbnQgbm90aWZ5ID0gMDsKKwlib29sIGRvbV92YWxpZDsK
IAogIGFnYWluOgogCWxpc3RfZm9yX2VhY2hfZW50cnkoZG9tYWluLCAmZG9t
YWlucywgbGlzdCkgewotCQlpZiAoeGNfZG9tYWluX2dldGluZm8oKnhjX2hh
bmRsZSwgZG9tYWluLT5kb21pZCwgMSwKLQkJCQkgICAgICAmZG9taW5mbykg
PT0gMSAmJgotCQkgICAgZG9taW5mby5kb21pZCA9PSBkb21haW4tPmRvbWlk
KSB7CisJCWRvbV92YWxpZCA9IGdldF9kb21haW5faW5mbyhkb21haW4tPmRv
bWlkLCAmZG9taW5mbyk7CisJCWlmICghZG9tYWluLT5pbnRyb2R1Y2VkKSB7
CisJCQlpZiAoIWRvbV92YWxpZCkgeworCQkJCXRhbGxvY19mcmVlKGRvbWFp
bik7CisJCQkJZ290byBhZ2FpbjsKKwkJCX0KKwkJCWNvbnRpbnVlOworCQl9
CisJCWlmIChkb21fdmFsaWQpIHsKIAkJCWlmICgoZG9taW5mby5jcmFzaGVk
IHx8IGRvbWluZm8uc2h1dGRvd24pCiAJCQkgICAgJiYgIWRvbWFpbi0+c2h1
dGRvd24pIHsKLQkJCQlkb21haW4tPnNodXRkb3duID0gMTsKKwkJCQlkb21h
aW4tPnNodXRkb3duID0gdHJ1ZTsKIAkJCQlub3RpZnkgPSAxOwogCQkJfQog
CQkJaWYgKCFkb21pbmZvLmR5aW5nKQpAQCAtMzAxLDU4ICszMjMsODQgQEAg
c3RhdGljIGNoYXIgKnRhbGxvY19kb21haW5fcGF0aCh2b2lkICpjb250ZXh0
LCB1bnNpZ25lZCBpbnQgZG9taWQpCiAJcmV0dXJuIHRhbGxvY19hc3ByaW50
Zihjb250ZXh0LCAiL2xvY2FsL2RvbWFpbi8ldSIsIGRvbWlkKTsKIH0KIAot
c3RhdGljIHN0cnVjdCBkb21haW4gKm5ld19kb21haW4odm9pZCAqY29udGV4
dCwgdW5zaWduZWQgaW50IGRvbWlkLAotCQkJCSBpbnQgcG9ydCkKK3N0YXRp
YyBzdHJ1Y3QgZG9tYWluICpmaW5kX2RvbWFpbl9zdHJ1Y3QodW5zaWduZWQg
aW50IGRvbWlkKQoreworCXN0cnVjdCBkb21haW4gKmk7CisKKwlsaXN0X2Zv
cl9lYWNoX2VudHJ5KGksICZkb21haW5zLCBsaXN0KSB7CisJCWlmIChpLT5k
b21pZCA9PSBkb21pZCkKKwkJCXJldHVybiBpOworCX0KKwlyZXR1cm4gTlVM
TDsKK30KKworc3RhdGljIHN0cnVjdCBkb21haW4gKmFsbG9jX2RvbWFpbih2
b2lkICpjb250ZXh0LCB1bnNpZ25lZCBpbnQgZG9taWQpCiB7CiAJc3RydWN0
IGRvbWFpbiAqZG9tYWluOwotCWludCByYzsKIAogCWRvbWFpbiA9IHRhbGxv
Yyhjb250ZXh0LCBzdHJ1Y3QgZG9tYWluKTsKLQlpZiAoIWRvbWFpbikKKwlp
ZiAoIWRvbWFpbikgeworCQllcnJubyA9IEVOT01FTTsKIAkJcmV0dXJuIE5V
TEw7CisJfQogCi0JZG9tYWluLT5wb3J0ID0gMDsKLQlkb21haW4tPnNodXRk
b3duID0gMDsKIAlkb21haW4tPmRvbWlkID0gZG9taWQ7Ci0JZG9tYWluLT5w
YXRoID0gdGFsbG9jX2RvbWFpbl9wYXRoKGRvbWFpbiwgZG9taWQpOwotCWlm
ICghZG9tYWluLT5wYXRoKQotCQlyZXR1cm4gTlVMTDsKKwlkb21haW4tPmdl
bmVyYXRpb24gPSBnZW5lcmF0aW9uOworCWRvbWFpbi0+aW50cm9kdWNlZCA9
IGZhbHNlOwogCi0Jd3JsX2RvbWFpbl9uZXcoZG9tYWluKTsKKwl0YWxsb2Nf
c2V0X2Rlc3RydWN0b3IoZG9tYWluLCBkZXN0cm95X2RvbWFpbik7CiAKIAls
aXN0X2FkZCgmZG9tYWluLT5saXN0LCAmZG9tYWlucyk7Ci0JdGFsbG9jX3Nl
dF9kZXN0cnVjdG9yKGRvbWFpbiwgZGVzdHJveV9kb21haW4pOworCisJcmV0
dXJuIGRvbWFpbjsKK30KKworc3RhdGljIGludCBuZXdfZG9tYWluKHN0cnVj
dCBkb21haW4gKmRvbWFpbiwgaW50IHBvcnQpCit7CisJaW50IHJjOworCisJ
ZG9tYWluLT5wb3J0ID0gMDsKKwlkb21haW4tPnNodXRkb3duID0gZmFsc2U7
CisJZG9tYWluLT5wYXRoID0gdGFsbG9jX2RvbWFpbl9wYXRoKGRvbWFpbiwg
ZG9tYWluLT5kb21pZCk7CisJaWYgKCFkb21haW4tPnBhdGgpIHsKKwkJZXJy
bm8gPSBFTk9NRU07CisJCXJldHVybiBlcnJubzsKKwl9CisKKwl3cmxfZG9t
YWluX25ldyhkb21haW4pOwogCiAJLyogVGVsbCBrZXJuZWwgd2UncmUgaW50
ZXJlc3RlZCBpbiB0aGlzIGV2ZW50LiAqLwotCXJjID0geGVuZXZ0Y2huX2Jp
bmRfaW50ZXJkb21haW4oeGNlX2hhbmRsZSwgZG9taWQsIHBvcnQpOworCXJj
ID0geGVuZXZ0Y2huX2JpbmRfaW50ZXJkb21haW4oeGNlX2hhbmRsZSwgZG9t
YWluLT5kb21pZCwgcG9ydCk7CiAJaWYgKHJjID09IC0xKQotCSAgICByZXR1
cm4gTlVMTDsKKwkJcmV0dXJuIGVycm5vOwogCWRvbWFpbi0+cG9ydCA9IHJj
OwogCisJZG9tYWluLT5pbnRyb2R1Y2VkID0gdHJ1ZTsKKwogCWRvbWFpbi0+
Y29ubiA9IG5ld19jb25uZWN0aW9uKHdyaXRlY2huLCByZWFkY2huKTsKLQlp
ZiAoIWRvbWFpbi0+Y29ubikKLQkJcmV0dXJuIE5VTEw7CisJaWYgKCFkb21h
aW4tPmNvbm4pICB7CisJCWVycm5vID0gRU5PTUVNOworCQlyZXR1cm4gZXJy
bm87CisJfQogCiAJZG9tYWluLT5jb25uLT5kb21haW4gPSBkb21haW47Ci0J
ZG9tYWluLT5jb25uLT5pZCA9IGRvbWlkOworCWRvbWFpbi0+Y29ubi0+aWQg
PSBkb21haW4tPmRvbWlkOwogCiAJZG9tYWluLT5yZW1vdGVfcG9ydCA9IHBv
cnQ7CiAJZG9tYWluLT5uYmVudHJ5ID0gMDsKIAlkb21haW4tPm5id2F0Y2gg
PSAwOwogCi0JcmV0dXJuIGRvbWFpbjsKKwlyZXR1cm4gMDsKIH0KIAogCiBz
dGF0aWMgc3RydWN0IGRvbWFpbiAqZmluZF9kb21haW5fYnlfZG9taWQodW5z
aWduZWQgaW50IGRvbWlkKQogewotCXN0cnVjdCBkb21haW4gKmk7CisJc3Ry
dWN0IGRvbWFpbiAqZDsKIAotCWxpc3RfZm9yX2VhY2hfZW50cnkoaSwgJmRv
bWFpbnMsIGxpc3QpIHsKLQkJaWYgKGktPmRvbWlkID09IGRvbWlkKQotCQkJ
cmV0dXJuIGk7Ci0JfQotCXJldHVybiBOVUxMOworCWQgPSBmaW5kX2RvbWFp
bl9zdHJ1Y3QoZG9taWQpOworCisJcmV0dXJuIChkICYmIGQtPmludHJvZHVj
ZWQpID8gZCA6IE5VTEw7CiB9CiAKIHN0YXRpYyB2b2lkIGRvbWFpbl9jb25u
X3Jlc2V0KHN0cnVjdCBkb21haW4gKmRvbWFpbikKQEAgLTM5OSwxNSArNDQ3
LDIxIEBAIGludCBkb19pbnRyb2R1Y2Uoc3RydWN0IGNvbm5lY3Rpb24gKmNv
bm4sIHN0cnVjdCBidWZmZXJlZF9kYXRhICppbikKIAlpZiAocG9ydCA8PSAw
KQogCQlyZXR1cm4gRUlOVkFMOwogCi0JZG9tYWluID0gZmluZF9kb21haW5f
YnlfZG9taWQoZG9taWQpOworCWRvbWFpbiA9IGZpbmRfZG9tYWluX3N0cnVj
dChkb21pZCk7CiAKIAlpZiAoZG9tYWluID09IE5VTEwpIHsKKwkJLyogSGFu
ZyBkb21haW4gb2ZmICJpbiIgdW50aWwgd2UncmUgZmluaXNoZWQuICovCisJ
CWRvbWFpbiA9IGFsbG9jX2RvbWFpbihpbiwgZG9taWQpOworCQlpZiAoZG9t
YWluID09IE5VTEwpCisJCQlyZXR1cm4gRU5PTUVNOworCX0KKworCWlmICgh
ZG9tYWluLT5pbnRyb2R1Y2VkKSB7CiAJCWludGVyZmFjZSA9IG1hcF9pbnRl
cmZhY2UoZG9taWQsIG1mbik7CiAJCWlmICghaW50ZXJmYWNlKQogCQkJcmV0
dXJuIGVycm5vOwogCQkvKiBIYW5nIGRvbWFpbiBvZmYgImluIiB1bnRpbCB3
ZSdyZSBmaW5pc2hlZC4gKi8KLQkJZG9tYWluID0gbmV3X2RvbWFpbihpbiwg
ZG9taWQsIHBvcnQpOwotCQlpZiAoIWRvbWFpbikgeworCQlpZiAobmV3X2Rv
bWFpbihkb21haW4sIHBvcnQpKSB7CiAJCQlyYyA9IGVycm5vOwogCQkJdW5t
YXBfaW50ZXJmYWNlKGludGVyZmFjZSk7CiAJCQlyZXR1cm4gcmM7CkBAIC01
MTgsOCArNTcyLDggQEAgaW50IGRvX3Jlc3VtZShzdHJ1Y3QgY29ubmVjdGlv
biAqY29ubiwgc3RydWN0IGJ1ZmZlcmVkX2RhdGEgKmluKQogCWlmIChJU19F
UlIoZG9tYWluKSkKIAkJcmV0dXJuIC1QVFJfRVJSKGRvbWFpbik7CiAKLQlk
b21haW4tPnNodXRkb3duID0gMDsKLQkKKwlkb21haW4tPnNodXRkb3duID0g
ZmFsc2U7CisKIAlzZW5kX2Fjayhjb25uLCBYU19SRVNVTUUpOwogCiAJcmV0
dXJuIDA7CkBAIC02NjIsOCArNzE2LDEwIEBAIHN0YXRpYyBpbnQgZG9tMF9p
bml0KHZvaWQpCiAJaWYgKHBvcnQgPT0gLTEpCiAJCXJldHVybiAtMTsKIAot
CWRvbTAgPSBuZXdfZG9tYWluKE5VTEwsIHhlbmJ1c19tYXN0ZXJfZG9taWQo
KSwgcG9ydCk7Ci0JaWYgKGRvbTAgPT0gTlVMTCkKKwlkb20wID0gYWxsb2Nf
ZG9tYWluKE5VTEwsIHhlbmJ1c19tYXN0ZXJfZG9taWQoKSk7CisJaWYgKCFk
b20wKQorCQlyZXR1cm4gLTE7CisJaWYgKG5ld19kb21haW4oZG9tMCwgcG9y
dCkpCiAJCXJldHVybiAtMTsKIAogCWRvbTAtPmludGVyZmFjZSA9IHhlbmJ1
c19tYXAoKTsKQEAgLTc0NCw2ICs4MDAsNjYgQEAgdm9pZCBkb21haW5fZW50
cnlfaW5jKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3Qgbm9kZSAq
bm9kZSkKIAl9CiB9CiAKKy8qCisgKiBDaGVjayB3aGV0aGVyIGEgZG9tYWlu
IHdhcyBjcmVhdGVkIGJlZm9yZSBvciBhZnRlciBhIHNwZWNpZmljIGdlbmVy
YXRpb24KKyAqIGNvdW50ICh1c2VkIGZvciB0ZXN0aW5nIHdoZXRoZXIgYSBu
b2RlIHBlcm1pc3Npb24gaXMgb2xkZXIgdGhhbiBhIGRvbWFpbikuCisgKgor
ICogUmV0dXJuIHZhbHVlczoKKyAqIC0xOiBlcnJvcgorICogIDA6IGRvbWFp
biBoYXMgaGlnaGVyIGdlbmVyYXRpb24gY291bnQgKGl0IGlzIHlvdW5nZXIg
dGhhbiBhIG5vZGUgd2l0aCB0aGUKKyAqICAgICBnaXZlbiBjb3VudCksIG9y
IGRvbWFpbiBpc24ndCBleGlzdGluZyBhbnkgbG9uZ2VyCisgKiAgMTogZG9t
YWluIGlzIG9sZGVyIHRoYW4gdGhlIG5vZGUKKyAqLworc3RhdGljIGludCBj
aGtfZG9tYWluX2dlbmVyYXRpb24odW5zaWduZWQgaW50IGRvbWlkLCB1aW50
NjRfdCBnZW4pCit7CisJc3RydWN0IGRvbWFpbiAqZDsKKwl4Y19kb21pbmZv
X3QgZG9taW5mbzsKKworCWlmICgheGNfaGFuZGxlICYmIGRvbWlkID09IDAp
CisJCXJldHVybiAxOworCisJZCA9IGZpbmRfZG9tYWluX3N0cnVjdChkb21p
ZCk7CisJaWYgKGQpCisJCXJldHVybiAoZC0+Z2VuZXJhdGlvbiA8PSBnZW4p
ID8gMSA6IDA7CisKKwlpZiAoIWdldF9kb21haW5faW5mbyhkb21pZCwgJmRv
bWluZm8pKQorCQlyZXR1cm4gMDsKKworCWQgPSBhbGxvY19kb21haW4oTlVM
TCwgZG9taWQpOworCXJldHVybiBkID8gMSA6IC0xOworfQorCisvKgorICog
UmVtb3ZlIHBlcm1pc3Npb25zIGZvciBubyBsb25nZXIgZXhpc3RpbmcgZG9t
YWlucyBpbiBvcmRlciB0byBhdm9pZCBhIG5ldworICogZG9tYWluIHdpdGgg
dGhlIHNhbWUgZG9taWQgaW5oZXJpdGluZyB0aGUgcGVybWlzc2lvbnMuCisg
Ki8KK2ludCBkb21haW5fYWRqdXN0X25vZGVfcGVybXMoc3RydWN0IG5vZGUg
Km5vZGUpCit7CisJdW5zaWduZWQgaW50IGk7CisJaW50IHJldDsKKworCXJl
dCA9IGNoa19kb21haW5fZ2VuZXJhdGlvbihub2RlLT5wZXJtcy5wWzBdLmlk
LCBub2RlLT5nZW5lcmF0aW9uKTsKKwlpZiAocmV0IDwgMCkKKwkJcmV0dXJu
IGVycm5vOworCisJLyogSWYgdGhlIG93bmVyIGRvZXNuJ3QgZXhpc3QgYW55
IGxvbmdlciBnaXZlIGl0IHRvIHByaXYgZG9tYWluLiAqLworCWlmICghcmV0
KQorCQlub2RlLT5wZXJtcy5wWzBdLmlkID0gcHJpdl9kb21pZDsKKworCWZv
ciAoaSA9IDE7IGkgPCBub2RlLT5wZXJtcy5udW07IGkrKykgeworCQlpZiAo
bm9kZS0+cGVybXMucFtpXS5wZXJtcyAmIFhTX1BFUk1fSUdOT1JFKQorCQkJ
Y29udGludWU7CisJCXJldCA9IGNoa19kb21haW5fZ2VuZXJhdGlvbihub2Rl
LT5wZXJtcy5wW2ldLmlkLAorCQkJCQkgICAgbm9kZS0+Z2VuZXJhdGlvbik7
CisJCWlmIChyZXQgPCAwKQorCQkJcmV0dXJuIGVycm5vOworCQlpZiAoIXJl
dCkKKwkJCW5vZGUtPnBlcm1zLnBbaV0ucGVybXMgfD0gWFNfUEVSTV9JR05P
UkU7CisJfQorCisJcmV0dXJuIDA7Cit9CisKIHZvaWQgZG9tYWluX2VudHJ5
X2RlYyhzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgc3RydWN0IG5vZGUgKm5v
ZGUpCiB7CiAJc3RydWN0IGRvbWFpbiAqZDsKZGlmZiAtLWdpdCBhL3Rvb2xz
L3hlbnN0b3JlL3hlbnN0b3JlZF9kb21haW4uaCBiL3Rvb2xzL3hlbnN0b3Jl
L3hlbnN0b3JlZF9kb21haW4uaAppbmRleCAyNTkxODM5NjJhOWMuLjVlMDAw
ODcyMDZjNyAxMDA2NDQKLS0tIGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVk
X2RvbWFpbi5oCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9kb21h
aW4uaApAQCAtNTYsNiArNTYsOSBAQCBib29sIGRvbWFpbl9jYW5fd3JpdGUo
c3RydWN0IGNvbm5lY3Rpb24gKmNvbm4pOwogCiBib29sIGRvbWFpbl9pc191
bnByaXZpbGVnZWQoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4pOwogCisvKiBS
ZW1vdmUgbm9kZSBwZXJtaXNzaW9ucyBmb3Igbm8gbG9uZ2VyIGV4aXN0aW5n
IGRvbWFpbnMuICovCitpbnQgZG9tYWluX2FkanVzdF9ub2RlX3Blcm1zKHN0
cnVjdCBub2RlICpub2RlKTsKKwogLyogUXVvdGEgbWFuaXB1bGF0aW9uICov
CiB2b2lkIGRvbWFpbl9lbnRyeV9pbmMoc3RydWN0IGNvbm5lY3Rpb24gKmNv
bm4sIHN0cnVjdCBub2RlICopOwogdm9pZCBkb21haW5fZW50cnlfZGVjKHN0
cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3Qgbm9kZSAqKTsKZGlmZiAt
LWdpdCBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF90cmFuc2FjdGlvbi5j
IGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3RyYW5zYWN0aW9uLmMKaW5k
ZXggMzY3OTNiOWIxYWYzLi45ZmNiNGM5YmE5ODYgMTAwNjQ0Ci0tLSBhL3Rv
b2xzL3hlbnN0b3JlL3hlbnN0b3JlZF90cmFuc2FjdGlvbi5jCisrKyBiL3Rv
b2xzL3hlbnN0b3JlL3hlbnN0b3JlZF90cmFuc2FjdGlvbi5jCkBAIC00Nyw3
ICs0NywxMiBAQAogICogdHJhbnNhY3Rpb24uCiAgKiBFYWNoIHRpbWUgdGhl
IGdsb2JhbCBnZW5lcmF0aW9uIGNvdW50IGlzIGNvcGllZCB0byBlaXRoZXIg
YSBub2RlIG9yIGEKICAqIHRyYW5zYWN0aW9uIGl0IGlzIGluY3JlbWVudGVk
LiBUaGlzIGVuc3VyZXMgYWxsIG5vZGVzIGFuZC9vciB0cmFuc2FjdGlvbnMK
LSAqIGFyZSBoYXZpbmcgYSB1bmlxdWUgZ2VuZXJhdGlvbiBjb3VudC4KKyAq
IGFyZSBoYXZpbmcgYSB1bmlxdWUgZ2VuZXJhdGlvbiBjb3VudC4gVGhlIGlu
Y3JlbWVudCBpcyBkb25lIF9iZWZvcmVfIHRoZQorICogY29weSBhcyB0aGF0
IGlzIG5lZWRlZCBmb3IgY2hlY2tpbmcgd2hldGhlciBhIGRvbWFpbiB3YXMg
Y3JlYXRlZCBiZWZvcmUKKyAqIG9yIGFmdGVyIGEgbm9kZSBoYXMgYmVlbiB3
cml0dGVuICh0aGUgZG9tYWluJ3MgZ2VuZXJhdGlvbiBpcyBzZXQgd2l0aCB0
aGUKKyAqIGFjdHVhbCBnZW5lcmF0aW9uIGNvdW50IHdpdGhvdXQgaW5jcmVt
ZW50aW5nIGl0LCBpbiBvcmRlciB0byBzdXBwb3J0CisgKiB3cml0aW5nIGEg
bm9kZSBmb3IgYSBkb21haW4gYmVmb3JlIHRoZSBkb21haW4gaGFzIGJlZW4g
b2ZmaWNpYWxseQorICogaW50cm9kdWNlZCkuCiAgKgogICogVHJhbnNhY3Rp
b24gY29uZmxpY3RzIGFyZSBkZXRlY3RlZCBieSBjaGVja2luZyB0aGUgZ2Vu
ZXJhdGlvbiBjb3VudCBvZiBhbGwKICAqIG5vZGVzIHJlYWQgaW4gdGhlIHRy
YW5zYWN0aW9uIHRvIG1hdGNoIHdpdGggdGhlIGdlbmVyYXRpb24gY291bnQg
aW4gdGhlCkBAIC0xNjEsNyArMTY2LDcgQEAgc3RydWN0IHRyYW5zYWN0aW9u
CiB9OwogCiBleHRlcm4gaW50IHF1b3RhX21heF90cmFuc2FjdGlvbjsKLXN0
YXRpYyB1aW50NjRfdCBnZW5lcmF0aW9uOwordWludDY0X3QgZ2VuZXJhdGlv
bjsKIAogc3RhdGljIHZvaWQgc2V0X3RkYl9rZXkoY29uc3QgY2hhciAqbmFt
ZSwgVERCX0RBVEEgKmtleSkKIHsKQEAgLTIzNyw3ICsyNDIsNyBAQCBpbnQg
YWNjZXNzX25vZGUoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBu
b2RlICpub2RlLAogCWJvb2wgaW50cm9kdWNlID0gZmFsc2U7CiAKIAlpZiAo
dHlwZSAhPSBOT0RFX0FDQ0VTU19SRUFEKSB7Ci0JCW5vZGUtPmdlbmVyYXRp
b24gPSBnZW5lcmF0aW9uKys7CisJCW5vZGUtPmdlbmVyYXRpb24gPSArK2dl
bmVyYXRpb247CiAJCWlmIChjb25uICYmICFjb25uLT50cmFuc2FjdGlvbikK
IAkJCXdybF9hcHBseV9kZWJpdF9kaXJlY3QoY29ubik7CiAJfQpAQCAtMzc0
LDcgKzM3OSw3IEBAIHN0YXRpYyBpbnQgZmluYWxpemVfdHJhbnNhY3Rpb24o
c3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sCiAJCQkJaWYgKCFkYXRhLmRwdHIp
CiAJCQkJCWdvdG8gZXJyOwogCQkJCWhkciA9ICh2b2lkICopZGF0YS5kcHRy
OwotCQkJCWhkci0+Z2VuZXJhdGlvbiA9IGdlbmVyYXRpb24rKzsKKwkJCQlo
ZHItPmdlbmVyYXRpb24gPSArK2dlbmVyYXRpb247CiAJCQkJcmV0ID0gdGRi
X3N0b3JlKHRkYl9jdHgsIGtleSwgZGF0YSwKIAkJCQkJCVREQl9SRVBMQUNF
KTsKIAkJCQl0YWxsb2NfZnJlZShkYXRhLmRwdHIpOwpAQCAtNDYyLDcgKzQ2
Nyw3IEBAIGludCBkb190cmFuc2FjdGlvbl9zdGFydChzdHJ1Y3QgY29ubmVj
dGlvbiAqY29ubiwgc3RydWN0IGJ1ZmZlcmVkX2RhdGEgKmluKQogCUlOSVRf
TElTVF9IRUFEKCZ0cmFucy0+YWNjZXNzZWQpOwogCUlOSVRfTElTVF9IRUFE
KCZ0cmFucy0+Y2hhbmdlZF9kb21haW5zKTsKIAl0cmFucy0+ZmFpbCA9IGZh
bHNlOwotCXRyYW5zLT5nZW5lcmF0aW9uID0gZ2VuZXJhdGlvbisrOworCXRy
YW5zLT5nZW5lcmF0aW9uID0gKytnZW5lcmF0aW9uOwogCiAJLyogUGljayBh
biB1bnVzZWQgdHJhbnNhY3Rpb24gaWRlbnRpZmllci4gKi8KIAlkbyB7CmRp
ZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfdHJhbnNhY3Rp
b24uaCBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF90cmFuc2FjdGlvbi5o
CmluZGV4IDMzODZiYWM1NjUwOC4uNDNhMTYyYmVhM2YzIDEwMDY0NAotLS0g
YS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfdHJhbnNhY3Rpb24uaAorKysg
Yi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfdHJhbnNhY3Rpb24uaApAQCAt
MjcsNiArMjcsOCBAQCBlbnVtIG5vZGVfYWNjZXNzX3R5cGUgewogCiBzdHJ1
Y3QgdHJhbnNhY3Rpb247CiAKK2V4dGVybiB1aW50NjRfdCBnZW5lcmF0aW9u
OworCiBpbnQgZG9fdHJhbnNhY3Rpb25fc3RhcnQoc3RydWN0IGNvbm5lY3Rp
b24gKmNvbm4sIHN0cnVjdCBidWZmZXJlZF9kYXRhICpub2RlKTsKIGludCBk
b190cmFuc2FjdGlvbl9lbmQoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0
cnVjdCBidWZmZXJlZF9kYXRhICppbik7CiAKZGlmZiAtLWdpdCBhL3Rvb2xz
L3hlbnN0b3JlL3hzX2xpYi5jIGIvdG9vbHMveGVuc3RvcmUveHNfbGliLmMK
aW5kZXggM2U0M2Y4ODA5ZDQyLi5kNDA3ZDU3MTNhZmYgMTAwNjQ0Ci0tLSBh
L3Rvb2xzL3hlbnN0b3JlL3hzX2xpYi5jCisrKyBiL3Rvb2xzL3hlbnN0b3Jl
L3hzX2xpYi5jCkBAIC0xNTIsNyArMTUyLDcgQEAgYm9vbCB4c19zdHJpbmdz
X3RvX3Blcm1zKHN0cnVjdCB4c19wZXJtaXNzaW9ucyAqcGVybXMsIHVuc2ln
bmVkIGludCBudW0sCiBib29sIHhzX3Blcm1fdG9fc3RyaW5nKGNvbnN0IHN0
cnVjdCB4c19wZXJtaXNzaW9ucyAqcGVybSwKICAgICAgICAgICAgICAgICAg
ICAgICAgY2hhciAqYnVmZmVyLCBzaXplX3QgYnVmX2xlbikKIHsKLQlzd2l0
Y2ggKChpbnQpcGVybS0+cGVybXMpIHsKKwlzd2l0Y2ggKChpbnQpcGVybS0+
cGVybXMgJiB+WFNfUEVSTV9JR05PUkUpIHsKIAljYXNlIFhTX1BFUk1fV1JJ
VEU6CiAJCSpidWZmZXIgPSAndyc7CiAJCWJyZWFrOwotLSAKMi4xNy4xCgo=

--=separator
Content-Type: application/octet-stream; name="xsa322-4.14-c.patch"
Content-Disposition: attachment; filename="xsa322-4.14-c.patch"
Content-Transfer-Encoding: base64

RnJvbTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpTdWJqZWN0
OiB0b29scy94ZW5zdG9yZTogcmV2b2tlIGFjY2VzcyByaWdodHMgZm9yIHJl
bW92ZWQgZG9tYWlucwoKQWNjZXNzIHJpZ2h0cyBvZiBYZW5zdG9yZSBub2Rl
cyBhcmUgcGVyIGRvbWlkLiBVbmZvcnR1bmF0ZWx5IGV4aXN0aW5nCmdyYW50
ZWQgYWNjZXNzIHJpZ2h0cyBhcmUgbm90IHJlbW92ZWQgd2hlbiBhIGRvbWFp
biBpcyBiZWluZyBkZXN0cm95ZWQuClRoaXMgbWVhbnMgdGhhdCBhIG5ldyBk
b21haW4gY3JlYXRlZCB3aXRoIHRoZSBzYW1lIGRvbWlkIHdpbGwgaW5oZXJp
dAp0aGUgYWNjZXNzIHJpZ2h0cyB0byBYZW5zdG9yZSBub2RlcyBmcm9tIHRo
ZSBwcmV2aW91cyBkb21haW4ocykgd2l0aAp0aGUgc2FtZSBkb21pZC4KClRo
aXMgY2FuIGJlIGF2b2lkZWQgYnkgYWRkaW5nIGEgZ2VuZXJhdGlvbiBjb3Vu
dGVyIHRvIGVhY2ggZG9tYWluLgpUaGUgZ2VuZXJhdGlvbiBjb3VudGVyIG9m
IHRoZSBkb21haW4gaXMgc2V0IHRvIHRoZSBnbG9iYWwgZ2VuZXJhdGlvbgpj
b3VudGVyIHdoZW4gYSBkb21haW4gc3RydWN0dXJlIGlzIGJlaW5nIGFsbG9j
YXRlZC4gV2hlbiByZWFkaW5nIG9yCndyaXRpbmcgYSBub2RlIGFsbCBwZXJt
aXNzaW9ucyBvZiBkb21haW5zIHdoaWNoIGFyZSB5b3VuZ2VyIHRoYW4gdGhl
Cm5vZGUgaXRzZWxmIGFyZSBkcm9wcGVkLiBUaGlzIGlzIGRvbmUgYnkgZmxh
Z2dpbmcgdGhlIHJlbGF0ZWQgZW50cnkKYXMgaW52YWxpZCBpbiBvcmRlciB0
byBhdm9pZCBtb2RpZnlpbmcgcGVybWlzc2lvbnMgaW4gYSB3YXkgdGhlIHVz
ZXIKY291bGQgZGV0ZWN0LgoKQSBzcGVjaWFsIGNhc2UgaGFzIHRvIGJlIGNv
bnNpZGVyZWQ6IGZvciBhIG5ldyBkb21haW4gdGhlIGZpcnN0ClhlbnN0b3Jl
IGVudHJpZXMgYXJlIGFscmVhZHkgd3JpdHRlbiBiZWZvcmUgdGhlIGRvbWFp
biBpcyBvZmZpY2lhbGx5CmludHJvZHVjZWQgaW4gWGVuc3RvcmUuIEluIG9y
ZGVyIG5vdCB0byBkcm9wIHRoZSBwZXJtaXNzaW9ucyBmb3IgdGhlCm5ldyBk
b21haW4gYSBkb21haW4gc3RydWN0IGlzIGFsbG9jYXRlZCBldmVuIGJlZm9y
ZSBpbnRyb2R1Y3Rpb24gaWYKdGhlIGh5cGVydmlzb3IgaXMgYXdhcmUgb2Yg
dGhlIGRvbWFpbi4gVGhpcyByZXF1aXJlcyBhZGRpbmcgYW5vdGhlcgpib29s
ICJpbnRyb2R1Y2VkIiB0byBzdHJ1Y3QgZG9tYWluIGluIHhlbnN0b3JlZC4g
SW4gb3JkZXIgdG8gYXZvaWQKYWRkaXRpb25hbCBwYWRkaW5nIGhvbGVzIGNv
bnZlcnQgdGhlIHNodXRkb3duIGZsYWcgdG8gYm9vbCwgdG9vLgoKQXMgdmVy
aWZ5aW5nIHBlcm1pc3Npb25zIGhhcyBpdHMgcHJpY2UgcmVnYXJkaW5nIHJ1
bnRpbWUgYWRkIGEgbmV3CnF1b3RhIGZvciBsaW1pdGluZyB0aGUgbnVtYmVy
IG9mIHBlcm1pc3Npb25zIGFuIHVucHJpdmlsZWdlZCBkb21haW4KY2FuIHNl
dCBmb3IgYSBub2RlLiBUaGUgZGVmYXVsdCBmb3IgdGhhdCBuZXcgcXVvdGEg
aXMgNS4KClRoaXMgaXMgcGFydCBvZiBYU0EtMzIyLgoKU2lnbmVkLW9mZi1i
eTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpSZXZpZXdlZC1i
eTogUGF1bCBEdXJyYW50IDxwYXVsQHhlbi5vcmc+CkFja2VkLWJ5OiBKdWxp
ZW4gR3JhbGwgPGp1bGllbkBhbWF6b24uY29tPgoKZGlmZiAtLWdpdCBhL3Rv
b2xzL3hlbnN0b3JlL2luY2x1ZGUveGVuc3RvcmVfbGliLmggYi90b29scy94
ZW5zdG9yZS9pbmNsdWRlL3hlbnN0b3JlX2xpYi5oCmluZGV4IDBmZmJhZTll
YjUuLjRjOWI2ZDE2ODUgMTAwNjQ0Ci0tLSBhL3Rvb2xzL3hlbnN0b3JlL2lu
Y2x1ZGUveGVuc3RvcmVfbGliLmgKKysrIGIvdG9vbHMveGVuc3RvcmUvaW5j
bHVkZS94ZW5zdG9yZV9saWIuaApAQCAtMzQsNiArMzQsNyBAQCBlbnVtIHhz
X3Blcm1fdHlwZSB7CiAJLyogSW50ZXJuYWwgdXNlLiAqLwogCVhTX1BFUk1f
RU5PRU5UX09LID0gNCwKIAlYU19QRVJNX09XTkVSID0gOCwKKwlYU19QRVJN
X0lHTk9SRSA9IDE2LAogfTsKIAogc3RydWN0IHhzX3Blcm1pc3Npb25zCmRp
ZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jIGIv
dG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYwppbmRleCA5MmJmZDU0
Y2ZmLi41MDU1NjBhNWRlIDEwMDY0NAotLS0gYS90b29scy94ZW5zdG9yZS94
ZW5zdG9yZWRfY29yZS5jCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3Jl
ZF9jb3JlLmMKQEAgLTEwNCw2ICsxMDQsNyBAQCBpbnQgcXVvdGFfbmJfZW50
cnlfcGVyX2RvbWFpbiA9IDEwMDA7CiBpbnQgcXVvdGFfbmJfd2F0Y2hfcGVy
X2RvbWFpbiA9IDEyODsKIGludCBxdW90YV9tYXhfZW50cnlfc2l6ZSA9IDIw
NDg7IC8qIDJLICovCiBpbnQgcXVvdGFfbWF4X3RyYW5zYWN0aW9uID0gMTA7
CitpbnQgcXVvdGFfbmJfcGVybXNfcGVyX25vZGUgPSA1OwogCiB2b2lkIHRy
YWNlKGNvbnN0IGNoYXIgKmZtdCwgLi4uKQogewpAQCAtNDA5LDggKzQxMCwx
MyBAQCBzdHJ1Y3Qgbm9kZSAqcmVhZF9ub2RlKHN0cnVjdCBjb25uZWN0aW9u
ICpjb25uLCBjb25zdCB2b2lkICpjdHgsCiAKIAkvKiBQZXJtaXNzaW9ucyBh
cmUgc3RydWN0IHhzX3Blcm1pc3Npb25zLiAqLwogCW5vZGUtPnBlcm1zLnAg
PSBoZHItPnBlcm1zOworCWlmIChkb21haW5fYWRqdXN0X25vZGVfcGVybXMo
bm9kZSkpIHsKKwkJdGFsbG9jX2ZyZWUobm9kZSk7CisJCXJldHVybiBOVUxM
OworCX0KKwogCS8qIERhdGEgaXMgYmluYXJ5IGJsb2IgKHVzdWFsbHkgYXNj
aWksIG5vIG51bCkuICovCi0Jbm9kZS0+ZGF0YSA9IG5vZGUtPnBlcm1zLnAg
KyBub2RlLT5wZXJtcy5udW07CisJbm9kZS0+ZGF0YSA9IG5vZGUtPnBlcm1z
LnAgKyBoZHItPm51bV9wZXJtczsKIAkvKiBDaGlsZHJlbiBpcyBzdHJpbmdz
LCBudWwgc2VwYXJhdGVkLiAqLwogCW5vZGUtPmNoaWxkcmVuID0gbm9kZS0+
ZGF0YSArIG5vZGUtPmRhdGFsZW47CiAKQEAgLTQyNiw2ICs0MzIsOSBAQCBp
bnQgd3JpdGVfbm9kZV9yYXcoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIFRE
Ql9EQVRBICprZXksIHN0cnVjdCBub2RlICpub2RlLAogCXZvaWQgKnA7CiAJ
c3RydWN0IHhzX3RkYl9yZWNvcmRfaGRyICpoZHI7CiAKKwlpZiAoZG9tYWlu
X2FkanVzdF9ub2RlX3Blcm1zKG5vZGUpKQorCQlyZXR1cm4gZXJybm87CisK
IAlkYXRhLmRzaXplID0gc2l6ZW9mKCpoZHIpCiAJCSsgbm9kZS0+cGVybXMu
bnVtICogc2l6ZW9mKG5vZGUtPnBlcm1zLnBbMF0pCiAJCSsgbm9kZS0+ZGF0
YWxlbiArIG5vZGUtPmNoaWxkbGVuOwpAQCAtNDg1LDggKzQ5NCw5IEBAIGVu
dW0geHNfcGVybV90eXBlIHBlcm1fZm9yX2Nvbm4oc3RydWN0IGNvbm5lY3Rp
b24gKmNvbm4sCiAJCXJldHVybiAoWFNfUEVSTV9SRUFEfFhTX1BFUk1fV1JJ
VEV8WFNfUEVSTV9PV05FUikgJiBtYXNrOwogCiAJZm9yIChpID0gMTsgaSA8
IHBlcm1zLT5udW07IGkrKykKLQkJaWYgKHBlcm1zLT5wW2ldLmlkID09IGNv
bm4tPmlkCi0gICAgICAgICAgICAgICAgICAgICAgICB8fCAoY29ubi0+dGFy
Z2V0ICYmIHBlcm1zLT5wW2ldLmlkID09IGNvbm4tPnRhcmdldC0+aWQpKQor
CQlpZiAoIShwZXJtcy0+cFtpXS5wZXJtcyAmIFhTX1BFUk1fSUdOT1JFKSAm
JgorCQkgICAgKHBlcm1zLT5wW2ldLmlkID09IGNvbm4tPmlkIHx8CisJCSAg
ICAgKGNvbm4tPnRhcmdldCAmJiBwZXJtcy0+cFtpXS5pZCA9PSBjb25uLT50
YXJnZXQtPmlkKSkpCiAJCQlyZXR1cm4gcGVybXMtPnBbaV0ucGVybXMgJiBt
YXNrOwogCiAJcmV0dXJuIHBlcm1zLT5wWzBdLnBlcm1zICYgbWFzazsKQEAg
LTEyNDgsOCArMTI1OCwxMiBAQCBzdGF0aWMgaW50IGRvX3NldF9wZXJtcyhz
dHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgc3RydWN0IGJ1ZmZlcmVkX2RhdGEg
KmluKQogCWlmIChwZXJtcy5udW0gPCAyKQogCQlyZXR1cm4gRUlOVkFMOwog
Ci0JcGVybXN0ciA9IGluLT5idWZmZXIgKyBzdHJsZW4oaW4tPmJ1ZmZlcikg
KyAxOwogCXBlcm1zLm51bS0tOworCWlmIChkb21haW5faXNfdW5wcml2aWxl
Z2VkKGNvbm4pICYmCisJICAgIHBlcm1zLm51bSA+IHF1b3RhX25iX3Blcm1z
X3Blcl9ub2RlKQorCQlyZXR1cm4gRU5PU1BDOworCisJcGVybXN0ciA9IGlu
LT5idWZmZXIgKyBzdHJsZW4oaW4tPmJ1ZmZlcikgKyAxOwogCiAJcGVybXMu
cCA9IHRhbGxvY19hcnJheShpbiwgc3RydWN0IHhzX3Blcm1pc3Npb25zLCBw
ZXJtcy5udW0pOwogCWlmICghcGVybXMucCkKQEAgLTE5MDQsNiArMTkxOCw3
IEBAIHN0YXRpYyB2b2lkIHVzYWdlKHZvaWQpCiAiICAtUywgLS1lbnRyeS1z
aXplIDxzaXplPiBsaW1pdCB0aGUgc2l6ZSBvZiBlbnRyeSBwZXIgZG9tYWlu
LCBhbmRcbiIKICIgIC1XLCAtLXdhdGNoLW5iIDxuYj4gICAgIGxpbWl0IHRo
ZSBudW1iZXIgb2Ygd2F0Y2hlcyBwZXIgZG9tYWluLFxuIgogIiAgLXQsIC0t
dHJhbnNhY3Rpb24gPG5iPiAgbGltaXQgdGhlIG51bWJlciBvZiB0cmFuc2Fj
dGlvbiBhbGxvd2VkIHBlciBkb21haW4sXG4iCisiICAtQSwgLS1wZXJtLW5i
IDxuYj4gICAgICBsaW1pdCB0aGUgbnVtYmVyIG9mIHBlcm1pc3Npb25zIHBl
ciBub2RlLFxuIgogIiAgLVIsIC0tbm8tcmVjb3ZlcnkgICAgICAgdG8gcmVx
dWVzdCB0aGF0IG5vIHJlY292ZXJ5IHNob3VsZCBiZSBhdHRlbXB0ZWQgd2hl
blxuIgogIiAgICAgICAgICAgICAgICAgICAgICAgICAgdGhlIHN0b3JlIGlz
IGNvcnJ1cHRlZCAoZGVidWcgb25seSksXG4iCiAiICAtSSwgLS1pbnRlcm5h
bC1kYiAgICAgICBzdG9yZSBkYXRhYmFzZSBpbiBtZW1vcnksIG5vdCBvbiBk
aXNrXG4iCkBAIC0xOTI0LDYgKzE5MzksNyBAQCBzdGF0aWMgc3RydWN0IG9w
dGlvbiBvcHRpb25zW10gPSB7CiAJeyAiZW50cnktc2l6ZSIsIDEsIE5VTEws
ICdTJyB9LAogCXsgInRyYWNlLWZpbGUiLCAxLCBOVUxMLCAnVCcgfSwKIAl7
ICJ0cmFuc2FjdGlvbiIsIDEsIE5VTEwsICd0JyB9LAorCXsgInBlcm0tbmIi
LCAxLCBOVUxMLCAnQScgfSwKIAl7ICJuby1yZWNvdmVyeSIsIDAsIE5VTEws
ICdSJyB9LAogCXsgImludGVybmFsLWRiIiwgMCwgTlVMTCwgJ0knIH0sCiAJ
eyAidmVyYm9zZSIsIDAsIE5VTEwsICdWJyB9LApAQCAtMTk0Niw3ICsxOTYy
LDcgQEAgaW50IG1haW4oaW50IGFyZ2MsIGNoYXIgKmFyZ3ZbXSkKIAlpbnQg
dGltZW91dDsKIAogCi0Jd2hpbGUgKChvcHQgPSBnZXRvcHRfbG9uZyhhcmdj
LCBhcmd2LCAiREU6RjpITlBTOnQ6VDpSVlc6Iiwgb3B0aW9ucywKKwl3aGls
ZSAoKG9wdCA9IGdldG9wdF9sb25nKGFyZ2MsIGFyZ3YsICJERTpGOkhOUFM6
dDpBOlQ6UlZXOiIsIG9wdGlvbnMsCiAJCQkJICBOVUxMKSkgIT0gLTEpIHsK
IAkJc3dpdGNoIChvcHQpIHsKIAkJY2FzZSAnRCc6CkBAIC0xOTg4LDYgKzIw
MDQsOSBAQCBpbnQgbWFpbihpbnQgYXJnYywgY2hhciAqYXJndltdKQogCQlj
YXNlICdXJzoKIAkJCXF1b3RhX25iX3dhdGNoX3Blcl9kb21haW4gPSBzdHJ0
b2wob3B0YXJnLCBOVUxMLCAxMCk7CiAJCQlicmVhazsKKwkJY2FzZSAnQSc6
CisJCQlxdW90YV9uYl9wZXJtc19wZXJfbm9kZSA9IHN0cnRvbChvcHRhcmcs
IE5VTEwsIDEwKTsKKwkJCWJyZWFrOwogCQljYXNlICdlJzoKIAkJCWRvbTBf
ZXZlbnQgPSBzdHJ0b2wob3B0YXJnLCBOVUxMLCAxMCk7CiAJCQlicmVhazsK
ZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9kb21haW4u
YyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9kb21haW4uYwppbmRleCA5
ZmFkNDcwZjgzLi5kYzYzNWU5YmUzIDEwMDY0NAotLS0gYS90b29scy94ZW5z
dG9yZS94ZW5zdG9yZWRfZG9tYWluLmMKKysrIGIvdG9vbHMveGVuc3RvcmUv
eGVuc3RvcmVkX2RvbWFpbi5jCkBAIC02Nyw4ICs2NywxNCBAQCBzdHJ1Y3Qg
ZG9tYWluCiAJLyogVGhlIGNvbm5lY3Rpb24gYXNzb2NpYXRlZCB3aXRoIHRo
aXMuICovCiAJc3RydWN0IGNvbm5lY3Rpb24gKmNvbm47CiAKKwkvKiBHZW5l
cmF0aW9uIGNvdW50IGF0IGRvbWFpbiBpbnRyb2R1Y3Rpb24gdGltZS4gKi8K
Kwl1aW50NjRfdCBnZW5lcmF0aW9uOworCiAJLyogSGF2ZSB3ZSBub3RpY2Vk
IHRoYXQgdGhpcyBkb21haW4gaXMgc2h1dGRvd24/ICovCi0JaW50IHNodXRk
b3duOworCWJvb2wgc2h1dGRvd247CisKKwkvKiBIYXMgZG9tYWluIGJlZW4g
b2ZmaWNpYWxseSBpbnRyb2R1Y2VkPyAqLworCWJvb2wgaW50cm9kdWNlZDsK
IAogCS8qIG51bWJlciBvZiBlbnRyeSBmcm9tIHRoaXMgZG9tYWluIGluIHRo
ZSBzdG9yZSAqLwogCWludCBuYmVudHJ5OwpAQCAtMTg4LDYgKzE5NCw5IEBA
IHN0YXRpYyBpbnQgZGVzdHJveV9kb21haW4odm9pZCAqX2RvbWFpbikKIAog
CWxpc3RfZGVsKCZkb21haW4tPmxpc3QpOwogCisJaWYgKCFkb21haW4tPmlu
dHJvZHVjZWQpCisJCXJldHVybiAwOworCiAJaWYgKGRvbWFpbi0+cG9ydCkg
ewogCQlpZiAoeGVuZXZ0Y2huX3VuYmluZCh4Y2VfaGFuZGxlLCBkb21haW4t
PnBvcnQpID09IC0xKQogCQkJZXByaW50ZigiPiBVbmJpbmRpbmcgcG9ydCAl
aSBmYWlsZWQhXG4iLCBkb21haW4tPnBvcnQpOwpAQCAtMjA5LDIxICsyMTgs
MzQgQEAgc3RhdGljIGludCBkZXN0cm95X2RvbWFpbih2b2lkICpfZG9tYWlu
KQogCXJldHVybiAwOwogfQogCitzdGF0aWMgYm9vbCBnZXRfZG9tYWluX2lu
Zm8odW5zaWduZWQgaW50IGRvbWlkLCB4Y19kb21pbmZvX3QgKmRvbWluZm8p
Cit7CisJcmV0dXJuIHhjX2RvbWFpbl9nZXRpbmZvKCp4Y19oYW5kbGUsIGRv
bWlkLCAxLCBkb21pbmZvKSA9PSAxICYmCisJICAgICAgIGRvbWluZm8tPmRv
bWlkID09IGRvbWlkOworfQorCiBzdGF0aWMgdm9pZCBkb21haW5fY2xlYW51
cCh2b2lkKQogewogCXhjX2RvbWluZm9fdCBkb21pbmZvOwogCXN0cnVjdCBk
b21haW4gKmRvbWFpbjsKIAlzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubjsKIAlp
bnQgbm90aWZ5ID0gMDsKKwlib29sIGRvbV92YWxpZDsKIAogIGFnYWluOgog
CWxpc3RfZm9yX2VhY2hfZW50cnkoZG9tYWluLCAmZG9tYWlucywgbGlzdCkg
ewotCQlpZiAoeGNfZG9tYWluX2dldGluZm8oKnhjX2hhbmRsZSwgZG9tYWlu
LT5kb21pZCwgMSwKLQkJCQkgICAgICAmZG9taW5mbykgPT0gMSAmJgotCQkg
ICAgZG9taW5mby5kb21pZCA9PSBkb21haW4tPmRvbWlkKSB7CisJCWRvbV92
YWxpZCA9IGdldF9kb21haW5faW5mbyhkb21haW4tPmRvbWlkLCAmZG9taW5m
byk7CisJCWlmICghZG9tYWluLT5pbnRyb2R1Y2VkKSB7CisJCQlpZiAoIWRv
bV92YWxpZCkgeworCQkJCXRhbGxvY19mcmVlKGRvbWFpbik7CisJCQkJZ290
byBhZ2FpbjsKKwkJCX0KKwkJCWNvbnRpbnVlOworCQl9CisJCWlmIChkb21f
dmFsaWQpIHsKIAkJCWlmICgoZG9taW5mby5jcmFzaGVkIHx8IGRvbWluZm8u
c2h1dGRvd24pCiAJCQkgICAgJiYgIWRvbWFpbi0+c2h1dGRvd24pIHsKLQkJ
CQlkb21haW4tPnNodXRkb3duID0gMTsKKwkJCQlkb21haW4tPnNodXRkb3du
ID0gdHJ1ZTsKIAkJCQlub3RpZnkgPSAxOwogCQkJfQogCQkJaWYgKCFkb21p
bmZvLmR5aW5nKQpAQCAtMjg5LDU4ICszMTEsODQgQEAgc3RhdGljIGNoYXIg
KnRhbGxvY19kb21haW5fcGF0aCh2b2lkICpjb250ZXh0LCB1bnNpZ25lZCBp
bnQgZG9taWQpCiAJcmV0dXJuIHRhbGxvY19hc3ByaW50Zihjb250ZXh0LCAi
L2xvY2FsL2RvbWFpbi8ldSIsIGRvbWlkKTsKIH0KIAotc3RhdGljIHN0cnVj
dCBkb21haW4gKm5ld19kb21haW4odm9pZCAqY29udGV4dCwgdW5zaWduZWQg
aW50IGRvbWlkLAotCQkJCSBpbnQgcG9ydCkKK3N0YXRpYyBzdHJ1Y3QgZG9t
YWluICpmaW5kX2RvbWFpbl9zdHJ1Y3QodW5zaWduZWQgaW50IGRvbWlkKQor
eworCXN0cnVjdCBkb21haW4gKmk7CisKKwlsaXN0X2Zvcl9lYWNoX2VudHJ5
KGksICZkb21haW5zLCBsaXN0KSB7CisJCWlmIChpLT5kb21pZCA9PSBkb21p
ZCkKKwkJCXJldHVybiBpOworCX0KKwlyZXR1cm4gTlVMTDsKK30KKworc3Rh
dGljIHN0cnVjdCBkb21haW4gKmFsbG9jX2RvbWFpbih2b2lkICpjb250ZXh0
LCB1bnNpZ25lZCBpbnQgZG9taWQpCiB7CiAJc3RydWN0IGRvbWFpbiAqZG9t
YWluOwotCWludCByYzsKIAogCWRvbWFpbiA9IHRhbGxvYyhjb250ZXh0LCBz
dHJ1Y3QgZG9tYWluKTsKLQlpZiAoIWRvbWFpbikKKwlpZiAoIWRvbWFpbikg
eworCQllcnJubyA9IEVOT01FTTsKIAkJcmV0dXJuIE5VTEw7CisJfQogCi0J
ZG9tYWluLT5wb3J0ID0gMDsKLQlkb21haW4tPnNodXRkb3duID0gMDsKIAlk
b21haW4tPmRvbWlkID0gZG9taWQ7Ci0JZG9tYWluLT5wYXRoID0gdGFsbG9j
X2RvbWFpbl9wYXRoKGRvbWFpbiwgZG9taWQpOwotCWlmICghZG9tYWluLT5w
YXRoKQotCQlyZXR1cm4gTlVMTDsKKwlkb21haW4tPmdlbmVyYXRpb24gPSBn
ZW5lcmF0aW9uOworCWRvbWFpbi0+aW50cm9kdWNlZCA9IGZhbHNlOwogCi0J
d3JsX2RvbWFpbl9uZXcoZG9tYWluKTsKKwl0YWxsb2Nfc2V0X2Rlc3RydWN0
b3IoZG9tYWluLCBkZXN0cm95X2RvbWFpbik7CiAKIAlsaXN0X2FkZCgmZG9t
YWluLT5saXN0LCAmZG9tYWlucyk7Ci0JdGFsbG9jX3NldF9kZXN0cnVjdG9y
KGRvbWFpbiwgZGVzdHJveV9kb21haW4pOworCisJcmV0dXJuIGRvbWFpbjsK
K30KKworc3RhdGljIGludCBuZXdfZG9tYWluKHN0cnVjdCBkb21haW4gKmRv
bWFpbiwgaW50IHBvcnQpCit7CisJaW50IHJjOworCisJZG9tYWluLT5wb3J0
ID0gMDsKKwlkb21haW4tPnNodXRkb3duID0gZmFsc2U7CisJZG9tYWluLT5w
YXRoID0gdGFsbG9jX2RvbWFpbl9wYXRoKGRvbWFpbiwgZG9tYWluLT5kb21p
ZCk7CisJaWYgKCFkb21haW4tPnBhdGgpIHsKKwkJZXJybm8gPSBFTk9NRU07
CisJCXJldHVybiBlcnJubzsKKwl9CisKKwl3cmxfZG9tYWluX25ldyhkb21h
aW4pOwogCiAJLyogVGVsbCBrZXJuZWwgd2UncmUgaW50ZXJlc3RlZCBpbiB0
aGlzIGV2ZW50LiAqLwotCXJjID0geGVuZXZ0Y2huX2JpbmRfaW50ZXJkb21h
aW4oeGNlX2hhbmRsZSwgZG9taWQsIHBvcnQpOworCXJjID0geGVuZXZ0Y2hu
X2JpbmRfaW50ZXJkb21haW4oeGNlX2hhbmRsZSwgZG9tYWluLT5kb21pZCwg
cG9ydCk7CiAJaWYgKHJjID09IC0xKQotCSAgICByZXR1cm4gTlVMTDsKKwkJ
cmV0dXJuIGVycm5vOwogCWRvbWFpbi0+cG9ydCA9IHJjOwogCisJZG9tYWlu
LT5pbnRyb2R1Y2VkID0gdHJ1ZTsKKwogCWRvbWFpbi0+Y29ubiA9IG5ld19j
b25uZWN0aW9uKHdyaXRlY2huLCByZWFkY2huKTsKLQlpZiAoIWRvbWFpbi0+
Y29ubikKLQkJcmV0dXJuIE5VTEw7CisJaWYgKCFkb21haW4tPmNvbm4pICB7
CisJCWVycm5vID0gRU5PTUVNOworCQlyZXR1cm4gZXJybm87CisJfQogCiAJ
ZG9tYWluLT5jb25uLT5kb21haW4gPSBkb21haW47Ci0JZG9tYWluLT5jb25u
LT5pZCA9IGRvbWlkOworCWRvbWFpbi0+Y29ubi0+aWQgPSBkb21haW4tPmRv
bWlkOwogCiAJZG9tYWluLT5yZW1vdGVfcG9ydCA9IHBvcnQ7CiAJZG9tYWlu
LT5uYmVudHJ5ID0gMDsKIAlkb21haW4tPm5id2F0Y2ggPSAwOwogCi0JcmV0
dXJuIGRvbWFpbjsKKwlyZXR1cm4gMDsKIH0KIAogCiBzdGF0aWMgc3RydWN0
IGRvbWFpbiAqZmluZF9kb21haW5fYnlfZG9taWQodW5zaWduZWQgaW50IGRv
bWlkKQogewotCXN0cnVjdCBkb21haW4gKmk7CisJc3RydWN0IGRvbWFpbiAq
ZDsKIAotCWxpc3RfZm9yX2VhY2hfZW50cnkoaSwgJmRvbWFpbnMsIGxpc3Qp
IHsKLQkJaWYgKGktPmRvbWlkID09IGRvbWlkKQotCQkJcmV0dXJuIGk7Ci0J
fQotCXJldHVybiBOVUxMOworCWQgPSBmaW5kX2RvbWFpbl9zdHJ1Y3QoZG9t
aWQpOworCisJcmV0dXJuIChkICYmIGQtPmludHJvZHVjZWQpID8gZCA6IE5V
TEw7CiB9CiAKIHN0YXRpYyB2b2lkIGRvbWFpbl9jb25uX3Jlc2V0KHN0cnVj
dCBkb21haW4gKmRvbWFpbikKQEAgLTM4NiwxNSArNDM0LDIxIEBAIGludCBk
b19pbnRyb2R1Y2Uoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBi
dWZmZXJlZF9kYXRhICppbikKIAlpZiAocG9ydCA8PSAwKQogCQlyZXR1cm4g
RUlOVkFMOwogCi0JZG9tYWluID0gZmluZF9kb21haW5fYnlfZG9taWQoZG9t
aWQpOworCWRvbWFpbiA9IGZpbmRfZG9tYWluX3N0cnVjdChkb21pZCk7CiAK
IAlpZiAoZG9tYWluID09IE5VTEwpIHsKKwkJLyogSGFuZyBkb21haW4gb2Zm
ICJpbiIgdW50aWwgd2UncmUgZmluaXNoZWQuICovCisJCWRvbWFpbiA9IGFs
bG9jX2RvbWFpbihpbiwgZG9taWQpOworCQlpZiAoZG9tYWluID09IE5VTEwp
CisJCQlyZXR1cm4gRU5PTUVNOworCX0KKworCWlmICghZG9tYWluLT5pbnRy
b2R1Y2VkKSB7CiAJCWludGVyZmFjZSA9IG1hcF9pbnRlcmZhY2UoZG9taWQp
OwogCQlpZiAoIWludGVyZmFjZSkKIAkJCXJldHVybiBlcnJubzsKIAkJLyog
SGFuZyBkb21haW4gb2ZmICJpbiIgdW50aWwgd2UncmUgZmluaXNoZWQuICov
Ci0JCWRvbWFpbiA9IG5ld19kb21haW4oaW4sIGRvbWlkLCBwb3J0KTsKLQkJ
aWYgKCFkb21haW4pIHsKKwkJaWYgKG5ld19kb21haW4oZG9tYWluLCBwb3J0
KSkgewogCQkJcmMgPSBlcnJubzsKIAkJCXVubWFwX2ludGVyZmFjZShpbnRl
cmZhY2UpOwogCQkJcmV0dXJuIHJjOwpAQCAtNTAzLDggKzU1Nyw4IEBAIGlu
dCBkb19yZXN1bWUoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBi
dWZmZXJlZF9kYXRhICppbikKIAlpZiAoSVNfRVJSKGRvbWFpbikpCiAJCXJl
dHVybiAtUFRSX0VSUihkb21haW4pOwogCi0JZG9tYWluLT5zaHV0ZG93biA9
IDA7Ci0JCisJZG9tYWluLT5zaHV0ZG93biA9IGZhbHNlOworCiAJc2VuZF9h
Y2soY29ubiwgWFNfUkVTVU1FKTsKIAogCXJldHVybiAwOwpAQCAtNjQ3LDgg
KzcwMSwxMCBAQCBzdGF0aWMgaW50IGRvbTBfaW5pdCh2b2lkKQogCWlmIChw
b3J0ID09IC0xKQogCQlyZXR1cm4gLTE7CiAKLQlkb20wID0gbmV3X2RvbWFp
bihOVUxMLCB4ZW5idXNfbWFzdGVyX2RvbWlkKCksIHBvcnQpOwotCWlmIChk
b20wID09IE5VTEwpCisJZG9tMCA9IGFsbG9jX2RvbWFpbihOVUxMLCB4ZW5i
dXNfbWFzdGVyX2RvbWlkKCkpOworCWlmICghZG9tMCkKKwkJcmV0dXJuIC0x
OworCWlmIChuZXdfZG9tYWluKGRvbTAsIHBvcnQpKQogCQlyZXR1cm4gLTE7
CiAKIAlkb20wLT5pbnRlcmZhY2UgPSB4ZW5idXNfbWFwKCk7CkBAIC03Mjks
NiArNzg1LDY2IEBAIHZvaWQgZG9tYWluX2VudHJ5X2luYyhzdHJ1Y3QgY29u
bmVjdGlvbiAqY29ubiwgc3RydWN0IG5vZGUgKm5vZGUpCiAJfQogfQogCisv
KgorICogQ2hlY2sgd2hldGhlciBhIGRvbWFpbiB3YXMgY3JlYXRlZCBiZWZv
cmUgb3IgYWZ0ZXIgYSBzcGVjaWZpYyBnZW5lcmF0aW9uCisgKiBjb3VudCAo
dXNlZCBmb3IgdGVzdGluZyB3aGV0aGVyIGEgbm9kZSBwZXJtaXNzaW9uIGlz
IG9sZGVyIHRoYW4gYSBkb21haW4pLgorICoKKyAqIFJldHVybiB2YWx1ZXM6
CisgKiAtMTogZXJyb3IKKyAqICAwOiBkb21haW4gaGFzIGhpZ2hlciBnZW5l
cmF0aW9uIGNvdW50IChpdCBpcyB5b3VuZ2VyIHRoYW4gYSBub2RlIHdpdGgg
dGhlCisgKiAgICAgZ2l2ZW4gY291bnQpLCBvciBkb21haW4gaXNuJ3QgZXhp
c3RpbmcgYW55IGxvbmdlcgorICogIDE6IGRvbWFpbiBpcyBvbGRlciB0aGFu
IHRoZSBub2RlCisgKi8KK3N0YXRpYyBpbnQgY2hrX2RvbWFpbl9nZW5lcmF0
aW9uKHVuc2lnbmVkIGludCBkb21pZCwgdWludDY0X3QgZ2VuKQoreworCXN0
cnVjdCBkb21haW4gKmQ7CisJeGNfZG9taW5mb190IGRvbWluZm87CisKKwlp
ZiAoIXhjX2hhbmRsZSAmJiBkb21pZCA9PSAwKQorCQlyZXR1cm4gMTsKKwor
CWQgPSBmaW5kX2RvbWFpbl9zdHJ1Y3QoZG9taWQpOworCWlmIChkKQorCQly
ZXR1cm4gKGQtPmdlbmVyYXRpb24gPD0gZ2VuKSA/IDEgOiAwOworCisJaWYg
KCFnZXRfZG9tYWluX2luZm8oZG9taWQsICZkb21pbmZvKSkKKwkJcmV0dXJu
IDA7CisKKwlkID0gYWxsb2NfZG9tYWluKE5VTEwsIGRvbWlkKTsKKwlyZXR1
cm4gZCA/IDEgOiAtMTsKK30KKworLyoKKyAqIFJlbW92ZSBwZXJtaXNzaW9u
cyBmb3Igbm8gbG9uZ2VyIGV4aXN0aW5nIGRvbWFpbnMgaW4gb3JkZXIgdG8g
YXZvaWQgYSBuZXcKKyAqIGRvbWFpbiB3aXRoIHRoZSBzYW1lIGRvbWlkIGlu
aGVyaXRpbmcgdGhlIHBlcm1pc3Npb25zLgorICovCitpbnQgZG9tYWluX2Fk
anVzdF9ub2RlX3Blcm1zKHN0cnVjdCBub2RlICpub2RlKQoreworCXVuc2ln
bmVkIGludCBpOworCWludCByZXQ7CisKKwlyZXQgPSBjaGtfZG9tYWluX2dl
bmVyYXRpb24obm9kZS0+cGVybXMucFswXS5pZCwgbm9kZS0+Z2VuZXJhdGlv
bik7CisJaWYgKHJldCA8IDApCisJCXJldHVybiBlcnJubzsKKworCS8qIElm
IHRoZSBvd25lciBkb2Vzbid0IGV4aXN0IGFueSBsb25nZXIgZ2l2ZSBpdCB0
byBwcml2IGRvbWFpbi4gKi8KKwlpZiAoIXJldCkKKwkJbm9kZS0+cGVybXMu
cFswXS5pZCA9IHByaXZfZG9taWQ7CisKKwlmb3IgKGkgPSAxOyBpIDwgbm9k
ZS0+cGVybXMubnVtOyBpKyspIHsKKwkJaWYgKG5vZGUtPnBlcm1zLnBbaV0u
cGVybXMgJiBYU19QRVJNX0lHTk9SRSkKKwkJCWNvbnRpbnVlOworCQlyZXQg
PSBjaGtfZG9tYWluX2dlbmVyYXRpb24obm9kZS0+cGVybXMucFtpXS5pZCwK
KwkJCQkJICAgIG5vZGUtPmdlbmVyYXRpb24pOworCQlpZiAocmV0IDwgMCkK
KwkJCXJldHVybiBlcnJubzsKKwkJaWYgKCFyZXQpCisJCQlub2RlLT5wZXJt
cy5wW2ldLnBlcm1zIHw9IFhTX1BFUk1fSUdOT1JFOworCX0KKworCXJldHVy
biAwOworfQorCiB2b2lkIGRvbWFpbl9lbnRyeV9kZWMoc3RydWN0IGNvbm5l
Y3Rpb24gKmNvbm4sIHN0cnVjdCBub2RlICpub2RlKQogewogCXN0cnVjdCBk
b21haW4gKmQ7CmRpZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94ZW5zdG9y
ZWRfZG9tYWluLmggYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfZG9tYWlu
LmgKaW5kZXggMjU5MTgzOTYyYS4uNWUwMDA4NzIwNiAxMDA2NDQKLS0tIGEv
dG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2RvbWFpbi5oCisrKyBiL3Rvb2xz
L3hlbnN0b3JlL3hlbnN0b3JlZF9kb21haW4uaApAQCAtNTYsNiArNTYsOSBA
QCBib29sIGRvbWFpbl9jYW5fd3JpdGUoc3RydWN0IGNvbm5lY3Rpb24gKmNv
bm4pOwogCiBib29sIGRvbWFpbl9pc191bnByaXZpbGVnZWQoc3RydWN0IGNv
bm5lY3Rpb24gKmNvbm4pOwogCisvKiBSZW1vdmUgbm9kZSBwZXJtaXNzaW9u
cyBmb3Igbm8gbG9uZ2VyIGV4aXN0aW5nIGRvbWFpbnMuICovCitpbnQgZG9t
YWluX2FkanVzdF9ub2RlX3Blcm1zKHN0cnVjdCBub2RlICpub2RlKTsKKwog
LyogUXVvdGEgbWFuaXB1bGF0aW9uICovCiB2b2lkIGRvbWFpbl9lbnRyeV9p
bmMoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBub2RlICopOwog
dm9pZCBkb21haW5fZW50cnlfZGVjKHN0cnVjdCBjb25uZWN0aW9uICpjb25u
LCBzdHJ1Y3Qgbm9kZSAqKTsKZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3Jl
L3hlbnN0b3JlZF90cmFuc2FjdGlvbi5jIGIvdG9vbHMveGVuc3RvcmUveGVu
c3RvcmVkX3RyYW5zYWN0aW9uLmMKaW5kZXggYTdkOGM1ZDQ3NS4uMjg4MWYz
YjJlNCAxMDA2NDQKLS0tIGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3Ry
YW5zYWN0aW9uLmMKKysrIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3Ry
YW5zYWN0aW9uLmMKQEAgLTQ3LDcgKzQ3LDEyIEBACiAgKiB0cmFuc2FjdGlv
bi4KICAqIEVhY2ggdGltZSB0aGUgZ2xvYmFsIGdlbmVyYXRpb24gY291bnQg
aXMgY29waWVkIHRvIGVpdGhlciBhIG5vZGUgb3IgYQogICogdHJhbnNhY3Rp
b24gaXQgaXMgaW5jcmVtZW50ZWQuIFRoaXMgZW5zdXJlcyBhbGwgbm9kZXMg
YW5kL29yIHRyYW5zYWN0aW9ucwotICogYXJlIGhhdmluZyBhIHVuaXF1ZSBn
ZW5lcmF0aW9uIGNvdW50LgorICogYXJlIGhhdmluZyBhIHVuaXF1ZSBnZW5l
cmF0aW9uIGNvdW50LiBUaGUgaW5jcmVtZW50IGlzIGRvbmUgX2JlZm9yZV8g
dGhlCisgKiBjb3B5IGFzIHRoYXQgaXMgbmVlZGVkIGZvciBjaGVja2luZyB3
aGV0aGVyIGEgZG9tYWluIHdhcyBjcmVhdGVkIGJlZm9yZQorICogb3IgYWZ0
ZXIgYSBub2RlIGhhcyBiZWVuIHdyaXR0ZW4gKHRoZSBkb21haW4ncyBnZW5l
cmF0aW9uIGlzIHNldCB3aXRoIHRoZQorICogYWN0dWFsIGdlbmVyYXRpb24g
Y291bnQgd2l0aG91dCBpbmNyZW1lbnRpbmcgaXQsIGluIG9yZGVyIHRvIHN1
cHBvcnQKKyAqIHdyaXRpbmcgYSBub2RlIGZvciBhIGRvbWFpbiBiZWZvcmUg
dGhlIGRvbWFpbiBoYXMgYmVlbiBvZmZpY2lhbGx5CisgKiBpbnRyb2R1Y2Vk
KS4KICAqCiAgKiBUcmFuc2FjdGlvbiBjb25mbGljdHMgYXJlIGRldGVjdGVk
IGJ5IGNoZWNraW5nIHRoZSBnZW5lcmF0aW9uIGNvdW50IG9mIGFsbAogICog
bm9kZXMgcmVhZCBpbiB0aGUgdHJhbnNhY3Rpb24gdG8gbWF0Y2ggd2l0aCB0
aGUgZ2VuZXJhdGlvbiBjb3VudCBpbiB0aGUKQEAgLTE2MSw3ICsxNjYsNyBA
QCBzdHJ1Y3QgdHJhbnNhY3Rpb24KIH07CiAKIGV4dGVybiBpbnQgcXVvdGFf
bWF4X3RyYW5zYWN0aW9uOwotc3RhdGljIHVpbnQ2NF90IGdlbmVyYXRpb247
Cit1aW50NjRfdCBnZW5lcmF0aW9uOwogCiBzdGF0aWMgdm9pZCBzZXRfdGRi
X2tleShjb25zdCBjaGFyICpuYW1lLCBUREJfREFUQSAqa2V5KQogewpAQCAt
MjM3LDcgKzI0Miw3IEBAIGludCBhY2Nlc3Nfbm9kZShzdHJ1Y3QgY29ubmVj
dGlvbiAqY29ubiwgc3RydWN0IG5vZGUgKm5vZGUsCiAJYm9vbCBpbnRyb2R1
Y2UgPSBmYWxzZTsKIAogCWlmICh0eXBlICE9IE5PREVfQUNDRVNTX1JFQUQp
IHsKLQkJbm9kZS0+Z2VuZXJhdGlvbiA9IGdlbmVyYXRpb24rKzsKKwkJbm9k
ZS0+Z2VuZXJhdGlvbiA9ICsrZ2VuZXJhdGlvbjsKIAkJaWYgKGNvbm4gJiYg
IWNvbm4tPnRyYW5zYWN0aW9uKQogCQkJd3JsX2FwcGx5X2RlYml0X2RpcmVj
dChjb25uKTsKIAl9CkBAIC0zNzQsNyArMzc5LDcgQEAgc3RhdGljIGludCBm
aW5hbGl6ZV90cmFuc2FjdGlvbihzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwK
IAkJCQlpZiAoIWRhdGEuZHB0cikKIAkJCQkJZ290byBlcnI7CiAJCQkJaGRy
ID0gKHZvaWQgKilkYXRhLmRwdHI7Ci0JCQkJaGRyLT5nZW5lcmF0aW9uID0g
Z2VuZXJhdGlvbisrOworCQkJCWhkci0+Z2VuZXJhdGlvbiA9ICsrZ2VuZXJh
dGlvbjsKIAkJCQlyZXQgPSB0ZGJfc3RvcmUodGRiX2N0eCwga2V5LCBkYXRh
LAogCQkJCQkJVERCX1JFUExBQ0UpOwogCQkJCXRhbGxvY19mcmVlKGRhdGEu
ZHB0cik7CkBAIC00NjIsNyArNDY3LDcgQEAgaW50IGRvX3RyYW5zYWN0aW9u
X3N0YXJ0KHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3QgYnVmZmVy
ZWRfZGF0YSAqaW4pCiAJSU5JVF9MSVNUX0hFQUQoJnRyYW5zLT5hY2Nlc3Nl
ZCk7CiAJSU5JVF9MSVNUX0hFQUQoJnRyYW5zLT5jaGFuZ2VkX2RvbWFpbnMp
OwogCXRyYW5zLT5mYWlsID0gZmFsc2U7Ci0JdHJhbnMtPmdlbmVyYXRpb24g
PSBnZW5lcmF0aW9uKys7CisJdHJhbnMtPmdlbmVyYXRpb24gPSArK2dlbmVy
YXRpb247CiAKIAkvKiBQaWNrIGFuIHVudXNlZCB0cmFuc2FjdGlvbiBpZGVu
dGlmaWVyLiAqLwogCWRvIHsKZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3Jl
L3hlbnN0b3JlZF90cmFuc2FjdGlvbi5oIGIvdG9vbHMveGVuc3RvcmUveGVu
c3RvcmVkX3RyYW5zYWN0aW9uLmgKaW5kZXggMzM4NmJhYzU2NS4uNDNhMTYy
YmVhMyAxMDA2NDQKLS0tIGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3Ry
YW5zYWN0aW9uLmgKKysrIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3Ry
YW5zYWN0aW9uLmgKQEAgLTI3LDYgKzI3LDggQEAgZW51bSBub2RlX2FjY2Vz
c190eXBlIHsKIAogc3RydWN0IHRyYW5zYWN0aW9uOwogCitleHRlcm4gdWlu
dDY0X3QgZ2VuZXJhdGlvbjsKKwogaW50IGRvX3RyYW5zYWN0aW9uX3N0YXJ0
KHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3QgYnVmZmVyZWRfZGF0
YSAqbm9kZSk7CiBpbnQgZG9fdHJhbnNhY3Rpb25fZW5kKHN0cnVjdCBjb25u
ZWN0aW9uICpjb25uLCBzdHJ1Y3QgYnVmZmVyZWRfZGF0YSAqaW4pOwogCmRp
ZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94c19saWIuYyBiL3Rvb2xzL3hl
bnN0b3JlL3hzX2xpYi5jCmluZGV4IDNlNDNmODgwOWQuLmQ0MDdkNTcxM2Eg
MTAwNjQ0Ci0tLSBhL3Rvb2xzL3hlbnN0b3JlL3hzX2xpYi5jCisrKyBiL3Rv
b2xzL3hlbnN0b3JlL3hzX2xpYi5jCkBAIC0xNTIsNyArMTUyLDcgQEAgYm9v
bCB4c19zdHJpbmdzX3RvX3Blcm1zKHN0cnVjdCB4c19wZXJtaXNzaW9ucyAq
cGVybXMsIHVuc2lnbmVkIGludCBudW0sCiBib29sIHhzX3Blcm1fdG9fc3Ry
aW5nKGNvbnN0IHN0cnVjdCB4c19wZXJtaXNzaW9ucyAqcGVybSwKICAgICAg
ICAgICAgICAgICAgICAgICAgY2hhciAqYnVmZmVyLCBzaXplX3QgYnVmX2xl
bikKIHsKLQlzd2l0Y2ggKChpbnQpcGVybS0+cGVybXMpIHsKKwlzd2l0Y2gg
KChpbnQpcGVybS0+cGVybXMgJiB+WFNfUEVSTV9JR05PUkUpIHsKIAljYXNl
IFhTX1BFUk1fV1JJVEU6CiAJCSpidWZmZXIgPSAndyc7CiAJCWJyZWFrOwo=

--=separator
Content-Type: application/octet-stream; name="xsa322-c.patch"
Content-Disposition: attachment; filename="xsa322-c.patch"
Content-Transfer-Encoding: base64

RnJvbTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpTdWJqZWN0
OiB0b29scy94ZW5zdG9yZTogcmV2b2tlIGFjY2VzcyByaWdodHMgZm9yIHJl
bW92ZWQgZG9tYWlucwoKQWNjZXNzIHJpZ2h0cyBvZiBYZW5zdG9yZSBub2Rl
cyBhcmUgcGVyIGRvbWlkLiBVbmZvcnR1bmF0ZWx5IGV4aXN0aW5nCmdyYW50
ZWQgYWNjZXNzIHJpZ2h0cyBhcmUgbm90IHJlbW92ZWQgd2hlbiBhIGRvbWFp
biBpcyBiZWluZyBkZXN0cm95ZWQuClRoaXMgbWVhbnMgdGhhdCBhIG5ldyBk
b21haW4gY3JlYXRlZCB3aXRoIHRoZSBzYW1lIGRvbWlkIHdpbGwgaW5oZXJp
dAp0aGUgYWNjZXNzIHJpZ2h0cyB0byBYZW5zdG9yZSBub2RlcyBmcm9tIHRo
ZSBwcmV2aW91cyBkb21haW4ocykgd2l0aAp0aGUgc2FtZSBkb21pZC4KClRo
aXMgY2FuIGJlIGF2b2lkZWQgYnkgYWRkaW5nIGEgZ2VuZXJhdGlvbiBjb3Vu
dGVyIHRvIGVhY2ggZG9tYWluLgpUaGUgZ2VuZXJhdGlvbiBjb3VudGVyIG9m
IHRoZSBkb21haW4gaXMgc2V0IHRvIHRoZSBnbG9iYWwgZ2VuZXJhdGlvbgpj
b3VudGVyIHdoZW4gYSBkb21haW4gc3RydWN0dXJlIGlzIGJlaW5nIGFsbG9j
YXRlZC4gV2hlbiByZWFkaW5nIG9yCndyaXRpbmcgYSBub2RlIGFsbCBwZXJt
aXNzaW9ucyBvZiBkb21haW5zIHdoaWNoIGFyZSB5b3VuZ2VyIHRoYW4gdGhl
Cm5vZGUgaXRzZWxmIGFyZSBkcm9wcGVkLiBUaGlzIGlzIGRvbmUgYnkgZmxh
Z2dpbmcgdGhlIHJlbGF0ZWQgZW50cnkKYXMgaW52YWxpZCBpbiBvcmRlciB0
byBhdm9pZCBtb2RpZnlpbmcgcGVybWlzc2lvbnMgaW4gYSB3YXkgdGhlIHVz
ZXIKY291bGQgZGV0ZWN0LgoKQSBzcGVjaWFsIGNhc2UgaGFzIHRvIGJlIGNv
bnNpZGVyZWQ6IGZvciBhIG5ldyBkb21haW4gdGhlIGZpcnN0ClhlbnN0b3Jl
IGVudHJpZXMgYXJlIGFscmVhZHkgd3JpdHRlbiBiZWZvcmUgdGhlIGRvbWFp
biBpcyBvZmZpY2lhbGx5CmludHJvZHVjZWQgaW4gWGVuc3RvcmUuIEluIG9y
ZGVyIG5vdCB0byBkcm9wIHRoZSBwZXJtaXNzaW9ucyBmb3IgdGhlCm5ldyBk
b21haW4gYSBkb21haW4gc3RydWN0IGlzIGFsbG9jYXRlZCBldmVuIGJlZm9y
ZSBpbnRyb2R1Y3Rpb24gaWYKdGhlIGh5cGVydmlzb3IgaXMgYXdhcmUgb2Yg
dGhlIGRvbWFpbi4gVGhpcyByZXF1aXJlcyBhZGRpbmcgYW5vdGhlcgpib29s
ICJpbnRyb2R1Y2VkIiB0byBzdHJ1Y3QgZG9tYWluIGluIHhlbnN0b3JlZC4g
SW4gb3JkZXIgdG8gYXZvaWQKYWRkaXRpb25hbCBwYWRkaW5nIGhvbGVzIGNv
bnZlcnQgdGhlIHNodXRkb3duIGZsYWcgdG8gYm9vbCwgdG9vLgoKQXMgdmVy
aWZ5aW5nIHBlcm1pc3Npb25zIGhhcyBpdHMgcHJpY2UgcmVnYXJkaW5nIHJ1
bnRpbWUgYWRkIGEgbmV3CnF1b3RhIGZvciBsaW1pdGluZyB0aGUgbnVtYmVy
IG9mIHBlcm1pc3Npb25zIGFuIHVucHJpdmlsZWdlZCBkb21haW4KY2FuIHNl
dCBmb3IgYSBub2RlLiBUaGUgZGVmYXVsdCBmb3IgdGhhdCBuZXcgcXVvdGEg
aXMgNS4KClRoaXMgaXMgcGFydCBvZiBYU0EtMzIyLgoKU2lnbmVkLW9mZi1i
eTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpSZXZpZXdlZC1i
eTogUGF1bCBEdXJyYW50IDxwYXVsQHhlbi5vcmc+CkFja2VkLWJ5OiBKdWxp
ZW4gR3JhbGwgPGp1bGllbkBhbWF6b24uY29tPgoKZGlmZiAtLWdpdCBhL3Rv
b2xzL2luY2x1ZGUveGVuc3RvcmVfbGliLmggYi90b29scy9pbmNsdWRlL3hl
bnN0b3JlX2xpYi5oCmluZGV4IDBmZmJhZTllYjUuLjRjOWI2ZDE2ODUgMTAw
NjQ0Ci0tLSBhL3Rvb2xzL2luY2x1ZGUveGVuc3RvcmVfbGliLmgKKysrIGIv
dG9vbHMvaW5jbHVkZS94ZW5zdG9yZV9saWIuaApAQCAtMzQsNiArMzQsNyBA
QCBlbnVtIHhzX3Blcm1fdHlwZSB7CiAJLyogSW50ZXJuYWwgdXNlLiAqLwog
CVhTX1BFUk1fRU5PRU5UX09LID0gNCwKIAlYU19QRVJNX09XTkVSID0gOCwK
KwlYU19QRVJNX0lHTk9SRSA9IDE2LAogfTsKIAogc3RydWN0IHhzX3Blcm1p
c3Npb25zCmRpZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRf
Y29yZS5jIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYwppbmRl
eCBhZDE5MDNjNTU1Li5jYmVmZTRjODE5IDEwMDY0NAotLS0gYS90b29scy94
ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jCisrKyBiL3Rvb2xzL3hlbnN0b3Jl
L3hlbnN0b3JlZF9jb3JlLmMKQEAgLTEwMSw2ICsxMDEsNyBAQCBpbnQgcXVv
dGFfbmJfZW50cnlfcGVyX2RvbWFpbiA9IDEwMDA7CiBpbnQgcXVvdGFfbmJf
d2F0Y2hfcGVyX2RvbWFpbiA9IDEyODsKIGludCBxdW90YV9tYXhfZW50cnlf
c2l6ZSA9IDIwNDg7IC8qIDJLICovCiBpbnQgcXVvdGFfbWF4X3RyYW5zYWN0
aW9uID0gMTA7CitpbnQgcXVvdGFfbmJfcGVybXNfcGVyX25vZGUgPSA1Owog
CiB2b2lkIHRyYWNlKGNvbnN0IGNoYXIgKmZtdCwgLi4uKQogewpAQCAtNDAz
LDggKzQwNCwxMyBAQCBzdHJ1Y3Qgbm9kZSAqcmVhZF9ub2RlKHN0cnVjdCBj
b25uZWN0aW9uICpjb25uLCBjb25zdCB2b2lkICpjdHgsCiAKIAkvKiBQZXJt
aXNzaW9ucyBhcmUgc3RydWN0IHhzX3Blcm1pc3Npb25zLiAqLwogCW5vZGUt
PnBlcm1zLnAgPSBoZHItPnBlcm1zOworCWlmIChkb21haW5fYWRqdXN0X25v
ZGVfcGVybXMobm9kZSkpIHsKKwkJdGFsbG9jX2ZyZWUobm9kZSk7CisJCXJl
dHVybiBOVUxMOworCX0KKwogCS8qIERhdGEgaXMgYmluYXJ5IGJsb2IgKHVz
dWFsbHkgYXNjaWksIG5vIG51bCkuICovCi0Jbm9kZS0+ZGF0YSA9IG5vZGUt
PnBlcm1zLnAgKyBub2RlLT5wZXJtcy5udW07CisJbm9kZS0+ZGF0YSA9IG5v
ZGUtPnBlcm1zLnAgKyBoZHItPm51bV9wZXJtczsKIAkvKiBDaGlsZHJlbiBp
cyBzdHJpbmdzLCBudWwgc2VwYXJhdGVkLiAqLwogCW5vZGUtPmNoaWxkcmVu
ID0gbm9kZS0+ZGF0YSArIG5vZGUtPmRhdGFsZW47CiAKQEAgLTQyMCw2ICs0
MjYsOSBAQCBpbnQgd3JpdGVfbm9kZV9yYXcoc3RydWN0IGNvbm5lY3Rpb24g
KmNvbm4sIFREQl9EQVRBICprZXksIHN0cnVjdCBub2RlICpub2RlLAogCXZv
aWQgKnA7CiAJc3RydWN0IHhzX3RkYl9yZWNvcmRfaGRyICpoZHI7CiAKKwlp
ZiAoZG9tYWluX2FkanVzdF9ub2RlX3Blcm1zKG5vZGUpKQorCQlyZXR1cm4g
ZXJybm87CisKIAlkYXRhLmRzaXplID0gc2l6ZW9mKCpoZHIpCiAJCSsgbm9k
ZS0+cGVybXMubnVtICogc2l6ZW9mKG5vZGUtPnBlcm1zLnBbMF0pCiAJCSsg
bm9kZS0+ZGF0YWxlbiArIG5vZGUtPmNoaWxkbGVuOwpAQCAtNDc2LDggKzQ4
NSw5IEBAIGVudW0geHNfcGVybV90eXBlIHBlcm1fZm9yX2Nvbm4oc3RydWN0
IGNvbm5lY3Rpb24gKmNvbm4sCiAJCXJldHVybiAoWFNfUEVSTV9SRUFEfFhT
X1BFUk1fV1JJVEV8WFNfUEVSTV9PV05FUikgJiBtYXNrOwogCiAJZm9yIChp
ID0gMTsgaSA8IHBlcm1zLT5udW07IGkrKykKLQkJaWYgKHBlcm1zLT5wW2ld
LmlkID09IGNvbm4tPmlkCi0gICAgICAgICAgICAgICAgICAgICAgICB8fCAo
Y29ubi0+dGFyZ2V0ICYmIHBlcm1zLT5wW2ldLmlkID09IGNvbm4tPnRhcmdl
dC0+aWQpKQorCQlpZiAoIShwZXJtcy0+cFtpXS5wZXJtcyAmIFhTX1BFUk1f
SUdOT1JFKSAmJgorCQkgICAgKHBlcm1zLT5wW2ldLmlkID09IGNvbm4tPmlk
IHx8CisJCSAgICAgKGNvbm4tPnRhcmdldCAmJiBwZXJtcy0+cFtpXS5pZCA9
PSBjb25uLT50YXJnZXQtPmlkKSkpCiAJCQlyZXR1cm4gcGVybXMtPnBbaV0u
cGVybXMgJiBtYXNrOwogCiAJcmV0dXJuIHBlcm1zLT5wWzBdLnBlcm1zICYg
bWFzazsKQEAgLTEyMzksOCArMTI0OSwxMiBAQCBzdGF0aWMgaW50IGRvX3Nl
dF9wZXJtcyhzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgc3RydWN0IGJ1ZmZl
cmVkX2RhdGEgKmluKQogCWlmIChwZXJtcy5udW0gPCAyKQogCQlyZXR1cm4g
RUlOVkFMOwogCi0JcGVybXN0ciA9IGluLT5idWZmZXIgKyBzdHJsZW4oaW4t
PmJ1ZmZlcikgKyAxOwogCXBlcm1zLm51bS0tOworCWlmIChkb21haW5faXNf
dW5wcml2aWxlZ2VkKGNvbm4pICYmCisJICAgIHBlcm1zLm51bSA+IHF1b3Rh
X25iX3Blcm1zX3Blcl9ub2RlKQorCQlyZXR1cm4gRU5PU1BDOworCisJcGVy
bXN0ciA9IGluLT5idWZmZXIgKyBzdHJsZW4oaW4tPmJ1ZmZlcikgKyAxOwog
CiAJcGVybXMucCA9IHRhbGxvY19hcnJheShpbiwgc3RydWN0IHhzX3Blcm1p
c3Npb25zLCBwZXJtcy5udW0pOwogCWlmICghcGVybXMucCkKQEAgLTE4Nzks
NiArMTg5Myw3IEBAIHN0YXRpYyB2b2lkIHVzYWdlKHZvaWQpCiAiICAtUywg
LS1lbnRyeS1zaXplIDxzaXplPiBsaW1pdCB0aGUgc2l6ZSBvZiBlbnRyeSBw
ZXIgZG9tYWluLCBhbmRcbiIKICIgIC1XLCAtLXdhdGNoLW5iIDxuYj4gICAg
IGxpbWl0IHRoZSBudW1iZXIgb2Ygd2F0Y2hlcyBwZXIgZG9tYWluLFxuIgog
IiAgLXQsIC0tdHJhbnNhY3Rpb24gPG5iPiAgbGltaXQgdGhlIG51bWJlciBv
ZiB0cmFuc2FjdGlvbiBhbGxvd2VkIHBlciBkb21haW4sXG4iCisiICAtQSwg
LS1wZXJtLW5iIDxuYj4gICAgICBsaW1pdCB0aGUgbnVtYmVyIG9mIHBlcm1p
c3Npb25zIHBlciBub2RlLFxuIgogIiAgLVIsIC0tbm8tcmVjb3ZlcnkgICAg
ICAgdG8gcmVxdWVzdCB0aGF0IG5vIHJlY292ZXJ5IHNob3VsZCBiZSBhdHRl
bXB0ZWQgd2hlblxuIgogIiAgICAgICAgICAgICAgICAgICAgICAgICAgdGhl
IHN0b3JlIGlzIGNvcnJ1cHRlZCAoZGVidWcgb25seSksXG4iCiAiICAtSSwg
LS1pbnRlcm5hbC1kYiAgICAgICBzdG9yZSBkYXRhYmFzZSBpbiBtZW1vcnks
IG5vdCBvbiBkaXNrXG4iCkBAIC0xODk5LDYgKzE5MTQsNyBAQCBzdGF0aWMg
c3RydWN0IG9wdGlvbiBvcHRpb25zW10gPSB7CiAJeyAiZW50cnktc2l6ZSIs
IDEsIE5VTEwsICdTJyB9LAogCXsgInRyYWNlLWZpbGUiLCAxLCBOVUxMLCAn
VCcgfSwKIAl7ICJ0cmFuc2FjdGlvbiIsIDEsIE5VTEwsICd0JyB9LAorCXsg
InBlcm0tbmIiLCAxLCBOVUxMLCAnQScgfSwKIAl7ICJuby1yZWNvdmVyeSIs
IDAsIE5VTEwsICdSJyB9LAogCXsgImludGVybmFsLWRiIiwgMCwgTlVMTCwg
J0knIH0sCiAJeyAidmVyYm9zZSIsIDAsIE5VTEwsICdWJyB9LApAQCAtMTky
MSw3ICsxOTM3LDcgQEAgaW50IG1haW4oaW50IGFyZ2MsIGNoYXIgKmFyZ3Zb
XSkKIAlpbnQgdGltZW91dDsKIAogCi0Jd2hpbGUgKChvcHQgPSBnZXRvcHRf
bG9uZyhhcmdjLCBhcmd2LCAiREU6RjpITlBTOnQ6VDpSVlc6Iiwgb3B0aW9u
cywKKwl3aGlsZSAoKG9wdCA9IGdldG9wdF9sb25nKGFyZ2MsIGFyZ3YsICJE
RTpGOkhOUFM6dDpBOlQ6UlZXOiIsIG9wdGlvbnMsCiAJCQkJICBOVUxMKSkg
IT0gLTEpIHsKIAkJc3dpdGNoIChvcHQpIHsKIAkJY2FzZSAnRCc6CkBAIC0x
OTYzLDYgKzE5NzksOSBAQCBpbnQgbWFpbihpbnQgYXJnYywgY2hhciAqYXJn
dltdKQogCQljYXNlICdXJzoKIAkJCXF1b3RhX25iX3dhdGNoX3Blcl9kb21h
aW4gPSBzdHJ0b2wob3B0YXJnLCBOVUxMLCAxMCk7CiAJCQlicmVhazsKKwkJ
Y2FzZSAnQSc6CisJCQlxdW90YV9uYl9wZXJtc19wZXJfbm9kZSA9IHN0cnRv
bChvcHRhcmcsIE5VTEwsIDEwKTsKKwkJCWJyZWFrOwogCQljYXNlICdlJzoK
IAkJCWRvbTBfZXZlbnQgPSBzdHJ0b2wob3B0YXJnLCBOVUxMLCAxMCk7CiAJ
CQlicmVhazsKZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3Jl
ZF9kb21haW4uYyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9kb21haW4u
YwppbmRleCBjZjIzOWMwNDRiLi43MTY5ZGE5ODUxIDEwMDY0NAotLS0gYS90
b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfZG9tYWluLmMKKysrIGIvdG9vbHMv
eGVuc3RvcmUveGVuc3RvcmVkX2RvbWFpbi5jCkBAIC02Nyw4ICs2NywxNCBA
QCBzdHJ1Y3QgZG9tYWluCiAJLyogVGhlIGNvbm5lY3Rpb24gYXNzb2NpYXRl
ZCB3aXRoIHRoaXMuICovCiAJc3RydWN0IGNvbm5lY3Rpb24gKmNvbm47CiAK
KwkvKiBHZW5lcmF0aW9uIGNvdW50IGF0IGRvbWFpbiBpbnRyb2R1Y3Rpb24g
dGltZS4gKi8KKwl1aW50NjRfdCBnZW5lcmF0aW9uOworCiAJLyogSGF2ZSB3
ZSBub3RpY2VkIHRoYXQgdGhpcyBkb21haW4gaXMgc2h1dGRvd24/ICovCi0J
aW50IHNodXRkb3duOworCWJvb2wgc2h1dGRvd247CisKKwkvKiBIYXMgZG9t
YWluIGJlZW4gb2ZmaWNpYWxseSBpbnRyb2R1Y2VkPyAqLworCWJvb2wgaW50
cm9kdWNlZDsKIAogCS8qIG51bWJlciBvZiBlbnRyeSBmcm9tIHRoaXMgZG9t
YWluIGluIHRoZSBzdG9yZSAqLwogCWludCBuYmVudHJ5OwpAQCAtMTg4LDYg
KzE5NCw5IEBAIHN0YXRpYyBpbnQgZGVzdHJveV9kb21haW4odm9pZCAqX2Rv
bWFpbikKIAogCWxpc3RfZGVsKCZkb21haW4tPmxpc3QpOwogCisJaWYgKCFk
b21haW4tPmludHJvZHVjZWQpCisJCXJldHVybiAwOworCiAJaWYgKGRvbWFp
bi0+cG9ydCkgewogCQlpZiAoeGVuZXZ0Y2huX3VuYmluZCh4Y2VfaGFuZGxl
LCBkb21haW4tPnBvcnQpID09IC0xKQogCQkJZXByaW50ZigiPiBVbmJpbmRp
bmcgcG9ydCAlaSBmYWlsZWQhXG4iLCBkb21haW4tPnBvcnQpOwpAQCAtMjA5
LDIxICsyMTgsMzQgQEAgc3RhdGljIGludCBkZXN0cm95X2RvbWFpbih2b2lk
ICpfZG9tYWluKQogCXJldHVybiAwOwogfQogCitzdGF0aWMgYm9vbCBnZXRf
ZG9tYWluX2luZm8odW5zaWduZWQgaW50IGRvbWlkLCB4Y19kb21pbmZvX3Qg
KmRvbWluZm8pCit7CisJcmV0dXJuIHhjX2RvbWFpbl9nZXRpbmZvKCp4Y19o
YW5kbGUsIGRvbWlkLCAxLCBkb21pbmZvKSA9PSAxICYmCisJICAgICAgIGRv
bWluZm8tPmRvbWlkID09IGRvbWlkOworfQorCiBzdGF0aWMgdm9pZCBkb21h
aW5fY2xlYW51cCh2b2lkKQogewogCXhjX2RvbWluZm9fdCBkb21pbmZvOwog
CXN0cnVjdCBkb21haW4gKmRvbWFpbjsKIAlzdHJ1Y3QgY29ubmVjdGlvbiAq
Y29ubjsKIAlpbnQgbm90aWZ5ID0gMDsKKwlib29sIGRvbV92YWxpZDsKIAog
IGFnYWluOgogCWxpc3RfZm9yX2VhY2hfZW50cnkoZG9tYWluLCAmZG9tYWlu
cywgbGlzdCkgewotCQlpZiAoeGNfZG9tYWluX2dldGluZm8oKnhjX2hhbmRs
ZSwgZG9tYWluLT5kb21pZCwgMSwKLQkJCQkgICAgICAmZG9taW5mbykgPT0g
MSAmJgotCQkgICAgZG9taW5mby5kb21pZCA9PSBkb21haW4tPmRvbWlkKSB7
CisJCWRvbV92YWxpZCA9IGdldF9kb21haW5faW5mbyhkb21haW4tPmRvbWlk
LCAmZG9taW5mbyk7CisJCWlmICghZG9tYWluLT5pbnRyb2R1Y2VkKSB7CisJ
CQlpZiAoIWRvbV92YWxpZCkgeworCQkJCXRhbGxvY19mcmVlKGRvbWFpbik7
CisJCQkJZ290byBhZ2FpbjsKKwkJCX0KKwkJCWNvbnRpbnVlOworCQl9CisJ
CWlmIChkb21fdmFsaWQpIHsKIAkJCWlmICgoZG9taW5mby5jcmFzaGVkIHx8
IGRvbWluZm8uc2h1dGRvd24pCiAJCQkgICAgJiYgIWRvbWFpbi0+c2h1dGRv
d24pIHsKLQkJCQlkb21haW4tPnNodXRkb3duID0gMTsKKwkJCQlkb21haW4t
PnNodXRkb3duID0gdHJ1ZTsKIAkJCQlub3RpZnkgPSAxOwogCQkJfQogCQkJ
aWYgKCFkb21pbmZvLmR5aW5nKQpAQCAtMjg5LDU4ICszMTEsODQgQEAgc3Rh
dGljIGNoYXIgKnRhbGxvY19kb21haW5fcGF0aCh2b2lkICpjb250ZXh0LCB1
bnNpZ25lZCBpbnQgZG9taWQpCiAJcmV0dXJuIHRhbGxvY19hc3ByaW50Zihj
b250ZXh0LCAiL2xvY2FsL2RvbWFpbi8ldSIsIGRvbWlkKTsKIH0KIAotc3Rh
dGljIHN0cnVjdCBkb21haW4gKm5ld19kb21haW4odm9pZCAqY29udGV4dCwg
dW5zaWduZWQgaW50IGRvbWlkLAotCQkJCSBpbnQgcG9ydCkKK3N0YXRpYyBz
dHJ1Y3QgZG9tYWluICpmaW5kX2RvbWFpbl9zdHJ1Y3QodW5zaWduZWQgaW50
IGRvbWlkKQoreworCXN0cnVjdCBkb21haW4gKmk7CisKKwlsaXN0X2Zvcl9l
YWNoX2VudHJ5KGksICZkb21haW5zLCBsaXN0KSB7CisJCWlmIChpLT5kb21p
ZCA9PSBkb21pZCkKKwkJCXJldHVybiBpOworCX0KKwlyZXR1cm4gTlVMTDsK
K30KKworc3RhdGljIHN0cnVjdCBkb21haW4gKmFsbG9jX2RvbWFpbih2b2lk
ICpjb250ZXh0LCB1bnNpZ25lZCBpbnQgZG9taWQpCiB7CiAJc3RydWN0IGRv
bWFpbiAqZG9tYWluOwotCWludCByYzsKIAogCWRvbWFpbiA9IHRhbGxvYyhj
b250ZXh0LCBzdHJ1Y3QgZG9tYWluKTsKLQlpZiAoIWRvbWFpbikKKwlpZiAo
IWRvbWFpbikgeworCQllcnJubyA9IEVOT01FTTsKIAkJcmV0dXJuIE5VTEw7
CisJfQogCi0JZG9tYWluLT5wb3J0ID0gMDsKLQlkb21haW4tPnNodXRkb3du
ID0gMDsKIAlkb21haW4tPmRvbWlkID0gZG9taWQ7Ci0JZG9tYWluLT5wYXRo
ID0gdGFsbG9jX2RvbWFpbl9wYXRoKGRvbWFpbiwgZG9taWQpOwotCWlmICgh
ZG9tYWluLT5wYXRoKQotCQlyZXR1cm4gTlVMTDsKKwlkb21haW4tPmdlbmVy
YXRpb24gPSBnZW5lcmF0aW9uOworCWRvbWFpbi0+aW50cm9kdWNlZCA9IGZh
bHNlOwogCi0Jd3JsX2RvbWFpbl9uZXcoZG9tYWluKTsKKwl0YWxsb2Nfc2V0
X2Rlc3RydWN0b3IoZG9tYWluLCBkZXN0cm95X2RvbWFpbik7CiAKIAlsaXN0
X2FkZCgmZG9tYWluLT5saXN0LCAmZG9tYWlucyk7Ci0JdGFsbG9jX3NldF9k
ZXN0cnVjdG9yKGRvbWFpbiwgZGVzdHJveV9kb21haW4pOworCisJcmV0dXJu
IGRvbWFpbjsKK30KKworc3RhdGljIGludCBuZXdfZG9tYWluKHN0cnVjdCBk
b21haW4gKmRvbWFpbiwgaW50IHBvcnQpCit7CisJaW50IHJjOworCisJZG9t
YWluLT5wb3J0ID0gMDsKKwlkb21haW4tPnNodXRkb3duID0gZmFsc2U7CisJ
ZG9tYWluLT5wYXRoID0gdGFsbG9jX2RvbWFpbl9wYXRoKGRvbWFpbiwgZG9t
YWluLT5kb21pZCk7CisJaWYgKCFkb21haW4tPnBhdGgpIHsKKwkJZXJybm8g
PSBFTk9NRU07CisJCXJldHVybiBlcnJubzsKKwl9CisKKwl3cmxfZG9tYWlu
X25ldyhkb21haW4pOwogCiAJLyogVGVsbCBrZXJuZWwgd2UncmUgaW50ZXJl
c3RlZCBpbiB0aGlzIGV2ZW50LiAqLwotCXJjID0geGVuZXZ0Y2huX2JpbmRf
aW50ZXJkb21haW4oeGNlX2hhbmRsZSwgZG9taWQsIHBvcnQpOworCXJjID0g
eGVuZXZ0Y2huX2JpbmRfaW50ZXJkb21haW4oeGNlX2hhbmRsZSwgZG9tYWlu
LT5kb21pZCwgcG9ydCk7CiAJaWYgKHJjID09IC0xKQotCSAgICByZXR1cm4g
TlVMTDsKKwkJcmV0dXJuIGVycm5vOwogCWRvbWFpbi0+cG9ydCA9IHJjOwog
CisJZG9tYWluLT5pbnRyb2R1Y2VkID0gdHJ1ZTsKKwogCWRvbWFpbi0+Y29u
biA9IG5ld19jb25uZWN0aW9uKHdyaXRlY2huLCByZWFkY2huKTsKLQlpZiAo
IWRvbWFpbi0+Y29ubikKLQkJcmV0dXJuIE5VTEw7CisJaWYgKCFkb21haW4t
PmNvbm4pICB7CisJCWVycm5vID0gRU5PTUVNOworCQlyZXR1cm4gZXJybm87
CisJfQogCiAJZG9tYWluLT5jb25uLT5kb21haW4gPSBkb21haW47Ci0JZG9t
YWluLT5jb25uLT5pZCA9IGRvbWlkOworCWRvbWFpbi0+Y29ubi0+aWQgPSBk
b21haW4tPmRvbWlkOwogCiAJZG9tYWluLT5yZW1vdGVfcG9ydCA9IHBvcnQ7
CiAJZG9tYWluLT5uYmVudHJ5ID0gMDsKIAlkb21haW4tPm5id2F0Y2ggPSAw
OwogCi0JcmV0dXJuIGRvbWFpbjsKKwlyZXR1cm4gMDsKIH0KIAogCiBzdGF0
aWMgc3RydWN0IGRvbWFpbiAqZmluZF9kb21haW5fYnlfZG9taWQodW5zaWdu
ZWQgaW50IGRvbWlkKQogewotCXN0cnVjdCBkb21haW4gKmk7CisJc3RydWN0
IGRvbWFpbiAqZDsKIAotCWxpc3RfZm9yX2VhY2hfZW50cnkoaSwgJmRvbWFp
bnMsIGxpc3QpIHsKLQkJaWYgKGktPmRvbWlkID09IGRvbWlkKQotCQkJcmV0
dXJuIGk7Ci0JfQotCXJldHVybiBOVUxMOworCWQgPSBmaW5kX2RvbWFpbl9z
dHJ1Y3QoZG9taWQpOworCisJcmV0dXJuIChkICYmIGQtPmludHJvZHVjZWQp
ID8gZCA6IE5VTEw7CiB9CiAKIHN0YXRpYyB2b2lkIGRvbWFpbl9jb25uX3Jl
c2V0KHN0cnVjdCBkb21haW4gKmRvbWFpbikKQEAgLTM4MywxNSArNDMxLDIx
IEBAIGludCBkb19pbnRyb2R1Y2Uoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4s
IHN0cnVjdCBidWZmZXJlZF9kYXRhICppbikKIAlpZiAocG9ydCA8PSAwKQog
CQlyZXR1cm4gRUlOVkFMOwogCi0JZG9tYWluID0gZmluZF9kb21haW5fYnlf
ZG9taWQoZG9taWQpOworCWRvbWFpbiA9IGZpbmRfZG9tYWluX3N0cnVjdChk
b21pZCk7CiAKIAlpZiAoZG9tYWluID09IE5VTEwpIHsKKwkJLyogSGFuZyBk
b21haW4gb2ZmICJpbiIgdW50aWwgd2UncmUgZmluaXNoZWQuICovCisJCWRv
bWFpbiA9IGFsbG9jX2RvbWFpbihpbiwgZG9taWQpOworCQlpZiAoZG9tYWlu
ID09IE5VTEwpCisJCQlyZXR1cm4gRU5PTUVNOworCX0KKworCWlmICghZG9t
YWluLT5pbnRyb2R1Y2VkKSB7CiAJCWludGVyZmFjZSA9IG1hcF9pbnRlcmZh
Y2UoZG9taWQpOwogCQlpZiAoIWludGVyZmFjZSkKIAkJCXJldHVybiBlcnJu
bzsKIAkJLyogSGFuZyBkb21haW4gb2ZmICJpbiIgdW50aWwgd2UncmUgZmlu
aXNoZWQuICovCi0JCWRvbWFpbiA9IG5ld19kb21haW4oaW4sIGRvbWlkLCBw
b3J0KTsKLQkJaWYgKCFkb21haW4pIHsKKwkJaWYgKG5ld19kb21haW4oZG9t
YWluLCBwb3J0KSkgewogCQkJcmMgPSBlcnJubzsKIAkJCXVubWFwX2ludGVy
ZmFjZShpbnRlcmZhY2UpOwogCQkJcmV0dXJuIHJjOwpAQCAtNDk3LDggKzU1
MSw4IEBAIGludCBkb19yZXN1bWUoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4s
IHN0cnVjdCBidWZmZXJlZF9kYXRhICppbikKIAlpZiAoSVNfRVJSKGRvbWFp
bikpCiAJCXJldHVybiAtUFRSX0VSUihkb21haW4pOwogCi0JZG9tYWluLT5z
aHV0ZG93biA9IDA7Ci0JCisJZG9tYWluLT5zaHV0ZG93biA9IGZhbHNlOwor
CiAJc2VuZF9hY2soY29ubiwgWFNfUkVTVU1FKTsKIAogCXJldHVybiAwOwpA
QCAtNjQxLDggKzY5NSwxMCBAQCBzdGF0aWMgaW50IGRvbTBfaW5pdCh2b2lk
KQogCWlmIChwb3J0ID09IC0xKQogCQlyZXR1cm4gLTE7CiAKLQlkb20wID0g
bmV3X2RvbWFpbihOVUxMLCB4ZW5idXNfbWFzdGVyX2RvbWlkKCksIHBvcnQp
OwotCWlmIChkb20wID09IE5VTEwpCisJZG9tMCA9IGFsbG9jX2RvbWFpbihO
VUxMLCB4ZW5idXNfbWFzdGVyX2RvbWlkKCkpOworCWlmICghZG9tMCkKKwkJ
cmV0dXJuIC0xOworCWlmIChuZXdfZG9tYWluKGRvbTAsIHBvcnQpKQogCQly
ZXR1cm4gLTE7CiAKIAlkb20wLT5pbnRlcmZhY2UgPSB4ZW5idXNfbWFwKCk7
CkBAIC03MjksNiArNzg1LDY2IEBAIHZvaWQgZG9tYWluX2VudHJ5X2luYyhz
dHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgc3RydWN0IG5vZGUgKm5vZGUpCiAJ
fQogfQogCisvKgorICogQ2hlY2sgd2hldGhlciBhIGRvbWFpbiB3YXMgY3Jl
YXRlZCBiZWZvcmUgb3IgYWZ0ZXIgYSBzcGVjaWZpYyBnZW5lcmF0aW9uCisg
KiBjb3VudCAodXNlZCBmb3IgdGVzdGluZyB3aGV0aGVyIGEgbm9kZSBwZXJt
aXNzaW9uIGlzIG9sZGVyIHRoYW4gYSBkb21haW4pLgorICoKKyAqIFJldHVy
biB2YWx1ZXM6CisgKiAtMTogZXJyb3IKKyAqICAwOiBkb21haW4gaGFzIGhp
Z2hlciBnZW5lcmF0aW9uIGNvdW50IChpdCBpcyB5b3VuZ2VyIHRoYW4gYSBu
b2RlIHdpdGggdGhlCisgKiAgICAgZ2l2ZW4gY291bnQpLCBvciBkb21haW4g
aXNuJ3QgZXhpc3RpbmcgYW55IGxvbmdlcgorICogIDE6IGRvbWFpbiBpcyBv
bGRlciB0aGFuIHRoZSBub2RlCisgKi8KK3N0YXRpYyBpbnQgY2hrX2RvbWFp
bl9nZW5lcmF0aW9uKHVuc2lnbmVkIGludCBkb21pZCwgdWludDY0X3QgZ2Vu
KQoreworCXN0cnVjdCBkb21haW4gKmQ7CisJeGNfZG9taW5mb190IGRvbWlu
Zm87CisKKwlpZiAoIXhjX2hhbmRsZSAmJiBkb21pZCA9PSAwKQorCQlyZXR1
cm4gMTsKKworCWQgPSBmaW5kX2RvbWFpbl9zdHJ1Y3QoZG9taWQpOworCWlm
IChkKQorCQlyZXR1cm4gKGQtPmdlbmVyYXRpb24gPD0gZ2VuKSA/IDEgOiAw
OworCisJaWYgKCFnZXRfZG9tYWluX2luZm8oZG9taWQsICZkb21pbmZvKSkK
KwkJcmV0dXJuIDA7CisKKwlkID0gYWxsb2NfZG9tYWluKE5VTEwsIGRvbWlk
KTsKKwlyZXR1cm4gZCA/IDEgOiAtMTsKK30KKworLyoKKyAqIFJlbW92ZSBw
ZXJtaXNzaW9ucyBmb3Igbm8gbG9uZ2VyIGV4aXN0aW5nIGRvbWFpbnMgaW4g
b3JkZXIgdG8gYXZvaWQgYSBuZXcKKyAqIGRvbWFpbiB3aXRoIHRoZSBzYW1l
IGRvbWlkIGluaGVyaXRpbmcgdGhlIHBlcm1pc3Npb25zLgorICovCitpbnQg
ZG9tYWluX2FkanVzdF9ub2RlX3Blcm1zKHN0cnVjdCBub2RlICpub2RlKQor
eworCXVuc2lnbmVkIGludCBpOworCWludCByZXQ7CisKKwlyZXQgPSBjaGtf
ZG9tYWluX2dlbmVyYXRpb24obm9kZS0+cGVybXMucFswXS5pZCwgbm9kZS0+
Z2VuZXJhdGlvbik7CisJaWYgKHJldCA8IDApCisJCXJldHVybiBlcnJubzsK
KworCS8qIElmIHRoZSBvd25lciBkb2Vzbid0IGV4aXN0IGFueSBsb25nZXIg
Z2l2ZSBpdCB0byBwcml2IGRvbWFpbi4gKi8KKwlpZiAoIXJldCkKKwkJbm9k
ZS0+cGVybXMucFswXS5pZCA9IHByaXZfZG9taWQ7CisKKwlmb3IgKGkgPSAx
OyBpIDwgbm9kZS0+cGVybXMubnVtOyBpKyspIHsKKwkJaWYgKG5vZGUtPnBl
cm1zLnBbaV0ucGVybXMgJiBYU19QRVJNX0lHTk9SRSkKKwkJCWNvbnRpbnVl
OworCQlyZXQgPSBjaGtfZG9tYWluX2dlbmVyYXRpb24obm9kZS0+cGVybXMu
cFtpXS5pZCwKKwkJCQkJICAgIG5vZGUtPmdlbmVyYXRpb24pOworCQlpZiAo
cmV0IDwgMCkKKwkJCXJldHVybiBlcnJubzsKKwkJaWYgKCFyZXQpCisJCQlu
b2RlLT5wZXJtcy5wW2ldLnBlcm1zIHw9IFhTX1BFUk1fSUdOT1JFOworCX0K
KworCXJldHVybiAwOworfQorCiB2b2lkIGRvbWFpbl9lbnRyeV9kZWMoc3Ry
dWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBub2RlICpub2RlKQogewog
CXN0cnVjdCBkb21haW4gKmQ7CmRpZmYgLS1naXQgYS90b29scy94ZW5zdG9y
ZS94ZW5zdG9yZWRfZG9tYWluLmggYi90b29scy94ZW5zdG9yZS94ZW5zdG9y
ZWRfZG9tYWluLmgKaW5kZXggMjU5MTgzOTYyYS4uNWUwMDA4NzIwNiAxMDA2
NDQKLS0tIGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2RvbWFpbi5oCisr
KyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9kb21haW4uaApAQCAtNTYs
NiArNTYsOSBAQCBib29sIGRvbWFpbl9jYW5fd3JpdGUoc3RydWN0IGNvbm5l
Y3Rpb24gKmNvbm4pOwogCiBib29sIGRvbWFpbl9pc191bnByaXZpbGVnZWQo
c3RydWN0IGNvbm5lY3Rpb24gKmNvbm4pOwogCisvKiBSZW1vdmUgbm9kZSBw
ZXJtaXNzaW9ucyBmb3Igbm8gbG9uZ2VyIGV4aXN0aW5nIGRvbWFpbnMuICov
CitpbnQgZG9tYWluX2FkanVzdF9ub2RlX3Blcm1zKHN0cnVjdCBub2RlICpu
b2RlKTsKKwogLyogUXVvdGEgbWFuaXB1bGF0aW9uICovCiB2b2lkIGRvbWFp
bl9lbnRyeV9pbmMoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIHN0cnVjdCBu
b2RlICopOwogdm9pZCBkb21haW5fZW50cnlfZGVjKHN0cnVjdCBjb25uZWN0
aW9uICpjb25uLCBzdHJ1Y3Qgbm9kZSAqKTsKZGlmZiAtLWdpdCBhL3Rvb2xz
L3hlbnN0b3JlL3hlbnN0b3JlZF90cmFuc2FjdGlvbi5jIGIvdG9vbHMveGVu
c3RvcmUveGVuc3RvcmVkX3RyYW5zYWN0aW9uLmMKaW5kZXggYTdkOGM1ZDQ3
NS4uMjg4MWYzYjJlNCAxMDA2NDQKLS0tIGEvdG9vbHMveGVuc3RvcmUveGVu
c3RvcmVkX3RyYW5zYWN0aW9uLmMKKysrIGIvdG9vbHMveGVuc3RvcmUveGVu
c3RvcmVkX3RyYW5zYWN0aW9uLmMKQEAgLTQ3LDcgKzQ3LDEyIEBACiAgKiB0
cmFuc2FjdGlvbi4KICAqIEVhY2ggdGltZSB0aGUgZ2xvYmFsIGdlbmVyYXRp
b24gY291bnQgaXMgY29waWVkIHRvIGVpdGhlciBhIG5vZGUgb3IgYQogICog
dHJhbnNhY3Rpb24gaXQgaXMgaW5jcmVtZW50ZWQuIFRoaXMgZW5zdXJlcyBh
bGwgbm9kZXMgYW5kL29yIHRyYW5zYWN0aW9ucwotICogYXJlIGhhdmluZyBh
IHVuaXF1ZSBnZW5lcmF0aW9uIGNvdW50LgorICogYXJlIGhhdmluZyBhIHVu
aXF1ZSBnZW5lcmF0aW9uIGNvdW50LiBUaGUgaW5jcmVtZW50IGlzIGRvbmUg
X2JlZm9yZV8gdGhlCisgKiBjb3B5IGFzIHRoYXQgaXMgbmVlZGVkIGZvciBj
aGVja2luZyB3aGV0aGVyIGEgZG9tYWluIHdhcyBjcmVhdGVkIGJlZm9yZQor
ICogb3IgYWZ0ZXIgYSBub2RlIGhhcyBiZWVuIHdyaXR0ZW4gKHRoZSBkb21h
aW4ncyBnZW5lcmF0aW9uIGlzIHNldCB3aXRoIHRoZQorICogYWN0dWFsIGdl
bmVyYXRpb24gY291bnQgd2l0aG91dCBpbmNyZW1lbnRpbmcgaXQsIGluIG9y
ZGVyIHRvIHN1cHBvcnQKKyAqIHdyaXRpbmcgYSBub2RlIGZvciBhIGRvbWFp
biBiZWZvcmUgdGhlIGRvbWFpbiBoYXMgYmVlbiBvZmZpY2lhbGx5CisgKiBp
bnRyb2R1Y2VkKS4KICAqCiAgKiBUcmFuc2FjdGlvbiBjb25mbGljdHMgYXJl
IGRldGVjdGVkIGJ5IGNoZWNraW5nIHRoZSBnZW5lcmF0aW9uIGNvdW50IG9m
IGFsbAogICogbm9kZXMgcmVhZCBpbiB0aGUgdHJhbnNhY3Rpb24gdG8gbWF0
Y2ggd2l0aCB0aGUgZ2VuZXJhdGlvbiBjb3VudCBpbiB0aGUKQEAgLTE2MSw3
ICsxNjYsNyBAQCBzdHJ1Y3QgdHJhbnNhY3Rpb24KIH07CiAKIGV4dGVybiBp
bnQgcXVvdGFfbWF4X3RyYW5zYWN0aW9uOwotc3RhdGljIHVpbnQ2NF90IGdl
bmVyYXRpb247Cit1aW50NjRfdCBnZW5lcmF0aW9uOwogCiBzdGF0aWMgdm9p
ZCBzZXRfdGRiX2tleShjb25zdCBjaGFyICpuYW1lLCBUREJfREFUQSAqa2V5
KQogewpAQCAtMjM3LDcgKzI0Miw3IEBAIGludCBhY2Nlc3Nfbm9kZShzdHJ1
Y3QgY29ubmVjdGlvbiAqY29ubiwgc3RydWN0IG5vZGUgKm5vZGUsCiAJYm9v
bCBpbnRyb2R1Y2UgPSBmYWxzZTsKIAogCWlmICh0eXBlICE9IE5PREVfQUND
RVNTX1JFQUQpIHsKLQkJbm9kZS0+Z2VuZXJhdGlvbiA9IGdlbmVyYXRpb24r
KzsKKwkJbm9kZS0+Z2VuZXJhdGlvbiA9ICsrZ2VuZXJhdGlvbjsKIAkJaWYg
KGNvbm4gJiYgIWNvbm4tPnRyYW5zYWN0aW9uKQogCQkJd3JsX2FwcGx5X2Rl
Yml0X2RpcmVjdChjb25uKTsKIAl9CkBAIC0zNzQsNyArMzc5LDcgQEAgc3Rh
dGljIGludCBmaW5hbGl6ZV90cmFuc2FjdGlvbihzdHJ1Y3QgY29ubmVjdGlv
biAqY29ubiwKIAkJCQlpZiAoIWRhdGEuZHB0cikKIAkJCQkJZ290byBlcnI7
CiAJCQkJaGRyID0gKHZvaWQgKilkYXRhLmRwdHI7Ci0JCQkJaGRyLT5nZW5l
cmF0aW9uID0gZ2VuZXJhdGlvbisrOworCQkJCWhkci0+Z2VuZXJhdGlvbiA9
ICsrZ2VuZXJhdGlvbjsKIAkJCQlyZXQgPSB0ZGJfc3RvcmUodGRiX2N0eCwg
a2V5LCBkYXRhLAogCQkJCQkJVERCX1JFUExBQ0UpOwogCQkJCXRhbGxvY19m
cmVlKGRhdGEuZHB0cik7CkBAIC00NjIsNyArNDY3LDcgQEAgaW50IGRvX3Ry
YW5zYWN0aW9uX3N0YXJ0KHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1
Y3QgYnVmZmVyZWRfZGF0YSAqaW4pCiAJSU5JVF9MSVNUX0hFQUQoJnRyYW5z
LT5hY2Nlc3NlZCk7CiAJSU5JVF9MSVNUX0hFQUQoJnRyYW5zLT5jaGFuZ2Vk
X2RvbWFpbnMpOwogCXRyYW5zLT5mYWlsID0gZmFsc2U7Ci0JdHJhbnMtPmdl
bmVyYXRpb24gPSBnZW5lcmF0aW9uKys7CisJdHJhbnMtPmdlbmVyYXRpb24g
PSArK2dlbmVyYXRpb247CiAKIAkvKiBQaWNrIGFuIHVudXNlZCB0cmFuc2Fj
dGlvbiBpZGVudGlmaWVyLiAqLwogCWRvIHsKZGlmZiAtLWdpdCBhL3Rvb2xz
L3hlbnN0b3JlL3hlbnN0b3JlZF90cmFuc2FjdGlvbi5oIGIvdG9vbHMveGVu
c3RvcmUveGVuc3RvcmVkX3RyYW5zYWN0aW9uLmgKaW5kZXggMzM4NmJhYzU2
NS4uNDNhMTYyYmVhMyAxMDA2NDQKLS0tIGEvdG9vbHMveGVuc3RvcmUveGVu
c3RvcmVkX3RyYW5zYWN0aW9uLmgKKysrIGIvdG9vbHMveGVuc3RvcmUveGVu
c3RvcmVkX3RyYW5zYWN0aW9uLmgKQEAgLTI3LDYgKzI3LDggQEAgZW51bSBu
b2RlX2FjY2Vzc190eXBlIHsKIAogc3RydWN0IHRyYW5zYWN0aW9uOwogCitl
eHRlcm4gdWludDY0X3QgZ2VuZXJhdGlvbjsKKwogaW50IGRvX3RyYW5zYWN0
aW9uX3N0YXJ0KHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3QgYnVm
ZmVyZWRfZGF0YSAqbm9kZSk7CiBpbnQgZG9fdHJhbnNhY3Rpb25fZW5kKHN0
cnVjdCBjb25uZWN0aW9uICpjb25uLCBzdHJ1Y3QgYnVmZmVyZWRfZGF0YSAq
aW4pOwogCmRpZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94c19saWIuYyBi
L3Rvb2xzL3hlbnN0b3JlL3hzX2xpYi5jCmluZGV4IDlmMWRjNmQ1NTkuLjgw
YzAzYWNiZWEgMTAwNjQ0Ci0tLSBhL3Rvb2xzL3hlbnN0b3JlL3hzX2xpYi5j
CisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hzX2xpYi5jCkBAIC0xNDYsNyArMTQ2
LDcgQEAgYm9vbCB4c19zdHJpbmdzX3RvX3Blcm1zKHN0cnVjdCB4c19wZXJt
aXNzaW9ucyAqcGVybXMsIHVuc2lnbmVkIGludCBudW0sCiBib29sIHhzX3Bl
cm1fdG9fc3RyaW5nKGNvbnN0IHN0cnVjdCB4c19wZXJtaXNzaW9ucyAqcGVy
bSwKICAgICAgICAgICAgICAgICAgICAgICAgY2hhciAqYnVmZmVyLCBzaXpl
X3QgYnVmX2xlbikKIHsKLQlzd2l0Y2ggKChpbnQpcGVybS0+cGVybXMpIHsK
Kwlzd2l0Y2ggKChpbnQpcGVybS0+cGVybXMgJiB+WFNfUEVSTV9JR05PUkUp
IHsKIAljYXNlIFhTX1BFUk1fV1JJVEU6CiAJCSpidWZmZXIgPSAndyc7CiAJ
CWJyZWFrOwo=

--=separator
Content-Type: application/octet-stream; name="xsa322-o.patch"
Content-Disposition: attachment; filename="xsa322-o.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP0Vkd2luPTIwVD1DMz1CNnI9QzM9QjZrPz0gPGVk
dmluLnRvcm9rQGNpdHJpeC5jb20+ClN1YmplY3Q6IHRvb2xzL29jYW1sL3hl
bnN0b3JlZDogY2xlYW4gdXAgcGVybWlzc2lvbnMgZm9yIGRlYWQgZG9tYWlu
cwpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVR5cGU6IHRleHQvcGxhaW47
IGNoYXJzZXQ9VVRGLTgKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogOGJp
dAoKZG9tYWluIGlkcyBhcmUgcHJvbmUgdG8gd3JhcHBpbmcgKDE1LWJpdHMp
LCBhbmQgd2l0aCBzdWZmaWNpZW50IG51bWJlcgpvZiBWTXMgaW4gYSByZWJv
b3QgbG9vcCBpdCBpcyBwb3NzaWJsZSB0byB0cmlnZ2VyIGl0LiAgWGVuc3Rv
cmUgZW50cmllcwptYXkgbGluZ2VyIGFmdGVyIGEgZG9tYWluIGRpZXMsIHVu
dGlsIGEgdG9vbHN0YWNrIGNsZWFucyBpdCB1cC4gRHVyaW5nCnRoaXMgdGlt
ZSB0aGVyZSBpcyBhIHdpbmRvdyB3aGVyZSBhIHdyYXBwZWQgZG9taWQgY291
bGQgYWNjZXNzIHRoZXNlCnhlbnN0b3JlIGtleXMgKHRoYXQgYmVsb25nZWQg
dG8gYW5vdGhlciBWTSkuCgpUbyBwcmV2ZW50IHRoaXMgZG8gYSBjbGVhbnVw
IHdoZW4gYSBkb21haW4gZGllczoKICogd2FsayB0aGUgZW50aXJlIHhlbnN0
b3JlIHRyZWUgYW5kIHVwZGF0ZSBwZXJtaXNzaW9ucyBmb3IgYWxsIG5vZGVz
CiAgICogaWYgdGhlIGRlYWQgZG9tYWluIGhhZCBhbiBBQ0wgZW50cnk6IHJl
bW92ZSBpdAogICAqIGlmIHRoZSBkZWFkIGRvbWFpbiB3YXMgdGhlIG93bmVy
OiBjaGFuZ2UgdGhlIG93bmVyIHRvIERvbTAKClRoaXMgaXMgZG9uZSB3aXRo
b3V0IHF1b3RhIGNoZWNrcyBvciBhIHRyYW5zYWN0aW9uLiBRdW90YSBjaGVj
a3Mgd291bGQKYmUgYSBuby1vcCAoZWl0aGVyIHRoZSBkb21haW4gaXMgZGVh
ZCwgb3IgaXQgaXMgRG9tMCB3aGVyZSB0aGV5IGFyZSBub3QKZW5mb3JjZWQp
LiAgVHJhbnNhY3Rpb25zIGFyZSBub3QgbmVlZGVkLCBiZWNhdXNlIHRoaXMg
aXMgYWxsIGRvbmUKYXRvbWljYWxseSBieSBveGVuc3RvcmVkJ3Mgc2luZ2xl
IHRocmVhZC4KClRoZSB4ZW5zdG9yZSBlbnRyaWVzIG93bmVkIGJ5IHRoZSBk
ZWFkIGRvbWFpbiBhcmUgbm90IGRlbGV0ZWQsIGJlY2F1c2UKdGhhdCBjb3Vs
ZCBjb25mdXNlIGEgdG9vbHN0YWNrIC8gYmFja2VuZHMgdGhhdCBhcmUgc3Rp
bGwgYm91bmQgdG8gaXQKKG9yIGdlbmVyYXRlIHVuZXhwZWN0ZWQgd2F0Y2gg
ZXZlbnRzKS4gSXQgaXMgdGhlIHJlc3BvbnNpYmlsaXR5IG9mIGEKdG9vbHN0
YWNrIHRvIHJlbW92ZSB0aGUgeGVuc3RvcmUgZW50cmllcyB0aGVtc2VsdmVz
LgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0zMjIuCgpTaWduZWQtb2ZmLWJ5OiBF
ZHdpbiBUw7Zyw7ZrIDxlZHZpbi50b3Jva0BjaXRyaXguY29tPgpBY2tlZC1i
eTogQ2hyaXN0aWFuIExpbmRpZyA8Y2hyaXN0aWFuLmxpbmRpZ0BjaXRyaXgu
Y29tPgoKZGlmZiAtLWdpdCBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wZXJt
cy5tbCBiL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wZXJtcy5tbAppbmRleCBl
ZTdmZWU2YmRhLi5lOGExNjIyMWY4IDEwMDY0NAotLS0gYS90b29scy9vY2Ft
bC94ZW5zdG9yZWQvcGVybXMubWwKKysrIGIvdG9vbHMvb2NhbWwveGVuc3Rv
cmVkL3Blcm1zLm1sCkBAIC01OCw2ICs1OCwxNSBAQCBsZXQgZ2V0X290aGVy
IHBlcm1zID0gcGVybXMub3RoZXIKIGxldCBnZXRfYWNsIHBlcm1zID0gcGVy
bXMuYWNsCiBsZXQgZ2V0X293bmVyIHBlcm0gPSBwZXJtLm93bmVyCiAKKygq
KiBbcmVtb3RlX2RvbWlkIH5kb21pZCBwZXJtXSByZW1vdmVzIGFsbCBBQ0xz
IGZvciBbZG9taWRdIGZyb20gcGVybS4KKyogSWYgW2RvbWlkXSB3YXMgdGhl
IG93bmVyIHRoZW4gaXQgaXMgY2hhbmdlZCB0byBEb20wLgorKiBUaGlzIGlz
IHVzZWQgZm9yIGNsZWFuaW5nIHVwIGFmdGVyIGRlYWQgZG9tYWlucy4KKyog
KikKK2xldCByZW1vdmVfZG9taWQgfmRvbWlkIHBlcm0gPQorCWxldCBhY2wg
PSBMaXN0LmZpbHRlciAoZnVuIChhY2xfZG9taWQsIF8pIC0+IGFjbF9kb21p
ZCA8PiBkb21pZCkgcGVybS5hY2wgaW4KKwlsZXQgb3duZXIgPSBpZiBwZXJt
Lm93bmVyID0gZG9taWQgdGhlbiAwIGVsc2UgcGVybS5vd25lciBpbgorCXsg
cGVybSB3aXRoIGFjbDsgb3duZXIgfQorCiBsZXQgZGVmYXVsdDAgPSBjcmVh
dGUgMCBOT05FIFtdCiAKIGxldCBwZXJtX29mX3N0cmluZyBzID0KZGlmZiAt
LWdpdCBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9wcm9jZXNzLm1sIGIvdG9v
bHMvb2NhbWwveGVuc3RvcmVkL3Byb2Nlc3MubWwKaW5kZXggZjk5YjllOTM1
Yy4uNzNlMDRjYzE4YiAxMDA2NDQKLS0tIGEvdG9vbHMvb2NhbWwveGVuc3Rv
cmVkL3Byb2Nlc3MubWwKKysrIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3By
b2Nlc3MubWwKQEAgLTQ0Myw2ICs0NDMsNyBAQCBsZXQgZG9fcmVsZWFzZSBj
b24gdCBkb21haW5zIGNvbnMgZGF0YSA9CiAJbGV0IGZpcmVfc3BlY193YXRj
aGVzID0gRG9tYWlucy5leGlzdCBkb21haW5zIGRvbWlkIGluCiAJRG9tYWlu
cy5kZWwgZG9tYWlucyBkb21pZDsKIAlDb25uZWN0aW9ucy5kZWxfZG9tYWlu
IGNvbnMgZG9taWQ7CisJU3RvcmUucmVzZXRfcGVybWlzc2lvbnMgKFRyYW5z
YWN0aW9uLmdldF9zdG9yZSB0KSBkb21pZDsKIAlpZiBmaXJlX3NwZWNfd2F0
Y2hlcwogCXRoZW4gQ29ubmVjdGlvbnMuZmlyZV9zcGVjX3dhdGNoZXMgKFRy
YW5zYWN0aW9uLmdldF9yb290IHQpIGNvbnMgU3RvcmUuUGF0aC5yZWxlYXNl
X2RvbWFpbgogCWVsc2UgcmFpc2UgSW52YWxpZF9DbWRfQXJncwpkaWZmIC0t
Z2l0IGEvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3N0b3JlLm1sIGIvdG9vbHMv
b2NhbWwveGVuc3RvcmVkL3N0b3JlLm1sCmluZGV4IDZiNmU0NDBlOTguLjNi
MDUxMjhmMWIgMTAwNjQ0Ci0tLSBhL3Rvb2xzL29jYW1sL3hlbnN0b3JlZC9z
dG9yZS5tbAorKysgYi90b29scy9vY2FtbC94ZW5zdG9yZWQvc3RvcmUubWwK
QEAgLTg5LDYgKzg5LDEzIEBAIGxldCBjaGVja19vd25lciBub2RlIGNvbm5l
Y3Rpb24gPQogCiBsZXQgcmVjIHJlY3Vyc2UgZmN0IG5vZGUgPSBmY3Qgbm9k
ZTsgTGlzdC5pdGVyIChyZWN1cnNlIGZjdCkgbm9kZS5jaGlsZHJlbgogCiso
KiogW3JlY3Vyc2VfbWFwIGYgdHJlZV0gYXBwbGllcyBbZl0gb24gZWFjaCBu
b2RlIGluIHRoZSB0cmVlIHJlY3Vyc2l2ZWx5ICopCitsZXQgcmVjdXJzZV9t
YXAgZiA9CisJbGV0IHJlYyB3YWxrIG5vZGUgPQorCQlmIHsgbm9kZSB3aXRo
IGNoaWxkcmVuID0gTGlzdC5yZXZfbWFwIHdhbGsgbm9kZS5jaGlsZHJlbiB8
PiBMaXN0LnJldiB9CisJaW4KKwl3YWxrCisKIGxldCB1bnBhY2sgbm9kZSA9
IChTeW1ib2wudG9fc3RyaW5nIG5vZGUubmFtZSwgbm9kZS5wZXJtcywgbm9k
ZS52YWx1ZSkKIAogZW5kCkBAIC00MDUsNiArNDEyLDE1IEBAIGxldCBzZXRw
ZXJtcyBzdG9yZSBwZXJtIHBhdGggbnBlcm1zID0KIAkJUXVvdGEuZGVsX2Vu
dHJ5IHN0b3JlLnF1b3RhIG9sZF9vd25lcjsKIAkJUXVvdGEuYWRkX2VudHJ5
IHN0b3JlLnF1b3RhIG5ld19vd25lcgogCitsZXQgcmVzZXRfcGVybWlzc2lv
bnMgc3RvcmUgZG9taWQgPQorCUxvZ2dpbmcuaW5mbyAic3RvcmV8bm9kZSIg
IkNsZWFuaW5nIHVwIHhlbnN0b3JlIEFDTHMgZm9yIGRvbWlkICVkIiBkb21p
ZDsKKwlzdG9yZS5yb290IDwtIE5vZGUucmVjdXJzZV9tYXAgKGZ1biBub2Rl
IC0+CisJCWxldCBwZXJtcyA9IFBlcm1zLk5vZGUucmVtb3ZlX2RvbWlkIH5k
b21pZCBub2RlLnBlcm1zIGluCisJCWlmIHBlcm1zIDw+IG5vZGUucGVybXMg
dGhlbgorCQkJTG9nZ2luZy5kZWJ1ZyAic3RvcmV8bm9kZSIgIkNoYW5nZWQg
cGVybWlzc2lvbnMgZm9yIG5vZGUgJXMiIChOb2RlLmdldF9uYW1lIG5vZGUp
OworCQl7IG5vZGUgd2l0aCBwZXJtcyB9CisJKSBzdG9yZS5yb290CisKIHR5
cGUgb3BzID0gewogCXN0b3JlOiB0OwogCXdyaXRlOiBQYXRoLnQgLT4gc3Ry
aW5nIC0+IHVuaXQ7CmRpZmYgLS1naXQgYS90b29scy9vY2FtbC94ZW5zdG9y
ZWQveGVuc3RvcmVkLm1sIGIvdG9vbHMvb2NhbWwveGVuc3RvcmVkL3hlbnN0
b3JlZC5tbAppbmRleCAwZDM1NWJiY2I4Li5mZjlmYmJiYWMyIDEwMDY0NAot
LS0gYS90b29scy9vY2FtbC94ZW5zdG9yZWQveGVuc3RvcmVkLm1sCisrKyBi
L3Rvb2xzL29jYW1sL3hlbnN0b3JlZC94ZW5zdG9yZWQubWwKQEAgLTMzNiw2
ICszMzYsNyBAQCBsZXQgXyA9CiAJCQlmaW5hbGx5IChmdW4gKCkgLT4KIAkJ
CQlpZiBTb21lIHBvcnQgPSBldmVudGNobi5FdmVudC52aXJxX3BvcnQgdGhl
biAoCiAJCQkJCWxldCAobm90aWZ5LCBkZWFkZG9tKSA9IERvbWFpbnMuY2xl
YW51cCBkb21haW5zIGluCisJCQkJCUxpc3QuaXRlciAoU3RvcmUucmVzZXRf
cGVybWlzc2lvbnMgc3RvcmUpIGRlYWRkb207CiAJCQkJCUxpc3QuaXRlciAo
Q29ubmVjdGlvbnMuZGVsX2RvbWFpbiBjb25zKSBkZWFkZG9tOwogCQkJCQlp
ZiBkZWFkZG9tIDw+IFtdIHx8IG5vdGlmeSB0aGVuCiAJCQkJCQlDb25uZWN0
aW9ucy5maXJlX3NwZWNfd2F0Y2hlcwo=

--=separator--


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 16:56:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 16:56:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55409.96527 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpa6I-0000KA-G8; Wed, 16 Dec 2020 16:56:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55409.96527; Wed, 16 Dec 2020 16:56:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpa6I-0000K3-C2; Wed, 16 Dec 2020 16:56:30 +0000
Received: by outflank-mailman (input) for mailman id 55409;
 Wed, 16 Dec 2020 16:56:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IZX4=FU=redhat.com=jpoimboe@srs-us1.protection.inumbo.net>)
 id 1kpa6H-0000Jy-AX
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 16:56:29 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 2835dd86-fb96-477f-a6db-087b3b11044f;
 Wed, 16 Dec 2020 16:56:27 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-134-crZqldqkO2O1uPxUIhu4fA-1; Wed, 16 Dec 2020 11:56:25 -0500
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com
 [10.5.11.13])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 59FF580400F;
 Wed, 16 Dec 2020 16:56:20 +0000 (UTC)
Received: from treble (ovpn-112-170.rdu2.redhat.com [10.10.112.170])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 860D160CD0;
 Wed, 16 Dec 2020 16:56:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2835dd86-fb96-477f-a6db-087b3b11044f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1608137787;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=uFK8uelp0EfnpwVsJ+unzlDIADvPHcDnDcvCpIi/i5Q=;
	b=f9ulRqQUFsUC14Oxu/JvjLS5ZdS4uyHeHQeS8gifCA1Su2Al10J99fdUdE/UP7lKwQI92d
	htjp6Jg0Pm1RC43rEiTgWRbl7kC2se+evh/O6P4UJ9JPjZyi7NpW1ASfaeKMwiY2KRT10x
	bQgJzJgKeRLEtlriKBppTnNn51saX4Y=
X-MC-Unique: crZqldqkO2O1uPxUIhu4fA-1
Date: Wed, 16 Dec 2020 10:56:05 -0600
From: Josh Poimboeuf <jpoimboe@redhat.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
	xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-hyperv@vger.kernel.org, kvm@vger.kernel.org, luto@kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>, Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <sean.j.christopherson@intel.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	Daniel Lezcano <daniel.lezcano@linaro.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Daniel Bristot de Oliveira <bristot@redhat.com>
Subject: Re: [PATCH v2 00/12] x86: major paravirt cleanup
Message-ID: <20201216165605.4h5q7os5dutjgdqi@treble>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120125342.GC3040@hirez.programming.kicks-ass.net>
 <20201123134317.GE3092@hirez.programming.kicks-ass.net>
 <6771a12c-051d-1655-fb3a-cc45a3c82e29@suse.com>
 <20201215141834.GG3040@hirez.programming.kicks-ass.net>
 <20201215145408.GR3092@hirez.programming.kicks-ass.net>
 <20201216003802.5fpklvx37yuiufrt@treble>
 <20201216084059.GL3040@hirez.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201216084059.GL3040@hirez.programming.kicks-ass.net>
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13

On Wed, Dec 16, 2020 at 09:40:59AM +0100, Peter Zijlstra wrote:
> > So much algorithm.
> 
> :-)
> 
> It's not really hard, but it has a few pesky details (as always).

It really hurt my brain to look at it.

> > Could we make it easier by caching the shared
> > per-alt-group CFI state somewhere along the way?
> 
> Yes, but when I tried it grew the code required. Runtime costs would be
> less, but I figured that since alternatives are typically few and small,
> that wasn't a real consideration.

Aren't alternatives going to be everywhere now with paravirt using them?

> That is, it would basically cache the results of find_alt_unwind(), but
> you still need find_alt_unwind() to generate that data, and so you gain
> the code for filling and using the extra data structure.
> 
> Yes, computing it 3 times is naf, but meh.

Haha, I loved this sentence.

> > Thoughts?  This is all theoretical of course, I could try to do a patch
> > tomorrow.
> 
> No real objection, I just didn't do it because 1) it works, and 2) even
> moar lines.

I'm kind of surprised it would need moar lines.  Let me play around with
it and maybe I'll come around ;-)

-- 
Josh



From xen-devel-bounces@lists.xenproject.org Wed Dec 16 17:04:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 17:04:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55415.96542 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpaE4-0001MK-DN; Wed, 16 Dec 2020 17:04:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55415.96542; Wed, 16 Dec 2020 17:04:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpaE4-0001MD-9x; Wed, 16 Dec 2020 17:04:32 +0000
Received: by outflank-mailman (input) for mailman id 55415;
 Wed, 16 Dec 2020 17:04:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DfXp=FU=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1kpaE3-0001Ls-Ak
 for xen-devel@lists.xen.org; Wed, 16 Dec 2020 17:04:31 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f637f3df-737b-4510-bb9a-4a51a7348cab;
 Wed, 16 Dec 2020 17:04:28 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1kpaDs-0003nE-Ue; Wed, 16 Dec 2020 17:04:20 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1kpaDs-0006x2-Os; Wed, 16 Dec 2020 17:04:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f637f3df-737b-4510-bb9a-4a51a7348cab
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=568n+gDvxILb6maqTwmS3WUQObCpCK52n8KAxTQYxzE=; b=ZZJRAjklsq0ylMW6kv7Xag9cbR
	RitJAzFQPaWDsQ2C3SH3S9rKQKgdvsTqAZ5nEwKPC4nlmhHGM6bkzCctyfT7DRMu5x7xvyBnwZai1
	iflWp1oaS1NmorFcszT1SICquEt5+qA4wTnwUdHNKsZRJaXBpqos7RS3Jie0KUQ9WpG8=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 343 v5 (CVE-2020-25599) - races with
 evtchn_reset()
Message-Id: <E1kpaDs-0006x2-Os@xenbits.xenproject.org>
Date: Wed, 16 Dec 2020 17:04:20 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2020-25599 / XSA-343
                               version 5

                       races with evtchn_reset()

UPDATES IN VERSION 5
====================

In the RESOLUTION section, describe and list the followup fixes for
vm_event.

ISSUE DESCRIPTION
=================

Uses of EVTCHNOP_reset (potentially by a guest on itself) or
XEN_DOMCTL_soft_reset (by itself covered by XSA-77) can lead to the
violation of various internal assumptions.  This may lead to
out-of-bounds memory accesses or to the triggering of bug checks.

IMPACT
======

In particular, x86 PV guests may be able to elevate their privilege to
that of the host.  Host and guest crashes are also possible, leading to
a Denial of Service (DoS).  Information leaks cannot be ruled out.

VULNERABLE SYSTEMS
==================

All Xen versions from 4.5 onwards are vulnerable.  Xen versions 4.4 and
earlier are not vulnerable.

MITIGATION
==========

There is no known mitigation.

CREDITS
=======

Different aspects of this issue were discovered by Julien Grall of
Amazon and by Jan Beulich of SUSE.

RESOLUTION
==========

Applying the appropriate set of attached patches resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

The original patches (still listed below, unchanged since XSA-343 v4)
cause problems with the vm_event subsystem (part of Virtual Machine
Introspection), and with Xen paging and memory sharing.  Fixes for
these issues have been applied to the public Xen branches.  These are
not security-supported features, but for completeness we list those
fixes here (commit hashes are those from the master branch):

 6f6f07b64cbe90e54f8e62b4d6f2404cf5306536  evtchn/fifo: use stable fields when recording "last queue" information
 5f2df45ead7c1195142f68b7923047a1e9479d54  xen/evtchn: rework per event channel lock
 b5ad37f8e9284cc147218f7a5193d739ae7b956f  xen/evtchn: revert 52e1fc47abc3a0123
 1277cb9dc5e966f1faf665bcded02b7533e38078  xen/events: access last_priority and last_vcpu_id together
 71ac522909e9302350a88bc378be99affa87067c  xen/events: rework fifo queue locking

Backports of these have also been applied to the respective stable
branches of the tree.  The middle one, being a revert, is applicable
only if the original change "evtchn/Flask: pre-allocate node on send
path" (or a backport of it) had previously been applied.

xsa343/xsa343-?.patch           Xen 4.13 - xen-unstable
xsa343/xsa343-4.12-?.patch      Xen 4.12
xsa343/xsa343-4.11-?.patch      Xen 4.11
xsa343/xsa343-4.10-?.patch      Xen 4.10

$ sha256sum xsa343* xsa343*/*
097d5fa32e22fc7a18fddd757f950699e823202bbae67245eece783d6d06f4eb  xsa343.meta
d714a542bae9d96b6a061c5a8f754549d699dcfb7bf2a766b721f6bbe33aefd2  xsa343/xsa343-1.patch
657c44c8ea13523d2e59776531237bbc20166c9b7c3960e0e9ad381fce927344  xsa343/xsa343-2.patch
2b275e3fa559167c1b59e6fd4a20bc4d1df9d9cb0cbd0050a3db9c3d0299b233  xsa343/xsa343-3.patch
9aec124e2afcba57f8adaf7374ecebffc4a8ed1913512a7456f87761bb115f68  xsa343/xsa343-4.10-1.patch
54d9ce9acdb8dcc6aa81928037afbb081a6cd579127aa225833767e285e30ea2  xsa343/xsa343-4.10-2.patch
3801300cddd8d138c800dc45eeff111e313eb40cea3aa94e2e045ac8956ab9d3  xsa343/xsa343-4.10-3.patch
7abbec828f77c427a53182db820fc19bdf34e37882fc6ae51351ed6027c56da1  xsa343/xsa343-4.11-1.patch
5c90a53333e9c81ce938deddfc690f474d61e083d2a43b859d3227100f793aff  xsa343/xsa343-4.11-2.patch
0e12cfe8e505b9685912c61a740b98084d62e4ba0670d51a47345739f463a039  xsa343/xsa343-4.11-3.patch
f3462b4e672f69a9fa951b1c04a50d754c64d18aadf444ef248587b3ac7f635a  xsa343/xsa343-4.12-1.patch
d99cbbc3792755c4998b73460bbeaef5612a8942f98adcaea0762950e5a07c2a  xsa343/xsa343-4.12-2.patch
cf23d3b61d4f07efc7057035c45e53e32a0b0f8fc3b9bc6c05f0f5bc71204914  xsa343/xsa343-4.12-3.patch
$
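Downstreams checking the extracted attachments against the checksums above could use a small sketch along these lines (the helper names are illustrative, not part of the advisory; the real inputs would be xsa343.meta and the xsa343/*.patch files saved from this mail):

```python
import hashlib

def sha256_hex(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, expected):
    """True if the file's digest matches the listed checksum."""
    return sha256_hex(path) == expected.lower()

if __name__ == "__main__":
    import os
    import tempfile
    # Demonstrate with a throwaway file; substitute the saved patch
    # files and the digests from the sha256sum listing above.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(b"example patch contents\n")
        name = f.name
    print(verify(name, sha256_hex(name)))
    os.unlink(name)
```

Equivalently, `sha256sum -c` with a checksum file serves the same purpose on systems where coreutils is available.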

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decision-making.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl/aPdYMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZlucH/Rbh47bbMflkGfu5JChDnYvLbJ1RHxtJg95ENvGr
MSIL5QbAzJSvRfiiNqhMny4ykxmuWdrU4nFQCM1xk6B/84cRYPGHTpzLS3yE+dP3
Q5LHDYBR6DPoaP9608xFWWAk6+Mb42uKOstQTEBnOKG8qknYJ2RzOLgZ1m9/FWP6
+6AuFe82omBdw8lCw4pFOOeIONfxFXCVU6tbenP4PmdzMQSJr8sQ0ToRkfT+2bHr
dTpmUKsOU2WCJ6v3+YrPtPhGhdzypm55Sdr6x7ikoF+iANN5RHW8V3l6Qupyghtm
L2R907aFVzfqgOKwuRV4gGGPvnuy78EtEljPnp9ZJxhCl6U=
=Sk1L
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa343.meta"
Content-Disposition: attachment; filename="xsa343.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzNDMsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIs
CiAgICAiNC4xMSIsCiAgICAiNC4xMCIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjEwIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICI5M2JlOTQzZTdkNzU5MDE1YmQ1ZGI0MWE0OGY2ZGNlNThl
NTgwZDVhIiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
MzYsCiAgICAgICAgICAgIDMzNywKICAgICAgICAgICAgMzM4LAogICAgICAg
ICAgICAzMzksCiAgICAgICAgICAgIDM0MCwKICAgICAgICAgICAgMzQyCiAg
ICAgICAgICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAg
ICJ4c2EzNDMveHNhMzQzLTQuMTAtPy5wYXRjaCIKICAgICAgICAgIF0KICAg
ICAgICB9CiAgICAgIH0KICAgIH0sCiAgICAiNC4xMSI6IHsKICAgICAgIlJl
Y2lwZXMiOiB7CiAgICAgICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVS
ZWYiOiAiZGRhYWNjYmJhYjZiMTliZjIxZWQyYzA5N2YzMDU1YTNjMjU0NGM4
ZCIsCiAgICAgICAgICAiUHJlcmVxcyI6IFsKICAgICAgICAgICAgMzMzLAog
ICAgICAgICAgICAzMzYsCiAgICAgICAgICAgIDMzNywKICAgICAgICAgICAg
MzM4LAogICAgICAgICAgICAzMzksCiAgICAgICAgICAgIDM0MCwKICAgICAg
ICAgICAgMzQyCiAgICAgICAgICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBb
CiAgICAgICAgICAgICJ4c2EzNDMveHNhMzQzLTQuMTEtPy5wYXRjaCIKICAg
ICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAgICAiNC4xMiI6
IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAgInhlbiI6IHsKICAgICAg
ICAgICJTdGFibGVSZWYiOiAiMTMzNmNhMTc3NDI0NzFmYzRhNTk4NzlhZTJm
NjM3YTU5NTMwYTkzMyIsCiAgICAgICAgICAiUHJlcmVxcyI6IFsKICAgICAg
ICAgICAgMzMzLAogICAgICAgICAgICAzMzQsCiAgICAgICAgICAgIDMzNiwK
ICAgICAgICAgICAgMzM3LAogICAgICAgICAgICAzMzgsCiAgICAgICAgICAg
IDMzOSwKICAgICAgICAgICAgMzQwLAogICAgICAgICAgICAzNDIKICAgICAg
ICAgIF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhz
YTM0My94c2EzNDMtNC4xMi0/LnBhdGNoIgogICAgICAgICAgXQogICAgICAg
IH0KICAgICAgfQogICAgfSwKICAgICI0LjEzIjogewogICAgICAiUmVjaXBl
cyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6
ICI5YjM2N2IyYjBiNzE0ZjNmZmI2OWVkNmJlMGExMThlOGQzZWFjMDdmIiwK
ICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAzMzMsCiAgICAg
ICAgICAgIDMzNCwKICAgICAgICAgICAgMzM2LAogICAgICAgICAgICAzMzcs
CiAgICAgICAgICAgIDMzOCwKICAgICAgICAgICAgMzM5LAogICAgICAgICAg
ICAzNDAsCiAgICAgICAgICAgIDM0MgogICAgICAgICAgXSwKICAgICAgICAg
ICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzQzL3hzYTM0My0/LnBh
dGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAg
ICI0LjE0IjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjog
ewogICAgICAgICAgIlN0YWJsZVJlZiI6ICJjM2EwZmMyMmFmOTBlZjI4ZTY4
YjExNmM2YTQ5ZDljZWM1N2Y3MWNmIiwKICAgICAgICAgICJQcmVyZXFzIjog
WwogICAgICAgICAgICAzMzMsCiAgICAgICAgICAgIDMzNCwKICAgICAgICAg
ICAgMzM2LAogICAgICAgICAgICAzMzcsCiAgICAgICAgICAgIDMzOCwKICAg
ICAgICAgICAgMzM5LAogICAgICAgICAgICAzNDAsCiAgICAgICAgICAgIDM0
MgogICAgICAgICAgXSwKICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAg
ICAgICAieHNhMzQzL3hzYTM0My0/LnBhdGNoIgogICAgICAgICAgXQogICAg
ICAgIH0KICAgICAgfQogICAgfSwKICAgICJtYXN0ZXIiOiB7CiAgICAgICJS
ZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAiU3RhYmxl
UmVmIjogImIxMTkxMDA4MmQ5MGJiMTU5N2Y2Njc5NTI0ZWI3MjZhMzMzMDY2
NzIiLAogICAgICAgICAgIlByZXJlcXMiOiBbCiAgICAgICAgICAgIDMzMywK
ICAgICAgICAgICAgMzM0LAogICAgICAgICAgICAzMzYsCiAgICAgICAgICAg
IDMzNywKICAgICAgICAgICAgMzM4LAogICAgICAgICAgICAzMzksCiAgICAg
ICAgICAgIDM0MCwKICAgICAgICAgICAgMzQyCiAgICAgICAgICBdLAogICAg
ICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2EzNDMveHNhMzQz
LT8ucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9
CiAgfQp9

--=separator
Content-Type: application/octet-stream; name="xsa343/xsa343-1.patch"
Content-Disposition: attachment; filename="xsa343/xsa343-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBldnRjaG46IGV2dGNobl9yZXNldCgpIHNob3VsZG4ndCBzdWNjZWVkIHdp
dGggc3RpbGwtb3BlbiBwb3J0cwoKV2hpbGUgdGhlIGZ1bmN0aW9uIGNsb3Nl
cyBhbGwgcG9ydHMsIGl0IGRvZXMgc28gd2l0aG91dCBob2xkaW5nIGFueQps
b2NrLCBhbmQgaGVuY2UgcmFjaW5nIHJlcXVlc3RzIG1heSBiZSBpc3N1ZWQg
Y2F1c2luZyBuZXcgcG9ydHMgdG8gZ2V0Cm9wZW5lZC4gVGhpcyB3b3VsZCBo
YXZlIGJlZW4gcHJvYmxlbWF0aWMgaW4gcGFydGljdWxhciBpZiBzdWNoIGEg
bmV3bHkKb3BlbmVkIHBvcnQgaGFkIGEgcG9ydCBudW1iZXIgYWJvdmUgdGhl
IG5ldyBpbXBsZW1lbnRhdGlvbiBsaW1pdCAoaS5lLgp3aGVuIHN3aXRjaGlu
ZyBmcm9tIEZJRk8gdG8gMi1sZXZlbCkgYWZ0ZXIgdGhlIHJlc2V0LCBhcyBw
cmlvciB0bwoiZXZ0Y2huOiByZWxheCBwb3J0X2lzX3ZhbGlkKCkiIHRoaXMg
Y291bGQgaGF2ZSBsZWQgdG8gZS5nLgpldnRjaG5fY2xvc2UoKSdzICJCVUdf
T04oIXBvcnRfaXNfdmFsaWQoZDIsIHBvcnQyKSkiIHRvIHRyaWdnZXIuCgpJ
bnRyb2R1Y2UgYSBjb3VudGVyIG9mIGFjdGl2ZSBwb3J0cyBhbmQgY2hlY2sg
dGhhdCBpdCdzIChzdGlsbCkgbm8KbGFyZ2VyIHRoZW4gdGhlIG51bWJlciBv
ZiBYZW4gaW50ZXJuYWxseSB1c2VkIG9uZXMgYWZ0ZXIgb2J0YWluaW5nIHRo
ZQpuZWNlc3NhcnkgbG9jayBpbiBldnRjaG5fcmVzZXQoKS4KCkFzIHRvIHRo
ZSBhY2Nlc3MgbW9kZWwgb2YgdGhlIG5ldyB7YWN0aXZlLHhlbn1fZXZ0Y2hu
cyBmaWVsZHMgLSB3aGlsZQphbGwgd3JpdGVzIGdldCBkb25lIHVzaW5nIHdy
aXRlX2F0b21pYygpLCByZWFkcyBvdWdodCB0byB1c2UKcmVhZF9hdG9taWMo
KSBvbmx5IHdoZW4gb3V0c2lkZSBvZiBhIHN1aXRhYmx5IGxvY2tlZCByZWdp
b24uCgpOb3RlIHRoYXQgYXMgb2Ygbm93IGV2dGNobl9iaW5kX3ZpcnEoKSBh
bmQgZXZ0Y2huX2JpbmRfaXBpKCkgZG9uJ3QgaGF2ZQphIG5lZWQgdG8gY2Fs
bCBjaGVja19mcmVlX3BvcnQoKS4KClRoaXMgaXMgcGFydCBvZiBYU0EtMzQz
LgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2Uu
Y29tPgpSZXZpZXdlZC1ieTogU3RlZmFubyBTdGFiZWxsaW5pIDxzc3RhYmVs
bGluaUBrZXJuZWwub3JnPgpSZXZpZXdlZC1ieTogSnVsaWVuIEdyYWxsIDxq
Z3JhbGxAYW1hem9uLmNvbT4KLS0tCnY3OiBEcm9wIG9wdGltaXphdGlvbiBm
cm9tIGV2dGNobl9yZXNldCgpLgp2NjogRml4IGxvb3AgZXhpdCBjb25kaXRp
b24gaW4gZXZ0Y2huX3Jlc2V0KCkuIFVzZSB7cmVhZCx3cml0ZX1fYXRvbWlj
KCkKICAgIGFsc28gZm9yIHhlbl9ldnRjaG5zLgp2NTogTW92ZSBpbmNyZW1l
bnQgaW4gYWxsb2NfdW5ib3VuZF94ZW5fZXZlbnRfY2hhbm5lbCgpIG91dCBv
ZiB0aGUgaW5uZXIKICAgIGxvY2tlZCByZWdpb24uCnY0OiBBY2NvdW50IGZv
ciBYZW4gaW50ZXJuYWwgcG9ydHMuCnYzOiBEb2N1bWVudCBpbnRlbmRlZCBh
Y2Nlc3MgbmV4dCB0byBuZXcgc3RydWN0IGZpZWxkLgp2MjogQWRkIGNvbW1l
bnQgdG8gY2hlY2tfZnJlZV9wb3J0KCkuIERyb3AgY29tbWVudGVkIG91dCBj
YWxscy4KCi0tLSBhL3hlbi9jb21tb24vZXZlbnRfY2hhbm5lbC5jCisrKyBi
L3hlbi9jb21tb24vZXZlbnRfY2hhbm5lbC5jCkBAIC0xODgsNiArMTg4LDgg
QEAgaW50IGV2dGNobl9hbGxvY2F0ZV9wb3J0KHN0cnVjdCBkb21haW4gKgog
ICAgICAgICB3cml0ZV9hdG9taWMoJmQtPnZhbGlkX2V2dGNobnMsIGQtPnZh
bGlkX2V2dGNobnMgKyBFVlRDSE5TX1BFUl9CVUNLRVQpOwogICAgIH0KIAor
ICAgIHdyaXRlX2F0b21pYygmZC0+YWN0aXZlX2V2dGNobnMsIGQtPmFjdGl2
ZV9ldnRjaG5zICsgMSk7CisKICAgICByZXR1cm4gMDsKIH0KIApAQCAtMjEx
LDExICsyMTMsMjYgQEAgc3RhdGljIGludCBnZXRfZnJlZV9wb3J0KHN0cnVj
dCBkb21haW4gKgogICAgIHJldHVybiAtRU5PU1BDOwogfQogCisvKgorICog
Q2hlY2sgd2hldGhlciBhIHBvcnQgaXMgc3RpbGwgbWFya2VkIGZyZWUsIGFu
ZCBpZiBzbyB1cGRhdGUgdGhlIGRvbWFpbgorICogY291bnRlciBhY2NvcmRp
bmdseS4gIFRvIGJlIHVzZWQgb24gZnVuY3Rpb24gZXhpdCBwYXRocy4KKyAq
Lworc3RhdGljIHZvaWQgY2hlY2tfZnJlZV9wb3J0KHN0cnVjdCBkb21haW4g
KmQsIGV2dGNobl9wb3J0X3QgcG9ydCkKK3sKKyAgICBpZiAoIHBvcnRfaXNf
dmFsaWQoZCwgcG9ydCkgJiYKKyAgICAgICAgIGV2dGNobl9mcm9tX3BvcnQo
ZCwgcG9ydCktPnN0YXRlID09IEVDU19GUkVFICkKKyAgICAgICAgd3JpdGVf
YXRvbWljKCZkLT5hY3RpdmVfZXZ0Y2hucywgZC0+YWN0aXZlX2V2dGNobnMg
LSAxKTsKK30KKwogdm9pZCBldnRjaG5fZnJlZShzdHJ1Y3QgZG9tYWluICpk
LCBzdHJ1Y3QgZXZ0Y2huICpjaG4pCiB7CiAgICAgLyogQ2xlYXIgcGVuZGlu
ZyBldmVudCB0byBhdm9pZCB1bmV4cGVjdGVkIGJlaGF2aW9yIG9uIHJlLWJp
bmQuICovCiAgICAgZXZ0Y2huX3BvcnRfY2xlYXJfcGVuZGluZyhkLCBjaG4p
OwogCisgICAgaWYgKCBjb25zdW1lcl9pc194ZW4oY2huKSApCisgICAgICAg
IHdyaXRlX2F0b21pYygmZC0+eGVuX2V2dGNobnMsIGQtPnhlbl9ldnRjaG5z
IC0gMSk7CisgICAgd3JpdGVfYXRvbWljKCZkLT5hY3RpdmVfZXZ0Y2hucywg
ZC0+YWN0aXZlX2V2dGNobnMgLSAxKTsKKwogICAgIC8qIFJlc2V0IGJpbmRp
bmcgdG8gdmNwdTAgd2hlbiB0aGUgY2hhbm5lbCBpcyBmcmVlZC4gKi8KICAg
ICBjaG4tPnN0YXRlICAgICAgICAgID0gRUNTX0ZSRUU7CiAgICAgY2huLT5u
b3RpZnlfdmNwdV9pZCA9IDA7CkBAIC0yNTgsNiArMjc1LDcgQEAgc3RhdGlj
IGxvbmcgZXZ0Y2huX2FsbG9jX3VuYm91bmQoZXZ0Y2huXwogICAgIGFsbG9j
LT5wb3J0ID0gcG9ydDsKIAogIG91dDoKKyAgICBjaGVja19mcmVlX3BvcnQo
ZCwgcG9ydCk7CiAgICAgc3Bpbl91bmxvY2soJmQtPmV2ZW50X2xvY2spOwog
ICAgIHJjdV91bmxvY2tfZG9tYWluKGQpOwogCkBAIC0zNTEsNiArMzY5LDcg
QEAgc3RhdGljIGxvbmcgZXZ0Y2huX2JpbmRfaW50ZXJkb21haW4oZXZ0Ywog
ICAgIGJpbmQtPmxvY2FsX3BvcnQgPSBscG9ydDsKIAogIG91dDoKKyAgICBj
aGVja19mcmVlX3BvcnQobGQsIGxwb3J0KTsKICAgICBzcGluX3VubG9jaygm
bGQtPmV2ZW50X2xvY2spOwogICAgIGlmICggbGQgIT0gcmQgKQogICAgICAg
ICBzcGluX3VubG9jaygmcmQtPmV2ZW50X2xvY2spOwpAQCAtNDg4LDcgKzUw
Nyw3IEBAIHN0YXRpYyBsb25nIGV2dGNobl9iaW5kX3BpcnEoZXZ0Y2huX2Jp
bmQKICAgICBzdHJ1Y3QgZG9tYWluICpkID0gY3VycmVudC0+ZG9tYWluOwog
ICAgIHN0cnVjdCB2Y3B1ICAgKnYgPSBkLT52Y3B1WzBdOwogICAgIHN0cnVj
dCBwaXJxICAgKmluZm87Ci0gICAgaW50ICAgICAgICAgICAgcG9ydCwgcGly
cSA9IGJpbmQtPnBpcnE7CisgICAgaW50ICAgICAgICAgICAgcG9ydCA9IDAs
IHBpcnEgPSBiaW5kLT5waXJxOwogICAgIGxvbmcgICAgICAgICAgIHJjOwog
CiAgICAgaWYgKCAocGlycSA8IDApIHx8IChwaXJxID49IGQtPm5yX3BpcnFz
KSApCkBAIC01MzYsNiArNTU1LDcgQEAgc3RhdGljIGxvbmcgZXZ0Y2huX2Jp
bmRfcGlycShldnRjaG5fYmluZAogICAgIGFyY2hfZXZ0Y2huX2JpbmRfcGly
cShkLCBwaXJxKTsKIAogIG91dDoKKyAgICBjaGVja19mcmVlX3BvcnQoZCwg
cG9ydCk7CiAgICAgc3Bpbl91bmxvY2soJmQtPmV2ZW50X2xvY2spOwogCiAg
ICAgcmV0dXJuIHJjOwpAQCAtMTAxMSwxMCArMTAzMSwxMCBAQCBpbnQgZXZ0
Y2huX3VubWFzayh1bnNpZ25lZCBpbnQgcG9ydCkKICAgICByZXR1cm4gMDsK
IH0KIAotCiBpbnQgZXZ0Y2huX3Jlc2V0KHN0cnVjdCBkb21haW4gKmQpCiB7
CiAgICAgdW5zaWduZWQgaW50IGk7CisgICAgaW50IHJjID0gMDsKIAogICAg
IGlmICggZCAhPSBjdXJyZW50LT5kb21haW4gJiYgIWQtPmNvbnRyb2xsZXJf
cGF1c2VfY291bnQgKQogICAgICAgICByZXR1cm4gLUVJTlZBTDsKQEAgLTEw
MjQsNyArMTA0NCw5IEBAIGludCBldnRjaG5fcmVzZXQoc3RydWN0IGRvbWFp
biAqZCkKIAogICAgIHNwaW5fbG9jaygmZC0+ZXZlbnRfbG9jayk7CiAKLSAg
ICBpZiAoIGQtPmV2dGNobl9maWZvICkKKyAgICBpZiAoIGQtPmFjdGl2ZV9l
dnRjaG5zID4gZC0+eGVuX2V2dGNobnMgKQorICAgICAgICByYyA9IC1FQUdB
SU47CisgICAgZWxzZSBpZiAoIGQtPmV2dGNobl9maWZvICkKICAgICB7CiAg
ICAgICAgIC8qIFN3aXRjaGluZyBiYWNrIHRvIDItbGV2ZWwgQUJJLiAqLwog
ICAgICAgICBldnRjaG5fZmlmb19kZXN0cm95KGQpOwpAQCAtMTAzMyw3ICsx
MDU1LDcgQEAgaW50IGV2dGNobl9yZXNldChzdHJ1Y3QgZG9tYWluICpkKQog
CiAgICAgc3Bpbl91bmxvY2soJmQtPmV2ZW50X2xvY2spOwogCi0gICAgcmV0
dXJuIDA7CisgICAgcmV0dXJuIHJjOwogfQogCiBzdGF0aWMgbG9uZyBldnRj
aG5fc2V0X3ByaW9yaXR5KGNvbnN0IHN0cnVjdCBldnRjaG5fc2V0X3ByaW9y
aXR5ICpzZXRfcHJpb3JpdHkpCkBAIC0xMjE5LDEwICsxMjQxLDkgQEAgaW50
IGFsbG9jX3VuYm91bmRfeGVuX2V2ZW50X2NoYW5uZWwoCiAKICAgICBzcGlu
X2xvY2soJmxkLT5ldmVudF9sb2NrKTsKIAotICAgIHJjID0gZ2V0X2ZyZWVf
cG9ydChsZCk7CisgICAgcG9ydCA9IHJjID0gZ2V0X2ZyZWVfcG9ydChsZCk7
CiAgICAgaWYgKCByYyA8IDAgKQogICAgICAgICBnb3RvIG91dDsKLSAgICBw
b3J0ID0gcmM7CiAgICAgY2huID0gZXZ0Y2huX2Zyb21fcG9ydChsZCwgcG9y
dCk7CiAKICAgICByYyA9IHhzbV9ldnRjaG5fdW5ib3VuZChYU01fVEFSR0VU
LCBsZCwgY2huLCByZW1vdGVfZG9taWQpOwpAQCAtMTIzOCw3ICsxMjU5LDEw
IEBAIGludCBhbGxvY191bmJvdW5kX3hlbl9ldmVudF9jaGFubmVsKAogCiAg
ICAgc3Bpbl91bmxvY2soJmNobi0+bG9jayk7CiAKKyAgICB3cml0ZV9hdG9t
aWMoJmxkLT54ZW5fZXZ0Y2hucywgbGQtPnhlbl9ldnRjaG5zICsgMSk7CisK
ICBvdXQ6CisgICAgY2hlY2tfZnJlZV9wb3J0KGxkLCBwb3J0KTsKICAgICBz
cGluX3VubG9jaygmbGQtPmV2ZW50X2xvY2spOwogCiAgICAgcmV0dXJuIHJj
IDwgMCA/IHJjIDogcG9ydDsKQEAgLTEzMTQsNiArMTMzOCw3IEBAIGludCBl
dnRjaG5faW5pdChzdHJ1Y3QgZG9tYWluICpkLCB1bnNpZ24KICAgICAgICAg
cmV0dXJuIC1FSU5WQUw7CiAgICAgfQogICAgIGV2dGNobl9mcm9tX3BvcnQo
ZCwgMCktPnN0YXRlID0gRUNTX1JFU0VSVkVEOworICAgIHdyaXRlX2F0b21p
YygmZC0+YWN0aXZlX2V2dGNobnMsIDApOwogCiAjaWYgTUFYX1ZJUlRfQ1BV
UyA+IEJJVFNfUEVSX0xPTkcKICAgICBkLT5wb2xsX21hc2sgPSB4emFsbG9j
X2FycmF5KHVuc2lnbmVkIGxvbmcsIEJJVFNfVE9fTE9OR1MoZC0+bWF4X3Zj
cHVzKSk7CkBAIC0xMzQwLDYgKzEzNjUsOCBAQCB2b2lkIGV2dGNobl9kZXN0
cm95KHN0cnVjdCBkb21haW4gKmQpCiAgICAgZm9yICggaSA9IDA7IHBvcnRf
aXNfdmFsaWQoZCwgaSk7IGkrKyApCiAgICAgICAgIGV2dGNobl9jbG9zZShk
LCBpLCAwKTsKIAorICAgIEFTU0VSVCghZC0+YWN0aXZlX2V2dGNobnMpOwor
CiAgICAgY2xlYXJfZ2xvYmFsX3ZpcnFfaGFuZGxlcnMoZCk7CiAKICAgICBl
dnRjaG5fZmlmb19kZXN0cm95KGQpOwotLS0gYS94ZW4vaW5jbHVkZS94ZW4v
c2NoZWQuaAorKysgYi94ZW4vaW5jbHVkZS94ZW4vc2NoZWQuaApAQCAtMzYx
LDYgKzM2MSwxNiBAQCBzdHJ1Y3QgZG9tYWluCiAgICAgc3RydWN0IGV2dGNo
biAgKipldnRjaG5fZ3JvdXBbTlJfRVZUQ0hOX0dST1VQU107IC8qIGFsbCBv
dGhlciBidWNrZXRzICovCiAgICAgdW5zaWduZWQgaW50ICAgICBtYXhfZXZ0
Y2huX3BvcnQ7IC8qIG1heCBwZXJtaXR0ZWQgcG9ydCBudW1iZXIgKi8KICAg
ICB1bnNpZ25lZCBpbnQgICAgIHZhbGlkX2V2dGNobnM7ICAgLyogbnVtYmVy
IG9mIGFsbG9jYXRlZCBldmVudCBjaGFubmVscyAqLworICAgIC8qCisgICAg
ICogTnVtYmVyIG9mIGluLXVzZSBldmVudCBjaGFubmVscy4gIFdyaXRlcnMg
c2hvdWxkIHVzZSB3cml0ZV9hdG9taWMoKS4KKyAgICAgKiBSZWFkZXJzIG5l
ZWQgdG8gdXNlIHJlYWRfYXRvbWljKCkgb25seSB3aGVuIG5vdCBob2xkaW5n
IGV2ZW50X2xvY2suCisgICAgICovCisgICAgdW5zaWduZWQgaW50ICAgICBh
Y3RpdmVfZXZ0Y2huczsKKyAgICAvKgorICAgICAqIE51bWJlciBvZiBldmVu
dCBjaGFubmVscyB1c2VkIGludGVybmFsbHkgYnkgWGVuIChub3Qgc3ViamVj
dCB0bworICAgICAqIEVWVENITk9QX3Jlc2V0KS4gIFJlYWQvd3JpdGUgYWNj
ZXNzIGxpa2UgZm9yIGFjdGl2ZV9ldnRjaG5zLgorICAgICAqLworICAgIHVu
c2lnbmVkIGludCAgICAgeGVuX2V2dGNobnM7CiAgICAgc3BpbmxvY2tfdCAg
ICAgICBldmVudF9sb2NrOwogICAgIGNvbnN0IHN0cnVjdCBldnRjaG5fcG9y
dF9vcHMgKmV2dGNobl9wb3J0X29wczsKICAgICBzdHJ1Y3QgZXZ0Y2huX2Zp
Zm9fZG9tYWluICpldnRjaG5fZmlmbzsK

--=separator
Content-Type: application/octet-stream; name="xsa343/xsa343-2.patch"
Content-Disposition: attachment; filename="xsa343/xsa343-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBldnRjaG46IGNvbnZlcnQgcGVyLWNoYW5uZWwgbG9jayB0byBiZSBJUlEt
c2FmZQoKLi4uIGluIG9yZGVyIGZvciBzZW5kX2d1ZXN0X3tnbG9iYWwsdmNw
dX1fdmlycSgpIHRvIGJlIGFibGUgdG8gbWFrZSB1c2UKb2YgaXQuCgpUaGlz
IGlzIHBhcnQgb2YgWFNBLTM0My4KClNpZ25lZC1vZmYtYnk6IEphbiBCZXVs
aWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KQWNrZWQtYnk6IEp1bGllbiBHcmFs
bCA8amdyYWxsQGFtYXpvbi5jb20+Ci0tLQp2NjogTmV3LgotLS0KVEJEOiBU
aGlzIGlzIHRoZSAiZHVtYiIgY29udmVyc2lvbiB2YXJpYW50LiBJbiBhIGNv
dXBsZSBvZiBjYXNlcyB0aGUKICAgICBzbGlnaHRseSBzaW1wbGVyIHNwaW5f
eyx1bn1sb2NrX2lycSgpIGNvdWxkIGFwcGFyZW50bHkgYmUgdXNlZC4KCi0t
LSBhL3hlbi9jb21tb24vZXZlbnRfY2hhbm5lbC5jCisrKyBiL3hlbi9jb21t
b24vZXZlbnRfY2hhbm5lbC5jCkBAIC0yNDgsNiArMjQ4LDcgQEAgc3RhdGlj
IGxvbmcgZXZ0Y2huX2FsbG9jX3VuYm91bmQoZXZ0Y2huXwogICAgIGludCAg
ICAgICAgICAgIHBvcnQ7CiAgICAgZG9taWRfdCAgICAgICAgZG9tID0gYWxs
b2MtPmRvbTsKICAgICBsb25nICAgICAgICAgICByYzsKKyAgICB1bnNpZ25l
ZCBsb25nICBmbGFnczsKIAogICAgIGQgPSByY3VfbG9ja19kb21haW5fYnlf
YW55X2lkKGRvbSk7CiAgICAgaWYgKCBkID09IE5VTEwgKQpAQCAtMjYzLDE0
ICsyNjQsMTQgQEAgc3RhdGljIGxvbmcgZXZ0Y2huX2FsbG9jX3VuYm91bmQo
ZXZ0Y2huXwogICAgIGlmICggcmMgKQogICAgICAgICBnb3RvIG91dDsKIAot
ICAgIHNwaW5fbG9jaygmY2huLT5sb2NrKTsKKyAgICBzcGluX2xvY2tfaXJx
c2F2ZSgmY2huLT5sb2NrLCBmbGFncyk7CiAKICAgICBjaG4tPnN0YXRlID0g
RUNTX1VOQk9VTkQ7CiAgICAgaWYgKCAoY2huLT51LnVuYm91bmQucmVtb3Rl
X2RvbWlkID0gYWxsb2MtPnJlbW90ZV9kb20pID09IERPTUlEX1NFTEYgKQog
ICAgICAgICBjaG4tPnUudW5ib3VuZC5yZW1vdGVfZG9taWQgPSBjdXJyZW50
LT5kb21haW4tPmRvbWFpbl9pZDsKICAgICBldnRjaG5fcG9ydF9pbml0KGQs
IGNobik7CiAKLSAgICBzcGluX3VubG9jaygmY2huLT5sb2NrKTsKKyAgICBz
cGluX3VubG9ja19pcnFyZXN0b3JlKCZjaG4tPmxvY2ssIGZsYWdzKTsKIAog
ICAgIGFsbG9jLT5wb3J0ID0gcG9ydDsKIApAQCAtMjgzLDI2ICsyODQsMzIg
QEAgc3RhdGljIGxvbmcgZXZ0Y2huX2FsbG9jX3VuYm91bmQoZXZ0Y2huXwog
fQogCiAKLXN0YXRpYyB2b2lkIGRvdWJsZV9ldnRjaG5fbG9jayhzdHJ1Y3Qg
ZXZ0Y2huICpsY2huLCBzdHJ1Y3QgZXZ0Y2huICpyY2huKQorc3RhdGljIHVu
c2lnbmVkIGxvbmcgZG91YmxlX2V2dGNobl9sb2NrKHN0cnVjdCBldnRjaG4g
KmxjaG4sCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgc3RydWN0IGV2dGNobiAqcmNobikKIHsKLSAgICBpZiAoIGxjaG4gPCBy
Y2huICkKKyAgICB1bnNpZ25lZCBsb25nIGZsYWdzOworCisgICAgaWYgKCBs
Y2huIDw9IHJjaG4gKQogICAgIHsKLSAgICAgICAgc3Bpbl9sb2NrKCZsY2hu
LT5sb2NrKTsKLSAgICAgICAgc3Bpbl9sb2NrKCZyY2huLT5sb2NrKTsKKyAg
ICAgICAgc3Bpbl9sb2NrX2lycXNhdmUoJmxjaG4tPmxvY2ssIGZsYWdzKTsK
KyAgICAgICAgaWYgKCBsY2huICE9IHJjaG4gKQorICAgICAgICAgICAgc3Bp
bl9sb2NrKCZyY2huLT5sb2NrKTsKICAgICB9CiAgICAgZWxzZQogICAgIHsK
LSAgICAgICAgaWYgKCBsY2huICE9IHJjaG4gKQotICAgICAgICAgICAgc3Bp
bl9sb2NrKCZyY2huLT5sb2NrKTsKKyAgICAgICAgc3Bpbl9sb2NrX2lycXNh
dmUoJnJjaG4tPmxvY2ssIGZsYWdzKTsKICAgICAgICAgc3Bpbl9sb2NrKCZs
Y2huLT5sb2NrKTsKICAgICB9CisKKyAgICByZXR1cm4gZmxhZ3M7CiB9CiAK
LXN0YXRpYyB2b2lkIGRvdWJsZV9ldnRjaG5fdW5sb2NrKHN0cnVjdCBldnRj
aG4gKmxjaG4sIHN0cnVjdCBldnRjaG4gKnJjaG4pCitzdGF0aWMgdm9pZCBk
b3VibGVfZXZ0Y2huX3VubG9jayhzdHJ1Y3QgZXZ0Y2huICpsY2huLCBzdHJ1
Y3QgZXZ0Y2huICpyY2huLAorICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgdW5zaWduZWQgbG9uZyBmbGFncykKIHsKLSAgICBzcGluX3VubG9j
aygmbGNobi0+bG9jayk7CiAgICAgaWYgKCBsY2huICE9IHJjaG4gKQotICAg
ICAgICBzcGluX3VubG9jaygmcmNobi0+bG9jayk7CisgICAgICAgIHNwaW5f
dW5sb2NrKCZsY2huLT5sb2NrKTsKKyAgICBzcGluX3VubG9ja19pcnFyZXN0
b3JlKCZyY2huLT5sb2NrLCBmbGFncyk7CiB9CiAKIHN0YXRpYyBsb25nIGV2
dGNobl9iaW5kX2ludGVyZG9tYWluKGV2dGNobl9iaW5kX2ludGVyZG9tYWlu
X3QgKmJpbmQpCkBAIC0zMTIsNiArMzE5LDcgQEAgc3RhdGljIGxvbmcgZXZ0
Y2huX2JpbmRfaW50ZXJkb21haW4oZXZ0YwogICAgIGludCAgICAgICAgICAg
IGxwb3J0LCBycG9ydCA9IGJpbmQtPnJlbW90ZV9wb3J0OwogICAgIGRvbWlk
X3QgICAgICAgIHJkb20gPSBiaW5kLT5yZW1vdGVfZG9tOwogICAgIGxvbmcg
ICAgICAgICAgIHJjOworICAgIHVuc2lnbmVkIGxvbmcgIGZsYWdzOwogCiAg
ICAgaWYgKCByZG9tID09IERPTUlEX1NFTEYgKQogICAgICAgICByZG9tID0g
Y3VycmVudC0+ZG9tYWluLT5kb21haW5faWQ7CkBAIC0zNDcsNyArMzU1LDcg
QEAgc3RhdGljIGxvbmcgZXZ0Y2huX2JpbmRfaW50ZXJkb21haW4oZXZ0Ywog
ICAgIGlmICggcmMgKQogICAgICAgICBnb3RvIG91dDsKIAotICAgIGRvdWJs
ZV9ldnRjaG5fbG9jayhsY2huLCByY2huKTsKKyAgICBmbGFncyA9IGRvdWJs
ZV9ldnRjaG5fbG9jayhsY2huLCByY2huKTsKIAogICAgIGxjaG4tPnUuaW50
ZXJkb21haW4ucmVtb3RlX2RvbSAgPSByZDsKICAgICBsY2huLT51LmludGVy
ZG9tYWluLnJlbW90ZV9wb3J0ID0gcnBvcnQ7CkBAIC0zNjQsNyArMzcyLDcg
QEAgc3RhdGljIGxvbmcgZXZ0Y2huX2JpbmRfaW50ZXJkb21haW4oZXZ0Ywog
ICAgICAqLwogICAgIGV2dGNobl9wb3J0X3NldF9wZW5kaW5nKGxkLCBsY2hu
LT5ub3RpZnlfdmNwdV9pZCwgbGNobik7CiAKLSAgICBkb3VibGVfZXZ0Y2hu
X3VubG9jayhsY2huLCByY2huKTsKKyAgICBkb3VibGVfZXZ0Y2huX3VubG9j
ayhsY2huLCByY2huLCBmbGFncyk7CiAKICAgICBiaW5kLT5sb2NhbF9wb3J0
ID0gbHBvcnQ7CiAKQEAgLTM4Nyw2ICszOTUsNyBAQCBpbnQgZXZ0Y2huX2Jp
bmRfdmlycShldnRjaG5fYmluZF92aXJxX3QKICAgICBzdHJ1Y3QgZG9tYWlu
ICpkID0gY3VycmVudC0+ZG9tYWluOwogICAgIGludCAgICAgICAgICAgIHZp
cnEgPSBiaW5kLT52aXJxLCB2Y3B1ID0gYmluZC0+dmNwdTsKICAgICBpbnQg
ICAgICAgICAgICByYyA9IDA7CisgICAgdW5zaWduZWQgbG9uZyAgZmxhZ3M7
CiAKICAgICBpZiAoICh2aXJxIDwgMCkgfHwgKHZpcnEgPj0gQVJSQVlfU0la
RSh2LT52aXJxX3RvX2V2dGNobikpICkKICAgICAgICAgcmV0dXJuIC1FSU5W
QUw7CkBAIC00MjQsMTQgKzQzMywxNCBAQCBpbnQgZXZ0Y2huX2JpbmRfdmly
cShldnRjaG5fYmluZF92aXJxX3QKIAogICAgIGNobiA9IGV2dGNobl9mcm9t
X3BvcnQoZCwgcG9ydCk7CiAKLSAgICBzcGluX2xvY2soJmNobi0+bG9jayk7
CisgICAgc3Bpbl9sb2NrX2lycXNhdmUoJmNobi0+bG9jaywgZmxhZ3MpOwog
CiAgICAgY2huLT5zdGF0ZSAgICAgICAgICA9IEVDU19WSVJROwogICAgIGNo
bi0+bm90aWZ5X3ZjcHVfaWQgPSB2Y3B1OwogICAgIGNobi0+dS52aXJxICAg
ICAgICAgPSB2aXJxOwogICAgIGV2dGNobl9wb3J0X2luaXQoZCwgY2huKTsK
IAotICAgIHNwaW5fdW5sb2NrKCZjaG4tPmxvY2spOworICAgIHNwaW5fdW5s
b2NrX2lycXJlc3RvcmUoJmNobi0+bG9jaywgZmxhZ3MpOwogCiAgICAgdi0+
dmlycV90b19ldnRjaG5bdmlycV0gPSBiaW5kLT5wb3J0ID0gcG9ydDsKIApA
QCAtNDQ4LDYgKzQ1Nyw3IEBAIHN0YXRpYyBsb25nIGV2dGNobl9iaW5kX2lw
aShldnRjaG5fYmluZF8KICAgICBzdHJ1Y3QgZG9tYWluICpkID0gY3VycmVu
dC0+ZG9tYWluOwogICAgIGludCAgICAgICAgICAgIHBvcnQsIHZjcHUgPSBi
aW5kLT52Y3B1OwogICAgIGxvbmcgICAgICAgICAgIHJjID0gMDsKKyAgICB1
bnNpZ25lZCBsb25nICBmbGFnczsKIAogICAgIGlmICggZG9tYWluX3ZjcHUo
ZCwgdmNwdSkgPT0gTlVMTCApCiAgICAgICAgIHJldHVybiAtRU5PRU5UOwpA
QCAtNDU5LDEzICs0NjksMTMgQEAgc3RhdGljIGxvbmcgZXZ0Y2huX2JpbmRf
aXBpKGV2dGNobl9iaW5kXwogCiAgICAgY2huID0gZXZ0Y2huX2Zyb21fcG9y
dChkLCBwb3J0KTsKIAotICAgIHNwaW5fbG9jaygmY2huLT5sb2NrKTsKKyAg
ICBzcGluX2xvY2tfaXJxc2F2ZSgmY2huLT5sb2NrLCBmbGFncyk7CiAKICAg
ICBjaG4tPnN0YXRlICAgICAgICAgID0gRUNTX0lQSTsKICAgICBjaG4tPm5v
dGlmeV92Y3B1X2lkID0gdmNwdTsKICAgICBldnRjaG5fcG9ydF9pbml0KGQs
IGNobik7CiAKLSAgICBzcGluX3VubG9jaygmY2huLT5sb2NrKTsKKyAgICBz
cGluX3VubG9ja19pcnFyZXN0b3JlKCZjaG4tPmxvY2ssIGZsYWdzKTsKIAog
ICAgIGJpbmQtPnBvcnQgPSBwb3J0OwogCkBAIC01MDksNiArNTE5LDcgQEAg
c3RhdGljIGxvbmcgZXZ0Y2huX2JpbmRfcGlycShldnRjaG5fYmluZAogICAg
IHN0cnVjdCBwaXJxICAgKmluZm87CiAgICAgaW50ICAgICAgICAgICAgcG9y
dCA9IDAsIHBpcnEgPSBiaW5kLT5waXJxOwogICAgIGxvbmcgICAgICAgICAg
IHJjOworICAgIHVuc2lnbmVkIGxvbmcgIGZsYWdzOwogCiAgICAgaWYgKCAo
cGlycSA8IDApIHx8IChwaXJxID49IGQtPm5yX3BpcnFzKSApCiAgICAgICAg
IHJldHVybiAtRUlOVkFMOwpAQCAtNTQxLDE0ICs1NTIsMTQgQEAgc3RhdGlj
IGxvbmcgZXZ0Y2huX2JpbmRfcGlycShldnRjaG5fYmluZAogICAgICAgICBn
b3RvIG91dDsKICAgICB9CiAKLSAgICBzcGluX2xvY2soJmNobi0+bG9jayk7
CisgICAgc3Bpbl9sb2NrX2lycXNhdmUoJmNobi0+bG9jaywgZmxhZ3MpOwog
CiAgICAgY2huLT5zdGF0ZSAgPSBFQ1NfUElSUTsKICAgICBjaG4tPnUucGly
cS5pcnEgPSBwaXJxOwogICAgIGxpbmtfcGlycV9wb3J0KHBvcnQsIGNobiwg
dik7CiAgICAgZXZ0Y2huX3BvcnRfaW5pdChkLCBjaG4pOwogCi0gICAgc3Bp
bl91bmxvY2soJmNobi0+bG9jayk7CisgICAgc3Bpbl91bmxvY2tfaXJxcmVz
dG9yZSgmY2huLT5sb2NrLCBmbGFncyk7CiAKICAgICBiaW5kLT5wb3J0ID0g
cG9ydDsKIApAQCAtNTY5LDYgKzU4MCw3IEBAIGludCBldnRjaG5fY2xvc2Uo
c3RydWN0IGRvbWFpbiAqZDEsIGludAogICAgIHN0cnVjdCBldnRjaG4gKmNo
bjEsICpjaG4yOwogICAgIGludCAgICAgICAgICAgIHBvcnQyOwogICAgIGxv
bmcgICAgICAgICAgIHJjID0gMDsKKyAgICB1bnNpZ25lZCBsb25nICBmbGFn
czsKIAogIGFnYWluOgogICAgIHNwaW5fbG9jaygmZDEtPmV2ZW50X2xvY2sp
OwpAQCAtNjY4LDE0ICs2ODAsMTQgQEAgaW50IGV2dGNobl9jbG9zZShzdHJ1
Y3QgZG9tYWluICpkMSwgaW50CiAgICAgICAgIEJVR19PTihjaG4yLT5zdGF0
ZSAhPSBFQ1NfSU5URVJET01BSU4pOwogICAgICAgICBCVUdfT04oY2huMi0+
dS5pbnRlcmRvbWFpbi5yZW1vdGVfZG9tICE9IGQxKTsKIAotICAgICAgICBk
b3VibGVfZXZ0Y2huX2xvY2soY2huMSwgY2huMik7CisgICAgICAgIGZsYWdz
ID0gZG91YmxlX2V2dGNobl9sb2NrKGNobjEsIGNobjIpOwogCiAgICAgICAg
IGV2dGNobl9mcmVlKGQxLCBjaG4xKTsKIAogICAgICAgICBjaG4yLT5zdGF0
ZSA9IEVDU19VTkJPVU5EOwogICAgICAgICBjaG4yLT51LnVuYm91bmQucmVt
b3RlX2RvbWlkID0gZDEtPmRvbWFpbl9pZDsKIAotICAgICAgICBkb3VibGVf
ZXZ0Y2huX3VubG9jayhjaG4xLCBjaG4yKTsKKyAgICAgICAgZG91YmxlX2V2
dGNobl91bmxvY2soY2huMSwgY2huMiwgZmxhZ3MpOwogCiAgICAgICAgIGdv
dG8gb3V0OwogCkBAIC02ODMsOSArNjk1LDkgQEAgaW50IGV2dGNobl9jbG9z
ZShzdHJ1Y3QgZG9tYWluICpkMSwgaW50CiAgICAgICAgIEJVRygpOwogICAg
IH0KIAotICAgIHNwaW5fbG9jaygmY2huMS0+bG9jayk7CisgICAgc3Bpbl9s
b2NrX2lycXNhdmUoJmNobjEtPmxvY2ssIGZsYWdzKTsKICAgICBldnRjaG5f
ZnJlZShkMSwgY2huMSk7Ci0gICAgc3Bpbl91bmxvY2soJmNobjEtPmxvY2sp
OworICAgIHNwaW5fdW5sb2NrX2lycXJlc3RvcmUoJmNobjEtPmxvY2ssIGZs
YWdzKTsKIAogIG91dDoKICAgICBpZiAoIGQyICE9IE5VTEwgKQpAQCAtNzA1
LDEzICs3MTcsMTQgQEAgaW50IGV2dGNobl9zZW5kKHN0cnVjdCBkb21haW4g
KmxkLCB1bnNpZwogICAgIHN0cnVjdCBldnRjaG4gKmxjaG4sICpyY2huOwog
ICAgIHN0cnVjdCBkb21haW4gKnJkOwogICAgIGludCAgICAgICAgICAgIHJw
b3J0LCByZXQgPSAwOworICAgIHVuc2lnbmVkIGxvbmcgIGZsYWdzOwogCiAg
ICAgaWYgKCAhcG9ydF9pc192YWxpZChsZCwgbHBvcnQpICkKICAgICAgICAg
cmV0dXJuIC1FSU5WQUw7CiAKICAgICBsY2huID0gZXZ0Y2huX2Zyb21fcG9y
dChsZCwgbHBvcnQpOwogCi0gICAgc3Bpbl9sb2NrKCZsY2huLT5sb2NrKTsK
KyAgICBzcGluX2xvY2tfaXJxc2F2ZSgmbGNobi0+bG9jaywgZmxhZ3MpOwog
CiAgICAgLyogR3Vlc3QgY2Fubm90IHNlbmQgdmlhIGEgWGVuLWF0dGFjaGVk
IGV2ZW50IGNoYW5uZWwuICovCiAgICAgaWYgKCB1bmxpa2VseShjb25zdW1l
cl9pc194ZW4obGNobikpICkKQEAgLTc0Niw3ICs3NTksNyBAQCBpbnQgZXZ0
Y2huX3NlbmQoc3RydWN0IGRvbWFpbiAqbGQsIHVuc2lnCiAgICAgfQogCiBv
dXQ6Ci0gICAgc3Bpbl91bmxvY2soJmxjaG4tPmxvY2spOworICAgIHNwaW5f
dW5sb2NrX2lycXJlc3RvcmUoJmxjaG4tPmxvY2ssIGZsYWdzKTsKIAogICAg
IHJldHVybiByZXQ7CiB9CkBAIC0xMjM4LDYgKzEyNTEsNyBAQCBpbnQgYWxs
b2NfdW5ib3VuZF94ZW5fZXZlbnRfY2hhbm5lbCgKIHsKICAgICBzdHJ1Y3Qg
ZXZ0Y2huICpjaG47CiAgICAgaW50ICAgICAgICAgICAgcG9ydCwgcmM7Cisg
ICAgdW5zaWduZWQgbG9uZyAgZmxhZ3M7CiAKICAgICBzcGluX2xvY2soJmxk
LT5ldmVudF9sb2NrKTsKIApAQCAtMTI1MCwxNCArMTI2NCwxNCBAQCBpbnQg
YWxsb2NfdW5ib3VuZF94ZW5fZXZlbnRfY2hhbm5lbCgKICAgICBpZiAoIHJj
ICkKICAgICAgICAgZ290byBvdXQ7CiAKLSAgICBzcGluX2xvY2soJmNobi0+
bG9jayk7CisgICAgc3Bpbl9sb2NrX2lycXNhdmUoJmNobi0+bG9jaywgZmxh
Z3MpOwogCiAgICAgY2huLT5zdGF0ZSA9IEVDU19VTkJPVU5EOwogICAgIGNo
bi0+eGVuX2NvbnN1bWVyID0gZ2V0X3hlbl9jb25zdW1lcihub3RpZmljYXRp
b25fZm4pOwogICAgIGNobi0+bm90aWZ5X3ZjcHVfaWQgPSBsdmNwdTsKICAg
ICBjaG4tPnUudW5ib3VuZC5yZW1vdGVfZG9taWQgPSByZW1vdGVfZG9taWQ7
CiAKLSAgICBzcGluX3VubG9jaygmY2huLT5sb2NrKTsKKyAgICBzcGluX3Vu
bG9ja19pcnFyZXN0b3JlKCZjaG4tPmxvY2ssIGZsYWdzKTsKIAogICAgIHdy
aXRlX2F0b21pYygmbGQtPnhlbl9ldnRjaG5zLCBsZC0+eGVuX2V2dGNobnMg
KyAxKTsKIApAQCAtMTI4MCwxMSArMTI5NCwxMiBAQCB2b2lkIG5vdGlmeV92
aWFfeGVuX2V2ZW50X2NoYW5uZWwoc3RydWN0CiB7CiAgICAgc3RydWN0IGV2
dGNobiAqbGNobiwgKnJjaG47CiAgICAgc3RydWN0IGRvbWFpbiAqcmQ7Cisg
ICAgdW5zaWduZWQgbG9uZyBmbGFnczsKIAogICAgIEFTU0VSVChwb3J0X2lz
X3ZhbGlkKGxkLCBscG9ydCkpOwogICAgIGxjaG4gPSBldnRjaG5fZnJvbV9w
b3J0KGxkLCBscG9ydCk7CiAKLSAgICBzcGluX2xvY2soJmxjaG4tPmxvY2sp
OworICAgIHNwaW5fbG9ja19pcnFzYXZlKCZsY2huLT5sb2NrLCBmbGFncyk7
CiAKICAgICBpZiAoIGxpa2VseShsY2huLT5zdGF0ZSA9PSBFQ1NfSU5URVJE
T01BSU4pICkKICAgICB7CkBAIC0xMjk0LDcgKzEzMDksNyBAQCB2b2lkIG5v
dGlmeV92aWFfeGVuX2V2ZW50X2NoYW5uZWwoc3RydWN0CiAgICAgICAgIGV2
dGNobl9wb3J0X3NldF9wZW5kaW5nKHJkLCByY2huLT5ub3RpZnlfdmNwdV9p
ZCwgcmNobik7CiAgICAgfQogCi0gICAgc3Bpbl91bmxvY2soJmxjaG4tPmxv
Y2spOworICAgIHNwaW5fdW5sb2NrX2lycXJlc3RvcmUoJmxjaG4tPmxvY2ss
IGZsYWdzKTsKIH0KIAogdm9pZCBldnRjaG5fY2hlY2tfcG9sbGVycyhzdHJ1
Y3QgZG9tYWluICpkLCB1bnNpZ25lZCBpbnQgcG9ydCkK

--=separator
Content-Type: application/octet-stream; name="xsa343/xsa343-3.patch"
Content-Disposition: attachment; filename="xsa343/xsa343-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBldnRjaG46IGFkZHJlc3MgcmFjZXMgd2l0aCBldnRjaG5fcmVzZXQoKQoK
TmVpdGhlciBkLT5ldnRjaG5fcG9ydF9vcHMgbm9yIG1heF9ldnRjaG5zKGQp
IG1heSBiZSB1c2VkIGluIGFuIGVudGlyZWx5CmxvY2stbGVzcyBtYW5uZXIs
IGFzIGJvdGggbWF5IGNoYW5nZSBieSBhIHJhY2luZyBldnRjaG5fcmVzZXQo
KS4gSW4gdGhlCmNvbW1vbiBjYXNlLCBhdCBsZWFzdCBvbmUgb2YgdGhlIGRv
bWFpbidzIGV2ZW50IGxvY2sgb3IgdGhlIHBlci1jaGFubmVsCmxvY2sgbmVl
ZHMgdG8gYmUgaGVsZC4gSW4gdGhlIHNwZWNpZmljIGNhc2Ugb2YgdGhlIGlu
dGVyLWRvbWFpbiBzZW5kaW5nCmJ5IGV2dGNobl9zZW5kKCkgYW5kIG5vdGlm
eV92aWFfeGVuX2V2ZW50X2NoYW5uZWwoKSBob2xkaW5nIHRoZSBvdGhlcgpz
aWRlJ3MgcGVyLWNoYW5uZWwgbG9jayBpcyBzdWZmaWNpZW50LCBhcyB0aGUg
Y2hhbm5lbCBjYW4ndCBjaGFuZ2Ugc3RhdGUKd2l0aG91dCBib3RoIHBlci1j
aGFubmVsIGxvY2tzIGhlbGQuIFdpdGhvdXQgc3VjaCBhIGNoYW5uZWwgY2hh
bmdpbmcKc3RhdGUsIGV2dGNobl9yZXNldCgpIGNhbid0IGNvbXBsZXRlIHN1
Y2Nlc3NmdWxseS4KCkxvY2stZnJlZSBhY2Nlc3NlcyBjb250aW51ZSB0byBi
ZSBwZXJtaXR0ZWQgZm9yIHRoZSBzaGltIChjYWxsaW5nIHNvbWUKb3RoZXJ3
aXNlIGludGVybmFsIGV2ZW50IGNoYW5uZWwgZnVuY3Rpb25zKSwgYXMgdGhp
cyBoYXBwZW5zIHdoaWxlIHRoZQpkb21haW4gaXMgaW4gZWZmZWN0aXZlbHkg
c2luZ2xlLXRocmVhZGVkIG1vZGUuIFNwZWNpYWwgY2FyZSBhbHNvIG5lZWRz
CnRha2luZyBmb3IgdGhlIHNoaW0ncyBtYXJraW5nIG9mIGluLXVzZSBwb3J0
cyBhcyBFQ1NfUkVTRVJWRUQgKGFsbG93aW5nCnVzZSBvZiBzdWNoIHBvcnRz
IGluIHRoZSBzaGltIGNhc2UgaXMgb2theSBiZWNhdXNlIHN3aXRjaGluZyBp
bnRvIGFuZApoZW5jZSBhbHNvIG91dCBvZiBGSUZPIG1vZGUgaXMgaW1wb3Nz
aWhibGUgdGhlcmUpLgoKQXMgYSBzaWRlIGVmZmVjdCwgY2VydGFpbiBvcGVy
YXRpb25zIG9uIFhlbiBib3VuZCBldmVudCBjaGFubmVscyB3aGljaAp3ZXJl
IG1pc3Rha2VubHkgcGVybWl0dGVkIHNvIGZhciAoZS5nLiB1bm1hc2sgb3Ig
cG9sbCkgd2lsbCBiZSByZWZ1c2VkCm5vdy4KClRoaXMgaXMgcGFydCBvZiBY
U0EtMzQzLgoKUmVwb3J0ZWQtYnk6IEp1bGllbiBHcmFsbCA8amdyYWxsQGFt
YXpvbi5jb20+ClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGlj
aEBzdXNlLmNvbT4KQWNrZWQtYnk6IEp1bGllbiBHcmFsbCA8amdyYWxsQGFt
YXpvbi5jb20+Ci0tLQp2OTogQWRkIGFyY2hfZXZ0Y2huX2lzX3NwZWNpYWwo
KSB0byBmaXggUFYgc2hpbS4Kdjg6IEFkZCBCVUlMRF9CVUdfT04oKSBpbiBl
dnRjaG5fdXNhYmxlKCkuCnY3OiBBZGQgbG9ja2luZyByZWxhdGVkIGNvbW1l
bnQgYWhlYWQgb2Ygc3RydWN0IGV2dGNobl9wb3J0X29wcy4KdjY6IE5ldy4K
LS0tClRCRDogSSd2ZSBiZWVuIGNvbnNpZGVyaW5nIHRvIG1vdmUgc29tZSBv
ZiB0aGUgd3JhcHBlcnMgZnJvbSB4ZW4vZXZlbnQuaAogICAgIGludG8gZXZl
bnRfY2hhbm5lbC5jIChvciBldmVuIGRyb3AgdGhlbSBhbHRvZ2V0aGVyKSwg
d2hlbiB0aGV5CiAgICAgcmVxdWlyZSBleHRlcm5hbCBsb2NraW5nIChlLmcu
IGV2dGNobl9wb3J0X2luaXQoKSBvcgogICAgIGV2dGNobl9wb3J0X3NldF9w
cmlvcml0eSgpKS4gRG9lcyBhbnlvbmUgaGF2ZSBhIHN0cm9uZyBvcGluaW9u
CiAgICAgZWl0aGVyIHdheT8KCi0tLSBhL3hlbi9hcmNoL3g4Ni9pcnEuYwor
KysgYi94ZW4vYXJjaC94ODYvaXJxLmMKQEAgLTI0ODgsMTQgKzI0ODgsMjQg
QEAgc3RhdGljIHZvaWQgZHVtcF9pcnFzKHVuc2lnbmVkIGNoYXIga2V5KQog
CiAgICAgICAgICAgICBmb3IgKCBpID0gMDsgaSA8IGFjdGlvbi0+bnJfZ3Vl
c3RzOyApCiAgICAgICAgICAgICB7CisgICAgICAgICAgICAgICAgc3RydWN0
IGV2dGNobiAqZXZ0Y2huOworICAgICAgICAgICAgICAgIHVuc2lnbmVkIGlu
dCBwZW5kaW5nID0gMiwgbWFza2VkID0gMjsKKwogICAgICAgICAgICAgICAg
IGQgPSBhY3Rpb24tPmd1ZXN0W2krK107CiAgICAgICAgICAgICAgICAgcGly
cSA9IGRvbWFpbl9pcnFfdG9fcGlycShkLCBpcnEpOwogICAgICAgICAgICAg
ICAgIGluZm8gPSBwaXJxX2luZm8oZCwgcGlycSk7CisgICAgICAgICAgICAg
ICAgZXZ0Y2huID0gZXZ0Y2huX2Zyb21fcG9ydChkLCBpbmZvLT5ldnRjaG4p
OworICAgICAgICAgICAgICAgIGxvY2FsX2lycV9kaXNhYmxlKCk7CisgICAg
ICAgICAgICAgICAgaWYgKCBzcGluX3RyeWxvY2soJmV2dGNobi0+bG9jaykg
KQorICAgICAgICAgICAgICAgIHsKKyAgICAgICAgICAgICAgICAgICAgcGVu
ZGluZyA9IGV2dGNobl9pc19wZW5kaW5nKGQsIGV2dGNobik7CisgICAgICAg
ICAgICAgICAgICAgIG1hc2tlZCA9IGV2dGNobl9pc19tYXNrZWQoZCwgZXZ0
Y2huKTsKKyAgICAgICAgICAgICAgICAgICAgc3Bpbl91bmxvY2soJmV2dGNo
bi0+bG9jayk7CisgICAgICAgICAgICAgICAgfQorICAgICAgICAgICAgICAg
IGxvY2FsX2lycV9lbmFibGUoKTsKICAgICAgICAgICAgICAgICBwcmludGso
ImQlZDolM2QoJWMlYyVjKSVjIiwKLSAgICAgICAgICAgICAgICAgICAgICAg
ZC0+ZG9tYWluX2lkLCBwaXJxLAotICAgICAgICAgICAgICAgICAgICAgICBl
dnRjaG5fcG9ydF9pc19wZW5kaW5nKGQsIGluZm8tPmV2dGNobikgPyAnUCcg
OiAnLScsCi0gICAgICAgICAgICAgICAgICAgICAgIGV2dGNobl9wb3J0X2lz
X21hc2tlZChkLCBpbmZvLT5ldnRjaG4pID8gJ00nIDogJy0nLAotICAgICAg
ICAgICAgICAgICAgICAgICBpbmZvLT5tYXNrZWQgPyAnTScgOiAnLScsCisg
ICAgICAgICAgICAgICAgICAgICAgIGQtPmRvbWFpbl9pZCwgcGlycSwgIi1Q
PyJbcGVuZGluZ10sCisgICAgICAgICAgICAgICAgICAgICAgICItTT8iW21h
c2tlZF0sIGluZm8tPm1hc2tlZCA/ICdNJyA6ICctJywKICAgICAgICAgICAg
ICAgICAgICAgICAgaSA8IGFjdGlvbi0+bnJfZ3Vlc3RzID8gJywnIDogJ1xu
Jyk7CiAgICAgICAgICAgICB9CiAgICAgICAgIH0KLS0tIGEveGVuL2FyY2gv
eDg2L3B2L3NoaW0uYworKysgYi94ZW4vYXJjaC94ODYvcHYvc2hpbS5jCkBA
IC02NjAsOCArNjYwLDExIEBAIHZvaWQgcHZfc2hpbV9pbmplY3RfZXZ0Y2hu
KHVuc2lnbmVkIGludAogICAgIGlmICggcG9ydF9pc192YWxpZChndWVzdCwg
cG9ydCkgKQogICAgIHsKICAgICAgICAgc3RydWN0IGV2dGNobiAqY2huID0g
ZXZ0Y2huX2Zyb21fcG9ydChndWVzdCwgcG9ydCk7CisgICAgICAgIHVuc2ln
bmVkIGxvbmcgZmxhZ3M7CiAKKyAgICAgICAgc3Bpbl9sb2NrX2lycXNhdmUo
JmNobi0+bG9jaywgZmxhZ3MpOwogICAgICAgICBldnRjaG5fcG9ydF9zZXRf
cGVuZGluZyhndWVzdCwgY2huLT5ub3RpZnlfdmNwdV9pZCwgY2huKTsKKyAg
ICAgICAgc3Bpbl91bmxvY2tfaXJxcmVzdG9yZSgmY2huLT5sb2NrLCBmbGFn
cyk7CiAgICAgfQogfQogCi0tLSBhL3hlbi9jb21tb24vZXZlbnRfMmwuYwor
KysgYi94ZW4vY29tbW9uL2V2ZW50XzJsLmMKQEAgLTYzLDggKzYzLDEwIEBA
IHN0YXRpYyB2b2lkIGV2dGNobl8ybF91bm1hc2soc3RydWN0IGRvbWEKICAg
ICB9CiB9CiAKLXN0YXRpYyBib29sIGV2dGNobl8ybF9pc19wZW5kaW5nKGNv
bnN0IHN0cnVjdCBkb21haW4gKmQsIGV2dGNobl9wb3J0X3QgcG9ydCkKK3N0
YXRpYyBib29sIGV2dGNobl8ybF9pc19wZW5kaW5nKGNvbnN0IHN0cnVjdCBk
b21haW4gKmQsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBj
b25zdCBzdHJ1Y3QgZXZ0Y2huICpldnRjaG4pCiB7CisgICAgZXZ0Y2huX3Bv
cnRfdCBwb3J0ID0gZXZ0Y2huLT5wb3J0OwogICAgIHVuc2lnbmVkIGludCBt
YXhfcG9ydHMgPSBCSVRTX1BFUl9FVlRDSE5fV09SRChkKSAqIEJJVFNfUEVS
X0VWVENITl9XT1JEKGQpOwogCiAgICAgQVNTRVJUKHBvcnQgPCBtYXhfcG9y
dHMpOwpAQCAtNzIsOCArNzQsMTAgQEAgc3RhdGljIGJvb2wgZXZ0Y2huXzJs
X2lzX3BlbmRpbmcoY29uc3QgcwogICAgICAgICAgICAgZ3Vlc3RfdGVzdF9i
aXQoZCwgcG9ydCwgJnNoYXJlZF9pbmZvKGQsIGV2dGNobl9wZW5kaW5nKSkp
OwogfQogCi1zdGF0aWMgYm9vbCBldnRjaG5fMmxfaXNfbWFza2VkKGNvbnN0
IHN0cnVjdCBkb21haW4gKmQsIGV2dGNobl9wb3J0X3QgcG9ydCkKK3N0YXRp
YyBib29sIGV2dGNobl8ybF9pc19tYXNrZWQoY29uc3Qgc3RydWN0IGRvbWFp
biAqZCwKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgY29uc3Qg
c3RydWN0IGV2dGNobiAqZXZ0Y2huKQogeworICAgIGV2dGNobl9wb3J0X3Qg
cG9ydCA9IGV2dGNobi0+cG9ydDsKICAgICB1bnNpZ25lZCBpbnQgbWF4X3Bv
cnRzID0gQklUU19QRVJfRVZUQ0hOX1dPUkQoZCkgKiBCSVRTX1BFUl9FVlRD
SE5fV09SRChkKTsKIAogICAgIEFTU0VSVChwb3J0IDwgbWF4X3BvcnRzKTsK
LS0tIGEveGVuL2NvbW1vbi9ldmVudF9jaGFubmVsLmMKKysrIGIveGVuL2Nv
bW1vbi9ldmVudF9jaGFubmVsLmMKQEAgLTE1Niw4ICsxNTYsOSBAQCBpbnQg
ZXZ0Y2huX2FsbG9jYXRlX3BvcnQoc3RydWN0IGRvbWFpbiAqCiAKICAgICBp
ZiAoIHBvcnRfaXNfdmFsaWQoZCwgcG9ydCkgKQogICAgIHsKLSAgICAgICAg
aWYgKCBldnRjaG5fZnJvbV9wb3J0KGQsIHBvcnQpLT5zdGF0ZSAhPSBFQ1Nf
RlJFRSB8fAotICAgICAgICAgICAgIGV2dGNobl9wb3J0X2lzX2J1c3koZCwg
cG9ydCkgKQorICAgICAgICBjb25zdCBzdHJ1Y3QgZXZ0Y2huICpjaG4gPSBl
dnRjaG5fZnJvbV9wb3J0KGQsIHBvcnQpOworCisgICAgICAgIGlmICggY2hu
LT5zdGF0ZSAhPSBFQ1NfRlJFRSB8fCBldnRjaG5faXNfYnVzeShkLCBjaG4p
ICkKICAgICAgICAgICAgIHJldHVybiAtRUJVU1k7CiAgICAgfQogICAgIGVs
c2UKQEAgLTc3NCw2ICs3NzUsNyBAQCB2b2lkIHNlbmRfZ3Vlc3RfdmNwdV92
aXJxKHN0cnVjdCB2Y3B1ICp2CiAgICAgdW5zaWduZWQgbG9uZyBmbGFnczsK
ICAgICBpbnQgcG9ydDsKICAgICBzdHJ1Y3QgZG9tYWluICpkOworICAgIHN0
cnVjdCBldnRjaG4gKmNobjsKIAogICAgIEFTU0VSVCghdmlycV9pc19nbG9i
YWwodmlycSkpOwogCkBAIC03ODQsNyArNzg2LDEwIEBAIHZvaWQgc2VuZF9n
dWVzdF92Y3B1X3ZpcnEoc3RydWN0IHZjcHUgKnYKICAgICAgICAgZ290byBv
dXQ7CiAKICAgICBkID0gdi0+ZG9tYWluOwotICAgIGV2dGNobl9wb3J0X3Nl
dF9wZW5kaW5nKGQsIHYtPnZjcHVfaWQsIGV2dGNobl9mcm9tX3BvcnQoZCwg
cG9ydCkpOworICAgIGNobiA9IGV2dGNobl9mcm9tX3BvcnQoZCwgcG9ydCk7
CisgICAgc3Bpbl9sb2NrKCZjaG4tPmxvY2spOworICAgIGV2dGNobl9wb3J0
X3NldF9wZW5kaW5nKGQsIHYtPnZjcHVfaWQsIGNobik7CisgICAgc3Bpbl91
bmxvY2soJmNobi0+bG9jayk7CiAKICBvdXQ6CiAgICAgc3Bpbl91bmxvY2tf
aXJxcmVzdG9yZSgmdi0+dmlycV9sb2NrLCBmbGFncyk7CkBAIC04MTMsNyAr
ODE4LDkgQEAgdm9pZCBzZW5kX2d1ZXN0X2dsb2JhbF92aXJxKHN0cnVjdCBk
b21haQogICAgICAgICBnb3RvIG91dDsKIAogICAgIGNobiA9IGV2dGNobl9m
cm9tX3BvcnQoZCwgcG9ydCk7CisgICAgc3Bpbl9sb2NrKCZjaG4tPmxvY2sp
OwogICAgIGV2dGNobl9wb3J0X3NldF9wZW5kaW5nKGQsIGNobi0+bm90aWZ5
X3ZjcHVfaWQsIGNobik7CisgICAgc3Bpbl91bmxvY2soJmNobi0+bG9jayk7
CiAKICBvdXQ6CiAgICAgc3Bpbl91bmxvY2tfaXJxcmVzdG9yZSgmdi0+dmly
cV9sb2NrLCBmbGFncyk7CkBAIC04MjMsNiArODMwLDcgQEAgdm9pZCBzZW5k
X2d1ZXN0X3BpcnEoc3RydWN0IGRvbWFpbiAqZCwgYwogewogICAgIGludCBw
b3J0OwogICAgIHN0cnVjdCBldnRjaG4gKmNobjsKKyAgICB1bnNpZ25lZCBs
b25nIGZsYWdzOwogCiAgICAgLyoKICAgICAgKiBQViBndWVzdHM6IEl0IHNo
b3VsZCBub3QgYmUgcG9zc2libGUgdG8gcmFjZSB3aXRoIF9fZXZ0Y2huX2Ns
b3NlKCkuIFRoZQpAQCAtODM3LDcgKzg0NSw5IEBAIHZvaWQgc2VuZF9ndWVz
dF9waXJxKHN0cnVjdCBkb21haW4gKmQsIGMKICAgICB9CiAKICAgICBjaG4g
PSBldnRjaG5fZnJvbV9wb3J0KGQsIHBvcnQpOworICAgIHNwaW5fbG9ja19p
cnFzYXZlKCZjaG4tPmxvY2ssIGZsYWdzKTsKICAgICBldnRjaG5fcG9ydF9z
ZXRfcGVuZGluZyhkLCBjaG4tPm5vdGlmeV92Y3B1X2lkLCBjaG4pOworICAg
IHNwaW5fdW5sb2NrX2lycXJlc3RvcmUoJmNobi0+bG9jaywgZmxhZ3MpOwog
fQogCiBzdGF0aWMgc3RydWN0IGRvbWFpbiAqZ2xvYmFsX3ZpcnFfaGFuZGxl
cnNbTlJfVklSUVNdIF9fcmVhZF9tb3N0bHk7CkBAIC0xMDM0LDEyICsxMDQ0
LDE1IEBAIGludCBldnRjaG5fdW5tYXNrKHVuc2lnbmVkIGludCBwb3J0KQog
ewogICAgIHN0cnVjdCBkb21haW4gKmQgPSBjdXJyZW50LT5kb21haW47CiAg
ICAgc3RydWN0IGV2dGNobiAqZXZ0Y2huOworICAgIHVuc2lnbmVkIGxvbmcg
ZmxhZ3M7CiAKICAgICBpZiAoIHVubGlrZWx5KCFwb3J0X2lzX3ZhbGlkKGQs
IHBvcnQpKSApCiAgICAgICAgIHJldHVybiAtRUlOVkFMOwogCiAgICAgZXZ0
Y2huID0gZXZ0Y2huX2Zyb21fcG9ydChkLCBwb3J0KTsKKyAgICBzcGluX2xv
Y2tfaXJxc2F2ZSgmZXZ0Y2huLT5sb2NrLCBmbGFncyk7CiAgICAgZXZ0Y2hu
X3BvcnRfdW5tYXNrKGQsIGV2dGNobik7CisgICAgc3Bpbl91bmxvY2tfaXJx
cmVzdG9yZSgmZXZ0Y2huLT5sb2NrLCBmbGFncyk7CiAKICAgICByZXR1cm4g
MDsKIH0KQEAgLTE0NDksOCArMTQ2Miw4IEBAIHN0YXRpYyB2b2lkIGRvbWFp
bl9kdW1wX2V2dGNobl9pbmZvKHN0cnUKIAogICAgICAgICBwcmludGsoIiAg
ICAlNHUgWyVkLyVkLyIsCiAgICAgICAgICAgICAgICBwb3J0LAotICAgICAg
ICAgICAgICAgZXZ0Y2huX3BvcnRfaXNfcGVuZGluZyhkLCBwb3J0KSwKLSAg
ICAgICAgICAgICAgIGV2dGNobl9wb3J0X2lzX21hc2tlZChkLCBwb3J0KSk7
CisgICAgICAgICAgICAgICBldnRjaG5faXNfcGVuZGluZyhkLCBjaG4pLAor
ICAgICAgICAgICAgICAgZXZ0Y2huX2lzX21hc2tlZChkLCBjaG4pKTsKICAg
ICAgICAgZXZ0Y2huX3BvcnRfcHJpbnRfc3RhdGUoZCwgY2huKTsKICAgICAg
ICAgcHJpbnRrKCJdOiBzPSVkIG49JWQgeD0lZCIsCiAgICAgICAgICAgICAg
ICBjaG4tPnN0YXRlLCBjaG4tPm5vdGlmeV92Y3B1X2lkLCBjaG4tPnhlbl9j
b25zdW1lcik7Ci0tLSBhL3hlbi9jb21tb24vZXZlbnRfZmlmby5jCisrKyBi
L3hlbi9jb21tb24vZXZlbnRfZmlmby5jCkBAIC0yOTYsMjMgKzI5NiwyNiBA
QCBzdGF0aWMgdm9pZCBldnRjaG5fZmlmb191bm1hc2soc3RydWN0IGRvCiAg
ICAgICAgIGV2dGNobl9maWZvX3NldF9wZW5kaW5nKHYsIGV2dGNobik7CiB9
CiAKLXN0YXRpYyBib29sIGV2dGNobl9maWZvX2lzX3BlbmRpbmcoY29uc3Qg
c3RydWN0IGRvbWFpbiAqZCwgZXZ0Y2huX3BvcnRfdCBwb3J0KQorc3RhdGlj
IGJvb2wgZXZ0Y2huX2ZpZm9faXNfcGVuZGluZyhjb25zdCBzdHJ1Y3QgZG9t
YWluICpkLAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBj
b25zdCBzdHJ1Y3QgZXZ0Y2huICpldnRjaG4pCiB7Ci0gICAgY29uc3QgZXZl
bnRfd29yZF90ICp3b3JkID0gZXZ0Y2huX2ZpZm9fd29yZF9mcm9tX3BvcnQo
ZCwgcG9ydCk7CisgICAgY29uc3QgZXZlbnRfd29yZF90ICp3b3JkID0gZXZ0
Y2huX2ZpZm9fd29yZF9mcm9tX3BvcnQoZCwgZXZ0Y2huLT5wb3J0KTsKIAog
ICAgIHJldHVybiB3b3JkICYmIGd1ZXN0X3Rlc3RfYml0KGQsIEVWVENITl9G
SUZPX1BFTkRJTkcsIHdvcmQpOwogfQogCi1zdGF0aWMgYm9vbF90IGV2dGNo
bl9maWZvX2lzX21hc2tlZChjb25zdCBzdHJ1Y3QgZG9tYWluICpkLCBldnRj
aG5fcG9ydF90IHBvcnQpCitzdGF0aWMgYm9vbF90IGV2dGNobl9maWZvX2lz
X21hc2tlZChjb25zdCBzdHJ1Y3QgZG9tYWluICpkLAorICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgY29uc3Qgc3RydWN0IGV2dGNobiAq
ZXZ0Y2huKQogewotICAgIGNvbnN0IGV2ZW50X3dvcmRfdCAqd29yZCA9IGV2
dGNobl9maWZvX3dvcmRfZnJvbV9wb3J0KGQsIHBvcnQpOworICAgIGNvbnN0
IGV2ZW50X3dvcmRfdCAqd29yZCA9IGV2dGNobl9maWZvX3dvcmRfZnJvbV9w
b3J0KGQsIGV2dGNobi0+cG9ydCk7CiAKICAgICByZXR1cm4gIXdvcmQgfHwg
Z3Vlc3RfdGVzdF9iaXQoZCwgRVZUQ0hOX0ZJRk9fTUFTS0VELCB3b3JkKTsK
IH0KIAotc3RhdGljIGJvb2xfdCBldnRjaG5fZmlmb19pc19idXN5KGNvbnN0
IHN0cnVjdCBkb21haW4gKmQsIGV2dGNobl9wb3J0X3QgcG9ydCkKK3N0YXRp
YyBib29sX3QgZXZ0Y2huX2ZpZm9faXNfYnVzeShjb25zdCBzdHJ1Y3QgZG9t
YWluICpkLAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGNv
bnN0IHN0cnVjdCBldnRjaG4gKmV2dGNobikKIHsKLSAgICBjb25zdCBldmVu
dF93b3JkX3QgKndvcmQgPSBldnRjaG5fZmlmb193b3JkX2Zyb21fcG9ydChk
LCBwb3J0KTsKKyAgICBjb25zdCBldmVudF93b3JkX3QgKndvcmQgPSBldnRj
aG5fZmlmb193b3JkX2Zyb21fcG9ydChkLCBldnRjaG4tPnBvcnQpOwogCiAg
ICAgcmV0dXJuIHdvcmQgJiYgZ3Vlc3RfdGVzdF9iaXQoZCwgRVZUQ0hOX0ZJ
Rk9fTElOS0VELCB3b3JkKTsKIH0KLS0tIGEveGVuL2luY2x1ZGUvYXNtLXg4
Ni9ldmVudC5oCisrKyBiL3hlbi9pbmNsdWRlL2FzbS14ODYvZXZlbnQuaApA
QCAtNDcsNCArNDcsMTAgQEAgc3RhdGljIGlubGluZSBib29sIGFyY2hfdmly
cV9pc19nbG9iYWwodQogICAgIHJldHVybiB0cnVlOwogfQogCisjaWZkZWYg
Q09ORklHX1BWX1NISU0KKyMgaW5jbHVkZSA8YXNtL3B2L3NoaW0uaD4KKyMg
ZGVmaW5lIGFyY2hfZXZ0Y2huX2lzX3NwZWNpYWwoY2huKSBcCisgICAgICAg
ICAgICAgKHB2X3NoaW0gJiYgKGNobiktPnBvcnQgJiYgKGNobiktPnN0YXRl
ID09IEVDU19SRVNFUlZFRCkKKyNlbmRpZgorCiAjZW5kaWYKLS0tIGEveGVu
L2luY2x1ZGUveGVuL2V2ZW50LmgKKysrIGIveGVuL2luY2x1ZGUveGVuL2V2
ZW50LmgKQEAgLTEzMyw2ICsxMzMsMjQgQEAgc3RhdGljIGlubGluZSBzdHJ1
Y3QgZXZ0Y2huICpldnRjaG5fZnJvbQogICAgIHJldHVybiBidWNrZXRfZnJv
bV9wb3J0KGQsIHApICsgKHAgJSBFVlRDSE5TX1BFUl9CVUNLRVQpOwogfQog
CisvKgorICogInVzYWJsZSIgYXMgaW4gImJ5IGEgZ3Vlc3QiLCBpLmUuIFhl
biBjb25zdW1lZCBjaGFubmVscyBhcmUgYXNzdW1lZCB0byBiZQorICogdGFr
ZW4gY2FyZSBvZiBzZXBhcmF0ZWx5IHdoZXJlIHVzZWQgZm9yIFhlbidzIGlu
dGVybmFsIHB1cnBvc2VzLgorICovCitzdGF0aWMgYm9vbCBldnRjaG5fdXNh
YmxlKGNvbnN0IHN0cnVjdCBldnRjaG4gKmV2dGNobikKK3sKKyAgICBpZiAo
IGV2dGNobi0+eGVuX2NvbnN1bWVyICkKKyAgICAgICAgcmV0dXJuIGZhbHNl
OworCisjaWZkZWYgYXJjaF9ldnRjaG5faXNfc3BlY2lhbAorICAgIGlmICgg
YXJjaF9ldnRjaG5faXNfc3BlY2lhbChldnRjaG4pICkKKyAgICAgICAgcmV0
dXJuIHRydWU7CisjZW5kaWYKKworICAgIEJVSUxEX0JVR19PTihFQ1NfRlJF
RSA+IEVDU19SRVNFUlZFRCk7CisgICAgcmV0dXJuIGV2dGNobi0+c3RhdGUg
PiBFQ1NfUkVTRVJWRUQ7Cit9CisKIC8qIFdhaXQgb24gYSBYZW4tYXR0YWNo
ZWQgZXZlbnQgY2hhbm5lbC4gKi8KICNkZWZpbmUgd2FpdF9vbl94ZW5fZXZl
bnRfY2hhbm5lbChwb3J0LCBjb25kaXRpb24pICAgICAgICAgICAgICAgICAg
ICAgIFwKICAgICBkbyB7ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKQEAgLTE2NSwx
OSArMTgzLDI0IEBAIGludCBldnRjaG5fcmVzZXQoc3RydWN0IGRvbWFpbiAq
ZCk7CiAKIC8qCiAgKiBMb3ctbGV2ZWwgZXZlbnQgY2hhbm5lbCBwb3J0IG9w
cy4KKyAqCisgKiBBbGwgaG9va3MgaGF2ZSB0byBiZSBjYWxsZWQgd2l0aCBh
IGxvY2sgaGVsZCB3aGljaCBwcmV2ZW50cyB0aGUgY2hhbm5lbAorICogZnJv
bSBjaGFuZ2luZyBzdGF0ZS4gVGhpcyBtYXkgYmUgdGhlIGRvbWFpbiBldmVu
dCBsb2NrLCB0aGUgcGVyLWNoYW5uZWwKKyAqIGxvY2ssIG9yIGluIHRoZSBj
YXNlIG9mIHNlbmRpbmcgaW50ZXJkb21haW4gZXZlbnRzIGFsc28gdGhlIG90
aGVyIHNpZGUncworICogcGVyLWNoYW5uZWwgbG9jay4gRXhjZXB0aW9ucyBh
cHBseSBpbiBjZXJ0YWluIGNhc2VzIGZvciB0aGUgUFYgc2hpbS4KICAqLwog
c3RydWN0IGV2dGNobl9wb3J0X29wcyB7CiAgICAgdm9pZCAoKmluaXQpKHN0
cnVjdCBkb21haW4gKmQsIHN0cnVjdCBldnRjaG4gKmV2dGNobik7CiAgICAg
dm9pZCAoKnNldF9wZW5kaW5nKShzdHJ1Y3QgdmNwdSAqdiwgc3RydWN0IGV2
dGNobiAqZXZ0Y2huKTsKICAgICB2b2lkICgqY2xlYXJfcGVuZGluZykoc3Ry
dWN0IGRvbWFpbiAqZCwgc3RydWN0IGV2dGNobiAqZXZ0Y2huKTsKICAgICB2
b2lkICgqdW5tYXNrKShzdHJ1Y3QgZG9tYWluICpkLCBzdHJ1Y3QgZXZ0Y2hu
ICpldnRjaG4pOwotICAgIGJvb2wgKCppc19wZW5kaW5nKShjb25zdCBzdHJ1
Y3QgZG9tYWluICpkLCBldnRjaG5fcG9ydF90IHBvcnQpOwotICAgIGJvb2wg
KCppc19tYXNrZWQpKGNvbnN0IHN0cnVjdCBkb21haW4gKmQsIGV2dGNobl9w
b3J0X3QgcG9ydCk7CisgICAgYm9vbCAoKmlzX3BlbmRpbmcpKGNvbnN0IHN0
cnVjdCBkb21haW4gKmQsIGNvbnN0IHN0cnVjdCBldnRjaG4gKmV2dGNobik7
CisgICAgYm9vbCAoKmlzX21hc2tlZCkoY29uc3Qgc3RydWN0IGRvbWFpbiAq
ZCwgY29uc3Qgc3RydWN0IGV2dGNobiAqZXZ0Y2huKTsKICAgICAvKgogICAg
ICAqIElzIHRoZSBwb3J0IHVuYXZhaWxhYmxlIGJlY2F1c2UgaXQncyBzdGls
bCBiZWluZyBjbGVhbmVkIHVwCiAgICAgICogYWZ0ZXIgYmVpbmcgY2xvc2Vk
PwogICAgICAqLwotICAgIGJvb2wgKCppc19idXN5KShjb25zdCBzdHJ1Y3Qg
ZG9tYWluICpkLCBldnRjaG5fcG9ydF90IHBvcnQpOworICAgIGJvb2wgKCpp
c19idXN5KShjb25zdCBzdHJ1Y3QgZG9tYWluICpkLCBjb25zdCBzdHJ1Y3Qg
ZXZ0Y2huICpldnRjaG4pOwogICAgIGludCAoKnNldF9wcmlvcml0eSkoc3Ry
dWN0IGRvbWFpbiAqZCwgc3RydWN0IGV2dGNobiAqZXZ0Y2huLAogICAgICAg
ICAgICAgICAgICAgICAgICAgdW5zaWduZWQgaW50IHByaW9yaXR5KTsKICAg
ICB2b2lkICgqcHJpbnRfc3RhdGUpKHN0cnVjdCBkb21haW4gKmQsIGNvbnN0
IHN0cnVjdCBldnRjaG4gKmV2dGNobik7CkBAIC0xOTMsMzggKzIxNiw2NyBA
QCBzdGF0aWMgaW5saW5lIHZvaWQgZXZ0Y2huX3BvcnRfc2V0X3BlbmRpCiAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgdW5z
aWduZWQgaW50IHZjcHVfaWQsCiAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgc3RydWN0IGV2dGNobiAqZXZ0Y2huKQogewot
ICAgIGQtPmV2dGNobl9wb3J0X29wcy0+c2V0X3BlbmRpbmcoZC0+dmNwdVt2
Y3B1X2lkXSwgZXZ0Y2huKTsKKyAgICBpZiAoIGV2dGNobl91c2FibGUoZXZ0
Y2huKSApCisgICAgICAgIGQtPmV2dGNobl9wb3J0X29wcy0+c2V0X3BlbmRp
bmcoZC0+dmNwdVt2Y3B1X2lkXSwgZXZ0Y2huKTsKIH0KIAogc3RhdGljIGlu
bGluZSB2b2lkIGV2dGNobl9wb3J0X2NsZWFyX3BlbmRpbmcoc3RydWN0IGRv
bWFpbiAqZCwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIHN0cnVjdCBldnRjaG4gKmV2dGNobikKIHsKLSAgICBkLT5l
dnRjaG5fcG9ydF9vcHMtPmNsZWFyX3BlbmRpbmcoZCwgZXZ0Y2huKTsKKyAg
ICBpZiAoIGV2dGNobl91c2FibGUoZXZ0Y2huKSApCisgICAgICAgIGQtPmV2
dGNobl9wb3J0X29wcy0+Y2xlYXJfcGVuZGluZyhkLCBldnRjaG4pOwogfQog
CiBzdGF0aWMgaW5saW5lIHZvaWQgZXZ0Y2huX3BvcnRfdW5tYXNrKHN0cnVj
dCBkb21haW4gKmQsCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIHN0cnVjdCBldnRjaG4gKmV2dGNobikKIHsKLSAgICBkLT5ldnRj
aG5fcG9ydF9vcHMtPnVubWFzayhkLCBldnRjaG4pOworICAgIGlmICggZXZ0
Y2huX3VzYWJsZShldnRjaG4pICkKKyAgICAgICAgZC0+ZXZ0Y2huX3BvcnRf
b3BzLT51bm1hc2soZCwgZXZ0Y2huKTsKIH0KIAotc3RhdGljIGlubGluZSBi
b29sIGV2dGNobl9wb3J0X2lzX3BlbmRpbmcoY29uc3Qgc3RydWN0IGRvbWFp
biAqZCwKLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIGV2dGNobl9wb3J0X3QgcG9ydCkKK3N0YXRpYyBpbmxpbmUgYm9vbCBl
dnRjaG5faXNfcGVuZGluZyhjb25zdCBzdHJ1Y3QgZG9tYWluICpkLAorICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGNvbnN0IHN0cnVj
dCBldnRjaG4gKmV2dGNobikKIHsKLSAgICByZXR1cm4gZC0+ZXZ0Y2huX3Bv
cnRfb3BzLT5pc19wZW5kaW5nKGQsIHBvcnQpOworICAgIHJldHVybiBldnRj
aG5fdXNhYmxlKGV2dGNobikgJiYgZC0+ZXZ0Y2huX3BvcnRfb3BzLT5pc19w
ZW5kaW5nKGQsIGV2dGNobik7CiB9CiAKLXN0YXRpYyBpbmxpbmUgYm9vbCBl
dnRjaG5fcG9ydF9pc19tYXNrZWQoY29uc3Qgc3RydWN0IGRvbWFpbiAqZCwK
LSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgZXZ0
Y2huX3BvcnRfdCBwb3J0KQorc3RhdGljIGlubGluZSBib29sIGV2dGNobl9w
b3J0X2lzX3BlbmRpbmcoc3RydWN0IGRvbWFpbiAqZCwgZXZ0Y2huX3BvcnRf
dCBwb3J0KQogewotICAgIHJldHVybiBkLT5ldnRjaG5fcG9ydF9vcHMtPmlz
X21hc2tlZChkLCBwb3J0KTsKKyAgICBzdHJ1Y3QgZXZ0Y2huICpldnRjaG4g
PSBldnRjaG5fZnJvbV9wb3J0KGQsIHBvcnQpOworICAgIGJvb2wgcmM7Cisg
ICAgdW5zaWduZWQgbG9uZyBmbGFnczsKKworICAgIHNwaW5fbG9ja19pcnFz
YXZlKCZldnRjaG4tPmxvY2ssIGZsYWdzKTsKKyAgICByYyA9IGV2dGNobl9p
c19wZW5kaW5nKGQsIGV2dGNobik7CisgICAgc3Bpbl91bmxvY2tfaXJxcmVz
dG9yZSgmZXZ0Y2huLT5sb2NrLCBmbGFncyk7CisKKyAgICByZXR1cm4gcmM7
Cit9CisKK3N0YXRpYyBpbmxpbmUgYm9vbCBldnRjaG5faXNfbWFza2VkKGNv
bnN0IHN0cnVjdCBkb21haW4gKmQsCisgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBjb25zdCBzdHJ1Y3QgZXZ0Y2huICpldnRjaG4pCit7
CisgICAgcmV0dXJuICFldnRjaG5fdXNhYmxlKGV2dGNobikgfHwgZC0+ZXZ0
Y2huX3BvcnRfb3BzLT5pc19tYXNrZWQoZCwgZXZ0Y2huKTsKK30KKworc3Rh
dGljIGlubGluZSBib29sIGV2dGNobl9wb3J0X2lzX21hc2tlZChzdHJ1Y3Qg
ZG9tYWluICpkLCBldnRjaG5fcG9ydF90IHBvcnQpCit7CisgICAgc3RydWN0
IGV2dGNobiAqZXZ0Y2huID0gZXZ0Y2huX2Zyb21fcG9ydChkLCBwb3J0KTsK
KyAgICBib29sIHJjOworICAgIHVuc2lnbmVkIGxvbmcgZmxhZ3M7CisKKyAg
ICBzcGluX2xvY2tfaXJxc2F2ZSgmZXZ0Y2huLT5sb2NrLCBmbGFncyk7Cisg
ICAgcmMgPSBldnRjaG5faXNfbWFza2VkKGQsIGV2dGNobik7CisgICAgc3Bp
bl91bmxvY2tfaXJxcmVzdG9yZSgmZXZ0Y2huLT5sb2NrLCBmbGFncyk7CisK
KyAgICByZXR1cm4gcmM7CiB9CiAKLXN0YXRpYyBpbmxpbmUgYm9vbCBldnRj
aG5fcG9ydF9pc19idXN5KGNvbnN0IHN0cnVjdCBkb21haW4gKmQsCi0gICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBldnRjaG5fcG9y
dF90IHBvcnQpCitzdGF0aWMgaW5saW5lIGJvb2wgZXZ0Y2huX2lzX2J1c3ko
Y29uc3Qgc3RydWN0IGRvbWFpbiAqZCwKKyAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBjb25zdCBzdHJ1Y3QgZXZ0Y2huICpldnRjaG4pCiB7
CiAgICAgcmV0dXJuIGQtPmV2dGNobl9wb3J0X29wcy0+aXNfYnVzeSAmJgot
ICAgICAgICAgICBkLT5ldnRjaG5fcG9ydF9vcHMtPmlzX2J1c3koZCwgcG9y
dCk7CisgICAgICAgICAgIGQtPmV2dGNobl9wb3J0X29wcy0+aXNfYnVzeShk
LCBldnRjaG4pOwogfQogCiBzdGF0aWMgaW5saW5lIGludCBldnRjaG5fcG9y
dF9zZXRfcHJpb3JpdHkoc3RydWN0IGRvbWFpbiAqZCwKQEAgLTIzMyw2ICsy
ODUsOCBAQCBzdGF0aWMgaW5saW5lIGludCBldnRjaG5fcG9ydF9zZXRfcHJp
b3JpCiB7CiAgICAgaWYgKCAhZC0+ZXZ0Y2huX3BvcnRfb3BzLT5zZXRfcHJp
b3JpdHkgKQogICAgICAgICByZXR1cm4gLUVOT1NZUzsKKyAgICBpZiAoICFl
dnRjaG5fdXNhYmxlKGV2dGNobikgKQorICAgICAgICByZXR1cm4gLUVBQ0NF
UzsKICAgICByZXR1cm4gZC0+ZXZ0Y2huX3BvcnRfb3BzLT5zZXRfcHJpb3Jp
dHkoZCwgZXZ0Y2huLCBwcmlvcml0eSk7CiB9CiAK

--=separator
Content-Type: application/octet-stream; name="xsa343/xsa343-4.10-1.patch"
Content-Disposition: attachment; filename="xsa343/xsa343-4.10-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBldnRjaG46IGV2dGNobl9yZXNldCgpIG1heSBub3Qgc3VjY2VlZCB3aXRo
IHN0aWxsLW9wZW4gcG9ydHMKCldoaWxlIHRoZSBmdW5jdGlvbiBjbG9zZXMg
YWxsIHBvcnRzLCBpdCBkb2VzIHNvIHdpdGhvdXQgaG9sZGluZyBhbnkKbG9j
aywgYW5kIGhlbmNlIHJhY2luZyByZXF1ZXN0cyBtYXkgYmUgaXNzdWVkIGNh
dXNpbmcgbmV3IHBvcnRzIHRvIGdldApvcGVuZWQuIFRoaXMgd291bGQgaGF2
ZSBiZWVuIHByb2JsZW1hdGljIGluIHBhcnRpY3VsYXIgaWYgc3VjaCBhIG5l
d2x5Cm9wZW5lZCBwb3J0IGhhZCBhIHBvcnQgbnVtYmVyIGFib3ZlIHRoZSBu
ZXcgaW1wbGVtZW50YXRpb24gbGltaXQgKGkuZS4Kd2hlbiBzd2l0Y2hpbmcg
ZnJvbSBGSUZPIHRvIDItbGV2ZWwpIGFmdGVyIHRoZSByZXNldCwgYXMgcHJp
b3IgdG8KImV2dGNobjogcmVsYXggcG9ydF9pc192YWxpZCgpIiB0aGlzIGNv
dWxkIGhhdmUgbGVkIHRvIGUuZy4KZXZ0Y2huX2Nsb3NlKCkncyAiQlVHX09O
KCFwb3J0X2lzX3ZhbGlkKGQyLCBwb3J0MikpIiB0byB0cmlnZ2VyLgoKSW50
cm9kdWNlIGEgY291bnRlciBvZiBhY3RpdmUgcG9ydHMgYW5kIGNoZWNrIHRo
YXQgaXQncyAoc3RpbGwpIG5vCmxhcmdlciB0aGVuIHRoZSBudW1iZXIgb2Yg
WGVuIGludGVybmFsbHkgdXNlZCBvbmVzIGFmdGVyIG9idGFpbmluZyB0aGUK
bmVjZXNzYXJ5IGxvY2sgaW4gZXZ0Y2huX3Jlc2V0KCkuCgpBcyB0byB0aGUg
YWNjZXNzIG1vZGVsIG9mIHRoZSBuZXcge2FjdGl2ZSx4ZW59X2V2dGNobnMg
ZmllbGRzIC0gd2hpbGUKYWxsIHdyaXRlcyBnZXQgZG9uZSB1c2luZyB3cml0
ZV9hdG9taWMoKSwgcmVhZHMgb3VnaHQgdG8gdXNlCnJlYWRfYXRvbWljKCkg
b25seSB3aGVuIG91dHNpZGUgb2YgYSBzdWl0YWJseSBsb2NrZWQgcmVnaW9u
LgoKTm90ZSB0aGF0IGFzIG9mIG5vdyBldnRjaG5fYmluZF92aXJxKCkgYW5k
IGV2dGNobl9iaW5kX2lwaSgpIGRvbid0IGhhdmUKYSBuZWVkIHRvIGNhbGwg
Y2hlY2tfZnJlZV9wb3J0KCkuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM0My4K
ClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNv
bT4KUmV2aWV3ZWQtYnk6IFN0ZWZhbm8gU3RhYmVsbGluaSA8c3N0YWJlbGxp
bmlAa2VybmVsLm9yZz4KUmV2aWV3ZWQtYnk6IEp1bGllbiBHcmFsbCA8amdy
YWxsQGFtYXpvbi5jb20+CgotLS0gYS94ZW4vY29tbW9uL2V2ZW50X2NoYW5u
ZWwuYworKysgYi94ZW4vY29tbW9uL2V2ZW50X2NoYW5uZWwuYwpAQCAtMTk1
LDYgKzE5NSw4IEBAIGludCBldnRjaG5fYWxsb2NhdGVfcG9ydChzdHJ1Y3Qg
ZG9tYWluICoKICAgICAgICAgd3JpdGVfYXRvbWljKCZkLT52YWxpZF9ldnRj
aG5zLCBkLT52YWxpZF9ldnRjaG5zICsgRVZUQ0hOU19QRVJfQlVDS0VUKTsK
ICAgICB9CiAKKyAgICB3cml0ZV9hdG9taWMoJmQtPmFjdGl2ZV9ldnRjaG5z
LCBkLT5hY3RpdmVfZXZ0Y2hucyArIDEpOworCiAgICAgcmV0dXJuIDA7CiB9
CiAKQEAgLTIxOCwxMSArMjIwLDI2IEBAIHN0YXRpYyBpbnQgZ2V0X2ZyZWVf
cG9ydChzdHJ1Y3QgZG9tYWluICoKICAgICByZXR1cm4gLUVOT1NQQzsKIH0K
IAorLyoKKyAqIENoZWNrIHdoZXRoZXIgYSBwb3J0IGlzIHN0aWxsIG1hcmtl
ZCBmcmVlLCBhbmQgaWYgc28gdXBkYXRlIHRoZSBkb21haW4KKyAqIGNvdW50
ZXIgYWNjb3JkaW5nbHkuICBUbyBiZSB1c2VkIG9uIGZ1bmN0aW9uIGV4aXQg
cGF0aHMuCisgKi8KK3N0YXRpYyB2b2lkIGNoZWNrX2ZyZWVfcG9ydChzdHJ1
Y3QgZG9tYWluICpkLCBldnRjaG5fcG9ydF90IHBvcnQpCit7CisgICAgaWYg
KCBwb3J0X2lzX3ZhbGlkKGQsIHBvcnQpICYmCisgICAgICAgICBldnRjaG5f
ZnJvbV9wb3J0KGQsIHBvcnQpLT5zdGF0ZSA9PSBFQ1NfRlJFRSApCisgICAg
ICAgIHdyaXRlX2F0b21pYygmZC0+YWN0aXZlX2V2dGNobnMsIGQtPmFjdGl2
ZV9ldnRjaG5zIC0gMSk7Cit9CisKIHZvaWQgZXZ0Y2huX2ZyZWUoc3RydWN0
IGRvbWFpbiAqZCwgc3RydWN0IGV2dGNobiAqY2huKQogewogICAgIC8qIENs
ZWFyIHBlbmRpbmcgZXZlbnQgdG8gYXZvaWQgdW5leHBlY3RlZCBiZWhhdmlv
ciBvbiByZS1iaW5kLiAqLwogICAgIGV2dGNobl9wb3J0X2NsZWFyX3BlbmRp
bmcoZCwgY2huKTsKIAorICAgIGlmICggY29uc3VtZXJfaXNfeGVuKGNobikg
KQorICAgICAgICB3cml0ZV9hdG9taWMoJmQtPnhlbl9ldnRjaG5zLCBkLT54
ZW5fZXZ0Y2hucyAtIDEpOworICAgIHdyaXRlX2F0b21pYygmZC0+YWN0aXZl
X2V2dGNobnMsIGQtPmFjdGl2ZV9ldnRjaG5zIC0gMSk7CisKICAgICAvKiBS
ZXNldCBiaW5kaW5nIHRvIHZjcHUwIHdoZW4gdGhlIGNoYW5uZWwgaXMgZnJl
ZWQuICovCiAgICAgY2huLT5zdGF0ZSAgICAgICAgICA9IEVDU19GUkVFOwog
ICAgIGNobi0+bm90aWZ5X3ZjcHVfaWQgPSAwOwpAQCAtMjY1LDYgKzI4Miw3
IEBAIHN0YXRpYyBsb25nIGV2dGNobl9hbGxvY191bmJvdW5kKGV2dGNobl8K
ICAgICBhbGxvYy0+cG9ydCA9IHBvcnQ7CiAKICBvdXQ6CisgICAgY2hlY2tf
ZnJlZV9wb3J0KGQsIHBvcnQpOwogICAgIHNwaW5fdW5sb2NrKCZkLT5ldmVu
dF9sb2NrKTsKICAgICByY3VfdW5sb2NrX2RvbWFpbihkKTsKIApAQCAtMzU4
LDYgKzM3Niw3IEBAIHN0YXRpYyBsb25nIGV2dGNobl9iaW5kX2ludGVyZG9t
YWluKGV2dGMKICAgICBiaW5kLT5sb2NhbF9wb3J0ID0gbHBvcnQ7CiAKICBv
dXQ6CisgICAgY2hlY2tfZnJlZV9wb3J0KGxkLCBscG9ydCk7CiAgICAgc3Bp
bl91bmxvY2soJmxkLT5ldmVudF9sb2NrKTsKICAgICBpZiAoIGxkICE9IHJk
ICkKICAgICAgICAgc3Bpbl91bmxvY2soJnJkLT5ldmVudF9sb2NrKTsKQEAg
LTQ5MSw3ICs1MTAsNyBAQCBzdGF0aWMgbG9uZyBldnRjaG5fYmluZF9waXJx
KGV2dGNobl9iaW5kCiAgICAgc3RydWN0IGRvbWFpbiAqZCA9IGN1cnJlbnQt
PmRvbWFpbjsKICAgICBzdHJ1Y3QgdmNwdSAgICp2ID0gZC0+dmNwdVswXTsK
ICAgICBzdHJ1Y3QgcGlycSAgICppbmZvOwotICAgIGludCAgICAgICAgICAg
IHBvcnQsIHBpcnEgPSBiaW5kLT5waXJxOworICAgIGludCAgICAgICAgICAg
IHBvcnQgPSAwLCBwaXJxID0gYmluZC0+cGlycTsKICAgICBsb25nICAgICAg
ICAgICByYzsKIAogICAgIGlmICggKHBpcnEgPCAwKSB8fCAocGlycSA+PSBk
LT5ucl9waXJxcykgKQpAQCAtNTM5LDYgKzU1OCw3IEBAIHN0YXRpYyBsb25n
IGV2dGNobl9iaW5kX3BpcnEoZXZ0Y2huX2JpbmQKICAgICBhcmNoX2V2dGNo
bl9iaW5kX3BpcnEoZCwgcGlycSk7CiAKICBvdXQ6CisgICAgY2hlY2tfZnJl
ZV9wb3J0KGQsIHBvcnQpOwogICAgIHNwaW5fdW5sb2NrKCZkLT5ldmVudF9s
b2NrKTsKIAogICAgIHJldHVybiByYzsKQEAgLTEwMTMsMTAgKzEwMzMsMTAg
QEAgaW50IGV2dGNobl91bm1hc2sodW5zaWduZWQgaW50IHBvcnQpCiAgICAg
cmV0dXJuIDA7CiB9CiAKLQogaW50IGV2dGNobl9yZXNldChzdHJ1Y3QgZG9t
YWluICpkKQogewogICAgIHVuc2lnbmVkIGludCBpOworICAgIGludCByYyA9
IDA7CiAKICAgICBpZiAoIGQgIT0gY3VycmVudC0+ZG9tYWluICYmICFkLT5j
b250cm9sbGVyX3BhdXNlX2NvdW50ICkKICAgICAgICAgcmV0dXJuIC1FSU5W
QUw7CkBAIC0xMDI2LDcgKzEwNDYsOSBAQCBpbnQgZXZ0Y2huX3Jlc2V0KHN0
cnVjdCBkb21haW4gKmQpCiAKICAgICBzcGluX2xvY2soJmQtPmV2ZW50X2xv
Y2spOwogCi0gICAgaWYgKCBkLT5ldnRjaG5fZmlmbyApCisgICAgaWYgKCBk
LT5hY3RpdmVfZXZ0Y2hucyA+IGQtPnhlbl9ldnRjaG5zICkKKyAgICAgICAg
cmMgPSAtRUFHQUlOOworICAgIGVsc2UgaWYgKCBkLT5ldnRjaG5fZmlmbyAp
CiAgICAgewogICAgICAgICAvKiBTd2l0Y2hpbmcgYmFjayB0byAyLWxldmVs
IEFCSS4gKi8KICAgICAgICAgZXZ0Y2huX2ZpZm9fZGVzdHJveShkKTsKQEAg
LTEwMzUsNyArMTA1Nyw3IEBAIGludCBldnRjaG5fcmVzZXQoc3RydWN0IGRv
bWFpbiAqZCkKIAogICAgIHNwaW5fdW5sb2NrKCZkLT5ldmVudF9sb2NrKTsK
IAotICAgIHJldHVybiAwOworICAgIHJldHVybiByYzsKIH0KIAogc3RhdGlj
IGxvbmcgZXZ0Y2huX3NldF9wcmlvcml0eShjb25zdCBzdHJ1Y3QgZXZ0Y2hu
X3NldF9wcmlvcml0eSAqc2V0X3ByaW9yaXR5KQpAQCAtMTIyMSwxMCArMTI0
Myw5IEBAIGludCBhbGxvY191bmJvdW5kX3hlbl9ldmVudF9jaGFubmVsKAog
CiAgICAgc3Bpbl9sb2NrKCZsZC0+ZXZlbnRfbG9jayk7CiAKLSAgICByYyA9
IGdldF9mcmVlX3BvcnQobGQpOworICAgIHBvcnQgPSByYyA9IGdldF9mcmVl
X3BvcnQobGQpOwogICAgIGlmICggcmMgPCAwICkKICAgICAgICAgZ290byBv
dXQ7Ci0gICAgcG9ydCA9IHJjOwogICAgIGNobiA9IGV2dGNobl9mcm9tX3Bv
cnQobGQsIHBvcnQpOwogCiAgICAgcmMgPSB4c21fZXZ0Y2huX3VuYm91bmQo
WFNNX1RBUkdFVCwgbGQsIGNobiwgcmVtb3RlX2RvbWlkKTsKQEAgLTEyNDAs
NyArMTI2MSwxMCBAQCBpbnQgYWxsb2NfdW5ib3VuZF94ZW5fZXZlbnRfY2hh
bm5lbCgKIAogICAgIHNwaW5fdW5sb2NrKCZjaG4tPmxvY2spOwogCisgICAg
d3JpdGVfYXRvbWljKCZsZC0+eGVuX2V2dGNobnMsIGxkLT54ZW5fZXZ0Y2hu
cyArIDEpOworCiAgb3V0OgorICAgIGNoZWNrX2ZyZWVfcG9ydChsZCwgcG9y
dCk7CiAgICAgc3Bpbl91bmxvY2soJmxkLT5ldmVudF9sb2NrKTsKIAogICAg
IHJldHVybiByYyA8IDAgPyByYyA6IHBvcnQ7CkBAIC0xMzE2LDYgKzEzNDAs
NyBAQCBpbnQgZXZ0Y2huX2luaXQoc3RydWN0IGRvbWFpbiAqZCkKICAgICAg
ICAgcmV0dXJuIC1FSU5WQUw7CiAgICAgfQogICAgIGV2dGNobl9mcm9tX3Bv
cnQoZCwgMCktPnN0YXRlID0gRUNTX1JFU0VSVkVEOworICAgIHdyaXRlX2F0
b21pYygmZC0+YWN0aXZlX2V2dGNobnMsIDApOwogCiAjaWYgTUFYX1ZJUlRf
Q1BVUyA+IEJJVFNfUEVSX0xPTkcKICAgICBkLT5wb2xsX21hc2sgPSB4emFs
bG9jX2FycmF5KHVuc2lnbmVkIGxvbmcsCkBAIC0xMzQzLDYgKzEzNjgsOCBA
QCB2b2lkIGV2dGNobl9kZXN0cm95KHN0cnVjdCBkb21haW4gKmQpCiAgICAg
Zm9yICggaSA9IDA7IHBvcnRfaXNfdmFsaWQoZCwgaSk7IGkrKyApCiAgICAg
ICAgIGV2dGNobl9jbG9zZShkLCBpLCAwKTsKIAorICAgIEFTU0VSVCghZC0+
YWN0aXZlX2V2dGNobnMpOworCiAgICAgY2xlYXJfZ2xvYmFsX3ZpcnFfaGFu
ZGxlcnMoZCk7CiAKICAgICBldnRjaG5fZmlmb19kZXN0cm95KGQpOwotLS0g
YS94ZW4vaW5jbHVkZS94ZW4vc2NoZWQuaAorKysgYi94ZW4vaW5jbHVkZS94
ZW4vc2NoZWQuaApAQCAtMzM4LDYgKzMzOCwxNiBAQCBzdHJ1Y3QgZG9tYWlu
CiAgICAgc3RydWN0IGV2dGNobiAgKipldnRjaG5fZ3JvdXBbTlJfRVZUQ0hO
X0dST1VQU107IC8qIGFsbCBvdGhlciBidWNrZXRzICovCiAgICAgdW5zaWdu
ZWQgaW50ICAgICBtYXhfZXZ0Y2huX3BvcnQ7IC8qIG1heCBwZXJtaXR0ZWQg
cG9ydCBudW1iZXIgKi8KICAgICB1bnNpZ25lZCBpbnQgICAgIHZhbGlkX2V2
dGNobnM7ICAgLyogbnVtYmVyIG9mIGFsbG9jYXRlZCBldmVudCBjaGFubmVs
cyAqLworICAgIC8qCisgICAgICogTnVtYmVyIG9mIGluLXVzZSBldmVudCBj
aGFubmVscy4gIFdyaXRlcnMgc2hvdWxkIHVzZSB3cml0ZV9hdG9taWMoKS4K
KyAgICAgKiBSZWFkZXJzIG5lZWQgdG8gdXNlIHJlYWRfYXRvbWljKCkgb25s
eSB3aGVuIG5vdCBob2xkaW5nIGV2ZW50X2xvY2suCisgICAgICovCisgICAg
dW5zaWduZWQgaW50ICAgICBhY3RpdmVfZXZ0Y2huczsKKyAgICAvKgorICAg
ICAqIE51bWJlciBvZiBldmVudCBjaGFubmVscyB1c2VkIGludGVybmFsbHkg
YnkgWGVuIChub3Qgc3ViamVjdCB0bworICAgICAqIEVWVENITk9QX3Jlc2V0
KS4gIFJlYWQvd3JpdGUgYWNjZXNzIGxpa2UgZm9yIGFjdGl2ZV9ldnRjaG5z
LgorICAgICAqLworICAgIHVuc2lnbmVkIGludCAgICAgeGVuX2V2dGNobnM7
CiAgICAgc3BpbmxvY2tfdCAgICAgICBldmVudF9sb2NrOwogICAgIGNvbnN0
IHN0cnVjdCBldnRjaG5fcG9ydF9vcHMgKmV2dGNobl9wb3J0X29wczsKICAg
ICBzdHJ1Y3QgZXZ0Y2huX2ZpZm9fZG9tYWluICpldnRjaG5fZmlmbzsK

--=separator
Content-Type: application/octet-stream; name="xsa343/xsa343-4.10-2.patch"
Content-Disposition: attachment; filename="xsa343/xsa343-4.10-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBldnRjaG46IGNvbnZlcnQgcGVyLWNoYW5uZWwgbG9jayB0byBiZSBJUlEt
c2FmZQoKLi4uIGluIG9yZGVyIGZvciBzZW5kX2d1ZXN0X3tnbG9iYWwsdmNw
dX1fdmlycSgpIHRvIGJlIGFibGUgdG8gbWFrZSB1c2UKb2YgaXQuCgpUaGlz
IGlzIHBhcnQgb2YgWFNBLTM0My4KClNpZ25lZC1vZmYtYnk6IEphbiBCZXVs
aWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KQWNrZWQtYnk6IEp1bGllbiBHcmFs
bCA8amdyYWxsQGFtYXpvbi5jb20+CgotLS0gYS94ZW4vY29tbW9uL2V2ZW50
X2NoYW5uZWwuYworKysgYi94ZW4vY29tbW9uL2V2ZW50X2NoYW5uZWwuYwpA
QCAtMjU1LDYgKzI1NSw3IEBAIHN0YXRpYyBsb25nIGV2dGNobl9hbGxvY191
bmJvdW5kKGV2dGNobl8KICAgICBpbnQgICAgICAgICAgICBwb3J0OwogICAg
IGRvbWlkX3QgICAgICAgIGRvbSA9IGFsbG9jLT5kb207CiAgICAgbG9uZyAg
ICAgICAgICAgcmM7CisgICAgdW5zaWduZWQgbG9uZyAgZmxhZ3M7CiAKICAg
ICBkID0gcmN1X2xvY2tfZG9tYWluX2J5X2FueV9pZChkb20pOwogICAgIGlm
ICggZCA9PSBOVUxMICkKQEAgLTI3MCwxNCArMjcxLDE0IEBAIHN0YXRpYyBs
b25nIGV2dGNobl9hbGxvY191bmJvdW5kKGV2dGNobl8KICAgICBpZiAoIHJj
ICkKICAgICAgICAgZ290byBvdXQ7CiAKLSAgICBzcGluX2xvY2soJmNobi0+
bG9jayk7CisgICAgc3Bpbl9sb2NrX2lycXNhdmUoJmNobi0+bG9jaywgZmxh
Z3MpOwogCiAgICAgY2huLT5zdGF0ZSA9IEVDU19VTkJPVU5EOwogICAgIGlm
ICggKGNobi0+dS51bmJvdW5kLnJlbW90ZV9kb21pZCA9IGFsbG9jLT5yZW1v
dGVfZG9tKSA9PSBET01JRF9TRUxGICkKICAgICAgICAgY2huLT51LnVuYm91
bmQucmVtb3RlX2RvbWlkID0gY3VycmVudC0+ZG9tYWluLT5kb21haW5faWQ7
CiAgICAgZXZ0Y2huX3BvcnRfaW5pdChkLCBjaG4pOwogCi0gICAgc3Bpbl91
bmxvY2soJmNobi0+bG9jayk7CisgICAgc3Bpbl91bmxvY2tfaXJxcmVzdG9y
ZSgmY2huLT5sb2NrLCBmbGFncyk7CiAKICAgICBhbGxvYy0+cG9ydCA9IHBv
cnQ7CiAKQEAgLTI5MCwyNiArMjkxLDMyIEBAIHN0YXRpYyBsb25nIGV2dGNo
bl9hbGxvY191bmJvdW5kKGV2dGNobl8KIH0KIAogCi1zdGF0aWMgdm9pZCBk
b3VibGVfZXZ0Y2huX2xvY2soc3RydWN0IGV2dGNobiAqbGNobiwgc3RydWN0
IGV2dGNobiAqcmNobikKK3N0YXRpYyB1bnNpZ25lZCBsb25nIGRvdWJsZV9l
dnRjaG5fbG9jayhzdHJ1Y3QgZXZ0Y2huICpsY2huLAorICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIHN0cnVjdCBldnRjaG4gKnJj
aG4pCiB7Ci0gICAgaWYgKCBsY2huIDwgcmNobiApCisgICAgdW5zaWduZWQg
bG9uZyBmbGFnczsKKworICAgIGlmICggbGNobiA8PSByY2huICkKICAgICB7
Ci0gICAgICAgIHNwaW5fbG9jaygmbGNobi0+bG9jayk7Ci0gICAgICAgIHNw
aW5fbG9jaygmcmNobi0+bG9jayk7CisgICAgICAgIHNwaW5fbG9ja19pcnFz
YXZlKCZsY2huLT5sb2NrLCBmbGFncyk7CisgICAgICAgIGlmICggbGNobiAh
PSByY2huICkKKyAgICAgICAgICAgIHNwaW5fbG9jaygmcmNobi0+bG9jayk7
CiAgICAgfQogICAgIGVsc2UKICAgICB7Ci0gICAgICAgIGlmICggbGNobiAh
PSByY2huICkKLSAgICAgICAgICAgIHNwaW5fbG9jaygmcmNobi0+bG9jayk7
CisgICAgICAgIHNwaW5fbG9ja19pcnFzYXZlKCZyY2huLT5sb2NrLCBmbGFn
cyk7CiAgICAgICAgIHNwaW5fbG9jaygmbGNobi0+bG9jayk7CiAgICAgfQor
CisgICAgcmV0dXJuIGZsYWdzOwogfQogCi1zdGF0aWMgdm9pZCBkb3VibGVf
ZXZ0Y2huX3VubG9jayhzdHJ1Y3QgZXZ0Y2huICpsY2huLCBzdHJ1Y3QgZXZ0
Y2huICpyY2huKQorc3RhdGljIHZvaWQgZG91YmxlX2V2dGNobl91bmxvY2so
c3RydWN0IGV2dGNobiAqbGNobiwgc3RydWN0IGV2dGNobiAqcmNobiwKKyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHVuc2lnbmVkIGxvbmcg
ZmxhZ3MpCiB7Ci0gICAgc3Bpbl91bmxvY2soJmxjaG4tPmxvY2spOwogICAg
IGlmICggbGNobiAhPSByY2huICkKLSAgICAgICAgc3Bpbl91bmxvY2soJnJj
aG4tPmxvY2spOworICAgICAgICBzcGluX3VubG9jaygmbGNobi0+bG9jayk7
CisgICAgc3Bpbl91bmxvY2tfaXJxcmVzdG9yZSgmcmNobi0+bG9jaywgZmxh
Z3MpOwogfQogCiBzdGF0aWMgbG9uZyBldnRjaG5fYmluZF9pbnRlcmRvbWFp
bihldnRjaG5fYmluZF9pbnRlcmRvbWFpbl90ICpiaW5kKQpAQCAtMzE5LDYg
KzMyNiw3IEBAIHN0YXRpYyBsb25nIGV2dGNobl9iaW5kX2ludGVyZG9tYWlu
KGV2dGMKICAgICBpbnQgICAgICAgICAgICBscG9ydCwgcnBvcnQgPSBiaW5k
LT5yZW1vdGVfcG9ydDsKICAgICBkb21pZF90ICAgICAgICByZG9tID0gYmlu
ZC0+cmVtb3RlX2RvbTsKICAgICBsb25nICAgICAgICAgICByYzsKKyAgICB1
bnNpZ25lZCBsb25nICBmbGFnczsKIAogICAgIGlmICggcmRvbSA9PSBET01J
RF9TRUxGICkKICAgICAgICAgcmRvbSA9IGN1cnJlbnQtPmRvbWFpbi0+ZG9t
YWluX2lkOwpAQCAtMzU0LDcgKzM2Miw3IEBAIHN0YXRpYyBsb25nIGV2dGNo
bl9iaW5kX2ludGVyZG9tYWluKGV2dGMKICAgICBpZiAoIHJjICkKICAgICAg
ICAgZ290byBvdXQ7CiAKLSAgICBkb3VibGVfZXZ0Y2huX2xvY2sobGNobiwg
cmNobik7CisgICAgZmxhZ3MgPSBkb3VibGVfZXZ0Y2huX2xvY2sobGNobiwg
cmNobik7CiAKICAgICBsY2huLT51LmludGVyZG9tYWluLnJlbW90ZV9kb20g
ID0gcmQ7CiAgICAgbGNobi0+dS5pbnRlcmRvbWFpbi5yZW1vdGVfcG9ydCA9
IHJwb3J0OwpAQCAtMzcxLDcgKzM3OSw3IEBAIHN0YXRpYyBsb25nIGV2dGNo
bl9iaW5kX2ludGVyZG9tYWluKGV2dGMKICAgICAgKi8KICAgICBldnRjaG5f
cG9ydF9zZXRfcGVuZGluZyhsZCwgbGNobi0+bm90aWZ5X3ZjcHVfaWQsIGxj
aG4pOwogCi0gICAgZG91YmxlX2V2dGNobl91bmxvY2sobGNobiwgcmNobik7
CisgICAgZG91YmxlX2V2dGNobl91bmxvY2sobGNobiwgcmNobiwgZmxhZ3Mp
OwogCiAgICAgYmluZC0+bG9jYWxfcG9ydCA9IGxwb3J0OwogCkBAIC0zOTQs
NiArNDAyLDcgQEAgaW50IGV2dGNobl9iaW5kX3ZpcnEoZXZ0Y2huX2JpbmRf
dmlycV90CiAgICAgc3RydWN0IGRvbWFpbiAqZCA9IGN1cnJlbnQtPmRvbWFp
bjsKICAgICBpbnQgICAgICAgICAgICB2aXJxID0gYmluZC0+dmlycSwgdmNw
dSA9IGJpbmQtPnZjcHU7CiAgICAgaW50ICAgICAgICAgICAgcmMgPSAwOwor
ICAgIHVuc2lnbmVkIGxvbmcgIGZsYWdzOwogCiAgICAgaWYgKCAodmlycSA8
IDApIHx8ICh2aXJxID49IEFSUkFZX1NJWkUodi0+dmlycV90b19ldnRjaG4p
KSApCiAgICAgICAgIHJldHVybiAtRUlOVkFMOwpAQCAtNDI2LDE0ICs0MzUs
MTQgQEAgaW50IGV2dGNobl9iaW5kX3ZpcnEoZXZ0Y2huX2JpbmRfdmlycV90
CiAKICAgICBjaG4gPSBldnRjaG5fZnJvbV9wb3J0KGQsIHBvcnQpOwogCi0g
ICAgc3Bpbl9sb2NrKCZjaG4tPmxvY2spOworICAgIHNwaW5fbG9ja19pcnFz
YXZlKCZjaG4tPmxvY2ssIGZsYWdzKTsKIAogICAgIGNobi0+c3RhdGUgICAg
ICAgICAgPSBFQ1NfVklSUTsKICAgICBjaG4tPm5vdGlmeV92Y3B1X2lkID0g
dmNwdTsKICAgICBjaG4tPnUudmlycSAgICAgICAgID0gdmlycTsKICAgICBl
dnRjaG5fcG9ydF9pbml0KGQsIGNobik7CiAKLSAgICBzcGluX3VubG9jaygm
Y2huLT5sb2NrKTsKKyAgICBzcGluX3VubG9ja19pcnFyZXN0b3JlKCZjaG4t
PmxvY2ssIGZsYWdzKTsKIAogICAgIHYtPnZpcnFfdG9fZXZ0Y2huW3ZpcnFd
ID0gYmluZC0+cG9ydCA9IHBvcnQ7CiAKQEAgLTQ1MCw2ICs0NTksNyBAQCBz
dGF0aWMgbG9uZyBldnRjaG5fYmluZF9pcGkoZXZ0Y2huX2JpbmRfCiAgICAg
c3RydWN0IGRvbWFpbiAqZCA9IGN1cnJlbnQtPmRvbWFpbjsKICAgICBpbnQg
ICAgICAgICAgICBwb3J0LCB2Y3B1ID0gYmluZC0+dmNwdTsKICAgICBsb25n
ICAgICAgICAgICByYyA9IDA7CisgICAgdW5zaWduZWQgbG9uZyAgZmxhZ3M7
CiAKICAgICBpZiAoICh2Y3B1IDwgMCkgfHwgKHZjcHUgPj0gZC0+bWF4X3Zj
cHVzKSB8fAogICAgICAgICAgKGQtPnZjcHVbdmNwdV0gPT0gTlVMTCkgKQpA
QCAtNDYyLDEzICs0NzIsMTMgQEAgc3RhdGljIGxvbmcgZXZ0Y2huX2JpbmRf
aXBpKGV2dGNobl9iaW5kXwogCiAgICAgY2huID0gZXZ0Y2huX2Zyb21fcG9y
dChkLCBwb3J0KTsKIAotICAgIHNwaW5fbG9jaygmY2huLT5sb2NrKTsKKyAg
ICBzcGluX2xvY2tfaXJxc2F2ZSgmY2huLT5sb2NrLCBmbGFncyk7CiAKICAg
ICBjaG4tPnN0YXRlICAgICAgICAgID0gRUNTX0lQSTsKICAgICBjaG4tPm5v
dGlmeV92Y3B1X2lkID0gdmNwdTsKICAgICBldnRjaG5fcG9ydF9pbml0KGQs
IGNobik7CiAKLSAgICBzcGluX3VubG9jaygmY2huLT5sb2NrKTsKKyAgICBz
cGluX3VubG9ja19pcnFyZXN0b3JlKCZjaG4tPmxvY2ssIGZsYWdzKTsKIAog
ICAgIGJpbmQtPnBvcnQgPSBwb3J0OwogCkBAIC01MTIsNiArNTIyLDcgQEAg
c3RhdGljIGxvbmcgZXZ0Y2huX2JpbmRfcGlycShldnRjaG5fYmluZAogICAg
IHN0cnVjdCBwaXJxICAgKmluZm87CiAgICAgaW50ICAgICAgICAgICAgcG9y
dCA9IDAsIHBpcnEgPSBiaW5kLT5waXJxOwogICAgIGxvbmcgICAgICAgICAg
IHJjOworICAgIHVuc2lnbmVkIGxvbmcgIGZsYWdzOwogCiAgICAgaWYgKCAo
cGlycSA8IDApIHx8IChwaXJxID49IGQtPm5yX3BpcnFzKSApCiAgICAgICAg
IHJldHVybiAtRUlOVkFMOwpAQCAtNTQ0LDE0ICs1NTUsMTQgQEAgc3RhdGlj
IGxvbmcgZXZ0Y2huX2JpbmRfcGlycShldnRjaG5fYmluZAogICAgICAgICBn
b3RvIG91dDsKICAgICB9CiAKLSAgICBzcGluX2xvY2soJmNobi0+bG9jayk7
CisgICAgc3Bpbl9sb2NrX2lycXNhdmUoJmNobi0+bG9jaywgZmxhZ3MpOwog
CiAgICAgY2huLT5zdGF0ZSAgPSBFQ1NfUElSUTsKICAgICBjaG4tPnUucGly
cS5pcnEgPSBwaXJxOwogICAgIGxpbmtfcGlycV9wb3J0KHBvcnQsIGNobiwg
dik7CiAgICAgZXZ0Y2huX3BvcnRfaW5pdChkLCBjaG4pOwogCi0gICAgc3Bp
bl91bmxvY2soJmNobi0+bG9jayk7CisgICAgc3Bpbl91bmxvY2tfaXJxcmVz
dG9yZSgmY2huLT5sb2NrLCBmbGFncyk7CiAKICAgICBiaW5kLT5wb3J0ID0g
cG9ydDsKIApAQCAtNTcyLDYgKzU4Myw3IEBAIGludCBldnRjaG5fY2xvc2Uo
c3RydWN0IGRvbWFpbiAqZDEsIGludAogICAgIHN0cnVjdCBldnRjaG4gKmNo
bjEsICpjaG4yOwogICAgIGludCAgICAgICAgICAgIHBvcnQyOwogICAgIGxv
bmcgICAgICAgICAgIHJjID0gMDsKKyAgICB1bnNpZ25lZCBsb25nICBmbGFn
czsKIAogIGFnYWluOgogICAgIHNwaW5fbG9jaygmZDEtPmV2ZW50X2xvY2sp
OwpAQCAtNjcxLDE0ICs2ODMsMTQgQEAgaW50IGV2dGNobl9jbG9zZShzdHJ1
Y3QgZG9tYWluICpkMSwgaW50CiAgICAgICAgIEJVR19PTihjaG4yLT5zdGF0
ZSAhPSBFQ1NfSU5URVJET01BSU4pOwogICAgICAgICBCVUdfT04oY2huMi0+
dS5pbnRlcmRvbWFpbi5yZW1vdGVfZG9tICE9IGQxKTsKIAotICAgICAgICBk
b3VibGVfZXZ0Y2huX2xvY2soY2huMSwgY2huMik7CisgICAgICAgIGZsYWdz
ID0gZG91YmxlX2V2dGNobl9sb2NrKGNobjEsIGNobjIpOwogCiAgICAgICAg
IGV2dGNobl9mcmVlKGQxLCBjaG4xKTsKIAogICAgICAgICBjaG4yLT5zdGF0
ZSA9IEVDU19VTkJPVU5EOwogICAgICAgICBjaG4yLT51LnVuYm91bmQucmVt
b3RlX2RvbWlkID0gZDEtPmRvbWFpbl9pZDsKIAotICAgICAgICBkb3VibGVf
ZXZ0Y2huX3VubG9jayhjaG4xLCBjaG4yKTsKKyAgICAgICAgZG91YmxlX2V2
dGNobl91bmxvY2soY2huMSwgY2huMiwgZmxhZ3MpOwogCiAgICAgICAgIGdv
dG8gb3V0OwogCkBAIC02ODYsOSArNjk4LDkgQEAgaW50IGV2dGNobl9jbG9z
ZShzdHJ1Y3QgZG9tYWluICpkMSwgaW50CiAgICAgICAgIEJVRygpOwogICAg
IH0KIAotICAgIHNwaW5fbG9jaygmY2huMS0+bG9jayk7CisgICAgc3Bpbl9s
b2NrX2lycXNhdmUoJmNobjEtPmxvY2ssIGZsYWdzKTsKICAgICBldnRjaG5f
ZnJlZShkMSwgY2huMSk7Ci0gICAgc3Bpbl91bmxvY2soJmNobjEtPmxvY2sp
OworICAgIHNwaW5fdW5sb2NrX2lycXJlc3RvcmUoJmNobjEtPmxvY2ssIGZs
YWdzKTsKIAogIG91dDoKICAgICBpZiAoIGQyICE9IE5VTEwgKQpAQCAtNzA4
LDEzICs3MjAsMTQgQEAgaW50IGV2dGNobl9zZW5kKHN0cnVjdCBkb21haW4g
KmxkLCB1bnNpZwogICAgIHN0cnVjdCBldnRjaG4gKmxjaG4sICpyY2huOwog
ICAgIHN0cnVjdCBkb21haW4gKnJkOwogICAgIGludCAgICAgICAgICAgIHJw
b3J0LCByZXQgPSAwOworICAgIHVuc2lnbmVkIGxvbmcgIGZsYWdzOwogCiAg
ICAgaWYgKCAhcG9ydF9pc192YWxpZChsZCwgbHBvcnQpICkKICAgICAgICAg
cmV0dXJuIC1FSU5WQUw7CiAKICAgICBsY2huID0gZXZ0Y2huX2Zyb21fcG9y
dChsZCwgbHBvcnQpOwogCi0gICAgc3Bpbl9sb2NrKCZsY2huLT5sb2NrKTsK
KyAgICBzcGluX2xvY2tfaXJxc2F2ZSgmbGNobi0+bG9jaywgZmxhZ3MpOwog
CiAgICAgLyogR3Vlc3QgY2Fubm90IHNlbmQgdmlhIGEgWGVuLWF0dGFjaGVk
IGV2ZW50IGNoYW5uZWwuICovCiAgICAgaWYgKCB1bmxpa2VseShjb25zdW1l
cl9pc194ZW4obGNobikpICkKQEAgLTc0OSw3ICs3NjIsNyBAQCBpbnQgZXZ0
Y2huX3NlbmQoc3RydWN0IGRvbWFpbiAqbGQsIHVuc2lnCiAgICAgfQogCiBv
dXQ6Ci0gICAgc3Bpbl91bmxvY2soJmxjaG4tPmxvY2spOworICAgIHNwaW5f
dW5sb2NrX2lycXJlc3RvcmUoJmxjaG4tPmxvY2ssIGZsYWdzKTsKIAogICAg
IHJldHVybiByZXQ7CiB9CkBAIC0xMjQwLDYgKzEyNTMsNyBAQCBpbnQgYWxs
b2NfdW5ib3VuZF94ZW5fZXZlbnRfY2hhbm5lbCgKIHsKICAgICBzdHJ1Y3Qg
ZXZ0Y2huICpjaG47CiAgICAgaW50ICAgICAgICAgICAgcG9ydCwgcmM7Cisg
ICAgdW5zaWduZWQgbG9uZyAgZmxhZ3M7CiAKICAgICBzcGluX2xvY2soJmxk
LT5ldmVudF9sb2NrKTsKIApAQCAtMTI1MiwxNCArMTI2NiwxNCBAQCBpbnQg
YWxsb2NfdW5ib3VuZF94ZW5fZXZlbnRfY2hhbm5lbCgKICAgICBpZiAoIHJj
ICkKICAgICAgICAgZ290byBvdXQ7CiAKLSAgICBzcGluX2xvY2soJmNobi0+
bG9jayk7CisgICAgc3Bpbl9sb2NrX2lycXNhdmUoJmNobi0+bG9jaywgZmxh
Z3MpOwogCiAgICAgY2huLT5zdGF0ZSA9IEVDU19VTkJPVU5EOwogICAgIGNo
bi0+eGVuX2NvbnN1bWVyID0gZ2V0X3hlbl9jb25zdW1lcihub3RpZmljYXRp
b25fZm4pOwogICAgIGNobi0+bm90aWZ5X3ZjcHVfaWQgPSBsdmNwdTsKICAg
ICBjaG4tPnUudW5ib3VuZC5yZW1vdGVfZG9taWQgPSByZW1vdGVfZG9taWQ7
CiAKLSAgICBzcGluX3VubG9jaygmY2huLT5sb2NrKTsKKyAgICBzcGluX3Vu
bG9ja19pcnFyZXN0b3JlKCZjaG4tPmxvY2ssIGZsYWdzKTsKIAogICAgIHdy
aXRlX2F0b21pYygmbGQtPnhlbl9ldnRjaG5zLCBsZC0+eGVuX2V2dGNobnMg
KyAxKTsKIApAQCAtMTI4MiwxMSArMTI5NiwxMiBAQCB2b2lkIG5vdGlmeV92
aWFfeGVuX2V2ZW50X2NoYW5uZWwoc3RydWN0CiB7CiAgICAgc3RydWN0IGV2
dGNobiAqbGNobiwgKnJjaG47CiAgICAgc3RydWN0IGRvbWFpbiAqcmQ7Cisg
ICAgdW5zaWduZWQgbG9uZyBmbGFnczsKIAogICAgIEFTU0VSVChwb3J0X2lz
X3ZhbGlkKGxkLCBscG9ydCkpOwogICAgIGxjaG4gPSBldnRjaG5fZnJvbV9w
b3J0KGxkLCBscG9ydCk7CiAKLSAgICBzcGluX2xvY2soJmxjaG4tPmxvY2sp
OworICAgIHNwaW5fbG9ja19pcnFzYXZlKCZsY2huLT5sb2NrLCBmbGFncyk7
CiAKICAgICBpZiAoIGxpa2VseShsY2huLT5zdGF0ZSA9PSBFQ1NfSU5URVJE
T01BSU4pICkKICAgICB7CkBAIC0xMjk2LDcgKzEzMTEsNyBAQCB2b2lkIG5v
dGlmeV92aWFfeGVuX2V2ZW50X2NoYW5uZWwoc3RydWN0CiAgICAgICAgIGV2
dGNobl9wb3J0X3NldF9wZW5kaW5nKHJkLCByY2huLT5ub3RpZnlfdmNwdV9p
ZCwgcmNobik7CiAgICAgfQogCi0gICAgc3Bpbl91bmxvY2soJmxjaG4tPmxv
Y2spOworICAgIHNwaW5fdW5sb2NrX2lycXJlc3RvcmUoJmxjaG4tPmxvY2ss
IGZsYWdzKTsKIH0KIAogdm9pZCBldnRjaG5fY2hlY2tfcG9sbGVycyhzdHJ1
Y3QgZG9tYWluICpkLCB1bnNpZ25lZCBpbnQgcG9ydCkK

--=separator
Content-Type: application/octet-stream; name="xsa343/xsa343-4.10-3.patch"
Content-Disposition: attachment; filename="xsa343/xsa343-4.10-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBldnRjaG46IGFkZHJlc3MgcmFjZXMgd2l0aCBldnRjaG5fcmVzZXQoKQoK
TmVpdGhlciBkLT5ldnRjaG5fcG9ydF9vcHMgbm9yIG1heF9ldnRjaG5zKGQp
IG1heSBiZSB1c2VkIGluIGFuIGVudGlyZWx5CmxvY2stbGVzcyBtYW5uZXIs
IGFzIGJvdGggbWF5IGNoYW5nZSBieSBhIHJhY2luZyBldnRjaG5fcmVzZXQo
KS4gSW4gdGhlCmNvbW1vbiBjYXNlLCBhdCBsZWFzdCBvbmUgb2YgdGhlIGRv
bWFpbidzIGV2ZW50IGxvY2sgb3IgdGhlIHBlci1jaGFubmVsCmxvY2sgbmVl
ZHMgdG8gYmUgaGVsZC4gSW4gdGhlIHNwZWNpZmljIGNhc2Ugb2YgdGhlIGlu
dGVyLWRvbWFpbiBzZW5kaW5nCmJ5IGV2dGNobl9zZW5kKCkgYW5kIG5vdGlm
eV92aWFfeGVuX2V2ZW50X2NoYW5uZWwoKSBob2xkaW5nIHRoZSBvdGhlcgpz
aWRlJ3MgcGVyLWNoYW5uZWwgbG9jayBpcyBzdWZmaWNpZW50LCBhcyB0aGUg
Y2hhbm5lbCBjYW4ndCBjaGFuZ2Ugc3RhdGUKd2l0aG91dCBib3RoIHBlci1j
aGFubmVsIGxvY2tzIGhlbGQuIFdpdGhvdXQgc3VjaCBhIGNoYW5uZWwgY2hh
bmdpbmcKc3RhdGUsIGV2dGNobl9yZXNldCgpIGNhbid0IGNvbXBsZXRlIHN1
Y2Nlc3NmdWxseS4KCkxvY2stZnJlZSBhY2Nlc3NlcyBjb250aW51ZSB0byBi
ZSBwZXJtaXR0ZWQgZm9yIHRoZSBzaGltIChjYWxsaW5nIHNvbWUKb3RoZXJ3
aXNlIGludGVybmFsIGV2ZW50IGNoYW5uZWwgZnVuY3Rpb25zKSwgYXMgdGhp
cyBoYXBwZW5zIHdoaWxlIHRoZQpkb21haW4gaXMgaW4gZWZmZWN0aXZlbHkg
c2luZ2xlLXRocmVhZGVkIG1vZGUuIFNwZWNpYWwgY2FyZSBhbHNvIG5lZWRz
CnRha2luZyBmb3IgdGhlIHNoaW0ncyBtYXJraW5nIG9mIGluLXVzZSBwb3J0
cyBhcyBFQ1NfUkVTRVJWRUQgKGFsbG93aW5nCnVzZSBvZiBzdWNoIHBvcnRz
IGluIHRoZSBzaGltIGNhc2UgaXMgb2theSBiZWNhdXNlIHN3aXRjaGluZyBp
bnRvIGFuZApoZW5jZSBhbHNvIG91dCBvZiBGSUZPIG1vZGUgaXMgaW1wb3Nz
aWJsZSB0aGVyZSkuCgpBcyBhIHNpZGUgZWZmZWN0LCBjZXJ0YWluIG9wZXJh
dGlvbnMgb24gWGVuIGJvdW5kIGV2ZW50IGNoYW5uZWxzIHdoaWNoCndlcmUg
bWlzdGFrZW5seSBwZXJtaXR0ZWQgc28gZmFyIChlLmcuIHVubWFzayBvciBw
b2xsKSB3aWxsIGJlIHJlZnVzZWQKbm93LgoKVGhpcyBpcyBwYXJ0IG9mIFhT
QS0zNDMuCgpSZXBvcnRlZC1ieTogSnVsaWVuIEdyYWxsIDxqZ3JhbGxAYW1h
em9uLmNvbT4KU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNo
QHN1c2UuY29tPgpBY2tlZC1ieTogSnVsaWVuIEdyYWxsIDxqZ3JhbGxAYW1h
em9uLmNvbT4KCi0tLSBhL3hlbi9hcmNoL3g4Ni9pcnEuYworKysgYi94ZW4v
YXJjaC94ODYvaXJxLmMKQEAgLTIzMjMsMTQgKzIzMjMsMjQgQEAgc3RhdGlj
IHZvaWQgZHVtcF9pcnFzKHVuc2lnbmVkIGNoYXIga2V5KQogCiAgICAgICAg
ICAgICBmb3IgKCBpID0gMDsgaSA8IGFjdGlvbi0+bnJfZ3Vlc3RzOyBpKysg
KQogICAgICAgICAgICAgeworICAgICAgICAgICAgICAgIHN0cnVjdCBldnRj
aG4gKmV2dGNobjsKKyAgICAgICAgICAgICAgICB1bnNpZ25lZCBpbnQgcGVu
ZGluZyA9IDIsIG1hc2tlZCA9IDI7CisKICAgICAgICAgICAgICAgICBkID0g
YWN0aW9uLT5ndWVzdFtpXTsKICAgICAgICAgICAgICAgICBwaXJxID0gZG9t
YWluX2lycV90b19waXJxKGQsIGlycSk7CiAgICAgICAgICAgICAgICAgaW5m
byA9IHBpcnFfaW5mbyhkLCBwaXJxKTsKKyAgICAgICAgICAgICAgICBldnRj
aG4gPSBldnRjaG5fZnJvbV9wb3J0KGQsIGluZm8tPmV2dGNobik7CisgICAg
ICAgICAgICAgICAgbG9jYWxfaXJxX2Rpc2FibGUoKTsKKyAgICAgICAgICAg
ICAgICBpZiAoIHNwaW5fdHJ5bG9jaygmZXZ0Y2huLT5sb2NrKSApCisgICAg
ICAgICAgICAgICAgeworICAgICAgICAgICAgICAgICAgICBwZW5kaW5nID0g
ZXZ0Y2huX2lzX3BlbmRpbmcoZCwgZXZ0Y2huKTsKKyAgICAgICAgICAgICAg
ICAgICAgbWFza2VkID0gZXZ0Y2huX2lzX21hc2tlZChkLCBldnRjaG4pOwor
ICAgICAgICAgICAgICAgICAgICBzcGluX3VubG9jaygmZXZ0Y2huLT5sb2Nr
KTsKKyAgICAgICAgICAgICAgICB9CisgICAgICAgICAgICAgICAgbG9jYWxf
aXJxX2VuYWJsZSgpOwogICAgICAgICAgICAgICAgIHByaW50aygiJXU6JTNk
KCVjJWMlYykiLAotICAgICAgICAgICAgICAgICAgICAgICBkLT5kb21haW5f
aWQsIHBpcnEsCi0gICAgICAgICAgICAgICAgICAgICAgIGV2dGNobl9wb3J0
X2lzX3BlbmRpbmcoZCwgaW5mby0+ZXZ0Y2huKSA/ICdQJyA6ICctJywKLSAg
ICAgICAgICAgICAgICAgICAgICAgZXZ0Y2huX3BvcnRfaXNfbWFza2VkKGQs
IGluZm8tPmV2dGNobikgPyAnTScgOiAnLScsCi0gICAgICAgICAgICAgICAg
ICAgICAgIChpbmZvLT5tYXNrZWQgPyAnTScgOiAnLScpKTsKKyAgICAgICAg
ICAgICAgICAgICAgICAgZC0+ZG9tYWluX2lkLCBwaXJxLCAiLVA/IltwZW5k
aW5nXSwKKyAgICAgICAgICAgICAgICAgICAgICAgIi1NPyJbbWFza2VkXSwg
aW5mby0+bWFza2VkID8gJ00nIDogJy0nKTsKICAgICAgICAgICAgICAgICBp
ZiAoIGkgIT0gYWN0aW9uLT5ucl9ndWVzdHMgKQogICAgICAgICAgICAgICAg
ICAgICBwcmludGsoIiwiKTsKICAgICAgICAgICAgIH0KLS0tIGEveGVuL2Fy
Y2gveDg2L3B2L3NoaW0uYworKysgYi94ZW4vYXJjaC94ODYvcHYvc2hpbS5j
CkBAIC02MDMsOCArNjAzLDExIEBAIHZvaWQgcHZfc2hpbV9pbmplY3RfZXZ0
Y2huKHVuc2lnbmVkIGludAogICAgIGlmICggcG9ydF9pc192YWxpZChndWVz
dCwgcG9ydCkgKQogICAgIHsKICAgICAgICAgc3RydWN0IGV2dGNobiAqY2hu
ID0gZXZ0Y2huX2Zyb21fcG9ydChndWVzdCwgcG9ydCk7CisgICAgICAgIHVu
c2lnbmVkIGxvbmcgZmxhZ3M7CiAKKyAgICAgICAgc3Bpbl9sb2NrX2lycXNh
dmUoJmNobi0+bG9jaywgZmxhZ3MpOwogICAgICAgICBldnRjaG5fcG9ydF9z
ZXRfcGVuZGluZyhndWVzdCwgY2huLT5ub3RpZnlfdmNwdV9pZCwgY2huKTsK
KyAgICAgICAgc3Bpbl91bmxvY2tfaXJxcmVzdG9yZSgmY2huLT5sb2NrLCBm
bGFncyk7CiAgICAgfQogfQogCi0tLSBhL3hlbi9jb21tb24vZXZlbnRfMmwu
YworKysgYi94ZW4vY29tbW9uL2V2ZW50XzJsLmMKQEAgLTYzLDggKzYzLDEw
IEBAIHN0YXRpYyB2b2lkIGV2dGNobl8ybF91bm1hc2soc3RydWN0IGRvbWEK
ICAgICB9CiB9CiAKLXN0YXRpYyBib29sIGV2dGNobl8ybF9pc19wZW5kaW5n
KGNvbnN0IHN0cnVjdCBkb21haW4gKmQsIGV2dGNobl9wb3J0X3QgcG9ydCkK
K3N0YXRpYyBib29sIGV2dGNobl8ybF9pc19wZW5kaW5nKGNvbnN0IHN0cnVj
dCBkb21haW4gKmQsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBjb25zdCBzdHJ1Y3QgZXZ0Y2huICpldnRjaG4pCiB7CisgICAgZXZ0Y2hu
X3BvcnRfdCBwb3J0ID0gZXZ0Y2huLT5wb3J0OwogICAgIHVuc2lnbmVkIGlu
dCBtYXhfcG9ydHMgPSBCSVRTX1BFUl9FVlRDSE5fV09SRChkKSAqIEJJVFNf
UEVSX0VWVENITl9XT1JEKGQpOwogCiAgICAgQVNTRVJUKHBvcnQgPCBtYXhf
cG9ydHMpOwpAQCAtNzIsOCArNzQsMTAgQEAgc3RhdGljIGJvb2wgZXZ0Y2hu
XzJsX2lzX3BlbmRpbmcoY29uc3QgcwogICAgICAgICAgICAgZ3Vlc3RfdGVz
dF9iaXQoZCwgcG9ydCwgJnNoYXJlZF9pbmZvKGQsIGV2dGNobl9wZW5kaW5n
KSkpOwogfQogCi1zdGF0aWMgYm9vbCBldnRjaG5fMmxfaXNfbWFza2VkKGNv
bnN0IHN0cnVjdCBkb21haW4gKmQsIGV2dGNobl9wb3J0X3QgcG9ydCkKK3N0
YXRpYyBib29sIGV2dGNobl8ybF9pc19tYXNrZWQoY29uc3Qgc3RydWN0IGRv
bWFpbiAqZCwKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgY29u
c3Qgc3RydWN0IGV2dGNobiAqZXZ0Y2huKQogeworICAgIGV2dGNobl9wb3J0
X3QgcG9ydCA9IGV2dGNobi0+cG9ydDsKICAgICB1bnNpZ25lZCBpbnQgbWF4
X3BvcnRzID0gQklUU19QRVJfRVZUQ0hOX1dPUkQoZCkgKiBCSVRTX1BFUl9F
VlRDSE5fV09SRChkKTsKIAogICAgIEFTU0VSVChwb3J0IDwgbWF4X3BvcnRz
KTsKLS0tIGEveGVuL2NvbW1vbi9ldmVudF9jaGFubmVsLmMKKysrIGIveGVu
L2NvbW1vbi9ldmVudF9jaGFubmVsLmMKQEAgLTE2Myw4ICsxNjMsOSBAQCBp
bnQgZXZ0Y2huX2FsbG9jYXRlX3BvcnQoc3RydWN0IGRvbWFpbiAqCiAKICAg
ICBpZiAoIHBvcnRfaXNfdmFsaWQoZCwgcG9ydCkgKQogICAgIHsKLSAgICAg
ICAgaWYgKCBldnRjaG5fZnJvbV9wb3J0KGQsIHBvcnQpLT5zdGF0ZSAhPSBF
Q1NfRlJFRSB8fAotICAgICAgICAgICAgIGV2dGNobl9wb3J0X2lzX2J1c3ko
ZCwgcG9ydCkgKQorICAgICAgICBjb25zdCBzdHJ1Y3QgZXZ0Y2huICpjaG4g
PSBldnRjaG5fZnJvbV9wb3J0KGQsIHBvcnQpOworCisgICAgICAgIGlmICgg
Y2huLT5zdGF0ZSAhPSBFQ1NfRlJFRSB8fCBldnRjaG5faXNfYnVzeShkLCBj
aG4pICkKICAgICAgICAgICAgIHJldHVybiAtRUJVU1k7CiAgICAgfQogICAg
IGVsc2UKQEAgLTc3Nyw2ICs3NzgsNyBAQCB2b2lkIHNlbmRfZ3Vlc3RfdmNw
dV92aXJxKHN0cnVjdCB2Y3B1ICp2CiAgICAgdW5zaWduZWQgbG9uZyBmbGFn
czsKICAgICBpbnQgcG9ydDsKICAgICBzdHJ1Y3QgZG9tYWluICpkOworICAg
IHN0cnVjdCBldnRjaG4gKmNobjsKIAogICAgIEFTU0VSVCghdmlycV9pc19n
bG9iYWwodmlycSkpOwogCkBAIC03ODcsNyArNzg5LDEwIEBAIHZvaWQgc2Vu
ZF9ndWVzdF92Y3B1X3ZpcnEoc3RydWN0IHZjcHUgKnYKICAgICAgICAgZ290
byBvdXQ7CiAKICAgICBkID0gdi0+ZG9tYWluOwotICAgIGV2dGNobl9wb3J0
X3NldF9wZW5kaW5nKGQsIHYtPnZjcHVfaWQsIGV2dGNobl9mcm9tX3BvcnQo
ZCwgcG9ydCkpOworICAgIGNobiA9IGV2dGNobl9mcm9tX3BvcnQoZCwgcG9y
dCk7CisgICAgc3Bpbl9sb2NrKCZjaG4tPmxvY2spOworICAgIGV2dGNobl9w
b3J0X3NldF9wZW5kaW5nKGQsIHYtPnZjcHVfaWQsIGNobik7CisgICAgc3Bp
bl91bmxvY2soJmNobi0+bG9jayk7CiAKICBvdXQ6CiAgICAgc3Bpbl91bmxv
Y2tfaXJxcmVzdG9yZSgmdi0+dmlycV9sb2NrLCBmbGFncyk7CkBAIC04MTYs
NyArODIxLDkgQEAgc3RhdGljIHZvaWQgc2VuZF9ndWVzdF9nbG9iYWxfdmly
cShzdHJ1YwogICAgICAgICBnb3RvIG91dDsKIAogICAgIGNobiA9IGV2dGNo
bl9mcm9tX3BvcnQoZCwgcG9ydCk7CisgICAgc3Bpbl9sb2NrKCZjaG4tPmxv
Y2spOwogICAgIGV2dGNobl9wb3J0X3NldF9wZW5kaW5nKGQsIGNobi0+bm90
aWZ5X3ZjcHVfaWQsIGNobik7CisgICAgc3Bpbl91bmxvY2soJmNobi0+bG9j
ayk7CiAKICBvdXQ6CiAgICAgc3Bpbl91bmxvY2tfaXJxcmVzdG9yZSgmdi0+
dmlycV9sb2NrLCBmbGFncyk7CkBAIC04MjYsNiArODMzLDcgQEAgdm9pZCBz
ZW5kX2d1ZXN0X3BpcnEoc3RydWN0IGRvbWFpbiAqZCwgYwogewogICAgIGlu
dCBwb3J0OwogICAgIHN0cnVjdCBldnRjaG4gKmNobjsKKyAgICB1bnNpZ25l
ZCBsb25nIGZsYWdzOwogCiAgICAgLyoKICAgICAgKiBQViBndWVzdHM6IEl0
IHNob3VsZCBub3QgYmUgcG9zc2libGUgdG8gcmFjZSB3aXRoIF9fZXZ0Y2hu
X2Nsb3NlKCkuIFRoZQpAQCAtODQwLDcgKzg0OCw5IEBAIHZvaWQgc2VuZF9n
dWVzdF9waXJxKHN0cnVjdCBkb21haW4gKmQsIGMKICAgICB9CiAKICAgICBj
aG4gPSBldnRjaG5fZnJvbV9wb3J0KGQsIHBvcnQpOworICAgIHNwaW5fbG9j
a19pcnFzYXZlKCZjaG4tPmxvY2ssIGZsYWdzKTsKICAgICBldnRjaG5fcG9y
dF9zZXRfcGVuZGluZyhkLCBjaG4tPm5vdGlmeV92Y3B1X2lkLCBjaG4pOwor
ICAgIHNwaW5fdW5sb2NrX2lycXJlc3RvcmUoJmNobi0+bG9jaywgZmxhZ3Mp
OwogfQogCiBzdGF0aWMgc3RydWN0IGRvbWFpbiAqZ2xvYmFsX3ZpcnFfaGFu
ZGxlcnNbTlJfVklSUVNdIF9fcmVhZF9tb3N0bHk7CkBAIC0xMDM2LDEyICsx
MDQ2LDE1IEBAIGludCBldnRjaG5fdW5tYXNrKHVuc2lnbmVkIGludCBwb3J0
KQogewogICAgIHN0cnVjdCBkb21haW4gKmQgPSBjdXJyZW50LT5kb21haW47
CiAgICAgc3RydWN0IGV2dGNobiAqZXZ0Y2huOworICAgIHVuc2lnbmVkIGxv
bmcgZmxhZ3M7CiAKICAgICBpZiAoIHVubGlrZWx5KCFwb3J0X2lzX3ZhbGlk
KGQsIHBvcnQpKSApCiAgICAgICAgIHJldHVybiAtRUlOVkFMOwogCiAgICAg
ZXZ0Y2huID0gZXZ0Y2huX2Zyb21fcG9ydChkLCBwb3J0KTsKKyAgICBzcGlu
X2xvY2tfaXJxc2F2ZSgmZXZ0Y2huLT5sb2NrLCBmbGFncyk7CiAgICAgZXZ0
Y2huX3BvcnRfdW5tYXNrKGQsIGV2dGNobik7CisgICAgc3Bpbl91bmxvY2tf
aXJxcmVzdG9yZSgmZXZ0Y2huLT5sb2NrLCBmbGFncyk7CiAKICAgICByZXR1
cm4gMDsKIH0KQEAgLTE0NTQsOCArMTQ2Nyw4IEBAIHN0YXRpYyB2b2lkIGRv
bWFpbl9kdW1wX2V2dGNobl9pbmZvKHN0cnUKIAogICAgICAgICBwcmludGso
IiAgICAlNHUgWyVkLyVkLyIsCiAgICAgICAgICAgICAgICBwb3J0LAotICAg
ICAgICAgICAgICAgZXZ0Y2huX3BvcnRfaXNfcGVuZGluZyhkLCBwb3J0KSwK
LSAgICAgICAgICAgICAgIGV2dGNobl9wb3J0X2lzX21hc2tlZChkLCBwb3J0
KSk7CisgICAgICAgICAgICAgICBldnRjaG5faXNfcGVuZGluZyhkLCBjaG4p
LAorICAgICAgICAgICAgICAgZXZ0Y2huX2lzX21hc2tlZChkLCBjaG4pKTsK
ICAgICAgICAgZXZ0Y2huX3BvcnRfcHJpbnRfc3RhdGUoZCwgY2huKTsKICAg
ICAgICAgcHJpbnRrKCJdOiBzPSVkIG49JWQgeD0lZCIsCiAgICAgICAgICAg
ICAgICBjaG4tPnN0YXRlLCBjaG4tPm5vdGlmeV92Y3B1X2lkLCBjaG4tPnhl
bl9jb25zdW1lcik7Ci0tLSBhL3hlbi9jb21tb24vZXZlbnRfZmlmby5jCisr
KyBiL3hlbi9jb21tb24vZXZlbnRfZmlmby5jCkBAIC0yOTUsMjMgKzI5NSwy
NiBAQCBzdGF0aWMgdm9pZCBldnRjaG5fZmlmb191bm1hc2soc3RydWN0IGRv
CiAgICAgICAgIGV2dGNobl9maWZvX3NldF9wZW5kaW5nKHYsIGV2dGNobik7
CiB9CiAKLXN0YXRpYyBib29sIGV2dGNobl9maWZvX2lzX3BlbmRpbmcoY29u
c3Qgc3RydWN0IGRvbWFpbiAqZCwgZXZ0Y2huX3BvcnRfdCBwb3J0KQorc3Rh
dGljIGJvb2wgZXZ0Y2huX2ZpZm9faXNfcGVuZGluZyhjb25zdCBzdHJ1Y3Qg
ZG9tYWluICpkLAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBjb25zdCBzdHJ1Y3QgZXZ0Y2huICpldnRjaG4pCiB7Ci0gICAgY29uc3Qg
ZXZlbnRfd29yZF90ICp3b3JkID0gZXZ0Y2huX2ZpZm9fd29yZF9mcm9tX3Bv
cnQoZCwgcG9ydCk7CisgICAgY29uc3QgZXZlbnRfd29yZF90ICp3b3JkID0g
ZXZ0Y2huX2ZpZm9fd29yZF9mcm9tX3BvcnQoZCwgZXZ0Y2huLT5wb3J0KTsK
IAogICAgIHJldHVybiB3b3JkICYmIGd1ZXN0X3Rlc3RfYml0KGQsIEVWVENI
Tl9GSUZPX1BFTkRJTkcsIHdvcmQpOwogfQogCi1zdGF0aWMgYm9vbF90IGV2
dGNobl9maWZvX2lzX21hc2tlZChjb25zdCBzdHJ1Y3QgZG9tYWluICpkLCBl
dnRjaG5fcG9ydF90IHBvcnQpCitzdGF0aWMgYm9vbF90IGV2dGNobl9maWZv
X2lzX21hc2tlZChjb25zdCBzdHJ1Y3QgZG9tYWluICpkLAorICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgY29uc3Qgc3RydWN0IGV2dGNo
biAqZXZ0Y2huKQogewotICAgIGNvbnN0IGV2ZW50X3dvcmRfdCAqd29yZCA9
IGV2dGNobl9maWZvX3dvcmRfZnJvbV9wb3J0KGQsIHBvcnQpOworICAgIGNv
bnN0IGV2ZW50X3dvcmRfdCAqd29yZCA9IGV2dGNobl9maWZvX3dvcmRfZnJv
bV9wb3J0KGQsIGV2dGNobi0+cG9ydCk7CiAKICAgICByZXR1cm4gIXdvcmQg
fHwgZ3Vlc3RfdGVzdF9iaXQoZCwgRVZUQ0hOX0ZJRk9fTUFTS0VELCB3b3Jk
KTsKIH0KIAotc3RhdGljIGJvb2xfdCBldnRjaG5fZmlmb19pc19idXN5KGNv
bnN0IHN0cnVjdCBkb21haW4gKmQsIGV2dGNobl9wb3J0X3QgcG9ydCkKK3N0
YXRpYyBib29sX3QgZXZ0Y2huX2ZpZm9faXNfYnVzeShjb25zdCBzdHJ1Y3Qg
ZG9tYWluICpkLAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IGNvbnN0IHN0cnVjdCBldnRjaG4gKmV2dGNobikKIHsKLSAgICBjb25zdCBl
dmVudF93b3JkX3QgKndvcmQgPSBldnRjaG5fZmlmb193b3JkX2Zyb21fcG9y
dChkLCBwb3J0KTsKKyAgICBjb25zdCBldmVudF93b3JkX3QgKndvcmQgPSBl
dnRjaG5fZmlmb193b3JkX2Zyb21fcG9ydChkLCBldnRjaG4tPnBvcnQpOwog
CiAgICAgcmV0dXJuIHdvcmQgJiYgZ3Vlc3RfdGVzdF9iaXQoZCwgRVZUQ0hO
X0ZJRk9fTElOS0VELCB3b3JkKTsKIH0KLS0tIGEveGVuL2luY2x1ZGUvYXNt
LXg4Ni9ldmVudC5oCisrKyBiL3hlbi9pbmNsdWRlL2FzbS14ODYvZXZlbnQu
aApAQCAtNDcsNCArNDcsMTAgQEAgc3RhdGljIGlubGluZSBpbnQgYXJjaF92
aXJxX2lzX2dsb2JhbCh1aQogICAgIHJldHVybiAxOwogfQogCisjaWZkZWYg
Q09ORklHX1BWX1NISU0KKyMgaW5jbHVkZSA8YXNtL3B2L3NoaW0uaD4KKyMg
ZGVmaW5lIGFyY2hfZXZ0Y2huX2lzX3NwZWNpYWwoY2huKSBcCisgICAgICAg
ICAgICAgKHB2X3NoaW0gJiYgKGNobiktPnBvcnQgJiYgKGNobiktPnN0YXRl
ID09IEVDU19SRVNFUlZFRCkKKyNlbmRpZgorCiAjZW5kaWYKLS0tIGEveGVu
L2luY2x1ZGUveGVuL2V2ZW50LmgKKysrIGIveGVuL2luY2x1ZGUveGVuL2V2
ZW50LmgKQEAgLTEyNSw2ICsxMjUsMjQgQEAgc3RhdGljIGlubGluZSBzdHJ1
Y3QgZXZ0Y2huICpldnRjaG5fZnJvbQogICAgIHJldHVybiBidWNrZXRfZnJv
bV9wb3J0KGQsIHApICsgKHAgJSBFVlRDSE5TX1BFUl9CVUNLRVQpOwogfQog
CisvKgorICogInVzYWJsZSIgYXMgaW4gImJ5IGEgZ3Vlc3QiLCBpLmUuIFhl
biBjb25zdW1lZCBjaGFubmVscyBhcmUgYXNzdW1lZCB0byBiZQorICogdGFr
ZW4gY2FyZSBvZiBzZXBhcmF0ZWx5IHdoZXJlIHVzZWQgZm9yIFhlbidzIGlu
dGVybmFsIHB1cnBvc2VzLgorICovCitzdGF0aWMgYm9vbCBldnRjaG5fdXNh
YmxlKGNvbnN0IHN0cnVjdCBldnRjaG4gKmV2dGNobikKK3sKKyAgICBpZiAo
IGV2dGNobi0+eGVuX2NvbnN1bWVyICkKKyAgICAgICAgcmV0dXJuIGZhbHNl
OworCisjaWZkZWYgYXJjaF9ldnRjaG5faXNfc3BlY2lhbAorICAgIGlmICgg
YXJjaF9ldnRjaG5faXNfc3BlY2lhbChldnRjaG4pICkKKyAgICAgICAgcmV0
dXJuIHRydWU7CisjZW5kaWYKKworICAgIEJVSUxEX0JVR19PTihFQ1NfRlJF
RSA+IEVDU19SRVNFUlZFRCk7CisgICAgcmV0dXJuIGV2dGNobi0+c3RhdGUg
PiBFQ1NfUkVTRVJWRUQ7Cit9CisKIC8qIFdhaXQgb24gYSBYZW4tYXR0YWNo
ZWQgZXZlbnQgY2hhbm5lbC4gKi8KICNkZWZpbmUgd2FpdF9vbl94ZW5fZXZl
bnRfY2hhbm5lbChwb3J0LCBjb25kaXRpb24pICAgICAgICAgICAgICAgICAg
ICAgIFwKICAgICBkbyB7ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKQEAgLTE1Nywx
OSArMTc1LDI0IEBAIGludCBldnRjaG5fcmVzZXQoc3RydWN0IGRvbWFpbiAq
ZCk7CiAKIC8qCiAgKiBMb3ctbGV2ZWwgZXZlbnQgY2hhbm5lbCBwb3J0IG9w
cy4KKyAqCisgKiBBbGwgaG9va3MgaGF2ZSB0byBiZSBjYWxsZWQgd2l0aCBh
IGxvY2sgaGVsZCB3aGljaCBwcmV2ZW50cyB0aGUgY2hhbm5lbAorICogZnJv
bSBjaGFuZ2luZyBzdGF0ZS4gVGhpcyBtYXkgYmUgdGhlIGRvbWFpbiBldmVu
dCBsb2NrLCB0aGUgcGVyLWNoYW5uZWwKKyAqIGxvY2ssIG9yIGluIHRoZSBj
YXNlIG9mIHNlbmRpbmcgaW50ZXJkb21haW4gZXZlbnRzIGFsc28gdGhlIG90
aGVyIHNpZGUncworICogcGVyLWNoYW5uZWwgbG9jay4gRXhjZXB0aW9ucyBh
cHBseSBpbiBjZXJ0YWluIGNhc2VzIGZvciB0aGUgUFYgc2hpbS4KICAqLwog
c3RydWN0IGV2dGNobl9wb3J0X29wcyB7CiAgICAgdm9pZCAoKmluaXQpKHN0
cnVjdCBkb21haW4gKmQsIHN0cnVjdCBldnRjaG4gKmV2dGNobik7CiAgICAg
dm9pZCAoKnNldF9wZW5kaW5nKShzdHJ1Y3QgdmNwdSAqdiwgc3RydWN0IGV2
dGNobiAqZXZ0Y2huKTsKICAgICB2b2lkICgqY2xlYXJfcGVuZGluZykoc3Ry
dWN0IGRvbWFpbiAqZCwgc3RydWN0IGV2dGNobiAqZXZ0Y2huKTsKICAgICB2
b2lkICgqdW5tYXNrKShzdHJ1Y3QgZG9tYWluICpkLCBzdHJ1Y3QgZXZ0Y2hu
ICpldnRjaG4pOwotICAgIGJvb2wgKCppc19wZW5kaW5nKShjb25zdCBzdHJ1
Y3QgZG9tYWluICpkLCBldnRjaG5fcG9ydF90IHBvcnQpOwotICAgIGJvb2wg
KCppc19tYXNrZWQpKGNvbnN0IHN0cnVjdCBkb21haW4gKmQsIGV2dGNobl9w
b3J0X3QgcG9ydCk7CisgICAgYm9vbCAoKmlzX3BlbmRpbmcpKGNvbnN0IHN0
cnVjdCBkb21haW4gKmQsIGNvbnN0IHN0cnVjdCBldnRjaG4gKmV2dGNobik7
CisgICAgYm9vbCAoKmlzX21hc2tlZCkoY29uc3Qgc3RydWN0IGRvbWFpbiAq
ZCwgY29uc3Qgc3RydWN0IGV2dGNobiAqZXZ0Y2huKTsKICAgICAvKgogICAg
ICAqIElzIHRoZSBwb3J0IHVuYXZhaWxhYmxlIGJlY2F1c2UgaXQncyBzdGls
bCBiZWluZyBjbGVhbmVkIHVwCiAgICAgICogYWZ0ZXIgYmVpbmcgY2xvc2Vk
PwogICAgICAqLwotICAgIGJvb2wgKCppc19idXN5KShjb25zdCBzdHJ1Y3Qg
ZG9tYWluICpkLCBldnRjaG5fcG9ydF90IHBvcnQpOworICAgIGJvb2wgKCpp
c19idXN5KShjb25zdCBzdHJ1Y3QgZG9tYWluICpkLCBjb25zdCBzdHJ1Y3Qg
ZXZ0Y2huICpldnRjaG4pOwogICAgIGludCAoKnNldF9wcmlvcml0eSkoc3Ry
dWN0IGRvbWFpbiAqZCwgc3RydWN0IGV2dGNobiAqZXZ0Y2huLAogICAgICAg
ICAgICAgICAgICAgICAgICAgdW5zaWduZWQgaW50IHByaW9yaXR5KTsKICAg
ICB2b2lkICgqcHJpbnRfc3RhdGUpKHN0cnVjdCBkb21haW4gKmQsIGNvbnN0
IHN0cnVjdCBldnRjaG4gKmV2dGNobik7CkBAIC0xODUsMzggKzIwOCw2NyBA
QCBzdGF0aWMgaW5saW5lIHZvaWQgZXZ0Y2huX3BvcnRfc2V0X3BlbmRpCiAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgdW5z
aWduZWQgaW50IHZjcHVfaWQsCiAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgc3RydWN0IGV2dGNobiAqZXZ0Y2huKQogewot
ICAgIGQtPmV2dGNobl9wb3J0X29wcy0+c2V0X3BlbmRpbmcoZC0+dmNwdVt2
Y3B1X2lkXSwgZXZ0Y2huKTsKKyAgICBpZiAoIGV2dGNobl91c2FibGUoZXZ0
Y2huKSApCisgICAgICAgIGQtPmV2dGNobl9wb3J0X29wcy0+c2V0X3BlbmRp
bmcoZC0+dmNwdVt2Y3B1X2lkXSwgZXZ0Y2huKTsKIH0KIAogc3RhdGljIGlu
bGluZSB2b2lkIGV2dGNobl9wb3J0X2NsZWFyX3BlbmRpbmcoc3RydWN0IGRv
bWFpbiAqZCwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIHN0cnVjdCBldnRjaG4gKmV2dGNobikKIHsKLSAgICBkLT5l
dnRjaG5fcG9ydF9vcHMtPmNsZWFyX3BlbmRpbmcoZCwgZXZ0Y2huKTsKKyAg
ICBpZiAoIGV2dGNobl91c2FibGUoZXZ0Y2huKSApCisgICAgICAgIGQtPmV2
dGNobl9wb3J0X29wcy0+Y2xlYXJfcGVuZGluZyhkLCBldnRjaG4pOwogfQog
CiBzdGF0aWMgaW5saW5lIHZvaWQgZXZ0Y2huX3BvcnRfdW5tYXNrKHN0cnVj
dCBkb21haW4gKmQsCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIHN0cnVjdCBldnRjaG4gKmV2dGNobikKIHsKLSAgICBkLT5ldnRj
aG5fcG9ydF9vcHMtPnVubWFzayhkLCBldnRjaG4pOworICAgIGlmICggZXZ0
Y2huX3VzYWJsZShldnRjaG4pICkKKyAgICAgICAgZC0+ZXZ0Y2huX3BvcnRf
b3BzLT51bm1hc2soZCwgZXZ0Y2huKTsKIH0KIAotc3RhdGljIGlubGluZSBi
b29sIGV2dGNobl9wb3J0X2lzX3BlbmRpbmcoY29uc3Qgc3RydWN0IGRvbWFp
biAqZCwKLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIGV2dGNobl9wb3J0X3QgcG9ydCkKK3N0YXRpYyBpbmxpbmUgYm9vbCBl
dnRjaG5faXNfcGVuZGluZyhjb25zdCBzdHJ1Y3QgZG9tYWluICpkLAorICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGNvbnN0IHN0cnVj
dCBldnRjaG4gKmV2dGNobikKIHsKLSAgICByZXR1cm4gZC0+ZXZ0Y2huX3Bv
cnRfb3BzLT5pc19wZW5kaW5nKGQsIHBvcnQpOworICAgIHJldHVybiBldnRj
aG5fdXNhYmxlKGV2dGNobikgJiYgZC0+ZXZ0Y2huX3BvcnRfb3BzLT5pc19w
ZW5kaW5nKGQsIGV2dGNobik7CiB9CiAKLXN0YXRpYyBpbmxpbmUgYm9vbCBl
dnRjaG5fcG9ydF9pc19tYXNrZWQoY29uc3Qgc3RydWN0IGRvbWFpbiAqZCwK
LSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgZXZ0
Y2huX3BvcnRfdCBwb3J0KQorc3RhdGljIGlubGluZSBib29sIGV2dGNobl9w
b3J0X2lzX3BlbmRpbmcoc3RydWN0IGRvbWFpbiAqZCwgZXZ0Y2huX3BvcnRf
dCBwb3J0KQogewotICAgIHJldHVybiBkLT5ldnRjaG5fcG9ydF9vcHMtPmlz
X21hc2tlZChkLCBwb3J0KTsKKyAgICBzdHJ1Y3QgZXZ0Y2huICpldnRjaG4g
PSBldnRjaG5fZnJvbV9wb3J0KGQsIHBvcnQpOworICAgIGJvb2wgcmM7Cisg
ICAgdW5zaWduZWQgbG9uZyBmbGFnczsKKworICAgIHNwaW5fbG9ja19pcnFz
YXZlKCZldnRjaG4tPmxvY2ssIGZsYWdzKTsKKyAgICByYyA9IGV2dGNobl9p
c19wZW5kaW5nKGQsIGV2dGNobik7CisgICAgc3Bpbl91bmxvY2tfaXJxcmVz
dG9yZSgmZXZ0Y2huLT5sb2NrLCBmbGFncyk7CisKKyAgICByZXR1cm4gcmM7
Cit9CisKK3N0YXRpYyBpbmxpbmUgYm9vbCBldnRjaG5faXNfbWFza2VkKGNv
bnN0IHN0cnVjdCBkb21haW4gKmQsCisgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBjb25zdCBzdHJ1Y3QgZXZ0Y2huICpldnRjaG4pCit7
CisgICAgcmV0dXJuICFldnRjaG5fdXNhYmxlKGV2dGNobikgfHwgZC0+ZXZ0
Y2huX3BvcnRfb3BzLT5pc19tYXNrZWQoZCwgZXZ0Y2huKTsKK30KKworc3Rh
dGljIGlubGluZSBib29sIGV2dGNobl9wb3J0X2lzX21hc2tlZChzdHJ1Y3Qg
ZG9tYWluICpkLCBldnRjaG5fcG9ydF90IHBvcnQpCit7CisgICAgc3RydWN0
IGV2dGNobiAqZXZ0Y2huID0gZXZ0Y2huX2Zyb21fcG9ydChkLCBwb3J0KTsK
KyAgICBib29sIHJjOworICAgIHVuc2lnbmVkIGxvbmcgZmxhZ3M7CisKKyAg
ICBzcGluX2xvY2tfaXJxc2F2ZSgmZXZ0Y2huLT5sb2NrLCBmbGFncyk7Cisg
ICAgcmMgPSBldnRjaG5faXNfbWFza2VkKGQsIGV2dGNobik7CisgICAgc3Bp
bl91bmxvY2tfaXJxcmVzdG9yZSgmZXZ0Y2huLT5sb2NrLCBmbGFncyk7CisK
KyAgICByZXR1cm4gcmM7CiB9CiAKLXN0YXRpYyBpbmxpbmUgYm9vbCBldnRj
aG5fcG9ydF9pc19idXN5KGNvbnN0IHN0cnVjdCBkb21haW4gKmQsCi0gICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBldnRjaG5fcG9y
dF90IHBvcnQpCitzdGF0aWMgaW5saW5lIGJvb2wgZXZ0Y2huX2lzX2J1c3ko
Y29uc3Qgc3RydWN0IGRvbWFpbiAqZCwKKyAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBjb25zdCBzdHJ1Y3QgZXZ0Y2huICpldnRjaG4pCiB7
CiAgICAgcmV0dXJuIGQtPmV2dGNobl9wb3J0X29wcy0+aXNfYnVzeSAmJgot
ICAgICAgICAgICBkLT5ldnRjaG5fcG9ydF9vcHMtPmlzX2J1c3koZCwgcG9y
dCk7CisgICAgICAgICAgIGQtPmV2dGNobl9wb3J0X29wcy0+aXNfYnVzeShk
LCBldnRjaG4pOwogfQogCiBzdGF0aWMgaW5saW5lIGludCBldnRjaG5fcG9y
dF9zZXRfcHJpb3JpdHkoc3RydWN0IGRvbWFpbiAqZCwKQEAgLTIyNSw2ICsy
NzcsOCBAQCBzdGF0aWMgaW5saW5lIGludCBldnRjaG5fcG9ydF9zZXRfcHJp
b3JpCiB7CiAgICAgaWYgKCAhZC0+ZXZ0Y2huX3BvcnRfb3BzLT5zZXRfcHJp
b3JpdHkgKQogICAgICAgICByZXR1cm4gLUVOT1NZUzsKKyAgICBpZiAoICFl
dnRjaG5fdXNhYmxlKGV2dGNobikgKQorICAgICAgICByZXR1cm4gLUVBQ0NF
UzsKICAgICByZXR1cm4gZC0+ZXZ0Y2huX3BvcnRfb3BzLT5zZXRfcHJpb3Jp
dHkoZCwgZXZ0Y2huLCBwcmlvcml0eSk7CiB9CiAK

--=separator
Content-Type: application/octet-stream; name="xsa343/xsa343-4.11-1.patch"
Content-Disposition: attachment; filename="xsa343/xsa343-4.11-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBldnRjaG46IGV2dGNobl9yZXNldCgpIHNob3VsZG4ndCBzdWNjZWVkIHdp
dGggc3RpbGwtb3BlbiBwb3J0cwoKV2hpbGUgdGhlIGZ1bmN0aW9uIGNsb3Nl
cyBhbGwgcG9ydHMsIGl0IGRvZXMgc28gd2l0aG91dCBob2xkaW5nIGFueQps
b2NrLCBhbmQgaGVuY2UgcmFjaW5nIHJlcXVlc3RzIG1heSBiZSBpc3N1ZWQg
Y2F1c2luZyBuZXcgcG9ydHMgdG8gZ2V0Cm9wZW5lZC4gVGhpcyB3b3VsZCBo
YXZlIGJlZW4gcHJvYmxlbWF0aWMgaW4gcGFydGljdWxhciBpZiBzdWNoIGEg
bmV3bHkKb3BlbmVkIHBvcnQgaGFkIGEgcG9ydCBudW1iZXIgYWJvdmUgdGhl
IG5ldyBpbXBsZW1lbnRhdGlvbiBsaW1pdCAoaS5lLgp3aGVuIHN3aXRjaGlu
ZyBmcm9tIEZJRk8gdG8gMi1sZXZlbCkgYWZ0ZXIgdGhlIHJlc2V0LCBhcyBw
cmlvciB0bwoiZXZ0Y2huOiByZWxheCBwb3J0X2lzX3ZhbGlkKCkiIHRoaXMg
Y291bGQgaGF2ZSBsZWQgdG8gZS5nLgpldnRjaG5fY2xvc2UoKSdzICJCVUdf
T04oIXBvcnRfaXNfdmFsaWQoZDIsIHBvcnQyKSkiIHRvIHRyaWdnZXIuCgpJ
bnRyb2R1Y2UgYSBjb3VudGVyIG9mIGFjdGl2ZSBwb3J0cyBhbmQgY2hlY2sg
dGhhdCBpdCdzIChzdGlsbCkgbm8KbGFyZ2VyIHRoZW4gdGhlIG51bWJlciBv
ZiBYZW4gaW50ZXJuYWxseSB1c2VkIG9uZXMgYWZ0ZXIgb2J0YWluaW5nIHRo
ZQpuZWNlc3NhcnkgbG9jayBpbiBldnRjaG5fcmVzZXQoKS4KCkFzIHRvIHRo
ZSBhY2Nlc3MgbW9kZWwgb2YgdGhlIG5ldyB7YWN0aXZlLHhlbn1fZXZ0Y2hu
cyBmaWVsZHMgLSB3aGlsZQphbGwgd3JpdGVzIGdldCBkb25lIHVzaW5nIHdy
aXRlX2F0b21pYygpLCByZWFkcyBvdWdodCB0byB1c2UKcmVhZF9hdG9taWMo
KSBvbmx5IHdoZW4gb3V0c2lkZSBvZiBhIHN1aXRhYmx5IGxvY2tlZCByZWdp
b24uCgpOb3RlIHRoYXQgYXMgb2Ygbm93IGV2dGNobl9iaW5kX3ZpcnEoKSBh
bmQgZXZ0Y2huX2JpbmRfaXBpKCkgZG9uJ3QgaGF2ZQphIG5lZWQgdG8gY2Fs
bCBjaGVja19mcmVlX3BvcnQoKS4KClRoaXMgaXMgcGFydCBvZiBYU0EtMzQz
LgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2Uu
Y29tPgpSZXZpZXdlZC1ieTogU3RlZmFubyBTdGFiZWxsaW5pIDxzc3RhYmVs
bGluaUBrZXJuZWwub3JnPgpSZXZpZXdlZC1ieTogSnVsaWVuIEdyYWxsIDxq
Z3JhbGxAYW1hem9uLmNvbT4KCi0tLSBhL3hlbi9jb21tb24vZXZlbnRfY2hh
bm5lbC5jCisrKyBiL3hlbi9jb21tb24vZXZlbnRfY2hhbm5lbC5jCkBAIC0x
ODgsNiArMTg4LDggQEAgaW50IGV2dGNobl9hbGxvY2F0ZV9wb3J0KHN0cnVj
dCBkb21haW4gKgogICAgICAgICB3cml0ZV9hdG9taWMoJmQtPnZhbGlkX2V2
dGNobnMsIGQtPnZhbGlkX2V2dGNobnMgKyBFVlRDSE5TX1BFUl9CVUNLRVQp
OwogICAgIH0KIAorICAgIHdyaXRlX2F0b21pYygmZC0+YWN0aXZlX2V2dGNo
bnMsIGQtPmFjdGl2ZV9ldnRjaG5zICsgMSk7CisKICAgICByZXR1cm4gMDsK
IH0KIApAQCAtMjExLDExICsyMTMsMjYgQEAgc3RhdGljIGludCBnZXRfZnJl
ZV9wb3J0KHN0cnVjdCBkb21haW4gKgogICAgIHJldHVybiAtRU5PU1BDOwog
fQogCisvKgorICogQ2hlY2sgd2hldGhlciBhIHBvcnQgaXMgc3RpbGwgbWFy
a2VkIGZyZWUsIGFuZCBpZiBzbyB1cGRhdGUgdGhlIGRvbWFpbgorICogY291
bnRlciBhY2NvcmRpbmdseS4gIFRvIGJlIHVzZWQgb24gZnVuY3Rpb24gZXhp
dCBwYXRocy4KKyAqLworc3RhdGljIHZvaWQgY2hlY2tfZnJlZV9wb3J0KHN0
cnVjdCBkb21haW4gKmQsIGV2dGNobl9wb3J0X3QgcG9ydCkKK3sKKyAgICBp
ZiAoIHBvcnRfaXNfdmFsaWQoZCwgcG9ydCkgJiYKKyAgICAgICAgIGV2dGNo
bl9mcm9tX3BvcnQoZCwgcG9ydCktPnN0YXRlID09IEVDU19GUkVFICkKKyAg
ICAgICAgd3JpdGVfYXRvbWljKCZkLT5hY3RpdmVfZXZ0Y2hucywgZC0+YWN0
aXZlX2V2dGNobnMgLSAxKTsKK30KKwogdm9pZCBldnRjaG5fZnJlZShzdHJ1
Y3QgZG9tYWluICpkLCBzdHJ1Y3QgZXZ0Y2huICpjaG4pCiB7CiAgICAgLyog
Q2xlYXIgcGVuZGluZyBldmVudCB0byBhdm9pZCB1bmV4cGVjdGVkIGJlaGF2
aW9yIG9uIHJlLWJpbmQuICovCiAgICAgZXZ0Y2huX3BvcnRfY2xlYXJfcGVu
ZGluZyhkLCBjaG4pOwogCisgICAgaWYgKCBjb25zdW1lcl9pc194ZW4oY2hu
KSApCisgICAgICAgIHdyaXRlX2F0b21pYygmZC0+eGVuX2V2dGNobnMsIGQt
Pnhlbl9ldnRjaG5zIC0gMSk7CisgICAgd3JpdGVfYXRvbWljKCZkLT5hY3Rp
dmVfZXZ0Y2hucywgZC0+YWN0aXZlX2V2dGNobnMgLSAxKTsKKwogICAgIC8q
IFJlc2V0IGJpbmRpbmcgdG8gdmNwdTAgd2hlbiB0aGUgY2hhbm5lbCBpcyBm
cmVlZC4gKi8KICAgICBjaG4tPnN0YXRlICAgICAgICAgID0gRUNTX0ZSRUU7
CiAgICAgY2huLT5ub3RpZnlfdmNwdV9pZCA9IDA7CkBAIC0yNTgsNiArMjc1
LDcgQEAgc3RhdGljIGxvbmcgZXZ0Y2huX2FsbG9jX3VuYm91bmQoZXZ0Y2hu
XwogICAgIGFsbG9jLT5wb3J0ID0gcG9ydDsKIAogIG91dDoKKyAgICBjaGVj
a19mcmVlX3BvcnQoZCwgcG9ydCk7CiAgICAgc3Bpbl91bmxvY2soJmQtPmV2
ZW50X2xvY2spOwogICAgIHJjdV91bmxvY2tfZG9tYWluKGQpOwogCkBAIC0z
NTEsNiArMzY5LDcgQEAgc3RhdGljIGxvbmcgZXZ0Y2huX2JpbmRfaW50ZXJk
b21haW4oZXZ0YwogICAgIGJpbmQtPmxvY2FsX3BvcnQgPSBscG9ydDsKIAog
IG91dDoKKyAgICBjaGVja19mcmVlX3BvcnQobGQsIGxwb3J0KTsKICAgICBz
cGluX3VubG9jaygmbGQtPmV2ZW50X2xvY2spOwogICAgIGlmICggbGQgIT0g
cmQgKQogICAgICAgICBzcGluX3VubG9jaygmcmQtPmV2ZW50X2xvY2spOwpA
QCAtNDg0LDcgKzUwMyw3IEBAIHN0YXRpYyBsb25nIGV2dGNobl9iaW5kX3Bp
cnEoZXZ0Y2huX2JpbmQKICAgICBzdHJ1Y3QgZG9tYWluICpkID0gY3VycmVu
dC0+ZG9tYWluOwogICAgIHN0cnVjdCB2Y3B1ICAgKnYgPSBkLT52Y3B1WzBd
OwogICAgIHN0cnVjdCBwaXJxICAgKmluZm87Ci0gICAgaW50ICAgICAgICAg
ICAgcG9ydCwgcGlycSA9IGJpbmQtPnBpcnE7CisgICAgaW50ICAgICAgICAg
ICAgcG9ydCA9IDAsIHBpcnEgPSBiaW5kLT5waXJxOwogICAgIGxvbmcgICAg
ICAgICAgIHJjOwogCiAgICAgaWYgKCAocGlycSA8IDApIHx8IChwaXJxID49
IGQtPm5yX3BpcnFzKSApCkBAIC01MzIsNiArNTUxLDcgQEAgc3RhdGljIGxv
bmcgZXZ0Y2huX2JpbmRfcGlycShldnRjaG5fYmluZAogICAgIGFyY2hfZXZ0
Y2huX2JpbmRfcGlycShkLCBwaXJxKTsKIAogIG91dDoKKyAgICBjaGVja19m
cmVlX3BvcnQoZCwgcG9ydCk7CiAgICAgc3Bpbl91bmxvY2soJmQtPmV2ZW50
X2xvY2spOwogCiAgICAgcmV0dXJuIHJjOwpAQCAtMTAwNSwxMCArMTAyNSwx
MCBAQCBpbnQgZXZ0Y2huX3VubWFzayh1bnNpZ25lZCBpbnQgcG9ydCkKICAg
ICByZXR1cm4gMDsKIH0KIAotCiBpbnQgZXZ0Y2huX3Jlc2V0KHN0cnVjdCBk
b21haW4gKmQpCiB7CiAgICAgdW5zaWduZWQgaW50IGk7CisgICAgaW50IHJj
ID0gMDsKIAogICAgIGlmICggZCAhPSBjdXJyZW50LT5kb21haW4gJiYgIWQt
PmNvbnRyb2xsZXJfcGF1c2VfY291bnQgKQogICAgICAgICByZXR1cm4gLUVJ
TlZBTDsKQEAgLTEwMTgsNyArMTAzOCw5IEBAIGludCBldnRjaG5fcmVzZXQo
c3RydWN0IGRvbWFpbiAqZCkKIAogICAgIHNwaW5fbG9jaygmZC0+ZXZlbnRf
bG9jayk7CiAKLSAgICBpZiAoIGQtPmV2dGNobl9maWZvICkKKyAgICBpZiAo
IGQtPmFjdGl2ZV9ldnRjaG5zID4gZC0+eGVuX2V2dGNobnMgKQorICAgICAg
ICByYyA9IC1FQUdBSU47CisgICAgZWxzZSBpZiAoIGQtPmV2dGNobl9maWZv
ICkKICAgICB7CiAgICAgICAgIC8qIFN3aXRjaGluZyBiYWNrIHRvIDItbGV2
ZWwgQUJJLiAqLwogICAgICAgICBldnRjaG5fZmlmb19kZXN0cm95KGQpOwpA
QCAtMTAyNyw3ICsxMDQ5LDcgQEAgaW50IGV2dGNobl9yZXNldChzdHJ1Y3Qg
ZG9tYWluICpkKQogCiAgICAgc3Bpbl91bmxvY2soJmQtPmV2ZW50X2xvY2sp
OwogCi0gICAgcmV0dXJuIDA7CisgICAgcmV0dXJuIHJjOwogfQogCiBzdGF0
aWMgbG9uZyBldnRjaG5fc2V0X3ByaW9yaXR5KGNvbnN0IHN0cnVjdCBldnRj
aG5fc2V0X3ByaW9yaXR5ICpzZXRfcHJpb3JpdHkpCkBAIC0xMjEzLDEwICsx
MjM1LDkgQEAgaW50IGFsbG9jX3VuYm91bmRfeGVuX2V2ZW50X2NoYW5uZWwo
CiAKICAgICBzcGluX2xvY2soJmxkLT5ldmVudF9sb2NrKTsKIAotICAgIHJj
ID0gZ2V0X2ZyZWVfcG9ydChsZCk7CisgICAgcG9ydCA9IHJjID0gZ2V0X2Zy
ZWVfcG9ydChsZCk7CiAgICAgaWYgKCByYyA8IDAgKQogICAgICAgICBnb3Rv
IG91dDsKLSAgICBwb3J0ID0gcmM7CiAgICAgY2huID0gZXZ0Y2huX2Zyb21f
cG9ydChsZCwgcG9ydCk7CiAKICAgICByYyA9IHhzbV9ldnRjaG5fdW5ib3Vu
ZChYU01fVEFSR0VULCBsZCwgY2huLCByZW1vdGVfZG9taWQpOwpAQCAtMTIz
Miw3ICsxMjUzLDEwIEBAIGludCBhbGxvY191bmJvdW5kX3hlbl9ldmVudF9j
aGFubmVsKAogCiAgICAgc3Bpbl91bmxvY2soJmNobi0+bG9jayk7CiAKKyAg
ICB3cml0ZV9hdG9taWMoJmxkLT54ZW5fZXZ0Y2hucywgbGQtPnhlbl9ldnRj
aG5zICsgMSk7CisKICBvdXQ6CisgICAgY2hlY2tfZnJlZV9wb3J0KGxkLCBw
b3J0KTsKICAgICBzcGluX3VubG9jaygmbGQtPmV2ZW50X2xvY2spOwogCiAg
ICAgcmV0dXJuIHJjIDwgMCA/IHJjIDogcG9ydDsKQEAgLTEzMDgsNiArMTMz
Miw3IEBAIGludCBldnRjaG5faW5pdChzdHJ1Y3QgZG9tYWluICpkKQogICAg
ICAgICByZXR1cm4gLUVJTlZBTDsKICAgICB9CiAgICAgZXZ0Y2huX2Zyb21f
cG9ydChkLCAwKS0+c3RhdGUgPSBFQ1NfUkVTRVJWRUQ7CisgICAgd3JpdGVf
YXRvbWljKCZkLT5hY3RpdmVfZXZ0Y2hucywgMCk7CiAKICNpZiBNQVhfVklS
VF9DUFVTID4gQklUU19QRVJfTE9ORwogICAgIGQtPnBvbGxfbWFzayA9IHh6
YWxsb2NfYXJyYXkodW5zaWduZWQgbG9uZywKQEAgLTEzMzUsNiArMTM2MCw4
IEBAIHZvaWQgZXZ0Y2huX2Rlc3Ryb3koc3RydWN0IGRvbWFpbiAqZCkKICAg
ICBmb3IgKCBpID0gMDsgcG9ydF9pc192YWxpZChkLCBpKTsgaSsrICkKICAg
ICAgICAgZXZ0Y2huX2Nsb3NlKGQsIGksIDApOwogCisgICAgQVNTRVJUKCFk
LT5hY3RpdmVfZXZ0Y2hucyk7CisKICAgICBjbGVhcl9nbG9iYWxfdmlycV9o
YW5kbGVycyhkKTsKIAogICAgIGV2dGNobl9maWZvX2Rlc3Ryb3koZCk7Ci0t
LSBhL3hlbi9pbmNsdWRlL3hlbi9zY2hlZC5oCisrKyBiL3hlbi9pbmNsdWRl
L3hlbi9zY2hlZC5oCkBAIC0zNDUsNiArMzQ1LDE2IEBAIHN0cnVjdCBkb21h
aW4KICAgICBzdHJ1Y3QgZXZ0Y2huICAqKmV2dGNobl9ncm91cFtOUl9FVlRD
SE5fR1JPVVBTXTsgLyogYWxsIG90aGVyIGJ1Y2tldHMgKi8KICAgICB1bnNp
Z25lZCBpbnQgICAgIG1heF9ldnRjaG5fcG9ydDsgLyogbWF4IHBlcm1pdHRl
ZCBwb3J0IG51bWJlciAqLwogICAgIHVuc2lnbmVkIGludCAgICAgdmFsaWRf
ZXZ0Y2huczsgICAvKiBudW1iZXIgb2YgYWxsb2NhdGVkIGV2ZW50IGNoYW5u
ZWxzICovCisgICAgLyoKKyAgICAgKiBOdW1iZXIgb2YgaW4tdXNlIGV2ZW50
IGNoYW5uZWxzLiAgV3JpdGVycyBzaG91bGQgdXNlIHdyaXRlX2F0b21pYygp
LgorICAgICAqIFJlYWRlcnMgbmVlZCB0byB1c2UgcmVhZF9hdG9taWMoKSBv
bmx5IHdoZW4gbm90IGhvbGRpbmcgZXZlbnRfbG9jay4KKyAgICAgKi8KKyAg
ICB1bnNpZ25lZCBpbnQgICAgIGFjdGl2ZV9ldnRjaG5zOworICAgIC8qCisg
ICAgICogTnVtYmVyIG9mIGV2ZW50IGNoYW5uZWxzIHVzZWQgaW50ZXJuYWxs
eSBieSBYZW4gKG5vdCBzdWJqZWN0IHRvCisgICAgICogRVZUQ0hOT1BfcmVz
ZXQpLiAgUmVhZC93cml0ZSBhY2Nlc3MgbGlrZSBmb3IgYWN0aXZlX2V2dGNo
bnMuCisgICAgICovCisgICAgdW5zaWduZWQgaW50ICAgICB4ZW5fZXZ0Y2hu
czsKICAgICBzcGlubG9ja190ICAgICAgIGV2ZW50X2xvY2s7CiAgICAgY29u
c3Qgc3RydWN0IGV2dGNobl9wb3J0X29wcyAqZXZ0Y2huX3BvcnRfb3BzOwog
ICAgIHN0cnVjdCBldnRjaG5fZmlmb19kb21haW4gKmV2dGNobl9maWZvOwo=

--=separator
Content-Type: application/octet-stream; name="xsa343/xsa343-4.11-2.patch"
Content-Disposition: attachment; filename="xsa343/xsa343-4.11-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBldnRjaG46IGNvbnZlcnQgcGVyLWNoYW5uZWwgbG9jayB0byBiZSBJUlEt
c2FmZQoKLi4uIGluIG9yZGVyIGZvciBzZW5kX2d1ZXN0X3tnbG9iYWwsdmNw
dX1fdmlycSgpIHRvIGJlIGFibGUgdG8gbWFrZSB1c2UKb2YgaXQuCgpUaGlz
IGlzIHBhcnQgb2YgWFNBLTM0My4KClNpZ25lZC1vZmYtYnk6IEphbiBCZXVs
aWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KQWNrZWQtYnk6IEp1bGllbiBHcmFs
bCA8amdyYWxsQGFtYXpvbi5jb20+CgotLS0gYS94ZW4vY29tbW9uL2V2ZW50
X2NoYW5uZWwuYworKysgYi94ZW4vY29tbW9uL2V2ZW50X2NoYW5uZWwuYwpA
QCAtMjQ4LDYgKzI0OCw3IEBAIHN0YXRpYyBsb25nIGV2dGNobl9hbGxvY191
bmJvdW5kKGV2dGNobl8KICAgICBpbnQgICAgICAgICAgICBwb3J0OwogICAg
IGRvbWlkX3QgICAgICAgIGRvbSA9IGFsbG9jLT5kb207CiAgICAgbG9uZyAg
ICAgICAgICAgcmM7CisgICAgdW5zaWduZWQgbG9uZyAgZmxhZ3M7CiAKICAg
ICBkID0gcmN1X2xvY2tfZG9tYWluX2J5X2FueV9pZChkb20pOwogICAgIGlm
ICggZCA9PSBOVUxMICkKQEAgLTI2MywxNCArMjY0LDE0IEBAIHN0YXRpYyBs
b25nIGV2dGNobl9hbGxvY191bmJvdW5kKGV2dGNobl8KICAgICBpZiAoIHJj
ICkKICAgICAgICAgZ290byBvdXQ7CiAKLSAgICBzcGluX2xvY2soJmNobi0+
bG9jayk7CisgICAgc3Bpbl9sb2NrX2lycXNhdmUoJmNobi0+bG9jaywgZmxh
Z3MpOwogCiAgICAgY2huLT5zdGF0ZSA9IEVDU19VTkJPVU5EOwogICAgIGlm
ICggKGNobi0+dS51bmJvdW5kLnJlbW90ZV9kb21pZCA9IGFsbG9jLT5yZW1v
dGVfZG9tKSA9PSBET01JRF9TRUxGICkKICAgICAgICAgY2huLT51LnVuYm91
bmQucmVtb3RlX2RvbWlkID0gY3VycmVudC0+ZG9tYWluLT5kb21haW5faWQ7
CiAgICAgZXZ0Y2huX3BvcnRfaW5pdChkLCBjaG4pOwogCi0gICAgc3Bpbl91
bmxvY2soJmNobi0+bG9jayk7CisgICAgc3Bpbl91bmxvY2tfaXJxcmVzdG9y
ZSgmY2huLT5sb2NrLCBmbGFncyk7CiAKICAgICBhbGxvYy0+cG9ydCA9IHBv
cnQ7CiAKQEAgLTI4MywyNiArMjg0LDMyIEBAIHN0YXRpYyBsb25nIGV2dGNo
bl9hbGxvY191bmJvdW5kKGV2dGNobl8KIH0KIAogCi1zdGF0aWMgdm9pZCBk
b3VibGVfZXZ0Y2huX2xvY2soc3RydWN0IGV2dGNobiAqbGNobiwgc3RydWN0
IGV2dGNobiAqcmNobikKK3N0YXRpYyB1bnNpZ25lZCBsb25nIGRvdWJsZV9l
dnRjaG5fbG9jayhzdHJ1Y3QgZXZ0Y2huICpsY2huLAorICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIHN0cnVjdCBldnRjaG4gKnJj
aG4pCiB7Ci0gICAgaWYgKCBsY2huIDwgcmNobiApCisgICAgdW5zaWduZWQg
bG9uZyBmbGFnczsKKworICAgIGlmICggbGNobiA8PSByY2huICkKICAgICB7
Ci0gICAgICAgIHNwaW5fbG9jaygmbGNobi0+bG9jayk7Ci0gICAgICAgIHNw
aW5fbG9jaygmcmNobi0+bG9jayk7CisgICAgICAgIHNwaW5fbG9ja19pcnFz
YXZlKCZsY2huLT5sb2NrLCBmbGFncyk7CisgICAgICAgIGlmICggbGNobiAh
PSByY2huICkKKyAgICAgICAgICAgIHNwaW5fbG9jaygmcmNobi0+bG9jayk7
CiAgICAgfQogICAgIGVsc2UKICAgICB7Ci0gICAgICAgIGlmICggbGNobiAh
PSByY2huICkKLSAgICAgICAgICAgIHNwaW5fbG9jaygmcmNobi0+bG9jayk7
CisgICAgICAgIHNwaW5fbG9ja19pcnFzYXZlKCZyY2huLT5sb2NrLCBmbGFn
cyk7CiAgICAgICAgIHNwaW5fbG9jaygmbGNobi0+bG9jayk7CiAgICAgfQor
CisgICAgcmV0dXJuIGZsYWdzOwogfQogCi1zdGF0aWMgdm9pZCBkb3VibGVf
ZXZ0Y2huX3VubG9jayhzdHJ1Y3QgZXZ0Y2huICpsY2huLCBzdHJ1Y3QgZXZ0
Y2huICpyY2huKQorc3RhdGljIHZvaWQgZG91YmxlX2V2dGNobl91bmxvY2so
c3RydWN0IGV2dGNobiAqbGNobiwgc3RydWN0IGV2dGNobiAqcmNobiwKKyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHVuc2lnbmVkIGxvbmcg
ZmxhZ3MpCiB7Ci0gICAgc3Bpbl91bmxvY2soJmxjaG4tPmxvY2spOwogICAg
IGlmICggbGNobiAhPSByY2huICkKLSAgICAgICAgc3Bpbl91bmxvY2soJnJj
aG4tPmxvY2spOworICAgICAgICBzcGluX3VubG9jaygmbGNobi0+bG9jayk7
CisgICAgc3Bpbl91bmxvY2tfaXJxcmVzdG9yZSgmcmNobi0+bG9jaywgZmxh
Z3MpOwogfQogCiBzdGF0aWMgbG9uZyBldnRjaG5fYmluZF9pbnRlcmRvbWFp
bihldnRjaG5fYmluZF9pbnRlcmRvbWFpbl90ICpiaW5kKQpAQCAtMzEyLDYg
KzMxOSw3IEBAIHN0YXRpYyBsb25nIGV2dGNobl9iaW5kX2ludGVyZG9tYWlu
KGV2dGMKICAgICBpbnQgICAgICAgICAgICBscG9ydCwgcnBvcnQgPSBiaW5k
LT5yZW1vdGVfcG9ydDsKICAgICBkb21pZF90ICAgICAgICByZG9tID0gYmlu
ZC0+cmVtb3RlX2RvbTsKICAgICBsb25nICAgICAgICAgICByYzsKKyAgICB1
bnNpZ25lZCBsb25nICBmbGFnczsKIAogICAgIGlmICggcmRvbSA9PSBET01J
RF9TRUxGICkKICAgICAgICAgcmRvbSA9IGN1cnJlbnQtPmRvbWFpbi0+ZG9t
YWluX2lkOwpAQCAtMzQ3LDcgKzM1NSw3IEBAIHN0YXRpYyBsb25nIGV2dGNo
bl9iaW5kX2ludGVyZG9tYWluKGV2dGMKICAgICBpZiAoIHJjICkKICAgICAg
ICAgZ290byBvdXQ7CiAKLSAgICBkb3VibGVfZXZ0Y2huX2xvY2sobGNobiwg
cmNobik7CisgICAgZmxhZ3MgPSBkb3VibGVfZXZ0Y2huX2xvY2sobGNobiwg
cmNobik7CiAKICAgICBsY2huLT51LmludGVyZG9tYWluLnJlbW90ZV9kb20g
ID0gcmQ7CiAgICAgbGNobi0+dS5pbnRlcmRvbWFpbi5yZW1vdGVfcG9ydCA9
IHJwb3J0OwpAQCAtMzY0LDcgKzM3Miw3IEBAIHN0YXRpYyBsb25nIGV2dGNo
bl9iaW5kX2ludGVyZG9tYWluKGV2dGMKICAgICAgKi8KICAgICBldnRjaG5f
cG9ydF9zZXRfcGVuZGluZyhsZCwgbGNobi0+bm90aWZ5X3ZjcHVfaWQsIGxj
aG4pOwogCi0gICAgZG91YmxlX2V2dGNobl91bmxvY2sobGNobiwgcmNobik7
CisgICAgZG91YmxlX2V2dGNobl91bmxvY2sobGNobiwgcmNobiwgZmxhZ3Mp
OwogCiAgICAgYmluZC0+bG9jYWxfcG9ydCA9IGxwb3J0OwogCkBAIC0zODcs
NiArMzk1LDcgQEAgaW50IGV2dGNobl9iaW5kX3ZpcnEoZXZ0Y2huX2JpbmRf
dmlycV90CiAgICAgc3RydWN0IGRvbWFpbiAqZCA9IGN1cnJlbnQtPmRvbWFp
bjsKICAgICBpbnQgICAgICAgICAgICB2aXJxID0gYmluZC0+dmlycSwgdmNw
dSA9IGJpbmQtPnZjcHU7CiAgICAgaW50ICAgICAgICAgICAgcmMgPSAwOwor
ICAgIHVuc2lnbmVkIGxvbmcgIGZsYWdzOwogCiAgICAgaWYgKCAodmlycSA8
IDApIHx8ICh2aXJxID49IEFSUkFZX1NJWkUodi0+dmlycV90b19ldnRjaG4p
KSApCiAgICAgICAgIHJldHVybiAtRUlOVkFMOwpAQCAtNDE5LDE0ICs0Mjgs
MTQgQEAgaW50IGV2dGNobl9iaW5kX3ZpcnEoZXZ0Y2huX2JpbmRfdmlycV90
CiAKICAgICBjaG4gPSBldnRjaG5fZnJvbV9wb3J0KGQsIHBvcnQpOwogCi0g
ICAgc3Bpbl9sb2NrKCZjaG4tPmxvY2spOworICAgIHNwaW5fbG9ja19pcnFz
YXZlKCZjaG4tPmxvY2ssIGZsYWdzKTsKIAogICAgIGNobi0+c3RhdGUgICAg
ICAgICAgPSBFQ1NfVklSUTsKICAgICBjaG4tPm5vdGlmeV92Y3B1X2lkID0g
dmNwdTsKICAgICBjaG4tPnUudmlycSAgICAgICAgID0gdmlycTsKICAgICBl
dnRjaG5fcG9ydF9pbml0KGQsIGNobik7CiAKLSAgICBzcGluX3VubG9jaygm
Y2huLT5sb2NrKTsKKyAgICBzcGluX3VubG9ja19pcnFyZXN0b3JlKCZjaG4t
PmxvY2ssIGZsYWdzKTsKIAogICAgIHYtPnZpcnFfdG9fZXZ0Y2huW3ZpcnFd
ID0gYmluZC0+cG9ydCA9IHBvcnQ7CiAKQEAgLTQ0Myw2ICs0NTIsNyBAQCBz
dGF0aWMgbG9uZyBldnRjaG5fYmluZF9pcGkoZXZ0Y2huX2JpbmRfCiAgICAg
c3RydWN0IGRvbWFpbiAqZCA9IGN1cnJlbnQtPmRvbWFpbjsKICAgICBpbnQg
ICAgICAgICAgICBwb3J0LCB2Y3B1ID0gYmluZC0+dmNwdTsKICAgICBsb25n
ICAgICAgICAgICByYyA9IDA7CisgICAgdW5zaWduZWQgbG9uZyAgZmxhZ3M7
CiAKICAgICBpZiAoICh2Y3B1IDwgMCkgfHwgKHZjcHUgPj0gZC0+bWF4X3Zj
cHVzKSB8fAogICAgICAgICAgKGQtPnZjcHVbdmNwdV0gPT0gTlVMTCkgKQpA
QCAtNDU1LDEzICs0NjUsMTMgQEAgc3RhdGljIGxvbmcgZXZ0Y2huX2JpbmRf
aXBpKGV2dGNobl9iaW5kXwogCiAgICAgY2huID0gZXZ0Y2huX2Zyb21fcG9y
dChkLCBwb3J0KTsKIAotICAgIHNwaW5fbG9jaygmY2huLT5sb2NrKTsKKyAg
ICBzcGluX2xvY2tfaXJxc2F2ZSgmY2huLT5sb2NrLCBmbGFncyk7CiAKICAg
ICBjaG4tPnN0YXRlICAgICAgICAgID0gRUNTX0lQSTsKICAgICBjaG4tPm5v
dGlmeV92Y3B1X2lkID0gdmNwdTsKICAgICBldnRjaG5fcG9ydF9pbml0KGQs
IGNobik7CiAKLSAgICBzcGluX3VubG9jaygmY2huLT5sb2NrKTsKKyAgICBz
cGluX3VubG9ja19pcnFyZXN0b3JlKCZjaG4tPmxvY2ssIGZsYWdzKTsKIAog
ICAgIGJpbmQtPnBvcnQgPSBwb3J0OwogCkBAIC01MDUsNiArNTE1LDcgQEAg
c3RhdGljIGxvbmcgZXZ0Y2huX2JpbmRfcGlycShldnRjaG5fYmluZAogICAg
IHN0cnVjdCBwaXJxICAgKmluZm87CiAgICAgaW50ICAgICAgICAgICAgcG9y
dCA9IDAsIHBpcnEgPSBiaW5kLT5waXJxOwogICAgIGxvbmcgICAgICAgICAg
IHJjOworICAgIHVuc2lnbmVkIGxvbmcgIGZsYWdzOwogCiAgICAgaWYgKCAo
cGlycSA8IDApIHx8IChwaXJxID49IGQtPm5yX3BpcnFzKSApCiAgICAgICAg
IHJldHVybiAtRUlOVkFMOwpAQCAtNTM3LDE0ICs1NDgsMTQgQEAgc3RhdGlj
IGxvbmcgZXZ0Y2huX2JpbmRfcGlycShldnRjaG5fYmluZAogICAgICAgICBn
b3RvIG91dDsKICAgICB9CiAKLSAgICBzcGluX2xvY2soJmNobi0+bG9jayk7
CisgICAgc3Bpbl9sb2NrX2lycXNhdmUoJmNobi0+bG9jaywgZmxhZ3MpOwog
CiAgICAgY2huLT5zdGF0ZSAgPSBFQ1NfUElSUTsKICAgICBjaG4tPnUucGly
cS5pcnEgPSBwaXJxOwogICAgIGxpbmtfcGlycV9wb3J0KHBvcnQsIGNobiwg
dik7CiAgICAgZXZ0Y2huX3BvcnRfaW5pdChkLCBjaG4pOwogCi0gICAgc3Bp
bl91bmxvY2soJmNobi0+bG9jayk7CisgICAgc3Bpbl91bmxvY2tfaXJxcmVz
dG9yZSgmY2huLT5sb2NrLCBmbGFncyk7CiAKICAgICBiaW5kLT5wb3J0ID0g
cG9ydDsKIApAQCAtNTY1LDYgKzU3Niw3IEBAIGludCBldnRjaG5fY2xvc2Uo
c3RydWN0IGRvbWFpbiAqZDEsIGludAogICAgIHN0cnVjdCBldnRjaG4gKmNo
bjEsICpjaG4yOwogICAgIGludCAgICAgICAgICAgIHBvcnQyOwogICAgIGxv
bmcgICAgICAgICAgIHJjID0gMDsKKyAgICB1bnNpZ25lZCBsb25nICBmbGFn
czsKIAogIGFnYWluOgogICAgIHNwaW5fbG9jaygmZDEtPmV2ZW50X2xvY2sp
OwpAQCAtNjY0LDE0ICs2NzYsMTQgQEAgaW50IGV2dGNobl9jbG9zZShzdHJ1
Y3QgZG9tYWluICpkMSwgaW50CiAgICAgICAgIEJVR19PTihjaG4yLT5zdGF0
ZSAhPSBFQ1NfSU5URVJET01BSU4pOwogICAgICAgICBCVUdfT04oY2huMi0+
dS5pbnRlcmRvbWFpbi5yZW1vdGVfZG9tICE9IGQxKTsKIAotICAgICAgICBk
b3VibGVfZXZ0Y2huX2xvY2soY2huMSwgY2huMik7CisgICAgICAgIGZsYWdz
ID0gZG91YmxlX2V2dGNobl9sb2NrKGNobjEsIGNobjIpOwogCiAgICAgICAg
IGV2dGNobl9mcmVlKGQxLCBjaG4xKTsKIAogICAgICAgICBjaG4yLT5zdGF0
ZSA9IEVDU19VTkJPVU5EOwogICAgICAgICBjaG4yLT51LnVuYm91bmQucmVt
b3RlX2RvbWlkID0gZDEtPmRvbWFpbl9pZDsKIAotICAgICAgICBkb3VibGVf
ZXZ0Y2huX3VubG9jayhjaG4xLCBjaG4yKTsKKyAgICAgICAgZG91YmxlX2V2
dGNobl91bmxvY2soY2huMSwgY2huMiwgZmxhZ3MpOwogCiAgICAgICAgIGdv
dG8gb3V0OwogCkBAIC02NzksOSArNjkxLDkgQEAgaW50IGV2dGNobl9jbG9z
ZShzdHJ1Y3QgZG9tYWluICpkMSwgaW50CiAgICAgICAgIEJVRygpOwogICAg
IH0KIAotICAgIHNwaW5fbG9jaygmY2huMS0+bG9jayk7CisgICAgc3Bpbl9s
b2NrX2lycXNhdmUoJmNobjEtPmxvY2ssIGZsYWdzKTsKICAgICBldnRjaG5f
ZnJlZShkMSwgY2huMSk7Ci0gICAgc3Bpbl91bmxvY2soJmNobjEtPmxvY2sp
OworICAgIHNwaW5fdW5sb2NrX2lycXJlc3RvcmUoJmNobjEtPmxvY2ssIGZs
YWdzKTsKIAogIG91dDoKICAgICBpZiAoIGQyICE9IE5VTEwgKQpAQCAtNzAx
LDEzICs3MTMsMTQgQEAgaW50IGV2dGNobl9zZW5kKHN0cnVjdCBkb21haW4g
KmxkLCB1bnNpZwogICAgIHN0cnVjdCBldnRjaG4gKmxjaG4sICpyY2huOwog
ICAgIHN0cnVjdCBkb21haW4gKnJkOwogICAgIGludCAgICAgICAgICAgIHJw
b3J0LCByZXQgPSAwOworICAgIHVuc2lnbmVkIGxvbmcgIGZsYWdzOwogCiAg
ICAgaWYgKCAhcG9ydF9pc192YWxpZChsZCwgbHBvcnQpICkKICAgICAgICAg
cmV0dXJuIC1FSU5WQUw7CiAKICAgICBsY2huID0gZXZ0Y2huX2Zyb21fcG9y
dChsZCwgbHBvcnQpOwogCi0gICAgc3Bpbl9sb2NrKCZsY2huLT5sb2NrKTsK
KyAgICBzcGluX2xvY2tfaXJxc2F2ZSgmbGNobi0+bG9jaywgZmxhZ3MpOwog
CiAgICAgLyogR3Vlc3QgY2Fubm90IHNlbmQgdmlhIGEgWGVuLWF0dGFjaGVk
IGV2ZW50IGNoYW5uZWwuICovCiAgICAgaWYgKCB1bmxpa2VseShjb25zdW1l
cl9pc194ZW4obGNobikpICkKQEAgLTc0Miw3ICs3NTUsNyBAQCBpbnQgZXZ0
Y2huX3NlbmQoc3RydWN0IGRvbWFpbiAqbGQsIHVuc2lnCiAgICAgfQogCiBv
dXQ6Ci0gICAgc3Bpbl91bmxvY2soJmxjaG4tPmxvY2spOworICAgIHNwaW5f
dW5sb2NrX2lycXJlc3RvcmUoJmxjaG4tPmxvY2ssIGZsYWdzKTsKIAogICAg
IHJldHVybiByZXQ7CiB9CkBAIC0xMjMyLDYgKzEyNDUsNyBAQCBpbnQgYWxs
b2NfdW5ib3VuZF94ZW5fZXZlbnRfY2hhbm5lbCgKIHsKICAgICBzdHJ1Y3Qg
ZXZ0Y2huICpjaG47CiAgICAgaW50ICAgICAgICAgICAgcG9ydCwgcmM7Cisg
ICAgdW5zaWduZWQgbG9uZyAgZmxhZ3M7CiAKICAgICBzcGluX2xvY2soJmxk
LT5ldmVudF9sb2NrKTsKIApAQCAtMTI0NCwxNCArMTI1OCwxNCBAQCBpbnQg
YWxsb2NfdW5ib3VuZF94ZW5fZXZlbnRfY2hhbm5lbCgKICAgICBpZiAoIHJj
ICkKICAgICAgICAgZ290byBvdXQ7CiAKLSAgICBzcGluX2xvY2soJmNobi0+
bG9jayk7CisgICAgc3Bpbl9sb2NrX2lycXNhdmUoJmNobi0+bG9jaywgZmxh
Z3MpOwogCiAgICAgY2huLT5zdGF0ZSA9IEVDU19VTkJPVU5EOwogICAgIGNo
bi0+eGVuX2NvbnN1bWVyID0gZ2V0X3hlbl9jb25zdW1lcihub3RpZmljYXRp
b25fZm4pOwogICAgIGNobi0+bm90aWZ5X3ZjcHVfaWQgPSBsdmNwdTsKICAg
ICBjaG4tPnUudW5ib3VuZC5yZW1vdGVfZG9taWQgPSByZW1vdGVfZG9taWQ7
CiAKLSAgICBzcGluX3VubG9jaygmY2huLT5sb2NrKTsKKyAgICBzcGluX3Vu
bG9ja19pcnFyZXN0b3JlKCZjaG4tPmxvY2ssIGZsYWdzKTsKIAogICAgIHdy
aXRlX2F0b21pYygmbGQtPnhlbl9ldnRjaG5zLCBsZC0+eGVuX2V2dGNobnMg
KyAxKTsKIApAQCAtMTI3NCwxMSArMTI4OCwxMiBAQCB2b2lkIG5vdGlmeV92
aWFfeGVuX2V2ZW50X2NoYW5uZWwoc3RydWN0CiB7CiAgICAgc3RydWN0IGV2
dGNobiAqbGNobiwgKnJjaG47CiAgICAgc3RydWN0IGRvbWFpbiAqcmQ7Cisg
ICAgdW5zaWduZWQgbG9uZyBmbGFnczsKIAogICAgIEFTU0VSVChwb3J0X2lz
X3ZhbGlkKGxkLCBscG9ydCkpOwogICAgIGxjaG4gPSBldnRjaG5fZnJvbV9w
b3J0KGxkLCBscG9ydCk7CiAKLSAgICBzcGluX2xvY2soJmxjaG4tPmxvY2sp
OworICAgIHNwaW5fbG9ja19pcnFzYXZlKCZsY2huLT5sb2NrLCBmbGFncyk7
CiAKICAgICBpZiAoIGxpa2VseShsY2huLT5zdGF0ZSA9PSBFQ1NfSU5URVJE
T01BSU4pICkKICAgICB7CkBAIC0xMjg4LDcgKzEzMDMsNyBAQCB2b2lkIG5v
dGlmeV92aWFfeGVuX2V2ZW50X2NoYW5uZWwoc3RydWN0CiAgICAgICAgIGV2
dGNobl9wb3J0X3NldF9wZW5kaW5nKHJkLCByY2huLT5ub3RpZnlfdmNwdV9p
ZCwgcmNobik7CiAgICAgfQogCi0gICAgc3Bpbl91bmxvY2soJmxjaG4tPmxv
Y2spOworICAgIHNwaW5fdW5sb2NrX2lycXJlc3RvcmUoJmxjaG4tPmxvY2ss
IGZsYWdzKTsKIH0KIAogdm9pZCBldnRjaG5fY2hlY2tfcG9sbGVycyhzdHJ1
Y3QgZG9tYWluICpkLCB1bnNpZ25lZCBpbnQgcG9ydCkK

--=separator
Content-Type: application/octet-stream; name="xsa343/xsa343-4.11-3.patch"
Content-Disposition: attachment; filename="xsa343/xsa343-4.11-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBldnRjaG46IGFkZHJlc3MgcmFjZXMgd2l0aCBldnRjaG5fcmVzZXQoKQoK
TmVpdGhlciBkLT5ldnRjaG5fcG9ydF9vcHMgbm9yIG1heF9ldnRjaG5zKGQp
IG1heSBiZSB1c2VkIGluIGFuIGVudGlyZWx5CmxvY2stbGVzcyBtYW5uZXIs
IGFzIGJvdGggbWF5IGNoYW5nZSBieSBhIHJhY2luZyBldnRjaG5fcmVzZXQo
KS4gSW4gdGhlCmNvbW1vbiBjYXNlLCBhdCBsZWFzdCBvbmUgb2YgdGhlIGRv
bWFpbidzIGV2ZW50IGxvY2sgb3IgdGhlIHBlci1jaGFubmVsCmxvY2sgbmVl
ZHMgdG8gYmUgaGVsZC4gSW4gdGhlIHNwZWNpZmljIGNhc2Ugb2YgdGhlIGlu
dGVyLWRvbWFpbiBzZW5kaW5nCmJ5IGV2dGNobl9zZW5kKCkgYW5kIG5vdGlm
eV92aWFfeGVuX2V2ZW50X2NoYW5uZWwoKSBob2xkaW5nIHRoZSBvdGhlcgpz
aWRlJ3MgcGVyLWNoYW5uZWwgbG9jayBpcyBzdWZmaWNpZW50LCBhcyB0aGUg
Y2hhbm5lbCBjYW4ndCBjaGFuZ2Ugc3RhdGUKd2l0aG91dCBib3RoIHBlci1j
aGFubmVsIGxvY2tzIGhlbGQuIFdpdGhvdXQgc3VjaCBhIGNoYW5uZWwgY2hh
bmdpbmcKc3RhdGUsIGV2dGNobl9yZXNldCgpIGNhbid0IGNvbXBsZXRlIHN1
Y2Nlc3NmdWxseS4KCkxvY2stZnJlZSBhY2Nlc3NlcyBjb250aW51ZSB0byBi
ZSBwZXJtaXR0ZWQgZm9yIHRoZSBzaGltIChjYWxsaW5nIHNvbWUKb3RoZXJ3
aXNlIGludGVybmFsIGV2ZW50IGNoYW5uZWwgZnVuY3Rpb25zKSwgYXMgdGhp
cyBoYXBwZW5zIHdoaWxlIHRoZQpkb21haW4gaXMgaW4gZWZmZWN0aXZlbHkg
c2luZ2xlLXRocmVhZGVkIG1vZGUuIFNwZWNpYWwgY2FyZSBhbHNvIG5lZWRz
CnRha2luZyBmb3IgdGhlIHNoaW0ncyBtYXJraW5nIG9mIGluLXVzZSBwb3J0
cyBhcyBFQ1NfUkVTRVJWRUQgKGFsbG93aW5nCnVzZSBvZiBzdWNoIHBvcnRz
IGluIHRoZSBzaGltIGNhc2UgaXMgb2theSBiZWNhdXNlIHN3aXRjaGluZyBp
bnRvIGFuZApoZW5jZSBhbHNvIG91dCBvZiBGSUZPIG1vZGUgaXMgaW1wb3Nz
aWJsZSB0aGVyZSkuCgpBcyBhIHNpZGUgZWZmZWN0LCBjZXJ0YWluIG9wZXJh
dGlvbnMgb24gWGVuIGJvdW5kIGV2ZW50IGNoYW5uZWxzIHdoaWNoCndlcmUg
bWlzdGFrZW5seSBwZXJtaXR0ZWQgc28gZmFyIChlLmcuIHVubWFzayBvciBw
b2xsKSB3aWxsIGJlIHJlZnVzZWQKbm93LgoKVGhpcyBpcyBwYXJ0IG9mIFhT
QS0zNDMuCgpSZXBvcnRlZC1ieTogSnVsaWVuIEdyYWxsIDxqZ3JhbGxAYW1h
em9uLmNvbT4KU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNo
QHN1c2UuY29tPgpBY2tlZC1ieTogSnVsaWVuIEdyYWxsIDxqZ3JhbGxAYW1h
em9uLmNvbT4KCi0tLSBhL3hlbi9hcmNoL3g4Ni9pcnEuYworKysgYi94ZW4v
YXJjaC94ODYvaXJxLmMKQEAgLTIzNjcsMTQgKzIzNjcsMjQgQEAgc3RhdGlj
IHZvaWQgZHVtcF9pcnFzKHVuc2lnbmVkIGNoYXIga2V5KQogCiAgICAgICAg
ICAgICBmb3IgKCBpID0gMDsgaSA8IGFjdGlvbi0+bnJfZ3Vlc3RzOyBpKysg
KQogICAgICAgICAgICAgeworICAgICAgICAgICAgICAgIHN0cnVjdCBldnRj
aG4gKmV2dGNobjsKKyAgICAgICAgICAgICAgICB1bnNpZ25lZCBpbnQgcGVu
ZGluZyA9IDIsIG1hc2tlZCA9IDI7CisKICAgICAgICAgICAgICAgICBkID0g
YWN0aW9uLT5ndWVzdFtpXTsKICAgICAgICAgICAgICAgICBwaXJxID0gZG9t
YWluX2lycV90b19waXJxKGQsIGlycSk7CiAgICAgICAgICAgICAgICAgaW5m
byA9IHBpcnFfaW5mbyhkLCBwaXJxKTsKKyAgICAgICAgICAgICAgICBldnRj
aG4gPSBldnRjaG5fZnJvbV9wb3J0KGQsIGluZm8tPmV2dGNobik7CisgICAg
ICAgICAgICAgICAgbG9jYWxfaXJxX2Rpc2FibGUoKTsKKyAgICAgICAgICAg
ICAgICBpZiAoIHNwaW5fdHJ5bG9jaygmZXZ0Y2huLT5sb2NrKSApCisgICAg
ICAgICAgICAgICAgeworICAgICAgICAgICAgICAgICAgICBwZW5kaW5nID0g
ZXZ0Y2huX2lzX3BlbmRpbmcoZCwgZXZ0Y2huKTsKKyAgICAgICAgICAgICAg
ICAgICAgbWFza2VkID0gZXZ0Y2huX2lzX21hc2tlZChkLCBldnRjaG4pOwor
ICAgICAgICAgICAgICAgICAgICBzcGluX3VubG9jaygmZXZ0Y2huLT5sb2Nr
KTsKKyAgICAgICAgICAgICAgICB9CisgICAgICAgICAgICAgICAgbG9jYWxf
aXJxX2VuYWJsZSgpOwogICAgICAgICAgICAgICAgIHByaW50aygiJXU6JTNk
KCVjJWMlYykiLAotICAgICAgICAgICAgICAgICAgICAgICBkLT5kb21haW5f
aWQsIHBpcnEsCi0gICAgICAgICAgICAgICAgICAgICAgIGV2dGNobl9wb3J0
X2lzX3BlbmRpbmcoZCwgaW5mby0+ZXZ0Y2huKSA/ICdQJyA6ICctJywKLSAg
ICAgICAgICAgICAgICAgICAgICAgZXZ0Y2huX3BvcnRfaXNfbWFza2VkKGQs
IGluZm8tPmV2dGNobikgPyAnTScgOiAnLScsCi0gICAgICAgICAgICAgICAg
ICAgICAgIChpbmZvLT5tYXNrZWQgPyAnTScgOiAnLScpKTsKKyAgICAgICAg
ICAgICAgICAgICAgICAgZC0+ZG9tYWluX2lkLCBwaXJxLCAiLVA/IltwZW5k
aW5nXSwKKyAgICAgICAgICAgICAgICAgICAgICAgIi1NPyJbbWFza2VkXSwg
aW5mby0+bWFza2VkID8gJ00nIDogJy0nKTsKICAgICAgICAgICAgICAgICBp
ZiAoIGkgIT0gYWN0aW9uLT5ucl9ndWVzdHMgKQogICAgICAgICAgICAgICAg
ICAgICBwcmludGsoIiwiKTsKICAgICAgICAgICAgIH0KLS0tIGEveGVuL2Fy
Y2gveDg2L3B2L3NoaW0uYworKysgYi94ZW4vYXJjaC94ODYvcHYvc2hpbS5j
CkBAIC02MTYsOCArNjE2LDExIEBAIHZvaWQgcHZfc2hpbV9pbmplY3RfZXZ0
Y2huKHVuc2lnbmVkIGludAogICAgIGlmICggcG9ydF9pc192YWxpZChndWVz
dCwgcG9ydCkgKQogICAgIHsKICAgICAgICAgc3RydWN0IGV2dGNobiAqY2hu
ID0gZXZ0Y2huX2Zyb21fcG9ydChndWVzdCwgcG9ydCk7CisgICAgICAgIHVu
c2lnbmVkIGxvbmcgZmxhZ3M7CiAKKyAgICAgICAgc3Bpbl9sb2NrX2lycXNh
dmUoJmNobi0+bG9jaywgZmxhZ3MpOwogICAgICAgICBldnRjaG5fcG9ydF9z
ZXRfcGVuZGluZyhndWVzdCwgY2huLT5ub3RpZnlfdmNwdV9pZCwgY2huKTsK
KyAgICAgICAgc3Bpbl91bmxvY2tfaXJxcmVzdG9yZSgmY2huLT5sb2NrLCBm
bGFncyk7CiAgICAgfQogfQogCi0tLSBhL3hlbi9jb21tb24vZXZlbnRfMmwu
YworKysgYi94ZW4vY29tbW9uL2V2ZW50XzJsLmMKQEAgLTYzLDggKzYzLDEw
IEBAIHN0YXRpYyB2b2lkIGV2dGNobl8ybF91bm1hc2soc3RydWN0IGRvbWEK
ICAgICB9CiB9CiAKLXN0YXRpYyBib29sIGV2dGNobl8ybF9pc19wZW5kaW5n
KGNvbnN0IHN0cnVjdCBkb21haW4gKmQsIGV2dGNobl9wb3J0X3QgcG9ydCkK
K3N0YXRpYyBib29sIGV2dGNobl8ybF9pc19wZW5kaW5nKGNvbnN0IHN0cnVj
dCBkb21haW4gKmQsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBjb25zdCBzdHJ1Y3QgZXZ0Y2huICpldnRjaG4pCiB7CisgICAgZXZ0Y2hu
X3BvcnRfdCBwb3J0ID0gZXZ0Y2huLT5wb3J0OwogICAgIHVuc2lnbmVkIGlu
dCBtYXhfcG9ydHMgPSBCSVRTX1BFUl9FVlRDSE5fV09SRChkKSAqIEJJVFNf
UEVSX0VWVENITl9XT1JEKGQpOwogCiAgICAgQVNTRVJUKHBvcnQgPCBtYXhf
cG9ydHMpOwpAQCAtNzIsOCArNzQsMTAgQEAgc3RhdGljIGJvb2wgZXZ0Y2hu
XzJsX2lzX3BlbmRpbmcoY29uc3QgcwogICAgICAgICAgICAgZ3Vlc3RfdGVz
dF9iaXQoZCwgcG9ydCwgJnNoYXJlZF9pbmZvKGQsIGV2dGNobl9wZW5kaW5n
KSkpOwogfQogCi1zdGF0aWMgYm9vbCBldnRjaG5fMmxfaXNfbWFza2VkKGNv
bnN0IHN0cnVjdCBkb21haW4gKmQsIGV2dGNobl9wb3J0X3QgcG9ydCkKK3N0
YXRpYyBib29sIGV2dGNobl8ybF9pc19tYXNrZWQoY29uc3Qgc3RydWN0IGRv
bWFpbiAqZCwKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgY29u
c3Qgc3RydWN0IGV2dGNobiAqZXZ0Y2huKQogeworICAgIGV2dGNobl9wb3J0
X3QgcG9ydCA9IGV2dGNobi0+cG9ydDsKICAgICB1bnNpZ25lZCBpbnQgbWF4
X3BvcnRzID0gQklUU19QRVJfRVZUQ0hOX1dPUkQoZCkgKiBCSVRTX1BFUl9F
VlRDSE5fV09SRChkKTsKIAogICAgIEFTU0VSVChwb3J0IDwgbWF4X3BvcnRz
KTsKLS0tIGEveGVuL2NvbW1vbi9ldmVudF9jaGFubmVsLmMKKysrIGIveGVu
L2NvbW1vbi9ldmVudF9jaGFubmVsLmMKQEAgLTE1Niw4ICsxNTYsOSBAQCBp
bnQgZXZ0Y2huX2FsbG9jYXRlX3BvcnQoc3RydWN0IGRvbWFpbiAqCiAKICAg
ICBpZiAoIHBvcnRfaXNfdmFsaWQoZCwgcG9ydCkgKQogICAgIHsKLSAgICAg
ICAgaWYgKCBldnRjaG5fZnJvbV9wb3J0KGQsIHBvcnQpLT5zdGF0ZSAhPSBF
Q1NfRlJFRSB8fAotICAgICAgICAgICAgIGV2dGNobl9wb3J0X2lzX2J1c3ko
ZCwgcG9ydCkgKQorICAgICAgICBjb25zdCBzdHJ1Y3QgZXZ0Y2huICpjaG4g
PSBldnRjaG5fZnJvbV9wb3J0KGQsIHBvcnQpOworCisgICAgICAgIGlmICgg
Y2huLT5zdGF0ZSAhPSBFQ1NfRlJFRSB8fCBldnRjaG5faXNfYnVzeShkLCBj
aG4pICkKICAgICAgICAgICAgIHJldHVybiAtRUJVU1k7CiAgICAgfQogICAg
IGVsc2UKQEAgLTc3MCw2ICs3NzEsNyBAQCB2b2lkIHNlbmRfZ3Vlc3RfdmNw
dV92aXJxKHN0cnVjdCB2Y3B1ICp2CiAgICAgdW5zaWduZWQgbG9uZyBmbGFn
czsKICAgICBpbnQgcG9ydDsKICAgICBzdHJ1Y3QgZG9tYWluICpkOworICAg
IHN0cnVjdCBldnRjaG4gKmNobjsKIAogICAgIEFTU0VSVCghdmlycV9pc19n
bG9iYWwodmlycSkpOwogCkBAIC03ODAsNyArNzgyLDEwIEBAIHZvaWQgc2Vu
ZF9ndWVzdF92Y3B1X3ZpcnEoc3RydWN0IHZjcHUgKnYKICAgICAgICAgZ290
byBvdXQ7CiAKICAgICBkID0gdi0+ZG9tYWluOwotICAgIGV2dGNobl9wb3J0
X3NldF9wZW5kaW5nKGQsIHYtPnZjcHVfaWQsIGV2dGNobl9mcm9tX3BvcnQo
ZCwgcG9ydCkpOworICAgIGNobiA9IGV2dGNobl9mcm9tX3BvcnQoZCwgcG9y
dCk7CisgICAgc3Bpbl9sb2NrKCZjaG4tPmxvY2spOworICAgIGV2dGNobl9w
b3J0X3NldF9wZW5kaW5nKGQsIHYtPnZjcHVfaWQsIGNobik7CisgICAgc3Bp
bl91bmxvY2soJmNobi0+bG9jayk7CiAKICBvdXQ6CiAgICAgc3Bpbl91bmxv
Y2tfaXJxcmVzdG9yZSgmdi0+dmlycV9sb2NrLCBmbGFncyk7CkBAIC04MDks
NyArODE0LDkgQEAgc3RhdGljIHZvaWQgc2VuZF9ndWVzdF9nbG9iYWxfdmly
cShzdHJ1YwogICAgICAgICBnb3RvIG91dDsKIAogICAgIGNobiA9IGV2dGNo
bl9mcm9tX3BvcnQoZCwgcG9ydCk7CisgICAgc3Bpbl9sb2NrKCZjaG4tPmxv
Y2spOwogICAgIGV2dGNobl9wb3J0X3NldF9wZW5kaW5nKGQsIGNobi0+bm90
aWZ5X3ZjcHVfaWQsIGNobik7CisgICAgc3Bpbl91bmxvY2soJmNobi0+bG9j
ayk7CiAKICBvdXQ6CiAgICAgc3Bpbl91bmxvY2tfaXJxcmVzdG9yZSgmdi0+
dmlycV9sb2NrLCBmbGFncyk7CkBAIC04MTksNiArODI2LDcgQEAgdm9pZCBz
ZW5kX2d1ZXN0X3BpcnEoc3RydWN0IGRvbWFpbiAqZCwgYwogewogICAgIGlu
dCBwb3J0OwogICAgIHN0cnVjdCBldnRjaG4gKmNobjsKKyAgICB1bnNpZ25l
ZCBsb25nIGZsYWdzOwogCiAgICAgLyoKICAgICAgKiBQViBndWVzdHM6IEl0
IHNob3VsZCBub3QgYmUgcG9zc2libGUgdG8gcmFjZSB3aXRoIF9fZXZ0Y2hu
X2Nsb3NlKCkuIFRoZQpAQCAtODMzLDcgKzg0MSw5IEBAIHZvaWQgc2VuZF9n
dWVzdF9waXJxKHN0cnVjdCBkb21haW4gKmQsIGMKICAgICB9CiAKICAgICBj
aG4gPSBldnRjaG5fZnJvbV9wb3J0KGQsIHBvcnQpOworICAgIHNwaW5fbG9j
a19pcnFzYXZlKCZjaG4tPmxvY2ssIGZsYWdzKTsKICAgICBldnRjaG5fcG9y
dF9zZXRfcGVuZGluZyhkLCBjaG4tPm5vdGlmeV92Y3B1X2lkLCBjaG4pOwor
ICAgIHNwaW5fdW5sb2NrX2lycXJlc3RvcmUoJmNobi0+bG9jaywgZmxhZ3Mp
OwogfQogCiBzdGF0aWMgc3RydWN0IGRvbWFpbiAqZ2xvYmFsX3ZpcnFfaGFu
ZGxlcnNbTlJfVklSUVNdIF9fcmVhZF9tb3N0bHk7CkBAIC0xMDI4LDEyICsx
MDM4LDE1IEBAIGludCBldnRjaG5fdW5tYXNrKHVuc2lnbmVkIGludCBwb3J0
KQogewogICAgIHN0cnVjdCBkb21haW4gKmQgPSBjdXJyZW50LT5kb21haW47
CiAgICAgc3RydWN0IGV2dGNobiAqZXZ0Y2huOworICAgIHVuc2lnbmVkIGxv
bmcgZmxhZ3M7CiAKICAgICBpZiAoIHVubGlrZWx5KCFwb3J0X2lzX3ZhbGlk
KGQsIHBvcnQpKSApCiAgICAgICAgIHJldHVybiAtRUlOVkFMOwogCiAgICAg
ZXZ0Y2huID0gZXZ0Y2huX2Zyb21fcG9ydChkLCBwb3J0KTsKKyAgICBzcGlu
X2xvY2tfaXJxc2F2ZSgmZXZ0Y2huLT5sb2NrLCBmbGFncyk7CiAgICAgZXZ0
Y2huX3BvcnRfdW5tYXNrKGQsIGV2dGNobik7CisgICAgc3Bpbl91bmxvY2tf
aXJxcmVzdG9yZSgmZXZ0Y2huLT5sb2NrLCBmbGFncyk7CiAKICAgICByZXR1
cm4gMDsKIH0KQEAgLTE0NDYsOCArMTQ1OSw4IEBAIHN0YXRpYyB2b2lkIGRv
bWFpbl9kdW1wX2V2dGNobl9pbmZvKHN0cnUKIAogICAgICAgICBwcmludGso
IiAgICAlNHUgWyVkLyVkLyIsCiAgICAgICAgICAgICAgICBwb3J0LAotICAg
ICAgICAgICAgICAgZXZ0Y2huX3BvcnRfaXNfcGVuZGluZyhkLCBwb3J0KSwK
LSAgICAgICAgICAgICAgIGV2dGNobl9wb3J0X2lzX21hc2tlZChkLCBwb3J0
KSk7CisgICAgICAgICAgICAgICBldnRjaG5faXNfcGVuZGluZyhkLCBjaG4p
LAorICAgICAgICAgICAgICAgZXZ0Y2huX2lzX21hc2tlZChkLCBjaG4pKTsK
ICAgICAgICAgZXZ0Y2huX3BvcnRfcHJpbnRfc3RhdGUoZCwgY2huKTsKICAg
ICAgICAgcHJpbnRrKCJdOiBzPSVkIG49JWQgeD0lZCIsCiAgICAgICAgICAg
ICAgICBjaG4tPnN0YXRlLCBjaG4tPm5vdGlmeV92Y3B1X2lkLCBjaG4tPnhl
bl9jb25zdW1lcik7Ci0tLSBhL3hlbi9jb21tb24vZXZlbnRfZmlmby5jCisr
KyBiL3hlbi9jb21tb24vZXZlbnRfZmlmby5jCkBAIC0yOTUsMjMgKzI5NSwy
NiBAQCBzdGF0aWMgdm9pZCBldnRjaG5fZmlmb191bm1hc2soc3RydWN0IGRv
CiAgICAgICAgIGV2dGNobl9maWZvX3NldF9wZW5kaW5nKHYsIGV2dGNobik7
CiB9CiAKLXN0YXRpYyBib29sIGV2dGNobl9maWZvX2lzX3BlbmRpbmcoY29u
c3Qgc3RydWN0IGRvbWFpbiAqZCwgZXZ0Y2huX3BvcnRfdCBwb3J0KQorc3Rh
dGljIGJvb2wgZXZ0Y2huX2ZpZm9faXNfcGVuZGluZyhjb25zdCBzdHJ1Y3Qg
ZG9tYWluICpkLAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBjb25zdCBzdHJ1Y3QgZXZ0Y2huICpldnRjaG4pCiB7Ci0gICAgY29uc3Qg
ZXZlbnRfd29yZF90ICp3b3JkID0gZXZ0Y2huX2ZpZm9fd29yZF9mcm9tX3Bv
cnQoZCwgcG9ydCk7CisgICAgY29uc3QgZXZlbnRfd29yZF90ICp3b3JkID0g
ZXZ0Y2huX2ZpZm9fd29yZF9mcm9tX3BvcnQoZCwgZXZ0Y2huLT5wb3J0KTsK
IAogICAgIHJldHVybiB3b3JkICYmIGd1ZXN0X3Rlc3RfYml0KGQsIEVWVENI
Tl9GSUZPX1BFTkRJTkcsIHdvcmQpOwogfQogCi1zdGF0aWMgYm9vbF90IGV2
dGNobl9maWZvX2lzX21hc2tlZChjb25zdCBzdHJ1Y3QgZG9tYWluICpkLCBl
dnRjaG5fcG9ydF90IHBvcnQpCitzdGF0aWMgYm9vbF90IGV2dGNobl9maWZv
X2lzX21hc2tlZChjb25zdCBzdHJ1Y3QgZG9tYWluICpkLAorICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgY29uc3Qgc3RydWN0IGV2dGNo
biAqZXZ0Y2huKQogewotICAgIGNvbnN0IGV2ZW50X3dvcmRfdCAqd29yZCA9
IGV2dGNobl9maWZvX3dvcmRfZnJvbV9wb3J0KGQsIHBvcnQpOworICAgIGNv
bnN0IGV2ZW50X3dvcmRfdCAqd29yZCA9IGV2dGNobl9maWZvX3dvcmRfZnJv
bV9wb3J0KGQsIGV2dGNobi0+cG9ydCk7CiAKICAgICByZXR1cm4gIXdvcmQg
fHwgZ3Vlc3RfdGVzdF9iaXQoZCwgRVZUQ0hOX0ZJRk9fTUFTS0VELCB3b3Jk
KTsKIH0KIAotc3RhdGljIGJvb2xfdCBldnRjaG5fZmlmb19pc19idXN5KGNv
bnN0IHN0cnVjdCBkb21haW4gKmQsIGV2dGNobl9wb3J0X3QgcG9ydCkKK3N0
YXRpYyBib29sX3QgZXZ0Y2huX2ZpZm9faXNfYnVzeShjb25zdCBzdHJ1Y3Qg
ZG9tYWluICpkLAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IGNvbnN0IHN0cnVjdCBldnRjaG4gKmV2dGNobikKIHsKLSAgICBjb25zdCBl
dmVudF93b3JkX3QgKndvcmQgPSBldnRjaG5fZmlmb193b3JkX2Zyb21fcG9y
dChkLCBwb3J0KTsKKyAgICBjb25zdCBldmVudF93b3JkX3QgKndvcmQgPSBl
dnRjaG5fZmlmb193b3JkX2Zyb21fcG9ydChkLCBldnRjaG4tPnBvcnQpOwog
CiAgICAgcmV0dXJuIHdvcmQgJiYgZ3Vlc3RfdGVzdF9iaXQoZCwgRVZUQ0hO
X0ZJRk9fTElOS0VELCB3b3JkKTsKIH0KLS0tIGEveGVuL2luY2x1ZGUvYXNt
LXg4Ni9ldmVudC5oCisrKyBiL3hlbi9pbmNsdWRlL2FzbS14ODYvZXZlbnQu
aApAQCAtNDcsNCArNDcsMTAgQEAgc3RhdGljIGlubGluZSBib29sIGFyY2hf
dmlycV9pc19nbG9iYWwodQogICAgIHJldHVybiB0cnVlOwogfQogCisjaWZk
ZWYgQ09ORklHX1BWX1NISU0KKyMgaW5jbHVkZSA8YXNtL3B2L3NoaW0uaD4K
KyMgZGVmaW5lIGFyY2hfZXZ0Y2huX2lzX3NwZWNpYWwoY2huKSBcCisgICAg
ICAgICAgICAgKHB2X3NoaW0gJiYgKGNobiktPnBvcnQgJiYgKGNobiktPnN0
YXRlID09IEVDU19SRVNFUlZFRCkKKyNlbmRpZgorCiAjZW5kaWYKLS0tIGEv
eGVuL2luY2x1ZGUveGVuL2V2ZW50LmgKKysrIGIveGVuL2luY2x1ZGUveGVu
L2V2ZW50LmgKQEAgLTEyNSw2ICsxMjUsMjQgQEAgc3RhdGljIGlubGluZSBz
dHJ1Y3QgZXZ0Y2huICpldnRjaG5fZnJvbQogICAgIHJldHVybiBidWNrZXRf
ZnJvbV9wb3J0KGQsIHApICsgKHAgJSBFVlRDSE5TX1BFUl9CVUNLRVQpOwog
fQogCisvKgorICogInVzYWJsZSIgYXMgaW4gImJ5IGEgZ3Vlc3QiLCBpLmUu
IFhlbiBjb25zdW1lZCBjaGFubmVscyBhcmUgYXNzdW1lZCB0byBiZQorICog
dGFrZW4gY2FyZSBvZiBzZXBhcmF0ZWx5IHdoZXJlIHVzZWQgZm9yIFhlbidz
IGludGVybmFsIHB1cnBvc2VzLgorICovCitzdGF0aWMgYm9vbCBldnRjaG5f
dXNhYmxlKGNvbnN0IHN0cnVjdCBldnRjaG4gKmV2dGNobikKK3sKKyAgICBp
ZiAoIGV2dGNobi0+eGVuX2NvbnN1bWVyICkKKyAgICAgICAgcmV0dXJuIGZh
bHNlOworCisjaWZkZWYgYXJjaF9ldnRjaG5faXNfc3BlY2lhbAorICAgIGlm
ICggYXJjaF9ldnRjaG5faXNfc3BlY2lhbChldnRjaG4pICkKKyAgICAgICAg
cmV0dXJuIHRydWU7CisjZW5kaWYKKworICAgIEJVSUxEX0JVR19PTihFQ1Nf
RlJFRSA+IEVDU19SRVNFUlZFRCk7CisgICAgcmV0dXJuIGV2dGNobi0+c3Rh
dGUgPiBFQ1NfUkVTRVJWRUQ7Cit9CisKIC8qIFdhaXQgb24gYSBYZW4tYXR0
YWNoZWQgZXZlbnQgY2hhbm5lbC4gKi8KICNkZWZpbmUgd2FpdF9vbl94ZW5f
ZXZlbnRfY2hhbm5lbChwb3J0LCBjb25kaXRpb24pICAgICAgICAgICAgICAg
ICAgICAgIFwKICAgICBkbyB7ICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKQEAgLTE1
NywxOSArMTc1LDI0IEBAIGludCBldnRjaG5fcmVzZXQoc3RydWN0IGRvbWFp
biAqZCk7CiAKIC8qCiAgKiBMb3ctbGV2ZWwgZXZlbnQgY2hhbm5lbCBwb3J0
IG9wcy4KKyAqCisgKiBBbGwgaG9va3MgaGF2ZSB0byBiZSBjYWxsZWQgd2l0
aCBhIGxvY2sgaGVsZCB3aGljaCBwcmV2ZW50cyB0aGUgY2hhbm5lbAorICog
ZnJvbSBjaGFuZ2luZyBzdGF0ZS4gVGhpcyBtYXkgYmUgdGhlIGRvbWFpbiBl
dmVudCBsb2NrLCB0aGUgcGVyLWNoYW5uZWwKKyAqIGxvY2ssIG9yIGluIHRo
ZSBjYXNlIG9mIHNlbmRpbmcgaW50ZXJkb21haW4gZXZlbnRzIGFsc28gdGhl
IG90aGVyIHNpZGUncworICogcGVyLWNoYW5uZWwgbG9jay4gRXhjZXB0aW9u
cyBhcHBseSBpbiBjZXJ0YWluIGNhc2VzIGZvciB0aGUgUFYgc2hpbS4KICAq
Lwogc3RydWN0IGV2dGNobl9wb3J0X29wcyB7CiAgICAgdm9pZCAoKmluaXQp
KHN0cnVjdCBkb21haW4gKmQsIHN0cnVjdCBldnRjaG4gKmV2dGNobik7CiAg
ICAgdm9pZCAoKnNldF9wZW5kaW5nKShzdHJ1Y3QgdmNwdSAqdiwgc3RydWN0
IGV2dGNobiAqZXZ0Y2huKTsKICAgICB2b2lkICgqY2xlYXJfcGVuZGluZyko
c3RydWN0IGRvbWFpbiAqZCwgc3RydWN0IGV2dGNobiAqZXZ0Y2huKTsKICAg
ICB2b2lkICgqdW5tYXNrKShzdHJ1Y3QgZG9tYWluICpkLCBzdHJ1Y3QgZXZ0
Y2huICpldnRjaG4pOwotICAgIGJvb2wgKCppc19wZW5kaW5nKShjb25zdCBz
dHJ1Y3QgZG9tYWluICpkLCBldnRjaG5fcG9ydF90IHBvcnQpOwotICAgIGJv
b2wgKCppc19tYXNrZWQpKGNvbnN0IHN0cnVjdCBkb21haW4gKmQsIGV2dGNo
bl9wb3J0X3QgcG9ydCk7CisgICAgYm9vbCAoKmlzX3BlbmRpbmcpKGNvbnN0
IHN0cnVjdCBkb21haW4gKmQsIGNvbnN0IHN0cnVjdCBldnRjaG4gKmV2dGNo
bik7CisgICAgYm9vbCAoKmlzX21hc2tlZCkoY29uc3Qgc3RydWN0IGRvbWFp
biAqZCwgY29uc3Qgc3RydWN0IGV2dGNobiAqZXZ0Y2huKTsKICAgICAvKgog
ICAgICAqIElzIHRoZSBwb3J0IHVuYXZhaWxhYmxlIGJlY2F1c2UgaXQncyBz
dGlsbCBiZWluZyBjbGVhbmVkIHVwCiAgICAgICogYWZ0ZXIgYmVpbmcgY2xv
c2VkPwogICAgICAqLwotICAgIGJvb2wgKCppc19idXN5KShjb25zdCBzdHJ1
Y3QgZG9tYWluICpkLCBldnRjaG5fcG9ydF90IHBvcnQpOworICAgIGJvb2wg
KCppc19idXN5KShjb25zdCBzdHJ1Y3QgZG9tYWluICpkLCBjb25zdCBzdHJ1
Y3QgZXZ0Y2huICpldnRjaG4pOwogICAgIGludCAoKnNldF9wcmlvcml0eSko
c3RydWN0IGRvbWFpbiAqZCwgc3RydWN0IGV2dGNobiAqZXZ0Y2huLAogICAg
ICAgICAgICAgICAgICAgICAgICAgdW5zaWduZWQgaW50IHByaW9yaXR5KTsK
ICAgICB2b2lkICgqcHJpbnRfc3RhdGUpKHN0cnVjdCBkb21haW4gKmQsIGNv
bnN0IHN0cnVjdCBldnRjaG4gKmV2dGNobik7CkBAIC0xODUsMzggKzIwOCw2
NyBAQCBzdGF0aWMgaW5saW5lIHZvaWQgZXZ0Y2huX3BvcnRfc2V0X3BlbmRp
CiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
dW5zaWduZWQgaW50IHZjcHVfaWQsCiAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgc3RydWN0IGV2dGNobiAqZXZ0Y2huKQog
ewotICAgIGQtPmV2dGNobl9wb3J0X29wcy0+c2V0X3BlbmRpbmcoZC0+dmNw
dVt2Y3B1X2lkXSwgZXZ0Y2huKTsKKyAgICBpZiAoIGV2dGNobl91c2FibGUo
ZXZ0Y2huKSApCisgICAgICAgIGQtPmV2dGNobl9wb3J0X29wcy0+c2V0X3Bl
bmRpbmcoZC0+dmNwdVt2Y3B1X2lkXSwgZXZ0Y2huKTsKIH0KIAogc3RhdGlj
IGlubGluZSB2b2lkIGV2dGNobl9wb3J0X2NsZWFyX3BlbmRpbmcoc3RydWN0
IGRvbWFpbiAqZCwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIHN0cnVjdCBldnRjaG4gKmV2dGNobikKIHsKLSAgICBk
LT5ldnRjaG5fcG9ydF9vcHMtPmNsZWFyX3BlbmRpbmcoZCwgZXZ0Y2huKTsK
KyAgICBpZiAoIGV2dGNobl91c2FibGUoZXZ0Y2huKSApCisgICAgICAgIGQt
PmV2dGNobl9wb3J0X29wcy0+Y2xlYXJfcGVuZGluZyhkLCBldnRjaG4pOwog
fQogCiBzdGF0aWMgaW5saW5lIHZvaWQgZXZ0Y2huX3BvcnRfdW5tYXNrKHN0
cnVjdCBkb21haW4gKmQsCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIHN0cnVjdCBldnRjaG4gKmV2dGNobikKIHsKLSAgICBkLT5l
dnRjaG5fcG9ydF9vcHMtPnVubWFzayhkLCBldnRjaG4pOworICAgIGlmICgg
ZXZ0Y2huX3VzYWJsZShldnRjaG4pICkKKyAgICAgICAgZC0+ZXZ0Y2huX3Bv
cnRfb3BzLT51bm1hc2soZCwgZXZ0Y2huKTsKIH0KIAotc3RhdGljIGlubGlu
ZSBib29sIGV2dGNobl9wb3J0X2lzX3BlbmRpbmcoY29uc3Qgc3RydWN0IGRv
bWFpbiAqZCwKLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIGV2dGNobl9wb3J0X3QgcG9ydCkKK3N0YXRpYyBpbmxpbmUgYm9v
bCBldnRjaG5faXNfcGVuZGluZyhjb25zdCBzdHJ1Y3QgZG9tYWluICpkLAor
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGNvbnN0IHN0
cnVjdCBldnRjaG4gKmV2dGNobikKIHsKLSAgICByZXR1cm4gZC0+ZXZ0Y2hu
X3BvcnRfb3BzLT5pc19wZW5kaW5nKGQsIHBvcnQpOworICAgIHJldHVybiBl
dnRjaG5fdXNhYmxlKGV2dGNobikgJiYgZC0+ZXZ0Y2huX3BvcnRfb3BzLT5p
c19wZW5kaW5nKGQsIGV2dGNobik7CiB9CiAKLXN0YXRpYyBpbmxpbmUgYm9v
bCBldnRjaG5fcG9ydF9pc19tYXNrZWQoY29uc3Qgc3RydWN0IGRvbWFpbiAq
ZCwKLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ZXZ0Y2huX3BvcnRfdCBwb3J0KQorc3RhdGljIGlubGluZSBib29sIGV2dGNo
bl9wb3J0X2lzX3BlbmRpbmcoc3RydWN0IGRvbWFpbiAqZCwgZXZ0Y2huX3Bv
cnRfdCBwb3J0KQogewotICAgIHJldHVybiBkLT5ldnRjaG5fcG9ydF9vcHMt
PmlzX21hc2tlZChkLCBwb3J0KTsKKyAgICBzdHJ1Y3QgZXZ0Y2huICpldnRj
aG4gPSBldnRjaG5fZnJvbV9wb3J0KGQsIHBvcnQpOworICAgIGJvb2wgcmM7
CisgICAgdW5zaWduZWQgbG9uZyBmbGFnczsKKworICAgIHNwaW5fbG9ja19p
cnFzYXZlKCZldnRjaG4tPmxvY2ssIGZsYWdzKTsKKyAgICByYyA9IGV2dGNo
bl9pc19wZW5kaW5nKGQsIGV2dGNobik7CisgICAgc3Bpbl91bmxvY2tfaXJx
cmVzdG9yZSgmZXZ0Y2huLT5sb2NrLCBmbGFncyk7CisKKyAgICByZXR1cm4g
cmM7Cit9CisKK3N0YXRpYyBpbmxpbmUgYm9vbCBldnRjaG5faXNfbWFza2Vk
KGNvbnN0IHN0cnVjdCBkb21haW4gKmQsCisgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICBjb25zdCBzdHJ1Y3QgZXZ0Y2huICpldnRjaG4p
Cit7CisgICAgcmV0dXJuICFldnRjaG5fdXNhYmxlKGV2dGNobikgfHwgZC0+
ZXZ0Y2huX3BvcnRfb3BzLT5pc19tYXNrZWQoZCwgZXZ0Y2huKTsKK30KKwor
c3RhdGljIGlubGluZSBib29sIGV2dGNobl9wb3J0X2lzX21hc2tlZChzdHJ1
Y3QgZG9tYWluICpkLCBldnRjaG5fcG9ydF90IHBvcnQpCit7CisgICAgc3Ry
dWN0IGV2dGNobiAqZXZ0Y2huID0gZXZ0Y2huX2Zyb21fcG9ydChkLCBwb3J0
KTsKKyAgICBib29sIHJjOworICAgIHVuc2lnbmVkIGxvbmcgZmxhZ3M7CisK
KyAgICBzcGluX2xvY2tfaXJxc2F2ZSgmZXZ0Y2huLT5sb2NrLCBmbGFncyk7
CisgICAgcmMgPSBldnRjaG5faXNfbWFza2VkKGQsIGV2dGNobik7CisgICAg
c3Bpbl91bmxvY2tfaXJxcmVzdG9yZSgmZXZ0Y2huLT5sb2NrLCBmbGFncyk7
CisKKyAgICByZXR1cm4gcmM7CiB9CiAKLXN0YXRpYyBpbmxpbmUgYm9vbCBl
dnRjaG5fcG9ydF9pc19idXN5KGNvbnN0IHN0cnVjdCBkb21haW4gKmQsCi0g
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBldnRjaG5f
cG9ydF90IHBvcnQpCitzdGF0aWMgaW5saW5lIGJvb2wgZXZ0Y2huX2lzX2J1
c3koY29uc3Qgc3RydWN0IGRvbWFpbiAqZCwKKyAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICBjb25zdCBzdHJ1Y3QgZXZ0Y2huICpldnRjaG4p
CiB7CiAgICAgcmV0dXJuIGQtPmV2dGNobl9wb3J0X29wcy0+aXNfYnVzeSAm
JgotICAgICAgICAgICBkLT5ldnRjaG5fcG9ydF9vcHMtPmlzX2J1c3koZCwg
cG9ydCk7CisgICAgICAgICAgIGQtPmV2dGNobl9wb3J0X29wcy0+aXNfYnVz
eShkLCBldnRjaG4pOwogfQogCiBzdGF0aWMgaW5saW5lIGludCBldnRjaG5f
cG9ydF9zZXRfcHJpb3JpdHkoc3RydWN0IGRvbWFpbiAqZCwKQEAgLTIyNSw2
ICsyNzcsOCBAQCBzdGF0aWMgaW5saW5lIGludCBldnRjaG5fcG9ydF9zZXRf
cHJpb3JpCiB7CiAgICAgaWYgKCAhZC0+ZXZ0Y2huX3BvcnRfb3BzLT5zZXRf
cHJpb3JpdHkgKQogICAgICAgICByZXR1cm4gLUVOT1NZUzsKKyAgICBpZiAo
ICFldnRjaG5fdXNhYmxlKGV2dGNobikgKQorICAgICAgICByZXR1cm4gLUVB
Q0NFUzsKICAgICByZXR1cm4gZC0+ZXZ0Y2huX3BvcnRfb3BzLT5zZXRfcHJp
b3JpdHkoZCwgZXZ0Y2huLCBwcmlvcml0eSk7CiB9CiAK

--=separator
Content-Type: application/octet-stream; name="xsa343/xsa343-4.12-1.patch"
Content-Disposition: attachment; filename="xsa343/xsa343-4.12-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBldnRjaG46IGV2dGNobl9yZXNldCgpIHNob3VsZG4ndCBzdWNjZWVkIHdp
dGggc3RpbGwtb3BlbiBwb3J0cwoKV2hpbGUgdGhlIGZ1bmN0aW9uIGNsb3Nl
cyBhbGwgcG9ydHMsIGl0IGRvZXMgc28gd2l0aG91dCBob2xkaW5nIGFueQps
b2NrLCBhbmQgaGVuY2UgcmFjaW5nIHJlcXVlc3RzIG1heSBiZSBpc3N1ZWQg
Y2F1c2luZyBuZXcgcG9ydHMgdG8gZ2V0Cm9wZW5lZC4gVGhpcyB3b3VsZCBo
YXZlIGJlZW4gcHJvYmxlbWF0aWMgaW4gcGFydGljdWxhciBpZiBzdWNoIGEg
bmV3bHkKb3BlbmVkIHBvcnQgaGFkIGEgcG9ydCBudW1iZXIgYWJvdmUgdGhl
IG5ldyBpbXBsZW1lbnRhdGlvbiBsaW1pdCAoaS5lLgp3aGVuIHN3aXRjaGlu
ZyBmcm9tIEZJRk8gdG8gMi1sZXZlbCkgYWZ0ZXIgdGhlIHJlc2V0LCBhcyBw
cmlvciB0bwoiZXZ0Y2huOiByZWxheCBwb3J0X2lzX3ZhbGlkKCkiIHRoaXMg
Y291bGQgaGF2ZSBsZWQgdG8gZS5nLgpldnRjaG5fY2xvc2UoKSdzICJCVUdf
T04oIXBvcnRfaXNfdmFsaWQoZDIsIHBvcnQyKSkiIHRvIHRyaWdnZXIuCgpJ
bnRyb2R1Y2UgYSBjb3VudGVyIG9mIGFjdGl2ZSBwb3J0cyBhbmQgY2hlY2sg
dGhhdCBpdCdzIChzdGlsbCkgbm8KbGFyZ2VyIHRoZW4gdGhlIG51bWJlciBv
ZiBYZW4gaW50ZXJuYWxseSB1c2VkIG9uZXMgYWZ0ZXIgb2J0YWluaW5nIHRo
ZQpuZWNlc3NhcnkgbG9jayBpbiBldnRjaG5fcmVzZXQoKS4KCkFzIHRvIHRo
ZSBhY2Nlc3MgbW9kZWwgb2YgdGhlIG5ldyB7YWN0aXZlLHhlbn1fZXZ0Y2hu
cyBmaWVsZHMgLSB3aGlsZQphbGwgd3JpdGVzIGdldCBkb25lIHVzaW5nIHdy
aXRlX2F0b21pYygpLCByZWFkcyBvdWdodCB0byB1c2UKcmVhZF9hdG9taWMo
KSBvbmx5IHdoZW4gb3V0c2lkZSBvZiBhIHN1aXRhYmx5IGxvY2tlZCByZWdp
b24uCgpOb3RlIHRoYXQgYXMgb2Ygbm93IGV2dGNobl9iaW5kX3ZpcnEoKSBh
bmQgZXZ0Y2huX2JpbmRfaXBpKCkgZG9uJ3QgaGF2ZQphIG5lZWQgdG8gY2Fs
bCBjaGVja19mcmVlX3BvcnQoKS4KClRoaXMgaXMgcGFydCBvZiBYU0EtMzQz
LgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2Uu
Y29tPgpSZXZpZXdlZC1ieTogU3RlZmFubyBTdGFiZWxsaW5pIDxzc3RhYmVs
bGluaUBrZXJuZWwub3JnPgpSZXZpZXdlZC1ieTogSnVsaWVuIEdyYWxsIDxq
Z3JhbGxAYW1hem9uLmNvbT4KCi0tLSBhL3hlbi9jb21tb24vZXZlbnRfY2hh
bm5lbC5jCisrKyBiL3hlbi9jb21tb24vZXZlbnRfY2hhbm5lbC5jCkBAIC0x
ODgsNiArMTg4LDggQEAgaW50IGV2dGNobl9hbGxvY2F0ZV9wb3J0KHN0cnVj
dCBkb21haW4gKgogICAgICAgICB3cml0ZV9hdG9taWMoJmQtPnZhbGlkX2V2
dGNobnMsIGQtPnZhbGlkX2V2dGNobnMgKyBFVlRDSE5TX1BFUl9CVUNLRVQp
OwogICAgIH0KIAorICAgIHdyaXRlX2F0b21pYygmZC0+YWN0aXZlX2V2dGNo
bnMsIGQtPmFjdGl2ZV9ldnRjaG5zICsgMSk7CisKICAgICByZXR1cm4gMDsK
IH0KIApAQCAtMjExLDExICsyMTMsMjYgQEAgc3RhdGljIGludCBnZXRfZnJl
ZV9wb3J0KHN0cnVjdCBkb21haW4gKgogICAgIHJldHVybiAtRU5PU1BDOwog
fQogCisvKgorICogQ2hlY2sgd2hldGhlciBhIHBvcnQgaXMgc3RpbGwgbWFy
a2VkIGZyZWUsIGFuZCBpZiBzbyB1cGRhdGUgdGhlIGRvbWFpbgorICogY291
bnRlciBhY2NvcmRpbmdseS4gIFRvIGJlIHVzZWQgb24gZnVuY3Rpb24gZXhp
dCBwYXRocy4KKyAqLworc3RhdGljIHZvaWQgY2hlY2tfZnJlZV9wb3J0KHN0
cnVjdCBkb21haW4gKmQsIGV2dGNobl9wb3J0X3QgcG9ydCkKK3sKKyAgICBp
ZiAoIHBvcnRfaXNfdmFsaWQoZCwgcG9ydCkgJiYKKyAgICAgICAgIGV2dGNo
bl9mcm9tX3BvcnQoZCwgcG9ydCktPnN0YXRlID09IEVDU19GUkVFICkKKyAg
ICAgICAgd3JpdGVfYXRvbWljKCZkLT5hY3RpdmVfZXZ0Y2hucywgZC0+YWN0
aXZlX2V2dGNobnMgLSAxKTsKK30KKwogdm9pZCBldnRjaG5fZnJlZShzdHJ1
Y3QgZG9tYWluICpkLCBzdHJ1Y3QgZXZ0Y2huICpjaG4pCiB7CiAgICAgLyog
Q2xlYXIgcGVuZGluZyBldmVudCB0byBhdm9pZCB1bmV4cGVjdGVkIGJlaGF2
aW9yIG9uIHJlLWJpbmQuICovCiAgICAgZXZ0Y2huX3BvcnRfY2xlYXJfcGVu
ZGluZyhkLCBjaG4pOwogCisgICAgaWYgKCBjb25zdW1lcl9pc194ZW4oY2hu
KSApCisgICAgICAgIHdyaXRlX2F0b21pYygmZC0+eGVuX2V2dGNobnMsIGQt
Pnhlbl9ldnRjaG5zIC0gMSk7CisgICAgd3JpdGVfYXRvbWljKCZkLT5hY3Rp
dmVfZXZ0Y2hucywgZC0+YWN0aXZlX2V2dGNobnMgLSAxKTsKKwogICAgIC8q
IFJlc2V0IGJpbmRpbmcgdG8gdmNwdTAgd2hlbiB0aGUgY2hhbm5lbCBpcyBm
cmVlZC4gKi8KICAgICBjaG4tPnN0YXRlICAgICAgICAgID0gRUNTX0ZSRUU7
CiAgICAgY2huLT5ub3RpZnlfdmNwdV9pZCA9IDA7CkBAIC0yNTgsNiArMjc1
LDcgQEAgc3RhdGljIGxvbmcgZXZ0Y2huX2FsbG9jX3VuYm91bmQoZXZ0Y2hu
XwogICAgIGFsbG9jLT5wb3J0ID0gcG9ydDsKIAogIG91dDoKKyAgICBjaGVj
a19mcmVlX3BvcnQoZCwgcG9ydCk7CiAgICAgc3Bpbl91bmxvY2soJmQtPmV2
ZW50X2xvY2spOwogICAgIHJjdV91bmxvY2tfZG9tYWluKGQpOwogCkBAIC0z
NTEsNiArMzY5LDcgQEAgc3RhdGljIGxvbmcgZXZ0Y2huX2JpbmRfaW50ZXJk
b21haW4oZXZ0YwogICAgIGJpbmQtPmxvY2FsX3BvcnQgPSBscG9ydDsKIAog
IG91dDoKKyAgICBjaGVja19mcmVlX3BvcnQobGQsIGxwb3J0KTsKICAgICBz
cGluX3VubG9jaygmbGQtPmV2ZW50X2xvY2spOwogICAgIGlmICggbGQgIT0g
cmQgKQogICAgICAgICBzcGluX3VubG9jaygmcmQtPmV2ZW50X2xvY2spOwpA
QCAtNDg4LDcgKzUwNyw3IEBAIHN0YXRpYyBsb25nIGV2dGNobl9iaW5kX3Bp
cnEoZXZ0Y2huX2JpbmQKICAgICBzdHJ1Y3QgZG9tYWluICpkID0gY3VycmVu
dC0+ZG9tYWluOwogICAgIHN0cnVjdCB2Y3B1ICAgKnYgPSBkLT52Y3B1WzBd
OwogICAgIHN0cnVjdCBwaXJxICAgKmluZm87Ci0gICAgaW50ICAgICAgICAg
ICAgcG9ydCwgcGlycSA9IGJpbmQtPnBpcnE7CisgICAgaW50ICAgICAgICAg
ICAgcG9ydCA9IDAsIHBpcnEgPSBiaW5kLT5waXJxOwogICAgIGxvbmcgICAg
ICAgICAgIHJjOwogCiAgICAgaWYgKCAocGlycSA8IDApIHx8IChwaXJxID49
IGQtPm5yX3BpcnFzKSApCkBAIC01MzYsNiArNTU1LDcgQEAgc3RhdGljIGxv
bmcgZXZ0Y2huX2JpbmRfcGlycShldnRjaG5fYmluZAogICAgIGFyY2hfZXZ0
Y2huX2JpbmRfcGlycShkLCBwaXJxKTsKIAogIG91dDoKKyAgICBjaGVja19m
cmVlX3BvcnQoZCwgcG9ydCk7CiAgICAgc3Bpbl91bmxvY2soJmQtPmV2ZW50
X2xvY2spOwogCiAgICAgcmV0dXJuIHJjOwpAQCAtMTAxMSwxMCArMTAzMSwx
MCBAQCBpbnQgZXZ0Y2huX3VubWFzayh1bnNpZ25lZCBpbnQgcG9ydCkKICAg
ICByZXR1cm4gMDsKIH0KIAotCiBpbnQgZXZ0Y2huX3Jlc2V0KHN0cnVjdCBk
b21haW4gKmQpCiB7CiAgICAgdW5zaWduZWQgaW50IGk7CisgICAgaW50IHJj
ID0gMDsKIAogICAgIGlmICggZCAhPSBjdXJyZW50LT5kb21haW4gJiYgIWQt
PmNvbnRyb2xsZXJfcGF1c2VfY291bnQgKQogICAgICAgICByZXR1cm4gLUVJ
TlZBTDsKQEAgLTEwMjQsNyArMTA0NCw5IEBAIGludCBldnRjaG5fcmVzZXQo
c3RydWN0IGRvbWFpbiAqZCkKIAogICAgIHNwaW5fbG9jaygmZC0+ZXZlbnRf
bG9jayk7CiAKLSAgICBpZiAoIGQtPmV2dGNobl9maWZvICkKKyAgICBpZiAo
IGQtPmFjdGl2ZV9ldnRjaG5zID4gZC0+eGVuX2V2dGNobnMgKQorICAgICAg
ICByYyA9IC1FQUdBSU47CisgICAgZWxzZSBpZiAoIGQtPmV2dGNobl9maWZv
ICkKICAgICB7CiAgICAgICAgIC8qIFN3aXRjaGluZyBiYWNrIHRvIDItbGV2
ZWwgQUJJLiAqLwogICAgICAgICBldnRjaG5fZmlmb19kZXN0cm95KGQpOwpA
QCAtMTAzMyw3ICsxMDU1LDcgQEAgaW50IGV2dGNobl9yZXNldChzdHJ1Y3Qg
ZG9tYWluICpkKQogCiAgICAgc3Bpbl91bmxvY2soJmQtPmV2ZW50X2xvY2sp
OwogCi0gICAgcmV0dXJuIDA7CisgICAgcmV0dXJuIHJjOwogfQogCiBzdGF0
aWMgbG9uZyBldnRjaG5fc2V0X3ByaW9yaXR5KGNvbnN0IHN0cnVjdCBldnRj
aG5fc2V0X3ByaW9yaXR5ICpzZXRfcHJpb3JpdHkpCkBAIC0xMjE5LDEwICsx
MjQxLDkgQEAgaW50IGFsbG9jX3VuYm91bmRfeGVuX2V2ZW50X2NoYW5uZWwo
CiAKICAgICBzcGluX2xvY2soJmxkLT5ldmVudF9sb2NrKTsKIAotICAgIHJj
ID0gZ2V0X2ZyZWVfcG9ydChsZCk7CisgICAgcG9ydCA9IHJjID0gZ2V0X2Zy
ZWVfcG9ydChsZCk7CiAgICAgaWYgKCByYyA8IDAgKQogICAgICAgICBnb3Rv
IG91dDsKLSAgICBwb3J0ID0gcmM7CiAgICAgY2huID0gZXZ0Y2huX2Zyb21f
cG9ydChsZCwgcG9ydCk7CiAKICAgICByYyA9IHhzbV9ldnRjaG5fdW5ib3Vu
ZChYU01fVEFSR0VULCBsZCwgY2huLCByZW1vdGVfZG9taWQpOwpAQCAtMTIz
OCw3ICsxMjU5LDEwIEBAIGludCBhbGxvY191bmJvdW5kX3hlbl9ldmVudF9j
aGFubmVsKAogCiAgICAgc3Bpbl91bmxvY2soJmNobi0+bG9jayk7CiAKKyAg
ICB3cml0ZV9hdG9taWMoJmxkLT54ZW5fZXZ0Y2hucywgbGQtPnhlbl9ldnRj
aG5zICsgMSk7CisKICBvdXQ6CisgICAgY2hlY2tfZnJlZV9wb3J0KGxkLCBw
b3J0KTsKICAgICBzcGluX3VubG9jaygmbGQtPmV2ZW50X2xvY2spOwogCiAg
ICAgcmV0dXJuIHJjIDwgMCA/IHJjIDogcG9ydDsKQEAgLTEzMTQsNiArMTMz
OCw3IEBAIGludCBldnRjaG5faW5pdChzdHJ1Y3QgZG9tYWluICpkLCB1bnNp
Z24KICAgICAgICAgcmV0dXJuIC1FSU5WQUw7CiAgICAgfQogICAgIGV2dGNo
bl9mcm9tX3BvcnQoZCwgMCktPnN0YXRlID0gRUNTX1JFU0VSVkVEOworICAg
IHdyaXRlX2F0b21pYygmZC0+YWN0aXZlX2V2dGNobnMsIDApOwogCiAjaWYg
TUFYX1ZJUlRfQ1BVUyA+IEJJVFNfUEVSX0xPTkcKICAgICBkLT5wb2xsX21h
c2sgPSB4emFsbG9jX2FycmF5KHVuc2lnbmVkIGxvbmcsIEJJVFNfVE9fTE9O
R1MoZC0+bWF4X3ZjcHVzKSk7CkBAIC0xMzQwLDYgKzEzNjUsOCBAQCB2b2lk
IGV2dGNobl9kZXN0cm95KHN0cnVjdCBkb21haW4gKmQpCiAgICAgZm9yICgg
aSA9IDA7IHBvcnRfaXNfdmFsaWQoZCwgaSk7IGkrKyApCiAgICAgICAgIGV2
dGNobl9jbG9zZShkLCBpLCAwKTsKIAorICAgIEFTU0VSVCghZC0+YWN0aXZl
X2V2dGNobnMpOworCiAgICAgY2xlYXJfZ2xvYmFsX3ZpcnFfaGFuZGxlcnMo
ZCk7CiAKICAgICBldnRjaG5fZmlmb19kZXN0cm95KGQpOwotLS0gYS94ZW4v
aW5jbHVkZS94ZW4vc2NoZWQuaAorKysgYi94ZW4vaW5jbHVkZS94ZW4vc2No
ZWQuaApAQCAtMzQ2LDYgKzM0NiwxNiBAQCBzdHJ1Y3QgZG9tYWluCiAgICAg
c3RydWN0IGV2dGNobiAgKipldnRjaG5fZ3JvdXBbTlJfRVZUQ0hOX0dST1VQ
U107IC8qIGFsbCBvdGhlciBidWNrZXRzICovCiAgICAgdW5zaWduZWQgaW50
ICAgICBtYXhfZXZ0Y2huX3BvcnQ7IC8qIG1heCBwZXJtaXR0ZWQgcG9ydCBu
dW1iZXIgKi8KICAgICB1bnNpZ25lZCBpbnQgICAgIHZhbGlkX2V2dGNobnM7
ICAgLyogbnVtYmVyIG9mIGFsbG9jYXRlZCBldmVudCBjaGFubmVscyAqLwor
ICAgIC8qCisgICAgICogTnVtYmVyIG9mIGluLXVzZSBldmVudCBjaGFubmVs
cy4gIFdyaXRlcnMgc2hvdWxkIHVzZSB3cml0ZV9hdG9taWMoKS4KKyAgICAg
KiBSZWFkZXJzIG5lZWQgdG8gdXNlIHJlYWRfYXRvbWljKCkgb25seSB3aGVu
IG5vdCBob2xkaW5nIGV2ZW50X2xvY2suCisgICAgICovCisgICAgdW5zaWdu
ZWQgaW50ICAgICBhY3RpdmVfZXZ0Y2huczsKKyAgICAvKgorICAgICAqIE51
bWJlciBvZiBldmVudCBjaGFubmVscyB1c2VkIGludGVybmFsbHkgYnkgWGVu
IChub3Qgc3ViamVjdCB0bworICAgICAqIEVWVENITk9QX3Jlc2V0KS4gIFJl
YWQvd3JpdGUgYWNjZXNzIGxpa2UgZm9yIGFjdGl2ZV9ldnRjaG5zLgorICAg
ICAqLworICAgIHVuc2lnbmVkIGludCAgICAgeGVuX2V2dGNobnM7CiAgICAg
c3BpbmxvY2tfdCAgICAgICBldmVudF9sb2NrOwogICAgIGNvbnN0IHN0cnVj
dCBldnRjaG5fcG9ydF9vcHMgKmV2dGNobl9wb3J0X29wczsKICAgICBzdHJ1
Y3QgZXZ0Y2huX2ZpZm9fZG9tYWluICpldnRjaG5fZmlmbzsK

--=separator
Content-Type: application/octet-stream; name="xsa343/xsa343-4.12-2.patch"
Content-Disposition: attachment; filename="xsa343/xsa343-4.12-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBldnRjaG46IGNvbnZlcnQgcGVyLWNoYW5uZWwgbG9jayB0byBiZSBJUlEt
c2FmZQoKLi4uIGluIG9yZGVyIGZvciBzZW5kX2d1ZXN0X3tnbG9iYWwsdmNw
dX1fdmlycSgpIHRvIGJlIGFibGUgdG8gbWFrZSB1c2UKb2YgaXQuCgpUaGlz
IGlzIHBhcnQgb2YgWFNBLTM0My4KClNpZ25lZC1vZmYtYnk6IEphbiBCZXVs
aWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KQWNrZWQtYnk6IEp1bGllbiBHcmFs
bCA8amdyYWxsQGFtYXpvbi5jb20+CgotLS0gYS94ZW4vY29tbW9uL2V2ZW50
X2NoYW5uZWwuYworKysgYi94ZW4vY29tbW9uL2V2ZW50X2NoYW5uZWwuYwpA
QCAtMjQ4LDYgKzI0OCw3IEBAIHN0YXRpYyBsb25nIGV2dGNobl9hbGxvY191
bmJvdW5kKGV2dGNobl8KICAgICBpbnQgICAgICAgICAgICBwb3J0OwogICAg
IGRvbWlkX3QgICAgICAgIGRvbSA9IGFsbG9jLT5kb207CiAgICAgbG9uZyAg
ICAgICAgICAgcmM7CisgICAgdW5zaWduZWQgbG9uZyAgZmxhZ3M7CiAKICAg
ICBkID0gcmN1X2xvY2tfZG9tYWluX2J5X2FueV9pZChkb20pOwogICAgIGlm
ICggZCA9PSBOVUxMICkKQEAgLTI2MywxNCArMjY0LDE0IEBAIHN0YXRpYyBs
b25nIGV2dGNobl9hbGxvY191bmJvdW5kKGV2dGNobl8KICAgICBpZiAoIHJj
ICkKICAgICAgICAgZ290byBvdXQ7CiAKLSAgICBzcGluX2xvY2soJmNobi0+
bG9jayk7CisgICAgc3Bpbl9sb2NrX2lycXNhdmUoJmNobi0+bG9jaywgZmxh
Z3MpOwogCiAgICAgY2huLT5zdGF0ZSA9IEVDU19VTkJPVU5EOwogICAgIGlm
ICggKGNobi0+dS51bmJvdW5kLnJlbW90ZV9kb21pZCA9IGFsbG9jLT5yZW1v
dGVfZG9tKSA9PSBET01JRF9TRUxGICkKICAgICAgICAgY2huLT51LnVuYm91
bmQucmVtb3RlX2RvbWlkID0gY3VycmVudC0+ZG9tYWluLT5kb21haW5faWQ7
CiAgICAgZXZ0Y2huX3BvcnRfaW5pdChkLCBjaG4pOwogCi0gICAgc3Bpbl91
bmxvY2soJmNobi0+bG9jayk7CisgICAgc3Bpbl91bmxvY2tfaXJxcmVzdG9y
ZSgmY2huLT5sb2NrLCBmbGFncyk7CiAKICAgICBhbGxvYy0+cG9ydCA9IHBv
cnQ7CiAKQEAgLTI4MywyNiArMjg0LDMyIEBAIHN0YXRpYyBsb25nIGV2dGNo
bl9hbGxvY191bmJvdW5kKGV2dGNobl8KIH0KIAogCi1zdGF0aWMgdm9pZCBk
b3VibGVfZXZ0Y2huX2xvY2soc3RydWN0IGV2dGNobiAqbGNobiwgc3RydWN0
IGV2dGNobiAqcmNobikKK3N0YXRpYyB1bnNpZ25lZCBsb25nIGRvdWJsZV9l
dnRjaG5fbG9jayhzdHJ1Y3QgZXZ0Y2huICpsY2huLAorICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIHN0cnVjdCBldnRjaG4gKnJj
aG4pCiB7Ci0gICAgaWYgKCBsY2huIDwgcmNobiApCisgICAgdW5zaWduZWQg
bG9uZyBmbGFnczsKKworICAgIGlmICggbGNobiA8PSByY2huICkKICAgICB7
Ci0gICAgICAgIHNwaW5fbG9jaygmbGNobi0+bG9jayk7Ci0gICAgICAgIHNw
aW5fbG9jaygmcmNobi0+bG9jayk7CisgICAgICAgIHNwaW5fbG9ja19pcnFz
YXZlKCZsY2huLT5sb2NrLCBmbGFncyk7CisgICAgICAgIGlmICggbGNobiAh
PSByY2huICkKKyAgICAgICAgICAgIHNwaW5fbG9jaygmcmNobi0+bG9jayk7
CiAgICAgfQogICAgIGVsc2UKICAgICB7Ci0gICAgICAgIGlmICggbGNobiAh
PSByY2huICkKLSAgICAgICAgICAgIHNwaW5fbG9jaygmcmNobi0+bG9jayk7
CisgICAgICAgIHNwaW5fbG9ja19pcnFzYXZlKCZyY2huLT5sb2NrLCBmbGFn
cyk7CiAgICAgICAgIHNwaW5fbG9jaygmbGNobi0+bG9jayk7CiAgICAgfQor
CisgICAgcmV0dXJuIGZsYWdzOwogfQogCi1zdGF0aWMgdm9pZCBkb3VibGVf
ZXZ0Y2huX3VubG9jayhzdHJ1Y3QgZXZ0Y2huICpsY2huLCBzdHJ1Y3QgZXZ0
Y2huICpyY2huKQorc3RhdGljIHZvaWQgZG91YmxlX2V2dGNobl91bmxvY2so
c3RydWN0IGV2dGNobiAqbGNobiwgc3RydWN0IGV2dGNobiAqcmNobiwKKyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHVuc2lnbmVkIGxvbmcg
ZmxhZ3MpCiB7Ci0gICAgc3Bpbl91bmxvY2soJmxjaG4tPmxvY2spOwogICAg
IGlmICggbGNobiAhPSByY2huICkKLSAgICAgICAgc3Bpbl91bmxvY2soJnJj
aG4tPmxvY2spOworICAgICAgICBzcGluX3VubG9jaygmbGNobi0+bG9jayk7
CisgICAgc3Bpbl91bmxvY2tfaXJxcmVzdG9yZSgmcmNobi0+bG9jaywgZmxh
Z3MpOwogfQogCiBzdGF0aWMgbG9uZyBldnRjaG5fYmluZF9pbnRlcmRvbWFp
bihldnRjaG5fYmluZF9pbnRlcmRvbWFpbl90ICpiaW5kKQpAQCAtMzEyLDYg
KzMxOSw3IEBAIHN0YXRpYyBsb25nIGV2dGNobl9iaW5kX2ludGVyZG9tYWlu
KGV2dGMKICAgICBpbnQgICAgICAgICAgICBscG9ydCwgcnBvcnQgPSBiaW5k
LT5yZW1vdGVfcG9ydDsKICAgICBkb21pZF90ICAgICAgICByZG9tID0gYmlu
ZC0+cmVtb3RlX2RvbTsKICAgICBsb25nICAgICAgICAgICByYzsKKyAgICB1
bnNpZ25lZCBsb25nICBmbGFnczsKIAogICAgIGlmICggcmRvbSA9PSBET01J
RF9TRUxGICkKICAgICAgICAgcmRvbSA9IGN1cnJlbnQtPmRvbWFpbi0+ZG9t
YWluX2lkOwpAQCAtMzQ3LDcgKzM1NSw3IEBAIHN0YXRpYyBsb25nIGV2dGNo
bl9iaW5kX2ludGVyZG9tYWluKGV2dGMKICAgICBpZiAoIHJjICkKICAgICAg
ICAgZ290byBvdXQ7CiAKLSAgICBkb3VibGVfZXZ0Y2huX2xvY2sobGNobiwg
cmNobik7CisgICAgZmxhZ3MgPSBkb3VibGVfZXZ0Y2huX2xvY2sobGNobiwg
cmNobik7CiAKICAgICBsY2huLT51LmludGVyZG9tYWluLnJlbW90ZV9kb20g
ID0gcmQ7CiAgICAgbGNobi0+dS5pbnRlcmRvbWFpbi5yZW1vdGVfcG9ydCA9
IHJwb3J0OwpAQCAtMzY0LDcgKzM3Miw3IEBAIHN0YXRpYyBsb25nIGV2dGNo
bl9iaW5kX2ludGVyZG9tYWluKGV2dGMKICAgICAgKi8KICAgICBldnRjaG5f
cG9ydF9zZXRfcGVuZGluZyhsZCwgbGNobi0+bm90aWZ5X3ZjcHVfaWQsIGxj
aG4pOwogCi0gICAgZG91YmxlX2V2dGNobl91bmxvY2sobGNobiwgcmNobik7
CisgICAgZG91YmxlX2V2dGNobl91bmxvY2sobGNobiwgcmNobiwgZmxhZ3Mp
OwogCiAgICAgYmluZC0+bG9jYWxfcG9ydCA9IGxwb3J0OwogCkBAIC0zODcs
NiArMzk1LDcgQEAgaW50IGV2dGNobl9iaW5kX3ZpcnEoZXZ0Y2huX2JpbmRf
dmlycV90CiAgICAgc3RydWN0IGRvbWFpbiAqZCA9IGN1cnJlbnQtPmRvbWFp
bjsKICAgICBpbnQgICAgICAgICAgICB2aXJxID0gYmluZC0+dmlycSwgdmNw
dSA9IGJpbmQtPnZjcHU7CiAgICAgaW50ICAgICAgICAgICAgcmMgPSAwOwor
ICAgIHVuc2lnbmVkIGxvbmcgIGZsYWdzOwogCiAgICAgaWYgKCAodmlycSA8
IDApIHx8ICh2aXJxID49IEFSUkFZX1NJWkUodi0+dmlycV90b19ldnRjaG4p
KSApCiAgICAgICAgIHJldHVybiAtRUlOVkFMOwpAQCAtNDI0LDE0ICs0MzMs
MTQgQEAgaW50IGV2dGNobl9iaW5kX3ZpcnEoZXZ0Y2huX2JpbmRfdmlycV90
CiAKICAgICBjaG4gPSBldnRjaG5fZnJvbV9wb3J0KGQsIHBvcnQpOwogCi0g
ICAgc3Bpbl9sb2NrKCZjaG4tPmxvY2spOworICAgIHNwaW5fbG9ja19pcnFz
YXZlKCZjaG4tPmxvY2ssIGZsYWdzKTsKIAogICAgIGNobi0+c3RhdGUgICAg
ICAgICAgPSBFQ1NfVklSUTsKICAgICBjaG4tPm5vdGlmeV92Y3B1X2lkID0g
dmNwdTsKICAgICBjaG4tPnUudmlycSAgICAgICAgID0gdmlycTsKICAgICBl
dnRjaG5fcG9ydF9pbml0KGQsIGNobik7CiAKLSAgICBzcGluX3VubG9jaygm
Y2huLT5sb2NrKTsKKyAgICBzcGluX3VubG9ja19pcnFyZXN0b3JlKCZjaG4t
PmxvY2ssIGZsYWdzKTsKIAogICAgIHYtPnZpcnFfdG9fZXZ0Y2huW3ZpcnFd
ID0gYmluZC0+cG9ydCA9IHBvcnQ7CiAKQEAgLTQ0OCw2ICs0NTcsNyBAQCBz
dGF0aWMgbG9uZyBldnRjaG5fYmluZF9pcGkoZXZ0Y2huX2JpbmRfCiAgICAg
c3RydWN0IGRvbWFpbiAqZCA9IGN1cnJlbnQtPmRvbWFpbjsKICAgICBpbnQg
ICAgICAgICAgICBwb3J0LCB2Y3B1ID0gYmluZC0+dmNwdTsKICAgICBsb25n
ICAgICAgICAgICByYyA9IDA7CisgICAgdW5zaWduZWQgbG9uZyAgZmxhZ3M7
CiAKICAgICBpZiAoIGRvbWFpbl92Y3B1KGQsIHZjcHUpID09IE5VTEwgKQog
ICAgICAgICByZXR1cm4gLUVOT0VOVDsKQEAgLTQ1OSwxMyArNDY5LDEzIEBA
IHN0YXRpYyBsb25nIGV2dGNobl9iaW5kX2lwaShldnRjaG5fYmluZF8KIAog
ICAgIGNobiA9IGV2dGNobl9mcm9tX3BvcnQoZCwgcG9ydCk7CiAKLSAgICBz
cGluX2xvY2soJmNobi0+bG9jayk7CisgICAgc3Bpbl9sb2NrX2lycXNhdmUo
JmNobi0+bG9jaywgZmxhZ3MpOwogCiAgICAgY2huLT5zdGF0ZSAgICAgICAg
ICA9IEVDU19JUEk7CiAgICAgY2huLT5ub3RpZnlfdmNwdV9pZCA9IHZjcHU7
CiAgICAgZXZ0Y2huX3BvcnRfaW5pdChkLCBjaG4pOwogCi0gICAgc3Bpbl91
bmxvY2soJmNobi0+bG9jayk7CisgICAgc3Bpbl91bmxvY2tfaXJxcmVzdG9y
ZSgmY2huLT5sb2NrLCBmbGFncyk7CiAKICAgICBiaW5kLT5wb3J0ID0gcG9y
dDsKIApAQCAtNTA5LDYgKzUxOSw3IEBAIHN0YXRpYyBsb25nIGV2dGNobl9i
aW5kX3BpcnEoZXZ0Y2huX2JpbmQKICAgICBzdHJ1Y3QgcGlycSAgICppbmZv
OwogICAgIGludCAgICAgICAgICAgIHBvcnQgPSAwLCBwaXJxID0gYmluZC0+
cGlycTsKICAgICBsb25nICAgICAgICAgICByYzsKKyAgICB1bnNpZ25lZCBs
b25nICBmbGFnczsKIAogICAgIGlmICggKHBpcnEgPCAwKSB8fCAocGlycSA+
PSBkLT5ucl9waXJxcykgKQogICAgICAgICByZXR1cm4gLUVJTlZBTDsKQEAg
LTU0MSwxNCArNTUyLDE0IEBAIHN0YXRpYyBsb25nIGV2dGNobl9iaW5kX3Bp
cnEoZXZ0Y2huX2JpbmQKICAgICAgICAgZ290byBvdXQ7CiAgICAgfQogCi0g
ICAgc3Bpbl9sb2NrKCZjaG4tPmxvY2spOworICAgIHNwaW5fbG9ja19pcnFz
YXZlKCZjaG4tPmxvY2ssIGZsYWdzKTsKIAogICAgIGNobi0+c3RhdGUgID0g
RUNTX1BJUlE7CiAgICAgY2huLT51LnBpcnEuaXJxID0gcGlycTsKICAgICBs
aW5rX3BpcnFfcG9ydChwb3J0LCBjaG4sIHYpOwogICAgIGV2dGNobl9wb3J0
X2luaXQoZCwgY2huKTsKIAotICAgIHNwaW5fdW5sb2NrKCZjaG4tPmxvY2sp
OworICAgIHNwaW5fdW5sb2NrX2lycXJlc3RvcmUoJmNobi0+bG9jaywgZmxh
Z3MpOwogCiAgICAgYmluZC0+cG9ydCA9IHBvcnQ7CiAKQEAgLTU2OSw2ICs1
ODAsNyBAQCBpbnQgZXZ0Y2huX2Nsb3NlKHN0cnVjdCBkb21haW4gKmQxLCBp
bnQKICAgICBzdHJ1Y3QgZXZ0Y2huICpjaG4xLCAqY2huMjsKICAgICBpbnQg
ICAgICAgICAgICBwb3J0MjsKICAgICBsb25nICAgICAgICAgICByYyA9IDA7
CisgICAgdW5zaWduZWQgbG9uZyAgZmxhZ3M7CiAKICBhZ2FpbjoKICAgICBz
cGluX2xvY2soJmQxLT5ldmVudF9sb2NrKTsKQEAgLTY2OCwxNCArNjgwLDE0
IEBAIGludCBldnRjaG5fY2xvc2Uoc3RydWN0IGRvbWFpbiAqZDEsIGludAog
ICAgICAgICBCVUdfT04oY2huMi0+c3RhdGUgIT0gRUNTX0lOVEVSRE9NQUlO
KTsKICAgICAgICAgQlVHX09OKGNobjItPnUuaW50ZXJkb21haW4ucmVtb3Rl
X2RvbSAhPSBkMSk7CiAKLSAgICAgICAgZG91YmxlX2V2dGNobl9sb2NrKGNo
bjEsIGNobjIpOworICAgICAgICBmbGFncyA9IGRvdWJsZV9ldnRjaG5fbG9j
ayhjaG4xLCBjaG4yKTsKIAogICAgICAgICBldnRjaG5fZnJlZShkMSwgY2hu
MSk7CiAKICAgICAgICAgY2huMi0+c3RhdGUgPSBFQ1NfVU5CT1VORDsKICAg
ICAgICAgY2huMi0+dS51bmJvdW5kLnJlbW90ZV9kb21pZCA9IGQxLT5kb21h
aW5faWQ7CiAKLSAgICAgICAgZG91YmxlX2V2dGNobl91bmxvY2soY2huMSwg
Y2huMik7CisgICAgICAgIGRvdWJsZV9ldnRjaG5fdW5sb2NrKGNobjEsIGNo
bjIsIGZsYWdzKTsKIAogICAgICAgICBnb3RvIG91dDsKIApAQCAtNjgzLDkg
KzY5NSw5IEBAIGludCBldnRjaG5fY2xvc2Uoc3RydWN0IGRvbWFpbiAqZDEs
IGludAogICAgICAgICBCVUcoKTsKICAgICB9CiAKLSAgICBzcGluX2xvY2so
JmNobjEtPmxvY2spOworICAgIHNwaW5fbG9ja19pcnFzYXZlKCZjaG4xLT5s
b2NrLCBmbGFncyk7CiAgICAgZXZ0Y2huX2ZyZWUoZDEsIGNobjEpOwotICAg
IHNwaW5fdW5sb2NrKCZjaG4xLT5sb2NrKTsKKyAgICBzcGluX3VubG9ja19p
cnFyZXN0b3JlKCZjaG4xLT5sb2NrLCBmbGFncyk7CiAKICBvdXQ6CiAgICAg
aWYgKCBkMiAhPSBOVUxMICkKQEAgLTcwNSwxMyArNzE3LDE0IEBAIGludCBl
dnRjaG5fc2VuZChzdHJ1Y3QgZG9tYWluICpsZCwgdW5zaWcKICAgICBzdHJ1
Y3QgZXZ0Y2huICpsY2huLCAqcmNobjsKICAgICBzdHJ1Y3QgZG9tYWluICpy
ZDsKICAgICBpbnQgICAgICAgICAgICBycG9ydCwgcmV0ID0gMDsKKyAgICB1
bnNpZ25lZCBsb25nICBmbGFnczsKIAogICAgIGlmICggIXBvcnRfaXNfdmFs
aWQobGQsIGxwb3J0KSApCiAgICAgICAgIHJldHVybiAtRUlOVkFMOwogCiAg
ICAgbGNobiA9IGV2dGNobl9mcm9tX3BvcnQobGQsIGxwb3J0KTsKIAotICAg
IHNwaW5fbG9jaygmbGNobi0+bG9jayk7CisgICAgc3Bpbl9sb2NrX2lycXNh
dmUoJmxjaG4tPmxvY2ssIGZsYWdzKTsKIAogICAgIC8qIEd1ZXN0IGNhbm5v
dCBzZW5kIHZpYSBhIFhlbi1hdHRhY2hlZCBldmVudCBjaGFubmVsLiAqLwog
ICAgIGlmICggdW5saWtlbHkoY29uc3VtZXJfaXNfeGVuKGxjaG4pKSApCkBA
IC03NDYsNyArNzU5LDcgQEAgaW50IGV2dGNobl9zZW5kKHN0cnVjdCBkb21h
aW4gKmxkLCB1bnNpZwogICAgIH0KIAogb3V0OgotICAgIHNwaW5fdW5sb2Nr
KCZsY2huLT5sb2NrKTsKKyAgICBzcGluX3VubG9ja19pcnFyZXN0b3JlKCZs
Y2huLT5sb2NrLCBmbGFncyk7CiAKICAgICByZXR1cm4gcmV0OwogfQpAQCAt
MTIzOCw2ICsxMjUxLDcgQEAgaW50IGFsbG9jX3VuYm91bmRfeGVuX2V2ZW50
X2NoYW5uZWwoCiB7CiAgICAgc3RydWN0IGV2dGNobiAqY2huOwogICAgIGlu
dCAgICAgICAgICAgIHBvcnQsIHJjOworICAgIHVuc2lnbmVkIGxvbmcgIGZs
YWdzOwogCiAgICAgc3Bpbl9sb2NrKCZsZC0+ZXZlbnRfbG9jayk7CiAKQEAg
LTEyNTAsMTQgKzEyNjQsMTQgQEAgaW50IGFsbG9jX3VuYm91bmRfeGVuX2V2
ZW50X2NoYW5uZWwoCiAgICAgaWYgKCByYyApCiAgICAgICAgIGdvdG8gb3V0
OwogCi0gICAgc3Bpbl9sb2NrKCZjaG4tPmxvY2spOworICAgIHNwaW5fbG9j
a19pcnFzYXZlKCZjaG4tPmxvY2ssIGZsYWdzKTsKIAogICAgIGNobi0+c3Rh
dGUgPSBFQ1NfVU5CT1VORDsKICAgICBjaG4tPnhlbl9jb25zdW1lciA9IGdl
dF94ZW5fY29uc3VtZXIobm90aWZpY2F0aW9uX2ZuKTsKICAgICBjaG4tPm5v
dGlmeV92Y3B1X2lkID0gbHZjcHU7CiAgICAgY2huLT51LnVuYm91bmQucmVt
b3RlX2RvbWlkID0gcmVtb3RlX2RvbWlkOwogCi0gICAgc3Bpbl91bmxvY2so
JmNobi0+bG9jayk7CisgICAgc3Bpbl91bmxvY2tfaXJxcmVzdG9yZSgmY2hu
LT5sb2NrLCBmbGFncyk7CiAKICAgICB3cml0ZV9hdG9taWMoJmxkLT54ZW5f
ZXZ0Y2hucywgbGQtPnhlbl9ldnRjaG5zICsgMSk7CiAKQEAgLTEyODAsMTEg
KzEyOTQsMTIgQEAgdm9pZCBub3RpZnlfdmlhX3hlbl9ldmVudF9jaGFubmVs
KHN0cnVjdAogewogICAgIHN0cnVjdCBldnRjaG4gKmxjaG4sICpyY2huOwog
ICAgIHN0cnVjdCBkb21haW4gKnJkOworICAgIHVuc2lnbmVkIGxvbmcgZmxh
Z3M7CiAKICAgICBBU1NFUlQocG9ydF9pc192YWxpZChsZCwgbHBvcnQpKTsK
ICAgICBsY2huID0gZXZ0Y2huX2Zyb21fcG9ydChsZCwgbHBvcnQpOwogCi0g
ICAgc3Bpbl9sb2NrKCZsY2huLT5sb2NrKTsKKyAgICBzcGluX2xvY2tfaXJx
c2F2ZSgmbGNobi0+bG9jaywgZmxhZ3MpOwogCiAgICAgaWYgKCBsaWtlbHko
bGNobi0+c3RhdGUgPT0gRUNTX0lOVEVSRE9NQUlOKSApCiAgICAgewpAQCAt
MTI5NCw3ICsxMzA5LDcgQEAgdm9pZCBub3RpZnlfdmlhX3hlbl9ldmVudF9j
aGFubmVsKHN0cnVjdAogICAgICAgICBldnRjaG5fcG9ydF9zZXRfcGVuZGlu
ZyhyZCwgcmNobi0+bm90aWZ5X3ZjcHVfaWQsIHJjaG4pOwogICAgIH0KIAot
ICAgIHNwaW5fdW5sb2NrKCZsY2huLT5sb2NrKTsKKyAgICBzcGluX3VubG9j
a19pcnFyZXN0b3JlKCZsY2huLT5sb2NrLCBmbGFncyk7CiB9CiAKIHZvaWQg
ZXZ0Y2huX2NoZWNrX3BvbGxlcnMoc3RydWN0IGRvbWFpbiAqZCwgdW5zaWdu
ZWQgaW50IHBvcnQpCg==

--=separator
Content-Type: application/octet-stream; name="xsa343/xsa343-4.12-3.patch"
Content-Disposition: attachment; filename="xsa343/xsa343-4.12-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBldnRjaG46IGFkZHJlc3MgcmFjZXMgd2l0aCBldnRjaG5fcmVzZXQoKQoK
TmVpdGhlciBkLT5ldnRjaG5fcG9ydF9vcHMgbm9yIG1heF9ldnRjaG5zKGQp
IG1heSBiZSB1c2VkIGluIGFuIGVudGlyZWx5CmxvY2stbGVzcyBtYW5uZXIs
IGFzIGJvdGggbWF5IGNoYW5nZSBieSBhIHJhY2luZyBldnRjaG5fcmVzZXQo
KS4gSW4gdGhlCmNvbW1vbiBjYXNlLCBhdCBsZWFzdCBvbmUgb2YgdGhlIGRv
bWFpbidzIGV2ZW50IGxvY2sgb3IgdGhlIHBlci1jaGFubmVsCmxvY2sgbmVl
ZHMgdG8gYmUgaGVsZC4gSW4gdGhlIHNwZWNpZmljIGNhc2Ugb2YgdGhlIGlu
dGVyLWRvbWFpbiBzZW5kaW5nCmJ5IGV2dGNobl9zZW5kKCkgYW5kIG5vdGlm
eV92aWFfeGVuX2V2ZW50X2NoYW5uZWwoKSBob2xkaW5nIHRoZSBvdGhlcgpz
aWRlJ3MgcGVyLWNoYW5uZWwgbG9jayBpcyBzdWZmaWNpZW50LCBhcyB0aGUg
Y2hhbm5lbCBjYW4ndCBjaGFuZ2Ugc3RhdGUKd2l0aG91dCBib3RoIHBlci1j
aGFubmVsIGxvY2tzIGhlbGQuIFdpdGhvdXQgc3VjaCBhIGNoYW5uZWwgY2hh
bmdpbmcKc3RhdGUsIGV2dGNobl9yZXNldCgpIGNhbid0IGNvbXBsZXRlIHN1
Y2Nlc3NmdWxseS4KCkxvY2stZnJlZSBhY2Nlc3NlcyBjb250aW51ZSB0byBi
ZSBwZXJtaXR0ZWQgZm9yIHRoZSBzaGltIChjYWxsaW5nIHNvbWUKb3RoZXJ3
aXNlIGludGVybmFsIGV2ZW50IGNoYW5uZWwgZnVuY3Rpb25zKSwgYXMgdGhp
cyBoYXBwZW5zIHdoaWxlIHRoZQpkb21haW4gaXMgaW4gZWZmZWN0aXZlbHkg
c2luZ2xlLXRocmVhZGVkIG1vZGUuIFNwZWNpYWwgY2FyZSBhbHNvIG5lZWRz
CnRha2luZyBmb3IgdGhlIHNoaW0ncyBtYXJraW5nIG9mIGluLXVzZSBwb3J0
cyBhcyBFQ1NfUkVTRVJWRUQgKGFsbG93aW5nCnVzZSBvZiBzdWNoIHBvcnRz
IGluIHRoZSBzaGltIGNhc2UgaXMgb2theSBiZWNhdXNlIHN3aXRjaGluZyBp
bnRvIGFuZApoZW5jZSBhbHNvIG91dCBvZiBGSUZPIG1vZGUgaXMgaW1wb3Nz
aWJsZSB0aGVyZSkuCgpBcyBhIHNpZGUgZWZmZWN0LCBjZXJ0YWluIG9wZXJh
dGlvbnMgb24gWGVuIGJvdW5kIGV2ZW50IGNoYW5uZWxzIHdoaWNoCndlcmUg
bWlzdGFrZW5seSBwZXJtaXR0ZWQgc28gZmFyIChlLmcuIHVubWFzayBvciBw
b2xsKSB3aWxsIGJlIHJlZnVzZWQKbm93LgoKVGhpcyBpcyBwYXJ0IG9mIFhT
QS0zNDMuCgpSZXBvcnRlZC1ieTogSnVsaWVuIEdyYWxsIDxqZ3JhbGxAYW1h
em9uLmNvbT4KU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNo
QHN1c2UuY29tPgpBY2tlZC1ieTogSnVsaWVuIEdyYWxsIDxqZ3JhbGxAYW1h
em9uLmNvbT4KCi0tLSBhL3hlbi9hcmNoL3g4Ni9pcnEuYworKysgYi94ZW4v
YXJjaC94ODYvaXJxLmMKQEAgLTIzNjQsMTQgKzIzNjQsMjQgQEAgc3RhdGlj
IHZvaWQgZHVtcF9pcnFzKHVuc2lnbmVkIGNoYXIga2V5KQogCiAgICAgICAg
ICAgICBmb3IgKCBpID0gMDsgaSA8IGFjdGlvbi0+bnJfZ3Vlc3RzOyBpKysg
KQogICAgICAgICAgICAgeworICAgICAgICAgICAgICAgIHN0cnVjdCBldnRj
aG4gKmV2dGNobjsKKyAgICAgICAgICAgICAgICB1bnNpZ25lZCBpbnQgcGVu
ZGluZyA9IDIsIG1hc2tlZCA9IDI7CisKICAgICAgICAgICAgICAgICBkID0g
YWN0aW9uLT5ndWVzdFtpXTsKICAgICAgICAgICAgICAgICBwaXJxID0gZG9t
YWluX2lycV90b19waXJxKGQsIGlycSk7CiAgICAgICAgICAgICAgICAgaW5m
byA9IHBpcnFfaW5mbyhkLCBwaXJxKTsKKyAgICAgICAgICAgICAgICBldnRj
aG4gPSBldnRjaG5fZnJvbV9wb3J0KGQsIGluZm8tPmV2dGNobik7CisgICAg
ICAgICAgICAgICAgbG9jYWxfaXJxX2Rpc2FibGUoKTsKKyAgICAgICAgICAg
ICAgICBpZiAoIHNwaW5fdHJ5bG9jaygmZXZ0Y2huLT5sb2NrKSApCisgICAg
ICAgICAgICAgICAgeworICAgICAgICAgICAgICAgICAgICBwZW5kaW5nID0g
ZXZ0Y2huX2lzX3BlbmRpbmcoZCwgZXZ0Y2huKTsKKyAgICAgICAgICAgICAg
ICAgICAgbWFza2VkID0gZXZ0Y2huX2lzX21hc2tlZChkLCBldnRjaG4pOwor
ICAgICAgICAgICAgICAgICAgICBzcGluX3VubG9jaygmZXZ0Y2huLT5sb2Nr
KTsKKyAgICAgICAgICAgICAgICB9CisgICAgICAgICAgICAgICAgbG9jYWxf
aXJxX2VuYWJsZSgpOwogICAgICAgICAgICAgICAgIHByaW50aygiJXU6JTNk
KCVjJWMlYykiLAotICAgICAgICAgICAgICAgICAgICAgICBkLT5kb21haW5f
aWQsIHBpcnEsCi0gICAgICAgICAgICAgICAgICAgICAgIGV2dGNobl9wb3J0
X2lzX3BlbmRpbmcoZCwgaW5mby0+ZXZ0Y2huKSA/ICdQJyA6ICctJywKLSAg
ICAgICAgICAgICAgICAgICAgICAgZXZ0Y2huX3BvcnRfaXNfbWFza2VkKGQs
IGluZm8tPmV2dGNobikgPyAnTScgOiAnLScsCi0gICAgICAgICAgICAgICAg
ICAgICAgIChpbmZvLT5tYXNrZWQgPyAnTScgOiAnLScpKTsKKyAgICAgICAg
ICAgICAgICAgICAgICAgZC0+ZG9tYWluX2lkLCBwaXJxLCAiLVA/IltwZW5k
aW5nXSwKKyAgICAgICAgICAgICAgICAgICAgICAgIi1NPyJbbWFza2VkXSwg
aW5mby0+bWFza2VkID8gJ00nIDogJy0nKTsKICAgICAgICAgICAgICAgICBp
ZiAoIGkgIT0gYWN0aW9uLT5ucl9ndWVzdHMgKQogICAgICAgICAgICAgICAg
ICAgICBwcmludGsoIiwiKTsKICAgICAgICAgICAgIH0KLS0tIGEveGVuL2Fy
Y2gveDg2L3B2L3NoaW0uYworKysgYi94ZW4vYXJjaC94ODYvcHYvc2hpbS5j
CkBAIC02NjIsOCArNjYyLDExIEBAIHZvaWQgcHZfc2hpbV9pbmplY3RfZXZ0
Y2huKHVuc2lnbmVkIGludAogICAgIGlmICggcG9ydF9pc192YWxpZChndWVz
dCwgcG9ydCkgKQogICAgIHsKICAgICAgICAgc3RydWN0IGV2dGNobiAqY2hu
ID0gZXZ0Y2huX2Zyb21fcG9ydChndWVzdCwgcG9ydCk7CisgICAgICAgIHVu
c2lnbmVkIGxvbmcgZmxhZ3M7CiAKKyAgICAgICAgc3Bpbl9sb2NrX2lycXNh
dmUoJmNobi0+bG9jaywgZmxhZ3MpOwogICAgICAgICBldnRjaG5fcG9ydF9z
ZXRfcGVuZGluZyhndWVzdCwgY2huLT5ub3RpZnlfdmNwdV9pZCwgY2huKTsK
KyAgICAgICAgc3Bpbl91bmxvY2tfaXJxcmVzdG9yZSgmY2huLT5sb2NrLCBm
bGFncyk7CiAgICAgfQogfQogCi0tLSBhL3hlbi9jb21tb24vZXZlbnRfMmwu
YworKysgYi94ZW4vY29tbW9uL2V2ZW50XzJsLmMKQEAgLTYzLDggKzYzLDEw
IEBAIHN0YXRpYyB2b2lkIGV2dGNobl8ybF91bm1hc2soc3RydWN0IGRvbWEK
ICAgICB9CiB9CiAKLXN0YXRpYyBib29sIGV2dGNobl8ybF9pc19wZW5kaW5n
KGNvbnN0IHN0cnVjdCBkb21haW4gKmQsIGV2dGNobl9wb3J0X3QgcG9ydCkK
K3N0YXRpYyBib29sIGV2dGNobl8ybF9pc19wZW5kaW5nKGNvbnN0IHN0cnVj
dCBkb21haW4gKmQsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBjb25zdCBzdHJ1Y3QgZXZ0Y2huICpldnRjaG4pCiB7CisgICAgZXZ0Y2hu
X3BvcnRfdCBwb3J0ID0gZXZ0Y2huLT5wb3J0OwogICAgIHVuc2lnbmVkIGlu
dCBtYXhfcG9ydHMgPSBCSVRTX1BFUl9FVlRDSE5fV09SRChkKSAqIEJJVFNf
UEVSX0VWVENITl9XT1JEKGQpOwogCiAgICAgQVNTRVJUKHBvcnQgPCBtYXhf
cG9ydHMpOwpAQCAtNzIsOCArNzQsMTAgQEAgc3RhdGljIGJvb2wgZXZ0Y2hu
XzJsX2lzX3BlbmRpbmcoY29uc3QgcwogICAgICAgICAgICAgZ3Vlc3RfdGVz
dF9iaXQoZCwgcG9ydCwgJnNoYXJlZF9pbmZvKGQsIGV2dGNobl9wZW5kaW5n
KSkpOwogfQogCi1zdGF0aWMgYm9vbCBldnRjaG5fMmxfaXNfbWFza2VkKGNv
bnN0IHN0cnVjdCBkb21haW4gKmQsIGV2dGNobl9wb3J0X3QgcG9ydCkKK3N0
YXRpYyBib29sIGV2dGNobl8ybF9pc19tYXNrZWQoY29uc3Qgc3RydWN0IGRv
bWFpbiAqZCwKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgY29u
c3Qgc3RydWN0IGV2dGNobiAqZXZ0Y2huKQogeworICAgIGV2dGNobl9wb3J0
X3QgcG9ydCA9IGV2dGNobi0+cG9ydDsKICAgICB1bnNpZ25lZCBpbnQgbWF4
X3BvcnRzID0gQklUU19QRVJfRVZUQ0hOX1dPUkQoZCkgKiBCSVRTX1BFUl9F
VlRDSE5fV09SRChkKTsKIAogICAgIEFTU0VSVChwb3J0IDwgbWF4X3BvcnRz
KTsKLS0tIGEveGVuL2NvbW1vbi9ldmVudF9jaGFubmVsLmMKKysrIGIveGVu
L2NvbW1vbi9ldmVudF9jaGFubmVsLmMKQEAgLTE1Niw4ICsxNTYsOSBAQCBp
bnQgZXZ0Y2huX2FsbG9jYXRlX3BvcnQoc3RydWN0IGRvbWFpbiAqCiAKICAg
ICBpZiAoIHBvcnRfaXNfdmFsaWQoZCwgcG9ydCkgKQogICAgIHsKLSAgICAg
ICAgaWYgKCBldnRjaG5fZnJvbV9wb3J0KGQsIHBvcnQpLT5zdGF0ZSAhPSBF
Q1NfRlJFRSB8fAotICAgICAgICAgICAgIGV2dGNobl9wb3J0X2lzX2J1c3ko
ZCwgcG9ydCkgKQorICAgICAgICBjb25zdCBzdHJ1Y3QgZXZ0Y2huICpjaG4g
PSBldnRjaG5fZnJvbV9wb3J0KGQsIHBvcnQpOworCisgICAgICAgIGlmICgg
Y2huLT5zdGF0ZSAhPSBFQ1NfRlJFRSB8fCBldnRjaG5faXNfYnVzeShkLCBj
aG4pICkKICAgICAgICAgICAgIHJldHVybiAtRUJVU1k7CiAgICAgfQogICAg
IGVsc2UKQEAgLTc3NCw2ICs3NzUsNyBAQCB2b2lkIHNlbmRfZ3Vlc3RfdmNw
dV92aXJxKHN0cnVjdCB2Y3B1ICp2CiAgICAgdW5zaWduZWQgbG9uZyBmbGFn
czsKICAgICBpbnQgcG9ydDsKICAgICBzdHJ1Y3QgZG9tYWluICpkOworICAg
IHN0cnVjdCBldnRjaG4gKmNobjsKIAogICAgIEFTU0VSVCghdmlycV9pc19n
bG9iYWwodmlycSkpOwogCkBAIC03ODQsNyArNzg2LDEwIEBAIHZvaWQgc2Vu
ZF9ndWVzdF92Y3B1X3ZpcnEoc3RydWN0IHZjcHUgKnYKICAgICAgICAgZ290
byBvdXQ7CiAKICAgICBkID0gdi0+ZG9tYWluOwotICAgIGV2dGNobl9wb3J0
X3NldF9wZW5kaW5nKGQsIHYtPnZjcHVfaWQsIGV2dGNobl9mcm9tX3BvcnQo
ZCwgcG9ydCkpOworICAgIGNobiA9IGV2dGNobl9mcm9tX3BvcnQoZCwgcG9y
dCk7CisgICAgc3Bpbl9sb2NrKCZjaG4tPmxvY2spOworICAgIGV2dGNobl9w
b3J0X3NldF9wZW5kaW5nKGQsIHYtPnZjcHVfaWQsIGNobik7CisgICAgc3Bp
bl91bmxvY2soJmNobi0+bG9jayk7CiAKICBvdXQ6CiAgICAgc3Bpbl91bmxv
Y2tfaXJxcmVzdG9yZSgmdi0+dmlycV9sb2NrLCBmbGFncyk7CkBAIC04MTMs
NyArODE4LDkgQEAgdm9pZCBzZW5kX2d1ZXN0X2dsb2JhbF92aXJxKHN0cnVj
dCBkb21haQogICAgICAgICBnb3RvIG91dDsKIAogICAgIGNobiA9IGV2dGNo
bl9mcm9tX3BvcnQoZCwgcG9ydCk7CisgICAgc3Bpbl9sb2NrKCZjaG4tPmxv
Y2spOwogICAgIGV2dGNobl9wb3J0X3NldF9wZW5kaW5nKGQsIGNobi0+bm90
aWZ5X3ZjcHVfaWQsIGNobik7CisgICAgc3Bpbl91bmxvY2soJmNobi0+bG9j
ayk7CiAKICBvdXQ6CiAgICAgc3Bpbl91bmxvY2tfaXJxcmVzdG9yZSgmdi0+
dmlycV9sb2NrLCBmbGFncyk7CkBAIC04MjMsNiArODMwLDcgQEAgdm9pZCBz
ZW5kX2d1ZXN0X3BpcnEoc3RydWN0IGRvbWFpbiAqZCwgYwogewogICAgIGlu
dCBwb3J0OwogICAgIHN0cnVjdCBldnRjaG4gKmNobjsKKyAgICB1bnNpZ25l
ZCBsb25nIGZsYWdzOwogCiAgICAgLyoKICAgICAgKiBQViBndWVzdHM6IEl0
IHNob3VsZCBub3QgYmUgcG9zc2libGUgdG8gcmFjZSB3aXRoIF9fZXZ0Y2hu
X2Nsb3NlKCkuIFRoZQpAQCAtODM3LDcgKzg0NSw5IEBAIHZvaWQgc2VuZF9n
dWVzdF9waXJxKHN0cnVjdCBkb21haW4gKmQsIGMKICAgICB9CiAKICAgICBj
aG4gPSBldnRjaG5fZnJvbV9wb3J0KGQsIHBvcnQpOworICAgIHNwaW5fbG9j
a19pcnFzYXZlKCZjaG4tPmxvY2ssIGZsYWdzKTsKICAgICBldnRjaG5fcG9y
dF9zZXRfcGVuZGluZyhkLCBjaG4tPm5vdGlmeV92Y3B1X2lkLCBjaG4pOwor
ICAgIHNwaW5fdW5sb2NrX2lycXJlc3RvcmUoJmNobi0+bG9jaywgZmxhZ3Mp
OwogfQogCiBzdGF0aWMgc3RydWN0IGRvbWFpbiAqZ2xvYmFsX3ZpcnFfaGFu
ZGxlcnNbTlJfVklSUVNdIF9fcmVhZF9tb3N0bHk7CkBAIC0xMDM0LDEyICsx
MDQ0LDE1IEBAIGludCBldnRjaG5fdW5tYXNrKHVuc2lnbmVkIGludCBwb3J0
KQogewogICAgIHN0cnVjdCBkb21haW4gKmQgPSBjdXJyZW50LT5kb21haW47
CiAgICAgc3RydWN0IGV2dGNobiAqZXZ0Y2huOworICAgIHVuc2lnbmVkIGxv
bmcgZmxhZ3M7CiAKICAgICBpZiAoIHVubGlrZWx5KCFwb3J0X2lzX3ZhbGlk
KGQsIHBvcnQpKSApCiAgICAgICAgIHJldHVybiAtRUlOVkFMOwogCiAgICAg
ZXZ0Y2huID0gZXZ0Y2huX2Zyb21fcG9ydChkLCBwb3J0KTsKKyAgICBzcGlu
X2xvY2tfaXJxc2F2ZSgmZXZ0Y2huLT5sb2NrLCBmbGFncyk7CiAgICAgZXZ0
Y2huX3BvcnRfdW5tYXNrKGQsIGV2dGNobik7CisgICAgc3Bpbl91bmxvY2tf
aXJxcmVzdG9yZSgmZXZ0Y2huLT5sb2NrLCBmbGFncyk7CiAKICAgICByZXR1
cm4gMDsKIH0KQEAgLTE0NDksOCArMTQ2Miw4IEBAIHN0YXRpYyB2b2lkIGRv
bWFpbl9kdW1wX2V2dGNobl9pbmZvKHN0cnUKIAogICAgICAgICBwcmludGso
IiAgICAlNHUgWyVkLyVkLyIsCiAgICAgICAgICAgICAgICBwb3J0LAotICAg
ICAgICAgICAgICAgZXZ0Y2huX3BvcnRfaXNfcGVuZGluZyhkLCBwb3J0KSwK
LSAgICAgICAgICAgICAgIGV2dGNobl9wb3J0X2lzX21hc2tlZChkLCBwb3J0
KSk7CisgICAgICAgICAgICAgICBldnRjaG5faXNfcGVuZGluZyhkLCBjaG4p
LAorICAgICAgICAgICAgICAgZXZ0Y2huX2lzX21hc2tlZChkLCBjaG4pKTsK
ICAgICAgICAgZXZ0Y2huX3BvcnRfcHJpbnRfc3RhdGUoZCwgY2huKTsKICAg
ICAgICAgcHJpbnRrKCJdOiBzPSVkIG49JWQgeD0lZCIsCiAgICAgICAgICAg
ICAgICBjaG4tPnN0YXRlLCBjaG4tPm5vdGlmeV92Y3B1X2lkLCBjaG4tPnhl
bl9jb25zdW1lcik7Ci0tLSBhL3hlbi9jb21tb24vZXZlbnRfZmlmby5jCisr
KyBiL3hlbi9jb21tb24vZXZlbnRfZmlmby5jCkBAIC0yOTYsMjMgKzI5Niwy
NiBAQCBzdGF0aWMgdm9pZCBldnRjaG5fZmlmb191bm1hc2soc3RydWN0IGRv
CiAgICAgICAgIGV2dGNobl9maWZvX3NldF9wZW5kaW5nKHYsIGV2dGNobik7
CiB9CiAKLXN0YXRpYyBib29sIGV2dGNobl9maWZvX2lzX3BlbmRpbmcoY29u
c3Qgc3RydWN0IGRvbWFpbiAqZCwgZXZ0Y2huX3BvcnRfdCBwb3J0KQorc3Rh
dGljIGJvb2wgZXZ0Y2huX2ZpZm9faXNfcGVuZGluZyhjb25zdCBzdHJ1Y3Qg
ZG9tYWluICpkLAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBjb25zdCBzdHJ1Y3QgZXZ0Y2huICpldnRjaG4pCiB7Ci0gICAgY29uc3Qg
ZXZlbnRfd29yZF90ICp3b3JkID0gZXZ0Y2huX2ZpZm9fd29yZF9mcm9tX3Bv
cnQoZCwgcG9ydCk7CisgICAgY29uc3QgZXZlbnRfd29yZF90ICp3b3JkID0g
ZXZ0Y2huX2ZpZm9fd29yZF9mcm9tX3BvcnQoZCwgZXZ0Y2huLT5wb3J0KTsK
IAogICAgIHJldHVybiB3b3JkICYmIGd1ZXN0X3Rlc3RfYml0KGQsIEVWVENI
Tl9GSUZPX1BFTkRJTkcsIHdvcmQpOwogfQogCi1zdGF0aWMgYm9vbF90IGV2
dGNobl9maWZvX2lzX21hc2tlZChjb25zdCBzdHJ1Y3QgZG9tYWluICpkLCBl
dnRjaG5fcG9ydF90IHBvcnQpCitzdGF0aWMgYm9vbF90IGV2dGNobl9maWZv
X2lzX21hc2tlZChjb25zdCBzdHJ1Y3QgZG9tYWluICpkLAorICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgY29uc3Qgc3RydWN0IGV2dGNo
biAqZXZ0Y2huKQogewotICAgIGNvbnN0IGV2ZW50X3dvcmRfdCAqd29yZCA9
IGV2dGNobl9maWZvX3dvcmRfZnJvbV9wb3J0KGQsIHBvcnQpOworICAgIGNv
bnN0IGV2ZW50X3dvcmRfdCAqd29yZCA9IGV2dGNobl9maWZvX3dvcmRfZnJv
bV9wb3J0KGQsIGV2dGNobi0+cG9ydCk7CiAKICAgICByZXR1cm4gIXdvcmQg
fHwgZ3Vlc3RfdGVzdF9iaXQoZCwgRVZUQ0hOX0ZJRk9fTUFTS0VELCB3b3Jk
KTsKIH0KIAotc3RhdGljIGJvb2xfdCBldnRjaG5fZmlmb19pc19idXN5KGNv
bnN0IHN0cnVjdCBkb21haW4gKmQsIGV2dGNobl9wb3J0X3QgcG9ydCkKK3N0
YXRpYyBib29sX3QgZXZ0Y2huX2ZpZm9faXNfYnVzeShjb25zdCBzdHJ1Y3Qg
ZG9tYWluICpkLAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IGNvbnN0IHN0cnVjdCBldnRjaG4gKmV2dGNobikKIHsKLSAgICBjb25zdCBl
dmVudF93b3JkX3QgKndvcmQgPSBldnRjaG5fZmlmb193b3JkX2Zyb21fcG9y
dChkLCBwb3J0KTsKKyAgICBjb25zdCBldmVudF93b3JkX3QgKndvcmQgPSBl
dnRjaG5fZmlmb193b3JkX2Zyb21fcG9ydChkLCBldnRjaG4tPnBvcnQpOwog
CiAgICAgcmV0dXJuIHdvcmQgJiYgZ3Vlc3RfdGVzdF9iaXQoZCwgRVZUQ0hO
X0ZJRk9fTElOS0VELCB3b3JkKTsKIH0KLS0tIGEveGVuL2luY2x1ZGUvYXNt
LXg4Ni9ldmVudC5oCisrKyBiL3hlbi9pbmNsdWRlL2FzbS14ODYvZXZlbnQu
aApAQCAtNDcsNCArNDcsMTAgQEAgc3RhdGljIGlubGluZSBib29sIGFyY2hf
dmlycV9pc19nbG9iYWwodQogICAgIHJldHVybiB0cnVlOwogfQogCisjaWZk
ZWYgQ09ORklHX1BWX1NISU0KKyMgaW5jbHVkZSA8YXNtL3B2L3NoaW0uaD4K
KyMgZGVmaW5lIGFyY2hfZXZ0Y2huX2lzX3NwZWNpYWwoY2huKSBcCisgICAg
ICAgICAgICAgKHB2X3NoaW0gJiYgKGNobiktPnBvcnQgJiYgKGNobiktPnN0
YXRlID09IEVDU19SRVNFUlZFRCkKKyNlbmRpZgorCiAjZW5kaWYKLS0tIGEv
eGVuL2luY2x1ZGUveGVuL2V2ZW50LmgKKysrIGIveGVuL2luY2x1ZGUveGVu
L2V2ZW50LmgKQEAgLTEzMyw2ICsxMzMsMjQgQEAgc3RhdGljIGlubGluZSBz
dHJ1Y3QgZXZ0Y2huICpldnRjaG5fZnJvbQogICAgIHJldHVybiBidWNrZXRf
ZnJvbV9wb3J0KGQsIHApICsgKHAgJSBFVlRDSE5TX1BFUl9CVUNLRVQpOwog
fQogCisvKgorICogInVzYWJsZSIgYXMgaW4gImJ5IGEgZ3Vlc3QiLCBpLmUu
IFhlbiBjb25zdW1lZCBjaGFubmVscyBhcmUgYXNzdW1lZCB0byBiZQorICog
dGFrZW4gY2FyZSBvZiBzZXBhcmF0ZWx5IHdoZXJlIHVzZWQgZm9yIFhlbidz
IGludGVybmFsIHB1cnBvc2VzLgorICovCitzdGF0aWMgYm9vbCBldnRjaG5f
dXNhYmxlKGNvbnN0IHN0cnVjdCBldnRjaG4gKmV2dGNobikKK3sKKyAgICBp
ZiAoIGV2dGNobi0+eGVuX2NvbnN1bWVyICkKKyAgICAgICAgcmV0dXJuIGZh
bHNlOworCisjaWZkZWYgYXJjaF9ldnRjaG5faXNfc3BlY2lhbAorICAgIGlm
ICggYXJjaF9ldnRjaG5faXNfc3BlY2lhbChldnRjaG4pICkKKyAgICAgICAg
cmV0dXJuIHRydWU7CisjZW5kaWYKKworICAgIEJVSUxEX0JVR19PTihFQ1Nf
RlJFRSA+IEVDU19SRVNFUlZFRCk7CisgICAgcmV0dXJuIGV2dGNobi0+c3Rh
dGUgPiBFQ1NfUkVTRVJWRUQ7Cit9CisKIC8qIFdhaXQgb24gYSBYZW4tYXR0
YWNoZWQgZXZlbnQgY2hhbm5lbC4gKi8KICNkZWZpbmUgd2FpdF9vbl94ZW5f
ZXZlbnRfY2hhbm5lbChwb3J0LCBjb25kaXRpb24pICAgICAgICAgICAgICAg
ICAgICAgIFwKICAgICBkbyB7ICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKQEAgLTE2
NSwxOSArMTgzLDI0IEBAIGludCBldnRjaG5fcmVzZXQoc3RydWN0IGRvbWFp
biAqZCk7CiAKIC8qCiAgKiBMb3ctbGV2ZWwgZXZlbnQgY2hhbm5lbCBwb3J0
IG9wcy4KKyAqCisgKiBBbGwgaG9va3MgaGF2ZSB0byBiZSBjYWxsZWQgd2l0
aCBhIGxvY2sgaGVsZCB3aGljaCBwcmV2ZW50cyB0aGUgY2hhbm5lbAorICog
ZnJvbSBjaGFuZ2luZyBzdGF0ZS4gVGhpcyBtYXkgYmUgdGhlIGRvbWFpbiBl
dmVudCBsb2NrLCB0aGUgcGVyLWNoYW5uZWwKKyAqIGxvY2ssIG9yIGluIHRo
ZSBjYXNlIG9mIHNlbmRpbmcgaW50ZXJkb21haW4gZXZlbnRzIGFsc28gdGhl
IG90aGVyIHNpZGUncworICogcGVyLWNoYW5uZWwgbG9jay4gRXhjZXB0aW9u
cyBhcHBseSBpbiBjZXJ0YWluIGNhc2VzIGZvciB0aGUgUFYgc2hpbS4KICAq
Lwogc3RydWN0IGV2dGNobl9wb3J0X29wcyB7CiAgICAgdm9pZCAoKmluaXQp
KHN0cnVjdCBkb21haW4gKmQsIHN0cnVjdCBldnRjaG4gKmV2dGNobik7CiAg
ICAgdm9pZCAoKnNldF9wZW5kaW5nKShzdHJ1Y3QgdmNwdSAqdiwgc3RydWN0
IGV2dGNobiAqZXZ0Y2huKTsKICAgICB2b2lkICgqY2xlYXJfcGVuZGluZyko
c3RydWN0IGRvbWFpbiAqZCwgc3RydWN0IGV2dGNobiAqZXZ0Y2huKTsKICAg
ICB2b2lkICgqdW5tYXNrKShzdHJ1Y3QgZG9tYWluICpkLCBzdHJ1Y3QgZXZ0
Y2huICpldnRjaG4pOwotICAgIGJvb2wgKCppc19wZW5kaW5nKShjb25zdCBz
dHJ1Y3QgZG9tYWluICpkLCBldnRjaG5fcG9ydF90IHBvcnQpOwotICAgIGJv
b2wgKCppc19tYXNrZWQpKGNvbnN0IHN0cnVjdCBkb21haW4gKmQsIGV2dGNo
bl9wb3J0X3QgcG9ydCk7CisgICAgYm9vbCAoKmlzX3BlbmRpbmcpKGNvbnN0
IHN0cnVjdCBkb21haW4gKmQsIGNvbnN0IHN0cnVjdCBldnRjaG4gKmV2dGNo
bik7CisgICAgYm9vbCAoKmlzX21hc2tlZCkoY29uc3Qgc3RydWN0IGRvbWFp
biAqZCwgY29uc3Qgc3RydWN0IGV2dGNobiAqZXZ0Y2huKTsKICAgICAvKgog
ICAgICAqIElzIHRoZSBwb3J0IHVuYXZhaWxhYmxlIGJlY2F1c2UgaXQncyBz
dGlsbCBiZWluZyBjbGVhbmVkIHVwCiAgICAgICogYWZ0ZXIgYmVpbmcgY2xv
c2VkPwogICAgICAqLwotICAgIGJvb2wgKCppc19idXN5KShjb25zdCBzdHJ1
Y3QgZG9tYWluICpkLCBldnRjaG5fcG9ydF90IHBvcnQpOworICAgIGJvb2wg
KCppc19idXN5KShjb25zdCBzdHJ1Y3QgZG9tYWluICpkLCBjb25zdCBzdHJ1
Y3QgZXZ0Y2huICpldnRjaG4pOwogICAgIGludCAoKnNldF9wcmlvcml0eSko
c3RydWN0IGRvbWFpbiAqZCwgc3RydWN0IGV2dGNobiAqZXZ0Y2huLAogICAg
ICAgICAgICAgICAgICAgICAgICAgdW5zaWduZWQgaW50IHByaW9yaXR5KTsK
ICAgICB2b2lkICgqcHJpbnRfc3RhdGUpKHN0cnVjdCBkb21haW4gKmQsIGNv
bnN0IHN0cnVjdCBldnRjaG4gKmV2dGNobik7CkBAIC0xOTMsMzggKzIxNiw2
NyBAQCBzdGF0aWMgaW5saW5lIHZvaWQgZXZ0Y2huX3BvcnRfc2V0X3BlbmRp
CiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
dW5zaWduZWQgaW50IHZjcHVfaWQsCiAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgc3RydWN0IGV2dGNobiAqZXZ0Y2huKQog
ewotICAgIGQtPmV2dGNobl9wb3J0X29wcy0+c2V0X3BlbmRpbmcoZC0+dmNw
dVt2Y3B1X2lkXSwgZXZ0Y2huKTsKKyAgICBpZiAoIGV2dGNobl91c2FibGUo
ZXZ0Y2huKSApCisgICAgICAgIGQtPmV2dGNobl9wb3J0X29wcy0+c2V0X3Bl
bmRpbmcoZC0+dmNwdVt2Y3B1X2lkXSwgZXZ0Y2huKTsKIH0KIAogc3RhdGlj
IGlubGluZSB2b2lkIGV2dGNobl9wb3J0X2NsZWFyX3BlbmRpbmcoc3RydWN0
IGRvbWFpbiAqZCwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIHN0cnVjdCBldnRjaG4gKmV2dGNobikKIHsKLSAgICBk
LT5ldnRjaG5fcG9ydF9vcHMtPmNsZWFyX3BlbmRpbmcoZCwgZXZ0Y2huKTsK
KyAgICBpZiAoIGV2dGNobl91c2FibGUoZXZ0Y2huKSApCisgICAgICAgIGQt
PmV2dGNobl9wb3J0X29wcy0+Y2xlYXJfcGVuZGluZyhkLCBldnRjaG4pOwog
fQogCiBzdGF0aWMgaW5saW5lIHZvaWQgZXZ0Y2huX3BvcnRfdW5tYXNrKHN0
cnVjdCBkb21haW4gKmQsCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIHN0cnVjdCBldnRjaG4gKmV2dGNobikKIHsKLSAgICBkLT5l
dnRjaG5fcG9ydF9vcHMtPnVubWFzayhkLCBldnRjaG4pOworICAgIGlmICgg
ZXZ0Y2huX3VzYWJsZShldnRjaG4pICkKKyAgICAgICAgZC0+ZXZ0Y2huX3Bv
cnRfb3BzLT51bm1hc2soZCwgZXZ0Y2huKTsKIH0KIAotc3RhdGljIGlubGlu
ZSBib29sIGV2dGNobl9wb3J0X2lzX3BlbmRpbmcoY29uc3Qgc3RydWN0IGRv
bWFpbiAqZCwKLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIGV2dGNobl9wb3J0X3QgcG9ydCkKK3N0YXRpYyBpbmxpbmUgYm9v
bCBldnRjaG5faXNfcGVuZGluZyhjb25zdCBzdHJ1Y3QgZG9tYWluICpkLAor
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGNvbnN0IHN0
cnVjdCBldnRjaG4gKmV2dGNobikKIHsKLSAgICByZXR1cm4gZC0+ZXZ0Y2hu
X3BvcnRfb3BzLT5pc19wZW5kaW5nKGQsIHBvcnQpOworICAgIHJldHVybiBl
dnRjaG5fdXNhYmxlKGV2dGNobikgJiYgZC0+ZXZ0Y2huX3BvcnRfb3BzLT5p
c19wZW5kaW5nKGQsIGV2dGNobik7CiB9CiAKLXN0YXRpYyBpbmxpbmUgYm9v
bCBldnRjaG5fcG9ydF9pc19tYXNrZWQoY29uc3Qgc3RydWN0IGRvbWFpbiAq
ZCwKLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ZXZ0Y2huX3BvcnRfdCBwb3J0KQorc3RhdGljIGlubGluZSBib29sIGV2dGNo
bl9wb3J0X2lzX3BlbmRpbmcoc3RydWN0IGRvbWFpbiAqZCwgZXZ0Y2huX3Bv
cnRfdCBwb3J0KQogewotICAgIHJldHVybiBkLT5ldnRjaG5fcG9ydF9vcHMt
PmlzX21hc2tlZChkLCBwb3J0KTsKKyAgICBzdHJ1Y3QgZXZ0Y2huICpldnRj
aG4gPSBldnRjaG5fZnJvbV9wb3J0KGQsIHBvcnQpOworICAgIGJvb2wgcmM7
CisgICAgdW5zaWduZWQgbG9uZyBmbGFnczsKKworICAgIHNwaW5fbG9ja19p
cnFzYXZlKCZldnRjaG4tPmxvY2ssIGZsYWdzKTsKKyAgICByYyA9IGV2dGNo
bl9pc19wZW5kaW5nKGQsIGV2dGNobik7CisgICAgc3Bpbl91bmxvY2tfaXJx
cmVzdG9yZSgmZXZ0Y2huLT5sb2NrLCBmbGFncyk7CisKKyAgICByZXR1cm4g
cmM7Cit9CisKK3N0YXRpYyBpbmxpbmUgYm9vbCBldnRjaG5faXNfbWFza2Vk
KGNvbnN0IHN0cnVjdCBkb21haW4gKmQsCisgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICBjb25zdCBzdHJ1Y3QgZXZ0Y2huICpldnRjaG4p
Cit7CisgICAgcmV0dXJuICFldnRjaG5fdXNhYmxlKGV2dGNobikgfHwgZC0+
ZXZ0Y2huX3BvcnRfb3BzLT5pc19tYXNrZWQoZCwgZXZ0Y2huKTsKK30KKwor
c3RhdGljIGlubGluZSBib29sIGV2dGNobl9wb3J0X2lzX21hc2tlZChzdHJ1
Y3QgZG9tYWluICpkLCBldnRjaG5fcG9ydF90IHBvcnQpCit7CisgICAgc3Ry
dWN0IGV2dGNobiAqZXZ0Y2huID0gZXZ0Y2huX2Zyb21fcG9ydChkLCBwb3J0
KTsKKyAgICBib29sIHJjOworICAgIHVuc2lnbmVkIGxvbmcgZmxhZ3M7CisK
KyAgICBzcGluX2xvY2tfaXJxc2F2ZSgmZXZ0Y2huLT5sb2NrLCBmbGFncyk7
CisgICAgcmMgPSBldnRjaG5faXNfbWFza2VkKGQsIGV2dGNobik7CisgICAg
c3Bpbl91bmxvY2tfaXJxcmVzdG9yZSgmZXZ0Y2huLT5sb2NrLCBmbGFncyk7
CisKKyAgICByZXR1cm4gcmM7CiB9CiAKLXN0YXRpYyBpbmxpbmUgYm9vbCBl
dnRjaG5fcG9ydF9pc19idXN5KGNvbnN0IHN0cnVjdCBkb21haW4gKmQsCi0g
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBldnRjaG5f
cG9ydF90IHBvcnQpCitzdGF0aWMgaW5saW5lIGJvb2wgZXZ0Y2huX2lzX2J1
c3koY29uc3Qgc3RydWN0IGRvbWFpbiAqZCwKKyAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICBjb25zdCBzdHJ1Y3QgZXZ0Y2huICpldnRjaG4p
CiB7CiAgICAgcmV0dXJuIGQtPmV2dGNobl9wb3J0X29wcy0+aXNfYnVzeSAm
JgotICAgICAgICAgICBkLT5ldnRjaG5fcG9ydF9vcHMtPmlzX2J1c3koZCwg
cG9ydCk7CisgICAgICAgICAgIGQtPmV2dGNobl9wb3J0X29wcy0+aXNfYnVz
eShkLCBldnRjaG4pOwogfQogCiBzdGF0aWMgaW5saW5lIGludCBldnRjaG5f
cG9ydF9zZXRfcHJpb3JpdHkoc3RydWN0IGRvbWFpbiAqZCwKQEAgLTIzMyw2
ICsyODUsOCBAQCBzdGF0aWMgaW5saW5lIGludCBldnRjaG5fcG9ydF9zZXRf
cHJpb3JpCiB7CiAgICAgaWYgKCAhZC0+ZXZ0Y2huX3BvcnRfb3BzLT5zZXRf
cHJpb3JpdHkgKQogICAgICAgICByZXR1cm4gLUVOT1NZUzsKKyAgICBpZiAo
ICFldnRjaG5fdXNhYmxlKGV2dGNobikgKQorICAgICAgICByZXR1cm4gLUVB
Q0NFUzsKICAgICByZXR1cm4gZC0+ZXZ0Y2huX3BvcnRfb3BzLT5zZXRfcHJp
b3JpdHkoZCwgZXZ0Y2huLCBwcmlvcml0eSk7CiB9CiAK

--=separator--


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 17:05:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 17:05:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55424.96623 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpaEY-0001gG-D4; Wed, 16 Dec 2020 17:05:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55424.96623; Wed, 16 Dec 2020 17:05:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpaEY-0001g5-9S; Wed, 16 Dec 2020 17:05:02 +0000
Received: by outflank-mailman (input) for mailman id 55424;
 Wed, 16 Dec 2020 17:05:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DfXp=FU=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1kpaEX-0001Ls-8H
 for xen-devel@lists.xen.org; Wed, 16 Dec 2020 17:05:01 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id eca3f244-30d2-4811-90bd-f80ecdddecbf;
 Wed, 16 Dec 2020 17:04:39 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1kpaE5-0003ne-JC; Wed, 16 Dec 2020 17:04:33 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1kpaE5-0006zV-Gk; Wed, 16 Dec 2020 17:04:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eca3f244-30d2-4811-90bd-f80ecdddecbf
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=VjL2K3I6xHxwJ5PIAY3BGQUreSWLBHFBY5XsSudfb48=; b=QJZmExHmf5/+4nUx6r3uN9LFw7
	nL/D0QNv0eGMVzZ2rbCmJKu225eHoyOn49rseQu2SLx7AQa8mcowXvVKU5OqDZnxDaPWEH7No0DAc
	shtGlg5cRo8zz38EPF2j1JLEcd8k+wMg4LBoAyAbO0hiLbhe0ME1uJ5k4unmHXT64aeY=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 358 v5 (CVE-2020-29570) - FIFO event
 channels control block related ordering
Message-Id: <E1kpaE5-0006zV-Gk@xenbits.xenproject.org>
Date: Wed, 16 Dec 2020 17:04:33 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2020-29570 / XSA-358
                               version 5

          FIFO event channels control block related ordering

UPDATES IN VERSION 5
====================

"Unstable" patch updated (needed re-basing).

ISSUE DESCRIPTION
=================

Xen records the per-vCPU control block mapping and the pointers into
that control block in the wrong order: the mapping is recorded before
the pointers are set up.  A consumer which sees the former initialized
therefore assumes that the latter are also ready for use.
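
The attached patches address this by setting all queue-head pointers
before recording the control block pointer, with a write barrier in
between; the read side pairs with the barrier via the atomic
test-and-set of the LINKED bit.  Below is a minimal single-threaded C
sketch of the publish side only.  The type and function names are
simplified stand-ins rather than Xen's real definitions, and smp_wmb()
is reduced to a plain compiler barrier for illustration:

```c
/*
 * Hedged sketch of the fix pattern: publish every queue-head pointer
 * before the control block pointer becomes visible.  All names here
 * are simplified stand-ins, not Xen's actual definitions.
 */
#define EVTCHN_FIFO_PRIORITY_MIN 15

struct control_block {
    unsigned int head[EVTCHN_FIFO_PRIORITY_MIN + 1];
};

struct fifo_vcpu {
    struct control_block *control_block;   /* consumer's "ready" flag */
    unsigned int *queue_head[EVTCHN_FIFO_PRIORITY_MIN + 1];
};

/* Stand-in for Xen's smp_wmb(); a compiler barrier only, for the sketch. */
#define smp_wmb() __asm__ __volatile__("" ::: "memory")

static void map_control_block(struct fifo_vcpu *v, struct control_block *cb)
{
    unsigned int i;

    /* Fill in every queue head first, via a local pointer... */
    for ( i = 0; i <= EVTCHN_FIFO_PRIORITY_MIN; i++ )
        v->queue_head[i] = &cb->head[i];

    /* ...and only then let consumers see the control block as mapped. */
    smp_wmb();
    v->control_block = cb;
}
```

A consumer that observes control_block non-NULL after the barrier can
then rely on all queue_head entries being valid, which is the ordering
property the advisory says was previously violated.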

IMPACT
======

Malicious or buggy guest kernels can mount a Denial of Service (DoS)
attack affecting the entire system.

VULNERABLE SYSTEMS
==================

All Xen versions from 4.4 onwards are vulnerable.  Xen versions 4.3 and
earlier are not vulnerable.

MITIGATION
==========

There is no known mitigation.

CREDITS
=======

This issue was discovered by Julien Grall of Amazon.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa358.patch           xen-unstable
xsa358-4.14.patch      Xen 4.14 - 4.10

$ sha256sum xsa358*
0e8428a52e9bedafb2d8cbbb8dffae4e882e4b0898e4e7df3576c99e0e607167  xsa358.meta
c0763c85287d138a02dc795aa5d2e903ca7efc641390bee53ea2f7473f4f95af  xsa358.patch
937a3786d3d0147aef63eed373ed1df9ede75d1fabf5ad8f6ccaacfbf7fbcf42  xsa358-4.14.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decision-making.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl/aPhoMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZhWkH/08MG6OKo6O0vXv9PuznO/6JPjpSmAgkQYUBqYkw
cAp/yq1kXo3kA+TyHQUPZwBzWx+B0OAG7OBDIoyDlVRhj5Z24YINY+knWzocyXmn
7b6p8RdEf47cvWYn3Nugh2KXDdVo+CZ2C597kUBJSSuAJicT3BU3NIexXXLM9phU
zeGcm39u4/ucZoBAAzP8IlsjxTs3woZG8ZlNNRrcF2QF98AWK1joIR3j54bWqwKs
xvI+BLOXjhpr9Q2P/WY7zQsvWfw2dRsYpGMtPRpug+jpYOV51q//CnrDoSF7mXj9
oHMklW1n/C+U0NeXMXdiwb+PhcP40m1ltya0Vfal8rPH1G4=
=GzHh
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa358.meta"
Content-Disposition: attachment; filename="xsa358.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzNTgsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIs
CiAgICAiNC4xMSIsCiAgICAiNC4xMCIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjEwIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICIxZDcyZDk5MTVlZGZmMGRkNDFmNjAxYmJiMGIxZjgzYzAy
ZmYxNjg5IiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAz
NTMsCiAgICAgICAgICAgIDExNSwKICAgICAgICAgICAgMzIyLAogICAgICAg
ICAgICAzMjMsCiAgICAgICAgICAgIDMyNCwKICAgICAgICAgICAgMzI1LAog
ICAgICAgICAgICAzMzAsCiAgICAgICAgICAgIDM1MiwKICAgICAgICAgICAg
MzQ4LAogICAgICAgICAgICAzNTYKICAgICAgICAgIF0sCiAgICAgICAgICAi
UGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTM1OC00LjE0LnBhdGNoIgog
ICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAgICI0LjEx
IjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAg
ICAgICAgIlN0YWJsZVJlZiI6ICI0MWE4MjJjMzkyNjM1MGYyNjkxN2Q3NDdj
OGRmZWQxYzQ0YTJjZjQyIiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAg
ICAgICAgICAzNTMsCiAgICAgICAgICAgIDExNSwKICAgICAgICAgICAgMzIy
LAogICAgICAgICAgICAzMjMsCiAgICAgICAgICAgIDMyNCwKICAgICAgICAg
ICAgMzI1LAogICAgICAgICAgICAzMzAsCiAgICAgICAgICAgIDM1MiwKICAg
ICAgICAgICAgMzQ4LAogICAgICAgICAgICAzNTYKICAgICAgICAgIF0sCiAg
ICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTM1OC00LjE0
LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAgfSwK
ICAgICI0LjEyIjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVu
IjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICI4MTQ1ZDM4YjQ4MDA5MjU1
YTMyYWI4N2EwMmU0ODFjZDA5YzgxMWY5IiwKICAgICAgICAgICJQcmVyZXFz
IjogWwogICAgICAgICAgICAzNTMsCiAgICAgICAgICAgIDExNSwKICAgICAg
ICAgICAgMzIyLAogICAgICAgICAgICAzMjMsCiAgICAgICAgICAgIDMyNCwK
ICAgICAgICAgICAgMzI1LAogICAgICAgICAgICAzMzAsCiAgICAgICAgICAg
IDM1MiwKICAgICAgICAgICAgMzQ4LAogICAgICAgICAgICAzNTYKICAgICAg
ICAgIF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhz
YTM1OC00LjE0LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAg
fQogICAgfSwKICAgICI0LjEzIjogewogICAgICAiUmVjaXBlcyI6IHsKICAg
ICAgICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICJiNTMwMjI3
M2UyYzUxOTQwMTcyNDAwNDg2NjQ0NjM2ZjJmNGZjNjRhIiwKICAgICAgICAg
ICJQcmVyZXFzIjogWwogICAgICAgICAgICAzNTMsCiAgICAgICAgICAgIDEx
NSwKICAgICAgICAgICAgMzIyLAogICAgICAgICAgICAzMjMsCiAgICAgICAg
ICAgIDMyNCwKICAgICAgICAgICAgMzI1LAogICAgICAgICAgICAzMzAsCiAg
ICAgICAgICAgIDM1MiwKICAgICAgICAgICAgMzQ4LAogICAgICAgICAgICAz
NTYKICAgICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAg
ICAgICAgInhzYTM1OC00LjE0LnBhdGNoIgogICAgICAgICAgXQogICAgICAg
IH0KICAgICAgfQogICAgfSwKICAgICI0LjE0IjogewogICAgICAiUmVjaXBl
cyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6
ICIxZDFkMWY1MzkxOTc2NDU2YTc5ZGFhYzBkY2ZlNzE1N2RhMWU1NGY3IiwK
ICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAzNTMsCiAgICAg
ICAgICAgIDExNSwKICAgICAgICAgICAgMzIyLAogICAgICAgICAgICAzMjMs
CiAgICAgICAgICAgIDMyNCwKICAgICAgICAgICAgMzI1LAogICAgICAgICAg
ICAzMzAsCiAgICAgICAgICAgIDM1MiwKICAgICAgICAgICAgMzQ4LAogICAg
ICAgICAgICAzNTYKICAgICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hlcyI6
IFsKICAgICAgICAgICAgInhzYTM1OC00LjE0LnBhdGNoIgogICAgICAgICAg
XQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAgICJtYXN0ZXIiOiB7CiAg
ICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAi
U3RhYmxlUmVmIjogImRjOGIwMWFmZmQ3ZjZmMzZkMzRjMzg1NGY1MWRmMDg0
N2RmM2VjMGUiLAogICAgICAgICAgIlByZXJlcXMiOiBbXSwKICAgICAgICAg
ICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzU4LnBhdGNoIgogICAg
ICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAgfQogIH0KfQ==

--=separator
Content-Type: application/octet-stream; name="xsa358.patch"
Content-Disposition: attachment; filename="xsa358.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBldnRjaG4vRklGTzogcmUtb3JkZXIgYW5kIHN5bmNocm9uaXplICh3aXRo
KSBtYXBfY29udHJvbF9ibG9jaygpCgpGb3IgZXZ0Y2huX2ZpZm9fc2V0X3Bl
bmRpbmcoKSdzIGNoZWNrIG9mIHRoZSBjb250cm9sIGJsb2NrIGhhdmluZyBi
ZWVuCnNldCB0byBiZSBlZmZlY3RpdmUsIG9yZGVyaW5nIG9mIHJlc3BlY3Rp
dmUgcmVhZHMgYW5kIHdyaXRlcyBuZWVkcyB0byBiZQplbnN1cmVkOiBUaGUg
Y29udHJvbCBibG9jayBwb2ludGVyIG5lZWRzIHRvIGJlIHJlY29yZGVkIHN0
cmljdGx5IGFmdGVyCnRoZSBzZXR0aW5nIG9mIGFsbCB0aGUgcXVldWUgaGVh
ZHMsIGFuZCBpdCBuZWVkcyBjaGVja2luZyBzdHJpY3RseQpiZWZvcmUgYW55
IHVzZXMgb2YgdGhlbSAodGhpcyBsYXR0ZXIgYXNwZWN0IHdhcyBhbHJlYWR5
IGd1YXJhbnRlZWQpLgoKVGhpcyBpcyBYU0EtMzU4IC8gQ1ZFLTIwMjAtMjk1
NzAuCgpSZXBvcnRlZC1ieTogSnVsaWVuIEdyYWxsIDxqZ3JhbGxAYW1hem9u
LmNvbT4KU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1
c2UuY29tPgpBY2tlZC1ieTogSnVsaWVuIEdyYWxsIDxqZ3JhbGxAYW1hem9u
LmNvbT4KLS0tCnY0OiBSZS1iYXNlLgp2MzogRHJvcCByZWFkLXNpZGUgYmFy
cmllciBhZ2FpbiwgbGV2ZXJhZ2luZyBndWVzdF90ZXN0X2FuZF9zZXRfYml0
KCkuCnYyOiBSZS1iYXNlIG92ZXIgcXVldWUgbG9ja2luZyByZS13b3JrLgoK
LS0tIGEveGVuL2NvbW1vbi9ldmVudF9maWZvLmMKKysrIGIveGVuL2NvbW1v
bi9ldmVudF9maWZvLmMKQEAgLTI0OSw2ICsyNDksMTAgQEAgc3RhdGljIHZv
aWQgZXZ0Y2huX2ZpZm9fc2V0X3BlbmRpbmcoc3RydQogICAgICAqIExpbmsg
dGhlIGV2ZW50IGlmIGl0IHVubWFza2VkIGFuZCBub3QgYWxyZWFkeSBsaW5r
ZWQuCiAgICAgICovCiAgICAgaWYgKCAhZ3Vlc3RfdGVzdF9iaXQoZCwgRVZU
Q0hOX0ZJRk9fTUFTS0VELCB3b3JkKSAmJgorICAgICAgICAgLyoKKyAgICAg
ICAgICAqIFRoaXMgYWxzbyBhY3RzIGFzIHRoZSByZWFkIGNvdW50ZXJwYXJ0
IG9mIHRoZSBzbXBfd21iKCkgaW4KKyAgICAgICAgICAqIG1hcF9jb250cm9s
X2Jsb2NrKCkuCisgICAgICAgICAgKi8KICAgICAgICAgICFndWVzdF90ZXN0
X2FuZF9zZXRfYml0KGQsIEVWVENITl9GSUZPX0xJTktFRCwgd29yZCkgKQog
ICAgIHsKICAgICAgICAgLyoKQEAgLTQ3NCw2ICs0NzgsNyBAQCBzdGF0aWMg
aW50IHNldHVwX2NvbnRyb2xfYmxvY2soc3RydWN0IHZjCiBzdGF0aWMgaW50
IG1hcF9jb250cm9sX2Jsb2NrKHN0cnVjdCB2Y3B1ICp2LCB1aW50NjRfdCBn
Zm4sIHVpbnQzMl90IG9mZnNldCkKIHsKICAgICB2b2lkICp2aXJ0OworICAg
IHN0cnVjdCBldnRjaG5fZmlmb19jb250cm9sX2Jsb2NrICpjb250cm9sX2Js
b2NrOwogICAgIHVuc2lnbmVkIGludCBpOwogICAgIGludCByYzsKIApAQCAt
NDg0LDEwICs0ODksMTUgQEAgc3RhdGljIGludCBtYXBfY29udHJvbF9ibG9j
ayhzdHJ1Y3QgdmNwdQogICAgIGlmICggcmMgPCAwICkKICAgICAgICAgcmV0
dXJuIHJjOwogCi0gICAgdi0+ZXZ0Y2huX2ZpZm8tPmNvbnRyb2xfYmxvY2sg
PSB2aXJ0ICsgb2Zmc2V0OworICAgIGNvbnRyb2xfYmxvY2sgPSB2aXJ0ICsg
b2Zmc2V0OwogCiAgICAgZm9yICggaSA9IDA7IGkgPD0gRVZUQ0hOX0ZJRk9f
UFJJT1JJVFlfTUlOOyBpKysgKQotICAgICAgICB2LT5ldnRjaG5fZmlmby0+
cXVldWVbaV0uaGVhZCA9ICZ2LT5ldnRjaG5fZmlmby0+Y29udHJvbF9ibG9j
ay0+aGVhZFtpXTsKKyAgICAgICAgdi0+ZXZ0Y2huX2ZpZm8tPnF1ZXVlW2ld
LmhlYWQgPSAmY29udHJvbF9ibG9jay0+aGVhZFtpXTsKKworICAgIC8qIEFs
bCBxdWV1ZSBoZWFkcyBtdXN0IGhhdmUgYmVlbiBzZXQgYmVmb3JlIHNldHRp
bmcgdGhlIGNvbnRyb2wgYmxvY2suICovCisgICAgc21wX3dtYigpOworCisg
ICAgdi0+ZXZ0Y2huX2ZpZm8tPmNvbnRyb2xfYmxvY2sgPSBjb250cm9sX2Js
b2NrOwogCiAgICAgcmV0dXJuIDA7CiB9Cg==

--=separator
Content-Type: application/octet-stream; name="xsa358-4.14.patch"
Content-Disposition: attachment; filename="xsa358-4.14.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBldnRjaG4vRklGTzogcmUtb3JkZXIgYW5kIHN5bmNocm9uaXplICh3aXRo
KSBtYXBfY29udHJvbF9ibG9jaygpCgpGb3IgZXZ0Y2huX2ZpZm9fc2V0X3Bl
bmRpbmcoKSdzIGNoZWNrIG9mIHRoZSBjb250cm9sIGJsb2NrIGhhdmluZyBi
ZWVuCnNldCB0byBiZSBlZmZlY3RpdmUsIG9yZGVyaW5nIG9mIHJlc3BlY3Rp
dmUgcmVhZHMgYW5kIHdyaXRlcyBuZWVkcyB0byBiZQplbnN1cmVkOiBUaGUg
Y29udHJvbCBibG9jayBwb2ludGVyIG5lZWRzIHRvIGJlIHJlY29yZGVkIHN0
cmljdGx5IGFmdGVyCnRoZSBzZXR0aW5nIG9mIGFsbCB0aGUgcXVldWUgaGVh
ZHMsIGFuZCBpdCBuZWVkcyBjaGVja2luZyBzdHJpY3RseQpiZWZvcmUgYW55
IHVzZXMgb2YgdGhlbSAodGhpcyBsYXR0ZXIgYXNwZWN0IHdhcyBhbHJlYWR5
IGd1YXJhbnRlZWQpLgoKVGhpcyBpcyBYU0EtMzU4IC8gQ1ZFLTIwMjAtMjk1
NzAuCgpSZXBvcnRlZC1ieTogSnVsaWVuIEdyYWxsIDxqZ3JhbGxAYW1hem9u
LmNvbT4KU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1
c2UuY29tPgpBY2tlZC1ieTogSnVsaWVuIEdyYWxsIDxqZ3JhbGxAYW1hem9u
LmNvbT4KCi0tLSBhL3hlbi9jb21tb24vZXZlbnRfZmlmby5jCisrKyBiL3hl
bi9jb21tb24vZXZlbnRfZmlmby5jCkBAIC0yNDksNiArMjQ5LDEwIEBAIHN0
YXRpYyB2b2lkIGV2dGNobl9maWZvX3NldF9wZW5kaW5nKHN0cnUKICAgICAg
ICAgICAgIGdvdG8gdW5sb2NrOwogICAgICAgICB9CiAKKyAgICAgICAgLyoK
KyAgICAgICAgICogVGhpcyBhbHNvIGFjdHMgYXMgdGhlIHJlYWQgY291bnRl
cnBhcnQgb2YgdGhlIHNtcF93bWIoKSBpbgorICAgICAgICAgKiBtYXBfY29u
dHJvbF9ibG9jaygpLgorICAgICAgICAgKi8KICAgICAgICAgaWYgKCBndWVz
dF90ZXN0X2FuZF9zZXRfYml0KGQsIEVWVENITl9GSUZPX0xJTktFRCwgd29y
ZCkgKQogICAgICAgICAgICAgZ290byB1bmxvY2s7CiAKQEAgLTQ3NCw2ICs0
NzgsNyBAQCBzdGF0aWMgaW50IHNldHVwX2NvbnRyb2xfYmxvY2soc3RydWN0
IHZjCiBzdGF0aWMgaW50IG1hcF9jb250cm9sX2Jsb2NrKHN0cnVjdCB2Y3B1
ICp2LCB1aW50NjRfdCBnZm4sIHVpbnQzMl90IG9mZnNldCkKIHsKICAgICB2
b2lkICp2aXJ0OworICAgIHN0cnVjdCBldnRjaG5fZmlmb19jb250cm9sX2Js
b2NrICpjb250cm9sX2Jsb2NrOwogICAgIHVuc2lnbmVkIGludCBpOwogICAg
IGludCByYzsKIApAQCAtNDg0LDEwICs0ODksMTUgQEAgc3RhdGljIGludCBt
YXBfY29udHJvbF9ibG9jayhzdHJ1Y3QgdmNwdQogICAgIGlmICggcmMgPCAw
ICkKICAgICAgICAgcmV0dXJuIHJjOwogCi0gICAgdi0+ZXZ0Y2huX2ZpZm8t
PmNvbnRyb2xfYmxvY2sgPSB2aXJ0ICsgb2Zmc2V0OworICAgIGNvbnRyb2xf
YmxvY2sgPSB2aXJ0ICsgb2Zmc2V0OwogCiAgICAgZm9yICggaSA9IDA7IGkg
PD0gRVZUQ0hOX0ZJRk9fUFJJT1JJVFlfTUlOOyBpKysgKQotICAgICAgICB2
LT5ldnRjaG5fZmlmby0+cXVldWVbaV0uaGVhZCA9ICZ2LT5ldnRjaG5fZmlm
by0+Y29udHJvbF9ibG9jay0+aGVhZFtpXTsKKyAgICAgICAgdi0+ZXZ0Y2hu
X2ZpZm8tPnF1ZXVlW2ldLmhlYWQgPSAmY29udHJvbF9ibG9jay0+aGVhZFtp
XTsKKworICAgIC8qIEFsbCBxdWV1ZSBoZWFkcyBtdXN0IGhhdmUgYmVlbiBz
ZXQgYmVmb3JlIHNldHRpbmcgdGhlIGNvbnRyb2wgYmxvY2suICovCisgICAg
c21wX3dtYigpOworCisgICAgdi0+ZXZ0Y2huX2ZpZm8tPmNvbnRyb2xfYmxv
Y2sgPSBjb250cm9sX2Jsb2NrOwogCiAgICAgcmV0dXJuIDA7CiB9Cg==

--=separator--


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 17:12:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 17:12:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55527.96635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpaLb-0003eT-80; Wed, 16 Dec 2020 17:12:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55527.96635; Wed, 16 Dec 2020 17:12:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpaLb-0003eM-53; Wed, 16 Dec 2020 17:12:19 +0000
Received: by outflank-mailman (input) for mailman id 55527;
 Wed, 16 Dec 2020 17:12:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DJND=FU=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpaLZ-0003eH-PR
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 17:12:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ba015c2c-2d2e-4928-971b-296d855c24e6;
 Wed, 16 Dec 2020 17:12:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5C191AC7F;
 Wed, 16 Dec 2020 17:12:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ba015c2c-2d2e-4928-971b-296d855c24e6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608138735; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=JyoICodWswFOjhkfImLaYWuoNHW5VwMCeyaAzQoPIVY=;
	b=E2ovRRhDq8qZA6ADoDLBDC84/LosqcHTTNZz3rhuOzf1Nu2znr0k74INyOaHAUqzgx/VO1
	kMywhfbVajrodQnJ6H6F0AngQ4gOwzxjAocwtE1s/GoIHHsGgkRpHPpsuxkq6SDXzji9Fm
	tdOOq2eLTxgjmJtMfaq6FOcloiVugTo=
Subject: Re: [PATCH v3 3/8] xen/hypfs: add new enter() and exit() per node
 callbacks
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201209160956.32456-1-jgross@suse.com>
 <20201209160956.32456-4-jgross@suse.com>
 <36469295-8c77-0e58-654a-35fd992c11a1@suse.com>
 <aad9131f-ca42-94b4-1ce2-18c6db0ac381@suse.com>
 <227825e7-d704-fd36-a327-1dbd6aa391c8@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <6b0cda5f-596f-f390-c22b-c922dee04a55@suse.com>
Date: Wed, 16 Dec 2020 18:12:14 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <227825e7-d704-fd36-a327-1dbd6aa391c8@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="diajpoYnltilczuPMwLTp2FoHHS9ZxUM9"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--diajpoYnltilczuPMwLTp2FoHHS9ZxUM9
Content-Type: multipart/mixed; boundary="2jEMZxHgzjF8ZuqZQnmYb1JUL8Mq3hJGs";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <6b0cda5f-596f-f390-c22b-c922dee04a55@suse.com>
Subject: Re: [PATCH v3 3/8] xen/hypfs: add new enter() and exit() per node
 callbacks
References: <20201209160956.32456-1-jgross@suse.com>
 <20201209160956.32456-4-jgross@suse.com>
 <36469295-8c77-0e58-654a-35fd992c11a1@suse.com>
 <aad9131f-ca42-94b4-1ce2-18c6db0ac381@suse.com>
 <227825e7-d704-fd36-a327-1dbd6aa391c8@suse.com>
In-Reply-To: <227825e7-d704-fd36-a327-1dbd6aa391c8@suse.com>

--2jEMZxHgzjF8ZuqZQnmYb1JUL8Mq3hJGs
Content-Type: multipart/mixed;
 boundary="------------65F4E64037C290A0B9B4D7C9"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------65F4E64037C290A0B9B4D7C9
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 16.12.20 17:36, Jan Beulich wrote:
> On 16.12.2020 17:24, J=C3=BCrgen Gro=C3=9F wrote:
>> On 16.12.20 17:16, Jan Beulich wrote:
>>> On 09.12.2020 17:09, Juergen Gross wrote:
>>>> In order to better support resource allocation and locking for dynam=
ic
>>>> hypfs nodes add enter() and exit() callbacks to struct hypfs_funcs.
>>>>
>>>> The enter() callback is called when entering a node during hypfs use=
r
>>>> actions (traversing, reading or writing it), while the exit() callba=
ck
>>>> is called when leaving a node (accessing another node at the same or=
 a
>>>> higher directory level, or when returning to the user).
>>>>
>>>> For avoiding recursion this requires a parent pointer in each node.
>>>> Let the enter() callback return the entry address which is stored as=

>>>> the last accessed node in order to be able to use a template entry f=
or
>>>> that purpose in case of dynamic entries.
>>>>
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>> ---
>>>> V2:
>>>> - new patch
>>>>
>>>> V3:
>>>> - add ASSERT(entry); (Jan Beulich)
>>>>
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>> ---
>>>>    xen/common/hypfs.c      | 80 ++++++++++++++++++++++++++++++++++++=
+++++
>>>>    xen/include/xen/hypfs.h |  5 +++
>>>>    2 files changed, 85 insertions(+)
>>>>
>>>> diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
>>>> index 6f822ae097..f04934db10 100644
>>>> --- a/xen/common/hypfs.c
>>>> +++ b/xen/common/hypfs.c
>>>> @@ -25,30 +25,40 @@ CHECK_hypfs_dirlistentry;
>>>>         ROUNDUP((name_len) + 1, alignof(struct xen_hypfs_direntry)))=

>>>>   =20
>>>>    const struct hypfs_funcs hypfs_dir_funcs =3D {
>>>> +    .enter =3D hypfs_node_enter,
>>>> +    .exit =3D hypfs_node_exit,
>>>>        .read =3D hypfs_read_dir,
>>>>        .write =3D hypfs_write_deny,
>>>>        .getsize =3D hypfs_getsize,
>>>>        .findentry =3D hypfs_dir_findentry,
>>>>    };
>>>>    const struct hypfs_funcs hypfs_leaf_ro_funcs =3D {
>>>> +    .enter =3D hypfs_node_enter,
>>>> +    .exit =3D hypfs_node_exit,
>>>>        .read =3D hypfs_read_leaf,
>>>>        .write =3D hypfs_write_deny,
>>>>        .getsize =3D hypfs_getsize,
>>>>        .findentry =3D hypfs_leaf_findentry,
>>>>    };
>>>>    const struct hypfs_funcs hypfs_leaf_wr_funcs =3D {
>>>> +    .enter =3D hypfs_node_enter,
>>>> +    .exit =3D hypfs_node_exit,
>>>>        .read =3D hypfs_read_leaf,
>>>>        .write =3D hypfs_write_leaf,
>>>>        .getsize =3D hypfs_getsize,
>>>>        .findentry =3D hypfs_leaf_findentry,
>>>>    };
>>>>    const struct hypfs_funcs hypfs_bool_wr_funcs =3D {
>>>> +    .enter =3D hypfs_node_enter,
>>>> +    .exit =3D hypfs_node_exit,
>>>>        .read =3D hypfs_read_leaf,
>>>>        .write =3D hypfs_write_bool,
>>>>        .getsize =3D hypfs_getsize,
>>>>        .findentry =3D hypfs_leaf_findentry,
>>>>    };
>>>>    const struct hypfs_funcs hypfs_custom_wr_funcs =3D {
>>>> +    .enter =3D hypfs_node_enter,
>>>> +    .exit =3D hypfs_node_exit,
>>>>        .read =3D hypfs_read_leaf,
>>>>        .write =3D hypfs_write_custom,
>>>>        .getsize =3D hypfs_getsize,
>>>> @@ -63,6 +73,8 @@ enum hypfs_lock_state {
>>>>    };
>>>>    static DEFINE_PER_CPU(enum hypfs_lock_state, hypfs_locked);
>>>>   =20
>>>> +static DEFINE_PER_CPU(const struct hypfs_entry *, hypfs_last_node_e=
ntered);
>>>> +
>>>>    HYPFS_DIR_INIT(hypfs_root, "");
>>>>   =20
>>>>    static void hypfs_read_lock(void)
>>>> @@ -100,11 +112,59 @@ static void hypfs_unlock(void)
>>>>        }
>>>>    }
>>>>   
>>>> +const struct hypfs_entry *hypfs_node_enter(const struct hypfs_entry *entry)
>>>> +{
>>>> +    return entry;
>>>> +}
>>>> +
>>>> +void hypfs_node_exit(const struct hypfs_entry *entry)
>>>> +{
>>>> +}
>>>> +
>>>> +static int node_enter(const struct hypfs_entry *entry)
>>>> +{
>>>> +    const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered);
>>>> +
>>>> +    entry = entry->funcs->enter(entry);
>>>> +    if ( IS_ERR(entry) )
>>>> +        return PTR_ERR(entry);
>>>> +
>>>> +    ASSERT(entry);
>>>> +    ASSERT(!*last || *last == entry->parent);
>>>> +
>>>> +    *last = entry;
>>>> +
>>>> +    return 0;
>>>> +}
>>>> +
>>>> +static void node_exit(const struct hypfs_entry *entry)
>>>> +{
>>>> +    const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered);
>>>> +
>>>> +    if ( !*last )
>>>> +        return;
>>>
>>> To my question regarding this in v2 you replied
>>>
>>> "I rechecked and have found that this was a remnant from an earlier
>>>    variant. *last won't ever be NULL, so the if can be dropped (a NULL
>>>    will be catched by the following ASSERT())."
>>>
>>> Now this if() is still there. Why?
>>
>> I really thought I did remove the if(). Seems as if I did that on
>> my test machine only and not in my git tree. Sorry for that.
> 
> So should I drop it while committing and adding
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> ?

Yes, please.


Juergen




From xen-devel-bounces@lists.xenproject.org Wed Dec 16 17:53:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 17:53:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55536.96647 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpaz2-0007P9-IW; Wed, 16 Dec 2020 17:53:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55536.96647; Wed, 16 Dec 2020 17:53:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpaz2-0007P2-Cq; Wed, 16 Dec 2020 17:53:04 +0000
Received: by outflank-mailman (input) for mailman id 55536;
 Wed, 16 Dec 2020 17:53:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZVlt=FU=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1kpaz0-0007Ox-TU
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 17:53:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b7375f2b-113e-4f62-93b3-8fb0bffdeade;
 Wed, 16 Dec 2020 17:53:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C174DAE63;
 Wed, 16 Dec 2020 17:53:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7375f2b-113e-4f62-93b3-8fb0bffdeade
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608141180; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Lm0OOb2o7t7i6S+Vw6olTaWMMNamPMPZeCRhd93YYeE=;
	b=RmGcVc78z8et4uCOwfbdelYoOd9eWeRsIv+UqdnNpfF/ULUEgdzYq90cJHyE/ZedyfJp9A
	OdQ8b27eKanF/vowfpEXeSBDCyZvhmB98632aPSbe4B+MR1D1H9lhQlxba4rqI/EyspWwv
	qKKdzxv6aEdlKJ+m/Lh10hsF1wSGsew=
Message-ID: <a22954117d8dd36fc0e1b9470efb72c5b80ad393.camel@suse.com>
Subject: Re: [PATCH v3 1/8] xen/cpupool: support moving domain between
 cpupools with different granularity
From: Dario Faggioli <dfaggioli@suse.com>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@citrix.com>
Date: Wed, 16 Dec 2020 18:52:59 +0100
In-Reply-To: <20201209160956.32456-2-jgross@suse.com>
References: <20201209160956.32456-1-jgross@suse.com>
	 <20201209160956.32456-2-jgross@suse.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-mUbumSaun4tYjl6nss8L"
User-Agent: Evolution 3.38.2 (by Flathub.org) 
MIME-Version: 1.0



On Wed, 2020-12-09 at 17:09 +0100, Juergen Gross wrote:
> When moving a domain between cpupools with different scheduling
> granularity the sched_units of the domain need to be adjusted.
> 
> Do that by allocating new sched_units and throwing away the old ones
> in sched_move_domain().
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
>
This looks fine, and can have:

Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

I would only have one request. It's not a huge deal, and probably not
worth a resend only for that, but if either you or the committer are up
for complying with that in whatever way you find the most suitable,
that would be great.

I.e., can we...
> ---
>  xen/common/sched/core.c | 121 ++++++++++++++++++++++++++++++++----------
>  1 file changed, 90 insertions(+), 31 deletions(-)
> 
> diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
> index a429fc7640..2a61c879b3 100644
> --- a/xen/common/sched/core.c
> +++ b/xen/common/sched/core.c
> 
> [...]
> -    old_ops = dom_scheduler(d);
>      old_domdata = d->sched_priv;
> 
Move *here* (i.e., above this new call to cpumask_first()) the comment
that is currently inside the loop?
>  
> +    new_p = cpumask_first(d->cpupool->cpu_valid);
>      for_each_sched_unit ( d, unit )
>      {
> +        spinlock_t *lock;
> +
> +        /*
> +         * Temporarily move all units to same processor to make locking
> +         * easier when moving the new units to the new processors.
> +         */
>
This one here, basically ^^^

> +        lock = unit_schedule_lock_irq(unit);
> +        sched_set_res(unit, get_sched_res(new_p));
> +        spin_unlock_irq(lock);
> +
>          sched_remove_unit(old_ops, unit);
>      }
>  
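
Concretely, the request amounts to hoisting the comment above the `cpumask_first()` call, roughly like this (a sketch of the suggested arrangement only, not the committed code):

```c
    /*
     * Temporarily move all units to same processor to make locking
     * easier when moving the new units to the new processors.
     */
    new_p = cpumask_first(d->cpupool->cpu_valid);
    for_each_sched_unit ( d, unit )
    {
        spinlock_t *lock;

        lock = unit_schedule_lock_irq(unit);
        sched_set_res(unit, get_sched_res(new_p));
        spin_unlock_irq(lock);

        sched_remove_unit(old_ops, unit);
    }
```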
Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)




From xen-devel-bounces@lists.xenproject.org Wed Dec 16 17:59:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 17:59:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55541.96659 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpb56-0007iI-D1; Wed, 16 Dec 2020 17:59:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55541.96659; Wed, 16 Dec 2020 17:59:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpb56-0007iB-9q; Wed, 16 Dec 2020 17:59:20 +0000
Received: by outflank-mailman (input) for mailman id 55541;
 Wed, 16 Dec 2020 17:59:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OuYO=FU=infradead.org=peterz@srs-us1.protection.inumbo.net>)
 id 1kpb55-0007i5-7B
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 17:59:19 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ab5def83-d858-4587-a792-0bc985a594f3;
 Wed, 16 Dec 2020 17:59:14 +0000 (UTC)
Received: from j217100.upc-j.chello.nl ([24.132.217.100]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kpb4K-0005dO-H8; Wed, 16 Dec 2020 17:58:32 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id ACE52300DAE;
 Wed, 16 Dec 2020 18:58:28 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id 8D62420274B26; Wed, 16 Dec 2020 18:58:28 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ab5def83-d858-4587-a792-0bc985a594f3
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=Be2nf2w/Vps/TTeCXTZLGMYhjE3HrJ1GdzWfCzB412A=; b=nqy3TkxWRSI/ZUAfRjb7CiyHmI
	z2b0gw+/xmF3V4bRP9jBf1OEn4etLAnhng4c/UhOWUSeqB05jwPIj8Rh9uZ21hQ4ee6X60bTP9wEA
	RT0YNjOtLitpMmmclOBcDf+B6m0wbgQvaR8LEckSrcfDteNXNOq2baniPg2yh03WmdTFSO9hIdNUI
	8E2mfDpBotw+GKCLBHJG9i8cSzTmpu4Cs3K+bLZySIc078PNlokD2esyT5D92TUH4gAPARyY3Bs4M
	CQ1YkaXuVJJC+yhROI4pG+byk+ORTk/MtIQmn4cU/HodbcrIdCOQPphC4Beha/FAi9+e3UzMniGn2
	1KoBKrug==;
Date: Wed, 16 Dec 2020 18:58:28 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>,
	xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-hyperv@vger.kernel.org, kvm@vger.kernel.org, luto@kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>, Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <sean.j.christopherson@intel.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	Daniel Lezcano <daniel.lezcano@linaro.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Daniel Bristot de Oliveira <bristot@redhat.com>
Subject: Re: [PATCH v2 00/12] x86: major paravirt cleanup
Message-ID: <20201216175828.GQ3040@hirez.programming.kicks-ass.net>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120125342.GC3040@hirez.programming.kicks-ass.net>
 <20201123134317.GE3092@hirez.programming.kicks-ass.net>
 <6771a12c-051d-1655-fb3a-cc45a3c82e29@suse.com>
 <20201215141834.GG3040@hirez.programming.kicks-ass.net>
 <20201215145408.GR3092@hirez.programming.kicks-ass.net>
 <20201216003802.5fpklvx37yuiufrt@treble>
 <20201216084059.GL3040@hirez.programming.kicks-ass.net>
 <20201216165605.4h5q7os5dutjgdqi@treble>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201216165605.4h5q7os5dutjgdqi@treble>

On Wed, Dec 16, 2020 at 10:56:05AM -0600, Josh Poimboeuf wrote:
> On Wed, Dec 16, 2020 at 09:40:59AM +0100, Peter Zijlstra wrote:

> > > Could we make it easier by caching the shared
> > > per-alt-group CFI state somewhere along the way?
> > 
> > Yes, but when I tried it grew the code required. Runtime costs would be
> > less, but I figured that since alternatives are typically few and small,
> > that wasn't a real consideration.
> 
> Aren't alternatives going to be everywhere now with paravirt using them?

What I meant was that they're at most 2-3 wide and only a few
instructions long, which greatly bounds the actual complexity of the
algorithm, however daft.

> > No real objection, I just didn't do it because 1) it works, and 2) even
> > moar lines.
> 
> I'm kind of surprised it would need moar lines.  Let me play around with
> it and maybe I'll come around ;-)

Please do, it could be that getting all the niggly bits right exhausted
my brain enough to miss the obvious ;-)


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 18:04:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 18:04:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55546.96671 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpbAB-0000Cg-0f; Wed, 16 Dec 2020 18:04:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55546.96671; Wed, 16 Dec 2020 18:04:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpbAA-0000CZ-Tw; Wed, 16 Dec 2020 18:04:34 +0000
Received: by outflank-mailman (input) for mailman id 55546;
 Wed, 16 Dec 2020 18:04:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZxtY=FU=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kpbA9-0000CU-Fu
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 18:04:33 +0000
Received: from mail-wm1-f44.google.com (unknown [209.85.128.44])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 45c4374f-7c42-4406-9527-55091ace2ebb;
 Wed, 16 Dec 2020 18:04:32 +0000 (UTC)
Received: by mail-wm1-f44.google.com with SMTP id x22so3216596wmc.5
 for <xen-devel@lists.xenproject.org>; Wed, 16 Dec 2020 10:04:32 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id u6sm4374227wrm.90.2020.12.16.10.04.31
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 16 Dec 2020 10:04:31 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 45c4374f-7c42-4406-9527-55091ace2ebb
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=Kf6zKszwv9FQVxm0z3BUXLUYBr/LMl3PQ4xL4HdNhYE=;
        b=Cd5VEKxL6Qa6hzJc6kd53d6ft8XjisCvOr61J/VZ5F2x1EFaq82Kroc3VYDLx0tsIv
         Glynw6FUvoVcKWOUhNgpVhl3HRCWWzRh5NY2XrxENZVVerTn9RTrMRCwSCTzSbKaKeks
         ZcbhFjSRch+8gNDVhHQTmr4XsTMHKGWNjfAFXExa+kH/wtdntbc1S5cfmmt3hFkAe68c
         mT/5K2Y2zX974ovNkj2amNMN+n+Aw3u1fElLeIGfz+WwUy9cDyoQBmcBIyQDYc3PVe0a
         BtgDTglxL1WvSlEvRvXC3Hte+KKyw1W8/YWQGf2duwTPWKfakWAtYRnjBHvG5cq0O20l
         3wAg==
X-Gm-Message-State: AOAM530NZLJftumLRDk/Sorui2EwQhUhHk2Z4O5zMe5kIkWeWKkPQRpJ
	ziRRQteI6p+B3ajVwHN6i98=
X-Google-Smtp-Source: ABdhPJyE1rgjLPn3X92s8gsMtf2dnyOxvEco5DITm2I6PUOZTUkyMVihMAp8ydpspIoHgxztjk/kpg==
X-Received: by 2002:a05:600c:255:: with SMTP id 21mr4520138wmj.69.1608141871924;
        Wed, 16 Dec 2020 10:04:31 -0800 (PST)
Date: Wed, 16 Dec 2020 18:04:29 +0000
From: Wei Liu <wl@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>
Cc: osstest service owner <osstest-admin@xenproject.org>,
	xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>
Subject: Re: [xen-unstable-smoke bisection] complete build-amd64-libvirt
Message-ID: <20201216180429.r4szd5wzhlwqoxpi@liuwe-devbox-debian-v2>
References: <E1kpMXk-0006zb-Jk@osstest.test-lab.xenproject.org>
 <19ed8894-23f7-0f9d-f3c4-1d5ea5bc0c02@citrix.com>
 <20201216104357.wcggzckdii76d4iz@liuwe-devbox-debian-v2>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201216104357.wcggzckdii76d4iz@liuwe-devbox-debian-v2>
User-Agent: NeoMutt/20180716

On Wed, Dec 16, 2020 at 10:43:57AM +0000, Wei Liu wrote:
> Paul, are you able to cook up a patch today? If not I will revert the
> offending patch(es).
> 

I've reverted the offending 8 patches for now.

Wei.


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 18:09:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 18:09:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55552.96683 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpbF2-0000UU-KJ; Wed, 16 Dec 2020 18:09:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55552.96683; Wed, 16 Dec 2020 18:09:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpbF2-0000UN-HC; Wed, 16 Dec 2020 18:09:36 +0000
Received: by outflank-mailman (input) for mailman id 55552;
 Wed, 16 Dec 2020 18:09:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpbF0-0000RR-W9; Wed, 16 Dec 2020 18:09:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpbF0-0004z4-Ll; Wed, 16 Dec 2020 18:09:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpbF0-0002Wo-CE; Wed, 16 Dec 2020 18:09:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kpbF0-0005az-Bm; Wed, 16 Dec 2020 18:09:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=l8wZ6aKmMc4nA4yqfofGtrVkZx+ha2fqW9yOMZqDKs8=; b=MJdejgoWjYaRu/wJZLotngb1o0
	vVGB8Gnzseo3dLVcBxcdS7ZMq10S70Bhgzpc2yPRIgVpkWzHoFX3orTXIcz3ncULctx6cVJn4HHqL
	65TbSMJA2s8n9FtJo2XhsOt6FgXAL4RP8oVboegg+brA+NnkITf8g0Yq/QfOTZmDhL60=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157593-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157593: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=2dd0fb492fbbe424b1ad6a9f417d0fd68a3e44ee
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Dec 2020 18:09:34 +0000

flight 157593 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157593/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              2dd0fb492fbbe424b1ad6a9f417d0fd68a3e44ee
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  159 days
Failing since        151818  2020-07-11 04:18:52 Z  158 days  153 attempts
Testing same since   157593  2020-12-16 04:19:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 33081 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 18:23:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 18:23:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55561.96702 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpbSS-0002II-0j; Wed, 16 Dec 2020 18:23:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55561.96702; Wed, 16 Dec 2020 18:23:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpbSR-0002IB-TY; Wed, 16 Dec 2020 18:23:27 +0000
Received: by outflank-mailman (input) for mailman id 55561;
 Wed, 16 Dec 2020 18:23:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpbSQ-0002I3-GZ; Wed, 16 Dec 2020 18:23:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpbSQ-0005Dv-8N; Wed, 16 Dec 2020 18:23:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpbSP-0003NR-Uo; Wed, 16 Dec 2020 18:23:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kpbSP-0005ss-UL; Wed, 16 Dec 2020 18:23:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QQCUrKcnWVG647fqqrzpCfQ4lRRhTEGcpnTlV2Sdrl8=; b=QuzpbtCjNq1rAcodJMZJWN4W8i
	TOhJtaYg7Wndalujls0G91tF/06zPj5MzXi7xjNHernSKseNlX49zXWLNb2p/7YbjMZO4BuduTAiD
	K13VobLtkXtaIefYEk+TnbvQUkXxGuBY0B8IrMsgsbqjXsQYIsiDJFSqlb5F+kpV80vY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157609-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157609: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64-libvirt:libvirt-build:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5a324e1c39b6b78496dc73c222c9874302c2423c
X-Osstest-Versions-That:
    xen=904148ecb4a59d4c8375d8e8d38117b8605e10ac
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Dec 2020 18:23:25 +0000

flight 157609 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157609/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 157560

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5a324e1c39b6b78496dc73c222c9874302c2423c
baseline version:
 xen                  904148ecb4a59d4c8375d8e8d38117b8605e10ac

Last test of basis   157560  2020-12-15 13:00:26 Z    1 days
Failing since        157570  2020-12-15 17:00:30 Z    1 days    8 attempts
Testing same since   157609  2020-12-16 16:00:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <pdurrant@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 604 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 18:31:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 18:31:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55569.96717 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpbaB-0003IV-T5; Wed, 16 Dec 2020 18:31:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55569.96717; Wed, 16 Dec 2020 18:31:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpbaB-0003IO-OZ; Wed, 16 Dec 2020 18:31:27 +0000
Received: by outflank-mailman (input) for mailman id 55569;
 Wed, 16 Dec 2020 18:31:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hnmN=FU=redhat.com=kwolf@srs-us1.protection.inumbo.net>)
 id 1kpbaA-0003IJ-N6
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 18:31:27 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 07553f4c-28fc-4f26-9653-59abcf24ce19;
 Wed, 16 Dec 2020 18:31:24 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-401-K3ZasvZfMDWhYHoTxaZAjQ-1; Wed, 16 Dec 2020 13:31:19 -0500
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com
 [10.5.11.15])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 9896B801817;
 Wed, 16 Dec 2020 18:31:17 +0000 (UTC)
Received: from merkur.fritz.box (ovpn-115-50.ams2.redhat.com [10.36.115.50])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id BD9D062AF7;
 Wed, 16 Dec 2020 18:31:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 07553f4c-28fc-4f26-9653-59abcf24ce19
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1608143484;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=8R8yMuE3L37hwwvSDxqlIu1MnWMQCBHRhIccuK6Xuts=;
	b=DuJQrgKXeNMbn+6Bnht25S04z9f/92crR0hGMbODyL61/hOumPqLmA94ZVzM7bU9+CiEHi
	ADcqyJ0WRw6Uod61dajyuCZr+UqiP8AwmK/6oWqY2FAXuWcEZZP/D4wkNZeBshB+DHRX1i
	P7CKu+Mn4CzL53i8/fwc3Le+sz0K/Hw=
X-MC-Unique: K3ZasvZfMDWhYHoTxaZAjQ-1
Date: Wed, 16 Dec 2020 19:31:02 +0100
From: Kevin Wolf <kwolf@redhat.com>
To: Sergio Lopez <slp@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
	qemu-block@nongnu.org, Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>, Fam Zheng <fam@euphon.net>,
	Eric Blake <eblake@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
	Max Reitz <mreitz@redhat.com>
Subject: Re: [PATCH v2 2/4] block: Avoid processing BDS twice in
 bdrv_set_aio_context_ignore()
Message-ID: <20201216183102.GH7548@merkur.fritz.box>
References: <20201214170519.223781-1-slp@redhat.com>
 <20201214170519.223781-3-slp@redhat.com>
 <20201215121233.GD8185@merkur.fritz.box>
 <20201215131527.evpidxevevtfy54n@mhamilton>
 <20201215150119.GE8185@merkur.fritz.box>
 <20201215172337.w7vcn2woze2ejgco@mhamilton>
 <20201216123514.GD7548@merkur.fritz.box>
 <20201216145502.yiejsw47q5pfbzio@mhamilton>
MIME-Version: 1.0
In-Reply-To: <20201216145502.yiejsw47q5pfbzio@mhamilton>
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=kwolf@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="hOcCNbCCxyk/YU74"
Content-Disposition: inline

--hOcCNbCCxyk/YU74
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

On 16.12.2020 at 15:55, Sergio Lopez wrote:
> On Wed, Dec 16, 2020 at 01:35:14PM +0100, Kevin Wolf wrote:
> > On 15.12.2020 at 18:23, Sergio Lopez wrote:
> > > On Tue, Dec 15, 2020 at 04:01:19PM +0100, Kevin Wolf wrote:
> > > > On 15.12.2020 at 14:15, Sergio Lopez wrote:
> > > > > On Tue, Dec 15, 2020 at 01:12:33PM +0100, Kevin Wolf wrote:
> > > > > > On 14.12.2020 at 18:05, Sergio Lopez wrote:
> > > > > > > While processing the parents of a BDS, one of the parents may process
> > > > > > > the child that's doing the tail recursion, which leads to a BDS being
> > > > > > > processed twice. This is especially problematic for the aio_notifiers,
> > > > > > > as they might attempt to work on both the old and the new AIO
> > > > > > > contexts.
> > > > > > >
> > > > > > > To avoid this, add the BDS pointer to the ignore list, and check the
> > > > > > > child BDS pointer while iterating over the children.
> > > > > > >
> > > > > > > Signed-off-by: Sergio Lopez <slp@redhat.com>
> > > > > >
> > > > > > Ugh, so we get a mixed list of BdrvChild and BlockDriverState? :-/
> > > > >
> > > > > I know, it's effective but quite ugly...
> > > > >
> > > > > > What is the specific scenario where you saw this breaking? Did you have
> > > > > > multiple BdrvChild connections between two nodes so that we would go to
> > > > > > the parent node through one and then come back to the child node through
> > > > > > the other?
> > > > >
> > > > > I don't think this is a corner case. If the graph is walked top->down,
> > > > > there's no problem since children are added to the ignore list before
> > > > > getting processed, and siblings don't process each other. But, if the
> > > > > graph is walked bottom->up, a BDS will start processing its parents
> > > > > without adding itself to the ignore list, so there's nothing
> > > > > preventing them from processing it again.
> > > >
> > > > I don't understand. child is added to ignore before calling the parent
> > > > callback on it, so how can we come back through the same BdrvChild?
> > > >
> > > >     QLIST_FOREACH(child, &bs->parents, next_parent) {
> > > >         if (g_slist_find(*ignore, child)) {
> > > >             continue;
> > > >         }
> > > >         assert(child->klass->set_aio_ctx);
> > > >         *ignore = g_slist_prepend(*ignore, child);
> > > >         child->klass->set_aio_ctx(child, new_context, ignore);
> > > >     }
> > >
> > > Perhaps I'm missing something, but the way I understand it, that loop
> > > is adding the BdrvChild pointer of each of its parents, but not the
> > > BdrvChild pointer of the BDS that was passed as an argument to
> > > b_s_a_c_i.
> >
> > Generally, the caller has already done that.
> >
> > In the theoretical case that it was the outermost call in the recursion
> > and it hasn't (I couldn't find any such case), I think we should still
> > call the callback for the passed BdrvChild like we currently do.
> >
> > > > You didn't dump the BdrvChild here. I think that would add some
> > > > information on why we re-entered 0x555ee2fbf660. Maybe you can also add
> > > > bs->drv->format_name for each node to make the scenario less abstract?
> > >
> > > I've generated another trace with more data:
> > >
> > > bs=0x565505e48030 (backup-top) enter
> > > bs=0x565505e48030 (backup-top) processing children
> > > bs=0x565505e48030 (backup-top) calling bsaci child=0x565505e42090 (child->bs=0x565505e5d420)
> > > bs=0x565505e5d420 (qcow2) enter
> > > bs=0x565505e5d420 (qcow2) processing children
> > > bs=0x565505e5d420 (qcow2) calling bsaci child=0x565505e41ea0 (child->bs=0x565505e52060)
> > > bs=0x565505e52060 (file) enter
> > > bs=0x565505e52060 (file) processing children
> > > bs=0x565505e52060 (file) processing parents
> > > bs=0x565505e52060 (file) processing itself
> > > bs=0x565505e5d420 (qcow2) processing parents
> > > bs=0x565505e5d420 (qcow2) calling set_aio_ctx child=0x5655066a34d0
> > > bs=0x565505fbf660 (qcow2) enter
> > > bs=0x565505fbf660 (qcow2) processing children
> > > bs=0x565505fbf660 (qcow2) calling bsaci child=0x565505e41d20 (child->bs=0x565506bc0c00)
> > > bs=0x565506bc0c00 (file) enter
> > > bs=0x565506bc0c00 (file) processing children
> > > bs=0x565506bc0c00 (file) processing parents
> > > bs=0x565506bc0c00 (file) processing itself
> > > bs=0x565505fbf660 (qcow2) processing parents
> > > bs=0x565505fbf660 (qcow2) calling set_aio_ctx child=0x565505fc7aa0
> > > bs=0x565505fbf660 (qcow2) calling set_aio_ctx child=0x5655068b8510
> > > bs=0x565505e48030 (backup-top) enter
> > > bs=0x565505e48030 (backup-top) processing children
> > > bs=0x565505e48030 (backup-top) calling bsaci child=0x565505e3c450 (child->bs=0x565505fbf660)
> > > bs=0x565505fbf660 (qcow2) enter
> > > bs=0x565505fbf660 (qcow2) processing children
> > > bs=0x565505fbf660 (qcow2) processing parents
> > > bs=0x565505fbf660 (qcow2) processing itself
> > > bs=0x565505e48030 (backup-top) processing parents
> > > bs=0x565505e48030 (backup-top) calling set_aio_ctx child=0x565505e402d0
> > > bs=0x565505e48030 (backup-top) processing itself
> > > bs=0x565505fbf660 (qcow2) processing itself
> >
> > Hm, is this complete? I see no "processing itself" for
> > bs=0x565505e5d420. Or is this because it crashed before getting there?
>
> Yes, it crashes there. I forgot to mention that, sorry.
>
> > Anyway, trying to reconstruct the block graph with BdrvChild pointers
> > annotated at the edges:
> >
> > BlockBackend
> >       |
> >       v
> >   backup-top ------------------------+
> >       |   |                          |
> >       |   +-----------------------+  |
> >       |            0x5655068b8510 |  | 0x565505e3c450
> >       |                           |  |
> >       | 0x565505e42090            |  |
> >       v                           |  |
> >     qcow2 ---------------------+  |  |
> >       |                        |  |  |
> >       | 0x565505e52060         |  |  | ??? [1]
> >       |                        |  |  |  |
> >       v         0x5655066a34d0 |  |  |  | 0x565505fc7aa0
> >     file                       v  v  v  v
> >                              qcow2 (backing)
> >                                     |
> >                                     | 0x565505e41d20
> >                                     v
> >                                   file
> >
> > [1] This seems to be a BdrvChild with a non-BDS parent. Probably a
> >     BdrvChild directly owned by the backup job.
> >
> > > So it seems this is happening:
> > >
> > > backup-top (5e48030) <---------| (5)
> > >    |    |                      |
> > >    |    | (6) ------------> qcow2 (5fbf660)
> > >    |                           ^    |
> > >    |                       (3) |    | (4)
> > >    |-> (1) qcow2 (5e5d420) -----    |-> file (6bc0c00)
> > >    |
> > >    |-> (2) file (5e52060)
> > >
> > > backup-top (5e48030), the BDS that was passed as argument in the first
> > > bdrv_set_aio_context_ignore() call, is re-entered when qcow2 (5fbf660)
> > > is processing its parents, and the latter is also re-entered when the
> > > first one starts processing its children again.
> >
> > Yes, but look at the BdrvChild pointers, it is through different edges
> > that we come back to the same node. No BdrvChild is used twice.
> >
> > If backup-top had added all of its children to the ignore list before
> > calling into the overlay qcow2, the backing qcow2 wouldn't eventually
> > have called back into backup-top.
>
> I've tested a patch that first adds every child to the ignore list,
> and then processes those that weren't there before, as you suggested
> in a previous email. With that, the offending qcow2 is not re-entered,
> so we avoid the crash, but backup-top is still entered twice:

I think we also need to add every parent to the ignore list before
calling callbacks, though it doesn't look like this is the problem you're
currently seeing.

> bs=0x560db0e3b030 (backup-top) enter
> bs=0x560db0e3b030 (backup-top) processing children
> bs=0x560db0e3b030 (backup-top) calling bsaci child=0x560db0e2f450 (child->bs=0x560db0fb2660)
> bs=0x560db0fb2660 (qcow2) enter
> bs=0x560db0fb2660 (qcow2) processing children
> bs=0x560db0fb2660 (qcow2) calling bsaci child=0x560db0e34d20 (child->bs=0x560db1bb3c00)
> bs=0x560db1bb3c00 (file) enter
> bs=0x560db1bb3c00 (file) processing children
> bs=0x560db1bb3c00 (file) processing parents
> bs=0x560db1bb3c00 (file) processing itself
> bs=0x560db0fb2660 (qcow2) calling bsaci child=0x560db16964d0 (child->bs=0x560db0e50420)
> bs=0x560db0e50420 (qcow2) enter
> bs=0x560db0e50420 (qcow2) processing children
> bs=0x560db0e50420 (qcow2) calling bsaci child=0x560db0e34ea0 (child->bs=0x560db0e45060)
> bs=0x560db0e45060 (file) enter
> bs=0x560db0e45060 (file) processing children
> bs=0x560db0e45060 (file) processing parents
> bs=0x560db0e45060 (file) processing itself
> bs=0x560db0e50420 (qcow2) processing parents
> bs=0x560db0e50420 (qcow2) processing itself
> bs=0x560db0fb2660 (qcow2) processing parents
> bs=0x560db0fb2660 (qcow2) calling set_aio_ctx child=0x560db1672860
> bs=0x560db0fb2660 (qcow2) calling set_aio_ctx child=0x560db1b14a20
> bs=0x560db0e3b030 (backup-top) enter
> bs=0x560db0e3b030 (backup-top) processing children
> bs=0x560db0e3b030 (backup-top) processing parents
> bs=0x560db0e3b030 (backup-top) calling set_aio_ctx child=0x560db0e332d0
> bs=0x560db0e3b030 (backup-top) processing itself
> bs=0x560db0fb2660 (qcow2) processing itself
> bs=0x560db0e3b030 (backup-top) calling bsaci child=0x560db0e35090 (child->bs=0x560db0e50420)
> bs=0x560db0e50420 (qcow2) enter
> bs=0x560db0e3b030 (backup-top) processing parents
> bs=0x560db0e3b030 (backup-top) processing itself
>
> I see that "blk_do_set_aio_context()" passes "blk->root" to
> "bdrv_child_try_set_aio_context()" so it's already in the ignore list,
> so I'm not sure what's happening here. Is backup-top referenced
> from two different BdrvChild or is "blk->root" not pointing to
> backup-top's BDS?

The second time that backup-top is entered, it is not as the BDS of
blk->root, but as the parent node of the overlay qcow2. Which is
interesting, because last time it was still the backing qcow2, so the
change did have _some_ effect.

The part that I don't understand is why you still get the line with
child=0x560db1b14a20, because when you add all children to the ignore
list first, that should have been put into the ignore list as one of the
first things in the whole process (when backup-top was first entered).

Is 0x560db1b14a20 a BdrvChild that has backup-top as its opaque value,
but isn't actually present in backup-top's bs->children?

Kevin

--hOcCNbCCxyk/YU74
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE3D3rFZqa+V09dFb+fwmycsiPL9YFAl/aUmYACgkQfwmycsiP
L9a7eBAAhQfYOdhwdHGjyfgOrVAtrXPe/HVx2BROGQdMTOM1AWTYokDVrbFweYkp
jRJXMlNFhCwCZUthexOdvWSIsk8aX92H7YEXi40jypjz3+8QoP8jFqtdHD5m/CiL
HBsE7E/0InJF7DDVz/lCh1fzb5venrFBkVo8uOqEu7/EfQod9JFnDblmCLqwzcOa
d/aDnkcQb6hr4RhE/RdBIcnnDWHIwXU4f00rDxSm/0cJ6Axie+jXWyWZdDnCDnQQ
SWny6JV2GiWjOPPuYgL9WYU18HXLftCoxp35FERjZOMAtNi3uQ8LJ08VjVbJoCB7
Wm9oqA1lYbXl9dXPluzQRPpawo/Fro+Iorx6Ss1N9eFnQPSJi1lxd/y6iKE0VJ56
0YBBqEoMEzdLsxVXu/o3ayYzxI+wm6u/3yPjLQl3MPRacBIGXfBtmXwp21JdTKQL
HhLWE6dGIarB5NuEDI0ri6rlA6SHsfl7i0dWqGAWf5V9bn0kG/JKXQwUJl8QcVJG
neVMaekXwsQ6aZ+y644ehEEWFixxSbbTWLyPfOlf27vU4cc6ANGmPS6YGJxWyaWN
Vm+Cz03nTtOJ4Tm7yLnRFsiDz7+mqWlHhDantQrjSO9QOcoaTyfvnBbVJR+1OPTt
70Ewl3R/h1iNyJHxhN6jDKzfUSsYw9GB+u+ie9waatTQrkm2vkM=
=0pF9
-----END PGP SIGNATURE-----

--hOcCNbCCxyk/YU74--



From xen-devel-bounces@lists.xenproject.org Wed Dec 16 20:14:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 20:14:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55577.96735 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpdBN-0004I9-Jh; Wed, 16 Dec 2020 20:13:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55577.96735; Wed, 16 Dec 2020 20:13:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpdBN-0004I2-GV; Wed, 16 Dec 2020 20:13:57 +0000
Received: by outflank-mailman (input) for mailman id 55577;
 Wed, 16 Dec 2020 20:13:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ehtw=FU=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kpdBM-0004Hx-1H
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 20:13:56 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8699f4fb-d4ff-49a4-9aa3-4e9909a94125;
 Wed, 16 Dec 2020 20:13:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8699f4fb-d4ff-49a4-9aa3-4e9909a94125
Date: Wed, 16 Dec 2020 12:13:53 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1608149634;
	bh=IJ4Mr7DeeTOalLyT7XjVXCXQlaTQ8u0dmKNL/VnGG2I=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=Gfn7GygXSmuVfYvoxVehZcyahmbZuqu8uiNW3H00Es93FmVz3CugGgzvBIl4srA3W
	 hudeY8pgjNWXdHf4vkPTSQj+Y1088GMH44aputV7V+/Be9IctNnAGasLmIiuGstgG8
	 obUJMsu4lSpVZA4tsvaZQnnKqB0q87txOElKQHcMkU/jnt2QwlDtML19qJW07w2Vko
	 RCJITIX5Jqqx4+oXDOVFQGfche3PKL4ZC+Dvyrl7TCiUAxI8ZUkTYuZSQpGbpIGv8u
	 9Au77uvJNqre8zHCG4/Ilk3zTBMgAYWMowhKz4D2O4g+kCDIdTp19dEebAJDpRUX+j
	 rKULVX53s/4sg==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Elliott Mitchell <ehem+xen@m5p.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Roman Shaposhnik <roman@zededa.com>, Julien Grall <julien@xen.org>, 
    Oleksandr_Andrushchenko@epam.com, 
    Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: Xen-ARM DomUs
In-Reply-To: <X9mGH9SPoC5cfpSu@mattapan.m5p.com>
Message-ID: <alpine.DEB.2.21.2012161206420.4040@sstabellini-ThinkPad-T480s>
References: <X9gcZu5uJpXx8wNn@mattapan.m5p.com> <alpine.DEB.2.21.2012150828170.4040@sstabellini-ThinkPad-T480s> <X9mGH9SPoC5cfpSu@mattapan.m5p.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 15 Dec 2020, Elliott Mitchell wrote:
> On Tue, Dec 15, 2020 at 08:36:34AM -0800, Stefano Stabellini wrote:
> > On Mon, 14 Dec 2020, Elliott Mitchell wrote:
> > > The available examples seem geared towards Linux DomUs.  I'm looking at a
> > > FreeBSD installation image and it appears to expect an EFI firmware.
> > > Beyond having a bunch of files appearing oriented towards booting on EFI
> > > I can't say much about (booting) FreeBSD/ARM DomUs.
> > 
> > Running EFI firmware in a domU is possible with both Tianocore and
> > U-Boot. You should be able to build the firmware and pass it as a
> > kernel= binary in the xl file. Then the firmware will be able to load
> > the necessary binaries from the virtual disk.
> 
> Hmm, no mention of this on:
> https://wiki.xenproject.org/wiki/OVMF
> 
> In fact that appears 100% x86.  Perhaps tools/firmware needs to be
> adjusted to make it work on ARM?
> 
> Really the xlexample files in tools/examples need equivalents for ARM...
> 
> *This* reads like the approach I'm looking for, but building Tianocore
> is an adventure even with a good guide.

Tianocore has been working for many years as a domU kernel, but I haven't
tried it in a while. You should definitely be able to get it to boot.
Linaro offers pre-built binaries of it with Xen enabled:

http://snapshots.linaro.org/components/kernel/leg-virt-tianocore-edk2-upstream/4123/XEN-AARCH64/RELEASE_GCC5/
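
A minimal sketch of what such an xl domU configuration might look like,
with the firmware binary passed via kernel= as described above (the
domain name, firmware path, and disk device below are hypothetical
placeholders, not tested values):

```
# domU config: boot the Tianocore/EDK2 firmware image as the "kernel".
# The firmware then finds and loads the EFI boot loader from the
# virtual disk on its own.
name   = "freebsd-efi"
kernel = "/usr/local/lib/xen/boot/XEN_EFI.fd"   # hypothetical path to an aarch64 EDK2 build
memory = 1024
vcpus  = 2
disk   = [ 'phy:/dev/vg0/freebsd,xvda,w' ]      # hypothetical backing device
vif    = [ 'bridge=xenbr0' ]
```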


> > I ran Tianocore this way years ago. Recently, u-boot has been ported to
> > be run in a domU by Oleksandr Andrushchenko (CCed).
> 
> The Xen wiki has no information on this.

This is relatively new. Maybe Oleksandr should add a page to the wiki
when he gets a chance :-)


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 20:16:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 20:16:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55582.96747 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpdDv-0004Pv-28; Wed, 16 Dec 2020 20:16:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55582.96747; Wed, 16 Dec 2020 20:16:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpdDu-0004Po-V6; Wed, 16 Dec 2020 20:16:34 +0000
Received: by outflank-mailman (input) for mailman id 55582;
 Wed, 16 Dec 2020 20:16:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BvyZ=FU=kernel.org=pr-tracker-bot@srs-us1.protection.inumbo.net>)
 id 1kpdDt-0004Pj-KH
 for xen-devel@lists.xenproject.org; Wed, 16 Dec 2020 20:16:33 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 62ddce1c-6b13-4e30-8fd6-4c1485986fc0;
 Wed, 16 Dec 2020 20:16:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 62ddce1c-6b13-4e30-8fd6-4c1485986fc0
Subject: Re: [GIT PULL] xen: branch for v5.11-rc1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1608149792;
	bh=WmvOvSHb+S4MEfDf4FDvtuOcycVfyBfLfxYt/7dEbqE=;
	h=From:In-Reply-To:References:Date:To:Cc:From;
	b=sbTo8Pu/8B6VP/I4Q+uo+QgmEF37N3agNCMau5MSMHQEmJziEF+dKx4UNeyUmiOCV
	 n6YAtpOMungGOeyPit3YKcMbQVmgPgwqsX4YMy3AQFa5Pp+2eT3xf9sfxkvfot69LY
	 vhg8kwpqwS+a2puZcXn0K0oigK0c9LZMGFiFY2yTo55SjnKZmwA4JqIEz8FnLWgHGz
	 Y34krmmS4Y166UqNGYsOozBQX6wOfizY4RSVSfT4f1nfDlAOJTCIpHYmHtHYNo5I7n
	 6aEqAJAk+69iud3uhDPmecICoHN/YuM3aDUjCjr2IvcNR0PM/J21htvoll5C22YCRa
	 z3q3KzcUXrlNw==
From: pr-tracker-bot@kernel.org
In-Reply-To: <20201215122606.6874-1-jgross@suse.com>
References: <20201215122606.6874-1-jgross@suse.com>
X-PR-Tracked-List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
X-PR-Tracked-Message-Id: <20201215122606.6874-1-jgross@suse.com>
X-PR-Tracked-Remote: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.11-rc1-tag
X-PR-Tracked-Commit-Id: 1c728719a4da6e654afb9cc047164755072ed7c9
X-PR-Merge-Tree: torvalds/linux.git
X-PR-Merge-Refname: refs/heads/master
X-PR-Merge-Commit-Id: 7acfd4274e26e05a4f12ad31bf331fef11ebc6a3
Message-Id: <160814979215.31129.10861518322757012100.pr-tracker-bot@kernel.org>
Date: Wed, 16 Dec 2020 20:16:32 +0000
To: Juergen Gross <jgross@suse.com>
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com

The pull request you sent on Tue, 15 Dec 2020 13:26:06 +0100:

> git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.11-rc1-tag

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/7acfd4274e26e05a4f12ad31bf331fef11ebc6a3

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 21:07:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 21:07:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55591.96759 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpe1D-0000kc-Vi; Wed, 16 Dec 2020 21:07:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55591.96759; Wed, 16 Dec 2020 21:07:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpe1D-0000kV-Sd; Wed, 16 Dec 2020 21:07:31 +0000
Received: by outflank-mailman (input) for mailman id 55591;
 Wed, 16 Dec 2020 21:07:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpe1C-0000kN-Ll; Wed, 16 Dec 2020 21:07:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpe1C-0008J8-E0; Wed, 16 Dec 2020 21:07:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpe1C-0002CX-5B; Wed, 16 Dec 2020 21:07:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kpe1C-0005gt-4j; Wed, 16 Dec 2020 21:07:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=xP9bJ2aOMllWIO0M0uBTlvXsbsBxvs2orBYXmDf9YO0=; b=rQhWNTSTJDnjGC2QMhBDVeictP
	n4oPSqs/4NGA4YZGSp1QfWq8NLktBMT11wKmBM4jKdI0QYJvNo98lfdCDadC13MdGkRzgMJbkmJnU
	OTOJ5DEf3GbTagv25uXmPG1tc+WhDkclbTnB3siGnA80jJ/+3FJD8ndDt7abv/o5oHzE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157571-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157571: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=5c3cdebf95bfa32c611b8e72921277401bc90fec
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Dec 2020 21:07:30 +0000

flight 157571 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157571/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 5c3cdebf95bfa32c611b8e72921277401bc90fec
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    7 days
Failing since        157348  2020-12-09 15:39:39 Z    7 days   50 attempts
Testing same since   157561  2020-12-15 13:10:54 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Ard Biesheuvel <ard.biesheuvel@arm.com>
  Baraneedharan Anbazhagan <anbazhagan@hp.com>
  Baraneedharan Anbazhagan <anbazhgan@hp.com>
  Bret Barkelew <Bret.Barkelew@microsoft.com>
  Chen, Christine <Yuwei.Chen@intel.com>
  Fan Wang <fan.wang@intel.com>
  James Bottomley <jejb@linux.ibm.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Michael D Kinney <michael.d.kinney@intel.com>
  Michael Kubacki <michael.kubacki@microsoft.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sheng Wei <w.sheng@intel.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Star Zeng <star.zeng@intel.com>
  Ting Ye <ting.ye@intel.com>
  Yuwei Chen <yuwei.chen@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 664 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 21:40:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 21:40:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55601.96773 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpeXB-0004Su-EO; Wed, 16 Dec 2020 21:40:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55601.96773; Wed, 16 Dec 2020 21:40:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpeXB-0004Sn-BM; Wed, 16 Dec 2020 21:40:33 +0000
Received: by outflank-mailman (input) for mailman id 55601;
 Wed, 16 Dec 2020 21:40:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpeX9-0004Sf-OO; Wed, 16 Dec 2020 21:40:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpeX9-0000Rb-5e; Wed, 16 Dec 2020 21:40:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpeX8-00040Q-SX; Wed, 16 Dec 2020 21:40:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kpeX8-0001W0-RZ; Wed, 16 Dec 2020 21:40:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NH33Q6tuCmb8bMasKh7fkTZkbzwTOOaBmJ/5hh4aM1U=; b=f3hHbCwMu3ik0ffxnBSuEztVkQ
	WTpwkpmvk4fCWLsmYc7lq8xdva3WdSwjcQYBQcNiPS7zWQ8zJwEd635KImkywi9HgBhu6lwTWVBNw
	5sf4dMMX5mcdzbpQxbe1im8yGgcil3BNMpF5d9q53c+jx8pTAa79IAsJ7OAG7vGQhXs4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157569-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157569: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:build-arm64-pvops:kernel-build:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-start.2:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=ffb1e2ed7cd76df7537562f7e0b2bd5bf8b0842d
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Dec 2020 21:40:30 +0000

flight 157569 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157569/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 21 guest-start.2    fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                ffb1e2ed7cd76df7537562f7e0b2bd5bf8b0842d
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  118 days
Failing since        152659  2020-08-21 14:07:39 Z  117 days  245 attempts
Testing same since   157569  2020-12-15 16:38:41 Z    1 days    1 attempts

------------------------------------------------------------
312 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 76955 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 16 22:11:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 16 Dec 2020 22:11:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55609.96788 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpf0e-0007NL-38; Wed, 16 Dec 2020 22:11:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55609.96788; Wed, 16 Dec 2020 22:11:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpf0d-0007NE-WE; Wed, 16 Dec 2020 22:11:00 +0000
Received: by outflank-mailman (input) for mailman id 55609;
 Wed, 16 Dec 2020 22:10:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpf0c-0007N5-4s; Wed, 16 Dec 2020 22:10:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpf0b-0000xw-W0; Wed, 16 Dec 2020 22:10:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpf0b-0006q4-NU; Wed, 16 Dec 2020 22:10:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kpf0b-0002CS-Mu; Wed, 16 Dec 2020 22:10:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=D93lkEPipFx/d+mnCb4KQ3fI9rox6wIsnWMct+UQkLE=; b=f1/6OTUATRpjVMF69MxESMFWGH
	8/Zws0UOeQpoGYozgrxCffN7vtpkzCnDzekhqZH/LDs+fz/ptVxytAyh0j2H+y1/0HZWafL/N7tzb
	w7MQKqIe8tgfwdfIm/XjS/TA3qGqyuWF5uxgNTlOwYGMyRkUCiMQEfhQU2TvNgERCUI4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157611-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157611: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=ac6a0af3870ba0f7ffb16af3e41827b0a53f88b0
X-Osstest-Versions-That:
    xen=904148ecb4a59d4c8375d8e8d38117b8605e10ac
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 16 Dec 2020 22:10:57 +0000

flight 157611 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157611/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ac6a0af3870ba0f7ffb16af3e41827b0a53f88b0
baseline version:
 xen                  904148ecb4a59d4c8375d8e8d38117b8605e10ac

Last test of basis   157560  2020-12-15 13:00:26 Z    1 days
Failing since        157570  2020-12-15 17:00:30 Z    1 days    9 attempts
Testing same since   157611  2020-12-16 19:00:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <pdurrant@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   904148ecb4..ac6a0af387  ac6a0af3870ba0f7ffb16af3e41827b0a53f88b0 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 00:25:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 00:25:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55640.96888 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kph6I-0004CQ-Q5; Thu, 17 Dec 2020 00:24:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55640.96888; Thu, 17 Dec 2020 00:24:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kph6I-0004CI-Lz; Thu, 17 Dec 2020 00:24:58 +0000
Received: by outflank-mailman (input) for mailman id 55640;
 Thu, 17 Dec 2020 00:24:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kph6H-0004C7-DV; Thu, 17 Dec 2020 00:24:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kph6H-0003tt-8F; Thu, 17 Dec 2020 00:24:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kph6G-0006LJ-Vu; Thu, 17 Dec 2020 00:24:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kph6G-0006je-V7; Thu, 17 Dec 2020 00:24:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VZnW8W4tt/AiBcGoozF/9MYzqD+eMfQ7+VweacrKpog=; b=tUMTTMCYXUg7YY+LvLcNvsr9s7
	vkhho0pETTNjBk4bj8BsbNGFT5NfNZPiNyU3ZPok2fNym9eeCO99Qoi9tusCrVP93cInLzlLGXIg5
	GeIHA/pzXvCtfNFKCR+EY1qtwvfNQSv4yhJVSmJMi/lrPzFJV2PAWxMv/rxoZ944WQHg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157568-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157568: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-pygrub:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=904148ecb4a59d4c8375d8e8d38117b8605e10ac
X-Osstest-Versions-That:
    xen=8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 17 Dec 2020 00:24:56 +0000

flight 157568 xen-unstable real [real]
flight 157614 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157568/
http://logs.test-lab.xenproject.org/osstest/logs/157614/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-pygrub   19 guest-localmigrate/x10 fail pass in 157614-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157512
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157512
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157512
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157512
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157512
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157512
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157512
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157512
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157512
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157536
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157536
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  904148ecb4a59d4c8375d8e8d38117b8605e10ac
baseline version:
 xen                  8e0fe4fe5fd89d80a362d8a9a46726aded3b49c4

Last test of basis   157536  2020-12-15 01:52:24 Z    1 days
Testing same since   157568  2020-12-15 16:38:41 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Harsha Shamsundara Havanur <havanur@amazon.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien@amazon.com>
  Manuel Bouyer <bouyer@antioche.eu.org>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8e0fe4fe5f..904148ecb4  904148ecb4a59d4c8375d8e8d38117b8605e10ac -> master


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 01:16:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 01:16:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55668.96961 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kphuF-0007Sh-CZ; Thu, 17 Dec 2020 01:16:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55668.96961; Thu, 17 Dec 2020 01:16:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kphuF-0007Sa-8h; Thu, 17 Dec 2020 01:16:35 +0000
Received: by outflank-mailman (input) for mailman id 55668;
 Thu, 17 Dec 2020 01:16:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kphuE-0007SS-BZ; Thu, 17 Dec 2020 01:16:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kphuD-0002aV-VL; Thu, 17 Dec 2020 01:16:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kphuD-0000Ct-Ju; Thu, 17 Dec 2020 01:16:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kphuD-0005iu-JQ; Thu, 17 Dec 2020 01:16:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gsvQ/aZSrCJGlVR3bLOcwokXTrK027JnS1ujncSefSM=; b=iRdY7c4PsfHlEpls7Q4I8jHPHm
	oeSN8t5VPPOfHqimDzTYtFgWZGFpcA8s8Kd27s/5gGJ8NDaeZIllYgBgX8YTVUCDSYfAkJIUFLmbb
	/vjHj3thcYbCO2sO/NOo2knMIM19a7s1hpOslZAFX6aWcqdT6J7RS7hhADQiWr2lMXEI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157573-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157573: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-install:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:guest-start/debianhvm.repeat:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=148842c98a24e508aecb929718818fbf4c2a6ff3
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 17 Dec 2020 01:16:33 +0000

flight 157573 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157573/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  12 debian-install           fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      12 debian-install           fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-examine      8 reboot           fail in 157555 pass in 157573
 test-armhf-armhf-libvirt-raw  8 xen-boot         fail in 157555 pass in 157573
 test-amd64-amd64-examine    4 memdisk-try-append fail in 157555 pass in 157573
 test-amd64-amd64-xl-qemut-debianhvm-amd64 20 guest-start/debianhvm.repeat fail in 157555 pass in 157573
 test-arm64-arm64-xl-xsm       8 xen-boot         fail in 157555 pass in 157573
 test-arm64-arm64-xl-credit2 10 host-ping-check-xen fail in 157555 pass in 157573
 test-armhf-armhf-xl-rtds     14 guest-start                fail pass in 157555

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 157555 like 152332
 test-armhf-armhf-xl-rtds    15 migrate-support-check fail in 157555 never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail in 157555 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 linux                148842c98a24e508aecb929718818fbf4c2a6ff3
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  138 days
Failing since        152366  2020-08-01 20:49:34 Z  137 days  239 attempts
Testing same since   157555  2020-12-15 11:09:01 Z    1 days    2 attempts

------------------------------------------------------------
3825 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 771172 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 01:52:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 01:52:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55677.96979 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpiSQ-0002nx-Fs; Thu, 17 Dec 2020 01:51:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55677.96979; Thu, 17 Dec 2020 01:51:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpiSQ-0002nT-CH; Thu, 17 Dec 2020 01:51:54 +0000
Received: by outflank-mailman (input) for mailman id 55677;
 Thu, 17 Dec 2020 01:51:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tHkA=FV=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1kpiSP-0002nJ-Rg
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 01:51:53 +0000
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 965de945-b423-4fe7-a8df-47dc188d6506;
 Thu, 17 Dec 2020 01:51:51 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0BH1oJZp023972;
 Thu, 17 Dec 2020 01:51:49 GMT
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by userp2130.oracle.com with ESMTP id 35cn9rk70t-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Thu, 17 Dec 2020 01:51:49 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0BH1p9VI157324;
 Thu, 17 Dec 2020 01:51:48 GMT
Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75])
 by aserp3020.oracle.com with ESMTP id 35e6esnxwm-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 17 Dec 2020 01:51:48 +0000
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
 by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 0BH1pk7F022502;
 Thu, 17 Dec 2020 01:51:47 GMT
Received: from [10.39.246.206] (/10.39.246.206)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Wed, 16 Dec 2020 17:51:46 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 965de945-b423-4fe7-a8df-47dc188d6506
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : subject : to :
 cc : references : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=7Dk1bbvSwo8qhlO8Y6j7Qb2cAWBPa4frZBAAF+aKujM=;
 b=MTryJ0OrK/bWmgWGow3fVDJ/zPxygczSzeQcQIOA2TAxz+dCvFy2w0HJxeLs5I0S5DEh
 cAUx9SKc4lGz6yr7g2k65MrCelhfbw6BUBIQPktTAbFmp+1vk7KBlN4VY8bvWhTQoqBL
 Qqy/T/L/oOiYRlhMeWw51LsIwfCyEaeBQgf+NOwX5eycXi37cPtw+1Iz2VvS8Y8Tdd1E
 cO/kWjCdHe/+I1SKtLk5F1atOfiJQ/oMux57x9uz0Zl7eKVuUlXvE/UVhbN4J5+VHOPH
 g66KAkT60D9h/iLeAjAvAbuFuOov2hbDIvskyOGIzGTFJ6lA9o613BioLxRlErmvjQfF UQ== 
From: boris.ostrovsky@oracle.com
Subject: Re: XSA-351 causing Solaris-11 systems to panic during boot.
To: Jan Beulich <jbeulich@suse.com>, Cheyenne Wills <cheyenne.wills@gmail.com>
Cc: xen-devel@lists.xenproject.org
References: <CAHpsFVc4AAm6L0rKUuV47ydOjtw7XAgFnDZxRjdCL0OHXJERDw@mail.gmail.com>
 <7bca24cb-a3af-b54d-b224-3c2a316859dd@suse.com>
Organization: Oracle Corporation
Message-ID: <4fc3532b-f53f-2a15-ce64-f857816b0566@oracle.com>
Date: Wed, 16 Dec 2020 20:51:45 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <7bca24cb-a3af-b54d-b224-3c2a316859dd@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9837 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0 mlxscore=0 phishscore=0
 bulkscore=0 suspectscore=0 malwarescore=0 mlxlogscore=999 spamscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2012170010
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9837 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0 mlxlogscore=999
 impostorscore=0 lowpriorityscore=0 clxscore=1011 spamscore=0
 malwarescore=0 priorityscore=1501 phishscore=0 mlxscore=0 bulkscore=0
 suspectscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2012170010


On 11/17/20 3:12 AM, Jan Beulich wrote:
> On 16.11.2020 22:57, Cheyenne Wills wrote:
>> Running Xen with XSA-351 is causing Solaris 11 systems to panic during
>> boot.  The panic screen is showing the failure to be coming from
>> "unix:rdmsr".  The panic occurs with existing guests (booting off a disk)
>> and the  booting from an install ISO image.
>>
>> I discussed the problem with "andyhhp__" in the "#xen" IRC channel and he
>> requested that I report it here.
> Thanks. What we need though is information on the specific MSR(s) that
> will need to have workarounds added: We surely would want to avoid
> blindly doing this for all that the XSA change disallowed access to.
> Reproducing the panic screen here might already help; proper full logs
> would be even better.


We hit this issue today, so I poked around the Solaris code a bit.


It definitely reads MSR_RAPL_POWER_UNIT unguarded during boot.


In addition, it may read the MSR_*_ENERGY_STATUS registers when running kstat. I haven't been able to trigger those reads (I didn't have access to the system myself, and since neither I nor the tester remembered much about Solaris, we only tried some basic commands).


The patch below lets a Solaris guest boot on OVM. Our codebase is somewhat different from the stable branches, but if this is an acceptable workaround I will send a proper patch for stable. I won't be able to test it, though.


Author: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Date:   Wed Dec 16 17:19:07 2020 -0500

    x86/msr: Allow read access to some RAPL MSRs
   
    XSA-351 limited access to RAPL-related MSRs to avoid creating a
    side-channel that might allow information leakage. Guests trying
    to access those MSRs now receive #GP.
   
    RAPL is not indicated by CPUID but the assumption is that guests
    should not deal with power-related features and therefore should
    not touch those MSRs. (Linux, in fact, does read MSR_RAPL_POWER_UNIT
    but it does so in a safe manner and can ignore the fault).
   
    Unfortunately, Solaris reads some of those registers without
    safeguards, so let's return 0 for those MSRs.
   
    Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 0dbe810e4b27..6b4a5dc77b7f 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -131,6 +131,18 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
         *val &= ~(ARCH_CAPS_TSX_CTRL);
         break;
 
+        /* Solaris reads these MSRs unguarded so let's return 0 */
+    case MSR_RAPL_POWER_UNIT:
+    case MSR_PKG_ENERGY_STATUS:
+    case MSR_DRAM_ENERGY_STATUS:
+    case MSR_PP0_ENERGY_STATUS:
+    case MSR_PP1_ENERGY_STATUS:
+        if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL )
+            goto gp_fault;
+
+        *val = 0;
+        break;
+
         /*
          * These MSRs are not enumerated in CPUID.  They have been around
          * since the Pentium 4, and implemented by other vendors.
@@ -151,11 +163,16 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
             break;
 
         /*fallthrough*/
-    case MSR_RAPL_POWER_UNIT:
-    case MSR_PKG_POWER_LIMIT  ... MSR_PKG_POWER_INFO:
-    case MSR_DRAM_POWER_LIMIT ... MSR_DRAM_POWER_INFO:
-    case MSR_PP0_POWER_LIMIT  ... MSR_PP0_POLICY:
-    case MSR_PP1_POWER_LIMIT  ... MSR_PP1_POLICY:
+    case MSR_PKG_POWER_LIMIT:
+    case MSR_PKG_PERF_STATUS:
+    case MSR_PKG_POWER_INFO:
+    case MSR_DRAM_POWER_LIMIT:
+    case MSR_DRAM_PERF_STATUS:
+    case MSR_DRAM_POWER_INFO:
+    case MSR_PP0_POWER_LIMIT:
+    case MSR_PP0_POLICY:
+    case MSR_PP1_POWER_LIMIT:
+    case MSR_PP1_POLICY:
     case MSR_PLATFORM_ENERGY_COUNTER:
     case MSR_PLATFORM_POWER_LIMIT:
     case MSR_F15H_CU_POWER ... MSR_F15H_CU_MAX_POWER:



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 01:54:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 01:54:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55681.96991 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpiUl-0002wU-U8; Thu, 17 Dec 2020 01:54:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55681.96991; Thu, 17 Dec 2020 01:54:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpiUl-0002wN-Qs; Thu, 17 Dec 2020 01:54:19 +0000
Received: by outflank-mailman (input) for mailman id 55681;
 Thu, 17 Dec 2020 01:54:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BHja=FV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kpiUk-0002wI-Rj
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 01:54:18 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0154d6b6-1693-48b5-84e2-1c301b1b6c33;
 Thu, 17 Dec 2020 01:54:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0154d6b6-1693-48b5-84e2-1c301b1b6c33
Date: Wed, 16 Dec 2020 17:54:16 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1608170057;
	bh=A7K8EVznVBsm9la5oP/60sCc5/xOkT+pdBJ9Nrg/VwA=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=aBwqbzIFJVCLIk/FPsIqJtyhwfOPdJSQ9HbjtuYfW3aF8uYw1jjv9z//J18Y1pdKV
	 wg8h7bq9RTRaLjpCGYwcGzPc1WeKMQ492GHRsLhdp8zRLRAgi9rkZapoImoVmMyvnN
	 lqTtWLcWJ9BSLA8KEds7eGKb+bGMrAiHlSJR0BO5MYnzfxMK9scHNqmIedvNC5TXW2
	 A+gnxQ4fVAwnAg4dNJsz6juMhkkjy3mzj6JxRh74w5zjMV4rJhwaaIVQINHyx3JmIx
	 Kvdlu5BYq7NGiw+blSyAP5TRNr6ZC4a9+ABzsEfJ48UjfbjjlhjvBhU6lPCAQsfNC0
	 6KrEuM4KYo6vQ==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Stefano Stabellini <sstabellini@kernel.org>
cc: iwj@xenproject.org, anthony.perard@citrix.com, wl@xen.org, 
    dgdegra@tycho.nsa.gov, julien@xen.org, Volodymyr_Babchuk@epam.com, 
    Bertrand.Marquis@arm.com, xen-devel@lists.xenproject.org
Subject: Re: arm32 tools/flask build failure
In-Reply-To: <alpine.DEB.2.21.2012151823480.4040@sstabellini-ThinkPad-T480s>
Message-ID: <alpine.DEB.2.21.2012161753100.4040@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2012151823480.4040@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 15 Dec 2020, Stefano Stabellini wrote:
> Hi all,
> 
> I am building Xen tools for ARM32 using qemu-user. I am getting the
> following error building tools/flask. Everything else works fine. It is
> worth noting that make -j1 works fine; it is only make -j4 that fails.
> 
> I played with .NOTPARALLEL but couldn't get it to work. Does anyone have
> any ideas?
> 
> Cheers,
> 
> Stefano
> 
> 
> make[2]: Leaving directory '/build/tools/flask/utils'
> make[1]: Leaving directory '/build/tools/flask'
> make[1]: Entering directory '/build/tools/flask'
> /usr/bin/make -C policy all
> make[2]: Entering directory '/build/tools/flask/policy'
> make[2]: warning: jobserver unavailable: using -j1.  Add '+' to parent make rule.
> /build/tools/flask/policy/Makefile.common:115: *** target pattern contains no '%'.  Stop.
> make[2]: Leaving directory '/build/tools/flask/policy'
> make[1]: *** [/build/tools/flask/../../tools/Rules.mk:160: subdir-all-policy] Error 2
> make[1]: Leaving directory '/build/tools/flask'
> make: *** [/build/tools/flask/../../tools/Rules.mk:155: subdirs-all] Error 2


The fix seems to be turning the problematic variable:

POLICY_FILENAME = $(FLASK_BUILD_DIR)/xenpolicy-$(shell $(MAKE) -C $(XEN_ROOT)/xen xenversion --no-print-directory)

into a rule.


diff --git a/tools/flask/policy/Makefile.common b/tools/flask/policy/Makefile.common
index bea5ba4b6a..9a086d8acd 100644
--- a/tools/flask/policy/Makefile.common
+++ b/tools/flask/policy/Makefile.common
@@ -35,7 +35,6 @@ OUTPUT_POLICY ?= $(BEST_POLICY_VER)
 #
 ########################################
 
-POLICY_FILENAME = $(FLASK_BUILD_DIR)/xenpolicy-$(shell $(MAKE) -C $(XEN_ROOT)/xen xenversion --no-print-directory)
 POLICY_LOADPATH = /boot
 
 # List of policy versions supported by the hypervisor
@@ -112,17 +111,19 @@ POLICY_SECTIONS += $(USERS)
 POLICY_SECTIONS += $(ALL_CONSTRAINTS)
 POLICY_SECTIONS += $(ISID_DEFS) $(DEV_OCONS)
 
-all: $(POLICY_FILENAME)
+policy:
 
-install: $(POLICY_FILENAME)
+all: policy
+
+install: policy
 	$(INSTALL_DIR) $(DESTDIR)/$(POLICY_LOADPATH)
 	$(INSTALL_DATA) $^ $(DESTDIR)/$(POLICY_LOADPATH)
 
 uninstall:
 	rm -f $(DESTDIR)/$(POLICY_LOADPATH)/$(POLICY_FILENAME)
 
-$(POLICY_FILENAME): $(FLASK_BUILD_DIR)/policy.conf
-	$(CHECKPOLICY) $(CHECKPOLICY_PARAM) $^ -o $@
+policy: $(FLASK_BUILD_DIR)/policy.conf
+	$(CHECKPOLICY) $(CHECKPOLICY_PARAM) $^ -o xenpolicy-"$$($(MAKE) -C $(XEN_ROOT)/xen xenversion --no-print-directory)"
 
 $(FLASK_BUILD_DIR)/policy.conf: $(POLICY_SECTIONS) $(MOD_CONF)
 	$(M4) $(M4PARAM) $(POLICY_SECTIONS) > $@
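
For anyone hitting similar failures elsewhere, the general pattern at fault can
be sketched as below. This is an illustrative Makefile fragment only, not the
actual Xen build system, and the `print-version` target is hypothetical: a
variable expanded with $(shell $(MAKE) ...) embeds whatever the sub-make
happens to print, and because the sub-make runs outside the parent's jobserver,
a parallel build can leak warnings or extra lines into the value. If that
value is then used as a target or prerequisite name, stray text (in particular
a colon from a "make[N]: ..." message) produces errors such as "target pattern
contains no '%'".

```make
# BROKEN under -j: expansion happens at parse time, and any noise the
# sub-make emits ends up inside VERSION, corrupting the target list.
VERSION := $(shell $(MAKE) --no-print-directory -s print-version 2>/dev/null)

broken: out-$(VERSION)

# Safer, matching the shape of the proposed fix: compute the name inside
# a recipe, where sub-make output goes to the build log rather than into
# a target name.
fixed:
	mv out out-"$$($(MAKE) --no-print-directory -s print-version)"
```

The proposed patch above follows the second shape: the version-dependent file
name is computed in the recipe of a fixed-name `policy` target instead of in a
make variable used as a target.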


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 02:19:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 02:19:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55690.97016 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpitS-0005X4-5f; Thu, 17 Dec 2020 02:19:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55690.97016; Thu, 17 Dec 2020 02:19:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpitS-0005Wx-1R; Thu, 17 Dec 2020 02:19:50 +0000
Received: by outflank-mailman (input) for mailman id 55690;
 Thu, 17 Dec 2020 02:19:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BHja=FV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kpitR-0005WZ-1q
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 02:19:49 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1dc8571a-e6eb-4f18-a8fa-e8e6417b8968;
 Thu, 17 Dec 2020 02:19:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1dc8571a-e6eb-4f18-a8fa-e8e6417b8968
Date: Wed, 16 Dec 2020 18:19:45 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1608171587;
	bh=G+cExh6jJq3EcNHX86SHv7LNuHfpcsIJ/6vBCBU7z40=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=FaCbO5GBXCcK2AB0ATojyjDhoT/RImLkOaaUx0741josrlAQneTT0CHVKVwlpg9V2
	 DQdfaPaOzwmyWHOZDQHEE4TX0ZTRC0F/g1s2TLgw+htuInE4Dci5f4H0YoymdJt/Ff
	 A12l03g1bq74lZj2vFU9srOVQSjMp1E6hcROdIjg/YEe4aSB8Y7X32blBtrkTf9gXF
	 S7TnNuYXI+4M7cj5C2UMnCtrf+BE7zwf2DhdBskxz9JRXrlrs2qM0fxdlsDwIrYnFn
	 Qnc27aHZcqEoLNfPbZqFDiyU7VB7emxMDYWwuD9D3A8EEfqenZswH9eBRIu7JIois6
	 52RwvYBoWqXyQ==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Andrew Cooper <andrew.cooper3@citrix.com>
cc: Jason Andryuk <jandryuk@gmail.com>, Chris Rogers <crogers122@gmail.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Rich Persaud <persaur@gmail.com>, openxt <openxt@googlegroups.com>, 
    xen-devel <xen-devel@lists.xenproject.org>, 
    Anthony PERARD <anthony.perard@citrix.com>, 
    =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>, 
    Olivier Lambert <olivier.lambert@vates.fr>, Wei Liu <wl@xen.org>, 
    Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [openxt-dev] Re: Follow up on libxl-fix-reboot.patch
In-Reply-To: <c7b345aa-604a-b2f3-0800-1ed445ebc213@citrix.com>
Message-ID: <alpine.DEB.2.21.2012161815380.4040@sstabellini-ThinkPad-T480s>
References: <CAKf6xps-nM13E19SVS3NJwq6LwOJLUwN+FC6k_Sp9-_YaRt-EA@mail.gmail.com> <3ACCFEC6-A8B7-48E6-AA3F-48D4CDE75FA4@gmail.com> <alpine.DEB.2.21.2012141632020.4040@sstabellini-ThinkPad-T480s> <CAC4Yorgk89vaDsbygvebiBOan-3OWE=D9xKiri_JwQAVWZ19GQ@mail.gmail.com>
 <CAKf6xpvpyA6E6gC6cmZ-Ewayyue-C5WcnGtatsxf_Cefg1CxaA@mail.gmail.com> <c7b345aa-604a-b2f3-0800-1ed445ebc213@citrix.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-766529692-1608171587=:4040"


On Wed, 16 Dec 2020, Andrew Cooper wrote:
> On 16/12/2020 14:14, Jason Andryuk wrote:
> > On Tue, Dec 15, 2020 at 5:22 PM Chris Rogers <crogers122@gmail.com> wrote:
> >> Hopefully I can provide a little more context.  Here is a link to the patch:
> >>
> >> https://github.com/OpenXT/xenclient-oe/blob/master/recipes-extended/xen/files/libxl-fix-reboot.patch
> >>
> >> The patch is a bit mis-named.  It does not implement XEN_DOMCTL_SENDTRIGGER_RESET.  It's just a workaround to handle the missing RESET implementation.
> >>
> >> Its purpose is to make an HVM guest "reboot" regardless of whether PV tools have been installed and the xenstore interface is listening or not.  From the client perspective that OpenXT is concerned with, this is for ease-of-use for working with HVM guests before PV tools are installed.  To summarize the flow of the patch:
> >>
> >> - User input causes high level toolstack, xenmgr, to do xl reboot <domid>
> >> - libxl hits "PV interface not available", so it tries the fallback ACPI reset trigger (but that's not implemented in domctl)
> >> - therefore, the patch changes the RESET trigger to POWER trigger, and sets a 'reboot' flag
> >> - when the xl create process handles the domain_death event for LIBXL_SHUTDOWN_REASON_POWEROFF, we check for our 'reboot' flag.
> >> - It's set, so we set "action" as if we came from a real restart, which makes the xl create process take the 'goto start' codepath to rebuild the domain.
> >>
> >> I think we'd like to get rid of this patch, but at the moment I don't have any code or a design to propose that would implement the XEN_DOMCTL_SENDTRIGGER_RESET.
> > I'm not sure it's possible to implement one.  Does an ACPI
> > reset/reboot button exist?  And then you'd have the problem that the
> > guest needs to be configured to react to the button.

Looking at the patch, it is difficult to suggest anything better. The
only thing I could think of would be to force shutdown_reason to be
"reboot" from xl_vmcontrol.c:reboot_domain. To do that, we would have to
be careful not to overwrite it in domain_death_xswatch_callback.

I am not sure whether that would actually be much better, though.


> The ACPI spec has two signals as far as this goes. "the user pressed the
> power button" and "the user {pressed the suspend button, closed the
> laptop lid}".  Neither are useful for VMs typically, because default OS
> settings do the wrong thing.
> 
> The mystery to unravel here is why xl is issuing an erroneous hypercall.
> 
> It is very unlikely that we will have dropped
> XEN_DOMCTL_SENDTRIGGER_RESET from Xen, but I suppose it's possible.  It's
> definitely weird that we have it in the interface and unimplemented.
> 
> It's also possible it was a copy&paste mistake when trying to implement
> an interface consistent with `xm trigger`.
> 
> It is definitely concerning that we've got a piece of functionality like
> this which clearly hasn't seen any testing upstream.

Indeed. I think we should fix this in 4.15, one way or another.


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 05:02:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 05:02:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55703.97043 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kplQV-0004su-3h; Thu, 17 Dec 2020 05:02:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55703.97043; Thu, 17 Dec 2020 05:02:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kplQU-0004sn-Vm; Thu, 17 Dec 2020 05:02:06 +0000
Received: by outflank-mailman (input) for mailman id 55703;
 Thu, 17 Dec 2020 05:02:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kplQT-0004sf-T1; Thu, 17 Dec 2020 05:02:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kplQT-0007Fj-IP; Thu, 17 Dec 2020 05:02:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kplQT-0006o4-7Z; Thu, 17 Dec 2020 05:02:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kplQT-00056l-70; Thu, 17 Dec 2020 05:02:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=R1zkAHt7B7atgW0TTD909AD3LBesJwc6sQ5lB2Gk3H0=; b=rmfNrbunrSTwNmw7tbRjIsN4px
	uB6s1nzMOl8hAjssMsaexRDXdhv6lAySPV8qc+ZsX/fskSI+DWespkXSrhrZjAdJASjTS3nnepyca
	vZv6jhcZTFVjeHeFkmB5Z2ehS9sVr6QCkaVQUcN78TYH+Lvg5o6a6Mdttv1onKH4ubBg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157594-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 157594: regressions - FAIL
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-saverestore.2:fail:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=2525a745e18bbf14b4f7b1b18209a0ab9166178d
X-Osstest-Versions-That:
    xen=8145d38b48009255a32ab87a02e481cd09c811f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 17 Dec 2020 05:02:05 +0000

flight 157594 xen-4.12-testing real [real]
flight 157623 xen-4.12-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157594/
http://logs.test-lab.xenproject.org/osstest/logs/157623/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qcow2    18 guest-saverestore.2      fail REGR. vs. 157134

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157134
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157134
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157134
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157134
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157134
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157134
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157134
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157134
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157134
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157134
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157134
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  2525a745e18bbf14b4f7b1b18209a0ab9166178d
baseline version:
 xen                  8145d38b48009255a32ab87a02e481cd09c811f9

Last test of basis   157134  2020-12-01 15:05:58 Z   15 days
Testing same since   157562  2020-12-15 13:36:12 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Harsha Shamsundara Havanur <havanur@amazon.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 712 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 05:08:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 05:08:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55710.97058 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kplWp-00055d-0L; Thu, 17 Dec 2020 05:08:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55710.97058; Thu, 17 Dec 2020 05:08:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kplWo-00055W-TB; Thu, 17 Dec 2020 05:08:38 +0000
Received: by outflank-mailman (input) for mailman id 55710;
 Thu, 17 Dec 2020 05:08:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kplWn-00055O-Go; Thu, 17 Dec 2020 05:08:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kplWn-0007LT-6r; Thu, 17 Dec 2020 05:08:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kplWm-00074f-V1; Thu, 17 Dec 2020 05:08:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kplWm-0006cP-UW; Thu, 17 Dec 2020 05:08:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Rr0UbTytmTSUEI85VZMHSa2OgUDV6eQVf9c91+uaPUM=; b=UYYPp6qsw0FR18OJTwdp7oQnR5
	w7DBiO2IrBi9RENMkPe3Vc/Svwx+QqtMSsxS2SIU8Z44j000LKV8+VFwdXSR4edO+qk5mYwK36II1
	ly6SYgiyeR+QHJk/2Sqr2h0UaO7z7gakdMqbGcdj1qk6qz11nUDo382JYwp2fcF68ets=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157621-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157621: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d81133d45d81d35a4e7445778bfd1179190cbd31
X-Osstest-Versions-That:
    xen=ac6a0af3870ba0f7ffb16af3e41827b0a53f88b0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 17 Dec 2020 05:08:36 +0000

flight 157621 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157621/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d81133d45d81d35a4e7445778bfd1179190cbd31
baseline version:
 xen                  ac6a0af3870ba0f7ffb16af3e41827b0a53f88b0

Last test of basis   157611  2020-12-16 19:00:26 Z    0 days
Testing same since   157621  2020-12-17 02:00:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Luca Fancellu <luca.fancellu@arm.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   ac6a0af387..d81133d45d  d81133d45d81d35a4e7445778bfd1179190cbd31 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 05:52:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 05:52:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55722.97078 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpmD8-0001Mk-Hc; Thu, 17 Dec 2020 05:52:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55722.97078; Thu, 17 Dec 2020 05:52:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpmD8-0001Md-Eb; Thu, 17 Dec 2020 05:52:22 +0000
Received: by outflank-mailman (input) for mailman id 55722;
 Thu, 17 Dec 2020 05:52:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpmD7-0001MV-6C; Thu, 17 Dec 2020 05:52:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpmD6-00082h-Pu; Thu, 17 Dec 2020 05:52:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpmD6-0000ma-HZ; Thu, 17 Dec 2020 05:52:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kpmD6-0005YF-H3; Thu, 17 Dec 2020 05:52:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=X4nWKRIz6G0wKaR7jpw4HO/JG5Sg9zjX9KqMHYzRt2w=; b=zhDosXGkgh+N7a9I939OV5WF7y
	EeP6yrXMbgmabMm1n1ADPJ3FkGdoM3mNQFZFbctjgORvgz5MzR89ZVbuBSBy3P03+sjqpK//7gMoX
	8VhklQzL1zPmBVecfzVPo/hiJI033G+mydxVEpIe8wnwwBWSpIzC1exlvwReAVlfouqE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157597-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 157597: regressions - FAIL
X-Osstest-Failures:
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    xen-4.13-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=10c7c213bef26274684798deb3e351a6756046d2
X-Osstest-Versions-That:
    xen=b5302273e2c51940172400486644636f2f4fc64a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 17 Dec 2020 05:52:20 +0000

flight 157597 xen-4.13-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157597/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157135

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 157563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157135
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157135
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157135
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157135
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157135
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157135
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157135
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157135
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157135
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157135
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157135
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  10c7c213bef26274684798deb3e351a6756046d2
baseline version:
 xen                  b5302273e2c51940172400486644636f2f4fc64a

Last test of basis   157135  2020-12-01 15:06:11 Z   15 days
Testing same since   157563  2020-12-15 13:36:28 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Harsha Shamsundara Havanur <havanur@amazon.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 743 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 07:41:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 07:41:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55735.97100 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpnu1-0003Kx-Mh; Thu, 17 Dec 2020 07:40:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55735.97100; Thu, 17 Dec 2020 07:40:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpnu1-0003Kq-Ie; Thu, 17 Dec 2020 07:40:45 +0000
Received: by outflank-mailman (input) for mailman id 55735;
 Thu, 17 Dec 2020 07:40:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8YGc=FV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kpnu0-0003Kl-Qr
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 07:40:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 87cf3383-6e09-4149-9612-49aec6f0e635;
 Thu, 17 Dec 2020 07:40:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 71BA3AC79;
 Thu, 17 Dec 2020 07:40:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 87cf3383-6e09-4149-9612-49aec6f0e635
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608190842; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Rl9ABB6r4JraSZR8pKlXjxJFaO92UWOODMdk+PucpY0=;
	b=Kbvn5X16Dz3weSrYihfT4fSsR6FWeDL9w8psZvca/vSxoK5WM/KSKgvi2Qmy06tGp3Csf2
	HukmPCVCKmF207pbD+XSoEMSsXU4IoEYfqOckMJ6Vr6YO9pqjGiqraLI2KMF2PJ76u5rEh
	CqpWurVhDJtfVqZqUY1QvqyVPBczKac=
Subject: Re: XSA-351 causing Solaris-11 systems to panic during boot.
To: boris.ostrovsky@oracle.com, Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org, Cheyenne Wills <cheyenne.wills@gmail.com>
References: <CAHpsFVc4AAm6L0rKUuV47ydOjtw7XAgFnDZxRjdCL0OHXJERDw@mail.gmail.com>
 <7bca24cb-a3af-b54d-b224-3c2a316859dd@suse.com>
 <4fc3532b-f53f-2a15-ce64-f857816b0566@oracle.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f4ff3d16-40f6-e8a1-fcdd-ca52e1f52ca6@suse.com>
Date: Thu, 17 Dec 2020 08:40:38 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <4fc3532b-f53f-2a15-ce64-f857816b0566@oracle.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 17.12.2020 02:51, boris.ostrovsky@oracle.com wrote:
> 
> On 11/17/20 3:12 AM, Jan Beulich wrote:
>> On 16.11.2020 22:57, Cheyenne Wills wrote:
>>> Running Xen with XSA-351 is causing Solaris 11 systems to panic during
>>> boot.  The panic screen is showing the failure to be coming from
>>> "unix:rdmsr".  The panic occurs with existing guests (booting off a disk)
>>> and when booting from an install ISO image.
>>>
>>> I discussed the problem with "andyhhp__" in the "#xen" IRC channel and he
>>> requested that I report it here.
>> Thanks. What we need, though, is information on the specific MSR(s) that
>> will need workarounds added: we surely want to avoid blindly doing this
>> for everything the XSA change disallowed access to. Reproducing the
>> panic screen here might already help; proper full logs would be even
>> better.
> 
> 
> We hit this issue today so I poked a bit around Solaris code.
> 
> 
> It definitely reads MSR_RAPL_POWER_UNIT unguarded during boot.
> 
> 
> In addition, it may read MSR_*_ENERGY_STATUS when running kstat. I haven't been able to trigger those reads (I didn't have access to the system myself, and since neither I nor the tester remembered much about Solaris, we only tried some basic commands).
> 
> 
> The patch below lets a Solaris guest boot on OVM. Our codebase is somewhat different from the stable branches, but if this is an acceptable workaround I will send a proper patch for stable. I won't be able to test it, though.

I think this is acceptable as a workaround, although we may want to
consider restricting it further (at least on staging), e.g. by
requiring a guest config setting to enable the workaround. But
maybe this will need to be part of the MSR policy for the domain
instead, down the road. We'll definitely want Andrew's view here.

Speaking of staging - before applying anything to the stable
branches, I think we want to have this addressed on the main
branch. I can't see how Solaris would work there.

> --- a/xen/arch/x86/msr.c
> +++ b/xen/arch/x86/msr.c
> @@ -131,6 +131,18 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>          *val &= ~(ARCH_CAPS_TSX_CTRL);
>          break;
>  
> +        /* Solaris reads these MSRs unguarded so let's return 0 */
> +    case MSR_RAPL_POWER_UNIT:
> +    case MSR_PKG_ENERGY_STATUS:
> +    case MSR_DRAM_ENERGY_STATUS:
> +    case MSR_PP0_ENERGY_STATUS:
> +    case MSR_PP1_ENERGY_STATUS:
> +        if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL )
> +            goto gp_fault;
> +
> +        *val = 0;
> +        break;
> +
>          /*
>           * These MSRs are not enumerated in CPUID.  They have been around
>           * since the Pentium 4, and implemented by other vendors.
> @@ -151,11 +163,16 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>              break;
>  
>          /*fallthrough*/
> -    case MSR_RAPL_POWER_UNIT:
> -    case MSR_PKG_POWER_LIMIT  ... MSR_PKG_POWER_INFO:
> -    case MSR_DRAM_POWER_LIMIT ... MSR_DRAM_POWER_INFO:
> -    case MSR_PP0_POWER_LIMIT  ... MSR_PP0_POLICY:
> -    case MSR_PP1_POWER_LIMIT  ... MSR_PP1_POLICY:
> +    case MSR_PKG_POWER_LIMIT:
> +    case MSR_PKG_PERF_STATUS:
> +    case MSR_PKG_POWER_INFO:
> +    case MSR_DRAM_POWER_LIMIT:
> +    case MSR_DRAM_PERF_STATUS:
> +    case MSR_DRAM_POWER_INFO:
> +    case MSR_PP0_POWER_LIMIT:
> +    case MSR_PP0_POLICY:
> +    case MSR_PP1_POWER_LIMIT:
> +    case MSR_PP1_POLICY:
>      case MSR_PLATFORM_ENERGY_COUNTER:
>      case MSR_PLATFORM_POWER_LIMIT:
>      case MSR_F15H_CU_POWER ... MSR_F15H_CU_MAX_POWER:

Note that you no longer handle MSRs that were previously covered by
the range expressions (one each in the first two groups). I think I'd
prefer the alternative of filtering just the STATUS ones here:

    case MSR_PKG_POWER_LIMIT  ... MSR_PKG_POWER_INFO:
    case MSR_DRAM_POWER_LIMIT ... MSR_DRAM_POWER_INFO:
    case MSR_PP0_POWER_LIMIT  ... MSR_PP0_POLICY:
    case MSR_PP1_POWER_LIMIT  ... MSR_PP1_POLICY:
        if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
             (msr & 7) != 1 /* MSR_*_ENERGY_STATUS */ )
            goto gp_fault;

        *val = 0;
        break;

Or, folding in MSR_RAPL_POWER_UNIT,

    case MSR_PKG_POWER_LIMIT  ... MSR_PKG_POWER_INFO:
    case MSR_DRAM_POWER_LIMIT ... MSR_DRAM_POWER_INFO:
    case MSR_PP0_POWER_LIMIT  ... MSR_PP0_POLICY:
    case MSR_PP1_POWER_LIMIT  ... MSR_PP1_POLICY:
        if ( (msr & 7) != 1 /* MSR_*_ENERGY_STATUS */ )
            goto gp_fault;
        /* fallthrough */
    case MSR_RAPL_POWER_UNIT:
        if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL )
            goto gp_fault;

        *val = 0;
        break;

Jan


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 07:49:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 07:49:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55739.97112 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpo2J-0003hJ-NL; Thu, 17 Dec 2020 07:49:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55739.97112; Thu, 17 Dec 2020 07:49:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpo2J-0003hC-KL; Thu, 17 Dec 2020 07:49:19 +0000
Received: by outflank-mailman (input) for mailman id 55739;
 Thu, 17 Dec 2020 07:49:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8YGc=FV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kpo2I-0003h7-ES
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 07:49:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id db865477-619f-4346-af7f-0e79e338809e;
 Thu, 17 Dec 2020 07:49:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BB8A4AC7B;
 Thu, 17 Dec 2020 07:49:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: db865477-619f-4346-af7f-0e79e338809e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608191356; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Yt7QC8uq/oNoEVftnqKF6ElnnEmWsJeB1lmIuj1On+4=;
	b=MC3lPtoQQ0oxWT9ahPpfQb+drdqvz7ipTyRKNz8yALQqx000hhU9nS4D05jk1xY9F+Vx6u
	v80Mrz1DfiPPBCjDwqrChqJgswaAX7eSrH3gBNlzBkrEWh1giOyn8i7dWKEMRq/KkuWNRy
	qLDd07NXeW+1uCr4qR18gBfezlJT2zU=
Subject: Re: [PATCH v3 1/8] xen/cpupool: support moving domain between
 cpupools with different granularity
To: Dario Faggioli <dfaggioli@suse.com>, Juergen Gross <jgross@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
References: <20201209160956.32456-1-jgross@suse.com>
 <20201209160956.32456-2-jgross@suse.com>
 <a22954117d8dd36fc0e1b9470efb72c5b80ad393.camel@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <627b62f4-1011-36fa-9623-bbd30834010a@suse.com>
Date: Thu, 17 Dec 2020 08:49:16 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <a22954117d8dd36fc0e1b9470efb72c5b80ad393.camel@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 16.12.2020 18:52, Dario Faggioli wrote:
> On Wed, 2020-12-09 at 17:09 +0100, Juergen Gross wrote:
>> When moving a domain between cpupools with different scheduling
>> granularity the sched_units of the domain need to be adjusted.
>>
>> Do that by allocating new sched_units and throwing away the old ones
>> in sched_move_domain().
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>
> This looks fine, and can have:
> 
> Reviewed-by: Dario Faggioli <dfaggioli@suse.com>
> 
> I would only have one request. It's not a huge deal, and probably not
> worth a resend only for that, but if either you or the committer are up
> for complying with that in whatever way you find the most suitable,
> that would be great.

I'd certainly be okay making this adjustment while committing, as
long as Jürgen agrees. With ...

> I.e., can we...
>> ---
>>  xen/common/sched/core.c | 121 ++++++++++++++++++++++++++++++--------
>> --
>>  1 file changed, 90 insertions(+), 31 deletions(-)
>>
>> diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
>> index a429fc7640..2a61c879b3 100644
>> --- a/xen/common/sched/core.c
>> +++ b/xen/common/sched/core.c
>>
>> [...]
>> -    old_ops = dom_scheduler(d);
>>      old_domdata = d->sched_priv;
>>
> Move *here* (i.e., above this new call to cpumask_first()) the comment
> that is currently inside the loop?
>>  
>> +    new_p = cpumask_first(d->cpupool->cpu_valid);
>>      for_each_sched_unit ( d, unit )
>>      {
>> +        spinlock_t *lock;
>> +
>> +        /*
>> +         * Temporarily move all units to same processor to make
>> locking
>> +         * easier when moving the new units to the new processors.
>> +         */
>>
> This one here, basically ^^^

... this comment moved out of here, I'd be tempted to suggest to
make ...

>> +        lock = unit_schedule_lock_irq(unit);

... this the variable's initializer then at the same time.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 07:54:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 07:54:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55743.97124 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpo6x-0004XX-AZ; Thu, 17 Dec 2020 07:54:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55743.97124; Thu, 17 Dec 2020 07:54:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpo6x-0004XQ-7G; Thu, 17 Dec 2020 07:54:07 +0000
Received: by outflank-mailman (input) for mailman id 55743;
 Thu, 17 Dec 2020 07:54:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gjir=FV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kpo6v-0004XL-76
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 07:54:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c0eff6a1-0b9d-4edb-bdc5-566d6cc29650;
 Thu, 17 Dec 2020 07:54:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 66384AE63;
 Thu, 17 Dec 2020 07:54:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0eff6a1-0b9d-4edb-bdc5-566d6cc29650
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608191642; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=b34nmTylTb7jvC6mCLKpAEF3ItDic7YMvlbI0uPZYSc=;
	b=Cbi5L21o5atctUyqtANYKPH86edhEdbqCrraigegG2Mt51Anf5nWpS/5j7ZeAPwEGSYqMW
	a3OyGOxh8djv6rwV7TScybm256XXGY2sJ3zNF3rANgWJN5DgN1NIPpMOO3xejs5nJiQIkU
	DnjTrZ2xs6QDqPdyZkce1UpA5SmfrRA=
Subject: Re: [PATCH v3 1/8] xen/cpupool: support moving domain between
 cpupools with different granularity
To: Jan Beulich <jbeulich@suse.com>, Dario Faggioli <dfaggioli@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
References: <20201209160956.32456-1-jgross@suse.com>
 <20201209160956.32456-2-jgross@suse.com>
 <a22954117d8dd36fc0e1b9470efb72c5b80ad393.camel@suse.com>
 <627b62f4-1011-36fa-9623-bbd30834010a@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <da457cc2-754c-80a8-da10-e7fbafe7ae3c@suse.com>
Date: Thu, 17 Dec 2020 08:54:01 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <627b62f4-1011-36fa-9623-bbd30834010a@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="7JvlJ9OcfmKqDh71FL4P2LCbDBCJ8r52j"


On 17.12.20 08:49, Jan Beulich wrote:
> On 16.12.2020 18:52, Dario Faggioli wrote:
>> On Wed, 2020-12-09 at 17:09 +0100, Juergen Gross wrote:
>>> When moving a domain between cpupools with different scheduling
>>> granularity the sched_units of the domain need to be adjusted.
>>>
>>> Do that by allocating new sched_units and throwing away the old ones
>>> in sched_move_domain().
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>
>> This looks fine, and can have:
>>
>> Reviewed-by: Dario Faggioli <dfaggioli@suse.com>
>>
>> I would only have one request. It's not a huge deal, and probably not
>> worth a resend only for that, but if either you or the committer are up
>> for complying with that in whatever way you find the most suitable,
>> that would be great.
> 
> I'd certainly be okay making this adjustment while committing, as
> long as Jürgen agrees. With ...
> 
>> I.e., can we...
>>> ---
>>>  xen/common/sched/core.c | 121 ++++++++++++++++++++++++++++++--------
>>> --
>>>  1 file changed, 90 insertions(+), 31 deletions(-)
>>>
>>> diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
>>> index a429fc7640..2a61c879b3 100644
>>> --- a/xen/common/sched/core.c
>>> +++ b/xen/common/sched/core.c
>>>
>>> [...]
>>> -    old_ops = dom_scheduler(d);
>>>      old_domdata = d->sched_priv;
>>>
>> Move *here* (i.e., above this new call to cpumask_first()) the comment
>> that is currently inside the loop?
>>>  
>>> +    new_p = cpumask_first(d->cpupool->cpu_valid);
>>>      for_each_sched_unit ( d, unit )
>>>      {
>>> +        spinlock_t *lock;
>>> +
>>> +        /*
>>> +         * Temporarily move all units to same processor to make
>>> locking
>>> +         * easier when moving the new units to the new processors.
>>> +         */
>>>
>> This one here, basically ^^^
> 
> ... this comment moved out of here, I'd be tempted to suggest to
> make ...
> 
>>> +        lock = unit_schedule_lock_irq(unit);
> 
> ... this the variable's initializer then at the same time.

Fine with me.


Juergen




From xen-devel-bounces@lists.xenproject.org Thu Dec 17 08:13:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 08:13:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55754.97148 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpoPR-00075x-A6; Thu, 17 Dec 2020 08:13:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55754.97148; Thu, 17 Dec 2020 08:13:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpoPR-00075q-6q; Thu, 17 Dec 2020 08:13:13 +0000
Received: by outflank-mailman (input) for mailman id 55754;
 Thu, 17 Dec 2020 08:13:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8YGc=FV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kpoPP-00075l-TS
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 08:13:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1b7e1d7d-c1fc-4cbb-8b8f-3b001e5fe4e9;
 Thu, 17 Dec 2020 08:13:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DBC02AC79;
 Thu, 17 Dec 2020 08:13:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b7e1d7d-c1fc-4cbb-8b8f-3b001e5fe4e9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608192790; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2qnws3/6vo3B1EvPC08I1gLRMm+PbjpnSkBWcFzv6WQ=;
	b=rTR9PA/x39myenr6Kn1w1SvjfecwlQGEFyznu8IkpPzhY8HfAj/KGy3WMml+3EOvj0qy/X
	Ud90jBsMQ3LQTYbAo2f72stUYCK6rOrBbqw40+VylO/0tXtq+HtoWgQdC6eRd1prCMxD0c
	NVXh2GlD9PZU5wvnLOD5ToBRgXjnIrc=
Subject: Re: arm32 tools/flask build failure
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: iwj@xenproject.org, anthony.perard@citrix.com, wl@xen.org,
 dgdegra@tycho.nsa.gov, julien@xen.org, Volodymyr_Babchuk@epam.com,
 Bertrand.Marquis@arm.com, xen-devel@lists.xenproject.org
References: <alpine.DEB.2.21.2012151823480.4040@sstabellini-ThinkPad-T480s>
 <alpine.DEB.2.21.2012161753100.4040@sstabellini-ThinkPad-T480s>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <215dee92-634e-73c3-f000-928deda5ab3d@suse.com>
Date: Thu, 17 Dec 2020 09:13:09 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2012161753100.4040@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 17.12.2020 02:54, Stefano Stabellini wrote:
> On Tue, 15 Dec 2020, Stefano Stabellini wrote:
>> Hi all,
>>
>> I am building Xen tools for ARM32 using qemu-user. I am getting the
>> following error building tools/flask. Everything else works fine. It is
>> worth noting that make -j1 works fine, it is only make -j4 that fails.
>>
>> I played with .NOTPARALLEL but couldn't get it to work. Anyone has any
>> ideas?
>>
>> Cheers,
>>
>> Stefano
>>
>>
>> make[2]: Leaving directory '/build/tools/flask/utils'
>> make[1]: Leaving directory '/build/tools/flask'
>> make[1]: Entering directory '/build/tools/flask'
>> /usr/bin/make -C policy all
>> make[2]: Entering directory '/build/tools/flask/policy'
>> make[2]: warning: jobserver unavailable: using -j1.  Add '+' to parent make rule.
>> /build/tools/flask/policy/Makefile.common:115: *** target pattern contains no '%'.  Stop.
>> make[2]: Leaving directory '/build/tools/flask/policy'
>> make[1]: *** [/build/tools/flask/../../tools/Rules.mk:160: subdir-all-policy] Error 2
>> make[1]: Leaving directory '/build/tools/flask'
>> make: *** [/build/tools/flask/../../tools/Rules.mk:155: subdirs-all] Error 2
> 
> 
> The fix seems to be turning the problematic variable:
> 
> POLICY_FILENAME = $(FLASK_BUILD_DIR)/xenpolicy-$(shell $(MAKE) -C $(XEN_ROOT)/xen xenversion --no-print-directory)
> 
> into a rule.

At first glance this looks like just papering over the issue. When
I looked at it yesterday after seeing your mail, I didn't even
spot this "interesting" make recursion. What I'd like to understand
first is where the % is coming from - the error message clearly
suggests that there's a % in the filename. Yet

.PHONY: xenversion
xenversion:
	@echo $(XEN_FULLVERSION)

doesn't make clear to me where the % might be coming from. Of course
there's nothing at all precluding e.g. $(XEN_VENDORVERSION) from
containing a % character, but I don't think that's what you're running
into.

> --- a/tools/flask/policy/Makefile.common
> +++ b/tools/flask/policy/Makefile.common
> @@ -35,7 +35,6 @@ OUTPUT_POLICY ?= $(BEST_POLICY_VER)
>  #
>  ########################################
>  
> -POLICY_FILENAME = $(FLASK_BUILD_DIR)/xenpolicy-$(shell $(MAKE) -C $(XEN_ROOT)/xen xenversion --no-print-directory)
>  POLICY_LOADPATH = /boot
>  
>  # List of policy versions supported by the hypervisor
> @@ -112,17 +111,19 @@ POLICY_SECTIONS += $(USERS)
>  POLICY_SECTIONS += $(ALL_CONSTRAINTS)
>  POLICY_SECTIONS += $(ISID_DEFS) $(DEV_OCONS)
>  
> -all: $(POLICY_FILENAME)
> +policy:

This is a phony target, isn't it? It then also needs to be marked as
such. However, ...

> -install: $(POLICY_FILENAME)
> +all: policy
> +
> +install: policy
>  	$(INSTALL_DIR) $(DESTDIR)/$(POLICY_LOADPATH)
>  	$(INSTALL_DATA) $^ $(DESTDIR)/$(POLICY_LOADPATH)
>  
>  uninstall:
>  	rm -f $(DESTDIR)/$(POLICY_LOADPATH)/$(POLICY_FILENAME)
>  
> -$(POLICY_FILENAME): $(FLASK_BUILD_DIR)/policy.conf
> -	$(CHECKPOLICY) $(CHECKPOLICY_PARAM) $^ -o $@
> +policy: $(FLASK_BUILD_DIR)/policy.conf
> +	$(CHECKPOLICY) $(CHECKPOLICY_PARAM) $^ -o xenpolicy-"$$($(MAKE) -C $(XEN_ROOT)/xen xenversion --no-print-directory)"

... wouldn't it make sense to latch the version into an output
file, and use that as the target? Along the lines of

xenversion:
	$(MAKE) -C $(XEN_ROOT)/xen --no-print-directory $@ >$@

but possibly utilizing move-if-changed. This would then result in
more "conventional" make recursion.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 08:29:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 08:29:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55758.97159 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpoey-0008Hc-LT; Thu, 17 Dec 2020 08:29:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55758.97159; Thu, 17 Dec 2020 08:29:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpoey-0008HV-IE; Thu, 17 Dec 2020 08:29:16 +0000
Received: by outflank-mailman (input) for mailman id 55758;
 Thu, 17 Dec 2020 08:29:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=epRl=FV=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kpoex-0008HQ-0w
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 08:29:15 +0000
Received: from mail-wr1-x42b.google.com (unknown [2a00:1450:4864:20::42b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8fcaa918-b833-40fc-b20a-da12d6ffdeb8;
 Thu, 17 Dec 2020 08:29:13 +0000 (UTC)
Received: by mail-wr1-x42b.google.com with SMTP id d13so7414602wrc.13
 for <xen-devel@lists.xenproject.org>; Thu, 17 Dec 2020 00:29:13 -0800 (PST)
Received: from CBGR90WXYV0 (host86-166-98-87.range86-166.btcentralplus.com.
 [86.166.98.87])
 by smtp.gmail.com with ESMTPSA id j10sm7635096wmj.7.2020.12.17.00.29.11
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 17 Dec 2020 00:29:12 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8fcaa918-b833-40fc-b20a-da12d6ffdeb8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=dFnmyhGK1dH5Ahx0r85mEoGTGW27Pstio+R/BsNOdTA=;
        b=aZ/qoCfZV0LMz++7ECSwQtJUU8Miw/ruL0JnjbNB8tBSjYS1ls707Y3IsQwf4K2zmF
         2655pl2jyCTnO6rUM0egIYwvqREoCB14OdYT8N1+XfwrSDOwOVJP+EtbXT447jBmVu9n
         Fo1qXoJlLYHezYMdyzh4KzzmCU2d5HONFikwXC3adIu5rVwSaHmNJFLrdi8cOMgMLhFd
         Sjy/TG55TTvqcdOxXn283ShZlI8e4Qexp1xDv9X78OzlQlyI79iro9cEiAbfKTJ08zo3
         18aWJ9wK6oGQHbAkuaVfGrtAo4yqmvmTX3PW2DMmQTJ2/csdndO9LPD+eNk0FKxn4btF
         s1wg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=dFnmyhGK1dH5Ahx0r85mEoGTGW27Pstio+R/BsNOdTA=;
        b=UEXeyEZXxuMLd6mbsPTZEoSw6bPOFmeifuj/xG+0pJlbIPwZVm1ZpfBSPKoF61bwZU
         IaB1l/kd/O79NpHCxNZNfm1c987DrMP/IUDR9vwpO6fHEI5LsK1hVyvzQPM/1yZaNQlT
         0acDq3CTx9kFmQhRCz/C5ziTm3AQcmTu4bVJ0PHcoUmWVEOs9+gnu/0/r9MnCCj+nyAm
         siUDyV8eKJOX7w7ggc2aYiq5UFQ3pJc2TfoMX6j0YAZJ9wU7tq/sZgnHxGV+k7Glh/uH
         +D1ViYGL3i9mKDcOhcOAm9wYnP5UrtKHvrylPo1DXJGVZyln3NBVEqGhHfgdiNAEyEFX
         75xg==
X-Gm-Message-State: AOAM533Ak8L9EOOM9ODAzBuauiEpUoP3WemIDXF5UpvGJvxNB1YEoGRj
	0IX8gf+c6N3o91+JlAHLN6k=
X-Google-Smtp-Source: ABdhPJym0YR7aMw1m/PlO08/3rZHiOFM5+cmmAfmRNzZpwKsQjO1c8GpGE9U20oM56Ha421hkcCiNQ==
X-Received: by 2002:a05:6000:5:: with SMTP id h5mr42339710wrx.153.1608193752888;
        Thu, 17 Dec 2020 00:29:12 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Wei Liu'" <wl@xen.org>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>
Cc: "'osstest service owner'" <osstest-admin@xenproject.org>,
	<xen-devel@lists.xenproject.org>
References: <E1kpMXk-0006zb-Jk@osstest.test-lab.xenproject.org> <19ed8894-23f7-0f9d-f3c4-1d5ea5bc0c02@citrix.com> <20201216104357.wcggzckdii76d4iz@liuwe-devbox-debian-v2>
In-Reply-To: <20201216104357.wcggzckdii76d4iz@liuwe-devbox-debian-v2>
Subject: RE: [xen-unstable-smoke bisection] complete build-amd64-libvirt
Date: Thu, 17 Dec 2020 08:29:12 -0000
Message-ID: <00e501d6d44e$b2fa6420$18ef2c60$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQLfRoVMfz30YRXb6SCWrJ23es5KvwIrMUEmAlwjqk2nxdTfgA==

> -----Original Message-----
> From: Wei Liu <wl@xen.org>
> Sent: 16 December 2020 10:44
> To: Andrew Cooper <andrew.cooper3@citrix.com>; Paul Durrant <paul@xen.org>
> Cc: osstest service owner <osstest-admin@xenproject.org>; xen-devel@lists.xenproject.org; Paul Durrant
> <paul@xen.org>; Wei Liu <wl@xen.org>
> Subject: Re: [xen-unstable-smoke bisection] complete build-amd64-libvirt
>
> Paul, are you able to cook up a patch today? If not I will revert the
> offending patch(es).
>

Sorry I was otherwise occupied yesterday. It's not so simple to avoid the API change the way things are in the series... it will
take a reasonable amount of re-factoring to avoid it. I'll re-base and fix it.

  Paul

> Wei.
>
> On Wed, Dec 16, 2020 at 10:17:29AM +0000, Andrew Cooper wrote:
> > On 16/12/2020 02:27, osstest service owner wrote:
> > > branch xen-unstable-smoke
> > > xenbranch xen-unstable-smoke
> > > job build-amd64-libvirt
> > > testid libvirt-build
> > >
> > > Tree: libvirt git://xenbits.xen.org/libvirt.git
> > > Tree: libvirt_keycodemapdb =
https://gitlab.com/keycodemap/keycodemapdb.git
> > > Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
> > > Tree: qemuu git://xenbits.xen.org/qemu-xen.git
> > > Tree: xen git://xenbits.xen.org/xen.git
> > >
> > > *** Found and reproduced problem changeset ***
> > >
> > >   Bug is in tree:  xen git://xenbits.xen.org/xen.git
> > >   Bug introduced:  929f23114061a0089e6d63d109cf6a1d03d35c71
> > >   Bug not present: 8bc342b043a6838c03cd86039a34e3f8eea1242f
> > >   Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/157589/
> > >
> > >
> > >   commit 929f23114061a0089e6d63d109cf6a1d03d35c71
> > >   Author: Paul Durrant <pdurrant@amazon.com>
> > >   Date:   Tue Dec 8 19:30:26 2020 +0000
> > >
> > >       libxl: introduce 'libxl_pci_bdf' in the idl...
> > >
> > >       ... and use in 'libxl_device_pci'
> > >
> > >       This patch is preparatory work for restricting the type passed to functions
> > >       that only require BDF information, rather than passing a 'libxl_device_pci'
> > >       structure which is only partially filled. In this patch only the minimal
> > >       mechanical changes necessary to deal with the structural changes are made.
> > >       Subsequent patches will adjust the code to make better use of the new type.
> > >
> > >       Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> > >       Acked-by: Wei Liu <wl@xen.org>
> > >       Acked-by: Nick Rosbrook <rosbrookn@ainfosec.com>
> >
> > This breaks the API.  You can't make the following change in the IDL.
> >
> >  libxl_device_pci = Struct("device_pci", [
> > -    ("func",      uint8),
> > -    ("dev",       uint8),
> > -    ("bus",       uint8),
> > -    ("domain",    integer),
> > -    ("vdevfn",    uint32),
> > +    ("bdf", libxl_pci_bdf),
> > +    ("vdevfn", uint32),
> >
> > ~Andrew



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 09:10:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 09:10:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55765.97171 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kppIB-0003lb-Q5; Thu, 17 Dec 2020 09:09:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55765.97171; Thu, 17 Dec 2020 09:09:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kppIB-0003lU-Ml; Thu, 17 Dec 2020 09:09:47 +0000
Received: by outflank-mailman (input) for mailman id 55765;
 Thu, 17 Dec 2020 09:09:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kppIA-0003lM-1z; Thu, 17 Dec 2020 09:09:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kppI9-0003Wq-TY; Thu, 17 Dec 2020 09:09:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kppI9-0004Kf-Me; Thu, 17 Dec 2020 09:09:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kppI9-0007k6-Lr; Thu, 17 Dec 2020 09:09:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DdMvBdTDd5GazT4mUrtVJ4EIlZ8aLO3LxoONYeoPFEI=; b=uBQ8g+l94yuRQdDcCq1RQvzW32
	d9txHGwyYZy+53MTuxnyGzdWK9ctuYTA+myANUnE6yN6WNUdOOaSvKyPG2qFMo5/MpHJooE6n9I5B
	8OhMWfMADfYMjtUqR5W4APXJgy9yDnzxnTUIpdRhsKQSWv/cLqrtVclQpTbYFNuR09qs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157612-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157612: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=e6ae24e1d676bb2bdc0fc715b49b04908f41fc10
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 17 Dec 2020 09:09:45 +0000

flight 157612 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157612/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 e6ae24e1d676bb2bdc0fc715b49b04908f41fc10
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    7 days
Failing since        157348  2020-12-09 15:39:39 Z    7 days   51 attempts
Testing same since   157612  2020-12-16 21:09:14 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Ard Biesheuvel <ard.biesheuvel@arm.com>
  Baraneedharan Anbazhagan <anbazhagan@hp.com>
  Baraneedharan Anbazhagan <anbazhgan@hp.com>
  Bret Barkelew <Bret.Barkelew@microsoft.com>
  Chen, Christine <Yuwei.Chen@intel.com>
  Fan Wang <fan.wang@intel.com>
  James Bottomley <jejb@linux.ibm.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Michael D Kinney <michael.d.kinney@intel.com>
  Michael Kubacki <michael.kubacki@microsoft.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sheng Wei <w.sheng@intel.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Star Zeng <star.zeng@intel.com>
  Ting Ye <ting.ye@intel.com>
  Yuwei Chen <yuwei.chen@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 699 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 09:31:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 09:31:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55774.97199 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kppdW-0006SG-4b; Thu, 17 Dec 2020 09:31:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55774.97199; Thu, 17 Dec 2020 09:31:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kppdW-0006S9-0j; Thu, 17 Dec 2020 09:31:50 +0000
Received: by outflank-mailman (input) for mailman id 55774;
 Thu, 17 Dec 2020 09:31:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gjir=FV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kppdV-0006RQ-45
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 09:31:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1121d495-8f55-40c8-85d7-06bd245deb82;
 Thu, 17 Dec 2020 09:31:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 44FF5B1A1;
 Thu, 17 Dec 2020 09:31:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1121d495-8f55-40c8-85d7-06bd245deb82
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608197506; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=E7FpqkqnD5unmDeE4sjWFYeLFL2ZgXDe6/5+3YmBT64=;
	b=a9yFTfK79u+NFXUZ4zWV8aHPpDUcRPDVvBCcSa9sHMiOYeRnwQj4sILTaezMzm4EYJQG+K
	gWtQy41s+pv/TtoyNIh8/P2b5G/eZiIFhJyS9RJs098dCw6tqaN2AHvfO5MUEHUwEaUz34
	+nKJxIVkWwafsJ2frUmJ4ePnkcxTR4M=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-hyperv@vger.kernel.org,
	kvm@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Andy Lutomirski <luto@kernel.org>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>,
	Joerg Roedel <joro@8bytes.org>,
	Daniel Lezcano <daniel.lezcano@linaro.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>,
	Mel Gorman <mgorman@suse.de>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	Josh Poimboeuf <jpoimboe@redhat.com>
Subject: [PATCH v3 00/15] x86: major paravirt cleanup
Date: Thu, 17 Dec 2020 10:31:18 +0100
Message-Id: <20201217093133.1507-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is a major cleanup of the paravirt infrastructure aiming at
eliminating all custom code patching via paravirt patching.

This is achieved by using ALTERNATIVE instead, leading to the ability
to give objtool access to the patched-in instructions.

In order to remove most of the 32-bit special handling from pvops, the
time-related operations are switched to use static_call() instead.

At the end of this series all paravirt patching has to do is to
replace indirect calls with direct ones. In a further step this could
be switched to static_call(), too, but that would require a major
header file disentangling.

Changes in V3:
- added patches 7 and 12
- addressed all comments

Changes in V2:
- added patches 5-12

Juergen Gross (14):
  x86/xen: use specific Xen pv interrupt entry for MCE
  x86/xen: use specific Xen pv interrupt entry for DF
  x86/pv: switch SWAPGS to ALTERNATIVE
  x86/xen: drop USERGS_SYSRET64 paravirt call
  x86: rework arch_local_irq_restore() to not use popf
  x86/paravirt: switch time pvops functions to use static_call()
  x86/alternative: support "not feature" and ALTERNATIVE_TERNARY
  x86: add new features for paravirt patching
  x86/paravirt: remove no longer needed 32-bit pvops cruft
  x86/paravirt: simplify paravirt macros
  x86/paravirt: switch iret pvops to ALTERNATIVE
  x86/paravirt: add new macros PVOP_ALT* supporting pvops in
    ALTERNATIVEs
  x86/paravirt: switch functions with custom code to ALTERNATIVE
  x86/paravirt: have only one paravirt patch function

Peter Zijlstra (1):
  objtool: Alternatives vs ORC, the hard way

 arch/x86/Kconfig                       |   1 +
 arch/x86/entry/entry_32.S              |   4 +-
 arch/x86/entry/entry_64.S              |  26 ++-
 arch/x86/include/asm/alternative-asm.h |   3 +
 arch/x86/include/asm/alternative.h     |   7 +
 arch/x86/include/asm/cpufeatures.h     |   2 +
 arch/x86/include/asm/idtentry.h        |   6 +
 arch/x86/include/asm/irqflags.h        |  51 ++----
 arch/x86/include/asm/mshyperv.h        |  11 --
 arch/x86/include/asm/paravirt.h        | 157 ++++++------------
 arch/x86/include/asm/paravirt_time.h   |  38 +++++
 arch/x86/include/asm/paravirt_types.h  | 220 +++++++++----------------
 arch/x86/kernel/Makefile               |   3 +-
 arch/x86/kernel/alternative.c          |  59 ++++++-
 arch/x86/kernel/asm-offsets.c          |   7 -
 arch/x86/kernel/asm-offsets_64.c       |   3 -
 arch/x86/kernel/cpu/vmware.c           |   5 +-
 arch/x86/kernel/irqflags.S             |  11 --
 arch/x86/kernel/kvm.c                  |   3 +-
 arch/x86/kernel/kvmclock.c             |   3 +-
 arch/x86/kernel/paravirt.c             |  83 +++-------
 arch/x86/kernel/paravirt_patch.c       | 109 ------------
 arch/x86/kernel/tsc.c                  |   3 +-
 arch/x86/xen/enlighten_pv.c            |  36 ++--
 arch/x86/xen/irq.c                     |  23 ---
 arch/x86/xen/time.c                    |  12 +-
 arch/x86/xen/xen-asm.S                 |  52 +-----
 arch/x86/xen/xen-ops.h                 |   3 -
 drivers/clocksource/hyperv_timer.c     |   5 +-
 drivers/xen/time.c                     |   3 +-
 kernel/sched/sched.h                   |   1 +
 tools/objtool/check.c                  | 180 ++++++++++++++++++--
 tools/objtool/check.h                  |   5 +
 tools/objtool/orc_gen.c                | 178 +++++++++++++-------
 34 files changed, 627 insertions(+), 686 deletions(-)
 create mode 100644 arch/x86/include/asm/paravirt_time.h
 delete mode 100644 arch/x86/kernel/paravirt_patch.c

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 09:31:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 09:31:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55773.97187 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kppdU-0006RJ-TF; Thu, 17 Dec 2020 09:31:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55773.97187; Thu, 17 Dec 2020 09:31:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kppdU-0006RC-Oj; Thu, 17 Dec 2020 09:31:48 +0000
Received: by outflank-mailman (input) for mailman id 55773;
 Thu, 17 Dec 2020 09:31:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gjir=FV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kppdT-0006R7-Nx
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 09:31:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d97c4d01-3d7d-47e1-8352-0f890596a2e5;
 Thu, 17 Dec 2020 09:31:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 08D43AC90;
 Thu, 17 Dec 2020 09:31:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d97c4d01-3d7d-47e1-8352-0f890596a2e5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608197506; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=PezddZt9B1as0O7QAdAa2+LbregUz6HT/ncAiFWbX44=;
	b=DVZpQtKFw1mbHOWW2Oa8umjpwXvQ8DLru5/9cePMOFKRN0fxFl0PRnfMwggcnF//of12qz
	+S57/gJSEh/PGOWLxELsA0xltbCfgnoigSs/f3djzPQlxeLmr07I6DHwijDWY5ABJ4WsQ3
	YSIyWGihHsG4YZsZYKtBtufuF8+ewAc=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Subject: [PATCH v3 01/15] x86/xen: use specific Xen pv interrupt entry for MCE
Date: Thu, 17 Dec 2020 10:31:19 +0100
Message-Id: <20201217093133.1507-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201217093133.1507-1-jgross@suse.com>
References: <20201217093133.1507-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Xen PV guests don't use IST. For machine check interrupts switch to
the same model as debug interrupts.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/idtentry.h |  3 +++
 arch/x86/xen/enlighten_pv.c     | 16 +++++++++++++++-
 arch/x86/xen/xen-asm.S          |  2 +-
 3 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
index 247a60a47331..5dd64404715a 100644
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -585,6 +585,9 @@ DECLARE_IDTENTRY_MCE(X86_TRAP_MC,	exc_machine_check);
 #else
 DECLARE_IDTENTRY_RAW(X86_TRAP_MC,	exc_machine_check);
 #endif
+#ifdef CONFIG_XEN_PV
+DECLARE_IDTENTRY_RAW(X86_TRAP_MC,	xenpv_exc_machine_check);
+#endif
 #endif
 
 /* NMI */
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 4409306364dc..9f5e44c1f70a 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -583,6 +583,20 @@ DEFINE_IDTENTRY_RAW(xenpv_exc_debug)
 		exc_debug(regs);
 }
 
+#ifdef CONFIG_X86_MCE
+DEFINE_IDTENTRY_RAW(xenpv_exc_machine_check)
+{
+	/*
+	 * There's no IST on Xen PV, but we still need to dispatch
+	 * to the correct handler.
+	 */
+	if (user_mode(regs))
+		noist_exc_machine_check(regs);
+	else
+		exc_machine_check(regs);
+}
+#endif
+
 struct trap_array_entry {
 	void (*orig)(void);
 	void (*xen)(void);
@@ -603,7 +617,7 @@ static struct trap_array_entry trap_array[] = {
 	TRAP_ENTRY_REDIR(exc_debug,			true  ),
 	TRAP_ENTRY(exc_double_fault,			true  ),
 #ifdef CONFIG_X86_MCE
-	TRAP_ENTRY(exc_machine_check,			true  ),
+	TRAP_ENTRY_REDIR(exc_machine_check,		true  ),
 #endif
 	TRAP_ENTRY_REDIR(exc_nmi,			true  ),
 	TRAP_ENTRY(exc_int3,				false ),
diff --git a/arch/x86/xen/xen-asm.S b/arch/x86/xen/xen-asm.S
index 1cb0e84b9161..bc2586730a5b 100644
--- a/arch/x86/xen/xen-asm.S
+++ b/arch/x86/xen/xen-asm.S
@@ -172,7 +172,7 @@ xen_pv_trap asm_exc_spurious_interrupt_bug
 xen_pv_trap asm_exc_coprocessor_error
 xen_pv_trap asm_exc_alignment_check
 #ifdef CONFIG_X86_MCE
-xen_pv_trap asm_exc_machine_check
+xen_pv_trap asm_xenpv_exc_machine_check
 #endif /* CONFIG_X86_MCE */
 xen_pv_trap asm_exc_simd_coprocessor_error
 #ifdef CONFIG_IA32_EMULATION
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 09:31:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 09:31:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55775.97212 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kppda-0006VU-FR; Thu, 17 Dec 2020 09:31:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55775.97212; Thu, 17 Dec 2020 09:31:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kppda-0006VN-8n; Thu, 17 Dec 2020 09:31:54 +0000
Received: by outflank-mailman (input) for mailman id 55775;
 Thu, 17 Dec 2020 09:31:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gjir=FV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kppdY-0006R7-LQ
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 09:31:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4789a870-27b9-4374-a816-d1d0d5ed6e18;
 Thu, 17 Dec 2020 09:31:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E0A08B715;
 Thu, 17 Dec 2020 09:31:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4789a870-27b9-4374-a816-d1d0d5ed6e18
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608197507; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=s9jYYrCp5NMm5yGKLL/4edua8SOODyzTExPxz/HtQyc=;
	b=IbP6h6XuI55B6mlzx6H61UbVjivatheangh9jaqVm5H+C3z4ag44hxqz+JV3AWrsNVpeQY
	JhxYE1Xi9BqTdVfk/IA8rFns3GTtiM+LqbwmLH9b5JC4Xmg2vxE5RCUiHvaLG19e7DB9yk
	58Qv4P2H7PqaDYhi/4gkqgF4yKGDNks=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org
Cc: Juergen Gross <jgross@suse.com>,
	Andy Lutomirski <luto@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v3 04/15] x86/xen: drop USERGS_SYSRET64 paravirt call
Date: Thu, 17 Dec 2020 10:31:22 +0100
Message-Id: <20201217093133.1507-5-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201217093133.1507-1-jgross@suse.com>
References: <20201217093133.1507-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

USERGS_SYSRET64 is used to return from a syscall via sysret, but
a Xen PV guest will nevertheless use the iret hypercall, as there
is no sysret PV hypercall defined.

So instead of testing all the prerequisites for doing a sysret and
then mangling the stack for Xen PV again for doing an iret, just use
the iret exit from the beginning.

This can easily be done via an ALTERNATIVE like it is done for the
sysenter compat case already.

It should be noted that this drops Xen's optimization of not
restoring a few registers when returning to user mode, but the
instructions saved in the kernel appear to more than compensate for
that loss (a kernel build in a Xen PV guest was slightly faster with
this patch applied).

While at it, remove the stale sysret32 remnants.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- simplify ALTERNATIVE (Borislav Petkov)
---
 arch/x86/entry/entry_64.S             | 16 +++++++---------
 arch/x86/include/asm/irqflags.h       |  6 ------
 arch/x86/include/asm/paravirt.h       |  5 -----
 arch/x86/include/asm/paravirt_types.h |  8 --------
 arch/x86/kernel/asm-offsets_64.c      |  2 --
 arch/x86/kernel/paravirt.c            |  5 +----
 arch/x86/kernel/paravirt_patch.c      |  4 ----
 arch/x86/xen/enlighten_pv.c           |  1 -
 arch/x86/xen/xen-asm.S                | 20 --------------------
 arch/x86/xen/xen-ops.h                |  2 --
 10 files changed, 8 insertions(+), 61 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index a876204a73e0..ce0464d630a2 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -46,14 +46,6 @@
 .code64
 .section .entry.text, "ax"
 
-#ifdef CONFIG_PARAVIRT_XXL
-SYM_CODE_START(native_usergs_sysret64)
-	UNWIND_HINT_EMPTY
-	swapgs
-	sysretq
-SYM_CODE_END(native_usergs_sysret64)
-#endif /* CONFIG_PARAVIRT_XXL */
-
 /*
  * 64-bit SYSCALL instruction entry. Up to 6 arguments in registers.
  *
@@ -123,7 +115,12 @@ SYM_INNER_LABEL(entry_SYSCALL_64_after_hwframe, SYM_L_GLOBAL)
 	 * Try to use SYSRET instead of IRET if we're returning to
 	 * a completely clean 64-bit userspace context.  If we're not,
 	 * go to the slow exit path.
+	 * In the Xen PV case we must use iret anyway.
 	 */
+
+	ALTERNATIVE "", "jmp	swapgs_restore_regs_and_return_to_usermode", \
+		X86_FEATURE_XENPV
+
 	movq	RCX(%rsp), %rcx
 	movq	RIP(%rsp), %r11
 
@@ -215,7 +212,8 @@ syscall_return_via_sysret:
 
 	popq	%rdi
 	popq	%rsp
-	USERGS_SYSRET64
+	swapgs
+	sysretq
 SYM_CODE_END(entry_SYSCALL_64)
 
 /*
diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
index 8c86edefa115..e585a4705b8d 100644
--- a/arch/x86/include/asm/irqflags.h
+++ b/arch/x86/include/asm/irqflags.h
@@ -132,12 +132,6 @@ static __always_inline unsigned long arch_local_irq_save(void)
 #endif
 
 #define INTERRUPT_RETURN	jmp native_iret
-#define USERGS_SYSRET64				\
-	swapgs;					\
-	sysretq;
-#define USERGS_SYSRET32				\
-	swapgs;					\
-	sysretl
 
 #else
 #define INTERRUPT_RETURN		iret
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index f2ebe109a37e..dd43b1100a87 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -776,11 +776,6 @@ extern void default_banner(void);
 
 #ifdef CONFIG_X86_64
 #ifdef CONFIG_PARAVIRT_XXL
-#define USERGS_SYSRET64							\
-	PARA_SITE(PARA_PATCH(PV_CPU_usergs_sysret64),			\
-		  ANNOTATE_RETPOLINE_SAFE;				\
-		  jmp PARA_INDIRECT(pv_ops+PV_CPU_usergs_sysret64);)
-
 #ifdef CONFIG_DEBUG_ENTRY
 #define SAVE_FLAGS(clobbers)                                        \
 	PARA_SITE(PARA_PATCH(PV_IRQ_save_fl),			    \
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 130f428b0cc8..0169365f1403 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -156,14 +156,6 @@ struct pv_cpu_ops {
 
 	u64 (*read_pmc)(int counter);
 
-	/*
-	 * Switch to usermode gs and return to 64-bit usermode using
-	 * sysret.  Only used in 64-bit kernels to return to 64-bit
-	 * processes.  Usermode register state, including %rsp, must
-	 * already be restored.
-	 */
-	void (*usergs_sysret64)(void);
-
 	/* Normal iret.  Jump to this with the standard iret stack
 	   frame set up. */
 	void (*iret)(void);
diff --git a/arch/x86/kernel/asm-offsets_64.c b/arch/x86/kernel/asm-offsets_64.c
index 1354bc30614d..b14533af7676 100644
--- a/arch/x86/kernel/asm-offsets_64.c
+++ b/arch/x86/kernel/asm-offsets_64.c
@@ -13,8 +13,6 @@ int main(void)
 {
 #ifdef CONFIG_PARAVIRT
 #ifdef CONFIG_PARAVIRT_XXL
-	OFFSET(PV_CPU_usergs_sysret64, paravirt_patch_template,
-	       cpu.usergs_sysret64);
 #ifdef CONFIG_DEBUG_ENTRY
 	OFFSET(PV_IRQ_save_fl, paravirt_patch_template, irq.save_fl);
 #endif
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 5e5fcf5c376d..18560b71e717 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -135,8 +135,7 @@ unsigned paravirt_patch_default(u8 type, void *insn_buff,
 	else if (opfunc == _paravirt_ident_64)
 		ret = paravirt_patch_ident_64(insn_buff, len);
 
-	else if (type == PARAVIRT_PATCH(cpu.iret) ||
-		 type == PARAVIRT_PATCH(cpu.usergs_sysret64))
+	else if (type == PARAVIRT_PATCH(cpu.iret))
 		/* If operation requires a jmp, then jmp */
 		ret = paravirt_patch_jmp(insn_buff, opfunc, addr, len);
 #endif
@@ -170,7 +169,6 @@ static u64 native_steal_clock(int cpu)
 
 /* These are in entry.S */
 extern void native_iret(void);
-extern void native_usergs_sysret64(void);
 
 static struct resource reserve_ioports = {
 	.start = 0,
@@ -310,7 +308,6 @@ struct paravirt_patch_template pv_ops = {
 
 	.cpu.load_sp0		= native_load_sp0,
 
-	.cpu.usergs_sysret64	= native_usergs_sysret64,
 	.cpu.iret		= native_iret,
 
 #ifdef CONFIG_X86_IOPL_IOPERM
diff --git a/arch/x86/kernel/paravirt_patch.c b/arch/x86/kernel/paravirt_patch.c
index 7c518b08aa3c..2fada2c347c9 100644
--- a/arch/x86/kernel/paravirt_patch.c
+++ b/arch/x86/kernel/paravirt_patch.c
@@ -27,7 +27,6 @@ struct patch_xxl {
 	const unsigned char	mmu_write_cr3[3];
 	const unsigned char	irq_restore_fl[2];
 	const unsigned char	cpu_wbinvd[2];
-	const unsigned char	cpu_usergs_sysret64[6];
 	const unsigned char	mov64[3];
 };
 
@@ -40,8 +39,6 @@ static const struct patch_xxl patch_data_xxl = {
 	.mmu_write_cr3		= { 0x0f, 0x22, 0xdf },	// mov %rdi, %cr3
 	.irq_restore_fl		= { 0x57, 0x9d },	// push %rdi; popfq
 	.cpu_wbinvd		= { 0x0f, 0x09 },	// wbinvd
-	.cpu_usergs_sysret64	= { 0x0f, 0x01, 0xf8,
-				    0x48, 0x0f, 0x07 },	// swapgs; sysretq
 	.mov64			= { 0x48, 0x89, 0xf8 },	// mov %rdi, %rax
 };
 
@@ -83,7 +80,6 @@ unsigned int native_patch(u8 type, void *insn_buff, unsigned long addr,
 	PATCH_CASE(mmu, read_cr3, xxl, insn_buff, len);
 	PATCH_CASE(mmu, write_cr3, xxl, insn_buff, len);
 
-	PATCH_CASE(cpu, usergs_sysret64, xxl, insn_buff, len);
 	PATCH_CASE(cpu, wbinvd, xxl, insn_buff, len);
 #endif
 
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 44bb18adfb51..5476423fc6d0 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -1060,7 +1060,6 @@ static const struct pv_cpu_ops xen_cpu_ops __initconst = {
 	.read_pmc = xen_read_pmc,
 
 	.iret = xen_iret,
-	.usergs_sysret64 = xen_sysret64,
 
 	.load_tr_desc = paravirt_nop,
 	.set_ldt = xen_set_ldt,
diff --git a/arch/x86/xen/xen-asm.S b/arch/x86/xen/xen-asm.S
index 1d054c915046..c0630fd9f44e 100644
--- a/arch/x86/xen/xen-asm.S
+++ b/arch/x86/xen/xen-asm.S
@@ -214,26 +214,6 @@ SYM_CODE_START(xen_iret)
 	jmp hypercall_iret
 SYM_CODE_END(xen_iret)
 
-SYM_CODE_START(xen_sysret64)
-	/*
-	 * We're already on the usermode stack at this point, but
-	 * still with the kernel gs, so we can easily switch back.
-	 *
-	 * tss.sp2 is scratch space.
-	 */
-	movq %rsp, PER_CPU_VAR(cpu_tss_rw + TSS_sp2)
-	movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
-
-	pushq $__USER_DS
-	pushq PER_CPU_VAR(cpu_tss_rw + TSS_sp2)
-	pushq %r11
-	pushq $__USER_CS
-	pushq %rcx
-
-	pushq $VGCF_in_syscall
-	jmp hypercall_iret
-SYM_CODE_END(xen_sysret64)
-
 /*
  * Xen handles syscall callbacks much like ordinary exceptions, which
  * means we have:
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 9546c3384c75..b2fd80a01a36 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -138,8 +138,6 @@ __visible unsigned long xen_read_cr2_direct(void);
 
 /* These are not functions, and cannot be called normally */
 __visible void xen_iret(void);
-__visible void xen_sysret32(void);
-__visible void xen_sysret64(void);
 
 extern int xen_panic_handler_init(void);
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 09:31:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 09:31:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55776.97223 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kppdb-0006YV-NB; Thu, 17 Dec 2020 09:31:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55776.97223; Thu, 17 Dec 2020 09:31:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kppdb-0006YI-J0; Thu, 17 Dec 2020 09:31:55 +0000
Received: by outflank-mailman (input) for mailman id 55776;
 Thu, 17 Dec 2020 09:31:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gjir=FV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kppdZ-0006RQ-U1
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 09:31:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d7e3094e-7f00-4912-b097-4a068f2a9ff6;
 Thu, 17 Dec 2020 09:31:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 31C95AF68;
 Thu, 17 Dec 2020 09:31:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d7e3094e-7f00-4912-b097-4a068f2a9ff6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608197506; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=sHlV/W1YFbgt7ZvBN6riN3ibxVfg2lJ3Lv2QWPteOwQ=;
	b=aCNZ52DNsGKEPUfQUQ8t1aBQp1M8U5RMwBKcreHecPni3WpfYrIU/UZIPnclCUTzeUgua/
	8fn4E8VJvoE3+ZFjWNhNgBOjsMRnh6j8awKUr574LcIJFCBDqxtP8kguiVgXmEfMyoVsiy
	psxAUComfKsTH2Qhw526GmJdB4iYHmU=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Subject: [PATCH v3 02/15] x86/xen: use specific Xen pv interrupt entry for DF
Date: Thu, 17 Dec 2020 10:31:20 +0100
Message-Id: <20201217093133.1507-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201217093133.1507-1-jgross@suse.com>
References: <20201217093133.1507-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Xen PV guests don't use IST. For double fault interrupts, switch to
the same model as NMI.

Correct a typo in a comment while copying it.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---
V2:
- fix typo (Andy Lutomirski)
---
 arch/x86/include/asm/idtentry.h |  3 +++
 arch/x86/xen/enlighten_pv.c     | 10 ++++++++--
 arch/x86/xen/xen-asm.S          |  2 +-
 3 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
index 5dd64404715a..3ac84cb702fc 100644
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -608,6 +608,9 @@ DECLARE_IDTENTRY_RAW(X86_TRAP_DB,	xenpv_exc_debug);
 
 /* #DF */
 DECLARE_IDTENTRY_DF(X86_TRAP_DF,	exc_double_fault);
+#ifdef CONFIG_XEN_PV
+DECLARE_IDTENTRY_RAW_ERRORCODE(X86_TRAP_DF,	xenpv_exc_double_fault);
+#endif
 
 /* #VC */
 #ifdef CONFIG_AMD_MEM_ENCRYPT
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 9f5e44c1f70a..76616024129e 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -567,10 +567,16 @@ void noist_exc_debug(struct pt_regs *regs);
 
 DEFINE_IDTENTRY_RAW(xenpv_exc_nmi)
 {
-	/* On Xen PV, NMI doesn't use IST.  The C part is the sane as native. */
+	/* On Xen PV, NMI doesn't use IST.  The C part is the same as native. */
 	exc_nmi(regs);
 }
 
+DEFINE_IDTENTRY_RAW_ERRORCODE(xenpv_exc_double_fault)
+{
+	/* On Xen PV, DF doesn't use IST.  The C part is the same as native. */
+	exc_double_fault(regs, error_code);
+}
+
 DEFINE_IDTENTRY_RAW(xenpv_exc_debug)
 {
 	/*
@@ -615,7 +621,7 @@ struct trap_array_entry {
 
 static struct trap_array_entry trap_array[] = {
 	TRAP_ENTRY_REDIR(exc_debug,			true  ),
-	TRAP_ENTRY(exc_double_fault,			true  ),
+	TRAP_ENTRY_REDIR(exc_double_fault,		true  ),
 #ifdef CONFIG_X86_MCE
 	TRAP_ENTRY_REDIR(exc_machine_check,		true  ),
 #endif
diff --git a/arch/x86/xen/xen-asm.S b/arch/x86/xen/xen-asm.S
index bc2586730a5b..1d054c915046 100644
--- a/arch/x86/xen/xen-asm.S
+++ b/arch/x86/xen/xen-asm.S
@@ -161,7 +161,7 @@ xen_pv_trap asm_exc_overflow
 xen_pv_trap asm_exc_bounds
 xen_pv_trap asm_exc_invalid_op
 xen_pv_trap asm_exc_device_not_available
-xen_pv_trap asm_exc_double_fault
+xen_pv_trap asm_xenpv_exc_double_fault
 xen_pv_trap asm_exc_coproc_segment_overrun
 xen_pv_trap asm_exc_invalid_tss
 xen_pv_trap asm_exc_segment_not_present
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 09:32:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 09:32:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55777.97234 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kppdf-0006e5-Vs; Thu, 17 Dec 2020 09:31:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55777.97234; Thu, 17 Dec 2020 09:31:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kppdf-0006dz-SC; Thu, 17 Dec 2020 09:31:59 +0000
Received: by outflank-mailman (input) for mailman id 55777;
 Thu, 17 Dec 2020 09:31:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gjir=FV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kppdd-0006R7-Le
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 09:31:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id caa6de0b-6522-49ac-a3ae-88f22672009b;
 Thu, 17 Dec 2020 09:31:48 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 45CAFB717;
 Thu, 17 Dec 2020 09:31:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: caa6de0b-6522-49ac-a3ae-88f22672009b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608197507; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=PWa8LXJtPmdrGF/SU7ngKGgWKwsy52HIQIMfyyd743k=;
	b=XV18fU5jObgxzak0SuEa4O/Fu1W4msBVPVw+I4RyV0+x/g1FB+E24MhDfSXhUgrLBHkP54
	7+VaG4EuQrH0C1QUt62IfclRgOMOUQEy2sf66NMiRuMKbqj7u6FJ1SeSNZ+V1TQCeZlso+
	HwYGPxsv9e3fr+FOrK2wvuezZi6Mu9Y=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org
Cc: Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Andy Lutomirski <luto@kernel.org>
Subject: [PATCH v3 05/15] x86: rework arch_local_irq_restore() to not use popf
Date: Thu, 17 Dec 2020 10:31:23 +0100
Message-Id: <20201217093133.1507-6-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201217093133.1507-1-jgross@suse.com>
References: <20201217093133.1507-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

"popf" is a rather expensive operation, so don't use it for restoring
irq flags. Instead test whether interrupts are enabled in the flags
parameter and enable interrupts via "sti" in that case.

This results in the restore_fl paravirt op to be no longer needed.

Suggested-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/irqflags.h       | 20 ++++++-------------
 arch/x86/include/asm/paravirt.h       |  5 -----
 arch/x86/include/asm/paravirt_types.h |  7 ++-----
 arch/x86/kernel/irqflags.S            | 11 -----------
 arch/x86/kernel/paravirt.c            |  1 -
 arch/x86/kernel/paravirt_patch.c      |  3 ---
 arch/x86/xen/enlighten_pv.c           |  2 --
 arch/x86/xen/irq.c                    | 23 ----------------------
 arch/x86/xen/xen-asm.S                | 28 ---------------------------
 arch/x86/xen/xen-ops.h                |  1 -
 10 files changed, 8 insertions(+), 93 deletions(-)

diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
index e585a4705b8d..144d70ea4393 100644
--- a/arch/x86/include/asm/irqflags.h
+++ b/arch/x86/include/asm/irqflags.h
@@ -35,15 +35,6 @@ extern __always_inline unsigned long native_save_fl(void)
 	return flags;
 }
 
-extern inline void native_restore_fl(unsigned long flags);
-extern inline void native_restore_fl(unsigned long flags)
-{
-	asm volatile("push %0 ; popf"
-		     : /* no output */
-		     :"g" (flags)
-		     :"memory", "cc");
-}
-
 static __always_inline void native_irq_disable(void)
 {
 	asm volatile("cli": : :"memory");
@@ -79,11 +70,6 @@ static __always_inline unsigned long arch_local_save_flags(void)
 	return native_save_fl();
 }
 
-static __always_inline void arch_local_irq_restore(unsigned long flags)
-{
-	native_restore_fl(flags);
-}
-
 static __always_inline void arch_local_irq_disable(void)
 {
 	native_irq_disable();
@@ -152,6 +138,12 @@ static __always_inline int arch_irqs_disabled(void)
 
 	return arch_irqs_disabled_flags(flags);
 }
+
+static __always_inline void arch_local_irq_restore(unsigned long flags)
+{
+	if (!arch_irqs_disabled_flags(flags))
+		arch_local_irq_enable();
+}
 #else
 #ifdef CONFIG_X86_64
 #ifdef CONFIG_XEN_PV
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index dd43b1100a87..4abf110e2243 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -648,11 +648,6 @@ static inline notrace unsigned long arch_local_save_flags(void)
 	return PVOP_CALLEE0(unsigned long, irq.save_fl);
 }
 
-static inline notrace void arch_local_irq_restore(unsigned long f)
-{
-	PVOP_VCALLEE1(irq.restore_fl, f);
-}
-
 static inline notrace void arch_local_irq_disable(void)
 {
 	PVOP_VCALLEE0(irq.irq_disable);
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 0169365f1403..de87087d3bde 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -168,16 +168,13 @@ struct pv_cpu_ops {
 struct pv_irq_ops {
 #ifdef CONFIG_PARAVIRT_XXL
 	/*
-	 * Get/set interrupt state.  save_fl and restore_fl are only
-	 * expected to use X86_EFLAGS_IF; all other bits
-	 * returned from save_fl are undefined, and may be ignored by
-	 * restore_fl.
+	 * Get/set interrupt state.  save_fl is expected to use X86_EFLAGS_IF;
+	 * all other bits returned from save_fl are undefined.
 	 *
 	 * NOTE: These functions callers expect the callee to preserve
 	 * more registers than the standard C calling convention.
 	 */
 	struct paravirt_callee_save save_fl;
-	struct paravirt_callee_save restore_fl;
 	struct paravirt_callee_save irq_disable;
 	struct paravirt_callee_save irq_enable;
 
diff --git a/arch/x86/kernel/irqflags.S b/arch/x86/kernel/irqflags.S
index 0db0375235b4..8ef35063964b 100644
--- a/arch/x86/kernel/irqflags.S
+++ b/arch/x86/kernel/irqflags.S
@@ -13,14 +13,3 @@ SYM_FUNC_START(native_save_fl)
 	ret
 SYM_FUNC_END(native_save_fl)
 EXPORT_SYMBOL(native_save_fl)
-
-/*
- * void native_restore_fl(unsigned long flags)
- * %eax/%rdi: flags
- */
-SYM_FUNC_START(native_restore_fl)
-	push %_ASM_ARG1
-	popf
-	ret
-SYM_FUNC_END(native_restore_fl)
-EXPORT_SYMBOL(native_restore_fl)
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 18560b71e717..c60222ab8ab9 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -320,7 +320,6 @@ struct paravirt_patch_template pv_ops = {
 
 	/* Irq ops. */
 	.irq.save_fl		= __PV_IS_CALLEE_SAVE(native_save_fl),
-	.irq.restore_fl		= __PV_IS_CALLEE_SAVE(native_restore_fl),
 	.irq.irq_disable	= __PV_IS_CALLEE_SAVE(native_irq_disable),
 	.irq.irq_enable		= __PV_IS_CALLEE_SAVE(native_irq_enable),
 	.irq.safe_halt		= native_safe_halt,
diff --git a/arch/x86/kernel/paravirt_patch.c b/arch/x86/kernel/paravirt_patch.c
index 2fada2c347c9..abd27ec67397 100644
--- a/arch/x86/kernel/paravirt_patch.c
+++ b/arch/x86/kernel/paravirt_patch.c
@@ -25,7 +25,6 @@ struct patch_xxl {
 	const unsigned char	mmu_read_cr2[3];
 	const unsigned char	mmu_read_cr3[3];
 	const unsigned char	mmu_write_cr3[3];
-	const unsigned char	irq_restore_fl[2];
 	const unsigned char	cpu_wbinvd[2];
 	const unsigned char	mov64[3];
 };
@@ -37,7 +36,6 @@ static const struct patch_xxl patch_data_xxl = {
 	.mmu_read_cr2		= { 0x0f, 0x20, 0xd0 },	// mov %cr2, %[re]ax
 	.mmu_read_cr3		= { 0x0f, 0x20, 0xd8 },	// mov %cr3, %[re]ax
 	.mmu_write_cr3		= { 0x0f, 0x22, 0xdf },	// mov %rdi, %cr3
-	.irq_restore_fl		= { 0x57, 0x9d },	// push %rdi; popfq
 	.cpu_wbinvd		= { 0x0f, 0x09 },	// wbinvd
 	.mov64			= { 0x48, 0x89, 0xf8 },	// mov %rdi, %rax
 };
@@ -71,7 +69,6 @@ unsigned int native_patch(u8 type, void *insn_buff, unsigned long addr,
 	switch (type) {
 
 #ifdef CONFIG_PARAVIRT_XXL
-	PATCH_CASE(irq, restore_fl, xxl, insn_buff, len);
 	PATCH_CASE(irq, save_fl, xxl, insn_buff, len);
 	PATCH_CASE(irq, irq_enable, xxl, insn_buff, len);
 	PATCH_CASE(irq, irq_disable, xxl, insn_buff, len);
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 5476423fc6d0..32b295cc2716 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -1022,8 +1022,6 @@ void __init xen_setup_vcpu_info_placement(void)
 	 */
 	if (xen_have_vcpu_info_placement) {
 		pv_ops.irq.save_fl = __PV_IS_CALLEE_SAVE(xen_save_fl_direct);
-		pv_ops.irq.restore_fl =
-			__PV_IS_CALLEE_SAVE(xen_restore_fl_direct);
 		pv_ops.irq.irq_disable =
 			__PV_IS_CALLEE_SAVE(xen_irq_disable_direct);
 		pv_ops.irq.irq_enable =
diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
index 850c93f346c7..dfa091d79c2e 100644
--- a/arch/x86/xen/irq.c
+++ b/arch/x86/xen/irq.c
@@ -42,28 +42,6 @@ asmlinkage __visible unsigned long xen_save_fl(void)
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_save_fl);
 
-__visible void xen_restore_fl(unsigned long flags)
-{
-	struct vcpu_info *vcpu;
-
-	/* convert from IF type flag */
-	flags = !(flags & X86_EFLAGS_IF);
-
-	/* See xen_irq_enable() for why preemption must be disabled. */
-	preempt_disable();
-	vcpu = this_cpu_read(xen_vcpu);
-	vcpu->evtchn_upcall_mask = flags;
-
-	if (flags == 0) {
-		barrier(); /* unmask then check (avoid races) */
-		if (unlikely(vcpu->evtchn_upcall_pending))
-			xen_force_evtchn_callback();
-		preempt_enable();
-	} else
-		preempt_enable_no_resched();
-}
-PV_CALLEE_SAVE_REGS_THUNK(xen_restore_fl);
-
 asmlinkage __visible void xen_irq_disable(void)
 {
 	/* There's a one instruction preempt window here.  We need to
@@ -118,7 +96,6 @@ static void xen_halt(void)
 
 static const struct pv_irq_ops xen_irq_ops __initconst = {
 	.save_fl = PV_CALLEE_SAVE(xen_save_fl),
-	.restore_fl = PV_CALLEE_SAVE(xen_restore_fl),
 	.irq_disable = PV_CALLEE_SAVE(xen_irq_disable),
 	.irq_enable = PV_CALLEE_SAVE(xen_irq_enable),
 
diff --git a/arch/x86/xen/xen-asm.S b/arch/x86/xen/xen-asm.S
index c0630fd9f44e..1ea7e41044b5 100644
--- a/arch/x86/xen/xen-asm.S
+++ b/arch/x86/xen/xen-asm.S
@@ -72,34 +72,6 @@ SYM_FUNC_START(xen_save_fl_direct)
 	ret
 SYM_FUNC_END(xen_save_fl_direct)
 
-
-/*
- * In principle the caller should be passing us a value return from
- * xen_save_fl_direct, but for robustness sake we test only the
- * X86_EFLAGS_IF flag rather than the whole byte. After setting the
- * interrupt mask state, it checks for unmasked pending events and
- * enters the hypervisor to get them delivered if so.
- */
-SYM_FUNC_START(xen_restore_fl_direct)
-	FRAME_BEGIN
-	testw $X86_EFLAGS_IF, %di
-	setz PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_mask
-	/*
-	 * Preempt here doesn't matter because that will deal with any
-	 * pending interrupts.  The pending check may end up being run
-	 * on the wrong CPU, but that doesn't hurt.
-	 */
-
-	/* check for unmasked and pending */
-	cmpw $0x0001, PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_pending
-	jnz 1f
-	call check_events
-1:
-	FRAME_END
-	ret
-SYM_FUNC_END(xen_restore_fl_direct)
-
-
 /*
  * Force an event check by making a hypercall, but preserve regs
  * before making the call.
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index b2fd80a01a36..8d7ec49a35fb 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -131,7 +131,6 @@ static inline void __init xen_efi_init(struct boot_params *boot_params)
 __visible void xen_irq_enable_direct(void);
 __visible void xen_irq_disable_direct(void);
 __visible unsigned long xen_save_fl_direct(void);
-__visible void xen_restore_fl_direct(unsigned long);
 
 __visible unsigned long xen_read_cr2(void);
 __visible unsigned long xen_read_cr2_direct(void);
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 09:32:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 09:32:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55778.97245 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kppdg-0006fm-LT; Thu, 17 Dec 2020 09:32:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55778.97245; Thu, 17 Dec 2020 09:32:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kppdg-0006fO-Eh; Thu, 17 Dec 2020 09:32:00 +0000
Received: by outflank-mailman (input) for mailman id 55778;
 Thu, 17 Dec 2020 09:31:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gjir=FV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kppde-0006RQ-U0
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 09:31:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 086049f7-47f3-44b6-aa81-0f5b38ce16c2;
 Thu, 17 Dec 2020 09:31:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 906B4B714;
 Thu, 17 Dec 2020 09:31:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 086049f7-47f3-44b6-aa81-0f5b38ce16c2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608197506; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=bZVylN0TrlznWFey0UjP85iwRxueujMp7mhjZsmecT4=;
	b=hByBMRgbT+XkJ9Mt7cLsI/hHSEPvITssS5RC2Blz1ZT/unuVgLjUt6TU8+pmElWYWs2JUm
	8U39hG1KQvRe1yiCXXdOp5Ind5w328J+R0eFCO5j5WwB1UVirCnHNKBgf4ryjNGOt1Ja0/
	CW7P5cTlVaBRnDgsnOGV+he1u6WdkdM=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org
Cc: Juergen Gross <jgross@suse.com>,
	Andy Lutomirski <luto@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Borislav Petkov <bp@suse.de>
Subject: [PATCH v3 03/15] x86/pv: switch SWAPGS to ALTERNATIVE
Date: Thu, 17 Dec 2020 10:31:21 +0100
Message-Id: <20201217093133.1507-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201217093133.1507-1-jgross@suse.com>
References: <20201217093133.1507-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

SWAPGS is used only for interrupts coming from user mode or for
returning to user mode. So there is no reason to use the PARAVIRT
framework, as it can easily be replaced by an ALTERNATIVE depending
on X86_FEATURE_XENPV.

There are several instances of the PV-aware SWAPGS macro in paths
which are never executed in a Xen PV guest. Replace those with the
plain swapgs instruction. The same applies to SWAPGS_UNSAFE_STACK.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Andy Lutomirski <luto@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/entry/entry_64.S             | 10 +++++-----
 arch/x86/include/asm/irqflags.h       | 20 ++++++++------------
 arch/x86/include/asm/paravirt.h       | 20 --------------------
 arch/x86/include/asm/paravirt_types.h |  2 --
 arch/x86/kernel/asm-offsets_64.c      |  1 -
 arch/x86/kernel/paravirt.c            |  1 -
 arch/x86/kernel/paravirt_patch.c      |  3 ---
 arch/x86/xen/enlighten_pv.c           |  3 ---
 8 files changed, 13 insertions(+), 47 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index cad08703c4ad..a876204a73e0 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -669,7 +669,7 @@ native_irq_return_ldt:
 	 */
 
 	pushq	%rdi				/* Stash user RDI */
-	SWAPGS					/* to kernel GS */
+	swapgs					/* to kernel GS */
 	SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi	/* to kernel CR3 */
 
 	movq	PER_CPU_VAR(espfix_waddr), %rdi
@@ -699,7 +699,7 @@ native_irq_return_ldt:
 	orq	PER_CPU_VAR(espfix_stack), %rax
 
 	SWITCH_TO_USER_CR3_STACK scratch_reg=%rdi
-	SWAPGS					/* to user GS */
+	swapgs					/* to user GS */
 	popq	%rdi				/* Restore user RDI */
 
 	movq	%rax, %rsp
@@ -943,7 +943,7 @@ SYM_CODE_START_LOCAL(paranoid_entry)
 	ret
 
 .Lparanoid_entry_swapgs:
-	SWAPGS
+	swapgs
 
 	/*
 	 * The above SAVE_AND_SWITCH_TO_KERNEL_CR3 macro doesn't do an
@@ -1001,7 +1001,7 @@ SYM_CODE_START_LOCAL(paranoid_exit)
 	jnz		restore_regs_and_return_to_kernel
 
 	/* We are returning to a context with user GSBASE */
-	SWAPGS_UNSAFE_STACK
+	swapgs
 	jmp		restore_regs_and_return_to_kernel
 SYM_CODE_END(paranoid_exit)
 
@@ -1426,7 +1426,7 @@ nmi_no_fsgsbase:
 	jnz	nmi_restore
 
 nmi_swapgs:
-	SWAPGS_UNSAFE_STACK
+	swapgs
 
 nmi_restore:
 	POP_REGS
diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
index 2dfc8d380dab..8c86edefa115 100644
--- a/arch/x86/include/asm/irqflags.h
+++ b/arch/x86/include/asm/irqflags.h
@@ -131,18 +131,6 @@ static __always_inline unsigned long arch_local_irq_save(void)
 #define SAVE_FLAGS(x)		pushfq; popq %rax
 #endif
 
-#define SWAPGS	swapgs
-/*
- * Currently paravirt can't handle swapgs nicely when we
- * don't have a stack we can rely on (such as a user space
- * stack).  So we either find a way around these or just fault
- * and emulate if a guest tries to call swapgs directly.
- *
- * Either way, this is a good way to document that we don't
- * have a reliable stack. x86_64 only.
- */
-#define SWAPGS_UNSAFE_STACK	swapgs
-
 #define INTERRUPT_RETURN	jmp native_iret
 #define USERGS_SYSRET64				\
 	swapgs;					\
@@ -170,6 +158,14 @@ static __always_inline int arch_irqs_disabled(void)
 
 	return arch_irqs_disabled_flags(flags);
 }
+#else
+#ifdef CONFIG_X86_64
+#ifdef CONFIG_XEN_PV
+#define SWAPGS	ALTERNATIVE "swapgs", "", X86_FEATURE_XENPV
+#else
+#define SWAPGS	swapgs
+#endif
+#endif
 #endif /* !__ASSEMBLY__ */
 
 #endif
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index f8dce11d2bc1..f2ebe109a37e 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -776,26 +776,6 @@ extern void default_banner(void);
 
 #ifdef CONFIG_X86_64
 #ifdef CONFIG_PARAVIRT_XXL
-/*
- * If swapgs is used while the userspace stack is still current,
- * there's no way to call a pvop.  The PV replacement *must* be
- * inlined, or the swapgs instruction must be trapped and emulated.
- */
-#define SWAPGS_UNSAFE_STACK						\
-	PARA_SITE(PARA_PATCH(PV_CPU_swapgs), swapgs)
-
-/*
- * Note: swapgs is very special, and in practise is either going to be
- * implemented with a single "swapgs" instruction or something very
- * special.  Either way, we don't need to save any registers for
- * it.
- */
-#define SWAPGS								\
-	PARA_SITE(PARA_PATCH(PV_CPU_swapgs),				\
-		  ANNOTATE_RETPOLINE_SAFE;				\
-		  call PARA_INDIRECT(pv_ops+PV_CPU_swapgs);		\
-		 )
-
 #define USERGS_SYSRET64							\
 	PARA_SITE(PARA_PATCH(PV_CPU_usergs_sysret64),			\
 		  ANNOTATE_RETPOLINE_SAFE;				\
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index b6b02b7c19cc..130f428b0cc8 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -168,8 +168,6 @@ struct pv_cpu_ops {
 	   frame set up. */
 	void (*iret)(void);
 
-	void (*swapgs)(void);
-
 	void (*start_context_switch)(struct task_struct *prev);
 	void (*end_context_switch)(struct task_struct *next);
 #endif
diff --git a/arch/x86/kernel/asm-offsets_64.c b/arch/x86/kernel/asm-offsets_64.c
index 828be792231e..1354bc30614d 100644
--- a/arch/x86/kernel/asm-offsets_64.c
+++ b/arch/x86/kernel/asm-offsets_64.c
@@ -15,7 +15,6 @@ int main(void)
 #ifdef CONFIG_PARAVIRT_XXL
 	OFFSET(PV_CPU_usergs_sysret64, paravirt_patch_template,
 	       cpu.usergs_sysret64);
-	OFFSET(PV_CPU_swapgs, paravirt_patch_template, cpu.swapgs);
 #ifdef CONFIG_DEBUG_ENTRY
 	OFFSET(PV_IRQ_save_fl, paravirt_patch_template, irq.save_fl);
 #endif
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 6c3407ba6ee9..5e5fcf5c376d 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -312,7 +312,6 @@ struct paravirt_patch_template pv_ops = {
 
 	.cpu.usergs_sysret64	= native_usergs_sysret64,
 	.cpu.iret		= native_iret,
-	.cpu.swapgs		= native_swapgs,
 
 #ifdef CONFIG_X86_IOPL_IOPERM
 	.cpu.invalidate_io_bitmap	= native_tss_invalidate_io_bitmap,
diff --git a/arch/x86/kernel/paravirt_patch.c b/arch/x86/kernel/paravirt_patch.c
index ace6e334cb39..7c518b08aa3c 100644
--- a/arch/x86/kernel/paravirt_patch.c
+++ b/arch/x86/kernel/paravirt_patch.c
@@ -28,7 +28,6 @@ struct patch_xxl {
 	const unsigned char	irq_restore_fl[2];
 	const unsigned char	cpu_wbinvd[2];
 	const unsigned char	cpu_usergs_sysret64[6];
-	const unsigned char	cpu_swapgs[3];
 	const unsigned char	mov64[3];
 };
 
@@ -43,7 +42,6 @@ static const struct patch_xxl patch_data_xxl = {
 	.cpu_wbinvd		= { 0x0f, 0x09 },	// wbinvd
 	.cpu_usergs_sysret64	= { 0x0f, 0x01, 0xf8,
 				    0x48, 0x0f, 0x07 },	// swapgs; sysretq
-	.cpu_swapgs		= { 0x0f, 0x01, 0xf8 },	// swapgs
 	.mov64			= { 0x48, 0x89, 0xf8 },	// mov %rdi, %rax
 };
 
@@ -86,7 +84,6 @@ unsigned int native_patch(u8 type, void *insn_buff, unsigned long addr,
 	PATCH_CASE(mmu, write_cr3, xxl, insn_buff, len);
 
 	PATCH_CASE(cpu, usergs_sysret64, xxl, insn_buff, len);
-	PATCH_CASE(cpu, swapgs, xxl, insn_buff, len);
 	PATCH_CASE(cpu, wbinvd, xxl, insn_buff, len);
 #endif
 
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 76616024129e..44bb18adfb51 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -1085,9 +1085,6 @@ static const struct pv_cpu_ops xen_cpu_ops __initconst = {
 #endif
 	.io_delay = xen_io_delay,
 
-	/* Xen takes care of %gs when switching to usermode for us */
-	.swapgs = paravirt_nop,
-
 	.start_context_switch = paravirt_start_context_switch,
 	.end_context_switch = xen_end_context_switch,
 };
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 09:32:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 09:32:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55779.97259 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kppdk-0006ly-06; Thu, 17 Dec 2020 09:32:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55779.97259; Thu, 17 Dec 2020 09:32:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kppdj-0006li-Pw; Thu, 17 Dec 2020 09:32:03 +0000
Received: by outflank-mailman (input) for mailman id 55779;
 Thu, 17 Dec 2020 09:32:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gjir=FV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kppdi-0006R7-Li
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 09:32:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c8b57350-d259-40cd-8b6c-5f35f7fde69d;
 Thu, 17 Dec 2020 09:31:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0C5B1B71E;
 Thu, 17 Dec 2020 09:31:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c8b57350-d259-40cd-8b6c-5f35f7fde69d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608197509; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=krzYKKZ6g2vq1rvt7bfI6SRgvPLaGsiufG7X/rsRMo8=;
	b=pwPtxyy2poL6EYi7HW3ugjJSE162fXp9iMFusfTAXqqk+neOimaWa+Px30E+z0YeWBShjY
	G+ZO6LE1yrDFl1JBPNJTPH4PzSYzFoPzCJk7xJh5ZDSfW+UaqCz/B24t0h035dFTHzmate
	pROLqcc9WGbfSwVo6PbSPOcb2+V/EuI=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: [PATCH v3 10/15] x86/paravirt: simplify paravirt macros
Date: Thu, 17 Dec 2020 10:31:28 +0100
Message-Id: <20201217093133.1507-11-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201217093133.1507-1-jgross@suse.com>
References: <20201217093133.1507-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The central pvops call macros ____PVOP_CALL() and ____PVOP_VCALL() now
look very similar.

The main differences are the use of PVOP_VCALL_ARGS versus
PVOP_CALL_ARGS, which are identical, and the handling of the return
value.

So drop PVOP_VCALL_ARGS and, instead of ____PVOP_VCALL(), just use
(void)____PVOP_CALL(long, ...).

Note that it isn't easily possible to just redefine ____PVOP_VCALL()
to use ____PVOP_CALL() instead, as this would require further hiding of
commas in macro parameters.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- new patch
---
 arch/x86/include/asm/paravirt_types.h | 25 ++++---------------------
 1 file changed, 4 insertions(+), 21 deletions(-)

diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 42f9eef84131..a9efd4dad820 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -408,11 +408,9 @@ int paravirt_disable_iospace(void);
  * makes sure the incoming and outgoing types are always correct.
  */
 #ifdef CONFIG_X86_32
-#define PVOP_VCALL_ARGS							\
+#define PVOP_CALL_ARGS							\
 	unsigned long __eax = __eax, __edx = __edx, __ecx = __ecx;
 
-#define PVOP_CALL_ARGS			PVOP_VCALL_ARGS
-
 #define PVOP_CALL_ARG1(x)		"a" ((unsigned long)(x))
 #define PVOP_CALL_ARG2(x)		"d" ((unsigned long)(x))
 #define PVOP_CALL_ARG3(x)		"c" ((unsigned long)(x))
@@ -428,12 +426,10 @@ int paravirt_disable_iospace(void);
 #define VEXTRA_CLOBBERS
 #else  /* CONFIG_X86_64 */
 /* [re]ax isn't an arg, but the return val */
-#define PVOP_VCALL_ARGS						\
+#define PVOP_CALL_ARGS						\
 	unsigned long __edi = __edi, __esi = __esi,		\
 		__edx = __edx, __ecx = __ecx, __eax = __eax;
 
-#define PVOP_CALL_ARGS		PVOP_VCALL_ARGS
-
 #define PVOP_CALL_ARG1(x)		"D" ((unsigned long)(x))
 #define PVOP_CALL_ARG2(x)		"S" ((unsigned long)(x))
 #define PVOP_CALL_ARG3(x)		"d" ((unsigned long)(x))
@@ -492,25 +488,12 @@ int paravirt_disable_iospace(void);
 	____PVOP_CALL(rettype, op.func, CLBR_RET_REG,			\
 		      PVOP_CALLEE_CLOBBERS, , ##__VA_ARGS__)
 
-
-#define ____PVOP_VCALL(op, clbr, call_clbr, extra_clbr, ...)		\
-	({								\
-		PVOP_VCALL_ARGS;					\
-		PVOP_TEST_NULL(op);					\
-		asm volatile(paravirt_alt(PARAVIRT_CALL)		\
-			     : call_clbr, ASM_CALL_CONSTRAINT		\
-			     : paravirt_type(op),			\
-			       paravirt_clobber(clbr),			\
-			       ##__VA_ARGS__				\
-			     : "memory", "cc" extra_clbr);		\
-	})
-
 #define __PVOP_VCALL(op, ...)						\
-	____PVOP_VCALL(op, CLBR_ANY, PVOP_VCALL_CLOBBERS,		\
+	(void)____PVOP_CALL(long, op, CLBR_ANY, PVOP_VCALL_CLOBBERS,	\
 		       VEXTRA_CLOBBERS, ##__VA_ARGS__)
 
 #define __PVOP_VCALLEESAVE(op, ...)					\
-	____PVOP_VCALL(op.func, CLBR_RET_REG,				\
+	(void)____PVOP_CALL(long, op.func, CLBR_RET_REG,		\
 		      PVOP_VCALLEE_CLOBBERS, , ##__VA_ARGS__)
 
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 09:32:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 09:32:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55780.97271 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kppdl-0006pH-G4; Thu, 17 Dec 2020 09:32:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55780.97271; Thu, 17 Dec 2020 09:32:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kppdl-0006p4-8w; Thu, 17 Dec 2020 09:32:05 +0000
Received: by outflank-mailman (input) for mailman id 55780;
 Thu, 17 Dec 2020 09:32:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gjir=FV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kppdj-0006RQ-UI
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 09:32:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dbeebc3e-71eb-4ca9-a902-e6f77d8d6f74;
 Thu, 17 Dec 2020 09:31:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0861FB718;
 Thu, 17 Dec 2020 09:31:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dbeebc3e-71eb-4ca9-a902-e6f77d8d6f74
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608197508; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=j6gxliSCsh3WKILmxj4RJn4SpdlW/0yI/QevTH0przg=;
	b=kwgvz+7bk8SUbt9TMo5E0AtpcQKuGkGIdhTz/B272guohLREgzNYP6Uy2FZyXZ1hNFXD4C
	ehDwVXEgZ1cNADdlNf25z7q0dWGCZYTw4BBwg1bvjOAkRFRxsHc5o2QdQvPxSnFF6jzzgz
	qLKKeB1GzE4cCsH8sAFFQPrx9N7v4qw=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org,
	linux-hyperv@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	kvm@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>,
	Joerg Roedel <joro@8bytes.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Daniel Lezcano <daniel.lezcano@linaro.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>,
	Mel Gorman <mgorman@suse.de>,
	Daniel Bristot de Oliveira <bristot@redhat.com>
Subject: [PATCH v3 06/15] x86/paravirt: switch time pvops functions to use static_call()
Date: Thu, 17 Dec 2020 10:31:24 +0100
Message-Id: <20201217093133.1507-7-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201217093133.1507-1-jgross@suse.com>
References: <20201217093133.1507-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The time pvops functions are the only ones left which might be
used in 32-bit mode and which return a 64-bit value.

Switch them to use the static_call() mechanism instead of pvops, as
this allows considerable simplification of the pvops implementation.

Due to include hell, this requires splitting the time interfaces out
into a new header file.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/Kconfig                      |  1 +
 arch/x86/include/asm/mshyperv.h       | 11 --------
 arch/x86/include/asm/paravirt.h       | 14 ----------
 arch/x86/include/asm/paravirt_time.h  | 38 +++++++++++++++++++++++++++
 arch/x86/include/asm/paravirt_types.h |  6 -----
 arch/x86/kernel/cpu/vmware.c          |  5 ++--
 arch/x86/kernel/kvm.c                 |  3 ++-
 arch/x86/kernel/kvmclock.c            |  3 ++-
 arch/x86/kernel/paravirt.c            | 16 ++++++++---
 arch/x86/kernel/tsc.c                 |  3 ++-
 arch/x86/xen/time.c                   | 12 ++++-----
 drivers/clocksource/hyperv_timer.c    |  5 ++--
 drivers/xen/time.c                    |  3 ++-
 kernel/sched/sched.h                  |  1 +
 14 files changed, 71 insertions(+), 50 deletions(-)
 create mode 100644 arch/x86/include/asm/paravirt_time.h

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index a8bd298e45b1..ebabd8bf4064 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -769,6 +769,7 @@ if HYPERVISOR_GUEST
 
 config PARAVIRT
 	bool "Enable paravirtualization code"
+	depends on HAVE_STATIC_CALL
 	help
 	  This changes the kernel so it can modify itself when it is run
 	  under a hypervisor, potentially improving performance significantly
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index ffc289992d1b..45942d420626 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -56,17 +56,6 @@ typedef int (*hyperv_fill_flush_list_func)(
 #define hv_get_raw_timer() rdtsc_ordered()
 #define hv_get_vector() HYPERVISOR_CALLBACK_VECTOR
 
-/*
- * Reference to pv_ops must be inline so objtool
- * detection of noinstr violations can work correctly.
- */
-static __always_inline void hv_setup_sched_clock(void *sched_clock)
-{
-#ifdef CONFIG_PARAVIRT
-	pv_ops.time.sched_clock = sched_clock;
-#endif
-}
-
 void hyperv_vector_handler(struct pt_regs *regs);
 
 static inline void hv_enable_stimer0_percpu_irq(int irq) {}
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 4abf110e2243..0785a9686e32 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -17,25 +17,11 @@
 #include <linux/cpumask.h>
 #include <asm/frame.h>
 
-static inline unsigned long long paravirt_sched_clock(void)
-{
-	return PVOP_CALL0(unsigned long long, time.sched_clock);
-}
-
-struct static_key;
-extern struct static_key paravirt_steal_enabled;
-extern struct static_key paravirt_steal_rq_enabled;
-
 __visible void __native_queued_spin_unlock(struct qspinlock *lock);
 bool pv_is_native_spin_unlock(void);
 __visible bool __native_vcpu_is_preempted(long cpu);
 bool pv_is_native_vcpu_is_preempted(void);
 
-static inline u64 paravirt_steal_clock(int cpu)
-{
-	return PVOP_CALL1(u64, time.steal_clock, cpu);
-}
-
 /* The paravirtualized I/O functions */
 static inline void slow_down_io(void)
 {
diff --git a/arch/x86/include/asm/paravirt_time.h b/arch/x86/include/asm/paravirt_time.h
new file mode 100644
index 000000000000..76cf94b7c899
--- /dev/null
+++ b/arch/x86/include/asm/paravirt_time.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_X86_PARAVIRT_TIME_H
+#define _ASM_X86_PARAVIRT_TIME_H
+
+/* Time related para-virtualized functions. */
+
+#ifdef CONFIG_PARAVIRT
+
+#include <linux/types.h>
+#include <linux/jump_label.h>
+#include <linux/static_call.h>
+
+extern struct static_key paravirt_steal_enabled;
+extern struct static_key paravirt_steal_rq_enabled;
+
+u64 dummy_steal_clock(int cpu);
+u64 dummy_sched_clock(void);
+
+DECLARE_STATIC_CALL(pv_steal_clock, dummy_steal_clock);
+DECLARE_STATIC_CALL(pv_sched_clock, dummy_sched_clock);
+
+extern bool paravirt_using_native_sched_clock;
+
+void paravirt_set_sched_clock(u64 (*func)(void));
+
+static inline u64 paravirt_sched_clock(void)
+{
+	return static_call(pv_sched_clock)();
+}
+
+static inline u64 paravirt_steal_clock(int cpu)
+{
+	return static_call(pv_steal_clock)(cpu);
+}
+
+#endif /* CONFIG_PARAVIRT */
+
+#endif /* _ASM_X86_PARAVIRT_TIME_H */
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index de87087d3bde..1fff349e4792 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -95,11 +95,6 @@ struct pv_lazy_ops {
 } __no_randomize_layout;
 #endif
 
-struct pv_time_ops {
-	unsigned long long (*sched_clock)(void);
-	unsigned long long (*steal_clock)(int cpu);
-} __no_randomize_layout;
-
 struct pv_cpu_ops {
 	/* hooks for various privileged instructions */
 	void (*io_delay)(void);
@@ -291,7 +286,6 @@ struct pv_lock_ops {
  * what to patch. */
 struct paravirt_patch_template {
 	struct pv_init_ops	init;
-	struct pv_time_ops	time;
 	struct pv_cpu_ops	cpu;
 	struct pv_irq_ops	irq;
 	struct pv_mmu_ops	mmu;
diff --git a/arch/x86/kernel/cpu/vmware.c b/arch/x86/kernel/cpu/vmware.c
index 924571fe5864..f265426a1c3e 100644
--- a/arch/x86/kernel/cpu/vmware.c
+++ b/arch/x86/kernel/cpu/vmware.c
@@ -34,6 +34,7 @@
 #include <asm/apic.h>
 #include <asm/vmware.h>
 #include <asm/svm.h>
+#include <asm/paravirt_time.h>
 
 #undef pr_fmt
 #define pr_fmt(fmt)	"vmware: " fmt
@@ -336,11 +337,11 @@ static void __init vmware_paravirt_ops_setup(void)
 	vmware_cyc2ns_setup();
 
 	if (vmw_sched_clock)
-		pv_ops.time.sched_clock = vmware_sched_clock;
+		paravirt_set_sched_clock(vmware_sched_clock);
 
 	if (vmware_is_stealclock_available()) {
 		has_steal_clock = true;
-		pv_ops.time.steal_clock = vmware_steal_clock;
+		static_call_update(pv_steal_clock, vmware_steal_clock);
 
 		/* We use reboot notifier only to disable steal clock */
 		register_reboot_notifier(&vmware_pv_reboot_nb);
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 5e78e01ca3b4..2aa1b9e33f61 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -38,6 +38,7 @@
 #include <asm/cpuidle_haltpoll.h>
 #include <asm/ptrace.h>
 #include <asm/svm.h>
+#include <asm/paravirt_time.h>
 
 DEFINE_STATIC_KEY_FALSE(kvm_async_pf_enabled);
 
@@ -650,7 +651,7 @@ static void __init kvm_guest_init(void)
 
 	if (kvm_para_has_feature(KVM_FEATURE_STEAL_TIME)) {
 		has_steal_clock = 1;
-		pv_ops.time.steal_clock = kvm_steal_clock;
+		static_call_update(pv_steal_clock, kvm_steal_clock);
 	}
 
 	if (pv_tlb_flush_supported()) {
diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
index 34b18f6eeb2c..02f60ee16f10 100644
--- a/arch/x86/kernel/kvmclock.c
+++ b/arch/x86/kernel/kvmclock.c
@@ -22,6 +22,7 @@
 #include <asm/x86_init.h>
 #include <asm/reboot.h>
 #include <asm/kvmclock.h>
+#include <asm/paravirt_time.h>
 
 static int kvmclock __initdata = 1;
 static int kvmclock_vsyscall __initdata = 1;
@@ -107,7 +108,7 @@ static inline void kvm_sched_clock_init(bool stable)
 	if (!stable)
 		clear_sched_clock_stable();
 	kvm_sched_clock_offset = kvm_clock_read();
-	pv_ops.time.sched_clock = kvm_sched_clock_read;
+	paravirt_set_sched_clock(kvm_sched_clock_read);
 
 	pr_info("kvm-clock: using sched offset of %llu cycles",
 		kvm_sched_clock_offset);
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index c60222ab8ab9..9f8aa18aa378 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -31,6 +31,7 @@
 #include <asm/special_insns.h>
 #include <asm/tlb.h>
 #include <asm/io_bitmap.h>
+#include <asm/paravirt_time.h>
 
 /*
  * nop stub, which must not clobber anything *including the stack* to
@@ -167,6 +168,17 @@ static u64 native_steal_clock(int cpu)
 	return 0;
 }
 
+DEFINE_STATIC_CALL(pv_steal_clock, native_steal_clock);
+DEFINE_STATIC_CALL(pv_sched_clock, native_sched_clock);
+
+bool paravirt_using_native_sched_clock = true;
+
+void paravirt_set_sched_clock(u64 (*func)(void))
+{
+	static_call_update(pv_sched_clock, func);
+	paravirt_using_native_sched_clock = (func == native_sched_clock);
+}
+
 /* These are in entry.S */
 extern void native_iret(void);
 
@@ -272,10 +284,6 @@ struct paravirt_patch_template pv_ops = {
 	/* Init ops. */
 	.init.patch		= native_patch,
 
-	/* Time ops. */
-	.time.sched_clock	= native_sched_clock,
-	.time.steal_clock	= native_steal_clock,
-
 	/* Cpu ops. */
 	.cpu.io_delay		= native_io_delay,
 
diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index f70dffc2771f..d01245b770de 100644
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -28,6 +28,7 @@
 #include <asm/intel-family.h>
 #include <asm/i8259.h>
 #include <asm/uv/uv.h>
+#include <asm/paravirt_time.h>
 
 unsigned int __read_mostly cpu_khz;	/* TSC clocks / usec, not used here */
 EXPORT_SYMBOL(cpu_khz);
@@ -254,7 +255,7 @@ unsigned long long sched_clock(void)
 
 bool using_native_sched_clock(void)
 {
-	return pv_ops.time.sched_clock == native_sched_clock;
+	return paravirt_using_native_sched_clock;
 }
 #else
 unsigned long long
diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
index 91f5b330dcc6..17e62f4f69a9 100644
--- a/arch/x86/xen/time.c
+++ b/arch/x86/xen/time.c
@@ -18,6 +18,7 @@
 #include <linux/timekeeper_internal.h>
 
 #include <asm/pvclock.h>
+#include <asm/paravirt_time.h>
 #include <asm/xen/hypervisor.h>
 #include <asm/xen/hypercall.h>
 
@@ -379,11 +380,6 @@ void xen_timer_resume(void)
 	}
 }
 
-static const struct pv_time_ops xen_time_ops __initconst = {
-	.sched_clock = xen_sched_clock,
-	.steal_clock = xen_steal_clock,
-};
-
 static struct pvclock_vsyscall_time_info *xen_clock __read_mostly;
 static u64 xen_clock_value_saved;
 
@@ -528,7 +524,8 @@ static void __init xen_time_init(void)
 void __init xen_init_time_ops(void)
 {
 	xen_sched_clock_offset = xen_clocksource_read();
-	pv_ops.time = xen_time_ops;
+	static_call_update(pv_steal_clock, xen_steal_clock);
+	paravirt_set_sched_clock(xen_sched_clock);
 
 	x86_init.timers.timer_init = xen_time_init;
 	x86_init.timers.setup_percpu_clockev = x86_init_noop;
@@ -570,7 +567,8 @@ void __init xen_hvm_init_time_ops(void)
 	}
 
 	xen_sched_clock_offset = xen_clocksource_read();
-	pv_ops.time = xen_time_ops;
+	static_call_update(pv_steal_clock, xen_steal_clock);
+	paravirt_set_sched_clock(xen_sched_clock);
 	x86_init.timers.setup_percpu_clockev = xen_time_init;
 	x86_cpuinit.setup_percpu_clockev = xen_hvm_setup_cpu_clockevents;
 
diff --git a/drivers/clocksource/hyperv_timer.c b/drivers/clocksource/hyperv_timer.c
index ba04cb381cd3..1ed79993fc50 100644
--- a/drivers/clocksource/hyperv_timer.c
+++ b/drivers/clocksource/hyperv_timer.c
@@ -21,6 +21,7 @@
 #include <clocksource/hyperv_timer.h>
 #include <asm/hyperv-tlfs.h>
 #include <asm/mshyperv.h>
+#include <asm/paravirt_time.h>
 
 static struct clock_event_device __percpu *hv_clock_event;
 static u64 hv_sched_clock_offset __ro_after_init;
@@ -445,7 +446,7 @@ static bool __init hv_init_tsc_clocksource(void)
 	clocksource_register_hz(&hyperv_cs_tsc, NSEC_PER_SEC/100);
 
 	hv_sched_clock_offset = hv_read_reference_counter();
-	hv_setup_sched_clock(read_hv_sched_clock_tsc);
+	paravirt_set_sched_clock(read_hv_sched_clock_tsc);
 
 	return true;
 }
@@ -470,6 +471,6 @@ void __init hv_init_clocksource(void)
 	clocksource_register_hz(&hyperv_cs_msr, NSEC_PER_SEC/100);
 
 	hv_sched_clock_offset = hv_read_reference_counter();
-	hv_setup_sched_clock(read_hv_sched_clock_msr);
+	static_call_update(pv_sched_clock, read_hv_sched_clock_msr);
 }
 EXPORT_SYMBOL_GPL(hv_init_clocksource);
diff --git a/drivers/xen/time.c b/drivers/xen/time.c
index 108edbcbc040..b35ce88427c9 100644
--- a/drivers/xen/time.c
+++ b/drivers/xen/time.c
@@ -9,6 +9,7 @@
 #include <linux/slab.h>
 
 #include <asm/paravirt.h>
+#include <asm/paravirt_time.h>
 #include <asm/xen/hypervisor.h>
 #include <asm/xen/hypercall.h>
 
@@ -175,7 +176,7 @@ void __init xen_time_setup_guest(void)
 	xen_runstate_remote = !HYPERVISOR_vm_assist(VMASST_CMD_enable,
 					VMASST_TYPE_runstate_update_flag);
 
-	pv_ops.time.steal_clock = xen_steal_clock;
+	static_call_update(pv_steal_clock, xen_steal_clock);
 
 	static_key_slow_inc(&paravirt_steal_enabled);
 	if (xen_runstate_remote)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index f5acb6c5ce49..08427b9c11fd 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -70,6 +70,7 @@
 
 #ifdef CONFIG_PARAVIRT
 # include <asm/paravirt.h>
+# include <asm/paravirt_time.h>
 #endif
 
 #include "cpupri.h"
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 09:32:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 09:32:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55783.97283 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kppdo-0006vZ-Cc; Thu, 17 Dec 2020 09:32:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55783.97283; Thu, 17 Dec 2020 09:32:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kppdo-0006vP-8E; Thu, 17 Dec 2020 09:32:08 +0000
Received: by outflank-mailman (input) for mailman id 55783;
 Thu, 17 Dec 2020 09:32:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gjir=FV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kppdn-0006R7-Lf
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 09:32:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 08e1002c-1486-4f12-b99a-9b02559b8b30;
 Thu, 17 Dec 2020 09:31:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B99DAB71D;
 Thu, 17 Dec 2020 09:31:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 08e1002c-1486-4f12-b99a-9b02559b8b30
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608197508; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=tw17vw9qDinK072Gqjp0wECv525qejhJlbKr8xzx2Nk=;
	b=IkLEuniFx1QPiIU+XZSiMA+xhU+PYa4LNmxqu+d0Dp3gONeD422EIm3Ft/rSuD2mziVA/x
	556GbUzg7wqElUMLCzOAT8wUbuVLzbSWRxd9ELZLwU0jivC1i/u1pKz9Sk/nDa5SIXfd/L
	8yMmfCHhZP8HlRN/e/oQ2E3/wRBr62I=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org
Cc: Juergen Gross <jgross@suse.com>,
	Andy Lutomirski <luto@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>
Subject: [PATCH v3 09/15] x86/paravirt: remove no longer needed 32-bit pvops cruft
Date: Thu, 17 Dec 2020 10:31:27 +0100
Message-Id: <20201217093133.1507-10-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201217093133.1507-1-jgross@suse.com>
References: <20201217093133.1507-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

PVOP_VCALL4() is only used for Xen PV, while PVOP_CALL4() isn't used
at all. Keep PVOP_CALL4() for 64-bit for symmetry reasons.

This allows removing the 32-bit definitions of those macros, leading
to a substantial simplification of the paravirt macros, as those were
the only ones needing non-empty "pre" and "post" parameters.

PVOP_CALLEE2() and PVOP_VCALLEE2() are not used anywhere, so remove
them.

The special handling of return types larger than unsigned long is no
longer needed either. Replace it with a BUILD_BUG_ON().

DISABLE_INTERRUPTS() is used in 32-bit code only, so it can just be
replaced by cli.

INTERRUPT_RETURN in 32-bit code can be replaced by iret.

ENABLE_INTERRUPTS() is not used anywhere, so it can be removed.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/entry/entry_32.S             |   4 +-
 arch/x86/include/asm/irqflags.h       |   5 --
 arch/x86/include/asm/paravirt.h       |  35 +-------
 arch/x86/include/asm/paravirt_types.h | 112 ++++++++------------------
 arch/x86/kernel/asm-offsets.c         |   2 -
 5 files changed, 35 insertions(+), 123 deletions(-)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index df8c017e6161..765487e57d6e 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -430,7 +430,7 @@
 	 * will soon execute iret and the tracer was already set to
 	 * the irqstate after the IRET:
 	 */
-	DISABLE_INTERRUPTS(CLBR_ANY)
+	cli
 	lss	(%esp), %esp			/* switch to espfix segment */
 .Lend_\@:
 #endif /* CONFIG_X86_ESPFIX32 */
@@ -1077,7 +1077,7 @@ restore_all_switch_stack:
 	 * when returning from IPI handler and when returning from
 	 * scheduler to user-space.
 	 */
-	INTERRUPT_RETURN
+	iret
 
 .section .fixup, "ax"
 SYM_CODE_START(asm_iret_error)
diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
index 144d70ea4393..a0efbcd24b86 100644
--- a/arch/x86/include/asm/irqflags.h
+++ b/arch/x86/include/asm/irqflags.h
@@ -109,9 +109,6 @@ static __always_inline unsigned long arch_local_irq_save(void)
 }
 #else
 
-#define ENABLE_INTERRUPTS(x)	sti
-#define DISABLE_INTERRUPTS(x)	cli
-
 #ifdef CONFIG_X86_64
 #ifdef CONFIG_DEBUG_ENTRY
 #define SAVE_FLAGS(x)		pushfq; popq %rax
@@ -119,8 +116,6 @@ static __always_inline unsigned long arch_local_irq_save(void)
 
 #define INTERRUPT_RETURN	jmp native_iret
 
-#else
-#define INTERRUPT_RETURN		iret
 #endif
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 0785a9686e32..1dd30c95505d 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -692,6 +692,7 @@ extern void default_banner(void);
 	.if ((~(set)) & mask); pop %reg; .endif
 
 #ifdef CONFIG_X86_64
+#ifdef CONFIG_PARAVIRT_XXL
 
 #define PV_SAVE_REGS(set)			\
 	COND_PUSH(set, CLBR_RAX, rax);		\
@@ -717,46 +718,12 @@ extern void default_banner(void);
 #define PARA_PATCH(off)		((off) / 8)
 #define PARA_SITE(ptype, ops)	_PVSITE(ptype, ops, .quad, 8)
 #define PARA_INDIRECT(addr)	*addr(%rip)
-#else
-#define PV_SAVE_REGS(set)			\
-	COND_PUSH(set, CLBR_EAX, eax);		\
-	COND_PUSH(set, CLBR_EDI, edi);		\
-	COND_PUSH(set, CLBR_ECX, ecx);		\
-	COND_PUSH(set, CLBR_EDX, edx)
-#define PV_RESTORE_REGS(set)			\
-	COND_POP(set, CLBR_EDX, edx);		\
-	COND_POP(set, CLBR_ECX, ecx);		\
-	COND_POP(set, CLBR_EDI, edi);		\
-	COND_POP(set, CLBR_EAX, eax)
-
-#define PARA_PATCH(off)		((off) / 4)
-#define PARA_SITE(ptype, ops)	_PVSITE(ptype, ops, .long, 4)
-#define PARA_INDIRECT(addr)	*%cs:addr
-#endif
 
-#ifdef CONFIG_PARAVIRT_XXL
 #define INTERRUPT_RETURN						\
 	PARA_SITE(PARA_PATCH(PV_CPU_iret),				\
 		  ANNOTATE_RETPOLINE_SAFE;				\
 		  jmp PARA_INDIRECT(pv_ops+PV_CPU_iret);)
 
-#define DISABLE_INTERRUPTS(clobbers)					\
-	PARA_SITE(PARA_PATCH(PV_IRQ_irq_disable),			\
-		  PV_SAVE_REGS(clobbers | CLBR_CALLEE_SAVE);		\
-		  ANNOTATE_RETPOLINE_SAFE;				\
-		  call PARA_INDIRECT(pv_ops+PV_IRQ_irq_disable);	\
-		  PV_RESTORE_REGS(clobbers | CLBR_CALLEE_SAVE);)
-
-#define ENABLE_INTERRUPTS(clobbers)					\
-	PARA_SITE(PARA_PATCH(PV_IRQ_irq_enable),			\
-		  PV_SAVE_REGS(clobbers | CLBR_CALLEE_SAVE);		\
-		  ANNOTATE_RETPOLINE_SAFE;				\
-		  call PARA_INDIRECT(pv_ops+PV_IRQ_irq_enable);		\
-		  PV_RESTORE_REGS(clobbers | CLBR_CALLEE_SAVE);)
-#endif
-
-#ifdef CONFIG_X86_64
-#ifdef CONFIG_PARAVIRT_XXL
 #ifdef CONFIG_DEBUG_ENTRY
 #define SAVE_FLAGS(clobbers)                                        \
 	PARA_SITE(PARA_PATCH(PV_IRQ_save_fl),			    \
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 1fff349e4792..42f9eef84131 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -470,55 +470,34 @@ int paravirt_disable_iospace(void);
 	})
 
 
-#define ____PVOP_CALL(rettype, op, clbr, call_clbr, extra_clbr,		\
-		      pre, post, ...)					\
+#define ____PVOP_CALL(rettype, op, clbr, call_clbr, extra_clbr, ...)	\
 	({								\
-		rettype __ret;						\
 		PVOP_CALL_ARGS;						\
 		PVOP_TEST_NULL(op);					\
-		/* This is 32-bit specific, but is okay in 64-bit */	\
-		/* since this condition will never hold */		\
-		if (sizeof(rettype) > sizeof(unsigned long)) {		\
-			asm volatile(pre				\
-				     paravirt_alt(PARAVIRT_CALL)	\
-				     post				\
-				     : call_clbr, ASM_CALL_CONSTRAINT	\
-				     : paravirt_type(op),		\
-				       paravirt_clobber(clbr),		\
-				       ##__VA_ARGS__			\
-				     : "memory", "cc" extra_clbr);	\
-			__ret = (rettype)((((u64)__edx) << 32) | __eax); \
-		} else {						\
-			asm volatile(pre				\
-				     paravirt_alt(PARAVIRT_CALL)	\
-				     post				\
-				     : call_clbr, ASM_CALL_CONSTRAINT	\
-				     : paravirt_type(op),		\
-				       paravirt_clobber(clbr),		\
-				       ##__VA_ARGS__			\
-				     : "memory", "cc" extra_clbr);	\
-			__ret = (rettype)(__eax & PVOP_RETMASK(rettype));	\
-		}							\
-		__ret;							\
+		BUILD_BUG_ON(sizeof(rettype) > sizeof(unsigned long));	\
+		asm volatile(paravirt_alt(PARAVIRT_CALL)		\
+			     : call_clbr, ASM_CALL_CONSTRAINT		\
+			     : paravirt_type(op),			\
+			       paravirt_clobber(clbr),			\
+			       ##__VA_ARGS__				\
+			     : "memory", "cc" extra_clbr);		\
+		(rettype)(__eax & PVOP_RETMASK(rettype));		\
 	})
 
-#define __PVOP_CALL(rettype, op, pre, post, ...)			\
+#define __PVOP_CALL(rettype, op, ...)					\
 	____PVOP_CALL(rettype, op, CLBR_ANY, PVOP_CALL_CLOBBERS,	\
-		      EXTRA_CLOBBERS, pre, post, ##__VA_ARGS__)
+		      EXTRA_CLOBBERS, ##__VA_ARGS__)
 
-#define __PVOP_CALLEESAVE(rettype, op, pre, post, ...)			\
+#define __PVOP_CALLEESAVE(rettype, op, ...)				\
 	____PVOP_CALL(rettype, op.func, CLBR_RET_REG,			\
-		      PVOP_CALLEE_CLOBBERS, ,				\
-		      pre, post, ##__VA_ARGS__)
+		      PVOP_CALLEE_CLOBBERS, , ##__VA_ARGS__)
 
 
-#define ____PVOP_VCALL(op, clbr, call_clbr, extra_clbr, pre, post, ...)	\
+#define ____PVOP_VCALL(op, clbr, call_clbr, extra_clbr, ...)		\
 	({								\
 		PVOP_VCALL_ARGS;					\
 		PVOP_TEST_NULL(op);					\
-		asm volatile(pre					\
-			     paravirt_alt(PARAVIRT_CALL)		\
-			     post					\
+		asm volatile(paravirt_alt(PARAVIRT_CALL)		\
 			     : call_clbr, ASM_CALL_CONSTRAINT		\
 			     : paravirt_type(op),			\
 			       paravirt_clobber(clbr),			\
@@ -526,84 +505,57 @@ int paravirt_disable_iospace(void);
 			     : "memory", "cc" extra_clbr);		\
 	})
 
-#define __PVOP_VCALL(op, pre, post, ...)				\
+#define __PVOP_VCALL(op, ...)						\
 	____PVOP_VCALL(op, CLBR_ANY, PVOP_VCALL_CLOBBERS,		\
-		       VEXTRA_CLOBBERS,					\
-		       pre, post, ##__VA_ARGS__)
+		       VEXTRA_CLOBBERS, ##__VA_ARGS__)
 
-#define __PVOP_VCALLEESAVE(op, pre, post, ...)				\
+#define __PVOP_VCALLEESAVE(op, ...)					\
 	____PVOP_VCALL(op.func, CLBR_RET_REG,				\
-		      PVOP_VCALLEE_CLOBBERS, ,				\
-		      pre, post, ##__VA_ARGS__)
+		      PVOP_VCALLEE_CLOBBERS, , ##__VA_ARGS__)
 
 
 
 #define PVOP_CALL0(rettype, op)						\
-	__PVOP_CALL(rettype, op, "", "")
+	__PVOP_CALL(rettype, op)
 #define PVOP_VCALL0(op)							\
-	__PVOP_VCALL(op, "", "")
+	__PVOP_VCALL(op)
 
 #define PVOP_CALLEE0(rettype, op)					\
-	__PVOP_CALLEESAVE(rettype, op, "", "")
+	__PVOP_CALLEESAVE(rettype, op)
 #define PVOP_VCALLEE0(op)						\
-	__PVOP_VCALLEESAVE(op, "", "")
+	__PVOP_VCALLEESAVE(op)
 
 
 #define PVOP_CALL1(rettype, op, arg1)					\
-	__PVOP_CALL(rettype, op, "", "", PVOP_CALL_ARG1(arg1))
+	__PVOP_CALL(rettype, op, PVOP_CALL_ARG1(arg1))
 #define PVOP_VCALL1(op, arg1)						\
-	__PVOP_VCALL(op, "", "", PVOP_CALL_ARG1(arg1))
+	__PVOP_VCALL(op, PVOP_CALL_ARG1(arg1))
 
 #define PVOP_CALLEE1(rettype, op, arg1)					\
-	__PVOP_CALLEESAVE(rettype, op, "", "", PVOP_CALL_ARG1(arg1))
+	__PVOP_CALLEESAVE(rettype, op, PVOP_CALL_ARG1(arg1))
 #define PVOP_VCALLEE1(op, arg1)						\
-	__PVOP_VCALLEESAVE(op, "", "", PVOP_CALL_ARG1(arg1))
+	__PVOP_VCALLEESAVE(op, PVOP_CALL_ARG1(arg1))
 
 
 #define PVOP_CALL2(rettype, op, arg1, arg2)				\
-	__PVOP_CALL(rettype, op, "", "", PVOP_CALL_ARG1(arg1),		\
-		    PVOP_CALL_ARG2(arg2))
+	__PVOP_CALL(rettype, op, PVOP_CALL_ARG1(arg1), PVOP_CALL_ARG2(arg2))
 #define PVOP_VCALL2(op, arg1, arg2)					\
-	__PVOP_VCALL(op, "", "", PVOP_CALL_ARG1(arg1),			\
-		     PVOP_CALL_ARG2(arg2))
-
-#define PVOP_CALLEE2(rettype, op, arg1, arg2)				\
-	__PVOP_CALLEESAVE(rettype, op, "", "", PVOP_CALL_ARG1(arg1),	\
-			  PVOP_CALL_ARG2(arg2))
-#define PVOP_VCALLEE2(op, arg1, arg2)					\
-	__PVOP_VCALLEESAVE(op, "", "", PVOP_CALL_ARG1(arg1),		\
-			   PVOP_CALL_ARG2(arg2))
-
+	__PVOP_VCALL(op, PVOP_CALL_ARG1(arg1), PVOP_CALL_ARG2(arg2))
 
 #define PVOP_CALL3(rettype, op, arg1, arg2, arg3)			\
-	__PVOP_CALL(rettype, op, "", "", PVOP_CALL_ARG1(arg1),		\
+	__PVOP_CALL(rettype, op, PVOP_CALL_ARG1(arg1),			\
 		    PVOP_CALL_ARG2(arg2), PVOP_CALL_ARG3(arg3))
 #define PVOP_VCALL3(op, arg1, arg2, arg3)				\
-	__PVOP_VCALL(op, "", "", PVOP_CALL_ARG1(arg1),			\
+	__PVOP_VCALL(op, PVOP_CALL_ARG1(arg1),				\
 		     PVOP_CALL_ARG2(arg2), PVOP_CALL_ARG3(arg3))
 
-/* This is the only difference in x86_64. We can make it much simpler */
-#ifdef CONFIG_X86_32
 #define PVOP_CALL4(rettype, op, arg1, arg2, arg3, arg4)			\
 	__PVOP_CALL(rettype, op,					\
-		    "push %[_arg4];", "lea 4(%%esp),%%esp;",		\
-		    PVOP_CALL_ARG1(arg1), PVOP_CALL_ARG2(arg2),		\
-		    PVOP_CALL_ARG3(arg3), [_arg4] "mr" ((u32)(arg4)))
-#define PVOP_VCALL4(op, arg1, arg2, arg3, arg4)				\
-	__PVOP_VCALL(op,						\
-		    "push %[_arg4];", "lea 4(%%esp),%%esp;",		\
-		    "0" ((u32)(arg1)), "1" ((u32)(arg2)),		\
-		    "2" ((u32)(arg3)), [_arg4] "mr" ((u32)(arg4)))
-#else
-#define PVOP_CALL4(rettype, op, arg1, arg2, arg3, arg4)			\
-	__PVOP_CALL(rettype, op, "", "",				\
 		    PVOP_CALL_ARG1(arg1), PVOP_CALL_ARG2(arg2),		\
 		    PVOP_CALL_ARG3(arg3), PVOP_CALL_ARG4(arg4))
 #define PVOP_VCALL4(op, arg1, arg2, arg3, arg4)				\
-	__PVOP_VCALL(op, "", "",					\
-		     PVOP_CALL_ARG1(arg1), PVOP_CALL_ARG2(arg2),	\
+	__PVOP_VCALL(op, PVOP_CALL_ARG1(arg1), PVOP_CALL_ARG2(arg2),	\
 		     PVOP_CALL_ARG3(arg3), PVOP_CALL_ARG4(arg4))
-#endif
 
 /* Lazy mode for batching updates / context switch */
 enum paravirt_lazy_mode {
diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c
index 60b9f42ce3c1..736508004b30 100644
--- a/arch/x86/kernel/asm-offsets.c
+++ b/arch/x86/kernel/asm-offsets.c
@@ -63,8 +63,6 @@ static void __used common(void)
 
 #ifdef CONFIG_PARAVIRT_XXL
 	BLANK();
-	OFFSET(PV_IRQ_irq_disable, paravirt_patch_template, irq.irq_disable);
-	OFFSET(PV_IRQ_irq_enable, paravirt_patch_template, irq.irq_enable);
 	OFFSET(PV_CPU_iret, paravirt_patch_template, cpu.iret);
 #endif
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 09:32:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 09:32:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55784.97295 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kppdq-00070w-Se; Thu, 17 Dec 2020 09:32:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55784.97295; Thu, 17 Dec 2020 09:32:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kppdq-00070m-OU; Thu, 17 Dec 2020 09:32:10 +0000
Received: by outflank-mailman (input) for mailman id 55784;
 Thu, 17 Dec 2020 09:32:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gjir=FV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kppdo-0006RQ-UK
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 09:32:08 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e8d3504b-9db0-4650-92c8-b11bf05e908e;
 Thu, 17 Dec 2020 09:31:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2EA20B71A;
 Thu, 17 Dec 2020 09:31:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e8d3504b-9db0-4650-92c8-b11bf05e908e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608197508; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TvIYSNPdBSefisWFtTOksv3sxjm38mYHs1Pyg9aHYqE=;
	b=nvEwT2oDeGIskBzFViVMmay8X7b3n/OSFIxRxooevtQ/kgewzkhr5nPy+Aucg2f9zx0LjE
	LFY8Mx3cQe3AWA7NjKrb3ADoMWFUoFwjFSWjiP+WM7y+OWTC91NI0Uj3Wvu5+bATZcNUQ4
	2Kw25F56RY3EpTGm9xoWP0r5mG7nFL8=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: [PATCH v3 07/15] x86/alternative: support "not feature" and ALTERNATIVE_TERNARY
Date: Thu, 17 Dec 2020 10:31:25 +0100
Message-Id: <20201217093133.1507-8-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201217093133.1507-1-jgross@suse.com>
References: <20201217093133.1507-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of only supporting the modification of instructions when a
specific feature is set, add support for doing so when a feature is
not set.

As a feature is currently specified as a 16-bit quantity and the
highest feature number in use is around 600, using a negated feature
number to specify the inverted case is appropriate.

  ALTERNATIVE "default_instr", "patched_instr", ~FEATURE_NR

will start with "default_instr" and patch that with "patched_instr" in
case FEATURE_NR is not set.

Using that, add ALTERNATIVE_TERNARY:

  ALTERNATIVE_TERNARY "default_instr", FEATURE_NR,
                      "feature_on_instr", "feature_off_instr"

which will start with "default_instr" and, at patch time, patch that
with either "feature_on_instr" or "feature_off_instr", depending on
whether FEATURE_NR is set.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- new patch
---
 arch/x86/include/asm/alternative-asm.h |  3 +++
 arch/x86/include/asm/alternative.h     |  7 +++++++
 arch/x86/kernel/alternative.c          | 17 ++++++++++++-----
 3 files changed, 22 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/alternative-asm.h b/arch/x86/include/asm/alternative-asm.h
index 464034db299f..b6989995fddf 100644
--- a/arch/x86/include/asm/alternative-asm.h
+++ b/arch/x86/include/asm/alternative-asm.h
@@ -109,6 +109,9 @@
 	.popsection
 .endm
 
+#define ALTERNATIVE_TERNARY(oldinstr, feature, newinstr1, newinstr2)	\
+	ALTERNATIVE_2 oldinstr, newinstr1, feature, newinstr2, ~(feature)
+
 #endif  /*  __ASSEMBLY__  */
 
 #endif /* _ASM_X86_ALTERNATIVE_ASM_H */
diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h
index 13adca37c99a..a0f8f33609aa 100644
--- a/arch/x86/include/asm/alternative.h
+++ b/arch/x86/include/asm/alternative.h
@@ -59,6 +59,7 @@ struct alt_instr {
 	s32 instr_offset;	/* original instruction */
 	s32 repl_offset;	/* offset to replacement instruction */
 	u16 cpuid;		/* cpuid bit set for replacement */
+#define ALT_INSTR_CPUID_INV	0x8000	/* patch if ~cpuid bit is NOT set */
 	u8  instrlen;		/* length of original instruction */
 	u8  replacementlen;	/* length of new instruction */
 	u8  padlen;		/* length of build-time padding */
@@ -175,6 +176,9 @@ static inline int alternatives_text_reserved(void *start, void *end)
 	ALTINSTR_REPLACEMENT(newinstr2, feature2, 2)			\
 	".popsection\n"
 
+#define ALTERNATIVE_TERNARY(oldinstr, feature, newinstr1, newinstr2)	\
+	ALTERNATIVE_2(oldinstr, newinstr1, feature, newinstr2, ~(feature))
+
 #define ALTERNATIVE_3(oldinsn, newinsn1, feat1, newinsn2, feat2, newinsn3, feat3) \
 	OLDINSTR_3(oldinsn, 1, 2, 3)						\
 	".pushsection .altinstructions,\"a\"\n"					\
@@ -206,6 +210,9 @@ static inline int alternatives_text_reserved(void *start, void *end)
 #define alternative_2(oldinstr, newinstr1, feature1, newinstr2, feature2) \
 	asm_inline volatile(ALTERNATIVE_2(oldinstr, newinstr1, feature1, newinstr2, feature2) ::: "memory")
 
+#define alternative_ternary(oldinstr, feature, newinstr1, newinstr2)	\
+	asm_inline volatile(ALTERNATIVE_TERNARY(oldinstr, feature, newinstr1, newinstr2) ::: "memory")
+
 /*
  * Alternative inline assembly with input.
  *
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 8d778e46725d..0a904fb2678b 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -388,21 +388,28 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start,
 	 */
 	for (a = start; a < end; a++) {
 		int insn_buff_sz = 0;
+		u16 feature;
+		bool not_feature;
 
 		instr = (u8 *)&a->instr_offset + a->instr_offset;
 		replacement = (u8 *)&a->repl_offset + a->repl_offset;
+		feature = a->cpuid;
+		not_feature = feature & ALT_INSTR_CPUID_INV;
+		if (not_feature)
+			feature = ~feature;
 		BUG_ON(a->instrlen > sizeof(insn_buff));
-		BUG_ON(a->cpuid >= (NCAPINTS + NBUGINTS) * 32);
-		if (!boot_cpu_has(a->cpuid)) {
+		BUG_ON(feature >= (NCAPINTS + NBUGINTS) * 32);
+		if (!!boot_cpu_has(feature) == not_feature) {
 			if (a->padlen > 1)
 				optimize_nops(a, instr);
 
 			continue;
 		}
 
-		DPRINTK("feat: %d*32+%d, old: (%pS (%px) len: %d), repl: (%px, len: %d), pad: %d",
-			a->cpuid >> 5,
-			a->cpuid & 0x1f,
+		DPRINTK("feat: %s%d*32+%d, old: (%pS (%px) len: %d), repl: (%px, len: %d), pad: %d",
+			not_feature ? "~" : "",
+			feature >> 5,
+			feature & 0x1f,
 			instr, instr, a->instrlen,
 			replacement, a->replacementlen, a->padlen);
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 09:32:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 09:32:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55786.97307 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kppdu-000781-EK; Thu, 17 Dec 2020 09:32:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55786.97307; Thu, 17 Dec 2020 09:32:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kppdu-00077k-7L; Thu, 17 Dec 2020 09:32:14 +0000
Received: by outflank-mailman (input) for mailman id 55786;
 Thu, 17 Dec 2020 09:32:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gjir=FV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kppds-0006R7-Lk
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 09:32:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 395ea5e9-f67b-4682-91c3-3b12986ce395;
 Thu, 17 Dec 2020 09:31:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 290A0B722;
 Thu, 17 Dec 2020 09:31:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 395ea5e9-f67b-4682-91c3-3b12986ce395
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608197510; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QbhN8nDn8ElFaPhinA1wJiI1Qr2HaGBNE+bcWGQlFpI=;
	b=UFoQlSy4Dkk3IGYqpUsYwqK3x/2DMMD2RVRF0AFx3WupuOh8INGNwy1YOnqk73w8nBO7lq
	7Sc81VGyhZdXgXTPFN0fW6vOOCOL7i2y0I6M4Wgu0IzV4ZBY9IO1lAldq5Q8GS2I2BpVXp
	7BFQrpQDmDPkie4o7I8sVGexzABwWOY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: [PATCH v3 14/15] x86/paravirt: switch functions with custom code to ALTERNATIVE
Date: Thu, 17 Dec 2020 10:31:32 +0100
Message-Id: <20201217093133.1507-15-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201217093133.1507-1-jgross@suse.com>
References: <20201217093133.1507-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of using paravirt patching for custom code sequences, use
ALTERNATIVE for the functions with custom code replacements.

Instead of patching a ud2 instruction into the caller site for
unpopulated vector entries, use a simple function that just calls
BUG() as a replacement.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/paravirt.h       | 72 ++++++++++++++--------
 arch/x86/include/asm/paravirt_types.h |  1 -
 arch/x86/kernel/paravirt.c            | 16 ++---
 arch/x86/kernel/paravirt_patch.c      | 88 ---------------------------
 4 files changed, 53 insertions(+), 124 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 49a823abc0e1..fc551dcc6458 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -108,7 +108,8 @@ static inline void write_cr0(unsigned long x)
 
 static inline unsigned long read_cr2(void)
 {
-	return PVOP_CALLEE0(unsigned long, mmu.read_cr2);
+	return PVOP_ALT_CALLEE0(unsigned long, mmu.read_cr2,
+				"mov %%cr2, %%rax;", ~X86_FEATURE_XENPV);
 }
 
 static inline void write_cr2(unsigned long x)
@@ -118,12 +119,14 @@ static inline void write_cr2(unsigned long x)
 
 static inline unsigned long __read_cr3(void)
 {
-	return PVOP_CALL0(unsigned long, mmu.read_cr3);
+	return PVOP_ALT_CALL0(unsigned long, mmu.read_cr3,
+			      "mov %%cr3, %%rax;", ~X86_FEATURE_XENPV);
 }
 
 static inline void write_cr3(unsigned long x)
 {
-	PVOP_VCALL1(mmu.write_cr3, x);
+	PVOP_ALT_VCALL1(mmu.write_cr3, x,
+			"mov %%rdi, %%cr3", ~X86_FEATURE_XENPV);
 }
 
 static inline void __write_cr4(unsigned long x)
@@ -143,7 +146,7 @@ static inline void halt(void)
 
 static inline void wbinvd(void)
 {
-	PVOP_VCALL0(cpu.wbinvd);
+	PVOP_ALT_VCALL0(cpu.wbinvd, "wbinvd", ~X86_FEATURE_XENPV);
 }
 
 static inline u64 paravirt_read_msr(unsigned msr)
@@ -357,22 +360,28 @@ static inline void paravirt_release_p4d(unsigned long pfn)
 
 static inline pte_t __pte(pteval_t val)
 {
-	return (pte_t) { PVOP_CALLEE1(pteval_t, mmu.make_pte, val) };
+	return (pte_t) { PVOP_ALT_CALLEE1(pteval_t, mmu.make_pte, val,
+					  "mov %%rdi, %%rax",
+					  ~X86_FEATURE_XENPV) };
 }
 
 static inline pteval_t pte_val(pte_t pte)
 {
-	return PVOP_CALLEE1(pteval_t, mmu.pte_val, pte.pte);
+	return PVOP_ALT_CALLEE1(pteval_t, mmu.pte_val, pte.pte,
+				"mov %%rdi, %%rax", ~X86_FEATURE_XENPV);
 }
 
 static inline pgd_t __pgd(pgdval_t val)
 {
-	return (pgd_t) { PVOP_CALLEE1(pgdval_t, mmu.make_pgd, val) };
+	return (pgd_t) { PVOP_ALT_CALLEE1(pgdval_t, mmu.make_pgd, val,
+					  "mov %%rdi, %%rax",
+					  ~X86_FEATURE_XENPV) };
 }
 
 static inline pgdval_t pgd_val(pgd_t pgd)
 {
-	return PVOP_CALLEE1(pgdval_t, mmu.pgd_val, pgd.pgd);
+	return PVOP_ALT_CALLEE1(pgdval_t, mmu.pgd_val, pgd.pgd,
+				"mov %%rdi, %%rax", ~X86_FEATURE_XENPV);
 }
 
 #define  __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION
@@ -405,12 +414,15 @@ static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
 
 static inline pmd_t __pmd(pmdval_t val)
 {
-	return (pmd_t) { PVOP_CALLEE1(pmdval_t, mmu.make_pmd, val) };
+	return (pmd_t) { PVOP_ALT_CALLEE1(pmdval_t, mmu.make_pmd, val,
+					  "mov %%rdi, %%rax",
+					  ~X86_FEATURE_XENPV) };
 }
 
 static inline pmdval_t pmd_val(pmd_t pmd)
 {
-	return PVOP_CALLEE1(pmdval_t, mmu.pmd_val, pmd.pmd);
+	return PVOP_ALT_CALLEE1(pmdval_t, mmu.pmd_val, pmd.pmd,
+				"mov %%rdi, %%rax", ~X86_FEATURE_XENPV);
 }
 
 static inline void set_pud(pud_t *pudp, pud_t pud)
@@ -422,14 +434,16 @@ static inline pud_t __pud(pudval_t val)
 {
 	pudval_t ret;
 
-	ret = PVOP_CALLEE1(pudval_t, mmu.make_pud, val);
+	ret = PVOP_ALT_CALLEE1(pudval_t, mmu.make_pud, val,
+			       "mov %%rdi, %%rax", ~X86_FEATURE_XENPV);
 
 	return (pud_t) { ret };
 }
 
 static inline pudval_t pud_val(pud_t pud)
 {
-	return PVOP_CALLEE1(pudval_t, mmu.pud_val, pud.pud);
+	return PVOP_ALT_CALLEE1(pudval_t, mmu.pud_val, pud.pud,
+				"mov %%rdi, %%rax", ~X86_FEATURE_XENPV);
 }
 
 static inline void pud_clear(pud_t *pudp)
@@ -448,14 +462,16 @@ static inline void set_p4d(p4d_t *p4dp, p4d_t p4d)
 
 static inline p4d_t __p4d(p4dval_t val)
 {
-	p4dval_t ret = PVOP_CALLEE1(p4dval_t, mmu.make_p4d, val);
+	p4dval_t ret = PVOP_ALT_CALLEE1(p4dval_t, mmu.make_p4d, val,
+					"mov %%rdi, %%rax", ~X86_FEATURE_XENPV);
 
 	return (p4d_t) { ret };
 }
 
 static inline p4dval_t p4d_val(p4d_t p4d)
 {
-	return PVOP_CALLEE1(p4dval_t, mmu.p4d_val, p4d.p4d);
+	return PVOP_ALT_CALLEE1(p4dval_t, mmu.p4d_val, p4d.p4d,
+				"mov %%rdi, %%rax", ~X86_FEATURE_XENPV);
 }
 
 static inline void __set_pgd(pgd_t *pgdp, pgd_t pgd)
@@ -542,7 +558,9 @@ static __always_inline void pv_queued_spin_lock_slowpath(struct qspinlock *lock,
 
 static __always_inline void pv_queued_spin_unlock(struct qspinlock *lock)
 {
-	PVOP_VCALLEE1(lock.queued_spin_unlock, lock);
+	PVOP_ALT_VCALLEE1(lock.queued_spin_unlock, lock,
+			  "movb $0, (%%" _ASM_ARG1 ");",
+			  ~X86_FEATURE_PVUNLOCK);
 }
 
 static __always_inline void pv_wait(u8 *ptr, u8 val)
@@ -557,7 +575,9 @@ static __always_inline void pv_kick(int cpu)
 
 static __always_inline bool pv_vcpu_is_preempted(long cpu)
 {
-	return PVOP_CALLEE1(bool, lock.vcpu_is_preempted, cpu);
+	return PVOP_ALT_CALLEE1(bool, lock.vcpu_is_preempted, cpu,
+				"xor %%" _ASM_AX ", %%" _ASM_AX ";",
+				~X86_FEATURE_VCPUPREEMPT);
 }
 
 void __raw_callee_save___native_queued_spin_unlock(struct qspinlock *lock);
@@ -631,17 +651,18 @@ bool __raw_callee_save___native_vcpu_is_preempted(long cpu);
 #ifdef CONFIG_PARAVIRT_XXL
 static inline notrace unsigned long arch_local_save_flags(void)
 {
-	return PVOP_CALLEE0(unsigned long, irq.save_fl);
+	return PVOP_ALT_CALLEE0(unsigned long, irq.save_fl,
+				"pushf; pop %%rax;", ~X86_FEATURE_XENPV);
 }
 
 static inline notrace void arch_local_irq_disable(void)
 {
-	PVOP_VCALLEE0(irq.irq_disable);
+	PVOP_ALT_VCALLEE0(irq.irq_disable, "cli;", ~X86_FEATURE_XENPV);
 }
 
 static inline notrace void arch_local_irq_enable(void)
 {
-	PVOP_VCALLEE0(irq.irq_enable);
+	PVOP_ALT_VCALLEE0(irq.irq_enable, "sti;", ~X86_FEATURE_XENPV);
 }
 
 static inline notrace unsigned long arch_local_irq_save(void)
@@ -725,12 +746,13 @@ extern void default_banner(void);
 		X86_FEATURE_XENPV, "jmp xen_iret;", "jmp native_iret;")
 
 #ifdef CONFIG_DEBUG_ENTRY
-#define SAVE_FLAGS(clobbers)                                        \
-	PARA_SITE(PARA_PATCH(PV_IRQ_save_fl),			    \
-		  PV_SAVE_REGS(clobbers | CLBR_CALLEE_SAVE);        \
-		  ANNOTATE_RETPOLINE_SAFE;			    \
-		  call PARA_INDIRECT(pv_ops+PV_IRQ_save_fl);	    \
-		  PV_RESTORE_REGS(clobbers | CLBR_CALLEE_SAVE);)
+#define SAVE_FLAGS(clobbers)						      \
+	ALTERNATIVE(PARA_SITE(PARA_PATCH(PV_IRQ_save_fl),		      \
+			      PV_SAVE_REGS(clobbers | CLBR_CALLEE_SAVE);      \
+			      ANNOTATE_RETPOLINE_SAFE;			      \
+			      call PARA_INDIRECT(pv_ops+PV_IRQ_save_fl);      \
+			      PV_RESTORE_REGS(clobbers | CLBR_CALLEE_SAVE);), \
+		    "pushf; pop %rax;", ~X86_FEATURE_XENPV)
 #endif
 #endif /* CONFIG_PARAVIRT_XXL */
 #endif	/* CONFIG_X86_64 */
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 7e0130781b12..f9e77046b61b 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -322,7 +322,6 @@ extern void (*paravirt_iret)(void);
 /* Simple instruction patching code. */
 #define NATIVE_LABEL(a,x,b) "\n\t.globl " a #x "_" #b "\n" a #x "_" #b ":\n\t"
 
-unsigned paravirt_patch_ident_64(void *insn_buff, unsigned len);
 unsigned paravirt_patch_default(u8 type, void *insn_buff, unsigned long addr, unsigned len);
 unsigned paravirt_patch_insns(void *insn_buff, unsigned len, const char *start, const char *end);
 
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 2ab547dd66c3..db6ae7f7c14e 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -53,7 +53,10 @@ void __init default_banner(void)
 }
 
 /* Undefined instruction for dealing with missing ops pointers. */
-static const unsigned char ud2a[] = { 0x0f, 0x0b };
+static void paravirt_BUG(void)
+{
+	BUG();
+}
 
 struct branch {
 	unsigned char opcode;
@@ -107,17 +110,10 @@ unsigned paravirt_patch_default(u8 type, void *insn_buff,
 	unsigned ret;
 
 	if (opfunc == NULL)
-		/* If there's no function, patch it with a ud2a (BUG) */
-		ret = paravirt_patch_insns(insn_buff, len, ud2a, ud2a+sizeof(ud2a));
+		/* If there's no function, patch it with paravirt_BUG() */
+		ret = paravirt_patch_call(insn_buff, paravirt_BUG, addr, len);
 	else if (opfunc == _paravirt_nop)
 		ret = 0;
-
-#ifdef CONFIG_PARAVIRT_XXL
-	/* identity functions just return their single argument */
-	else if (opfunc == _paravirt_ident_64)
-		ret = paravirt_patch_ident_64(insn_buff, len);
-
-#endif
 	else
 		/* Otherwise call the function. */
 		ret = paravirt_patch_call(insn_buff, opfunc, addr, len);
diff --git a/arch/x86/kernel/paravirt_patch.c b/arch/x86/kernel/paravirt_patch.c
index abd27ec67397..10543dcc8211 100644
--- a/arch/x86/kernel/paravirt_patch.c
+++ b/arch/x86/kernel/paravirt_patch.c
@@ -4,96 +4,8 @@
 #include <asm/paravirt.h>
 #include <asm/asm-offsets.h>
 
-#define PSTART(d, m)							\
-	patch_data_##d.m
-
-#define PEND(d, m)							\
-	(PSTART(d, m) + sizeof(patch_data_##d.m))
-
-#define PATCH(d, m, insn_buff, len)						\
-	paravirt_patch_insns(insn_buff, len, PSTART(d, m), PEND(d, m))
-
-#define PATCH_CASE(ops, m, data, insn_buff, len)				\
-	case PARAVIRT_PATCH(ops.m):					\
-		return PATCH(data, ops##_##m, insn_buff, len)
-
-#ifdef CONFIG_PARAVIRT_XXL
-struct patch_xxl {
-	const unsigned char	irq_irq_disable[1];
-	const unsigned char	irq_irq_enable[1];
-	const unsigned char	irq_save_fl[2];
-	const unsigned char	mmu_read_cr2[3];
-	const unsigned char	mmu_read_cr3[3];
-	const unsigned char	mmu_write_cr3[3];
-	const unsigned char	cpu_wbinvd[2];
-	const unsigned char	mov64[3];
-};
-
-static const struct patch_xxl patch_data_xxl = {
-	.irq_irq_disable	= { 0xfa },		// cli
-	.irq_irq_enable		= { 0xfb },		// sti
-	.irq_save_fl		= { 0x9c, 0x58 },	// pushf; pop %[re]ax
-	.mmu_read_cr2		= { 0x0f, 0x20, 0xd0 },	// mov %cr2, %[re]ax
-	.mmu_read_cr3		= { 0x0f, 0x20, 0xd8 },	// mov %cr3, %[re]ax
-	.mmu_write_cr3		= { 0x0f, 0x22, 0xdf },	// mov %rdi, %cr3
-	.cpu_wbinvd		= { 0x0f, 0x09 },	// wbinvd
-	.mov64			= { 0x48, 0x89, 0xf8 },	// mov %rdi, %rax
-};
-
-unsigned int paravirt_patch_ident_64(void *insn_buff, unsigned int len)
-{
-	return PATCH(xxl, mov64, insn_buff, len);
-}
-# endif /* CONFIG_PARAVIRT_XXL */
-
-#ifdef CONFIG_PARAVIRT_SPINLOCKS
-struct patch_lock {
-	unsigned char queued_spin_unlock[3];
-	unsigned char vcpu_is_preempted[2];
-};
-
-static const struct patch_lock patch_data_lock = {
-	.vcpu_is_preempted	= { 0x31, 0xc0 },	// xor %eax, %eax
-
-# ifdef CONFIG_X86_64
-	.queued_spin_unlock	= { 0xc6, 0x07, 0x00 },	// movb $0, (%rdi)
-# else
-	.queued_spin_unlock	= { 0xc6, 0x00, 0x00 },	// movb $0, (%eax)
-# endif
-};
-#endif /* CONFIG_PARAVIRT_SPINLOCKS */
-
 unsigned int native_patch(u8 type, void *insn_buff, unsigned long addr,
 			  unsigned int len)
 {
-	switch (type) {
-
-#ifdef CONFIG_PARAVIRT_XXL
-	PATCH_CASE(irq, save_fl, xxl, insn_buff, len);
-	PATCH_CASE(irq, irq_enable, xxl, insn_buff, len);
-	PATCH_CASE(irq, irq_disable, xxl, insn_buff, len);
-
-	PATCH_CASE(mmu, read_cr2, xxl, insn_buff, len);
-	PATCH_CASE(mmu, read_cr3, xxl, insn_buff, len);
-	PATCH_CASE(mmu, write_cr3, xxl, insn_buff, len);
-
-	PATCH_CASE(cpu, wbinvd, xxl, insn_buff, len);
-#endif
-
-#ifdef CONFIG_PARAVIRT_SPINLOCKS
-	case PARAVIRT_PATCH(lock.queued_spin_unlock):
-		if (pv_is_native_spin_unlock())
-			return PATCH(lock, queued_spin_unlock, insn_buff, len);
-		break;
-
-	case PARAVIRT_PATCH(lock.vcpu_is_preempted):
-		if (pv_is_native_vcpu_is_preempted())
-			return PATCH(lock, vcpu_is_preempted, insn_buff, len);
-		break;
-#endif
-	default:
-		break;
-	}
-
 	return paravirt_patch_default(type, insn_buff, addr, len);
 }
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 09:32:15 2020
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: [PATCH v3 08/15] x86: add new features for paravirt patching
Date: Thu, 17 Dec 2020 10:31:26 +0100
Message-Id: <20201217093133.1507-9-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201217093133.1507-1-jgross@suse.com>
References: <20201217093133.1507-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to be able to switch paravirt patching from special-cased
custom code sequences to ALTERNATIVE handling, some new X86_FEATURE_*
flags are needed. These allow the standard indirect pv call to be the
default code, which can later be patched with the non-Xen custom code
sequence via ALTERNATIVE handling.

Make sure paravirt patching is performed before alternative patching.
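
The ordering described above can be illustrated with a toy C model
(all names and "instruction" strings here are invented for the sketch;
the real kernel does this by boot-time text patching, and note that the
pv sites shown earlier in this series test the *negated* feature, so the
short native sequence is patched in when the feature is not forced):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define FEAT_PVUNLOCK    (1u << 0)	/* models X86_FEATURE_PVUNLOCK */
#define FEAT_VCPUPREEMPT (1u << 1)	/* models X86_FEATURE_VCPUPREEMPT */

static uint32_t forced_caps;

/* step 1 (paravirt_set_cap): force artificial features from pv_ops in use */
static void set_caps(int native_unlock, int native_preempt)
{
	forced_caps = 0;
	if (!native_unlock)
		forced_caps |= FEAT_PVUNLOCK;
	if (!native_preempt)
		forced_caps |= FEAT_VCPUPREEMPT;
}

/* steps 2+3 for one patch site: the compiled-in indirect call is first
 * turned into a direct call (apply_paravirt), then replaced by the short
 * native sequence (apply_alternatives) when the feature is NOT forced,
 * mirroring the negated-feature condition used by the pv call sites. */
static const char *patch_site(uint32_t feature, const char *native_insn)
{
	const char *site = "call *pv_ops.op";	/* compiled-in default */

	site = "call <target>";			/* apply_paravirt() */
	if (!(forced_caps & feature))		/* apply_alternatives() */
		site = native_insn;
	return site;
}
```

Doing the steps in the opposite order would, as the comment in the patch
says, overwrite the inlined sequence with a function call again.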

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- add comment (Boris Petkov)
- no negative features (Boris Petkov)
---
 arch/x86/include/asm/cpufeatures.h |  2 ++
 arch/x86/kernel/alternative.c      | 40 ++++++++++++++++++++++++++++--
 2 files changed, 40 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index f5ef2d5b9231..1077b675a008 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -237,6 +237,8 @@
 #define X86_FEATURE_VMCALL		( 8*32+18) /* "" Hypervisor supports the VMCALL instruction */
 #define X86_FEATURE_VMW_VMMCALL		( 8*32+19) /* "" VMware prefers VMMCALL hypercall instruction */
 #define X86_FEATURE_SEV_ES		( 8*32+20) /* AMD Secure Encrypted Virtualization - Encrypted State */
+#define X86_FEATURE_PVUNLOCK		( 8*32+21) /* "" PV unlock function */
+#define X86_FEATURE_VCPUPREEMPT		( 8*32+22) /* "" PV vcpu_is_preempted function */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:0 (EBX), word 9 */
 #define X86_FEATURE_FSGSBASE		( 9*32+ 0) /* RDFSBASE, WRFSBASE, RDGSBASE, WRGSBASE instructions*/
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 0a904fb2678b..abb481808811 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -600,6 +600,15 @@ int alternatives_text_reserved(void *start, void *end)
 #endif /* CONFIG_SMP */
 
 #ifdef CONFIG_PARAVIRT
+static void __init paravirt_set_cap(void)
+{
+	if (!pv_is_native_spin_unlock())
+		setup_force_cpu_cap(X86_FEATURE_PVUNLOCK);
+
+	if (!pv_is_native_vcpu_is_preempted())
+		setup_force_cpu_cap(X86_FEATURE_VCPUPREEMPT);
+}
+
 void __init_or_module apply_paravirt(struct paravirt_patch_site *start,
 				     struct paravirt_patch_site *end)
 {
@@ -623,6 +632,8 @@ void __init_or_module apply_paravirt(struct paravirt_patch_site *start,
 }
 extern struct paravirt_patch_site __start_parainstructions[],
 	__stop_parainstructions[];
+#else
+static void __init paravirt_set_cap(void) { }
 #endif	/* CONFIG_PARAVIRT */
 
 /*
@@ -730,6 +741,33 @@ void __init alternative_instructions(void)
 	 * patching.
 	 */
 
+	/*
+	 * Paravirt patching and alternative patching can be combined to
+	 * replace a function call with a short direct code sequence (e.g.
+	 * by setting a constant return value instead of doing that in an
+	 * external function).
+	 * In order to make this work the following sequence is required:
+	 * 1. set (artificial) features depending on used paravirt
+	 *    functions which can later influence alternative patching
+	 * 2. apply paravirt patching (generally replacing an indirect
+	 *    function call with a direct one)
+	 * 3. apply alternative patching (e.g. replacing a direct function
+	 *    call with a custom code sequence)
+	 * Doing paravirt patching after alternative patching would clobber
+	 * the optimization of the custom code with a function call again.
+	 */
+	paravirt_set_cap();
+
+	/*
+	 * First patch paravirt functions, such that we overwrite the indirect
+	 * call with the direct call.
+	 */
+	apply_paravirt(__parainstructions, __parainstructions_end);
+
+	/*
+	 * Then patch alternatives, such that those paravirt calls that are in
+	 * alternatives can be overwritten by their immediate fragments.
+	 */
 	apply_alternatives(__alt_instructions, __alt_instructions_end);
 
 #ifdef CONFIG_SMP
@@ -748,8 +786,6 @@ void __init alternative_instructions(void)
 	}
 #endif
 
-	apply_paravirt(__parainstructions, __parainstructions_end);
-
 	restart_nmi();
 	alternatives_patched = 1;
 }
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 09:32:20 2020
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v3 11/15] x86/paravirt: switch iret pvops to ALTERNATIVE
Date: Thu, 17 Dec 2020 10:31:29 +0100
Message-Id: <20201217093133.1507-12-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201217093133.1507-1-jgross@suse.com>
References: <20201217093133.1507-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The iret paravirt op is rather special, as it uses a jmp instead
of a call instruction. Switch it to ALTERNATIVE.
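
A minimal sketch of the selection ALTERNATIVE_TERNARY performs (purely
illustrative: the real macro patches instruction bytes once at boot, it
is not a runtime branch, and the strings below just stand in for the
replacement instructions):

```c
#include <assert.h>
#include <string.h>

/* ALTERNATIVE_TERNARY(default, feature, insn_if_set, insn_if_clear):
 * exactly one of the two replacements ends up patched over the default,
 * depending on whether the feature bit was set before patching ran. */
static const char *alt_ternary(const char *dflt, int feature_set,
			       const char *if_set, const char *if_clear)
{
	(void)dflt;	/* the default is always overwritten by one branch */
	return feature_set ? if_set : if_clear;
}
```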

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- use ALTERNATIVE_TERNARY
---
 arch/x86/include/asm/paravirt.h       |  6 +++---
 arch/x86/include/asm/paravirt_types.h |  5 +----
 arch/x86/kernel/asm-offsets.c         |  5 -----
 arch/x86/kernel/paravirt.c            | 26 ++------------------------
 arch/x86/xen/enlighten_pv.c           |  3 +--
 5 files changed, 7 insertions(+), 38 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 1dd30c95505d..49a823abc0e1 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -720,9 +720,9 @@ extern void default_banner(void);
 #define PARA_INDIRECT(addr)	*addr(%rip)
 
 #define INTERRUPT_RETURN						\
-	PARA_SITE(PARA_PATCH(PV_CPU_iret),				\
-		  ANNOTATE_RETPOLINE_SAFE;				\
-		  jmp PARA_INDIRECT(pv_ops+PV_CPU_iret);)
+	ANNOTATE_RETPOLINE_SAFE;					\
+	ALTERNATIVE_TERNARY("jmp *paravirt_iret(%rip);",		\
+		X86_FEATURE_XENPV, "jmp xen_iret;", "jmp native_iret;")
 
 #ifdef CONFIG_DEBUG_ENTRY
 #define SAVE_FLAGS(clobbers)                                        \
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index a9efd4dad820..5d6de014d2f6 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -151,10 +151,6 @@ struct pv_cpu_ops {
 
 	u64 (*read_pmc)(int counter);
 
-	/* Normal iret.  Jump to this with the standard iret stack
-	   frame set up. */
-	void (*iret)(void);
-
 	void (*start_context_switch)(struct task_struct *prev);
 	void (*end_context_switch)(struct task_struct *next);
 #endif
@@ -294,6 +290,7 @@ struct paravirt_patch_template {
 
 extern struct pv_info pv_info;
 extern struct paravirt_patch_template pv_ops;
+extern void (*paravirt_iret)(void);
 
 #define PARAVIRT_PATCH(x)					\
 	(offsetof(struct paravirt_patch_template, x) / sizeof(void *))
diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c
index 736508004b30..ecd3fd6993d1 100644
--- a/arch/x86/kernel/asm-offsets.c
+++ b/arch/x86/kernel/asm-offsets.c
@@ -61,11 +61,6 @@ static void __used common(void)
 	OFFSET(IA32_RT_SIGFRAME_sigcontext, rt_sigframe_ia32, uc.uc_mcontext);
 #endif
 
-#ifdef CONFIG_PARAVIRT_XXL
-	BLANK();
-	OFFSET(PV_CPU_iret, paravirt_patch_template, cpu.iret);
-#endif
-
 #ifdef CONFIG_XEN
 	BLANK();
 	OFFSET(XEN_vcpu_info_mask, vcpu_info, evtchn_upcall_mask);
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 9f8aa18aa378..2ab547dd66c3 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -86,25 +86,6 @@ u64 notrace _paravirt_ident_64(u64 x)
 {
 	return x;
 }
-
-static unsigned paravirt_patch_jmp(void *insn_buff, const void *target,
-				   unsigned long addr, unsigned len)
-{
-	struct branch *b = insn_buff;
-	unsigned long delta = (unsigned long)target - (addr+5);
-
-	if (len < 5) {
-#ifdef CONFIG_RETPOLINE
-		WARN_ONCE(1, "Failing to patch indirect JMP in %ps\n", (void *)addr);
-#endif
-		return len;	/* call too long for patch site */
-	}
-
-	b->opcode = 0xe9;	/* jmp */
-	b->delta = delta;
-
-	return 5;
-}
 #endif
 
 DEFINE_STATIC_KEY_TRUE(virt_spin_lock_key);
@@ -136,9 +117,6 @@ unsigned paravirt_patch_default(u8 type, void *insn_buff,
 	else if (opfunc == _paravirt_ident_64)
 		ret = paravirt_patch_ident_64(insn_buff, len);
 
-	else if (type == PARAVIRT_PATCH(cpu.iret))
-		/* If operation requires a jmp, then jmp */
-		ret = paravirt_patch_jmp(insn_buff, opfunc, addr, len);
 #endif
 	else
 		/* Otherwise call the function. */
@@ -316,8 +294,6 @@ struct paravirt_patch_template pv_ops = {
 
 	.cpu.load_sp0		= native_load_sp0,
 
-	.cpu.iret		= native_iret,
-
 #ifdef CONFIG_X86_IOPL_IOPERM
 	.cpu.invalidate_io_bitmap	= native_tss_invalidate_io_bitmap,
 	.cpu.update_io_bitmap		= native_tss_update_io_bitmap,
@@ -422,6 +398,8 @@ struct paravirt_patch_template pv_ops = {
 NOKPROBE_SYMBOL(native_get_debugreg);
 NOKPROBE_SYMBOL(native_set_debugreg);
 NOKPROBE_SYMBOL(native_load_idt);
+
+void (*paravirt_iret)(void) = native_iret;
 #endif
 
 EXPORT_SYMBOL(pv_ops);
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 32b295cc2716..4716383c64a9 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -1057,8 +1057,6 @@ static const struct pv_cpu_ops xen_cpu_ops __initconst = {
 
 	.read_pmc = xen_read_pmc,
 
-	.iret = xen_iret,
-
 	.load_tr_desc = paravirt_nop,
 	.set_ldt = xen_set_ldt,
 	.load_gdt = xen_load_gdt,
@@ -1222,6 +1220,7 @@ asmlinkage __visible void __init xen_start_kernel(void)
 	pv_info = xen_info;
 	pv_ops.init.patch = paravirt_patch_default;
 	pv_ops.cpu = xen_cpu_ops;
+	paravirt_iret = xen_iret;
 	xen_init_irq_ops();
 
 	/*
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 09:32:25 2020
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: [PATCH v3 13/15] x86/paravirt: add new macros PVOP_ALT* supporting pvops in ALTERNATIVEs
Date: Thu, 17 Dec 2020 10:31:31 +0100
Message-Id: <20201217093133.1507-14-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201217093133.1507-1-jgross@suse.com>
References: <20201217093133.1507-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of using paravirt patching for custom code sequences, add
support for combining ALTERNATIVE handling with paravirt call patching.
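
A C-level analogy of what the new PVOP_ALT_* macros achieve (the names
and return values below are hypothetical; the real macros emit an asm
ALTERNATIVE, so the choice between the two paths is made by boot-time
text patching rather than by the runtime branch shown here):

```c
#include <assert.h>

typedef unsigned long (*pv_fn)(void);

/* stand-ins for a pv op and its short inline "alt" replacement */
static unsigned long pv_save_fl(void)     { return 0x2;   }	/* e.g. Xen */
static unsigned long native_save_fl(void) { return 0x202; }	/* pushf; pop %rax */

static struct { pv_fn save_fl; } ops = { .save_fl = pv_save_fl };

/* PVOP_ALT_CALLEE0(type, op, alt, cond) analogy: the default is the
 * call through the ops table; when the condition holds, the alternative
 * sequence is used instead of the call. */
static unsigned long pvop_alt_call(int cond, pv_fn alt)
{
	return cond ? alt() : ops.save_fl();
}
```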

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- drop ____PVOP_ALT_VCALL() macro
---
 arch/x86/include/asm/paravirt_types.h | 49 +++++++++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 5d6de014d2f6..7e0130781b12 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -477,44 +477,93 @@ int paravirt_disable_iospace(void);
 		(rettype)(__eax & PVOP_RETMASK(rettype));		\
 	})
 
+#define ____PVOP_ALT_CALL(rettype, op, alt, cond, clbr, call_clbr,	\
+			  extra_clbr, ...)				\
+	({								\
+		PVOP_CALL_ARGS;						\
+		PVOP_TEST_NULL(op);					\
+		BUILD_BUG_ON(sizeof(rettype) > sizeof(unsigned long));	\
+		asm volatile(ALTERNATIVE(paravirt_alt(PARAVIRT_CALL),	\
+					 alt, cond)			\
+			     : call_clbr, ASM_CALL_CONSTRAINT		\
+			     : paravirt_type(op),			\
+			       paravirt_clobber(clbr),			\
+			       ##__VA_ARGS__				\
+			     : "memory", "cc" extra_clbr);		\
+		(rettype)(__eax & PVOP_RETMASK(rettype));		\
+	})
+
 #define __PVOP_CALL(rettype, op, ...)					\
 	____PVOP_CALL(rettype, op, CLBR_ANY, PVOP_CALL_CLOBBERS,	\
 		      EXTRA_CLOBBERS, ##__VA_ARGS__)
 
+#define __PVOP_ALT_CALL(rettype, op, alt, cond, ...)			\
+	____PVOP_ALT_CALL(rettype, op, alt, cond, CLBR_ANY,		\
+			  PVOP_CALL_CLOBBERS, EXTRA_CLOBBERS,		\
+			  ##__VA_ARGS__)
+
 #define __PVOP_CALLEESAVE(rettype, op, ...)				\
 	____PVOP_CALL(rettype, op.func, CLBR_RET_REG,			\
 		      PVOP_CALLEE_CLOBBERS, , ##__VA_ARGS__)
 
+#define __PVOP_ALT_CALLEESAVE(rettype, op, alt, cond, ...)		\
+	____PVOP_ALT_CALL(rettype, op.func, alt, cond, CLBR_RET_REG,	\
+			  PVOP_CALLEE_CLOBBERS, , ##__VA_ARGS__)
+
+
 #define __PVOP_VCALL(op, ...)						\
 	(void)____PVOP_CALL(long, op, CLBR_ANY, PVOP_VCALL_CLOBBERS,	\
 		       VEXTRA_CLOBBERS, ##__VA_ARGS__)
 
+#define __PVOP_ALT_VCALL(op, alt, cond, ...)				\
+	(void)____PVOP_ALT_CALL(long, op, alt, cond, CLBR_ANY,		\
+			   PVOP_VCALL_CLOBBERS, VEXTRA_CLOBBERS,	\
+			   ##__VA_ARGS__)
+
 #define __PVOP_VCALLEESAVE(op, ...)					\
 	(void)____PVOP_CALL(long, op.func, CLBR_RET_REG,		\
 		      PVOP_VCALLEE_CLOBBERS, , ##__VA_ARGS__)
 
+#define __PVOP_ALT_VCALLEESAVE(op, alt, cond, ...)			\
+	(void)____PVOP_ALT_CALL(long, op.func, alt, cond, CLBR_RET_REG,	\
+			   PVOP_VCALLEE_CLOBBERS, , ##__VA_ARGS__)
+
 
 
 #define PVOP_CALL0(rettype, op)						\
 	__PVOP_CALL(rettype, op)
 #define PVOP_VCALL0(op)							\
 	__PVOP_VCALL(op)
+#define PVOP_ALT_CALL0(rettype, op, alt, cond)				\
+	__PVOP_ALT_CALL(rettype, op, alt, cond)
+#define PVOP_ALT_VCALL0(op, alt, cond)					\
+	__PVOP_ALT_VCALL(op, alt, cond)
 
 #define PVOP_CALLEE0(rettype, op)					\
 	__PVOP_CALLEESAVE(rettype, op)
 #define PVOP_VCALLEE0(op)						\
 	__PVOP_VCALLEESAVE(op)
+#define PVOP_ALT_CALLEE0(rettype, op, alt, cond)			\
+	__PVOP_ALT_CALLEESAVE(rettype, op, alt, cond)
+#define PVOP_ALT_VCALLEE0(op, alt, cond)				\
+	__PVOP_ALT_VCALLEESAVE(op, alt, cond)
 
 
 #define PVOP_CALL1(rettype, op, arg1)					\
 	__PVOP_CALL(rettype, op, PVOP_CALL_ARG1(arg1))
 #define PVOP_VCALL1(op, arg1)						\
 	__PVOP_VCALL(op, PVOP_CALL_ARG1(arg1))
+#define PVOP_ALT_VCALL1(op, arg1, alt, cond)				\
+	__PVOP_ALT_VCALL(op, alt, cond, PVOP_CALL_ARG1(arg1))
 
 #define PVOP_CALLEE1(rettype, op, arg1)					\
 	__PVOP_CALLEESAVE(rettype, op, PVOP_CALL_ARG1(arg1))
 #define PVOP_VCALLEE1(op, arg1)						\
 	__PVOP_VCALLEESAVE(op, PVOP_CALL_ARG1(arg1))
+#define PVOP_ALT_CALLEE1(rettype, op, arg1, alt, cond)			\
+	__PVOP_ALT_CALLEESAVE(rettype, op, alt, cond, PVOP_CALL_ARG1(arg1))
+#define PVOP_ALT_VCALLEE1(op, arg1, alt, cond)				\
+	__PVOP_ALT_VCALLEESAVE(op, alt, cond, PVOP_CALL_ARG1(arg1))
 
 
 #define PVOP_CALL2(rettype, op, arg1, arg2)				\
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 09:32:29 2020
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Peter Zijlstra <peterz@infradead.org>,
	Josh Poimboeuf <jpoimboe@redhat.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v3 12/15] objtool: Alternatives vs ORC, the hard way
Date: Thu, 17 Dec 2020 10:31:30 +0100
Message-Id: <20201217093133.1507-13-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201217093133.1507-1-jgross@suse.com>
References: <20201217093133.1507-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Peter Zijlstra <peterz@infradead.org>

Alternatives pose an interesting problem for unwinders, because from
the unwinder's PoV we're just executing instructions; it has no idea
the text has been modified, nor any way of retrieving what it was
modified with.

Therefore the stance has been that alternatives must not change stack
state, as encoded by commit: 7117f16bf460 ("objtool: Fix ORC vs
alternatives"). This obviously guarantees that whatever actual
instructions end up in the text, the unwind information is correct.

However, there is one additional source of text patching that isn't
currently visible to objtool: paravirt immediate patching. And it
turns out one of these violates the rule.

As part of cleaning that up, the unfortunate reality is that objtool
now has to deal with alternatives that modify unwind state, validate
that the combination is valid, and generate ORC data to match.

The problem is that a single instance of unwind information (ORC) must
capture and correctly unwind all alternatives. Since the trivially
correct mandate is out, implement the straightforward brute-force
approach:

 1) Generate CFI information for each alternative.

 2) Unwind every alternative against the merge-sort of the previously
    generated CFI information -- O(n^2).

 3) For any possible conflict: yell.

 4) Generate ORC data with the merge-sort.

Specifically for 3 there are two possible classes of conflicts:

 - the merge-sort itself could find conflicting CFI for the same
   offset.

 - the unwind can fail with the merged CFI.

In particular, this allows us to deal with:

        Alt1                    Alt2                    Alt3

 0x00   CALL *pv_ops.save_fl    CALL xen_save_fl        PUSHF
 0x01                                                   POP %RAX
 0x02                                                   NOP
 ...
 0x05                           NOP
 ...
 0x07   <insn>

The unwind information for offset 0x00 is identical for all 3
alternatives. Similarly, offset 0x05 and higher are identical (and the
same as 0x00). However, offset 0x01 has deviating CFI, but that is
only relevant for Alt3; neither of the other alternative instruction
streams will ever hit that offset.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
V3:
- new patch; there is still an ongoing discussion about whether this
  patch could be made simpler, but I'm including it here nevertheless,
  as some solution in objtool is required for the following patches of
  the series.
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/objtool/check.c   | 180 ++++++++++++++++++++++++++++++++++++----
 tools/objtool/check.h   |   5 ++
 tools/objtool/orc_gen.c | 178 +++++++++++++++++++++++++--------------
 3 files changed, 289 insertions(+), 74 deletions(-)

diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index c6ab44543c92..2d70766af857 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -1090,6 +1090,32 @@ static int handle_group_alt(struct objtool_file *file,
 		return -1;
 	}
 
+	/*
+	 * Add the filler NOP, required for alternative CFI.
+	 */
+	if (special_alt->group && special_alt->new_len < special_alt->orig_len) {
+		struct instruction *nop = malloc(sizeof(*nop));
+		if (!nop) {
+			WARN("malloc failed");
+			return -1;
+		}
+		memset(nop, 0, sizeof(*nop));
+		INIT_LIST_HEAD(&nop->alts);
+		INIT_LIST_HEAD(&nop->stack_ops);
+		init_cfi_state(&nop->cfi);
+
+		nop->sec = last_new_insn->sec;
+		nop->ignore = last_new_insn->ignore;
+		nop->func = last_new_insn->func;
+		nop->alt_group = alt_group;
+		nop->offset = last_new_insn->offset + last_new_insn->len;
+		nop->type = INSN_NOP;
+		nop->len = special_alt->orig_len - special_alt->new_len;
+
+		list_add(&nop->list, &last_new_insn->list);
+		last_new_insn = nop;
+	}
+
 	if (fake_jump)
 		list_add(&fake_jump->list, &last_new_insn->list);
 
@@ -2190,18 +2216,12 @@ static int handle_insn_ops(struct instruction *insn, struct insn_state *state)
 	struct stack_op *op;
 
 	list_for_each_entry(op, &insn->stack_ops, list) {
-		struct cfi_state old_cfi = state->cfi;
 		int res;
 
 		res = update_cfi_state(insn, &state->cfi, op);
 		if (res)
 			return res;
 
-		if (insn->alt_group && memcmp(&state->cfi, &old_cfi, sizeof(struct cfi_state))) {
-			WARN_FUNC("alternative modifies stack", insn->sec, insn->offset);
-			return -1;
-		}
-
 		if (op->dest.type == OP_DEST_PUSHF) {
 			if (!state->uaccess_stack) {
 				state->uaccess_stack = 1;
@@ -2399,19 +2419,137 @@ static int validate_return(struct symbol *func, struct instruction *insn, struct
  * unreported (because they're NOPs), such holes would result in CFI_UNDEFINED
  * states which then results in ORC entries, which we just said we didn't want.
  *
- * Avoid them by copying the CFI entry of the first instruction into the whole
- * alternative.
+ * Avoid them by copying the CFI entry of the first instruction into the hole.
  */
-static void fill_alternative_cfi(struct objtool_file *file, struct instruction *insn)
+static void __fill_alt_cfi(struct objtool_file *file, struct instruction *insn)
 {
 	struct instruction *first_insn = insn;
 	int alt_group = insn->alt_group;
 
-	sec_for_each_insn_continue(file, insn) {
+	sec_for_each_insn_from(file, insn) {
 		if (insn->alt_group != alt_group)
 			break;
-		insn->cfi = first_insn->cfi;
+
+		if (!insn->visited)
+			insn->cfi = first_insn->cfi;
+	}
+}
+
+static void fill_alt_cfi(struct objtool_file *file, struct instruction *alt_insn)
+{
+	struct alternative *alt;
+
+	__fill_alt_cfi(file, alt_insn);
+
+	list_for_each_entry(alt, &alt_insn->alts, list)
+		__fill_alt_cfi(file, alt->insn);
+}
+
+static struct instruction *
+__find_unwind(struct objtool_file *file,
+	      struct instruction *insn, unsigned long offset)
+{
+	int alt_group = insn->alt_group;
+	struct instruction *next;
+	unsigned long off = 0;
+
+	while ((off + insn->len) <= offset) {
+		next = next_insn_same_sec(file, insn);
+		if (next && next->alt_group != alt_group)
+			next = NULL;
+
+		if (!next)
+			break;
+
+		off += insn->len;
+		insn = next;
 	}
+
+	return insn;
+}
+
+struct instruction *
+find_alt_unwind(struct objtool_file *file,
+		struct instruction *alt_insn, unsigned long offset)
+{
+	struct instruction *fit;
+	struct alternative *alt;
+	unsigned long fit_off;
+
+	fit = __find_unwind(file, alt_insn, offset);
+	fit_off = (fit->offset - alt_insn->offset);
+
+	list_for_each_entry(alt, &alt_insn->alts, list) {
+		struct instruction *x;
+		unsigned long x_off;
+
+		x = __find_unwind(file, alt->insn, offset);
+		x_off = (x->offset - alt->insn->offset);
+
+		if (fit_off < x_off) {
+			fit = x;
+			fit_off = x_off;
+
+		} else if (fit_off == x_off &&
+			   memcmp(&fit->cfi, &x->cfi, sizeof(struct cfi_state))) {
+
+			char *_str1 = offstr(fit->sec, fit->offset);
+			char *_str2 = offstr(x->sec, x->offset);
+			WARN("%s: equal-offset incompatible alternative: %s\n", _str1, _str2);
+			free(_str1);
+			free(_str2);
+			return fit;
+		}
+	}
+
+	return fit;
+}
+
+static int __validate_unwind(struct objtool_file *file,
+			     struct instruction *alt_insn,
+			     struct instruction *insn)
+{
+	int alt_group = insn->alt_group;
+	struct instruction *unwind;
+	unsigned long offset = 0;
+
+	sec_for_each_insn_from(file, insn) {
+		if (insn->alt_group != alt_group)
+			break;
+
+		unwind = find_alt_unwind(file, alt_insn, offset);
+
+		if (memcmp(&insn->cfi, &unwind->cfi, sizeof(struct cfi_state))) {
+
+			char *_str1 = offstr(insn->sec, insn->offset);
+			char *_str2 = offstr(unwind->sec, unwind->offset);
+			WARN("%s: unwind incompatible alternative: %s (%ld)\n",
+			     _str1, _str2, offset);
+			free(_str1);
+			free(_str2);
+			return 1;
+		}
+
+		offset += insn->len;
+	}
+
+	return 0;
+}
+
+static int validate_alt_unwind(struct objtool_file *file,
+			       struct instruction *alt_insn)
+{
+	struct alternative *alt;
+
+	if (__validate_unwind(file, alt_insn, alt_insn))
+		return 1;
+
+	list_for_each_entry(alt, &alt_insn->alts, list) {
+		if (__validate_unwind(file, alt_insn, alt->insn))
+			return 1;
+	}
+
+	return 0;
 }
 
 /*
@@ -2423,9 +2561,10 @@ static void fill_alternative_cfi(struct objtool_file *file, struct instruction *
 static int validate_branch(struct objtool_file *file, struct symbol *func,
 			   struct instruction *insn, struct insn_state state)
 {
+	struct instruction *next_insn, *alt_insn = NULL;
 	struct alternative *alt;
-	struct instruction *next_insn;
 	struct section *sec;
+	int alt_group = 0;
 	u8 visited;
 	int ret;
 
@@ -2480,8 +2619,10 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
 				}
 			}
 
-			if (insn->alt_group)
-				fill_alternative_cfi(file, insn);
+			if (insn->alt_group) {
+				alt_insn = insn;
+				alt_group = insn->alt_group;
+			}
 
 			if (skip_orig)
 				return 0;
@@ -2613,6 +2754,17 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
 		}
 
 		insn = next_insn;
+
+		if (alt_insn && insn->alt_group != alt_group) {
+			alt_insn->alt_end = insn;
+
+			fill_alt_cfi(file, alt_insn);
+
+			if (validate_alt_unwind(file, alt_insn))
+				return 1;
+
+			alt_insn = NULL;
+		}
 	}
 
 	return 0;
diff --git a/tools/objtool/check.h b/tools/objtool/check.h
index 5ec00a4b891b..b3215aa4bcdd 100644
--- a/tools/objtool/check.h
+++ b/tools/objtool/check.h
@@ -40,6 +40,7 @@ struct instruction {
 	struct instruction *first_jump_src;
 	struct reloc *jump_table;
 	struct list_head alts;
+	struct instruction *alt_end;
 	struct symbol *func;
 	struct list_head stack_ops;
 	struct cfi_state cfi;
@@ -54,6 +55,10 @@ static inline bool is_static_jump(struct instruction *insn)
 	       insn->type == INSN_JUMP_UNCONDITIONAL;
 }
 
+struct instruction *
+find_alt_unwind(struct objtool_file *file,
+		struct instruction *alt_insn, unsigned long offset);
+
 struct instruction *find_insn(struct objtool_file *file,
 			      struct section *sec, unsigned long offset);
 
diff --git a/tools/objtool/orc_gen.c b/tools/objtool/orc_gen.c
index 235663b96adc..75afbaffae23 100644
--- a/tools/objtool/orc_gen.c
+++ b/tools/objtool/orc_gen.c
@@ -12,75 +12,86 @@
 #include "check.h"
 #include "warn.h"
 
+static int create_orc_insn(struct objtool_file *file, struct instruction *insn)
+{
+	struct orc_entry *orc = &insn->orc;
+	struct cfi_reg *cfa = &insn->cfi.cfa;
+	struct cfi_reg *bp = &insn->cfi.regs[CFI_BP];
+
+	orc->end = insn->cfi.end;
+
+	if (cfa->base == CFI_UNDEFINED) {
+		orc->sp_reg = ORC_REG_UNDEFINED;
+		return 0;
+	}
+
+	switch (cfa->base) {
+	case CFI_SP:
+		orc->sp_reg = ORC_REG_SP;
+		break;
+	case CFI_SP_INDIRECT:
+		orc->sp_reg = ORC_REG_SP_INDIRECT;
+		break;
+	case CFI_BP:
+		orc->sp_reg = ORC_REG_BP;
+		break;
+	case CFI_BP_INDIRECT:
+		orc->sp_reg = ORC_REG_BP_INDIRECT;
+		break;
+	case CFI_R10:
+		orc->sp_reg = ORC_REG_R10;
+		break;
+	case CFI_R13:
+		orc->sp_reg = ORC_REG_R13;
+		break;
+	case CFI_DI:
+		orc->sp_reg = ORC_REG_DI;
+		break;
+	case CFI_DX:
+		orc->sp_reg = ORC_REG_DX;
+		break;
+	default:
+		WARN_FUNC("unknown CFA base reg %d",
+			  insn->sec, insn->offset, cfa->base);
+		return -1;
+	}
+
+	switch(bp->base) {
+	case CFI_UNDEFINED:
+		orc->bp_reg = ORC_REG_UNDEFINED;
+		break;
+	case CFI_CFA:
+		orc->bp_reg = ORC_REG_PREV_SP;
+		break;
+	case CFI_BP:
+		orc->bp_reg = ORC_REG_BP;
+		break;
+	default:
+		WARN_FUNC("unknown BP base reg %d",
+			  insn->sec, insn->offset, bp->base);
+		return -1;
+	}
+
+	orc->sp_offset = cfa->offset;
+	orc->bp_offset = bp->offset;
+	orc->type = insn->cfi.type;
+
+	return 0;
+}
+
 int create_orc(struct objtool_file *file)
 {
 	struct instruction *insn;
 
 	for_each_insn(file, insn) {
-		struct orc_entry *orc = &insn->orc;
-		struct cfi_reg *cfa = &insn->cfi.cfa;
-		struct cfi_reg *bp = &insn->cfi.regs[CFI_BP];
+		int ret;
 
 		if (!insn->sec->text)
 			continue;
 
-		orc->end = insn->cfi.end;
-
-		if (cfa->base == CFI_UNDEFINED) {
-			orc->sp_reg = ORC_REG_UNDEFINED;
-			continue;
-		}
-
-		switch (cfa->base) {
-		case CFI_SP:
-			orc->sp_reg = ORC_REG_SP;
-			break;
-		case CFI_SP_INDIRECT:
-			orc->sp_reg = ORC_REG_SP_INDIRECT;
-			break;
-		case CFI_BP:
-			orc->sp_reg = ORC_REG_BP;
-			break;
-		case CFI_BP_INDIRECT:
-			orc->sp_reg = ORC_REG_BP_INDIRECT;
-			break;
-		case CFI_R10:
-			orc->sp_reg = ORC_REG_R10;
-			break;
-		case CFI_R13:
-			orc->sp_reg = ORC_REG_R13;
-			break;
-		case CFI_DI:
-			orc->sp_reg = ORC_REG_DI;
-			break;
-		case CFI_DX:
-			orc->sp_reg = ORC_REG_DX;
-			break;
-		default:
-			WARN_FUNC("unknown CFA base reg %d",
-				  insn->sec, insn->offset, cfa->base);
-			return -1;
-		}
-
-		switch(bp->base) {
-		case CFI_UNDEFINED:
-			orc->bp_reg = ORC_REG_UNDEFINED;
-			break;
-		case CFI_CFA:
-			orc->bp_reg = ORC_REG_PREV_SP;
-			break;
-		case CFI_BP:
-			orc->bp_reg = ORC_REG_BP;
-			break;
-		default:
-			WARN_FUNC("unknown BP base reg %d",
-				  insn->sec, insn->offset, bp->base);
-			return -1;
-		}
-
-		orc->sp_offset = cfa->offset;
-		orc->bp_offset = bp->offset;
-		orc->type = insn->cfi.type;
+		ret = create_orc_insn(file, insn);
+		if (ret)
+			return ret;
 	}
 
 	return 0;
@@ -166,6 +177,28 @@ int create_orc_sections(struct objtool_file *file)
 
 		prev_insn = NULL;
 		sec_for_each_insn(file, sec, insn) {
+
+			if (insn->alt_end) {
+				unsigned int offset, alt_len;
+				struct instruction *unwind;
+
+				alt_len = insn->alt_end->offset - insn->offset;
+				for (offset = 0; offset < alt_len; offset++) {
+					unwind = find_alt_unwind(file, insn, offset);
+					/* XXX: skipped earlier ! */
+					create_orc_insn(file, unwind);
+					if (!prev_insn ||
+					    memcmp(&unwind->orc, &prev_insn->orc,
+						   sizeof(struct orc_entry))) {
+						idx++;
+//						WARN_FUNC("ORC @ %d/%d", sec, insn->offset+offset, offset, alt_len);
+					}
+					prev_insn = unwind;
+				}
+
+				insn = insn->alt_end;
+			}
+
 			if (!prev_insn ||
 			    memcmp(&insn->orc, &prev_insn->orc,
 				   sizeof(struct orc_entry))) {
@@ -203,6 +236,31 @@ int create_orc_sections(struct objtool_file *file)
 
 		prev_insn = NULL;
 		sec_for_each_insn(file, sec, insn) {
+
+			if (insn->alt_end) {
+				unsigned int offset, alt_len;
+				struct instruction *unwind;
+
+				alt_len = insn->alt_end->offset - insn->offset;
+				for (offset = 0; offset < alt_len; offset++) {
+					unwind = find_alt_unwind(file, insn, offset);
+					if (!prev_insn ||
+					    memcmp(&unwind->orc, &prev_insn->orc,
+						   sizeof(struct orc_entry))) {
+
+						if (create_orc_entry(file->elf, u_sec, ip_relocsec, idx,
+								     insn->sec, insn->offset + offset,
+								     &unwind->orc))
+							return -1;
+
+						idx++;
+					}
+					prev_insn = unwind;
+				}
+
+				insn = insn->alt_end;
+			}
+
 			if (!prev_insn || memcmp(&insn->orc, &prev_insn->orc,
 						 sizeof(struct orc_entry))) {
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 09:32:35 2020
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v3 15/15] x86/paravirt: have only one paravirt patch function
Date: Thu, 17 Dec 2020 10:31:33 +0100
Message-Id: <20201217093133.1507-16-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201217093133.1507-1-jgross@suse.com>
References: <20201217093133.1507-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

There is no longer any need for different paravirt patch functions
for native and Xen. Eliminate native_patch() and rename
paravirt_patch_default() to paravirt_patch().

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- remove paravirt_patch_insns() (kernel test robot)
---
 arch/x86/include/asm/paravirt_types.h | 19 +------------------
 arch/x86/kernel/Makefile              |  3 +--
 arch/x86/kernel/alternative.c         |  2 +-
 arch/x86/kernel/paravirt.c            | 20 ++------------------
 arch/x86/kernel/paravirt_patch.c      | 11 -----------
 arch/x86/xen/enlighten_pv.c           |  1 -
 6 files changed, 5 insertions(+), 51 deletions(-)
 delete mode 100644 arch/x86/kernel/paravirt_patch.c

diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index f9e77046b61b..5c728eab9cd1 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -73,19 +73,6 @@ struct pv_info {
 	const char *name;
 };
 
-struct pv_init_ops {
-	/*
-	 * Patch may replace one of the defined code sequences with
-	 * arbitrary code, subject to the same register constraints.
-	 * This generally means the code is not free to clobber any
-	 * registers other than EAX.  The patch function should return
-	 * the number of bytes of code generated, as we nop pad the
-	 * rest in generic code.
-	 */
-	unsigned (*patch)(u8 type, void *insn_buff,
-			  unsigned long addr, unsigned len);
-} __no_randomize_layout;
-
 #ifdef CONFIG_PARAVIRT_XXL
 struct pv_lazy_ops {
 	/* Set deferred update mode, used for batching operations. */
@@ -281,7 +268,6 @@ struct pv_lock_ops {
  * number for each function using the offset which we use to indicate
  * what to patch. */
 struct paravirt_patch_template {
-	struct pv_init_ops	init;
 	struct pv_cpu_ops	cpu;
 	struct pv_irq_ops	irq;
 	struct pv_mmu_ops	mmu;
@@ -322,10 +308,7 @@ extern void (*paravirt_iret)(void);
 /* Simple instruction patching code. */
 #define NATIVE_LABEL(a,x,b) "\n\t.globl " a #x "_" #b "\n" a #x "_" #b ":\n\t"
 
-unsigned paravirt_patch_default(u8 type, void *insn_buff, unsigned long addr, unsigned len);
-unsigned paravirt_patch_insns(void *insn_buff, unsigned len, const char *start, const char *end);
-
-unsigned native_patch(u8 type, void *insn_buff, unsigned long addr, unsigned len);
+unsigned paravirt_patch(u8 type, void *insn_buff, unsigned long addr, unsigned len);
 
 int paravirt_disable_iospace(void);
 
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 68608bd892c0..61f52f95670b 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -35,7 +35,6 @@ KASAN_SANITIZE_sev-es.o					:= n
 KCSAN_SANITIZE := n
 
 OBJECT_FILES_NON_STANDARD_test_nx.o			:= y
-OBJECT_FILES_NON_STANDARD_paravirt_patch.o		:= y
 
 ifdef CONFIG_FRAME_POINTER
 OBJECT_FILES_NON_STANDARD_ftrace_$(BITS).o		:= y
@@ -122,7 +121,7 @@ obj-$(CONFIG_AMD_NB)		+= amd_nb.o
 obj-$(CONFIG_DEBUG_NMI_SELFTEST) += nmi_selftest.o
 
 obj-$(CONFIG_KVM_GUEST)		+= kvm.o kvmclock.o
-obj-$(CONFIG_PARAVIRT)		+= paravirt.o paravirt_patch.o
+obj-$(CONFIG_PARAVIRT)		+= paravirt.o
 obj-$(CONFIG_PARAVIRT_SPINLOCKS)+= paravirt-spinlocks.o
 obj-$(CONFIG_PARAVIRT_CLOCK)	+= pvclock.o
 obj-$(CONFIG_X86_PMEM_LEGACY_DEVICE) += pmem.o
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index abb481808811..1dbd6a934b66 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -621,7 +621,7 @@ void __init_or_module apply_paravirt(struct paravirt_patch_site *start,
 		BUG_ON(p->len > MAX_PATCH_LEN);
 		/* prep the buffer with the original instructions */
 		memcpy(insn_buff, p->instr, p->len);
-		used = pv_ops.init.patch(p->type, insn_buff, (unsigned long)p->instr, p->len);
+		used = paravirt_patch(p->type, insn_buff, (unsigned long)p->instr, p->len);
 
 		BUG_ON(used > p->len);
 
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index db6ae7f7c14e..b648eaf640f2 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -99,8 +99,8 @@ void __init native_pv_lock_init(void)
 		static_branch_disable(&virt_spin_lock_key);
 }
 
-unsigned paravirt_patch_default(u8 type, void *insn_buff,
-				unsigned long addr, unsigned len)
+unsigned int paravirt_patch(u8 type, void *insn_buff, unsigned long addr,
+			    unsigned int len)
 {
 	/*
 	 * Neat trick to map patch type back to the call within the
@@ -121,19 +121,6 @@ unsigned paravirt_patch_default(u8 type, void *insn_buff,
 	return ret;
 }
 
-unsigned paravirt_patch_insns(void *insn_buff, unsigned len,
-			      const char *start, const char *end)
-{
-	unsigned insn_len = end - start;
-
-	/* Alternative instruction is too large for the patch site and we cannot continue: */
-	BUG_ON(insn_len > len || start == NULL);
-
-	memcpy(insn_buff, start, insn_len);
-
-	return insn_len;
-}
-
 struct static_key paravirt_steal_enabled;
 struct static_key paravirt_steal_rq_enabled;
 
@@ -255,9 +242,6 @@ struct pv_info pv_info = {
 #define PTE_IDENT	__PV_IS_CALLEE_SAVE(_paravirt_ident_64)
 
 struct paravirt_patch_template pv_ops = {
-	/* Init ops. */
-	.init.patch		= native_patch,
-
 	/* Cpu ops. */
 	.cpu.io_delay		= native_io_delay,
 
diff --git a/arch/x86/kernel/paravirt_patch.c b/arch/x86/kernel/paravirt_patch.c
deleted file mode 100644
index 10543dcc8211..000000000000
--- a/arch/x86/kernel/paravirt_patch.c
+++ /dev/null
@@ -1,11 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-#include <linux/stringify.h>
-
-#include <asm/paravirt.h>
-#include <asm/asm-offsets.h>
-
-unsigned int native_patch(u8 type, void *insn_buff, unsigned long addr,
-			  unsigned int len)
-{
-	return paravirt_patch_default(type, insn_buff, addr, len);
-}
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 4716383c64a9..66f83de4d9e0 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -1218,7 +1218,6 @@ asmlinkage __visible void __init xen_start_kernel(void)
 
 	/* Install Xen paravirt ops */
 	pv_info = xen_info;
-	pv_ops.init.patch = paravirt_patch_default;
 	pv_ops.cpu = xen_cpu_ops;
 	paravirt_iret = xen_iret;
 	xen_init_irq_ops();
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 09:38:04 2020
Date: Thu, 17 Dec 2020 10:37:44 +0100
From: Sergio Lopez <slp@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
	qemu-block@nongnu.org, Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>, Fam Zheng <fam@euphon.net>,
	Eric Blake <eblake@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
	Max Reitz <mreitz@redhat.com>
Subject: Re: [PATCH v2 2/4] block: Avoid processing BDS twice in
 bdrv_set_aio_context_ignore()
Message-ID: <20201217093744.tg6ik73o45nidcs4@mhamilton>
References: <20201214170519.223781-1-slp@redhat.com>
 <20201214170519.223781-3-slp@redhat.com>
 <20201215121233.GD8185@merkur.fritz.box>
 <20201215131527.evpidxevevtfy54n@mhamilton>
 <20201215150119.GE8185@merkur.fritz.box>
 <20201215172337.w7vcn2woze2ejgco@mhamilton>
 <20201216123514.GD7548@merkur.fritz.box>
 <20201216145502.yiejsw47q5pfbzio@mhamilton>
 <20201216183102.GH7548@merkur.fritz.box>
MIME-Version: 1.0
In-Reply-To: <20201216183102.GH7548@merkur.fritz.box>

On Wed, Dec 16, 2020 at 07:31:02PM +0100, Kevin Wolf wrote:
> Am 16.12.2020 um 15:55 hat Sergio Lopez geschrieben:
> > On Wed, Dec 16, 2020 at 01:35:14PM +0100, Kevin Wolf wrote:
> > > Am 15.12.2020 um 18:23 hat Sergio Lopez geschrieben:
> > > > On Tue, Dec 15, 2020 at 04:01:19PM +0100, Kevin Wolf wrote:
> > > > > Am 15.12.2020 um 14:15 hat Sergio Lopez geschrieben:
> > > > > > On Tue, Dec 15, 2020 at 01:12:33PM +0100, Kevin Wolf wrote:
> > > > > > > Am 14.12.2020 um 18:05 hat Sergio Lopez geschrieben:
> > > > > > > > While processing the parents of a BDS, one of the parents m=
ay process
> > > > > > > > the child that's doing the tail recursion, which leads to a=
 BDS being
> > > > > > > > processed twice. This is especially problematic for the aio=
_notifiers,
> > > > > > > > as they might attempt to work on both the old and the new A=
IO
> > > > > > > > contexts.
> > > > > > > >=20
> > > > > > > > To avoid this, add the BDS pointer to the ignore list, and =
check the
> > > > > > > > child BDS pointer while iterating over the children.
> > > > > > > >=20
> > > > > > > > Signed-off-by: Sergio Lopez <slp@redhat.com>
> > > > > > >=20
> > > > > > > Ugh, so we get a mixed list of BdrvChild and BlockDriverState=
? :-/
> > > > > >=20
> > > > > > I know, it's effective but quite ugly...
> > > > > >=20
> > > > > > > What is the specific scenario where you saw this breaking? Di=
d you have
> > > > > > > multiple BdrvChild connections between two nodes so that we w=
ould go to
> > > > > > > the parent node through one and then come back to the child n=
ode through
> > > > > > > the other?
> > > > > >=20
> > > > > > I don't think this is a corner case. If the graph is walked top=
->down,
> > > > > > there's no problem since children are added to the ignore list =
before
> > > > > > getting processed, and siblings don't process each other. But, =
if the
> > > > > > graph is walked bottom->up, a BDS will start processing its par=
ents
> > > > > > without adding itself to the ignore list, so there's nothing
> > > > > > preventing them from processing it again.
> > > > >=20
> > > > > I don't understand. child is added to ignore before calling the p=
arent
> > > > > callback on it, so how can we come back through the same BdrvChil=
d?
> > > > >=20
> > > > >     QLIST_FOREACH(child, &bs->parents, next_parent) {
> > > > >         if (g_slist_find(*ignore, child)) {
> > > > >             continue;
> > > > >         }
> > > > >         assert(child->klass->set_aio_ctx);
> > > > >         *ignore =3D g_slist_prepend(*ignore, child);
> > > > >         child->klass->set_aio_ctx(child, new_context, ignore);
> > > > >     }
> > > >
> > > > Perhaps I'm missing something, but the way I understand it, that loop
> > > > is adding the BdrvChild pointer of each of its parents, but not the
> > > > BdrvChild pointer of the BDS that was passed as an argument to
> > > > b_s_a_c_i.
> > >
> > > Generally, the caller has already done that.
> > >
> > > In the theoretical case that it was the outermost call in the recursion
> > > and it hasn't (I couldn't find any such case), I think we should still
> > > call the callback for the passed BdrvChild like we currently do.
> > >
> > > > > You didn't dump the BdrvChild here. I think that would add some
> > > > > information on why we re-entered 0x555ee2fbf660. Maybe you can also add
> > > > > bs->drv->format_name for each node to make the scenario less abstract?
> > > >
> > > > I've generated another trace with more data:
> > > >
> > > > bs=0x565505e48030 (backup-top) enter
> > > > bs=0x565505e48030 (backup-top) processing children
> > > > bs=0x565505e48030 (backup-top) calling bsaci child=0x565505e42090 (child->bs=0x565505e5d420)
> > > > bs=0x565505e5d420 (qcow2) enter
> > > > bs=0x565505e5d420 (qcow2) processing children
> > > > bs=0x565505e5d420 (qcow2) calling bsaci child=0x565505e41ea0 (child->bs=0x565505e52060)
> > > > bs=0x565505e52060 (file) enter
> > > > bs=0x565505e52060 (file) processing children
> > > > bs=0x565505e52060 (file) processing parents
> > > > bs=0x565505e52060 (file) processing itself
> > > > bs=0x565505e5d420 (qcow2) processing parents
> > > > bs=0x565505e5d420 (qcow2) calling set_aio_ctx child=0x5655066a34d0
> > > > bs=0x565505fbf660 (qcow2) enter
> > > > bs=0x565505fbf660 (qcow2) processing children
> > > > bs=0x565505fbf660 (qcow2) calling bsaci child=0x565505e41d20 (child->bs=0x565506bc0c00)
> > > > bs=0x565506bc0c00 (file) enter
> > > > bs=0x565506bc0c00 (file) processing children
> > > > bs=0x565506bc0c00 (file) processing parents
> > > > bs=0x565506bc0c00 (file) processing itself
> > > > bs=0x565505fbf660 (qcow2) processing parents
> > > > bs=0x565505fbf660 (qcow2) calling set_aio_ctx child=0x565505fc7aa0
> > > > bs=0x565505fbf660 (qcow2) calling set_aio_ctx child=0x5655068b8510
> > > > bs=0x565505e48030 (backup-top) enter
> > > > bs=0x565505e48030 (backup-top) processing children
> > > > bs=0x565505e48030 (backup-top) calling bsaci child=0x565505e3c450 (child->bs=0x565505fbf660)
> > > > bs=0x565505fbf660 (qcow2) enter
> > > > bs=0x565505fbf660 (qcow2) processing children
> > > > bs=0x565505fbf660 (qcow2) processing parents
> > > > bs=0x565505fbf660 (qcow2) processing itself
> > > > bs=0x565505e48030 (backup-top) processing parents
> > > > bs=0x565505e48030 (backup-top) calling set_aio_ctx child=0x565505e402d0
> > > > bs=0x565505e48030 (backup-top) processing itself
> > > > bs=0x565505fbf660 (qcow2) processing itself
> > >
> > > Hm, is this complete? I see no "processing itself" for
> > > bs=0x565505e5d420. Or is this because it crashed before getting there?
> >=20
> > Yes, it crashes there. I forgot to mention that, sorry.
> >=20
> > > Anyway, trying to reconstruct the block graph with BdrvChild pointers
> > > annotated at the edges:
> > >
> > > BlockBackend
> > >       |
> > >       v
> > >   backup-top ------------------------+
> > >       |   |                          |
> > >       |   +-----------------------+  |
> > >       |            0x5655068b8510 |  | 0x565505e3c450
> > >       |                           |  |
> > >       | 0x565505e42090            |  |
> > >       v                           |  |
> > >     qcow2 ---------------------+  |  |
> > >       |                        |  |  |
> > >       | 0x565505e52060         |  |  | ??? [1]
> > >       |                        |  |  |  |
> > >       v         0x5655066a34d0 |  |  |  | 0x565505fc7aa0
> > >     file                       v  v  v  v
> > >                              qcow2 (backing)
> > >                                     |
> > >                                     | 0x565505e41d20
> > >                                     v
> > >                                   file
> > >
> > > [1] This seems to be a BdrvChild with a non-BDS parent. Probably a
> > >     BdrvChild directly owned by the backup job.
> > >
> > > > So it seems this is happening:
> > > >
> > > > backup-top (5e48030) <---------| (5)
> > > >    |    |                      |
> > > >    |    | (6) ------------> qcow2 (5fbf660)
> > > >    |                           ^    |
> > > >    |                       (3) |    | (4)
> > > >    |-> (1) qcow2 (5e5d420) -----    |-> file (6bc0c00)
> > > >    |
> > > >    |-> (2) file (5e52060)
> > > >
> > > > backup-top (5e48030), the BDS that was passed as argument in the first
> > > > bdrv_set_aio_context_ignore() call, is re-entered when qcow2 (5fbf660)
> > > > is processing its parents, and the latter is also re-entered when the
> > > > first one starts processing its children again.
> > >
> > > Yes, but look at the BdrvChild pointers, it is through different edges
> > > that we come back to the same node. No BdrvChild is used twice.
> > >
> > > If backup-top had added all of its children to the ignore list before
> > > calling into the overlay qcow2, the backing qcow2 wouldn't eventually
> > > have called back into backup-top.
> >=20
> > I've tested a patch that first adds every child to the ignore list,
> > and then processes those that weren't there before, as you suggested
> > on a previous email. With that, the offending qcow2 is not re-entered,
> > so we avoid the crash, but backup-top is still entered twice:
>
> I think we also need to add every parent to the ignore list before calling
> callbacks, though it doesn't look like this is the problem you're
> currently seeing.

I agree.
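As a sanity check of that ordering, here is a small self-contained sketch. It is not the QEMU code: Node and Edge are simplified stand-ins for BlockDriverState and BdrvChild, a fixed-size array stands in for the GSList ignore list, and the three-node graph mirrors the backup-top / overlay / backing shape from the traces. The walk claims every child and parent edge in the ignore list before recursing through any of them:

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_EDGES 8

typedef struct Node Node;

typedef struct Edge {
    Node *parent;  /* node owning this edge */
    Node *child;   /* like BdrvChild->bs */
} Edge;

struct Node {
    const char *name;
    Edge *children[MAX_EDGES];
    int n_children;
    Edge *parents[MAX_EDGES];
    int n_parents;
    int visits;    /* times "processing itself" ran */
};

/* flat stand-in for the GSList *ignore */
typedef struct {
    const Edge *seen[64];
    int n;
} Ignore;

static bool ignored(const Ignore *ig, const Edge *e)
{
    for (int i = 0; i < ig->n; i++) {
        if (ig->seen[i] == e) {
            return true;
        }
    }
    return false;
}

/* Phase 1 claims every edge of bs in the ignore list; phase 2 recurses
 * only through the edges claimed here, so no callback triggered deeper
 * in the recursion can come back through them. */
static void set_ctx_two_phase(Node *bs, Ignore *ig)
{
    Edge *todo[2 * MAX_EDGES];
    int n_todo = 0;

    for (int i = 0; i < bs->n_children; i++) {
        if (!ignored(ig, bs->children[i])) {
            ig->seen[ig->n++] = bs->children[i];
            todo[n_todo++] = bs->children[i];
        }
    }
    for (int i = 0; i < bs->n_parents; i++) {
        if (!ignored(ig, bs->parents[i])) {
            ig->seen[ig->n++] = bs->parents[i];
            todo[n_todo++] = bs->parents[i];
        }
    }
    for (int i = 0; i < n_todo; i++) {
        Edge *e = todo[i];
        /* a claimed parent edge recurses upward, a child edge downward */
        set_ctx_two_phase(e->child == bs ? e->parent : e->child, ig);
    }
    bs->visits++;
}

static void link_nodes(Edge *e, Node *parent, Node *child)
{
    e->parent = parent;
    e->child = child;
    parent->children[parent->n_children++] = e;
    child->parents[child->n_parents++] = e;
}
```

In this model the node the walk starts from is never re-entered, but a node reachable through two distinct edges is still processed once per edge, so re-entry through a different BdrvChild (e.g. one owned by the job) is not ruled out by the ordering alone.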

> > bs=0x560db0e3b030 (backup-top) enter
> > bs=0x560db0e3b030 (backup-top) processing children
> > bs=0x560db0e3b030 (backup-top) calling bsaci child=0x560db0e2f450 (child->bs=0x560db0fb2660)
> > bs=0x560db0fb2660 (qcow2) enter
> > bs=0x560db0fb2660 (qcow2) processing children
> > bs=0x560db0fb2660 (qcow2) calling bsaci child=0x560db0e34d20 (child->bs=0x560db1bb3c00)
> > bs=0x560db1bb3c00 (file) enter
> > bs=0x560db1bb3c00 (file) processing children
> > bs=0x560db1bb3c00 (file) processing parents
> > bs=0x560db1bb3c00 (file) processing itself
> > bs=0x560db0fb2660 (qcow2) calling bsaci child=0x560db16964d0 (child->bs=0x560db0e50420)
> > bs=0x560db0e50420 (qcow2) enter
> > bs=0x560db0e50420 (qcow2) processing children
> > bs=0x560db0e50420 (qcow2) calling bsaci child=0x560db0e34ea0 (child->bs=0x560db0e45060)
> > bs=0x560db0e45060 (file) enter
> > bs=0x560db0e45060 (file) processing children
> > bs=0x560db0e45060 (file) processing parents
> > bs=0x560db0e45060 (file) processing itself
> > bs=0x560db0e50420 (qcow2) processing parents
> > bs=0x560db0e50420 (qcow2) processing itself
> > bs=0x560db0fb2660 (qcow2) processing parents
> > bs=0x560db0fb2660 (qcow2) calling set_aio_ctx child=0x560db1672860
> > bs=0x560db0fb2660 (qcow2) calling set_aio_ctx child=0x560db1b14a20
> > bs=0x560db0e3b030 (backup-top) enter
> > bs=0x560db0e3b030 (backup-top) processing children
> > bs=0x560db0e3b030 (backup-top) processing parents
> > bs=0x560db0e3b030 (backup-top) calling set_aio_ctx child=0x560db0e332d0
> > bs=0x560db0e3b030 (backup-top) processing itself
> > bs=0x560db0fb2660 (qcow2) processing itself
> > bs=0x560db0e3b030 (backup-top) calling bsaci child=0x560db0e35090 (child->bs=0x560db0e50420)
> > bs=0x560db0e50420 (qcow2) enter
> > bs=0x560db0e3b030 (backup-top) processing parents
> > bs=0x560db0e3b030 (backup-top) processing itself
> >
> > I see that "blk_do_set_aio_context()" passes "blk->root" to
> > "bdrv_child_try_set_aio_context()", so it's already in the ignore list,
> > and I'm not sure what's happening here. Is backup-top referenced
> > from two different BdrvChilds, or is "blk->root" not pointing to
> > backup-top's BDS?
>
> The second time that backup-top is entered, it is not as the BDS of
> blk->root, but as the parent node of the overlay qcow2. Which is
> interesting, because last time it was still the backing qcow2, so the
> change did have _some_ effect.
>
> The part that I don't understand is why you still get the line with
> child=0x560db1b14a20, because when you add all children to the ignore
> list first, that should have been put into the ignore list as one of the
> first things in the whole process (when backup-top was first entered).
>
> Is 0x560db1b14a20 a BdrvChild that has backup-top as its opaque value,
> but isn't actually present in backup-top's bs->children?

Exactly, that line corresponds to this chunk of code:

<---- begin ---->
    QLIST_FOREACH(child, &bs->parents, next_parent) {
        if (g_slist_find(*ignore, child)) {
            continue;
        }
        assert(child->klass->set_aio_ctx);
        *ignore = g_slist_prepend(*ignore, child);
        fprintf(stderr, "bs=%p (%s) calling set_aio_ctx child=%p\n",
                bs, bs->drv->format_name, child);
        child->klass->set_aio_ctx(child, new_context, ignore);
    }
<---- end ---->

Do you think it's safe to re-enter backup-top, or should we look for a
way to avoid this?
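For reference, the re-entrancy itself can be reproduced with a self-contained toy model of the current edge-at-a-time walk. Again, Node and Edge are simplified stand-ins for BlockDriverState and BdrvChild (the three-node graph mirrors the backup-top / overlay / backing shape from the traces), not the real QEMU structures. The recursion terminates because each edge is claimed in the ignore list exactly once, but the starting node ends up being processed twice:

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_EDGES 8

typedef struct Node Node;

typedef struct Edge {
    Node *parent;  /* node owning this edge */
    Node *child;   /* like BdrvChild->bs */
} Edge;

struct Node {
    const char *name;
    Edge *children[MAX_EDGES];
    int n_children;
    Edge *parents[MAX_EDGES];
    int n_parents;
    int visits;    /* times "processing itself" ran */
};

/* flat stand-in for the GSList *ignore */
typedef struct {
    const Edge *seen[64];
    int n;
} Ignore;

static bool ignored(const Ignore *ig, const Edge *e)
{
    for (int i = 0; i < ig->n; i++) {
        if (ig->seen[i] == e) {
            return true;
        }
    }
    return false;
}

/* Claim each edge only right before recursing through it: a grandchild
 * processing its parents can then re-enter the root through an edge the
 * root has not reached yet in its own child loop. */
static void set_ctx(Node *bs, Ignore *ig)
{
    for (int i = 0; i < bs->n_children; i++) {
        Edge *e = bs->children[i];
        if (!ignored(ig, e)) {
            ig->seen[ig->n++] = e;
            set_ctx(e->child, ig);
        }
    }
    for (int i = 0; i < bs->n_parents; i++) {
        Edge *e = bs->parents[i];
        if (!ignored(ig, e)) {
            ig->seen[ig->n++] = e;
            set_ctx(e->parent, ig);
        }
    }
    bs->visits++;
}

static void link_nodes(Edge *e, Node *parent, Node *child)
{
    e->parent = parent;
    e->child = child;
    parent->children[parent->n_children++] = e;
    child->parents[child->n_parents++] = e;
}
```

So the number of times a node can be re-entered is bounded by the number of distinct edges leading to it; whether running the set_aio_ctx work more than once on the same BDS is actually safe is exactly the open question above.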

Thanks,
Sergio.




From xen-devel-bounces@lists.xenproject.org Thu Dec 17 10:26:45 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157603-mainreport@xen.org>
Subject: [linux-5.4 test] 157603: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 17 Dec 2020 10:26:25 +0000

flight 157603 linux-5.4 real [real]
flight 157634 linux-5.4 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157603/
http://logs.test-lab.xenproject.org/osstest/logs/157634/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157431

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157431
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157431
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157431
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157431
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157431
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157431
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157431
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157431
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157431
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157431
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157431
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                8a866bdbbac227a99b0b37e03679908642f58aec
baseline version:
 linux                2bff021f53b211386abad8cd661e6bb38d0fd524

Last test of basis   157431  2020-12-11 12:40:36 Z    5 days
Testing same since   157603  2020-12-16 10:11:52 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Morton <akpm@linux-foundation.org>
  Andy Lutomirski <luto@kernel.org>
  Arnd Bergmann <arnd@arndb.de>
  Arvind Sankar <nivedita@alum.mit.edu>
  Bean Huo <beanhuo@micron.com>
  Borislav Petkov <bp@suse.de>
  Can Guo <cang@codeaurora.org>
  Chris Chiu <chiu@endlessos.org>
  Coiby Xu <coiby.xu@gmail.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Fangrui Song <maskray@google.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Georgi Djakov <georgi.djakov@linaro.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Hans de Goede <hdegoede@redhat.com>
  Hao Si <si.hao@zte.com.cn>
  Heiko Stuebner <heiko@sntech.de>
  Jakub Kicinski <kuba@kernel.org>
  Johannes Berg <johannes.berg@intel.com>
  Jon Hunter <jonathanh@nvidia.com>
  Kalle Valo <kvalo@codeaurora.org>
  Li Yang <leoyang.li@nxp.com>
  Libo Chen <libo.chen@oracle.com>
  Lijun Pan <ljp@linux.ibm.com>
  Lin Chen <chen.lin5@zte.com.cn>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Luca Coelho <luciano.coelho@intel.com>
  Manasi Navare <manasi.d.navare@intel.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <maz@kernel.org>
  Mark Brown <broonie@kernel.org>
  Markus Reichl <m.reichl@fivetechno.de>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Verevkin <me@maxverevkin.tk>
  Michael Ellerman <mpe@ellerman.id.au>
  Miles Chen <miles.chen@mediatek.com>
  Minchan Kim <minchan@kernel.org>
  Mordechay Goodstein <mordechay.goodstein@intel.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Nick Desaulniers <ndesaulniers@google.com>
  Pankaj Sharma <pankj.sharma@samsung.com>
  Ran Wang <ran.wang_1@nxp.com>
  Randy Dunlap <rdunlap@infradead.org>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Sara Sharon <sara.sharon@intel.com>
  Sasha Levin <sashal@kernel.org>
  Scott Wood <oss@buserror.net>
  Shuah Khan <skhan@linuxfoundation.org>
  Shung-Hsi Yu <shung-hsi.yu@suse.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Thierry Reding <treding@nvidia.com>
  Thomas Gleixner <tglx@linutronix.de>
  Timo Witte <timo.witte@gmail.com>
  Tom Lendacky <thomas.lendacky@amd.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vineet Gupta <vgupta@synopsys.com>
  Xu Qiang <xuqiang36@huawei.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Zhan Liu <zliua@micron.com>
  Zhen Lei <thunder.leizhen@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1160 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 10:59:01 2020
Date: Thu, 17 Dec 2020 11:58:30 +0100
From: Kevin Wolf <kwolf@redhat.com>
To: Sergio Lopez <slp@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
	qemu-block@nongnu.org, Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>, Fam Zheng <fam@euphon.net>,
	Eric Blake <eblake@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
	Max Reitz <mreitz@redhat.com>
Subject: Re: [PATCH v2 2/4] block: Avoid processing BDS twice in
 bdrv_set_aio_context_ignore()
Message-ID: <20201217105830.GA12328@merkur.fritz.box>
References: <20201214170519.223781-1-slp@redhat.com>
 <20201214170519.223781-3-slp@redhat.com>
 <20201215121233.GD8185@merkur.fritz.box>
 <20201215131527.evpidxevevtfy54n@mhamilton>
 <20201215150119.GE8185@merkur.fritz.box>
 <20201215172337.w7vcn2woze2ejgco@mhamilton>
 <20201216123514.GD7548@merkur.fritz.box>
 <20201216145502.yiejsw47q5pfbzio@mhamilton>
 <20201216183102.GH7548@merkur.fritz.box>
 <20201217093744.tg6ik73o45nidcs4@mhamilton>
MIME-Version: 1.0
In-Reply-To: <20201217093744.tg6ik73o45nidcs4@mhamilton>

On 17.12.2020 at 10:37, Sergio Lopez wrote:
> On Wed, Dec 16, 2020 at 07:31:02PM +0100, Kevin Wolf wrote:
> > On 16.12.2020 at 15:55, Sergio Lopez wrote:
> > > On Wed, Dec 16, 2020 at 01:35:14PM +0100, Kevin Wolf wrote:
> > > > On 15.12.2020 at 18:23, Sergio Lopez wrote:
> > > > > On Tue, Dec 15, 2020 at 04:01:19PM +0100, Kevin Wolf wrote:
> > > > > > On 15.12.2020 at 14:15, Sergio Lopez wrote:
> > > > > > > On Tue, Dec 15, 2020 at 01:12:33PM +0100, Kevin Wolf wrote:
> > > > > > > > On 14.12.2020 at 18:05, Sergio Lopez wrote:
> > > > > > > > > While processing the parents of a BDS, one of the parents may process
> > > > > > > > > the child that's doing the tail recursion, which leads to a BDS being
> > > > > > > > > processed twice. This is especially problematic for the aio_notifiers,
> > > > > > > > > as they might attempt to work on both the old and the new AIO
> > > > > > > > > contexts.
> > > > > > > > >
> > > > > > > > > To avoid this, add the BDS pointer to the ignore list, and check the
> > > > > > > > > child BDS pointer while iterating over the children.
> > > > > > > > >
> > > > > > > > > Signed-off-by: Sergio Lopez <slp@redhat.com>
> > > > > > > >
> > > > > > > > Ugh, so we get a mixed list of BdrvChild and BlockDriverState? :-/
> > > > > > >
> > > > > > > I know, it's effective but quite ugly...
> > > > > > >=20
> > > > > > > > What is the specific scenario where you saw this breaking? =
Did you have
> > > > > > > > multiple BdrvChild connections between two nodes so that we=
 would go to
> > > > > > > > the parent node through one and then come back to the child=
 node through
> > > > > > > > the other?
> > > > > > >=20
> > > > > > > I don't think this is a corner case. If the graph is walked t=
op->down,
> > > > > > > there's no problem since children are added to the ignore lis=
t before
> > > > > > > getting processed, and siblings don't process each other. But=
, if the
> > > > > > > graph is walked bottom->up, a BDS will start processing its p=
arents
> > > > > > > without adding itself to the ignore list, so there's nothing
> > > > > > > preventing them from processing it again.
> > > > > >=20
> > > > > > I don't understand. child is added to ignore before calling the=
 parent
> > > > > > callback on it, so how can we come back through the same BdrvCh=
ild?
> > > > > >=20
> > > > > >     QLIST_FOREACH(child, &bs->parents, next_parent) {
> > > > > >         if (g_slist_find(*ignore, child)) {
> > > > > >             continue;
> > > > > >         }
> > > > > >         assert(child->klass->set_aio_ctx);
> > > > > >         *ignore = g_slist_prepend(*ignore, child);
> > > > > >         child->klass->set_aio_ctx(child, new_context, ignore);
> > > > > >     }
> > > > >
> > > > > Perhaps I'm missing something, but the way I understand it, that loop
> > > > > is adding the BdrvChild pointer of each of its parents, but not the
> > > > > BdrvChild pointer of the BDS that was passed as an argument to
> > > > > b_s_a_c_i.
> > > >
> > > > Generally, the caller has already done that.
> > > >
> > > > In the theoretical case that it was the outermost call in the recursion
> > > > and it hasn't (I couldn't find any such case), I think we should still
> > > > call the callback for the passed BdrvChild like we currently do.
> > > >
> > > > > > You didn't dump the BdrvChild here. I think that would add some
> > > > > > information on why we re-entered 0x555ee2fbf660. Maybe you can also add
> > > > > > bs->drv->format_name for each node to make the scenario less abstract?
> > > > >
> > > > > I've generated another trace with more data:
> > > > >
> > > > > bs=0x565505e48030 (backup-top) enter
> > > > > bs=0x565505e48030 (backup-top) processing children
> > > > > bs=0x565505e48030 (backup-top) calling bsaci child=0x565505e42090 (child->bs=0x565505e5d420)
> > > > > bs=0x565505e5d420 (qcow2) enter
> > > > > bs=0x565505e5d420 (qcow2) processing children
> > > > > bs=0x565505e5d420 (qcow2) calling bsaci child=0x565505e41ea0 (child->bs=0x565505e52060)
> > > > > bs=0x565505e52060 (file) enter
> > > > > bs=0x565505e52060 (file) processing children
> > > > > bs=0x565505e52060 (file) processing parents
> > > > > bs=0x565505e52060 (file) processing itself
> > > > > bs=0x565505e5d420 (qcow2) processing parents
> > > > > bs=0x565505e5d420 (qcow2) calling set_aio_ctx child=0x5655066a34d0
> > > > > bs=0x565505fbf660 (qcow2) enter
> > > > > bs=0x565505fbf660 (qcow2) processing children
> > > > > bs=0x565505fbf660 (qcow2) calling bsaci child=0x565505e41d20 (child->bs=0x565506bc0c00)
> > > > > bs=0x565506bc0c00 (file) enter
> > > > > bs=0x565506bc0c00 (file) processing children
> > > > > bs=0x565506bc0c00 (file) processing parents
> > > > > bs=0x565506bc0c00 (file) processing itself
> > > > > bs=0x565505fbf660 (qcow2) processing parents
> > > > > bs=0x565505fbf660 (qcow2) calling set_aio_ctx child=0x565505fc7aa0
> > > > > bs=0x565505fbf660 (qcow2) calling set_aio_ctx child=0x5655068b8510
> > > > > bs=0x565505e48030 (backup-top) enter
> > > > > bs=0x565505e48030 (backup-top) processing children
> > > > > bs=0x565505e48030 (backup-top) calling bsaci child=0x565505e3c450 (child->bs=0x565505fbf660)
> > > > > bs=0x565505fbf660 (qcow2) enter
> > > > > bs=0x565505fbf660 (qcow2) processing children
> > > > > bs=0x565505fbf660 (qcow2) processing parents
> > > > > bs=0x565505fbf660 (qcow2) processing itself
> > > > > bs=0x565505e48030 (backup-top) processing parents
> > > > > bs=0x565505e48030 (backup-top) calling set_aio_ctx child=0x565505e402d0
> > > > > bs=0x565505e48030 (backup-top) processing itself
> > > > > bs=0x565505fbf660 (qcow2) processing itself
> > > >
> > > > Hm, is this complete? I see no "processing itself" for
> > > > bs=0x565505e5d420. Or is this because it crashed before getting there?
> > >
> > > Yes, it crashes there. I forgot to mention that, sorry.
> > >
> > > > Anyway, trying to reconstruct the block graph with BdrvChild pointers
> > > > annotated at the edges:
> > > >
> > > > BlockBackend
> > > >       |
> > > >       v
> > > >   backup-top ------------------------+
> > > >       |   |                          |
> > > >       |   +-----------------------+  |
> > > >       |            0x5655068b8510 |  | 0x565505e3c450
> > > >       |                           |  |
> > > >       | 0x565505e42090            |  |
> > > >       v                           |  |
> > > >     qcow2 ---------------------+  |  |
> > > >       |                        |  |  |
> > > >       | 0x565505e52060         |  |  | ??? [1]
> > > >       |                        |  |  |  |
> > > >       v         0x5655066a34d0 |  |  |  | 0x565505fc7aa0
> > > >     file                       v  v  v  v
> > > >                              qcow2 (backing)
> > > >                                     |
> > > >                                     | 0x565505e41d20
> > > >                                     v
> > > >                                   file
> > > >
> > > > [1] This seems to be a BdrvChild with a non-BDS parent. Probably a
> > > >     BdrvChild directly owned by the backup job.
> > > >
> > > > > So it seems this is happening:
> > > > >
> > > > > backup-top (5e48030) <---------| (5)
> > > > >    |    |                      |
> > > > >    |    | (6) ------------> qcow2 (5fbf660)
> > > > >    |                           ^    |
> > > > >    |                       (3) |    | (4)
> > > > >    |-> (1) qcow2 (5e5d420) -----    |-> file (6bc0c00)
> > > > >    |
> > > > >    |-> (2) file (5e52060)
> > > > >
> > > > > backup-top (5e48030), the BDS that was passed as argument in the first
> > > > > bdrv_set_aio_context_ignore() call, is re-entered when qcow2 (5fbf660)
> > > > > is processing its parents, and the latter is also re-entered when the
> > > > > first one starts processing its children again.
> > > >
> > > > Yes, but look at the BdrvChild pointers, it is through different edges
> > > > that we come back to the same node. No BdrvChild is used twice.
> > > >
> > > > If backup-top had added all of its children to the ignore list before
> > > > calling into the overlay qcow2, the backing qcow2 wouldn't eventually
> > > > have called back into backup-top.
> > >
> > > I've tested a patch that first adds every child to the ignore list,
> > > and then processes those that weren't there before, as you suggested
> > > on a previous email. With that, the offending qcow2 is not re-entered,
> > > so we avoid the crash, but backup-top is still entered twice:
> >
> > I think we also need to add every parent to the ignore list before calling
> > callbacks, though it doesn't look like this is the problem you're
> > currently seeing.
>
> I agree.
>
> > > bs=0x560db0e3b030 (backup-top) enter
> > > bs=0x560db0e3b030 (backup-top) processing children
> > > bs=0x560db0e3b030 (backup-top) calling bsaci child=0x560db0e2f450 (child->bs=0x560db0fb2660)
> > > bs=0x560db0fb2660 (qcow2) enter
> > > bs=0x560db0fb2660 (qcow2) processing children
> > > bs=0x560db0fb2660 (qcow2) calling bsaci child=0x560db0e34d20 (child->bs=0x560db1bb3c00)
> > > bs=0x560db1bb3c00 (file) enter
> > > bs=0x560db1bb3c00 (file) processing children
> > > bs=0x560db1bb3c00 (file) processing parents
> > > bs=0x560db1bb3c00 (file) processing itself
> > > bs=0x560db0fb2660 (qcow2) calling bsaci child=0x560db16964d0 (child->bs=0x560db0e50420)
> > > bs=0x560db0e50420 (qcow2) enter
> > > bs=0x560db0e50420 (qcow2) processing children
> > > bs=0x560db0e50420 (qcow2) calling bsaci child=0x560db0e34ea0 (child->bs=0x560db0e45060)
> > > bs=0x560db0e45060 (file) enter
> > > bs=0x560db0e45060 (file) processing children
> > > bs=0x560db0e45060 (file) processing parents
> > > bs=0x560db0e45060 (file) processing itself
> > > bs=0x560db0e50420 (qcow2) processing parents
> > > bs=0x560db0e50420 (qcow2) processing itself
> > > bs=0x560db0fb2660 (qcow2) processing parents
> > > bs=0x560db0fb2660 (qcow2) calling set_aio_ctx child=0x560db1672860
> > > bs=0x560db0fb2660 (qcow2) calling set_aio_ctx child=0x560db1b14a20
> > > bs=0x560db0e3b030 (backup-top) enter
> > > bs=0x560db0e3b030 (backup-top) processing children
> > > bs=0x560db0e3b030 (backup-top) processing parents
> > > bs=0x560db0e3b030 (backup-top) calling set_aio_ctx child=0x560db0e332d0
> > > bs=0x560db0e3b030 (backup-top) processing itself
> > > bs=0x560db0fb2660 (qcow2) processing itself
> > > bs=0x560db0e3b030 (backup-top) calling bsaci child=0x560db0e35090 (child->bs=0x560db0e50420)
> > > bs=0x560db0e50420 (qcow2) enter
> > > bs=0x560db0e3b030 (backup-top) processing parents
> > > bs=0x560db0e3b030 (backup-top) processing itself
> > >
> > > I see that "blk_do_set_aio_context()" passes "blk->root" to
> > > "bdrv_child_try_set_aio_context()" so it's already in the ignore list,
> > > so I'm not sure what's happening here. Is backup-top referenced
> > > from two different BdrvChild or is "blk->root" not pointing to
> > > backup-top's BDS?
> >
> > The second time that backup-top is entered, it is not as the BDS of
> > blk->root, but as the parent node of the overlay qcow2. Which is
> > interesting, because last time it was still the backing qcow2, so the
> > change did have _some_ effect.
> >
> > The part that I don't understand is why you still get the line with
> > child=0x560db1b14a20, because when you add all children to the ignore
> > list first, that should have been put into the ignore list as one of the
> > first things in the whole process (when backup-top was first entered).
> >
> > Is 0x560db1b14a20 a BdrvChild that has backup-top as its opaque value,
> > but isn't actually present in backup-top's bs->children?
>
> Exactly, that line corresponds to this chunk of code:
>
> <---- begin ---->
>     QLIST_FOREACH(child, &bs->parents, next_parent) {
>         if (g_slist_find(*ignore, child)) {
>             continue;
>         }
>         assert(child->klass->set_aio_ctx);
>         *ignore = g_slist_prepend(*ignore, child);
>         fprintf(stderr, "bs=%p (%s) calling set_aio_ctx child=%p\n", bs, bs->drv->format_name, child);
>         child->klass->set_aio_ctx(child, new_context, ignore);
>     }
> <---- end ---->
>
> Do you think it's safe to re-enter backup-top, or should we look for a
> way to avoid this?

I think it should be avoided, but I don't understand why putting all
children of backup-top into the ignore list doesn't already avoid it. If
backup-top is in the parents list of qcow2, then qcow2 should be in the
children list of backup-top and therefore the BdrvChild should already
be in the ignore list.

The only way I can explain this is that backup-top and qcow2 have
different ideas about which BdrvChild objects exist that connect them.
Or that the graph changes between both places, but I don't see how that
could happen in bdrv_set_aio_context_ignore().

Kevin




From xen-devel-bounces@lists.xenproject.org Thu Dec 17 11:01:13 2020
Subject: Re: [PATCH v3 4/8] xen/hypfs: support dynamic hypfs nodes
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201209160956.32456-1-jgross@suse.com>
 <20201209160956.32456-5-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <beda1152-4cab-2ed2-bc76-e9125f805e3f@suse.com>
Date: Thu, 17 Dec 2020 12:01:09 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201209160956.32456-5-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.12.2020 17:09, Juergen Gross wrote:
> @@ -158,6 +159,30 @@ static void node_exit_all(void)
>          node_exit(*last);
>  }
>  
> +void *hypfs_alloc_dyndata_size(unsigned long size)
> +{
> +    unsigned int cpu = smp_processor_id();
> +
> +    ASSERT(per_cpu(hypfs_locked, cpu) != hypfs_unlocked);
> +    ASSERT(per_cpu(hypfs_dyndata, cpu) == NULL);
> +
> +    per_cpu(hypfs_dyndata, cpu) = xzalloc_bytes(size);
> +
> +    return per_cpu(hypfs_dyndata, cpu);
> +}
> +
> +void *hypfs_get_dyndata(void)
> +{
> +    ASSERT(this_cpu(hypfs_dyndata));
> +
> +    return this_cpu(hypfs_dyndata);
> +}
> +
> +void hypfs_free_dyndata(void)
> +{
> +    XFREE(this_cpu(hypfs_dyndata));
> +}

In all three cases, would an intermediate local variable perhaps
yield better generated code? (In hypfs_get_dyndata() this may be
less important because the 2nd use is only an ASSERT().)

> @@ -219,6 +244,12 @@ int hypfs_add_dir(struct hypfs_entry_dir *parent,
>      return ret;
>  }
>  
> +void hypfs_add_dyndir(struct hypfs_entry_dir *parent,
> +                      struct hypfs_entry_dir *template)
> +{
> +    template->e.parent = &parent->e;
> +}

I'm struggling with the direction here: This makes the template
point at the parent, but the parent will still have no
"knowledge" of its new templated children. I suppose that's how
it is meant to be, but maybe this could do with a comment, since
it's the opposite way of hypfs_add_dir()?

Also - does this mean parent may not also have further children,
templated or "normal"?

> @@ -177,6 +182,10 @@ struct hypfs_entry *hypfs_leaf_findentry(const struct hypfs_entry_dir *dir,
>  struct hypfs_entry *hypfs_dir_findentry(const struct hypfs_entry_dir *dir,
>                                          const char *name,
>                                          unsigned int name_len);
> +void *hypfs_alloc_dyndata_size(unsigned long size);
> +#define hypfs_alloc_dyndata(type) (type *)hypfs_alloc_dyndata_size(sizeof(type))

This wants an extra pair of parentheses.

As a minor point, I also wonder whether you really want the type
unsafe version to be easily usable. It would be possible to
largely "hide" it by having

void *hypfs_alloc_dyndata(unsigned long size);
#define hypfs_alloc_dyndata(type) ((type *)hypfs_alloc_dyndata(sizeof(type)))

Jan


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 11:24:37 2020
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201209160956.32456-1-jgross@suse.com>
 <20201209160956.32456-5-jgross@suse.com>
 <beda1152-4cab-2ed2-bc76-e9125f805e3f@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v3 4/8] xen/hypfs: support dynamic hypfs nodes
Message-ID: <4be1bda2-ca1a-1e81-6212-9a6a44af39da@suse.com>
Date: Thu, 17 Dec 2020 12:24:24 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <beda1152-4cab-2ed2-bc76-e9125f805e3f@suse.com>

On 17.12.20 12:01, Jan Beulich wrote:
> On 09.12.2020 17:09, Juergen Gross wrote:
>> @@ -158,6 +159,30 @@ static void node_exit_all(void)
>>           node_exit(*last);
>>   }
>>  =20
>> +void *hypfs_alloc_dyndata_size(unsigned long size)
>> +{
>> +    unsigned int cpu =3D smp_processor_id();
>> +
>> +    ASSERT(per_cpu(hypfs_locked, cpu) !=3D hypfs_unlocked);
>> +    ASSERT(per_cpu(hypfs_dyndata, cpu) =3D=3D NULL);
>> +
>> +    per_cpu(hypfs_dyndata, cpu) =3D xzalloc_bytes(size);
>> +
>> +    return per_cpu(hypfs_dyndata, cpu);
>> +}
>> +
>> +void *hypfs_get_dyndata(void)
>> +{
>> +    ASSERT(this_cpu(hypfs_dyndata));
>> +
>> +    return this_cpu(hypfs_dyndata);
>> +}
>> +
>> +void hypfs_free_dyndata(void)
>> +{
>> +    XFREE(this_cpu(hypfs_dyndata));
>> +}
>
> In all three cases, would an intermediate local variable perhaps
> yield better generated code? (In hypfs_get_dyndata() this may be
> less important because the 2nd use is only an ASSERT().)

Okay.
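For illustration only, a userspace model of that local-variable shape (the per-CPU variable is reduced to a single static pointer, and ASSERT()/xzalloc_bytes() are replaced by assert()/calloc(); everything beyond the names quoted above is hypothetical):

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for the real per-CPU pointer; single-CPU model. */
static void *hypfs_dyndata;

void *hypfs_alloc_dyndata_size(unsigned long size)
{
    void *data;

    assert(hypfs_dyndata == NULL);

    data = calloc(1, size);
    hypfs_dyndata = data;

    return data;  /* returned via the local, avoiding a second per-CPU read */
}

void *hypfs_get_dyndata(void)
{
    void *data = hypfs_dyndata;  /* single read of the per-CPU slot */

    assert(data);

    return data;
}

void hypfs_free_dyndata(void)
{
    free(hypfs_dyndata);
    hypfs_dyndata = NULL;
}
```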

>
>> @@ -219,6 +244,12 @@ int hypfs_add_dir(struct hypfs_entry_dir *parent,
>>       return ret;
>>   }
>>
>> +void hypfs_add_dyndir(struct hypfs_entry_dir *parent,
>> +                      struct hypfs_entry_dir *template)
>> +{
>> +    template->e.parent = &parent->e;
>> +}
>
> I'm struggling with the direction here: This makes the template
> point at the parent, but the parent will still have no
> "knowledge" of its new templated children. I suppose that's how
> it is meant to be, but maybe this could do with a comment, since
> it's the opposite way of hypfs_add_dir()?

I'll add a comment.
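Roughly, something like the following, with the structures stubbed down to the one field used here (these are not the actual Xen definitions):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-ins for the hypfs structures, for illustration only. */
struct hypfs_entry {
    struct hypfs_entry *parent;
};

struct hypfs_entry_dir {
    struct hypfs_entry e;
};

/*
 * Unlike hypfs_add_dir(), which links a new child into the parent's
 * list of entries, this only records the parent in the template:
 * templated children are generated on demand, so the parent holds no
 * reference to them.
 */
void hypfs_add_dyndir(struct hypfs_entry_dir *parent,
                      struct hypfs_entry_dir *template)
{
    template->e.parent = &parent->e;
}
```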

>
> Also - does this mean parent may not also have further children,
> templated or "normal"?

No, the related read and findentry functions just need to cover that
case, e.g. by calling multiple sub-functions.

>
>> @@ -177,6 +182,10 @@ struct hypfs_entry *hypfs_leaf_findentry(const struct hypfs_entry_dir *dir,
>>   struct hypfs_entry *hypfs_dir_findentry(const struct hypfs_entry_dir *dir,
>>                                           const char *name,
>>                                           unsigned int name_len);
>> +void *hypfs_alloc_dyndata_size(unsigned long size);
>> +#define hypfs_alloc_dyndata(type) (type *)hypfs_alloc_dyndata_size(sizeof(type))
>
> This wants an extra pair of parentheses.

Okay.

>
> As a minor point, I also wonder whether you really want the type
> unsafe version to be easily usable. It would be possible to
> largely "hide" it by having
>
> void *hypfs_alloc_dyndata(unsigned long size);
> #define hypfs_alloc_dyndata(type) ((type *)hypfs_alloc_dyndata(sizeof(type)))

Yes, will change.
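As a compilable illustration of why the trick works: a function-like macro is not re-expanded inside its own replacement, so the typed macro can share the raw allocator's name and thereby "hide" the type-unsafe call (the calloc() body here is just a placeholder for the real allocator):

```c
#include <assert.h>
#include <stdlib.h>

void *hypfs_alloc_dyndata(unsigned long size);
/* Same-name macro: the inner use is not re-expanded during rescanning,
 * so it resolves to the function; a direct call passing a size no
 * longer compiles, as the argument is cast as a type name. */
#define hypfs_alloc_dyndata(type) ((type *)hypfs_alloc_dyndata(sizeof(type)))

/* Parenthesizing the name suppresses the function-like macro here. */
void *(hypfs_alloc_dyndata)(unsigned long size)
{
    return calloc(1, size);  /* placeholder for the real allocator */
}
```

A caller then writes e.g. `int *p = hypfs_alloc_dyndata(int);`.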


Juergen

--------------2DA53DB95359E2404CC4A16D
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------2DA53DB95359E2404CC4A16D--

--619p0ZPhdHxSjovfHG40hyz2QLAjN5mhr--

--3byKQ92V0fZYiP4p59zikbCv8wwTsmUOk
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/bP+gFAwAAAAAACgkQsN6d1ii/Ey/c
XQf/VmTTWXQ1hnUoTh5Od67AZC+vnmhPE29GEsZ8HoXCLIjgenFzVeZtME6bYKAs9XLSBCv9v7Dp
tonRCj4qb7Zvf/cinvtSQQWKmu4SZGzbmmzeKIS9En3ZxTFNwAev7oZUTA3YzlkksAw7rCbA2HKd
FCR43hOvk0U+phdMC9Dexny434J5dWpGX6c2+pDgq76ohKZEVwG1AKB6aSOUs9FmtdvqAljeHw+L
SNPHOvM3z9cE/c/Lzs5wJBS4DTD5Y9+G0sMZLoec+X/Gsda5SE/USJO689aULqErV4T7s/3dRs0F
5U+xQ1CclVGEhQjjh4mGjf4Eeh57GHHf+zR7dPMyDA==
=yxgV
-----END PGP SIGNATURE-----

--3byKQ92V0fZYiP4p59zikbCv8wwTsmUOk--


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 11:28:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 11:28:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55871.97463 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kprSR-0003EU-QC; Thu, 17 Dec 2020 11:28:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55871.97463; Thu, 17 Dec 2020 11:28:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kprSR-0003EN-MH; Thu, 17 Dec 2020 11:28:31 +0000
Received: by outflank-mailman (input) for mailman id 55871;
 Thu, 17 Dec 2020 11:28:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8YGc=FV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kprSP-0003EI-R7
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 11:28:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5b58acd1-4d25-4eab-9f96-c6f2c4fb68cf;
 Thu, 17 Dec 2020 11:28:29 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 53BD1AC7B;
 Thu, 17 Dec 2020 11:28:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5b58acd1-4d25-4eab-9f96-c6f2c4fb68cf
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608204508; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=byGWWpcSVKqbTQcQ3wglN0nxKpOX6GFjlcWDjhuWRCA=;
	b=BMHSblRqwWT0JsPaZVrZ2j9yy9Hx0qq2aeixNTEDjjapT64dHfmakiSrqsKS3stIC0wsQP
	7xOAnXfF5JSwEXlJt3Ps64zSudxtCE8JDoOGcLwPNvXLMupYvyZ/cW8V9MnGX79WMc1n6a
	59MzIOcbm6vqZEKFV6RWem122261rU8=
Subject: Re: [PATCH v3 5/8] xen/hypfs: add support for id-based dynamic
 directories
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201209160956.32456-1-jgross@suse.com>
 <20201209160956.32456-6-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2894a231-9150-7c09-cc5c-7ef52087acf5@suse.com>
Date: Thu, 17 Dec 2020 12:28:28 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201209160956.32456-6-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.12.2020 17:09, Juergen Gross wrote:
> +static const struct hypfs_entry *hypfs_dyndir_enter(
> +    const struct hypfs_entry *entry)
> +{
> +    const struct hypfs_dyndir_id *data;
> +
> +    data = hypfs_get_dyndata();
> +
> +    /* Use template with original enter function. */
> +    return data->template->e.funcs->enter(&data->template->e);
> +}

At the example of this (applies to other uses as well): I realize
hypfs_get_dyndata() asserts that the pointer is non-NULL, but
according to the bottom of ./CODING_STYLE this may not be enough
when considering the implications of a NULL deref in the context
of a PV guest. Even this living behind a sysctl doesn't really
help, both because via XSM not fully privileged domains can be
granted access, and because speculation may still occur all the
way into here. (I'll send a patch to address the latter aspect in
a few minutes.) While likely we have numerous existing examples
with similar problems, I guess in new code we'd better be as
defensive as possible.

> +/*
> + * Fill dyndata with a dynamically generated directory based on a template
> + * and a numerical id.
> + * Needs to be kept in sync with hypfs_read_dyndir_id_entry() regarding the
> + * name generated.
> + */
> +struct hypfs_entry *hypfs_gen_dyndir_id_entry(
> +    const struct hypfs_entry_dir *template, unsigned int id, void *data)
> +{

s/directory/entry/ in the comment (and, as I realize only now,
then also for hypfs_read_dyndir_id_entry())?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 11:33:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 11:33:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55875.97475 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kprWp-0004Ct-Bx; Thu, 17 Dec 2020 11:33:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55875.97475; Thu, 17 Dec 2020 11:33:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kprWp-0004Cm-8O; Thu, 17 Dec 2020 11:33:03 +0000
Received: by outflank-mailman (input) for mailman id 55875;
 Thu, 17 Dec 2020 11:33:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gjir=FV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kprWn-0004Ch-LY
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 11:33:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 62190456-6a27-4ad0-85e0-fe3ec6325ab6;
 Thu, 17 Dec 2020 11:33:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B7FBAB740;
 Thu, 17 Dec 2020 11:32:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 62190456-6a27-4ad0-85e0-fe3ec6325ab6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608204779; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=1uGNf6DVjBIWoNS7RqfmaXCqdOqych2XEQKJZFo6jyg=;
	b=nUUD8xFxhUvWcs/Z9FVh94BZEbJAlfH1b+SvCwp5EXu4Rr212r7EQZDRCSiVtSnp0T5EN7
	F4j5myCPXc35g69/yF8qgxRHs37qx5YOmgpVkFoBuFrQjT7DiUMrR60iLcnAItd1tXhGvU
	xu5kfzGju4yc0+kmc6tWB4ATZAPkpQc=
Subject: Re: [PATCH v3 5/8] xen/hypfs: add support for id-based dynamic
 directories
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201209160956.32456-1-jgross@suse.com>
 <20201209160956.32456-6-jgross@suse.com>
 <2894a231-9150-7c09-cc5c-7ef52087acf5@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <d4c408eb-08d8-42a8-0c0a-6580fce0e181@suse.com>
Date: Thu, 17 Dec 2020 12:32:58 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <2894a231-9150-7c09-cc5c-7ef52087acf5@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="qCDbGZsZmV0aaaRIoV0quLwOF5oMuhMrZ"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--qCDbGZsZmV0aaaRIoV0quLwOF5oMuhMrZ
Content-Type: multipart/mixed; boundary="DqE4c3xl7ypjgWc1CxhrPue5l3uUOF89Y";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <d4c408eb-08d8-42a8-0c0a-6580fce0e181@suse.com>
Subject: Re: [PATCH v3 5/8] xen/hypfs: add support for id-based dynamic
 directories
References: <20201209160956.32456-1-jgross@suse.com>
 <20201209160956.32456-6-jgross@suse.com>
 <2894a231-9150-7c09-cc5c-7ef52087acf5@suse.com>
In-Reply-To: <2894a231-9150-7c09-cc5c-7ef52087acf5@suse.com>

--DqE4c3xl7ypjgWc1CxhrPue5l3uUOF89Y
Content-Type: multipart/mixed;
 boundary="------------6A4BCA8D705FCF7E7E6657AF"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------6A4BCA8D705FCF7E7E6657AF
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 17.12.20 12:28, Jan Beulich wrote:
> On 09.12.2020 17:09, Juergen Gross wrote:
>> +static const struct hypfs_entry *hypfs_dyndir_enter(
>> +    const struct hypfs_entry *entry)
>> +{
>> +    const struct hypfs_dyndir_id *data;
>> +
>> +    data = hypfs_get_dyndata();
>> +
>> +    /* Use template with original enter function. */
>> +    return data->template->e.funcs->enter(&data->template->e);
>> +}
>
> At the example of this (applies to other uses as well): I realize
> hypfs_get_dyndata() asserts that the pointer is non-NULL, but
> according to the bottom of ./CODING_STYLE this may not be enough
> when considering the implications of a NULL deref in the context
> of a PV guest. Even this living behind a sysctl doesn't really
> help, both because via XSM not fully privileged domains can be
> granted access, and because speculation may still occur all the
> way into here. (I'll send a patch to address the latter aspect in
> a few minutes.) While likely we have numerous existing examples
> with similar problems, I guess in new code we'd better be as
> defensive as possible.

What do you suggest? BUG_ON()?

You are aware that this is nothing a user can influence, so it would
be a clear coding error in the hypervisor?

>
>> +/*
>> + * Fill dyndata with a dynamically generated directory based on a template
>> + * and a numerical id.
>> + * Needs to be kept in sync with hypfs_read_dyndir_id_entry() regarding the
>> + * name generated.
>> + */
>> +struct hypfs_entry *hypfs_gen_dyndir_id_entry(
>> +    const struct hypfs_entry_dir *template, unsigned int id, void *data)
>> +{
>
> s/directory/entry/ in the comment (and, as I realize only now,
> then also for hypfs_read_dyndir_id_entry())?

Oh, indeed.


Juergen

--------------6A4BCA8D705FCF7E7E6657AF--

--DqE4c3xl7ypjgWc1CxhrPue5l3uUOF89Y--

--qCDbGZsZmV0aaaRIoV0quLwOF5oMuhMrZ
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/bQeoFAwAAAAAACgkQsN6d1ii/Ey+P
Rgf+JJt5b8p9JhNfH+fwNmspT0E/p9m/2z5TO7P5S6xg+w2ckBbO4JwLSRObq2tS3ryonXsK35+g
MI3HBreWbSL+gvPIVGnIkFpPYvaOVPZ7Zl0bm5VmIa824tsxXPrA6FcBKQSsAnnHNYrvmLMKMN7K
sO1dy51dJVbHVFBZITgEMjTBTfLbacH+tJRFeoOmm0oWBDLx+pQTeoHYBt11QLEHQUVo/h2EhAFt
XvT+EWhzB49NLmq+cax0YO1ZyBKcwM1oA2yurWlINIjcI60IqNR2LXBTSfOcjqSM0MIRv2sa0rFa
Oo5WpW/iJB3XneoOsRSboAinI592QeCmkWcdtDcnOQ==
=DCmR
-----END PGP SIGNATURE-----

--qCDbGZsZmV0aaaRIoV0quLwOF5oMuhMrZ--


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 11:57:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 11:57:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55879.97487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpru4-0006HM-CR; Thu, 17 Dec 2020 11:57:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55879.97487; Thu, 17 Dec 2020 11:57:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpru4-0006HF-81; Thu, 17 Dec 2020 11:57:04 +0000
Received: by outflank-mailman (input) for mailman id 55879;
 Thu, 17 Dec 2020 11:57:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8YGc=FV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kpru3-0006HA-0g
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 11:57:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 049d03ae-6f80-4855-a93f-520094822715;
 Thu, 17 Dec 2020 11:57:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3C064AC7F;
 Thu, 17 Dec 2020 11:57:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 049d03ae-6f80-4855-a93f-520094822715
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608206221; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=JZRIRuR7hZ2IMcHzjNWlwLpR2F1Sh0DoTu8N940Vr6E=;
	b=l0i+HfEOZP5q7C6UBm2c1CiQNlQiZYo0xTfeJMLTe2PZH4PcvmOACCvkoT85uZ2s8BElnI
	Kg+Syizm3b73YaK7hdJG3JcYDbdAko7pXx6K/1UcaC0w5H6dF9yj4s5r272y4YO9mjA/60
	mE6HrLpr0dZWygpJZBWi+T2n/USgRFA=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Daniel de Graaf <dgdegra@tycho.nsa.gov>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] xsm/dummy: harden against speculative abuse
Message-ID: <34833712-93d9-1b4e-1ebf-9df5ea93d19f@suse.com>
Date: Thu, 17 Dec 2020 12:57:01 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

First of all don't open-code is_control_domain(), which is already
suitably using evaluate_nospec(). Then also apply this construct to the
other paths of xsm_default_action(). Also guard two paths not using this
function.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
While the functions are always_inline I'm not entirely certain we can
get away with doing this inside of them, rather than in the callers. It
will certainly take more to also guard builds with non-dummy XSM.

--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -76,20 +76,20 @@ static always_inline int xsm_default_act
     case XSM_HOOK:
         return 0;
     case XSM_TARGET:
-        if ( src == target )
+        if ( evaluate_nospec(src == target) )
         {
             return 0;
     case XSM_XS_PRIV:
-            if ( is_xenstore_domain(src) )
+            if ( evaluate_nospec(is_xenstore_domain(src)) )
                 return 0;
         }
         /* fall through */
     case XSM_DM_PRIV:
-        if ( target && src->target == target )
+        if ( target && evaluate_nospec(src->target == target) )
             return 0;
         /* fall through */
     case XSM_PRIV:
-        if ( src->is_privileged )
+        if ( is_control_domain(src) )
             return 0;
         return -EPERM;
     default:
@@ -656,7 +656,7 @@ static XSM_INLINE int xsm_mmu_update(XSM
     XSM_ASSERT_ACTION(XSM_TARGET);
     if ( f != dom_io )
         rc = xsm_default_action(action, d, f);
-    if ( t && !rc )
+    if ( evaluate_nospec(t) && !rc )
         rc = xsm_default_action(action, d, t);
     return rc;
 }
@@ -750,6 +750,7 @@ static XSM_INLINE int xsm_xen_version (X
     case XENVER_platform_parameters:
     case XENVER_get_features:
         /* These sub-ops ignore the permission checks and return data. */
+        block_speculation();
         return 0;
     case XENVER_extraversion:
     case XENVER_compile_info:

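A freestanding model of what evaluate_nospec() provides may help: the predicate's value is forced to be architecturally resolved before dependent code can execute under speculation. Xen's real implementation fences on both sides of the conditional branch, so the names and fence placement below are illustrative only:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model: resolve the predicate before speculation can run ahead
 * with a mispredicted result (simplified vs. Xen's evaluate_nospec()). */
static inline bool evaluate_nospec_model(bool cond)
{
#if defined(__x86_64__) || defined(__i386__)
    __asm__ volatile ( "lfence" ::: "memory" );  /* dispatch-serializing */
#endif
    return cond;
}

static const int table[4] = { 10, 20, 30, 40 };

/* Bounds check guarded against speculative bypass (Spectre v1 pattern). */
int read_guarded(size_t idx)
{
    if ( evaluate_nospec_model(idx < 4) )
        return table[idx];
    return -1;
}
```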

From xen-devel-bounces@lists.xenproject.org Thu Dec 17 12:14:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 12:14:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55902.97508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpsB4-0008Op-Jt; Thu, 17 Dec 2020 12:14:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55902.97508; Thu, 17 Dec 2020 12:14:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpsB4-0008Oi-Gy; Thu, 17 Dec 2020 12:14:38 +0000
Received: by outflank-mailman (input) for mailman id 55902;
 Thu, 17 Dec 2020 12:14:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8YGc=FV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kpsB3-0008Oc-DQ
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 12:14:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e92a102d-a6a6-48f1-9f82-3f6470365b82;
 Thu, 17 Dec 2020 12:14:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 98737AC7F;
 Thu, 17 Dec 2020 12:14:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e92a102d-a6a6-48f1-9f82-3f6470365b82
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608207274; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=xrphdW29sKqk78WD/Xl6dHk4g7FXa40Q7mkMEDngeVc=;
	b=V9FIZWuOIwamNxZyRj7jIcvuH5dYpVaea+97qytNyjFEkEEL2Nxu0MdRRuyCkmlSFIV3sL
	6xUlUIeYt7emk/4a/MiZWn2OtcaVeI9voyqZHj9PssnT7qucoBgiJdHkX5KVNqKTyr0ANw
	C2kodxqMpN9p3QvF6lf6+pj0kcfJ/Hs=
Subject: Re: [PATCH v3 5/8] xen/hypfs: add support for id-based dynamic
 directories
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201209160956.32456-1-jgross@suse.com>
 <20201209160956.32456-6-jgross@suse.com>
 <2894a231-9150-7c09-cc5c-7ef52087acf5@suse.com>
 <d4c408eb-08d8-42a8-0c0a-6580fce0e181@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5e0ac85e-ecba-86ad-b350-ff30e3a40a68@suse.com>
Date: Thu, 17 Dec 2020 13:14:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <d4c408eb-08d8-42a8-0c0a-6580fce0e181@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 17.12.2020 12:32, Jürgen Groß wrote:
> On 17.12.20 12:28, Jan Beulich wrote:
>> On 09.12.2020 17:09, Juergen Gross wrote:
>>> +static const struct hypfs_entry *hypfs_dyndir_enter(
>>> +    const struct hypfs_entry *entry)
>>> +{
>>> +    const struct hypfs_dyndir_id *data;
>>> +
>>> +    data = hypfs_get_dyndata();
>>> +
>>> +    /* Use template with original enter function. */
>>> +    return data->template->e.funcs->enter(&data->template->e);
>>> +}
>>
>> At the example of this (applies to other uses as well): I realize
>> hypfs_get_dyndata() asserts that the pointer is non-NULL, but
>> according to the bottom of ./CODING_STYLE this may not be enough
>> when considering the implications of a NULL deref in the context
>> of a PV guest. Even this living behind a sysctl doesn't really
>> help, both because via XSM not fully privileged domains can be
>> granted access, and because speculation may still occur all the
>> way into here. (I'll send a patch to address the latter aspect in
>> a few minutes.) While likely we have numerous existing examples
>> with similar problems, I guess in new code we'd better be as
>> defensive as possible.
> 
> What do you suggest? BUG_ON()?

Well, BUG_ON() would be a step in the right direction, converting
privilege escalation to DoS. The question is if we can't do better
here, gracefully failing in such a case (the usual pair of
ASSERT_UNREACHABLE() plus return/break/goto approach doesn't fit
here, at least not directly).

> You are aware that this is nothing a user can influence, so it would
> be a clear coding error in the hypervisor?

A user (or guest) can't arrange for there to be a NULL pointer,
but if there is one that can be run into here, this would still
require an XSA afaict.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 12:51:08 2020
Subject: Re: [PATCH v2 2/4] block: Avoid processing BDS twice in
 bdrv_set_aio_context_ignore()
To: Kevin Wolf <kwolf@redhat.com>, Sergio Lopez <slp@redhat.com>
Cc: Fam Zheng <fam@euphon.net>, Stefano Stabellini <sstabellini@kernel.org>,
 qemu-block@nongnu.org, Paul Durrant <paul@xen.org>,
 "Michael S. Tsirkin" <mst@redhat.com>, qemu-devel@nongnu.org,
 Max Reitz <mreitz@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
References: <20201214170519.223781-1-slp@redhat.com>
 <20201214170519.223781-3-slp@redhat.com>
 <20201215121233.GD8185@merkur.fritz.box>
 <20201215131527.evpidxevevtfy54n@mhamilton>
 <20201215150119.GE8185@merkur.fritz.box>
 <20201215172337.w7vcn2woze2ejgco@mhamilton>
 <20201216123514.GD7548@merkur.fritz.box>
 <20201216145502.yiejsw47q5pfbzio@mhamilton>
 <20201216183102.GH7548@merkur.fritz.box>
 <20201217093744.tg6ik73o45nidcs4@mhamilton>
 <20201217105830.GA12328@merkur.fritz.box>
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-ID: <d7c1ee7f-4171-1407-3a71-a7e45708cc4a@virtuozzo.com>
Date: Thu, 17 Dec 2020 15:50:45 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
In-Reply-To: <20201217105830.GA12328@merkur.fritz.box>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

17.12.2020 13:58, Kevin Wolf wrote:
> Am 17.12.2020 um 10:37 hat Sergio Lopez geschrieben:
>> On Wed, Dec 16, 2020 at 07:31:02PM +0100, Kevin Wolf wrote:
>>> Am 16.12.2020 um 15:55 hat Sergio Lopez geschrieben:
>>>> On Wed, Dec 16, 2020 at 01:35:14PM +0100, Kevin Wolf wrote:
>>>>> Am 15.12.2020 um 18:23 hat Sergio Lopez geschrieben:
>>>>>> On Tue, Dec 15, 2020 at 04:01:19PM +0100, Kevin Wolf wrote:
>>>>>>> Am 15.12.2020 um 14:15 hat Sergio Lopez geschrieben:
>>>>>>>> On Tue, Dec 15, 2020 at 01:12:33PM +0100, Kevin Wolf wrote:
>>>>>>>>> Am 14.12.2020 um 18:05 hat Sergio Lopez geschrieben:
>>>>>>>>>> While processing the parents of a BDS, one of the parents may process
>>>>>>>>>> the child that's doing the tail recursion, which leads to a BDS being
>>>>>>>>>> processed twice. This is especially problematic for the aio_notifiers,
>>>>>>>>>> as they might attempt to work on both the old and the new AIO
>>>>>>>>>> contexts.
>>>>>>>>>>
>>>>>>>>>> To avoid this, add the BDS pointer to the ignore list, and check the
>>>>>>>>>> child BDS pointer while iterating over the children.
>>>>>>>>>>
>>>>>>>>>> Signed-off-by: Sergio Lopez <slp@redhat.com>
>>>>>>>>>
>>>>>>>>> Ugh, so we get a mixed list of BdrvChild and BlockDriverState? :-/
>>>>>>>>
>>>>>>>> I know, it's effective but quite ugly...
>>>>>>>>
>>>>>>>>> What is the specific scenario where you saw this breaking? Did you have
>>>>>>>>> multiple BdrvChild connections between two nodes so that we would go to
>>>>>>>>> the parent node through one and then come back to the child node through
>>>>>>>>> the other?
>>>>>>>>
>>>>>>>> I don't think this is a corner case. If the graph is walked top->down,
>>>>>>>> there's no problem since children are added to the ignore list before
>>>>>>>> getting processed, and siblings don't process each other. But, if the
>>>>>>>> graph is walked bottom->up, a BDS will start processing its parents
>>>>>>>> without adding itself to the ignore list, so there's nothing
>>>>>>>> preventing them from processing it again.
>>>>>>>
>>>>>>> I don't understand. child is added to ignore before calling the parent
>>>>>>> callback on it, so how can we come back through the same BdrvChild?
>>>>>>>
>>>>>>>      QLIST_FOREACH(child, &bs->parents, next_parent) {
>>>>>>>          if (g_slist_find(*ignore, child)) {
>>>>>>>              continue;
>>>>>>>          }
>>>>>>>          assert(child->klass->set_aio_ctx);
>>>>>>>          *ignore = g_slist_prepend(*ignore, child);
>>>>>>>          child->klass->set_aio_ctx(child, new_context, ignore);
>>>>>>>      }
>>>>>>
>>>>>> Perhaps I'm missing something, but the way I understand it, that loop
>>>>>> is adding the BdrvChild pointer of each of its parents, but not the
>>>>>> BdrvChild pointer of the BDS that was passed as an argument to
>>>>>> b_s_a_c_i.
>>>>>
>>>>> Generally, the caller has already done that.
>>>>>
>>>>> In the theoretical case that it was the outermost call in the recursion
>>>>> and it hasn't (I couldn't find any such case), I think we should still
>>>>> call the callback for the passed BdrvChild like we currently do.
>>>>>
>>>>>>> You didn't dump the BdrvChild here. I think that would add some
>>>>>>> information on why we re-entered 0x555ee2fbf660. Maybe you can also add
>>>>>>> bs->drv->format_name for each node to make the scenario less abstract?
>>>>>>
>>>>>> I've generated another trace with more data:
>>>>>>
>>>>>> bs=0x565505e48030 (backup-top) enter
>>>>>> bs=0x565505e48030 (backup-top) processing children
>>>>>> bs=0x565505e48030 (backup-top) calling bsaci child=0x565505e42090 (child->bs=0x565505e5d420)
>>>>>> bs=0x565505e5d420 (qcow2) enter
>>>>>> bs=0x565505e5d420 (qcow2) processing children
>>>>>> bs=0x565505e5d420 (qcow2) calling bsaci child=0x565505e41ea0 (child->bs=0x565505e52060)
>>>>>> bs=0x565505e52060 (file) enter
>>>>>> bs=0x565505e52060 (file) processing children
>>>>>> bs=0x565505e52060 (file) processing parents
>>>>>> bs=0x565505e52060 (file) processing itself
>>>>>> bs=0x565505e5d420 (qcow2) processing parents
>>>>>> bs=0x565505e5d420 (qcow2) calling set_aio_ctx child=0x5655066a34d0
>>>>>> bs=0x565505fbf660 (qcow2) enter
>>>>>> bs=0x565505fbf660 (qcow2) processing children
>>>>>> bs=0x565505fbf660 (qcow2) calling bsaci child=0x565505e41d20 (child->bs=0x565506bc0c00)
>>>>>> bs=0x565506bc0c00 (file) enter
>>>>>> bs=0x565506bc0c00 (file) processing children
>>>>>> bs=0x565506bc0c00 (file) processing parents
>>>>>> bs=0x565506bc0c00 (file) processing itself
>>>>>> bs=0x565505fbf660 (qcow2) processing parents
>>>>>> bs=0x565505fbf660 (qcow2) calling set_aio_ctx child=0x565505fc7aa0
>>>>>> bs=0x565505fbf660 (qcow2) calling set_aio_ctx child=0x5655068b8510
>>>>>> bs=0x565505e48030 (backup-top) enter
>>>>>> bs=0x565505e48030 (backup-top) processing children
>>>>>> bs=0x565505e48030 (backup-top) calling bsaci child=0x565505e3c450 (child->bs=0x565505fbf660)
>>>>>> bs=0x565505fbf660 (qcow2) enter
>>>>>> bs=0x565505fbf660 (qcow2) processing children
>>>>>> bs=0x565505fbf660 (qcow2) processing parents
>>>>>> bs=0x565505fbf660 (qcow2) processing itself
>>>>>> bs=0x565505e48030 (backup-top) processing parents
>>>>>> bs=0x565505e48030 (backup-top) calling set_aio_ctx child=0x565505e402d0
>>>>>> bs=0x565505e48030 (backup-top) processing itself
>>>>>> bs=0x565505fbf660 (qcow2) processing itself
>>>>>
>>>>> Hm, is this complete? I see no "processing itself" for
>>>>> bs=0x565505e5d420. Or is this because it crashed before getting there?
>>>>
>>>> Yes, it crashes there. I forgot to mention that, sorry.
>>>>
>>>>> Anyway, trying to reconstruct the block graph with BdrvChild pointers
>>>>> annotated at the edges:
>>>>>
>>>>> BlockBackend
>>>>>        |
>>>>>        v
>>>>>    backup-top ------------------------+
>>>>>        |   |                          |
>>>>>        |   +-----------------------+  |
>>>>>        |            0x5655068b8510 |  | 0x565505e3c450
>>>>>        |                           |  |
>>>>>        | 0x565505e42090            |  |
>>>>>        v                           |  |
>>>>>      qcow2 ---------------------+  |  |
>>>>>        |                        |  |  |
>>>>>        | 0x565505e52060         |  |  | ??? [1]
>>>>>        |                        |  |  |  |
>>>>>        v         0x5655066a34d0 |  |  |  | 0x565505fc7aa0
>>>>>      file                       v  v  v  v
>>>>>                               qcow2 (backing)
>>>>>                                      |
>>>>>                                      | 0x565505e41d20
>>>>>                                      v
>>>>>                                    file
>>>>>
>>>>> [1] This seems to be a BdrvChild with a non-BDS parent. Probably a
>>>>>      BdrvChild directly owned by the backup job.
>>>>>
>>>>>> So it seems this is happening:
>>>>>>
>>>>>> backup-top (5e48030) <---------| (5)
>>>>>>     |    |                      |
>>>>>>     |    | (6) ------------> qcow2 (5fbf660)
>>>>>>     |                           ^    |
>>>>>>     |                       (3) |    | (4)
>>>>>>     |-> (1) qcow2 (5e5d420) -----    |-> file (6bc0c00)
>>>>>>     |
>>>>>>     |-> (2) file (5e52060)
>>>>>>
>>>>>> backup-top (5e48030), the BDS that was passed as argument in the first
>>>>>> bdrv_set_aio_context_ignore() call, is re-entered when qcow2 (5fbf660)
>>>>>> is processing its parents, and the latter is also re-entered when the
>>>>>> first one starts processing its children again.
>>>>>
>>>>> Yes, but look at the BdrvChild pointers, it is through different edges
>>>>> that we come back to the same node. No BdrvChild is used twice.
>>>>>
>>>>> If backup-top had added all of its children to the ignore list before
>>>>> calling into the overlay qcow2, the backing qcow2 wouldn't eventually
>>>>> have called back into backup-top.
>>>>
>>>> I've tested a patch that first adds every child to the ignore list,
>>>> and then processes those that weren't there before, as you suggested
>>>> on a previous email. With that, the offending qcow2 is not re-entered,
>>>> so we avoid the crash, but backup-top is still entered twice:
>>>
>>> I think we also need to add every parent to the ignore list before calling
>>> callbacks, though it doesn't look like this is the problem you're
>>> currently seeing.
>>
>> I agree.
>>
>>>> bs=0x560db0e3b030 (backup-top) enter
>>>> bs=0x560db0e3b030 (backup-top) processing children
>>>> bs=0x560db0e3b030 (backup-top) calling bsaci child=0x560db0e2f450 (child->bs=0x560db0fb2660)
>>>> bs=0x560db0fb2660 (qcow2) enter
>>>> bs=0x560db0fb2660 (qcow2) processing children
>>>> bs=0x560db0fb2660 (qcow2) calling bsaci child=0x560db0e34d20 (child->bs=0x560db1bb3c00)
>>>> bs=0x560db1bb3c00 (file) enter
>>>> bs=0x560db1bb3c00 (file) processing children
>>>> bs=0x560db1bb3c00 (file) processing parents
>>>> bs=0x560db1bb3c00 (file) processing itself
>>>> bs=0x560db0fb2660 (qcow2) calling bsaci child=0x560db16964d0 (child->bs=0x560db0e50420)
>>>> bs=0x560db0e50420 (qcow2) enter
>>>> bs=0x560db0e50420 (qcow2) processing children
>>>> bs=0x560db0e50420 (qcow2) calling bsaci child=0x560db0e34ea0 (child->bs=0x560db0e45060)
>>>> bs=0x560db0e45060 (file) enter
>>>> bs=0x560db0e45060 (file) processing children
>>>> bs=0x560db0e45060 (file) processing parents
>>>> bs=0x560db0e45060 (file) processing itself
>>>> bs=0x560db0e50420 (qcow2) processing parents
>>>> bs=0x560db0e50420 (qcow2) processing itself
>>>> bs=0x560db0fb2660 (qcow2) processing parents
>>>> bs=0x560db0fb2660 (qcow2) calling set_aio_ctx child=0x560db1672860
>>>> bs=0x560db0fb2660 (qcow2) calling set_aio_ctx child=0x560db1b14a20
>>>> bs=0x560db0e3b030 (backup-top) enter
>>>> bs=0x560db0e3b030 (backup-top) processing children
>>>> bs=0x560db0e3b030 (backup-top) processing parents
>>>> bs=0x560db0e3b030 (backup-top) calling set_aio_ctx child=0x560db0e332d0
>>>> bs=0x560db0e3b030 (backup-top) processing itself
>>>> bs=0x560db0fb2660 (qcow2) processing itself
>>>> bs=0x560db0e3b030 (backup-top) calling bsaci child=0x560db0e35090 (child->bs=0x560db0e50420)
>>>> bs=0x560db0e50420 (qcow2) enter
>>>> bs=0x560db0e3b030 (backup-top) processing parents
>>>> bs=0x560db0e3b030 (backup-top) processing itself
>>>>
>>>> I see that "blk_do_set_aio_context()" passes "blk->root" to
>>>> "bdrv_child_try_set_aio_context()" so it's already in the ignore list,
>>>> so I'm not sure what's happening here. Is backup-top referenced
>>>> from two different BdrvChild objects or is "blk->root" not pointing to
>>>> backup-top's BDS?
>>>
>>> The second time that backup-top is entered, it is not as the BDS of
>>> blk->root, but as the parent node of the overlay qcow2. Which is
>>> interesting, because last time it was still the backing qcow2, so the
>>> change did have _some_ effect.
>>>
>>> The part that I don't understand is why you still get the line with
>>> child=0x560db1b14a20, because when you add all children to the ignore
>>> list first, that should have been put into the ignore list as one of the
>>> first things in the whole process (when backup-top was first entered).
>>>
>>> Is 0x560db1b14a20 a BdrvChild that has backup-top as its opaque value,
>>> but isn't actually present in backup-top's bs->children?
>>
>> Exactly, that line corresponds to this chunk of code:
>>
>> <---- begin ---->
>>      QLIST_FOREACH(child, &bs->parents, next_parent) {
>>          if (g_slist_find(*ignore, child)) {
>>              continue;
>>          }
>>          assert(child->klass->set_aio_ctx);
>>          *ignore = g_slist_prepend(*ignore, child);
>>          fprintf(stderr, "bs=%p (%s) calling set_aio_ctx child=%p\n", bs, bs->drv->format_name, child);
>>          child->klass->set_aio_ctx(child, new_context, ignore);
>>      }
>> <---- end ---->
>>
>> Do you think it's safe to re-enter backup-top, or should we look for a
>> way to avoid this?
> 
> I think it should be avoided, but I don't understand why putting all
> children of backup-top into the ignore list doesn't already avoid it. If
> backup-top is in the parents list of qcow2, then qcow2 should be in the
> children list of backup-top and therefore the BdrvChild should already
> be in the ignore list.
> 
> The only way I can explain this is that backup-top and qcow2 have
> different ideas about which BdrvChild objects exist that connect them.
> Or that the graph changes between both places, but I don't see how that
> could happen in bdrv_set_aio_context_ignore().
> 

bdrv_set_aio_context_ignore() does bdrv_drained_begin(). As I reported recently, nothing prevents a job from finishing and modifying the graph while inside some other drained section. That may be what is happening here.

If backup-top is involved, I'd guess the graph modification happens in backup_clean(), when we remove the filter. Who is triggering set_aio_context in this case? I.e., what is the backtrace of bdrv_set_aio_context_ignore()?


-- 
Best regards,
Vladimir


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 13:06:31 2020
Date: Thu, 17 Dec 2020 14:06:02 +0100
From: Kevin Wolf <kwolf@redhat.com>
To: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Cc: Sergio Lopez <slp@redhat.com>, Fam Zheng <fam@euphon.net>,
	Stefano Stabellini <sstabellini@kernel.org>, qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>, "Michael S. Tsirkin" <mst@redhat.com>,
	qemu-devel@nongnu.org, Max Reitz <mreitz@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 2/4] block: Avoid processing BDS twice in
 bdrv_set_aio_context_ignore()
Message-ID: <20201217130602.GB12328@merkur.fritz.box>
References: <20201215121233.GD8185@merkur.fritz.box>
 <20201215131527.evpidxevevtfy54n@mhamilton>
 <20201215150119.GE8185@merkur.fritz.box>
 <20201215172337.w7vcn2woze2ejgco@mhamilton>
 <20201216123514.GD7548@merkur.fritz.box>
 <20201216145502.yiejsw47q5pfbzio@mhamilton>
 <20201216183102.GH7548@merkur.fritz.box>
 <20201217093744.tg6ik73o45nidcs4@mhamilton>
 <20201217105830.GA12328@merkur.fritz.box>
 <d7c1ee7f-4171-1407-3a71-a7e45708cc4a@virtuozzo.com>
MIME-Version: 1.0
In-Reply-To: <d7c1ee7f-4171-1407-3a71-a7e45708cc4a@virtuozzo.com>
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Am 17.12.2020 um 13:50 hat Vladimir Sementsov-Ogievskiy geschrieben:
> 17.12.2020 13:58, Kevin Wolf wrote:
> > Am 17.12.2020 um 10:37 hat Sergio Lopez geschrieben:
> > > On Wed, Dec 16, 2020 at 07:31:02PM +0100, Kevin Wolf wrote:
> > > > Am 16.12.2020 um 15:55 hat Sergio Lopez geschrieben:
> > > > > On Wed, Dec 16, 2020 at 01:35:14PM +0100, Kevin Wolf wrote:
> > > > > > Anyway, trying to reconstruct the block graph with BdrvChild pointers
> > > > > > annotated at the edges:
> > > > > > 
> > > > > > BlockBackend
> > > > > >        |
> > > > > >        v
> > > > > >    backup-top ------------------------+
> > > > > >        |   |                          |
> > > > > >        |   +-----------------------+  |
> > > > > >        |            0x5655068b8510 |  | 0x565505e3c450
> > > > > >        |                           |  |
> > > > > >        | 0x565505e42090            |  |
> > > > > >        v                           |  |
> > > > > >      qcow2 ---------------------+  |  |
> > > > > >        |                        |  |  |
> > > > > >        | 0x565505e52060         |  |  | ??? [1]
> > > > > >        |                        |  |  |  |
> > > > > >        v         0x5655066a34d0 |  |  |  | 0x565505fc7aa0
> > > > > >      file                       v  v  v  v
> > > > > >                               qcow2 (backing)
> > > > > >                                      |
> > > > > >                                      | 0x565505e41d20
> > > > > >                                      v
> > > > > >                                    file
> > > > > > 
> > > > > > [1] This seems to be a BdrvChild with a non-BDS parent. Probably a
> > > > > >      BdrvChild directly owned by the backup job.
> > > > > > 
> > > > > > > So it seems this is happening:
> > > > > > > 
> > > > > > > backup-top (5e48030) <---------| (5)
> > > > > > >     |    |                      |
> > > > > > >     |    | (6) ------------> qcow2 (5fbf660)
> > > > > > >     |                           ^    |
> > > > > > >     |                       (3) |    | (4)
> > > > > > >     |-> (1) qcow2 (5e5d420) -----    |-> file (6bc0c00)
> > > > > > >     |
> > > > > > >     |-> (2) file (5e52060)
> > > > > > > 
> > > > > > > backup-top (5e48030), the BDS that was passed as argument in the first
> > > > > > > bdrv_set_aio_context_ignore() call, is re-entered when qcow2 (5fbf660)
> > > > > > > is processing its parents, and the latter is also re-entered when the
> > > > > > > first one starts processing its children again.
> > > > > > 
> > > > > > Yes, but look at the BdrvChild pointers, it is through different edges
> > > > > > that we come back to the same node. No BdrvChild is used twice.
> > > > > > 
> > > > > > If backup-top had added all of its children to the ignore list before
> > > > > > calling into the overlay qcow2, the backing qcow2 wouldn't eventually
> > > > > > have called back into backup-top.
> > > > > 
> > > > > I've tested a patch that first adds every child to the ignore list,
> > > > > and then processes those that weren't there before, as you suggested
> > > > > on a previous email. With that, the offending qcow2 is not re-entered,
> > > > > so we avoid the crash, but backup-top is still entered twice:
> > > > 
> > > > I think we also need to add every parent to the ignore list before
> > > > calling the callbacks, though it doesn't look like this is the problem
> > > > you're currently seeing.
> > > 
> > > I agree.
> > > 
> > > > > bs=0x560db0e3b030 (backup-top) enter
> > > > > bs=0x560db0e3b030 (backup-top) processing children
> > > > > bs=0x560db0e3b030 (backup-top) calling bsaci child=0x560db0e2f450 (child->bs=0x560db0fb2660)
> > > > > bs=0x560db0fb2660 (qcow2) enter
> > > > > bs=0x560db0fb2660 (qcow2) processing children
> > > > > bs=0x560db0fb2660 (qcow2) calling bsaci child=0x560db0e34d20 (child->bs=0x560db1bb3c00)
> > > > > bs=0x560db1bb3c00 (file) enter
> > > > > bs=0x560db1bb3c00 (file) processing children
> > > > > bs=0x560db1bb3c00 (file) processing parents
> > > > > bs=0x560db1bb3c00 (file) processing itself
> > > > > bs=0x560db0fb2660 (qcow2) calling bsaci child=0x560db16964d0 (child->bs=0x560db0e50420)
> > > > > bs=0x560db0e50420 (qcow2) enter
> > > > > bs=0x560db0e50420 (qcow2) processing children
> > > > > bs=0x560db0e50420 (qcow2) calling bsaci child=0x560db0e34ea0 (child->bs=0x560db0e45060)
> > > > > bs=0x560db0e45060 (file) enter
> > > > > bs=0x560db0e45060 (file) processing children
> > > > > bs=0x560db0e45060 (file) processing parents
> > > > > bs=0x560db0e45060 (file) processing itself
> > > > > bs=0x560db0e50420 (qcow2) processing parents
> > > > > bs=0x560db0e50420 (qcow2) processing itself
> > > > > bs=0x560db0fb2660 (qcow2) processing parents
> > > > > bs=0x560db0fb2660 (qcow2) calling set_aio_ctx child=0x560db1672860
> > > > > bs=0x560db0fb2660 (qcow2) calling set_aio_ctx child=0x560db1b14a20
> > > > > bs=0x560db0e3b030 (backup-top) enter
> > > > > bs=0x560db0e3b030 (backup-top) processing children
> > > > > bs=0x560db0e3b030 (backup-top) processing parents
> > > > > bs=0x560db0e3b030 (backup-top) calling set_aio_ctx child=0x560db0e332d0
> > > > > bs=0x560db0e3b030 (backup-top) processing itself
> > > > > bs=0x560db0fb2660 (qcow2) processing itself
> > > > > bs=0x560db0e3b030 (backup-top) calling bsaci child=0x560db0e35090 (child->bs=0x560db0e50420)
> > > > > bs=0x560db0e50420 (qcow2) enter
> > > > > bs=0x560db0e3b030 (backup-top) processing parents
> > > > > bs=0x560db0e3b030 (backup-top) processing itself
> > > > > 
> > > > > I see that "blk_do_set_aio_context()" passes "blk->root" to
> > > > > "bdrv_child_try_set_aio_context()" so it's already in the ignore list,
> > > > > so I'm not sure what's happening here. Is backup-top referenced
> > > > > from two different BdrvChild objects, or is "blk->root" not
> > > > > pointing to backup-top's BDS?
> > > > 
> > > > The second time that backup-top is entered, it is not as the BDS of
> > > > blk->root, but as the parent node of the overlay qcow2. Which is
> > > > interesting, because last time it was still the backing qcow2, so the
> > > > change did have _some_ effect.
> > > > 
> > > > The part that I don't understand is why you still get the line with
> > > > child=0x560db1b14a20, because when you add all children to the ignore
> > > > list first, that should have been put into the ignore list as one of the
> > > > first things in the whole process (when backup-top was first entered).
> > > > 
> > > > Is 0x560db1b14a20 a BdrvChild that has backup-top as its opaque value,
> > > > but isn't actually present in backup-top's bs->children?
> > > 
> > > Exactly, that line corresponds to this chunk of code:
> > > 
> > > <---- begin ---->
> > >      QLIST_FOREACH(child, &bs->parents, next_parent) {
> > >          if (g_slist_find(*ignore, child)) {
> > >              continue;
> > >          }
> > >          assert(child->klass->set_aio_ctx);
> > >          *ignore = g_slist_prepend(*ignore, child);
> > >          fprintf(stderr, "bs=%p (%s) calling set_aio_ctx child=%p\n", bs, bs->drv->format_name, child);
> > >          child->klass->set_aio_ctx(child, new_context, ignore);
> > >      }
> > > <---- end ---->
> > > 
> > > Do you think it's safe to re-enter backup-top, or should we look for a
> > > way to avoid this?
> > 
> > I think it should be avoided, but I don't understand why putting all
> > children of backup-top into the ignore list doesn't already avoid it. If
> > backup-top is in the parents list of qcow2, then qcow2 should be in the
> > children list of backup-top and therefore the BdrvChild should already
> > be in the ignore list.
> > 
> > The only way I can explain this is that backup-top and qcow2 have
> > different ideas about which BdrvChild objects exist that connect them.
> > Or that the graph changes between both places, but I don't see how that
> > could happen in bdrv_set_aio_context_ignore().
> > 
> 
> bdrv_set_aio_context_ignore() does bdrv_drained_begin(). As I reported
> recently, nothing prevents a job from finishing and modifying the graph
> during another drained section. That may be the case here.

Good point, this might be the same bug then.

If everything worked correctly, a job completion could only happen on
the outer bdrv_set_aio_context_ignore(). But after that, we are already
in a drain section, so the job should be quiesced and a second drain
shouldn't cause any additional graph changes.

I would have to go back to the other discussion, but I think it was
related to block jobs that are already in the completion process and
keep moving forward even though they are supposed to be quiesced.

If I remember correctly, actually pausing them at this point looked
difficult. Maybe what we should do then is let .drained_poll return
true until they have actually fully completed?

Ah, but was this something that would deadlock because the job
completion callbacks use drain sections themselves?

> If backup-top is involved, I'd guess the graph modification happens in
> backup_clean, when we remove the filter. Who is calling
> set_aio_context in this case? I mean, what is the backtrace of
> bdrv_set_aio_context_ignore()?

Sergio, can you provide the backtrace and also test if the theory with a
job completion in the middle of the process is what you actually hit?

Kevin



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 13:09:44 2020
Date: Thu, 17 Dec 2020 14:09:26 +0100
From: Sergio Lopez <slp@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
	qemu-block@nongnu.org, Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>, Fam Zheng <fam@euphon.net>,
	Eric Blake <eblake@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
	Max Reitz <mreitz@redhat.com>
Subject: Re: [PATCH v2 2/4] block: Avoid processing BDS twice in
 bdrv_set_aio_context_ignore()
Message-ID: <20201217130926.lqbthako23t4o3s5@mhamilton>
References: <20201214170519.223781-3-slp@redhat.com>
 <20201215121233.GD8185@merkur.fritz.box>
 <20201215131527.evpidxevevtfy54n@mhamilton>
 <20201215150119.GE8185@merkur.fritz.box>
 <20201215172337.w7vcn2woze2ejgco@mhamilton>
 <20201216123514.GD7548@merkur.fritz.box>
 <20201216145502.yiejsw47q5pfbzio@mhamilton>
 <20201216183102.GH7548@merkur.fritz.box>
 <20201217093744.tg6ik73o45nidcs4@mhamilton>
 <20201217105830.GA12328@merkur.fritz.box>
In-Reply-To: <20201217105830.GA12328@merkur.fritz.box>

On Thu, Dec 17, 2020 at 11:58:30AM +0100, Kevin Wolf wrote:
> Am 17.12.2020 um 10:37 hat Sergio Lopez geschrieben:
> > Do you think it's safe to re-enter backup-top, or should we look for a
> > way to avoid this?
>
> I think it should be avoided, but I don't understand why putting all
> children of backup-top into the ignore list doesn't already avoid it. If
> backup-top is in the parents list of qcow2, then qcow2 should be in the
> children list of backup-top and therefore the BdrvChild should already
> be in the ignore list.
>
> The only way I can explain this is that backup-top and qcow2 have
> different ideas about which BdrvChild objects exist that connect them.
> Or that the graph changes between both places, but I don't see how that
> could happen in bdrv_set_aio_context_ignore().

I've been digging around with gdb, and found that, at that point, the
backup-top BDS is actually referenced by two different BdrvChild
objects:

(gdb) p *(BdrvChild *) 0x560c40f7e400
$84 = {bs = 0x560c40c4c030, name = 0x560c41ca4960 "root", klass = 0x560c3eae7c20 <child_root>,
  role = 20, opaque = 0x560c41ca4610, perm = 3, shared_perm = 29, has_backup_perm = false,
  backup_perm = 0, backup_shared_perm = 31, frozen = false, parent_quiesce_counter = 2, next = {
    le_next = 0x0, le_prev = 0x0}, next_parent = {le_next = 0x0, le_prev = 0x560c40c44338}}

(gdb) p sibling
$72 = (BdrvChild *) 0x560c40981840
(gdb) p *sibling
$73 = {bs = 0x560c40c4c030, name = 0x560c4161be20 "main node", klass = 0x560c3eae6a40 <child_job>,
  role = 0, opaque = 0x560c4161bc00, perm = 0, shared_perm = 31, has_backup_perm = false,
  backup_perm = 0, backup_shared_perm = 0, frozen = false, parent_quiesce_counter = 2, next = {
    le_next = 0x0, le_prev = 0x0}, next_parent = {le_next = 0x560c40c442d0, le_prev = 0x560c40c501c0}}

When the chain of calls to switch AIO contexts is started, backup-top
is the first one to be processed. blk_do_set_aio_context() instructs
bdrv_child_try_set_aio_context() to add blk->root (0x560c40f7e400) as
the first element in the ignore list, but the referenced BDS is still
re-entered through the other BdrvChild (0x560c40981840) by one of the
children of the latter.

I can't think of a way of preventing this other than keeping track of
BDS pointers in the ignore list too. Do you think there are any
alternatives?

Thanks,
Sergio.




From xen-devel-bounces@lists.xenproject.org Thu Dec 17 13:13:34 2020
Date: Thu, 17 Dec 2020 21:12:57 +0800
From: kernel test robot <lkp@intel.com>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
	x86@kernel.org, linux-kernel@vger.kernel.org
Cc: kbuild-all@lists.01.org, Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: Re: [PATCH v3 08/15] x86: add new features for paravirt patching
Message-ID: <202012172021.VSDLPK5D-lkp@intel.com>
References: <20201217093133.1507-9-jgross@suse.com>
In-Reply-To: <20201217093133.1507-9-jgross@suse.com>
User-Agent: Mutt/1.10.1 (2018-07-13)


Hi Juergen,

I love your patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v5.10]
[cannot apply to xen-tip/linux-next tip/x86/core tip/x86/asm next-20201217]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Juergen-Gross/x86-major-paravirt-cleanup/20201217-173646
base:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git accefff5b547a9a1d959c7e76ad539bf2480e78b
config: i386-randconfig-r021-20201217 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce (this is a W=1 build):
        # https://github.com/0day-ci/linux/commit/032ee351da7a8adab17b0306cf5908b02f5728d2
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Juergen-Gross/x86-major-paravirt-cleanup/20201217-173646
        git checkout 032ee351da7a8adab17b0306cf5908b02f5728d2
        # save the attached .config to linux build tree
        make W=1 ARCH=i386 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   ld: arch/x86/kernel/alternative.o: in function `paravirt_set_cap':
>> arch/x86/kernel/alternative.c:605: undefined reference to `pv_is_native_spin_unlock'
>> ld: arch/x86/kernel/alternative.c:608: undefined reference to `pv_is_native_vcpu_is_preempted'


vim +605 arch/x86/kernel/alternative.c

   601	
   602	#ifdef CONFIG_PARAVIRT
   603	static void __init paravirt_set_cap(void)
   604	{
 > 605		if (!pv_is_native_spin_unlock())
   606			setup_force_cpu_cap(X86_FEATURE_PVUNLOCK);
   607	
 > 608		if (!pv_is_native_vcpu_is_preempted())
   609			setup_force_cpu_cap(X86_FEATURE_VCPUPREEMPT);
   610	}
   611	
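The undefined references are consistent with pv_is_native_spin_unlock()
and pv_is_native_vcpu_is_preempted() being defined in
arch/x86/kernel/paravirt-spinlocks.c, which is built only when
CONFIG_PARAVIRT_SPINLOCKS=y, while paravirt_set_cap() above is compiled
for any CONFIG_PARAVIRT kernel; the i386 randconfig here apparently has
PARAVIRT without PARAVIRT_SPINLOCKS. One way to make the excerpt
self-consistent would be an additional guard (an illustrative sketch,
not necessarily the fix the author will choose):

```
#ifdef CONFIG_PARAVIRT
static void __init paravirt_set_cap(void)
{
#ifdef CONFIG_PARAVIRT_SPINLOCKS
	/* These helpers only exist when paravirt spinlocks are built in. */
	if (!pv_is_native_spin_unlock())
		setup_force_cpu_cap(X86_FEATURE_PVUNLOCK);

	if (!pv_is_native_vcpu_is_preempted())
		setup_force_cpu_cap(X86_FEATURE_VCPUPREEMPT);
#endif
}
#endif
```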

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

WRIeeDgH/i+tlt6RZuJ74p2RjtAiyMqSlvh7grD26a4On/sOnh/pVB093mBiR21iWIMp6LYJ
wr0j3VDDcKPASz9JIM143p2L976w5kf7hbDY55EdMR2gFiM7jBQUIQw8oIz0hWWqAx3Ck4OW
jQdhMfNrmWdxtMtAOGk+Ahght2/U40gBwkszFxfCjsbC+rAa6nAxzV+oJJZ3bM8/qKM3MI22
WppHBRfXt7iP+DLbzz3dpi1ceatjG1ak1US4y/Pb7lQeVYh+DgGL6b2XsMKVorhJ4xznlX6I
C/j10uMXc0rWFEx2VnJI9AvnfxroDytJ1aYZdcCyKuTXV3OPZbpjNM+86/l8Sb36IcrTYvv1
49cI3GpFP5b3NH6yuLyk8hH2BNil67kmqyR5sF6u6IzPIV+sr2gUZM6qEtKoAm5qMUqCO62W
o1FF3xVDEFZfkF0BguQTf8vDWE0fCaHA2rrh2tdU+4oVKb2sk5Sn4p9tdNvSdsSBp9++8rdY
h6LPrG69xWo+yAER8B7U873EiKPNozOYj3jKr7nDDqkfdHDOjuury5UFv14Gx7U6DAP8eLyg
bugOn4ZNe3WdVJE+iB02ihbz+QW5+43PV858/3Ixt/ZZl73gn7vXWfr0+vbyAyI2vfaZit7g
HQLqmT2CqPIgzpHzd/hTHdYG9LhkX/4f9VKHk/7eyMCGH3PiVpqiRzLseURr1wZsmzvOmYGg
OdIUe/nivc9JmSEKEl1fAHuAZQEELHdp/Ppt4lKCDHjD5jJhPitYy1Jy1LXT/r+GIhCxWo0a
KH9IBu7xdPcqJNHTaRY+3+NU4dvTx/PDCf7798vrG6oNv54ev388P/3xPHt+mgEzhcKjyveF
UXuMBdugRygEcIP2e1wHCjZDSxwkQOYGG2KsChwX9Dr1JjR/twTN0I7FNwh84A5QikxalG1T
Wl+jVjLN5ggK0QOKQ1EodC4YxwxyBqSlluIeE5XWpZAeBgYcZgI0uqLafuN//PLjzz/O/5hz
073626OrCLlW54M8XF9Ql5fSd8ntD2ZESo9IS6q+5JQNWU8Dj3NrbzFJU/9upjm2SFgUrN9j
6VmWLlbH5TRNHl5evFdPk6bHad4fB3W6lqZO48zhIjxUw1crb/rDUQn6EySr90lo14qeJKma
5Xqa5DMmiJ/eTTxYeO/MZZWm08OSNleLS5o9Uki8xfRUI8l0QwW/urxYTA9dFQbeXCw9iFD/
c4RFdJgeov1hO31q8TTNXeExRxoxp+8MAc+C63n0zqw2dS6440mSfcquvOD4zr5pgqt1MJ/b
Li/l29fTi+tUkTLj89vpf2bfnsVNJe5AQS4utLvH1+cZpEE8v4jb7fvp/nz32EeI//Is6ocH
jW+nN90JruvLBdpXcfuohEPiQvPO6m+KJvC8yysbkTTr1XpOhBS/CdcrqqZdLgbi0utPVIhv
3r/2WLptDH4u81x2kJqlIaaiVY3+BZX+qw1zZkAsa36EGpcNdqbrhcxy/qvg3/761+zt7vvp
X7Mg/CD4z9+ow55TzwRBUkukFaEcodQ751BEjdraw4LE6P4gIxrwAK0xtYQJCM/KzcZ4BUA4
JsBDyztrdeKQND1P+2rMDZqg2bMhRPwBrLckM+ghjpIYsU5I8UXUCfAs9TmjEBpjNEDRlpvn
lf3BdWV3YnyPNL7ZGMNDFu11Q3r5VY7IbYhD46k+NaDel+C48ZeSjJaze6KL94j84uhN0PiR
N4HsVuXy0IqT7Igbzd1SUnH6KR+xoo5r13HYE3BHNEK5PsCm2LU+WMIWK+9oTDdCLzxrdAF+
eeHQYCABC6a/laXB5eTXAMH1OwTXLrZKHnH7yeHI9ztHFkh52FWNECPp2I2yfXjJ5LdTA14H
OaftkOVBI/rn0fhcyDR4KouL3YiPYNNIAWiaZnooBB/2HoE3SQChiprqZmI8dzFPHDJPt/ib
tCQf5nAT7rg4ZnVDeHkmgrkH4bOg9f629iex9Id14nu1d25vcXA61K5yUFzaq+42PS4X14uJ
IYml05hTHaARpVNnxyZ0KOX764N81MSSndV3EdSr5dXcvnocdpsSWYDZ5iSeGV5DGh9RmddP
qtplS8jvaQXhKBZrCsHBJj9oavuqahxiksTe5qtlcCUObFoi6L6cYjUQdYNrFd5nrYY71MK7
mjg9bzLm0ucP+HdurawiHyDk56X55cKeyjBYXq/+mTjOYFCuL2lVqGQHebWcGLFDeLm4pkIO
yObN8EFyDeTvXCRVfmXw/8b1HZtjqWJt33LJGCRRxtNSFCRDt2ssC/WsKp16ujvVoXNHkm73
TpEUafGZtc7Z7qjkupqikIt65RCU5RTQtieUNDEoVzUWER4BgT3UHoYEbB/VfgkJ5CDrJ9k+
UGGSKycWk7ATcwG4Kh+USYHiF/f3+e2roH/6wON49nT3dv7PaXbuk/qqAgdWwugoFAOOVHkh
Ioj2VHBKxN2UtR7KEOsTJ1+wWDvWhmwRPcDMPuk0PM0cLxOIJV1P1SyCPXuqwvIQnX9k8kwN
DJ4VTDfxCZGjpQ7wDrXQakDI3AJdaEaZIfX8KqDoKHurgYJsx834PVYWD1s/63hVlI+kznfF
eMepfFIQaXO2WF5fzH6Nzy+ng/jvN1vmjtM6MmPG9LC2pBfegOd+pZjnD2AjHOgIL/ktuY8n
uzoMKwRqa0qedO5iuhsNC9oo3+Xljkd+4wjbJZNgmKENzWf4sghdAeHwsZjEwPdtdoYYM2Cj
G0xy7XD9KyYeyeFxPHKYPohvhrCxJC6tnKj90YUBlsrhrOcLhnMX0nrgjSMDhegfNz1qx+8K
ZBJ6esU7YtIJeLvHSatLzltH6b1h6NGDpZkHLM5vSk+y3KHEFUJSQaZmgYDJxBJEsHOBANYV
BLyL3Ww+ginYqHDjYHvJ8DxOkt+ZIwYCIMWBDx4bTnwaNpeXnuPx/v8ou5Jut20l/Ve87F6k
n0hN1CILDpAEi9MlSIm6Gx4nvuc8n3Ycn9jpdv59owAOGKok98LJVX2FGQQKhUIVMMRFEgsR
Z+QGWsitt+GvVD9DGfgRQTVPfp/hakVtvzJvGpITrPIf82Wfvn3/69Nvf39/+/hO6GfdsREu
07JInV7w/2SS+QoVHN9Zpn0wa6+slJ00rFP7tQfLccW1lIkIdfwoOEkGQuhdGCL8Ufi1aqiD
RnuvzxUadstoQ5zFdctsx1WapJ7/wJx8ksGJ2esua4N1QEWamRLlcQp21Kn1GkPkPHVcYGFJ
W2ZHGotTRh2Dx4v5VjxrRBG/WnZXJmQbIRVZFAQBaYOWP3CVIXMlDi+TIF6k1JJe8h0+hWSF
hv6EPrc2WyE3rrI1vRiYYJPidJj8lbM25tT6keNHJACoDzsPqGF7Nn86KeVbByJNGcokitDj
vpE4aao4cz7dZIN/f0lawGZKuE0se7wzUmo+tvxUlcTtlsyMENbvUgRVAWuphJSv1KXB8PrD
am+JHSeMNNhzkRj1OWwluvKuQOfSeN4185uOwC0+cWYY768Zxgduga+UP++pZrxp7HCAqYgO
P55MopSL1GqNuwYhSVREQ2vW6keM8y6Dt6QfWEp4Vc5wwcYoNGOeq++2yznqxM1INd65LQXl
IW5UK7oyc5c8Pz8p0efMshpLWPi07uzVffWjKUNZg6ftUm49BXj+cD9QP6dj9563okM0KMfi
+j6Iniw3p6o65Qyd1+cuvjGOQjwKrZtUExpdmi4Nw5WUQF65fIQIxU+4ClrSr7irQt5TSdwN
ZUE2ZOn4ivcet4FeuqKImyuzHaYX1yKjlLoXwoBAXO7E1gpCtRQCntRCViEuK/tdXt5vBupm
JO+39EFeouL2ED5iIbPN+vC0sWfIRUTRBm8iQFt8EdWQLBF/+HERrzJXytjPqU/lfY1lGkbv
d7i6T4J9uJEoDsve3m9Q77BuqUKukdawiDQdqpTl1eRB/Ukm98ZOL38HK2IaHVmcl09qVcat
W6eRhEtuIlpH4ZPtBCJuNNyWa0VIfATX/vTko1JuwsuqwNcs+9m83H96iKHz/1lPo/VhhSym
cU9tYiUc/fCZIKELeQE1Oa9yT/AzQ5e3DX7mvGXR6gdmOG/2xJVntmJdaYgzXPFgJKwu3G7/
eaCWTJkXGmLXyE0Hj5X9fuKl815AnkzkZ4dmfGfgEOrIn5zwalaKWP6FToUX787zJY/X1LX5
S05KtjLPnpUDBb+QcWiminRgmGz7XX9J472cNWCtgmc64qTr2xflSV72IIo2xdN53mS2Y7Td
CjU6NVMwOHNaslNMqNWiYH0gFEkAtRX+9TdRsDs8q0TJLHsbE4NQF5ZCW1Me5yjiQkp69uUZ
CAnuORhJydgLWhGIM94c5T/rkCGoW0nwPQxT5MlsF1zuLPZt2yFcrbGXOlYq+x6ViwN1i8VF
cHgyB0QhbDOCIj0Eh4cKHsWSHvD9ndU8pVxfQlmHgLAIVeDm2d4jqhQ8H/W4kku0ahe22tMW
Sk3/dOi70l7L6vpeMOINKkwv4vFiCpFECMVpybsnlbiXVS2P0NZp5pYOfX5yFgY/bcvOXWst
9JryJJWdgg9pLaVBCO4oCLf9bY6GqjDyvNq7lPw5NGdOREUEVMrUcljRcBVGtjf+6tzxaMpw
21ITbmZYP9Oz6IdYZubj06y45/SqPPLkuezrpwPU88ZR5IzfEwAhYThyzDJ8LknxlthplA/q
hLTnl2PreHxfpE0lyIMcfjhsC8KPjjztjIHdTXx8uSAwrzezQ1MPNWqVE0HB6xqnC1wv0Ilk
jJXjXZUAlMYtPpIAXuTpmNCUAlxDdGPX6aCBN20eBcQ7xgXHV03A4ZQREcIM4PIfKVhK+Czw
LRgwXp/xBfCmNyDj16JrL7RogGHt2ZYZzg98VEt068nMaKaF6aHdhAwlKIJO2i0EcqKxuFAj
uHVsBVsNwslc3XBRoOEjzUwX/QAGMim+k31qHloRuIlty3ILm8U4DDTt1k3ADFtm0luC//We
mVKaCSl9PStLzOqnie/EA7EbdVVcwMkMV6SOurGBCLysr9sdNz7Gsmg44l+UQSIjbjGuvvsg
/uXr39/JpwROVA/104n/oWnHI3jacAOWaEyocCeXIsaWN81SxG3D+4t2djJ7wPz8Qa6zsz3P
N6dag7JScNxC2AgEVegwfYLDJuQuIY9P/a/BKtw85rn/ut9FNsv76m7FLNJUdkWJ2gbO6HrK
PZhOcGH3pHJi6kw0uaLV7lsvlCWK6OTExebCBNFsBWqZtfC0l8SwMJrpL22w2q4IYI8DYbDD
gGwMAtjsoi0C5xe8Bnb8GYusouAxvGPbNN5tAuwxtskSbYIIyVzPZaySRbQO1wSwxoAi7vfr
7QFDbI8UC71ughA7bs0cJbu15mXrDECYRlCa4hmPR9BHOZ+qPDtycR6UR3CBFCHa6hbf4jte
QldeCN8gMw9/EY5RndcKuYZs8EEtwqGtuvQsKY9y6MfZ7OeQxrU86z0sPkkLYjkiVyG5fIiW
m456J8oQl3FenTBgnWHUjCPUtErsy9oZOR2Ja6WFoyGEWItjQEOpLywdl19aUbVI5ZTQEqcY
JHjGbhASt0Er3xYZdjxZcla6RKxIBdhxkVwwtEPmzvAtbhqOutqcWeCRZW4JoUuL6jhlVYOV
q6DECV27oC2Xx/OHxbY3nskfSNavZ1aeO3wGZAmmy1pGLi5YaqvFlwK7JqlOTXzE5ftlUort
KsBWpJkD9k/Lnf6M9LUdV80ChiN+r2YzkUEZZ7ZaKEZ8j1u4etNMYyYfBY93if/Jt+CxDlNv
jjCsQ1qiWDI1iPDEsmbN6Ht/0VQYHHEm9hHqMMTm2kf7PZ2HRHEBwGJrpFwUuBGzcFY48wwF
ahNh8XVyA+Z9yhuqbkkXBqsAu0HwuMIDlQkcP6qSDTwto3WAu8ai+LcrzOmLxX2P0raIg80K
H0SNn4KAxNtW1JPpLM1geTtBcCuigo9vPONcjAePh4ZxOv4TTZYsPqzW2JnSZdqGZBb3Mpaz
/0ke57ioxZlTHcdYS3QJO8V53D/ClvAGaP1Yn65XqPLN5FpMHhDwVFUZ76kCznLfY/iua7Lx
nMupT9geGnxiJ+77HX4/bVWqK18JM2Kz9Zf2GAbh/jkjrlS1WSq8f24x3Bzd4N3PIwbys5Bi
cxBEKjFaNyk8b58PYVGIINgQJbD8GIuh4LasabGoH8+Hseh3XT604vnaykvWo3G/rWIv+yDE
K31u05rcb1ipAgARH0YmT/jttl/tcFz93YC/TKo31N834iGUxQjv0Nbrbe92CcLbpYlce1dU
mXp/eZLHLWujfd+78TAtluKwR6V+qy6gi62KuhK8JRalIg3W+2hNFQM56BXoJ4qq4/I9J0YL
8HVBY7x9ADIl19G4WiZoOCtSGDhqx1PFN4ryqBvA9REoAH+iI9T71Tgfpjwptqqtahp+D1FH
iNVE9UpePRy2EL+4cPle72AYQlzS+yMBHh03W9z/ncut1oYHDYjF/WG3q795Gz6VuOToqh2S
KEzC4WrVPxQ5NM8zMUFzbR9n8nwjaooBDQlk7ZE8Z3GGN0hw8WhxEG0QEmblNltxfF6NrjnK
Y+D6kYQl+miHau2tvqnFbrvaE2LOK2t3YUiuQ6/0M02rX6tzMYrfuGbb2rpexPbpEvqqnpNb
ctGoP+HoVtAU3BdtFRGXZRVkRx9TlCJxKMfV2qe4k17Rw2z0S+jyB4FHCV3KeuVV/IgKzyMU
++zoTBih7aRnPn/466MKdsf/Vb1z/dHYjUIcXzsc6ufAo9UmdInyv7ZPTk1O2yhM3ffoCqnj
htL4jQwprwUWc1HDOU8k7OfbxLizLI2Or1IeZSyxQkdttVM26aALtMl1glC1jtmkd05XgnZl
7LC5ihNtKMV2i59XZ5Yct2mZcVZ0weqCS/4z07GI3Nf14+02Nm3ml6bYrZF+cP3vD399+P07
BPZ03Qu3raX4vWI6gq7k/SEa6tY2GdEeUBQZbU6eKeeaXVtB9Efvkku8/QWOvryIYfqkp73W
p6ZSfASicOtN25E8ZKxumAr+NgULI6bTlMDynW4CwW67XcXDNZYk2/uUwXQETekFx1L9BhQH
badeZn3MENYmwPq4IQoiKlco6TbBwbJRJoLi1w2GNvLQwQs2s6B9zfqWlRlqxGiyxaJmcjiu
YyR2rDNucs0gR5ReNebatmGEPlcwmfJaUP3EMw+A0AiL34HRc92XX4BfFqDmrXKshvjMHHOQ
x9014WXFZOiRdkNP5XhElZHDVk4ZRGPWubm+J3x+j7DgR359UGQOb+VekHw1MBX8sIg0LQl3
mzNHsONiT/mX0kxyZiasyeLHhSVpsVuj0s3IMO4479v4NE5NNwuH42eaOCYhrW9HNn7sdz3x
SGBkAfPpZ9mMdmG18DidWjUp1jy5cSJN8pnkWgG7kvg18PJoamq/luBRyNlRE527gD/Tr4qb
l+Bn9XFTU7DaVPGC+YmncuNpkLJ9pucdAavpa7Deep+dqG1DAIOMt2yOFGZtf25xadvkXnSr
EdQRqsuMcskw3/m2uLHjcBLWhWhZvVYFdtGiouvKQ0XXmjGhNVVYhkLn6xTp2OsgsNOwQg4b
dNVKWUtbRJUEsIQq2wtGG7TPwjloj6La95F5/WBA69qyARldJ3jbNa8LLuXtMsvNNilqBv9Y
avvOBkAFm8+0d57luKQQcKqv7+DRIdP5KntHff16jNEHhIrPtLXSBLl6e0Xe4jY9ZxVuLqcr
Vd1YUxE3dpIj+ZkanW9StC8z2+R0JsJqDQK2E4TFY3NM6BbAeWu+AEm8QU3YF44Ts0ZoAa7m
o2+TPLql8pBUzlPb0n/BerB6RN/SZW1uP3qoa3jNT2zDVXm3LztHW1cVavJ3WoAHV6fKDseU
A8GNURGXw8Z5K7nQST+STUj4u+Y1OILJqejtZE2nOhW3+Gp5d1WRRW1joDqN9uvdD4daymOE
TZHzUseQmmsnKRd8kpVXKy4tREx0Vyopi2k6RGsOtzujGHtlOte2aTj8HgrciE6uHaf0zOBW
Hr4BYylL5b+a+l5qrA0qCRe+Ak/TH6SwryEX4pA29jFqwniY6gvjB5kCj9yNeem4nTDxsrtW
Lfo0BbhKkdrVcuxygWSUYFBT01QDCFfZZXA139+Rhrbr9WsdbmjEubByUVfpx/IUwrkhrZLC
WH5P7FfzE02eJ9Bvxj+XG7qkcT40nWiVz104RCNBuEGt5puIWqE4UgiUJ4ekkofjEzc7FKjK
Rkr2tSVpAKDjIGObEIBnmcqypZTEouun41Lx9+fvn75+fvsBDsFlFVXUVeS4pKZbk2jNjMw0
z1mJPuUc8/dkooVedMTCNXLkbbpZrzDjjImjTuPDdhNg2WsId9Q48/ASBJoHBcj+t3ssY0ZC
rNwi79PadUg/hRB51MdmKWeW16xRqhi7eFFYkpkajPxUJeYt1kSUPTANLhQ2a6KSvz0v73X6
TuYs6f8GR+9L7FksIKbOngdUZIMZ3+Ea7RknAkIovMj2W9w5/giDA5tH+FDU+H2CWgo9bZ0J
CuLCWYMFIQ9KEMIn4NpEtb6qGwG6Uvodr/wsOpJFRRY40N0u8R0RkmKEDzv6k7sSjq9GzDEn
UVNCeZsk5ohIC188UovfP9++v/3x7jc53abY2f8BAQY+//Pu7Y/f3j5+fPv47l8j1y9/fvkF
QhP8p7VUDims0baEoT9OwU+l8kdsa1wcUOSWZOOgmANLhyWJ7/JQxfGXcG52xLNwYGOncEXP
J1awK3ZcB8xvvFqQlftHuRG/VzFkbYZqsks2p2waI4FYFdLHHsHWdgKxuax7Z4XiheOLC6ha
8+FNB/ZD7qVf5Kla8vxLr0AfPn74+t1aecxu5RW8L+nM3VLR8zJ0ajsHFrXq0VRJ1R6719eh
kkcwom/buBLy6Od0SMvLu23vqj8MiAs7Pi4wY2rMDTLmud0Y6HsurKPnKGPjToogzVFo8Wa6
QaDWdedTbNGwdQoaPwWbH4hjUDR6NVFMEMAOQtw++BIgtikdh3BmgS3rCUviPrMzugFp+Rp1
aOzHLfYeqRlYEQtLl6JobFYyg71r8eEbTNjFpa7/DEXFqlCqRrdseEcK/9d+E4hKyO09icuT
XYuka+GQn99tMuL0SrdxWpCIMuxPCChHK7QJxIbu6wE0etYRBQB7MQJKXuxXQ57XbjVAL4jb
CKtUWjUtzLMG0Cv99dlEuR6FpmuiheZcykg6uAdwPb8AXaRBJPfMFaoUBVwp2d1Ubox7C2yl
3JXz4xEUwkSuveskQhG9FdKCX+/lS1EPpxfKkYSaTAVyawcz1JA7sRsQaFPnr86QdApCPM5y
Z07Lf87TMTWMVVUnMZyj8eiWqptytgv7lTcgsKhQM9SNai3qwhjns7B/WOckfYUvuCHbfpuE
X0X+/AkCKS6tOyv/5rYj57r2/XfWbS0T//n7f7siNfvy4bfPb+/GZ9XwBq9k7a1qLuoVPWgY
RBsXNXj3/f7nO4gRKPcNuft9/AQhAuWWqHL99l/mQMnChmAbRYM607ozbNHseHWa1TXzuWUk
TIELRmA4NVVXGzKCpOszos8Ph51jJ5PBra2VAv7Ci7AAvaZ7VZqqEov1PgwRel+HqwNCtxWP
E7lI63AtVhEyqSYWIQfB1CTP9D7Y2hd+M9IWxIuNiaO5RKjl+4Rr509+mfN730E4mraRYRI9
fSQ9s6a5Xzm7+Vh+l4s3xP/wIUeTO3dnnkE89wvSL0lT9a2tSJrrEJdlVUKyB21PWRY3UuK8
+FnLvevKGutp3QSx/HKGi1a0SqwoeCuSrjn5mHZQOKbzaszlQDjV9Xjew51486RVAB85yzOk
99mNT5Xz51JXNlwwLzqLw9by0zweak1o3r68ffvw7d3XT19+//7XZ8y1AsXi1U/OuDI+Oavd
1LUvndyakoZ3mKITJqm2BrAJENajBe/0Q87l0Py6DcKJozo6AoM6udhROqdcePNiP37Xi4ar
U1I5qBBX+P2a0lXJnYpogB+/U1HVw9HVoiF7++PPv/5598eHr1/lCRU4fLlepYM4loOckF4T
JwnQIhZZ3XptGaU4qr7ZLa4TLxGYnNDtP7bwv1WAmTiYnYCcBzXcIKN2zm+ZVw9OKFAUqJx1
XXH5Sfd6Eu3EHjME0DArX4Nw79RDxEW8zUI5aauk82eGZyvh4hVZnpxVqb3WKfK1j7bY+q7A
W5od1pveqePsS8YZ/uE4GhRPWkJ6nmmRQ+7ov4woWJU9mInBagNn5mETMadcQDhAZkggE5Fp
HOC4D6Ko93pCjwq1NkCwz70/IKhsPEHrIHC76cZLiIXgZXQTwS7dRLgY9KifZm2Uor79+Col
NEeJpcdHP/2n506cldhlku6am/yY/ErrdQXX0y0M6AttbQ8Jqu21PxIjHRbNh0lNpwEj9Rht
926ftzVPw2g0OTVO2k6P6bXxmPk96fVj6BYcK0/87pKYZPvVNnSnn6QGkUedn8FZROssq5ed
Otqv3RYCcbvbeu0Wu20YRF7/KuAQPBi4kQNX8iqO8bUNuXIU0eGwsVYDv2fncNDP5u4DNbnu
0pZyLKSnoZSOqgdLuXf+sEE+LTAPmZjmIiL0KK4mS9dUwGC9ZlRZfIXn8cRFt9dT8/H2SQ/K
nTzYYdbh02cKsencWaU/78D/7NP1OorIoa+5qETj7hkNPIxd+3lJCdx1+jTZKfnN0r5gRPL4
E7VUmXN2SDL7Ez6dGnaKLW3zWMX00plui4JJkAp++d9Po75y0SrM7bsFo9JN+QBBN+aFJRPh
5rAyCzGRyDJrN7Hghis2Fx7V5kdFi5OlikUaZTZWfP7wP6YViMxnVKDKU1thNUDThWMvMQPQ
MPRoaXNEdOIIXENloKGh+mBhJuJY2xniX7jFQzxvMXnwA7OVy3qF9JQCArK5a+wtls0R4bnq
wz8C7COiHvsowIGIrTYUEuyReTTOF+M4BrZfcuAE6hBBo6Kr69x6HmDSST23xXS+Fc7RPos1
B5JSLnXRIdxq3EwEmj8yFejVTnC9LWWC1c4auSQGpfZdngbb6LDZYlZaE0t6C1eBsXdPdBgG
0/uRSY8oOloJhWCq4YmhyusUSygSzMJnarZEl1pol9UOcconeQkh/C8J2GYjLnjOXmgwa4dO
jqwcJ3BohrYevBBgu5XJYIpeBj2wTZUmBJ6U73H3ww4Lkq1CQnO3nTpUiq9yHpm+nyaEixpy
M+syQWrqrrC1YeIA6fD/GLu25sZtJf1X/HQqqd2tkCAJgg95oEhKYkxKHIKSNfOicjnOiatm
7CnbOTvZX79ogBdcGnQeZsrqr4lrA2gA3Q19oznR7cOHJUXZlWspDhFNQk9pwjhJ05WP1VuD
x5GX6q8BaqmkKc0ib30z7Bh04hCiEYfJBftYQhmu+Oo8JFmrAXCkUeLJIBF5r3+cMH2p14GM
IQBvN1GM9N+o36eYhO7y064CwyOSxZix6sw3WmxjafRDEqCLzlSAfhBTW+IW7FTwMAgIUsV5
Q+Q2XJllGe7AeEgGGrJ5bh7J0/yu/xSqn2W0C8Tx5nZvBq9SvjXqlVDEFQzcOPk139TDaXfq
tdAdDmRI6YyWaRRitdEYYj2ehEFnGL2FsDs+IPEB1AdkHsDUQHQoRIe1xpERPfbNAgzpJfQA
kQ+IbRdRHcIk2uCgxJNq6ssuxVqQRyg/L1KKdsWlvm7zA5j0C4W/wcp/y+D5KPzic0qlLeHV
h36HuVDMTDIQY1ugbSSjI6/nIX3j1tIfLh0qCIX4L6/7a2GZUXkZO44FBZ+4pEE0NAmWV8kp
Wa+H2ABRNMDhzFA1jZhAWzR5ue6L7vdcgo9sdXILrzGuZAIHikGydQVCnjSS7Q5DkihNOFas
nSf4y4SPgUM+LPeWF/t2rY93TRIyjjaNgEjAsbPQmUPopjn6qRgba98p664D9um+3tMQVdXm
vti0eYWWWCCd75H3uSO9z2EvMlvZ49NORB0DW9TfihiZccSK1YeEoBOZfF7Z937SxDNdZ62U
Ry3xyOSlgBTNW0GewAg2l2klo4MZXjMJeR6GX3iEirY2cIGDhIkng5iQjzMgMbYPNziorwYC
WiudjCkVIksAAARtdEBo4Hk40mAKsSiMBgdFFAQAMkQ05eGeYYBgIhGywgmEqhUOKyGl0Qcl
pBQbDhKwXeg1KFtTL1RhM6ywRRehmlHbXPpqN042FjYUNEG0r7Y6bEm4aQtbvZwZ+lTMjJEL
iNn4ckGFqfXYjy8M6dqUJ2BUwxT0VfFuU0QYBBURnaZlSLtCHGI8Y7aeMTZBNi3WdYKKyImg
Ig0sqAmJkD6TQIwKq4LWSqv8vpCiARATpCaHoVBnnTU3zopnvBjE+EQqAECaojOagFKGG+9N
HF3RpoaV4FzOLUsyTfq71vJ5HflwMuj1hHo2CQRTijdVc+22lQuIFfhabLcdkkt94N2pv9Yd
R9E+Sgg+2wiIBejFxcLR8SQOkOFf84YyoSrhgkGSgGIeOcYqhg4WBYBHzakZrwuwFSRi4Zrg
jctBjH2t5niPa4fGRIJ0VVtSLIlvHhfzKftwOYriGD3n0lgYZdh61IlmQuSna2lK4wEZOd2l
EosfMhY/JTH/LQxYjkwWQ8fjICYEq6PAkoima4vVqSgz5aqKACRAF6tL2VUhWRutXxpRD2xS
uWvx5YhvBl5jefH94HmuXONY3QEJPPqB5LgfClQwEN8ce9/UVkJtQKbGSmxNrEs+DSIhekqp
cVA4CUfL1PIiTtvVao4s2IKisE2UoXMBHwaeruqiYv9IKXrCUoSElcy8Yl9QnjKCHVQaHCl2
kCDagmEqTX3Ilc2oO1MewGJ8ff9zyCOyKitDkSIL7LBvC1xtG9ouDNYVccmy1u+SAZk/BD3G
pQEQsj45CpYEjV04McCzXEV38p08CJgyit3azBxDSEK0eOeBETRAwMRwx6I0jZBDAQBYWGKJ
ApSFa7t5yUFKPFVMoZJ0RKoVHeYp055ZwxuxcAzo4YUCKfo0hMZDSbpHzksUUu23uIQPEEw8
DK6zfu6c51p+fPawAn9k6xh5xobbwAzNDTqeFfhZkeCpHQh8gErgxMOHfKghKjzqIz8yVW3V
76oDxDIbI2LAgVX++dryXwOb2bnQnIAj5v81gXd9LaPPX4e+1pWvCS8r5Wq3O55Fmavuelfz
CstFZ9zCwR7f5x5fKOwTCHen3jVYKayZtltYu5AIDP5F19HJyCmQvyAoq7pPzpvmWICihxS8
rM7bvvqkCYvTw6AkGq7vEzQa1o6P/7w/fgXHiddvWGQ6JfqyOEWTm1OWwvixuJYDnxLHB4Zg
jeLgguSjpwYsWDrzzf5qWnbBIJzUWmJ4zTUTCCSYzDR0IfLxkfN6Y0S70z0bgYWD959J6ooa
3kLDv55QmwhRRla/mhhMugr5AYnKOGnax8sM57Dhi9vC5jHq2RRtjhQPyOavq6pIUXu4Z1wv
5gJw9BFliS/1cD6dyg7vlRYtHvzbYOw8b24rJtsIZIn+8Mdfzw/gfOQ+1jgm0G5LJ6KJpAnV
N8JUXwAnUw77I7glQm+kJtBw/mnrQrMhNRPKB8LSwPEx1Vnkgxrgu2g9tbmA+6ZAn8IBDtFw
SRaY51SSXmZJGrZ3mF+BTBlclbTDh4VmB0MGpIVwIZjuJCsvzTEuVou4j0BAQuMFDe5nqTEg
ZZAIvnGaYPSaYgYjs7azLYhOM6xzgbLLhwpc4+QljgnBrY1hAqMRbV9eHfI9syJ5OkIJ/mAL
wPuaCj1ZNjBSU7H/u3Y5rwutpkATGVq23pCWmsw/nfL+dnbSRzNuusLrNwGYNyjFvIzZ5fWw
XIv9cPdPGUtwvPV0t+I2w3Sa9MmnB2kRCfsiIyxsXYsZu0lcvmFmJ/5bfvgi5shj6Wll4LkV
e5cG2xEAyFjXssASWEVM7MwkmQaY0Yoazsqkxx3mYKTjuaBdGDwvxi4MDDuGW2DTCmimsxjb
3o0wy4LUGmnKwA9JimXovcOCMiulgUbUmbuB6k9nulXQv6q+yNg+uC2AXIxsVMMsE2cN6asB
u28HyDUvmyhwkoFQTT+p0eIfXTgRy3YdlYZEzjdFMiTM14vgd2q1/GgIZBJ5VaAl4nWc0os/
XoPkaZPAt3Dz289MiL02weebS7JUfyZCsF2ceBw6p1RD2/nW9tlBS6MZ75+oTjLSa7ooi/Hr
JQWzlGEnUGPaTXuyU+zyps3RPWvHaRiYZnXqZQGPI8n07IC3dIqB4UbXC0OGHT/PsGX/NtFZ
jF6qTfW2nGg0suFGo+XCsFwSRn0yP/nYIIllIcGpphmsgSAagsDEDB953nK6a+IgcnVJnYEG
8aqyedeEJI0s2ZZi1UaJO54/CI8sWYooYdmKRHxqLyvy4PNVlGXSXH91LdV2zNKIbnNPgGHy
oDYGcdqQ2K7yXZuE6LXdBNr9L92iUoTGHJr1lNFIjcJ1lXBk8QXXmFiSwGP8MRcotuZe+b4H
eK7ZOuyEmJ5u5jc2wgdQzEKb2G6ttB3/0156lNiP5C161ng6+Kvud7a2I5zT1S7zbJLaaWLA
tr5AgPljM+S7CmOAwK4nFfeZn4xQnAsPnEfJ46hVLqGj7cRk44FMVW+BYMvK9AnNhMbdrIuV
SaRLpIbIlU2XSw0bB09THrFF1WUUIgC+Gp7U5MZ5PR1rV7og2D5XQ5WwogPE4Arx6xuL54IW
wd7mmoi51zUxj7mIwRR67j8MJoK6yFssIV6QbX5IogSdbS0mxlDRs50LFqTmTRahLlIGDyVp
iEqnWH0o3rJo+AcNFspS+lG7SSb8SktnYinq1myyeEoJWgY6Jh39Q4PU0umpmABpim2jFh5s
F2eiCboRM3gYjTOsdBKinnlh3I99mHZG0JpLSNfDLQifqLS9I44x3TJew8ZjF3OBMfGU4ckK
iGWegd0WXSh02fUJre2SWIYxwBLoGEswSwaThXrmvLb7lGZkfT6APazxUpWBUHSYgx9+nHgg
e7upYVt2wZesbnv6UoUe7CwmG085AGIeCZSgx+tH47rD7J0X/BO8MmqG6rJAeFrvbJhZLQx9
zrsNxDWSodX0J5LNiHDaF+YOWAPsfbAGCZULb4N+iH3BYnUmj8ePztKeiaedOWm7HN1Rmzwc
lzKetCylKQo5u3ENa3ZCC8clZlE0seKKNAP0kt/gYSRG53EJpQc8bTA9Cmm0Pt61LTWKEc+g
U/tigo4s97E/CwsjtBWxzbSDrmtDiin2zECrQSccto9W4LMn3vXCMW+hMMTY31ijuMk39UZ/
BLCwV4NCLCGG2trUPb4l64vxvY4eD+gk8XNdVKifbWXnDJTDcai3teml3FYQcRlQTzkWBtC4
j+hDNopnxN3UR0BseiD600ou/LQp+7OM+c2rpiqMvMYAVL8/3U9bsfe/v+vxBMaS5i08Z7QU
xkDzQ94cd9fh7GOA53UGeLHJy9HnEKzDA/Ky90FTTCkfLv3F9TacYyE5Vdaa4uHl9dGNpX+u
y+p4NWLdj61zlP5lxvsw5XmznIMamRqJy0zPT78/vsTN0/NfP25evsO++M3O9Rw32jSx0Mwz
E40OvV6JXtdPThScl2d7C60AtX1u64NcHg87PaazTHPb5Hx/bQRTIf5y0LuDemZlri9WL6OV
58C2S63tITI3LbQoarHgTUymVj79++n9/uvNcMYygV7yvNoB0KEazB6F90HyMu/EiOO/htRM
aIwjqloQm0Akk4zzzysZ3lLskDi4Fe3MXE5NNffQXE2kIvrgdU1HVAOCDoTMMRYXWB74ZyI1
OOd6/23ShypPUvMcehzNdZyit1gLHBqnlrKskooWdRnrfh4VKdyGtfTb3jibka9i803vll50
dS3/8ldgn+uhJjUisWt1K1RL7FgXsD7vq/Z4OJqFavNMN6LXmprGHvL1MugmT2N58jxNA7p3
v9lSprvlKrI6YzZG9ojUfDIEcSRABRs3SBBncLCJPTwY5rSYosqX5X6Ngj/sflCwR/9Qgvll
qDy32IphV7XWc+V2X9f9sStafD+mmnEb0q2x1dDIvdOMYpz0uRWufkTgGRdvNsPnbn/UL50M
8thGIcXR9iQ6ta8+/crSJAhMni/HZujri53wSFYJk+DBWBA2py2xFJ6FjixIkt4KWdbtGhek
bNV0XtsLj0qvlXZ9puTNI34RvLk51ZpV5NvqWhQeC4WJxxfzclwQpfO1m7bvGRmFrsTVhqKL
diDi31Ryz2qwVkHQemzcm53Uc9aYIDcfk1w1tk+vj3cQIeinuqqqmzDK4p9vcvVogqaOQDrb
uq/KQVODNKJ6FBxRtfT4iIp0//zw9PXr/evfiEGY0iuHIS/2lrRc635UepQh5V+/P70IRe7h
BUKI/ffN99eXh8e3t5fXNxkb+9vTDyNhlcRwzk+lfpc+kss8jSNHzxLkjMWBKyACCLMMDSk6
MlQ5jcOkQD4FxGOrMUof7yI8oN8o1TyK9POIiZpEcYKME0FvIoJtrscCNeeIBHldkGjjfn4S
NY1ifBJWHGITl6IOkAscZXZpzx1Jedtd3AzhFbzrZtheBYpqff+s31VA45LPjLYkiOWRJozp
8mqwL/q5NwmhT0O8AUTNFuQII8cMqTEANMBc6xacxY5wjmTYS9rQZmCh0+SCqEf4mYnUId7y
wIiHO4plw6goKHUAUDTCEBklClibSuVZeuqxm5hGbJeEnvcINQ6PcdPMkQboke+I3xGmR1Ob
qFkWON0oqU6TATV0JOHcXSJCHLJQMDPC6DSRKSED2b03RBuR2DTUz5PGAX4hyTRJ6dsvVJQf
n1fSdvtckhkyq0gZRw07dDzBZDaK0aERmeZlC5CgJr0TnkUs2zjp3TIWOg017DmbHAmNhpob
RWuop29iXvnP47fH5/cbeC7KabFTV9I4iMIcmTMlxCJ0+vIlv6xpvyiWhxfBIyY2uCyfSuAO
MJomZI+/YbOemApYWvY37389i62zVUdQQcBpNRzdpKdIoBa/WtOf3h4exXL+/PgC7709fv3u
pjf3QBoFSD+3CUlR66Jx4XePOoQ+09ZdXQbE0Dj8RVGtd//t8fVeZPAs1gv3IdVRerqhPsAB
U+MMtYJj5H2dJNStVN2K9sOD0GoM2E3SAifOOg/U1JmrgJo5E42gRu5CANQEGdPHc0Byz7Hw
xEEo6oO8wImTHVDdRVJS0UKI2q1lkdAYSUxQnclGUp0p7XgeQ184vCmyhEm6X7kBOENrkRLU
kXWGU+LMT4KK1i2lKUZNMcX0eGYswW6PJzhDs8jQJsnSyBG04zmMmCuVZ06paZU1DuwhawP0
IkrDI4J/GK7M/ALvrAhpMzAEnvu1hSP03GvMHOdgPfNz4G4XgGzEohmnqj6Igq6IkO46iN17
EEpwrThJe2ywjaSC+zKHEww39f63JD74a8GTW5rnTmmB6qzQghpXxc6RWUFPNvnWJhcFsmOv
BlbdYhdXU1JFGrWRPp3j07WcyRtBc3ePk1aQMFfrym/TKEXGanmXpavTNDBQ3EhpZmBBej0X
LboOG0VVO+6v929/epefEgwUnB4AU0/qVApMcGKqt5mZ9hyWfG1Z3vGQUmMddb7QNu+AuacD
xaUkjAXqWa3+7B4DGJ+Zu/3hdJAXKGqJ/uvt/eXb0/89wnm31DWc0wHJD89Cdrq3nI7B/pwR
w03IRJmxWjqgrmW76eqBAiw0Y3rMHQOUR7W+LyXo+bLldWD63xvoQIILavtsMVmuEjbqsV03
2Qj12ASbbCHqda8zfRrCIPT0wKUggWGnamCJ9Z69icJr9x+1xaURaSTc09gSTd3rRIUWccyZ
vik0UFCYDat1R3RCT722hehij3BIjKxgnuKMORJfa1X/oLG2hVBBPf3UMtZzKtJA7qjHEpzy
zLcWm0OZhGacX5StHrIw8tisa2w9s57Axbs5CsJ+6yv5pzYsQ9G2MWpTbjNuRCMYT2xgk5g+
u7093sCl5vb15fldfDI/HChNpN/e759/v3/9/eant/t3sZt5en/8+eYPjXUsBpy98mETsEzT
uUcitYyDFfkcZMEPpEIzGmIf0TBc+4oaKo+8KxVjSLfFlTTGSh6FcuhgVX2Qbxv+141YHsQ+
9f316f6rt9Jlf7m1yznNzAUpS1RCZGlrGJ9euD0wFqdYjy/oXH5B+h/+T7qouJDYcD+YiSSy
a9EOUejL/0sj+jSi9ieKjO0iZY2TfRgTVBQI6o80yU+Ayw/JvDkpQcGFDptmxl5jgX5UOnVl
EDDqUBmhlqSdKx5eMqcZp+miDPEZbuFRneMWQGRlCbCYy8xQUkvfOn2iyPiEtvS+r2QgpabF
vMyfi1XR94kYWoFdNnj1LA/dVhSVSENdioebn/7JqOOd0G2sPCTNKaqoHkm9Da9Q4ggKSCpq
pjcO+dL+ohG7e4YvLktVY0w1khYel4G6bTZEiVMyGGFRgutHsmz1Blq/3XzIgfn5jHgKuFmY
kdo51Mwp91hXZlLzbRbYsl0VjhDDEI105VP1ktDnSWDbNQE1Dk2bNwD6oSEMDYG3oE7DjmQ4
bFwZCpQ5HVKGYvEGg5oj5nU9F1QqLrOYF+Mi4xVwmEyYO1eqtkWjdWlw5LYpkf5l6lx34CL7
w8vr+583udjTPj3cP/9y+/L6eP98Myxj75dCroLlcDYLaZRHCC4JAlwNAvzYJxAQy1NcQA3D
UyBuCrG5tJeoZlcOURQ4o3uk+5fRkQG151W46FRbBmH0B5Yek59YQhypUdSraCRP+iPDOW7Q
OcZsGhX0h5fr06CZSubx+RmHJ/MvOnJOJgGf5EJmbGoR//q4NLr0FeDC5LSR1FViU1c2jOK0
tG9enr/+PSqpv3RNY2agjruR1VVUVCwk+LGVxWWe7KsjiaqYDPems4qbP15elVblaHtRdvn8
myWdh81ed1OZaZkjr4dNt9JhEsYPAwEG/yf8cawZNQOmLmTMEE3KJyNZZI81znaNXR0g2np0
PmyEKh1hsxSlyQ9/PS4kCRLfkJFbOeIsK7CARI52tT/2Jx7l3qxyXhwHglndyK+rpjrMTw4X
L9++vTzL2FOvf9w/PN78VB2SgJDwZ92u0znnm6b4wNn6dMY5lm+7JfMeXl6+vsFT6UIAH7++
fL95fvxf/6gvT237+brFHwT0mbXIRHav99//fHp40+yK55TzHWaBet7l17zXrzUVQVqh7rqT
aYEKIL+rB3it+4gZLZb6c3Dih7w5u5abGqOa0U+BXnZiSr3I92GsB49NNvm4C/ps6wLzqtmC
MZKZ823LQS46w4x6pG83KKSSE0Vr+XAdjt2xOe4+X/tqy02+rTSGRoK/LeDxXPXKBC1cjOcW
uKny22u3/8ydB/SApznm5bUq6xLMoNq7HPXhH1vRsNQA2q5qrzKUl6fuPgy++3/KrqXbcRtH
7/tX3NXs+owelmz3nCxoiZZZ1qtEyZZro3OTcqfrTCWVuamcnvz7ASjJJilQN7Ooh/GBbxIE
KRCQJzQye6CP6LPTx+wXEKb0nS9mgMayyQm0y9jMeDSizX3TG/KMlH2t7jj3O1KptrmiRZxX
V91GHakptCtxo/BzVfCUkStPT2UmaljKK9q3GcKsSGElOeGy6i6cuXGxJ51KI3TJltPkAsPp
zOtSXLOjQ6HD0S6YK1wGwl2auxspadtvtdwzlgW0poK9lzDYva/DKS0sMaGQ/JJKu40fe3dF
DlViWynoHSCaVsV9phzmIEPNSp4/lPkvv//29fXPl/r11/tXa2IrxoEd2uHmge7ae/GWmdWf
OLBU3kgQCvo3BI1BdnL45HkgXIqojoYSTobRPqZYDxUfTgKfwAbbferiaC++5187mFk5mcvU
owv64zOH0WMjxnORsuGchlHrk68Fn6xHLnpRDmeoxCCK4MCsE7jOeEPPnccbaHfBJhVBzEKP
PGc90ohctPwM/+xD3cUewSD2u52f0CWLsqxy2Ghqb7v/lJCnhwfvh1QMeQs1LLhnfxF4cp1F
maVC1ujI9Zx6+21Kmtlpg8BZihXN2zNkewr9TXyls9Y4ofxTCidH8mbsOY6skB30a57uPf0T
vJYlgAcvjD56ZB8inG0i3Ub/CZb4Hi3feZvdKTc9CWg81YVhldVUJj0SkLxxvA3INaTx7D3r
EuzBVLCyFf1Q5OzoRdsrJ60inuxVLgreD3mS4n/LDuZsRZVdNUJi9MLTULXoc2NP1rCSKf6B
Od8G0W47RGFLrjH4m8mqFMlwufS+d/TCTemaU47nw07ZNqe6pQIWf1PEW3+/3gca72SvtmSp
ykM1NAdYAGlIcszTTcapH6eOtjyZeHhi5N0bxRuHH7zeI+ehwVW8XywyOdxKuvkJHXXBuNsx
D/Z4uYkCfnR8gqITMubeay3u6gh5v8vNxbkaNuH1cvSz93hBx66H/CPM2MaXPWk4s+CWXri9
bNOr51j3D7ZN2Po5f78zRAvzCxatbLfb/ye345aU5t7tyfPokxlNwVnSb4INO9fkdJs4ojhi
54XSNfK0Kdq4w0q5ypPDzEZjrtG43wt2LQiX9d6fWDdh0XLm6HrFU2e+w6pOY2y6/DYpGdvh
+rHP1ve/i5Bwmql6FBL7YL+niwfpWXOYqH1de1GUBLY7lccDQ0Oh0ks7NCLNSAXpgRg62fMg
f3j78vnnpSqfpCUGC6SdYimGE0wQ9CCF5xOnUjPv6kAqVQxbs4o5ZIEiNG/38XI7NNGup58Q
KU5QygZ8NU7e3KMSzTOG8UwxeEVa9+iwJOPDYRd5l3A4Xs1alddcP2TrCJyZ6rYMNzEhMvEg
M9RyF5NemCweW7WA0xz8EZB4AYi9Z/o7nclB6NKSRmX0OfZG0vYkSow+n8Qh9JsPyqMjl7aS
J3Fgk/V/bGk7FrpZRbeLSpg4bba1ZCRtPBUb7PLHeuNbvQdkWcYRDORuofhgkjr1A+k5j4jj
u3kQg6zs41A3XbXRreHVykDTeiVZbLpYnQ/nboP6x+IsTmm9izbWIYU8Ck7EgZ3w1t14V6XD
IpAP2LwymhgSnqwKpqVUMfPhbckuwrWZsCaps84uu+jl0fHFDntCNA0cAD/ywnUkzQo/6ELz
cxE6kEHs1O/CaEudmWYOPAkF+v21DoQbnwY2+kfxGSgEbFfhx5aqSMNrVjt8RMw8sBNHDleP
Gss2jOhs1Pn9UPXKlNAtSFFOUmF01fD1oz8LdDrCJa2ig8LPy1bd4Q0fO9GcLa5c4HvqMlXO
2Ecjy7fXX+4vP/7xz3/e315S29byeBiSIsXYn898gKZ8etx0kt6t8xWfuvAjGgMZHKqqxS+J
hMsMLPKIzyXzvIE9awEkVX2DzNkCEAXL+AGO0AYib5LOCwEyLwTovKDrucjKgZepMCPCqia1
pwlxtBn+IVNCMS1sF2tpVSuMd8NHdJVwhBMWTwddoqiL4KQ7mG06sOSci+xktqeALXu6GzVz
xusebD3M7IycKP96ffv879e3O+VXAYdDyQVymgNaF/SHJEx4g0NjQH8ZBBhklNV3DHZu6DX6
/k5NCtk6wUvGfOoVwFF95WZWUfxInb9wNWx00y4cgMxOW4GCiS/FKfN0HF4/nb3t66lKkBaC
/oIEaCMuTkxYr0OM6cZ3XrSlt3ycFAzOHvQ1Kxbqvi7G0WhvvsNR5Yi6IEmfiBBhF1fYYUSF
c5Zd3D1X8grWt6AVWsDPN0escMDC1HEHjUVWVVpV9FEQ4Rb0S2dDW9AWuXsis4Z20KDWkzPT
hDWFKJ3dl3GQAM6+LWTSHalvGAB2aW5NVgynmfXtJnIt3qzK06OQJ2OpTB5nTbHE8fRbFdwq
Ac0CAtKIW80E084dSRItYLZWLrLY2q9a5jcB1FaoJNvh9af//vrl5399f/mPlzxJZ79ICw9I
eCGnXP9MXrKe1UEk3xw9UPeD1nzbp6BCgnaSHclP6IqhvYSR9/Fi5jjqR/2SGOrnGCTC8T7Y
FCbtkmXBJgzYxiTPPhVMKitkGO+PmRcTdY88/3x03Gogy6jpOZpWtUUISp4e8WferRyd+cTP
bRroj0CeyMPn9wKpr8YVyBMYXeAStTRZdK9+T4QI2PAEWV07vIw/eZQvt2vOKY3pySXZCU6x
jlLGMD2r6YFnt9PNiizIfN2nddrkHHM1c+Xp1nPUToF08BeNCc5U0foQaK4ViQxmt4DvlOOK
R/WsyAV6cpvXVEcd0tjX43VoXdgkfVKWdNUmX+Dv9QBPSeH0jgia66LOF7ReZx5M4Qxcmb8G
9SVhmFwtPdfvE3LpSxpLkndtEBhW/gvDjjmZrLpSj/9o/UAH/UbkwxJDghQLwsDzdEkUPNnr
DzCRnhaMlxlevyzyOV1TXpskyT8uJA/SG3YtQC8ziR8Mx00zZXT2MrnEe/QoopWUaG1BToa5
Car9RHerphi+1MyS0fwFNv5U/hAGRoMmj4iwD5t+71SBTZUMRyunCwZtkVyBbkyU7dlu3yLa
mZ6yYLK1Rxa9H8ns0B0Xg9Chc6KGGBs0NVqScWwGfgFVisaWVNBBlkBRdxvPHzrWWPmwZL8d
LzutBti+vxRxWUeWV5U1054VMPqwaGtG3daMmLRicavGNILlQ+fHUURtA892WWMPs6JgZdBv
iKaOQSVBDeer4Bw88wfPrJOQZFRaNcUXLWapv9vRe8TYdWjAvwbbb7UsXESbiFbPFS7FyRGH
QsGtED0tvp+wOlMXbqZut3N85Zhhh9OjGXZ8m1Hw1RHaGLFPbRg6jmaIH9qdI/INognzfI++
/FJwIVwhxZSs62+Z49OvSi03geOBwgTHrkDR5RQC0d0nY4REda3q5mn7o7v2KWtytjIomYpx
7YRzdltNPmZPP6x+ZO+Gx+zdOOzm9DFYgY4jMmI8OVUh/SFWLesyFZm7S0d4pc9HhvTDuzm4
R37Ows0BO6zvnd1Ta8JXMiilHzqst5/4SgHS34fuRYew49sLwsfC5QFdaSypQ5ecQbcUApXG
XxyCbXxlUqnYNbve3S8zg7sK56rJ/GClDnmVuydn3sebeMPpu59RBeKybSr6RDpO/Z45PL4i
XBZB5JZ3ddKf6Gt+pR+KuhWOaxWFFzx0txvQvbtkhUbu1JLH7tkshdx6vnt7VbY9F3FY6de1
S6pRk2G7YEVaT/g7u6S6FKqkW3pc+sDxGAHRW3G0tiN1iXNK/658Pel31eNaYeOEJc9dj1R/
s5LUDVfm0NCtn/gPgbfZGXplvVBuOml8SNMXumj4VTSWgjVT8aONpVUuziRVf7za5QmJB8q1
Iivj+5BSA/ihOtgZPSqCbso90mOxwdYymbDC0ZiiarsldGQJt4sdA1Q7CpOVpX5j9FSlkhrx
LGYE9A/QjZO1EyCyzRbyVNb2gUlRF8eJkTiwXn3PXSjoGizrVBxX1mKCttdY7l/hCeknLTpX
w8tKuI6TYyjmsVMXqQtxbip1CGypSxM1b5IiDlX0XzlcT0K2+eJ8x2E6luqz4NgxNDYOy/j4
6VsyebHEJ0/Ht/v9959ev95fkrp7eAKYHsY8WSfX30SSf9jrXqoDL9owN5Tdis4iGTHOCBQf
icmmMu3SQncnbOQmHbmpKUFD3F0FkRxFTmN9crEPzoCIolf163r9mma1uy3JAqN8EnHgY1Q+
6pPWs6SMKj5TOYiSmmwzWnUu0TVzoTlPnuNH784+6k8cqj9Xyhnxv1ASTGi0X6rUPtGUcMZO
GblUpkjO42ObnF84FfPjsa7aM5y7kotMl9WX1fGRBVUQ4tYmR3DQUguR6ujKdXwyBIv9QH6t
NFmhdlXNiegTOltZEdclFkjF8dDZQJsTSTuwgxiSE0/ObhXFaMd6A1yFjaNIjeAomNriy09v
3+5f7z99f/v2K15tSvyG8QLpJ+ek+hu2eX399VR2VUdX2ku5qWHqhIsGCgVrF3drGp9DxPTt
sc6YWcKnfmhTYpNUBlIPFWeSqmj7t3h/aOzLjjnAUtYNXStyom2IwfkqcCOTz0tik51w2mja
YNva12FPpHci8QpiOuJcoJLWJFiHzncdiO/v3Mhwuq6AdGXOG99b3h9OCBk3SWPYRDtH0iii
rBk1hlj3tKDTNwGdZRSSYf40hshRmzyJYvJ18cxxSAP8IL+sz6EdZFIt6YkMozwkxmgEQqoa
I7TWKyNH5Mo1poBNkG/IegAQEVNzAujJMILO7GK6WQBt13oXOUJyiiHi8C+ks5B+kw0GR0O3
vkssTOi6UECmvifW2wSsZB76K1eRM8/GfafzYFk7pCeTs/p3SuoDz4r5ueBJ2TZwxIl5sIB6
udJXo10qLdm53PrhhqQHG2LouNyFPjHbkR4QwzHS6Sk9YaSozdoipkQ+PqwbmnPoUUvuEaoV
JMMSRVflO29HVFIhYaQ/rzSgyCN6SCGm0bYB7QP6+7ZZ6DZ0Rt5eMMr0+hcYSd/TZqU9stKy
2O39GMNjT+bXa/lozFNctGUPwTHRj3fEICKw3e2dAD1dFLgnjmsTsJqKnmUIGgGwLcCdJYLE
9dEEh168iIXu5FsXdMgFvUhMzRlxVnJE3bXEMPMrVzcjS/C/ZN4IuGTsDK83DBYxKTGaHLZ7
YtI0LcjtHS4CCotin9wCESE9eOoMO2JbHemu4rb6W0aD7Ezhk20CsjsFoW4oMp1CZm0eeVTF
pMgKlkrbdEJD6Dn0QBuejcEpFwzqDQ2DvxfhIy0eVxyiJ1tznA5IrkdKD9bpeLTMQxaBy5uT
zhN7izOHk89axUuuTaT7G3sALQsDQrAgPSLlr8R3OGztNN8yGUQRqYIrKKbeVOkcxgskA9iS
VQIo8lY1e+TY+kQ7FRAQcxEAOEZQ9cC4RT6xKbRHtt9tKeAZ7WcVpCe3zkBuDg+G0O+pFj7g
0SRj2Xs6wzvzyOR1iOwnk+t2X+NKk97f0KMqQxYEW8q5y5NlVKPp5IBF7u+OyKMiLa2ep0Bx
2IdhRBWgINLX4YOj2Bku3nQ6NRcUnZhySN/R+ZDiGunUlqXiQjn4Q0I4IJ1SrpEeOeoT0e3a
bh38W0I/Rjq11wF9R180jMg7k3diIhcRhkX2yDO3QlanCDDEdOv2Md2K/dbVir3jLYXOslvf
Oa6SYSielRp/ysOdR2vXn9Tt3z6uyYevunq+jQhRV7RxSF0ZKDp5swJIHK+fPkvWwWGOMmbX
OaINMQblaCPlAAJidEaAkvs1i0GzY0SavEbDcuh36JikqahmjiyXiYP8OGzeexpljKoJmmGS
t5tP2ARGNSVrWH2aUaNiPblnah8+x0/dIl0+TgCinhv8HA7qsviGt+u8zFo6RigwNuxKlNqN
OWr5TZ9Z52rI3+4/oetErM7ichj52Qadk9i1gu7uqLWgsNp45aFIHX6JN2kHnp/N7z5IRddv
Df0GaYQF/KKeXyq0aiQTzSLPqssYbQyCMMwdlueuPOumSsWZ36RZ+0S5Tl+UdKsbLiktDlEY
o6wq0d/LM68nbTgezSI4+oyzaTlPqsKifYLq2TXJeHEQDfVQQaHHxsoky6tGVJ2087mIC8tT
6iSHKBSsPMeYeZ1v3M7myvK2ohwEjmXwqzJsWTTi1qh3so50AqOimkWLdlH0B3ZoaDMlRNur
KE/kk86xfaUUsOqqxTzNE2VX60iX88U6znlZXahv8wqsMkEts5mOP2rayuDBcjwSmSPadMUh
5zVLg3E6GUmz/cazkhr49cR5Ll0c4/LJRFLA1KHNqUaWHJ+LORpfsJuKym6OZMPHpWFSCwFy
XlbH1iJX+NF3uQqKLm+FmqHOupUtfRJErGpcMZeVZGBlC9IKFo5rmdW8Zfmt7M3K1iDC8mQx
QSbycKRMkHQG/aEwmQO+PVnPAubnYqXPWEJaoCiOnJXK6U1iCcO6QadxJg0EMXSeTVPuhiwi
Ri7ORWnztpwVdh2BCLMR9i+H/Zvi6co671wyuNEf2CgZgz6vmBTaFduDtBC/smBN+6G6YQGa
UqBRF0lacansZoAolJx8RabQE4gcSzy3J4y5/XiU8chNp9MSAFN3qB0MtQzNTK9CFFVrSdBe
lEVlkj7xpjJbPFMWrf10S1ETs1atBDFaNcOpOywGdEQSaERVTL9cakVej3N2/lhPaC5KpUEj
PlO7ehSJUelPpMODcQWks1o053H4Bmz127fv335C/862goT5nQ/GUlaB75fScKr0O/nabA9r
g9knqqNd+L3fapfhrtRI9jCb1AvQal+dEmE6PHiOJuILQxIkglZSVBYjSBm0Rs1MapfXwrQA
HNOXpXryZ5JZg5sfk8MpSQ3EZLPeqaiUZVl1ZcKHkl+nN2LLgOFmlE3s9ck4zRzi6d3MgI/1
hGztoo5QgihFC3tWa4smPRfHezDV562y50m7pM2FbJdgKiQ74ID0k3XTuJjM3paquzMQBUBY
jhHr2kp2IG9LtOfL2e2H4G/GrC2N6f/t9+8vydN1dbp0I6EGLt72nofj42h2j9PJHr6Rmh6y
RL9YfgDEeM50NJLk9E3pk+35PNrIg09VcaSt+i7wvVM91dZIKmTt+3FvpzbnAQwUGty5S4Bd
MtwEPlVARdTNYOjWK9/5YbDsZpnvfH+FDO2qKCix5mezQ6fscIIlao7ZHJKC1rFnBklaVs8o
OstRBuf6BByf978kX19/JwIkqgmdLIZYPXV0+AlC/JrSNu2ItWZYE1WREjbIf7yojmmrBt1c
fL7/hl7VX9CmNZHi5cc/vr8c8jPKmEGmL7+8/jlbvr5+/f3by4/3l1/v98/3z/8Fmd6NnE73
r78pS85fvr3dX778+s9vZvMmPmt8RuJoO2e3fgbxtG2prlQWrGVHdqDzP4ISNB41yRKETGmX
yzoT/J8tpOUMyjRtPNqIwmaLqFsqnelDV9TyVDnLYjnrbLfbBFtV8sVhgWA7s6ZgdK9NFwAD
9Gzi6FgQX0N3iI2YjmoBM6nPfvHL689ffv15GVtTyZE02XlWenU0MlRuoIoaL0W41S9Avbwj
boDlVEnK6nZK36WJXdI8JfXdIS11rfNBGjKWZnwxXiPmLnhiwG3g2tg7R6GkSWr6H3oC7kwV
TldIQWnH0FGt6R5CDVT99fU7LOBfXrKvf9xf8tc/72+PAHhKhIFU/OXb57sW91MJKVHBXMtv
Zv3TaxLa5SNNaUwunQJxbBqZcGyTc9ornr/auHH7f5H2leUjo3HPXtSN1ZIgo7mw5Q1mwoIl
ZW7gGIPi9fPP9+//mf7x+vXvoJncVf++vN3/548vb/dRh/s/yp5kuXEj1l9RzSmpSl60WtIh
B7JJSoy4maRkOReWY2vGqngslyxXMu/rH9DdJHtBy3mX8QgAe18ANBZB0vK2mBcDDuHDK2Yk
ejLZF14+8HVxscbcCdfGakyPFVEcafTdl0Kd3Bwj/fWvfVyXGLMgjasqRBV4ZA/uuDUPh07l
QWztBgzyGQeh+zREVmeuPx50pxIfUPIu3lbVXI8hyA897u5OFqXz3mSZYRrfjM0iATimFOyc
IQi29XZvnLvhrgqNYykJV3mN+jsDbDM37YHO7ufshrKlFESoCLLuyjiwBEGVW6wxCEJiSldc
ey6DwvYYDm3SKG4ikPUxgYnFF8TA2vs7PawZ75SLY4SlBPLRLvZLTws7y5ue33klrB8DrCc8
ESx1FdaCf4vifb0tjWYJDzfd8w3h90BJPSDwMv/k47O3ph5kHvw7no32Tm6yAgkL/jOZDa3D
tMVNb4b0AzYfrjjboMt2KEI4umZ87eWVULurc1fb/BJqt64xFWyPbynmZ9vQWyUhlOds554z

--gBBFr7Ir9EOA20Yy--


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 13:27:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 13:27:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55932.97574 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kptJs-0007Bg-Ok; Thu, 17 Dec 2020 13:27:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55932.97574; Thu, 17 Dec 2020 13:27:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kptJs-0007BZ-Le; Thu, 17 Dec 2020 13:27:48 +0000
Received: by outflank-mailman (input) for mailman id 55932;
 Thu, 17 Dec 2020 13:27:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HMd0=FV=redhat.com=slp@srs-us1.protection.inumbo.net>)
 id 1kptJr-0007BU-Kk
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 13:27:47 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id c07b12be-9693-4e6a-a35a-7fac6f0813d0;
 Thu, 17 Dec 2020 13:27:45 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-199-abEwEbf0N_yVvfafwXb5ZQ-1; Thu, 17 Dec 2020 08:27:43 -0500
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 962D959;
 Thu, 17 Dec 2020 13:27:41 +0000 (UTC)
Received: from localhost (ovpn-112-215.rdu2.redhat.com [10.10.112.215])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 43B34E723;
 Thu, 17 Dec 2020 13:27:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c07b12be-9693-4e6a-a35a-7fac6f0813d0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1608211665;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=fcN1UDTE30qNALnBkD3BVEMwYUnsHOvNV3eVqMCCVDE=;
	b=iFy4U3hV7fCfcIkbakVqrsazLf+22lZe5zL52Gsek8AX1IMmIZlH16iXqZ4sPaNRfBz01E
	bhoSRgsXFmTmQEhpT1i1WPlbNUIMLcuEeP8sZTtTvSxZUgTi1nredJxcPMIgbjyp95wNrN
	OIB711AXRDfw/a5bYKNROhkqEDJmKKw=
X-MC-Unique: abEwEbf0N_yVvfafwXb5ZQ-1
Date: Thu, 17 Dec 2020 14:27:33 +0100
From: Sergio Lopez <slp@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>,
	Fam Zheng <fam@euphon.net>,
	Stefano Stabellini <sstabellini@kernel.org>, qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>, "Michael S. Tsirkin" <mst@redhat.com>,
	qemu-devel@nongnu.org, Max Reitz <mreitz@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 2/4] block: Avoid processing BDS twice in
 bdrv_set_aio_context_ignore()
Message-ID: <20201217132733.cma4u2niu3texah3@mhamilton>
References: <20201215131527.evpidxevevtfy54n@mhamilton>
 <20201215150119.GE8185@merkur.fritz.box>
 <20201215172337.w7vcn2woze2ejgco@mhamilton>
 <20201216123514.GD7548@merkur.fritz.box>
 <20201216145502.yiejsw47q5pfbzio@mhamilton>
 <20201216183102.GH7548@merkur.fritz.box>
 <20201217093744.tg6ik73o45nidcs4@mhamilton>
 <20201217105830.GA12328@merkur.fritz.box>
 <d7c1ee7f-4171-1407-3a71-a7e45708cc4a@virtuozzo.com>
 <20201217130602.GB12328@merkur.fritz.box>
MIME-Version: 1.0
In-Reply-To: <20201217130602.GB12328@merkur.fritz.box>
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=slp@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="5763q23vvthyoprt"
Content-Disposition: inline

--5763q23vvthyoprt
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Thu, Dec 17, 2020 at 02:06:02PM +0100, Kevin Wolf wrote:
> Am 17.12.2020 um 13:50 hat Vladimir Sementsov-Ogievskiy geschrieben:
> > 17.12.2020 13:58, Kevin Wolf wrote:
> > > Am 17.12.2020 um 10:37 hat Sergio Lopez geschrieben:
> > > > On Wed, Dec 16, 2020 at 07:31:02PM +0100, Kevin Wolf wrote:
> > > > > Am 16.12.2020 um 15:55 hat Sergio Lopez geschrieben:
> > > > > > On Wed, Dec 16, 2020 at 01:35:14PM +0100, Kevin Wolf wrote:
> > > > > > > Anyway, trying to reconstruct the block graph with BdrvChild pointers
> > > > > > > annotated at the edges:
> > > > > > >
> > > > > > > BlockBackend
> > > > > > >        |
> > > > > > >        v
> > > > > > >    backup-top ------------------------+
> > > > > > >        |   |                          |
> > > > > > >        |   +-----------------------+  |
> > > > > > >        |            0x5655068b8510 |  | 0x565505e3c450
> > > > > > >        |                           |  |
> > > > > > >        | 0x565505e42090            |  |
> > > > > > >        v                           |  |
> > > > > > >      qcow2 ---------------------+  |  |
> > > > > > >        |                        |  |  |
> > > > > > >        | 0x565505e52060         |  |  | ??? [1]
> > > > > > >        |                        |  |  |  |
> > > > > > >        v         0x5655066a34d0 |  |  |  | 0x565505fc7aa0
> > > > > > >      file                       v  v  v  v
> > > > > > >                               qcow2 (backing)
> > > > > > >                                      |
> > > > > > >                                      | 0x565505e41d20
> > > > > > >                                      v
> > > > > > >                                    file
> > > > > > >
> > > > > > > [1] This seems to be a BdrvChild with a non-BDS parent. Probably a
> > > > > > >      BdrvChild directly owned by the backup job.
> > > > > > >
> > > > > > > > So it seems this is happening:
> > > > > > > >
> > > > > > > > backup-top (5e48030) <---------| (5)
> > > > > > > >     |    |                      |
> > > > > > > >     |    | (6) ------------> qcow2 (5fbf660)
> > > > > > > >     |                           ^    |
> > > > > > > >     |                       (3) |    | (4)
> > > > > > > >     |-> (1) qcow2 (5e5d420) -----    |-> file (6bc0c00)
> > > > > > > >     |
> > > > > > > >     |-> (2) file (5e52060)
> > > > > > > >
> > > > > > > > backup-top (5e48030), the BDS that was passed as argument in the first
> > > > > > > > bdrv_set_aio_context_ignore() call, is re-entered when qcow2 (5fbf660)
> > > > > > > > is processing its parents, and the latter is also re-entered when the
> > > > > > > > first one starts processing its children again.
> > > > > > >
> > > > > > > Yes, but look at the BdrvChild pointers, it is through different edges
> > > > > > > that we come back to the same node. No BdrvChild is used twice.
> > > > > > >
> > > > > > > If backup-top had added all of its children to the ignore list before
> > > > > > > calling into the overlay qcow2, the backing qcow2 wouldn't eventually
> > > > > > > have called back into backup-top.
> > > > > >
> > > > > > I've tested a patch that first adds every child to the ignore list,
> > > > > > and then processes those that weren't there before, as you suggested
> > > > > > in a previous email. With that, the offending qcow2 is not re-entered,
> > > > > > so we avoid the crash, but backup-top is still entered twice:
> > > > >
> > > > > I think we also need to add every parent to the ignore list before calling
> > > > > callbacks, though it doesn't look like this is the problem you're
> > > > > currently seeing.
> > > >
> > > > I agree.
> > > >
> > > > > > bs=0x560db0e3b030 (backup-top) enter
> > > > > > bs=0x560db0e3b030 (backup-top) processing children
> > > > > > bs=0x560db0e3b030 (backup-top) calling bsaci child=0x560db0e2f450 (child->bs=0x560db0fb2660)
> > > > > > bs=0x560db0fb2660 (qcow2) enter
> > > > > > bs=0x560db0fb2660 (qcow2) processing children
> > > > > > bs=0x560db0fb2660 (qcow2) calling bsaci child=0x560db0e34d20 (child->bs=0x560db1bb3c00)
> > > > > > bs=0x560db1bb3c00 (file) enter
> > > > > > bs=0x560db1bb3c00 (file) processing children
> > > > > > bs=0x560db1bb3c00 (file) processing parents
> > > > > > bs=0x560db1bb3c00 (file) processing itself
> > > > > > bs=0x560db0fb2660 (qcow2) calling bsaci child=0x560db16964d0 (child->bs=0x560db0e50420)
> > > > > > bs=0x560db0e50420 (qcow2) enter
> > > > > > bs=0x560db0e50420 (qcow2) processing children
> > > > > > bs=0x560db0e50420 (qcow2) calling bsaci child=0x560db0e34ea0 (child->bs=0x560db0e45060)
> > > > > > bs=0x560db0e45060 (file) enter
> > > > > > bs=0x560db0e45060 (file) processing children
> > > > > > bs=0x560db0e45060 (file) processing parents
> > > > > > bs=0x560db0e45060 (file) processing itself
> > > > > > bs=0x560db0e50420 (qcow2) processing parents
> > > > > > bs=0x560db0e50420 (qcow2) processing itself
> > > > > > bs=0x560db0fb2660 (qcow2) processing parents
> > > > > > bs=0x560db0fb2660 (qcow2) calling set_aio_ctx child=0x560db1672860
> > > > > > bs=0x560db0fb2660 (qcow2) calling set_aio_ctx child=0x560db1b14a20
> > > > > > bs=0x560db0e3b030 (backup-top) enter
> > > > > > bs=0x560db0e3b030 (backup-top) processing children
> > > > > > bs=0x560db0e3b030 (backup-top) processing parents
> > > > > > bs=0x560db0e3b030 (backup-top) calling set_aio_ctx child=0x560db0e332d0
> > > > > > bs=0x560db0e3b030 (backup-top) processing itself
> > > > > > bs=0x560db0fb2660 (qcow2) processing itself
> > > > > > bs=0x560db0e3b030 (backup-top) calling bsaci child=0x560db0e35090 (child->bs=0x560db0e50420)
> > > > > > bs=0x560db0e50420 (qcow2) enter
> > > > > > bs=0x560db0e3b030 (backup-top) processing parents
> > > > > > bs=0x560db0e3b030 (backup-top) processing itself
> > > > > >
> > > > > > I see that "blk_do_set_aio_context()" passes "blk->root" to
> > > > > > "bdrv_child_try_set_aio_context()" so it's already in the ignore list,
> > > > > > so I'm not sure what's happening here. Is backup-top referenced
> > > > > > from two different BdrvChild or is "blk->root" not pointing to
> > > > > > backup-top's BDS?
> > > > >
> > > > > The second time that backup-top is entered, it is not as the BDS of
> > > > > blk->root, but as the parent node of the overlay qcow2. Which is
> > > > > interesting, because last time it was still the backing qcow2, so the
> > > > > change did have _some_ effect.
> > > > >
> > > > > The part that I don't understand is why you still get the line with
> > > > > child=0x560db1b14a20, because when you add all children to the ignore
> > > > > list first, that should have been put into the ignore list as one of the
> > > > > first things in the whole process (when backup-top was first entered).
> > > > >
> > > > > Is 0x560db1b14a20 a BdrvChild that has backup-top as its opaque value,
> > > > > but isn't actually present in backup-top's bs->children?
> > > >
> > > > Exactly, that line corresponds to this chunk of code:
> > > >
> > > > <---- begin ---->
> > > >      QLIST_FOREACH(child, &bs->parents, next_parent) {
> > > >          if (g_slist_find(*ignore, child)) {
> > > >              continue;
> > > >          }
> > > >          assert(child->klass->set_aio_ctx);
> > > >          *ignore = g_slist_prepend(*ignore, child);
> > > >          fprintf(stderr, "bs=%p (%s) calling set_aio_ctx child=%p\n", bs, bs->drv->format_name, child);
> > > >          child->klass->set_aio_ctx(child, new_context, ignore);
> > > >      }
> > > > <---- end ---->
> > > >
> > > > Do you think it's safe to re-enter backup-top, or should we look for a
> > > > way to avoid this?
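The two traversal orders being debated here (marking one BdrvChild at a time as it is followed, versus putting all of a node's edges on the ignore list before recursing into any of them) can be modeled outside QEMU. Below is a toy Python sketch with invented Node/Edge classes, not QEMU's actual structures, run on a diamond-shaped graph like the one in this thread:

```python
# Toy model (not QEMU code) of bdrv_set_aio_context_ignore()'s traversal.
# Nodes stand in for BDSes, Edge objects for BdrvChild; the "ignore" set
# tracks *edges*, as in QEMU, not nodes.

class Edge:
    def __init__(self, parent, child):
        self.parent, self.child = parent, child

class Node:
    def __init__(self, name):
        self.name = name
        self.children = []  # edges where self is the parent
        self.parents = []   # edges where self is the child

def connect(parent, child):
    e = Edge(parent, child)
    parent.children.append(e)
    child.parents.append(e)

def naive_set_ctx(node, ignore, entered):
    """Mark each edge only at the moment we are about to follow it."""
    entered.append(node.name)
    for e in node.children + node.parents:
        if e in ignore:
            continue
        ignore.add(e)
        other = e.child if e.parent is node else e.parent
        naive_set_ctx(other, ignore, entered)

def two_phase_set_ctx(node, ignore, entered):
    """Mark *all* of this node's edges before recursing into any of them."""
    entered.append(node.name)
    todo = [e for e in node.children + node.parents if e not in ignore]
    ignore.update(todo)
    for e in todo:
        other = e.child if e.parent is node else e.parent
        two_phase_set_ctx(other, ignore, entered)

# A diamond: "top" reaches "base" both directly and through "mid"
# (think backup-top -> overlay qcow2 -> backing qcow2).
top, mid, base = Node("top"), Node("mid"), Node("base")
connect(top, mid)
connect(top, base)
connect(mid, base)

entered_naive, entered_two_phase = [], []
naive_set_ctx(top, set(), entered_naive)
two_phase_set_ctx(top, set(), entered_two_phase)

# Naive marking walks back into the start node through the not-yet-claimed
# parent edge; two-phase marking cannot re-enter "top". Note that "base" is
# still visited once per distinct incoming edge, mirroring the log where
# backup-top is entered a second time through a different BdrvChild.
print(entered_naive)       # ['top', 'mid', 'base', 'top']
print(entered_two_phase)   # ['top', 'mid', 'base', 'base']
```

The model only illustrates the ordering argument; it has no drained sections, AioContexts, or non-BDS parents.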
> > >
> > > I think it should be avoided, but I don't understand why putting all
> > > children of backup-top into the ignore list doesn't already avoid it. If
> > > backup-top is in the parents list of qcow2, then qcow2 should be in the
> > > children list of backup-top and therefore the BdrvChild should already
> > > be in the ignore list.
> > >
> > > The only way I can explain this is that backup-top and qcow2 have
> > > different ideas about which BdrvChild objects exist that connect them.
> > > Or that the graph changes between both places, but I don't see how that
> > > could happen in bdrv_set_aio_context_ignore().
> > >
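Kevin's consistency argument rests on a mirror invariant of the block graph: an edge recorded in one node's children list must be the very same object found in the other node's parents list. A hypothetical checker for that invariant, as a plain Python model (invented names, not QEMU's structures), looks like this:

```python
# Model of the invariant behind the reasoning above: if backup-top is in
# qcow2's parents list, the same edge object must be in backup-top's
# children list. An edge present on one side only ("dangling") is exactly
# what would let a node be reached through a BdrvChild it never put on
# the ignore list.

class Edge:
    def __init__(self, parent, child):
        self.parent, self.child = parent, child

class Node:
    def __init__(self, name):
        self.name = name
        self.children = []
        self.parents = []

def check_symmetry(nodes):
    """Return a list of human-readable violations of the mirror invariant."""
    problems = []
    for n in nodes:
        for e in n.children:
            if e not in e.child.parents:
                problems.append(f"{n.name}: child edge missing from {e.child.name}.parents")
        for e in n.parents:
            if e not in e.parent.children:
                problems.append(f"{n.name}: parent edge missing from {e.parent.name}.children")
    return problems

a, b = Node("backup-top"), Node("qcow2")
good = Edge(a, b)
a.children.append(good)
b.parents.append(good)
assert check_symmetry([a, b]) == []  # properly mirrored edge: no violations

# Simulate the suspected inconsistency: an edge recorded in qcow2's
# parents list but never added to backup-top's children list.
dangling = Edge(a, b)
b.parents.append(dangling)
print(check_symmetry([a, b]))
```

In the model the dangling edge is reported immediately; in QEMU the asymmetric view only shows up indirectly, as a node entered through an edge that is not in its own bs->children.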
> >
> > bdrv_set_aio_context_ignore() does bdrv_drained_begin().. As I reported
> > recently, nothing prevents a job from finishing and modifying the graph
> > during another drained section. That may be the case here.
>
> Good point, this might be the same bug then.
>
> If everything worked correctly, a job completion could only happen on
> the outer bdrv_set_aio_context_ignore(). But after that, we are already
> in a drain section, so the job should be quiesced and a second drain
> shouldn't cause any additional graph changes.
>
> I would have to go back to the other discussion, but I think it was
> related to block jobs that are already in the completion process and
> keep moving forward even though they are supposed to be quiesced.
>
> If I remember correctly, actually pausing them at this point looked
> difficult. Maybe what we should do then is let .drained_poll return
> true until they have actually fully completed?
>
> Ah, but was this something that would deadlock because the job
> completion callbacks use drain sections themselves?
>
> > If backup-top is involved, I suppose that the graph modification is in
> > backup_clean, when we remove the filter. Who is doing the
> > set_aio_context in this issue? I mean, what is the backtrace of
> > bdrv_set_aio_context_ignore()?
>
> Sergio, can you provide the backtrace and also test if the theory with a
> job completion in the middle of the process is what you actually hit?

No, I'm sure the job is not finishing in the middle of the
set_aio_context chain, which is started by a
virtio_blk_data_plane_[start|stop], which in turn is triggered by a
guest reboot.

This is a stack trace that reaches to the point in which backup-top is
entered a second time:

#0  0x0000560c3e173bbd in child_job_set_aio_ctx
    (c=<optimized out>, ctx=0x560c40c45630, ignore=0x7f6d4eeb6f40) at ../blockjob.c:159
#1  0x0000560c3e1aefc6 in bdrv_set_aio_context_ignore
    (bs=0x560c40dc3660, new_context=0x560c40c45630, ignore=0x7f6d4eeb6f40) at ../block.c:6509
#2  0x0000560c3e1aee8a in bdrv_set_aio_context_ignore
    (bs=bs@entry=0x560c40c4c030, new_context=new_context@entry=0x560c40c45630, ignore=ignore@entry=0x7f6d4eeb6f40) at ../block.c:6487
#3  0x0000560c3e1af503 in bdrv_child_try_set_aio_context
    (bs=bs@entry=0x560c40c4c030, ctx=ctx@entry=0x560c40c45630, ignore_child=<optimized out>, errp=errp@entry=0x7f6d4eeb6fc8) at ../block.c:6619
#4  0x0000560c3e1e561a in blk_do_set_aio_context
    (blk=0x560c41ca4610, new_context=0x560c40c45630, update_root_node=update_root_node@entry=true, errp=errp@entry=0x7f6d4eeb6fc8) at ../block/block-backend.c:2027
#5  0x0000560c3e1e740d in blk_set_aio_context
    (blk=<optimized out>, new_context=<optimized out>, errp=errp@entry=0x7f6d4eeb6fc8)
    at ../block/block-backend.c:2048
#6  0x0000560c3e10de78 in virtio_blk_data_plane_start (vdev=<optimized out>)
    at ../hw/block/dataplane/virtio-blk.c:220
#7  0x0000560c3de691d2 in virtio_bus_start_ioeventfd (bus=bus@entry=0x560c41ca1e98)
    at ../hw/virtio/virtio-bus.c:222
#8  0x0000560c3de4f907 in virtio_pci_start_ioeventfd (proxy=0x560c41c99d90)
    at ../hw/virtio/virtio-pci.c:1261
#9  0x0000560c3de4f907 in virtio_pci_common_write
    (opaque=0x560c41c99d90, addr=<optimized out>, val=<optimized out>, size=<optimized out>)
    at ../hw/virtio/virtio-pci.c:1261
#10 0x0000560c3e145d81 in memory_region_write_accessor
    (mr=0x560c41c9a770, addr=20, value=<optimized out>, size=1, shift=<optimized out>, mask=<optimized out>, attrs=...) at ../softmmu/memory.c:491
#11 0x0000560c3e1447de in access_with_adjusted_size
    (addr=addr@entry=20, value=value@entry=0x7f6d4eeb71a8, size=size@entry=1, access_size_min=<optimized out>, access_size_max=<optimized out>, access_fn=
    0x560c3e145c80 <memory_region_write_accessor>, mr=0x560c41c9a770, attrs=...)
    at ../softmmu/memory.c:552
#12 0x0000560c3e148052 in memory_region_dispatch_write
    (mr=mr@entry=0x560c41c9a770, addr=20, data=<optimized out>, op=<optimized out>, attrs=attrs@entry=...) at ../softmmu/memory.c:1501
#13 0x0000560c3e06b5b7 in flatview_write_continue
    (fv=fv@entry=0x7f6d400ed3e0, addr=addr@entry=4261429268, attrs=..., ptr=ptr@entry=0x7f6d71dad028, len=len@entry=1, addr1=<optimized out>, l=<optimized out>, mr=0x560c41c9a770)
    at /home/BZs/1900326/qemu/include/qemu/host-utils.h:164
#14 0x0000560c3e06b7d6 in flatview_write
    (fv=0x7f6d400ed3e0, addr=addr@entry=4261429268, attrs=attrs@entry=..., buf=buf@entry=0x7f6d71dad028, len=len@entry=1) at ../softmmu/physmem.c:2799
#15 0x0000560c3e06e330 in address_space_write
    (as=0x560c3ec0a920 <address_space_memory>, addr=4261429268, attrs=..., buf=buf@entry=0x7f6d71dad028, len=1) at ../softmmu/physmem.c:2891
#16 0x0000560c3e06e3ba in address_space_rw (as=<optimized out>, addr=<optimized out>, attrs=...,
    attrs@entry=..., buf=buf@entry=0x7f6d71dad028, len=<optimized out>, is_write=<optimized out>)
    at ../softmmu/physmem.c:2901
#17 0x0000560c3e10021a in kvm_cpu_exec (cpu=cpu@entry=0x560c40d7e0d0) at ../accel/kvm/kvm-all.c:2541
#18 0x0000560c3e1445e5 in kvm_vcpu_thread_fn (arg=arg@entry=0x560c40d7e0d0) at ../accel/kvm/kvm-cpus.c:49
#19 0x0000560c3e2c798a in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:521
#20 0x00007f6d6ba8614a in start_thread () at /lib64/libpthread.so.0
#21 0x00007f6d6b7b8763 in clone () at /lib64/libc.so.6

Thanks,
Sergio.

--5763q23vvthyoprt

--5763q23vvthyoprt--



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 14:01:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 14:01:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55936.97586 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kptqC-0002af-FV; Thu, 17 Dec 2020 14:01:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55936.97586; Thu, 17 Dec 2020 14:01:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kptqC-0002aY-CN; Thu, 17 Dec 2020 14:01:12 +0000
Received: by outflank-mailman (input) for mailman id 55936;
 Thu, 17 Dec 2020 14:01:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=852C=FV=virtuozzo.com=vsementsov@srs-us1.protection.inumbo.net>)
 id 1kptqA-0002aT-Rj
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 14:01:11 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.1.102]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4f544834-821c-4fdb-9cc9-69e7789bcf94;
 Thu, 17 Dec 2020 14:01:06 +0000 (UTC)
Received: from AM7PR08MB5494.eurprd08.prod.outlook.com (2603:10a6:20b:dc::15)
 by AS8PR08MB5944.eurprd08.prod.outlook.com (2603:10a6:20b:297::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.24; Thu, 17 Dec
 2020 14:01:05 +0000
Received: from AM7PR08MB5494.eurprd08.prod.outlook.com
 ([fe80::d585:99a4:d7a4:d478]) by AM7PR08MB5494.eurprd08.prod.outlook.com
 ([fe80::d585:99a4:d7a4:d478%4]) with mapi id 15.20.3654.024; Thu, 17 Dec 2020
 14:01:05 +0000
Received: from [192.168.100.5] (185.215.60.92) by
 AM0PR06CA0089.eurprd06.prod.outlook.com (2603:10a6:208:fa::30) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3654.12 via Frontend Transport; Thu, 17 Dec 2020 14:01:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4f544834-821c-4fdb-9cc9-69e7789bcf94
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=virtuozzo.com;
Subject: Re: [PATCH v2 2/4] block: Avoid processing BDS twice in
 bdrv_set_aio_context_ignore()
To: Kevin Wolf <kwolf@redhat.com>
Cc: Sergio Lopez <slp@redhat.com>, Fam Zheng <fam@euphon.net>,
 Stefano Stabellini <sstabellini@kernel.org>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, "Michael S. Tsirkin" <mst@redhat.com>,
 qemu-devel@nongnu.org, Max Reitz <mreitz@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
References: <20201215121233.GD8185@merkur.fritz.box>
 <20201215131527.evpidxevevtfy54n@mhamilton>
 <20201215150119.GE8185@merkur.fritz.box>
 <20201215172337.w7vcn2woze2ejgco@mhamilton>
 <20201216123514.GD7548@merkur.fritz.box>
 <20201216145502.yiejsw47q5pfbzio@mhamilton>
 <20201216183102.GH7548@merkur.fritz.box>
 <20201217093744.tg6ik73o45nidcs4@mhamilton>
 <20201217105830.GA12328@merkur.fritz.box>
 <d7c1ee7f-4171-1407-3a71-a7e45708cc4a@virtuozzo.com>
 <20201217130602.GB12328@merkur.fritz.box>
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-ID: <84380429-388c-f8c3-d318-788eda550c97@virtuozzo.com>
Date: Thu, 17 Dec 2020 17:01:03 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
In-Reply-To: <20201217130602.GB12328@merkur.fritz.box>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [185.215.60.92]
X-ClientProxiedBy: AM0PR06CA0089.eurprd06.prod.outlook.com
 (2603:10a6:208:fa::30) To AM7PR08MB5494.eurprd08.prod.outlook.com
 (2603:10a6:20b:dc::15)
MIME-Version: 1.0

17.12.2020 16:06, Kevin Wolf wrote:
> Am 17.12.2020 um 13:50 hat Vladimir Sementsov-Ogievskiy geschrieben:
>> 17.12.2020 13:58, Kevin Wolf wrote:
>>> Am 17.12.2020 um 10:37 hat Sergio Lopez geschrieben:
>>>> On Wed, Dec 16, 2020 at 07:31:02PM +0100, Kevin Wolf wrote:
>>>>> Am 16.12.2020 um 15:55 hat Sergio Lopez geschrieben:
>>>>>> On Wed, Dec 16, 2020 at 01:35:14PM +0100, Kevin Wolf wrote:
>>>>>>> Anyway, trying to reconstruct the block graph with BdrvChild pointers
>>>>>>> annotated at the edges:
>>>>>>>
>>>>>>> BlockBackend
>>>>>>>         |
>>>>>>>         v
>>>>>>>     backup-top ------------------------+
>>>>>>>         |   |                          |
>>>>>>>         |   +-----------------------+  |
>>>>>>>         |            0x5655068b8510 |  | 0x565505e3c450
>>>>>>>         |                           |  |
>>>>>>>         | 0x565505e42090            |  |
>>>>>>>         v                           |  |
>>>>>>>       qcow2 ---------------------+  |  |
>>>>>>>         |                        |  |  |
>>>>>>>         | 0x565505e52060         |  |  | ??? [1]
>>>>>>>         |                        |  |  |  |
>>>>>>>         v         0x5655066a34d0 |  |  |  | 0x565505fc7aa0
>>>>>>>       file                       v  v  v  v
>>>>>>>                                qcow2 (backing)
>>>>>>>                                       |
>>>>>>>                                       | 0x565505e41d20
>>>>>>>                                       v
>>>>>>>                                     file
>>>>>>>
>>>>>>> [1] This seems to be a BdrvChild with a non-BDS parent. Probably a
>>>>>>>       BdrvChild directly owned by the backup job.
>>>>>>>
>>>>>>>> So it seems this is happening:
>>>>>>>>
>>>>>>>> backup-top (5e48030) <---------| (5)
>>>>>>>>      |    |                      |
>>>>>>>>      |    | (6) ------------> qcow2 (5fbf660)
>>>>>>>>      |                           ^    |
>>>>>>>>      |                       (3) |    | (4)
>>>>>>>>      |-> (1) qcow2 (5e5d420) -----    |-> file (6bc0c00)
>>>>>>>>      |
>>>>>>>>      |-> (2) file (5e52060)
>>>>>>>>
>>>>>>>> backup-top (5e48030), the BDS that was passed as argument in the first
>>>>>>>> bdrv_set_aio_context_ignore() call, is re-entered when qcow2 (5fbf660)
>>>>>>>> is processing its parents, and the latter is also re-entered when the
>>>>>>>> first one starts processing its children again.
>>>>>>>
>>>>>>> Yes, but look at the BdrvChild pointers, it is through different edges
>>>>>>> that we come back to the same node. No BdrvChild is used twice.
>>>>>>>
>>>>>>> If backup-top had added all of its children to the ignore list before
>>>>>>> calling into the overlay qcow2, the backing qcow2 wouldn't eventually
>>>>>>> have called back into backup-top.
>>>>>>
>>>>>> I've tested a patch that first adds every child to the ignore list,
>>>>>> and then processes those that weren't there before, as you suggested
>>>>>> on a previous email. With that, the offending qcow2 is not re-entered,
>>>>>> so we avoid the crash, but backup-top is still entered twice:
>>>>>
>>>>> I think we also need to add every parent to the ignore list before
>>>>> calling the callbacks, though it doesn't look like this is the problem
>>>>> you're currently seeing.
>>>>
>>>> I agree.
>>>>
>>>>>> bs=0x560db0e3b030 (backup-top) enter
>>>>>> bs=0x560db0e3b030 (backup-top) processing children
>>>>>> bs=0x560db0e3b030 (backup-top) calling bsaci child=0x560db0e2f450 (child->bs=0x560db0fb2660)
>>>>>> bs=0x560db0fb2660 (qcow2) enter
>>>>>> bs=0x560db0fb2660 (qcow2) processing children
>>>>>> bs=0x560db0fb2660 (qcow2) calling bsaci child=0x560db0e34d20 (child->bs=0x560db1bb3c00)
>>>>>> bs=0x560db1bb3c00 (file) enter
>>>>>> bs=0x560db1bb3c00 (file) processing children
>>>>>> bs=0x560db1bb3c00 (file) processing parents
>>>>>> bs=0x560db1bb3c00 (file) processing itself
>>>>>> bs=0x560db0fb2660 (qcow2) calling bsaci child=0x560db16964d0 (child->bs=0x560db0e50420)
>>>>>> bs=0x560db0e50420 (qcow2) enter
>>>>>> bs=0x560db0e50420 (qcow2) processing children
>>>>>> bs=0x560db0e50420 (qcow2) calling bsaci child=0x560db0e34ea0 (child->bs=0x560db0e45060)
>>>>>> bs=0x560db0e45060 (file) enter
>>>>>> bs=0x560db0e45060 (file) processing children
>>>>>> bs=0x560db0e45060 (file) processing parents
>>>>>> bs=0x560db0e45060 (file) processing itself
>>>>>> bs=0x560db0e50420 (qcow2) processing parents
>>>>>> bs=0x560db0e50420 (qcow2) processing itself
>>>>>> bs=0x560db0fb2660 (qcow2) processing parents
>>>>>> bs=0x560db0fb2660 (qcow2) calling set_aio_ctx child=0x560db1672860
>>>>>> bs=0x560db0fb2660 (qcow2) calling set_aio_ctx child=0x560db1b14a20
>>>>>> bs=0x560db0e3b030 (backup-top) enter
>>>>>> bs=0x560db0e3b030 (backup-top) processing children
>>>>>> bs=0x560db0e3b030 (backup-top) processing parents
>>>>>> bs=0x560db0e3b030 (backup-top) calling set_aio_ctx child=0x560db0e332d0
>>>>>> bs=0x560db0e3b030 (backup-top) processing itself
>>>>>> bs=0x560db0fb2660 (qcow2) processing itself
>>>>>> bs=0x560db0e3b030 (backup-top) calling bsaci child=0x560db0e35090 (child->bs=0x560db0e50420)
>>>>>> bs=0x560db0e50420 (qcow2) enter
>>>>>> bs=0x560db0e3b030 (backup-top) processing parents
>>>>>> bs=0x560db0e3b030 (backup-top) processing itself
>>>>>>
>>>>>> I see that "blk_do_set_aio_context()" passes "blk->root" to
>>>>>> "bdrv_child_try_set_aio_context()" so it's already in the ignore list,
>>>>>> and I'm not sure what's happening here. Is backup-top referenced
>>>>>> from two different BdrvChild objects, or is "blk->root" not pointing
>>>>>> to backup-top's BDS?
>>>>>
>>>>> The second time that backup-top is entered, it is not as the BDS of
>>>>> blk->root, but as the parent node of the overlay qcow2. Which is
>>>>> interesting, because last time it was still the backing qcow2, so the
>>>>> change did have _some_ effect.
>>>>>
>>>>> The part that I don't understand is why you still get the line with
>>>>> child=0x560db1b14a20, because when you add all children to the ignore
>>>>> list first, that should have been put into the ignore list as one of the
>>>>> first things in the whole process (when backup-top was first entered).
>>>>>
>>>>> Is 0x560db1b14a20 a BdrvChild that has backup-top as its opaque value,
>>>>> but isn't actually present in backup-top's bs->children?
>>>>
>>>> Exactly, that line corresponds to this chunk of code:
>>>>
>>>> <---- begin ---->
>>>>       QLIST_FOREACH(child, &bs->parents, next_parent) {
>>>>           if (g_slist_find(*ignore, child)) {
>>>>               continue;
>>>>           }
>>>>           assert(child->klass->set_aio_ctx);
>>>>           *ignore = g_slist_prepend(*ignore, child);
>>>>           fprintf(stderr, "bs=%p (%s) calling set_aio_ctx child=%p\n", bs, bs->drv->format_name, child);
>>>>           child->klass->set_aio_ctx(child, new_context, ignore);
>>>>       }
>>>> <---- end ---->
>>>>
>>>> Do you think it's safe to re-enter backup-top, or should we look for a
>>>> way to avoid this?
>>>
>>> I think it should be avoided, but I don't understand why putting all
>>> children of backup-top into the ignore list doesn't already avoid it. If
>>> backup-top is in the parents list of qcow2, then qcow2 should be in the
>>> children list of backup-top and therefore the BdrvChild should already
>>> be in the ignore list.
>>>
>>> The only way I can explain this is that backup-top and qcow2 have
>>> different ideas about which BdrvChild objects exist that connect them.
>>> Or that the graph changes between both places, but I don't see how that
>>> could happen in bdrv_set_aio_context_ignore().
>>>
>>
>> bdrv_set_aio_context_ignore() does bdrv_drained_begin(). As I reported
>> recently, nothing prevents some job from finishing and modifying the graph
>> during another drained section. That may be the case here.
> 
> Good point, this might be the same bug then.
> 
> If everything worked correctly, a job completion could only happen on
> the outer bdrv_set_aio_context_ignore(). But after that, we are already
> in a drain section, so the job should be quiesced and a second drain
> shouldn't cause any additional graph changes.
> 
> I would have to go back to the other discussion, but I think it was
> related to block jobs that are already in the completion process and
> keep moving forward even though they are supposed to be quiesced.
> 
> If I remember correctly, actually pausing them at this point looked
> difficult. Maybe what we should do then is let .drained_poll return
> true until they have actually fully completed?
> 
> Ah, but was this something that would deadlock because the job
> completion callbacks use drain sections themselves?

Hmm.. I recently sent a good example of a deadlock in the email "aio-poll dead-lock"..

I don't have a better idea than moving all graph modifications (together with
their corresponding drained sections) into coroutines and protecting them with
a global coroutine mutex.

> 
>> If backup-top is involved, I suppose the graph modification happens in
>> backup_clean(), when we remove the filter. Who is calling
>> set_aio_context in this issue? I mean, what is the backtrace of
>> bdrv_set_aio_context_ignore()?
> 
> Sergio, can you provide the backtrace and also test if the theory with a
> job completion in the middle of the process is what you actually hit?
> 
> Kevin
> 


-- 
Best regards,
Vladimir


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 15:38:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 15:38:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55964.97625 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpvLg-00031O-1k; Thu, 17 Dec 2020 15:37:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55964.97625; Thu, 17 Dec 2020 15:37:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpvLf-00031H-Tb; Thu, 17 Dec 2020 15:37:47 +0000
Received: by outflank-mailman (input) for mailman id 55964;
 Thu, 17 Dec 2020 15:37:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kpvLe-00031C-BF
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 15:37:46 +0000
Received: from mail-wr1-x42a.google.com (unknown [2a00:1450:4864:20::42a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fb212da8-73e2-461b-9f87-ddbc31422f42;
 Thu, 17 Dec 2020 15:37:45 +0000 (UTC)
Received: by mail-wr1-x42a.google.com with SMTP id t16so27022849wra.3
 for <xen-devel@lists.xenproject.org>; Thu, 17 Dec 2020 07:37:45 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (5ec3281d.skybroadband.com.
 [94.195.40.29])
 by smtp.gmail.com with ESMTPSA id d9sm10008684wrc.87.2020.12.17.07.37.43
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 17 Dec 2020 07:37:43 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fb212da8-73e2-461b-9f87-ddbc31422f42
X-Received: by 2002:adf:9525:: with SMTP id 34mr45707362wrs.389.1608219464168;
        Thu, 17 Dec 2020 07:37:44 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: julien@xen.org
Cc: ash.j.wilding@gmail.com,
	bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	sstabellini@kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [RFC PATCH v2 00/15] xen/arm: port Linux LL/SC and LSE atomics helpers to Xen
Date: Thu, 17 Dec 2020 15:37:42 +0000
Message-Id: <20201217153742.14034-1-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <cb0f7055-6d9a-5c39-6198-109593fd3424@xen.org>
References: <cb0f7055-6d9a-5c39-6198-109593fd3424@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi Julien,

Thanks for taking a look at the patches and providing feedback. I've seen your
other comments and will reply to those separately when I get a chance (maybe at
the weekend or over the Christmas break).

RE the differences in ordering semantics between Xen's and Linux's atomics
helpers, please find my notes below.

Thoughts?

Cheers,
Ash.


The tables below use format AAA/BBB/CCC/DDD/EEE, where:

 - AAA is the memory barrier before the operation
 - BBB is the acquire semantics of the atomic operation
 - CCC is the release semantics of the atomic operation
 - DDD is whether the asm() block clobbers memory
 - EEE is the memory barrier after the operation

For example, ---/---/rel/mem/dmb would mean:

 - No memory barrier before the operation
 - The atomic does *not* have acquire semantics
 - The atomic *does* have release semantics
 - The asm() block clobbers memory
 - There is a DMB memory barrier after the atomic operation


    arm64 LL/SC
    ===========

        Xen Function            Xen                     Linux                   Inconsistent
        ============            ===                     =====                   ============

        atomic_add              ---/---/---/---/---     ---/---/---/---/---     ---
        atomic_add_return       ---/---/rel/mem/dmb     ---/---/rel/mem/dmb     --- (1)
        atomic_sub              ---/---/---/---/---     ---/---/---/---/---     ---
        atomic_sub_return       ---/---/rel/mem/dmb     ---/---/rel/mem/dmb     --- (1)        
        atomic_and              ---/---/---/---/---     ---/---/---/---/---     ---
        atomic_cmpxchg          dmb/---/---/---/dmb     ---/---/rel/mem/---     YES (2)
        atomic_xchg             ---/---/rel/mem/dmb     ---/acq/rel/mem/dmb     YES (3)

(1) It's actually interesting to me that Linux does it this way. As with the
    LSE atomics below, I'd have expected acq/rel semantics and ditch the DMB.
    Unless I'm missing something where there is a concern around taking an IRQ
    between the LDAXR and the STLXR, which can't happen in the LSE atomic case
    since it's a single instruction. But the exclusive monitor is cleared on
    exception return in AArch64 so I'm struggling to see what that potential
    issue may be. Regardless, Linux and Xen are consistent so we're OK ;-)

(2) The Linux version uses either STLXR with rel semantics if the comparison
    passes, or DMB if the comparison fails. This is weaker than Xen's version,
    which is quite blunt in always wrapping the operation between two DMBs. This
    may be a holdover from Xen's arm32 versions being ported to arm64, as we
    didn't support acq/rel semantics on LDREX and STREX in Armv7-A? Regardless,
    this is quite a big discrepancy and I've not yet given it enough thought to
    determine whether it would actually cause an issue. My feeling is that the
    Linux LL/SC atomic_cmpxchg() should have acq semantics on the LL, but
    like you said these helpers are well tested so I'd be surprised if there
    is a bug. See (5) below though, where the Linux LSE atomic_cmpxchg() *does*
    have acq semantics.

(3) The Linux version just adds acq semantics to the LL, so we're OK here.


    arm64 LSE (comparison to Xen's LL/SC)
    =====================================

        Xen Function            Xen                     Linux                   Inconsistent
        ============            ===                     =====                   ============

        atomic_add              ---/---/---/---/---     ---/---/---/---/---     ---
        atomic_add_return       ---/---/rel/mem/dmb     ---/acq/rel/mem/---     YES (4)
        atomic_sub              ---/---/---/---/---     ---/---/---/---/---     ---
        atomic_sub_return       ---/---/rel/mem/dmb     ---/acq/rel/mem/---     YES (4)
        atomic_and              ---/---/---/---/---     ---/---/---/---/---     ---
        atomic_cmpxchg          dmb/---/---/---/dmb     ---/acq/rel/mem/---     YES (5)
        atomic_xchg             ---/---/rel/mem/dmb     ---/acq/rel/mem/---     YES (4)

(4) As noted in (1), this is how I would have expected Linux's LL/SC atomics to
    work too. I don't think this discrepancy will cause any issues.

(5) As with (2) above, this is quite a big discrepancy with Xen. However at least
    this version has acq semantics unlike the LL/SC version in (2), so I'm more
    confident that there won't be regressions going from Xen LL/SC to Linux LSE
    version of atomic_cmpxchg().


    arm32 LL/SC
    ===========

        Xen Function            Xen                     Linux                   Inconsistent
        ============            ===                     =====                   ============

        atomic_add              ---/---/---/---/---     ---/---/---/---/---     ---
        atomic_add_return       dmb/---/---/---/dmb     XXX/XXX/XXX/XXX/XXX     YES (6)
        atomic_sub              ---/---/---/---/---     ---/---/---/---/---     ---
        atomic_sub_return       dmb/---/---/---/dmb     XXX/XXX/XXX/XXX/XXX     YES (6)
        atomic_and              ---/---/---/---/---     ---/---/---/---/---     ---  
        atomic_cmpxchg          dmb/---/---/---/dmb     XXX/XXX/XXX/XXX/XXX     YES (6)
        atomic_xchg             dmb/---/---/---/dmb     XXX/XXX/XXX/XXX/XXX     YES (6)

(6) Linux only provides relaxed variants of these functions, such as
    atomic_add_return_relaxed() and atomic_xchg_relaxed(). Patches #13 and #14
    in the series add the stricter versions expected by Xen, wrapping calls to
    Linux's relaxed variants in between two calls to smp_mb(). This makes them
    consistent with Xen's existing helpers, though is quite blunt. It is worth
    noting that Armv8-A AArch32 does support acq/rel semantics on exclusive
    accesses, with LDAEX and STLEX, so I could imagine us introducing a new
    arm32 hwcap to detect whether we're on actual Armv7-A hardware or Armv8-A
    AArch32, then swap to lighter-weight STLEX versions of these helpers rather
    than the heavyweight double DMB versions. Whether that would actually give
    measurable performance improvements is another story!


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 15:38:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 15:38:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55968.97637 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpvMa-000371-BD; Thu, 17 Dec 2020 15:38:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55968.97637; Thu, 17 Dec 2020 15:38:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpvMa-00036u-7u; Thu, 17 Dec 2020 15:38:44 +0000
Received: by outflank-mailman (input) for mailman id 55968;
 Thu, 17 Dec 2020 15:38:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xKjN=FV=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kpvMZ-00036o-16
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 15:38:43 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id c6f92b50-9181-4828-8132-41458fb8ad54;
 Thu, 17 Dec 2020 15:38:41 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 109C130E;
 Thu, 17 Dec 2020 07:38:41 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 4CCB03F66B;
 Thu, 17 Dec 2020 07:38:40 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c6f92b50-9181-4828-8132-41458fb8ad54
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v4 0/8] xen/arm: Emulate ID registers
Date: Thu, 17 Dec 2020 15:38:00 +0000
Message-Id: <cover.1608214355.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1

The goal of this series is to emulate coprocessor ID registers so that
Xen only publishes to guests the features that are supported by Xen and
can actually be used by guests.
One practical example where this is required is SVE support, which is
forbidden by Xen as it is not supported: if Linux is compiled with it,
it will crash on boot. Another one is AMU, which is also forbidden by
Xen, but a Linux kernel compiled with it would crash if the platform
supports it.

To emulate the coprocessor registers defining which features are
supported by the hardware, the TID3 bit of HCR_EL2 must be set so that
Xen can emulate the values of those registers when an exception is
caught on a guest access to them.

This series first creates a guest cpuinfo structure containing the
values that we want to publish to guests, and then provides the proper
emulation for those registers when Xen receives an exception due to a
guest access to any of them.

This is a first, simple implementation to solve the problem. The way
the values provided to guests and the set of disabled features are
defined will be enhanced in a future patchset, so that we can decide
per guest what can be used or not and, from that, deduce the bits to
activate in HCR_EL2 and the values to publish in the ID registers.
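
As an illustration of the approach (a minimal sketch, not Xen's actual
code: the function name and the use of a plain field mask are
assumptions for this example), trapping an ID-register read lets the
hypervisor return a sanitized value with unsupported feature fields
cleared, e.g. the SVE field of ID_AA64PFR0_EL1 (bits [35:32] per the
Arm ARM):

```c
#include <stdint.h>

/*
 * Illustrative sketch only -- not Xen's implementation.  When
 * HCR_EL2.TID3 traps a guest read of ID_AA64PFR0_EL1, the handler can
 * return the hardware value with the SVE field (bits [35:32]) forced
 * to 0, so the guest never sees a feature Xen does not support.
 */
static uint64_t guest_id_aa64pfr0(uint64_t hw_value)
{
    const uint64_t sve_mask = 0xfULL << 32; /* ID_AA64PFR0_EL1.SVE */

    return hw_value & ~sve_mask; /* report SVE as not implemented */
}
```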

---
Changes in V2:
  Fix First patch to properly handle DFR1 register and increase dbg32
  size. Other patches have just been rebased.

Changes in V3:
  Add handling of reserved registers as RAZ
  Minor fixes described in each patch

Changes in V4:
  Add a patch to switch the implementation to READ_SYSREG instead of
  the 32/64-bit versions of it.
  Move cases for reserved register handling from macros to the code
  itself.
  Various typo fixes.

Bertrand Marquis (8):
  xen/arm: Use READ_SYSREG instead of 32/64 versions
  xen/arm: Add ID registers and complete cpuinfo
  xen/arm: Add arm64 ID registers definitions
  xen/arm: create a cpuinfo structure for guest
  xen/arm: Add handler for ID registers on arm64
  xen/arm: Add handler for cp15 ID registers
  xen/arm: Add CP10 exception support to handle MVFR
  xen/arm: Activate TID3 in HCR_EL2

 xen/arch/arm/arm64/vsysreg.c        |  82 ++++++++++++++++++++
 xen/arch/arm/cpufeature.c           | 113 ++++++++++++++++++++++------
 xen/arch/arm/traps.c                |   7 +-
 xen/arch/arm/vcpreg.c               | 102 +++++++++++++++++++++++++
 xen/include/asm-arm/arm64/hsr.h     |  37 +++++++++
 xen/include/asm-arm/arm64/sysregs.h |  28 +++++++
 xen/include/asm-arm/cpregs.h        |  15 ++++
 xen/include/asm-arm/cpufeature.h    |  58 +++++++++++---
 xen/include/asm-arm/perfc_defn.h    |   1 +
 xen/include/asm-arm/traps.h         |   1 +
 10 files changed, 409 insertions(+), 35 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 15:39:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 15:39:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55970.97649 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpvMu-0003Ci-Kv; Thu, 17 Dec 2020 15:39:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55970.97649; Thu, 17 Dec 2020 15:39:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpvMu-0003Cb-H1; Thu, 17 Dec 2020 15:39:04 +0000
Received: by outflank-mailman (input) for mailman id 55970;
 Thu, 17 Dec 2020 15:39:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpvMt-0003CJ-KZ; Thu, 17 Dec 2020 15:39:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpvMt-0001jB-AX; Thu, 17 Dec 2020 15:39:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpvMt-00012P-14; Thu, 17 Dec 2020 15:39:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kpvMt-0000Q8-0Y; Thu, 17 Dec 2020 15:39:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LUNpxK50nteZXu8ZFxC/9u71g1nh+WpnPx3o2N5twTg=; b=2cT4wIG9RdMqGcAXn/H6GtnPeC
	FjR/LUFEqe+7tozKVhTPZ5LXgoNON1LdFxO6W8cXpCJfdLnEoR5IxyQjur3qG970UMHt4mp9yEQmR
	ibso1uwaHPp/ua7TQAJXCQ1pIpVtD+ZE7ridkqDGzN32SdXSi3Z9yoFPoNxKcTxZkVag=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157613-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157613: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=af3f37319cb1e1ca0c42842ecdbd1bcfc64a4b6f
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 17 Dec 2020 15:39:03 +0000

flight 157613 qemu-mainline real [real]
flight 157642 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157613/
http://logs.test-lab.xenproject.org/osstest/logs/157642/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                af3f37319cb1e1ca0c42842ecdbd1bcfc64a4b6f
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  119 days
Failing since        152659  2020-08-21 14:07:39 Z  118 days  246 attempts
Testing same since   157613  2020-12-16 21:42:22 Z    0 days    1 attempts

------------------------------------------------------------
318 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 78650 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 15:42:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 15:42:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55979.97664 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpvQI-0004GQ-8n; Thu, 17 Dec 2020 15:42:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55979.97664; Thu, 17 Dec 2020 15:42:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpvQI-0004GJ-5W; Thu, 17 Dec 2020 15:42:34 +0000
Received: by outflank-mailman (input) for mailman id 55979;
 Thu, 17 Dec 2020 15:42:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xKjN=FV=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kpvQG-0004G9-Tk
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 15:42:32 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 449d379c-d30f-49e2-98e8-9d1402156f8b;
 Thu, 17 Dec 2020 15:42:32 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id F37AE30E;
 Thu, 17 Dec 2020 07:42:31 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 3BC833F66B;
 Thu, 17 Dec 2020 07:42:31 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 449d379c-d30f-49e2-98e8-9d1402156f8b
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v4 1/8] xen/arm: Use READ_SYSREG instead of 32/64 versions
Date: Thu, 17 Dec 2020 15:38:01 +0000
Message-Id: <75ab5c84ed6ce1d004316ca4677735aa0543ecdc.1608214355.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1608214355.git.bertrand.marquis@arm.com>
References: <cover.1608214355.git.bertrand.marquis@arm.com>

Modify the identify_cpu function to use READ_SYSREG instead of
READ_SYSREG32 or READ_SYSREG64.
The aarch32 views of the registers are 64-bit on an aarch64 processor,
so it was wrong to access them as 32-bit registers.
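
The hazard can be sketched in plain C (hypothetical helper, for
illustration only): a 32-bit accessor applied to a register that is 64
bits wide on aarch64 silently discards the upper feature fields.

```c
#include <stdint.h>

/* Illustration only: truncating a 64-bit system-register value to
 * 32 bits drops any feature fields allocated in the upper half. */
static uint32_t read_as_32bit(uint64_t sysreg_value)
{
    return (uint32_t)sysreg_value; /* upper 32 bits are lost */
}
```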

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Change in V4:
  This patch was introduced in v4.

---
 xen/arch/arm/cpufeature.c | 50 +++++++++++++++++++--------------------
 1 file changed, 25 insertions(+), 25 deletions(-)

diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
index 44126dbf07..115e1b164d 100644
--- a/xen/arch/arm/cpufeature.c
+++ b/xen/arch/arm/cpufeature.c
@@ -99,44 +99,44 @@ int enable_nonboot_cpu_caps(const struct arm_cpu_capabilities *caps)
 
 void identify_cpu(struct cpuinfo_arm *c)
 {
-        c->midr.bits = READ_SYSREG32(MIDR_EL1);
+        c->midr.bits = READ_SYSREG(MIDR_EL1);
         c->mpidr.bits = READ_SYSREG(MPIDR_EL1);
 
 #ifdef CONFIG_ARM_64
-        c->pfr64.bits[0] = READ_SYSREG64(ID_AA64PFR0_EL1);
-        c->pfr64.bits[1] = READ_SYSREG64(ID_AA64PFR1_EL1);
+        c->pfr64.bits[0] = READ_SYSREG(ID_AA64PFR0_EL1);
+        c->pfr64.bits[1] = READ_SYSREG(ID_AA64PFR1_EL1);
 
-        c->dbg64.bits[0] = READ_SYSREG64(ID_AA64DFR0_EL1);
-        c->dbg64.bits[1] = READ_SYSREG64(ID_AA64DFR1_EL1);
+        c->dbg64.bits[0] = READ_SYSREG(ID_AA64DFR0_EL1);
+        c->dbg64.bits[1] = READ_SYSREG(ID_AA64DFR1_EL1);
 
-        c->aux64.bits[0] = READ_SYSREG64(ID_AA64AFR0_EL1);
-        c->aux64.bits[1] = READ_SYSREG64(ID_AA64AFR1_EL1);
+        c->aux64.bits[0] = READ_SYSREG(ID_AA64AFR0_EL1);
+        c->aux64.bits[1] = READ_SYSREG(ID_AA64AFR1_EL1);
 
-        c->mm64.bits[0]  = READ_SYSREG64(ID_AA64MMFR0_EL1);
-        c->mm64.bits[1]  = READ_SYSREG64(ID_AA64MMFR1_EL1);
+        c->mm64.bits[0]  = READ_SYSREG(ID_AA64MMFR0_EL1);
+        c->mm64.bits[1]  = READ_SYSREG(ID_AA64MMFR1_EL1);
 
-        c->isa64.bits[0] = READ_SYSREG64(ID_AA64ISAR0_EL1);
-        c->isa64.bits[1] = READ_SYSREG64(ID_AA64ISAR1_EL1);
+        c->isa64.bits[0] = READ_SYSREG(ID_AA64ISAR0_EL1);
+        c->isa64.bits[1] = READ_SYSREG(ID_AA64ISAR1_EL1);
 #endif
 
-        c->pfr32.bits[0] = READ_SYSREG32(ID_PFR0_EL1);
-        c->pfr32.bits[1] = READ_SYSREG32(ID_PFR1_EL1);
+        c->pfr32.bits[0] = READ_SYSREG(ID_PFR0_EL1);
+        c->pfr32.bits[1] = READ_SYSREG(ID_PFR1_EL1);
 
-        c->dbg32.bits[0] = READ_SYSREG32(ID_DFR0_EL1);
+        c->dbg32.bits[0] = READ_SYSREG(ID_DFR0_EL1);
 
-        c->aux32.bits[0] = READ_SYSREG32(ID_AFR0_EL1);
+        c->aux32.bits[0] = READ_SYSREG(ID_AFR0_EL1);
 
-        c->mm32.bits[0]  = READ_SYSREG32(ID_MMFR0_EL1);
-        c->mm32.bits[1]  = READ_SYSREG32(ID_MMFR1_EL1);
-        c->mm32.bits[2]  = READ_SYSREG32(ID_MMFR2_EL1);
-        c->mm32.bits[3]  = READ_SYSREG32(ID_MMFR3_EL1);
+        c->mm32.bits[0]  = READ_SYSREG(ID_MMFR0_EL1);
+        c->mm32.bits[1]  = READ_SYSREG(ID_MMFR1_EL1);
+        c->mm32.bits[2]  = READ_SYSREG(ID_MMFR2_EL1);
+        c->mm32.bits[3]  = READ_SYSREG(ID_MMFR3_EL1);
 
-        c->isa32.bits[0] = READ_SYSREG32(ID_ISAR0_EL1);
-        c->isa32.bits[1] = READ_SYSREG32(ID_ISAR1_EL1);
-        c->isa32.bits[2] = READ_SYSREG32(ID_ISAR2_EL1);
-        c->isa32.bits[3] = READ_SYSREG32(ID_ISAR3_EL1);
-        c->isa32.bits[4] = READ_SYSREG32(ID_ISAR4_EL1);
-        c->isa32.bits[5] = READ_SYSREG32(ID_ISAR5_EL1);
+        c->isa32.bits[0] = READ_SYSREG(ID_ISAR0_EL1);
+        c->isa32.bits[1] = READ_SYSREG(ID_ISAR1_EL1);
+        c->isa32.bits[2] = READ_SYSREG(ID_ISAR2_EL1);
+        c->isa32.bits[3] = READ_SYSREG(ID_ISAR3_EL1);
+        c->isa32.bits[4] = READ_SYSREG(ID_ISAR4_EL1);
+        c->isa32.bits[5] = READ_SYSREG(ID_ISAR5_EL1);
 }
 
 /*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 15:42:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 15:42:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55980.97676 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpvQJ-0004Hl-Ha; Thu, 17 Dec 2020 15:42:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55980.97676; Thu, 17 Dec 2020 15:42:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpvQJ-0004Hd-DZ; Thu, 17 Dec 2020 15:42:35 +0000
Received: by outflank-mailman (input) for mailman id 55980;
 Thu, 17 Dec 2020 15:42:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xKjN=FV=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kpvQI-0004GE-0U
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 15:42:34 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 4bb17f7e-4ed5-4095-8d4e-a9faba7f8de8;
 Thu, 17 Dec 2020 15:42:33 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 16B8430E;
 Thu, 17 Dec 2020 07:42:33 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 377B63F66B;
 Thu, 17 Dec 2020 07:42:32 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4bb17f7e-4ed5-4095-8d4e-a9faba7f8de8
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v4 2/8] xen/arm: Add ID registers and complete cpuinfo
Date: Thu, 17 Dec 2020 15:38:02 +0000
Message-Id: <31d3537b11ba1a7531f1e3a38ba3b1e694a1224b.1608214355.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1608214355.git.bertrand.marquis@arm.com>
References: <cover.1608214355.git.bertrand.marquis@arm.com>

Add definitions and cpuinfo entries for the ID registers introduced in
newer versions of the Arm Architecture Reference Manual:
- ID_PFR2: Processor Feature Register 2
- ID_DFR1: Debug Feature Register 1
- ID_MMFR4 and ID_MMFR5: Memory Model Feature Registers 4 and 5
- ID_ISAR6: ISA Feature Register 6
Add more bitfield definitions in the PFR fields of cpuinfo.
Add the MVFR2 register definition for aarch32.
Add MVFRx_EL1 defines for aarch32.
Add mvfr values in cpuinfo.
Add some register definitions for arm64 in sysregs.h, as some registers
are not always known by compilers.
Initialize the new cpuinfo values in identify_cpu during init.
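The cpuinfo entries above overlay anonymous unions of 4-bit fields on the raw register value so that individual features can be read by name. A standalone sketch of how such a view decodes the SVE field of ID_AA64PFR0_EL1 (the view struct and helper are illustrative, not the Xen code, and the field order assumes the usual little-endian GCC bitfield layout rather than a C-standard guarantee):

```c
#include <stdint.h>

/*
 * Illustrative bitfield view of ID_AA64PFR0_EL1: nine 4-bit feature
 * fields starting at bit 0, with SVE occupying bits [35:32].
 */
struct pfr64_view {
    union {
        uint64_t bits;
        struct {
            uint64_t el0:4, el1:4, el2:4, el3:4;
            uint64_t fp:4, simd:4, gic:4, ras:4;
            uint64_t sve:4;
            uint64_t rest:28;
        };
    };
};

/* Extract the SVE field (bits [35:32]) from a raw register value. */
static inline unsigned int pfr64_sve(uint64_t reg)
{
    struct pfr64_view v = { .bits = reg };
    return v.sve;
}
```

Reading a named field this way avoids open-coded shift/mask expressions at each use site, which is the point of extending the cpuinfo bitfields in this patch.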

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

---
Changes in V2:
  Fix dbg32 table size and properly initialise the second entry of the
  table by reading the ID_DFR1 register.
Changes in V3:
  Fix typo in commit title
  Add MVFR2 definition and handling on aarch32 and remove the specific
  case for the mvfr field in cpuinfo (now the same on arm64 and arm32).
  Add MMFR4 definition if not known by the compiler.
Changes in V4:
  Add MVFRx_EL1 defines for aarch32
  Use READ_SYSREG instead of the 32/64-bit versions of the function,
  which removes the ifdef around MVFR accesses.
  Use register_t type for the mvfr and zfr64 fields of the cpuinfo
  structure.

---
 xen/arch/arm/cpufeature.c           | 12 +++++++
 xen/include/asm-arm/arm64/sysregs.h | 28 +++++++++++++++
 xen/include/asm-arm/cpregs.h        | 15 ++++++++
 xen/include/asm-arm/cpufeature.h    | 56 ++++++++++++++++++++++++-----
 4 files changed, 102 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
index 115e1b164d..86b99ee960 100644
--- a/xen/arch/arm/cpufeature.c
+++ b/xen/arch/arm/cpufeature.c
@@ -114,15 +114,20 @@ void identify_cpu(struct cpuinfo_arm *c)
 
         c->mm64.bits[0]  = READ_SYSREG(ID_AA64MMFR0_EL1);
         c->mm64.bits[1]  = READ_SYSREG(ID_AA64MMFR1_EL1);
+        c->mm64.bits[2]  = READ_SYSREG(ID_AA64MMFR2_EL1);
 
         c->isa64.bits[0] = READ_SYSREG(ID_AA64ISAR0_EL1);
         c->isa64.bits[1] = READ_SYSREG(ID_AA64ISAR1_EL1);
+
+        c->zfr64.bits[0] = READ_SYSREG(ID_AA64ZFR0_EL1);
 #endif
 
         c->pfr32.bits[0] = READ_SYSREG(ID_PFR0_EL1);
         c->pfr32.bits[1] = READ_SYSREG(ID_PFR1_EL1);
+        c->pfr32.bits[2] = READ_SYSREG(ID_PFR2_EL1);
 
         c->dbg32.bits[0] = READ_SYSREG(ID_DFR0_EL1);
+        c->dbg32.bits[1] = READ_SYSREG(ID_DFR1_EL1);
 
         c->aux32.bits[0] = READ_SYSREG(ID_AFR0_EL1);
 
@@ -130,6 +135,8 @@ void identify_cpu(struct cpuinfo_arm *c)
         c->mm32.bits[1]  = READ_SYSREG(ID_MMFR1_EL1);
         c->mm32.bits[2]  = READ_SYSREG(ID_MMFR2_EL1);
         c->mm32.bits[3]  = READ_SYSREG(ID_MMFR3_EL1);
+        c->mm32.bits[4]  = READ_SYSREG(ID_MMFR4_EL1);
+        c->mm32.bits[5]  = READ_SYSREG(ID_MMFR5_EL1);
 
         c->isa32.bits[0] = READ_SYSREG(ID_ISAR0_EL1);
         c->isa32.bits[1] = READ_SYSREG(ID_ISAR1_EL1);
@@ -137,6 +144,11 @@ void identify_cpu(struct cpuinfo_arm *c)
         c->isa32.bits[3] = READ_SYSREG(ID_ISAR3_EL1);
         c->isa32.bits[4] = READ_SYSREG(ID_ISAR4_EL1);
         c->isa32.bits[5] = READ_SYSREG(ID_ISAR5_EL1);
+        c->isa32.bits[6] = READ_SYSREG(ID_ISAR6_EL1);
+
+        c->mvfr.bits[0] = READ_SYSREG(MVFR0_EL1);
+        c->mvfr.bits[1] = READ_SYSREG(MVFR1_EL1);
+        c->mvfr.bits[2] = READ_SYSREG(MVFR2_EL1);
 }
 
 /*
diff --git a/xen/include/asm-arm/arm64/sysregs.h b/xen/include/asm-arm/arm64/sysregs.h
index c60029d38f..077fd95fb7 100644
--- a/xen/include/asm-arm/arm64/sysregs.h
+++ b/xen/include/asm-arm/arm64/sysregs.h
@@ -57,6 +57,34 @@
 #define ICH_AP1R2_EL2             __AP1Rx_EL2(2)
 #define ICH_AP1R3_EL2             __AP1Rx_EL2(3)
 
+/*
+ * Define ID coprocessor registers if they are not
+ * already defined by the compiler.
+ *
+ * Values picked from linux kernel
+ */
+#ifndef ID_AA64MMFR2_EL1
+#define ID_AA64MMFR2_EL1            S3_0_C0_C7_2
+#endif
+#ifndef ID_PFR2_EL1
+#define ID_PFR2_EL1                 S3_0_C0_C3_4
+#endif
+#ifndef ID_MMFR4_EL1
+#define ID_MMFR4_EL1                S3_0_C0_C2_6
+#endif
+#ifndef ID_MMFR5_EL1
+#define ID_MMFR5_EL1                S3_0_C0_C3_6
+#endif
+#ifndef ID_ISAR6_EL1
+#define ID_ISAR6_EL1                S3_0_C0_C2_7
+#endif
+#ifndef ID_AA64ZFR0_EL1
+#define ID_AA64ZFR0_EL1             S3_0_C0_C4_4
+#endif
+#ifndef ID_DFR1_EL1
+#define ID_DFR1_EL1                 S3_0_C0_C3_5
+#endif
+
 /* Access to system registers */
 
 #define READ_SYSREG32(name) ((uint32_t)READ_SYSREG64(name))
diff --git a/xen/include/asm-arm/cpregs.h b/xen/include/asm-arm/cpregs.h
index 8fd344146e..6daf2b1a30 100644
--- a/xen/include/asm-arm/cpregs.h
+++ b/xen/include/asm-arm/cpregs.h
@@ -63,6 +63,8 @@
 #define FPSID           p10,7,c0,c0,0   /* Floating-Point System ID Register */
 #define FPSCR           p10,7,c1,c0,0   /* Floating-Point Status and Control Register */
 #define MVFR0           p10,7,c7,c0,0   /* Media and VFP Feature Register 0 */
+#define MVFR1           p10,7,c6,c0,0   /* Media and VFP Feature Register 1 */
+#define MVFR2           p10,7,c5,c0,0   /* Media and VFP Feature Register 2 */
 #define FPEXC           p10,7,c8,c0,0   /* Floating-Point Exception Control Register */
 #define FPINST          p10,7,c9,c0,0   /* Floating-Point Instruction Register */
 #define FPINST2         p10,7,c10,c0,0  /* Floating-point Instruction Register 2 */
@@ -108,18 +110,23 @@
 #define MPIDR           p15,0,c0,c0,5   /* Multiprocessor Affinity Register */
 #define ID_PFR0         p15,0,c0,c1,0   /* Processor Feature Register 0 */
 #define ID_PFR1         p15,0,c0,c1,1   /* Processor Feature Register 1 */
+#define ID_PFR2         p15,0,c0,c3,4   /* Processor Feature Register 2 */
 #define ID_DFR0         p15,0,c0,c1,2   /* Debug Feature Register 0 */
+#define ID_DFR1         p15,0,c0,c3,5   /* Debug Feature Register 1 */
 #define ID_AFR0         p15,0,c0,c1,3   /* Auxiliary Feature Register 0 */
 #define ID_MMFR0        p15,0,c0,c1,4   /* Memory Model Feature Register 0 */
 #define ID_MMFR1        p15,0,c0,c1,5   /* Memory Model Feature Register 1 */
 #define ID_MMFR2        p15,0,c0,c1,6   /* Memory Model Feature Register 2 */
 #define ID_MMFR3        p15,0,c0,c1,7   /* Memory Model Feature Register 3 */
+#define ID_MMFR4        p15,0,c0,c2,6   /* Memory Model Feature Register 4 */
+#define ID_MMFR5        p15,0,c0,c3,6   /* Memory Model Feature Register 5 */
 #define ID_ISAR0        p15,0,c0,c2,0   /* ISA Feature Register 0 */
 #define ID_ISAR1        p15,0,c0,c2,1   /* ISA Feature Register 1 */
 #define ID_ISAR2        p15,0,c0,c2,2   /* ISA Feature Register 2 */
 #define ID_ISAR3        p15,0,c0,c2,3   /* ISA Feature Register 3 */
 #define ID_ISAR4        p15,0,c0,c2,4   /* ISA Feature Register 4 */
 #define ID_ISAR5        p15,0,c0,c2,5   /* ISA Feature Register 5 */
+#define ID_ISAR6        p15,0,c0,c2,7   /* ISA Feature Register 6 */
 #define CCSIDR          p15,1,c0,c0,0   /* Cache Size ID Registers */
 #define CLIDR           p15,1,c0,c0,1   /* Cache Level ID Register */
 #define CSSELR          p15,2,c0,c0,0   /* Cache Size Selection Register */
@@ -312,18 +319,23 @@
 #define HSTR_EL2                HSTR
 #define ID_AFR0_EL1             ID_AFR0
 #define ID_DFR0_EL1             ID_DFR0
+#define ID_DFR1_EL1             ID_DFR1
 #define ID_ISAR0_EL1            ID_ISAR0
 #define ID_ISAR1_EL1            ID_ISAR1
 #define ID_ISAR2_EL1            ID_ISAR2
 #define ID_ISAR3_EL1            ID_ISAR3
 #define ID_ISAR4_EL1            ID_ISAR4
 #define ID_ISAR5_EL1            ID_ISAR5
+#define ID_ISAR6_EL1            ID_ISAR6
 #define ID_MMFR0_EL1            ID_MMFR0
 #define ID_MMFR1_EL1            ID_MMFR1
 #define ID_MMFR2_EL1            ID_MMFR2
 #define ID_MMFR3_EL1            ID_MMFR3
+#define ID_MMFR4_EL1            ID_MMFR4
+#define ID_MMFR5_EL1            ID_MMFR5
 #define ID_PFR0_EL1             ID_PFR0
 #define ID_PFR1_EL1             ID_PFR1
+#define ID_PFR2_EL1             ID_PFR2
 #define IFSR32_EL2              IFSR
 #define MDCR_EL2                HDCR
 #define MIDR_EL1                MIDR
@@ -347,6 +359,9 @@
 #define VPIDR_EL2               VPIDR
 #define VTCR_EL2                VTCR
 #define VTTBR_EL2               VTTBR
+#define MVFR0_EL1               MVFR0
+#define MVFR1_EL1               MVFR1
+#define MVFR2_EL1               MVFR2
 #endif
 
 #endif
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index c7b5052992..74139be1cc 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -148,6 +148,7 @@ struct cpuinfo_arm {
     union {
         uint64_t bits[2];
         struct {
+            /* PFR0 */
             unsigned long el0:4;
             unsigned long el1:4;
             unsigned long el2:4;
@@ -155,9 +156,23 @@ struct cpuinfo_arm {
             unsigned long fp:4;   /* Floating Point */
             unsigned long simd:4; /* Advanced SIMD */
             unsigned long gic:4;  /* GIC support */
-            unsigned long __res0:28;
+            unsigned long ras:4;
+            unsigned long sve:4;
+            unsigned long sel2:4;
+            unsigned long mpam:4;
+            unsigned long amu:4;
+            unsigned long dit:4;
+            unsigned long __res0:4;
             unsigned long csv2:4;
-            unsigned long __res1:4;
+            unsigned long csv3:4;
+
+            /* PFR1 */
+            unsigned long bt:4;
+            unsigned long ssbs:4;
+            unsigned long mte:4;
+            unsigned long ras_frac:4;
+            unsigned long mpam_frac:4;
+            unsigned long __res1:44;
         };
     } pfr64;
 
@@ -170,7 +185,7 @@ struct cpuinfo_arm {
     } aux64;
 
     union {
-        uint64_t bits[2];
+        uint64_t bits[3];
         struct {
             unsigned long pa_range:4;
             unsigned long asid_bits:4;
@@ -190,6 +205,8 @@ struct cpuinfo_arm {
             unsigned long pan:4;
             unsigned long __res1:8;
             unsigned long __res2:32;
+
+            unsigned long __res3:64;
         };
     } mm64;
 
@@ -197,6 +214,10 @@ struct cpuinfo_arm {
         uint64_t bits[2];
     } isa64;
 
+    struct {
+        register_t bits[1];
+    } zfr64;
+
 #endif
 
     /*
@@ -204,25 +225,38 @@ struct cpuinfo_arm {
      * when running in 32-bit mode.
      */
     union {
-        uint32_t bits[2];
+        uint32_t bits[3];
         struct {
+            /* PFR0 */
             unsigned long arm:4;
             unsigned long thumb:4;
             unsigned long jazelle:4;
             unsigned long thumbee:4;
-            unsigned long __res0:16;
+            unsigned long csv2:4;
+            unsigned long amu:4;
+            unsigned long dit:4;
+            unsigned long ras:4;
 
+            /* PFR1 */
             unsigned long progmodel:4;
             unsigned long security:4;
             unsigned long mprofile:4;
             unsigned long virt:4;
             unsigned long gentimer:4;
-            unsigned long __res1:12;
+            unsigned long sec_frac:4;
+            unsigned long virt_frac:4;
+            unsigned long gic:4;
+
+            /* PFR2 */
+            unsigned long csv3:4;
+            unsigned long ssbs:4;
+            unsigned long ras_frac:4;
+            unsigned long __res2:20;
         };
     } pfr32;
 
     struct {
-        uint32_t bits[1];
+        uint32_t bits[2];
     } dbg32;
 
     struct {
@@ -230,12 +264,16 @@ struct cpuinfo_arm {
     } aux32;
 
     struct {
-        uint32_t bits[4];
+        uint32_t bits[6];
     } mm32;
 
     struct {
-        uint32_t bits[6];
+        uint32_t bits[7];
     } isa32;
+
+    struct {
+        register_t bits[3];
+    } mvfr;
 };
 
 extern struct cpuinfo_arm boot_cpu_data;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 15:42:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 15:42:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55981.97688 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpvQN-0004LS-Qj; Thu, 17 Dec 2020 15:42:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55981.97688; Thu, 17 Dec 2020 15:42:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpvQN-0004LK-NQ; Thu, 17 Dec 2020 15:42:39 +0000
Received: by outflank-mailman (input) for mailman id 55981;
 Thu, 17 Dec 2020 15:42:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xKjN=FV=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kpvQL-0004G9-S9
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 15:42:37 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id d4cbdaf9-5fdb-4233-8850-c7547c976cfc;
 Thu, 17 Dec 2020 15:42:35 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0D45330E;
 Thu, 17 Dec 2020 07:42:35 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 4BD273F66B;
 Thu, 17 Dec 2020 07:42:34 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d4cbdaf9-5fdb-4233-8850-c7547c976cfc
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v4 4/8] xen/arm: create a cpuinfo structure for guest
Date: Thu, 17 Dec 2020 15:38:04 +0000
Message-Id: <8a93d20d20fae570c83c4d7bea0c882735496f34.1608214355.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1608214355.git.bertrand.marquis@arm.com>
References: <cover.1608214355.git.bertrand.marquis@arm.com>

Create a cpuinfo structure for guests and mask out of it the features
that we do not support in Xen or that we do not want to publish to
guests.

Modify some values in the guest cpuinfo structure to hide the following
processor features:
- SVE, as this is not supported by Xen and guests are not allowed to
use this feature (ZEN is set to 0 in CPTR_EL2).
- AMU, as HCPTR_TAM is set in CPTR_EL2, so AMU cannot be used by
guests.
- RAS, as this is not supported by Xen.
All other bits are left untouched.

The code groups register modifications for the same feature together,
so that in the long term a feature can easily be enabled or disabled
depending on user parameters, and so that other register modifications
(like enabling/disabling HCR bits) can be added in the same place.
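The masking described above amounts to copying the boot CPU's view and zeroing the fields that must stay hidden. A hypothetical miniature of that step (struct and function names invented for illustration; the real code operates on the full cpuinfo_arm bitfields):

```c
/*
 * Hypothetical mini version of guest cpuinfo creation: copy the boot
 * CPU's feature view, then zero the fields Xen does not expose.
 */
struct mini_cpuinfo {
    unsigned int sve;
    unsigned int amu;
    unsigned int ras;
    unsigned int fp;
};

static struct mini_cpuinfo make_guest_view(struct mini_cpuinfo boot)
{
    struct mini_cpuinfo guest = boot;  /* start from real features     */
    guest.sve = 0;                     /* ZEN==0 in CPTR_EL2: no SVE   */
    guest.amu = 0;                     /* HCPTR_TAM set: no AMU        */
    guest.ras = 0;                     /* RAS not supported by Xen     */
    return guest;                      /* fp etc. left untouched       */
}
```

Keeping the guest view as a plain copy-then-mask makes it easy to later gate each masked field on a user parameter, as the commit message anticipates.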

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes in V2: Rebase
Changes in V3:
  Use current_cpu_data info instead of recalling identify_cpu
Changes in V4:
  Use boot_cpu_data instead of current_cpu_data
  Use "hide XX support" instead of "disable", as this part of the code
  only hides features from guests and does not disable them (disabling
  is done through the HCR register).
  Modify commit message to be more clear about what is done in
  guest_cpuinfo.

---
 xen/arch/arm/cpufeature.c        | 51 ++++++++++++++++++++++++++++++++
 xen/include/asm-arm/cpufeature.h |  2 ++
 2 files changed, 53 insertions(+)

diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
index 86b99ee960..1f6a85aafe 100644
--- a/xen/arch/arm/cpufeature.c
+++ b/xen/arch/arm/cpufeature.c
@@ -24,6 +24,8 @@
 
 DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
 
+struct cpuinfo_arm __read_mostly guest_cpuinfo;
+
 void update_cpu_capabilities(const struct arm_cpu_capabilities *caps,
                              const char *info)
 {
@@ -151,6 +153,55 @@ void identify_cpu(struct cpuinfo_arm *c)
         c->mvfr.bits[2] = READ_SYSREG(MVFR2_EL1);
 }
 
+/*
+ * This function is creating a cpuinfo structure with values modified to mask
+ * all cpu features that should not be published to guest.
+ * The created structure is then used to provide ID registers values to guests.
+ */
+static int __init create_guest_cpuinfo(void)
+{
+    /*
+     * TODO: The code is currently using only the features detected on the boot
+     * core. In the long term we should try to compute values containing only
+     * features supported by all cores.
+     */
+    guest_cpuinfo = boot_cpu_data;
+
+#ifdef CONFIG_ARM_64
+    /* Hide MPAM support as xen does not support it */
+    guest_cpuinfo.pfr64.mpam = 0;
+    guest_cpuinfo.pfr64.mpam_frac = 0;
+
+    /* Hide SVE as Xen does not support it */
+    guest_cpuinfo.pfr64.sve = 0;
+    guest_cpuinfo.zfr64.bits[0] = 0;
+
+    /* Hide MTE support as Xen does not support it */
+    guest_cpuinfo.pfr64.mte = 0;
+#endif
+
+    /* Hide AMU support */
+#ifdef CONFIG_ARM_64
+    guest_cpuinfo.pfr64.amu = 0;
+#endif
+    guest_cpuinfo.pfr32.amu = 0;
+
+    /* Hide RAS support as Xen does not support it */
+#ifdef CONFIG_ARM_64
+    guest_cpuinfo.pfr64.ras = 0;
+    guest_cpuinfo.pfr64.ras_frac = 0;
+#endif
+    guest_cpuinfo.pfr32.ras = 0;
+    guest_cpuinfo.pfr32.ras_frac = 0;
+
+    return 0;
+}
+/*
+ * This function needs to be run after all smp are started to have
+ * cpuinfo structures for all cores.
+ */
+__initcall(create_guest_cpuinfo);
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index 74139be1cc..6058744c18 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -283,6 +283,8 @@ extern void identify_cpu(struct cpuinfo_arm *);
 extern struct cpuinfo_arm cpu_data[];
 #define current_cpu_data cpu_data[smp_processor_id()]
 
+extern struct cpuinfo_arm guest_cpuinfo;
+
 #endif /* __ASSEMBLY__ */
 
 #endif
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 15:42:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 15:42:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55982.97694 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpvQO-0004M8-8s; Thu, 17 Dec 2020 15:42:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55982.97694; Thu, 17 Dec 2020 15:42:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpvQN-0004Lw-WB; Thu, 17 Dec 2020 15:42:40 +0000
Received: by outflank-mailman (input) for mailman id 55982;
 Thu, 17 Dec 2020 15:42:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xKjN=FV=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kpvQM-0004GE-VT
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 15:42:39 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 03bed0a0-1b34-4d46-a2ea-fdd55279b29f;
 Thu, 17 Dec 2020 15:42:34 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 11B19101E;
 Thu, 17 Dec 2020 07:42:34 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 5014A3F66B;
 Thu, 17 Dec 2020 07:42:33 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 03bed0a0-1b34-4d46-a2ea-fdd55279b29f
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v4 3/8] xen/arm: Add arm64 ID registers definitions
Date: Thu, 17 Dec 2020 15:38:03 +0000
Message-Id: <905822b31f5494bf20e1e2a0a56f935db0550aef.1608214355.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1608214355.git.bertrand.marquis@arm.com>
References: <cover.1608214355.git.bertrand.marquis@arm.com>

Add coprocessor register definitions for all ID registers trapped
through the TID3 bit of HCR_EL2.
Those are the ones that will be emulated in Xen in order to publish to
guests only the features that are supported by Xen and accessible to
guests.
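The definitions below identify each trapped register by packing its Op0/Op1/CRn/CRm/Op2 encoding into the ESR_EL2 ISS layout, which is what the HSR_SYSREG macro does. A minimal standalone sketch of that packing (field positions taken from the Arm ARM ISS encoding for trapped MSR/MRS accesses; the function name is hypothetical):

```c
#include <stdint.h>

/*
 * Sketch of the ESR_EL2 ISS layout for a trapped MSR/MRS (EC 0x18):
 * Op0 at bits [21:20], Op2 [19:17], Op1 [16:14], CRn [13:10],
 * CRm [4:1]. Rt [9:5] and the direction bit [0] describe the access
 * rather than the register, so they are left zero here.
 */
static inline uint32_t sysreg_iss_key(unsigned int op0, unsigned int op1,
                                      unsigned int crn, unsigned int crm,
                                      unsigned int op2)
{
    return (op0 << 20) | (op2 << 17) | (op1 << 14) |
           (crn << 10) | (crm << 1);
}
```

For example, ID_AA64PFR0_EL1 (S3_0_C0_C4_0) packs to 0x300008, so the trap handler can switch on this key with Rt and the direction bit masked off.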

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes in V2: Rebase
Changes in V3:
  Add case definition for reserved registers.
Changes in V4:
  Remove case definition for reserved registers and move it to the code
  directly.

---
 xen/include/asm-arm/arm64/hsr.h | 37 +++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/xen/include/asm-arm/arm64/hsr.h b/xen/include/asm-arm/arm64/hsr.h
index ca931dd2fe..e691d41c17 100644
--- a/xen/include/asm-arm/arm64/hsr.h
+++ b/xen/include/asm-arm/arm64/hsr.h
@@ -110,6 +110,43 @@
 #define HSR_SYSREG_CNTP_CTL_EL0   HSR_SYSREG(3,3,c14,c2,1)
 #define HSR_SYSREG_CNTP_CVAL_EL0  HSR_SYSREG(3,3,c14,c2,2)
 
+/* Those registers are used when HCR_EL2.TID3 is set */
+#define HSR_SYSREG_ID_PFR0_EL1    HSR_SYSREG(3,0,c0,c1,0)
+#define HSR_SYSREG_ID_PFR1_EL1    HSR_SYSREG(3,0,c0,c1,1)
+#define HSR_SYSREG_ID_PFR2_EL1    HSR_SYSREG(3,0,c0,c3,4)
+#define HSR_SYSREG_ID_DFR0_EL1    HSR_SYSREG(3,0,c0,c1,2)
+#define HSR_SYSREG_ID_DFR1_EL1    HSR_SYSREG(3,0,c0,c3,5)
+#define HSR_SYSREG_ID_AFR0_EL1    HSR_SYSREG(3,0,c0,c1,3)
+#define HSR_SYSREG_ID_MMFR0_EL1   HSR_SYSREG(3,0,c0,c1,4)
+#define HSR_SYSREG_ID_MMFR1_EL1   HSR_SYSREG(3,0,c0,c1,5)
+#define HSR_SYSREG_ID_MMFR2_EL1   HSR_SYSREG(3,0,c0,c1,6)
+#define HSR_SYSREG_ID_MMFR3_EL1   HSR_SYSREG(3,0,c0,c1,7)
+#define HSR_SYSREG_ID_MMFR4_EL1   HSR_SYSREG(3,0,c0,c2,6)
+#define HSR_SYSREG_ID_MMFR5_EL1   HSR_SYSREG(3,0,c0,c3,6)
+#define HSR_SYSREG_ID_ISAR0_EL1   HSR_SYSREG(3,0,c0,c2,0)
+#define HSR_SYSREG_ID_ISAR1_EL1   HSR_SYSREG(3,0,c0,c2,1)
+#define HSR_SYSREG_ID_ISAR2_EL1   HSR_SYSREG(3,0,c0,c2,2)
+#define HSR_SYSREG_ID_ISAR3_EL1   HSR_SYSREG(3,0,c0,c2,3)
+#define HSR_SYSREG_ID_ISAR4_EL1   HSR_SYSREG(3,0,c0,c2,4)
+#define HSR_SYSREG_ID_ISAR5_EL1   HSR_SYSREG(3,0,c0,c2,5)
+#define HSR_SYSREG_ID_ISAR6_EL1   HSR_SYSREG(3,0,c0,c2,7)
+#define HSR_SYSREG_MVFR0_EL1      HSR_SYSREG(3,0,c0,c3,0)
+#define HSR_SYSREG_MVFR1_EL1      HSR_SYSREG(3,0,c0,c3,1)
+#define HSR_SYSREG_MVFR2_EL1      HSR_SYSREG(3,0,c0,c3,2)
+
+#define HSR_SYSREG_ID_AA64PFR0_EL1   HSR_SYSREG(3,0,c0,c4,0)
+#define HSR_SYSREG_ID_AA64PFR1_EL1   HSR_SYSREG(3,0,c0,c4,1)
+#define HSR_SYSREG_ID_AA64DFR0_EL1   HSR_SYSREG(3,0,c0,c5,0)
+#define HSR_SYSREG_ID_AA64DFR1_EL1   HSR_SYSREG(3,0,c0,c5,1)
+#define HSR_SYSREG_ID_AA64ISAR0_EL1  HSR_SYSREG(3,0,c0,c6,0)
+#define HSR_SYSREG_ID_AA64ISAR1_EL1  HSR_SYSREG(3,0,c0,c6,1)
+#define HSR_SYSREG_ID_AA64MMFR0_EL1  HSR_SYSREG(3,0,c0,c7,0)
+#define HSR_SYSREG_ID_AA64MMFR1_EL1  HSR_SYSREG(3,0,c0,c7,1)
+#define HSR_SYSREG_ID_AA64MMFR2_EL1  HSR_SYSREG(3,0,c0,c7,2)
+#define HSR_SYSREG_ID_AA64AFR0_EL1   HSR_SYSREG(3,0,c0,c5,4)
+#define HSR_SYSREG_ID_AA64AFR1_EL1   HSR_SYSREG(3,0,c0,c5,5)
+#define HSR_SYSREG_ID_AA64ZFR0_EL1   HSR_SYSREG(3,0,c0,c4,4)
+
 #endif /* __ASM_ARM_ARM64_HSR_H */
 
 /*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 15:42:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 15:42:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55983.97712 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpvQS-0004T1-Tf; Thu, 17 Dec 2020 15:42:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55983.97712; Thu, 17 Dec 2020 15:42:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpvQS-0004Ss-Px; Thu, 17 Dec 2020 15:42:44 +0000
Received: by outflank-mailman (input) for mailman id 55983;
 Thu, 17 Dec 2020 15:42:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xKjN=FV=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kpvQQ-0004G9-SH
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 15:42:42 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id e99d0a0f-ebf0-460c-b3a9-5ba4df1b471f;
 Thu, 17 Dec 2020 15:42:36 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 08A19101E;
 Thu, 17 Dec 2020 07:42:36 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 46A263F66B;
 Thu, 17 Dec 2020 07:42:35 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e99d0a0f-ebf0-460c-b3a9-5ba4df1b471f
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v4 5/8] xen/arm: Add handler for ID registers on arm64
Date: Thu, 17 Dec 2020 15:38:05 +0000
Message-Id: <46c4c7e8ec64a48ecefd894d436c116bab5d4a86.1608214355.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1608214355.git.bertrand.marquis@arm.com>
References: <cover.1608214355.git.bertrand.marquis@arm.com>

Add vsysreg emulation for the registers trapped when the TID3 bit is
set in HCR_EL2.
The emulation returns the value stored in the guest_cpuinfo structure
for known registers and handles reserved registers as RAZ.
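The dispatch described above can be reduced to a small standalone sketch: known ID registers return the sanitised value prepared at boot, while reserved encodings in the trapped group read as zero. The names here are hypothetical stand-ins for the GENERATE_TID3_INFO cases and handle_ro_raz:

```c
#include <stdint.h>

/* Hypothetical register tags standing in for the trapped encodings. */
enum id_reg { REG_ID_AA64PFR0, REG_RESERVED };

/*
 * Known registers return the masked guest value; anything else in the
 * TID3-trapped group is read-as-zero (RAZ), per the Arm ARM.
 */
static uint64_t emulate_id_read(enum id_reg reg,
                                const uint64_t *guest_pfr64)
{
    switch (reg) {
    case REG_ID_AA64PFR0:
        return guest_pfr64[0];  /* value prepared in guest_cpuinfo */
    default:
        return 0;               /* reserved: read-as-zero          */
    }
}
```

Defaulting reserved encodings to RAZ means future ID registers appear as "feature absent" to guests until Xen explicitly supports them.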

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes in V2: Rebase
Changes in V3:
  Fix commit message
  Fix code style for GENERATE_TID3_INFO declaration
  Add handling of reserved registers as RAZ.
Changes in V4:
  Fix indentation in GENERATE_TID3_INFO macro
  Add explicit case code for reserved registers

---
 xen/arch/arm/arm64/vsysreg.c | 82 ++++++++++++++++++++++++++++++++++++
 1 file changed, 82 insertions(+)

diff --git a/xen/arch/arm/arm64/vsysreg.c b/xen/arch/arm/arm64/vsysreg.c
index 8a85507d9d..41f18612c6 100644
--- a/xen/arch/arm/arm64/vsysreg.c
+++ b/xen/arch/arm/arm64/vsysreg.c
@@ -69,6 +69,14 @@ TVM_REG(CONTEXTIDR_EL1)
         break;                                                          \
     }
 
+/* Macro to generate easily case for ID co-processor emulation */
+#define GENERATE_TID3_INFO(reg, field, offset)                          \
+    case HSR_SYSREG_##reg:                                              \
+    {                                                                   \
+        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr,   \
+                                  1, guest_cpuinfo.field.bits[offset]); \
+    }
+
 void do_sysreg(struct cpu_user_regs *regs,
                const union hsr hsr)
 {
@@ -259,6 +267,80 @@ void do_sysreg(struct cpu_user_regs *regs,
          */
         return handle_raz_wi(regs, regidx, hsr.sysreg.read, hsr, 1);
 
+    /*
+     * HCR_EL2.TID3
+     *
+     * This is trapping most Identification registers used by a guest
+     * to identify the processor features
+     */
+    GENERATE_TID3_INFO(ID_PFR0_EL1, pfr32, 0)
+    GENERATE_TID3_INFO(ID_PFR1_EL1, pfr32, 1)
+    GENERATE_TID3_INFO(ID_PFR2_EL1, pfr32, 2)
+    GENERATE_TID3_INFO(ID_DFR0_EL1, dbg32, 0)
+    GENERATE_TID3_INFO(ID_DFR1_EL1, dbg32, 1)
+    GENERATE_TID3_INFO(ID_AFR0_EL1, aux32, 0)
+    GENERATE_TID3_INFO(ID_MMFR0_EL1, mm32, 0)
+    GENERATE_TID3_INFO(ID_MMFR1_EL1, mm32, 1)
+    GENERATE_TID3_INFO(ID_MMFR2_EL1, mm32, 2)
+    GENERATE_TID3_INFO(ID_MMFR3_EL1, mm32, 3)
+    GENERATE_TID3_INFO(ID_MMFR4_EL1, mm32, 4)
+    GENERATE_TID3_INFO(ID_MMFR5_EL1, mm32, 5)
+    GENERATE_TID3_INFO(ID_ISAR0_EL1, isa32, 0)
+    GENERATE_TID3_INFO(ID_ISAR1_EL1, isa32, 1)
+    GENERATE_TID3_INFO(ID_ISAR2_EL1, isa32, 2)
+    GENERATE_TID3_INFO(ID_ISAR3_EL1, isa32, 3)
+    GENERATE_TID3_INFO(ID_ISAR4_EL1, isa32, 4)
+    GENERATE_TID3_INFO(ID_ISAR5_EL1, isa32, 5)
+    GENERATE_TID3_INFO(ID_ISAR6_EL1, isa32, 6)
+    GENERATE_TID3_INFO(MVFR0_EL1, mvfr, 0)
+    GENERATE_TID3_INFO(MVFR1_EL1, mvfr, 1)
+    GENERATE_TID3_INFO(MVFR2_EL1, mvfr, 2)
+    GENERATE_TID3_INFO(ID_AA64PFR0_EL1, pfr64, 0)
+    GENERATE_TID3_INFO(ID_AA64PFR1_EL1, pfr64, 1)
+    GENERATE_TID3_INFO(ID_AA64DFR0_EL1, dbg64, 0)
+    GENERATE_TID3_INFO(ID_AA64DFR1_EL1, dbg64, 1)
+    GENERATE_TID3_INFO(ID_AA64ISAR0_EL1, isa64, 0)
+    GENERATE_TID3_INFO(ID_AA64ISAR1_EL1, isa64, 1)
+    GENERATE_TID3_INFO(ID_AA64MMFR0_EL1, mm64, 0)
+    GENERATE_TID3_INFO(ID_AA64MMFR1_EL1, mm64, 1)
+    GENERATE_TID3_INFO(ID_AA64MMFR2_EL1, mm64, 2)
+    GENERATE_TID3_INFO(ID_AA64AFR0_EL1, aux64, 0)
+    GENERATE_TID3_INFO(ID_AA64AFR1_EL1, aux64, 1)
+    GENERATE_TID3_INFO(ID_AA64ZFR0_EL1, zfr64, 0)
+
+    /*
+     * These cases catch all reserved registers trapped by TID3 which
+     * currently have no assignment.
+     * HCR.TID3 traps all registers in group 3:
+     * Op0 == 3, op1 == 0, CRn == c0, CRm == {c1-c7}, op2 == {0-7}.
+     * These registers are defined as RO in the Arm Architecture
+     * Reference Manual for Armv8 (chapter D12.3.2 of issue F.c), so
+     * handle them as read-only, read-as-zero.
+     */
+    case HSR_SYSREG(3,0,c0,c3,3):
+    case HSR_SYSREG(3,0,c0,c3,7):
+    case HSR_SYSREG(3,0,c0,c4,2):
+    case HSR_SYSREG(3,0,c0,c4,3):
+    case HSR_SYSREG(3,0,c0,c4,5):
+    case HSR_SYSREG(3,0,c0,c4,6):
+    case HSR_SYSREG(3,0,c0,c4,7):
+    case HSR_SYSREG(3,0,c0,c5,2):
+    case HSR_SYSREG(3,0,c0,c5,3):
+    case HSR_SYSREG(3,0,c0,c5,6):
+    case HSR_SYSREG(3,0,c0,c5,7):
+    case HSR_SYSREG(3,0,c0,c6,2):
+    case HSR_SYSREG(3,0,c0,c6,3):
+    case HSR_SYSREG(3,0,c0,c6,4):
+    case HSR_SYSREG(3,0,c0,c6,5):
+    case HSR_SYSREG(3,0,c0,c6,6):
+    case HSR_SYSREG(3,0,c0,c6,7):
+    case HSR_SYSREG(3,0,c0,c7,3):
+    case HSR_SYSREG(3,0,c0,c7,4):
+    case HSR_SYSREG(3,0,c0,c7,5):
+    case HSR_SYSREG(3,0,c0,c7,6):
+    case HSR_SYSREG(3,0,c0,c7,7):
+        return handle_ro_raz(regs, regidx, hsr.sysreg.read, hsr, 1);
+
     /*
      * HCR_EL2.TIDCP
      *
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 15:42:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 15:42:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55984.97718 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpvQT-0004Tu-BX; Thu, 17 Dec 2020 15:42:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55984.97718; Thu, 17 Dec 2020 15:42:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpvQT-0004Tf-5H; Thu, 17 Dec 2020 15:42:45 +0000
Received: by outflank-mailman (input) for mailman id 55984;
 Thu, 17 Dec 2020 15:42:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xKjN=FV=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kpvQR-0004GE-W0
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 15:42:44 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 77e24c80-ff18-473d-97fa-bf8feda5e0f4;
 Thu, 17 Dec 2020 15:42:37 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0380330E;
 Thu, 17 Dec 2020 07:42:37 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 41F943F66B;
 Thu, 17 Dec 2020 07:42:36 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 77e24c80-ff18-473d-97fa-bf8feda5e0f4
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v4 6/8] xen/arm: Add handler for cp15 ID registers
Date: Thu, 17 Dec 2020 15:38:06 +0000
Message-Id: <c1c68e89683913dbf71a8f370dc6fd896a9e8cce.1608214355.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1608214355.git.bertrand.marquis@arm.com>
References: <cover.1608214355.git.bertrand.marquis@arm.com>

Add support for emulating the cp15-based ID registers (on arm32, or when
running a 32-bit guest on arm64).
The handlers return the values stored in the guest_cpuinfo structure for
known registers, and RAZ for all reserved registers.
The MVFR registers are currently not supported.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes in V2: Rebase
Changes in V3:
  Add case definition for reserved registers
  Add handling of reserved registers as RAZ.
  Fix code style in GENERATE_TID3_INFO declaration
Changes in V4:
  Fix missing 't' in a comment ("no" -> "not")
  Put cases for reserved registers directly in the code instead of using
  a define in the cpregs.h header.

---
 xen/arch/arm/vcpreg.c | 65 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 65 insertions(+)

diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c
index cdc91cdf5b..1fe07fe02a 100644
--- a/xen/arch/arm/vcpreg.c
+++ b/xen/arch/arm/vcpreg.c
@@ -155,6 +155,24 @@ TVM_REG32(CONTEXTIDR, CONTEXTIDR_EL1)
         break;                                                      \
     }
 
+/* Macro to easily generate a case for ID co-processor emulation */
+#define GENERATE_TID3_INFO(reg, field, offset)                      \
+    case HSR_CPREG32(reg):                                          \
+    {                                                               \
+        return handle_ro_read_val(regs, regidx, cp32.read, hsr,     \
+                          1, guest_cpuinfo.field.bits[offset]);     \
+    }
+
+/* helper to define cases for all registers for one CRm value */
+#define HSR_CPREG32_TID3_CASES(REG)     case HSR_CPREG32(p15,0,c0,REG,0): \
+                                        case HSR_CPREG32(p15,0,c0,REG,1): \
+                                        case HSR_CPREG32(p15,0,c0,REG,2): \
+                                        case HSR_CPREG32(p15,0,c0,REG,3): \
+                                        case HSR_CPREG32(p15,0,c0,REG,4): \
+                                        case HSR_CPREG32(p15,0,c0,REG,5): \
+                                        case HSR_CPREG32(p15,0,c0,REG,6): \
+                                        case HSR_CPREG32(p15,0,c0,REG,7)
+
 void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
 {
     const struct hsr_cp32 cp32 = hsr.cp32;
@@ -286,6 +304,53 @@ void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
          */
         return handle_raz_wi(regs, regidx, cp32.read, hsr, 1);
 
+    /*
+     * HCR_EL2.TID3
+     *
+     * This traps most of the identification registers used by a guest
+     * to identify processor features.
+     */
+    GENERATE_TID3_INFO(ID_PFR0, pfr32, 0)
+    GENERATE_TID3_INFO(ID_PFR1, pfr32, 1)
+    GENERATE_TID3_INFO(ID_PFR2, pfr32, 2)
+    GENERATE_TID3_INFO(ID_DFR0, dbg32, 0)
+    GENERATE_TID3_INFO(ID_DFR1, dbg32, 1)
+    GENERATE_TID3_INFO(ID_AFR0, aux32, 0)
+    GENERATE_TID3_INFO(ID_MMFR0, mm32, 0)
+    GENERATE_TID3_INFO(ID_MMFR1, mm32, 1)
+    GENERATE_TID3_INFO(ID_MMFR2, mm32, 2)
+    GENERATE_TID3_INFO(ID_MMFR3, mm32, 3)
+    GENERATE_TID3_INFO(ID_MMFR4, mm32, 4)
+    GENERATE_TID3_INFO(ID_MMFR5, mm32, 5)
+    GENERATE_TID3_INFO(ID_ISAR0, isa32, 0)
+    GENERATE_TID3_INFO(ID_ISAR1, isa32, 1)
+    GENERATE_TID3_INFO(ID_ISAR2, isa32, 2)
+    GENERATE_TID3_INFO(ID_ISAR3, isa32, 3)
+    GENERATE_TID3_INFO(ID_ISAR4, isa32, 4)
+    GENERATE_TID3_INFO(ID_ISAR5, isa32, 5)
+    GENERATE_TID3_INFO(ID_ISAR6, isa32, 6)
+    /* MVFR registers are in cp10 not cp15 */
+
+    /*
+     * These cases catch all reserved registers trapped by TID3 which
+     * currently have no assignment.
+     * HCR.TID3 traps all registers in group 3:
+     * coproc == p15, opc1 == 0, CRn == c0, CRm == {c2-c7}, opc2 == {0-7}.
+     * These registers are defined as RO in the Arm Architecture
+     * Reference Manual for Armv8 (chapter D12.3.2 of issue F.c), so
+     * handle them as read-only, read-as-zero.
+     */
+    case HSR_CPREG32(p15,0,c0,c3,0):
+    case HSR_CPREG32(p15,0,c0,c3,1):
+    case HSR_CPREG32(p15,0,c0,c3,2):
+    case HSR_CPREG32(p15,0,c0,c3,3):
+    case HSR_CPREG32(p15,0,c0,c3,7):
+    HSR_CPREG32_TID3_CASES(c4):
+    HSR_CPREG32_TID3_CASES(c5):
+    HSR_CPREG32_TID3_CASES(c6):
+    HSR_CPREG32_TID3_CASES(c7):
+        return handle_ro_raz(regs, regidx, cp32.read, hsr, 1);
+
     /*
      * HCR_EL2.TIDCP
      *
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 15:42:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 15:42:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55986.97736 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpvQX-0004bv-T1; Thu, 17 Dec 2020 15:42:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55986.97736; Thu, 17 Dec 2020 15:42:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpvQX-0004bk-NP; Thu, 17 Dec 2020 15:42:49 +0000
Received: by outflank-mailman (input) for mailman id 55986;
 Thu, 17 Dec 2020 15:42:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xKjN=FV=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kpvQV-0004G9-SJ
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 15:42:47 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 158ab480-93c7-46a5-a1b6-5fec4addc648;
 Thu, 17 Dec 2020 15:42:39 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id EDA4830E;
 Thu, 17 Dec 2020 07:42:38 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 37F373F66B;
 Thu, 17 Dec 2020 07:42:38 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 158ab480-93c7-46a5-a1b6-5fec4addc648
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v4 8/8] xen/arm: Activate TID3 in HCR_EL2
Date: Thu, 17 Dec 2020 15:38:08 +0000
Message-Id: <d89992ce6177bee2c5331cdc3a90d5b189669d0d.1608214355.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1608214355.git.bertrand.marquis@arm.com>
References: <cover.1608214355.git.bertrand.marquis@arm.com>

Activate the TID3 bit in the HCR register when starting a guest.
This traps all coprocessor ID registers, so that we can present guests
with values corresponding to what they can actually use, and mask out
features even when they are supported by the underlying hardware (such
as SVE or MPAM).

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes in V2: Rebase
Changes in V3: Rebase
Changes in V4: Rebase

---
 xen/arch/arm/traps.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 28d9d64558..c1a9ad6056 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -98,7 +98,7 @@ register_t get_default_hcr_flags(void)
 {
     return  (HCR_PTW|HCR_BSU_INNER|HCR_AMO|HCR_IMO|HCR_FMO|HCR_VM|
              (vwfi != NATIVE ? (HCR_TWI|HCR_TWE) : 0) |
-             HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
+             HCR_TID3|HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
 }
 
 static enum {
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 15:42:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 15:42:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.55987.97748 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpvQZ-0004el-83; Thu, 17 Dec 2020 15:42:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 55987.97748; Thu, 17 Dec 2020 15:42:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpvQZ-0004eX-3u; Thu, 17 Dec 2020 15:42:51 +0000
Received: by outflank-mailman (input) for mailman id 55987;
 Thu, 17 Dec 2020 15:42:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xKjN=FV=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kpvQW-0004GE-W1
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 15:42:49 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 3eabbf11-50b1-4745-a474-c7cc8f6ea9d8;
 Thu, 17 Dec 2020 15:42:38 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id F2C1F101E;
 Thu, 17 Dec 2020 07:42:37 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 3CE683F66B;
 Thu, 17 Dec 2020 07:42:37 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3eabbf11-50b1-4745-a474-c7cc8f6ea9d8
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v4 7/8] xen/arm: Add CP10 exception support to handle MVFR
Date: Thu, 17 Dec 2020 15:38:07 +0000
Message-Id: <841e5cd22290158d9b0c5d6dedafd01ed9a3d0bc.1608214355.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1608214355.git.bertrand.marquis@arm.com>
References: <cover.1608214355.git.bertrand.marquis@arm.com>

Add support for decoding cp10 exceptions in order to emulate the values
of MVFR0, MVFR1 and MVFR2 when the TID3 bit of HCR is set.
This is required for aarch32 guests accessing the MVFR registers using
the vmrs and vmsr instructions.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes in V2: Rebase
Changes in V3:
  Add case for MVFR2, fix typo VMFR <-> MVFR.
Changes in V4:
  Fix typo HSR -> HCR
  Move the "no" to "not" comment fix to the previous patch

---
 xen/arch/arm/traps.c             |  5 +++++
 xen/arch/arm/vcpreg.c            | 37 ++++++++++++++++++++++++++++++++
 xen/include/asm-arm/perfc_defn.h |  1 +
 xen/include/asm-arm/traps.h      |  1 +
 4 files changed, 44 insertions(+)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 22bd1bd4c6..28d9d64558 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2097,6 +2097,11 @@ void do_trap_guest_sync(struct cpu_user_regs *regs)
         perfc_incr(trap_cp14_dbg);
         do_cp14_dbg(regs, hsr);
         break;
+    case HSR_EC_CP10:
+        GUEST_BUG_ON(!psr_mode_is_32bit(regs));
+        perfc_incr(trap_cp10);
+        do_cp10(regs, hsr);
+        break;
     case HSR_EC_CP:
         GUEST_BUG_ON(!psr_mode_is_32bit(regs));
         perfc_incr(trap_cp);
diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c
index 1fe07fe02a..cbad8f25a0 100644
--- a/xen/arch/arm/vcpreg.c
+++ b/xen/arch/arm/vcpreg.c
@@ -664,6 +664,43 @@ void do_cp14_dbg(struct cpu_user_regs *regs, const union hsr hsr)
     inject_undef_exception(regs, hsr);
 }
 
+void do_cp10(struct cpu_user_regs *regs, const union hsr hsr)
+{
+    const struct hsr_cp32 cp32 = hsr.cp32;
+    int regidx = cp32.reg;
+
+    if ( !check_conditional_instr(regs, hsr) )
+    {
+        advance_pc(regs, hsr);
+        return;
+    }
+
+    switch ( hsr.bits & HSR_CP32_REGS_MASK )
+    {
+    /*
+     * HCR.TID3 traps accesses to the MVFR registers, which are used to
+     * identify the VFP/SIMD implementation via the VMRS/VMSR instructions.
+     * The exception encoding follows the standard MRC/MCR form, with the
+     * register number in CRn, as MVFR0 and MVFR1 are declared in cpregs.h.
+     */
+    GENERATE_TID3_INFO(MVFR0, mvfr, 0)
+    GENERATE_TID3_INFO(MVFR1, mvfr, 1)
+    GENERATE_TID3_INFO(MVFR2, mvfr, 2)
+
+    default:
+        gdprintk(XENLOG_ERR,
+                 "%s p10, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
+                 cp32.read ? "mrc" : "mcr",
+                 cp32.op1, cp32.reg, cp32.crn, cp32.crm, cp32.op2, regs->pc);
+        gdprintk(XENLOG_ERR, "unhandled 32-bit CP10 access %#x\n",
+                 hsr.bits & HSR_CP32_REGS_MASK);
+        inject_undef_exception(regs, hsr);
+        return;
+    }
+
+    advance_pc(regs, hsr);
+}
+
 void do_cp(struct cpu_user_regs *regs, const union hsr hsr)
 {
     const struct hsr_cp cp = hsr.cp;
diff --git a/xen/include/asm-arm/perfc_defn.h b/xen/include/asm-arm/perfc_defn.h
index 6a83185163..31f071222b 100644
--- a/xen/include/asm-arm/perfc_defn.h
+++ b/xen/include/asm-arm/perfc_defn.h
@@ -11,6 +11,7 @@ PERFCOUNTER(trap_cp15_64,  "trap: cp15 64-bit access")
 PERFCOUNTER(trap_cp14_32,  "trap: cp14 32-bit access")
 PERFCOUNTER(trap_cp14_64,  "trap: cp14 64-bit access")
 PERFCOUNTER(trap_cp14_dbg, "trap: cp14 dbg access")
+PERFCOUNTER(trap_cp10,     "trap: cp10 access")
 PERFCOUNTER(trap_cp,       "trap: cp access")
 PERFCOUNTER(trap_smc32,    "trap: 32-bit smc")
 PERFCOUNTER(trap_hvc32,    "trap: 32-bit hvc")
diff --git a/xen/include/asm-arm/traps.h b/xen/include/asm-arm/traps.h
index 997c37884e..c4a3d0fb1b 100644
--- a/xen/include/asm-arm/traps.h
+++ b/xen/include/asm-arm/traps.h
@@ -62,6 +62,7 @@ void do_cp15_64(struct cpu_user_regs *regs, const union hsr hsr);
 void do_cp14_32(struct cpu_user_regs *regs, const union hsr hsr);
 void do_cp14_64(struct cpu_user_regs *regs, const union hsr hsr);
 void do_cp14_dbg(struct cpu_user_regs *regs, const union hsr hsr);
+void do_cp10(struct cpu_user_regs *regs, const union hsr hsr);
 void do_cp(struct cpu_user_regs *regs, const union hsr hsr);
 
 /* SMCCC handling */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 15:54:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 15:54:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56028.97763 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpvbW-0006Aa-FE; Thu, 17 Dec 2020 15:54:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56028.97763; Thu, 17 Dec 2020 15:54:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpvbW-0006AT-Bc; Thu, 17 Dec 2020 15:54:10 +0000
Received: by outflank-mailman (input) for mailman id 56028;
 Thu, 17 Dec 2020 15:54:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8YGc=FV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kpvbV-0006AO-CL
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 15:54:09 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 67f5f643-a9ef-4607-bc04-1322f2339085;
 Thu, 17 Dec 2020 15:54:08 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 485CBAC1A;
 Thu, 17 Dec 2020 15:54:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 67f5f643-a9ef-4607-bc04-1322f2339085
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608220447; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2DZh75jUxdp+tLWQj/5xkezmHM+AXQskTAF3Pe03PIk=;
	b=mKR/Dxk9pmCDRuR/TSjgvyFMso4lQdP2ap91wwPYctgzoDtfFcvav47Z6sN0hOj1feNuiU
	GhIwjPIWDCpijs+9s4lxAVuKgOZRG7I8Vkt4ZcRg/exKdwZU/xrzsMFqCKvTSszbnIgvKh
	gKJL21ncSnsglx/k2iARa63zq4KarWw=
Subject: Re: [PATCH v3 6/8] xen/cpupool: add cpupool directories
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Dario Faggioli <dfaggioli@suse.com>,
 xen-devel@lists.xenproject.org
References: <20201209160956.32456-1-jgross@suse.com>
 <20201209160956.32456-7-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <fc21ddcd-f263-d502-5b85-a74350c29fe3@suse.com>
Date: Thu, 17 Dec 2020 16:54:06 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201209160956.32456-7-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.12.2020 17:09, Juergen Gross wrote:
> Add /cpupool/<cpupool-id> directories to hypfs. Those are completely
> dynamic, so the related hypfs access functions need to be implemented.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 15:57:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 15:57:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56035.97787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpveT-0006OQ-4R; Thu, 17 Dec 2020 15:57:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56035.97787; Thu, 17 Dec 2020 15:57:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpveT-0006OJ-0E; Thu, 17 Dec 2020 15:57:13 +0000
Received: by outflank-mailman (input) for mailman id 56035;
 Thu, 17 Dec 2020 15:57:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8YGc=FV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kpveR-0006OD-Dq
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 15:57:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c84f4168-ce21-4b98-8f7b-61f3252de7d0;
 Thu, 17 Dec 2020 15:57:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E494BAC7B;
 Thu, 17 Dec 2020 15:57:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c84f4168-ce21-4b98-8f7b-61f3252de7d0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608220630; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TioOiW3tXkJRnbrvJbYYSVUNm2pWLcoihqS9C5v2D9o=;
	b=gmO961GS1gnDVWqUNeQXret7ivmTtr/z9ip7xxRy2SMxKypXX/20E5IUwwpTfKoUt9QRrI
	okcv/qa953HJVRZVJlkFRuZBagyW1DMisUHdTdj1Mi1gMX2GzrCy6cDNrru7oL6E3i0gHQ
	D2lXCh8ZKEDPhYitNkXPy2BydHvHUoI=
Subject: Re: [PATCH v3 7/8] xen/cpupool: add scheduling granularity entry to
 cpupool entries
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Dario Faggioli <dfaggioli@suse.com>,
 xen-devel@lists.xenproject.org
References: <20201209160956.32456-1-jgross@suse.com>
 <20201209160956.32456-8-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9cb6325b-09c2-29b7-1a78-09465bde9473@suse.com>
Date: Thu, 17 Dec 2020 16:57:08 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201209160956.32456-8-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.12.2020 17:09, Juergen Gross wrote:
> @@ -1080,6 +1092,56 @@ static struct hypfs_entry *cpupool_dir_findentry(
>      return hypfs_gen_dyndir_id_entry(&cpupool_pooldir, id, cpupool);
>  }
>  
> +static int cpupool_gran_read(const struct hypfs_entry *entry,
> +                             XEN_GUEST_HANDLE_PARAM(void) uaddr)
> +{
> +    const struct hypfs_dyndir_id *data;
> +    const struct cpupool *cpupool;
> +    const char *gran;
> +
> +    data = hypfs_get_dyndata();
> +    cpupool = data->data;
> +    ASSERT(cpupool);

With this and ...

> +static unsigned int hypfs_gran_getsize(const struct hypfs_entry *entry)
> +{
> +    const struct hypfs_dyndir_id *data;
> +    const struct cpupool *cpupool;
> +    const char *gran;
> +
> +    data = hypfs_get_dyndata();
> +    cpupool = data->data;
> +    ASSERT(cpupool);

... this ASSERT() I'd like to first settle our earlier discussion,
before possibly giving my R-b here. No other remaining remarks from
my side.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 16:10:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 16:10:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56045.97801 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpvrF-0000RA-JS; Thu, 17 Dec 2020 16:10:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56045.97801; Thu, 17 Dec 2020 16:10:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpvrF-0000R3-GZ; Thu, 17 Dec 2020 16:10:25 +0000
Received: by outflank-mailman (input) for mailman id 56045;
 Thu, 17 Dec 2020 16:10:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6lBs=FV=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1kpvrE-0000Qy-Il
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 16:10:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f0034a92-5fd6-4aeb-8e1b-b1df66a3d1eb;
 Thu, 17 Dec 2020 16:10:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BA939AC79;
 Thu, 17 Dec 2020 16:10:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f0034a92-5fd6-4aeb-8e1b-b1df66a3d1eb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608221422; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=qh3wm76ap/z3ujVwhKHfHGAXUtjkoyalsb8yIQAaDak=;
	b=uNCQ99WuWu0+KE4I0JNdLF1Kf7VnMTJjsJfIBNjHgWRGa6xSsMYmKwzJG1FtRfVEBvdpmk
	9hadAZgckZcDxfmP0Qs5MWcTxujHA9F80omkYUr9s7X7P+gBQofMudnKdhrKuDUxL8On7b
	jU9d81E6B8kEBpE4JuDETkDZ7lcOG6U=
Message-ID: <4cf9e657a31317fb8ce1dcce2e841f13733c79a1.camel@suse.com>
Subject: Re: [PATCH v3 6/8] xen/cpupool: add cpupool directories
From: Dario Faggioli <dfaggioli@suse.com>
To: Jan Beulich <jbeulich@suse.com>, Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	 <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	 <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	 <wl@xen.org>, xen-devel@lists.xenproject.org
Date: Thu, 17 Dec 2020 17:10:21 +0100
In-Reply-To: <fc21ddcd-f263-d502-5b85-a74350c29fe3@suse.com>
References: <20201209160956.32456-1-jgross@suse.com>
	 <20201209160956.32456-7-jgross@suse.com>
	 <fc21ddcd-f263-d502-5b85-a74350c29fe3@suse.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-HXN4+4NLrcBn94s8zI8s"
User-Agent: Evolution 3.38.2 (by Flathub.org) 
MIME-Version: 1.0


--=-HXN4+4NLrcBn94s8zI8s
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2020-12-17 at 16:54 +0100, Jan Beulich wrote:
> On 09.12.2020 17:09, Juergen Gross wrote:
> > Add /cpupool/<cpupool-id> directories to hypfs. Those are
> > completely
> > dynamic, so the related hypfs access functions need to be
> > implemented.
> >
> > Signed-off-by: Juergen Gross <jgross@suse.com>
>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>
Not needed, I think, but still (and if this hasn't been committed
already):

Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-HXN4+4NLrcBn94s8zI8s
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl/bgu0ACgkQFkJ4iaW4
c+4rgxAAnt9u5C8cstsXi0+UzlPWikRQczw41GQCpNOY+HXxpQjLGPkztk0d7X1t
0O9x6QeILicWfPOqbTl8M4ZJJle5HvkHVHPWxg6SxIlLYrEe+FYbYuiaw+m+ACSD
Uvf0LlZjAM8Gf1Wonz3jIy4dGtqNLiBJnfEi8MqjBBBy7rAqmyUroPnWXMC4hUUI
E/1s1m2CbXFWRIRCbBuYJLypdG5kI+/5bTA/z7uj324YyJcg6QdoaMmRzUVwBEYE
D/7QkqhNAIB7vNAHmYuYQTFoESFt7I3CSi2uTpb0icosbSERowzF5wZEn0YH2I89
DXUiOKeKPFhG9mF5EBo7nPv8urbw04DoMSmPZ8Lwm7rXopNUf2Rgsa1OAu2Ux1Sg
NVyyCacBMvvmhpGxCe3ZpzY6lYB6Vm8PvW3Dl85g441sARJw92S/bxZc/WdhqVoH
xYGQhq42vrtI7ntotv+WyN6n/McEcsK7qji6EVhpTaM0SPcXB2gV9o8zLxAk/5Nw
lexufXPDQbEDYNnw73pjwouIOokBgEpscYmiE4Ao0nNoHmlFzC8uQ6UlvHfGMDP3
rKVzIfM//AIOPFQfrAYDX82VeoHt4gx+G6qBNaOJ6wMYlStzl5Pzi0IExMhpQTqN
+uWsSTXCnWSgtkXY1wQgFZsKOuVvtAgFI7nMJTsZY3DR4n4QD5o=
=hZHG
-----END PGP SIGNATURE-----

--=-HXN4+4NLrcBn94s8zI8s--



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 16:25:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 16:25:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56049.97814 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpw65-0001ZM-TW; Thu, 17 Dec 2020 16:25:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56049.97814; Thu, 17 Dec 2020 16:25:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpw65-0001ZF-Q7; Thu, 17 Dec 2020 16:25:45 +0000
Received: by outflank-mailman (input) for mailman id 56049;
 Thu, 17 Dec 2020 16:25:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tHkA=FV=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1kpw64-0001ZA-BK
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 16:25:44 +0000
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f5f3b7a1-fbbc-4f4a-9fbd-da28b2f3b71d;
 Thu, 17 Dec 2020 16:25:43 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0BHGNmiY068507;
 Thu, 17 Dec 2020 16:25:39 GMT
Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71])
 by aserp2120.oracle.com with ESMTP id 35cntme81n-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Thu, 17 Dec 2020 16:25:39 +0000
Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1])
 by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0BHGPKCR079930;
 Thu, 17 Dec 2020 16:25:39 GMT
Received: from userv0121.oracle.com (userv0121.oracle.com [156.151.31.72])
 by aserp3030.oracle.com with ESMTP id 35d7er3pke-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 17 Dec 2020 16:25:39 +0000
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
 by userv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 0BHGPbuN020657;
 Thu, 17 Dec 2020 16:25:37 GMT
Received: from [10.39.250.121] (/10.39.250.121)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Thu, 17 Dec 2020 08:25:37 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f5f3b7a1-fbbc-4f4a-9fbd-da28b2f3b71d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : subject : to :
 cc : references : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=nLISPQKk9UyzeVhlyH2H8KlkM1pe3JTdt5YkBCjv/ec=;
 b=smet4DJl4It6bfnmWmlZ/a0Y3LlQ2d8gNZdxHR2Vbvu7xnQeUor4UhnKusXes7UuMGxg
 mze8DhefJXImT5YvwWvx5REQBDS2USyafNnf3xGfXMrjnHVlr+7kgEvgS9RQmEMpWBz1
 uRKaTkw/SzuZx0u4cPUd+t1uu2xlSDoXj5V9jba3OME/3SXKzqnS8hgcdCryp9f0Bikj
 NkWubF/ovjZkcmfbA9yBHIHoPN3wb7l+kMcJUuqUtTCEjbtMFpH57Ksu+a/4EhQ34he6
 95B2/bayxy7onIafrY5zz80q0AY2Ga+vQCURnFYy+9WLYtubiu9AuRNHXkBAHJz6w6AJ 9Q== 
From: boris.ostrovsky@oracle.com
Subject: Re: XSA-351 causing Solaris-11 systems to panic during boot.
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org, Cheyenne Wills <cheyenne.wills@gmail.com>
References: <CAHpsFVc4AAm6L0rKUuV47ydOjtw7XAgFnDZxRjdCL0OHXJERDw@mail.gmail.com>
 <7bca24cb-a3af-b54d-b224-3c2a316859dd@suse.com>
 <4fc3532b-f53f-2a15-ce64-f857816b0566@oracle.com>
 <f4ff3d16-40f6-e8a1-fcdd-ca52e1f52ca6@suse.com>
Organization: Oracle Corporation
Message-ID: <c90622c4-f9e0-8b6d-ab46-bba0cbfc0fd9@oracle.com>
Date: Thu, 17 Dec 2020 11:25:35 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <f4ff3d16-40f6-e8a1-fcdd-ca52e1f52ca6@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9838 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 phishscore=0 spamscore=0 bulkscore=0
 suspectscore=0 adultscore=0 mlxscore=0 mlxlogscore=999 malwarescore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2012170113
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9838 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 phishscore=0 mlxscore=0
 lowpriorityscore=0 spamscore=0 adultscore=0 malwarescore=0 suspectscore=0
 mlxlogscore=999 impostorscore=0 priorityscore=1501 clxscore=1015
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2012170113


On 12/17/20 2:40 AM, Jan Beulich wrote:
> On 17.12.2020 02:51, boris.ostrovsky@oracle.com wrote:
> I think this is acceptable as a workaround, albeit we may want to
> consider further restricting this (at least on staging), like e.g.
> requiring a guest config setting to enable the workaround. 


Maybe, but then someone migrating from a stable release to 4.15 will have to modify guest configuration.


> But
> maybe this will need to be part of the MSR policy for the domain
> instead, down the road. We'll definitely want Andrew's view here.
>
> Speaking of staging - before applying anything to the stable
> branches, I think we want to have this addressed on the main
> branch. I can't see how Solaris would work there.


Indeed it won't. I'll need to do that as well (I misinterpreted the statement in the XSA about only 4.14- being vulnerable).



-boris



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 16:43:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 16:43:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56053.97826 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpwNR-0003YU-F3; Thu, 17 Dec 2020 16:43:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56053.97826; Thu, 17 Dec 2020 16:43:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpwNR-0003YM-BL; Thu, 17 Dec 2020 16:43:41 +0000
Received: by outflank-mailman (input) for mailman id 56053;
 Thu, 17 Dec 2020 16:43:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6Fje=FV=intel.com=lkp@srs-us1.protection.inumbo.net>)
 id 1kpwNP-0003YH-Tp
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 16:43:40 +0000
Received: from mga02.intel.com (unknown [134.134.136.20])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 85255e67-f117-4020-9d72-49fd4b594dbc;
 Thu, 17 Dec 2020 16:43:35 +0000 (UTC)
Received: from orsmga001.jf.intel.com ([10.7.209.18])
 by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 17 Dec 2020 08:43:34 -0800
Received: from lkp-server02.sh.intel.com (HELO 070e1a605002) ([10.239.97.151])
 by orsmga001.jf.intel.com with ESMTP; 17 Dec 2020 08:43:30 -0800
Received: from kbuild by 070e1a605002 with local (Exim 4.92)
 (envelope-from <lkp@intel.com>)
 id 1kpwNF-0001HD-UW; Thu, 17 Dec 2020 16:43:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 85255e67-f117-4020-9d72-49fd4b594dbc
IronPort-SDR: 8eHTwhtT4c93DvujiQT//+O3HAzZSnJ2+ZgvUM3iTWnHXuCLrOIhfii8JcW20wtEfQc/OsgngJ
 RZol0bLcOGrQ==
X-IronPort-AV: E=McAfee;i="6000,8403,9838"; a="162331501"
X-IronPort-AV: E=Sophos;i="5.78,428,1599548400"; 
   d="gz'50?scan'50,208,50";a="162331501"
IronPort-SDR: KuBvCWc6xxBn6UZRGsBEBoWE62uEkR2OcPbE+r8gxwMI0M2zXPzTehQp6EVJaIkjjn34myZKgx
 GAkKWetAg5vw==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.78,428,1599548400"; 
   d="gz'50?scan'50,208,50";a="413499944"
Date: Fri, 18 Dec 2020 00:42:39 +0800
From: kernel test robot <lkp@intel.com>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
	x86@kernel.org, virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org
Cc: kbuild-all@lists.01.org, Juergen Gross <jgross@suse.com>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>
Subject: Re: [PATCH v3 14/15] x86/paravirt: switch functions with custom code
 to ALTERNATIVE
Message-ID: <202012180031.B1lbwtVM-lkp@intel.com>
References: <20201217093133.1507-15-jgross@suse.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="3MwIy2ne0vdjdPXF"
Content-Disposition: inline
In-Reply-To: <20201217093133.1507-15-jgross@suse.com>
User-Agent: Mutt/1.10.1 (2018-07-13)


--3MwIy2ne0vdjdPXF
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hi Juergen,

I love your patch! Yet something to improve:

[auto build test ERROR on linus/master]
[cannot apply to xen-tip/linux-next tip/x86/core tip/x86/asm v5.10 next-20201217]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Juergen-Gross/x86-major-paravirt-cleanup/20201217-173646
base:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git accefff5b547a9a1d959c7e76ad539bf2480e78b
config: x86_64-randconfig-a002-20201217 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce (this is a W=1 build):
        # https://github.com/0day-ci/linux/commit/bc3cbe0ff1b123a4b7f48c91b32198d7dfe57797
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Juergen-Gross/x86-major-paravirt-cleanup/20201217-173646
        git checkout bc3cbe0ff1b123a4b7f48c91b32198d7dfe57797
        # save the attached .config to linux build tree
        make W=1 ARCH=x86_64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All error/warnings (new ones prefixed by >>):

   arch/x86/entry/entry_64.S: Assembler messages:
>> arch/x86/entry/entry_64.S:1092: Error: junk at end of line, first unrecognized character is `('
>> arch/x86/entry/entry_64.S:1092: Error: backward ref to unknown label "771:"
>> arch/x86/entry/entry_64.S:1092: Error: backward ref to unknown label "771:"
>> arch/x86/entry/entry_64.S:1092: Error: junk at end of line, first unrecognized character is `,'
>> arch/x86/entry/entry_64.S:1092: Warning: missing closing '"'
>> arch/x86/entry/entry_64.S:1092: Error: expecting mnemonic; got nothing


vim +1092 arch/x86/entry/entry_64.S

ddeb8f2149de280 arch/x86/kernel/entry_64.S Alexander van Heukelum 2008-11-24  1089  
424c7d0a9a396ba arch/x86/entry/entry_64.S  Thomas Gleixner        2020-03-26  1090  SYM_CODE_START_LOCAL(error_return)
424c7d0a9a396ba arch/x86/entry/entry_64.S  Thomas Gleixner        2020-03-26  1091  	UNWIND_HINT_REGS
424c7d0a9a396ba arch/x86/entry/entry_64.S  Thomas Gleixner        2020-03-26 @1092  	DEBUG_ENTRY_ASSERT_IRQS_OFF
424c7d0a9a396ba arch/x86/entry/entry_64.S  Thomas Gleixner        2020-03-26  1093  	testb	$3, CS(%rsp)
424c7d0a9a396ba arch/x86/entry/entry_64.S  Thomas Gleixner        2020-03-26  1094  	jz	restore_regs_and_return_to_kernel
424c7d0a9a396ba arch/x86/entry/entry_64.S  Thomas Gleixner        2020-03-26  1095  	jmp	swapgs_restore_regs_and_return_to_usermode
424c7d0a9a396ba arch/x86/entry/entry_64.S  Thomas Gleixner        2020-03-26  1096  SYM_CODE_END(error_return)
424c7d0a9a396ba arch/x86/entry/entry_64.S  Thomas Gleixner        2020-03-26  1097  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

--3MwIy2ne0vdjdPXF
Content-Type: application/gzip
Content-Disposition: attachment; filename=".config.gz"
Content-Transfer-Encoding: base64

ilPMb66gB8Xk08nk932LEpiU0LMoO816RNIwG2EJRdF8WHx6+f4qnEbRf7UPs4+EideoN+IZ
0uAQP2809XaBZiAgyPxfpxG05CBDGpDE8WpMsnRZ/+hw8DcxENox1AubgCuac9iuXJ9hhnsS
m94SoOk4EeJdblSI99ptq0ZZJ2q0Om2TVzzLAf1nnV2eSzNlSpzlICx8cJKju+ucKbeGRVFq
c94qbfYs5LI++94jrl0tTAcuLRgs0303NrWWx1/Y9ZB8XPbny/eXP94h4orpCHEYnrR7NFeo
5j3/iAxPysm0fOPnJPLVDpFTgiheM69EEFSIeQERRazrLfb2/ePLJ9vbvNSpZdhvoj4smYA0
iDxzFk1kLhV0fSkiC8yO5p3Tak5iuJBCOPw4irzsdsk4yXSuorAd4BwW+6qqTEQ+60IbpYch
U4FyzHocqYVCk+tTfAab/nYWQRt2GNrzAaN1ucUion8XZeFqcp01EBnRFVxCZc1YV/JRuUBp
d5lFvBDwuXmnN4tyKMlgOufUGolGgNPyuPINzTmfXFvakv8QpOmIj03VqVeuWr9RrEshDgfy
aFd6V/365RdIyili1QivRciD0ikr6OUKV4UnDv3rrBCVKWrm+hvDbsUnkNEDvZTWRKzgPdYH
i8wIacYOKUMCcyW2ZgojfkxZgl4rTSw5qeNwHJFiJuRnypm+Xb8N2fHe5J1YTTadaTLH6pjg
s3oGnqIhNFjMYqX96lvF9p3r28jBA+OD0Imi7G5YwZ/pCcFNG3AKt91EAsaPItgTPVLCt397
97JZNiYe7HLPfohdTs+ToesLzV+E/nExuGsy9NV8bW0W1kgvWoXrtXJzOzo8/zbtc1tj9ksi
vNEUKV05XRBUJhXBWXS5zIGm1h4DmuaOHAijeq0yEVBTmalVoFvgoSu5SABmS82gVGOl3aRT
q8UJvKDq5i8Vulxn/k6ztZheGVufQdrVlIvETVGpPSSoItZeIZ18rJqIQMBX700EY8OUHWCR
VobyivSQqSdDAmbUJPBtzCBdMwh63B41RUgUD+o9HhTydOXicFPoLywXoohMzgVOw5G5xWaY
pa2A8a52BfJshxpWrxyG2asK9AP6LnBlIXzZNEesQiNYDva6j5GugyfJjrf5V1cYUz5geLdw
4JEjqkWVFqoDAkBMK2eNLJCNkg5xnUA2XfPSxftTp9vswm84uMJEQz5Lj+RUwrUuDKOyaAj/
r3MNeYc1SiShzDwvllTdN4Zk5Bq0PB7cyAx4+C5Nm1I/AFLx5nxpB9RyH7ga7eqBHE2jSyAp
JShUbVsCAlFv7YFwGSBGbd+OTzoj1IwNYfjcBTurKxbEOqkuK+L0GMK/tNVTbprbzhE4LSVJ
Uf6nMevPEOe4w067NBbw4reEdJQGZAFBTPbU4w8YBaGe8o5U9H8gyyhG2nYHVC4UO6zaOFqf
4WhausD/69P7x2+f3v7mbYN6iGAzWGW4XJFLbZjnXVVlcyz1+vFM5VcSocoCtRoCUA1kF3qx
u5Zcy8n20c6385TA3whAG/hg20BfHnViUW7y19VIukoTFDY7S00/Bc8EdVYfLabHWxT9Wh3b
nA42kTdxHiUobFHvIajgOkKTIfwDz5nT//z64/1OTFaZPfWjEHdcu+AxbjK74A7fwwKviyTC
3cZOMPh22MJvdYdfTIoNyzoCUUGXX1QJ1rhQDiC4WMLN28QuKG6L3JWSzwX5ZD87WYTP3b27
2zkeh/hx3ATvY8ddEIf513oL49uopSkKf9/WoYooi4gQoOv+9M+P97fPD79DQMspENd/fuaT
7dM/D2+ff397fX17ffjXxPUL1z/BofZ/mdOOQMxM2CMcK74oGT02wh2i6e/JgFllyAQ4m+Li
zJWTy+sWsJV1eXEP+EZDWmmqqS1pvpwRl2uA9I+htT0yWg9oIAoAp6c90/CUf/MP0xeuwHDo
X3IbeHl9+fauLX+18bQFO/uz/nEUVcysI2O1mm3eDofz8/OtlYKvlnbIWsbFa0xwETBtrKgY
cmJC0KbW0J9Eu9r3P+U+OzVKmXvmxJr2audQHUxPX/OxqGtf1ZbCcLaqbc0/Y3pBcCR36JaF
Bbb5OywukUQVGpbahtqQkqJhQJuieOIi9PUeB+tQN4JaiOCT+iDkJFwVr8KKvPdg1PCBt5I/
fYTgHOskhQxAgFHb0ulOQeVXb+h44q9//B8agn3obn6UpjdL5JOL5svL75/eHqaHZ/BWoSmH
a9s/ipeCIKxzvb6GoJIP718fIFQEn418Xb2K6LF8sYmCf/y39uDMqs+i2E5CxnpFMEVynoDb
kev7nXIAyOkgM2H8IJsczjyZHhoVcuJ/4UVIQNGbYGZNZWN6+FQrcf+r2RcuSI0vthkXV6bY
QdPMUJMuCJmX6nKuiWJFMz4m+OnBzDDUhxFLKS6JNxK2pKxUo+mlNvMzqhszj4Fmljx7GvqM
bnUmVwP7/ulCy6vd3uqpGUV8IrtwQ69fBqAqIKLcY2lDOdeXjOv8pQ5Z07QNJNuqaFlkPf9K
PdpZF2XDdWbNWmSGyurxBAfJaJXKuqYDy8/9Ees86WHHrJXFRvnw3OP5Da4O+rtswHCgZYWZ
Oy085ZXOVTZn2LnpKSsdIzbQ4zI0Mmzy25e3Hy8/Hr59/PLH+/dP2FtVF4s1FUFvzOzeJ2yX
VHvProwEAmRIPpy5FJH34IFqvaHk01vecOgEEaixgzeZMpZj5AczR3swnvTImLaaz9s5F9p/
MD2myK3IFKXWa0/ITLi8R0ZKaqMylqBJul18gzoHmdGp4uWOt6rDMv7l55dv37hAK6qFiBwi
JQRWufF5jcmAsg/EAb/eubBxdpq2Lmss3Ye5ciquWZdbieCiz5XiMMD/PN+zUi2fBeT2SOPr
dWVeEE/VVYtaJIgUNTsTkHAhcrH6PE9jloxG3nXZPPtBYvCyrM6iIuDTts3PVmPkPZKrdEZb
sxA+lYjutkYagY1phKtlAr6SYh86rPAEgxTInROBq7MHctJOEtzzTIo2XHr4ZULhOt+YiWru
vrcDkf62S0urXYCBB7ebjyvjKhPPwDmbEh8uLs2Bl8PmnEF0SBMrjUs5n8HQR11aymGgDXiU
NWbIlfkx2aW/KrE3NntvUWkF9e3vb1z4s3t1fX+pVzErUBMA2R/XmzwwsjcYD6MG5uwUh1qh
g6pf66xI4lmVlHZ1G/N16CgJUtPyR9EsjL6Re+Oh2O6zqksTq/ZAjOLIan9h742zcalBni0V
TfKHekxjq+3XE2UQgZG0poI2Lz27GUsoMKt5xuBvnFtJG9TB5VBBNpALMe3G/IdIlXeXK8S/
lVyO2JDS3rEgoSt8lVxrbZFd4J0g2klIZ8iX3Sy/10m4Rr/kjOSgL7zjsS+PYLhqfjW4GndW
HkJe/Vm+8n/598dJf69ffrwblbr6k3YrHgG32O6yshQs2KlilI6kgVr8ivhX7SJnhRxHRCsD
O1J140JaoraQfXqR4QfVosRRxA28U2I78cLA4D7ss0WGZqmvvXQg1XpCBcBBRAGO0I2Grzw+
Zn6v56KtXg0K8LNllQfX5bRcQs/RsNB3AaETuBHV768OpniqyBtd3ZOkmIWvzuHjxaWlt8PL
S0s/UaUMfdooahBcRvMRZOiTTImyc9dVmv2fSt/wzdIVmWTF955J/MwKwpXmga8B3Ak8xOm1
s5lAOBw6wv0K/0h7sa/OoinPG7kGno9NkJkBelh3caAi6OhoDMrgaHRli5jpLFeOgue6a0Tp
zdAgzsnzDwEEolKHwoAcr1ZNrlPxQTsenHtRWPlvpJcMa8XmZwEwOtotPqen6e1wLrlOn51d
AQinXOGNXYJ73TNYkC4VCP++rYMwt2Z+eIC1lLIO8tuYUTzfdO+hiUGQQd8qzgzm2dCapxjc
jZTVEMaRb88RaOUuShIbkcaM7cQSRzGaeJamkCrJhzobdeKTZudHo52vAPYeDgQRUlkAkjBC
gchVRpTu0aoDtEeX57Ky6jzcIdWY5MjEnk1issJtdLBX75pnuB8iT/0wzBn2w34XRVglz4T5
noedfy7NKPb7faQ97Ttda9zIAj7tmeazYCKBkyF4G4TvoBMPG7KBMsfji5mprEterQbssydD
JT7FqowvNParZzK3yinzTLv2VLzxh0Cw6kH2jBflITtXw+3YQoiysuPKHCuxJqmMh4z20r53
s4VqErDOl34eNpO4c0cY1foicJ41R/EP1pyfqBPpzjP75gjBcy6q2rjPEBxCrVR59L7MmvkO
98v72ye4w/j++eUTeo0mooiL0SdVVuMeyyQTPB4qBoZVer1V5qzhjn9WtosEFiyfRYTZzOs/
tGrlI5cYai5brE03W0dOm4XhnWReqIDoshYxi2Gz4d8/JsWwnl7ITXvNnlrVLdcCSdtGGe+z
bGBZFQgXuAQSN1WQiaeImTODdW4qOv/68v7Hn69f//eh+/72/vHz29e/3h+OX3lLv3w1FKc5
n64vp2JgOrsztFxgrXeI7WFY8sPs9eScxawnpVKPJF044ODQi/db+V+LbICn4eoKnUyrN1I9
U9qDZK1Uay6yGqfs5jUsD3DVaaBesW4Uwj/Fwrgca3tGPpwhphMvCkkpwntCaBdRFTVZRWsw
7zHTaQyJ7/lOhjInNxKmOycDnAh6qVUz5dIYPJjeBoKpv4znfqBDRwK0v8pz387NwnehPOF5
451Cc65Z9+q6PPDN1+ggGoeeV7LcXUIZj6Mb5c1yFT+kiR8crAI52Zndqdue34z4gbO9HYHA
b6E5uZuLo+9jTzZMk1q6s3uugBPj+WDWUQdgCZM8kW3UvoLijM6ZN3jKdWHXNIyD5HTYYkiT
xMJXdD+hqppFTs9mHWE2lt3I5/vWMm0oVwtGPbuGksTzU6MM8OoT+GYfj9I3hW3DQOgvv7/8
eHtdt1Hy8v1Vj5tHaEc2pwjP2bBjmk/rXJlPCTnHmvXaCAbuZ1vGaK49P2O59gMeUqgOVEUq
QkWUbDT1jOpEaYgNmHhSpKRcx8hiw+fqyuY4ectJnaElAGD1n7D3/J+/vvwBNh+2p+F5xA+F
FfkOaBkZUq4gYG/KBCwcN8D7GPBC+9mGThUpiA7wWkZ7b9SuYgS92EeJX18xg1+RoaHBrzTd
/gLo5tH8SjPtqUXD4X7Zx+/OFtxhabrgqCfJBVXPY1dioBOlBGHWTr7cdxbOYZcDZQFXDabC
AXTMhhKshdjtyMz+Iz5ESUCJWA/WXRAHmB8HAE803vHNBMRaTVUcwFiPUeJum5R4P5yz/hG1
VlyYq46YV7gaxhxeRWZNQIjcXPa+ai4mNJScBpCZqd4nkkk8fHTQpRnBZ6xhAsYdQK9MXS1q
hufQ1dh3Q+CWB0yg/pY1zzdSt66oOcDzWNYdalQCoPT04pn5SrJrBdhHQBPVOttZ6SlmUb/C
+9DKLN17ZglDHMZ2/py6xx0OCbhsDoGf19hhJODr1ZCZMdcwsIcTACkHesv3cPItkhXaqljo
zrkuirJvxlTUOOsRtMdUN0UTxCYaYtSrHqCM7pJ4tKyYBVRHqJdFgT0+pXy4tX0sy8fI8yxr
UjWVZdQA1AFihIdhNIKfKN5Tzg6punC/w44Cp1yq+rz2Bxyi+V6ke14S3ph87EwOc9QkshV0
50S1j+rmuhhXzApZu2RWMrFGTtD3fuA4NQeWa+UHSWi8tBJ9VYeROT3si2gx150mJeLj29Pn
tsnMKmg81zrd77HDajHmk5Gjyi+tVDQzky0BZk67eApav6ir8yBxz4MBBzrCE/22GjL1JdDK
AG8Ez/LxLjtrb99WHjgZEwdjKxdSFN8hj2k8Yhkge+oKggCWxtjOqvAUUbhPHemnQaqKFluw
NiP/1sH1GFbRRQJDypmlus0yFgEHQ6QzaSRrjgXowjRYHMkPWROFUbTdhYIpTR2D4BDEFfdU
QsTBGiaRSxQ6sqas2ocevsg0Lq5I+vjbmJWNr+wYjdymsPCNMkFrKhB0cMStETp1l60Mqwzs
Z/caNm1621UeSKg5i9ehOIkxyBY6dCxKXckM+x0NS+MdWhEBxc5UUjRB2i9AR3wTgyvBPm9m
xVW5yMDSAG/wJNXr3wkd17xH6lC6DxwtI53P+xFTPhSmLpIu8REkTSO8qzmCb6R19yHZB46F
BmKfw+xJZ4pxZWplAos13L+fymPf4CroIR1R72gqy/kZYnhi7ewufKfCJ5uAUje0x6FrjVdU
hKiDZzJ3ukTwgU/Si/XUx+LtM9bl8Iygo4bLaXhftdkpsxBrA8NO83SrIrosrCL1JUC7Q5Fg
baw6Rr7n+lozntCLUR9qKk8a7BybpgATPM7oysWlv8iPUSelGtMs5aJYYOhGOspXLq6Xm2wO
E02DzXdEOTfYuLh8t026GaUiq02vk1zTs8pymmPRrHtiuu6Dt2RKUIKK6i7j8+4gaDeuSDse
0/ZkdtiJugskk58OzbdWTxR3nNhxRQ9hQpUDCpAuxuhUBBqN1ur96kQArxDaYT5s0CXYMaKX
AzzRwOVZits/0R7x/aWibmcLFEy2ij4bQqM2bOjLrH5GHVDQfjabvkmfyVpFj23fVeej0RaV
4Zw1mdZFw8C5aW90ftW2HVgDulolXzlQzAhmRodR63lw9WmQJgfqny0SeMdpWE3BXENPQXtt
ao55O96KS2H0w9BieycpifFlB0rTDvRAS60DRBQugfYOdW5hAP0Af2IreSZcvU9QyHzuVANW
NjvnRX8RDgdYWZXEvmio314/vswq4Ps/31RHf1P1shoOeecaWGXIKCm34YI1wuAFR00DDMzP
MPcZWDPf52NF/xNc88OWn2AVRoYo2/JMw+q0uc8utChFYEJzqPiPoYdARot/j8vH17evu+rj
l7/+fvj6DVRxpe9lPpddpexEK00cGP+D0GHASz7g6jMrCWfFxdTaJfD/jF1Jc+O4kr7Pr/Bp
3mUmhou4aCL6AJGUhBK3IiiJqgvD4+d+7RhXucNVPdH17ycTpCQsCboOrrLzS2JJAInEljmt
2CteS0Oi3qmxhGSa23MNWlfdP6BKrvQmxdHEvV6G8AgetT/eDlMkcb468PD7y+uP5/fnfz48
focGeX1++oG//3j4x1YCD1/Vj/+hnpVNbYDm1C/0lIz/Suur70Ym0uO3p5fX18f3n/Zh0CRq
VJ1yp3+6D/PXP1/eoBc9veF19/94+PP97en5+/c3qCg+PP768rdxXWJKpD+xo2uzeebIWbIi
TZkbvk7VWB03sr9eqw+wZnqBUbaijKQHVjKVaMOVZ5EzEYaqpXmlRuEqoqhlGGi+oOY8y1MY
eIxnQUjNTRPTESoSrgL763OVJgm1Mr7D4doab22QiKq1xCKa+jJu+u04YferSb/UrNML0Fzc
GM2+IhiLry+Nrq9BVfa7ClGTMCoMgx5vZCz0lYmDtkvvHKuUNknvHLF+f5bgSFfuHrnpU98S
PBDVC6w3YmwRD8LDJ4JmRyzTGAoWJ3ZHAOEmPrkBpuL2SMBdExhaLjpO5aZ67U9t5OtrEwUg
17w3PPE8ohf35yAlbytf4fV0VdmmWoJDqu/ZWZzaIQyCpX5TsWEd6Bv1So/Ejv6ojQO7b0oZ
J9TW2qwEhiCatJQ+75BD4PmbcxQlRNeQ5NTSO3IwJJbqmsiRLSYEQvKMRMHXVlsgOVLDdWlk
qhOxfB2m6431xSFNiV66F2ngEYK7CUkR3MtX0E//9/z1+duPB3TOY0nw2Obxygt9ZmYzAWlo
52OneZ/v/mtieXoDHtCKeP5AZovqL4mCvbBUqzOFySN03j38+OsbGAPXZO/+PA1omrlfvj89
w6T97fkN/WQ9v/6pfGqKNQntgVVFQbImhpDrCGeuHkb9aHnuBbSJ4S7VNJAevz6/P8I332Be
sT10z72j7XmNhm9pljnLBEXe8yiKiZpUQ0AGLlJgS3kjNbKme6QmKzqLtVsVAhySWYSRNYab
kxcwSqc1pyAmn5Xc4cjKA6mppRAklcgZ6kbwRrGDSmgUSadP0q8McbwwaeD3tgKTVEdua7c9
1JySILLUFFC1o4sblaxmEidkWyTJYlukKdUTm9PaaEKCYVE6a9DXdiH9MI1SO7uTiOPA3fGr
fl15niUfSQ6JeRsB1zPhG0frhR9w9B55T+CO+761ZgTyydNP8xRgYamAuG9PVaLzQq/NQqvB
66apPZ+EqqhqSmFSu5xllb2I6D5Fq9rONjrEzJqIJNXSy0BdFdmOMLwAiTaMcrl7045mYkWf
FgfNFqc1sFTOJdCodw3XqTxKA3cXZYckTCzVkp/XiW91XKTGlo4Fauol42kOhjuXVyuULNX2
9fH7H8rcYZUTD5vctg1eeYitdsPj0lWsZqxnM83RLTdn2vskbWH6stteY8uvd++Pf/7x8vSd
cvnFdtQO6GnH0HepYmlNBOnwd9cexW++EocCQXHmPXpraqiD61z13gd/yOl9zAXXrvwDPW9H
dhwWPLFKJun+RhTlFvei9IQPlZgdidr07eYK/bSTg5wrgUFV26ZsdpexK7ZC59vKzSPiSdEd
xLj1rCyb7Dff8/SaTQxlwaSzNCGfljvqh45uxyLn+bjlXYUOFwkxZaSjQwR3RYXxkMi6ohhc
GH4n9lAsEhXQuPlvinPL2V5+gAFudVflu8mxLqzVaO8MVxbBSz+mZpMrA8YswK2ZdTroLauB
kWbYLxVzMrq7yrYPpZyaqsiZZl0rrHrxOwZjmz5BQ5hVucu9KcJ1czwVzI3zNfkiG6HTrjCG
1QnaV1Xpkladd1t6g0K2esUij7YaED7mpbtmgt4olAN8x3YBeeQsRZaxDp/47POKm+WVWHnK
6WNd5Pg8uMu0abI9dQImJTG5xofW0KXWyvB4875z/vL9z9fHnw8tLC9erd4sWUERQmJFJ0AP
OOI5KLziKMYvngeqpYraaKx7MMvX7sEwfbVpClhw4DUMWD9Rd2N11v7ke/75CL2pjE15TlyL
Ap1YBK9a0kfgnaUoec7GQx5Gva8ect85tgUfeD0e8KEUr4IN0/doNMYLvgHdXrzEC1Y5D8BM
8ZaryjHiygH/W8PSPqPy53XdlOih2kvWXzJG5/0p52PZQ75V4UV02Lg784HXu5yLFh/2HnJv
neR6cFFFxgXLsXxlf4Bk96G/iqnoNuQHUIx97qfBmqpU3ZwY8sm+o6/eSKY4TgL66tadvcKw
Meiwm229KDkXEWU239mbklfFMJZZjr/WR2jkhi5Hg17+5FO0psdrhuuPStKIHH+gx/SwKE7G
KHREO7p/Av8y0WBsitNp8L2tF67qD5rRcQuErkPHLjmH4dRVceKvlyWj8M4bSzZLU2/Aat9A
j8tDR/MJVokjjAcR536cL9flzluEexZQWSoscfjJGzxytGpcFVl2g0Uabctsaco8mPXEKgqK
rboCpLkZW8632UIqLqEV/NCMq/B82vrkE9k7J1iU7Vh+hl7W+WLwfEeCE5vwwuSU5GdyLUlw
r8LeLwtHXXkP7Q9DTfRJ8issdEvheQrLhlWwYoeW4uhzPACCLnYW+5AUad8dy8s8ASXj+fOw
c+jHExdg4DYD9ul1sKbe2dyZQRO0BTTU0LZeFGVBEqhmkzGdqp9vOp7vCt3KnKe0K6LNyPjQ
/P33x6fnh837yz//ZZps0nGz1TuzvYzYU0ob1JyyrmodSLX0DKLDJXyJI7vs17HvL2HHwZiM
cLod8TqOQa8wnt+etxhvI28HvAe5K8ZNGnmncNyedeb6XN4XSkZDocXb9nW4it2qAg3TsRVp
HFgq4gatjJ4Cdjj8cPjGGnJAXnsBdSRxRYPQmhkno2JuUcen/Z7X6Iwri0OQm+8FVip9I/Z8
w+ajJNJHMsG20qtmoMkimi6h+m6hxGFS2bYr8tBsxkUdR9CQqWWg4bdt7gfCczwJlLa2vEcC
KoLVQxyufo0xScn78RZbHFgVkhElpiMXZ1ZyxFX7vE2jFfUoRQ6xm5FvE0e234zyuJ6GeSBu
sL78nRmMNbCldGyNoWZT9DU78ZOe90yk/MdIsXVZu6MeXcnBPRg7BkDYbszCZ7zrYEnwuSBj
PE9LMj84hoExMrXAPzPhHo/MXPRtmkHuOTkbb4os6ihBMUwXp/AGZCF6QSlpsPOKupdbISN6
XDgIQwXyzT1E17Sn9v749fnhf/76/XdYhefmsnu7GbMqB9tSmRKAJq+OXVSSWtfrBoncLiEq
g4nCz5aXZQca/l7CGcia9gKfMwuAtdeu2JRc/0RcBJ0WAmRaCNBpgXALvqtHaD3OtIdoAG6a
fj8jdK028B/5JWTTg8Jd+lbWommFVpy82IJ1XOSj6hZH7phlx41eJ7yzWGKIZY2Kd1PnnSNh
lAmXyVj/HiMDmGfiWq/44xpXgtghxpaRY4fs04C2FX03Fj+8gO0f0Es9gKeQkOoHpx3zKZ2G
PXKlGgMoIt2OAgp6mZHRTBwN4OfX587qV1MwHFcVOn5yYjxxHPxghyhSWOClLtj276ll6t7b
QrH1Fz9wpgyoCxL0lRpE2Ik5fOAhyp2N7wrkg3ItGhiBnD5xBvxw6Wg9CViYO/bOMMumyZuG
nh8R7sGKcla0B4vICKuoCa+jrwXLbu5MNGNdBQrU0esqkR3V949AO+al9jffwPQz9KtIXchK
6cp3iWqHldOP3NS+TkJ0tlWBi5umKozejg7NA9I+wSElA4wbXwgBw8ajz39l/RKfvjBATjxS
tWwen/739eVff/x4+PeHMsvNYN+3yQn3PbKSCTHfo1fu6gNyi1CkeLecVaTjqzt+6PMgCnU/
lVdsejhLSOjOMr2kscgYG7KgAPky4az5xrqDgsGClpHp5fgIynNC+jG2Urz5WRLZZlpF49Cj
B7DBRS1GFRawRdUXKErdrDc2d4xycnjFdIf+Sk6nKPCSsqWwTR77HpkaTDRDVtdkgoUWNu+D
nnk7l0Mjz5h6Z0ja3fd+2uwa/a9R7uPBvF1rHjgVyJoEKaasPPaB6VV7roR18HjNXzTHWnH6
Lv8cGyHMqNkaHX2qwXDiqvccLZU6NyOGIqnNKoswFmWupSKJvMjWUarT84oV9Q4XqFY6+3Ne
tDpJFJ+vY12jd+xcweSuEzEKJdgIYmy2Wzzz09FPk59ogzLyuj32oxa/Q0wywpNJnVjxoegQ
UgfntbJAJpv2iktJEoNN1rwj5JxfaoaOS+RleWGUhA04OeXitzDQ5DU/0WjKXL+WL0vRNRiF
TCee0BmGKCToxmSw5Z96pSzvy+qXk2txq+lHsdsctzoZ2viIfus6U6qy8Y9VRZs92qem9I1U
ZvlenXRa+WN8ez7FZbb739zR9LpLB0+uLNuMm8JiuZ+ma2dFmOD7ln6kKeGe84H2hnmH5YqB
jl4smY6pFWzBgB13cK+wIyqkhM/0UgGxTZ863hjKQcs833GWLeGKu4IjymE6XHYFbU/Lr8Uq
SGmDcoZjR6gECffD1p11zrqSLUgMdNwSXLLL4udT8vQ191vybnhK3o3DLEWbB5Oec2NFtm9C
h4/AGt0a5dwRTvAOO96T3BnyTx+m4G62axJujqIWfpi4ZT/h7n6zrVyxX6Uyz4V7qCLoHqMw
z/nJQqtJp1Hp4C75lcGdxaHpdn5gGvZqz2lKd+uXQ7yKV4V7poMpkjneOyFcV4EjJO+kN4e9
I/4jzvm87cE6c+NV4XikPKOOWwI31OHGYZoHHDFv5WTHWRos6JEZ/0A/yyVhI9xD4zQEgbuE
l2pLeV/c5/8p77FrjnBlP2RTZyHtzNtX/2Z8AvaVvI81Cv6l+C3wVqnKcdR8M04Ec0NaI6Pv
HerFosF7ZL5+wHgDxBBQe65XPGOcfbbzluTJ8iMyE34QlPZH8ZbrfrSuwJ5vGRk2TU5+WR54
6uL/+hVuzMZUcm1DLfwVdJ/bqfVNXehPdq/IiXWcDYZV02QW4eZZWjfQf5psVyPbRq5X/WyE
mZb6TBzZIE8l3KBocxnl14Qr9GxprhVmIPsCk2cS+OtqWKdhlEiPq07Wrse78VcefdDfcwr/
dquGmasr6oZ8Wz5ZaNXki9AuR8UPXSPt774xS7DJKumXGU9uMKZUXzqXEEo0aUuiWqTp2/GB
eMvm54O/v70/bN+fn78/Pb4+P2Tt8fYWNnv7+vXtm8I6P58lPvlvU8UIudrAG1sddbVSZRHM
srCvUPXZPdnccjjC0tytf2+5CDKIsMpBdzeEiqmMVAl5tuUljQ3ZqbMRXg2yzEftKediexjT
PPSHPY8D38NfFyrFq509HIEoU+A1VbQJQ3fuJIhn3aCly5nDEjTySDFC8ostcmeElJarMLbQ
9fGcv5FzVldjvAWWkdlPTj+nm8clrOiocAa3odcfYHWSnURu11U021sS5mLuii+tTK48MnaA
uwjI0hBdDulzhOSu2eh+RnQeKF/TFqRTApufEhkmM4mNEtmkKvrq5en9TT6Cf3/7hjtQQAJr
C76fn16qV+GvHfrXvzKLOvvFJ+eGGZsMCNwmlhFSqJrNnLKXLTbU0G/bHXOMpS/D2OcVMRrw
SB9/l+5hZ+2XFxl11nabKLJ1Mt1jWWgnlrPjeOx5SU6M7Ahrk8CNmO59NTxZWLfcmWLf4ZZS
ZZufC1OIr/u7NLFxT13jtLhc9TisfNLlqcKwUh8DKvRID3ajIDEZKU5lWFG1PUSh6oZOoUdk
EcosioOQKsMmD8zjLZOjH0XW2IlmIozKkCjdBJC5TZB7oXnnoW/G6DzUwe6dYxWUlOwkoD74
MwDdKbgOOpMj2kICiUMKq+BDIawC0r2hyqCf1mjIRyMJmYaB6Coz4JRC6IeuXMMVdc1SY1hT
aaIbDDrNIfCSYNnAkib3UvedbHI733xyK25Qp/tI+oW/K1aIxFcfVir0YGWtEyckDR0nLypL
kH7QWru+iu2V6GR31M3YHUIvXM4GPRmkXrqkvyQLrFuYXUMJRR5Rd4mod/A0YB24kDAJ3YhL
Bd9wkS9p8oltTXapqbzujSzJI6p07cfoU3i+X7eUmcI8O36yKwarHz9OyQZEKEnXH3QAybW2
l9NXgB6uCGouLw3AJWeAQy/2Pi4TdO6U6C5XZCH9yPcCyt2hxhL8TaaNAF1hGAgwmAh6CVMc
Kf6uB32WftChcK3uEyoe6SExk0xrezq7KE4DMzuCDYP8/AKX7/8SV/Qhl9j1pfmExmaaLgEz
+Fe6gfsVZlhsLi0NeLedLWqH0nUsjYWoglCN3asCMWUhzoCrTwK8imLKi+SNo2ehHhlBRcgH
+XcGPgpmHd8i1DMRRBF1E1rjiIkaITDdjiZTTZKlIgEH+qB3fBwlZJAAjSMgFSxAYLJSL0Bv
HOi4i5qR+y1bpwkF3P1hLYKutr2xhD55UcnmCwZirtPgj/P6hZzybPBXtBhFyIIgcZ86TEyT
tfYxU7TUINKTGGXYSKf6ITHI7t72TaBKp4dtBJ1ef0hkqXTIkJIyQk9mjmvtKkuwZPRIb2jk
zCCRJX2ADCtC+yM9omWQRC4ZJMnSYgYZ0sDxaeqtnH54FLa1R9/e0lg+aIh1TNdrHbsKt06W
1zmS5YMGWqdUFxRsdgtlpflFbtKs43bh2OpqCyaR+3KE5OnjMFruY5Jl0bDu45gSXM2OsDwg
6oZAROuF2r5OQXEExHQxAZRea1kMVhAj27Bs8d4jSBuPKjoyoqHGeZoZ1R1mfXvKyGMyE/BS
kVPKk4Gw61i7txgVNgxEYmxXqm8r7DNKnts3RIGoJgJ/3kOZ911R73oqIhSwoaPk25b3kUhm
PuiyNzn/fH56eXyVxbFcauKHbIXPcO/tJmlZdpTPX01ydxzMjCVx3NJbkZKhpR+M3zDeGdmI
o2bMSNoRz2gdqWyK8sBr85NN0TetUTAV5rsNtp9ybwvJ6J2ku+gFyvYc/roYjE0nmOoEeSIe
d6wzRQRdlpUldaSLaNs1OT8UF2FkKk+QDVobaJ6JJA0E0/NTMYqNh8PayDu7yINNR97Qr3ZN
3RkBo+/UpXYt0FmKS7pFyawGQe/JDeXOZAIbvV7FF5CJ3curDXeMZYlvO1cGu7LpeHMUZpL7
pjR8hivgiZ9YqV+Uk2n1cRrSCxSEoeBy9LgZLq4BcczwwV2mi+LMSujKZiFOvDjL5+1uaVw6
+fLKkRfH0LqmODjp5B2RT2zTMbMQ/ZnXe/IJ0SSIWnBQak2tV6jMZDRsg6heN58IdXNqzAKi
fFBjObKUbzgqaOdCT70CGXaN1ScrdtmWTLhSk17hd2bxKw5TEAZe1stb4UzQFYaeqI5lzwll
WvfcrFrdd5zaDUKs6aCX6im0rMYHiNCvFcEpREu3tUUNkql7M9+26Fl5qaklhYRB+5VZbopu
Jo9byouwynB/CWbmOzPgzfXlJKBvCKMuM5JxS922oHvkU/nMpfbaDr2m6NLs8AWKPR66JssY
df6IIEwA2Co/dZp0SWBKC5/mOweqaIsCHzXSr3kkR18wl2YDrCjRXX9hKTcoSVseXWLoKluz
obsLJji1eya1pXxyMxIDTFSs6z81F8xPOZ1XqFN/1NUHP9FXwyTYtALk4igJvgffVbrs+313
FL15S1ulWmPiiKbV2IrQIAfbL0VnKZ8zc89gZ84xRIYp0YHDoHN8glno8rpSrHJ+ueRo+NZG
ZwMFi+G8jhuSnkG9MVSO/MswssrWGFEVWBfBHO3setZNGI/X8Me0gTtdHbOs05Z8dzszT88U
bpmaad98iZEZ4nk1Zmg49LIT+Pbj+fWBg6bXk7kVc7osAAyYHHl/kE5ichFW5Q9iOwGCcJ9X
QZNs3SmTn9+uLaqZKZJr9hnXH/IqK4S7l36dCN230gNaymt7RT465h55ebBs+bjRjfIpsbp2
RaH7f8qebbl1W9df8fSpnTk925YtWzl79oNEybYa3SJKttMXTZqoWZ4mccbxmt3srz8EqQsv
oLP2Q7tiAOIVBAkQBLg3ZElYn33abEmoNENtk/awgH+ZZWyjIlGTRfs+KY2h2qgRfoFt5FQJ
UmndA4kGXj7FlnBkQLdmlcVZXPG9ILb4A/MClUcsVrK8wl3KOxw/+dekSq61CejCmPoBzPKh
cxNi690y5LBR8qncRCVPQW5wAE9kUrMdJwMXusS//5cjowV3jIv89HGZkDHRRKirkJwPlqvD
dGrMcnMADsWhYbAhcjKjAYEwQw8HZ8SI+thuNpKNjyyVMqKuKfZxPtTObLotdCKJJKbFbLY8
8B69yog1myTwWjMQeT8AKJT3VVtSA46iiYPUz/u+amXUX/WVJt5sdqWjpecvlxD9Z6ue+ri8
Nttm4HnyF/ALN5YssJR4WjshLw8fH6ZJgrOo7KLLJUTJPeTUYdyHRs+r1DSAZGxP/r8J73aV
l/Bs/Kl9Z6L1YwIuoITGkz++XyZBcgtypqHh5PXhs3cUfXj5OE3+aCdvbfvUPv2TFdoqJW3b
l3fu1/h6OreT49ufJ7UjHZ3a7g44ZJJRp6ZDgrUDV0uVIvzKX/sBXv6aHeWYuNfHqEfHNHQs
N3IyGfvbt0unnoqGYTnFbZ06mSX5pkz2W50WdJtjp26ZzE/8OvTxzudZpOlcMvbWL1PLh529
pGEjSwLb2DE51NTB0kGv5YRvP2yWA8vHrw/Px7dnLFIwl7kh8a5MBVc3bXnNIGVYYaSwVrfp
MLOEUuCl81Ubot7MfK/bk7m6GgHSbHNa6czLERs/3ER2juE0IWQvLvPEFBDFy8OFLajXyebl
e9vtNtKJSi9I7BZG23z5bDuA83VnD0bajdvy+ehtIclAhClE/aawku3vEtDc+wYEGz0+AHpT
egIxisYYobT20QTeg5HDnDfFqxC6QgNpc2bXcsKNsN7YbAgvgbWGCJVo/LgkcKZBi/fL2/ls
tjRWn8AKa+/14sl2Ll+eSZj9lqns28ivUCz42ID1O0r6hydYC8AMO8WsJTJNJ0dSD60oSoto
g2LWVRizIcxR5C5WFDkJExfymyAZgdNHjL+udLFHMw3dLq67BnszBw1/r9K4akZmmZt4PJLr
BcTFHu9dXaNwsKcXftYUxgah4C0tuk3QRxUyRR5ADDyCs1FKqqZ2ZO9VGQmGJhyT09XKmdpx
MxdeKug5BDUqD80KIRMd6iszn/m71McfOUhUReLMp5hjpESTV/HSc3H+vyN+fcAxTJqBlmpp
Hi1I4R0w31WZyF9Htu8Zig1iGEZXzse9HIvK0t/HJZMGljAIMvV9GuR4yGmJ6uvlxINQ/aYl
/ETk2N7P0PHLCx53Hu99nmZxFlkPVmMJxFrEAaxJTXrlTNg1MKbbIEfDDMmjRmslnbTMCBW+
fuoiXHnr6Ur16pWFuB7CYdgLVYuBZVOM0hiNHdnhnKVhrAjrCvVDE23a0WijtzSJNnkFVyA2
NV7XGfvthNyvyFI7j5F7HodQO4KEvY1WNXHA7gKXczaTDVzIdhFPx1o4tEnXcbP2aQV5F3Sl
JtHaC2ljSbSLg9IXEVPVk0u+98syzm3dVzM28IHfUnYc4orlOj5UtRz/RZyF4IJhvVeh94zu
oIKi3/kwHDTe2tZwGgocd3bQ1KktjQn8MXenc70fPW5hS93HhybObiFyQSTCJFq6zAY1p3Dj
KakNxbfPj+Pjw8skefhkx2BUVy620jxlecGBBxLJISsBxBMB74QRTztDzqeK0fdKzUqB/ISq
r4Xu3GosQSsRREpEI+CZhNqZvkNCnxruH+Eg2F5jy+q0Cer1GmIfOVJrBqmbZ9R2gi7a8/H9
W3tmwzFawtRZ6C1AdUi0RpYdTBmA3sBi6XZx8J2VtjumO6wggM5tZhyaFVqW5R7KSuLWI52f
U2iVTfYFIemaoGpqqHYGxJhdNw1dd75kxVgqYZuTA8Ghte86MDzjtnIVp7Ek6uRTkd/iCTS4
WNg4U9vS7FhJvLvT9EoI5NOpeeoCQnlGkZhxAK/Oc8r0EbXQtWkAY5orbRLNbt4zrw7VYhN0
3yOk6yYPdPm4blJwtOotVxpOX4PrpvbJTK8LYF3oWB2lxrcSMHF9o4B6g592dwF/6k3ooWMP
9TsKgWZjahO9PUk3Gvj3GcGjXChE0Q8SQZJpesVKMtCWWWiJZqkWaQnBoRDJE/vVUKwZtzWU
WkcDWOHrGgUrfF2XzDD2GhnVFTPNSAc89iN0xm2cpdod7mGjkXUs+yOklcolw0azeXh6bi+T
93MLmTlPH+0T5Pj+8/j8/fyg5TaHErsLalm8qiu/E1x8iUrDKoGRAM+qWKxwJyYuFvUlYYhM
Q1rUGQHlZG3w1Yi5WqVEZiwMnGy0VMnbnCrhsLNJBUdc64aA2sZ4tLXuVKQevSRxphlmReQR
vgdY62KCq0mNEdsIxyLrV4ZQ3cCdm3bjJmBIJDwJKbp6hQv2UUBQ3xR+0vH38klR2iG/Zva+
nOq+kN/u8J9sDRUpAiOxDiyr2Wo2U8KgCAR4VqN5EaTCwIs7NupZgzogvwIS4JqoGRjgd0MI
eqsOKAiioJexDeeUQt5oo28QfVSkNdP6QSvWnNlSNUgOMqX6fG9/JZP0+8vl+P7S/t2e/xG2
0q8J/ffx8vjNdKzoBgmyJMVz3mF3bnQZ0MJxokiJrkT8t1XrbfZfLu357eHSTtLTU2uqP6IJ
kAwwqeC2UW+ciJQtYbHWWSpReJgpBl3mQnVJAYJ2/YfbZ3lu0hQ74qZRSquYSI58PWS4DhSu
De3r6fxJL8fHv8x+D5/UGbdmlRGtU+moltKizJsgyXk9Y4uogJl+FFJl9vv+sZy++ipeg2RC
RcNA9Bu/vcqauYc/JBoIS1zzGPHiRhQGfhw9cA9RfQO5AwQPpyzLsxHa2Nw9JRIuWUmeqIYL
ThCUYGfIwBSz3YPSnm0i0/efkZrTxr/3MyY43BvJKC3Ae2c6m2tAHifJ8bS+cairQ3kQ6KnR
XA7GD00jHr8b7PHLBTYtA/ZGzmPMoQXxb1w1ea8MNxyGVKrr2KSY3yxwU8uAR587dlh3etCb
y4Du4dA7Rpk4Z4YB9bkCoPyKsgN67tT8XI2RPY6Ne8DHzD3YnKwGmuUc+XaP6wQcWUYbyFCa
4+dVwWgh06btY1nN3Rt9FFIym688HVoRf+nKka4FNCHuzcyYDrbhrlZLVx8gYFT3bw0Y0/ls
ncxnN2bnO5T2fFFbnNx144+X49tfP89+4ZtCuQk4nn3z/Q0SgiIekJOfR/fTX2TBKEYNzH1X
Bp7eU4J65IveJwc2M9rarqmsCXNQFpOVF+hjV8VsnOqRlc21jJqcBqzIK6+UyE4dsylnzGHs
qvPx+dmUbJ3TmS6Ke180LRS0gsuZPN3mlSGwezw76WO3IQpNWoXW77eRX1ZBhLpxK4SypzqG
J0VtrcRnKscurrBXPgqdGipe7Wfnpzi64B3fLw9/vLQfk4sY9JExs/by5xHOLt2pefIzzM3l
4cwO1b/gU8ON8hTy8di657M50remHln4mXrjr2CzqNJSMeN0BX/tZuX/YTC5IitVBnfylMYB
JPfE1dWY/T+LAx/NMBGFPmmYuAOnTUpK2V2aowxf2UhEYpNphKoMC1jVXznSFja8Q8IbRSYd
lesY0aY0XOL7GUdHK9cSoYajY8+5WbnXCOa2qAsd2uYCJtDRfHaV4DDH87qIr93F1cKtASE6
9OwqejW/imZ6P/bkuawIGF7HeQYA27UWS2/mmZj+HCmBtqTKGQugwD6o/0/ny+P0J5mAIat8
S9SvOqD21dARILHfowA222npwbnMYJjJsc9pppzc4RumrawFD1uGhxNA3Hy1sRysJBSQoU0d
R42aWoA3v9z1d3mDFz80D7l57ck9r0i9Kc7SPY0fBO7vkcWnbSSK8t9xl8SR5OChfjw9QUgh
lcwoB1R4Q5gorct7s8+Al/dSFd7swwr9ZilHwOvh2/vUc5dzE8FOSssbNeephPJupthe31OU
t97UM8ssqUvmWCtimjBRgHwhEI6DNaPD4bGjeqIDI8FdQnuKgqwhvsPXNNMl5pGikMyxkeQY
K8LDxn4xq7ypDd5NsNHI4G7uYAeZHk+ZMnYz9bGhXKcQGu3ahDJWniEtYnBXjQ0lf+FgjjQ9
QZQyXRXh/nLH4B5aJMOgrmAjgeep1+hD30O2ojxDmNEi1iSGLH3MWJVA//D2hEgaY7nNnTnK
tWzSnZlzdfHAANwQp690uAq9WiNJc4qKBEcOtijB3Rk6b4BxrzE6SBLPbdZ+Gie4cGJojD85
5rrIZCQrx7u+XoFm8QM0nneN+3gpDtZ8Z6HmVh8w/o0mAUwuq25nq8rH4mmMS9irsAkBuByt
Roa7NwicpktngYjS4G7BFh42/mXhEjSZdE8AfDfFvsQyaGkkv99nd2nRs+zp7VemzHzBsH4I
aWjMHqwr9hdIG6QlYAI4oHGQhl6uhMPJECWDtm8fTBH/4mAgPWsEXRKd5jD1bY/FGCqo19IL
se4Tep8R7oMid4buORy/9OhKstTPUE2a76IuJek1Mhola5449RoR010LjaDPX6v2qO+QXx8M
/y3w2FJcyrbhYrFiG5VpKugwuIqVsuIpiWPbi+1qtryV02czMjnWcee7KlLDymDIFNs7tk41
cJnz2XFVsLC8NilTChU3BYEN8rwacD9JZ+puIJoggXRaSBdkAuWZpISwWZC1btWqBltDFDpL
xGbAFcDkTG2JyzsrTcjO+1/R+Hq6YwlHo5Lktrcg0AYSY4FkFBqm6eOnc15AWds8ZRk2XS/R
YFyA2+6Gu1qJG3drizUY8opcyVcFaDlFrfjNWCJTDDgd2BbxvEMHkBwEtdkNxaZGVSIPx6tO
mWKtSoE1RFbj/rGrQcTTjDEejMLOj00ZprDA/aJ2/KEJdNu89YEo5h+nPy+T7ed7e/51N3n+
3n5ckJhFfZZN5Xdn0PvUoN1YyZdtX1XEW3No33qjrNEACLPUlytNngSGGcjL+2abV0ViUZiB
nJt+2DrdRLQ3tiGzCpSg00a7imyRKsktnsaUYWWPByCG232/wjBgLhBjxp90KDj2Hzg1Dmm7
tRZsMt3WJiNLP+OZ/xqe0kYtuEOmvo6k+zivkqBLH67UVuwgeBG9lr+Vk7G1yHhY/zolUYOH
uAXs1mcbZbFL01ptZ7SOVQA8Wm4OiS+7ynE4KWJ9XFNtpHklu0Kvg/enKTZhXLKFBfn7JJ5F
2HHs16aM7gM0qAat/E2cSfZ7Nv9RGOu/h7teHSrMv/xsEP8eNbeBkowIIWOKvkw51UjTmJJe
ShrVBbnKWh3Y4qPcYcfnJyo8pr5UkV5mQRItlqKJdxZGoRwsHcUl8HyKgb2Zg9fuoWm6ZbyH
fpjObYnLOhKIVsuGOM6d6RQGwV6LoCyIM18CodHZAb+cc7zePbYLeFOz1xzsmLzkkyk2FqHP
dLwUDzk4kky9633hpRhNYVCshUBsgS8XshdND68cTw1HLiEsETllCuyEIeNdtEY5r68Edg7m
0Kbp3JHfD3bwdeLOHKTZPhzK4nzmNLixXCKL4zJvLCHd+4XGQyE401vMvaSjIcsDPGTMjSam
BVk6C6yN4d3MwQIOdPiMkVSN78zcKfJ1h8VuqGWKFGlRj5gtQ2OgGS7xg4Kgy4WtTd/8hEFD
f2byFIMrJ68RXCNg7v9xN0d6Sl2LNXMoML5yKu2IuOO4RSiH1Y2HtD/jXy3Bm0D/gsFD+Smd
AobnOogUEEgab9Iri3yX3nrgLqFX6DmuKaoZ0EUqAnBzTZLcin+VmxdTYFpZA0NUOJuVeV3F
alyusmKC/8bBXwUwJGuVBcX4YGoaK2OmrHxcuqf+gyVDBAh6fGxf2vPptb309o0+9o+KEdRv
Dy+n58nlNHk6Ph8vDy9wycuKM769RieX1KP/OP76dDy3j2AyUMvsuuaH1Wouy/YOMASkVmv+
qlxhyHl4f3hkZG+P7ZUuDfWtZmhYBYZYLZZyG74uV1h+eMPYPwJNP98u39qPozKQVhoRQqS9
/Pt0/ot3+vM/7fl/JvHre/vEKyboILo3XYaarvwfLKFjlQtjHfZle37+nHC2AIaKiVxBtPLc
hTxLHGDMkrUocVPYfpxe4Iz7JaN9RTmEyEJWwDi366Ch6cq1pAEQZ2eRONNYWv7b0/l0fFKZ
X4C0wzc71/ql6gtSRc0mTNlJDjNLbphqU2x8MBlJztFZzBQ0WvBYrrzSzcPHX+0Fi+DR17vx
6W1UMb3CT6N9XmrhOvpc92oxkrIUJ5AbkjUmXmMb6TqOkpA/dZPvYLcpuBSCNkIb5W2fX5JD
h5GTkL7KH3LTWhYpr8Bumby13cvfJRvc7LqHaIr4pBZp3GxjGs+XlszE6TqEpH8LZ8aJMUXR
Ww6hjkzzCGiAzT6VdCz2owlSOf2cn8SRSBSpEArHXSCnYNXbw8tioWFKTiU9SbWtsxAyxyf4
q530kAIdiisi/05HDtPu56nowNhcEpXbUMlqB6Dm6kt0QWFpAMSfK1Lcpife9m60TBojmjIO
SvyiygtMKAO2b9fYg5CEgS9NUhglCVv6QSzfgEnAbl5G27eEoinuTcdpyqBGWtV9nnugdqiN
gKkMI0og7bIcSWNA+vLpYYCK2LSjIKt/iytaI+NikFQQ5QRTqDcFm5OccJEhh+7bFiIAiQIZ
hlgBquE74yCFQwtmluDXJhRyHRfSBAjmJtsK/prP1ZAJAskDMe+iDOtAFzAxq5i8cJqd6tcm
kLugknqW0lhvc0FEVl4mxYraEk9fxFO182BPcCdff3OxUOV0GweSKt0BIOlcub6Nk8REqbGD
eqgiN3jZhOlSiosl1sKxn37m81jNV7pxT6soXS01mxYEQa38svtOEl4OEXoKG2pGkFWxr8Ye
TZPDIDYtDnN8/gpMUxG4kla6tOTxXBkki8jo1cPjYtL3tn2aUJ4Xc1K1j9/eTuyE+Dm6ItmD
bvLwt3AdxgoVuVD13NNaDM4fr0ttfZ3Be2iIjHbX74omzweHas/0DnAVrFJcNxhWTkjA6bbY
l4zRr1Ay2WuE6NIIKt3ZbESwfyMIGnOvLy/+VenTbZJvDFwNgSzjgugISmoONnY5QNhuQiSK
r1lqaC92R5YKP0z5brJkOtpQLtUxbAso4K2esi8PqCqwbGsQobtBE5JyzG3Ag0pLHr/Si5wk
8bP8gMaSpDXnzbG52DUoGJxJIsWFZj/AIp/k+W0treGeEBLQs6NmpGizaZ51hchKdQfl2UVs
PhYSGXfzwrXvnoTG7nyhm9tkpCVBiUo1s1pJJaKFxTQnkaxUO2GPISGJVtOlpZWAvUFdmWQi
CgfbRs6vLmGLfWopfEe+HOV1fGCsBBYmnJKRJJu0IRvssLLd0yLO+GOtTtUgL6fHvyb09P38
iL16g3s3yK78qUJ4RmWF5aJdBR7CrhQuhv9suspGyiAJdUoGpSXRLiz5Wy4IiNcUcbVcBP+S
3nSirR4+9OMkyCVb0nCgT7fKjWxBLKdUpsSVfpOyQpAx7IoXvqZjZ9kM1exgEeug0dtbqHag
Hx8fJxw5KR6eW+50rwRy7tW3L0jVevg1j3zx14O7cMRM3ldMkNWb7UgCaatzcSGmHN9FjQj7
iHvX/ptOr389Xdr38+kRce6JIEh55987WrQGKFtMuh//YAMwShW1vb9+PCMVFSlVwptyAPej
wJyDODKj5gd3bKE1Gx5EhwGsnw43/mN7lXYNx6yc6XFwnB48Bk/f3572x3PbBaZVdfueWmxt
2LFtoLgTwVZFoTmZ/Ew/Py7t6yR/m5Bvx/dfJh/wluhPxj2hZht8ZYcXBqYn1QGqt3MgaI4P
zqeHp8fTq+1DFC8MWofiH+tz2348PjDmvTud4ztbIV+Rimcq/5sebAUYOI68+/7wwppmbTuK
H8ccjl09tx+OL8e3v7WCBvWa50rfkVq53UW+GELe/9DMSQKL2yTgRIkwR3SAs3Lf0Ojvy+Pp
rY9/jLysFeSNz06VegA4g+ZQOGi+3Q6/pj47HyiXNR3G+sCyww9q3Xxxg12admTsADJbuCvp
1mxEzOf8MsCAr1bLmznSpKLKXNzo2xGUlXezmvvIpzR13SmuOHYUffAre+mMgvETPHGXn90L
lxJJdssbYQxuQcINB4E1JEDBYerb4EINRrHwzjjP/p+1J1tuHEfyfb9C0U+zEd3REnVY2oh+
gEhKYplXkZQs+4WhslVlxdiW10fMeL5+kQCPTCDhqpnYl3IpMwHiTCQSeYCLt/GxS1AVAhUF
Nz5UvaEQwer/kkOpL2ORqq+WEDi/I0HRs4CobOPzc3cbjW9LPtKSfTstDQP/ZNKehsE+Hk/Q
K24DoEl7FRB7MjQASrVMxAQ/TevfJo0vF6eplMFQM1tnIIw4VB18PELuwHLai4AKtRq04MoC
Bms4kD2sbsQY2Xeqka9aBOiVHThQI7T4Xgm8LwOuDZd7/8vliDiqJ/7Yw7YYSSIuJtOpBaBD
CkCdtLBXVyRiPmFdtyVmMZ2OjFBmDdQE4KbtfTmXJFGyBM28KZv4vrqcj/HDKwCWonnu/s+f
6rrFdzFcjAqyaC+8Bbl2SchsOKsjfb8UhYhjVuKQdAucrFvAW+ke3vDRcyicD8O9gj1i2HxO
Yb4/kreiUQPseWe6C+MsB3PEKvRdbuKbPW/VE6XC2xsfjyvfm1yMDADOvakAiwsyX/KQGbNO
PXD/neHNlPj5eELdkNRLEoSM0JFeoD1cTWFa34zMcUnF9mKOjVOUGLoTOmIPiRSgMOqhIzJG
scfs+G/3BBKPU3UGSghIskA76RMNRLUfsZlUK1XLcD4ibVDQUm5a/ha7W81GrpHZRTk8eEi+
Yy6PRqzaG+X+/Xfp1cv56W0QPt2hvQL8qQhLXzQ2+bROVKIRnp8fpGhG01om/sSbksI9lRa4
7o+PKr6l9j3AO7WKhTzwNn3SpH6DKlR4kzU4ZsiWSTibk/MEfpvng++Xc37jiK+Uy5V+MB5q
zvdBYWaCaMi0V0QgEa1z1lGszEsaIGR3M1/s2emzRke7apzuWlcNeOb1pXR+fkKD3p9HWnho
Eg3x6F7g6PMcsfXjVZGU3VOG7r6+a5V5W65rUy/OW0jjGKQV8rhmThprAr2g5do+6BXJ8/vp
kOZOl5Cxw61CoiYTTtSWiOnCg7AEONmago4LApjNZ/T3YmYtOjAiF9w+D/KsAhR5gSsnE4fl
YzLzxo6YMpIrT0ec5xwg5h457iTDnlywSrtKmS9Op/i00HyqbWRntfHJXHQWOXfvj48fzUUP
Lw0Lp5Dyrvu/78en24/OCORfEPMjCMo/8zhur+xa2aW0QYe388ufwen17eX07R3sX/A3PqXT
voP3h9fjH7EkO94N4vP5efA3+Z3/Hnzv2vGK2oHr/ndLtuV+0kOyyn98vJxfb8/PRzk/BqNc
JuvRjLA6+G2uutVelJ4UMRwZxBFXWF8XWT3mIzIm+XY8nA7NROR0t+oKWFlXobCo26KrNURA
wDKeu+eaCR4PD2/36NhooS9vg0JHEXs6vZ0Nm6ZVOJk4IkTDdXnoCnXQID2WS7MfRUjcTt3K
98fT3entA80lUjF6Y4eIEGwq9rDaBCA7ktA7EuQNR3xfSLpCiJXIhkrZVKWHIy7p3+aq2lRb
j2tSGV0YEj9AvCE/fuZwNI+Zkn9AxJ/H4+H1/eX4eJTixrscXrL0I2PpR/3S7xZ+Vs4v8N2y
hVC6y2Q/Q/2N0l0d+cnEm+GiGGqOBeDk/pgx+4Ne1Ks6LpNZUPJn/if91pF/Tj/u32wuoN7C
RYwtj4IvcqLJRVcEWymzYuWKiMfaWx0p+OWhMuQsRUUelIsxHg4FWeAZEOXF2MOfXG5GF9RW
GSCOA9hPZOG545krGfPZTSRijMOAyd+z4ZT+nk3JkbfOPZEPHeoqjZRDMBzyToLR13Imt4Ic
7E9MPKIy9hbDEQrUQDEewijIyCP7BWsVYt7SCJHkBfsc86UUIw9fpou8GJJAam2jdPw5YkdR
FVPWBzreyRUz8dFKk9xRslW8MBoI8sZOMzEa42nJ8kquJdSUXLbVG1JYGY1G2I8Vfk8oX6ku
x+MRK2dX9XYXlR6+zrUgc+9WfjmeOJ5OFe6CW3nt4FVy/kjYCgWYk+EE0AVbi8RMpmMSu3s6
mntIibTz05iOr4ZgL/hdmMSzIdYAaQh+x93FsxG+FN3IOZBDTpIOUPaiHRQPP56Ob1rZwjCe
y/kCh1hRv7F+5XK4WBCOoBV2iVinLNDQUIm1ZGG8ug2owypLQsgnjLVuSeKPp94EM27NeFX9
vGjSftpEdzZbiT+dT8ZOBG12iyySMUmrQuGmmTA70noO+mitxg092e6xKE4Im4P09uH05Jo+
fB1M/ThK8XByrEYrmesiq6yU9ugMYz6pGtPGrBv8AebJT3fysvB0pB3aFPodvddWI6QySiq2
ecVryiuwIImzLOdLq3BL3JWXb1Zz3j5J+U7FMjk8/Xh/kP9/Pr+elIG9NZrqaJjUeVbSPfXz
Koi0/3x+k6f+idG4Tz2sSg/AoQtrWuXtbkKDqcDtTh5DvFQrcZLzcDwpj0Hc5eRxo21su+UY
YjEtTvLFaMiL97SIvom9HF9B8mHF42U+nA0Tzo91meQeVfnAb+tJIN5IrsjfboK85I8SctIa
1p94/CM/Hw3Jfk/yeIQd3fRvs1ESKpkcdwVPyinVrqrfBouUsPGFxdB0S1koLV9NJ7gTm9wb
zkjzbnIhRa8Zu9WtyeqF1CdwPsBziI8Ygmym/fzP0yPcA2Cj3J1etUMJswiUvGQIJ73cEgVg
pRlVYb1jXzGWIyPeUB6l3HoqVuDoglXPZbEaopOu3C/G+GSSv6c0DBgU4LceHODjIZvycRdP
x/Fwb54NPxme/18/Es2rj4/PoC6hWxFzuqEAo8gEWXLh4DCA6C2W4v1iOBuh4dMQEso8kUI5
eYZTEE6PVUlWTn1TFcTj07pzPelLphXvXbZLwpp3LwcjtQ/0w45FCUBXKErAiSoBq+4YMg8Y
Jm+AXpWQf5K37QR8M/qOyuO8tBoDMNOq1EL3dpYIpcIaq/chLU4UXwe396dnOzyExIBJGnr7
kp3AKR8gKFAhgA4vbKtCtNFz4V+ac4BspSAVmZ+1TjXWw3W+uR6U799elRFJ384mvAnN2IWA
dRLJu3RA0CpR0DpRZfoTxk/qyywVKmFZU10/6LKiJnBTXWVFwdvuY6qAVI4xpYh3GUXBComS
/Tz5qgKxGx9Oor1cXl0/HB/O96L25mmi0qfR6jsUdA2tB2iUXBB5E/2dflTk+SZLwzoJktnM
oUkDwswP4wy0+kUQ8lMLVOpNTid3+xWaiNNIAo3KyuBRJQfAtX2NsgfNkiVvikPpwsS0cW55
M1loqDhkFpSjxR1DPpk0+dOxOwET5322r+MLRLhTvP9RaxuJPX/bok/IugNOmMlMJtYWwu5+
7a5OgyIzM/aYroDoOF6muyBKOIPBQCBDVBVD1fhp89UGDC+1ZSDsmKubq8Hby+FWCRe2l0Pp
YKmNufyG7RNTZadJzNcCsztlGpvLm1RuPBVaKCvrG1RVJ+uiJfV33KpRVMsiCmi+nKYMJJq/
CRs8U7p5Lc4LFQh5m8f4+qSqLsJ1hB8LsxUPV8BgFduQWqy2Vr8Azq/uVYmMguUPlUgCTPRT
yBSCPb1KyAFTVk3gbr6qlmKzRakpEFwoBxeKKv0sMT9ULkPT+7OXM0JuaJUHkBzQfdg5quIE
K3bqkC2YZawvFh4xpgOwo3uASnTkBu6yz3j5at9ZnaraESYnypD9CvyCw9aIjF7GUQIurR8Y
oDmvXxVoCai7ud+4JBFvia0zIVeSlRW76wzjTP0keHqQ8ptismgoA1/4m7C+ysA8QkUnxx/f
CbgQyMuAvPjnoijZTK8SF2Xy+CK2kh4k2Xs0APVeVBUxA2kRkDxKzqnPWQq1NGXob4uouibf
GddGGHMN+lmFY2eFE7vCyS9UOPmkQiNOkoJdKscmFTexH6YvywCJC/DLLAv575ZqyqjdeySn
BpKycev0i0LgjfLF1R9C0fbGSeAO662Kg4oLkt7wAsje1dz1qvSMOch8DWMrWlbOnqdR3FTW
My1PjwbmWV7TWtcXmjJ69bop5KRIefvTOpTDRJR+CX1b+2d8DGKugbYhYkP2wejj8x9PKF5p
4CBirmcN01mU6ox1nIRYlcq/hoQBS6TwAnZc1yYesf9aitzFdW52r8fvQrpDOhCz1BvEchvJ
8yGVLHadCkiaXGIqHaGUWH3YQUs7JqswRi6SlejqaCBft1klcJUKAEEjlQcH69zZSq2Qha6h
vxJFaoyQRrgutxpbSUkEtWWVVPUOabE0wDNa61do4iHQ3KqckESnGkZAKzkOBOBDrm0zZiMm
yOSUxOLaAZOMKIgKcH+Vf8gGZkhEfCXkAbuSt8/sit0KqFSUBiHv2Y+I9nJ6VTeZkUVkSShH
K8uvO0e1w+39ER2Jq9JisA1IcQnH/m4oNpLlZeuCzSTY0rQL3SqcLYEzSKmfj10HNCrPNDGI
6qCf8GNExDYQBaNRY6HHJfijyJI/g12gZIdedECqxGwh76rOJKqBnV+1/Q5ft9aHZ+WfK1H9
Ge7hX3nxp1/vNm1FVmFSynIEsmtIHnGR1msNUplABM2/JuMLDh9l4OJVhtVfv51ez/P5dPHH
6DeOcFut5pjjmu3SEKba97fv867GtGoPaQywFoqCFlfskH46bPr2+3p8vzsPvnPDqZzmDC0c
gC4dDt8KCVmgMNtRQBhVKZlK8YZa/Gq3vE0UB0XInQ26cBRA0JeN2mlYbNal862yqSWC82VY
[base64-encoded attachment body elided]

--3MwIy2ne0vdjdPXF--


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 16:46:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 16:46:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56057.97838 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpwQO-0003hX-3U; Thu, 17 Dec 2020 16:46:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56057.97838; Thu, 17 Dec 2020 16:46:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpwQN-0003hQ-VW; Thu, 17 Dec 2020 16:46:43 +0000
Received: by outflank-mailman (input) for mailman id 56057;
 Thu, 17 Dec 2020 16:46:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6Fje=FV=intel.com=lkp@srs-us1.protection.inumbo.net>)
 id 1kpwQM-0003hK-BV
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 16:46:42 +0000
Received: from mga04.intel.com (unknown [192.55.52.120])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e0dea77f-0b7c-4abc-912c-8c7ed7367bc3;
 Thu, 17 Dec 2020 16:46:38 +0000 (UTC)
Received: from orsmga005.jf.intel.com ([10.7.209.41])
 by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 17 Dec 2020 08:46:36 -0800
Received: from lkp-server02.sh.intel.com (HELO 070e1a605002) ([10.239.97.151])
 by orsmga005.jf.intel.com with ESMTP; 17 Dec 2020 08:46:32 -0800
Received: from kbuild by 070e1a605002 with local (Exim 4.92)
 (envelope-from <lkp@intel.com>)
 id 1kpwQB-0001HV-Mp; Thu, 17 Dec 2020 16:46:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0dea77f-0b7c-4abc-912c-8c7ed7367bc3
IronPort-SDR: r8mGtVnSnOS9DtzcFWaCR3OE6xYg+6tD+lvDqm9/B39FJnAyKcEDB4OOdca1JeodVzF0GddOdE
 Fd6LCAtwI+Rg==
X-IronPort-AV: E=McAfee;i="6000,8403,9838"; a="172716052"
X-IronPort-AV: E=Sophos;i="5.78,428,1599548400"; 
   d="gz'50?scan'50,208,50";a="172716052"
IronPort-SDR: ZvnzmcGCUrmjoPpbN+QOvsoz56j5Lsp1Cy7XO6R9izTvKfYVJVrZ6njRPANVuh2767s6BczMqR
 HD6znqiKYVMg==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.78,428,1599548400"; 
   d="gz'50?scan'50,208,50";a="558047904"
Date: Fri, 18 Dec 2020 00:46:00 +0800
From: kernel test robot <lkp@intel.com>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
	x86@kernel.org, virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org
Cc: kbuild-all@lists.01.org, clang-built-linux@googlegroups.com,
	Juergen Gross <jgross@suse.com>, Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>
Subject: Re: [PATCH v3 10/15] x86/paravirt: simplify paravirt macros
Message-ID: <202012180016.1wPj4Oft-lkp@intel.com>
References: <20201217093133.1507-11-jgross@suse.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="CE+1k2dSO48ffgeK"
Content-Disposition: inline
In-Reply-To: <20201217093133.1507-11-jgross@suse.com>
User-Agent: Mutt/1.10.1 (2018-07-13)


--CE+1k2dSO48ffgeK
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hi Juergen,

I love your patch! Perhaps something to improve:

[auto build test WARNING on linus/master]
[also build test WARNING on v5.10]
[cannot apply to xen-tip/linux-next tip/x86/core tip/x86/asm next-20201217]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]
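
For illustration only (not part of the robot's instructions): the '--base'
option mentioned above appends a "base-commit:" trailer to the generated
patch so CI services can pick the right tree to apply it to. A minimal
sketch, using a throwaway repository and hypothetical file names:

```shell
# Create a throwaway repo with two commits, then format the top commit
# as a patch that records its base commit.
git init -q demo && cd demo
echo one > file.txt && git add file.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m base
echo two >> file.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -am change
git format-patch --base=HEAD~1 -1 HEAD
# The patch file now ends with a "base-commit: <sha>" trailer:
grep '^base-commit:' 0001-change.patch
```

'--base=auto' can be used instead when a tracked upstream branch is
configured.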

url:    https://github.com/0day-ci/linux/commits/Juergen-Gross/x86-major-paravirt-cleanup/20201217-173646
base:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git accefff5b547a9a1d959c7e76ad539bf2480e78b
config: x86_64-randconfig-a016-20201217 (attached as .config)
compiler: clang version 12.0.0 (https://github.com/llvm/llvm-project cee1e7d14f4628d6174b33640d502bff3b54ae45)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install x86_64 cross compiling tool for clang build
        # apt-get install binutils-x86-64-linux-gnu
        # https://github.com/0day-ci/linux/commit/0d13a33e925f799d8487bcc597e2dc016d1fdd16
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Juergen-Gross/x86-major-paravirt-cleanup/20201217-173646
        git checkout 0d13a33e925f799d8487bcc597e2dc016d1fdd16
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=x86_64 

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   In file included from arch/x86/kernel/asm-offsets.c:13:
   In file included from include/linux/suspend.h:5:
   In file included from include/linux/swap.h:9:
   In file included from include/linux/memcontrol.h:22:
   In file included from include/linux/writeback.h:14:
   In file included from include/linux/blk-cgroup.h:23:
   In file included from include/linux/blkdev.h:26:
   In file included from include/linux/scatterlist.h:9:
   In file included from arch/x86/include/asm/io.h:244:
>> arch/x86/include/asm/paravirt.h:44:2: warning: variable '__eax' is uninitialized when used within its own initialization [-Wuninitialized]
           PVOP_VCALL0(mmu.flush_tlb_user);
           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:504:2: note: expanded from macro 'PVOP_VCALL0'
           __PVOP_VCALL(op)
           ^~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:492:8: note: expanded from macro '__PVOP_VCALL'
           (void)____PVOP_CALL(long, op, CLBR_ANY, PVOP_VCALL_CLOBBERS,    \
                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:471:3: note: expanded from macro '____PVOP_CALL'
                   PVOP_CALL_ARGS;                                         \
                   ^~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:431:41: note: expanded from macro 'PVOP_CALL_ARGS'
                   __edx = __edx, __ecx = __ecx, __eax = __eax;
                                                 ~~~~~   ^~~~~
   In file included from arch/x86/kernel/asm-offsets.c:13:
   In file included from include/linux/suspend.h:5:
   In file included from include/linux/swap.h:9:
   In file included from include/linux/memcontrol.h:22:
   In file included from include/linux/writeback.h:14:
   In file included from include/linux/blk-cgroup.h:23:
   In file included from include/linux/blkdev.h:26:
   In file included from include/linux/scatterlist.h:9:
   In file included from arch/x86/include/asm/io.h:244:
   arch/x86/include/asm/paravirt.h:49:2: warning: variable '__eax' is uninitialized when used within its own initialization [-Wuninitialized]
           PVOP_VCALL0(mmu.flush_tlb_kernel);
           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:504:2: note: expanded from macro 'PVOP_VCALL0'
           __PVOP_VCALL(op)
           ^~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:492:8: note: expanded from macro '__PVOP_VCALL'
           (void)____PVOP_CALL(long, op, CLBR_ANY, PVOP_VCALL_CLOBBERS,    \
                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:471:3: note: expanded from macro '____PVOP_CALL'
                   PVOP_CALL_ARGS;                                         \
                   ^~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:431:41: note: expanded from macro 'PVOP_CALL_ARGS'
                   __edx = __edx, __ecx = __ecx, __eax = __eax;
                                                 ~~~~~   ^~~~~
   In file included from arch/x86/kernel/asm-offsets.c:13:
   In file included from include/linux/suspend.h:5:
   In file included from include/linux/swap.h:9:
   In file included from include/linux/memcontrol.h:22:
   In file included from include/linux/writeback.h:14:
   In file included from include/linux/blk-cgroup.h:23:
   In file included from include/linux/blkdev.h:26:
   In file included from include/linux/scatterlist.h:9:
   In file included from arch/x86/include/asm/io.h:244:
   arch/x86/include/asm/paravirt.h:54:2: warning: variable '__eax' is uninitialized when used within its own initialization [-Wuninitialized]
           PVOP_VCALL1(mmu.flush_tlb_one_user, addr);
           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:515:2: note: expanded from macro 'PVOP_VCALL1'
           __PVOP_VCALL(op, PVOP_CALL_ARG1(arg1))
           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:492:8: note: expanded from macro '__PVOP_VCALL'
           (void)____PVOP_CALL(long, op, CLBR_ANY, PVOP_VCALL_CLOBBERS,    \
                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:471:3: note: expanded from macro '____PVOP_CALL'
                   PVOP_CALL_ARGS;                                         \
                   ^~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:431:41: note: expanded from macro 'PVOP_CALL_ARGS'
                   __edx = __edx, __ecx = __ecx, __eax = __eax;
                                                 ~~~~~   ^~~~~
   In file included from arch/x86/kernel/asm-offsets.c:13:
   In file included from include/linux/suspend.h:5:
   In file included from include/linux/swap.h:9:
   In file included from include/linux/memcontrol.h:22:
   In file included from include/linux/writeback.h:14:
   In file included from include/linux/blk-cgroup.h:23:
   In file included from include/linux/blkdev.h:26:
   In file included from include/linux/scatterlist.h:9:
   In file included from arch/x86/include/asm/io.h:244:
   arch/x86/include/asm/paravirt.h:60:2: warning: variable '__eax' is uninitialized when used within its own initialization [-Wuninitialized]
           PVOP_VCALL2(mmu.flush_tlb_others, cpumask, info);
           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:526:2: note: expanded from macro 'PVOP_VCALL2'
           __PVOP_VCALL(op, PVOP_CALL_ARG1(arg1), PVOP_CALL_ARG2(arg2))
           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:492:8: note: expanded from macro '__PVOP_VCALL'
           (void)____PVOP_CALL(long, op, CLBR_ANY, PVOP_VCALL_CLOBBERS,    \
                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:471:3: note: expanded from macro '____PVOP_CALL'
                   PVOP_CALL_ARGS;                                         \
                   ^~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:431:41: note: expanded from macro 'PVOP_CALL_ARGS'
                   __edx = __edx, __ecx = __ecx, __eax = __eax;
                                                 ~~~~~   ^~~~~
   In file included from arch/x86/kernel/asm-offsets.c:13:
   In file included from include/linux/suspend.h:5:
   In file included from include/linux/swap.h:9:
   In file included from include/linux/memcontrol.h:22:
   In file included from include/linux/writeback.h:14:
   In file included from include/linux/blk-cgroup.h:23:
   In file included from include/linux/blkdev.h:26:
   In file included from include/linux/scatterlist.h:9:
   In file included from arch/x86/include/asm/io.h:244:
   arch/x86/include/asm/paravirt.h:65:2: warning: variable '__eax' is uninitialized when used within its own initialization [-Wuninitialized]
           PVOP_VCALL2(mmu.tlb_remove_table, tlb, table);
           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:526:2: note: expanded from macro 'PVOP_VCALL2'
           __PVOP_VCALL(op, PVOP_CALL_ARG1(arg1), PVOP_CALL_ARG2(arg2))
--
   In file included from drivers/iio/accel/bma180.c:17:
   In file included from include/linux/i2c.h:13:
   In file included from include/linux/acpi.h:35:
   In file included from include/acpi/acpi_io.h:5:
   In file included from include/linux/io.h:13:
   In file included from arch/x86/include/asm/io.h:244:
>> arch/x86/include/asm/paravirt.h:44:2: warning: variable '__eax' is uninitialized when used within its own initialization [-Wuninitialized]
           PVOP_VCALL0(mmu.flush_tlb_user);
           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:504:2: note: expanded from macro 'PVOP_VCALL0'
           __PVOP_VCALL(op)
           ^~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:492:8: note: expanded from macro '__PVOP_VCALL'
           (void)____PVOP_CALL(long, op, CLBR_ANY, PVOP_VCALL_CLOBBERS,    \
                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:471:3: note: expanded from macro '____PVOP_CALL'
                   PVOP_CALL_ARGS;                                         \
                   ^~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:431:41: note: expanded from macro 'PVOP_CALL_ARGS'
                   __edx = __edx, __ecx = __ecx, __eax = __eax;
                                                 ~~~~~   ^~~~~
   In file included from drivers/iio/accel/bma180.c:17:
   In file included from include/linux/i2c.h:13:
   In file included from include/linux/acpi.h:35:
   In file included from include/acpi/acpi_io.h:5:
   In file included from include/linux/io.h:13:
   In file included from arch/x86/include/asm/io.h:244:
   arch/x86/include/asm/paravirt.h:49:2: warning: variable '__eax' is uninitialized when used within its own initialization [-Wuninitialized]
           PVOP_VCALL0(mmu.flush_tlb_kernel);
           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:504:2: note: expanded from macro 'PVOP_VCALL0'
           __PVOP_VCALL(op)
           ^~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:492:8: note: expanded from macro '__PVOP_VCALL'
           (void)____PVOP_CALL(long, op, CLBR_ANY, PVOP_VCALL_CLOBBERS,    \
                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:471:3: note: expanded from macro '____PVOP_CALL'
                   PVOP_CALL_ARGS;                                         \
                   ^~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:431:41: note: expanded from macro 'PVOP_CALL_ARGS'
                   __edx = __edx, __ecx = __ecx, __eax = __eax;
                                                 ~~~~~   ^~~~~
   In file included from drivers/iio/accel/bma180.c:17:
   In file included from include/linux/i2c.h:13:
   In file included from include/linux/acpi.h:35:
   In file included from include/acpi/acpi_io.h:5:
   In file included from include/linux/io.h:13:
   In file included from arch/x86/include/asm/io.h:244:
   arch/x86/include/asm/paravirt.h:54:2: warning: variable '__eax' is uninitialized when used within its own initialization [-Wuninitialized]
           PVOP_VCALL1(mmu.flush_tlb_one_user, addr);
           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:515:2: note: expanded from macro 'PVOP_VCALL1'
           __PVOP_VCALL(op, PVOP_CALL_ARG1(arg1))
           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:492:8: note: expanded from macro '__PVOP_VCALL'
           (void)____PVOP_CALL(long, op, CLBR_ANY, PVOP_VCALL_CLOBBERS,    \
                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:471:3: note: expanded from macro '____PVOP_CALL'
                   PVOP_CALL_ARGS;                                         \
                   ^~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:431:41: note: expanded from macro 'PVOP_CALL_ARGS'
                   __edx = __edx, __ecx = __ecx, __eax = __eax;
                                                 ~~~~~   ^~~~~
   In file included from drivers/iio/accel/bma180.c:17:
   In file included from include/linux/i2c.h:13:
   In file included from include/linux/acpi.h:35:
   In file included from include/acpi/acpi_io.h:5:
   In file included from include/linux/io.h:13:
   In file included from arch/x86/include/asm/io.h:244:
   arch/x86/include/asm/paravirt.h:60:2: warning: variable '__eax' is uninitialized when used within its own initialization [-Wuninitialized]
           PVOP_VCALL2(mmu.flush_tlb_others, cpumask, info);
           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:526:2: note: expanded from macro 'PVOP_VCALL2'
           __PVOP_VCALL(op, PVOP_CALL_ARG1(arg1), PVOP_CALL_ARG2(arg2))
           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:492:8: note: expanded from macro '__PVOP_VCALL'
           (void)____PVOP_CALL(long, op, CLBR_ANY, PVOP_VCALL_CLOBBERS,    \
                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:471:3: note: expanded from macro '____PVOP_CALL'
                   PVOP_CALL_ARGS;                                         \
                   ^~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:431:41: note: expanded from macro 'PVOP_CALL_ARGS'
                   __edx = __edx, __ecx = __ecx, __eax = __eax;
                                                 ~~~~~   ^~~~~
   In file included from drivers/iio/accel/bma180.c:17:
   In file included from include/linux/i2c.h:13:
   In file included from include/linux/acpi.h:35:
   In file included from include/acpi/acpi_io.h:5:
   In file included from include/linux/io.h:13:
   In file included from arch/x86/include/asm/io.h:244:
   arch/x86/include/asm/paravirt.h:65:2: warning: variable '__eax' is uninitialized when used within its own initialization [-Wuninitialized]
           PVOP_VCALL2(mmu.tlb_remove_table, tlb, table);
           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:526:2: note: expanded from macro 'PVOP_VCALL2'
           __PVOP_VCALL(op, PVOP_CALL_ARG1(arg1), PVOP_CALL_ARG2(arg2))
           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:492:8: note: expanded from macro '__PVOP_VCALL'
           (void)____PVOP_CALL(long, op, CLBR_ANY, PVOP_VCALL_CLOBBERS,    \
                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:471:3: note: expanded from macro '____PVOP_CALL'
                   PVOP_CALL_ARGS;                                         \
                   ^~~~~~~~~~~~~~
   arch/x86/include/asm/paravirt_types.h:431:41: note: expanded from macro 'PVOP_CALL_ARGS'
                   __edx = __edx, __ecx = __ecx, __eax = __eax;
                                                 ~~~~~   ^~~~~
   In file included from drivers/iio/accel/bma180.c:17:
   In file included from include/linux/i2c.h:13:


vim +/__eax +44 arch/x86/include/asm/paravirt.h

fdc0269e8958a1e Juergen Gross   2018-08-28  35  
2faf153bb7346b7 Thomas Gleixner 2020-04-21  36  void native_flush_tlb_local(void);
cd30d26cf307b45 Thomas Gleixner 2020-04-21  37  void native_flush_tlb_global(void);
127ac915c8e1c11 Thomas Gleixner 2020-04-21  38  void native_flush_tlb_one_user(unsigned long addr);
29def599b38bb8a Thomas Gleixner 2020-04-21  39  void native_flush_tlb_others(const struct cpumask *cpumask,
29def599b38bb8a Thomas Gleixner 2020-04-21  40  			     const struct flush_tlb_info *info);
2faf153bb7346b7 Thomas Gleixner 2020-04-21  41  
2faf153bb7346b7 Thomas Gleixner 2020-04-21  42  static inline void __flush_tlb_local(void)
fdc0269e8958a1e Juergen Gross   2018-08-28  43  {
fdc0269e8958a1e Juergen Gross   2018-08-28 @44  	PVOP_VCALL0(mmu.flush_tlb_user);
fdc0269e8958a1e Juergen Gross   2018-08-28  45  }
fdc0269e8958a1e Juergen Gross   2018-08-28  46  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[Attachment: .config.gz (base64-encoded kernel build configuration) omitted]
Ut2xZ0fexKnxHg5+4PE+LYq7g6HC7hghPw4SiFvHygx2IToTPZOfX3srJPU0HD0bV9c/rm9X
XFU+w0r2xdRIi1har4KxGFluXLu5bpn9e7kb7cHLDcuWaqgyGbGS5LpdbOwtY4eNQgwYmI68
6jsLdFwyzqjbDYvD9J9nAmI5XwTUx1DQ0gs55zUDWSx8aWyxZGBRFvjiCBlccRLz9Yx+leaw
0c9pTSaJsrmJS0/H4z2tZLSsNNh2PBP5B33fX/2Tn05HYqByALSNl0f2NF6Bwt8dz92hfw+L
NKVpRyyVwSzc/H9rX9LcOI40+lccfZqJ6JnRbunQBwikJLa5mSBluS4Mt0td5ZiyXc92fV/X
/PqXCYAklgTtifcOXW1lJrEjkUjkwoA/pFGyJyfKMY8wMEMmJKpByrfGhxennAmyvCOnd0SW
lTOVQ5D8TCeIIz9VSc7k9dHkSJg8DBidsIHFDUyyY03Uwy9DypuOYEPrTrCBLLliaVtP7eq2
9bTlvMGxd2vsUFFC6WMlBchEl9NpGx1L72MlLgVbi+LXah7wFTIJZG7qUSp0cB9f7Yl2RfA+
5bf7nFSXdgSHauaMFwBzUVJAglJUNsxIAxPYeyBNLacrfpyHlGAW4SbIouergMebQxWI9GhT
de7T77VoNZtZuXsEyLoo/JlmCs3WJjZk5B7lNp7YbgXIb2ay2BPXh7U1x0l2WmdZYH4l0mNU
EkoHdOvRFiPTEXu+nJ8e7i/EM3/1TadA9I9zzOG9712tjFpNrLKQC9jR2GSzJSVwu1SX5kuw
g7OTTpnY0zR0SbCp1mTeu46mBubRTUoflogYJ2J+r2L0WTJZfp1oR7lRoSw7f364q8//xgqG
8TfZOF596zgoOdWzy5Cy2KSZzkYKmM7gOCihtR8pB9/nLacun+L3ch/FXBGN1Jnt9nxHB0kj
iDOnfSO0R1X7R6nj/GPUq8tV4HywqS5pm3qHirS6tGik4WVoBBGJSscPzJokPSS7sQmRNCAt
fKy4zVg5G6JZI8SqYe8Tr6ek/b1DYybd8lC6g2MUo4tbUqhVGxwAReMu1iDl8Z2Nsp5e0qYZ
DhXpoGXTLE0NrocaxiZ8x7OYlcHP9Luuugc+fnv+Agzzu3adsRSGHyHvWiiN1/aRMHQVXeY8
zsmOINozkGPLOYiKIeM4KdWWXKCbxnpjDlCP1hn4BkGZldftnvMWLpwLG5plHjgBMCuF0PKq
C11NTJvaRJe8mJhuax1U0hriU9K3Y0VLpUiQvkegSrikXpxhTBR6ZeZF7KHWcA3QudH0AWp7
eCA81XBaiROpDzerKc1ykSAlCIwK1GxszMymQ3suF3bbNfGlFVJiIN+Mjs9msyJLc8GaeO1A
y6aDP1KFrM1FKfQCMZqPpgiJKAEMN7qJBd9TwLTE0CuoRR6wg1DLdXsQQau0OLROePgBq5TU
RNEwZZzJTi1IrZuecysXLfa4birURWOnLfj1SoDMVTqjoUuBOtzK9UAHK+8arj41EHp4Ef5o
wuVIeh8M9DMzu3s3/1MK6FGqpk5tz64BMQtkRhVGL6YfoJnRzy6oBMf8jlIPmRzNYUeL4p3F
zK6QkZ249V4q747KWPcdnZqKlGv2UoXHQneg1cIgJcrpKBvMECIVUvZFReMBUzTUW460LZ9O
jEqs0ZbY2XgTJNFiHihCqex2yZGyb5bW7sZ3ptE7ogTfrFeTUL09xZzpr8060UfLawgC4a+C
X4U0CIqkrFATgB4bdBEdfk3q7DyyjXmdVm3gja1Rz5Nju5tyuMQJRJIq+Xw5SVqGS8L7WmKm
qIkc/RYpqsDnh5X7MUFRjdEsZA2jpSRj2BV8P5+Gu7AG/GxONB8R8/lY0UixntfvkBzeK+M4
H5mdNZrQzXTzLHC1mHjgDbbIByO1DTRYRY02O8h5rKXeh/xzlmq6z/DOT3bncCPKJMeN4OlF
lIQqnn+83J99vYi0kFWOVRakrIqtqe1Nr+Jj3Sbrmem6LX+i+td+59mmkUsJUFHxTgHrvejK
OgMvxKhb7A15+y+1J23wy96P1rUBRu+scusXuKvrrJrAcvdKHITLU4l+O6EqpXPtyi8ZtcLh
QquIjWDVLgzVqHbgQThdVAGJHaBykHWhecmzy65P1oJTbqltXfOR5mlX5xEKPfHR9oS1IwOl
dluXItJtHqtTJi5dKLp+OSAZ3H7m9Q42RRW7UPR2gwGSCaZKoueqxWUiasYPtFmZIlEOX2YS
Gzjhj5eZ9Ddxwiyq/EllQptIKKygkV1tOrFuKFVS5x4engv54gI3PzFCg35ZI8sRz+jQatTt
/B3lcOypwQcPmqvwjIJmdWMHstA+UQUMMLXqu+/qzDo4Yt13zDQ8NpDliXqrOKznuD2yygqb
00MDqUQ1vqQWtWoMpvCV6WhrapUJzMNEG9ewmsMgTifE7uoXslbyuutbgaHWQtQ+vLBfxWX0
TFjoJc7ZauGYBFpqDucQ6fcjS9JtYXhCYp8zBRnsVDorkuxAH2HKWb6dIzuqbmAhYwk0y4QD
RzbYpeiq4tZLU+cnTBOrp46uuT0Qn0YcoO5ma3vJKZUMKlyS0syNBidXGXFVxHAJRq/HLLr2
RkdJS5nY042U+879RtYMlVKWMsqHiJkO1Ao0BEtT9lJoMvdwf6H8iMq7L+e3uz++nY2kBNbX
6Pu0r9HX2y13wOAd8j1079JnvX64lJKb0jkg3mu3W+pYbsqOQnlL4AW4PlRFs6dc3IqdIrfW
Vxa1nh/WwHQxBHYY3a/lMImW1sME+lIY8gVLSuzfMRPMOqjg1pw1PqRN4z3jt21Ut9skj4Bp
CIIoSoScou2tVL1sb7vxM9j7fIPi8I0/YBIzOiq4T8JYtfADvdU+Xl2l2v7z8fnt/P3l+Z6I
qRBjIhQnUF4Pa7nlQ9ex0GPZwDlaFdze2kLbjfTGpF61qjnfH1+/EC3R9mrD2kCAdLwk+qmQ
uWFopiBDOyyw0j1j9MswxlYKK6z28zN7ZbW+n23Mb4d2qN2Yw1nx9Pnm4eVshI1QiIJf/E38
fH07P14UTxf868P3v2NEyPuHP2FDE9GgUX4uszaCnZTk1hZWxrNa3y6eydCDKuQNZ/mR0dtf
E8iHSSaairZ06OLnQ0d5kgdsP3siurkWVRwbVNZ53Afep2rqzIOJTqvRkFZFocFQWBQ5UB6h
3hAMCpEXhbGONKacMfmtuVA1arTBfrtMUWczxa9bMkthjxU7tDBSWVBfnu8+3z8/Oh11rpXS
nNHiPAVXAavJsDUSawSO7BJIUnUpg/lT+a/dy/n8en8HZ8/180tyHRr56ybhXLvTE1Wj8L5v
ajMCSsmYzIUrijQ2m/NepbLWh39mJ3psUDbbl/w4Cyw/OQ1oYUFOo1euMr2Ai/Fff4W6rq/N
19meFGIVNi+tThIl6kDyw3ud37dOvLK2U42h7Y+sJHMh4DmT7yqm3j8NqNQV31RmtAXNWZ23
zQEa2PUWJfHm33kgUz2Tfb7+cfcNll9grSv5s4BD2Ipfprg6HFYgajlPgHuxtZxOJTBNOX0N
6V4oKWmow5WRU7H3wqgy6PJcCI95aAG6IkeF7Lu9WMNK+V622ldWsAND5lLzNfattUuMagc1
vy2FMXGkYCg6UYp8qMJN8mVTvLOoNFUfrBw4RlOmoet7wfvwMscirdk+/hj9/L+gp1TojdT+
KA7fCQinh28PTy7T0PQ60MxRa4Y7v2v/C7vuT66ZYhd/+kMih3FXlb4OuyqmzGXjU83lo4zs
RfzX2/3zk5ZxDOnFIm4Z3AR/Z9x6EdGonWCbBfnwqAl0qgD3Ox2zKa/niw1lZW6R8UMNu2/Y
jhqZsdN0sby8pBDz+XJpL9gOc3m5XlC2GZqirPOl5WGq4Yox4Dsc+pqbe0Siq3q9uZwzoqci
Wy5Jf2iN73K4eUUCgvc+I365El3Dv/MZNfwZXAIqI/xLFFmKSamwiyqW2boGCY+3tAJKCxdw
tu9oKRPtfVM49WvajwvfC+KMzHyD0YIAY+md8ea5L+kkQEe4EOMi3zbWaYZyCOrv8rhuOVUP
EiQ7Y/qUFWObx/ZAyGMx4FQUsTWGv4oqp5/97Uup/6qSJ8a7hNK27jI+w/E14FovmnGP74rK
dElKzCUCPzAwws5SnPWwlluhww1EMAabRRKU8wwyTP4C0l2TuU24Qt+y1gqThGAdlh8dkoh2
qz/NIPDGNx6prBXOFlSiapKZSSJuuiAxjw6YLHFomvRx6ngj4QzcbZPolM4XS/TbogR+xF4a
/t8aoJ1zu72SMcsOBH4vJt5v7Ro27LCMA3OSiQzoPRaxGcmNIza3o1vANFfRhGK9CmNZFEoQ
6QQpx7VWDWrn6FvojHmHw5jUDv7qJCIjSKr8aQ/S1Yn/fjW1cvlkfD6bW2m42OViufQAvSu0
AaaNwgGzXixnVgmb5XLa6ohsNtTiEhJEHSbZicPsma068ZXyrh+O/PpqPScDNCBmy7Spyf+L
A3q/AC8nm2lFG24BcmbnZjRRq8kKGKb0mGMVS1MyFRPQbTaWcpeho/8J34fJHaIu3IC0Dx+4
KAc+kHdolrFlNJOf2W61yj3D/XZ4rEA7holX9nCg5sc4LcoYmEYdcydljmcFEygET5bsNFsG
CQ6nyyk9zEnOZnCihD7stHb00MBl9zJyxyQtOfoSBT7RMXH1Rx2w5rPFpcUiJIiMIy4xGyvt
AwpjczKJBHoJrqZTcwuX84VtSd0Z1cvwt6tJoOEmFch9GOZOraEej7onAQvVXlk5ay7pmDT4
gNxaZUg574gSb+9nYWJU0OD2VDgjPoiHCd32geDo1yfhADYjsmOwxf1tVdjTVOWYQ2HtALvb
Xt/1jpHIiOPuRpNhxgNDLORaarMiUhcyw75OCjBqYEx/9x7ugqKdNDC0mKiJcVtVZ7AFA62S
dgT70qpFmp/wyXpqR51kkYATY2nTZXAfcBa8zmwBy9OBrhDq1HXcrWTkUqMqfck7dd34b4N9
7F6en94u4qfPpnYLTswqFpzZOjP/C606/v4NroAOsz9kfOGmt+jVrv0H6ouv50eZnFlFbbbP
DLRdaMuDznUeeONFmvhTQRD1Aku8ssUc/O14vHOxNvlDwq7d9KWCR/OJXEvU6oDKkyrBrb8v
zVwlohRzy17z+Gm9OZFD4w2Fimj98LmLaI0hM/jz4+Pzk2k/TxOY05kJPThCd1q9Ioiy+84v
1Ec6EpVZoCttdTg9gjq2ilqJsCjv1FIKyQnLyYoyrAbE3JxF+L1YrOwDf7nczAPLJFquNitX
Wh6Ey7KoWzrjVSQWi9nCrKc72Wj6bDWb2zlX4PBZkkG1ELGe2eIcL9EpjhTIJNsy49v2IIfD
1TLC2XJ5OXW5R8QsRjE6KX2gos8/Hh9/an2RpZXG2ZZpy9uoyTI6AbtXgCxh93L+Pz/OT/c/
+wA6/8G0c1Ek/lWmaffOpUwm5EP13dvzy7+ih9e3l4c/fmDsIN+DJECncrZ8vXs9/yMFsvPn
i/T5+fvF36Cev1/82bfj1WiHWfZ/+2X33Ts9tPbEl58vz6/3z9/PMHQeE9xm+2nAIWJ3YmIG
ciV5BczKZj4xLcg1wOZ7esfKQ56+OkmUeXMa5r/ez2euh6WzsvyuKaZ2vvv29tXg+R305e2i
uns7X2TPTw9v7nGwixd0/hdUtk2m5tVVQ2bmaieLN5Bmi1R7fjw+fH54+0lNC8tmc9LLJDrU
9i33EKHsT9vhAG5GR/Y51GI2M/av+m2z20Pd2NxDJJdw46OYByBmE/MN2uub9gCGzY/JIB/P
d68/Xs6PZzjtf8BYOUsygSUZUD3sToVYX5pz0UFcRcJVdlpRXqpJfmwTni1mq4nl9ztAnTUM
GFjcK7m4LQ2VibDr1os7FdkqEvRxPDIWKgPkw5evb8bS6BYGL0HYS42NxKLfo1bMp1P7sGpO
sD7pjc3S+SSQmxxQsOkCyRrKSGxon3uJstyemLicz+yVuj1M6ZBbiFhb+Rs4nDzTNX2XRByZ
txcQmEfYLmZFLlhErMwYKftyxsrJZOZCYCwmE1PPdy1WsE+sCehlFZHONhMzmKmNmdmudAib
kmfx74JNZ1Mz0VRZTZbmdu0K7nMz97fWSiX37X4fYaIX3GgscC7gclaWdwUx1FR5wabAywdA
UdYw70b9JTRwNpGwQVxIplM7TzRCaM+n+mo+n07M61TbHBMxM32aOpC7t2ou5ospxaclxlRL
dsNUw1gvzRyEErB2AJfmpwBYLOdGlxuxnK5nRkDiI89TeyQVZG504hhn8rLlQuyAOsd0RXu2
fYKBh3GemseMzRfUm/vdl6fzm9KbERzjynY9lL+X5u/JZuPsVKWDzdg+D7BhQAHHMU/EjM+X
MzPkjeaBshD67O/Kd9HdtME9b7lezIMIVwHaoasMc4n50nj3kE8NlxrIH9/eHr5/O/9lG2Tg
Hac5mZNgEeqj7f7bw5M3Bwa7J/CSoEvze/EPDFj49BkE5aezXfuh0vbUlG4fHxCrqilrA+0c
RMoK3iojqOhG2pHaaoyzlhZF2aPtWcU8twaq7z/dS33SPYHoJHM33j19+fEN/v7+/PogQ3h6
q1ky4EVb6vSH/aZ4vwhLJv7+/Abn7cPw8jFc5GaXlt4uwljKtD88XrAWgQxTeNOCsyCIA95C
cbAylWKl0bVAi8newMi+GZ1Js3KDemHqTmZ/om4xL+dXlERIeXRbTlaTjIp5vs3Kma3/wN8u
147SA/A9ynosKsXc5CSHcmJs+YSXUyl7D+dVmU5N5Zf67T2GlCkwqIAjt1iuyNBdiJhfenyq
rGLhcy8JtSXmegns3+zJbLIy0J9KBpKL4aCtAf1YdZdCdyIGgfAJ45kS7MVH6il9/uvhEeVw
3BqfH15VkFpigqU4siTDuqRJxCr4t47bo63S3k5pSazEKPqD7naHEXMn1qei2pFXLXHaWIsB
fi/Nycfv1vYZbOeoPKbLeTo5uWGC3xmI/79BaBVbPz9+R4WBvaFMLjZhwLJj0yYsS0+byWpq
qYMUjBzmOgOR1QgxLX9fmjLMrTClNvl7FplLjWql8apQ0/Gnj1nc0ikUMJbcT+OHm/gcQV3a
usGaB4DSLoMuUNtsHFIecV2B9Wn/PBb4vPNRtFuhTVAdYFylSe7AeoNTA9j5wxkiMUIr7pC5
8VARqBLM2TDtG2YXd0i2x9rtbUIyYYU5Te0CADK7tCtC69K6dCrSy9GtqtNECk77vWkafGkL
NAlGydLpdLBAWq8BrX2R7GbKxy8HhMajiRlrThHqAFY27clZiNL+JspcT0XAlJxtVmbYTwk8
eSOET1iBfnRGMnXZeF/px6zAl0R8AgkOOfVLZDpb8zKNvI/wISz0TVlFdg9FnbizFc6h2WNh
ykI14IOXww/wncsGOUlIJSiJuZnjVsMOFbH96xvaTkTjMEJwoHl9SloD9unUPSlg1qv7rw/f
/VxGgMHps/QtsH0T+qIUoW8bJsgaLLKkAyazsuTq1QLCPEfiMrGiSvRoqJnsbG+S9YlNw1Td
KpHVUAeKWKzx4iSzeXWdNQLVWb3oqjysVbOta0d1PeQvZEkUU6EwkBUBoahj65KB0Ly2Ejd2
/lNQKi+ybZLblxzM6LTHd3FMJVkmdN8tokzQhm/A6Fsn29lwg3PXQ9/ikvErPA+tKzSGgYfN
j2lEAjFRZDh4+LrgNaPN7DA2JPyoqyJNzTFSGFYfLjce8CSmk5O9dhAunSpIZYzGOyefhuqz
79ErThtKqqfcwJKUoS1FRMUSUUi07PDLVhax+5uRUlOW12SoXI1WB5fbG3V+eEOjbCpk2NqW
VbTEoyjRlmIEPeaRrih6XwBDOh4QpW1woTBGONRgsRhH2i1RPZ35vZUsOyuny8uRnoiCYz6C
cIVusBcF7uNPBj80QmeQ8HafNkSjMaEppcZVkTq6IKrzlR3yy0Gjx7BZiLohHW4vxI8/XqXt
+cDidQZADN5vqIQHYJslZQJXVxON4E5kQpPnorbPcEDLEM3kuOMHGD8E20OeZvC1shGhkwho
PPooGy1zP9+88zm6saK9tSUFYG9xi6y3iAsIAx1Ruz+lHplPNJ0xSWUc8R5yjlJiTFGw034U
J4cACVqWs7TYk90ZKCNn0A1K7YuGzTnYtamwyEQzVExj/MIwhe3CpGCf5aJyWqSiI48NXC5m
KgdtFdlLbitjHrGaufMtEeHp1u2UHXC+7GOLFBWc8tTZbVJRa63DCdjTVSC3s0nG0iPFNZFG
GrTLiML2mKpdeIKjIrjidZSA8Bjo2AJeuYcEDzeUNYi5wmjNcEjlxfh26AQnp3aLRp1j7bE6
YT47nIpAQzVhBUKYvW1UUIb55VJ6S6QNSFNVSwyFOt/lOgmtB0WhxsLeLtIXASqBNjY1mbLA
JFufcOQ81gjXp3a2zuFaKxLuNq9Hjg4pUoUnM8vKuWy+U7aEY6VhzoUxVcJMANGNpcXQwJMg
Rlom1Y4CnhVy0cPtshxdFnAvLQ9FHmPISViftPyIhAWP0wItoqqITvsCNFJM9Je4Dk5xjYE/
A1hcjzN3MUjMdUZe+Xq0XKM/iQ+Rg4m8FO0uzuqiPYa4XU98EHJdhAsLdbvrH0YipRZ1xaRH
fXg5KfPUOJ+rE9FpQG+QGslfJ+r5zqKTDAPXhdcQi2J0/dukkUhGTq7BP5Jg8D0SkxiTejMg
0reuqFSxG912a7RkvpJgvBjdDFPe0H5BjZmMx0J4AoJYlsfZdNJS66sXIUcOUZNmbhfdo/yG
DjfZA0+cPtRKxTKdQ6tgKORpQeIXA96WyurksJhcjvIDpWYBCvgRmi2pRZluFm05a+w2Km8u
gjFG2XqqNkeIdWSr5UJzF7vM3y9n07i9ST4ZRgKoTdO3VVfEgYtBmZQx/YymDku8813FcbZl
sGSyLMysbdJw4xWdjMkH53hht39AYl02Tps9450js8KF2HeF/hOMYcCZpQ5KojSGOn6PA5rT
jFutVjeR8wtGo5bvEo/KgMxXPWF8F57l5tgiKMr4CgSd0o3H0jV8pOj+HmbmeoEfLTf11hrg
6+5h/BdeV8x0eV0r86gqXEfqQCq9iBlXd5kPnRmK9vyYxcYjg/zpPjMooFQlJdbZMSAKXtR0
CgvtMBnvGkGJY6qI7qIXYxQaoooO71Ri0WCYPNkM4wUIhAdZsVmiOlN3WFGwLOlfISJmjEzP
5LsCXbhVsyoGLwWqSe4ASxaE2d8yc/H1rHF8tJR5sFNwH0dFte/RHUKRHwWM496NPKCJtOdH
qGIZkavrubK3vLl4e7m7l4+kfnZ4J5TdsJUlp6gP5Molihy+dDUoAyKgfaxjqiMyN26Zxiep
NXHtVMiIOg06guwvNzNK1kes7Y2NkD7wp2/h4oUSMrNKKrVnN4aJGUsNf6FatHOQ78BpkjnK
UgTp6PdO0BtrGir4O3dYqkbzokEC10FHBzXMrdzRtj0Mz2kWjf7a1zG1eTGG43XDoshMLzgE
6as53BxZWTe2f0dWuNEjO4sMOzKBMmB/+Ha+UOeM8WocccYPcOoWsLLR5VRYnuFHhg/1dQzL
Cz0ZBWnjg7hCJLA+uBFOOz5hyDmTgXaQdovBY9uiNHC7BIPEATgxo2tkwOHRneo2gIey4Kpf
3ZaYz8sCH+Mqqa1rVA9U5w3dD02xbRLYH3CzTfY5w0E3Wyryok52ZowAF5AoAJxjTrRZphBE
3ddNYStbJKDN41reKORKRX9SSi6pAKvpb1iVJ/Yzn0KE+qywdRUbCqfrXVa3x6kLMMRm+RWv
jdlmTV3sxKI151vBLBCyznZn7VNOs1oVgsyhLWB+UobqG0884Hf3X8/Gut4JubDtFaDWuqhZ
TcdV6ShQVVOA6BuKuaqovDH1KIotSmttmgS2qW60EtVezz8+P1/8CbvU26TST3dn7UwJugq4
dknkMdMuNvY3Cqw9/dELhrx1IyWq3c05lsASw8FkRZ6gl6NbNvCqNKpiSlWvPk6AyVT8IKeg
MdbFVVzl5jpxRK86K72fFM9RiBOr68oFwp6M4pVlBXNo9rC9tjs6XBsmyuVVDMzPWNHY9gOI
rftkj8ozNRwDXv2vW/ODqOxP7cDiBZfMDaMHx2bG3KJi+T529k8smZ2zFHogSj9CJjwmevT7
bidmrXkt7iBaAJ948BtghnEfd2KQZXs84CTL3NHvC4pQNFnGKorp9QV1M+ZX0U/ySAU79BXk
DXLuYCWoPETbOnR3LMou/aNT0Kc0oe/LCp1+otQRCieNaY3XeAVstvaTu25LBkuxzYvAq41J
BLy9GO+XJBPJpzhUz44di6ai286BxTncWELgykCHpwJZXa5IaoOrxM4/7d99hNUrjCu5va3h
KJ1OZouJwTl6whQljW6m6Lu6ooXekHQu1aKn8toFyAMf0I9eHevF7ENt+STq6AONGWtIjzTi
qfpdNtvbEb5fX1/kL9/+s/jFK5arUINjHcSIoOF6YL0Y9724xuzqNEPLHV6Gv48z57elE1UQ
d/ebSCvNkYK0tIdPVRQ1UpBI1TR5ZgfxKLl0UXLJCJ8dEZ5lcLuLcqevXfTcJiqNgMxmHZRN
EUghGEUEmGxhvNGhJOz+xNGwKnQjMYgmr8xo1ep3uzcXJQCAlyKsvaq2duohRd51I8kl041R
AkeNLz2y3UdBUYnH5YFmKDxxWFOCjB5lN9JRC7EsTYuboWVquqzXe6S6iRkmKMaD/EC3Cama
krOUPnYkXp5ZoYZ4Kq0BSmviB7wUyWQezhHCD7RvbD3zImJtYC8wj8H3qE1Jz1Semks9NXjO
w+vzer3c/GNqcB4kgAbEUpJczGmTEYvo8kNEl5QhkkWyNgPUORjrTcrBfaDgy/DnAU9oh4jm
Wg4RGYvFJpmPNIQyjndIlvZEGpjVSMGb9wrezFeBkd8E52QznwUas1lYYbbsxpApAJEkEQWu
xXYd/HYaSkfmUlEeDUjDBE8SuztdrVN3+DpEaE47/NwehA68oMHLUDVUyDIT7y3gDkEngbW6
RoUvsggWgTFZ2vCrIlm3lTs9EkoFfUdkxjgKpSy3hwPBPE5r0xx2gOd13FQFgakKVicsd1sg
cbdVkqYJ9VLWkexZnFIV7qvYznvcIeACnrKcOvV7irxJanuQ+h5jQz1M3VRXiTi4tTX1jnYb
i1JK/d/kCS73oSsaAJeWKmNp8onVMm5Ml89hMOwt2ptr8+prKR1VDJfz/Y8XdJp5/o6udIaW
A888UyVwixFurxuooPV0OSAQiQQEzLxGQsyrSB1MdYW2K5Eq2YwFqpSGGkMODCDa6AB3xbiS
nQ2IN/re2UZw9ZaGhXWVkMrkjtIQvzTE0nt05WlJ2uoyMh+VIwM2UCpbNd6oktVUAOcdSJ6o
nhRwL+RWlClWS9uiuMKb4yFOS9OokkTLOn775V+vfzw8/evH6/nl8fnz+R9fz9++n19+IZok
YJFejbe6LrLilg6139OwsmTQCjI9VkdzyzJLozq0ge3QejMQgbknk0J1cZNjxIXg88E+cDvv
rr3DAjGj4ECJcB97vv/35+f/ffr1593j3a/fnu8+f394+vX17s8zlPPw+deHp7fzF9wpv/7x
/c9f1Oa5Or88nb9dfL17+XyWnnnDJtKR0h+fX35ePDw9YBiIh//c6bgzndzHpfYKFbPtkVXQ
gwRzn9Q1XFkMLRZF9SmurCRuEoimyVeeJsOnAEHVqIYqAymwisBAAx0aV4J4z/uhDaz+jngH
XDdI24dfJ4erQ4dHu4855TKzQbMC/KXo3tb4y8/vb88X988v54vnlwu1PYxpkcTQvb2Vu8cC
z3x4zCIS6JOKK56UB3MzOwj/E7wckUCftDKfZAYYSehnjOkaHmwJCzX+qix9agAOx1FXAipD
fFI4Q9meKFfD7Zz1CtXQL2D2h/0VGc9G4RW/301n66xJPUTepDTQb7r8HzH7TX2As82D24d0
N/dJ5peApvKtZu2n9crDqxjH3bouf/zx7eH+H/8+/7y4l0v8y8vd968/vZVdmVmINCw6eC2K
OSdgktCdiJhXkaBeorvOZTPiK+DFx3i2XE6pC4tHowdAWZ/8ePuKvu33d2/nzxfxk+wuRgL4
34e3rxfs9fX5/kGioru3O6//nGf+SPPM6yw/gJzDZpOySG9ltBRvJcf7RMDyIZZmh4I/BCaA
EDGpJtHDE1+b6ZD7QT0wYJ/HrtNbGaQMT/RXv0tbf5nx3dbvUu1vME7siphvPVha3RAdLXa0
kl6jS2hZuOOnWnhNBFFP5vxw4fkhOA8DSg70GJ4dT9RSZBHI73VDSd/diAgxTMXh7vVraCZA
pPIacMiYPz8nnDSX8qgou9gP59c3v4aKz+3ICxZCmbOMzYmkG9lyiIaJS5Er+kN1OrmaOhu/
TdlVPNsSXypMQJFmkeBeH21gPZ1Eyc4bvT15Thrrxlue3bqACltSI9MdJtHCqy2Llv5plcCu
lQ4L/oRXWTSdrb1iELyaUNSz5YpoMyDobAkdNzmwqVcaAmFziHhOoaAijfSrA/RyOlPo0Uqx
tUTZyykhAR3Y3D8BMwKGthDbYu8h6n013fg7/aakqpPLopULus0TtUE6H2D+8P2rFR+mZ+CC
ZOuiDWU2HSi6OsY2SXGzS4i12iG85wIXrxasz+BZFqdpwoKI9z7UBxbwyY9TzsKkqC2ge4I4
f/9I6Hjtol7R0LHPLPPmATZv4yjuvnHxO/l/f/GyVDAzIIkjLlCSjka9y2ZAoC1j25bNxsgD
7qPFjA2IQRKcPpH5sPqmkAs3AA/NdocODLWNbuc37DZIY3Squ9A9P37HgDr27bqbZPmQTexl
5xHeRa8XI2dk+snvg3wTJmYOn6Q9A6nq7unz8+NF/uPxj/NLF3m2i0rrcJRcJC0vK9KOpOtl
tUUrmbzxFzlitPzhlqxwo6e5JKGkRkR4wN8T1CrE6CBQ3hIV4oUNM+KOPPY5hN2V+EPEzhAF
6fBaHu6yPCwwiaKjL/j28MfL3cvPi5fnH28PT4TolyZbfWwQ8Ir7ewkRncSjPY/JjzUNiVOs
pf+cqkKR+LcXZT51jBVRf9GiqxnuYWMtNUqhWkJxYYT3YlglbWem0zGasa4GrwjDOIzc55Ao
IMgcbojj44g6sJskzwm1BWJFk69h68ajSOJ1miB6Z5eapGWVv1tcPc5NelIhCIZmosdsCQhq
l4uNEr/f55JFqBcOtFFjcU++V2dHCrP/To0JL04c5B9Cwjj2mQDzfWjYlrRDirmkZCgsrTsa
b4wmJUWbDlvTkk+HFgRjGLBW6DsPSymIrJJnkwULDATn7w7ENavb6LDeLP8KpMV0aPn8dKKD
ZLuEq9mH6BYfLK9r5HH34WZ+kBQa+j4lZ8ekydpPyfsjSlqsWwSYTDTIPpJsX8f8A/tSOdYF
V2YfvItC9tmuid3DdjFuPhLJuTKbpxou/flF/O4yYlla7BOOES7e4Y5sRipHENf5LRZcyNsg
SNFEYUzcZlmM75PycRNttobhMJBls001jWi2mmwwDxoI6zIzqYgqT8vJBpg/PjcmHI07lQ/J
UG15xcUarVyPiMXCKIpLbdhMf38p1dv4sfFWmexzTFsdK8cRaaOMLUiGHJ4cQ6X/KRW6rxd/
ovPiw5cnFbTw/uv5/t8PT18GeUsZM5rvyZXliOLjxW+//OJg41NdMXM4vO89CmXZu5hsVtYD
ZZFHrLp1m0M/Z6qSQaDjV+iHQBN3/ggfGBMdNzQkmaZJjmmSpPG6aezKHNedbVJXMcyL6UTY
xcIRICzw8rbdVdIh35xwkySN8wAWE1k2dWKapXWoXZJH8E8FIwFNMPhBUUWmiAlrMovbvMm2
0MYBrIwBWOoXXPKkTdBf2Uc5YIxg12VatF0L0HyUZ+WJH5RNZxXvHAp8mN2hjkN78Flhj/oy
YI/CJS4vatZZuvdMgwPbgjuTBZqubPYLW1tqO0mOBN2pm9Z6TuJz560M1bmdcUiAA0oS4DPx
9nZN1zMQLIjSWXXD3MS/FgXMbggbUGRw64bNjaCnIKZrHbjZayNmrFJRm8skj4rMGIUBhU4G
eNlLLQbwSV1WHCjauUexbzpOW7Z7Ju0GNVWKtFynEKdPCHZ/2++BGiYd1UufNmGmzkUDnWzY
A7Q+wD6jzg9FIYDt+1Vs+e8ezH7oHPrW7j8lxg40EOmnjAUQRQC+IOE4mv7WlyYLTPlKdWsD
kziKIi0yM/CoCUX7pDX9AdZorjtWVXDwS2ZgHuai4CDvJHDTlgQDCvlHUljO7gqE5uitxaYQ
HlmDkzHbaTOXDVMIYMb7+uDgEIERHNB0yOV1iGNRVLV1u1pYrHhgdEWFt1UgbPLeZMvgozdJ
Uadbu4G8OEh9EKzMIjVXm6wPYzoFPCHFPlWzZZR3bfL5tNjav4jNnae27xlPP6HJ1gDAcJNl
Yb7zZ2UCe9/Yxcl2FxlFFkmEid7h6K6seYS57RbZMRKFv/T2cY1OScUuYkS8OfxGujK3Vo57
9HFPycko0X3esk/pUY1yHG53aSMOnameS8QLONHNsBidIyC/umGpY+lWZYxyIiy2v7O9KXbV
KMqYdohG8HJHQrHtpzr5TkK/vzw8vf1bRfF+PL9+8U0TpY/tVaudvIaGKjAa4tN2Icq7Bs7y
fQrCTtpbv1wGKa6bJK5/W/SrQwu+XgmLoRVbdG3RTYnilNHOeNFtzrJkzBXDovCyBhoSZbYt
8IYQVxV8QMn9qgT4DwS8bSGsrIzBwe4V6w/fzv94e3jUMuerJL1X8Bd/auJc2tpkDT78HGJu
3PF2FTRP+kT/tp5uZuaqgauiwAgfpoNSFbNIlgUoY6fFGG0XHUhhrZobV3USpHFpC5slImM1
Nzigi5ENaYs8tfTUqhTF6XZNrj5haYIJTmbUmajs5bRjfmLHHjULU14uGIygpAO4fHio5cTI
B4OH+24HRec/fnz5gjZxydPr28sPzDRlTErG8FILdw8zuK8B7A3z1Oz9NvlrOvTCpFMBbYNL
zHaE7WDaDyjkHtOToZ2WpMww0sNIJbpANHR0WLBkYlf7yIqDhL+pC3jPL7eCYdi8PKnheteq
VdV/LbHE50Z9XJgG7xIhYVL4S1LbXVdiyBXwoTm1x0I5t/mDjn7M3pOTNq/syzWfmqQ1M9x3
MQlpwJJTlYyE8lwmaWQxxU1Osl+JLItEFLl191QFVwVsINbaB3g/R4rm5uR39YYKTN/fAmt0
3zKukfK3k+xSA2Vx1ApWYQMog3Y51Xom4DxOYY/7n3eY4IJWLKQRlu+6AOYZaVQMd2SHlzoD
c8zaci+N0f36j7S5tPvh2A7RtElVN4xYbRoxUg0MAYYNQWPgUD3aB1PAeIGIimJ6qllnZgfe
6kbVpxrfpczfpQMC7a5saVObXSus/+RlYsUNSJV74WHRZQAForwYmAuI2F1QF9vqediWzuo4
qJDsyiIMiS6K5++vv15gRtEf39Uhcbh7+mLKRwyjkqJvvXVBsMB4ZjXxb1MbKUXUpv7NcAgX
xa5G7UdTkqnl+/GsIk0ll6osCZZvZs2dQTWapl4h2wOGCKyZoF0Ubq7hDIeTPCpoAUrqIVVt
JL8dH0zlHQMH8ecfePqaXNPaot3rnQXUr8cmTLrKmvNOle2uchzEqzguHQ2hUvqhNehwSPzt
9fvDE1qIQm8ef7yd/zrDH+e3+3/+859/N/SB+Mgly95LUd13fC6r4tjH9iEmWj2TQWc8Bo5a
sDo+mXpGvYSh/fKZzoEP5C5Pv1G4VsA5H3Cb0ZXeCMurXUHVq5+9n6X/SExwEo0IVsHqAsVw
kcahr3Ekpa2CvvhQR4VsEix3DF/U9rejbiX3/R3T0gm+s0qg9WkiUnXdsKSmbhfdhey/WD2W
rFujB/wwrFKKRneRJhdxHMHaV8o2gmOrc9Rbx2ob/lsJPZ/v3u4uUNq5RzW3FfJMj7cXvMcW
T97Bk7ETFEoGg0qUenm4zOH5n7dSNAGpAXPwefKRxU4C/XDbwSsYq7wGYdqPnVTxhmI3ocWD
AaMxQ4O/KAyCsY+rePeBAuxpR1B8LVzuJ9sife7cSAlDYi2rd/YUAEdXh3kl70nG5mUgmfLb
ujBkOWnNMyxJg5eZR3x/iZNEVQgLrS0PNE13D985Q0Ag25ukPqB6SHyALEoqPONQW+GSa7JM
hryD8vA1xCHBaFC47SQlCNV57RWCZliujorr0lTRA1JVyG0ujcDAKaFaSN8B4ABJIrgGHHgy
nW8WUtUXkPwEwwwW1o5ToJY1J0z6FFKfaCqCpzgE0JptbMYv1PAqrgOow027rUBel4Pqf7hL
doUHrTAqCwxfEhOfqF87oqYkqtgN0fkyiXYBd0hFIGKOd8twv7uEV+6HzSGhLN809rjDPKjy
hb6urei+PkFUjs6MTdnuSN2JR7ot+MEfJx1Ytt0meRRXO2q5HAMxMjR6VE40adrrJm7euQup
+LJaCyE14JJt/7VekWxb7iyQ4Xcp3BF8LuXgc4zx6tLErEpvO52iivKsMWgHq7V6UvHYlPRX
gbKi7T7wgYy8fopMtyItkqZbqVh2GAsGF3X58OAEWihNaDs5kUmQDXwckR82YU1qTxNwfNXa
T6mVZRXLbLP+kgjG6JzVko+O4OWUjQluOElad+Uq/zqe2aCbLEqawQeRJr+Ry78tKsuUt4cr
9arkWgFz2Z5033hRtPTpbC9iU0lfn1/fUFLEGxJ//p/zy90XI4XxFXZgWBLyp6FRscBaxTNo
xCQ0PmnOFRpFRSbPvICM3UlwqAovKh2c2wpFWmY0kfG+s5MnZLg8s+V5XCPbIumI9vXSRrB9
SgtLIpJUKdC6a81wFNvfSLkIX3eoBmApO7xpjBXQqVnHuOAVL46evkOwHMCao5VWI5GekixB
CMFHL5xRlDS0BXv/WXoVBWI2q8s9WtAIYDZhkizJ8c2ANo2TFMHvteygNMO34YW5HURQ4AJh
umqLzkwjePNhOUglt/eRpe14YSB4gwQdkujVnXa1MN9M7VE5xCc3GKkzbOqxTzknk9E+NJXg
tiuAMg8DRF1QqluJlmeQYWwjgdukzlhp5RxAcNMEQkdIrHqSD+Mxhu0OzvEwRYW2JJ6K1Bmt
kDuDxCYRHSRcreGrkQUOXS5KWrKReK21DA2jvCIhL/AGbVvSZqUKidZpB3wHBe5GkkmzLWhc
u4Vr2SFjFa2hk6Xtkiq7YYHXS7VIZGRbem8lNXDWNFLcP7BtVOQU43QJmd5BLQGqjt9Ik7zh
lDJMjgw7Ou8E41mEBO+1ADojwlg1Y2E5R+8nGfUlGCdH8bw44ww20ciqly/34U0rzQOT2ls2
ULJ7k3MmG5kSnlNBjrAz7/VQnv3gc7gFbnLszgNTbzAmhhiaNFRHZYkQyJuigjdwogSiSyvN
1TZRpzEtDjkGCv8XMtRPn6RdAgA=

--CE+1k2dSO48ffgeK--


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 16:46:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 16:46:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56058.97850 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpwQU-0003kk-Go; Thu, 17 Dec 2020 16:46:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56058.97850; Thu, 17 Dec 2020 16:46:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpwQU-0003kd-DY; Thu, 17 Dec 2020 16:46:50 +0000
Received: by outflank-mailman (input) for mailman id 56058;
 Thu, 17 Dec 2020 16:46:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WIZe=FV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kpwQT-0003kJ-Fo
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 16:46:49 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 214f0f02-32e9-43ad-b9e2-1c96d4e70d9a;
 Thu, 17 Dec 2020 16:46:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 214f0f02-32e9-43ad-b9e2-1c96d4e70d9a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608223607;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=HON6Rso/LFkmgFDI9GbBKv1wRsoRIqh3NIzDsd9cMzk=;
  b=ZHwIK9qbvb/bg58KN4QmKsmaIv1equcVHNlzhQcOgvRugnPjMuWFwGnx
   RoXxgS2Xnm6y+KwWmBqpUJ2p5VKuHzx8f32aMM04YBfhQv7Dml2cc2fFA
   8Mh8BXpij/1w92YhMBf78MdLAxwRoJ4aE4v3KYvrBH7vOe66o34EWm1Hy
   E=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: vr7Z/XLe1qXhQv2uYamwn4Rw6yA8Lq94tfk/lmidme9JOy3vTSwX2hJxEuVF8lCN7VBwDPJumq
 oq9PgnHG7Ry2gsl50/pp7bLgeslboS96WhLCEpffA+0QlnAcsBHYsb6ZoFKW6STyIOJe+mUqaq
 SIJxDVI9b8iR4qnSLWXkYZWpBjI5nMmm+d6XxZKeMEO9VUFqaZa3Lf1S7MvjU+1HCBzNZFcLn5
 WJzjJ/ZCmber8RXSwsH5vzHTuVVYeEDup7jjTnjdnoJosWAPAFSy+TIYpPXlU0m3YLJPM7pJB1
 qmY=
X-SBRS: 5.2
X-MesageID: 33703718
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,428,1599537600"; 
   d="scan'208";a="33703718"
Subject: Re: XSA-351 causing Solaris-11 systems to panic during boot.
To: <boris.ostrovsky@oracle.com>, Jan Beulich <jbeulich@suse.com>
CC: <xen-devel@lists.xenproject.org>, Cheyenne Wills
	<cheyenne.wills@gmail.com>
References: <CAHpsFVc4AAm6L0rKUuV47ydOjtw7XAgFnDZxRjdCL0OHXJERDw@mail.gmail.com>
 <7bca24cb-a3af-b54d-b224-3c2a316859dd@suse.com>
 <4fc3532b-f53f-2a15-ce64-f857816b0566@oracle.com>
 <f4ff3d16-40f6-e8a1-fcdd-ca52e1f52ca6@suse.com>
 <c90622c4-f9e0-8b6d-ab46-bba0cbfc0fd9@oracle.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <0430337a-6fcd-9471-4455-838390401220@citrix.com>
Date: Thu, 17 Dec 2020 16:46:42 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <c90622c4-f9e0-8b6d-ab46-bba0cbfc0fd9@oracle.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 17/12/2020 16:25, boris.ostrovsky@oracle.com wrote:
> On 12/17/20 2:40 AM, Jan Beulich wrote:
>> On 17.12.2020 02:51, boris.ostrovsky@oracle.com wrote:
>> I think this is acceptable as a workaround, albeit we may want to
>> consider further restricting this (at least on staging), like e.g.
>> requiring a guest config setting to enable the workaround. 
>
> Maybe, but then someone migrating from a stable release to 4.15 will have to modify the guest configuration.
>
>
>> But
>> maybe this will need to be part of the MSR policy for the domain
>> instead, down the road. We'll definitely want Andrew's view here.
>>
>> Speaking of staging - before applying anything to the stable
>> branches, I think we want to have this addressed on the main
>> branch. I can't see how Solaris would work there.
>
> Indeed it won't. I'll need to do that as well (I misinterpreted the statement in the XSA about only 4.14 and earlier being vulnerable).

It's hopefully obvious now why we suddenly finished the "let's turn all
unknown MSRs to #GP" work at the point that we did (after dithering on
the point for several years).

To put it bluntly, default MSR readability was not a clever decision at all.

There is a large risk that a similar vulnerability exists elsewhere,
given how poorly documented the MSRs are (and one contemporary CPU I've
got the manual open for has more than 6000 *documented* MSRs).  We did
debate for a while whether the readability of the PPIN MSRs was a
vulnerability or not, before eventually deciding it was not.

Irrespective of what we do to fix this in Xen, has anyone fixed Solaris yet?

~Andrew
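[Editorial note: the per-guest opt-in Jan suggests earlier in the thread would take the form of an xl.cfg fragment along these lines. The option name below is illustrative only; the thread does not say what, if anything, was committed.]

```
# Hypothetical xl.cfg knob: let this guest read unhandled MSRs as
# zero instead of taking #GP, for OSes (such as the affected
# Solaris 11 release) that probe MSRs without expecting faults.
msr_relaxed = 1
```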


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 17:32:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 17:32:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56067.97866 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpx88-0000Fb-Vw; Thu, 17 Dec 2020 17:31:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56067.97866; Thu, 17 Dec 2020 17:31:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpx88-0000FU-SC; Thu, 17 Dec 2020 17:31:56 +0000
Received: by outflank-mailman (input) for mailman id 56067;
 Thu, 17 Dec 2020 17:31:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QywH=FV=microsoft.com=mikelley@srs-us1.protection.inumbo.net>)
 id 1kpx87-0000FB-Nh
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 17:31:56 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com (unknown
 [40.107.223.99]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4d500880-44ec-4ae7-8497-ef1fae98600f;
 Thu, 17 Dec 2020 17:31:54 +0000 (UTC)
Received: from (2603:10b6:302:a::16) by
 MWHPR21MB0142.namprd21.prod.outlook.com (2603:10b6:300:78::12) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3700.6; Thu, 17 Dec 2020 17:31:51 +0000
Received: from MW2PR2101MB1052.namprd21.prod.outlook.com
 ([fe80::b8f6:e748:cdf2:1922]) by MW2PR2101MB1052.namprd21.prod.outlook.com
 ([fe80::b8f6:e748:cdf2:1922%8]) with mapi id 15.20.3700.013; Thu, 17 Dec 2020
 17:31:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d500880-44ec-4ae7-8497-ef1fae98600f
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GcfSep0dkGQn3ohRtbDF4xJZawKRvdyJUcwji40pveLhisgfWxuMDyDp8RGjYtPc2MAhRwu6LfVK68z+4lqZvwZLyntLPYMayWpRW2mUG/qLXk2ydYsLgs6gPgxkj3+jt3IPyyV84y6eP//IiGpEERCF6mDk7KXELjqzju9k3qxxXFI9qpi6Z5d/2il5QrnzAx7leivcPrDaGFI3jpx6OrL9Ms6dCUMUDNEX56hsWgeF/QYTVLuNNstKC11VnXw3AESdYCaE21tJW9uulXUWvjOI2yfxpJUTz6syheXQyQZJaq63J1G0DhHV05kWsQwLTTgKITvYLNUbMFVRK3KbVw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=BPUvqVY/oR9yRE41vY4FRFh2McGy5ZpgCl1eY+7+Zio=;
 b=S1IkMLweI9kX7Jtwkq0gqcR0PEKvp9dQh18JB8CWM9b1KOkgGcohvkZYDpp2hGiG8lyxf4Eq0GFhZwtXo6kyzik73tpl1nqUJCZDfGtiigT2z2vcWbYFhP1kcKC0a7T7wtcQgl5J7E9c+eyEMPd0exyheyjbKFpbGiIN9HMISW7vo70fPEhwJTPZNIJkvP5KRp4lF0yrEyMeQhhpr0P0HGu6C/llkaY221IeTpLdKQ3jLeiyOVhVUlTUruynyE7IIiUig6TXTQJr4SBuWph/2p9UsZS119UwfB+FpDSu2xLvozPHpxJGk/GtZuo+TFcna3jj5JNlt2E99w4SU7GyDw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=microsoft.com; dmarc=pass action=none
 header.from=microsoft.com; dkim=pass header.d=microsoft.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=BPUvqVY/oR9yRE41vY4FRFh2McGy5ZpgCl1eY+7+Zio=;
 b=FyRNgfWs/Pli5rsBwJHYWCbwTdc9OXDgR2AXqT5ifIuPDPfTVjvGp/o5K700//fBAlEdny1twUQRu/4AU2r4XHINq81MTN02L1FVtmjfEDKd4tadbDMOMt+kMqpQ0lSxlQv9cqfjOijeKpHKYhUgitWW2rym7zscwTZLxNGudew=
From: Michael Kelley <mikelley@microsoft.com>
To: Juergen Gross <jgross@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "x86@kernel.org" <x86@kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-hyperv@vger.kernel.org" <linux-hyperv@vger.kernel.org>,
	"virtualization@lists.linux-foundation.org"
	<virtualization@lists.linux-foundation.org>, "kvm@vger.kernel.org"
	<kvm@vger.kernel.org>
CC: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>, KY
 Srinivasan <kys@microsoft.com>, Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>, Wei Liu <wei.liu@kernel.org>,
	Deep Shah <sdeep@vmware.com>, "VMware, Inc." <pv-drivers@vmware.com>, Paolo
 Bonzini <pbonzini@redhat.com>, Sean Christopherson <seanjc@google.com>,
	vkuznets <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>, Jim
 Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Daniel Lezcano <daniel.lezcano@linaro.org>, Peter
 Zijlstra <peterz@infradead.org>, Juri Lelli <juri.lelli@redhat.com>, Vincent
 Guittot <vincent.guittot@linaro.org>, Dietmar Eggemann
	<dietmar.eggemann@arm.com>, Steven Rostedt <rostedt@goodmis.org>, Ben Segall
	<bsegall@google.com>, Mel Gorman <mgorman@suse.de>, Daniel Bristot de
 Oliveira <bristot@redhat.com>
Subject: RE: [PATCH v3 06/15] x86/paravirt: switch time pvops functions to use
 static_call()
Thread-Topic: [PATCH v3 06/15] x86/paravirt: switch time pvops functions to
 use static_call()
Thread-Index: AQHW1FgK985cvXh+pkSmuKOGNWS3nqn7ilNA
Date: Thu, 17 Dec 2020 17:31:50 +0000
Message-ID:
 <MW2PR2101MB1052877B5376112F1BAF3D93D7C49@MW2PR2101MB1052.namprd21.prod.outlook.com>
References: <20201217093133.1507-1-jgross@suse.com>
 <20201217093133.1507-7-jgross@suse.com>
In-Reply-To: <20201217093133.1507-7-jgross@suse.com>

From: Juergen Gross <jgross@suse.com> Sent: Thursday, December 17, 2020 1:31 AM

> The time pvops functions are the only ones left which might be
> used in 32-bit mode and which return a 64-bit value.
>
> Switch them to use the static_call() mechanism instead of pvops, as
> this allows quite some simplification of the pvops implementation.
>
> Due to include hell this requires to split out the time interfaces
> into a new header file.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  arch/x86/Kconfig                      |  1 +
>  arch/x86/include/asm/mshyperv.h       | 11 --------
>  arch/x86/include/asm/paravirt.h       | 14 ----------
>  arch/x86/include/asm/paravirt_time.h  | 38 +++++++++++++++++++++++++++
>  arch/x86/include/asm/paravirt_types.h |  6 -----
>  arch/x86/kernel/cpu/vmware.c          |  5 ++--
>  arch/x86/kernel/kvm.c                 |  3 ++-
>  arch/x86/kernel/kvmclock.c            |  3 ++-
>  arch/x86/kernel/paravirt.c            | 16 ++++++++---
>  arch/x86/kernel/tsc.c                 |  3 ++-
>  arch/x86/xen/time.c                   | 12 ++++-----
>  drivers/clocksource/hyperv_timer.c    |  5 ++--
>  drivers/xen/time.c                    |  3 ++-
>  kernel/sched/sched.h                  |  1 +
>  14 files changed, 71 insertions(+), 50 deletions(-)
>  create mode 100644 arch/x86/include/asm/paravirt_time.h
>

[snip]

> diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
> index ffc289992d1b..45942d420626 100644
> --- a/arch/x86/include/asm/mshyperv.h
> +++ b/arch/x86/include/asm/mshyperv.h
> @@ -56,17 +56,6 @@ typedef int (*hyperv_fill_flush_list_func)(
>  #define hv_get_raw_timer() rdtsc_ordered()
>  #define hv_get_vector() HYPERVISOR_CALLBACK_VECTOR
>
> -/*
> - * Reference to pv_ops must be inline so objtool
> - * detection of noinstr violations can work correctly.
> - */
> -static __always_inline void hv_setup_sched_clock(void *sched_clock)
> -{
> -#ifdef CONFIG_PARAVIRT
> -	pv_ops.time.sched_clock = sched_clock;
> -#endif
> -}
> -
>  void hyperv_vector_handler(struct pt_regs *regs);
>
>  static inline void hv_enable_stimer0_percpu_irq(int irq) {}

[snip]

> diff --git a/drivers/clocksource/hyperv_timer.c b/drivers/clocksource/hyperv_timer.c
> index ba04cb381cd3..1ed79993fc50 100644
> --- a/drivers/clocksource/hyperv_timer.c
> +++ b/drivers/clocksource/hyperv_timer.c
> @@ -21,6 +21,7 @@
>  #include <clocksource/hyperv_timer.h>
>  #include <asm/hyperv-tlfs.h>
>  #include <asm/mshyperv.h>
> +#include <asm/paravirt_time.h>
>
>  static struct clock_event_device __percpu *hv_clock_event;
>  static u64 hv_sched_clock_offset __ro_after_init;
> @@ -445,7 +446,7 @@ static bool __init hv_init_tsc_clocksource(void)
>  	clocksource_register_hz(&hyperv_cs_tsc, NSEC_PER_SEC/100);
>
>  	hv_sched_clock_offset = hv_read_reference_counter();
> -	hv_setup_sched_clock(read_hv_sched_clock_tsc);
> +	paravirt_set_sched_clock(read_hv_sched_clock_tsc);
>
>  	return true;
>  }
> @@ -470,6 +471,6 @@ void __init hv_init_clocksource(void)
>  	clocksource_register_hz(&hyperv_cs_msr, NSEC_PER_SEC/100);
>
>  	hv_sched_clock_offset = hv_read_reference_counter();
> -	hv_setup_sched_clock(read_hv_sched_clock_msr);
> +	static_call_update(pv_sched_clock, read_hv_sched_clock_msr);
>  }
>  EXPORT_SYMBOL_GPL(hv_init_clocksource);

These Hyper-V changes are problematic, as we want to keep hyperv_timer.c
architecture independent. While only the x86/x64 code is currently
accepted upstream, ARM64 support is in progress. So we need to keep
using hv_setup_sched_clock() in hyperv_timer.c and have the per-arch
implementation in mshyperv.h.

Michael


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 17:34:47 2020
References: <87h7oudcbx.fsf@vps.thesusis.net> <CAHD1Q_zcruQ6KVHApvhb=0+mG0m80T+tmg1UzjQBki8j+aR51A@mail.gmail.com>
 <87czzcdtir.fsf@vps.thesusis.net>
In-Reply-To: <87czzcdtir.fsf@vps.thesusis.net>
From: "Guilherme G. Piccoli" <guilherme.piccoli@canonical.com>
Date: Thu, 17 Dec 2020 14:34:07 -0300
Message-ID: <CAHD1Q_z+WW36rfr1RAFYKjU5bocA90OonBmSKECRnpacvWyPmQ@mail.gmail.com>
Subject: Re: kexec not working in xen domU?
To: Phillip Susi <phill@thesusis.net>
Cc: kexec mailing list <kexec@lists.infradead.org>, xen-devel@lists.xenproject.org

On Mon, Dec 14, 2020 at 5:25 PM Phillip Susi <phill@thesusis.net> wrote:
> The regular xen cosole should work for this shouldn't it?  So
> earlyprintk=hvc0 I guess?  I also threw in console=hvc0 and loglevel=7:
>
> [  184.734810] systemd-shutdown[1]: Syncing filesystems and block
> devices.
> [  185.772511] systemd-shutdown[1]: Sending SIGTERM to remaining
> processes...
> [  185.896957] systemd-shutdown[1]: Sending SIGKILL to remaining
> processes...
> [  185.901111] systemd-shutdown[1]: Unmounting file systems.
> [  185.902180] [1035]: Remounting '/' read-only in with options
> 'errors=remount-ro'.
> [  185.990634] EXT4-fs (xvda1): re-mounted. Opts: errors=remount-ro
> [  186.002373] systemd-shutdown[1]: All filesystems unmounted.
> [  186.002411] systemd-shutdown[1]: Deactivating swaps.
> [  186.002502] systemd-shutdown[1]: All swaps deactivated.
> [  186.002529] systemd-shutdown[1]: Detaching loop devices.
> [  186.002699] systemd-shutdown[1]: All loop devices detached.
> [  186.002727] systemd-shutdown[1]: Stopping MD devices.
> [  186.002814] systemd-shutdown[1]: All MD devices stopped.
> [  186.002840] systemd-shutdown[1]: Detaching DM devices.
> [  186.002974] systemd-shutdown[1]: All DM devices detached.
> [  186.003017] systemd-shutdown[1]: All filesystems, swaps, loop
> devices, MD devices and DM devices detached.
> [  186.168475] systemd-shutdown[1]: Syncing filesystems and block
> devices.
> [  186.169150] systemd-shutdown[1]: Rebooting with kexec.
> [  186.418653] xenbus_probe_frontend: xenbus_frontend_dev_shutdown:
> device/vbd/5632: Initialising != Connected, skipping
> [  186.427377] kexec_core: Starting new kernel
>

Hm, not many prints - either earlyprintk didn't work, or it's a really
early boot issue. It might also be worth investigating whether it's a
purgatory issue - did you try the "new" kexec syscall, by running
"kexec -s -l" instead of just "kexec -l"?
Also, it's worth trying this with an upstream kernel and kexec-tools - I
assume you're doing that already?
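Concretely, the two load paths differ as sketched below (illustrative: the kernel/initrd paths are placeholders, and the commands are echoed rather than executed, since a real kexec load requires root):

```shell
# Illustrative sketch of the two kexec load paths. Paths are placeholders;
# commands are echoed instead of executed because a real load needs root.
KERNEL=/boot/vmlinuz-example
INITRD=/boot/initrd.img-example
run() { echo "+ $*"; }	# replace the body with "$@" to actually run

# kexec_load(2): kexec-tools assembles the purgatory blob in userspace
run kexec -l "$KERNEL" --initrd="$INITRD" --reuse-cmdline

# kexec_file_load(2), selected by -s: the kernel loads the image itself,
# bypassing the userspace-built purgatory
run kexec -s -l "$KERNEL" --initrd="$INITRD" --reuse-cmdline
```

If the `-s` path works where the plain `-l` path hangs, that points at the userspace purgatory rather than the kernel itself.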

Cheers,


Guilherme


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 17:49:51 2020
Subject: Re: XSA-351 causing Solaris-11 systems to panic during boot.
To: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, Cheyenne Wills <cheyenne.wills@gmail.com>
References: <CAHpsFVc4AAm6L0rKUuV47ydOjtw7XAgFnDZxRjdCL0OHXJERDw@mail.gmail.com>
 <7bca24cb-a3af-b54d-b224-3c2a316859dd@suse.com>
 <4fc3532b-f53f-2a15-ce64-f857816b0566@oracle.com>
 <f4ff3d16-40f6-e8a1-fcdd-ca52e1f52ca6@suse.com>
 <c90622c4-f9e0-8b6d-ab46-bba0cbfc0fd9@oracle.com>
 <0430337a-6fcd-9471-4455-838390401220@citrix.com>
From: boris.ostrovsky@oracle.com
Organization: Oracle Corporation
Message-ID: <c6e05b63-b066-9bd0-9da1-1fc089cd1aea@oracle.com>
Date: Thu, 17 Dec 2020 12:49:26 -0500
In-Reply-To: <0430337a-6fcd-9471-4455-838390401220@citrix.com>


On 12/17/20 11:46 AM, Andrew Cooper wrote:
> On 17/12/2020 16:25, boris.ostrovsky@oracle.com wrote:
>> On 12/17/20 2:40 AM, Jan Beulich wrote:
>>> On 17.12.2020 02:51, boris.ostrovsky@oracle.com wrote:
>>> I think this is acceptable as a workaround, albeit we may want to
>>> consider further restricting this (at least on staging), like e.g.
>>> requiring a guest config setting to enable the workaround. 
>> Maybe, but then someone migrating from a stable release to 4.15 will have to modify guest configuration.
>>
>>
>>> But
>>> maybe this will need to be part of the MSR policy for the domain
>>> instead, down the road. We'll definitely want Andrew's view here.
>>>
>>> Speaking of staging - before applying anything to the stable
>>> branches, I think we want to have this addressed on the main
>>> branch. I can't see how Solaris would work there.
>> Indeed it won't. I'll need to do that as well. (I misinterpreted the statement in the XSA about only 4.14 and earlier being vulnerable.)
> It's hopefully obvious now why we suddenly finished the "let's turn all
> unknown MSRs to #GP" work at the point that we did (after dithering on
> the point for several years).
>
> To put it bluntly, default MSR readability was not a clever decision at all.
>
> There is a large risk that there is a similar vulnerability elsewhere,
> given how poorly documented the MSRs are (and one contemporary CPU I've
> got the manual open for has more than 6000 *documented* MSRs).  We did
> debate for a while whether the readability of the PPIN MSRs was a
> vulnerability or not, before eventually deciding not.

> Irrespective of what we do to fix this in Xen, has anyone fixed Solaris yet?


I am not aware of anyone working on this (not that I would be).


-boris



From xen-devel-bounces@lists.xenproject.org Thu Dec 17 17:59:36 2020
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Jürgen Groß <jgross@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Rahul Singh <Rahul.Singh@arm.com>, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
Subject: Re: [PATCH] xen: Rework WARN_ON() to return whether a warning was
 triggered
Date: Thu, 17 Dec 2020 17:58:47 +0000
Message-ID: <FE04E596-2E53-4950-A9DC-8C5EEEF9124E@arm.com>
References: <20201215112610.1986-1-julien@xen.org>
 <c45407e5-3173-4f0d-453b-1a01969b667c@suse.com>
 <cbae7c17-829e-f48f-3a6a-7fee489711c2@xen.org>
In-Reply-To: <cbae7c17-829e-f48f-3a6a-7fee489711c2@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: f0eaf26b-4764-414b-b6e8-08d8a2b57f17
x-ms-traffictypediagnostic: DB6PR08MB2870:|VI1PR08MB4255:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB42553580FBF17735BCC3C86B9DC40@VI1PR08MB4255.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:117;OLM:117;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
Content-Type: text/plain; charset="utf-8"
Content-ID: <0D48321E61A4AC4D946C8F9386F0A410@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR08MB2870
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT048.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	c38b25bf-ebc2-43e9-7e9b-08d8a2b566fb
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Dec 2020 17:59:27.7406
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f0eaf26b-4764-414b-b6e8-08d8a2b57f17
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT048.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4255

Hi Julien,

> On 15 Dec 2020, at 13:11, Julien Grall <julien@xen.org> wrote:
> 
> Hi Juergen,
> 
> On 15/12/2020 11:31, Jürgen Groß wrote:
>> On 15.12.20 12:26, Julien Grall wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>> 
>>> So far, our implementation of WARN_ON() cannot be used in the following
>>> situation:
>>> 
>>> if ( WARN_ON() )
>>>      ...
>>> 
>>> This is because the WARN_ON() doesn't return whether a warning. Such
>> ... warning has been triggered.
> 
> I will add it.
> 
>>> construction can be handy to have if you have to print more information
>>> and now the stack track.
>> Sorry, I'm not able to parse that sentence.
> 
> Urgh :/. How about the following commit message:
> 
> "So far, our implementation of WARN_ON() cannot be used in the following situation:
> 
> if ( WARN_ON() )
>  ...
> 
> This is because WARN_ON() doesn't return whether a warning has been triggered. Such construction can be handy if you want to print more information and also dump the stack trace.
> 
> Therefore, rework the WARN_ON() implementation to return whether a warning was triggered. The idea was borrowed from Linux".

With that.

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

And thanks a lot for this :-)

Cheers
Bertrand

> 
> Cheers,
> 
> -- 
> Julien Grall
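[Editorial aside: the WARN_ON() rework agreed in the reply above (return whether the warning fired, as Linux does) might be sketched roughly as below. This is a userspace approximation using a GCC statement expression; the real Xen/Linux macros also dump a stack trace and print more context, which is omitted here.]

```c
#include <stdio.h>

/* Sketch: WARN_ON() evaluates to whether the warning fired, so the
 * "if ( WARN_ON(cond) ) ..." pattern from the thread works.  The real
 * implementations additionally dump a stack trace. */
#define WARN_ON(cond) ({                                               \
    int fired_ = !!(cond);                                             \
    if ( fired_ )                                                      \
        fprintf(stderr, "WARNING at %s:%d\n", __FILE__, __LINE__);     \
    fired_;                                                            \
})
```

Because the macro yields its truth value, a caller can both warn and take a recovery path in one construct.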


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 18:26:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 18:26:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56092.97921 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpxyU-0005dF-2f; Thu, 17 Dec 2020 18:26:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56092.97921; Thu, 17 Dec 2020 18:26:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpxyT-0005d8-VV; Thu, 17 Dec 2020 18:26:01 +0000
Received: by outflank-mailman (input) for mailman id 56092;
 Thu, 17 Dec 2020 18:26:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WIZe=FV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kpxyS-0005d3-KW
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 18:26:00 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d556b491-7573-468f-a6f3-1952de10c748;
 Thu, 17 Dec 2020 18:25:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d556b491-7573-468f-a6f3-1952de10c748
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608229558;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=0JnxpV5kSnzrC5FbbL8YNG1KAvQnfLMRnXpPN2OIrj0=;
  b=Eed2niLyV8JRD1i+aNCJtjHyDEVz4DJJG/+xnXGEXQxhV4Ur7qsqdiyE
   7t5L0gtTvK8noaigQSpqvb66YhhyNJaTXg6uLid7ywtpL3toll6hyT7lX
   PoYKpxYcLSzF5doqI74QNPeipqv7hJLqa3Uo5qc4C1f6Uvb3cDB4JKPi2
   E=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.2
X-MesageID: 33507226
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,428,1599537600"; 
   d="scan'208";a="33507226"
Subject: Re: [PATCH v2] xen/xenbus: make xs_talkv() interruptible
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
	<xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>
CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>, Stefano Stabellini
	<sstabellini@kernel.org>
References: <20201215111055.3810-1-jgross@suse.com>
 <2deac9ce-0c27-a472-7d51-b91a640d92ed@citrix.com>
 <8d26b752-b7ba-159f-5bed-bb015a06d819@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <2414c191-ff55-e446-b555-c9d0ccca6b93@citrix.com>
Date: Thu, 17 Dec 2020 18:25:52 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <8d26b752-b7ba-159f-5bed-bb015a06d819@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 16/12/2020 08:21, Jürgen Groß wrote:
> On 15.12.20 21:59, Andrew Cooper wrote:
>> On 15/12/2020 11:10, Juergen Gross wrote:
>>> In case a process waits for any Xenstore action in the xenbus driver
>>> it should be interruptible by signals.
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>> V2:
>>> - don't special case SIGKILL as libxenstore is handling -EINTR fine
>>> ---
>>>   drivers/xen/xenbus/xenbus_xs.c | 9 ++++++++-
>>>   1 file changed, 8 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/xen/xenbus/xenbus_xs.c
>>> b/drivers/xen/xenbus/xenbus_xs.c
>>> index 3a06eb699f33..17c8f8a155fd 100644
>>> --- a/drivers/xen/xenbus/xenbus_xs.c
>>> +++ b/drivers/xen/xenbus/xenbus_xs.c
>>> @@ -205,8 +205,15 @@ static bool test_reply(struct xb_req_data *req)
>>>     static void *read_reply(struct xb_req_data *req)
>>>   {
>>> +    int ret;
>>> +
>>>       do {
>>> -        wait_event(req->wq, test_reply(req));
>>> +        ret = wait_event_interruptible(req->wq, test_reply(req));
>>> +
>>> +        if (ret == -ERESTARTSYS && signal_pending(current)) {
>>> +            req->msg.type = XS_ERROR;
>>> +            return ERR_PTR(-EINTR);
>>> +        }
>>
>> So now I can talk fully about the situations which lead to this, I think
>> there is a bit more complexity.
>>
>> It turns out there are a number of issues related to running a Xen
>> system with no xenstored.
>>
>> 1) If a xenstore-write occurs during startup before init-xenstore-domain
>> runs, the former blocks on /dev/xen/xenbus waiting for xenstored to
>> reply, while the latter blocks on /dev/xen/xenbus_backend when trying to
>> tell the dom0 kernel that xenstored is in dom1.  This effectively
>> deadlocks the system.
>
> This should be easy to solve: any request to /dev/xen/xenbus should
> block upfront in case xenstored isn't up yet (could e.g. wait
> interruptible until xenstored_ready is non-zero).

I'm not sure that that would fix the problem.  The problem is that
setting the ring details via /dev/xen/xenbus_backend blocks, which
prevents us launching the xenstored stubdomain, which prevents the
earlier xenbus write being completed.

So long as /dev/xen/xenbus_backend doesn't block, there's no problem
with other /dev/xen/xenbus activity being pending briefly.


Looking at the current logic, I'm not completely convinced.  Even
finding a filled-in evtchn/gfn doesn't mean that xenstored is actually
ready.

There are 3 possible cases.

1) PV guest, and details in start_info
2) HVM guest, and details in HVM_PARAMs
3) No details (expected for dom0).  Something in userspace must provide
details at a later point.

So the setup phases go from nothing, to having ring details, to finding
the ring working.

I think it would be prudent to try reading a key between having details
and declaring xenstored_ready.  Any activity, even XS_ERROR,
indicates that the other end of the ring is listening.
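[Editorial aside: that readiness probe could be modelled, very loosely, as below. This is an illustrative userspace sketch, not the real xenbus code; the names `declare_xenstored_ready` and `PROBE_*` are invented for the example, though XS_ERROR itself is a real xs_wire message type.]

```c
#include <stdbool.h>

/* Illustrative outcomes of a probe read of some xenstore key. */
enum probe_result { PROBE_NO_REPLY, PROBE_XS_ERROR, PROBE_OK };

/* Having ring details (evtchn/gfn) is necessary but not sufficient;
 * only an actual reply proves xenstored is listening - and even an
 * XS_ERROR reply counts, since the far end had to send it. */
static bool declare_xenstored_ready(bool have_ring_details,
                                    enum probe_result reply)
{
    return have_ring_details && reply != PROBE_NO_REPLY;
}
```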

>
>> 2) If xenstore-watch is running when xenstored dies, it spins at 100%
>> cpu usage making no system calls at all.  This is caused by bad error
>> handling from xs_watch(), and attempting to debug it found:
>
> Can you expand on "bad error handling from xs_watch()", please?

do_watch() has

    for ( ... ) { // defaults to an infinite loop
        vec = xs_read_watch();
        if (vec == NULL)
            continue;
        ...
    }


My next plan was to experiment with break instead of continue, which
I'll get to at some point.
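[Editorial aside: the break-instead-of-continue experiment amounts to something like the following userspace model. `fake_read_watch()` is a stand-in for libxenstore's xs_read_watch(), invented for the example.]

```c
#include <stddef.h>

/* Stand-in for xs_read_watch(): yields a watch event vector, then NULL
 * forever once the connection to xenstored is gone. */
static const char *event_vec[] = { "some/path", "token" };
static int events_left = 2;
static const char **fake_read_watch(void)
{
    return events_left-- > 0 ? event_vec : NULL;
}

/* The do_watch() loop with "break" on NULL: instead of spinning at
 * 100% CPU retrying a dead connection, the loop exits and the caller
 * can observe the failure. */
static int process_watches(const char **(*read_watch)(void))
{
    int handled = 0;
    for ( ;; )
    {
        const char **vec = read_watch();
        if ( vec == NULL )
            break;          /* was "continue", causing the busy spin */
        handled++;          /* ... dispatch the event ... */
    }
    return handled;
}
```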

>
>>
>> 3) (this issue).  If anyone starts xenstore-watch with no xenstored
>> running at all, it blocks in D in the kernel.
>
> Should be handled with solution for 1).
>
>>
>> The cause is the special handling for watch/unwatch commands which,
>> instead of just queuing up the data for xenstore, explicitly waits for
>> an OK for registering the watch.  This causes a write() system call to
>> block waiting for a non-existent entity to reply.
>>
>> So while this patch does resolve the major usability issue I found (I
>> can't even SIGINT and get my terminal back), I think there are issues.
>>
>> The reason why XS_WATCH/XS_UNWATCH are special cased is because they do
>> require special handling.  The main kernel thread for processing
>> incoming data from xenstored does need to know how to associate each
>> async XS_WATCH_EVENT to the caller who watched the path.
>>
>> Therefore, depending on when this cancellation hits, we might be in any
>> of the following states:
>>
>> 1) the watch is queued in the kernel, but not even sent to xenstored yet
>> 2) the watch is queued in the xenstored ring, but not acted upon
>> 3) the watch is queued in the xenstored ring, and the xenstored has seen
>> it but not replied yet
>> 4) the watch has been processed, but the XS_WATCH reply hasn't been
>> received yet
>> 5) the watch has been processed, and the XS_WATCH reply received
>>
>> State 5 (and, to a lesser extent, state 4) is the normal success path when xenstored has
>> acted upon the request, and the internal kernel infrastructure is set up
>> appropriately to handle XS_WATCH_EVENTs.
>>
>> States 1 and 2 can be very common if there is no xenstored (or at least,
>> it hasn't started up yet).  In reality, there is either no xenstored, or
>> it is up and running (and for a period of time during system startup,
>> these cases occur in sequence).
>
> Yes. this is the reason we can't just reject a user request if xenstored
> hasn't been detected yet: it could be just starting.

Right, and I'm not suggesting that we'd want to reject accesses while
xenstored is starting up.

>
>>
>> As soon as the XS_WATCH event has been written into the xenstored ring,
>> it is not safe to cancel.  You've committed to xenstored processing the
>> request (if it is up).
>
> I'm not sure this is true. Cancelling it might result in a stale watch
> in xenstored, but there shouldn't be a problem related to that. In case
> that watch fires the event will normally be discarded by the kernel as
> no matching watch is found in the kernel's data. In case a new watch
> has been setup with the same struct xenbus_watch address (which is used
> as the token), then this new watch might fire without the node of the
> new watch having changed, but spurious watch events are defined to be
> okay (OTOH the path in the event might look strange to the handler).

Watches are a quota'd resource in (at least some) xenstored
configurations.  Losing track of the registration is a resource leak,
even if the kernel can filter and discard the unexpected watch events.

>> If xenstored is actually up and running, its fine and necessary to
>> block.  The request will be processed in due course (timing subject to
>> the client and server load).  If xenstored isn't up, blocking isn't ok.
>>
>> Therefore, I think we need to distinguish "not yet on the ring" from "on
>> the ring", as our distinction as to whether cancelling is safe, and
>> ensure we don't queue anything on the ring before we're sure xenstored
>> has started up.
>>
>> Does this make sense?
>
> Basically, yes.

Great.  If I get any time, I'll try to look into some fixes along the
above lines.
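[Editorial aside: the "not yet on the ring" vs "on the ring" split agreed above can be captured roughly as below. The state names are illustrative; the real struct xb_req_data tracks request state differently.]

```c
#include <stdbool.h>

/* Hypothetical request lifecycle states; the key split discussed in
 * the thread is local queue vs. xenstored ring. */
enum req_state {
    REQ_QUEUED_LOCAL,   /* queued in the kernel, not yet on the ring */
    REQ_ON_RING,        /* written into the xenstored ring */
    REQ_REPLIED,        /* reply received from xenstored */
};

/* Once a request is on the ring, xenstored (if up) may act on it, so
 * cancelling e.g. an XS_WATCH there can leak a quota'd watch
 * registration even though the kernel can discard stray events. */
static bool cancel_is_safe(enum req_state s)
{
    return s == REQ_QUEUED_LOCAL;
}
```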

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 18:42:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 18:42:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56097.97933 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpyEe-0007bq-FN; Thu, 17 Dec 2020 18:42:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56097.97933; Thu, 17 Dec 2020 18:42:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpyEe-0007bj-Bi; Thu, 17 Dec 2020 18:42:44 +0000
Received: by outflank-mailman (input) for mailman id 56097;
 Thu, 17 Dec 2020 18:42:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpyEd-0007bb-0K; Thu, 17 Dec 2020 18:42:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpyEc-0005Rt-R8; Thu, 17 Dec 2020 18:42:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpyEc-0002Y4-Jt; Thu, 17 Dec 2020 18:42:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kpyEc-0005WP-JO; Thu, 17 Dec 2020 18:42:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/cVt3RTsItKejIQS+OHjSaKadDEFdDNc0KXBzyeJ4OU=; b=DTkMqpoNTK4V6xN/XIj5a480Vq
	uHFLeohYU3j53IKfTL5ehS+Ku/Yr6sAl6U+rUQKSYmNN2P5ucwUcZTXONC/sV13NlzukBn0p9rMyR
	sFxz5rRc3aW9LMrHSGMkcWshxAX4JpmoKJ5l1K7+7UEgJTtRrBUPm+mXI0KEbMKcjhB8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157649-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157649: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=641723f78d3a0b1982e1cd2ef37d8d877cfe542d
X-Osstest-Versions-That:
    xen=d81133d45d81d35a4e7445778bfd1179190cbd31
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 17 Dec 2020 18:42:42 +0000

flight 157649 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157649/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  641723f78d3a0b1982e1cd2ef37d8d877cfe542d
baseline version:
 xen                  d81133d45d81d35a4e7445778bfd1179190cbd31

Last test of basis   157621  2020-12-17 02:00:29 Z    0 days
Testing same since   157649  2020-12-17 16:00:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Juergen Gross <jgross@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   d81133d45d..641723f78d  641723f78d3a0b1982e1cd2ef37d8d877cfe542d -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 19:04:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 19:04:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56108.97954 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpyZH-0001Fk-7g; Thu, 17 Dec 2020 19:04:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56108.97954; Thu, 17 Dec 2020 19:04:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpyZH-0001Fd-4i; Thu, 17 Dec 2020 19:04:03 +0000
Received: by outflank-mailman (input) for mailman id 56108;
 Thu, 17 Dec 2020 19:04:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WIZe=FV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kpyZE-0001FH-Q7
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 19:04:01 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ef42eea2-5961-4759-a758-ae352e65c152;
 Thu, 17 Dec 2020 19:04:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef42eea2-5961-4759-a758-ae352e65c152
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608231839;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=vvGEhdZ3CPO98v3B3B26nGP9tbSQ53gH9VoMQvb25vA=;
  b=RUW+vJQFtVqYPHHLkVYW9Ao1wXg1PFlGk/vTnBseliVn4BWa+umBMrRU
   iXFZZ4NSZqzUR5oC+s4RuWj2arEBOcoA5gl15OO3uWl/ML/xCH4EPqZR2
   KiBtF0TCAXLYpNnDqxQlmlsTFc3O7PZb+1lwf2r95eDsUmyNrs3ihAf8n
   8=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.2
X-MesageID: 34716130
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,428,1599537600"; 
   d="scan'208";a="34716130"
Subject: Re: [PATCH 1/6] x86/p2m: tidy p2m_add_foreign() a little
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, George Dunlap <george.dunlap@citrix.com>
References: <be9ce75e-9119-2b5a-9e7b-437beb7ee446@suse.com>
 <8b70c26e-7ae6-8438-67a3-99cef338ba52@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <55de56b3-0e83-c558-6432-9853db82f57a@citrix.com>
Date: Thu, 17 Dec 2020 19:03:54 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <8b70c26e-7ae6-8438-67a3-99cef338ba52@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 15/12/2020 16:25, Jan Beulich wrote:
> Drop a bogus ASSERT() - we don't typically assert incoming domain
> pointers to be non-NULL, and there's no particular reason to do so here.
>
> Replace the open-coded DOMID_SELF check by use of
> rcu_lock_remote_domain_by_id(), at the same time covering the request
> being made with the current domain's actual ID.
>
> Move the "both domains same" check into just the path where it really
> is meaningful.
>
> Swap the order of the two puts, such that
> - the p2m lock isn't needlessly held across put_page(),
> - a separate put_page() on an error path can be avoided,
> - they're inverse to the order of the respective gets.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

> ---
> The DOMID_SELF check being converted also suggests to me that there's an
> implication of tdom == current->domain, which would in turn appear to
> mean the "both domains same" check could as well be dropped altogether.

I don't see anything conceptually wrong with the toolstack creating a
foreign mapping on behalf of a guest at construction time.  I'd go as
far as to argue that it is an interface shortcoming if this didn't
function correctly.

>
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -2560,9 +2560,6 @@ int p2m_add_foreign(struct domain *tdom,
>      int rc;
>      struct domain *fdom;
>  
> -    ASSERT(tdom);
> -    if ( foreigndom == DOMID_SELF )
> -        return -EINVAL;
>      /*
>       * hvm fixme: until support is added to p2m teardown code to cleanup any
>       * foreign entries, limit this to hardware domain only.
> @@ -2573,13 +2570,15 @@ int p2m_add_foreign(struct domain *tdom,
>      if ( foreigndom == DOMID_XEN )
>          fdom = rcu_lock_domain(dom_xen);
>      else
> -        fdom = rcu_lock_domain_by_id(foreigndom);
> -    if ( fdom == NULL )
> -        return -ESRCH;
> +    {
> +        rc = rcu_lock_remote_domain_by_id(foreigndom, &fdom);

It occurs to me that rcu_lock_remote_domain_by_id()'s self error path
ought to be -EINVAL rather than -EPERM.  It's never for permissions
reasons that we restrict to remote domains like this - always for
technical ones.

But that is definitely content for a different patch.
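[Editorial aside: the error-code point can be modelled as below. This is a simplified userspace sketch; the real rcu_lock_remote_domain_by_id() also looks up the domain and takes its RCU lock, and today returns -EPERM for the "self" case.]

```c
#include <errno.h>

/* Model of the self-rejection path: the suggestion above is -EINVAL,
 * since the restriction is technical rather than a permissions check. */
static int lock_remote_domain_errno(unsigned int domid,
                                    unsigned int current_domid)
{
    if ( domid == current_domid )
        return -EINVAL;     /* today's code returns -EPERM */
    return 0;               /* success: domain would now be locked */
}
```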

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 19:18:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 19:18:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56123.97974 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpynB-0002SB-Rj; Thu, 17 Dec 2020 19:18:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56123.97974; Thu, 17 Dec 2020 19:18:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpynB-0002S4-Mc; Thu, 17 Dec 2020 19:18:25 +0000
Received: by outflank-mailman (input) for mailman id 56123;
 Thu, 17 Dec 2020 19:18:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WIZe=FV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kpynA-0002Rz-9y
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 19:18:24 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 63c5d24a-de23-42eb-9e14-fdcf7b441c36;
 Thu, 17 Dec 2020 19:18:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 63c5d24a-de23-42eb-9e14-fdcf7b441c36
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608232702;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=b6PMh/U7PSOk0hBN5ueOUeqZfRvoeO+m1nCDE5XKpU0=;
  b=JPKnB4o59ODwzXOE4YRnxiJroTU1xzYRk6TEt5QtTJBL1iU+n7hVwLIl
   K/s96Yi6IFBcjKoTMWdpaZUpxNBIAb2jtMOWV8r/jxvpRDkj0D5DAEO8j
   Is2m2qFx4X20iWQ/am99YshfLjtjfZ3BKc06Hc5oKMR7TElUdRLWdD2Q7
   w=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.2
X-MesageID: 33493654
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,428,1599537600"; 
   d="scan'208";a="33493654"
Subject: Re: [PATCH 2/6] x86/mm: p2m_add_foreign() is HVM-only
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, George Dunlap <george.dunlap@citrix.com>
References: <be9ce75e-9119-2b5a-9e7b-437beb7ee446@suse.com>
 <cf4569c5-a9c5-7b4b-d576-d1521c369418@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <f736244b-ece7-af35-1517-2e5fdd9705c7@citrix.com>
Date: Thu, 17 Dec 2020 19:18:16 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <cf4569c5-a9c5-7b4b-d576-d1521c369418@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 15/12/2020 16:26, Jan Beulich wrote:
> This is together with its only caller, xenmem_add_to_physmap_one().

I can't parse this sentence.  Perhaps "... as is its only caller," as a
follow-on from the subject sentence.

>  Move
> the latter next to p2m_add_foreign(), allowing this one to become static
> at the same time.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>, although...

> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -2639,7 +2646,114 @@ int p2m_add_foreign(struct domain *tdom,
>      return rc;
>  }
>  
> -#ifdef CONFIG_HVM
> +int xenmem_add_to_physmap_one(
> +    struct domain *d,
> +    unsigned int space,
> +    union add_to_physmap_extra extra,
> +    unsigned long idx,
> +    gfn_t gpfn)
> +{
> +    struct page_info *page = NULL;
> +    unsigned long gfn = 0 /* gcc ... */, old_gpfn;
> +    mfn_t prev_mfn;
> +    int rc = 0;
> +    mfn_t mfn = INVALID_MFN;
> +    p2m_type_t p2mt;
> +
> +    switch ( space )
> +    {
> +        case XENMAPSPACE_shared_info:
> +            if ( idx == 0 )
> +                mfn = virt_to_mfn(d->shared_info);
> +            break;
> +        case XENMAPSPACE_grant_table:
> +            rc = gnttab_map_frame(d, idx, gpfn, &mfn);
> +            if ( rc )
> +                return rc;
> +            break;
> +        case XENMAPSPACE_gmfn:
> +        {
> +            p2m_type_t p2mt;
> +
> +            gfn = idx;
> +            mfn = get_gfn_unshare(d, gfn, &p2mt);
> +            /* If the page is still shared, exit early */
> +            if ( p2m_is_shared(p2mt) )
> +            {
> +                put_gfn(d, gfn);
> +                return -ENOMEM;
> +            }
> +            page = get_page_from_mfn(mfn, d);
> +            if ( unlikely(!page) )
> +                mfn = INVALID_MFN;
> +            break;
> +        }
> +        case XENMAPSPACE_gmfn_foreign:
> +            return p2m_add_foreign(d, idx, gfn_x(gpfn), extra.foreign_domid);
> +        default:
> +            break;

... seeing as the function is moving wholesale, can we at least correct
the indentation, to save yet another large churn in the future?  (If it
were me, I'd go as far as deleting the default case as well.)

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 19:36:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 19:36:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56129.97986 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpz4F-0004SV-ES; Thu, 17 Dec 2020 19:36:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56129.97986; Thu, 17 Dec 2020 19:36:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpz4F-0004SO-9G; Thu, 17 Dec 2020 19:36:03 +0000
Received: by outflank-mailman (input) for mailman id 56129;
 Thu, 17 Dec 2020 19:36:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpz4D-0004SG-JE; Thu, 17 Dec 2020 19:36:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpz4D-0006Ka-9z; Thu, 17 Dec 2020 19:36:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kpz4D-00058N-2P; Thu, 17 Dec 2020 19:36:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kpz4D-0006ZY-1v; Thu, 17 Dec 2020 19:36:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Gwfd8llvxSE313dzee6d0STKCuvPtiU23f7U4eM2ld4=; b=M9gdhkue/rFplGdHTEfwSTvDfz
	jH//DOfxnCANtz/Qpz+s435WklQ3hc6Imkk2cpmx9xJfoqwAejrjwjNORJN9ZApznFkFKcWSUfpKf
	N7S8pNDp5jMXkCalf/1zE6O10q9jtxft9iDtP1w0kQ/lC6797h2sN+CgrgMFa5G7FGmM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157624-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157624: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=4252318bb3e863df52c90cbf9b3c70de11fa1a53
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 17 Dec 2020 19:36:01 +0000

flight 157624 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157624/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              4252318bb3e863df52c90cbf9b3c70de11fa1a53
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  160 days
Failing since        151818  2020-07-11 04:18:52 Z  159 days  154 attempts
Testing same since   157624  2020-12-17 04:19:17 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 33260 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 19:55:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 19:55:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56136.98001 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpzMa-0006TY-4s; Thu, 17 Dec 2020 19:55:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56136.98001; Thu, 17 Dec 2020 19:55:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kpzMa-0006TR-1N; Thu, 17 Dec 2020 19:55:00 +0000
Received: by outflank-mailman (input) for mailman id 56136;
 Thu, 17 Dec 2020 19:54:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WIZe=FV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kpzMY-0006TL-0h
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 19:54:58 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f3a4d447-f897-4af5-99df-aad7e89fa9d9;
 Thu, 17 Dec 2020 19:54:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f3a4d447-f897-4af5-99df-aad7e89fa9d9
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608234896;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=6b75V8lNBxTwMLVNegOgh4gC5t2N3jdmF7LWG6K9iZM=;
  b=U2IDvXNKu6Ed02Qto61I6gMJI4WV3xGH+9ljpcZ3ea8ewERdIpmgvopD
   qopGJ42MUIQrZHjckPi5dbVDCmCFkHLnyI0I3euk+A9oQYJRcX2ZZKf0P
   cZrxRbL+OX49p4nJwaG/9niJ2jWwVBtUiarAeQYqN9izZeHG08mQ6vGMG
   U=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: YRLXeFaD6X01hM2pJLCP8KhXP1IyJmK8KuCRsbK8M8ML7MNqQvk6JKRGmNk4YoOXgkP+7ZyDEG
 u2uoyoWNzyN9+Z/AVwyt5PiYnueicZlIaJAQvjRk1AGrdn1xsmdIdyW35dQSjgryhSD6jQHAQj
 eEcwpOvlHQdJKldMwI47S9Tf1emrlubqDmoISGLiybUO6ox95xZTjtAENYpjA+VeOVpwc4/OyS
 LDetmAaqLxKkk4Mycn17zRkgdddKNrXVO3f+huSDF+7OJVu0FuxTX5S/Uozhd8VxU8i/okRYg6
 l3Q=
X-SBRS: 5.2
X-MesageID: 33475931
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,428,1599537600"; 
   d="scan'208";a="33475931"
Subject: Re: [PATCH 3/6] x86/p2m: set_{foreign,mmio}_p2m_entry() are HVM-only
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, George Dunlap <george.dunlap@citrix.com>
References: <be9ce75e-9119-2b5a-9e7b-437beb7ee446@suse.com>
 <15f41816-4814-bae5-e0bc-89e99d04a142@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <fc78c235-f806-6120-25f0-182b4c08bdaa@citrix.com>
Date: Thu, 17 Dec 2020 19:54:51 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <15f41816-4814-bae5-e0bc-89e99d04a142@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 15/12/2020 16:26, Jan Beulich wrote:
> Extend a respective #ifdef from inside set_typed_p2m_entry() to around
> all three functions. Add ASSERT_UNREACHABLE() to the latter one's safety
> check path.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

As the code currently stands, yes.  However, I'm not sure I agree
conceptually.

The p2m APIs are either a common interface to use, or HVM-specific.

PV guests don't actually have a p2m, but some of the APIs are used from
common code (e.g. copy_to/from_guest()).  Some p2m concepts are
special-cased as identity for PV (technically, paging_mode_translate()),
while other concepts, such as foreign/mmio mappings, which do exist for
both PV and HVM guests, are handled with totally different API sets on
each side.

This is a broken mess of an abstraction.  I suspect some of it has to do
with PV autotranslate mode in the past, but that doesn't alter the fact
that we have a totally undocumented and error prone set of APIs here.

Either P2Ms should (fully) be the common abstraction (despite not being
a real object for PV guests), or there should be a different set of APIs
forming the common abstraction, and P2Ms should move to being
exclusively for HVM guests.

(It's also very obvious from all the CONFIG_X86 ifdeffery that we've got
arch specifics in our common code, and that is another aspect of the API
mess which needs handling.)

I'm honestly not sure which of these would be better, but I'm fairly
sure that either would be better than what we've currently got.  I
certainly think it would be better to have a plan for improvement, to
guide patches like this.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 21:02:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 21:02:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56158.98077 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq0PX-0005IN-8T; Thu, 17 Dec 2020 21:02:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56158.98077; Thu, 17 Dec 2020 21:02:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq0PX-0005IG-5B; Thu, 17 Dec 2020 21:02:07 +0000
Received: by outflank-mailman (input) for mailman id 56158;
 Thu, 17 Dec 2020 21:02:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kq0PW-0005I4-CT; Thu, 17 Dec 2020 21:02:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kq0PW-0007tw-5g; Thu, 17 Dec 2020 21:02:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kq0PV-0007uM-Ut; Thu, 17 Dec 2020 21:02:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kq0PV-0005By-US; Thu, 17 Dec 2020 21:02:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QVjuQ74hT1Kvf3/Su27vyVAi+EGMUEd2Zl+8r+vehjw=; b=XWDNFCQ9RwynFi74u+mcyhaimZ
	qIFnipIXbIckapQ+U4yFRwExX2b2gQaUYpZ7Faf9txr3sXyozDEL3bKp5nWJz5mv7biSbQ2StRB2b
	GjiRBL9I+vuhB9ryrRPxyhtR9CNWdq/rDslPWTB845WuGESsDzBaOIWctiwxN+qbZZfk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157617-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157617: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-libvirt-pair:guest-migrate/dst_host/src_host/debian.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=ac6a0af3870ba0f7ffb16af3e41827b0a53f88b0
X-Osstest-Versions-That:
    xen=904148ecb4a59d4c8375d8e8d38117b8605e10ac
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 17 Dec 2020 21:02:05 +0000

flight 157617 xen-unstable real [real]
flight 157652 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157617/
http://logs.test-lab.xenproject.org/osstest/logs/157652/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-pair 28 guest-migrate/dst_host/src_host/debian.repeat fail pass in 157652-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157568
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157568
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157568
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157568
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157568
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157568
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157568
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157568
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157568
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157568
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157568
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  ac6a0af3870ba0f7ffb16af3e41827b0a53f88b0
baseline version:
 xen                  904148ecb4a59d4c8375d8e8d38117b8605e10ac

Last test of basis   157568  2020-12-15 16:38:41 Z    2 days
Testing same since   157617  2020-12-17 00:26:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <pdurrant@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   904148ecb4..ac6a0af387  ac6a0af3870ba0f7ffb16af3e41827b0a53f88b0 -> master


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 21:27:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 21:27:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56188.98180 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq0nb-0007tZ-Ls; Thu, 17 Dec 2020 21:26:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56188.98180; Thu, 17 Dec 2020 21:26:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq0nb-0007tS-IN; Thu, 17 Dec 2020 21:26:59 +0000
Received: by outflank-mailman (input) for mailman id 56188;
 Thu, 17 Dec 2020 21:26:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ya8y=FV=vps.thesusis.net=psusi@srs-us1.protection.inumbo.net>)
 id 1kq0na-0007tN-N5
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 21:26:58 +0000
Received: from vps.thesusis.net (unknown [34.202.238.73])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 78b7d677-1498-43ea-b151-22931e222092;
 Thu, 17 Dec 2020 21:26:57 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by vps.thesusis.net (Postfix) with ESMTP id B05002781D;
 Thu, 17 Dec 2020 16:26:57 -0500 (EST)
Received: from vps.thesusis.net ([127.0.0.1])
 by localhost (vps.thesusis.net [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id gmEwGuj1UTxB; Thu, 17 Dec 2020 16:26:57 -0500 (EST)
Received: by vps.thesusis.net (Postfix, from userid 1000)
 id 7191F2781C; Thu, 17 Dec 2020 16:26:57 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 78b7d677-1498-43ea-b151-22931e222092
References: <87h7oudcbx.fsf@vps.thesusis.net> <CAHD1Q_zcruQ6KVHApvhb=0+mG0m80T+tmg1UzjQBki8j+aR51A@mail.gmail.com> <87czzcdtir.fsf@vps.thesusis.net> <CAHD1Q_z+WW36rfr1RAFYKjU5bocA90OonBmSKECRnpacvWyPmQ@mail.gmail.com>
User-agent: mu4e 1.5.7; emacs 26.3
From: Phillip Susi <phill@thesusis.net>
To: "Guilherme G. Piccoli" <guilherme.piccoli@canonical.com>
Cc: kexec mailing list <kexec@lists.infradead.org>, xen-devel@lists.xenproject.org
Subject: Re: kexec not working in xen domU?
Date: Thu, 17 Dec 2020 16:25:25 -0500
In-reply-to: <CAHD1Q_z+WW36rfr1RAFYKjU5bocA90OonBmSKECRnpacvWyPmQ@mail.gmail.com>
Message-ID: <873604p1i6.fsf@vps.thesusis.net>
MIME-Version: 1.0
Content-Type: text/plain


Guilherme G. Piccoli writes:

> Hm... not many prints; either earlyprintk didn't work, or it's a really
> early boot issue. It might also be worth investigating whether it's a
> purgatory issue - did you try the ""new"" kexec syscall, by running
> "kexec -s -l" instead of just "kexec -l"?
> Also, it's worth trying this with an upstream kernel and kexec-tools - I
> assume you're doing that already?

I tried with -s and it didn't help.  So far I have tried it originally on
my Ubuntu 20.04 Amazon VPS, then on my Debian testing (Linux 5.9.0)
install on my local Xen server.  I'll try building the latest upstream
kernel and kexec tomorrow.


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 21:46:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 21:46:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56196.98200 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq16G-0001Xc-Cb; Thu, 17 Dec 2020 21:46:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56196.98200; Thu, 17 Dec 2020 21:46:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq16G-0001XV-9V; Thu, 17 Dec 2020 21:46:16 +0000
Received: by outflank-mailman (input) for mailman id 56196;
 Thu, 17 Dec 2020 21:46:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WIZe=FV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kq16E-0001XQ-VM
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 21:46:14 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 86cd528c-5d4f-4951-b75f-2b6d2ed41fc5;
 Thu, 17 Dec 2020 21:46:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 86cd528c-5d4f-4951-b75f-2b6d2ed41fc5
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608241573;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=5Fy511Av2SUNnJ4dw3xkeyxm7ImaAAzPUB9ZrpA93d0=;
  b=JRQf+2n6FkIkmecPaSWset35hiB42dPgtH19OaG6kM+wTewJwGRhbwCK
   oJMYuAr+plETi8sFnvRZO9bounMbPmYu1wyIsawda9zbRMj58s+ADwKVC
   nAPSdPvlsCyhR0K4PLYO/BJiPh2+bDNuj8J8LLHL5qc5tms6W3PALKufd
   E=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: QTBa53xVxyVPknzB5aX5iRm1EjGIOwnuRtsE8wexWC5FUOCjeUuJm6TzVNJ2dPR6pehPJWcgNQ
 cHLGCjWUIaxClfiRbp07PHnMzUCtcTXMDNkVCS6zueNmkHVMjAdJelH/3mZrnQfZBjIJCHuwR8
 +JqG/uvZp5hFRQiGUgM8yCd3wg0RzOZojuIz0/AB+2gBzuZedK4bk1ZqOgitEJJcwp1R8znK1I
 oTzmvHobbjLKS/43BZ4aA6dRq7VE6mdrUWXyPKhniS5ao2Dsh3APv3DLH4O9ein6oWRQukvjpY
 3ts=
X-SBRS: 5.2
X-MesageID: 33849264
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,428,1599537600"; 
   d="scan'208";a="33849264"
Subject: Re: [PATCH] xen/x86: Fix memory leak in vcpu_create() error path
To: Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@eu.citrix.com>, Tim Deegan
	<tim@xen.org>, =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?=
	<michal.leszczynski@cert.pl>
References: <20200928154741.2366-1-andrew.cooper3@citrix.com>
 <33331c3a-1fd5-1ef6-16a3-21d2a6672e90@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <9556aeb3-2a7c-7aea-4386-6e561dd9ef6e@citrix.com>
Date: Thu, 17 Dec 2020 21:46:06 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <33331c3a-1fd5-1ef6-16a3-21d2a6672e90@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 29/09/2020 07:18, Jan Beulich wrote:
> On 28.09.2020 17:47, Andrew Cooper wrote:
>> Various paths in vcpu_create() end up calling paging_update_paging_modes(),
>> which eventually allocate a monitor pagetable if one doesn't exist.
>>
>> However, an error in vcpu_create() results in the vcpu being cleaned up
>> locally, and not put onto the domain's vcpu list.  Therefore, the monitor
>> table is not freed by {hap,shadow}_teardown()'s loop.  This is caught by
>> assertions later that we've successfully freed the entire hap/shadow memory
>> pool.
>>
>> The per-vcpu loops in the domain teardown logic are conceptually wrong, but
>> exist due to insufficient structure in the existing logic.
>>
>> Break paging_vcpu_teardown() out of paging_teardown(), with mirrored breakouts
>> in the hap/shadow code, and use it from arch_vcpu_create()'s error path.  This
>> fixes the memory leak.
>>
>> The new {hap,shadow}_vcpu_teardown() must be idempotent, and are written to be
>> as tolerant as possible, with the minimum number of safety checks possible.
>> In particular, drop the mfn_valid() check - if junk is in these fields, then
>> Xen is going to explode anyway.
>>
>> Reported-by: Michał Leszczyński <michal.leszczynski@cert.pl>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.  (Wow it really is a long time since needing to drop everything
for security work...)

>> --- a/xen/arch/x86/mm/hap/hap.c
>> +++ b/xen/arch/x86/mm/hap/hap.c
>> @@ -563,30 +563,37 @@ void hap_final_teardown(struct domain *d)
>>      paging_unlock(d);
>>  }
>>  
>> +void hap_vcpu_teardown(struct vcpu *v)
>> +{
>> +    struct domain *d = v->domain;
>> +    mfn_t mfn;
>> +
>> +    paging_lock(d);
>> +
>> +    if ( !paging_mode_hap(d) || !v->arch.paging.mode )
>> +        goto out;
> Any particular reason you don't use paging_get_hostmode() (as the
> original code did) here? Any particular reason for the seemingly
> redundant (and hence somewhat in conflict with the description's
> "with the minimum number of safety checks possible")
> paging_mode_hap()?

Yes to both.  As you spotted, I converted the shadow side first, and
made the two consistent.

The paging_mode_{shadow,hap}() check is necessary for idempotency.  These
functions really might get called before paging is set up, e.g. for an
early failure in domain_create().

The paging mode has nothing really to do with hostmode/guestmode/etc. 
It is the only way of expressing the logic where it is clear that the
lower pointer dereferences are trivially safe.  (Also, the guestmode
predicate isn't going to survive the nested virt work.  It's
conceptually broken.)

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 23:03:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 23:03:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56214.98262 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq2IT-00016t-98; Thu, 17 Dec 2020 23:02:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56214.98262; Thu, 17 Dec 2020 23:02:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq2IT-00016m-6B; Thu, 17 Dec 2020 23:02:57 +0000
Received: by outflank-mailman (input) for mailman id 56214;
 Thu, 17 Dec 2020 23:02:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BHja=FV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kq2IR-00016h-Vv
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 23:02:56 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aadaaac4-5c45-4e9e-b86f-2cae4a78d93c;
 Thu, 17 Dec 2020 23:02:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aadaaac4-5c45-4e9e-b86f-2cae4a78d93c
Date: Thu, 17 Dec 2020 15:02:53 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1608246174;
	bh=MhszyCe1+FYfdc91t8oZfrS6Fvqvd3u3aMqB4kZ3N+w=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=G3I62Na6SRYto3lBUVqN4dE9T45P/s+yaZlhjUBRJEzIFf3XreZWyEdN3z+FuCDX2
	 ChSYnjg6aMBZCBTcDBBRmUKzgnzFJDxcy4RMXcwyy8Gq1YPYTlamNEg48IdU1UDN0h
	 +8lAX7HYGsOZZ/CJWbTCOWHwzSrcbXrUnfgSCg0WMUy3A5Uu0eVvsFI0MJC9ncyoMu
	 RoN7CriOFToRVy+FIxHsriUJ3Cw470qj81yJ0ndcLef35MHhd8Ag0L5By6DGlYOeXQ
	 osDOxcBEhKuOQiaJROrvLx+3ttPEA7yfjNNNeTXP9dafY/YGr1mxypYC7lH3Is8Xe1
	 eF/YhGbA1swJw==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: xen-devel@lists.xenproject.org
cc: famzheng@amazon.com, sstabellini@kernel.org, cardoe@cardoe.com, wl@xen.org, 
    Bertrand.Marquis@arm.com, julien@xen.org
Subject: Re: [PATCH v4 0/8] xen/arm: Emulate ID registers
In-Reply-To: <160823586491.13274.572144728643942444@600e7e483b3a>
Message-ID: <alpine.DEB.2.21.2012171502300.4040@sstabellini-ThinkPad-T480s>
References: <160823586491.13274.572144728643942444@600e7e483b3a>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Actually it passed. It was just a transient internet issue.


On Thu, 17 Dec 2020, no-reply@patchew.org wrote:
> Hi,
> 
> Patchew automatically ran gitlab-ci pipeline with this patch (series) applied, but the job failed. Maybe there's a bug in the patches?
> 
> You can find the link to the pipeline near the end of the report below:
> 
> Type: series
> Message-id: cover.1608214355.git.bertrand.marquis@arm.com
> Subject: [PATCH v4 0/8] xen/arm: Emulate ID registers
> 
> === TEST SCRIPT BEGIN ===
> #!/bin/bash
> sleep 10
> patchew gitlab-pipeline-check -p xen-project/patchew/xen
> === TEST SCRIPT END ===
> 
> warning: redirecting to https://gitlab.com/xen-project/patchew/xen.git/
> From https://gitlab.com/xen-project/patchew/xen
>    8e0fe4fe5f..904148ecb4  master     -> master
> warning: redirecting to https://gitlab.com/xen-project/patchew/xen.git/
> From https://gitlab.com/xen-project/patchew/xen
>  * [new tag]               patchew/cover.1608214355.git.bertrand.marquis@arm.com -> patchew/cover.1608214355.git.bertrand.marquis@arm.com
> Switched to a new branch 'test'
> 4fc8dff44c xen/arm: Activate TID3 in HCR_EL2
> d72e6d1faa xen/arm: Add CP10 exception support to handle MVFR
> 9ef18928a0 xen/arm: Add handler for cp15 ID registers
> 09f61edd55 xen/arm: Add handler for ID registers on arm64
> 0a14368a8f xen/arm: create a cpuinfo structure for guest
> 01fd2fca83 xen/arm: Add arm64 ID registers definitions
> e87a25c913 xen/arm: Add ID registers and complete cpuinfo
> 66f3ee6d1a xen/arm: Use READ_SYSREG instead of 32/64 versions
> 
> === OUTPUT BEGIN ===
> [2020-12-17 16:52:57] Looking up pipeline...
> [2020-12-17 16:52:58] Found pipeline 231473331:
> 
> https://gitlab.com/xen-project/patchew/xen/-/pipelines/231473331
> 
> [2020-12-17 16:52:58] Waiting for pipeline to finish...
> [2020-12-17 17:08:03] Still waiting...
> [2020-12-17 17:23:09] Still waiting...
> [2020-12-17 17:38:13] Still waiting...
> [2020-12-17 17:53:18] Still waiting...
> [2020-12-17 18:08:22] Still waiting...
> [2020-12-17 18:23:27] Still waiting...
> [2020-12-17 18:38:32] Still waiting...
> [2020-12-17 18:53:36] Still waiting...
> [2020-12-17 19:08:42] Still waiting...
> [2020-12-17 19:23:48] Still waiting...
> [2020-12-17 19:38:53] Still waiting...
> [2020-12-17 19:53:58] Still waiting...
> [2020-12-17 20:09:03] Still waiting...
> [2020-12-17 20:11:03] Pipeline failed
> [2020-12-17 20:11:04] Job 'qemu-smoke-x86-64-clang-pvh' in stage 'test' is skipped
> [2020-12-17 20:11:04] Job 'qemu-smoke-x86-64-gcc-pvh' in stage 'test' is skipped
> [2020-12-17 20:11:04] Job 'qemu-smoke-x86-64-clang' in stage 'test' is skipped
> [2020-12-17 20:11:04] Job 'qemu-smoke-x86-64-gcc' in stage 'test' is skipped
> [2020-12-17 20:11:04] Job 'build-each-commit-gcc' in stage 'test' is skipped
> [2020-12-17 20:11:04] Job 'debian-unstable-gcc-debug-arm64' in stage 'build' is failed
> [2020-12-17 20:11:04] Job 'debian-unstable-gcc-arm64' in stage 'build' is failed
> === OUTPUT END ===
> 
> Test command exited with code: 1


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 23:17:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 23:17:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56219.98277 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq2WB-0002Fw-JZ; Thu, 17 Dec 2020 23:17:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56219.98277; Thu, 17 Dec 2020 23:17:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq2WB-0002Fp-GY; Thu, 17 Dec 2020 23:17:07 +0000
Received: by outflank-mailman (input) for mailman id 56219;
 Thu, 17 Dec 2020 23:17:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BHja=FV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kq2WA-0002Fk-E8
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 23:17:06 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 819e3839-668e-4748-869a-2df1ff976175;
 Thu, 17 Dec 2020 23:17:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 819e3839-668e-4748-869a-2df1ff976175
Date: Thu, 17 Dec 2020 15:17:03 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1608247024;
	bh=jXN3v4FSewTv18rQKuXwSbNEsWTYD9/bX50bkqSnTm8=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=M6CiytJl/u9uAeakUziOfAFov3lalJ18c4+YeKU7McaIj72LALvQatMAVdGbmGWYC
	 Jh9EyBpSYPMrkknn2J9O+69KMW8jW37gmTDGZOWYaZP+XBq5bsqYGTnyEkPKm+Een9
	 56rN1WsPGrDwCjqGQmVdd6N53uqSzH1DO2IRezCNEKBdXg4a4xNybopqHkYPx5wy9y
	 pYq77U/tim8DK2XxkULvtXZVyu//YIHL4NzUWzpd+vL6dvY8csFrgSwB0eQlgrunvX
	 7YVZqOf6t+UlhPwZ7f16T53h2e9Qj9SpTryy8yPTQ+Ndc9LAjJLUo7ie8K9Um2O0pl
	 51dYJiVo3UKCA==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <bertrand.marquis@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v4 1/8] xen/arm: Use READ_SYSREG instead of 32/64
 versions
In-Reply-To: <75ab5c84ed6ce1d004316ca4677735aa0543ecdc.1608214355.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.21.2012171505430.4040@sstabellini-ThinkPad-T480s>
References: <cover.1608214355.git.bertrand.marquis@arm.com> <75ab5c84ed6ce1d004316ca4677735aa0543ecdc.1608214355.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 17 Dec 2020, Bertrand Marquis wrote:
> Modify identify_cpu function to use READ_SYSREG instead of READ_SYSREG32
> or READ_SYSREG64.
> The aarch32 versions of the registers are 64-bit on an aarch64 processor,
> so it was wrong to access them as 32-bit registers.

This sentence is a bit confusing because, as an example, MIDR_EL1 is
also an aarch64 register, not only an aarch32 register. Maybe we should
clarify.

Aside from that:

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>
> ---
> Change in V4:
>   This patch was introduced in v4.
> 
> ---
>  xen/arch/arm/cpufeature.c | 50 +++++++++++++++++++--------------------
>  1 file changed, 25 insertions(+), 25 deletions(-)
> 
> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
> index 44126dbf07..115e1b164d 100644
> --- a/xen/arch/arm/cpufeature.c
> +++ b/xen/arch/arm/cpufeature.c
> @@ -99,44 +99,44 @@ int enable_nonboot_cpu_caps(const struct arm_cpu_capabilities *caps)
>  
>  void identify_cpu(struct cpuinfo_arm *c)
>  {
> -        c->midr.bits = READ_SYSREG32(MIDR_EL1);
> +        c->midr.bits = READ_SYSREG(MIDR_EL1);
>          c->mpidr.bits = READ_SYSREG(MPIDR_EL1);
>  
>  #ifdef CONFIG_ARM_64
> -        c->pfr64.bits[0] = READ_SYSREG64(ID_AA64PFR0_EL1);
> -        c->pfr64.bits[1] = READ_SYSREG64(ID_AA64PFR1_EL1);
> +        c->pfr64.bits[0] = READ_SYSREG(ID_AA64PFR0_EL1);
> +        c->pfr64.bits[1] = READ_SYSREG(ID_AA64PFR1_EL1);
>  
> -        c->dbg64.bits[0] = READ_SYSREG64(ID_AA64DFR0_EL1);
> -        c->dbg64.bits[1] = READ_SYSREG64(ID_AA64DFR1_EL1);
> +        c->dbg64.bits[0] = READ_SYSREG(ID_AA64DFR0_EL1);
> +        c->dbg64.bits[1] = READ_SYSREG(ID_AA64DFR1_EL1);
>  
> -        c->aux64.bits[0] = READ_SYSREG64(ID_AA64AFR0_EL1);
> -        c->aux64.bits[1] = READ_SYSREG64(ID_AA64AFR1_EL1);
> +        c->aux64.bits[0] = READ_SYSREG(ID_AA64AFR0_EL1);
> +        c->aux64.bits[1] = READ_SYSREG(ID_AA64AFR1_EL1);
>  
> -        c->mm64.bits[0]  = READ_SYSREG64(ID_AA64MMFR0_EL1);
> -        c->mm64.bits[1]  = READ_SYSREG64(ID_AA64MMFR1_EL1);
> +        c->mm64.bits[0]  = READ_SYSREG(ID_AA64MMFR0_EL1);
> +        c->mm64.bits[1]  = READ_SYSREG(ID_AA64MMFR1_EL1);
>  
> -        c->isa64.bits[0] = READ_SYSREG64(ID_AA64ISAR0_EL1);
> -        c->isa64.bits[1] = READ_SYSREG64(ID_AA64ISAR1_EL1);
> +        c->isa64.bits[0] = READ_SYSREG(ID_AA64ISAR0_EL1);
> +        c->isa64.bits[1] = READ_SYSREG(ID_AA64ISAR1_EL1);
>  #endif
>  
> -        c->pfr32.bits[0] = READ_SYSREG32(ID_PFR0_EL1);
> -        c->pfr32.bits[1] = READ_SYSREG32(ID_PFR1_EL1);
> +        c->pfr32.bits[0] = READ_SYSREG(ID_PFR0_EL1);
> +        c->pfr32.bits[1] = READ_SYSREG(ID_PFR1_EL1);
>  
> -        c->dbg32.bits[0] = READ_SYSREG32(ID_DFR0_EL1);
> +        c->dbg32.bits[0] = READ_SYSREG(ID_DFR0_EL1);
>  
> -        c->aux32.bits[0] = READ_SYSREG32(ID_AFR0_EL1);
> +        c->aux32.bits[0] = READ_SYSREG(ID_AFR0_EL1);
>  
> -        c->mm32.bits[0]  = READ_SYSREG32(ID_MMFR0_EL1);
> -        c->mm32.bits[1]  = READ_SYSREG32(ID_MMFR1_EL1);
> -        c->mm32.bits[2]  = READ_SYSREG32(ID_MMFR2_EL1);
> -        c->mm32.bits[3]  = READ_SYSREG32(ID_MMFR3_EL1);
> +        c->mm32.bits[0]  = READ_SYSREG(ID_MMFR0_EL1);
> +        c->mm32.bits[1]  = READ_SYSREG(ID_MMFR1_EL1);
> +        c->mm32.bits[2]  = READ_SYSREG(ID_MMFR2_EL1);
> +        c->mm32.bits[3]  = READ_SYSREG(ID_MMFR3_EL1);
>  
> -        c->isa32.bits[0] = READ_SYSREG32(ID_ISAR0_EL1);
> -        c->isa32.bits[1] = READ_SYSREG32(ID_ISAR1_EL1);
> -        c->isa32.bits[2] = READ_SYSREG32(ID_ISAR2_EL1);
> -        c->isa32.bits[3] = READ_SYSREG32(ID_ISAR3_EL1);
> -        c->isa32.bits[4] = READ_SYSREG32(ID_ISAR4_EL1);
> -        c->isa32.bits[5] = READ_SYSREG32(ID_ISAR5_EL1);
> +        c->isa32.bits[0] = READ_SYSREG(ID_ISAR0_EL1);
> +        c->isa32.bits[1] = READ_SYSREG(ID_ISAR1_EL1);
> +        c->isa32.bits[2] = READ_SYSREG(ID_ISAR2_EL1);
> +        c->isa32.bits[3] = READ_SYSREG(ID_ISAR3_EL1);
> +        c->isa32.bits[4] = READ_SYSREG(ID_ISAR4_EL1);
> +        c->isa32.bits[5] = READ_SYSREG(ID_ISAR5_EL1);
>  }
>  
>  /*
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 23:22:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 23:22:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56223.98290 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq2bQ-0003GK-9G; Thu, 17 Dec 2020 23:22:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56223.98290; Thu, 17 Dec 2020 23:22:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq2bQ-0003GD-5W; Thu, 17 Dec 2020 23:22:32 +0000
Received: by outflank-mailman (input) for mailman id 56223;
 Thu, 17 Dec 2020 23:22:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BHja=FV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kq2bO-0003G8-Dl
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 23:22:30 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8270b16e-abd4-49b6-88a9-def898365b9f;
 Thu, 17 Dec 2020 23:22:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8270b16e-abd4-49b6-88a9-def898365b9f
Date: Thu, 17 Dec 2020 15:22:27 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1608247349;
	bh=r2cH5Rjw995NLnRT2x0LpBjqG0CjcRRjRgtPySDA2HA=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=piJMEy4mwRh4R5jehhSD0dObaIA/9QqAUCkLzvs4Ajf31R8pdNAbm1ijSviiY6K1w
	 RSuQ29izl4GQ09gkvtP/IudPW3Kwfqe9WM9Ucwak1+/C7mx//0Z4TgtOJaCSiAt+L1
	 68X/M50o8erf3VpF8qYJWwZbbv1f/yU67H4RIzpols37pmRsWIHVdrif9zY50X9OZV
	 aO+gaxlS2j4jbXHZ4K/+0f/44zH8oj+IUCvpa+RFqPdr9uXk8tiCbjy/thGVL7TtXE
	 ubPBnJojzkLBQHZLVnDt0OGgH4nsr3j3oQ3GyKA1U7MuY267++b/ytJr/06ELTSiL9
	 CaQfKx46pLfZw==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <bertrand.marquis@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v4 2/8] xen/arm: Add ID registers and complete cpuinfo
In-Reply-To: <31d3537b11ba1a7531f1e3a38ba3b1e694a1224b.1608214355.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.21.2012171521080.4040@sstabellini-ThinkPad-T480s>
References: <cover.1608214355.git.bertrand.marquis@arm.com> <31d3537b11ba1a7531f1e3a38ba3b1e694a1224b.1608214355.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 17 Dec 2020, Bertrand Marquis wrote:
> Add definitions and entries in cpuinfo for ID registers introduced in
> newer versions of the Arm Architecture Reference Manual:
> - ID_PFR2: Processor Feature Register 2
> - ID_DFR1: Debug Feature Register 1
> - ID_MMFR4 and ID_MMFR5: Memory Model Feature Registers 4 and 5
> - ID_ISAR6: ISA Feature Register 6
> Add more bitfield definitions in PFR fields of cpuinfo.
> Add MVFR2 register definition for aarch32.
> Add MVFRx_EL1 defines for aarch32.
> Add mvfr values in cpuinfo.
> Add some register definitions for arm64 in sysregs as some are not
> always known by compilers.
> Initialize the new values added in cpuinfo in identify_cpu during init.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Changes in V2:
>   Fix dbg32 table size and add proper initialisation of the second entry
>   of the table by reading ID_DFR1 register.
> Changes in V3:
>   Fix typo in commit title
>   Add MVFR2 definition and handling on aarch32 and remove specific case
>   for mvfr field in cpuinfo (now the same on arm64 and arm32).
>   Add MMFR4 definition if not known by the compiler.
> Changes in V4:
>   Add MVFRx_EL1 defines for aarch32
>   Use READ_SYSREG instead of the 32/64 versions of the function, which
>   removes the ifdef case for MVFR access.
>   Use register_t type for the mvfr and zfr64 fields of the cpuinfo
>   structure.
> 
> ---
>  xen/arch/arm/cpufeature.c           | 12 +++++++
>  xen/include/asm-arm/arm64/sysregs.h | 28 +++++++++++++++
>  xen/include/asm-arm/cpregs.h        | 15 ++++++++
>  xen/include/asm-arm/cpufeature.h    | 56 ++++++++++++++++++++++++-----
>  4 files changed, 102 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
> index 115e1b164d..86b99ee960 100644
> --- a/xen/arch/arm/cpufeature.c
> +++ b/xen/arch/arm/cpufeature.c
> @@ -114,15 +114,20 @@ void identify_cpu(struct cpuinfo_arm *c)
>  
>          c->mm64.bits[0]  = READ_SYSREG(ID_AA64MMFR0_EL1);
>          c->mm64.bits[1]  = READ_SYSREG(ID_AA64MMFR1_EL1);
> +        c->mm64.bits[2]  = READ_SYSREG(ID_AA64MMFR2_EL1);
>  
>          c->isa64.bits[0] = READ_SYSREG(ID_AA64ISAR0_EL1);
>          c->isa64.bits[1] = READ_SYSREG(ID_AA64ISAR1_EL1);
> +
> +        c->zfr64.bits[0] = READ_SYSREG(ID_AA64ZFR0_EL1);
>  #endif
>  
>          c->pfr32.bits[0] = READ_SYSREG(ID_PFR0_EL1);
>          c->pfr32.bits[1] = READ_SYSREG(ID_PFR1_EL1);
> +        c->pfr32.bits[2] = READ_SYSREG(ID_PFR2_EL1);
>  
>          c->dbg32.bits[0] = READ_SYSREG(ID_DFR0_EL1);
> +        c->dbg32.bits[1] = READ_SYSREG(ID_DFR1_EL1);
>  
>          c->aux32.bits[0] = READ_SYSREG(ID_AFR0_EL1);
>  
> @@ -130,6 +135,8 @@ void identify_cpu(struct cpuinfo_arm *c)
>          c->mm32.bits[1]  = READ_SYSREG(ID_MMFR1_EL1);
>          c->mm32.bits[2]  = READ_SYSREG(ID_MMFR2_EL1);
>          c->mm32.bits[3]  = READ_SYSREG(ID_MMFR3_EL1);
> +        c->mm32.bits[4]  = READ_SYSREG(ID_MMFR4_EL1);
> +        c->mm32.bits[5]  = READ_SYSREG(ID_MMFR5_EL1);
>  
>          c->isa32.bits[0] = READ_SYSREG(ID_ISAR0_EL1);
>          c->isa32.bits[1] = READ_SYSREG(ID_ISAR1_EL1);
> @@ -137,6 +144,11 @@ void identify_cpu(struct cpuinfo_arm *c)
>          c->isa32.bits[3] = READ_SYSREG(ID_ISAR3_EL1);
>          c->isa32.bits[4] = READ_SYSREG(ID_ISAR4_EL1);
>          c->isa32.bits[5] = READ_SYSREG(ID_ISAR5_EL1);
> +        c->isa32.bits[6] = READ_SYSREG(ID_ISAR6_EL1);
> +
> +        c->mvfr.bits[0] = READ_SYSREG(MVFR0_EL1);
> +        c->mvfr.bits[1] = READ_SYSREG(MVFR1_EL1);
> +        c->mvfr.bits[2] = READ_SYSREG(MVFR2_EL1);
>  }
>  
>  /*
> diff --git a/xen/include/asm-arm/arm64/sysregs.h b/xen/include/asm-arm/arm64/sysregs.h
> index c60029d38f..077fd95fb7 100644
> --- a/xen/include/asm-arm/arm64/sysregs.h
> +++ b/xen/include/asm-arm/arm64/sysregs.h
> @@ -57,6 +57,34 @@
>  #define ICH_AP1R2_EL2             __AP1Rx_EL2(2)
>  #define ICH_AP1R3_EL2             __AP1Rx_EL2(3)
>  
> +/*
> + * Define ID coprocessor registers if they are not
> + * already defined by the compiler.
> + *
> + * Values picked from linux kernel
> + */
> +#ifndef ID_AA64MMFR2_EL1
> +#define ID_AA64MMFR2_EL1            S3_0_C0_C7_2
> +#endif
> +#ifndef ID_PFR2_EL1
> +#define ID_PFR2_EL1                 S3_0_C0_C3_4
> +#endif
> +#ifndef ID_MMFR4_EL1
> +#define ID_MMFR4_EL1                S3_0_C0_C2_6
> +#endif
> +#ifndef ID_MMFR5_EL1
> +#define ID_MMFR5_EL1                S3_0_C0_C3_6
> +#endif
> +#ifndef ID_ISAR6_EL1
> +#define ID_ISAR6_EL1                S3_0_C0_C2_7
> +#endif
> +#ifndef ID_AA64ZFR0_EL1
> +#define ID_AA64ZFR0_EL1             S3_0_C0_C4_4
> +#endif
> +#ifndef ID_DFR1_EL1
> +#define ID_DFR1_EL1                 S3_0_C0_C3_5
> +#endif
> +
>  /* Access to system registers */
>  
>  #define READ_SYSREG32(name) ((uint32_t)READ_SYSREG64(name))
> diff --git a/xen/include/asm-arm/cpregs.h b/xen/include/asm-arm/cpregs.h
> index 8fd344146e..6daf2b1a30 100644
> --- a/xen/include/asm-arm/cpregs.h
> +++ b/xen/include/asm-arm/cpregs.h
> @@ -63,6 +63,8 @@
>  #define FPSID           p10,7,c0,c0,0   /* Floating-Point System ID Register */
>  #define FPSCR           p10,7,c1,c0,0   /* Floating-Point Status and Control Register */
>  #define MVFR0           p10,7,c7,c0,0   /* Media and VFP Feature Register 0 */
> +#define MVFR1           p10,7,c6,c0,0   /* Media and VFP Feature Register 1 */
> +#define MVFR2           p10,7,c5,c0,0   /* Media and VFP Feature Register 2 */
>  #define FPEXC           p10,7,c8,c0,0   /* Floating-Point Exception Control Register */
>  #define FPINST          p10,7,c9,c0,0   /* Floating-Point Instruction Register */
>  #define FPINST2         p10,7,c10,c0,0  /* Floating-point Instruction Register 2 */
> @@ -108,18 +110,23 @@
>  #define MPIDR           p15,0,c0,c0,5   /* Multiprocessor Affinity Register */
>  #define ID_PFR0         p15,0,c0,c1,0   /* Processor Feature Register 0 */
>  #define ID_PFR1         p15,0,c0,c1,1   /* Processor Feature Register 1 */
> +#define ID_PFR2         p15,0,c0,c3,4   /* Processor Feature Register 2 */
>  #define ID_DFR0         p15,0,c0,c1,2   /* Debug Feature Register 0 */
> +#define ID_DFR1         p15,0,c0,c3,5   /* Debug Feature Register 1 */
>  #define ID_AFR0         p15,0,c0,c1,3   /* Auxiliary Feature Register 0 */
>  #define ID_MMFR0        p15,0,c0,c1,4   /* Memory Model Feature Register 0 */
>  #define ID_MMFR1        p15,0,c0,c1,5   /* Memory Model Feature Register 1 */
>  #define ID_MMFR2        p15,0,c0,c1,6   /* Memory Model Feature Register 2 */
>  #define ID_MMFR3        p15,0,c0,c1,7   /* Memory Model Feature Register 3 */
> +#define ID_MMFR4        p15,0,c0,c2,6   /* Memory Model Feature Register 4 */
> +#define ID_MMFR5        p15,0,c0,c3,6   /* Memory Model Feature Register 5 */
>  #define ID_ISAR0        p15,0,c0,c2,0   /* ISA Feature Register 0 */
>  #define ID_ISAR1        p15,0,c0,c2,1   /* ISA Feature Register 1 */
>  #define ID_ISAR2        p15,0,c0,c2,2   /* ISA Feature Register 2 */
>  #define ID_ISAR3        p15,0,c0,c2,3   /* ISA Feature Register 3 */
>  #define ID_ISAR4        p15,0,c0,c2,4   /* ISA Feature Register 4 */
>  #define ID_ISAR5        p15,0,c0,c2,5   /* ISA Feature Register 5 */
> +#define ID_ISAR6        p15,0,c0,c2,7   /* ISA Feature Register 6 */
>  #define CCSIDR          p15,1,c0,c0,0   /* Cache Size ID Registers */
>  #define CLIDR           p15,1,c0,c0,1   /* Cache Level ID Register */
>  #define CSSELR          p15,2,c0,c0,0   /* Cache Size Selection Register */
> @@ -312,18 +319,23 @@
>  #define HSTR_EL2                HSTR
>  #define ID_AFR0_EL1             ID_AFR0
>  #define ID_DFR0_EL1             ID_DFR0
> +#define ID_DFR1_EL1             ID_DFR1
>  #define ID_ISAR0_EL1            ID_ISAR0
>  #define ID_ISAR1_EL1            ID_ISAR1
>  #define ID_ISAR2_EL1            ID_ISAR2
>  #define ID_ISAR3_EL1            ID_ISAR3
>  #define ID_ISAR4_EL1            ID_ISAR4
>  #define ID_ISAR5_EL1            ID_ISAR5
> +#define ID_ISAR6_EL1            ID_ISAR6
>  #define ID_MMFR0_EL1            ID_MMFR0
>  #define ID_MMFR1_EL1            ID_MMFR1
>  #define ID_MMFR2_EL1            ID_MMFR2
>  #define ID_MMFR3_EL1            ID_MMFR3
> +#define ID_MMFR4_EL1            ID_MMFR4
> +#define ID_MMFR5_EL1            ID_MMFR5
>  #define ID_PFR0_EL1             ID_PFR0
>  #define ID_PFR1_EL1             ID_PFR1
> +#define ID_PFR2_EL1             ID_PFR2
>  #define IFSR32_EL2              IFSR
>  #define MDCR_EL2                HDCR
>  #define MIDR_EL1                MIDR
> @@ -347,6 +359,9 @@
>  #define VPIDR_EL2               VPIDR
>  #define VTCR_EL2                VTCR
>  #define VTTBR_EL2               VTTBR
> +#define MVFR0_EL1               MVFR0
> +#define MVFR1_EL1               MVFR1
> +#define MVFR2_EL1               MVFR2
>  #endif
>  
>  #endif
> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
> index c7b5052992..74139be1cc 100644
> --- a/xen/include/asm-arm/cpufeature.h
> +++ b/xen/include/asm-arm/cpufeature.h
> @@ -148,6 +148,7 @@ struct cpuinfo_arm {
>      union {
>          uint64_t bits[2];
>          struct {
> +            /* PFR0 */
>              unsigned long el0:4;
>              unsigned long el1:4;
>              unsigned long el2:4;
> @@ -155,9 +156,23 @@ struct cpuinfo_arm {
>              unsigned long fp:4;   /* Floating Point */
>              unsigned long simd:4; /* Advanced SIMD */
>              unsigned long gic:4;  /* GIC support */
> -            unsigned long __res0:28;
> +            unsigned long ras:4;
> +            unsigned long sve:4;
> +            unsigned long sel2:4;
> +            unsigned long mpam:4;
> +            unsigned long amu:4;
> +            unsigned long dit:4;
> +            unsigned long __res0:4;
>              unsigned long csv2:4;
> -            unsigned long __res1:4;
> +            unsigned long cvs3:4;
> +
> +            /* PFR1 */
> +            unsigned long bt:4;
> +            unsigned long ssbs:4;
> +            unsigned long mte:4;
> +            unsigned long ras_frac:4;
> +            unsigned long mpam_frac:4;
> +            unsigned long __res1:44;
>          };
>      } pfr64;
>  
> @@ -170,7 +185,7 @@ struct cpuinfo_arm {
>      } aux64;
>  
>      union {
> -        uint64_t bits[2];
> +        uint64_t bits[3];
>          struct {
>              unsigned long pa_range:4;
>              unsigned long asid_bits:4;
> @@ -190,6 +205,8 @@ struct cpuinfo_arm {
>              unsigned long pan:4;
>              unsigned long __res1:8;
>              unsigned long __res2:32;
> +
> +            unsigned long __res3:64;
>          };
>      } mm64;
>  
> @@ -197,6 +214,10 @@ struct cpuinfo_arm {
>          uint64_t bits[2];
>      } isa64;
>  
> +    struct {
> +        register_t bits[1];
> +    } zfr64;
> +
>  #endif
>  
>      /*
> @@ -204,25 +225,38 @@ struct cpuinfo_arm {
>       * when running in 32-bit mode.
>       */
>      union {
> -        uint32_t bits[2];
> +        uint32_t bits[3];
>          struct {
> +            /* PFR0 */
>              unsigned long arm:4;
>              unsigned long thumb:4;
>              unsigned long jazelle:4;
>              unsigned long thumbee:4;
> -            unsigned long __res0:16;
> +            unsigned long csv2:4;
> +            unsigned long amu:4;
> +            unsigned long dit:4;
> +            unsigned long ras:4;
>  
> +            /* PFR1 */
>              unsigned long progmodel:4;
>              unsigned long security:4;
>              unsigned long mprofile:4;
>              unsigned long virt:4;
>              unsigned long gentimer:4;
> -            unsigned long __res1:12;
> +            unsigned long sec_frac:4;
> +            unsigned long virt_frac:4;
> +            unsigned long gic:4;
> +
> +            /* PFR2 */
> +            unsigned long csv3:4;
> +            unsigned long ssbs:4;
> +            unsigned long ras_frac:4;
> +            unsigned long __res2:20;
>          };
>      } pfr32;
>  
>      struct {
> -        uint32_t bits[1];
> +        uint32_t bits[2];
>      } dbg32;
>  
>      struct {
> @@ -230,12 +264,16 @@ struct cpuinfo_arm {
>      } aux32;
>  
>      struct {
> -        uint32_t bits[4];
> +        uint32_t bits[6];
>      } mm32;
>  
>      struct {
> -        uint32_t bits[6];
> +        uint32_t bits[7];
>      } isa32;
> +
> +    struct {
> +        register_t bits[3];
> +    } mvfr;
>  };
>  
>  extern struct cpuinfo_arm boot_cpu_data;
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 23:25:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 23:25:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56227.98302 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq2dn-0003Oi-QF; Thu, 17 Dec 2020 23:24:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56227.98302; Thu, 17 Dec 2020 23:24:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq2dn-0003Ob-Mi; Thu, 17 Dec 2020 23:24:59 +0000
Received: by outflank-mailman (input) for mailman id 56227;
 Thu, 17 Dec 2020 23:24:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BHja=FV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kq2dm-0003OV-Ft
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 23:24:58 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1ff1c2d5-9da5-4ae5-aed4-47f894e3e0a0;
 Thu, 17 Dec 2020 23:24:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ff1c2d5-9da5-4ae5-aed4-47f894e3e0a0
Date: Thu, 17 Dec 2020 15:24:56 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1608247497;
	bh=EAnYLRRwGyHq1agi3f6Usn0mIWXXHh/yRHTA4076Mxc=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=tF0JUMTSa58k3VUW5ZmbRlK75dvK1/Yc9uaNTLcqhSGNqVx34716cvknINAuw9qyo
	 hA0iiZ7cLs/WQDuM8z8YzWGxCFumADPQqoUufXPIY0qVGmWIOdpIzGMMrIP5o5Bz0j
	 j/MystMVlDFj0IZwsVZxUJCkzIE+PoTF2PmosWqt/D7XfWeHxdWJ3lnEANuU/40HCT
	 o00yJKOwOyJY2uu9Maa5t1qYyg7pRZnI5/MYsB2zJ8zZIs/rPZpMaRfvIBxNeN8597
	 MUSfPsV/HZz/fG9CDJ/XsNxoJzZDplzJ6/unaemsGo5dIkOQ8N/nkhBiSTd/2XVOsQ
	 uVgo+UXg8cong==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <bertrand.marquis@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v4 3/8] xen/arm: Add arm64 ID registers definitions
In-Reply-To: <905822b31f5494bf20e1e2a0a56f935db0550aef.1608214355.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.21.2012171523290.4040@sstabellini-ThinkPad-T480s>
References: <cover.1608214355.git.bertrand.marquis@arm.com> <905822b31f5494bf20e1e2a0a56f935db0550aef.1608214355.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 17 Dec 2020, Bertrand Marquis wrote:
> Add coprocessor registers definitions for all ID registers trapped
> through the TID3 bit of HSR.
> Those are the ones that will be emulated in Xen to only publish to guests
> the features that are supported by Xen and that are accessible to
> guests.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Changes in V2: Rebase
> Changes in V3:
>   Add case definition for reserved registers.
> Changes in V4:
>   Remove case definition for reserved registers and move it to the code
>   directly.
> 
> ---
>  xen/include/asm-arm/arm64/hsr.h | 37 +++++++++++++++++++++++++++++++++
>  1 file changed, 37 insertions(+)
> 
> diff --git a/xen/include/asm-arm/arm64/hsr.h b/xen/include/asm-arm/arm64/hsr.h
> index ca931dd2fe..e691d41c17 100644
> --- a/xen/include/asm-arm/arm64/hsr.h
> +++ b/xen/include/asm-arm/arm64/hsr.h
> @@ -110,6 +110,43 @@
>  #define HSR_SYSREG_CNTP_CTL_EL0   HSR_SYSREG(3,3,c14,c2,1)
>  #define HSR_SYSREG_CNTP_CVAL_EL0  HSR_SYSREG(3,3,c14,c2,2)
>  
> +/* Those registers are used when HCR_EL2.TID3 is set */
> +#define HSR_SYSREG_ID_PFR0_EL1    HSR_SYSREG(3,0,c0,c1,0)
> +#define HSR_SYSREG_ID_PFR1_EL1    HSR_SYSREG(3,0,c0,c1,1)
> +#define HSR_SYSREG_ID_PFR2_EL1    HSR_SYSREG(3,0,c0,c3,4)
> +#define HSR_SYSREG_ID_DFR0_EL1    HSR_SYSREG(3,0,c0,c1,2)
> +#define HSR_SYSREG_ID_DFR1_EL1    HSR_SYSREG(3,0,c0,c3,5)
> +#define HSR_SYSREG_ID_AFR0_EL1    HSR_SYSREG(3,0,c0,c1,3)
> +#define HSR_SYSREG_ID_MMFR0_EL1   HSR_SYSREG(3,0,c0,c1,4)
> +#define HSR_SYSREG_ID_MMFR1_EL1   HSR_SYSREG(3,0,c0,c1,5)
> +#define HSR_SYSREG_ID_MMFR2_EL1   HSR_SYSREG(3,0,c0,c1,6)
> +#define HSR_SYSREG_ID_MMFR3_EL1   HSR_SYSREG(3,0,c0,c1,7)
> +#define HSR_SYSREG_ID_MMFR4_EL1   HSR_SYSREG(3,0,c0,c2,6)
> +#define HSR_SYSREG_ID_MMFR5_EL1   HSR_SYSREG(3,0,c0,c3,6)
> +#define HSR_SYSREG_ID_ISAR0_EL1   HSR_SYSREG(3,0,c0,c2,0)
> +#define HSR_SYSREG_ID_ISAR1_EL1   HSR_SYSREG(3,0,c0,c2,1)
> +#define HSR_SYSREG_ID_ISAR2_EL1   HSR_SYSREG(3,0,c0,c2,2)
> +#define HSR_SYSREG_ID_ISAR3_EL1   HSR_SYSREG(3,0,c0,c2,3)
> +#define HSR_SYSREG_ID_ISAR4_EL1   HSR_SYSREG(3,0,c0,c2,4)
> +#define HSR_SYSREG_ID_ISAR5_EL1   HSR_SYSREG(3,0,c0,c2,5)
> +#define HSR_SYSREG_ID_ISAR6_EL1   HSR_SYSREG(3,0,c0,c2,7)
> +#define HSR_SYSREG_MVFR0_EL1      HSR_SYSREG(3,0,c0,c3,0)
> +#define HSR_SYSREG_MVFR1_EL1      HSR_SYSREG(3,0,c0,c3,1)
> +#define HSR_SYSREG_MVFR2_EL1      HSR_SYSREG(3,0,c0,c3,2)
> +
> +#define HSR_SYSREG_ID_AA64PFR0_EL1   HSR_SYSREG(3,0,c0,c4,0)
> +#define HSR_SYSREG_ID_AA64PFR1_EL1   HSR_SYSREG(3,0,c0,c4,1)
> +#define HSR_SYSREG_ID_AA64DFR0_EL1   HSR_SYSREG(3,0,c0,c5,0)
> +#define HSR_SYSREG_ID_AA64DFR1_EL1   HSR_SYSREG(3,0,c0,c5,1)
> +#define HSR_SYSREG_ID_AA64ISAR0_EL1  HSR_SYSREG(3,0,c0,c6,0)
> +#define HSR_SYSREG_ID_AA64ISAR1_EL1  HSR_SYSREG(3,0,c0,c6,1)
> +#define HSR_SYSREG_ID_AA64MMFR0_EL1  HSR_SYSREG(3,0,c0,c7,0)
> +#define HSR_SYSREG_ID_AA64MMFR1_EL1  HSR_SYSREG(3,0,c0,c7,1)
> +#define HSR_SYSREG_ID_AA64MMFR2_EL1  HSR_SYSREG(3,0,c0,c7,2)
> +#define HSR_SYSREG_ID_AA64AFR0_EL1   HSR_SYSREG(3,0,c0,c5,4)
> +#define HSR_SYSREG_ID_AA64AFR1_EL1   HSR_SYSREG(3,0,c0,c5,5)
> +#define HSR_SYSREG_ID_AA64ZFR0_EL1   HSR_SYSREG(3,0,c0,c4,4)
> +
>  #endif /* __ASM_ARM_ARM64_HSR_H */
>  
>  /*
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 23:26:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 23:26:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56231.98314 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq2ey-0003WH-4g; Thu, 17 Dec 2020 23:26:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56231.98314; Thu, 17 Dec 2020 23:26:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq2ey-0003WA-1Q; Thu, 17 Dec 2020 23:26:12 +0000
Received: by outflank-mailman (input) for mailman id 56231;
 Thu, 17 Dec 2020 23:26:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BHja=FV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kq2ex-0003W5-34
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 23:26:11 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7f873621-4f89-4d86-9359-0f759ed58a00;
 Thu, 17 Dec 2020 23:26:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f873621-4f89-4d86-9359-0f759ed58a00
Date: Thu, 17 Dec 2020 15:26:08 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1608247569;
	bh=E6DbtqAOjZ4mq9uACHCis2MyEhl93Q8NC67++Lb+gjg=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=pRNQjrCVHFtkH25vJHg4ljLuP43zS+bjrtJ+mjVoAElKXqQX6JEdcssMamKcDOarz
	 fsvki6fQ+JG8ebtH1kF4NunbMq9rCL9c+UkYtPogZ9qEIy1Cft/GDPo1ydLH8Sw4WO
	 Dc0Xd5e4enqxbquNlWeXCjw8hIn3dmTum6ADGwnwTIY3a7vZuPq92qXZpv+ropgYzk
	 NZbuwCer7+Dro7v2kFPcq1qpbuf7LIq6MA3gwtE4buPe2mzK9AKClEiOxPHN7H9UAz
	 eiTKJuz4uAjLLXino3R5yUPpG7QGsfQlLnCjCYMRaQbrrgP6shHFea0MtYvscgqByI
	 VE8kiKtSoml0w==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <bertrand.marquis@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v4 4/8] xen/arm: create a cpuinfo structure for guest
In-Reply-To: <8a93d20d20fae570c83c4d7bea0c882735496f34.1608214355.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.21.2012171526010.4040@sstabellini-ThinkPad-T480s>
References: <cover.1608214355.git.bertrand.marquis@arm.com> <8a93d20d20fae570c83c4d7bea0c882735496f34.1608214355.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 17 Dec 2020, Bertrand Marquis wrote:
> Create a cpuinfo structure for guests and mask into it the features that
> we do not support in Xen or that we do not want to publish to guests.
> 
> Modify some values in the guest cpuinfo structure to hide some
> processor features which we do not want to allow to guests or do not
> support:
> - SVE, as this is not supported by Xen and guests are not allowed to use
> this feature (ZEN is set to 0 in CPTR_EL2).
> - AMU, as HCPTR_TAM is set in CPTR_EL2 so AMU cannot be used by guests.
> - RAS, as this is not supported by Xen.
> All other bits are left untouched.
> 
> The code tries to group register modifications for the same feature
> together, so that in the long term a feature can easily be enabled or
> disabled depending on user parameters, or other register modifications
> (like enabling/disabling HCR bits) added in the same place.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Changes in V2: Rebase
> Changes in V3:
>   Use current_cpu_data info instead of recalling identify_cpu
> Changes in V4:
>   Use boot_cpu_data instead of current_cpu_data
>   Use "hide XX support" instead of "disable" as this part of the code
>   only hides features from guests rather than disabling them (that is
>   done through the HCR register).
>   Modify commit message to be clearer about what is done in
>   guest_cpuinfo.
> 
> ---
>  xen/arch/arm/cpufeature.c        | 51 ++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/cpufeature.h |  2 ++
>  2 files changed, 53 insertions(+)
> 
> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
> index 86b99ee960..1f6a85aafe 100644
> --- a/xen/arch/arm/cpufeature.c
> +++ b/xen/arch/arm/cpufeature.c
> @@ -24,6 +24,8 @@
>  
>  DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
>  
> +struct cpuinfo_arm __read_mostly guest_cpuinfo;
> +
>  void update_cpu_capabilities(const struct arm_cpu_capabilities *caps,
>                               const char *info)
>  {
> @@ -151,6 +153,55 @@ void identify_cpu(struct cpuinfo_arm *c)
>          c->mvfr.bits[2] = READ_SYSREG(MVFR2_EL1);
>  }
>  
> +/*
> + * This function creates a cpuinfo structure with values modified to mask
> + * all cpu features that should not be published to guests.
> + * The created structure is then used to provide ID register values to guests.
> + */
> +static int __init create_guest_cpuinfo(void)
> +{
> +    /*
> +     * TODO: The code is currently using only the features detected on the boot
> +     * core. In the long term we should try to compute values containing only
> +     * features supported by all cores.
> +     */
> +    guest_cpuinfo = boot_cpu_data;
> +
> +#ifdef CONFIG_ARM_64
> +    /* Hide MPAM support as Xen does not support it */
> +    guest_cpuinfo.pfr64.mpam = 0;
> +    guest_cpuinfo.pfr64.mpam_frac = 0;
> +
> +    /* Hide SVE as Xen does not support it */
> +    guest_cpuinfo.pfr64.sve = 0;
> +    guest_cpuinfo.zfr64.bits[0] = 0;
> +
> +    /* Hide MTE support as Xen does not support it */
> +    guest_cpuinfo.pfr64.mte = 0;
> +#endif
> +
> +    /* Hide AMU support */
> +#ifdef CONFIG_ARM_64
> +    guest_cpuinfo.pfr64.amu = 0;
> +#endif
> +    guest_cpuinfo.pfr32.amu = 0;
> +
> +    /* Hide RAS support as Xen does not support it */
> +#ifdef CONFIG_ARM_64
> +    guest_cpuinfo.pfr64.ras = 0;
> +    guest_cpuinfo.pfr64.ras_frac = 0;
> +#endif
> +    guest_cpuinfo.pfr32.ras = 0;
> +    guest_cpuinfo.pfr32.ras_frac = 0;
> +
> +    return 0;
> +}
> +/*
> + * This function needs to be run after all SMP cores are started to have
> + * cpuinfo structures for all cores.
> + */
> +__initcall(create_guest_cpuinfo);
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
> index 74139be1cc..6058744c18 100644
> --- a/xen/include/asm-arm/cpufeature.h
> +++ b/xen/include/asm-arm/cpufeature.h
> @@ -283,6 +283,8 @@ extern void identify_cpu(struct cpuinfo_arm *);
>  extern struct cpuinfo_arm cpu_data[];
>  #define current_cpu_data cpu_data[smp_processor_id()]
>  
> +extern struct cpuinfo_arm guest_cpuinfo;
> +
>  #endif /* __ASSEMBLY__ */
>  
>  #endif
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 23:31:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 23:31:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56235.98326 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq2jq-0004XP-Nu; Thu, 17 Dec 2020 23:31:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56235.98326; Thu, 17 Dec 2020 23:31:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq2jq-0004XI-Ku; Thu, 17 Dec 2020 23:31:14 +0000
Received: by outflank-mailman (input) for mailman id 56235;
 Thu, 17 Dec 2020 23:31:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BHja=FV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kq2jp-0004XD-Hj
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 23:31:13 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f4987853-634d-4568-b25c-dec394798864;
 Thu, 17 Dec 2020 23:31:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f4987853-634d-4568-b25c-dec394798864
Date: Thu, 17 Dec 2020 15:31:11 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1608247872;
	bh=wK/yrqisqWMvh7xigLLpmrN2yP+G+H9OmuC+/gNgWKE=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=UxUpzZzmvgwaKjTB0+/AKhxlH/neVNISME4/lp4V6HH3k1pseorFQ4key28CbNVBA
	 X7twEMpvgVZsq1TVjHX9aTi5/sIWYM02LZ/PS5Ds0F2MMinHxCowfDzDMMVeesbp8R
	 +5NRgSCeVjY6cRWqp0XjRjT0Sc7P/RftLFmDrriLzxAc/Klkm+jSJUVO2rQaAqp9yk
	 or+pWPAHTFhzsvfgMDlg6dSSCuT6WL3wj8mSVmuRx2H9lxHQrz/puMZ19F4TmGCk1h
	 M4SZ6Qx2rqzz41IBD53vO94MLNSrXuQqq6Viot2HAaSHzht4/wdLJTU4r7PoFNaKz+
	 xBFByi29uT01w==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <bertrand.marquis@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v4 5/8] xen/arm: Add handler for ID registers on arm64
In-Reply-To: <46c4c7e8ec64a48ecefd894d436c116bab5d4a86.1608214355.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.21.2012171529570.4040@sstabellini-ThinkPad-T480s>
References: <cover.1608214355.git.bertrand.marquis@arm.com> <46c4c7e8ec64a48ecefd894d436c116bab5d4a86.1608214355.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 17 Dec 2020, Bertrand Marquis wrote:
> Add vsysreg emulation for registers trapped when the TID3 bit is
> activated in HCR.
> The emulation returns the value stored in the guest_cpuinfo structure
> for known registers and handles reserved registers as RAZ.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Changes in V2: Rebase
> Changes in V3:
>   Fix commit message
>   Fix code style for GENERATE_TID3_INFO declaration
>   Add handling of reserved registers as RAZ.
> Changes in V4:
>   Fix indentation in GENERATE_TID3_INFO macro
>   Add explicit case code for reserved registers
> 
> ---
>  xen/arch/arm/arm64/vsysreg.c | 82 ++++++++++++++++++++++++++++++++++++
>  1 file changed, 82 insertions(+)
> 
> diff --git a/xen/arch/arm/arm64/vsysreg.c b/xen/arch/arm/arm64/vsysreg.c
> index 8a85507d9d..41f18612c6 100644
> --- a/xen/arch/arm/arm64/vsysreg.c
> +++ b/xen/arch/arm/arm64/vsysreg.c
> @@ -69,6 +69,14 @@ TVM_REG(CONTEXTIDR_EL1)
>          break;                                                          \
>      }
>  
> +/* Macro to easily generate a case for ID register emulation */
> +#define GENERATE_TID3_INFO(reg, field, offset)                          \
> +    case HSR_SYSREG_##reg:                                              \
> +    {                                                                   \
> +        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr,   \
> +                                  1, guest_cpuinfo.field.bits[offset]); \
> +    }
> +
>  void do_sysreg(struct cpu_user_regs *regs,
>                 const union hsr hsr)
>  {
> @@ -259,6 +267,80 @@ void do_sysreg(struct cpu_user_regs *regs,
>           */
>          return handle_raz_wi(regs, regidx, hsr.sysreg.read, hsr, 1);
>  
> +    /*
> +     * HCR_EL2.TID3
> +     *
> +     * This is trapping most Identification registers used by a guest
> +     * to identify the processor features
> +     */
> +    GENERATE_TID3_INFO(ID_PFR0_EL1, pfr32, 0)
> +    GENERATE_TID3_INFO(ID_PFR1_EL1, pfr32, 1)
> +    GENERATE_TID3_INFO(ID_PFR2_EL1, pfr32, 2)
> +    GENERATE_TID3_INFO(ID_DFR0_EL1, dbg32, 0)
> +    GENERATE_TID3_INFO(ID_DFR1_EL1, dbg32, 1)
> +    GENERATE_TID3_INFO(ID_AFR0_EL1, aux32, 0)
> +    GENERATE_TID3_INFO(ID_MMFR0_EL1, mm32, 0)
> +    GENERATE_TID3_INFO(ID_MMFR1_EL1, mm32, 1)
> +    GENERATE_TID3_INFO(ID_MMFR2_EL1, mm32, 2)
> +    GENERATE_TID3_INFO(ID_MMFR3_EL1, mm32, 3)
> +    GENERATE_TID3_INFO(ID_MMFR4_EL1, mm32, 4)
> +    GENERATE_TID3_INFO(ID_MMFR5_EL1, mm32, 5)
> +    GENERATE_TID3_INFO(ID_ISAR0_EL1, isa32, 0)
> +    GENERATE_TID3_INFO(ID_ISAR1_EL1, isa32, 1)
> +    GENERATE_TID3_INFO(ID_ISAR2_EL1, isa32, 2)
> +    GENERATE_TID3_INFO(ID_ISAR3_EL1, isa32, 3)
> +    GENERATE_TID3_INFO(ID_ISAR4_EL1, isa32, 4)
> +    GENERATE_TID3_INFO(ID_ISAR5_EL1, isa32, 5)
> +    GENERATE_TID3_INFO(ID_ISAR6_EL1, isa32, 6)
> +    GENERATE_TID3_INFO(MVFR0_EL1, mvfr, 0)
> +    GENERATE_TID3_INFO(MVFR1_EL1, mvfr, 1)
> +    GENERATE_TID3_INFO(MVFR2_EL1, mvfr, 2)
> +    GENERATE_TID3_INFO(ID_AA64PFR0_EL1, pfr64, 0)
> +    GENERATE_TID3_INFO(ID_AA64PFR1_EL1, pfr64, 1)
> +    GENERATE_TID3_INFO(ID_AA64DFR0_EL1, dbg64, 0)
> +    GENERATE_TID3_INFO(ID_AA64DFR1_EL1, dbg64, 1)
> +    GENERATE_TID3_INFO(ID_AA64ISAR0_EL1, isa64, 0)
> +    GENERATE_TID3_INFO(ID_AA64ISAR1_EL1, isa64, 1)
> +    GENERATE_TID3_INFO(ID_AA64MMFR0_EL1, mm64, 0)
> +    GENERATE_TID3_INFO(ID_AA64MMFR1_EL1, mm64, 1)
> +    GENERATE_TID3_INFO(ID_AA64MMFR2_EL1, mm64, 2)
> +    GENERATE_TID3_INFO(ID_AA64AFR0_EL1, aux64, 0)
> +    GENERATE_TID3_INFO(ID_AA64AFR1_EL1, aux64, 1)
> +    GENERATE_TID3_INFO(ID_AA64ZFR0_EL1, zfr64, 0)
> +
> +    /*
> +     * These cases catch all reserved registers trapped by TID3 which
> +     * currently have no assignment.
> +     * HCR.TID3 traps all registers in group 3:
> +     * Op0 == 3, op1 == 0, CRn == c0, CRm == {c1-c7}, op2 == {0-7}.
> +     * Those registers are defined as RO in the Arm Architecture
> +     * Reference Manual, Armv8 (Chapter D12.3.2 of issue F.c), so handle
> +     * them as read-only, read-as-zero.
> +     */
> +    case HSR_SYSREG(3,0,c0,c3,3):
> +    case HSR_SYSREG(3,0,c0,c3,7):
> +    case HSR_SYSREG(3,0,c0,c4,2):
> +    case HSR_SYSREG(3,0,c0,c4,3):
> +    case HSR_SYSREG(3,0,c0,c4,5):
> +    case HSR_SYSREG(3,0,c0,c4,6):
> +    case HSR_SYSREG(3,0,c0,c4,7):
> +    case HSR_SYSREG(3,0,c0,c5,2):
> +    case HSR_SYSREG(3,0,c0,c5,3):
> +    case HSR_SYSREG(3,0,c0,c5,6):
> +    case HSR_SYSREG(3,0,c0,c5,7):
> +    case HSR_SYSREG(3,0,c0,c6,2):
> +    case HSR_SYSREG(3,0,c0,c6,3):
> +    case HSR_SYSREG(3,0,c0,c6,4):
> +    case HSR_SYSREG(3,0,c0,c6,5):
> +    case HSR_SYSREG(3,0,c0,c6,6):
> +    case HSR_SYSREG(3,0,c0,c6,7):
> +    case HSR_SYSREG(3,0,c0,c7,3):
> +    case HSR_SYSREG(3,0,c0,c7,4):
> +    case HSR_SYSREG(3,0,c0,c7,5):
> +    case HSR_SYSREG(3,0,c0,c7,6):
> +    case HSR_SYSREG(3,0,c0,c7,7):
> +        return handle_ro_raz(regs, regidx, hsr.sysreg.read, hsr, 1);
> +
>      /*
>       * HCR_EL2.TIDCP
>       *
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 23:38:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 23:38:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56239.98338 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq2qR-0004k7-An; Thu, 17 Dec 2020 23:38:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56239.98338; Thu, 17 Dec 2020 23:38:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq2qR-0004k0-7B; Thu, 17 Dec 2020 23:38:03 +0000
Received: by outflank-mailman (input) for mailman id 56239;
 Thu, 17 Dec 2020 23:38:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BHja=FV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kq2qQ-0004ju-Dx
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 23:38:02 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6abec9ff-643f-445c-bd8f-27faea984d89;
 Thu, 17 Dec 2020 23:38:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6abec9ff-643f-445c-bd8f-27faea984d89
Date: Thu, 17 Dec 2020 15:37:59 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1608248281;
	bh=9Wt/uaSVeHyiRKnHpHcxuvPNK6VeH4101Z3VK/JyMjI=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=Sn8bB/SjfXjHrj4rZDoY2XUciRiA9IPCBCEL4v7c8qWmohF1zGeTMAH1KyUH870L0
	 PjEMi6QA2zEG328hXoLQwO9IX+d2RoJsDnh2Ohx9G1kFRR1Ezsat268kA2E527Ftdi
	 UHGK/6xkcEzUBljp5EiUZUkzeIFZPRLwL35tsaDhC/jkHGRTnd5EBTO8mGCpNq9SD5
	 clnSt1S/m1hdfXNDp8T0LdoxfDfMTbvk5my8O6w5rN+IoYTA5vgcN6zO4x95pitlom
	 7nGo5HI7dyHJC4g7zT5zZxqREaeF9QWZNYNdDW9Lu/h8xgwxcJ88rZgV4oLnb9Fok3
	 K/lnr6iT3w9Qg==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <bertrand.marquis@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v4 6/8] xen/arm: Add handler for cp15 ID registers
In-Reply-To: <c1c68e89683913dbf71a8f370dc6fd896a9e8cce.1608214355.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.21.2012171533410.4040@sstabellini-ThinkPad-T480s>
References: <cover.1608214355.git.bertrand.marquis@arm.com> <c1c68e89683913dbf71a8f370dc6fd896a9e8cce.1608214355.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 17 Dec 2020, Bertrand Marquis wrote:
> Add support for emulation of cp15-based ID registers (on arm32 or when
> running a 32-bit guest on arm64).
> The handlers return the values stored in the guest_cpuinfo structure
> for known registers and RAZ for all reserved registers.
> In the current state the MVFR registers are not supported.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
> Changes in V2: Rebase
> Changes in V3:
>   Add case definition for reserved registers
>   Add handling of reserved registers as RAZ.
>   Fix code style in GENERATE_TID3_INFO declaration
> Changes in V4:
>   Fix comment for missing t (no to not)
>   Put cases for reserved registers directly in the code instead of using
>   a define in the cpregs.h header.
> 
> ---
>  xen/arch/arm/vcpreg.c | 65 +++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 65 insertions(+)
> 
> diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c
> index cdc91cdf5b..1fe07fe02a 100644
> --- a/xen/arch/arm/vcpreg.c
> +++ b/xen/arch/arm/vcpreg.c
> @@ -155,6 +155,24 @@ TVM_REG32(CONTEXTIDR, CONTEXTIDR_EL1)
>          break;                                                      \
>      }
>  
> +/* Macro to easily generate a case for ID co-processor emulation */
> +#define GENERATE_TID3_INFO(reg, field, offset)                      \
> +    case HSR_CPREG32(reg):                                          \
> +    {                                                               \
> +        return handle_ro_read_val(regs, regidx, cp32.read, hsr,     \
> +                          1, guest_cpuinfo.field.bits[offset]);     \

This line is misaligned, but it can be adjusted on commit

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>



> +    }
> +
> +/* helper to define cases for all registers for one CRm value */
> +#define HSR_CPREG32_TID3_CASES(REG)     case HSR_CPREG32(p15,0,c0,REG,0): \
> +                                        case HSR_CPREG32(p15,0,c0,REG,1): \
> +                                        case HSR_CPREG32(p15,0,c0,REG,2): \
> +                                        case HSR_CPREG32(p15,0,c0,REG,3): \
> +                                        case HSR_CPREG32(p15,0,c0,REG,4): \
> +                                        case HSR_CPREG32(p15,0,c0,REG,5): \
> +                                        case HSR_CPREG32(p15,0,c0,REG,6): \
> +                                        case HSR_CPREG32(p15,0,c0,REG,7)
> +
>  void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
>  {
>      const struct hsr_cp32 cp32 = hsr.cp32;
> @@ -286,6 +304,53 @@ void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
>           */
>          return handle_raz_wi(regs, regidx, cp32.read, hsr, 1);
>  
> +    /*
> +     * HCR_EL2.TID3
> +     *
> +     * This is trapping most Identification registers used by a guest
> +     * to identify the processor features
> +     */
> +    GENERATE_TID3_INFO(ID_PFR0, pfr32, 0)
> +    GENERATE_TID3_INFO(ID_PFR1, pfr32, 1)
> +    GENERATE_TID3_INFO(ID_PFR2, pfr32, 2)
> +    GENERATE_TID3_INFO(ID_DFR0, dbg32, 0)
> +    GENERATE_TID3_INFO(ID_DFR1, dbg32, 1)
> +    GENERATE_TID3_INFO(ID_AFR0, aux32, 0)
> +    GENERATE_TID3_INFO(ID_MMFR0, mm32, 0)
> +    GENERATE_TID3_INFO(ID_MMFR1, mm32, 1)
> +    GENERATE_TID3_INFO(ID_MMFR2, mm32, 2)
> +    GENERATE_TID3_INFO(ID_MMFR3, mm32, 3)
> +    GENERATE_TID3_INFO(ID_MMFR4, mm32, 4)
> +    GENERATE_TID3_INFO(ID_MMFR5, mm32, 5)
> +    GENERATE_TID3_INFO(ID_ISAR0, isa32, 0)
> +    GENERATE_TID3_INFO(ID_ISAR1, isa32, 1)
> +    GENERATE_TID3_INFO(ID_ISAR2, isa32, 2)
> +    GENERATE_TID3_INFO(ID_ISAR3, isa32, 3)
> +    GENERATE_TID3_INFO(ID_ISAR4, isa32, 4)
> +    GENERATE_TID3_INFO(ID_ISAR5, isa32, 5)
> +    GENERATE_TID3_INFO(ID_ISAR6, isa32, 6)
> +    /* MVFR registers are in cp10 not cp15 */
> +
> +    /*
> +     * These cases catch all reserved registers trapped by TID3 which
> +     * currently have no assignment.
> +     * HCR.TID3 traps all registers in group 3:
> +     * coproc == p15, opc1 == 0, CRn == c0, CRm == {c2-c7}, opc2 == {0-7}.
> +     * Those registers are defined as RO in the Arm Architecture
> +     * Reference Manual, Armv8 (Chapter D12.3.2 of issue F.c), so handle
> +     * them as read-only, read-as-zero.
> +     */
> +    case HSR_CPREG32(p15,0,c0,c3,0):
> +    case HSR_CPREG32(p15,0,c0,c3,1):
> +    case HSR_CPREG32(p15,0,c0,c3,2):
> +    case HSR_CPREG32(p15,0,c0,c3,3):
> +    case HSR_CPREG32(p15,0,c0,c3,7):
> +    HSR_CPREG32_TID3_CASES(c4):
> +    HSR_CPREG32_TID3_CASES(c5):
> +    HSR_CPREG32_TID3_CASES(c6):
> +    HSR_CPREG32_TID3_CASES(c7):
> +        return handle_ro_raz(regs, regidx, cp32.read, hsr, 1);
> +
>      /*
>       * HCR_EL2.TIDCP
>       *
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 23:40:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 23:40:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56243.98349 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq2t9-0005i5-OR; Thu, 17 Dec 2020 23:40:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56243.98349; Thu, 17 Dec 2020 23:40:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq2t9-0005hy-LW; Thu, 17 Dec 2020 23:40:51 +0000
Received: by outflank-mailman (input) for mailman id 56243;
 Thu, 17 Dec 2020 23:40:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BHja=FV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kq2t8-0005ht-Eu
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 23:40:50 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 86ba6404-e001-474e-91ca-abcd2191442c;
 Thu, 17 Dec 2020 23:40:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 86ba6404-e001-474e-91ca-abcd2191442c
Date: Thu, 17 Dec 2020 15:40:48 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1608248449;
	bh=H0wdoo90Hf8bEXLVCXOxZPi1Uw1VvbJ0fo935w8qQIU=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=Cj0Tt74ri2N+BGlbGkuZNqwoyNjewxPS9JRHhwVzmX6OXATcdIcT/GDzBbklYwqlV
	 uIzh8LzRio2C0HXWIUb/9BnQ+cvQo5qLBJ5nHKfs2u58UHiJQFAO7mN2h0hnzZJQxx
	 AV/PxucmC2G9t/s5g7jEZ0T++7VE/bURUZ4F752Yl9jzJ4HHCa3jHlM/g/hVRqdla7
	 XKXEgJ91uDiVrd0TKpqWMxLTYRghW0zACu9umpyDMa4X0RyjqiIyKUNUcrP2+Kaa4o
	 oitaXF5qHRdhq/BuArwHsMQeux29q1/Eac7fMsZ9xQUiUMe5MnSf1sHzk0f1CW9aD6
	 DKBo3OpG5yeRg==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <bertrand.marquis@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v4 7/8] xen/arm: Add CP10 exception support to handle
 MVFR
In-Reply-To: <841e5cd22290158d9b0c5d6dedafd01ed9a3d0bc.1608214355.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.21.2012171539390.4040@sstabellini-ThinkPad-T480s>
References: <cover.1608214355.git.bertrand.marquis@arm.com> <841e5cd22290158d9b0c5d6dedafd01ed9a3d0bc.1608214355.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 17 Dec 2020, Bertrand Marquis wrote:
> Add support for cp10 exception decoding to be able to emulate the
> values for MVFR0, MVFR1 and MVFR2 when the TID3 bit of HCR is activated.
> This is required for aarch32 guests accessing MVFR registers using
> vmrs and vmsr instructions.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Changes in V2: Rebase
> Changes in V3:
>   Add case for MVFR2, fix typo VMFR <-> MVFR.
> Changes in V4:
>   Fix typo HSR -> HCR
>   Move no to not comment fix to previous patch
> 
> ---
>  xen/arch/arm/traps.c             |  5 +++++
>  xen/arch/arm/vcpreg.c            | 37 ++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/perfc_defn.h |  1 +
>  xen/include/asm-arm/traps.h      |  1 +
>  4 files changed, 44 insertions(+)
> 
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 22bd1bd4c6..28d9d64558 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -2097,6 +2097,11 @@ void do_trap_guest_sync(struct cpu_user_regs *regs)
>          perfc_incr(trap_cp14_dbg);
>          do_cp14_dbg(regs, hsr);
>          break;
> +    case HSR_EC_CP10:
> +        GUEST_BUG_ON(!psr_mode_is_32bit(regs));
> +        perfc_incr(trap_cp10);
> +        do_cp10(regs, hsr);
> +        break;
>      case HSR_EC_CP:
>          GUEST_BUG_ON(!psr_mode_is_32bit(regs));
>          perfc_incr(trap_cp);
> diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c
> index 1fe07fe02a..cbad8f25a0 100644
> --- a/xen/arch/arm/vcpreg.c
> +++ b/xen/arch/arm/vcpreg.c
> @@ -664,6 +664,43 @@ void do_cp14_dbg(struct cpu_user_regs *regs, const union hsr hsr)
>      inject_undef_exception(regs, hsr);
>  }
>  
> +void do_cp10(struct cpu_user_regs *regs, const union hsr hsr)
> +{
> +    const struct hsr_cp32 cp32 = hsr.cp32;
> +    int regidx = cp32.reg;
> +
> +    if ( !check_conditional_instr(regs, hsr) )
> +    {
> +        advance_pc(regs, hsr);
> +        return;
> +    }
> +
> +    switch ( hsr.bits & HSR_CP32_REGS_MASK )
> +    {
> +    /*
> +     * HCR.TID3 traps accesses to the MVFR registers, which are used to
> +     * identify the VFP/SIMD implementation, via the VMRS/VMSR instructions.
> +     * The exception encoding follows the standard MRC/MCR layout with the
> +     * register number in CRn, as MVFR0 and MVFR1 are declared in cpregs.h.
> +     */
> +    GENERATE_TID3_INFO(MVFR0, mvfr, 0)
> +    GENERATE_TID3_INFO(MVFR1, mvfr, 1)
> +    GENERATE_TID3_INFO(MVFR2, mvfr, 2)
> +
> +    default:
> +        gdprintk(XENLOG_ERR,
> +                 "%s p10, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
> +                 cp32.read ? "mrc" : "mcr",
> +                 cp32.op1, cp32.reg, cp32.crn, cp32.crm, cp32.op2, regs->pc);
> +        gdprintk(XENLOG_ERR, "unhandled 32-bit CP10 access %#x\n",
> +                 hsr.bits & HSR_CP32_REGS_MASK);
> +        inject_undef_exception(regs, hsr);
> +        return;
> +    }
> +
> +    advance_pc(regs, hsr);
> +}
> +
>  void do_cp(struct cpu_user_regs *regs, const union hsr hsr)
>  {
>      const struct hsr_cp cp = hsr.cp;
> diff --git a/xen/include/asm-arm/perfc_defn.h b/xen/include/asm-arm/perfc_defn.h
> index 6a83185163..31f071222b 100644
> --- a/xen/include/asm-arm/perfc_defn.h
> +++ b/xen/include/asm-arm/perfc_defn.h
> @@ -11,6 +11,7 @@ PERFCOUNTER(trap_cp15_64,  "trap: cp15 64-bit access")
>  PERFCOUNTER(trap_cp14_32,  "trap: cp14 32-bit access")
>  PERFCOUNTER(trap_cp14_64,  "trap: cp14 64-bit access")
>  PERFCOUNTER(trap_cp14_dbg, "trap: cp14 dbg access")
> +PERFCOUNTER(trap_cp10,     "trap: cp10 access")
>  PERFCOUNTER(trap_cp,       "trap: cp access")
>  PERFCOUNTER(trap_smc32,    "trap: 32-bit smc")
>  PERFCOUNTER(trap_hvc32,    "trap: 32-bit hvc")
> diff --git a/xen/include/asm-arm/traps.h b/xen/include/asm-arm/traps.h
> index 997c37884e..c4a3d0fb1b 100644
> --- a/xen/include/asm-arm/traps.h
> +++ b/xen/include/asm-arm/traps.h
> @@ -62,6 +62,7 @@ void do_cp15_64(struct cpu_user_regs *regs, const union hsr hsr);
>  void do_cp14_32(struct cpu_user_regs *regs, const union hsr hsr);
>  void do_cp14_64(struct cpu_user_regs *regs, const union hsr hsr);
>  void do_cp14_dbg(struct cpu_user_regs *regs, const union hsr hsr);
> +void do_cp10(struct cpu_user_regs *regs, const union hsr hsr);
>  void do_cp(struct cpu_user_regs *regs, const union hsr hsr);
>  
>  /* SMCCC handling */
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 23:43:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 23:43:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56247.98361 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq2vF-0005s5-A4; Thu, 17 Dec 2020 23:43:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56247.98361; Thu, 17 Dec 2020 23:43:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq2vF-0005ry-6s; Thu, 17 Dec 2020 23:43:01 +0000
Received: by outflank-mailman (input) for mailman id 56247;
 Thu, 17 Dec 2020 23:42:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BHja=FV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kq2vD-0005rt-IJ
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 23:42:59 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6fd6a2db-4308-43ec-b330-405b8fc5afee;
 Thu, 17 Dec 2020 23:42:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6fd6a2db-4308-43ec-b330-405b8fc5afee
Date: Thu, 17 Dec 2020 15:42:57 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1608248578;
	bh=zEy+JFTXMzAyLDCJ2EeZjEWw4ZB+yBcig1mborY3RW0=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=sRKHL7sqS5Fy1Qm2jJRZBN1Hw+EnjvnQDu+F+rx1MraNFyM9ugUnsCesW0DLvPZTl
	 p2LCIsEG2E7b7lRrV4/Xc4AIBfDrC8qTWKUB1tguDYsNDh9RepO3jNrvlpwZkpWrqJ
	 7pV7YhQmTMS03JeEAbcfAIpN0Nel0T6hvonk0PVwitnT1D+SUBFgl89LimhJKvoIt/
	 onab2vHLIF7KYo1aSdrYRp/rsVbgsunA4pQvPXw7qiVyOOf9Hop8Vvu4uLxGGxVysY
	 MoiZWvLKeIctw788WnyTcSZdCvOaCe3XdoJOLkNRT2FIMLdeZFOR9OmDFUJfVQ+A3t
	 qWyYC8g7ienHQ==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <bertrand.marquis@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v4 8/8] xen/arm: Activate TID3 in HCR_EL2
In-Reply-To: <d89992ce6177bee2c5331cdc3a90d5b189669d0d.1608214355.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.21.2012171541360.4040@sstabellini-ThinkPad-T480s>
References: <cover.1608214355.git.bertrand.marquis@arm.com> <d89992ce6177bee2c5331cdc3a90d5b189669d0d.1608214355.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 17 Dec 2020, Bertrand Marquis wrote:
> Activate the TID3 bit in the HCR register when starting a guest.
> This will trap all coprocessor ID registers so that we can give guests
> values corresponding to what they can actually use, and mask some
> features from guests even though they would be supported by the
> underlying hardware (like SVE or MPAM).
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Changes in V2: Rebase
> Changes in V3: Rebase
> Changes in V4: Rebase
> 
> ---
>  xen/arch/arm/traps.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 28d9d64558..c1a9ad6056 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -98,7 +98,7 @@ register_t get_default_hcr_flags(void)
>  {
>      return  (HCR_PTW|HCR_BSU_INNER|HCR_AMO|HCR_IMO|HCR_FMO|HCR_VM|
>               (vwfi != NATIVE ? (HCR_TWI|HCR_TWE) : 0) |
> -             HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
> +             HCR_TID3|HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
>  }
>  
>  static enum {
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 23:47:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 23:47:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56251.98374 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq2zp-000634-UO; Thu, 17 Dec 2020 23:47:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56251.98374; Thu, 17 Dec 2020 23:47:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq2zp-00062x-QI; Thu, 17 Dec 2020 23:47:45 +0000
Received: by outflank-mailman (input) for mailman id 56251;
 Thu, 17 Dec 2020 23:47:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BHja=FV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kq2zp-00062s-4w
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 23:47:45 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b1a5dd61-f216-4f35-aedf-2fa9143bca69;
 Thu, 17 Dec 2020 23:47:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b1a5dd61-f216-4f35-aedf-2fa9143bca69
Date: Thu, 17 Dec 2020 15:47:42 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1608248863;
	bh=vOzdZE9XyJP/KeenoI3EJVv3hMBrAU5BOh0GAA0xRbM=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=NnTsz2fITusKiNngWMwt2xr4/GWQIMo/NGaezasQCObLy/9RQwRVkJC8GKkFii1PP
	 +oIaEaPViVjqRjGxkq4ylZdRgrQuTp/1yDq/6wur9OQLH0CTyo2eVS9UTT18gcJr/c
	 q7IcRziP045BOoxXU4/6u1hi2FaYpQyh66yL89LNp242MqhY6ccYCwdB6USmqXfohq
	 71VkzSP4KvUvOglYOnco0Q8vUbmwyb4ujTxcc1U2hEN/f03FYA53fpkfdiqXnFtLt1
	 SS3zs3f8qRaz5NClJ8jnxXqFLj5sE+I8twuHMgp78sR7YOd274gR8U+zFo//RtxWlK
	 X+VTT0nb3RMPw==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <bertrand.marquis@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, andrew.cooper3@citrix.com, 
    george.dunlap@citrix.com, iwj@xenproject.org, wl@xen.org
Subject: Re: [PATCH v4 0/8] xen/arm: Emulate ID registers
In-Reply-To: <cover.1608214355.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.21.2012171543290.4040@sstabellini-ThinkPad-T480s>
References: <cover.1608214355.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 17 Dec 2020, Bertrand Marquis wrote:
> The goal of this series is to emulate coprocessor ID registers so that
> Xen only publishes to guests features that are supported by Xen and can
> actually be used by guests.
> One practical example where this is required is SVE support, which is
> forbidden by Xen as it is not supported: if Linux is compiled with it,
> it will crash on boot. Another one is AMU, which is also forbidden by
> Xen, but a Linux compiled with it would crash if the platform supports
> it.
> 
> To be able to emulate the coprocessor registers defining what features
> are supported by the hardware, the TID3 bit of HCR must be set and Xen
> must emulate the values of those registers when an exception is caught
> on a guest access to them.
> 
> This series first creates a guest cpuinfo structure which will contain
> the values that we want to publish to guests, and then provides the
> proper emulation for those registers when Xen gets an exception due to
> an access to any of them.
> 
> This is a first simple implementation to solve the problem; the way to
> define the values that we provide to guests, and which features are
> disabled, will be enhanced in a future patchset so that we can decide
> per guest what can be used or not, and from this deduce the bits to
> activate in HCR and the values that we must publish in the ID
> registers.

As per our discussion I think we want to add this to the series.

---

xen/arm: clarify support status for various ARMv8.x CPUs

ARMv8.1+ is not security supported for now, as it would require more
investigation on hardware features that Xen has to hide from the guest.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>

diff --git a/SUPPORT.md b/SUPPORT.md
index ab02aca5f4..d95ce3a411 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -37,7 +37,8 @@ supported in this document.
 
 ### ARM v8
 
-    Status: Supported
+    Status, ARMv8.0: Supported
+    Status, ARMv8.1+: Supported, not security supported
     Status, Cortex A57 r0p0-r1p1: Supported, not security supported
 
 For the Cortex A57 r0p0 - r1p1, see Errata 832075.


From xen-devel-bounces@lists.xenproject.org Thu Dec 17 23:54:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 17 Dec 2020 23:54:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56255.98386 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq36e-00074S-MB; Thu, 17 Dec 2020 23:54:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56255.98386; Thu, 17 Dec 2020 23:54:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq36e-00074L-J3; Thu, 17 Dec 2020 23:54:48 +0000
Received: by outflank-mailman (input) for mailman id 56255;
 Thu, 17 Dec 2020 23:54:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BHja=FV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kq36d-00074G-MW
 for xen-devel@lists.xenproject.org; Thu, 17 Dec 2020 23:54:47 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 107c4837-21a3-4778-8f39-94d815bf8c13;
 Thu, 17 Dec 2020 23:54:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 107c4837-21a3-4778-8f39-94d815bf8c13
Date: Thu, 17 Dec 2020 15:54:45 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1608249286;
	bh=rWfARMVooBX7b7FOUc5CYcurwjdt6QhGoSb8STsLutc=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=EkYO7jElAhAiPdLi/xFO54X+XnEgy8dmeNdgtirsCVaHcBOiXxoQfVlw66CsbyB/S
	 nzmAlcwI/S4LrjTO4yUeSazKpZFxBZBLjrgbYasgaW7vcTvzuV4CF1Ot15g9dTZMR5
	 yFE+5Vw8seKmkk9tM3XgAd5dp0fI0h19ZNn5bFqTheQS+Oz9kyF/V6C87vJESL8RSB
	 iWtRRGS6O8tIUJJSaWC2++3uMuSDNeQ0YyOLI49OTuPyVHKIXQMiA7Uc0JuiwrlvIt
	 iFwJ/g0Frkn/UDN/opAFYOv3dWpOSVDfsFR4XLLkF+yFQR0OX/cfZGITPtT5u/7fsv
	 YbaxXyBcJ4JtQ==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: Julien Grall <julien@xen.org>, bertrand.marquis@arm.com, 
    Rahul.Singh@arm.com, Julien Grall <jgrall@amazon.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
    xen-devel@lists.xenproject.org
Subject: Re: [PATCH] xen: Rework WARN_ON() to return whether a warning was
 triggered
In-Reply-To: <3f165cf8-88a4-590a-6e86-2435e8a7e554@suse.com>
Message-ID: <alpine.DEB.2.21.2012171553340.4040@sstabellini-ThinkPad-T480s>
References: <20201215112610.1986-1-julien@xen.org> <c5ac88e6-4e06-553d-2996-d2b027acd782@suse.com> <04455739-f07f-3da8-f764-33600a9cab6f@xen.org> <3f165cf8-88a4-590a-6e86-2435e8a7e554@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 15 Dec 2020, Jan Beulich wrote:
> On 15.12.2020 14:19, Julien Grall wrote:
> > On 15/12/2020 11:46, Jan Beulich wrote:
> >> On 15.12.2020 12:26, Julien Grall wrote:
> >>> --- a/xen/include/xen/lib.h
> >>> +++ b/xen/include/xen/lib.h
> >>> @@ -23,7 +23,13 @@
> >>>   #include <asm/bug.h>
> >>>   
> >>>   #define BUG_ON(p)  do { if (unlikely(p)) BUG();  } while (0)
> >>> -#define WARN_ON(p) do { if (unlikely(p)) WARN(); } while (0)
> >>> +#define WARN_ON(p)  ({                  \
> >>> +    bool __ret_warn_on = (p);           \
> >>
> >> Please can you avoid leading underscores here?
> > 
> > I can.
> > 
> >>
> >>> +                                        \
> >>> +    if ( unlikely(__ret_warn_on) )      \
> >>> +        WARN();                         \
> >>> +    unlikely(__ret_warn_on);            \
> >>> +})
> >>
> >> Is this latter unlikely() having any effect? So far I thought it
> >> would need to be immediately inside a control construct or be an
> >> operand to && or ||.
> > 
> > The unlikely() is directly taken from the Linux implementation.
> > 
> > My guess is the compiler is still able to use the information for the 
> > branch prediction in the case of:
> > 
> > if ( WARN_ON(...) )
> 
> Maybe. Or maybe not. I don't suppose the Linux commit introducing
> it clarifies this?

I did a bit of digging but it looks like the unlikely has been there
forever. I'd just keep it as is.


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 00:30:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 00:30:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56259.98398 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq3eo-00035G-F6; Fri, 18 Dec 2020 00:30:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56259.98398; Fri, 18 Dec 2020 00:30:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq3eo-000359-Be; Fri, 18 Dec 2020 00:30:06 +0000
Received: by outflank-mailman (input) for mailman id 56259;
 Fri, 18 Dec 2020 00:30:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NeVu=FW=gmail.com=julien.grall.oss@srs-us1.protection.inumbo.net>)
 id 1kq3em-0002o8-De
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 00:30:04 +0000
Received: from mail-wr1-x436.google.com (unknown [2a00:1450:4864:20::436])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c19b0671-0274-4186-a4bc-ff74f22daf12;
 Fri, 18 Dec 2020 00:30:03 +0000 (UTC)
Received: by mail-wr1-x436.google.com with SMTP id 91so304097wrj.7
 for <xen-devel@lists.xenproject.org>; Thu, 17 Dec 2020 16:30:03 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c19b0671-0274-4186-a4bc-ff74f22daf12
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=hyHN9ivFR9PDDp25TgpaViYqFJZgbegacz/Nx05KtIM=;
        b=hmZz1LupRe5BCWEWEqPp5S5fy1uFWvckjX6rxxtPqRubFosqKb4/SrzyPkWwoDEX0j
         OCMXeR0bGxGqXB5N9+shkTsUpMOpOvNyPxCRl7d9XYsxkagiHnQanOa14QMNzaBEPgn4
         0BGY9girSbM1zEFGUyvvqYRN5Xk+FKZ+DrX76Meym1qEuJNuk9dNUKvocu2eZvEJ6273
         F+E5H2qQIdF6l8WBR1OLlLleLHSEWsP53VYG0C2ocIaAOj8gM2twVrmYJcxGWeHeOis+
         EDwazjWsPEvrAoI9ZOhyuai/Ir17nKdCd6gm/gwUFhxpthQD4F2Tj5DllqZViWegCVyY
         69eQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=hyHN9ivFR9PDDp25TgpaViYqFJZgbegacz/Nx05KtIM=;
        b=ZhQFChbOXdKxMI1lgMZr1g+twtY3oBoQRg8BqHK108mzlG502smaboIr5n12pF7jhy
         PIZjsMYzqykLRiEi+8FsSD02saxeD1vLdo/2KBXkss3QTDxy56I3D2xNqk+4eYViXPEt
         ZPZeZvh8w31c6s9+J8RPla47Vkb2eWENlwdhoxdziC4jc3K04n/xYdC4ADepR8QvUX5o
         5svBzCoN3l/hUKYhZST6oxB4GuwxDn3YzVuioEESmvY3HypdWqErs9ZoTXAxdCZv1t6b
         XqdBtjoQfPOkOiPs+sXDbu1Kv05sx+h1sbr5ClidTJsmTmVvn9fYctRzKAPWF91JYqHW
         7wPA==
X-Gm-Message-State: AOAM530lycm4xJbizOU45PlyXft2Z0DQd2OCaR9cStkGDgUm91jNbv2m
	/tCiiYjylrxGUbbaygcBXbJtFdIWmyOXfQ/vLWg=
X-Google-Smtp-Source: ABdhPJzpyOYE1jO619sHqXkuT26r7EaN9vmUB7THeI/3KEcszuWlm51aRIMzZUmbOOW5SdvojwTCRcJDaoyJDaTm2ok=
X-Received: by 2002:a5d:43d2:: with SMTP id v18mr1384085wrr.326.1608251402434;
 Thu, 17 Dec 2020 16:30:02 -0800 (PST)
MIME-Version: 1.0
References: <20201215112610.1986-1-julien@xen.org> <c5ac88e6-4e06-553d-2996-d2b027acd782@suse.com>
 <04455739-f07f-3da8-f764-33600a9cab6f@xen.org> <3f165cf8-88a4-590a-6e86-2435e8a7e554@suse.com>
 <alpine.DEB.2.21.2012171553340.4040@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2012171553340.4040@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien.grall.oss@gmail.com>
Date: Fri, 18 Dec 2020 00:29:51 +0000
Message-ID: <CAJ=z9a1cJ3H6U0nymU9mPcXbDcxhBmih-Jmua5ScGqeQKcVt5w@mail.gmail.com>
Subject: Re: [PATCH] xen: Rework WARN_ON() to return whether a warning was triggered
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Jan Beulich <jbeulich@suse.com>, Bertrand Marquis <bertrand.marquis@arm.com>, 
	Rahul Singh <Rahul.Singh@arm.com>, Julien Grall <jgrall@amazon.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, 
	xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Thu, 17 Dec 2020 at 23:54, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Tue, 15 Dec 2020, Jan Beulich wrote:
> > On 15.12.2020 14:19, Julien Grall wrote:
> > > On 15/12/2020 11:46, Jan Beulich wrote:
> > >> On 15.12.2020 12:26, Julien Grall wrote:
> > >>> --- a/xen/include/xen/lib.h
> > >>> +++ b/xen/include/xen/lib.h
> > >>> @@ -23,7 +23,13 @@
> > >>>   #include <asm/bug.h>
> > >>>
> > >>>   #define BUG_ON(p)  do { if (unlikely(p)) BUG();  } while (0)
> > >>> -#define WARN_ON(p) do { if (unlikely(p)) WARN(); } while (0)
> > >>> +#define WARN_ON(p)  ({                  \
> > >>> +    bool __ret_warn_on = (p);           \
> > >>
> > >> Please can you avoid leading underscores here?
> > >
> > > I can.
> > >
> > >>
> > >>> +                                        \
> > >>> +    if ( unlikely(__ret_warn_on) )      \
> > >>> +        WARN();                         \
> > >>> +    unlikely(__ret_warn_on);            \
> > >>> +})
> > >>
> > >> Is this latter unlikely() having any effect? So far I thought it
> > >> would need to be immediately inside a control construct or be an
> > >> operand to && or ||.
> > >
> > > The unlikely() is directly taken from the Linux implementation.
> > >
> > > My guess is the compiler is still able to use the information for the
> > > branch prediction in the case of:
> > >
> > > if ( WARN_ON(...) )
> >
> > Maybe. Or maybe not. I don't suppose the Linux commit introducing
> > it clarifies this?
>
> I did a bit of digging but it looks like the unlikely has been there
> forever. I'd just keep it as is.

Thanks! I was planning to answer earlier with some data but got
preempted by some higher-priority work.
The Linux commit message is not very helpful. I will do some testing
so I can convince Jan that compilers can be clever and make use of it.

Cheers,


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 00:32:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 00:32:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56263.98410 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq3gf-0003HS-Rc; Fri, 18 Dec 2020 00:32:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56263.98410; Fri, 18 Dec 2020 00:32:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq3gf-0003HL-O8; Fri, 18 Dec 2020 00:32:01 +0000
Received: by outflank-mailman (input) for mailman id 56263;
 Fri, 18 Dec 2020 00:32:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kq3gf-0003HC-6F; Fri, 18 Dec 2020 00:32:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kq3ge-0003Zt-WC; Fri, 18 Dec 2020 00:32:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kq3ge-0003P2-Pe; Fri, 18 Dec 2020 00:32:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kq3ge-0007H1-P9; Fri, 18 Dec 2020 00:32:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=YUtHcQGqpUIVTRijWdiqj9BX8Bcr5emwIFT3do4AHgI=; b=0+TNLBH9qc9iLmuQKLHrMZnQFP
	g9kuT86uP8WONSeuaEDMdRrvMG03FKgJST/I4QchToW7F8o5r7tSFH+2TQE5JiDWA/AJtsFXMGAy4
	0st+WKBaPTO61QoAkTWc8xYKJJ2s8EtgognsewhkZrY7u+MjBmcJ7diRnb1yzBaNdUfg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-4.13-testing bisection] complete test-amd64-amd64-xl-qemuu-ovmf-amd64
Message-Id: <E1kq3ge-0007H1-P9@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 18 Dec 2020 00:32:00 +0000

branch xen-4.13-testing
xenbranch xen-4.13-testing
job test-amd64-amd64-xl-qemuu-ovmf-amd64
testid debian-hvm-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  ovmf git://xenbits.xen.org/osstest/ovmf.git
  Bug introduced:  cee5b0441af39dd6f76cc4e0447a1c7f788cbb00
  Bug not present: 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/157657/


  commit cee5b0441af39dd6f76cc4e0447a1c7f788cbb00
  Author: Guo Dong <guo.dong@intel.com>
  Date:   Wed Dec 2 14:18:18 2020 -0700
  
      UefiCpuPkg/CpuDxe: Fix boot error
      
      REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3084
      
      When DXE drivers are dispatched above 4GB memory and
      the system is already in 64bit mode, the address
      setCodeSelectorLongJump in stack will be override
      by parameter. so change to use 64bit address and
      jump to qword address.
      
      Signed-off-by: Guo Dong <guo.dong@intel.com>
      Reviewed-by: Ray Ni <ray.ni@intel.com>
      Reviewed-by: Eric Dong <eric.dong@intel.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-4.13-testing/test-amd64-amd64-xl-qemuu-ovmf-amd64.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-4.13-testing/test-amd64-amd64-xl-qemuu-ovmf-amd64.debian-hvm-install --summary-out=tmp/157657.bisection-summary --basis-template=157135 --blessings=real,real-bisect,real-retry xen-4.13-testing test-amd64-amd64-xl-qemuu-ovmf-amd64 debian-hvm-install
Searching for failure / basis pass:
 157597 fail [host=rimava1] / 157135 [host=godello1] 156988 [host=godello0] 156636 [host=godello1] 156593 [host=albana1] 156399 [host=chardonnay0] 156317 [host=elbling0] 156265 [host=huxelrebe1] 156054 [host=godello1] 156030 [host=godello0] 155377 [host=fiano1] 155258 ok.
Failure / basis pass flights: 157597 / 155258
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f95e80d832e923046c92cd6f0b8208cec147138e d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 10c7c213bef26274684798deb3e351a6756046d2
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d8ab884fe9b4dd148980bf0d8673187f8fb25887 d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 88f5b414ac0f8008c1e2b26f93c3d980120941f7
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#d8ab884fe9b4dd148980bf0d8673187f8fb25887-f95e80d832e923046c92cd6f0b8208cec147138e git://xenbits.xen.org/qemu-xen-traditional.git#d0d8ad39ecb51cd7497cd524484\
 fe09f50876798-d0d8ad39ecb51cd7497cd524484fe09f50876798 git://xenbits.xen.org/qemu-xen.git#730e2b1927e7d911bbd5350714054ddd5912f4ed-7269466a5b0c0e89b36dc9a7db0554ae404aa230 git://xenbits.xen.org/osstest/seabios.git#41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5-748d619be3282fba35f99446098ac2d0579f6063 git://xenbits.xen.org/xen.git#88f5b414ac0f8008c1e2b26f93c3d980120941f7-10c7c213bef26274684798deb3e351a6756046d2
Loaded 7669 nodes in revision graph
Searching for test results:
 155132 [host=albana0]
 155258 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d8ab884fe9b4dd148980bf0d8673187f8fb25887 d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 88f5b414ac0f8008c1e2b26f93c3d980120941f7
 155377 [host=fiano1]
 156030 [host=godello0]
 156054 [host=godello1]
 156265 [host=huxelrebe1]
 156317 [host=elbling0]
 156399 [host=chardonnay0]
 156593 [host=albana1]
 156636 [host=godello1]
 156988 [host=godello0]
 157135 [host=godello1]
 157563 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f95e80d832e923046c92cd6f0b8208cec147138e d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 10c7c213bef26274684798deb3e351a6756046d2
 157596 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d8ab884fe9b4dd148980bf0d8673187f8fb25887 d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 88f5b414ac0f8008c1e2b26f93c3d980120941f7
 157616 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f95e80d832e923046c92cd6f0b8208cec147138e d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 10c7c213bef26274684798deb3e351a6756046d2
 157622 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b96b44feab7aad2b9ae73a3602924b42d148bb03 d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 94f0510dc75e910400aad6c169048d672c8c7193 971a9d14667448427ddea1cf15fd7fd409205c58
 157626 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 391610903b2944bb6bfed76fe9f9b46838600baf d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 5e4914e60da9a8dfdc00e839278f40c87525b8ae
 157597 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f95e80d832e923046c92cd6f0b8208cec147138e d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 10c7c213bef26274684798deb3e351a6756046d2
 157628 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 375e9b190e37041129b35a1c667993ea145e5b7e d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 b5302273e2c51940172400486644636f2f4fc64a
 157630 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f95e80d832e923046c92cd6f0b8208cec147138e d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 60e3727bcae7268a57aa240c799b1bc788c9c39b
 157631 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f95e80d832e923046c92cd6f0b8208cec147138e d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 4959626e926ce2e6de731135b1f567433edcd992
 157635 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970 d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 b5302273e2c51940172400486644636f2f4fc64a
 157639 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 7061294be500de021bef3d4bc5218134d223315f d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 b5302273e2c51940172400486644636f2f4fc64a
 157640 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cee5b0441af39dd6f76cc4e0447a1c7f788cbb00 d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 b5302273e2c51940172400486644636f2f4fc64a
 157643 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970 d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 b5302273e2c51940172400486644636f2f4fc64a
 157651 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cee5b0441af39dd6f76cc4e0447a1c7f788cbb00 d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 b5302273e2c51940172400486644636f2f4fc64a
 157653 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970 d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 b5302273e2c51940172400486644636f2f4fc64a
 157657 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cee5b0441af39dd6f76cc4e0447a1c7f788cbb00 d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 b5302273e2c51940172400486644636f2f4fc64a
Searching for interesting versions
 Result found: flight 155258 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970 d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 b5302273e2c51940172400486644636f2f4fc64a, results HASH(0x55b5783495c8) HASH(0x55b578278648) HASH(0x55b57829faa0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1\
 e6a472b0eb9558310b518f0dfcd8860 375e9b190e37041129b35a1c667993ea145e5b7e d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 b5302273e2c51940172400486644636f2f4fc64a, results HASH(0x55b5782852a0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 391610903b2944bb6bfed76fe9f9b46838600baf d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7\
 db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 5e4914e60da9a8dfdc00e839278f40c87525b8ae, results HASH(0x55b5778cb898) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b96b44feab7aad2b9ae73a3602924b42d148bb03 d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 94f0510dc75e910400aad6c169048d672c8c7193 971a9d14667448427ddea1cf15fd7fd409205c58, results HASH(0x55b578295150) For basis\
  failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d8ab884fe9b4dd148980bf0d8673187f8fb25887 d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 88f5b414ac0f8008c1e2b26f93c3d980120941f7, results HASH(0x55b578287ed0) HASH(0x55b57828dbe8) Result found: flight 157563 (fail), for basis failure (at ancestor ~7668)
 Repro found: flight 157596 (pass), for basis pass
 Repro found: flight 157597 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970 d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 b5302273e2c51940172400486644636f2f4fc64a
No revisions left to test, checking graph state.
 Result found: flight 157635 (pass), for last pass
 Result found: flight 157640 (fail), for first failure
 Repro found: flight 157643 (pass), for last pass
 Repro found: flight 157651 (fail), for first failure
 Repro found: flight 157653 (pass), for last pass
 Repro found: flight 157657 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  ovmf git://xenbits.xen.org/osstest/ovmf.git
  Bug introduced:  cee5b0441af39dd6f76cc4e0447a1c7f788cbb00
  Bug not present: 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/157657/


  commit cee5b0441af39dd6f76cc4e0447a1c7f788cbb00
  Author: Guo Dong <guo.dong@intel.com>
  Date:   Wed Dec 2 14:18:18 2020 -0700
  
      UefiCpuPkg/CpuDxe: Fix boot error
      
      REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3084
      
      When DXE drivers are dispatched above 4GB memory and
      the system is already in 64bit mode, the address
      setCodeSelectorLongJump on the stack will be overridden
      by a parameter, so change to use a 64bit address and
      jump to a qword address.
      
      Signed-off-by: Guo Dong <guo.dong@intel.com>
      Reviewed-by: Ray Ni <ray.ni@intel.com>
      Reviewed-by: Eric Dong <eric.dong@intel.com>

pnmtopng: 145 colors found
Revision graph left in /home/logs/results/bisect/xen-4.13-testing/test-amd64-amd64-xl-qemuu-ovmf-amd64.debian-hvm-install.{dot,ps,png,html,svg}.
----------------------------------------
157657: tolerable ALL FAIL

flight 157657 xen-4.13-testing real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/157657/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail baseline untested


jobs:
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Fri Dec 18 01:30:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 01:30:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56273.98431 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq4bS-0007OR-Mo; Fri, 18 Dec 2020 01:30:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56273.98431; Fri, 18 Dec 2020 01:30:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq4bS-0007OK-HI; Fri, 18 Dec 2020 01:30:42 +0000
Received: by outflank-mailman (input) for mailman id 56273;
 Fri, 18 Dec 2020 01:30:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kq4bQ-0007O8-RN; Fri, 18 Dec 2020 01:30:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kq4bQ-0002Il-LF; Fri, 18 Dec 2020 01:30:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kq4bQ-0006we-6R; Fri, 18 Dec 2020 01:30:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kq4bQ-0003fm-5h; Fri, 18 Dec 2020 01:30:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7owkmWnPvLVdPrUoCG0gbnCyQ4dxWjZ+snKUEWue9yE=; b=dZ9VPhURjmGW2PwD83qZONW04L
	AW9dE1Yj0OaU0ioIV2SbZVaLn+4MIhXF+VNnP5dpORlmkRhA3Cxk8iTZNvyKY33cwNJajK9A4KYcu
	DHwEChjowFxZm6oksSyXIxGcBQPdx5GCJdt2de+yBO/zzH0NnhUlW4xorl9VQgPTpocs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157619-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157619: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=accefff5b547a9a1d959c7e76ad539bf2480e78b
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 18 Dec 2020 01:30:40 +0000

flight 157619 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157619/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 linux                accefff5b547a9a1d959c7e76ad539bf2480e78b
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  139 days
Failing since        152366  2020-08-01 20:49:34 Z  138 days  240 attempts
Testing same since   157619  2020-12-17 01:21:08 Z    1 days    1 attempts

------------------------------------------------------------
4237 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 927460 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 01:57:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 01:57:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56281.98445 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq50w-00017p-KS; Fri, 18 Dec 2020 01:57:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56281.98445; Fri, 18 Dec 2020 01:57:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq50w-00017i-HA; Fri, 18 Dec 2020 01:57:02 +0000
Received: by outflank-mailman (input) for mailman id 56281;
 Fri, 18 Dec 2020 01:57:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kq50u-00017a-Qq; Fri, 18 Dec 2020 01:57:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kq50u-0002jl-IA; Fri, 18 Dec 2020 01:57:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kq50u-00010f-8n; Fri, 18 Dec 2020 01:57:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kq50u-0007uR-8H; Fri, 18 Dec 2020 01:57:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Pd5rqlz80/sKis0AB5FAECP5xQoP8sR8BjMU7TJNYJg=; b=2eUuawvLMRuuTc1AR0fPyTI426
	iE44P52TDdBwyMarsB934tfqOyAqUNNsRz7jUQ9Gux+5IkogkIkBKxTm12P7rxc7BhTgwNjvU0Nl6
	mTY82jfrMNvmU44qoH6TuiHO86zkfiZfd77sq0fDXBE87VPmV7jM4RVwuPKYbKTNX7eE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157656-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157656: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7a3b691a8f3aa7720eecaab0e7bd090aa392885a
X-Osstest-Versions-That:
    xen=641723f78d3a0b1982e1cd2ef37d8d877cfe542d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 18 Dec 2020 01:57:00 +0000

flight 157656 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157656/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  7a3b691a8f3aa7720eecaab0e7bd090aa392885a
baseline version:
 xen                  641723f78d3a0b1982e1cd2ef37d8d877cfe542d

Last test of basis   157649  2020-12-17 16:00:26 Z    0 days
Testing same since   157656  2020-12-17 23:02:14 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   641723f78d..7a3b691a8f  7a3b691a8f3aa7720eecaab0e7bd090aa392885a -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 05:54:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 05:54:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56291.98467 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq8il-0008UE-Ib; Fri, 18 Dec 2020 05:54:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56291.98467; Fri, 18 Dec 2020 05:54:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq8il-0008U7-Ex; Fri, 18 Dec 2020 05:54:31 +0000
Received: by outflank-mailman (input) for mailman id 56291;
 Fri, 18 Dec 2020 05:54:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kq8ij-0008Tz-HI; Fri, 18 Dec 2020 05:54:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kq8ij-0007VN-8X; Fri, 18 Dec 2020 05:54:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kq8ii-0006bp-V5; Fri, 18 Dec 2020 05:54:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kq8ii-0000BJ-Ue; Fri, 18 Dec 2020 05:54:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JyJ4+CkDpQZGei26+MJ/TcFAXlYzFv3hIaeMQBzQ4b0=; b=txokSJ/CwZ6KwCA6gGs3Pu6auc
	YOOaj7+EL8lHMC/mCaizP0xJ4cPsOLBXwHnQftrKO098cxZEO/Dc9qbahS7TCEQ7DMbhOOP1g9Eub
	0/qxuwlftwH+nPHPD9xlYdwWfFec1PNFRw+J91uvEavuUlAqz3vgK7uTJEpYMro7LCsY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157633-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157633: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=e6ae24e1d676bb2bdc0fc715b49b04908f41fc10
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 18 Dec 2020 05:54:28 +0000

flight 157633 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157633/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 e6ae24e1d676bb2bdc0fc715b49b04908f41fc10
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    8 days
Failing since        157348  2020-12-09 15:39:39 Z    8 days   52 attempts
Testing same since   157612  2020-12-16 21:09:14 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Ard Biesheuvel <ard.biesheuvel@arm.com>
  Baraneedharan Anbazhagan <anbazhagan@hp.com>
  Baraneedharan Anbazhagan <anbazhgan@hp.com>
  Bret Barkelew <Bret.Barkelew@microsoft.com>
  Chen, Christine <Yuwei.Chen@intel.com>
  Fan Wang <fan.wang@intel.com>
  James Bottomley <jejb@linux.ibm.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Michael D Kinney <michael.d.kinney@intel.com>
  Michael Kubacki <michael.kubacki@microsoft.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sheng Wei <w.sheng@intel.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Star Zeng <star.zeng@intel.com>
  Ting Ye <ting.ye@intel.com>
  Yuwei Chen <yuwei.chen@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 699 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 06:37:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 06:37:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56298.98482 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq9OS-0004AA-Tz; Fri, 18 Dec 2020 06:37:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56298.98482; Fri, 18 Dec 2020 06:37:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq9OS-0004A3-R0; Fri, 18 Dec 2020 06:37:36 +0000
Received: by outflank-mailman (input) for mailman id 56298;
 Fri, 18 Dec 2020 06:37:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kq9OR-00049v-Kt; Fri, 18 Dec 2020 06:37:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kq9OR-0000Zv-Dp; Fri, 18 Dec 2020 06:37:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kq9OR-0000oV-5Y; Fri, 18 Dec 2020 06:37:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kq9OR-0005yG-52; Fri, 18 Dec 2020 06:37:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oW0dtdHcMd5Wwyy+L8fC0AIeBEMRvRrdi1Sz04kVKSM=; b=0a8do69qJ0k9jY5CsSgYt52B5r
	15nbBRs0wGbVkxoRfrP4Q6zflpqdq/yiTYqHRQLjkUsfHJW/MVce+zKf7u2oOQyXOsI9opdE7YRSm
	NhVx6niT//CqWMjtmoT5fTbxmcFxIzoqbWjUPpT2Et88bNV54gv4fmFl5JaXUksIOdkI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157627-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 157627: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=2525a745e18bbf14b4f7b1b18209a0ab9166178d
X-Osstest-Versions-That:
    xen=8145d38b48009255a32ab87a02e481cd09c811f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 18 Dec 2020 06:37:35 +0000

flight 157627 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157627/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qcow2    19 guest-localmigrate/x10       fail  like 157134
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157134
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157134
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157134
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157134
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157134
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157134
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157134
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157134
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157134
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157134
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157134
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  2525a745e18bbf14b4f7b1b18209a0ab9166178d
baseline version:
 xen                  8145d38b48009255a32ab87a02e481cd09c811f9

Last test of basis   157134  2020-12-01 15:05:58 Z   16 days
Testing same since   157562  2020-12-15 13:36:12 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Harsha Shamsundara Havanur <havanur@amazon.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8145d38b48..2525a745e1  2525a745e18bbf14b4f7b1b18209a0ab9166178d -> stable-4.12


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 06:49:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 06:49:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56334.98611 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq9a6-00060j-L1; Fri, 18 Dec 2020 06:49:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56334.98611; Fri, 18 Dec 2020 06:49:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kq9a6-00060c-FE; Fri, 18 Dec 2020 06:49:38 +0000
Received: by outflank-mailman (input) for mailman id 56334;
 Fri, 18 Dec 2020 06:49:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=J4wv=FW=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kq9a4-00060W-QK
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 06:49:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a82cbd9a-537d-4214-9c75-c798eddebc3c;
 Fri, 18 Dec 2020 06:49:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E81BBACC6;
 Fri, 18 Dec 2020 06:49:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a82cbd9a-537d-4214-9c75-c798eddebc3c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608274174; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=dUC+1EKFjaMZe8uSV+4lut3gYsDNzcygRgob5ePRAWA=;
	b=MNDYpnEvAbDl9eqg6kWjYjX6luMnUmT30chgO8gPqLykTzFmgkP6eOnRSETNznvpHrQ6aM
	Q5Er97zZ7vUuTOfH+7whXUdQBa4oVOsVjhYBiatZLanTayAQ14dNu3B5+devCAC33JD4NO
	Idwmqy1IVrHOfE+wRAwe/5pmbsVCHRI=
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20201215111055.3810-1-jgross@suse.com>
 <2deac9ce-0c27-a472-7d51-b91a640d92ed@citrix.com>
 <8d26b752-b7ba-159f-5bed-bb015a06d819@suse.com>
 <2414c191-ff55-e446-b555-c9d0ccca6b93@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v2] xen/xenbus: make xs_talkv() interruptible
Message-ID: <18c05f8c-d40b-b7e2-fc0d-c1a5343bdcfa@suse.com>
Date: Fri, 18 Dec 2020 07:49:33 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <2414c191-ff55-e446-b555-c9d0ccca6b93@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="8zdfIkUg112me7zEGfFVqPLIj21e9Lxi2"

On 17.12.20 19:25, Andrew Cooper wrote:
> On 16/12/2020 08:21, Jürgen Groß wrote:
>> On 15.12.20 21:59, Andrew Cooper wrote:
>>> On 15/12/2020 11:10, Juergen Gross wrote:
>>>> In case a process waits for any Xenstore action in the xenbus driver
>>>> it should be interruptible by signals.
>>>>
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>> ---
>>>> V2:
>>>> - don't special case SIGKILL as libxenstore is handling -EINTR fine
>>>> ---
>>>>   drivers/xen/xenbus/xenbus_xs.c | 9 ++++++++-
>>>>   1 file changed, 8 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/xen/xenbus/xenbus_xs.c
>>>> b/drivers/xen/xenbus/xenbus_xs.c
>>>> index 3a06eb699f33..17c8f8a155fd 100644
>>>> --- a/drivers/xen/xenbus/xenbus_xs.c
>>>> +++ b/drivers/xen/xenbus/xenbus_xs.c
>>>> @@ -205,8 +205,15 @@ static bool test_reply(struct xb_req_data *req)
>>>>   static void *read_reply(struct xb_req_data *req)
>>>>   {
>>>> +    int ret;
>>>> +
>>>>      do {
>>>> -        wait_event(req->wq, test_reply(req));
>>>> +        ret = wait_event_interruptible(req->wq, test_reply(req));
>>>> +
>>>> +        if (ret == -ERESTARTSYS && signal_pending(current)) {
>>>> +            req->msg.type = XS_ERROR;
>>>> +            return ERR_PTR(-EINTR);
>>>> +        }
>>>
>>> So now I can talk fully about the situations which lead to this, I think
>>> there is a bit more complexity.
>>>
>>> It turns out there are a number of issues related to running a Xen
>>> system with no xenstored.
>>>
>>> 1) If a xenstore-write occurs during startup before init-xenstore-domain
>>> runs, the former blocks on /dev/xen/xenbus waiting for xenstored to
>>> reply, while the latter blocks on /dev/xen/xenbus_backend when trying to
>>> tell the dom0 kernel that xenstored is in dom1.  This effectively
>>> deadlocks the system.
>>
>> This should be easy to solve: any request to /dev/xen/xenbus should
>> block upfront in case xenstored isn't up yet (could e.g. wait
>> interruptibly until xenstored_ready is non-zero).
>
> I'm not sure that that would fix the problem.  The problem is that
> setting the ring details via /dev/xen/xenbus_backend blocks, which
> prevents us launching the xenstored stubdomain, which prevents the
> earlier xenbus write being completed.

But _why_ is it blocking? Digging through the code I think it blocks
in xs_suspend() due to the normal xenstore request being pending. If
that request doesn't reach the state to cause blocking in xs_suspend()
all is fine.

> So long as /dev/xen/xenbus_backend doesn't block, there's no problem
> with other /dev/xen/xenbus activity being pending briefly.
>
>
> Looking at the current logic, I'm not completely convinced.  Even
> finding a filled-in evtchn/gfn doesn't mean that xenstored is actually
> ready.

No, but the deadlock is not going to happen anymore (famous last
words).

>
> There are 3 possible cases.
>
> 1) PV guest, and details in start_info
> 2) HVM guest, and details in HVM_PARAMs
> 3) No details (expected for dom0).  Something in userspace must provide
> details at a later point.
>
> So the setup phases go from nothing, to having ring details, to finding
> the ring working.
>
> I think it would be prudent to try reading a key between having details
> and declaring the xenstored_ready.  Any activity, even XS_ERROR,
> indicates that the other end of the ring is listening.

Yes. But I really think the xs_suspend() is the problematic case. And
this will be called _before_ xenstored_ready is set.

>
>>
>>> 2) If xenstore-watch is running when xenstored dies, it spins at 100%
>>> cpu usage making no system calls at all.  This is caused by bad error
>>> handling from xs_watch(), and attempting to debug found:
>>
>> Can you expand on "bad error handling from xs_watch()", please?
>
> do_watch() has
>
>     for ( ... ) { // defaults to an infinite loop
>         vec = xs_read_watch();
>         if (vec == NULL)
>             continue;
>         ...
>     }
>
>
> My next plan was to experiment with break instead of continue, which
> I'll get to at some point.

I'd rather put a sleep() in. Otherwise you might break some use cases.

>
>>
>>>
>>> 3) (this issue).  If anyone starts xenstore-watch with no xenstored
>>> running at all, it blocks in D in the kernel.
>>
>> Should be handled with the solution for 1).
>>
>>>
>>> The cause is the special handling for watch/unwatch commands which,
>>> instead of just queuing up the data for xenstore, explicitly waits for
>>> an OK for registering the watch.  This causes a write() system call to
>>> block waiting for a non-existent entity to reply.
>>>
>>> So while this patch does resolve the major usability issue I found (I
>>> can't even SIGINT and get my terminal back), I think there are issues.
>>>
>>> The reason why XS_WATCH/XS_UNWATCH are special cased is because they do
>>> require special handling.  The main kernel thread for processing
>>> incoming data from xenstored does need to know how to associate each
>>> async XS_WATCH_EVENT to the caller who watched the path.
>>>
>>> Therefore, depending on when this cancellation hits, we might be in any
>>> of the following states:
>>>
>>> 1) the watch is queued in the kernel, but not even sent to xenstored yet
>>> 2) the watch is queued in the xenstored ring, but not acted upon
>>> 3) the watch is queued in the xenstored ring, and the xenstored has seen
>>> it but not replied yet
>>> 4) the watch has been processed, but the XS_WATCH reply hasn't been
>>> received yet
>>> 5) the watch has been processed, and the XS_WATCH reply received
>>>
>>> State 5 (and a little bit) is the normal success path when xenstored has
>>> acted upon the request, and the internal kernel infrastructure is set up
>>> appropriately to handle XS_WATCH_EVENTs.
>>>
>>> States 1 and 2 can be very common if there is no xenstored (or at least,
>>> it hasn't started up yet).  In reality, there is either no xenstored, or
>>> it is up and running (and for a period of time during system startup,
>>> these cases occur in sequence).
>>
>> Yes, this is the reason we can't just reject a user request if xenstored
>> hasn't been detected yet: it could be just starting.
>
> Right, and I'm not suggesting that we'd want to reject accesses while
> xenstored is starting up.
>
>>
>>>
>>> As soon as the XS_WATCH event has been written into the xenstored ring,
>>> it is not safe to cancel.  You've committed to xenstored processing the
>>> request (if it is up).
>>
>> I'm not sure this is true. Cancelling it might result in a stale watch
>> in xenstored, but there shouldn't be a problem related to that. In case
>> that watch fires the event will normally be discarded by the kernel, as
>> no matching watch is found in the kernel's data. In case a new watch
>> has been set up with the same struct xenbus_watch address (which is used
>> as the token), then this new watch might fire without the node of the
>> new watch having changed, but spurious watch events are defined to be
>> okay (OTOH the path in the event might look strange to the handler).
>
> Watches are a quota'd resource in (at least some) xenstored
> configurations.  Losing track of the registration is a resource leak,
> even if the kernel can filter and discard the unexpected watch events.

Hmm, true.

The correct way to handle it then would be to mark the request so that
a late answer isn't just dropped, but triggers an unwatch.

A similar handling would be required for a transaction start.

>
>>> If xenstored is actually up and running, it's fine and necessary to
>>> block.  The request will be processed in due course (timing subject to
>>> the client and server load).  If xenstored isn't up, blocking isn't ok.
>>>
>>> Therefore, I think we need to distinguish "not yet on the ring" from "on
>>> the ring", as our distinction as to whether cancelling is safe, and
>>> ensure we don't queue anything on the ring before we're sure xenstored
>>> has started up.
>>>
>>> Does this make sense?
>>
>> Basically, yes.
>
> Great.  If I get any time, I'll try to look into some fixes along the
> above lines.

I won't be working on those for the coming 3 weeks, so go ahead. :-)


Juergen



From xen-devel-bounces@lists.xenproject.org Fri Dec 18 07:41:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 07:41:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56341.98626 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqANw-0003HQ-NJ; Fri, 18 Dec 2020 07:41:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56341.98626; Fri, 18 Dec 2020 07:41:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqANw-0003HJ-KD; Fri, 18 Dec 2020 07:41:08 +0000
Received: by outflank-mailman (input) for mailman id 56341;
 Fri, 18 Dec 2020 07:41:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ets7=FW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kqANv-0003HE-C3
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 07:41:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e7d09d10-8b9b-43bb-a83d-979b3b1460cd;
 Fri, 18 Dec 2020 07:41:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E5C8AACC4;
 Fri, 18 Dec 2020 07:41:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7d09d10-8b9b-43bb-a83d-979b3b1460cd
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608277261; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Lx03LAFVea+M7tq/9HJSfW52P41fC1cx/bBol9an1Pg=;
	b=mcGgcb32ScgQC/vVm9JXkslxbXHds+g9VZSgULhmVFP1XCHOA0R2hKEG+66ESTxmQ8NNd3
	bgp/GyT1B8eljvIMIXn77rbC/doXgaVCzdV1OzWsdTIUA2403TmWOu/4voq6583i3H5mEk
	0O+NPvz8mRnJWR5JuSjdU+iBqWuuYuo=
Subject: Re: [xen-4.13-testing bisection] complete
 test-amd64-amd64-xl-qemuu-ovmf-amd64
To: osstest service owner <osstest-admin@xenproject.org>
References: <E1kq3ge-0007H1-P9@osstest.test-lab.xenproject.org>
Cc: xen-devel@lists.xenproject.org
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9c63132b-fa66-2f33-6e0e-64368f2d5c49@suse.com>
Date: Fri, 18 Dec 2020 08:40:55 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <E1kq3ge-0007H1-P9@osstest.test-lab.xenproject.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.12.2020 01:32, osstest service owner wrote:
> branch xen-4.13-testing
> xenbranch xen-4.13-testing
> job test-amd64-amd64-xl-qemuu-ovmf-amd64
> testid debian-hvm-install
> 
> Tree: linux git://xenbits.xen.org/linux-pvops.git
> Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
> Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
> Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
> Tree: qemuu git://xenbits.xen.org/qemu-xen.git
> Tree: seabios git://xenbits.xen.org/osstest/seabios.git
> Tree: xen git://xenbits.xen.org/xen.git
> 
> *** Found and reproduced problem changeset ***
> 
>   Bug is in tree:  ovmf git://xenbits.xen.org/osstest/ovmf.git
>   Bug introduced:  cee5b0441af39dd6f76cc4e0447a1c7f788cbb00
>   Bug not present: 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970
>   Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/157657/
> 
> 
>   commit cee5b0441af39dd6f76cc4e0447a1c7f788cbb00
>   Author: Guo Dong <guo.dong@intel.com>
>   Date:   Wed Dec 2 14:18:18 2020 -0700
>   
>       UefiCpuPkg/CpuDxe: Fix boot error
>       
>       REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3084
>       
>       When DXE drivers are dispatched above 4GB memory and
>       the system is already in 64bit mode, the address
>       setCodeSelectorLongJump in stack will be override
>       by parameter. so change to use 64bit address and
>       jump to qword address.
>       
>       Signed-off-by: Guo Dong <guo.dong@intel.com>
>       Reviewed-by: Ray Ni <ray.ni@intel.com>
>       Reviewed-by: Eric Dong <eric.dong@intel.com>

Is this a result one can consider trustworthy? The ovmf tree used
with 4.13 shouldn't have changed at all recently, I would
assume ...

Jan

> For bisection revision-tuple graph see:
>    http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-4.13-testing/test-amd64-amd64-xl-qemuu-ovmf-amd64.debian-hvm-install.html
> Revision IDs in each graph node refer, respectively, to the Trees above.
> 
> ----------------------------------------
> Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-4.13-testing/test-amd64-amd64-xl-qemuu-ovmf-amd64.debian-hvm-install --summary-out=tmp/157657.bisection-summary --basis-template=157135 --blessings=real,real-bisect,real-retry xen-4.13-testing test-amd64-amd64-xl-qemuu-ovmf-amd64 debian-hvm-install
> Searching for failure / basis pass:
>  157597 fail [host=rimava1] / 157135 [host=godello1] 156988 [host=godello0] 156636 [host=godello1] 156593 [host=albana1] 156399 [host=chardonnay0] 156317 [host=elbling0] 156265 [host=huxelrebe1] 156054 [host=godello1] 156030 [host=godello0] 155377 [host=fiano1] 155258 ok.
> Failure / basis pass flights: 157597 / 155258
> (tree with no url: minios)
> Tree: linux git://xenbits.xen.org/linux-pvops.git
> Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
> Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
> Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
> Tree: qemuu git://xenbits.xen.org/qemu-xen.git
> Tree: seabios git://xenbits.xen.org/osstest/seabios.git
> Tree: xen git://xenbits.xen.org/xen.git
> Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f95e80d832e923046c92cd6f0b8208cec147138e d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 10c7c213bef26274684798deb3e351a6756046d2
> Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d8ab884fe9b4dd148980bf0d8673187f8fb25887 d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 88f5b414ac0f8008c1e2b26f93c3d980120941f7
> Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#d8ab884fe9b4dd148980bf0d8673187f8fb25887-f95e80d832e923046c92cd6f0b8208cec147138e git://xenbits.xen.org/qemu-xen-traditional.git#d0d8ad39ecb51cd7497cd524484\
>  fe09f50876798-d0d8ad39ecb51cd7497cd524484fe09f50876798 git://xenbits.xen.org/qemu-xen.git#730e2b1927e7d911bbd5350714054ddd5912f4ed-7269466a5b0c0e89b36dc9a7db0554ae404aa230 git://xenbits.xen.org/osstest/seabios.git#41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5-748d619be3282fba35f99446098ac2d0579f6063 git://xenbits.xen.org/xen.git#88f5b414ac0f8008c1e2b26f93c3d980120941f7-10c7c213bef26274684798deb3e351a6756046d2
> Loaded 7669 nodes in revision graph
> Searching for test results:
>  155132 [host=albana0]
>  155258 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d8ab884fe9b4dd148980bf0d8673187f8fb25887 d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 88f5b414ac0f8008c1e2b26f93c3d980120941f7
>  155377 [host=fiano1]
>  156030 [host=godello0]
>  156054 [host=godello1]
>  156265 [host=huxelrebe1]
>  156317 [host=elbling0]
>  156399 [host=chardonnay0]
>  156593 [host=albana1]
>  156636 [host=godello1]
>  156988 [host=godello0]
>  157135 [host=godello1]
>  157563 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f95e80d832e923046c92cd6f0b8208cec147138e d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 10c7c213bef26274684798deb3e351a6756046d2
>  157596 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d8ab884fe9b4dd148980bf0d8673187f8fb25887 d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 88f5b414ac0f8008c1e2b26f93c3d980120941f7
>  157616 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f95e80d832e923046c92cd6f0b8208cec147138e d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 10c7c213bef26274684798deb3e351a6756046d2
>  157622 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b96b44feab7aad2b9ae73a3602924b42d148bb03 d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 94f0510dc75e910400aad6c169048d672c8c7193 971a9d14667448427ddea1cf15fd7fd409205c58
>  157626 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 391610903b2944bb6bfed76fe9f9b46838600baf d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 5e4914e60da9a8dfdc00e839278f40c87525b8ae
>  157597 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f95e80d832e923046c92cd6f0b8208cec147138e d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 10c7c213bef26274684798deb3e351a6756046d2
>  157628 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 375e9b190e37041129b35a1c667993ea145e5b7e d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 b5302273e2c51940172400486644636f2f4fc64a
>  157630 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f95e80d832e923046c92cd6f0b8208cec147138e d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 60e3727bcae7268a57aa240c799b1bc788c9c39b
>  157631 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f95e80d832e923046c92cd6f0b8208cec147138e d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 4959626e926ce2e6de731135b1f567433edcd992
>  157635 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970 d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 b5302273e2c51940172400486644636f2f4fc64a
>  157639 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 7061294be500de021bef3d4bc5218134d223315f d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 b5302273e2c51940172400486644636f2f4fc64a
>  157640 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cee5b0441af39dd6f76cc4e0447a1c7f788cbb00 d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 b5302273e2c51940172400486644636f2f4fc64a
>  157643 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970 d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 b5302273e2c51940172400486644636f2f4fc64a
>  157651 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cee5b0441af39dd6f76cc4e0447a1c7f788cbb00 d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 b5302273e2c51940172400486644636f2f4fc64a
>  157653 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970 d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 b5302273e2c51940172400486644636f2f4fc64a
>  157657 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cee5b0441af39dd6f76cc4e0447a1c7f788cbb00 d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 b5302273e2c51940172400486644636f2f4fc64a
> Searching for interesting versions
>  Result found: flight 155258 (pass), for basis pass
>  For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970 d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 b5302273e2c51940172400486644636f2f4fc64a, results HASH(0x55b5783495c8) HASH(0x55b578278648) HASH(0x55b57829faa0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1\
>  e6a472b0eb9558310b518f0dfcd8860 375e9b190e37041129b35a1c667993ea145e5b7e d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 b5302273e2c51940172400486644636f2f4fc64a, results HASH(0x55b5782852a0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 391610903b2944bb6bfed76fe9f9b46838600baf d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7\
>  db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 5e4914e60da9a8dfdc00e839278f40c87525b8ae, results HASH(0x55b5778cb898) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b96b44feab7aad2b9ae73a3602924b42d148bb03 d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 94f0510dc75e910400aad6c169048d672c8c7193 971a9d14667448427ddea1cf15fd7fd409205c58, results HASH(0x55b578295150) For basis\
>   failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d8ab884fe9b4dd148980bf0d8673187f8fb25887 d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 88f5b414ac0f8008c1e2b26f93c3d980120941f7, results HASH(0x55b578287ed0) HASH(0x55b57828dbe8) Result found: flight 157563 (fail), for basis failure (at ancestor ~7668)
>  Repro found: flight 157596 (pass), for basis pass
>  Repro found: flight 157597 (fail), for basis failure
>  0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970 d0d8ad39ecb51cd7497cd524484fe09f50876798 7269466a5b0c0e89b36dc9a7db0554ae404aa230 748d619be3282fba35f99446098ac2d0579f6063 b5302273e2c51940172400486644636f2f4fc64a
> No revisions left to test, checking graph state.
>  Result found: flight 157635 (pass), for last pass
>  Result found: flight 157640 (fail), for first failure
>  Repro found: flight 157643 (pass), for last pass
>  Repro found: flight 157651 (fail), for first failure
>  Repro found: flight 157653 (pass), for last pass
>  Repro found: flight 157657 (fail), for first failure
> 
> *** Found and reproduced problem changeset ***
> 
>   Bug is in tree:  ovmf git://xenbits.xen.org/osstest/ovmf.git
>   Bug introduced:  cee5b0441af39dd6f76cc4e0447a1c7f788cbb00
>   Bug not present: 8e4cb8fbceb84b66b3b2fc45b9e93d70f732e970
>   Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/157657/
> 
> 
>   commit cee5b0441af39dd6f76cc4e0447a1c7f788cbb00
>   Author: Guo Dong <guo.dong@intel.com>
>   Date:   Wed Dec 2 14:18:18 2020 -0700
>   
>       UefiCpuPkg/CpuDxe: Fix boot error
>       
>       REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3084
>       
>       When DXE drivers are dispatched above 4GB memory and
>       the system is already in 64bit mode, the address of
>       setCodeSelectorLongJump on the stack will be overridden
>       by a parameter, so change to use a 64bit address and
>       jump to a qword address.
>       
>       Signed-off-by: Guo Dong <guo.dong@intel.com>
>       Reviewed-by: Ray Ni <ray.ni@intel.com>
>       Reviewed-by: Eric Dong <eric.dong@intel.com>
> 
> pnmtopng: 145 colors found
> Revision graph left in /home/logs/results/bisect/xen-4.13-testing/test-amd64-amd64-xl-qemuu-ovmf-amd64.debian-hvm-install.{dot,ps,png,html,svg}.
> ----------------------------------------
> 157657: tolerable ALL FAIL
> 
> flight 157657 xen-4.13-testing real-bisect [real]
> http://logs.test-lab.xenproject.org/osstest/logs/157657/
> 
> Failures :-/ but no regressions.
> 
> Tests which did not succeed,
> including tests which could not be run:
>  test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail baseline untested
> 
> 
> jobs:
>  test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
> 
> 
> ------------------------------------------------------------
> sg-report-flight on osstest.test-lab.xenproject.org
> logs: /home/logs/logs
> images: /home/logs/images
> 
> Logs, config files, etc. are available at
>     http://logs.test-lab.xenproject.org/osstest/logs
> 
> Explanation of these reports, and of osstest in general, is at
>     http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
>     http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master
> 
> Test harness code can be found at
>     http://xenbits.xen.org/gitweb?p=osstest.git;a=summary
> 
> 



From xen-devel-bounces@lists.xenproject.org Fri Dec 18 07:46:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 07:46:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56347.98639 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqATX-0003Tt-CP; Fri, 18 Dec 2020 07:46:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56347.98639; Fri, 18 Dec 2020 07:46:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqATX-0003Tl-9E; Fri, 18 Dec 2020 07:46:55 +0000
Received: by outflank-mailman (input) for mailman id 56347;
 Fri, 18 Dec 2020 07:46:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqATV-0003SL-Ty; Fri, 18 Dec 2020 07:46:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqATV-0001mp-Mt; Fri, 18 Dec 2020 07:46:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqATV-000483-CM; Fri, 18 Dec 2020 07:46:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kqATV-0007pz-Bv; Fri, 18 Dec 2020 07:46:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=AmeeJW7M7YUIlnCytiN35qfdYrKH7hKJU7Cm9t/T8KM=; b=4HUa/6YbZRcL6r8Ogxtj70hHfc
	XWzdNqK2YToRJnPKeWV1vfiyx6k9lwv7yvJ0Zop1vOHKgLLLknwh8RSlLyhZFEZefom9sY7Ui1M8q
	eIkuDurLxb7zMZTk97FkTC5BcVEZWUhJvaflFBl0mFHcLOu7uUw+/nkWZ0EOzdxNWILg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157629-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 157629: regressions - FAIL
X-Osstest-Failures:
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    xen-4.13-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=10c7c213bef26274684798deb3e351a6756046d2
X-Osstest-Versions-That:
    xen=b5302273e2c51940172400486644636f2f4fc64a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 18 Dec 2020 07:46:53 +0000

flight 157629 xen-4.13-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157629/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157135

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 157563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157135
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157135
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157135
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157135
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157135
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157135
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157135
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157135
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157135
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157135
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157135
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  10c7c213bef26274684798deb3e351a6756046d2
baseline version:
 xen                  b5302273e2c51940172400486644636f2f4fc64a

Last test of basis   157135  2020-12-01 15:06:11 Z   16 days
Testing same since   157563  2020-12-15 13:36:28 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Harsha Shamsundara Havanur <havanur@amazon.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 743 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 07:54:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 07:54:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56359.98658 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqAap-0004ZA-F1; Fri, 18 Dec 2020 07:54:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56359.98658; Fri, 18 Dec 2020 07:54:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqAap-0004Z3-CA; Fri, 18 Dec 2020 07:54:27 +0000
Received: by outflank-mailman (input) for mailman id 56359;
 Fri, 18 Dec 2020 07:54:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ets7=FW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kqAao-0004Yy-8I
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 07:54:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fd94bf8d-de5a-4167-8949-bc3abb46295f;
 Fri, 18 Dec 2020 07:54:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7EFD3ABC6;
 Fri, 18 Dec 2020 07:54:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fd94bf8d-de5a-4167-8949-bc3abb46295f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608278064; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=j2oPksEdP+W0vsc0Lz5K3lyisXlh30EIFwehvRxFRRA=;
	b=drcMJH3ryPImsGSO4xhQ06CxBLhXqLXckaSalstab49D4TqdXUobtL7rpjH7aJyXxF2T8o
	3ebi4vffH/9dgrCvioCWL+qhIP0uOMUrVrwcW0UX6jp6TkUqg7/y3OQFAf6RvbKbxSpg+O
	OQbhJxgSlM4xTbDDFWdOddcqJi7Zdbc=
Subject: Re: [PATCH] xen: Rework WARN_ON() to return whether a warning was
 triggered
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien@xen.org>, bertrand.marquis@arm.com,
 Rahul.Singh@arm.com, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201215112610.1986-1-julien@xen.org>
 <c5ac88e6-4e06-553d-2996-d2b027acd782@suse.com>
 <04455739-f07f-3da8-f764-33600a9cab6f@xen.org>
 <3f165cf8-88a4-590a-6e86-2435e8a7e554@suse.com>
 <alpine.DEB.2.21.2012171553340.4040@sstabellini-ThinkPad-T480s>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <81ea6132-b8b6-90b9-2c5c-9ca89ee6c0d0@suse.com>
Date: Fri, 18 Dec 2020 08:54:24 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2012171553340.4040@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.12.2020 00:54, Stefano Stabellini wrote:
> On Tue, 15 Dec 2020, Jan Beulich wrote:
>> On 15.12.2020 14:19, Julien Grall wrote:
>>> On 15/12/2020 11:46, Jan Beulich wrote:
>>>> On 15.12.2020 12:26, Julien Grall wrote:
>>>>> --- a/xen/include/xen/lib.h
>>>>> +++ b/xen/include/xen/lib.h
>>>>> @@ -23,7 +23,13 @@
>>>>>   #include <asm/bug.h>
>>>>>   
>>>>>   #define BUG_ON(p)  do { if (unlikely(p)) BUG();  } while (0)
>>>>> -#define WARN_ON(p) do { if (unlikely(p)) WARN(); } while (0)
>>>>> +#define WARN_ON(p)  ({                  \
>>>>> +    bool __ret_warn_on = (p);           \
>>>>
>>>> Please can you avoid leading underscores here?
>>>
>>> I can.
>>>
>>>>
>>>>> +                                        \
>>>>> +    if ( unlikely(__ret_warn_on) )      \
>>>>> +        WARN();                         \
>>>>> +    unlikely(__ret_warn_on);            \
>>>>> +})
>>>>
>>>> Is this latter unlikely() having any effect? So far I thought it
>>>> would need to be immediately inside a control construct or be an
>>>> operand to && or ||.
>>>
>>> The unlikely() is directly taken from the Linux implementation.
>>>
>>> My guess is the compiler is still able to use the information for the 
>>> branch prediction in the case of:
>>>
>>> if ( WARN_ON(...) )
>>
>> Maybe. Or maybe not. I don't suppose the Linux commit introducing
>> it clarifies this?
> 
> I did a bit of digging but it looks like the unlikely has been there
> forever. I'd just keep it as is.

I'm afraid I don't view this as a reason to inherit code unchanged.
If it was introduced with a clear indication that compilers can
recognize it despite the somewhat unusual placement, then fine. But
likely() / unlikely() quite often get put in more or less blindly -
see the not uncommon unlikely(a && b) style of uses, which don't
typically have the intended effect and would instead need to be
unlikely(a) && unlikely(b) [assuming each condition alone is indeed
deemed unlikely], unless compilers have learned to guess/infer
what's meant between when I last looked at this and now.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 08:02:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 08:02:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56369.98669 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqAiJ-0006BN-E0; Fri, 18 Dec 2020 08:02:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56369.98669; Fri, 18 Dec 2020 08:02:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqAiJ-0006BG-B0; Fri, 18 Dec 2020 08:02:11 +0000
Received: by outflank-mailman (input) for mailman id 56369;
 Fri, 18 Dec 2020 08:02:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ets7=FW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kqAiH-0006BA-JF
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 08:02:09 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2785e574-0330-4928-aea0-2bc7f2b2d5d3;
 Fri, 18 Dec 2020 08:02:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 03911ACF5;
 Fri, 18 Dec 2020 08:02:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2785e574-0330-4928-aea0-2bc7f2b2d5d3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608278526; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=xiWOHGLXMUTlR8jN4mu9lgQLfBBtzwcZlQrf7/IzyqY=;
	b=YlgY2gn320Pm9A3VcSFeE+phm+9hMKTwBAc2yMgGOGPgU9BtDSoLiG/vPZeyy8aLhEx/oT
	p7rAdcPh3j6mMsy7OFX0tcwO0Mxp+VYgk0uH93SQ4PmbR3Qyur7EVa4IHccKv3CgGxe6c8
	9zWzupdQyCpX+/Fe5a28+ojfqmAGoAI=
Subject: Ping: Arm: [PATCH v3 2/8] lib: collect library files in an archive
From: Jan Beulich <jbeulich@suse.com>
To: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
 <21714b83-8619-5aa9-be5b-3015d05a26a4@suse.com>
Message-ID: <59c970d6-2be4-3239-6728-b8905b007323@suse.com>
Date: Fri, 18 Dec 2020 09:02:06 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <21714b83-8619-5aa9-be5b-3015d05a26a4@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.11.2020 16:21, Jan Beulich wrote:
> In order to (subsequently) drop odd things like CONFIG_NEEDS_LIST_SORT
> just to avoid bloating binaries when only some arch-es and/or
> configurations need generic library routines, combine objects under lib/
> into an archive, which the linker then can pick the necessary objects
> out of.
> 
> Note that we can't use thin archives just yet, until we've raised the
> minimum required binutils version suitably.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
>  xen/Rules.mk          | 29 +++++++++++++++++++++++++----
>  xen/arch/arm/Makefile |  6 +++---

For this and patch 3 of the series, may I ask for an Arm side ack
(or otherwise)?

Thanks, Jan

>  xen/arch/x86/Makefile |  8 ++++----
>  xen/lib/Makefile      |  3 ++-
>  4 files changed, 34 insertions(+), 12 deletions(-)
> 
> diff --git a/xen/Rules.mk b/xen/Rules.mk
> index d5e5eb33de39..aba6ca2a90f5 100644
> --- a/xen/Rules.mk
> +++ b/xen/Rules.mk
> @@ -41,12 +41,16 @@ ALL_OBJS-y               += $(BASEDIR)/xsm/built_in.o
>  ALL_OBJS-y               += $(BASEDIR)/arch/$(TARGET_ARCH)/built_in.o
>  ALL_OBJS-$(CONFIG_CRYPTO)   += $(BASEDIR)/crypto/built_in.o
>  
> +ALL_LIBS-y               := $(BASEDIR)/lib/lib.a
> +
>  # Initialise some variables
> +lib-y :=
>  targets :=
>  CFLAGS-y :=
>  AFLAGS-y :=
>  
>  ALL_OBJS := $(ALL_OBJS-y)
> +ALL_LIBS := $(ALL_LIBS-y)
>  
>  SPECIAL_DATA_SECTIONS := rodata $(foreach a,1 2 4 8 16, \
>                                              $(foreach w,1 2 4, \
> @@ -60,7 +64,14 @@ include Makefile
>  # ---------------------------------------------------------------------------
>  
>  quiet_cmd_ld = LD      $@
> -cmd_ld = $(LD) $(XEN_LDFLAGS) -r -o $@ $(real-prereqs)
> +cmd_ld = $(LD) $(XEN_LDFLAGS) -r -o $@ $(filter-out %.a,$(real-prereqs)) \
> +               --start-group $(filter %.a,$(real-prereqs)) --end-group
> +
> +# Archive
> +# ---------------------------------------------------------------------------
> +
> +quiet_cmd_ar = AR      $@
> +cmd_ar = rm -f $@; $(AR) cPrs $@ $(real-prereqs)
>  
>  # Objcopy
>  # ---------------------------------------------------------------------------
> @@ -86,6 +97,10 @@ obj-y    := $(patsubst %/, %/built_in.o, $(obj-y))
>  # tell kbuild to descend
>  subdir-obj-y := $(filter %/built_in.o, $(obj-y))
>  
> +# Libraries are always collected in one lib file.
> +# Filter out objects already built-in
> +lib-y := $(filter-out $(obj-y), $(sort $(lib-y)))
> +
>  $(filter %.init.o,$(obj-y) $(obj-bin-y) $(extra-y)): CFLAGS-y += -DINIT_SECTIONS_ONLY
>  
>  ifeq ($(CONFIG_COVERAGE),y)
> @@ -129,7 +144,7 @@ include $(BASEDIR)/arch/$(TARGET_ARCH)/Rules.mk
>  c_flags += $(CFLAGS-y)
>  a_flags += $(CFLAGS-y) $(AFLAGS-y)
>  
> -built_in.o: $(obj-y) $(extra-y)
> +built_in.o: $(obj-y) $(if $(strip $(lib-y)),lib.a) $(extra-y)
>  ifeq ($(strip $(obj-y)),)
>  	$(CC) $(c_flags) -c -x c /dev/null -o $@
>  else
> @@ -140,8 +155,14 @@ else
>  endif
>  endif
>  
> +lib.a: $(lib-y) FORCE
> +	$(call if_changed,ar)
> +
>  targets += built_in.o
> -targets += $(filter-out $(subdir-obj-y), $(obj-y)) $(extra-y)
> +ifneq ($(strip $(lib-y)),)
> +targets += lib.a
> +endif
> +targets += $(filter-out $(subdir-obj-y), $(obj-y) $(lib-y)) $(extra-y)
>  targets += $(MAKECMDGOALS)
>  
>  built_in_bin.o: $(obj-bin-y) $(extra-y)
> @@ -155,7 +176,7 @@ endif
>  PHONY += FORCE
>  FORCE:
>  
> -%/built_in.o: FORCE
> +%/built_in.o %/lib.a: FORCE
>  	$(MAKE) -f $(BASEDIR)/Rules.mk -C $* built_in.o
>  
>  %/built_in_bin.o: FORCE
> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> index 296c5e68bbc3..612a83b315c8 100644
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -90,14 +90,14 @@ endif
>  
>  ifeq ($(CONFIG_LTO),y)
>  # Gather all LTO objects together
> -prelink_lto.o: $(ALL_OBJS)
> -	$(LD_LTO) -r -o $@ $^
> +prelink_lto.o: $(ALL_OBJS) $(ALL_LIBS)
> +	$(LD_LTO) -r -o $@ $(filter-out %.a,$^) --start-group $(filter %.a,$^) --end-group
>  
>  # Link it with all the binary objects
>  prelink.o: $(patsubst %/built_in.o,%/built_in_bin.o,$(ALL_OBJS)) prelink_lto.o
>  	$(call if_changed,ld)
>  else
> -prelink.o: $(ALL_OBJS) FORCE
> +prelink.o: $(ALL_OBJS) $(ALL_LIBS) FORCE
>  	$(call if_changed,ld)
>  endif
>  
> diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
> index 9b368632fb43..8f2180485b2b 100644
> --- a/xen/arch/x86/Makefile
> +++ b/xen/arch/x86/Makefile
> @@ -132,8 +132,8 @@ EFI_OBJS-$(XEN_BUILD_EFI) := efi/relocs-dummy.o
>  
>  ifeq ($(CONFIG_LTO),y)
>  # Gather all LTO objects together
> -prelink_lto.o: $(ALL_OBJS)
> -	$(LD_LTO) -r -o $@ $^
> +prelink_lto.o: $(ALL_OBJS) $(ALL_LIBS)
> +	$(LD_LTO) -r -o $@ $(filter-out %.a,$^) --start-group $(filter %.a,$^) --end-group
>  
>  # Link it with all the binary objects
>  prelink.o: $(patsubst %/built_in.o,%/built_in_bin.o,$(ALL_OBJS)) prelink_lto.o $(EFI_OBJS-y) FORCE
> @@ -142,10 +142,10 @@ prelink.o: $(patsubst %/built_in.o,%/built_in_bin.o,$(ALL_OBJS)) prelink_lto.o $
>  prelink-efi.o: $(patsubst %/built_in.o,%/built_in_bin.o,$(ALL_OBJS)) prelink_lto.o FORCE
>  	$(call if_changed,ld)
>  else
> -prelink.o: $(ALL_OBJS) $(EFI_OBJS-y) FORCE
> +prelink.o: $(ALL_OBJS) $(ALL_LIBS) $(EFI_OBJS-y) FORCE
>  	$(call if_changed,ld)
>  
> -prelink-efi.o: $(ALL_OBJS) FORCE
> +prelink-efi.o: $(ALL_OBJS) $(ALL_LIBS) FORCE
>  	$(call if_changed,ld)
>  endif
>  
> diff --git a/xen/lib/Makefile b/xen/lib/Makefile
> index 53b1da025e0d..b8814361d63e 100644
> --- a/xen/lib/Makefile
> +++ b/xen/lib/Makefile
> @@ -1,2 +1,3 @@
> -obj-y += ctype.o
>  obj-$(CONFIG_X86) += x86/
> +
> +lib-y += ctype.o
> 
> 



From xen-devel-bounces@lists.xenproject.org Fri Dec 18 08:19:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 08:19:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56374.98682 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqAzL-0007VI-Qj; Fri, 18 Dec 2020 08:19:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56374.98682; Fri, 18 Dec 2020 08:19:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqAzL-0007VB-NJ; Fri, 18 Dec 2020 08:19:47 +0000
Received: by outflank-mailman (input) for mailman id 56374;
 Fri, 18 Dec 2020 08:19:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=J4wv=FW=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kqAzK-0007V6-CH
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 08:19:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 17e0ba05-df6d-4c42-9fe6-2d509dc0cd61;
 Fri, 18 Dec 2020 08:19:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 10E5DABC6;
 Fri, 18 Dec 2020 08:19:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 17e0ba05-df6d-4c42-9fe6-2d509dc0cd61
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608279579; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=H+IEL3u/f2WSITNorjp40t4yPS3R4acSzY9/k4Vg2Ts=;
	b=GpRtjrsUIjHUmwUBnhiGjwJvr3lHfeK/TSTgEkbcvqXpnL2czT5BxnBYN6NPor0+A2VMFu
	C0ZHjyYtCh6ZtzRR7oO6Eq/Um63G6C++FzMRTvtV6xXUOXQqCYq7jFfYd3v/rBN+j0Czht
	pgHH/NeIdeDehZ7bOCb8sGBXwhNiXc0=
To: Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien@xen.org>, bertrand.marquis@arm.com,
 Rahul.Singh@arm.com, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201215112610.1986-1-julien@xen.org>
 <c5ac88e6-4e06-553d-2996-d2b027acd782@suse.com>
 <04455739-f07f-3da8-f764-33600a9cab6f@xen.org>
 <3f165cf8-88a4-590a-6e86-2435e8a7e554@suse.com>
 <alpine.DEB.2.21.2012171553340.4040@sstabellini-ThinkPad-T480s>
 <81ea6132-b8b6-90b9-2c5c-9ca89ee6c0d0@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH] xen: Rework WARN_ON() to return whether a warning was
 triggered
Message-ID: <142e7b4d-649d-07d0-26cf-185a434a365c@suse.com>
Date: Fri, 18 Dec 2020 09:19:38 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <81ea6132-b8b6-90b9-2c5c-9ca89ee6c0d0@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="dd3N3pNamgrCKyyBJdoADTe5AccG4qnuR"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--dd3N3pNamgrCKyyBJdoADTe5AccG4qnuR
Content-Type: multipart/mixed; boundary="maDW8Pd8OjU1d7LYXePMnizjhfujNt8b8";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien@xen.org>, bertrand.marquis@arm.com,
 Rahul.Singh@arm.com, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <142e7b4d-649d-07d0-26cf-185a434a365c@suse.com>
Subject: Re: [PATCH] xen: Rework WARN_ON() to return whether a warning was
 triggered
References: <20201215112610.1986-1-julien@xen.org>
 <c5ac88e6-4e06-553d-2996-d2b027acd782@suse.com>
 <04455739-f07f-3da8-f764-33600a9cab6f@xen.org>
 <3f165cf8-88a4-590a-6e86-2435e8a7e554@suse.com>
 <alpine.DEB.2.21.2012171553340.4040@sstabellini-ThinkPad-T480s>
 <81ea6132-b8b6-90b9-2c5c-9ca89ee6c0d0@suse.com>
In-Reply-To: <81ea6132-b8b6-90b9-2c5c-9ca89ee6c0d0@suse.com>

--maDW8Pd8OjU1d7LYXePMnizjhfujNt8b8
Content-Type: multipart/mixed;
 boundary="------------E84DAD1C22DFDE7C30466659"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------E84DAD1C22DFDE7C30466659
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 18.12.20 08:54, Jan Beulich wrote:
> On 18.12.2020 00:54, Stefano Stabellini wrote:
>> On Tue, 15 Dec 2020, Jan Beulich wrote:
>>> On 15.12.2020 14:19, Julien Grall wrote:
>>>> On 15/12/2020 11:46, Jan Beulich wrote:
>>>>> On 15.12.2020 12:26, Julien Grall wrote:
>>>>>> --- a/xen/include/xen/lib.h
>>>>>> +++ b/xen/include/xen/lib.h
>>>>>> @@ -23,7 +23,13 @@
>>>>>>    #include <asm/bug.h>
>>>>>>   
>>>>>>    #define BUG_ON(p)  do { if (unlikely(p)) BUG();  } while (0)
>>>>>> -#define WARN_ON(p) do { if (unlikely(p)) WARN(); } while (0)
>>>>>> +#define WARN_ON(p)  ({                  \
>>>>>> +    bool __ret_warn_on = (p);           \
>>>>>
>>>>> Please can you avoid leading underscores here?
>>>>
>>>> I can.
>>>>
>>>>>
>>>>>> +                                        \
>>>>>> +    if ( unlikely(__ret_warn_on) )      \
>>>>>> +        WARN();                         \
>>>>>> +    unlikely(__ret_warn_on);            \
>>>>>> +})
>>>>>
>>>>> Is this latter unlikely() having any effect? So far I thought it
>>>>> would need to be immediately inside a control construct or be an
>>>>> operand to && or ||.
>>>>
>>>> The unlikely() is directly taken from the Linux implementation.
>>>>
>>>> My guess is the compiler is still able to use the information for the
>>>> branch prediction in the case of:
>>>>
>>>> if ( WARN_ON(...) )
>>>
>>> Maybe. Or maybe not. I don't suppose the Linux commit introducing
>>> it clarifies this?
>>
>> I did a bit of digging but it looks like the unlikely has been there
>> forever. I'd just keep it as is.
> 
> I'm afraid I don't view this as a reason to inherit code unchanged.
> If it was introduced with a clear indication that compilers can
> recognize it despite the somewhat unusual placement, then fine. But
> likely() / unlikely() quite often get put in more or less blindly -
> see the not uncommon unlikely(a && b) style of uses, which don't
> typically have the intended effect and would instead need to be
> unlikely(a) && unlikely(b) [assuming each condition alone is indeed
> deemed unlikely], unless compilers have learned to guess/infer
> what's meant between when I last looked at this and now.

I have run a little experiment and found that the unlikely() at the
end of a macro does have an effect.

The disassembly of

#define unlikely(x) __builtin_expect(!!(x), 0)

#define foo() ({        \
         int i = !time(NULL); \
         unlikely(i); })

#include "stdio.h"
#include "time.h"

int main() {
     if (foo())
         puts("a");
     return 0;
}

is the same as that of:

#define unlikely(x) __builtin_expect(!!(x), 0)

#include "stdio.h"
#include "time.h"

int main() {
     int i = !time(NULL);

     if (unlikely(i))
         puts("a");
     return 0;
}

while that of:

#include "stdio.h"
#include "time.h"

int main() {
     int i = !time(NULL);

     if (i)
         puts("a");
     return 0;
}

is different.


Juergen

--------------E84DAD1C22DFDE7C30466659--

--maDW8Pd8OjU1d7LYXePMnizjhfujNt8b8--

--dd3N3pNamgrCKyyBJdoADTe5AccG4qnuR--


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 08:27:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 08:27:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56379.98697 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqB6o-0008QB-LI; Fri, 18 Dec 2020 08:27:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56379.98697; Fri, 18 Dec 2020 08:27:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqB6o-0008Q4-I5; Fri, 18 Dec 2020 08:27:30 +0000
Received: by outflank-mailman (input) for mailman id 56379;
 Fri, 18 Dec 2020 08:27:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ets7=FW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kqB6m-0008Pz-PH
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 08:27:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 51ff92f1-5ae2-4bcf-a004-262658c7d973;
 Fri, 18 Dec 2020 08:27:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 86E71ACC4;
 Fri, 18 Dec 2020 08:27:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 51ff92f1-5ae2-4bcf-a004-262658c7d973
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608280046; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+6moIg9ArN4Y+PEYC9f+3+7ugZX6EBBqarTBEIg1Rn8=;
	b=U3cFEsB1FkmIsImROYSit1Ow2pC+A5gfqD3eERVHUVaE8xO/HngT7nPVhcyp/DPYrMpZ79
	dZPD1hsuCtsTwR6asqY520BZBd98HIUxeVnynKnBdOc/ESmVVLPf/8/58aoU/4jaUMXbKy
	61qrXOOhPzs8ynNU967mAzr+k9Ew/Cw=
Subject: Re: [PATCH] xen/x86: Fix memory leak in vcpu_create() error path
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@eu.citrix.com>,
 Tim Deegan <tim@xen.org>, =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?=
 <michal.leszczynski@cert.pl>
References: <20200928154741.2366-1-andrew.cooper3@citrix.com>
 <33331c3a-1fd5-1ef6-16a3-21d2a6672e90@suse.com>
 <9556aeb3-2a7c-7aea-4386-6e561dd9ef6e@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9e652863-5ada-0327-5817-cdb2e652e066@suse.com>
Date: Fri, 18 Dec 2020 09:27:27 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <9556aeb3-2a7c-7aea-4386-6e561dd9ef6e@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 17.12.2020 22:46, Andrew Cooper wrote:
> On 29/09/2020 07:18, Jan Beulich wrote:
>> On 28.09.2020 17:47, Andrew Cooper wrote:
>>> --- a/xen/arch/x86/mm/hap/hap.c
>>> +++ b/xen/arch/x86/mm/hap/hap.c
>>> @@ -563,30 +563,37 @@ void hap_final_teardown(struct domain *d)
>>>      paging_unlock(d);
>>>  }
>>>  
>>> +void hap_vcpu_teardown(struct vcpu *v)
>>> +{
>>> +    struct domain *d = v->domain;
>>> +    mfn_t mfn;
>>> +
>>> +    paging_lock(d);
>>> +
>>> +    if ( !paging_mode_hap(d) || !v->arch.paging.mode )
>>> +        goto out;
>> Any particular reason you don't use paging_get_hostmode() (as the
>> original code did) here? Any particular reason for the seemingly
>> redundant (and hence somewhat in conflict with the description's
>> "with the minimum number of safety checks possible")
>> paging_mode_hap()?
> 
> Yes to both.  As you spotted, I converted the shadow side first, and
> made the two consistent.
> 
> The paging_mode_{shadow,hap})() is necessary for idempotency.  These
> functions really might get called before paging is set up, for an early
> failure in domain_create().

In which case how would v->arch.paging.mode be non-NULL already?
It gets set only in {hap,shadow}_vcpu_init().

> The paging mode has nothing really to do with hostmode/guestmode/etc. 
> It is the only way of expressing the logic where it is clear that the
> lower pointer dereferences are trivially safe.

Well, yes and no - the other uses of course should then also use
paging_get_hostmode(), like various of the wrappers in paging.h
do. Or else I question why we have paging_get_hostmode() in the
first place. There are more examples in shadow code where this
gets open-coded when it probably shouldn't be. There haven't been
any such cases in HAP code so far ...

Additionally (noticing only now) in the shadow case you may now
loop over all vCPU-s in shadow_teardown() just for
shadow_vcpu_teardown() to bail right away. Wouldn't it make sense
to retain the "if ( shadow_mode_enabled(d) )" there around the
loop?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 08:31:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 08:31:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56386.98712 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqBAK-0000zs-C5; Fri, 18 Dec 2020 08:31:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56386.98712; Fri, 18 Dec 2020 08:31:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqBAK-0000zl-8q; Fri, 18 Dec 2020 08:31:08 +0000
Received: by outflank-mailman (input) for mailman id 56386;
 Fri, 18 Dec 2020 08:31:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ets7=FW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kqBAJ-0000zg-IE
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 08:31:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2b5e67ab-8141-4541-b11a-b785ec3bc5c2;
 Fri, 18 Dec 2020 08:31:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 86ADBACC4;
 Fri, 18 Dec 2020 08:31:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b5e67ab-8141-4541-b11a-b785ec3bc5c2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608280265; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=PevfJibHyhM1P5bTVcOjupsdgI/avU0PT8NKlUKEqtk=;
	b=NlUt7uXWU8HzOg6zFuNl68+SkTymNK2ynG3lgaEOEYS03BA7tjBU2fIBOCNoVpyKxsK/Fa
	M8Goyt01J3xAk3yBVWUg/Iq5HdtmPVNBkstWc/0fR4BG0uxwhjNpJBPtX4r6xOq8mxE4eT
	2+tjEyBWZ4KpF3IRZ3edrx8B6QJH6nM=
Subject: Re: [PATCH] xen: Rework WARN_ON() to return whether a warning was
 triggered
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien@xen.org>, bertrand.marquis@arm.com,
 Rahul.Singh@arm.com, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201215112610.1986-1-julien@xen.org>
 <c5ac88e6-4e06-553d-2996-d2b027acd782@suse.com>
 <04455739-f07f-3da8-f764-33600a9cab6f@xen.org>
 <3f165cf8-88a4-590a-6e86-2435e8a7e554@suse.com>
 <alpine.DEB.2.21.2012171553340.4040@sstabellini-ThinkPad-T480s>
 <81ea6132-b8b6-90b9-2c5c-9ca89ee6c0d0@suse.com>
 <142e7b4d-649d-07d0-26cf-185a434a365c@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <19553beb-db02-c23c-e176-c5c52a5be7ed@suse.com>
Date: Fri, 18 Dec 2020 09:31:06 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <142e7b4d-649d-07d0-26cf-185a434a365c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 18.12.2020 09:19, Jürgen Groß wrote:
> On 18.12.20 08:54, Jan Beulich wrote:
>> On 18.12.2020 00:54, Stefano Stabellini wrote:
>>> On Tue, 15 Dec 2020, Jan Beulich wrote:
>>>> On 15.12.2020 14:19, Julien Grall wrote:
>>>>> On 15/12/2020 11:46, Jan Beulich wrote:
>>>>>> On 15.12.2020 12:26, Julien Grall wrote:
>>>>>>> --- a/xen/include/xen/lib.h
>>>>>>> +++ b/xen/include/xen/lib.h
>>>>>>> @@ -23,7 +23,13 @@
>>>>>>>    #include <asm/bug.h>
>>>>>>>    
>>>>>>>    #define BUG_ON(p)  do { if (unlikely(p)) BUG();  } while (0)
>>>>>>> -#define WARN_ON(p) do { if (unlikely(p)) WARN(); } while (0)
>>>>>>> +#define WARN_ON(p)  ({                  \
>>>>>>> +    bool __ret_warn_on = (p);           \
>>>>>>
>>>>>> Please can you avoid leading underscores here?
>>>>>
>>>>> I can.
>>>>>
>>>>>>
>>>>>>> +                                        \
>>>>>>> +    if ( unlikely(__ret_warn_on) )      \
>>>>>>> +        WARN();                         \
>>>>>>> +    unlikely(__ret_warn_on);            \
>>>>>>> +})
>>>>>>
>>>>>> Is this latter unlikely() having any effect? So far I thought it
>>>>>> would need to be immediately inside a control construct or be an
>>>>>> operand to && or ||.
>>>>>
>>>>> The unlikely() is directly taken from the Linux implementation.
>>>>>
>>>>> My guess is the compiler is still able to use the information for the
>>>>> branch prediction in the case of:
>>>>>
>>>>> if ( WARN_ON(...) )
>>>>
>>>> Maybe. Or maybe not. I don't suppose the Linux commit introducing
>>>> it clarifies this?
>>>
>>> I did a bit of digging but it looks like the unlikely has been there
>>> forever. I'd just keep it as is.
>>
>> I'm afraid I don't view this as a reason to inherit code unchanged.
>> If it was introduced with a clear indication that compilers can
>> recognize it despite the somewhat unusual placement, then fine. But
>> likely() / unlikely() quite often get put in more or less blindly -
>> see the not uncommon unlikely(a && b) style of uses, which don't
>> typically have the intended effect and would instead need to be
>> unlikely(a) && unlikely(b) [assuming each condition alone is indeed
>> deemed unlikely], unless compilers have learned to guess/infer
>> what's meant between when I last looked at this and now.
> 
> I have made a little experiment and found that the unlikely() at the
> end of a macro is having effect.

Okay, thanks - then my concern vanishes.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 08:39:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 08:39:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56390.98724 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqBIS-0001OL-7L; Fri, 18 Dec 2020 08:39:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56390.98724; Fri, 18 Dec 2020 08:39:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqBIS-0001OE-4O; Fri, 18 Dec 2020 08:39:32 +0000
Received: by outflank-mailman (input) for mailman id 56390;
 Fri, 18 Dec 2020 08:39:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ets7=FW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kqBIQ-0001O9-NZ
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 08:39:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 12e6929c-3e40-4a69-a250-cb8402f52af7;
 Fri, 18 Dec 2020 08:39:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 85C2FACC4;
 Fri, 18 Dec 2020 08:39:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12e6929c-3e40-4a69-a250-cb8402f52af7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608280767; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wtatcdxlcr3MEWcB4mHrg5SBuLlNpEFY4bk25dgMDKc=;
	b=bNWHCI7X22fzoalOZKuT26utK+EOJTbw0LXOF1IdbqUoFLBHCDSewgwBlsx+3GN5h6/SHF
	xVfTNFCikLciOSLHYxYK6Y4JVFr6BHw5jG8jbN8uWAfHsO+DPagU3rSDpXFIqvkl93YXRX
	EHQEyTF5kBs/r8Zl98kXBw/gh6mXRlg=
Subject: Re: [PATCH 1/6] x86/p2m: tidy p2m_add_foreign() a little
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <be9ce75e-9119-2b5a-9e7b-437beb7ee446@suse.com>
 <8b70c26e-7ae6-8438-67a3-99cef338ba52@suse.com>
 <55de56b3-0e83-c558-6432-9853db82f57a@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <54d8249e-9c47-3967-f069-2dd38a9e138d@suse.com>
Date: Fri, 18 Dec 2020 09:39:28 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <55de56b3-0e83-c558-6432-9853db82f57a@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 17.12.2020 20:03, Andrew Cooper wrote:
> On 15/12/2020 16:25, Jan Beulich wrote:
>> Drop a bogus ASSERT() - we don't typically assert incoming domain
>> pointers to be non-NULL, and there's no particular reason to do so here.
>>
>> Replace the open-coded DOMID_SELF check by use of
>> rcu_lock_remote_domain_by_id(), at the same time covering the request
>> being made with the current domain's actual ID.
>>
>> Move the "both domains same" check into just the path where it really
>> is meaningful.
>>
>> Swap the order of the two puts, such that
>> - the p2m lock isn't needlessly held across put_page(),
>> - a separate put_page() on an error path can be avoided,
>> - they're inverse to the order of the respective gets.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks.

>> ---
>> The DOMID_SELF check being converted also suggests to me that there's an
>> implication of tdom == current->domain, which would in turn appear to
>> mean the "both domains same" check could as well be dropped altogether.
> 
> I don't see anything conceptually wrong with the toolstack creating a
> foreign mapping on behalf of a guest at construction time.  I'd go as
> far as to argue that it is an interface shortcoming if this didn't
> function correctly.

Right, I actually didn't word the remark correctly. It's the DOMID_SELF
check that's suspicious, especially when tdom != current->domain,
not the tdom != fdom one.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 08:49:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 08:49:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56394.98735 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqBRX-0002J8-5H; Fri, 18 Dec 2020 08:48:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56394.98735; Fri, 18 Dec 2020 08:48:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqBRX-0002J1-2I; Fri, 18 Dec 2020 08:48:55 +0000
Received: by outflank-mailman (input) for mailman id 56394;
 Fri, 18 Dec 2020 08:48:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ets7=FW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kqBRV-0002It-BH
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 08:48:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 35598815-ea6f-4cf7-9709-b34af88ba062;
 Fri, 18 Dec 2020 08:48:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6ECBCACBD;
 Fri, 18 Dec 2020 08:48:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35598815-ea6f-4cf7-9709-b34af88ba062
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608281331; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ghIe0N3tZ4XWXLjcKriaLOCVEDnqA9Mt+/h/XXvilog=;
	b=Cg2U4RN+pSXgEafrXm8LZz6D0KuTo57y35uFwaU0sNl6b4JDr9IojGStoaCpGvNwp6eonJ
	9/B2KtSG9hLThCGmTvckmizo1XaoZDFPXPacZvjG+MfAWlwHBbz5LFowbaPJD7RFbyyqc8
	MhbVPlptEx4CsR1yLmVpStEBiS+R47E=
Subject: Re: [PATCH 2/6] x86/mm: p2m_add_foreign() is HVM-only
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <be9ce75e-9119-2b5a-9e7b-437beb7ee446@suse.com>
 <cf4569c5-a9c5-7b4b-d576-d1521c369418@suse.com>
 <f736244b-ece7-af35-1517-2e5fdd9705c7@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2eb9259d-1a36-4df0-d59d-fc08ce38d763@suse.com>
Date: Fri, 18 Dec 2020 09:48:52 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <f736244b-ece7-af35-1517-2e5fdd9705c7@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 17.12.2020 20:18, Andrew Cooper wrote:
> On 15/12/2020 16:26, Jan Beulich wrote:
>> This is together with its only caller, xenmem_add_to_physmap_one().
> 
> I can't parse this sentence.  Perhaps "... as is its only caller," as a
> follow-on from the subject sentence.

Yeah, changed along these lines.

>>  Move
>> the latter next to p2m_add_foreign(), allowing this one to become static
>> at the same time.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks.

> , although...
> 
>> --- a/xen/arch/x86/mm/p2m.c
>> +++ b/xen/arch/x86/mm/p2m.c
>> @@ -2639,7 +2646,114 @@ int p2m_add_foreign(struct domain *tdom,
>>      return rc;
>>  }
>>  
>> -#ifdef CONFIG_HVM
>> +int xenmem_add_to_physmap_one(
>> +    struct domain *d,
>> +    unsigned int space,
>> +    union add_to_physmap_extra extra,
>> +    unsigned long idx,
>> +    gfn_t gpfn)
>> +{
>> +    struct page_info *page = NULL;
>> +    unsigned long gfn = 0 /* gcc ... */, old_gpfn;
>> +    mfn_t prev_mfn;
>> +    int rc = 0;
>> +    mfn_t mfn = INVALID_MFN;
>> +    p2m_type_t p2mt;
>> +
>> +    switch ( space )
>> +    {
>> +        case XENMAPSPACE_shared_info:
>> +            if ( idx == 0 )
>> +                mfn = virt_to_mfn(d->shared_info);
>> +            break;
>> +        case XENMAPSPACE_grant_table:
>> +            rc = gnttab_map_frame(d, idx, gpfn, &mfn);
>> +            if ( rc )
>> +                return rc;
>> +            break;
>> +        case XENMAPSPACE_gmfn:
>> +        {
>> +            p2m_type_t p2mt;
>> +
>> +            gfn = idx;
>> +            mfn = get_gfn_unshare(d, gfn, &p2mt);
>> +            /* If the page is still shared, exit early */
>> +            if ( p2m_is_shared(p2mt) )
>> +            {
>> +                put_gfn(d, gfn);
>> +                return -ENOMEM;
>> +            }
>> +            page = get_page_from_mfn(mfn, d);
>> +            if ( unlikely(!page) )
>> +                mfn = INVALID_MFN;
>> +            break;
>> +        }
>> +        case XENMAPSPACE_gmfn_foreign:
>> +            return p2m_add_foreign(d, idx, gfn_x(gpfn), extra.foreign_domid);
>> +        default:
>> +            break;
> 
> ... seeing as the function is moving wholesale, can we at least correct
> the indentation, to save yet another large churn in the future?  (If it
> were me, I'd go as far as deleting the default case as well.)

Oh, indeed. I did look for obvious style issues but didn't spot this
(still quite obvious) one. I've done so and also added blank lines
between the case blocks.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 08:57:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 08:57:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56398.98747 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqBZX-0003OM-0U; Fri, 18 Dec 2020 08:57:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56398.98747; Fri, 18 Dec 2020 08:57:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqBZW-0003OF-Ty; Fri, 18 Dec 2020 08:57:10 +0000
Received: by outflank-mailman (input) for mailman id 56398;
 Fri, 18 Dec 2020 08:57:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=J4wv=FW=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kqBZV-0003Nu-1C
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 08:57:09 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 66ec1bd3-498d-4f03-9daa-fc6197f8d050;
 Fri, 18 Dec 2020 08:57:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CE1B7ADF8;
 Fri, 18 Dec 2020 08:57:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66ec1bd3-498d-4f03-9daa-fc6197f8d050
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608281827; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=xgFFFXewsSNqCc0ZDQFTjtiU6o+sJFsG/qyhDsT9zTg=;
	b=OR3hzyWsjdXPylj69AyaMyqqwRU7Sn4rS1Q7V3ZTLXjytQe8tywuiZ8VuIrkLxWVou/6B1
	ceFx5oG9U965VVG7FlKPuTayDdd+CD/O5mTKxTvXOoH1OqsLVVu0BMMTsLWYXpZZVxEeEb
	KC1r9ZieVAmMMVZtiMCNseF04vqELUw=
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201209160956.32456-1-jgross@suse.com>
 <20201209160956.32456-6-jgross@suse.com>
 <2894a231-9150-7c09-cc5c-7ef52087acf5@suse.com>
 <d4c408eb-08d8-42a8-0c0a-6580fce0e181@suse.com>
 <5e0ac85e-ecba-86ad-b350-ff30e3a40a68@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v3 5/8] xen/hypfs: add support for id-based dynamic
 directories
Message-ID: <bde3d3b1-a512-e1fe-cfd4-287fa0ea95cd@suse.com>
Date: Fri, 18 Dec 2020 09:57:05 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <5e0ac85e-ecba-86ad-b350-ff30e3a40a68@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="yRdKkYUL7Ih8NQ3MvIKdVmat9TVlWngQp"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--yRdKkYUL7Ih8NQ3MvIKdVmat9TVlWngQp
Content-Type: multipart/mixed; boundary="GmQrM7aQfQKBrdSy6lEa0Wgy1FdQ4c5qI";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <bde3d3b1-a512-e1fe-cfd4-287fa0ea95cd@suse.com>
Subject: Re: [PATCH v3 5/8] xen/hypfs: add support for id-based dynamic
 directories
References: <20201209160956.32456-1-jgross@suse.com>
 <20201209160956.32456-6-jgross@suse.com>
 <2894a231-9150-7c09-cc5c-7ef52087acf5@suse.com>
 <d4c408eb-08d8-42a8-0c0a-6580fce0e181@suse.com>
 <5e0ac85e-ecba-86ad-b350-ff30e3a40a68@suse.com>
In-Reply-To: <5e0ac85e-ecba-86ad-b350-ff30e3a40a68@suse.com>

--GmQrM7aQfQKBrdSy6lEa0Wgy1FdQ4c5qI
Content-Type: multipart/mixed;
 boundary="------------B5BEB2992E139D7B84FEB59B"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------B5BEB2992E139D7B84FEB59B
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 17.12.20 13:14, Jan Beulich wrote:
> On 17.12.2020 12:32, Jürgen Groß wrote:
>> On 17.12.20 12:28, Jan Beulich wrote:
>>> On 09.12.2020 17:09, Juergen Gross wrote:
>>>> +static const struct hypfs_entry *hypfs_dyndir_enter(
>>>> +    const struct hypfs_entry *entry)
>>>> +{
>>>> +    const struct hypfs_dyndir_id *data;
>>>> +
>>>> +    data = hypfs_get_dyndata();
>>>> +
>>>> +    /* Use template with original enter function. */
>>>> +    return data->template->e.funcs->enter(&data->template->e);
>>>> +}
>>>
>>> At the example of this (applies to other uses as well): I realize
>>> hypfs_get_dyndata() asserts that the pointer is non-NULL, but
>>> according to the bottom of ./CODING_STYLE this may not be enough
>>> when considering the implications of a NULL deref in the context
>>> of a PV guest. Even this living behind a sysctl doesn't really
>>> help, both because via XSM not fully privileged domains can be
>>> granted access, and because speculation may still occur all the
>>> way into here. (I'll send a patch to address the latter aspect in
>>> a few minutes.) While likely we have numerous existing examples
>>> with similar problems, I guess in new code we'd better be as
>>> defensive as possible.
>>
>> What do you suggest? BUG_ON()?
>
> Well, BUG_ON() would be a step in the right direction, converting
> privilege escalation to DoS. The question is if we can't do better
> here, gracefully failing in such a case (the usual pair of
> ASSERT_UNREACHABLE() plus return/break/goto approach doesn't fit
> here, at least not directly).
>
>> You are aware that this is nothing a user can influence, so it would
>> be a clear coding error in the hypervisor?
>
> A user (or guest) can't arrange for there to be a NULL pointer,
> but if there is one that can be run into here, this would still
> require an XSA afaict.

I still don't see how this could happen without a major coding bug,
which IMO wouldn't go unnoticed during a really brief test (this is
the reason for ASSERT() in hypfs_get_dyndata() after all).

It's not as if the control flow would allow many different ways to reach
any of the hypfs_get_dyndata() calls.

I can add security checks at the appropriate places, but I think this
would be just dead code. OTOH if you are feeling strongly here, let's go
with it.


Juergen


--------------B5BEB2992E139D7B84FEB59B--

--GmQrM7aQfQKBrdSy6lEa0Wgy1FdQ4c5qI--


--yRdKkYUL7Ih8NQ3MvIKdVmat9TVlWngQp--


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 08:58:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 08:58:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56402.98760 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqBad-0003Ui-Ch; Fri, 18 Dec 2020 08:58:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56402.98760; Fri, 18 Dec 2020 08:58:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqBad-0003Ub-9W; Fri, 18 Dec 2020 08:58:19 +0000
Received: by outflank-mailman (input) for mailman id 56402;
 Fri, 18 Dec 2020 08:58:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ets7=FW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kqBac-0003UT-0E
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 08:58:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b0965b98-6284-4148-b28f-377d62f3cc09;
 Fri, 18 Dec 2020 08:58:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2B15EABC6;
 Fri, 18 Dec 2020 08:58:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b0965b98-6284-4148-b28f-377d62f3cc09
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608281896; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QAQT/jCmN9xZbygsrd1PYs/7sn2pozeTI35cJr9SqsU=;
	b=SxOzh0zPvkzCUtvUYrS6XLupqzYDmXEu5YQzLq1iWtnQUkS7zne67D1IV5PR75Wf/Nr0zI
	KZGspNg92AIr9RYtrqx0KiJF7ApNYUrFAcjMNyZcBHy/9F5SjPsAkGbaTUXGR9vAWt+V9F
	d7TFO51vP9PGlYekCSaUUj1/Ze1qXKQ=
Subject: Re: [PATCH 3/6] x86/p2m: set_{foreign,mmio}_p2m_entry() are HVM-only
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <be9ce75e-9119-2b5a-9e7b-437beb7ee446@suse.com>
 <15f41816-4814-bae5-e0bc-89e99d04a142@suse.com>
 <fc78c235-f806-6120-25f0-182b4c08bdaa@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6833bac9-17a8-dc6e-42d7-100749bad707@suse.com>
Date: Fri, 18 Dec 2020 09:58:16 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <fc78c235-f806-6120-25f0-182b4c08bdaa@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 17.12.2020 20:54, Andrew Cooper wrote:
> On 15/12/2020 16:26, Jan Beulich wrote:
>> Extend a respective #ifdef from inside set_typed_p2m_entry() to around
>> all three functions. Add ASSERT_UNREACHABLE() to the latter one's safety
>> check path.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> As the code currently stands, yes.  However, I'm not sure I agree
> conceptually.
> 
> The p2m APIs are either a common interface to use, or HVM-specific.
> 
> PV guests don't actually have a p2m, but some of the APIs are used from
> common code (e.g. copy_to/from_guest()), and some p2m concepts are
> special cased as identity for PV (technically paging_mode_translate()),
> while other concepts, such as foreign/mmio, which do exist for both PV
> and HVM guests, are handled with totally different API sets for PV and HVM.
> 
> This is a broken mess of an abstraction.  I suspect some of it has to do
> with PV autotranslate mode in the past, but that doesn't alter the fact
> that we have a totally undocumented and error prone set of APIs here.
> 
> Either P2Ms should (fully) be the common abstraction (despite not being
> a real object for PV guests), or there should be a different set of
> APIs which is the common abstraction, and P2Ms should move to being
> exclusively for HVM guests.
> 
> (It's also very obvious from all the CONFIG_X86 ifdeffery that we've got
> arch specifics in our common code, and that is another aspect of the API
> mess which needs handling.)
> 
> I'm honestly not sure which of these would be better, but I'm fairly
> sure that either would be better than what we've currently got.  I
> certainly think it would be better to have a plan for improvement, to
> guide patches like this.

Well, by the end of this series fairly large parts of p2m.c are inside
#ifdef CONFIG_HVM. I would have thought the route is clear - eventually
p2m.c should get built only when HVM is enabled. This change is simply
getting us one tiny step closer.

Otoh, when considering common code, hiding PV specifics inside the p2m
functions may turn out better, as otherwise we may need another layer
around them (like the one we effectively already have with e.g.
guest_physmap_{add,remove}_page(), which I think would need to move out
of p2m.c if that was to become HVM-only as a whole) ...

Jan


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 09:09:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 09:09:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56408.98772 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqBl6-0004dX-Ef; Fri, 18 Dec 2020 09:09:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56408.98772; Fri, 18 Dec 2020 09:09:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqBl6-0004dQ-Av; Fri, 18 Dec 2020 09:09:08 +0000
Received: by outflank-mailman (input) for mailman id 56408;
 Fri, 18 Dec 2020 09:09:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ets7=FW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kqBl4-0004dL-Ub
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 09:09:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bb64db33-986e-4335-82ca-83f414eae4bb;
 Fri, 18 Dec 2020 09:09:05 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E30EBABC6;
 Fri, 18 Dec 2020 09:09:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb64db33-986e-4335-82ca-83f414eae4bb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608282545; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TlK/cy7eG1Ho9xGe3WQnmVXfDIiuGqzGtZsbeGKxqHk=;
	b=kIDeHH+jwrWb4d9jKnhVpwOFt6qoAX/Hsgf2rNruJgW3FSTXlj/B/8HAkXWxz57gyRnuCO
	n1QPOzejY3w7eYkbQi+iwFdEu+RR6vwODSpksQ+6zXgthckvqPYmTrNm/E00bC9dQ+ltkK
	aPNbylh2V/kaNd9snNu6jaTOyZcAT+I=
Subject: Re: [PATCH v3 5/8] xen/hypfs: add support for id-based dynamic
 directories
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201209160956.32456-1-jgross@suse.com>
 <20201209160956.32456-6-jgross@suse.com>
 <2894a231-9150-7c09-cc5c-7ef52087acf5@suse.com>
 <d4c408eb-08d8-42a8-0c0a-6580fce0e181@suse.com>
 <5e0ac85e-ecba-86ad-b350-ff30e3a40a68@suse.com>
 <bde3d3b1-a512-e1fe-cfd4-287fa0ea95cd@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a515ead2-f732-ddcd-f29b-788b8997fd2a@suse.com>
Date: Fri, 18 Dec 2020 10:09:05 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <bde3d3b1-a512-e1fe-cfd4-287fa0ea95cd@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 18.12.2020 09:57, Jürgen Groß wrote:
> On 17.12.20 13:14, Jan Beulich wrote:
>> On 17.12.2020 12:32, Jürgen Groß wrote:
>>> On 17.12.20 12:28, Jan Beulich wrote:
>>>> On 09.12.2020 17:09, Juergen Gross wrote:
>>>>> +static const struct hypfs_entry *hypfs_dyndir_enter(
>>>>> +    const struct hypfs_entry *entry)
>>>>> +{
>>>>> +    const struct hypfs_dyndir_id *data;
>>>>> +
>>>>> +    data = hypfs_get_dyndata();
>>>>> +
>>>>> +    /* Use template with original enter function. */
>>>>> +    return data->template->e.funcs->enter(&data->template->e);
>>>>> +}
>>>>
>>>> At the example of this (applies to other uses as well): I realize
>>>> hypfs_get_dyndata() asserts that the pointer is non-NULL, but
>>>> according to the bottom of ./CODING_STYLE this may not be enough
>>>> when considering the implications of a NULL deref in the context
>>>> of a PV guest. Even this living behind a sysctl doesn't really
>>>> help, both because via XSM not fully privileged domains can be
>>>> granted access, and because speculation may still occur all the
>>>> way into here. (I'll send a patch to address the latter aspect in
>>>> a few minutes.) While likely we have numerous existing examples
>>>> with similar problems, I guess in new code we'd better be as
>>>> defensive as possible.
>>>
>>> What do you suggest? BUG_ON()?
>>
>> Well, BUG_ON() would be a step in the right direction, converting
>> privilege escalation to DoS. The question is if we can't do better
>> here, gracefully failing in such a case (the usual pair of
>> ASSERT_UNREACHABLE() plus return/break/goto approach doesn't fit
>> here, at least not directly).
>>
>>> You are aware that this isn't something a user can influence, so it
>>> would be a clear coding error in the hypervisor?
>>
>> A user (or guest) can't arrange for there to be a NULL pointer,
>> but if there is one that can be run into here, this would still
>> require an XSA afaict.
> 
> I still don't see how this could happen without a major coding bug,
> which IMO wouldn't go unnoticed even during a really brief test (this
> is the reason for the ASSERT() in hypfs_get_dyndata() after all).

True. Yet the NULL derefs wouldn't go unnoticed either.

> It's not as if the control flow would allow many different ways to reach
> any of the hypfs_get_dyndata() calls.

I'm not convinced of this - this is a non-static function, and it is
not at all obvious that the call patch 8 adds (just to take an example)
comes with a guarantee that the allocation did happen and was checked
for success. Yes, in principle cpupool_gran_write() isn't supposed to
be called in such a case, but it's the nature of bugs that assumptions
get broken.

> I can add security checks at the appropriate places, but I think this
> would be just dead code. OTOH if you feel strongly here, let's go
> with it.

Going with it isn't the only possible route. The other is to drop
the ASSERT()s altogether. It simply seems to me that their addition
is a half-hearted attempt when considering what was added to
./CODING_STYLE not all that long ago. 

Jan


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 09:25:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 09:25:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56420.98788 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqC0n-0006fS-Sw; Fri, 18 Dec 2020 09:25:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56420.98788; Fri, 18 Dec 2020 09:25:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqC0n-0006fL-PI; Fri, 18 Dec 2020 09:25:21 +0000
Received: by outflank-mailman (input) for mailman id 56420;
 Fri, 18 Dec 2020 09:25:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kqC0m-0006fG-Bj
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 09:25:20 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kqC0l-0003yl-ES; Fri, 18 Dec 2020 09:25:19 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kqC0l-0007vK-6z; Fri, 18 Dec 2020 09:25:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=DVt81aClFe9EK7O999j8rRKiZ5yTI9NNY16mxW2PP5Y=; b=YcUSNjkdtNtljg8AuKH/+TdfX1
	PvAj77fshRFqQrjcaSIYydghDQ/Bn16tY3NlqI83CTWI29UKfiKcRXLrd9Ipu1iHGv2Lt3Dqnwiwq
	6QGVfFrlGcknbuXBwvmcPyTJSqJfOEDFOFka2H3PbmiN92TbEVomLUrGlOrC7KxzFzwY=;
Subject: Re: Ping: Arm: [PATCH v3 2/8] lib: collect library files in an
 archive
To: Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
 <21714b83-8619-5aa9-be5b-3015d05a26a4@suse.com>
 <59c970d6-2be4-3239-6728-b8905b007323@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <c3bbaad9-8b1f-d362-899b-aa31fa7feecf@xen.org>
Date: Fri, 18 Dec 2020 09:25:16 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <59c970d6-2be4-3239-6728-b8905b007323@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 18/12/2020 08:02, Jan Beulich wrote:
> On 23.11.2020 16:21, Jan Beulich wrote:
>> In order to (subsequently) drop odd things like CONFIG_NEEDS_LIST_SORT
>> just to avoid bloating binaries when only some arch-es and/or
>> configurations need generic library routines, combine objects under lib/
>> into an archive, which the linker then can pick the necessary objects
>> out of.
>>
>> Note that we can't use thin archives just yet, until we've raised the
>> minimum required binutils version suitably.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>>   xen/Rules.mk          | 29 +++++++++++++++++++++++++----
>>   xen/arch/arm/Makefile |  6 +++---
> 
> For this and patch 3 of the series, may I ask for an Arm side ack
> (or otherwise)?

For the 2 patches:

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 09:32:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 09:32:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56430.98799 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqC7W-0007jg-Jk; Fri, 18 Dec 2020 09:32:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56430.98799; Fri, 18 Dec 2020 09:32:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqC7W-0007jZ-Gq; Fri, 18 Dec 2020 09:32:18 +0000
Received: by outflank-mailman (input) for mailman id 56430;
 Fri, 18 Dec 2020 09:32:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqC7V-0007jR-7b; Fri, 18 Dec 2020 09:32:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqC7U-00046p-St; Fri, 18 Dec 2020 09:32:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqC7U-0000k4-Gf; Fri, 18 Dec 2020 09:32:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kqC7U-0005Ev-GC; Fri, 18 Dec 2020 09:32:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nBTNEJnQqfGNiV8vHbTSXheJKiN6FIs1RZmkBY5CvrI=; b=zp9olADTOzAdSxuq73BOdqSN6G
	yFKb8BVIe45z9KIBPTE7GZq0yw+aQ+bPweXMSp0vYTA9LhafILQiL8uuPqancv90jCk204H+ooFod
	Wnn+1IDHfTrxQ88eVDH4+83Mf+Sb7tgULpGSnx+LekcLthljVnhJk2hiYsdwEQClGv/c=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157638-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 157638: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-libvirt-xsm:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=8a866bdbbac227a99b0b37e03679908642f58aec
X-Osstest-Versions-That:
    linux=2bff021f53b211386abad8cd661e6bb38d0fd524
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 18 Dec 2020 09:32:16 +0000

flight 157638 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157638/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157431

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-xsm  20 guest-start/debian.repeat  fail pass in 157603
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat  fail pass in 157603

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157431
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157431
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157431
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157431
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157431
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157431
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157431
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157431
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157431
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157431
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157431
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                8a866bdbbac227a99b0b37e03679908642f58aec
baseline version:
 linux                2bff021f53b211386abad8cd661e6bb38d0fd524

Last test of basis   157431  2020-12-11 12:40:36 Z    6 days
Testing same since   157603  2020-12-16 10:11:52 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Morton <akpm@linux-foundation.org>
  Andy Lutomirski <luto@kernel.org>
  Arnd Bergmann <arnd@arndb.de>
  Arvind Sankar <nivedita@alum.mit.edu>
  Bean Huo <beanhuo@micron.com>
  Borislav Petkov <bp@suse.de>
  Can Guo <cang@codeaurora.org>
  Chris Chiu <chiu@endlessos.org>
  Coiby Xu <coiby.xu@gmail.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Fangrui Song <maskray@google.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Georgi Djakov <georgi.djakov@linaro.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Hans de Goede <hdegoede@redhat.com>
  Hao Si <si.hao@zte.com.cn>
  Heiko Stuebner <heiko@sntech.de>
  Jakub Kicinski <kuba@kernel.org>
  Johannes Berg <johannes.berg@intel.com>
  Jon Hunter <jonathanh@nvidia.com>
  Kalle Valo <kvalo@codeaurora.org>
  Li Yang <leoyang.li@nxp.com>
  Libo Chen <libo.chen@oracle.com>
  Lijun Pan <ljp@linux.ibm.com>
  Lin Chen <chen.lin5@zte.com.cn>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Luca Coelho <luciano.coelho@intel.com>
  Manasi Navare <manasi.d.navare@intel.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <maz@kernel.org>
  Mark Brown <broonie@kernel.org>
  Markus Reichl <m.reichl@fivetechno.de>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Verevkin <me@maxverevkin.tk>
  Michael Ellerman <mpe@ellerman.id.au>
  Miles Chen <miles.chen@mediatek.com>
  Minchan Kim <minchan@kernel.org>
  Mordechay Goodstein <mordechay.goodstein@intel.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Nick Desaulniers <ndesaulniers@google.com>
  Pankaj Sharma <pankj.sharma@samsung.com>
  Ran Wang <ran.wang_1@nxp.com>
  Randy Dunlap <rdunlap@infradead.org>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Sara Sharon <sara.sharon@intel.com>
  Sasha Levin <sashal@kernel.org>
  Scott Wood <oss@buserror.net>
  Shuah Khan <skhan@linuxfoundation.org>
  Shung-Hsi Yu <shung-hsi.yu@suse.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Thierry Reding <treding@nvidia.com>
  Thomas Gleixner <tglx@linutronix.de>
  Timo Witte <timo.witte@gmail.com>
  Tom Lendacky <thomas.lendacky@amd.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vineet Gupta <vgupta@synopsys.com>
  Xu Qiang <xuqiang36@huawei.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Zhan Liu <zliua@micron.com>
  Zhen Lei <thunder.leizhen@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1160 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 10:13:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 10:13:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56443.98823 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqCl3-0003R3-8d; Fri, 18 Dec 2020 10:13:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56443.98823; Fri, 18 Dec 2020 10:13:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqCl3-0003Qw-5f; Fri, 18 Dec 2020 10:13:09 +0000
Received: by outflank-mailman (input) for mailman id 56443;
 Fri, 18 Dec 2020 10:13:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Z90k=FW=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kqCl1-0003Qr-V4
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 10:13:08 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.7.75]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1be86e6d-3425-44e5-81bb-4e3e520caa9a;
 Fri, 18 Dec 2020 10:13:07 +0000 (UTC)
Received: from AM5PR04CA0020.eurprd04.prod.outlook.com (2603:10a6:206:1::33)
 by HE1PR0801MB2123.eurprd08.prod.outlook.com (2603:10a6:3:7e::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.18; Fri, 18 Dec
 2020 10:13:03 +0000
Received: from AM5EUR03FT017.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:1:cafe::37) by AM5PR04CA0020.outlook.office365.com
 (2603:10a6:206:1::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.12 via Frontend
 Transport; Fri, 18 Dec 2020 10:13:03 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT017.mail.protection.outlook.com (10.152.16.89) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3676.22 via Frontend Transport; Fri, 18 Dec 2020 10:13:03 +0000
Received: ("Tessian outbound 6af064f543d4:v71");
 Fri, 18 Dec 2020 10:13:03 +0000
Received: from f8d5fe78b262.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 F6A1C351-33FF-465F-8392-13C54C1E52F9.1; 
 Fri, 18 Dec 2020 10:12:35 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f8d5fe78b262.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 18 Dec 2020 10:12:35 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3692.eurprd08.prod.outlook.com (2603:10a6:10:30::28) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.19; Fri, 18 Dec
 2020 10:12:33 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3654.025; Fri, 18 Dec 2020
 10:12:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1be86e6d-3425-44e5-81bb-4e3e520caa9a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EJ52zsdBOHezTfSwHXPXJYFkoB/2E22UlIIDGO1K0T8=;
 b=z/XCisCkwc6pe4T4n2pNI+aerpFcJBTDYsrsFsxHlN+tZYN9CA/GnKf9vEg1ooin7T0kVLnrnfziyTCdlRqOQMVqqRdHno9uHMXugQRSDIffClmHbbYs0b7kRl6MU/KEGez/znqi5JgnpmL/0egA22xkGX0aH0B/m8sATkgxMP4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 9974d8774f2e5050
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cKSTUXNz5yLsF4bYu30pBqCtOslX+F+Eld+J+g0Zzjz3tYcIhZQNFcDMuJWAbQ4R74X/mIhrV+DNOu1SbtE/O9cfixeH5n7V2nC0Ne97Lfyyd75YoYiIJWfEWDetrlKIUipaG+tyOXzuOPIXW2oPm8D8rxGIfxB2CClo2rQJ5x+TeV/3KXIMGJ3Xi9hKpsNbobt2r1SBHZtxdrn2PbJnA2wA8lAT6qrKPYYSnAMaHtRHDdxinOz4UM/12VAgEtWGQQgvENTg57e6YvwnrgKeOADKKRxRN0Smf90ZWphweyaazI0BF3X86E9mDT/mp9HzwGyGl+1YVaaraVCqOSDTAQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EJ52zsdBOHezTfSwHXPXJYFkoB/2E22UlIIDGO1K0T8=;
 b=CNkXFUAmJmEcP6+AI8kwXBgEdjUAngfKUc7l3KDCWV9bP+gs7i6XIDxUCtpleTljPmgegm8KKuF2qH43JXWN3th5HHq9jfrakOqnT5VFE7XkWCHZm7TQxojAolUoVzc7Dk8N8GNQZNQSWnN6l8NPYgeAwhL54Ke9i65ad8QUvjIhORHlq+R5sOO+XEUoNxslE5A5Laa7zD2pgvK4AkhcmCPbbl4IvRjY7QDer4PPaC2QTRWTqWVsQ0fsC5wD1jnTSpE/5XUfjMssvJgMae4tp9wB0WJSzUyx/LaBh4LwxwQcUaq1VHmll7YMIV/oLymQ0TWsdKHcR5xP1RPPFQrJsw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EJ52zsdBOHezTfSwHXPXJYFkoB/2E22UlIIDGO1K0T8=;
 b=z/XCisCkwc6pe4T4n2pNI+aerpFcJBTDYsrsFsxHlN+tZYN9CA/GnKf9vEg1ooin7T0kVLnrnfziyTCdlRqOQMVqqRdHno9uHMXugQRSDIffClmHbbYs0b7kRl6MU/KEGez/znqi5JgnpmL/0egA22xkGX0aH0B/m8sATkgxMP4=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, "andrew.cooper3@citrix.com"
	<andrew.cooper3@citrix.com>, "george.dunlap@citrix.com"
	<george.dunlap@citrix.com>, "iwj@xenproject.org" <iwj@xenproject.org>,
	"wl@xen.org" <wl@xen.org>
Subject: Re: [PATCH v4 0/8] xen/arm: Emulate ID registers
Thread-Topic: [PATCH v4 0/8] xen/arm: Emulate ID registers
Thread-Index: AQHW1IrfRffCgVO4HkGjd1qFhI+/sqn79O8AgACulYA=
Date: Fri, 18 Dec 2020 10:12:33 +0000
Message-ID: <A0C7B8B8-91AF-4169-8A81-2EC0BA9DC486@arm.com>
References: <cover.1608214355.git.bertrand.marquis@arm.com>
 <alpine.DEB.2.21.2012171543290.4040@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2012171543290.4040@sstabellini-ThinkPad-T480s>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 4a69ca9b-3137-457c-ac55-08d8a33d818d
x-ms-traffictypediagnostic: DB7PR08MB3692:|HE1PR0801MB2123:
X-Microsoft-Antispam-PRVS:
	<HE1PR0801MB2123DF9875249AFEA83FCD9B9DC30@HE1PR0801MB2123.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 cOpxkUZwlxMa3t5aDcJ51aNOMS+7HPkhY6C91qKPC10r91tG4CRYB/nd9tb7YD23Vz6oRHhk6DT/cAUAv/XkzScC5k62y4E6X7wegm/VdxEBMAfWfNNyFslKgGjMpXhjZtURW3LtTSWVNeecg7+dHE+5QThlBhuEjkuEpV5qOfaUtsVJ0IayJVTH1RvUSasCO9DBpkdUa6TwEhDLEJ8yVuoehiH4/Z+aug46CCgSFca3Qul9MK/1fwMINJlcB/3jR4oVZA3ivAqkLT4wyiNL+U59t9MrfmW+LWankfyRpdUfuXV7+DqEEZntceDEzMTwtQ3uC6S6OfU9YdfEs3aVWFkmvAORluijqY+bXymeSjNv8c3IvBveUz0pYtF5EmkC
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(396003)(376002)(346002)(136003)(366004)(8676002)(71200400001)(2616005)(66476007)(6486002)(6512007)(33656002)(478600001)(64756008)(36756003)(8936002)(76116006)(66946007)(91956017)(53546011)(4326008)(26005)(5660300002)(54906003)(83380400001)(66446008)(86362001)(6506007)(2906002)(66556008)(6916009)(316002)(186003)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?us-ascii?Q?3PPRv8wd09epmvQ4UA9dFcCqb3jPQVwjsa3jYeb3EIv2F4H+Ja0GX8llgvcP?=
 =?us-ascii?Q?5913C1WILRleqZg7PlqsMRCujGGmDOvdB+wEZYz6ytVV2NGzKL6ZYvp/QoxS?=
 =?us-ascii?Q?5FPJY7J7E23Aa00gig4dZFl93O74BDpZr7ozMauhTdzJMsPkcMXuf54oNHDZ?=
 =?us-ascii?Q?5WlS0M95hEth+5WjhhuuaEWt8jtf0jUCjFMNgRvz1GfFdpW4WMmoMPn18pTE?=
 =?us-ascii?Q?SIuREwckrTRVQhcaKQITDwfw/SNds0KUxm1hrUXSlS3IIRwv5pzGKXscoDVu?=
 =?us-ascii?Q?La+kLTVefO+I0BjPddxWdWENjuL042fHtrQy329G2sg6KM6KR26rEWBK91Zq?=
 =?us-ascii?Q?k+5JWsagHwYaYHo1g34p8QeyOsQqcJbUwVVZB820ToVxPR5fbUwLqMG/hFH1?=
 =?us-ascii?Q?MvWdveXkQYUk1XBeJC5dcL9NukOIjOreTPTbjh4bsbUrL1/gM/HYL4IRoBZX?=
 =?us-ascii?Q?+3nBg57Ie8l7nTfW7WWCg0H3pbkZIfOs7HWrDKCR5Z8+VUnnIy/YyoM5EKTT?=
 =?us-ascii?Q?VZAA3Ahyr582DqOj42IgjSoW8x76toJoid2p2bK2sWgIImPx+2r9ATI1H3YH?=
 =?us-ascii?Q?yR6DvCwzPVrptK1zk00hZQqZ9c17V63AcWVmUpZXNjpt+1s3LWOPnOPvdCwb?=
 =?us-ascii?Q?Ta/vCN/EkxodIL70UkxRNL7dgGasrtjpG14fw6KDRbEHySvyOzhPo2ZZu7TK?=
 =?us-ascii?Q?WHC1nC3JANB9dVKf9xgPcrWprGOquZ8aNwIjZK7PsnwkAB5Es/A1aw/uSl7X?=
 =?us-ascii?Q?KLg9amOKKuJAzCNPU8uXfBVWAD4h32D4w7RMa5k9xAvZ91YZw596rQEJr3cE?=
 =?us-ascii?Q?WSLHHZ+sjGzbr1yd+vsJXiyqdc3gd3UIaaScdOXfOj8NQaptYc4RFaz6r1Vl?=
 =?us-ascii?Q?zS+swKZiLnov6ZC83pc/cpyQXRJFngLtCx5nzkQ2qS23Hb+PhjjW3g5FER3S?=
 =?us-ascii?Q?T50PEmWafZmfN0S3DKxvLPhVDWMGLzFoI7zxl3S61pl+2sDzrgUtcH04wngI?=
 =?us-ascii?Q?GWcb?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <BA61895F99377A4C90C7F30376D7AF3F@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3692
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT017.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	69c36d9d-aa68-4422-f5c4-08d8a33d6fb2
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	V33GRe2KePKR7Wa7Xy/ap8iOEKZoih99CkAe7cQbSqW4JZ6/sVK4qSUVxRWhOxgz09EMvOG/Nty2VJQQCgWYtERgJmVSHz4hyUcJhc5gbqUvf3SnJ+gJzGTWekK4+hbtMGv92uXarm6rzlbWIELmaZveYjvu5V1Q/gsLciAgINB508f+RbYEkSTVccPHpxaZCSO6K5BVo9s3rThIlcmgYvWsd1aY8brlZ0wTsn4c5YZ5KbLcbhESA+O+rIvaS9qgjnAoTzBDeU4BPGDZh2CKyPj4bFparsIh/l3il9da1Z3+2wFRQDUni2dtiPdVNd/lX9En9nopqTbLcgy9P80a2Fgg4s0BBXbZE2ZC7nczqn7qHPaPapQf0wRyqctwVnBHLfNgQFLJsfwmWXg+1dg+sQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(39860400002)(396003)(346002)(136003)(46966005)(2616005)(6506007)(186003)(54906003)(70206006)(26005)(8676002)(6486002)(81166007)(356005)(47076004)(6862004)(82740400003)(4326008)(316002)(8936002)(33656002)(36756003)(86362001)(82310400003)(2906002)(83380400001)(70586007)(478600001)(5660300002)(53546011)(336012)(6512007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Dec 2020 10:13:03.4213
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4a69ca9b-3137-457c-ac55-08d8a33d818d
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT017.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0801MB2123

Hi Stefano,

> On 17 Dec 2020, at 23:47, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Thu, 17 Dec 2020, Bertrand Marquis wrote:
>> The goal of this series is to emulate coprocessor ID registers so that
>> Xen only publishes to guests the features that are supported by Xen and
>> can actually be used by guests.
>> One practical example where this is required is SVE support, which is
>> forbidden by Xen as it is not supported; if Linux is compiled with it,
>> it will crash on boot. Another one is AMU, which is also forbidden by
>> Xen, but a Linux kernel compiled with it would crash if the platform
>> supports it.
>>
>> To be able to emulate the coprocessor registers defining what features
>> are supported by the hardware, the TID3 bit of HCR must be set and Xen
>> must emulate the values of those registers when an exception is caught
>> because a guest is accessing them.
>>
>> This series first creates a guest cpuinfo structure which contains the
>> values that we want to publish to the guests, and then provides the
>> proper emulation for those registers when Xen takes an exception due to
>> an access to any of those registers.
>>
>> This is a first, simple implementation to solve the problem. The way to
>> define the values that we provide to guests, and which features are
>> disabled, will be enhanced in a future patchset so that we can decide
>> per guest what can be used or not, and from this deduce the bits to
>> activate in HCR and the values that we must publish in the ID registers.
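
The trap-and-emulate flow described in the cover letter can be sketched roughly as follows. This is a hypothetical illustration, not the actual Xen code: the names `guest_cpuinfo`, `hide_feature`, and `emulate_id_aa64pfr0`, and the field layout, are assumptions made for the example; only the underlying mechanism (trap ID register reads via the HCR TID3 bit, return a sanitized value) comes from the series description.

```c
/*
 * Sketch: with the TID3 trap bit set in HCR, guest reads of the ID group 3
 * registers trap to the hypervisor, which returns a value built from a
 * guest cpuinfo structure instead of the raw hardware register.
 */
#include <stdint.h>

#define ID_AA64PFR0_SVE_SHIFT 32u /* SVE field position in ID_AA64PFR0_EL1 */

/* Values the hypervisor chooses to publish to its guests. */
struct guest_cpuinfo {
    uint64_t id_aa64pfr0;
};

/* Clear a 4-bit ID register feature field at the given shift. */
static uint64_t hide_feature(uint64_t reg, unsigned int shift)
{
    return reg & ~(UINT64_C(0xf) << shift);
}

/* Trap handler sketch: the guest sees the sanitized value, not hardware's. */
static uint64_t emulate_id_aa64pfr0(const struct guest_cpuinfo *info)
{
    return info->id_aa64pfr0;
}
```

In this sketch, `hide_feature` would be called once when building the guest view, e.g. to clear the SVE field so a guest never attempts to use the feature, and `emulate_id_aa64pfr0` would be invoked from the trap handler on every guest read.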
>
> As per our discussion I think we want to add this to the series.

Fully agree.

>
> ---
>
> xen/arm: clarify support status for various ARMv8.x CPUs
>
> ARMv8.1+ is not security supported for now, as it would require more
> investigation on hardware features that Xen has to hide from the guest.
>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

>
> diff --git a/SUPPORT.md b/SUPPORT.md
> index ab02aca5f4..d95ce3a411 100644
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -37,7 +37,8 @@ supported in this document.
>
> ### ARM v8
>
> -    Status: Supported
> +    Status, ARMv8.0: Supported
> +    Status, ARMv8.1+: Supported, not security supported
>     Status, Cortex A57 r0p0-r1p1: Supported, not security supported
>=20
> For the Cortex A57 r0p0 - r1p1, see Errata 832075.



From xen-devel-bounces@lists.xenproject.org Fri Dec 18 10:14:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 10:14:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56447.98836 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqCmQ-0003XG-Kv; Fri, 18 Dec 2020 10:14:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56447.98836; Fri, 18 Dec 2020 10:14:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqCmQ-0003X9-Hj; Fri, 18 Dec 2020 10:14:34 +0000
Received: by outflank-mailman (input) for mailman id 56447;
 Fri, 18 Dec 2020 10:14:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Z90k=FW=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kqCmO-0003X3-Hn
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 10:14:32 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.53]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b1e9557d-83ec-4913-aa89-d660f6046418;
 Fri, 18 Dec 2020 10:14:30 +0000 (UTC)
Received: from MR2P264CA0008.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500:1::20) by
 AM6PR08MB4357.eurprd08.prod.outlook.com (2603:10a6:20b:74::13) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3676.25; Fri, 18 Dec 2020 10:14:28 +0000
Received: from VE1EUR03FT027.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:500:1:cafe::ba) by MR2P264CA0008.outlook.office365.com
 (2603:10a6:500:1::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.17 via Frontend
 Transport; Fri, 18 Dec 2020 10:14:28 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT027.mail.protection.outlook.com (10.152.18.154) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3676.22 via Frontend Transport; Fri, 18 Dec 2020 10:14:27 +0000
Received: ("Tessian outbound 8b6e0bb22f1c:v71");
 Fri, 18 Dec 2020 10:14:27 +0000
Received: from 814ae69dcf5a.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 2A97CDD4-D1E9-40D8-AC71-E246FB69767B.1; 
 Fri, 18 Dec 2020 10:14:12 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 814ae69dcf5a.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 18 Dec 2020 10:14:11 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3692.eurprd08.prod.outlook.com (2603:10a6:10:30::28) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3654.19; Fri, 18 Dec
 2020 10:14:05 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3654.025; Fri, 18 Dec 2020
 10:14:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b1e9557d-83ec-4913-aa89-d660f6046418
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=a8/aC1Gwa1XmOgO0/yE6MwJASl+YoIUVsLPJX7X+98g=;
 b=eMztrPU/sdKiADRKKzjazfLMJID70tPWBWsY34TwuZFJc6FQLyN0DBsRSCGRFUDg239hD05MgG76itmKf+UkNb2nwg6q8zZ7RR2URc3VSjGQ7moqQTVuPAhM3utPxdeJuiqF/jFAanHzZ/ZtpOzP+58uxzLR81zca9KNDU+RDXg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: f0401cc23b942d9c
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NYnVcL6TcecoTPJZ2Yt4+y9uTKGzzR1xHK9QOewHmtBe+5oS04MHLSTSrICNza/H1ADl2sW4h6JNE4wXiiYoVi/NUrg6x2DpOy+LWJ8PmnyzGTl8tZqBEthmrkpzFJKKsl7Gjq8F2d7HzNq6+EHsBKn+jloTc61n2ifrvBN91Su4uYcVT+rnrrKtUG46UatKgUfDeT+gLWkRd0AAwVRHNq719w4CqtT39elB5APN0DNx7FvQ2yuGG+bITrt/PXOpArj8T9J5BF7SF2FSFoHdtI60ut2mGSAQdbcY9faKM8xcjvtdd9GzdAwX5nyS4WutwVuokEEEkIUyEL9xUbns2Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=a8/aC1Gwa1XmOgO0/yE6MwJASl+YoIUVsLPJX7X+98g=;
 b=YrH/xBtEvdVpYiYNhO6byyN9kp33fV6fK+/kR3FTsgQc/SaU8CR5Y2PDZ8f7kF/x2KXlYNMBk4fh8tUueW3Mhka15/CoJYUmEoZ1CQuIGv67wRNSl8ZVnpc0K5uHSZzpHx+muuuLeZcQp27PBLFGTKZMrEPyJHiQ9sg5i0kk9D4sKIW62VrTKd9i6SNhFlz/kzT5uL4GKD8xy0KqusdHzVVnTyYrDaRdByAjhOookt1r3eQi1rgAyhALnRmtqi0Qr/m/Ps/w/Y8sUW5Ig48HpS9geM7RA0ydfi6dDJ5IhaU1C5WdFFxbDL2RLxVAHqJrwXUCsfhkor8HbisJJYbfHw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=a8/aC1Gwa1XmOgO0/yE6MwJASl+YoIUVsLPJX7X+98g=;
 b=eMztrPU/sdKiADRKKzjazfLMJID70tPWBWsY34TwuZFJc6FQLyN0DBsRSCGRFUDg239hD05MgG76itmKf+UkNb2nwg6q8zZ7RR2URc3VSjGQ7moqQTVuPAhM3utPxdeJuiqF/jFAanHzZ/ZtpOzP+58uxzLR81zca9KNDU+RDXg=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Julien
 Grall <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v4 6/8] xen/arm: Add handler for cp15 ID registers
Thread-Topic: [PATCH v4 6/8] xen/arm: Add handler for cp15 ID registers
Thread-Index: AQHW1IteiuLr9xa1TUaHJbPfJMFoB6n78jeAgACxuAA=
Date: Fri, 18 Dec 2020 10:14:05 +0000
Message-ID: <26DC10F4-E4E5-4091-AFEC-DE1DEA0A1796@arm.com>
References: <cover.1608214355.git.bertrand.marquis@arm.com>
 <c1c68e89683913dbf71a8f370dc6fd896a9e8cce.1608214355.git.bertrand.marquis@arm.com>
 <alpine.DEB.2.21.2012171533410.4040@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2012171533410.4040@sstabellini-ThinkPad-T480s>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: f6fd1c1a-f8d2-491f-b646-08d8a33db3db
x-ms-traffictypediagnostic: DB7PR08MB3692:|AM6PR08MB4357:
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB4357D2FEB637AD983FE8E71E9DC30@AM6PR08MB4357.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:7691;OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 +UVasQ8wX3Jo1PeXl7Ora85u2Gvq/0Pg01OfM9M0gQjkL9EOLgwXI5adHuCfw7vgF/w/LuadXGx3C45/VjHzXwk48WN5qU1VoZcbO6Q67XzdKR47D7vtjXvwtrsRja168r1lFPzUB8SHe7yPKRjsO+Ud1hItVPv2i+7eCDvZejF7sDIl/APdNvICwmZzsX/qdIKTBMy7dGTuN95VAW9t/w3kWOKUTYIEcsb3GJWJXxMNZScB04/Kr2qslycVpqpyjVSs8DPXH+OHLXmaz3L61J2MB+0Kby1t380bzLhlZD9kmMMpBfY22SPjrafnaTfyk+68vZAiGu+MG6nfB9UKnsscmyUmuhUtlVhkq5Jx2yFbLMJDvL1oJya301LGAFg8
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(396003)(376002)(346002)(136003)(366004)(8676002)(71200400001)(2616005)(66476007)(6486002)(6512007)(33656002)(478600001)(64756008)(36756003)(8936002)(76116006)(66946007)(91956017)(53546011)(4326008)(26005)(5660300002)(54906003)(66446008)(86362001)(6506007)(2906002)(66556008)(6916009)(316002)(186003)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?us-ascii?Q?uBRRDxZtygeXCgqDhxyOpzkBct2djwQ6FAQA4X3jc1iprYQbwfTlhpfhsMx/?=
 =?us-ascii?Q?TpI1h888CH4sB1rvKFfE3AC6xcjf/FWYXeuE5BYurBjCvPtdUQGXuyD9BUyn?=
 =?us-ascii?Q?6tzs+mNPQZK4x7FnzD9LR8GD3GAeVFuAq9j5FcY8RQ0wIkQZq2Rw449q8qlH?=
 =?us-ascii?Q?lNQ1N362ZWCTyMyp5BhLoLT2iz2KtSI7e7PIP5ExPqM9Ep91YoXQ2lLFQqHc?=
 =?us-ascii?Q?OyboTGp7Z1FhHPn5QdmVJfU/PI3zawPOYUrs9ZaXIxAfz34TvIUcKN/kCosC?=
 =?us-ascii?Q?TldD5+/vnrEu4z7c6OWA19Q6fPgz6gS55vEq60tWIk4ZBS8Et23O/ZHW35u6?=
 =?us-ascii?Q?HahRhHoaTw6Qq0PFvoTADagRRfsw80p63hxoTvjNFdNtxe1xeN8f2f+nLyGd?=
 =?us-ascii?Q?YNCYUeX5JrmvV2SQDTOCGhHyWaLKk4DbDBAt974eOK76/ZS3SQPfrV/FCAPK?=
 =?us-ascii?Q?8q3zmb4mn1n6tU0UoN3I607mQ39QGNsBNtndudUMyZit0YVZBWO1qWhtVDTD?=
 =?us-ascii?Q?xXuT1TlTbTFwWBm1zBKBVU14+4W2CL+5w7VbxZUbNmzndN3TeA9UDUahebaC?=
 =?us-ascii?Q?RR6N/Lhjk0TV6tyn5JUHKRyiObMDO6svZRE8gyovRw6pR6i/UAPBrQ0M9jU9?=
 =?us-ascii?Q?a1DSDvaTQsh/q7/4cuduaxeDnJvUExqN8MA7xQ6WT4wtquvWLpkUOmau75A+?=
 =?us-ascii?Q?bm58i65R7VdMU6ADweNk1IkU+tOOH30BmMjKOMOEGOS7jVcdMHn/sTqf5hcu?=
 =?us-ascii?Q?GhoIZwbTsmwhokhKwhcl9kCE2WAMduNfXZof8EaTRDRPp4vZxLnWv07vli8O?=
 =?us-ascii?Q?xAkg5nwhrQs+fjTn8wGdyCGkAfLJFZ5ejTpmZdM1xk6q5iFUptlkVrzWxBP7?=
 =?us-ascii?Q?+AOQKhREkqBk9El2Xk38vscWzLt377SXzvtJb3Q9JqrSGCFHJFht4XOdIYym?=
 =?us-ascii?Q?l7X9s2parJKxhWZAOGRJHHyUxsRSuT9NuVilB0q+nLEJf3Qf2DMZoAsvXotI?=

Hi Stefano,

> On 17 Dec 2020, at 23:37, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Thu, 17 Dec 2020, Bertrand Marquis wrote:
>> Add support for emulation of cp15 based ID registers (on arm32 or when
>> running a 32bit guest on arm64).
>> The handlers return the values stored in the guest_cpuinfo structure
>> for known registers and RAZ for all reserved registers.
>> In the current state, the MVFR registers are not supported.
>>
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>> Changes in V2: Rebase
>> Changes in V3:
>>  Add case definition for reserved registers
>>  Add handling of reserved registers as RAZ.
>>  Fix code style in GENERATE_TID3_INFO declaration
>> Changes in V4:
>>  Fix comment for missing t (no to not)
>>  Put cases for reserved registers directly in the code instead of using
>>  a define in the cpregs.h header.
>>
>> ---
>> xen/arch/arm/vcpreg.c | 65 +++++++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 65 insertions(+)
>>
>> diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c
>> index cdc91cdf5b..1fe07fe02a 100644
>> --- a/xen/arch/arm/vcpreg.c
>> +++ b/xen/arch/arm/vcpreg.c
>> @@ -155,6 +155,24 @@ TVM_REG32(CONTEXTIDR, CONTEXTIDR_EL1)
>>         break;                                                      \
>>     }
>>
>> +/* Macro to easily generate a case for ID co-processor emulation */
>> +#define GENERATE_TID3_INFO(reg, field, offset)                      \
>> +    case HSR_CPREG32(reg):                                          \
>> +    {                                                               \
>> +        return handle_ro_read_val(regs, regidx, cp32.read, hsr,     \
>> +                          1, guest_cpuinfo.field.bits[offset]);     \
>
> This line is misaligned, but it can be adjusted on commit

Oh yes, this was spotted by Julien on the previous series and I forgot to fix it.

>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>

Thanks
Bertrand

>
>
>> +    }
>> +
>> +/* Helper to define cases for all registers for one CRm value */
>> +#define HSR_CPREG32_TID3_CASES(REG)     case HSR_CPREG32(p15,0,c0,REG,0): \
>> +                                        case HSR_CPREG32(p15,0,c0,REG,1): \
>> +                                        case HSR_CPREG32(p15,0,c0,REG,2): \
>> +                                        case HSR_CPREG32(p15,0,c0,REG,3): \
>> +                                        case HSR_CPREG32(p15,0,c0,REG,4): \
>> +                                        case HSR_CPREG32(p15,0,c0,REG,5): \
>> +                                        case HSR_CPREG32(p15,0,c0,REG,6): \
>> +                                        case HSR_CPREG32(p15,0,c0,REG,7)
>> +
>> void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
>> {
>>     const struct hsr_cp32 cp32 = hsr.cp32;
>> @@ -286,6 +304,53 @@ void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
>>          */
>>         return handle_raz_wi(regs, regidx, cp32.read, hsr, 1);
>>
>> +    /*
>> +     * HCR_EL2.TID3
>> +     *
>> +     * This is trapping most Identification registers used by a guest
>> +     * to identify the processor features
>> +     */
>> +    GENERATE_TID3_INFO(ID_PFR0, pfr32, 0)
>> +    GENERATE_TID3_INFO(ID_PFR1, pfr32, 1)
>> +    GENERATE_TID3_INFO(ID_PFR2, pfr32, 2)
>> +    GENERATE_TID3_INFO(ID_DFR0, dbg32, 0)
>> +    GENERATE_TID3_INFO(ID_DFR1, dbg32, 1)
>> +    GENERATE_TID3_INFO(ID_AFR0, aux32, 0)
>> +    GENERATE_TID3_INFO(ID_MMFR0, mm32, 0)
>> +    GENERATE_TID3_INFO(ID_MMFR1, mm32, 1)
>> +    GENERATE_TID3_INFO(ID_MMFR2, mm32, 2)
>> +    GENERATE_TID3_INFO(ID_MMFR3, mm32, 3)
>> +    GENERATE_TID3_INFO(ID_MMFR4, mm32, 4)
>> +    GENERATE_TID3_INFO(ID_MMFR5, mm32, 5)
>> +    GENERATE_TID3_INFO(ID_ISAR0, isa32, 0)
>> +    GENERATE_TID3_INFO(ID_ISAR1, isa32, 1)
>> +    GENERATE_TID3_INFO(ID_ISAR2, isa32, 2)
>> +    GENERATE_TID3_INFO(ID_ISAR3, isa32, 3)
>> +    GENERATE_TID3_INFO(ID_ISAR4, isa32, 4)
>> +    GENERATE_TID3_INFO(ID_ISAR5, isa32, 5)
>> +    GENERATE_TID3_INFO(ID_ISAR6, isa32, 6)
>> +    /* MVFR registers are in cp10 not cp15 */
>> +
>> +    /*
>> +     * These cases catch all reserved registers trapped by TID3 which
>> +     * currently have no assignment.
>> +     * HCR.TID3 traps all registers in group 3:
>> +     * coproc == p15, opc1 == 0, CRn == c0, CRm == {c2-c7}, opc2 == {0-7}.
>> +     * Those registers are defined as RO in the Arm Architecture
>> +     * Reference Manual, Armv8 (Chapter D12.3.2 of issue F.c), so handle
>> +     * them as read-only, read-as-zero.
>> +     */
>> +    case HSR_CPREG32(p15,0,c0,c3,0):
>> +    case HSR_CPREG32(p15,0,c0,c3,1):
>> +    case HSR_CPREG32(p15,0,c0,c3,2):
>> +    case HSR_CPREG32(p15,0,c0,c3,3):
>> +    case HSR_CPREG32(p15,0,c0,c3,7):
>> +    HSR_CPREG32_TID3_CASES(c4):
>> +    HSR_CPREG32_TID3_CASES(c5):
>> +    HSR_CPREG32_TID3_CASES(c6):
>> +    HSR_CPREG32_TID3_CASES(c7):
>> +        return handle_ro_raz(regs, regidx, cp32.read, hsr, 1);
>> +
>>     /*
>>      * HCR_EL2.TIDCP
>>      *
>> --
>> 2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Dec 18 10:23:27 2020
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Julien
 Grall <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v4 1/8] xen/arm: Use READ_SYSREG instead of 32/64 versions
Thread-Topic: [PATCH v4 1/8] xen/arm: Use READ_SYSREG instead of 32/64
 versions
Thread-Index: AQHW1ItQykO1RG0fj0mzlL7TviPUeqn77F6AgAC6HAA=
Date: Fri, 18 Dec 2020 10:23:11 +0000
Message-ID: <51ACD417-8CC1-448B-81F3-BDBE7EDC4C61@arm.com>
References: <cover.1608214355.git.bertrand.marquis@arm.com>
 <75ab5c84ed6ce1d004316ca4677735aa0543ecdc.1608214355.git.bertrand.marquis@arm.com>
 <alpine.DEB.2.21.2012171505430.4040@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2012171505430.4040@sstabellini-ThinkPad-T480s>
Accept-Language: en-GB, en-US
Content-Language: en-US

Hi Stefano,

> On 17 Dec 2020, at 23:17, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Thu, 17 Dec 2020, Bertrand Marquis wrote:
>> Modify identify_cpu function to use READ_SYSREG instead of READ_SYSREG32
>> or READ_SYSREG64.
>> The aarch32 versions of the registers are 64bit on an aarch64 processor
>> so it was wrong to access them as 32bit registers.
>
> This sentence is a bit confusing because, as an example, MIDR_EL1 is
> also an aarch64 register, not only an aarch32 register. Maybe we should
> clarify.

You are right, the sentence is not very clear.

Maybe the following would be better:

All aarch32-specific registers (for example ID_PFR0_EL1) are 64-bit when
accessed from aarch64, with the upper bits read as zero, so it is right to
access them as 64-bit registers on a 64-bit platform.

>
> Aside from that:
>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>

Thanks

Cheers
Bertrand

>
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>
>> ---
>> Change in V4:
>>  This patch was introduced in v4.
>>
>> ---
>> xen/arch/arm/cpufeature.c | 50 +++++++++++++++++++--------------------
>> 1 file changed, 25 insertions(+), 25 deletions(-)
>>
>> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
>> index 44126dbf07..115e1b164d 100644
>> --- a/xen/arch/arm/cpufeature.c
>> +++ b/xen/arch/arm/cpufeature.c
>> @@ -99,44 +99,44 @@ int enable_nonboot_cpu_caps(const struct arm_cpu_capabilities *caps)
>>
>> void identify_cpu(struct cpuinfo_arm *c)
>> {
>> -        c->midr.bits = READ_SYSREG32(MIDR_EL1);
>> +        c->midr.bits = READ_SYSREG(MIDR_EL1);
>>         c->mpidr.bits = READ_SYSREG(MPIDR_EL1);
>>
>> #ifdef CONFIG_ARM_64
>> -        c->pfr64.bits[0] = READ_SYSREG64(ID_AA64PFR0_EL1);
>> -        c->pfr64.bits[1] = READ_SYSREG64(ID_AA64PFR1_EL1);
>> +        c->pfr64.bits[0] = READ_SYSREG(ID_AA64PFR0_EL1);
>> +        c->pfr64.bits[1] = READ_SYSREG(ID_AA64PFR1_EL1);
>>
>> -        c->dbg64.bits[0] = READ_SYSREG64(ID_AA64DFR0_EL1);
>> -        c->dbg64.bits[1] = READ_SYSREG64(ID_AA64DFR1_EL1);
>> +        c->dbg64.bits[0] = READ_SYSREG(ID_AA64DFR0_EL1);
>> +        c->dbg64.bits[1] = READ_SYSREG(ID_AA64DFR1_EL1);
>>
>> -        c->aux64.bits[0] = READ_SYSREG64(ID_AA64AFR0_EL1);
>> -        c->aux64.bits[1] = READ_SYSREG64(ID_AA64AFR1_EL1);
>> +        c->aux64.bits[0] = READ_SYSREG(ID_AA64AFR0_EL1);
>> +        c->aux64.bits[1] = READ_SYSREG(ID_AA64AFR1_EL1);
>>
>> -        c->mm64.bits[0]  = READ_SYSREG64(ID_AA64MMFR0_EL1);
>> -        c->mm64.bits[1]  = READ_SYSREG64(ID_AA64MMFR1_EL1);
>> +        c->mm64.bits[0]  = READ_SYSREG(ID_AA64MMFR0_EL1);
>> +        c->mm64.bits[1]  = READ_SYSREG(ID_AA64MMFR1_EL1);
>>
>> -        c->isa64.bits[0] = READ_SYSREG64(ID_AA64ISAR0_EL1);
>> -        c->isa64.bits[1] = READ_SYSREG64(ID_AA64ISAR1_EL1);
>> +        c->isa64.bits[0] = READ_SYSREG(ID_AA64ISAR0_EL1);
>> +        c->isa64.bits[1] = READ_SYSREG(ID_AA64ISAR1_EL1);
>> #endif
>>
>> -        c->pfr32.bits[0] = READ_SYSREG32(ID_PFR0_EL1);
>> -        c->pfr32.bits[1] = READ_SYSREG32(ID_PFR1_EL1);
>> +        c->pfr32.bits[0] = READ_SYSREG(ID_PFR0_EL1);
>> +        c->pfr32.bits[1] = READ_SYSREG(ID_PFR1_EL1);
>>
>> -        c->dbg32.bits[0] = READ_SYSREG32(ID_DFR0_EL1);
>> +        c->dbg32.bits[0] = READ_SYSREG(ID_DFR0_EL1);
>>
>> -        c->aux32.bits[0] = READ_SYSREG32(ID_AFR0_EL1);
>> +        c->aux32.bits[0] = READ_SYSREG(ID_AFR0_EL1);
>>
>> -        c->mm32.bits[0]  = READ_SYSREG32(ID_MMFR0_EL1);
>> -        c->mm32.bits[1]  = READ_SYSREG32(ID_MMFR1_EL1);
>> -        c->mm32.bits[2]  = READ_SYSREG32(ID_MMFR2_EL1);
>> -        c->mm32.bits[3]  = READ_SYSREG32(ID_MMFR3_EL1);
>> +        c->mm32.bits[0]  = READ_SYSREG(ID_MMFR0_EL1);
>> +        c->mm32.bits[1]  = READ_SYSREG(ID_MMFR1_EL1);
>> +        c->mm32.bits[2]  = READ_SYSREG(ID_MMFR2_EL1);
>> +        c->mm32.bits[3]  = READ_SYSREG(ID_MMFR3_EL1);
>>
>> -        c->isa32.bits[0] = READ_SYSREG32(ID_ISAR0_EL1);
>> -        c->isa32.bits[1] = READ_SYSREG32(ID_ISAR1_EL1);
>> -        c->isa32.bits[2] = READ_SYSREG32(ID_ISAR2_EL1);
>> -        c->isa32.bits[3] = READ_SYSREG32(ID_ISAR3_EL1);
>> -        c->isa32.bits[4] = READ_SYSREG32(ID_ISAR4_EL1);
>> -        c->isa32.bits[5] = READ_SYSREG32(ID_ISAR5_EL1);
>> +        c->isa32.bits[0] = READ_SYSREG(ID_ISAR0_EL1);
>> +        c->isa32.bits[1] = READ_SYSREG(ID_ISAR1_EL1);
>> +        c->isa32.bits[2] = READ_SYSREG(ID_ISAR2_EL1);
>> +        c->isa32.bits[3] = READ_SYSREG(ID_ISAR3_EL1);
>> +        c->isa32.bits[4] = READ_SYSREG(ID_ISAR4_EL1);
>> +        c->isa32.bits[5] = READ_SYSREG(ID_ISAR5_EL1);
>> }
>>
>> /*
>> --
>> 2.17.1
>>



From xen-devel-bounces@lists.xenproject.org Fri Dec 18 12:17:32 2020
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: ash.j.wilding@gmail.com
Cc: bertrand.marquis@arm.com,
	julien@xen.org,
	rahul.singh@arm.com,
	sstabellini@kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [RFC PATCH v2 00/15] xen/arm: port Linux LL/SC and LSE atomics helpers to Xen
Date: Fri, 18 Dec 2020 12:17:09 +0000
Message-Id: <20201218121709.38544-1-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201217153742.14034-1-ash.j.wilding@gmail.com>
References: <20201217153742.14034-1-ash.j.wilding@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Having pondered note (1) in my previous email a bit more, I imagine the reason
for using a DMB instead of acq/rel semantics is to prevent accesses following
the STLXR from being reordered between it and the LDAXR.

I won't be winning any awards for this ASCII art but hopefully it helps convey
the point.

Using just an LDAXR/STLXR pair and ditching the DMB, accesses to [D] and [E]
can be reordered between the LDAXR and STLXR:

                    ...
        +---------- LDR   [A]
        |           ...
        |           ...
        |    +----- STR   [B]
        |    |      ...
    ====|====|======LDAXR [C]================
        |    |      ...                  X
        |    +----> ...                  |
        |           ...                  |
        |           ...   <----------+   |
        X           ...              |   |
    ================STLXR [C]========|===|===
                    ...              |   |
                    ...              |   |
                    LDR   [D]--------+   |
                    ...                  |
                    STR   [E]------------+
                    ...


While dropping the acq semantics from the LDAXR and using a DMB instead will
prevent accesses to [D] and [E] from being reordered between the LDXR/STLXR
pair, keeping the rel semantics on the STLXR prevents accesses to [A] and [B]
from being reordered after the STLXR:

                    ...
        +---------- LDR   [A]
        |           ...
        |           ...
        |    +----- STR   [B]
        |    |      ...
        |    |      LDXR  [C]
        |    |      ...
        |    +----> ...
        |           ...
        X           ...
    ================STLXR [C]================
    ================DMB======================
                    ...          X    X
                    ...          |    |
                    LDR   [D]----+    |  
                    ...               |
                    STR   [E]---------+
                    ...


As mentioned in my original email, the LSE atomic is a single instruction so
we can give it acq/rel semantics and not worry about any accesses to [A], [B],
[D], or [E] being reordered relative to that atomic:

                    ...
        +---------- LDR   [A]
        |           ...
        |           ...
        |    +----- STR   [B]
        |    |      ...
        X    X      ...
    ================LDADDAL [C]================
                    ...          X    X
                    ...          |    |
                    LDR   [D]----+    |  
                    ...               |
                    STR   [E]---------+
                    ...


So it makes sense that Linux uses acq/rel and no DMB for LSE, while Linux (and
Xen) are forced to use rel semantics and a DMB for the LL/SC case.

Anyway, point (2) from my earlier email is the one that's potentially more
concerning, as we only have rel semantics and no DMB on the Linux LL/SC version
of atomic_cmpxchg(), in contrast to the existing Xen LL/SC implementation being
sandwiched between two DMBs and the Linux LSE version having acq/rel semantics.

Cheers,
Ash.


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 12:41:22 2020
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201209160956.32456-1-jgross@suse.com>
 <20201209160956.32456-6-jgross@suse.com>
 <2894a231-9150-7c09-cc5c-7ef52087acf5@suse.com>
 <d4c408eb-08d8-42a8-0c0a-6580fce0e181@suse.com>
 <5e0ac85e-ecba-86ad-b350-ff30e3a40a68@suse.com>
 <bde3d3b1-a512-e1fe-cfd4-287fa0ea95cd@suse.com>
 <a515ead2-f732-ddcd-f29b-788b8997fd2a@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v3 5/8] xen/hypfs: add support for id-based dynamic
 directories
Message-ID: <0c56129d-dcfa-2a52-dc66-221f103e6735@suse.com>
Date: Fri, 18 Dec 2020 13:41:11 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <a515ead2-f732-ddcd-f29b-788b8997fd2a@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="lfvgs851yJHyzgD1RG7xbPwvsOZq0irWY"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--lfvgs851yJHyzgD1RG7xbPwvsOZq0irWY
Content-Type: multipart/mixed; boundary="3Dn312sRL2sqYbwE71xXNlcz0uRviJzSm";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <0c56129d-dcfa-2a52-dc66-221f103e6735@suse.com>
Subject: Re: [PATCH v3 5/8] xen/hypfs: add support for id-based dynamic
 directories
References: <20201209160956.32456-1-jgross@suse.com>
 <20201209160956.32456-6-jgross@suse.com>
 <2894a231-9150-7c09-cc5c-7ef52087acf5@suse.com>
 <d4c408eb-08d8-42a8-0c0a-6580fce0e181@suse.com>
 <5e0ac85e-ecba-86ad-b350-ff30e3a40a68@suse.com>
 <bde3d3b1-a512-e1fe-cfd4-287fa0ea95cd@suse.com>
 <a515ead2-f732-ddcd-f29b-788b8997fd2a@suse.com>
In-Reply-To: <a515ead2-f732-ddcd-f29b-788b8997fd2a@suse.com>

--3Dn312sRL2sqYbwE71xXNlcz0uRviJzSm
Content-Type: multipart/mixed;
 boundary="------------5114EE5B5B62E77601244A48"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------5114EE5B5B62E77601244A48
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 18.12.20 10:09, Jan Beulich wrote:
> On 18.12.2020 09:57, Jürgen Groß wrote:
>> On 17.12.20 13:14, Jan Beulich wrote:
>>> On 17.12.2020 12:32, Jürgen Groß wrote:
>>>> On 17.12.20 12:28, Jan Beulich wrote:
>>>>> On 09.12.2020 17:09, Juergen Gross wrote:
>>>>>> +static const struct hypfs_entry *hypfs_dyndir_enter(
>>>>>> +    const struct hypfs_entry *entry)
>>>>>> +{
>>>>>> +    const struct hypfs_dyndir_id *data;
>>>>>> +
>>>>>> +    data = hypfs_get_dyndata();
>>>>>> +
>>>>>> +    /* Use template with original enter function. */
>>>>>> +    return data->template->e.funcs->enter(&data->template->e);
>>>>>> +}
>>>>>
>>>>> At the example of this (applies to other uses as well): I realize
>>>>> hypfs_get_dyndata() asserts that the pointer is non-NULL, but
>>>>> according to the bottom of ./CODING_STYLE this may not be enough
>>>>> when considering the implications of a NULL deref in the context
>>>>> of a PV guest. Even this living behind a sysctl doesn't really
>>>>> help, both because via XSM not fully privileged domains can be
>>>>> granted access, and because speculation may still occur all the
>>>>> way into here. (I'll send a patch to address the latter aspect in
>>>>> a few minutes.) While likely we have numerous existing examples
>>>>> with similar problems, I guess in new code we'd better be as
>>>>> defensive as possible.
>>>>
>>>> What do you suggest? BUG_ON()?
>>>
>>> Well, BUG_ON() would be a step in the right direction, converting
>>> privilege escalation to DoS. The question is if we can't do better
>>> here, gracefully failing in such a case (the usual pair of
>>> ASSERT_UNREACHABLE() plus return/break/goto approach doesn't fit
>>> here, at least not directly).
>>>
>>>> You are aware that this is nothing a user can influence, so it would
>>>> be a clear coding error in the hypervisor?
>>>
>>> A user (or guest) can't arrange for there to be a NULL pointer,
>>> but if there is one that can be run into here, this would still
>>> require an XSA afaict.
>>
>> I still don't see how this could happen without a major coding bug,
>> which IMO wouldn't go unnoticed during a really brief test (this is
>> the reason for ASSERT() in hypfs_get_dyndata() after all).
>
> True. Yet the NULL derefs wouldn't go unnoticed either.
>
>> It's not as if the control flow would allow many different ways to reach
>> any of the hypfs_get_dyndata() calls.
>
> I'm not convinced of this - this is a non-static function, and for the
> call patch 8 adds (just to take an example) it is not at all obvious
> that there is a guarantee the allocation did happen and was checked
> for success. Yes, in principle cpupool_gran_write() isn't supposed to
> be called in such a case, but it's in the nature of bugs that
> assumptions get broken.

Yes, but we do have tons of assumptions like that. I don't think we
should add tests for non-NULL pointers everywhere just because we
happen to dereference something. Where do we stop?

>
>> I can add security checks at the appropriate places, but I think this
>> would just be dead code. OTOH if you are feeling strongly here, let's
>> go with it.
>
> Going with it isn't the only possible route. The other is to drop
> the ASSERT()s altogether. It simply seems to me that their addition
> is a half-hearted attempt when considering what was added to
> ./CODING_STYLE not all that long ago.

No. The ASSERT() is clearly an attempt to catch a programming error
early. It is especially not trying to catch a situation which is thought
to be possible. The situation should really never happen, and I'm not
aware of how it could happen without a weird code modification.

Dropping the ASSERT() would add a real risk of not noticing a bug
introduced by a code modification.
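For illustration, the three alternatives being debated can be sketched in plain C (names and the per-vCPU pointer are hypothetical; Xen's real ASSERT()/BUG_ON() are hypervisor macros, not <assert.h>):

```c
#include <assert.h>
#include <stddef.h>

struct dyndata { int id; };

/*
 * Three ways to handle the "cannot happen" NULL discussed above:
 *   1. ASSERT(d)  - debug builds catch the bug; release builds would
 *                   dereference NULL (potentially exploitable for PV).
 *   2. BUG_ON(!d) - release builds crash too, converting a potential
 *                   privilege escalation into a DoS.
 *   3. graceful   - return an error the caller can propagate.
 */
static int dyndata_id(const struct dyndata *d)
{
    assert(d != NULL);      /* style 1: debug-only check */
    if (d == NULL)          /* style 3: defensive check kept in release */
        return -1;          /* error code instead of a NULL deref */
    return d->id;
}
```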


Juergen

--------------5114EE5B5B62E77601244A48
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------5114EE5B5B62E77601244A48--

--3Dn312sRL2sqYbwE71xXNlcz0uRviJzSm--

--lfvgs851yJHyzgD1RG7xbPwvsOZq0irWY
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/co2cFAwAAAAAACgkQsN6d1ii/Ey9o
Kwf5Ad1f5NVxvrPUObUZtukgzNTBkdWN8pJ5ySpe1p6JUF5mDL4IRDBGYeJYQHpxhV0vUfTraKg0
gKQf8DKFxUtNWH5iRtdMe2YO+AnPbdM16Reqcse2hRIU9xtrjVdlymXJlIyZS/B3JdIsWh0EuE7V
TAMNesLO6DyUz7V3Me9bbRiXSi2i5ZscdV2UEsmELUniB4gJ/9VkRq/mLLOOP71XMnmJ8Hx2hipR
vhdzaorXDWVw7FAims4Cgl6evNZv5fmx6Ik1wlXTZYmKkKqWGioRBEiKf7ZvPuEfoPqtXrqFOGp1
ZXu/SvD0IbO6HYBKGD6tbRkHZwE2+lMhNIoB6RZf6w==
=ez7H
-----END PGP SIGNATURE-----

--lfvgs851yJHyzgD1RG7xbPwvsOZq0irWY--


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 13:31:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 13:31:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56504.98954 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqFqi-0006zu-58; Fri, 18 Dec 2020 13:31:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56504.98954; Fri, 18 Dec 2020 13:31:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqFqi-0006zn-1g; Fri, 18 Dec 2020 13:31:12 +0000
Received: by outflank-mailman (input) for mailman id 56504;
 Fri, 18 Dec 2020 13:31:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kqFqg-0006zi-Vz
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 13:31:11 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kqFqf-00088R-L5; Fri, 18 Dec 2020 13:31:09 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kqFqf-00037b-7v; Fri, 18 Dec 2020 13:31:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=78IRszokJjcwXWRlJq8XMydnKIcEdWhnJEaC/s7+CP4=; b=ta0capm8ycKAgPRg10EyVyYXOv
	Eq3yXoDsz5zRDR2pWtdnmZpkhBj5B2wrSZe8T/7LHXp8IlQUyeHykEoY+YgnNGWqEZR4vY8tuaj7b
	EHVTIh/qq0V9nxVpVYLYxQroVl+ew+/ezFN+501bbZmHhAmBd1ET6Cv0V3/U3kUZ9iCc=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	Rahul.Singh@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v2] xen: Rework WARN_ON() to return whether a warning was triggered
Date: Fri, 18 Dec 2020 13:30:54 +0000
Message-Id: <20201218133054.7744-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

So far, our implementation of WARN_ON() cannot be used in the following
situation:

if ( WARN_ON() )
    ...

This is because WARN_ON() doesn't return whether a warning has been
triggered. Such a construct can be handy if you want to print more
information and also dump the stack trace.
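A minimal userspace sketch of the intended usage (WARN() here is a stand-in fprintf, not Xen's implementation, and frob() is a hypothetical caller):

```c
#include <stdio.h>

/* Userspace stand-ins for Xen's WARN()/WARN_ON(); illustrative only. */
#define WARN() fprintf(stderr, "WARNING at %s:%d\n", __FILE__, __LINE__)
#define WARN_ON(p)  ({                  \
    int warn_on_hit_ = !!(p);           \
    if (warn_on_hit_)                   \
        WARN();                         \
    warn_on_hit_;                       \
})

/* Hypothetical caller: print extra context when the warning fires. */
static int frob(int cookie)
{
    if ( WARN_ON(cookie < 0) )
    {
        fprintf(stderr, "frob: bad cookie %d\n", cookie);
        return -1;
    }
    return 0;
}
```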

Therefore, rework the WARN_ON() implementation to return whether a
warning was triggered. The idea was borrowed from Linux.

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

---
    Changes in v2:
        - Rework the commit message
        - Don't use trailing underscore
        - Add Bertrand's and Juergen's reviewed-by
---
 xen/include/xen/lib.h | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index 48429b69b8df..5841bd489c35 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -23,7 +23,13 @@
 #include <asm/bug.h>
 
 #define BUG_ON(p)  do { if (unlikely(p)) BUG();  } while (0)
-#define WARN_ON(p) do { if (unlikely(p)) WARN(); } while (0)
+#define WARN_ON(p)  ({                  \
+    bool ret_warn_on_ = (p);            \
+                                        \
+    if ( unlikely(ret_warn_on_) )       \
+        WARN();                         \
+    unlikely(ret_warn_on_);             \
+})
 
 /* All clang versions supported by Xen have _Static_assert. */
 #if defined(__clang__) || \
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Dec 18 13:35:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 13:35:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56509.98965 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqFuZ-00079x-Lp; Fri, 18 Dec 2020 13:35:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56509.98965; Fri, 18 Dec 2020 13:35:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqFuZ-00079q-Ix; Fri, 18 Dec 2020 13:35:11 +0000
Received: by outflank-mailman (input) for mailman id 56509;
 Fri, 18 Dec 2020 13:35:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kqFuX-00079l-Oh
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 13:35:09 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kqFuX-0008BY-HY; Fri, 18 Dec 2020 13:35:09 +0000
Received: from 54-240-197-228.amazon.com ([54.240.197.228]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kqFuX-0003W5-7P; Fri, 18 Dec 2020 13:35:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=xM2g/GyajSitlPfAkAWgiSwGXMkoqb0lchea5HNLvyg=; b=uSpo9ATqHSsdor1NQBFiFFtmAO
	wHxwDpPzVuOnbX3le4a830ztgAQP6o/AZq+UOlOSmBFGtgUQJF9i0yRXs9c3gMar7NKO5ez5OVMiB
	/R5dbNkQKa+gYpunNz438CavuHCfw7kueH4PRtZTujPtftncFmEiUO4+SuyqNiENyiog=;
Subject: Re: xen/evtchn: Interrupt for port 34, but apparently not enabled;
 per-user 00000000a86a4c1b on 5.10
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>, aams@amazon.de
Cc: linux-kernel@vger.kernel.org,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 foersleo@amazon.de
References: <ce881240-284f-8470-10f1-5cce353ee903@xen.org>
 <b5c32c48-3e74-2045-62ec-560b19766389@suse.com>
 <da65a69e-389b-1602-1479-6799ce10c101@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <1711bb04-ea95-3507-9aa3-e82791d757b4@xen.org>
Date: Fri, 18 Dec 2020 13:35:07 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <da65a69e-389b-1602-1479-6799ce10c101@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 15/12/2020 10:20, Jürgen Groß wrote:
> On 15.12.20 08:27, Jürgen Groß wrote:
>> On 14.12.20 22:25, Julien Grall wrote:
>>> Hi Juergen,
>>>
>>> When testing Linux 5.10 dom0, I could reliably hit the following 
>>> warning when using the event 2L ABI:
>>>
>>> [  589.591737] Interrupt for port 34, but apparently not enabled; 
>>> per-user 00000000a86a4c1b
>>> [  589.593259] WARNING: CPU: 0 PID: 1111 at 
>>> /home/ANT.AMAZON.COM/jgrall/works/oss/linux/drivers/xen/evtchn.c:170 
>>> evtchn_interrupt+0xeb/0x100
>>> [  589.595514] Modules linked in:
>>> [  589.596145] CPU: 0 PID: 1111 Comm: qemu-system-i38 Tainted: G 
>>> W         5.10.0+ #180
>>> [  589.597708] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), 
>>> BIOS rel-1.12.0-59-gc9ba5276e321-prebuilt.qemu.org 04/01/2014
>>> [  589.599782] RIP: e030:evtchn_interrupt+0xeb/0x100
>>> [  589.600698] Code: 48 8d bb d8 01 00 00 ba 01 00 00 00 be 1d 00 00 
>>> 00 e8 d9 10 ca ff eb b2 8b 75 20 48 89 da 48 c7 c7 a8 31 3d 82 e8 65 
>>> 29 a0 ff <0f> 0b e9 42 ff ff ff 0f 1f 40 00 66 2e 0f 1f 84 00 00 00 
>>> 00 00 0f
>>> [  589.604087] RSP: e02b:ffffc90040003e70 EFLAGS: 00010086
>>> [  589.605102] RAX: 0000000000000000 RBX: ffff888102091800 RCX: 
>>> 0000000000000027
>>> [  589.606445] RDX: 0000000000000000 RSI: ffff88817fe19150 RDI: 
>>> ffff88817fe19158
>>> [  589.607790] RBP: ffff88810f5ab980 R08: 0000000000000001 R09: 
>>> 0000000000328980
>>> [  589.609134] R10: 0000000000000000 R11: ffffc90040003c70 R12: 
>>> ffff888107fd3c00
>>> [  589.610484] R13: ffffc90040003ed4 R14: 0000000000000000 R15: 
>>> ffff88810f5ffd80
>>> [  589.611828] FS:  00007f960c4b8ac0(0000) GS:ffff88817fe00000(0000) 
>>> knlGS:0000000000000000
>>> [  589.613348] CS:  10000e030 DS: 0000 ES: 0000 CR0: 0000000080050033
>>> [  589.614525] CR2: 00007f17ee72e000 CR3: 000000010f5b6000 CR4: 
>>> 0000000000050660
>>> [  589.615874] Call Trace:
>>> [  589.616402]  <IRQ>
>>> [  589.616855]  __handle_irq_event_percpu+0x4e/0x2c0
>>> [  589.617784]  handle_irq_event_percpu+0x30/0x80
>>> [  589.618660]  handle_irq_event+0x3a/0x60
>>> [  589.619428]  handle_edge_irq+0x9b/0x1f0
>>> [  589.620209]  generic_handle_irq+0x4f/0x60
>>> [  589.621008]  evtchn_2l_handle_events+0x160/0x280
>>> [  589.621913]  __xen_evtchn_do_upcall+0x66/0xb0
>>> [  589.622767]  __xen_pv_evtchn_do_upcall+0x11/0x20
>>> [  589.623665]  asm_call_irq_on_stack+0x12/0x20
>>> [  589.624511]  </IRQ>
>>> [  589.624978]  xen_pv_evtchn_do_upcall+0x77/0xf0
>>> [  589.625848]  exc_xen_hypervisor_callback+0x8/0x10
>>>
>>> This can be reproduced when creating/destroying guest in a loop. 
>>> Although, I have struggled to reproduce it on a vanilla Xen.
>>>
>>> After several hours of debugging, I think I have found the root cause.
>>>
>>> While we only expect the unmask to happen when the event channel is 
>>> EOIed, there is an unmask happening as part of handle_edge_irq() 
>>> because the interrupt was seen as pending by another vCPU 
>>> (IRQS_PENDING is set).
>>>
>>> It turns out that the event channel is set for multiple vCPUs in 
>>> cpu_evtchn_mask. This is happening because the affinity is not 
>>> cleared when freeing an event channel.
>>>
>>> The implementation of evtchn_2l_handle_events() will look for all the 
>>> active interrupts for the current vCPU and later on clear the pending 
>>> bit (via the ack() callback). IOW, I believe, this is not an atomic 
>>> operation.
>>>
>>> Even if Xen will notify the event to a single vCPU, 
>>> evtchn_pending_sel may still be set on the other vCPU (thanks to a 
>>> different event channel). Therefore, there is a chance that two vCPUs 
>>> will try to handle the same interrupt.
>>>
>>> The IRQ handler handle_edge_irq() is able to deal with that and will 
>>> mask/unmask the interrupt. This will mess with the lateeoi logic 
>>> (although I managed to reproduce it once without XSA-332).
>>
>> Thanks for the analysis!
>>
>>> My initial idea to fix the problem was to switch the affinity from 
>>> CPU X to CPU0 when the event channel is freed.
>>>
>>> However, I am not sure this is enough because I haven't found 
>>> anything yet preventing a race between evtchn_2l_handle_events() and 
>>> evtchn_2l_bind_vcpu().
>>>
>>> So maybe we want to introduce refcounting (if there is nothing 
>>> provided by the IRQ framework) and only unmask when the counter 
>>> drops to 0.
>>>
>>> Any opinions?
>>
>> I think we don't need a refcount, but just the internal states "masked"
>> and "eoi_pending" and unmask only if both are false. "masked" will be
>> set when the event is being masked. When delivering a lateeoi irq
>> "eoi_pending" will be set and "masked" reset. "masked" will be reset
>> when a normal unmask is happening. And "eoi_pending" will be reset
>> when a lateeoi is signaled. Any reset of "masked" and "eoi_pending"
>> will check the other flag and do an unmask if both are false.
>>
>> I'll write a patch.
> 
> Julien, could you please test the attached (only build tested) patch?

Thank you for writing the patches. I will aim to give them a spin next week.
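For my own understanding, the masked/eoi_pending scheme you describe can be sketched like this (plain C, names hypothetical, not the actual patch):

```c
#include <stdbool.h>

/* Sketch of the two-flag scheme quoted above (illustrative only). */
struct lateeoi_state {
    bool masked;        /* a normal mask is in effect */
    bool eoi_pending;   /* lateeoi irq delivered, EOI not yet signaled */
    bool hw_unmasked;   /* stands in for the real event-channel unmask */
};

/* Any reset of either flag checks the other and unmasks if both clear. */
static void check_unmask(struct lateeoi_state *s)
{
    if (!s->masked && !s->eoi_pending)
        s->hw_unmasked = true;
}

static void mask_event(struct lateeoi_state *s)
{
    s->masked = true;
    s->hw_unmasked = false;
}

static void deliver_lateeoi_irq(struct lateeoi_state *s)
{
    s->eoi_pending = true;  /* set on delivery... */
    s->masked = false;      /* ...and "masked" is reset */
    check_unmask(s);        /* no unmask yet: eoi_pending still set */
}

static void normal_unmask(struct lateeoi_state *s)
{
    s->masked = false;
    check_unmask(s);
}

static void signal_lateeoi(struct lateeoi_state *s)
{
    s->eoi_pending = false;
    check_unmask(s);
}
```

In this model, handle_edge_irq()'s spurious unmask maps onto normal_unmask(): while the lateeoi EOI is outstanding, the event stays masked regardless.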

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 13:55:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 13:55:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56513.98977 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqGDh-0000rm-IS; Fri, 18 Dec 2020 13:54:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56513.98977; Fri, 18 Dec 2020 13:54:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqGDh-0000rf-Fb; Fri, 18 Dec 2020 13:54:57 +0000
Received: by outflank-mailman (input) for mailman id 56513;
 Fri, 18 Dec 2020 13:54:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqGDg-0000rX-Iq; Fri, 18 Dec 2020 13:54:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqGDg-0008Uo-9s; Fri, 18 Dec 2020 13:54:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqGDg-0007DM-2C; Fri, 18 Dec 2020 13:54:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kqGDg-0007q8-1l; Fri, 18 Dec 2020 13:54:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OSCtrKQjIOTsoYGgnRkXoVIwmv4YAOit3OAbjtJ/7Q4=; b=Re/EDXPnMjcOnYF31o/4cLFO8E
	QkCEWRVU/QvF5dj2jqnLnp15vfAoqXTVFy6nycjvQg/pJFXxrCNGR88H33t6us1/s4fs80aFNxaLU
	SKF4lxRQq2iMi0PYBFJVN5O5e6rHhzg8jRktOsmg2uIqFzfHcPJn8jrcmnmT1J8S2d7s=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157644-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157644: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=af3f37319cb1e1ca0c42842ecdbd1bcfc64a4b6f
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 18 Dec 2020 13:54:56 +0000

flight 157644 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157644/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail pass in 157613

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                af3f37319cb1e1ca0c42842ecdbd1bcfc64a4b6f
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  120 days
Failing since        152659  2020-08-21 14:07:39 Z  118 days  247 attempts
Testing same since   157613  2020-12-16 21:42:22 Z    1 days    2 attempts

------------------------------------------------------------
318 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 78650 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 13:58:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 13:58:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56519.98993 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqGH6-000138-4j; Fri, 18 Dec 2020 13:58:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56519.98993; Fri, 18 Dec 2020 13:58:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqGH6-000131-1e; Fri, 18 Dec 2020 13:58:28 +0000
Received: by outflank-mailman (input) for mailman id 56519;
 Fri, 18 Dec 2020 13:58:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fP3M=FW=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kqGH5-00012u-30
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 13:58:27 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e38f7502-b5f3-489a-a7db-233305f8f86a;
 Fri, 18 Dec 2020 13:58:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e38f7502-b5f3-489a-a7db-233305f8f86a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608299905;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=bPk7JmgfQ2TY/TAcyUiq/DCeSRf+2wqwb71nVJq4/OE=;
  b=ML5t33qUz/K9+VzjpzdLWORo39nkbNzpNJg0haF8hqj/b/R8hQPihOMK
   Faqg0KMGuoBf9PyVS7Arrdfkfok9qdyA7roW7UNs+UDtdDc7Yx7g+4Uzh
   racf0XUXjPVfygZQmIQWr9Ao8C7jtYhwGk8+Mnuxct90BUITj521CRGX0
   k=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: S7jw0Ff2I5FRGxVWvD8CaTGXEoK5ljSNye9a4L5+btRtzd9YEhJjdjOnNtuGY+lvm7Yk4blkkS
 c+omn26ucbONGvWdb3RGGP0aXM/ERIdGdoaH1DNxO5o8jyRdEKghVTu4ekP9RdDT5KxyOCjNMz
 2UnUEUUd5DN2S9HIQA38yVvfFqO139efNh8BziwzkMWQu9IJahiaVq3RN1UzjvjDTPuecL0IsH
 Lo5QOXbhmtGUxbkncfxN3fFf0pS9LAaXpNiSAFtSR8lZs1f6i2dIneQLfNhtOBZEQd2U+1KYuk
 kfM=
X-SBRS: 5.2
X-MesageID: 33571494
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,430,1599537600"; 
   d="scan'208";a="33571494"
Subject: Re: [PATCH] xen/x86: Fix memory leak in vcpu_create() error path
To: Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@eu.citrix.com>, Tim Deegan
	<tim@xen.org>, =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?=
	<michal.leszczynski@cert.pl>
References: <20200928154741.2366-1-andrew.cooper3@citrix.com>
 <33331c3a-1fd5-1ef6-16a3-21d2a6672e90@suse.com>
 <9556aeb3-2a7c-7aea-4386-6e561dd9ef6e@citrix.com>
 <9e652863-5ada-0327-5817-cdb2e652e066@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e26f0cc3-1893-6cd9-71b3-4e0c011318b3@citrix.com>
Date: Fri, 18 Dec 2020 13:58:19 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <9e652863-5ada-0327-5817-cdb2e652e066@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 18/12/2020 08:27, Jan Beulich wrote:
> On 17.12.2020 22:46, Andrew Cooper wrote:
>> On 29/09/2020 07:18, Jan Beulich wrote:
>>> On 28.09.2020 17:47, Andrew Cooper wrote:
>>>> --- a/xen/arch/x86/mm/hap/hap.c
>>>> +++ b/xen/arch/x86/mm/hap/hap.c
>>>> @@ -563,30 +563,37 @@ void hap_final_teardown(struct domain *d)
>>>>      paging_unlock(d);
>>>>  }
>>>>  
>>>> +void hap_vcpu_teardown(struct vcpu *v)
>>>> +{
>>>> +    struct domain *d = v->domain;
>>>> +    mfn_t mfn;
>>>> +
>>>> +    paging_lock(d);
>>>> +
>>>> +    if ( !paging_mode_hap(d) || !v->arch.paging.mode )
>>>> +        goto out;
>>> Any particular reason you don't use paging_get_hostmode() (as the
>>> original code did) here? Any particular reason for the seemingly
>>> redundant (and hence somewhat in conflict with the description's
>>> "with the minimum number of safety checks possible")
>>> paging_mode_hap()?
>> Yes to both.  As you spotted, I converted the shadow side first, and
>> made the two consistent.
>>
>> The paging_mode_{shadow,hap}() is necessary for idempotency.  These
>> functions really might get called before paging is set up, for an early
>> failure in domain_create().
> In which case how would v->arch.paging.mode be non-NULL already?
> They get set in {hap,shadow}_vcpu_init() only.

Right, but we also might end up here with an error early in
vcpu_create(), where d->arch.paging is set up, but v->arch.paging isn't.

This logic needs to be safe to use at any point of partial initialisation.

(And to be clear, I found I needed both of these based on some
artificial error injection testing.)

>> The paging mode really has nothing to do with hostmode/guestmode/etc.
>> It is the only way of expressing the logic where it is clear that the
>> lower pointer dereferences are trivially safe.
> Well, yes and no - the other uses of course should then also use
> paging_get_hostmode(), like various of the wrappers in paging.h
> do. Or else I question why we have paging_get_hostmode() in the
> first place.

I'm not convinced it is an appropriate abstraction to have, and I don't
expect it to survive the nested virt work.

> There are more examples in shadow code where this
> gets open-coded when it probably shouldn't be. There haven't been
> any such cases in HAP code so far ...

Doesn't matter.  Its use here would obfuscate the code (this is one part
of why I think it is a bad abstraction to begin with), and if the
implementation ever changed, the function would lose its safety.

> Additionally (noticing only now) in the shadow case you may now
> loop over all vCPU-s in shadow_teardown() just for
> shadow_vcpu_teardown() to bail right away. Wouldn't it make sense
> to retain the "if ( shadow_mode_enabled(d) )" there around the
> loop?

I'm not entirely convinced that was necessarily safe.  Irrespective, see
the TODO.  The foreach_vcpu() is only a stopgap until some cleanup
structure changes come along (which I had queued behind this patch anyway).

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 15:08:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 15:08:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56528.99005 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqHMt-00084A-GO; Fri, 18 Dec 2020 15:08:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56528.99005; Fri, 18 Dec 2020 15:08:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqHMt-000843-Bu; Fri, 18 Dec 2020 15:08:31 +0000
Received: by outflank-mailman (input) for mailman id 56528;
 Fri, 18 Dec 2020 15:08:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqHMr-00083v-LU; Fri, 18 Dec 2020 15:08:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqHMr-0001Qu-BU; Fri, 18 Dec 2020 15:08:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqHMq-0004VC-V4; Fri, 18 Dec 2020 15:08:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kqHMq-0004Tv-UX; Fri, 18 Dec 2020 15:08:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mApg4ajx9ZAeXisKJCyQWFhZL+PJS9C6CzKFF+EhQaQ=; b=swXc7n2ctxRABnPJB7ZDdh52c2
	4RWZKXDvn3SoZ0akUtXNkmoiYD1jagP1crIgJ8r3njLh6LTi/x8wU8pS/oR096m9Om6XjSMjJlhAs
	TyekyEZZQNuALWteQEE/sG2O1DDuV8nWa9EJ0tZm6fSrSA0ctJHOpVwUxfYw3k7cLQ8c=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157650-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 157650: regressions - FAIL
X-Osstest-Failures:
    xen-4.14-testing:build-amd64-libvirt:libvirt-build:fail:regression
    xen-4.14-testing:build-armhf-pvops:kernel-build:fail:regression
    xen-4.14-testing:build-amd64-pvops:kernel-build:fail:regression
    xen-4.14-testing:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=ad844aa352559a8b1f36e391a27d9d7dbddbdc36
X-Osstest-Versions-That:
    xen=d17a5d5d2774601f8137984a3ee23ec28eb0793c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 18 Dec 2020 15:08:28 +0000

flight 157650 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157650/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 157564
 build-armhf-pvops             6 kernel-build             fail REGR. vs. 157564
 build-amd64-pvops             6 kernel-build             fail REGR. vs. 157564

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157564
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157564
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157564
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157564
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass

version targeted for testing:
 xen                  ad844aa352559a8b1f36e391a27d9d7dbddbdc36
baseline version:
 xen                  d17a5d5d2774601f8137984a3ee23ec28eb0793c

Last test of basis   157564  2020-12-15 13:36:35 Z    3 days
Testing same since   157650  2020-12-17 17:06:20 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            fail    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ad844aa352559a8b1f36e391a27d9d7dbddbdc36
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Dec 17 17:47:25 2020 +0100

    update Xen version to 4.14.1
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 15:17:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 15:17:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56535.99020 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqHVP-0000jo-Mi; Fri, 18 Dec 2020 15:17:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56535.99020; Fri, 18 Dec 2020 15:17:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqHVP-0000jh-J1; Fri, 18 Dec 2020 15:17:19 +0000
Received: by outflank-mailman (input) for mailman id 56535;
 Fri, 18 Dec 2020 15:17:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqHVO-0000jZ-Bi; Fri, 18 Dec 2020 15:17:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqHVO-0001aK-4w; Fri, 18 Dec 2020 15:17:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqHVN-00052W-Qk; Fri, 18 Dec 2020 15:17:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kqHVN-0006Du-QH; Fri, 18 Dec 2020 15:17:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jakYRnAQglZMW1sfVdf/J3FBCVI6sREMon6Wt2VyANQ=; b=3G3L3WQl5Cvv+MkmqBtVNxdLTE
	vYT8t+ym6DR864OlIPK+b99QP4O3J2MFfX1OPQLTyOlRzU8UMD96pGH1knoILpmSievsboxoq6cbP
	0Rpu/MFwIiz0kUyeKvJOxMOiyBPP20tcXOtOik2Uatii6BU8fPLIBMyOX3G7RK/sNrCg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157668-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157668: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8009c33b5179536e2ecce54462fe4cd069060f77
X-Osstest-Versions-That:
    xen=7a3b691a8f3aa7720eecaab0e7bd090aa392885a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 18 Dec 2020 15:17:17 +0000

flight 157668 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157668/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 157656

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8009c33b5179536e2ecce54462fe4cd069060f77
baseline version:
 xen                  7a3b691a8f3aa7720eecaab0e7bd090aa392885a

Last test of basis   157656  2020-12-17 23:02:14 Z    0 days
Testing same since   157668  2020-12-18 13:00:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 8009c33b5179536e2ecce54462fe4cd069060f77
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Dec 18 13:29:14 2020 +0100

    x86/mm: p2m_add_foreign() is HVM-only
    
    This is the case also for xenmem_add_to_physmap_one(), as it is the
    function's only caller. Move the latter next to p2m_add_foreign(),
    allowing it to become static at the same time. While moving, adjust
    indentation of the body of the main switch().
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 173ae325026bd161ae5eecebda28dab2c7a80668
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Dec 18 13:28:30 2020 +0100

    x86/p2m: tidy p2m_add_foreign() a little
    
    Drop a bogus ASSERT() - we don't typically assert incoming domain
    pointers to be non-NULL, and there's no particular reason to do so here.
    
    Replace the open-coded DOMID_SELF check by use of
    rcu_lock_remote_domain_by_id(), at the same time covering the request
    being made with the current domain's actual ID.
    
    Move the "both domains same" check into just the path where it really
    is meaningful.
    
    Swap the order of the two puts, such that
    - the p2m lock isn't needlessly held across put_page(),
    - a separate put_page() on an error path can be avoided,
    - they're inverse to the order of the respective gets.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit f772b592b75d3144174d4c645b916f2718d9cce5
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Dec 18 13:25:40 2020 +0100

    lib: move sort code
    
    Build this code into an archive, partly paralleling bsearch().
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Wei Liu <wl@xen.org>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

commit 7c3af561acb70ddd16069b9c9cab3ce503a10987
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Dec 18 13:23:42 2020 +0100

    lib: move bsearch code
    
    Convert this code to an inline function (backed by an instance in an
    archive in case the compiler decides against inlining), which results
    in not having it in x86 final binaries. This saves a little bit of dead
    code.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

commit c54212261dc3305429344fe1d1cb298b30830155
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Dec 18 13:22:54 2020 +0100

    lib: move rbtree code
    
    Build this code into an archive, which results in not linking it into
    x86 final binaries. This saves about 1.5k of dead code.
    
    While moving the source file, take the opportunity and drop the
    pointless EXPORT_SYMBOL() and an instance of trailing whitespace.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

commit 3b1d8eb4744d210abcd1c033bf07d20345b926ba
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Dec 18 13:22:10 2020 +0100

    lib: move init_constructors()
    
    ... into its own CU, for being unrelated to other things in
    common/lib.c.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

commit 65fdf25768deba4e8bea751773f2ec4f7ff67ea5
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Dec 18 13:21:25 2020 +0100

    lib: move parse_size_and_unit()
    
    ... into its own CU, to build it into an archive.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Wei Liu <wl@xen.org>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

commit 26dfde919cac720c29d076bc8fd38ad0af1b2abb
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Dec 18 13:20:42 2020 +0100

    lib: move list sorting code
    
    Build the source file always, as by putting it into an archive it still
    won't be linked into final binaries when not needed. This way possible
    build breakage will be easier to notice, and it's more consistent with
    us unconditionally building other library kind of code (e.g. sort() or
    bsearch()).
    
    While moving the source file, take the opportunity and drop the
    pointless EXPORT_SYMBOL() and an unnecessary #include.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit f301f9a9e84f3cfd18750065f8a3794c8182c7f0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Dec 18 13:17:57 2020 +0100

    lib: collect library files in an archive
    
    In order to (subsequently) drop odd things like CONFIG_NEEDS_LIST_SORT
    just to avoid bloating binaries when only some arch-es and/or
    configurations need generic library routines, combine objects under lib/
    into an archive, which the linker then can pick the necessary objects
    out of.
    
    Note that we can't use thin archives just yet, until we've raised the
    minimum required binutils version suitably.
    
    Note further that --start-group / --end-group get put in place right
    away to allow for symbol resolution across all archives, once we gain
    multiple ones.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 15:32:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 15:32:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56545.99041 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqHjZ-0002kp-3D; Fri, 18 Dec 2020 15:31:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56545.99041; Fri, 18 Dec 2020 15:31:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqHjZ-0002ki-0G; Fri, 18 Dec 2020 15:31:57 +0000
Received: by outflank-mailman (input) for mailman id 56545;
 Fri, 18 Dec 2020 15:31:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqHjX-0002kM-CD; Fri, 18 Dec 2020 15:31:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqHjX-0001oO-3C; Fri, 18 Dec 2020 15:31:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqHjW-0005he-Sc; Fri, 18 Dec 2020 15:31:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kqHjW-0002L1-S9; Fri, 18 Dec 2020 15:31:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=kuANxrxMpYRwA7soAHR2AWeUnO9yjxKBYDzBLbdXsBI=; b=kZuW1YHssndoByQjyRzRADQRVn
	4/VUkvL+AAKTDwdzqKBbv436I/Nkb4lneUfex/wxAx/ONFi0b2j7baQrIVpCQFYx5U/EBAkBlIDXm
	/FO+BF4guPSEugksemMhkhOC/RI3vyavALbb0K2keXyFbFiOAy1n0KGEav9NNxGkA6Rg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157661-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157661: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=9d5f9b7ae8bf518ade951117b2da629bf26faf97
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 18 Dec 2020 15:31:54 +0000

flight 157661 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157661/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              9d5f9b7ae8bf518ade951117b2da629bf26faf97
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  161 days
Failing since        151818  2020-07-11 04:18:52 Z  160 days  155 attempts
Testing same since   157661  2020-12-18 04:20:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 33398 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 16:26:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 16:26:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56564.99077 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqIZw-0008QJ-Ir; Fri, 18 Dec 2020 16:26:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56564.99077; Fri, 18 Dec 2020 16:26:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqIZw-0008QC-Fn; Fri, 18 Dec 2020 16:26:04 +0000
Received: by outflank-mailman (input) for mailman id 56564;
 Fri, 18 Dec 2020 16:26:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FvXD=FW=casper.srs.infradead.org=batv+47c5e4db15a5ade07d01+6326+infradead.org+dwmw2@srs-us1.protection.inumbo.net>)
 id 1kqIZt-0008Q7-TD
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 16:26:02 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 77e883e6-9938-46f6-ac21-e1c71c94d5f1;
 Fri, 18 Dec 2020 16:25:53 +0000 (UTC)
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=freeip.amazon.com)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kqIZX-0004Ba-AJ; Fri, 18 Dec 2020 16:25:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 77e883e6-9938-46f6-ac21-e1c71c94d5f1
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Mime-Version:Content-Type:Date:Cc:To:
	From:Subject:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:Content-ID:
	Content-Description:In-Reply-To:References;
	bh=obycoBflX0G8n6e3Nf95BDIgacjxulKLFIrgPWEt6io=; b=KNdhvmNcFhLyz+KMb46+44/1kk
	jWMAzDBDs4xRJEbqP+raqO80deZwlNEp6CedB+z4D3k/UvvoLOkNQyJYL+AuVj8LXmK5OCsy7ffMn
	lJiF1gB/FckR6UdgQr+iCIwYfH0sO4kITK/X4aO8pcbjQ1kLY7EA/l7gndwnsoZ0dyG59qRx+vA1i
	FgdvOmF/qMDYkU1j/Uy0iGZWXd/tio+d9mlYh3ZzmoT4grV6OZJMJfg+Yy7frOCAnSma+vdO/Gzsj
	Yov6MP1HudyGYZtPOh3+iIwUikRDFuDGawe5DLygfFfmHKGmqKx/YNfaMHZ0i7hf/fPDw2vSsJmMh
	9nY51Pgg==;
Message-ID: <5ba658b2d8a2bce63622f5bb8ef8d5e6114276eb.camel@infradead.org>
Subject: [PATCH] xen: Fix event channel callback via INTX/GSI
From: David Woodhouse <dwmw2@infradead.org>
To: "x86@kernel.org" <x86@kernel.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Boris Ostrovsky
 <boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>, Paul Durrant
 <pdurrant@amazon.com>, jgrall@amazon.com, karahmed@amazon.de, xen-devel
 <xen-devel@lists.xenproject.org>
Date: Fri, 18 Dec 2020 16:25:38 +0000
Content-Type: multipart/signed; micalg="sha-256";
	protocol="application/x-pkcs7-signature";
	boundary="=-YLrO5Z5kR+4+pEHI7e6u"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html


--=-YLrO5Z5kR+4+pEHI7e6u
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

From: David Woodhouse <dwmw@amazon.co.uk>

For a while, event channel notification via the PCI platform device
has been broken, because we attempt to communicate with xenstore before
we even have notifications working, in the xs_reset_watches() call
issued from xs_init().

In practice this mostly goes unnoticed, because for Xen versions
below 4.0 we avoid xs_reset_watches() anyway, since xenstore might
not cope with reading a non-existent key. And newer Xen *does* have
the vector callback support, so we rarely fall back to INTX/GSI
delivery.

But those working on Xen and Xen-compatible hypervisor implementations
do want to test that INTX/GSI delivery works correctly, as there are
still guest kernels in active use which don't use vector delivery yet.
So it's useful to have it actually working.

To that end, clean up a bit of the mess of xs_init() and xenbus_probe()
startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
case, deferring it to be called from xenbus_probe() in the XS_HVM case
instead.

Then fix up the invocation of xenbus_probe() to happen either from its
device_initcall if the callback is available early enough, or when the
callback is finally set up. This means that the hack of calling
xenbus_probe() from a workqueue after the first interrupt is no longer
needed.

Add a 'no_vector_callback' command line argument to test it.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 arch/arm/xen/enlighten.c          |  2 +-
 arch/x86/xen/enlighten_hvm.c      | 11 ++++-
 drivers/xen/events/events_base.c  | 10 -----
 drivers/xen/platform-pci.c        |  7 ++++
 drivers/xen/xenbus/xenbus.h       |  1 +
 drivers/xen/xenbus/xenbus_comms.c |  8 ----
 drivers/xen/xenbus/xenbus_probe.c | 68 ++++++++++++++++++++++++-------
 include/xen/xenbus.h              |  2 +-
 8 files changed, 74 insertions(+), 35 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 60e901cd0de6..5a957a9a0984 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -371,7 +371,7 @@ static int __init xen_guest_init(void)
 	}
 	gnttab_init();
 	if (!xen_initial_domain())
-		xenbus_probe(NULL);
+		xenbus_probe();
 
 	/*
 	 * Making sure board specific code will not set up ops for
diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
index 9e87ab010c82..a1c07e0c888e 100644
--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -188,6 +188,8 @@ static int xen_cpu_dead_hvm(unsigned int cpu)
        return 0;
 }
 
+static bool no_vector_callback __initdata;
+
 static void __init xen_hvm_guest_init(void)
 {
 	if (xen_pv_domain())
@@ -207,7 +209,7 @@ static void __init xen_hvm_guest_init(void)
 
 	xen_panic_handler_init();
 
-	if (xen_feature(XENFEAT_hvm_callback_vector))
+	if (!no_vector_callback && xen_feature(XENFEAT_hvm_callback_vector))
 		xen_have_vector_callback = 1;
 
 	xen_hvm_smp_init();
@@ -233,6 +235,13 @@ static __init int xen_parse_nopv(char *arg)
 }
 early_param("xen_nopv", xen_parse_nopv);
 
+static __init int xen_parse_no_vector_callback(char *arg)
+{
+	no_vector_callback = true;
+	return 0;
+}
+early_param("no_vector_callback", xen_parse_no_vector_callback);
+
 bool __init xen_hvm_need_lapic(void)
 {
 	if (xen_pv_domain())
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 6038c4c35db5..bbebe248b726 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -2010,16 +2010,6 @@ static struct irq_chip xen_percpu_chip __read_mostly = {
 	.irq_ack		= ack_dynirq,
 };
 
-int xen_set_callback_via(uint64_t via)
-{
-	struct xen_hvm_param a;
-	a.domid = DOMID_SELF;
-	a.index = HVM_PARAM_CALLBACK_IRQ;
-	a.value = via;
-	return HYPERVISOR_hvm_op(HVMOP_set_param, &a);
-}
-EXPORT_SYMBOL_GPL(xen_set_callback_via);
-
 #ifdef CONFIG_XEN_PVHVM
 /* Vector callbacks are better than PCI interrupts to receive event
  * channel notifications because we can receive vector callbacks on any
diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c
index dd911e1ff782..5c3015a90a73 100644
--- a/drivers/xen/platform-pci.c
+++ b/drivers/xen/platform-pci.c
@@ -132,6 +132,13 @@ static int platform_pci_probe(struct pci_dev *pdev,
 			dev_warn(&pdev->dev, "request_irq failed err=%d\n", ret);
 			goto out;
 		}
+		/*
+		 * It doesn't strictly *have* to run on CPU0 but it sure
+		 * as hell better process the event channel ports delivered
+		 * to CPU0.
+		 */
+		irq_set_affinity(pdev->irq, cpumask_of(0));
+
 		callback_via = get_callback_via(pdev);
 		ret = xen_set_callback_via(callback_via);
 		if (ret) {
diff --git a/drivers/xen/xenbus/xenbus.h b/drivers/xen/xenbus/xenbus.h
index 5f5b8a7d5b80..05bbda51103f 100644
--- a/drivers/xen/xenbus/xenbus.h
+++ b/drivers/xen/xenbus/xenbus.h
@@ -113,6 +113,7 @@ int xenbus_probe_node(struct xen_bus_type *bus,
 		      const char *type,
 		      const char *nodename);
 int xenbus_probe_devices(struct xen_bus_type *bus);
+void xenbus_probe(void);
 
 void xenbus_dev_changed(const char *node, struct xen_bus_type *bus);
 
diff --git a/drivers/xen/xenbus/xenbus_comms.c b/drivers/xen/xenbus/xenbus_comms.c
index eb5151fc8efa..e5fda0256feb 100644
--- a/drivers/xen/xenbus/xenbus_comms.c
+++ b/drivers/xen/xenbus/xenbus_comms.c
@@ -57,16 +57,8 @@ DEFINE_MUTEX(xs_response_mutex);
 static int xenbus_irq;
 static struct task_struct *xenbus_task;
 
-static DECLARE_WORK(probe_work, xenbus_probe);
-
-
 static irqreturn_t wake_waiting(int irq, void *unused)
 {
-	if (unlikely(xenstored_ready == 0)) {
-		xenstored_ready = 1;
-		schedule_work(&probe_work);
-	}
-
 	wake_up(&xb_waitq);
 	return IRQ_HANDLED;
 }
diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
index 38725d97d909..876f381b100a 100644
--- a/drivers/xen/xenbus/xenbus_probe.c
+++ b/drivers/xen/xenbus/xenbus_probe.c
@@ -682,29 +682,63 @@ void unregister_xenstore_notifier(struct notifier_block *nb)
 }
 EXPORT_SYMBOL_GPL(unregister_xenstore_notifier);
 
-void xenbus_probe(struct work_struct *unused)
+void xenbus_probe(void)
 {
 	xenstored_ready = 1;
 
+	/*
+	 * In the HVM case, xenbus_init() deferred its call to
+	 * xs_init() in case callbacks were not operational yet.
+	 * So do it now.
+	 */
+	if (xen_store_domain_type == XS_HVM)
+		xs_init();
+
 	/* Notify others that xenstore is up */
 	blocking_notifier_call_chain(&xenstore_chain, 0, NULL);
 }
-EXPORT_SYMBOL_GPL(xenbus_probe);
 
 static int __init xenbus_probe_initcall(void)
 {
-	if (!xen_domain())
-		return -ENODEV;
-
-	if (xen_initial_domain() || xen_hvm_domain())
-		return 0;
+	/*
+	 * Probe XenBus here in the XS_PV case, and also XS_HVM unless we
+	 * need to wait for the platform PCI device to come up, which is
+	 * the (XEN_PVHVM && !xen_have_vector_callback) case.
+	 */
+	if (xen_store_domain_type == XS_PV ||
+	    (xen_store_domain_type == XS_HVM &&
+	     (!IS_ENABLED(CONFIG_XEN_PVHVM) || xen_have_vector_callback)))
+		xenbus_probe();
 
-	xenbus_probe(NULL);
 	return 0;
 }
-
 device_initcall(xenbus_probe_initcall);
 
+int xen_set_callback_via(uint64_t via)
+{
+	struct xen_hvm_param a;
+	int ret;
+
+	a.domid = DOMID_SELF;
+	a.index = HVM_PARAM_CALLBACK_IRQ;
+	a.value = via;
+
+	ret = HYPERVISOR_hvm_op(HVMOP_set_param, &a);
+	if (ret)
+		return ret;
+
+	/*
+	 * If xenbus_probe_initcall() deferred the xenbus_probe()
+	 * due to the callback not functioning yet, we can do it now.
+	 */
+	if (!xenstored_ready && xen_store_domain_type == XS_HVM &&
+	    IS_ENABLED(CONFIG_XEN_PVHVM) && !xen_have_vector_callback)
+		xenbus_probe();
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(xen_set_callback_via);
+
 /* Set up event channel for xenstored which is run as a local process
  * (this is normally used only in dom0)
  */
@@ -817,11 +851,17 @@ static int __init xenbus_init(void)
 		break;
 	}
 
-	/* Initialize the interface to xenstore. */
-	err = xs_init();
-	if (err) {
-		pr_warn("Error initializing xenstore comms: %i\n", err);
-		goto out_error;
+	/*
+	 * HVM domains may not have a functional callback yet. In that
+	 * case let xs_init() be called from xenbus_probe(), which will
+	 * get invoked at an appropriate time.
+	 */
+	if (xen_store_domain_type != XS_HVM) {
+		err = xs_init();
+		if (err) {
+			pr_warn("Error initializing xenstore comms: %i\n", err);
+			goto out_error;
+		}
 	}
 
 	if ((xen_store_domain_type != XS_LOCAL) &&
diff --git a/include/xen/xenbus.h b/include/xen/xenbus.h
index 5a8315e6d8a6..61202c83d560 100644
--- a/include/xen/xenbus.h
+++ b/include/xen/xenbus.h
@@ -183,7 +183,7 @@ void xs_suspend_cancel(void);
 
 struct work_struct;
 
-void xenbus_probe(struct work_struct *);
+void xenbus_probe(void);
 
 #define XENBUS_IS_ERR_READ(str) ({			\
 	if (!IS_ERR(str) && strlen(str) == 0) {		\
-- 
2.26.2


--=-YLrO5Z5kR+4+pEHI7e6u--



From xen-devel-bounces@lists.xenproject.org Fri Dec 18 18:54:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 18:54:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56649.99201 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqKsq-0006pA-66; Fri, 18 Dec 2020 18:53:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56649.99201; Fri, 18 Dec 2020 18:53:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqKsq-0006p3-2s; Fri, 18 Dec 2020 18:53:44 +0000
Received: by outflank-mailman (input) for mailman id 56649;
 Fri, 18 Dec 2020 18:53:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqKso-0006ov-Rc; Fri, 18 Dec 2020 18:53:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqKso-0005gq-Jw; Fri, 18 Dec 2020 18:53:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqKso-0003Ny-Ch; Fri, 18 Dec 2020 18:53:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kqKso-00042C-CB; Fri, 18 Dec 2020 18:53:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=s9ie8fGjGxw2GTrjg8HVPBayE7wXjRRCFdv6OHZ7Zm0=; b=LuvYCIayGQ7Sr12WIhDMErOwy1
	jT4y9nixZGMudz/0j2wBQu90xAl2PueFlBQAPEWCv+4Sh7u2vYtg0+XGiWiHmuAR0+W+Lv0V0KoNG
	5N9k/DS4lXMi9wRE40KSZQlTAbKQY6fzW5e9aoHhYcT/kDkR76Kg9JMH9mqS6WF7hFO0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157679-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157679: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8009c33b5179536e2ecce54462fe4cd069060f77
X-Osstest-Versions-That:
    xen=7a3b691a8f3aa7720eecaab0e7bd090aa392885a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 18 Dec 2020 18:53:42 +0000

flight 157679 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157679/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 157656

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8009c33b5179536e2ecce54462fe4cd069060f77
baseline version:
 xen                  7a3b691a8f3aa7720eecaab0e7bd090aa392885a

Last test of basis   157656  2020-12-17 23:02:14 Z    0 days
Testing same since   157668  2020-12-18 13:00:30 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 8009c33b5179536e2ecce54462fe4cd069060f77
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Dec 18 13:29:14 2020 +0100

    x86/mm: p2m_add_foreign() is HVM-only
    
    This is the case also for xenmem_add_to_physmap_one(), as it is the only
    caller of the function. Move the latter next to p2m_add_foreign(),
    allowing it to become static at the same time. While moving, adjust the
    indentation of the body of the main switch().
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 173ae325026bd161ae5eecebda28dab2c7a80668
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Dec 18 13:28:30 2020 +0100

    x86/p2m: tidy p2m_add_foreign() a little
    
    Drop a bogus ASSERT() - we don't typically assert incoming domain
    pointers to be non-NULL, and there's no particular reason to do so here.
    
    Replace the open-coded DOMID_SELF check by use of
    rcu_lock_remote_domain_by_id(), at the same time covering the request
    being made with the current domain's actual ID.
    
    Move the "both domains same" check into just the path where it really
    is meaningful.
    
    Swap the order of the two puts, such that
    - the p2m lock isn't needlessly held across put_page(),
    - a separate put_page() on an error path can be avoided,
    - they're inverse to the order of the respective gets.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit f772b592b75d3144174d4c645b916f2718d9cce5
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Dec 18 13:25:40 2020 +0100

    lib: move sort code
    
    Build this code into an archive, partly paralleling bsearch().
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Wei Liu <wl@xen.org>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

commit 7c3af561acb70ddd16069b9c9cab3ce503a10987
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Dec 18 13:23:42 2020 +0100

    lib: move bsearch code
    
    Convert this code to an inline function (backed by an instance in an
    archive in case the compiler decides against inlining), which results
    in not having it in x86 final binaries. This saves a little bit of dead
    code.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

commit c54212261dc3305429344fe1d1cb298b30830155
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Dec 18 13:22:54 2020 +0100

    lib: move rbtree code
    
    Build this code into an archive, which results in not linking it into
    x86 final binaries. This saves about 1.5k of dead code.
    
    While moving the source file, take the opportunity and drop the
    pointless EXPORT_SYMBOL() and an instance of trailing whitespace.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

commit 3b1d8eb4744d210abcd1c033bf07d20345b926ba
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Dec 18 13:22:10 2020 +0100

    lib: move init_constructors()
    
    ... into its own CU, for being unrelated to other things in
    common/lib.c.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

commit 65fdf25768deba4e8bea751773f2ec4f7ff67ea5
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Dec 18 13:21:25 2020 +0100

    lib: move parse_size_and_unit()
    
    ... into its own CU, to build it into an archive.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Wei Liu <wl@xen.org>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

commit 26dfde919cac720c29d076bc8fd38ad0af1b2abb
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Dec 18 13:20:42 2020 +0100

    lib: move list sorting code
    
    Build the source file always, as by putting it into an archive it still
    won't be linked into final binaries when not needed. This way possible
    build breakage will be easier to notice, and it's more consistent with
    us unconditionally building other library kind of code (e.g. sort() or
    bsearch()).
    
    While moving the source file, take the opportunity and drop the
    pointless EXPORT_SYMBOL() and an unnecessary #include.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit f301f9a9e84f3cfd18750065f8a3794c8182c7f0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Dec 18 13:17:57 2020 +0100

    lib: collect library files in an archive
    
    In order to (subsequently) drop odd things like CONFIG_NEEDS_LIST_SORT
    just to avoid bloating binaries when only some arch-es and/or
    configurations need generic library routines, combine objects under lib/
    into an archive, which the linker then can pick the necessary objects
    out of.
    
    Note that we can't use thin archives just yet, until we've raised the
    minimum required binutils version suitably.
    
    Note further that --start-group / --end-group get put in place right
    away to allow for symbol resolution across all archives, once we gain
    multiple ones.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 18:56:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 18:56:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56655.99216 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqKvN-0006xX-LH; Fri, 18 Dec 2020 18:56:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56655.99216; Fri, 18 Dec 2020 18:56:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqKvN-0006xQ-IM; Fri, 18 Dec 2020 18:56:21 +0000
Received: by outflank-mailman (input) for mailman id 56655;
 Fri, 18 Dec 2020 18:56:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=azX+=FW=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kqKvM-0006xL-OJ
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 18:56:20 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e060be68-4478-4943-9ab5-721e4d779b26;
 Fri, 18 Dec 2020 18:56:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e060be68-4478-4943-9ab5-721e4d779b26
Date: Fri, 18 Dec 2020 10:56:17 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1608317779;
	bh=mNsuymiy+hlbVxv5osDqiBtsrMfiVf+wAvW+nLwzmsI=;
	h=From:To:cc:Subject:In-Reply-To:References:From;
	b=nAmAtMWzJX6E6aXsN1korRa+N127EuOwl6yqce+Wv0dpOaeltzq+WlTjvod7OnfZf
	 Gbr+4Ekn8ZM2JdS5CufTTVefIAAaDOhaAynwBhiLWIPDGvftTvupN6Y1+Vd0loD7v5
	 Iy1hU3jF0Qg4aalpCP2I5gj4JLbvmEPLNDHhgVXd8grmklWLIose/89x5AEUtah5F5
	 PLFQM6PnChv7N8F4YqgmF7RXNZUnxt6W8b4z1BoIiBsrqoXIgywG8LVnI8jvedniF3
	 i5NYMtbn8C1Hr279+c9MDSuJaqWZTimcTExOq9DPOc07izrLzbNByCEIhBOEligrQx
	 Q9Z1WFaulxSEw==
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Rahul.Singh@arm.com, Julien Grall <jgrall@amazon.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
    Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v2] xen: Rework WARN_ON() to return whether a warning
 was triggered
In-Reply-To: <20201218133054.7744-1-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2012181056080.4040@sstabellini-ThinkPad-T480s>
References: <20201218133054.7744-1-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 18 Dec 2020, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> So far, our implementation of WARN_ON() cannot be used in the following
> situation:
> 
> if ( WARN_ON() )
>     ...
> 
> This is because WARN_ON() doesn't return whether a warning has been
> triggered. Such a construction can be handy if you want to print more
> information and also dump the stack trace.
> 
> Therefore, rework the WARN_ON() implementation to return whether a
> warning was triggered. The idea was borrowed from Linux.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Juergen Gross <jgross@suse.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v2:
>         - Rework the commit message
>         - Don't use trailing underscore
>         - Add Bertrand's and Juergen's reviewed-by
> ---
>  xen/include/xen/lib.h | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
> index 48429b69b8df..5841bd489c35 100644
> --- a/xen/include/xen/lib.h
> +++ b/xen/include/xen/lib.h
> @@ -23,7 +23,13 @@
>  #include <asm/bug.h>
>  
>  #define BUG_ON(p)  do { if (unlikely(p)) BUG();  } while (0)
> -#define WARN_ON(p) do { if (unlikely(p)) WARN(); } while (0)
> +#define WARN_ON(p)  ({                  \
> +    bool ret_warn_on_ = (p);            \
> +                                        \
> +    if ( unlikely(ret_warn_on_) )       \
> +        WARN();                         \
> +    unlikely(ret_warn_on_);             \
> +})
>  
>  /* All clang versions supported by Xen have _Static_assert. */
>  #if defined(__clang__) || \
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 20:43:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 20:43:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56675.99257 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqMb2-00010p-Ph; Fri, 18 Dec 2020 20:43:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56675.99257; Fri, 18 Dec 2020 20:43:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqMb2-00010i-Md; Fri, 18 Dec 2020 20:43:28 +0000
Received: by outflank-mailman (input) for mailman id 56675;
 Fri, 18 Dec 2020 20:43:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iVAa=FW=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1kqMb1-00010d-7i
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 20:43:27 +0000
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dacd7049-351e-4e31-bec3-e23511a2cfe3;
 Fri, 18 Dec 2020 20:43:26 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0BIKeQtL122498;
 Fri, 18 Dec 2020 20:43:22 GMT
Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71])
 by userp2130.oracle.com with ESMTP id 35cn9rv7pj-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Fri, 18 Dec 2020 20:43:22 +0000
Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1])
 by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0BIKfNTm052695;
 Fri, 18 Dec 2020 20:43:21 GMT
Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75])
 by aserp3030.oracle.com with ESMTP id 35d7esqs3n-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 18 Dec 2020 20:43:21 +0000
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
 by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 0BIKhIrT031029;
 Fri, 18 Dec 2020 20:43:18 GMT
Received: from [10.39.216.141] (/10.39.216.141)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Fri, 18 Dec 2020 12:43:18 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dacd7049-351e-4e31-bec3-e23511a2cfe3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : from : to :
 cc : references : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=KGkrhxqOcM6OiUfI259rBQpozA1rYuM0leg0FkluyEY=;
 b=rcLyBy0bVLWp/07VwW2T+kZsXMYBjnoMETEee5T+PFY22SKj+Hb/zJA+ErlB/2ZDgJAZ
 rnMldy+QF7UAr2PyhBbBS+pR1sC1avSnUM/WUvpo3p5Co1smxfgbSkLzeDT6tUytPVfg
 sS8VChG+1R+v5JIz//1JpUkR4qighsSK3zIallKj5YT4FdHjorUXKCPk5uHTymDvS0Xn
 6cVaTSeRJ6vO14WzZs/9ThyG0oX9eDZmDcH1JdN0KdyLvxqmLaknvVcc1p9GuLEJEEsH
 r8QxSiUYZfS9GlbkhLVsDIfAdm6w8rtNi2Im681v1bMs8rc5f7vd7zthFgL2Z/YeV/Sv Gw== 
Subject: Re: XSA-351 causing Solaris-11 systems to panic during boot.
From: boris.ostrovsky@oracle.com
To: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, Cheyenne Wills <cheyenne.wills@gmail.com>
References: <CAHpsFVc4AAm6L0rKUuV47ydOjtw7XAgFnDZxRjdCL0OHXJERDw@mail.gmail.com>
 <7bca24cb-a3af-b54d-b224-3c2a316859dd@suse.com>
 <4fc3532b-f53f-2a15-ce64-f857816b0566@oracle.com>
 <f4ff3d16-40f6-e8a1-fcdd-ca52e1f52ca6@suse.com>
 <c90622c4-f9e0-8b6d-ab46-bba0cbfc0fd9@oracle.com>
 <0430337a-6fcd-9471-4455-838390401220@citrix.com>
 <c6e05b63-b066-9bd0-9da1-1fc089cd1aea@oracle.com>
Organization: Oracle Corporation
Message-ID: <10958d4a-154f-a524-35e9-a75eaf50fe55@oracle.com>
Date: Fri, 18 Dec 2020 15:43:16 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <c6e05b63-b066-9bd0-9da1-1fc089cd1aea@oracle.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9839 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 phishscore=0 spamscore=0 bulkscore=0
 suspectscore=0 adultscore=0 mlxscore=0 mlxlogscore=999 malwarescore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2012180139
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9839 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0 mlxlogscore=999
 impostorscore=0 lowpriorityscore=0 clxscore=1015 spamscore=0
 malwarescore=0 priorityscore=1501 phishscore=0 mlxscore=0 bulkscore=0
 suspectscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2012180139


On 12/17/20 12:49 PM, boris.ostrovsky@oracle.com wrote:
> On 12/17/20 11:46 AM, Andrew Cooper wrote:
>> On 17/12/2020 16:25, boris.ostrovsky@oracle.com wrote:
>>> On 12/17/20 2:40 AM, Jan Beulich wrote:
>>>> On 17.12.2020 02:51, boris.ostrovsky@oracle.com wrote:
>>>> I think this is acceptable as a workaround, albeit we may want to
>>>> consider further restricting this (at least on staging), like e.g.
>>>> requiring a guest config setting to enable the workaround. 
>>> Maybe, but then someone migrating from a stable release to 4.15 will have to modify guest configuration.
>>>
>>>
>>>> But
>>>> maybe this will need to be part of the MSR policy for the domain
>>>> instead, down the road. We'll definitely want Andrew's view here.
>>>>
>>>> Speaking of staging - before applying anything to the stable
>>>> branches, I think we want to have this addressed on the main
>>>> branch. I can't see how Solaris would work there.
>>> Indeed it won't. I'll need to do that as well (I misinterpreted the statement in the XSA about only 4.14- being vulnerable)
>> It's hopefully obvious now why we suddenly finished the "lets turn all
>> unknown MSRs to #GP" work at the point that we did (after dithering on
>> the point for several years).
>>
>> To put it bluntly, default MSR readability was not a clever decision at all.
>>
>> There is a large risk that there is a similar vulnerability elsewhere,
>> given how poorly documented the MSRs are (and one contemporary CPU I've
>> got the manual open for has more than 6000 *documented* MSRs).  We did
>> debate for a while whether the readability of the PPIN MSRs was a
>> vulnerability or not, before eventually deciding not.


Can we do something like KVM's ignore_msrs (but probably return 0 on reads, to avoid leaks from the system)? It would allow us to deal with cases where a guest is suddenly unable to boot after a hypervisor update (especially from pre-4.14). It won't help in all cases, since some MSRs may be expected to be non-zero, but I think it will cover a large number of them. (And it will certainly do what Jan is asking above, while not being specific to this particular breakage.)


-boris


>> Irrespective of what we do to fix this in Xen, has anyone fixed Solaris yet?
>
> I am not aware of anyone working on this (not that I would be).
>
>
> -boris
>


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 20:45:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 20:45:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56681.99276 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqMd3-00018p-7i; Fri, 18 Dec 2020 20:45:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56681.99276; Fri, 18 Dec 2020 20:45:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqMd3-00018i-4h; Fri, 18 Dec 2020 20:45:33 +0000
Received: by outflank-mailman (input) for mailman id 56681;
 Fri, 18 Dec 2020 20:45:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Dpih=FW=daemonizer.de=maxi@srs-us1.protection.inumbo.net>)
 id 1kqMd2-00018c-IB
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 20:45:32 +0000
Received: from mx1.somlen.de (unknown [2a00:1828:a019::100:0])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 489e8013-2004-4272-bf58-e05c90e9a54b;
 Fri, 18 Dec 2020 20:45:31 +0000 (UTC)
Received: by mx1.somlen.de with ESMTPSA id 48D09C3AF0E;
 Fri, 18 Dec 2020 21:45:30 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 489e8013-2004-4272-bf58-e05c90e9a54b
From: Maximilian Engelhardt <maxi@daemonizer.de>
To: xen-devel@lists.xenproject.org
Cc: Maximilian Engelhardt <maxi@daemonizer.de>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [XEN PATCH 2/3] docs: use predictable ordering in generated documentation
Date: Fri, 18 Dec 2020 21:42:34 +0100
Message-Id: <31df2a1128c15bc1b4c738bf52e29c80982b4170.1608319634.git.maxi@daemonizer.de>
In-Reply-To: <cover.1608319634.git.maxi@daemonizer.de>
References: <cover.1608319634.git.maxi@daemonizer.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When the seq number is equal, sort by the title to get predictable
output ordering. This is useful for reproducible builds.

Signed-off-by: Maximilian Engelhardt <maxi@daemonizer.de>
---
 docs/xen-headers | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/xen-headers b/docs/xen-headers
index 54155632c4..8c434d77e2 100755
--- a/docs/xen-headers
+++ b/docs/xen-headers
@@ -331,7 +331,7 @@ sub output_index () {
 <h2>Starting points</h2>
 <ul>
 END
-    foreach my $ic (sort { $a->{Seq} <=> $b->{Seq} } @incontents) {
+    foreach my $ic (sort { $a->{Seq} <=> $b->{Seq} or $a->{Title} cmp $b->{Title} } @incontents) {
         $o .= "<li><a href=\"$ic->{Href}\">$ic->{Title}</a></li>\n";
     }
     $o .= "</ul>\n";
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Dec 18 20:45:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 20:45:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56682.99288 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqMd8-0001BL-I2; Fri, 18 Dec 2020 20:45:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56682.99288; Fri, 18 Dec 2020 20:45:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqMd8-0001BC-D9; Fri, 18 Dec 2020 20:45:38 +0000
Received: by outflank-mailman (input) for mailman id 56682;
 Fri, 18 Dec 2020 20:45:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Dpih=FW=daemonizer.de=maxi@srs-us1.protection.inumbo.net>)
 id 1kqMd7-00018c-GP
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 20:45:37 +0000
Received: from mx1.somlen.de (unknown [2a00:1828:a019::100:0])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a4face0c-83fd-4ebf-a058-7acf17fb493a;
 Fri, 18 Dec 2020 20:45:32 +0000 (UTC)
Received: by mx1.somlen.de with ESMTPSA id E10F8C3AF0D;
 Fri, 18 Dec 2020 21:45:29 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a4face0c-83fd-4ebf-a058-7acf17fb493a
From: Maximilian Engelhardt <maxi@daemonizer.de>
To: xen-devel@lists.xenproject.org
Cc: Maximilian Engelhardt <maxi@daemonizer.de>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [XEN PATCH 1/3] xen/arch/x86: don't insert timestamp when SOURCE_DATE_EPOCH is defined
Date: Fri, 18 Dec 2020 21:42:33 +0100
Message-Id: <5f715df2808dcd24b9fab95cd02522d16daba86c.1608319634.git.maxi@daemonizer.de>
In-Reply-To: <cover.1608319634.git.maxi@daemonizer.de>
References: <cover.1608319634.git.maxi@daemonizer.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

By default a timestamp gets added to the xen efi binary. Unfortunately
ld doesn't seem to provide a way to set a custom date, such as one taken
from SOURCE_DATE_EPOCH, so set a zero value for the timestamp (option
--no-insert-timestamp) if SOURCE_DATE_EPOCH is defined. This makes
reproducible builds possible.

This is an alternative to the patch suggested in [1]. This patch only
omits the timestamp when SOURCE_DATE_EPOCH is defined.

[1] https://lists.xenproject.org/archives/html/xen-devel/2020-10/msg02161.html

Signed-off-by: Maximilian Engelhardt <maxi@daemonizer.de>
---
 xen/arch/x86/Makefile | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index 8f2180485b..863aed043f 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -184,6 +184,12 @@ EFI_LDFLAGS += --major-image-version=$(XEN_VERSION)
 EFI_LDFLAGS += --minor-image-version=$(XEN_SUBVERSION)
 EFI_LDFLAGS += --major-os-version=2 --minor-os-version=0
 EFI_LDFLAGS += --major-subsystem-version=2 --minor-subsystem-version=0
+# It seems ld unfortunately can't set a custom timestamp, so add a zero value
+# for the timestamp (option --no-insert-timestamp) if SOURCE_DATE_EPOCH is
+# defined to make reproducible builds possible.
+ifdef SOURCE_DATE_EPOCH
+EFI_LDFLAGS += --no-insert-timestamp
+endif
 
 $(TARGET).efi: VIRT_BASE = 0x$(shell $(NM) efi/relocs-dummy.o | sed -n 's, A VIRT_START$$,,p')
 $(TARGET).efi: ALT_BASE = 0x$(shell $(NM) efi/relocs-dummy.o | sed -n 's, A ALT_START$$,,p')
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Dec 18 20:54:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 20:54:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56689.99299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqMlS-0002Lm-Dr; Fri, 18 Dec 2020 20:54:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56689.99299; Fri, 18 Dec 2020 20:54:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqMlS-0002Lf-An; Fri, 18 Dec 2020 20:54:14 +0000
Received: by outflank-mailman (input) for mailman id 56689;
 Fri, 18 Dec 2020 20:54:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Dpih=FW=daemonizer.de=maxi@srs-us1.protection.inumbo.net>)
 id 1kqMlQ-0002La-JY
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 20:54:12 +0000
Received: from mx1.somlen.de (unknown [2a00:1828:a019::100:0])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7026208b-057f-4e68-afaf-70403a55dbd9;
 Fri, 18 Dec 2020 20:54:11 +0000 (UTC)
Received: by mx1.somlen.de with ESMTPSA id 9BD54C3AF0F;
 Fri, 18 Dec 2020 21:45:30 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7026208b-057f-4e68-afaf-70403a55dbd9
From: Maximilian Engelhardt <maxi@daemonizer.de>
To: xen-devel@lists.xenproject.org
Cc: Maximilian Engelhardt <maxi@daemonizer.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [XEN PATCH 3/3] docs: set date to SOURCE_DATE_EPOCH if available
Date: Fri, 18 Dec 2020 21:42:35 +0100
Message-Id: <23352f4835ae58c5cae6f425d5a8378f3d694055.1608319634.git.maxi@daemonizer.de>
In-Reply-To: <cover.1608319634.git.maxi@daemonizer.de>
References: <cover.1608319634.git.maxi@daemonizer.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Use the solution described in [1] to replace the call to the 'date'
command with an invocation that uses SOURCE_DATE_EPOCH if it is set.
This is needed for reproducible builds.

[1] https://reproducible-builds.org/docs/source-date-epoch/
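[Editorial note: the fallback chain used in the hunk below can be exercised directly in a shell. This sketch uses the timestamp from this series' Message-Id as an example epoch; GNU date takes `-d "@EPOCH"`, BSD/macOS date takes `-r EPOCH`, and the final plain invocation is the last resort.]

```shell
# The date(1) fallback chain from the patch, runnable standalone.
SOURCE_DATE_EPOCH=1608319634    # example value only
DATE_FMT='+%Y-%m-%d'
date -u -d "@$SOURCE_DATE_EPOCH" "$DATE_FMT" 2>/dev/null \
  || date -u -r "$SOURCE_DATE_EPOCH" "$DATE_FMT" 2>/dev/null \
  || date -u "$DATE_FMT"
# → 2020-12-18 regardless of build machine or timezone (-u pins UTC)
```

The point of the chain is that the same SOURCE_DATE_EPOCH yields the same date string on both GNU and BSD userlands, so the generated docs are byte-identical across build hosts.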

Signed-off-by: Maximilian Engelhardt <maxi@daemonizer.de>
---
 docs/Makefile | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/docs/Makefile b/docs/Makefile
index 8de1efb6f5..ac6792ff7c 100644
--- a/docs/Makefile
+++ b/docs/Makefile
@@ -3,7 +3,13 @@ include $(XEN_ROOT)/Config.mk
 -include $(XEN_ROOT)/config/Docs.mk
 
 VERSION		:= $(shell $(MAKE) -C $(XEN_ROOT)/xen --no-print-directory xenversion)
-DATE		:= $(shell date +%Y-%m-%d)
+
+DATE_FMT	:= +%Y-%m-%d
+ifdef SOURCE_DATE_EPOCH
+DATE		:= $(shell date -u -d "@$(SOURCE_DATE_EPOCH)" "$(DATE_FMT)" 2>/dev/null || date -u -r "$(SOURCE_DATE_EPOCH)" "$(DATE_FMT)" 2>/dev/null || date -u "$(DATE_FMT)")
+else
+DATE		:= $(shell date "$(DATE_FMT)")
+endif
 
 DOC_ARCHES      := arm x86_32 x86_64
 MAN_SECTIONS    := 1 5 7 8
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Dec 18 21:04:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 21:04:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56695.99317 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqMv7-0003UJ-JZ; Fri, 18 Dec 2020 21:04:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56695.99317; Fri, 18 Dec 2020 21:04:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqMv7-0003UC-Gd; Fri, 18 Dec 2020 21:04:13 +0000
Received: by outflank-mailman (input) for mailman id 56695;
 Fri, 18 Dec 2020 21:04:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Dpih=FW=daemonizer.de=maxi@srs-us1.protection.inumbo.net>)
 id 1kqMv7-0003U7-4Y
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 21:04:13 +0000
Received: from mx1.somlen.de (unknown [2a00:1828:a019::100:0])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3e145b88-3f89-4536-879d-b3ffebf4cdcc;
 Fri, 18 Dec 2020 21:04:11 +0000 (UTC)
Received: by mx1.somlen.de with ESMTPSA id B30FEC3AF0B;
 Fri, 18 Dec 2020 21:45:24 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e145b88-3f89-4536-879d-b3ffebf4cdcc
From: Maximilian Engelhardt <maxi@daemonizer.de>
To: xen-devel@lists.xenproject.org
Cc: Maximilian Engelhardt <maxi@daemonizer.de>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [XEN PATCH 0/3] Improvements for reproducible builds
Date: Fri, 18 Dec 2020 21:42:32 +0100
Message-Id: <cover.1608319634.git.maxi@daemonizer.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

While working on a reproducible build of xen for Debian, I came up with
the following three patches, which are needed to build xen reproducibly
there. Reproducible builds are useful for verifying that a binary has
actually been built from the corresponding source.

The first patch is an extension of [1] that omits the timestamp only if
SOURCE_DATE_EPOCH is defined. Patch two fixes an ordering issue in the
generated documentation, and the last patch makes the documentation use
the date from SOURCE_DATE_EPOCH when it is available.

[1] https://lists.xenproject.org/archives/html/xen-devel/2020-10/msg02161.html

Maximilian Engelhardt (3):
  xen/arch/x86: don't insert timestamp when SOURCE_DATE_EPOCH is defined
  docs: use predictable ordering in generated documentation
  docs: set date to SOURCE_DATE_EPOCH if available

 docs/Makefile         | 8 +++++++-
 docs/xen-headers      | 2 +-
 xen/arch/x86/Makefile | 6 ++++++
 3 files changed, 14 insertions(+), 2 deletions(-)

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Dec 18 21:15:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 21:15:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56699.99329 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqN5g-0004Zm-KA; Fri, 18 Dec 2020 21:15:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56699.99329; Fri, 18 Dec 2020 21:15:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqN5g-0004Zf-HB; Fri, 18 Dec 2020 21:15:08 +0000
Received: by outflank-mailman (input) for mailman id 56699;
 Fri, 18 Dec 2020 21:15:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qCeU=FW=vps.thesusis.net=psusi@srs-us1.protection.inumbo.net>)
 id 1kqN5f-0004Za-SW
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 21:15:07 +0000
Received: from vps.thesusis.net (unknown [34.202.238.73])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5022b878-a54b-47f3-98f0-642aa90b78ac;
 Fri, 18 Dec 2020 21:15:07 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by vps.thesusis.net (Postfix) with ESMTP id F006328B05;
 Fri, 18 Dec 2020 16:15:06 -0500 (EST)
Received: from vps.thesusis.net ([127.0.0.1])
 by localhost (vps.thesusis.net [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id zjEvRrHH1fWO; Fri, 18 Dec 2020 16:15:06 -0500 (EST)
Received: by vps.thesusis.net (Postfix, from userid 1000)
 id BF74928AFD; Fri, 18 Dec 2020 16:15:06 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5022b878-a54b-47f3-98f0-642aa90b78ac
References: <87h7oudcbx.fsf@vps.thesusis.net> <CAHD1Q_zcruQ6KVHApvhb=0+mG0m80T+tmg1UzjQBki8j+aR51A@mail.gmail.com> <87czzcdtir.fsf@vps.thesusis.net> <CAHD1Q_z+WW36rfr1RAFYKjU5bocA90OonBmSKECRnpacvWyPmQ@mail.gmail.com> <873604p1i6.fsf@vps.thesusis.net>
User-agent: mu4e 1.5.7; emacs 26.3
From: Phillip Susi <phill@thesusis.net>
To: "Guilherme G. Piccoli" <guilherme.piccoli@canonical.com>
Cc: kexec mailing list <kexec@lists.infradead.org>, xen-devel@lists.xenproject.org
Subject: Re: kexec not working in xen domU?
Date: Fri, 18 Dec 2020 15:59:02 -0500
In-reply-to: <873604p1i6.fsf@vps.thesusis.net>
Message-ID: <877dpevmsl.fsf@vps.thesusis.net>
MIME-Version: 1.0
Content-Type: text/plain


Phillip Susi writes:

> I tried with -s and it didn't help.  So far I tried it originally on my
> Ubuntu 20.04 amazon vps, then on my debian testing ( linux 5.9.0 ) on my
> local xen server.  I'll try building the latest upstream kernel and
> kexec tomorrow.

I built the latest kexec-tools and Linux kernel from git today and got
the same results.  Is there a minimum version of Xen required for this
to work?  I have no idea what Amazon is running, but my server has 4.9.


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 21:51:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 21:51:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56714.99359 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqNel-00009K-RF; Fri, 18 Dec 2020 21:51:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56714.99359; Fri, 18 Dec 2020 21:51:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqNel-00009D-NR; Fri, 18 Dec 2020 21:51:23 +0000
Received: by outflank-mailman (input) for mailman id 56714;
 Fri, 18 Dec 2020 21:51:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqNek-000095-2P; Fri, 18 Dec 2020 21:51:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqNej-0000GJ-QB; Fri, 18 Dec 2020 21:51:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqNej-00056g-C8; Fri, 18 Dec 2020 21:51:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kqNej-00023q-Bh; Fri, 18 Dec 2020 21:51:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QE5GFTP+abvn9EhbbMeH9DmbuyHRFPG9Sab9dWDgDwU=; b=gsnDrZQDid7EE02B2E43qj+lPV
	zied7AUkWcZTlXDfzyyrRloapKVR0eRoPj/dBK6k9DJubjxPx6BYtJdf+qIAMgZD8Kx/N7FtXV32T
	dH21nnVqdklf8u0psOc5zpKN554jElGhPrJbTVQSE03MO4WW2wykk4nroa6tdwnF0aB0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157662-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157662: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
X-Osstest-Versions-This:
    ovmf=e6ae24e1d676bb2bdc0fc715b49b04908f41fc10
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 18 Dec 2020 21:51:21 +0000

flight 157662 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157662/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157345

version targeted for testing:
 ovmf                 e6ae24e1d676bb2bdc0fc715b49b04908f41fc10
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z    9 days
Failing since        157348  2020-12-09 15:39:39 Z    9 days   53 attempts
Testing same since   157612  2020-12-16 21:09:14 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Ard Biesheuvel <ard.biesheuvel@arm.com>
  Baraneedharan Anbazhagan <anbazhagan@hp.com>
  Baraneedharan Anbazhagan <anbazhgan@hp.com>
  Bret Barkelew <Bret.Barkelew@microsoft.com>
  Chen, Christine <Yuwei.Chen@intel.com>
  Fan Wang <fan.wang@intel.com>
  James Bottomley <jejb@linux.ibm.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Michael D Kinney <michael.d.kinney@intel.com>
  Michael Kubacki <michael.kubacki@microsoft.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sheng Wei <w.sheng@intel.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Star Zeng <star.zeng@intel.com>
  Ting Ye <ting.ye@intel.com>
  Yuwei Chen <yuwei.chen@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 699 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 22:15:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 22:15:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56725.99387 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqO1i-0002KP-Sq; Fri, 18 Dec 2020 22:15:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56725.99387; Fri, 18 Dec 2020 22:15:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqO1i-0002KI-Pj; Fri, 18 Dec 2020 22:15:06 +0000
Received: by outflank-mailman (input) for mailman id 56725;
 Fri, 18 Dec 2020 22:15:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqO1h-0002KA-Ra; Fri, 18 Dec 2020 22:15:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqO1h-0000ev-Jv; Fri, 18 Dec 2020 22:15:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqO1h-0005tr-ED; Fri, 18 Dec 2020 22:15:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kqO1h-0000vJ-Dh; Fri, 18 Dec 2020 22:15:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wjCAIgmNnfBPOrWSHoUzJSt1qNztc0MFsI06BJJPTlA=; b=NOBOx+DaXBJHnUTAGWqLbHQJFF
	OIQsDxTBzp6bCAcb0UqvfsdLm/bk54oAyTDyvkwW15TK1S2OshwNuG69osLrdN5qRtvmDdGR8Sl2m
	z11vGS8sw/v7ACIAsIVeAnhNoYL8dMwtSnN/LcsH8TkiMcW0a3D8lncwj7LEXODYEVh0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157696-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157696: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=357db96a66e47e609c3b14768f1062e13eedbd93
X-Osstest-Versions-That:
    xen=7a3b691a8f3aa7720eecaab0e7bd090aa392885a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 18 Dec 2020 22:15:05 +0000

flight 157696 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157696/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  357db96a66e47e609c3b14768f1062e13eedbd93
baseline version:
 xen                  7a3b691a8f3aa7720eecaab0e7bd090aa392885a

Last test of basis   157656  2020-12-17 23:02:14 Z    0 days
Failing since        157668  2020-12-18 13:00:30 Z    0 days    3 attempts
Testing same since   157696  2020-12-18 19:01:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   7a3b691a8f..357db96a66  357db96a66e47e609c3b14768f1062e13eedbd93 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 22:21:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 22:21:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56733.99402 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqO7c-0003Mc-Ku; Fri, 18 Dec 2020 22:21:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56733.99402; Fri, 18 Dec 2020 22:21:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqO7c-0003MV-Gb; Fri, 18 Dec 2020 22:21:12 +0000
Received: by outflank-mailman (input) for mailman id 56733;
 Fri, 18 Dec 2020 22:21:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iVAa=FW=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1kqO7b-0003MQ-RB
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 22:21:11 +0000
Received: from aserp2130.oracle.com (unknown [141.146.126.79])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0decb3a2-4fa6-49b0-9da1-4c517e926ff2;
 Fri, 18 Dec 2020 22:21:08 +0000 (UTC)
Received: from pps.filterd (aserp2130.oracle.com [127.0.0.1])
 by aserp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0BIM9oPV136457;
 Fri, 18 Dec 2020 22:20:55 GMT
Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71])
 by aserp2130.oracle.com with ESMTP id 35ckcbvpbb-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Fri, 18 Dec 2020 22:20:55 +0000
Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1])
 by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0BIMBNlk088681;
 Fri, 18 Dec 2020 22:20:55 GMT
Received: from aserv0121.oracle.com (aserv0121.oracle.com [141.146.126.235])
 by aserp3030.oracle.com with ESMTP id 35d7est623-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 18 Dec 2020 22:20:54 +0000
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
 by aserv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 0BIMKojC029673;
 Fri, 18 Dec 2020 22:20:51 GMT
Received: from [10.39.216.141] (/10.39.216.141)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Fri, 18 Dec 2020 14:20:50 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0decb3a2-4fa6-49b0-9da1-4c517e926ff2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=LFswpajBY9zLf+LGKOD2926z3byOYd3nuJV0gcHFwuo=;
 b=dZp8L4KJhmFq+UyI0FGpmRS3lCVEgKscMT9ACH0ELMbmMUyWzTJr+xE9LxyT/jzRu9gA
 Tnmfv72gEGpi0Ow48z2ZYAXS3A3Zbv4EQz1nqpSbum6WumKxj3a2UuKzOpcTf3I+0tiJ
 G5AY0CEB21mEQ0t5t+A/nVlBf2e11wyaKV11PBf+ThzK+W8t9lkdCLPgygVY+V+n607V
 xlsm8Qn1F59vBzUHgQbwSQ+zqNKwo1wjQGwahXkekEpBbGGqVilpcp6h/+qQQZkOwQDw
 XL6SMAN+ZMXKNKj7dF2DacOcMtSpIjZEnpGY6cbVkOCb3q7u850x5nKNS+3SbRspA+mK LA== 
Subject: Re: [PATCH] xen: Fix event channel callback via INTX/GSI
To: David Woodhouse <dwmw2@infradead.org>, "x86@kernel.org" <x86@kernel.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
        Juergen Gross <jgross@suse.com>, Paul Durrant <pdurrant@amazon.com>,
        jgrall@amazon.com, karahmed@amazon.de,
        xen-devel <xen-devel@lists.xenproject.org>
References: <5ba658b2d8a2bce63622f5bb8ef8d5e6114276eb.camel@infradead.org>
From: boris.ostrovsky@oracle.com
Organization: Oracle Corporation
Message-ID: <6b6544ac-06b3-2525-aed9-39015715f71d@oracle.com>
Date: Fri, 18 Dec 2020 17:20:48 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <5ba658b2d8a2bce63622f5bb8ef8d5e6114276eb.camel@infradead.org>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9839 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 phishscore=0 spamscore=0 bulkscore=0
 suspectscore=0 adultscore=0 mlxscore=0 mlxlogscore=999 malwarescore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2012180151
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9839 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 mlxlogscore=999
 priorityscore=1501 mlxscore=0 suspectscore=0 adultscore=0 phishscore=0
 malwarescore=0 impostorscore=0 lowpriorityscore=0 clxscore=1011
 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2012180151


On 12/18/20 11:25 AM, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
>
> For a while, event channel notification via the PCI platform device
> has been broken, because we attempt to communicate with xenstore before
> we even have notifications working, in the xs_reset_watches() call
> called from xs_init().
>
> This mostly doesn't matter much in practice because for Xen versions
> below 4.0 we avoid xs_reset_watches() anyway, because xenstore might
> not cope with reading a non-existent key. And newer Xen *does* have
> the vector callback support, so we rarely fall back to INTX/GSI
> delivery.
>
> But those working on Xen and Xen-compatible hypervisor implementations
> do want to test that INTX/GSI delivery works correctly, as there are
> still guest kernels in active use which don't use vector delivery yet.
> So it's useful to have it actually working.


Are there other cases where this would be useful? If it's just for testing a hypervisor under development, I would think that people doing that kind of work are capable of building their own kernel. My concern is mostly about adding yet another boot option that is of interest to very few people, all of whom can easily work around not having it.


>
> To that end, clean up a bit of the mess of xs_init() and xenbus_probe()
> startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
> case, deferring it to be called from xenbus_probe() in the XS_HVM case
> instead.
>
> Then fix up the invocation of xenbus_probe() to happen either from its
> device_initcall if the callback is available early enough, or when the
> callback is finally set up. This means that the hack of calling
> xenbus_probe() from a workqueue after the first interrupt is no longer
> needed.
>
> Add a 'no_vector_callback' command line argument to test it.


This last one should be a separate patch I think.


>
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> ---
>  arch/arm/xen/enlighten.c          |  2 +-
>  arch/x86/xen/enlighten_hvm.c      | 11 ++++-
>  drivers/xen/events/events_base.c  | 10 -----
>  drivers/xen/platform-pci.c        |  7 ++++
>  drivers/xen/xenbus/xenbus.h       |  1 +
>  drivers/xen/xenbus/xenbus_comms.c |  8 ----
>  drivers/xen/xenbus/xenbus_probe.c | 68 ++++++++++++++++++++++++-------
>  include/xen/xenbus.h              |  2 +-
>  8 files changed, 74 insertions(+), 35 deletions(-)
>
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index 60e901cd0de6..5a957a9a0984 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -371,7 +371,7 @@ static int __init xen_guest_init(void)
>  	}
>  	gnttab_init();
>  	if (!xen_initial_domain())
> -		xenbus_probe(NULL);
> +		xenbus_probe();
>  
>  	/*
>  	 * Making sure board specific code will not set up ops for
> diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
> index 9e87ab010c82..a1c07e0c888e 100644
> --- a/arch/x86/xen/enlighten_hvm.c
> +++ b/arch/x86/xen/enlighten_hvm.c
> @@ -188,6 +188,8 @@ static int xen_cpu_dead_hvm(unsigned int cpu)
>         return 0;
>  }
>  
> +static bool no_vector_callback __initdata;
> +
>  static void __init xen_hvm_guest_init(void)
>  {
>  	if (xen_pv_domain())
> @@ -207,7 +209,7 @@ static void __init xen_hvm_guest_init(void)
>  
>  	xen_panic_handler_init();
>  
> -	if (xen_feature(XENFEAT_hvm_callback_vector))
> +	if (!no_vector_callback && xen_feature(XENFEAT_hvm_callback_vector))
>  		xen_have_vector_callback = 1;
>  
>  	xen_hvm_smp_init();
> @@ -233,6 +235,13 @@ static __init int xen_parse_nopv(char *arg)
>  }
>  early_param("xen_nopv", xen_parse_nopv);
>  
> +static __init int xen_parse_no_vector_callback(char *arg)
> +{
> +	no_vector_callback = true;
> +	return 0;
> +}
> +early_param("no_vector_callback", xen_parse_no_vector_callback);
> +
>  bool __init xen_hvm_need_lapic(void)
>  {
>  	if (xen_pv_domain())
> diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
> index 6038c4c35db5..bbebe248b726 100644
> --- a/drivers/xen/events/events_base.c
> +++ b/drivers/xen/events/events_base.c
> @@ -2010,16 +2010,6 @@ static struct irq_chip xen_percpu_chip __read_mostly = {
>  	.irq_ack		= ack_dynirq,
>  };
>  
> -int xen_set_callback_via(uint64_t via)
> -{
> -	struct xen_hvm_param a;
> -	a.domid = DOMID_SELF;
> -	a.index = HVM_PARAM_CALLBACK_IRQ;
> -	a.value = via;
> -	return HYPERVISOR_hvm_op(HVMOP_set_param, &a);
> -}
> -EXPORT_SYMBOL_GPL(xen_set_callback_via);
> -
>  #ifdef CONFIG_XEN_PVHVM
>  /* Vector callbacks are better than PCI interrupts to receive event
>   * channel notifications because we can receive vector callbacks on any
> diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c
> index dd911e1ff782..5c3015a90a73 100644
> --- a/drivers/xen/platform-pci.c
> +++ b/drivers/xen/platform-pci.c
> @@ -132,6 +132,13 @@ static int platform_pci_probe(struct pci_dev *pdev,
>  			dev_warn(&pdev->dev, "request_irq failed err=%d\n", ret);
>  			goto out;
>  		}
> +		/*
> +		 * It doesn't strictly *have* to run on CPU0 but it sure
> +		 * as hell better process the event channel ports delivered
> +		 * to CPU0.
> +		 */
> +		irq_set_affinity(pdev->irq, cpumask_of(0));
> +


Is the concern here that it won't be handled at all?


And is this related to the issue this patch is addressing?


>  		callback_via = get_callback_via(pdev);
>  		ret = xen_set_callback_via(callback_via);
>  		if (ret) {
> diff --git a/drivers/xen/xenbus/xenbus.h b/drivers/xen/xenbus/xenbus.h
> index 5f5b8a7d5b80..05bbda51103f 100644
> --- a/drivers/xen/xenbus/xenbus.h
> +++ b/drivers/xen/xenbus/xenbus.h
> @@ -113,6 +113,7 @@ int xenbus_probe_node(struct xen_bus_type *bus,
>  		      const char *type,
>  		      const char *nodename);
>  int xenbus_probe_devices(struct xen_bus_type *bus);
> +void xenbus_probe(void);
>  
>  void xenbus_dev_changed(const char *node, struct xen_bus_type *bus);
>  
> diff --git a/drivers/xen/xenbus/xenbus_comms.c b/drivers/xen/xenbus/xenbus_comms.c
> index eb5151fc8efa..e5fda0256feb 100644
> --- a/drivers/xen/xenbus/xenbus_comms.c
> +++ b/drivers/xen/xenbus/xenbus_comms.c
> @@ -57,16 +57,8 @@ DEFINE_MUTEX(xs_response_mutex);
>  static int xenbus_irq;
>  static struct task_struct *xenbus_task;
>  
> -static DECLARE_WORK(probe_work, xenbus_probe);
> -
> -
>  static irqreturn_t wake_waiting(int irq, void *unused)
>  {
> -	if (unlikely(xenstored_ready == 0)) {
> -		xenstored_ready = 1;
> -		schedule_work(&probe_work);
> -	}
> -
>  	wake_up(&xb_waitq);
>  	return IRQ_HANDLED;
>  }
> diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
> index 38725d97d909..876f381b100a 100644
> --- a/drivers/xen/xenbus/xenbus_probe.c
> +++ b/drivers/xen/xenbus/xenbus_probe.c
> @@ -682,29 +682,63 @@ void unregister_xenstore_notifier(struct notifier_block *nb)
>  }
>  EXPORT_SYMBOL_GPL(unregister_xenstore_notifier);
>  
> -void xenbus_probe(struct work_struct *unused)
> +void xenbus_probe(void)
>  {
>  	xenstored_ready = 1;
>  
> +	/*
> +	 * In the HVM case, xenbus_init() deferred its call to
> +	 * xs_init() in case callbacks were not operational yet.
> +	 * So do it now.
> +	 */
> +	if (xen_store_domain_type == XS_HVM)
> +		xs_init();
> +
>  	/* Notify others that xenstore is up */
>  	blocking_notifier_call_chain(&xenstore_chain, 0, NULL);
>  }
> -EXPORT_SYMBOL_GPL(xenbus_probe);
>  
>  static int __init xenbus_probe_initcall(void)
>  {
> -	if (!xen_domain())
> -		return -ENODEV;
> -
> -	if (xen_initial_domain() || xen_hvm_domain())
> -		return 0;
> +	/*
> +	 * Probe XenBus here in the XS_PV case, and also XS_HVM unless we
> +	 * need to wait for the platform PCI device to come up, which is
> +	 * the (XEN_PVHVM && !xen_have_vector_callback) case.
> +	 */
> +	if (xen_store_domain_type == XS_PV ||
> +	    (xen_store_domain_type == XS_HVM &&
> +	     (!IS_ENABLED(CONFIG_XEN_PVHVM) || xen_have_vector_callback)))
> +		xenbus_probe();
>  
> -	xenbus_probe(NULL);
>  	return 0;
>  }
> -
>  device_initcall(xenbus_probe_initcall);
>  
> +int xen_set_callback_via(uint64_t via)
> +{
> +	struct xen_hvm_param a;
> +	int ret;
> +
> +	a.domid = DOMID_SELF;
> +	a.index = HVM_PARAM_CALLBACK_IRQ;
> +	a.value = via;
> +
> +	ret = HYPERVISOR_hvm_op(HVMOP_set_param, &a);
> +	if (ret)
> +		return ret;
> +
> +	/*
> +	 * If xenbus_probe_initcall() deferred the xenbus_probe()
> +	 * due to the callback not functioning yet, we can do it now.
> +	 */
> +	if (!xenstored_ready && xen_store_domain_type == XS_HVM &&
> +	    IS_ENABLED(CONFIG_XEN_PVHVM) && !xen_have_vector_callback)


I'd create an is_callback_ready() (or something with a better name) helper.


> +		xenbus_probe();
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(xen_set_callback_via);
> +
>  /* Set up event channel for xenstored which is run as a local process
>   * (this is normally used only in dom0)
>   */
> @@ -817,11 +851,17 @@ static int __init xenbus_init(void)
>  		break;
>  	}
>  
> -	/* Initialize the interface to xenstore. */
> -	err = xs_init();
> -	if (err) {
> -		pr_warn("Error initializing xenstore comms: %i\n", err);
> -		goto out_error;
> +	/*
> +	 * HVM domains may not have a functional callback yet. In that
> +	 * case let xs_init() be called from xenbus_probe(), which will
> +	 * get invoked at an appropriate time.
> +	 */
> +	if (xen_store_domain_type != XS_HVM) {


Can we delay xs_init() for !XS_HVM as well? In other words, wait until after the PCI platform device has been probed (on HVM) and then call xs_init() for everyone.


-boris


> +		err = xs_init();
> +		if (err) {
> +			pr_warn("Error initializing xenstore comms: %i\n", err);
> +			goto out_error;
> +		}
>  	}
>  
>  	if ((xen_store_domain_type != XS_LOCAL) &&
> diff --git a/include/xen/xenbus.h b/include/xen/xenbus.h
> index 5a8315e6d8a6..61202c83d560 100644
> --- a/include/xen/xenbus.h
> +++ b/include/xen/xenbus.h
> @@ -183,7 +183,7 @@ void xs_suspend_cancel(void);
>  
>  struct work_struct;
>  
> -void xenbus_probe(struct work_struct *);
> +void xenbus_probe(void);
>  
>  #define XENBUS_IS_ERR_READ(str) ({			\
>  	if (!IS_ERR(str) && strlen(str) == 0) {		\


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 22:46:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 22:46:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56746.99445 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqOVb-0005d2-4b; Fri, 18 Dec 2020 22:45:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56746.99445; Fri, 18 Dec 2020 22:45:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqOVb-0005cv-1I; Fri, 18 Dec 2020 22:45:59 +0000
Received: by outflank-mailman (input) for mailman id 56746;
 Fri, 18 Dec 2020 22:45:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqOVZ-0005cn-Nc; Fri, 18 Dec 2020 22:45:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqOVZ-00019j-HD; Fri, 18 Dec 2020 22:45:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqOVZ-0006p6-5k; Fri, 18 Dec 2020 22:45:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kqOVZ-0004yk-5F; Fri, 18 Dec 2020 22:45:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=id0rbiu31YnCn5OWby0K05G6Tkz0XDvH9VNv4ZKdulU=; b=jf57QDj9CPa5uljeJ9xTo1wncT
	tRY2yqc20j2BqD0Q49hYDhFn4FaOUWCGXFEwhelhcEE/dfD31+26dfgruYlZBD4nzPK9k8ZuEKRbc
	zO/6FqvXx7MK7kDiCVgZTV5fabxjAu6w3CY1+qLR3JUnqhm8TIe43hQbZW/+MFy2VxEM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157655-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157655: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=641723f78d3a0b1982e1cd2ef37d8d877cfe542d
X-Osstest-Versions-That:
    xen=ac6a0af3870ba0f7ffb16af3e41827b0a53f88b0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 18 Dec 2020 22:45:57 +0000

flight 157655 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157655/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157617
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157617
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157617
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157617
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157617
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157617
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157617
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157617
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157617
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157617
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157617
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  641723f78d3a0b1982e1cd2ef37d8d877cfe542d
baseline version:
 xen                  ac6a0af3870ba0f7ffb16af3e41827b0a53f88b0

Last test of basis   157617  2020-12-17 00:26:59 Z    1 days
Testing same since   157655  2020-12-17 21:03:55 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Juergen Gross <jgross@suse.com>
  Luca Fancellu <luca.fancellu@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   ac6a0af387..641723f78d  641723f78d3a0b1982e1cd2ef37d8d877cfe542d -> master


From xen-devel-bounces@lists.xenproject.org Fri Dec 18 23:39:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 18 Dec 2020 23:39:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56753.99460 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqPL6-0002PI-Bv; Fri, 18 Dec 2020 23:39:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56753.99460; Fri, 18 Dec 2020 23:39:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqPL6-0002PB-8M; Fri, 18 Dec 2020 23:39:12 +0000
Received: by outflank-mailman (input) for mailman id 56753;
 Fri, 18 Dec 2020 23:39:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fP3M=FW=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kqPL5-0002P6-2g
 for xen-devel@lists.xenproject.org; Fri, 18 Dec 2020 23:39:11 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1017ef90-350a-4de9-a0c5-8a122d09e37a;
 Fri, 18 Dec 2020 23:39:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1017ef90-350a-4de9-a0c5-8a122d09e37a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608334749;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=jNHKbYTTBr5uWlZ+nqwtHXf5IXemodP6GeLyGxYrL+M=;
  b=SWTI8tGtBx0bkUvcBy7v+Eq04WoFTNBnJMZWxlf6UY2XBPHHdsWLz6+o
   4qOxdPCs0jq1aCFBNmX5wowjYzF1Z2857PlIfHrRYERw2t2VLfT5yMqOL
   VC00LE/g8M4mh7zJXh48npBor2yoiWad7+DotIUclHVXFb2YjoJssriUI
   8=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: uXSwno8/MTAbO3X2QdFfay/uiXzNeBGegxgSext7SNv35qDrbTOgM1ByyhMeCALLa/lJJmg6pt
 sRkrYvP57DdHzp+ikkzarTTrYYFkRdjRXgSGo40Cgs4D04taCQFApEXUmkzVFAalQc8J5iKMG0
 WdWXoqp/rY+HbR9sqvajclGW7WRP9L/Mq4SW9QesoJDiAjrKtb62fR/vZ+XtmqOH8WShXmnCQ4
 kKji7sZyAka8z+OuDFyE9NfFv433vM2oj8c4KF4zXfwQSuosv/5BTKbIoHo1BYQY1cJ9iw0nqn
 MzM=
X-SBRS: 5.2
X-MesageID: 33594741
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,431,1599537600"; 
   d="scan'208";a="33594741"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, <hanetzer@startmail.com>
Subject: [PATCH] xen/Kconfig: Correct the NR_CPUS description
Date: Fri, 18 Dec 2020 23:38:42 +0000
Message-ID: <20201218233842.5277-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The description "physical CPUs" is especially wrong, as it implies the number
of sockets, which tops out at 8 on all but the very biggest servers.

NR_CPUS is the number of logical entities the scheduler can use.

Reported-by: hanetzer@startmail.com
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: hanetzer@startmail.com
---
 xen/arch/Kconfig | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/xen/arch/Kconfig b/xen/arch/Kconfig
index 1954d1c5c1..d144d4c8d3 100644
--- a/xen/arch/Kconfig
+++ b/xen/arch/Kconfig
@@ -1,11 +1,17 @@
 
 config NR_CPUS
-	int "Maximum number of physical CPUs"
+	int "Maximum number of CPUs"
 	range 1 4095
 	default "256" if X86
 	default "8" if ARM && RCAR3
 	default "4" if ARM && QEMU
 	default "4" if ARM && MPSOC
 	default "128" if ARM
-	---help---
-	  Specifies the maximum number of physical CPUs which Xen will support.
+	help
+	  Controls the build-time size of various arrays and bitmaps
+	  associated with multiple-cpu management.  It is the upper bound of
+	  the number of logical entities the scheduler can run code on.
+
+	  For CPU cores which support Simultaneous Multi-Threading or similar
+	  technologies, this is the number of logical threads which Xen will
+	  support.
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Sat Dec 19 03:11:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Dec 2020 03:11:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56770.99478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqSeS-0005sm-Aj; Sat, 19 Dec 2020 03:11:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56770.99478; Sat, 19 Dec 2020 03:11:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqSeS-0005sf-6t; Sat, 19 Dec 2020 03:11:24 +0000
Received: by outflank-mailman (input) for mailman id 56770;
 Sat, 19 Dec 2020 03:11:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqSeQ-0005sX-Tl; Sat, 19 Dec 2020 03:11:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqSeQ-0004Nc-MQ; Sat, 19 Dec 2020 03:11:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqSeQ-0007e5-FY; Sat, 19 Dec 2020 03:11:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kqSeQ-0001og-F4; Sat, 19 Dec 2020 03:11:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LgrysxTM+SSbnq5PS+Mux2VGbmYxZjN0WQJSmr3ZHt0=; b=6+8CL6JOJZlJ6EeBoX6QFIE1er
	pO1OKKLEPcjQf0b1wcm3wi0oy3QGiRVU+fTD9xNHnMLkOUxCOLXSxzWKPowanppfa323Dg0SWyvOi
	/bA8hgMPh7OIoF0kU/l99ZpXv6oe48RrecBm3BgMNkjkMcLiCP19IQNaIat3rLMrCX1Y=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157659-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157659: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=d64c6f96ba86bd8b97ed8d6762a8c8cc1770d214
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 19 Dec 2020 03:11:22 +0000

flight 157659 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157659/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 linux                d64c6f96ba86bd8b97ed8d6762a8c8cc1770d214
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  140 days
Failing since        152366  2020-08-01 20:49:34 Z  139 days  241 attempts
Testing same since   157659  2020-12-18 01:34:02 Z    1 days    1 attempts

------------------------------------------------------------
4263 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 941934 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Dec 19 04:39:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Dec 2020 04:39:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56781.99498 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqU1C-0005hC-DG; Sat, 19 Dec 2020 04:38:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56781.99498; Sat, 19 Dec 2020 04:38:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqU1C-0005h5-A3; Sat, 19 Dec 2020 04:38:58 +0000
Received: by outflank-mailman (input) for mailman id 56781;
 Sat, 19 Dec 2020 04:38:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqU1B-0005gx-Az; Sat, 19 Dec 2020 04:38:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqU1B-0005y1-3b; Sat, 19 Dec 2020 04:38:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqU1A-0004Cg-Qh; Sat, 19 Dec 2020 04:38:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kqU1A-0000vT-QE; Sat, 19 Dec 2020 04:38:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=L0E4qhDzDCDL2HQnm43BB2cwt60CpGtlh/auawv4HJo=; b=p0GLdeGHWJXqJCt4uaREapBrQA
	nsg0YybQ2dNZSmtSEZyEnR9ANXOZuTKJJWgZPviesz1Pp22qJJ+yunbS/2PG9IQasEkoPwK7e8dqa
	bLsrzYDxiV1AU1otYfK8OCcTgpvVtwGFeVmtolWnfXOR18rmeAHRJo0blFeys52/NBW0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157664-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 157664: regressions - FAIL
X-Osstest-Failures:
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    xen-4.13-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.13-testing:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    xen-4.13-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=10c7c213bef26274684798deb3e351a6756046d2
X-Osstest-Versions-That:
    xen=b5302273e2c51940172400486644636f2f4fc64a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 19 Dec 2020 04:38:56 +0000

flight 157664 xen-4.13-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157664/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157135

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 157629 pass in 157664
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10     fail pass in 157629
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat  fail pass in 157629

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157135
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157135
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157135
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157135
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157135
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157135
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157135
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157135
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157135
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157135
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157135
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  10c7c213bef26274684798deb3e351a6756046d2
baseline version:
 xen                  b5302273e2c51940172400486644636f2f4fc64a

Last test of basis   157135  2020-12-01 15:06:11 Z   17 days
Testing same since   157563  2020-12-15 13:36:28 Z    3 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Harsha Shamsundara Havanur <havanur@amazon.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 743 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Dec 19 06:32:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Dec 2020 06:32:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56791.99520 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqVmK-0000sP-Hp; Sat, 19 Dec 2020 06:31:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56791.99520; Sat, 19 Dec 2020 06:31:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqVmK-0000sI-Eh; Sat, 19 Dec 2020 06:31:44 +0000
Received: by outflank-mailman (input) for mailman id 56791;
 Sat, 19 Dec 2020 06:31:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9pMo=FX=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kqVmI-0000sD-Ef
 for xen-devel@lists.xenproject.org; Sat, 19 Dec 2020 06:31:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b185061e-9cfc-4b47-8b35-55b13241d1e0;
 Sat, 19 Dec 2020 06:31:37 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D5D98ACC4;
 Sat, 19 Dec 2020 06:31:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b185061e-9cfc-4b47-8b35-55b13241d1e0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608359496; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=W7It98Z0GUNNQl0L2v0ZRaWFiumpgvCVR6+DzhT4F3M=;
	b=mOHEYBPDHlpDsNSxc4JotGLnoUNOhKzH8M9LGngwgAK5QrkicMt6V2FAeTLYrZhxHNbm0g
	OW36HhK3pRLCtR7v7ZClx8626x+UDRl+i4lFHGrJOR1DrFNKwGvHakwNZLg8EEek/ozJZ+
	HHRYT3bIpLlmbCnmrpa6Z0eFmHgZXwE=
From: Juergen Gross <jgross@suse.com>
To: torvalds@linux-foundation.org
Cc: linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: [GIT PULL] xen: branch for v5.11-rc1
Date: Sat, 19 Dec 2020 07:31:36 +0100
Message-Id: <20201219063136.5828-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.11-rc1b-tag

xen: branch for v5.11-rc1

It contains some minor cleanup patches and a small series disentangling some
Xen-related Kconfig options.

Thanks.

Juergen

 arch/x86/include/asm/xen/page.h |  2 +-
 arch/x86/xen/Kconfig            | 38 ++++++++++++++++++++++----------------
 arch/x86/xen/p2m.c              | 12 +-----------
 drivers/block/xen-blkfront.c    |  1 +
 drivers/xen/Makefile            |  2 +-
 drivers/xen/manage.c            |  1 +
 6 files changed, 27 insertions(+), 29 deletions(-)

Gustavo A. R. Silva (2):
      xen-blkfront: Fix fall-through warnings for Clang
      xen/manage: Fix fall-through warnings for Clang

Jason Andryuk (3):
      xen: Remove Xen PVH/PVHVM dependency on PCI
      xen: Kconfig: nest Xen guest options
      xen: Kconfig: remove X86_64 depends from XEN_512GB

Qinglang Miao (1):
      x86/xen: Convert to DEFINE_SHOW_ATTRIBUTE

Tom Rix (1):
      xen: remove trailing semicolon in macro definition


From xen-devel-bounces@lists.xenproject.org Sat Dec 19 06:33:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Dec 2020 06:33:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56795.99531 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqVnz-00010a-UY; Sat, 19 Dec 2020 06:33:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56795.99531; Sat, 19 Dec 2020 06:33:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqVnz-00010T-RW; Sat, 19 Dec 2020 06:33:27 +0000
Received: by outflank-mailman (input) for mailman id 56795;
 Sat, 19 Dec 2020 06:33:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9pMo=FX=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kqVny-00010M-4f
 for xen-devel@lists.xenproject.org; Sat, 19 Dec 2020 06:33:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id de771369-ac21-4781-aef6-1041c1d3f20b;
 Sat, 19 Dec 2020 06:33:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6332AADD9;
 Sat, 19 Dec 2020 06:33:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: de771369-ac21-4781-aef6-1041c1d3f20b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608359604; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=8ozngFhgmS777p9tz0FGxM52e8tUsZ00/PYbgWlaZ7s=;
	b=UFJBWA7j6Ejj4TcF+x2CrLYBZkw7f4BrWukhpHREEorlO11cyVvHutaTMkblhntOYRUs5L
	U+L/aUpm0t0eRsaBK23p++JA9br+O2CKiTWNzuqF/xN9eA3Z360FaA+Kc5ow4ezUBpvwYP
	ytlzWXfDgGe5mD4KNtixBlyAOydj4MQ=
Subject: Re: [PATCH] xen: Kconfig: remove X86_64 depends from XEN_512GB
To: Jason Andryuk <jandryuk@gmail.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, x86@kernel.org,
 "H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
References: <20201216140838.16085-1-jandryuk@gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <1c706e2d-9c2c-ce61-0552-3286a131207d@suse.com>
Date: Sat, 19 Dec 2020 07:33:23 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201216140838.16085-1-jandryuk@gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="tWt7SaIonpoY7Qaxn29tKzpTRhWMcURaT"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--tWt7SaIonpoY7Qaxn29tKzpTRhWMcURaT
Content-Type: multipart/mixed; boundary="cMzv9ylFyjh8w7eZ8Bw9H0S3Ns4jhjwMq";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jason Andryuk <jandryuk@gmail.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, x86@kernel.org,
 "H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
Message-ID: <1c706e2d-9c2c-ce61-0552-3286a131207d@suse.com>
Subject: Re: [PATCH] xen: Kconfig: remove X86_64 depends from XEN_512GB
References: <20201216140838.16085-1-jandryuk@gmail.com>
In-Reply-To: <20201216140838.16085-1-jandryuk@gmail.com>

--cMzv9ylFyjh8w7eZ8Bw9H0S3Ns4jhjwMq
Content-Type: multipart/mixed;
 boundary="------------FF7502206F528227E5AC90CA"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------FF7502206F528227E5AC90CA
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 16.12.20 15:08, Jason Andryuk wrote:
> commit bfda93aee0ec ("xen: Kconfig: nest Xen guest options")
> accidentally re-added X86_64 as a dependency to XEN_512GB.  It was
> originally removed in commit a13f2ef168cb ("x86/xen: remove 32-bit Xen
> PV guest support").  Remove it again.
>
> Fixes: bfda93aee0ec ("xen: Kconfig: nest Xen guest options")
> Reported-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Applied to xen/tip.git for-linus-5.11


Juergen

--------------FF7502206F528227E5AC90CA
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------FF7502206F528227E5AC90CA--

--cMzv9ylFyjh8w7eZ8Bw9H0S3Ns4jhjwMq--

--tWt7SaIonpoY7Qaxn29tKzpTRhWMcURaT--


From xen-devel-bounces@lists.xenproject.org Sat Dec 19 08:05:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Dec 2020 08:05:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56809.99549 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqXF4-00023o-E7; Sat, 19 Dec 2020 08:05:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56809.99549; Sat, 19 Dec 2020 08:05:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqXF4-00023h-B3; Sat, 19 Dec 2020 08:05:30 +0000
Received: by outflank-mailman (input) for mailman id 56809;
 Sat, 19 Dec 2020 08:05:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v/w4=FX=casper.srs.infradead.org=batv+5a3b6fea02aaeb467610+6327+infradead.org+dwmw2@srs-us1.protection.inumbo.net>)
 id 1kqXF1-00023c-KX
 for xen-devel@lists.xenproject.org; Sat, 19 Dec 2020 08:05:28 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5f58918a-2e00-4062-8302-c7503edecca4;
 Sat, 19 Dec 2020 08:05:23 +0000 (UTC)
Received: from dyn-227.woodhou.se ([90.155.92.227])
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kqXEg-0007Na-Qp; Sat, 19 Dec 2020 08:05:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f58918a-2e00-4062-8302-c7503edecca4
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Mime-Version:Content-Type:References:
	In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=U35lWhbX+MQaMgtOhHPhLopmPRezupUV1K70jjV6Mbk=; b=FHTFHEQD5irytrhe35uh+Rj1Hh
	sOd59ZytfNE3AV47/syXMj5AfHohH7lnXwcM2cEEO9IfRKdVDVlKs+j3h+2NnkFyETdcI+JkINHMQ
	A+neRlGSFLZr1mHgYhmrFhvec/o+p/rRl4sBQ+Fed50SRRgOzMLotoP4jLD68V8RPsFeUl9/MTE4F
	ji9DDwAoMVoVJ/gmI6iBtZrhLnykN/AWUBiJi+g7cbsXR0tPwPRktdqKpI9Y64ZMFgo5QsDXADxyb
	7xFH6/Hx7sQy11i4CQFH5bp3GF6Wl/EB7+YQqqmu7RBOmVVDmhBQJ0rXu8NIs0M1t57Euaiq2op8g
	VB2qcW2w==;
Message-ID: <a02cb64ba5680c0f2076da714d06b8704e3411c2.camel@infradead.org>
Subject: Re: [PATCH] xen: Fix event channel callback via INTX/GSI
From: David Woodhouse <dwmw2@infradead.org>
To: boris.ostrovsky@oracle.com, "x86@kernel.org" <x86@kernel.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Juergen Gross
 <jgross@suse.com>,  Paul Durrant <pdurrant@amazon.com>, jgrall@amazon.com,
 karahmed@amazon.de, xen-devel <xen-devel@lists.xenproject.org>
Date: Sat, 19 Dec 2020 08:05:06 +0000
In-Reply-To: <6b6544ac-06b3-2525-aed9-39015715f71d@oracle.com>
References: <5ba658b2d8a2bce63622f5bb8ef8d5e6114276eb.camel@infradead.org>
	 <6b6544ac-06b3-2525-aed9-39015715f71d@oracle.com>
Content-Type: multipart/signed; micalg="sha-256";
	protocol="application/x-pkcs7-signature";
	boundary="=-sTDwrdaNLhqUO8bcSRdy"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html


--=-sTDwrdaNLhqUO8bcSRdy
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2020-12-18 at 17:20 -0500, boris.ostrovsky@oracle.com wrote:
> Are there other cases where this would be useful? If it's just to
> test a hypervisor under development I would think that people who are
> doing that kind of work are capable of building their own kernel. My
> concern is mostly about having yet another boot option that is of
> interest to very few people who can easily work around not having it.

For hypervisor testing we can just set the Xen major version number in
CPUID to 3, and that stops xs_reset_watches() from doing anything.

cf. https://lkml.org/lkml/2017/4/10/266

Karim ripped out all this INTX code in 2017 because it was broken, and
subsequently put it back because it *was* working for older versions of
Xen, due to that "coincidence". The conclusion back then was that if it
was put back it should at least *work* consistently, and he was going
to send a patch "shortly". This is that patch :)

> > Add a 'no_vector_callback' command line argument to test it.
>
> This last one should be a separate patch I think.

Could do.

> > +		/*
> > +		 * It doesn't strictly *have* to run on CPU0 but it sure
> > +		 * as hell better process the event channel ports delivered
> > +		 * to CPU0.
> > +		 */
> > +		irq_set_affinity(pdev->irq, cpumask_of(0));
> > +
>
>
> Is the concern here that it won't be handled at all?

Indeed, the events don't get handled at all if the PCI interrupt lands
on a CPU other than zero. When the handler calls
xen_hvm_evtchn_do_upcall() that processes pending events for whichever
CPU it happens to be running on, and *not* the events pending for CPU0.
And the boot stops in xs_reset_watches() waiting (without a timeout)
for an interrupt that never gets processed, as before.

> And is this related to the issue this patch is addressing?

It is required to fix the event channel callback via INTX/GSI, yes.
Although it could reasonably be lifted out into a separate patch too.

> >  static int __init xenbus_probe_initcall(void)
> >  {
> > -	if (!xen_domain())
> > -		return -ENODEV;
> > -
> > -	if (xen_initial_domain() || xen_hvm_domain())
> > -		return 0;
> > +	/*
> > +	 * Probe XenBus here in the XS_PV case, and also XS_HVM unless we
> > +	 * need to wait for the platform PCI device to come up, which is
> > +	 * the (XEN_PVHVM && !xen_have_vector_callback) case.
> > +	 */
> > +	if (xen_store_domain_type == XS_PV ||
> > +	    (xen_store_domain_type == XS_HVM &&
> > +	     (!IS_ENABLED(CONFIG_XEN_PVHVM) || xen_have_vector_callback)))
> > +		xenbus_probe();
> >
> > -	xenbus_probe(NULL);
> >  	return 0;
> >  }
> > -
> >  device_initcall(xenbus_probe_initcall);
> >
> > +int xen_set_callback_via(uint64_t via)
> > +{
> > +	struct xen_hvm_param a;
> > +	int ret;
> > +
> > +	a.domid = DOMID_SELF;
> > +	a.index = HVM_PARAM_CALLBACK_IRQ;
> > +	a.value = via;
> > +
> > +	ret = HYPERVISOR_hvm_op(HVMOP_set_param, &a);
> > +	if (ret)
> > +		return ret;
> > +
> > +	/*
> > +	 * If xenbus_probe_initcall() deferred the xenbus_probe()
> > +	 * due to the callback not functioning yet, we can do it now.
> > +	 */
> > +	if (!xenstored_ready && xen_store_domain_type == XS_HVM &&
> > +	    IS_ENABLED(CONFIG_XEN_PVHVM) && !xen_have_vector_callback)
>
>
> I'd create an is_callback_ready() (or something with a better name)
> helper.

I pondered that, and indeed considered dropping the HVM/vector conditions
and deciding literally based on whether xen_set_callback_via() had been
called at all (and not too early). But it looks like there are cases
where Arm doesn't call xen_set_callback_via() at all, and it made more
sense to me to leave xen_set_callback_via() sitting right here and keep
those two conditions within a page of each other, with suitable
comments. I think that's probably easier to understand and work with
than a "helper".

>
> > +		xenbus_probe();
> > +
> > +	return ret;
> > +}
> > +EXPORT_SYMBOL_GPL(xen_set_callback_via);
> > +
> >  /* Set up event channel for xenstored which is run as a local process
> >   * (this is normally used only in dom0)
> >   */
> > @@ -817,11 +851,17 @@ static int __init xenbus_init(void)
> >  		break;
> >  	}
> > =20
> > -	/* Initialize the interface to xenstore. */
> > -	err =3D xs_init();
> > -	if (err) {
> > -		pr_warn("Error initializing xenstore comms: %i\n", err);
> > -		goto out_error;
> > +	/*
> > +	 * HVM domains may not have a functional callback yet. In that
> > +	 * case let xs_init() be called from xenbus_probe(), which will
> > +	 * get invoked at an appropriate time.
> > +	 */
> > +	if (xen_store_domain_type != XS_HVM) {
>
>
> Can we delay xs_init() for !XS_HVM as well? In other words wait until
> after PCI platform device has been probed (on HVM) and then call
> xs_init() for everyone.

We're half-way there already, because xenbus_probe() *does* happen
later as a device_initcall, and I've just made it call xs_init().

We could make it avoid calling xs_init() from xenbus_init() in the
XS_HVM *and* XS_PV cases fairly easily, and let xenbus_probe() do it.

But right now xenbus_probe() doesn't run for the other cases, so
there'd have to be a mode where it *only* calls xs_init() and doesn't
do the notifier chain. That seems like more churn than was needed, so I
didn't do it.


--=-sTDwrdaNLhqUO8bcSRdy--



From xen-devel-bounces@lists.xenproject.org Sat Dec 19 08:23:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Dec 2020 08:23:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56814.99562 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqXVm-00045Y-Ug; Sat, 19 Dec 2020 08:22:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56814.99562; Sat, 19 Dec 2020 08:22:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqXVm-00045R-Qw; Sat, 19 Dec 2020 08:22:46 +0000
Received: by outflank-mailman (input) for mailman id 56814;
 Sat, 19 Dec 2020 08:22:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqXVm-00045J-AC; Sat, 19 Dec 2020 08:22:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqXVm-000228-0h; Sat, 19 Dec 2020 08:22:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqXVl-0007go-MP; Sat, 19 Dec 2020 08:22:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kqXVl-0003hO-Ls; Sat, 19 Dec 2020 08:22:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ZiKw1reKyn9IySrnPivIuw39n07bH6YKPrM6nCywCYA=; b=50GBoRgNslHaA/I6UTtTbUKHMF
	veNdBwoaXX1qqt2yN3JUIjqZ6lFbBf422M+abF9j6INGUbVnyE8cU7Pie0iGsdeYa+Zrdr7oTxEpl
	+98YJWvu39ATR2fE1KzIvD5Ri8UcZvcgmFB09wSm7Mybw2MSLR1kG3Hn2nKOs/anw35c=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157666-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 157666: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-libvirt-xsm:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:heisenbug
    linux-5.4:test-amd64-coresched-amd64-xl:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-amd64-libvirt:guest-start.2:fail:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=8a866bdbbac227a99b0b37e03679908642f58aec
X-Osstest-Versions-That:
    linux=2bff021f53b211386abad8cd661e6bb38d0fd524
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 19 Dec 2020 08:22:45 +0000

flight 157666 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157666/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157431

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-xsm 20 guest-start/debian.repeat fail in 157638 pass in 157666
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 157638 pass in 157666
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail pass in 157638
 test-amd64-coresched-amd64-xl 22 guest-start/debian.repeat fail pass in 157638
 test-amd64-amd64-libvirt     21 guest-start.2              fail pass in 157638

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157431
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157431
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157431
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157431
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157431
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157431
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157431
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157431
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157431
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157431
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157431
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                8a866bdbbac227a99b0b37e03679908642f58aec
baseline version:
 linux                2bff021f53b211386abad8cd661e6bb38d0fd524

Last test of basis   157431  2020-12-11 12:40:36 Z    7 days
Testing same since   157603  2020-12-16 10:11:52 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Morton <akpm@linux-foundation.org>
  Andy Lutomirski <luto@kernel.org>
  Arnd Bergmann <arnd@arndb.de>
  Arvind Sankar <nivedita@alum.mit.edu>
  Bean Huo <beanhuo@micron.com>
  Borislav Petkov <bp@suse.de>
  Can Guo <cang@codeaurora.org>
  Chris Chiu <chiu@endlessos.org>
  Coiby Xu <coiby.xu@gmail.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Fangrui Song <maskray@google.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Georgi Djakov <georgi.djakov@linaro.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Hans de Goede <hdegoede@redhat.com>
  Hao Si <si.hao@zte.com.cn>
  Heiko Stuebner <heiko@sntech.de>
  Jakub Kicinski <kuba@kernel.org>
  Johannes Berg <johannes.berg@intel.com>
  Jon Hunter <jonathanh@nvidia.com>
  Kalle Valo <kvalo@codeaurora.org>
  Li Yang <leoyang.li@nxp.com>
  Libo Chen <libo.chen@oracle.com>
  Lijun Pan <ljp@linux.ibm.com>
  Lin Chen <chen.lin5@zte.com.cn>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Luca Coelho <luciano.coelho@intel.com>
  Manasi Navare <manasi.d.navare@intel.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <maz@kernel.org>
  Mark Brown <broonie@kernel.org>
  Markus Reichl <m.reichl@fivetechno.de>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Verevkin <me@maxverevkin.tk>
  Michael Ellerman <mpe@ellerman.id.au>
  Miles Chen <miles.chen@mediatek.com>
  Minchan Kim <minchan@kernel.org>
  Mordechay Goodstein <mordechay.goodstein@intel.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Nick Desaulniers <ndesaulniers@google.com>
  Pankaj Sharma <pankj.sharma@samsung.com>
  Ran Wang <ran.wang_1@nxp.com>
  Randy Dunlap <rdunlap@infradead.org>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Sara Sharon <sara.sharon@intel.com>
  Sasha Levin <sashal@kernel.org>
  Scott Wood <oss@buserror.net>
  Shuah Khan <skhan@linuxfoundation.org>
  Shung-Hsi Yu <shung-hsi.yu@suse.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Thierry Reding <treding@nvidia.com>
  Thomas Gleixner <tglx@linutronix.de>
  Timo Witte <timo.witte@gmail.com>
  Tom Lendacky <thomas.lendacky@amd.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vineet Gupta <vgupta@synopsys.com>
  Xu Qiang <xuqiang36@huawei.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Zhan Liu <zliua@micron.com>
  Zhen Lei <thunder.leizhen@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1160 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Dec 19 15:12:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Dec 2020 15:12:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56876.99589 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqdu3-0001wf-Vz; Sat, 19 Dec 2020 15:12:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56876.99589; Sat, 19 Dec 2020 15:12:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqdu3-0001wY-RS; Sat, 19 Dec 2020 15:12:15 +0000
Received: by outflank-mailman (input) for mailman id 56876;
 Sat, 19 Dec 2020 15:12:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqdu2-0001wQ-Rq; Sat, 19 Dec 2020 15:12:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqdu2-0000NJ-8L; Sat, 19 Dec 2020 15:12:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqdu1-0006lQ-Qo; Sat, 19 Dec 2020 15:12:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kqdu1-0007Vi-QM; Sat, 19 Dec 2020 15:12:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3ma8JcDwa9W5bvXbyUHcNmxtU9p+NCorebX4bkZdIFI=; b=I6jZgqOKj7QGKGN6C+oCEiSwi/
	tph1GBpc2QEewDNMMyPeduA9LPB9SzKYMH2PbJl+ysrvkwhfFLj+Hri+rEemNghT5qjfTdpLPIKue
	LNKuncdjsG82iZmKZWl12HTRJQZ99VRxp4Pxz7b1L/Cigm4iFZBNg1B48N/s30dxptpM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157670-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157670: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a05f8ecd88f15273d033b6f044b850a8af84a5b8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 19 Dec 2020 15:12:13 +0000

flight 157670 qemu-mainline real [real]
flight 157722 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157670/
http://logs.test-lab.xenproject.org/osstest/logs/157722/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  121 days
Failing since        152659  2020-08-21 14:07:39 Z  120 days  248 attempts
Testing same since   157670  2020-12-18 13:57:58 Z    1 days    1 attempts

------------------------------------------------------------
323 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 79306 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Dec 19 15:57:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Dec 2020 15:57:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56884.99604 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqebJ-0005zH-HK; Sat, 19 Dec 2020 15:56:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56884.99604; Sat, 19 Dec 2020 15:56:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqebJ-0005zA-Ch; Sat, 19 Dec 2020 15:56:57 +0000
Received: by outflank-mailman (input) for mailman id 56884;
 Sat, 19 Dec 2020 15:56:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqebI-0005z2-8Q; Sat, 19 Dec 2020 15:56:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqebH-00014i-Rw; Sat, 19 Dec 2020 15:56:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqebH-0002dc-Hr; Sat, 19 Dec 2020 15:56:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kqebH-0005eJ-Eu; Sat, 19 Dec 2020 15:56:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+a/9OFxwKncrhEgn7c+qOq1gfvhCPa6PFdVG6s7enl8=; b=0ywcFW81a9Ur0Htq4Jm7DKEeyz
	d9W65JEVW5znOD7+fv6OeIM0sUJjzEcy/BpvhQIb0ie3u5przgh09jlUwy3H5PM6UFgaUvstlfrUg
	9mtvh1tyRgs09542yFD3U9FKe00Wi1/CbIkrJKfvGVDyi4rF1F/+lX26o3dV4TUl5S6A=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157715-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157715: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=bed50bcbbb2796aa88f1c85a2424fe9bd7944f89
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 19 Dec 2020 15:56:55 +0000

flight 157715 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157715/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              bed50bcbbb2796aa88f1c85a2424fe9bd7944f89
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  162 days
Failing since        151818  2020-07-11 04:18:52 Z  161 days  156 attempts
Testing same since   157715  2020-12-19 04:19:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 33734 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Dec 19 17:21:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Dec 2020 17:21:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56952.99625 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqfuU-0006nI-6w; Sat, 19 Dec 2020 17:20:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56952.99625; Sat, 19 Dec 2020 17:20:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqfuU-0006nB-2I; Sat, 19 Dec 2020 17:20:50 +0000
Received: by outflank-mailman (input) for mailman id 56952;
 Sat, 19 Dec 2020 17:20:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqfuS-0006n3-5k; Sat, 19 Dec 2020 17:20:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqfuR-0002zt-Mk; Sat, 19 Dec 2020 17:20:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqfuR-0008ND-C1; Sat, 19 Dec 2020 17:20:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kqfuR-0004He-Bb; Sat, 19 Dec 2020 17:20:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QTmBYP/3LXWA+xUeB0wXKuETlvijdF5gGAWnDeHbkVg=; b=c/O6scLOy8BLWfPL59PiF2cg0G
	+4UwpF7BlaF46/JQGu9yG1ajbe0o0pC/WOsOX9sFWaiYy2PaS0rqc1GZTPof6r1QBtet55SypfSVy
	fFVAJJ5R08+/RwGgHZelGA7JOwJYZBDfpZ/45p0ZfTOM/eCkaLd39jHlHs4y9mdUzkVA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157673-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 157673: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=ad844aa352559a8b1f36e391a27d9d7dbddbdc36
X-Osstest-Versions-That:
    xen=d17a5d5d2774601f8137984a3ee23ec28eb0793c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 19 Dec 2020 17:20:47 +0000

flight 157673 xen-4.14-testing real [real]
flight 157724 xen-4.14-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157673/
http://logs.test-lab.xenproject.org/osstest/logs/157724/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 157724-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157564
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157564
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157564
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157564
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157564
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157564
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157564
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157564
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157564
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157564
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157564
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  ad844aa352559a8b1f36e391a27d9d7dbddbdc36
baseline version:
 xen                  d17a5d5d2774601f8137984a3ee23ec28eb0793c

Last test of basis   157564  2020-12-15 13:36:35 Z    4 days
Testing same since   157650  2020-12-17 17:06:20 Z    2 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   d17a5d5d27..ad844aa352  ad844aa352559a8b1f36e391a27d9d7dbddbdc36 -> stable-4.14


From xen-devel-bounces@lists.xenproject.org Sat Dec 19 17:21:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Dec 2020 17:21:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56957.99639 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqfvG-0006tH-GW; Sat, 19 Dec 2020 17:21:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56957.99639; Sat, 19 Dec 2020 17:21:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqfvG-0006tA-DN; Sat, 19 Dec 2020 17:21:38 +0000
Received: by outflank-mailman (input) for mailman id 56957;
 Sat, 19 Dec 2020 17:21:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqfvF-0006t1-Tg; Sat, 19 Dec 2020 17:21:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqfvF-000326-Q6; Sat, 19 Dec 2020 17:21:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqfvF-0008Rz-IE; Sat, 19 Dec 2020 17:21:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kqfvF-0005H9-Hl; Sat, 19 Dec 2020 17:21:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=c39ERMJg27at1TNLWsQAbbQCeDiN5USe8+BylwQG9k8=; b=ZqgAhL6IhcCv6TsbO4HAWCaY+H
	Hb4xeheUAZ25AfGEC5AGxrcGwyOxb6tpaA6N+XT7podS84eDnjMrqbqWJ7M7tIR4wunSIKSdw7HMs
	fDp+Jn8zo2nEnUltow/zleumk4lBmfmt34rosPwsTIVpUESKfV4OrrJSFuCx951i1nuU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157708-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157708: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=1159fc3230aee02acc60aa245ce047217fd8b87e
X-Osstest-Versions-That:
    ovmf=f95e80d832e923046c92cd6f0b8208cec147138e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 19 Dec 2020 17:21:37 +0000

flight 157708 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157708/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 1159fc3230aee02acc60aa245ce047217fd8b87e
baseline version:
 ovmf                 f95e80d832e923046c92cd6f0b8208cec147138e

Last test of basis   157345  2020-12-09 12:40:46 Z   10 days
Failing since        157348  2020-12-09 15:39:39 Z   10 days   54 attempts
Testing same since   157708  2020-12-18 22:09:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Ard Biesheuvel <ard.biesheuvel@arm.com>
  Baraneedharan Anbazhagan <anbazhagan@hp.com>
  Baraneedharan Anbazhagan <anbazhgan@hp.com>
  Borghorst, Hendrik via groups.io <hborghor=amazon.de@groups.io>
  Bret Barkelew <Bret.Barkelew@microsoft.com>
  Chen, Christine <Yuwei.Chen@intel.com>
  Fan Wang <fan.wang@intel.com>
  Hendrik Borghorst <hborghor@amazon.de>
  James Bottomley <James.Bottomley@HansenPartnership.com>
  James Bottomley <jejb@linux.ibm.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Marc Moisson-Franckhauser <marc.moisson-franckhauser@arm.com>
  Michael D Kinney <michael.d.kinney@intel.com>
  Michael Kubacki <michael.kubacki@microsoft.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Quan Nguyen <quan@os.amperecomputing.com>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Sean Brogan <sean.brogan@microsoft.com>
  Sheng Wei <w.sheng@intel.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Star Zeng <star.zeng@intel.com>
  Ting Ye <ting.ye@intel.com>
  Yuwei Chen <yuwei.chen@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   f95e80d832..1159fc3230  1159fc3230aee02acc60aa245ce047217fd8b87e -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Dec 19 21:07:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Dec 2020 21:07:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.56984.99664 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqjR6-0002qO-7g; Sat, 19 Dec 2020 21:06:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 56984.99664; Sat, 19 Dec 2020 21:06:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqjR6-0002qH-4a; Sat, 19 Dec 2020 21:06:44 +0000
Received: by outflank-mailman (input) for mailman id 56984;
 Sat, 19 Dec 2020 21:06:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NQRv=FX=kernel.org=pr-tracker-bot@srs-us1.protection.inumbo.net>)
 id 1kqjR5-0002qC-A9
 for xen-devel@lists.xenproject.org; Sat, 19 Dec 2020 21:06:43 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5df17a51-13b5-443b-96df-25a4400f6199;
 Sat, 19 Dec 2020 21:06:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5df17a51-13b5-443b-96df-25a4400f6199
Subject: Re: [GIT PULL] xen: branch for v5.11-rc1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1608412001;
	bh=f5biQxtgpOle2K8edxfWw3ThimfyDmYI41acGeS0PZE=;
	h=From:In-Reply-To:References:Date:To:Cc:From;
	b=joPvXhHC0NDC8iOfyqRBKoyqp76EpSOJ3VbLoTBEL/J/UKgnMTI94v7bdrw4vpwJR
	 rjRtKvtbjGHcZZxAAwv/1mKzNV4J4+Bm7/5/zIfOCgO0g2HBdKV1+kYaspNrJ8m4pZ
	 HTfNjZle+5/P9p5VyDAxUlr18p1Ikg4P53QgygxDrCq4vYbfOSkesyUDHa9EVuaynS
	 irS2323/EMxJOHhgqydtVkRjkFW0PfcrFbtPoc4j2mm1rJgIXbGPMX0weU17/Wahnd
	 bcfldX1GVDT9wSJRVykcSp/52i9W6V0G5SxLQ3y0ik5S/Mxjcibucfq9MvFbjbxX1C
	 VJY/HMXlQ35fg==
From: pr-tracker-bot@kernel.org
In-Reply-To: <20201219063136.5828-1-jgross@suse.com>
References: <20201219063136.5828-1-jgross@suse.com>
X-PR-Tracked-List-Id: <linux-kernel.vger.kernel.org>
X-PR-Tracked-Message-Id: <20201219063136.5828-1-jgross@suse.com>
X-PR-Tracked-Remote: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.11-rc1b-tag
X-PR-Tracked-Commit-Id: 6190c0ccaf5dfee845df9c9cd8ad9fdc5856bb41
X-PR-Merge-Tree: torvalds/linux.git
X-PR-Merge-Refname: refs/heads/master
X-PR-Merge-Commit-Id: 3872f516aab34e3adeb7eda43b29c1ecd852cee1
Message-Id: <160841200137.20285.209923367667109463.pr-tracker-bot@kernel.org>
Date: Sat, 19 Dec 2020 21:06:41 +0000
To: Juergen Gross <jgross@suse.com>
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com

The pull request you sent on Sat, 19 Dec 2020 07:31:36 +0100:

> git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.11-rc1b-tag

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/3872f516aab34e3adeb7eda43b29c1ecd852cee1

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


From xen-devel-bounces@lists.xenproject.org Sat Dec 19 23:58:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 19 Dec 2020 23:58:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57007.99753 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqm6z-0002j8-KH; Sat, 19 Dec 2020 23:58:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57007.99753; Sat, 19 Dec 2020 23:58:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqm6z-0002j1-Gz; Sat, 19 Dec 2020 23:58:09 +0000
Received: by outflank-mailman (input) for mailman id 57007;
 Sat, 19 Dec 2020 23:58:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqm6y-0002io-K9; Sat, 19 Dec 2020 23:58:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqm6y-0001Nw-DK; Sat, 19 Dec 2020 23:58:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqm6y-00030A-3r; Sat, 19 Dec 2020 23:58:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kqm6y-0001lC-3M; Sat, 19 Dec 2020 23:58:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4f6cC8IpMYovbHgIvTC1ZGGRBy5wvZQPwKiyww0Jb2Q=; b=UBqSDMeXHOUW4GjM7i7Ib3txIp
	Lyq7bycpHOk/ZiyIhmzqfWI+0pPNx+foBULDb9DBIRdeSFPriYpJ59pu3bpA1bzPe6wpCqU2oMjnY
	3AewrW/VybSBW94taHQtfXX7Au2vuREMPQ14uELHI31ahVVanwJnchzUdyRiPTmcstuA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157710-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157710: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=357db96a66e47e609c3b14768f1062e13eedbd93
X-Osstest-Versions-That:
    xen=641723f78d3a0b1982e1cd2ef37d8d877cfe542d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 19 Dec 2020 23:58:08 +0000

flight 157710 xen-unstable real [real]
flight 157728 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157710/
http://logs.test-lab.xenproject.org/osstest/logs/157728/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 157728-retest

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 157655

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157655
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157655
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157655
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157655
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157655
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157655
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157655
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157655
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157655
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157655
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157655
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  357db96a66e47e609c3b14768f1062e13eedbd93
baseline version:
 xen                  641723f78d3a0b1982e1cd2ef37d8d877cfe542d

Last test of basis   157655  2020-12-17 21:03:55 Z    2 days
Testing same since   157710  2020-12-18 22:47:47 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   641723f78d..357db96a66  357db96a66e47e609c3b14768f1062e13eedbd93 -> master


From xen-devel-bounces@lists.xenproject.org Sun Dec 20 02:30:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 20 Dec 2020 02:30:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57027.99786 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqoUJ-0008Qu-NK; Sun, 20 Dec 2020 02:30:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57027.99786; Sun, 20 Dec 2020 02:30:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqoUJ-0008Ql-GM; Sun, 20 Dec 2020 02:30:23 +0000
Received: by outflank-mailman (input) for mailman id 57027;
 Sun, 20 Dec 2020 02:30:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqoUI-0008Qc-Af; Sun, 20 Dec 2020 02:30:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqoUH-0002sr-V0; Sun, 20 Dec 2020 02:30:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqoUH-0003H0-K6; Sun, 20 Dec 2020 02:30:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kqoUH-0002lq-Hw; Sun, 20 Dec 2020 02:30:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ZU9UegiW8XKHjASr2CQ4n9CnmHOxgvp/bRKMJ2WSuug=; b=af8cTUgU6MNVyOmIUwHJrez1xi
	yDVL3Lps/vAymQpbH+1tEJ9xa642g09fS4QJsqwuD1FyR7GXKeNJgdnfaIU0mXPPL19jwTbc8fzTA
	CEJBWz5T9otFObLlxefEwUKHmdSMq4dby38KI80N9OTLPY89Q8rMzc2ci91A0mkP/GA8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157713-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157713: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=3644e2d2dda78e21edd8f5415b6d7ab03f5f54f3
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 20 Dec 2020 02:30:21 +0000

flight 157713 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157713/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 linux                3644e2d2dda78e21edd8f5415b6d7ab03f5f54f3
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  141 days
Failing since        152366  2020-08-01 20:49:34 Z  140 days  242 attempts
Testing same since   157713  2020-12-19 03:14:11 Z    0 days    1 attempts

------------------------------------------------------------
4281 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 949400 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 20 02:54:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 20 Dec 2020 02:54:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57034.99801 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqorp-0001u8-Nb; Sun, 20 Dec 2020 02:54:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57034.99801; Sun, 20 Dec 2020 02:54:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqorp-0001u1-Ja; Sun, 20 Dec 2020 02:54:41 +0000
Received: by outflank-mailman (input) for mailman id 57034;
 Sun, 20 Dec 2020 02:54:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m++8=FY=gmail.com=kevin.buckley.ecs.vuw.ac.nz@srs-us1.protection.inumbo.net>)
 id 1kqorn-0001tw-QQ
 for xen-devel@lists.xenproject.org; Sun, 20 Dec 2020 02:54:39 +0000
Received: from mail-wr1-x433.google.com (unknown [2a00:1450:4864:20::433])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5b207d17-41db-4c37-b5d5-8589f35b5102;
 Sun, 20 Dec 2020 02:54:38 +0000 (UTC)
Received: by mail-wr1-x433.google.com with SMTP id r3so7201368wrt.2
 for <xen-devel@lists.xenproject.org>; Sat, 19 Dec 2020 18:54:38 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5b207d17-41db-4c37-b5d5-8589f35b5102
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:from:date:message-id:subject:to;
        bh=psbuNFNDTLaA7idW4QnjNXBPSq/YUVA99FKXxDGY9I4=;
        b=WS1/Qvrf8FmUEteFX2bq/v+EhFCgoMqoj4YdcTYhSt385LK28Dm8EjHu8w3YsqgkYD
         VeyoYuzJXXMVb68T+n23JR/MUxQgnomdSbEl+1za91tkugJb9otH2NBOCynPVvkBUywj
         cHfBof7Xv7+lhCRyEfNb3fEUsAOR1YYwFUjE2cX+eVVezaZWkSUOQIxfCotcE3Kaa3oF
         EZutKyYo1aKim6HyUM9Y/F8YnXj3+pJunYT65NNLIMm+R6HN6+1jfhP8jbZagVZrlXlr
         n/RoLdPbmB6uwxua829Ar7O3N7zQr+2Og7tkrD8pHOFvstEgkhhMucLSfpuLQyodaLYG
         ZTbA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:from:date:message-id:subject:to;
        bh=psbuNFNDTLaA7idW4QnjNXBPSq/YUVA99FKXxDGY9I4=;
        b=C/mOgAwXG3NI+5LUDE2IIcWXcDq+eitSmA+F6SPxeuoTx8MjUhDcCv/kpj2IIm/tqA
         Sik8o8IFRoTmph9jAxSQTbevq2dvZREjwrH7b7ljFHYQ+Qj3L9XMgyTcYZwRTPP6QW74
         8XQWs1mtgF0NEGiurngrLpowamczsaETIlqxFhrjqetys0tNgRyWBsna/24d7lEsUWK8
         VYShTffuDLguz9g/+Yh20dCx/ztc/ChzORlvqeHvEkruNAqPz4efz3/uPn1+lDDxOPbM
         ohAUhICB4CSe7z522l8C66rwaCWMHehiVgrxOGxxEYsHlqu7Nqx4Bn9IQFEGEa5tC08v
         ZjFg==
X-Gm-Message-State: AOAM533VHLNFKgHgabRgwxQCxGRwOC67UcUhpfYMN7EGlwoAt/TlVw0g
	OFctewwHXzA3mHWMGAbUch8jItnzzXnRyel2SahA/hMcBpdlYA==
X-Google-Smtp-Source: ABdhPJxa/N2q9fX/Pe93JAhrx22OnFLDErBoyhvbpAGZ24V/n2MWh8+2AdAbUzr7S+CzazgViBaGfHLc139zS93SsDE=
X-Received: by 2002:a5d:5147:: with SMTP id u7mr11983561wrt.114.1608432877567;
 Sat, 19 Dec 2020 18:54:37 -0800 (PST)
MIME-Version: 1.0
From: Kevin Buckley <kevin.buckley.ecs.vuw.ac.nz@gmail.com>
Date: Sun, 20 Dec 2020 10:54:26 +0800
Message-ID: <CABwOO=dLaL-BLf+GDo71_Btq1R8L5=XmofSs+oHE+P-qx+M49A@mail.gmail.com>
Subject: Xen Release 4.14.0: Couple of "all warnings being treated as errors"
 issues and ongoing docs/man issue in make world
To: xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

I'm looking to build Xen 4.14.0 on an LFS 10.0 system, so with GCC 10.2.0.

The "all warnings being treated as errors" issues will, I'm sure, have been
picked up by now, but the issue with the man pages is something
I have been seeing for a while now.

The configure options are a little idiosyncratic, though I can't see why
they would unmask the issues seen here; for completeness:

  ./configure --prefix=/usr    \
  --disable-seabios          \
  --disable-qemu-traditional \
  --disable-rombios          \
  --disable-stubdom

after which,

make PYTHON=/usr/bin/python3 world

first generates these four errors:

libxlu_pci.c:32:18: error: 'func' may be used uninitialized in this
function [-Werror=maybe-uninitialized]
   32 |     pcidev->func = func;
      |     ~~~~~~~~~~~~~^~~~~~
libxlu_pci.c:51:29: note: 'func' was declared here
   51 |     unsigned dom, bus, dev, func, vslot = 0;
      |                             ^~~~
libxlu_pci.c:31:17: error: 'dev' may be used uninitialized in this
function [-Werror=maybe-uninitialized]
   31 |     pcidev->dev = dev;
      |     ~~~~~~~~~~~~^~~~~
libxlu_pci.c:51:24: note: 'dev' was declared here
   51 |     unsigned dom, bus, dev, func, vslot = 0;
      |                        ^~~
libxlu_pci.c:30:17: error: 'bus' may be used uninitialized in this
function [-Werror=maybe-uninitialized]
   30 |     pcidev->bus = bus;
      |     ~~~~~~~~~~~~^~~~~
libxlu_pci.c:51:19: note: 'bus' was declared here
   51 |     unsigned dom, bus, dev, func, vslot = 0;
      |                   ^~~
libxlu_pci.c:29:20: error: 'dom' may be used uninitialized in this
function [-Werror=maybe-uninitialized]
   29 |     pcidev->domain = domain;
      |     ~~~~~~~~~~~~~~~^~~~~~~~
libxlu_pci.c:51:14: note: 'dom' was declared here
   51 |     unsigned dom, bus, dev, func, vslot = 0;
      |              ^~~
cc1: all warnings being treated as errors
make[5]: *** [/usr/src/xen/xen-4.14.0/tools/libxl/../../tools/Rules.mk:216:
libxlu_pci.o] Error 1
make[5]: Leaving directory '/usr/src/xen/xen-4.14.0/tools/libxl'

which I "fixed" by adding

  -Wno-maybe-uninitialized

to the CFLAGS in the tools/libxl Makefile

after which 'make world' then goes on to fail here:

libxl_utils.c: In function 'libxl__prepare_sockaddr_un':
libxl_utils.c:1262:5: error: 'strncpy' specified bound 108 equals
destination size [-Werror=stringop-truncation]
 1262 |     strncpy(un->sun_path, path, sizeof(un->sun_path));
      |     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
make[5]: *** [/usr/src/xen/xen-4.14.0/tools/libxl/../../tools/Rules.mk:216:
libxl_utils.o] Error 1
make[5]: Leaving directory '/usr/src/xen/xen-4.14.0/tools/libxl'
make[4]: *** [/usr/src/xen/xen-4.14.0/tools/../tools/Rules.mk:240:
subdir-install-libxl] Error 2
make[4]: Leaving directory '/usr/src/xen/xen-4.14.0/tools'
make[3]: *** [/usr/src/xen/xen-4.14.0/tools/../tools/Rules.mk:235:
subdirs-install] Error 2
make[3]: Leaving directory '/usr/src/xen/xen-4.14.0/tools'
make[2]: *** [Makefile:72: install] Error 2
make[2]: Leaving directory '/usr/src/xen/xen-4.14.0/tools'
make[1]: *** [Makefile:134: install-tools] Error 2
make[1]: Leaving directory '/usr/src/xen/xen-4.14.0'
make: *** [Makefile:170: world] Error 2

which I fixed by setting

                -Wno-maybe-uninitialized -Wno-stringop-truncation

as the extra CFLAGS in the modified tools/libxl Makefile

After that point, the build gets as far as

make[2]: Leaving directory '/usr/src/xen/xen-4.14.0/tools'
make -C docs install
make[2]: Entering directory '/usr/src/xen/xen-4.14.0/docs'
/usr/bin/pod2man --release=4.14.0 --name=xenhypfs -s 1 -c "Xen"
man/xenhypfs.1.pod man1/xenhypfs.1
Can't write-open man1/xenhypfs.1: No such file or directory at
/usr/bin/pod2man line 69.
make[2]: *** [Makefile:176: man1/xenhypfs.1] Error 2
make[2]: Leaving directory '/usr/src/xen/xen-4.14.0/docs'
make[1]: *** [Makefile:153: install-docs] Error 2
make[1]: Leaving directory '/usr/src/xen/xen-4.14.0'
make: *** [Makefile:170: world] Error 2

and this is the interesting bit.

Firstly, noting that the make is being run from the top-level
docs directory, looking at

pkg xen:xen-4.14.0> ls docs/
INDEX            configure.ac       man/
Makefile         designs/           misc/
README.colo      features/          parse-support-md*
README.remus     figs/              process/
README.source    gen-html-index     specs/
admin-guide/     glossary.rst       support-matrix-generate*
conf.py          guest-guide/       xen-headers*
config.status*   hypervisor-guide/
configure*       index.rst


shows that there isn't a man1 subdir within the source tree.

Furthermore, looking at

pkg xen:xen-4.14.0> ls docs/man/
xen-pci-device-reservations.7.pod  xentop.1.pod
xen-pv-channel.7.pod               xentrace.8.pod
xen-tscmode.7.pod                  xentrace_format.1.pod
xen-vbd-interface.7.pandoc         xl-disk-configuration.5.pod
xen-vtpm.7.pod                     xl-network-configuration.5.pod
xen-vtpmmgr.7.pod                  xl-numa-placement.7.pod
xenhypfs.1.pod                     xl.1.pod
xenstore-chmod.1.pod               xl.1.pod.in
xenstore-ls.1.pod                  xl.cfg.5.pod
xenstore-read.1.pod                xl.cfg.5.pod.in
xenstore-write.1.pod               xl.conf.5.pod
xenstore.1.pod                     xlcpupool.cfg.5.pod

suggests that all of the man-page POD files, which one might
expect, given the make output above, to be in man1, man5,
man7 and man8 subdirs, are instead in the single "man" directory.

I have seen this docs/man failure with 'make world' in a few past
builds and have found, in the past, that splitting the 'make world'
into these two parts, with the seemingly expected dirs created
in between, viz:

make clean

mkdir docs/man1 docs/man5 docs/man7 docs/man8

make  dist

has then seen the build complete.

I am thinking, though, that 'make world' should just work
out of the tarball?

So is there something missing that should be re-arranging the
man page sources into separate manN subdirs, or should the
pod2man command be given different arguments?

Hoping that info is of some use to you,
Kevin


From xen-devel-bounces@lists.xenproject.org Sun Dec 20 03:12:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 20 Dec 2020 03:12:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57039.99813 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqp9J-0003mW-Dh; Sun, 20 Dec 2020 03:12:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57039.99813; Sun, 20 Dec 2020 03:12:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqp9J-0003mP-AQ; Sun, 20 Dec 2020 03:12:45 +0000
Received: by outflank-mailman (input) for mailman id 57039;
 Sun, 20 Dec 2020 03:12:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqp9H-0003mH-Ec; Sun, 20 Dec 2020 03:12:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqp9H-0003dD-6B; Sun, 20 Dec 2020 03:12:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqp9G-0005Lr-Sr; Sun, 20 Dec 2020 03:12:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kqp9G-0004y3-SP; Sun, 20 Dec 2020 03:12:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=X/UWfNI+CEcMr479uGYh0UdGQMhnwUTLB20X0hds+Z0=; b=tFr9++30eP7w1PZlgcBb8uuQUW
	7XBnbgizIsB2Hm7mYIz2N/KXHQhngj7OkWn2Z9LC1yeqbQ0jUPRTuyqnkwE3L7zCK4x25nCzD98sw
	VS5S8sYV9vX8fAOvyI90oLIKtV6HYAalFFOjBiMoaYa3KNe/DCUMkHP/BzJ3aAtuQtUM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157716-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 157716: regressions - FAIL
X-Osstest-Failures:
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=10c7c213bef26274684798deb3e351a6756046d2
X-Osstest-Versions-That:
    xen=b5302273e2c51940172400486644636f2f4fc64a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 20 Dec 2020 03:12:42 +0000

flight 157716 xen-4.13-testing real [real]
flight 157731 xen-4.13-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157716/
http://logs.test-lab.xenproject.org/osstest/logs/157731/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157135

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157135
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157135
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157135
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157135
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157135
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157135
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157135
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157135
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157135
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157135
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157135
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  10c7c213bef26274684798deb3e351a6756046d2
baseline version:
 xen                  b5302273e2c51940172400486644636f2f4fc64a

Last test of basis   157135  2020-12-01 15:06:11 Z   18 days
Testing same since   157563  2020-12-15 13:36:28 Z    4 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Harsha Shamsundara Havanur <havanur@amazon.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 743 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 20 06:42:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 20 Dec 2020 06:42:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57057.99840 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqsQ9-0005g5-K6; Sun, 20 Dec 2020 06:42:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57057.99840; Sun, 20 Dec 2020 06:42:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqsQ9-0005fy-GT; Sun, 20 Dec 2020 06:42:21 +0000
Received: by outflank-mailman (input) for mailman id 57057;
 Sun, 20 Dec 2020 06:42:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqsQ9-0005fq-0I; Sun, 20 Dec 2020 06:42:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqsQ8-0007mY-J3; Sun, 20 Dec 2020 06:42:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqsQ8-0001Eu-9d; Sun, 20 Dec 2020 06:42:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kqsQ8-0005I2-9B; Sun, 20 Dec 2020 06:42:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2SR3cZpBCbhMfRcfNudKlzbMm+t0zpqMZMuHklXTujQ=; b=LI9AtvBbP5SvJAXmvFzi6vjL5z
	7Tmuh+AP5obgNYRuLsYPI/qiLqnyfcEklDyyfAStWQntBbZZBXOCtm/e3RdjBN8iL1luWpivigS5f
	ew5ixraKMxhebVNlD4XyAgfqWxRNYlgK0Y4ITgMYTwoMfHyme/HdK3NVASG8ty/fzoD0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157719-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 157719: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-5.4:test-amd64-i386-libvirt-xsm:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=8a866bdbbac227a99b0b37e03679908642f58aec
X-Osstest-Versions-That:
    linux=2bff021f53b211386abad8cd661e6bb38d0fd524
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 20 Dec 2020 06:42:20 +0000

flight 157719 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157719/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 157431

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-xsm 20 guest-start/debian.repeat fail in 157638 pass in 157719
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 157638 pass in 157719
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail pass in 157638

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157431
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157431
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157431
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157431
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157431
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157431
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157431
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157431
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157431
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157431
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157431
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                8a866bdbbac227a99b0b37e03679908642f58aec
baseline version:
 linux                2bff021f53b211386abad8cd661e6bb38d0fd524

Last test of basis   157431  2020-12-11 12:40:36 Z    8 days
Testing same since   157603  2020-12-16 10:11:52 Z    3 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Morton <akpm@linux-foundation.org>
  Andy Lutomirski <luto@kernel.org>
  Arnd Bergmann <arnd@arndb.de>
  Arvind Sankar <nivedita@alum.mit.edu>
  Bean Huo <beanhuo@micron.com>
  Borislav Petkov <bp@suse.de>
  Can Guo <cang@codeaurora.org>
  Chris Chiu <chiu@endlessos.org>
  Coiby Xu <coiby.xu@gmail.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Fangrui Song <maskray@google.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Georgi Djakov <georgi.djakov@linaro.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Hans de Goede <hdegoede@redhat.com>
  Hao Si <si.hao@zte.com.cn>
  Heiko Stuebner <heiko@sntech.de>
  Jakub Kicinski <kuba@kernel.org>
  Johannes Berg <johannes.berg@intel.com>
  Jon Hunter <jonathanh@nvidia.com>
  Kalle Valo <kvalo@codeaurora.org>
  Li Yang <leoyang.li@nxp.com>
  Libo Chen <libo.chen@oracle.com>
  Lijun Pan <ljp@linux.ibm.com>
  Lin Chen <chen.lin5@zte.com.cn>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Luca Coelho <luciano.coelho@intel.com>
  Manasi Navare <manasi.d.navare@intel.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <maz@kernel.org>
  Mark Brown <broonie@kernel.org>
  Markus Reichl <m.reichl@fivetechno.de>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Verevkin <me@maxverevkin.tk>
  Michael Ellerman <mpe@ellerman.id.au>
  Miles Chen <miles.chen@mediatek.com>
  Minchan Kim <minchan@kernel.org>
  Mordechay Goodstein <mordechay.goodstein@intel.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Nick Desaulniers <ndesaulniers@google.com>
  Pankaj Sharma <pankj.sharma@samsung.com>
  Ran Wang <ran.wang_1@nxp.com>
  Randy Dunlap <rdunlap@infradead.org>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Sara Sharon <sara.sharon@intel.com>
  Sasha Levin <sashal@kernel.org>
  Scott Wood <oss@buserror.net>
  Shuah Khan <skhan@linuxfoundation.org>
  Shung-Hsi Yu <shung-hsi.yu@suse.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Thierry Reding <treding@nvidia.com>
  Thomas Gleixner <tglx@linutronix.de>
  Timo Witte <timo.witte@gmail.com>
  Tom Lendacky <thomas.lendacky@amd.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vineet Gupta <vgupta@synopsys.com>
  Xu Qiang <xuqiang36@huawei.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Zhan Liu <zliua@micron.com>
  Zhen Lei <thunder.leizhen@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1160 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 20 07:51:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 20 Dec 2020 07:51:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57079.99854 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqtV3-0003Nw-UZ; Sun, 20 Dec 2020 07:51:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57079.99854; Sun, 20 Dec 2020 07:51:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqtV3-0003Np-RD; Sun, 20 Dec 2020 07:51:29 +0000
Received: by outflank-mailman (input) for mailman id 57079;
 Sun, 20 Dec 2020 07:51:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqtV2-0003Nh-Jz; Sun, 20 Dec 2020 07:51:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqtV2-0000WB-9J; Sun, 20 Dec 2020 07:51:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqtV2-0004Gq-0K; Sun, 20 Dec 2020 07:51:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kqtV1-0005nB-W6; Sun, 20 Dec 2020 07:51:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pmvZ4rukwmV35Ovw0ZSPZfwSPUqGrwiiwp+ihcN54hY=; b=WNgsajd5zhMxaCpBx/hTxCzc45
	2D6DMGZAL3/pj6S4wm2KW1S2povc5MblPhQJMntkrlgg4v+2eO1KjAcZP4Aa6EwQGPs4n5vHYdNAv
	SZfvw2CaVRRKIPPITLn4y6lIDpIV0zAIMPeJI6jfmO2akZqNy0jivkC3vRyRNCyMv5GQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157726-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157726: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=6932f4bfe552c1704c5715430de6045c78a5b62f
X-Osstest-Versions-That:
    ovmf=1159fc3230aee02acc60aa245ce047217fd8b87e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 20 Dec 2020 07:51:27 +0000

flight 157726 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157726/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 6932f4bfe552c1704c5715430de6045c78a5b62f
baseline version:
 ovmf                 1159fc3230aee02acc60aa245ce047217fd8b87e

Last test of basis   157708  2020-12-18 22:09:58 Z    1 days
Testing same since   157726  2020-12-19 17:23:18 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael D Kinney <michael.d.kinney@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   1159fc3230..6932f4bfe5  6932f4bfe552c1704c5715430de6045c78a5b62f -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sun Dec 20 09:35:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 20 Dec 2020 09:35:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57095.99876 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqv7l-0004Cl-2u; Sun, 20 Dec 2020 09:35:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57095.99876; Sun, 20 Dec 2020 09:35:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqv7k-0004Ce-W8; Sun, 20 Dec 2020 09:35:32 +0000
Received: by outflank-mailman (input) for mailman id 57095;
 Sun, 20 Dec 2020 09:35:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lj5K=FY=durham.ac.uk=m.a.young@srs-us1.protection.inumbo.net>)
 id 1kqv7j-0004CX-M2
 for xen-devel@lists.xenproject.org; Sun, 20 Dec 2020 09:35:31 +0000
Received: from GBR01-CWL-obe.outbound.protection.outlook.com (unknown
 [40.107.11.120]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fcb43b0f-f282-4259-b263-3a5a8ce5fc04;
 Sun, 20 Dec 2020 09:35:29 +0000 (UTC)
Received: from LO0P265MB2906.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:180::8)
 by LO2P265MB2623.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:144::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3676.25; Sun, 20 Dec
 2020 09:35:27 +0000
Received: from LO0P265MB2906.GBRP265.PROD.OUTLOOK.COM
 ([fe80::dd88:994a:a4b4:4d9b]) by LO0P265MB2906.GBRP265.PROD.OUTLOOK.COM
 ([fe80::dd88:994a:a4b4:4d9b%7]) with mapi id 15.20.3676.033; Sun, 20 Dec 2020
 09:35:26 +0000
Received: from broadband.bt.com (2a00:23c6:751d:7701:1f1a:39af:4235:7681) by
 LO4P123CA0062.GBRP123.PROD.OUTLOOK.COM (2603:10a6:600:153::13) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3676.28 via Frontend Transport; Sun, 20 Dec 2020 09:35:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fcb43b0f-f282-4259-b263-3a5a8ce5fc04
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ON0+kPclfg3ySLoHzjquNq6y4DU/GZr7pOaRM6djeU3DoRvm1zECJTuCSf3f/Ufmv/HVO0xpJ4tmbCs8kA92yDAmmAJzZowbLnofxeLVZYn+iLfKs/dOHQMFV8JiBp9ZJc9QcxtNn5OmrgRVLE5B771GgznrByjHlNfrVXTph2PbajTj7rRfgqys36Dvro8Y9akpaY1dbZKipDEHRk94zwAGzmFc+FHR/VJ25D44sM0hW06xFfWpqzr+3DZLPHXDejnncQo9MLLcczvVlZYp0YuyjbjtyU9RJYeS/oAF5z7+6NIEx7iMIHc7bsUW4A0b32fspiPKuGlIQt6k3k3roQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EqVjc4nIVmbGhdVl3Lt+ESrTbsu04q4wHIlaeQojvnQ=;
 b=WdliRdNKlxwK3uxhEK1fHxNVS4oxdEjdUYo0G5xiPp2Ut6uLrHjO8XRiJ9vXWcDniE5avKxl+xS5WNkT5pPKh+Opx9srqcWflJPS3OMrkQvFmThN3pd5AoheQUGJFQRDIO/e4YvmcHBJPKlIN/LWlnqagvUJlZPXVJAYGhfL9tDLzSFN91nxaKLHZSi2cSgZVbjhTl2iTL8aDSyvOqBFGK8byx19qSRMX0CqjQuiaskEdCuttSyZlXibJRaJsmg+SOSX+2/Q98JzzyFiyHxhPN7q4k/8qsI4A2k/uFtwBimV5TapsmM1eJ+b0F84gwnvo2g+0Lp8qY95YBfE29d6Ag==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=durham.ac.uk; dmarc=pass action=none header.from=durham.ac.uk;
 dkim=pass header.d=durham.ac.uk; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=durhamuniversity.onmicrosoft.com;
 s=selector2-durhamuniversity-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EqVjc4nIVmbGhdVl3Lt+ESrTbsu04q4wHIlaeQojvnQ=;
 b=f01YDdnbaOlL8VqEPUxEkqqOtBDDAN8srD8JMsC0XT/DbjIIPZj9Q1akAMmuy5C+ENq1SOkVyVCCqFsEPnG9s/vN3TeazHaSFVy7OuzrKN41+4Nd6y1OfLn2u7np/cSOpPt1rYeT3rn4ktDkirBZ+nciPS6BOLtR1HV83vlhRpo=
Authentication-Results: gmail.com; dkim=none (message not signed)
 header.d=none;gmail.com; dmarc=none action=none header.from=durham.ac.uk;
Date: Sun, 20 Dec 2020 09:35:24 +0000 (GMT)
From: Michael Young <m.a.young@durham.ac.uk>
To: Kevin Buckley <kevin.buckley.ecs.vuw.ac.nz@gmail.com>
cc: xen-devel@lists.xenproject.org
Subject: Re: Xen Release 4.14.0: Couple of "all warnings being treated as
 errors" issues and ongoing docs/man issue in make world
In-Reply-To: <CABwOO=dLaL-BLf+GDo71_Btq1R8L5=XmofSs+oHE+P-qx+M49A@mail.gmail.com>
Message-ID: <cb6233a3-93f7-3b5a-d053-f8cb9f12c4b9@austen3.home>
References: <CABwOO=dLaL-BLf+GDo71_Btq1R8L5=XmofSs+oHE+P-qx+M49A@mail.gmail.com>
Content-Type: text/plain; charset=US-ASCII; format=flowed
X-Originating-IP: [2a00:23c6:751d:7701:1f1a:39af:4235:7681]
X-ClientProxiedBy: LO4P123CA0062.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:153::13) To LO0P265MB2906.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:180::8)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 5740a923-c994-4fea-b8c1-08d8a4ca955e
X-MS-TrafficTypeDiagnostic: LO2P265MB2623:
X-Microsoft-Antispam-PRVS:
	<LO2P265MB26231D07E396CF8A199103FE87C10@LO2P265MB2623.GBRP265.PROD.OUTLOOK.COM>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	g3Blo0IsGz5oqg9X+JPskSdXi9vJjxmuKat93c8elLsiMOgsc9QC7BRyLTN4BiziFHODyPWaKqgAAN+cxM79Yl6YqUIXLC+aLLbnj4LCreCa3A4vt7r1dUnb4M0MgqiXa9rE8ddkTIvl68FNjuoNZog8b353peKQO1oqdpV1lk+l8GR0QCksCfaJxqR/izRuREZUAzkU3uwX6iuS5ZnM2Ce9D2qVDD4xAyC7VN4wRsM2hdyGi7BHp+pSbwAGkVSONwwVyX0Ybs7+jYa9J+5c90IdJfFki8Z9on3xhsGyXGqROtJ1/1S+w4Vg3Tb5BadgRd4cGMD4vstMP+KZI2yAhdMc11aLhKY9u19PqsIMIXBk2YBKFF10aazoQ4azyuIZ7yE6cyB+aylUmEYQ4/L+Zg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:LO0P265MB2906.GBRP265.PROD.OUTLOOK.COM;PTR:;CAT:NONE;SFS:(4636009)(396003)(346002)(39860400002)(366004)(376002)(136003)(4744005)(86362001)(9686003)(5660300002)(8676002)(6512007)(6916009)(52116002)(478600001)(6506007)(66946007)(36756003)(316002)(186003)(786003)(4326008)(6486002)(31696002)(2906002)(66476007)(31686004)(16526019)(8936002)(66556008)(83380400001);DIR:OUT;SFP:1102;
X-MS-Exchange-AntiSpam-MessageData:
	=?us-ascii?Q?qAipcINeq+8F9h7XCw+saGhdNAo6h3UkXOCrdGsBv6Dg6VXQEgk864fwFH6Y?=
 =?us-ascii?Q?uRsYrw3crcFJ4iFs3LqAgs6eBM5IIvdXZZQ0q/r3DMsEHd5dM5mAgTotT3Xx?=
 =?us-ascii?Q?RjtfXXsBaZ3G6kVGa18DZzX3SgaiIA06eE449Fn55MMBDJdmvbRZaO39BWsr?=
 =?us-ascii?Q?wKqlM2DlTI0W3TTiyoS6yVNt5te1heeGj5+MpLI+htWe3yJtz1yDHZ0qtfYi?=
 =?us-ascii?Q?ZlPAVI5tjnxcF+jWDctLyu5V5akpOeA16Xf+elqFu9DlzDlOZkeVYnPeEQor?=
 =?us-ascii?Q?N3njzjTH8VcA9Laq6mHHCxQD0Ueejx62yKiqb4MDbikCihiG7T5VGNgQgo5T?=
 =?us-ascii?Q?Jsa6e810dmj4IEmwxqEz/o0vibHa31CaD+7mvicU2v4BtT7ZhLzU6zk0NXxL?=
 =?us-ascii?Q?Aisd4QSusU4FcR2bmY3J2Su/BVt8R6be0DkmlQiZG4P+6zbNAPiz1LdEdu/z?=
 =?us-ascii?Q?hHYbZqyBTCohSFWMvSys4NcYowmT4VmpBhvEA9llS+Tz/xOmYucC7s6davto?=
 =?us-ascii?Q?CfLwBHKMlaabRjBrGIXFDz63w2LQ8dKXPfhDPm5DayNrY1VJ06mOC/dosT97?=
 =?us-ascii?Q?CBhxVKBHOZh9jUZQArZycXFSlFOyEwojWQhR46pFN4OfU2yItsinUk59ylSm?=
 =?us-ascii?Q?PckW1bxTRhNmnjFV2L+WS+miWY+uV4SGS3cLRjCYD184G4O4ltXDah2J+Z4k?=
 =?us-ascii?Q?VPW7oDSGTVdliG1OvtkFwgJeNYxggOrJJ4AmVcpLWA9mgiKmDGieuxucCZhN?=
 =?us-ascii?Q?L+IUowuqJRFQo59gWNFCy6s+46B0eANh8rBjCf3yuBpG3IYp0GcX4FDdnWxz?=
 =?us-ascii?Q?24DsbnAXT2/nqeRBg/qE/c7up0jop39TKmzX3SKLQN9UjcQlCRc8XGULGBl2?=
 =?us-ascii?Q?yv1gg62BQgoLva1HYYJMFYQSWwTZMA0PlvO+cZ9oCVEi7H2y/8s/XHsHmd7y?=
 =?us-ascii?Q?jLaSH9zpaZYAByQZJckLP4RtsLglSxX5Z9x/O4aV5JOtmYxC3yBbH+AKcy6A?=
 =?us-ascii?Q?rQJuHSX3G0OP7/lL7hHy5nvoFWs/gYYqYOvL1cbymuFfT8T48RJ3oLs7bx6z?=
 =?us-ascii?Q?MnuIMrsx?=
X-OriginatorOrg: durham.ac.uk
X-MS-Exchange-CrossTenant-AuthSource: LO0P265MB2906.GBRP265.PROD.OUTLOOK.COM
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Dec 2020 09:35:26.6270
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 7250d88b-4b68-4529-be44-d59a2d8a6f94
X-MS-Exchange-CrossTenant-Network-Message-Id: 5740a923-c994-4fea-b8c1-08d8a4ca955e
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 5uSrUghxEvFYfqsukVPI0t29vIX3B6EGJ+mYKYwZM4k6zgev4QUcYidSGuHvepJC2zvG63P+k9kLYZW+rD/zXA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: LO2P265MB2623

On Sun, 20 Dec 2020, Kevin Buckley wrote:

> Looking to build 4.14.0 on an LFS 10.0 system, so with GCC 10.2.0.
>
> The "all warnings being treated as errors" I'm sure, will have been
> picked up by now, but the issue with the man pages is something
> I have been seeing for a while now.

They are indeed fixed in xen 4.14.1, which is available to download, though 
it hasn't been announced yet (unless I have missed it).

 	Michael Young


From xen-devel-bounces@lists.xenproject.org Sun Dec 20 09:42:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 20 Dec 2020 09:42:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57099.99888 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqvEI-00059L-Ro; Sun, 20 Dec 2020 09:42:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57099.99888; Sun, 20 Dec 2020 09:42:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqvEI-00059E-Ol; Sun, 20 Dec 2020 09:42:18 +0000
Received: by outflank-mailman (input) for mailman id 57099;
 Sun, 20 Dec 2020 09:42:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqvEH-000596-6j; Sun, 20 Dec 2020 09:42:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqvEG-0002tz-TU; Sun, 20 Dec 2020 09:42:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqvEG-0003DJ-LU; Sun, 20 Dec 2020 09:42:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kqvEG-0000Ne-L0; Sun, 20 Dec 2020 09:42:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2OgVA5T5/E+oaCbMTQCv8AngvKoHUl4ZZ+xp1wFxnGk=; b=302gDOjE2ZFbD7gLeer4QSsPLY
	xvFEfIesOL3dARq0PfNuIIzmJ4Jv7/mAt9L+cFL5KnGhgZLlCzTJNbm9lTbrtUZvmORNDz0tR5BNJ
	1evFEA8DZz5NjkaMOVgXQ9kn8xv0h1XE4YnbUKYo7UCA/gkCkTQznP4aR4ckOh+WQaEE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157735-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157735: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=bed50bcbbb2796aa88f1c85a2424fe9bd7944f89
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 20 Dec 2020 09:42:16 +0000

flight 157735 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157735/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              bed50bcbbb2796aa88f1c85a2424fe9bd7944f89
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  163 days
Failing since        151818  2020-07-11 04:18:52 Z  162 days  157 attempts
Testing same since   157715  2020-12-19 04:19:22 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 33734 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 20 10:06:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 20 Dec 2020 10:06:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57107.99903 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqvbH-00074F-Ry; Sun, 20 Dec 2020 10:06:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57107.99903; Sun, 20 Dec 2020 10:06:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqvbH-000748-Nh; Sun, 20 Dec 2020 10:06:03 +0000
Received: by outflank-mailman (input) for mailman id 57107;
 Sun, 20 Dec 2020 10:06:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqvbG-000740-Cv; Sun, 20 Dec 2020 10:06:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqvbG-0003Np-5d; Sun, 20 Dec 2020 10:06:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqvbF-0004Fk-Ol; Sun, 20 Dec 2020 10:06:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kqvbF-0001bM-OH; Sun, 20 Dec 2020 10:06:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=kHdqVEvDA9Vr7Os6rGjW2550EUHzPVhHIZkoCBYfBnk=; b=oHSbdEBqO7B0Vn0AApfWqn7kLs
	D5NWo61Hc7T+xElRTSc68gdzDd6j6Hp3mhhvC6qu3DmI56hhWRr4KnxfJjydvax2cF9B6P26tRi5j
	upBwj5UrUcpaCmcrOMx3DrXhb4fDDyq90T5l2azEkOryeVXDPXE4VwvfFw78OEwKPW+U=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157738-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 157738: all pass - PUSHED
X-Osstest-Versions-This:
    xen=357db96a66e47e609c3b14768f1062e13eedbd93
X-Osstest-Versions-That:
    xen=904148ecb4a59d4c8375d8e8d38117b8605e10ac
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 20 Dec 2020 10:06:01 +0000

flight 157738 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157738/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  357db96a66e47e609c3b14768f1062e13eedbd93
baseline version:
 xen                  904148ecb4a59d4c8375d8e8d38117b8605e10ac

Last test of basis   157602  2020-12-16 09:18:30 Z    4 days
Testing same since   157738  2020-12-20 09:20:06 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <pdurrant@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   904148ecb4..357db96a66  357db96a66e47e609c3b14768f1062e13eedbd93 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun Dec 20 11:50:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 20 Dec 2020 11:50:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57119.99917 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqxDf-0007My-E9; Sun, 20 Dec 2020 11:49:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57119.99917; Sun, 20 Dec 2020 11:49:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kqxDf-0007Mr-B5; Sun, 20 Dec 2020 11:49:47 +0000
Received: by outflank-mailman (input) for mailman id 57119;
 Sun, 20 Dec 2020 11:49:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqxDe-0007Mj-V0; Sun, 20 Dec 2020 11:49:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqxDe-00052j-M1; Sun, 20 Dec 2020 11:49:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kqxDe-0002gt-EP; Sun, 20 Dec 2020 11:49:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kqxDe-0007z3-Du; Sun, 20 Dec 2020 11:49:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/5B7pv9zHJZ9tsHDR3OOreGcsL1UcHXpHusx04T1cmg=; b=vb+WuWlQni9voyQsTcJ5RFs+G4
	wNdOBeZVkA4j1eNkObXBFdqWJoYgxCPZ5Whl5zoxH1Mdg48E/PaL6KSszE+FkUqAguXd+LbXLSSha
	TMQl7T6nla1BhW7lFWV4TSKYpG4wB9ZhnQeVdCc5OhurBcql5aHqTLqeXFJSlkpl09jw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157723-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157723: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a05f8ecd88f15273d033b6f044b850a8af84a5b8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 20 Dec 2020 11:49:46 +0000

flight 157723 qemu-mainline real [real]
flight 157740 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157723/
http://logs.test-lab.xenproject.org/osstest/logs/157740/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  122 days
Failing since        152659  2020-08-21 14:07:39 Z  120 days  249 attempts
Testing same since   157670  2020-12-18 13:57:58 Z    1 days    2 attempts

------------------------------------------------------------
323 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 79306 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 20 15:57:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 20 Dec 2020 15:57:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57141.99944 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kr156-0003Xf-DQ; Sun, 20 Dec 2020 15:57:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57141.99944; Sun, 20 Dec 2020 15:57:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kr156-0003XY-AF; Sun, 20 Dec 2020 15:57:12 +0000
Received: by outflank-mailman (input) for mailman id 57141;
 Sun, 20 Dec 2020 15:57:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kr154-0003XQ-4B; Sun, 20 Dec 2020 15:57:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kr153-0000ey-RZ; Sun, 20 Dec 2020 15:57:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kr153-000845-Cz; Sun, 20 Dec 2020 15:57:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kr153-0004fq-CO; Sun, 20 Dec 2020 15:57:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UtErDWVL5K7ySOkIGr3ZG7dqCLF/L6TgZX1ZxRKSy1g=; b=4b7e0npCen22ioDrWC63cBi0KT
	pW9rXX7wBwo0SiFX6AyJE0W2sTA4E1va6NUKvSHWGQQQoMC0F6FhWKIsV0ZVCqbji6FTDwkeogzmi
	fhVXVE9MKmaT+7GxSNHMR6PtohkSG0s5mLWL6WrEnhi/oYhnoaRdJUFpsuW9SZLZgZFE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157729-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157729: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=357db96a66e47e609c3b14768f1062e13eedbd93
X-Osstest-Versions-That:
    xen=357db96a66e47e609c3b14768f1062e13eedbd93
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 20 Dec 2020 15:57:09 +0000

flight 157729 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157729/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157710
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157710
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157710
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157710
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157710
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157710
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157710
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157710
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157710
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157710
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157710
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  357db96a66e47e609c3b14768f1062e13eedbd93
baseline version:
 xen                  357db96a66e47e609c3b14768f1062e13eedbd93

Last test of basis   157729  2020-12-19 23:59:55 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Dec 20 18:02:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 20 Dec 2020 18:02:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57160.99960 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kr31j-0006mW-ON; Sun, 20 Dec 2020 18:01:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57160.99960; Sun, 20 Dec 2020 18:01:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kr31j-0006mP-L4; Sun, 20 Dec 2020 18:01:51 +0000
Received: by outflank-mailman (input) for mailman id 57160;
 Sun, 20 Dec 2020 18:01:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jG8K=FY=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1kr31h-0006mK-Ra
 for xen-devel@lists.xenproject.org; Sun, 20 Dec 2020 18:01:49 +0000
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4545f5df-9278-4e0b-b19d-f40a66fe48c4;
 Sun, 20 Dec 2020 18:01:48 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0BKI1bFc058821;
 Sun, 20 Dec 2020 18:01:37 GMT
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by userp2130.oracle.com with ESMTP id 35h8xqtubq-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Sun, 20 Dec 2020 18:01:37 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0BKHtv2q125058;
 Sun, 20 Dec 2020 18:01:36 GMT
Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75])
 by aserp3020.oracle.com with ESMTP id 35hudvtgav-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Sun, 20 Dec 2020 18:01:36 +0000
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
 by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 0BKI1RJ3007047;
 Sun, 20 Dec 2020 18:01:27 GMT
Received: from [10.39.209.146] (/10.39.209.146)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Sun, 20 Dec 2020 10:01:26 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4545f5df-9278-4e0b-b19d-f40a66fe48c4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=zGs8oLs0SuV93YX7bMLXBG/KlOIK88g4RdW0mF6sRB4=;
 b=YKT1MtB5XQvvQQSomWbNeCVx2/hWgNCfTP/MtFgNjtAn3GxplMU6EEgo3381YH7hO1bh
 k/JhMOMiUmH1rgmgQs0A6UHdlOGsMEb4dv2EDOdZZQHNYP89wn2tRqtbOrOvzEFzb2n9
 eKZGaOdmAgflGyFl7o8btoYQPtaUg2rp1Ne90PrS+oB6SCTHC6O3k4airsCsx37ys63Z
 566dsr7Ifeqf+CWRU/u2lCvBUGfJasodrPUg0sxmn54H8O/hzYYSvsw8+cq23FrDm4hC
 btPpp0+2huQLvJlz8MhqUJ+S9HqTCWHkGFKSTTCEaNUvnvJV+LSsBiXqCbLH4NfvEAPW vQ== 
Subject: Re: [PATCH] xen: Fix event channel callback via INTX/GSI
To: David Woodhouse <dwmw2@infradead.org>, "x86@kernel.org" <x86@kernel.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
        Juergen Gross <jgross@suse.com>, Paul Durrant <pdurrant@amazon.com>,
        jgrall@amazon.com, karahmed@amazon.de,
        xen-devel <xen-devel@lists.xenproject.org>
References: <5ba658b2d8a2bce63622f5bb8ef8d5e6114276eb.camel@infradead.org>
 <6b6544ac-06b3-2525-aed9-39015715f71d@oracle.com>
 <a02cb64ba5680c0f2076da714d06b8704e3411c2.camel@infradead.org>
From: boris.ostrovsky@oracle.com
Organization: Oracle Corporation
Message-ID: <5fa6ba65-2420-8b79-fd20-299166651f0c@oracle.com>
Date: Sun, 20 Dec 2020 13:01:25 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <a02cb64ba5680c0f2076da714d06b8704e3411c2.camel@infradead.org>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9841 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0 adultscore=0
 phishscore=0 bulkscore=0 spamscore=0 mlxlogscore=999 malwarescore=0
 mlxscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2012200134
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9841 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0 clxscore=1015
 priorityscore=1501 mlxscore=0 lowpriorityscore=0 adultscore=0
 mlxlogscore=999 suspectscore=0 phishscore=0 impostorscore=0 bulkscore=0
 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2012200134


On 12/19/20 3:05 AM, David Woodhouse wrote:
> On Fri, 2020-12-18 at 17:20 -0500, boris.ostrovsky@oracle.com wrote:
>> Are there other cases where this would be useful? If it's just to
>> test a hypervisor under development I would think that people who are
>> doing that kind of work are capable of building their own kernel. My
>> concern is mostly about having yet another boot option that is of
>> interest to very few people who can easily work around not having it.
> For hypervisor testing we can just set the Xen major version number in
> CPUID to 3, and that stops xs_reset_watches() from doing anything.
>
> cf. https://lkml.org/lkml/2017/4/10/266
>
> Karim ripped out all this INTX code in 2017 because it was broken, and
> subsequently put it back because it *was* working for older versions of
> Xen, due to that "coincidence". The conclusion back then was that if it
> was put back it should at least *work* consistently, and he was going
> to send a patch "shortly". This is that patch :)


Right, I am not arguing about the usefulness of the fix, only about the new boot option.


>
>>> Add a 'no_vector_callback' command line argument to test it.
>> This last one should be a separate patch I think.
> Could do.
>
>>> +		/*
>>> +		 * It doesn't strictly *have* to run on CPU0 but it sure
>>> +		 * as hell better process the event channel ports delivered
>>> +		 * to CPU0.
>>> +		 */
>>> +		irq_set_affinity(pdev->irq, cpumask_of(0));
>>> +
>>
>> Is the concern here that it won't be handled at all?
> Indeed, the events don't get handled at all if the PCI interrupt lands
> on a CPU other than zero. When the handler calls
> xen_hvm_evtchn_do_upcall() that processes pending events for whichever
> CPU it happens to be running on, and *not* the events pending for CPU0.
> And the boot stops in xs_reset_watches() waiting (without a timeout)
> for an interrupt that never gets processed, as before.


Yes, I see. Then please do it in a separate patch.


>
>> And is this related to the issue this patch is addressing?
> It is required to fix the event channel callback via INTX/GSI, yes.
> Although it could reasonably be lifted out into a separate patch too.
>
>>>  static int __init xenbus_probe_initcall(void)
>>>  {
>>> -	if (!xen_domain())
>>> -		return -ENODEV;
>>> -
>>> -	if (xen_initial_domain() || xen_hvm_domain())
>>> -		return 0;
>>> +	/*
>>> +	 * Probe XenBus here in the XS_PV case, and also XS_HVM unless we
>>> +	 * need to wait for the platform PCI device to come up, which is
>>> +	 * the (XEN_PVHVM && !xen_have_vector_callback) case.
>>> +	 */
>>> +	if (xen_store_domain_type == XS_PV ||
>>> +	    (xen_store_domain_type == XS_HVM &&
>>> +	     (!IS_ENABLED(CONFIG_XEN_PVHVM) || xen_have_vector_callback)))
>>> +		xenbus_probe();
>>>  
>>> -	xenbus_probe(NULL);
>>>  	return 0;
>>>  }
>>> -
>>>  device_initcall(xenbus_probe_initcall);
>>>  
>>> +int xen_set_callback_via(uint64_t via)
>>> +{
>>> +	struct xen_hvm_param a;
>>> +	int ret;
>>> +
>>> +	a.domid = DOMID_SELF;
>>> +	a.index = HVM_PARAM_CALLBACK_IRQ;
>>> +	a.value = via;
>>> +
>>> +	ret = HYPERVISOR_hvm_op(HVMOP_set_param, &a);
>>> +	if (ret)
>>> +		return ret;
>>> +
>>> +	/*
>>> +	 * If xenbus_probe_initcall() deferred the xenbus_probe()
>>> +	 * due to the callback not functioning yet, we can do it now.
>>> +	 */
>>> +	if (!xenstored_ready && xen_store_domain_type == XS_HVM &&
>>> +	    IS_ENABLED(CONFIG_XEN_PVHVM) && !xen_have_vector_callback)
>>
>> I'd create an is_callback_ready() (or something with a better name)
>> helper.
> I pondered that, and indeed dropping the HVM/vector conditions and
> doing it literally based on whether xen_set_callback_via() had been
> called at all (and not too early). But it looks like there are cases
> where Arm doesn't call xen_set_callback_via() at all, and it made more
> sense to me to leave xen_set_callback_via() sitting right here and
> have those two conditions within a page of each other, with suitable
> comments. I think that's probably easier to understand and work with
> than a "helper".


OK.


>
>>> +		xenbus_probe();
>>> +
>>> +	return ret;
>>> +}
>>> +EXPORT_SYMBOL_GPL(xen_set_callback_via);
>>> +
>>>  /* Set up event channel for xenstored which is run as a local process
>>>   * (this is normally used only in dom0)
>>>   */
>>> @@ -817,11 +851,17 @@ static int __init xenbus_init(void)
>>>  		break;
>>>  	}
>>>  
>>> -	/* Initialize the interface to xenstore. */
>>> -	err = xs_init();
>>> -	if (err) {
>>> -		pr_warn("Error initializing xenstore comms: %i\n", err);
>>> -		goto out_error;
>>> +	/*
>>> +	 * HVM domains may not have a functional callback yet. In that
>>> +	 * case let xs_init() be called from xenbus_probe(), which will
>>> +	 * get invoked at an appropriate time.
>>> +	 */
>>> +	if (xen_store_domain_type != XS_HVM) {
>>
>> Can we delay xs_init() for !XS_HVM as well? In other words wait until
>> after PCI platform device has been probed (on HVM) and then call
>> xs_init() for everyone.
> We're half-way there already, because xenbus_probe() *does* happen
> later as a device_initcall, and I've just made it call xs_init().
>
> We could make it avoid calling xs_init() from xenbus_init() in the
> XS_HVM *and* XS_PV cases fairly easily, and let xenbus_probe() do it.


Yes, that's along the lines of what I was thinking.


>
> But right now xenbus_probe() doesn't run for the other cases, so
> there'd have to be a mode where it *only* calls xs_init() and doesn't
> do the notifier chain. That seems like more churn that was needed so I
> didn't do it.


You think so? Yes, there would be a couple more places where you'd need to call xenbus_probe(), but then you wouldn't have to explain (with comments) why you call xs_init() here and not there, and vice versa. The way you do it looks a bit more complicated to me, but I suppose it's a matter of personal preference.


-boris



From xen-devel-bounces@lists.xenproject.org Sun Dec 20 18:52:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 20 Dec 2020 18:52:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57175.99986 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kr3p6-0002ll-3C; Sun, 20 Dec 2020 18:52:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57175.99986; Sun, 20 Dec 2020 18:52:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kr3p5-0002le-Vk; Sun, 20 Dec 2020 18:52:51 +0000
Received: by outflank-mailman (input) for mailman id 57175;
 Sun, 20 Dec 2020 18:52:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kr3p4-0002lW-Rd; Sun, 20 Dec 2020 18:52:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kr3p4-00043j-Kb; Sun, 20 Dec 2020 18:52:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kr3p3-00083m-Pc; Sun, 20 Dec 2020 18:52:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kr3p3-0007HN-Ou; Sun, 20 Dec 2020 18:52:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=aviYlXFJIHfxRDkW58H7As0v7t2dff3/4U5GNGnmPt4=; b=Np/XPtQT5Dbyn1lTU6ntTzryhF
	KqlinhPYmTa38xDQnc3Jm9hWNclC1kbKWiF5QQ52BGn6R72PaTrtiN+g2PUOCAbRg3TT8atrWVONQ
	2uLQxvLoPACHzb8vyUcyzpMK8Y1mLM81Wzq4jDqR6uvOWUVKm5iezedfsCrKyVwL0A5c=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157732-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157732: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-install:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=467f8165a2b0e6accf3d0dd9c8089b1dbde29f7f
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 20 Dec 2020 18:52:49 +0000

flight 157732 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157732/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  12 debian-install           fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                467f8165a2b0e6accf3d0dd9c8089b1dbde29f7f
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  141 days
Failing since        152366  2020-08-01 20:49:34 Z  140 days  243 attempts
Testing same since   157732  2020-12-20 02:34:28 Z    0 days    1 attempts

------------------------------------------------------------
4289 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 952057 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 20 21:04:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 20 Dec 2020 21:04:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57201.100001 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kr5sT-0005Wq-QU; Sun, 20 Dec 2020 21:04:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57201.100001; Sun, 20 Dec 2020 21:04:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kr5sT-0005Wj-NQ; Sun, 20 Dec 2020 21:04:29 +0000
Received: by outflank-mailman (input) for mailman id 57201;
 Sun, 20 Dec 2020 21:04:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kr5sS-0005Wb-AI; Sun, 20 Dec 2020 21:04:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kr5sS-0006IH-1m; Sun, 20 Dec 2020 21:04:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kr5sR-0007YF-LI; Sun, 20 Dec 2020 21:04:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kr5sR-0000Hv-Ke; Sun, 20 Dec 2020 21:04:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=O4yLhbLiS3Uyx6T5Hwz2azWF8aVT2soBDUmm6bSm0HU=; b=RwGHtqqlsA4eviu9X8d8hQ9HjI
	2wR+xDVK7bCmqcFr5RSScKmf0xJIN+aUFJuqkST+hXTu3/sFUrClrbOFD2Ciy5REea3Ozi5+SVQ2I
	N6H5aWMzBzCOQZMuVYMfnYb9zN04EM32D0N2zp5E0HKXUc7h/9OiCpHoFDfhFJA/GpVk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157733-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 157733: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:heisenbug
    xen-4.13-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=10c7c213bef26274684798deb3e351a6756046d2
X-Osstest-Versions-That:
    xen=b5302273e2c51940172400486644636f2f4fc64a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 20 Dec 2020 21:04:27 +0000

flight 157733 xen-4.13-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157733/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail in 157716 pass in 157733
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 157716

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157135
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157135
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157135
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157135
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157135
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157135
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157135
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157135
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157135
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157135
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157135
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  10c7c213bef26274684798deb3e351a6756046d2
baseline version:
 xen                  b5302273e2c51940172400486644636f2f4fc64a

Last test of basis   157135  2020-12-01 15:06:11 Z   19 days
Testing same since   157563  2020-12-15 13:36:28 Z    5 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Harsha Shamsundara Havanur <havanur@amazon.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   b5302273e2..10c7c213be  10c7c213bef26274684798deb3e351a6756046d2 -> stable-4.13


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 00:16:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 00:16:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57244.100144 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kr8sJ-00067T-OC; Mon, 21 Dec 2020 00:16:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57244.100144; Mon, 21 Dec 2020 00:16:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kr8sJ-00067M-Kd; Mon, 21 Dec 2020 00:16:31 +0000
Received: by outflank-mailman (input) for mailman id 57244;
 Mon, 21 Dec 2020 00:16:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kr8sI-00067E-Sy; Mon, 21 Dec 2020 00:16:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kr8sI-0001cV-FJ; Mon, 21 Dec 2020 00:16:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kr8sI-000186-4F; Mon, 21 Dec 2020 00:16:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kr8sI-0001Em-3k; Mon, 21 Dec 2020 00:16:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1FfedQfJyuSgBAQmQccR2tCaUFLh3oF88Xd4en4xSp0=; b=bW76J+KnjSjDsQ7dqDUW/X0NQn
	wnw1s1ZTVIfss2NwxDm3AUIP2N15Isx8yOJa4JN0sQUPoVp5KswNz1UEBcRvDXvWIo3i3YuzSswyN
	qm5Q7r6ngSjBJZi/rVPLNsBBdR4SAF7cdo54INwLLAUjdATL63U+cel+i62fhw8fHIEA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157737-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 157737: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:heisenbug
    linux-5.4:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:heisenbug
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=8a866bdbbac227a99b0b37e03679908642f58aec
X-Osstest-Versions-That:
    linux=2bff021f53b211386abad8cd661e6bb38d0fd524
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 21 Dec 2020 00:16:30 +0000

flight 157737 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157737/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail in 157719 pass in 157737
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail in 157719 pass in 157737
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 157719

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157431
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157431
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157431
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157431
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157431
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157431
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157431
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157431
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157431
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157431
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157431
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                8a866bdbbac227a99b0b37e03679908642f58aec
baseline version:
 linux                2bff021f53b211386abad8cd661e6bb38d0fd524

Last test of basis   157431  2020-12-11 12:40:36 Z    9 days
Testing same since   157603  2020-12-16 10:11:52 Z    4 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Morton <akpm@linux-foundation.org>
  Andy Lutomirski <luto@kernel.org>
  Arnd Bergmann <arnd@arndb.de>
  Arvind Sankar <nivedita@alum.mit.edu>
  Bean Huo <beanhuo@micron.com>
  Borislav Petkov <bp@suse.de>
  Can Guo <cang@codeaurora.org>
  Chris Chiu <chiu@endlessos.org>
  Coiby Xu <coiby.xu@gmail.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Fangrui Song <maskray@google.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Georgi Djakov <georgi.djakov@linaro.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Hans de Goede <hdegoede@redhat.com>
  Hao Si <si.hao@zte.com.cn>
  Heiko Stuebner <heiko@sntech.de>
  Jakub Kicinski <kuba@kernel.org>
  Johannes Berg <johannes.berg@intel.com>
  Jon Hunter <jonathanh@nvidia.com>
  Kalle Valo <kvalo@codeaurora.org>
  Li Yang <leoyang.li@nxp.com>
  Libo Chen <libo.chen@oracle.com>
  Lijun Pan <ljp@linux.ibm.com>
  Lin Chen <chen.lin5@zte.com.cn>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Luca Coelho <luciano.coelho@intel.com>
  Manasi Navare <manasi.d.navare@intel.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <maz@kernel.org>
  Mark Brown <broonie@kernel.org>
  Markus Reichl <m.reichl@fivetechno.de>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Verevkin <me@maxverevkin.tk>
  Michael Ellerman <mpe@ellerman.id.au>
  Miles Chen <miles.chen@mediatek.com>
  Minchan Kim <minchan@kernel.org>
  Mordechay Goodstein <mordechay.goodstein@intel.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Nick Desaulniers <ndesaulniers@google.com>
  Pankaj Sharma <pankj.sharma@samsung.com>
  Ran Wang <ran.wang_1@nxp.com>
  Randy Dunlap <rdunlap@infradead.org>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Sara Sharon <sara.sharon@intel.com>
  Sasha Levin <sashal@kernel.org>
  Scott Wood <oss@buserror.net>
  Shuah Khan <skhan@linuxfoundation.org>
  Shung-Hsi Yu <shung-hsi.yu@suse.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Thierry Reding <treding@nvidia.com>
  Thomas Gleixner <tglx@linutronix.de>
  Timo Witte <timo.witte@gmail.com>
  Tom Lendacky <thomas.lendacky@amd.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vineet Gupta <vgupta@synopsys.com>
  Xu Qiang <xuqiang36@huawei.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Zhan Liu <zliua@micron.com>
  Zhen Lei <thunder.leizhen@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   2bff021f53b2..8a866bdbbac2  8a866bdbbac227a99b0b37e03679908642f58aec -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 03:05:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 03:05:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57261.100165 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krBVH-0002dE-Rw; Mon, 21 Dec 2020 03:04:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57261.100165; Mon, 21 Dec 2020 03:04:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krBVH-0002d5-KG; Mon, 21 Dec 2020 03:04:55 +0000
Received: by outflank-mailman (input) for mailman id 57261;
 Mon, 21 Dec 2020 03:04:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AzP2=FZ=gmail.com=kevin.buckley.ecs.vuw.ac.nz@srs-us1.protection.inumbo.net>)
 id 1krBVG-0002d0-TJ
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 03:04:54 +0000
Received: from mail-wr1-x42a.google.com (unknown [2a00:1450:4864:20::42a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ff476e8f-aef1-46b0-abf8-7355d1fe1bba;
 Mon, 21 Dec 2020 03:04:53 +0000 (UTC)
Received: by mail-wr1-x42a.google.com with SMTP id i9so9546810wrc.4
 for <xen-devel@lists.xenproject.org>; Sun, 20 Dec 2020 19:04:53 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ff476e8f-aef1-46b0-abf8-7355d1fe1bba
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to;
        bh=nwx2mUO40C+7qjRtFbvvWHuOupNKcHjJrQkqcUPlN9E=;
        b=TAzHn5bldj1nVZWXd0uD6LvnDPeIW1rpoZTanLDw29b+thlusA93aiCTCL9QUz2rF8
         37F3RvCbZFfjttnmwMOsfJBBn8np51lDhdWyNElcdNa8Zaes1SFhHU1/EBHDiDCBByhp
         nm0cmhNMTYdwHILgPdi61qmI2WyeHj+z9a5iyP9krHWB3Yj81+cD18vaYPB8IjO0HYTl
         INOebF/3Ou4KUmKt76hEV2Lf+aqFnt1IA5cHlHNbwM354mG7eNG4aCo0qQhqtTLLVWsx
         hoB+i1Uo22AwX4zuei1U3+7oCtRcPU9Kdv2kMx4odJz4v4Oq0hQtMX7nGGG9wMIm50Ky
         dHjA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to;
        bh=nwx2mUO40C+7qjRtFbvvWHuOupNKcHjJrQkqcUPlN9E=;
        b=iDeKRsSCNJqxZ7wfQizBHiXGbyThUqMwc+a3rJ8QUYf/Uu/4eTDoMMPqT7ccnTFdxx
         1giIYx+b+SFg5HWUPcEkBXWEGHVZ7T4Li8lRyxGRcBv428ZYWP+pMINvCxDNKXyoTY9o
         cR3xSStwsQzOV5XTpgOWXDxmc/vjB0Vr0b6ln+ndnOanMkD4pypNNzDY3+TB4eOa4SZD
         4ejPlt2bKjZyMLJq+lVwGhLZpequA8+NI3CObOxe9Qjt4BrYVShGc9Vna30ttnLH3JeM
         TAwBeQQ0G8mERdphvXnF7Wk3Q0G5S3+78PAn5zi3ivQ6dDcUXu6gPzQH//oUjkMpfUXy
         9ZCA==
X-Gm-Message-State: AOAM530nR5GDwo16x7n29EtIWrpTXrAI92bHjOFizAz1SIRWUEFdx3jN
	Br3+wB90zUhPMnWz5sOloKE+qedxZUS0PU8iBEZJFEOWsEwfDQ==
X-Google-Smtp-Source: ABdhPJwrKCLYjmrB+VOzoyk0Idg/4QvUG1nw5Xvs2jBbiQMXOCym5fyuv1nIYMnWhNi7eMZVANt3d0ZPlR880S9jxpY=
X-Received: by 2002:adf:e443:: with SMTP id t3mr16118884wrm.366.1608519892747;
 Sun, 20 Dec 2020 19:04:52 -0800 (PST)
MIME-Version: 1.0
References: <CABwOO=dLaL-BLf+GDo71_Btq1R8L5=XmofSs+oHE+P-qx+M49A@mail.gmail.com>
In-Reply-To: <CABwOO=dLaL-BLf+GDo71_Btq1R8L5=XmofSs+oHE+P-qx+M49A@mail.gmail.com>
From: Kevin Buckley <kevin.buckley.ecs.vuw.ac.nz@gmail.com>
Date: Mon, 21 Dec 2020 11:04:41 +0800
Message-ID: <CABwOO=c8FYYYEPc5VCiDAyF-PG++2OQbNhv7sKPkWJ9X1gO34Q@mail.gmail.com>
Subject: Re: Xen Release 4.14.0: Couple of "all warnings being treated as
 errors" issues and ongoing docs/man issue in make world
To: xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

On Sun, 20 Dec 2020 at 10:54, Kevin Buckley
<kevin.buckley.ecs.vuw.ac.nz@gmail.com> wrote:
>
> Looking to build 4.14.0 on an LFS 10.0 system, so with GCC 10.2.0.
>
> The "all warnings being treated as errors" issues, I'm sure, will
> have been picked up by now, but the issue with the man pages is
> something I have been seeing for a while now.
>
> ...
> After that point, the build gets as far as
>
> make[2]: Leaving directory '/usr/src/xen/xen-4.14.0/tools'
> make -C docs install
> make[2]: Entering directory '/usr/src/xen/xen-4.14.0/docs'
> /usr/bin/pod2man --release=4.14.0 --name=xenhypfs -s 1 -c "Xen"
> man/xenhypfs.1.pod man1/xenhypfs.1
> Can't write-open man1/xenhypfs.1: No such file or directory at
> /usr/bin/pod2man line 69.
> make[2]: *** [Makefile:176: man1/xenhypfs.1] Error 2
> make[2]: Leaving directory '/usr/src/xen/xen-4.14.0/docs'
> make[1]: *** [Makefile:153: install-docs] Error 2
> make[1]: Leaving directory '/usr/src/xen/xen-4.14.0'
> make: *** [Makefile:170: world] Error 2
>
> and this is the interesting bit.
>
> Firstly, noting that the make is being run from the top-level
> docs directory, looking at
>
> pkg xen:xen-4.14.0> ls docs/
> INDEX            configure.ac       man/
> Makefile         designs/           misc/
> README.colo      features/          parse-support-md*
> README.remus     figs/              process/
> README.source    gen-html-index     specs/
> admin-guide/     glossary.rst       support-matrix-generate*
> conf.py          guest-guide/       xen-headers*
> config.status*   hypervisor-guide/
> configure*       index.rst
>
>
> shows that there isn't a man1 subdir within the source tree?

Please ignore the above issue with the man pages.

This has turned out to be a problem at my end, resulting from a
wrapper around the "install" utility that has been failing silently
for years.

Bugger!
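For anyone who hits the same symptom, a quick sanity check is to confirm which `install` the build will pick up and whether `install -d` actually creates directories, since the docs Makefile relies on that to create man1/ before pod2man writes into it. This is a hypothetical diagnostic sketch, not part of the Xen build itself:

```shell
# Show which `install` the build will use; a local wrapper script would show up here
command -v install

# Verify that `install -d` really creates a directory -- a silently failing
# wrapper would leave man1/ missing, producing the pod2man "write-open" error
tmpdir=$(mktemp -d)
install -d "$tmpdir/man1"
test -d "$tmpdir/man1" && echo "install -d works"
rm -rf "$tmpdir"
```

If the directory check fails while `/usr/bin/install -d` succeeds directly, the wrapper is the culprit.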


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 06:29:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 06:29:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57267.100176 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krEhK-0003Go-Ap; Mon, 21 Dec 2020 06:29:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57267.100176; Mon, 21 Dec 2020 06:29:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krEhK-0003Gh-7G; Mon, 21 Dec 2020 06:29:34 +0000
Received: by outflank-mailman (input) for mailman id 57267;
 Mon, 21 Dec 2020 06:29:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krEhJ-0003GZ-NS; Mon, 21 Dec 2020 06:29:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krEhJ-0006sH-He; Mon, 21 Dec 2020 06:29:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krEhJ-0004Kv-5v; Mon, 21 Dec 2020 06:29:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1krEhJ-0006sd-5R; Mon, 21 Dec 2020 06:29:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fD3wiLok2auxa/S4BpmxpqwGgd7t4V7asP+YuxeZXf0=; b=DsmG3ezMPc2UEQN8LqtaE357ez
	F1k0FL5kleE0LgQ9N/g5W4LES8ZIZqtXZffOmSuh8LB+gtsge8gwpaOYBMA9OlVlUPeiPM60mp6VR
	zhjMSknekkmsuoGan/Blg+SuOOLEOQtK5xegIJevleVlWPUqyhWYECRPCLSBXuLCxCnI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157741-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157741: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a05f8ecd88f15273d033b6f044b850a8af84a5b8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 21 Dec 2020 06:29:33 +0000

flight 157741 qemu-mainline real [real]
flight 157751 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157741/
http://logs.test-lab.xenproject.org/osstest/logs/157751/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  122 days
Failing since        152659  2020-08-21 14:07:39 Z  121 days  250 attempts
Testing same since   157670  2020-12-18 13:57:58 Z    2 days    3 attempts

------------------------------------------------------------
323 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 79306 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 06:34:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 06:34:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57275.100194 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krEld-00048w-5o; Mon, 21 Dec 2020 06:34:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57275.100194; Mon, 21 Dec 2020 06:34:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krEld-00048p-2j; Mon, 21 Dec 2020 06:34:01 +0000
Received: by outflank-mailman (input) for mailman id 57275;
 Mon, 21 Dec 2020 06:34:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krElc-00048g-1K; Mon, 21 Dec 2020 06:34:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krElb-0006xJ-Nn; Mon, 21 Dec 2020 06:33:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krElb-0004cU-Fy; Mon, 21 Dec 2020 06:33:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1krElb-0000zm-FV; Mon, 21 Dec 2020 06:33:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=suKEuu78WfRv1XlssigfAnZtsYJSst6O8r2SLHQ9KNU=; b=Dlm2T9QgOPj0I/xMrq+4ukIw9v
	uSV4PnP44XEG2ilXfwL1ViMPiqloJJ7mSv7g4G7wQE1LwYiQklK8i1Yh/VI/2qsiCiteHa++wsL9o
	3J9m2kkcrkdgKpBtjOtQjEAOakrY9UtvgVf2CajMyrTuwujTZQC5rAQen5j5XIa35CRg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157750-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157750: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=bed50bcbbb2796aa88f1c85a2424fe9bd7944f89
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 21 Dec 2020 06:33:59 +0000

flight 157750 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157750/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              bed50bcbbb2796aa88f1c85a2424fe9bd7944f89
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  164 days
Failing since        151818  2020-07-11 04:18:52 Z  163 days  158 attempts
Testing same since   157715  2020-12-19 04:19:22 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 33734 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 07:20:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 07:20:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57286.100213 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krFUU-0008Tl-0C; Mon, 21 Dec 2020 07:20:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57286.100213; Mon, 21 Dec 2020 07:20:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krFUT-0008Te-Sq; Mon, 21 Dec 2020 07:20:21 +0000
Received: by outflank-mailman (input) for mailman id 57286;
 Mon, 21 Dec 2020 07:20:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krFUR-0008TW-Ug; Mon, 21 Dec 2020 07:20:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krFUR-0007jQ-Mg; Mon, 21 Dec 2020 07:20:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krFUR-0005zs-Ct; Mon, 21 Dec 2020 07:20:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1krFUR-0002Nr-CQ; Mon, 21 Dec 2020 07:20:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=eHfjat51cAEMQfFlx69nGsWq+29lJtnDRszekPBVMLg=; b=Ff9v3XLyFwRc6ByMavmhuFeVhN
	6FWS8BjXAMjphkN4xWSPaZGnLA53mr/PAN3KPs0f0x69QEisRJ9U5pWOOzWwN1TIzdXxJNuVm0jJL
	OVycm0eywiEpugygTTIaML5v4Gy+1E+sqeRgSf9XyDX7FvrUFDpHXtn0kDAghl5ObtIU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157744-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157744: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=357db96a66e47e609c3b14768f1062e13eedbd93
X-Osstest-Versions-That:
    xen=357db96a66e47e609c3b14768f1062e13eedbd93
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 21 Dec 2020 07:20:19 +0000

flight 157744 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157744/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157729
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157729
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157729
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157729
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157729
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157729
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157729
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157729
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157729
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157729
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157729
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  357db96a66e47e609c3b14768f1062e13eedbd93
baseline version:
 xen                  357db96a66e47e609c3b14768f1062e13eedbd93

Last test of basis   157744  2020-12-20 15:58:58 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Dec 21 08:11:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 08:11:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57300.100227 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krGHV-0004vG-5X; Mon, 21 Dec 2020 08:11:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57300.100227; Mon, 21 Dec 2020 08:11:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krGHV-0004v9-2Y; Mon, 21 Dec 2020 08:11:01 +0000
Received: by outflank-mailman (input) for mailman id 57300;
 Mon, 21 Dec 2020 08:10:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DEM5=FZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krGHT-0004v4-7u
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 08:10:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 028be536-b425-4054-89e2-94ebb5a07192;
 Mon, 21 Dec 2020 08:10:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 60C27AD45;
 Mon, 21 Dec 2020 08:10:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 028be536-b425-4054-89e2-94ebb5a07192
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608538257; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Ma1MYWHLgJyLaOiBow43xe5cjJSTbgU999BVjooDWq8=;
	b=hiubR8nAhO5U2mRG4sil/AjtUsn64hixCEPA2X6/Vq8qGV1btsqF8gzSVmiRieVzqffhOJ
	1jmmYe0BSgTsyZW7ZSxENPwlTBUIPTypbUzuCiMwkTBdvR40tzzFlOJaO1cOSke6O96ILJ
	1HMVHXBwACT1gAEimvLbssXXB7e1QHo=
Subject: Re: [PATCH 2/6] x86/mm: p2m_add_foreign() is HVM-only
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, George Dunlap <george.dunlap@citrix.com>
References: <be9ce75e-9119-2b5a-9e7b-437beb7ee446@suse.com>
 <cf4569c5-a9c5-7b4b-d576-d1521c369418@suse.com>
 <f736244b-ece7-af35-1517-2e5fdd9705c7@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e2ee99fc-e3f8-bdaf-fe4a-d048da34731a@suse.com>
Date: Mon, 21 Dec 2020 09:10:59 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <f736244b-ece7-af35-1517-2e5fdd9705c7@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 17.12.2020 20:18, Andrew Cooper wrote:
> On 15/12/2020 16:26, Jan Beulich wrote:
>> This is together with its only caller, xenmem_add_to_physmap_one().
> 
> I can't parse this sentence.  Perhaps "... as is its only caller," as a
> follow-on from the subject sentence.
> 
>>  Move
>> the latter next to p2m_add_foreign(), allowing this one to become static
>> at the same time.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

So I had to ask Andrew to revert this (I was already at home when I
noticed the breakage), as it turned out to break the shim build.
The problem is that xenmem_add_to_physmap() is non-static and
hence can't be eliminated altogether by the compiler when !HVM.
We could make the function conditionally static
"#if !defined(CONFIG_X86) && !defined(CONFIG_HVM)", but this
looks uglier to me than this extra hunk:

--- unstable.orig/xen/common/memory.c
+++ unstable/xen/common/memory.c
@@ -788,7 +788,11 @@ int xenmem_add_to_physmap(struct domain
     union add_to_physmap_extra extra = {};
     struct page_info *pages[16];
 
-    ASSERT(paging_mode_translate(d));
+    if ( !paging_mode_translate(d) )
+    {
+        ASSERT_UNREACHABLE();
+        return -EACCES;
+    }
 
     if ( xatp->space == XENMAPSPACE_gmfn_foreign )
         extra.foreign_domid = DOMID_INVALID;

Andrew, please let me know whether your ack stands with this (or
said alternative) added, or whether you'd prefer me to re-post.

Jan
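The "conditionally static" alternative mentioned above can be sketched as a minimal, self-contained demo; the CONFIG_* guard mirrors the quoted #if, while the function name and body are placeholders rather than Xen code:

```c
#include <assert.h>

/*
 * Sketch of the rejected alternative: give the function internal
 * linkage only in configurations where no other translation unit
 * references it, so the compiler can discard it entirely when !HVM.
 */
#if !defined(CONFIG_X86) && !defined(CONFIG_HVM)
#define MAYBE_STATIC static
#else
#define MAYBE_STATIC
#endif

/* Placeholder standing in for xenmem_add_to_physmap(). */
MAYBE_STATIC int add_to_physmap_stub(int gfn)
{
    return gfn >= 0 ? 0 : -22; /* 0 on success, -EINVAL otherwise */
}
```

With neither CONFIG_* symbol defined, MAYBE_STATIC expands to `static` and an unreferenced definition can be dropped at compile time; the extra-hunk approach in the diff above avoids carrying such an #if around the definition.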


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 08:14:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 08:14:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57306.100240 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krGKq-00054z-KV; Mon, 21 Dec 2020 08:14:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57306.100240; Mon, 21 Dec 2020 08:14:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krGKq-00054s-Gv; Mon, 21 Dec 2020 08:14:28 +0000
Received: by outflank-mailman (input) for mailman id 57306;
 Mon, 21 Dec 2020 08:14:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DEM5=FZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krGKo-00054U-Lv
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 08:14:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 648089ca-1edb-4fc2-8a44-6254df8a2611;
 Mon, 21 Dec 2020 08:14:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A8745AD2D;
 Mon, 21 Dec 2020 08:14:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 648089ca-1edb-4fc2-8a44-6254df8a2611
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608538459; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=k1CVJF3Sk2QqVmwE2sP6B1lIgUt4I6fNKccFw5kHOwA=;
	b=uU0bld4WoPlJbSQ1qW1nTC/6t/YGkD59SV3fqm/S5QKOEIouYAa9oMPq4VaVKxL8/u6hp3
	YY4R+bzeVkB0KQ2cP0I/HMBpN4plkuLOHIQrlhJCYmBv86rmmggAcf647DFaWqb+Fhuoc0
	xL2Z99MARad0o7VjmLZjsBhBco5aP68=
From: Jan Beulich <jbeulich@suse.com>
Subject: Xen 4.14.1 released
To: xen-announce@lists.xenproject.org
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <f00d676e-6fb5-1417-7c16-845171bab6b5@suse.com>
Date: Mon, 21 Dec 2020 09:14:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

All,

we're pleased to announce the release of Xen 4.14.1. This is available
immediately from its git repository
http://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.14
(tag RELEASE-4.14.1) or from the XenProject download page
https://xenproject.org/downloads/xen-project-archives/xen-project-4-14-series/xen-project-4-14-1/
(where a list of changes can also be found).

We recommend that all users of the 4.14 stable series update to this
first point release.

Regards, Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 08:21:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 08:21:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57311.100252 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krGRd-0005zG-Bu; Mon, 21 Dec 2020 08:21:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57311.100252; Mon, 21 Dec 2020 08:21:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krGRd-0005z9-8w; Mon, 21 Dec 2020 08:21:29 +0000
Received: by outflank-mailman (input) for mailman id 57311;
 Mon, 21 Dec 2020 08:21:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DEM5=FZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krGRb-0005z4-6m
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 08:21:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3be7f542-3a3b-47f8-a57e-0ca89e1bd8f6;
 Mon, 21 Dec 2020 08:21:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 471E7AD09;
 Mon, 21 Dec 2020 08:21:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3be7f542-3a3b-47f8-a57e-0ca89e1bd8f6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608538885; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=IHHULxBeROb4X9ndbiEFbdSTahhG/Wv9UHgWmaUIokg=;
	b=sxGpf1wYjz6c0IV6UQuWiJQZpC1aF5KUP20qRa04yhy3i5+zbUoK4QqSPydtaLntj7f0+Q
	KIcVD2x2UGHaLwB0ZvVaoKIx9/BLwDB7IAQfOchw1D9rJr9+HWIJjmpCiJVB8gcBDYYezK
	OkZvhAScuEGt+hMDoOFH259dyh3skk0=
Subject: Re: XSA-351 causing Solaris-11 systems to panic during boot.
To: boris.ostrovsky@oracle.com
Cc: xen-devel@lists.xenproject.org, Cheyenne Wills
 <cheyenne.wills@gmail.com>, Andrew Cooper <andrew.cooper3@citrix.com>
References: <CAHpsFVc4AAm6L0rKUuV47ydOjtw7XAgFnDZxRjdCL0OHXJERDw@mail.gmail.com>
 <7bca24cb-a3af-b54d-b224-3c2a316859dd@suse.com>
 <4fc3532b-f53f-2a15-ce64-f857816b0566@oracle.com>
 <f4ff3d16-40f6-e8a1-fcdd-ca52e1f52ca6@suse.com>
 <c90622c4-f9e0-8b6d-ab46-bba0cbfc0fd9@oracle.com>
 <0430337a-6fcd-9471-4455-838390401220@citrix.com>
 <c6e05b63-b066-9bd0-9da1-1fc089cd1aea@oracle.com>
 <10958d4a-154f-a524-35e9-a75eaf50fe55@oracle.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <90740e33-c69a-16d7-2622-fa57a1f34272@suse.com>
Date: Mon, 21 Dec 2020 09:21:27 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <10958d4a-154f-a524-35e9-a75eaf50fe55@oracle.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 18.12.2020 21:43, boris.ostrovsky@oracle.com wrote:
> On 12/17/20 12:49 PM, boris.ostrovsky@oracle.com wrote:
>> On 12/17/20 11:46 AM, Andrew Cooper wrote:
>>> On 17/12/2020 16:25, boris.ostrovsky@oracle.com wrote:
>>>> On 12/17/20 2:40 AM, Jan Beulich wrote:
>>>>> On 17.12.2020 02:51, boris.ostrovsky@oracle.com wrote:
>>>>> I think this is acceptable as a workaround, albeit we may want to
>>>>> consider further restricting this (at least on staging), like e.g.
>>>>> requiring a guest config setting to enable the workaround. 
>>>> Maybe, but then someone migrating from a stable release to 4.15 will have to modify guest configuration.
>>>>
>>>>
>>>>> But
>>>>> maybe this will need to be part of the MSR policy for the domain
>>>>> instead, down the road. We'll definitely want Andrew's view here.
>>>>>
>>>>> Speaking of staging - before applying anything to the stable
>>>>> branches, I think we want to have this addressed on the main
>>>>> branch. I can't see how Solaris would work there.
>>>> Indeed it won't. I'll need to do that as well (I misinterpreted the statement in the XSA about only 4.14- being vulnerable)
>>> It's hopefully obvious now why we suddenly finished the "lets turn all
>>> unknown MSRs to #GP" work at the point that we did (after dithering on
>>> the point for several years).
>>>
>>> To put it bluntly, default MSR readability was not a clever decision at all.
>>>
>>> There is a large risk that there is a similar vulnerability elsewhere,
>>> given how poorly documented the MSRs are (and one contemporary CPU I've
>>> got the manual open for has more than 6000 *documented* MSRs).  We did
>>> debate for a while whether the readability of the PPIN MSRs was a
>>> vulnerability or not, before eventually deciding not.
> 
> 
> Can we do something like KVM's ignore_msrs (but probably returning 0
> on reads, to avoid leaking system state)? It would make it possible to
> deal with cases where a guest is suddenly unable to boot after a
> hypervisor update (especially from pre-4.14). It won't help in all
> cases, since some MSRs may be expected to be non-zero, but I think it
> will cover a large number of them. (And it will certainly do what Jan
> is asking above, while not being specific to this particular breakage.)

This would re-introduce the problem of guests detecting certain
features that lack suitable CPUID bits: guests would no longer observe
the expected #GP(0), and hence be at risk of misbehaving. At the very
least such an option would therefore need to be per-domain rather than
(as in KVM) global, and use of it should then imo be explicitly
unsupported. And along the lines of what KVM has, this may want to be
a tristate, so the ignoring can be either silent or verbose.

Jan
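A per-domain tristate along those lines might be sketched as follows; the enum, structure, and handler names are illustrative assumptions for this thread, not Xen's actual interface:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical per-domain policy for reads of unknown MSRs:
 * the architecturally correct #GP(0) by default, or zero-returning
 * reads, silently or verbosely.  Names are illustrative only.
 */
enum msr_unknown_policy {
    MSR_UNKNOWN_GP,       /* default: caller injects #GP(0) */
    MSR_UNKNOWN_SILENT,   /* read as 0, no logging */
    MSR_UNKNOWN_VERBOSE,  /* read as 0, log the access */
};

struct domain_policy {
    enum msr_unknown_policy unknown_msr;
};

/* Returns 0 with *val filled in, or -1 to signal #GP(0). */
static int handle_unknown_rdmsr(const struct domain_policy *p,
                                uint32_t msr, uint64_t *val)
{
    switch ( p->unknown_msr )
    {
    case MSR_UNKNOWN_VERBOSE:
        fprintf(stderr, "domain read of unknown MSR %#x\n", msr);
        /* fall through */
    case MSR_UNKNOWN_SILENT:
        *val = 0; /* return 0, not host state, to avoid leaks */
        return 0;
    default:
        return -1;
    }
}
```

Matching Jan's point, the policy lives in a per-domain structure, so guests that rely on #GP(0) feature probing keep the architecturally correct behaviour unless the workaround is explicitly enabled for a domain.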


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 08:26:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 08:26:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57315.100263 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krGWL-00069s-05; Mon, 21 Dec 2020 08:26:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57315.100263; Mon, 21 Dec 2020 08:26:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krGWK-00069l-TM; Mon, 21 Dec 2020 08:26:20 +0000
Received: by outflank-mailman (input) for mailman id 57315;
 Mon, 21 Dec 2020 08:26:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DEM5=FZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krGWJ-00069g-Kh
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 08:26:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 259d0f0c-cad5-42f8-acf5-5eb5a48fbc13;
 Mon, 21 Dec 2020 08:26:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 04DB3ACF5;
 Mon, 21 Dec 2020 08:26:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 259d0f0c-cad5-42f8-acf5-5eb5a48fbc13
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608539178; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=rcNhHZOQ+BWlMFyFjB9vWDIxHCi+ydtvzyMz/lIF4vc=;
	b=BQtT51yyFkA6ODsLjMM8gaVM/82LgHCeTL2geVykwFVGFwvghjwnXbr06P24GYqRxwWtav
	fYrcEGhPy5K6EDZ2HmMVXPWPNj0FFizpFI1NxxU9mb6bHOGB4Dk6Attp4B0J7pD1ACz24M
	OBZaj8gRC6KN9oUjheA3Iaf3XocpKUg=
Subject: Re: [PATCH v3 5/8] xen/hypfs: add support for id-based dynamic
 directories
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201209160956.32456-1-jgross@suse.com>
 <20201209160956.32456-6-jgross@suse.com>
 <2894a231-9150-7c09-cc5c-7ef52087acf5@suse.com>
 <d4c408eb-08d8-42a8-0c0a-6580fce0e181@suse.com>
 <5e0ac85e-ecba-86ad-b350-ff30e3a40a68@suse.com>
 <bde3d3b1-a512-e1fe-cfd4-287fa0ea95cd@suse.com>
 <a515ead2-f732-ddcd-f29b-788b8997fd2a@suse.com>
 <0c56129d-dcfa-2a52-dc66-221f103e6735@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e4767946-99b6-40bb-c606-5a7d21dc3803@suse.com>
Date: Mon, 21 Dec 2020 09:26:20 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <0c56129d-dcfa-2a52-dc66-221f103e6735@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 18.12.2020 13:41, Jürgen Groß wrote:
> On 18.12.20 10:09, Jan Beulich wrote:
>> On 18.12.2020 09:57, Jürgen Groß wrote:
>>> On 17.12.20 13:14, Jan Beulich wrote:
>>>> On 17.12.2020 12:32, Jürgen Groß wrote:
>>>>> On 17.12.20 12:28, Jan Beulich wrote:
>>>>>> On 09.12.2020 17:09, Juergen Gross wrote:
>>>>>>> +static const struct hypfs_entry *hypfs_dyndir_enter(
>>>>>>> +    const struct hypfs_entry *entry)
>>>>>>> +{
>>>>>>> +    const struct hypfs_dyndir_id *data;
>>>>>>> +
>>>>>>> +    data = hypfs_get_dyndata();
>>>>>>> +
>>>>>>> +    /* Use template with original enter function. */
>>>>>>> +    return data->template->e.funcs->enter(&data->template->e);
>>>>>>> +}
>>>>>>
>>>>>> At the example of this (applies to other uses as well): I realize
>>>>>> hypfs_get_dyndata() asserts that the pointer is non-NULL, but
>>>>>> according to the bottom of ./CODING_STYLE this may not be enough
>>>>>> when considering the implications of a NULL deref in the context
>>>>>> of a PV guest. Even this living behind a sysctl doesn't really
>>>>>> help, both because via XSM not fully privileged domains can be
>>>>>> granted access, and because speculation may still occur all the
>>>>>> way into here. (I'll send a patch to address the latter aspect in
>>>>>> a few minutes.) While likely we have numerous existing examples
>>>>>> with similar problems, I guess in new code we'd better be as
>>>>>> defensive as possible.
>>>>>
>>>>> What do you suggest? BUG_ON()?
>>>>
>>>> Well, BUG_ON() would be a step in the right direction, converting
>>>> privilege escalation to DoS. The question is if we can't do better
>>>> here, gracefully failing in such a case (the usual pair of
>>>> ASSERT_UNREACHABLE() plus return/break/goto approach doesn't fit
>>>> here, at least not directly).
>>>>
>>>>> You are aware that this is nothing a user can influence, so it would
>>>>> be a clear coding error in the hypervisor?
>>>>
>>>> A user (or guest) can't arrange for there to be a NULL pointer,
>>>> but if there is one that can be run into here, this would still
>>>> require an XSA afaict.
>>>
>>> I still don't see how this could happen without a major coding bug,
>>> which IMO wouldn't go unnoticed even during really brief testing (this
>>> is the reason for the ASSERT() in hypfs_get_dyndata() after all).
>>
>> True. Yet the NULL derefs wouldn't go unnoticed either.
>>
>>> It's not as if the control flow would allow many different ways to reach
>>> any of the hypfs_get_dyndata() calls.
>>
>> I'm not convinced of this - this is a non-static function, and the
>> call patch 8 adds (just to take an example) makes it far from obvious
>> that the allocation did happen and was checked for success. Yes, in
>> principle cpupool_gran_write() isn't supposed to be called in such a
>> case, but it's the nature of bugs that assumptions get broken.
> 
> Yes, but we do have tons of assumptions like that. I don't think we
> should add tests for non-NULL pointers everywhere just because we
> happen to dereference something. Where do we stop?
> 
>>
>>> I can add security checks at the appropriate places, but I think this
>>> would be just dead code. OTOH if you feel strongly here, let's go
>>> with it.
>>
>> Going with it isn't the only possible route. The other is to drop
>> the ASSERT()s altogether. It simply seems to me that their addition
>> is a half-hearted attempt when considering what was added to
>> ./CODING_STYLE not all that long ago.
> 
> No. The ASSERT() is clearly an attempt to catch a programming error
> early. It is specifically not trying to catch a situation which is
> thought to be possible. The situation should really never happen, and
> I'm not aware how it could happen without a weird code modification.
> 
> Dropping the ASSERT() would really add the risk of a bug introduced by
> a later code modification going unnoticed.

Is this the case? Wouldn't the NULL be de-referenced almost immediately,
and hence the bug be noticed right away anyway? I don't think it is
typical for PV guests to have a valid mapping for address 0. Putting in
place such a mapping could at least be a hint towards possible malice
imo.
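For illustration only: the defensive alternative being debated (fail the
access gracefully rather than rely on an ASSERT() that compiles out of
release builds) could look like the sketch below. All names here
(`get_dyndata_checked`, `dyndir_enter`, the `dyndata` struct) are
hypothetical stand-ins, not the actual hypfs interface.

```c
#include <stdio.h>
#include <stddef.h>

/* Hypothetical stand-in for the per-vCPU dynamic hypfs data. */
struct dyndata { int id; };
static struct dyndata *current_dyndata;

static struct dyndata *get_dyndata_checked(void)
{
    struct dyndata *data = current_dyndata;

    /* In a debug build this would be an ASSERT(data); here we merely
     * warn and let the caller fail gracefully instead of dereferencing. */
    if ( !data )
        fprintf(stderr, "dyndata unexpectedly NULL\n");

    return data;
}

static int dyndir_enter(void)
{
    const struct dyndata *data = get_dyndata_checked();

    if ( !data )
        return -1; /* deny the access, converting a crash into an error */

    return data->id;
}
```

This turns a would-be NULL dereference (the concern for the PV-guest
context discussed above) into a benign error return, at the cost of the
"dead code" Jürgen mentions.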

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 08:30:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 08:30:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57319.100276 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krGaY-00072I-Jp; Mon, 21 Dec 2020 08:30:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57319.100276; Mon, 21 Dec 2020 08:30:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krGaY-00072B-G6; Mon, 21 Dec 2020 08:30:42 +0000
Received: by outflank-mailman (input) for mailman id 57319;
 Mon, 21 Dec 2020 08:30:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DEM5=FZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krGaX-000726-Lr
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 08:30:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8f315c31-13dc-4c57-bbbf-afa96beb07a7;
 Mon, 21 Dec 2020 08:30:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DD951ACC4;
 Mon, 21 Dec 2020 08:30:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8f315c31-13dc-4c57-bbbf-afa96beb07a7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608539440; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=k/cKh/D7mlPJ8p5jVTAb/X7PKFm2Alm16NCrQWwImh0=;
	b=pE0g9vk2yT/NovCusbKukYp3cv3BneEhOFJjSJRZD6oPm+58MAYg4qDycIh5t2ipQ7WYd2
	+0aKVu8jE5HpHNPhOxQQbQ6J0FvISfHkdArA9FwaHD8+UkxxCM7IzYce6hTFdW9Hs//yTf
	h6jLQkD4VCe0QEDK1u+uES6+Paor3HA=
Subject: Re: [PATCH] xen/x86: Fix memory leak in vcpu_create() error path
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@eu.citrix.com>,
 Tim Deegan <tim@xen.org>, =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?=
 <michal.leszczynski@cert.pl>
References: <20200928154741.2366-1-andrew.cooper3@citrix.com>
 <33331c3a-1fd5-1ef6-16a3-21d2a6672e90@suse.com>
 <9556aeb3-2a7c-7aea-4386-6e561dd9ef6e@citrix.com>
 <9e652863-5ada-0327-5817-cdb2e652e066@suse.com>
 <e26f0cc3-1893-6cd9-71b3-4e0c011318b3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <baaa6ce0-5434-5b65-da12-bdf9487ebf74@suse.com>
Date: Mon, 21 Dec 2020 09:30:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <e26f0cc3-1893-6cd9-71b3-4e0c011318b3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 18.12.2020 14:58, Andrew Cooper wrote:
> On 18/12/2020 08:27, Jan Beulich wrote:
>> On 17.12.2020 22:46, Andrew Cooper wrote:
>>> On 29/09/2020 07:18, Jan Beulich wrote:
>>>> On 28.09.2020 17:47, Andrew Cooper wrote:
>>>>> --- a/xen/arch/x86/mm/hap/hap.c
>>>>> +++ b/xen/arch/x86/mm/hap/hap.c
>>>>> @@ -563,30 +563,37 @@ void hap_final_teardown(struct domain *d)
>>>>>      paging_unlock(d);
>>>>>  }
>>>>>  
>>>>> +void hap_vcpu_teardown(struct vcpu *v)
>>>>> +{
>>>>> +    struct domain *d = v->domain;
>>>>> +    mfn_t mfn;
>>>>> +
>>>>> +    paging_lock(d);
>>>>> +
>>>>> +    if ( !paging_mode_hap(d) || !v->arch.paging.mode )
>>>>> +        goto out;
>>>> Any particular reason you don't use paging_get_hostmode() (as the
>>>> original code did) here? Any particular reason for the seemingly
>>>> redundant (and hence somewhat in conflict with the description's
>>>> "with the minimum number of safety checks possible")
>>>> paging_mode_hap()?
>>> Yes to both.  As you spotted, I converted the shadow side first, and
>>> made the two consistent.
>>>
>>> The paging_mode_{shadow,hap}() is necessary for idempotency.  These
>>> functions really might get called before paging is set up, for an early
>>> failure in domain_create().
>> In which case how would v->arch.paging.mode be non-NULL already?
>> They get set in {hap,shadow}_vcpu_init() only.
> 
> Right, but we also might end up here with an error early in
> vcpu_create(), where d->arch.paging is set up, but v->arch.paging isn't.
> 
> This logic needs to be safe to use at any point of partial initialisation.
> 
> (And to be clear, I found I needed both of these based on some
> artificial error injection testing.)
> 
>>> The paging mode has nothing really to do with hostmode/guestmode/etc. 
>>> It is the only way of expressing the logic where it is clear that the
>>> lower pointer dereferences are trivially safe.
>> Well, yes and no - the other uses of course should then also use
>> paging_get_hostmode(), like various of the wrappers in paging.h
>> do. Or else I question why we have paging_get_hostmode() in the
>> first place.
> 
> I'm not convinced it is an appropriate abstraction to have, and I don't
> expect it to survive the nested virt work.
> 
>> There are more examples in shadow code where this
>> gets open-coded when it probably shouldn't be. There haven't been
>> any such cases in HAP code so far ...
> 
> Doesn't matter.  Its use here would obfuscate the code (this is one part
> of why I think it is a bad abstraction to begin with), and if the
> implementation ever changed, the function would lose its safety.
> 
>> Additionally (noticing only now) in the shadow case you may now
>> loop over all vCPU-s in shadow_teardown() just for
>> shadow_vcpu_teardown() to bail right away. Wouldn't it make sense
>> to retain the "if ( shadow_mode_enabled(d) )" there around the
>> loop?
> 
> I'm not entirely convinced that was necessarily safe.  Irrespective, see
> the TODO.  The foreach_vcpu() is only a stopgap until some cleanup
> structure changes come along (which I had queued behind this patch anyway).

Well, fair enough (for all of the points). You have my R-b already,
and all you need to do (if you haven't already) is re-base the
change, as the conflicting one of mine (which was triggered by
reviewing yours) has gone in already.
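Andrew's partial-initialisation argument earlier in the thread can be
modelled in miniature: teardown re-checks both the domain-level paging
mode and the per-vCPU pointer, so it is safe (and a no-op) at any stage
of a failed domain_create()/vcpu_create(). The structures below are
simplified illustrations, not the real Xen ones.

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for struct domain / struct vcpu. */
struct domain { bool hap_enabled; };
struct vcpu {
    struct domain *domain;
    void *paging_mode;   /* NULL until per-vCPU paging init has run */
    int teardowns;       /* counts how often real work happened */
};

static void hap_vcpu_teardown(struct vcpu *v)
{
    /* Early domain_create() failure: paging never enabled for d. */
    if ( !v->domain->hap_enabled )
        return;

    /* Early vcpu_create() failure: d's paging set up, v's is not. */
    if ( !v->paging_mode )
        return;

    v->teardowns++;          /* real teardown work would go here */
    v->paging_mode = NULL;   /* a second call now takes the early exit */
}
```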

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 08:37:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 08:37:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57323.100288 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krGhV-0007FB-Cm; Mon, 21 Dec 2020 08:37:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57323.100288; Mon, 21 Dec 2020 08:37:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krGhV-0007F4-9H; Mon, 21 Dec 2020 08:37:53 +0000
Received: by outflank-mailman (input) for mailman id 57323;
 Mon, 21 Dec 2020 08:37:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DEM5=FZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krGhT-0007Ez-VQ
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 08:37:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6120ab75-4c3e-4124-8ae7-59bd1ad179a0;
 Mon, 21 Dec 2020 08:37:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3CC31ACC4;
 Mon, 21 Dec 2020 08:37:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6120ab75-4c3e-4124-8ae7-59bd1ad179a0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608539870; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=rwfxVfxsVtduoLpwnHgK9G3GdMN/1hPnCL8HdzMbmEI=;
	b=f2QaZJogpvliD7cCfhFFOGuURUubwsy8cRp4/rPCAwulo+cLyTdbqujxzBjrkfViIl9xFp
	e9QgpsDXFhK5DdrnH1hPfajaoIbJsmtSuHP8VRzydJwZk9TCdPZptBu9eh84/pi9y4uXc6
	crgwGPEHFq5lKendFJMYmcfUoNSL9BY=
Subject: Re: [PATCH] xen/Kconfig: Correct the NR_CPUS description
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, hanetzer@startmail.com,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20201218233842.5277-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b5c1cd74-f418-1ad1-8cd0-7def522a4ca4@suse.com>
Date: Mon, 21 Dec 2020 09:37:52 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <20201218233842.5277-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.12.2020 00:38, Andrew Cooper wrote:
> The description "physical CPUs" is especially wrong, as it implies the number
> of sockets, which tops out at 8 on all but the very biggest servers.
> 
> NR_CPUS is the number of logical entities the scheduler can use.
> 
> Reported-by: hanetzer@startmail.com

This wasn't on xen-devel, was it?

> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 08:47:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 08:47:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57328.100300 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krGqr-0008E1-CW; Mon, 21 Dec 2020 08:47:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57328.100300; Mon, 21 Dec 2020 08:47:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krGqr-0008Du-7o; Mon, 21 Dec 2020 08:47:33 +0000
Received: by outflank-mailman (input) for mailman id 57328;
 Mon, 21 Dec 2020 08:47:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DEM5=FZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krGqp-0008Dp-J5
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 08:47:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d8efa608-741b-4492-8b87-2eb1be8f1a12;
 Mon, 21 Dec 2020 08:47:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E1F21ACF5;
 Mon, 21 Dec 2020 08:47:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d8efa608-741b-4492-8b87-2eb1be8f1a12
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608540450; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hwBrUMrCjWnUCYAEqEZmf/fMzxY8EPZLf1RdIQD/7gk=;
	b=Ut3Drzid7UtcJDORhaXbV9Q6J7sWtXXPaJE+6FTnbuBUEEkq4IsTOmUJ9KynYsoZOdJU8o
	Zl4adUv+YjNWxCeMvSs/+IgcTczpq43rtaFV94fzu9W7hKK2nnk1gNflI/XmL0XbsI4ulw
	Y3wkYraQBBIa129fCLF05MEFggAlncs=
Subject: Re: [PATCH v2] xen: Rework WARN_ON() to return whether a warning was
 triggered
To: Julien Grall <julien@xen.org>
Cc: bertrand.marquis@arm.com, Rahul.Singh@arm.com,
 Julien Grall <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
References: <20201218133054.7744-1-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ec94b6ae-2386-221a-ec01-bc4650e799a8@suse.com>
Date: Mon, 21 Dec 2020 09:47:32 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <20201218133054.7744-1-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.12.2020 14:30, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> So far, our implementation of WARN_ON() cannot be used in the following
> situation:
> 
> if ( WARN_ON() )
>     ...
> 
> This is because WARN_ON() doesn't return whether a warning has been
> triggered. Such a construction can be handy if you want to print more
> information and also dump the stack trace.
> 
> Therefore, rework the WARN_ON() implementation to return whether a
> warning was triggered. The idea was borrowed from Linux
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Juergen Gross <jgross@suse.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

In particular to clarify my prior concerns have been addressed:

Acked-by: Jan Beulich <jbeulich@suse.com>

Jan
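The Linux-style construct the patch borrows can be sketched as follows -
a simplified user-space model using a GCC statement expression, not the
real Xen macro, whose reporting differs:

```c
#include <stdbool.h>
#include <stdio.h>

/* Simplified model of a WARN_ON() that propagates its condition,
 * so callers can write: if ( WARN_ON(cond) ) ... */
#define WARN() fprintf(stderr, "WARNING at %s:%d\n", __FILE__, __LINE__)

#define WARN_ON(p) ({                   \
    bool warn_on_ret_ = !!(p);          \
    if ( warn_on_ret_ )                 \
        WARN();                         \
    warn_on_ret_;                       \
})
```

With this shape, `if ( WARN_ON(x) ) dump_more_state();` both emits the
warning and lets the caller print additional information, which is
exactly the construct the commit message motivates.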


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 08:51:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 08:51:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57332.100312 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krGuF-0000f5-0G; Mon, 21 Dec 2020 08:51:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57332.100312; Mon, 21 Dec 2020 08:51:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krGuE-0000ey-Se; Mon, 21 Dec 2020 08:51:02 +0000
Received: by outflank-mailman (input) for mailman id 57332;
 Mon, 21 Dec 2020 08:51:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DEM5=FZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krGuD-0000es-Or
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 08:51:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2435b063-f81a-4132-a024-c41c00d253be;
 Mon, 21 Dec 2020 08:50:57 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E58ABACC4;
 Mon, 21 Dec 2020 08:50:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2435b063-f81a-4132-a024-c41c00d253be
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608540657; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qGghG70X6CDqUmgyIEtDHZH6iY/LD9W9rHtNUkpPDTE=;
	b=aSdUOez1DiXKpXj1E/hUSws9wBo1BUygdZCV4grp7BJ8P1jlw4HWVmufH5PWKJsij8G07b
	4ZUpUGTTErkWQGpUDNLggAr4wxjYJcMF1Dw4TxSK+TVhxBmgRreTAMsgWyQYABqC4udBtV
	9J+PXAOkPfojcYc9RrDiNgcWwIWBiQo=
Subject: Re: [XEN PATCH 1/3] xen/arch/x86: don't insert timestamp when
 SOURCE_DATE_EPOCH is defined
To: Maximilian Engelhardt <maxi@daemonizer.de>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1608319634.git.maxi@daemonizer.de>
 <5f715df2808dcd24b9fab95cd02522d16daba86c.1608319634.git.maxi@daemonizer.de>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c89da8a4-7d84-2bcc-ff13-2a4b3002a98f@suse.com>
Date: Mon, 21 Dec 2020 09:50:59 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <5f715df2808dcd24b9fab95cd02522d16daba86c.1608319634.git.maxi@daemonizer.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 18.12.2020 21:42, Maximilian Engelhardt wrote:
> By default a timestamp gets added to the xen efi binary. Unfortunately
> ld doesn't seem to provide a way to set a custom date, like from
> SOURCE_DATE_EPOCH, so set a zero value for the timestamp (option
> --no-insert-timestamp) if SOURCE_DATE_EPOCH is defined. This makes
> reproducible builds possible.
> 
> This is an alternative to the patch suggested in [1]. This patch only
> omits the timestamp when SOURCE_DATE_EPOCH is defined.
> 
> [1] https://lists.xenproject.org/archives/html/xen-devel/2020-10/msg02161.html
> 
> Signed-off-by: Maximilian Engelhardt <maxi@daemonizer.de>

Acked-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 09:01:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 09:01:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57336.100323 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krH46-0001dg-TO; Mon, 21 Dec 2020 09:01:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57336.100323; Mon, 21 Dec 2020 09:01:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krH46-0001dZ-QM; Mon, 21 Dec 2020 09:01:14 +0000
Received: by outflank-mailman (input) for mailman id 57336;
 Mon, 21 Dec 2020 09:01:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DEM5=FZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krH46-0001dU-4Y
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 09:01:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4360ffe8-ca25-412a-b9ea-642194376e0a;
 Mon, 21 Dec 2020 09:01:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 871DDACF5;
 Mon, 21 Dec 2020 09:01:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4360ffe8-ca25-412a-b9ea-642194376e0a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608541271; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=GPJmhEHMAFHj5aIX4oFLhaRuz2/VupPkpipNhRSKeEo=;
	b=dqcdiHWAQB3eAuvYHupI6q2I/NgW1YRXzn6+UtNtFh9gNnYiPgZqYNzgNV85o0Buvg32ET
	vox9Z6u4cCSTSMl0aqdLVs7jnvC7aMXd+0ZxnSMCx4Eud+tszZ8nPDY14dJFPgYAbf+TVx
	zTx+/Qe4SQkjSGRkYdUNt6FYN6FgfvQ=
Subject: Re: [XEN PATCH 3/3] docs: set date to SOURCE_DATE_EPOCH if available
To: Maximilian Engelhardt <maxi@daemonizer.de>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1608319634.git.maxi@daemonizer.de>
 <23352f4835ae58c5cae6f425d5a8378f3d694055.1608319634.git.maxi@daemonizer.de>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3c3edc91-7d22-289f-575b-9fd3c2ec4bc8@suse.com>
Date: Mon, 21 Dec 2020 10:01:14 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <23352f4835ae58c5cae6f425d5a8378f3d694055.1608319634.git.maxi@daemonizer.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.12.2020 21:42, Maximilian Engelhardt wrote:
> --- a/docs/Makefile
> +++ b/docs/Makefile
> @@ -3,7 +3,13 @@ include $(XEN_ROOT)/Config.mk
>  -include $(XEN_ROOT)/config/Docs.mk
>  
>  VERSION		:= $(shell $(MAKE) -C $(XEN_ROOT)/xen --no-print-directory xenversion)
> -DATE		:= $(shell date +%Y-%m-%d)
> +
> +DATE_FMT	:= +%Y-%m-%d
> +ifdef SOURCE_DATE_EPOCH
> +DATE		:= $(shell date -u -d "@$(SOURCE_DATE_EPOCH)" "$(DATE_FMT)" 2>/dev/null || date -u -r "$(SOURCE_DATE_EPOCH)" "$(DATE_FMT)" 2>/dev/null || date -u "$(DATE_FMT)")

Looking at the doc for a (deliberately) old "date", I can't find
any mention of the -d "@..." syntax. I take it the command would
fail on that system. It would then go on to try the -r variant,
which has entirely different meaning on GNU (Linux) systems.

docs/ being subject to configuring, why don't you determine the
capabilities of "date" there and invoke just the one command
that was found suitable for the system?

Jan
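A configure-time probe along the lines Jan suggests might look like the
sketch below (function and variable names are illustrative; GNU date
takes -d "@SECONDS", while BSD date takes -r SECONDS):

```shell
# Probe once which "date" flavour this system has, so the Makefile can
# later invoke only the one command known to work.
probe_date_flavour() {
    epoch="${SOURCE_DATE_EPOCH:-0}"
    if date -u -d "@$epoch" +%Y-%m-%d >/dev/null 2>&1; then
        echo gnu    # GNU/Linux: date -u -d "@SECONDS"
    elif date -u -r "$epoch" +%Y-%m-%d >/dev/null 2>&1; then
        echo bsd    # BSD/macOS: date -u -r SECONDS
    else
        echo none   # neither form works; fall back to the current date
    fi
}
```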


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 10:29:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 10:29:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57362.100351 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krIRv-0000JJ-NW; Mon, 21 Dec 2020 10:29:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57362.100351; Mon, 21 Dec 2020 10:29:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krIRv-0000JA-KX; Mon, 21 Dec 2020 10:29:55 +0000
Received: by outflank-mailman (input) for mailman id 57362;
 Mon, 21 Dec 2020 10:29:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wm/H=FZ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1krIRt-0000Im-Um
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 10:29:54 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cb81cb85-8bb8-4991-8820-b689019f3b6e;
 Mon, 21 Dec 2020 10:29:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb81cb85-8bb8-4991-8820-b689019f3b6e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608546592;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=X5nTjWghJrP336akGDxHsgts8oU+GCDoRaZNB+nMrcM=;
  b=YzkJSEm2QJC52AEvHNzveMhohpKui8kxKNHLRowROYHvoN2sRbPRWx5F
   vyjkaDlz+SCRPC5TMwRBNNKNrJeOeRuKDBvPVCnERQmQZeyI8sm/NFjO7
   tzA7vH9Kfgcv4d3a7sdNU8FM408jP2qccMUflGe5VJNETBjL/Gb2+6F6d
   o=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: h/tm9huJx14nSvgZYpabcxKlgicIPIguIGJkRRh41NQVdNuBmXxEIusYyPWhiOxjdeJQnIMk0p
 UVP3wEvG9WtwKbk43xA5V4AuX9pRcccoSQFUHevmGAGvLmq8DJw/NEaToRwuNGibwKnERiMWpi
 B9SqeipPNjEXFQ2YD3TCK3X4CqtyfVozXAQTKSQ7LBhSOUak6nEm0xMvC1ZdB9tO33sjguYVHS
 LOPETS9MLlayonJt1Rl80hPf4EyvufllmImbj1X+GQhJUQy0V8fx0/+HKo8LDUpGE7Y1SYTWHY
 TPU=
X-SBRS: 5.2
X-MesageID: 33646630
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,436,1599537600"; 
   d="scan'208";a="33646630"
Subject: Re: [PATCH] xen/Kconfig: Correct the NR_CPUS description
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	<hanetzer@startmail.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20201218233842.5277-1-andrew.cooper3@citrix.com>
 <b5c1cd74-f418-1ad1-8cd0-7def522a4ca4@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <9d0c49e0-fe2c-604c-98bc-52ee0501a8fb@citrix.com>
Date: Mon, 21 Dec 2020 10:29:44 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <b5c1cd74-f418-1ad1-8cd0-7def522a4ca4@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 21/12/2020 08:37, Jan Beulich wrote:
> On 19.12.2020 00:38, Andrew Cooper wrote:
>> The description "physical CPUs" is especially wrong, as it implies the number
>> of sockets, which tops out at 8 on all but the very biggest servers.
>>
>> NR_CPUS is the number of logical entities the scheduler can use.
>>
>> Reported-by: hanetzer@startmail.com
> This wasn't on xen-devel, was it?

No.  It was on IRC, hence the rather late patch.

>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>

Thanks.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 10:30:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 10:30:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57360.100336 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krIRj-0000Fa-AH; Mon, 21 Dec 2020 10:29:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57360.100336; Mon, 21 Dec 2020 10:29:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krIRj-0000FT-6I; Mon, 21 Dec 2020 10:29:43 +0000
Received: by outflank-mailman (input) for mailman id 57360;
 Mon, 21 Dec 2020 10:29:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krIRi-0000FL-Bn; Mon, 21 Dec 2020 10:29:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krIRi-000317-19; Mon, 21 Dec 2020 10:29:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krIRh-0006l3-Ks; Mon, 21 Dec 2020 10:29:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1krIRh-0007ib-KE; Mon, 21 Dec 2020 10:29:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jMUphR1INmu9pJKp6wKmlBBy4gR/xTPDN0afaCuZqSM=; b=TawvC36ZnQxjFZEAAdipR+TnCg
	DW4VXxpkE0lCOMJ3uZglzKNpq2mUz+f/i9oFMOUMZS5Q3VJtxfyOJe8xLpuLz3pgXwq6S9SxTJMs2
	jgFzXr8EX3P4e25hOgz7uY+QLm3ozuHaHXKXSdav7O1rsrcNRREEx+jCC1pPLxBw6v6M=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157746-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157746: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=6a447b0e3151893f6d4a889956553c06d2e775c6
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 21 Dec 2020 10:29:41 +0000

flight 157746 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157746/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                6a447b0e3151893f6d4a889956553c06d2e775c6
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  142 days
Failing since        152366  2020-08-01 20:49:34 Z  141 days  244 attempts
Testing same since   157746  2020-12-20 19:11:40 Z    0 days    1 attempts

------------------------------------------------------------
4294 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 956247 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 13:16:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 13:16:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57384.100372 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krL2f-0006KM-1R; Mon, 21 Dec 2020 13:16:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57384.100372; Mon, 21 Dec 2020 13:16:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krL2e-0006KF-Ui; Mon, 21 Dec 2020 13:16:00 +0000
Received: by outflank-mailman (input) for mailman id 57384;
 Mon, 21 Dec 2020 13:15:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DEM5=FZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krL2d-0006KA-Ir
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 13:15:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3e98f3e6-2bdd-4595-a9d6-14d67f4a1a83;
 Mon, 21 Dec 2020 13:15:57 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2BDB6B71A;
 Mon, 21 Dec 2020 13:15:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e98f3e6-2bdd-4595-a9d6-14d67f4a1a83
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608556556; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2TelxR8ifMOk0sGK5DT96UzSJXOJGE9O9ANLZ2XNnZ4=;
	b=aKcUpRmd0X71kF5jwsdGeuUSJJTBzGqnCLkkRTsWat4kwNZDOJ6R/2eR9smQ4Y4y+jJUmT
	83jdYM3QgiAhjti0Ab4tfkEfw42fdRA13Lw5y7gE9yPWc/w4g9ssfiMan2wfb/mt5p6Wq0
	3o81A/z5YxWJiJd6etS395YbpdjwLYE=
Subject: Re: [PATCH] xsm/dummy: harden against speculative abuse
From: Jan Beulich <jbeulich@suse.com>
Cc: Daniel de Graaf <dgdegra@tycho.nsa.gov>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <34833712-93d9-1b4e-1ebf-9df5ea93d19f@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <75e04672-f516-e068-a743-9046cef77768@suse.com>
Date: Mon, 21 Dec 2020 14:15:55 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <34833712-93d9-1b4e-1ebf-9df5ea93d19f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 17.12.2020 12:57, Jan Beulich wrote:
> --- a/xen/include/xsm/dummy.h
> +++ b/xen/include/xsm/dummy.h
> @@ -76,20 +76,20 @@ static always_inline int xsm_default_act
>      case XSM_HOOK:
>          return 0;
>      case XSM_TARGET:
> -        if ( src == target )
> +        if ( evaluate_nospec(src == target) )
>          {
>              return 0;
>      case XSM_XS_PRIV:
> -            if ( is_xenstore_domain(src) )
> +            if ( evaluate_nospec(is_xenstore_domain(src)) )
>                  return 0;
>          }
>          /* fall through */
>      case XSM_DM_PRIV:
> -        if ( target && src->target == target )
> +        if ( target && evaluate_nospec(src->target == target) )
>              return 0;
>          /* fall through */
>      case XSM_PRIV:
> -        if ( src->is_privileged )
> +        if ( !is_control_domain(src) )
>              return 0;
>          return -EPERM;

And a stray ! slipped in here, inverting the privilege check. Now fixed.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 14:56:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 14:56:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57396.100399 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krMbd-0006Tz-KO; Mon, 21 Dec 2020 14:56:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57396.100399; Mon, 21 Dec 2020 14:56:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krMbd-0006Ts-H7; Mon, 21 Dec 2020 14:56:13 +0000
Received: by outflank-mailman (input) for mailman id 57396;
 Mon, 21 Dec 2020 14:56:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wm/H=FZ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1krMbc-0006Tn-7p
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 14:56:12 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a2f2007e-7608-4198-a1c0-5d1910bf3bba;
 Mon, 21 Dec 2020 14:56:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a2f2007e-7608-4198-a1c0-5d1910bf3bba
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608562565;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=IsbdRLNgcktPXpB+x8LBH5Rgb6C8AoFkm8H3FotSQd0=;
  b=JuS65IJHtcoeyEuM9VFymo9meOFHC2e09SKf9+MMRn1R4t3zR288+6zB
   nsz48SZvd2x3GmKYMxRZowindyjCJVylWQTS+aMBreQFww2CVESOMzsxf
   oBtfxe4M4mN5MST02m7SLUVAzMHJsO6ABC0KIVL7ypr31yX7qjSy7TuHk
   8=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Vp7Vx3JoRUDp8fpAEngbl2SuTTqZ70nSOPeTLXoTu9IcLU8PLIqMZMm0imNH6jD2pN/VKylqEx
 t9VcCv9Ao1xYsD8XqgLSj5gk3ugS7S12ZPRLeII7oBTN4VlsuSj3fOj6dRnumCx6vS6mBVcR2C
 G1ucvulyMFduAUWMoxVjZIu3mVCksfzBFLd4wi0OyzaaJ48wIweK+lbEVXz6mvJFPqhSnjBXIC
 +B57XWs9beCF2POTOIiJk50uIdg/zKGRC8460HzJrJ6Z4qAbq5xiJ4XXSTIvguXu+5JO17uOkn
 tak=
X-SBRS: 5.2
X-MesageID: 33665476
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,436,1599537600"; 
   d="scan'208";a="33665476"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/shadow: Fix build with !CONFIG_SHADOW_PAGING
Date: Mon, 21 Dec 2020 14:55:25 +0000
Message-ID: <20201221145525.30804-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Implement a stub for shadow_vcpu_teardown()

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/include/asm-x86/shadow.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/include/asm-x86/shadow.h b/xen/include/asm-x86/shadow.h
index 29a86ed78e..e25f9604d8 100644
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -99,6 +99,7 @@ int shadow_set_allocation(struct domain *d, unsigned int pages,
 
 #else /* !CONFIG_SHADOW_PAGING */
 
+#define shadow_vcpu_teardown(v) ASSERT(is_pv_vcpu(v))
 #define shadow_teardown(d, p) ASSERT(is_pv_domain(d))
 #define shadow_final_teardown(d) ASSERT(is_pv_domain(d))
 #define shadow_enable(d, mode) \
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 21 15:09:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 15:09:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57400.100411 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krMoq-0007Wu-SL; Mon, 21 Dec 2020 15:09:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57400.100411; Mon, 21 Dec 2020 15:09:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krMoq-0007Wn-PJ; Mon, 21 Dec 2020 15:09:52 +0000
Received: by outflank-mailman (input) for mailman id 57400;
 Mon, 21 Dec 2020 15:09:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DEM5=FZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krMop-0007Wi-Oq
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 15:09:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e3b18388-26d1-45cc-a33e-96841e2b7468;
 Mon, 21 Dec 2020 15:09:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9C1D4AD4D;
 Mon, 21 Dec 2020 15:09:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e3b18388-26d1-45cc-a33e-96841e2b7468
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608563389; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Aq/WgrlKj7PMenn2wOrpT/3QAqdyONtFw9o4VST2ibM=;
	b=LqpoNSZUagxxess84XqHyY465E6bcnXQcX4P0ki4TJiJj4Y9HZvMOxku7uEPkEaABAuTt2
	QFEWB1IHGsESOf60uqpObnXyDfqoYBncISkosxnFeFHoKRR0adY0ygK7GVnOXnAH/xcPZe
	he7E3IrLyfLjz8Wgg9qg/KscXCu3AEU=
Subject: Re: [PATCH] x86/shadow: Fix build with !CONFIG_SHADOW_PAGING
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20201221145525.30804-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <405be62c-7dfc-378a-2ad1-fd6685ccfa23@suse.com>
Date: Mon, 21 Dec 2020 16:09:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <20201221145525.30804-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 21.12.2020 15:55, Andrew Cooper wrote:
> Implement a stub for shadow_vcpu_teardown()
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 16:14:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 16:14:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57420.100431 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krNon-0005WD-Ct; Mon, 21 Dec 2020 16:13:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57420.100431; Mon, 21 Dec 2020 16:13:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krNon-0005W6-8z; Mon, 21 Dec 2020 16:13:53 +0000
Received: by outflank-mailman (input) for mailman id 57420;
 Mon, 21 Dec 2020 16:13:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krNol-0005Vy-RI; Mon, 21 Dec 2020 16:13:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krNol-0000nR-Hh; Mon, 21 Dec 2020 16:13:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krNol-0002LN-5A; Mon, 21 Dec 2020 16:13:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1krNol-0007S0-4S; Mon, 21 Dec 2020 16:13:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zRXH6xxEwkB8IthJ7556siJugkqXq9vHGe8ljaY0A6A=; b=jjlQz8yts6ZsRjLyodTsbSs1o/
	RbH+pD8aKrqzrr0bo86xvdyiWhcpVTLyhKrU3va32tWRj/8ZpeTkfgZNrfTqpPe/8hl//5q2pXl+X
	rLpvXXBg1dE0fCkJ3iv0fywSMBytZjfZxzl+lt8ed4l6FNlZ8epm/me7RX0VlSPmUbNg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157752-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157752: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a05f8ecd88f15273d033b6f044b850a8af84a5b8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 21 Dec 2020 16:13:51 +0000

flight 157752 qemu-mainline real [real]
flight 157760 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157752/
http://logs.test-lab.xenproject.org/osstest/logs/157760/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  123 days
Failing since        152659  2020-08-21 14:07:39 Z  122 days  251 attempts
Testing same since   157670  2020-12-18 13:57:58 Z    3 days    4 attempts

------------------------------------------------------------
323 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 79306 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 16:21:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 16:21:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57428.100445 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krNwT-0006RP-6I; Mon, 21 Dec 2020 16:21:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57428.100445; Mon, 21 Dec 2020 16:21:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krNwT-0006RI-2y; Mon, 21 Dec 2020 16:21:49 +0000
Received: by outflank-mailman (input) for mailman id 57428;
 Mon, 21 Dec 2020 16:21:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9Nof=FZ=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1krNwS-0006RD-2T
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 16:21:48 +0000
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2aa4506d-b894-4cfd-bf27-71646c238caa;
 Mon, 21 Dec 2020 16:21:47 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0BLGEYGM028818;
 Mon, 21 Dec 2020 16:21:44 GMT
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by userp2130.oracle.com with ESMTP id 35h8xqx5mn-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Mon, 21 Dec 2020 16:21:43 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0BLGFW0m014656;
 Mon, 21 Dec 2020 16:21:43 GMT
Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75])
 by userp3020.oracle.com with ESMTP id 35hu9q8x50-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 21 Dec 2020 16:21:43 +0000
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
 by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 0BLGLgmE000659;
 Mon, 21 Dec 2020 16:21:42 GMT
Received: from [10.39.211.32] (/10.39.211.32)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Mon, 21 Dec 2020 08:21:42 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2aa4506d-b894-4cfd-bf27-71646c238caa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=vXlthgkNB9QQlX/LZXXo/ZcJJ1O8/cQ/XGgeABWEs+w=;
 b=zxbo+zVvMoGXi/Por812DhVBTmTkv44ge8xGrSlvh4SSZUpBcF5owbMcVXQqinWUwdZa
 PxgV4wXmoi/Ja7Nxywy/m2sRvJkWwBrDyyHGMQSpeod1FnZeoUj40ktLzX+XZJrUOUwh
 kznwoMWqw8cvzArrzRJuhNPmchJMmGLqfHeGSvQy07FOXzwKdQ4KMP/jrtUuRtYoIBcE
 k5KSkVULZ1ZfG+uTu7KYk3yFvucc5unCEtkf33FGCjsSunBtqTUNfpU0nv1LeTgZ3OcY
 tKgeh0hLn3ADp3fuaGhX2Rik7JwESl2KV3dp6DT7yC5q/Iiqky2Bmr5cQA2HMth5eQIt YQ== 
Subject: Re: XSA-351 causing Solaris-11 systems to panic during boot.
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, Cheyenne Wills
 <cheyenne.wills@gmail.com>,
        Andrew Cooper <andrew.cooper3@citrix.com>
References: <CAHpsFVc4AAm6L0rKUuV47ydOjtw7XAgFnDZxRjdCL0OHXJERDw@mail.gmail.com>
 <7bca24cb-a3af-b54d-b224-3c2a316859dd@suse.com>
 <4fc3532b-f53f-2a15-ce64-f857816b0566@oracle.com>
 <f4ff3d16-40f6-e8a1-fcdd-ca52e1f52ca6@suse.com>
 <c90622c4-f9e0-8b6d-ab46-bba0cbfc0fd9@oracle.com>
 <0430337a-6fcd-9471-4455-838390401220@citrix.com>
 <c6e05b63-b066-9bd0-9da1-1fc089cd1aea@oracle.com>
 <10958d4a-154f-a524-35e9-a75eaf50fe55@oracle.com>
 <90740e33-c69a-16d7-2622-fa57a1f34272@suse.com>
From: boris.ostrovsky@oracle.com
Organization: Oracle Corporation
Message-ID: <0dbfa20a-5c3d-77c5-1ef0-4baf74e60195@oracle.com>
Date: Mon, 21 Dec 2020 11:21:41 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <90740e33-c69a-16d7-2622-fa57a1f34272@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9842 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0 phishscore=0 bulkscore=0
 malwarescore=0 mlxscore=0 mlxlogscore=999 suspectscore=0 spamscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2012210116
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9842 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0 clxscore=1015
 priorityscore=1501 mlxscore=0 lowpriorityscore=0 adultscore=0
 mlxlogscore=999 suspectscore=0 phishscore=0 impostorscore=0 bulkscore=0
 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2012210116


On 12/21/20 3:21 AM, Jan Beulich wrote:
> On 18.12.2020 21:43, boris.ostrovsky@oracle.com wrote:
>> Can we do something like KVM's ignore_msrs (but probably returning 0 on reads, to avoid leaking data from the system)? It would allow us to deal with cases where a guest is suddenly unable to boot after a hypervisor update (especially from pre-4.14). It won't help in all cases, since some MSRs may be expected to be non-zero, but I think it will cover a large number of them. (And it will certainly do what Jan is asking above, while not being specific to this particular breakage.)
> This would re-introduce the problem with detection (by guests) of certain
> features lacking suitable CPUID bits. Guests would no longer observe the
> expected #GP(0), and hence be at risk of misbehaving. Hence at the very
> least such an option would need to be per-domain rather than (like for
> KVM) global,


Yes, of course.


>  and use of it should then imo be explicitly unsupported.


Unsupported or not recommended? There are options that are not recommended from a security perspective but are still supported, for example `spec-ctrl=no` (although that's a global setting).


>  And
> along the lines of what KVM has, this may want to be a tristate so the
> ignoring can be both silent and verbose.


OK.


ignore_msrs="never" (default)

ignore_msrs="silent"

ignore_msrs="verbose"
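
Such a tristate could be parsed along these lines. This is only an illustrative sketch based on the values proposed in this thread; the option name, enum names, and function are assumptions, not actual Xen code:

```c
#include <string.h>

/* Hypothetical tristate for an "ignore_msrs" command line option, as
 * discussed above.  All names here are illustrative. */
enum ignore_msrs {
    IGNORE_MSRS_NEVER,   /* default: unknown MSR accesses #GP(0) */
    IGNORE_MSRS_SILENT,  /* reads return 0, writes discarded, no log */
    IGNORE_MSRS_VERBOSE, /* as silent, but each access is logged */
};

static int parse_ignore_msrs(const char *s, enum ignore_msrs *out)
{
    if ( !strcmp(s, "never") )
        *out = IGNORE_MSRS_NEVER;
    else if ( !strcmp(s, "silent") )
        *out = IGNORE_MSRS_SILENT;
    else if ( !strcmp(s, "verbose") )
        *out = IGNORE_MSRS_VERBOSE;
    else
        return -1; /* would be -EINVAL in Xen proper */

    return 0;
}
```

Per the discussion above, such a setting would have to be per-domain rather than global, so the parser would feed a domain creation flag rather than a hypervisor-wide variable.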



-boris



From xen-devel-bounces@lists.xenproject.org Mon Dec 21 16:23:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 16:23:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57432.100457 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krNxn-0006ao-Lb; Mon, 21 Dec 2020 16:23:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57432.100457; Mon, 21 Dec 2020 16:23:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krNxn-0006ah-IY; Mon, 21 Dec 2020 16:23:11 +0000
Received: by outflank-mailman (input) for mailman id 57432;
 Mon, 21 Dec 2020 16:23:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1krNxm-0006aa-68
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 16:23:10 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1krNxk-0000xl-AI; Mon, 21 Dec 2020 16:23:08 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1krNxk-0003h7-2e; Mon, 21 Dec 2020 16:23:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=1Gom1xzp2asr56xabVOws+HzHH9sR5dcNL9rnIkEcFg=; b=kWuYzr8abxO2+LI4nsc2Omoenu
	W9Bb0r7xbDg41DZA515RFsBH+UM8HD5utJXLQW6fj9zXOEUn+r4nOHHRxv65z4SilNiyzLiOgN0qf
	xoKhi3bFsGiTvOeecfwhh0Ex5kjUPkBKhgY71A5IwP7drQ+sy2auc2JZj8F71OlOcbq4=;
Subject: Re: [PATCH v2 1/4] x86: verify function type (and maybe attribute) in
 switch_stack_and_jump()
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <f4179ee3-56e4-ab18-7aae-55281c4d4412@suse.com>
 <792c442d-c05a-7a00-c807-b94a54bca94f@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <896456a9-16cb-2549-f3a1-e431d5e6479c@xen.org>
Date: Mon, 21 Dec 2020 16:23:06 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <792c442d-c05a-7a00-c807-b94a54bca94f@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 15/12/2020 16:11, Jan Beulich wrote:
> It is imperative that the functions passed here take no arguments,
> return no values, and don't return in the first place. While the type
> can be checked uniformly, the attribute check is limited to gcc 9 and
> newer (no clang support for this so far afaict).
> 
> Note that I didn't want to have the "true" fallback "implementation" of
> __builtin_has_attribute(..., __noreturn__) generally available, as
> "true" may not be a suitable fallback in other cases.
> 
> Note further that the noreturn addition to startup_cpu_idle_loop()'s
> declaration requires adding unreachable() to Arm's
> switch_stack_and_jump(), or else the build would break. I suppose this
> should have been there already.
> 
> For vmx_asm_do_vmentry() along with adding the attribute, also restrict
> its scope.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Juergen Gross <jgross@suse.com>
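
For illustration, the two checks the quoted patch describes can be sketched roughly as follows. This is a hypothetical stand-in, not the actual switch_stack_and_jump() machinery; idle_loop() here merely plays the role of functions like startup_cpu_idle_loop():

```c
/* Sketch of the build-time checks described above: verify that a function
 * has type void(void), and (gcc 9 or newer only) that it carries the
 * noreturn attribute. */
typedef void noreturn_fn(void);

static void __attribute__((noreturn, unused)) idle_loop(void)
{
    for ( ;; )
        ;
}

/* The type check works uniformly on modern gcc and clang. */
_Static_assert(__builtin_types_compatible_p(__typeof__(idle_loop),
                                            noreturn_fn),
               "function must be void(void)");

/* The attribute check needs __builtin_has_attribute(), gcc 9+ only;
 * as the patch notes, clang has no equivalent so far. */
#if defined(__GNUC__) && !defined(__clang__) && __GNUC__ >= 9
_Static_assert(__builtin_has_attribute(idle_loop, __noreturn__),
               "function must be noreturn");
#endif
```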

For the Arm bits:

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 16:36:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 16:36:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57437.100470 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krOAT-0007ZL-Ro; Mon, 21 Dec 2020 16:36:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57437.100470; Mon, 21 Dec 2020 16:36:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krOAT-0007ZE-Oj; Mon, 21 Dec 2020 16:36:17 +0000
Received: by outflank-mailman (input) for mailman id 57437;
 Mon, 21 Dec 2020 16:36:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DEM5=FZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krOAS-0007Z3-CF
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 16:36:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e4088bee-d94e-4b6b-bac8-e0bed1959b02;
 Mon, 21 Dec 2020 16:36:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0F0BCAD2B;
 Mon, 21 Dec 2020 16:36:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e4088bee-d94e-4b6b-bac8-e0bed1959b02
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608568573; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2RfH/dqkXmgIz0W03cuMQrjz/PDs2wXT3pmXDZt+7Bc=;
	b=AZucA73VSZOl1F+v/odFdgJ6bsQN4rMePN4yKkL4686a5HBRzCspDj1h/OGIl1jbV2alCa
	A7ZeuJe371L1HijDcLoEfheMRAvJgqzSSWovQsowZA9HIiBohaa3ysGfim014Rx9avQ7Cu
	c+OsabUUQ4Mbm1vCSwRJdQbF8Cx0TB8=
Subject: Re: [PATCH v2] x86/intel: insert Ice Lake X (server) model numbers
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Cc: andrew.cooper3@citrix.com, roger.pau@citrix.com, wl@xen.org,
 jun.nakajima@intel.com, kevin.tian@intel.com, xen-devel@lists.xenproject.org
References: <1603075646-24995-1-git-send-email-igor.druzhinin@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d6265c58-b553-3dee-9817-1a8673472972@suse.com>
Date: Mon, 21 Dec 2020 17:36:12 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <1603075646-24995-1-git-send-email-igor.druzhinin@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.10.2020 04:47, Igor Druzhinin wrote:
> LBR, C-state MSRs and if_pschange_mc erratum applicability should correspond
> to Ice Lake desktop according to External Design Specification vol.2.
> 
> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> ---
> Changes in v2:
> - keep partial sorting
> 
> Andrew, since you have access to these documents, please review as you have time.

Coming back to this - the recent SDM update inserted at least the
model numbers, but besides 6a it also lists 6c. Judging from the
majority of additions happening in pairs, I wonder whether we
couldn't (reasonably safely) add 6c then here as well. Of course
I still can't ack the change either way with access to just the
SDM...

Jan

> --- a/xen/arch/x86/acpi/cpu_idle.c
> +++ b/xen/arch/x86/acpi/cpu_idle.c
> @@ -181,6 +181,7 @@ static void do_get_hw_residencies(void *arg)
>      case 0x55:
>      case 0x5E:
>      /* Ice Lake */
> +    case 0x6A:
>      case 0x7D:
>      case 0x7E:
>      /* Kaby Lake */
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2427,6 +2427,7 @@ static bool __init has_if_pschange_mc(void)
>      case 0x4e: /* Skylake M */
>      case 0x5e: /* Skylake D */
>      case 0x55: /* Skylake-X / Cascade Lake */
> +    case 0x6a: /* Ice Lake-X */
>      case 0x7d: /* Ice Lake */
>      case 0x7e: /* Ice Lake */
>      case 0x8e: /* Kaby / Coffee / Whiskey Lake M */
> @@ -2775,7 +2776,7 @@ static const struct lbr_info *last_branch_msr_get(void)
>          /* Goldmont Plus */
>          case 0x7a:
>          /* Ice Lake */
> -        case 0x7d: case 0x7e:
> +        case 0x6a: case 0x7d: case 0x7e:
>          /* Tremont */
>          case 0x86:
>          /* Kaby Lake */
> 



From xen-devel-bounces@lists.xenproject.org Mon Dec 21 16:50:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 16:50:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57444.100481 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krOO0-0000pv-3I; Mon, 21 Dec 2020 16:50:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57444.100481; Mon, 21 Dec 2020 16:50:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krOO0-0000po-0K; Mon, 21 Dec 2020 16:50:16 +0000
Received: by outflank-mailman (input) for mailman id 57444;
 Mon, 21 Dec 2020 16:50:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DEM5=FZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krONx-0000pj-RG
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 16:50:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a99d8327-aa16-49db-8bc6-dfbf70236ede;
 Mon, 21 Dec 2020 16:50:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 30411AD09;
 Mon, 21 Dec 2020 16:50:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a99d8327-aa16-49db-8bc6-dfbf70236ede
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608569412; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=enTm/9GB+OWX/0sqZW9UDys9X+JqSoosCB3Lsl9kzrE=;
	b=llGHLVyxl/6Q5QCXcoGyNxU78N36cEi2AZ3DFJhz0yipajQh7vB0jUGi9vBqdHo8G9k5Wl
	8CUwmEECclbQezMcuOUI2I218oRGcbQnAQojhhWdIStx1ncCzJySmQSh9IEs3NfIwJU/pC
	j3lYhcmyBPR0mItVp6Qs3WqKUR5GuxQ=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/Intel: insert Tiger Lake model numbers
Message-ID: <97963ff7-e37e-693b-5c02-a4f99669ccbe@suse.com>
Date: Mon, 21 Dec 2020 17:50:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Both match prior generation processors as far as LBR and C-state MSRs
go (SDM rev 073). The if_pschange_mc erratum, according to the spec
update, is not applicable.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/acpi/cpu_idle.c
+++ b/xen/arch/x86/acpi/cpu_idle.c
@@ -183,6 +183,9 @@ static void do_get_hw_residencies(void *
     /* Ice Lake */
     case 0x7D:
     case 0x7E:
+    /* Tiger Lake */
+    case 0x8C:
+    case 0x8D:
     /* Kaby Lake */
     case 0x8E:
     case 0x9E:
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2436,6 +2436,12 @@ static bool __init has_if_pschange_mc(vo
         return true;
 
         /*
+         * Newer Core processors are not vulnerable.
+         */
+    case 0x8c: /* Tiger Lake */
+    case 0x8d: /* Tiger Lake */
+
+        /*
          * Atom processors are not vulnerable.
          */
     case 0x1c: /* Pineview */
@@ -2776,6 +2782,8 @@ static const struct lbr_info *last_branc
         case 0x7a:
         /* Ice Lake */
         case 0x7d: case 0x7e:
+        /* Tiger Lake */
+        case 0x8c: case 0x8d:
         /* Tremont */
         case 0x86:
         /* Kaby Lake */
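
As background for the model numbers used throughout these patches: values such as 0x8C/0x8D are the CPUID "display model", combining the extended-model and model fields of leaf 1 EAX. A minimal sketch of that derivation (a hypothetical helper, not Xen code):

```c
#include <stdint.h>

/* Recover the "display model" from CPUID leaf 1 EAX.  For families 6 and
 * 15 the extended-model nibble is prepended to the model nibble, which is
 * where two-digit values such as 0x8C (Tiger Lake, signature 0x806Cx)
 * come from. */
static unsigned int display_model(uint32_t eax)
{
    unsigned int model = (eax >> 4) & 0xf;
    unsigned int family = (eax >> 8) & 0xf;

    if ( family == 0x6 || family == 0xf )
        model |= ((eax >> 16) & 0xf) << 4;

    return model;
}
```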


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 16:50:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 16:50:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57447.100493 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krOOX-0000vY-CX; Mon, 21 Dec 2020 16:50:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57447.100493; Mon, 21 Dec 2020 16:50:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krOOX-0000vR-9S; Mon, 21 Dec 2020 16:50:49 +0000
Received: by outflank-mailman (input) for mailman id 57447;
 Mon, 21 Dec 2020 16:50:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1krOOW-0000vG-2D
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 16:50:48 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1krOOU-0001Op-B3; Mon, 21 Dec 2020 16:50:46 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1krOOT-0007iL-Ud; Mon, 21 Dec 2020 16:50:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=NyT7PwbWg7SJPCgESvezw30YfxJ2Mlua6Wy9aW7UXSc=; b=DyMLC08Sdx/VVvZ5JKkWyJrU8w
	AV8kNctjJsWOYwFMAuf2yLElEGoM3RPFnjgZz2FqB/+HFCFZK2F/FuUELN/2IC7odES4ibE1iIGoZ
	rjuSzdmcNT3hSQprS38zCJbNTuFfTkB3AqsUvh1Nne1S97NhOx0qlTfktzgMNyprFxu0=;
Subject: Re: [PATCH v5 1/3] xen/arm: add support for
 run_in_exception_handler()
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org,
 Juergen Gross <jgross@suse.com>
References: <20201215063319.23290-1-jgross@suse.com>
 <20201215063319.23290-2-jgross@suse.com>
 <94e85d88-b0f0-01f6-99e0-386326bc044a@suse.com>
 <2ffa6302-5368-61c6-8564-6d3f828e2163@xen.org>
 <26480338-56f4-6a61-e776-78727fce0910@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <93d71f68-c561-6fe0-8433-66745d153217@xen.org>
Date: Mon, 21 Dec 2020 16:50:43 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <26480338-56f4-6a61-e776-78727fce0910@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Jan,

On 15/12/2020 13:59, Jan Beulich wrote:
> On 15.12.2020 14:39, Julien Grall wrote:
>> On 15/12/2020 09:02, Jan Beulich wrote:
>>> On 15.12.2020 07:33, Juergen Gross wrote:
>>>> --- a/xen/include/asm-arm/bug.h
>>>> +++ b/xen/include/asm-arm/bug.h
>>>> @@ -15,65 +15,62 @@
>>>>    
>>>>    struct bug_frame {
>>>>        signed int loc_disp;    /* Relative address to the bug address */
>>>> -    signed int file_disp;   /* Relative address to the filename */
>>>> +    signed int ptr_disp;    /* Relative address to the filename or function */
>>>>        signed int msg_disp;    /* Relative address to the predicate (for ASSERT) */
>>>>        uint16_t line;          /* Line number */
>>>>        uint32_t pad0:16;       /* Padding for 8-bytes align */
>>>>    };
>>>>    
>>>>    #define bug_loc(b) ((const void *)(b) + (b)->loc_disp)
>>>> -#define bug_file(b) ((const void *)(b) + (b)->file_disp);
>>>> +#define bug_ptr(b) ((const void *)(b) + (b)->ptr_disp);
>>>>    #define bug_line(b) ((b)->line)
>>>>    #define bug_msg(b) ((const char *)(b) + (b)->msg_disp)
>>>>    
>>>> -#define BUGFRAME_warn   0
>>>> -#define BUGFRAME_bug    1
>>>> -#define BUGFRAME_assert 2
>>>> +#define BUGFRAME_run_fn 0
>>>> +#define BUGFRAME_warn   1
>>>> +#define BUGFRAME_bug    2
>>>> +#define BUGFRAME_assert 3
>>>>    
>>>> -#define BUGFRAME_NR     3
>>>> +#define BUGFRAME_NR     4
>>>>    
>>>>    /* Many versions of GCC doesn't support the asm %c parameter which would
>>>>     * be preferable to this unpleasantness. We use mergeable string
>>>>     * sections to avoid multiple copies of the string appearing in the
>>>>     * Xen image.
>>>>     */
>>>> -#define BUG_FRAME(type, line, file, has_msg, msg) do {                      \
>>>> +#define BUG_FRAME(type, line, ptr, msg) do {                                \
>>>>        BUILD_BUG_ON((line) >> 16);                                             \
>>>>        BUILD_BUG_ON((type) >= BUGFRAME_NR);                                    \
>>>>        asm ("1:"BUG_INSTR"\n"                                                  \
>>>> -         ".pushsection .rodata.str, \"aMS\", %progbits, 1\n"                \
>>>> -         "2:\t.asciz " __stringify(file) "\n"                               \
>>>> -         "3:\n"                                                             \
>>>> -         ".if " #has_msg "\n"                                               \
>>>> -         "\t.asciz " #msg "\n"                                              \
>>>> -         ".endif\n"                                                         \
>>>> -         ".popsection\n"                                                    \
>>>> -         ".pushsection .bug_frames." __stringify(type) ", \"a\", %progbits\n"\
>>>> -         "4:\n"                                                             \
>>>> +         ".pushsection .bug_frames." __stringify(type) ", \"a\", %%progbits\n"\
>>>> +         "2:\n"                                                             \
>>>>             ".p2align 2\n"                                                     \
>>>> -         ".long (1b - 4b)\n"                                                \
>>>> -         ".long (2b - 4b)\n"                                                \
>>>> -         ".long (3b - 4b)\n"                                                \
>>>> +         ".long (1b - 2b)\n"                                                \
>>>> +         ".long (%0 - 2b)\n"                                                \
>>>> +         ".long (%1 - 2b)\n"                                                \
>>>>             ".hword " __stringify(line) ", 0\n"                                \
>>>> -         ".popsection");                                                    \
>>>> +         ".popsection" :: "i" (ptr), "i" (msg));                            \
>>>>    } while (0)
>>>
>>> The comment ahead of the construct now looks to be at best stale, if
>>> not entirely pointless. The reference to %c looks quite strange here
>>> to me anyway - I can only guess it appeared here because on x86 one
>>> has to use %c to output constants as operands for .long and alike,
>>> and this was then tried to use on Arm as well without there really
>>> being a need.
>>
>> Well, %c is one the reason why we can't have a common BUG_FRAME
>> implementation. So I would like to retain this information before
>> someone wants to consolidate the code and missed this issue.
> 
> Fair enough, albeit I guess this then could do with re-wording.

I agree.

> 
>> Regarding the mergeable string section, I agree that the comment in now
>> stale. However,  could someone confirm that "i" will still retain the
>> same behavior as using mergeable string sections?
> 
> That depends on compiler settings / behavior.

Ok. I wanted to see the difference between before and after but it looks 
like I can't compile Xen after applying the patch:


In file included from 
/home/ANT.AMAZON.COM/jgrall/works/oss/xen/xen/include/xen/lib.h:23:0,
                  from 
/home/ANT.AMAZON.COM/jgrall/works/oss/xen/xen/include/xen/bitmap.h:6,
                  from bitmap.c:10:
bitmap.c: In function ‘bitmap_allocate_region’:
/home/ANT.AMAZON.COM/jgrall/works/oss/xen/xen/include/asm/bug.h:44:5: 
error: asm operand 0 probably doesn’t match constraints [-Werror]
      asm ("1:"BUG_INSTR"\n" 
       \
      ^
/home/ANT.AMAZON.COM/jgrall/works/oss/xen/xen/include/asm/bug.h:60:5: 
note: in expansion of macro ‘BUG_FRAME’
      BUG_FRAME(BUGFRAME_bug,  __LINE__, __FILE__, "");           \
      ^~~~~~~~~
/home/ANT.AMAZON.COM/jgrall/works/oss/xen/xen/include/xen/lib.h:25:42: 
note: in expansion of macro ‘BUG’
  #define BUG_ON(p)  do { if (unlikely(p)) BUG();  } while (0)
                                           ^~~
bitmap.c:330:2: note: in expansion of macro ‘BUG_ON’
   BUG_ON(pages > BITS_PER_LONG);
   ^~~~~~
/home/ANT.AMAZON.COM/jgrall/works/oss/xen/xen/include/asm/bug.h:44:5: 
error: asm operand 1 probably doesn’t match constraints [-Werror]
      asm ("1:"BUG_INSTR"\n" 
       \
      ^
/home/ANT.AMAZON.COM/jgrall/works/oss/xen/xen/include/asm/bug.h:60:5: 
note: in expansion of macro ‘BUG_FRAME’
      BUG_FRAME(BUGFRAME_bug,  __LINE__, __FILE__, "");           \
      ^~~~~~~~~
/home/ANT.AMAZON.COM/jgrall/works/oss/xen/xen/include/xen/lib.h:25:42: 
note: in expansion of macro ‘BUG’
  #define BUG_ON(p)  do { if (unlikely(p)) BUG();  } while (0)
                                           ^~~
bitmap.c:330:2: note: in expansion of macro ‘BUG_ON’
   BUG_ON(pages > BITS_PER_LONG);
   ^~~~~~
/home/ANT.AMAZON.COM/jgrall/works/oss/xen/xen/include/asm/bug.h:44:5: 
error: impossible constraint in ‘asm’
      asm ("1:"BUG_INSTR"\n" 
       \
      ^
/home/ANT.AMAZON.COM/jgrall/works/oss/xen/xen/include/asm/bug.h:60:5: 
note: in expansion of macro ‘BUG_FRAME’
      BUG_FRAME(BUGFRAME_bug,  __LINE__, __FILE__, "");           \
      ^~~~~~~~~
/home/ANT.AMAZON.COM/jgrall/works/oss/xen/xen/include/xen/lib.h:25:42: 
note: in expansion of macro ‘BUG’
  #define BUG_ON(p)  do { if (unlikely(p)) BUG();  } while (0)
                                           ^~~
bitmap.c:330:2: note: in expansion of macro ‘BUG_ON’
   BUG_ON(pages > BITS_PER_LONG);
   ^~~~~~
cc1: all warnings being treated as errors

I am using GCC 7.5.0 built by Linaro (cross-compiler). Native 
compilation with GCC 10.2.1 leads to the same error.

@Juergen, which compiler did you use?

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 16:51:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 16:51:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57450.100506 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krOOk-0000zn-Lg; Mon, 21 Dec 2020 16:51:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57450.100506; Mon, 21 Dec 2020 16:51:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krOOk-0000zg-IN; Mon, 21 Dec 2020 16:51:02 +0000
Received: by outflank-mailman (input) for mailman id 57450;
 Mon, 21 Dec 2020 16:51:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DEM5=FZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krOOj-0000zT-Ji
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 16:51:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6bf484a2-9655-435e-ac0f-9f6c64cc40a8;
 Mon, 21 Dec 2020 16:51:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EECF4AD2B;
 Mon, 21 Dec 2020 16:50:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6bf484a2-9655-435e-ac0f-9f6c64cc40a8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608569460; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=z33w4NnckJIjEIg6P5/FDLUWP2vj8GC6Nipm75K0hVo=;
	b=vA9IDJMYYmlHzBQf8tPr9ymbupTHpMKii+Smyh/ienwTecOtiUYcpICKZG3dv5x0oF/k7G
	zoxxDV/HtWLtvgP6ZWJ2ib8z9QpBlnsZPy9pcNqidiYdBGkPHat/OEEdSg2oeIcdQGlYVF
	ItR+vfgXNFF7Lp3FXucLnNZtqrCZKz0=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] lib: drop debug_build()
Message-ID: <143333c9-154b-77c3-a66a-6b81696ecded@suse.com>
Date: Mon, 21 Dec 2020 17:50:59 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Its expansion shouldn't be tied to NDEBUG - down the road we may want to
allow enabling assertions independently of CONFIG_DEBUG. Replace the few
uses by IS_ENABLED(CONFIG_DEBUG).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I wonder whether we shouldn't further abstract this into, say, a
xen_build_info() helper, seeing that all use sites want "debug=[yn]".
This could then also include gcov_string right away.
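For readers unfamiliar with the macro, IS_ENABLED() works even for undefined
options thanks to a token-pasting trick in the kconfig helpers. A minimal
standalone sketch (macro names modelled on, but not copied verbatim from, the
Xen/Linux headers):

```c
#include <assert.h>

/*
 * Sketch of the IS_ENABLED() preprocessor trick. A CONFIG_* option is
 * #defined to 1 when enabled and left undefined otherwise; IS_ENABLED()
 * then yields a constant 1 or 0 usable in ordinary C expressions, so the
 * dead branch still gets compile-checked (unlike with #ifdef).
 */
#define __ARG_PLACEHOLDER_1 0,
#define __take_second_arg(__ignored, val, ...) val
#define __is_defined(x) ___is_defined(x)
#define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
#define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)
#define IS_ENABLED(option) __is_defined(option)

#define CONFIG_DEBUG 1      /* pretend this build has debug enabled */
/* CONFIG_COVERAGE intentionally left undefined */

char debug_flag_char(void)
{
    /* Mirrors the debug=%c usage in print_xen_info(). */
    return IS_ENABLED(CONFIG_DEBUG) ? 'y' : 'n';
}
```

When the option is defined to 1, the paste produces `__ARG_PLACEHOLDER_1`,
which expands to `0,` and shifts a literal 1 into the second argument slot;
when undefined, the paste yields junk that leaves 0 in that slot.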

--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -182,7 +182,7 @@ static void print_xen_info(void)
 #else
            "arm64",
 #endif
-           debug_build() ? 'y' : 'n', print_tainted(taint_str));
+           IS_ENABLED(CONFIG_DEBUG) ? 'y' : 'n', print_tainted(taint_str));
 }
 
 #ifdef CONFIG_ARM_32
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -31,7 +31,7 @@ static void print_xen_info(void)
 
     printk("----[ Xen-%d.%d%s  x86_64  debug=%c " gcov_string "  %s ]----\n",
            xen_major_version(), xen_minor_version(), xen_extra_version(),
-           debug_build() ? 'y' : 'n', print_tainted(taint_str));
+           IS_ENABLED(CONFIG_DEBUG) ? 'y' : 'n', print_tainted(taint_str));
 }
 
 enum context { CTXT_hypervisor, CTXT_pv_guest, CTXT_hvm_guest };
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -1004,8 +1004,8 @@ void __init console_init_preirq(void)
     spin_unlock(&console_lock);
     printk("Xen version %d.%d%s (%s@%s) (%s) debug=%c " gcov_string " %s\n",
            xen_major_version(), xen_minor_version(), xen_extra_version(),
-           xen_compile_by(), xen_compile_domain(),
-           xen_compiler(), debug_build() ? 'y' : 'n', xen_compile_date());
+           xen_compile_by(), xen_compile_domain(), xen_compiler(),
+           IS_ENABLED(CONFIG_DEBUG) ? 'y' : 'n', xen_compile_date());
     printk("Latest ChangeSet: %s\n", xen_changeset());
 
     /* Locate and print the buildid, if applicable. */
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -52,11 +52,9 @@
 #define ASSERT(p) \
     do { if ( unlikely(!(p)) ) assert_failed(#p); } while (0)
 #define ASSERT_UNREACHABLE() assert_failed("unreachable")
-#define debug_build() 1
 #else
 #define ASSERT(p) do { if ( 0 && (p) ) {} } while (0)
 #define ASSERT_UNREACHABLE() do { } while (0)
-#define debug_build() 0
 #endif
 
 #define ABS(_x) ({                              \


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 16:56:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 16:56:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57456.100517 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krOTV-0001Ev-8x; Mon, 21 Dec 2020 16:55:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57456.100517; Mon, 21 Dec 2020 16:55:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krOTV-0001Eo-60; Mon, 21 Dec 2020 16:55:57 +0000
Received: by outflank-mailman (input) for mailman id 57456;
 Mon, 21 Dec 2020 16:55:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DEM5=FZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krOTT-0001Ej-Bj
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 16:55:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 221e57ce-b1dc-4874-91e3-f26c037c5003;
 Mon, 21 Dec 2020 16:55:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5FFCBAD2B;
 Mon, 21 Dec 2020 16:55:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 221e57ce-b1dc-4874-91e3-f26c037c5003
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608569753; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wM2QmjhwXz7QS9UN/SlgvatkLyo0wr5EUvEu2sjxZAY=;
	b=sv+jXz7hoxyDU3qYICaIyESTTDpO/S5qEclcJi2Axpa6d3Tp5QQu24dZU31LkvA1hsnZtv
	N3o4JW9SwnH5b1GQ3DvoeMhyqxGXUND9Mr0xll4RP7GVNiQTABRmQX/03D4H8ten52KFuf
	FRtzckdt4bTA21NKe68Br2idEAQ9nRc=
Subject: Re: XSA-351 causing Solaris-11 systems to panic during boot.
To: boris.ostrovsky@oracle.com
Cc: xen-devel@lists.xenproject.org, Cheyenne Wills
 <cheyenne.wills@gmail.com>, Andrew Cooper <andrew.cooper3@citrix.com>
References: <CAHpsFVc4AAm6L0rKUuV47ydOjtw7XAgFnDZxRjdCL0OHXJERDw@mail.gmail.com>
 <7bca24cb-a3af-b54d-b224-3c2a316859dd@suse.com>
 <4fc3532b-f53f-2a15-ce64-f857816b0566@oracle.com>
 <f4ff3d16-40f6-e8a1-fcdd-ca52e1f52ca6@suse.com>
 <c90622c4-f9e0-8b6d-ab46-bba0cbfc0fd9@oracle.com>
 <0430337a-6fcd-9471-4455-838390401220@citrix.com>
 <c6e05b63-b066-9bd0-9da1-1fc089cd1aea@oracle.com>
 <10958d4a-154f-a524-35e9-a75eaf50fe55@oracle.com>
 <90740e33-c69a-16d7-2622-fa57a1f34272@suse.com>
 <0dbfa20a-5c3d-77c5-1ef0-4baf74e60195@oracle.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c869736a-afbf-a52b-e7ce-d7f4bb3d7faf@suse.com>
Date: Mon, 21 Dec 2020 17:55:52 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <0dbfa20a-5c3d-77c5-1ef0-4baf74e60195@oracle.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 21.12.2020 17:21, boris.ostrovsky@oracle.com wrote:
> 
> On 12/21/20 3:21 AM, Jan Beulich wrote:
>> On 18.12.2020 21:43, boris.ostrovsky@oracle.com wrote:
>>> Can we do something like KVM's ignore_msrs (but probably return 0 on reads to avoid leaking data from the system)? It would allow us to deal with cases where a guest is suddenly unable to boot after a hypervisor update (especially from pre-4.14). It won't help in all cases, since some MSRs may be expected to be non-zero, but I think it will cover a large number of them. (And it will certainly do what Jan is asking above, while not being specific to this particular breakage.)
>> This would re-introduce the problem with detection (by guests) of certain
>> features lacking suitable CPUID bits. Guests would no longer observe the
>> expected #GP(0), and hence be at risk of misbehaving. Hence at the very
>> least such an option would need to be per-domain rather than (like for
>> KVM) global,
> 
> 
> Yes, of course.
> 
> 
>>  and use of it should then imo be explicitly unsupported.
> 
> 
> Unsupported or not recommended? There are options that are not recommended from security perspective but they are still supported. For example, `spec-ctrl=no` (although it's a global setting)

"Security unsupported", i.e. use of it causing what might look like
a security issue would not get an XSA.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 16:56:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 16:56:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57459.100530 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krOTs-0001L4-Ly; Mon, 21 Dec 2020 16:56:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57459.100530; Mon, 21 Dec 2020 16:56:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krOTs-0001Kx-IZ; Mon, 21 Dec 2020 16:56:20 +0000
Received: by outflank-mailman (input) for mailman id 57459;
 Mon, 21 Dec 2020 16:56:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wm/H=FZ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1krOTq-0001Kk-F5
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 16:56:18 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2a143e90-1e86-4abb-a1ec-d249478af553;
 Mon, 21 Dec 2020 16:56:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a143e90-1e86-4abb-a1ec-d249478af553
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608569776;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=Vwl+JSEjTJiV4xk2PRVSPLpkk4yey6eAiUUp2Yn5AjY=;
  b=Hj0OHhf8d5wF5CZTMFzUcJzttnyc4vkE8DSq75AT9t7JZKwuHJalbmrR
   xKrYyu5s4Yzhlg7H86k1ZYE8OlPTpsUw72T8ocXYCpqviW4Kd1Ewh3nO3
   ah+S05+8d9mPE0AGLxWGS5W1pGP4XqVGMl3PYkQ5pw1Cf6O2WMPphrfpo
   c=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: KNjvq+SvnSBKEGpoYHLz+ZJy+31+j6dz1/nIvN9JsKhEwTvDLXlgEKQ1zwtrQcpX4oTIiZ+QJU
 w+KcSdewBRmbCXWh7BbdnV9Dz+BAfnDftnjA5ZUZZggjxBy7xDPLitkfcq6a4fHW5gaLa3oT06
 oXNvPi1VlDuzkKrJe+BfDd7zyrJ7+dRKwRUwMwDoLffFB/hLrFvUqwQyI7jya+aFJK4y5HpDCi
 zwetE7lHHKaptrif9WC6RO3FZjnl4a7irRtTEWvT4olKWMP/5/D/uh4tjiQZ4u7hzNrWYiDEk5
 alw=
X-SBRS: 5.2
X-MesageID: 33680936
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,436,1599537600"; 
   d="scan'208";a="33680936"
Subject: Re: [PATCH] x86/Intel: insert Tiger Lake model numbers
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <97963ff7-e37e-693b-5c02-a4f99669ccbe@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <92ec3c47-0411-132d-36ea-911e16b1b383@citrix.com>
Date: Mon, 21 Dec 2020 16:56:10 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <97963ff7-e37e-693b-5c02-a4f99669ccbe@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 21/12/2020 16:50, Jan Beulich wrote:
> Both match prior generation processors as far as LBR and C-state MSRs
> go (SDM rev 073). The if_pschange_mc erratum, according to the spec
> update, is not applicable.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/xen/arch/x86/acpi/cpu_idle.c
> +++ b/xen/arch/x86/acpi/cpu_idle.c
> @@ -183,6 +183,9 @@ static void do_get_hw_residencies(void *
>      /* Ice Lake */
>      case 0x7D:
>      case 0x7E:
> +    /* Tiger Lake */
> +    case 0x8C:
> +    case 0x8D:
>      /* Kaby Lake */
>      case 0x8E:
>      case 0x9E:
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2436,6 +2436,12 @@ static bool __init has_if_pschange_mc(vo
>          return true;
>  
>          /*
> +         * Newer Core processors are not vulnerable.
> +         */
> +    case 0x8c: /* Tiger Lake */
> +    case 0x8d: /* Tiger Lake */
> +

This hunk should be dropped.  TGL should enumerate
ARCH_CAPS_IF_PSCHANGE_MC_NO and take the early exit path.

If it doesn't, we need to know (i.e. hitting the default case), and I'll
do even more pestering to have ucode fixed.

Others Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

> +        /*
>           * Atom processors are not vulnerable.
>           */
>      case 0x1c: /* Pineview */
> @@ -2776,6 +2782,8 @@ static const struct lbr_info *last_branc
>          case 0x7a:
>          /* Ice Lake */
>          case 0x7d: case 0x7e:
> +        /* Tiger Lake */
> +        case 0x8c: case 0x8d:
>          /* Tremont */
>          case 0x86:
>          /* Kaby Lake */



From xen-devel-bounces@lists.xenproject.org Mon Dec 21 17:08:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 17:08:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57465.100544 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krOf7-0002Nv-S4; Mon, 21 Dec 2020 17:07:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57465.100544; Mon, 21 Dec 2020 17:07:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krOf7-0002No-Oh; Mon, 21 Dec 2020 17:07:57 +0000
Received: by outflank-mailman (input) for mailman id 57465;
 Mon, 21 Dec 2020 17:07:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UJ16=FZ=redhat.com=slp@srs-us1.protection.inumbo.net>)
 id 1krOf6-0002Nj-Fh
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 17:07:56 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 341aaed0-5ab3-4888-9864-5a8fdecaadfd;
 Mon, 21 Dec 2020 17:07:53 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-101-mdHOWa_6MG2mqk5i0E4Gjg-1; Mon, 21 Dec 2020 12:07:49 -0500
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com
 [10.5.11.15])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id A15171054F8B;
 Mon, 21 Dec 2020 17:07:47 +0000 (UTC)
Received: from localhost (ovpn-117-28.rdu2.redhat.com [10.10.117.28])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 58BC85B695;
 Mon, 21 Dec 2020 17:07:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 341aaed0-5ab3-4888-9864-5a8fdecaadfd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1608570473;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=ql3QY2EZmY15Osz1anO3jVM9GRD3QvHydriR29vDEaE=;
	b=HivKIw9/SOjS4obWNyF2SHA0S0i4rsY3TelFbwXpvovZihoIjXt1L/EW+uEmA1DuITLHnX
	0LzLRp4FnGgiP9jeDkoF2Z6LsRcWeoEgQIUR59ks82oSEq9skaTia+Y+w6pgo5VmyFAid4
	po170njrXNzZ158nkugVjxhVAcbaCnQ=
X-MC-Unique: mdHOWa_6MG2mqk5i0E4Gjg-1
Date: Mon, 21 Dec 2020 18:07:36 +0100
From: Sergio Lopez <slp@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
	qemu-block@nongnu.org, Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>, Fam Zheng <fam@euphon.net>,
	Eric Blake <eblake@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
	Max Reitz <mreitz@redhat.com>
Subject: Re: [PATCH v2 4/4] block: Close block exports in two steps
Message-ID: <20201221170736.6rf4ip7xpvcho2eq@mhamilton>
References: <20201214170519.223781-1-slp@redhat.com>
 <20201214170519.223781-5-slp@redhat.com>
 <20201215153405.GF8185@merkur.fritz.box>
MIME-Version: 1.0
In-Reply-To: <20201215153405.GF8185@merkur.fritz.box>
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=slp@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="nrplgk2vygihh27c"
Content-Disposition: inline

--nrplgk2vygihh27c
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Tue, Dec 15, 2020 at 04:34:05PM +0100, Kevin Wolf wrote:
> Am 14.12.2020 um 18:05 hat Sergio Lopez geschrieben:
> > There's a cross-dependency between closing the block exports and
> > draining the block layer. The latter needs that we close all export's
> > client connections to ensure they won't queue more requests, but the
> > exports may have coroutines yielding in the block layer, which implies
> > they can't be fully closed until we drain it.
>
> A coroutine that yielded must have some way to be reentered. So I guess
> the question becomes why they aren't reentered until drain. We do
> process events:
>
>     AIO_WAIT_WHILE(NULL, blk_exp_has_type(type));
>
> So in theory, anything that would finalise the block export closing
> should still execute.
>
> What is the difference that drain makes compared to a simple
> AIO_WAIT_WHILE, so that coroutines are reentered during drain, but not
> during AIO_WAIT_WHILE?
>
> This is an even more interesting question because the NBD server isn't a
> block node nor a BdrvChildClass implementation, so it shouldn't even
> notice a drain operation.

OK, I took a deeper dive into the issue. While shutting down the guest,
some coroutines from the NBD server are stuck here:

nbd_trip
 nbd_handle_request
  nbd_do_cmd_read
   nbd_co_send_sparse_read
    blk_pread
     blk_prw
      blk_read_entry
       blk_do_preadv
        blk_wait_while_drained
         qemu_co_queue_wait

This happens because bdrv_close_all() is called after
bdrv_drain_all_begin(), so all block backends are quiesced.

An alternative approach to this patch would be moving
blk_exp_close_all() to vl.c:qemu_cleanup, before
bdrv_drain_all_begin().

Do you have a preference for one of these options?

Thanks,
Sergio.
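To make the cross-dependency concrete, here is a toy model in plain C (all
names are illustrative; none of this is QEMU's actual API) of why an export
request parked behind a drain can never complete when exports are closed
after draining begins, while closing them first lets it finish:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of the shutdown-ordering problem. "drained" stands in for
 * bdrv_drain_all_begin() having run; "export_active" for an NBD request
 * still queued in the block layer.
 */
static bool drained;
static bool export_active;

/* The export's pending request: while drained it stays parked, like a
 * coroutine waiting in blk_wait_while_drained()/qemu_co_queue_wait(). */
bool export_step(void)
{
    if ( drained )
        return false;          /* parked; cannot make progress */
    export_active = false;     /* request completes, export can close */
    return true;
}

/* Drain first, then close exports: the request never completes, so a
 * wait for the exports to go away would spin forever. */
bool shutdown_drain_first(void)
{
    drained = true;
    export_step();
    return !export_active;     /* false: deadlock */
}

/* Close exports before draining begins (the alternative above). */
bool shutdown_close_first(void)
{
    export_step();             /* request completes while not drained */
    drained = true;
    return !export_active;     /* true: exports fully closed */
}
```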

--nrplgk2vygihh27c
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEvtX891EthoCRQuii9GknjS8MAjUFAl/g1lcACgkQ9GknjS8M
AjUBEg/9HoD49y3ShNbcRWwXbZD0eTwreK8HAvqk328INIakQIFw5Y398rFnHUoW
z5tMH/NritkqYA/x6a9qBvH+zEhJW+r4skdFmgqcs5N5DQQA/ImemCD2Oc9ytdXc
CTZYjxx90Mz+X5MO9PUY3Ngbx6+oOCGwGQyVlKD/IqPrMd5Fkq/gQpVdShp7oWY9
guFRVAzTdkH36rMvkmxEwbo4x+1CmKuhiQk7IeKcDS7t45TuPGZXCOqE8HGREH29
z7NjRrdatwlmbta14V5YmU4FOZTJy6vq4wSM23evZ3cNjng4D6d92hjxFS9V8fc6
slwv2Dtj4uZZPg+zxvNjq92j0qUKOuiXNfLsEoPaQWwjsZTa9n6P3qHRo3OFp4Ww
leJiVpNVPD+f/5ophXKbTAB/zaw3a8uMx7oXMgYnXjUS4snIWdJ4h1DnuTIPKBA1
VUyh93uCEInT0VlSYrlYeIDu+lZv9/J3wwtpM/CR/P2xFfxFHNzWpoT461YJgHXU
TMto49ag5KSWya+IWjfUiscWuj0nq2RvZVziYNN93o4ekDaMEC2UV65qZZRdxtI7
mgH1oVmQUruWjiSaCVIxwMPT8KCVkngpOlVHkSuY0nSzTebxxjsoJIynhfy/xJwK
GcFjTLpua/nREnq3qg3WBJHJ1Mo9QjKIQ+/1MLxNU+uYo+lGgZc=
=cw1v
-----END PGP SIGNATURE-----

--nrplgk2vygihh27c--



From xen-devel-bounces@lists.xenproject.org Mon Dec 21 17:31:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 17:31:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57470.100559 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krP1J-0004ub-RC; Mon, 21 Dec 2020 17:30:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57470.100559; Mon, 21 Dec 2020 17:30:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krP1J-0004uU-O7; Mon, 21 Dec 2020 17:30:53 +0000
Received: by outflank-mailman (input) for mailman id 57470;
 Mon, 21 Dec 2020 17:30:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MXr/=FZ=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1krP1J-0004uP-06
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 17:30:53 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 400ffe58-c914-4293-816a-ab04875593e1;
 Mon, 21 Dec 2020 17:30:52 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 0BLHUYPq078964
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Mon, 21 Dec 2020 12:30:40 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 0BLHUYRm078963;
 Mon, 21 Dec 2020 09:30:34 -0800 (PST) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 400ffe58-c914-4293-816a-ab04875593e1
Date: Mon, 21 Dec 2020 09:30:34 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
        Ian Jackson <iwj@xenproject.org>
Subject: [RESEND] [RFC PATCH] xen/arm: domain_build: Ignore empty memory bank
Message-ID: <X+DbupqYE3rrFaIM@mattapan.m5p.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

Previously Xen had stopped processing Device Trees if an empty
(size == 0) memory bank was found.

Commit 5a37207df52066efefe419c677b089a654d37afc changed this behavior to
ignore such banks.  Unfortunately this means these empty nodes are
visible to code which accesses the device trees.  Have domain_build also
ignore these entries.

---
This is tagged "RFC" due to issues.

Authorship of this is muddled.  In the first version (checked in, but
never sent to the list and never compiled) a different condition was used
and the comment was absent.  When examining the code it became clear a
condition identical to the one in
5a37207df52066efefe419c677b089a654d37afc was appropriate, so I changed
the check to !size.  Since the code does something sufficiently similar,
the comment was taken from that commit as well.
How far does this dilute authorship?  I diagnosed the bug and figured out
where to add the lines, but the amount inspired by Julien Grall gives him
some level of claim to authorship.  Advice is needed.

Commit 7d2b21fd36c2a47799eed71c67bae7faa1ec4272 causes an outright bug
for me.  I don't know what percentage of users will experience it, but
that it was observed this quickly suggests it is major enough to be
urgent for the stable-4.14 branch.

I doubt this is the only bug exposed by
5a37207df52066efefe419c677b089a654d37afc.  This might actually affect
most uses of the device-tree code.  I think either the core needs to be
fixed to hide zero-sized entries from anything outside of
xen/common/device_tree.c, or all users of the device-tree core need to
be audited to ensure they ignore zero-sized entries.  Notably, this is
the second location where zero-sized device-tree entries need to be
ignored; preemptive action should be taken before a third is found by
bug report.

Perhaps this fix is appropriate for the stable-4.14 branch and a proper
solution should be implemented for the main branch?

The error message which first showed up was
"Unable to retrieve address %u for %s\n", with the %u printed as 0.
This seems a poor error message: version 0.1 (which never got compiled)
had used  if(!addr) continue;  because I thought the 0 being reported
was an address of 0, when it is in fact the index.  Perhaps the message
should instead be:
"Unable to retrieve address for index %u of %s\n"?
---
 xen/arch/arm/domain_build.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index e824ba34b0..0b83384bd3 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1405,6 +1405,11 @@ static int __init handle_device(struct domain *d, struct dt_device_node *dev,
     {
         struct map_range_data mr_data = { .d = d, .p2mt = p2mt };
         res = dt_device_get_address(dev, i, &addr, &size);
+
+        /* Some DT may describe empty bank, ignore them */
+        if ( !res && !size )
+            continue;
+
         if ( res )
         {
             printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Mon Dec 21 17:34:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 17:34:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57474.100572 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krP4p-00054x-Ay; Mon, 21 Dec 2020 17:34:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57474.100572; Mon, 21 Dec 2020 17:34:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krP4p-00054q-80; Mon, 21 Dec 2020 17:34:31 +0000
Received: by outflank-mailman (input) for mailman id 57474;
 Mon, 21 Dec 2020 17:34:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krP4o-00054i-4c; Mon, 21 Dec 2020 17:34:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krP4o-00029i-0t; Mon, 21 Dec 2020 17:34:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krP4n-0005gF-Os; Mon, 21 Dec 2020 17:34:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1krP4n-0005T5-Ns; Mon, 21 Dec 2020 17:34:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OzPBTuUtX6LSWGi9pHe+D+Z4zqDl14AewGVG49VXnPg=; b=qYXCSw5wZusMRP/19eNJLBcdIz
	cmDQfYfJ1zsyeNaUKrTZ0KhnOM0kk3IxLIVdg/3gPfsf5FzywYOjPi/7H/gg2cphX191FuK3d41Sj
	TvgUWEtUxXucwS216razt+9XtCyPDMb0ig3BZNaa/IEtlphynHJ9yQhWcfBJzDHfBQdY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157761-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157761: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d162f36848c4a98a782cc05820b0aa7ec1ae297d
X-Osstest-Versions-That:
    xen=357db96a66e47e609c3b14768f1062e13eedbd93
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 21 Dec 2020 17:34:29 +0000

flight 157761 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157761/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 157696

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d162f36848c4a98a782cc05820b0aa7ec1ae297d
baseline version:
 xen                  357db96a66e47e609c3b14768f1062e13eedbd93

Last test of basis   157696  2020-12-18 19:01:31 Z    2 days
Testing same since   157761  2020-12-21 15:00:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit d162f36848c4a98a782cc05820b0aa7ec1ae297d
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Sep 28 15:25:44 2020 +0100

    xen/x86: Fix memory leak in vcpu_create() error path
    
    Various paths in vcpu_create() end up calling paging_update_paging_modes(),
    which eventually allocates a monitor pagetable if one doesn't exist.
    
    However, an error in vcpu_create() results in the vcpu being cleaned up
    locally, and not put onto the domain's vcpu list.  Therefore, the monitor
    table is not freed by {hap,shadow}_teardown()'s loop.  This is caught by
    assertions later that we've successfully freed the entire hap/shadow memory
    pool.
    
    The per-vcpu loops in the domain teardown logic are conceptually wrong, but
    exist due to insufficient structure in the existing logic.
    
    Break paging_vcpu_teardown() out of paging_teardown(), with mirrored breakouts
    in the hap/shadow code, and use it from arch_vcpu_create()'s error path.  This
    fixes the memory leak.
    
    The new {hap,shadow}_vcpu_teardown() must be idempotent, and are written to
    be as tolerant as possible, with the minimum number of safety checks.
    In particular, drop the mfn_valid() check - if these fields are junk, then Xen
    is going to explode anyway.
    
    Reported-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 6131dab5f2c8059a0fc7fd884bc6d4ff78ba44c2
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Dec 18 23:30:04 2020 +0000

    xen/Kconfig: Correct the NR_CPUS description
    
    The description "physical CPUs" is especially wrong, as it implies the number
    of sockets, which tops out at 8 on all but the very biggest servers.
    
    NR_CPUS is the number of logical entities the scheduler can use.
    
    Reported-by: hanetzer@startmail.com
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 17:45:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 17:45:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57484.100590 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krPFY-00063H-Eb; Mon, 21 Dec 2020 17:45:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57484.100590; Mon, 21 Dec 2020 17:45:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krPFY-00063A-BK; Mon, 21 Dec 2020 17:45:36 +0000
Received: by outflank-mailman (input) for mailman id 57484;
 Mon, 21 Dec 2020 17:45:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1krPFW-000635-SZ
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 17:45:34 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1krPFW-0002Ka-7m; Mon, 21 Dec 2020 17:45:34 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1krPFV-0007fZ-Vw; Mon, 21 Dec 2020 17:45:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=BkrRymnUmYJro9bAFzAgr6HO7qKXsBObkY72rMY6ntA=; b=C2XRBtonYOkohEPEkjehArLc2W
	iCeHr1+zqxe1y/MmAe+xFY+DfaiVe9zZy6nGVkI6JPN1/woqgcMmyeindLevJLBO6Hk6mSsBjVCzH
	606mAStZiuJ74PytgTEpo44yebJHNkICg4r/HIA7h4eMK1SBv4q8VDqbnI0xjx4eNXao=;
Subject: Re: [PATCH v3 4/5] evtchn: convert domain event lock to an r/w one
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <a333387e-f9e5-7051-569a-1a9a37da53ca@suse.com>
 <074be931-54b0-1b0f-72d8-5bd577884814@xen.org>
 <6e34fd25-14a2-f655-b019-aca94ce086c8@suse.com>
 <55dc24b4-88c6-1b22-411e-267231632377@xen.org>
 <cf3faa68-ba4a-b864-66e0-f379a24a48ce@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <1f3571eb-5aec-e76e-0b61-2602356fb436@xen.org>
Date: Mon, 21 Dec 2020 17:45:32 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <cf3faa68-ba4a-b864-66e0-f379a24a48ce@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 14/12/2020 09:40, Jan Beulich wrote:
> On 11.12.2020 11:57, Julien Grall wrote:
>> On 11/12/2020 10:32, Jan Beulich wrote:
>>> On 09.12.2020 12:54, Julien Grall wrote:
>>>> On 23/11/2020 13:29, Jan Beulich wrote:
>>>>> @@ -620,7 +620,7 @@ int evtchn_close(struct domain *d1, int
>>>>>         long           rc = 0;
>>>>>     
>>>>>      again:
>>>>> -    spin_lock(&d1->event_lock);
>>>>> +    write_lock(&d1->event_lock);
>>>>>     
>>>>>         if ( !port_is_valid(d1, port1) )
>>>>>         {
>>>>> @@ -690,13 +690,11 @@ int evtchn_close(struct domain *d1, int
>>>>>                     BUG();
>>>>>     
>>>>>                 if ( d1 < d2 )
>>>>> -            {
>>>>> -                spin_lock(&d2->event_lock);
>>>>> -            }
>>>>> +                read_lock(&d2->event_lock);
>>>>
>>>> This change made me realize that I don't quite understand how the
>>>> rwlock is meant to work for event_lock. I was actually expecting this to
>>>> be a write_lock() given that state is changed in the d2 events.
>>>
>>> Well, the protection needs to be against racing changes, i.e.
>>> parallel invocations of this same function, or evtchn_close().
>>> It is debatable whether evtchn_status() and
>>> domain_dump_evtchn_info() would better also be locked out
>>> (other read_lock() uses aren't applicable to interdomain
>>> channels).
>>>
>>>> Could you outline how a developer can find out whether he/she should
>>>> use read_lock or write_lock?
>>>
>>> I could try to, but it would again be a port type dependent
>>> model, just like for the per-channel locks.
>>
>> It is quite important to have a clear locking strategy (in particular for
>> the rwlock) so we can make the correct decision when to use read_lock or write_lock.
>>
>>> So I'd like it to
>>> be clarified first whether you aren't instead indirectly
>>> asking for these to become write_lock()
>>
>> Well, I don't understand why this is a read_lock() (even with your
>> previous explanation). I am not suggesting to switch to a write_lock(),
>> but instead asking for the reasoning behind the decision.
> 
> So if what I've said in my previous reply isn't enough (including the
> argument towards using two write_lock() here), I'm struggling to figure
> out what else to say. The primary goal is to exclude changes to
> the same ports. For this it is sufficient to hold just one of the two
> locks in writer mode, as the other (racing) one will acquire that
> same lock for at least reading. The question whether both need to use
> writer mode can only be decided when looking at the sites acquiring
> just one of the locks in reader mode (hence the reference to
> evtchn_status() and domain_dump_evtchn_info()) - if races with them
> are deemed to be a problem, switching to both-writers will be needed.

I had another look at the code based on your explanation. I don't think
it is safe to allow evtchn_status() to be called concurrently with
evtchn_close().

evtchn_close() contains the following code:

   chn2->state = ECS_UNBOUND;
   chn2->u.unbound.remote_domid = d1->domain_id;

Where chn2 is an event channel of the remote domain (d2). Your patch only
holds the read lock for d2.

However, evtchn_status() expects the event channel state not to change
behind its back. This assumption doesn't hold for d2, and you could
possibly end up seeing the new value of chn2->state after the new
chn2->u.unbound.remote_domid.

Thankfully, it doesn't look like chn2->u.interdomain.remote_domain
would be overwritten. Otherwise, this would be a straight dereference of
an invalid pointer.

So I think we need to hold the write event lock for both domains.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 18:07:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 18:07:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57490.100602 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krPa8-0007uw-CC; Mon, 21 Dec 2020 18:06:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57490.100602; Mon, 21 Dec 2020 18:06:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krPa8-0007up-8w; Mon, 21 Dec 2020 18:06:52 +0000
Received: by outflank-mailman (input) for mailman id 57490;
 Mon, 21 Dec 2020 18:06:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krPa7-0007uh-7q; Mon, 21 Dec 2020 18:06:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krPa7-0002lt-0z; Mon, 21 Dec 2020 18:06:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krPa6-0006mf-M3; Mon, 21 Dec 2020 18:06:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1krPa6-0004VM-LT; Mon, 21 Dec 2020 18:06:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8rFezfZPvItLsyn3QG4vZK0iPOsXFAWnftcE97MpV8E=; b=XrVTIE2vTiujiVqI4r4pXilUgy
	+Is5DlnMgzB1aue1KbrjUHiIRKd7ifkp9vV0GHpchYPF6M2HDybM2YystAAJC+BmNQWwqYk+ysKxP
	j0xJVtZ9IxAnD7KWNHm3/IwZDYHc2NuzZTUXNBzyevyq84clC/ZBVIZu4HIz3PN1vfRI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157754-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157754: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=357db96a66e47e609c3b14768f1062e13eedbd93
X-Osstest-Versions-That:
    xen=357db96a66e47e609c3b14768f1062e13eedbd93
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 21 Dec 2020 18:06:50 +0000

flight 157754 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157754/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 157710
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157744
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157744
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157744
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157744
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157744
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157744
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157744
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157744
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157744
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157744
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157744
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  357db96a66e47e609c3b14768f1062e13eedbd93
baseline version:
 xen                  357db96a66e47e609c3b14768f1062e13eedbd93

Last test of basis   157754  2020-12-21 07:23:13 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Dec 21 18:07:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 18:07:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57494.100617 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krPan-00082Y-Pu; Mon, 21 Dec 2020 18:07:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57494.100617; Mon, 21 Dec 2020 18:07:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krPan-00082R-M9; Mon, 21 Dec 2020 18:07:33 +0000
Received: by outflank-mailman (input) for mailman id 57494;
 Mon, 21 Dec 2020 18:07:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wm/H=FZ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1krPam-00082J-AG
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 18:07:32 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 219997be-40ac-4907-8b8d-8a8014213c1c;
 Mon, 21 Dec 2020 18:07:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 219997be-40ac-4907-8b8d-8a8014213c1c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608574051;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=AdwUPMnj/e3nA+91jI02AbJYdiKEiZTbTnIiKy/Su0Q=;
  b=YgK9KCDXPAQf1flgttlBzXpCm8UxWFp40gpgbsgnFjDQce41dTrOf9zn
   dq5p2BmXe68BAqwE4fVhYMGRSH7097xJULhAC7OkGpDzPtFd9/Jv/dTZ1
   kRBTO0nrCxnXtVSTRxIq0dC9ykHzXmlovhJOdh4SjKx0BmAm9rIbFggjG
   o=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: bQaMCCJWUGBP1lf3qz7tulf92hpjyV554XsL9hSh5xqB7mUUYb5iBJZJ1KHgcPZbh1eG8jiZ+7
 8Lm6jwazseaufVYUIHI4SWfiv1u/mzqYqRLRaY5alo3Z7ckPP7NqDeLpkasW5W5T4z9/+++ZKl
 UI9SgPWZtBPsJ8hSX1BOAV/kO/K67qxpyUKeU+wdfzh1UQtIescEwG6SgTe+OE8GtwDVJyIvpw
 ARDsclprVy/NSoyg6VHjpjOeZnurysOMZyqsiV3jujsfj7SIupSTIMs+0mPhaQQh5NR7DkPz/w
 l7I=
X-SBRS: 5.2
X-MesageID: 33706800
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,436,1599537600"; 
   d="scan'208";a="33706800"
Subject: Re: [PATCH] lib: drop debug_build()
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <143333c9-154b-77c3-a66a-6b81696ecded@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <2575d75a-ce1d-c725-4f37-b7c9c10a2ecd@citrix.com>
Date: Mon, 21 Dec 2020 18:07:13 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <143333c9-154b-77c3-a66a-6b81696ecded@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 21/12/2020 16:50, Jan Beulich wrote:
> Its expansion shouldn't be tied to NDEBUG - down the road we may want to
> allow enabling assertions independently of CONFIG_DEBUG.

I'm not sure I agree that we'll ever want to do this, but...

>  Replace the few uses by IS_ENABLED(CONFIG_DEBUG).

... we should be aligning on CONFIG_DEBUG.

>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> I wonder whether we shouldn't further abstract this into, say, a
> xen_build_info() helper, seeing that all use sites want "debug=[yn]".
> This could then also include gcov_string right away.

I think that would be a nicer way of doing it.  It should probably also
have some trace of CONFIG_UBSAN in the resulting string.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 18:16:25 2020

From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: [PATCH 0/3] xen/domain: More structured teardown
Date: Mon, 21 Dec 2020 18:14:43 +0000
Message-ID: <20201221181446.7791-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Follow-up from both the investigation leading to "Fix memory leak in
vcpu_create() error path", and the older work to deduplicate the domain
teardown paths.

Andrew Cooper (3):
  xen/domain: Reorder trivial initialisation in early domain_create()
  xen/domain: Introduce domain_teardown()
  xen/evtchn: Clean up teardown handling

 xen/common/domain.c        | 113 ++++++++++++++++++++++++++++++++++-----------
 xen/common/event_channel.c |   8 ++--
 xen/include/xen/sched.h    |  12 ++++-
 3 files changed, 101 insertions(+), 32 deletions(-)

-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 21 18:16:26 2020
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: [PATCH 3/3] xen/evtchn: Clean up teardown handling
Date: Mon, 21 Dec 2020 18:14:46 +0000
Message-ID: <20201221181446.7791-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201221181446.7791-1-andrew.cooper3@citrix.com>
References: <20201221181446.7791-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

First of all, rename the evtchn APIs:
 * evtchn_destroy       => evtchn_teardown
 * evtchn_destroy_final => evtchn_destroy

Move both calls into appropriate positions in domain_teardown() and
_domain_destroy(), which avoids having different cleanup logic depending on
the cause of the cleanup.

In particular, this avoids evtchn_teardown() (previously named
evtchn_destroy()) being called redundantly thousands of times on a typical
XEN_DOMCTL_destroydomain hypercall.

No net change in behaviour.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>

RFC.  While testing this, I observed the following after faking up an -ENOMEM
in dom0's construction:

  (XEN) [2020-12-21 16:31:20] NX (Execute Disable) protection active
  (XEN) [2020-12-21 16:33:04]
  (XEN) [2020-12-21 16:33:04] ****************************************
  (XEN) [2020-12-21 16:33:04] Panic on CPU 0:
  (XEN) [2020-12-21 16:33:04] Error creating domain 0
  (XEN) [2020-12-21 16:33:04] ****************************************

XSA-344 appears to have added nearly 2 minutes of wallclock time into the
domain_create() error path, which isn't ok.

Considering that event channels haven't even been initialised in this
particular scenario, it ought to take ~0 time.  Even if event channels have
been initialised, none can be active as the domain isn't visible to the system.
---
 xen/common/domain.c        | 17 ++++++++---------
 xen/common/event_channel.c |  8 ++++----
 xen/include/xen/sched.h    |  4 ++--
 3 files changed, 14 insertions(+), 15 deletions(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index ef1987335b..701747b9d9 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -284,6 +284,8 @@ custom_param("extra_guest_irqs", parse_extra_guest_irqs);
  */
 static int domain_teardown(struct domain *d)
 {
+    int rc;
+
     BUG_ON(!d->is_dying);
 
     /*
@@ -313,6 +315,10 @@ static int domain_teardown(struct domain *d)
         };
 
     case 0:
+        rc = evtchn_teardown(d);
+        if ( rc )
+            return rc;
+
     PROGRESS(done):
         break;
 
@@ -335,6 +341,8 @@ static void _domain_destroy(struct domain *d)
     BUG_ON(!d->is_dying);
     BUG_ON(atomic_read(&d->refcnt) != DOMAIN_DESTROYED);
 
+    evtchn_destroy(d);
+
     xfree(d->pbuf);
 
     argo_destroy(d);
@@ -598,11 +606,7 @@ struct domain *domain_create(domid_t domid,
     if ( init_status & INIT_gnttab )
         grant_table_destroy(d);
     if ( init_status & INIT_evtchn )
-    {
-        evtchn_destroy(d);
-        evtchn_destroy_final(d);
         radix_tree_destroy(&d->pirq_tree, free_pirq_struct);
-    }
     if ( init_status & INIT_watchdog )
         watchdog_domain_destroy(d);
 
@@ -792,9 +796,6 @@ int domain_kill(struct domain *d)
         rc = domain_teardown(d);
         if ( rc )
             break;
-        rc = evtchn_destroy(d);
-        if ( rc )
-            break;
         rc = domain_relinquish_resources(d);
         if ( rc != 0 )
             break;
@@ -987,8 +988,6 @@ static void complete_domain_destroy(struct rcu_head *head)
     if ( d->target != NULL )
         put_domain(d->target);
 
-    evtchn_destroy_final(d);
-
     radix_tree_destroy(&d->pirq_tree, free_pirq_struct);
 
     xfree(d->vcpu);
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 4a48094356..c1af54eed5 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -1401,7 +1401,7 @@ void free_xen_event_channel(struct domain *d, int port)
     {
         /*
          * Make sure ->is_dying is read /after/ ->valid_evtchns, pairing
-         * with the spin_barrier() and BUG_ON() in evtchn_destroy().
+         * with the spin_barrier() and BUG_ON() in evtchn_teardown().
          */
         smp_rmb();
         BUG_ON(!d->is_dying);
@@ -1421,7 +1421,7 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
     {
         /*
          * Make sure ->is_dying is read /after/ ->valid_evtchns, pairing
-         * with the spin_barrier() and BUG_ON() in evtchn_destroy().
+         * with the spin_barrier() and BUG_ON() in evtchn_teardown().
          */
         smp_rmb();
         ASSERT(ld->is_dying);
@@ -1499,7 +1499,7 @@ int evtchn_init(struct domain *d, unsigned int max_port)
     return 0;
 }
 
-int evtchn_destroy(struct domain *d)
+int evtchn_teardown(struct domain *d)
 {
     unsigned int i;
 
@@ -1534,7 +1534,7 @@ int evtchn_destroy(struct domain *d)
 }
 
 
-void evtchn_destroy_final(struct domain *d)
+void evtchn_destroy(struct domain *d)
 {
     unsigned int i, j;
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 3f35c537b8..bb22eeca38 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -142,8 +142,8 @@ struct evtchn
 } __attribute__((aligned(64)));
 
 int  evtchn_init(struct domain *d, unsigned int max_port);
-int  evtchn_destroy(struct domain *d); /* from domain_kill */
-void evtchn_destroy_final(struct domain *d); /* from complete_domain_destroy */
+int  evtchn_teardown(struct domain *d);
+void evtchn_destroy(struct domain *d);
 
 struct waitqueue_vcpu;
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 21 18:16:27 2020
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: [PATCH 1/3] xen/domain: Reorder trivial initialisation in early domain_create()
Date: Mon, 21 Dec 2020 18:14:44 +0000
Message-ID: <20201221181446.7791-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201221181446.7791-1-andrew.cooper3@citrix.com>
References: <20201221181446.7791-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

This improves the robustness of the error paths.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
---
 xen/common/domain.c | 41 ++++++++++++++++++++++-------------------
 1 file changed, 22 insertions(+), 19 deletions(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 5ec48c3e19..ce3667f1b4 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -391,25 +391,7 @@ struct domain *domain_create(domid_t domid,
 
     TRACE_1D(TRC_DOM0_DOM_ADD, d->domain_id);
 
-    /*
-     * Allocate d->vcpu[] and set ->max_vcpus up early.  Various per-domain
-     * resources want to be sized based on max_vcpus.
-     */
-    if ( !is_system_domain(d) )
-    {
-        err = -ENOMEM;
-        d->vcpu = xzalloc_array(struct vcpu *, config->max_vcpus);
-        if ( !d->vcpu )
-            goto fail;
-
-        d->max_vcpus = config->max_vcpus;
-    }
-
-    lock_profile_register_struct(LOCKPROF_TYPE_PERDOM, d, domid);
-
-    if ( (err = xsm_alloc_security_domain(d)) != 0 )
-        goto fail;
-
+    /* Trivial initialisation. */
     atomic_set(&d->refcnt, 1);
     RCU_READ_LOCK_INIT(&d->rcu_lock);
     spin_lock_init_prof(d, domain_lock);
@@ -434,6 +416,27 @@ struct domain *domain_create(domid_t domid,
     INIT_LIST_HEAD(&d->pdev_list);
 #endif
 
+    /* All error paths can depend on the above setup. */
+
+    /*
+     * Allocate d->vcpu[] and set ->max_vcpus up early.  Various per-domain
+     * resources want to be sized based on max_vcpus.
+     */
+    if ( !is_system_domain(d) )
+    {
+        err = -ENOMEM;
+        d->vcpu = xzalloc_array(struct vcpu *, config->max_vcpus);
+        if ( !d->vcpu )
+            goto fail;
+
+        d->max_vcpus = config->max_vcpus;
+    }
+
+    lock_profile_register_struct(LOCKPROF_TYPE_PERDOM, d, domid);
+
+    if ( (err = xsm_alloc_security_domain(d)) != 0 )
+        goto fail;
+
     err = -ENOMEM;
     if ( !zalloc_cpumask_var(&d->dirty_cpumask) )
         goto fail;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 21 18:16:28 2020
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: [PATCH 2/3] xen/domain: Introduce domain_teardown()
Date: Mon, 21 Dec 2020 18:14:45 +0000
Message-ID: <20201221181446.7791-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201221181446.7791-1-andrew.cooper3@citrix.com>
References: <20201221181446.7791-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

There is no common equivalent of domain_relinquish_resources(), which has
caused various pieces of common cleanup to live in inappropriate
places.

Perhaps most obviously, evtchn_destroy() is called for every continuation of
domain_relinquish_resources(), which can easily be thousands of times.

Create domain_teardown() to be a new top level facility, and call it from the
appropriate positions in domain_kill() and domain_create()'s error path.

No change in behaviour yet.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
---
 xen/common/domain.c     | 59 +++++++++++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/sched.h |  8 +++++++
 2 files changed, 67 insertions(+)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index ce3667f1b4..ef1987335b 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -273,6 +273,59 @@ static int __init parse_extra_guest_irqs(const char *s)
 custom_param("extra_guest_irqs", parse_extra_guest_irqs);
 
 /*
+ * Release resources held by a domain.  There may or may not be live
+ * references to the domain, and it may or may not be fully constructed.
+ *
+ * d->is_dying differing between DOMDYING_dying and DOMDYING_dead can be used
+ * to determine if live references to the domain exist, and also whether
+ * continuations are permitted.
+ *
+ * If d->is_dying is DOMDYING_dead, this must not return non-zero.
+ */
+static int domain_teardown(struct domain *d)
+{
+    BUG_ON(!d->is_dying);
+
+    /*
+     * This hypercall can take minutes of wallclock time to complete.  This
+     * logic implements a co-routine, stashing state in struct domain across
+     * hypercall continuation boundaries.
+     */
+    switch ( d->teardown.val )
+    {
+        /*
+         * Record the current progress.  Subsequent hypercall continuations
+         * will logically restart work from this point.
+         *
+         * PROGRESS() markers must not be in the middle of loops.  The loop
+         * variable isn't preserved across a continuation.
+         *
+         * To avoid redundant work, there should be a marker before each
+         * function which may return -ERESTART.
+         */
+#define PROGRESS(x)                             \
+        d->teardown.val = PROG_ ## x;           \
+        /* Fallthrough */                       \
+    case PROG_ ## x
+
+        enum {
+            PROG_done = 1,
+        };
+
+    case 0:
+    PROGRESS(done):
+        break;
+
+#undef PROGRESS
+
+    default:
+        BUG();
+    }
+
+    return 0;
+}
+
+/*
  * Destroy a domain once all references to it have been dropped.  Used either
  * from the RCU path, or from the domain_create() error path before the domain
  * is inserted into the domlist.
@@ -553,6 +606,9 @@ struct domain *domain_create(domid_t domid,
     if ( init_status & INIT_watchdog )
         watchdog_domain_destroy(d);
 
+    /* Must not hit a continuation in this context. */
+    ASSERT(domain_teardown(d) == 0);
+
     _domain_destroy(d);
 
     return ERR_PTR(err);
@@ -733,6 +789,9 @@ int domain_kill(struct domain *d)
         domain_set_outstanding_pages(d, 0);
         /* fallthrough */
     case DOMDYING_dying:
+        rc = domain_teardown(d);
+        if ( rc )
+            break;
         rc = evtchn_destroy(d);
         if ( rc )
             break;
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index faf5fda36f..3f35c537b8 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -525,6 +525,14 @@ struct domain
     /* Argo interdomain communication support */
     struct argo_domain *argo;
 #endif
+
+    /*
+     * Continuation information for domain_teardown().  All fields entirely
+     * private.
+     */
+    struct {
+        unsigned int val;
+    } teardown;
 };
 
 static inline struct page_list_head *page_to_list(
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Dec 21 18:28:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 18:28:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57520.100683 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krPvE-0001tb-1j; Mon, 21 Dec 2020 18:28:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57520.100683; Mon, 21 Dec 2020 18:28:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krPvD-0001tU-Uf; Mon, 21 Dec 2020 18:28:39 +0000
Received: by outflank-mailman (input) for mailman id 57520;
 Mon, 21 Dec 2020 18:28:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1krPvC-0001tL-Kg
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 18:28:38 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1krPvB-000384-4m; Mon, 21 Dec 2020 18:28:37 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1krPvA-0003Nk-Pp; Mon, 21 Dec 2020 18:28:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Y/euKdNMGL3snMT4AoD3RkSaZRDr4keLc+9TFDXw8IY=; b=6VET6jgd6pxToJMrmgL4cyl0wl
	RLzAGjcPpKijh5X3Nvbx+NqK8oTKeGJrOs7f4bFbrK9D6zBJOArR8sJr77MbDf7HeJoXOSlpMeP36
	xRuZocZRDzb5dnsG4K9JhuDmSs4LHmJ3NS+R/xRmp3Yv8O1FmNWwZJCfSClmR6yZuGrE=;
Subject: Re: [RESEND] [RFC PATCH] xen/arm: domain_build: Ignore empty memory
 bank
To: Elliott Mitchell <ehem+xen@m5p.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Ian Jackson <iwj@xenproject.org>
References: <X+DbupqYE3rrFaIM@mattapan.m5p.com>
From: Julien Grall <julien@xen.org>
Message-ID: <102a361a-a070-185f-c564-8e4c30f96ab6@xen.org>
Date: Mon, 21 Dec 2020 18:28:35 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <X+DbupqYE3rrFaIM@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Elliott,

I was planning to review the first version today, but as you sent a new 
version I will answer on this one directly.

On 21/12/2020 17:30, Elliott Mitchell wrote:
> Previously Xen had stopped processing Device Trees if an empty
> (size == 0) memory bank was found.
> 
> Commit 5a37207df52066efefe419c677b089a654d37afc changed this behavior to
> ignore such banks.  Unfortunately this means these empty nodes are
> visible to code which accesses the device trees.  Have domain_build also
> ignore these entries.

I am probably missing something here. The commit you pointed out will 
only take care of memory nodes (including reserved-memory).

It should not be possible to reach handle_device() with actual RAM. 
It would, however, be reachable with the reserved-memory node.

Could you provide a bit more detail on the issue? In particular, I am 
interested in seeing the offending node and its content.

> ---
> This is tagged "RFC" due to issues.
> 
> Authorship of this is unclean.  In the first version (checked in, but
> never sent to the list and never compiled) a different condition was used
> and the comment was absent.  When examining the code it became clear a
> condition identical to
> 5a37207df52066efefe419c677b089a654d37afc was appropriate and so I changed
> to !size.  Since what the code is doing was sufficiently similar, the
> comment was grabbed.
> How far does this dilute authorship?  I diagnosed the bug and figured out
> where to add the lines, but the amount inspired by Julien Grall gives
> Julien Grall some level of claim of authorship.  Advice is needed.

You did all the investigation of the bug and the code is small enough. 
So I think it is fine for you to claim the authorship.

> 
> Commit 7d2b21fd36c2a47799eed71c67bae7faa1ec4272 is an outright bug for
> me.  I don't know what percentage of users will experience this bug, but
> being observed this quickly suggests this is major enough to be urgent
> for the stable-4.14 branch.

We usually work on a fix for upstream first and then backport it.

> I doubt this is the only bug exposed by
> 5a37207df52066efefe419c677b089a654d37afc.

Are you saying that with my patch dropped, Xen will boot but with it 
will not?

> This might actually affect
> most uses of the device-tree code.  I think either the core needs to be
> fixed to hide zero-sized entries from anything outside of
> xen/common/device_tree.c, otherwise all uses of the device-tree core need
> to be audited to ensure they ignore zero-sized entries.

The meaning of zero-sized is not the same everywhere. In the case of 
memory banks, it can be safely ignored.

For other devices (e.g. GIC), hiding it may make things worse because a 
size 0 means the node is bogus.

> Notably this is
> the second location where zero-size device-tree entries need to be
> ignored, preemptive action should be taken before a third is found by
> bug report.
>
> Perhaps this fix is appropriate for the stable-4.14 branch and a proper
> solution should be implemented for the main branch?
> 
> The error message which first showed was
> "Unable to retrieve address %u for %s\n".  Where the number in %u was
> 0, this seems a poor error message.  Version 0.1 (which never got
> compiled) had been:  if(!addr) continue;

Usually, dt_get_address() will fail because it can't translate the 
address, not because the size is 0. What is more...

> 
> As I thought the 0 it was reporting was an address of 0.  Perhaps the
> message should instead be:
> "Unable to retrieve address for index %u of %s\n"?
> ---
>   xen/arch/arm/domain_build.c | 5 +++++
>   1 file changed, 5 insertions(+)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index e824ba34b0..0b83384bd3 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1405,6 +1405,11 @@ static int __init handle_device(struct domain *d, struct dt_device_node *dev,
>       {
>           struct map_range_data mr_data = { .d = d, .p2mt = p2mt };
>           res = dt_device_get_address(dev, i, &addr, &size);
> +
> +        /* Some DT may describe empty bank, ignore them */
> +        if ( !size )
> +            continue;

... dt_device_get_address() will not set the size if the node is bogus. 
So you can't rely on either addr or size when res is non-zero.

handle_device() (at least on unstable) will not initialize the two 
variables to 0. So I guess you are lucky that your compiler zeroed them 
for you, but that's not the normal behavior.

So I think we first need to figure out what the offending node is and 
why dt_device_get_address() is returning an error for it.

That said, I agree that we possibly want a check size == 0 (action TBD) 
in map_range_to_domain() as the code would do the wrong thing for 0.
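To illustrate the ordering I mean, here is a standalone sketch: the stub stands in for dt_device_get_address() and leaves its outputs untouched on error, exactly as described above. The error code and return values are made up for the example.

```c
#include <stdint.h>

/*
 * Sketch: check the return value of the address lookup *before* looking at
 * size, since neither addr nor size is defined when the call fails.
 */
static int get_address_stub(int index, uint64_t *addr, uint64_t *size)
{
    /* Like dt_device_get_address(): outputs are untouched on error. */
    if ( index != 0 )
        return -22;            /* illustrative error code */

    *addr = 0x40000000ULL;
    *size = 0;                 /* an empty bank */
    return 0;
}

static int handle_one(int index)
{
    uint64_t addr, size;
    int res = get_address_stub(index, &addr, &size);

    if ( res )                 /* error first: addr/size are garbage here */
        return res;

    if ( !size )               /* only now is it safe to skip empty banks */
        return 1;              /* skipped */

    (void)addr;
    return 0;                  /* would map [addr, addr + size) */
}
```

With this ordering the "lucky zeroing" by the compiler no longer matters, because the uninitialized values are never inspected.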

>           if ( res )
>           {
>               printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 18:37:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 18:37:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57527.100700 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krQ3M-0002qk-Ue; Mon, 21 Dec 2020 18:37:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57527.100700; Mon, 21 Dec 2020 18:37:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krQ3M-0002qK-Rh; Mon, 21 Dec 2020 18:37:04 +0000
Received: by outflank-mailman (input) for mailman id 57527;
 Mon, 21 Dec 2020 18:37:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1krQ3L-0002qF-LS
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 18:37:03 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1krQ3J-0003H5-Pr; Mon, 21 Dec 2020 18:37:01 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1krQ3J-000478-Cv; Mon, 21 Dec 2020 18:37:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=5vLLbcgmGgg3oUH+sxO0KIURyPatcCvjjXrEjyyojz8=; b=FS6IhAKXwwh7V48mwkgYofh/DM
	AVVLLOuzLiUUPmQpqG8xgoPUHrm6HWfpA8uXTi3B9vVUaQwiHZCNYxeWPVVCgtPEzNe/cuC/LZ233
	tQb0h6aKNmLlF45V/2HI+jJJF+D6zmo1LuGoZooILfimOvE+fyOK6UEbAODgJakUMHqo=;
Subject: Re: [PATCH 2/3] xen/domain: Introduce domain_teardown()
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20201221181446.7791-1-andrew.cooper3@citrix.com>
 <20201221181446.7791-3-andrew.cooper3@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <f42f6b6e-3ee3-f58e-513b-70f80f7541ee@xen.org>
Date: Mon, 21 Dec 2020 18:36:59 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.1
MIME-Version: 1.0
In-Reply-To: <20201221181446.7791-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Andrew,

On 21/12/2020 18:14, Andrew Cooper wrote:
> There is no common equivelent of domain_reliquish_resources(), which has

s/equivelent/equivalent/

> caused various pieces of common cleanup to live in inappropriate
> places.
> 
> Perhaps most obviously, evtchn_destroy() is called for every continuation of
> domain_reliquish_resources(), which can easily be thousands of times.

s/reliquish/relinquish/

> 
> Create domain_teardown() to be a new top level facility, and call it from the
> appropriate positions in domain_kill() and domain_create()'s error path.
> 
> No change in behaviour yet.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> ---
>   xen/common/domain.c     | 59 +++++++++++++++++++++++++++++++++++++++++++++++++
>   xen/include/xen/sched.h |  8 +++++++
>   2 files changed, 67 insertions(+)
> 
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index ce3667f1b4..ef1987335b 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -273,6 +273,59 @@ static int __init parse_extra_guest_irqs(const char *s)
>   custom_param("extra_guest_irqs", parse_extra_guest_irqs);
>   
>   /*
> + * Release resources held by a domain.  There may or may not be live
> + * references to the domain, and it may or may not be fully constructed.
> + *
> + * d->is_dying differing between DOMDYING_dying and DOMDYING_dead can be used
> + * to determine if live references to the domain exist, and also whether
> + * continuations are permitted.
> + *
> + * If d->is_dying is DOMDYING_dead, this must not return non-zero.
> + */
> +static int domain_teardown(struct domain *d)
> +{
> +    BUG_ON(!d->is_dying);
> +
> +    /*
> +     * This hypercall can take minutes of wallclock time to complete.  This
> +     * logic implements a co-routine, stashing state in struct domain across
> +     * hypercall continuation boundaries.
> +     */
> +    switch ( d->teardown.val )
> +    {
> +        /*
> +         * Record the current progress.  Subsequent hypercall continuations
> +         * will logically restart work from this point.
> +         *
> +         * PROGRESS() markers must not be in the middle of loops.  The loop
> +         * variable isn't preserved across a continuation.
> +         *
> +         * To avoid redundant work, there should be a marker before each
> +         * function which may return -ERESTART.
> +         */
> +#define PROGRESS(x)                             \
> +        d->teardown.val = PROG_ ## x;           \
> +        /* Fallthrough */                       \
> +    case PROG_ ## x
> +
> +        enum {
> +            PROG_done = 1,
> +        };
> +
> +    case 0:
> +    PROGRESS(done):
> +        break;
> +
> +#undef PROGRESS
> +
> +    default:
> +        BUG();
> +    }
> +
> +    return 0;
> +}
> +
> +/*
>    * Destroy a domain once all references to it have been dropped.  Used either
>    * from the RCU path, or from the domain_create() error path before the domain
>    * is inserted into the domlist.
> @@ -553,6 +606,9 @@ struct domain *domain_create(domid_t domid,
>       if ( init_status & INIT_watchdog )
>           watchdog_domain_destroy(d);
>   
> +    /* Must not hit a continuation in this context. */
> +    ASSERT(domain_teardown(d) == 0);
The ASSERT() will become a NOP in production builds, so 
domain_teardown() will not be called.

However, I think it would be better if we pass an extra argument to 
indicate whether the code is allowed to preempt. This would make the 
preemption check more obvious in evtchn_destroy() compared to the 
current d->is_dying != DOMDYING_dead.

> +
>       _domain_destroy(d);
>   
>       return ERR_PTR(err);
> @@ -733,6 +789,9 @@ int domain_kill(struct domain *d)
>           domain_set_outstanding_pages(d, 0);
>           /* fallthrough */
>       case DOMDYING_dying:
> +        rc = domain_teardown(d);
> +        if ( rc )
> +            break;
>           rc = evtchn_destroy(d);
>           if ( rc )
>               break;
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index faf5fda36f..3f35c537b8 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -525,6 +525,14 @@ struct domain
>       /* Argo interdomain communication support */
>       struct argo_domain *argo;
>   #endif
> +
> +    /*
> +     * Continuation information for domain_teardown().  All fields entirely
> +     * private.
> +     */
> +    struct {
> +        unsigned int val;
> +    } teardown;
>   };
>   
>   static inline struct page_list_head *page_to_list(
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 18:45:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 18:45:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57531.100713 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krQBU-0003ko-QW; Mon, 21 Dec 2020 18:45:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57531.100713; Mon, 21 Dec 2020 18:45:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krQBU-0003kh-NX; Mon, 21 Dec 2020 18:45:28 +0000
Received: by outflank-mailman (input) for mailman id 57531;
 Mon, 21 Dec 2020 18:45:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wm/H=FZ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1krQBT-0003kc-H7
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 18:45:27 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fafa69c9-e0a0-4ae6-a17c-a857e5867cc3;
 Mon, 21 Dec 2020 18:45:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fafa69c9-e0a0-4ae6-a17c-a857e5867cc3
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608576326;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=R3oppFae7+zIg50d/avPh3Wy3ntwoBp5EFrfl5Izyyo=;
  b=gHTkuh0vEY9JmRl2d5GvVSGllymsnLQAcc368FbdFhmaRawMw89fBNDF
   CDXY/obLTzIoQyZRqQsmzC7FSxW0UVg6nunobnLdQ0dLpZy8rXfadiB4A
   G0R91w1z5D7bwsVyb7Alxva3qIRIiiiw3juKh9t/KUswKDP0kz83zT9Z1
   s=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: KmrKu6tyXI44dlyjth7EerEZMBcoOmchz5Fv/c06MGRdHEIjQe5FSakBUmK3TBQcWKu8p6evJl
 HhT6sZByUk9fBTlVcyXUEaRrt8/871XfWdzPQO+QPfBBVnUhdW1gVe2A6WBoPiEAKAh0g8TYCu
 V3P5Ocy0g9tiA2QHeVGMJYmfiXiw429V1Qqnb90dKHPkF6Ci4dEhYcqZv1rsruz1zkjBI+pLe6
 mvFKIuE5AsYkZ1AjpgYhXhBa67r/3AvWiCv2xVLmg82DvzS513mfBcTIDFCiQ+fjDN3lFqSic0
 XIo=
X-SBRS: 5.2
X-MesageID: 33709337
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,437,1599537600"; 
   d="scan'208";a="33709337"
Subject: Re: [PATCH 2/3] xen/domain: Introduce domain_teardown()
To: Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
CC: Jan Beulich <JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20201221181446.7791-1-andrew.cooper3@citrix.com>
 <20201221181446.7791-3-andrew.cooper3@citrix.com>
 <f42f6b6e-3ee3-f58e-513b-70f80f7541ee@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <7edf2139-b63e-00c9-7172-524566f942ae@citrix.com>
Date: Mon, 21 Dec 2020 18:45:19 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <f42f6b6e-3ee3-f58e-513b-70f80f7541ee@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 21/12/2020 18:36, Julien Grall wrote:
>> @@ -553,6 +606,9 @@ struct domain *domain_create(domid_t domid,
>>       if ( init_status & INIT_watchdog )
>>           watchdog_domain_destroy(d);
>>   +    /* Must not hit a continuation in this context. */
>> +    ASSERT(domain_teardown(d) == 0);
>> The ASSERT() will become a NOP in production builds, so
>> domain_teardown() will not be called.

Urgh - it's not really a nop, but its evaluation isn't symmetric between
debug and release builds.  I'll need an extra local variable.
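i.e. something along these lines (standalone sketch, with a stub in place of domain_teardown() so the side-effect difference is observable):

```c
#include <assert.h>

static int calls;

/* Stand-in for domain_teardown(): counts invocations, always succeeds. */
static int teardown_stub(void)
{
    calls++;
    return 0;
}

/*
 * assert(teardown_stub() == 0) would compile the call out entirely in a
 * release (NDEBUG) build.  Keeping the call in a local variable makes the
 * side effect unconditional; only the check disappears.
 */
void teardown_and_check(void)
{
    int rc = teardown_stub();

    assert(rc == 0);
    (void)rc;   /* avoid an unused-variable warning under NDEBUG */
}
```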

>
> However, I think it would be better if we pass an extra argument to
> indicate whether the code is allowed to preempt. This would make the
> preemption check more obvious in evtchn_destroy() compared to the
> current d->is_dying != DOMDYING_dead.

We can have a predicate if you'd prefer, but plumbing an extra parameter
is wasteful, and can only cause confusion if it is out of sync with
d->is_dying.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 19:37:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 19:37:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57549.100755 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krQzK-0008Bi-Ax; Mon, 21 Dec 2020 19:36:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57549.100755; Mon, 21 Dec 2020 19:36:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krQzK-0008Bb-6b; Mon, 21 Dec 2020 19:36:58 +0000
Received: by outflank-mailman (input) for mailman id 57549;
 Mon, 21 Dec 2020 19:36:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wm/H=FZ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1krQzJ-0008BW-90
 for xen-devel@lists.xenproject.org; Mon, 21 Dec 2020 19:36:57 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9d8e2bc6-8054-48f2-a7ca-ff45a52cfb11;
 Mon, 21 Dec 2020 19:36:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9d8e2bc6-8054-48f2-a7ca-ff45a52cfb11
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608579416;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=XSBbOSphSVpF1JAj9O9NNctyNMu5unIHAn8w0blBlIE=;
  b=RjNTEGzUz8MfWBzfc/DGqK7odxDC4ySY7bDW6wnZwzbyHCMymCMkKMnj
   Vnou+nBvV1bwO4lYyd6dwqGDZY5ARqPjTitY5ZiSNbuGpb0GjNOvVnqau
   nU30Z0yWkaQJnY/wW/Hamsu+5QMX+65hsddUYjR1ERb0nzGxLdiq3OEZs
   0=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: xjvuXya0pJaFcAuDjBsgh0/u3geMOaHQjIzIPcVQ8t1Gb9Ntg9A+3/ws1MTEMiVNuw7BtTXCAZ
 OKBEm3IWabkL/9fL4XqdqnzTOp3+dZVNZTBklN8Slk8jJt1tXJHEikmxz1YewQLU5lbIzBi+S3
 MrA8PLWRUNp2xl4CFj5cASej4zF2GC1GppdjrvdvFbbyJ7idjzy0nveGSyWnGg2Bfqc/dPULZv
 bPNeoQay3GxuQoep0qiivA+TKp2SYRos3Z+Fr3P9SApJK9QFt1R6HEKqaSZfTx+awC/sUCwaMY
 ziI=
X-SBRS: 5.2
X-MesageID: 33693178
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,437,1599537600"; 
   d="scan'208";a="33693178"
Subject: Hypercall fault injection (Was [PATCH 0/3] xen/domain: More
 structured teardown)
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Jan Beulich <JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Juergen Gross <jgross@suse.com>
References: <20201221181446.7791-1-andrew.cooper3@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <ac552c84-144c-c213-7985-84d92cbb5601@citrix.com>
Date: Mon, 21 Dec 2020 19:36:49 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201221181446.7791-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

Hello,

We have some very complicated hypercalls: createdomain most of all, with
max_vcpus a close second.  Both have immense complexity and very
hard-to-test error handling.

It is no surprise that the error handling is riddled with bugs.

Random failures from core functions are one way, but I'm not sure that
will be especially helpful.  In particular, we'd need a way to exclude
"dom0 critical" operations so we've got a usable system to run testing on.

As an alternative, how about adding a fault_ttl field into the hypercall?

The exact paths taken in {domain,vcpu}_create() are sensitive to the
hardware, Xen Kconfig, and other parameters passed into the
hypercall(s).  The testing logic doesn't really want to care about what
failed; simply that the error was handled correctly.

So a test for this might look like:

cfg = { ... };
while ( xc_create_domain(xch, cfg) < 0 )
    cfg.fault_ttl++;


The pro's of this approach is that for a specific build of Xen on a
piece of hardware, it ought to check every failure path in
domain_create(), until the ttl finally gets higher than the number of
fail-able actions required to construct a domain.  Also, the test
doesn't need changing as the complexity of domain_create() changes.

The main con will most likely be the invasiveness of code in Xen, but
I suppose any fault injection is going to be invasive to a certain extent.
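On the Xen side, the consumption point could be as small as the following sketch. To be clear, the fault_ttl field and the helper name are hypothetical, not existing Xen code: each fail-able action would call the helper, and the fault_ttl'th one injects a failure.

```c
#include <stdbool.h>

/* Hypothetical per-domain state, copied from the createdomain hypercall. */
struct fault_state {
    unsigned int fault_ttl;   /* 0 == fault injection disabled */
};

/* Returns true exactly once: on the fault_ttl'th fail-able action. */
static bool fault_injection_hit(struct fault_state *s)
{
    if ( s->fault_ttl == 0 )
        return false;

    return --s->fault_ttl == 0;
}
```

The userspace loop above then walks the ttl upwards until domain creation finally succeeds, having exercised every failure path reachable on that build and hardware along the way.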

Fault injection like this would also want pairing with some other plans
I had for dalloc() & friends which wrap the current allocators, and
count (in struct domain) the number and/or size of domain allocations,
so we can a) check for leaks, and b) report how much memory a domain
object (and all its descendant objects) actually takes (seeing as we
don't know this value at all).

Thoughts?

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 20:07:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 20:07:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57557.100773 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krRSa-0002SH-RF; Mon, 21 Dec 2020 20:07:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57557.100773; Mon, 21 Dec 2020 20:07:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krRSa-0002SA-Nv; Mon, 21 Dec 2020 20:07:12 +0000
Received: by outflank-mailman (input) for mailman id 57557;
 Mon, 21 Dec 2020 20:07:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krRSZ-0002S2-OD; Mon, 21 Dec 2020 20:07:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krRSZ-0004q9-EW; Mon, 21 Dec 2020 20:07:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krRSZ-000649-8B; Mon, 21 Dec 2020 20:07:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1krRSZ-0006jP-7i; Mon, 21 Dec 2020 20:07:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=gtq83h7kfYKoTknDgkg2tghU1Jn5Ccyd88hOynhTCiE=; b=O1poE14ATEsT11sl3kauuqr8py
	30KuUr//rk7JAMzZdEy7i7QJjlUbOEMnCHrdhx8uc1+PQ+NhuQhlT8KEU8wpLP52zxuL1CkZhmeXu
	Jz3jXZM1Yx0ud5PB/HsZcK7bp17e2RZJRvkCv+sPJI0dir++ppN2cxmV7fYEkwPBfKqM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable-smoke bisection] complete build-amd64
Message-Id: <E1krRSZ-0006jP-7i@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 21 Dec 2020 20:07:11 +0000

branch xen-unstable-smoke
xenbranch xen-unstable-smoke
job build-amd64
testid xen-build

Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  d162f36848c4a98a782cc05820b0aa7ec1ae297d
  Bug not present: 6131dab5f2c8059a0fc7fd884bc6d4ff78ba44c2
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/157774/


  commit d162f36848c4a98a782cc05820b0aa7ec1ae297d
  Author: Andrew Cooper <andrew.cooper3@citrix.com>
  Date:   Mon Sep 28 15:25:44 2020 +0100
  
      xen/x86: Fix memory leak in vcpu_create() error path
      
      Various paths in vcpu_create() end up calling paging_update_paging_modes(),
      which eventually allocates a monitor pagetable if one doesn't exist.
      
      However, an error in vcpu_create() results in the vcpu being cleaned up
      locally, and not put onto the domain's vcpu list.  Therefore, the monitor
      table is not freed by {hap,shadow}_teardown()'s loop.  This is caught by
      assertions later that we've successfully freed the entire hap/shadow memory
      pool.
      
      The per-vcpu loops in the domain teardown logic are conceptually wrong, but
      exist due to insufficient structure in the existing logic.
      
      Break paging_vcpu_teardown() out of paging_teardown(), with mirrored breakouts
      in the hap/shadow code, and use it from arch_vcpu_create()'s error path.  This
      fixes the memory leak.
      
      The new {hap,shadow}_vcpu_teardown() must be idempotent, and are written to be
      as tolerable as possible, with the minimum number of safety checks possible.
      In particular, drop the mfn_valid() check - if these fields are junk, then Xen
      is going to explode anyway.
      
      Reported-by: Michał Leszczyński <michal.leszczynski@cert.pl>
      Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable-smoke/build-amd64.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable-smoke/build-amd64.xen-build --summary-out=tmp/157774.bisection-summary --basis-template=157696 --blessings=real,real-bisect,real-retry xen-unstable-smoke build-amd64 xen-build
Searching for failure / basis pass:
 157761 fail [host=himrod1] / 157696 ok.
Failure / basis pass flights: 157761 / 157696
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 d162f36848c4a98a782cc05820b0aa7ec1ae297d
Basis pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 357db96a66e47e609c3b14768f1062e13eedbd93
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/xen.git#357db96a66e47e609c3b14768f1062e13eedbd93-d162f36848c4a98a782cc05820b0aa7ec1ae297d
Loaded 5001 nodes in revision graph
Searching for test results:
 157691 [host=himrod2]
 157692 [host=himrod2]
 157694 [host=himrod2]
 157695 pass irrelevant
 157697 fail irrelevant
 157698 pass irrelevant
 157699 pass irrelevant
 157700 pass irrelevant
 157701 pass irrelevant
 157702 pass irrelevant
 157703 fail irrelevant
 157704 pass irrelevant
 157706 fail irrelevant
 157696 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 357db96a66e47e609c3b14768f1062e13eedbd93
 157707 pass irrelevant
 157761 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 d162f36848c4a98a782cc05820b0aa7ec1ae297d
 157765 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 357db96a66e47e609c3b14768f1062e13eedbd93
 157767 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 d162f36848c4a98a782cc05820b0aa7ec1ae297d
 157768 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 6131dab5f2c8059a0fc7fd884bc6d4ff78ba44c2
 157769 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 d162f36848c4a98a782cc05820b0aa7ec1ae297d
 157770 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 6131dab5f2c8059a0fc7fd884bc6d4ff78ba44c2
 157771 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 d162f36848c4a98a782cc05820b0aa7ec1ae297d
 157773 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 6131dab5f2c8059a0fc7fd884bc6d4ff78ba44c2
 157774 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 d162f36848c4a98a782cc05820b0aa7ec1ae297d
Searching for interesting versions
 Result found: flight 157696 (pass), for basis pass
 For basis failure, parent search stopping at 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 6131dab5f2c8059a0fc7fd884bc6d4ff78ba44c2, results HASH(0x55feabd515e0) HASH(0x55feabd4ab20) HASH(0x55feabd56098) For basis failure, parent search stopping at 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 357db96a66e47e609c3b14768f1062e13eedbd93, results HASH(0x55feabd3f660) HASH(0x55feabd521e0) Result found: flight 157761 (fail), for \
 basis failure (at ancestor ~710)
 Repro found: flight 157765 (pass), for basis pass
 Repro found: flight 157767 (fail), for basis failure
 0 revisions at 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 6131dab5f2c8059a0fc7fd884bc6d4ff78ba44c2
No revisions left to test, checking graph state.
 Result found: flight 157768 (pass), for last pass
 Result found: flight 157769 (fail), for first failure
 Repro found: flight 157770 (pass), for last pass
 Repro found: flight 157771 (fail), for first failure
 Repro found: flight 157773 (pass), for last pass
 Repro found: flight 157774 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  d162f36848c4a98a782cc05820b0aa7ec1ae297d
  Bug not present: 6131dab5f2c8059a0fc7fd884bc6d4ff78ba44c2
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/157774/


  commit d162f36848c4a98a782cc05820b0aa7ec1ae297d
  Author: Andrew Cooper <andrew.cooper3@citrix.com>
  Date:   Mon Sep 28 15:25:44 2020 +0100
  
      xen/x86: Fix memory leak in vcpu_create() error path
      
      Various paths in vcpu_create() end up calling paging_update_paging_modes(),
      which eventually allocates a monitor pagetable if one doesn't exist.
      
      However, an error in vcpu_create() results in the vcpu being cleaned up
      locally, and not put onto the domain's vcpu list.  Therefore, the monitor
      table is not freed by {hap,shadow}_teardown()'s loop.  This is caught by
      assertions later that we've successfully freed the entire hap/shadow memory
      pool.
      
      The per-vcpu loops in the domain teardown logic are conceptually wrong, but
      exist due to insufficient structure in the existing logic.
      
      Break paging_vcpu_teardown() out of paging_teardown(), with mirrored breakouts
      in the hap/shadow code, and use it from arch_vcpu_create()'s error path.  This
      fixes the memory leak.
      
      The new {hap,shadow}_vcpu_teardown() must be idempotent, and are written to be
      as tolerable as possible, with the minimum number of safety checks possible.
      In particular, drop the mfn_valid() check - if these fields are junk, then Xen
      is going to explode anyway.
      
      Reported-by: Michał Leszczyński <michal.leszczynski@cert.pl>
      Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>

Revision graph left in /home/logs/results/bisect/xen-unstable-smoke/build-amd64.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
157774: tolerable ALL FAIL

flight 157774 xen-unstable-smoke real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/157774/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-amd64                   6 xen-build               fail baseline untested


jobs:
 build-amd64                                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Mon Dec 21 21:10:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 21:10:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57565.100788 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krSRP-0007qy-NP; Mon, 21 Dec 2020 21:10:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57565.100788; Mon, 21 Dec 2020 21:10:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krSRP-0007qr-JI; Mon, 21 Dec 2020 21:10:03 +0000
Received: by outflank-mailman (input) for mailman id 57565;
 Mon, 21 Dec 2020 21:10:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krSRO-0007mD-LU; Mon, 21 Dec 2020 21:10:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krSRO-0005rP-E9; Mon, 21 Dec 2020 21:10:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krSRO-0001PR-6X; Mon, 21 Dec 2020 21:10:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1krSRO-0007zm-64; Mon, 21 Dec 2020 21:10:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dHnT+mL/ouwoZNi75McX+OUs6ld3fcHuek95AIlV9wI=; b=zVY7n7mAwGWYumMr7nSjNlxC0m
	DMrxiZYJ9myx5F5CAkGSAc1sXHyQEXiL4/iTic52mgu9m4njeRzVsKDvPaW4zSG1ftuyTRJcOwb66
	kzIRH5AIIVndvVsWgsbQ/771Wy9VJW+X80x2gYYPJ8BOqpVBl2pBKnHQvHBley5YykBY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157766-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157766: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=ffa9d2999722a404d3a4381b0249d191de134d33
X-Osstest-Versions-That:
    xen=357db96a66e47e609c3b14768f1062e13eedbd93
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 21 Dec 2020 21:10:02 +0000

flight 157766 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157766/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ffa9d2999722a404d3a4381b0249d191de134d33
baseline version:
 xen                  357db96a66e47e609c3b14768f1062e13eedbd93

Last test of basis   157696  2020-12-18 19:01:31 Z    3 days
Failing since        157761  2020-12-21 15:00:25 Z    0 days    2 attempts
Testing same since   157766  2020-12-21 18:00:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   357db96a66..ffa9d29997  ffa9d2999722a404d3a4381b0249d191de134d33 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 21:27:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 21 Dec 2020 21:27:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57572.100803 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krSiJ-0000w1-9N; Mon, 21 Dec 2020 21:27:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57572.100803; Mon, 21 Dec 2020 21:27:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krSiJ-0000vu-67; Mon, 21 Dec 2020 21:27:31 +0000
Received: by outflank-mailman (input) for mailman id 57572;
 Mon, 21 Dec 2020 21:27:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krSiI-0000vm-14; Mon, 21 Dec 2020 21:27:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krSiH-00068g-R1; Mon, 21 Dec 2020 21:27:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krSiH-00028t-I7; Mon, 21 Dec 2020 21:27:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1krSiH-000120-Ha; Mon, 21 Dec 2020 21:27:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=b0igHRAmEv5E9cD7XVKFRNYnHYzFO+HiGGsl01SN4Uc=; b=It/d4Usx3VTEpxKQUMXEwjEqkN
	dmpAlzZPPF/N4E4NVRRTvRgkY6b9+J3FzqHhDlYanlFpuQfX0wREmwrL5blG/gNYEdSne576Xa7sS
	2q+06m+tMX15DXrC7cmSApANjHzyalvpqbMHT6wTsUtvYbNPAVThUThjDpmFzk0LTjVg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157755-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157755: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=e37b12e4bb21e7c81732370b0a2b34bd196f380b
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 21 Dec 2020 21:27:29 +0000

flight 157755 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157755/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl          11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                e37b12e4bb21e7c81732370b0a2b34bd196f380b
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  143 days
Failing since        152366  2020-08-01 20:49:34 Z  142 days  245 attempts
Testing same since   157755  2020-12-21 10:33:07 Z    0 days    1 attempts

------------------------------------------------------------
4299 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 960294 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 21 22:27:56 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157759-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157759: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=3ce3274a5ea41134fafb983c0198de89007d471e
X-Osstest-Versions-That:
    ovmf=6932f4bfe552c1704c5715430de6045c78a5b62f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 21 Dec 2020 22:27:38 +0000

flight 157759 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157759/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 3ce3274a5ea41134fafb983c0198de89007d471e
baseline version:
 ovmf                 6932f4bfe552c1704c5715430de6045c78a5b62f

Last test of basis   157726  2020-12-19 17:23:18 Z    2 days
Testing same since   157759  2020-12-21 14:39:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bob Feng <bob.c.feng@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   6932f4bfe5..3ce3274a5e  3ce3274a5ea41134fafb983c0198de89007d471e -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 00:01:14 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157757-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 157757: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=19d1c763e849fb78ddf2afe0e2625d79ed4c8a11
X-Osstest-Versions-That:
    linux=8a866bdbbac227a99b0b37e03679908642f58aec
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 22 Dec 2020 00:00:54 +0000

flight 157757 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157757/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157737
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157737
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157737
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157737
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157737
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157737
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157737
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157737
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157737
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157737
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157737
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                19d1c763e849fb78ddf2afe0e2625d79ed4c8a11
baseline version:
 linux                8a866bdbbac227a99b0b37e03679908642f58aec

Last test of basis   157737  2020-12-20 06:47:45 Z    1 days
Testing same since   157757  2020-12-21 12:40:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alan Stern <stern@rowland.harvard.edu>
  Alexander Sverdlin <alexander.sverdlin@gmail.com>
  Andy Lutomirski <luto@kernel.org>
  Borislav Petkov <bp@suse.de>
  Bui Quang Minh <minhquangbui99@gmail.com>
  Christian Brauner <christian.brauner@ubuntu.com>
  Claudiu Manoil <claudiu.manoil@nxp.com>
  David S. Miller <davem@davemloft.net>
  Eric Dumazet <edumazet@google.com>
  Esben Haabendal <esben@geanix.com>
  Fugang Duan <fugang.duan@nxp.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hans de Goede <hdegoede@redhat.com>
  Huazhong Tan <tanhuazhong@huawei.com>
  Jakub Kicinski <kuba@kernel.org>
  James Morse <james.morse@arm.com>
  Joakim Zhang <qiangqing.zhang@nxp.com>
  Jon Hunter <jonathanh@nvidia.com>
  Joseph Huang <Joseph.Huang@garmin.com>
  Kamal Mostafa <kamal@canonical.com>
  Li Jun <jun.li@nxp.com>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Martin Blumenstingl <martin.blumenstingl@googlemail.com>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Moshe Shemesh <moshe@mellanox.com>
  Neal Cardwell <ncardwell@google.com>
  Nikolay Aleksandrov <nikolay@nvidia.com>
  Oliver Neukum <oneukum@suse.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peilin Ye <yepeilin.cs@gmail.com>
  Sergej Bauer <sbauer@blackbox.su>
  Soheil Hassas Yeganeh <soheil@google.com>
  Stephen Suryaputra <ssuryaextr@gmail.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sudip Mukherjee <sudipm.mukherjee@gmail.com>
  Takashi Iwai <tiwai@suse.de>
  Tariq Toukan <tariqt@nvidia.com>
  Thomas Gleixner <tglx@linutronix.de>
  Xiaochen Shen <xiaochen.shen@intel.com>
  Xin Long <lucien.xin@gmail.com>
  Yuchung Cheng <ycheng@google.com>
  Zhang Changzhong <zhangchangzhong@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   8a866bdbbac2..19d1c763e849  19d1c763e849fb78ddf2afe0e2625d79ed4c8a11 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 00:47:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 00:47:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57611.100858 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krVpm-0001lN-3E; Tue, 22 Dec 2020 00:47:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57611.100858; Tue, 22 Dec 2020 00:47:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krVpm-0001lG-0A; Tue, 22 Dec 2020 00:47:26 +0000
Received: by outflank-mailman (input) for mailman id 57611;
 Tue, 22 Dec 2020 00:47:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krVpl-0001l8-4B; Tue, 22 Dec 2020 00:47:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krVpk-0001Vl-U3; Tue, 22 Dec 2020 00:47:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krVpk-0005Nd-H0; Tue, 22 Dec 2020 00:47:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1krVpk-000639-GV; Tue, 22 Dec 2020 00:47:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JeBjABJZ9CbOItH7uFDhw3rsbole1ZWN+Nd2bM3/fIY=; b=B2pH1FRDUTs6TrsjGmWFkj5Emj
	NYxmkNJp/Nff6Hw16G0GUneFICba+3jkrVEjUFG7xmeuUvF7C2IwFJqI/cdtKdapKyaJjOU3kXLIQ
	1D/hjYIQlIVlA/pwsx6m5LTM5VwsZoaiPpSIap3Ad5/U1Dd/mUEnImMM43z2ESamfqtQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157762-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [seabios test] 157762: tolerable FAIL - PUSHED
X-Osstest-Failures:
    seabios:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    seabios:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    seabios:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    seabios=ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e
X-Osstest-Versions-That:
    seabios=748d619be3282fba35f99446098ac2d0579f6063
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 22 Dec 2020 00:47:24 +0000

flight 157762 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157762/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156824
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156824
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156824
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156824
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156824
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass

version targeted for testing:
 seabios              ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e
baseline version:
 seabios              748d619be3282fba35f99446098ac2d0579f6063

Last test of basis   156824  2020-11-16 15:39:41 Z   35 days
Testing same since   157762  2020-12-21 16:10:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Mike Banon <mikebdp2@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images



Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/seabios.git
   748d619..ef88eea  ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 01:24:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 01:24:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57619.100873 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krWPk-0003NX-2s; Tue, 22 Dec 2020 01:24:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57619.100873; Tue, 22 Dec 2020 01:24:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krWPj-0003NQ-Vz; Tue, 22 Dec 2020 01:24:35 +0000
Received: by outflank-mailman (input) for mailman id 57619;
 Tue, 22 Dec 2020 01:24:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krWPi-0003NI-Eg; Tue, 22 Dec 2020 01:24:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krWPi-0008NP-8c; Tue, 22 Dec 2020 01:24:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krWPi-0007Pr-0W; Tue, 22 Dec 2020 01:24:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1krWPi-0002gE-03; Tue, 22 Dec 2020 01:24:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jxx6Q2LMgqXsV6uUW7AOP6pGu6k0Ghb8yXmB0jJ8Mp4=; b=wGVFVoyDM09TW2vTy450TPQeRX
	qztyy1VqZuXnBAHH0mnLRfkj9Huoh8ZSyvIGoMtMHmk5bnHdbdvVr8Xm/pXEeenR0Uv96+Mhtb0KJ
	QR0nQ/wltDAdMXTt2+PICsTsZghlQ/QrI6n0GFNUx711o++/LiPItDA5BrQPiBxLiIi8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157779-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157779: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8c8938dcc1bd37dd61f705410053e08804ca2b55
X-Osstest-Versions-That:
    xen=ffa9d2999722a404d3a4381b0249d191de134d33
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 22 Dec 2020 01:24:34 +0000

flight 157779 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157779/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8c8938dcc1bd37dd61f705410053e08804ca2b55
baseline version:
 xen                  ffa9d2999722a404d3a4381b0249d191de134d33

Last test of basis   157766  2020-12-21 18:00:26 Z    0 days
Testing same since   157779  2020-12-21 23:00:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images



Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   ffa9d29997..8c8938dcc1  8c8938dcc1bd37dd61f705410053e08804ca2b55 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 01:39:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 01:39:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57627.100888 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krWeA-0004Q3-Ft; Tue, 22 Dec 2020 01:39:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57627.100888; Tue, 22 Dec 2020 01:39:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krWeA-0004Pw-Bf; Tue, 22 Dec 2020 01:39:30 +0000
Received: by outflank-mailman (input) for mailman id 57627;
 Tue, 22 Dec 2020 01:39:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AYY1=F2=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1krWe8-0004Pr-Qr
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 01:39:28 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3c3d5ed7-f087-4e1d-b172-a9566a95a7e4;
 Tue, 22 Dec 2020 01:39:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3c3d5ed7-f087-4e1d-b172-a9566a95a7e4
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608601167;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=bUeQXKJqLiq9cmUZoaFFSY50qENVTMldeUr+bLgxyqw=;
  b=dCTMnMCW4B2Dvw541FdiQ+Rwbw1GKsPVZAqu+nYg48mGebcGYr+/dRJo
   tmN6aXYiaiJ3gA60wqWN0lt6KWg3luCZyBmr5oJngUZC7/Q4tsOvIHAyb
   etAcf28nL82Auv2AA17UAZZlK1HFo4LNWBBFO5y5tdw6RivVXHvwEq5/d
   s=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 7VVNWdtuFWeU3i+I0lC5aUsP6YDmrz/B/01D3HzUSLP6aOJzROlVQ+5gIewieh10P7bmiV9LE1
 G+2aXs9gH9kRagkw4tnzfeFMTmtlaAQo52BgS6tgdvVXx259FHI93cAwCGRTQoamia2hgKXLe/
 n7O62GehfZBMOZGGZ1iJshgPCJG20GagDvWubwFlBxh5NYpdiNZJxeWpGTNy/c1FP30t7nC7cS
 /wJCANmfu0jzokJmGVfkqmnqQ+N0fcFCeVvATiZKXxfwMKChbDSEq/lKdEg59C2lHIgi7PBONn
 Hoc=
X-SBRS: 5.2
X-MesageID: 34071410
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,438,1599537600"; 
   d="scan'208";a="34071410"
Subject: Re: [PATCH v2] x86/intel: insert Ice Lake X (server) model numbers
To: Jan Beulich <jbeulich@suse.com>
CC: <andrew.cooper3@citrix.com>, <roger.pau@citrix.com>, <wl@xen.org>,
	<jun.nakajima@intel.com>, <kevin.tian@intel.com>,
	<xen-devel@lists.xenproject.org>
References: <1603075646-24995-1-git-send-email-igor.druzhinin@citrix.com>
 <d6265c58-b553-3dee-9817-1a8673472972@suse.com>
From: Igor Druzhinin <igor.druzhinin@citrix.com>
Message-ID: <398d0e64-0154-faea-16a8-1677a2a1c3e9@citrix.com>
Date: Tue, 22 Dec 2020 01:39:23 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <d6265c58-b553-3dee-9817-1a8673472972@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 21/12/2020 16:36, Jan Beulich wrote:
> On 19.10.2020 04:47, Igor Druzhinin wrote:
>> LBR, C-state MSRs and if_pschange_mc erratum applicability should correspond
>> to Ice Lake desktop according to External Design Specification vol.2.
>>
>> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
>> ---
>> Changes in v2:
>> - keep partial sorting
>>
>> Andrew, since you have access to these documents, please review as you have time.
> 
> Coming back to this - the recent SDM update inserted at least the
> model numbers, but besides 6a it also lists 6c. Judging from the
> majority of additions happening in pairs, I wonder whether we
> couldn't (reasonably safely) add 6c then here as well. Of course
> I still can't ack the change either way with access to just the
> SDM...

I checked what 0x6c is and it appears to be Ice Lake-D (next gen Xeon D).
The information from EDS vol.2 on Ice Lake-D available to us corresponds to what
I got for Ice Lake X. So the numbers could be added here as soon as Andrew finds
time to review that one.

Igor


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 02:41:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 02:41:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57634.100905 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krXc4-00029c-BO; Tue, 22 Dec 2020 02:41:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57634.100905; Tue, 22 Dec 2020 02:41:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krXc4-00029V-8X; Tue, 22 Dec 2020 02:41:24 +0000
Received: by outflank-mailman (input) for mailman id 57634;
 Tue, 22 Dec 2020 02:41:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZUFy=F2=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1krXc3-00029Q-3x
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 02:41:23 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e94b4197-1642-472e-be2f-0fca149abc9d;
 Tue, 22 Dec 2020 02:41:21 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 0BM2f0N7080944
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Mon, 21 Dec 2020 21:41:05 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 0BM2exbP080943;
 Mon, 21 Dec 2020 18:40:59 -0800 (PST) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e94b4197-1642-472e-be2f-0fca149abc9d
Date: Mon, 21 Dec 2020 18:40:59 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org,
        Stefano Stabellini <sstabellini@kernel.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
        Ian Jackson <iwj@xenproject.org>
Subject: Re: [RESEND] [RFC PATCH] xen/arm: domain_build: Ignore empty memory
 bank
Message-ID: <X+Fcu5Frv2rS1DS3@mattapan.m5p.com>
References: <X+DbupqYE3rrFaIM@mattapan.m5p.com>
 <102a361a-a070-185f-c564-8e4c30f96ab6@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <102a361a-a070-185f-c564-8e4c30f96ab6@xen.org>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Mon, Dec 21, 2020 at 06:28:35PM +0000, Julien Grall wrote:
> I was planning to review the first version today, but as you sent a new 
> version I will answer on this one directly.

Mostly the commentary has been increasing, not so much the commit.

> On 21/12/2020 17:30, Elliott Mitchell wrote:
> > Previously Xen had stopped processing Device Trees if an empty
> > (size == 0) memory bank was found.
> > 
> > Commit 5a37207df52066efefe419c677b089a654d37afc changed this behavior to
> > ignore such banks.  Unfortunately this means these empty nodes are
> > visible to code which accesses the device trees.  Have domain_build also
> > ignore these entries.
> 
> I am probably missing something here. The commit you pointed out will 
> only take care of memory nodes (including reserved-memory).
> 
> It should not be possible to reach handle_device() with actual RAM. 
> Although, it would with the reserved memory node.
> 
> Could you provide a bit more details on the issue? In particular, I am 
> interested to see the offending node and its content.

Define "see" in this context.  The message which shows up is:
"Unable to retrieve address 0 for /scb/pcie@7d500000/pci@1,0/usb@1,0"

According to Linux "name", "reg" and "resets" exist there.

(and, as stated, the "0" looks suspiciously like NULL rather than an
index)

> > I doubt this is the only bug exposed by
> > 5a37207df52066efefe419c677b089a654d37afc.
> 
> Are you saying that with my patch dropped, Xen will boot but with it 
> will not?

Hmm, I realized that I'd found a fix but hadn't actually tested to
confirm it...   Seems I had it wrong: dropping
5a37207df52066efefe419c677b089a654d37afc doesn't fix the issue, so that
is NOT the cause.

Sorry about misattributing the cause.  Now to start doing builds to
identify the real cause...  (this of course will take a bit)

Hmm, this is proving rather more difficult to identify than I expected.


> > As I thought the 0 it was reporting was an address of 0.  Perhaps the
> > message should instead be:
> > "Unable to retrieve address for index %u of %s\n"?
> > ---
> >   xen/arch/arm/domain_build.c | 5 +++++
> >   1 file changed, 5 insertions(+)
> > 
> > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > index e824ba34b0..0b83384bd3 100644
> > --- a/xen/arch/arm/domain_build.c
> > +++ b/xen/arch/arm/domain_build.c
> > @@ -1405,6 +1405,11 @@ static int __init handle_device(struct domain *d, struct dt_device_node *dev,
> >       {
> >           struct map_range_data mr_data = { .d = d, .p2mt = p2mt };
> >           res = dt_device_get_address(dev, i, &addr, &size);
> > +
> > +        /* Some DT may describe empty bank, ignore them */
> > +        if ( !size )
> > +            continue;
> 
> ... dt_device_get_address() will not set the size if the node is bogus. 
> So you can't rely on either addr or size when res is non-zero.
> 
> handle_device() (at least on unstable) will not initialize the two 
> variables to 0. So I guess you are lucky that your compiler zeroed them 
> for you, but that's not the normal behavior.
> 
> So I think we first need to figure out what the offending node is and 
> why dt_device_get_address() is returning an error for it.
> 
> That said, I agree that we possibly want a size == 0 check (action TBD) 
> in map_range_to_domain(), as the code would do the wrong thing for 0.

Seems like returning to a working build without that commit is going to
take a bit.  :-(


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Tue Dec 22 05:48:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 05:48:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57640.100924 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kraWt-0000x5-9g; Tue, 22 Dec 2020 05:48:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57640.100924; Tue, 22 Dec 2020 05:48:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kraWt-0000wy-6f; Tue, 22 Dec 2020 05:48:15 +0000
Received: by outflank-mailman (input) for mailman id 57640;
 Tue, 22 Dec 2020 05:48:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xN4y=F2=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kraWr-0000wt-TA
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 05:48:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d15dd0cc-6006-486e-8c06-c61a2d16af39;
 Tue, 22 Dec 2020 05:48:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3CC72AC63;
 Tue, 22 Dec 2020 05:48:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d15dd0cc-6006-486e-8c06-c61a2d16af39
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608616091; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=sgj1oDj20FaqvojIiBYkspp1MXr9eQlsW86rFhnOuN4=;
	b=JvI3TzeOIhEbwstuUMkn+FiIfEvQXQkOh2jqzoVeoNGshCTz7fuAv39kcf0n70Wm4At26u
	4vNuPZL0gsHy0dxKbSpP7lttx0dEQoFr41e0Ifz2oW0mpxtpsB4yqRZE0CnlDS6VyKpfSg
	e2LJiQlZuIu6xYntBZ/bnwMcvbkEbc0=
Subject: Re: [PATCH v5 1/3] xen/arm: add support for
 run_in_exception_handler()
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201215063319.23290-1-jgross@suse.com>
 <20201215063319.23290-2-jgross@suse.com>
 <94e85d88-b0f0-01f6-99e0-386326bc044a@suse.com>
 <2ffa6302-5368-61c6-8564-6d3f828e2163@xen.org>
 <26480338-56f4-6a61-e776-78727fce0910@suse.com>
 <93d71f68-c561-6fe0-8433-66745d153217@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <1411b326-a0b6-b086-51d1-9827b43587fa@suse.com>
Date: Tue, 22 Dec 2020 06:48:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <93d71f68-c561-6fe0-8433-66745d153217@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="xq0RyeyQscSAyELZXxYI8cr4HO1zef9zh"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--xq0RyeyQscSAyELZXxYI8cr4HO1zef9zh
Content-Type: multipart/mixed; boundary="UVPciloastc1eekVqdZTuv8lX978t8p5I";
 protected-headers="v1"

--UVPciloastc1eekVqdZTuv8lX978t8p5I
Content-Type: multipart/mixed;
 boundary="------------70B7CDF08F13852D6285050C"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------70B7CDF08F13852D6285050C
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 21.12.20 17:50, Julien Grall wrote:
> Hi Jan,
>
> On 15/12/2020 13:59, Jan Beulich wrote:
>> On 15.12.2020 14:39, Julien Grall wrote:
>>> On 15/12/2020 09:02, Jan Beulich wrote:
>>>> On 15.12.2020 07:33, Juergen Gross wrote:
>>>>> --- a/xen/include/asm-arm/bug.h
>>>>> +++ b/xen/include/asm-arm/bug.h
>>>>> @@ -15,65 +15,62 @@
>>>>>   struct bug_frame {
>>>>>       signed int loc_disp;    /* Relative address to the bug address */
>>>>> -    signed int file_disp;   /* Relative address to the filename */
>>>>> +    signed int ptr_disp;    /* Relative address to the filename or function */
>>>>>       signed int msg_disp;    /* Relative address to the predicate (for ASSERT) */
>>>>>       uint16_t line;          /* Line number */
>>>>>       uint32_t pad0:16;       /* Padding for 8-bytes align */
>>>>>   };
>>>>>   #define bug_loc(b) ((const void *)(b) + (b)->loc_disp)
>>>>> -#define bug_file(b) ((const void *)(b) + (b)->file_disp);
>>>>> +#define bug_ptr(b) ((const void *)(b) + (b)->ptr_disp);
>>>>>   #define bug_line(b) ((b)->line)
>>>>>   #define bug_msg(b) ((const char *)(b) + (b)->msg_disp)
>>>>> -#define BUGFRAME_warn   0
>>>>> -#define BUGFRAME_bug    1
>>>>> -#define BUGFRAME_assert 2
>>>>> +#define BUGFRAME_run_fn 0
>>>>> +#define BUGFRAME_warn   1
>>>>> +#define BUGFRAME_bug    2
>>>>> +#define BUGFRAME_assert 3
>>>>> -#define BUGFRAME_NR     3
>>>>> +#define BUGFRAME_NR     4
>>>>>   /* Many versions of GCC doesn't support the asm %c parameter which would
>>>>>    * be preferable to this unpleasantness. We use mergeable string
>>>>>    * sections to avoid multiple copies of the string appearing in the
>>>>>    * Xen image.
>>>>>    */
>>>>> -#define BUG_FRAME(type, line, file, has_msg, msg) do {                       \
>>>>> +#define BUG_FRAME(type, line, ptr, msg) do {                                 \
>>>>>       BUILD_BUG_ON((line) >> 16);                                             \
>>>>>       BUILD_BUG_ON((type) >= BUGFRAME_NR);                                    \
>>>>>       asm ("1:"BUG_INSTR"\n"                                                  \
>>>>> -         ".pushsection .rodata.str, \"aMS\", %progbits, 1\n"                 \
>>>>> -         "2:\t.asciz " __stringify(file) "\n"                                \
>>>>> -         "3:\n"                                                              \
>>>>> -         ".if " #has_msg "\n"                                                \
>>>>> -         "\t.asciz " #msg "\n"                                               \
>>>>> -         ".endif\n"                                                          \
>>>>> -         ".popsection\n"                                                     \
>>>>> -         ".pushsection .bug_frames." __stringify(type) ", \"a\", %progbits\n"\
>>>>> -         "4:\n"                                                              \
>>>>> +         ".pushsection .bug_frames." __stringify(type) ", \"a\", %%progbits\n"\
>>>>> +         "2:\n"                                                              \
>>>>>           ".p2align 2\n"                                                      \
>>>>> -         ".long (1b - 4b)\n"                                                 \
>>>>> -         ".long (2b - 4b)\n"                                                 \
>>>>> -         ".long (3b - 4b)\n"                                                 \
>>>>> +         ".long (1b - 2b)\n"                                                 \
>>>>> +         ".long (%0 - 2b)\n"                                                 \
>>>>> +         ".long (%1 - 2b)\n"                                                 \
>>>>>           ".hword " __stringify(line) ", 0\n"                                 \
>>>>> -         ".popsection");                                                     \
>>>>> +         ".popsection" :: "i" (ptr), "i" (msg));                             \
>>>>>   } while (0)
>>>>
>>>> The comment ahead of the construct now looks to be at best stale, if
>>>> not entirely pointless. The reference to %c looks quite strange here
>>>> to me anyway - I can only guess it appeared here because on x86 one
>>>> has to use %c to output constants as operands for .long and alike,
>>>> and this was then tried to use on Arm as well without there really
>>>> being a need.
>>>
>>> Well, %c is one of the reasons why we can't have a common BUG_FRAME
>>> implementation. So I would like to retain this information before
>>> someone wants to consolidate the code and misses this issue.
>>
>> Fair enough, albeit I guess this then could do with re-wording.
>
> I agree.
>
>>
>>> Regarding the mergeable string section, I agree that the comment is now
>>> stale. However, could someone confirm that "i" will still retain the
>>> same behavior as using mergeable string sections?
>>
>> That depends on compiler settings / behavior.
>
> Ok. I wanted to see the difference between before and after but it looks
> like I can't compile Xen after applying the patch:
>
> In file included from /home/ANT.AMAZON.COM/jgrall/works/oss/xen/xen/include/xen/lib.h:23:0,
>                  from /home/ANT.AMAZON.COM/jgrall/works/oss/xen/xen/include/xen/bitmap.h:6,
>                  from bitmap.c:10:
> bitmap.c: In function ‘bitmap_allocate_region’:
> /home/ANT.AMAZON.COM/jgrall/works/oss/xen/xen/include/asm/bug.h:44:5: error: asm operand 0 probably doesn’t match constraints [-Werror]
>      asm ("1:"BUG_INSTR"\n"      \
>      ^
> /home/ANT.AMAZON.COM/jgrall/works/oss/xen/xen/include/asm/bug.h:60:5: note: in expansion of macro ‘BUG_FRAME’
>      BUG_FRAME(BUGFRAME_bug,  __LINE__, __FILE__, "");          \
>      ^~~~~~~~~
> /home/ANT.AMAZON.COM/jgrall/works/oss/xen/xen/include/xen/lib.h:25:42: note: in expansion of macro ‘BUG’
>  #define BUG_ON(p)  do { if (unlikely(p)) BUG();  } while (0)
>                                           ^~~
> bitmap.c:330:2: note: in expansion of macro ‘BUG_ON’
>   BUG_ON(pages > BITS_PER_LONG);
>   ^~~~~~
> /home/ANT.AMAZON.COM/jgrall/works/oss/xen/xen/include/asm/bug.h:44:5: error: asm operand 1 probably doesn’t match constraints [-Werror]
>      asm ("1:"BUG_INSTR"\n"      \
>      ^
> /home/ANT.AMAZON.COM/jgrall/works/oss/xen/xen/include/asm/bug.h:60:5: note: in expansion of macro ‘BUG_FRAME’
>      BUG_FRAME(BUGFRAME_bug,  __LINE__, __FILE__, "");          \
>      ^~~~~~~~~
> /home/ANT.AMAZON.COM/jgrall/works/oss/xen/xen/include/xen/lib.h:25:42: note: in expansion of macro ‘BUG’
>  #define BUG_ON(p)  do { if (unlikely(p)) BUG();  } while (0)
>                                           ^~~
> bitmap.c:330:2: note: in expansion of macro ‘BUG_ON’
>   BUG_ON(pages > BITS_PER_LONG);
>   ^~~~~~
> /home/ANT.AMAZON.COM/jgrall/works/oss/xen/xen/include/asm/bug.h:44:5: error: impossible constraint in ‘asm’
>      asm ("1:"BUG_INSTR"\n"      \
>      ^
> /home/ANT.AMAZON.COM/jgrall/works/oss/xen/xen/include/asm/bug.h:60:5: note: in expansion of macro ‘BUG_FRAME’
>      BUG_FRAME(BUGFRAME_bug,  __LINE__, __FILE__, "");          \
>      ^~~~~~~~~
> /home/ANT.AMAZON.COM/jgrall/works/oss/xen/xen/include/xen/lib.h:25:42: note: in expansion of macro ‘BUG’
>  #define BUG_ON(p)  do { if (unlikely(p)) BUG();  } while (0)
>                                           ^~~
> bitmap.c:330:2: note: in expansion of macro ‘BUG_ON’
>   BUG_ON(pages > BITS_PER_LONG);
>   ^~~~~~
> cc1: all warnings being treated as errors
>
> I am using GCC 7.5.0 built by Linaro (cross-compiler). Native
> compilation with GCC 10.2.1 leads to the same error.
>
> @Juergen, which compiler did you use?
>

gcc 7.4.0 aarch64 cross-compiler (SUSE)


Juergen

--------------70B7CDF08F13852D6285050C--

--UVPciloastc1eekVqdZTuv8lX978t8p5I--

--xq0RyeyQscSAyELZXxYI8cr4HO1zef9zh
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/hiJoFAwAAAAAACgkQsN6d1ii/Ey9Y
zwf+PrmUi6qmv1WSu4qD2zea/B5q8mXUUc2V38epRRvm9emT+tPbkQ/rxQi9IjSXfQl2pJynFe9y
34zA6azuDBHmcRF9wggWgQWqf6Pze5y/gSn30EWvz6Yl8H3wEjdm7RXY8OiQ/Zf3B2CrF31s1Xar
22byymYIsJ0+Fg6ktNC/Ws29jsjwPaMYhqTspdOfyKurZFUSws1fEt6rg9+FkLwf1iTkm4uOrKFa
FYo2raUDSIEHkBzTrvxJlR+pnYDQ3zVUt2kG5YgfdSjNJg9r23EK4p1B3Lwsi2jyQD4De/l4XXcz
utLx+H7lwSLbhQW/qS+c+eNICao1kS3ioRyk8n/Ihg==
=6i+Z
-----END PGP SIGNATURE-----

--xq0RyeyQscSAyELZXxYI8cr4HO1zef9zh--


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 06:52:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 06:52:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57646.100936 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krbWb-0006sO-BD; Tue, 22 Dec 2020 06:52:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57646.100936; Tue, 22 Dec 2020 06:52:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krbWb-0006sH-82; Tue, 22 Dec 2020 06:52:01 +0000
Received: by outflank-mailman (input) for mailman id 57646;
 Tue, 22 Dec 2020 06:52:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krbWZ-0006s9-Ve; Tue, 22 Dec 2020 06:51:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krbWZ-0006Qa-KL; Tue, 22 Dec 2020 06:51:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krbWZ-0002aE-7z; Tue, 22 Dec 2020 06:51:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1krbWZ-0003MO-7U; Tue, 22 Dec 2020 06:51:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ILl7vHbVhIChVQpx9KyboknZhyn2AF9kdnqq6ourdRQ=; b=H9xjNOBDO1dsSGcUII6KHxFmoS
	cjEa4uxb6cf+KkM2drthUsL0D9nJwn1DB7JN1nHfCqD0L1d016RXTK7VuNMUFz+wtOzoQ82g8bawg
	y/tuyQVTU06j4gQktevGhBMCnPF2TL/YuoYxIx0twlatBGGtsXERq9gG+jkX4oDxDeT4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157763-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157763: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a05f8ecd88f15273d033b6f044b850a8af84a5b8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 22 Dec 2020 06:51:59 +0000

flight 157763 qemu-mainline real [real]
flight 157783 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157763/
http://logs.test-lab.xenproject.org/osstest/logs/157783/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  123 days
Failing since        152659  2020-08-21 14:07:39 Z  122 days  252 attempts
Testing same since   157670  2020-12-18 13:57:58 Z    3 days    5 attempts

------------------------------------------------------------
323 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 79306 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 07:43:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 07:43:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57654.100951 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krcK1-0002mz-4V; Tue, 22 Dec 2020 07:43:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57654.100951; Tue, 22 Dec 2020 07:43:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krcK1-0002ms-15; Tue, 22 Dec 2020 07:43:05 +0000
Received: by outflank-mailman (input) for mailman id 57654;
 Tue, 22 Dec 2020 07:43:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9/vU=F2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krcK0-0002mn-5b
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 07:43:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 43cda4db-44d2-4cb0-9aef-07c158ec9da8;
 Tue, 22 Dec 2020 07:42:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id ABB3EB254;
 Tue, 22 Dec 2020 07:42:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43cda4db-44d2-4cb0-9aef-07c158ec9da8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608622977; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=h4SDJ1hDwltEadftpbcQCThLndQ9yZCoKfWzK7gYyII=;
	b=te9RXrPhO3dr8DWke3ttBIXSoHIF6i3AtylaxPwTC8Az04wGZvdaiI635fgT5a17iqBiQF
	9r6zl5AN+HwPCu1un+anRobRYcWxpbWZp3AnT3vUsLjHs6HXULt7s/epyklbJeMSYa8n0o
	frtm4nSTsP58hIwrWFCqRB3TEpIcH0c=
Subject: Re: [PATCH] lib: drop debug_build()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <143333c9-154b-77c3-a66a-6b81696ecded@suse.com>
 <2575d75a-ce1d-c725-4f37-b7c9c10a2ecd@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <266f673e-0158-13fe-9ea7-69a3c5dc2903@suse.com>
Date: Tue, 22 Dec 2020 08:42:53 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <2575d75a-ce1d-c725-4f37-b7c9c10a2ecd@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 21.12.2020 19:07, Andrew Cooper wrote:
> On 21/12/2020 16:50, Jan Beulich wrote:
>> Its expansion shouldn't be tied to NDEBUG - down the road we may want to
>> allow enabling assertions independently of CONFIG_DEBUG.
> 
> I'm not sure I agree that we'll ever want to do this, but...

Didn't you once say XenServer keeps (or kept) assertions enabled
even in release builds? In any event, having such an option may
e.g. help diagnose issues from mis-optimization (no matter
whether because of mis-compilation or because of subtly broken
sources).

>>  Replace the few uses by IS_ENABLED(CONFIG_DEBUG).
> 
> ... we should be aligning on CONFIG_DEBUG.
> 
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> I wonder whether we shouldn't further abstract this into, say, a
>> xen_build_info() helper, seeing that all use sites want "debug=[yn]".
>> This could then also include gcov_string right away.
> 
> I think that would be a nicer way of doing it.  It should probably also
> have some trace of CONFIG_UBSAN in the resulting string.

Okay, will do.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 07:50:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 07:50:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57660.100963 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krcQt-0003gF-UV; Tue, 22 Dec 2020 07:50:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57660.100963; Tue, 22 Dec 2020 07:50:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krcQt-0003g8-Pz; Tue, 22 Dec 2020 07:50:11 +0000
Received: by outflank-mailman (input) for mailman id 57660;
 Tue, 22 Dec 2020 07:50:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9/vU=F2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krcQs-0003g2-5K
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 07:50:10 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 12e7aba3-4278-413e-b663-91b8956623e3;
 Tue, 22 Dec 2020 07:50:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5D7CBAD5C;
 Tue, 22 Dec 2020 07:50:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12e7aba3-4278-413e-b663-91b8956623e3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608623408; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=sEhkxx0qP1O3PJjv/Pls4ZqvC+Kqeb/UeF5x87ct/ZI=;
	b=keqtgod9xw/Lqu3/6i6y2tnOnlmISnl/R4UBADypf9EAYfqC2c/G9f+bVrFHArQjP/XQZm
	PohWHO26Sq34DJWPgrdcInOKMG9r8nLaPJnPiGReelzaadWFt8uEyIBieVhY9YfIa04CPA
	vNxkEBpcfCJd5XoKvKd8F8nix5Gom1s=
Subject: Re: [PATCH 2/3] xen/domain: Introduce domain_teardown()
To: Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <julien@xen.org>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20201221181446.7791-1-andrew.cooper3@citrix.com>
 <20201221181446.7791-3-andrew.cooper3@citrix.com>
 <f42f6b6e-3ee3-f58e-513b-70f80f7541ee@xen.org>
 <7edf2139-b63e-00c9-7172-524566f942ae@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <09fd7598-9899-9b4c-68ba-f90b3bc47d6f@suse.com>
Date: Tue, 22 Dec 2020 08:50:07 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <7edf2139-b63e-00c9-7172-524566f942ae@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 21.12.2020 19:45, Andrew Cooper wrote:
> On 21/12/2020 18:36, Julien Grall wrote:
>>> @@ -553,6 +606,9 @@ struct domain *domain_create(domid_t domid,
>>>       if ( init_status & INIT_watchdog )
>>>           watchdog_domain_destroy(d);
>>>   +    /* Must not hit a continuation in this context. */
>>> +    ASSERT(domain_teardown(d) == 0);
>> The ASSERT() will become a NOP in production builds, so
>> domain_teardown() will not be called.
> 
> Urgh - it's not really a nop, but its evaluation isn't symmetric between
> debug and release builds.  I'll need an extra local variable.

Or use ASSERT_UNREACHABLE(). (I admit I don't really like the
resulting constructs, and would like to propose an alternative,
even if I fear it'll be controversial.)

>> However, I think it would be better if we passed an extra argument to
>> indicate whether the code is allowed to preempt. This would make the
>> preemption check more obvious in evtchn_destroy() compared to the
>> current d->is_dying != DOMDYING_dead.
> 
> We can have a predicate if you'd prefer, but plumbing an extra parameter
> is wasteful, and can only cause confusion if it is out of sync with
> d->is_dying.

I agree here - it wasn't so long ago that event_channel.c gained
a DOMDYING_dead check, and I don't see why we shouldn't extend
this approach to here and elsewhere.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 08:14:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 08:14:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57675.100993 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krcni-000694-F5; Tue, 22 Dec 2020 08:13:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57675.100993; Tue, 22 Dec 2020 08:13:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krcni-00068x-Bw; Tue, 22 Dec 2020 08:13:46 +0000
Received: by outflank-mailman (input) for mailman id 57675;
 Tue, 22 Dec 2020 08:13:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9/vU=F2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krcnh-00068s-7v
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 08:13:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4dd99585-8d75-4191-9b08-ac32f98505b3;
 Tue, 22 Dec 2020 08:13:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2A8F7ABA1;
 Tue, 22 Dec 2020 08:13:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4dd99585-8d75-4191-9b08-ac32f98505b3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608624823; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=x6z1IFuX4L22qrzsCB5/FNdV5KaYk6ubpLlPAwLHpfg=;
	b=OfnRaJGzL3qX6ClB2nS+VTPHPXzbkAggjelrFNH4z85FMW2ST5Jn/UqDOLTnTeD6KIJbI+
	kHnV/kKe87lWFsVUBfdRoMbKfqi4sBHerbleM9B/Lkn8qRsO8Wi668UmhTs+fq3pcsY5gI
	qawRH0XgbbPfO2ZwfCaoQrQ1m9UuKh4=
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/2] common: XSA-327 follow-up
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <2a08aa31-fdbf-89ee-cd49-813f818b709a@suse.com>
Date: Tue, 22 Dec 2020 09:13:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

There are a few largely cosmetic things that were discussed in the
context of the XSA, but which weren't really XSA material.

1: common: map_vcpu_info() cosmetics
2: evtchn/fifo: don't enforce higher than necessary alignment

I realize both changes are controversial. For the first one
discussion was about the choice of error code. Neither EINVAL nor
EFAULT represent the fact that it is a choice of implementation
to not support mis-aligned structures. If ENXIO isn't liked, the
best I can suggest are EOPNOTSUPP or (as previously suggested)
EPERM. I think it ought to be possible to settle on an error
code everyone can live with.

For the second one the question was whether the relaxation is
useful to do. The original reason for wanting to make the change
remains: The original code here should not be used as an excuse
to introduce similar over-alignment requirements elsewhere. I
can live with the change getting rejected, but if so I'd like to
request that some alternative be submitted to ensure that the
immediate goal can still be reached.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 08:14:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 08:14:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57679.101005 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krcor-0006Ez-QW; Tue, 22 Dec 2020 08:14:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57679.101005; Tue, 22 Dec 2020 08:14:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krcor-0006Es-Mv; Tue, 22 Dec 2020 08:14:57 +0000
Received: by outflank-mailman (input) for mailman id 57679;
 Tue, 22 Dec 2020 08:14:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9/vU=F2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krcoq-0006En-Vq
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 08:14:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 14bfdcb8-9f27-441a-8237-81120a20c491;
 Tue, 22 Dec 2020 08:14:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B03D9AD2B;
 Tue, 22 Dec 2020 08:14:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 14bfdcb8-9f27-441a-8237-81120a20c491
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608624895; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kZqYO32yoJl+M6jmkojCUtPfHQ2G3qp95VKtN5NbHxY=;
	b=V43AgfDXo3UGyjSvrLEWm64SmYCGxNzw9dKv581peUOMuureGcuFWWwtmcxmLkwp48irtN
	7CgLmG1H1DA1XUt7r301XHMDvNUzFhkPClD7crcu4XP8XAFefPJxLspB+ATsDghYza5tKr
	qoaPnsqwaziNbU0BRP1B7VH1HZNcXDI=
Subject: [PATCH v2 1/2] common: map_vcpu_info() cosmetics
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <2a08aa31-fdbf-89ee-cd49-813f818b709a@suse.com>
Message-ID: <29514f9a-b630-f66e-286e-8b73fcf4d58a@suse.com>
Date: Tue, 22 Dec 2020 09:14:55 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <2a08aa31-fdbf-89ee-cd49-813f818b709a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

Use ENXIO instead of EINVAL to cover the two cases of the address not
satisfying the requirements. This will make an issue here better stand
out at the call site.

Also add a missing compat-mode related size check: If the sizes
differed, other code in the function would need changing. Accompany this
by a change to the initial sizeof() expression, tying it to the type of
the variable we're actually after (matching e.g. the alignof() added by
XSA-327).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1241,17 +1241,18 @@ int map_vcpu_info(struct vcpu *v, unsign
     struct page_info *page;
     unsigned int align;
 
-    if ( offset > (PAGE_SIZE - sizeof(vcpu_info_t)) )
-        return -EINVAL;
+    if ( offset > (PAGE_SIZE - sizeof(*new_info)) )
+        return -ENXIO;
 
 #ifdef CONFIG_COMPAT
+    BUILD_BUG_ON(sizeof(*new_info) != sizeof(new_info->compat));
     if ( has_32bit_shinfo(d) )
         align = alignof(new_info->compat);
     else
 #endif
         align = alignof(*new_info);
     if ( offset & (align - 1) )
-        return -EINVAL;
+        return -ENXIO;
 
     if ( !mfn_eq(v->vcpu_info_mfn, INVALID_MFN) )
         return -EINVAL;



From xen-devel-bounces@lists.xenproject.org Tue Dec 22 08:15:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 08:15:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57683.101017 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krcpU-0006MD-8Y; Tue, 22 Dec 2020 08:15:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57683.101017; Tue, 22 Dec 2020 08:15:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krcpU-0006M6-5L; Tue, 22 Dec 2020 08:15:36 +0000
Received: by outflank-mailman (input) for mailman id 57683;
 Tue, 22 Dec 2020 08:15:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9/vU=F2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krcpS-0006M1-FA
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 08:15:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 858f4152-44eb-4c5e-b7d1-fbb033117f1c;
 Tue, 22 Dec 2020 08:15:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 13F21ABA1;
 Tue, 22 Dec 2020 08:15:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 858f4152-44eb-4c5e-b7d1-fbb033117f1c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608624933; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=gMu54AWqUXHQnTwpNds3A+YKW+f7l7KpBil/J6yzXu0=;
	b=NQ2C4h30Y7e0BrChNj7YDkOWJhUg5GURXDDduGTVLDFrktxbVnd4bBuk6j7cKzFI4KSGLY
	/JNvan1yFeAi/QmRgowfO79i5aYTRl7uQvq7yc32mVRSqZz1njdNbPyH408S5dmOeMlerS
	hBl7u/f3jIs1u+MmHegTnM7T29Dxjtc=
Subject: [PATCH v2 2/2] evtchn/fifo: don't enforce higher than necessary
 alignment
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <2a08aa31-fdbf-89ee-cd49-813f818b709a@suse.com>
Message-ID: <8db2a31d-29da-a93d-5ded-d6573371516e@suse.com>
Date: Tue, 22 Dec 2020 09:15:32 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <2a08aa31-fdbf-89ee-cd49-813f818b709a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Neither the code nor the original commit provide any justification for
the need to 8-byte align the struct in all cases. Enforce just as much
alignment as the structure actually needs - 4 bytes - by using alignof()
instead of a literal number.

While relaxation of the requirements is intended here, the primary goal
is simply to get rid of the hard-coded number, as well as its lack of
connection to the structure it is meant to apply to.

Take the opportunity and also
- add so far missing validation that native and compat mode layouts of
  the structures actually match,
- tie sizeof() expressions to the types of the fields we're actually
  after, rather than specifying the type explicitly (which in the
  general case risks a disconnect, even if there's close to zero risk in
  this particular case),
- use ENXIO instead of EINVAL for the two cases of the address not
  satisfying the requirements, which will make an issue here better
  stand out at the call site.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Add comment to public header. Re-base.
---
I question the need for the array_index_nospec() here. Or else I'd
expect map_vcpu_info() would also need the same.

--- a/xen/common/event_fifo.c
+++ b/xen/common/event_fifo.c
@@ -567,6 +567,16 @@ static void setup_ports(struct domain *d
     }
 }
 
+#ifdef CONFIG_COMPAT
+
+#include <compat/event_channel.h>
+
+#define xen_evtchn_fifo_control_block evtchn_fifo_control_block
+CHECK_evtchn_fifo_control_block;
+#undef xen_evtchn_fifo_control_block
+
+#endif
+
 int evtchn_fifo_init_control(struct evtchn_init_control *init_control)
 {
     struct domain *d = current->domain;
@@ -586,19 +596,20 @@ int evtchn_fifo_init_control(struct evtc
         return -ENOENT;
 
     /* Must not cross page boundary. */
-    if ( offset > (PAGE_SIZE - sizeof(evtchn_fifo_control_block_t)) )
-        return -EINVAL;
+    if ( offset > (PAGE_SIZE - sizeof(*v->evtchn_fifo->control_block)) )
+        return -ENXIO;
 
     /*
      * Make sure the guest controlled value offset is bounded even during
      * speculative execution.
      */
     offset = array_index_nospec(offset,
-                           PAGE_SIZE - sizeof(evtchn_fifo_control_block_t) + 1);
+                                PAGE_SIZE -
+                                sizeof(*v->evtchn_fifo->control_block) + 1);
 
-    /* Must be 8-bytes aligned. */
-    if ( offset & (8 - 1) )
-        return -EINVAL;
+    /* Must be suitably aligned. */
+    if ( offset & (alignof(*v->evtchn_fifo->control_block) - 1) )
+        return -ENXIO;
 
     spin_lock(&d->event_lock);
 
--- a/xen/include/public/event_channel.h
+++ b/xen/include/public/event_channel.h
@@ -368,6 +368,11 @@ typedef uint32_t event_word_t;
 
 #define EVTCHN_FIFO_NR_CHANNELS (1 << EVTCHN_FIFO_LINK_BITS)
 
+/*
+ * While this structure only requires 4-byte alignment, Xen versions 4.14 and
+ * earlier reject offset values (in struct evtchn_init_control) that aren't a
+ * multiple of 8.
+ */
 struct evtchn_fifo_control_block {
     uint32_t ready;
     uint32_t _rsvd;
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -67,6 +67,7 @@
 ?	evtchn_bind_virq		event_channel.h
 ?	evtchn_close			event_channel.h
 ?	evtchn_expand_array		event_channel.h
+?	evtchn_fifo_control_block	event_channel.h
 ?	evtchn_init_control		event_channel.h
 ?	evtchn_op			event_channel.h
 ?	evtchn_reset			event_channel.h



From xen-devel-bounces@lists.xenproject.org Tue Dec 22 09:47:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 09:47:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57699.101028 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kreFu-0005bF-Sr; Tue, 22 Dec 2020 09:46:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57699.101028; Tue, 22 Dec 2020 09:46:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kreFu-0005b8-Pn; Tue, 22 Dec 2020 09:46:58 +0000
Received: by outflank-mailman (input) for mailman id 57699;
 Tue, 22 Dec 2020 09:46:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9/vU=F2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kreFt-0005b3-FL
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 09:46:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 20ed7f96-7dd8-48ea-95a9-9d40055911d3;
 Tue, 22 Dec 2020 09:46:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 33A1CAEA2;
 Tue, 22 Dec 2020 09:46:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 20ed7f96-7dd8-48ea-95a9-9d40055911d3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608630415; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vH2tsfNS/wT6yUcJmU1L7Ai6jugWwSqwfdZVi1IqJhc=;
	b=gqKfdkhDaenTpkNZ8qTYQmLJfoKZ8VMqf+emmcAqbmxosZ7K170iMVto4oq76sRP0VMnWR
	1uXrWFpPexnTbieMhnjc81Y5XvNq2l+ky3OQWQXKAEu822PuyFzGs788ifW98jLqUMG2cC
	Aq/vYMHAVG+Gh2ggQCOjm42ujgYNT/0=
Subject: Re: [PATCH v3 4/5] evtchn: convert domain event lock to an r/w one
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <a333387e-f9e5-7051-569a-1a9a37da53ca@suse.com>
 <074be931-54b0-1b0f-72d8-5bd577884814@xen.org>
 <6e34fd25-14a2-f655-b019-aca94ce086c8@suse.com>
 <55dc24b4-88c6-1b22-411e-267231632377@xen.org>
 <cf3faa68-ba4a-b864-66e0-f379a24a48ce@suse.com>
 <1f3571eb-5aec-e76e-0b61-2602356fb436@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <099b99bc-c544-0aa8-c3b4-4871ef618e4a@suse.com>
Date: Tue, 22 Dec 2020 10:46:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <1f3571eb-5aec-e76e-0b61-2602356fb436@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 21.12.2020 18:45, Julien Grall wrote:
> Hi Jan,
> 
> On 14/12/2020 09:40, Jan Beulich wrote:
>> On 11.12.2020 11:57, Julien Grall wrote:
>>> On 11/12/2020 10:32, Jan Beulich wrote:
>>>> On 09.12.2020 12:54, Julien Grall wrote:
>>>>> On 23/11/2020 13:29, Jan Beulich wrote:
>>>>>> @@ -620,7 +620,7 @@ int evtchn_close(struct domain *d1, int
>>>>>>         long           rc = 0;
>>>>>>     
>>>>>>      again:
>>>>>> -    spin_lock(&d1->event_lock);
>>>>>> +    write_lock(&d1->event_lock);
>>>>>>     
>>>>>>         if ( !port_is_valid(d1, port1) )
>>>>>>         {
>>>>>> @@ -690,13 +690,11 @@ int evtchn_close(struct domain *d1, int
>>>>>>                     BUG();
>>>>>>     
>>>>>>                 if ( d1 < d2 )
>>>>>> -            {
>>>>>> -                spin_lock(&d2->event_lock);
>>>>>> -            }
>>>>>> +                read_lock(&d2->event_lock);
>>>>>
>>>>> This change made me realized that I don't quite understand how the
>>>>> rwlock is meant to work for event_lock. I was actually expecting this to
>>>>> be a write_lock() given there are state changed in the d2 events.
>>>>
>>>> Well, the protection needs to be against racing changes, i.e.
>>>> parallel invocations of this same function, or evtchn_close().
>>>> It is debatable whether evtchn_status() and
>>>> domain_dump_evtchn_info() would better also be locked out
>>>> (other read_lock() uses aren't applicable to interdomain
>>>> channels).
>>>>
>>>>> Could you outline how a developer can find out whether he/she should
>>>>> use read_lock or write_lock?
>>>>
>>>> I could try to, but it would again be a port type dependent
>>>> model, just like for the per-channel locks.
>>>
>>> It is quite important to have clear locking strategy (in particular
>>> rwlock) so we can make correct decision when to use read_lock or write_lock.
>>>
>>>> So I'd like it to
>>>> be clarified first whether you aren't instead indirectly
>>>> asking for these to become write_lock()
>>>
>>> Well, I don't understand why this is a read_lock() (even with your
>>> previous explanation). I am not suggesting to switch to a write_lock(),
>>> but instead asking for the reasoning behind the decision.
>>
>> So if what I've said in my previous reply isn't enough (including the
>> argument towards using two write_lock() here), I'm struggling to
>> figure what else to say. The primary goal is to exclude changes to
>> the same ports. For this it is sufficient to hold just one of the two
>> locks in writer mode, as the other (racing) one will acquire that
>> same lock for at least reading. The question whether both need to use
>> writer mode can only be decided when looking at the sites acquiring
>> just one of the locks in reader mode (hence the reference to
>> evtchn_status() and domain_dump_evtchn_info()) - if races with them
>> are deemed to be a problem, switching to both-writers will be needed.
> 
> I had another look at the code based on your explanation. I don't think 
> it is fine to allow evtchn_status() to be concurrently called with 
> evtchn_close().
> 
> evtchn_close() contains the following code:
> 
>    chn2->state = ECS_UNBOUND;
>    chn2->u.unbound.remote_domid = d1->domain_id;
> 
> Where chn2 is a event channel of the remote domain (d2). Your patch will 
> only held the read lock for d2.
> 
> However evtchn_status() expects the event channel state to not change 
> behind its back. This assumption doesn't hold for d2, and you could 
> possibly end up to see the new value of chn2->state after the new 
> chn2->u.unbound.remote_domid.
> 
> Thankfully, it doesn't look like chn2->u.interdomain.remote_domain 
> would be overwritten. Otherwise, this would be a straight dereference of 
> an invalid pointer.
> 
> So I think we need to hold the write event lock for both domains.

Well, okay. Three considerations though:

1) Neither evtchn_status() nor domain_dump_evtchn_info() appear to
have a real need to acquire the per-domain lock. They could as well
acquire the per-channel ones. (In the latter case this will then
also allow inserting the so far missing process_pending_softirqs()
call; it shouldn't be made with a lock held.)

2) With the double-locking changed and with 1) addressed, there's
going to be almost no read_lock() left. hvm_migrate_pirqs() and
do_physdev_op()'s PHYSDEVOP_eoi handling, evtchn_move_pirqs(), and
hvm_dpci_msi_eoi(). While it may still be helpful for these to be
able to run in parallel, I nevertheless wonder whether the change as
a whole is still worthwhile.

3) With the per-channel double locking and with 1) addressed I
can't really see the need for the double per-domain locking in
evtchn_bind_interdomain() and evtchn_close(). The write lock is
needed for the domain allocating a new port or freeing one. But why
is there any need for holding the remote domain's lock, when its
side of the channel gets guarded by the per-channel lock anyway?
Granted the per-channel locks may then need acquiring a little
earlier, before checking the remote channel's state. But this
shouldn't be an issue.

I guess I'll make addressing 1) and 3) prereq patches to this one,
unless I learn of reasons why things need to remain the way they
are.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 09:57:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 09:57:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57703.101040 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krePq-0006Yc-SW; Tue, 22 Dec 2020 09:57:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57703.101040; Tue, 22 Dec 2020 09:57:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krePq-0006YV-PZ; Tue, 22 Dec 2020 09:57:14 +0000
Received: by outflank-mailman (input) for mailman id 57703;
 Tue, 22 Dec 2020 09:57:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krePp-0006YN-2V; Tue, 22 Dec 2020 09:57:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krePo-0001ZR-U6; Tue, 22 Dec 2020 09:57:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krePo-0001Mu-OE; Tue, 22 Dec 2020 09:57:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1krePo-00034T-Nj; Tue, 22 Dec 2020 09:57:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4Pk2lWbzSWg4LUgsO1rISuEHtYzThJ7wuTiRPl0OkS8=; b=0jCRfcaF0AE3P0UoDXWMei6dyg
	uvQnsI7TQSCbSF4XwU1faPanssTMPvUB0g1dlGme+8WwlNyvd/dIDhCfzyL13+ImjjWagTZ8nUpfm
	YAHFA0txh1LKBlr7E7gB/nMNcSrHkYLY8aMZ9FvgiAqML4miXdRKyKv04BTOxOX6lnRY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157778-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157778: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=35ed29f207fd9c3683cfee5492c5c4e96ee0a0eb
X-Osstest-Versions-That:
    ovmf=3ce3274a5ea41134fafb983c0198de89007d471e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 22 Dec 2020 09:57:12 +0000

flight 157778 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157778/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 35ed29f207fd9c3683cfee5492c5c4e96ee0a0eb
baseline version:
 ovmf                 3ce3274a5ea41134fafb983c0198de89007d471e

Last test of basis   157759  2020-12-21 14:39:51 Z    0 days
Testing same since   157778  2020-12-21 22:42:56 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ard.biesheuvel@arm.com>
  Laszlo Ersek <lersek@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   3ce3274a5e..35ed29f207  35ed29f207fd9c3683cfee5492c5c4e96ee0a0eb -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 10:00:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 10:00:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57709.101055 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kreTI-0007Ty-Co; Tue, 22 Dec 2020 10:00:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57709.101055; Tue, 22 Dec 2020 10:00:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kreTI-0007Tr-9b; Tue, 22 Dec 2020 10:00:48 +0000
Received: by outflank-mailman (input) for mailman id 57709;
 Tue, 22 Dec 2020 10:00:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9/vU=F2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kreTH-0007Tl-9X
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 10:00:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 14ef9e35-a23c-4520-9efb-f31dd067d171;
 Tue, 22 Dec 2020 10:00:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 94B2CABA1;
 Tue, 22 Dec 2020 10:00:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 14ef9e35-a23c-4520-9efb-f31dd067d171
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608631245; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=1Aa1Z0Z/VpucUq1qQ6jB3GXJQseOfoDV75m3ScP+dts=;
	b=npCnIBlQfqD5ZT9HlztLnXfLDdU6Or1C9OuLqT+ainGPMGJ3HOuPx0v6vLjtfiLtN/dI2D
	iMdkyfQWAwFPUudDjhPuEtKWixMg75+nUl0Oske3f0laYskADNxURxe+jt7HNzPDj05JkX
	pOh/vPGCRHZ5QXsa9Mh6zK6mNxrmTY4=
Subject: Re: Hypercall fault injection (Was [PATCH 0/3] xen/domain: More
 structured teardown)
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, Juergen Gross <jgross@suse.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20201221181446.7791-1-andrew.cooper3@citrix.com>
 <ac552c84-144c-c213-7985-84d92cbb5601@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <983a3fef-c80f-ec2a-bf3c-5e054fc6a7a9@suse.com>
Date: Tue, 22 Dec 2020 11:00:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <ac552c84-144c-c213-7985-84d92cbb5601@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 21.12.2020 20:36, Andrew Cooper wrote:
> Hello,
> 
> We have some very complicated hypercalls, createdomain first and
> max_vcpus a close second, with immense complexity and very
> hard-to-test error handling.
> 
> It is no surprise that the error handling is riddled with bugs.
> 
> Injecting random failures from core functions is one way, but I'm not sure that
> will be especially helpful.  In particular, we'd need a way to exclude
> "dom0 critical" operations so we've got a usable system to run testing on.
> 
> As an alternative, how about adding a fault_ttl field into the hypercall?
> 
> The exact paths taken in {domain,vcpu}_create() are sensitive to the
> hardware, Xen Kconfig, and other parameters passed into the
> hypercall(s).  The testing logic doesn't really want to care about what
> failed; simply that the error was handled correctly.
> 
> So a test for this might look like:
> 
> cfg = { ... };
> while ( xc_create_domain(xch, cfg) < 0 )
>     cfg.fault_ttl++;
> 
> 
> The pro's of this approach is that for a specific build of Xen on a
> piece of hardware, it ought to check every failure path in
> domain_create(), until the ttl finally gets higher than the number of
> fail-able actions required to construct a domain.  Also, the test
> doesn't need changing as the complexity of domain_create() changes.
> 
> The main con will mostly likely be the invasiveness of code in Xen, but
> I suppose any fault injection is going to be invasive to a certain extent.

While I like the idea in principle, the innocent looking

cfg = { ... };

is quite a bit of a concern here as well: Depending on the precise
settings, paths taken in the hypervisor may heavily vary, and hence
such a test will only end up being useful if it covers a wide
variety of settings. Even if the number of tests to execute turned
out to still be manageable today, it may quickly cease to scale as we
add new settings controllable right at domain creation (which I
understand is the plan).

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 10:11:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 10:11:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57714.101068 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kred4-0008Pv-AO; Tue, 22 Dec 2020 10:10:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57714.101068; Tue, 22 Dec 2020 10:10:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kred4-0008Po-7D; Tue, 22 Dec 2020 10:10:54 +0000
Received: by outflank-mailman (input) for mailman id 57714;
 Tue, 22 Dec 2020 10:10:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9/vU=F2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kred2-0008Pj-Hi
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 10:10:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a615acf8-f701-4429-98f0-24f599368e78;
 Tue, 22 Dec 2020 10:10:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B5B6DAEE8;
 Tue, 22 Dec 2020 10:10:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a615acf8-f701-4429-98f0-24f599368e78
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608631850; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jJJ28ty9NTTprw3dWv4SeSS337vWhUiTJofYzlDQAtI=;
	b=GqnQuB/82eEmobJJqHqQ3E6wenAc4SxsTvCGw8TGp3WBEeY0txDoZwUMdTJzdFfOfPMOhQ
	FdKrrcFurp52S1mKwcykRfl5rPPkab5ms1KnaIJWJJot+HE9xtecGpIuLHRVAEPZyNTYTz
	Q2iF+5E9D2m7lbnd+Sip/2B3rvmOW1s=
Subject: Re: [PATCH 1/3] xen/domain: Reorder trivial initialisation in early
 domain_create()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20201221181446.7791-1-andrew.cooper3@citrix.com>
 <20201221181446.7791-2-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3397707d-ba05-4db2-7dfd-e18dbe044a26@suse.com>
Date: Tue, 22 Dec 2020 11:10:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <20201221181446.7791-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 21.12.2020 19:14, Andrew Cooper wrote:
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -391,25 +391,7 @@ struct domain *domain_create(domid_t domid,
>  
>      TRACE_1D(TRC_DOM0_DOM_ADD, d->domain_id);
>  
> -    /*
> -     * Allocate d->vcpu[] and set ->max_vcpus up early.  Various per-domain
> -     * resources want to be sized based on max_vcpus.
> -     */
> -    if ( !is_system_domain(d) )
> -    {
> -        err = -ENOMEM;
> -        d->vcpu = xzalloc_array(struct vcpu *, config->max_vcpus);
> -        if ( !d->vcpu )
> -            goto fail;
> -
> -        d->max_vcpus = config->max_vcpus;
> -    }
> -
> -    lock_profile_register_struct(LOCKPROF_TYPE_PERDOM, d, domid);

Wouldn't this also count as "trivial initialization", and hence while
moving want to at least be placed ...

> -    if ( (err = xsm_alloc_security_domain(d)) != 0 )
> -        goto fail;
> -
> +    /* Trivial initialisation. */
>      atomic_set(&d->refcnt, 1);
>      RCU_READ_LOCK_INIT(&d->rcu_lock);
>      spin_lock_init_prof(d, domain_lock);
> @@ -434,6 +416,27 @@ struct domain *domain_create(domid_t domid,
>      INIT_LIST_HEAD(&d->pdev_list);
>  #endif
>  
> +    /* All error paths can depend on the above setup. */

... ahead of this comment?

> +    /*
> +     * Allocate d->vcpu[] and set ->max_vcpus up early.  Various per-domain
> +     * resources want to be sized based on max_vcpus.
> +     */
> +    if ( !is_system_domain(d) )
> +    {
> +        err = -ENOMEM;
> +        d->vcpu = xzalloc_array(struct vcpu *, config->max_vcpus);
> +        if ( !d->vcpu )
> +            goto fail;
> +
> +        d->max_vcpus = config->max_vcpus;
> +    }
> +
> +    lock_profile_register_struct(LOCKPROF_TYPE_PERDOM, d, domid);
> +
> +    if ( (err = xsm_alloc_security_domain(d)) != 0 )
> +        goto fail;
> +
>      err = -ENOMEM;
>      if ( !zalloc_cpumask_var(&d->dirty_cpumask) )
>          goto fail;
> 

Just as an observation (i.e. unlikely for this patch) I doubt
system domains need ->dirty_cpumask set up, and hence this
allocation may also want moving down a few lines.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 10:26:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 10:26:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57721.101082 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krere-0000zm-PB; Tue, 22 Dec 2020 10:25:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57721.101082; Tue, 22 Dec 2020 10:25:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krere-0000zf-MA; Tue, 22 Dec 2020 10:25:58 +0000
Received: by outflank-mailman (input) for mailman id 57721;
 Tue, 22 Dec 2020 10:25:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1krerd-0000zZ-Cl
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 10:25:57 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1krerd-00027r-3A; Tue, 22 Dec 2020 10:25:57 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1krerc-0005b5-S2; Tue, 22 Dec 2020 10:25:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=OISANKA+Em+nyojPb2IZuqCZ10Eby1rTAbOa/6ZrpnE=; b=nmad/TldN59BS4d+27J1vna20l
	VMul28fqzwKPU0y0o+qvSk9cc6FzJvwfLfsDjkoqJEl3sy1qnul/DaBie9khBX0DN86lIDdqeipMc
	oG33tMel9AmzEBXVLAr1gPx/8Nyyop/CN3/PsvOz1xnCXc92/u5OIFzeOR7uAr7ruJP8=;
Subject: Re: [PATCH 2/3] xen/domain: Introduce domain_teardown()
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20201221181446.7791-1-andrew.cooper3@citrix.com>
 <20201221181446.7791-3-andrew.cooper3@citrix.com>
 <f42f6b6e-3ee3-f58e-513b-70f80f7541ee@xen.org>
 <7edf2139-b63e-00c9-7172-524566f942ae@citrix.com>
 <09fd7598-9899-9b4c-68ba-f90b3bc47d6f@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <35b24879-e075-8066-603a-518fbb82f656@xen.org>
Date: Tue, 22 Dec 2020 10:25:54 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <09fd7598-9899-9b4c-68ba-f90b3bc47d6f@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi,

On 22/12/2020 07:50, Jan Beulich wrote:
> On 21.12.2020 19:45, Andrew Cooper wrote:
>> On 21/12/2020 18:36, Julien Grall wrote:
>>>> @@ -553,6 +606,9 @@ struct domain *domain_create(domid_t domid,
>>>>        if ( init_status & INIT_watchdog )
>>>>            watchdog_domain_destroy(d);
>>>>    +    /* Must not hit a continuation in this context. */
>>>> +    ASSERT(domain_teardown(d) == 0);
>>> The ASSERT() will become a NOP in production build, so
>>> domain_teardown_down() will not be called.
>>
>> Urgh - it's not really a nop, but its evaluation isn't symmetric between
>> debug and release builds.  I'll need an extra local variable.
> 
> Or use ASSERT_UNREACHABLE(). (I admit I don't really like the
> resulting constructs, and would like to propose an alternative,
> even if I fear it'll be controversial.)
> 
>>> However, I think it would be better if we pass an extra argument to
>>> indicate whether the code is allowed to preempt. This would make the
>>> preemption check more obvious in evtchn_destroy() compared to the
>>> current d->is_dying != DOMDYING_dead.
>>
>> We can have a predicate if you'd prefer, but plumbing an extra parameter
>> is wasteful, and can only cause confusion if it is out of sync with
>> d->is_dying.
> 
> I agree here - it wasn't so long ago that event_channel.c gained
> a DOMDYING_dead check, and I don't see why we shouldn't extend
> this approach to here and elsewhere.

I think the d->is_dying != DOMDYING_dead check is difficult to 
understand even with the comment on top. This was OK in one place, but 
now it will spread everywhere. So, at the least, I would suggest 
introducing a better-named wrapper.

There is also a future-proofing concern. At the moment, we are assuming 
that preemption will not be needed in domain_create(). I am ready to bet 
that this assumption is going to be broken sooner or later.

Without an extra parameter, we would then need to audit all the callers 
and helpers to see where to drop the d->is_dying != DOMDYING_dead check. 
I am sure you will agree that there is a risk of screwing up.

With an extra parameter, it is easier to re-enable preemption when needed.

Anyway, both of you seem to agree on the approach here. If that's what 
you want, then so be it, but it is not something I will feel confident 
Acking. I will not Nack it, though.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 10:27:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 10:27:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57726.101095 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kretE-00019B-9q; Tue, 22 Dec 2020 10:27:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57726.101095; Tue, 22 Dec 2020 10:27:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kretE-000194-6G; Tue, 22 Dec 2020 10:27:36 +0000
Received: by outflank-mailman (input) for mailman id 57726;
 Tue, 22 Dec 2020 10:27:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=if9N=F2=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kretD-00018y-0A
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 10:27:35 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 799bbf8f-bbc2-48cc-b14b-dccade90a486;
 Tue, 22 Dec 2020 10:27:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 799bbf8f-bbc2-48cc-b14b-dccade90a486
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608632853;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=EVZTI24WkGLTw9U36vgvMwxxGO3l+UbWN9heLMZEdvY=;
  b=M9Y0KrH6KR//mwBjNm+uNE16k8Whg+S3MJgb0L6Cc2UboegxcSQ0quvu
   NPuDgnl8dVP6s4rNktYdXgBiUoHfPA7AO4iSmei+gcBFGOgzmjtfeO8za
   r6s1qchW79Tilv+kxk2ZJiQQqR72epImAhJt2+0oH2cTys/D1s77iCEcR
   Q=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: +NC4Q7cdg14XXl1helzSab3rK+fWjojU4LDJUkEGJ9G4FIzHjtE1M6mlNp+HtstD6jfoGhLFU2
 oOe6LIbDP7KCrttBuVaPRepLJLKOrCoPUtKTuzPzqclZs0bWFc2nErCZaih5CgnC1Z0isMT4TD
 sGgHlr5AGcFmiUkfGY4A/K50Wh3j13YgC1rkr8bwTVCwb3uND6hTd8eYHdEGoLtD7Dpp9q6Zjx
 17jzqcW+T6wRWM1oeVymb7+wxiFLxmPLy37gSYyU7G9xGbeNGMB+yWE/5h7pogB/yJCVUrz1oZ
 fd4=
X-SBRS: 5.2
X-MesageID: 33974280
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,438,1599537600"; 
   d="scan'208";a="33974280"
Subject: Re: [PATCH 1/3] xen/domain: Reorder trivial initialisation in early
 domain_create()
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
References: <20201221181446.7791-1-andrew.cooper3@citrix.com>
 <20201221181446.7791-2-andrew.cooper3@citrix.com>
 <3397707d-ba05-4db2-7dfd-e18dbe044a26@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <472745b0-7b9b-1412-85c7-6186711fadd8@citrix.com>
Date: Tue, 22 Dec 2020 10:24:37 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <3397707d-ba05-4db2-7dfd-e18dbe044a26@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 22/12/2020 10:10, Jan Beulich wrote:
> On 21.12.2020 19:14, Andrew Cooper wrote:
>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -391,25 +391,7 @@ struct domain *domain_create(domid_t domid,
>>  
>>      TRACE_1D(TRC_DOM0_DOM_ADD, d->domain_id);
>>  
>> -    /*
>> -     * Allocate d->vcpu[] and set ->max_vcpus up early.  Various per-domain
>> -     * resources want to be sized based on max_vcpus.
>> -     */
>> -    if ( !is_system_domain(d) )
>> -    {
>> -        err = -ENOMEM;
>> -        d->vcpu = xzalloc_array(struct vcpu *, config->max_vcpus);
>> -        if ( !d->vcpu )
>> -            goto fail;
>> -
>> -        d->max_vcpus = config->max_vcpus;
>> -    }
>> -
>> -    lock_profile_register_struct(LOCKPROF_TYPE_PERDOM, d, domid);
> Wouldn't this also count as "trivial initialization", and hence while
> moving want to at least be placed ...
>
>> -    if ( (err = xsm_alloc_security_domain(d)) != 0 )
>> -        goto fail;
>> -
>> +    /* Trivial initialisation. */
>>      atomic_set(&d->refcnt, 1);
>>      RCU_READ_LOCK_INIT(&d->rcu_lock);
>>      spin_lock_init_prof(d, domain_lock);
>> @@ -434,6 +416,27 @@ struct domain *domain_create(domid_t domid,
>>      INIT_LIST_HEAD(&d->pdev_list);
>>  #endif
>>  
>> +    /* All error paths can depend on the above setup. */
> ... ahead of this comment?

Can do.

>
>> +    /*
>> +     * Allocate d->vcpu[] and set ->max_vcpus up early.  Various per-domain
>> +     * resources want to be sized based on max_vcpus.
>> +     */
>> +    if ( !is_system_domain(d) )
>> +    {
>> +        err = -ENOMEM;
>> +        d->vcpu = xzalloc_array(struct vcpu *, config->max_vcpus);
>> +        if ( !d->vcpu )
>> +            goto fail;
>> +
>> +        d->max_vcpus = config->max_vcpus;
>> +    }
>> +
>> +    lock_profile_register_struct(LOCKPROF_TYPE_PERDOM, d, domid);
>> +
>> +    if ( (err = xsm_alloc_security_domain(d)) != 0 )
>> +        goto fail;
>> +
>>      err = -ENOMEM;
>>      if ( !zalloc_cpumask_var(&d->dirty_cpumask) )
>>          goto fail;
>>
> Just as an observation (i.e. unlikely for this patch) I doubt
> system domains need ->dirty_cpumask set up, and hence this
> allocation may also want moving down a few lines.

I agree in principle.  However, something does (or at least did)
reference this for system domains when I did the is_system_domain() cleanup.

The fix might not be as trivial as just moving the allocation.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 10:35:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 10:35:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57732.101110 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krf0a-00023Y-7J; Tue, 22 Dec 2020 10:35:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57732.101110; Tue, 22 Dec 2020 10:35:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krf0a-00023R-48; Tue, 22 Dec 2020 10:35:12 +0000
Received: by outflank-mailman (input) for mailman id 57732;
 Tue, 22 Dec 2020 10:35:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9/vU=F2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krf0Y-00023M-RU
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 10:35:10 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0ffeb83d-79d2-4c38-afb3-9a8f41b07a7f;
 Tue, 22 Dec 2020 10:35:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B4240B293;
 Tue, 22 Dec 2020 10:35:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0ffeb83d-79d2-4c38-afb3-9a8f41b07a7f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608633308; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=AgGjLf2ZtUsAUXY1JeTK7FVgb00mtwMbM6yXJdcbchY=;
	b=tkUgu6grxpjftL1cONBOWJxl1hL54VT+n5Keiq8CpTkPomb+CfZxgSZJzv3KLfs82/JIhb
	EGAe+/ujGuPrw7u0jgKAVHc7JFYZVvd2uFreLFv9dcnH3A4JWRujLoFhcMWazjgO5yHgs9
	RhrO2Sv8xk+IlmG9fOkpQ4fsS3rFhCk=
Subject: Re: [PATCH 2/3] xen/domain: Introduce domain_teardown()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20201221181446.7791-1-andrew.cooper3@citrix.com>
 <20201221181446.7791-3-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b3d1cb55-2793-f37e-13d1-b9e8de2057da@suse.com>
Date: Tue, 22 Dec 2020 11:35:08 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <20201221181446.7791-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 21.12.2020 19:14, Andrew Cooper wrote:
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -273,6 +273,59 @@ static int __init parse_extra_guest_irqs(const char *s)
>  custom_param("extra_guest_irqs", parse_extra_guest_irqs);
>  
>  /*
> + * Release resources held by a domain.  There may or may not be live
> + * references to the domain, and it may or may not be fully constructed.
> + *
> + * d->is_dying differing between DOMDYING_dying and DOMDYING_dead can be used
> + * to determine if live references to the domain exist, and also whether
> + * continuations are permitted.
> + *
> + * If d->is_dying is DOMDYING_dead, this must not return non-zero.
> + */
> +static int domain_teardown(struct domain *d)
> +{
> +    BUG_ON(!d->is_dying);
> +
> +    /*
> +     * This hypercall can take minutes of wallclock time to complete.  This
> +     * logic implements a co-routine, stashing state in struct domain across
> +     * hypercall continuation boundaries.
> +     */
> +    switch ( d->teardown.val )
> +    {
> +        /*
> +         * Record the current progress.  Subsequent hypercall continuations
> +         * will logically restart work from this point.
> +         *
> +         * PROGRESS() markers must not be in the middle of loops.  The loop
> +         * variable isn't preserved across a continuation.
> +         *
> +         * To avoid redundant work, there should be a marker before each
> +         * function which may return -ERESTART.
> +         */
> +#define PROGRESS(x)                             \
> +        d->teardown.val = PROG_ ## x;           \
> +        /* Fallthrough */                       \
> +    case PROG_ ## x
> +
> +        enum {
> +            PROG_done = 1,
> +        };
> +
> +    case 0:
> +    PROGRESS(done):
> +        break;
> +
> +#undef PROGRESS
> +
> +    default:
> +        BUG();
> +    }
> +
> +    return 0;
> +}

While as an initial step this may be okay, I envision ordering issues
with domain_relinquish_resources() - there may be per-arch things that
want doing before certain common ones, and others which should only
be done when certain common teardown was already performed. Therefore
rather than this being a "common equivalent of
domain_relinquish_resources()", I'd rather see (and call) it a
(designated) replacement, where individual per-arch functions would
get called at appropriate times.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 10:38:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 10:38:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57736.101122 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krf3z-0002ED-NV; Tue, 22 Dec 2020 10:38:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57736.101122; Tue, 22 Dec 2020 10:38:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krf3z-0002E6-KX; Tue, 22 Dec 2020 10:38:43 +0000
Received: by outflank-mailman (input) for mailman id 57736;
 Tue, 22 Dec 2020 10:38:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=if9N=F2=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1krf3z-0002E1-2l
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 10:38:43 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0241bfa2-ba4b-4131-bc63-a3c2b8bf9f32;
 Tue, 22 Dec 2020 10:38:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0241bfa2-ba4b-4131-bc63-a3c2b8bf9f32
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608633522;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=lUvWq22SDUDEmC+tSUDzoF0uyYMffSl6Ez2wwX28N9c=;
  b=SHfIE0ofEeh8LYqiEKBFoznUdqixk71W0WB3EjzHEJlCmYJl7uVTZJWZ
   WuK1v4jeIxKvYSxWH68/hJqPUmih3RsbXjKcYbB7+bUE7dLaRRIZm+S25
   QWE17GS1os6Dcmif97YTnWFkt2YJS26ZkB2zqA/orNicoZQ1nBPyTnpJD
   A=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: v2cUk9vtpvzJJEZpURdEvyioznzAvjb8hAxEgS/ltZkN85r/RkWxDyUn3eD5bM7hNuto9l0pOe
 AhPBKpFy7eRyddhBACOo6UHsJM1Mv/29kngsBURPXmV7O2WGAf6pdGoC1NI+Yi63IDXpH6lA7l
 YswwSIw6ShVgPloLJl2hdHAkBVAhzT9Mm3oJI+7jV2yfWMtRQRqWFuxB6sWJic6OyKkiHwua+I
 vGWGJyuHkkriHEZRU9zFPdKBpnKHzYT3GCVQfQAOImqE2Du09T3Z0yCbawkBFWIvr8Rsyl2aLD
 VO0=
X-SBRS: 5.2
X-MesageID: 34092279
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,438,1599537600"; 
   d="scan'208";a="34092279"
Subject: Re: [XEN PATCH 2/3] docs: use predictable ordering in generated
 documentation
To: Maximilian Engelhardt <maxi@daemonizer.de>,
	<xen-devel@lists.xenproject.org>
CC: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
References: <cover.1608319634.git.maxi@daemonizer.de>
 <31df2a1128c15bc1b4c738bf52e29c80982b4170.1608319634.git.maxi@daemonizer.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <fcd6645b-d709-a600-05e2-dd1ba6d0fb7f@citrix.com>
Date: Tue, 22 Dec 2020 10:38:35 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <31df2a1128c15bc1b4c738bf52e29c80982b4170.1608319634.git.maxi@daemonizer.de>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 18/12/2020 20:42, Maximilian Engelhardt wrote:
> When the seq number is equal, sort by the title to get predictable
> output ordering. This is useful for reproducible builds.
>
> Signed-off-by: Maximilian Engelhardt <maxi@daemonizer.de>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 10:40:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 10:40:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57740.101133 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krf61-00033b-4W; Tue, 22 Dec 2020 10:40:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57740.101133; Tue, 22 Dec 2020 10:40:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krf61-00033U-1T; Tue, 22 Dec 2020 10:40:49 +0000
Received: by outflank-mailman (input) for mailman id 57740;
 Tue, 22 Dec 2020 10:40:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=if9N=F2=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1krf5z-00033P-PM
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 10:40:47 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8e94d46a-c026-4953-b073-c31c134bc21a;
 Tue, 22 Dec 2020 10:40:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8e94d46a-c026-4953-b073-c31c134bc21a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608633646;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=8TZ3AdM3+DNkMA8Ac+d1EZv+uIxkJKZ+d+5zZF/V3KU=;
  b=dTxGvDwmhpkNeWQY1rA6cj5ndf87kaggdXhqvCaXyeh1knjj7H5YimUN
   9ONAgkWAdvMQ4iB5JQ2CNFxAaDhK/9uywSwNuQCb/5XnaDm9pCXwtVkpz
   eaYJoJMEbhMhhITckYzHI9RL5eWv4YBcdiSdb+Q5lBgdl/55S+pBhss/p
   g=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: cNfxd2oaqNwIgqzKNxib/0M/w2mTSb/3oJGjQzn3m92al0/DXuVRsxiztDfpCTdP6t4cRae0IY
 mVL/uSpXHjpYllL+Dl7ff41MlNTVFwkxPmWqyVGJR9ppPjZEhyvFQm0gxEXwh/+WVB8bGjnJ/2
 ucsNtpLbsGATy070FeVm3LROarigxavxHDQD0LotkAPupqXD7YS/kHrJJbiRPQg7cD9BThT52e
 b34PGtDcUgssxSASj/Xw7KJm7dt/BVOhA2/IprW+EdwnGvS+3gILrSAwn9rO7oeBpb/P0mSo8x
 KGM=
X-SBRS: 5.2
X-MesageID: 33732350
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,438,1599537600"; 
   d="scan'208";a="33732350"
Subject: Re: [PATCH 2/6] x86/mm: p2m_add_foreign() is HVM-only
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, George Dunlap <george.dunlap@citrix.com>
References: <be9ce75e-9119-2b5a-9e7b-437beb7ee446@suse.com>
 <cf4569c5-a9c5-7b4b-d576-d1521c369418@suse.com>
 <f736244b-ece7-af35-1517-2e5fdd9705c7@citrix.com>
 <e2ee99fc-e3f8-bdaf-fe4a-d048da34731a@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <b4abbe2f-5e3d-5f43-80b8-cfa3fd97061e@citrix.com>
Date: Tue, 22 Dec 2020 10:40:40 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <e2ee99fc-e3f8-bdaf-fe4a-d048da34731a@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 21/12/2020 08:10, Jan Beulich wrote:
> On 17.12.2020 20:18, Andrew Cooper wrote:
>> On 15/12/2020 16:26, Jan Beulich wrote:
>>> This is together with its only caller, xenmem_add_to_physmap_one().
>> I can't parse this sentence.  Perhaps "... as is its only caller," as a
>> follow-on from the subject sentence.
>>
>>>  Move
>>> the latter next to p2m_add_foreign(), allowing this one to become static
>>> at the same time.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> So I had to ask Andrew to revert this (I was already at home when
> noticing the breakage), as it turned out to break the shim build.
> The problem is that xenmem_add_to_physmap() is non-static and
> hence can't be eliminated altogether by the compiler when !HVM.
> We could make the function conditionally static
> "#if !defined(CONFIG_X86) && !defined(CONFIG_HVM)", but this
> looks uglier to me than this extra hunk:
>
> --- unstable.orig/xen/common/memory.c
> +++ unstable/xen/common/memory.c
> @@ -788,7 +788,11 @@ int xenmem_add_to_physmap(struct domain
>      union add_to_physmap_extra extra = {};
>      struct page_info *pages[16];
>  
> -    ASSERT(paging_mode_translate(d));
> +    if ( !paging_mode_translate(d) )
> +    {
> +        ASSERT_UNREACHABLE();
> +        return -EACCES;
> +    }
>  
>      if ( xatp->space == XENMAPSPACE_gmfn_foreign )
>          extra.foreign_domid = DOMID_INVALID;
>
> Andrew, please let me know whether your ack stands with this (or
> said alternative) added, or whether you'd prefer me to re-post.

Yeah, this is probably neater than the ifdefary.  My ack stands.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 10:48:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 10:48:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57744.101146 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krfD7-0003Gs-TV; Tue, 22 Dec 2020 10:48:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57744.101146; Tue, 22 Dec 2020 10:48:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krfD7-0003Gl-QU; Tue, 22 Dec 2020 10:48:09 +0000
Received: by outflank-mailman (input) for mailman id 57744;
 Tue, 22 Dec 2020 10:48:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9/vU=F2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krfD6-0003Gg-6x
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 10:48:08 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id badf59dd-fb57-4e9c-ae4b-352a362a3bb1;
 Tue, 22 Dec 2020 10:48:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C896EACF5;
 Tue, 22 Dec 2020 10:48:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: badf59dd-fb57-4e9c-ae4b-352a362a3bb1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608634085; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ZoL2rsTifyl5gcYQ924cRQHepySA47J03hS6iIRqvD8=;
	b=GOBLt7dLjfmVwj4/FQwTDbmcNZcMtuydfl8OF1HN/lRw6fWdrwcvbaA4/WtyKU9POSAbg1
	Vo/bETsq402PS+fy6ftk+u5UhaQmVIoy7KYTSDSP/aEWOjC7cLG9Wh4AYp22umYPhWwpWs
	Gz4EedAMDPx5CVw5sU0Cn7Tx/w7t+vw=
Subject: Re: [PATCH 3/3] xen/evtchn: Clean up teardown handling
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20201221181446.7791-1-andrew.cooper3@citrix.com>
 <20201221181446.7791-4-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3d72bcb6-dabf-2b26-cecd-5f2d36505bd5@suse.com>
Date: Tue, 22 Dec 2020 11:48:05 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <20201221181446.7791-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 21.12.2020 19:14, Andrew Cooper wrote:
> First of all, rename the evtchn APIs:
>  * evtchn_destroy       => evtchn_teardown
>  * evtchn_destroy_final => evtchn_destroy

I wonder to what extent this is going to cause confusion with backports
down the road. May I suggest doing only the first of the two renames,
at least for a couple of years? Or making the second rename to
e.g. evtchn_cleanup() or evtchn_deinit()?

> RFC.  While testing this, I observed this, after faking up an -ENOMEM in
> dom0's construction:
> 
>   (XEN) [2020-12-21 16:31:20] NX (Execute Disable) protection active
>   (XEN) [2020-12-21 16:33:04]
>   (XEN) [2020-12-21 16:33:04] ****************************************
>   (XEN) [2020-12-21 16:33:04] Panic on CPU 0:
>   (XEN) [2020-12-21 16:33:04] Error creating domain 0
>   (XEN) [2020-12-21 16:33:04] ****************************************
> 
> XSA-344 appears to have added nearly 2 minutes of wallclock time into the
> domain_create() error path, which isn't ok.
> 
> Considering that event channels haven't even been initialised in this
> particular scenario, it ought to take ~0 time.  Even if event channels have
> been initialised, none can be active as the domain isn't visible to the system.

evtchn_init() sets d->valid_evtchns to EVTCHNS_PER_BUCKET. Are you
suggesting cleaning up one bucket's worth of unused event channels
takes two minutes? If this is really the case, and considering there
could at most be unbound Xen channels, perhaps we could avoid
calling evtchn_teardown() from domain_create()'s error path? We'd
need to take care of the then missing accounting (->active_evtchns
and ->xen_evtchns), but this should be doable.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 10:50:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 10:50:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57748.101157 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krfFk-00046f-BZ; Tue, 22 Dec 2020 10:50:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57748.101157; Tue, 22 Dec 2020 10:50:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krfFk-00046Y-8R; Tue, 22 Dec 2020 10:50:52 +0000
Received: by outflank-mailman (input) for mailman id 57748;
 Tue, 22 Dec 2020 10:50:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9/vU=F2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krfFj-00046R-4M
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 10:50:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 31beeb7b-083d-4d8f-949a-ab5b74e4902b;
 Tue, 22 Dec 2020 10:50:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 44C6AACF1;
 Tue, 22 Dec 2020 10:50:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 31beeb7b-083d-4d8f-949a-ab5b74e4902b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608634249; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=aZ9msnHzXOIrKSwlT5b6ErSNQh9Pxq1XTa8Z/4JrXWs=;
	b=kxGDk+XmgNBARKHVRRWMaGCmsOPZyx/gcpslPi5+fG9jR2IgeAAN3DEgwqP0vWcVpQR5g9
	Oy6xtbWn5IZcrlvymYV1rod5IuD8qEwie2Qv6PlgyY5g8e4P5hnVWxLTbmrz7lF0rqXR5d
	vrBprd5j+BeuyJ+MxE9tQAiIrWSoazE=
Subject: Re: [PATCH 1/3] xen/domain: Reorder trivial initialisation in early
 domain_create()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20201221181446.7791-1-andrew.cooper3@citrix.com>
 <20201221181446.7791-2-andrew.cooper3@citrix.com>
 <3397707d-ba05-4db2-7dfd-e18dbe044a26@suse.com>
 <472745b0-7b9b-1412-85c7-6186711fadd8@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a6551c44-4fc9-480e-ed96-70e22ffb1e98@suse.com>
Date: Tue, 22 Dec 2020 11:50:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <472745b0-7b9b-1412-85c7-6186711fadd8@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 22.12.2020 11:24, Andrew Cooper wrote:
> On 22/12/2020 10:10, Jan Beulich wrote:
>> On 21.12.2020 19:14, Andrew Cooper wrote:
>>> --- a/xen/common/domain.c
>>> +++ b/xen/common/domain.c
>>> @@ -391,25 +391,7 @@ struct domain *domain_create(domid_t domid,
>>>  
>>>      TRACE_1D(TRC_DOM0_DOM_ADD, d->domain_id);
>>>  
>>> -    /*
>>> -     * Allocate d->vcpu[] and set ->max_vcpus up early.  Various per-domain
>>> -     * resources want to be sized based on max_vcpus.
>>> -     */
>>> -    if ( !is_system_domain(d) )
>>> -    {
>>> -        err = -ENOMEM;
>>> -        d->vcpu = xzalloc_array(struct vcpu *, config->max_vcpus);
>>> -        if ( !d->vcpu )
>>> -            goto fail;
>>> -
>>> -        d->max_vcpus = config->max_vcpus;
>>> -    }
>>> -
>>> -    lock_profile_register_struct(LOCKPROF_TYPE_PERDOM, d, domid);
>> Wouldn't this also count as "trivial initialization", and hence while
>> moving want to at least be placed ...
>>
>>> -    if ( (err = xsm_alloc_security_domain(d)) != 0 )
>>> -        goto fail;
>>> -
>>> +    /* Trivial initialisation. */
>>>      atomic_set(&d->refcnt, 1);
>>>      RCU_READ_LOCK_INIT(&d->rcu_lock);
>>>      spin_lock_init_prof(d, domain_lock);
>>> @@ -434,6 +416,27 @@ struct domain *domain_create(domid_t domid,
>>>      INIT_LIST_HEAD(&d->pdev_list);
>>>  #endif
>>>  
>>> +    /* All error paths can depend on the above setup. */
>> ... ahead of this comment?
> 
> Can do.

At which point

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 10:54:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 10:54:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57752.101170 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krfIo-0004Gh-RX; Tue, 22 Dec 2020 10:54:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57752.101170; Tue, 22 Dec 2020 10:54:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krfIo-0004Ga-Nc; Tue, 22 Dec 2020 10:54:02 +0000
Received: by outflank-mailman (input) for mailman id 57752;
 Tue, 22 Dec 2020 10:54:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9/vU=F2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krfIo-0004GU-4Y
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 10:54:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9029e9db-51db-41c6-8837-fa7c6d32bd73;
 Tue, 22 Dec 2020 10:54:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 34AE8ACF5;
 Tue, 22 Dec 2020 10:54:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9029e9db-51db-41c6-8837-fa7c6d32bd73
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608634440; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/x0HeMS+yuMcvDgCc+Ak4+11Gt2CGzVMRzeAxBnKLng=;
	b=Dd7A3AlyLCXMJsufFF+A9SKfLlubum0ndNWbU9jlB4xV2uPR6M3peozN7TiQdQvxx7T4FL
	TzQJ3aUNn5EEeCHd8O0UZ6NMfyNLsN8g02v9JU8n4cGNAmj5iflj0K0c+Rt08vWRHy66Jt
	vTE+bPXYJCGf6fLyrdiWfjhVkEQ6yDw=
Subject: Re: [PATCH 2/3] xen/domain: Introduce domain_teardown()
To: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20201221181446.7791-1-andrew.cooper3@citrix.com>
 <20201221181446.7791-3-andrew.cooper3@citrix.com>
 <f42f6b6e-3ee3-f58e-513b-70f80f7541ee@xen.org>
 <7edf2139-b63e-00c9-7172-524566f942ae@citrix.com>
 <09fd7598-9899-9b4c-68ba-f90b3bc47d6f@suse.com>
 <35b24879-e075-8066-603a-518fbb82f656@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <892cb753-594b-15df-2342-9d10d5787f46@suse.com>
Date: Tue, 22 Dec 2020 11:53:59 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <35b24879-e075-8066-603a-518fbb82f656@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 22.12.2020 11:25, Julien Grall wrote:
> On 22/12/2020 07:50, Jan Beulich wrote:
>> On 21.12.2020 19:45, Andrew Cooper wrote:
>>> On 21/12/2020 18:36, Julien Grall wrote:
>>>>> @@ -553,6 +606,9 @@ struct domain *domain_create(domid_t domid,
>>>>>        if ( init_status & INIT_watchdog )
>>>>>            watchdog_domain_destroy(d);
>>>>>    +    /* Must not hit a continuation in this context. */
>>>>> +    ASSERT(domain_teardown(d) == 0);
>>>> The ASSERT() will become a NOP in production build, so
>>>> domain_teardown() will not be called.
>>>
>>> Urgh - it's not really a nop, but its evaluation isn't symmetric between
>>> debug and release builds.  I'll need an extra local variable.
>>
>> Or use ASSERT_UNREACHABLE(). (I admit I don't really like the
>> resulting constructs, and would like to propose an alternative,
>> even if I fear it'll be controversial.)
>>
>>>> However, I think it would be better if we pass an extra argument to
>>>> indicate whether the code is allowed to preempt. This would make the
>>>> preemption check more obvious in evtchn_destroy() compared to the
>>>> current d->is_dying != DOMDYING_dead.
>>>
>>> We can have a predicate if you'd prefer, but plumbing an extra parameter
>>> is wasteful, and can only cause confusion if it is out of sync with
>>> d->is_dying.
>>
>> I agree here - it wasn't so long ago that event_channel.c gained
>> a DOMDYING_dead check, and I don't see why we shouldn't extend
>> this approach to here and elsewhere.
> 
>> I think the d->is_dying != DOMDYING_dead is difficult to understand even
> with the comment on top. This was ok in one place, but now it will 
> spread everywhere. So at least, I would suggest to introduce a wrapper 
> that is better named.
> 
>> There is also a future-proofing concern. At the moment, we are assuming
>> that preemption will not be needed in domain_create(). I am ready to bet
> that the assumption is going to be broken sooner or later.

This is a fair consideration, yet I'm having trouble seeing what it
might be that would cause domain_create() to require preemption.
The function is supposed to only produce an empty container. But yes,
if e.g. vCPU creation was to move here, the situation would indeed
change.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 11:05:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 11:05:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57757.101181 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krfUD-0005Gf-3z; Tue, 22 Dec 2020 11:05:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57757.101181; Tue, 22 Dec 2020 11:05:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krfUD-0005GY-0Y; Tue, 22 Dec 2020 11:05:49 +0000
Received: by outflank-mailman (input) for mailman id 57757;
 Tue, 22 Dec 2020 11:05:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1krfUC-0005GT-6d
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 11:05:48 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1krfUB-0002oT-Ss; Tue, 22 Dec 2020 11:05:47 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1krfUB-0001ul-Kz; Tue, 22 Dec 2020 11:05:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=GAYYImh/e8Dov7zf5WchrBeZLpC407M9Y8+vzejXDOQ=; b=ak9TVYlzddtoCxRS7ZUXbDk8UJ
	YVqTt0bg1gkaTDWre+FhCJpKJnga16zRTASj6Lu3m7nTHW0V8KnoVvFWfwgAUJQB94Vezpedtim5A
	JsO1uODP3kxWyrx501CmPOgmbBOnW9lHwCV27BCsIrXQqQzQ7pRCv1HHkAQepwVnSWK4=;
Subject: Re: [PATCH 2/3] xen/domain: Introduce domain_teardown()
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20201221181446.7791-1-andrew.cooper3@citrix.com>
 <20201221181446.7791-3-andrew.cooper3@citrix.com>
 <f42f6b6e-3ee3-f58e-513b-70f80f7541ee@xen.org>
 <7edf2139-b63e-00c9-7172-524566f942ae@citrix.com>
 <09fd7598-9899-9b4c-68ba-f90b3bc47d6f@suse.com>
 <35b24879-e075-8066-603a-518fbb82f656@xen.org>
 <892cb753-594b-15df-2342-9d10d5787f46@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <d65c0e96-bd8e-15b9-18da-a33147477199@xen.org>
Date: Tue, 22 Dec 2020 11:05:45 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <892cb753-594b-15df-2342-9d10d5787f46@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Jan,

On 22/12/2020 10:53, Jan Beulich wrote:
> On 22.12.2020 11:25, Julien Grall wrote:
>> On 22/12/2020 07:50, Jan Beulich wrote:
>>> On 21.12.2020 19:45, Andrew Cooper wrote:
>>>> On 21/12/2020 18:36, Julien Grall wrote:
>>>>>> @@ -553,6 +606,9 @@ struct domain *domain_create(domid_t domid,
>>>>>>         if ( init_status & INIT_watchdog )
>>>>>>             watchdog_domain_destroy(d);
>>>>>>     +    /* Must not hit a continuation in this context. */
>>>>>> +    ASSERT(domain_teardown(d) == 0);
>>>>> The ASSERT() will become a NOP in production build, so
>>>>> domain_teardown() will not be called.
>>>>
>>>> Urgh - it's not really a nop, but its evaluation isn't symmetric between
>>>> debug and release builds.  I'll need an extra local variable.
>>>
>>> Or use ASSERT_UNREACHABLE(). (I admit I don't really like the
>>> resulting constructs, and would like to propose an alternative,
>>> even if I fear it'll be controversial.)
>>>
>>>>> However, I think it would be better if we pass an extra argument to
>>>>> indicate whether the code is allowed to preempt. This would make the
>>>>> preemption check more obvious in evtchn_destroy() compared to the
>>>>> current d->is_dying != DOMDYING_dead.
>>>>
>>>> We can have a predicate if you'd prefer, but plumbing an extra parameter
>>>> is wasteful, and can only cause confusion if it is out of sync with
>>>> d->is_dying.
>>>
>>> I agree here - it wasn't so long ago that event_channel.c gained
>>> a DOMDYING_dead check, and I don't see why we shouldn't extend
>>> this approach to here and elsewhere.
>>
>> I think the d->is_dying != DOMDYING_dead is difficult to understand even
>> with the comment on top. This was ok in one place, but now it will
>> spread everywhere. So at least, I would suggest to introduce a wrapper
>> that is better named.
>>
>> There is also a futureproof concern. At the moment, we are considering
>> the preemption will not be needed in domain_create(). I am ready to bet
>> that the assumption is going to be broken sooner or later.
> 
> This is a fair consideration, yet I'm having trouble seeing what it
> might be that would cause domain_create() to require preemption.
> The function is supposed to only produce an empty container. But yes,
> if e.g. vCPU creation was to move here, the situation would indeed
> change.
We are not really creating an "empty container". There is at least one 
initial pool of memory allocated for HAP (256 pages) which needs to be 
freed if we fail to create the domain.

There is a second pool in discussion for the IOMMU (see [1]) which will 
also initially allocate 256 pages.

To me it looks like domain_create() is getting quite big and preemption 
is going to be required sooner or later.

Cheers,

[1] <20201005094905.2929-6-paul@xen.org>

> 
> Jan
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 11:11:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 11:11:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57762.101198 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krfZm-00069z-T1; Tue, 22 Dec 2020 11:11:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57762.101198; Tue, 22 Dec 2020 11:11:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krfZm-00069s-OM; Tue, 22 Dec 2020 11:11:34 +0000
Received: by outflank-mailman (input) for mailman id 57762;
 Tue, 22 Dec 2020 11:11:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=if9N=F2=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1krfZl-00069n-3N
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 11:11:33 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e4efe9e6-9f6c-4049-9abd-0acafbfabc3c;
 Tue, 22 Dec 2020 11:11:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e4efe9e6-9f6c-4049-9abd-0acafbfabc3c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608635491;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=RhnixBKziMiL5VIohcadd1Ri8y4M8sTx3hOwKYigZN4=;
  b=f44zwu06i4vWCr0mnP8pJj1ckkru40dvfiFxxKDfendQOxGNBOlkoy5W
   Y7uc2Dx4e5K3KIfTY21zf0uGk8jfpmc33vEu+0QT6lASd2OJ0sQ56amuN
   n3teBhO8EP8YIsYUBPGZ9guqm2D/chv4ru+zPU2PhZPE02sT0LH3ISp/T
   A=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 3jfqed5Wn4bgMXyaCk54SIdEIY65rWJUyP0KYB/2Xc/Knf6c77ycX2QzLyHnjIhCqnNZklu/EJ
 Q+wHKoFvVmE/nIkcK/Iv6eGWWX7hFN7nILibuh2iPSPGe0YgBs84Q/e9DqCZsCcBecAUSg3lkb
 iiJqOdNibL843AvGUQIUKQChz+u8oXQu+2SM5F9h+KfqRGC3BTaLITs3p+EhgebG+0J2sl6v74
 cs7oQn/px2MgXYsHMD1LfnjVej0s2K58DoVhAGNm2iMfPMjY7Fq397rJWcCd/uKcI7SRGgYYb9
 OdE=
X-SBRS: 5.2
X-MesageID: 33733941
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,438,1599537600"; 
   d="scan'208";a="33733941"
Subject: Re: [PATCH 2/3] xen/domain: Introduce domain_teardown()
To: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20201221181446.7791-1-andrew.cooper3@citrix.com>
 <20201221181446.7791-3-andrew.cooper3@citrix.com>
 <f42f6b6e-3ee3-f58e-513b-70f80f7541ee@xen.org>
 <7edf2139-b63e-00c9-7172-524566f942ae@citrix.com>
 <09fd7598-9899-9b4c-68ba-f90b3bc47d6f@suse.com>
 <35b24879-e075-8066-603a-518fbb82f656@xen.org>
 <892cb753-594b-15df-2342-9d10d5787f46@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <3bf7c6f4-d945-8a70-47eb-eab99099fe25@citrix.com>
Date: Tue, 22 Dec 2020 11:11:25 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <892cb753-594b-15df-2342-9d10d5787f46@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 22/12/2020 10:53, Jan Beulich wrote:
> On 22.12.2020 11:25, Julien Grall wrote:
>> On 22/12/2020 07:50, Jan Beulich wrote:
>>> On 21.12.2020 19:45, Andrew Cooper wrote:
>>>> On 21/12/2020 18:36, Julien Grall wrote:
>>>>>> @@ -553,6 +606,9 @@ struct domain *domain_create(domid_t domid,
>>>>>>        if ( init_status & INIT_watchdog )
>>>>>>            watchdog_domain_destroy(d);
>>>>>>    +    /* Must not hit a continuation in this context. */
>>>>>> +    ASSERT(domain_teardown(d) == 0);
>>>>> The ASSERT() will become a NOP in production build, so
>>>>> domain_teardown() will not be called.
>>>> Urgh - it's not really a nop, but its evaluation isn't symmetric between
>>>> debug and release builds.  I'll need an extra local variable.
>>> Or use ASSERT_UNREACHABLE(). (I admit I don't really like the
>>> resulting constructs, and would like to propose an alternative,
>>> even if I fear it'll be controversial.)
>>>
>>>>> However, I think it would be better if we pass an extra argument to
>>>>> indicate whether the code is allowed to preempt. This would make the
>>>>> preemption check more obvious in evtchn_destroy() compared to the
>>>>> current d->is_dying != DOMDYING_dead.
>>>> We can have a predicate if you'd prefer, but plumbing an extra parameter
>>>> is wasteful, and can only cause confusion if it is out of sync with
>>>> d->is_dying.
>>> I agree here - it wasn't so long ago that event_channel.c gained
>>> a DOMDYING_dead check, and I don't see why we shouldn't extend
>>> this approach to here and elsewhere.
>> I think the d->is_dying != DOMDYING_dead is difficult to understand even
>> with the comment on top. This was ok in one place, but now it will 
>> spread everywhere. So at least, I would suggest to introduce a wrapper 
>> that is better named.
>>
>> There is also a futureproof concern. At the moment, we are considering 
>> the preemption will not be needed in domain_create(). I am ready to bet 
>> that the assumption is going to be broken sooner or later.
> This is a fair consideration, yet I'm having trouble seeing what it
> might be that would cause domain_create() to require preemption.
> The function is supposed to only produce an empty container. But yes,
> if e.g. vCPU creation was to move here, the situation would indeed
> change.

As discussed, I no longer think that is a good plan, especially if we
want a sane mechanism for not allocating AMX memory for domains not
configured to use AMX.

domain_create() (and vcpu_create() to a lesser extent) are the functions
which can't become preemptible, because they are the allocation and
setup of the objects which would be used to store continuation
information for other hypercalls.

The only option if these get too complicated is to split the complexity
out into other hypercalls.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 11:14:40 2020
Subject: Re: Hypercall fault injection (Was [PATCH 0/3] xen/domain: More
 structured teardown)
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, "Juergen
 Gross" <jgross@suse.com>, Xen-devel <xen-devel@lists.xenproject.org>, "Tamas
 K Lengyel" <tamas@tklengyel.com>
References: <20201221181446.7791-1-andrew.cooper3@citrix.com>
 <ac552c84-144c-c213-7985-84d92cbb5601@citrix.com>
 <983a3fef-c80f-ec2a-bf3c-5e054fc6a7a9@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <760969b0-743e-fdd7-3577-72612e3a88b7@citrix.com>
Date: Tue, 22 Dec 2020 11:14:29 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <983a3fef-c80f-ec2a-bf3c-5e054fc6a7a9@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB

On 22/12/2020 10:00, Jan Beulich wrote:
> On 21.12.2020 20:36, Andrew Cooper wrote:
>> Hello,
>>
>> We have some very complicated hypercalls, createdomain chief among them
>> with max_vcpus a close second, carrying immense complexity and very
>> hard-to-test error handling.
>>
>> It is no surprise that the error handling is riddled with bugs.
>>
>> Random failures from core functions are one way, but I'm not sure that
>> will be especially helpful.  In particular, we'd need a way to exclude
>> "dom0 critical" operations so we've got a usable system to run testing on.
>>
>> As an alternative, how about adding a fault_ttl field into the hypercall?
>>
>> The exact paths taken in {domain,vcpu}_create() are sensitive to the
>> hardware, Xen Kconfig, and other parameters passed into the
>> hypercall(s).  The testing logic doesn't really want to care about what
>> failed; simply that the error was handled correctly.
>>
>> So a test for this might look like:
>>
>> cfg = { ... };
>> while ( xc_create_domain(xch, cfg) < 0 )
>>     cfg.fault_ttl++;
>>
>>
>> The pros of this approach are that, for a specific build of Xen on a
>> piece of hardware, it ought to check every failure path in
>> domain_create(), until the ttl finally gets higher than the number of
>> fail-able actions required to construct a domain.  Also, the test
>> doesn't need changing as the complexity of domain_create() changes.
>>
>> The main con will most likely be the invasiveness of code in Xen, but
>> I suppose any fault injection is going to be invasive to a certain extent.
> While I like the idea in principle, the innocent looking
>
> cfg = { ... };
>
> is quite a bit of a concern here as well: Depending on the precise
> settings, paths taken in the hypervisor may heavily vary, and hence
> such a test will only end up being useful if it covers a wide
> variety of settings. Even if the number of tests to execute turned
> out to still be manageable today, it may quickly turn out to be
> insufficiently scalable as we add new settings controllable right at
> domain creation (which I understand is the plan).

Well - there are two aspects here.

First, 99% of all VMs in practice use one of 3 or 4 configurations.  An
individual configuration takes O(n) time to test with fault_ttl, where n
is the number of fail-able steps in Xen's logic, and we absolutely want
to be able to test these deterministically and to completion.

For the plethora of other configurations, I agree that it is infeasible
to test them all.  However, a hypercall like this is easy to wire up
into a fuzzing harness.

TBH, I was thinking of something like
https://github.com/intel/kernel-fuzzer-for-xen-project with a PVH Xen
and XTF "dom0" poking just this hypercall.  All the other complicated
bits of wiring AFL up appear to have been done.

Perhaps when we exhaust that as a source of bugs, we move onto fuzzing
the L0 Xen, because running on native will give it more paths to
explore.  We'd need some way of reporting path/trace data back to AFL in
dom0, which might require a bit of plumbing.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 11:28:16 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157775-mainreport@xen.org>
Subject: [xen-unstable test] 157775: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 22 Dec 2020 11:28:08 +0000

flight 157775 xen-unstable real [real]
flight 157789 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157775/
http://logs.test-lab.xenproject.org/osstest/logs/157789/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 157754

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157754
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157754
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157754
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157754
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157754
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157754
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157754
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157754
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157754
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157754
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157754
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  ffa9d2999722a404d3a4381b0249d191de134d33
baseline version:
 xen                  357db96a66e47e609c3b14768f1062e13eedbd93

Last test of basis   157754  2020-12-21 07:23:13 Z    1 days
Testing same since   157775  2020-12-21 21:37:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ffa9d2999722a404d3a4381b0249d191de134d33
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Dec 21 14:52:26 2020 +0000

    x86/shadow: Fix build with !CONFIG_SHADOW_PAGING
    
    Implement a stub for shadow_vcpu_teardown()
    
    Fixes: d162f36848c4 ("xen/x86: Fix memory leak in vcpu_create() error path")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit d162f36848c4a98a782cc05820b0aa7ec1ae297d
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Sep 28 15:25:44 2020 +0100

    xen/x86: Fix memory leak in vcpu_create() error path
    
    Various paths in vcpu_create() end up calling paging_update_paging_modes(),
    which eventually allocate a monitor pagetable if one doesn't exist.
    
    However, an error in vcpu_create() results in the vcpu being cleaned up
    locally, and not put onto the domain's vcpu list.  Therefore, the monitor
    table is not freed by {hap,shadow}_teardown()'s loop.  This is caught by
    assertions later that we've successfully freed the entire hap/shadow memory
    pool.
    
    The per-vcpu loops in the domain teardown logic are conceptually wrong, but
    exist due to insufficient structure in the existing logic.
    
    Break paging_vcpu_teardown() out of paging_teardown(), with mirrored breakouts
    in the hap/shadow code, and use it from arch_vcpu_create()'s error path.  This
    fixes the memory leak.
    
    The new {hap,shadow}_vcpu_teardown() must be idempotent, and are written to be
    as tolerable as possible, with the minimum number of safety checks possible.
    In particular, drop the mfn_valid() check - if these fields are junk, then Xen
    is going to explode anyway.
    
    Reported-by: Michał Leszczyński <michal.leszczynski@cert.pl>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 6131dab5f2c8059a0fc7fd884bc6d4ff78ba44c2
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Dec 18 23:30:04 2020 +0000

    xen/Kconfig: Correct the NR_CPUS description
    
    The description "physical CPUs" is especially wrong, as it implies the number
    of sockets, which tops out at 8 on all but the very biggest servers.
    
    NR_CPUS is the number of logical entities the scheduler can use.
    
    Reported-by: hanetzer@startmail.com
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 11:28:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 11:28:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57773.101236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krfqD-0007RP-57; Tue, 22 Dec 2020 11:28:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57773.101236; Tue, 22 Dec 2020 11:28:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krfqD-0007RG-1g; Tue, 22 Dec 2020 11:28:33 +0000
Received: by outflank-mailman (input) for mailman id 57773;
 Tue, 22 Dec 2020 11:28:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=if9N=F2=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1krfqB-0007R2-UD
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 11:28:31 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fce0ce66-1d58-408f-86e3-36e61bd5c93c;
 Tue, 22 Dec 2020 11:28:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fce0ce66-1d58-408f-86e3-36e61bd5c93c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608636510;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=7mnwUyFM7qldS2/h5A7qFmSUCsEV/cNi71B8NCJqCis=;
  b=RvIr17eMpFSWy8aBnfigOh4776gtUZ3lJjjH3ihw5FproT62kElMLXO8
   FfhqmEFN02gZDRfCi5uH9N06LKGbBBgppx6teunlE3x8G6hNzvYjFD/mh
   DYDToHXHhvxpb6nMwdGriBTk3p+aXetTP50j2aeXIFVRjj3uOXTMK7TpH
   w=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: e/WZRKWVXbIqJlO3yymhpgPXIBtfob3+t/AGoVWYDzzSgnw6E7WRxLTIiyR2JfylRytcPtyetf
 qTip63W7RKKFk8P+E98uiKg3IqeTLplSszQ0f3tc48sdRvIdlFYIXhu0ubSxzV109wnJRln3NK
 LPvgjSpGffbuiy1+pvyo9EoRt8jUlurcopXnG9VHS+Q3fUz+/IuacV0tZ+ko56GF/9vvXJkXFK
 HYkpb1iZdJo/u9SV4jX5CBLOMAaLW6cQM1+By2g2/B3ARa5XVRK8QA2jt69tNfdkRUl6af7ydz
 xV8=
X-SBRS: 5.2
X-MesageID: 33755079
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,438,1599537600"; 
   d="scan'208";a="33755079"
Subject: Re: [PATCH 3/3] xen/evtchn: Clean up teardown handling
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
References: <20201221181446.7791-1-andrew.cooper3@citrix.com>
 <20201221181446.7791-4-andrew.cooper3@citrix.com>
 <3d72bcb6-dabf-2b26-cecd-5f2d36505bd5@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <683f7808-aad7-1c42-e9e9-3e251e1a4561@citrix.com>
Date: Tue, 22 Dec 2020 11:28:24 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <3d72bcb6-dabf-2b26-cecd-5f2d36505bd5@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 22/12/2020 10:48, Jan Beulich wrote:
> On 21.12.2020 19:14, Andrew Cooper wrote:
>> First of all, rename the evtchn APIs:
>>  * evtchn_destroy       => evtchn_teardown
>>  * evtchn_destroy_final => evtchn_destroy
> I wonder in how far this is going to cause confusion with backports
> down the road. May I suggest to do only the first of the two renames,
> at least until in a couple of years' time? Or make the second rename
> to e.g. evtchn_cleanup() or evtchn_deinit()?

I considered backports, but I don't think it will be an issue.  The
contents of the two functions are very different, and we're not likely
to be moving the callers in backports.

I'm not fussed about the exact naming, so long as we can make an
agreement and adhere to it strictly.  The current APIs are a total mess.

I used teardown/destroy because that seems to be one common theme in the
APIs, but it will require some functions to change their names.

>> RFC.  While testing this, I observed this, after faking up an -ENOMEM in
>> dom0's construction:
>>
>>   (XEN) [2020-12-21 16:31:20] NX (Execute Disable) protection active
>>   (XEN) [2020-12-21 16:33:04]
>>   (XEN) [2020-12-21 16:33:04] ****************************************
>>   (XEN) [2020-12-21 16:33:04] Panic on CPU 0:
>>   (XEN) [2020-12-21 16:33:04] Error creating domain 0
>>   (XEN) [2020-12-21 16:33:04] ****************************************
>>
>> XSA-344 appears to have added nearly 2 minutes of wallclock time into the
>> domain_create() error path, which isn't ok.
>>
>> Considering that event channels haven't even been initialised in this
>> particular scenario, it ought to take ~0 time.  Even if event channels have
>> been initialised, none can be active as the domain isn't visible to the system.
> evtchn_init() sets d->valid_evtchns to EVTCHNS_PER_BUCKET. Are you
> suggesting cleaning up one bucket's worth of unused event channels
> takes two minutes? If this is really the case, and considering there
> could at most be unbound Xen channels, perhaps we could avoid
> calling evtchn_teardown() from domain_create()'s error path? We'd
> need to take care of the then missing accounting (->active_evtchns
> and ->xen_evtchns), but this should be doable.

Actually, it's a bug in this patch.  evtchn_init() hasn't been called, so
d->valid_evtchns is 0, so the loop is 2^32 iterations long.  Luckily,
this is easy to fix.

As for avoiding the call: specifically not.  Part of the problem we're
trying to fix is that we've got two different destruction paths, and
making domain_teardown() fully idempotent is key to fixing that.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 11:46:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 11:46:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57782.101252 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krg7v-0000nW-Rl; Tue, 22 Dec 2020 11:46:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57782.101252; Tue, 22 Dec 2020 11:46:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krg7v-0000nP-OH; Tue, 22 Dec 2020 11:46:51 +0000
Received: by outflank-mailman (input) for mailman id 57782;
 Tue, 22 Dec 2020 11:46:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=if9N=F2=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1krg7u-0000nK-02
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 11:46:50 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1272a12b-d560-4031-8651-5ae36a0b1e86;
 Tue, 22 Dec 2020 11:46:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1272a12b-d560-4031-8651-5ae36a0b1e86
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608637608;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=tD5CRwL0RrGeMusPck/cEkV9FddfSXSlwlsPh9YbCWQ=;
  b=YI4XK+GgF609tJA7O6K645kGX/1NtFcQdHYZhctiA7eRxrvuCQPCQTQm
   JuhpmlZl9JCdpRPI1NkQDKO7XjxI7jLN5IQtwCveTC4TmfRxmO3rrtsB3
   dYaVearsqfk5ozNoFYOqY63+AqjY1Nro2UzxXcrOAhyfUP9g6rVTdbrQF
   g=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: FUklLGfjUQ6tG5YjdSDgLd9k9O27XCNUy29jkUS0cupBJGuWYR05iPxAFkAW6ckkwCbDMowVtE
 iTsugbXCKWv4m6rP3MrFyL/q5kbrT9YVzbvM+AFVFZJouspruG7i2WnRvJ0jbtNF4/gJA6AHeB
 r1N/itjwb+ds650yulldiLvI296Ae5kEtigB5Cpbq41++eOZAEtyF2apGHCa7CcTdZBgDt2eja
 cKO/x+BdSC0pDpZ7LHL99J4dh2nxg/U2gDb3dkQUwuxqO2Ldlt7cDIpbGy3rCMgen1OuvPJ+Bb
 7R0=
X-SBRS: 5.2
X-MesageID: 34986516
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,438,1599537600"; 
   d="scan'208";a="34986516"
Subject: Re: [PATCH 2/3] xen/domain: Introduce domain_teardown()
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
References: <20201221181446.7791-1-andrew.cooper3@citrix.com>
 <20201221181446.7791-3-andrew.cooper3@citrix.com>
 <b3d1cb55-2793-f37e-13d1-b9e8de2057da@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <49d46056-2b99-3ec6-cf93-3f40be259330@citrix.com>
Date: Tue, 22 Dec 2020 11:46:42 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <b3d1cb55-2793-f37e-13d1-b9e8de2057da@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 22/12/2020 10:35, Jan Beulich wrote:
> On 21.12.2020 19:14, Andrew Cooper wrote:
>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -273,6 +273,59 @@ static int __init parse_extra_guest_irqs(const char *s)
>>  custom_param("extra_guest_irqs", parse_extra_guest_irqs);
>>  
>>  /*
>> + * Release resources held by a domain.  There may or may not be live
>> + * references to the domain, and it may or may not be fully constructed.
>> + *
>> + * d->is_dying differing between DOMDYING_dying and DOMDYING_dead can be used
>> + * to determine if live references to the domain exist, and also whether
>> + * continuations are permitted.
>> + *
>> + * If d->is_dying is DOMDYING_dead, this must not return non-zero.
>> + */
>> +static int domain_teardown(struct domain *d)
>> +{
>> +    BUG_ON(!d->is_dying);
>> +
>> +    /*
>> +     * This hypercall can take minutes of wallclock time to complete.  This
>> +     * logic implements a co-routine, stashing state in struct domain across
>> +     * hypercall continuation boundaries.
>> +     */
>> +    switch ( d->teardown.val )
>> +    {
>> +        /*
>> +         * Record the current progress.  Subsequent hypercall continuations
>> +         * will logically restart work from this point.
>> +         *
>> +         * PROGRESS() markers must not be in the middle of loops.  The loop
>> +         * variable isn't preserved across a continuation.
>> +         *
>> +         * To avoid redundant work, there should be a marker before each
>> +         * function which may return -ERESTART.
>> +         */
>> +#define PROGRESS(x)                             \
>> +        d->teardown.val = PROG_ ## x;           \
>> +        /* Fallthrough */                       \
>> +    case PROG_ ## x
>> +
>> +        enum {
>> +            PROG_done = 1,
>> +        };
>> +
>> +    case 0:
>> +    PROGRESS(done):
>> +        break;
>> +
>> +#undef PROGRESS
>> +
>> +    default:
>> +        BUG();
>> +    }
>> +
>> +    return 0;
>> +}
> While as an initial step this may be okay, I envision ordering issues
> with domain_relinquish_resources() - there may be per-arch things that
> want doing before certain common ones, and others which should only
> be done when certain common teardown was already performed. Therefore
> rather than this being a "common equivalent of
> domain_relinquish_resources()", I'd rather see (and call) it a
> (designated) replacement, where individual per-arch functions would
> get called at appropriate times.

Over time, I do expect it to replace domain_relinquish_resources().  I'm
not sure if that will take the form of a specific arch_domain_teardown()
call (and reserving some space in teardown.val for arch use), or whether
it will be a load of possibly-CONFIG'd stubs.

I also have work in progress supporting per-vcpu continuations, to be
able to remove the TODOs introduced into the paging() path.  However,
there is a bit more plumbing work required than I have time for right
now, so this will have to wait a little until I've got some of the more
urgent 4.15 work done.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 11:52:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 11:52:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57786.101264 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krgDU-0001hy-I0; Tue, 22 Dec 2020 11:52:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57786.101264; Tue, 22 Dec 2020 11:52:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krgDU-0001hr-Du; Tue, 22 Dec 2020 11:52:36 +0000
Received: by outflank-mailman (input) for mailman id 57786;
 Tue, 22 Dec 2020 11:52:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9/vU=F2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krgDT-0001hm-Dg
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 11:52:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 983a53a0-38af-4aa6-910a-18bb8a05b237;
 Tue, 22 Dec 2020 11:52:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1CC2FADB3;
 Tue, 22 Dec 2020 11:52:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 983a53a0-38af-4aa6-910a-18bb8a05b237
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608637951; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=oqsGKp3H6bLKfmPWos/HnudJ76u8DRU9rk/i88PzAYk=;
	b=cxGoyC6cvkemSj/YQkOeT1z0fhVE4leNx+111A0XPh7a8D/YXPEVjjWL8URNQqvuxVMBXV
	/g1LyEQc12+RzPcAW6ArbypeKTGn+uoWEwpWFH16ZDik8cnj2WpXP1GynsQzCWQ2qrMtJ6
	POV65W2bEN67R9eCLTAvJFZHRgpl1HQ=
Subject: Re: [PATCH 3/3] xen/evtchn: Clean up teardown handling
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20201221181446.7791-1-andrew.cooper3@citrix.com>
 <20201221181446.7791-4-andrew.cooper3@citrix.com>
 <3d72bcb6-dabf-2b26-cecd-5f2d36505bd5@suse.com>
 <683f7808-aad7-1c42-e9e9-3e251e1a4561@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5d66a8c9-e3d6-e329-7daf-6b1d0e220e13@suse.com>
Date: Tue, 22 Dec 2020 12:52:30 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <683f7808-aad7-1c42-e9e9-3e251e1a4561@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 22.12.2020 12:28, Andrew Cooper wrote:
> On 22/12/2020 10:48, Jan Beulich wrote:
>> On 21.12.2020 19:14, Andrew Cooper wrote:
>>> First of all, rename the evtchn APIs:
>>>  * evtchn_destroy       => evtchn_teardown
>>>  * evtchn_destroy_final => evtchn_destroy
>> I wonder in how far this is going to cause confusion with backports
>> down the road. May I suggest to do only the first of the two renames,
>> at least until in a couple of years' time? Or make the second rename
>> to e.g. evtchn_cleanup() or evtchn_deinit()?
> 
> I considered backports, but I don't think it will be an issue.  The
> contents of the two functions are very different, and we're not likely
> to be moving the callers in backports.

Does the same also apply to the old and new call sites of the functions?

> I'm not fussed about the exact naming, so long as we can make an
> agreement and adhere to it strictly.  The current APIs are a total mess.
> 
> I used teardown/destroy because that seems to be one common theme in the
> APIs, but it will require some functions to change their names.

So for domains "teardown" and "destroy" pair up with "create". I don't
think evtchn_create() is a sensible name (the function doesn't really
"create" anything); evtchn_init() seems quite a bit better to me, and
hence evtchn_deinit() could be its counterpart. In particular I don't
think all smaller entity functions involved in doing "xyz" for a
larger entity need to have "xyz" in their names.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 11:55:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 11:55:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57792.101281 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krgGI-0001rJ-3m; Tue, 22 Dec 2020 11:55:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57792.101281; Tue, 22 Dec 2020 11:55:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krgGI-0001rC-0a; Tue, 22 Dec 2020 11:55:30 +0000
Received: by outflank-mailman (input) for mailman id 57792;
 Tue, 22 Dec 2020 11:55:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9/vU=F2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krgGH-0001r7-20
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 11:55:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 05dcea81-18cf-4a05-97d8-1245155ad82f;
 Tue, 22 Dec 2020 11:55:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 610BDACF9;
 Tue, 22 Dec 2020 11:55:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 05dcea81-18cf-4a05-97d8-1245155ad82f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608638127; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=c4x5NnSKa3GHmGh/u3YbUsIngcrZNDOYXe5Jaxt7bmE=;
	b=qo/C/79joJ3eDMolrY3SZG47C9jo1UifXAftbr/3BqSN3UpYy3OzGvW0bekHYcUO5L1GgA
	XpuWGxCYeLON9EVM9TtzP8hiXGV5+zjOHRqlaOByuM/BXb/RkDaqXghlWy+fQU66TW/cTk
	6MVsDeuLod+YUY2kQHY0taQIt8Rx328=
Subject: Re: [PATCH 2/3] xen/domain: Introduce domain_teardown()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20201221181446.7791-1-andrew.cooper3@citrix.com>
 <20201221181446.7791-3-andrew.cooper3@citrix.com>
 <b3d1cb55-2793-f37e-13d1-b9e8de2057da@suse.com>
 <49d46056-2b99-3ec6-cf93-3f40be259330@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4dca168c-93e4-8852-9649-f3713aaa180c@suse.com>
Date: Tue, 22 Dec 2020 12:55:26 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <49d46056-2b99-3ec6-cf93-3f40be259330@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 22.12.2020 12:46, Andrew Cooper wrote:
> On 22/12/2020 10:35, Jan Beulich wrote:
>> On 21.12.2020 19:14, Andrew Cooper wrote:
>>> --- a/xen/common/domain.c
>>> +++ b/xen/common/domain.c
>>> @@ -273,6 +273,59 @@ static int __init parse_extra_guest_irqs(const char *s)
>>>  custom_param("extra_guest_irqs", parse_extra_guest_irqs);
>>>  
>>>  /*
>>> + * Release resources held by a domain.  There may or may not be live
>>> + * references to the domain, and it may or may not be fully constructed.
>>> + *
>>> + * d->is_dying differing between DOMDYING_dying and DOMDYING_dead can be used
>>> + * to determine if live references to the domain exist, and also whether
>>> + * continuations are permitted.
>>> + *
>>> + * If d->is_dying is DOMDYING_dead, this must not return non-zero.
>>> + */
>>> +static int domain_teardown(struct domain *d)
>>> +{
>>> +    BUG_ON(!d->is_dying);
>>> +
>>> +    /*
>>> +     * This hypercall can take minutes of wallclock time to complete.  This
>>> +     * logic implements a co-routine, stashing state in struct domain across
>>> +     * hypercall continuation boundaries.
>>> +     */
>>> +    switch ( d->teardown.val )
>>> +    {
>>> +        /*
>>> +         * Record the current progress.  Subsequent hypercall continuations
>>> +         * will logically restart work from this point.
>>> +         *
>>> +         * PROGRESS() markers must not be in the middle of loops.  The loop
>>> +         * variable isn't preserved across a continuation.
>>> +         *
>>> +         * To avoid redundant work, there should be a marker before each
>>> +         * function which may return -ERESTART.
>>> +         */
>>> +#define PROGRESS(x)                             \
>>> +        d->teardown.val = PROG_ ## x;           \
>>> +        /* Fallthrough */                       \
>>> +    case PROG_ ## x
>>> +
>>> +        enum {
>>> +            PROG_done = 1,
>>> +        };
>>> +
>>> +    case 0:
>>> +    PROGRESS(done):
>>> +        break;
>>> +
>>> +#undef PROGRESS
>>> +
>>> +    default:
>>> +        BUG();
>>> +    }
>>> +
>>> +    return 0;
>>> +}
>> While as an initial step this may be okay, I envision ordering issues
>> with domain_relinquish_resources() - there may be per-arch things that
>> want doing before certain common ones, and others which should only
>> be done when certain common teardown was already performed. Therefore
>> rather than this being a "common equivalent of
>> domain_relinquish_resources()", I'd rather see (and call) it a
>> (designated) replacement, where individual per-arch functions would
>> get called at appropriate times.
> 
> Over time, I do expect it to replace domain_relinquish_resources().  I'm
> not sure if that will take the form of a specific arch_domain_teardown()
> call (and reserving some space in teardown.val for arch use), or whether
> it will be a load of possibly-CONFIG'd stubs.

I'd consider it helpful if you could slightly re-word the description then.

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 12:04:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 12:04:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57797.101296 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krgOW-0002qe-9m; Tue, 22 Dec 2020 12:04:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57797.101296; Tue, 22 Dec 2020 12:04:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krgOW-0002qX-6n; Tue, 22 Dec 2020 12:04:00 +0000
Received: by outflank-mailman (input) for mailman id 57797;
 Tue, 22 Dec 2020 12:03:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krgOV-0002qP-SH; Tue, 22 Dec 2020 12:03:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krgOV-00061i-FG; Tue, 22 Dec 2020 12:03:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krgOV-0006UP-75; Tue, 22 Dec 2020 12:03:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1krgOV-00079Q-6a; Tue, 22 Dec 2020 12:03:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BoqHHtRPelHrHKGg0eHET3CPcAq2Z1xRySV/DiLnaCA=; b=RMn7bl9xXa8gCbt2srBBrZIUD7
	P9oyN5hHYwCntSPvAGJOcGVlCrWnRu/E8Qe1Vu6wR7UfYAIiEZ9vyIKy+Ri/KCEZ1QEkZvL4suxvs
	EG2ByUIgaJunuD27r5mOVx5XeYkvYbE64LXAeWL9J0HO6tnGmXMfEUyTGMvokHok8ifw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157786-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157786: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=e93c3712d67098453760fd61c338cbf62dd08da1
X-Osstest-Versions-That:
    xen=8c8938dcc1bd37dd61f705410053e08804ca2b55
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 22 Dec 2020 12:03:59 +0000

flight 157786 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157786/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  e93c3712d67098453760fd61c338cbf62dd08da1
baseline version:
 xen                  8c8938dcc1bd37dd61f705410053e08804ca2b55

Last test of basis   157779  2020-12-21 23:00:27 Z    0 days
Testing same since   157786  2020-12-22 09:01:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Maximilian Engelhardt <maxi@daemonizer.de>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8c8938dcc1..e93c3712d6  e93c3712d67098453760fd61c338cbf62dd08da1 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 13:33:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 13:33:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57852.101408 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krhmr-0002av-QG; Tue, 22 Dec 2020 13:33:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57852.101408; Tue, 22 Dec 2020 13:33:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krhmr-0002ao-MW; Tue, 22 Dec 2020 13:33:13 +0000
Received: by outflank-mailman (input) for mailman id 57852;
 Tue, 22 Dec 2020 13:33:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=if9N=F2=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1krhmp-0002aj-QS
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 13:33:11 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ccccdc02-1497-44ce-a857-172d72b43d6b;
 Tue, 22 Dec 2020 13:33:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ccccdc02-1497-44ce-a857-172d72b43d6b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608643989;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=B910IV10YJCOGXeO+mzYlyOiOazKXkt5jawqu+hoPXY=;
  b=Ok/4udg2OGzWqsY9iKop43YnbiG3OoRGP86QPmchaEK9FplpEkpjD2uz
   e3gkss/yqzomAe51MLckYLvsl8vfLVOHJ3ZM/AlY+JUvj8WEmU2WFqsaG
   mcA7wDEfjUxIq1ZrPvDFsL8BUcWFVikH+/1HNFipGPSywAGpkabQyUUNJ
   Q=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: VZ/FELI8126R1aSUnzT0L/WRrWER1hmdbf2vzwFw/Ehvo9mLYnbg2God0pcL+dmAky2noZXTj7
 PMlONQxoTlq8MihfwXsq6Hoo7zA0PjGVwU4n9BwD8TppaTqEy3zHVaMMxvS20G3w0oAUpZToSk
 e9CxBOYEojhONFR4/I/94c8VnoZGALlk6bjtdlXnEN2ytfcOojsoZy2U0nfcqhdnEOgUSyj1Au
 K9h5uwod7QOFSjslgHuT8vp99liZ9QwZ9Gw2IlY4Px3WRVxGotK5eLWSJvf62KOA5ASTwkX4KZ
 ti0=
X-SBRS: 5.2
X-MesageID: 33741732
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,438,1599537600"; 
   d="scan'208";a="33741732"
Subject: Re: [PATCH 3/3] xen/evtchn: Clean up teardown handling
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
References: <20201221181446.7791-1-andrew.cooper3@citrix.com>
 <20201221181446.7791-4-andrew.cooper3@citrix.com>
 <3d72bcb6-dabf-2b26-cecd-5f2d36505bd5@suse.com>
 <683f7808-aad7-1c42-e9e9-3e251e1a4561@citrix.com>
 <5d66a8c9-e3d6-e329-7daf-6b1d0e220e13@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <91ec88c5-fa7b-e700-2466-322dd3db7397@citrix.com>
Date: Tue, 22 Dec 2020 13:33:02 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <5d66a8c9-e3d6-e329-7daf-6b1d0e220e13@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 22/12/2020 11:52, Jan Beulich wrote:
> On 22.12.2020 12:28, Andrew Cooper wrote:
>> On 22/12/2020 10:48, Jan Beulich wrote:
>>> On 21.12.2020 19:14, Andrew Cooper wrote:
>>>> First of all, rename the evtchn APIs:
>>>>  * evtchn_destroy       => evtchn_teardown
>>>>  * evtchn_destroy_final => evtchn_destroy
>>> I wonder to what extent this is going to cause confusion with backports
>>> down the road. May I suggest doing only the first of the two renames,
>>> at least for a couple of years? Or making the second rename to
>>> e.g. evtchn_cleanup() or evtchn_deinit()?
>> I considered backports, but I don't think it will be an issue.  The
>> contents of the two functions are very different, and we're not likely
>> to be moving the callers in backports.
> Does the same also apply to the old and new call sites of the functions?

I don't understand your question.  I don't intend the new callsites to
ever move again, now that they're part of the properly idempotent path,
and any movement in the older trees would be wrong for anything other
than backporting this fix, which clearly isn't a backport candidate.

(That said - there's a memory leak I need to create a backport for...)

>> I'm not fussed about the exact naming, so long as we can make an
>> agreement and adhere to it strictly.  The current APIs are a total mess.
>>
>> I used teardown/destroy because that seems to be one common theme in the
>> APIs, but it will require some functions to change their names.
> So for domains "teardown" and "destroy" pair up with "create". I don't
> think evtchn_create() is a sensible name (the function doesn't really
> "create" anything); evtchn_init() seems quite a bit better to me, and
> hence evtchn_deinit() could be its counterpart.

You're never going to find a perfect name for all cases, and in this
proposal, you've still got evtchn_init/deinit/destroy() as a triple.

What we do need is some clear rules, which will live in the forthcoming
"lifecycle of a domain" document.

> In particular I don't
> think all the smaller-entity functions involved in doing "xyz" for a
> larger entity need to have "xyz" in their names.

While in principle I agree, I'm not sure the value of having perfect
names outweighs the value of having a simple set of guidelines.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 13:45:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 13:45:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57860.101426 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krhyk-0003aC-2Q; Tue, 22 Dec 2020 13:45:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57860.101426; Tue, 22 Dec 2020 13:45:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krhyj-0003a5-VT; Tue, 22 Dec 2020 13:45:29 +0000
Received: by outflank-mailman (input) for mailman id 57860;
 Tue, 22 Dec 2020 13:45:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9/vU=F2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krhyj-0003a0-Ct
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 13:45:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7b155724-1d3a-454f-bad1-4dfe5ef6c525;
 Tue, 22 Dec 2020 13:45:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 48CEFB1C2;
 Tue, 22 Dec 2020 13:45:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7b155724-1d3a-454f-bad1-4dfe5ef6c525
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608644727; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7piISWuDwlyJQQ+sseEC12064PAEKVH/Rn56hAAVO5E=;
	b=ELPRDMQG1gxEYOsZHplyJcirxNivha0NLVgk37hpd5izhxEmN7VhEtMwjm/dLUfR0iVm8e
	LWEgZ9KF9AbIiBtfMHnEZUQ9kOAXHP0/sRQ1cqE48F9i8N3+o0LZkIA9vIFJhypz5+KDSv
	rKMBlj/7S0ZSYfRE7QnXoywgH/EvtMQ=
Subject: Re: [PATCH 3/3] xen/evtchn: Clean up teardown handling
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20201221181446.7791-1-andrew.cooper3@citrix.com>
 <20201221181446.7791-4-andrew.cooper3@citrix.com>
 <3d72bcb6-dabf-2b26-cecd-5f2d36505bd5@suse.com>
 <683f7808-aad7-1c42-e9e9-3e251e1a4561@citrix.com>
 <5d66a8c9-e3d6-e329-7daf-6b1d0e220e13@suse.com>
 <91ec88c5-fa7b-e700-2466-322dd3db7397@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b7dd510f-58bd-213f-922f-fe24df68babe@suse.com>
Date: Tue, 22 Dec 2020 14:45:26 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <91ec88c5-fa7b-e700-2466-322dd3db7397@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 22.12.2020 14:33, Andrew Cooper wrote:
> On 22/12/2020 11:52, Jan Beulich wrote:
>> On 22.12.2020 12:28, Andrew Cooper wrote:
>>> On 22/12/2020 10:48, Jan Beulich wrote:
>>>> On 21.12.2020 19:14, Andrew Cooper wrote:
>>>>> First of all, rename the evtchn APIs:
>>>>>  * evtchn_destroy       => evtchn_teardown
>>>>>  * evtchn_destroy_final => evtchn_destroy
>>>> I wonder to what extent this is going to cause confusion with backports
>>>> down the road. May I suggest doing only the first of the two renames,
>>>> at least for a couple of years? Or making the second rename to
>>>> e.g. evtchn_cleanup() or evtchn_deinit()?
>>> I considered backports, but I don't think it will be an issue.  The
>>> contents of the two functions are very different, and we're not likely
>>> to be moving the callers in backports.
>> Does the same also apply to the old and new call sites of the functions?
> 
> I don't understand your question.  I don't intend the new callsites to
> ever move again, now that they're part of the properly idempotent path,
> and any movement in the older trees would be wrong for anything other
> than backporting this fix, which clearly isn't a backport candidate.
> 
> (That said - there's a memory leak I need to create a backport for...)

My thinking was that call sites of functions also serve as references
or anchors when you do backports. Having identically named functions
with different purposes may mislead people - both those who do
backports only very occasionally, and those of us who do this
regularly but only on halfway recent trees. I, for one, keep
forgetting to check for bool/true/false when moving to 4.7, or the
-ERESTART <=> -EAGAIN change after 4.4(?). For the former I'll be
saved by the compiler yelling at me, but for the latter one needs to
recognize the need for an adjustment. I'm afraid of the same thing
(granted at a lower probability) potentially happening here, down the
road.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 13:47:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 13:47:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57864.101438 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kri0r-0003jN-F8; Tue, 22 Dec 2020 13:47:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57864.101438; Tue, 22 Dec 2020 13:47:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kri0r-0003jG-C2; Tue, 22 Dec 2020 13:47:41 +0000
Received: by outflank-mailman (input) for mailman id 57864;
 Tue, 22 Dec 2020 13:47:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kri0q-0003j8-68; Tue, 22 Dec 2020 13:47:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kri0p-0008Gx-Va; Tue, 22 Dec 2020 13:47:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kri0p-00021D-OR; Tue, 22 Dec 2020 13:47:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kri0p-0003yO-Nv; Tue, 22 Dec 2020 13:47:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BINYNGEuUkFPAcTRJBxm50T1vKjInzrKK46/6eOdcrs=; b=UsWMdnfRj/uikk6MgFfahCwvDJ
	8KZhlGoPWdknYqSkEisN4+urZ4AXya71QO6Zi1gnL+zhlJizwidIqx2TbQ+qiwGJ+c9nDC2rpxYiD
	Uj9KdJ90oz/vUFxQ3xFXTOBdDrIM8mGdqkyEtwCaddXUEYaaSSlsKUELZCPgC7oMNBc4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157787-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157787: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=d4945b102730a54f58be6bda3369c6844565b7ee
X-Osstest-Versions-That:
    ovmf=35ed29f207fd9c3683cfee5492c5c4e96ee0a0eb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 22 Dec 2020 13:47:39 +0000

flight 157787 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157787/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 d4945b102730a54f58be6bda3369c6844565b7ee
baseline version:
 ovmf                 35ed29f207fd9c3683cfee5492c5c4e96ee0a0eb

Last test of basis   157778  2020-12-21 22:42:56 Z    0 days
Testing same since   157787  2020-12-22 09:58:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ming Tan <ming.tan@intel.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Sami Mujawar <sami.mujawar@arm.com>
  Tan, Ming <ming.tan@intel.com>
  Wenyi Xie <xiewenyi2@huawei.com>
  wenyi,xie via groups.io <xiewenyi2=huawei.com@groups.io>
  Yunhua Feng <fengyunhua@byosoft.com.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   35ed29f207..d4945b1027  d4945b102730a54f58be6bda3369c6844565b7ee -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 15:05:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 15:05:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57897.101513 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krjDv-0002RV-6q; Tue, 22 Dec 2020 15:05:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57897.101513; Tue, 22 Dec 2020 15:05:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krjDv-0002RO-3g; Tue, 22 Dec 2020 15:05:15 +0000
Received: by outflank-mailman (input) for mailman id 57897;
 Tue, 22 Dec 2020 15:05:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krjDu-0002RD-5E; Tue, 22 Dec 2020 15:05:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krjDs-0001CY-L8; Tue, 22 Dec 2020 15:05:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krjDs-0006zr-FM; Tue, 22 Dec 2020 15:05:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1krjDs-0005qB-Ep; Tue, 22 Dec 2020 15:05:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zwNdIiWU/EkO+Fo10bs9w2YfnGpWaNdFAiIxbq4oboM=; b=buuL06wbh6UjMQr5TSD2XeGZz0
	vm45xTZ5o093jMZLlpSSYLmqiQtBsZea5DvTPRK9rBZWwyJpO280N3QWScw+YjdW9Mn4nZ7JtlDEW
	OYiI/Ywr7shaJnUWz43r//WcqLANsolqak6Zc60owDCyzDXBvHECTb+rtPofbZqmiE6E=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157797-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157797: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d4f699a0df6cea907c1de5c277500b98c0729685
X-Osstest-Versions-That:
    xen=e93c3712d67098453760fd61c338cbf62dd08da1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 22 Dec 2020 15:05:12 +0000

flight 157797 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157797/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d4f699a0df6cea907c1de5c277500b98c0729685
baseline version:
 xen                  e93c3712d67098453760fd61c338cbf62dd08da1

Last test of basis   157786  2020-12-22 09:01:24 Z    0 days
Testing same since   157797  2020-12-22 13:00:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   e93c3712d6..d4f699a0df  d4f699a0df6cea907c1de5c277500b98c0729685 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 15:30:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 15:30:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57913.101540 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krjbw-0004dP-Fc; Tue, 22 Dec 2020 15:30:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57913.101540; Tue, 22 Dec 2020 15:30:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krjbw-0004dI-CD; Tue, 22 Dec 2020 15:30:04 +0000
Received: by outflank-mailman (input) for mailman id 57913;
 Tue, 22 Dec 2020 15:30:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krjbv-0004b0-Hf; Tue, 22 Dec 2020 15:30:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krjbv-0001b2-DB; Tue, 22 Dec 2020 15:30:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krjbv-0007sx-5l; Tue, 22 Dec 2020 15:30:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1krjbv-0007Ki-5I; Tue, 22 Dec 2020 15:30:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dtFwNlad+zkYbu1EUPwMU7XAj9+FUBcxXZ6U9O7dDSM=; b=poZprxZFQ+aNw5WmidiaovATIi
	DaCwRyd8XVTfo7Gbvwc+2G96a/RwL9IlG8++ocrP1ZvAuspEngHi6fVFjc3P+zJYafBQeiDURxwXD
	bOt+nyRfTBR3Q8r0arRmlCw1DuBiXeUHk1CDzwBgM2kOm36Ksz88vwKV9j43RxcZZGFE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157781-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157781: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=bed50bcbbb2796aa88f1c85a2424fe9bd7944f89
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 22 Dec 2020 15:30:03 +0000

flight 157781 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157781/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              bed50bcbbb2796aa88f1c85a2424fe9bd7944f89
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  165 days
Failing since        151818  2020-07-11 04:18:52 Z  164 days  159 attempts
Testing same since   157715  2020-12-19 04:19:22 Z    3 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu<tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 33734 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 15:43:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 15:43:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57959.101626 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krjpA-0006aW-8E; Tue, 22 Dec 2020 15:43:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57959.101626; Tue, 22 Dec 2020 15:43:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krjpA-0006aP-3c; Tue, 22 Dec 2020 15:43:44 +0000
Received: by outflank-mailman (input) for mailman id 57959;
 Tue, 22 Dec 2020 15:43:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1krjp8-0006aF-CP
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 15:43:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1krjp7-0001pd-RN; Tue, 22 Dec 2020 15:43:41 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1krjp7-0002Vd-HS; Tue, 22 Dec 2020 15:43:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=Nll9lXZt/LBrS6V2JsGc0ecRr3c0eFUXdRQAetw31cY=; b=jN8ILFaTNV7AIRUg2huK5CT/P6
	ChxZc8Be51a+GMb57kaTzZA04VURNzXy3OALTiyNbbHadx9J18InmpAUJYidbAwfm8houbg4Ub75E
	Gj27aDpFEv1pYaj/ZisyJoQKLD7NJ1J/zbnybgBaYjMK6t2kYA5NPg/cVEiiVERJbTu4=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk,
	Julien Grall <jgrall@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH for-4.15 0/4] xen/iommu: Collection of bug fixes for IOMMU teardown
Date: Tue, 22 Dec 2020 15:43:34 +0000
Message-Id: <20201222154338.9459-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

Hi all,

This series is a collection of bug fixes for the IOMMU teardown code.
All of them are candidates for 4.15, as each can either leak memory or
lead to a host crash or host memory corruption.

This is sent directly to xen-devel because all the issues were either
introduced in 4.15 or occur in the domain creation code.

Cheers,

Julien Grall (4):
  xen/iommu: Check if the IOMMU was initialized before tearing down
  xen/iommu: x86: Free the IOMMU page-tables with the pgtables.lock held
  [RFC] xen/iommu: x86: Clear the root page-table before freeing the
    page-tables
  xen/iommu: x86: Don't leak the IOMMU page-tables

 xen/arch/x86/domain.c               |  2 +-
 xen/drivers/passthrough/iommu.c     | 10 +++++-
 xen/drivers/passthrough/x86/iommu.c | 47 +++++++++++++++++++++++++++--
 xen/include/asm-x86/iommu.h         |  2 +-
 4 files changed, 56 insertions(+), 5 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 22 15:43:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 15:43:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57962.101662 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krjpD-0006es-5j; Tue, 22 Dec 2020 15:43:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57962.101662; Tue, 22 Dec 2020 15:43:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krjpD-0006ed-27; Tue, 22 Dec 2020 15:43:47 +0000
Received: by outflank-mailman (input) for mailman id 57962;
 Tue, 22 Dec 2020 15:43:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1krjpB-0006cH-8H
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 15:43:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1krjpA-0001pv-Q2; Tue, 22 Dec 2020 15:43:44 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1krjpA-0002Vd-Gp; Tue, 22 Dec 2020 15:43:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=kSPSsNxqjRsah0K2F/khe3S48PU9Vn+ea2ql+1qJVow=; b=u7zwD9TpUghLdk5/4t796dN7R
	OOEKsJTI9epORlTXPMuVoXxnmmbIjmRfUdkldFMZAy9GA+ZhknOA4hcvMVw4n4cUV2qqawv7dMhOd
	Of74k7GR4v8G7Rb07S7LfRq8ISeZ5tbsc5hBO8WlTOoK7umNxl8HSn0bLLoh193yGWF7c=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk,
	Julien Grall <jgrall@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH for-4.15 3/4] [RFC] xen/iommu: x86: Clear the root page-table before freeing the page-tables
Date: Tue, 22 Dec 2020 15:43:37 +0000
Message-Id: <20201222154338.9459-4-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201222154338.9459-1-julien@xen.org>
References: <20201222154338.9459-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

The new per-domain IOMMU page-table allocator will now free the
page-tables when the domain's resources are relinquished. However, the
root page-table (i.e. hd->arch.pg_maddr) will not be cleared.

Xen may access the IOMMU page-tables afterwards, at least in the case
of PV domains:

(XEN) Xen call trace:
(XEN)    [<ffff82d04025b4b2>] R iommu.c#addr_to_dma_page_maddr+0x12e/0x1d8
(XEN)    [<ffff82d04025b695>] F iommu.c#intel_iommu_unmap_page+0x5d/0xf8
(XEN)    [<ffff82d0402695f3>] F iommu_unmap+0x9c/0x129
(XEN)    [<ffff82d0402696a6>] F iommu_legacy_unmap+0x26/0x63
(XEN)    [<ffff82d04033c5c7>] F mm.c#cleanup_page_mappings+0x139/0x144
(XEN)    [<ffff82d04033c61d>] F put_page+0x4b/0xb3
(XEN)    [<ffff82d04033c87f>] F put_page_from_l1e+0x136/0x13b
(XEN)    [<ffff82d04033cada>] F devalidate_page+0x256/0x8dc
(XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
(XEN)    [<ffff82d04033d8d6>] F mm.c#put_page_from_l2e+0x8a/0xcf
(XEN)    [<ffff82d04033cc27>] F devalidate_page+0x3a3/0x8dc
(XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
(XEN)    [<ffff82d04033d807>] F mm.c#put_page_from_l3e+0x8a/0xcf
(XEN)    [<ffff82d04033cdf0>] F devalidate_page+0x56c/0x8dc
(XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
(XEN)    [<ffff82d04033d6c7>] F mm.c#put_page_from_l4e+0x69/0x6d
(XEN)    [<ffff82d04033cf24>] F devalidate_page+0x6a0/0x8dc
(XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [<ffff82d04033d92e>] F put_page_type_preemptible+0x13/0x15
(XEN)    [<ffff82d04032598a>] F domain.c#relinquish_memory+0x1ff/0x4e9
(XEN)    [<ffff82d0403295f2>] F domain_relinquish_resources+0x2b6/0x36a
(XEN)    [<ffff82d040205cdf>] F domain_kill+0xb8/0x141
(XEN)    [<ffff82d040236cac>] F do_domctl+0xb6f/0x18e5
(XEN)    [<ffff82d04031d098>] F pv_hypercall+0x2f0/0x55f
(XEN)    [<ffff82d04039b432>] F lstar_enter+0x112/0x120

This will result in a use-after-free and possibly a host crash or
memory corruption.

Freeing the page-tables further down in domain_relinquish_resources()
would not work, because pages may not be released until later if
another domain holds a reference on them.

Once all the PCI devices have been de-assigned, it is actually
pointless to modify the IOMMU page-tables. So we can simply clear the
root page-table address.
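The ordering described above can be sketched in plain C. This is a hypothetical, simplified model (the struct and function names are stand-ins, not the real Xen ones); in Xen the clear is performed on hd->arch.vtd.pgd_maddr under hd->arch.mapping_lock, while locking is elided here to keep the sketch single-threaded:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified stand-ins for the Xen structures. */
struct pgtable { struct pgtable *next; };

struct domain_iommu {
    unsigned long pgd_maddr;   /* root page-table address */
    struct pgtable *pgtables;  /* list of allocated page-tables */
};

/*
 * Clear the root page-table address before releasing the pages, so
 * that a late walker cannot reach tables that are being freed.
 */
static void teardown_pgtables(struct domain_iommu *hd)
{
    hd->pgd_maddr = 0;             /* step 1: unpublish the root */

    while (hd->pgtables) {         /* step 2: free the page list */
        struct pgtable *pg = hd->pgtables;
        hd->pgtables = pg->next;
        (void)pg;                  /* free_domheap_page(pg) in Xen */
    }
}
```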

Fixes: 3eef6d07d722 ("x86/iommu: convert VT-d code to use new page table allocator")
Signed-off-by: Julien Grall <jgrall@amazon.com>

---

This is an RFC because it would break the AMD IOMMU driver. One option
would be to move the call to the teardown callback earlier. Any
opinions?
---
 xen/drivers/passthrough/x86/iommu.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 779dbb5b98ba..99a23177b3d2 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -267,6 +267,16 @@ int iommu_free_pgtables(struct domain *d)
     struct page_info *pg;
     unsigned int done = 0;
 
+    /*
+     * Pages will be moved to the free list shortly. So we want to
+     * clear the root page-table to avoid any potential use-after-free.
+     *
+     * XXX: This code only works for Intel VT-d.
+     */
+    spin_lock(&hd->arch.mapping_lock);
+    hd->arch.vtd.pgd_maddr = 0;
+    spin_unlock(&hd->arch.mapping_lock);
+
     spin_lock(&hd->arch.pgtables.lock);
     while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
     {
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 22 15:43:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 15:43:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57960.101633 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krjpA-0006b7-Jy; Tue, 22 Dec 2020 15:43:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57960.101633; Tue, 22 Dec 2020 15:43:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krjpA-0006av-DQ; Tue, 22 Dec 2020 15:43:44 +0000
Received: by outflank-mailman (input) for mailman id 57960;
 Tue, 22 Dec 2020 15:43:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1krjp9-0006aK-Ag
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 15:43:43 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1krjp8-0001pi-Qq; Tue, 22 Dec 2020 15:43:42 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1krjp8-0002Vd-H9; Tue, 22 Dec 2020 15:43:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=+mBSW1/mVahokRFnfqlLHIzGDQdUfDu04JYjUwyK9PU=; b=Qp/G0A2gJEUTvBOWR3byyha/1
	bd8DT9cdnddbL/4W14DjPbG4qC1nxHQ5HPACtFDpIZ2JzFS7oSuoaS7PBf5Nou0z9L5Hb0URikLda
	KqSC3dQSiOSorCUdQE07hYlyY5WQ5f8uo2Lga71/V7zYUIFmcNJyiJpCSECv51XkytyjM=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk,
	Julien Grall <jgrall@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH for-4.15 1/4] xen/iommu: Check if the IOMMU was initialized before tearing down
Date: Tue, 22 Dec 2020 15:43:35 +0000
Message-Id: <20201222154338.9459-2-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201222154338.9459-1-julien@xen.org>
References: <20201222154338.9459-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

is_iommu_enabled() will return true even if the IOMMU has not been
initialized (e.g. the ops are not set).

In the case of an early failure in arch_domain_init(), the function
iommu_destroy_domain() will be called even if the IOMMU has not been
initialized.

This will result in dereferencing the ops, which will be NULL, and a
host crash.

Fix the issue by checking that the ops have been set before accessing
them. Note that we are assuming that arch_iommu_domain_init() will
clean up after any intermediate failure itself.

Fixes: 71e617a6b8f6 ("use is_iommu_enabled() where appropriate...")
Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/drivers/passthrough/iommu.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 2358b6eb09f4..f976d5a0b0a5 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -226,7 +226,15 @@ static void iommu_teardown(struct domain *d)
 
 void iommu_domain_destroy(struct domain *d)
 {
-    if ( !is_iommu_enabled(d) )
+    struct domain_iommu *hd = dom_iommu(d);
+
+    /*
+     * In case of failure during the domain construction, it would be
+     * possible to reach this function with the IOMMU enabled but not
+     * yet initialized. We assume that hd->platform_ops will be
+     * non-NULL as soon as we start to initialize the IOMMU.
+     */
+    if ( !is_iommu_enabled(d) || !hd->platform_ops )
         return;
 
     iommu_teardown(d);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 22 15:43:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 15:43:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57961.101650 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krjpB-0006cn-S3; Tue, 22 Dec 2020 15:43:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57961.101650; Tue, 22 Dec 2020 15:43:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krjpB-0006cg-O5; Tue, 22 Dec 2020 15:43:45 +0000
Received: by outflank-mailman (input) for mailman id 57961;
 Tue, 22 Dec 2020 15:43:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1krjpA-0006ab-72
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 15:43:44 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1krjp9-0001pp-Qe; Tue, 22 Dec 2020 15:43:43 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1krjp9-0002Vd-H1; Tue, 22 Dec 2020 15:43:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=oGKgq5COKIk8NxpKicCWyg/HHbH44EXhBG+TPo1gMSg=; b=6F1sIpK8zXz525iNpH+YZy4J3
	7CHZgeRZybwOTkVZ61q5urpWMtfSXXpDnObgRW2hK+49+shv6uDJaRniyyRhf4yEUuRbZPWTVdrkD
	ZhrkwGNjSSbPUCzDwM4+4E6fhZ46DK1HNbyal5aytfHiXgVlbyIiiuGJC0FrLuObJsjOw=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk,
	Julien Grall <jgrall@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH for-4.15 2/4] xen/iommu: x86: Free the IOMMU page-tables with the pgtables.lock held
Date: Tue, 22 Dec 2020 15:43:36 +0000
Message-Id: <20201222154338.9459-3-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201222154338.9459-1-julien@xen.org>
References: <20201222154338.9459-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

The pgtables.lock protects access to the page list pgtables.list.
However, iommu_free_pgtables() does not hold it. Presumably it was
assumed that page-tables cannot be allocated while the domain is dying.

Unfortunately, there is no guarantee that iommu_map() will not be
called while a domain is dying (it looks to be possible via
XEN_DOMCTL_memory_mapping). So it would be possible to concurrently
allocate page-table memory and free the page-tables.

Therefore, we need to hold the lock when freeing the page-tables.

There are more issues around the IOMMU page-allocator. They will be
handled in follow-up patches.
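The locking pattern the patch introduces, including dropping the lock on both the preemption path and the completion path, can be sketched outside Xen as follows. All names and types here are hypothetical stand-ins: a flag plays the spinlock, and a plain argument plays general_preempt_check():

```c
#include <assert.h>
#include <stddef.h>

struct page { struct page *next; };

struct pgtables {
    int locked;                 /* stand-in for the spinlock */
    struct page *list;
};

static void list_lock(struct pgtables *t)   { assert(!t->locked); t->locked = 1; }
static void list_unlock(struct pgtables *t) { assert(t->locked);  t->locked = 0; }

/*
 * Free the page list with the lock held. Returns -1 (-ERESTART in
 * Xen) if preempted, 0 when the list has been emptied; in both cases
 * the lock is released before returning.
 */
static int free_pgtables(struct pgtables *t, int preempt_pending)
{
    unsigned int done = 0;
    struct page *pg;

    list_lock(t);
    while ((pg = t->list) != NULL) {
        t->list = pg->next;        /* unlink under the lock */

        if (!(++done & 0xff) && preempt_pending) {
            list_unlock(t);        /* don't leak the lock on exit */
            return -1;
        }
    }
    list_unlock(t);
    return 0;
}
```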

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/drivers/passthrough/x86/iommu.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index cea1032b3d02..779dbb5b98ba 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -267,13 +267,18 @@ int iommu_free_pgtables(struct domain *d)
     struct page_info *pg;
     unsigned int done = 0;
 
+    spin_lock(&hd->arch.pgtables.lock);
     while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
     {
         free_domheap_page(pg);
 
         if ( !(++done & 0xff) && general_preempt_check() )
+        {
+            spin_unlock(&hd->arch.pgtables.lock);
             return -ERESTART;
+        }
     }
+    spin_unlock(&hd->arch.pgtables.lock);
 
     return 0;
 }
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 22 15:43:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 15:43:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57963.101674 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krjpE-0006hj-HL; Tue, 22 Dec 2020 15:43:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57963.101674; Tue, 22 Dec 2020 15:43:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krjpE-0006hY-BP; Tue, 22 Dec 2020 15:43:48 +0000
Received: by outflank-mailman (input) for mailman id 57963;
 Tue, 22 Dec 2020 15:43:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1krjpC-0006e6-E6
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 15:43:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1krjpC-0001q2-3U; Tue, 22 Dec 2020 15:43:46 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1krjpB-0002Vd-RL; Tue, 22 Dec 2020 15:43:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=mOorAOnuty5dT4XVPxHkS2yx8rloYCwOnZM9EKJjgIk=; b=ZnR5j8EBlvK2vr5hUlW5i/rcV
	vFAjgpRLLsxxT33LPriWdzoJlmk8Mxs4++xBSS6onpZt6PnkaU0lUdzvpZ4YvDtbGuXEOqFgSBL3Q
	QUPEXZc5pIUociG5agRoDz1F5tSnWr8aD3p1hhih0IZIZlPbn31UKxD5zdekhhy9Ze9p8=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk,
	Julien Grall <jgrall@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH for-4.15 4/4] xen/iommu: x86: Don't leak the IOMMU page-tables
Date: Tue, 22 Dec 2020 15:43:38 +0000
Message-Id: <20201222154338.9459-5-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201222154338.9459-1-julien@xen.org>
References: <20201222154338.9459-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

The new IOMMU page-table allocator will release the pages when the
domain resources are relinquished. However, this is not sufficient in
two cases:
    1) domain_relinquish_resources() is not called when the domain
    creation fails.
    2) There is nothing preventing page-table allocations while the
    domain is dying.

In both cases, this can be solved by freeing the page-tables again
during domain destruction. However, this may result in a high number
of page-tables to free.

In the second case, it is pointless to allow page-table allocation
when the domain is going to die. iommu_alloc_pgtable() will now return
an error when it is called while the domain is dying.
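A minimal sketch of the allocation-side check follows. The types are simplified stand-ins (malloc() plays alloc_domheap_page()), and the commit's caveat applies: the check is racy, since d->is_dying may be set on another CPU just after it is read, which is why the teardown path frees the tables again:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct domain { int is_dying; };   /* simplified stand-in */

/*
 * Refuse to allocate new IOMMU page-tables once the domain has been
 * marked dying, so that tables freed at relinquish time cannot
 * quietly be repopulated.
 */
static void *iommu_alloc_pgtable(struct domain *d)
{
    if (d->is_dying)
        return NULL;

    return malloc(4096);           /* stand-in for alloc_domheap_page() */
}
```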

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/x86/domain.c               |  2 +-
 xen/drivers/passthrough/x86/iommu.c | 32 +++++++++++++++++++++++++++--
 xen/include/asm-x86/iommu.h         |  2 +-
 3 files changed, 32 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index b9ba04633e18..1b7ee5c1a8cb 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -2290,7 +2290,7 @@ int domain_relinquish_resources(struct domain *d)
 
     PROGRESS(iommu_pagetables):
 
-        ret = iommu_free_pgtables(d);
+        ret = iommu_free_pgtables(d, false);
         if ( ret )
             return ret;
 
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 99a23177b3d2..4a083e4b8f11 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -149,6 +149,21 @@ int arch_iommu_domain_init(struct domain *d)
 
 void arch_iommu_domain_destroy(struct domain *d)
 {
+    struct domain_iommu *hd = dom_iommu(d);
+    int rc;
+
+    /*
+     * The relinquish code will not be executed if the domain creation
+     * failed. To avoid any memory leak, we want to free any IOMMU
+     * page-tables that may have been allocated.
+     */
+    rc = iommu_free_pgtables(d, false);
+
+    /* Preemption is disabled, so the call should never fail. */
+    if ( rc )
+        ASSERT_UNREACHABLE();
+
+    ASSERT(page_list_empty(&hd->arch.pgtables.list));
 }
 
 static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
@@ -261,7 +276,7 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
         return;
 }
 
-int iommu_free_pgtables(struct domain *d)
+int iommu_free_pgtables(struct domain *d, bool preempt)
 {
     struct domain_iommu *hd = dom_iommu(d);
     struct page_info *pg;
@@ -282,7 +297,7 @@ int iommu_free_pgtables(struct domain *d)
     {
         free_domheap_page(pg);
 
-        if ( !(++done & 0xff) && general_preempt_check() )
+        if ( !(++done & 0xff) && preempt && general_preempt_check() )
         {
             spin_unlock(&hd->arch.pgtables.lock);
             return -ERESTART;
@@ -305,6 +320,19 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
         memflags = MEMF_node(hd->node);
 #endif
 
+    /*
+     * The IOMMU page-tables are freed when relinquishing the domain, but
+     * nothing prevents allocations from happening afterwards. There is
+     * no valid reason to continue updating the IOMMU page-tables while
+     * the domain is dying.
+     *
+     * So prevent page-table allocation when the domain is dying. Note
+     * this doesn't fully close the race, because d->is_dying may not
+     * yet be observed.
+     */
+    if ( d->is_dying )
+        return NULL;
+
     pg = alloc_domheap_page(NULL, memflags);
     if ( !pg )
         return NULL;
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
index 970eb06ffac5..874bb5bbfbde 100644
--- a/xen/include/asm-x86/iommu.h
+++ b/xen/include/asm-x86/iommu.h
@@ -135,7 +135,7 @@ int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
         iommu_vcall(ops, sync_cache, addr, size);       \
 })
 
-int __must_check iommu_free_pgtables(struct domain *d);
+int __must_check iommu_free_pgtables(struct domain *d, bool preempt);
 struct page_info *__must_check iommu_alloc_pgtable(struct domain *d);
 
 #endif /* !__ARCH_X86_IOMMU_H__ */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Dec 22 15:48:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 15:48:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57979.101688 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krjtS-0007Cy-6g; Tue, 22 Dec 2020 15:48:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57979.101688; Tue, 22 Dec 2020 15:48:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krjtS-0007Cr-3l; Tue, 22 Dec 2020 15:48:10 +0000
Received: by outflank-mailman (input) for mailman id 57979;
 Tue, 22 Dec 2020 15:48:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HKaq=F2=tklengyel.com=tamas@srs-us1.protection.inumbo.net>)
 id 1krjtQ-0007Cm-NY
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 15:48:08 +0000
Received: from MTA-06-3.privateemail.com (unknown [198.54.127.59])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c2e5fee-7d25-4964-8da7-0ae8d91e68bb;
 Tue, 22 Dec 2020 15:48:07 +0000 (UTC)
Received: from MTA-06.privateemail.com (localhost [127.0.0.1])
 by MTA-06.privateemail.com (Postfix) with ESMTP id 85D1660078
 for <xen-devel@lists.xenproject.org>; Tue, 22 Dec 2020 10:48:06 -0500 (EST)
Received: from mail-wr1-f41.google.com (unknown [10.20.151.226])
 by MTA-06.privateemail.com (Postfix) with ESMTPA id 47EA860083
 for <xen-devel@lists.xenproject.org>; Tue, 22 Dec 2020 15:48:06 +0000 (UTC)
Received: by mail-wr1-f41.google.com with SMTP id r3so14980023wrt.2
 for <xen-devel@lists.xenproject.org>; Tue, 22 Dec 2020 07:48:06 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c2e5fee-7d25-4964-8da7-0ae8d91e68bb
X-Gm-Message-State: AOAM530im1YhObsMcqqppCGCKnxOUr6Ix+GVqoi2kUEKbxpP525/rXEM
	/H9bciMbqgY6vhLsfP9jZOX4ClWt13Twn7zSmZE=
X-Google-Smtp-Source: ABdhPJwVN0QXztzIqWecfxud2MfcmFEVd0HM5OFpRAvx25S7QIy0Dqgau5kHncjRvXYVXGMry1+8gq2Z9cNB92HWawY=
X-Received: by 2002:adf:d082:: with SMTP id y2mr25279530wrh.301.1608652084792;
 Tue, 22 Dec 2020 07:48:04 -0800 (PST)
MIME-Version: 1.0
References: <20201221181446.7791-1-andrew.cooper3@citrix.com>
 <ac552c84-144c-c213-7985-84d92cbb5601@citrix.com> <983a3fef-c80f-ec2a-bf3c-5e054fc6a7a9@suse.com>
 <760969b0-743e-fdd7-3577-72612e3a88b7@citrix.com>
In-Reply-To: <760969b0-743e-fdd7-3577-72612e3a88b7@citrix.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Tue, 22 Dec 2020 10:47:27 -0500
X-Gmail-Original-Message-ID: <CABfawh=nS2nuFEyx+7Hi5S5HUYtqXTJ6LMTLpZErs5_d22GWgQ@mail.gmail.com>
Message-ID: <CABfawh=nS2nuFEyx+7Hi5S5HUYtqXTJ6LMTLpZErs5_d22GWgQ@mail.gmail.com>
Subject: Re: Hypercall fault injection (Was [PATCH 0/3] xen/domain: More
 structured teardown)
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Juergen Gross <jgross@suse.com>, 
	Xen-devel <xen-devel@lists.xenproject.org>
Content-Type: multipart/alternative; boundary="00000000000014d2cc05b70f802a"
X-Virus-Scanned: ClamAV using ClamSMTP

--00000000000014d2cc05b70f802a
Content-Type: text/plain; charset="UTF-8"

On Tue, Dec 22, 2020 at 6:14 AM Andrew Cooper <andrew.cooper3@citrix.com>
wrote:

> On 22/12/2020 10:00, Jan Beulich wrote:
> > On 21.12.2020 20:36, Andrew Cooper wrote:
> >> Hello,
> >>
> >> We have some very complicated hypercalls (createdomain, with max_vcpus
> >> a close second), with immense complexity and very hard-to-test error
> >> handling.
> >>
> >> It is no surprise that the error handling is riddled with bugs.
> >>
> >> Random failures from core functions are one way, but I'm not sure that
> >> will be especially helpful.  In particular, we'd need a way to exclude
> >> "dom0 critical" operations so we've got a usable system to run testing
> >> on.
> >>
> >> As an alternative, how about adding a fault_ttl field into the
> >> hypercall?
> >>
> >> The exact paths taken in {domain,vcpu}_create() are sensitive to the
> >> hardware, Xen Kconfig, and other parameters passed into the
> >> hypercall(s).  The testing logic doesn't really want to care about what
> >> failed; simply that the error was handled correctly.
> >>
> >> So a test for this might look like:
> >>
> >> cfg = { ... };
> >> while ( xc_create_domain(xch, cfg) < 0 )
> >>     cfg.fault_ttl++;
> >>
> >>
> >> The pros of this approach are that for a specific build of Xen on a
> >> piece of hardware, it ought to check every failure path in
> >> domain_create(), until the ttl finally gets higher than the number of
> >> fail-able actions required to construct a domain.  Also, the test
> >> doesn't need changing as the complexity of domain_create() changes.
> >>
> >> The main con will most likely be the invasiveness of code in Xen, but
> >> I suppose any fault injection is going to be invasive to a certain
> >> extent.
> > While I like the idea in principle, the innocent looking
> >
> > cfg = { ... };
> >
> > is quite a bit of a concern here as well: Depending on the precise
> > settings, paths taken in the hypervisor may heavily vary, and hence
> > such a test will only end up being useful if it covers a wide
> > variety of settings. Even if the number of tests to execute turned
> > out to still be manageable today, it may quickly turn out not
> > sufficiently scalable as we add new settings controllable right at
> > domain creation (which I understand is the plan).
>
> Well - there are two aspects here.
>
> First, 99% of all VMs in practice are one of 3 or 4 configurations.  An
> individual configuration is O(n) time complexity to test with fault_ttl,
> depending on the size of Xen's logic, and we absolutely want to be able
> to test these deterministically and to completion.
>
> For the plethora of other configurations, I agree that it is infeasible
> to test them all.  However, a hypercall like this is easy to wire up
> into a fuzzing harness.
>
> TBH, I was thinking of something like
> https://github.com/intel/kernel-fuzzer-for-xen-project with a PVH Xen
> and XTF "dom0" poking just this hypercall.  All the other complicated
> bits of wiring AFL up appear to have been done.
>
> Perhaps when we exhaust that as a source of bugs, we move onto fuzzing
> the L0 Xen, because running on native will give it more paths to
> explore.  We'd need some way of reporting path/trace data back to AFL in
> dom0, which might require a bit of plumbing.


This is a pretty cool idea; I would be very interested in trying it out.
If running Xen nested in an HVM domain is possible (my experiments with
nested setups using Xen have only worked on ancient hardware the last time
I tried), then running the fuzzer would be entirely possible using VM
forks. You don't even need a special "dom0": you could just add the
fuzzer's CPUID harness to Xen's hypercall handler, and the only thing
needed from the nested dom0 would be to trigger the hypercall with a
normal config. The fuzzer would take it from there and replace the config
with the fuzzed version directly in the VM forks. What to report as a
"crash" to AFL would still need to be defined manually for Xen, as the
current sink points are Linux-specific (
https://github.com/intel/kernel-fuzzer-for-xen-project/blob/master/src/sink.h),
but that should be straightforward.

Also, running the fuzzer with PVH guests hasn't been tested, but since
all VM forking needs is EPT, it should work.

Tamas

--00000000000014d2cc05b70f802a--


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 16:06:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 16:06:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.57986.101706 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krkBR-000165-Qt; Tue, 22 Dec 2020 16:06:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 57986.101706; Tue, 22 Dec 2020 16:06:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krkBR-00015y-Nv; Tue, 22 Dec 2020 16:06:45 +0000
Received: by outflank-mailman (input) for mailman id 57986;
 Tue, 22 Dec 2020 16:06:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krkBQ-00015q-MM; Tue, 22 Dec 2020 16:06:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krkBQ-0002ln-E7; Tue, 22 Dec 2020 16:06:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krkBQ-0002Zv-5Y; Tue, 22 Dec 2020 16:06:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1krkBQ-0000Pc-50; Tue, 22 Dec 2020 16:06:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=il1CBejvLpAsLC0GkDZgkzrKY2ZMO1DCIK0QMQYJ0uo=; b=Eet3zmIuzW/PmaXNLItLnmop6i
	z888S11vGt8/GHZI0RKsvfABm2sv3NkJbKWs8zFJk1E+E1rGd0uVDqxDurnPvBUNKAQalMC0YLsQh
	0hvjvXnUrEhC1WocRNrAT+fl//u/hM4bCK+SinpeF7eB/0EmU4TvQvmBTRVKcr8hx9XI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157776-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157776: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:debian-install:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate/x10:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=8653b778e454a7708847aeafe689bce07aeeb94e
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 22 Dec 2020 16:06:44 +0000

flight 157776 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157776/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  12 debian-install           fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      12 debian-install           fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 19 guest-localmigrate/x10  fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                8653b778e454a7708847aeafe689bce07aeeb94e
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  143 days
Failing since        152366  2020-08-01 20:49:34 Z  142 days  246 attempts
Testing same since   157776  2020-12-21 21:40:31 Z    0 days    1 attempts

------------------------------------------------------------
4305 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 963314 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 16:49:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 16:49:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58002.101745 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krkrA-0004i7-HD; Tue, 22 Dec 2020 16:49:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58002.101745; Tue, 22 Dec 2020 16:49:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krkrA-0004i0-Dz; Tue, 22 Dec 2020 16:49:52 +0000
Received: by outflank-mailman (input) for mailman id 58002;
 Tue, 22 Dec 2020 16:49:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9/vU=F2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krkr9-0004hv-Dt
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 16:49:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 386c7b3d-15ff-4649-85be-522e3c86be0d;
 Tue, 22 Dec 2020 16:49:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2C0E7AD0B;
 Tue, 22 Dec 2020 16:49:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 386c7b3d-15ff-4649-85be-522e3c86be0d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608655789; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=lrwHDbRBOm2TtPSNfHDaSsuqEFoecmGNV0uASlmEEUM=;
	b=ZCGeNmL6bjsI3MAzZOQCLOsnCtODk2VGdeO1dxKOsSc42PQ0XMZjiK/n7plfUYXrz4pIH2
	JW59NEqN8kF1oz+QCJ98YPJLJYo4pl/BDIf6yLdHc0iLglUpI15tDyurbK12yKVN98Ktyi
	R6u+HhgfZW4B/k4bi51ZrJgksFUsOxE=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] lib/sort: adjust types
Message-ID: <37d6facf-3fb8-2171-4143-e5e0269fb637@suse.com>
Date: Tue, 22 Dec 2020 17:49:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

First and foremost do away with the use of plain int for sizes or size-
derived values. Use size_t, despite this requiring some adjustment to
the logic. Also replace u32 by uint32_t.

While not directly related also drop a leftover #ifdef from x86's
swap_ex - this was needed only back when 32-bit Xen was still a thing.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/extable.c
+++ b/xen/arch/x86/extable.c
@@ -37,8 +37,7 @@ static int init_or_livepatch cmp_ex(cons
 	return 0;
 }
 
-#ifndef swap_ex
-static void init_or_livepatch swap_ex(void *a, void *b, int size)
+static void init_or_livepatch swap_ex(void *a, void *b, size_t size)
 {
 	struct exception_table_entry *l = a, *r = b, tmp;
 	long delta = b - a;
@@ -49,7 +48,6 @@ static void init_or_livepatch swap_ex(vo
 	r->addr = tmp.addr - delta;
 	r->cont = tmp.cont - delta;
 }
-#endif
 
 void init_or_livepatch sort_exception_table(struct exception_table_entry *start,
                                  const struct exception_table_entry *stop)
--- a/xen/include/xen/sort.h
+++ b/xen/include/xen/sort.h
@@ -5,6 +5,6 @@
 
 void sort(void *base, size_t num, size_t size,
           int (*cmp)(const void *, const void *),
-          void (*swap)(void *, void *, int));
+          void (*swap)(void *, void *, size_t));
 
 #endif /* __XEN_SORT_H__ */
--- a/xen/lib/sort.c
+++ b/xen/lib/sort.c
@@ -6,14 +6,15 @@
 
 #include <xen/types.h>
 
-static void u32_swap(void *a, void *b, int size)
+static void u32_swap(void *a, void *b, size_t size)
 {
-    u32 t = *(u32 *)a;
-    *(u32 *)a = *(u32 *)b;
-    *(u32 *)b = t;
+    uint32_t t = *(uint32_t *)a;
+
+    *(uint32_t *)a = *(uint32_t *)b;
+    *(uint32_t *)b = t;
 }
 
-static void generic_swap(void *a, void *b, int size)
+static void generic_swap(void *a, void *b, size_t size)
 {
     char t;
 
@@ -43,18 +44,18 @@ static void generic_swap(void *a, void *
 
 void sort(void *base, size_t num, size_t size,
           int (*cmp)(const void *, const void *),
-          void (*swap)(void *, void *, int size))
+          void (*swap)(void *, void *, size_t size))
 {
     /* pre-scale counters for performance */
-    int i = (num / 2 - 1) * size, n = num * size, c, r;
+    size_t i = (num / 2) * size, n = num * size, c, r;
 
     if ( !swap )
         swap = (size == 4 ? u32_swap : generic_swap);
 
     /* heapify */
-    for ( ; i >= 0; i -= size )
+    while ( i > 0 )
     {
-        for ( r = i; r * 2 + size < n; r  = c )
+        for ( r = i -= size; r * 2 + size < n; r  = c )
         {
             c = r * 2 + size;
             if ( (c < n - size) && (cmp(base + c, base + c + size) < 0) )
@@ -66,8 +67,9 @@ void sort(void *base, size_t num, size_t
     }
 
     /* sort */
-    for ( i = n - size; i >= 0; i -= size )
+    for ( i = n; i > 0; )
     {
+        i -= size;
         swap(base, base + i, size);
         for ( r = 0; r * 2 + size < i; r = c )
         {


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 17:02:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 17:02:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58009.101764 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krl38-0006SC-TB; Tue, 22 Dec 2020 17:02:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58009.101764; Tue, 22 Dec 2020 17:02:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krl38-0006S5-QE; Tue, 22 Dec 2020 17:02:14 +0000
Received: by outflank-mailman (input) for mailman id 58009;
 Tue, 22 Dec 2020 17:02:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9/vU=F2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1krl37-0006S0-Jx
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 17:02:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ef300d40-3691-47bc-937f-ca4f758e367a;
 Tue, 22 Dec 2020 17:02:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9FA6CAE87;
 Tue, 22 Dec 2020 17:02:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef300d40-3691-47bc-937f-ca4f758e367a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608656531; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=luv64Ooq7BkmbyQ+YxrlBreU58+3PoG2BBvqXYb2AEA=;
	b=r7ZvHK+6m7AfOxlmNuf67mWNotqUmZHa7LCq2Rk8ZgD//YcdvPGZ7g8gn6I9DkwBWuh0FT
	e6YH/HBMgM8ZUAXW26UMMLcEitLD5c0qIn9sctfcEtd58eN9eEsh7T9sx8bO07w3qOmitQ
	ncN4Q1zYjkDwe8YoRHHei5H2FmkwcNs=
Subject: Re: [PATCH 5/6] x86/mm: the gva_to_gfn() hook is HVM-only
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Tim Deegan <tim@xen.org>
References: <be9ce75e-9119-2b5a-9e7b-437beb7ee446@suse.com>
 <cc141f1f-7af8-9d23-de1d-a22ba320ca80@suse.com>
Message-ID: <406b0b8e-3f60-4a7c-d584-ce69bcf436fd@suse.com>
Date: Tue, 22 Dec 2020 18:02:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <cc141f1f-7af8-9d23-de1d-a22ba320ca80@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 15.12.2020 17:27, Jan Beulich wrote:
> As is the adjacent ga_to_gfn() one as well as paging_gva_to_gfn().
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1772,7 +1772,6 @@ void np2m_schedule(int dir)
>          p2m_unlock(p2m);
>      }
>  }
> -#endif
>  
>  unsigned long paging_gva_to_gfn(struct vcpu *v,
>                                  unsigned long va,
> @@ -1820,6 +1819,8 @@ unsigned long paging_gva_to_gfn(struct v
>      return hostmode->gva_to_gfn(v, hostp2m, va, pfec);
>  }
>  
> +#endif /* CONFIG_HVM */
> +
>  /*
>   * If the map is non-NULL, we leave this function having acquired an extra ref
>   * on mfn_to_page(*mfn).  In all cases, *pfec contains appropriate
> --- a/xen/arch/x86/mm/shadow/multi.c
> +++ b/xen/arch/x86/mm/shadow/multi.c
> @@ -3414,6 +3414,7 @@ static bool sh_invlpg(struct vcpu *v, un
>      return true;
>  }
>  
> +#ifdef CONFIG_HVM
>  
>  static unsigned long
>  sh_gva_to_gfn(struct vcpu *v, struct p2m_domain *p2m,
> @@ -3447,6 +3448,7 @@ sh_gva_to_gfn(struct vcpu *v, struct p2m
>      return gfn_x(gfn);
>  }
>  
> +#endif /* CONFIG_HVM */
>  
>  static inline void
>  sh_update_linear_entries(struct vcpu *v)
> @@ -4571,7 +4573,9 @@ int sh_audit_l4_table(struct vcpu *v, mf
>  const struct paging_mode sh_paging_mode = {
>      .page_fault                    = sh_page_fault,
>      .invlpg                        = sh_invlpg,
> +#ifdef CONFIG_HVM
>      .gva_to_gfn                    = sh_gva_to_gfn,
> +#endif
>      .update_cr3                    = sh_update_cr3,
>      .update_paging_modes           = shadow_update_paging_modes,
>      .flush_tlb                     = shadow_flush_tlb,

I've noticed (or really the compiler told me) I forgot to also
change none.c:

--- a/xen/arch/x86/mm/shadow/none.c
+++ b/xen/arch/x86/mm/shadow/none.c
@@ -43,12 +43,14 @@ static bool _invlpg(struct vcpu *v, unsi
     return true;
 }
 
+#ifdef CONFIG_HVM
 static unsigned long _gva_to_gfn(struct vcpu *v, struct p2m_domain *p2m,
                                  unsigned long va, uint32_t *pfec)
 {
     ASSERT_UNREACHABLE();
     return gfn_x(INVALID_GFN);
 }
+#endif
 
 static void _update_cr3(struct vcpu *v, int do_locking, bool noflush)
 {
@@ -63,7 +65,9 @@ static void _update_paging_modes(struct
 static const struct paging_mode sh_paging_none = {
     .page_fault                    = _page_fault,
     .invlpg                        = _invlpg,
+#ifdef CONFIG_HVM
     .gva_to_gfn                    = _gva_to_gfn,
+#endif
     .update_cr3                    = _update_cr3,
     .update_paging_modes           = _update_paging_modes,
 };

Jan

> --- a/xen/include/asm-x86/paging.h
> +++ b/xen/include/asm-x86/paging.h
> @@ -127,6 +127,7 @@ struct paging_mode {
>                                              struct cpu_user_regs *regs);
>      bool          (*invlpg                )(struct vcpu *v,
>                                              unsigned long linear);
> +#ifdef CONFIG_HVM
>      unsigned long (*gva_to_gfn            )(struct vcpu *v,
>                                              struct p2m_domain *p2m,
>                                              unsigned long va,
> @@ -136,6 +137,7 @@ struct paging_mode {
>                                              unsigned long cr3,
>                                              paddr_t ga, uint32_t *pfec,
>                                              unsigned int *page_order);
> +#endif
>      void          (*update_cr3            )(struct vcpu *v, int do_locking,
>                                              bool noflush);
>      void          (*update_paging_modes   )(struct vcpu *v);
> @@ -286,6 +288,8 @@ unsigned long paging_gva_to_gfn(struct v
>                                  unsigned long va,
>                                  uint32_t *pfec);
>  
> +#ifdef CONFIG_HVM
> +
>  /* Translate a guest address using a particular CR3 value.  This is used
>   * to by nested HAP code, to walk the guest-supplied NPT tables as if
>   * they were pagetables.
> @@ -304,6 +308,8 @@ static inline unsigned long paging_ga_to
>          page_order);
>  }
>  
> +#endif /* CONFIG_HVM */
> +
>  /* Update all the things that are derived from the guest's CR3.
>   * Called when the guest changes CR3; the caller can then use v->arch.cr3
>   * as the value to load into the host CR3 to schedule this vcpu */
> 
> 



From xen-devel-bounces@lists.xenproject.org Tue Dec 22 17:05:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 17:05:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58014.101779 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krl6Y-0006bV-FP; Tue, 22 Dec 2020 17:05:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58014.101779; Tue, 22 Dec 2020 17:05:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krl6Y-0006bO-Bp; Tue, 22 Dec 2020 17:05:46 +0000
Received: by outflank-mailman (input) for mailman id 58014;
 Tue, 22 Dec 2020 17:05:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=if9N=F2=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1krl6W-0006bJ-Oc
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 17:05:44 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7ca384ed-d9e5-4631-ae3f-aa34a9992fb8;
 Tue, 22 Dec 2020 17:05:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ca384ed-d9e5-4631-ae3f-aa34a9992fb8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608656743;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=+CxfePlyUubqw8KRU2NkqHW4qYikX7uVMK+Go3UZuaU=;
  b=RfwyAlmkbCWuVTr2xcXIa90HjjGib/w3jL0ZDFTFd1EVtgqCW3iPscig
   zUgR3w3+kek1x4OwpCxcFssA5IxLsk9HC3luht9UMLV2YGFfsHUkr5++O
   zy4xj+E3NLYrXc31l++RFq2yIncLaCXabUhziFj2n0axhTKaPsPxAI+v1
   o=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: wbgse+P8wQywN+MxDaOaUz93uuPnqEDEtDGrTt0/F8rfVH4NX6uaNt+fH59IGgHc14saDfbQmb
 8mvlEfz+luFI4ESp860IUoqqpKbDDOjIdDKNLNpQmrzj3p6RqNVVN54y7dD3FpqCYjtUezs8Rw
 3bSSsG2CE7VpfbfH0YSIvdBjYbroT4HDscg4jWAYjMtZu/DD2M5Wb55ekdOFoXSv2BijN748W5
 Yj0tlkWp76i2trQbCPpn7SAYxmESkpvwf9l9x4EEhipwcKdJhQHWBBeLBy0OW0YQyUrHb5g7Tg
 ZvM=
X-SBRS: 5.2
X-MesageID: 33801822
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,439,1599537600"; 
   d="scan'208";a="33801822"
Subject: Re: [PATCH] lib/sort: adjust types
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <37d6facf-3fb8-2171-4143-e5e0269fb637@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <b8517b5e-a769-73dd-4b83-498f9b512f60@citrix.com>
Date: Tue, 22 Dec 2020 17:05:36 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <37d6facf-3fb8-2171-4143-e5e0269fb637@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 22/12/2020 16:49, Jan Beulich wrote:
> First and foremost do away with the use of plain int for sizes or size-
> derived values. Use size_t, despite this requiring some adjustment to
> the logic. Also replace u32 by uint32_t.
>
> While not directly related also drop a leftover #ifdef from x86's
> swap_ex - this was needed only back when 32-bit Xen was still a thing.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

It's a shame that we've got exactly two users of sort() (ought to be
named heapsort), one in arm/io.c which ought to be replaced with a
generalised version of my sorted-array logic for VMCS load/save lists,
and the other in x86's sort_exception_table() which ought to become
build-time logic.

>
> --- a/xen/arch/x86/extable.c
> +++ b/xen/arch/x86/extable.c
> @@ -37,8 +37,7 @@ static int init_or_livepatch cmp_ex(cons
>  	return 0;
>  }
>  
> -#ifndef swap_ex
> -static void init_or_livepatch swap_ex(void *a, void *b, int size)
> +static void init_or_livepatch swap_ex(void *a, void *b, size_t size)
>  {
>  	struct exception_table_entry *l = a, *r = b, tmp;
>  	long delta = b - a;
> @@ -49,7 +48,6 @@ static void init_or_livepatch swap_ex(vo
>  	r->addr = tmp.addr - delta;
>  	r->cont = tmp.cont - delta;
>  }
> -#endif
>  
>  void init_or_livepatch sort_exception_table(struct exception_table_entry *start,
>                                   const struct exception_table_entry *stop)
> --- a/xen/include/xen/sort.h
> +++ b/xen/include/xen/sort.h
> @@ -5,6 +5,6 @@
>  
>  void sort(void *base, size_t num, size_t size,
>            int (*cmp)(const void *, const void *),
> -          void (*swap)(void *, void *, int));
> +          void (*swap)(void *, void *, size_t));
>  
>  #endif /* __XEN_SORT_H__ */
> --- a/xen/lib/sort.c
> +++ b/xen/lib/sort.c
> @@ -6,14 +6,15 @@
>  
>  #include <xen/types.h>
>  
> -static void u32_swap(void *a, void *b, int size)
> +static void u32_swap(void *a, void *b, size_t size)
>  {
> -    u32 t = *(u32 *)a;
> -    *(u32 *)a = *(u32 *)b;
> -    *(u32 *)b = t;
> +    uint32_t t = *(uint32_t *)a;
> +
> +    *(uint32_t *)a = *(uint32_t *)b;
> +    *(uint32_t *)b = t;
>  }
>  
> -static void generic_swap(void *a, void *b, int size)
> +static void generic_swap(void *a, void *b, size_t size)
>  {
>      char t;
>  
> @@ -43,18 +44,18 @@ static void generic_swap(void *a, void *
>  
>  void sort(void *base, size_t num, size_t size,
>            int (*cmp)(const void *, const void *),
> -          void (*swap)(void *, void *, int size))
> +          void (*swap)(void *, void *, size_t size))
>  {
>      /* pre-scale counters for performance */
> -    int i = (num / 2 - 1) * size, n = num * size, c, r;
> +    size_t i = (num / 2) * size, n = num * size, c, r;
>  
>      if ( !swap )
>          swap = (size == 4 ? u32_swap : generic_swap);
>  
>      /* heapify */
> -    for ( ; i >= 0; i -= size )
> +    while ( i > 0 )
>      {
> -        for ( r = i; r * 2 + size < n; r  = c )
> +        for ( r = i -= size; r * 2 + size < n; r  = c )

Aren't some compilers going to complain about the lack of parentheses
around this setup of r?

Also as you're editing the line, the "r  = c" part should lose one space.

~Andrew

>          {
>              c = r * 2 + size;
>              if ( (c < n - size) && (cmp(base + c, base + c + size) < 0) )
> @@ -66,8 +67,9 @@ void sort(void *base, size_t num, size_t
>      }
>  
>      /* sort */
> -    for ( i = n - size; i >= 0; i -= size )
> +    for ( i = n; i > 0; )
>      {
> +        i -= size;
>          swap(base, base + i, size);
>          for ( r = 0; r * 2 + size < i; r = c )
>          {
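
[The loop restructuring under review can be exercised standalone. Below is a
hypothetical toy version of the patched heapsort, not the Xen code itself:
with size_t counters, a descending loop can no longer test "i >= 0" (always
true for an unsigned type), hence the "decrement at the top of the body"
shape in both loops.]

```c
#include <assert.h>
#include <stddef.h>

/* qsort()-style comparison callback for ints. */
static int cmp_int(const void *a, const void *b)
{
    int l = *(const int *)a, r = *(const int *)b;

    return (l > r) - (l < r);
}

/* Byte-wise swap of two 'size'-byte elements. */
static void generic_swap(void *a, void *b, size_t size)
{
    char *x = a, *y = b;

    while ( size-- )
    {
        char t = *x;

        *x++ = *y;
        *y++ = t;
    }
}

/*
 * Heapsort with size_t counters.  Both descending loops test "i > 0" and
 * decrement inside the body, because "i >= 0" never terminates for an
 * unsigned index.
 */
static void sort(void *base, size_t num, size_t size,
                 int (*cmp)(const void *, const void *))
{
    char *b = base;
    size_t i = (num / 2) * size, n = num * size, c, r;

    /* heapify: sift down every internal node, last one first */
    while ( i > 0 )
    {
        for ( r = i -= size; r * 2 + size < n; r = c )
        {
            c = r * 2 + size;
            if ( (c < n - size) && (cmp(b + c, b + c + size) < 0) )
                c += size;
            if ( cmp(b + r, b + c) >= 0 )
                break;
            generic_swap(b + r, b + c, size);
        }
    }

    /* sort: repeatedly move the heap maximum to the end */
    for ( i = n; i > 0; )
    {
        i -= size;
        generic_swap(b, b + i, size);
        for ( r = 0; r * 2 + size < i; r = c )
        {
            c = r * 2 + size;
            if ( (c < i - size) && (cmp(b + c, b + c + size) < 0) )
                c += size;
            if ( cmp(b + r, b + c) >= 0 )
                break;
            generic_swap(b + r, b + c, size);
        }
    }
}
```

[Note the self-swap when i reaches 0 in the sort phase is harmless: a
byte-wise swap of an element with itself leaves it unchanged.]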



From xen-devel-bounces@lists.xenproject.org Tue Dec 22 17:12:45 2020
Subject: Re: [PATCH for-4.15 4/4] xen/iommu: x86: Don't leak the IOMMU
 page-tables
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Paul Durrant <paul@xen.org>
References: <20201222154338.9459-1-julien@xen.org>
 <20201222154338.9459-5-julien@xen.org>
Message-ID: <2c0241eb-9b95-6f30-cd9e-b38b21df0e6b@xen.org>
Date: Tue, 22 Dec 2020 17:12:27 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <20201222154338.9459-5-julien@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 22/12/2020 15:43, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The new IOMMU page-tables allocator will release the pages when
> relinquishing the domain resources. However, this is not sufficient in
> two cases:
>      1) domain_relinquish_resources() is not called when domain
>      creation fails.
>      2) There is nothing preventing page-table allocations while the
>      domain is dying.
> 
> In both cases, this can be solved by freeing the page-tables again
> during domain destruction. However, this may result in a high
> number of page-tables to free.
> 
> In the second case, it is pointless to allow page-table allocation when
> the domain is going to die. iommu_alloc_pgtable() will now return an
> error when it is called while the domain is dying.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

I forgot to mention this is fixing 15bc9a1ef51c "x86/iommu: add common 
page-table allocator". I will add a Fixes tag for the next version.

Cheers,

-- 
Julien Grall
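
[The dying-domain check described in the patch can be modelled in isolation.
This is a hypothetical sketch, not Xen's actual iommu_alloc_pgtable(); the
struct domain here is a made-up stand-in with only the field the check needs:]

```c
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical stand-in for a domain; Xen's real struct domain differs. */
struct domain {
    bool is_dying;
};

/*
 * Sketch of the described behaviour: once the domain has started dying,
 * refuse fresh page-table allocations so teardown cannot race with new
 * allocations.  Returns 0 on success, -ESRCH if the domain is dying,
 * -ENOMEM on allocation failure.
 */
static int alloc_pgtable(struct domain *d, void **page)
{
    if ( d->is_dying )
        return -ESRCH;

    *page = calloc(1, 4096);   /* placeholder for a real page allocation */

    return *page ? 0 : -ENOMEM;
}
```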


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 17:17:47 2020
Subject: Re: Hypercall fault injection (Was [PATCH 0/3] xen/domain: More
 structured teardown)
To: Tamas K Lengyel <tamas@tklengyel.com>
CC: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Juergen Gross <jgross@suse.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
References: <20201221181446.7791-1-andrew.cooper3@citrix.com>
 <ac552c84-144c-c213-7985-84d92cbb5601@citrix.com>
 <983a3fef-c80f-ec2a-bf3c-5e054fc6a7a9@suse.com>
 <760969b0-743e-fdd7-3577-72612e3a88b7@citrix.com>
 <CABfawh=nS2nuFEyx+7Hi5S5HUYtqXTJ6LMTLpZErs5_d22GWgQ@mail.gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <5f6a3bbd-c688-c628-9b1e-a838d3c31d8e@citrix.com>
Date: Tue, 22 Dec 2020 17:17:31 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <CABfawh=nS2nuFEyx+7Hi5S5HUYtqXTJ6LMTLpZErs5_d22GWgQ@mail.gmail.com>
Content-Type: multipart/alternative;
	boundary="------------46C7FD4784E30F01E79F7FD1"
Content-Language: en-GB

--------------46C7FD4784E30F01E79F7FD1
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

On 22/12/2020 15:47, Tamas K Lengyel wrote:
>
>
> On Tue, Dec 22, 2020 at 6:14 AM Andrew Cooper
> <andrew.cooper3@citrix.com <mailto:andrew.cooper3@citrix.com>> wrote:
>
>     On 22/12/2020 10:00, Jan Beulich wrote:
>     > On 21.12.2020 20:36, Andrew Cooper wrote:
>     >> Hello,
>     >>
>     >> We have some very complicated hypercalls, createdomain, and
>     max_vcpus a
>     >> close second, with immense complexity, and very hard-to-test
>     error handling.
>     >>
>     >> It is no surprise that the error handling is riddled with bugs.
>     >>
>     >> Random failures from core functions is one way, but I'm not
>     sure that
>     >> will be especially helpful.  In particular, we'd need a way to
>     exclude
>     >> "dom0 critical" operations so we've got a usable system to run
>     testing on.
>     >>
>     >> As an alternative, how about adding a fault_ttl field into the
>     hypercall?
>     >>
>     >> The exact paths taken in {domain,vcpu}_create() are sensitive
>     to the
>     >> hardware, Xen Kconfig, and other parameters passed into the
>     >> hypercall(s).  The testing logic doesn't really want to care
>     about what
>     >> failed; simply that the error was handled correctly.
>     >>
>     >> So a test for this might look like:
>     >>
>     >> cfg = { ... };
>     >> while ( xc_create_domain(xch, cfg) < 0 )
>     >>     cfg.fault_ttl++;
>     >>
>     >>
>     >> The pro's of this approach is that for a specific build of Xen on a
>     >> piece of hardware, it ought to check every failure path in
>     >> domain_create(), until the ttl finally gets higher than the
>     number of
>     >> fail-able actions required to construct a domain.  Also, the test
>     >> doesn't need changing as the complexity of domain_create() changes.
>     >>
>     >> The main con will mostly likely be the invasiveness of code in
>     Xen, but
>     >> I suppose any fault injection is going to be invasive to a
>     certain extent.
>     > While I like the idea in principle, the innocent looking
>     >
>     > cfg = { ... };
>     >
>     > is quite a bit of a concern here as well: Depending on the precise
>     > settings, paths taken in the hypervisor may heavily vary, and hence
>     > such a test will only end up being useful if it covers a wide
>     > variety of settings. Even if the number of tests to execute turned
>     > out to still be manageable today, it may quickly turn out not
>     > sufficiently scalable as we add new settings controllable right at
>     > domain creation (which I understand is the plan).
>
>     Well - there are two aspects here.
>
>     First, 99% of all VMs in practice are one of 3 or 4
>     configurations.  An
>     individual configuration is O(n) time complexity to test with
>     fault_ttl,
>     depending on the size of Xen's logic, and we absolutely want to be
>     able
>     to test these deterministically and to completion.
>
>     For the plethora of other configurations, I agree that it is
>     infeasible
>     to test them all.  However, a hypercall like this is easy to wire up
>     into a fuzzing harness.
>
>     TBH, I was thinking of something like
>     https://github.com/intel/kernel-fuzzer-for-xen-project with a PVH Xen
>     and XTF "dom0" poking just this hypercall.  All the other complicated
>     bits of wiring AFL up appear to have been done.
>
>     Perhaps when we exhaust that as a source of bugs, we move onto fuzzing
>     the L0 Xen, because running on native will give it more paths to
>     explore.  We'd need some way of reporting path/trace data back to
>     AFL in
>     dom0 which might require a bit plumbing.
>
>
> This is a pretty cool idea, I would be very interested in trying this
> out. If running Xen nested in an HVM domain is possible (my experiments
> with nested setups using Xen have only worked on ancient hw last time
> I tried) then running the fuzzer would be entirely possible using VM
> forks. You don't even need a special "dom0", you could just add the
> fuzzer's CPUID harness to Xen's hypercall handler and the only thing
> needed from the nested dom0 would be to trigger the hypercall with a
> normal config. The fuzzer would take it from there and replace the
> config with the fuzzed version directly in VM forks. What to report as
> a "crash" to AFL would still need to be defined manually for
> Xen as the current sink points are Linux specific
> (https://github.com/intel/kernel-fuzzer-for-xen-project/blob/master/src/sink.h),
> but that should be straightforward.
>
> Also, running the fuzzer with PVH guests hasn't been tested but since
> all VM forking needs is EPT it should work.

Xen running inside Xen definitely works, and is even supported as far as
PV-Shim goes (i.e. no nested virt).  That would limit testing to just
creation of PV guests at L1, which is plenty to get started with.

Xen nested under Xen does work to a first approximation, and for the
purposes of fuzzing areas other than the nested-virt logic, might even
be ok.  (I use this configuration for a fair chunk of hvm development).

~Andrew
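
[The fault_ttl proposal above can be sketched as a toy, with made-up names
(should_inject_fault, toy_domain_create are illustrative, not Xen code):
each fail-able step consults a countdown seeded from the hypercall, and the
test loop keeps raising the TTL until creation finally succeeds, thereby
walking every injectable failure path exactly once.]

```c
#include <errno.h>
#include <stdint.h>

/* Hypothetical per-call fault-injection state. */
struct fault_state {
    uint32_t ttl;   /* 0 disables injection entirely */
    int armed;      /* still counting down? */
};

/* Called at every fail-able step; forces exactly one failure when the
 * caller-supplied TTL reaches zero. */
static int should_inject_fault(struct fault_state *fs)
{
    if ( !fs->armed || !fs->ttl )
        return 0;

    if ( --fs->ttl == 0 )
    {
        fs->armed = 0;
        return 1;   /* inject a failure at this step */
    }

    return 0;
}

/* A toy "domain_create" with three fail-able steps. */
static int toy_domain_create(struct fault_state *fs)
{
    for ( int step = 0; step < 3; step++ )
        if ( should_inject_fault(fs) )
            return -ENOMEM;   /* real code would unwind here */

    return 0;
}
```

[The caller-side loop then mirrors the "while ( xc_create_domain(xch, cfg) < 0 )
cfg.fault_ttl++;" test sketched earlier in the thread.]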


--------------46C7FD4784E30F01E79F7FD1--


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 18:25:38 2020
MIME-Version: 1.0
References: <20201221181446.7791-1-andrew.cooper3@citrix.com>
 <ac552c84-144c-c213-7985-84d92cbb5601@citrix.com> <983a3fef-c80f-ec2a-bf3c-5e054fc6a7a9@suse.com>
 <760969b0-743e-fdd7-3577-72612e3a88b7@citrix.com> <CABfawh=nS2nuFEyx+7Hi5S5HUYtqXTJ6LMTLpZErs5_d22GWgQ@mail.gmail.com>
 <5f6a3bbd-c688-c628-9b1e-a838d3c31d8e@citrix.com>
In-Reply-To: <5f6a3bbd-c688-c628-9b1e-a838d3c31d8e@citrix.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Tue, 22 Dec 2020 13:24:41 -0500
X-Gmail-Original-Message-ID: <CABfawhk8xSSsvjR41X7pzAD7Nr4DJKiXLojUxcru25Jir_5vMA@mail.gmail.com>
Message-ID: <CABfawhk8xSSsvjR41X7pzAD7Nr4DJKiXLojUxcru25Jir_5vMA@mail.gmail.com>
Subject: Re: Hypercall fault injection (Was [PATCH 0/3] xen/domain: More
 structured teardown)
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Juergen Gross <jgross@suse.com>, 
	Xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, Dec 22, 2020 at 12:18 PM Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
>
> On 22/12/2020 15:47, Tamas K Lengyel wrote:
>
>
>
> On Tue, Dec 22, 2020 at 6:14 AM Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>
>> On 22/12/2020 10:00, Jan Beulich wrote:
>> > On 21.12.2020 20:36, Andrew Cooper wrote:
>> >> Hello,
>> >>
>> >> We have some very complicated hypercalls, createdomain, and max_vcpus a
>> >> close second, with immense complexity, and very hard-to-test error handling.
>> >>
>> >> It is no surprise that the error handling is riddled with bugs.
>> >>
>> >> Random failures from core functions is one way, but I'm not sure that
>> >> will be especially helpful.  In particular, we'd need a way to exclude
>> >> "dom0 critical" operations so we've got a usable system to run testing on.
>> >>
>> >> As an alternative, how about adding a fault_ttl field into the hypercall?
>> >>
>> >> The exact paths taken in {domain,vcpu}_create() are sensitive to the
>> >> hardware, Xen Kconfig, and other parameters passed into the
>> >> hypercall(s).  The testing logic doesn't really want to care about what
>> >> failed; simply that the error was handled correctly.
>> >>
>> >> So a test for this might look like:
>> >>
>> >> cfg = { ... };
>> >> while ( xc_create_domain(xch, cfg) < 0 )
>> >>     cfg.fault_ttl++;
>> >>
>> >>
>> >> The pro's of this approach is that for a specific build of Xen on a
>> >> piece of hardware, it ought to check every failure path in
>> >> domain_create(), until the ttl finally gets higher than the number of
>> >> fail-able actions required to construct a domain.  Also, the test
>> >> doesn't need changing as the complexity of domain_create() changes.
>> >>
>> >> The main con will mostly likely be the invasiveness of code in Xen, but
>> >> I suppose any fault injection is going to be invasive to a certain extent.
>> > While I like the idea in principle, the innocent looking
>> >
>> > cfg = { ... };
>> >
>> > is quite a bit of a concern here as well: Depending on the precise
>> > settings, paths taken in the hypervisor may heavily vary, and hence
>> > such a test will only end up being useful if it covers a wide
>> > variety of settings. Even if the number of tests to execute turned
>> > out to still be manageable today, it may quickly turn out not
>> > sufficiently scalable as we add new settings controllable right at
>> > domain creation (which I understand is the plan).
>>
>> Well - there are two aspects here.
>>
>> First, 99% of all VMs in practice are one of 3 or 4 configurations.  An
>> individual configuration is O(n) time complexity to test with fault_ttl,
>> depending on the size of Xen's logic, and we absolutely want to be able
>> to test these deterministically and to completion.
>>
>> For the plethora of other configurations, I agree that it is infeasible
>> to test them all.  However, a hypercall like this is easy to wire up
>> into a fuzzing harness.
>>
>> TBH, I was thinking of something like
>> https://github.com/intel/kernel-fuzzer-for-xen-project with a PVH Xen
>> and XTF "dom0" poking just this hypercall.  All the other complicated
>> bits of wiring AFL up appear to have been done.
>>
>> Perhaps when we exhaust that as a source of bugs, we move onto fuzzing
>> the L0 Xen, because running on native will give it more paths to
>> explore.  We'd need some way of reporting path/trace data back to AFL in
>> dom0 which might require a bit plumbing.
>
>
> This is a pretty cool idea, I would be very interested in trying this
> out. If running Xen nested in a HVM domain is possible (my experiments
> with nested setups using Xen have only worked on ancient hw last time
> I tried) then running the fuzzer would be entirely possible using VM
> forks. You don't even need a special "dom0", you could just add the
> fuzzer's CPUID harness to Xen's hypercall handler and the only thing
> needed from the nested dom0 would be to trigger the hypercall with a
> normal config. The fuzzer would take it from there and replace the
> config with the fuzzed version directly in VM forks. Defining what to
> report as a "crash" to AFL would still need to be defined manually for
> Xen as the current sink points are Linux specific
> (https://github.com/intel/kernel-fuzzer-for-xen-project/blob/master/src/sink.h),
> but that should be straight forward.
>
> Also, running the fuzzer with PVH guests hasn't been tested but since
> all VM forking needs is EPT it should work.
>
>
> Xen running inside Xen definitely works, and is even supported as far
> as PV-Shim goes (i.e. no nested virt).  That would limit testing to
> just creation of PV guests at L1, which is plenty to get started with.
>
> Xen nested under Xen does work to a first approximation, and for the
> purposes of fuzzing areas other than the nested-virt logic, might even
> be ok.  (I use this configuration for a fair chunk of hvm development).
>

Sounds like there is no blocker on fuzzing any hypercall handler then.
Just add the harness, define which code-path counts as a sink, and off
you go.  It should work with PT-based coverage no problem.  Fuzzing
multiple hypercalls may require some tweaking, as the PT-based coverage
doesn't support a change in CR3 right now (just a limitation of libxdc,
which does the PT decoding).  You could always fall back to the
disassemble/breakpoint/singlestep coverage option, but you would need
to add the vmcall/vmenter instructions to the control-flow instruction
list here:
https://github.com/intel/kernel-fuzzer-for-xen-project/blob/master/src/tracer.c#L46

Tamas


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 18:29:19 2020
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-157813-mainreport@xen.org>
Subject: [xen-unstable-smoke test] 157813: tolerable all pass - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 22 Dec 2020 18:29:16 +0000

flight 157813 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157813/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
baseline version:
 xen                  d4f699a0df6cea907c1de5c277500b98c0729685

Last test of basis   157797  2020-12-22 13:00:25 Z    0 days
Testing same since   157813  2020-12-22 16:00:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Maximilian Engelhardt <maxi@daemonizer.de>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   d4f699a0df..98d4d6d8a6  98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 19:12:34 2020
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-157804-mainreport@xen.org>
Subject: [ovmf test] 157804: all pass - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 22 Dec 2020 19:12:24 +0000

flight 157804 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157804/

Perfect :-)
All tests in this flight passed as required

version targeted for testing:
 ovmf                 88e47d1959bfaf9417cfd4865ef3c6a926c1978b
baseline version:
 ovmf                 d4945b102730a54f58be6bda3369c6844565b7ee

Last test of basis   157787  2020-12-22 09:58:43 Z    0 days
Testing same since   157804  2020-12-22 14:11:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Wenyi Xie <xiewenyi2@huawei.com>
  wenyi,xie via groups.io <xiewenyi2=huawei.com@groups.io>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   d4945b1027..88e47d1959  88e47d1959bfaf9417cfd4865ef3c6a926c1978b -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 21:00:00 2020
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-157790-mainreport@xen.org>
Subject: [xen-unstable test] 157790: tolerable FAIL - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 22 Dec 2020 20:59:40 +0000

flight 157790 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157790/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157754
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157754
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157754
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157754
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157754
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157754
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157754
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157754
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157754
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157754
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157754
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  8c8938dcc1bd37dd61f705410053e08804ca2b55
baseline version:
 xen                  357db96a66e47e609c3b14768f1062e13eedbd93

Last test of basis   157754  2020-12-21 07:23:13 Z    1 days
Failing since        157775  2020-12-21 21:37:34 Z    0 days    2 attempts
Testing same since   157790  2020-12-22 11:31:00 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   357db96a66..8c8938dcc1  8c8938dcc1bd37dd61f705410053e08804ca2b55 -> master


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 21:34:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 21:34:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58099.101990 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krpIm-0005SP-0W; Tue, 22 Dec 2020 21:34:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58099.101990; Tue, 22 Dec 2020 21:34:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krpIl-0005SI-TU; Tue, 22 Dec 2020 21:34:39 +0000
Received: by outflank-mailman (input) for mailman id 58099;
 Tue, 22 Dec 2020 21:34:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krpIl-0005SA-88; Tue, 22 Dec 2020 21:34:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krpIk-00005F-TE; Tue, 22 Dec 2020 21:34:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krpIk-0001zR-JY; Tue, 22 Dec 2020 21:34:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1krpIk-0006BK-J5; Tue, 22 Dec 2020 21:34:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=epZ0CeYeWGbbwnsvbuSS6tOIOCBZxYl45SjrSFOUmGU=; b=JtgpUXvzHF/bQQhuiNBeeAFtai
	9uYJwr2w9F6w4ALkv19NbNuw2X5JA0lRPkmMRqCYIhE9mjKjgzZJQHy2oOUpJ+U0APgadFuhaCDMI
	ikVxA3nKF2SSUccCo+Axw8cVb7t7oFBRwyWy5BL2Jn+NKtsJfoJTwMBExVpSTIvczDjg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157784-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157784: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a05f8ecd88f15273d033b6f044b850a8af84a5b8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 22 Dec 2020 21:34:38 +0000

flight 157784 qemu-mainline real [real]
flight 157832 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157784/
http://logs.test-lab.xenproject.org/osstest/logs/157832/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  124 days
Failing since        152659  2020-08-21 14:07:39 Z  123 days  253 attempts
Testing same since   157670  2020-12-18 13:57:58 Z    4 days    6 attempts

------------------------------------------------------------
323 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 79306 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 22 23:05:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 22 Dec 2020 23:05:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58112.102010 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krqiD-0004iD-0m; Tue, 22 Dec 2020 23:05:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58112.102010; Tue, 22 Dec 2020 23:05:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krqiC-0004i6-Tx; Tue, 22 Dec 2020 23:05:00 +0000
Received: by outflank-mailman (input) for mailman id 58112;
 Tue, 22 Dec 2020 23:04:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=odvS=F2=protonmail.com=dylangerdaly@srs-us1.protection.inumbo.net>)
 id 1krqiA-0004i0-VJ
 for xen-devel@lists.xenproject.org; Tue, 22 Dec 2020 23:04:59 +0000
Received: from mail1.protonmail.ch (unknown [185.70.40.18])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 13360254-26e2-45e0-846a-149f5e8d051f;
 Tue, 22 Dec 2020 23:04:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 13360254-26e2-45e0-846a-149f5e8d051f
Date: Tue, 22 Dec 2020 23:04:44 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=protonmail.com;
	s=protonmail; t=1608678293;
	bh=ACcda9xu//P+Y6Ww1pup/3a4oMjn+VRQw6mpes2mt0Y=;
	h=Date:To:From:Cc:Reply-To:Subject:In-Reply-To:References:From;
	b=GqZGSKdXVFE4M8z2OSeBzRF6/zIC/UEf+52qjICus9rdVwE9GVXziWGowuAStGPUA
	 ESVmOLEtJAIOYM8stFMejuOri89hkemvaTHdXvsFt0j6Jsad0QTUXmikXvSOICZgQl
	 HAPHbS2sCHmuNWNpeE2g6vRWsYPd/YBhIs2M6wFQ=
To: Jan Beulich <jbeulich@suse.com>
From: Dylanger Daly <dylangerdaly@protonmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Reply-To: Dylanger Daly <dylangerdaly@protonmail.com>
Subject: Re: Ryzen 4000 (Mobile) Softlocks/Micro-stutters
Message-ID: <T95F2Mi9RUUZ4w2wdeRqqM4uRyKgOFQNyooqEoTTDByK-0t9hZ1izG68lf90iExeYabEPSEv8puUeg0SEJtOmz8vYbVox2za28DXLd_h-_s=@protonmail.com>
In-Reply-To: <768d9dbb-4387-099f-b489-7952d7e883b0@suse.com>
References: <9lQU_gCfRzGyyNb2j86pxTMi1IET1Iq7iK3994agUZPrTI5Xd-aCJAaRYuJlD3L5LT2WaV4N3-YF4xKl5ukialT0M_YD0ve6gmDFFfatpXw=@protonmail.com> <2cc5da3e-0ad0-4647-f1ca-190788c2910b@citrix.com> <3pKjdPYCiRimYjqHQP0xd_vqhoTOJqthTXOrY_rLeNvnQEpIF24gXDKgRhmr95JfARJzbVJVbfTrrJeiovGVHGbV0QBSZ2jez2Y_wt6db7g=@protonmail.com> <768d9dbb-4387-099f-b489-7952d7e883b0@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Spam-Status: No, score=-1.2 required=10.0 tests=ALL_TRUSTED,DKIM_SIGNED,
	DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_FROM shortcircuit=no
	autolearn=disabled version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on
	mailout.protonmail.ch

Hey All,

I think I've narrowed down what could be the issue.

I think disabling SMT on any AMD Zen 2 CPU is breaking Xen's Credit2 scheduler. I can only test on AMD Ryzen 4000 based Mobile CPUs, but I think this is what is causing the softlocks and the need to pin dom0 to 1 vcpu.

I'm currently trying to re-enable SMT on Qubes 4.1 (Xen 4.14) and I'll report my findings here.
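[Archive editor's note: the dom0 pinning and SMT settings mentioned above are normally applied on the Xen boot command line. A minimal sketch follows, assuming a GRUB-based dom0 such as Qubes; option names are from Xen's documented command-line reference, but the exact file paths vary by distribution.]

```shell
# Sketch: /etc/default/grub fragment for the workaround discussed above.
# dom0_max_vcpus=1 dom0_vcpus_pin : give dom0 one vCPU and pin it to a pCPU
# smt=1                           : enable SMT (Xen 4.13+); smt=0 disables it
GRUB_CMDLINE_XEN_DEFAULT="dom0_max_vcpus=1 dom0_vcpus_pin smt=1"

# Regenerate the GRUB config and reboot (output path varies by distro):
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```

Pinning can also be adjusted at runtime with `xl vcpu-pin`, but the boot-time options above are what the "pin dom0 1 vcpu" workaround usually refers to.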


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 00:27:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 00:27:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58131.102040 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krs04-0003ya-MJ; Wed, 23 Dec 2020 00:27:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58131.102040; Wed, 23 Dec 2020 00:27:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krs04-0003yT-J6; Wed, 23 Dec 2020 00:27:32 +0000
Received: by outflank-mailman (input) for mailman id 58131;
 Wed, 23 Dec 2020 00:27:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krs03-0003yL-C2; Wed, 23 Dec 2020 00:27:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krs03-0003Yd-1Z; Wed, 23 Dec 2020 00:27:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1krs02-0000X8-Pe; Wed, 23 Dec 2020 00:27:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1krs02-00038D-Om; Wed, 23 Dec 2020 00:27:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=suWUmQPNi7Mk/+eDbxX2rkgtFR7820gxnwTmFmUIcCw=; b=lbyLoyP+Fv1s4Q0KplLZCtZ/oe
	HKqx59pk+J929A1RMqDWlGz91yebaNR1LO1jZx9RjF3viECfVbybdfp9y+g9csBSgpVYUwdLLqAPO
	i/pNCzaJzKal/5fD4SNIAuKfsSFPRf1gHxvIRIJAFujYFGAfMv/yEqyDyjymIRjIT/Rc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157814-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157814: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:debian-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-install:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=8653b778e454a7708847aeafe689bce07aeeb94e
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 23 Dec 2020 00:27:30 +0000

flight 157814 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157814/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  12 debian-install fail in 157776 REGR. vs. 152332
 test-arm64-arm64-xl-xsm      12 debian-install fail in 157776 REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu  fail in 157776 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-amd64-pvgrub 19 guest-localmigrate/x10 fail in 157776 pass in 157814
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen        fail pass in 157776
 test-arm64-arm64-examine      8 reboot                     fail pass in 157776
 test-arm64-arm64-xl-credit1   8 xen-boot                   fail pass in 157776
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 157776

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11) fail in 157776 blocked in 152332
 test-arm64-arm64-xl-credit1 11 leak-check/basis(11) fail in 157776 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                8653b778e454a7708847aeafe689bce07aeeb94e
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  144 days
Failing since        152366  2020-08-01 20:49:34 Z  143 days  247 attempts
Testing same since   157776  2020-12-21 21:40:31 Z    1 days    2 attempts

------------------------------------------------------------
4305 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 963314 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 05:19:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 05:19:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58160.102092 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krwYO-0002PG-86; Wed, 23 Dec 2020 05:19:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58160.102092; Wed, 23 Dec 2020 05:19:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krwYO-0002P5-15; Wed, 23 Dec 2020 05:19:16 +0000
Received: by outflank-mailman (input) for mailman id 58160;
 Wed, 23 Dec 2020 05:19:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Xcbs=F3=redhat.com=jpoimboe@srs-us1.protection.inumbo.net>)
 id 1krwYN-0002OV-0I
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 05:19:15 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id b18f1904-582d-4fad-a97d-4e30598e20e5;
 Wed, 23 Dec 2020 05:19:12 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-546-lUw8cNt-Px2VJC9L_UNyAQ-1; Wed, 23 Dec 2020 00:19:08 -0500
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id AEDE3879500;
 Wed, 23 Dec 2020 05:19:06 +0000 (UTC)
Received: from treble.redhat.com (ovpn-117-91.rdu2.redhat.com [10.10.117.91])
 by smtp.corp.redhat.com (Postfix) with ESMTP id C619719D9C;
 Wed, 23 Dec 2020 05:19:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b18f1904-582d-4fad-a97d-4e30598e20e5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1608700752;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=vW2cW4TjsKdveYtKoF4OGqFRdopX+oDho5T11toLq90=;
	b=R+u/HNNFiYVHnQ6q7UENI455AlwHevhyYeatzR03hQDCHcH0nxOLBaiuiRffOUL3oYfs9i
	6dj0lY7yehHnLu/hGcGLX34ohKloDEMtRkVciFGYwLaEejNzIS9H77UZeQPsI1Ig+NX02G
	oMQWhR3eK7YxS4OFi2q2euw6OXp5Ffw=
X-MC-Unique: lUw8cNt-Px2VJC9L_UNyAQ-1
From: Josh Poimboeuf <jpoimboe@redhat.com>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	Peter Zijlstra <peterz@infradead.org>,
	Miroslav Benes <mbenes@suse.cz>
Subject: [PATCH 0/3] Alternatives vs ORC, a slightly easier way
Date: Tue, 22 Dec 2020 23:18:07 -0600
Message-Id: <cover.1608700338.git.jpoimboe@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23

These patches replace Peter's "Alternatives vs ORC, the hard way".  The
end result should be the same (support for paravirt patching's use of
alternatives which modify the stack).

Josh Poimboeuf (3):
  objtool: Refactor ORC section generation
  objtool: Add 'alt_group' struct
  objtool: Support stack layout changes in alternatives

 .../Documentation/stack-validation.txt        |  16 +-
 tools/objtool/Makefile                        |   4 -
 tools/objtool/arch.h                          |   4 -
 tools/objtool/builtin-orc.c                   |   6 +-
 tools/objtool/check.c                         | 190 ++++++-----
 tools/objtool/check.h                         |  22 +-
 tools/objtool/objtool.h                       |   3 +-
 tools/objtool/orc_gen.c                       | 308 ++++++++++--------
 tools/objtool/weak.c                          |   7 +-
 9 files changed, 315 insertions(+), 245 deletions(-)

-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Wed Dec 23 05:19:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 05:19:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58161.102110 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krwYT-0002S0-CS; Wed, 23 Dec 2020 05:19:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58161.102110; Wed, 23 Dec 2020 05:19:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krwYT-0002Rt-9N; Wed, 23 Dec 2020 05:19:21 +0000
Received: by outflank-mailman (input) for mailman id 58161;
 Wed, 23 Dec 2020 05:19:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Xcbs=F3=redhat.com=jpoimboe@srs-us1.protection.inumbo.net>)
 id 1krwYR-0002OV-QG
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 05:19:19 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id a63ff240-9344-47ab-9544-97ce84c407e7;
 Wed, 23 Dec 2020 05:19:13 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-126-w1lCaopBMcyPQaggoedKzQ-1; Wed, 23 Dec 2020 00:19:08 -0500
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 9ACE1107ACE6;
 Wed, 23 Dec 2020 05:19:07 +0000 (UTC)
Received: from treble.redhat.com (ovpn-117-91.rdu2.redhat.com [10.10.117.91])
 by smtp.corp.redhat.com (Postfix) with ESMTP id D78BE19D9C;
 Wed, 23 Dec 2020 05:19:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a63ff240-9344-47ab-9544-97ce84c407e7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1608700753;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ZbJws1RViL+dhYV0RosYWexYv0hwpQMwpjgoJDxnT9E=;
	b=bO4bNhGKNBaWjXiRLOer2cLy8eIm9nXSHjzd8vnNpfko7cM5goSxBDsGSrXWW2tkGnz/bY
	DMmvjFFcfPvuv+UKEJbCaONeLx0s+4pSF+Sm3B4JS8BCJehHo3OIg58tuHj+7FzDtdwtYM
	QG4dVy9kFcEoAGyycHcneYgw5/XwPJs=
X-MC-Unique: w1lCaopBMcyPQaggoedKzQ-1
From: Josh Poimboeuf <jpoimboe@redhat.com>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	Peter Zijlstra <peterz@infradead.org>,
	Miroslav Benes <mbenes@suse.cz>
Subject: [PATCH 1/3] objtool: Refactor ORC section generation
Date: Tue, 22 Dec 2020 23:18:08 -0600
Message-Id: <46e2a28520d2d9ddd2a525ecc53a97af1946022f.1608700338.git.jpoimboe@redhat.com>
In-Reply-To: <cover.1608700338.git.jpoimboe@redhat.com>
References: <cover.1608700338.git.jpoimboe@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23

Decouple ORC entries from instructions.  This simplifies the
control/data flow, and is going to make it easier to support alternative
instructions which change the stack layout.

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
---
 tools/objtool/Makefile      |   4 -
 tools/objtool/arch.h        |   4 -
 tools/objtool/builtin-orc.c |   6 +-
 tools/objtool/check.h       |   3 -
 tools/objtool/objtool.h     |   3 +-
 tools/objtool/orc_gen.c     | 272 ++++++++++++++++++------------------
 tools/objtool/weak.c        |   7 +-
 7 files changed, 140 insertions(+), 159 deletions(-)

diff --git a/tools/objtool/Makefile b/tools/objtool/Makefile
index 5cdb19036d7f..a43096f713c7 100644
--- a/tools/objtool/Makefile
+++ b/tools/objtool/Makefile
@@ -46,10 +46,6 @@ ifeq ($(SRCARCH),x86)
 	SUBCMD_ORC := y
 endif
 
-ifeq ($(SUBCMD_ORC),y)
-	CFLAGS += -DINSN_USE_ORC
-endif
-
 export SUBCMD_CHECK SUBCMD_ORC
 export srctree OUTPUT CFLAGS SRCARCH AWK
 include $(srctree)/tools/build/Makefile.include
diff --git a/tools/objtool/arch.h b/tools/objtool/arch.h
index 4a84c3081b8e..5e3f3ea8bb89 100644
--- a/tools/objtool/arch.h
+++ b/tools/objtool/arch.h
@@ -11,10 +11,6 @@
 #include "objtool.h"
 #include "cfi.h"
 
-#ifdef INSN_USE_ORC
-#include <asm/orc_types.h>
-#endif
-
 enum insn_type {
 	INSN_JUMP_CONDITIONAL,
 	INSN_JUMP_UNCONDITIONAL,
diff --git a/tools/objtool/builtin-orc.c b/tools/objtool/builtin-orc.c
index 7b31121fa60b..508bdf6ae8dc 100644
--- a/tools/objtool/builtin-orc.c
+++ b/tools/objtool/builtin-orc.c
@@ -51,11 +51,7 @@ int cmd_orc(int argc, const char **argv)
 		if (list_empty(&file->insn_list))
 			return 0;
 
-		ret = create_orc(file);
-		if (ret)
-			return ret;
-
-		ret = create_orc_sections(file);
+		ret = orc_create(file);
 		if (ret)
 			return ret;
 
diff --git a/tools/objtool/check.h b/tools/objtool/check.h
index 5ec00a4b891b..4c10916ff1cf 100644
--- a/tools/objtool/check.h
+++ b/tools/objtool/check.h
@@ -43,9 +43,6 @@ struct instruction {
 	struct symbol *func;
 	struct list_head stack_ops;
 	struct cfi_state cfi;
-#ifdef INSN_USE_ORC
-	struct orc_entry orc;
-#endif
 };
 
 static inline bool is_static_jump(struct instruction *insn)
diff --git a/tools/objtool/objtool.h b/tools/objtool/objtool.h
index 4125d4578b23..5e58d3537e2f 100644
--- a/tools/objtool/objtool.h
+++ b/tools/objtool/objtool.h
@@ -26,7 +26,6 @@ struct objtool_file *objtool_open_read(const char *_objname);
 
 int check(struct objtool_file *file);
 int orc_dump(const char *objname);
-int create_orc(struct objtool_file *file);
-int create_orc_sections(struct objtool_file *file);
+int orc_create(struct objtool_file *file);
 
 #endif /* _OBJTOOL_H */
diff --git a/tools/objtool/orc_gen.c b/tools/objtool/orc_gen.c
index 235663b96adc..73efba2bfa72 100644
--- a/tools/objtool/orc_gen.c
+++ b/tools/objtool/orc_gen.c
@@ -12,89 +12,84 @@
 #include "check.h"
 #include "warn.h"
 
-int create_orc(struct objtool_file *file)
+static int init_orc_entry(struct orc_entry *orc, struct cfi_state *cfi)
 {
-	struct instruction *insn;
+	struct instruction *insn = container_of(cfi, struct instruction, cfi);
+	struct cfi_reg *bp = &cfi->regs[CFI_BP];
 
-	for_each_insn(file, insn) {
-		struct orc_entry *orc = &insn->orc;
-		struct cfi_reg *cfa = &insn->cfi.cfa;
-		struct cfi_reg *bp = &insn->cfi.regs[CFI_BP];
+	memset(orc, 0, sizeof(*orc));
 
-		if (!insn->sec->text)
-			continue;
-
-		orc->end = insn->cfi.end;
+	orc->end = cfi->end;
 
-		if (cfa->base == CFI_UNDEFINED) {
-			orc->sp_reg = ORC_REG_UNDEFINED;
-			continue;
-		}
-
-		switch (cfa->base) {
-		case CFI_SP:
-			orc->sp_reg = ORC_REG_SP;
-			break;
-		case CFI_SP_INDIRECT:
-			orc->sp_reg = ORC_REG_SP_INDIRECT;
-			break;
-		case CFI_BP:
-			orc->sp_reg = ORC_REG_BP;
-			break;
-		case CFI_BP_INDIRECT:
-			orc->sp_reg = ORC_REG_BP_INDIRECT;
-			break;
-		case CFI_R10:
-			orc->sp_reg = ORC_REG_R10;
-			break;
-		case CFI_R13:
-			orc->sp_reg = ORC_REG_R13;
-			break;
-		case CFI_DI:
-			orc->sp_reg = ORC_REG_DI;
-			break;
-		case CFI_DX:
-			orc->sp_reg = ORC_REG_DX;
-			break;
-		default:
-			WARN_FUNC("unknown CFA base reg %d",
-				  insn->sec, insn->offset, cfa->base);
-			return -1;
-		}
+	if (cfi->cfa.base == CFI_UNDEFINED) {
+		orc->sp_reg = ORC_REG_UNDEFINED;
+		return 0;
+	}
 
-		switch(bp->base) {
-		case CFI_UNDEFINED:
-			orc->bp_reg = ORC_REG_UNDEFINED;
-			break;
-		case CFI_CFA:
-			orc->bp_reg = ORC_REG_PREV_SP;
-			break;
-		case CFI_BP:
-			orc->bp_reg = ORC_REG_BP;
-			break;
-		default:
-			WARN_FUNC("unknown BP base reg %d",
-				  insn->sec, insn->offset, bp->base);
-			return -1;
-		}
+	switch (cfi->cfa.base) {
+	case CFI_SP:
+		orc->sp_reg = ORC_REG_SP;
+		break;
+	case CFI_SP_INDIRECT:
+		orc->sp_reg = ORC_REG_SP_INDIRECT;
+		break;
+	case CFI_BP:
+		orc->sp_reg = ORC_REG_BP;
+		break;
+	case CFI_BP_INDIRECT:
+		orc->sp_reg = ORC_REG_BP_INDIRECT;
+		break;
+	case CFI_R10:
+		orc->sp_reg = ORC_REG_R10;
+		break;
+	case CFI_R13:
+		orc->sp_reg = ORC_REG_R13;
+		break;
+	case CFI_DI:
+		orc->sp_reg = ORC_REG_DI;
+		break;
+	case CFI_DX:
+		orc->sp_reg = ORC_REG_DX;
+		break;
+	default:
+		WARN_FUNC("unknown CFA base reg %d",
+			  insn->sec, insn->offset, cfi->cfa.base);
+		return -1;
+	}
 
-		orc->sp_offset = cfa->offset;
-		orc->bp_offset = bp->offset;
-		orc->type = insn->cfi.type;
+	switch (bp->base) {
+	case CFI_UNDEFINED:
+		orc->bp_reg = ORC_REG_UNDEFINED;
+		break;
+	case CFI_CFA:
+		orc->bp_reg = ORC_REG_PREV_SP;
+		break;
+	case CFI_BP:
+		orc->bp_reg = ORC_REG_BP;
+		break;
+	default:
+		WARN_FUNC("unknown BP base reg %d",
+			  insn->sec, insn->offset, bp->base);
+		return -1;
 	}
 
+	orc->sp_offset = cfi->cfa.offset;
+	orc->bp_offset = bp->offset;
+	orc->type = cfi->type;
+
 	return 0;
 }
 
-static int create_orc_entry(struct elf *elf, struct section *u_sec, struct section *ip_relocsec,
-				unsigned int idx, struct section *insn_sec,
-				unsigned long insn_off, struct orc_entry *o)
+static int write_orc_entry(struct elf *elf, struct section *orc_sec,
+			   struct section *ip_rsec, unsigned int idx,
+			   struct section *insn_sec, unsigned long insn_off,
+			   struct orc_entry *o)
 {
 	struct orc_entry *orc;
 	struct reloc *reloc;
 
 	/* populate ORC data */
-	orc = (struct orc_entry *)u_sec->data->d_buf + idx;
+	orc = (struct orc_entry *)orc_sec->data->d_buf + idx;
 	memcpy(orc, o, sizeof(*orc));
 
 	/* populate reloc for ip */
@@ -133,102 +128,109 @@ static int create_orc_entry(struct elf *elf, struct section *u_sec, struct secti
 
 	reloc->type = R_X86_64_PC32;
 	reloc->offset = idx * sizeof(int);
-	reloc->sec = ip_relocsec;
+	reloc->sec = ip_rsec;
 
 	elf_add_reloc(elf, reloc);
 
 	return 0;
 }
 
-int create_orc_sections(struct objtool_file *file)
+struct orc_list_entry {
+	struct list_head list;
+	struct orc_entry orc;
+	struct section *insn_sec;
+	unsigned long insn_off;
+};
+
+static int orc_list_add(struct list_head *orc_list, struct orc_entry *orc,
+			struct section *sec, unsigned long offset)
+{
+	struct orc_list_entry *entry = malloc(sizeof(*entry));
+
+	if (!entry) {
+		WARN("malloc failed");
+		return -1;
+	}
+
+	entry->orc	= *orc;
+	entry->insn_sec = sec;
+	entry->insn_off = offset;
+
+	list_add_tail(&entry->list, orc_list);
+	return 0;
+}
+
+int orc_create(struct objtool_file *file)
 {
-	struct instruction *insn, *prev_insn;
-	struct section *sec, *u_sec, *ip_relocsec;
-	unsigned int idx;
+	struct section *sec, *ip_rsec, *orc_sec;
+	unsigned int nr = 0, idx = 0;
+	struct orc_list_entry *entry;
+	struct list_head orc_list;
 
-	struct orc_entry empty = {
-		.sp_reg = ORC_REG_UNDEFINED,
+	struct orc_entry null = {
+		.sp_reg  = ORC_REG_UNDEFINED,
 		.bp_reg  = ORC_REG_UNDEFINED,
 		.type    = UNWIND_HINT_TYPE_CALL,
 	};
 
-	sec = find_section_by_name(file->elf, ".orc_unwind");
-	if (sec) {
-		WARN("file already has .orc_unwind section, skipping");
-		return -1;
-	}
-
-	/* count the number of needed orcs */
-	idx = 0;
+	/* Build a deduplicated list of ORC entries: */
+	INIT_LIST_HEAD(&orc_list);
 	for_each_sec(file, sec) {
+		struct orc_entry orc, prev_orc = {0};
+		struct instruction *insn;
+		bool empty = true;
+
 		if (!sec->text)
 			continue;
 
-		prev_insn = NULL;
 		sec_for_each_insn(file, sec, insn) {
-			if (!prev_insn ||
-			    memcmp(&insn->orc, &prev_insn->orc,
-				   sizeof(struct orc_entry))) {
-				idx++;
-			}
-			prev_insn = insn;
+			if (init_orc_entry(&orc, &insn->cfi))
+				return -1;
+			if (!memcmp(&prev_orc, &orc, sizeof(orc)))
+				continue;
+			if (orc_list_add(&orc_list, &orc, sec, insn->offset))
+				return -1;
+			nr++;
+			prev_orc = orc;
+			empty = false;
 		}
 
-		/* section terminator */
-		if (prev_insn)
-			idx++;
+		/* Add a section terminator */
+		if (!empty) {
+			orc_list_add(&orc_list, &null, sec, sec->len);
+			nr++;
+		}
 	}
-	if (!idx)
-		return -1;
+	if (!nr)
+		return 0;
 
+	/* Create .orc_unwind, .orc_unwind_ip and .rela.orc_unwind_ip sections: */
+	sec = find_section_by_name(file->elf, ".orc_unwind");
+	if (sec) {
+		WARN("file already has .orc_unwind section, skipping");
+		return -1;
+	}
+	orc_sec = elf_create_section(file->elf, ".orc_unwind", 0,
+				     sizeof(struct orc_entry), nr);
+	if (!orc_sec)
+		return -1;
 
-	/* create .orc_unwind_ip and .rela.orc_unwind_ip sections */
-	sec = elf_create_section(file->elf, ".orc_unwind_ip", 0, sizeof(int), idx);
+	sec = elf_create_section(file->elf, ".orc_unwind_ip", 0, sizeof(int), nr);
 	if (!sec)
 		return -1;
-
-	ip_relocsec = elf_create_reloc_section(file->elf, sec, SHT_RELA);
-	if (!ip_relocsec)
+	ip_rsec = elf_create_reloc_section(file->elf, sec, SHT_RELA);
+	if (!ip_rsec)
 		return -1;
 
-	/* create .orc_unwind section */
-	u_sec = elf_create_section(file->elf, ".orc_unwind", 0,
-				   sizeof(struct orc_entry), idx);
-
-	/* populate sections */
-	idx = 0;
-	for_each_sec(file, sec) {
-		if (!sec->text)
-			continue;
-
-		prev_insn = NULL;
-		sec_for_each_insn(file, sec, insn) {
-			if (!prev_insn || memcmp(&insn->orc, &prev_insn->orc,
-						 sizeof(struct orc_entry))) {
-
-				if (create_orc_entry(file->elf, u_sec, ip_relocsec, idx,
-						     insn->sec, insn->offset,
-						     &insn->orc))
-					return -1;
-
-				idx++;
-			}
-			prev_insn = insn;
-		}
-
-		/* section terminator */
-		if (prev_insn) {
-			if (create_orc_entry(file->elf, u_sec, ip_relocsec, idx,
-					     prev_insn->sec,
-					     prev_insn->offset + prev_insn->len,
-					     &empty))
-				return -1;
-
-			idx++;
-		}
+	/* Write ORC entries to sections: */
+	list_for_each_entry(entry, &orc_list, list) {
+		if (write_orc_entry(file->elf, orc_sec, ip_rsec, idx++,
+				    entry->insn_sec, entry->insn_off,
+				    &entry->orc))
+			return -1;
 	}
 
-	if (elf_rebuild_reloc_section(file->elf, ip_relocsec))
+	if (elf_rebuild_reloc_section(file->elf, ip_rsec))
 		return -1;
 
 	return 0;
diff --git a/tools/objtool/weak.c b/tools/objtool/weak.c
index 7843e9a7a72f..553ec9ce51ba 100644
--- a/tools/objtool/weak.c
+++ b/tools/objtool/weak.c
@@ -25,12 +25,7 @@ int __weak orc_dump(const char *_objname)
 	UNSUPPORTED("orc");
 }
 
-int __weak create_orc(struct objtool_file *file)
-{
-	UNSUPPORTED("orc");
-}
-
-int __weak create_orc_sections(struct objtool_file *file)
+int __weak orc_create(struct objtool_file *file)
 {
 	UNSUPPORTED("orc");
 }
-- 
2.29.2
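The deduplication scheme in the patch's orc_create() above — emit an ORC entry only when it differs from the previously emitted one, then append a per-section terminator — can be sketched in isolation. This is a simplified model, not the kernel code: the struct here is a stand-in for the real struct orc_entry, and the list/ELF plumbing is replaced by plain arrays.

```c
#include <assert.h>
#include <string.h>

/* Simplified stand-in for the real struct orc_entry. */
struct orc {
	int sp_reg, bp_reg, sp_offset, bp_offset, type;
};

/*
 * Run-length deduplication: copy an entry from in[] to out[] only when
 * it differs from the previously emitted entry, then append a section
 * terminator.  Returns the number of entries written to out[].
 */
static int dedup_orc(const struct orc *in, int nr_in,
		     struct orc *out, const struct orc *terminator)
{
	int i, nr_out = 0;

	for (i = 0; i < nr_in; i++) {
		/* Skip entries identical to the last one emitted. */
		if (nr_out && !memcmp(&out[nr_out - 1], &in[i], sizeof(*in)))
			continue;
		out[nr_out++] = in[i];
	}

	/* Non-empty sections get a terminating entry. */
	if (nr_out)
		out[nr_out++] = *terminator;

	return nr_out;
}
```

Runs of instructions sharing the same unwind state thus collapse to one entry each, which is what keeps the .orc_unwind table small relative to the instruction count.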



From xen-devel-bounces@lists.xenproject.org Wed Dec 23 05:19:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 05:19:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58159.102086 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krwYN-0002Oo-R3; Wed, 23 Dec 2020 05:19:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58159.102086; Wed, 23 Dec 2020 05:19:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krwYN-0002Oh-N7; Wed, 23 Dec 2020 05:19:15 +0000
Received: by outflank-mailman (input) for mailman id 58159;
 Wed, 23 Dec 2020 05:19:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Xcbs=F3=redhat.com=jpoimboe@srs-us1.protection.inumbo.net>)
 id 1krwYM-0002OU-NM
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 05:19:14 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 0c03b833-e60d-4677-a353-467c4576d44a;
 Wed, 23 Dec 2020 05:19:12 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-438-Y_oG4oo9O1GqqGNZNmnjhw-1; Wed, 23 Dec 2020 00:19:09 -0500
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 700571015C82;
 Wed, 23 Dec 2020 05:19:08 +0000 (UTC)
Received: from treble.redhat.com (ovpn-117-91.rdu2.redhat.com [10.10.117.91])
 by smtp.corp.redhat.com (Postfix) with ESMTP id C2D3D19D9C;
 Wed, 23 Dec 2020 05:19:07 +0000 (UTC)
X-Inumbo-ID: 0c03b833-e60d-4677-a353-467c4576d44a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1608700752;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=9FLECD3fszJkQ0foq9Jee/a0IqJ4VTmTS/l3JtWjF48=;
	b=WztnSoNJN9MHhk7wuDEL8Mwt3/YVi6h2wEmwUlBcH1wu9dnzdNiMHEik/76yTWtqFawd9T
	0v728JNeQwJZh9fWU5h+gZvZ8IH4rGucipdv/8Ye2K4tWgXYFr/UwSLvdjqK7vBsf+2vfT
	AJAUP96U/54+3OHRJUZOqYv+jq1FVOk=
X-MC-Unique: Y_oG4oo9O1GqqGNZNmnjhw-1
From: Josh Poimboeuf <jpoimboe@redhat.com>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	Peter Zijlstra <peterz@infradead.org>,
	Miroslav Benes <mbenes@suse.cz>
Subject: [PATCH 2/3] objtool: Add 'alt_group' struct
Date: Tue, 22 Dec 2020 23:18:09 -0600
Message-Id: <f092bae6365f5a4c476b0189b9e3001283117fa2.1608700338.git.jpoimboe@redhat.com>
In-Reply-To: <cover.1608700338.git.jpoimboe@redhat.com>
References: <cover.1608700338.git.jpoimboe@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23

Create a new struct associated with each group of alternative
instructions.  This will help with the removal of fake jumps, and more
importantly with adding support for stack layout changes in
alternatives.

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
---
 tools/objtool/check.c | 29 +++++++++++++++++++++++------
 tools/objtool/check.h | 13 ++++++++++++-
 2 files changed, 35 insertions(+), 7 deletions(-)

diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index c6ab44543c92..67f39b57c6f7 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -984,20 +984,28 @@ static int handle_group_alt(struct objtool_file *file,
 			    struct instruction *orig_insn,
 			    struct instruction **new_insn)
 {
-	static unsigned int alt_group_next_index = 1;
 	struct instruction *last_orig_insn, *last_new_insn, *insn, *fake_jump = NULL;
-	unsigned int alt_group = alt_group_next_index++;
+	struct alt_group *orig_alt_group, *new_alt_group;
 	unsigned long dest_off;
 
+
+	orig_alt_group = malloc(sizeof(*orig_alt_group));
+	if (!orig_alt_group) {
+		WARN("malloc failed");
+		return -1;
+	}
 	last_orig_insn = NULL;
 	insn = orig_insn;
 	sec_for_each_insn_from(file, insn) {
 		if (insn->offset >= special_alt->orig_off + special_alt->orig_len)
 			break;
 
-		insn->alt_group = alt_group;
+		insn->alt_group = orig_alt_group;
 		last_orig_insn = insn;
 	}
+	orig_alt_group->orig_group = NULL;
+	orig_alt_group->first_insn = orig_insn;
+	orig_alt_group->last_insn = last_orig_insn;
 
 	if (next_insn_same_sec(file, last_orig_insn)) {
 		fake_jump = malloc(sizeof(*fake_jump));
@@ -1028,8 +1036,13 @@ static int handle_group_alt(struct objtool_file *file,
 		return 0;
 	}
 
+	new_alt_group = malloc(sizeof(*new_alt_group));
+	if (!new_alt_group) {
+		WARN("malloc failed");
+		return -1;
+	}
+
 	last_new_insn = NULL;
-	alt_group = alt_group_next_index++;
 	insn = *new_insn;
 	sec_for_each_insn_from(file, insn) {
 		struct reloc *alt_reloc;
@@ -1041,7 +1054,7 @@ static int handle_group_alt(struct objtool_file *file,
 
 		insn->ignore = orig_insn->ignore_alts;
 		insn->func = orig_insn->func;
-		insn->alt_group = alt_group;
+		insn->alt_group = new_alt_group;
 
 		/*
 		 * Since alternative replacement code is copy/pasted by the
@@ -1090,6 +1103,10 @@ static int handle_group_alt(struct objtool_file *file,
 		return -1;
 	}
 
+	new_alt_group->orig_group = orig_alt_group;
+	new_alt_group->first_insn = *new_insn;
+	new_alt_group->last_insn = last_new_insn;
+
 	if (fake_jump)
 		list_add(&fake_jump->list, &last_new_insn->list);
 
@@ -2405,7 +2422,7 @@ static int validate_return(struct symbol *func, struct instruction *insn, struct
 static void fill_alternative_cfi(struct objtool_file *file, struct instruction *insn)
 {
 	struct instruction *first_insn = insn;
-	int alt_group = insn->alt_group;
+	struct alt_group *alt_group = insn->alt_group;
 
 	sec_for_each_insn_continue(file, insn) {
 		if (insn->alt_group != alt_group)
diff --git a/tools/objtool/check.h b/tools/objtool/check.h
index 4c10916ff1cf..b74c383c2d83 100644
--- a/tools/objtool/check.h
+++ b/tools/objtool/check.h
@@ -19,6 +19,17 @@ struct insn_state {
 	s8 instr;
 };
 
+struct alt_group {
+	/*
+	 * Pointer from a replacement group to the original group.  NULL if it
+	 * *is* the original group.
+	 */
+	struct alt_group *orig_group;
+
+	/* First and last instructions in the group */
+	struct instruction *first_insn, *last_insn;
+};
+
 struct instruction {
 	struct list_head list;
 	struct hlist_node hash;
@@ -34,7 +45,7 @@ struct instruction {
 	s8 instr;
 	u8 visited;
 	u8 ret_offset;
-	int alt_group;
+	struct alt_group *alt_group;
 	struct symbol *call_dest;
 	struct instruction *jump_dest;
 	struct instruction *first_jump_src;
-- 
2.29.2
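The relationship the new struct in the patch above encodes — a replacement group pointing back at the original group it patches, with first/last instruction bounds, and group membership compared by pointer identity instead of integer ID — can be sketched with simplified types. The struct instruction here is a minimal stand-in for objtool's real one.

```c
#include <assert.h>
#include <stddef.h>

struct instruction;

/* Mirrors the alt_group struct added by the patch. */
struct alt_group {
	/* NULL if this *is* the original group. */
	struct alt_group *orig_group;

	/* First and last instructions in the group. */
	struct instruction *first_insn, *last_insn;
};

/* Minimal stand-in for objtool's struct instruction. */
struct instruction {
	unsigned long offset;
	struct alt_group *alt_group;	/* was: int alt_group */
};

/* Groups are now compared by pointer identity, not integer ID. */
static int same_group(const struct instruction *a,
		      const struct instruction *b)
{
	return a->alt_group == b->alt_group;
}

/* Follow a replacement group back to the original being patched. */
static struct alt_group *original_of(struct alt_group *g)
{
	return g->orig_group ? g->orig_group : g;
}
```

The back-pointer is what later lets validation jump from the end of a replacement stream to the instruction following the original group, as if the alternative had been patched in place.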



From xen-devel-bounces@lists.xenproject.org Wed Dec 23 05:19:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 05:19:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58164.102122 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krwYY-0002Wk-Md; Wed, 23 Dec 2020 05:19:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58164.102122; Wed, 23 Dec 2020 05:19:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1krwYY-0002Wd-JT; Wed, 23 Dec 2020 05:19:26 +0000
Received: by outflank-mailman (input) for mailman id 58164;
 Wed, 23 Dec 2020 05:19:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Xcbs=F3=redhat.com=jpoimboe@srs-us1.protection.inumbo.net>)
 id 1krwYW-0002OV-QN
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 05:19:24 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 964faad1-99ad-44e8-92f4-6a7cf143853c;
 Wed, 23 Dec 2020 05:19:15 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-406-OYXhaivwNG27pgyrOvM83g-1; Wed, 23 Dec 2020 00:19:11 -0500
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 8C7CA18C8C02;
 Wed, 23 Dec 2020 05:19:09 +0000 (UTC)
Received: from treble.redhat.com (ovpn-117-91.rdu2.redhat.com [10.10.117.91])
 by smtp.corp.redhat.com (Postfix) with ESMTP id A400319D9C;
 Wed, 23 Dec 2020 05:19:08 +0000 (UTC)
X-Inumbo-ID: 964faad1-99ad-44e8-92f4-6a7cf143853c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1608700755;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7GfgB281hzscc78Xu7z2MmSIJrE3t6vhDKCus+aT9gA=;
	b=BRDPlzYLGZxjOjK1kSh5NMB7++HUO+SUs3CJCT15W3G2lbI5UYOQ2idVD6VdVt4kcyHQ8J
	rcTq3+3nwjGVOfmdPAjyYwaP+7l95Cc705J7JJvJDYZFv0UJbAicnMLSkUIwGDriazXY2b
	PCxuTbbrhmhcdEUZ4LlZYKjyAFBWu7c=
X-MC-Unique: OYXhaivwNG27pgyrOvM83g-1
From: Josh Poimboeuf <jpoimboe@redhat.com>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	Peter Zijlstra <peterz@infradead.org>,
	Miroslav Benes <mbenes@suse.cz>,
	Shinichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Subject: [PATCH 3/3] objtool: Support stack layout changes in alternatives
Date: Tue, 22 Dec 2020 23:18:10 -0600
Message-Id: <9f78604e49b400eb3b2ca613591f8c357474ed4e.1608700338.git.jpoimboe@redhat.com>
In-Reply-To: <cover.1608700338.git.jpoimboe@redhat.com>
References: <cover.1608700338.git.jpoimboe@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23

The ORC unwinder showed a warning [1] which revealed the stack layout
didn't match what was expected.  The problem was that paravirt patching
had replaced "CALL *pv_ops.irq.save_fl" with "PUSHF;POP".  That changed
the stack layout between the PUSHF and the POP, so unwinding from an
interrupt which occurred between those two instructions would fail.

Part of the agreed-upon solution was to rework the custom paravirt
patching code to use alternatives instead, since objtool already knows
how to read alternatives (and converging runtime patching infrastructure
is always a good thing anyway).  But the main problem still remains,
which is that runtime patching can change the stack layout.

Making stack layout changes in alternatives was disallowed with commit
7117f16bf460 ("objtool: Fix ORC vs alternatives"), but now that paravirt
is going to be doing it, it needs to be supported.

One way to do so would be to modify the ORC table when the code gets
patched.  But ORC is simple -- a good thing! -- and it's best to leave
it alone.

Instead, support stack layout changes by "flattening" all possible stack
states (CFI) from parallel alternative code streams into a single set of
linear states.  The only necessary limitation is that CFI conflicts are
disallowed at all possible instruction boundaries.

For example, this scenario is allowed:

          Alt1                    Alt2                    Alt3

   0x00   CALL *pv_ops.save_fl    CALL xen_save_fl        PUSHF
   0x01                                                   POP %RAX
   0x02                                                   NOP
   ...
   0x05                           NOP
   ...
   0x07   <insn>

The unwind information for offset-0x00 is identical for all 3
alternatives.  Similarly, offset-0x05 and higher are also identical (and
the same as 0x00).  However, offset-0x01 has deviating CFI, but that is
only relevant for Alt3; neither of the other alternative instruction
streams will ever hit that offset.

This scenario is NOT allowed:

          Alt1                    Alt2

   0x00   CALL *pv_ops.save_fl    PUSHF
   0x01                           NOP6
   ...
   0x07   NOP                     POP %RAX

The problem here is that offset-0x07, which is an instruction boundary
in both possible instruction patch streams, has two conflicting stack
layouts.

[ The above examples were stolen from Peter Zijlstra. ]

The new flattened CFI array is used both for the detection of conflicts
(like the second example above) and the generation of linear ORC
entries.
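The flattening rule can be sketched as a shared array indexed by byte offset within the group: the first alternative stream to reach an offset records its CFI state there, and any other stream reaching the same offset must agree. This is a simplified model of the patch's propagate_alt_cfi(), with an int standing in for struct cfi_state.

```c
#include <assert.h>
#include <string.h>

#define GROUP_LEN 8
#define CFI_NONE  0	/* slot not yet populated; real states are nonzero */

/*
 * Shared CFI array for one alt_group, indexed by instruction offset
 * within the group.
 */
static int shared_cfi[GROUP_LEN];

/*
 * Record the CFI state seen at 'offset' in one alternative stream.
 * Returns -1 on a conflict with a previously recorded state, else 0.
 */
static int propagate_cfi(int offset, int cfi)
{
	if (shared_cfi[offset] == CFI_NONE) {
		shared_cfi[offset] = cfi;	/* first stream here wins */
		return 0;
	}

	/* Every later stream must agree at this boundary. */
	return shared_cfi[offset] == cfi ? 0 : -1;
}
```

Offsets hit by only one stream (like 0x01 in the allowed example above) never see a second writer, so deviating CFI there is harmless; a conflict is reported only when two streams disagree at a shared instruction boundary.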

BTW, another benefit of these changes is that, thanks to some related
cleanups (the new fake nops and the alt_group struct), objtool can
finally be rid of fake jumps, which were a constant source of headaches.

[1] https://lkml.kernel.org/r/20201111170536.arx2zbn4ngvjoov7@treble

Cc: Shinichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
---
 .../Documentation/stack-validation.txt        |  16 +-
 tools/objtool/check.c                         | 175 ++++++++++--------
 tools/objtool/check.h                         |   6 +
 tools/objtool/orc_gen.c                       |  56 +++++-
 4 files changed, 157 insertions(+), 96 deletions(-)

diff --git a/tools/objtool/Documentation/stack-validation.txt b/tools/objtool/Documentation/stack-validation.txt
index 0542e46c7552..30f38fdc0d56 100644
--- a/tools/objtool/Documentation/stack-validation.txt
+++ b/tools/objtool/Documentation/stack-validation.txt
@@ -315,13 +315,15 @@ they mean, and suggestions for how to fix them.
       function tracing inserts additional calls, which is not obvious from the
       sources).
 
-10. file.o: warning: func()+0x5c: alternative modifies stack
-
-    This means that an alternative includes instructions that modify the
-    stack. The problem is that there is only one ORC unwind table, this means
-    that the ORC unwind entries must be valid for each of the alternatives.
-    The easiest way to enforce this is to ensure alternatives do not contain
-    any ORC entries, which in turn implies the above constraint.
+10. file.o: warning: func()+0x5c: stack layout conflict in alternatives
+
+    This means that in the use of the alternative() or ALTERNATIVE()
+    macro, the code paths have conflicting modifications to the stack.
+    The problem is that there is only one ORC unwind table, which means
+    that the ORC unwind entries must be consistent for all possible
+    instruction boundaries regardless of which code has been patched.
+    This limitation can be overcome by massaging the alternatives with
+    NOPs to shift the stack changes around so they no longer conflict.
 
 11. file.o: warning: unannotated intra-function call
 
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 67f39b57c6f7..81d56fdef1c3 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -19,8 +19,6 @@
 #include <linux/kernel.h>
 #include <linux/static_call_types.h>
 
-#define FAKE_JUMP_OFFSET -1
-
 struct alternative {
 	struct list_head list;
 	struct instruction *insn;
@@ -767,9 +765,6 @@ static int add_jump_destinations(struct objtool_file *file)
 		if (!is_static_jump(insn))
 			continue;
 
-		if (insn->offset == FAKE_JUMP_OFFSET)
-			continue;
-
 		reloc = find_reloc_by_dest_range(file->elf, insn->sec,
 					       insn->offset, insn->len);
 		if (!reloc) {
@@ -984,7 +979,7 @@ static int handle_group_alt(struct objtool_file *file,
 			    struct instruction *orig_insn,
 			    struct instruction **new_insn)
 {
-	struct instruction *last_orig_insn, *last_new_insn, *insn, *fake_jump = NULL;
+	struct instruction *last_orig_insn, *last_new_insn = NULL, *insn, *nop = NULL;
 	struct alt_group *orig_alt_group, *new_alt_group;
 	unsigned long dest_off;
 
@@ -994,6 +989,13 @@ static int handle_group_alt(struct objtool_file *file,
 		WARN("malloc failed");
 		return -1;
 	}
+	orig_alt_group->cfi = calloc(special_alt->orig_len,
+				     sizeof(struct cfi_state *));
+	if (!orig_alt_group->cfi) {
+		WARN("calloc failed");
+		return -1;
+	}
+
 	last_orig_insn = NULL;
 	insn = orig_insn;
 	sec_for_each_insn_from(file, insn) {
@@ -1007,42 +1009,45 @@ static int handle_group_alt(struct objtool_file *file,
 	orig_alt_group->first_insn = orig_insn;
 	orig_alt_group->last_insn = last_orig_insn;
 
-	if (next_insn_same_sec(file, last_orig_insn)) {
-		fake_jump = malloc(sizeof(*fake_jump));
-		if (!fake_jump) {
-			WARN("malloc failed");
-			return -1;
-		}
-		memset(fake_jump, 0, sizeof(*fake_jump));
-		INIT_LIST_HEAD(&fake_jump->alts);
-		INIT_LIST_HEAD(&fake_jump->stack_ops);
-		init_cfi_state(&fake_jump->cfi);
 
-		fake_jump->sec = special_alt->new_sec;
-		fake_jump->offset = FAKE_JUMP_OFFSET;
-		fake_jump->type = INSN_JUMP_UNCONDITIONAL;
-		fake_jump->jump_dest = list_next_entry(last_orig_insn, list);
-		fake_jump->func = orig_insn->func;
+	new_alt_group = malloc(sizeof(*new_alt_group));
+	if (!new_alt_group) {
+		WARN("malloc failed");
+		return -1;
 	}
 
-	if (!special_alt->new_len) {
-		if (!fake_jump) {
-			WARN("%s: empty alternative at end of section",
-			     special_alt->orig_sec->name);
+	if (special_alt->new_len < special_alt->orig_len) {
+		/*
+		 * Insert a fake nop at the end to make the replacement
+		 * alt_group the same size as the original.  This is needed to
+		 * allow propagate_alt_cfi() to do its magic.  When the last
+		 * instruction affects the stack, the instruction after it (the
+		 * nop) will propagate the new state to the shared CFI array.
+		 */
+		nop = malloc(sizeof(*nop));
+		if (!nop) {
+			WARN("malloc failed");
 			return -1;
 		}
+		memset(nop, 0, sizeof(*nop));
+		INIT_LIST_HEAD(&nop->alts);
+		INIT_LIST_HEAD(&nop->stack_ops);
+		init_cfi_state(&nop->cfi);
 
-		*new_insn = fake_jump;
-		return 0;
+		nop->sec = special_alt->new_sec;
+		nop->offset = special_alt->new_off + special_alt->new_len;
+		nop->len = special_alt->orig_len - special_alt->new_len;
+		nop->type = INSN_NOP;
+		nop->func = orig_insn->func;
+		nop->alt_group = new_alt_group;
+		nop->ignore = orig_insn->ignore_alts;
 	}
 
-	new_alt_group = malloc(sizeof(*new_alt_group));
-	if (!new_alt_group) {
-		WARN("malloc failed");
-		return -1;
+	if (!special_alt->new_len) {
+		*new_insn = nop;
+		goto end;
 	}
 
-	last_new_insn = NULL;
 	insn = *new_insn;
 	sec_for_each_insn_from(file, insn) {
 		struct reloc *alt_reloc;
@@ -1081,14 +1086,8 @@ static int handle_group_alt(struct objtool_file *file,
 			continue;
 
 		dest_off = arch_jump_destination(insn);
-		if (dest_off == special_alt->new_off + special_alt->new_len) {
-			if (!fake_jump) {
-				WARN("%s: alternative jump to end of section",
-				     special_alt->orig_sec->name);
-				return -1;
-			}
-			insn->jump_dest = fake_jump;
-		}
+		if (dest_off == special_alt->new_off + special_alt->new_len)
+			insn->jump_dest = next_insn_same_sec(file, last_orig_insn);
 
 		if (!insn->jump_dest) {
 			WARN_FUNC("can't find alternative jump destination",
@@ -1103,13 +1102,13 @@ static int handle_group_alt(struct objtool_file *file,
 		return -1;
 	}
 
+	if (nop)
+		list_add(&nop->list, &last_new_insn->list);
+end:
 	new_alt_group->orig_group = orig_alt_group;
 	new_alt_group->first_insn = *new_insn;
-	new_alt_group->last_insn = last_new_insn;
-
-	if (fake_jump)
-		list_add(&fake_jump->list, &last_new_insn->list);
-
+	new_alt_group->last_insn = nop ? : last_new_insn;
+	new_alt_group->cfi = orig_alt_group->cfi;
 	return 0;
 }
 
@@ -2202,22 +2201,47 @@ static int update_cfi_state(struct instruction *insn, struct cfi_state *cfi,
 	return 0;
 }
 
-static int handle_insn_ops(struct instruction *insn, struct insn_state *state)
+/*
+ * The stack layouts of alternatives instructions can sometimes diverge when
+ * they have stack modifications.  That's fine as long as the potential stack
+ * layouts don't conflict at any given potential instruction boundary.
+ *
+ * Flatten the CFIs of the different alternative code streams (both original
+ * and replacement) into a single shared CFI array which can be used to detect
+ * conflicts and nicely feed a linear array of ORC entries to the unwinder.
+ */
+static int propagate_alt_cfi(struct objtool_file *file, struct instruction *insn)
 {
-	struct stack_op *op;
+	struct cfi_state **alt_cfi;
+	int group_off;
 
-	list_for_each_entry(op, &insn->stack_ops, list) {
-		struct cfi_state old_cfi = state->cfi;
-		int res;
+	if (!insn->alt_group)
+		return 0;
 
-		res = update_cfi_state(insn, &state->cfi, op);
-		if (res)
-			return res;
+	alt_cfi = insn->alt_group->cfi;
+	group_off = insn->offset - insn->alt_group->first_insn->offset;
 
-		if (insn->alt_group && memcmp(&state->cfi, &old_cfi, sizeof(struct cfi_state))) {
-			WARN_FUNC("alternative modifies stack", insn->sec, insn->offset);
+	if (!alt_cfi[group_off]) {
+		alt_cfi[group_off] = &insn->cfi;
+	} else {
+		if (memcmp(alt_cfi[group_off], &insn->cfi, sizeof(struct cfi_state))) {
+			WARN_FUNC("stack layout conflict in alternatives",
+				  insn->sec, insn->offset);
 			return -1;
 		}
+	}
+
+	return 0;
+}
+
+static int handle_insn_ops(struct instruction *insn, struct insn_state *state)
+{
+	struct stack_op *op;
+
+	list_for_each_entry(op, &insn->stack_ops, list) {
+
+		if (update_cfi_state(insn, &state->cfi, op))
+			return 1;
 
 		if (op->dest.type == OP_DEST_PUSHF) {
 			if (!state->uaccess_stack) {
@@ -2407,28 +2431,20 @@ static int validate_return(struct symbol *func, struct instruction *insn, struct
 	return 0;
 }
 
-/*
- * Alternatives should not contain any ORC entries, this in turn means they
- * should not contain any CFI ops, which implies all instructions should have
- * the same same CFI state.
- *
- * It is possible to constuct alternatives that have unreachable holes that go
- * unreported (because they're NOPs), such holes would result in CFI_UNDEFINED
- * states which then results in ORC entries, which we just said we didn't want.
- *
- * Avoid them by copying the CFI entry of the first instruction into the whole
- * alternative.
- */
-static void fill_alternative_cfi(struct objtool_file *file, struct instruction *insn)
+static struct instruction *next_insn_to_validate(struct objtool_file *file,
+						 struct instruction *insn)
 {
-	struct instruction *first_insn = insn;
 	struct alt_group *alt_group = insn->alt_group;
 
-	sec_for_each_insn_continue(file, insn) {
-		if (insn->alt_group != alt_group)
-			break;
-		insn->cfi = first_insn->cfi;
-	}
+	/*
+	 * Simulate the fact that alternatives are patched in-place.  When the
+	 * end of a replacement alt_group is reached, redirect objtool flow to
+	 * the end of the original alt_group.
+	 */
+	if (alt_group && insn == alt_group->last_insn && alt_group->orig_group)
+		return next_insn_same_sec(file, alt_group->orig_group->last_insn);
+
+	return next_insn_same_sec(file, insn);
 }
 
 /*
@@ -2449,7 +2465,7 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
 	sec = insn->sec;
 
 	while (1) {
-		next_insn = next_insn_same_sec(file, insn);
+		next_insn = next_insn_to_validate(file, insn);
 
 		if (file->c_file && func && insn->func && func != insn->func->pfunc) {
 			WARN("%s() falls through to next function %s()",
@@ -2482,6 +2498,9 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
 
 		insn->visited |= visited;
 
+		if (propagate_alt_cfi(file, insn))
+			return 1;
+
 		if (!insn->ignore_alts && !list_empty(&insn->alts)) {
 			bool skip_orig = false;
 
@@ -2497,9 +2516,6 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
 				}
 			}
 
-			if (insn->alt_group)
-				fill_alternative_cfi(file, insn);
-
 			if (skip_orig)
 				return 0;
 		}
@@ -2733,9 +2749,6 @@ static bool ignore_unreachable_insn(struct objtool_file *file, struct instructio
 	    !strcmp(insn->sec->name, ".altinstr_aux"))
 		return true;
 
-	if (insn->type == INSN_JUMP_UNCONDITIONAL && insn->offset == FAKE_JUMP_OFFSET)
-		return true;
-
 	if (!insn->func)
 		return false;
 
diff --git a/tools/objtool/check.h b/tools/objtool/check.h
index b74c383c2d83..45fe87ad662b 100644
--- a/tools/objtool/check.h
+++ b/tools/objtool/check.h
@@ -28,6 +28,12 @@ struct alt_group {
 
 	/* First and last instructions in the group */
 	struct instruction *first_insn, *last_insn;
+
+	/*
+	 * Byte-offset-addressed len-sized array of pointers to CFI structs.
+	 * This is shared with the other alt_groups in the same alternative.
+	 */
+	struct cfi_state **cfi;
 };
 
 struct instruction {
diff --git a/tools/objtool/orc_gen.c b/tools/objtool/orc_gen.c
index 73efba2bfa72..2c1e3b909be5 100644
--- a/tools/objtool/orc_gen.c
+++ b/tools/objtool/orc_gen.c
@@ -160,6 +160,13 @@ static int orc_list_add(struct list_head *orc_list, struct orc_entry *orc,
 	return 0;
 }
 
+static unsigned long alt_group_len(struct alt_group *alt_group)
+{
+	return alt_group->last_insn->offset +
+	       alt_group->last_insn->len -
+	       alt_group->first_insn->offset;
+}
+
 int orc_create(struct objtool_file *file)
 {
 	struct section *sec, *ip_rsec, *orc_sec;
@@ -184,15 +191,48 @@ int orc_create(struct objtool_file *file)
 			continue;
 
 		sec_for_each_insn(file, sec, insn) {
-			if (init_orc_entry(&orc, &insn->cfi))
-				return -1;
-			if (!memcmp(&prev_orc, &orc, sizeof(orc)))
+			struct alt_group *alt_group = insn->alt_group;
+			int i;
+
+			if (!alt_group) {
+				if (init_orc_entry(&orc, &insn->cfi))
+					return -1;
+				if (!memcmp(&prev_orc, &orc, sizeof(orc)))
+					continue;
+				if (orc_list_add(&orc_list, &orc, sec,
+						 insn->offset))
+					return -1;
+				nr++;
+				prev_orc = orc;
+				empty = false;
 				continue;
-			if (orc_list_add(&orc_list, &orc, sec, insn->offset))
-				return -1;
-			nr++;
-			prev_orc = orc;
-			empty = false;
+			}
+
+			/*
+			 * Alternatives can have different stack layout
+			 * possibilities (but they shouldn't conflict).
+			 * Instead of traversing the instructions, use the
+			 * alt_group's flattened byte-offset-addressed CFI
+			 * array.
+			 */
+			for (i = 0; i < alt_group_len(alt_group); i++) {
+				struct cfi_state *cfi = alt_group->cfi[i];
+				if (!cfi)
+					continue;
+				if (init_orc_entry(&orc, cfi))
+					return -1;
+				if (!memcmp(&prev_orc, &orc, sizeof(orc)))
+					continue;
+				if (orc_list_add(&orc_list, &orc, insn->sec,
+						 insn->offset + i))
+					return -1;
+				nr++;
+				prev_orc = orc;
+				empty = false;
+			}
+
+			/* Skip to the end of the alt_group */
+			insn = alt_group->last_insn;
 		}
 
 		/* Add a section terminator */
-- 
2.29.2
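
[Archive editor's note: the following is an illustrative sketch, not part of the patch.] The core idea of the patch's propagate_alt_cfi() is a shared, byte-offset-addressed array of CFI pointers covering the whole alt_group: the first code stream to visit a given offset records its CFI there, and any later stream (original or replacement) must agree at that offset or a "stack layout conflict" is reported. A minimal Python model of that check, with hypothetical names and string stand-ins for struct cfi_state:

```python
# Sketch (not the kernel code) of the shared, byte-offset-addressed CFI
# array that propagate_alt_cfi() uses to detect conflicting stack layouts
# between alternative code streams.  Names and types are illustrative.

def propagate_alt_cfi(shared_cfi, insn_offset, group_start, insn_cfi):
    """Record insn_cfi at the instruction's byte offset within the group.

    Returns True if the layout is consistent, False on a conflict with a
    previously recorded stream (the analogue of the WARN_FUNC path).
    """
    off = insn_offset - group_start
    if shared_cfi[off] is None:
        shared_cfi[off] = insn_cfi       # first stream to reach this offset
        return True
    return shared_cfi[off] == insn_cfi   # later streams must agree exactly

# Two alternative streams over a 4-byte group starting at 0x100.  Both
# leave the same layout at offset 2, so no conflict; a third, divergent
# layout at the same offset is flagged.
group = [None] * 4
assert propagate_alt_cfi(group, 0x102, 0x100, "cfa=sp+8")       # original
assert propagate_alt_cfi(group, 0x102, 0x100, "cfa=sp+8")       # replacement agrees
assert not propagate_alt_cfi(group, 0x102, 0x100, "cfa=sp+16")  # conflict
```

The same array then feeds ORC generation directly: orc_create() walks the group's flattened CFI array by byte offset instead of traversing instructions, which is why the entries come out as a linear array suitable for the unwinder.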



From xen-devel-bounces@lists.xenproject.org Wed Dec 23 07:11:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 07:11:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58184.102160 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kryJF-0004X1-Tz; Wed, 23 Dec 2020 07:11:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58184.102160; Wed, 23 Dec 2020 07:11:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kryJF-0004Wu-Qq; Wed, 23 Dec 2020 07:11:45 +0000
Received: by outflank-mailman (input) for mailman id 58184;
 Wed, 23 Dec 2020 07:11:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kryJF-0004Wk-Ag; Wed, 23 Dec 2020 07:11:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kryJF-0000W7-4I; Wed, 23 Dec 2020 07:11:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kryJE-0006Ij-S4; Wed, 23 Dec 2020 07:11:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kryJE-00059P-Ra; Wed, 23 Dec 2020 07:11:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=F5KniLRrqKMIpEbVjs00vkgDv8BAP4X46cwo3t2gHa4=; b=cCBqPBlksIOs1vXX3c/Ax1qsUu
	shvDan7zJo17MxX39ZJkpjqUNJ3jZ4erR/PVDPclHkwNTF5zL74qA3CQG/+RUbgT5n45Y0ZQsiUJo
	H3EKkI6xtCnJSusTQsuLrDPrJyXt0NGNaxvDCpUsJ53xn7+3LKehypKXnSCDMogk+npw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157837-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157837: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-arm64-arm64-libvirt-xsm:guest-start.2:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
X-Osstest-Versions-That:
    xen=8c8938dcc1bd37dd61f705410053e08804ca2b55
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 23 Dec 2020 07:11:44 +0000

flight 157837 xen-unstable real [real]
flight 157845 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157837/
http://logs.test-lab.xenproject.org/osstest/logs/157845/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-libvirt-xsm 19 guest-start.2       fail pass in 157845-retest

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 157790

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157790
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157790
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157790
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157790
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157790
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157790
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157790
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157790
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157790
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157790
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157790
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
baseline version:
 xen                  8c8938dcc1bd37dd61f705410053e08804ca2b55

Last test of basis   157790  2020-12-22 11:31:00 Z    0 days
Testing same since   157837  2020-12-22 21:07:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Maximilian Engelhardt <maxi@daemonizer.de>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8c8938dcc1..98d4d6d8a6  98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc -> master


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 07:51:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 07:51:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58266.102326 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kryvp-0000Nd-4P; Wed, 23 Dec 2020 07:51:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58266.102326; Wed, 23 Dec 2020 07:51:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kryvp-0000NW-1M; Wed, 23 Dec 2020 07:51:37 +0000
Received: by outflank-mailman (input) for mailman id 58266;
 Wed, 23 Dec 2020 07:51:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kryvn-0000NO-7E; Wed, 23 Dec 2020 07:51:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kryvm-00019x-U3; Wed, 23 Dec 2020 07:51:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kryvm-0000Eu-KU; Wed, 23 Dec 2020 07:51:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kryvm-0005NF-Jy; Wed, 23 Dec 2020 07:51:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=e4olaTCG6/YRzMnAjnsSU7i70L9eut3VbY0qJsvJFZE=; b=3/dEhTekwMu/6FfbIJJPHLWnoZ
	/vBy7MzV/RNVvcAFP2pc/irPmP7NaGcT8bulUSDQV0nOWQo8wCcCfniV29N8XjCijXRQX7vdqzXAA
	ndSKDfJga2LpV5jMzcJbAbBRz40W3oDD+ozybJvZacHkDbn3OwcB9PCr43C91ivDI9l4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157840-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157840: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=d21d2706761bede7db38929abc5613f3e71c64ba
X-Osstest-Versions-That:
    ovmf=88e47d1959bfaf9417cfd4865ef3c6a926c1978b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 23 Dec 2020 07:51:34 +0000

flight 157840 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157840/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 d21d2706761bede7db38929abc5613f3e71c64ba
baseline version:
 ovmf                 88e47d1959bfaf9417cfd4865ef3c6a926c1978b

Last test of basis   157804  2020-12-22 14:11:38 Z    0 days
Testing same since   157840  2020-12-22 22:40:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael D Kinney <michael.d.kinney@intel.com>
  Ray Ni <ray.ni@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   88e47d1959..d21d270676  d21d2706761bede7db38929abc5613f3e71c64ba -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 10:00:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 10:00:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58280.102348 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks0w1-0002sA-1k; Wed, 23 Dec 2020 09:59:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58280.102348; Wed, 23 Dec 2020 09:59:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks0w0-0002s3-Uv; Wed, 23 Dec 2020 09:59:56 +0000
Received: by outflank-mailman (input) for mailman id 58280;
 Wed, 23 Dec 2020 09:59:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zN8f=F3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ks0vz-0002ry-NA
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 09:59:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d61acb6f-6414-4ceb-b2c5-0e512c471425;
 Wed, 23 Dec 2020 09:59:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8468BAF61;
 Wed, 23 Dec 2020 09:59:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d61acb6f-6414-4ceb-b2c5-0e512c471425
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608717590; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=22Ke/5SG+wrcSc3V1Nr+LM0ihCxBWc6g06UQ0IeOS3E=;
	b=ifEa/7AMDr04G6gmrmvt0J9bgeIRv/jIayK9H9ENGaag6Lv4ofqJiasdEaCoil+n9Y5JTN
	NVQAJCBGwXo3xL2WeEwXfDVI67UoIZsG4KqJLeKonuD1xXgbyffpwCO5Q6WzbwohPcctIX
	+RrWX64lx+z6bXvaqmnha9m76qxkBSI=
Subject: Re: Ryzen 4000 (Mobile) Softlocks/Micro-stutters
To: Dario Faggioli <dfaggioli@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Dylanger Daly <dylangerdaly@protonmail.com>
References: <9lQU_gCfRzGyyNb2j86pxTMi1IET1Iq7iK3994agUZPrTI5Xd-aCJAaRYuJlD3L5LT2WaV4N3-YF4xKl5ukialT0M_YD0ve6gmDFFfatpXw=@protonmail.com>
 <2cc5da3e-0ad0-4647-f1ca-190788c2910b@citrix.com>
 <3pKjdPYCiRimYjqHQP0xd_vqhoTOJqthTXOrY_rLeNvnQEpIF24gXDKgRhmr95JfARJzbVJVbfTrrJeiovGVHGbV0QBSZ2jez2Y_wt6db7g=@protonmail.com>
 <768d9dbb-4387-099f-b489-7952d7e883b0@suse.com>
 <T95F2Mi9RUUZ4w2wdeRqqM4uRyKgOFQNyooqEoTTDByK-0t9hZ1izG68lf90iExeYabEPSEv8puUeg0SEJtOmz8vYbVox2za28DXLd_h-_s=@protonmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <eba12ea4-5dda-f112-0e33-714e859b9b03@suse.com>
Date: Wed, 23 Dec 2020 10:59:50 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <T95F2Mi9RUUZ4w2wdeRqqM4uRyKgOFQNyooqEoTTDByK-0t9hZ1izG68lf90iExeYabEPSEv8puUeg0SEJtOmz8vYbVox2za28DXLd_h-_s=@protonmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.12.2020 00:04, Dylanger Daly wrote:
> I think I've narrowed down the issue.
> 
> I think disabling SMT on any AMD Zen 2 CPU breaks Xen's Credit2 scheduler. I can only test on AMD Ryzen 4000-based mobile CPUs, but I think this is what causes the softlocks and the need to pin dom0 to 1 vCPU.

Dario,

does this maybe ring any bells?

Jan

> I'm currently trying to re-enable SMT on Qubes 4.1 (Xen 4.14) and I'll report my findings here.
> 



From xen-devel-bounces@lists.xenproject.org Wed Dec 23 10:08:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 10:08:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58284.102360 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks14N-0003sp-S9; Wed, 23 Dec 2020 10:08:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58284.102360; Wed, 23 Dec 2020 10:08:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks14N-0003si-P5; Wed, 23 Dec 2020 10:08:35 +0000
Received: by outflank-mailman (input) for mailman id 58284;
 Wed, 23 Dec 2020 10:08:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ks14M-0003sa-3n; Wed, 23 Dec 2020 10:08:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ks14L-0003yP-Pc; Wed, 23 Dec 2020 10:08:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ks14L-0007Dt-HX; Wed, 23 Dec 2020 10:08:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ks14L-0000BW-Gz; Wed, 23 Dec 2020 10:08:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=RVg091R1W8cZBEgm/hGCYSTMnr0my/oYGNKx926UQ10=; b=zfnyIQ01HBQ+9XIdtNZF/AyMVZ
	CLOLpIgSob459qH90BdmlYPAdhVrNtmIFuLTVhO6h31cGbAKJVcGmK/gjlvC/hP+i7XYjYbIHAXw7
	XLA3E/GOqb3AfKUGOO2fQKWmabqigpzH8Uz9+i6ctcuyrw962fJShAXii01/AehaSpnk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157850-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 157850: all pass - PUSHED
X-Osstest-Versions-This:
    xen=98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
X-Osstest-Versions-That:
    xen=357db96a66e47e609c3b14768f1062e13eedbd93
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 23 Dec 2020 10:08:33 +0000

flight 157850 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157850/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
baseline version:
 xen                  357db96a66e47e609c3b14768f1062e13eedbd93

Last test of basis   157738  2020-12-20 09:20:06 Z    3 days
Testing same since   157850  2020-12-23 09:18:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Maximilian Engelhardt <maxi@daemonizer.de>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   357db96a66..98d4d6d8a6  98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 10:20:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 10:20:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58293.102374 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks1FL-0004vn-3Q; Wed, 23 Dec 2020 10:19:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58293.102374; Wed, 23 Dec 2020 10:19:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks1FL-0004vg-0O; Wed, 23 Dec 2020 10:19:55 +0000
Received: by outflank-mailman (input) for mailman id 58293;
 Wed, 23 Dec 2020 10:19:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zN8f=F3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ks1FK-0004vb-Aj
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 10:19:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 775d3a49-68c3-41a3-917a-337ac9bfe204;
 Wed, 23 Dec 2020 10:19:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EB171AD12;
 Wed, 23 Dec 2020 10:19:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 775d3a49-68c3-41a3-917a-337ac9bfe204
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608718792; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vJipi6hMbRsilmfJbpcFUYAzGVUcnMp4NNIUhmxooO0=;
	b=nBTGXXMS4r0sOs8CEv6rSr8QK60CK57wAoE9aUJKY0+iXHvaE2NIMiRZdgGGVQAycG1Y+v
	F7Lfk2hn12G+VMKtBFBdUQWJkK2oIxxIxpbgpBKaDJ4j2tHkFP8j3bcOMuT1A1CpFxlfXj
	Wl8JQFk3GHrSmMHeR1YF5194UQCkRCo=
Subject: Re: [PATCH] lib/sort: adjust types
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <37d6facf-3fb8-2171-4143-e5e0269fb637@suse.com>
 <b8517b5e-a769-73dd-4b83-498f9b512f60@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f4d6dc63-da82-f72d-904e-6eefbfaa9a22@suse.com>
Date: Wed, 23 Dec 2020 11:19:51 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <b8517b5e-a769-73dd-4b83-498f9b512f60@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 22.12.2020 18:05, Andrew Cooper wrote:
> On 22/12/2020 16:49, Jan Beulich wrote:
>> First and foremost do away with the use of plain int for sizes or size-
>> derived values. Use size_t, despite this requiring some adjustment to
>> the logic. Also replace u32 by uint32_t.
>>
>> While not directly related also drop a leftover #ifdef from x86's
>> swap_ex - this was needed only back when 32-bit Xen was still a thing.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks.

>> --- a/xen/lib/sort.c
>> +++ b/xen/lib/sort.c
>> @@ -6,14 +6,15 @@
>>  
>>  #include <xen/types.h>
>>  
>> -static void u32_swap(void *a, void *b, int size)
>> +static void u32_swap(void *a, void *b, size_t size)
>>  {
>> -    u32 t = *(u32 *)a;
>> -    *(u32 *)a = *(u32 *)b;
>> -    *(u32 *)b = t;
>> +    uint32_t t = *(uint32_t *)a;
>> +
>> +    *(uint32_t *)a = *(uint32_t *)b;
>> +    *(uint32_t *)b = t;
>>  }
>>  
>> -static void generic_swap(void *a, void *b, int size)
>> +static void generic_swap(void *a, void *b, size_t size)
>>  {
>>      char t;
>>  
>> @@ -43,18 +44,18 @@ static void generic_swap(void *a, void *
>>  
>>  void sort(void *base, size_t num, size_t size,
>>            int (*cmp)(const void *, const void *),
>> -          void (*swap)(void *, void *, int size))
>> +          void (*swap)(void *, void *, size_t size))
>>  {
>>      /* pre-scale counters for performance */
>> -    int i = (num / 2 - 1) * size, n = num * size, c, r;
>> +    size_t i = (num / 2) * size, n = num * size, c, r;
>>  
>>      if ( !swap )
>>          swap = (size == 4 ? u32_swap : generic_swap);
>>  
>>      /* heapify */
>> -    for ( ; i >= 0; i -= size )
>> +    while ( i > 0 )
>>      {
>> -        for ( r = i; r * 2 + size < n; r  = c )
>> +        for ( r = i -= size; r * 2 + size < n; r  = c )
> 
> Aren't some compilers going to complain at the lack of brackets for this
> setup of r?

I've never seen any warning on this type of construct. It's no
different from "a = b = c;".

> Also as you're editing the line, the "r  = c" part should lose one space.

Oh, indeed.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 10:40:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 10:40:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58297.102387 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks1Z5-0007QL-Rh; Wed, 23 Dec 2020 10:40:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58297.102387; Wed, 23 Dec 2020 10:40:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks1Z5-0007QE-Od; Wed, 23 Dec 2020 10:40:19 +0000
Received: by outflank-mailman (input) for mailman id 58297;
 Wed, 23 Dec 2020 10:40:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ks1Z4-0007Q3-6H; Wed, 23 Dec 2020 10:40:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ks1Z3-0004TN-5f; Wed, 23 Dec 2020 10:40:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ks1Z2-0000J0-Ti; Wed, 23 Dec 2020 10:40:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ks1Z2-00058w-TC; Wed, 23 Dec 2020 10:40:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ZS2nxgL6gYKpS9sv8dtHZzrsjsXVjnxD+EXwzKnqh/w=; b=xzs5FEer0kKiOfGaA5dVkoQeZO
	1WTu6jFiYDgw7WBJHG4Od1Y1eYFMK8s3aoPcE91PLN3cI26sWkxWm4xwNPyaevogpRgpjurEW8pND
	+Av3HyNGPlPVGuLJy6uTyQWIogj0r7EK0dK9npE6bWYwnaw+8T6jwVebUv82nqTyVII8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157838-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157838: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a05f8ecd88f15273d033b6f044b850a8af84a5b8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 23 Dec 2020 10:40:16 +0000

flight 157838 qemu-mainline real [real]
flight 157851 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157838/
http://logs.test-lab.xenproject.org/osstest/logs/157851/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  125 days
Failing since        152659  2020-08-21 14:07:39 Z  123 days  254 attempts
Testing same since   157670  2020-12-18 13:57:58 Z    4 days    7 attempts

------------------------------------------------------------
323 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 79306 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 10:50:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 10:50:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58304.102402 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks1im-0008NY-Rf; Wed, 23 Dec 2020 10:50:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58304.102402; Wed, 23 Dec 2020 10:50:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks1im-0008NR-OF; Wed, 23 Dec 2020 10:50:20 +0000
Received: by outflank-mailman (input) for mailman id 58304;
 Wed, 23 Dec 2020 10:50:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ks1il-0008NJ-NR; Wed, 23 Dec 2020 10:50:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ks1il-0004dC-Bk; Wed, 23 Dec 2020 10:50:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ks1il-0000c0-5A; Wed, 23 Dec 2020 10:50:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ks1il-0001lU-4g; Wed, 23 Dec 2020 10:50:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mGPXH5wIXMdsZgwo9+xdg9rOcCYGulzw0SGU6bd1/WI=; b=ErLzcUMyZOjPyH6AvYYbEU9yjL
	P0pTUmQEQo/5DOzDh8EFftmQu387XHpm4nvKoi9AsONTS6SmgVJoubxD+5UNM560l42nSNoETWEmo
	JuaGWaaJkDFuKlkF9Xr1l20g8VRu/UT7+8hg3UHd+47RROkRpiCRGgYpHAmoLgBqFRm8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157844-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157844: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=bed50bcbbb2796aa88f1c85a2424fe9bd7944f89
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 23 Dec 2020 10:50:19 +0000

flight 157844 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157844/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              bed50bcbbb2796aa88f1c85a2424fe9bd7944f89
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  166 days
Failing since        151818  2020-07-11 04:18:52 Z  165 days  160 attempts
Testing same since   157715  2020-12-19 04:19:22 Z    4 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 33734 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 11:22:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 11:22:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58315.102427 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks2Dm-0002kK-Vp; Wed, 23 Dec 2020 11:22:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58315.102427; Wed, 23 Dec 2020 11:22:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks2Dm-0002kD-SY; Wed, 23 Dec 2020 11:22:22 +0000
Received: by outflank-mailman (input) for mailman id 58315;
 Wed, 23 Dec 2020 11:22:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ks2Dl-0002k6-LS
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 11:22:21 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks2Dk-0005Ah-Co; Wed, 23 Dec 2020 11:22:20 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks2Dk-0004Yr-0G; Wed, 23 Dec 2020 11:22:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=StB7O/4hGb7k4MeHNLujDpS/algM4RjyptBAZ0qLXjw=; b=bJeBfNAdaEOWAe5trS7OejCHxy
	8hXQYjZ14RKpK9p7WPWTuXuUiAG1vjY5Zf/4DKgJ1g2cgneehboUPtjX2MjX95wXEC3LKB+7NYWCF
	0Ym2x97q/+DWzdSRU/mG6FoQWYc86l20+TmqoJh7p+y/R84ew+t89q56O2zY5dSDpyz8=;
Subject: Re: [PATCH v3 4/5] evtchn: convert domain event lock to an r/w one
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <a333387e-f9e5-7051-569a-1a9a37da53ca@suse.com>
 <074be931-54b0-1b0f-72d8-5bd577884814@xen.org>
 <6e34fd25-14a2-f655-b019-aca94ce086c8@suse.com>
 <55dc24b4-88c6-1b22-411e-267231632377@xen.org>
 <cf3faa68-ba4a-b864-66e0-f379a24a48ce@suse.com>
 <1f3571eb-5aec-e76e-0b61-2602356fb436@xen.org>
 <099b99bc-c544-0aa8-c3b4-4871ef618e4a@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <aa169dc2-77f2-b3e9-80f4-d5f4d6ea54f1@xen.org>
Date: Wed, 23 Dec 2020 11:22:18 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <099b99bc-c544-0aa8-c3b4-4871ef618e4a@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 22/12/2020 09:46, Jan Beulich wrote:
> On 21.12.2020 18:45, Julien Grall wrote:
>> On 14/12/2020 09:40, Jan Beulich wrote:
>>> On 11.12.2020 11:57, Julien Grall wrote:
>>>> On 11/12/2020 10:32, Jan Beulich wrote:
>>>>> On 09.12.2020 12:54, Julien Grall wrote:
>>>>>> On 23/11/2020 13:29, Jan Beulich wrote:
>>>>>>> @@ -620,7 +620,7 @@ int evtchn_close(struct domain *d1, int
>>>>>>>          long           rc = 0;
>>>>>>>      
>>>>>>>       again:
>>>>>>> -    spin_lock(&d1->event_lock);
>>>>>>> +    write_lock(&d1->event_lock);
>>>>>>>      
>>>>>>>          if ( !port_is_valid(d1, port1) )
>>>>>>>          {
>>>>>>> @@ -690,13 +690,11 @@ int evtchn_close(struct domain *d1, int
>>>>>>>                      BUG();
>>>>>>>      
>>>>>>>                  if ( d1 < d2 )
>>>>>>> -            {
>>>>>>> -                spin_lock(&d2->event_lock);
>>>>>>> -            }
>>>>>>> +                read_lock(&d2->event_lock);
>>>>>>
>>>>>> This change made me realize that I don't quite understand how the
>>>>>> rwlock is meant to work for event_lock. I was actually expecting this to
>>>>>> be a write_lock() given there is state changed in the d2 events.
>>>>>
>>>>> Well, the protection needs to be against racing changes, i.e.
>>>>> parallel invocations of this same function, or evtchn_close().
>>>>> It is debatable whether evtchn_status() and
>>>>> domain_dump_evtchn_info() would better also be locked out
>>>>> (other read_lock() uses aren't applicable to interdomain
>>>>> channels).
>>>>>
>>>>>> Could you outline how a developer can find out whether he/she should
>>>>>> use read_lock or write_lock?
>>>>>
>>>>> I could try to, but it would again be a port type dependent
>>>>> model, just like for the per-channel locks.
>>>>
>>>> It is quite important to have a clear locking strategy (in particular
>>>> for rwlocks) so we can make the correct decision when to use read_lock
>>>> or write_lock.
>>>>
>>>>> So I'd like it to
>>>>> be clarified first whether you aren't instead indirectly
>>>>> asking for these to become write_lock()
>>>>
>>>> Well, I don't understand why this is a read_lock() (even with your
>>>> previous explanation). I am not suggesting to switch to a write_lock(),
>>>> but instead asking for the reasoning behind the decision.
>>>
>>> So if what I've said in my previous reply isn't enough (including the
>>> argument towards using two write_lock() here), I'm struggling to
>>> figure what else to say. The primary goal is to exclude changes to
>>> the same ports. For this it is sufficient to hold just one of the two
>>> locks in writer mode, as the other (racing) one will acquire that
>>> same lock for at least reading. The question whether both need to use
>>> writer mode can only be decided when looking at the sites acquiring
>>> just one of the locks in reader mode (hence the reference to
>>> evtchn_status() and domain_dump_evtchn_info()) - if races with them
>>> are deemed to be a problem, switching to both-writers will be needed.
>>
>> I had another look at the code based on your explanation. I don't think
>> it is fine to allow evtchn_status() to be concurrently called with
>> evtchn_close().
>>
>> evtchn_close() contains the following code:
>>
>>     chn2->state = ECS_UNBOUND;
>>     chn2->u.unbound.remote_domid = d1->domain_id;
>>
>> Where chn2 is an event channel of the remote domain (d2). Your patch will
>> only hold the read lock for d2.
>>
>> However evtchn_status() expects the event channel state to not change
>> behind its back. This assumption doesn't hold for d2, and you could
>> possibly end up seeing the new value of chn2->state after the new
>> chn2->u.unbound.remote_domid.
>>
>> Thankfully, it doesn't look like chn2->u.interdomain.remote_domain
>> would be overwritten. Otherwise, this would be a straight dereference of
>> an invalid pointer.
>>
>> So I think we need to hold the write event lock for both domains.
> 
> Well, okay. Three considerations though:
> 
> 1) Neither evtchn_status() nor domain_dump_evtchn_info() appear to
> have a real need to acquire the per-domain lock. They could as well
> acquire the per-channel ones. (In the latter case this will then
> also allow inserting the so far missing process_pending_softirqs()
> call; it shouldn't be made with a lock held.)
I agree that evtchn_status() doesn't need to acquire the per-domain 
lock. I am not entirely sure about domain_dump_evtchn_info() because 
AFAICT the PIRQ tree (used by domain_pirq_to_irq()) is protected with 
d->event_lock.

> 
> 2) With the double-locking changed and with 1) addressed, there's
> going to be almost no read_lock() left. hvm_migrate_pirqs() and
> do_physdev_op()'s PHYSDEVOP_eoi handling, evtchn_move_pirqs(), and
> hvm_dpci_msi_eoi(). While for these it may still be helpful to be
> able to run in parallel, I nevertheless wonder whether the
> change as a whole is still worthwhile.

I can see some value in one future case. As we are going to support 
non-cooperative migration of guests, we will need to restore event 
channels (including PIRQs) for the domain. From my understanding, when 
the vCPU is first scheduled we will end up migrating all the interrupts, 
as the vCPU may not be created on the targeted pCPU.

So allowing evtchn_move_pirqs() and hvm_migrate_pirqs() to run in 
parallel would slightly reduce the downtime, although I don't have any 
numbers backing this statement.

> 3) With the per-channel double locking and with 1) addressed I
> can't really see the need for the double per-domain locking in
> evtchn_bind_interdomain() and evtchn_close(). The write lock is
> needed for the domain allocating a new port or freeing one. But why
> is there any need for holding the remote domain's lock, when its
> side of the channel gets guarded by the per-channel lock anyway?

If 1) is addressed, then I think it should be fine to just acquire the 
read event lock of the remote domain.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 12:12:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 12:12:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58326.102440 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks2zj-00079s-5m; Wed, 23 Dec 2020 12:11:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58326.102440; Wed, 23 Dec 2020 12:11:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks2zj-00079l-2W; Wed, 23 Dec 2020 12:11:55 +0000
Received: by outflank-mailman (input) for mailman id 58326;
 Wed, 23 Dec 2020 12:11:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ks2zh-00078v-DI
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 12:11:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks2zh-0005yT-09; Wed, 23 Dec 2020 12:11:53 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks2zg-000808-I3; Wed, 23 Dec 2020 12:11:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=F6fSyM14HFpSsU75WnH6iy/6U807IzlHqi196s7yAv0=; b=DTU1mjfuD/nWu/HUv19zFxr5tx
	mSoy7JW6wZwaaOC1//V2VbelPTVWkmsK2PYuPsp+DvQ9zIZOHzKUNDsbTTeWipyK8+wBwFd7uB7lL
	SVggZHmkDQkQugerLknmNtJdTc4r/gHffAfThG6x7b0omZT4bQYjyGqidIRrbdLyreaE=;
Subject: Re: xen/evtchn: Interrupt for port 34, but apparently not enabled;
 per-user 00000000a86a4c1b on 5.10
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>, aams@amazon.de
Cc: linux-kernel@vger.kernel.org,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 foersleo@amazon.de
References: <ce881240-284f-8470-10f1-5cce353ee903@xen.org>
 <b5c32c48-3e74-2045-62ec-560b19766389@suse.com>
 <da65a69e-389b-1602-1479-6799ce10c101@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <0a577a27-2d7d-4717-a959-1d6b3adc580c@xen.org>
Date: Wed, 23 Dec 2020 12:11:50 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <da65a69e-389b-1602-1479-6799ce10c101@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 15/12/2020 10:20, Jürgen Groß wrote:
> On 15.12.20 08:27, Jürgen Groß wrote:
>> On 14.12.20 22:25, Julien Grall wrote:
>>> Hi Juergen,
>>>
>>> When testing Linux 5.10 dom0, I could reliably hit the following 
>>> warning when using the 2L event ABI:
>>>
>>> [  589.591737] Interrupt for port 34, but apparently not enabled; 
>>> per-user 00000000a86a4c1b
>>> [  589.593259] WARNING: CPU: 0 PID: 1111 at 
>>> /home/ANT.AMAZON.COM/jgrall/works/oss/linux/drivers/xen/evtchn.c:170 
>>> evtchn_interrupt+0xeb/0x100
>>> [  589.595514] Modules linked in:
>>> [  589.596145] CPU: 0 PID: 1111 Comm: qemu-system-i38 Tainted: G 
>>> W         5.10.0+ #180
>>> [  589.597708] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), 
>>> BIOS rel-1.12.0-59-gc9ba5276e321-prebuilt.qemu.org 04/01/2014
>>> [  589.599782] RIP: e030:evtchn_interrupt+0xeb/0x100
>>> [  589.600698] Code: 48 8d bb d8 01 00 00 ba 01 00 00 00 be 1d 00 00 
>>> 00 e8 d9 10 ca ff eb b2 8b 75 20 48 89 da 48 c7 c7 a8 31 3d 82 e8 65 
>>> 29 a0 ff <0f> 0b e9 42 ff ff ff 0f 1f 40 00 66 2e 0f 1f 84 00 00 00 
>>> 00 00 0f
>>> [  589.604087] RSP: e02b:ffffc90040003e70 EFLAGS: 00010086
>>> [  589.605102] RAX: 0000000000000000 RBX: ffff888102091800 RCX: 
>>> 0000000000000027
>>> [  589.606445] RDX: 0000000000000000 RSI: ffff88817fe19150 RDI: 
>>> ffff88817fe19158
>>> [  589.607790] RBP: ffff88810f5ab980 R08: 0000000000000001 R09: 
>>> 0000000000328980
>>> [  589.609134] R10: 0000000000000000 R11: ffffc90040003c70 R12: 
>>> ffff888107fd3c00
>>> [  589.610484] R13: ffffc90040003ed4 R14: 0000000000000000 R15: 
>>> ffff88810f5ffd80
>>> [  589.611828] FS:  00007f960c4b8ac0(0000) GS:ffff88817fe00000(0000) 
>>> knlGS:0000000000000000
>>> [  589.613348] CS:  10000e030 DS: 0000 ES: 0000 CR0: 0000000080050033
>>> [  589.614525] CR2: 00007f17ee72e000 CR3: 000000010f5b6000 CR4: 
>>> 0000000000050660
>>> [  589.615874] Call Trace:
>>> [  589.616402]  <IRQ>
>>> [  589.616855]  __handle_irq_event_percpu+0x4e/0x2c0
>>> [  589.617784]  handle_irq_event_percpu+0x30/0x80
>>> [  589.618660]  handle_irq_event+0x3a/0x60
>>> [  589.619428]  handle_edge_irq+0x9b/0x1f0
>>> [  589.620209]  generic_handle_irq+0x4f/0x60
>>> [  589.621008]  evtchn_2l_handle_events+0x160/0x280
>>> [  589.621913]  __xen_evtchn_do_upcall+0x66/0xb0
>>> [  589.622767]  __xen_pv_evtchn_do_upcall+0x11/0x20
>>> [  589.623665]  asm_call_irq_on_stack+0x12/0x20
>>> [  589.624511]  </IRQ>
>>> [  589.624978]  xen_pv_evtchn_do_upcall+0x77/0xf0
>>> [  589.625848]  exc_xen_hypervisor_callback+0x8/0x10
>>>
>>> This can be reproduced by creating/destroying a guest in a loop, 
>>> although I have struggled to reproduce it on a vanilla Xen.
>>>
>>> After several hours of debugging, I think I have found the root cause.
>>>
>>> While we only expect the unmask to happen when the event channel is 
>>> EOIed, there is an unmask happening as part of handle_edge_irq() 
>>> because the interrupt was seen as pending by another vCPU 
>>> (IRQS_PENDING is set).
>>>
>>> It turns out that the event channel is set for multiple vCPUs in 
>>> cpu_evtchn_mask. This is happening because the affinity is not 
>>> cleared when freeing an event channel.
>>>
>>> The implementation of evtchn_2l_handle_events() will look for all the 
>>> active interrupts for the current vCPU and only later clear the pending 
>>> bit (via the ack() callback). IOW, I believe this is not an atomic 
>>> operation.
>>>
>>> Even if Xen notifies the event to a single vCPU, 
>>> evtchn_pending_sel may still be set on the other vCPU (thanks to a 
>>> different event channel). Therefore, there is a chance that two vCPUs 
>>> will try to handle the same interrupt.
>>>
>>> The IRQ handler handle_edge_irq() is able to deal with that and will 
>>> mask/unmask the interrupt. This will mess with the lateeoi logic 
>>> (although I managed to reproduce it once without XSA-332).
>>
>> Thanks for the analysis!
>>
>>> My initial idea to fix the problem was to switch the affinity from 
>>> CPU X to CPU0 when the event channel is freed.
>>>
>>> However, I am not sure this is enough because I haven't found 
>>> anything yet preventing a race between evtchn_2l_handle_events() and 
>>> evtchn_2l_bind_vcpu().
>>>
>>> So maybe we want to introduce refcounting (if there is nothing 
>>> provided by the IRQ framework) and only unmask when the counter drops 
>>> to 0.
>>>
>>> Any opinions?
>>
>> I think we don't need a refcount, but just the internal states "masked"
>> and "eoi_pending", and unmask only if both are false. "masked" will be
>> set when the event is being masked. When delivering a lateeoi irq,
>> "eoi_pending" will be set and "masked" reset. "masked" will be reset
>> when a normal unmask is happening. And "eoi_pending" will be reset
>> when a lateeoi is signaled. Any reset of "masked" or "eoi_pending"
>> will check the other flag and do an unmask if both are false.
>>
>> I'll write a patch.
> 
> Julien, could you please test the attached (only build tested) patch?

I can boot dom0 and a guest. However, if I destroy the guest and create 
a new one, I get hundreds of WARN()s similar to the one I originally 
reported, and the guest won't boot.

The same issue can now be reproduced on a vanilla Xen 4.15 and Linux 
5.10 (no change except your patch). I haven't looked at the code, but it 
looks to me like the interrupt state is getting de-synchronized when 
re-used.

I also got the below splat once, so I am not entirely sure if this is 
related:

[   86.134903] ------------[ cut here ]------------
[   86.135950] WARNING: CPU: 0 PID: 904 at linux/kernel/softirq.c:175 
__local_bh_enable_ip+0x9a/0xd0
[   86.138232] Modules linked in:
[   86.138937] CPU: 0 PID: 904 Comm: xenstored Not tainted 5.10.0+ #183
[   86.140162] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 
rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
[   86.142221] RIP: e030:__local_bh_enable_ip+0x9a/0xd0
[   86.143204] Code: 00 ff ff 00 74 3e 65 ff 0d 63 fa f2 7e e8 0e 15 13 
00 e8 59 55 f4 ff 66 90 48 83 c4 08 5b c3 65 8b 05 46 0a f3 7e 85 c0 75 
99 <0f> 0b eb 95 48 89 3c 24 e8 93
[   86.146587] RSP: e02b:ffffc90040003ad0 EFLAGS: 00010246
[   86.147632] RAX: 0000000000000000 RBX: 0000000000000200 RCX: 
ffffe8ffffc05000
[   86.148986] RDX: 00000000000000f7 RSI: 0000000000000200 RDI: 
ffffffff81a4e9c3
[   86.150354] RBP: ffff8881003124b0 R08: 0000000000000001 R09: 
0000000000000000
[   86.151722] R10: 0000000000000000 R11: 0000000000000000 R12: 
ffff88810d150600
[   86.153078] R13: ffff88810d2b3f8e R14: ffff8881030b0000 R15: 
ffffc90040003c10
[   86.154422] FS:  00007f39a12f5240(0000) GS:ffff88817fe00000(0000) 
knlGS:0000000000000000
[   86.155958] CS:  e030 DS: 0000 ES: 0000 CR0: 0000000080050033
[   86.157063] CR2: 00007f0754002858 CR3: 000000010e13a000 CR4: 
0000000000050660
[   86.158409] Call Trace:
[   86.158959]  <IRQ>
[   86.159425]  ? __local_bh_disable_ip+0x4b/0x60
[   86.160329]  ipt_do_table+0x36f/0x660
[   86.161079]  ? lock_acquire+0x252/0x3a0
[   86.161846]  ? ip_local_deliver+0x70/0x200
[   86.162664]  nf_hook_slow+0x43/0xb0
[   86.163398]  ip_local_deliver+0x15b/0x200
[   86.164200]  ? ip_protocol_deliver_rcu+0x270/0x270
[   86.165146]  ip_rcv+0x13a/0x210
[   86.165794]  ? __lock_acquire+0x2e2/0x1a30
[   86.166610]  ? lock_is_held_type+0xe9/0x110
[   86.167477]  __netif_receive_skb_core+0x414/0xf60
[   86.168468]  ? find_held_lock+0x2d/0x90
[   86.169279]  ? __netif_receive_skb_list_core+0x134/0x2d0
[   86.170353]  __netif_receive_skb_list_core+0x134/0x2d0
[   86.171373]  netif_receive_skb_list_internal+0x1ef/0x3c0
[   86.172402]  ? e1000_clean_rx_irq+0x338/0x3d0
[   86.173285]  gro_normal_list.part.149+0x19/0x40
[   86.174171]  napi_complete_done+0xf3/0x1a0
[   86.175013]  e1000e_poll+0xc9/0x2b0
[   86.175738]  net_rx_action+0x176/0x4e0
[   86.176500]  __do_softirq+0xd4/0x432
[   86.177230]  irq_exit_rcu+0xbc/0xc0
[   86.177946]  asm_call_irq_on_stack+0xf/0x20
[   86.178780]  </IRQ>
[   86.179267]  xen_pv_evtchn_do_upcall+0x77/0xf0
[   86.180169]  exc_xen_hypervisor_callback+0x8/0x10
[   86.181098] RIP: e030:xen_hypercall_domctl+0xa/0x20
[   86.182055] Code: 51 41 53 b8 23 00 00 00 0f 05 41 5b 59 c3 cc cc cc 
cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 51 41 53 b8 24 00 00 00 0f 
05 <41> 5b 59 c3 cc cc cc cc cc cc
[   86.185490] RSP: e02b:ffffc900407dbe18 EFLAGS: 00000282
[   86.186563] RAX: 0000000000000000 RBX: ffff888104f7b000 RCX: 
ffffffff8100248a
[   86.187973] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 
deadbeefdeadf00d
[   86.189381] RBP: ffffffffffffffff R08: 0000000000000000 R09: 
0000000000000000
[   86.190753] R10: 0000000000000000 R11: 0000000000000282 R12: 
0000000000305000
[   86.192113] R13: 00007ffdc8d6a5a0 R14: 0000000000000008 R15: 
0000000000000000
[   86.193473]  ? xen_hypercall_domctl+0xa/0x20
[   86.194324]  ? privcmd_ioctl+0x179/0xa80
[   86.195131]  ? common_file_perm+0x84/0x2c0
[   86.195956]  ? __x64_sys_ioctl+0x8e/0xd0
[   86.196744]  ? lockdep_hardirqs_on+0x4d/0xf0
[   86.197598]  ? do_syscall_64+0x33/0x80
[   86.198347]  ? entry_SYSCALL_64_after_hwframe+0x44/0xa9
[   86.199402] irq event stamp: 39806070
[   86.200168] hardirqs last  enabled at (39806078): 
[<ffffffff8116c714>] console_unlock+0x4b4/0x5b0
[   86.201832] hardirqs last disabled at (39806085): 
[<ffffffff8116c670>] console_unlock+0x410/0x5b0
[   86.203508] softirqs last  enabled at (39685022): 
[<ffffffff81e0030f>] __do_softirq+0x30f/0x432
[   86.205140] softirqs last disabled at (39805575): 
[<ffffffff810e857c>] irq_exit_rcu+0xbc/0xc0
[   86.206739] ---[ end trace 178144c74d23e738 ]---

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 12:57:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 12:57:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58332.102452 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks3hJ-0002IN-OE; Wed, 23 Dec 2020 12:56:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58332.102452; Wed, 23 Dec 2020 12:56:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks3hJ-0002IG-LC; Wed, 23 Dec 2020 12:56:57 +0000
Received: by outflank-mailman (input) for mailman id 58332;
 Wed, 23 Dec 2020 12:56:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ks3hI-0002I8-Fp; Wed, 23 Dec 2020 12:56:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ks3hI-0006fW-8T; Wed, 23 Dec 2020 12:56:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ks3hH-0007UC-Pu; Wed, 23 Dec 2020 12:56:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ks3hH-0002KV-PQ; Wed, 23 Dec 2020 12:56:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fDQalGy3AzeZIkWBHDNhcmJNnI7vgQAMya9ADckr7nQ=; b=di7+8qylhGcI+ljHQE8quQBBG4
	OrjC6X49/zj9f1pT+xAzDmJKIeUWoDR4216zw2nWeGeSrhEjd38GOoBR/ii/1sCJWwn+RLma++ETZ
	NfktO2ygLFMDesSeYN+6AyCbLl1BtnYBdl8LfZegJNh+p6jSVffTy+W+W9vVVeZ0nVH0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157842-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157842: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=614cb5894306cfa2c7d9b6168182876ff5948735
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 23 Dec 2020 12:56:55 +0000

flight 157842 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157842/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-arm64-arm64-xl          10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                614cb5894306cfa2c7d9b6168182876ff5948735
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  144 days
Failing since        152366  2020-08-01 20:49:34 Z  143 days  248 attempts
Testing same since   157842  2020-12-23 00:42:00 Z    0 days    1 attempts

------------------------------------------------------------
4313 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 968560 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 12:58:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 12:58:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58336.102466 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks3iK-0002PL-6l; Wed, 23 Dec 2020 12:58:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58336.102466; Wed, 23 Dec 2020 12:58:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks3iK-0002PE-3f; Wed, 23 Dec 2020 12:58:00 +0000
Received: by outflank-mailman (input) for mailman id 58336;
 Wed, 23 Dec 2020 12:57:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zN8f=F3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ks3iI-0002P6-Co
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 12:57:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4197f79d-6117-4178-bad3-463784b843e4;
 Wed, 23 Dec 2020 12:57:57 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 679A1ACF5;
 Wed, 23 Dec 2020 12:57:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4197f79d-6117-4178-bad3-463784b843e4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608728276; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TTSZKHnJ9KUXBRG35fiTp3KF/r3oGEaIooh6/1OmZRk=;
	b=fR3HCEhUMLl6B5MUAUTff/kiFstL8yTchF6MRLL8wE/AD3Ei7knURiAjPh9F3+bZU2FZ01
	oaYXNUiZZ58NdIP9itqlDZTGLdqDmuuOC3xgKkLzkbBxyuWHgJzZZp1LxhqxPyR0LyBrjP
	d3AvRoyRITjOFLVVHP0JUXFkDgRDrnw=
Subject: Re: [PATCH v3 4/5] evtchn: convert domain event lock to an r/w one
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <a333387e-f9e5-7051-569a-1a9a37da53ca@suse.com>
 <074be931-54b0-1b0f-72d8-5bd577884814@xen.org>
 <6e34fd25-14a2-f655-b019-aca94ce086c8@suse.com>
 <55dc24b4-88c6-1b22-411e-267231632377@xen.org>
 <cf3faa68-ba4a-b864-66e0-f379a24a48ce@suse.com>
 <1f3571eb-5aec-e76e-0b61-2602356fb436@xen.org>
 <099b99bc-c544-0aa8-c3b4-4871ef618e4a@suse.com>
 <aa169dc2-77f2-b3e9-80f4-d5f4d6ea54f1@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d0b3079b-ae83-a14e-1fc6-ea76bdc7db79@suse.com>
Date: Wed, 23 Dec 2020 13:57:55 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <aa169dc2-77f2-b3e9-80f4-d5f4d6ea54f1@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.12.2020 12:22, Julien Grall wrote:
> Hi Jan,
> 
> On 22/12/2020 09:46, Jan Beulich wrote:
>> On 21.12.2020 18:45, Julien Grall wrote:
>>> On 14/12/2020 09:40, Jan Beulich wrote:
>>>> On 11.12.2020 11:57, Julien Grall wrote:
>>>>> On 11/12/2020 10:32, Jan Beulich wrote:
>>>>>> On 09.12.2020 12:54, Julien Grall wrote:
>>>>>>> On 23/11/2020 13:29, Jan Beulich wrote:
>>>>>>>> @@ -620,7 +620,7 @@ int evtchn_close(struct domain *d1, int
>>>>>>>>          long           rc = 0;
>>>>>>>>      
>>>>>>>>       again:
>>>>>>>> -    spin_lock(&d1->event_lock);
>>>>>>>> +    write_lock(&d1->event_lock);
>>>>>>>>      
>>>>>>>>          if ( !port_is_valid(d1, port1) )
>>>>>>>>          {
>>>>>>>> @@ -690,13 +690,11 @@ int evtchn_close(struct domain *d1, int
>>>>>>>>                      BUG();
>>>>>>>>      
>>>>>>>>                  if ( d1 < d2 )
>>>>>>>> -            {
>>>>>>>> -                spin_lock(&d2->event_lock);
>>>>>>>> -            }
>>>>>>>> +                read_lock(&d2->event_lock);
>>>>>>>
>>>>>>> This change made me realize that I don't quite understand how the
>>>>>>> rwlock is meant to work for event_lock. I was actually expecting this to
>>>>>>> be a write_lock() given there is state changed in the d2 events.
>>>>>>
>>>>>> Well, the protection needs to be against racing changes, i.e.
>>>>>> parallel invocations of this same function, or evtchn_close().
>>>>>> It is debatable whether evtchn_status() and
>>>>>> domain_dump_evtchn_info() would better also be locked out
>>>>>> (other read_lock() uses aren't applicable to interdomain
>>>>>> channels).
>>>>>>
>>>>>>> Could you outline how a developer can find out whether he/she should
>>>>>>> use read_lock or write_lock?
>>>>>>
>>>>>> I could try to, but it would again be a port type dependent
>>>>>> model, just like for the per-channel locks.
>>>>>
>>>>> It is quite important to have a clear locking strategy (in particular
>>>>> for rwlocks) so we can make the correct decision when to use read_lock or write_lock.
>>>>>
>>>>>> So I'd like it to
>>>>>> be clarified first whether you aren't instead indirectly
>>>>>> asking for these to become write_lock()
>>>>>
>>>>> Well, I don't understand why this is a read_lock() (even with your
>>>>> previous explanation). I am not suggesting to switch to a write_lock(),
>>>>> but instead asking for the reasoning behind the decision.
>>>>
>>>> So if what I've said in my previous reply isn't enough (including the
>>>> argument towards using two write_lock() here), I'm struggling to
>>>> figure out what else to say. The primary goal is to exclude changes to
>>>> the same ports. For this it is sufficient to hold just one of the two
>>>> locks in writer mode, as the other (racing) one will acquire that
>>>> same lock for at least reading. The question whether both need to use
>>>> writer mode can only be decided when looking at the sites acquiring
>>>> just one of the locks in reader mode (hence the reference to
>>>> evtchn_status() and domain_dump_evtchn_info()) - if races with them
>>>> are deemed to be a problem, switching to both-writers will be needed.
>>>
>>> I had another look at the code based on your explanation. I don't think
>>> it is fine to allow evtchn_status() to be concurrently called with
>>> evtchn_close().
>>>
>>> evtchn_close() contains the following code:
>>>
>>>     chn2->state = ECS_UNBOUND;
>>>     chn2->u.unbound.remote_domid = d1->domain_id;
>>>
>>> Where chn2 is an event channel of the remote domain (d2). Your patch will
>>> only hold the read lock for d2.
>>>
>>> However evtchn_status() expects the event channel state to not change
>>> behind its back. This assumption doesn't hold for d2, and you could
>>> possibly end up seeing the new value of chn2->state after the new
>>> chn2->u.unbound.remote_domid.
>>>
>>> Thankfully, it doesn't look like chn2->u.interdomain.remote_domain
>>> would be overwritten. Otherwise, this would be a straight dereference of
>>> an invalid pointer.
>>>
>>> So I think we need to hold the write event lock for both domains.
>>
>> Well, okay. Three considerations though:
>>
>> 1) Neither evtchn_status() nor domain_dump_evtchn_info() appear to
>> have a real need to acquire the per-domain lock. They could as well
>> acquire the per-channel ones. (In the latter case this will then
>> also allow inserting the so far missing process_pending_softirqs()
>> call; it shouldn't be made with a lock held.)
> I agree that evtchn_status() doesn't need to acquire the per-domain 
> lock. I am not entirely sure about domain_dump_evtchn_info() because 
> AFAICT the PIRQ tree (used by domain_pirq_to_irq()) is protected with 
> d->event_lock.

It is, but calling it without the lock just to display the IRQ
is not a problem afaict.

>> 3) With the per-channel double locking and with 1) addressed I
>> can't really see the need for the double per-domain locking in
>> evtchn_bind_interdomain() and evtchn_close(). The write lock is
>> needed for the domain allocating a new port or freeing one. But why
>> is there any need for holding the remote domain's lock, when its
>> side of the channel gets guarded by the per-channel lock anyway?
> 
> If 1) is addressed, then I think it should be fine to just acquire the 
> read event lock of the remote domain.

For bind-interdomain I've eliminated the double locking, so the
question goes away there altogether. While for close I thought
I had managed to eliminate it too, the change looks to be
causing a deadlock of some sort, which I'll have to figure out.
However, the change might be controversial anyway, because I
need to play games already prior to fixing that bug ...

All of this said - for the time being it'll be both write_lock()
in evtchn_close(), as I consider it risky to make the remote one
a read_lock() merely based on the observation that there is
currently (i.e. with 1) addressed) no conflict.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 13:12:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 13:12:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58343.102479 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks3wT-0004Bw-CQ; Wed, 23 Dec 2020 13:12:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58343.102479; Wed, 23 Dec 2020 13:12:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks3wT-0004Bp-8u; Wed, 23 Dec 2020 13:12:37 +0000
Received: by outflank-mailman (input) for mailman id 58343;
 Wed, 23 Dec 2020 13:12:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zN8f=F3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ks3wR-0004Bk-Vw
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 13:12:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ca4430d4-0964-4241-bc90-333077a17f65;
 Wed, 23 Dec 2020 13:12:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BD26FB77E;
 Wed, 23 Dec 2020 13:12:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ca4430d4-0964-4241-bc90-333077a17f65
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608729153; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vwaCuitpl43jC5mKAl/ZWKsNe0TgpOjirYi+f8gVAKM=;
	b=nZXmi5AoqJ5SLxQEAlkPhZVW+5/swI3iT5Ze0INRzbjvbZ2wvQW4/5XLqU7SW5gQ7Pm+eG
	Et0+qb8y1sJsrvcOkqWyEy1fHbnQ/zNseb6UJr6OxVbrcN8NqEzhZPEdZdOBoS2lsFUD+W
	ruINWLEOmcjtTjxbUf/TyL3nNtais5U=
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>, Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <lengyelt@ainfosec.com>,
 Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d821c715-966a-b48b-a877-c5dac36822f0@suse.com>
 <17c90493-b438-fbc1-ca10-3bc4d89c4e5e@xen.org>
 <7a768bcd-80c1-d193-8796-7fb6720fa22a@suse.com>
 <1a8250f5-ea49-ac3a-e992-be7ec40deba9@xen.org>
 <CABfawhkQcUD4f62zpg0cyrdQgG82XtpYRZZ_-50hjagooT530A@mail.gmail.com>
 <5862eb24-d894-455a-13ac-61af54f949e7@xen.org>
 <CABfawhkWQiOhLL8f3NzoWbeuag-f+YOOK0i_LJzZq5Yvoh=oHQ@mail.gmail.com>
 <fd384990-376e-40f4-f0b8-1a889b3a0c51@suse.com>
 <9ee6016a-d3b3-c847-4775-0e05c8578110@xen.org>
 <CABfawhkcHX+FSRRfYwUNd8DweW04=91sSg2PTWy7vjq_DXwMQg@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d365ce00-bc3a-de7c-565a-c4cb61063e74@suse.com>
Date: Wed, 23 Dec 2020 14:12:32 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <CABfawhkcHX+FSRRfYwUNd8DweW04=91sSg2PTWy7vjq_DXwMQg@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 07.12.2020 18:35, Tamas K Lengyel wrote:
> On Mon, Dec 7, 2020 at 12:30 PM Julien Grall <julien@xen.org> wrote:
>>
>> Hi Jan,
>>
>> On 07/12/2020 15:28, Jan Beulich wrote:
>>> On 04.12.2020 20:15, Tamas K Lengyel wrote:
>>>> On Fri, Dec 4, 2020 at 10:29 AM Julien Grall <julien@xen.org> wrote:
>>>>> On 04/12/2020 15:21, Tamas K Lengyel wrote:
>>>>>> On Fri, Dec 4, 2020 at 6:29 AM Julien Grall <julien@xen.org> wrote:
>>>>>>> On 03/12/2020 10:09, Jan Beulich wrote:
>>>>>>>> On 02.12.2020 22:10, Julien Grall wrote:
>>>>>>>>> On 23/11/2020 13:30, Jan Beulich wrote:
>>>>>>>>>> While there don't look to be any problems with this right now, the lock
>>>>>>>>>> order implications from holding the lock can be very difficult to follow
>>>>>>>>>> (and may be easy to violate unknowingly). The present callbacks don't
>>>>>>>>>> (and no such callback should) have any need for the lock to be held.
>>>>>>>>>>
>>>>>>>>>> However, vm_event_disable() frees the structures used by respective
>>>>>>>>>> callbacks and isn't otherwise synchronized with invocations of these
>>>>>>>>>> callbacks, so maintain a count of in-progress calls, for evtchn_close()
>>>>>>>>>> to wait to drop to zero before freeing the port (and dropping the lock).
>>>>>>>>>
>>>>>>>>> AFAICT, this callback is not the only place where the synchronization is
>>>>>>>>> missing in the VM event code.
>>>>>>>>>
>>>>>>>>> For instance, vm_event_put_request() can also race against
>>>>>>>>> vm_event_disable().
>>>>>>>>>
>>>>>>>>> So shouldn't we handle this issue properly in VM event?
>>>>>>>>
>>>>>>>> I suppose that's a question to the VM event folks rather than me?
>>>>>>>
>>>>>>> Yes. From my understanding of Tamas's e-mail, they are relying on the
>>>>>>> monitoring software to do the right thing.
>>>>>>>
>>>>>>> I will refrain from commenting on this approach. However, given the race is
>>>>>>> much wider than the event channel, I would recommend not adding more
>>>>>>> code in the event channel to deal with such a problem.
>>>>>>>
>>>>>>> Instead, this should be fixed in the VM event code when someone has time
>>>>>>> to harden the subsystem.
>>>>>>
>>>>>> I double-checked, and the disable route is actually more robust; we
>>>>>> don't just rely on the toolstack doing the right thing. The domain
>>>>>> gets paused before any calls to vm_event_disable. So I don't think
>>>>>> there is really a race-condition here.
>>>>>
>>>>> The code will *only* pause the monitored domain. I can see two issues:
>>>>>      1) The toolstack is still sending events while destroy is happening.
>>>>> This is the race discussed here.
>>>>>      2) The implementation of vm_event_put_request() suggests that it can
>>>>> be called on a not-current domain.
>>>>>
>>>>> I don't see how just pausing the monitored domain is enough here.
>>>>
>>>> Requests only get generated by the monitored domain. So if the domain
>>>> is not running you won't get more of them. The toolstack can only send
>>>> replies.
>>>
>>> Julien,
>>>
>>> does this change your view on the refcounting added by the patch
>>> at the root of this sub-thread?
>>
>> I still think the code is at best fragile. One example I can find is:
>>
>>    -> guest_remove_page()
>>      -> p2m_mem_paging_drop_page()
>>       -> vm_event_put_request()
>>
>> guest_remove_page() is not always called on the current domain. So there
>> is a possibility for vm_event_put_request() to happen on a foreign domain,
>> and it therefore wouldn't be protected by the current hypercall.
>>
>> Anyway, I don't think the refcounting should be part of the event
>> channel without any idea on how this would fit in fixing the VM event race.
> 
> If the problematic patterns only appear with mem_paging I would
> suggest just removing the mem_paging code completely. It's been
> abandoned for several years now.

Since this is nothing I fancy doing, the way forward here needs
to be a different one. From the input by both of you I still can't
conclude whether this patch should remain as is in v4, or revert
back to its v2 version. Please can we get this settled so I can get
v4 out?

Thanks, Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 13:19:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 13:19:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58347.102490 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks434-0004Nv-3f; Wed, 23 Dec 2020 13:19:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58347.102490; Wed, 23 Dec 2020 13:19:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks434-0004No-0a; Wed, 23 Dec 2020 13:19:26 +0000
Received: by outflank-mailman (input) for mailman id 58347;
 Wed, 23 Dec 2020 13:19:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ks432-0004Nj-MI
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 13:19:24 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks431-000730-Pl; Wed, 23 Dec 2020 13:19:23 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks431-0003h1-Gn; Wed, 23 Dec 2020 13:19:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=NbuIBhzl6Jps3KIZ/CV2Pa4RdvA7DGpXZ61NF6uZcXU=; b=w2fgR9nfYJ10HT4wULOxHJKpdk
	Z1sW3LTDC32SWeD+fLMkxPbgFBmRjVxdq4hPfq+EFm9bxIF0Bno+k1Z1uagylE+2gWuTCzHlDrqfz
	nd+jzylawlD5ZgSP0V0YGv0r6K+QuS75WskU2NZV395wvAdKwIOSuGT5wYhM50jWOqfY=;
Subject: Re: [PATCH v3 4/5] evtchn: convert domain event lock to an r/w one
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <a333387e-f9e5-7051-569a-1a9a37da53ca@suse.com>
 <074be931-54b0-1b0f-72d8-5bd577884814@xen.org>
 <6e34fd25-14a2-f655-b019-aca94ce086c8@suse.com>
 <55dc24b4-88c6-1b22-411e-267231632377@xen.org>
 <cf3faa68-ba4a-b864-66e0-f379a24a48ce@suse.com>
 <1f3571eb-5aec-e76e-0b61-2602356fb436@xen.org>
 <099b99bc-c544-0aa8-c3b4-4871ef618e4a@suse.com>
 <aa169dc2-77f2-b3e9-80f4-d5f4d6ea54f1@xen.org>
 <d0b3079b-ae83-a14e-1fc6-ea76bdc7db79@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <25585b2a-7dcd-1b46-b360-9a9e9d54b191@xen.org>
Date: Wed, 23 Dec 2020 13:19:21 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <d0b3079b-ae83-a14e-1fc6-ea76bdc7db79@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 23/12/2020 12:57, Jan Beulich wrote:
> On 23.12.2020 12:22, Julien Grall wrote:
>>> 1) Neither evtchn_status() nor domain_dump_evtchn_info() appear to
>>> have a real need to acquire the per-domain lock. They could as well
>>> acquire the per-channel ones. (In the latter case this will then
>>> also allow inserting the so far missing process_pending_softirqs()
>>> call; it shouldn't be made with a lock held.)
>> I agree that evtchn_status() doesn't need to acquire the per-domain
>> lock. I am not entirely sure about domain_dump_evtchn_info() because
>> AFAICT the PIRQ tree (used by domain_pirq_to_irq()) is protected with
>> d->event_lock.
> 
> It is, but calling it without the lock just to display the IRQ
> is not a problem afaict.

How so? Is the radix tree lookup safe against concurrent radix tree 
insertion/deletion?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 13:27:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 13:27:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58351.102502 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks4B5-0005Jb-VK; Wed, 23 Dec 2020 13:27:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58351.102502; Wed, 23 Dec 2020 13:27:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks4B5-0005JU-SK; Wed, 23 Dec 2020 13:27:43 +0000
Received: by outflank-mailman (input) for mailman id 58351;
 Wed, 23 Dec 2020 13:27:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zN8f=F3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ks4B4-0005JP-Jp
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 13:27:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6861c487-a0e5-4f2b-b21d-ab323437cafb;
 Wed, 23 Dec 2020 13:27:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F0F52AD45;
 Wed, 23 Dec 2020 13:27:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6861c487-a0e5-4f2b-b21d-ab323437cafb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608730061; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=OdUfEwYEnASrmMpQj9X+Zt6vSHa6Px9UUqg94/zIqhc=;
	b=Yp25mES6hT90Hth+GskiQN/MBcXRiRXBnrXO0NZXP0JtR9k2XaPBqLVyVyiQHoX0cXLJtE
	RpFhcFoK/kZxAx/Zv4kNk5ya0Qe60eG7qrU0rCIIpjkuoQPfq7QUK5j+8F5SSOR+U1RFY3
	ghFP9HKGSIxJ/rHcOc077F3jOYrTlCo=
Subject: Re: [PATCH for-4.15 1/4] xen/iommu: Check if the IOMMU was
 initialized before tearing down
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20201222154338.9459-1-julien@xen.org>
 <20201222154338.9459-2-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d9dd2fbc-d4bb-6b12-73e7-52a4fdda9020@suse.com>
Date: Wed, 23 Dec 2020 14:27:40 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <20201222154338.9459-2-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 22.12.2020 16:43, Julien Grall wrote:
> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -226,7 +226,15 @@ static void iommu_teardown(struct domain *d)
>  
>  void iommu_domain_destroy(struct domain *d)
>  {
> -    if ( !is_iommu_enabled(d) )
> +    struct domain_iommu *hd = dom_iommu(d);
> +
> +    /*
> +     * In case of failure during the domain construction, it would be
> +     * possible to reach this function with the IOMMU enabled but not
> +     * yet initialized. We assume that hd->platforms will be non-NULL as
> +     * soon as we start to initialize the IOMMU.
> +     */
> +    if ( !is_iommu_enabled(d) || !hd->platform_ops )
>          return;

From your description I'd rather say is_iommu_enabled() is the
wrong predicate to use here. IOW I'd rather see it be replaced.

A couple of additional nits: "hd" isn't really necessary to
have as a local variable, but if you insist on introducing it
despite being used just once, it should be pointer-to-const.
Plus the comment would better spell correctly the field it
talks about. But I consider this comment excessive anyway, as
the check of ->platform_ops is quite clear by itself.

And finally "we assume" is in at least latent conflict with
->platform_ops getting set only after arch_iommu_domain_init()
was called. Right now neither x86'es nor Arm's do anything
that would need undoing, but I'd still suggest to replace
"assume" here (if you want to keep the comment in the first
place) and move the assignment up a few lines (right after the
is_iommu_enabled() check there).

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 13:33:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 13:33:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58355.102515 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks4GM-0006Cl-Ix; Wed, 23 Dec 2020 13:33:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58355.102515; Wed, 23 Dec 2020 13:33:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks4GM-0006Ce-G1; Wed, 23 Dec 2020 13:33:10 +0000
Received: by outflank-mailman (input) for mailman id 58355;
 Wed, 23 Dec 2020 13:33:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ks4GL-0006CZ-Gg
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 13:33:09 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks4GJ-0007HC-U5; Wed, 23 Dec 2020 13:33:07 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks4GJ-0004cY-FE; Wed, 23 Dec 2020 13:33:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=c9TD9uZevS5g6BoIkLC32L5rxCE3UCJvDx919iPfCak=; b=QNUJVOQXWn8wJRtfEeDciDRpxG
	vka0Ua+aWPyekC+5LwfDlFh+38Jf0UJdKVCi+lVEaSlhPWBFtUptzaVvx44aBhpjJ41ARHwitmzjZ
	J+281DU64cLHAr6D8rL1CpM6Ubzgha40wFa9pQTIJke0Plw0Ha1O/3zhtt05lIEv055Y=;
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Jan Beulich <jbeulich@suse.com>,
 Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <lengyelt@ainfosec.com>,
 Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d821c715-966a-b48b-a877-c5dac36822f0@suse.com>
 <17c90493-b438-fbc1-ca10-3bc4d89c4e5e@xen.org>
 <7a768bcd-80c1-d193-8796-7fb6720fa22a@suse.com>
 <1a8250f5-ea49-ac3a-e992-be7ec40deba9@xen.org>
 <CABfawhkQcUD4f62zpg0cyrdQgG82XtpYRZZ_-50hjagooT530A@mail.gmail.com>
 <5862eb24-d894-455a-13ac-61af54f949e7@xen.org>
 <CABfawhkWQiOhLL8f3NzoWbeuag-f+YOOK0i_LJzZq5Yvoh=oHQ@mail.gmail.com>
 <fd384990-376e-40f4-f0b8-1a889b3a0c51@suse.com>
 <9ee6016a-d3b3-c847-4775-0e05c8578110@xen.org>
 <CABfawhkcHX+FSRRfYwUNd8DweW04=91sSg2PTWy7vjq_DXwMQg@mail.gmail.com>
 <d365ce00-bc3a-de7c-565a-c4cb61063e74@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <ed5fc3e2-42b1-477a-c424-05ddf7fd3bf4@xen.org>
Date: Wed, 23 Dec 2020 13:33:05 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <d365ce00-bc3a-de7c-565a-c4cb61063e74@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 23/12/2020 13:12, Jan Beulich wrote:
> On 07.12.2020 18:35, Tamas K Lengyel wrote:
>> On Mon, Dec 7, 2020 at 12:30 PM Julien Grall <julien@xen.org> wrote:
>>>
>>> Hi Jan,
>>>
>>> On 07/12/2020 15:28, Jan Beulich wrote:
>>>> On 04.12.2020 20:15, Tamas K Lengyel wrote:
>>>>> On Fri, Dec 4, 2020 at 10:29 AM Julien Grall <julien@xen.org> wrote:
>>>>>> On 04/12/2020 15:21, Tamas K Lengyel wrote:
>>>>>>> On Fri, Dec 4, 2020 at 6:29 AM Julien Grall <julien@xen.org> wrote:
>>>>>>>> On 03/12/2020 10:09, Jan Beulich wrote:
>>>>>>>>> On 02.12.2020 22:10, Julien Grall wrote:
>>>>>>>>>> On 23/11/2020 13:30, Jan Beulich wrote:
>>>>>>>>>>> While there don't look to be any problems with this right now, the lock
>>>>>>>>>>> order implications from holding the lock can be very difficult to follow
>>>>>>>>>>> (and may be easy to violate unknowingly). The present callbacks don't
>>>>>>>>>>> (and no such callback should) have any need for the lock to be held.
>>>>>>>>>>>
>>>>>>>>>>> However, vm_event_disable() frees the structures used by respective
>>>>>>>>>>> callbacks and isn't otherwise synchronized with invocations of these
>>>>>>>>>>> callbacks, so maintain a count of in-progress calls, for evtchn_close()
>>>>>>>>>>> to wait to drop to zero before freeing the port (and dropping the lock).
>>>>>>>>>>
>>>>>>>>>> AFAICT, this callback is not the only place where the synchronization is
>>>>>>>>>> missing in the VM event code.
>>>>>>>>>>
>>>>>>>>>> For instance, vm_event_put_request() can also race against
>>>>>>>>>> vm_event_disable().
>>>>>>>>>>
>>>>>>>>>> So shouldn't we handle this issue properly in VM event?
>>>>>>>>>
>>>>>>>>> I suppose that's a question to the VM event folks rather than me?
>>>>>>>>
>>>>>>>> Yes. From my understanding of Tamas's e-mail, they are relying on the
>>>>>>>> monitoring software to do the right thing.
>>>>>>>>
>>>>>>>> I will refrain from commenting on this approach. However, given the race
>>>>>>>> is much wider than the event channel, I would recommend not adding more
>>>>>>>> code in the event channel to deal with such a problem.
>>>>>>>>
>>>>>>>> Instead, this should be fixed in the VM event code when someone has time
>>>>>>>> to harden the subsystem.
>>>>>>>
>>>>>>> I double-checked and the disable route is actually more robust; we
>>>>>>> don't just rely on the toolstack doing the right thing. The domain
>>>>>>> gets paused before any calls to vm_event_disable. So I don't think
>>>>>>> there is really a race-condition here.
>>>>>>
>>>>>> The code will *only* pause the monitored domain. I can see two issues:
>>>>>>       1) The toolstack is still sending events while destroy is happening.
>>>>>> This is the race discussed here.
>>>>>>       2) The implementation of vm_event_put_request() suggests that it can
>>>>>> be called with a non-current domain.
>>>>>>
>>>>>> I don't see how just pausing the monitored domain is enough here.
>>>>>
>>>>> Requests only get generated by the monitored domain. So if the domain
>>>>> is not running you won't get more of them. The toolstack can only send
>>>>> replies.
>>>>
>>>> Julien,
>>>>
>>>> does this change your view on the refcounting added by the patch
>>>> at the root of this sub-thread?
>>>
>>> I still think the code is at best fragile. One example I can find is:
>>>
>>>     -> guest_remove_page()
>>>       -> p2m_mem_paging_drop_page()
>>>        -> vm_event_put_request()
>>>
>>> guest_remove_page() is not always called on the current domain. So there
>>> is a possibility for vm_event_put_request() to happen on a foreign domain,
>>> in which case it wouldn't be protected by the current hypercall.
>>>
>>> Anyway, I don't think the refcounting should be part of the event
>>> channel without any idea on how this would fit in fixing the VM event race.
>>
>> If the problematic patterns only appear with mem_paging I would
>> suggest just removing the mem_paging code completely. It's been
>> abandoned for several years now.
> 
> Since this is nothing I'm fancying doing, the way forward here needs
> to be a different one. From the input by both of you I still can't
> conclude whether this patch should remain as is in v4, or revert
> back to its v2 version. Please can we get this settled so I can get
> v4 out?

I haven't had time to investigate the rest of the VM event code to find 
other cases where this may happen. I still think there is a bigger 
problem in the VM event code, but the maintainer disagrees here.

At which point, I see limited reason to try to paper over it in the common
code. So I would rather ack/merge v2 than v3.

Cheers,

-- 
Julien Grall
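
The in-progress-call counting described in the patch description quoted earlier in this thread can be sketched in self-contained C; every name below is a stand-in for illustration, not the actual Xen implementation:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/*
 * Model of the scheme: Xen-consumer callbacks run without the
 * per-channel lock held, and a count of in-progress calls lets the
 * close path wait for it to drop to zero before freeing the port.
 */
static atomic_int callbacks_in_flight;

/* Invoke a Xen-consumer callback: counted, but lock-free. */
static void invoke_xen_consumer(void (*cb)(void))
{
    atomic_fetch_add(&callbacks_in_flight, 1);
    cb();                       /* per-channel lock NOT held here */
    atomic_fetch_sub(&callbacks_in_flight, 1);
}

/* evtchn_close() may free the port only once this returns true. */
static bool port_free_allowed(void)
{
    return atomic_load(&callbacks_in_flight) == 0;
}

static void sample_callback(void)
{
    /* While a callback is in flight, freeing must be deferred. */
    assert(!port_free_allowed());
}
```

Once the last in-flight callback returns, `port_free_allowed()` becomes true and the close path can proceed.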


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 13:36:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 13:36:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58360.102530 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks4Jr-0006Mo-BM; Wed, 23 Dec 2020 13:36:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58360.102530; Wed, 23 Dec 2020 13:36:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks4Jr-0006Mh-83; Wed, 23 Dec 2020 13:36:47 +0000
Received: by outflank-mailman (input) for mailman id 58360;
 Wed, 23 Dec 2020 13:36:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zN8f=F3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ks4Jp-0006Mc-N2
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 13:36:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id df52edab-390b-4c30-a12d-bbabe2663f2f;
 Wed, 23 Dec 2020 13:36:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2FDC1ACF1;
 Wed, 23 Dec 2020 13:36:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df52edab-390b-4c30-a12d-bbabe2663f2f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608730602; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=9fzAQALtbXHWWto4J3M1V4rToZAfSJS8999UCv0kqgE=;
	b=J/dKniey+COQucY50tJnS7ED+smE2K9Z88nJ8mEHLoGv5kH1guKa67rJXYz3dn3n/3MRjk
	LnPJG+I2wnkKcNxyUn7a/OMHJKk1OOremgG/WVfdm/l4hdj9hO2Wa8aeRUDtjEVkQAadhI
	fIVwyBg44IrCK47vFYLb6odaCSSKX3w=
Subject: Re: [PATCH v3 4/5] evtchn: convert domain event lock to an r/w one
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <a333387e-f9e5-7051-569a-1a9a37da53ca@suse.com>
 <074be931-54b0-1b0f-72d8-5bd577884814@xen.org>
 <6e34fd25-14a2-f655-b019-aca94ce086c8@suse.com>
 <55dc24b4-88c6-1b22-411e-267231632377@xen.org>
 <cf3faa68-ba4a-b864-66e0-f379a24a48ce@suse.com>
 <1f3571eb-5aec-e76e-0b61-2602356fb436@xen.org>
 <099b99bc-c544-0aa8-c3b4-4871ef618e4a@suse.com>
 <aa169dc2-77f2-b3e9-80f4-d5f4d6ea54f1@xen.org>
 <d0b3079b-ae83-a14e-1fc6-ea76bdc7db79@suse.com>
 <25585b2a-7dcd-1b46-b360-9a9e9d54b191@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a7176b02-7aa7-9f0f-81ff-bef1757ed4f3@suse.com>
Date: Wed, 23 Dec 2020 14:36:41 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <25585b2a-7dcd-1b46-b360-9a9e9d54b191@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.12.2020 14:19, Julien Grall wrote:
> On 23/12/2020 12:57, Jan Beulich wrote:
>> On 23.12.2020 12:22, Julien Grall wrote:
>>>> 1) Neither evtchn_status() nor domain_dump_evtchn_info() appear to
>>>> have a real need to acquire the per-domain lock. They could as well
>>>> acquire the per-channel ones. (In the latter case this will then
>>>> also allow inserting the so far missing process_pending_softirqs()
>>>> call; it shouldn't be made with a lock held.)
>>> I agree that evtchn_status() doesn't need to acquire the per-domain
>>> lock. I am not entirely sure about domain_dump_evtchn_info() because
>>> AFAICT the PIRQ tree (used by domain_pirq_to_irq()) is protected with
>>> d->event_lock.
>>
>> It is, but calling it without the lock just to display the IRQ
>> is not a problem afaict.
> 
> How so? Is the radix tree lookup safe against concurrent radix tree 
> insertion/deletion?

Hmm, looks like I was misguided by pirq_field() tolerating NULL
coming back from radix_tree_lookup(). Indeed, if the tree
structure changed, there would be a problem. But this can't be
solved by holding the lock across the entire loop - as said
earlier, the loop body needs to gain a
process_pending_softirqs() invocation, and that needs to happen
with no locks held. The only way I can see us achieving this is
to drop the per-channel lock prior to calling
domain_pirq_to_irq().
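
The per-iteration shape described above could be modelled as follows — a toy per-port flag stands in for Xen's per-channel spinlock, and the names are illustrative only:

```c
#include <assert.h>
#include <stdbool.h>

#define NR_PORTS 4

/* Toy per-channel "locks"; stand-ins for Xen's evtchn spinlocks. */
static bool chn_locked[NR_PORTS];

static bool any_lock_held(void)
{
    for ( int i = 0; i < NR_PORTS; i++ )
        if ( chn_locked[i] )
            return true;
    return false;
}

static int softirq_calls;
static void process_pending_softirqs(void)
{
    assert(!any_lock_held()); /* must be reached with no locks held */
    ++softirq_calls;
}

static int domain_pirq_to_irq(int pirq)
{
    return pirq + 16; /* stub lookup; the real one reads a radix tree */
}

/*
 * Per-iteration shape: snapshot channel state under the per-channel
 * lock, drop it, then do the PIRQ lookup and softirq processing
 * lock-free.
 */
void dump_evtchn_info(void)
{
    for ( int port = 0; port < NR_PORTS; port++ )
    {
        chn_locked[port] = true;
        int pirq = port;           /* read channel state under lock */
        chn_locked[port] = false;  /* drop before domain_pirq_to_irq() */
        (void)domain_pirq_to_irq(pirq);
        process_pending_softirqs();
    }
}
```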

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 13:41:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 13:41:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58365.102545 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks4Og-0007Gg-1w; Wed, 23 Dec 2020 13:41:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58365.102545; Wed, 23 Dec 2020 13:41:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks4Of-0007GZ-VL; Wed, 23 Dec 2020 13:41:45 +0000
Received: by outflank-mailman (input) for mailman id 58365;
 Wed, 23 Dec 2020 13:41:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zN8f=F3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ks4Oe-0007GU-Ft
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 13:41:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7ffadb67-6490-4775-bd1d-e3bd3c64279b;
 Wed, 23 Dec 2020 13:41:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A5606AD45;
 Wed, 23 Dec 2020 13:41:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ffadb67-6490-4775-bd1d-e3bd3c64279b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608730902; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JyMtVEp3gj3hiB5cHZ50ugknG+GdM6y8lldwRx7V01Q=;
	b=XvCj9/2FPtTy18Ar2ruseDsjBaT2ulvmIKXN2AheOQK5cAsEBqg4Prhnm30EFhubk+VP5V
	DHIBBhLIqXqZumoNTYhmNm8TecyjH2oHUjtJdsQcJSz7KR88tetj4IdYWzo9hyX7baeFvS
	/B5sXpTkH9gaN0OJtV9AXnffLJ4w+ks=
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Julien Grall <julien@xen.org>, Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <lengyelt@ainfosec.com>,
 Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d821c715-966a-b48b-a877-c5dac36822f0@suse.com>
 <17c90493-b438-fbc1-ca10-3bc4d89c4e5e@xen.org>
 <7a768bcd-80c1-d193-8796-7fb6720fa22a@suse.com>
 <1a8250f5-ea49-ac3a-e992-be7ec40deba9@xen.org>
 <CABfawhkQcUD4f62zpg0cyrdQgG82XtpYRZZ_-50hjagooT530A@mail.gmail.com>
 <5862eb24-d894-455a-13ac-61af54f949e7@xen.org>
 <CABfawhkWQiOhLL8f3NzoWbeuag-f+YOOK0i_LJzZq5Yvoh=oHQ@mail.gmail.com>
 <fd384990-376e-40f4-f0b8-1a889b3a0c51@suse.com>
 <9ee6016a-d3b3-c847-4775-0e05c8578110@xen.org>
 <CABfawhkcHX+FSRRfYwUNd8DweW04=91sSg2PTWy7vjq_DXwMQg@mail.gmail.com>
 <d365ce00-bc3a-de7c-565a-c4cb61063e74@suse.com>
 <ed5fc3e2-42b1-477a-c424-05ddf7fd3bf4@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3b339f30-57db-caf6-fd7e-84199f98546f@suse.com>
Date: Wed, 23 Dec 2020 14:41:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <ed5fc3e2-42b1-477a-c424-05ddf7fd3bf4@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.12.2020 14:33, Julien Grall wrote:
> On 23/12/2020 13:12, Jan Beulich wrote:
>> From the input by both of you I still can't
>> conclude whether this patch should remain as is in v4, or revert
>> back to its v2 version. Please can we get this settled so I can get
>> v4 out?
> 
> I haven't had time to investigate the rest of the VM event code to find 
> other cases where this may happen. I still think there is a bigger 
> problem in the VM event code, but the maintainer disagrees here.
> 
> At which point, I see limited reason to try to paper over it in the common
> code. So I would rather ack/merge v2 than v3.

Since I expect Tamas and/or the Bitdefender folks to be of the
opposite opinion, there's still no way out, at least if "rather
ack" implies a nak for v3. Personally, if this expectation of
mine is correct, I'd prefer to keep the accounting but make it
optional (as suggested in a post-commit-message remark).
That'll eliminate the overhead you appear to be concerned about,
but of course it'll further complicate the logic (albeit just
slightly).

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 13:45:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 13:45:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58369.102557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks4Ro-0007QI-H9; Wed, 23 Dec 2020 13:45:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58369.102557; Wed, 23 Dec 2020 13:45:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks4Ro-0007QB-E7; Wed, 23 Dec 2020 13:45:00 +0000
Received: by outflank-mailman (input) for mailman id 58369;
 Wed, 23 Dec 2020 13:44:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ks4Rm-0007Q2-QD; Wed, 23 Dec 2020 13:44:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ks4Rm-0007St-JQ; Wed, 23 Dec 2020 13:44:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ks4Rm-0001nE-CS; Wed, 23 Dec 2020 13:44:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ks4Rm-0000du-Bz; Wed, 23 Dec 2020 13:44:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mDpyedJtM8glQu93LSIQkpra6kkzmgeGfeE5fUHpEFc=; b=laUCozbI62S5ukILQOSB22S0Ai
	ISpszq5ySpvCytur+VXhTbJ1Wch9OPgjOkoPRnSCpr/BVS6+JB9Oo7t08y6V9Jer+UlHn+kERzdUw
	D5zoJ8PhS879RZcBdtZ7xlHuX95u+UgcRGF27G+wJUlVQ5muPtg8fExDaGm1fuDIKYHE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157848-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157848: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=d15d0d3d8aee1c7d5dab7b636601370061b32612
X-Osstest-Versions-That:
    ovmf=d21d2706761bede7db38929abc5613f3e71c64ba
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 23 Dec 2020 13:44:58 +0000

flight 157848 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157848/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 d15d0d3d8aee1c7d5dab7b636601370061b32612
baseline version:
 ovmf                 d21d2706761bede7db38929abc5613f3e71c64ba

Last test of basis   157840  2020-12-22 22:40:48 Z    0 days
Testing same since   157848  2020-12-23 07:52:03 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael D Kinney <michael.d.kinney@intel.com>
  Yunhua Feng <fengyunhua@byosoft.com.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   d21d270676..d15d0d3d8a  d15d0d3d8aee1c7d5dab7b636601370061b32612 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 13:48:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 13:48:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58375.102572 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks4Ux-0007aY-1P; Wed, 23 Dec 2020 13:48:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58375.102572; Wed, 23 Dec 2020 13:48:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks4Uw-0007aR-Ui; Wed, 23 Dec 2020 13:48:14 +0000
Received: by outflank-mailman (input) for mailman id 58375;
 Wed, 23 Dec 2020 13:48:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zN8f=F3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ks4Uv-0007aM-9X
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 13:48:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 54dcbf10-41f2-4e60-a6bc-665da71067af;
 Wed, 23 Dec 2020 13:48:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 80B40ACF1;
 Wed, 23 Dec 2020 13:48:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 54dcbf10-41f2-4e60-a6bc-665da71067af
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608731291; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=F48pFlYLzVYDwP2hSzOyTZwKTk1U25C/OR+Scc23UpQ=;
	b=Mh6PiI7zR6XM7xtex46qb9ABN8yaTGL9cJJQ7AzvsTWvH+0/TcP3mnvnAyNkSzgU+x1OiJ
	FJGCHJ1e4+Px23CGQ+Q0QxLsWFIp4JsvuSTm355DilADM0lwmda2BfAI2nrjPiZXjI6mmn
	gobmS8lukvUlQkYVlFtjERtCpgM6Bv4=
Subject: Re: [PATCH for-4.15 2/4] xen/iommu: x86: Free the IOMMU page-tables
 with the pgtables.lock held
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20201222154338.9459-1-julien@xen.org>
 <20201222154338.9459-3-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3148db2a-ff3f-5993-dd57-7f4376f2f0ad@suse.com>
Date: Wed, 23 Dec 2020 14:48:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <20201222154338.9459-3-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 22.12.2020 16:43, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The pgtables.lock is protecting access to the page list pgtables.list.
> However, iommu_free_pgtables() does not hold it. I guess it was assumed
> that page-tables cannot be allocated while the domain is dying.
> 
> Unfortunately, there is no guarantee that iommu_map() will not be
> called while a domain is dying (it looks to be possible from
> XEN_DOMCTL_memory_mapping).

I'd rather disallow any new allocations for a dying domain, not
least because ...

> So it would be possible to concurrently
> allocate memory and free the page-tables.
> 
> Therefore, we need to hold the lock when freeing the page tables.

... we should try to avoid holding locks across allocation /
freeing functions wherever possible.

As to where to place a respective check - I wonder if we wouldn't
be better off disallowing a majority of domctl-s (and perhaps
other operations) on dying domains. Thoughts?
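
The alternative floated above — rejecting new allocations once the domain is dying — could be modelled as follows (hypothetical names and a minimal stand-in for struct domain, not the actual Xen interface):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Minimal stand-in for struct domain; illustration only. */
struct domain
{
    bool is_dying;
    int nr_pgtables;
};

/*
 * Hypothetical guard: once the domain is dying, refuse to allocate
 * further IOMMU page-tables, so the free path can never race with an
 * allocation and no lock needs to be held across the freeing loop.
 */
int iommu_alloc_pgtable(struct domain *d)
{
    if ( d->is_dying )
        return -EBUSY;

    ++d->nr_pgtables;
    return 0;
}
```

With such a guard in place, a late XEN_DOMCTL_memory_mapping-triggered mapping attempt fails cleanly instead of allocating behind the teardown path's back.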

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 13:50:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 13:50:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58380.102584 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks4XB-0008Qu-En; Wed, 23 Dec 2020 13:50:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58380.102584; Wed, 23 Dec 2020 13:50:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks4XB-0008Qn-BZ; Wed, 23 Dec 2020 13:50:33 +0000
Received: by outflank-mailman (input) for mailman id 58380;
 Wed, 23 Dec 2020 13:50:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ks4X9-0008Qd-GJ
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 13:50:31 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks4X8-0007YM-W8; Wed, 23 Dec 2020 13:50:30 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks4X8-0005mm-O8; Wed, 23 Dec 2020 13:50:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=kFkosOoeruMx44MUfzZSh3w88o1RRuHEYR+6b4+j01Y=; b=UtQoBiB7akD19WM3BZlGKMM6we
	UOYRG01ZaCIBuEoMvwp/ROkNsns+8wiCziq6cD5cr3niWzxkv9LItAAhmDU0ArSacFXj0he2iq7V2
	A+/iFqv92TPYrhTWqxB1jicLcbXni7zxgycNjKQq1UiN7bT7AaJL/7baUjtlxZTSrRe0=;
Subject: Re: [PATCH for-4.15 1/4] xen/iommu: Check if the IOMMU was
 initialized before tearing down
To: Jan Beulich <jbeulich@suse.com>
Cc: hongyxia@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20201222154338.9459-1-julien@xen.org>
 <20201222154338.9459-2-julien@xen.org>
 <d9dd2fbc-d4bb-6b12-73e7-52a4fdda9020@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <eaba5e4c-91c9-9341-cc8f-d58aa08358a2@xen.org>
Date: Wed, 23 Dec 2020 13:50:28 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <d9dd2fbc-d4bb-6b12-73e7-52a4fdda9020@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 23/12/2020 13:27, Jan Beulich wrote:
> On 22.12.2020 16:43, Julien Grall wrote:
>> --- a/xen/drivers/passthrough/iommu.c
>> +++ b/xen/drivers/passthrough/iommu.c
>> @@ -226,7 +226,15 @@ static void iommu_teardown(struct domain *d)
>>   
>>   void iommu_domain_destroy(struct domain *d)
>>   {
>> -    if ( !is_iommu_enabled(d) )
>> +    struct domain_iommu *hd = dom_iommu(d);
>> +
>> +    /*
>> +     * In case of failure during the domain construction, it would be
>> +     * possible to reach this function with the IOMMU enabled but not
>> +     * yet initialized. We assume that hd->platforms will be non-NULL as
>> +     * soon as we start to initialize the IOMMU.
>> +     */
>> +    if ( !is_iommu_enabled(d) || !hd->platform_ops )
>>           return;
> 
>  From your description I'd rather say is_iommu_enabled() is the
> wrong predicate to use here. IOW I'd rather see it be replaced.

!hd->platform_ops should be sufficient. So, I think we can drop 
!is_iommu_enabled(d). Would that be fine with you?

> 
> A couple of additional nits: "hd" isn't really necessary to
> have as a local variable, but if you insist on introducing it
> despite being used just once, it should be pointer-to-const.
> Plus the comment would better spell correctly the field it
> talks about. But I consider this comment excessive anyway, as
> the check of ->platform_ops is quite clear by itself.

Well, I added the comment because I think the check of hd->platform_ops 
may not be that obvious (see more below).

> And finally "we assume" is in at least latent conflict with
> ->platform_ops getting set only after arch_iommu_domain_init()
> was called. Right now neither x86'es nor Arm's do anything
> that would need undoing, but I'd still suggest to replace
> "assume" here (if you want to keep the comment in the first
> place) and move the assignment up a few lines (right after the
> is_iommu_enabled() check there).
My initial implementation of this patch moved the initialization of 
hd->platform_ops before arch_iommu_domain_init().

However, this is going to lead to some issues with Paul's series which 
adds an IOMMU page-table pool ([1]).

The function arch_iommu_domain_init() may now fail. If we initialize 
hd->platform_ops earlier on, then the ->teardown() will be called before 
->init().

This may be an issue if ->teardown() expects some list pointer to be 
initialized by ->init(). I am not aware of any today, but this seems 
quite risky to me.

So I think it is better if we initialize hd->platform_ops after 
arch_iommu_domain_init() and expect the function to clean up anything 
that was allocated before the error.

The alternative is we set hd->platform_ops if both 
arch_iommu_domain_init() and ->init() have succeeded. This means they 
will both have to clean up in case of an error.
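As a rough sketch (invented names and stand-in bodies, not the actual Xen 
implementation), the first option, setting hd->platform_ops only once 
arch_iommu_domain_init() has succeeded, would look like:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model of the ordering question: the names mirror the Xen
 * code, but the bodies are stand-ins for illustration only. */

struct iommu_ops { int dummy; };
static const struct iommu_ops fake_ops = { 0 };

struct domain_iommu {
    const struct iommu_ops *platform_ops;
};

/* Stand-in for arch_iommu_domain_init(); may fail and is expected to
 * have cleaned up anything it allocated before returning an error. */
static int arch_iommu_domain_init(int should_fail)
{
    return should_fail ? -1 : 0;
}

static int iommu_domain_init(struct domain_iommu *hd, int arch_fail)
{
    int rc = arch_iommu_domain_init(arch_fail);

    if ( rc )
        return rc;            /* ->platform_ops stays NULL on failure */

    hd->platform_ops = &fake_ops;
    return 0;
}

static void iommu_domain_destroy(struct domain_iommu *hd)
{
    if ( !hd->platform_ops )  /* init never completed: nothing to do */
        return;
    hd->platform_ops = NULL;  /* stand-in for the real teardown */
}
```

With this ordering, a destroy reached after a failed construction sees a 
NULL ->platform_ops and returns early, which is exactly the check in the 
hunk above.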

Any thoughts?

Cheers,

[1] <20201005094905.2929-6-paul@xen.org>

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 13:59:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 13:59:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58384.102596 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks4fU-0000Dp-5K; Wed, 23 Dec 2020 13:59:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58384.102596; Wed, 23 Dec 2020 13:59:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks4fU-0000Di-2K; Wed, 23 Dec 2020 13:59:08 +0000
Received: by outflank-mailman (input) for mailman id 58384;
 Wed, 23 Dec 2020 13:59:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zN8f=F3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ks4fS-0000Dd-EG
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 13:59:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 03637795-3d54-4c34-a369-52e2333f27cc;
 Wed, 23 Dec 2020 13:59:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 11AD0ACBA;
 Wed, 23 Dec 2020 13:59:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 03637795-3d54-4c34-a369-52e2333f27cc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608731944; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=GHL45RndAjKydUITui6TRqulsX1ckfeLI7Rxk2EKUVI=;
	b=cLEYiNQukOdxxI9JTNmF16+1A7tCYjjeG9DhV/IzYRsfI6gbiGveRNFC2CDdU69U+9tAzP
	shMyzV3mgbjkkq7qSp0c8prR6ZsYmWdM9bKdYwj6NKbcSvcdSoaVsjO6QF2nUH+KZ7gRnn
	GkXheHQl1wr54AA3QWrdFEIbNgbfdcc=
Subject: Re: [PATCH for-4.15 1/4] xen/iommu: Check if the IOMMU was
 initialized before tearing down
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20201222154338.9459-1-julien@xen.org>
 <20201222154338.9459-2-julien@xen.org>
 <d9dd2fbc-d4bb-6b12-73e7-52a4fdda9020@suse.com>
 <eaba5e4c-91c9-9341-cc8f-d58aa08358a2@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6339b4ab-9be3-0b70-a474-14213e8609c0@suse.com>
Date: Wed, 23 Dec 2020 14:59:03 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <eaba5e4c-91c9-9341-cc8f-d58aa08358a2@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.12.2020 14:50, Julien Grall wrote:
> On 23/12/2020 13:27, Jan Beulich wrote:
>> On 22.12.2020 16:43, Julien Grall wrote:
>>> --- a/xen/drivers/passthrough/iommu.c
>>> +++ b/xen/drivers/passthrough/iommu.c
>>> @@ -226,7 +226,15 @@ static void iommu_teardown(struct domain *d)
>>>   
>>>   void iommu_domain_destroy(struct domain *d)
>>>   {
>>> -    if ( !is_iommu_enabled(d) )
>>> +    struct domain_iommu *hd = dom_iommu(d);
>>> +
>>> +    /*
>>> +     * In case of failure during the domain construction, it would be
>>> +     * possible to reach this function with the IOMMU enabled but not
>>> +     * yet initialized. We assume that hd->platforms will be non-NULL as
>>> +     * soon as we start to initialize the IOMMU.
>>> +     */
>>> +    if ( !is_iommu_enabled(d) || !hd->platform_ops )
>>>           return;
>>
>>  From your description I'd rather say is_iommu_enabled() is the
>> wrong predicate to use here. IOW I'd rather see it be replaced.
> 
> !hd->platform_ops should be sufficient. So, I think we can drop 
> !is_iommu_enabled(d). Would that be fine with you?

Well, that's what I was trying to suggest.

>> A couple of additional nits: "hd" isn't really necessary to
>> have as a local variable, but if you insist on introducing it
>> despite being used just once, it should be pointer-to-const.
>> Plus the comment would better spell correctly the field it
>> talks about. But I consider this comment excessive anyway, as
>> the check of ->platform_ops is quite clear by itself.
> 
> Well, I added the comment because I think the check of hd->platform_ops 
> may not be that obvious (see more below).
> 
>> And finally "we assume" is in at least latent conflict with
>> ->platform_ops getting set only after arch_iommu_domain_init()
>> was called. Right now neither x86'es nor Arm's do anything
>> that would need undoing, but I'd still suggest to replace
>> "assume" here (if you want to keep the comment in the first
>> place) and move the assignment up a few lines (right after the
>> is_iommu_enabled() check there).
> My initial implementation of this patch moved the initialization of 
> hd->platform_ops before arch_iommu_domain_init().
> 
> However, this is going to lead to some issues with Paul's series which 
> adds an IOMMU page-table pool ([1]).
> 
> The function arch_iommu_domain_init() may now fail. If we initialize 
> hd->platform_ops earlier on, then the ->teardown() will be called before 
> ->init().
> 
> This may be an issue if ->teardown() expects some list pointer to be 
> initialized by ->init(). I am not aware of any today, but this seems 
> quite risky to me.

In such a case the obvious thing is to make the teardown handler
check whether its init counterpart has run before. This will then
fit our apparently much wider goal of making cleanup functions
idempotent.
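As a rough sketch with invented names (not the actual Xen handlers), the 
idempotent-teardown pattern being suggested is:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical illustration: teardown checks whether its init
 * counterpart has run, so calling it zero, one, or several times is
 * always safe. The fields are stand-ins for real resources. */

struct pgtbl_state {
    bool initialised;
    int  pages;              /* stand-in for resources owned by init */
};

static void pgtbl_init(struct pgtbl_state *s)
{
    s->pages = 4;            /* pretend we allocated something */
    s->initialised = true;
}

static void pgtbl_teardown(struct pgtbl_state *s)
{
    if ( !s->initialised )   /* init never ran: nothing to undo */
        return;
    s->pages = 0;
    s->initialised = false;  /* makes a second call a no-op */
}
```

The same shape lets error paths during construction call the common 
teardown unconditionally instead of tracking how far init got.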

Jan

> So I think it is better if we initialize hd->platform_ops after 
> arch_iommu_domain_init() and expect the function to clean up anything 
> that was allocated before the error.
> 
> The alternative is we set hd->platform_ops if both 
> arch_iommu_domain_init() and ->init() have succeeded. This means they 
> will both have to clean up in case of an error.
> 
> Any thoughts?
> 
> Cheers,
> 
> [1] <20201005094905.2929-6-paul@xen.org>
> 



From xen-devel-bounces@lists.xenproject.org Wed Dec 23 14:02:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 14:02:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58388.102608 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks4iI-0001Be-Jo; Wed, 23 Dec 2020 14:02:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58388.102608; Wed, 23 Dec 2020 14:02:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks4iI-0001BX-GM; Wed, 23 Dec 2020 14:02:02 +0000
Received: by outflank-mailman (input) for mailman id 58388;
 Wed, 23 Dec 2020 14:02:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ks4iH-0001BR-Fu
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 14:02:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks4iG-0007qu-RS; Wed, 23 Dec 2020 14:02:00 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks4iG-0006Zf-Jp; Wed, 23 Dec 2020 14:02:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=G+7V4blTrjiXGnm6WIHP8emDZy9CUkQfddfdIwzjd2M=; b=tTJgO51XJJLkg5tcPesEUyPUp4
	f3XHn1NYdlFzLiyCATNn3rp4nyvp5dFX2vUUZ1c+TVgi1GgQ37U3p4ovhLJQ4MLTXDpF1fqMASOcq
	HTu2L9LGwOYr22qdtH0vYFaL3vLgU8WZjyYbYu4Nutz1eZVlnHCfJrW5m2Sgx+MsZick=;
Subject: Re: [PATCH for-4.15 2/4] xen/iommu: x86: Free the IOMMU page-tables
 with the pgtables.lock held
To: Jan Beulich <jbeulich@suse.com>
Cc: hongyxia@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20201222154338.9459-1-julien@xen.org>
 <20201222154338.9459-3-julien@xen.org>
 <3148db2a-ff3f-5993-dd57-7f4376f2f0ad@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <295b32db-ddf7-3926-b4de-b0d3b78af316@xen.org>
Date: Wed, 23 Dec 2020 14:01:59 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <3148db2a-ff3f-5993-dd57-7f4376f2f0ad@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 23/12/2020 13:48, Jan Beulich wrote:
> On 22.12.2020 16:43, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> The pgtables.lock is protecting access to the page list pgtables.list.
>> However, iommu_free_pgtables() will not hold it. I guess it was assumed
>> that page-tables cannot be allocated while the domain is dying.
>>
>> Unfortunately, there is no guarantee that iommu_map() will not be
>> called while a domain is dying (it looks to be possible from
>> XEN_DOMCTL_memory_mapping).
> 
> I'd rather disallow any new allocations for a dying domain, not
> the least because ...

Patch #4 will disallow such allocation. However...

> 
>> So it would be possible to concurrently
>> allocate memory and free the page-tables.
>>
>> Therefore, we need to hold the lock when freeing the page tables.
> 
> ... we should try to avoid holding locks across allocation /
> freeing functions wherever possible.
>
> As to where to place a respective check - I wonder if we wouldn't
> be better off disallowing a majority of domctl-s (and perhaps
> other operations) on dying domains. Thoughts?

... this is still pretty racy because you need to guarantee that 
d->is_dying is seen by the other processors to prevent allocation.

As to whether we can forbid most of the domctl-s, I would agree this is 
a good move. But this doesn't remove the underlying problem here.

We are hoping that a top-level function will protect us against the 
race. Given the IOMMU code is quite deep in the callstack, this is 
something pretty hard to guarantee with future changes.

So I still think we need a way to mitigate the issue.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 14:12:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 14:12:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58392.102620 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks4sH-0002AU-Ky; Wed, 23 Dec 2020 14:12:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58392.102620; Wed, 23 Dec 2020 14:12:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks4sH-0002AN-Hu; Wed, 23 Dec 2020 14:12:21 +0000
Received: by outflank-mailman (input) for mailman id 58392;
 Wed, 23 Dec 2020 14:12:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zN8f=F3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ks4sG-0002AI-Fr
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 14:12:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id aa29c03d-99b8-4a5f-b3bb-d5c86fbd7745;
 Wed, 23 Dec 2020 14:12:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F363BACF1;
 Wed, 23 Dec 2020 14:12:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aa29c03d-99b8-4a5f-b3bb-d5c86fbd7745
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608732738; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Uehhtcmw9qZNX98SGtNYIfWGGRaFE7ezO6nIlq+UYxQ=;
	b=nX5zH2h+wF7ekBv/uWYUI62but26yAW7rquoHGZfNotVTjX3dnIjcalqEAtkOoXuT6MDta
	skcKOE7bYw+ZghyiRLJRx5FxlHHkL575q3qms2C//XISfkemjqAlgPwUBrKN9rRD8ph9lQ
	YyYb2C1HgVtZvd//4Bv800sfD5++69s=
Subject: Re: [PATCH for-4.15 3/4] [RFC] xen/iommu: x86: Clear the root
 page-table before freeing the page-tables
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20201222154338.9459-1-julien@xen.org>
 <20201222154338.9459-4-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <499e6d5a-e8ac-56db-1af9-70469b6a06b9@suse.com>
Date: Wed, 23 Dec 2020 15:12:17 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <20201222154338.9459-4-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 22.12.2020 16:43, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The new per-domain IOMMU page-table allocator will now free the
> page-tables when the domain's resources are relinquished. However, the root
> page-table (i.e. hd->arch.pg_maddr) will not be cleared.
> 
> Xen may access the IOMMU page-tables afterwards at least in the case of
> PV domain:
> 
> (XEN) Xen call trace:
> (XEN)    [<ffff82d04025b4b2>] R iommu.c#addr_to_dma_page_maddr+0x12e/0x1d8
> (XEN)    [<ffff82d04025b695>] F iommu.c#intel_iommu_unmap_page+0x5d/0xf8
> (XEN)    [<ffff82d0402695f3>] F iommu_unmap+0x9c/0x129
> (XEN)    [<ffff82d0402696a6>] F iommu_legacy_unmap+0x26/0x63
> (XEN)    [<ffff82d04033c5c7>] F mm.c#cleanup_page_mappings+0x139/0x144
> (XEN)    [<ffff82d04033c61d>] F put_page+0x4b/0xb3
> (XEN)    [<ffff82d04033c87f>] F put_page_from_l1e+0x136/0x13b
> (XEN)    [<ffff82d04033cada>] F devalidate_page+0x256/0x8dc
> (XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
> (XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
> (XEN)    [<ffff82d04033d8d6>] F mm.c#put_page_from_l2e+0x8a/0xcf
> (XEN)    [<ffff82d04033cc27>] F devalidate_page+0x3a3/0x8dc
> (XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
> (XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
> (XEN)    [<ffff82d04033d807>] F mm.c#put_page_from_l3e+0x8a/0xcf
> (XEN)    [<ffff82d04033cdf0>] F devalidate_page+0x56c/0x8dc
> (XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
> (XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
> (XEN)    [<ffff82d04033d6c7>] F mm.c#put_page_from_l4e+0x69/0x6d
> (XEN)    [<ffff82d04033cf24>] F devalidate_page+0x6a0/0x8dc
> (XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
> (XEN)    [<ffff82d04033d92e>] F put_page_type_preemptible+0x13/0x15
> (XEN)    [<ffff82d04032598a>] F domain.c#relinquish_memory+0x1ff/0x4e9
> (XEN)    [<ffff82d0403295f2>] F domain_relinquish_resources+0x2b6/0x36a
> (XEN)    [<ffff82d040205cdf>] F domain_kill+0xb8/0x141
> (XEN)    [<ffff82d040236cac>] F do_domctl+0xb6f/0x18e5
> (XEN)    [<ffff82d04031d098>] F pv_hypercall+0x2f0/0x55f
> (XEN)    [<ffff82d04039b432>] F lstar_enter+0x112/0x120
> 
> This will result to a use after-free and possibly an host crash or
> memory corruption.
> 
> Freeing the page-tables further down in domain_relinquish_resources()
> would not work because pages may not be released until later if another
> domain holds a reference on them.
> 
> Once all the PCI devices have been de-assigned, it is actually pointless
> to access or modify the IOMMU page-tables. So we can simply clear the root
> page-table address.
> 
> Fixes: 3eef6d07d722 ("x86/iommu: convert VT-d code to use new page table allocator")
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ---
> 
> This is an RFC because it would break AMD IOMMU driver. One option would
> be to move the call to the teardown callback earlier on. Any opinions?

We already have

static void amd_iommu_domain_destroy(struct domain *d)
{
    dom_iommu(d)->arch.amd.root_table = NULL;
}

and this function is AMD's teardown handler. Hence I suppose
doing the same for VT-d would be quite reasonable. And really
VT-d's iommu_domain_teardown() also already has

    hd->arch.vtd.pgd_maddr = 0;

I guess what's missing is prevention of the root table
getting re-setup. This, I guess, would be helped by the
previously suggested preventing of any further modifications
to the p2m and alike for dying domains. Note how e.g. the
handling of XEN_DOMCTL_assign_device already includes a
"dying" check.
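A minimal sketch of that combination (clear the root table in teardown, 
refuse re-setup for a dying domain), using invented names and a stand-in 
allocation rather than the real VT-d code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical model: teardown zeroes the root page-table address, and
 * the (re-)setup path bails out for a dying domain, so teardown cannot
 * be undone by a late mapping request. */

struct vtd_dom {
    bool is_dying;
    uint64_t pgd_maddr;        /* 0 == no root table, as in the VT-d code */
};

static bool setup_root_table(struct vtd_dom *d)
{
    if ( d->is_dying )         /* mirrors the existing "dying" checks */
        return false;
    if ( !d->pgd_maddr )
        d->pgd_maddr = 0x1000; /* stand-in for a real page allocation */
    return true;
}

static void teardown(struct vtd_dom *d)
{
    d->pgd_maddr = 0;          /* as VT-d's iommu_domain_teardown() does */
}
```

The interesting property is the last step: once is_dying is set and 
teardown has run, setup_root_table() can no longer repopulate the root.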

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 14:16:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 14:16:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58396.102632 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks4wf-0002JR-96; Wed, 23 Dec 2020 14:16:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58396.102632; Wed, 23 Dec 2020 14:16:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks4wf-0002JK-44; Wed, 23 Dec 2020 14:16:53 +0000
Received: by outflank-mailman (input) for mailman id 58396;
 Wed, 23 Dec 2020 14:16:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zN8f=F3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ks4wd-0002JF-KF
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 14:16:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e0743636-44b0-438c-a33b-4473ff0b9045;
 Wed, 23 Dec 2020 14:16:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1E506ACF1;
 Wed, 23 Dec 2020 14:16:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0743636-44b0-438c-a33b-4473ff0b9045
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608733010; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=MiyKuqrIYynaf/Ko/uQdrnjot5QLfeQe6VUa3oQASTg=;
	b=aiqXe01VM1M6UIhwj8vUH43dh8rNzB/BUgIfGOTFm41vCSUIvaQBlkTwxShT+LSdhi2x9m
	jr5O3PT6Mqreuh8Glcsm9ARveBL2M+1g52/pG5QGGARtINelDeLyPHdwG7Df1XFsLsHe+r
	GeDtdNOZKNW1UbmBjCvgs0EDsYmVgKw=
Subject: Re: [PATCH for-4.15 2/4] xen/iommu: x86: Free the IOMMU page-tables
 with the pgtables.lock held
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20201222154338.9459-1-julien@xen.org>
 <20201222154338.9459-3-julien@xen.org>
 <3148db2a-ff3f-5993-dd57-7f4376f2f0ad@suse.com>
 <295b32db-ddf7-3926-b4de-b0d3b78af316@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e90365a5-e9d7-f3ac-7682-3161d74b786d@suse.com>
Date: Wed, 23 Dec 2020 15:16:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <295b32db-ddf7-3926-b4de-b0d3b78af316@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.12.2020 15:01, Julien Grall wrote:
> Hi Jan,
> 
> On 23/12/2020 13:48, Jan Beulich wrote:
>> On 22.12.2020 16:43, Julien Grall wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> The pgtables.lock is protecting access to the page list pgtables.list.
>>> However, iommu_free_pgtables() will not hold it. I guess it was assumed
>>> that page-tables cannot be allocated while the domain is dying.
>>>
>>> Unfortunately, there is no guarantee that iommu_map() will not be
>>> called while a domain is dying (it looks to be possible from
>>> XEN_DOMCTL_memory_mapping).
>>
>> I'd rather disallow any new allocations for a dying domain, not
>> the least because ...
> 
> Patch #4 will disallow such allocation. However...
> 
>>
>>> So it would be possible to concurrently
>>> allocate memory and free the page-tables.
>>>
>>> Therefore, we need to hold the lock when freeing the page tables.
>>
>> ... we should try to avoid holding locks across allocation /
>> freeing functions wherever possible.
>>
>> As to where to place a respective check - I wonder if we wouldn't
>> be better off disallowing a majority of domctl-s (and perhaps
>> other operations) on dying domains. Thoughts?
> 
> ... this is still pretty racy because you need to guarantee that 
> d->is_dying is seen by the other processors to prevent allocation.

The function freeing the page tables will need a spin_barrier()
or the like, similar to evtchn_destroy(). Aiui this will eliminate
all potential for races.
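A rough userspace model of that pattern (a pthread mutex standing in for 
Xen's spinlock, invented names; not the actual Xen code): the allocator 
checks is_dying under the lock, and the freeing path takes the lock once 
after setting is_dying, which plays the spin_barrier() role of flushing 
out any allocation that raced with the flag.

```c
#include <assert.h>
#include <stdbool.h>
#include <pthread.h>

struct dom {
    pthread_mutex_t lock;
    bool is_dying;
    int  nr_pgtables;
};

static bool alloc_pgtable(struct dom *d)
{
    bool ok = false;

    pthread_mutex_lock(&d->lock);
    if ( !d->is_dying )      /* check under the lock, not before it */
    {
        d->nr_pgtables++;
        ok = true;
    }
    pthread_mutex_unlock(&d->lock);
    return ok;
}

static void free_pgtables(struct dom *d)
{
    d->is_dying = true;
    /* Barrier: acquiring the lock waits out any allocator that missed
     * is_dying; afterwards no new page-table can appear. */
    pthread_mutex_lock(&d->lock);
    d->nr_pgtables = 0;      /* stand-in for freeing the page list */
    pthread_mutex_unlock(&d->lock);
}
```

Any allocation that observed is_dying == false has dropped the lock (and 
published its page) before free_pgtables() gets in; any later one sees 
the flag and is refused.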

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 14:34:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 14:34:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58400.102644 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks5Dr-00044q-PI; Wed, 23 Dec 2020 14:34:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58400.102644; Wed, 23 Dec 2020 14:34:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks5Dr-00044j-MH; Wed, 23 Dec 2020 14:34:39 +0000
Received: by outflank-mailman (input) for mailman id 58400;
 Wed, 23 Dec 2020 14:34:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zN8f=F3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ks5Dp-00044d-NO
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 14:34:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ebcfa173-e689-4ed6-816e-57e4df780d97;
 Wed, 23 Dec 2020 14:34:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6336CACC6;
 Wed, 23 Dec 2020 14:34:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ebcfa173-e689-4ed6-816e-57e4df780d97
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608734075; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=yBRFO0fMu//VJVUASebih4NIM7CO71uywLxAhKadaas=;
	b=Bm0pxWiAZhOsA3R56e5GflaJqhwgIDgvJ1g4DsVOCZfXyS4U0ePBWGDqH2C9S3sH3Jp5sG
	I3PvEWopXuFxDEly50RlKPbDheFBhWIhwF8MUUQixOrFz4Q4tDTT+53a0HZHoEp/Uw3gRw
	CNEKtZw4OyrMlDzpZgmLcnOIEoLy7xg=
Subject: Re: [PATCH for-4.15 4/4] xen/iommu: x86: Don't leak the IOMMU
 page-tables
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20201222154338.9459-1-julien@xen.org>
 <20201222154338.9459-5-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <beb22b59-701e-462c-5080-e99033079204@suse.com>
Date: Wed, 23 Dec 2020 15:34:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <20201222154338.9459-5-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 22.12.2020 16:43, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The new IOMMU page-tables allocator will release the pages when
> relinquishing the domain resources. However, this is not sufficient in two
> cases:
>     1) domain_relinquish_resources() is not called when the domain
>     creation fails.

Could you remind me of what IOMMU page table insertions there
are during domain creation? No memory has been allocated to the
domain at that point yet, so it would seem to me there is simply
nothing to map.

>     2) There is nothing preventing page-table allocations when the
>     domain is dying.
> 
> In both cases, this can be solved by freeing the page-tables again
> when the domain destruction. Although, this may result to an high
> number of page-tables to free.

Since I've seen this before in this series, and despite me also
not being a native speaker, as a nit: I don't think it can
typically be other than "result in".

> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -2290,7 +2290,7 @@ int domain_relinquish_resources(struct domain *d)
>  
>      PROGRESS(iommu_pagetables):
>  
> -        ret = iommu_free_pgtables(d);
> +        ret = iommu_free_pgtables(d, false);

I suppose you mean "true" here, but I also think the other
approach (checking for DOMDYING_dead, which you don't seem to
like very much) is better, if for no other reason than it
already being used elsewhere.

> @@ -305,6 +320,19 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
>          memflags = MEMF_node(hd->node);
>  #endif
>  
> +    /*
> +     * The IOMMU page-tables are freed when relinquishing the domain, but
> +     * nothing prevent allocation to happen afterwards. There is no valid
> +     * reasons to continue to update the IOMMU page-tables while the
> +     * domain is dying.
> +     *
> +     * So prevent page-table allocation when the domain is dying. Note
> +     * this doesn't fully prevent the race because d->is_dying may not
> +     * yet be seen.
> +     */
> +    if ( d->is_dying )
> +        return NULL;
> +
>      pg = alloc_domheap_page(NULL, memflags);
>      if ( !pg )
>          return NULL;

As said in reply to an earlier patch - with a suitable
spin_barrier() you can place your check further down, along the
lines of

    spin_lock(&hd->arch.pgtables.lock);
    if ( likely(!d->is_dying) )
    {
        page_list_add(pg, &hd->arch.pgtables.list);
        p = NULL;
    }
    spin_unlock(&hd->arch.pgtables.lock);

    if ( p )
    {
        free_domheap_page(pg);
        pg = NULL;
    }

(albeit I'm relatively sure you won't like the re-purposing of
p, but that's a minor detail). (FREE_DOMHEAP_PAGE() would be
nice to use here, but we seem to only have FREE_XENHEAP_PAGE()
so far.)

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 14:44:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 14:44:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58404.102655 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks5N4-00051G-NA; Wed, 23 Dec 2020 14:44:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58404.102655; Wed, 23 Dec 2020 14:44:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks5N4-000519-Jy; Wed, 23 Dec 2020 14:44:10 +0000
Received: by outflank-mailman (input) for mailman id 58404;
 Wed, 23 Dec 2020 14:44:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ks5N3-000514-R5
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 14:44:09 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks5N2-0008VW-6Y; Wed, 23 Dec 2020 14:44:08 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks5N1-00017O-Uq; Wed, 23 Dec 2020 14:44:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=WJjkdlNdwkLx1Zcf3xRAV5oKtc6B48Kt5A71CejeDiA=; b=G0bvFQEt8NTKESQbKOAwPDTPK3
	hsq90rqZIbXoKflFnHdY6RXJroK7gN8KPlZILYsswvs9vONfmsheMLJdk1GehRS8TYbxlUBiwxj02
	OK6gWeH3k0ux8QTxgZmHTFX2QlZJS0K9lb72W+QWIhVYvNySgZYodkoXJSZ+AS7pHYp8=;
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Jan Beulich <jbeulich@suse.com>,
 Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <lengyelt@ainfosec.com>,
 Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d821c715-966a-b48b-a877-c5dac36822f0@suse.com>
 <17c90493-b438-fbc1-ca10-3bc4d89c4e5e@xen.org>
 <7a768bcd-80c1-d193-8796-7fb6720fa22a@suse.com>
 <1a8250f5-ea49-ac3a-e992-be7ec40deba9@xen.org>
 <CABfawhkQcUD4f62zpg0cyrdQgG82XtpYRZZ_-50hjagooT530A@mail.gmail.com>
 <5862eb24-d894-455a-13ac-61af54f949e7@xen.org>
 <CABfawhkWQiOhLL8f3NzoWbeuag-f+YOOK0i_LJzZq5Yvoh=oHQ@mail.gmail.com>
 <fd384990-376e-40f4-f0b8-1a889b3a0c51@suse.com>
 <9ee6016a-d3b3-c847-4775-0e05c8578110@xen.org>
 <CABfawhkcHX+FSRRfYwUNd8DweW04=91sSg2PTWy7vjq_DXwMQg@mail.gmail.com>
 <d365ce00-bc3a-de7c-565a-c4cb61063e74@suse.com>
 <ed5fc3e2-42b1-477a-c424-05ddf7fd3bf4@xen.org>
 <3b339f30-57db-caf6-fd7e-84199f98546f@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <9c214bc1-61db-5b33-f610-40c2a59edb75@xen.org>
Date: Wed, 23 Dec 2020 14:44:05 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <3b339f30-57db-caf6-fd7e-84199f98546f@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 23/12/2020 13:41, Jan Beulich wrote:
> On 23.12.2020 14:33, Julien Grall wrote:
>> On 23/12/2020 13:12, Jan Beulich wrote:
>>>  From the input by both of you I still can't
>>> conclude whether this patch should remain as is in v4, or revert
>>> back to its v2 version. Please can we get this settled so I can get
>>> v4 out?
>>
>> I haven't had time to investigate the rest of the VM event code to find
>> other cases where this may happen. I still think there is a bigger
>> problem in the VM event code, but the maintainer disagrees here.
>>
>> At which point, I see limited reason to try to paper over in the common
>> code. So I would rather ack/merge v2 rather than v3.
> 
> Since I expect Tamas and/or the Bitdefender folks to be of the
> opposite opinion, there's still no way out, at least if "rather
> ack" implies a nak for v3.

The only way out here is for someone to justify why this patch is 
sufficient for the VM event race. I am not convinced it is (see more below).

> Personally, if this expectation of
> mine is correct, I'd prefer to keep the accounting but make it
> optional (as suggested in a post-commit-message remark).
> That'll eliminate the overhead you appear to be concerned of,
> but of course it'll further complicate the logic (albeit just
> slightly).

I am more concerned about adding overly complex code that would just 
paper over a bigger problem. I also can't see a use for it outside of 
the VM event discussion.

I had another look at the code. As I mentioned in the past, 
vm_event_put_request() is able to deal with d != current->domain (it 
will set VM_EVENT_FLAG_FOREIGN). There are 4 callers of the function:
    1) p2m_mem_paging_drop_page()
    2) p2m_mem_paging_populate()
    3) mem_sharing_notify_enomem()
    4) monitor_traps()

1) and 2) belong to the mem paging subsystem, which Tamas suggested has 
been abandoned.

4) can only be called with the current domain.

This leaves us with 3), in the mem sharing subsystem. As this is called 
from the memory hypercalls, it looks possible to me that d != 
current->domain. The code also seems to be maintained (there were 
recent non-trivial changes).
changes).

Can one of the VM event developers come up with a justification for why 
this patch is enough to make the VM event subsystem safe?

FAOD, the justification should be solely based on the hypervisor code 
(IOW not external trusted software).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 14:51:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 14:51:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58408.102668 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks5Tm-0005v0-Ey; Wed, 23 Dec 2020 14:51:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58408.102668; Wed, 23 Dec 2020 14:51:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks5Tm-0005ut-C0; Wed, 23 Dec 2020 14:51:06 +0000
Received: by outflank-mailman (input) for mailman id 58408;
 Wed, 23 Dec 2020 14:51:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ks5Tl-0005uo-7W
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 14:51:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks5Tk-0000Co-Ub; Wed, 23 Dec 2020 14:51:04 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks5Tk-0001Vf-J1; Wed, 23 Dec 2020 14:51:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=S7o+EeqIdIJBX3fRCfK7TTLHGtlVHL1LzmI9tHSpEao=; b=UwmwhydUhevpSaavUJrCPchwTB
	P+h3VkUsKojhNY/VwYNllWY2SqtzqDXKcwrphsIUjLZjHV7cVzD2z7Zy2Axs0mktnH9RAzKbSTbPt
	+ELZXmESGj2FepaeWonFqT7903a8c8cwsbCaAGlKywUIhkSiwZ3ALuPaxk12yidVHB+g=;
Subject: Re: [PATCH for-4.15 1/4] xen/iommu: Check if the IOMMU was
 initialized before tearing down
To: Jan Beulich <jbeulich@suse.com>
Cc: hongyxia@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20201222154338.9459-1-julien@xen.org>
 <20201222154338.9459-2-julien@xen.org>
 <d9dd2fbc-d4bb-6b12-73e7-52a4fdda9020@suse.com>
 <eaba5e4c-91c9-9341-cc8f-d58aa08358a2@xen.org>
 <6339b4ab-9be3-0b70-a474-14213e8609c0@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <903075fe-6482-0e1b-c124-932db4790382@xen.org>
Date: Wed, 23 Dec 2020 14:51:02 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <6339b4ab-9be3-0b70-a474-14213e8609c0@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 23/12/2020 13:59, Jan Beulich wrote:
> On 23.12.2020 14:50, Julien Grall wrote:
>> On 23/12/2020 13:27, Jan Beulich wrote:
>>> On 22.12.2020 16:43, Julien Grall wrote:
>>>> --- a/xen/drivers/passthrough/iommu.c
>>>> +++ b/xen/drivers/passthrough/iommu.c
>>>> @@ -226,7 +226,15 @@ static void iommu_teardown(struct domain *d)
>>>>    
>>>>    void iommu_domain_destroy(struct domain *d)
>>>>    {
>>>> -    if ( !is_iommu_enabled(d) )
>>>> +    struct domain_iommu *hd = dom_iommu(d);
>>>> +
>>>> +    /*
>>>> +     * In case of failure during the domain construction, it would be
>>>> +     * possible to reach this function with the IOMMU enabled but not
>>>> +     * yet initialized. We assume that hd->platforms will be non-NULL as
>>>> +     * soon as we start to initialize the IOMMU.
>>>> +     */
>>>> +    if ( !is_iommu_enabled(d) || !hd->platform_ops )
>>>>            return;
>>>
>>>   From your description I'd rather say is_iommu_enabled() is the
>>> wrong predicate to use here. IOW I'd rather see it be replaced.
>>
>> !hd->platform_ops should be sufficient. So, I think we can drop
>> !is_iommu_enabled(d). Would that be fine with you?
> 
> Well, that's what I was trying to suggest.
> 
>>> A couple of additional nits: "hd" isn't really necessary to
>>> have as a local variable, but if you insist on introducing it
>>> despite being used just once, it should be pointer-to-const.
>>> Plus the comment would better spell correctly the field it
>>> talks about. But I consider this comment excessive anyway, as
>>> the check of ->platform_ops is quite clear by itself.
>>
>> Well, I added the comment because I think the check of hd->platform_ops
>> may not be that obvious (see more below).
>>
>>> And finally "we assume" is in at least latent conflict with
>>> ->platform_ops getting set only after arch_iommu_domain_init()
>>> was called. Right now neither x86'es nor Arm's do anything
>>> that would need undoing, but I'd still suggest to replace
>>> "assume" here (if you want to keep the comment in the first
>>> place) and move the assignment up a few lines (right after the
>>> is_iommu_enabled() check there).
>> My initial implementation of this patch moved the initialization of
>> hd->platform_ops before arch_iommu_domain_init().
>>
>> However, this is going to lead to some issues with Paul's series which
>> add an IOMMU page-table pool ([1]).
>>
>> The function arch_iommu_domain_init() may now fail. If we initialize
>> hd->platform_ops earlier on, then the ->teardown() will be called before
>> ->init().
>>
>> This may be an issue if ->teardown() expects some list pointer to be
>> initialized by ->init(). I am not aware of any today, but this seems
>> quite risky to me.
> 
> In such a case the obvious thing is to make the teardown handler
> check whether its init counterpart has run before. This will then
> fit our apparently much wider goal of making cleanup functions
> idempotent.

I will have a look. This may require another boolean, which I wanted to 
avoid.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 14:56:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 14:56:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58412.102680 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks5ZM-000661-59; Wed, 23 Dec 2020 14:56:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58412.102680; Wed, 23 Dec 2020 14:56:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks5ZM-00065u-1Z; Wed, 23 Dec 2020 14:56:52 +0000
Received: by outflank-mailman (input) for mailman id 58412;
 Wed, 23 Dec 2020 14:56:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zN8f=F3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ks5ZL-00065p-2w
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 14:56:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1ebba0d7-d406-4211-82c8-4c4502f169a3;
 Wed, 23 Dec 2020 14:56:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B1915ACC6;
 Wed, 23 Dec 2020 14:56:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ebba0d7-d406-4211-82c8-4c4502f169a3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608735408; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=4oPqll/m8bI9dvpPkVuPNEcCRviO6wFYQo365cRdxFs=;
	b=LI3Ao4H7cTPz6rtaGVqwLryU0FNZdkUc2HXdhmsUeHCHVm67SltuZo/0yq2iMJKnn1wqJr
	aoHkd+MVjnvPfYwXo3oqnoW8+XHli7b3GCPUIkOhRRCR9eGX14k4pZjBrTtvUQ4KWR5x/1
	V0Gy3eSFzy4T2KKSVI7IIpsdLrfvOV8=
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <lengyelt@ainfosec.com>,
 Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Tamas K Lengyel <tamas.k.lengyel@gmail.com>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d821c715-966a-b48b-a877-c5dac36822f0@suse.com>
 <17c90493-b438-fbc1-ca10-3bc4d89c4e5e@xen.org>
 <7a768bcd-80c1-d193-8796-7fb6720fa22a@suse.com>
 <1a8250f5-ea49-ac3a-e992-be7ec40deba9@xen.org>
 <CABfawhkQcUD4f62zpg0cyrdQgG82XtpYRZZ_-50hjagooT530A@mail.gmail.com>
 <5862eb24-d894-455a-13ac-61af54f949e7@xen.org>
 <CABfawhkWQiOhLL8f3NzoWbeuag-f+YOOK0i_LJzZq5Yvoh=oHQ@mail.gmail.com>
 <fd384990-376e-40f4-f0b8-1a889b3a0c51@suse.com>
 <9ee6016a-d3b3-c847-4775-0e05c8578110@xen.org>
 <CABfawhkcHX+FSRRfYwUNd8DweW04=91sSg2PTWy7vjq_DXwMQg@mail.gmail.com>
 <d365ce00-bc3a-de7c-565a-c4cb61063e74@suse.com>
 <ed5fc3e2-42b1-477a-c424-05ddf7fd3bf4@xen.org>
 <3b339f30-57db-caf6-fd7e-84199f98546f@suse.com>
 <9c214bc1-61db-5b33-f610-40c2a59edb75@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ce7d0e42-2066-0007-0f82-c55c63a952df@suse.com>
Date: Wed, 23 Dec 2020 15:56:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <9c214bc1-61db-5b33-f610-40c2a59edb75@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.12.2020 15:44, Julien Grall wrote:
> On 23/12/2020 13:41, Jan Beulich wrote:
>> On 23.12.2020 14:33, Julien Grall wrote:
>>> On 23/12/2020 13:12, Jan Beulich wrote:
>>>>  From the input by both of you I still can't
>>>> conclude whether this patch should remain as is in v4, or revert
>>>> back to its v2 version. Please can we get this settled so I can get
>>>> v4 out?
>>>
>>> I haven't had time to investigate the rest of the VM event code to find
>>> other cases where this may happen. I still think there is a bigger
>>> problem in the VM event code, but the maintainer disagrees here.
>>>
>>> At which point, I see limited reason to try to paper over in the common
>>> code. So I would rather ack/merge v2 rather than v3.
>>
>> Since I expect Tamas and/or the Bitdefender folks to be of the
>> opposite opinion, there's still no way out, at least if "rather
>> ack" implies a nak for v3.
> 
> The only way out here is for someone to justify why this patch is 
> sufficient for the VM event race.

I think this is demanding too much. Moving in the right direction
without arriving at the final destination is still a worthwhile
thing to do imo.

>> Personally, if this expectation of
>> mine is correct, I'd prefer to keep the accounting but make it
>> optional (as suggested in a post-commit-message remark).
>> That'll eliminate the overhead you appear to be concerned of,
>> but of course it'll further complicate the logic (albeit just
>> slightly).
> 
> I am more concerned about adding over complex code that would just 
> papering over a bigger problem. I also can't see use of it outside of 
> the VM event discussion.

I think it is a generally appropriate thing to do to wait for
callbacks to complete before tearing down their origin control
structure. There may be cases where code structure makes this
unnecessary, but I don't think this can be an expectation of
all the users of the functionality. Hence my suggestion to
possibly make this optional (driven directly or indirectly by
the user of the registration function).

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 14:57:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 14:57:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58414.102692 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks5ZV-0006BC-IF; Wed, 23 Dec 2020 14:57:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58414.102692; Wed, 23 Dec 2020 14:57:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks5ZV-0006B4-FD; Wed, 23 Dec 2020 14:57:01 +0000
Received: by outflank-mailman (input) for mailman id 58414;
 Wed, 23 Dec 2020 14:56:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ks5ZT-0006Aa-TA
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 14:56:59 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks5ZT-0000Ih-FA; Wed, 23 Dec 2020 14:56:59 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks5ZT-0001rv-86; Wed, 23 Dec 2020 14:56:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=lKsx3azKR0cMl/VmQv6Mzvxo3k3Fchw39EhAm4OBy6Y=; b=e2EDMbgv9VY24TMic01lfV1pie
	D+jlnCPlSBj0wyvjwArp7TleCHmmPJx4pav2hbgsAfqIkQDWyV2R6VM88APMcfbs1mNlr+mu12LaX
	15AykyRhixltUdbBfMCIVKNO0Rx4Ne/Jr+96Xe6PF4N1DEvqKXySGsqbgNaGMPFD/kDY=;
Subject: Re: [PATCH for-4.15 3/4] [RFC] xen/iommu: x86: Clear the root
 page-table before freeing the page-tables
To: Jan Beulich <jbeulich@suse.com>
Cc: hongyxia@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20201222154338.9459-1-julien@xen.org>
 <20201222154338.9459-4-julien@xen.org>
 <499e6d5a-e8ac-56db-1af9-70469b6a06b9@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <8b394c44-5bdb-9d82-b211-5a4ee3473568@xen.org>
Date: Wed, 23 Dec 2020 14:56:57 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <499e6d5a-e8ac-56db-1af9-70469b6a06b9@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 23/12/2020 14:12, Jan Beulich wrote:
> On 22.12.2020 16:43, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> The new per-domain IOMMU page-table allocator will now free the
>> page-tables when domain's resources are relinquished. However, the root
>> page-table (i.e. hd->arch.pg_maddr) will not be cleared.
>>
>> Xen may access the IOMMU page-tables afterwards at least in the case of
>> PV domain:
>>
>> (XEN) Xen call trace:
>> (XEN)    [<ffff82d04025b4b2>] R iommu.c#addr_to_dma_page_maddr+0x12e/0x1d8
>> (XEN)    [<ffff82d04025b695>] F iommu.c#intel_iommu_unmap_page+0x5d/0xf8
>> (XEN)    [<ffff82d0402695f3>] F iommu_unmap+0x9c/0x129
>> (XEN)    [<ffff82d0402696a6>] F iommu_legacy_unmap+0x26/0x63
>> (XEN)    [<ffff82d04033c5c7>] F mm.c#cleanup_page_mappings+0x139/0x144
>> (XEN)    [<ffff82d04033c61d>] F put_page+0x4b/0xb3
>> (XEN)    [<ffff82d04033c87f>] F put_page_from_l1e+0x136/0x13b
>> (XEN)    [<ffff82d04033cada>] F devalidate_page+0x256/0x8dc
>> (XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
>> (XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
>> (XEN)    [<ffff82d04033d8d6>] F mm.c#put_page_from_l2e+0x8a/0xcf
>> (XEN)    [<ffff82d04033cc27>] F devalidate_page+0x3a3/0x8dc
>> (XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
>> (XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
>> (XEN)    [<ffff82d04033d807>] F mm.c#put_page_from_l3e+0x8a/0xcf
>> (XEN)    [<ffff82d04033cdf0>] F devalidate_page+0x56c/0x8dc
>> (XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
>> (XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
>> (XEN)    [<ffff82d04033d6c7>] F mm.c#put_page_from_l4e+0x69/0x6d
>> (XEN)    [<ffff82d04033cf24>] F devalidate_page+0x6a0/0x8dc
>> (XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
>> (XEN)    [<ffff82d04033d92e>] F put_page_type_preemptible+0x13/0x15
>> (XEN)    [<ffff82d04032598a>] F domain.c#relinquish_memory+0x1ff/0x4e9
>> (XEN)    [<ffff82d0403295f2>] F domain_relinquish_resources+0x2b6/0x36a
>> (XEN)    [<ffff82d040205cdf>] F domain_kill+0xb8/0x141
>> (XEN)    [<ffff82d040236cac>] F do_domctl+0xb6f/0x18e5
>> (XEN)    [<ffff82d04031d098>] F pv_hypercall+0x2f0/0x55f
>> (XEN)    [<ffff82d04039b432>] F lstar_enter+0x112/0x120
>>
>> This will result to a use after-free and possibly an host crash or
>> memory corruption.
>>
>> Freeing the page-tables further down in domain_relinquish_resources()
>> would not work because pages may not be released until later if another
>> domain hold a reference on them.
>>
>> Once all the PCI devices have been de-assigned, it is actually pointless
>> to access modify the IOMMU page-tables. So we can simply clear the root
>> page-table address.
>>
>> Fixes: 3eef6d07d722 ("x86/iommu: convert VT-d code to use new page table allocator")
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> ---
>>
>> This is an RFC because it would break AMD IOMMU driver. One option would
>> be to move the call to the teardown callback earlier on. Any opinions?
> 
> We already have
> 
> static void amd_iommu_domain_destroy(struct domain *d)
> {
>      dom_iommu(d)->arch.amd.root_table = NULL;
> }
> 
> and this function is AMD's teardown handler. Hence I suppose
> doing the same for VT-d would be quite reasonable. And really
> VT-d's iommu_domain_teardown() also already has
> 
>      hd->arch.vtd.pgd_maddr = 0;

Let me have a look if that works.

> 
> I guess what's missing is prevention of the root table
> getting re-setup. 

This is taken care of in the follow-up patch by forbidding page-table 
allocation. I can mention it in the commit message.


> This, I guess, would be helped by the
> previously suggested preventing of any further modifications
> to the p2m and alike for dying domains. Note how e.g. the
> handling of XEN_DOMCTL_assign_device already includes a
> "dying" check.
> 
> Jan
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 14:58:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 14:58:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58420.102704 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks5az-0006M5-VQ; Wed, 23 Dec 2020 14:58:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58420.102704; Wed, 23 Dec 2020 14:58:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks5az-0006Ly-SL; Wed, 23 Dec 2020 14:58:33 +0000
Received: by outflank-mailman (input) for mailman id 58420;
 Wed, 23 Dec 2020 14:58:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zN8f=F3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ks5ay-0006Lt-Lh
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 14:58:32 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 15137570-7931-4f8e-9475-5922694493d1;
 Wed, 23 Dec 2020 14:58:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E5F22ACBA;
 Wed, 23 Dec 2020 14:58:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 15137570-7931-4f8e-9475-5922694493d1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608735511; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=quMKGo3am2jy/5Nh0ZTO2QZ8bto0eKhMkH6aPQR0Z+4=;
	b=E4a6UL3Tui0hkt6uRrIzD7CwS6D/VGw+5YXyGFwgF8f4iltQrQY0uasHHt9LG4x3P4fa6E
	xC03XuGDRWqkW00dwMJXGTJpr3ZmhgdZUtvQ8oR6HNMP1gZ76ZNyYM6bDxco9VwzFmEsef
	S0j4YUoqBwL0cN0xNxdG5ThyoAde9IM=
Subject: Re: [PATCH for-4.15 1/4] xen/iommu: Check if the IOMMU was
 initialized before tearing down
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20201222154338.9459-1-julien@xen.org>
 <20201222154338.9459-2-julien@xen.org>
 <d9dd2fbc-d4bb-6b12-73e7-52a4fdda9020@suse.com>
 <eaba5e4c-91c9-9341-cc8f-d58aa08358a2@xen.org>
 <6339b4ab-9be3-0b70-a474-14213e8609c0@suse.com>
 <903075fe-6482-0e1b-c124-932db4790382@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <51024db6-29df-6d8a-3182-43c9c25440b2@suse.com>
Date: Wed, 23 Dec 2020 15:58:30 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <903075fe-6482-0e1b-c124-932db4790382@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.12.2020 15:51, Julien Grall wrote:
> On 23/12/2020 13:59, Jan Beulich wrote:
>> On 23.12.2020 14:50, Julien Grall wrote:
>>> On 23/12/2020 13:27, Jan Beulich wrote:
>>>> And finally "we assume" is in at least latent conflict with
>>>> ->platform_ops getting set only after arch_iommu_domain_init()
>>>> was called. Right now neither x86'es nor Arm's do anything
>>>> that would need undoing, but I'd still suggest replacing
>>>> "assume" here (if you want to keep the comment in the first
>>>> place) and move the assignment up a few lines (right after the
>>>> is_iommu_enabled() check there).
>>> My initial implementation of this patch moved the initialization of
>>> hd->platform_ops before arch_iommu_domain_init().
>>>
>>> However, this is going to lead to some issues with Paul's series which
>>> add an IOMMU page-table pool ([1]).
>>>
>>> The function arch_iommu_domain_init() may now fail. If we initialize
>>> hd->platform_ops earlier on, then the ->teardown() will be called before
>>> ->init().
>>>
>>> This may be an issue if ->teardown() expects some list pointer to be
>>> initialized by ->init(). I am not aware of any today, but this seems
>>> quite risky to me.
>>
>> In such a case the obvious thing is to make the teardown handler
>> check whether its init counterpart has run before. This will then
>> fit our apparently much wider goal of making cleanup functions
>> idempotent.
> 
> I will have a look. This may require another boolean which I wanted to 
> avoid.

I could imagine list_head_is_null() becoming handy here.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 15:00:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 15:00:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58424.102716 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks5d4-0007FI-C3; Wed, 23 Dec 2020 15:00:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58424.102716; Wed, 23 Dec 2020 15:00:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks5d4-0007FB-90; Wed, 23 Dec 2020 15:00:42 +0000
Received: by outflank-mailman (input) for mailman id 58424;
 Wed, 23 Dec 2020 15:00:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zN8f=F3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ks5d3-0007F6-GB
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 15:00:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fecfb620-1ff5-4294-9c87-422eff58cbba;
 Wed, 23 Dec 2020 15:00:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A713EACBA;
 Wed, 23 Dec 2020 15:00:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fecfb620-1ff5-4294-9c87-422eff58cbba
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608735639; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=HLGlKqxjqShMYThO4kOZ0yk7rC0zmMl3vfcvbrku7gs=;
	b=X5HWsfk+dWUj10totkHVf+rJAowTRuNl8y30yF/bhKteolkyBDwmng1UDHIDS9ZhBLhjIh
	1GqKXgZf/Z42GhocZeStDLcljgWMFt0uWsuQKiwe8KC4haPAGELM/ByGJ12p0CbnF4Fn9E
	HY94GBeU1NVSrek4KwZQRdJyZvipNns=
Subject: Re: [PATCH for-4.15 3/4] [RFC] xen/iommu: x86: Clear the root
 page-table before freeing the page-tables
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20201222154338.9459-1-julien@xen.org>
 <20201222154338.9459-4-julien@xen.org>
 <499e6d5a-e8ac-56db-1af9-70469b6a06b9@suse.com>
 <8b394c44-5bdb-9d82-b211-5a4ee3473568@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <19e92d90-ed9a-4bd6-79f4-b761b5a039c6@suse.com>
Date: Wed, 23 Dec 2020 16:00:39 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <8b394c44-5bdb-9d82-b211-5a4ee3473568@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.12.2020 15:56, Julien Grall wrote:
> On 23/12/2020 14:12, Jan Beulich wrote:
>> On 22.12.2020 16:43, Julien Grall wrote:
>>> This is an RFC because it would break AMD IOMMU driver. One option would
>>> be to move the call to the teardown callback earlier on. Any opinions?
>>
>> We already have
>>
>> static void amd_iommu_domain_destroy(struct domain *d)
>> {
>>      dom_iommu(d)->arch.amd.root_table = NULL;
>> }
>>
>> and this function is AMD's teardown handler. Hence I suppose
>> doing the same for VT-d would be quite reasonable. And really
>> VT-d's iommu_domain_teardown() also already has
>>
>>      hd->arch.vtd.pgd_maddr = 0;
> 
> Let me have a look if that works.
> 
>>
>> I guess what's missing is prevention of the root table
>> getting re-setup. 
> 
> This is taken care of in the follow-up patch by forbidding page-table 
> allocation. I can mention it in the commit message.

My expectation is that with that subsequent change the change here
(or any variant of it) would become unnecessary.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 15:08:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 15:08:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58428.102728 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks5kS-0007SN-3b; Wed, 23 Dec 2020 15:08:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58428.102728; Wed, 23 Dec 2020 15:08:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks5kR-0007SG-Vs; Wed, 23 Dec 2020 15:08:19 +0000
Received: by outflank-mailman (input) for mailman id 58428;
 Wed, 23 Dec 2020 15:08:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ks5kQ-0007SB-JO
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 15:08:18 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks5kO-0000X3-JO; Wed, 23 Dec 2020 15:08:16 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks5kO-0002fb-BS; Wed, 23 Dec 2020 15:08:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=iGTyooH/uXRVYSjXj+Kacw5JiyS5fUAIh1te09HPLUI=; b=jbG7U1c6r5Pu9CCdW1zwDKlZmt
	bD4tVJZCpXz4io5Ze4P3oPLOVZlG1VNFI5a+u9R5MKipX2bD95DPSV1+eErEuKiMUSPs/9f65+pHq
	Vs9WoOWMdnV2IFt+bJ7xOOmXB/xqg6aUKy017Y9Zvu7vH3OUDi/kbdTDell9owc1sujk=;
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <lengyelt@ainfosec.com>,
 Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Tamas K Lengyel <tamas.k.lengyel@gmail.com>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d821c715-966a-b48b-a877-c5dac36822f0@suse.com>
 <17c90493-b438-fbc1-ca10-3bc4d89c4e5e@xen.org>
 <7a768bcd-80c1-d193-8796-7fb6720fa22a@suse.com>
 <1a8250f5-ea49-ac3a-e992-be7ec40deba9@xen.org>
 <CABfawhkQcUD4f62zpg0cyrdQgG82XtpYRZZ_-50hjagooT530A@mail.gmail.com>
 <5862eb24-d894-455a-13ac-61af54f949e7@xen.org>
 <CABfawhkWQiOhLL8f3NzoWbeuag-f+YOOK0i_LJzZq5Yvoh=oHQ@mail.gmail.com>
 <fd384990-376e-40f4-f0b8-1a889b3a0c51@suse.com>
 <9ee6016a-d3b3-c847-4775-0e05c8578110@xen.org>
 <CABfawhkcHX+FSRRfYwUNd8DweW04=91sSg2PTWy7vjq_DXwMQg@mail.gmail.com>
 <d365ce00-bc3a-de7c-565a-c4cb61063e74@suse.com>
 <ed5fc3e2-42b1-477a-c424-05ddf7fd3bf4@xen.org>
 <3b339f30-57db-caf6-fd7e-84199f98546f@suse.com>
 <9c214bc1-61db-5b33-f610-40c2a59edb75@xen.org>
 <ce7d0e42-2066-0007-0f82-c55c63a952df@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <9b50863c-4236-7202-6c03-555e58cbf9ec@xen.org>
Date: Wed, 23 Dec 2020 15:08:13 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <ce7d0e42-2066-0007-0f82-c55c63a952df@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 23/12/2020 14:56, Jan Beulich wrote:
> On 23.12.2020 15:44, Julien Grall wrote:
>> On 23/12/2020 13:41, Jan Beulich wrote:
>>> On 23.12.2020 14:33, Julien Grall wrote:
>>>> On 23/12/2020 13:12, Jan Beulich wrote:
>>>>> From the input by both of you I still can't
>>>>> conclude whether this patch should remain as is in v4, or revert
>>>>> back to its v2 version. Please can we get this settled so I can get
>>>>> v4 out?
>>>>
>>>> I haven't had time to investigate the rest of the VM event code to find
>>>> other cases where this may happen. I still think there is a bigger
>>>> problem in the VM event code, but the maintainer disagrees here.
>>>>
>>>> At which point, I see limited reason to try to paper over it in the common
>>>> code. So I would rather ack/merge v2 than v3.
>>>
>>> Since I expect Tamas and/or the Bitdefender folks to be of the
>>> opposite opinion, there's still no way out, at least if "rather
>>> ack" implies a nak for v3.
>>
>> The only way out here is for someone to justify why this patch is
>> sufficient for the VM event race.
> 
> I think this is too much you demand.

I guess you didn't notice that I did most of the work by providing an 
analysis in my e-mail... I don't think it is too much to ask to read 
the analysis and say whether I am correct or not.

Do you really prefer to add code that would rot because it is unused?

> Moving in the right direction
> without arriving at the final destination is still a worthwhile
> thing to do imo.
> 
>>> Personally, if this expectation of
>>> mine is correct, I'd prefer to keep the accounting but make it
>>> optional (as suggested in a post-commit-message remark).
>>> That'll eliminate the overhead you appear to be concerned of,
>>> but of course it'll further complicate the logic (albeit just
>>> slightly).
>>
>> I am more concerned about adding overly complex code that would just
>> paper over a bigger problem. I also can't see a use for it outside of
>> the VM event discussion.
> 
> I think it is a generally appropriate thing to do to wait for
> callbacks to complete before tearing down their origin control
> structure. There may be cases where code structure makes this
> unnecessary, but I don't think this can be an expectation to
> all the users of the functionality. Hence my suggestion to
> possibly make this optional (driven directly or indirectly by
> the user of the registration function).

As you tend to say, we should not add code unless there is a user. So 
far the only possible user is dubious. If you find another user, then 
we can discuss whether this patch makes sense.
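For concreteness, the accounting Jan describes (waiting for callbacks to complete before tearing down their origin control structure) could be sketched like this. A real implementation would block or spin under a lock; every name here is invented for illustration and none of this is the actual event-channel code:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical sketch of callback accounting: teardown of a
 * notification source only succeeds once in-flight callbacks have
 * drained, and no new callbacks may start afterwards.  Invented names;
 * a real implementation would block or spin rather than just report.
 */

struct consumer {
    int in_flight;              /* callbacks currently executing */
    bool dying;                 /* teardown has started */
};

/* Called before invoking the callback; may refuse to start it. */
static bool callback_enter(struct consumer *c)
{
    if ( c->dying )
        return false;
    c->in_flight++;
    return true;
}

static void callback_exit(struct consumer *c)
{
    c->in_flight--;
}

/* Returns true once the control structure is safe to free. */
static bool consumer_teardown(struct consumer *c)
{
    c->dying = true;
    return c->in_flight == 0;
}
```

Whether this bookkeeping belongs in common code or in each user is exactly the disagreement in the thread; the sketch only shows the mechanism, not a position on where it should live.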

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 15:09:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 15:09:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58432.102740 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks5lI-0007b6-CS; Wed, 23 Dec 2020 15:09:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58432.102740; Wed, 23 Dec 2020 15:09:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks5lI-0007az-9D; Wed, 23 Dec 2020 15:09:12 +0000
Received: by outflank-mailman (input) for mailman id 58432;
 Wed, 23 Dec 2020 15:09:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zN8f=F3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ks5lG-0007Xe-RC
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 15:09:10 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fd335ec4-1b47-4f08-a281-88af36d20ae7;
 Wed, 23 Dec 2020 15:09:08 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B1DA1AE12;
 Wed, 23 Dec 2020 15:09:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fd335ec4-1b47-4f08-a281-88af36d20ae7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608736147; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ik094K7GTaKEt3Dfc0ug2tmiuOzwPeVqkS89Vn/ls5I=;
	b=Cf5qojWRh69iByStoRlhRGvFTPogQfGiTOPer9m7JU+tb7hkwXOenvC3LOytrVQZDuBb2L
	cObBHq+r8xKM3Ry7T/9M8OYJqw3jjh3vO4UsHKXCTvZjymOFSwfgUzLSgGmTACTVemwSTD
	Qt6y2xbTxQj2yFdEFBG3I+0T5eLDE4w=
Subject: Ping: [PATCH 2/4] x86/ACPI: fix S3 wakeup vector mapping
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>
References: <7f895b0e-f46f-8fe2-b0ac-e0503ef06a1f@suse.com>
 <c0210cbf-c07d-7fa6-2ae0-59764514836a@suse.com>
 <20201123152454.yjr3jgvsyucftrff@Air-de-Roger>
 <79776889-c566-5f07-abfe-2cb79cfa78fa@suse.com>
 <20201123160752.uzczcxnz5ytvtd46@Air-de-Roger>
 <fe2ec163-c6c7-12d6-0c89-57a238514e25@citrix.com>
 <094e9e27-e01f-6020-c091-f9c546e92028@suse.com>
 <1d971d71-9a7e-f97c-6575-7f427dc1553e@suse.com>
Message-ID: <301f6814-3827-5aab-c105-74ebee66091f@suse.com>
Date: Wed, 23 Dec 2020 16:09:07 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <1d971d71-9a7e-f97c-6575-7f427dc1553e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 30.11.2020 14:02, Jan Beulich wrote:
> On 24.11.2020 12:04, Jan Beulich wrote:
>> On 23.11.2020 17:14, Andrew Cooper wrote:
>>> On 23/11/2020 16:07, Roger Pau Monné wrote:
>>>> On Mon, Nov 23, 2020 at 04:30:05PM +0100, Jan Beulich wrote:
>>>>> On 23.11.2020 16:24, Roger Pau Monné wrote:
>>>>>> On Mon, Nov 23, 2020 at 01:40:12PM +0100, Jan Beulich wrote:
>>>>>>> --- a/xen/arch/x86/acpi/power.c
>>>>>>> +++ b/xen/arch/x86/acpi/power.c
>>>>>>> @@ -174,17 +174,20 @@ static void acpi_sleep_prepare(u32 state
>>>>>>>      if ( state != ACPI_STATE_S3 )
>>>>>>>          return;
>>>>>>>  
>>>>>>> -    wakeup_vector_va = __acpi_map_table(
>>>>>>> -        acpi_sinfo.wakeup_vector, sizeof(uint64_t));
>>>>>>> -
>>>>>>>      /* TBoot will set resume vector itself (when it is safe to do so). */
>>>>>>>      if ( tboot_in_measured_env() )
>>>>>>>          return;
>>>>>>>  
>>>>>>> +    set_fixmap(FIX_ACPI_END, acpi_sinfo.wakeup_vector);
>>>>>>> +    wakeup_vector_va = fix_to_virt(FIX_ACPI_END) +
>>>>>>> +                       PAGE_OFFSET(acpi_sinfo.wakeup_vector);
>>>>>>> +
>>>>>>>      if ( acpi_sinfo.vector_width == 32 )
>>>>>>>          *(uint32_t *)wakeup_vector_va = bootsym_phys(wakeup_start);
>>>>>>>      else
>>>>>>>          *(uint64_t *)wakeup_vector_va = bootsym_phys(wakeup_start);
>>>>>>> +
>>>>>>> +    clear_fixmap(FIX_ACPI_END);
>>>>>> Why not use vmap here instead of the fixmap?
>>>>> Considering the S3 path is relatively fragile (as in: we end up
>>>>> breaking it more often than about anything else) I wanted to
>>>>> make as little of a change as possible. Hence I decided to stick
>>>>> to the fixmap use that was (indirectly) used before as well.
>>>> Unless there's a restriction to use the ACPI fixmap entry I would just
>>>> switch to use vmap, as it's used extensively in the code and less
>>>> likely to trigger issues in the future, or else a bunch of other stuff
>>>> would also be broken.
>>>>
>>>> IMO doing the mapping differently here when it's not required will end
>>>> up making this code more fragile in the long run.
>>>
>>> We can't enter S3 at all until dom0 has booted, as one detail has to
>>> come from AML.
>>>
>>> Therefore, we're fully up and running by this point, and vmap() will be
>>> fine.
>>
>> That's not the point of my reservation. The code here runs when the
>> system already isn't "fully up and running" anymore. Secondary CPUs
>> have already been offlined, and we're around the point where we
>> disable interrupts. Granted when we disable them, we also turn off
>> spin debugging, but I'd still prefer a path that's not susceptible
>> to IRQ state. What I admit I didn't pay attention to is that
>> set_fixmap(), by virtue of being a thin wrapper around
>> map_pages_to_xen(), similarly uses locks. IOW - okay, I'll switch
>> to vmap(). You're both aware that it, unlike set_fixmap(), can
>> fail, aren't you?
> 
> Would at least one of the two of you please explicitly reply to
> this last question, clarifying that you're indeed okay with this
> new possible source of S3 entry failing?

I think we want to get this regression addressed, but without the
explicit consent of at least one of you that introducing a new
error source to the S3 path is indeed okay I'd prefer not to
prepare and then send v2. I expect there's going to be some code
churn (not the least because acpi_sleep_prepare() currently
returns void), and I'd rather avoid doing the conversion work
just to then be told to go back to the previous approach.
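To make the trade-off concrete: once the mapping primitive can fail the way vmap() can (and set_fixmap() cannot), the prepare step has to grow an error path of its own, which is the code churn referred to above. A hedged sketch, with invented names and none of the actual Xen S3 code:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/*
 * Hypothetical sketch of the error propagation under discussion: the
 * formerly void-returning prepare step now reports failure when the
 * vmap-like mapping primitive returns NULL.  Invented names throughout.
 */

static int fail_mapping;        /* illustration hook: force a mapping failure */

static void *map_wakeup_vector(uintptr_t pa)
{
    (void)pa;                   /* a real implementation would map 'pa' */
    return fail_mapping ? NULL : malloc(sizeof(uint64_t));
}

static int sleep_prepare(uintptr_t wakeup_pa, uint64_t resume_addr)
{
    uint64_t *va = map_wakeup_vector(wakeup_pa);

    if ( !va )
        return -1;              /* new failure source on the S3 entry path */

    *va = resume_addr;          /* write the resume vector */
    free(va);                   /* stands in for unmapping */
    return 0;
}
```

The caller of sleep_prepare() must now decide what S3 entry does when the mapping fails, which is precisely the consent being asked for.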

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 15:13:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 15:13:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58436.102752 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks5pN-0008Sd-Uf; Wed, 23 Dec 2020 15:13:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58436.102752; Wed, 23 Dec 2020 15:13:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks5pN-0008SW-RT; Wed, 23 Dec 2020 15:13:25 +0000
Received: by outflank-mailman (input) for mailman id 58436;
 Wed, 23 Dec 2020 15:13:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zN8f=F3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ks5pM-0008SQ-Ih
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 15:13:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 12d769bc-6208-4d9a-848e-dc635848956e;
 Wed, 23 Dec 2020 15:13:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5E3F8AE12;
 Wed, 23 Dec 2020 15:13:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12d769bc-6208-4d9a-848e-dc635848956e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608736402; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=UIvYxhcjcsMxBwG4F+TcWL6/ZoSGotF19ElMz38C6KU=;
	b=m7Q+x3HOhKKvbJo8AFRhB7L1Td35aa2KDNT8o9r7UtsdRGkcpiTO+pYJtgPoSPQdRo/7NG
	LAYY8S45b0bD+HQjjsqEFgxP9iARWfj4Atn+mItfcEu5ZoaXBJneHUhm+eKpdUukkbymcg
	7RUhQrCFQSxPx9peRo0hwLUzJxJJS7s=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2] gnttab: defer allocation of status frame tracking array
Message-ID: <57dc915c-c373-5003-80f7-279dd300d571@suse.com>
Date: Wed, 23 Dec 2020 16:13:21 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

This array can be large when many grant frames are permitted; avoid
allocating it when it's not going to be used anyway, by doing this only
in gnttab_populate_status_frames().

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Defer allocation to when a domain actually switches to the v2 grant
    API.

--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1725,6 +1728,17 @@ gnttab_populate_status_frames(struct dom
     /* Make sure, prior version checks are architectural visible */
     block_speculation();
 
+    if ( gt->status == ZERO_BLOCK_PTR )
+    {
+        gt->status = xzalloc_array(grant_status_t *,
+                                   grant_to_status_frames(gt->max_grant_frames));
+        if ( !gt->status )
+        {
+            gt->status = ZERO_BLOCK_PTR;
+            return -ENOMEM;
+        }
+    }
+
     for ( i = nr_status_frames(gt); i < req_status_frames; i++ )
     {
         if ( (gt->status[i] = alloc_xenheap_page()) == NULL )
@@ -1745,18 +1759,23 @@ status_alloc_failed:
         free_xenheap_page(gt->status[i]);
         gt->status[i] = NULL;
     }
+    if ( !nr_status_frames(gt) )
+    {
+        xfree(gt->status);
+        gt->status = ZERO_BLOCK_PTR;
+    }
     return -ENOMEM;
 }
 
 static int
 gnttab_unpopulate_status_frames(struct domain *d, struct grant_table *gt)
 {
-    unsigned int i;
+    unsigned int i, n = nr_status_frames(gt);
 
     /* Make sure, prior version checks are architectural visible */
     block_speculation();
 
-    for ( i = 0; i < nr_status_frames(gt); i++ )
+    for ( i = 0; i < n; i++ )
     {
         struct page_info *pg = virt_to_page(gt->status[i]);
         gfn_t gfn = gnttab_get_frame_gfn(gt, true, i);
@@ -1811,12 +1830,12 @@ gnttab_unpopulate_status_frames(struct d
         page_set_owner(pg, NULL);
     }
 
-    for ( i = 0; i < nr_status_frames(gt); i++ )
-    {
-        free_xenheap_page(gt->status[i]);
-        gt->status[i] = NULL;
-    }
     gt->nr_status_frames = 0;
+    smp_wmb(); /* Just in case - all accesses should be under lock. */
+    for ( i = 0; i < n; i++ )
+        free_xenheap_page(gt->status[i]);
+    xfree(gt->status);
+    gt->status = ZERO_BLOCK_PTR;
 
     return 0;
 }
@@ -1943,11 +1962,11 @@ int grant_table_init(struct domain *d, i
     if ( gt->shared_raw == NULL )
         goto out;
 
-    /* Status pages for grant table - for version 2 */
-    gt->status = xzalloc_array(grant_status_t *,
-                               grant_to_status_frames(gt->max_grant_frames));
-    if ( gt->status == NULL )
-        goto out;
+    /*
+     * Status page tracking array for v2 gets allocated on demand. But don't
+     * leave a NULL pointer there.
+     */
+    gt->status = ZERO_BLOCK_PTR;
 
     grant_write_lock(gt);
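
The deferred-allocation pattern the patch applies can be sketched generically as follows: a ZERO_BLOCK_PTR-style sentinel marks "not yet allocated", the array is allocated on first use, and the sentinel is restored on failure so a later attempt can retry. Names are invented; this is not the grant-table code itself:

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Generic sketch of on-demand allocation with a non-NULL sentinel:
 * the sentinel means "not yet allocated" while keeping the pointer
 * non-NULL, allocation happens on first populate, and the sentinel is
 * restored on failure.  Invented names; not the Xen grant-table code.
 */

static char zero_block;
#define ZERO_BLOCK_PTR ((void *)&zero_block)

struct table {
    void **status;              /* ZERO_BLOCK_PTR until first populate */
    size_t nr;
};

static void table_init(struct table *t)
{
    t->status = ZERO_BLOCK_PTR; /* defer the real allocation */
    t->nr = 0;
}

static int table_populate(struct table *t, size_t frames)
{
    if ( t->status == ZERO_BLOCK_PTR )
    {
        t->status = calloc(frames, sizeof(*t->status));
        if ( !t->status )
        {
            t->status = ZERO_BLOCK_PTR; /* keep the sentinel on failure */
            return -1;
        }
        t->nr = frames;
    }
    return 0;
}
```

Keeping the sentinel instead of NULL means readers of the pointer never need a separate "allocated yet?" check before dereferencing-adjacent logic, mirroring the patch's "don't leave a NULL pointer there" comment.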
 


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 15:15:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 15:15:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58440.102764 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks5rM-00009k-FB; Wed, 23 Dec 2020 15:15:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58440.102764; Wed, 23 Dec 2020 15:15:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks5rM-00009d-BV; Wed, 23 Dec 2020 15:15:28 +0000
Received: by outflank-mailman (input) for mailman id 58440;
 Wed, 23 Dec 2020 15:15:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9AJ0=F3=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1ks5rL-00009Y-0A
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 15:15:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3f3677ce-9221-40dc-8940-8c101ed97748;
 Wed, 23 Dec 2020 15:15:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 37421AD11;
 Wed, 23 Dec 2020 15:15:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3f3677ce-9221-40dc-8940-8c101ed97748
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608736525; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=emcZMgo805YLJCo94uv+HM2FTIjdYN8Q4gRanWD1aCA=;
	b=KI4czuQRIqvGPV2XjDrm30OeQLVwerXoM5/2xlzaEbAzfSWa3/tjWbtCZSax9c2CvLgA0K
	qmk3o1XOnhlCxutTkxOlybN977Jg77TWXI7P2O9UTWrdKEAGm1zZYGTUyooGF9yRK5SzrD
	U11KdF2DRH4q3AUeyEJ1jq8ogu8AYQo=
Message-ID: <8fef72e972b00f89eb460b292298d755207d9501.camel@suse.com>
Subject: Re: Ryzen 4000 (Mobile) Softlocks/Micro-stutters
From: Dario Faggioli <dfaggioli@suse.com>
To: Jan Beulich <jbeulich@suse.com>, Dylanger Daly
 <dylangerdaly@protonmail.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Date: Wed, 23 Dec 2020 16:15:24 +0100
In-Reply-To: <5db65e32-31aa-57a5-f82b-ebe497f493f5@suse.com>
References: 
	<9lQU_gCfRzGyyNb2j86pxTMi1IET1Iq7iK3994agUZPrTI5Xd-aCJAaRYuJlD3L5LT2WaV4N3-YF4xKl5ukialT0M_YD0ve6gmDFFfatpXw=@protonmail.com>
	 <72589937-a918-96c8-4589-6d30efaead9a@suse.com>
	 <U00A4lb9CgpRhV9huYxk5kvyAAam9UcFJ7h2K1a6-M84ef8W58V4Shq7hmU5WKh3rKaVRl6EiTXVmDc-czrBJvyf7h1mjh3Dc3SPvj8qIog=@protonmail.com>
	 <5db65e32-31aa-57a5-f82b-ebe497f493f5@suse.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-7l6Sh3Yimzwl2y153BMe"
User-Agent: Evolution 3.38.2 (by Flathub.org) 
MIME-Version: 1.0


--=-7l6Sh3Yimzwl2y153BMe
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi,

Interesting situation (so to speak... :-O)

On Thu, 2020-10-15 at 11:20 +0200, Jan Beulich wrote:
> On 15.10.2020 11:14, Dylanger Daly wrote:
> > Indeed this is for dom0, I only recently tried limiting a domU to 1
> > core and observed absolutely no softlocks, UI animations are smooth
> > as butter with 1 core only.
> >
> > Indeed I believe this is a CPU Scheduling issue, I've tried both
> > the older credit and RTDS however both don't boot correctly.
>
> This wants reporting (with sufficient data, i.e. at least a serial
> log) as separate issues.
>
Indeed.

So, just to be sure I am understanding the symptoms correctly: here you
say that Credit (and RTDS) "don't boot correctly". In another mail, I
think you said that Credit boots, but is unusable due to lag and
lockups... Which is which?

Also, since this looks like it is SMT related, is Credit bootable
and/or usable with SMT off? And with SMT on?

> > The number of cores on this CPU is 8, 16 threads however Qubes by
> > default disables SMT, sched_credit2_max_cpus_runqueue is 16 by
> > default, I've tried testing with setting this to 7 or 8 however
> > it'll either not boot, or nothing will change.
>
> Failure to boot, unless with insane command line options, should
> always be reported so it can be fixed.
>
Yeah and facts are:

1) no value of the sched_credit2_max_cpus_runqueue option should
   prevent the system from booting. If it does, it's definitely a bug.

   It'd be "wonderful" to see _how_ it does that, by seeing the
   stacktrace (preferably of a debug build), if there is one. Or, if
   the system locks, e.g., knowing whether it is responsive at least
   to debug keys (and, if yes, what the output of the 'r' debug key
   looks like).

2) A suboptimal value of sched_credit2_max_cpus_runqueue may indeed be
   associated with performance issues, including lags and lockups.
   *BUT* that usually happens on large boxes, with like 128 or 256
   CPUs. In your case, having either 8 or 16 CPUs in the same Credit2
   runqueue (or in two runqueues if you leave SMT on and use 8 as the
   value of that param) should work just fine. And, for sure, it
   shouldn't hang.

So, again, I'm not doubting it's happening, but I can't immediately
think of a root cause, especially without seeing more info.

In the absence of that, I only have more questions. :-/ E.g., how are
you enabling and disabling SMT: via the command line parameter, or via
the BIOS?

Also, can you perhaps try either upstream 4.14 Xen (from sources, I
mean) or the packages for a distro different from QubesOS (perhaps
installing such a distro, temporarily, on an external HD or whatever)?

Note that I am by no means trying to blame Qubes or anything else in
particular... I'm just trying to understand.

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-7l6Sh3Yimzwl2y153BMe
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl/jXwwACgkQFkJ4iaW4
c+59uBAAqcuU145XNduWZdJcO5KQBcTGl2kZbHMTCns/nK4tUlksavCQk6h0yJz5
xYQstJVNpERSPCJljU45IQWPFhYXLuv4/+icJPsj4u7/UjTuFbeKSP5JUJjfHstJ
9LtOEaQABy5I6PEntwbrmqTMRLLBG7SjfhagALI6JdR1sxq9ZXpKBP/jaLyHgdMb
ts0y1vP4QQ76V2r0DSxfGa2OWQWGKoh1J5g99a24oHV0o8MzaLZdPF2MZOUx5M3I
8iKiERG/FDHY19Bgh8F/1WtC0MYrnCVs0RLh9rphB7z6bhJjNZf/xwlUswZ6MhBO
/MicpekD985eNq5gWyGCRoczdOtrAibc/zWZOwAbHQAGBArEvXF4qXfH71kAjlKl
KY1CrgeeJdaVmmVmrA+XZsuSkX79cA/a0NtrCgxcO7YFBeUgeaQGYxHFYy4SOOqW
sUkV4uzgBJotJrWEv/Xr1d54FPyeZr/HutxR7WXNP5Gi2nR3MX1vG1KCSFn4q1bs
DTSMysbd1zhxiId/oxqa1x2WaglBpMmBVMM8bnX6zzm5gYuv1RAnGaZo1v2MCvch
kyYNAb32YBLKU6HWHL7ScYRbJGfyGhV2WoCxjmvEqBy8bv8lBg4+DwWWYQhkBTbm
tTltKRO68wRyELPaVWB50ecniND5H/Y9buSKPW5obhhVFRQrtOs=
=zBRF
-----END PGP SIGNATURE-----

--=-7l6Sh3Yimzwl2y153BMe--



From xen-devel-bounces@lists.xenproject.org Wed Dec 23 15:16:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 15:16:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58444.102775 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks5sB-0000Gz-ON; Wed, 23 Dec 2020 15:16:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58444.102775; Wed, 23 Dec 2020 15:16:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks5sB-0000Gs-L6; Wed, 23 Dec 2020 15:16:19 +0000
Received: by outflank-mailman (input) for mailman id 58444;
 Wed, 23 Dec 2020 15:16:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Alxq=F3=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1ks5sA-0000Gl-Ab
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 15:16:18 +0000
Received: from mail-wr1-x429.google.com (unknown [2a00:1450:4864:20::429])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0c320921-ec4e-435e-9c11-a253089fee79;
 Wed, 23 Dec 2020 15:16:17 +0000 (UTC)
Received: by mail-wr1-x429.google.com with SMTP id 91so18950471wrj.7
 for <xen-devel@lists.xenproject.org>; Wed, 23 Dec 2020 07:16:17 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0c320921-ec4e-435e-9c11-a253089fee79
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=6yuXk0UjfvyrDacLoijz4xrdR4nujCvawwyyWsbeONQ=;
        b=YTkPoJ6Ruste6+sV1cDa2MxDDAUzR28uvUVRwze75nCAV9l7nlEDo8OsozjmWRLKTY
         Hv/9CUUF3oe+K6OsoB874smceRCKoybDc9ZXrdPfBHr/qwIYHM1oPqajTaF8nLNCOm9L
         eEc60Li6eKTyocWRBPbu8sxl68axuzczmYRWbxRQCCPS/oqyl2vlR2Do06QN+d3Ke3Ww
         RHFo2JeQuB021sCxtUjimXtRBoTqSKrRTV0nz5nR3SW0CI3dZf2XItHIIfT5K6i+VJ4d
         XSx3KKz1qrucXeCOg8bDkCDYQfqT4QGZSa9rdNuAS/A1UFm7nVN3niunV58Pe78MdxdQ
         t/Ug==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=6yuXk0UjfvyrDacLoijz4xrdR4nujCvawwyyWsbeONQ=;
        b=lnh2ny7tS6++vfEM3mrAVSzESSJhkfl1LCTWhupGGBYaT/OGq8uUlaM6cNps58KnTN
         xKoqGXuJOYsI2U/00zNwRnuFl+mpH3h9IO91VMutlS8rgC8OtXOk/5gKHADohfZwN41J
         BM/dzdIPL28dLbBHeJSnc5cg1wsqOuZNDnLHgwUlBqYMuitEvU2PlYHi8Mzu8oRG3tnR
         Yttfci+dXvR5N4uP6ANfTPnODwEHRySlUmNzHwmuzCQDLOWyF0a6pTjzG7sOTqJXV3ht
         KeRSxM73PCleigUBhnvsP6/YyZDzyMNpJCDgUEG1zoSYAxtiKRaA4MGhyi8CraNIjyVJ
         GzOg==
X-Gm-Message-State: AOAM531TqtICIZ/rNFVcHl+He9yNTy8lxrU6mkt/n8KrSjHyzaI5iOES
	dALPUPpt9mb8k+U4sJQW8SaRkW7lH6iLOvZBuGY=
X-Google-Smtp-Source: ABdhPJwzZ1SON9OQ3yANlPnl9YUx32njXGwkngb8UFMl0Xp1sT56FfqonwOki49qI4Fbi6C7HVZ2rhvcwjmrnL/VmqU=
X-Received: by 2002:a5d:68ce:: with SMTP id p14mr29733998wrw.386.1608736576412;
 Wed, 23 Dec 2020 07:16:16 -0800 (PST)
MIME-Version: 1.0
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d821c715-966a-b48b-a877-c5dac36822f0@suse.com> <17c90493-b438-fbc1-ca10-3bc4d89c4e5e@xen.org>
 <7a768bcd-80c1-d193-8796-7fb6720fa22a@suse.com> <1a8250f5-ea49-ac3a-e992-be7ec40deba9@xen.org>
 <CABfawhkQcUD4f62zpg0cyrdQgG82XtpYRZZ_-50hjagooT530A@mail.gmail.com>
 <5862eb24-d894-455a-13ac-61af54f949e7@xen.org> <CABfawhkWQiOhLL8f3NzoWbeuag-f+YOOK0i_LJzZq5Yvoh=oHQ@mail.gmail.com>
 <fd384990-376e-40f4-f0b8-1a889b3a0c51@suse.com> <9ee6016a-d3b3-c847-4775-0e05c8578110@xen.org>
 <CABfawhkcHX+FSRRfYwUNd8DweW04=91sSg2PTWy7vjq_DXwMQg@mail.gmail.com>
 <d365ce00-bc3a-de7c-565a-c4cb61063e74@suse.com> <ed5fc3e2-42b1-477a-c424-05ddf7fd3bf4@xen.org>
 <3b339f30-57db-caf6-fd7e-84199f98546f@suse.com> <9c214bc1-61db-5b33-f610-40c2a59edb75@xen.org>
In-Reply-To: <9c214bc1-61db-5b33-f610-40c2a59edb75@xen.org>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Wed, 23 Dec 2020 10:15:39 -0500
Message-ID: <CABfawhkFhn-f_6akvq74v2pZJi=fkBVENRTxm_NUwJkN+pMkAg@mail.gmail.com>
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Julien Grall <julien@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Tamas K Lengyel <lengyelt@ainfosec.com>, 
	Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>, Alexandru Isaila <aisaila@bitdefender.com>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Wed, Dec 23, 2020 at 9:44 AM Julien Grall <julien@xen.org> wrote:
>
>
>
> On 23/12/2020 13:41, Jan Beulich wrote:
> > On 23.12.2020 14:33, Julien Grall wrote:
> >> On 23/12/2020 13:12, Jan Beulich wrote:
> >>> From the input by both of you I still can't
> >>> conclude whether this patch should remain as is in v4, or revert
> >>> back to its v2 version. Please can we get this settled so I can get
> >>> v4 out?
> >>
> >> I haven't had time to investigate the rest of the VM event code to find
> >> other cases where this may happen. I still think there is a bigger
> >> problem in the VM event code, but the maintainer disagrees here.
> >>
> >> At which point, I see limited reason to try to paper over this in the
> >> common code. So I would rather ack/merge v2 than v3.
> >
> > Since I expect Tamas and/or the Bitdefender folks to be of the
> > opposite opinion, there's still no way out, at least if "rather
> > ack" implies a nak for v3.
>
> The only way out here is for someone to justify why this patch is
> sufficient for the VM event race. I am not convinced it is (see more below).
>
> > Personally, if this expectation of
> > mine is correct, I'd prefer to keep the accounting but make it
> > optional (as suggested in a post-commit-message remark).
> > That'll eliminate the overhead you appear to be concerned of,
> > but of course it'll further complicate the logic (albeit just
> > slightly).
>
> I am more concerned about adding overly complex code that would just
> paper over a bigger problem. I also can't see a use for it outside of
> the VM event discussion.
>
> I had another look at the code. As I mentioned in the past,
> vm_put_event_request() is able to deal with d != current->domain (it
> will set VM_EVENT_FLAG_FOREIGN). There are 4 callers for the function:
>     1) p2m_mem_paging_drop_page()
>     2) p2m_mem_paging_populate()
>     3) mem_sharing_notify_enomem()
>     4) monitor_traps()
>
> 1) and 2) belong to the mem paging subsystem. Tamas suggested that it
> has been abandoned.
>
> 4) can only be called with the current domain.
>
> This leaves us with 3), in the mem sharing subsystem. As this is called
> from the memory hypercalls, it looks possible to me that d != current->domain.
> The code also seems to be maintained (there were recent non-trivial
> changes).
>
> Can one of the VM event developers come up with a justification for why
> this patch is enough to make the VM event subsystem safe?

3) is an unused feature as well that likely should be dropped at some
point. It can also only be called with current->domain; it effectively
just signals an out-of-memory error to a vm_event listener in dom0
when populating an entry for the VM that EPT-faulted fails. I guess
the idea was that the dom0 agent would be able to make a decision on
how to proceed (i.e., which VM to kill to free up memory).


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 15:16:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 15:16:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58446.102788 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks5sN-0000LQ-1G; Wed, 23 Dec 2020 15:16:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58446.102788; Wed, 23 Dec 2020 15:16:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks5sM-0000LJ-Th; Wed, 23 Dec 2020 15:16:30 +0000
Received: by outflank-mailman (input) for mailman id 58446;
 Wed, 23 Dec 2020 15:16:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ks5sL-0000Kz-SG
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 15:16:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks5sL-0000hy-CR; Wed, 23 Dec 2020 15:16:29 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks5sL-0003L7-3X; Wed, 23 Dec 2020 15:16:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=c31ZVpZuYw6FZc1yugj2Tp6Yz1OI6OYjHMuor1fy1sw=; b=DroT8KAWSUUA3zdxadjslA025D
	5fXNPgQfb91sNp7y3wYSdAe3w8xl8y0VQP8TXHGvCpUifJTRTnDoXHC4lzmzqvwO7q/NAXMcRi33Z
	FEG5FczGXVskTWDefMxAQHu34qVtn170VEmoQ3UK+XZyhM8wd6uIZ/7bTJcz95wKTnEk=;
Subject: Re: [PATCH for-4.15 3/4] [RFC] xen/iommu: x86: Clear the root
 page-table before freeing the page-tables
To: Jan Beulich <jbeulich@suse.com>
Cc: hongyxia@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20201222154338.9459-1-julien@xen.org>
 <20201222154338.9459-4-julien@xen.org>
 <499e6d5a-e8ac-56db-1af9-70469b6a06b9@suse.com>
 <8b394c44-5bdb-9d82-b211-5a4ee3473568@xen.org>
 <19e92d90-ed9a-4bd6-79f4-b761b5a039c6@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <96ce1b10-9764-b71e-ac26-982ba8dcc34d@xen.org>
Date: Wed, 23 Dec 2020 15:16:27 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <19e92d90-ed9a-4bd6-79f4-b761b5a039c6@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 23/12/2020 15:00, Jan Beulich wrote:
> On 23.12.2020 15:56, Julien Grall wrote:
>> On 23/12/2020 14:12, Jan Beulich wrote:
>>> On 22.12.2020 16:43, Julien Grall wrote:
>>>> This is an RFC because it would break AMD IOMMU driver. One option would
>>>> be to move the call to the teardown callback earlier on. Any opinions?
>>>
>>> We already have
>>>
>>> static void amd_iommu_domain_destroy(struct domain *d)
>>> {
>>>       dom_iommu(d)->arch.amd.root_table = NULL;
>>> }
>>>
>>> and this function is AMD's teardown handler. Hence I suppose
>>> doing the same for VT-d would be quite reasonable. And really
>>> VT-d's iommu_domain_teardown() also already has
>>>
>>>       hd->arch.vtd.pgd_maddr = 0;
>>
>> Let me have a look if that works.
>>
>>>
>>> I guess what's missing is prevention of the root table
>>> getting re-setup.
>>
> >> This is taken care of in the follow-up patch by forbidding page-table
> >> allocation. I can mention it in the commit message.
> 
> My expectation is that with that subsequent change the change here
> (or any variant of it) would become unnecessary.

I am not sure. iommu_unmap() would still get called from put_page().
Are you suggesting gating that code on d->is_dying as well?

Even if this patch is deemed unnecessary to fix the issue, the issue
was quite hard to chase/reproduce.

I think it would still be good to harden the code by zeroing
hd->arch.vtd.pgd_maddr, to avoid anyone else wasting two days because
the pointer was still "valid".

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 15:51:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 15:51:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58454.102803 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks6QE-0003pg-RS; Wed, 23 Dec 2020 15:51:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58454.102803; Wed, 23 Dec 2020 15:51:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks6QE-0003pZ-NZ; Wed, 23 Dec 2020 15:51:30 +0000
Received: by outflank-mailman (input) for mailman id 58454;
 Wed, 23 Dec 2020 15:51:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9AJ0=F3=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1ks6QE-0003pU-3t
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 15:51:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7e0fa3d0-89d4-4c73-b093-588a36e0f624;
 Wed, 23 Dec 2020 15:51:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8DF8BAD11;
 Wed, 23 Dec 2020 15:51:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7e0fa3d0-89d4-4c73-b093-588a36e0f624
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608738687; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=ypQhyRghUymqxZWvKkq/TyX4aEU3MHvGuKy3tyVywx0=;
	b=ksUhdypwsDq03crr6bf9gFeuGuUVCNXtqNFLDwTIdwUWB20uqdxiTPR5mXiA031awvzp3G
	o99H8QrvyJ7AylJXt6j6hjlUtmM7AWPDna7FTwfTSriUh6FI9bf31w8+8dgTk5G3p4AVPg
	fr4CS667JrGFl7RI18B73EaC9sIF86k=
Message-ID: <815f3bc3a28a165e8fa41c6954a6d00db656e3c3.camel@suse.com>
Subject: Re: Ryzen 4000 (Mobile) Softlocks/Micro-stutters
From: Dario Faggioli <dfaggioli@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, 
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Dylanger
 Daly <dylangerdaly@protonmail.com>
Date: Wed, 23 Dec 2020 16:51:26 +0100
In-Reply-To: <eba12ea4-5dda-f112-0e33-714e859b9b03@suse.com>
References: 
	<9lQU_gCfRzGyyNb2j86pxTMi1IET1Iq7iK3994agUZPrTI5Xd-aCJAaRYuJlD3L5LT2WaV4N3-YF4xKl5ukialT0M_YD0ve6gmDFFfatpXw=@protonmail.com>
	 <2cc5da3e-0ad0-4647-f1ca-190788c2910b@citrix.com>
	 <3pKjdPYCiRimYjqHQP0xd_vqhoTOJqthTXOrY_rLeNvnQEpIF24gXDKgRhmr95JfARJzbVJVbfTrrJeiovGVHGbV0QBSZ2jez2Y_wt6db7g=@protonmail.com>
	 <768d9dbb-4387-099f-b489-7952d7e883b0@suse.com>
	 <T95F2Mi9RUUZ4w2wdeRqqM4uRyKgOFQNyooqEoTTDByK-0t9hZ1izG68lf90iExeYabEPSEv8puUeg0SEJtOmz8vYbVox2za28DXLd_h-_s=@protonmail.com>
	 <eba12ea4-5dda-f112-0e33-714e859b9b03@suse.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-tJmc8m6AEzHTGJjq4Y+3"
User-Agent: Evolution 3.38.2 (by Flathub.org) 
MIME-Version: 1.0


--=-tJmc8m6AEzHTGJjq4Y+3
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2020-12-23 at 10:59 +0100, Jan Beulich wrote:
> On 23.12.2020 00:04, Dylanger Daly wrote:
> > I think I've narrowed down what could be the issue.
> >
> > I think disabling SMT on any AMD Zen 2 CPU is breaking Xen's
> > Credit2 scheduler, I can only test on AMD Ryzen 4000 based Mobile
> > CPUs, but I think this is what is causing issues with
> > softlocks/having to pin dom0 1 vcpu.
>
> Dario,
>
Hi, and thanks for bringing me in. :-)

> does this maybe ring any bells?
>
Not really. :-(

Unfortunately, I don't think I have access to a Ryzen CPU (but I'll try
to look better).

I do have access to an EPYC2 (Rome) CPU, i.e., an EPYC 7742 with 256
CPUs (128 cores x 2 threads). I have just tried booting Xen 4.14 there
and:

1) with all the 256 CPUs enabled (i.e., smt=1), Credit2 scheduler and
   the default value (16) for sched_credit2_max_cpus_runqueue, the
   system seems to work fine.

   There are 16 runqueues with 16 CPUs inside each of them, and they
   seem to be constructed reasonably (siblings are in the same
   runqueue, etc).

   I don't have a GUI on that box for checking whether mouse movements
   are fluid, but I've run some basic tests from the terminal and
   everything looks normal.

   Dom0 has 256 vCPUs and no pinning.

2) with only 128 CPUs (i.e., booting Xen with smt=0), Credit2 and
   still 16 in sched_credit2_max_cpus_runqueue, it also seems to work
   fine.

   There are again 16 runqueues, each one with 8 CPUs, and the system
   seems responsive enough.

   Dom0 has 128 vCPUs and no pinning.

I can try Credit as well, later, but if this is something CPU arch/gen
related, it seems to be a Ryzen rather than a Zen 2 thing...

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-tJmc8m6AEzHTGJjq4Y+3
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl/jZ34ACgkQFkJ4iaW4
c+5wwA//ch9G0WUtgN5EDH96ww5xh7OkBrIpVBhUUxf3aLq/NbGIEFfDeXadKnHC
VHC6bmMYanBDe0kppa9jDmAmm9V9UPQDGNGzwb3w4+lzA6R/YCQ/i+GZSA1JxzPc
Ku+acv+cMn3nTSuq8UC85vufyHWXfxlCPTvs7hCvPDiFaObpLy7HW2pcETf8xVSE
tdl+UR2Prcx5CuPn5ZlyfFtDreaxXI3yBRYz+9tcmAcTg5Dgq4zpRB8iGoz+pg3Q
isXLTOO2puvDY8dm5oPro+UjcgcWmqt+0RqOLoLLLZFVcERfRxNxzwVO20grNpom
A/d786Y3exo5hVay4cYnAG+hv0domWbcg5a99cKW9GVTlh9bm5xTsQKLZ7Ffapdd
ZgrnJOjmzIgQD705ABZ+kP7hPXshFPNXw2ejS9kw8P71thCwIMIs+2PeI+dTXjmU
Ky/09xvrudyvlB7lfYA2FmseLZ1gp7v0ovztiFlZwSsRxtBsN/SiAQjybdSiLye+
SOS4kJK5HV4uefe1EI42WQYfmU64EY5+CvrcFVdnjRSnZWW0gQ5t6TksVwMR43tE
4c22U1ftFxV0djk3MnciMlGaJ7kmht04FdDWnfQlINyN2jJ1bLAX3wjnv32SD4gK
cdLNPguNU77I3HJzCXvqaeg9qhWJERRKMX73qGuIS7wQXKZrKFs=
=GYJB
-----END PGP SIGNATURE-----

--=-tJmc8m6AEzHTGJjq4Y+3--



From xen-devel-bounces@lists.xenproject.org Wed Dec 23 16:05:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 16:05:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58460.102818 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks6dl-0005Ni-6S; Wed, 23 Dec 2020 16:05:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58460.102818; Wed, 23 Dec 2020 16:05:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks6dl-0005Nb-2m; Wed, 23 Dec 2020 16:05:29 +0000
Received: by outflank-mailman (input) for mailman id 58460;
 Wed, 23 Dec 2020 16:05:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zN8f=F3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ks6dj-0005NW-SY
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 16:05:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 63ff16fd-c773-4ebf-b5ce-eb4b6a79d068;
 Wed, 23 Dec 2020 16:05:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A1E32ACC6;
 Wed, 23 Dec 2020 16:05:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 63ff16fd-c773-4ebf-b5ce-eb4b6a79d068
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608739525; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=7+i/+o1/2Ftjswwt7furQ6LnkH0Q746Q6++4dF3IrYI=;
	b=FvpeRJ6R0qRNkLRLa2ooeJ1psr3SpA+kAsYhdGFI3LWg8/Z9iDDORJKwruY2P013484Sd8
	d5xzGVNpSIFiH6QXzRGmXonGhadZE58auc7y9aQX6tHqAaxe3e6Hqpn3gnDaIjK0eeq8hW
	JDoe/hciuVKDEYflvvdV+j3znRGktig=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2] lib: drop (replace) debug_build()
Message-ID: <ae31ccf1-7334-cdf9-9b90-edac7ca4e148@suse.com>
Date: Wed, 23 Dec 2020 17:05:25 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Its expansion shouldn't be tied to NDEBUG - down the road we may want to
allow enabling assertions independently of CONFIG_DEBUG. Replace the few
uses by a new xen_build_info() helper, subsuming gcov_string at the same
time (while replacing the stale CONFIG_GCOV used there) and also adding
CONFIG_UBSAN indication.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Introduce xen_build_info() including also gcov and ubsan info.

--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -175,14 +175,14 @@ static void print_xen_info(void)
 {
     char taint_str[TAINT_STRING_MAX_LEN];
 
-    printk("----[ Xen-%d.%d%s  %s  debug=%c " gcov_string "  %s ]----\n",
+    printk("----[ Xen-%d.%d%s  %s  %s  %s ]----\n",
            xen_major_version(), xen_minor_version(), xen_extra_version(),
 #ifdef CONFIG_ARM_32
            "arm32",
 #else
            "arm64",
 #endif
-           debug_build() ? 'y' : 'n', print_tainted(taint_str));
+           xen_build_info(), print_tainted(taint_str));
 }
 
 #ifdef CONFIG_ARM_32
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -29,9 +29,9 @@ static void print_xen_info(void)
 {
     char taint_str[TAINT_STRING_MAX_LEN];
 
-    printk("----[ Xen-%d.%d%s  x86_64  debug=%c " gcov_string "  %s ]----\n",
+    printk("----[ Xen-%d.%d%s  x86_64  %s  %s ]----\n",
            xen_major_version(), xen_minor_version(), xen_extra_version(),
-           debug_build() ? 'y' : 'n', print_tainted(taint_str));
+           xen_build_info(), print_tainted(taint_str));
 }
 
 enum context { CTXT_hypervisor, CTXT_pv_guest, CTXT_hvm_guest };
--- a/xen/common/version.c
+++ b/xen/common/version.c
@@ -70,6 +70,30 @@ const char *xen_deny(void)
     return "<denied>";
 }
 
+static const char build_info[] =
+    "debug="
+#ifdef CONFIG_DEBUG
+    "y"
+#else
+    "n"
+#endif
+#ifdef CONFIG_COVERAGE
+# ifdef __clang__
+    " llvmcov=y"
+# else
+    " gcov=y"
+# endif
+#endif
+#ifdef CONFIG_UBSAN
+    " ubsan=y"
+#endif
+    "";
+
+const char *xen_build_info(void)
+{
+    return build_info;
+}
+
 static const void *build_id_p __read_mostly;
 static unsigned int build_id_len __read_mostly;
 
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -1002,10 +1002,10 @@ void __init console_init_preirq(void)
     spin_lock(&console_lock);
     __putstr(xen_banner());
     spin_unlock(&console_lock);
-    printk("Xen version %d.%d%s (%s@%s) (%s) debug=%c " gcov_string " %s\n",
+    printk("Xen version %d.%d%s (%s@%s) (%s) %s %s\n",
            xen_major_version(), xen_minor_version(), xen_extra_version(),
-           xen_compile_by(), xen_compile_domain(),
-           xen_compiler(), debug_build() ? 'y' : 'n', xen_compile_date());
+           xen_compile_by(), xen_compile_domain(), xen_compiler(),
+           xen_build_info(), xen_compile_date());
     printk("Latest ChangeSet: %s\n", xen_changeset());
 
     /* Locate and print the buildid, if applicable. */
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -48,21 +48,13 @@
 #define BUILD_BUG_ON(cond) ((void)BUILD_BUG_ON_ZERO(cond))
 #endif
 
-#ifdef CONFIG_GCOV
-#define gcov_string "gcov=y"
-#else
-#define gcov_string ""
-#endif
-
 #ifndef NDEBUG
 #define ASSERT(p) \
     do { if ( unlikely(!(p)) ) assert_failed(#p); } while (0)
 #define ASSERT_UNREACHABLE() assert_failed("unreachable")
-#define debug_build() 1
 #else
 #define ASSERT(p) do { if ( 0 && (p) ) {} } while (0)
 #define ASSERT_UNREACHABLE() do { } while (0)
-#define debug_build() 0
 #endif
 
 #define ABS(_x) ({                              \
--- a/xen/include/xen/version.h
+++ b/xen/include/xen/version.h
@@ -16,6 +16,7 @@ const char *xen_extra_version(void);
 const char *xen_changeset(void);
 const char *xen_banner(void);
 const char *xen_deny(void);
+const char *xen_build_info(void);
 int xen_build_id(const void **p, unsigned int *len);
 
 #ifdef BUILD_ID


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 16:08:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 16:08:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58464.102830 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks6gF-0005Y6-Pl; Wed, 23 Dec 2020 16:08:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58464.102830; Wed, 23 Dec 2020 16:08:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks6gF-0005Xz-M1; Wed, 23 Dec 2020 16:08:03 +0000
Received: by outflank-mailman (input) for mailman id 58464;
 Wed, 23 Dec 2020 16:08:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ks6gE-0005Xu-FL
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 16:08:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks6gD-00025h-Jt; Wed, 23 Dec 2020 16:08:01 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks6gD-00076i-7m; Wed, 23 Dec 2020 16:08:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=s+60IexXiUolqy8XRvL8J+jmzDMc6K5mUc89JEwWjJM=; b=Qrl4jxatJGi9Jjne4xMbFbdZK3
	syFb4mN4b97qbPKHoozSCFWkn7a+E3G1EfUFScr6ON5FjO48MigzCrAxvGQ1ypFSKNHcdsFNkXHHu
	nhsEzM0rThDi+b1+NwPcAS6T4g+R6/7j0YUPfr71otBNWwDYi27q741tO1Z/IGMgvwiM=;
Subject: Re: [PATCH for-4.15 4/4] xen/iommu: x86: Don't leak the IOMMU
 page-tables
To: Jan Beulich <jbeulich@suse.com>
Cc: hongyxia@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20201222154338.9459-1-julien@xen.org>
 <20201222154338.9459-5-julien@xen.org>
 <beb22b59-701e-462c-5080-e99033079204@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <d62f8851-b417-b22a-4527-c2c43b536446@xen.org>
Date: Wed, 23 Dec 2020 16:07:58 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <beb22b59-701e-462c-5080-e99033079204@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 23/12/2020 14:34, Jan Beulich wrote:
> On 22.12.2020 16:43, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> The new IOMMU page-tables allocator will release the pages when
>> relinquish the domain resources. However, this is not sufficient in two
>> cases:
>>      1) domain_relinquish_resources() is not called when the domain
>>      creation fails.
> 
> Could you remind me of what IOMMU page table insertions there
> are during domain creation? No memory got allocated to the
> domain at that point yet, so it would seem to me there simply
> is nothing to map.

The P2M is first modified in hvm_domain_initialise():

(XEN) Xen call trace:
(XEN)    [<ffff82d04026b9ec>] R iommu_alloc_pgtable+0x11/0x137
(XEN)    [<ffff82d04025f9f5>] F drivers/passthrough/vtd/iommu.c#addr_to_dma_page_maddr+0x146/0x1d8
(XEN)    [<ffff82d04025fcc5>] F drivers/passthrough/vtd/iommu.c#intel_iommu_map_page+0x6a/0x14b
(XEN)    [<ffff82d04026d949>] F iommu_map+0x6d/0x16f
(XEN)    [<ffff82d04026da71>] F iommu_legacy_map+0x26/0x63
(XEN)    [<ffff82d040301bdc>] F arch/x86/mm/p2m-ept.c#ept_set_entry+0x6b2/0x730
(XEN)    [<ffff82d0402f67e7>] F p2m_set_entry+0x91/0x128
(XEN)    [<ffff82d0402f6b5c>] F arch/x86/mm/p2m.c#set_typed_p2m_entry+0xfe/0x3f7
(XEN)    [<ffff82d0402f7f4c>] F set_mmio_p2m_entry+0x65/0x6e
(XEN)    [<ffff82d04029a080>] F arch/x86/hvm/vmx/vmx.c#vmx_domain_initialise+0xf6/0x137
(XEN)    [<ffff82d0402af421>] F hvm_domain_initialise+0x357/0x4c7
(XEN)    [<ffff82d04031eae7>] F arch_domain_create+0x478/0x4ff
(XEN)    [<ffff82d04020476e>] F domain_create+0x4f2/0x778
(XEN)    [<ffff82d04023b0d2>] F do_domctl+0xb1e/0x18b8
(XEN)    [<ffff82d040311dbf>] F pv_hypercall+0x2f0/0x55f
(XEN)    [<ffff82d040390432>] F lstar_enter+0x112/0x120

> 
>>      2) There is nothing preventing page-table allocations when the
>>      domain is dying.
>>
>> In both cases, this can be solved by freeing the page-tables again
>> when the domain destruction. Although, this may result to an high
>> number of page-tables to free.
> 
> Since I've seen this before in this series, and despite me also
> not being a native speaker, as a nit: I don't think it can
> typically be other than "result in".

I think you are right.

> 
>> --- a/xen/arch/x86/domain.c
>> +++ b/xen/arch/x86/domain.c
>> @@ -2290,7 +2290,7 @@ int domain_relinquish_resources(struct domain *d)
>>   
>>       PROGRESS(iommu_pagetables):
>>   
>> -        ret = iommu_free_pgtables(d);
>> +        ret = iommu_free_pgtables(d, false);
> 
> I suppose you mean "true" here, but I also think the other
> approach (checking for DOMDYING_dead, which you don't seem to
> like very much) is better, if for no other reason than it
> already being used elsewhere.

I think "don't like very much" is an understatement :). There seem to 
be more functions using an extra parameter (such as hap_set_allocation(), 
which was introduced before your DOMDYING_dead). So I only followed what 
they did.

> 
>> @@ -305,6 +320,19 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
>>           memflags = MEMF_node(hd->node);
>>   #endif
>>   
>> +    /*
>> +     * The IOMMU page-tables are freed when relinquishing the domain, but
>> +     * nothing prevent allocation to happen afterwards. There is no valid
>> +     * reasons to continue to update the IOMMU page-tables while the
>> +     * domain is dying.
>> +     *
>> +     * So prevent page-table allocation when the domain is dying. Note
>> +     * this doesn't fully prevent the race because d->is_dying may not
>> +     * yet be seen.
>> +     */
>> +    if ( d->is_dying )
>> +        return NULL;
>> +
>>       pg = alloc_domheap_page(NULL, memflags);
>>       if ( !pg )
>>           return NULL;
> 
> As said in reply to an earlier patch - with a suitable
> spin_barrier() you can place your check further down, along the
> lines of
> 
>      spin_lock(&hd->arch.pgtables.lock);
>      if ( likely(!d->is_dying) )
>      {
>          page_list_add(pg, &hd->arch.pgtables.list);
>          p = NULL;
>      }
>      spin_unlock(&hd->arch.pgtables.lock);
> 
>      if ( p )
>      {
>          free_domheap_page(pg);
>          pg = NULL;
>      }
> 
> (albeit I'm relatively sure you won't like the re-purposing of
> p, but that's a minor detail). (FREE_DOMHEAP_PAGE() would be
> nice to use here, but we seem to only have FREE_XENHEAP_PAGE()
> so far.)

In fact I don't mind the re-purposing of p. However, I dislike the 
allocation and then freeing when the domain is dying.

I think I prefer the small race introduced (the pages will still be 
freed) over this solution.

Note that Paul's IOMMU series will completely rework the function. So 
this is only temporary.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 16:10:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 16:10:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58468.102841 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks6iV-0006NW-63; Wed, 23 Dec 2020 16:10:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58468.102841; Wed, 23 Dec 2020 16:10:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks6iV-0006NP-2u; Wed, 23 Dec 2020 16:10:23 +0000
Received: by outflank-mailman (input) for mailman id 58468;
 Wed, 23 Dec 2020 16:10:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lNd7=F3=daemonizer.de=maxi@srs-us1.protection.inumbo.net>)
 id 1ks6iT-0006NK-Ov
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 16:10:21 +0000
Received: from mx1.somlen.de (unknown [89.238.87.226])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 69215828-7abd-4150-a7a0-c600245441f3;
 Wed, 23 Dec 2020 16:10:19 +0000 (UTC)
Received: by mx1.somlen.de with ESMTPSA id C16B7C3AF0B;
 Wed, 23 Dec 2020 17:10:17 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 69215828-7abd-4150-a7a0-c600245441f3
From: Maximilian Engelhardt <maxi@daemonizer.de>
To: Jan Beulich <jbeulich@suse.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Subject: Re: [XEN PATCH 3/3] docs: set date to SOURCE_DATE_EPOCH if available
Date: Wed, 23 Dec 2020 17:10:12 +0100
Message-ID: <2354439.sqZyMsV9Az@localhost>
In-Reply-To: <3c3edc91-7d22-289f-575b-9fd3c2ec4bc8@suse.com>
References: <cover.1608319634.git.maxi@daemonizer.de> <23352f4835ae58c5cae6f425d5a8378f3d694055.1608319634.git.maxi@daemonizer.de> <3c3edc91-7d22-289f-575b-9fd3c2ec4bc8@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="nextPart2033718.9jYyLHQttZ"; micalg="pgp-sha512"; protocol="application/pgp-signature"

--nextPart2033718.9jYyLHQttZ
Content-Transfer-Encoding: 7Bit
Content-Type: text/plain; charset="us-ascii"

On Monday, 21 December 2020 10:01:14 CET Jan Beulich wrote:
> On 18.12.2020 21:42, Maximilian Engelhardt wrote:
> > --- a/docs/Makefile
> > +++ b/docs/Makefile
> > @@ -3,7 +3,13 @@ include $(XEN_ROOT)/Config.mk
> > 
> >  -include $(XEN_ROOT)/config/Docs.mk
> >  
> >  VERSION		:= $(shell $(MAKE) -C $(XEN_ROOT)/xen --no-print-directory xenversion)
> > -DATE		:= $(shell date +%Y-%m-%d)
> > +
> > +DATE_FMT	:= +%Y-%m-%d
> > +ifdef SOURCE_DATE_EPOCH
> > +DATE		:= $(shell date -u -d "@$(SOURCE_DATE_EPOCH)" "$(DATE_FMT)" 2>/dev/null || date -u -r "$(SOURCE_DATE_EPOCH)" "$(DATE_FMT)" 2>/dev/null || date -u "$(DATE_FMT)")
> 
> Looking at the doc for a (deliberately) old "date", I can't find
> any mention of the -d "@..." syntax. I take it the command would
> fail on that system. It would then go on to try the -r variant,
> which has entirely different meaning on GNU (Linux) systems.
> 
> docs/ being subject to configuring, why don't you determine the
> capabilities of "date" there and invoke just the one command
> that was found suitable for the system?
> 
> Jan

Hi Jan,

I did some research. The -d "@..." syntax was introduced around 2005. 
Testing with a live CD from 2006 (KNOPPIX_V5.0.1CD-2006-06-01-EN.iso), it 
was already supported there. Documentation for this syntax was only added 
to the date command in 2011. I wonder whether anybody running such an old 
system wants to use SOURCE_DATE_EPOCH.

However, I came up with a patch to determine which suitable date version is 
available and only call that, as you suggested. I will post the new patch 
soon.

Maxi
--nextPart2033718.9jYyLHQttZ
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part.
Content-Transfer-Encoding: 7Bit

[PGP signature elided]

--nextPart2033718.9jYyLHQttZ--





From xen-devel-bounces@lists.xenproject.org Wed Dec 23 16:11:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 16:11:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58472.102854 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks6jc-0006VX-G1; Wed, 23 Dec 2020 16:11:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58472.102854; Wed, 23 Dec 2020 16:11:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks6jc-0006VQ-Cx; Wed, 23 Dec 2020 16:11:32 +0000
Received: by outflank-mailman (input) for mailman id 58472;
 Wed, 23 Dec 2020 16:11:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zN8f=F3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ks6jb-0006VL-OB
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 16:11:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eb2e82df-bc3e-48a0-bc9a-c0167ebe6207;
 Wed, 23 Dec 2020 16:11:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F419CACF1;
 Wed, 23 Dec 2020 16:11:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb2e82df-bc3e-48a0-bc9a-c0167ebe6207
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608739890; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zwk9KLTT7CXqu/qK9JkhsegVSNfBkTc2e6kLd1GHfjg=;
	b=YyRaIYcl9MtMb14cnU5lwOX+FBgqKvlCa6Y7zaiGM20JpMtTZD7vgWpCk/Zl+7DoCX9XBk
	kM3BSgAK2CzE977fIjBslVkXYmHBilFCQMEKLVqXSHjdf1yEXqHbcOO4nHCxUam4Qyy5eD
	x+UFjrh7zDXTlAdzVCOvvd0q/Q6kaBY=
Subject: Re: [PATCH for-4.15 3/4] [RFC] xen/iommu: x86: Clear the root
 page-table before freeing the page-tables
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20201222154338.9459-1-julien@xen.org>
 <20201222154338.9459-4-julien@xen.org>
 <499e6d5a-e8ac-56db-1af9-70469b6a06b9@suse.com>
 <8b394c44-5bdb-9d82-b211-5a4ee3473568@xen.org>
 <19e92d90-ed9a-4bd6-79f4-b761b5a039c6@suse.com>
 <96ce1b10-9764-b71e-ac26-982ba8dcc34d@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <092e5199-7eab-2722-7f0b-43fb3c8b2065@suse.com>
Date: Wed, 23 Dec 2020 17:11:29 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <96ce1b10-9764-b71e-ac26-982ba8dcc34d@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.12.2020 16:16, Julien Grall wrote:
> On 23/12/2020 15:00, Jan Beulich wrote:
>> On 23.12.2020 15:56, Julien Grall wrote:
>>> On 23/12/2020 14:12, Jan Beulich wrote:
>>>> On 22.12.2020 16:43, Julien Grall wrote:
>>>>> This is an RFC because it would break AMD IOMMU driver. One option would
>>>>> be to move the call to the teardown callback earlier on. Any opinions?
>>>>
>>>> We already have
>>>>
>>>> static void amd_iommu_domain_destroy(struct domain *d)
>>>> {
>>>>       dom_iommu(d)->arch.amd.root_table = NULL;
>>>> }
>>>>
>>>> and this function is AMD's teardown handler. Hence I suppose
>>>> doing the same for VT-d would be quite reasonable. And really
>>>> VT-d's iommu_domain_teardown() also already has
>>>>
>>>>       hd->arch.vtd.pgd_maddr = 0;
>>>
>>> Let me have a look if that works.
>>>
>>>>
>>>> I guess what's missing is prevention of the root table
>>>> getting re-setup.
>>>
>>> This is taken care in the follow-up patch by forbidding page-table
>>> allocation. I can mention it in the commit message.
>>
>> My expectation is that with that subsequent change the change here
>> (or any variant of it) would become unnecessary.
> 
> I am not so sure. iommu_unmap() would still get called from put_page(). 
> Are you suggesting to gate the code if d->is_dying as well?

Unmap shouldn't be allocating any memory right now, as in
non-shared-page-table mode we don't install any superpages
(unless I misremember).

> Even if this patch is deemed to be unnecessary to fix the issue.
> This issue was quite hard to chase/reproduce.
> 
> I think it would still be good to harden the code by zeroing 
> hd->arch.vtd.pgd_maddr to avoid anyone else wasting 2 days because the 
> pointer was still "valid".

But my point was that this zeroing already happens. What I
suspect is that it gets re-populated after it was zeroed,
because of page table manipulation that shouldn't be
occurring anymore for a dying domain.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 16:15:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 16:15:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58476.102866 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks6nd-0006h2-1s; Wed, 23 Dec 2020 16:15:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58476.102866; Wed, 23 Dec 2020 16:15:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks6nc-0006gv-VD; Wed, 23 Dec 2020 16:15:40 +0000
Received: by outflank-mailman (input) for mailman id 58476;
 Wed, 23 Dec 2020 16:15:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zN8f=F3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ks6nb-0006gn-IS
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 16:15:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 63940683-3ab9-4399-b66e-86dcf70e4ed6;
 Wed, 23 Dec 2020 16:15:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9852AACF1;
 Wed, 23 Dec 2020 16:15:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 63940683-3ab9-4399-b66e-86dcf70e4ed6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608740137; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=UD3TiecV8KTtQuZJoK9Hq7/gCbZVr22JSzPjiqdkr5U=;
	b=dbqg/2TfiiXw9wyTuGN4OQfdxPAn6/QndDBEpoMKGxnHQDOKT5g28lv635ei7nnP8qclaR
	8ulWUTGUoJFOwWTsEPh+M3fXyTu5z4daWIlRLb+Fb1m6ex+7KqXDnrG5Kd/fbO4IwHTmsu
	nO0EuN9K2PIQ94AWzHrKZphEyFMsI+Y=
Subject: Re: [PATCH for-4.15 4/4] xen/iommu: x86: Don't leak the IOMMU
 page-tables
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20201222154338.9459-1-julien@xen.org>
 <20201222154338.9459-5-julien@xen.org>
 <beb22b59-701e-462c-5080-e99033079204@suse.com>
 <d62f8851-b417-b22a-4527-c2c43b536446@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e897e1bf-9c17-f8a9-274a-673ff7f1a009@suse.com>
Date: Wed, 23 Dec 2020 17:15:37 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <d62f8851-b417-b22a-4527-c2c43b536446@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.12.2020 17:07, Julien Grall wrote:
> On 23/12/2020 14:34, Jan Beulich wrote:
>> On 22.12.2020 16:43, Julien Grall wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> The new IOMMU page-tables allocator will release the pages when
>>> relinquish the domain resources. However, this is not sufficient in two
>>> cases:
>>>      1) domain_relinquish_resources() is not called when the domain
>>>      creation fails.
>>
>> Could you remind me of what IOMMU page table insertions there
>> are during domain creation? No memory got allocated to the
>> domain at that point yet, so it would seem to me there simply
>> is nothing to map.
> 
> The P2M is first modified in hvm_domain_initialise():
> 
> (XEN) Xen call trace:
> (XEN)    [<ffff82d04026b9ec>] R iommu_alloc_pgtable+0x11/0x137
> (XEN)    [<ffff82d04025f9f5>] F drivers/passthrough/vtd/iommu.c#addr_to_dma_page_maddr+0x146/0x1d8
> (XEN)    [<ffff82d04025fcc5>] F drivers/passthrough/vtd/iommu.c#intel_iommu_map_page+0x6a/0x14b
> (XEN)    [<ffff82d04026d949>] F iommu_map+0x6d/0x16f
> (XEN)    [<ffff82d04026da71>] F iommu_legacy_map+0x26/0x63
> (XEN)    [<ffff82d040301bdc>] F arch/x86/mm/p2m-ept.c#ept_set_entry+0x6b2/0x730
> (XEN)    [<ffff82d0402f67e7>] F p2m_set_entry+0x91/0x128
> (XEN)    [<ffff82d0402f6b5c>] F arch/x86/mm/p2m.c#set_typed_p2m_entry+0xfe/0x3f7
> (XEN)    [<ffff82d0402f7f4c>] F set_mmio_p2m_entry+0x65/0x6e
> (XEN)    [<ffff82d04029a080>] F arch/x86/hvm/vmx/vmx.c#vmx_domain_initialise+0xf6/0x137
> (XEN)    [<ffff82d0402af421>] F hvm_domain_initialise+0x357/0x4c7

Oh, the infamous APIC access page again.

>>> @@ -305,6 +320,19 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
>>>           memflags = MEMF_node(hd->node);
>>>   #endif
>>>   
>>> +    /*
>>> +     * The IOMMU page-tables are freed when relinquishing the domain, but
>>> +     * nothing prevent allocation to happen afterwards. There is no valid
>>> +     * reasons to continue to update the IOMMU page-tables while the
>>> +     * domain is dying.
>>> +     *
>>> +     * So prevent page-table allocation when the domain is dying. Note
>>> +     * this doesn't fully prevent the race because d->is_dying may not
>>> +     * yet be seen.
>>> +     */
>>> +    if ( d->is_dying )
>>> +        return NULL;
>>> +
>>>       pg = alloc_domheap_page(NULL, memflags);
>>>       if ( !pg )
>>>           return NULL;
>>
>> As said in reply to an earlier patch - with a suitable
>> spin_barrier() you can place your check further down, along the
>> lines of
>>
>>      spin_lock(&hd->arch.pgtables.lock);
>>      if ( likely(!d->is_dying) )
>>      {
>>          page_list_add(pg, &hd->arch.pgtables.list);
>>          p = NULL;
>>      }
>>      spin_unlock(&hd->arch.pgtables.lock);
>>
>>      if ( p )
>>      {
>>          free_domheap_page(pg);
>>          pg = NULL;
>>      }
>>
>> (albeit I'm relatively sure you won't like the re-purposing of
>> p, but that's a minor detail). (FREE_DOMHEAP_PAGE() would be
>> nice to use here, but we seem to only have FREE_XENHEAP_PAGE()
>> so far.)
> 
> In fact I don't mind the re-purposing of p. However, I dislike the 
> allocation and then freeing when the domain is dying.
> 
> I think I prefer the small race introduced (the pages will still be 
> freed) over this solution.

The "will still be freed" is because of the 2nd round of freeing
you add in an earlier patch? I'd prefer to avoid the race to in
turn avoid that 2nd round of freeing.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 16:16:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 16:16:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58480.102878 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks6ol-0006nj-C0; Wed, 23 Dec 2020 16:16:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58480.102878; Wed, 23 Dec 2020 16:16:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks6ol-0006nc-8h; Wed, 23 Dec 2020 16:16:51 +0000
Received: by outflank-mailman (input) for mailman id 58480;
 Wed, 23 Dec 2020 16:16:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ks6oj-0006nV-DA
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 16:16:49 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks6oi-0002FS-Vr; Wed, 23 Dec 2020 16:16:48 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks6oi-0007jn-O6; Wed, 23 Dec 2020 16:16:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=lQZqGhkugWov6odsmQGTbpfDqN3MieY0Ahu6c8Xudfs=; b=B0u/p6xRdv6tjkjk3Fqs8sXpTg
	VtkpzE57v5bGOAiFx/C5bQJc8eRwafDKrzlqShjKyIc3pYQ8aN8LDHCEVaoFqLX+UjEgf/a4FDBLf
	LZGjujWs/O6hNl46D00Q29gdcX9z07lDhLtCEhny1lT0cmakZRda1EKV1NbJZLuqKdwM=;
Subject: Re: [PATCH for-4.15 3/4] [RFC] xen/iommu: x86: Clear the root
 page-table before freeing the page-tables
To: Jan Beulich <jbeulich@suse.com>
Cc: hongyxia@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20201222154338.9459-1-julien@xen.org>
 <20201222154338.9459-4-julien@xen.org>
 <499e6d5a-e8ac-56db-1af9-70469b6a06b9@suse.com>
 <8b394c44-5bdb-9d82-b211-5a4ee3473568@xen.org>
 <19e92d90-ed9a-4bd6-79f4-b761b5a039c6@suse.com>
 <96ce1b10-9764-b71e-ac26-982ba8dcc34d@xen.org>
 <092e5199-7eab-2722-7f0b-43fb3c8b2065@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <281188a0-f632-c0a1-4591-0a66ef0068f5@xen.org>
Date: Wed, 23 Dec 2020 16:16:47 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <092e5199-7eab-2722-7f0b-43fb3c8b2065@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 23/12/2020 16:11, Jan Beulich wrote:
> On 23.12.2020 16:16, Julien Grall wrote:
>> On 23/12/2020 15:00, Jan Beulich wrote:
>>> On 23.12.2020 15:56, Julien Grall wrote:
>>>> On 23/12/2020 14:12, Jan Beulich wrote:
>>>>> On 22.12.2020 16:43, Julien Grall wrote:
>>>>>> This is an RFC because it would break AMD IOMMU driver. One option would
>>>>>> be to move the call to the teardown callback earlier on. Any opinions?
>>>>>
>>>>> We already have
>>>>>
>>>>> static void amd_iommu_domain_destroy(struct domain *d)
>>>>> {
>>>>>        dom_iommu(d)->arch.amd.root_table = NULL;
>>>>> }
>>>>>
>>>>> and this function is AMD's teardown handler. Hence I suppose
>>>>> doing the same for VT-d would be quite reasonable. And really
>>>>> VT-d's iommu_domain_teardown() also already has
>>>>>
>>>>>        hd->arch.vtd.pgd_maddr = 0;
>>>>
>>>> Let me have a look if that works.
>>>>
>>>>>
>>>>> I guess what's missing is prevention of the root table
>>>>> getting re-setup.
>>>>
>>>> This is taken care of in the follow-up patch by forbidding page-table
>>>> allocation. I can mention it in the commit message.
>>>
>>> My expectation is that with that subsequent change the change here
>>> (or any variant of it) would become unnecessary.
>>
>> I am not so sure. iommu_unmap() would still get called from put_page().
>> Are you suggesting to gate that code on d->is_dying as well?
> 
> Unmap shouldn't be allocating any memory right now, as in
> non-shared-page-table mode we don't install any superpages
> (unless I misremember).

It doesn't allocate memory, but it will try to access the IOMMU 
page-tables (see more below).

> 
>> Even if this patch is deemed unnecessary to fix the issue, the issue
>> itself was quite hard to chase/reproduce.
>>
>> I think it would still be good to harden the code by zeroing
>> hd->arch.vtd.pgd_maddr, to avoid anyone else wasting 2 days because the
>> pointer still looked "valid".
> 
> But my point was that this zeroing already happens. 
> What I
> suspect is that it gets re-populated after it was zeroed,
> because of page table manipulation that shouldn't be
> occurring anymore for a dying domain.

AFAICT, the zeroing is happening in the ->teardown() helper.

It is only called when the domain is fully destroyed (see the call in 
arch_domain_destroy()). This happens well after the resources have been 
relinquished.

Could you clarify why you think it is already zeroed, and by whom?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 16:19:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 16:19:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58484.102890 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks6r8-0006z7-PI; Wed, 23 Dec 2020 16:19:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58484.102890; Wed, 23 Dec 2020 16:19:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks6r8-0006z0-M2; Wed, 23 Dec 2020 16:19:18 +0000
Received: by outflank-mailman (input) for mailman id 58484;
 Wed, 23 Dec 2020 16:19:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ks6r7-0006yv-6r
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 16:19:17 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks6r6-0002HU-R7; Wed, 23 Dec 2020 16:19:16 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks6r6-0007pp-Ib; Wed, 23 Dec 2020 16:19:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=E7ngH0xCBjA58rx+yWP69MxE0tP2V3ZusYLPOG7Sys8=; b=PY03xiM3OjkW2Do1151AJWPvji
	S1G3aEDhU4V0v2dZr/0yZFpGQGkKL9qO0yixyJhbXvAIYX5J/VwGmiprMwY8yQ8GEjlEydsBsr9yl
	Nj7UMLpNiUdE9TsZJi1HaGeUZKmY/jeOpv9NIllWp81e2kWkbHUa0TVLi6kpnNjqxqjk=;
Subject: Re: [PATCH for-4.15 4/4] xen/iommu: x86: Don't leak the IOMMU
 page-tables
To: Jan Beulich <jbeulich@suse.com>
Cc: hongyxia@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20201222154338.9459-1-julien@xen.org>
 <20201222154338.9459-5-julien@xen.org>
 <beb22b59-701e-462c-5080-e99033079204@suse.com>
 <d62f8851-b417-b22a-4527-c2c43b536446@xen.org>
 <e897e1bf-9c17-f8a9-274a-673ff7f1a009@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <0ff629b1-25e6-6ce4-43ab-d50af52ecb8c@xen.org>
Date: Wed, 23 Dec 2020 16:19:14 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <e897e1bf-9c17-f8a9-274a-673ff7f1a009@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 23/12/2020 16:15, Jan Beulich wrote:
> On 23.12.2020 17:07, Julien Grall wrote:
>> On 23/12/2020 14:34, Jan Beulich wrote:
>>> On 22.12.2020 16:43, Julien Grall wrote:
>>>> From: Julien Grall <jgrall@amazon.com>
>>>>
>>>> The new IOMMU page-tables allocator will release the pages when
>>>> relinquish the domain resources. However, this is not sufficient in two
>>>> cases:
>>>>       1) domain_relinquish_resources() is not called when the domain
>>>>       creation fails.
>>>
>>> Could you remind me of what IOMMU page table insertions there
>>> are during domain creation? No memory got allocated to the
>>> domain at that point yet, so it would seem to me there simply
>>> is nothing to map.
>>
>> The P2M is first modified in hvm_domain_initialise():
>>
>> (XEN) Xen call trace:
>> (XEN)    [<ffff82d04026b9ec>] R iommu_alloc_pgtable+0x11/0x137
>> (XEN)    [<ffff82d04025f9f5>] F
>> drivers/passthrough/vtd/iommu.c#addr_to_dma_page_maddr+0x146/0x1d8
>> (XEN)    [<ffff82d04025fcc5>] F
>> drivers/passthrough/vtd/iommu.c#intel_iommu_map_page+0x6a/0x14b
>> (XEN)    [<ffff82d04026d949>] F iommu_map+0x6d/0x16f
>> (XEN)    [<ffff82d04026da71>] F iommu_legacy_map+0x26/0x63
>> (XEN)    [<ffff82d040301bdc>] F
>> arch/x86/mm/p2m-ept.c#ept_set_entry+0x6b2/0x730
>> (XEN)    [<ffff82d0402f67e7>] F p2m_set_entry+0x91/0x128
>> (XEN)    [<ffff82d0402f6b5c>] F
>> arch/x86/mm/p2m.c#set_typed_p2m_entry+0xfe/0x3f7
>> (XEN)    [<ffff82d0402f7f4c>] F set_mmio_p2m_entry+0x65/0x6e
>> (XEN)    [<ffff82d04029a080>] F
>> arch/x86/hvm/vmx/vmx.c#vmx_domain_initialise+0xf6/0x137
>> (XEN)    [<ffff82d0402af421>] F hvm_domain_initialise+0x357/0x4c7
> 
> Oh, the infamous APIC access page again.
> 
>>>> @@ -305,6 +320,19 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
>>>>            memflags = MEMF_node(hd->node);
>>>>    #endif
>>>>    
>>>> +    /*
>>>> +     * The IOMMU page-tables are freed when relinquishing the domain, but
>>>> +     * nothing prevent allocation to happen afterwards. There is no valid
>>>> +     * reasons to continue to update the IOMMU page-tables while the
>>>> +     * domain is dying.
>>>> +     *
>>>> +     * So prevent page-table allocation when the domain is dying. Note
>>>> +     * this doesn't fully prevent the race because d->is_dying may not
>>>> +     * yet be seen.
>>>> +     */
>>>> +    if ( d->is_dying )
>>>> +        return NULL;
>>>> +
>>>>        pg = alloc_domheap_page(NULL, memflags);
>>>>        if ( !pg )
>>>>            return NULL;
>>>
>>> As said in reply to an earlier patch - with a suitable
>>> spin_barrier() you can place your check further down, along the
>>> lines of
>>>
>>>       spin_lock(&hd->arch.pgtables.lock);
>>>       if ( likely(!d->is_dying) )
>>>       {
>>>           page_list_add(pg, &hd->arch.pgtables.list);
>>>           p = NULL;
>>>       }
>>>       spin_unlock(&hd->arch.pgtables.lock);
>>>
>>>       if ( p )
>>>       {
>>>           free_domheap_page(pg);
>>>           pg = NULL;
>>>       }
>>>
>>> (albeit I'm relatively sure you won't like the re-purposing of
>>> p, but that's a minor detail). (FREE_DOMHEAP_PAGE() would be
>>> nice to use here, but we seem to only have FREE_XENHEAP_PAGE()
>>> so far.)
>>
>> In fact I don't mind the re-purposing of p. However, I dislike the
>> allocation and then freeing when the domain is dying.
>>
>> I think I prefer the small race introduced (the pages will still be
>> freed) over this solution.
> 
> The "will still be freed" is because of the 2nd round of freeing
> you add in an earlier patch? I'd prefer to avoid the race to in
> turn avoid that 2nd round of freeing.

The "2nd round" of freeing is necessary at least for the domain creation 
failure case (where it would in fact be the 1st round).

If we can avoid IOMMU page-table allocation in the domain creation path, 
then yes, I agree that we want to avoid that "2nd round". Otherwise, I 
think it is best to take advantage of it.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 16:24:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 16:24:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58488.102901 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks6w1-0007s0-Hj; Wed, 23 Dec 2020 16:24:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58488.102901; Wed, 23 Dec 2020 16:24:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks6w1-0007rt-En; Wed, 23 Dec 2020 16:24:21 +0000
Received: by outflank-mailman (input) for mailman id 58488;
 Wed, 23 Dec 2020 16:24:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zN8f=F3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ks6w0-0007ro-73
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 16:24:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6e31c280-7fcc-47ad-86cb-fbde85ee7136;
 Wed, 23 Dec 2020 16:24:19 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4E88BACC6;
 Wed, 23 Dec 2020 16:24:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6e31c280-7fcc-47ad-86cb-fbde85ee7136
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608740658; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=93XR2Nx1dClPHNzmPmVk1KstIDdqsNCxZw9oOKw4h3Q=;
	b=kOq9tXfahevJM1qIqpEezpgGQfdr+tJRgZ35tVCNgF4dWSMwNZxGSXgGguqYilWjL7yarn
	JZfXdyjUhK3rR7rxzlgg+AqWStcBHb0525P4zQGFCDpdI1ur3u+PrGxCtrM7DYgNTAvltc
	aNe8aIzX4ErjWRCVyoUEad5iq4XEgv8=
Subject: Re: [PATCH for-4.15 3/4] [RFC] xen/iommu: x86: Clear the root
 page-table before freeing the page-tables
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20201222154338.9459-1-julien@xen.org>
 <20201222154338.9459-4-julien@xen.org>
 <499e6d5a-e8ac-56db-1af9-70469b6a06b9@suse.com>
 <8b394c44-5bdb-9d82-b211-5a4ee3473568@xen.org>
 <19e92d90-ed9a-4bd6-79f4-b761b5a039c6@suse.com>
 <96ce1b10-9764-b71e-ac26-982ba8dcc34d@xen.org>
 <092e5199-7eab-2722-7f0b-43fb3c8b2065@suse.com>
 <281188a0-f632-c0a1-4591-0a66ef0068f5@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d7b866b6-118a-f873-f8df-eb112b708fe3@suse.com>
Date: Wed, 23 Dec 2020 17:24:17 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <281188a0-f632-c0a1-4591-0a66ef0068f5@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.12.2020 17:16, Julien Grall wrote:
> On 23/12/2020 16:11, Jan Beulich wrote:
>> On 23.12.2020 16:16, Julien Grall wrote:
>>> On 23/12/2020 15:00, Jan Beulich wrote:
>>>> On 23.12.2020 15:56, Julien Grall wrote:
>>>>> On 23/12/2020 14:12, Jan Beulich wrote:
>>>>>> On 22.12.2020 16:43, Julien Grall wrote:
>>>>>>> This is an RFC because it would break AMD IOMMU driver. One option would
>>>>>>> be to move the call to the teardown callback earlier on. Any opinions?

Please note this (in your original submission). I simply ...

>>>>>> We already have
>>>>>>
>>>>>> static void amd_iommu_domain_destroy(struct domain *d)
>>>>>> {
>>>>>>        dom_iommu(d)->arch.amd.root_table = NULL;
>>>>>> }
>>>>>>
>>>>>> and this function is AMD's teardown handler. Hence I suppose
>>>>>> doing the same for VT-d would be quite reasonable. And really
>>>>>> VT-d's iommu_domain_teardown() also already has
>>>>>>
>>>>>>        hd->arch.vtd.pgd_maddr = 0;
>>>>>
>>>>> Let me have a look if that works.
>>>>>
>>>>>>
>>>>>> I guess what's missing is prevention of the root table
>>>>>> getting re-setup.
>>>>>
>>>>> This is taken care of in the follow-up patch by forbidding page-table
>>>>> allocation. I can mention it in the commit message.
>>>>
>>>> My expectation is that with that subsequent change the change here
>>>> (or any variant of it) would become unnecessary.
>>>
>>> I am not so sure. iommu_unmap() would still get called from put_page().
>>> Are you suggesting to gate that code on d->is_dying as well?
>>
>> Unmap shouldn't be allocating any memory right now, as in
>> non-shared-page-table mode we don't install any superpages
>> (unless I misremember).
> 
> It doesn't allocate memory, but it will try to access the IOMMU 
> page-tables (see more below).
> 
>>
>>> Even if this patch is deemed unnecessary to fix the issue, the issue
>>> itself was quite hard to chase/reproduce.
>>>
>>> I think it would still be good to harden the code by zeroing
>>> hd->arch.vtd.pgd_maddr, to avoid anyone else wasting 2 days because the
>>> pointer still looked "valid".
>>
>> But my point was that this zeroing already happens. 
>> What I
>> suspect is that it gets re-populated after it was zeroed,
>> because of page table manipulation that shouldn't be
>> occurring anymore for a dying domain.
> 
> AFAICT, the zeroing is happening in the ->teardown() helper.
> 
> It is only called when the domain is fully destroyed (see the call in 
> arch_domain_destroy()). This happens well after the resources have been 
> relinquished.
> 
> Could you clarify why you think it is already zeroed, and by whom?

... trusted you on what you stated there. But perhaps I somehow
misunderstood that sentence to mean you wanted to put your addition
into the teardown functions, when apparently you meant to invoke
them earlier in the process. Without a clear explanation of why that
would be safe, I couldn't imagine that this was what you were
suggesting as an alternative. This is because the interdependencies of
the IOMMU code are pretty hard to follow at times, and hence any
such re-ordering carries a fair risk of breaking something elsewhere.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 16:29:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 16:29:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58492.102914 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks71I-00084f-6O; Wed, 23 Dec 2020 16:29:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58492.102914; Wed, 23 Dec 2020 16:29:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks71I-00084X-35; Wed, 23 Dec 2020 16:29:48 +0000
Received: by outflank-mailman (input) for mailman id 58492;
 Wed, 23 Dec 2020 16:29:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ks71G-00084S-Tp
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 16:29:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks71G-0002Rp-GM; Wed, 23 Dec 2020 16:29:46 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks71G-0008Th-94; Wed, 23 Dec 2020 16:29:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=zeydQ+mP2CjwOQrSIub+z6mwj76ksAUVMoPrCWTWidg=; b=AtULZQvtKbonL+eY5gTo4uoRoO
	GNKXfO1sSUeEqj7xfQJV/LGDONIEb7Wcbmmmnw03T0JTR3uE0Lyc7EXR/H5b9acjAxT1G4PeQenXR
	i+Ds606o4MH8HsKGMIAlduc+EzO/0LGEMSBHtNnVpmbubxYGu2rCv22QPrTHyUHmlVD4=;
Subject: Re: [PATCH for-4.15 3/4] [RFC] xen/iommu: x86: Clear the root
 page-table before freeing the page-tables
To: Jan Beulich <jbeulich@suse.com>
Cc: hongyxia@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20201222154338.9459-1-julien@xen.org>
 <20201222154338.9459-4-julien@xen.org>
 <499e6d5a-e8ac-56db-1af9-70469b6a06b9@suse.com>
 <8b394c44-5bdb-9d82-b211-5a4ee3473568@xen.org>
 <19e92d90-ed9a-4bd6-79f4-b761b5a039c6@suse.com>
 <96ce1b10-9764-b71e-ac26-982ba8dcc34d@xen.org>
 <092e5199-7eab-2722-7f0b-43fb3c8b2065@suse.com>
 <281188a0-f632-c0a1-4591-0a66ef0068f5@xen.org>
 <d7b866b6-118a-f873-f8df-eb112b708fe3@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <0699ad7a-7c3b-e1e8-c7f7-0bfb54d03c78@xen.org>
Date: Wed, 23 Dec 2020 16:29:44 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <d7b866b6-118a-f873-f8df-eb112b708fe3@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 23/12/2020 16:24, Jan Beulich wrote:
> On 23.12.2020 17:16, Julien Grall wrote:
>> On 23/12/2020 16:11, Jan Beulich wrote:
>>> On 23.12.2020 16:16, Julien Grall wrote:
>>>> On 23/12/2020 15:00, Jan Beulich wrote:
>>>>> On 23.12.2020 15:56, Julien Grall wrote:
>>>>>> On 23/12/2020 14:12, Jan Beulich wrote:
>>>>>>> On 22.12.2020 16:43, Julien Grall wrote:
>>>>>>>> This is an RFC because it would break AMD IOMMU driver. One option would
>>>>>>>> be to move the call to the teardown callback earlier on. Any opinions?
> 
> Please note this (in your original submission). I simply ...
> 
>>>>>>> We already have
>>>>>>>
>>>>>>> static void amd_iommu_domain_destroy(struct domain *d)
>>>>>>> {
>>>>>>>         dom_iommu(d)->arch.amd.root_table = NULL;
>>>>>>> }
>>>>>>>
>>>>>>> and this function is AMD's teardown handler. Hence I suppose
>>>>>>> doing the same for VT-d would be quite reasonable. And really
>>>>>>> VT-d's iommu_domain_teardown() also already has
>>>>>>>
>>>>>>>         hd->arch.vtd.pgd_maddr = 0;
>>>>>>
>>>>>> Let me have a look if that works.
>>>>>>
>>>>>>>
>>>>>>> I guess what's missing is prevention of the root table
>>>>>>> getting re-setup.
>>>>>>
>>>>>> This is taken care of in the follow-up patch by forbidding page-table
>>>>>> allocation. I can mention it in the commit message.
>>>>>
>>>>> My expectation is that with that subsequent change the change here
>>>>> (or any variant of it) would become unnecessary.
>>>>
>>>> I am not so sure. iommu_unmap() would still get called from put_page().
>>>> Are you suggesting to gate that code on d->is_dying as well?
>>>
>>> Unmap shouldn't be allocating any memory right now, as in
>>> non-shared-page-table mode we don't install any superpages
>>> (unless I misremember).
>>
>> It doesn't allocate memory, but it will try to access the IOMMU
>> page-tables (see more below).
>>
>>>
>>>> Even if this patch is deemed unnecessary to fix the issue, the issue
>>>> itself was quite hard to chase/reproduce.
>>>>
>>>> I think it would still be good to harden the code by zeroing
>>>> hd->arch.vtd.pgd_maddr, to avoid anyone else wasting 2 days because the
>>>> pointer still looked "valid".
>>>
>>> But my point was that this zeroing already happens.
>>> What I
>>> suspect is that it gets re-populated after it was zeroed,
>>> because of page table manipulation that shouldn't be
>>> occurring anymore for a dying domain.
>>
>> AFAICT, the zeroing is happening in the ->teardown() helper.
>>
>> It is only called when the domain is fully destroyed (see the call in
>> arch_domain_destroy()). This happens well after the resources have been
>> relinquished.
>>
>> Could you clarify why you think it is already zeroed, and by whom?
> 
> ... trusted you on what you stated there. But perhaps I somehow
> misunderstood that sentence to mean you want to put your addition
> into the teardown functions, when apparently you meant to invoke
> them earlier in the process. Without clearly identifying why this
> would be a safe thing to do, I couldn't imagine that's what you
> suggest as alternative. 

This was a wording issue. I meant moving ->teardown() before 
iommu_free_pgtables() (or calling it from there).

Shall I introduce a new callback then?

> This is because the interdependencies of
> the IOMMU code are pretty hard to follow at times, and hence any
> such re-ordering has a fair risk of breaking something elsewhere.

Right, this is another reason to try to keep most of my fix 
self-contained, rather than relying on a top-layer call to protect 
against a dying domain.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 16:35:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 16:35:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58496.102926 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks76O-0000Vd-PO; Wed, 23 Dec 2020 16:35:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58496.102926; Wed, 23 Dec 2020 16:35:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks76O-0000VW-M7; Wed, 23 Dec 2020 16:35:04 +0000
Received: by outflank-mailman (input) for mailman id 58496;
 Wed, 23 Dec 2020 16:35:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0ifz=F3=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ks76N-0000VR-Ae
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 16:35:03 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b8a901d4-b239-4ab3-821f-884361ec54fa;
 Wed, 23 Dec 2020 16:35:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b8a901d4-b239-4ab3-821f-884361ec54fa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608741302;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=J12nue8zO9s1xfEu2GlpYvaW7r0FVfxhrRgoIRus5is=;
  b=bgpdTXCMu22X0z5pC7SfxjK8QGXdHEGp+de9mgmK5s3oC5Eo17hkqZYN
   B6XUc2coSGTcrDbE2SCsCrOnwQOXplKtzsNT+gyvRCHX09ep4aXDUIIzH
   kICIUV15djP+NvQgOdlBJDrRAHHN+2FqvYT0oqnvRg/bH7EBE2SdqB+Ga
   U=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 8BUBcM15IGcJTsBlq3qedEFKzHxVzA/I0oq5XV2y2cYzzzjUoH94d8wFpBECH75fjznpvFE9ez
 i707r9ToS/uy3f0vPpQ7eIfb27+vVWeXAujFWYH1fBbR5O/dWkNZsCFb9YWhZSMnlkX21lY6xb
 +y4FK9R+6CZ8Rknb+4erqcHbIW3ch0uHrByrJC++bQiKvsOfxQFeVQsysgAHlemBUTy/aEfdUp
 oQZ8E490HoiFDsI1wzkX6nD283Dl+E4riMQ0YW2gIb8Mh8iQqbUGuD5ic8aEAwVLefSH9f4zJJ
 Nvk=
X-SBRS: 5.2
X-MesageID: 33844236
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,441,1599537600"; 
   d="scan'208";a="33844236"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Tamas K Lengyel <tamas@tklengyel.com>
Subject: [PATCH 4/4] tools/misc: Test for fault injection
Date: Wed, 23 Dec 2020 16:34:42 +0000
Message-ID: <20201223163442.8840-5-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201223163442.8840-1-andrew.cooper3@citrix.com>
References: <20201223163442.8840-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: Tamas K Lengyel <tamas@tklengyel.com>

RFC: This wants expanding to a few more "default" configurations, and then
some thought needs putting towards automating it.
---
 tools/misc/.gitignore      |  1 +
 tools/misc/Makefile        |  5 +++++
 tools/misc/xen-fault-ttl.c | 56 ++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 62 insertions(+)
 create mode 100644 tools/misc/xen-fault-ttl.c

diff --git a/tools/misc/.gitignore b/tools/misc/.gitignore
index c5fe2cfccd..8d117c3b7d 100644
--- a/tools/misc/.gitignore
+++ b/tools/misc/.gitignore
@@ -1 +1,2 @@
+xen-fault-ttl
 xen-ucode
diff --git a/tools/misc/Makefile b/tools/misc/Makefile
index 7d37f297a9..5c1ed9a284 100644
--- a/tools/misc/Makefile
+++ b/tools/misc/Makefile
@@ -9,6 +9,7 @@ CFLAGS += $(CFLAGS_libxenctrl)
 CFLAGS += $(CFLAGS_libxenguest)
 CFLAGS += $(CFLAGS_xeninclude)
 CFLAGS += $(CFLAGS_libxenstore)
+CFLAGS += -Wno-declaration-after-statement
 
 # Everything to be installed in regular bin/
 INSTALL_BIN-$(CONFIG_X86)      += xen-cpuid
@@ -25,6 +26,7 @@ INSTALL_SBIN-$(CONFIG_X86)     += xen-lowmemd
 INSTALL_SBIN-$(CONFIG_X86)     += xen-mfndump
 INSTALL_SBIN-$(CONFIG_X86)     += xen-ucode
 INSTALL_SBIN                   += xencov
+INSTALL_SBIN                   += xen-fault-ttl
 INSTALL_SBIN                   += xenhypfs
 INSTALL_SBIN                   += xenlockprof
 INSTALL_SBIN                   += xenperf
@@ -76,6 +78,9 @@ distclean: clean
 xen-cpuid: xen-cpuid.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(APPEND_LDFLAGS)
 
+xen-fault-ttl: xen-fault-ttl.o
+	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(APPEND_LDFLAGS)
+
 xen-hvmctx: xen-hvmctx.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
 
diff --git a/tools/misc/xen-fault-ttl.c b/tools/misc/xen-fault-ttl.c
new file mode 100644
index 0000000000..e7953443da
--- /dev/null
+++ b/tools/misc/xen-fault-ttl.c
@@ -0,0 +1,56 @@
+#include <stdio.h>
+#include <err.h>
+#include <string.h>
+#include <errno.h>
+
+#include <xenctrl.h>
+
+int main(int argc, char **argv)
+{
+    int rc;
+    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
+
+    if ( !xch )
+        err(1, "xc_interface_open");
+
+    struct xen_domctl_createdomain create = {
+        .fault_ttl = 1,
+        .flags = (XEN_DOMCTL_CDF_hvm |
+                  XEN_DOMCTL_CDF_hap),
+        .max_vcpus = 1,
+        .max_evtchn_port = -1,
+        .max_grant_frames = 64,
+        .max_maptrack_frames = 1024,
+        .arch = {
+            .emulation_flags = XEN_X86_EMU_LAPIC,
+        },
+    };
+    uint32_t domid = 0;
+
+    for ( rc = 1, errno = ENOMEM; rc && errno == ENOMEM; create.fault_ttl++ )
+        rc = xc_domain_create(xch, &domid, &create);
+
+    if ( rc == 0 )
+    {
+        printf("Created d%u with fault_ttl of %u\n", domid, create.fault_ttl - 1);
+
+        xc_domain_destroy(xch, domid);
+    }
+    else
+        printf("Domain creation failed: %d: %s\n",
+               -errno, strerror(errno));
+
+    xc_interface_close(xch);
+
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Dec 23 16:35:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 16:35:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58498.102938 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks76V-0000Xt-1d; Wed, 23 Dec 2020 16:35:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58498.102938; Wed, 23 Dec 2020 16:35:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks76U-0000Xm-UI; Wed, 23 Dec 2020 16:35:10 +0000
Received: by outflank-mailman (input) for mailman id 58498;
 Wed, 23 Dec 2020 16:35:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0ifz=F3=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ks76T-0000XT-JQ
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 16:35:09 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7a4fefce-e134-48d8-8e6e-ee2771c0087a;
 Wed, 23 Dec 2020 16:35:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a4fefce-e134-48d8-8e6e-ee2771c0087a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608741308;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=mIHqFXZKILQ2PSe2CqBuCc+ylOQGqiB+Dim1E7v3fY8=;
  b=WkE9fwTVcyLFjX/EI973IXox2c9pLJdNO57vvGgTJARIhW5hgFuzQG5V
   XkrHo7Vms8wGKB+KJIoLM0J5uav9q3HwCfKdAtRudpekuDK/AXmyJAEMj
   x6tzRvDGUVzPNROoFk2+cUrhZ7YjJ50D8wCyTriz9I4o5llrCNdDa1zRz
   A=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: UwZgj87PZM7v3t9dGeOvfWHE/mUNZGCeT53o7uemvG8vc3yvXqwO3G70+qQ+Wu0OGDHrhLXkWF
 QvlgSamH7GIOFgWyzWn4r9ssVvDjxuJj8fm9XPeoO7FAPZvpf6eZM1Q0v/1Z3Skxj2iMJyzuHL
 YboTG/z/gC3qnsjxNrhzGk7AVLtZ25XvMxupuqpWjapIRWkholW9km0W78ANwapUMM0pA9dh6i
 /9WFsUEPiuQCukHkLigjmtCC1Qz8i0UOpOIwosrJ8LpRSl9nZX3eQjljGZHLdt7JWZScElnI5I
 35M=
X-SBRS: 5.2
X-MesageID: 34085172
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,441,1599537600"; 
   d="scan'208";a="34085172"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Tamas K Lengyel <tamas@tklengyel.com>
Subject: [PATCH 1/4] xen/dmalloc: Introduce dmalloc() APIs
Date: Wed, 23 Dec 2020 16:34:39 +0000
Message-ID: <20201223163442.8840-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201223163442.8840-1-andrew.cooper3@citrix.com>
References: <20201223163442.8840-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Wrappers for xmalloc() and friends, which track allocations tied to a specific
domain.

Check for any leaked memory at domain destruction time.
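
The accounting can be sketched as a standalone model (hypothetical
mock_domain/dzalloc/dfree names here, not Xen code; the real implementation
uses an atomic_t on struct domain and the xmalloc family, and this sketch
zeroes memory to match the "z" in the name):

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Standalone model of the accounting scheme.  Each successful
 * allocation bumps a per-domain counter; each free drops it; a nonzero
 * counter at teardown indicates a leak.
 */
struct mock_domain {
    int dalloc_heap;               /* outstanding tracked allocations */
};

static void *dzalloc(struct mock_domain *d, size_t size)
{
    void *ptr = calloc(1, size);   /* zeroing allocator, like xzalloc */

    if ( ptr )
        d->dalloc_heap++;

    return ptr;
}

static void dfree(struct mock_domain *d, void *ptr)
{
    if ( ptr )                     /* NULL frees must not skew the count */
        d->dalloc_heap--;
    free(ptr);
}
```

At domain destruction, anything still counted in dalloc_heap is reported as
a leak.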

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: Tamas K Lengyel <tamas@tklengyel.com>

RFC:
 * This probably wants to be less fatal in release builds
 * In an ideal world, we'd also want to count the total number of bytes
   allocated from the xmalloc heap, which would be interesting to print in the
   'q' debugkey.  However, that data is fairly invasive to obtain.
 * More complicated logic could track the origins of each allocation, and be
   able to identify which one(s) leaked.
---
 xen/common/Makefile       |  1 +
 xen/common/dmalloc.c      | 19 +++++++++++++++++++
 xen/common/domain.c       |  6 ++++++
 xen/include/xen/dmalloc.h | 29 +++++++++++++++++++++++++++++
 xen/include/xen/sched.h   |  2 ++
 5 files changed, 57 insertions(+)
 create mode 100644 xen/common/dmalloc.c
 create mode 100644 xen/include/xen/dmalloc.h

diff --git a/xen/common/Makefile b/xen/common/Makefile
index 7a4e652b57..c5d9c23fd1 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -5,6 +5,7 @@ obj-$(CONFIG_CORE_PARKING) += core_parking.o
 obj-y += cpu.o
 obj-$(CONFIG_DEBUG_TRACE) += debugtrace.o
 obj-$(CONFIG_HAS_DEVICE_TREE) += device_tree.o
+obj-y += dmalloc.o
 obj-y += domain.o
 obj-y += event_2l.o
 obj-y += event_channel.o
diff --git a/xen/common/dmalloc.c b/xen/common/dmalloc.c
new file mode 100644
index 0000000000..e3a0e546c2
--- /dev/null
+++ b/xen/common/dmalloc.c
@@ -0,0 +1,19 @@
+#include <xen/dmalloc.h>
+#include <xen/sched.h>
+#include <xen/xmalloc.h>
+
+void dfree(struct domain *d, void *ptr)
+{
+    atomic_dec(&d->dalloc_heap);
+    xfree(ptr);
+}
+
+void *_dzalloc(struct domain *d, size_t size, size_t align)
+{
+    void *ptr = _xmalloc(size, align);
+
+    if ( ptr )
+        atomic_inc(&d->dalloc_heap);
+
+    return ptr;
+}
diff --git a/xen/common/domain.c b/xen/common/domain.c
index d151be3f36..1db1c0e70a 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -332,6 +332,8 @@ static int domain_teardown(struct domain *d)
  */
 static void _domain_destroy(struct domain *d)
 {
+    int outstanding;
+
     BUG_ON(!d->is_dying);
     BUG_ON(atomic_read(&d->refcnt) != DOMAIN_DESTROYED);
 
@@ -347,6 +349,10 @@ static void _domain_destroy(struct domain *d)
 
     lock_profile_deregister_struct(LOCKPROF_TYPE_PERDOM, d);
 
+    outstanding = atomic_read(&d->dalloc_heap);
+    if ( outstanding )
+        panic("%pd has %d outstanding heap allocations\n", d, outstanding);
+
     free_domain_struct(d);
 }
 
diff --git a/xen/include/xen/dmalloc.h b/xen/include/xen/dmalloc.h
new file mode 100644
index 0000000000..a90cf0259c
--- /dev/null
+++ b/xen/include/xen/dmalloc.h
@@ -0,0 +1,29 @@
+#ifndef XEN_DMALLOC_H
+#define XEN_DMALLOC_H
+
+#include <xen/types.h>
+
+struct domain;
+
+#define dzalloc_array(d, _type, _num)                                   \
+    ((_type *)_dzalloc_array(d, sizeof(_type), __alignof__(_type), _num))
+
+
+void dfree(struct domain *d, void *ptr);
+
+#define DFREE(d, p)                             \
+    do {                                        \
+        dfree(d, p);                            \
+        (p) = NULL;                             \
+    } while ( 0 )
+
+
+void *_dzalloc(struct domain *d, size_t size, size_t align);
+
+static inline void *_dzalloc_array(struct domain *d, size_t size,
+                                   size_t align, size_t num)
+{
+    return _dzalloc(d, size * num, align);
+}
+
+#endif /* XEN_DMALLOC_H */
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 3e46384a3c..8ed8b55a1e 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -349,6 +349,8 @@ struct domain
     atomic_t         shr_pages;         /* shared pages */
     atomic_t         paged_pages;       /* paged-out pages */
 
+    atomic_t         dalloc_heap;       /* Number of xmalloc-like allocations. */
+
     /* Scheduling. */
     void            *sched_priv;    /* scheduler-specific data */
     struct sched_unit *sched_unit_list;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Dec 23 16:35:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 16:35:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58499.102949 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks76Y-0000ap-9M; Wed, 23 Dec 2020 16:35:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58499.102949; Wed, 23 Dec 2020 16:35:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks76Y-0000ah-6F; Wed, 23 Dec 2020 16:35:14 +0000
Received: by outflank-mailman (input) for mailman id 58499;
 Wed, 23 Dec 2020 16:35:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0ifz=F3=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ks76X-0000XE-4c
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 16:35:13 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 82065b73-4d16-402a-9a87-7c23b59efd32;
 Wed, 23 Dec 2020 16:35:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 82065b73-4d16-402a-9a87-7c23b59efd32
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608741308;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=42LWWIePVgGkbX3Yus9uh6xAtV4+wGHkGPht/iiTXv4=;
  b=d3IJCzKpSYRBh/EBrlOTi0sysJTLn9hALLtbC6AVLQti0Bo31RIoqoIq
   P6xVbB/6nyPCBD+yCqh2a0y8gXoet8h09sjICAN8agpjMurQBMNjBKCBs
   nfxsl2mIhsajLTJymLm6QvsTn9uAvKfrYc0mE6PNhysnQkflI6x+KADRM
   4=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 3aDH0fJboDmcJbf6DmZfaG4XpFr1I5ipdPzKHgk9P9ywXPB3915v9EjoBreoYaYuz4HLmkPV6h
 q1GBABUPhd9XylP9+OhOcMqkoIy3T71dOs826xG0Iy3nPEhjyRPJYgRNWrpDd98erssun9eGP3
 XES3Fr3R4v9ZjTXaeSMBCwM8XtMtAV8q+cVkeku0Hh33H9DP68hsedrpwhIcsXmalQFSx6Yovv
 J9UMdJEJ5GT6N0E9NwYBb5dMAbTsQAkM2LG5lqv4MjJLs28rUNq6rkt/QrnIfS2baIY4TPWgSZ
 des=
X-SBRS: 5.2
X-MesageID: 33881652
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,441,1599537600"; 
   d="scan'208";a="33881652"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Tamas K Lengyel <tamas@tklengyel.com>
Subject: [PATCH 3/4] xen/domctl: Introduce fault_ttl
Date: Wed, 23 Dec 2020 16:34:41 +0000
Message-ID: <20201223163442.8840-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201223163442.8840-1-andrew.cooper3@citrix.com>
References: <20201223163442.8840-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Introduce a mechanism to inject simulated resource failures, for testing
purposes.

Given a specific set of hypercall parameters, the failure occurs at a
repeatable position for the currently booted Xen.  The exact position of
failures is highly dependent on the build of Xen, and on hardware support.
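
The countdown behaviour can be modelled outside Xen (hypothetical
faulty_domain/fault_alloc names; the posted patch uses
atomic_read()/atomic_dec_and_test() on d->fault_ttl inside _dzalloc()):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/*
 * Standalone model of the fault_ttl countdown.  With a TTL of N, the
 * Nth tracked allocation fails; once the counter reaches zero,
 * injection is disabled and later allocations proceed normally.
 */
struct faulty_domain {
    unsigned int fault_ttl;        /* 0 = fault injection disabled */
};

static void *fault_alloc(struct faulty_domain *d, size_t size)
{
    if ( d->fault_ttl && --d->fault_ttl == 0 )
        return NULL;               /* injected resource failure */

    return malloc(size);
}
```

The companion xen-fault-ttl tool can then walk every failure site in turn by
retrying domain creation with an incrementing TTL.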

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: Tamas K Lengyel <tamas@tklengyel.com>

RFC:
 * Probably wants to be Kconfig'd
 * I'm thinking of dropping handle from xen_domctl_createdomain because it's a
   waste of valuable space.
---
 xen/common/dmalloc.c        | 8 +++++++-
 xen/common/domain.c         | 8 ++++++--
 xen/include/public/domctl.h | 1 +
 xen/include/xen/sched.h     | 1 +
 4 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/xen/common/dmalloc.c b/xen/common/dmalloc.c
index e3a0e546c2..1f5d0f5627 100644
--- a/xen/common/dmalloc.c
+++ b/xen/common/dmalloc.c
@@ -10,7 +10,13 @@ void dfree(struct domain *d, void *ptr)
 
 void *_dzalloc(struct domain *d, size_t size, size_t align)
 {
-    void *ptr = _xmalloc(size, align);
+    void *ptr;
+
+    if ( atomic_read(&d->fault_ttl) &&
+         atomic_dec_and_test(&d->fault_ttl) )
+        return NULL;
+
+    ptr = _xmalloc(size, align);
 
     if ( ptr )
         atomic_inc(&d->dalloc_heap);
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 1db1c0e70a..cd73321980 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -427,14 +427,18 @@ struct domain *domain_create(domid_t domid,
     if ( (d = alloc_domain_struct()) == NULL )
         return ERR_PTR(-ENOMEM);
 
-    d->options = config ? config->flags : 0;
-
     /* Sort out our idea of is_system_domain(). */
     d->domain_id = domid;
 
     /* Debug sanity. */
     ASSERT(is_system_domain(d) ? config == NULL : config != NULL);
 
+    if ( config )
+    {
+        d->options = config->flags;
+        atomic_set(&d->fault_ttl, config->fault_ttl);
+    }
+
     /* Sort out our idea of is_control_domain(). */
     d->is_privileged = is_priv;
 
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 666aeb71bf..aaa3d66616 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -48,6 +48,7 @@
 /* XEN_DOMCTL_createdomain */
 struct xen_domctl_createdomain {
     /* IN parameters */
+    uint32_t fault_ttl;
     uint32_t ssidref;
     xen_domain_handle_t handle;
  /* Is this an HVM guest (as opposed to a PV guest)? */
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 8ed8b55a1e..620a9f20e5 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -349,6 +349,7 @@ struct domain
     atomic_t         shr_pages;         /* shared pages */
     atomic_t         paged_pages;       /* paged-out pages */
 
+    atomic_t         fault_ttl;         /* Time until a simulated resource failure. */
     atomic_t         dalloc_heap;       /* Number of xmalloc-like allocations. */
 
     /* Scheduling. */
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Dec 23 16:35:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 16:35:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58500.102962 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks76Z-0000dH-RD; Wed, 23 Dec 2020 16:35:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58500.102962; Wed, 23 Dec 2020 16:35:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks76Z-0000d8-MF; Wed, 23 Dec 2020 16:35:15 +0000
Received: by outflank-mailman (input) for mailman id 58500;
 Wed, 23 Dec 2020 16:35:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0ifz=F3=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ks76Y-0000XT-GZ
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 16:35:14 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a3ad9b30-209e-4365-9cc7-725099a7b43b;
 Wed, 23 Dec 2020 16:35:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a3ad9b30-209e-4365-9cc7-725099a7b43b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608741309;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=CMpXZjooSuf3izRShMQsyPS9IRnkTZlUjBWpT0DP1So=;
  b=hyMzwOKMxhA7UCo9DlwYJC2DFqp9rSHIDpxPc11vpwMf84PtzBXTncYf
   XpCP3S7c+Mut+3AKwVu9IGne4AughogITLj/yvfIDSaYa5V+GgisqNzJQ
   ygLAfEcKNBiwgOq5u6zO8+Cjk/QjEmeZSPI4f39XIWjiGAlpgsGPi2ZiE
   E=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 4KLhY6pCuhQiyyEWtLF4uW9Mh8tQVFNJ9+5WzDNSLa9nzav2Dps4WOqWJigTKdRvXNSRos3a4s
 UxB1L2Te3Mp2B6BzUBPZ7/g/KZLzKBDpPOIewPyGCwOG908PeIxFr/IUc84xTZKAvOeuSaXQmt
 n5yQl8v2A33a+/FRVCgTgQ1REP2EcWbbn9aQCbzgxsQOhPp0YvQquKNycOSipMqgdkwDOT0jdI
 TIvmyvKIt5B1Mpsp7XdZKd/C6iV/CIB4swA5d4I29r0M5grsAzkRk8C/YwcqY5W71+JJJDviGK
 mGI=
X-SBRS: 5.2
X-MesageID: 34085173
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,441,1599537600"; 
   d="scan'208";a="34085173"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Tamas K Lengyel <tamas@tklengyel.com>
Subject: [PATCH 2/4] xen/evtchn: Switch to dmalloc
Date: Wed, 23 Dec 2020 16:34:40 +0000
Message-ID: <20201223163442.8840-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201223163442.8840-1-andrew.cooper3@citrix.com>
References: <20201223163442.8840-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

This causes memory allocations to be tied to the domain in question.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: Tamas K Lengyel <tamas@tklengyel.com>

RFC: Likely more to convert.  This is just a minimal attempt.
---
 xen/common/event_channel.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 4a48094356..f0ca0933e3 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -16,6 +16,7 @@
 
 #include "event_channel.h"
 
+#include <xen/dmalloc.h>
 #include <xen/init.h>
 #include <xen/lib.h>
 #include <xen/errno.h>
@@ -153,7 +154,7 @@ static struct evtchn *alloc_evtchn_bucket(struct domain *d, unsigned int port)
     struct evtchn *chn;
     unsigned int i;
 
-    chn = xzalloc_array(struct evtchn, EVTCHNS_PER_BUCKET);
+    chn = dzalloc_array(d, struct evtchn, EVTCHNS_PER_BUCKET);
     if ( !chn )
         return NULL;
 
@@ -182,7 +183,7 @@ static void free_evtchn_bucket(struct domain *d, struct evtchn *bucket)
     for ( i = 0; i < EVTCHNS_PER_BUCKET; i++ )
         xsm_free_security_evtchn(bucket + i);
 
-    xfree(bucket);
+    dfree(d, bucket);
 }
 
 int evtchn_allocate_port(struct domain *d, evtchn_port_t port)
@@ -204,7 +205,7 @@ int evtchn_allocate_port(struct domain *d, evtchn_port_t port)
 
         if ( !group_from_port(d, port) )
         {
-            grp = xzalloc_array(struct evtchn *, BUCKETS_PER_GROUP);
+            grp = dzalloc_array(d, struct evtchn *, BUCKETS_PER_GROUP);
             if ( !grp )
                 return -ENOMEM;
             group_from_port(d, port) = grp;
@@ -1488,7 +1489,7 @@ int evtchn_init(struct domain *d, unsigned int max_port)
     write_atomic(&d->active_evtchns, 0);
 
 #if MAX_VIRT_CPUS > BITS_PER_LONG
-    d->poll_mask = xzalloc_array(unsigned long, BITS_TO_LONGS(d->max_vcpus));
+    d->poll_mask = dzalloc_array(d, unsigned long, BITS_TO_LONGS(d->max_vcpus));
     if ( !d->poll_mask )
     {
         free_evtchn_bucket(d, d->evtchn);
@@ -1545,13 +1546,12 @@ void evtchn_destroy_final(struct domain *d)
             continue;
         for ( j = 0; j < BUCKETS_PER_GROUP; j++ )
             free_evtchn_bucket(d, d->evtchn_group[i][j]);
-        xfree(d->evtchn_group[i]);
+        dfree(d, d->evtchn_group[i]);
     }
     free_evtchn_bucket(d, d->evtchn);
 
 #if MAX_VIRT_CPUS > BITS_PER_LONG
-    xfree(d->poll_mask);
-    d->poll_mask = NULL;
+    DFREE(d, d->poll_mask);
 #endif
 }
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Dec 23 16:35:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 16:35:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58501.102974 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks76e-0000iP-4B; Wed, 23 Dec 2020 16:35:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58501.102974; Wed, 23 Dec 2020 16:35:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks76e-0000iG-0M; Wed, 23 Dec 2020 16:35:20 +0000
Received: by outflank-mailman (input) for mailman id 58501;
 Wed, 23 Dec 2020 16:35:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0ifz=F3=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ks76c-0000XE-52
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 16:35:18 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 409ce93c-3858-4660-bff6-b2cbe68ad34b;
 Wed, 23 Dec 2020 16:35:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 409ce93c-3858-4660-bff6-b2cbe68ad34b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608741309;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=woFvqUrlRFwkse6aw4Hd13+wrpq/34Y+A2vX8m04mkI=;
  b=OkJ4PAd4lJXsP1sOAd56FkZXQTWuNwgZbv91tG6BBsT2i1Qcgv8J0IKF
   xhDFJhuhOwmCCMyHbajLZp7FnGQglyKKSosb5B1VxhdS/Pe+OCqFc2RQy
   NayE4yiLDJuus42r4LWv27/OKEsHDx4wlGRYAe9+nOa3F/gAsXXV6JTU+
   8=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: cBblqHoX7pzEwiIzuhxQ7YbYc0nU7dAt/nkNXSknMLvwDl8yrY/cZPEAYtPQ5xueh5EKRZlFBY
 qS2W4jJ3mNvNOlx9GuDKL8RBIyRviWpVQAco+sbYwy+WZ7kQz5CJMjjB4YPTjqHpLcURJV0ggg
 s+Za10a5BvG/ppyblp5lzA217PFnMQA36OplfyDL5hPafqfImrEexBy5IMOHWPtcyKXAh7aOYH
 RZVdy63Fth/RZ9mh69AYkJbEC+x5n/CYcjI/+iJJ7DFNN1967BuWk9aSfO1jNoG+Que4EC2d4R
 B6o=
X-SBRS: 5.2
X-MesageID: 33881654
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,441,1599537600"; 
   d="scan'208";a="33881654"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Tamas K Lengyel <tamas@tklengyel.com>
Subject: [PATCH 0/4] xen: domain-tracked allocations, and fault injection
Date: Wed, 23 Dec 2020 16:34:38 +0000
Message-ID: <20201223163442.8840-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

This was not the Christmas hacking project that I was planning to do, but it
has had some exciting results.

After some discussion on an earlier thread, Tamas has successfully got fuzzing
of Xen working via kfx, and this series is a prototype for providing better
testing infrastructure.

And to prove a point, this series has already found a memory leak in ARM's
dom0less smoke test.

From https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/929518792

  (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
  (XEN) Freed 328kB init memory.
  (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER4
  (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER8
  (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER12
  (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER16
  (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER20
  (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER24
  (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER28
  (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER32
  (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
  (XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=25: not implemented
  (XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=15: not implemented
  (XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=15: not implemented
  (XEN)
  (XEN) ****************************************
  (XEN) Panic on CPU 0:
  (XEN) d1 has 2 outstanding heap allocations
  (XEN) ****************************************
  (XEN)
  (XEN) Reboot in five seconds...

For some reason, neither of the evtchn default memory allocations is freed,
but it's not clear why d1 shut down to begin with.  Stefano - any ideas?

Andrew Cooper (4):
  xen/dmalloc: Introduce dmalloc() APIs
  xen/evtchn: Switch to dmalloc
  xen/domctl: Introduce fault_ttl
  tools/misc: Test for fault injection

 tools/misc/.gitignore       |  1 +
 tools/misc/Makefile         |  5 ++++
 tools/misc/xen-fault-ttl.c  | 56 +++++++++++++++++++++++++++++++++++++++++++++
 xen/common/Makefile         |  1 +
 xen/common/dmalloc.c        | 25 ++++++++++++++++++++
 xen/common/domain.c         | 14 ++++++++++--
 xen/common/event_channel.c  | 14 ++++++------
 xen/include/public/domctl.h |  1 +
 xen/include/xen/dmalloc.h   | 29 +++++++++++++++++++++++
 xen/include/xen/sched.h     |  3 +++
 10 files changed, 140 insertions(+), 9 deletions(-)
 create mode 100644 tools/misc/xen-fault-ttl.c
 create mode 100644 xen/common/dmalloc.c
 create mode 100644 xen/include/xen/dmalloc.h

-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Dec 23 16:35:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 16:35:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58506.102986 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks76o-0000rl-Fs; Wed, 23 Dec 2020 16:35:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58506.102986; Wed, 23 Dec 2020 16:35:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks76o-0000rd-BP; Wed, 23 Dec 2020 16:35:30 +0000
Received: by outflank-mailman (input) for mailman id 58506;
 Wed, 23 Dec 2020 16:35:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zN8f=F3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ks76n-0000XT-H5
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 16:35:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9ee3d01c-77a8-4965-b640-66e607890d7e;
 Wed, 23 Dec 2020 16:35:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3562EAD18;
 Wed, 23 Dec 2020 16:35:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ee3d01c-77a8-4965-b640-66e607890d7e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608741319; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=yvT1r/O+a0xjJdQM9X5N+I06mHebAFYu2G8LdS1zrvo=;
	b=Deh5vWhwJOAalWM+GwsbG5eN8oxG0h4QNZfvSVdOP41VbYwrTpXYG/2xLhPIPbWB7P+9S5
	F+34Vnkk3xgIKvtGJvKeuy+SomDSuV1pag2r3ubNFDodWcZqxUnmFqxjBffv/qVhSnzkao
	lw+0s6SyEcrhlYl0agZ0e9G8oTQUxNw=
Subject: Re: [PATCH for-4.15 4/4] xen/iommu: x86: Don't leak the IOMMU
 page-tables
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20201222154338.9459-1-julien@xen.org>
 <20201222154338.9459-5-julien@xen.org>
 <beb22b59-701e-462c-5080-e99033079204@suse.com>
 <d62f8851-b417-b22a-4527-c2c43b536446@xen.org>
 <e897e1bf-9c17-f8a9-274a-673ff7f1a009@suse.com>
 <0ff629b1-25e6-6ce4-43ab-d50af52ecb8c@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a22f7364-518f-ea5f-3b87-5c0462cfc193@suse.com>
Date: Wed, 23 Dec 2020 17:35:18 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <0ff629b1-25e6-6ce4-43ab-d50af52ecb8c@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.12.2020 17:19, Julien Grall wrote:
> On 23/12/2020 16:15, Jan Beulich wrote:
>> On 23.12.2020 17:07, Julien Grall wrote:
>>> I think I prefer the small race introduced (the pages will still be
>>> freed) over this solution.
>>
>> The "will still be freed" is because of the 2nd round of freeing
>> you add in an earlier patch? I'd prefer to avoid the race to in
>> turn avoid that 2nd round of freeing.
> 
> The "2nd round" of freeing is necessary at least for the domain creation 
> failure case (it would be the 1st round).
> 
> If we can avoid IOMMU page-table allocation in the domain creation path, 
> then yes I agree that we want to avoid that "2nd round". Otherwise, I 
> think it is best to take advantage of it.

Well, I'm not really certain either way here. If it's really just
VMX's APIC access page that's the problem here, custom cleanup
of this "fallout" from VMX code would certainly be an option.
Furthermore I consider it wrong to insert this page in the IOMMU
page tables in the first place. This page overlaps with the MSI
special address range, and hence accesses will be dealt with by
interrupt remapping machinery (if enabled). If interrupt
remapping is disabled, this page should be no different for I/O
purposes than all other pages in this window - they shouldn't
be mapped at all.

Perhaps, along the lines of epte_get_entry_emt(), ept_set_entry()
should gain an is_special_page() check to prevent propagation to
the IOMMU for such pages (we can't do anything about them in
shared-page-table setups)? See also the (PV related) comment in
cleanup_page_mappings(), a few lines ahead of a respective use of
is_special_page().

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 16:42:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 16:42:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58520.102997 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks7D3-00022E-5k; Wed, 23 Dec 2020 16:41:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58520.102997; Wed, 23 Dec 2020 16:41:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks7D3-000227-2i; Wed, 23 Dec 2020 16:41:57 +0000
Received: by outflank-mailman (input) for mailman id 58520;
 Wed, 23 Dec 2020 16:41:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zN8f=F3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ks7D1-00021o-Il
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 16:41:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3b8c4c7e-fc3c-4c9b-80eb-edee12388e81;
 Wed, 23 Dec 2020 16:41:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7E27EACF1;
 Wed, 23 Dec 2020 16:41:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3b8c4c7e-fc3c-4c9b-80eb-edee12388e81
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608741713; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=O9OOtA8Qc/tES3oyETCPx+qqFk54z+xzGKtXLHHlof8=;
	b=vEgprFgdIii6UYfy+7T9yE8j6sxQsFoQqB5jcBFosL4nohggNofzN5fH6MTJwayS0B02LM
	syvlsieyOXVGpd2NFvstKwCtCG1IPZdWdyt/OOswdsaJd01+19vzW364v5XLr8RS2LtRN+
	PUfx76doQ0VciZ9+sKu9eca1mH9Ghrc=
Subject: Re: [PATCH 4/4] tools/misc: Test for fault injection
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, Tamas K Lengyel <tamas@tklengyel.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20201223163442.8840-1-andrew.cooper3@citrix.com>
 <20201223163442.8840-5-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3fe99df6-ea2a-d877-8cb9-a4e7e84589d0@suse.com>
Date: Wed, 23 Dec 2020 17:41:53 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <20201223163442.8840-5-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.12.2020 17:34, Andrew Cooper wrote:
> --- /dev/null
> +++ b/tools/misc/xen-fault-ttl.c
> @@ -0,0 +1,56 @@
> +#include <stdio.h>
> +#include <err.h>
> +#include <string.h>
> +#include <errno.h>
> +
> +#include <xenctrl.h>
> +
> +int main(int argc, char **argv)
> +{
> +    int rc;
> +    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
> +
> +    if ( !xch )
> +        err(1, "xc_interface_open");
> +
> +    struct xen_domctl_createdomain create = {
> +        .fault_ttl = 1,
> +        .flags = (XEN_DOMCTL_CDF_hvm |
> +                  XEN_DOMCTL_CDF_hap),
> +        .max_vcpus = 1,
> +        .max_evtchn_port = -1,
> +        .max_grant_frames = 64,
> +        .max_maptrack_frames = 1024,
> +        .arch = {
> +            .emulation_flags = XEN_X86_EMU_LAPIC,
> +        },
> +    };
> +    uint32_t domid = 0;
> +
> +    for ( rc = 1; rc && errno == ENOMEM; create.fault_ttl++ )
> +        rc = xc_domain_create(xch, &domid, &create);

The very first time you get here, how does errno end up being
set to ENOMEM? To me

    do {
        create.fault_ttl++;
        rc = xc_domain_create(xch, &domid, &create);
    } while ( rc && errno == ENOMEM );

would seem more natural (and also avoid the somewhat odd
"rc = 1").

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 16:46:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 16:46:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58524.103009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks7HT-0002BD-O5; Wed, 23 Dec 2020 16:46:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58524.103009; Wed, 23 Dec 2020 16:46:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks7HT-0002B6-Kz; Wed, 23 Dec 2020 16:46:31 +0000
Received: by outflank-mailman (input) for mailman id 58524;
 Wed, 23 Dec 2020 16:46:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zN8f=F3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ks7HS-0002B1-Ex
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 16:46:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d84c6b3b-2a70-4389-bbb7-42e1115897cc;
 Wed, 23 Dec 2020 16:46:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 23622ACF1;
 Wed, 23 Dec 2020 16:46:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d84c6b3b-2a70-4389-bbb7-42e1115897cc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608741988; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Z6Z6q3jbEQA4yLYFvsAhtc7KfPVrmkYSdG6G9pWGvw4=;
	b=LpUxgwXBFrWmwd7xBOognh5PXL8+KdzBEIaCZW2io3w5nPEHLKNlS3foQgkjv0PnS7kg58
	uWTxSBT79pGF2q5BCpg+uevzUrj4mS5rkSeNM8ZaW8KDEtTEMT8yVwLpEwJNVgGXqjSzpv
	a7aVe2WwHDVT1fz0nTV7dA33W3R0Yx8=
Subject: Re: [PATCH for-4.15 3/4] [RFC] xen/iommu: x86: Clear the root
 page-table before freeing the page-tables
To: Julien Grall <julien@xen.org>, Paul Durrant <paul@xen.org>
Cc: hongyxia@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 xen-devel@lists.xenproject.org
References: <20201222154338.9459-1-julien@xen.org>
 <20201222154338.9459-4-julien@xen.org>
 <499e6d5a-e8ac-56db-1af9-70469b6a06b9@suse.com>
 <8b394c44-5bdb-9d82-b211-5a4ee3473568@xen.org>
 <19e92d90-ed9a-4bd6-79f4-b761b5a039c6@suse.com>
 <96ce1b10-9764-b71e-ac26-982ba8dcc34d@xen.org>
 <092e5199-7eab-2722-7f0b-43fb3c8b2065@suse.com>
 <281188a0-f632-c0a1-4591-0a66ef0068f5@xen.org>
 <d7b866b6-118a-f873-f8df-eb112b708fe3@suse.com>
 <0699ad7a-7c3b-e1e8-c7f7-0bfb54d03c78@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <63091edf-a870-cac1-587a-59cb9d0f8d8d@suse.com>
Date: Wed, 23 Dec 2020 17:46:27 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <0699ad7a-7c3b-e1e8-c7f7-0bfb54d03c78@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.12.2020 17:29, Julien Grall wrote:
> On 23/12/2020 16:24, Jan Beulich wrote:
>> On 23.12.2020 17:16, Julien Grall wrote:
>>> On 23/12/2020 16:11, Jan Beulich wrote:
>>>> On 23.12.2020 16:16, Julien Grall wrote:
>>>>> On 23/12/2020 15:00, Jan Beulich wrote:
>>>>>> On 23.12.2020 15:56, Julien Grall wrote:
>>>>>>> On 23/12/2020 14:12, Jan Beulich wrote:
>>>>>>>> On 22.12.2020 16:43, Julien Grall wrote:
>>>>>>>>> This is an RFC because it would break AMD IOMMU driver. One option would
>>>>>>>>> be to move the call to the teardown callback earlier on. Any opinions?
>>
>> Please note this (in your original submission). I simply ...
>>
>>>>>>>> We already have
>>>>>>>>
>>>>>>>> static void amd_iommu_domain_destroy(struct domain *d)
>>>>>>>> {
>>>>>>>>         dom_iommu(d)->arch.amd.root_table = NULL;
>>>>>>>> }
>>>>>>>>
>>>>>>>> and this function is AMD's teardown handler. Hence I suppose
>>>>>>>> doing the same for VT-d would be quite reasonable. And really
>>>>>>>> VT-d's iommu_domain_teardown() also already has
>>>>>>>>
>>>>>>>>         hd->arch.vtd.pgd_maddr = 0;
>>>>>>>
>>>>>>> Let me have a look if that works.
>>>>>>>
>>>>>>>>
>>>>>>>> I guess what's missing is prevention of the root table
>>>>>>>> getting re-setup.
>>>>>>>
>>>>>>> This is taken care of in the follow-up patch by forbidding page-table
>>>>>>> allocation. I can mention it in the commit message.
>>>>>>
>>>>>> My expectation is that with that subsequent change the change here
>>>>>> (or any variant of it) would become unnecessary.
>>>>>
>>>>> I am not sure. iommu_unmap() would still get called from put_page().
>>>>> Are you suggesting to gate the code if d->is_dying as well?
>>>>
>>>> Unmap shouldn't be allocating any memory right now, as in
>>>> non-shared-page-table mode we don't install any superpages
>>>> (unless I misremember).
>>>
>>> It doesn't allocate memory, but it will try to access the IOMMU
>>> page-tables (see more below).
>>>
>>>>
>>>>> Even if this patch is deemed unnecessary to fix the issue, the
>>>>> issue was quite hard to chase/reproduce.
>>>>>
>>>>> I think it would still be good to harden the code by zeroing
>>>>> hd->arch.vtd.pgd_maddr to avoid anyone else wasting 2 days because the
>>>>> pointer was still "valid".
>>>>
>>>> But my point was that this zeroing already happens.
>>>> What I
>>>> suspect is that it gets re-populated after it was zeroed,
>>>> because of page table manipulation that shouldn't be
>>>> occurring anymore for a dying domain.
>>>
>>> AFAICT, the zeroing is happening in ->teardown() helper.
>>>
>>> It is only called when the domain is fully destroyed (see call in
>>> arch_domain_destroy()). This will happen much after the relinquish code has run.
>>>
>>> Could you clarify why you think it is already zeroed and by who?
>>
>> ... trusted you on what you stated there. But perhaps I somehow
>> misunderstood that sentence to mean you want to put your addition
>> into the teardown functions, when apparently you meant to invoke
>> them earlier in the process. Without clearly identifying why this
>> would be a safe thing to do, I couldn't imagine that's what you
>> suggest as alternative. 
> 
> This was a wording issue. I meant moving ->teardown() before (or calling 
> from) iommu_free_pgtables().
> 
> Shall I introduce a new callback then?

Earlier zeroing won't help unless you prevent re-population, or
unless you make the code capable of telling "still zero" from
"already zero". But I have to admit I'd like to also have Paul's
opinion on the matter.

Jan

>> This is because the interdependencies of
>> the IOMMU code are pretty hard to follow at times, and hence any
>> such re-ordering has a fair risk of breaking something elsewhere.
> 
> Right, this is another reason to try to get most of my fix 
> self-contained rather than relying on a top-layer call to protect against a 
> domain dying.
> 
> Cheers,
> 



From xen-devel-bounces@lists.xenproject.org Wed Dec 23 16:54:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 16:54:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58528.103022 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks7Om-00037O-Hh; Wed, 23 Dec 2020 16:54:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58528.103022; Wed, 23 Dec 2020 16:54:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks7Om-00037H-EE; Wed, 23 Dec 2020 16:54:04 +0000
Received: by outflank-mailman (input) for mailman id 58528;
 Wed, 23 Dec 2020 16:54:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0ifz=F3=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ks7Ol-00037C-AR
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 16:54:03 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id adf9c0fe-df5e-415a-ae92-cd5b3f3e2391;
 Wed, 23 Dec 2020 16:54:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: adf9c0fe-df5e-415a-ae92-cd5b3f3e2391
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608742440;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=+Ow/08H7o3lfn+IMD8GWEhKU7GWxaorSbZkANt5eAH4=;
  b=SCrAluiaE0V+yS/2kYdjKbNGQ6EZVmuVhYwTnB7QWyT38LBhFdMzQkna
   5OcXmHQvmkpTuCl7ELEOTOKU/GCQNDRo9GS2TofNsP6pLPBc6+T6c8vvW
   RP6FXxXnyzIi1PmJ+22VopaCkV4s9Lw+qn6sssPQqH6Q/g8eAD7NBpTTI
   U=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: hTBonwiLKpo/PXjRNigNTipnfjqUbR19Vf9I7uTIUOSXQPFH1h4CmRGo99qqWw6vNSMY4iyB+k
 K6zRlTJPZzcfTW3Eua43a7A3bn0hHwOYC3EhGsp/8+cJKCVHb8+7DgsaHk+1luTOyUJOhTJgOx
 7YUEudbYmW7Wy0CUN/cezI890grXvdAvoi+QQV9f0hkXjTfmpLFSZ5hkloxda/LO2vtlMRKAKY
 cYfNJaKHalyR+XN69PGXzRw5yUKrbHT/Ewi4RQ0R3QYQzVOMTe61MPigAxbe7livZZAQAjI1Na
 368=
X-SBRS: 5.2
X-MesageID: 34204501
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,441,1599537600"; 
   d="scan'208";a="34204501"
Subject: Re: [PATCH v2] lib: drop (replace) debug_build()
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <ae31ccf1-7334-cdf9-9b90-edac7ca4e148@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <bdb96275-c6a4-a4d2-9195-67fd2f3f1bf3@citrix.com>
Date: Wed, 23 Dec 2020 16:53:53 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <ae31ccf1-7334-cdf9-9b90-edac7ca4e148@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 23/12/2020 16:05, Jan Beulich wrote:
> Its expansion shouldn't be tied to NDEBUG - down the road we may want to
> allow enabling assertions independently of CONFIG_DEBUG. Replace the few
> uses by a new xen_build_info() helper, subsuming gcov_string at the same
> time (while replacing the stale CONFIG_GCOV used there) and also adding
> CONFIG_UBSAN indication.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>, although...

> ---
> v2: Introduce xen_build_info() including also gcov and ubsan info.
>
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -175,14 +175,14 @@ static void print_xen_info(void)
>  {
>      char taint_str[TAINT_STRING_MAX_LEN];
>  
> -    printk("----[ Xen-%d.%d%s  %s  debug=%c " gcov_string "  %s ]----\n",
> +    printk("----[ Xen-%d.%d%s  %s  %s  %s ]----\n",
>             xen_major_version(), xen_minor_version(), xen_extra_version(),
>  #ifdef CONFIG_ARM_32
>             "arm32",
>  #else
>             "arm64",
>  #endif
> -           debug_build() ? 'y' : 'n', print_tainted(taint_str));
> +           xen_build_info(), print_tainted(taint_str));
>  }
>  
>  #ifdef CONFIG_ARM_32
> --- a/xen/arch/x86/x86_64/traps.c
> +++ b/xen/arch/x86/x86_64/traps.c
> @@ -29,9 +29,9 @@ static void print_xen_info(void)
>  {
>      char taint_str[TAINT_STRING_MAX_LEN];
>  
> -    printk("----[ Xen-%d.%d%s  x86_64  debug=%c " gcov_string "  %s ]----\n",
> +    printk("----[ Xen-%d.%d%s  x86_64  %s  %s ]----\n",
>             xen_major_version(), xen_minor_version(), xen_extra_version(),
> -           debug_build() ? 'y' : 'n', print_tainted(taint_str));
> +           xen_build_info(), print_tainted(taint_str));
>  }
>  
>  enum context { CTXT_hypervisor, CTXT_pv_guest, CTXT_hvm_guest };
> --- a/xen/common/version.c
> +++ b/xen/common/version.c
> @@ -70,6 +70,30 @@ const char *xen_deny(void)
>      return "<denied>";
>  }
>  
> +static const char build_info[] =
> +    "debug="
> +#ifdef CONFIG_DEBUG
> +    "y"
> +#else
> +    "n"
> +#endif
> +#ifdef CONFIG_COVERAGE
> +# ifdef __clang__
> +    " llvmcov=y"
> +# else
> +    " gcov=y"
> +# endif
> +#endif
> +#ifdef CONFIG_UBSAN
> +    " ubsan=y"
> +#endif
> +    "";
> +
> +const char *xen_build_info(void)
> +{
> +    return build_info;
> +}

... do we really need a function here?

Wouldn't an extern const char build_info[] do?

~Andrew

> +
>  static const void *build_id_p __read_mostly;
>  static unsigned int build_id_len __read_mostly;
>  
> --- a/xen/drivers/char/console.c
> +++ b/xen/drivers/char/console.c
> @@ -1002,10 +1002,10 @@ void __init console_init_preirq(void)
>      spin_lock(&console_lock);
>      __putstr(xen_banner());
>      spin_unlock(&console_lock);
> -    printk("Xen version %d.%d%s (%s@%s) (%s) debug=%c " gcov_string " %s\n",
> +    printk("Xen version %d.%d%s (%s@%s) (%s) %s %s\n",
>             xen_major_version(), xen_minor_version(), xen_extra_version(),
> -           xen_compile_by(), xen_compile_domain(),
> -           xen_compiler(), debug_build() ? 'y' : 'n', xen_compile_date());
> +           xen_compile_by(), xen_compile_domain(), xen_compiler(),
> +           xen_build_info(), xen_compile_date());
>      printk("Latest ChangeSet: %s\n", xen_changeset());
>  
>      /* Locate and print the buildid, if applicable. */
> --- a/xen/include/xen/lib.h
> +++ b/xen/include/xen/lib.h
> @@ -48,21 +48,13 @@
>  #define BUILD_BUG_ON(cond) ((void)BUILD_BUG_ON_ZERO(cond))
>  #endif
>  
> -#ifdef CONFIG_GCOV
> -#define gcov_string "gcov=y"
> -#else
> -#define gcov_string ""
> -#endif
> -
>  #ifndef NDEBUG
>  #define ASSERT(p) \
>      do { if ( unlikely(!(p)) ) assert_failed(#p); } while (0)
>  #define ASSERT_UNREACHABLE() assert_failed("unreachable")
> -#define debug_build() 1
>  #else
>  #define ASSERT(p) do { if ( 0 && (p) ) {} } while (0)
>  #define ASSERT_UNREACHABLE() do { } while (0)
> -#define debug_build() 0
>  #endif
>  
>  #define ABS(_x) ({                              \
> --- a/xen/include/xen/version.h
> +++ b/xen/include/xen/version.h
> @@ -16,6 +16,7 @@ const char *xen_extra_version(void);
>  const char *xen_changeset(void);
>  const char *xen_banner(void);
>  const char *xen_deny(void);
> +const char *xen_build_info(void);
>  int xen_build_id(const void **p, unsigned int *len);
>  
>  #ifdef BUILD_ID



From xen-devel-bounces@lists.xenproject.org Wed Dec 23 16:54:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 16:54:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58531.103033 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks7P4-0003CM-V9; Wed, 23 Dec 2020 16:54:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58531.103033; Wed, 23 Dec 2020 16:54:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks7P4-0003CF-SE; Wed, 23 Dec 2020 16:54:22 +0000
Received: by outflank-mailman (input) for mailman id 58531;
 Wed, 23 Dec 2020 16:54:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ks7P3-0003C1-UR
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 16:54:21 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks7P3-0002tl-IU; Wed, 23 Dec 2020 16:54:21 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks7P3-0005oj-9s; Wed, 23 Dec 2020 16:54:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=JSrlVRBi1SAxZFUgw2GLFe9sKXE8qhYjB+QHM9orH/I=; b=HiFFSH9iW56TVR1WFi/Q7iUo9h
	XR52AOHJBsJi8iHDjqQRMe9wCd9MKgrHVUtqNpr8LIY38qvJPiXAHAo/3lpuKsY5pkj0PKEONmcwo
	CJDAC2ftlmDwbAnPVVWilNCUT3Y1lLkEbdtdQ9eEV1s1XIr+owpsL+XB6Cu99iprC6+M=;
Subject: Re: [PATCH for-4.15 3/4] [RFC] xen/iommu: x86: Clear the root
 page-table before freeing the page-tables
To: Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>
Cc: hongyxia@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 xen-devel@lists.xenproject.org
References: <20201222154338.9459-1-julien@xen.org>
 <20201222154338.9459-4-julien@xen.org>
 <499e6d5a-e8ac-56db-1af9-70469b6a06b9@suse.com>
 <8b394c44-5bdb-9d82-b211-5a4ee3473568@xen.org>
 <19e92d90-ed9a-4bd6-79f4-b761b5a039c6@suse.com>
 <96ce1b10-9764-b71e-ac26-982ba8dcc34d@xen.org>
 <092e5199-7eab-2722-7f0b-43fb3c8b2065@suse.com>
 <281188a0-f632-c0a1-4591-0a66ef0068f5@xen.org>
 <d7b866b6-118a-f873-f8df-eb112b708fe3@suse.com>
 <0699ad7a-7c3b-e1e8-c7f7-0bfb54d03c78@xen.org>
 <63091edf-a870-cac1-587a-59cb9d0f8d8d@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <6582c77e-114c-1ad7-0179-7e2e58a23745@xen.org>
Date: Wed, 23 Dec 2020 16:54:19 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <63091edf-a870-cac1-587a-59cb9d0f8d8d@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 23/12/2020 16:46, Jan Beulich wrote:
> On 23.12.2020 17:29, Julien Grall wrote:
>> On 23/12/2020 16:24, Jan Beulich wrote:
>>> On 23.12.2020 17:16, Julien Grall wrote:
>>>> On 23/12/2020 16:11, Jan Beulich wrote:
>>>>> On 23.12.2020 16:16, Julien Grall wrote:
>>>>>> On 23/12/2020 15:00, Jan Beulich wrote:
>>>>>>> On 23.12.2020 15:56, Julien Grall wrote:
>>>>>>>> On 23/12/2020 14:12, Jan Beulich wrote:
>>>>>>>>> On 22.12.2020 16:43, Julien Grall wrote:
>>>>>>>>>> This is an RFC because it would break AMD IOMMU driver. One option would
>>>>>>>>>> be to move the call to the teardown callback earlier on. Any opinions?
>>>
>>> Please note this (in your original submission). I simply ...
>>>
>>>>>>>>> We already have
>>>>>>>>>
>>>>>>>>> static void amd_iommu_domain_destroy(struct domain *d)
>>>>>>>>> {
>>>>>>>>>          dom_iommu(d)->arch.amd.root_table = NULL;
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>> and this function is AMD's teardown handler. Hence I suppose
>>>>>>>>> doing the same for VT-d would be quite reasonable. And really
>>>>>>>>> VT-d's iommu_domain_teardown() also already has
>>>>>>>>>
>>>>>>>>>          hd->arch.vtd.pgd_maddr = 0;
>>>>>>>>
>>>>>>>> Let me have a look if that works.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> I guess what's missing is prevention of the root table
>>>>>>>>> getting re-setup.
>>>>>>>>
>>>>>>>> This is taken care of in the follow-up patch by forbidding page-table
>>>>>>>> allocation. I can mention it in the commit message.
>>>>>>>
>>>>>>> My expectation is that with that subsequent change the change here
>>>>>>> (or any variant of it) would become unnecessary.
>>>>>>
>>>>>> I am not sure. iommu_unmap() would still get called from put_page().
>>>>>> Are you suggesting to gate the code if d->is_dying as well?
>>>>>
>>>>> Unmap shouldn't be allocating any memory right now, as in
>>>>> non-shared-page-table mode we don't install any superpages
>>>>> (unless I misremember).
>>>>
>>>> It doesn't allocate memory, but it will try to access the IOMMU
>>>> page-tables (see more below).
>>>>
>>>>>
>>>>>> Even if this patch is deemed unnecessary to fix the issue, the
>>>>>> issue was quite hard to chase/reproduce.
>>>>>>
>>>>>> I think it would still be good to harden the code by zeroing
>>>>>> hd->arch.vtd.pgd_maddr to avoid anyone else wasting 2 days because the
>>>>>> pointer was still "valid".
>>>>>
>>>>> But my point was that this zeroing already happens.
>>>>> What I
>>>>> suspect is that it gets re-populated after it was zeroed,
>>>>> because of page table manipulation that shouldn't be
>>>>> occurring anymore for a dying domain.
>>>>
>>>> AFAICT, the zeroing is happening in ->teardown() helper.
>>>>
>>>> It is only called when the domain is fully destroyed (see call in
>>>> arch_domain_destroy()). This will happen much after the relinquish code has run.
>>>>
>>>> Could you clarify why you think it is already zeroed and by who?
>>>
>>> ... trusted you on what you stated there. But perhaps I somehow
>>> misunderstood that sentence to mean you want to put your addition
>>> into the teardown functions, when apparently you meant to invoke
>>> them earlier in the process. Without clearly identifying why this
>>> would be a safe thing to do, I couldn't imagine that's what you
>>> suggest as alternative.
>>
>> This was a wording issue. I meant moving ->teardown() before (or calling
>> from) iommu_free_pgtables().
>>
>> Shall I introduce a new callback then?
> 
> Earlier zeroing won't help unless you prevent re-population, or
> unless you make the code capable of telling "still zero" from
> "already zero". But I have to admit I'd like to also have Paul's
> opinion on the matter.

Patch #4 is meant to prevent that with the d->is_dying check in the 
IOMMU page-table allocation.

Do you think this is not enough?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 16:57:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 16:57:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58536.103046 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks7SL-0003PK-Fb; Wed, 23 Dec 2020 16:57:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58536.103046; Wed, 23 Dec 2020 16:57:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks7SL-0003PD-CD; Wed, 23 Dec 2020 16:57:45 +0000
Received: by outflank-mailman (input) for mailman id 58536;
 Wed, 23 Dec 2020 16:57:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lNd7=F3=daemonizer.de=maxi@srs-us1.protection.inumbo.net>)
 id 1ks7SK-0003P8-D6
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 16:57:44 +0000
Received: from mx1.somlen.de (unknown [89.238.87.226])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fdf84dfb-656f-4fa2-ad13-5b1a7f7e29c3;
 Wed, 23 Dec 2020 16:57:36 +0000 (UTC)
Received: by mx1.somlen.de with ESMTPSA id DBDCAC3AF0B;
 Wed, 23 Dec 2020 17:57:34 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fdf84dfb-656f-4fa2-ad13-5b1a7f7e29c3
From: Maximilian Engelhardt <maxi@daemonizer.de>
To: xen-devel@lists.xenproject.org
Cc: Maximilian Engelhardt <maxi@daemonizer.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>
Subject: [XEN PATCH v2] docs: set date to SOURCE_DATE_EPOCH if available
Date: Wed, 23 Dec 2020 17:56:53 +0100
Message-Id: <8b4564696cae00041848af8c5793172b80edadd5.1608742171.git.maxi@daemonizer.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Check whether a GNU date supporting the '-u -d @...' options and syntax,
or a BSD date, is available. If so, and if SOURCE_DATE_EPOCH is defined,
use the appropriate options for the date command to produce a fixed date.
If SOURCE_DATE_EPOCH is not defined, or no suitable date command was
found, fall back to the current date. This enables reproducible builds.
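The fallback chain described above can be sketched as a small standalone
shell script. Note this probes date(1) directly at run time rather than
using the configure-time DATE_TYPE_AT variable the patch actually relies
on, and the SOURCE_DATE_EPOCH value is just an example timestamp:

```shell
#!/bin/sh
# Sketch of the GNU/BSD date fallback logic (simplified: the patch does
# this detection in configure, not in the Makefile itself).
DATE_FMT='+%Y-%m-%d'
SOURCE_DATE_EPOCH=1608742171   # example timestamp (23 Dec 2020 UTC)

if date --version 2>/dev/null | grep -q GNU; then
    # GNU syntax: -d "@<epoch>" interprets the argument as a Unix timestamp
    DATE=$(date -u -d "@$SOURCE_DATE_EPOCH" "$DATE_FMT")
elif date -u -r "$SOURCE_DATE_EPOCH" "$DATE_FMT" >/dev/null 2>&1; then
    # BSD syntax: -r <epoch> does the same
    DATE=$(date -u -r "$SOURCE_DATE_EPOCH" "$DATE_FMT")
else
    # Neither detected: fall back to the (non-reproducible) current date
    DATE=$(date "$DATE_FMT")
fi
echo "$DATE"
```

On a system with GNU or BSD date this prints the same string regardless
of when the build runs, which is the whole point of SOURCE_DATE_EPOCH.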

Signed-off-by: Maximilian Engelhardt <maxi@daemonizer.de>

Changes in v2:
- add capability detection for the 'date' command using ax_prog_date.m4
- add information about detected date command to config/Docs.mk
- only call a supported date command in docs/Makefile
---
Please note the ax_prog_date.m4 macro is taken from the autoconf-archive
repository [1]; its license is GPL v3 or later, with an exception for
the generated configure script.

[1] https://www.gnu.org/software/autoconf-archive/
---
 config/Docs.mk.in  |   3 +
 docs/Makefile      |  16 +++-
 docs/configure     | 213 +++++++++++++++++++++++++++++++++++++++++++++
 docs/configure.ac  |   9 ++
 m4/ax_prog_date.m4 | 139 +++++++++++++++++++++++++++++
 5 files changed, 379 insertions(+), 1 deletion(-)
 create mode 100644 m4/ax_prog_date.m4

diff --git a/config/Docs.mk.in b/config/Docs.mk.in
index e76e5cd5ff..cc2abd8fcc 100644
--- a/config/Docs.mk.in
+++ b/config/Docs.mk.in
@@ -7,3 +7,6 @@ POD2HTML            := @POD2HTML@
 POD2TEXT            := @POD2TEXT@
 PANDOC              := @PANDOC@
 PERL                := @PERL@
+
+# Variables
+DATE_TYPE_AT        := @DATE_TYPE_AT@
diff --git a/docs/Makefile b/docs/Makefile
index 8de1efb6f5..c79fe0b63e 100644
--- a/docs/Makefile
+++ b/docs/Makefile
@@ -3,7 +3,21 @@ include $(XEN_ROOT)/Config.mk
 -include $(XEN_ROOT)/config/Docs.mk
 
 VERSION		:= $(shell $(MAKE) -C $(XEN_ROOT)/xen --no-print-directory xenversion)
-DATE		:= $(shell date +%Y-%m-%d)
+
+DATE_FMT	:= +%Y-%m-%d
+ifdef SOURCE_DATE_EPOCH
+ifeq ($(DATE_TYPE_AT),GNU)
+DATE		:= $(shell date -u -d "@$(SOURCE_DATE_EPOCH)" "$(DATE_FMT)")
+else
+ifeq ($(DATE_TYPE_AT),BSD)
+DATE		:= $(shell date -u -r "$(SOURCE_DATE_EPOCH)" "$(DATE_FMT)")
+else
+DATE		:= $(shell date "$(DATE_FMT)")
+endif
+endif
+else
+DATE		:= $(shell date "$(DATE_FMT)")
+endif
 
 DOC_ARCHES      := arm x86_32 x86_64
 MAN_SECTIONS    := 1 5 7 8
diff --git a/docs/configure b/docs/configure
index f55268564e..9c3218f560 100755
--- a/docs/configure
+++ b/docs/configure
@@ -587,6 +587,7 @@ PACKAGE_URL='https://www.xen.org/'
 ac_unique_file="misc/xen-command-line.pandoc"
 ac_subst_vars='LTLIBOBJS
 LIBOBJS
+DATE_TYPE_AT
 PERL
 PANDOC
 POD2TEXT
@@ -1808,6 +1809,86 @@ case "$host_os" in
 esac
 
 
+# Fetched from https://git.savannah.gnu.org/gitweb/?p=autoconf-archive.git;a=blob_plain;f=m4/ax_prog_date.m4
+# Commit ID: fd1d25c14855037f6ccd7dbc7fe9ae5fc9bea8f4
+# ===========================================================================
+#       https://www.gnu.org/software/autoconf-archive/ax_prog_date.html
+# ===========================================================================
+#
+# SYNOPSIS
+#
+#   AX_PROG_DATE()
+#
+# DESCRIPTION
+#
+#   This macro tries to determine the type of the date (1) command and some
+#   of its non-standard capabilities.
+#
+#   The type is determined as follow:
+#
+#     * If the version string contains "GNU", then:
+#       - The variable ax_cv_prog_date_gnu is set to "yes".
+#       - The variable ax_cv_prog_date_type is set to "gnu".
+#
+#     * If date supports the "-v 1d" option, then:
+#       - The variable ax_cv_prog_date_bsd is set to "yes".
+#       - The variable ax_cv_prog_date_type is set to "bsd".
+#
+#     * If both previous checks fail, then:
+#       - The variable ax_cv_prog_date_type is set to "unknown".
+#
+#   The following capabilities of GNU date are checked:
+#
+#     * If date supports the --date arg option, then:
+#       - The variable ax_cv_prog_date_gnu_date is set to "yes".
+#
+#     * If date supports the --utc arg option, then:
+#       - The variable ax_cv_prog_date_gnu_utc is set to "yes".
+#
+#   The following capabilities of BSD date are checked:
+#
+#     * If date supports the -v 1d  option, then:
+#       - The variable ax_cv_prog_date_bsd_adjust is set to "yes".
+#
+#     * If date supports the -r arg option, then:
+#       - The variable ax_cv_prog_date_bsd_date is set to "yes".
+#
+#   All the aforementioned variables are set to "no" before a check is
+#   performed.
+#
+# LICENSE
+#
+#   Copyright (c) 2017 Enrico M. Crisostomo <enrico.m.crisostomo@gmail.com>
+#
+#   This program is free software: you can redistribute it and/or modify it
+#   under the terms of the GNU General Public License as published by the
+#   Free Software Foundation, either version 3 of the License, or (at your
+#   option) any later version.
+#
+#   This program is distributed in the hope that it will be useful, but
+#   WITHOUT ANY WARRANTY; without even the implied warranty of
+#   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+#   Public License for more details.
+#
+#   You should have received a copy of the GNU General Public License along
+#   with this program. If not, see <http://www.gnu.org/licenses/>.
+#
+#   As a special exception, the respective Autoconf Macro's copyright owner
+#   gives unlimited permission to copy, distribute and modify the configure
+#   scripts that are the output of Autoconf when processing the Macro. You
+#   need not follow the terms of the GNU General Public License when using
+#   or distributing such scripts, even though portions of the text of the
+#   Macro appear in them. The GNU General Public License (GPL) does govern
+#   all other use of the material that constitutes the Autoconf Macro.
+#
+#   This special exception to the GPL applies to versions of the Autoconf
+#   Macro released by the Autoconf Archive. When you make and distribute a
+#   modified version of the Autoconf Macro, you may extend this special
+#   exception to the GPL to apply to your modified version as well.
+
+#serial 3
+
+
 
 
 test "x$prefix" = "xNONE" && prefix=$ac_default_prefix
@@ -2267,6 +2348,138 @@ then
     as_fn_error $? "Unable to find perl, please install perl" "$LINENO" 5
 fi
 
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for GNU date" >&5
+$as_echo_n "checking for GNU date... " >&6; }
+if ${ax_cv_prog_date_gnu+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+
+    ax_cv_prog_date_gnu=no
+    if date --version 2>/dev/null | head -1 | grep -q GNU
+    then
+      ax_cv_prog_date_gnu=yes
+    fi
+
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ax_cv_prog_date_gnu" >&5
+$as_echo "$ax_cv_prog_date_gnu" >&6; }
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for BSD date" >&5
+$as_echo_n "checking for BSD date... " >&6; }
+if ${ax_cv_prog_date_bsd+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+
+    ax_cv_prog_date_bsd=no
+    if date -v 1d > /dev/null 2>&1
+    then
+      ax_cv_prog_date_bsd=yes
+    fi
+
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ax_cv_prog_date_bsd" >&5
+$as_echo "$ax_cv_prog_date_bsd" >&6; }
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for date type" >&5
+$as_echo_n "checking for date type... " >&6; }
+if ${ax_cv_prog_date_type+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+
+    ax_cv_prog_date_type=unknown
+    if test "x${ax_cv_prog_date_gnu}" = "xyes"
+    then
+      ax_cv_prog_date_type=gnu
+    elif test "x${ax_cv_prog_date_bsd}" = "xyes"
+    then
+      ax_cv_prog_date_type=bsd
+    fi
+
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ax_cv_prog_date_type" >&5
+$as_echo "$ax_cv_prog_date_type" >&6; }
+  if test "x$ax_cv_prog_date_gnu" = xyes; then :
+
+    { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether GNU date supports --date" >&5
+$as_echo_n "checking whether GNU date supports --date... " >&6; }
+if ${ax_cv_prog_date_gnu_date+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+
+      ax_cv_prog_date_gnu_date=no
+      if date --date=@1512031231 > /dev/null 2>&1
+      then
+        ax_cv_prog_date_gnu_date=yes
+      fi
+
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ax_cv_prog_date_gnu_date" >&5
+$as_echo "$ax_cv_prog_date_gnu_date" >&6; }
+    { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether GNU date supports --utc" >&5
+$as_echo_n "checking whether GNU date supports --utc... " >&6; }
+if ${ax_cv_prog_date_gnu_utc+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+
+      ax_cv_prog_date_gnu_utc=no
+      if date --utc > /dev/null 2>&1
+      then
+        ax_cv_prog_date_gnu_utc=yes
+      fi
+
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ax_cv_prog_date_gnu_utc" >&5
+$as_echo "$ax_cv_prog_date_gnu_utc" >&6; }
+
+fi
+  if test "x$ax_cv_prog_date_bsd" = xyes; then :
+
+    { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether BSD date supports -r" >&5
+$as_echo_n "checking whether BSD date supports -r... " >&6; }
+if ${ax_cv_prog_date_bsd_date+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+
+      ax_cv_prog_date_bsd_date=no
+      if date -r 1512031231 > /dev/null 2>&1
+      then
+        ax_cv_prog_date_bsd_date=yes
+      fi
+
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ax_cv_prog_date_bsd_date" >&5
+$as_echo "$ax_cv_prog_date_bsd_date" >&6; }
+
+fi
+    if test "x$ax_cv_prog_date_bsd" = xyes; then :
+
+    { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether BSD date supports -v" >&5
+$as_echo_n "checking whether BSD date supports -v... " >&6; }
+if ${ax_cv_prog_date_bsd_adjust+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+
+      ax_cv_prog_date_bsd_adjust=no
+      if date -v 1d > /dev/null 2>&1
+      then
+        ax_cv_prog_date_bsd_adjust=yes
+      fi
+
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ax_cv_prog_date_bsd_adjust" >&5
+$as_echo "$ax_cv_prog_date_bsd_adjust" >&6; }
+
+fi
+
+if test "$ax_cv_prog_date_gnu_date:$ax_cv_prog_date_gnu_utc" = yes:yes; then :
+  DATE_TYPE_AT="GNU"
+else
+  if test "x$ax_cv_prog_date_bsd_date" = xyes; then :
+  DATE_TYPE_AT="BSD"
+else
+  DATE_TYPE_AT="Unknown"
+fi
+fi
+
+
 cat >confcache <<\_ACEOF
 # This file is a shell script that caches the results of configure
 # tests run on this system so they can be shared between configure
diff --git a/docs/configure.ac b/docs/configure.ac
index cb5a6eaa4c..c87471e706 100644
--- a/docs/configure.ac
+++ b/docs/configure.ac
@@ -17,6 +17,7 @@ m4_include([../m4/docs_tool.m4])
 m4_include([../m4/path_or_fail.m4])
 m4_include([../m4/features.m4])
 m4_include([../m4/paths.m4])
+m4_include([../m4/ax_prog_date.m4])
 
 AX_XEN_EXPAND_CONFIG()
 
@@ -29,4 +30,12 @@ AX_DOCS_TOOL_PROG([PANDOC], [pandoc])
 AC_ARG_VAR([PERL], [Path to Perl parser])
 AX_PATH_PROG_OR_FAIL([PERL], [perl])
 
+AX_PROG_DATE
+AS_IF([test "$ax_cv_prog_date_gnu_date:$ax_cv_prog_date_gnu_utc" = yes:yes],
+    [DATE_TYPE_AT="GNU"],
+    [AS_IF([test "x$ax_cv_prog_date_bsd_date" = xyes],
+        [DATE_TYPE_AT="BSD"],
+        [DATE_TYPE_AT="Unknown"])])
+AC_SUBST([DATE_TYPE_AT])
+
 AC_OUTPUT()
diff --git a/m4/ax_prog_date.m4 b/m4/ax_prog_date.m4
new file mode 100644
index 0000000000..675412bac2
--- /dev/null
+++ b/m4/ax_prog_date.m4
@@ -0,0 +1,139 @@
+# Fetched from https://git.savannah.gnu.org/gitweb/?p=autoconf-archive.git;a=blob_plain;f=m4/ax_prog_date.m4
+# Commit ID: fd1d25c14855037f6ccd7dbc7fe9ae5fc9bea8f4
+# ===========================================================================
+#       https://www.gnu.org/software/autoconf-archive/ax_prog_date.html
+# ===========================================================================
+#
+# SYNOPSIS
+#
+#   AX_PROG_DATE()
+#
+# DESCRIPTION
+#
+#   This macro tries to determine the type of the date (1) command and some
+#   of its non-standard capabilities.
+#
+#   The type is determined as follow:
+#
+#     * If the version string contains "GNU", then:
+#       - The variable ax_cv_prog_date_gnu is set to "yes".
+#       - The variable ax_cv_prog_date_type is set to "gnu".
+#
+#     * If date supports the "-v 1d" option, then:
+#       - The variable ax_cv_prog_date_bsd is set to "yes".
+#       - The variable ax_cv_prog_date_type is set to "bsd".
+#
+#     * If both previous checks fail, then:
+#       - The variable ax_cv_prog_date_type is set to "unknown".
+#
+#   The following capabilities of GNU date are checked:
+#
+#     * If date supports the --date arg option, then:
+#       - The variable ax_cv_prog_date_gnu_date is set to "yes".
+#
+#     * If date supports the --utc arg option, then:
+#       - The variable ax_cv_prog_date_gnu_utc is set to "yes".
+#
+#   The following capabilities of BSD date are checked:
+#
+#     * If date supports the -v 1d  option, then:
+#       - The variable ax_cv_prog_date_bsd_adjust is set to "yes".
+#
+#     * If date supports the -r arg option, then:
+#       - The variable ax_cv_prog_date_bsd_date is set to "yes".
+#
+#   All the aforementioned variables are set to "no" before a check is
+#   performed.
+#
+# LICENSE
+#
+#   Copyright (c) 2017 Enrico M. Crisostomo <enrico.m.crisostomo@gmail.com>
+#
+#   This program is free software: you can redistribute it and/or modify it
+#   under the terms of the GNU General Public License as published by the
+#   Free Software Foundation, either version 3 of the License, or (at your
+#   option) any later version.
+#
+#   This program is distributed in the hope that it will be useful, but
+#   WITHOUT ANY WARRANTY; without even the implied warranty of
+#   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+#   Public License for more details.
+#
+#   You should have received a copy of the GNU General Public License along
+#   with this program. If not, see <http://www.gnu.org/licenses/>.
+#
+#   As a special exception, the respective Autoconf Macro's copyright owner
+#   gives unlimited permission to copy, distribute and modify the configure
+#   scripts that are the output of Autoconf when processing the Macro. You
+#   need not follow the terms of the GNU General Public License when using
+#   or distributing such scripts, even though portions of the text of the
+#   Macro appear in them. The GNU General Public License (GPL) does govern
+#   all other use of the material that constitutes the Autoconf Macro.
+#
+#   This special exception to the GPL applies to versions of the Autoconf
+#   Macro released by the Autoconf Archive. When you make and distribute a
+#   modified version of the Autoconf Macro, you may extend this special
+#   exception to the GPL to apply to your modified version as well.
+
+#serial 3
+
+AC_DEFUN([AX_PROG_DATE], [dnl
+  AC_CACHE_CHECK([for GNU date], [ax_cv_prog_date_gnu], [
+    ax_cv_prog_date_gnu=no
+    if date --version 2>/dev/null | head -1 | grep -q GNU
+    then
+      ax_cv_prog_date_gnu=yes
+    fi
+  ])
+  AC_CACHE_CHECK([for BSD date], [ax_cv_prog_date_bsd], [
+    ax_cv_prog_date_bsd=no
+    if date -v 1d > /dev/null 2>&1
+    then
+      ax_cv_prog_date_bsd=yes
+    fi
+  ])
+  AC_CACHE_CHECK([for date type], [ax_cv_prog_date_type], [
+    ax_cv_prog_date_type=unknown
+    if test "x${ax_cv_prog_date_gnu}" = "xyes"
+    then
+      ax_cv_prog_date_type=gnu
+    elif test "x${ax_cv_prog_date_bsd}" = "xyes"
+    then
+      ax_cv_prog_date_type=bsd
+    fi
+  ])
+  AS_VAR_IF([ax_cv_prog_date_gnu], [yes], [
+    AC_CACHE_CHECK([whether GNU date supports --date], [ax_cv_prog_date_gnu_date], [
+      ax_cv_prog_date_gnu_date=no
+      if date --date=@1512031231 > /dev/null 2>&1
+      then
+        ax_cv_prog_date_gnu_date=yes
+      fi
+    ])
+    AC_CACHE_CHECK([whether GNU date supports --utc], [ax_cv_prog_date_gnu_utc], [
+      ax_cv_prog_date_gnu_utc=no
+      if date --utc > /dev/null 2>&1
+      then
+        ax_cv_prog_date_gnu_utc=yes
+      fi
+    ])
+  ])
+  AS_VAR_IF([ax_cv_prog_date_bsd], [yes], [
+    AC_CACHE_CHECK([whether BSD date supports -r], [ax_cv_prog_date_bsd_date], [
+      ax_cv_prog_date_bsd_date=no
+      if date -r 1512031231 > /dev/null 2>&1
+      then
+        ax_cv_prog_date_bsd_date=yes
+      fi
+    ])
+  ])
+    AS_VAR_IF([ax_cv_prog_date_bsd], [yes], [
+    AC_CACHE_CHECK([whether BSD date supports -v], [ax_cv_prog_date_bsd_adjust], [
+      ax_cv_prog_date_bsd_adjust=no
+      if date -v 1d > /dev/null 2>&1
+      then
+        ax_cv_prog_date_bsd_adjust=yes
+      fi
+    ])
+  ])
+])dnl AX_PROG_DATE
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Dec 23 16:59:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 16:59:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58540.103058 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks7UR-0003XL-Sz; Wed, 23 Dec 2020 16:59:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58540.103058; Wed, 23 Dec 2020 16:59:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks7UR-0003XE-PT; Wed, 23 Dec 2020 16:59:55 +0000
Received: by outflank-mailman (input) for mailman id 58540;
 Wed, 23 Dec 2020 16:59:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zN8f=F3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ks7UP-0003X9-SQ
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 16:59:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 53adbc5c-1012-4c18-8e38-2a2a2270bee4;
 Wed, 23 Dec 2020 16:59:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 37A1EACC6;
 Wed, 23 Dec 2020 16:59:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 53adbc5c-1012-4c18-8e38-2a2a2270bee4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608742792; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=U96HdMJrByaUuseJkZ3+6NMrEsCWMjTIorLwAJV2XdM=;
	b=tkRGcixugxo7fHtMdZjCsn2GwD4KNPCnjkMRGcTPA+KFCKPQSbIYWS6U8ynt85ely5PZEg
	4RFhodTLn2ewcViyYG2c0XeyJFbfYfVM4IaOmQiKOuydtsQh1mU9mDzuzT+BcT65awnR6o
	6o6KuFtQHs0Ots+F8Q/S1i8Ssr9K70M=
Subject: Re: [PATCH v2] lib: drop (replace) debug_build()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <ae31ccf1-7334-cdf9-9b90-edac7ca4e148@suse.com>
 <bdb96275-c6a4-a4d2-9195-67fd2f3f1bf3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <11bb1b39-7d1d-bcf4-1bff-4472a3c79dea@suse.com>
Date: Wed, 23 Dec 2020 17:59:51 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <bdb96275-c6a4-a4d2-9195-67fd2f3f1bf3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.12.2020 17:53, Andrew Cooper wrote:
> On 23/12/2020 16:05, Jan Beulich wrote:
>> Its expansion shouldn't be tied to NDEBUG - down the road we may want to
>> allow enabling assertions independently of CONFIG_DEBUG. Replace the few
>> uses by a new xen_build_info() helper, subsuming gcov_string at the same
>> time (while replacing the stale CONFIG_GCOV used there) and also adding
>> CONFIG_UBSAN indication.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>,

Thanks.

>> --- a/xen/common/version.c
>> +++ b/xen/common/version.c
>> @@ -70,6 +70,30 @@ const char *xen_deny(void)
>>      return "<denied>";
>>  }
>>  
>> +static const char build_info[] =
>> +    "debug="
>> +#ifdef CONFIG_DEBUG
>> +    "y"
>> +#else
>> +    "n"
>> +#endif
>> +#ifdef CONFIG_COVERAGE
>> +# ifdef __clang__
>> +    " llvmcov=y"
>> +# else
>> +    " gcov=y"
>> +# endif
>> +#endif
>> +#ifdef CONFIG_UBSAN
>> +    " ubsan=y"
>> +#endif
>> +    "";
>> +
>> +const char *xen_build_info(void)
>> +{
>> +    return build_info;
>> +}
> 
> ... do we really need a function here?
> 
> Wouldn't an extern const char build_info[] do?

It probably would, but I wanted things to remain consistent with
the siblings, many of which also return string literals (or
effectively plain numbers).

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 17:02:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 17:02:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58544.103069 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks7X0-0004QL-B3; Wed, 23 Dec 2020 17:02:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58544.103069; Wed, 23 Dec 2020 17:02:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks7X0-0004QE-7W; Wed, 23 Dec 2020 17:02:34 +0000
Received: by outflank-mailman (input) for mailman id 58544;
 Wed, 23 Dec 2020 17:02:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0ifz=F3=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ks7Wy-0004Q9-U6
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 17:02:32 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 72847706-24f1-4342-b091-341ccbfc39ae;
 Wed, 23 Dec 2020 17:02:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 72847706-24f1-4342-b091-341ccbfc39ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608742951;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=kYgNW59ChK2fCZZH8onTJqALT3bKd6CHCZsvhPNBxOo=;
  b=Le2arulZUzlywYdbLQTp7isaa2P6aN7/XQtYSSKdRUfz8BxvVCBArbg7
   Yr8y5ZEKDsg6bYZAH/UZOyEuVwYgDpyjmr5kMBLGZLuTylam0LgX1rc65
   Hx//hty0NgQjsAqVnCVP5IlFeAX9uolhkZ1zfnP8MRdVS8muu8yJBpyFg
   Y=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: dbeZBWQAUucFcmFHL29ezytC7/1mVTgorsfr6SpmoFUzhnWiu0w7AsV6OB0/OWuOcc/WsFcqKa
 C6FEdV3HKC7NMrF1I5cHiQpgFuWEoP7V3nARMT8bZ8kAITobnooVGg5hFKEMPMmSyaNze6Xqwh
 OQr1dkvyTJjbZweTOvSzAD17n58QOgMgUHtjMzIW/HsFRUR4FLsfENmWod9mr13Gh2/mxOGEKC
 EWr6KC8jH+dTTn2oQ4mdh14jp3TI8dc+ugT9tlBLlvBz08jLvQcQMsJl3dr2F2B2tPt+HfqHa5
 OIU=
X-SBRS: 5.2
X-MesageID: 33866011
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,441,1599537600"; 
   d="scan'208";a="33866011"
Subject: Re: [PATCH v2] lib: drop (replace) debug_build()
To: Jan Beulich <jbeulich@suse.com>
CC: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <ae31ccf1-7334-cdf9-9b90-edac7ca4e148@suse.com>
 <bdb96275-c6a4-a4d2-9195-67fd2f3f1bf3@citrix.com>
 <11bb1b39-7d1d-bcf4-1bff-4472a3c79dea@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <6f9ece4d-6fef-07cf-b4d5-2b13790956e1@citrix.com>
Date: Wed, 23 Dec 2020 17:02:22 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <11bb1b39-7d1d-bcf4-1bff-4472a3c79dea@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 23/12/2020 16:59, Jan Beulich wrote:
> On 23.12.2020 17:53, Andrew Cooper wrote:
>> On 23/12/2020 16:05, Jan Beulich wrote:
>>> Its expansion shouldn't be tied to NDEBUG - down the road we may want to
>>> allow enabling assertions independently of CONFIG_DEBUG. Replace the few
>>> uses by a new xen_build_info() helper, subsuming gcov_string at the same
>>> time (while replacing the stale CONFIG_GCOV used there) and also adding
>>> CONFIG_UBSAN indication.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>,
> Thanks.
>
>>> --- a/xen/common/version.c
>>> +++ b/xen/common/version.c
>>> @@ -70,6 +70,30 @@ const char *xen_deny(void)
>>>      return "<denied>";
>>>  }
>>>  
>>> +static const char build_info[] =
>>> +    "debug="
>>> +#ifdef CONFIG_DEBUG
>>> +    "y"
>>> +#else
>>> +    "n"
>>> +#endif
>>> +#ifdef CONFIG_COVERAGE
>>> +# ifdef __clang__
>>> +    " llvmcov=y"
>>> +# else
>>> +    " gcov=y"
>>> +# endif
>>> +#endif
>>> +#ifdef CONFIG_UBSAN
>>> +    " ubsan=y"
>>> +#endif
>>> +    "";
>>> +
>>> +const char *xen_build_info(void)
>>> +{
>>> +    return build_info;
>>> +}
>> ... do we really need a function here?
>>
>> Wouldn't an extern const char build_info[] do?
> It probably would, but I wanted things to remain consistent with
> the siblings, many of which also return string literals (or
> effectively plain numbers).

The only reason they are still functions is because there was an
argument over breaking the livepatch testing on older versions of Xen,
and I got bored arguing.

I, however, don't consider this a valid reason to block improvements.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 17:02:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 17:02:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58547.103082 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks7XN-0004WE-Nl; Wed, 23 Dec 2020 17:02:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58547.103082; Wed, 23 Dec 2020 17:02:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks7XN-0004W7-KT; Wed, 23 Dec 2020 17:02:57 +0000
Received: by outflank-mailman (input) for mailman id 58547;
 Wed, 23 Dec 2020 17:02:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zN8f=F3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ks7XN-0004W1-1i
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 17:02:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2f77494e-5296-4501-a7af-a66bb681f923;
 Wed, 23 Dec 2020 17:02:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 204E4ACF1;
 Wed, 23 Dec 2020 17:02:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2f77494e-5296-4501-a7af-a66bb681f923
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1608742975; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=5iKthy7ST9mJT5xxvlF5liehiIqTAy/z8Ubto55j11I=;
	b=exjdmMp0ncR+Ce4baxR+bewASF3w7WE7lk6rzOulWcZohG67CE+8UnZqONKi4ae/0uhc5l
	msZsuuah0oDuud6RHQB5pUAbATZrKRMoa9CcZ9Xvkve7kiijCE0fDYP+bX1NhbuGF7JgrB
	es4Qa7SdIGreiV0qy0Z5eA80JrpWeo0=
Subject: Re: [PATCH for-4.15 3/4] [RFC] xen/iommu: x86: Clear the root
 page-table before freeing the page-tables
To: Julien Grall <julien@xen.org>
Cc: hongyxia@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>
References: <20201222154338.9459-1-julien@xen.org>
 <20201222154338.9459-4-julien@xen.org>
 <499e6d5a-e8ac-56db-1af9-70469b6a06b9@suse.com>
 <8b394c44-5bdb-9d82-b211-5a4ee3473568@xen.org>
 <19e92d90-ed9a-4bd6-79f4-b761b5a039c6@suse.com>
 <96ce1b10-9764-b71e-ac26-982ba8dcc34d@xen.org>
 <092e5199-7eab-2722-7f0b-43fb3c8b2065@suse.com>
 <281188a0-f632-c0a1-4591-0a66ef0068f5@xen.org>
 <d7b866b6-118a-f873-f8df-eb112b708fe3@suse.com>
 <0699ad7a-7c3b-e1e8-c7f7-0bfb54d03c78@xen.org>
 <63091edf-a870-cac1-587a-59cb9d0f8d8d@suse.com>
 <6582c77e-114c-1ad7-0179-7e2e58a23745@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2f3b546c-c075-bfb1-a17b-b0c987bee682@suse.com>
Date: Wed, 23 Dec 2020 18:02:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <6582c77e-114c-1ad7-0179-7e2e58a23745@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.12.2020 17:54, Julien Grall wrote:
> 
> 
> On 23/12/2020 16:46, Jan Beulich wrote:
>> On 23.12.2020 17:29, Julien Grall wrote:
>>> On 23/12/2020 16:24, Jan Beulich wrote:
>>>> On 23.12.2020 17:16, Julien Grall wrote:
>>>>> On 23/12/2020 16:11, Jan Beulich wrote:
>>>>>> On 23.12.2020 16:16, Julien Grall wrote:
>>>>>>> On 23/12/2020 15:00, Jan Beulich wrote:
>>>>>>>> On 23.12.2020 15:56, Julien Grall wrote:
>>>>>>>>> On 23/12/2020 14:12, Jan Beulich wrote:
>>>>>>>>>> On 22.12.2020 16:43, Julien Grall wrote:
>>>>>>>>>>> This is an RFC because it would break AMD IOMMU driver. One option would
>>>>>>>>>>> be to move the call to the teardown callback earlier on. Any opinions?
>>>>
>>>> Please note this (in your original submission). I simply ...
>>>>
>>>>>>>>>> We already have
>>>>>>>>>>
>>>>>>>>>> static void amd_iommu_domain_destroy(struct domain *d)
>>>>>>>>>> {
>>>>>>>>>>          dom_iommu(d)->arch.amd.root_table = NULL;
>>>>>>>>>> }
>>>>>>>>>>
>>>>>>>>>> and this function is AMD's teardown handler. Hence I suppose
>>>>>>>>>> doing the same for VT-d would be quite reasonable. And really
>>>>>>>>>> VT-d's iommu_domain_teardown() also already has
>>>>>>>>>>
>>>>>>>>>>          hd->arch.vtd.pgd_maddr = 0;
>>>>>>>>>
>>>>>>>>> Let me have a look if that works.
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> I guess what's missing is prevention of the root table
>>>>>>>>>> getting re-setup.
>>>>>>>>>
>>>>>>>>> This is taken care of in the follow-up patch by forbidding page-table
>>>>>>>>> allocation. I can mention it in the commit message.
>>>>>>>>
>>>>>>>> My expectation is that with that subsequent change the change here
>>>>>>>> (or any variant of it) would become unnecessary.
>>>>>>>
>>>>>>> I am not sure. iommu_unmap() would still get called from put_page().
>>>>>>> Are you suggesting to gate the code on d->is_dying as well?
>>>>>>
>>>>>> Unmap shouldn't be allocating any memory right now, as in
>>>>>> non-shared-page-table mode we don't install any superpages
>>>>>> (unless I misremember).
>>>>>
>>>>> It doesn't allocate memory, but it will try to access the IOMMU
>>>>> page-tables (see more below).
>>>>>
>>>>>>
>>>>>>> Even if this patch is deemed unnecessary to fix the issue,
>>>>>>> this issue was quite hard to chase/reproduce.
>>>>>>>
>>>>>>> I think it would still be good to harden the code by zeroing
>>>>>>> hd->arch.vtd.pgd_maddr to avoid anyone else wasting 2 days because the
>>>>>>> pointer was still "valid".
>>>>>>
>>>>>> But my point was that this zeroing already happens.
>>>>>> What I
>>>>>> suspect is that it gets re-populated after it was zeroed,
>>>>>> because of page table manipulation that shouldn't be
>>>>>> occurring anymore for a dying domain.
>>>>>
>>>>> AFAICT, the zeroing is happening in ->teardown() helper.
>>>>>
>>>>> It is only called when the domain is fully destroyed (see call in
>>>>> arch_domain_destroy()). This will happen much after relinquishing the
>>>>> resources.
>>>>>
>>>>> Could you clarify why you think it is already zeroed and by who?
>>>>
>>>> ... trusted you on what you stated there. But perhaps I somehow
>>>> misunderstood that sentence to mean you want to put your addition
>>>> into the teardown functions, when apparently you meant to invoke
>>>> them earlier in the process. Without clearly identifying why this
>>>> would be a safe thing to do, I couldn't imagine that's what you
>>>> suggest as alternative.
>>>
>>> This was a wording issue. I meant moving ->teardown() before (or calling
>>> from) iommu_free_pgtables().
>>>
>>> Shall I introduce a new callback then?
>>
>> Earlier zeroing won't help unless you prevent re-population, or
>> unless you make the code capable of telling "still zero" from
>> "already zero". But I have to admit I'd like to also have Paul's
>> opinion on the matter.
> 
> Patch #4 is meant to prevent that with the d->is_dying check in the 
> IOMMU page-table allocation.
> 
> Do you think this is not enough?

It probably is; I think that other patch would want to come first
then, or both be folded. Nevertheless I'm not fully convinced
putting the check there is the best course of action.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 17:07:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 17:07:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58552.103094 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks7bk-0004jv-AW; Wed, 23 Dec 2020 17:07:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58552.103094; Wed, 23 Dec 2020 17:07:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks7bk-0004jo-6V; Wed, 23 Dec 2020 17:07:28 +0000
Received: by outflank-mailman (input) for mailman id 58552;
 Wed, 23 Dec 2020 17:07:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0ifz=F3=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ks7bj-0004jj-HH
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 17:07:27 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c75ce1f2-83aa-4962-8277-b325d403f43f;
 Wed, 23 Dec 2020 17:07:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c75ce1f2-83aa-4962-8277-b325d403f43f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608743246;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=XGtcxtIP85AkoJ1oo6T1oa1vfCR93ZORyNz7qwenv/I=;
  b=RBwRxHuZdjrdl7o5FXMdKWZpNlBERAcx/yxWtXMNbA3/7TtYrvOPPCGE
   Tqmdh/IdvYCoPvv/9ah/w3gB9BIZzE13JKNhC2NpIXGtaQQxH3jBRKuB+
   7VI8UBjtsy+e/dPIX9MEdyGPSXH77VXIR2R/Z0zc7c+jfhJE6XjdTGIXe
   Q=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 7fTt9hNMSvSsH9vBvGXPFqKIzKCBjfP8/SqEm4SzL9RNG7rcw3n6pU+BJehzvLHyEGZAw4qlIr
 /MoXRgKzYHmm+7QM30HsN0nVj+MHjWSj/iQRrjEbwQZOhfYUMPct9RgU8MGr/pwJ9oJ4j0XXeu
 Er3K+NwQ03+1cn7JaRQvolXErdUoJlSlnDWZ+uUti/vzBJftuUNufO/2R4ZQO44zDMG2DT84mx
 y7m3HXSaN3ZCd4VnjLgz+SkRU31ZGM1GR9V17KEMWa6jqm4VZzVtYCn0dbOCEIxBh8+AO9B4tA
 sic=
X-SBRS: 5.2
X-MesageID: 33866419
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,441,1599537600"; 
   d="scan'208";a="33866419"
Subject: Re: [PATCH] lib: drop debug_build()
To: Jan Beulich <jbeulich@suse.com>
CC: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <143333c9-154b-77c3-a66a-6b81696ecded@suse.com>
 <2575d75a-ce1d-c725-4f37-b7c9c10a2ecd@citrix.com>
 <266f673e-0158-13fe-9ea7-69a3c5dc2903@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <7129ab6a-df30-a995-5a2c-b35c094bb629@citrix.com>
Date: Wed, 23 Dec 2020 17:07:01 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <266f673e-0158-13fe-9ea7-69a3c5dc2903@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 22/12/2020 07:42, Jan Beulich wrote:
> On 21.12.2020 19:07, Andrew Cooper wrote:
>> On 21/12/2020 16:50, Jan Beulich wrote:
>>> Its expansion shouldn't be tied to NDEBUG - down the road we may want to
>>> allow enabling assertions independently of CONFIG_DEBUG.
>> I'm not sure I agree that we'll ever want to do this, but...
> Didn't you once say XenServer keeps (or kept) assertions enabled
> even in release builds?

The hypervisor RPM has two xen.gz's, a release and debug build from the
same source.

So yes - a debug hypervisor is available in a normal XenServer release,
but it's not assertions in a release build of Xen.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 17:26:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 17:26:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58556.103105 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks7uG-0006Xc-U7; Wed, 23 Dec 2020 17:26:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58556.103105; Wed, 23 Dec 2020 17:26:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks7uG-0006XV-RA; Wed, 23 Dec 2020 17:26:36 +0000
Received: by outflank-mailman (input) for mailman id 58556;
 Wed, 23 Dec 2020 17:26:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ks7uF-0006XA-Ph
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 17:26:35 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks7uF-0003UH-AI; Wed, 23 Dec 2020 17:26:35 +0000
Received: from gw1.octic.net ([81.187.162.82] helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ks7uE-0008WH-V7; Wed, 23 Dec 2020 17:26:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=RVjN2Xj0UI+Y2gdhRH5W34wQ//kli7CKgHmezDOMQIY=; b=mBu0mgIoIUq8zGjn6RhGDZmZED
	bqwiEQnt00LRuNfIYbMxDSGygeVoBWvlh+0MGkzn74qGh3SKmu+DllaCEDlFRACJUfa4GzNO8DWdJ
	/KEFXF6uRDzJ1r90eY04nWjSyrTM1oblJdzbEQK8byU3BTo6oh9r+ba22I/6UJj9OQsI=;
Subject: Re: [PATCH for-4.15 3/4] [RFC] xen/iommu: x86: Clear the root
 page-table before freeing the page-tables
To: Jan Beulich <jbeulich@suse.com>
Cc: hongyxia@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>
References: <20201222154338.9459-1-julien@xen.org>
 <20201222154338.9459-4-julien@xen.org>
 <499e6d5a-e8ac-56db-1af9-70469b6a06b9@suse.com>
 <8b394c44-5bdb-9d82-b211-5a4ee3473568@xen.org>
 <19e92d90-ed9a-4bd6-79f4-b761b5a039c6@suse.com>
 <96ce1b10-9764-b71e-ac26-982ba8dcc34d@xen.org>
 <092e5199-7eab-2722-7f0b-43fb3c8b2065@suse.com>
 <281188a0-f632-c0a1-4591-0a66ef0068f5@xen.org>
 <d7b866b6-118a-f873-f8df-eb112b708fe3@suse.com>
 <0699ad7a-7c3b-e1e8-c7f7-0bfb54d03c78@xen.org>
 <63091edf-a870-cac1-587a-59cb9d0f8d8d@suse.com>
 <6582c77e-114c-1ad7-0179-7e2e58a23745@xen.org>
 <2f3b546c-c075-bfb1-a17b-b0c987bee682@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <55be5b1e-20bb-bf59-6f46-be8eb456dfba@xen.org>
Date: Wed, 23 Dec 2020 17:26:32 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <2f3b546c-c075-bfb1-a17b-b0c987bee682@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 23/12/2020 17:02, Jan Beulich wrote:
> On 23.12.2020 17:54, Julien Grall wrote:
>>
>>
>> On 23/12/2020 16:46, Jan Beulich wrote:
>>> On 23.12.2020 17:29, Julien Grall wrote:
>>>> On 23/12/2020 16:24, Jan Beulich wrote:
>>>>> On 23.12.2020 17:16, Julien Grall wrote:
>>>>>> On 23/12/2020 16:11, Jan Beulich wrote:
>>>>>>> On 23.12.2020 16:16, Julien Grall wrote:
>>>>>>>> On 23/12/2020 15:00, Jan Beulich wrote:
>>>>>>>>> On 23.12.2020 15:56, Julien Grall wrote:
>>>>>>>>>> On 23/12/2020 14:12, Jan Beulich wrote:
>>>>>>>>>>> On 22.12.2020 16:43, Julien Grall wrote:
>>>>>>>>>>>> This is an RFC because it would break AMD IOMMU driver. One option would
>>>>>>>>>>>> be to move the call to the teardown callback earlier on. Any opinions?
>>>>>
>>>>> Please note this (in your original submission). I simply ...
>>>>>
>>>>>>>>>>> We already have
>>>>>>>>>>>
>>>>>>>>>>> static void amd_iommu_domain_destroy(struct domain *d)
>>>>>>>>>>> {
>>>>>>>>>>>           dom_iommu(d)->arch.amd.root_table = NULL;
>>>>>>>>>>> }
>>>>>>>>>>>
>>>>>>>>>>> and this function is AMD's teardown handler. Hence I suppose
>>>>>>>>>>> doing the same for VT-d would be quite reasonable. And really
>>>>>>>>>>> VT-d's iommu_domain_teardown() also already has
>>>>>>>>>>>
>>>>>>>>>>>           hd->arch.vtd.pgd_maddr = 0;
>>>>>>>>>>
>>>>>>>>>> Let me have a look if that works.
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> I guess what's missing is prevention of the root table
>>>>>>>>>>> getting re-setup.
>>>>>>>>>>
>>>>>>>>>> This is taken care of in the follow-up patch by forbidding page-table
>>>>>>>>>> allocation. I can mention it in the commit message.
>>>>>>>>>
>>>>>>>>> My expectation is that with that subsequent change the change here
>>>>>>>>> (or any variant of it) would become unnecessary.
>>>>>>>>
>>>>>>>> I am not sure. iommu_unmap() would still get called from put_page().
>>>>>>>> Are you suggesting to gate the code on d->is_dying as well?
>>>>>>>
>>>>>>> Unmap shouldn't be allocating any memory right now, as in
>>>>>>> non-shared-page-table mode we don't install any superpages
>>>>>>> (unless I misremember).
>>>>>>
>>>>>> It doesn't allocate memory, but it will try to access the IOMMU
>>>>>> page-tables (see more below).
>>>>>>
>>>>>>>
>>>>>>>> Even if this patch is deemed unnecessary to fix the issue,
>>>>>>>> this issue was quite hard to chase/reproduce.
>>>>>>>>
>>>>>>>> I think it would still be good to harden the code by zeroing
>>>>>>>> hd->arch.vtd.pgd_maddr to avoid anyone else wasting 2 days because the
>>>>>>>> pointer was still "valid".
>>>>>>>
>>>>>>> But my point was that this zeroing already happens.
>>>>>>> What I
>>>>>>> suspect is that it gets re-populated after it was zeroed,
>>>>>>> because of page table manipulation that shouldn't be
>>>>>>> occurring anymore for a dying domain.
>>>>>>
>>>>>> AFAICT, the zeroing is happening in ->teardown() helper.
>>>>>>
>>>>>> It is only called when the domain is fully destroyed (see call in
>>>>>> arch_domain_destroy()). This will happen much after relinquishing the
>>>>>> resources.
>>>>>>
>>>>>> Could you clarify why you think it is already zeroed and by who?
>>>>>
>>>>> ... trusted you on what you stated there. But perhaps I somehow
>>>>> misunderstood that sentence to mean you want to put your addition
>>>>> into the teardown functions, when apparently you meant to invoke
>>>>> them earlier in the process. Without clearly identifying why this
>>>>> would be a safe thing to do, I couldn't imagine that's what you
>>>>> suggest as alternative.
>>>>
>>>> This was a wording issue. I meant moving ->teardown() before (or calling
>>>> from) iommu_free_pgtables().
>>>>
>>>> Shall I introduce a new callback then?
>>>
>>> Earlier zeroing won't help unless you prevent re-population, or
>>> unless you make the code capable of telling "still zero" from
>>> "already zero". But I have to admit I'd like to also have Paul's
>>> opinion on the matter.
>>
>> Patch #4 is meant to prevent that with the d->is_dying check in the
>> IOMMU page-table allocation.
>>
>> Do you think this is not enough?
> 
> It probably is; I think that other patch would want to come first
> then, or both be folded. 

I would like to keep them separated. But I am happy to re-order them.

> Nevertheless I'm not fully convinced
> putting the check there is the best course of action.

As you pointed out in a previous e-mail, the IOMMU code is pretty hard 
to follow at times. The check in the allocator is quite simple, so I 
think it would be best to keep it.

It doesn't mean this should be the only change, but it will avoid a 
whole lot of potential issues if we missed any path that may touch the 
IOMMU page-tables while the domain is dying.

The other checks can just be shortcuts to prevent extra work that would
result in a failure.

I will wait for Paul's input before reworking the series.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 18:19:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 18:19:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58562.103118 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks8ii-0002a0-Sj; Wed, 23 Dec 2020 18:18:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58562.103118; Wed, 23 Dec 2020 18:18:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks8ii-0002Zt-PF; Wed, 23 Dec 2020 18:18:44 +0000
Received: by outflank-mailman (input) for mailman id 58562;
 Wed, 23 Dec 2020 18:18:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ks8ih-0002Zl-4m; Wed, 23 Dec 2020 18:18:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ks8ig-0004Nj-TQ; Wed, 23 Dec 2020 18:18:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ks8ig-0007nH-J6; Wed, 23 Dec 2020 18:18:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ks8ig-0003nT-Ig; Wed, 23 Dec 2020 18:18:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=33cT0KV/OcZyOpfsIrOApLU7OfEcSXXxrl6j+UiZE7Q=; b=CMI58sbAPk8/Ldzc4UaM98kOd6
	ukI0bSCuCfOp6kC2IMQ44xOQ452vqHMfPif6g4XDuK1OkLIArlG34ndGdtlcpe9/ij2FDTDFgN4rB
	b+h9PN7o5MNPpHWjGo0GxQH8G1XgVG5Ni+hcfknLy5HL4TE+rkrGHyeQUrjRH6vMMvBs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157847-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157847: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-arm64-arm64-libvirt-xsm:guest-start.2:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
X-Osstest-Versions-That:
    xen=98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 23 Dec 2020 18:18:42 +0000

flight 157847 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157847/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-libvirt-xsm 19 guest-start.2    fail in 157837 pass in 157847
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 157837 pass in 157847
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10     fail pass in 157837
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 157837
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 157837

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157837
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157837
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157837
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157837
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157837
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157837
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157837
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157837
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157837
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157837
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157837
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
baseline version:
 xen                  98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc

Last test of basis   157847  2020-12-23 07:13:32 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Wed Dec 23 19:07:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 19:07:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58571.103139 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks9Ts-0006yD-Ol; Wed, 23 Dec 2020 19:07:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58571.103139; Wed, 23 Dec 2020 19:07:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ks9Ts-0006y6-LK; Wed, 23 Dec 2020 19:07:28 +0000
Received: by outflank-mailman (input) for mailman id 58571;
 Wed, 23 Dec 2020 19:07:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ai6m=F3=casper.srs.infradead.org=batv+d9b30f47e96137ac1d9c+6331+infradead.org+dwmw2@srs-us1.protection.inumbo.net>)
 id 1ks9Tq-0006y1-Mi
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 19:07:27 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d18d54ce-0eb9-47d7-a62f-93521a038425;
 Wed, 23 Dec 2020 19:07:21 +0000 (UTC)
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=mel50-nvr-2-1.e-sec.corp.amazon.com)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1ks9TX-00056k-Jq; Wed, 23 Dec 2020 19:07:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d18d54ce-0eb9-47d7-a62f-93521a038425
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Mime-Version:Content-Type:References:
	In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=ULiIcBqM2lzV46VlTbK7O4z41W6gl+ZzAZ2L2txAK1g=; b=p8UFEqXHvPJS/1XNeNrPF8sTq3
	mx/xSW7Ed2FooQgqd+TogiKWbf3capa7f0JvWnKbunpvzSlcZFclq2xqu9vwLWkMjshjnYemGHhxw
	+LAGh3HUtdLPU9wCn0QUlVGif8ElvVcVMvfNVv+jwaYtt3jkwM8WBYV5mWOWeTTnFulpTsaCvC6a5
	aMUj5Ww8Q88xLbP2oFdg1mGl+F/Wgcxf7nnxDknzVlfGSyc9B43NSerTaCq3bYwtJD/y6TVNQDchf
	9pI7XCkUv4baP4bm7SedqkxdKhqGHWrL4h0hq4Mh/i9g8rujQM1Z/Q4Scj6hfS7/ae00jLFPS1cdq
	Dxx/maUw==;
Message-ID: <7875df85d09f32c005d76452c2b5190b1c7682aa.camel@infradead.org>
Subject: Re: [PATCH] xen: Fix event channel callback via INTX/GSI
From: David Woodhouse <dwmw2@infradead.org>
To: boris.ostrovsky@oracle.com, "x86@kernel.org" <x86@kernel.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Juergen Gross
 <jgross@suse.com>,  Paul Durrant <pdurrant@amazon.com>, jgrall@amazon.com,
 karahmed@amazon.de, xen-devel <xen-devel@lists.xenproject.org>
Date: Wed, 23 Dec 2020 19:07:06 +0000
In-Reply-To: <5fa6ba65-2420-8b79-fd20-299166651f0c@oracle.com>
References: <5ba658b2d8a2bce63622f5bb8ef8d5e6114276eb.camel@infradead.org>
	 <6b6544ac-06b3-2525-aed9-39015715f71d@oracle.com>
	 <a02cb64ba5680c0f2076da714d06b8704e3411c2.camel@infradead.org>
	 <5fa6ba65-2420-8b79-fd20-299166651f0c@oracle.com>
Content-Type: multipart/signed; micalg="sha-256";
	protocol="application/x-pkcs7-signature";
	boundary="=-eNIw0gXrR000q9JvJlcn"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html


--=-eNIw0gXrR000q9JvJlcn
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Sun, 2020-12-20 at 13:01 -0500, boris.ostrovsky@oracle.com wrote:
> On 12/19/20 3:05 AM, David Woodhouse wrote:
> > On Fri, 2020-12-18 at 17:20 -0500, boris.ostrovsky@oracle.com wrote:
> > > Are there other cases where this would be useful? If it's just to
> > > test a hypervisor under development I would think that people who are
> > > doing that kind of work are capable of building their own kernel. My
> > > concern is mostly about having yet another boot option that is of
> > > interest to very few people who can easily work around not having it.
> >
> > For hypervisor testing we can just set the Xen major version number in
> > CPUID to 3, and that stops xs_reset_watches() from doing anything.
> >
> > cf. https://lkml.org/lkml/2017/4/10/266
> >
> > Karim ripped out all this INTX code in 2017 because it was broken, and
> > subsequently put it back because it *was* working for older versions of
> > Xen, due to that "coincidence". The conclusion back then was that if it
> > was put back it should at least *work* consistently, and he was going
> > to send a patch "shortly". This is that patch :)
>
>
> Right, I am not arguing about usefulness of the fix, only of the new boot option.

The boot option also makes it easier for Linux developers to actually
test this.

It's all very well for someone who already has the hypervisor code open
in an emacs buffer, who can turn off vector support and then run the
canned test case which spins up that hypervisor and runs Linux nested
inside it. But most people don't live like that, and a command-line
option that lets Linux exercise these code paths in a normal Xen
environment is kind of useful.


> > > Can we delay xs_init() for !XS_HVM as well? In other words wait until
> > > after PCI platform device has been probed (on HVM) and then call
> > > xs_init() for everyone.
> >
> > We're half-way there already, because xenbus_probe() *does* happen
> > later as a device_initcall, and I've just made it call xs_init().
> >
> > We could make it avoid calling xs_init() from xenbus_init() in the
> > XS_HVM *and* XS_PV cases fairly easily, and let xenbus_probe() do it.
>
>
> Yes, that's along the lines of what I was thinking.
>
>
> >
> > But right now xenbus_probe() doesn't run for the other cases, so
> > there'd have to be a mode where it *only* calls xs_init() and doesn't
> > do the notifier chain. That seems like more churn than was needed, so I
> > didn't do it.
>
>
> You think so? Yes, there would be a couple more places where you'd
> need to call xenbus_probe() but then you won't have to explain (with
> comments) why you call xs_init() here and not there and vice versa.
> It just looks to me a bit more complicated the way you do this but I
> suppose it's a matter of personal preference.

I suspect it probably just moves things around and leaves *different*
things to explain (with comments!).

I'll split it out into separate patches (and also fix up the fact that
we're registering the IPI and spinlock event channels even though we're
not going to use them).

--=-eNIw0gXrR000q9JvJlcn
Content-Type: application/x-pkcs7-signature; name="smime.p7s"
Content-Disposition: attachment; filename="smime.p7s"
Content-Transfer-Encoding: base64

MIAGCSqGSIb3DQEHAqCAMIACAQExDzANBglghkgBZQMEAgEFADCABgkqhkiG9w0BBwEAAKCCECow
ggUcMIIEBKADAgECAhEA4rtJSHkq7AnpxKUY8ZlYZjANBgkqhkiG9w0BAQsFADCBlzELMAkGA1UE
BhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UEBxMHU2FsZm9yZDEaMBgG
A1UEChMRQ09NT0RPIENBIExpbWl0ZWQxPTA7BgNVBAMTNENPTU9ETyBSU0EgQ2xpZW50IEF1dGhl
bnRpY2F0aW9uIGFuZCBTZWN1cmUgRW1haWwgQ0EwHhcNMTkwMTAyMDAwMDAwWhcNMjIwMTAxMjM1
OTU5WjAkMSIwIAYJKoZIhvcNAQkBFhNkd213MkBpbmZyYWRlYWQub3JnMIIBIjANBgkqhkiG9w0B
AQEFAAOCAQ8AMIIBCgKCAQEAsv3wObLTCbUA7GJqKj9vHGf+Fa+tpkO+ZRVve9EpNsMsfXhvFpb8
RgL8vD+L133wK6csYoDU7zKiAo92FMUWaY1Hy6HqvVr9oevfTV3xhB5rQO1RHJoAfkvhy+wpjo7Q
cXuzkOpibq2YurVStHAiGqAOMGMXhcVGqPuGhcVcVzVUjsvEzAV9Po9K2rpZ52FE4rDkpDK1pBK+
uOAyOkgIg/cD8Kugav5tyapydeWMZRJQH1vMQ6OVT24CyAn2yXm2NgTQMS1mpzStP2ioPtTnszIQ
Ih7ASVzhV6csHb8Yrkx8mgllOyrt9Y2kWRRJFm/FPRNEurOeNV6lnYAXOymVJwIDAQABo4IB0zCC
Ac8wHwYDVR0jBBgwFoAUgq9sjPjF/pZhfOgfPStxSF7Ei8AwHQYDVR0OBBYEFLfuNf820LvaT4AK
xrGK3EKx1DE7MA4GA1UdDwEB/wQEAwIFoDAMBgNVHRMBAf8EAjAAMB0GA1UdJQQWMBQGCCsGAQUF
BwMEBggrBgEFBQcDAjBGBgNVHSAEPzA9MDsGDCsGAQQBsjEBAgEDBTArMCkGCCsGAQUFBwIBFh1o
dHRwczovL3NlY3VyZS5jb21vZG8ubmV0L0NQUzBaBgNVHR8EUzBRME+gTaBLhklodHRwOi8vY3Js
LmNvbW9kb2NhLmNvbS9DT01PRE9SU0FDbGllbnRBdXRoZW50aWNhdGlvbmFuZFNlY3VyZUVtYWls
Q0EuY3JsMIGLBggrBgEFBQcBAQR/MH0wVQYIKwYBBQUHMAKGSWh0dHA6Ly9jcnQuY29tb2RvY2Eu
Y29tL0NPTU9ET1JTQUNsaWVudEF1dGhlbnRpY2F0aW9uYW5kU2VjdXJlRW1haWxDQS5jcnQwJAYI
KwYBBQUHMAGGGGh0dHA6Ly9vY3NwLmNvbW9kb2NhLmNvbTAeBgNVHREEFzAVgRNkd213MkBpbmZy
YWRlYWQub3JnMA0GCSqGSIb3DQEBCwUAA4IBAQALbSykFusvvVkSIWttcEeifOGGKs7Wx2f5f45b
nv2ghcxK5URjUvCnJhg+soxOMoQLG6+nbhzzb2rLTdRVGbvjZH0fOOzq0LShq0EXsqnJbbuwJhK+
PnBtqX5O23PMHutP1l88AtVN+Rb72oSvnD+dK6708JqqUx2MAFLMevrhJRXLjKb2Mm+/8XBpEw+B
7DisN4TMlLB/d55WnT9UPNHmQ+3KFL7QrTO8hYExkU849g58Dn3Nw3oCbMUgny81ocrLlB2Z5fFG
Qu1AdNiBA+kg/UxzyJZpFbKfCITd5yX49bOriL692aMVDyqUvh8fP+T99PqorH4cIJP6OxSTdxKM
MIIFHDCCBASgAwIBAgIRAOK7SUh5KuwJ6cSlGPGZWGYwDQYJKoZIhvcNAQELBQAwgZcxCzAJBgNV
BAYTAkdCMRswGQYDVQQIExJHcmVhdGVyIE1hbmNoZXN0ZXIxEDAOBgNVBAcTB1NhbGZvcmQxGjAY
BgNVBAoTEUNPTU9ETyBDQSBMaW1pdGVkMT0wOwYDVQQDEzRDT01PRE8gUlNBIENsaWVudCBBdXRo
ZW50aWNhdGlvbiBhbmQgU2VjdXJlIEVtYWlsIENBMB4XDTE5MDEwMjAwMDAwMFoXDTIyMDEwMTIz
NTk1OVowJDEiMCAGCSqGSIb3DQEJARYTZHdtdzJAaW5mcmFkZWFkLm9yZzCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBALL98Dmy0wm1AOxiaio/bxxn/hWvraZDvmUVb3vRKTbDLH14bxaW
/EYC/Lw/i9d98CunLGKA1O8yogKPdhTFFmmNR8uh6r1a/aHr301d8YQea0DtURyaAH5L4cvsKY6O
0HF7s5DqYm6tmLq1UrRwIhqgDjBjF4XFRqj7hoXFXFc1VI7LxMwFfT6PStq6WedhROKw5KQytaQS
vrjgMjpICIP3A/CroGr+bcmqcnXljGUSUB9bzEOjlU9uAsgJ9sl5tjYE0DEtZqc0rT9oqD7U57My
ECIewElc4VenLB2/GK5MfJoJZTsq7fWNpFkUSRZvxT0TRLqznjVepZ2AFzsplScCAwEAAaOCAdMw
ggHPMB8GA1UdIwQYMBaAFIKvbIz4xf6WYXzoHz0rcUhexIvAMB0GA1UdDgQWBBS37jX/NtC72k+A
CsaxitxCsdQxOzAOBgNVHQ8BAf8EBAMCBaAwDAYDVR0TAQH/BAIwADAdBgNVHSUEFjAUBggrBgEF
BQcDBAYIKwYBBQUHAwIwRgYDVR0gBD8wPTA7BgwrBgEEAbIxAQIBAwUwKzApBggrBgEFBQcCARYd
aHR0cHM6Ly9zZWN1cmUuY29tb2RvLm5ldC9DUFMwWgYDVR0fBFMwUTBPoE2gS4ZJaHR0cDovL2Ny
bC5jb21vZG9jYS5jb20vQ09NT0RPUlNBQ2xpZW50QXV0aGVudGljYXRpb25hbmRTZWN1cmVFbWFp
bENBLmNybDCBiwYIKwYBBQUHAQEEfzB9MFUGCCsGAQUFBzAChklodHRwOi8vY3J0LmNvbW9kb2Nh
LmNvbS9DT01PRE9SU0FDbGllbnRBdXRoZW50aWNhdGlvbmFuZFNlY3VyZUVtYWlsQ0EuY3J0MCQG
CCsGAQUFBzABhhhodHRwOi8vb2NzcC5jb21vZG9jYS5jb20wHgYDVR0RBBcwFYETZHdtdzJAaW5m
cmFkZWFkLm9yZzANBgkqhkiG9w0BAQsFAAOCAQEAC20spBbrL71ZEiFrbXBHonzhhirO1sdn+X+O
W579oIXMSuVEY1LwpyYYPrKMTjKECxuvp24c829qy03UVRm742R9Hzjs6tC0oatBF7KpyW27sCYS
vj5wbal+TttzzB7rT9ZfPALVTfkW+9qEr5w/nSuu9PCaqlMdjABSzHr64SUVy4ym9jJvv/FwaRMP
gew4rDeEzJSwf3eeVp0/VDzR5kPtyhS+0K0zvIWBMZFPOPYOfA59zcN6AmzFIJ8vNaHKy5QdmeXx
RkLtQHTYgQPpIP1Mc8iWaRWynwiE3ecl+PWzq4i+vdmjFQ8qlL4fHz/k/fT6qKx+HCCT+jsUk3cS
jDCCBeYwggPOoAMCAQICEGqb4Tg7/ytrnwHV2binUlYwDQYJKoZIhvcNAQEMBQAwgYUxCzAJBgNV
BAYTAkdCMRswGQYDVQQIExJHcmVhdGVyIE1hbmNoZXN0ZXIxEDAOBgNVBAcTB1NhbGZvcmQxGjAY
BgNVBAoTEUNPTU9ETyBDQSBMaW1pdGVkMSswKQYDVQQDEyJDT01PRE8gUlNBIENlcnRpZmljYXRp
b24gQXV0aG9yaXR5MB4XDTEzMDExMDAwMDAwMFoXDTI4MDEwOTIzNTk1OVowgZcxCzAJBgNVBAYT
AkdCMRswGQYDVQQIExJHcmVhdGVyIE1hbmNoZXN0ZXIxEDAOBgNVBAcTB1NhbGZvcmQxGjAYBgNV
BAoTEUNPTU9ETyBDQSBMaW1pdGVkMT0wOwYDVQQDEzRDT01PRE8gUlNBIENsaWVudCBBdXRoZW50
aWNhdGlvbiBhbmQgU2VjdXJlIEVtYWlsIENBMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC
AQEAvrOeV6wodnVAFsc4A5jTxhh2IVDzJXkLTLWg0X06WD6cpzEup/Y0dtmEatrQPTRI5Or1u6zf
+bGBSyD9aH95dDSmeny1nxdlYCeXIoymMv6pQHJGNcIDpFDIMypVpVSRsivlJTRENf+RKwrB6vcf
WlP8dSsE3Rfywq09N0ZfxcBa39V0wsGtkGWC+eQKiz4pBZYKjrc5NOpG9qrxpZxyb4o4yNNwTqza
aPpGRqXB7IMjtf7tTmU2jqPMLxFNe1VXj9XB1rHvbRikw8lBoNoSWY66nJN/VCJv5ym6Q0mdCbDK
CMPybTjoNCQuelc0IAaO4nLUXk0BOSxSxt8kCvsUtQIDAQABo4IBPDCCATgwHwYDVR0jBBgwFoAU
u69+Aj36pvE8hI6t7jiY7NkyMtQwHQYDVR0OBBYEFIKvbIz4xf6WYXzoHz0rcUhexIvAMA4GA1Ud
DwEB/wQEAwIBhjASBgNVHRMBAf8ECDAGAQH/AgEAMBEGA1UdIAQKMAgwBgYEVR0gADBMBgNVHR8E
RTBDMEGgP6A9hjtodHRwOi8vY3JsLmNvbW9kb2NhLmNvbS9DT01PRE9SU0FDZXJ0aWZpY2F0aW9u
QXV0aG9yaXR5LmNybDBxBggrBgEFBQcBAQRlMGMwOwYIKwYBBQUHMAKGL2h0dHA6Ly9jcnQuY29t
b2RvY2EuY29tL0NPTU9ET1JTQUFkZFRydXN0Q0EuY3J0MCQGCCsGAQUFBzABhhhodHRwOi8vb2Nz
cC5jb21vZG9jYS5jb20wDQYJKoZIhvcNAQEMBQADggIBAHhcsoEoNE887l9Wzp+XVuyPomsX9vP2
SQgG1NgvNc3fQP7TcePo7EIMERoh42awGGsma65u/ITse2hKZHzT0CBxhuhb6txM1n/y78e/4ZOs
0j8CGpfb+SJA3GaBQ+394k+z3ZByWPQedXLL1OdK8aRINTsjk/H5Ns77zwbjOKkDamxlpZ4TKSDM
KVmU/PUWNMKSTvtlenlxBhh7ETrN543j/Q6qqgCWgWuMAXijnRglp9fyadqGOncjZjaaSOGTTFB+
E2pvOUtY+hPebuPtTbq7vODqzCM6ryEhNhzf+enm0zlpXK7q332nXttNtjv7VFNYG+I31gnMrwfH
M5tdhYF/8v5UY5g2xANPECTQdu9vWPoqNSGDt87b3gXb1AiGGaI06vzgkejL580ul+9hz9D0S0U4
jkhJiA7EuTecP/CFtR72uYRBcunwwH3fciPjviDDAI9SnC/2aPY8ydehzuZutLbZdRJ5PDEJM/1t
yZR2niOYihZ+FCbtf3D9mB12D4ln9icgc7CwaxpNSCPt8i/GqK2HsOgkL3VYnwtx7cJUmpvVdZ4o
gnzgXtgtdk3ShrtOS1iAN2ZBXFiRmjVzmehoMof06r1xub+85hFQzVxZx5/bRaTKTlL8YXLI8nAb
R9HWdFqzcOoB/hxfEyIQpx9/s81rgzdEZOofSlZHynoSMYIDyjCCA8YCAQEwga0wgZcxCzAJBgNV
BAYTAkdCMRswGQYDVQQIExJHcmVhdGVyIE1hbmNoZXN0ZXIxEDAOBgNVBAcTB1NhbGZvcmQxGjAY
BgNVBAoTEUNPTU9ETyBDQSBMaW1pdGVkMT0wOwYDVQQDEzRDT01PRE8gUlNBIENsaWVudCBBdXRo
ZW50aWNhdGlvbiBhbmQgU2VjdXJlIEVtYWlsIENBAhEA4rtJSHkq7AnpxKUY8ZlYZjANBglghkgB
ZQMEAgEFAKCCAe0wGAYJKoZIhvcNAQkDMQsGCSqGSIb3DQEHATAcBgkqhkiG9w0BCQUxDxcNMjAx
MjIzMTkwNzA2WjAvBgkqhkiG9w0BCQQxIgQg+qa0JbtVp1oUL3d+oQ/NbtVOnz7NTbtwAvXtDZca
9kowgb4GCSsGAQQBgjcQBDGBsDCBrTCBlzELMAkGA1UEBhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIg
TWFuY2hlc3RlcjEQMA4GA1UEBxMHU2FsZm9yZDEaMBgGA1UEChMRQ09NT0RPIENBIExpbWl0ZWQx
PTA7BgNVBAMTNENPTU9ETyBSU0EgQ2xpZW50IEF1dGhlbnRpY2F0aW9uIGFuZCBTZWN1cmUgRW1h
aWwgQ0ECEQDiu0lIeSrsCenEpRjxmVhmMIHABgsqhkiG9w0BCRACCzGBsKCBrTCBlzELMAkGA1UE
BhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UEBxMHU2FsZm9yZDEaMBgG
A1UEChMRQ09NT0RPIENBIExpbWl0ZWQxPTA7BgNVBAMTNENPTU9ETyBSU0EgQ2xpZW50IEF1dGhl
bnRpY2F0aW9uIGFuZCBTZWN1cmUgRW1haWwgQ0ECEQDiu0lIeSrsCenEpRjxmVhmMA0GCSqGSIb3
DQEBAQUABIIBAFNPwYP34tdxzNG4RlUCwLsrQrqLIOSP4sucRMGsfQXXuOhrNBO2FB/BEP6YHmmG
jwSsXuvf5LzA47gVXQD05rMuDGdODiZf+g0Rhfwot3X1uld9xWSbgAAxYaA0KoM6K463ZMlKgRYl
b4TfDg+2YDAXqVV4NM1VYPlyLwjSlUfVqCkNrQBpc5LAzhdVm41Hy7DUbZbWyPYTkYT6Wi/6AzCZ
JvONag2xUtuzFZ9txnoIP0UEdbodJYnfaAC/1ggjIw7sQMfSfqGLi/11IAKw2svkzm+OhtXzv5wf
F8QJ+H76+F2T+1uMuKgPVmN0UMinrkeWFxTmZDBleKr7M1zywgoAAAAAAAA=


--=-eNIw0gXrR000q9JvJlcn--



From xen-devel-bounces@lists.xenproject.org Wed Dec 23 19:46:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 19:46:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58575.103151 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksA5B-00020k-OL; Wed, 23 Dec 2020 19:46:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58575.103151; Wed, 23 Dec 2020 19:46:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksA5B-00020d-L8; Wed, 23 Dec 2020 19:46:01 +0000
Received: by outflank-mailman (input) for mailman id 58575;
 Wed, 23 Dec 2020 19:46:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4sWS=F3=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ksA5A-00020Y-BP
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 19:46:00 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 98a17feb-b208-4c64-a499-53cc565320b2;
 Wed, 23 Dec 2020 19:45:59 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 073ED223E0;
 Wed, 23 Dec 2020 19:45:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98a17feb-b208-4c64-a499-53cc565320b2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1608752758;
	bh=1NBYXDgUkzTpKW+ODJ7wVYF3slivVDEp19EAREGs5h8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=poCpyLvLRzM3mTuDl3S/hLRkUYOegtx+pKzbhxo2SsFWVvtoEDDOG2ynri36YnakT
	 VJCiZuNPYmKqauglbSBb+aCrBNX+s9qxWxvKuG9RAOWA7T061WFpKI+wT3j4/T57dW
	 aY9u3X7EuWaTfWHbSCdlP8vRlQfV4/tdPK6d6lvON0WfqIoRcsqgwVrSnqvgtkME/5
	 6PIXs1GRFSSo0D0BBiu7H/lO64dk/i+k2w6m998nde1wET7NvTOZICpPDhbhKl7rRP
	 kdpsevVqIfB3vbKEUD/TwU6LJ7sgh981BBPyRQG79uZZwQT1AaIVZtH78jAxR9Zq5b
	 EpfeKe7p3AjTA==
Date: Wed, 23 Dec 2020 11:45:57 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: xen-devel@lists.xenproject.org
cc: famzheng@amazon.com, sstabellini@kernel.org, cardoe@cardoe.com, wl@xen.org, 
    Bertrand.Marquis@arm.com, julien@xen.org, andrew.cooper3@citrix.com
Subject: Re: [PATCH 0/4] xen: domain-tracked allocations, and fault
 injection
In-Reply-To: <160874604800.15699.17952392608790984041@600e7e483b3a>
Message-ID: <alpine.DEB.2.21.2012231143430.4040@sstabellini-ThinkPad-T480s>
References: <160874604800.15699.17952392608790984041@600e7e483b3a>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 23 Dec 2020, no-reply@patchew.org wrote:
> Hi,
> 
> Patchew automatically ran gitlab-ci pipeline with this patch (series) applied, but the job failed. Maybe there's a bug in the patches?
> 
> You can find the link to the pipeline near the end of the report below:
> 
> Type: series
> Message-id: 20201223163442.8840-1-andrew.cooper3@citrix.com
> Subject: [PATCH 0/4] xen: domain-tracked allocations, and fault injection
> 
> === TEST SCRIPT BEGIN ===
> #!/bin/bash
> sleep 10
> patchew gitlab-pipeline-check -p xen-project/patchew/xen
> === TEST SCRIPT END ===

[...]

> === OUTPUT BEGIN ===
> [2020-12-23 16:38:43] Looking up pipeline...
> [2020-12-23 16:38:43] Found pipeline 233889763:
> 
> https://gitlab.com/xen-project/patchew/xen/-/pipelines/233889763

This seems to be a genuine failure. Looking at the alpine-3.12-gcc-arm64
build test, the build error is appended below. This is a link to the
failed job: https://gitlab.com/xen-project/patchew/xen/-/jobs/929842628



gcc  -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -O2 -fomit-frame-pointer -D__XEN_INTERFACE_VERSION__=__XEN_LATEST_INTERFACE_VERSION__ -MMD -MP -MF .xen-diag.o.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE   -Werror -include /builds/xen-project/patchew/xen/tools/misc/../../tools/config.h -I/builds/xen-project/patchew/xen/tools/misc/../../tools/include -I/builds/xen-project/patchew/xen/tools/misc/../../tools/include -D__XEN_TOOLS__ -I/builds/xen-project/patchew/xen/tools/misc/../../tools/include -I/builds/xen-project/patchew/xen/tools/misc/../../tools/include -I/builds/xen-project/patchew/xen/tools/misc/../../tools/include -Wno-declaration-after-statement  -c -o xen-diag.o xen-diag.c 
xen-fault-ttl.c: In function 'main':
xen-fault-ttl.c:25:14: error: 'struct xen_arch_domainconfig' has no member named 'emulation_flags'
   25 |             .emulation_flags = XEN_X86_EMU_LAPIC,
      |              ^~~~~~~~~~~~~~~
xen-fault-ttl.c:25:32: error: 'XEN_X86_EMU_LAPIC' undeclared (first use in this function)
   25 |             .emulation_flags = XEN_X86_EMU_LAPIC,
      |                                ^~~~~~~~~~~~~~~~~
xen-fault-ttl.c:25:32: note: each undeclared identifier is reported only once for each function it appears in
make[4]: *** [/builds/xen-project/patchew/xen/tools/misc/../../tools/Rules.mk:144: xen-fault-ttl.o] Error 1
make[4]: *** Waiting for unfinished jobs....
make[4]: Leaving directory '/builds/xen-project/patchew/xen/tools/misc'
make[3]: *** [/builds/xen-project/patchew/xen/tools/../tools/Rules.mk:160: subdir-install-misc] Error 2
make[3]: Leaving directory '/builds/xen-project/patchew/xen/tools'
make[2]: *** [/builds/xen-project/patchew/xen/tools/../tools/Rules.mk:155: subdirs-install] Error 2
make[2]: Leaving directory '/builds/xen-project/patchew/xen/tools'
make[1]: *** [Makefile:67: install] Error 2
make[1]: Leaving directory '/builds/xen-project/patchew/xen/tools'
make: *** [Makefile:134: install-tools] Error 2


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 20:02:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 20:02:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58579.103163 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksAKm-0003rU-47; Wed, 23 Dec 2020 20:02:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58579.103163; Wed, 23 Dec 2020 20:02:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksAKm-0003rN-0d; Wed, 23 Dec 2020 20:02:08 +0000
Received: by outflank-mailman (input) for mailman id 58579;
 Wed, 23 Dec 2020 20:02:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0ifz=F3=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ksAKk-0003rI-83
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 20:02:06 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b16e155c-e878-4542-a07b-ae1084403632;
 Wed, 23 Dec 2020 20:02:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b16e155c-e878-4542-a07b-ae1084403632
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608753724;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=1NQJP2yjkA5OBuqO488FhSLIbi0QX49BKjdBy4ZvsWE=;
  b=FMdk084+CdnPKpNex/N7iaOe6XUbxqraHhfcW4HyfIo8KcFIOP6iNy5h
   BofcmS3l2ruIlIoPn0Tz/nLmbbgYfjx0CCld0oZNKmn9UMEmBNn6VUyzs
   dDoSoYiZrDzOpCB1tFLqqsPKvXHNi6qCNwbClCZzBfTI/eQhBgo8glBae
   M=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Z5DF5B1BQ2hQBF1AODj1EFeiTiO6Mx216fUWfCRAITHppEx24Nff34L0xQbdn4iAH1u+OS4hk9
 b1ULZU+ALcK7NIcQISOInJ91nhzIkzL/TSBak32ecACzARL9ZCUPh7b9CFcM1glOlMflXbvDg2
 FG456FoaeCwF09L8dOknqaYURNsLGXMrhK7x+ZI6ob5MV7Vd4ARB8J1Gr5GSTA8mZ6nNaUfuq8
 zzJ9NgIOyp0QEranQlglcuMqN1VnaEAlPlxZTZgNFOy/CHY1JQg0zabAss/kQDPO6MG7TeyfkM
 O/s=
X-SBRS: 5.2
X-MesageID: 33865262
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,442,1599537600"; 
   d="scan'208";a="33865262"
Subject: Re: [PATCH 0/4] xen: domain-tracked allocations, and fault injection
To: Stefano Stabellini <sstabellini@kernel.org>,
	<xen-devel@lists.xenproject.org>
CC: <famzheng@amazon.com>, <cardoe@cardoe.com>, <wl@xen.org>,
	<Bertrand.Marquis@arm.com>, <julien@xen.org>
References: <160874604800.15699.17952392608790984041@600e7e483b3a>
 <alpine.DEB.2.21.2012231143430.4040@sstabellini-ThinkPad-T480s>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e7ad4670-7e7e-99f3-1800-b097b6a1695f@citrix.com>
Date: Wed, 23 Dec 2020 20:01:58 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2012231143430.4040@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 23/12/2020 19:45, Stefano Stabellini wrote:
> On Wed, 23 Dec 2020, no-reply@patchew.org wrote:
>> Hi,
>>
>> Patchew automatically ran gitlab-ci pipeline with this patch (series) applied, but the job failed. Maybe there's a bug in the patches?
>>
>> You can find the link to the pipeline near the end of the report below:
>>
>> Type: series
>> Message-id: 20201223163442.8840-1-andrew.cooper3@citrix.com
>> Subject: [PATCH 0/4] xen: domain-tracked allocations, and fault injection
>>
>> === TEST SCRIPT BEGIN ===
>> #!/bin/bash
>> sleep 10
>> patchew gitlab-pipeline-check -p xen-project/patchew/xen
>> === TEST SCRIPT END ===
> [...]
>
>> === OUTPUT BEGIN ===
>> [2020-12-23 16:38:43] Looking up pipeline...
>> [2020-12-23 16:38:43] Found pipeline 233889763:
>>
>> https://gitlab.com/xen-project/patchew/xen/-/pipelines/233889763
> This seems to be a genuine failure. Looking at the alpine-3.12-gcc-arm64
> build test, the build error is appended below. This is a link to the
> failed job: https://gitlab.com/xen-project/patchew/xen/-/jobs/929842628
>
>
>
> gcc  -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -O2 -fomit-frame-pointer -D__XEN_INTERFACE_VERSION__=__XEN_LATEST_INTERFACE_VERSION__ -MMD -MP -MF .xen-diag.o.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE   -Werror -include /builds/xen-project/patchew/xen/tools/misc/../../tools/config.h -I/builds/xen-project/patchew/xen/tools/misc/../../tools/include -I/builds/xen-project/patchew/xen/tools/misc/../../tools/include -D__XEN_TOOLS__ -I/builds/xen-project/patchew/xen/tools/misc/../../tools/include -I/builds/xen-project/patchew/xen/tools/misc/../../tools/include -I/builds/xen-project/patchew/xen/tools/misc/../../tools/include -Wno-declaration-after-statement  -c -o xen-diag.o xen-diag.c 
> xen-fault-ttl.c: In function 'main':
> xen-fault-ttl.c:25:14: error: 'struct xen_arch_domainconfig' has no member named 'emulation_flags'
>    25 |             .emulation_flags = XEN_X86_EMU_LAPIC,
>       |              ^~~~~~~~~~~~~~~
> xen-fault-ttl.c:25:32: error: 'XEN_X86_EMU_LAPIC' undeclared (first use in this function)
>    25 |             .emulation_flags = XEN_X86_EMU_LAPIC,
>       |                                ^~~~~~~~~~~~~~~~~
> xen-fault-ttl.c:25:32: note: each undeclared identifier is reported only once for each function it appears in
> make[4]: *** [/builds/xen-project/patchew/xen/tools/misc/../../tools/Rules.mk:144: xen-fault-ttl.o] Error 1
> make[4]: *** Waiting for unfinished jobs....
> make[4]: Leaving directory '/builds/xen-project/patchew/xen/tools/misc'
> make[3]: *** [/builds/xen-project/patchew/xen/tools/../tools/Rules.mk:160: subdir-install-misc] Error 2
> make[3]: Leaving directory '/builds/xen-project/patchew/xen/tools'
> make[2]: *** [/builds/xen-project/patchew/xen/tools/../tools/Rules.mk:155: subdirs-install] Error 2
> make[2]: Leaving directory '/builds/xen-project/patchew/xen/tools'
> make[1]: *** [Makefile:67: install] Error 2
> make[1]: Leaving directory '/builds/xen-project/patchew/xen/tools'
> make: *** [Makefile:134: install-tools] Error 2

Yeah - that is a real failure, which can be fixed with a little bit of
ifdef-ary.  I'm confused as to why I didn't get that email directly.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 20:10:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 20:10:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58583.103175 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksAT8-0004md-Vj; Wed, 23 Dec 2020 20:10:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58583.103175; Wed, 23 Dec 2020 20:10:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksAT8-0004mW-Sg; Wed, 23 Dec 2020 20:10:46 +0000
Received: by outflank-mailman (input) for mailman id 58583;
 Wed, 23 Dec 2020 20:10:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4sWS=F3=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ksAT8-0004mR-8J
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 20:10:46 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 09c017ba-c898-4b28-9e60-85d4ce6b0a10;
 Wed, 23 Dec 2020 20:10:45 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 8F82620796;
 Wed, 23 Dec 2020 20:10:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 09c017ba-c898-4b28-9e60-85d4ce6b0a10
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1608754244;
	bh=0nuM4iMLqo8KCJM4BLhb+XCA9hxBZ3+QBP2hMW2MPVU=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=X7zHt27QBy3B8JToZzFDN6F0Lnfob5bcLbXNEesx/sUVCoiKv4vTx/HW9KAcqRy3e
	 wuHLnQXsJ+pJl8mhpHtDz9NxINc9+N1noNzTaybJxiagn+BPG7aJ/j/T6Gq7Ohyi0v
	 J3XwLf1ZsEyMJeMI3OyUNnbYSpcJ+JYWSTeWqEM8kU5J7N57Ht5SkNj9MpIAiEUYSw
	 PN4xayYr4CK0JE3+Mj2p/Byp0C7i0EBQX/dm6+FYigW4fORHMhLCGaYfjAEKwIFKbP
	 llYIE/7XAgQu9mUha/Nh+1iJmQFJ1pRiQawo6Sb9ksXG2IbSXN9UFumj/E118YaBXI
	 ETSEEdHKJEs/A==
Date: Wed, 23 Dec 2020 12:10:43 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Andrew Cooper <andrew.cooper3@citrix.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, famzheng@amazon.com, cardoe@cardoe.com, 
    wl@xen.org, Bertrand.Marquis@arm.com, julien@xen.org
Subject: Re: [PATCH 0/4] xen: domain-tracked allocations, and fault
 injection
In-Reply-To: <e7ad4670-7e7e-99f3-1800-b097b6a1695f@citrix.com>
Message-ID: <alpine.DEB.2.21.2012231209170.4040@sstabellini-ThinkPad-T480s>
References: <160874604800.15699.17952392608790984041@600e7e483b3a> <alpine.DEB.2.21.2012231143430.4040@sstabellini-ThinkPad-T480s> <e7ad4670-7e7e-99f3-1800-b097b6a1695f@citrix.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-245295571-1608754244=:4040"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-245295571-1608754244=:4040
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Wed, 23 Dec 2020, Andrew Cooper wrote:
> On 23/12/2020 19:45, Stefano Stabellini wrote:
> > On Wed, 23 Dec 2020, no-reply@patchew.org wrote:
> >> Hi,
> >>
> >> Patchew automatically ran gitlab-ci pipeline with this patch (series) applied, but the job failed. Maybe there's a bug in the patches?
> >>
> >> You can find the link to the pipeline near the end of the report below:
> >>
> >> Type: series
> >> Message-id: 20201223163442.8840-1-andrew.cooper3@citrix.com
> >> Subject: [PATCH 0/4] xen: domain-tracked allocations, and fault injection
> >>
> >> === TEST SCRIPT BEGIN ===
> >> #!/bin/bash
> >> sleep 10
> >> patchew gitlab-pipeline-check -p xen-project/patchew/xen
> >> === TEST SCRIPT END ===
> > [...]
> >
> >> === OUTPUT BEGIN ===
> >> [2020-12-23 16:38:43] Looking up pipeline...
> >> [2020-12-23 16:38:43] Found pipeline 233889763:
> >>
> >> https://gitlab.com/xen-project/patchew/xen/-/pipelines/233889763
> > This seems to be a genuine failure. Looking at the alpine-3.12-gcc-arm64
> > build test, the build error is appended below. This is a link to the
> > failed job: https://gitlab.com/xen-project/patchew/xen/-/jobs/929842628
> >
> >
> >
> > gcc  -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -O2 -fomit-frame-pointer -D__XEN_INTERFACE_VERSION__=__XEN_LATEST_INTERFACE_VERSION__ -MMD -MP -MF .xen-diag.o.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE   -Werror -include /builds/xen-project/patchew/xen/tools/misc/../../tools/config.h -I/builds/xen-project/patchew/xen/tools/misc/../../tools/include -I/builds/xen-project/patchew/xen/tools/misc/../../tools/include -D__XEN_TOOLS__ -I/builds/xen-project/patchew/xen/tools/misc/../../tools/include -I/builds/xen-project/patchew/xen/tools/misc/../../tools/include -I/builds/xen-project/patchew/xen/tools/misc/../../tools/include -Wno-declaration-after-statement  -c -o xen-diag.o xen-diag.c 
> > xen-fault-ttl.c: In function 'main':
> > xen-fault-ttl.c:25:14: error: 'struct xen_arch_domainconfig' has no member named 'emulation_flags'
> >    25 |             .emulation_flags = XEN_X86_EMU_LAPIC,
> >       |              ^~~~~~~~~~~~~~~
> > xen-fault-ttl.c:25:32: error: 'XEN_X86_EMU_LAPIC' undeclared (first use in this function)
> >    25 |             .emulation_flags = XEN_X86_EMU_LAPIC,
> >       |                                ^~~~~~~~~~~~~~~~~
> > xen-fault-ttl.c:25:32: note: each undeclared identifier is reported only once for each function it appears in
> > make[4]: *** [/builds/xen-project/patchew/xen/tools/misc/../../tools/Rules.mk:144: xen-fault-ttl.o] Error 1
> > make[4]: *** Waiting for unfinished jobs....
> > make[4]: Leaving directory '/builds/xen-project/patchew/xen/tools/misc'
> > make[3]: *** [/builds/xen-project/patchew/xen/tools/../tools/Rules.mk:160: subdir-install-misc] Error 2
> > make[3]: Leaving directory '/builds/xen-project/patchew/xen/tools'
> > make[2]: *** [/builds/xen-project/patchew/xen/tools/../tools/Rules.mk:155: subdirs-install] Error 2
> > make[2]: Leaving directory '/builds/xen-project/patchew/xen/tools'
> > make[1]: *** [Makefile:67: install] Error 2
> > make[1]: Leaving directory '/builds/xen-project/patchew/xen/tools'
> > make: *** [Makefile:134: install-tools] Error 2
> 
> Yeah - that is a real failure, which can be fixed with a little bit of
> ifdef-ary.  I'm confused as to why I didn't get that email directly.

It looks like patchew doesn't yet CC the original author?

Also not sure why you weren't part of the default CC group anyway.
--8323329-245295571-1608754244=:4040--


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 20:16:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 20:16:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58587.103187 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksAYA-0004y2-MV; Wed, 23 Dec 2020 20:15:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58587.103187; Wed, 23 Dec 2020 20:15:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksAYA-0004xv-JV; Wed, 23 Dec 2020 20:15:58 +0000
Received: by outflank-mailman (input) for mailman id 58587;
 Wed, 23 Dec 2020 20:15:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksAY9-0004xn-4d; Wed, 23 Dec 2020 20:15:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksAY8-0006Kk-WE; Wed, 23 Dec 2020 20:15:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksAY8-0004Hi-NW; Wed, 23 Dec 2020 20:15:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ksAY8-0001Uh-N3; Wed, 23 Dec 2020 20:15:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YWeaA31/Lp9l6y1vHKTQMMFW82B1I2yzjGSo+WxjEk8=; b=PGBYurlrq5S+AMlJq0ehLGVGar
	xps+P/V/EGz1vyK3TrHIZimqpg6tElKbb8cSbFZMHeFbkE9JaE7/cihh7q+bRjHX/UDDMOXWEVcy1
	XJs5yRDLiWYFOj/udpZGMLsiITySDinQ62HAV9170wqmq4HG5iv49+P3pEoJRyCDlHCw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157856-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157856: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=e2747dbb5a44f4a463ecc6dd0f7fd113ee57bd67
X-Osstest-Versions-That:
    ovmf=d15d0d3d8aee1c7d5dab7b636601370061b32612
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 23 Dec 2020 20:15:56 +0000

flight 157856 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157856/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 e2747dbb5a44f4a463ecc6dd0f7fd113ee57bd67
baseline version:
 ovmf                 d15d0d3d8aee1c7d5dab7b636601370061b32612

Last test of basis   157848  2020-12-23 07:52:03 Z    0 days
Testing same since   157856  2020-12-23 14:11:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Masahisa Kojima <masahisa.kojima@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   d15d0d3d8a..e2747dbb5a  e2747dbb5a44f4a463ecc6dd0f7fd113ee57bd67 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 20:45:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 20:45:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58594.103202 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksB0X-0007bf-16; Wed, 23 Dec 2020 20:45:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58594.103202; Wed, 23 Dec 2020 20:45:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksB0W-0007bY-UN; Wed, 23 Dec 2020 20:45:16 +0000
Received: by outflank-mailman (input) for mailman id 58594;
 Wed, 23 Dec 2020 20:45:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UxwC=F3=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1ksB0V-0007bQ-N3
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 20:45:15 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e571ab90-7c47-4c0c-9450-a477725d5b69;
 Wed, 23 Dec 2020 20:45:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e571ab90-7c47-4c0c-9450-a477725d5b69
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608756314;
  h=from:to:cc:subject:date:message-id:mime-version;
  bh=VcDRdLCPQ0tSFy5EC+0JfufBLFLHQk9/isCxuNjtbeo=;
  b=K0SKGjgvFuvPeWQ9OQ3kweB+lPmA9iDqI9pBCSNcJCR9YvjMC02c2V95
   wdp67k3pFeYhufSWZGIHW+iBnT1JpxpI8Hlwr3sWZtuBn7mmrgeAI+hdE
   lv4y/vSivOWPH/O++ik+LOKW/K6T2LeJvHlMJ+boyQ/HYvjp7ibuXrs9U
   I=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: VnFW6XtpNTgxgsucMleoa7VtQwCPaiAWgwpLBQbSgIPy4UYG26PJ/4YX7rPDzoD5llRp+mdmS5
 DacpOHVnyz8iqUi66OMzyYq2C8SZ/kDOBPgQKm9R3NPxYNDZRFvcAIyB+o5zN+ktZZEVIzaY8r
 0YxReaHoQ5X8gNNodSJx8EZtQsgH79w73k6Xcu3ZkF+yhdHYuhBQPwApOAgNEkVYXp4emnZoQH
 VuXRrK/apZM8ntihWFeIYfOubcC5hO2+xbBXDr3MQQHsrR+x+GVWIXdwmIT7pagoHaaz3aVBGz
 oo4=
X-SBRS: 5.2
X-MesageID: 35121504
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,442,1599537600"; 
   d="scan'208";a="35121504"
From: Igor Druzhinin <igor.druzhinin@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: <jbeulich@suse.com>, <andrew.cooper3@citrix.com>, <roger.pau@citrix.com>,
	<wl@xen.org>, <jun.nakajima@intel.com>, <kevin.tian@intel.com>, "Igor
 Druzhinin" <igor.druzhinin@citrix.com>
Subject: [PATCH v3] x86/intel: insert Ice Lake-X (server) and Ice Lake-D model numbers
Date: Wed, 23 Dec 2020 20:32:00 +0000
Message-ID: <1608755520-1277-1-git-send-email-igor.druzhinin@citrix.com>
X-Mailer: git-send-email 2.7.4
MIME-Version: 1.0
Content-Type: text/plain

The LBR and C-state MSRs for both models should correspond to those of Ice
Lake desktop, according to External Design Specification vol.2.

Ice Lake-X is known to expose IF_PSCHANGE_MC_NO in IA32_ARCH_CAPABILITIES MSR
(confirmed on Whitley SDP) which means the erratum is fixed in hardware for
that model and therefore it shouldn't be present in has_if_pschange_mc list.
Provisionally assume the same to be the case for Ice Lake-D.

Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
---
Changes in v3:
- Add Ice Lake-D model numbers
- Drop has_if_pschange_mc hunk following Tiger Lake related discussion -
  IF_PSCHANGE_MC_NO is confirmed to be exposed on Whitley SDP

---
 xen/arch/x86/acpi/cpu_idle.c | 2 ++
 xen/arch/x86/hvm/vmx/vmx.c   | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/acpi/cpu_idle.c b/xen/arch/x86/acpi/cpu_idle.c
index c092086..d788c8b 100644
--- a/xen/arch/x86/acpi/cpu_idle.c
+++ b/xen/arch/x86/acpi/cpu_idle.c
@@ -181,6 +181,8 @@ static void do_get_hw_residencies(void *arg)
     case 0x55:
     case 0x5E:
     /* Ice Lake */
+    case 0x6A:
+    case 0x6C:
     case 0x7D:
     case 0x7E:
     /* Tiger Lake */
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 2d4475e..bff5979 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2775,7 +2775,7 @@ static const struct lbr_info *last_branch_msr_get(void)
         /* Goldmont Plus */
         case 0x7a:
         /* Ice Lake */
-        case 0x7d: case 0x7e:
+        case 0x6a: case 0x6c: case 0x7d: case 0x7e:
         /* Tiger Lake */
         case 0x8c: case 0x8d:
         /* Tremont */
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Wed Dec 23 21:32:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 21:32:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58600.103217 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksBjr-0003Xw-Jr; Wed, 23 Dec 2020 21:32:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58600.103217; Wed, 23 Dec 2020 21:32:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksBjr-0003Xp-Gi; Wed, 23 Dec 2020 21:32:07 +0000
Received: by outflank-mailman (input) for mailman id 58600;
 Wed, 23 Dec 2020 21:32:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GNZT=F3=gmail.com=persaur@srs-us1.protection.inumbo.net>)
 id 1ksBjp-0003Xk-C9
 for xen-devel@lists.xenproject.org; Wed, 23 Dec 2020 21:32:05 +0000
Received: from mail-qv1-xf35.google.com (unknown [2607:f8b0:4864:20::f35])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 047fbef8-d796-49e9-a86f-6644070ac8df;
 Wed, 23 Dec 2020 21:32:03 +0000 (UTC)
Received: by mail-qv1-xf35.google.com with SMTP id a13so450880qvv.0
 for <xen-devel@lists.xenproject.org>; Wed, 23 Dec 2020 13:32:03 -0800 (PST)
Received: from ?IPv6:2607:fb90:2490:9cc4:2476:fb1f:29f7:9ad4?
 ([2607:fb90:2490:9cc4:2476:fb1f:29f7:9ad4])
 by smtp.gmail.com with ESMTPSA id a5sm16395870qtn.57.2020.12.23.13.32.01
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 23 Dec 2020 13:32:02 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 047fbef8-d796-49e9-a86f-6644070ac8df
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=content-transfer-encoding:from:mime-version:subject:message-id:date
         :cc:to;
        bh=IF1Fi9WNO+Q0ToX2Z2DXtCfABhDvr6W+0befsdetbaI=;
        b=lZH3bozJ52vLj99A3hrkj1uiy+EB3Kd2JmD8SDAgC3njxg/4b/IF7IqdXN+O4vsxkq
         BhR7y8dZr82iPGimKOcnK7Ct5jBTUNUul+3geuwgDzYcv616tkH2k94bJn3jX9i5vSuK
         fhbXIQv1/SN2aHmEc73RjS+3IwDGuhfpifr0119xEg+wse4QxCajw7od9RpdU7W4miHF
         Q9+AFUJtBZJPXrkhImRwIxm7zGhiOsOedKZOkJ2nuv4A20orFH6JkU6WXaWDk9XizmIu
         vlDukYb/wrb/vx6Ct+HCLEVpvMb4kWiA+5gJ3KE6lqWAggmsrNYY2ybVhGYuBPkHAyaj
         lFKg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:content-transfer-encoding:from:mime-version
         :subject:message-id:date:cc:to;
        bh=IF1Fi9WNO+Q0ToX2Z2DXtCfABhDvr6W+0befsdetbaI=;
        b=JAdcBauZBTFpZCkiX3GF0AQ2hyKf2l0qWudWYawduKLwaGodg1tMBCo7WulKERWWpA
         CAEwC945qUJRPf/zfBFDwAq2gXLlCaKVwIeCenAYx/sJ0I9368l6Qg0BqG1RiNAAZUsK
         ifETgxhDhB6ch2awIcRBBatlXJaryznsqCPW+m8zv51dcX5cTxX5O90snDLG0hJH1xPw
         GAipTfDg9y73WQnUkzLj8+s8enqbvY16ruloEpRZIhlki8LR77pfNUiw2uiszMnEqxdF
         vJu3PGLYEf2VM7e40rrKW9k9mNxpJwFD8tpza5eLY/Lr4ixmzkO67megbfjNrqbLNavJ
         NxRg==
X-Gm-Message-State: AOAM533NxpRp/PsvEjJTGPeUFhn4NBfBuSj/+n7M7PkwQuTOsMyYVt3Z
	Tg9dVOvCYbD9u7oi4Eh0FBE=
X-Google-Smtp-Source: ABdhPJwZci8g2Fb/fio8uJCdyyOvKcgrANAPQwLXQtOKtle9eyks701oNx/oQqbT8YwTR3m0oDIivg==
X-Received: by 2002:a0c:b21e:: with SMTP id x30mr28485289qvd.21.1608759122814;
        Wed, 23 Dec 2020 13:32:02 -0800 (PST)
Content-Type: multipart/alternative; boundary=Apple-Mail-1EDF01CF-DC35-4301-BB7C-E7B738FFF897
Content-Transfer-Encoding: 7bit
From: Rich Persaud <persaur@gmail.com>
Mime-Version: 1.0 (1.0)
Subject: Re: [openxt-dev] VirtIO-Argo initial development proposal
Message-Id: <DBCC8190-7228-483E-AE8A-09880B28F516@gmail.com>
Date: Wed, 23 Dec 2020 16:32:01 -0500
Cc: Christopher Clark <christopher.w.clark@gmail.com>,
 openxt <openxt@googlegroups.com>, xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <olekstysh@gmail.com>, roger.pau@citrix.com,
 Julien Grall <jgrall@amazon.com>, James McKenzie <james@bromium.com>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.co.uk>
To: Jean-Philippe Ouellet <jpo@vt.edu>
X-Mailer: iPhone Mail (18C66)


--Apple-Mail-1EDF01CF-DC35-4301-BB7C-E7B738FFF897
Content-Type: text/plain;
	charset=utf-8
Content-Transfer-Encoding: 8bit

On Dec 17, 2020, at 07:13, Jean-Philippe Ouellet <jpo@vt.edu> wrote:
> On Wed, Dec 16, 2020 at 2:37 PM Christopher Clark
> <christopher.w.clark@gmail.com> wrote:
>> Hi all,
>>
>> I have written a page for the OpenXT wiki describing a proposal for
>> initial development towards the VirtIO-Argo transport driver, and the
>> related system components to support it, destined for OpenXT and
>> upstream projects:
>>
>> https://openxt.atlassian.net/wiki/spaces/~cclark/pages/1696169985/VirtIO-Argo+Development+Phase+1
>>
>> Please review ahead of tomorrow's OpenXT Community Call.
>>
>> I would draw your attention to the Comparison of Argo interface options section:
>>
>> https://openxt.atlassian.net/wiki/spaces/~cclark/pages/1696169985/VirtIO-Argo+Development+Phase+1#Comparison-of-Argo-interface-options
>>
>> where further input to the table would be valuable;
>> and would also appreciate input on the IOREQ project section:
>>
>> https://openxt.atlassian.net/wiki/spaces/~cclark/pages/1696169985/VirtIO-Argo+Development+Phase+1#Project:-IOREQ-for-VirtIO-Argo
>>
>> in particular, whether an IOREQ implementation to support the
>> provision of devices to the frontends can replace the need for any
>> userspace software to interact with an Argo kernel interface for the
>> VirtIO-Argo implementation.
>>
>> thanks,
>> Christopher
>
> Hi,
>
> Really excited to see this happening, and disappointed that I'm not
> able to contribute at this time. I don't think I'll be able to join
> the call, but wanted to share some initial thoughts from my
> middle-of-the-night review anyway.
>
> Super rough notes in raw unedited notes-to-self form:
>
> main point of feedback is: I love the desire to get a non-shared-mem
> transport backend for virtio standardized. It moves us closer to an
> HMX-only world. BUT: virtio is relevant to many hypervisors beyond
> Xen, not all of which have the same views on how policy enforcement
> should be done, namely some have a preference for capability-oriented
> models over type-enforcement / MAC models. It would be nice if any
> labeling encoded into the actual specs / guest-boundary protocols
> would be strictly a mechanism, and be policy-agnostic, in particular
> not making implicit assumptions about XSM / SELinux / similar. I don't
> have specific suggestions at this point, but would love to discuss.
>
> thoughts on how to handle device enumeration? hotplug notifications?
> - can't rely on xenstore
> - need some internal argo messaging for this?
> - name service w/ well-known names? starts to look like xenstore
> pretty quickly...
> - granular disaggregation of backend device-model providers desirable
>
> how does resource accounting work? each side pays for their own delivery ring?
> - init in already-guest-mapped mem & simply register?
> - how does it compare to grant tables?
>  - do you need to go through linux driver to alloc (e.g. xengntalloc)
> or has way to share arbitrary otherwise not-special userspace pages
> (e.g. u2mfn, with all its issues (pinning, reloc, etc.))?
>
> ioreq is tangled with grant refs, evt chans, generic vmexit
> dispatcher, instruction decoder, etc. none of which seems desirable if
> trying to move towards world with strictly safer guest interfaces
> exposed (e.g. HMX-only)
> - there's no io to trap/decode here, it's explicitly exclusively via
> hypercall to HMX, no?
> - also, do we want argo sendv hypercall to be always blocking & synchronous?
>  - or perhaps async notify & background copy to other vm addr space?
>  - possibly better scaling?
>  - accounting of in-flight io requests to handle gets complicated
> (see recent XSA)
>  - PCI-like completion request semantics? (argo as cross-domain
> software dma engine w/ some basic protocol enforcement?)
>
> "port" v4v driver => argo:
> - yes please! something without all the confidence-inspiring
> DEBUG_{APPLE,ORANGE,BANANA} indicators of production-worthy code would
> be great ;)
> - seems like you may want to redo argo hypercall interface too? (at
> least the syscall interface...)
>  - targeting synchronous blocking sendv()?
>  - or some async queue/completion thing too? (like PF_RING, but with
> *iov entries?)
>  - both could count as HMX, both could enforce no double-write racing
> games at dest ring, etc.
>
> re v4vchar & doing similar for argo:
> - we may prefer "can write N bytes? -> yes/no" or "how many bytes can
> write? -> N" over "try to write N bytes -> only wrote M, EAGAIN"
> - the latter can be implemented over the former, but not the other way around
> - starts to matter when you want to be able to implement in userspace
> & provide backpressure to peer userspace without additional buffering
> & potential lying about durability of writes
> - breaks cross-domain EPIPE boundary correctness
> - Qubes ran into same issues when porting vchan from Xen to KVM
> initially via vsock
>=20
> some virtio drivers explicitly use shared mem for more than just
> communication rings:
> - e.g. virtio-fs, which can map pages as DAX-like fs backing to share page=
 cache
> - e.g. virtio-gpu, virtio-wayland, virtio-video, which deal in framebuffer=
s
> - needs thought about how best to map semantics to (or at least
> interoperate cleanly & safely with) HMX-{only,mostly} world
>  - the performance of shared mem actually can meaningfully matter for
> e.g. large framebuffers in particular due to fundamental memory
> bandwidth constraints
>=20
> what is mentioned PX hypervisor? presumably short for PicoXen? any
> public information?

Not much at the moment, but there is prior public work. PX is an OSS L0 "Protection Hypervisor" in the Hardened Access Terminal (HAT) architecture presented by Daniel Smith at the 2020 Xen Summit: https://youtube.com/watch?v=Wt-SBhFnDZY&t=3m48s

PX is intended to build on lessons learned from the IBM Ultravisor, HP/Bromium AX and AIS Bareflank L0 hypervisors:

IBM: https://www.platformsecuritysummit.com/2019/speaker/hunt/

HP/Bromium: https://www.platformsecuritysummit.com/2018/speaker/pratt/
The Day 2 discussion at the Dec 2019 meeting in Cambridge included the L0 nesting hypervisor, UUID semantics, Argo, and communication between nested hypervisors: https://lists.archive.carbon60.com/xen/devel/577800

Bareflank: https://youtube.com/channel/UCH-7Pw96K5V1RHAPn5-cmYA
Xen Summit 2020 design session notes: https://lists.archive.carbon60.com/xen/devel/591509

In the long term, efficient hypervisor nesting will require close cooperation with silicon and firmware vendors. Note that Intel is introducing TDX (Trust Domain Extensions):

https://software.intel.com/content/www/us/en/develop/articles/intel-trust-domain-extensions.html
https://www.brighttalk.com/webcast/18206/453600

There are also a couple of recent papers from Shanghai Jiao Tong University on using hardware instructions to accelerate inter-domain HMX.

March 2019: https://ipads.se.sjtu.edu.cn/_media/publications/skybridge-eurosys19.pdf

> we present SkyBridge, a new communication facility designed and optimized for synchronous IPC in microkernels. SkyBridge requires no involvement of kernels during communication and allows a process to directly switch to the virtual address space of the target process and invoke the target function. SkyBridge retains the traditional virtual address space isolation and thus can be easily integrated into existing microkernels. The key idea of SkyBridge is to leverage a commodity hardware feature for virtualization (i.e., [Intel EPT] VMFUNC) to achieve efficient IPC. To leverage the hardware feature, SkyBridge inserts a tiny virtualization layer (Rootkernel) beneath the original microkernel (Subkernel). The Rootkernel is carefully designed to eliminate most virtualization overheads. SkyBridge also integrates a series of techniques to guarantee the security properties of IPC. We have implemented SkyBridge on three popular open-source microkernels (seL4, Fiasco.OC, and Google Zircon). The evaluation results show that SkyBridge improves the speed of IPC by 1.49x to 19.6x for microbenchmarks. For real-world applications (e.g., SQLite3 database), SkyBridge improves the throughput by 81.9%, 1.44x and 9.59x for the three microkernels on average.

July 2020: https://ipads.se.sjtu.edu.cn/_media/publications/guatc20.pdf

> a redesign of traditional microkernel OSes to harmonize the tension between messaging performance and isolation. UnderBridge moves the OS components of a microkernel between user space and kernel space at runtime while enforcing consistent isolation. It retrofits Intel Memory Protection Key for Userspace (PKU) in kernel space to achieve such isolation efficiently and designs a fast IPC mechanism across those OS components. Thanks to PKU's extremely low overhead, the inter-process communication (IPC) roundtrip cost in UnderBridge can be as low as 109 cycles. We have designed and implemented a new microkernel called ChCore based on UnderBridge and have also ported UnderBridge to three mainstream microkernels, i.e., seL4, Google Zircon, and Fiasco.OC. Evaluations show that UnderBridge speeds up the IPC by 3.0× compared with the state-of-the-art (e.g., SkyBridge) and improves the performance of IPC-intensive applications by up to 13.1× for the above three microkernels.



For those interested in Argo and VirtIO, there will be a conference call on Thursday, Jan 14th 2021, at 1600 UTC.

Rich
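As a footnote to the vchan flow-control point above: the "how many bytes can I write?" query is the more primitive operation, because "try to write N -> wrote M / EAGAIN" can be layered on top of it without extra buffering, while the reverse cannot. A minimal sketch of that layering (illustrative only; this is not the Argo, v4vchar, or vchan API, and all names here are hypothetical):

```python
# Sketch: deriving try-write semantics from a pure space query.
# "Ring" stands in for a destination ring; not any real Argo interface.

class Ring:
    """Fixed-size byte ring modelling a receiver's delivery ring."""
    def __init__(self, size):
        self.size = size
        self.used = 0

    def writable_bytes(self):
        # Primitive: pure query, no side effects. Userspace can apply
        # backpressure to its peer before committing any data.
        return self.size - self.used

    def write(self, data):
        # Derived: "try to write N -> wrote M, or EAGAIN" built on the query.
        m = min(len(data), self.writable_bytes())
        if m == 0:
            raise BlockingIOError("EAGAIN")  # nothing durable was written
        self.used += m
        return m  # may be a short write

r = Ring(8)
assert r.writable_bytes() == 8
assert r.write(b"hello") == 5   # full write
assert r.write(b"world!") == 3  # short write: only 3 bytes of space remained
```

Going the other way, recovering an exact space count from try-write semantics, would require speculatively writing and then un-writing data, which forces exactly the extra buffering and misreporting of write durability described above.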



From xen-devel-bounces@lists.xenproject.org Wed Dec 23 22:31:49 2020
Date: Wed, 23 Dec 2020 22:31:26 +0000
To: Dario Faggioli <dfaggioli@suse.com>
From: Dylanger Daly <dylangerdaly@protonmail.com>
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: Ryzen 4000 (Mobile) Softlocks/Micro-stutters

Hi Dario,

Thank you for your reply

This issue is made much worse with: https://github.com/QubesOS/qubes-vmm-xen/commit/c28754bdb458281a22e9a9779213c941531b6dff#diff-d98b01176d360f55f58c25d2dfbfadc115718806181ef40d1838d2efa6b2bea1

Reverting `xen: credit2: limit the max number of CPUs in a runqueue` results in stuttering even with dom0 pinned to 1 vcpu. Currently, AMD Ryzen 4000 users are maintaining a separate build without this change; I think this commit has something to do with the issues we're experiencing.

Without pinning dom0 to 1 vcpu, this is what the lockups look like: https://imgur.com/a/q7MQRez. Another weird artifact is that the mouse (trackpad) will quickly jerk when being moved.

The other interesting thing is that appVMs can only use 2 vcpus; allocating more results in that appVM exhibiting stuttering/micro-lockups.

So to get the device working, the following is required:
1. Do not revert `xen: credit2: limit the max number of CPUs in a runqueue`
2. Pin 1 vcpu for dom0
3. Limit appVMs to 2 vcpus each
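For reference, constraints 2 and 3 map onto configuration roughly as follows. The Xen command-line options are the ones visible in the boot log later in this message; the per-VM vcpu cap is sketched with qvm-prefs, whose exact spelling on a given Qubes release should be double-checked:

```shell
# 2. Pin dom0 to a single vcpu (Xen command line, e.g. via the GRUB config):
#      dom0_max_vcpus=1 dom0_vcpus_pin

# 3. Cap an appVM at 2 vcpus from dom0 (Qubes; the VM name is a placeholder):
qvm-prefs <appvm-name> vcpus 2
```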

> So, just to be sure I am understanding the symptoms correctly: here you
> say that Credit (and RTDS) "don't boot correctly". In another mail, I
> think you said that Credit boots, but is unusable due to lag and
> lockups... Which is which?

credit2 is the only scheduler that I can get working; other schedulers don't boot at all.

> Also, since this looks like it is SMT related, is Credit bootable
> and/or usable with SMT off? And with SMT on?

Qubes disables SMT by default. Just after I sent my email yesterday, I was actually able to boot with SMT enabled as long as I had dom0 allocated 1 vcpu; without dom0 being allocated 1 vcpu, the device won't boot at all.

> It'd be "wonderful" to see _how_ it does that, by seeing the
> stacktrace (preferably of a debug build), if there is one. Or, if
> the system locks, e.g., knowing whether it is responsive at least
> to debug keys (and, if yes, what the output of the 'r' debug key
> looks like)

Because I'm compiling Xen myself, I should absolutely be able to enable debug/verbose logging; I'll try to capture more logging today.

Here's what I can dig up currently:

```
[chairman@dom0 ~]$ xl info
host                   : dom0
release                : 5.8.18-200.fc32.x86_64
version                : #1 SMP Mon Nov 2 19:49:11 UTC 2020
machine                : x86_64
nr_cpus                : 8
max_cpu_id             : 15
nr_nodes               : 1
cores_per_socket       : 8
threads_per_core       : 1
cpu_mhz                : 1696.837
hw_caps                : 178bf3ff:76d8320b:2e500800:244037ff:0000000f:219c91a9:00400004:00000500
virt_caps              : pv hvm hvm_directio pv_directio hap
total_memory           : 30439
free_memory            : 6644
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 14
xen_extra              : .0
xen_version            : 4.14.0
xen_caps               : xen-3.0-x86_64 hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit2
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          :
xen_commandline        : placeholder console=none dom0_mem=min:1024M dom0_mem=max:4096M dom0_max_vcpus=1 dom0_vcpus_pin ucode=scan smt=off gnttab_max_frames=2048 gnttab_max_maptrack_frames=4096 ept=exec-sp no-real-mode edd=off
cc_compiler            : gcc (GCC) 10.2.1 20201125 (Red Hat 10.2.1-9)
cc_compile_by          : user
cc_compile_domain      : [unknown]
cc_compile_date        : Wed Dec 16 00:00:00 UTC 2020
build_id               : 9eb1d06c8bbc4686c4a8a6c9ee46d91e106df81d
xend_config_format     : 4

[chairman@dom0 ~]$ xl vcpu-list
Name                                ID  VCPU   CPU State   Time(s) Affinity (Hard / Soft)
Domain-0                             0     0    0   r--     367.3  0 / 0,2,4,6,8,10,12,14
sys-net                              1     0   12   -b-      59.9  all / all
sys-net                              1     1   10   -b-      61.0  all / all
sys-net-dm                           2     0   10   -b-      19.4  all / all
sys-usb                              3     0    8   -b-      31.0  all / all
sys-usb                              3     1   14   -b-      36.9  all / all
sys-usb                              3     2   10   -b-      34.5  all / all
sys-usb                              3     3   12   -b-      32.8  all / all
sys-usb-dm                           4     0   12   -b-      20.8  all / all
sys-firewall                         5     0   10   -b-       1.4  all / all
sys-firewall                         5     1   14   -b-      11.0  all / all
xxxxxxxxxxxxxx                       6     0   12   -b-       9.2  all / all
xxxxxxxxxxxxxx                       6     1   12   -b-       8.0  all / all
xxxxxxxxxxxxxxx                      7     0    8   -b-       8.9  all / all
xxxxxxxxxxxxxxx                      7     1    8   -b-       6.1  all / all
xxxxxxxxxxx                          8     0   14   -b-     399.2  all / all
xxxxxxxxxxx                          8     1   12   -b-     454.6  all / all
xxxxxxxxxxxxxxxx                     9     0   14   -b-      33.7  all / all
xxxxxxxxxxxxxxxx                     9     1    8   -b-      54.1  all / all
xxxxxxxxxxxxxxx                     10     0   10   -b-       4.2  all / all
xxxxxxxxxxxxxxx                     10     1    8   -b-       7.3  all / all
xxxxxxxxxxxx                        11     0    2   -b-      39.7  all / all
xxxxxxxxxxxx                        11     1   12   -b-     121.9  all / all
email                               12     0    8   -b-      29.2  all / all
email                               12     1   14   -b-      84.2  all / all


[2020-12-24 08:34:49] Logfile Opened
[2020-12-24 08:34:49] (XEN) Built-in command line: ept=3Dexec-sp
[2020-12-24 08:34:49] (XEN) parameter no-real-mode unknown!
[2020-12-24 08:34:49] (XEN) parameter edd unknown!
[2020-12-24 08:34:49]  Xen 4.14.0
[2020-12-24 08:34:49] (XEN) Xen version 4.14.0 (user@[unknown]) (gcc (GCC) =
10.2.1 20201125 (Red Hat 10.2.1-9)) debug=3Dn  Wed Dec 16 00:00:00 UTC 2020
[2020-12-24 08:34:49] (XEN) Latest ChangeSet:
[2020-12-24 08:34:49] (XEN) Bootloader: GRUB 2.04
[2020-12-24 08:34:49] (XEN) Command line: placeholder console=3Dnone dom0_m=
em=3Dmin:1024M dom0_mem=3Dmax:4096M dom0_max_vcpus=3D1 dom0_vcpus_pin ucode=
=3Dscan smt=3Doff gnttab_max_frames=3D2048 gnttab_max_maptrack_frames=3D409=
6 ept=3Dexec-sp no-real-mode edd=3Doff
[2020-12-24 08:34:49] (XEN) Xen image load base address: 0xb8e00000
[2020-12-24 08:34:49] (XEN) Video information:
[2020-12-24 08:34:49] (XEN)  VGA is graphics mode 1920x1080, 32 bpp
[2020-12-24 08:34:49] (XEN) Disc information:
[2020-12-24 08:34:49] (XEN)  Found 0 MBR signatures
[2020-12-24 08:34:49] (XEN)  Found 2 EDD information structures
[2020-12-24 08:34:49] (XEN) EFI RAM map:
[2020-12-24 08:34:49] (XEN)  [0000000000000000, 000000000009efff] (usable)
[2020-12-24 08:34:49] (XEN)  [000000000009f000, 000000000009ffff] (reserved=
)
[2020-12-24 08:34:49] (XEN)  [00000000000e0000, 00000000000fffff] (reserved=
)
[2020-12-24 08:34:49] (XEN)  [0000000000100000, 0000000009bfffff] (usable)
[2020-12-24 08:34:49] (XEN)  [0000000009c00000, 0000000009d00fff] (reserved=
)
[2020-12-24 08:34:49] (XEN)  [0000000009d01000, 0000000009efffff] (usable)
[2020-12-24 08:34:49] (XEN)  [0000000009f00000, 0000000009f0ffff] (ACPI NVS=
)
[2020-12-24 08:34:49] (XEN)  [0000000009f10000, 00000000bd9ddfff] (usable)
[2020-12-24 08:34:49] (XEN)  [00000000bd9de000, 00000000ca37dfff] (reserved)
[2020-12-24 08:34:49] (XEN)  [00000000ca37e000, 00000000cc37dfff] (ACPI NVS)
[2020-12-24 08:34:49] (XEN)  [00000000cc37e000, 00000000cc3fdfff] (ACPI data)
[2020-12-24 08:34:49] (XEN)  [00000000cc3fe000, 00000000cdffffff] (usable)
[2020-12-24 08:34:49] (XEN)  [00000000ce000000, 00000000cfffffff] (reserved)
[2020-12-24 08:34:49] (XEN)  [00000000f8000000, 00000000fbffffff] (reserved)
[2020-12-24 08:34:49] (XEN)  [00000000fde00000, 00000000fdefffff] (reserved)
[2020-12-24 08:34:49] (XEN)  [00000000fed80000, 00000000fed80fff] (reserved)
[2020-12-24 08:34:49] (XEN)  [0000000100000000, 00000007af33ffff] (usable)
[2020-12-24 08:34:49] (XEN)  [00000007af340000, 000000082fffffff] (reserved)
[2020-12-24 08:34:49] (XEN) ACPI: RSDP CC3FD014, 0024 (r2 LENOVO)
[2020-12-24 08:34:49] (XEN) ACPI: XSDT CC3FB188, 0104 (r1 LENOVO TP-R1C       1290 PTEC        2)
[2020-12-24 08:34:49] (XEN) ACPI: FACP BE499000, 0114 (r6 LENOVO TP-R1C       1290 PTEC        2)
[2020-12-24 08:34:49] (XEN) ACPI: DSDT BE484000, F08E (r1 LENOVO TP-R1C       1290 INTL 20180313)
[2020-12-24 08:34:49] (XEN) ACPI: FACS CC218000, 0040
[2020-12-24 08:34:49] (XEN) ACPI: SSDT BF751000, 00A2 (r1 LENOVO PID0Ssdt        1 INTL 20180313)
[2020-12-24 08:34:49] (XEN) ACPI: SSDT BF750000, 0CCC (r1 LENOVO UsbCTabl        1 INTL 20180313)
[2020-12-24 08:34:49] (XEN) ACPI: SSDT BF743000, 7216 (r2 LENOVO TP-R1C          2 MSFT  4000000)
[2020-12-24 08:34:49] (XEN) ACPI: IVRS BF742000, 01A4 (r2 LENOVO TP-R1C       1290 PTEC        2)
[2020-12-24 08:34:49] (XEN) ACPI: SSDT BF704000, 0266 (r1 LENOVO     STD3        1 INTL 20180313)
[2020-12-24 08:34:49] (XEN) ACPI: SSDT BF6F0000, 0632 (r2 LENOVO Tpm2Tabl     1000 INTL 20180313)
[2020-12-24 08:34:49] (XEN) ACPI: TPM2 BF6EF000, 0034 (r3 LENOVO TP-R1C       1290 PTEC        2)
[2020-12-24 08:34:49] (XEN) ACPI: SSDT BF6EE000, 0924 (r1 LENOVO WmiTable        1 INTL 20180313)
[2020-12-24 08:34:49] (XEN) ACPI: MSDM BF6B5000, 0055 (r3 LENOVO TP-R1C       1290 PTEC        2)
[2020-12-24 08:34:49] (XEN) ACPI: BATB BF6A0000, 004A (r2 LENOVO TP-R1C       1290 PTEC        2)
[2020-12-24 08:34:49] (XEN) ACPI: HPET BE498000, 0038 (r1 LENOVO TP-R1C       1290 PTEC        2)
[2020-12-24 08:34:49] (XEN) ACPI: APIC BE497000, 0138 (r2 LENOVO TP-R1C       1290 PTEC        2)
[2020-12-24 08:34:49] (XEN) ACPI: MCFG BE496000, 003C (r1 LENOVO TP-R1C       1290 PTEC        2)
[2020-12-24 08:34:49] (XEN) ACPI: SBST BE495000, 0030 (r1 LENOVO TP-R1C       1290 PTEC        2)
[2020-12-24 08:34:49] (XEN) ACPI: WSMT BE494000, 0028 (r1 LENOVO TP-R1C       1290 PTEC        2)
[2020-12-24 08:34:49] (XEN) ACPI: VFCT BE476000, D484 (r1 LENOVO TP-R1C       1290 PTEC        2)
[2020-12-24 08:34:49] (XEN) ACPI: SSDT BE472000, 39F4 (r1 LENOVO TP-R1C          1 AMD         1)
[2020-12-24 08:34:49] (XEN) ACPI: CRAT BE471000, 0F00 (r1 LENOVO TP-R1C       1290 PTEC        2)
[2020-12-24 08:34:49] (XEN) ACPI: CDIT BE470000, 0029 (r1 LENOVO TP-R1C       1290 PTEC        2)
[2020-12-24 08:34:49] (XEN) ACPI: FPDT BF6C7000, 0034 (r1 LENOVO TP-R1C       1290 PTEC        2)
[2020-12-24 08:34:49] (XEN) ACPI: SSDT BE46E000, 13CF (r1 LENOVO TP-R1C          1 INTL 20180313)
[2020-12-24 08:34:49] (XEN) ACPI: SSDT BE46C000, 1576 (r1 LENOVO TP-R1C          1 INTL 20180313)
[2020-12-24 08:34:49] (XEN) ACPI: SSDT BE468000, 353C (r1 LENOVO TP-R1C          1 INTL 20180313)
[2020-12-24 08:34:49] (XEN) ACPI: BGRT BE467000, 0038 (r1 LENOVO TP-R1C       1290 PTEC        2)
[2020-12-24 08:34:49] (XEN) ACPI: UEFI CC217000, 013E (r1 LENOVO TP-R1C       1290 PTEC        2)
[2020-12-24 08:34:49] (XEN) ACPI: SSDT BF74F000, 0090 (r1 LENOVO TP-R1C          1 INTL 20180313)
[2020-12-24 08:34:49] (XEN) ACPI: SSDT BF74E000, 09AD (r1 LENOVO TP-R1C          1 INTL 20180313)
[2020-12-24 08:34:49] (XEN) System RAM: 30439MB (31170232kB)
[2020-12-24 08:34:49] (XEN) Domain heap initialised
[2020-12-24 08:34:49] (XEN) ACPI: 32/64X FACS address mismatch in FADT - cc218000/0000000000000000, using 32
[2020-12-24 08:34:49] (XEN) IOAPIC[0]: apic_id 32, version 33, address 0xfec00000, GSI 0-23
[2020-12-24 08:34:49] (XEN) IOAPIC[1]: apic_id 33, version 33, address 0xfec01000, GSI 24-55
[2020-12-24 08:34:49] (XEN) Enabling APIC mode:  Phys.  Using 2 I/O APICs
[2020-12-24 08:34:49] (XEN) CPU0: 1400..1700 MHz
[2020-12-24 08:34:49] (XEN) xstate: size: 0x380 and states: 0x207
[2020-12-24 08:34:49] (XEN) Speculative mitigation facilities:
[2020-12-24 08:34:49] (XEN)   Hardware features: IBPB
[2020-12-24 08:34:49] (XEN)   Compiled-in support: INDIRECT_THUNK
[2020-12-24 08:34:49] (XEN)   Xen settings: BTI-Thunk LFENCE, SPEC_CTRL: No, Other: IBPB BRANCH_HARDEN
[2020-12-24 08:34:49] (XEN)   Support for HVM VMs: RSB
[2020-12-24 08:34:49] (XEN)   Support for PV VMs: RSB
[2020-12-24 08:34:49] (XEN)   XPTI (64-bit PV only): Dom0 disabled, DomU disabled (without PCID)
[2020-12-24 08:34:49] (XEN)   PV L1TF shadowing: Dom0 disabled, DomU disabled
[2020-12-24 08:34:49] (XEN) Using scheduler: SMP Credit Scheduler rev2 (credit2)
[2020-12-24 08:34:49] (XEN) Initializing Credit2 scheduler
[2020-12-24 08:34:49] (XEN) Platform timer is 14.318MHz HPET
[2020-12-24 08:34:49] (XEN) Detected 1696.837 MHz processor.
[2020-12-24 08:34:49] (XEN) Unknown cachability for MFNs 0xe0-0xff
[2020-12-24 08:34:49] (XEN) AMD-Vi: IOMMU Extended Features:
[2020-12-24 08:34:49] (XEN) - Peripheral Page Service Request
[2020-12-24 08:34:49] (XEN) - x2APIC
[2020-12-24 08:34:49] (XEN) - NX bit
[2020-12-24 08:34:49] (XEN) - Invalidate All Command
[2020-12-24 08:34:49] (XEN) - Guest APIC
[2020-12-24 08:34:49] (XEN) - Performance Counters
[2020-12-24 08:34:49] (XEN) - Host Address Translation Size: 0x2
[2020-12-24 08:34:49] (XEN) - Guest Address Translation Size: 0
[2020-12-24 08:34:49] (XEN) - Guest CR3 Root Table Level: 0x1
[2020-12-24 08:34:49] (XEN) - Maximum PASID: 0xf
[2020-12-24 08:34:49] (XEN) - SMI Filter Register: 0x1
[2020-12-24 08:34:49] (XEN) - SMI Filter Register Count: 0x1
[2020-12-24 08:34:49] (XEN) - Guest Virtual APIC Modes: 0x1
[2020-12-24 08:34:49] (XEN) - Dual PPR Log: 0x2
[2020-12-24 08:34:49] (XEN) - Dual Event Log: 0x2
[2020-12-24 08:34:49] (XEN) - User / Supervisor Page Protection
[2020-12-24 08:34:49] (XEN) - Device Table Segmentation: 0x3
[2020-12-24 08:34:49] (XEN) - PPR Log Overflow Early Warning
[2020-12-24 08:34:49] (XEN) - PPR Automatic Response
[2020-12-24 08:34:49] (XEN) - Memory Access Routing and Control: 0
[2020-12-24 08:34:49] (XEN) - Block StopMark Message
[2020-12-24 08:34:49] (XEN) - Performance Optimization
[2020-12-24 08:34:49] (XEN) - MSI Capability MMIO Access
[2020-12-24 08:34:49] (XEN) - Guest I/O Protection
[2020-12-24 08:34:49] (XEN) - Enhanced PPR Handling
[2020-12-24 08:34:49] (XEN) - Attribute Forward
[2020-12-24 08:34:49] (XEN) - Invalidate IOTLB Type
[2020-12-24 08:34:49] (XEN) - VM Table Size: 0
[2020-12-24 08:34:49] (XEN) - Guest Access Bit Update Disable
[2020-12-24 08:34:49] (XEN) AMD-Vi: IOMMU 0 Enabled.
[2020-12-24 08:34:49] (XEN) I/O virtualisation enabled
[2020-12-24 08:34:49] (XEN)  - Dom0 mode: Relaxed
[2020-12-24 08:34:49] (XEN) Interrupt remapping enabled
[2020-12-24 08:34:49] (XEN) ENABLING IO-APIC IRQs
[2020-12-24 08:34:49] (XEN)  -> Using new ACK method
[2020-12-24 08:34:49] (XEN) Allocated console ring of 32 KiB.
[2020-12-24 08:34:49] (XEN) HVM: ASIDs enabled.
[2020-12-24 08:34:49] (XEN) SVM: Supported advanced features:
[2020-12-24 08:34:49] (XEN)  - Nested Page Tables (NPT)
[2020-12-24 08:34:49] (XEN)  - Last Branch Record (LBR) Virtualisation
[2020-12-24 08:34:49] (XEN)  - Next-RIP Saved on #VMEXIT
[2020-12-24 08:34:49] (XEN)  - VMCB Clean Bits
[2020-12-24 08:34:49] (XEN)  - DecodeAssists
[2020-12-24 08:34:49] (XEN)  - Virtual VMLOAD/VMSAVE
[2020-12-24 08:34:49] (XEN)  - Virtual GIF
[2020-12-24 08:34:49] (XEN)  - Pause-Intercept Filter
[2020-12-24 08:34:49] (XEN)  - Pause-Intercept Filter Threshold
[2020-12-24 08:34:49] (XEN)  - TSC Rate MSR
[2020-12-24 08:34:49] (XEN) HVM: SVM enabled
[2020-12-24 08:34:49] (XEN) HVM: Hardware Assisted Paging (HAP) detected
[2020-12-24 08:34:49] (XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
[2020-12-24 08:34:49] (XEN) CPU 1 still not dead...
[2020-12-24 08:34:49] (XEN) CPU 1 still not dead...
[2020-12-24 08:34:49] (XEN) Brought up 8 CPUs
[2020-12-24 08:34:49] (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
[2020-12-24 08:34:49] (XEN) xenoprof: Initialization failed. AMD processor family 23 is not supported
[2020-12-24 08:34:49] (XEN) TSC warp detected, disabling TSC_RELIABLE
[2020-12-24 08:34:49] (XEN) Dom0 has maximum 264 PIRQs
[2020-12-24 08:34:49] (XEN)  Xen  kernel: 64-bit, lsb, compat32
[2020-12-24 08:34:49] (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x3e00000
[2020-12-24 08:34:49] (XEN) PHYSICAL MEMORY ARRANGEMENT:
[2020-12-24 08:34:49] (XEN)  Dom0 alloc.:   000000078c000000->0000000790000000 (1022908 pages to be allocated)
[2020-12-24 08:34:49] (XEN)  Init. ramdisk: 00000007acdbc000->00000007af1ff5cd
[2020-12-24 08:34:49] (XEN) VIRTUAL MEMORY ARRANGEMENT:
[2020-12-24 08:34:49] (XEN)  Loaded kernel: ffffffff81000000->ffffffff83e00000
[2020-12-24 08:34:49] (XEN)  Init. ramdisk: 0000000000000000->0000000000000000
[2020-12-24 08:34:49] (XEN)  Phys-Mach map: 0000008000000000->0000008000800000
[2020-12-24 08:34:49] (XEN)  Start info:    ffffffff83e00000->ffffffff83e004b8
[2020-12-24 08:34:49] (XEN)  Xenstore ring: 0000000000000000->0000000000000000
[2020-12-24 08:34:49] (XEN)  Console ring:  0000000000000000->0000000000000000
[2020-12-24 08:34:49] (XEN)  Page tables:   ffffffff83e01000->ffffffff83e24000
[2020-12-24 08:34:49] (XEN)  Boot stack:    ffffffff83e24000->ffffffff83e25000
[2020-12-24 08:34:49] (XEN)  TOTAL:         ffffffff80000000->ffffffff84000000
[2020-12-24 08:34:49] (XEN)  ENTRY ADDRESS: ffffffff83128180
[2020-12-24 08:34:49] (XEN) Dom0 has maximum 1 VCPUs
[2020-12-24 08:34:49] (XEN) Initial low memory virq threshold set at 0x4000 pages.
[2020-12-24 08:34:49] (XEN) Scrubbing Free RAM in background
[2020-12-24 08:34:49] (XEN) Std. Loglevel: Errors and warnings
[2020-12-24 08:34:49] (XEN) Guest Loglevel: Nothing (Rate-limited: Errors and warnings)
[2020-12-24 08:34:49] (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
[2020-12-24 08:34:49] (XEN) Freed 536kB init memory
```

> In absence of that, I only have more questions. :-/ E.g., how are you
> enabling and disabling SMT, via the command line parameter, or via
> BIOS?

Currently via Xen's command line; the Lenovo X13's BIOS doesn't have an option to disable SMT :(
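
For reference, a sketch of the kind of change involved; the file paths and GRUB variable name are assumptions about a typical GRUB2-based setup (as Fedora/Qubes uses), not details taken from this thread:

```shell
# Hypothetical sketch: disabling SMT via the hypervisor command line when
# the firmware offers no toggle. GRUB_CMDLINE_XEN_DEFAULT is the standard
# GRUB2 variable for Xen boot options on such systems.
echo 'GRUB_CMDLINE_XEN_DEFAULT="smt=off"' | sudo tee -a /etc/default/grub
sudo grub2-mkconfig -o /boot/grub2/grub.cfg

# After rebooting, check that the option took effect:
xl dmesg | grep -i 'smt'
xl info | grep threads_per_core   # expect 1 with SMT disabled
```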

> Also, can you perhaps try either upstream 4.14 Xen (from sources, I
> mean) or the packages for a distro different than QubesOS (perhaps
> installing such distro, temporarily, in an external HD or whatever).

I can give this a shot. I'll try to dump out as much logging as I can, then I'll try building from the master branch of Xen.
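
For anyone wanting to follow along, the usual upstream build steps look roughly like this; treat it as a sketch (build prerequisites vary by distro, and installing the result alongside a distro's packaged Xen needs care):

```shell
# Build the hypervisor from the upstream master branch.
git clone https://xenbits.xen.org/git-http/xen.git
cd xen
./configure
make -j"$(nproc)" xen      # hypervisor only; 'make world' also builds the tools

# The resulting binary is xen/xen.gz; it has to be copied into /boot and
# added as a bootloader entry by hand (or via the distro's grub hooks).
```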

> I can try Credit as well, later, but if this is something CPU arch/gen
> related, it seems to be a Ryzen rather than a Zen 2 thing...

Absolutely, I agree: this seems to be Ryzen 3000/4000 related. Another Qubes user running a Ryzen 9 3900X CPU appears to be hitting the same issue: https://imgur.com/a/EYOMmRe


From xen-devel-bounces@lists.xenproject.org Wed Dec 23 22:47:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 23 Dec 2020 22:47:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58609.103244 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksCuC-0001Jm-Fb; Wed, 23 Dec 2020 22:46:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58609.103244; Wed, 23 Dec 2020 22:46:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksCuC-0001Jf-CY; Wed, 23 Dec 2020 22:46:52 +0000
Received: by outflank-mailman (input) for mailman id 58609;
 Wed, 23 Dec 2020 22:46:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksCuA-0001JX-Q8; Wed, 23 Dec 2020 22:46:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksCuA-0000MO-I3; Wed, 23 Dec 2020 22:46:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksCuA-0007sD-63; Wed, 23 Dec 2020 22:46:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ksCuA-0004Hp-5a; Wed, 23 Dec 2020 22:46:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cgesvapXzXDXd8tOn+jlgjBUyEC+XTNi2S9p/O6qS8k=; b=1RbWvBmykKClyiuuonZuCRgRzA
	dDZkiU+VGIAd2HVXCGYNeCdEyaVS1BhmuiWHWGkyr9UAA6NZ2m1g/pk7ccy+ZuGact6i7uoW11w+E
	67r7WQRdNMT618mwq1V4UKqVYAPHUX2d9tde5ZX7c0mmHIP4KF8QDno3K3/alPVg6NQw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157852-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157852: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a05f8ecd88f15273d033b6f044b850a8af84a5b8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 23 Dec 2020 22:46:50 +0000

flight 157852 qemu-mainline real [real]
flight 157861 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157852/
http://logs.test-lab.xenproject.org/osstest/logs/157861/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  125 days
Failing since        152659  2020-08-21 14:07:39 Z  124 days  255 attempts
Testing same since   157670  2020-12-18 13:57:58 Z    5 days    8 attempts

------------------------------------------------------------
323 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 79306 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 24 00:42:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Dec 2020 00:42:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58619.103265 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksEhL-0003kn-TH; Thu, 24 Dec 2020 00:41:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58619.103265; Thu, 24 Dec 2020 00:41:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksEhL-0003kg-QD; Thu, 24 Dec 2020 00:41:43 +0000
Received: by outflank-mailman (input) for mailman id 58619;
 Thu, 24 Dec 2020 00:41:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksEhL-0003kY-3r; Thu, 24 Dec 2020 00:41:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksEhK-0002oM-Rm; Thu, 24 Dec 2020 00:41:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksEhK-0004dS-F9; Thu, 24 Dec 2020 00:41:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ksEhK-0001Cc-Ed; Thu, 24 Dec 2020 00:41:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=24yqSLYnm0jqgfAhlP4DkvaQhJzdVkcBcPn7r35m2gk=; b=NahaqdgyE7sA+jA1caR1iEG3pM
	CMgw6OlfcYO52kc6ljTgHsb8KV7kTaAdG++DFhPjmUHelDVlgfHafwgVtidjOtggKMi6TveROlBan
	r1gyJQ2bPW/1xetK/RhiRLN5IsHG9MpUjOacn4L0qFnrq3oaaqurtoFRaZmDqFDSthsc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157854-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157854: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-install:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=614cb5894306cfa2c7d9b6168182876ff5948735
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 24 Dec 2020 00:41:42 +0000

flight 157854 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157854/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  12 debian-install           fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl     10 host-ping-check-xen fail in 157842 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit2 10 host-ping-check-xen fail in 157842 pass in 157854
 test-arm64-arm64-libvirt-xsm  8 xen-boot         fail in 157842 pass in 157854
 test-arm64-arm64-xl-seattle   8 xen-boot                   fail pass in 157842
 test-arm64-arm64-xl           8 xen-boot                   fail pass in 157842

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle 11 leak-check/basis(11) fail in 157842 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                614cb5894306cfa2c7d9b6168182876ff5948735
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  145 days
Failing since        152366  2020-08-01 20:49:34 Z  144 days  249 attempts
Testing same since   157842  2020-12-23 00:42:00 Z    0 days    2 attempts

------------------------------------------------------------
4313 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 968560 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 24 07:19:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Dec 2020 07:19:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58635.103292 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksKtl-0002eH-Io; Thu, 24 Dec 2020 07:18:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58635.103292; Thu, 24 Dec 2020 07:18:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksKtl-0002eA-Fg; Thu, 24 Dec 2020 07:18:57 +0000
Received: by outflank-mailman (input) for mailman id 58635;
 Thu, 24 Dec 2020 07:18:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksKtk-0002e2-Cc; Thu, 24 Dec 2020 07:18:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksKtk-000829-3b; Thu, 24 Dec 2020 07:18:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksKtj-0006RX-Qx; Thu, 24 Dec 2020 07:18:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ksKtj-0006DS-QQ; Thu, 24 Dec 2020 07:18:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hbLXkT2cPrT+yQB98Pe9SBbn6EDzRctS8sHtxkecBUA=; b=kiZg1bnRJSd3Cgm8wPXpxVb028
	vtaO+UlQUFi3Aw7VZrr0ZotaACjJ4sOvRPlNnl8jgKz073UwpK3vYaXbNSTousvXvlXLRFGVh8PwY
	QCadTSggI+h2GzErAcJjUBZXEfKTJLY71pK0Nci7RlxC3ExxJOiVmGducnZcc7Jujb+M=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157867-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157867: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=bed50bcbbb2796aa88f1c85a2424fe9bd7944f89
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 24 Dec 2020 07:18:55 +0000

flight 157867 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157867/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              bed50bcbbb2796aa88f1c85a2424fe9bd7944f89
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  167 days
Failing since        151818  2020-07-11 04:18:52 Z  166 days  161 attempts
Testing same since   157715  2020-12-19 04:19:22 Z    5 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 33734 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 24 09:57:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Dec 2020 09:57:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58660.103317 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksNMq-000078-Rl; Thu, 24 Dec 2020 09:57:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58660.103317; Thu, 24 Dec 2020 09:57:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksNMq-000071-Oh; Thu, 24 Dec 2020 09:57:08 +0000
Received: by outflank-mailman (input) for mailman id 58660;
 Thu, 24 Dec 2020 09:57:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ksNMp-00006v-Qq
 for xen-devel@lists.xenproject.org; Thu, 24 Dec 2020 09:57:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ksNMn-0002jL-IN; Thu, 24 Dec 2020 09:57:05 +0000
Received: from gw1.octic.net ([81.187.162.82] helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ksNMn-0003W4-A2; Thu, 24 Dec 2020 09:57:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=2SpNjW3sD0Axl+ITSM5G9zkjGRjVX81oW9IkwDFCpoc=; b=d4SwOX2XbwKRZvisBE7inROc3v
	L/iteJwqpsDLchvmJo5lM0amLEfqHJxghdsUz76X4WZVmZZTWX5CUW8qho38GrDr3j7orNlPMWcmd
	peJArrB7hxaxRzJrC/UsZUHODDoS2mYRfEgs70hGHQDGh3nc/09OfrHUfR5k1kWZjpAc=;
Subject: Re: [PATCH v2] gnttab: defer allocation of status frame tracking
 array
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <57dc915c-c373-5003-80f7-279dd300d571@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <bab2f11f-dd59-87fb-1311-2732a71543d0@xen.org>
Date: Thu, 24 Dec 2020 09:57:02 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <57dc915c-c373-5003-80f7-279dd300d571@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 23/12/2020 15:13, Jan Beulich wrote:
> This array can be large when many grant frames are permitted; avoid
> allocating it when it's not going to be used anyway, by doing this only
> in gnttab_populate_status_frames().
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: Defer allocation to when a domain actually switches to the v2 grant
>      API.
> 
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -1725,6 +1728,17 @@ gnttab_populate_status_frames(struct dom
>       /* Make sure, prior version checks are architectural visible */
>       block_speculation();
>   
> +    if ( gt->status == ZERO_BLOCK_PTR )
> +    {
> +        gt->status = xzalloc_array(grant_status_t *,
> +                                   grant_to_status_frames(gt->max_grant_frames));
> +        if ( !gt->status )
> +        {
> +            gt->status = ZERO_BLOCK_PTR;
> +            return -ENOMEM;
> +        }
> +    }
> +
>       for ( i = nr_status_frames(gt); i < req_status_frames; i++ )
>       {
>           if ( (gt->status[i] = alloc_xenheap_page()) == NULL )
> @@ -1745,18 +1759,23 @@ status_alloc_failed:
>           free_xenheap_page(gt->status[i]);
>           gt->status[i] = NULL;
>       }
> +    if ( !nr_status_frames(gt) )
> +    {
> +        xfree(gt->status);
> +        gt->status = ZERO_BLOCK_PTR;
> +    }
>       return -ENOMEM;
>   }
>   
>   static int
>   gnttab_unpopulate_status_frames(struct domain *d, struct grant_table *gt)
>   {
> -    unsigned int i;
> +    unsigned int i, n = nr_status_frames(gt);
>   
>       /* Make sure, prior version checks are architectural visible */
>       block_speculation();
>   
> -    for ( i = 0; i < nr_status_frames(gt); i++ )
> +    for ( i = 0; i < n; i++ )
>       {
>           struct page_info *pg = virt_to_page(gt->status[i]);
>           gfn_t gfn = gnttab_get_frame_gfn(gt, true, i);
> @@ -1811,12 +1830,12 @@ gnttab_unpopulate_status_frames(struct d
>           page_set_owner(pg, NULL);
>       }
>   
> -    for ( i = 0; i < nr_status_frames(gt); i++ )
> -    {
> -        free_xenheap_page(gt->status[i]);
> -        gt->status[i] = NULL;
> -    }
>       gt->nr_status_frames = 0;
> +    smp_wmb(); /* Just in case - all accesses should be under lock. */

I think gt->status cannot be accessed locklessly. If an entity reads 
gt->nr_status_frames != 0, there is no promise the array will still be 
accessible by the time it tries to access it, as it may have been freed.

So I would drop the barrier here.
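To illustrate, here is a minimal single-threaded sketch of the check-then-use hazard (the struct layout and field names are illustrative only, not Xen's actual grant table code): a write barrier orders the two stores for other CPUs, but it cannot retract an observation a lockless reader has already made.

```c
#include <stdlib.h>

/* Illustrative model: a count plus a pointer array, freed together. */
struct table { unsigned int nr_frames; int **status; };

static unsigned int stale_check_then_use(void)
{
    struct table gt = { .nr_frames = 1, .status = malloc(sizeof(int *)) };
    gt.status[0] = malloc(sizeof(int));

    /* Lockless reader, step 1: observes nr_frames != 0. */
    unsigned int observed = gt.nr_frames;

    /* The unpopulate path runs to completion in between.  A write
     * barrier between these stores would order them for other CPUs,
     * but the reader's earlier observation is already stale. */
    gt.nr_frames = 0;
    free(gt.status[0]);
    free(gt.status);
    gt.status = NULL;

    /* Reader, step 2: dereferencing gt.status here would be a
     * use-after-free; only a lock spanning both steps prevents it. */
    return observed;
}
```

Since the barrier cannot close this window, the check and the accesses it guards have to happen under the same lock, which is why the barrier adds nothing here.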

The rest of the code looks good to me.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Dec 24 10:20:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Dec 2020 10:20:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58666.103329 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksNjO-0002kX-QM; Thu, 24 Dec 2020 10:20:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58666.103329; Thu, 24 Dec 2020 10:20:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksNjO-0002kQ-NO; Thu, 24 Dec 2020 10:20:26 +0000
Received: by outflank-mailman (input) for mailman id 58666;
 Thu, 24 Dec 2020 10:20:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksNjN-0002kI-I4; Thu, 24 Dec 2020 10:20:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksNjN-0003AW-9c; Thu, 24 Dec 2020 10:20:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksNjM-0008Ks-VS; Thu, 24 Dec 2020 10:20:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ksNjM-0001Eo-Uv; Thu, 24 Dec 2020 10:20:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ttUQRQOwtINqwy8cdKMcJa1H+EuqGtHvktYl8HgSKlE=; b=LRLlJpndX8q183y4YlJkId3ANs
	a/XFVxFL3mJzFCBvQWfbQqpo8UIDkVPaWOCMqVMXuIDKZ6EE4V5bwn2QkskCt58l4nDogpV1Wi+DQ
	V9MWZzByCRhmcaVdbnfqnj0hhw6GFRM/QEJbILX/uuVC7zqmoimO3B5fC2h0+MMTb2vM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157864-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157864: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=58cf05f597b03a8212d9ecf2c79ee046d3ee8ad9
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 24 Dec 2020 10:20:24 +0000

flight 157864 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157864/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl          11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                58cf05f597b03a8212d9ecf2c79ee046d3ee8ad9
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  145 days
Failing since        152366  2020-08-01 20:49:34 Z  144 days  250 attempts
Testing same since   157864  2020-12-24 01:10:27 Z    0 days    1 attempts

------------------------------------------------------------
4318 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 969560 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 24 10:35:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Dec 2020 10:35:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58690.103376 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksNy8-0003xL-PV; Thu, 24 Dec 2020 10:35:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58690.103376; Thu, 24 Dec 2020 10:35:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksNy8-0003xE-Ll; Thu, 24 Dec 2020 10:35:40 +0000
Received: by outflank-mailman (input) for mailman id 58690;
 Thu, 24 Dec 2020 10:35:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksNy7-0003x3-Cb; Thu, 24 Dec 2020 10:35:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksNy7-0003QC-46; Thu, 24 Dec 2020 10:35:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksNy6-0000KM-Qk; Thu, 24 Dec 2020 10:35:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ksNy6-0000x0-QG; Thu, 24 Dec 2020 10:35:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=anpBmvxMvrjJeD/H/mQvb5f/XZL3hbaLw/Hc037qFKY=; b=zikdIUITmUeC4Cj9AvzcYBmOWa
	BQNAo3XghQ67XP2Y6nU4G8qPMH3KrNZGlnN/iEo0eHdfuroSkB31RCRvah/nOoK+LKVaG3+Boo1L/
	SfzMUYBHYZJ0V+Cbrzp+dEUZbZoOio9oXI7XN14cyTEcrkeXMYU2zc31ET3u6FAlz6LM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157862-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157862: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a05f8ecd88f15273d033b6f044b850a8af84a5b8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 24 Dec 2020 10:35:38 +0000

flight 157862 qemu-mainline real [real]
flight 157870 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157862/
http://logs.test-lab.xenproject.org/osstest/logs/157870/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  126 days
Failing since        152659  2020-08-21 14:07:39 Z  124 days  256 attempts
Testing same since   157670  2020-12-18 13:57:58 Z    5 days    9 attempts

------------------------------------------------------------
323 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 79306 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 24 11:53:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Dec 2020 11:53:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58702.103421 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksPBq-0002aY-KN; Thu, 24 Dec 2020 11:53:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58702.103421; Thu, 24 Dec 2020 11:53:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksPBq-0002aR-HD; Thu, 24 Dec 2020 11:53:54 +0000
Received: by outflank-mailman (input) for mailman id 58702;
 Thu, 24 Dec 2020 11:53:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5Kri=F4=merlin.srs.infradead.org=batv+2cb6e212460c613e05ba+6332+infradead.org+dwmw2@srs-us1.protection.inumbo.net>)
 id 1ksPBp-0002WC-8w
 for xen-devel@lists.xenproject.org; Thu, 24 Dec 2020 11:53:53 +0000
Received: from merlin.infradead.org (unknown [2001:8b0:10b:1231::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b450edf5-14d2-46e0-beb6-64d5732f1582;
 Thu, 24 Dec 2020 11:53:38 +0000 (UTC)
Received: from i7.infradead.org ([2001:8b0:10b:1:21e:67ff:fecb:7a92])
 by merlin.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1ksPBO-0002az-Hj; Thu, 24 Dec 2020 11:53:26 +0000
Received: from dwoodhou by i7.infradead.org with local (Exim 4.94 #2 (Red Hat
 Linux)) id 1ksPBN-00Er3m-GF; Thu, 24 Dec 2020 11:53:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: b450edf5-14d2-46e0-beb6-64d5732f1582
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=merlin.20170209; h=Sender:Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	Reply-To:Content-Type:Content-ID:Content-Description;
	bh=zk4UYemafseG/MJkQtPNYJOGxL13WedQ005wS02nvsk=; b=YOiTNNDVcCi71gJnkmi5I3nAWT
	/Q+4A66AfywOS5VtbWG0k9kR9csEvksh+dtmcviONDQRV0FOeJYD0jhbhteng8kgZBlfOz0TeQ55F
	BaqqZZQltTwzcRvUPQx0i5SQLoxdMUDqQ9RnrSOsWQisrlZv9688B7ZvgZXKzf2Qt408OkxqGbFkU
	T2u91UjRw3E7CxBwaYQVqKoBlEFVFUlAVnvunwhcfs011ctXe/fiRuOZNy1av1UuV9dwQZFscuAOi
	srq+n/nKV5lmXdVWisRMJS/KxR50shTeTftbnTFSu9YUf08ajZl0Og9nI7nsHQzk0ZAp/WX3YjPip
	PqCvHiZA==;
From: David Woodhouse <dwmw2@infradead.org>
To: x86@kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Paul Durrant <pdurrant@amazon.com>,
	jgrall@amazon.com,
	karahmed@amazon.de,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 2/5] xen: Set platform PCI device INTX affinity to CPU0
Date: Thu, 24 Dec 2020 11:53:20 +0000
Message-Id: <20201224115323.3540130-3-dwmw2@infradead.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201224115323.3540130-1-dwmw2@infradead.org>
References: <20201224115323.3540130-1-dwmw2@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Sender: David Woodhouse <dwmw2@infradead.org>
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by merlin.infradead.org. See http://www.infradead.org/rpr.html

From: David Woodhouse <dwmw@amazon.co.uk>

With INTX or GSI delivery, Xen uses the event channel structures of CPU0.

If the interrupt gets handled by Linux on a different CPU, then no events
are seen as pending. Rather than introducing locking to allow other CPUs
to process CPU0's events, just ensure that the PCI interrupt happens
only on CPU0.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 drivers/xen/platform-pci.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c
index 9db557b76511..18f0ed8b1f93 100644
--- a/drivers/xen/platform-pci.c
+++ b/drivers/xen/platform-pci.c
@@ -132,6 +132,13 @@ static int platform_pci_probe(struct pci_dev *pdev,
 			dev_warn(&pdev->dev, "request_irq failed err=%d\n", ret);
 			goto out;
 		}
+		/*
+		 * It doesn't strictly *have* to run on CPU0 but it sure
+		 * as hell better process the event channel ports delivered
+		 * to CPU0.
+		 */
+		irq_set_affinity(pdev->irq, cpumask_of(0));
+
 		callback_via = get_callback_via(pdev);
 		ret = xen_set_callback_via(callback_via);
 		if (ret) {
-- 
2.26.2
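The reasoning in the patch above comes down to a single-CPU affinity mask: events delivered via INTX land in CPU0's per-vcpu structures, so the handler must be allowed to run only there. A minimal userspace sketch of that mask logic (helper names are illustrative, not the kernel API):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for the kernel's cpumask_of(0): a bitmask
 * with only the bit for CPU 0 set. */
static inline uint64_t cpumask_of_cpu0(void)
{
	return 1ull << 0;
}

/* An IRQ may be handled on a given CPU only if that CPU's bit is set
 * in the affinity mask. With cpumask_of_cpu0() the answer is "CPU 0
 * only", which is what irq_set_affinity(pdev->irq, cpumask_of(0))
 * arranges in the patch. */
static inline int irq_may_run_on(uint64_t affinity_mask, unsigned int cpu)
{
	return (int)((affinity_mask >> cpu) & 1);
}
```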



From xen-devel-bounces@lists.xenproject.org Thu Dec 24 11:53:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Dec 2020 11:53:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58701.103409 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksPBl-0002Xj-CU; Thu, 24 Dec 2020 11:53:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58701.103409; Thu, 24 Dec 2020 11:53:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksPBl-0002Xc-9I; Thu, 24 Dec 2020 11:53:49 +0000
Received: by outflank-mailman (input) for mailman id 58701;
 Thu, 24 Dec 2020 11:53:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5Kri=F4=merlin.srs.infradead.org=batv+2cb6e212460c613e05ba+6332+infradead.org+dwmw2@srs-us1.protection.inumbo.net>)
 id 1ksPBk-0002WC-8u
 for xen-devel@lists.xenproject.org; Thu, 24 Dec 2020 11:53:48 +0000
Received: from merlin.infradead.org (unknown [2001:8b0:10b:1231::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 839ae6fa-46d6-4ec2-9691-18c61f485f5d;
 Thu, 24 Dec 2020 11:53:38 +0000 (UTC)
Received: from i7.infradead.org ([2001:8b0:10b:1:21e:67ff:fecb:7a92])
 by merlin.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1ksPBO-0002b1-JE; Thu, 24 Dec 2020 11:53:26 +0000
Received: from dwoodhou by i7.infradead.org with local (Exim 4.94 #2 (Red Hat
 Linux)) id 1ksPBN-00Er3w-Hp; Thu, 24 Dec 2020 11:53:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 839ae6fa-46d6-4ec2-9691-18c61f485f5d
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=merlin.20170209; h=Sender:Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	Reply-To:Content-Type:Content-ID:Content-Description;
	bh=529q7/1gaAJwNhvRKFVZNAtSN1/LIWxT+b3mgEbuAUg=; b=NICvtL4896QMhOF1HTfxSmrS72
	p8d8PA2yocHzLDIM3daetiJEmh4/NglWvaNG3qsGW+4SHQmC4x7CZmZ6IoaALTuRJ7QntYlY52FEc
	H/XwQaYtBOtDdYi6anterI6y7VZ3B3ASgDnJBVoIrClNAH40YAF/sGJ1FRnUmGQozDD3l51AX9i+n
	/KQtzoyhrjDO8DFESvLXDx9r1ViHK8C9XaUhLshO7d7Etxv6iG65UXfxjPpIS/gumxmpZ43ttID3p
	VWxOJ/vZIHR+NZl5OSv+teiaf8k3DXJKDX9n+ivm5NEfLLD0cr/6114tLiqhjAAmoIkWLBMAXhdRt
	YAM0hMjg==;
From: David Woodhouse <dwmw2@infradead.org>
To: x86@kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Paul Durrant <pdurrant@amazon.com>,
	jgrall@amazon.com,
	karahmed@amazon.de,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 5/5] x86/xen: Don't register PV spinlock IPI when it isn't going to be used
Date: Thu, 24 Dec 2020 11:53:23 +0000
Message-Id: <20201224115323.3540130-6-dwmw2@infradead.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201224115323.3540130-1-dwmw2@infradead.org>
References: <20201224115323.3540130-1-dwmw2@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Sender: David Woodhouse <dwmw2@infradead.org>
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by merlin.infradead.org. See http://www.infradead.org/rpr.html

From: David Woodhouse <dwmw@amazon.co.uk>

When xen_have_vector_callback is false, we still register the PV spinlock
kicker IPI on the secondary CPUs. Stop doing that.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 arch/x86/xen/spinlock.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 799f4eba0a62..b240ea483e63 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -68,7 +68,7 @@ void xen_init_lock_cpu(int cpu)
 	int irq;
 	char *name;
 
-	if (!xen_pvspin)
+	if (!xen_pvspin || !xen_have_vector_callback)
 		return;
 
 	WARN(per_cpu(lock_kicker_irq, cpu) >= 0, "spinlock on CPU%d exists on IRQ%d!\n",
@@ -93,7 +93,7 @@ void xen_init_lock_cpu(int cpu)
 
 void xen_uninit_lock_cpu(int cpu)
 {
-	if (!xen_pvspin)
+	if (!xen_pvspin || !xen_have_vector_callback)
 		return;
 
 	unbind_from_irqhandler(per_cpu(lock_kicker_irq, cpu), NULL);
@@ -115,7 +115,7 @@ PV_CALLEE_SAVE_REGS_THUNK(xen_vcpu_stolen);
 void __init xen_init_spinlocks(void)
 {
 	/*  Don't need to use pvqspinlock code if there is only 1 vCPU. */
-	if (num_possible_cpus() == 1 || nopvspin)
+	if (num_possible_cpus() == 1 || nopvspin || !xen_have_vector_callback)
 		xen_pvspin = false;
 
 	if (!xen_pvspin) {
-- 
2.26.2
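The three hunks above all add the same disqualifier. Distilled as a standalone predicate (a sketch of the gate in xen_init_spinlocks(), not kernel code):

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the decision in xen_init_spinlocks() after this patch:
 * the PV spinlock path stays enabled only when there is more than one
 * possible CPU, "nopvspin" was not given on the command line, and a
 * vector callback is available to deliver the kicker IPI. */
static bool use_pvspin(unsigned int num_possible_cpus, bool nopvspin,
		       bool have_vector_callback)
{
	if (num_possible_cpus == 1 || nopvspin || !have_vector_callback)
		return false;
	return true;
}
```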



From xen-devel-bounces@lists.xenproject.org Thu Dec 24 11:53:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Dec 2020 11:53:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58700.103397 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksPBh-0002WO-4c; Thu, 24 Dec 2020 11:53:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58700.103397; Thu, 24 Dec 2020 11:53:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksPBh-0002WH-1G; Thu, 24 Dec 2020 11:53:45 +0000
Received: by outflank-mailman (input) for mailman id 58700;
 Thu, 24 Dec 2020 11:53:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5Kri=F4=merlin.srs.infradead.org=batv+2cb6e212460c613e05ba+6332+infradead.org+dwmw2@srs-us1.protection.inumbo.net>)
 id 1ksPBf-0002WC-Du
 for xen-devel@lists.xenproject.org; Thu, 24 Dec 2020 11:53:44 +0000
Received: from merlin.infradead.org (unknown [2001:8b0:10b:1231::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 186c498a-2b40-4594-ae2b-41b1da3537ec;
 Thu, 24 Dec 2020 11:53:39 +0000 (UTC)
Received: from i7.infradead.org ([2001:8b0:10b:1:21e:67ff:fecb:7a92])
 by merlin.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1ksPBO-0002b0-ID; Thu, 24 Dec 2020 11:53:26 +0000
Received: from dwoodhou by i7.infradead.org with local (Exim 4.94 #2 (Red Hat
 Linux)) id 1ksPBN-00Er3p-Gl; Thu, 24 Dec 2020 11:53:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 186c498a-2b40-4594-ae2b-41b1da3537ec
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=merlin.20170209; h=Sender:Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	Reply-To:Content-Type:Content-ID:Content-Description;
	bh=O3AtkiW5GTt6yxuiaTLnSJ1UIhL9jkFSRWxqdPyBQOk=; b=sWR9MZNWda/BBmFcp64YqtZ00E
	4V4+DwxHPnL8bLzOe6a3GelxX58UVJamvHqE4ViZl592hb49tYRUtDHGLmC0PWs4U8KE1lF5tcD5c
	GaIDIwo+VVocX3BnTEyDrDW/VTvbKOTNpdixlLNYcphIxUag+PhA7rk9gR5ZAsSSHi7/WorD358J5
	o3Thps3ZSEYhqopE+5aikVYvbBKDnWz0Q0amjd8fKrveC2GXrKIzPEsDsVfDvxpWQ4soIAdp2M2oP
	GXKiQ7WScUa7BXhYpA5ve8mXjGmNeNdN+LL3DFvuKPeVAyDGUywsFpW520hofBBpmIjpIM36JyQdX
	wVXk806Q==;
From: David Woodhouse <dwmw2@infradead.org>
To: x86@kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Paul Durrant <pdurrant@amazon.com>,
	jgrall@amazon.com,
	karahmed@amazon.de,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 3/5] x86/xen: Add a no_vector_callback option to test PCI INTX delivery
Date: Thu, 24 Dec 2020 11:53:21 +0000
Message-Id: <20201224115323.3540130-4-dwmw2@infradead.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201224115323.3540130-1-dwmw2@infradead.org>
References: <20201224115323.3540130-1-dwmw2@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Sender: David Woodhouse <dwmw2@infradead.org>
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by merlin.infradead.org. See http://www.infradead.org/rpr.html

From: David Woodhouse <dwmw@amazon.co.uk>

It's useful to be able to test non-vector event channel delivery, to make
sure Linux will work properly on older Xen versions that don't support it.

It's also useful for those working on Xen and Xen-compatible hypervisors,
because there are guest kernels still in active use which use PCI INTX
even when vector delivery is available.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 arch/x86/xen/enlighten_hvm.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
index 9e87ab010c82..a1c07e0c888e 100644
--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -188,6 +188,8 @@ static int xen_cpu_dead_hvm(unsigned int cpu)
        return 0;
 }
 
+static bool no_vector_callback __initdata;
+
 static void __init xen_hvm_guest_init(void)
 {
 	if (xen_pv_domain())
@@ -207,7 +209,7 @@ static void __init xen_hvm_guest_init(void)
 
 	xen_panic_handler_init();
 
-	if (xen_feature(XENFEAT_hvm_callback_vector))
+	if (!no_vector_callback && xen_feature(XENFEAT_hvm_callback_vector))
 		xen_have_vector_callback = 1;
 
 	xen_hvm_smp_init();
@@ -233,6 +235,13 @@ static __init int xen_parse_nopv(char *arg)
 }
 early_param("xen_nopv", xen_parse_nopv);
 
+static __init int xen_parse_no_vector_callback(char *arg)
+{
+	no_vector_callback = true;
+	return 0;
+}
+early_param("no_vector_callback", xen_parse_no_vector_callback);
+
 bool __init xen_hvm_need_lapic(void)
 {
 	if (xen_pv_domain())
-- 
2.26.2
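The net effect of the patch above is that the vector callback is used only when both the hypervisor feature and the absence of the new option agree. A minimal sketch of that gate (standalone, with illustrative names):

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the gate added to xen_hvm_guest_init(): the vector
 * callback is enabled only when the hypervisor advertises
 * XENFEAT_hvm_callback_vector AND the "no_vector_callback" command
 * line option was not given, forcing the INTX fallback for testing. */
static bool use_vector_callback(bool no_vector_callback,
				bool feat_hvm_callback_vector)
{
	return !no_vector_callback && feat_hvm_callback_vector;
}
```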



From xen-devel-bounces@lists.xenproject.org Thu Dec 24 11:54:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Dec 2020 11:54:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58703.103433 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksPBv-0002eg-UZ; Thu, 24 Dec 2020 11:53:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58703.103433; Thu, 24 Dec 2020 11:53:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksPBv-0002eX-Qg; Thu, 24 Dec 2020 11:53:59 +0000
Received: by outflank-mailman (input) for mailman id 58703;
 Thu, 24 Dec 2020 11:53:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OdH5=F4=casper.srs.infradead.org=batv+cd7a9cfaa2a0215fac24+6332+infradead.org+dwmw2@srs-us1.protection.inumbo.net>)
 id 1ksPBu-0002WC-9F
 for xen-devel@lists.xenproject.org; Thu, 24 Dec 2020 11:53:58 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a9c3c9f7-f28f-4b01-8d0b-17da43b60027;
 Thu, 24 Dec 2020 11:53:39 +0000 (UTC)
Received: from i7.infradead.org ([2001:8b0:10b:1:21e:67ff:fecb:7a92])
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1ksPBN-0001IL-UT; Thu, 24 Dec 2020 11:53:26 +0000
Received: from dwoodhou by i7.infradead.org with local (Exim 4.94 #2 (Red Hat
 Linux)) id 1ksPBN-00Er3t-HI; Thu, 24 Dec 2020 11:53:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: a9c3c9f7-f28f-4b01-8d0b-17da43b60027
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Sender:Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	Reply-To:Content-Type:Content-ID:Content-Description;
	bh=IRb1VunvIiJxLPUFQ4utXydsXUgkW5BhNr6SXAIAjkE=; b=Z63QCpTsmbNseE3AFM3QFE8rHW
	4QIp3TdV9LwRqaoJgFRzIRJ1v2wP0BNaTdOOYVAZvxn7gnbES94Ygsdy4hhWEPpkmm12F7KSgl0+e
	F2W6ndobGrou3JlpyCQLKsawu19yJSEJTXYLEzp5CKoJOyzdqCjnkZeNiXm+2rHhh7gDGf9rLmdjW
	3SFLvjf/pfKIFmh+17m2ffIMXgZ6+cnpHnKT6TEevoUipz+MSixmf3u6HxZz16859Yba1pHa1l8ML
	4kWgCP5YlertuYHd5iDaf8vQp2UmDbaMfrTjUMBQqgrmGpUC3WOwqFQsCQESlsNhwteM5PgeCJpPt
	BAJsowfQ==;
From: David Woodhouse <dwmw2@infradead.org>
To: x86@kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Paul Durrant <pdurrant@amazon.com>,
	jgrall@amazon.com,
	karahmed@amazon.de,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 4/5] x86/xen: Don't register Xen IPIs when they aren't going to be used
Date: Thu, 24 Dec 2020 11:53:22 +0000
Message-Id: <20201224115323.3540130-5-dwmw2@infradead.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201224115323.3540130-1-dwmw2@infradead.org>
References: <20201224115323.3540130-1-dwmw2@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Sender: David Woodhouse <dwmw2@infradead.org>
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

From: David Woodhouse <dwmw@amazon.co.uk>

In the case where xen_have_vector_callback is false, we still register
the IPI vectors in xen_smp_intr_init() for the secondary CPUs even
though they aren't going to be used. Stop doing that.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 arch/x86/xen/enlighten_hvm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
index a1c07e0c888e..7a6ef517e81a 100644
--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -164,10 +164,10 @@ static int xen_cpu_up_prepare_hvm(unsigned int cpu)
 	else
 		per_cpu(xen_vcpu_id, cpu) = cpu;
 	rc = xen_vcpu_setup(cpu);
-	if (rc)
+	if (rc || !xen_have_vector_callback)
 		return rc;
 
-	if (xen_have_vector_callback && xen_feature(XENFEAT_hvm_safe_pvclock))
+	if (xen_feature(XENFEAT_hvm_safe_pvclock))
 		xen_setup_timer(cpu);
 
 	rc = xen_smp_intr_init(cpu);
-- 
2.26.2
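The control-flow change above can be summarised as an early-out condition: when the vcpu setup fails, or no vector callback exists, neither the per-CPU timer nor the SMP interrupts are set up. A sketch of that flow (not kernel code; names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of xen_cpu_up_prepare_hvm() control flow after this patch:
 * bail out early, skipping timer and IPI registration, either on a
 * vcpu setup error or when no vector callback is available, since
 * the Xen IPIs could never be delivered anyway. */
static bool will_setup_smp_intr(int vcpu_setup_rc, bool have_vector_callback)
{
	if (vcpu_setup_rc || !have_vector_callback)
		return false;	/* the early "return rc" in the patch */
	return true;		/* proceeds to xen_smp_intr_init() */
}
```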



From xen-devel-bounces@lists.xenproject.org Thu Dec 24 11:54:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Dec 2020 11:54:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58704.103445 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksPC1-0002jH-6p; Thu, 24 Dec 2020 11:54:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58704.103445; Thu, 24 Dec 2020 11:54:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksPC1-0002j9-2n; Thu, 24 Dec 2020 11:54:05 +0000
Received: by outflank-mailman (input) for mailman id 58704;
 Thu, 24 Dec 2020 11:54:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OdH5=F4=casper.srs.infradead.org=batv+cd7a9cfaa2a0215fac24+6332+infradead.org+dwmw2@srs-us1.protection.inumbo.net>)
 id 1ksPBz-0002WC-9I
 for xen-devel@lists.xenproject.org; Thu, 24 Dec 2020 11:54:03 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 95dbbb1f-fdab-4be3-9ab5-a456c7902189;
 Thu, 24 Dec 2020 11:53:39 +0000 (UTC)
Received: from i7.infradead.org ([2001:8b0:10b:1:21e:67ff:fecb:7a92])
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1ksPBN-0001IK-T1; Thu, 24 Dec 2020 11:53:26 +0000
Received: from dwoodhou by i7.infradead.org with local (Exim 4.94 #2 (Red Hat
 Linux)) id 1ksPBN-00Er3j-FX; Thu, 24 Dec 2020 11:53:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 95dbbb1f-fdab-4be3-9ab5-a456c7902189
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Sender:Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	Reply-To:Content-Type:Content-ID:Content-Description;
	bh=NXk0UWcV3/VkmO3V390JOuM5MMOhtqzPdQBiicIGjhg=; b=ZCj4mb12ayQWaU4XH37Q5Jcpbr
	0BKSUCmI8vbB1mT7TBXQ6s1xJAlfks1KlKrZT/vvOfFJJnZqo16qp7FvtSaaz96opYQzuMbdGAIfv
	ENyzO0U4QAxzdPbZv/kjVcmb1oQolb76DO6tUXVDfAlHvny1vvpf0hMyymJYZXiexxNZHNqXts7mb
	7jrRwsoc/Ki7FYjfZAXb1hIHfkb3NcGldJFo1rwcZBi7kmQ6i3NG8nwJp9RUt0j9pHynjrQF8AOm5
	sSetl+1RANLkfykAtab1EGgb5Ns9qx6Fz9p5BFYfK8PevRI9bXIFiju9amlAGFAaNTJ8LtHaRV/kG
	WbKuXUJw==;
From: David Woodhouse <dwmw2@infradead.org>
To: x86@kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Paul Durrant <pdurrant@amazon.com>,
	jgrall@amazon.com,
	karahmed@amazon.de,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 1/5] xen: Fix event channel callback via INTX/GSI
Date: Thu, 24 Dec 2020 11:53:19 +0000
Message-Id: <20201224115323.3540130-2-dwmw2@infradead.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201224115323.3540130-1-dwmw2@infradead.org>
References: <20201224115323.3540130-1-dwmw2@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Sender: David Woodhouse <dwmw2@infradead.org>
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

From: David Woodhouse <dwmw@amazon.co.uk>

For a while, event channel notification via the PCI platform device
has been broken, because we attempt to communicate with xenstore before
we even have notifications working, with the xs_reset_watches() call
in xs_init().

We tend to get away with this on Xen versions below 4.0 because we avoid
calling xs_reset_watches() anyway, because xenstore might not cope with
reading a non-existent key. And newer Xen *does* have the vector
callback support, so we rarely fall back to INTX/GSI delivery.

To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
case, deferring it to be called from xenbus_probe() in the XS_HVM case
instead.

Then fix up the invocation of xenbus_probe() to happen either from its
device_initcall if the callback is available early enough, or when the
callback is finally set up. This means that the hack of calling
xenbus_probe() from a workqueue after the first interrupt, or directly
from the PCI platform device setup, is no longer needed.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 arch/arm/xen/enlighten.c          |  2 +-
 drivers/xen/events/events_base.c  | 10 -----
 drivers/xen/platform-pci.c        |  1 -
 drivers/xen/xenbus/xenbus.h       |  1 +
 drivers/xen/xenbus/xenbus_comms.c |  8 ----
 drivers/xen/xenbus/xenbus_probe.c | 68 ++++++++++++++++++++++++-------
 include/xen/xenbus.h              |  2 +-
 7 files changed, 57 insertions(+), 35 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 60e901cd0de6..5a957a9a0984 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -371,7 +371,7 @@ static int __init xen_guest_init(void)
 	}
 	gnttab_init();
 	if (!xen_initial_domain())
-		xenbus_probe(NULL);
+		xenbus_probe();
 
 	/*
 	 * Making sure board specific code will not set up ops for
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 6038c4c35db5..bbebe248b726 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -2010,16 +2010,6 @@ static struct irq_chip xen_percpu_chip __read_mostly = {
 	.irq_ack		= ack_dynirq,
 };
 
-int xen_set_callback_via(uint64_t via)
-{
-	struct xen_hvm_param a;
-	a.domid = DOMID_SELF;
-	a.index = HVM_PARAM_CALLBACK_IRQ;
-	a.value = via;
-	return HYPERVISOR_hvm_op(HVMOP_set_param, &a);
-}
-EXPORT_SYMBOL_GPL(xen_set_callback_via);
-
 #ifdef CONFIG_XEN_PVHVM
 /* Vector callbacks are better than PCI interrupts to receive event
  * channel notifications because we can receive vector callbacks on any
diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c
index dd911e1ff782..9db557b76511 100644
--- a/drivers/xen/platform-pci.c
+++ b/drivers/xen/platform-pci.c
@@ -149,7 +149,6 @@ static int platform_pci_probe(struct pci_dev *pdev,
 	ret = gnttab_init();
 	if (ret)
 		goto grant_out;
-	xenbus_probe(NULL);
 	return 0;
 grant_out:
 	gnttab_free_auto_xlat_frames();
diff --git a/drivers/xen/xenbus/xenbus.h b/drivers/xen/xenbus/xenbus.h
index 5f5b8a7d5b80..05bbda51103f 100644
--- a/drivers/xen/xenbus/xenbus.h
+++ b/drivers/xen/xenbus/xenbus.h
@@ -113,6 +113,7 @@ int xenbus_probe_node(struct xen_bus_type *bus,
 		      const char *type,
 		      const char *nodename);
 int xenbus_probe_devices(struct xen_bus_type *bus);
+void xenbus_probe(void);
 
 void xenbus_dev_changed(const char *node, struct xen_bus_type *bus);
 
diff --git a/drivers/xen/xenbus/xenbus_comms.c b/drivers/xen/xenbus/xenbus_comms.c
index eb5151fc8efa..e5fda0256feb 100644
--- a/drivers/xen/xenbus/xenbus_comms.c
+++ b/drivers/xen/xenbus/xenbus_comms.c
@@ -57,16 +57,8 @@ DEFINE_MUTEX(xs_response_mutex);
 static int xenbus_irq;
 static struct task_struct *xenbus_task;
 
-static DECLARE_WORK(probe_work, xenbus_probe);
-
-
 static irqreturn_t wake_waiting(int irq, void *unused)
 {
-	if (unlikely(xenstored_ready == 0)) {
-		xenstored_ready = 1;
-		schedule_work(&probe_work);
-	}
-
 	wake_up(&xb_waitq);
 	return IRQ_HANDLED;
 }
diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
index 38725d97d909..876f381b100a 100644
--- a/drivers/xen/xenbus/xenbus_probe.c
+++ b/drivers/xen/xenbus/xenbus_probe.c
@@ -682,29 +682,63 @@ void unregister_xenstore_notifier(struct notifier_block *nb)
 }
 EXPORT_SYMBOL_GPL(unregister_xenstore_notifier);
 
-void xenbus_probe(struct work_struct *unused)
+void xenbus_probe(void)
 {
 	xenstored_ready = 1;
 
+	/*
+	 * In the HVM case, xenbus_init() deferred its call to
+	 * xs_init() in case callbacks were not operational yet.
+	 * So do it now.
+	 */
+	if (xen_store_domain_type == XS_HVM)
+		xs_init();
+
 	/* Notify others that xenstore is up */
 	blocking_notifier_call_chain(&xenstore_chain, 0, NULL);
 }
-EXPORT_SYMBOL_GPL(xenbus_probe);
 
 static int __init xenbus_probe_initcall(void)
 {
-	if (!xen_domain())
-		return -ENODEV;
-
-	if (xen_initial_domain() || xen_hvm_domain())
-		return 0;
+	/*
+	 * Probe XenBus here in the XS_PV case, and also XS_HVM unless we
+	 * need to wait for the platform PCI device to come up, which is
+	 * the (XEN_PVHVM && !xen_have_vector_callback) case.
+	 */
+	if (xen_store_domain_type == XS_PV ||
+	    (xen_store_domain_type == XS_HVM &&
+	     (!IS_ENABLED(CONFIG_XEN_PVHVM) || xen_have_vector_callback)))
+		xenbus_probe();
 
-	xenbus_probe(NULL);
 	return 0;
 }
-
 device_initcall(xenbus_probe_initcall);
 
+int xen_set_callback_via(uint64_t via)
+{
+	struct xen_hvm_param a;
+	int ret;
+
+	a.domid = DOMID_SELF;
+	a.index = HVM_PARAM_CALLBACK_IRQ;
+	a.value = via;
+
+	ret = HYPERVISOR_hvm_op(HVMOP_set_param, &a);
+	if (ret)
+		return ret;
+
+	/*
+	 * If xenbus_probe_initcall() deferred the xenbus_probe()
+	 * due to the callback not functioning yet, we can do it now.
+	 */
+	if (!xenstored_ready && xen_store_domain_type == XS_HVM &&
+	    IS_ENABLED(CONFIG_XEN_PVHVM) && !xen_have_vector_callback)
+		xenbus_probe();
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(xen_set_callback_via);
+
 /* Set up event channel for xenstored which is run as a local process
  * (this is normally used only in dom0)
  */
@@ -817,11 +851,17 @@ static int __init xenbus_init(void)
 		break;
 	}
 
-	/* Initialize the interface to xenstore. */
-	err = xs_init();
-	if (err) {
-		pr_warn("Error initializing xenstore comms: %i\n", err);
-		goto out_error;
+	/*
+	 * HVM domains may not have a functional callback yet. In that
+	 * case let xs_init() be called from xenbus_probe(), which will
+	 * get invoked at an appropriate time.
+	 */
+	if (xen_store_domain_type != XS_HVM) {
+		err = xs_init();
+		if (err) {
+			pr_warn("Error initializing xenstore comms: %i\n", err);
+			goto out_error;
+		}
 	}
 
 	if ((xen_store_domain_type != XS_LOCAL) &&
diff --git a/include/xen/xenbus.h b/include/xen/xenbus.h
index 5a8315e6d8a6..61202c83d560 100644
--- a/include/xen/xenbus.h
+++ b/include/xen/xenbus.h
@@ -183,7 +183,7 @@ void xs_suspend_cancel(void);
 
 struct work_struct;
 
-void xenbus_probe(struct work_struct *);
+void xenbus_probe(void);
 
 #define XENBUS_IS_ERR_READ(str) ({			\
 	if (!IS_ERR(str) && strlen(str) == 0) {		\
-- 
2.26.2
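The probe-now-or-defer decision introduced above can be captured as a single predicate: probe at initcall time for PV, and for HVM only when the callback is already guaranteed functional; otherwise the probe waits for xen_set_callback_via(). A standalone sketch (types and names mirror the patch but are declared locally for illustration):

```c
#include <assert.h>
#include <stdbool.h>

/* Local stand-in for the kernel's xen_store_domain_type values. */
enum xen_store_type { XS_LOCAL, XS_PV, XS_HVM };

/* Sketch of the decision in xenbus_probe_initcall() after this patch:
 * probe immediately in the XS_PV case, and in the XS_HVM case only
 * when the callback already works -- either a vector callback is
 * present, or CONFIG_XEN_PVHVM is not built so the platform PCI
 * device path is never taken. Otherwise the probe is deferred until
 * xen_set_callback_via() has installed the INTX/GSI callback. */
static bool probe_at_initcall(enum xen_store_type t, bool pvhvm_enabled,
			      bool have_vector_callback)
{
	return t == XS_PV ||
	       (t == XS_HVM && (!pvhvm_enabled || have_vector_callback));
}
```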



From xen-devel-bounces@lists.xenproject.org Thu Dec 24 11:54:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Dec 2020 11:54:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58705.103457 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksPC5-0002o7-I0; Thu, 24 Dec 2020 11:54:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58705.103457; Thu, 24 Dec 2020 11:54:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksPC5-0002nz-Dc; Thu, 24 Dec 2020 11:54:09 +0000
Received: by outflank-mailman (input) for mailman id 58705;
 Thu, 24 Dec 2020 11:54:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5Kri=F4=merlin.srs.infradead.org=batv+2cb6e212460c613e05ba+6332+infradead.org+dwmw2@srs-us1.protection.inumbo.net>)
 id 1ksPC4-0002WC-9S
 for xen-devel@lists.xenproject.org; Thu, 24 Dec 2020 11:54:08 +0000
Received: from merlin.infradead.org (unknown [2001:8b0:10b:1231::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5766b6fc-144d-4768-b0a3-60064c394132;
 Thu, 24 Dec 2020 11:53:38 +0000 (UTC)
Received: from i7.infradead.org ([2001:8b0:10b:1:21e:67ff:fecb:7a92])
 by merlin.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1ksPBO-0002ay-GR; Thu, 24 Dec 2020 11:53:26 +0000
Received: from dwoodhou by i7.infradead.org with local (Exim 4.94 #2 (Red Hat
 Linux)) id 1ksPBN-00Er3h-Et; Thu, 24 Dec 2020 11:53:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 5766b6fc-144d-4768-b0a3-60064c394132
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=merlin.20170209; h=Sender:Content-Transfer-Encoding:
	MIME-Version:Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-Type:
	Content-ID:Content-Description:In-Reply-To:References;
	bh=CWqpgSCUvnrq7b7B+0yGpMLnZ4fm4PXiLP5Qys7ltuk=; b=ySAc82PpNgACuCCJH2KsZb1WdI
	ZDCQ0w2KrARzWMVIWhE0keS2Q458z/ZCf2x+2B2mq5ca6fYTcM/sqpw2J05peBINKJdEH129I1vvu
	8brIaIed6lqT6IM8NvBiWqHXdcok2hQ2dizUZ2WfhKviVo5MusF9Roxkt9wHu14xHSSPVC3PiV+Fb
	WxU+J2EgcAwtE2EXLgxGXGU1jc9G85qXtXpFDx6X/ADbZINSXFEnHlw9WJf6OjZgjk8Fu0TN3yQIV
	SHhtDzXt2L3L1H/O8f+B0Xj5C10gvsjQoShi8ukoPfHJ53Zm+HmgmsGl510+hdaWJAhJqI0gRiOFG
	YfEZfXJQ==;
From: David Woodhouse <dwmw2@infradead.org>
To: x86@kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Paul Durrant <pdurrant@amazon.com>,
	jgrall@amazon.com,
	karahmed@amazon.de,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 0/5] Xen INTX/GSI event channel delivery fixes
Date: Thu, 24 Dec 2020 11:53:18 +0000
Message-Id: <20201224115323.3540130-1-dwmw2@infradead.org>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Sender: David Woodhouse <dwmw2@infradead.org>
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by merlin.infradead.org. See http://www.infradead.org/rpr.html

Fix various breakages with INTX/GSI event channel interrupt delivery,
and add a command line option to test it so that it hopefully doesn't
get so broken again.

Karim attempted to rip this out in 2017 but put it back because it's still
necessary with older versions of Xen. This fixes it properly, and makes it
easier to test. cf. https://lkml.org/lkml/2017/4/10/266

David Woodhouse (5):
      xen: Fix event channel callback via INTX/GSI
      xen: Set platform PCI device INTX affinity to CPU0
      x86/xen: Add a no_vector_callback option to test PCI INTX delivery
      x86/xen: Don't register Xen IPIs when they aren't going to be used
      x86/xen: Don't register PV spinlock IPI when it isn't going to be used

 arch/arm/xen/enlighten.c          |  2 +-
 arch/x86/xen/enlighten_hvm.c      | 15 +++++++--
 arch/x86/xen/spinlock.c           |  6 ++--
 drivers/xen/events/events_base.c  | 10 ------
 drivers/xen/platform-pci.c        |  8 ++++-
 drivers/xen/xenbus/xenbus.h       |  1 +
 drivers/xen/xenbus/xenbus_comms.c |  8 -----
 drivers/xen/xenbus/xenbus_probe.c | 68 +++++++++++++++++++++++++++++++--------
 include/xen/xenbus.h              |  2 +-
 9 files changed, 79 insertions(+), 41 deletions(-)




From xen-devel-bounces@lists.xenproject.org Thu Dec 24 12:46:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Dec 2020 12:46:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58731.103469 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksQ0u-0007Vs-OJ; Thu, 24 Dec 2020 12:46:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58731.103469; Thu, 24 Dec 2020 12:46:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksQ0u-0007Vl-L0; Thu, 24 Dec 2020 12:46:40 +0000
Received: by outflank-mailman (input) for mailman id 58731;
 Thu, 24 Dec 2020 12:46:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksQ0t-0007Vc-8b; Thu, 24 Dec 2020 12:46:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksQ0s-0005X8-Uq; Thu, 24 Dec 2020 12:46:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksQ0s-0005f2-MN; Thu, 24 Dec 2020 12:46:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ksQ0s-0008E7-Lt; Thu, 24 Dec 2020 12:46:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=eQH3qoumz4ssGln4hXnhkkCG6OvAI9vbxDeA7M1r1bM=; b=KQaIrzrRjengYKQ2fmX/T+YGnn
	Jmz8Ut9OzQgSCdII9EnMWw26ueF+Eqar/UVhCFrboloYtI4j17Vwv5ldxxSrXNAApyQjSH4Z+8vFP
	wRiDwfxBRIB8uldHE7AaLl2s+g8eASaMcyjMtcDDDnVXF+qtQqABPdiRoC1tsw4hlnzg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157865-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157865: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
X-Osstest-Versions-That:
    xen=98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 24 Dec 2020 12:46:38 +0000

flight 157865 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157865/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 157837
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157847
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157847
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157847
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157847
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157847
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157847
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157847
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157847
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157847
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157847
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157847
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
baseline version:
 xen                  98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc

Last test of basis   157865  2020-12-24 01:51:32 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Thu Dec 24 15:24:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Dec 2020 15:24:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58741.103490 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksSTY-0004cq-EO; Thu, 24 Dec 2020 15:24:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58741.103490; Thu, 24 Dec 2020 15:24:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksSTY-0004cj-AW; Thu, 24 Dec 2020 15:24:24 +0000
Received: by outflank-mailman (input) for mailman id 58741;
 Thu, 24 Dec 2020 15:24:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ksSTW-0004ce-VG
 for xen-devel@lists.xenproject.org; Thu, 24 Dec 2020 15:24:23 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ksSTW-000857-HI; Thu, 24 Dec 2020 15:24:22 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ksSTW-0003H1-52; Thu, 24 Dec 2020 15:24:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=BCKy0pbM4tZ5ytPu0zUJpTYg/wcrS8QruCKIKZ+10yw=; b=lZpFF2qALH2gGVj+jA87gVOuvp
	lF8IFS96oiqPXytVu1fwp+0jhrsqWXjm2dBvyu3XP5HVEdiEjCN1qDNVN1q2UlvIohXT90Qy5SOZG
	xlK2jZSWjedr6NSgl+s8g4JfZ6j8Iv4qyLHVVWbBcTIKu5KTxFv/xTmNG9szcdSItRwI=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	Rahul.Singh@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/iommu: smmu: Use 1U << 31 rather than 1 << 31
Date: Thu, 24 Dec 2020 15:24:19 +0000
Message-Id: <20201224152419.22453-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

Replace all uses of 1 << 31 with 1U << 31 in the SMMU driver to prevent
undefined behavior: shifting a set bit into the sign bit of a signed int
is undefined in C.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/drivers/passthrough/arm/smmu.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index ed04d85e05e9..3e8aa378669b 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -405,7 +405,7 @@ static struct iommu_group *iommu_group_get(struct device *dev)
 #define ID0_NUMSMRG_SHIFT		0
 #define ID0_NUMSMRG_MASK		0xff
 
-#define ID1_PAGESIZE			(1 << 31)
+#define ID1_PAGESIZE			(1U << 31)
 #define ID1_NUMPAGENDXB_SHIFT		28
 #define ID1_NUMPAGENDXB_MASK		7
 #define ID1_NUMS2CB_SHIFT		16
@@ -438,7 +438,7 @@ static struct iommu_group *iommu_group_get(struct device *dev)
 
 /* Stream mapping registers */
 #define ARM_SMMU_GR0_SMR(n)		(0x800 + ((n) << 2))
-#define SMR_VALID			(1 << 31)
+#define SMR_VALID			(1U << 31)
 #define SMR_MASK_SHIFT			16
 #define SMR_MASK_MASK			0x7fff
 #define SMR_ID_SHIFT			0
@@ -506,7 +506,7 @@ static struct iommu_group *iommu_group_get(struct device *dev)
 #define RESUME_RETRY			(0 << 0)
 #define RESUME_TERMINATE		(1 << 0)
 
-#define TTBCR_EAE			(1 << 31)
+#define TTBCR_EAE			(1U << 31)
 
 #define TTBCR_PASIZE_SHIFT		16
 #define TTBCR_PASIZE_MASK		0x7
@@ -562,7 +562,7 @@ static struct iommu_group *iommu_group_get(struct device *dev)
 #define MAIR_ATTR_IDX_CACHE		1
 #define MAIR_ATTR_IDX_DEV		2
 
-#define FSR_MULTI			(1 << 31)
+#define FSR_MULTI			(1U << 31)
 #define FSR_SS				(1 << 30)
 #define FSR_UUT				(1 << 8)
 #define FSR_ASF				(1 << 7)
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 24 16:11:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Dec 2020 16:11:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58747.103502 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksTDJ-00012V-23; Thu, 24 Dec 2020 16:11:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58747.103502; Thu, 24 Dec 2020 16:11:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksTDI-00012O-V1; Thu, 24 Dec 2020 16:11:40 +0000
Received: by outflank-mailman (input) for mailman id 58747;
 Thu, 24 Dec 2020 16:11:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksTDH-00012G-B0; Thu, 24 Dec 2020 16:11:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksTDH-0000wT-0T; Thu, 24 Dec 2020 16:11:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksTDG-0006Zx-LS; Thu, 24 Dec 2020 16:11:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ksTDG-0001m0-Ky; Thu, 24 Dec 2020 16:11:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=uA82zsW8b7fcrmrcEBl+0U1eZQZ3kBALDIGWm72fiho=; b=N3jQMR1lToV3lR/VPFhZJu5pvt
	10bgMqHd+NDzRvG0h6lyqTDkQBrD0sN5Zc4HXm2+WsOconE7IszyaYDVPZC9InzAZddwa45Y5JQYP
	rAP3vE2sk6KksWQ5LaJAYAXVIA4PBt62aEEK/x3CUxWoR/rxAOoNz0tsqMXs6zW1bCdw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157871-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157871: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:heisenbug
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=58cf05f597b03a8212d9ecf2c79ee046d3ee8ad9
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 24 Dec 2020 16:11:38 +0000

flight 157871 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157871/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-seattle   8 xen-boot         fail in 157864 pass in 157871
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen        fail pass in 157864
 test-arm64-arm64-xl          10 host-ping-check-xen        fail pass in 157864
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 157864

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl   11 leak-check/basis(11) fail in 157864 blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11) fail in 157864 blocked in 152332
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 157864 like 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                58cf05f597b03a8212d9ecf2c79ee046d3ee8ad9
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  145 days
Failing since        152366  2020-08-01 20:49:34 Z  144 days  251 attempts
Testing same since   157864  2020-12-24 01:10:27 Z    0 days    2 attempts

------------------------------------------------------------
4318 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 969560 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 24 16:50:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Dec 2020 16:50:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58757.103523 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksToM-0003ts-G3; Thu, 24 Dec 2020 16:49:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58757.103523; Thu, 24 Dec 2020 16:49:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksToM-0003tl-CG; Thu, 24 Dec 2020 16:49:58 +0000
Received: by outflank-mailman (input) for mailman id 58757;
 Thu, 24 Dec 2020 16:49:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ksToK-0003tg-VM
 for xen-devel@lists.xenproject.org; Thu, 24 Dec 2020 16:49:56 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ksToK-0001WK-CA; Thu, 24 Dec 2020 16:49:56 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ksToJ-0000p5-W2; Thu, 24 Dec 2020 16:49:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=0Gwe/uH290dVrTI4AHMKTVEIDPSN+P/JGMKSEXHISQI=; b=40zIj8FINXzlpLKDBLWkZfXXqj
	oOdjLtspiSLDlGW2FTUoaEQ0ZQHGzVMruRNnYA/K3T31wnpuVRBIxhvxLs6IsJgNc6hotiakHnQmZ
	RDPq149gfqbjQJg221dfk70y5XIjwrIzdWfOmfck4tIXp0/qnkeWAjawNXuSVl+R+tM4=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	Rahul.Singh@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/iommu: smmu: Rework how the SMMU version is detected
Date: Thu, 24 Dec 2020 16:49:53 +0000
Message-Id: <20201224164953.32357-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

Clang 11 will throw the following error:

smmu.c:2284:18: error: cast to smaller integer type 'enum arm_smmu_arch_version' from 'const void *' [-Werror,-Wvoid-pointer-to-enum-cast]
        smmu->version = (enum arm_smmu_arch_version)of_id->data;
                        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The error can be prevented by introducing a static variable for each SMMU
version and storing a pointer to it.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---

Only build tested
---
 xen/drivers/passthrough/arm/smmu.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index ed04d85e05e9..4a507d7aff64 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2249,12 +2249,15 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu)
 	return 0;
 }
 
+static const enum arm_smmu_arch_version smmu_v1_version = ARM_SMMU_V1;
+static const enum arm_smmu_arch_version smmu_v2_version = ARM_SMMU_V2;
+
 static const struct of_device_id arm_smmu_of_match[] = {
-	{ .compatible = "arm,smmu-v1", .data = (void *)ARM_SMMU_V1 },
-	{ .compatible = "arm,smmu-v2", .data = (void *)ARM_SMMU_V2 },
-	{ .compatible = "arm,mmu-400", .data = (void *)ARM_SMMU_V1 },
-	{ .compatible = "arm,mmu-401", .data = (void *)ARM_SMMU_V1 },
-	{ .compatible = "arm,mmu-500", .data = (void *)ARM_SMMU_V2 },
+	{ .compatible = "arm,smmu-v1", .data = &smmu_v1_version },
+	{ .compatible = "arm,smmu-v2", .data = &smmu_v2_version },
+	{ .compatible = "arm,mmu-400", .data = &smmu_v1_version },
+	{ .compatible = "arm,mmu-401", .data = &smmu_v1_version },
+	{ .compatible = "arm,mmu-500", .data = &smmu_v2_version },
 	{ },
 };
 MODULE_DEVICE_TABLE(of, arm_smmu_of_match);
@@ -2281,7 +2284,7 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev)
 	smmu->dev = dev;
 
 	of_id = of_match_node(arm_smmu_of_match, dev->of_node);
-	smmu->version = (enum arm_smmu_arch_version)of_id->data;
+	smmu->version = *(enum arm_smmu_arch_version *)of_id->data;
 
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	smmu->base = devm_ioremap_resource(dev, res);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 24 16:50:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Dec 2020 16:50:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58759.103535 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksTon-0004eG-Or; Thu, 24 Dec 2020 16:50:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58759.103535; Thu, 24 Dec 2020 16:50:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksTon-0004e8-LQ; Thu, 24 Dec 2020 16:50:25 +0000
Received: by outflank-mailman (input) for mailman id 58759;
 Thu, 24 Dec 2020 16:50:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ksTom-0004e0-UI
 for xen-devel@lists.xenproject.org; Thu, 24 Dec 2020 16:50:24 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ksTom-0001Wf-Ic; Thu, 24 Dec 2020 16:50:24 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ksTom-00016r-9j; Thu, 24 Dec 2020 16:50:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=T5b8OXlmfA5c+LhCQAD6AHBhLsAIyHkQ9Ac1BK65xNI=; b=TiV7G3Jl0V1eQ1WKDcJEnubYDB
	cb+jjyRJOxssl1pmdki/otdmJDBL3jzXPeDLqVrkGGaeu3N3RTWFxXZ47guTCgQoWxgaZQmJJuqqZ
	l9RiyyCdZCtFxFpoVL36MhvpAC9o/2aOcNNKNzWzRJQgXe4LdrtDmGa/FW+k8WZ8g/s8=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	Rahul.Singh@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/serial: scif: Rework how the parameters are found
Date: Thu, 24 Dec 2020 16:50:21 +0000
Message-Id: <20201224165021.449-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

clang 11 will throw the following error while building Xen:

scif-uart.c:333:33: error: cast to smaller integer type 'enum port_types' from 'const void *' [-Werror,-Wvoid-pointer-to-enum-cast]
    uart->params = &port_params[(enum port_types)match->data];
                                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~

The error can be prevented by directly storing a pointer to the port
parameters rather than a cast of the port type.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---

Only build tested as I don't have the HW.
---
 xen/drivers/char/scif-uart.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/drivers/char/scif-uart.c b/xen/drivers/char/scif-uart.c
index 9d3f66b55b67..ee204a11a471 100644
--- a/xen/drivers/char/scif-uart.c
+++ b/xen/drivers/char/scif-uart.c
@@ -286,8 +286,8 @@ static struct uart_driver __read_mostly scif_uart_driver = {
 
 static const struct dt_device_match scif_uart_dt_match[] __initconst =
 {
-    { .compatible = "renesas,scif",  .data = (void *)SCIF_PORT },
-    { .compatible = "renesas,scifa", .data = (void *)SCIFA_PORT },
+    { .compatible = "renesas,scif",  .data = &port_params[SCIF_PORT] },
+    { .compatible = "renesas,scifa", .data = &port_params[SCIFA_PORT] },
     { /* sentinel */ },
 };
 
@@ -330,7 +330,7 @@ static int __init scif_uart_init(struct dt_device_node *dev,
 
     match = dt_match_node(scif_uart_dt_match, dev);
     ASSERT( match );
-    uart->params = &port_params[(enum port_types)match->data];
+    uart->params = match->data;
 
     uart->vuart.base_addr  = addr;
     uart->vuart.size       = size;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Dec 24 17:01:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Dec 2020 17:01:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58767.103547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksTz3-0005hR-Q6; Thu, 24 Dec 2020 17:01:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58767.103547; Thu, 24 Dec 2020 17:01:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksTz3-0005hK-Mv; Thu, 24 Dec 2020 17:01:01 +0000
Received: by outflank-mailman (input) for mailman id 58767;
 Thu, 24 Dec 2020 17:01:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HnYj=F4=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ksTz2-0005hF-2s
 for xen-devel@lists.xenproject.org; Thu, 24 Dec 2020 17:01:00 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a8bf08bd-fc67-477b-af90-cf42b18aae48;
 Thu, 24 Dec 2020 17:00:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a8bf08bd-fc67-477b-af90-cf42b18aae48
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608829258;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=eFVHLt0a04goO6nAl+fS918d/GFGWZW+6Zucil4GllY=;
  b=aIq4VDQkpuYwPusjaVAWfXngTLatir8BsgjvsAc2/MUCBYfmn42CJ/rj
   YYXhzMkGyN/OY//KIaFs/U+K6TWE1xq8OL2U/2nnWgxrL1lRTPC8Fv2Hh
   abFYv2IwmN0gkwPAbzycI5N2u5S118+OZwN2MdFZnqG88gViNKJbr8qpr
   U=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Gf1Edw8hsyXi4mbKhRjTIvfgzNVEZMkGAFxl2QG2E88RcPCUz9RR57ENsa0+HFh7aLGm2RYzaq
 /u/RkAO3OzFXDWK+qBa7aw7YtP9bgWHnDd5zg2YR+pN42UwAle4xILbXyiA1Q8iUQMrHKNIqFm
 vJY0mzk7Rk+1/Fm1hmnDTwB+sW/wrj3YjR8cloMD18ZIua3Uc7WbTSt3WNaqudn3yrEq5es41R
 +PHkS4tprUbqcDsCjPo8L19ac7uXJ1evhjL1eBDVQtgXButuBdpWSbKmpTA5K1VD48RQ0LiDMA
 E7A=
X-SBRS: 5.2
X-MesageID: 34158617
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,444,1599537600"; 
   d="scan'208";a="34158617"
Subject: Re: [PATCH] xen/iommu: smmu: Rework how the SMMU version is detected
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: <bertrand.marquis@arm.com>, <Rahul.Singh@arm.com>, Julien Grall
	<jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>, "Volodymyr
 Babchuk" <Volodymyr_Babchuk@epam.com>
References: <20201224164953.32357-1-julien@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <0d592001-66de-4582-f7a1-6ee56cbe7c27@citrix.com>
Date: Thu, 24 Dec 2020 17:00:47 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201224164953.32357-1-julien@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 24/12/2020 16:49, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
>
> Clang 11 will throw the following error:
>
> smmu.c:2284:18: error: cast to smaller integer type 'enum arm_smmu_arch_version' from 'const void *' [-Werror,-Wvoid-pointer-to-enum-cast]
>         smmu->version = (enum arm_smmu_arch_version)of_id->data;
>                         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> The error can be prevented by introduce static variable for each SMMU
> version and store a pointer for each of them.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

You can also fix this by casting through (uintptr_t) instead of (enum
arm_smmu_arch_version), which wouldn't involve an extra indirection.

Alternatively, you could modify dt_device_match to union void *data with
uintptr_t val for when you want to actually pass non-pointer data.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Dec 24 17:10:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Dec 2020 17:10:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58772.103558 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksU8Y-0006g0-PV; Thu, 24 Dec 2020 17:10:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58772.103558; Thu, 24 Dec 2020 17:10:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksU8Y-0006ft-MT; Thu, 24 Dec 2020 17:10:50 +0000
Received: by outflank-mailman (input) for mailman id 58772;
 Thu, 24 Dec 2020 17:10:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ksU8W-0006fo-Uv
 for xen-devel@lists.xenproject.org; Thu, 24 Dec 2020 17:10:49 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ksU8U-0001sg-A8; Thu, 24 Dec 2020 17:10:46 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ksU8U-00066h-0S; Thu, 24 Dec 2020 17:10:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=S8TdCqGwO+NrCOUvfPvAOWjr2opPOrGT38oL60MNY30=; b=WdW+6n7QnkPm0vrUxIQ5Oj19b2
	bXG7B7SHfLyO4Y0bVFIuxj/1WengUbNGZt6NQsvCYJtPiVTu2xrdQ1qQsPkeRPslWua6aqVwxZDso
	NDGfJNGlMcoV8Aw+jcHTq9AO2xSJ7no23nNQWbKOVoLHjv7xPIkreCDLGRXIv9HxmhjQ=;
Subject: Re: [PATCH] xen/iommu: smmu: Rework how the SMMU version is detected
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, Rahul.Singh@arm.com,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20201224164953.32357-1-julien@xen.org>
 <0d592001-66de-4582-f7a1-6ee56cbe7c27@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <11aad53b-2fca-d893-0897-532e5ac4248c@xen.org>
Date: Thu, 24 Dec 2020 17:10:43 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <0d592001-66de-4582-f7a1-6ee56cbe7c27@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 24/12/2020 17:00, Andrew Cooper wrote:
> On 24/12/2020 16:49, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Clang 11 will throw the following error:
>>
>> smmu.c:2284:18: error: cast to smaller integer type 'enum arm_smmu_arch_version' from 'const void *' [-Werror,-Wvoid-pointer-to-enum-cast]
>>          smmu->version = (enum arm_smmu_arch_version)of_id->data;
>>                          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>
>> The error can be prevented by introduce static variable for each SMMU
>> version and store a pointer for each of them.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> You can also fix this by casting through (uintptr_t) instead of (enum
> arm_smmu_arch_version), which wouldn't involve an extra indirection.

I thought about using a different cast, but it feels just papering over 
the issue.

But I don't see what's the problem with the extra indirection... It is 
self-contained and only used during the IOMMU initialization.

> 
> Alternatively, you could modify dt_device_match to union void *data with
> uintptr_t val for when you want to actually pass non-pointer data.
>
> ~Andrew
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Dec 24 17:39:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Dec 2020 17:39:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58777.103571 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksUa8-0000Et-SS; Thu, 24 Dec 2020 17:39:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58777.103571; Thu, 24 Dec 2020 17:39:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksUa8-0000Em-PH; Thu, 24 Dec 2020 17:39:20 +0000
Received: by outflank-mailman (input) for mailman id 58777;
 Thu, 24 Dec 2020 17:39:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HnYj=F4=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ksUa8-0000Eh-1W
 for xen-devel@lists.xenproject.org; Thu, 24 Dec 2020 17:39:20 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b968df92-117d-42c6-bf81-74cc21e1ea36;
 Thu, 24 Dec 2020 17:39:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b968df92-117d-42c6-bf81-74cc21e1ea36
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1608831558;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=7inTMIk0tMlR5a5l5/wH4uFWmYYM7nYbmVGMOJ+fAhU=;
  b=FsLieeRr2b70OrsEzHnpFsavq+Vm+hm7vMMmm+pA5REApaGqOjzhT/IG
   BdZ53u8mMaz41QVmgtmTBqdt4WiohQ0KwbpkSrXu+zgwmkm1PruaxwgeX
   oxjY/9C0DuBDjy3lV06v6UiH9Q3GBwGTtbpGnnD0aMq60WKGEGLyvBRV7
   k=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: WLgvFoCKz3lADIvrM4iwSjCS4adQHYiTgBjQaw8PaPU7exnS/hbazl8sTYQrg/Qkey5N5pV6ir
 LdHpg2U6/QprjOMvFV14c/SRlgeVSItvAs8fV5bvzRgJ1asYP/ZSPP6BubYjacfhqFeh+An3Hp
 skbQ23GyZ5AxrCLSASv9Rslu+ciTdJkaGGrAWEuoHkVVewoHTefpnua8VZaoYWCiy32VAhHByw
 kWHwZUiguDo8ysW2SJMMxUfTs4Mq/XkkvepKZPZ/K523Yt8FN9NKPFHe7TbnnMWWG02FCw+Ntd
 Iy8=
X-SBRS: 5.2
X-MesageID: 33920861
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,444,1599537600"; 
   d="scan'208";a="33920861"
Subject: Re: [PATCH] xen/iommu: smmu: Rework how the SMMU version is detected
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: <bertrand.marquis@arm.com>, <Rahul.Singh@arm.com>, Julien Grall
	<jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>, "Volodymyr
 Babchuk" <Volodymyr_Babchuk@epam.com>
References: <20201224164953.32357-1-julien@xen.org>
 <0d592001-66de-4582-f7a1-6ee56cbe7c27@citrix.com>
 <11aad53b-2fca-d893-0897-532e5ac4248c@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e5e73b9c-5653-7333-b252-0bcb1f7ebb20@citrix.com>
Date: Thu, 24 Dec 2020 17:39:12 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <11aad53b-2fca-d893-0897-532e5ac4248c@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 24/12/2020 17:10, Julien Grall wrote:
>
>
> On 24/12/2020 17:00, Andrew Cooper wrote:
>> On 24/12/2020 16:49, Julien Grall wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> Clang 11 will throw the following error:
>>>
>>> smmu.c:2284:18: error: cast to smaller integer type 'enum
>>> arm_smmu_arch_version' from 'const void *'
>>> [-Werror,-Wvoid-pointer-to-enum-cast]
>>>          smmu->version = (enum arm_smmu_arch_version)of_id->data;
>>>                          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>
>>> The error can be prevented by introduce static variable for each SMMU
>>> version and store a pointer for each of them.
>>>
>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> You can also fix this by casting through (uintptr_t) instead of (enum
>> arm_smmu_arch_version), which wouldn't involve an extra indirection.
>
> I thought about using a different cast, but it feels just papering
> over the issue.
>
> But I don't see what's the problem with the extra indirection... It is
> self-contained and only used during the IOMMU initialization.

Well - its extra .rodata for the sake of satisfying the compiler in a
specific way.

Irrespective, all of this looks like it ought to be initdata rather than
runtime data, which is probably a bigger deal than worrying about one
extra pointer access.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Dec 24 17:53:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Dec 2020 17:53:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58782.103583 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksUnN-0001wK-4G; Thu, 24 Dec 2020 17:53:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58782.103583; Thu, 24 Dec 2020 17:53:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksUnN-0001wD-1B; Thu, 24 Dec 2020 17:53:01 +0000
Received: by outflank-mailman (input) for mailman id 58782;
 Thu, 24 Dec 2020 17:52:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksUnL-0001w5-OU; Thu, 24 Dec 2020 17:52:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksUnL-0002YZ-I7; Thu, 24 Dec 2020 17:52:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksUnL-00030a-AG; Thu, 24 Dec 2020 17:52:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ksUnL-0007k0-9m; Thu, 24 Dec 2020 17:52:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Ez3mjcQ/05l9Pas12JrKdbN5DuCxBHEUK39yGX5beHM=; b=jht1NPKy/UwdahFf2npy4XtgP6
	5VPteNlAZxJFp8OEHd5fusmcM3cD4MxTwFipNVHYWRy1dr57XXYm3KuygvH60NobYMl+/Tx32RH2y
	Zgc82UXwN5MBejnrrdjUaXERKBxMt7hEh6gCeoniy5uABOQTXS7IzdzwZTNStBW2pXcM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157875-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157875: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=140674a4601f804302e79d08cb06f91c882ddf28
X-Osstest-Versions-That:
    ovmf=e2747dbb5a44f4a463ecc6dd0f7fd113ee57bd67
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 24 Dec 2020 17:52:59 +0000

flight 157875 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157875/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 140674a4601f804302e79d08cb06f91c882ddf28
baseline version:
 ovmf                 e2747dbb5a44f4a463ecc6dd0f7fd113ee57bd67

Last test of basis   157856  2020-12-23 14:11:47 Z    1 days
Testing same since   157875  2020-12-24 14:39:42 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   e2747dbb5a..140674a460  140674a4601f804302e79d08cb06f91c882ddf28 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Dec 24 18:15:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Dec 2020 18:15:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58791.103598 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksV8u-0003qR-Vt; Thu, 24 Dec 2020 18:15:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58791.103598; Thu, 24 Dec 2020 18:15:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksV8u-0003qK-RW; Thu, 24 Dec 2020 18:15:16 +0000
Received: by outflank-mailman (input) for mailman id 58791;
 Thu, 24 Dec 2020 18:15:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JomI=F4=gmail.com=julien.grall.oss@srs-us1.protection.inumbo.net>)
 id 1ksV8t-0003qC-5w
 for xen-devel@lists.xenproject.org; Thu, 24 Dec 2020 18:15:15 +0000
Received: from mail-wr1-x42d.google.com (unknown [2a00:1450:4864:20::42d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 106c3f48-eca4-4aba-ae03-620bdfada89b;
 Thu, 24 Dec 2020 18:15:14 +0000 (UTC)
Received: by mail-wr1-x42d.google.com with SMTP id r7so2776439wrc.5
 for <xen-devel@lists.xenproject.org>; Thu, 24 Dec 2020 10:15:14 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 106c3f48-eca4-4aba-ae03-620bdfada89b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=5U3LZAIfvUK71X2+Wv2T6aQ7TvG8XEkbwuLTugkLod0=;
        b=FNriO7Mefu0Ec4ruYKQuE+/C9w9a/MTK7kstysVM81cvjpM4QZ+Ebhdr6vlwXdQPZ1
         HzBOkCJFLmrFzVGpFvntL6pHzne0DtGmDtpk+aqTheNl1oqyON9ry9VkS+06zUyyo+hX
         aAornYfEgupi3fS85SpKct5940cFtAT/J9Z8H8dneA+O1+hEy1crSQZaO69cSvNtq41n
         z2kqmEi5wQxJUQE5hKrHMmnW4IHEBChIWFwBqmHYp+7Pd8ebeXQgvTNDoRDKxWvGpJv5
         DrxDKLonXrIrpuJF/zc5pv5whkK1N25XKnQw18Muce0hLroFDGYKGvxb9/OvHtsfmWeh
         akkQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=5U3LZAIfvUK71X2+Wv2T6aQ7TvG8XEkbwuLTugkLod0=;
        b=uap1ySYH0Gsoh9VAEdEOW2DAvFQGpCIFk4OXUqDasjmOsWlofa49+dXje+erWXPSV4
         9vrRb2cM7C+TjFKxeg7UoZojJI9zyNHpjrh0iz3acZtCTtieti1xikPVnBhD2OTt5P8M
         gxyJV4sZfM8DajFhcSHnlj9/3AJhdiMr/HKbPBksoreJ58mFedpyNxGEYhSshLylyIqF
         XxVDBAVxKT5rFID/txn0ttx+aAOoc0J2UYm9oe6ynO12XNoR0fS0dXO5P3G/tY5xyFZ9
         hGKYHpP4Kzc13SqcXPM3HrDlQWEBjzrAxah9ojPRJqS//NhPZticDvRKe70mYlgTFp45
         trog==
X-Gm-Message-State: AOAM530sugJqAbWYKORFjj9KzGMRXRmRzoA0kCMWD8HroSFbuSgbRlZu
	IV0oooGJAaIOqtrPtBxCY3iyDP5DX+ZRvOePymE=
X-Google-Smtp-Source: ABdhPJzpWuk8e04Hi+NAAY6lTkPlqLvVk9y0ZiHuP1xGPzY0+BiY9q3+U7TC18bVpNP9tvorp24FZ1AsHdUukqgz/Lc=
X-Received: by 2002:a5d:43d2:: with SMTP id v18mr35444207wrr.326.1608833713476;
 Thu, 24 Dec 2020 10:15:13 -0800 (PST)
MIME-Version: 1.0
References: <20201224164953.32357-1-julien@xen.org> <0d592001-66de-4582-f7a1-6ee56cbe7c27@citrix.com>
 <11aad53b-2fca-d893-0897-532e5ac4248c@xen.org> <e5e73b9c-5653-7333-b252-0bcb1f7ebb20@citrix.com>
In-Reply-To: <e5e73b9c-5653-7333-b252-0bcb1f7ebb20@citrix.com>
From: Julien Grall <julien.grall.oss@gmail.com>
Date: Thu, 24 Dec 2020 18:15:02 +0000
Message-ID: <CAJ=z9a0VB8=za70W7Fsq1GrGXnXRgGzb1RDESbcovRrjf=c_xg@mail.gmail.com>
Subject: Re: [PATCH] xen/iommu: smmu: Rework how the SMMU version is detected
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Rahul Singh <Rahul.Singh@arm.com>, 
	Julien Grall <jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Content-Type: text/plain; charset="UTF-8"

On Thu, 24 Dec 2020 at 17:39, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>
> On 24/12/2020 17:10, Julien Grall wrote:
> >
> >
> > On 24/12/2020 17:00, Andrew Cooper wrote:
> >> On 24/12/2020 16:49, Julien Grall wrote:
> >>> From: Julien Grall <jgrall@amazon.com>
> >>>
> >>> Clang 11 will throw the following error:
> >>>
> >>> smmu.c:2284:18: error: cast to smaller integer type 'enum
> >>> arm_smmu_arch_version' from 'const void *'
> >>> [-Werror,-Wvoid-pointer-to-enum-cast]
> >>>          smmu->version = (enum arm_smmu_arch_version)of_id->data;
> >>>                          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> >>>
> >>> The error can be prevented by introduce static variable for each SMMU
> >>> version and store a pointer for each of them.
> >>>
> >>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> >>
> >> You can also fix this by casting through (uintptr_t) instead of (enum
> >> arm_smmu_arch_version), which wouldn't involve an extra indirection.
> >
> > I thought about using a different cast, but it feels just papering
> > over the issue.
> >
> > But I don't see what's the problem with the extra indirection... It is
> > self-contained and only used during the IOMMU initialization.
>
> Well - its extra .rodata for the sake of satisfying the compiler in a
> specific way.

You are making the assumption that I wrote this way only to make the compiler
happy. :)

While the patch was originally written because of the compiler, we will need to
introduce some workaround. So the enum is going to be transformed to a
structure.

I thought about introducing the structure now, but I felt it was more
controversial
than this approach.

>
> Irrespective, all of this looks like it ought to be initdata rather than
> runtime data, which is probably a bigger deal than worrying about one
> extra pointer access.

I thought about that. However, not all the users are in __init yet. So this
would technically be a layer violation (runtime function should not
use init variable).

In practice, I think all the users could be in init. I will check and send a
patch.

Cheers,


From xen-devel-bounces@lists.xenproject.org Thu Dec 24 19:03:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Dec 2020 19:03:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58797.103609 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksVst-0008Eh-KZ; Thu, 24 Dec 2020 19:02:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58797.103609; Thu, 24 Dec 2020 19:02:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksVst-0008Ea-Hf; Thu, 24 Dec 2020 19:02:47 +0000
Received: by outflank-mailman (input) for mailman id 58797;
 Thu, 24 Dec 2020 19:02:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/7aL=F4=knorrie.org=hans@srs-us1.protection.inumbo.net>)
 id 1ksVsr-0008EV-J5
 for xen-devel@lists.xenproject.org; Thu, 24 Dec 2020 19:02:45 +0000
Received: from syrinx.knorrie.org (unknown [82.94.188.77])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d61c699d-f5f5-4200-872a-7deb88ea165a;
 Thu, 24 Dec 2020 19:02:39 +0000 (UTC)
Received: from [IPv6:2a02:a213:2b81:e600::12] (unknown
 [IPv6:2a02:a213:2b81:e600::12])
 (using TLSv1.3 with cipher TLS_AES_128_GCM_SHA256 (128/128 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by syrinx.knorrie.org (Postfix) with ESMTPSA id 2091260B8E57A;
 Thu, 24 Dec 2020 20:02:38 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d61c699d-f5f5-4200-872a-7deb88ea165a
To: Maximilian Engelhardt <maxi@daemonizer.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>
References: <8b4564696cae00041848af8c5793172b80edadd5.1608742171.git.maxi@daemonizer.de>
From: Hans van Kranenburg <hans@knorrie.org>
Subject: Re: [XEN PATCH v2] docs: set date to SOURCE_DATE_EPOCH if available
Message-ID: <f602c6d8-2e3d-806f-0584-ddb478b151b0@knorrie.org>
Date: Thu, 24 Dec 2020 20:02:37 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <8b4564696cae00041848af8c5793172b80edadd5.1608742171.git.maxi@daemonizer.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

Hi,

On 12/23/20 5:56 PM, Maximilian Engelhardt wrote:
> check if a GNU date that supports the '-u -d @...' options and syntax or
> a BSD date are available. If so, use the appropriate options for the
> date command to produce a custom date if SOURCE_DATE_EPOCH is defined.
> If SOURCE_DATE_EPOCH is not defined or no suitable date command was
> found, use the current date. This enables reproducible builds.
> 
> Signed-off-by: Maximilian Engelhardt <maxi@daemonizer.de>
> 
> Changes in v2:
> - add capability detection for the 'date' command using ax_prog_date.m4
> - add information about detected date command to config/Docs.mk
> - only call a supported date command in docs/Makefile
> ---
> Please note the ax_prog_date.m4 macro is taken from the autoconf-archive
> repository [1] and it's license is GPL v3 or later with an exception for
> the generated configure script.
> 
> [1] https://www.gnu.org/software/autoconf-archive/
> ---
>  config/Docs.mk.in  |   3 +
>  docs/Makefile      |  16 +++-
>  docs/configure     | 213 +++++++++++++++++++++++++++++++++++++++++++++
>  docs/configure.ac  |   9 ++
>  m4/ax_prog_date.m4 | 139 +++++++++++++++++++++++++++++
>  5 files changed, 379 insertions(+), 1 deletion(-)
>  create mode 100644 m4/ax_prog_date.m4

Wait, what. The comment about the -d option already existing since 2005
(in the previous thread) is relevant here...

I guess there would be other reasons why the whole current Xen master
branch would not compile on e.g. Debian Sarge 3.1 from 2005... Like,
amd64 did not even exist as release architecture yet, back then...

I'd prefer

  1 file changed, 7 insertions(+), 1 deletion(-)

over

  5 files changed, 379 insertions(+), 1 deletion(-)

in this case.

Hans

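For reference, the date-flavor detection plus the SOURCE_DATE_EPOCH
conversion in the quoted patch below can be sketched as a small POSIX
shell helper. This is only an illustration of the logic, not code from
the patch; the function names are hypothetical.

```shell
#!/bin/sh
# Sketch of the patch's logic: detect the date(1) flavor the same way
# ax_prog_date.m4 does, then format SOURCE_DATE_EPOCH (seconds since
# the epoch) as a UTC YYYY-MM-DD date.  Falls back to the current date
# when the variable is unset or the flavor is unknown.

detect_date_type() {
    # GNU date announces itself in --version; BSD date accepts -v 1d.
    if date --version 2>/dev/null | head -1 | grep -q GNU; then
        echo GNU
    elif date -v 1d >/dev/null 2>&1; then
        echo BSD
    else
        echo Unknown
    fi
}

doc_date() {
    fmt='+%Y-%m-%d'
    if [ -n "$SOURCE_DATE_EPOCH" ]; then
        case "$(detect_date_type)" in
            GNU) date -u -d "@$SOURCE_DATE_EPOCH" "$fmt" ;;  # GNU syntax
            BSD) date -u -r "$SOURCE_DATE_EPOCH" "$fmt" ;;   # BSD syntax
            *)   date "$fmt" ;;                              # no reproducibility
        esac
    else
        date "$fmt"
    fi
}

SOURCE_DATE_EPOCH=1512031231; doc_date   # a fixed epoch gives a stable date
```

With a fixed epoch the output is identical on GNU and BSD systems, which
is exactly the reproducible-builds property the patch is after.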
> diff --git a/config/Docs.mk.in b/config/Docs.mk.in
> index e76e5cd5ff..cc2abd8fcc 100644
> --- a/config/Docs.mk.in
> +++ b/config/Docs.mk.in
> @@ -7,3 +7,6 @@ POD2HTML            := @POD2HTML@
>  POD2TEXT            := @POD2TEXT@
>  PANDOC              := @PANDOC@
>  PERL                := @PERL@
> +
> +# Variables
> +DATE_TYPE_AT        := @DATE_TYPE_AT@
> diff --git a/docs/Makefile b/docs/Makefile
> index 8de1efb6f5..c79fe0b63e 100644
> --- a/docs/Makefile
> +++ b/docs/Makefile
> @@ -3,7 +3,21 @@ include $(XEN_ROOT)/Config.mk
>  -include $(XEN_ROOT)/config/Docs.mk
>  
>  VERSION		:= $(shell $(MAKE) -C $(XEN_ROOT)/xen --no-print-directory xenversion)
> -DATE		:= $(shell date +%Y-%m-%d)
> +
> +DATE_FMT	:= +%Y-%m-%d
> +ifdef SOURCE_DATE_EPOCH
> +ifeq ($(DATE_TYPE_AT),GNU)
> +DATE		:= $(shell date -u -d "@$(SOURCE_DATE_EPOCH)" "$(DATE_FMT)")
> +else
> +ifeq ($(DATE_TYPE_AT),BSD)
> +DATE		:= $(shell date -u -r "$(SOURCE_DATE_EPOCH)" "$(DATE_FMT)")
> +else
> +DATE		:= $(shell date "$(DATE_FMT)")
> +endif
> +endif
> +else
> +DATE		:= $(shell date "$(DATE_FMT)")
> +endif
>  
>  DOC_ARCHES      := arm x86_32 x86_64
>  MAN_SECTIONS    := 1 5 7 8
> diff --git a/docs/configure b/docs/configure
> index f55268564e..9c3218f560 100755
> --- a/docs/configure
> +++ b/docs/configure
> @@ -587,6 +587,7 @@ PACKAGE_URL='https://www.xen.org/'
>  ac_unique_file="misc/xen-command-line.pandoc"
>  ac_subst_vars='LTLIBOBJS
>  LIBOBJS
> +DATE_TYPE_AT
>  PERL
>  PANDOC
>  POD2TEXT
> @@ -1808,6 +1809,86 @@ case "$host_os" in
>  esac
>  
>  
> +# Fetched from https://git.savannah.gnu.org/gitweb/?p=autoconf-archive.git;a=blob_plain;f=m4/ax_prog_date.m4
> +# Commit ID: fd1d25c14855037f6ccd7dbc7fe9ae5fc9bea8f4
> +# ===========================================================================
> +#       https://www.gnu.org/software/autoconf-archive/ax_prog_date.html
> +# ===========================================================================
> +#
> +# SYNOPSIS
> +#
> +#   AX_PROG_DATE()
> +#
> +# DESCRIPTION
> +#
> +#   This macro tries to determine the type of the date (1) command and some
> +#   of its non-standard capabilities.
> +#
> +#   The type is determined as follow:
> +#
> +#     * If the version string contains "GNU", then:
> +#       - The variable ax_cv_prog_date_gnu is set to "yes".
> +#       - The variable ax_cv_prog_date_type is set to "gnu".
> +#
> +#     * If date supports the "-v 1d" option, then:
> +#       - The variable ax_cv_prog_date_bsd is set to "yes".
> +#       - The variable ax_cv_prog_date_type is set to "bsd".
> +#
> +#     * If both previous checks fail, then:
> +#       - The variable ax_cv_prog_date_type is set to "unknown".
> +#
> +#   The following capabilities of GNU date are checked:
> +#
> +#     * If date supports the --date arg option, then:
> +#       - The variable ax_cv_prog_date_gnu_date is set to "yes".
> +#
> +#     * If date supports the --utc arg option, then:
> +#       - The variable ax_cv_prog_date_gnu_utc is set to "yes".
> +#
> +#   The following capabilities of BSD date are checked:
> +#
> +#     * If date supports the -v 1d  option, then:
> +#       - The variable ax_cv_prog_date_bsd_adjust is set to "yes".
> +#
> +#     * If date supports the -r arg option, then:
> +#       - The variable ax_cv_prog_date_bsd_date is set to "yes".
> +#
> +#   All the aforementioned variables are set to "no" before a check is
> +#   performed.
> +#
> +# LICENSE
> +#
> +#   Copyright (c) 2017 Enrico M. Crisostomo <enrico.m.crisostomo@gmail.com>
> +#
> +#   This program is free software: you can redistribute it and/or modify it
> +#   under the terms of the GNU General Public License as published by the
> +#   Free Software Foundation, either version 3 of the License, or (at your
> +#   option) any later version.
> +#
> +#   This program is distributed in the hope that it will be useful, but
> +#   WITHOUT ANY WARRANTY; without even the implied warranty of
> +#   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
> +#   Public License for more details.
> +#
> +#   You should have received a copy of the GNU General Public License along
> +#   with this program. If not, see <http://www.gnu.org/licenses/>.
> +#
> +#   As a special exception, the respective Autoconf Macro's copyright owner
> +#   gives unlimited permission to copy, distribute and modify the configure
> +#   scripts that are the output of Autoconf when processing the Macro. You
> +#   need not follow the terms of the GNU General Public License when using
> +#   or distributing such scripts, even though portions of the text of the
> +#   Macro appear in them. The GNU General Public License (GPL) does govern
> +#   all other use of the material that constitutes the Autoconf Macro.
> +#
> +#   This special exception to the GPL applies to versions of the Autoconf
> +#   Macro released by the Autoconf Archive. When you make and distribute a
> +#   modified version of the Autoconf Macro, you may extend this special
> +#   exception to the GPL to apply to your modified version as well.
> +
> +#serial 3
> +
> +
>  
>  
>  test "x$prefix" = "xNONE" && prefix=$ac_default_prefix
> @@ -2267,6 +2348,138 @@ then
>      as_fn_error $? "Unable to find perl, please install perl" "$LINENO" 5
>  fi
>  
> +  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for GNU date" >&5
> +$as_echo_n "checking for GNU date... " >&6; }
> +if ${ax_cv_prog_date_gnu+:} false; then :
> +  $as_echo_n "(cached) " >&6
> +else
> +
> +    ax_cv_prog_date_gnu=no
> +    if date --version 2>/dev/null | head -1 | grep -q GNU
> +    then
> +      ax_cv_prog_date_gnu=yes
> +    fi
> +
> +fi
> +{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ax_cv_prog_date_gnu" >&5
> +$as_echo "$ax_cv_prog_date_gnu" >&6; }
> +  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for BSD date" >&5
> +$as_echo_n "checking for BSD date... " >&6; }
> +if ${ax_cv_prog_date_bsd+:} false; then :
> +  $as_echo_n "(cached) " >&6
> +else
> +
> +    ax_cv_prog_date_bsd=no
> +    if date -v 1d > /dev/null 2>&1
> +    then
> +      ax_cv_prog_date_bsd=yes
> +    fi
> +
> +fi
> +{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ax_cv_prog_date_bsd" >&5
> +$as_echo "$ax_cv_prog_date_bsd" >&6; }
> +  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for date type" >&5
> +$as_echo_n "checking for date type... " >&6; }
> +if ${ax_cv_prog_date_type+:} false; then :
> +  $as_echo_n "(cached) " >&6
> +else
> +
> +    ax_cv_prog_date_type=unknown
> +    if test "x${ax_cv_prog_date_gnu}" = "xyes"
> +    then
> +      ax_cv_prog_date_type=gnu
> +    elif test "x${ax_cv_prog_date_bsd}" = "xyes"
> +    then
> +      ax_cv_prog_date_type=bsd
> +    fi
> +
> +fi
> +{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ax_cv_prog_date_type" >&5
> +$as_echo "$ax_cv_prog_date_type" >&6; }
> +  if test "x$ax_cv_prog_date_gnu" = xyes; then :
> +
> +    { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether GNU date supports --date" >&5
> +$as_echo_n "checking whether GNU date supports --date... " >&6; }
> +if ${ax_cv_prog_date_gnu_date+:} false; then :
> +  $as_echo_n "(cached) " >&6
> +else
> +
> +      ax_cv_prog_date_gnu_date=no
> +      if date --date=@1512031231 > /dev/null 2>&1
> +      then
> +        ax_cv_prog_date_gnu_date=yes
> +      fi
> +
> +fi
> +{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ax_cv_prog_date_gnu_date" >&5
> +$as_echo "$ax_cv_prog_date_gnu_date" >&6; }
> +    { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether GNU date supports --utc" >&5
> +$as_echo_n "checking whether GNU date supports --utc... " >&6; }
> +if ${ax_cv_prog_date_gnu_utc+:} false; then :
> +  $as_echo_n "(cached) " >&6
> +else
> +
> +      ax_cv_prog_date_gnu_utc=no
> +      if date --utc > /dev/null 2>&1
> +      then
> +        ax_cv_prog_date_gnu_utc=yes
> +      fi
> +
> +fi
> +{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ax_cv_prog_date_gnu_utc" >&5
> +$as_echo "$ax_cv_prog_date_gnu_utc" >&6; }
> +
> +fi
> +  if test "x$ax_cv_prog_date_bsd" = xyes; then :
> +
> +    { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether BSD date supports -r" >&5
> +$as_echo_n "checking whether BSD date supports -r... " >&6; }
> +if ${ax_cv_prog_date_bsd_date+:} false; then :
> +  $as_echo_n "(cached) " >&6
> +else
> +
> +      ax_cv_prog_date_bsd_date=no
> +      if date -r 1512031231 > /dev/null 2>&1
> +      then
> +        ax_cv_prog_date_bsd_date=yes
> +      fi
> +
> +fi
> +{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ax_cv_prog_date_bsd_date" >&5
> +$as_echo "$ax_cv_prog_date_bsd_date" >&6; }
> +
> +fi
> +    if test "x$ax_cv_prog_date_bsd" = xyes; then :
> +
> +    { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether BSD date supports -v" >&5
> +$as_echo_n "checking whether BSD date supports -v... " >&6; }
> +if ${ax_cv_prog_date_bsd_adjust+:} false; then :
> +  $as_echo_n "(cached) " >&6
> +else
> +
> +      ax_cv_prog_date_bsd_adjust=no
> +      if date -v 1d > /dev/null 2>&1
> +      then
> +        ax_cv_prog_date_bsd_adjust=yes
> +      fi
> +
> +fi
> +{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ax_cv_prog_date_bsd_adjust" >&5
> +$as_echo "$ax_cv_prog_date_bsd_adjust" >&6; }
> +
> +fi
> +
> +if test "$ax_cv_prog_date_gnu_date:$ax_cv_prog_date_gnu_utc" = yes:yes; then :
> +  DATE_TYPE_AT="GNU"
> +else
> +  if test "x$ax_cv_prog_date_bsd_date" = xyes; then :
> +  DATE_TYPE_AT="BSD"
> +else
> +  DATE_TYPE_AT="Unknown"
> +fi
> +fi
> +
> +
>  cat >confcache <<\_ACEOF
>  # This file is a shell script that caches the results of configure
>  # tests run on this system so they can be shared between configure
> diff --git a/docs/configure.ac b/docs/configure.ac
> index cb5a6eaa4c..c87471e706 100644
> --- a/docs/configure.ac
> +++ b/docs/configure.ac
> @@ -17,6 +17,7 @@ m4_include([../m4/docs_tool.m4])
>  m4_include([../m4/path_or_fail.m4])
>  m4_include([../m4/features.m4])
>  m4_include([../m4/paths.m4])
> +m4_include([../m4/ax_prog_date.m4])
>  
>  AX_XEN_EXPAND_CONFIG()
>  
> @@ -29,4 +30,12 @@ AX_DOCS_TOOL_PROG([PANDOC], [pandoc])
>  AC_ARG_VAR([PERL], [Path to Perl parser])
>  AX_PATH_PROG_OR_FAIL([PERL], [perl])
>  
> +AX_PROG_DATE
> +AS_IF([test "$ax_cv_prog_date_gnu_date:$ax_cv_prog_date_gnu_utc" = yes:yes],
> +    [DATE_TYPE_AT="GNU"],
> +    [AS_IF([test "x$ax_cv_prog_date_bsd_date" = xyes],
> +        [DATE_TYPE_AT="BSD"],
> +        [DATE_TYPE_AT="Unknown"])])
> +AC_SUBST([DATE_TYPE_AT])
> +
>  AC_OUTPUT()
> diff --git a/m4/ax_prog_date.m4 b/m4/ax_prog_date.m4
> new file mode 100644
> index 0000000000..675412bac2
> --- /dev/null
> +++ b/m4/ax_prog_date.m4
> @@ -0,0 +1,139 @@
> +# Fetched from https://git.savannah.gnu.org/gitweb/?p=autoconf-archive.git;a=blob_plain;f=m4/ax_prog_date.m4
> +# Commit ID: fd1d25c14855037f6ccd7dbc7fe9ae5fc9bea8f4
> +# ===========================================================================
> +#       https://www.gnu.org/software/autoconf-archive/ax_prog_date.html
> +# ===========================================================================
> +#
> +# SYNOPSIS
> +#
> +#   AX_PROG_DATE()
> +#
> +# DESCRIPTION
> +#
> +#   This macro tries to determine the type of the date (1) command and some
> +#   of its non-standard capabilities.
> +#
> +#   The type is determined as follow:
> +#
> +#     * If the version string contains "GNU", then:
> +#       - The variable ax_cv_prog_date_gnu is set to "yes".
> +#       - The variable ax_cv_prog_date_type is set to "gnu".
> +#
> +#     * If date supports the "-v 1d" option, then:
> +#       - The variable ax_cv_prog_date_bsd is set to "yes".
> +#       - The variable ax_cv_prog_date_type is set to "bsd".
> +#
> +#     * If both previous checks fail, then:
> +#       - The variable ax_cv_prog_date_type is set to "unknown".
> +#
> +#   The following capabilities of GNU date are checked:
> +#
> +#     * If date supports the --date arg option, then:
> +#       - The variable ax_cv_prog_date_gnu_date is set to "yes".
> +#
> +#     * If date supports the --utc arg option, then:
> +#       - The variable ax_cv_prog_date_gnu_utc is set to "yes".
> +#
> +#   The following capabilities of BSD date are checked:
> +#
> +#     * If date supports the -v 1d  option, then:
> +#       - The variable ax_cv_prog_date_bsd_adjust is set to "yes".
> +#
> +#     * If date supports the -r arg option, then:
> +#       - The variable ax_cv_prog_date_bsd_date is set to "yes".
> +#
> +#   All the aforementioned variables are set to "no" before a check is
> +#   performed.
> +#
> +# LICENSE
> +#
> +#   Copyright (c) 2017 Enrico M. Crisostomo <enrico.m.crisostomo@gmail.com>
> +#
> +#   This program is free software: you can redistribute it and/or modify it
> +#   under the terms of the GNU General Public License as published by the
> +#   Free Software Foundation, either version 3 of the License, or (at your
> +#   option) any later version.
> +#
> +#   This program is distributed in the hope that it will be useful, but
> +#   WITHOUT ANY WARRANTY; without even the implied warranty of
> +#   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
> +#   Public License for more details.
> +#
> +#   You should have received a copy of the GNU General Public License along
> +#   with this program. If not, see <http://www.gnu.org/licenses/>.
> +#
> +#   As a special exception, the respective Autoconf Macro's copyright owner
> +#   gives unlimited permission to copy, distribute and modify the configure
> +#   scripts that are the output of Autoconf when processing the Macro. You
> +#   need not follow the terms of the GNU General Public License when using
> +#   or distributing such scripts, even though portions of the text of the
> +#   Macro appear in them. The GNU General Public License (GPL) does govern
> +#   all other use of the material that constitutes the Autoconf Macro.
> +#
> +#   This special exception to the GPL applies to versions of the Autoconf
> +#   Macro released by the Autoconf Archive. When you make and distribute a
> +#   modified version of the Autoconf Macro, you may extend this special
> +#   exception to the GPL to apply to your modified version as well.
> +
> +#serial 3
> +
> +AC_DEFUN([AX_PROG_DATE], [dnl
> +  AC_CACHE_CHECK([for GNU date], [ax_cv_prog_date_gnu], [
> +    ax_cv_prog_date_gnu=no
> +    if date --version 2>/dev/null | head -1 | grep -q GNU
> +    then
> +      ax_cv_prog_date_gnu=yes
> +    fi
> +  ])
> +  AC_CACHE_CHECK([for BSD date], [ax_cv_prog_date_bsd], [
> +    ax_cv_prog_date_bsd=no
> +    if date -v 1d > /dev/null 2>&1
> +    then
> +      ax_cv_prog_date_bsd=yes
> +    fi
> +  ])
> +  AC_CACHE_CHECK([for date type], [ax_cv_prog_date_type], [
> +    ax_cv_prog_date_type=unknown
> +    if test "x${ax_cv_prog_date_gnu}" = "xyes"
> +    then
> +      ax_cv_prog_date_type=gnu
> +    elif test "x${ax_cv_prog_date_bsd}" = "xyes"
> +    then
> +      ax_cv_prog_date_type=bsd
> +    fi
> +  ])
> +  AS_VAR_IF([ax_cv_prog_date_gnu], [yes], [
> +    AC_CACHE_CHECK([whether GNU date supports --date], [ax_cv_prog_date_gnu_date], [
> +      ax_cv_prog_date_gnu_date=no
> +      if date --date=@1512031231 > /dev/null 2>&1
> +      then
> +        ax_cv_prog_date_gnu_date=yes
> +      fi
> +    ])
> +    AC_CACHE_CHECK([whether GNU date supports --utc], [ax_cv_prog_date_gnu_utc], [
> +      ax_cv_prog_date_gnu_utc=no
> +      if date --utc > /dev/null 2>&1
> +      then
> +        ax_cv_prog_date_gnu_utc=yes
> +      fi
> +    ])
> +  ])
> +  AS_VAR_IF([ax_cv_prog_date_bsd], [yes], [
> +    AC_CACHE_CHECK([whether BSD date supports -r], [ax_cv_prog_date_bsd_date], [
> +      ax_cv_prog_date_bsd_date=no
> +      if date -r 1512031231 > /dev/null 2>&1
> +      then
> +        ax_cv_prog_date_bsd_date=yes
> +      fi
> +    ])
> +  ])
> +    AS_VAR_IF([ax_cv_prog_date_bsd], [yes], [
> +    AC_CACHE_CHECK([whether BSD date supports -v], [ax_cv_prog_date_bsd_adjust], [
> +      ax_cv_prog_date_bsd_adjust=no
> +      if date -v 1d > /dev/null 2>&1
> +      then
> +        ax_cv_prog_date_bsd_adjust=yes
> +      fi
> +    ])
> +  ])
> +])dnl AX_PROG_DATE
> 



From xen-devel-bounces@lists.xenproject.org Thu Dec 24 19:48:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Dec 2020 19:48:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58802.103621 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksWat-0003Np-U7; Thu, 24 Dec 2020 19:48:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58802.103621; Thu, 24 Dec 2020 19:48:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksWat-0003Ni-Qu; Thu, 24 Dec 2020 19:48:15 +0000
Received: by outflank-mailman (input) for mailman id 58802;
 Thu, 24 Dec 2020 19:48:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/7aL=F4=knorrie.org=hans@srs-us1.protection.inumbo.net>)
 id 1ksWar-0003Nd-TG
 for xen-devel@lists.xenproject.org; Thu, 24 Dec 2020 19:48:13 +0000
Received: from syrinx.knorrie.org (unknown [2001:888:2177::4d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 23c22b4a-5727-4279-91d4-7032c08aaffb;
 Thu, 24 Dec 2020 19:48:12 +0000 (UTC)
Received: from [IPv6:2a02:a213:2b81:e600::12] (unknown
 [IPv6:2a02:a213:2b81:e600::12])
 (using TLSv1.3 with cipher TLS_AES_128_GCM_SHA256 (128/128 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by syrinx.knorrie.org (Postfix) with ESMTPSA id 09F6160B8E77E;
 Thu, 24 Dec 2020 20:48:11 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 23c22b4a-5727-4279-91d4-7032c08aaffb
Subject: Re: [PATCH] xen/iommu: smmu: Use 1UL << 31 rather than 1 << 31
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, Rahul.Singh@arm.com,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20201224152419.22453-1-julien@xen.org>
From: Hans van Kranenburg <hans@knorrie.org>
Message-ID: <617a8755-1993-d46d-d1bf-2f518b5d4233@knorrie.org>
Date: Thu, 24 Dec 2020 20:48:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201224152419.22453-1-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 12/24/20 4:24 PM, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Replace all uses of 1 << 31 with 1UL << 31 to prevent undefined
> behavior in the SMMU driver.

You're replacing it with 1U, not 1UL, in the patch below.

Hans

> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ---
>  xen/drivers/passthrough/arm/smmu.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index ed04d85e05e9..3e8aa378669b 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -405,7 +405,7 @@ static struct iommu_group *iommu_group_get(struct device *dev)
>  #define ID0_NUMSMRG_SHIFT		0
>  #define ID0_NUMSMRG_MASK		0xff
>  
> -#define ID1_PAGESIZE			(1 << 31)
> +#define ID1_PAGESIZE			(1U << 31)
>  #define ID1_NUMPAGENDXB_SHIFT		28
>  #define ID1_NUMPAGENDXB_MASK		7
>  #define ID1_NUMS2CB_SHIFT		16
> @@ -438,7 +438,7 @@ static struct iommu_group *iommu_group_get(struct device *dev)
>  
>  /* Stream mapping registers */
>  #define ARM_SMMU_GR0_SMR(n)		(0x800 + ((n) << 2))
> -#define SMR_VALID			(1 << 31)
> +#define SMR_VALID			(1U << 31)
>  #define SMR_MASK_SHIFT			16
>  #define SMR_MASK_MASK			0x7fff
>  #define SMR_ID_SHIFT			0
> @@ -506,7 +506,7 @@ static struct iommu_group *iommu_group_get(struct device *dev)
>  #define RESUME_RETRY			(0 << 0)
>  #define RESUME_TERMINATE		(1 << 0)
>  
> -#define TTBCR_EAE			(1 << 31)
> +#define TTBCR_EAE			(1U << 31)
>  
>  #define TTBCR_PASIZE_SHIFT		16
>  #define TTBCR_PASIZE_MASK		0x7
> @@ -562,7 +562,7 @@ static struct iommu_group *iommu_group_get(struct device *dev)
>  #define MAIR_ATTR_IDX_CACHE		1
>  #define MAIR_ATTR_IDX_DEV		2
>  
> -#define FSR_MULTI			(1 << 31)
> +#define FSR_MULTI			(1U << 31)
>  #define FSR_SS				(1 << 30)
>  #define FSR_UUT				(1 << 8)
>  #define FSR_ASF				(1 << 7)
> 



From xen-devel-bounces@lists.xenproject.org Thu Dec 24 19:57:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Dec 2020 19:57:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58807.103633 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksWjg-0004Lg-Ha; Thu, 24 Dec 2020 19:57:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58807.103633; Thu, 24 Dec 2020 19:57:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksWjg-0004LZ-Ed; Thu, 24 Dec 2020 19:57:20 +0000
Received: by outflank-mailman (input) for mailman id 58807;
 Thu, 24 Dec 2020 19:57:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jyih=F4=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1ksWje-0004LU-D9
 for xen-devel@lists.xenproject.org; Thu, 24 Dec 2020 19:57:18 +0000
Received: from mail-lf1-x130.google.com (unknown [2a00:1450:4864:20::130])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 69804aaf-80f5-454d-b233-6909a12adc8d;
 Thu, 24 Dec 2020 19:57:17 +0000 (UTC)
Received: by mail-lf1-x130.google.com with SMTP id 23so6586451lfg.10
 for <xen-devel@lists.xenproject.org>; Thu, 24 Dec 2020 11:57:17 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id l13sm4230158lje.138.2020.12.24.11.57.14
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 24 Dec 2020 11:57:15 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 69804aaf-80f5-454d-b233-6909a12adc8d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=/7eAk2jGlnv74ubcD8r1ftTtGUs2q9kdWJly7N4aqc4=;
        b=SNJ+FoyuTTb+eOXZ+2aFaw97Hwm/HWS5+v29AqnUWO8+imyNHRQ0xyr07b5OK6zYuB
         fCVQSvNvt6HkZJKwpIIiDg6gdHarlT0dhx5SPZ7PSiuTtqYhRafQq7cEEC5klhe5Ojfa
         rnxecV9i4EsrMBiezdV4hhw3PIi/ysXsYuzozD0B4ltVoYy6vg4wQ/uQ3XPKiZmp15qt
         kdr7dHehDaWirGC/YpEEqDeOsuOL4tDkeksQYwjYw6q/y1/sd8ltOYK/f+39uKX1KVsW
         P5ToJs7UHByxMNjHGyoAD0zvilBv+39UviScap4jv8A21HgMhMLIRh5WLztEp882NcIY
         6nVw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=/7eAk2jGlnv74ubcD8r1ftTtGUs2q9kdWJly7N4aqc4=;
        b=QXJIesGTfY85oJ8nS+3yYUIzFyJD/r+80FeIbYfbMFJEVmFHI/vBemchoRZ1tFrXCc
         A4Sv0h6HC+KH2OPNngJbkI4QE/T7CoMf7STBu2BvTjer0AvXbPZpIK9pcPRU1XH3IPPT
         xt3mXBmKzBQbKNiUbeZIgT/QrYk9BiKe81iFnN7KLcjFZ/SsHSIaREqEv/IgvYjoBoMp
         ogUlhJbeabn7MhSM90jqHaTBnyd+D0lcImW93bdCGPUF1W9E9gN6Kb4mnd2ed3DHr/ok
         FRfAd2ZupB9IoILiCbEGbD/pwwkj+Wrrq2h4F1OpY7op0V2qYszzdMXnpjMjAfDUa3IU
         9Umg==
X-Gm-Message-State: AOAM531AQhEZwrmzDOm/HvRDxmgnApSloNIyr1ObP9VKZ6mVe+EXyanA
	gAcCKPQ48Dg1938324kUAMM=
X-Google-Smtp-Source: ABdhPJwAQXT8rDhhvW5BDI8vc29560RxImWLQRZNOYZJaml1fxUDBa/Lg0tvGLZTn7ay9Ch5/LKmVw==
X-Received: by 2002:a19:4311:: with SMTP id q17mr12445762lfa.453.1608839836351;
        Thu, 24 Dec 2020 11:57:16 -0800 (PST)
Subject: Re: [PATCH] xen/serial: scif: Rework how the parameters are found
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com,
 Rahul.Singh@arm.com, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20201224165021.449-1-julien@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <a73a558c-77cd-e527-d82e-88752589cecc@gmail.com>
Date: Thu, 24 Dec 2020 21:57:14 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201224165021.449-1-julien@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 24.12.20 18:50, Julien Grall wrote:

Hi Julien

> From: Julien Grall <jgrall@amazon.com>
>
> clang 11 will throw the following error while building Xen:
>
> scif-uart.c:333:33: error: cast to smaller integer type 'enum port_types' from 'const void *' [-Werror,-Wvoid-pointer-to-enum-cast]
>      uart->params = &port_params[(enum port_types)match->data];
>                                  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> The error can be prevented by directly storing a pointer to the port
> parameters rather than a cast of the port type.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
>
> ---
>
> Only build tested as I don't have the HW.

I don't have access to SCIFA-based HW at the moment, but on a Gen3 
H3 SoC (SCIF) it works.


> ---
>   xen/drivers/char/scif-uart.c | 6 +++---
>   1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/xen/drivers/char/scif-uart.c b/xen/drivers/char/scif-uart.c
> index 9d3f66b55b67..ee204a11a471 100644
> --- a/xen/drivers/char/scif-uart.c
> +++ b/xen/drivers/char/scif-uart.c
> @@ -286,8 +286,8 @@ static struct uart_driver __read_mostly scif_uart_driver = {
>   
>   static const struct dt_device_match scif_uart_dt_match[] __initconst =
>   {
> -    { .compatible = "renesas,scif",  .data = (void *)SCIF_PORT },
> -    { .compatible = "renesas,scifa", .data = (void *)SCIFA_PORT },
> +    { .compatible = "renesas,scif",  .data = &port_params[SCIF_PORT] },
> +    { .compatible = "renesas,scifa", .data = &port_params[SCIFA_PORT] },
>       { /* sentinel */ },
>   };
>   
> @@ -330,7 +330,7 @@ static int __init scif_uart_init(struct dt_device_node *dev,
>   
>       match = dt_match_node(scif_uart_dt_match, dev);
>       ASSERT( match );
> -    uart->params = &port_params[(enum port_types)match->data];
> +    uart->params = match->data;
>   
>       uart->vuart.base_addr  = addr;
>       uart->vuart.size       = size;

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Thu Dec 24 23:25:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 24 Dec 2020 23:25:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58815.103651 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksZys-0005bi-6x; Thu, 24 Dec 2020 23:25:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58815.103651; Thu, 24 Dec 2020 23:25:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksZys-0005bb-3l; Thu, 24 Dec 2020 23:25:14 +0000
Received: by outflank-mailman (input) for mailman id 58815;
 Thu, 24 Dec 2020 23:25:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksZyq-0005bT-8I; Thu, 24 Dec 2020 23:25:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksZyp-0007zk-T8; Thu, 24 Dec 2020 23:25:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksZyp-0006p5-K4; Thu, 24 Dec 2020 23:25:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ksZyp-0005iu-JZ; Thu, 24 Dec 2020 23:25:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=npp72kgQ6Hli726SCyG+ARe3iPXhH+vyDLDLG8k9nfs=; b=DuSfcrvR+Q3vSMLSmNXnjzlzbP
	7iT4Lxz0A1rKV0Xf6xCKX/S9wJGbQJ/8TR5DIQ5TpjJ6ZdGxXDhqrapvmZTJaOBiHFw99V44PhfGG
	TUELyiPOg/sw4koUQ1srmPil04rWZJqko3JSncBZnLIDlVwCz59HDzweaz4aYlw49HE0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157872-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157872: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a05f8ecd88f15273d033b6f044b850a8af84a5b8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 24 Dec 2020 23:25:11 +0000

flight 157872 qemu-mainline real [real]
flight 157879 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157872/
http://logs.test-lab.xenproject.org/osstest/logs/157879/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  126 days
Failing since        152659  2020-08-21 14:07:39 Z  125 days  257 attempts
Testing same since   157670  2020-12-18 13:57:58 Z    6 days   10 attempts

------------------------------------------------------------
323 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 79306 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 25 02:28:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Dec 2020 02:28:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58827.103676 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kscqC-0003Ro-HQ; Fri, 25 Dec 2020 02:28:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58827.103676; Fri, 25 Dec 2020 02:28:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kscqC-0003Rh-B6; Fri, 25 Dec 2020 02:28:28 +0000
Received: by outflank-mailman (input) for mailman id 58827;
 Fri, 25 Dec 2020 02:28:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kscqA-0003RZ-Pa; Fri, 25 Dec 2020 02:28:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kscqA-0001KQ-IY; Fri, 25 Dec 2020 02:28:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kscqA-0004wu-6c; Fri, 25 Dec 2020 02:28:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kscqA-0007PC-64; Fri, 25 Dec 2020 02:28:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=TGVUVW8l+S9ZJvtAjvRSGfZJGWIcqU6yv4Mtutji21g=; b=Dg4531oEI3QlhxBpKxzVMEXHFj
	rOAmonz4EHnWQbSdRLXZ/fxHGW75k3guuJO6lVbplFwbsokxknwmaxOMbXduhpjNtP5ptPlA7Oxog
	sLtxTAsCdjuhssTpVTLo+IgqbKAslFYeawfTGJFYxAgoqWJwZT/H+0bWxT8G/9Vue3sY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157877-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157877: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:heisenbug
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=58cf05f597b03a8212d9ecf2c79ee046d3ee8ad9
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 25 Dec 2020 02:28:26 +0000

flight 157877 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157877/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle 10 host-ping-check-xen fail in 157871 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen fail in 157871 pass in 157864
 test-arm64-arm64-xl       10 host-ping-check-xen fail in 157871 pass in 157864
 test-amd64-amd64-examine    4 memdisk-try-append fail in 157871 pass in 157877
 test-arm64-arm64-xl-xsm       8 xen-boot         fail in 157871 pass in 157877
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 157871 pass in 157877
 test-arm64-arm64-libvirt-xsm  8 xen-boot                   fail pass in 157871
 test-arm64-arm64-xl-seattle   8 xen-boot                   fail pass in 157871
 test-arm64-arm64-xl           8 xen-boot                   fail pass in 157871

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl   11 leak-check/basis(11) fail in 157864 blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11) fail in 157864 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                58cf05f597b03a8212d9ecf2c79ee046d3ee8ad9
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  146 days
Failing since        152366  2020-08-01 20:49:34 Z  145 days  252 attempts
Testing same since   157864  2020-12-24 01:10:27 Z    1 days    3 attempts

------------------------------------------------------------
4318 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 969560 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 25 08:16:30 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157885-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157885: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=bed50bcbbb2796aa88f1c85a2424fe9bd7944f89
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 25 Dec 2020 08:16:08 +0000

flight 157885 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157885/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              bed50bcbbb2796aa88f1c85a2424fe9bd7944f89
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  168 days
Failing since        151818  2020-07-11 04:18:52 Z  167 days  162 attempts
Testing same since   157715  2020-12-19 04:19:22 Z    6 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 33734 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 25 08:53:20 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157881-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157881: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a05f8ecd88f15273d033b6f044b850a8af84a5b8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 25 Dec 2020 08:53:09 +0000

flight 157881 qemu-mainline real [real]
flight 157887 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157881/
http://logs.test-lab.xenproject.org/osstest/logs/157887/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  126 days
Failing since        152659  2020-08-21 14:07:39 Z  125 days  258 attempts
Testing same since   157670  2020-12-18 13:57:58 Z    6 days   11 attempts

------------------------------------------------------------
323 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 79306 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 25 11:53:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 25 Dec 2020 11:53:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58879.103742 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksleR-00038j-Pq; Fri, 25 Dec 2020 11:52:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58879.103742; Fri, 25 Dec 2020 11:52:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ksleR-00038c-MW; Fri, 25 Dec 2020 11:52:55 +0000
Received: by outflank-mailman (input) for mailman id 58879;
 Fri, 25 Dec 2020 11:52:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksleQ-00038U-6g; Fri, 25 Dec 2020 11:52:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksleP-0003BS-Tp; Fri, 25 Dec 2020 11:52:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ksleP-0008U6-IV; Fri, 25 Dec 2020 11:52:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ksleP-0004La-I1; Fri, 25 Dec 2020 11:52:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=IUHkzmEdOLuW/DrJaPL6ZJTGyJ5ycBjqxkf/jWoCcQw=; b=v0/+Yv1mgV+dYWZ9IkWqNCGEle
	vyRWFe/M5AJTmfVvWVtXiSlH+0a1+/fMcfIGH/M5TpEq61HDuN1z1GijrO3HaabiEZrzyc/7fBJGh
	xaO5324rFOjMRn+keRhSWZguoAuzBQzPNaN3u3dXO8g7569r6nW4RwjUNfq4HO2x2R7k=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157882-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157882: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
X-Osstest-Versions-That:
    xen=98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 25 Dec 2020 11:52:53 +0000

flight 157882 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157882/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157865
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157865
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157865
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157865
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157865
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157865
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157865
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157865
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157865
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157865
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157865
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
baseline version:
 xen                  98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc

Last test of basis   157882  2020-12-25 01:52:32 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Fri Dec 25 15:52:00 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157884-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157884: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:build-armhf-pvops:kernel-build:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=71c5f03154ac1cb27423b984743ccc2f5d11d14d
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 25 Dec 2020 15:51:34 +0000

flight 157884 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157884/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 build-armhf-pvops             6 kernel-build             fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                71c5f03154ac1cb27423b984743ccc2f5d11d14d
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  146 days
Failing since        152366  2020-08-01 20:49:34 Z  145 days  253 attempts
Testing same since   157884  2020-12-25 02:32:08 Z    0 days    1 attempts

------------------------------------------------------------
4329 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images



Not pushing.

(No revision log; it would be 974520 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 25 18:38:18 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157889-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157889: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a05f8ecd88f15273d033b6f044b850a8af84a5b8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 25 Dec 2020 18:37:57 +0000

flight 157889 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157889/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  127 days
Failing since        152659  2020-08-21 14:07:39 Z  126 days  259 attempts
Testing same since   157670  2020-12-18 13:57:58 Z    7 days   12 attempts

------------------------------------------------------------
323 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 79306 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Dec 25 23:19:35 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157893-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157893: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:build-armhf-pvops:kernel-build:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=71c5f03154ac1cb27423b984743ccc2f5d11d14d
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 25 Dec 2020 23:19:17 +0000

flight 157893 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157893/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-arm64-arm64-xl          10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 build-armhf-pvops             6 kernel-build             fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen fail in 157884 REGR. vs. 152332
 test-arm64-arm64-xl-credit2 10 host-ping-check-xen fail in 157884 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit1   8 xen-boot         fail in 157884 pass in 157893
 test-arm64-arm64-xl           8 xen-boot         fail in 157884 pass in 157893
 test-arm64-arm64-libvirt-xsm  8 xen-boot                   fail pass in 157884
 test-arm64-arm64-xl-credit2   8 xen-boot                   fail pass in 157884

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                71c5f03154ac1cb27423b984743ccc2f5d11d14d
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  147 days
Failing since        152366  2020-08-01 20:49:34 Z  146 days  254 attempts
Testing same since   157884  2020-12-25 02:32:08 Z    0 days    2 attempts

------------------------------------------------------------
4329 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 974520 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Dec 26 04:17:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 26 Dec 2020 04:17:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58958.103838 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kt11L-0003MC-LK; Sat, 26 Dec 2020 04:17:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58958.103838; Sat, 26 Dec 2020 04:17:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kt11L-0003M3-DB; Sat, 26 Dec 2020 04:17:35 +0000
Received: by outflank-mailman (input) for mailman id 58958;
 Sat, 26 Dec 2020 04:17:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kt11K-0003Lv-Cs; Sat, 26 Dec 2020 04:17:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kt11K-0001pR-4B; Sat, 26 Dec 2020 04:17:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kt11J-0000Bw-Q0; Sat, 26 Dec 2020 04:17:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kt11J-00012o-Ou; Sat, 26 Dec 2020 04:17:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=t5hbj4mfQwANKbhn+iofwAkoIMzkpipB0mk3TWa1Ps4=; b=mjvn5bV+Vdy+8f73ACDARgVXr1
	f7s/dNY0IXbB0GM0KrbWqPtjOLTGae1+vYt8zy+gijyaOov9ofLxAWJ3TqNrMfn1WJ2I+zWIGG+vy
	4MZB7XBohI/KyR3SNnO8FSL88vXER0SGvKz95Ad9SI5RCJC+Y11lbDWp4StQbOpexJ/c=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157895-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157895: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a05f8ecd88f15273d033b6f044b850a8af84a5b8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 26 Dec 2020 04:17:33 +0000

flight 157895 qemu-mainline real [real]
flight 157901 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157895/
http://logs.test-lab.xenproject.org/osstest/logs/157901/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  127 days
Failing since        152659  2020-08-21 14:07:39 Z  126 days  260 attempts
Testing same since   157670  2020-12-18 13:57:58 Z    7 days   13 attempts

------------------------------------------------------------
323 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 79306 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Dec 26 06:54:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 26 Dec 2020 06:54:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58972.103865 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kt3TJ-0000Xq-O4; Sat, 26 Dec 2020 06:54:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58972.103865; Sat, 26 Dec 2020 06:54:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kt3TJ-0000Xj-KV; Sat, 26 Dec 2020 06:54:37 +0000
Received: by outflank-mailman (input) for mailman id 58972;
 Sat, 26 Dec 2020 06:54:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kt3TJ-0000Xb-1Z; Sat, 26 Dec 2020 06:54:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kt3TI-0004oF-Mg; Sat, 26 Dec 2020 06:54:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kt3TI-0000of-Bd; Sat, 26 Dec 2020 06:54:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kt3TI-0000L0-BA; Sat, 26 Dec 2020 06:54:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tTORpVUBJHlpqiqPx/KarDsbbMqcQ2Wwnl2rJD5lyBI=; b=yiNVivD8pOiR30Yqewlc2yO3d/
	AIoyvyloC6+o7qyZiRc6Lzwm9BgtQQ5kkOktVHVGIitjsL7dbzTZzXwgkIbECXn/f0YVWIED0ySJD
	bNvzeKxfnviwArZszeVu4EbeuRiPeCQUKRMocxFKZpSJ0ZpbmPEiRQ6RpXIw6ghMF7Jo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157898-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157898: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:build-armhf-pvops:kernel-build:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=5814bc2d4cc241c1a603fac2b5bf1bd4daa108fc
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 26 Dec 2020 06:54:36 +0000

flight 157898 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157898/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 build-armhf-pvops             6 kernel-build             fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                5814bc2d4cc241c1a603fac2b5bf1bd4daa108fc
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  147 days
Failing since        152366  2020-08-01 20:49:34 Z  146 days  255 attempts
Testing same since   157898  2020-12-25 23:40:14 Z    0 days    1 attempts

------------------------------------------------------------
4329 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 975578 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Dec 26 09:21:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 26 Dec 2020 09:21:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.58993.103885 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kt5kl-0005SW-Qh; Sat, 26 Dec 2020 09:20:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 58993.103885; Sat, 26 Dec 2020 09:20:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kt5kl-0005SP-Np; Sat, 26 Dec 2020 09:20:47 +0000
Received: by outflank-mailman (input) for mailman id 58993;
 Sat, 26 Dec 2020 09:20:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kt5kj-0005SH-Nj; Sat, 26 Dec 2020 09:20:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kt5kj-0007hD-FW; Sat, 26 Dec 2020 09:20:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kt5kj-0007Y8-73; Sat, 26 Dec 2020 09:20:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kt5kj-0005vF-6a; Sat, 26 Dec 2020 09:20:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NzuFjesDOdrlXjTqbOjg4rycXeu6Tb/Gdb1gN02CLsQ=; b=J4yF6u13Y4oNKR3vFDsdmlFrt9
	s9CGBk6Lwb3jQjsQ0vhoDLMWgKP3T1wp3AF6f4YuU1SKsLxCjwRDW0Saq5EnpGG05ByHQ34etrkTc
	BCfnu0yypo64qj5h8QIAa7HGQjj8KFi9JbZY8ppeT6213onG7wyg6TCS31hNZZe2kLBE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157904-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157904: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=bed50bcbbb2796aa88f1c85a2424fe9bd7944f89
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 26 Dec 2020 09:20:45 +0000

flight 157904 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157904/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              bed50bcbbb2796aa88f1c85a2424fe9bd7944f89
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  169 days
Failing since        151818  2020-07-11 04:18:52 Z  168 days  163 attempts
Testing same since   157715  2020-12-19 04:19:22 Z    7 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 33734 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Dec 26 13:35:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 26 Dec 2020 13:35:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59026.103906 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kt9jB-0001Uu-N7; Sat, 26 Dec 2020 13:35:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59026.103906; Sat, 26 Dec 2020 13:35:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kt9jB-0001Un-Jh; Sat, 26 Dec 2020 13:35:25 +0000
Received: by outflank-mailman (input) for mailman id 59026;
 Sat, 26 Dec 2020 13:35:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kt9j9-0001Uf-GO; Sat, 26 Dec 2020 13:35:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kt9j9-0003Qv-8N; Sat, 26 Dec 2020 13:35:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kt9j8-0002f2-Sj; Sat, 26 Dec 2020 13:35:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kt9j8-0001Fs-SD; Sat, 26 Dec 2020 13:35:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=b+Oq+yZMzpqMdbhAf2pQN3RLR+vJT3I3KO9TfesPriM=; b=K7oaEgHNcIX+MNFN7L0i9alSwl
	wE/pbvhG0tN0uLZIwwv99yDl5Vj9cw1aL+2XjecvEnuYaVCNH+yD1Z78LLZm7CVUO42uVe+6TWHOV
	NxaIpZq9dCus3f98adYQd7SIfLbP3/619yeK/4FEFVPKXqyh3kt/gIGbUufAC5411PJ8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157900-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157900: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
X-Osstest-Versions-That:
    xen=98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 26 Dec 2020 13:35:22 +0000

flight 157900 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157900/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157882
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157882
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157882
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157882
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157882
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157882
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157882
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157882
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157882
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157882
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157882
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
baseline version:
 xen                  98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc

Last test of basis   157900  2020-12-26 01:51:28 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sat Dec 26 18:14:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 26 Dec 2020 18:14:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59069.103934 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktE4a-0000UG-6T; Sat, 26 Dec 2020 18:13:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59069.103934; Sat, 26 Dec 2020 18:13:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktE4a-0000U9-3W; Sat, 26 Dec 2020 18:13:48 +0000
Received: by outflank-mailman (input) for mailman id 59069;
 Sat, 26 Dec 2020 18:13:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktE4Z-0000U1-0l; Sat, 26 Dec 2020 18:13:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktE4Y-0000A6-OZ; Sat, 26 Dec 2020 18:13:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktE4Y-0002tf-Gg; Sat, 26 Dec 2020 18:13:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ktE4Y-0002JR-GA; Sat, 26 Dec 2020 18:13:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ua2QlrHNfJUk/o9sDeHTkMlCAytFfCEbyVAL6ZjW/HY=; b=JiO0ntP7EWm3nLXavu9L0XRGy1
	vAOTQPQB80ub01+a+ZPJPqDyykYuIjZEC2zCt6Su24IR9NpWgKwGDmZEjdA/iK2E7lh8/pFvtz/rG
	acp05UbToSRVMoXRjz1gmZ3JYQ+OTs2fihgcNq7Z2JxtPtTYJvrXpZr1Cix4V9zcbs3A=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157903-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157903: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a05f8ecd88f15273d033b6f044b850a8af84a5b8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 26 Dec 2020 18:13:46 +0000

flight 157903 qemu-mainline real [real]
flight 157910 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157903/
http://logs.test-lab.xenproject.org/osstest/logs/157910/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  128 days
Failing since        152659  2020-08-21 14:07:39 Z  127 days  261 attempts
Testing same since   157670  2020-12-18 13:57:58 Z    8 days   14 attempts

------------------------------------------------------------
323 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 79306 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Dec 26 19:30:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 26 Dec 2020 19:30:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59081.103948 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktFG8-0006Ze-V2; Sat, 26 Dec 2020 19:29:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59081.103948; Sat, 26 Dec 2020 19:29:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktFG8-0006ZX-S0; Sat, 26 Dec 2020 19:29:48 +0000
Received: by outflank-mailman (input) for mailman id 59081;
 Sat, 26 Dec 2020 19:29:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktFG6-0006ZP-RK; Sat, 26 Dec 2020 19:29:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktFG6-0001O6-K9; Sat, 26 Dec 2020 19:29:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktFG6-0005TV-7a; Sat, 26 Dec 2020 19:29:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ktFG6-00007v-72; Sat, 26 Dec 2020 19:29:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=h7HsCSVC8eJqizQiEZ5Cpeqg3OzRvnb/iaYgS+zpJrI=; b=yW3g4dSNMb0R5TP0rydBMe/xZ/
	fINZ6tYaEKbD8NwMgW4rRziAa+YaknNpe4uXKt6rIpCGE97MXYcJ+iM2BftiF4H82kTA/MBIglpk0
	KPE1UkM7PsT7RVDwQsDoxuZ7hWGlBtjDx39JwE3eo1HD3nHOOfORzWqd/Tjd5q1x6yqA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157906-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157906: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-libvirt-xsm:<job status>:broken:regression
    linux-linus:test-arm64-arm64-xl:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-libvirt-xsm:host-install(5):broken:nonblocking
    linux-linus:test-arm64-arm64-xl:host-install(5):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=40f78232f97344afbbeb5b0008615f17c4b93466
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 26 Dec 2020 19:29:46 +0000

flight 157906 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157906/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm    <job status>                 broken
 test-arm64-arm64-xl             <job status>                 broken
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm  5 host-install(5)       broken blocked in 152332
 test-arm64-arm64-xl           5 host-install(5)       broken blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                40f78232f97344afbbeb5b0008615f17c4b93466
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  147 days
Failing since        152366  2020-08-01 20:49:34 Z  146 days  256 attempts
Testing same since   157906  2020-12-26 06:57:12 Z    0 days    1 attempts

------------------------------------------------------------
4329 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          broken  
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 broken  
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-libvirt-xsm broken
broken-job test-arm64-arm64-xl broken
broken-step test-arm64-arm64-libvirt-xsm host-install(5)
broken-step test-arm64-arm64-xl host-install(5)

Not pushing.

(No revision log; it would be 975704 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 27 01:20:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 27 Dec 2020 01:20:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59114.104002 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktKjV-0001xF-DM; Sun, 27 Dec 2020 01:20:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59114.104002; Sun, 27 Dec 2020 01:20:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktKjV-0001x8-8K; Sun, 27 Dec 2020 01:20:29 +0000
Received: by outflank-mailman (input) for mailman id 59114;
 Sun, 27 Dec 2020 01:20:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktKjT-0001x0-Lo; Sun, 27 Dec 2020 01:20:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktKjT-0005V9-EG; Sun, 27 Dec 2020 01:20:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktKjT-0004Q7-3E; Sun, 27 Dec 2020 01:20:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ktKjT-0003DY-2k; Sun, 27 Dec 2020 01:20:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mvuFpkf7dmPKnzUF6yYb33urMVOZ+1grjK6g0vW/2iU=; b=zSQjNtk2wDzO8bH+RqMD/s6Lyh
	rcbdLECrs+Wj2UIC7nhSJY0SLztL8CoNPijV1ZaKboRqzZ+ZZ0a3/LqHLY107uIe4q3rN8G6CfC1W
	O6TnPgBAw2tnw324kuZtV7QMXyE4/qS2ZQxRzHhXCPUwK9LSqEP0KHubiOhpwLaD+THg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157912-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157912: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    qemu-mainline:build-arm64:<job status>:broken:regression
    qemu-mainline:build-arm64-pvops:<job status>:broken:regression
    qemu-mainline:build-arm64-xsm:<job status>:broken:regression
    qemu-mainline:build-arm64:host-install(4):broken:regression
    qemu-mainline:build-arm64-pvops:host-install(4):broken:regression
    qemu-mainline:build-arm64-xsm:host-install(4):broken:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a05f8ecd88f15273d033b6f044b850a8af84a5b8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 27 Dec 2020 01:20:27 +0000

flight 157912 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157912/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-arm64                   4 host-install(4)        broken REGR. vs. 152631
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 152631
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  128 days
Failing since        152659  2020-08-21 14:07:39 Z  127 days  262 attempts
Testing same since   157670  2020-12-18 13:57:58 Z    8 days   15 attempts

------------------------------------------------------------
323 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-step build-arm64 host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64-xsm host-install(4)

Not pushing.

(No revision log; it would be 79306 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 27 05:49:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 27 Dec 2020 05:49:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59131.104023 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktOvs-0007xv-1s; Sun, 27 Dec 2020 05:49:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59131.104023; Sun, 27 Dec 2020 05:49:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktOvr-0007xo-Tx; Sun, 27 Dec 2020 05:49:31 +0000
Received: by outflank-mailman (input) for mailman id 59131;
 Sun, 27 Dec 2020 05:49:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktOvq-0007xg-Hq; Sun, 27 Dec 2020 05:49:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktOvq-0002HD-6M; Sun, 27 Dec 2020 05:49:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktOvp-0007BD-Ut; Sun, 27 Dec 2020 05:49:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ktOvp-0007WH-Tl; Sun, 27 Dec 2020 05:49:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=32u3xJSB/4nJ3SxDDF3BJ+ax4q47azUUQrZk5YRyXi8=; b=puZYtNp0xLsl3QHECH+BX69QD5
	9nnU5ayUAW92k+NxSGtEkMAn+bsW6NlUBM8ueg7HBje/6/Ox9jG5ZpHNYNlZqYA1hC2RgHLQtKsYt
	HBIH26Lrxj2TeUEdf8oWJScXwVk2YIHi3UkjlzhFbMA2NFIAFy1F2nOj3DgiWvBuL7aY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157914-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157914: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    linux-linus:build-arm64:<job status>:broken:regression
    linux-linus:build-arm64-pvops:<job status>:broken:regression
    linux-linus:build-arm64-xsm:<job status>:broken:regression
    linux-linus:build-arm64-pvops:host-install(4):broken:regression
    linux-linus:build-arm64:host-install(4):broken:regression
    linux-linus:build-arm64-xsm:host-install(4):broken:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:build-arm64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f838f8d2b694cf9d524dc4423e9dd2db13892f3f
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 27 Dec 2020 05:49:29 +0000

flight 157914 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157914/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 152332
 build-arm64                   4 host-install(4)        broken REGR. vs. 152332
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                f838f8d2b694cf9d524dc4423e9dd2db13892f3f
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  148 days
Failing since        152366  2020-08-01 20:49:34 Z  147 days  257 attempts
Testing same since   157914  2020-12-26 19:39:58 Z    0 days    1 attempts

------------------------------------------------------------
4329 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-arm64-xsm host-install(4)

Not pushing.

(No revision log; it would be 975722 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 27 09:36:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 27 Dec 2020 09:36:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59156.104044 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktST2-0002ae-Jk; Sun, 27 Dec 2020 09:36:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59156.104044; Sun, 27 Dec 2020 09:36:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktST2-0002aX-Gj; Sun, 27 Dec 2020 09:36:00 +0000
Received: by outflank-mailman (input) for mailman id 59156;
 Sun, 27 Dec 2020 09:35:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktST0-0002aP-SV; Sun, 27 Dec 2020 09:35:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktST0-0006VE-J3; Sun, 27 Dec 2020 09:35:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktST0-00045A-8s; Sun, 27 Dec 2020 09:35:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ktST0-0001Zz-8M; Sun, 27 Dec 2020 09:35:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=T74tnUeBvOZ+XaQLWV67McmtpNyFA1U9FUqa4nw3JWA=; b=VEIvW0zWal97pq+yuAFGlAcU5s
	wwweESR370ek4Zg3M6bJ5+ZERRJg1rzyHcAzA1xPa7hodajYgj70aPGd8YnmVRaXBMEiDQOnnLhCy
	CsTx9URsyse1JdacHQOU3GaNN7nMtikzhhKX7I0+Cw1jpY8L8mzp8lQcKrBhZFBL1ojY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157920-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157920: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    libvirt:build-arm64:<job status>:broken:regression
    libvirt:build-arm64-pvops:<job status>:broken:regression
    libvirt:build-arm64-xsm:<job status>:broken:regression
    libvirt:build-arm64-pvops:host-install(4):broken:regression
    libvirt:build-arm64:host-install(4):broken:regression
    libvirt:build-arm64-xsm:host-install(4):broken:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=bed50bcbbb2796aa88f1c85a2424fe9bd7944f89
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 27 Dec 2020 09:35:58 +0000

flight 157920 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157920/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 151777
 build-arm64                   4 host-install(4)        broken REGR. vs. 151777
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              bed50bcbbb2796aa88f1c85a2424fe9bd7944f89
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  170 days
Failing since        151818  2020-07-11 04:18:52 Z  169 days  164 attempts
Testing same since   157715  2020-12-19 04:19:22 Z    8 days    9 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-arm64-xsm host-install(4)

Not pushing.

(No revision log; it would be 33734 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 27 10:15:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 27 Dec 2020 10:15:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59168.104059 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktT5T-00069C-Tx; Sun, 27 Dec 2020 10:15:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59168.104059; Sun, 27 Dec 2020 10:15:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktT5T-000695-QP; Sun, 27 Dec 2020 10:15:43 +0000
Received: by outflank-mailman (input) for mailman id 59168;
 Sun, 27 Dec 2020 10:15:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktT5S-00068x-Va; Sun, 27 Dec 2020 10:15:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktT5S-0007Ev-NA; Sun, 27 Dec 2020 10:15:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktT5S-0006eH-EY; Sun, 27 Dec 2020 10:15:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ktT5S-0000zR-E2; Sun, 27 Dec 2020 10:15:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4x794w2VxJ30ErEUo+dnvwStXet3rH5eU91qtFZBq64=; b=B0w3d9Ph6OXi58bkIofA9ogOL7
	miguoJzrRs+i7ONlwJpLTNYEDd8q93Egfvu3JT9rnnOY6iX/aZrqoYY2Pqw+bDOaCZE64kj2UVuNZ
	hmWh2zmB585NfGD2AVuobMBlxCYEywR+np2qW6AWSgUhvb7h40pZuIiifp+s2TypFNGE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157917-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157917: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    qemu-mainline:build-arm64:<job status>:broken:regression
    qemu-mainline:build-arm64-pvops:<job status>:broken:regression
    qemu-mainline:build-arm64-xsm:<job status>:broken:regression
    qemu-mainline:build-arm64:host-install(4):broken:regression
    qemu-mainline:build-arm64-pvops:host-install(4):broken:regression
    qemu-mainline:build-arm64-xsm:host-install(4):broken:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-saverestore:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:guest-start/debianhvm.repeat:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-arndale:xen-boot:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start:fail:heisenbug
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a05f8ecd88f15273d033b6f044b850a8af84a5b8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 27 Dec 2020 10:15:42 +0000

flight 157917 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157917/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-arm64                   4 host-install(4)        broken REGR. vs. 152631
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 152631
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail in 157912 REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 16 guest-saverestore          fail pass in 157912
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 14 guest-start/debianhvm.repeat fail pass in 157912
 test-armhf-armhf-xl-arndale   8 xen-boot                   fail pass in 157912
 test-armhf-armhf-xl-rtds     14 guest-start                fail pass in 157912

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 157912 like 152631
 test-armhf-armhf-xl-rtds    15 migrate-support-check fail in 157912 never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail in 157912 never pass
 test-armhf-armhf-xl-arndale 15 migrate-support-check fail in 157912 never pass
 test-armhf-armhf-xl-arndale 16 saverestore-support-check fail in 157912 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  129 days
Failing since        152659  2020-08-21 14:07:39 Z  127 days  263 attempts
Testing same since   157670  2020-12-18 13:57:58 Z    8 days   16 attempts

------------------------------------------------------------
323 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-step build-arm64 host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64-xsm host-install(4)

Not pushing.

(No revision log; it would be 79306 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 27 13:04:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 27 Dec 2020 13:04:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59205.104098 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktVin-0003uA-S9; Sun, 27 Dec 2020 13:04:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59205.104098; Sun, 27 Dec 2020 13:04:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktVin-0003u3-PA; Sun, 27 Dec 2020 13:04:29 +0000
Received: by outflank-mailman (input) for mailman id 59205;
 Sun, 27 Dec 2020 13:04:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktVim-0003tv-Le; Sun, 27 Dec 2020 13:04:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktVim-0001ZV-Dc; Sun, 27 Dec 2020 13:04:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktVim-0006wd-1B; Sun, 27 Dec 2020 13:04:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ktVim-0004Fm-0e; Sun, 27 Dec 2020 13:04:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=X7s0quA8snw6RumfenJciO8K4JqjDOtKClaN+p2fq5A=; b=eYhEfp1ua3040iYp2WKyXlJKQt
	bghGo7TVrODg5Co8dCLlJ8IX2j6VqO0+cMgiP33v9/VBF2ENVNoResd/EUKNHKwz8mVQkj2c1+Fg+
	iaDdx2W0W9GaYmQKTst+we0OtdQ/vn7qWbbG+hE0HtKMOugJajb684HG1bZiJv2BIYx0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157918-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157918: trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    xen-unstable:build-arm64:<job status>:broken:regression
    xen-unstable:build-arm64-pvops:<job status>:broken:regression
    xen-unstable:build-arm64-xsm:<job status>:broken:regression
    xen-unstable:build-arm64-pvops:host-install(4):broken:regression
    xen-unstable:build-arm64-xsm:host-install(4):broken:regression
    xen-unstable:build-arm64:host-install(4):broken:regression
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
X-Osstest-Versions-That:
    xen=98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 27 Dec 2020 13:04:28 +0000

flight 157918 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157918/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 157900
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 157900
 build-arm64                   4 host-install(4)        broken REGR. vs. 157900

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157900
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157900
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157900
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157900
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157900
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157900
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157900
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157900
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157900
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157900
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157900
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
baseline version:
 xen                  98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc

Last test of basis   157918  2020-12-27 01:51:29 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-arm64 host-install(4)

Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Dec 27 16:46:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 27 Dec 2020 16:46:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59264.104119 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktZB4-0006Be-7q; Sun, 27 Dec 2020 16:45:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59264.104119; Sun, 27 Dec 2020 16:45:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktZB4-0006BX-3e; Sun, 27 Dec 2020 16:45:54 +0000
Received: by outflank-mailman (input) for mailman id 59264;
 Sun, 27 Dec 2020 16:45:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktZB3-0006BP-KS; Sun, 27 Dec 2020 16:45:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktZB3-0005fE-8o; Sun, 27 Dec 2020 16:45:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktZB2-00078j-Sk; Sun, 27 Dec 2020 16:45:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ktZB2-00045m-SI; Sun, 27 Dec 2020 16:45:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Wqf2G0EEKikwIIak5Im19kOwa4XeJa+Pnm57P6F7NjQ=; b=DPBsGWtw5xvE1/LH5KichVBNTv
	moraY+TdkmogzSwjOkJKxRreJjaRC37pZHQDKawho8FMUEsPI4Nk86vmybBNBter9ZkFvoaxc2lXr
	Mfw1vrnUHJq3RVUooRfMV/0Noirk7F0CNWnevWfXHWhOrFtxit+lcODEABNYIxs4qeSU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157921-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157921: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    linux-linus:build-arm64:<job status>:broken:regression
    linux-linus:build-arm64-pvops:<job status>:broken:regression
    linux-linus:build-arm64-xsm:<job status>:broken:regression
    linux-linus:build-arm64-pvops:host-install(4):broken:regression
    linux-linus:build-arm64:host-install(4):broken:regression
    linux-linus:build-arm64-xsm:host-install(4):broken:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:build-arm64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f838f8d2b694cf9d524dc4423e9dd2db13892f3f
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 27 Dec 2020 16:45:52 +0000

flight 157921 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157921/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 152332
 build-arm64                   4 host-install(4)        broken REGR. vs. 152332
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                f838f8d2b694cf9d524dc4423e9dd2db13892f3f
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  148 days
Failing since        152366  2020-08-01 20:49:34 Z  147 days  258 attempts
Testing same since   157914  2020-12-26 19:39:58 Z    0 days    2 attempts

------------------------------------------------------------
4329 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-arm64-xsm host-install(4)

Not pushing.

(No revision log; it would be 975722 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Dec 27 19:21:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 27 Dec 2020 19:21:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59290.104140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktbb3-0002iZ-SD; Sun, 27 Dec 2020 19:20:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59290.104140; Sun, 27 Dec 2020 19:20:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktbb3-0002iS-O4; Sun, 27 Dec 2020 19:20:53 +0000
Received: by outflank-mailman (input) for mailman id 59290;
 Sun, 27 Dec 2020 19:20:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yGTU=F7=gmail.com=groeck7@srs-us1.protection.inumbo.net>)
 id 1ktbb3-0002iN-CJ
 for xen-devel@lists.xenproject.org; Sun, 27 Dec 2020 19:20:53 +0000
Received: from mail-ot1-x330.google.com (unknown [2607:f8b0:4864:20::330])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 934ec2cd-2fcd-427a-9816-676897cb42da;
 Sun, 27 Dec 2020 19:20:52 +0000 (UTC)
Received: by mail-ot1-x330.google.com with SMTP id d8so7587456otq.6
 for <xen-devel@lists.xenproject.org>; Sun, 27 Dec 2020 11:20:52 -0800 (PST)
Received: from localhost ([2600:1700:e321:62f0:329c:23ff:fee3:9d7c])
 by smtp.gmail.com with ESMTPSA id v17sm8555011oou.41.2020.12.27.11.20.50
 (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256);
 Sun, 27 Dec 2020 11:20:50 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 934ec2cd-2fcd-427a-9816-676897cb42da
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to:user-agent;
        bh=0sVC8j/qFLW+R/BiCcbxfbFuWt0QadRRyfbTVS1VwHY=;
        b=snNRM2UJuxFlYbt0opdNq/Km+jTrugWz+XCv9dvNVL/debD4RiSNjJuvvfbPHP8li7
         KIhX0/QgOmUeidxtdeZSEin831Gb4MPxK/Nsdy82P6kpVKu6N4Nc/2a1j/rhKSYYtPfM
         OdujcskdH0zj17962B4EA6DAWt+WbPQDoOQkCV+q0RBuHOc2AwsMveKzxEy1LnZjcBm6
         WwVaL7EiOv3pK+MmaGP2Q6W+EWp/A0H6XAJ9Sli5mcOq3SMT90MXTM6BUD6s+YNQPnp0
         3oKuBMFCRu9MNXQfN7PDX/uDAX7jwXSxxMeoGH5bWQ0ZYiFJh6LgmVaIuZavb11TOu03
         Stkw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to:user-agent;
        bh=0sVC8j/qFLW+R/BiCcbxfbFuWt0QadRRyfbTVS1VwHY=;
        b=CTXaTPG7IIOLrbKkXdBAH+TZxv+Q4h2AJmeoH5i7CNKovyL1NoN5p9p0TiZnPuI6ja
         y949OJqvy/IFNLHlXjflCDUuI3BEOxbWXz20DoD23uRGEIickxPD7MzIseuH2N3QnY/j
         1aIGaQNx30Cl48D2l+2QnRMO6KSWYfUnAubat/MrfzAoSUiqDcGAet92Mmt4f2e/0Xjo
         ORND30xb/ZzSRDjGPOnsqfUkFeb5FlaKiR7ECS7lzuYyWTXPknNwA9fBF87Xv+ansYSF
         Wpm6+gNNHA6OcJfjqfdF4MG+wormh1VaKSkfYOg4cXFmzYYZvNVMDvtyTNVVolJUV6QA
         fULQ==
X-Gm-Message-State: AOAM5337g1HitOjEW6JSIKyaDTXXkF8WYgMG8U/+ndIvuLADB4IMpcxt
	R7SY5bSWUJ3qBe89iYKRVCc=
X-Google-Smtp-Source: ABdhPJy5MA8pTqBMHg7t7kC9SkPKcTOcLW8SMiIKlPFipG2wn4l5044FI2Bd1WiObQwt+tJVlAueHQ==
X-Received: by 2002:a05:6830:cf:: with SMTP id x15mr30498943oto.55.1609096852047;
        Sun, 27 Dec 2020 11:20:52 -0800 (PST)
Sender: Guenter Roeck <groeck7@gmail.com>
Date: Sun, 27 Dec 2020 11:20:49 -0800
From: Guenter Roeck <linux@roeck-us.net>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>,
	Peter Zijlstra <peterz@infradead.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	dri-devel@lists.freedesktop.org,
	Chris Wilson <chris@chris-wilson.co.uk>,
	"James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
	Saeed Mahameed <saeedm@nvidia.com>, netdev@vger.kernel.org,
	Will Deacon <will@kernel.org>,
	Michal Simek <michal.simek@xilinx.com>, linux-s390@vger.kernel.org,
	afzal mohammed <afzal.mohd.ma@gmail.com>,
	Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
	Dave Jiang <dave.jiang@intel.com>, xen-devel@lists.xenproject.org,
	Leon Romanovsky <leon@kernel.org>, linux-rdma@vger.kernel.org,
	Marc Zyngier <maz@kernel.org>, Helge Deller <deller@gmx.de>,
	Russell King <linux@armlinux.org.uk>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	linux-pci@vger.kernel.org, Jakub Kicinski <kuba@kernel.org>,
	Heiko Carstens <hca@linux.ibm.com>,
	Wambui Karuga <wambui.karugax@gmail.com>,
	Allen Hubbe <allenbh@gmail.com>, Juergen Gross <jgross@suse.com>,
	David Airlie <airlied@linux.ie>, linux-gpio@vger.kernel.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	Bjorn Helgaas <bhelgaas@google.com>,
	Lee Jones <lee.jones@linaro.org>,
	linux-arm-kernel@lists.infradead.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
	linux-parisc@vger.kernel.org,
	Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>,
	Hou Zhiqiang <Zhiqiang.Hou@nxp.com>,
	Tariq Toukan <tariqt@nvidia.com>, Jon Mason <jdmason@kudzu.us>,
	linux-ntb@googlegroups.com, intel-gfx@lists.freedesktop.org,
	"David S. Miller" <davem@davemloft.net>
Subject: Re: [patch 02/30] genirq: Move status flag checks to core
Message-ID: <20201227192049.GA195845@roeck-us.net>
References: <20201210192536.118432146@linutronix.de>
 <20201210194042.703779349@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201210194042.703779349@linutronix.de>
User-Agent: Mutt/1.9.4 (2018-02-28)

On Thu, Dec 10, 2020 at 08:25:38PM +0100, Thomas Gleixner wrote:
> These checks are used by modules and prevent the removal of the export of
> irq_to_desc(). Move the accessor into the core.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

Yes, but that means irq_check_status_bit() may now be called from modules
while it is not exported, resulting in build errors such as the following.

arm64:allmodconfig:

ERROR: modpost: "irq_check_status_bit" [drivers/perf/arm_spe_pmu.ko] undefined!

Guenter

> ---
>  include/linux/irqdesc.h |   17 +++++------------
>  kernel/irq/manage.c     |   17 +++++++++++++++++
>  2 files changed, 22 insertions(+), 12 deletions(-)
> 
> --- a/include/linux/irqdesc.h
> +++ b/include/linux/irqdesc.h
> @@ -223,28 +223,21 @@ irq_set_chip_handler_name_locked(struct
>  	data->chip = chip;
>  }
>  
> +bool irq_check_status_bit(unsigned int irq, unsigned int bitmask);
> +
>  static inline bool irq_balancing_disabled(unsigned int irq)
>  {
> -	struct irq_desc *desc;
> -
> -	desc = irq_to_desc(irq);
> -	return desc->status_use_accessors & IRQ_NO_BALANCING_MASK;
> +	return irq_check_status_bit(irq, IRQ_NO_BALANCING_MASK);
>  }
>  
>  static inline bool irq_is_percpu(unsigned int irq)
>  {
> -	struct irq_desc *desc;
> -
> -	desc = irq_to_desc(irq);
> -	return desc->status_use_accessors & IRQ_PER_CPU;
> +	return irq_check_status_bit(irq, IRQ_PER_CPU);
>  }
>  
>  static inline bool irq_is_percpu_devid(unsigned int irq)
>  {
> -	struct irq_desc *desc;
> -
> -	desc = irq_to_desc(irq);
> -	return desc->status_use_accessors & IRQ_PER_CPU_DEVID;
> +	return irq_check_status_bit(irq, IRQ_PER_CPU_DEVID);
>  }
>  
>  static inline void
> --- a/kernel/irq/manage.c
> +++ b/kernel/irq/manage.c
> @@ -2769,3 +2769,23 @@ bool irq_has_action(unsigned int irq)
>  	return res;
>  }
>  EXPORT_SYMBOL_GPL(irq_has_action);
> +
> +/**
> + * irq_check_status_bit - Check whether bits in the irq descriptor status are set
> + * @irq:	The linux irq number
> + * @bitmask:	The bitmask to evaluate
> + *
> + * Returns: True if one of the bits in @bitmask is set
> + */
> +bool irq_check_status_bit(unsigned int irq, unsigned int bitmask)
> +{
> +	struct irq_desc *desc;
> +	bool res = false;
> +
> +	rcu_read_lock();
> +	desc = irq_to_desc(irq);
> +	if (desc)
> +		res = !!(desc->status_use_accessors & bitmask);
> +	rcu_read_unlock();
> +	return res;
> +}


From xen-devel-bounces@lists.xenproject.org Sun Dec 27 19:50:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 27 Dec 2020 19:50:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59296.104152 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktc3K-00056s-7b; Sun, 27 Dec 2020 19:50:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59296.104152; Sun, 27 Dec 2020 19:50:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktc3K-000567-49; Sun, 27 Dec 2020 19:50:06 +0000
Received: by outflank-mailman (input) for mailman id 59296;
 Sun, 27 Dec 2020 19:50:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktc3I-0004t3-Pw; Sun, 27 Dec 2020 19:50:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktc3I-0000FH-HI; Sun, 27 Dec 2020 19:50:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktc3I-0006AZ-6b; Sun, 27 Dec 2020 19:50:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ktc3I-0002q7-5o; Sun, 27 Dec 2020 19:50:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vL7PZpqqpTuyHFRrSflT69jflqfAERZS7oSrGSDwJxY=; b=U146nBQTb3OmKeTlqXNEn2Abzk
	5pCm3s0rM+5iu1v0Hjtt+0AX/5vZTCc7c4FtrVHz16fPhlB9z7+s1HE7TN2q4o+BOonfU8Jx7xfQh
	QvxWpC778uXDz8vkMUxDBAJcAN1UD3xBK6Ws+tLIKpaDV7PZqg8dxn8DB6DTFlIJyoXg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157923-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157923: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    qemu-mainline:build-arm64:<job status>:broken:regression
    qemu-mainline:build-arm64-xsm:<job status>:broken:regression
    qemu-mainline:build-arm64:host-install(4):broken:regression
    qemu-mainline:build-arm64-xsm:host-install(4):broken:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a05f8ecd88f15273d033b6f044b850a8af84a5b8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 27 Dec 2020 19:50:04 +0000

flight 157923 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157923/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                     <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-arm64                   4 host-install(4)        broken REGR. vs. 152631
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  129 days
Failing since        152659  2020-08-21 14:07:39 Z  128 days  264 attempts
Testing same since   157670  2020-12-18 13:57:58 Z    9 days   17 attempts

------------------------------------------------------------
323 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64 broken
broken-job build-arm64-xsm broken
broken-step build-arm64 host-install(4)
broken-step build-arm64-xsm host-install(4)

Not pushing.

(No revision log; it would be 79306 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 28 00:06:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Dec 2020 00:06:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59347.104217 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktg3X-00023z-DE; Mon, 28 Dec 2020 00:06:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59347.104217; Mon, 28 Dec 2020 00:06:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktg3X-00023s-9Z; Mon, 28 Dec 2020 00:06:35 +0000
Received: by outflank-mailman (input) for mailman id 59347;
 Mon, 28 Dec 2020 00:06:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktg3V-00023k-4S; Mon, 28 Dec 2020 00:06:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktg3U-00053y-R6; Mon, 28 Dec 2020 00:06:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktg3U-00016V-HE; Mon, 28 Dec 2020 00:06:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ktg3U-000056-Gg; Mon, 28 Dec 2020 00:06:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KiqrI2baJv0oah+ehtHlIYN+ZG5QkCyATB2/RFTwNPo=; b=hUFIolHsLoby/2wNzE7SpiJ++v
	d9npaQiIJrmKBQYMQd+NXmO40SL58lIlm8xN8o/aAYMVlEdW+CP40VzoHMRqKWPMB3k2cqBLtzwUH
	0NuKX3YjbtJ6T3z0/P0RKqxWI0XmibZBt7U56Osu56OKDVFQQ2YAgYKocAQU/3kDyvMc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157926-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157926: regressions - FAIL
X-Osstest-Failures:
    linux-linus:build-arm64-pvops:<job status>:broken:regression
    linux-linus:build-arm64-xsm:<job status>:broken:regression
    linux-linus:build-arm64:<job status>:broken:regression
    linux-linus:build-arm64-pvops:host-install(4):broken:regression
    linux-linus:build-arm64:host-install(4):broken:regression
    linux-linus:build-arm64-xsm:host-install(4):broken:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:build-arm64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f838f8d2b694cf9d524dc4423e9dd2db13892f3f
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 28 Dec 2020 00:06:32 +0000

flight 157926 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157926/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-pvops               <job status>                 broken  in 157921
 build-arm64-xsm                 <job status>                 broken  in 157921
 build-arm64                     <job status>                 broken  in 157921
 build-arm64-pvops          4 host-install(4) broken in 157921 REGR. vs. 152332
 build-arm64                4 host-install(4) broken in 157921 REGR. vs. 152332
 build-arm64-xsm            4 host-install(4) broken in 157921 REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop     fail in 157921 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-i386-pvgrub 19 guest-localmigrate/x10     fail pass in 157921

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-examine      1 build-check(1)           blocked in 157921 n/a
 test-arm64-arm64-xl           1 build-check(1)           blocked in 157921 n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)           blocked in 157921 n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)           blocked in 157921 n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)           blocked in 157921 n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)           blocked in 157921 n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)           blocked in 157921 n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)           blocked in 157921 n/a
 build-arm64-libvirt           1 build-check(1)           blocked in 157921 n/a
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                f838f8d2b694cf9d524dc4423e9dd2db13892f3f
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  149 days
Failing since        152366  2020-08-01 20:49:34 Z  148 days  259 attempts
Testing same since   157914  2020-12-26 19:39:58 Z    1 days    3 attempts

------------------------------------------------------------
4329 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-arm64 broken

Not pushing.

(No revision log; it would be 975722 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 28 06:11:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Dec 2020 06:11:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59375.104262 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktlkb-0007FZ-Ig; Mon, 28 Dec 2020 06:11:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59375.104262; Mon, 28 Dec 2020 06:11:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktlkb-0007FS-Eh; Mon, 28 Dec 2020 06:11:25 +0000
Received: by outflank-mailman (input) for mailman id 59375;
 Mon, 28 Dec 2020 06:11:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktlka-0007FJ-82; Mon, 28 Dec 2020 06:11:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktlka-0001mO-3G; Mon, 28 Dec 2020 06:11:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktlkZ-00028p-Mr; Mon, 28 Dec 2020 06:11:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ktlkZ-0007UM-MN; Mon, 28 Dec 2020 06:11:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vh0EvRFxeaXeYEasjT54DmpgO+gh7sFO87Gb+ncXpok=; b=YarnptkiLYZL4Jq/LBCsHCMv3D
	3OrPWb/DPLbeho/I4mMSYaJa0yOJQ6kTRB09QnImKxbO1KL7u0jXqrlnNhmi5i+8irBsQrwkj/qnw
	rtour9Ss2UxynAq+UdKXX05pNQ03/R4Mm9Alt7E2AQ3T6u9N6qmsswjBZZ56YZA+f4mc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157928-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157928: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a05f8ecd88f15273d033b6f044b850a8af84a5b8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 28 Dec 2020 06:11:23 +0000

flight 157928 qemu-mainline real [real]
flight 157934 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157928/
http://logs.test-lab.xenproject.org/osstest/logs/157934/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  129 days
Failing since        152659  2020-08-21 14:07:39 Z  128 days  265 attempts
Testing same since   157670  2020-12-18 13:57:58 Z    9 days   18 attempts

------------------------------------------------------------
323 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 79306 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 28 08:03:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Dec 2020 08:03:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59388.104276 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktnV7-0000Jz-23; Mon, 28 Dec 2020 08:03:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59388.104276; Mon, 28 Dec 2020 08:03:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktnV6-0000Js-VA; Mon, 28 Dec 2020 08:03:32 +0000
Received: by outflank-mailman (input) for mailman id 59388;
 Mon, 28 Dec 2020 08:03:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktnV6-0000Jk-2x; Mon, 28 Dec 2020 08:03:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktnV5-00046U-SP; Mon, 28 Dec 2020 08:03:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktnV5-0007xB-Hl; Mon, 28 Dec 2020 08:03:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ktnV5-0003fe-HH; Mon, 28 Dec 2020 08:03:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dOdJc46ZR0eaaehk/3l2YVGk17Kb3jIn0iCHNMfOc5Y=; b=p10+fIQySqj2JM7hQwnZq00ivS
	IAYTwKHgYLWcYEIE6ydep/cCVjUjT4hIqWEwRNc5rB1TBWIHAcQ2B+zcFBmyvW1UUe4xX00VPxvkW
	tKQ9VGxUBAvlkMtibbv23txU2lQmfhOBEkLsqPLO6EUcRb30horqqi2hKqgjur1MkVdM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157930-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157930: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=5c8fe583cce542aa0b84adc939ce85293de36e5e
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 28 Dec 2020 08:03:31 +0000

flight 157930 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157930/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-arm64-arm64-xl          12 debian-install           fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                5c8fe583cce542aa0b84adc939ce85293de36e5e
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  149 days
Failing since        152366  2020-08-01 20:49:34 Z  148 days  260 attempts
Testing same since   157930  2020-12-28 00:10:08 Z    0 days    1 attempts

------------------------------------------------------------
4329 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 976090 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 28 10:54:35 2020
Date: Mon, 28 Dec 2020 11:54:00 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/CPUID: suppress IOMMU related hypervisor leaf data
Message-ID: <20201228105400.dzkyrgyvkjuevzsj@Air-de-Roger>
References: <c640463a-d088-aaf5-0c3c-d82b1c98ee4f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <c640463a-d088-aaf5-0c3c-d82b1c98ee4f@suse.com>
MIME-Version: 1.0

On Mon, Nov 09, 2020 at 11:54:09AM +0100, Jan Beulich wrote:
> Now that the IOMMU for guests can't be enabled "on demand" anymore,
> there's also no reason to expose the related CPUID bit "just in case".
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

I'm not sure this is helpful from a guest PoV.

How does the guest know whether it has pass-through devices, and thus
whether it needs to check for this flag in order to safely hand
foreign-mapped pages (or grants) to the underlying devices?

I.e., prior to this change I would just check whether the flag is
present in CPUID to know whether FreeBSD needs to use a bounce buffer
in blkback and netback when running as a domU. If the flag is now
set only when the IOMMU is enabled for the guest, I also need a way
to find out whether the domU has any passed-through device, which
doesn't seem trivial.

Roger.
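[For concreteness, the CPUID probe described above might look like the
sketch below. The hypervisor leaf layout (base leaf 0x40000000 carrying
the "XenVMMXenVMM" signature, HVM flags in EAX of leaf base+4) and the
XEN_HVM_CPUID_IOMMU_MAPPINGS bit position follow a reading of Xen's
public arch-x86/cpuid.h header and should be treated as assumptions,
not an authoritative restatement of the interface.]

```c
/*
 * Sketch: probe the Xen hypervisor CPUID leaves and test the
 * IOMMU-mappings bit.  Leaf numbers and bit positions are assumptions
 * taken from Xen's public arch-x86/cpuid.h, not verified here.
 */
#include <stdbool.h>
#include <stdint.h>

#define XEN_CPUID_BASE               0x40000000u
#define XEN_CPUID_LEAF_HVM           (XEN_CPUID_BASE + 4)
#define XEN_HVM_CPUID_IOMMU_MAPPINGS (1u << 2)   /* EAX bit in leaf base+4 */

/* Pure helper: does the HVM leaf's EAX advertise IOMMU mappings? */
static bool iommu_mappings_advertised(uint32_t hvm_leaf_eax)
{
    return (hvm_leaf_eax & XEN_HVM_CPUID_IOMMU_MAPPINGS) != 0;
}

#if defined(__i386__) || defined(__x86_64__)
static void cpuid_raw(uint32_t leaf, uint32_t regs[4])
{
    __asm__ volatile ("cpuid"
                      : "=a" (regs[0]), "=b" (regs[1]),
                        "=c" (regs[2]), "=d" (regs[3])
                      : "a" (leaf), "c" (0));
}

/* True only when running on Xen and the IOMMU-mappings flag is set. */
static bool xen_iommu_mappings_present(void)
{
    uint32_t regs[4];

    cpuid_raw(XEN_CPUID_BASE, regs);
    /* Signature "XenVMMXenVMM" lives in EBX/ECX/EDX of the base leaf. */
    if (regs[1] != 0x566e6558u /* "XenV" */ ||
        regs[2] != 0x65584d4du /* "MMXe" */ ||
        regs[3] != 0x4d4d566eu /* "nVMM" */)
        return false;

    cpuid_raw(XEN_CPUID_LEAF_HVM, regs);
    return iommu_mappings_advertised(regs[0]);
}
#endif
```

[A backend such as the FreeBSD blkback/netback mentioned above could
call xen_iommu_mappings_present() once at attach time to decide whether
a bounce buffer is required; the patch under discussion is what makes
the result of that probe depend on whether the guest actually has an
IOMMU enabled.]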


From xen-devel-bounces@lists.xenproject.org Mon Dec 28 11:03:57 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157931-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157931: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
X-Osstest-Versions-That:
    xen=98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 28 Dec 2020 11:03:50 +0000

flight 157931 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157931/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157918
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157918
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157918
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157918
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157918
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157918
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157918
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157918
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157918
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157918
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157918
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
baseline version:
 xen                  98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc

Last test of basis   157931  2020-12-28 01:52:33 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Dec 28 12:00:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Dec 2020 12:00:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59444.104349 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktrCc-00046c-6O; Mon, 28 Dec 2020 12:00:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59444.104349; Mon, 28 Dec 2020 12:00:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktrCc-00046V-3C; Mon, 28 Dec 2020 12:00:42 +0000
Received: by outflank-mailman (input) for mailman id 59444;
 Mon, 28 Dec 2020 12:00:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=shBg=GA=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1ktrCa-00046M-J7
 for xen-devel@lists.xenproject.org; Mon, 28 Dec 2020 12:00:40 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 190d168e-40c1-4bf9-91d8-1f6859327142;
 Mon, 28 Dec 2020 12:00:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 190d168e-40c1-4bf9-91d8-1f6859327142
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609156838;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=Eq5JNeZT0l7bMCgcNsChTnPYGNMUpww38oCUh+1OxFs=;
  b=Bs4Z4ObiR1lukz/67SVrMehs1NRZUn2XTEGwCN1WTeO0DGN5woH+0N9z
   hqmPx2g7tWqX969XbzFDDAw75DDccuMt/M9/aFrjdcsicfs9LFEdS8CoT
   G67tUtO914OU3uDFD7rrS4DMfxDGSwKnrNeqtRlz91EiHEdgcYhwMB1k0
   E=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 34056652
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,455,1599537600"; 
   d="scan'208";a="34056652"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XrADel6kosonqSqljEoVSqEqY294tkgVQmEfndrJ1XAf0mmAxtYfhk9JH+iDuBaPwGHNk3oqKaqgzD6/0PiFtv/XdH9sLeZbaW0PyAHYQpC73ZqM7Wt1EJyDP8ZlHXoPY69b/+DLb5KOUjYxMA85lYE+lpDyqwZRnI6zhiR47+gY8UabhoKSY+Xf+MCts9slt+PaIPeTorBFDfrCnYsCqbtzJdyAw3QGOwH2fLHz8bYymCrB5GRIQJwlH1V+MVdmuI/NgtWwPmj22dGSiBTHzQjH/YylQZsWbGuiUGnXRTvzkniJWRLObY9q3jLx+A7EVTOeqxtZ6Qa6LrfGJ6c9hQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Y0T32xHlRne0BfcFdX3BiJ4ygytSnFj2q7tLSb7MWko=;
 b=JXaWgtVBAVvLPINJt/X9Z04Wr1Zyy5eYzKcxg9NyuVGec8zuilJhCCqWmMGyNEwYGnZRc39UcVUz0Rc9i3UTcmNmE/jDAv84F082dOsKe+wII5V9HIYHv56Po6Lp1B+PQrMdg7xuRkkAOHDkDUzoSBKAKCqJ9lyAWrmwLoNUJq0o9ws1SdMv2pjZNdskJGXASf7OwzfDaqgRmcx55/fXZlWim0N5oiRhBsSB4/CTrLOLdndCKn+Jf2Co7mTGTgSkJCcAeKA563wG1nkU7MabxdzKDkV8Fjv7K2Mjo8hwySfIe5Tg9JIGW1OrV4y9a+H6XgACICh1Tq5YetsGDQegTA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Y0T32xHlRne0BfcFdX3BiJ4ygytSnFj2q7tLSb7MWko=;
 b=I0b4ugG7tDZPRwm4NxDxcLV1WRjhzjSTLX7GFXeLNJHI7o/l/ntJ2qQqfchdOXRnS8ngH97RUgH7Ldi6Le2GgEAiH1AvqSs0MszAlWapxtfPx9k0o8yEDLLXj3sYENAGgbqFBxNzdkgdYH22q1XIReqQ0NNneFiJagVinj+0tW0=
Date: Mon, 28 Dec 2020 13:00:28 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 1/5] x86/build: limit rebuilding of asm-offsets.h
Message-ID: <20201228120028.f5clmk4jr3jrlo7b@Air-de-Roger>
References: <46d83c92-0b06-fc09-4832-7a7d7935d5c2@suse.com>
 <d437bdbf-3047-06ad-2fe8-f445cf8b3240@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d437bdbf-3047-06ad-2fe8-f445cf8b3240@suse.com>
X-ClientProxiedBy: PR2P264CA0022.FRAP264.PROD.OUTLOOK.COM (2603:10a6:101::34)
 To DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 25535cff-e0f3-4a12-6f65-08d8ab282f55
X-MS-TrafficTypeDiagnostic: DM6PR03MB5340:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB5340B5C9B0E12F5F0CAAD4968FD90@DM6PR03MB5340.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Dec 2020 12:00:35.3437
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: 25535cff-e0f3-4a12-6f65-08d8ab282f55
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: MRQq7SbjCJNdkmyNrQ7QfYSElv61vHcn1NzooMDvLsH+E4+EZCGm+8TOeQY1v8PrUPj+R99EO6wPYIAZ3p2cAg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5340
X-OriginatorOrg: citrix.com

On Wed, Nov 25, 2020 at 09:45:56AM +0100, Jan Beulich wrote:
> This file has a long dependencies list (through asm-offsets.s) and a
> long list of dependents. IOW if any of the former changes, all of the
> latter will be rebuilt, even if there's no actual change to the
> generated file. This is the primary scenario we have the move-if-changed
> macro for.
> 
> Since debug information may easily cause the file contents to change in
> benign ways, also avoid emitting this into the output file.
> 
> Finally, already before this change, *.new files needed to be included in
> what gets removed by the "clean" target.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> Perhaps Arm would want doing the same. In fact perhaps the rules should
> be unified by moving to common code?

Having the rule in common code would be my preference; the
prerequisites are slightly different, but I think we can sort that
out?

Thanks, Roger.
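For readers following the thread: the move-if-changed pattern Jan refers to can be sketched as below. This is a minimal illustrative shell version under stated assumptions, not the actual macro from xen's Rules.mk; the function name `move_if_changed` and the file names are hypothetical. The point is that the generated file is written to a temporary `*.new` file and only moved over the real target when the contents actually differ, so dependents of asm-offsets.h keep their timestamps (and are not rebuilt) when a regeneration turns out to be benign.

```shell
# Sketch of the move-if-changed idea (illustrative, not xen's macro):
# $1 = freshly generated temporary file, $2 = real target.
move_if_changed() {
    if ! cmp -s "$1" "$2"; then
        mv -f "$1" "$2"   # contents differ: update target, bump its timestamp
    else
        rm -f "$1"        # identical: discard temporary, timestamp untouched
    fi
}

# Usage: regenerate via a .new file, then install it conditionally.
printf '#define FOO 1\n' > asm-offsets.h.new
move_if_changed asm-offsets.h.new asm-offsets.h
```

This is also why the `*.new` intermediates need to be covered by the "clean" target, as the commit message notes: an interrupted build can leave them behind.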


From xen-devel-bounces@lists.xenproject.org Mon Dec 28 12:13:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Dec 2020 12:13:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59452.104361 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktrOW-00057G-B2; Mon, 28 Dec 2020 12:13:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59452.104361; Mon, 28 Dec 2020 12:13:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktrOW-000579-84; Mon, 28 Dec 2020 12:13:00 +0000
Received: by outflank-mailman (input) for mailman id 59452;
 Mon, 28 Dec 2020 12:12:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktrOV-000571-8K; Mon, 28 Dec 2020 12:12:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktrOU-0008FJ-Vn; Mon, 28 Dec 2020 12:12:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktrOU-0004In-Lv; Mon, 28 Dec 2020 12:12:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ktrOU-0001MC-LS; Mon, 28 Dec 2020 12:12:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5Dgg2GvplRPruUFROZO1HZkhFK9JOQGO/xigH4avBkI=; b=5Zl1aimL+0LTlB1u/nMgxyPK05
	9IqsVhUMH++NneAltU/2g3EHFmL8Nuo34VpeiZ0ZAA/Wb7hq8RD5hyWE8UxMTRZmQchl23JVpasip
	4O1vjqGgi9E7yxbrAjWdeKZTYI8RBTLIM3gwiMkK05LWYEfWZFq2Lv3QaV3Ib1KKTKos=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157933-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157933: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=bed50bcbbb2796aa88f1c85a2424fe9bd7944f89
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 28 Dec 2020 12:12:58 +0000

flight 157933 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157933/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              bed50bcbbb2796aa88f1c85a2424fe9bd7944f89
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  171 days
Failing since        151818  2020-07-11 04:18:52 Z  170 days  165 attempts
Testing same since   157715  2020-12-19 04:19:22 Z    9 days   10 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu<tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 33734 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 28 12:55:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Dec 2020 12:55:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59479.104400 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kts36-0000MJ-31; Mon, 28 Dec 2020 12:54:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59479.104400; Mon, 28 Dec 2020 12:54:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kts35-0000MC-W5; Mon, 28 Dec 2020 12:54:55 +0000
Received: by outflank-mailman (input) for mailman id 59479;
 Mon, 28 Dec 2020 12:54:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=shBg=GA=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kts34-0000M7-EU
 for xen-devel@lists.xenproject.org; Mon, 28 Dec 2020 12:54:54 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e1fc0b8f-7cdf-4609-bd38-d693050b56d5;
 Mon, 28 Dec 2020 12:54:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e1fc0b8f-7cdf-4609-bd38-d693050b56d5
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609160093;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=J63KpU2fYlOGFtjAKWJHhplEVi/fpqUD+Fy9dEacyeA=;
  b=d+tPKmZY5MmQldy7wxpHzjOyrH+xeynRlJoiMuEsWBurkgYYbfl+kVUX
   K8jzbh9sTXm+WLn5CsJZdmYHV508Ku2G1JdEhpV9OFq2rl3jm0wHtmuaI
   wFs/bO5i/oMMJA33f7YXLE/E8CITmBsrB5b55x/88I5b4NK58v9XWkIQy
   w=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: F4RSj03QukSpBuFbVzBB4/5j6zndoGedE8b2ZOBzOwWX+nKeMs5BhFC9SxMowXi1C0FU99PIUn
 L4HTPBXJt3gnCD6QUTy1DrZgG+RHQGlZ6zGIGxuEefmBBaNluVe43J2UOuESIu/2m2ORc5bheE
 lpKC6nKHPdV3uGm7EgSCAlUmwg67zIZWvZT1JHu/tz33FJ5pYN/jbVWhSurFQ6VZxfr7NkgyJT
 AVXdKJ5ozL7f42+2+lVY8TYyoNu4gFWSis4jOt4fELcNqix7luhytBOYMVoahtV5Iwe271CzHC
 iSs=
X-SBRS: 5.2
X-MesageID: 34019477
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,455,1599537600"; 
   d="scan'208";a="34019477"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OsB+cznp6Rv20kCPQLvF7HZry7onZwGUiGiBYPjbWrTKvtIgq5MqxKoqXkWDvX1KuT75qYwSvNAAPliF+8M2a8RVDJj5i8eCOTp8ln9e3thGs1MRBZ3pk4LxCIEZ9nugoUjUPCCjMhzN+cQIJd8rUP+QCYf3WgdKmei5sQZDj4GzjfqYIUujXOmXfBHBrWDiZEkv1xyfzLNLoOnaWgszwrizEn/KftMGCgVTojjDt2XgG6wiq3atnF+k/ESMOtthgH0/u3wqmE6IwjjgBJLDf2gFKL04uqffJDHV8ASQ4FBrD9nJyK9g0LYiEQCvCHq7AKkzTGdtQ35YSrgJ75n70w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TDmlxzDJ5JztCfIVcp+wiQjgcvygoqzWp44y+GA9iDw=;
 b=QRl0d/y4MWl/aFXqgZUhINwZqAmdWZtYjmFbleSy/toHLV/pl5/oAkAbivf/zfPBJOOymiNELdAvPTJNDRJdMe7X01kjtAG5rVoClcOxvqvXu+6SPd3rorBPisudaLD8BhU4WLvQfSgkY4n0I3BeCtc8MbpNa1JQCs6jPFUaIOPkgmnxjE7tGCmI7IMcX3VCfQnxfUzXFx60WgMOUDrZy7jQbz0VI4oEBbNWhAfvcSJw0vNhsLyADaC66lHzYvlc0/OBgYAD40lAh/zqx7KsxGJT7q8/mtZe3KY5u0of0cNlRFQuB0/IIscrA/GIVEQW8Dw1owptfz3PHj3CfSoG+g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TDmlxzDJ5JztCfIVcp+wiQjgcvygoqzWp44y+GA9iDw=;
 b=Dyy573K0KWPxQFJPXxZarDbzRsHv2ioVLXS+4qD1491DlnfFaqNcPEZfCgyJak7g6W+zmV78YCMCP2UZtKQwCmEMtEpVi5qMZeW+LQR84DRGTQZR5AByui87tqcd7Vqk7impz4wJxZSPfKGE9Zb5TkTC7EkOWDCrQHKz+cxbpN4=
Date: Mon, 28 Dec 2020 13:54:42 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 2/5] x86/build: limit #include-ing by asm-offsets.c
Message-ID: <20201228125442.gnkadcfrrnzczffs@Air-de-Roger>
References: <46d83c92-0b06-fc09-4832-7a7d7935d5c2@suse.com>
 <d7ac370e-2e1f-5b7a-b832-63577689053c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <d7ac370e-2e1f-5b7a-b832-63577689053c@suse.com>
X-ClientProxiedBy: MR2P264CA0161.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:501:1::24) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4b7ba8b2-e0fa-4611-0fa7-08d8ab2fc219
X-MS-TrafficTypeDiagnostic: DM5PR03MB2633:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB263349D038170644C425A7DD8FD90@DM5PR03MB2633.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:4714;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 3p5kmj5scw4APtfE78nFe/vJWEwvKgasiiqoV9DlPcYLgnxW6j2m/CTzEULut9+03rvQppko2rNaTW6TWC1h4LhqqlHHw4eHpZ0PSABpfG42hU69pEUPGhHxyel8sGVm1uTgzBXLso4kRtgOzF2igUoqHTQkAqPPxytvKV6Ukav7yN4xT3T78M2yIl/mf86GJM1bSlFWUayuIDkoX0dqeRrtFVj0mrkbKjXt8P2sIQZE1ET9bMWX9Cy0A5ZvRU60dp1zYXYRIi92EW/hGj1ETZNVWJZL15Yrp56fbncAdVZvnUgJ+xqGw1fwxK20BQpHuLgfPa00XOEHDzJgpdRCZBA3DWbeX2pve60jEjGtZH0PMDL5JPkq/A0xaKNAlfZG3xNVSYemN/ZFNnP/7ycJ+g==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(396003)(376002)(346002)(366004)(39850400004)(136003)(1076003)(6916009)(5660300002)(6666004)(8676002)(956004)(33716001)(6496006)(186003)(8936002)(478600001)(86362001)(9686003)(316002)(54906003)(3716004)(66476007)(85182001)(16526019)(26005)(4326008)(6486002)(66946007)(2906002)(66556008);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?RFpMM1N1bmk4L2RsbkpOZStsUVF0N0loVUNuM2lReHdXdU9TVjhjYjV6d1FP?=
 =?utf-8?B?YnJHMTdHNHdOc3lESGcwclphbWNKNVlsYXZvTzI2azh1UlIxQnlxNVc0VkFY?=
 =?utf-8?B?ZUt6OWxIK1g3bWRWKzltMFlYZzNRT2hZemwybk1Ba3RINDJpZkVJcGNEeXIx?=
 =?utf-8?B?clFTR3R3Y29WS3lOeXFpZHJXWDE1bnYxcDdpN1MvS0tvVEVpSW0wek9ManRn?=
 =?utf-8?B?VTZVRjlwdWl1MjJhb0ErcEdsQjRyQWNkTjJTYlpoUng0eWJPQ0MwT3c1dnJa?=
 =?utf-8?B?Q3h0aytJQklPeTg2WkhkWDd6YlpYdXZEeVdBek5NUXJPNTNhVEJ2R0F3WTlr?=
 =?utf-8?B?alc0amZ1cW1JbHpPZVlSaXdqdHhBQXhnTTZFSW9kZzRpWTY2dk51bkphN3hR?=
 =?utf-8?B?RWV0SDlHbzVSeTROaTVQdXp4dUpsMkJqQ01vdXl3MGFReCs0a2U5MlkveXZN?=
 =?utf-8?B?ZlZMZHU3Ymw1REJwTm1LT2piRHdiVDVRWlZNei9ZWlppWnEyaWlvS2xobTJQ?=
 =?utf-8?B?SXZ0K2dOSHUzOUZVSUlNMzJQK2daRFBJbkVzeENIc21xQ0hCWUxjREQyUTNR?=
 =?utf-8?B?akN2TUJJWWkzT1hMbGE2VGdSZUkwZ1VSZk0yeDgrK2pCWTJnWVB6bForTlZt?=
 =?utf-8?B?YmZtbndhVjNwWlRZZzhYWE5ZSGxZRVNxdzNROWU4aDVQRlY0a1dRNTdobi9L?=
 =?utf-8?B?MjI1V0lDajVCQTBsejBDY2c3aWhjNlpQN3dYaWhvaUd3N0s4bktubDBWbmM1?=
 =?utf-8?B?UGx5cVBwMDhCUnowZ1hwSlp5S29COHNDNDZJTURabE1YTzFRNFNzZndwY2U0?=
 =?utf-8?B?MXc5LzE0UWZkZVJsTXJzTkhqYVVnN1BNK0dRQkxiZUpHZnNvcm9rMHRqQktu?=
 =?utf-8?B?d3FsVmI4a0tNTGp5d294bUg5V29hNnczeG1pWEJqN0tYL3p4N0NkRWxFcWVk?=
 =?utf-8?B?UzlrQ2IzaHpldVVoRm9JMkUrZ0FhVXd0eUFBZnpZU096OTBZaW9Ea1NjN1ZJ?=
 =?utf-8?B?RkVMQzJQbGJOS1ZxVDQ5cGZrcnFTU0poVWN4T0VlTlR1THBjcW9FWEtQMThQ?=
 =?utf-8?B?UW96ZjJyc1BQRVFXbDl2bXhjb2ptMXE2SXY4SFcyQlJGNU9RVGdkbjhxbjNo?=
 =?utf-8?B?eXhOSjZrVnJzSTlIYVVKaWloMWdCamlFZitIdHdJK1lWaFlmTis2UGlwbXBK?=
 =?utf-8?B?cW4rYWlZaXY3MytNZG5hWXNVOGczQjNKb3ZZTzE0anlsOVpvd1BDQ3MxbjFW?=
 =?utf-8?B?Y3AzZ1MzVCtvR0E0ak1pSnRDVS9rZkM2WVpuNTZ3dEs1OGVoekFnekNFcTBP?=
 =?utf-8?Q?YlCYfqOmQkbSCUVBqvu9NrJx8nrApFC/cR?=
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Dec 2020 12:54:48.0612
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: 4b7ba8b2-e0fa-4611-0fa7-08d8ab2fc219
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 88qs+EDEos8hL4FsQ7iUctTTfiiXL5O0auWNZtkPITL6vLkWVuXGQz4kYmhIDod9UhVyIRP7DuRyr+arJYYqYQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB2633
X-OriginatorOrg: citrix.com

On Wed, Nov 25, 2020 at 09:49:21AM +0100, Jan Beulich wrote:
> This file has a long dependencies list and asm-offsets.h, generated from
> it, has a long list of dependents. IOW if any of the former changes, all
> of the latter will be rebuilt, even if there's no actual change to the
> generated file. Therefore avoid including headers we don't actually need
> (generally or configuration dependent).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/x86_64/asm-offsets.c
> +++ b/xen/arch/x86/x86_64/asm-offsets.c
> @@ -5,11 +5,13 @@
>   */
>  #define COMPILE_OFFSETS
>  
> +#ifdef CONFIG_PERF_COUNTERS
>  #include <xen/perfc.h>
> +#endif
>  #include <xen/sched.h>
> -#include <xen/bitops.h>
> +#ifdef CONFIG_PV
>  #include <compat/xen.h>
> -#include <asm/fixmap.h>
> +#endif
>  #include <asm/hardirq.h>
>  #include <xen/multiboot.h>
>  #include <xen/multiboot2.h>
> @@ -101,7 +103,6 @@ void __dummy__(void)
>  #ifdef CONFIG_PV
>      OFFSET(DOMAIN_is_32bit_pv, struct domain, arch.pv.is_32bit);
>      BLANK();
> -#endif
>  
>      OFFSET(VCPUINFO_upcall_pending, struct vcpu_info, evtchn_upcall_pending);
>      OFFSET(VCPUINFO_upcall_mask, struct vcpu_info, evtchn_upcall_mask);
> @@ -110,6 +111,7 @@ void __dummy__(void)
>      OFFSET(COMPAT_VCPUINFO_upcall_pending, struct compat_vcpu_info, evtchn_upcall_pending);
>      OFFSET(COMPAT_VCPUINFO_upcall_mask, struct compat_vcpu_info, evtchn_upcall_mask);
>      BLANK();
> +#endif

Since you are playing with this, the TRAPINFO/TRAPBOUNCE offsets also
seem like ones to gate on CONFIG_PV. And the VCPU_svm/vmx ones could be
gated on CONFIG_HVM AFAICT?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Dec 28 13:07:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Dec 2020 13:07:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59497.104430 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktsF9-0001cG-OK; Mon, 28 Dec 2020 13:07:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59497.104430; Mon, 28 Dec 2020 13:07:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktsF9-0001c9-LB; Mon, 28 Dec 2020 13:07:23 +0000
Received: by outflank-mailman (input) for mailman id 59497;
 Mon, 28 Dec 2020 13:07:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=shBg=GA=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1ktsF8-0001c4-AT
 for xen-devel@lists.xenproject.org; Mon, 28 Dec 2020 13:07:22 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 74cee0c8-56bc-4d8e-b42e-c6c88c9acb87;
 Mon, 28 Dec 2020 13:07:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74cee0c8-56bc-4d8e-b42e-c6c88c9acb87
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609160840;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=wPTD72Lwnlazv7mqVlA5zMrLkWp59Y2iQHAfnLC5VMc=;
  b=P6vg+5PjPbi9OH5Wq5HpyJjNx6v3b5NYfV2m+TooGSK+mB25xUYPWF1j
   h6sHk1WJrzYh/2+6ksVzDwafeuiNjeRf8eTB5N0ecPvzC87jpEdYaZ8Qq
   J+kOsZAi7KduntwP6FiDqALQb/hG44C/EdK0mOx6Ct1l7MJ68srEvLdBA
   A=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 3NPs+YpkVNhL1urHmlroO8ZQQAiz1TpQ6U7CtAz7L2g7/+DHTymkkpA7SVxhEGlFQHTy4HP+KQ
 SFPjlY0wusjfI7sgc/BnarE+SVJ/NkZtH0F/TXTkQKU59wgXT5fzan+30JzrIZjKlVdukPIkMI
 5nvc19I0oKWectd1NsnJcufNP4a0olrgA73g4ZWaqCQNyPmzZy06RR3sbzo2UBYQX9Gxrj2Uxu
 jYZ3P7w6sc8AslV7/OE5jj5Y2dzzgfoCU330Kk7VbF+Gg/TWbNsxdro/2YwCG8pll9mSSqcKfw
 ZsI=
X-SBRS: 5.2
X-MesageID: 34258314
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,455,1599537600"; 
   d="scan'208";a="34258314"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oIwfP9QttM+GRhEWIzdto7am7D5KMgIXHydcP2A++O4FmowctT0r7SGltZLBB5/LUTTSDbKTGwCNT5aRgfLB8noi4CrQcmSxM4nMAzHjaCegKiHSAXZ6djgauoszYhujLGTSYdrbHvj4Qeei5wwk//diJeumplr4OlN5WmcWSf7KW48EvFuAjvHxvSJi7zBMQGOow9MkTrFoplPMgy2CuTEJ50Z8JeSP2byHuitivIudQnwcniY2nJn/lXnbRDGAOMOu2KXsFZCK+ona5Diu7iRvzGl1yMGFM9erT8307eZ29b/UwIb6Hh5v1JHN+cjfBQz+O9TwKxCpDza2iyPUgg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gLXeYncdNbc5vZCVmz9o6Pw3YQNdxrO4q985/hbVTjQ=;
 b=OMbbC8XHzKnPYmPe4dO3QTz4yRpIhd12B82JIglEiIonc9YiIWCmzl+6HArQWx8awJbEmou07DoxmtBRaqSWC9aMkq0pK2uAuQhQcyg3++rNkcrGQ8WwaWMnQEW95xAg+NGu3QKUXhEbINdK0HxzIp1oHkk638jRfOXF6hrUHXz12YquilYU0Qu7xIYevWPLnLRK5Zx2a84iMC8Amtz4xaivD0uQbM8R1AAONmzPreUDFA+9XVGjk4HejMe6NCsa0IEIMdIE82DSzizf0Gosd8ssKmjPq5jwLLPAz98U18GAQpQbE7SWmz79HTRUC8czaNohPGHW+rdabIikU5ssvA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gLXeYncdNbc5vZCVmz9o6Pw3YQNdxrO4q985/hbVTjQ=;
 b=Z1S3D1HieGbiipDFliQ50PopFj8CYHjqmOID6J4B4JlPXrhoMAXP0LwFnGAvY1RBTA8CAcPBZ9Qh+pMxtYYeSd3Bjt4+xcM51vDYxEYAxHFcaZkBhMyoqmTiLTslQVBWdUc+GpmrbrhjjYMThFd61A99TNVhGANOveVK08s4B+E=
Date: Mon, 28 Dec 2020 14:07:09 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 3/5] x86/build: restrict contents of asm-offsets.h when
 !HVM / !PV
Message-ID: <20201228130709.cbwdujdhk67rz5c2@Air-de-Roger>
References: <46d83c92-0b06-fc09-4832-7a7d7935d5c2@suse.com>
 <d41ce371-262f-747a-9f6d-e5ab85a93aa5@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d41ce371-262f-747a-9f6d-e5ab85a93aa5@suse.com>
X-ClientProxiedBy: LO2P265CA0284.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a1::32) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c2d0334f-844a-4640-b515-08d8ab317f19
X-MS-TrafficTypeDiagnostic: DM6PR03MB3945:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB3945BC7590B5C9FAFF3F68458FD90@DM6PR03MB3945.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:2399;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: x8D9iK6Y68Agq+j6PzMMHko4YoAQWWLuwlZbbxYcPQ67UHMk5Lahxx4YICyK6V/clf0hIXfwvZkc5ssOrYvCjYbX+EWMAYWDyCy3lWTCo5I7/r3yMnXrZ0yJcdYPdngO1fHjDdH164SvVBxedz91UfYgSKtoegI+WESrtskbnEFFfpB+w+EuaERT5rZVGfiJKeMNSwD4StUd7IBxC8+1aT7sQmbIBFNm+Zc3JmclIUZ42dN1+0W7BpWfjhX1NIb9MTPeT2XUUJoHd05/HIY6aHwbZWIhow7Xku7tS4ypZ0MIGw7pXevp9bIevW/6iY01xWWbMoEcKMYbH25jcCXupXtB672xq3NBwJxVTfWSyHtRLnu38bw3BzNNmkOsyK2Lb9D9i+FSzR+d1c6AOaUTZw==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(136003)(346002)(366004)(376002)(39850400004)(396003)(85182001)(8936002)(478600001)(4744005)(66476007)(66946007)(6666004)(2906002)(1076003)(54906003)(6486002)(33716001)(8676002)(16526019)(9686003)(6916009)(5660300002)(66556008)(86362001)(4326008)(6496006)(956004)(186003)(316002)(26005);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?Ui9XZzhKRk9zWjNoQWFJbWdvaFUzOXFJRHgyVndEYUVta0ZDZFdZSE1MTXMz?=
 =?utf-8?B?bnh0eTkvaDJTUm94cEdZUm9SdSs4dGNVRGZTbGdXSkowckV4ZGVSbW1CeHNm?=
 =?utf-8?B?QlVkYzBvM1BpaHNHR3BmR0gwMkg3amNFM094cGR0WnkrcGNOQmtEWUhUL2M3?=
 =?utf-8?B?b0t1TzV5Wlh5V0RMYlRscjhYeDNxeHdKcGVhRTh2NlZGZTFhNWRBNEd1cWh0?=
 =?utf-8?B?U2hkMWRPK2trZ3ZEM204M3Z3NlNWdFpMZjJjK2VaS1JLQUNoQkt5YkY1bWo3?=
 =?utf-8?B?aVRNdFpVZ3NicjVNb2NBVjZ6TkhSUFBlKzdjeWJOekdEa0hhQkxLVnpKQmxy?=
 =?utf-8?B?T1p4TndYREcreFBrS0pPUUI4VklPYm5oSlRJRnJWbjFpWFZYWXZ2OTB5K0RD?=
 =?utf-8?B?WmdXQVlsYXFjWXZnYkJEbkJsaVpqL0hvMDVucjQzUlVTcmdjbDZPblpKRENI?=
 =?utf-8?B?Q3prVVpaa0xQbXNpdlN0MTUvRlQxOWk0c1lJN1BoZldxRFFvQXc5M3FIeDFJ?=
 =?utf-8?B?SjVHTmJwMUxtelZaaElPMGRvd1R5S3JGbUJ5TlU1Ynh6NmtxUDRRWTBLc1JV?=
 =?utf-8?B?aDNheiswNGVGN210dGNtd2FObk1DK00vc3gzQkUzRjRJTUJSTy9GYnlGbnl4?=
 =?utf-8?B?VXpKTUFkNU5Xd212VmZEQ2h2YzhIbGlOL0NlWTdKbzlmT293VEtoUE96SlZo?=
 =?utf-8?B?aU9YVnBrRUNPWWI5cjNSR1ZWalNPdFVCZ0RMQk1kMXQreFp0UkpVSnNSVk1Y?=
 =?utf-8?B?TE1xVGZhWnI0S0FrcC94THJIY2kwOUZwT2RJWlAyRklDMjhhZ05jbUJ6eERC?=
 =?utf-8?B?VitGNS9ETkpKM0tmRkJzZG10aEs2NzRwNWdRN2ErVmt6S1l5TmFxR1kyUTc2?=
 =?utf-8?B?b0lqNzlPTEhvQ0ZsRFVzZTIwMkxnZCtlVTlHb28vUVUvQ2VEejNOSnYrTkVx?=
 =?utf-8?B?VjV3ckNDdzlLOG1OU0M5VldiaEpPRXQvdmgzSFI5ZHVEYkozbnZhYllXU2ow?=
 =?utf-8?B?WWRhS0xQampKWE8vQVc5UGM0a1FTaE9FVVhkajFPSVoyeDlUOU1FdjRxclMr?=
 =?utf-8?B?aElpVHFjSm5ydmJ1VnN1Wk5Ib0ltVzNmSktieHphOUN5OEdXa3ZueUhxbXUr?=
 =?utf-8?B?TEQvSmVnL2paWGE3ejBiL0crR3RzdEpzZTRRL3l0TnRzamUyekFOZk5uempD?=
 =?utf-8?B?WEJSZEQ2Y3BXVkQ1Nnp4N1dhN3poSk5QRDF5UEpEV1NQb3dDMUZHVGlTOGh4?=
 =?utf-8?B?d0FKS1F3RlNWMDBQdDFJOGtSYnFMQURmdmRiQitua0ZyaGVLaDRhd0p6bWkz?=
 =?utf-8?Q?j5qKBOV27eSKEK0uJL19BeWJCPEZ8fZvZ4?=
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Dec 2020 13:07:14.6057
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: c2d0334f-844a-4640-b515-08d8ab317f19
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: DTr+WvtTljaaX1ZFD8IApq73HJgDL/8OKrsClGtSJn3WKoM7/k8vMTkri8pWy+P8RtNUjvDTG0+BXwadBE3hGg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3945
X-OriginatorOrg: citrix.com

On Wed, Nov 25, 2020 at 09:49:54AM +0100, Jan Beulich wrote:
> This file has a long dependencies list (through asm-offsets.[cs]) and a
> long list of dependents. IOW if any of the former changes, all of the
> latter will be rebuilt, even if there's no actual change to the
> generated file. Therefore avoid producing symbols we don't actually
> need, depending on configuration.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

I think that answers my question on the previous patch, so you can
add:

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

To both.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Dec 28 13:38:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Dec 2020 13:38:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59502.104442 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktsiz-0004DJ-59; Mon, 28 Dec 2020 13:38:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59502.104442; Mon, 28 Dec 2020 13:38:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktsiz-0004DC-0v; Mon, 28 Dec 2020 13:38:13 +0000
Received: by outflank-mailman (input) for mailman id 59502;
 Mon, 28 Dec 2020 13:38:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=shBg=GA=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1ktsix-0004D7-Si
 for xen-devel@lists.xenproject.org; Mon, 28 Dec 2020 13:38:11 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b503f3b0-888f-4887-86bf-801b3cbc34fd;
 Mon, 28 Dec 2020 13:38:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b503f3b0-888f-4887-86bf-801b3cbc34fd
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609162689;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=xRf/pouNo4W0pR/99rPzXSviSTZuq6CSogXiyONKy1c=;
  b=JdycH0TTmyexkmwhVCGK9sFs4NeZLKO8cmPEsUJQW1H4jvuKYhIsDYEt
   8YbNfQqc4nyfyUfR90WbzL6W8MKt6nLDHqQFKNFWO67P2t1/I6tYvsq5O
   riXnYUEsp1e3HQvUbQm3dOFf+NBtYzTtv93TqdE6TbHEHm1104jjgYCoN
   8=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: bBNYazyUy0BFPYINDW07OXv30hb8vVuyx3aK/U2ChA/cgrbbbDEcL4Q6toRG3Qq/Mas0RKN40N
 eXCwejQLcfOfP3EzLVrFKYWsrWtf8WuIf9HvHSTw0BaKUlEQNFec5kvGW2nK0OBGcRUKiKe7YV
 o1aNEj5n+m9uoctTf3+7RvykuRzpq1ZjToIYPzPR52JdZHvRuwqVespF639yfqZbB1zQT7nMPQ
 MMANX2uWQ44DJzzYixTNcnqs93STyG15aepM3a2RWa9IRjOr50ZwPOW205OnbGHvihAhPRlH7A
 c38=
X-SBRS: 5.2
X-MesageID: 34260064
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,455,1599537600"; 
   d="scan'208";a="34260064"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=j89o8Ck9YVcdhiCMuB3zokEzJUzee1m0wf3dT5la7ZZRZbtu284KUgKN/h02Ptxx10VEz7/MYg4c6SB763fobyePbYYAmqhCulibT8voe7Ohlb5GDnDWNCnpZZi5BgYpVWVTupSpn5KKXyRlLvr67ink8ErNCd3fOXuTzsXxcJy0cyXLsWvgi7TSJbS8y8Ag/R8sIPP1N6O8hL819T/ci3fM0MOMG/pYWwPnQIddYSesZdnpgbwmMGsLxcy1+dG7M/gIa4+UgNrUqi1xwwWkXVicdICVjUvJeoyXioE2dC0LdQTqOK2aGnFSGG62A3PJVdKE94TE3Cjke8qk3/YuSA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GTaL4nA/Kna6BnDl7gYzVia0wg/SRxZlzUPM9hmeqf4=;
 b=bTNm5e1kwGuk3L8BkugkNZZR19xFpsLrDfWD9K6Cq8ns/0iUFKg2DaSFUQL1cXxII+a9xdkOefM1wyIlG2mGHv1opkinpVJ8cwy3p7R85rgvOGus43R2FJekhsEcUWyGDQM/g9SiBOta8sr3dOmApgzN4tdNbsgH3MrCcnHsc8Xx2CXLmxGn9H7fIODm31jQuDFZdhXwSi0SKDVxpHF3rNVVMdQQ7N9ttMd9xJwolfKoUB1OYiPD8eb6StSR2r6aBe1+DcZumeuDkjuDvV3HZeAOMalIBpMd39UujRIyrcPPa2nbnuMnlv4WQLSo+f+FjT74br7hLb+TD9YEOWzo9A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GTaL4nA/Kna6BnDl7gYzVia0wg/SRxZlzUPM9hmeqf4=;
 b=b+6YpUgm8La1nELKPY+d7w7B/zMpd1geFJQSxlZ8TCafvC8rUEyyEDin2pJKgY6qlcNLGDp3BUY2sx50/jsUWYHvSLK+m01v9D+ABRfNVXlhl2aJCidYA4Smsd6JIRrOKS4DyA3/rf+6EHA9t8oW9pSmVtr9rmbuJvjdS2t+Xf4=
Date: Mon, 28 Dec 2020 14:37:59 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 4/5] x86: hypercall vector is unused when !PV32
Message-ID: <20201228133759.lidihcqkqo3svorw@Air-de-Roger>
References: <46d83c92-0b06-fc09-4832-7a7d7935d5c2@suse.com>
 <6505bcc4-0cb3-42de-9fd5-50da133d6d99@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <6505bcc4-0cb3-42de-9fd5-50da133d6d99@suse.com>
X-ClientProxiedBy: LO2P265CA0490.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:13a::15) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 2d1f8339-d2ad-4a34-276e-08d8ab35cda1
X-MS-TrafficTypeDiagnostic: DM6PR03MB5322:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB532203007D67EAE167EF14368FD90@DM6PR03MB5322.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:2803;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Dec 2020 13:38:04.3356
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: 2d1f8339-d2ad-4a34-276e-08d8ab35cda1
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5322
X-OriginatorOrg: citrix.com

On Wed, Nov 25, 2020 at 09:50:51AM +0100, Jan Beulich wrote:
> This vector can be used as an ordinary interrupt handling one in this
> case. To be sure no references are left, make the #define itself
> conditional.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Dec 28 15:30:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Dec 2020 15:30:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59510.104460 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktuTS-0005oy-P6; Mon, 28 Dec 2020 15:30:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59510.104460; Mon, 28 Dec 2020 15:30:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktuTS-0005or-Lo; Mon, 28 Dec 2020 15:30:18 +0000
Received: by outflank-mailman (input) for mailman id 59510;
 Mon, 28 Dec 2020 15:30:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=shBg=GA=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1ktuTR-0005om-7H
 for xen-devel@lists.xenproject.org; Mon, 28 Dec 2020 15:30:17 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7637a03f-c47e-46c8-8754-2e68afcdfeaf;
 Mon, 28 Dec 2020 15:30:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7637a03f-c47e-46c8-8754-2e68afcdfeaf
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609169415;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=qJtXPLIRJ/lXOtegCF8LL3vT9EHcKMmPEzSQwhLrEzo=;
  b=ZgkPEchyyqnKP1IvyH3SEDlO+MR3vuxPCU7/naQ6TGb3FRsQsmNl1QX2
   YS1OTsbHJHPJ3ptkes2AGuvbkWP/HRiB4Vh3c1TPdLYviSLfDx1e3lshe
   xMHSCwf6q399PptbW0iAVdcO4/FiXKvSzEEo4kB2E7n2Jr3UXQcnd/x51
   I=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 34029731
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,455,1599537600"; 
   d="scan'208";a="34029731"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=R1afBVWYWt1z+7ic+OajD7y46HOXxZMR6kiY1OGEmyw=;
 b=Amhf1b/nPsFHARbTsUF3eiku9HW07yphLctQAQAPYURLviiQ1GkJQnU+88Pnp1TPaRukrsCqO5OxsQ+IANiZLhBsH5g+Rx0GasrVT4biJU2NPjJ2jQSrLVoUd052cKuFApG+NKseIyGKPT5qWV/5cUEFUvdx7IfGuK03tmCOMdg=
Date: Mon, 28 Dec 2020 16:30:04 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH 5/5] x86: don't build unused entry code when !PV32
Message-ID: <20201228153004.qip3v6er5rk22fnu@Air-de-Roger>
References: <46d83c92-0b06-fc09-4832-7a7d7935d5c2@suse.com>
 <d417d3f9-3278-ed08-1ff6-45a13b5e3757@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <d417d3f9-3278-ed08-1ff6-45a13b5e3757@suse.com>
X-ClientProxiedBy: LO4P123CA0334.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18c::15) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 2ac587c7-f228-41d2-5ffe-08d8ab4576f6
X-MS-TrafficTypeDiagnostic: DS7PR03MB5590:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DS7PR03MB55905D3A9B7C3CCF5A6A01CE8FD90@DS7PR03MB5590.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Dec 2020 15:30:10.8850
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: 2ac587c7-f228-41d2-5ffe-08d8ab4576f6
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5590
X-OriginatorOrg: citrix.com

On Wed, Nov 25, 2020 at 09:51:33AM +0100, Jan Beulich wrote:
> Except for the initial part of cstar_enter, compat/entry.S is all dead
> code in this case. Further, along the lines of the PV conditionals we
> already have in entry.S, make code PV32-conditional there too (to a
> fair part because this code actually references compat/entry.S).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> TBD: I'm on the fence of whether (in a separate patch) to also make
>      conditional struct pv_domain's is_32bit field.
> 
> --- a/xen/arch/x86/x86_64/asm-offsets.c
> +++ b/xen/arch/x86/x86_64/asm-offsets.c
> @@ -9,7 +9,7 @@
>  #include <xen/perfc.h>
>  #endif
>  #include <xen/sched.h>
> -#ifdef CONFIG_PV
> +#ifdef CONFIG_PV32
>  #include <compat/xen.h>
>  #endif
>  #include <asm/hardirq.h>
> @@ -102,19 +102,21 @@ void __dummy__(void)
>      BLANK();
>  #endif
>  
> -#ifdef CONFIG_PV
> +#ifdef CONFIG_PV32
>      OFFSET(DOMAIN_is_32bit_pv, struct domain, arch.pv.is_32bit);
>      BLANK();
>  
> -    OFFSET(VCPUINFO_upcall_pending, struct vcpu_info, evtchn_upcall_pending);
> -    OFFSET(VCPUINFO_upcall_mask, struct vcpu_info, evtchn_upcall_mask);
> -    BLANK();
> -
>      OFFSET(COMPAT_VCPUINFO_upcall_pending, struct compat_vcpu_info, evtchn_upcall_pending);
>      OFFSET(COMPAT_VCPUINFO_upcall_mask, struct compat_vcpu_info, evtchn_upcall_mask);
>      BLANK();
>  #endif
>  
> +#ifdef CONFIG_PV
> +    OFFSET(VCPUINFO_upcall_pending, struct vcpu_info, evtchn_upcall_pending);
> +    OFFSET(VCPUINFO_upcall_mask, struct vcpu_info, evtchn_upcall_mask);
> +    BLANK();
> +#endif
> +
>      OFFSET(CPUINFO_guest_cpu_user_regs, struct cpu_info, guest_cpu_user_regs);
>      OFFSET(CPUINFO_verw_sel, struct cpu_info, verw_sel);
>      OFFSET(CPUINFO_current_vcpu, struct cpu_info, current_vcpu);
> --- a/xen/arch/x86/x86_64/compat/entry.S
> +++ b/xen/arch/x86/x86_64/compat/entry.S
> @@ -29,8 +29,6 @@ ENTRY(entry_int82)
>          mov   %rsp, %rdi
>          call  do_entry_int82
>  
> -#endif /* CONFIG_PV32 */
> -
>  /* %rbx: struct vcpu */
>  ENTRY(compat_test_all_events)
>          ASSERT_NOT_IN_ATOMIC
> @@ -197,6 +195,8 @@ ENTRY(cr4_pv32_restore)
>          xor   %eax, %eax
>          ret
>  
> +#endif /* CONFIG_PV32 */

I've also wondered about this: it feels weird to add CONFIG_PV32 gates
to compat/entry.S, since that file is supposed to be used only when
there's support for 32-bit PV guests.

Wouldn't this file get built only when such support is enabled?

> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -328,8 +328,10 @@ UNLIKELY_END(sysenter_gpf)
>          movq  VCPU_domain(%rbx),%rdi
>          movq  %rax,TRAPBOUNCE_eip(%rdx)
>          movb  %cl,TRAPBOUNCE_flags(%rdx)
> +#ifdef CONFIG_PV32
>          cmpb  $0, DOMAIN_is_32bit_pv(%rdi)
>          jne   compat_sysenter
> +#endif
>          jmp   .Lbounce_exception
>  
>  ENTRY(int80_direct_trap)
> @@ -370,6 +372,7 @@ UNLIKELY_END(msi_check)
>          mov    0x80 * TRAPINFO_sizeof + TRAPINFO_eip(%rsi), %rdi
>          movzwl 0x80 * TRAPINFO_sizeof + TRAPINFO_cs (%rsi), %ecx
>  
> +#ifdef CONFIG_PV32
>          mov   %ecx, %edx
>          and   $~3, %edx
>  
> @@ -378,6 +381,10 @@ UNLIKELY_END(msi_check)
>  
>          test  %rdx, %rdx
>          jz    int80_slow_path
> +#else
> +        test  %rdi, %rdi
> +        jz    int80_slow_path
> +#endif
>  
>          /* Construct trap_bounce from trap_ctxt[0x80]. */
>          lea   VCPU_trap_bounce(%rbx), %rdx
> @@ -390,8 +397,10 @@ UNLIKELY_END(msi_check)
>          lea   (, %rcx, TBF_INTERRUPT), %ecx
>          mov   %cl, TRAPBOUNCE_flags(%rdx)
>  
> +#ifdef CONFIG_PV32
>          cmpb  $0, DOMAIN_is_32bit_pv(%rax)
>          jne   compat_int80_direct_trap
> +#endif
>  
>          call  create_bounce_frame
>          jmp   test_all_events
> @@ -541,12 +550,16 @@ ENTRY(dom_crash_sync_extable)
>          GET_STACK_END(ax)
>          leaq  STACK_CPUINFO_FIELD(guest_cpu_user_regs)(%rax),%rsp
>          # create_bounce_frame() temporarily clobbers CS.RPL. Fix up.
> +#ifdef CONFIG_PV32
>          movq  STACK_CPUINFO_FIELD(current_vcpu)(%rax), %rax
>          movq  VCPU_domain(%rax),%rax
>          cmpb  $0, DOMAIN_is_32bit_pv(%rax)
>          sete  %al
>          leal  (%rax,%rax,2),%eax
>          orb   %al,UREGS_cs(%rsp)
> +#else
> +        orb   $3, UREGS_cs(%rsp)
> +#endif
>          xorl  %edi,%edi
>          jmp   asm_domain_crash_synchronous /* Does not return */
>          .popsection
> @@ -562,11 +575,15 @@ ENTRY(ret_from_intr)
>          GET_CURRENT(bx)
>          testb $3, UREGS_cs(%rsp)
>          jz    restore_all_xen
> +#ifdef CONFIG_PV32
>          movq  VCPU_domain(%rbx), %rax
>          cmpb  $0, DOMAIN_is_32bit_pv(%rax)
>          je    test_all_events
>          jmp   compat_test_all_events
>  #else
> +        jmp   test_all_events
> +#endif
> +#else
>          ASSERT_CONTEXT_IS_XEN
>          jmp   restore_all_xen
>  #endif
> @@ -652,7 +669,7 @@ handle_exception_saved:
>          testb $X86_EFLAGS_IF>>8,UREGS_eflags+1(%rsp)
>          jz    exception_with_ints_disabled
>  
> -#ifdef CONFIG_PV
> +#if defined(CONFIG_PV32)
>          ALTERNATIVE_2 "jmp .Lcr4_pv32_done", \
>              __stringify(mov VCPU_domain(%rbx), %rax), X86_FEATURE_XEN_SMEP, \
>              __stringify(mov VCPU_domain(%rbx), %rax), X86_FEATURE_XEN_SMAP
> @@ -692,7 +709,7 @@ handle_exception_saved:
>          test  $~(PFEC_write_access|PFEC_insn_fetch),%eax
>          jz    compat_test_all_events
>  .Lcr4_pv32_done:
> -#else
> +#elif !defined(CONFIG_PV)
>          ASSERT_CONTEXT_IS_XEN
>  #endif /* CONFIG_PV */
>          sti
> @@ -711,9 +728,11 @@ handle_exception_saved:
>  #ifdef CONFIG_PV
>          testb $3,UREGS_cs(%rsp)
>          jz    restore_all_xen
> +#ifdef CONFIG_PV32
>          movq  VCPU_domain(%rbx),%rax
>          cmpb  $0, DOMAIN_is_32bit_pv(%rax)
>          jne   compat_test_all_events
> +#endif
>          jmp   test_all_events
>  #else
>          ASSERT_CONTEXT_IS_XEN
> @@ -947,11 +966,16 @@ handle_ist_exception:
>          je    1f
>          movl  $EVENT_CHECK_VECTOR,%edi
>          call  send_IPI_self
> -1:      movq  VCPU_domain(%rbx),%rax
> +1:
> +#ifdef CONFIG_PV32
> +        movq  VCPU_domain(%rbx),%rax
>          cmpb  $0,DOMAIN_is_32bit_pv(%rax)
>          je    restore_all_guest
>          jmp   compat_restore_all_guest
>  #else
> +        jmp   restore_all_guest
> +#endif
> +#else
>          ASSERT_CONTEXT_IS_XEN
>          jmp   restore_all_xen
>  #endif

I would like to have Andrew's opinion on this one (as you and he tend
to modify more asm code than I do). There are quite a lot of additions
to the assembly code, and IMO they make it more complex, which I think
we should try to avoid, as assembly is already hard enough.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Dec 28 16:26:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Dec 2020 16:26:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59516.104472 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktvLr-0002GL-9U; Mon, 28 Dec 2020 16:26:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59516.104472; Mon, 28 Dec 2020 16:26:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktvLr-0002GE-6M; Mon, 28 Dec 2020 16:26:31 +0000
Received: by outflank-mailman (input) for mailman id 59516;
 Mon, 28 Dec 2020 16:26:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktvLq-0002G6-6j; Mon, 28 Dec 2020 16:26:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktvLp-0004bK-W7; Mon, 28 Dec 2020 16:26:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktvLp-0006Rn-N6; Mon, 28 Dec 2020 16:26:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ktvLp-00079y-Mc; Mon, 28 Dec 2020 16:26:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=RJpRmuE3LWh28NQhCm6wNAPM0HEcfa7tee8fkdM9Ngg=; b=ykutGYDnWSK+KhXpMznVqiy9+0
	e22mG3tFLhIvIt9nAKfnjrdKBIUw9OamzHvlRQHlYORDdYINgfv/7bqoK91CT3ypys6ilbFgk7SbR
	1ILazttEuOUNG/AfcydrdJhFxFC3izSIi2r2PhlYmh8g1ZgfpQktu6f9oy+3d9X59RTQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157936-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157936: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a05f8ecd88f15273d033b6f044b850a8af84a5b8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 28 Dec 2020 16:26:29 +0000

flight 157936 qemu-mainline real [real]
flight 157942 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157936/
http://logs.test-lab.xenproject.org/osstest/logs/157942/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  130 days
Failing since        152659  2020-08-21 14:07:39 Z  129 days  266 attempts
Testing same since   157670  2020-12-18 13:57:58 Z   10 days   19 attempts

------------------------------------------------------------
323 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 79306 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Dec 28 17:36:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Dec 2020 17:36:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59530.104492 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktwRN-0008CR-Md; Mon, 28 Dec 2020 17:36:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59530.104492; Mon, 28 Dec 2020 17:36:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktwRN-0008CK-JZ; Mon, 28 Dec 2020 17:36:17 +0000
Received: by outflank-mailman (input) for mailman id 59530;
 Mon, 28 Dec 2020 17:36:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=shBg=GA=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1ktwRL-0008C9-PU
 for xen-devel@lists.xenproject.org; Mon, 28 Dec 2020 17:36:16 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 22b03f45-0979-4986-a0cb-5a6d1e84551d;
 Mon, 28 Dec 2020 17:36:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 22b03f45-0979-4986-a0cb-5a6d1e84551d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609176973;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=8uOgiiXTTvrza3X2P18o8bGzIV/qPqjBRyMNDzFvzsQ=;
  b=E/+buRQqXfothbsTW+mF1KdRhjUUDFkmXrze5Po0X+gJzJTYbFbDyssK
   GqNMEn2WhuhoWpj2riLKRvhdlzX6rIsqr738u+HitVgXFkdmL0r0l6z6E
   MZVAA045qaHqut37rHqLDLFqlGztk0TsDXy+4SNPhlJq3sjhJIgsyiCwT
   0=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: iZZMybsebBGpYj3it6sFUzmyytv5jgcV71DuVfLw4rV1tSW9qF6zCtQtx5aeGsj68OVa8jNfcQ
 sfHx3VTlYeuVjV9IskrRqXWjArzQWuqetjYomgKPevf3ZT462oQ8SgSonzN//ZpX1yhASsLWrj
 CqjPvzq0GAr6GUw3iEq21q8EhlKKR7AwPKs2fkK9ujC69mly643QBLskkEy5bI15zJ9hHuZxIs
 FDiUbeFgmuEFVAMk9+orbodwPSXwJ0csDWPEmhai+Y4D63d4sbn3UB9215U+4giAqL2UND7RfM
 QyU=
X-SBRS: 5.2
X-MesageID: 34276070
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,455,1599537600"; 
   d="scan'208";a="34276070"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=buTpgg4ljyIp9fQa9MgR+Ouj4eDut1LQw8ZeysLI4uRgIqNnN5pB4tQct5gDCuCH+tBGQyYXRTfuyPfo+x4xbD5UlLUvd+rbRsDUfrpKN1iCvoQtoXAEY4bKBQWYDwvxtpbrCdPrBJvQkWTsLEdzNDpTPYPLB/+Yji5AriuDfNXv1I/X5nTXh06Bu3x8Fdy7ZqnlQ5ljCqBy2V24Ye9q2hjsWwOnPvCCBnHDUyHppAC0Isd8NNECQNZfJDt3G5jjw5pO8dXoPcQPy/sk0xK/D5bFq6Dqth4s80PbIP+HDuhHhsJAg08IrhGFnYGiMSwLDXTiHroueH8FgwVoBuNr3Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tB5KzQ/YTpz1EEGjQeYGZ8TPCf6ZJ/WJbLG95HlIKHo=;
 b=BsV82hAiE5L/44Cexb1b09N3YP+eZSvsgnVDA+GqYy2cp0KL0cLWUN1VDXkQMPFI11ARHzqMWl4jCYTQrsYxkWSUJxqZnn4o3DH3iVyQw0nCxhPkgTBOWRCDTWB0IZg5ilxTJEI2eMh6IFUalcsXYepOo7wdv6pDrCwDZka1hOw6u6hgfVG0r0t/+QfoN0yBQc8mInn5EIdOm1DiP1kYN7mgq/HZUbdqcDMbKmeIzGSZF7CmMIBNG4H5fzWm/InMGOEHNyRKjRYmbuk+BeaSkEdT1iDv/Ph6FHWf+I4GQn8qX221osWsdFtctTe+vRB3gl0Qbv5Q8AOZsccIcf/g5Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tB5KzQ/YTpz1EEGjQeYGZ8TPCf6ZJ/WJbLG95HlIKHo=;
 b=uquZC7wQDphFsVrtfG1/eJ22Cm5O69kySmWK7WjmsXqpo4OiT39Xa6JhHumzIBjwitPjtQxCSjyRvwk/36UASN03GLR0E0WIN+INSLqsbiNCkCDkvbIqTOzdS3umewxs8Mc4pCKE/zl1uE4148r25H3xlHJDAmDBYTm9PC2z6Gg=
Date: Mon, 28 Dec 2020 18:36:04 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 2/5] x86/vPCI: check address in vpci_msi_update()
Message-ID: <20201228173604.lggfeus2m7jsvekr@Air-de-Roger>
References: <f93efb14-f088-ca84-7d0a-f1b53ff6316c@suse.com>
 <c5bec6bd-b3cb-dc4c-0435-5154956cc4dd@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <c5bec6bd-b3cb-dc4c-0435-5154956cc4dd@suse.com>
X-ClientProxiedBy: LO3P123CA0003.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:ba::8) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: bc8ed423-773f-467f-30ce-08d8ab5710b9
X-MS-TrafficTypeDiagnostic: DM5PR03MB3066:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB30664E7082DB2B59059570FF8FD90@DM5PR03MB3066.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:4303;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: C0kPkQbzqJ4a9WK3aYf9s5O6nT9cYuOhQptw9+c/ZFArv7Q3sO83sFTe1WTnp6bNVRa3oXjWaT65aYsGhTNQndTJkSY5hEOYxsiOst9sOx87xHbTtLRhB12dzKWF986EulNdFOOOsUXmFBnL/GkSy48wF3otr9+7mhlyN1ngYoPw++Is/PzZg0w1dGyljOUU0xZwcBD/w4m2RQ7M9EEdFQY9gedV/0gPzrccXvFL1lJQ1/do/AiQ1MRemGFALqYYxsnBmePU6M3YhZU3xWOZbJDiqY7Fg7m/NhDjNrWHEBwAcGBXTVqcoCX5RxUCVru423Iq2M5ZCW8nHZotUH8oTw02FKvycdztBqT6j+JHsKQ3LTWvh+nm8Bfx2tQ2VB3HgEgaAmolLnxV9HKIPbc33g==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(346002)(39860400002)(396003)(366004)(376002)(136003)(33716001)(66556008)(5660300002)(316002)(6916009)(2906002)(6486002)(8676002)(83380400001)(9686003)(186003)(8936002)(54906003)(26005)(6496006)(4326008)(86362001)(956004)(16526019)(4744005)(1076003)(6666004)(478600001)(85182001)(66476007)(66946007);DIR:OUT;SFP:1101;
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Dec 2020 17:36:10.3464
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: bc8ed423-773f-467f-30ce-08d8ab5710b9
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: BHMR0ka0jpGu40lR7R/YETPvVXBJTndpHHXLxq0QMwWDk7PMuKEkyuFcF0U0sRcKlQbBv5XGBB1RL3Bobf/ziA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3066
X-OriginatorOrg: citrix.com

On Mon, Dec 07, 2020 at 11:37:22AM +0100, Jan Beulich wrote:
> If the upper address bits don't match the interrupt delivery address
> space window, entirely different behavior would need to be implemented.
> Refuse such requests for the time being.
> 
> Replace adjacent hard tabs while introducing MSI_ADDR_BASE_MASK.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Dec 28 17:43:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Dec 2020 17:43:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59535.104505 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktwY8-0000gN-EC; Mon, 28 Dec 2020 17:43:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59535.104505; Mon, 28 Dec 2020 17:43:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktwY8-0000gG-An; Mon, 28 Dec 2020 17:43:16 +0000
Received: by outflank-mailman (input) for mailman id 59535;
 Mon, 28 Dec 2020 17:43:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=shBg=GA=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1ktwY6-0000gB-EZ
 for xen-devel@lists.xenproject.org; Mon, 28 Dec 2020 17:43:14 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e7176eed-6de4-41e8-90b1-15dece3f670f;
 Mon, 28 Dec 2020 17:43:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7176eed-6de4-41e8-90b1-15dece3f670f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609177393;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=+ImJknkk1GoWK6QHrgdvOKkf5/RNnsNUMixAtUhQPxE=;
  b=FCqC84Vblsmqq+XbZaQbKpY3pSxwrb+ufqBBu5FBSjoLddlmfXzKGAIk
   0Czd6cGbJKayXI0+KP55bDMxkcdv3kj8l04uO7wB+r4vf+eVW9cRu/3ZQ
   TBjw2wiqq+eNjyt8C4K1Ymh7Jmbb1CA1+1D4DUDxOU3R9QUIf1OhIg3ba
   4=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: Pq+LK4usiSH/rLe3Q30nypa0tyHGmIdOYXH6qWCxlJ4XnNAyR81ZKT70Vv0wOK1OYJVXxcWz3v
 iYQeSxU2kPjZ8AZ6vWHnUBOOqvHswBoMM0Kx3sa/ZzR9u5ayLnGiJ9CGQOB1L5Fq0vTepZwKLI
 RgP86G6PvVywCjRY15fyIXvtuvrYteWoV7f96kHsSy4wFhJD9gsrm6svO4joX/K1F7c6DvpiPg
 xTwvrMeJayQa9rH24nZaia+ttGmUi2Q+3I+FLDJE0q/UZcNtJlEfRnL0MaUbSZqc3uf/QdA78v
 LNI=
X-SBRS: 5.2
X-MesageID: 34040013
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,455,1599537600"; 
   d="scan'208";a="34040013"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=G7uxKkjGJc1nIg84dBBa1Din7JaChr5d3X1wM7UfCVUrmvGyTxjUON4tGrPGPBawraagRLU+WPuP5ZsThRdlEuj6FOwIN7LWubdK1MJDPAJughcW24JUMT4xeyw92OY7K2p++e3i1/LGyPjnvGrPWe22zn9DyTl2sYfNiEEqoX5+J+P7ESMCXNF/0vj0hlgmZXBcZeCI9bM43czt8ppoiALSpa70/NRUIQHUGM+3iPXzSt8yyCcFPDPrrVYJYA3W5Y9FOH/STBaAxRBJZCWPD1QSMllpLjq1wauNHp31Rb3zAE4CZkEoKha2Yvm6qE1hfgGpV1PDe1Jtvbqnx6WtZA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1DkWZnDB5Vb49IecYhR41T3/sayS+eDaI6ygtN9QVTY=;
 b=Pj8HV3nq0gW4Ovq4zWbwwD6vNZY/Jy91XLsqFJz1o22BxsiEdhSXvdWqnwT6zNAP9v38bomgBXZUX3drppyGTZiXYBA5cLDgQFRxrm5HKu1chKmwUa22HtynWdlvpBn6C0WdeS2aYivnAqY8RPQaxeRIN0d6ovLujLdfWE9qMyEPii48/GPtjQSasD0p1ueB/KSedf9aXfn+O4bQjBOFdQHjp6tFes7uVHpyChX8t2BBrQXrYWppHK/ERoffTjRQfVWzZJUNVMe0bLtKJhekn0+FdE4QpPJK2x8x7S2f0NbrYeJGxXEZcdzm7Pa3X+LH7jsoegeZt1CcWUV9Bv3v5w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1DkWZnDB5Vb49IecYhR41T3/sayS+eDaI6ygtN9QVTY=;
 b=UEWw/4Sla9MT2xhO0o4dPmXU9UapDyX7twE4rZ3oVUDA0By8STtgwUczh0viYqLJO1NM5o4ntahM7QV91wYh0y++PrOywJhwLUX/TnNVREPGuplV5Fgmz8TbTCahqfSpbQohwVDqVUnUmtnpXcSFW6HvSv0ljt2iyIiROEmRlq8=
Date: Mon, 28 Dec 2020 18:43:04 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 3/5] vPCI/MSI-X: fold clearing of entry->updated
Message-ID: <20201228174304.x5uzdvtw7djqp5po@Air-de-Roger>
References: <f93efb14-f088-ca84-7d0a-f1b53ff6316c@suse.com>
 <c27c1c50-2d98-1796-f0e5-8fbae9f50045@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <c27c1c50-2d98-1796-f0e5-8fbae9f50045@suse.com>
X-ClientProxiedBy: MR2P264CA0087.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:32::27) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d48d81c5-7efe-43d0-1cb2-08d8ab580ab8
X-MS-TrafficTypeDiagnostic: DM5PR03MB3066:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB30666E215D3D67F91B2B55798FD90@DM5PR03MB3066.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 4283wO0pF7hiTZImlWheeAgF0VUhrWbMoFhdqs1CoqGLjN81++O2spHFjj8yowbE/fJEJzqpaQui3AbK0kAeDJ99UJnWqnR9wf9HP9tQ1WiRou/2lsDrBD8CE977ZrwOHv/KqmtNAkJyT42+L3BhyatU1C5oj7rabnaLcVszAp+jkZfmgLORpJc/aZ/4tpZ3ppHqWK4T2Q1bb9WXjuyChzV77gs2JpUHNQtrhfu6kU2Rwebun6C+ojCPDoReSTiZEhDop8jCSzWFZ+1O4KrkRD8awAvD0DhCN9rz0weAjBVCcn1xEjYsD3CNVYIfluuNOeMHFJuCvsglN7wva7nuDWunYWtJ5SNwIcVTFYwHMs01TzQ8gJMVHalUDw1LXwxibGKMZsw2O/NaZ9W/mDApIw==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(396003)(136003)(366004)(376002)(39860400002)(346002)(86362001)(16526019)(956004)(26005)(6496006)(4326008)(66946007)(85182001)(66476007)(1076003)(6666004)(478600001)(2906002)(6916009)(6486002)(66556008)(5660300002)(33716001)(316002)(15650500001)(186003)(9686003)(8936002)(54906003)(8676002)(83380400001);DIR:OUT;SFP:1101;
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Dec 2020 17:43:09.7314
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: d48d81c5-7efe-43d0-1cb2-08d8ab580ab8
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: m/pbWuVXBNyzdClhzMsLTCidG/IUk3jTfcIilNc3QcMq6gY9koVCKu/D1Oq/z02vs382T6GLlZdjqZDfTvJ86Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3066
X-OriginatorOrg: citrix.com

On Mon, Dec 07, 2020 at 11:37:51AM +0100, Jan Beulich wrote:
> Both call sites clear the flag after a successful call to
> update_entry(). This can be simplified by moving the clearing into the
> function, onto its success path.

The point of returning a value was to let the callers know whether to
clear the updated field, as there was no failure log message printed by
the callers.

> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> As a result it becomes clear that the return value of the function is of
> no interest to either of the callers. I'm not sure whether ditching it
> is the right thing to do, or whether this rather hints at some problem.

I think you should make the function void as part of this change;
there's a log message printed by update_entry in the failure case,
which IMO should be enough.

There's not much else callers can do AFAICT.

> --- a/xen/drivers/vpci/msix.c
> +++ b/xen/drivers/vpci/msix.c
> @@ -64,6 +64,8 @@ static int update_entry(struct vpci_msix
>          return rc;
>      }
>  
> +    entry->updated = false;
> +
>      return 0;
>  }
>  
> @@ -92,13 +94,8 @@ static void control_write(const struct p
>      if ( new_enabled && !new_masked && (!msix->enabled || msix->masked) )
>      {
>          for ( i = 0; i < msix->max_entries; i++ )
> -        {
> -            if ( msix->entries[i].masked || !msix->entries[i].updated ||
> -                 update_entry(&msix->entries[i], pdev, i) )
> -                continue;
> -
> -            msix->entries[i].updated = false;
> -        }
> +            if ( !msix->entries[i].masked && msix->entries[i].updated )
> +                update_entry(&msix->entries[i], pdev, i);
>      }
>      else if ( !new_enabled && msix->enabled )
>      {
> @@ -365,10 +362,7 @@ static int msix_write(struct vcpu *v, un
>               * data fields Xen needs to disable and enable the entry in order
>               * to pick up the changes.
>               */
> -            if ( update_entry(entry, pdev, vmsix_entry_nr(msix, entry)) )
> -                break;
> -
> -            entry->updated = false;
> +            update_entry(entry, pdev, vmsix_entry_nr(msix, entry));
>          }

You can also drop these braces now if you feel like it.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Dec 28 17:44:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Dec 2020 17:44:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59541.104516 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktwZg-0000oL-TO; Mon, 28 Dec 2020 17:44:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59541.104516; Mon, 28 Dec 2020 17:44:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktwZg-0000oE-QL; Mon, 28 Dec 2020 17:44:52 +0000
Received: by outflank-mailman (input) for mailman id 59541;
 Mon, 28 Dec 2020 17:44:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=shBg=GA=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1ktwZf-0000o8-Gk
 for xen-devel@lists.xenproject.org; Mon, 28 Dec 2020 17:44:51 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cf96829e-272e-4038-b141-84e148faea27;
 Mon, 28 Dec 2020 17:44:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cf96829e-272e-4038-b141-84e148faea27
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609177490;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=5LGS3+UTbtcKUWixPrb/RZkvl1ZQM3IwZS3pKmQ1TpM=;
  b=EhVUdHDLSKlr3xD5rT7vTDX93eLG0l/ysdYB2sn35ITYJC1H1OfnZF67
   dMrBZOOaGBlfXVfEIbwv+0b0jt+TcEOcX5MTnYUVx7z1XG3Oi5N/hXzNO
   h6bonJrgz8+X6cqyucTHBasodiseCwvAnqWQI4EbnFETCGBDuVHo0I43n
   A=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: NDe9ExaczJgGE/LvwBRGg7en+xk+3tfKDOK4iSmMruT5xHVnvkBL/JX8MXj96scZQcYNLzwDyr
 zoDw9kLk8PL3fHhKJMRfizb1cqELMvjGJt+c+WpS1VY+fhgiHYmyZJLNWmlyOrqRBMuD+A+Nux
 YbCSfMhF406U66qi+0GMRrhgjHNtPf7/okW9G+tFUx7KntqIPZ0hjVh9wh/19iQwXbiOueWBAH
 OGo+3g03x87uKSLru9OiXfyRhbXM1qSk1zBkcZzroc2ygjIfnc68aQiaI9fPiZ8qwSWXOGAk9J
 ViY=
X-SBRS: 5.2
X-MesageID: 34276701
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,455,1599537600"; 
   d="scan'208";a="34276701"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=N4Pt3MqwZakn2fl0V6hs4keHk7HfvEd4aDLG3EFC4pePpJ+25zuPa+LVOmb9sVp4q/9+QyYmE7lRcRogPSIsVRhdJJwqTSTFLbhGEQWhjbzjzSrkvT1had4ASikqyuCpIUnUcmVTdlqNkITMucT03O8PqHLcz3Gz+VK7cVpvGPT4PdxMyg3wr2oMYyb0L6lpeFYcgeaxY0VhRXEdCAKSi4sHrl0ixtjTNKQxB4Ex7r49R8wO1SzC81IaGTnxNikClmiu8NvfeWTy6mteeLGJyecfv6qqky65Vfad3XI4/MshKG0fYdROVUWfaFWkFN3brnZRiIeOi+0naspUDLSWYw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gHbgmvOIQuMOTRJcNZ+TnyAETGrViUzj+xZZhk4ohds=;
 b=KQMtGaxjI2ZfsiqmZsLClM1G3V8UgvMK+O3I4R9rAm0vInAIuw+tkEKoHRhb0kE6OqtUGuG1M2z2MrcxOgct/kKag5uinpDYYQTE+tbrohGjKPU3gYfCTgbG/qjeKrvisaFS+fFy3LfDrcHbLkJK5VR52Ez2KbbEoCLTs2zfeUSYJlHrmfiZdWKYudAgnGxJQquANQQCyQHFsOsm6jgP/vEuTpQ7GtSnL2/1ol0Q/H8IwLlBQov+GKgWs2DGoTxiDaxctEmWZNNaVznwsZRzGb8Gw9ArOhG/6qur57XqiKW1d9CnxbEbfWy456lN15xCDJbygV2OEqoAsRk04Tp4pw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gHbgmvOIQuMOTRJcNZ+TnyAETGrViUzj+xZZhk4ohds=;
 b=jn8DFsbnE96Xe2UE8ZYq5KY3IWrq4u+ixIpDffnEvVdB0cvSIYXKWhiMbCEznPHJDj5KgMn6m4X6d7026rDWX6hGKWEnCBe0kOVtwHc/c236uOF+Cz044TxPOfWqzUz9MbK5V0JZdFNcUEgPVtmSaPkf3Y79FtSxwYxhvdx9LJ4=
Date: Mon, 28 Dec 2020 18:44:39 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 4/5] vPCI/MSI-X: make use of xzalloc_flex_struct()
Message-ID: <20201228174439.nljyqls3m7s7pewd@Air-de-Roger>
References: <f93efb14-f088-ca84-7d0a-f1b53ff6316c@suse.com>
 <062e84e0-0e19-001e-df65-b06318cc5925@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <062e84e0-0e19-001e-df65-b06318cc5925@suse.com>
X-ClientProxiedBy: LO4P123CA0014.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:150::19) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 11522d6e-eafa-4411-064d-08d8ab5842f1
X-MS-TrafficTypeDiagnostic: DM5PR03MB3066:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB30660A7A91150F7D9BEBFC858FD90@DM5PR03MB3066.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:1107;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: XQTYHPwkQeEj6OLpKDrYC0JUokyXjFOdgoi9+IrmfF5978gfPgx8N8TaaFC1Lxd3TaVdAFaObF8SeCBgJWKOf7oRlh46OQZoWTMYF2l2YD+q0AxTW8uK2DvQf5HwVNLCe60w8pBkkYtUzEUWHY/4CkeKyckkCOtK5FPG60kl6HExS8jekGl3y7fnemkKB8X6irLvWSrxaaOTbhpSFCpkDNP9CcJ62BLeJ/355Y+vX8bXk9tZgn9Oj7HRVEWc5pjDmyv2irWNy5wRVFcFRYTOwxtgtCR0ypW3z5veah+XlO9IHTbtQ7143j6dvXVSGipoitLIHWJTDY8fLKIXMSeUn8yqcoYW23+X3mrZo2Q2tlPoa7XR0KUiUFfcvhB6p3q+E4gQJVEoVc2cBTHq8ZKzag==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(396003)(136003)(366004)(376002)(39860400002)(346002)(86362001)(16526019)(956004)(26005)(6496006)(4326008)(66946007)(85182001)(66476007)(1076003)(6666004)(478600001)(2906002)(6916009)(6486002)(558084003)(66556008)(5660300002)(33716001)(316002)(186003)(9686003)(8936002)(54906003)(8676002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?eGZCUXFuYjlCZFMxMnhkelIrV2ZIZzZDZWxiT2pleldqRDc1ZENyU0ZGaHZk?=
 =?utf-8?B?MWN5ZHNGMENXQ2FRZ1o3Z1hWRW1lK0l3NTRWZXpJSG9kbEF5UGFGRnlxdXBR?=
 =?utf-8?B?OEhySDJYL0Z0VHVuK1dEc2FqNm1EZWJaRHd0RkhKZEwyTk55L2FvMzVxUk1l?=
 =?utf-8?B?eGdtVng0S29ST3JUazZ6dDFVQ21aazZhQW16WVR1Y0pobFJJNDhBQ1U5Q0FX?=
 =?utf-8?B?bHczaUNqTGVUZk8rMWR1Z29zVDZqRXBQV3VGc2doOXV2WHkyQWNRajRvcU5v?=
 =?utf-8?B?eERja212Q255YnAzeXU5QWMvWTRxQXk2cGdlREhNQkF3N05xR1VhMjZYTnBi?=
 =?utf-8?B?dFZSalRrY1QvUyt5Ti9oN05LaCs3eTM0eWJSalBIMndnbnM3c2UwRDJlQVhS?=
 =?utf-8?B?c2YxUjVJQ21EQ3N3SVJOaHNiandUWmV3QzZkRnBUMmVLL2JEU0o4alVHUGYx?=
 =?utf-8?B?TjFJNlQvS0ZqVzlxc1lSMjdxWVorR1htWGlFSzhvNzBGaHpqSXpFZmxsT1NK?=
 =?utf-8?B?VHZHWmhuK2k3QTlzMWpQSStsd0RJNWVNUnA1UEIxZE4yOXJZbUNUMlRBeVhp?=
 =?utf-8?B?TjF3MWtkYWx0NUsrUHdyVjIyeFhnemZvVmJsRkY4T2Y1Z01ZY0VIeTd3V1kx?=
 =?utf-8?B?VnR4alRMdkx3azRORytReFRmMFlERUxiNmtzcFZmcm1jN3ljWk5WYkRicDVl?=
 =?utf-8?B?dDBoRiswNlNjNGVvRjFodTh2MGNLdFBsTDFJNGFXMmRjVlprRkJ6NjlvSDF2?=
 =?utf-8?B?MnBlTDlhc1ljc1g1dWZodFdjZXplanFzZWN2eEY0MWZLb0tSUWUyMGNYUFhU?=
 =?utf-8?B?QTdidjlHOVBJMzdIbHlvVHhUaFJrZWY0M2JvR1Z2b3FxVS8xTGM3V1h5aGhV?=
 =?utf-8?B?bXJRaEhwZ0xDTVUvMW4vNDRtNjJUYkxPK0VYc3d4a25zZnEvYXB3aGhGNzZu?=
 =?utf-8?B?Z0hlTHhJd0V3Ri9TUnhiYlErbVRZa2xuOXpsWkRDekdmVTdqSElIWllrOG5K?=
 =?utf-8?B?aVByQWNuaXEzMUVGQTlzWFlVNzRCMHc0czF5Z1BTUHkrYlJHQ1RNeE1lOWhs?=
 =?utf-8?B?a2p5YXlGQXk3bTI4OGZGQ0hiVlh2TStPVmJPWjR0SHVvMmE5aWdDRDhKcnBT?=
 =?utf-8?B?Z2c4dngvaHE2SVNPS2NDVXZheHdraFpxdXVvc3gvci9YbTRlM2QvWkc4Ukkr?=
 =?utf-8?B?TUNjYlEvSERjNDVLRkVyc0hPdk5hUFlUNzJPa29UR2FDY2dzZ0JaaXJlMjBB?=
 =?utf-8?B?MHFNK0tvNWtGZ2NaYWhhd2FocE0rbzVXeW5sZU53cW9oZFcyeng2Q0F0OHZs?=
 =?utf-8?Q?9QQJnOT969KoTwgGrklXjHMV1jljBp80Di?=
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Dec 2020 17:44:44.0403
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: 11522d6e-eafa-4411-064d-08d8ab5842f1
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: v9fljspzlJi0ihLezudw7JxVJaXHwIdAcG0NA7jMPSrY4sQKP/jeVUIonOuc+UiAbFi5NcExPuCpYVMc+oXr0A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3066
X-OriginatorOrg: citrix.com

On Mon, Dec 07, 2020 at 11:38:21AM +0100, Jan Beulich wrote:
> ... instead of effectively open-coding it in a type-unsafe way.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.


From xen-devel-bounces@lists.xenproject.org Mon Dec 28 17:59:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Dec 2020 17:59:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59547.104529 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktwnH-0001pS-6N; Mon, 28 Dec 2020 17:58:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59547.104529; Mon, 28 Dec 2020 17:58:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktwnH-0001pL-34; Mon, 28 Dec 2020 17:58:55 +0000
Received: by outflank-mailman (input) for mailman id 59547;
 Mon, 28 Dec 2020 17:58:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=shBg=GA=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1ktwnF-0001pG-F0
 for xen-devel@lists.xenproject.org; Mon, 28 Dec 2020 17:58:53 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7569d644-0260-4396-8d4a-f26dbc6fdcfa;
 Mon, 28 Dec 2020 17:58:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7569d644-0260-4396-8d4a-f26dbc6fdcfa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609178331;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=Uz+DU672ICkb/KeNCyH3d3zZrPGziElx/QdYC+h7tsc=;
  b=XE6CuIDY+zDZ+iCZA6moFvIWDBX6lUNpH7gPiJcPFZgX8ZIjCHzBXOGP
   PYSNVihJacgEZXCc/WcFFvFNUyxfVVki5Htymga4uzUZ4b0pTnXqsSlJO
   PJ90gePwxkQ7K6jrqDjLlIjgvRw9N2uTtZ7ip9QBOqdCuogjyF+ltsCoS
   g=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: wUNWyeMFFXT4wr//cR26S3uDH9qZ9Qjr0YDX/qJ+OGL7PYSOfXgSEKW5zs71IKTLRvEwmWKWfF
 U8X7Ajb6pbWyao6mkOHhx45JLIK5cEtOcXwMuI9KtkNhPEEaPtJ5Yq4O55ozZQpmrQS0XPbH0E
 svIzmHAdn7QgNrMCCJy8cYCkllNzwsTDo0pn0RJTwj0So4VmavfakkfXAu4eM+6IIfKd1/h+hf
 NX352P3IoJx/MWtcnTaLR8MiQYi34HwMfYqxKb8yr04ebDlgd/e3sDOEjq3y6cplYCnqV+mjYR
 s4Y=
X-SBRS: 5.2
X-MesageID: 34277738
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,455,1599537600"; 
   d="scan'208";a="34277738"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Uz/74jtQqUWlOVq3WoobWp3gzmo2bXj3HLPPVBOS3XqWOV3RGrf8ianAUJJ/+xscJ2id6VS4WcxThJmBvs5U0DrRTCh8Tm/x+1cSYhd8llKsdvf/XmCX+OuIeq4+gN6xsV/SH3lpX/9fnmNzLW1n0FCIcYBIVrM48ifUA8Gjhkf8SGRNzsrL6r+/cbzX2ofYXhu+cuTTxJoP3G5fjFgsUPzHYSz2CWcTMMSd/z4u6iZdibGVEHl7qHvNgkBVRPqRpDow058pHwpx+83mKn+v0CfLrbNgbdetVOuzOxcCBZd7rUFctNP4b9dwUERUoa8YYtTNbzvvPsfctBY90RJi+g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Uy5bTNHDCVjBbuIW4X/SGecNJtqr+d0lbXOznEpwKM8=;
 b=Gwhf/oedZLScX2BB1nlPobKSsorAbPmuHLtgFhGK5Fll1Aqw5QBq7NN+Ssp9hqJ1gGT3vvny76MxCZqDHESFd8wDbOl9Y3zVAYOvJp0fKRA4riA5e3l8htnzQPTpw07tk1vlBO5TrQcfw0BLE7iaZzEgAZfj//XMqXi/mmVwMv0Zs9ErYjobWBfI0j5vSgTXrww5xzviqCJ8cUy3bi/IU8R4nmYyk9t1hHeMXYHDZ0Z0eQBDBIt8YXnE1d+h7yGOCc0MuSoiDJIkIjXw+ehwQcCBbvTg/ngYwIvZhy4QuzgxYtv4wvxnGv9yPN5YPlVEpEVZoEVTw5DYm3Mp4/u86g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Uy5bTNHDCVjBbuIW4X/SGecNJtqr+d0lbXOznEpwKM8=;
 b=La2JbQM9wXJBBIOzEbbnh6BeSKCZddoJ+5nBAIquAivq5Z66z67Z/EMJTmuhowLuH/NbIYMdNcTBy1Vy9gN8BACwQa1VOBuq2leUkmPtLNc9mZ3oRx9hwwY9ILtj56VoDMA4dIn1v7/E0w/VU8XGyUzWujrfw5dW6gRnB9LI1uA=
Date: Mon, 28 Dec 2020 18:58:42 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 5/5] vPCI/MSI-X: tidy init_msix()
Message-ID: <20201228175842.hyecvulrklnxsdcm@Air-de-Roger>
References: <f93efb14-f088-ca84-7d0a-f1b53ff6316c@suse.com>
 <e21e4936-f356-8c8e-845d-d60880a58ed4@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <e21e4936-f356-8c8e-845d-d60880a58ed4@suse.com>
X-ClientProxiedBy: LO2P265CA0464.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a2::20) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4d583ee5-0c13-4383-bac4-08d8ab5a39ca
X-MS-TrafficTypeDiagnostic: DS7PR03MB5542:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DS7PR03MB554277A5CB57AB90A24E3C198FD90@DS7PR03MB5542.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: bWy/QdFkQ2XGPxwLAEztbcSkaEynYh66d5QHin+FNRYKVFFXO/srqvj5Mwh2UVEOXBOHHsjU/nMAA34XDBYO/j9qg5B+3kCatq6MKUF9XGq8SqI39rD2W6nDjLLVe1o+XB/qsknG8Icnz0tl47gjV4ASJh1oy09M7x/TqyNMD/C+JQYvaKtIqTvtq1+KDzJrGOAimBISQb3Yo//FpyYkT829cLs7EjaQErgk+HRVuwQVeVuvSdhPHPbjDbEq4y6QPOthv78f3sy56dXsw7jExHOBVShBKgLIaGq85Tlyd0S+o2EWtVaONpApXrho1HDxdI8647VLusEaa50zbwnAz8/cWiQlS/SUh7pKBR1xvIxxqQxOZZPvFsp6RlD37Z/AU+wxjdapkPHOpzev0WZwMg==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(346002)(366004)(396003)(376002)(136003)(39860400002)(2906002)(83380400001)(8676002)(478600001)(6486002)(16526019)(316002)(33716001)(5660300002)(6666004)(1076003)(6496006)(54906003)(6916009)(4326008)(66556008)(956004)(8936002)(26005)(86362001)(186003)(3716004)(9686003)(66946007)(66476007)(85182001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?VW9LcnIzVjVPcDBSOUkxSGdEc2lCMHlMeU5KMCtyMGxHWEh5ZjVVcEhrOStT?=
 =?utf-8?B?UHhNcmlNTmpzeXhaTnNPT0lnMndoNGtUNmlLeHVBakY0WE5EMkFRUDlGTmdI?=
 =?utf-8?B?b1NmTlRJOWZ5Y0VkUytMMitjMG1ZRnhCOGp3aGI2dUl1V2tWQUlBdkRzOHVD?=
 =?utf-8?B?QjRHUDFma29QL3dPeU9GSzE2UVQxR2hjMWErMjZUY0lPVVAxd3R0dFdTekwv?=
 =?utf-8?B?SllyNWNCRWdWaEswYXJTWnA0SkJXeXBmVHppYzNISklPWGpDbEtReXExQjl0?=
 =?utf-8?B?cG1wTFJoWXFQeVVja0Jiako1RW5zd2IrYUdJRlVtbStPT3Exdm1zZzcrSWRS?=
 =?utf-8?B?ajJiTysxQnZyVHpxcFhId214VDRqeE45QkVwNFdFTUMzdjVGUWR3L29IQlEw?=
 =?utf-8?B?cE5iTkJLK21zT25PNFBraTdMUlF1RjZhTUFqQ0Vob2ZEdFJKZStNZUhpQmJ5?=
 =?utf-8?B?M3VtYjJ1M09kamZISkJUU2QyVTVHL0N1dHVrdkpiS3NBc3Q0TzdWWG1nT0Z2?=
 =?utf-8?B?NVhLc29pWktqcWwvcjBZNnNld3ROaWNTclYvYWFjaDM4SXRxRGtaUjA5cUla?=
 =?utf-8?B?MGVDSER2K1FyRGt4REJ3eEIzMlRLSXFMQWRBQjVvUStMaHh5SW85TjRCY2Yw?=
 =?utf-8?B?VWNHUEtZYk5UcjhCYzJ0ZCt1UjUvT2VpWHg4TkRVVmVzV2c3Q0NmMEovYkxL?=
 =?utf-8?B?Umc2Z3JMbEJSc0pxc0tLNFMwVUhLNVRLNjE0YjVwMEs1bGJ2UnVGLzluWkpp?=
 =?utf-8?B?V05ZQWdPWHp6emMzN0tSV3lLbzJrVHd5TjhncFExcllyYWszdTJzVFJrV1BC?=
 =?utf-8?B?VXpFUmZKOXNpNlc5Y0Uzd20rdGFwRWgvQ1pwT3hQNUFEUjRjQ21xSTN2Y1V4?=
 =?utf-8?B?RVhRYWNFSC83RzcxVFR2ZkdzS2VxY3FraThzVzJsOHRvbDJmQlpDOTRYWmpw?=
 =?utf-8?B?eWFlLzJPNHVwVDVGc2tPUDJ4Ti9nTFB2U0JsUVBvSEFiWmw0aTJWbndkVE93?=
 =?utf-8?B?MUdhMld3b2FsWnkzbnFWL0NhNVBhdFpBUW5jZXlzT1ZRL2t0UjlSM3g5dk5u?=
 =?utf-8?B?clloZE9NZkljY3FiUVJOOGdMckk5TE5jWTNmUFR4VUFydUxPang1UldyTys0?=
 =?utf-8?B?RFh2RGpJNW1CeEQ4MkZVRS9aa2NlRHZTbUEyMGVGYzE1d0hpbWpLZzhtUzNC?=
 =?utf-8?B?QUhsbUVDaEI4MFRRYitqQWFFSGJpS1VhV0VsdCtSTndIMnVKaEJLZ0dtRHpw?=
 =?utf-8?B?Ty9KSVlYUFJTd3FpaUxwdXREWTZrdEZPYU94R0crSVNzMDFINGdpOUFzNFVL?=
 =?utf-8?Q?GkziqjdXDLll1XND2anlcXMX6CfF0kIVkV?=
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Dec 2020 17:58:47.7341
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: 4d583ee5-0c13-4383-bac4-08d8ab5a39ca
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ugp5yuHwXrnIvOPcrF+2A6ImT/Umq5iAoxeOtstJJHw3nIHOKJ384x0M2Y1YW29eLQ3kyqe7IhReY8snR+wsXQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5542
X-OriginatorOrg: citrix.com

On Mon, Dec 07, 2020 at 11:38:42AM +0100, Jan Beulich wrote:
> First of all introduce a local variable for the to be allocated struct.
> The compiler can't CSE all the occurrences (I'm observing 80 bytes of
> code saved with gcc 10). Additionally, while the caller can cope and
> there was no memory leak, globally "announce" the struct only once done
> initializing it. This also removes the dependency of the function on
> the caller cleaning up after it in case of an error.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Just a couple of comments.

> ---
> I was heavily tempted to also move up the call to vpci_add_register(),
> such that there would be no pointless init done in case of an error
> coming back from there.

Feel free to do so.

> 
> --- a/xen/drivers/vpci/msix.c
> +++ b/xen/drivers/vpci/msix.c
> @@ -436,6 +436,7 @@ static int init_msix(struct pci_dev *pde
>      uint8_t slot = PCI_SLOT(pdev->devfn), func = PCI_FUNC(pdev->devfn);
>      unsigned int msix_offset, i, max_entries;
>      uint16_t control;
> +    struct vpci_msix *msix;
>      int rc;
>  
>      msix_offset = pci_find_cap_offset(pdev->seg, pdev->bus, slot, func,
> @@ -447,34 +448,37 @@ static int init_msix(struct pci_dev *pde
>  
>      max_entries = msix_table_size(control);
>  
> -    pdev->vpci->msix = xzalloc_flex_struct(struct vpci_msix, entries,
> -                                           max_entries);
> -    if ( !pdev->vpci->msix )
> +    msix = xzalloc_flex_struct(struct vpci_msix, entries, max_entries);
> +    if ( !msix )
>          return -ENOMEM;
>  
> -    pdev->vpci->msix->max_entries = max_entries;
> -    pdev->vpci->msix->pdev = pdev;
> +    msix->max_entries = max_entries;
> +    msix->pdev = pdev;
>  
> -    pdev->vpci->msix->tables[VPCI_MSIX_TABLE] =
> +    msix->tables[VPCI_MSIX_TABLE] =
>          pci_conf_read32(pdev->sbdf, msix_table_offset_reg(msix_offset));
> -    pdev->vpci->msix->tables[VPCI_MSIX_PBA] =
> +    msix->tables[VPCI_MSIX_PBA] =
>          pci_conf_read32(pdev->sbdf, msix_pba_offset_reg(msix_offset));
>  
> -    for ( i = 0; i < pdev->vpci->msix->max_entries; i++)
> +    for ( i = 0; i < msix->max_entries; i++)

Feel free to just use max_entries directly here.

>      {
> -        pdev->vpci->msix->entries[i].masked = true;
> -        vpci_msix_arch_init_entry(&pdev->vpci->msix->entries[i]);
> +        msix->entries[i].masked = true;

I think we should also set msix->entries[i].updated = true; for
correctness? Albeit this will never lead to a working configuration,
as the address field will be 0 and thus cause an error to trigger if
enabled without prior setup.

Maybe on a different patch anyway.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Dec 28 18:10:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Dec 2020 18:10:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59553.104541 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktwyV-0003aK-B7; Mon, 28 Dec 2020 18:10:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59553.104541; Mon, 28 Dec 2020 18:10:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktwyV-0003aD-7w; Mon, 28 Dec 2020 18:10:31 +0000
Received: by outflank-mailman (input) for mailman id 59553;
 Mon, 28 Dec 2020 18:08:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H/to=GA=gmail.com=obalaz85@srs-us1.protection.inumbo.net>)
 id 1ktwwa-0002qU-Os
 for xen-devel@lists.xenproject.org; Mon, 28 Dec 2020 18:08:32 +0000
Received: from mail-wm1-f53.google.com (unknown [209.85.128.53])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 596fc759-5d05-4068-b714-1378bb6384d6;
 Mon, 28 Dec 2020 18:08:31 +0000 (UTC)
Received: by mail-wm1-f53.google.com with SMTP id c133so91771wme.4
 for <xen-devel@lists.xenproject.org>; Mon, 28 Dec 2020 10:08:31 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 596fc759-5d05-4068-b714-1378bb6384d6
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:from:date:message-id:subject:to;
        bh=Kta5IartLP/8dxe1dhpSNV4aATyrsPJh2h6ciKZbwi4=;
        b=q+J6TJSKxJ3JVQfvownQnCBJpryK6wuCUjzt+TMyCPlpxNJHUZGCqwGyMNgWYydEOs
         i5S3SFu/u12d57zgrHjp/uiqMQqHe5r/EHmqVcQUscGUf32cbjPvkTCiGzwYxa8h2dkT
         CAPzLXQQxeJSA/+5Gt7BRqlkqod/CIHZXHThVFg8zWPisaQQ0Zih+e/E4BIaXZtfVmcx
         SkFj/lILcbistWCW2Ky/SPdxORau8oI5ledlXbHp8uRE8j8gVyK38X2zanivo/0skmeu
         2FDial8Hu7YP9N3VZojUUi5tEVf1BFd8f/T0dpCfClMPGp3MZf9XzC1EBfaw2vy2lXRq
         ZkCA==
X-Gm-Message-State: AOAM533svKpxCNcVZdlUrOsF7DfCM9tGJt83sEMCXarnoK79gEhGCrF9
	x1aZZXNRsOkwrodWokuRye4IDhnd/f6+ypnGPAcevkV0i9CrDQ==
X-Google-Smtp-Source: ABdhPJxeDdm8S7ubobUlEgfUUeg8dRtjmezYGG9LDKX+DsP7fTijQljfo+HlTrelr9h1ebJ2p2PWgOhKpezEppIz7Gw=
X-Received: by 2002:a1c:5402:: with SMTP id i2mr122421wmb.12.1609178910124;
 Mon, 28 Dec 2020 10:08:30 -0800 (PST)
MIME-Version: 1.0
From: Ondrej Balaz <blami@blami.net>
Date: Tue, 29 Dec 2020 03:08:19 +0900
Message-ID: <CACmg6stNxXu3-SFdK_WtREbL2i7N522-DRRVr1ZXVOTqZ9j02Q@mail.gmail.com>
Subject: [BUG] Unable to boot Xen 4.11 (shipped with Ubuntu) on Intel 10i3 CPU
To: xen-devel@lists.xenproject.org
Content-Type: multipart/alternative; boundary="000000000000516d4a05b78a2920"

--000000000000516d4a05b78a2920
Content-Type: text/plain; charset="UTF-8"

Hi,
I recently updated my home server running Ubuntu 20.04 (Focal) with Xen
hypervisor 4.11 (installed using Ubuntu packages). Before the upgrade all
was running fine and both dom0 and all domUs were booting fine. The upgrade
was literally moving the hard drive from a 6th gen Intel CPU system to a
10th gen Intel CPU one and redoing the EFI entries from an Ubuntu live USB.

After doing so, standalone Ubuntu (without Xen multiboot) boots just fine,
but Ubuntu as dom0 under Xen fails pretty early on with the following error
(hand-copied from phone snapshots I took with loglvl=all, as this is a
barebones system without a serial port and I don't know how to dump full
logs in case of a panic):

(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[01])
(XEN) IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override
(XEN) ACPI: IRQ9 used by override
(XEN) Enabling APIC mode: Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x8086a201 base: 0xfed00000
(XEN) ERST table was not found
(XEN) ACPI: BGRT: invalidating v1 image at 0x7d7c1018
(XEN) Using ACPI (MADT) for SMP configuration information
...
(XEN) Switched to APIC driver x2apic_cluster
...
(XEN) Initing memory sharing.
(XEN) alt table ffff82d08042b840 -> ffff82d08042d7ce
...
(XEN) Intel VT-d iommu 0 supported page sizes: 4kB, 2MB, 1GB.
(XEN) Intel VT-d iommu 1 supported page sizes: 4kB, 2MB, 1GB.
(XEN) Intel VT-d Snoop Control not enabled
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled
(XEN) Intel VT-d Queued Invalidation enabled
(XEN) Intel VT-d Interrupt Remapping enabled
(XEN) Intel VT-d Posted Interrupt not enabled
(XEN) Intel VT-d Shared EPT tables enabled
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) nr_sockets: 1
(XEN) Enabled directed EOI with ioapic_ack_old on!
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using old ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) ..MP-BIOS bug: 8254 timer not connected to IO-APIC
(XEN) ...trying to set up timer (IRQ0) through the 8259A ... failed.
(XEN) ...trying to set up timer as Virtual Wire IRQ... failed.
(XEN) ...trying to set up timer as ExtINT IRQ...spurious 8259A interrupt
IRQ7.
(XEN) CPU0: No irq handler for vector e7 (IRQ -8)
(XEN) IRQ7 a=0001[0001,0000] v=60[ffffffff] t=IO-APIC-edge s=00000002
(XEN)  failed :(.
(XEN)
(XEN) *******************************
(XEN) Panic on CPU 0:
(XEN) IO-APIC + timer doesn't work!  Boot with apic_verbosity=debug and
send report.  Then try booting with the `noapic` option
(XEN) *******************************

I suspected the drive migration could be the cause, so I took an empty
SSD, installed a fresh Ubuntu, and added the Xen hypervisor; after reboot I
ended up with the same panic. I tried booting with noapic (gave a general
page fault) and iommu=0 (said it needs iommu=required/force). Booting
this exact fresh install on the older (6th gen) Intel CPU succeeded. I also
have access to one more system with a 10th gen Intel CPU (a Lenovo laptop),
and had no luck booting Xen there either, ending in the same panic.

Back on my barebones system, I tried to match the BIOS settings between the
working and non-working machines, but it didn't help. Virtualization is
enabled, both systems are from the same maker (Intel NUC barebones), and
both are EFI enabled with secure boot disabled (the latter doesn't seem to
have an option to disable EFI boot and boot via MBR).

Is this a known issue? Are there any boot options that could potentially
fix this?

Any help (including how to dump full Xen boot logs without serial)
appreciated.

Thanks,
Ondrej

--000000000000516d4a05b78a2920--


From xen-devel-bounces@lists.xenproject.org Mon Dec 28 18:11:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Dec 2020 18:11:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59560.104553 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktwza-0003iA-QF; Mon, 28 Dec 2020 18:11:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59560.104553; Mon, 28 Dec 2020 18:11:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktwza-0003i3-ME; Mon, 28 Dec 2020 18:11:38 +0000
Received: by outflank-mailman (input) for mailman id 59560;
 Mon, 28 Dec 2020 18:11:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Po3/=GA=yahoo.com=akm2tosher@srs-us1.protection.inumbo.net>)
 id 1ktwzZ-0003hw-Rk
 for xen-devel@lists.xenproject.org; Mon, 28 Dec 2020 18:11:37 +0000
Received: from sonic315-20.consmr.mail.ne1.yahoo.com (unknown [66.163.190.146])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ec7ab366-7289-4d03-b204-dda487f7f00f;
 Mon, 28 Dec 2020 18:11:37 +0000 (UTC)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic315.consmr.mail.ne1.yahoo.com with HTTP; Mon, 28 Dec 2020 18:11:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ec7ab366-7289-4d03-b204-dda487f7f00f
Date: Mon, 28 Dec 2020 18:11:08 +0000 (UTC)
From: tosher 1 <akm2tosher@yahoo.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Message-ID: <943136031.5051796.1609179068383@mail.yahoo.com>
Subject: PVH mode PCI passthrough status
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
References: <943136031.5051796.1609179068383.ref@mail.yahoo.com>
X-Mailer: WebService/1.1.17278 YMailNorrin Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:84.0) Gecko/20100101 Firefox/84.0
Content-Length: 321

Hi,

As of Xen 4.10, PCI passthrough support was not available in PVH mode. I was wondering whether it has been added in a later version.

It would be great to know the latest status of PCI passthrough support for Xen's PVH mode. Please let me know if you have any updates on this.

Thanks,
Mehrab


From xen-devel-bounces@lists.xenproject.org Mon Dec 28 18:24:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Dec 2020 18:24:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59567.104565 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktxBr-0004nE-VO; Mon, 28 Dec 2020 18:24:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59567.104565; Mon, 28 Dec 2020 18:24:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktxBr-0004n7-RW; Mon, 28 Dec 2020 18:24:19 +0000
Received: by outflank-mailman (input) for mailman id 59567;
 Mon, 28 Dec 2020 18:24:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=shBg=GA=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1ktxBq-0004n2-MY
 for xen-devel@lists.xenproject.org; Mon, 28 Dec 2020 18:24:18 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 474be9e3-ac94-47bf-b6fc-860b8c971072;
 Mon, 28 Dec 2020 18:24:16 +0000 (UTC)
X-Inumbo-ID: 474be9e3-ac94-47bf-b6fc-860b8c971072
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609179856;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=aiHen9JT4PeYKMzQFcFqnfx+rv1IBcBTuskJWWUR4fE=;
  b=edLh36iNFIa/7VjXd6/ZUVkIEqOjaQREfZNgHmtE903R6EteWYLP0jR6
   oqxhFxj0hYKZeV+V6Yi0tP28NkH6CEqrNT2NnJwAZlAoVnNVK+dzLKmLg
   oBgYJQykk5VN5Yn9RRNi7/N7iK7J4dDm5ewRGgEu3A2JZJLuD6EpYazEI
   Q=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 35298032
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,455,1599537600"; 
   d="scan'208";a="35298032"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UFXoTKwokUNFL67JtthriUpbVVuEHjE1CFWaQMsXsEM=;
 b=l+CbRnGrhABJFtcWH18L3X2vWQiv0ARPZyHHiWBnHXyO3i/RsSJQpe9DBRJF8LDbz1D0mNnSgXz9SuSm/UIwJZy+haZSiLQNiqW+rsWBgVJMdaHtoi/kafQw+KbsYr9jk/7y1tLz8cyNUjEU+THGE9viyoiRSMPzh4ZjSmEBnCI=
Date: Mon, 28 Dec 2020 19:24:07 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Manuel Bouyer
	<bouyer@antioche.eu.org>
Subject: Re: [PATCH 1/5] x86/vPCI: tolerate (un)masking a disabled MSI-X entry
Message-ID: <20201228182407.5ntx7qppe4vu7fvu@Air-de-Roger>
References: <f93efb14-f088-ca84-7d0a-f1b53ff6316c@suse.com>
 <fef14892-f21d-e304-d9b1-7484e0ea3415@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <fef14892-f21d-e304-d9b1-7484e0ea3415@suse.com>
X-ClientProxiedBy: MRXP264CA0042.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:14::30) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 960016ae-50e3-4c23-5e98-08d8ab5dc6a5
X-MS-TrafficTypeDiagnostic: DM5PR03MB2491:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB2491BF6A28DBD9898BDC5CF58FD90@DM5PR03MB2491.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: SExZkCHxlRQgEx4pSuKam/31c0JsupyYN9liMT/D0NTHzHJxgcYevjBGP2CRISh7UVHPFzGLoPwe/sopZEPLeNsytrymJXmj5rUBP0CySI5QfeqWVCtNzxngRMwGABN6bk0ti/ZLxH1qdjbzlBinMfBWPazZ25QZkCRpSYm+1Cs8pUwObcgRBQXSN3baKdyLLPJ27XK6BrPEjN/xgn4Pcmf4M0b0MNJ26B3sNTIFFcnqV0HjYg/JlVFf7UlfgNKvzB2rTRpYEeWm6hCae4OjkG445orrEWQ2/3OLPKyaih/AswzVvuzT5DnHCXsOJk4gKksoFyGduD52SdZLgjeizpdlPYpwkhfJn9BacVVIG/XRAzGHoR/YDT1UAay+WFI5lCUZjuM0jmpIm9Pd5C9vZA==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(366004)(396003)(346002)(376002)(39860400002)(136003)(4326008)(956004)(66946007)(9686003)(85182001)(478600001)(8936002)(16526019)(26005)(54906003)(1076003)(66476007)(86362001)(66556008)(6916009)(6486002)(8676002)(5660300002)(316002)(6666004)(186003)(2906002)(83380400001)(6496006)(33716001);DIR:OUT;SFP:1101;
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Dec 2020 18:24:12.4391
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: 960016ae-50e3-4c23-5e98-08d8ab5dc6a5
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: OT6GIIwu+u/iSC2Fk2O0giGrmNosiv5ad1fG31+zDYrSf1oQn7iPAHeP67Q/NGFNjJLzjzlhfC6xaJeXHVvfWA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB2491
X-OriginatorOrg: citrix.com

On Mon, Dec 07, 2020 at 11:36:38AM +0100, Jan Beulich wrote:
> None of the four reasons causing vpci_msix_arch_mask_entry() to get
> called (there's just a single call site) are impossible or illegal prior
> to an entry actually having got set up:
> - the entry may remain masked (in this case, however, a prior masked ->
>   unmasked transition would already not have worked),
> - MSI-X may not be enabled,
> - the global mask bit may be set,
> - the entry may not otherwise have been updated.
> Hence the function asserting that the entry was previously set up was
> simply wrong. Since the caller tracks the masked state (and setting up
> of an entry would only be effected when that software bit is clear),
> it's okay to skip both masking and unmasking requests in this case.

In the original approach I just added this because I had convinced
myself that scenario was impossible. I think we could also do:

diff --git a/xen/drivers/vpci/msix.c b/xen/drivers/vpci/msix.c
index 64dd0a929c..509cf3962c 100644
--- a/xen/drivers/vpci/msix.c
+++ b/xen/drivers/vpci/msix.c
@@ -357,7 +357,11 @@ static int msix_write(struct vcpu *v, unsigned long addr, unsigned int len,
          * so that it picks the new state.
          */
         entry->masked = new_masked;
-        if ( !new_masked && msix->enabled && !msix->masked && entry->updated )
+
+        if ( !msix->enabled )
+            break;
+
+        if ( !new_masked && !msix->masked && entry->updated )
         {
             /*
              * If MSI-X is enabled, the function mask is not active, the entry
@@ -470,6 +474,7 @@ static int init_msix(struct pci_dev *pdev)
     for ( i = 0; i < pdev->vpci->msix->max_entries; i++)
     {
         pdev->vpci->msix->entries[i].masked = true;
+        pdev->vpci->msix->entries[i].updated = true;
         vpci_msix_arch_init_entry(&pdev->vpci->msix->entries[i]);
     }

In order to solve the issue.

As pointed out in another patch, regardless of what we end up doing
with the issue at hand, we might have to consider setting updated to
true in init_msix in case we want to somehow support enabling an entry
that has its address and data fields set to 0.

> 
> Fixes: d6281be9d0145 ('vpci/msix: add MSI-X handlers')
> Reported-by: Manuel Bouyer <bouyer@antioche.eu.org>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Manuel, can we get confirmation that this fixes your issue?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Dec 28 18:49:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Dec 2020 18:49:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59572.104577 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktxZv-0006fw-01; Mon, 28 Dec 2020 18:49:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59572.104577; Mon, 28 Dec 2020 18:49:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktxZu-0006fp-T6; Mon, 28 Dec 2020 18:49:10 +0000
Received: by outflank-mailman (input) for mailman id 59572;
 Mon, 28 Dec 2020 18:49:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GUIM=GA=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ktxZt-0006fh-DM
 for xen-devel@lists.xenproject.org; Mon, 28 Dec 2020 18:49:09 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 20d01331-dc4b-4e42-9b77-e4ca6f0a5bef;
 Mon, 28 Dec 2020 18:49:07 +0000 (UTC)
X-Inumbo-ID: 20d01331-dc4b-4e42-9b77-e4ca6f0a5bef
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609181347;
  h=subject:to:references:from:message-id:date:mime-version:
   in-reply-to:content-transfer-encoding;
  bh=9G6AIxDAf2ARK1SZTcq77GtBN9D79SabgF89Tvyag2k=;
  b=Go67pWi7jVGCaxPp91+yQZ7tKVz5bA1b1FHsn046NV6ekz95LpZZfKkX
   QjDhERBIi7tlmYO6SnaMo9DQ2fE5pAheE4jEeiXUSNFLWbLp6/SwozgHR
   WfG0qKdto+qZO9cQS/sb3oNChbHEdZo2UKwySGwA2fKpWcsqXs4bi6Dub
   k=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.2
X-MesageID: 35299876
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,455,1599537600"; 
   d="scan'208";a="35299876"
Subject: Re: [BUG] Unable to boot Xen 4.11 (shipped with Ubuntu) on Intel 10i3
 CPU
To: Ondrej Balaz <blami@blami.net>, <xen-devel@lists.xenproject.org>
References: <CACmg6stNxXu3-SFdK_WtREbL2i7N522-DRRVr1ZXVOTqZ9j02Q@mail.gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <f3346d45-e8d5-5b37-a9df-f410af54469f@citrix.com>
Date: Mon, 28 Dec 2020 18:49:01 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <CACmg6stNxXu3-SFdK_WtREbL2i7N522-DRRVr1ZXVOTqZ9j02Q@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 28/12/2020 18:08, Ondrej Balaz wrote:
> Hi,
> I recently updated my home server running Ubuntu 20.04 (Focal) with
> Xen hypervisor 4.11 (installed using Ubuntu packages). Before the
> upgrade all was running fine and both dom0 and all domUs were booting
> fine. Upgrade was literally taking harddrive from 6th gen Intel CPU
> system to 10th gen Intel CPU one and redoing EFI entries from Ubuntu
> live USB.
>
> After doing so standalone Ubuntu (without Xen multiboot) boots just
> fine but Ubuntu as dom0 with Xen fails pretty early on with following
> error (hand-copied from phone snaps I took with loglvl=all as this is
> barebone system without serial port and I don't know how to dump full
> logs in case of panic):
>
> (XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[01])
> (XEN) IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
> (XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
> (XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
> (XEN) ACPI: IRQ0 used by override.
> (XEN) ACPI: IRQ2 used by override 
> (XEN) ACPI: IRQ9 used by override
> (XEN) Enabling APIC mode: Flat.  Using 1 I/O APICs
> (XEN) ACPI: HPET id: 0x8086a201 base: 0xfed00000
> (XEN) ERST table was not found
> (XEN) ACPI: BGRT: invalidating v1 image at 0x7d7c1018
> (XEN) Using ACPI (MADT) for SMP configuration information
> ...
> (XEN) Switched to APIC driver x2apic_cluster
> ...  
> (XEN) Initing memory sharing.
> (XEN) alt table ffff82d08042b840 -> ffff82d08042d7ce
> ...
> (XEN) Intel VT-d iommu 0 supported page sizes: 4kB, 2MB, 1GB.
> (XEN) Intel VT-d iommu 1 supported page sizes: 4kB, 2MB, 1GB.
> (XEN) Intel VT-d Snoop Control not enabled 
> (XEN) Intel VT-d Dom0 DMA Passthrough not enabled 
> (XEN) Intel VT-d Queued Invalidation enabled 
> (XEN) Intel VT-d Interrupt Remapping enabled
> (XEN) Intel VT-d Posted Interrupt not enabled  
> (XEN) Intel VT-d Shared EPT tables enabled
> (XEN) I/O virtualisation enabled
> (XEN)  - Dom0 mode: Relaxed
> (XEN) Interrupt remapping enabled
> (XEN) nr_sockets: 1
> (XEN) Enabled directed EOI with ioapic_ack_old on!
> (XEN) ENABLING IO-APIC IRQs
> (XEN)  -> Using old ACK method
> (XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
> (XEN) ..MP-BIOS bug: 8254 timer not connected to IO-APIC
> (XEN) ...trying to set up timer (IRQ0) through the 8259A ... failed.
> (XEN) ...trying to set up timer as Virtual Wire IRQ... failed.
> (XEN) ...trying to set up timer as ExtINT IRQ...spurious 8259A
> interrupt IRQ7.
> (XEN) CPU0: No irq handler for vector e7 (IRQ -8)
> (XEN) IRQ7 a=0001[0001,0000] v=60[ffffffff] t=IO-APIC-edge s=00000002
> (XEN)  failed :(.
> (XEN)
> (XEN) *******************************
> (XEN) Panic on CPU 0:
> (XEN) IO-APIC + timer doesn't work!  Boot with apic_verbosity=debug
> and send report.  Then try booting with the `noapic` option
> (XEN) *******************************
>
> I suspected that migration of drive could cause problem so I took an
> empty SSD and installed fresh Ubuntu and added Xen hypervisor, after
> reboot I ended up with same panic. I tried booting with noapic (gave
> general page fault) and iommu=0 (said it needs iommu=required/force).
> Trying to boot this exact fresh install on older (6th gen) Intel CPU
> succeeded. I happen to have access to one more system with 10th gen
> Intel CPUs (Lenovo laptop) and no luck booting Xen there too and same
> panic in the end.
>
> Back to my barebone I tried to match BIOS settings between working and
> non-working but it didn't help. Virtualization is enabled, both
> systems are from the same maker (Intel NUC barebones), both systems
> are EFI enabled/secure boot disabled (the later one doesn't seem to
> have an option to disable EFI boot and boot using MBR).
>
> Is this something known? Are there any boot options that can
> potentially fix this?
>
> Any help (including how to dump full Xen boot logs without serial)
> appreciated.

Yes, we're aware of it.  It is because modern Intel systems no longer
have a legacy PIT configured by default, and Xen depends on one being
present.  (The error message is misleading.  It's not checking for a
timer so much as checking that interrupts work, and it depends on the
legacy PIT "working" as the source of those interrupts.)

I'm working on a fix.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Dec 28 18:55:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 28 Dec 2020 18:55:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59577.104589 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktxgA-0007ZA-N4; Mon, 28 Dec 2020 18:55:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59577.104589; Mon, 28 Dec 2020 18:55:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ktxgA-0007Z3-Jy; Mon, 28 Dec 2020 18:55:38 +0000
Received: by outflank-mailman (input) for mailman id 59577;
 Mon, 28 Dec 2020 18:55:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktxg9-0007Yq-0Y; Mon, 28 Dec 2020 18:55:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktxg8-00078G-Pb; Mon, 28 Dec 2020 18:55:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ktxg8-0003gi-H3; Mon, 28 Dec 2020 18:55:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ktxg8-0005hO-GV; Mon, 28 Dec 2020 18:55:36 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UVV2i1H/I0VMr9a4T6MzgltUw6XZbIwMBZXlk8ZyEp4=; b=zDhe1iMqtlj0YrANdj1Qu9nUC6
	C91GD0xrwltHLVSp76p0qkLJdLUWUgZaEKdjdhn4NrYJSLMFLfpGRgzdCSD+tiEoyPX4ODVSJ968B
	F7VEiIOs8+a8ecrUNVQB/JvxcLE9H3L2fj1mjdPuhRSaWyroVOndmibC+2kxjbvzmiAQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157937-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157937: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-install:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-install:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=5c8fe583cce542aa0b84adc939ce85293de36e5e
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 28 Dec 2020 18:55:36 +0000

flight 157937 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157937/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  12 debian-install           fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl          12 debian-install fail in 157930 REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu  fail in 157930 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-seattle 10 host-ping-check-xen fail in 157930 pass in 157937
 test-arm64-arm64-xl-credit1   8 xen-boot         fail in 157930 pass in 157937
 test-arm64-arm64-libvirt-xsm  8 xen-boot         fail in 157930 pass in 157937
 test-arm64-arm64-xl-credit2   8 xen-boot         fail in 157930 pass in 157937
 test-arm64-arm64-xl          10 host-ping-check-xen        fail pass in 157930
 test-arm64-arm64-examine      8 reboot                     fail pass in 157930

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 157930 like 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                5c8fe583cce542aa0b84adc939ce85293de36e5e
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  149 days
Failing since        152366  2020-08-01 20:49:34 Z  148 days  261 attempts
Testing same since   157930  2020-12-28 00:10:08 Z    0 days    2 attempts

------------------------------------------------------------
4329 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 976090 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 00:34:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 00:34:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59637.104670 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ku2xv-0003oN-BG; Tue, 29 Dec 2020 00:34:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59637.104670; Tue, 29 Dec 2020 00:34:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ku2xv-0003oG-7a; Tue, 29 Dec 2020 00:34:19 +0000
Received: by outflank-mailman (input) for mailman id 59637;
 Tue, 29 Dec 2020 00:34:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ku2xt-0003o8-BC; Tue, 29 Dec 2020 00:34:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ku2xs-0004qs-W3; Tue, 29 Dec 2020 00:34:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ku2xs-0001MN-OC; Tue, 29 Dec 2020 00:34:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ku2xs-0003WQ-Nh; Tue, 29 Dec 2020 00:34:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PA5GS76lN6wtf5vEm9cRFS9dvlkc3O2ckAylVfo/JF8=; b=dR+xfitHhTeZlvYn8kT7sy5TG1
	LDRsQVnMDxB8s0+T3sw4VfGnk8en8xWrpn3OjiTpWHd2+ojRgkokrMQPc37TM28Fxjf1d/LLJX90D
	a1/1PEFZll+1RAaxj7Rnrnav7d7bttj6Rh2p7vXREXEvekox3VxuKh79e+FSYpYFleWE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157943-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157943: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a05f8ecd88f15273d033b6f044b850a8af84a5b8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 29 Dec 2020 00:34:16 +0000

flight 157943 qemu-mainline real [real]
flight 157948 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157943/
http://logs.test-lab.xenproject.org/osstest/logs/157948/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  130 days
Failing since        152659  2020-08-21 14:07:39 Z  129 days  267 attempts
Testing same since   157670  2020-12-18 13:57:58 Z   10 days   20 attempts

------------------------------------------------------------
323 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 79306 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 04:08:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 04:08:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59650.104691 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ku6Ig-0003KQ-Ay; Tue, 29 Dec 2020 04:07:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59650.104691; Tue, 29 Dec 2020 04:07:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ku6Ig-0003KI-5K; Tue, 29 Dec 2020 04:07:58 +0000
Received: by outflank-mailman (input) for mailman id 59650;
 Tue, 29 Dec 2020 04:07:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ku6If-0003KA-5d; Tue, 29 Dec 2020 04:07:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ku6Ie-0006da-Q1; Tue, 29 Dec 2020 04:07:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ku6Ie-0001kQ-F4; Tue, 29 Dec 2020 04:07:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ku6Ie-0007Jz-EZ; Tue, 29 Dec 2020 04:07:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=o2enxGtVfSiL99Qyh+bEXAY1N6dSwSzNQ4tloEWs8DA=; b=SeXb3NE1F52xfgHm0DOESRgj6U
	bVWumINjouNc2D1BgpNrYJ+AULigqN8wKCa5wQaASlca5LCmHKbP4H1qGd4Z5I/VxtrIGXq+WwEdy
	wMQvXL8UQtR7pPO/9Nvl5bj51i+ZeshrgXF0kj5P36KIC+ccxo2yxq3zFwi4nbuW8+LE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157945-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157945: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-install:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-install:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=5c8fe583cce542aa0b84adc939ce85293de36e5e
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 29 Dec 2020 04:07:56 +0000

flight 157945 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157945/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-arm64-arm64-xl          12 debian-install           fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen fail in 157937 REGR. vs. 152332
 test-arm64-arm64-xl-credit1  12 debian-install fail in 157937 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl       10 host-ping-check-xen fail in 157937 pass in 157945
 test-arm64-arm64-examine      8 reboot           fail in 157937 pass in 157945
 test-arm64-arm64-libvirt-xsm  8 xen-boot                   fail pass in 157937
 test-arm64-arm64-xl-credit2   8 xen-boot                   fail pass in 157937
 test-arm64-arm64-xl-seattle   8 xen-boot                   fail pass in 157937

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit2 11 leak-check/basis(11) fail in 157937 blocked in 152332
 test-arm64-arm64-xl-seattle 11 leak-check/basis(11) fail in 157937 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                5c8fe583cce542aa0b84adc939ce85293de36e5e
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  150 days
Failing since        152366  2020-08-01 20:49:34 Z  149 days  262 attempts
Testing same since   157930  2020-12-28 00:10:08 Z    1 days    3 attempts

------------------------------------------------------------
4329 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 976090 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 05:01:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 05:01:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59660.104712 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ku78j-0000QD-Py; Tue, 29 Dec 2020 05:01:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59660.104712; Tue, 29 Dec 2020 05:01:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ku78j-0000Q6-MY; Tue, 29 Dec 2020 05:01:45 +0000
Received: by outflank-mailman (input) for mailman id 59660;
 Tue, 29 Dec 2020 05:01:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=py5/=GB=gmail.com=obalaz85@srs-us1.protection.inumbo.net>)
 id 1ku78i-0000Q1-DQ
 for xen-devel@lists.xenproject.org; Tue, 29 Dec 2020 05:01:44 +0000
Received: from mail-pj1-f47.google.com (unknown [209.85.216.47])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dd259b3e-f815-4ff1-bac8-cb4abfc259a5;
 Tue, 29 Dec 2020 05:01:42 +0000 (UTC)
Received: by mail-pj1-f47.google.com with SMTP id m5so908923pjv.5
 for <xen-devel@lists.xenproject.org>; Mon, 28 Dec 2020 21:01:42 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dd259b3e-f815-4ff1-bac8-cb4abfc259a5
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=A5ftFIaw22N/OwluJNptEt3v5S8n6hXob6Vg4193/U4=;
        b=mlrT0F8LyTUpvCD3U+UedkmE3Np16pmz0w0tWEH11i6Nh24zENTe6/ZS/FlFemY9P7
         a/scmkhB20FCK0R/c0qmxmo2oZM0lfFYBz6lGDQShv5VSjWqhURIVPO1iuEVh9H7Gd1j
         QXwNBTTXemMl+695p79NtgHYutAGfH064gTtXQ4/YT1DpBCv4rTY0sHgtM/Uy8KYMiV7
         E6tEvZvkXNPfWQ54sRai/kegJ+n1QsjkKLsVUsDWMP9QbvX/oq/JnuNC/jHCM5nUMzhr
         ZXiE14CRKZvIG33TOyUuzgbb7Q2Cn8faIHiig3UC47i8OBzG6zVughMFjB5WjuE6kBLs
         JuIQ==
X-Gm-Message-State: AOAM530lOV3NIWaE6W21EgNch1i3s8LbUqCPg123V8kdKx0kBROsuvuB
	So1n5xkZAKMJzE1TrMCXqc5XhE336E/siXZXgSo=
X-Google-Smtp-Source: ABdhPJyFdD5NLyGV2pbJ38UOrplAnL05jzGOCLzYM+EpXhsZeP3hsg2ngT7Zri46h7/EQSh5lYSQB/Y4MWvrAeS2ii8=
X-Received: by 2002:a17:90a:c20b:: with SMTP id e11mr2313777pjt.43.1609218101768;
 Mon, 28 Dec 2020 21:01:41 -0800 (PST)
MIME-Version: 1.0
References: <CACmg6stNxXu3-SFdK_WtREbL2i7N522-DRRVr1ZXVOTqZ9j02Q@mail.gmail.com>
 <f3346d45-e8d5-5b37-a9df-f410af54469f@citrix.com>
In-Reply-To: <f3346d45-e8d5-5b37-a9df-f410af54469f@citrix.com>
From: Ondrej Balaz <blami@blami.net>
Date: Tue, 29 Dec 2020 14:01:31 +0900
Message-ID: <CACmg6stadwW8ZHm118QtQRKP=kopHULm3QQLsS6-AdVF9wt3RQ@mail.gmail.com>
Subject: Re: [BUG] Unable to boot Xen 4.11 (shipped with Ubuntu) on Intel 10i3 CPU
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org
Content-Type: multipart/alternative; boundary="0000000000005276c305b79349fe"

--0000000000005276c305b79349fe
Content-Type: text/plain; charset="UTF-8"

Thanks for the explanation! I'll stick with older HW until the fix is out.

On Tue, 29 Dec 2020 at 03:49, Andrew Cooper <andrew.cooper3@citrix.com>
wrote:

> On 28/12/2020 18:08, Ondrej Balaz wrote:
> > Hi,
> > I recently updated my home server running Ubuntu 20.04 (Focal) with
> > the Xen 4.11 hypervisor (installed from Ubuntu packages). Before the
> > upgrade everything was running fine, and both dom0 and all domUs
> > booted fine. The upgrade was literally taking the hard drive from a
> > 6th-gen Intel CPU system to a 10th-gen one and redoing the EFI
> > entries from an Ubuntu live USB.
> >
> > After doing so, standalone Ubuntu (without Xen multiboot) boots just
> > fine, but Ubuntu as dom0 under Xen fails pretty early on with the
> > following error (hand-copied from phone snaps I took with loglvl=all,
> > as this is a barebones system without a serial port and I don't know
> > how to dump full logs after a panic):
> >
> > (XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[01])
> > (XEN) IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
> > (XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
> > (XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
> > (XEN) ACPI: IRQ0 used by override.
> > (XEN) ACPI: IRQ2 used by override
> > (XEN) ACPI: IRQ9 used by override
> > (XEN) Enabling APIC mode: Flat.  Using 1 I/O APICs
> > (XEN) ACPI: HPET id: 0x8086a201 base: 0xfed00000
> > (XEN) ERST table was not found
> > (XEN) ACPI: BGRT: invalidating v1 image at 0x7d7c1018
> > (XEN) Using ACPI (MADT) for SMP configuration information
> > ...
> > (XEN) Switched to APIC driver x2apic_cluster
> > ...
> > (XEN) Initing memory sharing.
> > (XEN) alt table ffff82d08042b840 -> ffff82d08042d7ce
> > ...
> > (XEN) Intel VT-d iommu 0 supported page sizes: 4kB, 2MB, 1GB.
> > (XEN) Intel VT-d iommu 1 supported page sizes: 4kB, 2MB, 1GB.
> > (XEN) Intel VT-d Snoop Control not enabled
> > (XEN) Intel VT-d Dom0 DMA Passthrough not enabled
> > (XEN) Intel VT-d Queued Invalidation enabled
> > (XEN) Intel VT-d Interrupt Remapping enabled
> > (XEN) Intel VT-d Posted Interrupt not enabled
> > (XEN) Intel VT-d Shared EPT tables enabled
> > (XEN) I/O virtualisation enabled
> > (XEN)  - Dom0 mode: Relaxed
> > (XEN) Interrupt remapping enabled
> > (XEN) nr_sockets: 1
> > (XEN) Enabled directed EOI with ioapic_ack_old on!
> > (XEN) ENABLING IO-APIC IRQs
> > (XEN)  -> Using old ACK method
> > (XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
> > (XEN) ..MP-BIOS bug: 8254 timer not connected to IO-APIC
> > (XEN) ...trying to set up timer (IRQ0) through the 8259A ... failed.
> > (XEN) ...trying to set up timer as Virtual Wire IRQ... failed.
> > (XEN) ...trying to set up timer as ExtINT IRQ...spurious 8259A
> > interrupt IRQ7.
> > (XEN) CPU0: No irq handler for vector e7 (IRQ -8)
> > (XEN) IRQ7 a=0001[0001,0000] v=60[ffffffff] t=IO-APIC-edge s=00000002
> > (XEN)  failed :(.
> > (XEN)
> > (XEN) *******************************
> > (XEN) Panic on CPU 0:
> > (XEN) IO-APIC + timer doesn't work!  Boot with apic_verbosity=debug
> > and send report.  Then try booting with the `noapic` option
> > (XEN) *******************************
> >
> > I suspected that migrating the drive could have caused the problem,
> > so I took an empty SSD, installed a fresh Ubuntu, and added the Xen
> > hypervisor; after reboot I ended up with the same panic. I tried
> > booting with noapic (which gave a general page fault) and iommu=0
> > (which said it needs iommu=required/force). Booting this exact fresh
> > install on an older (6th-gen) Intel CPU succeeded. I also have access
> > to one more system with a 10th-gen Intel CPU (a Lenovo laptop); Xen
> > fails to boot there too, ending in the same panic.
> >
> > Back on my barebones system, I tried to match the BIOS settings
> > between the working and non-working machines, but it didn't help.
> > Virtualization is enabled, both systems are from the same maker
> > (Intel NUC barebones), and both are EFI-enabled with secure boot
> > disabled (the later one doesn't seem to have an option to disable
> > EFI boot and boot via MBR).
> >
> > Is this something known? Are there any boot options that can
> > potentially fix this?
> >
> > Any help (including how to dump full Xen boot logs without serial)
> > appreciated.
>
> Yes, we're aware of it.  It happens because modern Intel systems no
> longer have a legacy PIT configured by default, and Xen depends on it.
> (The error message is misleading.  It's not so much checking for a
> timer as checking that interrupts work, and it depends on the legacy
> PIT "working" as the source of those interrupts.)
>
> I'm working on a fix.
>
> ~Andrew
>

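(Editor's note: the loglvl=all and apic_verbosity=debug options mentioned in the thread are Xen command-line options; on Ubuntu they are typically set through GRUB. A minimal sketch, assuming Ubuntu's stock GRUB layout — option names are taken from Xen's command-line documentation, so verify them against your Xen version:)

```shell
# /etc/default/grub -- sketch only; verify option names against the
# xen-command-line documentation for your Xen version.
#   loglvl=all / guest_loglvl=all : maximum console log verbosity
#   apic_verbosity=debug          : the extra APIC detail the panic asks for
#   noreboot                      : keep the panic output on screen
#   console=vga                   : log to the VGA console (no serial port)
GRUB_CMDLINE_XEN_DEFAULT="loglvl=all guest_loglvl=all apic_verbosity=debug noreboot console=vga"

# Afterwards, regenerate the GRUB config and reboot:
#   sudo update-grub
```

Without a serial port this still only gets the output onto the screen; capturing it beyond phone photos would need something like a serial-over-LAN/AMT console, which not all NUCs provide.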

--0000000000005276c305b79349fe--


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 08:20:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 08:20:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59673.104730 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuAEX-0000dA-Ar; Tue, 29 Dec 2020 08:19:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59673.104730; Tue, 29 Dec 2020 08:19:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuAEX-0000d3-6r; Tue, 29 Dec 2020 08:19:57 +0000
Received: by outflank-mailman (input) for mailman id 59673;
 Tue, 29 Dec 2020 08:19:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dLv=GB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kuAEV-0000cv-RM
 for xen-devel@lists.xenproject.org; Tue, 29 Dec 2020 08:19:56 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fc252133-797d-4141-a5fd-a29eed8b0cdd;
 Tue, 29 Dec 2020 08:19:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fc252133-797d-4141-a5fd-a29eed8b0cdd
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609229992;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=p8lOTV0CXKq0eKIcNZnVSoVYI5dY8LQVCykj/8NncIo=;
  b=KBYZW54QNeYPpr6vbNcFmPXIXxvWSQH/gduyq/jdA6BLZFqQ6RZaHl5S
   9b84I65EMhrf8zZbaYbSmS951dmki52FJ0CUZV9ouydp+sXZKDWGzP/d+
   yoaeFMPmWSWwAhKJU/S5kzjBvJzeg+hfV71WGBdrumHrxLefZIgZF2+XI
   Q=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: YrJrCgl4JcdgMGkukCWodkpyjFpAGdnZdrfljVwa4Z799zX4uZmjuhPTvIE9olV5mQxCSs2kDN
 CvnWZjFvBfomWwBd6Ja68Fp+375c3Mmt2StqgY6zQ6kBkV/2zDn04hyY2Urq5i2AKC4U/ROapb
 /+GdOwAqHr0Yx7LnFXKzLKDf34vg5jOTjtkvgmI5Qxsrbud2OSjz/rhPff/jMGBN0L18kAstgz
 EclX/qIXEZSQLsORXx4ppjAx1E72xfqryVD+1iPmU6l/jPEuDPaRUVWdMplaDJ1SD95IyPrw2r
 9kY=
X-SBRS: 5.2
X-MesageID: 34112817
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,457,1599537600"; 
   d="scan'208";a="34112817"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PD7cq5cYTIK/rSX3MCQDZrf2HKrhBDZpddvB5bHwVk+Esq3Xe5InnEqdJbW/jelaJkGEve3n3RodObBJyu4NSD9LntkUTuZs3QKY0kbbFY4UnBFENTE1KGA9dhRgw8uEqAgolbOPfgKZZ8CP6/8gv2Sz0K3r2GpkhxmH7C3BXsPwLKYMXsh/1Gv62yd/UDlT3kfUeqmks74jWnKNfnIOKVs9vPWLPnooU1bTrvsN/vWVKPIlNwcASinKIMcc0DvwMqw4YJ4uClohBAEBYdUIVovT6h4A23mUwEE07ZtK40XJGaJVVo8LVABRyWhi3/ZT9NGCn+ota9lFGvTjaMBYiw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XFtCWXusGSZLwegGok4/Xca75EBq+gUFJNUd0+a5ZA0=;
 b=Go7n7QW8/20k+vqqj4KWZxpxotTUzJcL4HzVkmrAijlvGCqFbXbl3LzO9aOYQUowAkJESuJ+6rpjJaK9Exhg6yjrG3woz1EZeobOsuZvFQgcxakmY4F5dAYCqwKDzpJnqLRomeGjsKLUlvM8L/2aBOLrU45Kql7dQVrvGgwBsJOXqSxlwTS4ly49Ai/+OGs3Wo12OgyHAtx4nT5YhuZJvMp2dEuWqXK6vomLp/5wCilUbrJ+zKC2U9o9IknVLJzVnAZnSRLts/wgp68v8zAFfYabwKBBSngO+VrPrRGsbM5CwaoPVwPp0qcGdQnCHta9ydPTXnn6UWmtDEI+J8yLkw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XFtCWXusGSZLwegGok4/Xca75EBq+gUFJNUd0+a5ZA0=;
 b=I8tfH4d3EC4Vu9WKbghYGVuN3hI2kBa/sriEa/BIWCFQ/yi1RDmGhZQBVzVDEyrmpqwvR608nZVNjn79zD5U3rGalcjURx+7ImELL5baMsNAkX1XHfcHDx3bXjx56EsM09IaIVpZrlHUQyETbCnAeuPbyYqmsydI+FfQU0jjSkM=
Date: Tue, 29 Dec 2020 09:19:43 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: tosher 1 <akm2tosher@yahoo.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: PVH mode PCI passthrough status
Message-ID: <20201229081943.ifaiwrqyj5ojwufn@Air-de-Roger>
References: <943136031.5051796.1609179068383.ref@mail.yahoo.com>
 <943136031.5051796.1609179068383@mail.yahoo.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <943136031.5051796.1609179068383@mail.yahoo.com>
X-ClientProxiedBy: MR2P264CA0058.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:31::22) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6232fe5d-5945-4611-4c4b-08d8abd28265
X-MS-TrafficTypeDiagnostic: DM6PR03MB4393:
X-Microsoft-Antispam-PRVS: <DM6PR03MB43935A40150C4FD4453A576F8FD80@DM6PR03MB4393.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: k7x7JvCwGEsdI1zbE+l3MmoTjrKPrFbR8xijj8MFLaCAC41hrS29GEJ6gSzt9hpcI6lJSlDWlguR8OB+WxvB3a/40CnilCBlzcxQlKzNlGY87vnM3UX4FBrirJWDxYLHrcbM9/tcSXm5TJEe1nqLm6Mu0cuCkT5V1oPkgjEEuuiaGwY+mg4meisaHnUPWf2uc22vn660wpSKTA8rLV06WNZN/QmrCikPKDK88yIR89ekqfD2WL1Gtr9LCsAMuyuSX0Q9h6D+q91L8n25EadHM2tc+i6onAAyZoaCFm4TRt2nWzNQml/FSyVTXI4/seHmxE0aY2m9VHbpsZ6epbQraQ+W9L1xavg1JDxebYE0JmbJBKgUrkjF+we+/+3MidvKcRB4sMG6a/a0cnd6sDALalGepghbR1RsEbYpoerUMA6PL1rf3PKwXBD5wYWpu8VF65mwgm+XEuNALzRm9jqwug==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(346002)(396003)(376002)(136003)(366004)(39860400002)(6666004)(4326008)(66946007)(8936002)(6496006)(5660300002)(66556008)(86362001)(478600001)(2906002)(956004)(1076003)(186003)(66476007)(4744005)(8676002)(6486002)(85182001)(316002)(966005)(6916009)(9686003)(33716001)(16526019)(83380400001)(26005);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?OGJaUVQ0MGZmQ2tyTmtFbm1kVHBDU2V6N1J4SHhCUEZBeXRrdGtUV2xhYk56?=
 =?utf-8?B?WUhCLzlib2JaVDRCSSt4dWRzQjJhWHRod1lzN082Z3VxbjRpNG5TUCswYWQv?=
 =?utf-8?B?blNjTFJYMlRGdU5Ta2JyS3poUTlFMW9ldlZTS1I1c2prV2NXSHlqdkNYanZ4?=
 =?utf-8?B?SjR5Q2Y2R3ZxL0FTeUdWQ3hUWXhxZnhRY28rTGx5b3Z2TlMzN3piM0Q5dmU2?=
 =?utf-8?B?RkozK1FSR2VQdnphVFlrMmlsdDdXVnJGM1lSRFQ5YTdkY2owMFNBM1NGWUNU?=
 =?utf-8?B?b0ZpTk52V21mbWZQWkFMWWNmUUNYQjFyNmZtcjMyeTBCMFkwU1o3Wk9tcUJu?=
 =?utf-8?B?WDBtK1dqNmVsc0lpdFBiRm5DaFRtVTBCUmI1bmxvbGtDMEt2aW4vR0lreHVE?=
 =?utf-8?B?anptdXJtalJhaUxCZEpHN3J5eC9CeFlmN0tJRlVHZW9LNkFXN1VtL3Qxc0tN?=
 =?utf-8?B?Yk1DYzBYOW9neENYMXZBdlN2WS84ZUZCdXpZcXhuSkoyK0RpS04xV0didThO?=
 =?utf-8?B?eXFtdjBybkxCSTR1NExOU1NLK0xIQVpiS21zOUp0bVRqRXVFM0o5dXFXdUpo?=
 =?utf-8?B?MEpMZXpnR1NQK0lsMUc2bzRsR05ncHlrRkg1Y1ViRHpiMmdXa2VONEltZEgy?=
 =?utf-8?B?VXU2a3dtR1Q2QkpWQ0k3azRQcFlKMWU5SnFXNExXYUdkaU12R2JQV2VsbThh?=
 =?utf-8?B?OXY4aHlEQWZ3RHdMcmxKelpWV2hlbmR6UUVkTTlOdlNqTURjUEpSRVJ6NVNm?=
 =?utf-8?B?cWk5aW1ENTQ2K3orajBDaHFoWHdXcExMcDRqeXVNT1NOTFlWYUlVSzJWc2lM?=
 =?utf-8?B?S2tYS2lFZnAxOUM2SXVyRkpMMGJZL3pvd2pWL05RYy9iV1pNQnB2UmVHdkZt?=
 =?utf-8?B?S1YzMHMrYVlhR3ZZOGFpTDVidjRIazhFcUtHRXhqVWNvZ1B2STc0UFdrdE01?=
 =?utf-8?B?SFdrOW1VRVAwcTk0TlA1Y3M3enFwYkw2ZHoxSllVa2FpVk9JUjNUWmxLclNm?=
 =?utf-8?B?YTJDUFJvUEZoWXh0aE9FQThZY01MUzJtb0Y1dUUvVXNMZ0tZdTBrMnIwOXpR?=
 =?utf-8?B?T05jVnBza1k5aExNQ1pST3cxQXZhR1NnL2dkUEtUR1lpN0FIblpjS0JCcWty?=
 =?utf-8?B?N3BnZHBEc21xVktCZEhENVdkOE1tYjJOc2R2Y2hob0ExQVd3UmFQd2ZjQ2Vk?=
 =?utf-8?B?RUxKaGpuUkNKWlkzemdLd3phSWpIMlBhc2dyN0JCMEgwWjQrQ253NE1INk5o?=
 =?utf-8?B?dmV2VEw3ZGlxTG01MCt1VEljSFRmOXlwMDRINTFIMStsSHl6QmZMMFFSZFJI?=
 =?utf-8?Q?ZXvv3VmOdSZiKZfhAJukndXODYgVJI8s0G?=
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Dec 2020 08:19:49.1217
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: 6232fe5d-5945-4611-4c4b-08d8abd28265
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: BTqRTWtAcvvsjqUG1/RsnmwO+izx8UetCbAsn3zPEJwT5332G1mhj/LoHOFLqxwPcjgXner7KwzfArmpr/t9eQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4393
X-OriginatorOrg: citrix.com

On Mon, Dec 28, 2020 at 06:11:08PM +0000, tosher 1 wrote:
> Hi,
> 
> As of Xen 4.10, PCI passthrough support was not available in PVH mode. I was wondering if the PCI passthrough support was added in a later version.
> 
> It would be great to know the latest status of the PCI passthrough support for the Xen PVM mode. Please let me know if you have any updates on this.

I think you meant PVH mode in the sentence above instead of PVM?

Sadly, the status is still the same: there's no support for PCI
passthrough on PVH domUs yet.

The Arm folks are working on using vPCI for domUs, which could easily
be picked up by x86 once ready. There's also the option to import
xenpt [0] from Paul Durrant and use it with PVH, but that will likely
require some work.

Roger.

[0] http://xenbits.xen.org/gitweb/?p=people/pauldu/xenpt.git;a=summary
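(Editor's note: for context, the guest-config side of what's being discussed is the `pci` option in the xl domain configuration, which today only takes effect for HVM and PV guest types. A hedged sketch — the name, BDF, and disk path are placeholders:)

```shell
# xl domain configuration sketch (placeholder values throughout).
# The 'pci' list below works for type = "hvm" (or PV) guests; with
# type = "pvh", PCI passthrough is not yet supported, per this thread.
name   = "guest1"
type   = "hvm"
memory = 2048
vcpus  = 2
disk   = [ "format=raw, vdev=xvda, target=/dev/vg0/guest1" ]
pci    = [ "0000:03:00.0" ]   # host BDF of the device to pass through
```

Changing `type` to "pvh" with the same `pci` line is exactly the combination that is not yet available.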


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 09:17:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 09:17:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59748.104760 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuB8Q-0006tO-2m; Tue, 29 Dec 2020 09:17:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59748.104760; Tue, 29 Dec 2020 09:17:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuB8P-0006tH-V8; Tue, 29 Dec 2020 09:17:41 +0000
Received: by outflank-mailman (input) for mailman id 59748;
 Tue, 29 Dec 2020 09:17:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dLv=GB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kuB8P-0006tC-2n
 for xen-devel@lists.xenproject.org; Tue, 29 Dec 2020 09:17:41 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7d346378-60a4-4b2e-a002-bb66af96831c;
 Tue, 29 Dec 2020 09:17:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7d346378-60a4-4b2e-a002-bb66af96831c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609233458;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=AJ6qCA6cbn+7lBbf20jFfVh9QHi2PbXf4y4dY20QiZo=;
  b=cpj+GqoTRC+GsxQvG7TX4mNXKxX+o9YMiPNueY7JoiiqwLnQ3us+voTg
   1Ypqo517dw3jCmWbhDulMjNaEmn19WSy0L8RPtqhP0asOkVQVQIqDxieX
   qEZjzbPJGF4ZmR8DEe/T0gVMX1rsNElBtj156K3UZGBbLt3m0J916nKOT
   c=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: Idcq7RGp/4yq7/YwbY9pR4ahURZtjzZ+QoalFVxOIK4LY8XO+efF+p7x9qWozayIarOIszCnPo
 uI7oDDe2EWHTB6UeAR8ee8XjjKXkbLuSuy8dmgGLPU+TGHlKfFhzadeaWWl14twaUtHA2LMdn4
 SJ/XcmUS4/CPy94mMblY7ez0uc2k7eUG1m9a0FlfOLk3ZoEnqX9lP3cDJRioJKXp4Ol/F+SgIM
 2vrXuXgxatAwZXgVmCKos2weEUfeigGV6qfaSVc+Q6i7U5q1yI4PbYPXttuGiqqyFZE8vLCdjH
 oDQ=
X-SBRS: 5.2
X-MesageID: 34078283
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,457,1599537600"; 
   d="scan'208";a="34078283"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=e6tpnHNUVg8qLxK+69j3td7eEAWzpCR4pqeaM2JLpOR4poux9Xcf8Rz0PxboNLosSZx448opQn6HRaqJhADj42z/JBJi8gqr6z3XxYNtp8fGDq2ozbZehSPsEuB41iUCLgEqPc7ebBuBivA5MYhoY0SOB6I4zaQ+TCP/8XhkB5OSv4gBL0iMhi1qtf3Apx1RyLGg6lnbCTyfrabfw0/AwZdQ98lDCk6Rsp1kzMLJQwTWoXaA1p3yvk7TCnjFfpy/pLYq0/uM6pSvvWnZlYqCnMyaoLl1BXkVlnaFkGsg4maylHvar7BlsdpoBeBphHyFaI8lMsMByN5O06y+0KNevQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4hUbVbLbTkcs7vt7aA8JcC4oozDWj6I4QSUU+RX7zV4=;
 b=lySHj4TuBkc/fsQYS1fJEIVIGCC8n6cRsExDgVh63eNjBgz3q162P0j6im8GqHSfyX+cXACpdhL04nmBz69ygvot366lmqVkZRjC8YzPlgkTUu7EMhFtW5nmKz+9dKKY61oy17lOZvp4tgOa0uvq2JDVi5pBQLebytcalP2eMwLY5bLjkGmYVh3z8TnJmEnoiOkBPGC7j1nwD6iCctfrop/kj7+mD5MKtMpOIRuHZbOW8OU/uc5n5Gu4KKpJ1Zv1pceP/uvAQtUZzTZSMUoge0MFvGWOQdS2/SQLwjKhHsLyPYoo0Zw33s1uGYqmi9d++L6VMKcbm2QW2Z7YXLm4Ag==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4hUbVbLbTkcs7vt7aA8JcC4oozDWj6I4QSUU+RX7zV4=;
 b=E/xiOMPTMwh0C4ALl4oiz2qEb7zeesMiHuemt63lOl0gLyrCtgV/JeNJsRbHFnvqg4Kh656Mmv/INe98jvFub2rM2dikm9/26p3if6Yuv1jWUjwL0WoYLYSYmtZDYEsptmSuMpw1yEJrxyvNTBabxqn86WpDXxcwyjBJ7Qc+o7E=
Date: Tue, 29 Dec 2020 10:17:30 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Rich Persaud <persaur@gmail.com>
CC: Jean-Philippe Ouellet <jpo@vt.edu>, Christopher Clark
	<christopher.w.clark@gmail.com>, openxt <openxt@googlegroups.com>,
	<xen-devel@lists.xenproject.org>, Oleksandr Tyshchenko <olekstysh@gmail.com>,
	Julien Grall <jgrall@amazon.com>, James McKenzie <james@bromium.com>, Andrew
 Cooper <Andrew.Cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.co.uk>
Subject: Re: [openxt-dev] VirtIO-Argo initial development proposal
Message-ID: <20201229091730.owgpdeekb7pcex7t@Air-de-Roger>
References: <DBCC8190-7228-483E-AE8A-09880B28F516@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <DBCC8190-7228-483E-AE8A-09880B28F516@gmail.com>
X-ClientProxiedBy: LO2P265CA0440.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:e::20) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 05a938b9-4165-4555-c912-08d8abda9421
X-MS-TrafficTypeDiagnostic: DM6PR03MB4298:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4298C6C666FE3FF4E9E6E0F78FD80@DM6PR03MB4298.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Dec 2020 09:17:34.7849
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: 05a938b9-4165-4555-c912-08d8abda9421
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ZZ2RjlPkb97k1ZRXAjwUy5GPXbTGb6/F7Ds9vy0ovl3W5CXMNHwOMsc/HkESUIKZrBJbBVH+4y+d46tW57a01w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4298
X-OriginatorOrg: citrix.com

On Wed, Dec 23, 2020 at 04:32:01PM -0500, Rich Persaud wrote:
> On Dec 17, 2020, at 07:13, Jean-Philippe Ouellet <jpo@vt.edu> wrote:
> > On Wed, Dec 16, 2020 at 2:37 PM Christopher Clark
> > <christopher.w.clark@gmail.com> wrote:
> >> Hi all,
> >> 
> >> I have written a page for the OpenXT wiki describing a proposal for
> >> initial development towards the VirtIO-Argo transport driver, and the
> >> related system components to support it, destined for OpenXT and
> >> upstream projects:
> >> 
> >> https://openxt.atlassian.net/wiki/spaces/~cclark/pages/1696169985/VirtIO-Argo+Development+Phase+1

Thanks for the detailed document, I've taken a look and there's indeed
a lot of work listed there :). I have some suggestions and
questions.

Overall I think it would be easier for VirtIO to accept a new transport
if it's not tied to a specific hypervisor. Argo is currently
implemented using hypercalls, which is a mechanism specific to Xen.
IMO it might be easier to start by having an Argo interface using
MSRs, which all hypervisors can implement, and then base the VirtIO
implementation on top of that interface. It could be presented as a
hypervisor-agnostic mediated interface for inter-domain communication
or some such.
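To make the suggestion concrete, here is a minimal sketch of what such a hypervisor-agnostic, MSR-based registration could look like. Everything below is hypothetical: the MSR index and the bit layout do not exist in Xen or in any VirtIO spec; they only illustrate packing an Argo endpoint into a single 64-bit MSR write.

```c
#include <stdint.h>

/* Hypothetical MSR index for registering an Argo ring; not a real
 * interface, chosen from the synthetic-MSR range as an illustration. */
#define ARGO_MSR_RING_REGISTER  0x40000100u

/* Hypothetical packing of an Argo endpoint into one 64-bit MSR value:
 * bits 63:32 = destination domain id, bits 31:0 = Argo port. */
static inline uint64_t argo_msr_pack(uint32_t domid, uint32_t port)
{
    return ((uint64_t)domid << 32) | port;
}

static inline uint32_t argo_msr_domid(uint64_t v) { return (uint32_t)(v >> 32); }
static inline uint32_t argo_msr_port(uint64_t v)  { return (uint32_t)v; }
```

A guest would then issue a single wrmsr(ARGO_MSR_RING_REGISTER, argo_msr_pack(domid, port)) and the hypervisor, whichever one it is, would intercept that MSR and wire up the ring.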

That links to a related question: has any of this been discussed with
the VirtIO folks, either at OASIS or in the Linux kernel?

The document mentions: "Destination: mainline Linux kernel, via the
Xen community" regarding the upstreamability of the VirtIO-Argo
transport driver, but I think this would have to go through the VirtIO
maintainers and not the Xen ones, hence you might want their feedback
quite early to make sure they are OK with the approach taken, and in
turn this might also require OASIS to agree to have a new transport
documented.

> >> 
> >> Please review ahead of tomorrow's OpenXT Community Call.
> >> 
> >> I would draw your attention to the Comparison of Argo interface options section:
> >> 
> >> https://openxt.atlassian.net/wiki/spaces/~cclark/pages/1696169985/VirtIO-Argo+Development+Phase+1#Comparison-of-Argo-interface-options
> >> 
> >> where further input to the table would be valuable;
> >> and would also appreciate input on the IOREQ project section:
> >> 
> >> https://openxt.atlassian.net/wiki/spaces/~cclark/pages/1696169985/VirtIO-Argo+Development+Phase+1#Project:-IOREQ-for-VirtIO-Argo
> >> 
> >> in particular, whether an IOREQ implementation to support the
> >> provision of devices to the frontends can replace the need for any
> >> userspace software to interact with an Argo kernel interface for the
> >> VirtIO-Argo implementation.
> >> 
> >> thanks,
> >> Christopher
> > 
> > Hi,
> > 
> > Really excited to see this happening, and disappointed that I'm not
> > able to contribute at this time. I don't think I'll be able to join
> > the call, but wanted to share some initial thoughts from my
> > middle-of-the-night review anyway.
> > 
> > Super rough notes in raw unedited notes-to-self form:
> > 
> > main point of feedback is: I love the desire to get a non-shared-mem
> > transport backend for virtio standardized. It moves us closer to an
> > HMX-only world. BUT: virtio is relevant to many hypervisors beyond
> > Xen, not all of which have the same views on how policy enforcement
> > should be done, namely some have a preference for capability-oriented
> > models over type-enforcement / MAC models. It would be nice if any
> > labeling encoded into the actual specs / guest-boundary protocols
> > would be strictly a mechanism, and be policy-agnostic, in particular
> > not making implicit assumptions about XSM / SELinux / similar. I don't
> > have specific suggestions at this point, but would love to discuss.
> > 
> > thoughts on how to handle device enumeration? hotplug notifications?
> > - can't rely on xenstore
> > - need some internal argo messaging for this?
> > - name service w/ well-known names? starts to look like xenstore
> > pretty quickly...
> > - granular disaggregation of backend device-model providers desirable

I'm also curious about this part. I was assuming this would be
done using some kind of Argo messages, but there's no mention of it in
the document. It would be nice to elaborate on this a little more
there.
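For discussion purposes, enumeration and hotplug over Argo itself might use a small message on a well-known port. The wire format below is entirely hypothetical (not part of Argo or any VirtIO spec); it just shows the minimum a frontend would need to learn about a device without xenstore.

```c
#include <stdint.h>

/* Hypothetical in-band enumeration/hotplug messages, carried over a
 * well-known Argo port instead of xenstore. Illustration only. */

enum argo_enum_msg_type {
    ARGO_ENUM_DEVICE_ADDED   = 1,  /* backend offers a new virtio device */
    ARGO_ENUM_DEVICE_REMOVED = 2,  /* backend withdraws a device */
    ARGO_ENUM_QUERY          = 3,  /* frontend requests the current device list */
};

struct argo_enum_msg {
    uint32_t type;         /* one of enum argo_enum_msg_type */
    uint32_t virtio_devid; /* virtio device id, e.g. 1 = net, 2 = block */
    uint32_t backend_dom;  /* domain id serving the device */
    uint32_t argo_port;    /* Argo port to contact the backend on */
};
```

Note that once a name service with well-known ports exists, the "starts to look like xenstore" concern above applies; the open design question is how much of that service can stay granular and per-backend rather than centralized.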

> > how does resource accounting work? each side pays for their own delivery ring?
> > - init in already-guest-mapped mem & simply register?
> > - how does it compare to grant tables?
> >  - do you need to go through linux driver to alloc (e.g. xengntalloc)
> > or has way to share arbitrary otherwise not-special userspace pages
> > (e.g. u2mfn, with all its issues (pinning, reloc, etc.))?
> > 
> > ioreq is tangled with grant refs, evt chans, generic vmexit
> > dispatcher, instruction decoder, etc. none of which seems desirable if
> > trying to move towards world with strictly safer guest interfaces
> > exposed (e.g. HMX-only)

I think this needs Christopher's clarification, but it's my
understanding that the Argo transport wouldn't need IOREQs at all,
since all data exchange would be done using the Argo interfaces;
there would be no MMIO emulation or anything similar. IOREQs are
mentioned because the Arm folks are working on using them to
enable virtio-mmio on Xen on Arm.

From my reading of the document, it seems Argo VirtIO would still rely
on event channels; IMO it would be better if interrupts were instead
delivered using a native mechanism, something like MSI delivery
using a destination APIC ID, vector, delivery mode and trigger mode.
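For reference, the x86 MSI address/data encoding that such native delivery would reuse can be sketched as below. The field layout follows the architected MSI format (address bits 31:20 fixed at 0xFEE, destination APIC ID in bits 19:12; vector in data bits 7:0, delivery mode in bits 10:8, trigger mode in bit 15); the helper names themselves are mine.

```c
#include <stdint.h>

/* Architected x86 MSI address: 0xFEExxxxx with the destination
 * APIC ID in bits 19:12 (physical destination mode assumed). */
static inline uint32_t msi_addr(uint8_t dest_apic_id)
{
    return 0xFEE00000u | ((uint32_t)dest_apic_id << 12);
}

/* Architected x86 MSI data: vector in bits 7:0, delivery mode in
 * bits 10:8 (0 = fixed), trigger mode in bit 15 (0 = edge). */
static inline uint32_t msi_data(uint8_t vector, uint8_t delivery_mode,
                                uint8_t trigger_mode)
{
    return (uint32_t)vector |
           ((uint32_t)(delivery_mode & 7) << 8) |
           ((uint32_t)(trigger_mode & 1) << 15);
}
```

Delivering Argo notifications this way would let the guest program a (destination, vector) pair directly, instead of binding an event channel and routing it through the Xen-specific upcall path.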

Roger.


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 10:12:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 10:12:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59755.104772 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuBzY-0003hu-CA; Tue, 29 Dec 2020 10:12:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59755.104772; Tue, 29 Dec 2020 10:12:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuBzY-0003hn-8r; Tue, 29 Dec 2020 10:12:36 +0000
Received: by outflank-mailman (input) for mailman id 59755;
 Tue, 29 Dec 2020 10:12:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuBzW-0003hf-DP; Tue, 29 Dec 2020 10:12:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuBzW-00067Z-3g; Tue, 29 Dec 2020 10:12:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuBzV-0006n7-OE; Tue, 29 Dec 2020 10:12:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kuBzV-000498-Nm; Tue, 29 Dec 2020 10:12:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gXqv9Km3gGwxGBMNCSlHx8/io9mIWZqHsnud+VwnRMk=; b=C1jOioVjGxMsUWYf/c+DGDEmf0
	SKrafb6wuLS9gCvR4Y2NNQCf3wwPlix9WIkSWzyRzoYixX+iW2b20vqVQuetGWIlBhRQ/j4Yxmbw5
	jfmhOAICWbRFvzBEcTJUeVvMrgO+DwkdFouAbvYhXmqffThDWFwDExeOydDffJd5JyTY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157949-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157949: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a05f8ecd88f15273d033b6f044b850a8af84a5b8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 29 Dec 2020 10:12:33 +0000

flight 157949 qemu-mainline real [real]
flight 157956 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157949/
http://logs.test-lab.xenproject.org/osstest/logs/157956/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  131 days
Failing since        152659  2020-08-21 14:07:39 Z  129 days  268 attempts
Testing same since   157670  2020-12-18 13:57:58 Z   10 days   21 attempts

------------------------------------------------------------
323 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 79306 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 10:51:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 10:51:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59779.104828 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuCb9-0007V7-EN; Tue, 29 Dec 2020 10:51:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59779.104828; Tue, 29 Dec 2020 10:51:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuCb9-0007Uz-Ao; Tue, 29 Dec 2020 10:51:27 +0000
Received: by outflank-mailman (input) for mailman id 59779;
 Tue, 29 Dec 2020 10:51:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dLv=GB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kuCb8-0007Ug-45
 for xen-devel@lists.xenproject.org; Tue, 29 Dec 2020 10:51:26 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8bb1bf53-6a7a-476a-8b59-17376edd97c2;
 Tue, 29 Dec 2020 10:51:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8bb1bf53-6a7a-476a-8b59-17376edd97c2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609239084;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=fDTtDBAqlmv4prZld/cMmwwyDEnl9C5kbUHypT24eAA=;
  b=HCGN5bT0HAfuZ4XvO9Y1m6ZMN2my3xklYpfsJTc4HJZlpfCBGH4g++0a
   qkVMnqj1BG1oAWvwjE7anvEuvlsYPf66sLm3HEjlPdgN6zGNwjyauZe1z
   oaDrt4v1kj6UFTigL+XpMRt0Ul1WWY5Pc8fbUFuGlYyYo/3ZBOW1GgdrG
   I=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 35335118
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,457,1599537600"; 
   d="scan'208";a="35335118"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qG8/AL35yjMCHqR4iyWD6TLIat3zqRylgKO2oXOs8sg=;
 b=hEzxRCPhFcTomiNiO+ioVRxC7qZ1apyH0H+kcd5sMeQ+1LmaRXximvmyWJ4ZQE5Yo9MqAeY4dUwPGeDFWVzJKfx+NsScaiHBMOTDdruMH2fYkmJc7HYta/sWjS7sFngwv54ioWsABvvb8A3U7o7ykR0yKXVnXeD+4+jqADeCG9Y=
Date: Tue, 29 Dec 2020 11:51:16 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>
Subject: Re: [PATCH 2/4] x86/ACPI: fix S3 wakeup vector mapping
Message-ID: <20201229105116.hfg3mtjzsga7dia3@Air-de-Roger>
References: <7f895b0e-f46f-8fe2-b0ac-e0503ef06a1f@suse.com>
 <c0210cbf-c07d-7fa6-2ae0-59764514836a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <c0210cbf-c07d-7fa6-2ae0-59764514836a@suse.com>

On Mon, Nov 23, 2020 at 01:40:12PM +0100, Jan Beulich wrote:
> Use of __acpi_map_table() here was at least close to an abuse already
> before, but it will now consistently return NULL here. Drop the layering
> violation and use set_fixmap() directly. Re-use of the ACPI fixmap area
> is hopefully going to remain "fine" for the time being.
> 
> Add checks to acpi_enter_sleep(): The vector now needs to be contained
> within a single page, but the ACPI spec requires 64-byte alignment of
> FACS anyway. Also bail if no wakeup vector was determined in the first
> place, in part as preparation for a subsequent relaxation change.
> 
> Fixes: 1c4aa69ca1e1 ("xen/acpi: Rework acpi_os_map_memory() and acpi_os_unmap_memory()")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

See below for a comment.

> 
> --- a/xen/arch/x86/acpi/boot.c
> +++ b/xen/arch/x86/acpi/boot.c
> @@ -443,6 +443,11 @@ acpi_fadt_parse_sleep_info(struct acpi_t
>  			"FACS is shorter than ACPI spec allow: %#x",
>  			facs->length);
>  
> +	if (facs_pa % 64)
> +		printk(KERN_WARNING PREFIX
> +			"FACS is not 64-byte aligned: %#lx",
> +			facs_pa);
> +
>  	acpi_sinfo.wakeup_vector = facs_pa + 
>  		offsetof(struct acpi_table_facs, firmware_waking_vector);
>  	acpi_sinfo.vector_width = 32;
> --- a/xen/arch/x86/acpi/power.c
> +++ b/xen/arch/x86/acpi/power.c
> @@ -174,17 +174,20 @@ static void acpi_sleep_prepare(u32 state
>      if ( state != ACPI_STATE_S3 )
>          return;
>  
> -    wakeup_vector_va = __acpi_map_table(
> -        acpi_sinfo.wakeup_vector, sizeof(uint64_t));
> -
>      /* TBoot will set resume vector itself (when it is safe to do so). */
>      if ( tboot_in_measured_env() )
>          return;
>  
> +    set_fixmap(FIX_ACPI_END, acpi_sinfo.wakeup_vector);
> +    wakeup_vector_va = fix_to_virt(FIX_ACPI_END) +
> +                       PAGE_OFFSET(acpi_sinfo.wakeup_vector);
> +
>      if ( acpi_sinfo.vector_width == 32 )
>          *(uint32_t *)wakeup_vector_va = bootsym_phys(wakeup_start);
>      else
>          *(uint64_t *)wakeup_vector_va = bootsym_phys(wakeup_start);
> +
> +    clear_fixmap(FIX_ACPI_END);
>  }
>  
>  static void acpi_sleep_post(u32 state) {}
> @@ -331,6 +334,12 @@ static long enter_state_helper(void *dat
>   */
>  int acpi_enter_sleep(struct xenpf_enter_acpi_sleep *sleep)
>  {
> +    if ( sleep->sleep_state == ACPI_STATE_S3 &&
> +         (!acpi_sinfo.wakeup_vector || !acpi_sinfo.vector_width ||
> +          (PAGE_OFFSET(acpi_sinfo.wakeup_vector) >
> +           PAGE_SIZE - acpi_sinfo.vector_width / 8)) )

Shouldn't this last check rather be done in acpi_fadt_parse_sleep_info,
so that wakeup_vector isn't set in the first place?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 10:54:15 2020
Date: Tue, 29 Dec 2020 11:54:03 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>
Subject: Re: Ping: [PATCH 2/4] x86/ACPI: fix S3 wakeup vector mapping
Message-ID: <20201229105403.7velkoskewx5lafs@Air-de-Roger>
References: <7f895b0e-f46f-8fe2-b0ac-e0503ef06a1f@suse.com>
 <c0210cbf-c07d-7fa6-2ae0-59764514836a@suse.com>
 <20201123152454.yjr3jgvsyucftrff@Air-de-Roger>
 <79776889-c566-5f07-abfe-2cb79cfa78fa@suse.com>
 <20201123160752.uzczcxnz5ytvtd46@Air-de-Roger>
 <fe2ec163-c6c7-12d6-0c89-57a238514e25@citrix.com>
 <094e9e27-e01f-6020-c091-f9c546e92028@suse.com>
 <1d971d71-9a7e-f97c-6575-7f427dc1553e@suse.com>
 <301f6814-3827-5aab-c105-74ebee66091f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <301f6814-3827-5aab-c105-74ebee66091f@suse.com>

On Wed, Dec 23, 2020 at 04:09:07PM +0100, Jan Beulich wrote:
> On 30.11.2020 14:02, Jan Beulich wrote:
> > On 24.11.2020 12:04, Jan Beulich wrote:
> >> On 23.11.2020 17:14, Andrew Cooper wrote:
> >>> On 23/11/2020 16:07, Roger Pau Monné wrote:
> >>>> On Mon, Nov 23, 2020 at 04:30:05PM +0100, Jan Beulich wrote:
> >>>>> On 23.11.2020 16:24, Roger Pau Monné wrote:
> >>>>>> On Mon, Nov 23, 2020 at 01:40:12PM +0100, Jan Beulich wrote:
> >>>>>>> --- a/xen/arch/x86/acpi/power.c
> >>>>>>> +++ b/xen/arch/x86/acpi/power.c
> >>>>>>> @@ -174,17 +174,20 @@ static void acpi_sleep_prepare(u32 state
> >>>>>>>      if ( state != ACPI_STATE_S3 )
> >>>>>>>          return;
> >>>>>>>  
> >>>>>>> -    wakeup_vector_va = __acpi_map_table(
> >>>>>>> -        acpi_sinfo.wakeup_vector, sizeof(uint64_t));
> >>>>>>> -
> >>>>>>>      /* TBoot will set resume vector itself (when it is safe to do so). */
> >>>>>>>      if ( tboot_in_measured_env() )
> >>>>>>>          return;
> >>>>>>>  
> >>>>>>> +    set_fixmap(FIX_ACPI_END, acpi_sinfo.wakeup_vector);
> >>>>>>> +    wakeup_vector_va = fix_to_virt(FIX_ACPI_END) +
> >>>>>>> +                       PAGE_OFFSET(acpi_sinfo.wakeup_vector);
> >>>>>>> +
> >>>>>>>      if ( acpi_sinfo.vector_width == 32 )
> >>>>>>>          *(uint32_t *)wakeup_vector_va = bootsym_phys(wakeup_start);
> >>>>>>>      else
> >>>>>>>          *(uint64_t *)wakeup_vector_va = bootsym_phys(wakeup_start);
> >>>>>>> +
> >>>>>>> +    clear_fixmap(FIX_ACPI_END);
> >>>>>> Why not use vmap here instead of the fixmap?
> >>>>> Considering the S3 path is relatively fragile (as in: we end up
> >>>>> breaking it more often than about anything else) I wanted to
> >>>>> make as little of a change as possible. Hence I decided to stick
> >>>>> to the fixmap use that was (indirectly) used before as well.
> >>>> Unless there's a restriction to use the ACPI fixmap entry I would just
> >>>> switch to use vmap, as it's used extensively in the code and less
> >>>> likely to trigger issues in the future, or else a bunch of other stuff
> >>>> would also be broken.
> >>>>
> >>>> IMO doing the mapping differently here when it's not required will end
> >>>> up turning this code more fragile in the long run.
> >>>
> >>> We can't enter S3 at all until dom0 has booted, as one detail has to
> >>> come from AML.
> >>>
> >>> Therefore, we're fully up and running by this point, and vmap() will be
> >>> fine.
> >>
> >> That's not the point of my reservation. The code here runs when the
> >> system already isn't "fully up and running" anymore. Secondary CPUs
> >> have already been offlined, and we're around the point where we
> >> disable interrupts. Granted when we disable them, we also turn off
> >> spin debugging, but I'd still prefer a path that's not susceptible
> >> to IRQ state. What I admit I didn't pay attention to is that
> >> set_fixmap(), by virtue of being a thin wrapper around
> >> map_pages_to_xen(), similarly uses locks. IOW - okay, I'll switch
> >> to vmap(). You're both aware that it, unlike set_fixmap(), can
> >> fail, aren't you?
> > 
> > Would at least one of the two of you please explicitly reply to
> > this last question, clarifying that you're indeed okay with this
> > new possible source of S3 entry failing?
> 
> I think we want to get this regression addressed, but without the
> explicit consent of at least one of you that introducing a new
> error source to the S3 path is indeed okay I'd prefer not to
> prepare and then send v2. I expect there's going to be some code
> churn (not the least because acpi_sleep_prepare() currently
> returns void), and I'd rather avoid doing the conversion work
> just to then be told to go back to the previous approach.

Since this is a fix for a regression I think we should just take it in
order to avoid any further delay, so I've Acked the patch.

My preference however would be for this to use vmap. Could the mapping
be established in acpi_fadt_parse_sleep_info instead of having to map
and unmap every time in acpi_sleep_prepare?

The physical address where the wakeup vector resides doesn't change
AFAICT, so it would be fine to keep the mapping.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 10:57:06 2020
Date: Tue, 29 Dec 2020 11:56:46 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>
Subject: Re: [PATCH 1/4] x86/ACPI: fix mapping of FACS
Message-ID: <20201229105646.fjeq364wcfstff4a@Air-de-Roger>
References: <7f895b0e-f46f-8fe2-b0ac-e0503ef06a1f@suse.com>
 <81a8c2f0-ae9b-98e0-f5c5-d32b423db491@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <81a8c2f0-ae9b-98e0-f5c5-d32b423db491@suse.com>

On Mon, Nov 23, 2020 at 01:39:55PM +0100, Jan Beulich wrote:
> acpi_fadt_parse_sleep_info() runs when the system is already in
> SYS_STATE_boot. Hence its direct call to __acpi_map_table() won't work
> anymore. This call should probably have been replaced long ago already,
> as the layering violation hasn't been necessary for quite some time.
> 
> Fixes: 1c4aa69ca1e1 ("xen/acpi: Rework acpi_os_map_memory() and acpi_os_unmap_memory()")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/acpi/boot.c
> +++ b/xen/arch/x86/acpi/boot.c
> @@ -422,8 +422,7 @@ acpi_fadt_parse_sleep_info(struct acpi_t
>  	if (!facs_pa)
>  		goto bad;
>  
> -	facs = (struct acpi_table_facs *)
> -		__acpi_map_table(facs_pa, sizeof(struct acpi_table_facs));
> +	facs = acpi_os_map_memory(facs_pa, sizeof(*facs));
>  	if (!facs)
>  		goto bad;
>  
> @@ -448,11 +447,16 @@ acpi_fadt_parse_sleep_info(struct acpi_t
>  		offsetof(struct acpi_table_facs, firmware_waking_vector);
>  	acpi_sinfo.vector_width = 32;
>  
> +	acpi_os_unmap_memory(facs, sizeof(*facs));

Nit: looking at this again, I think you could move the
acpi_os_unmap_memory() call further up, just after the last use of facs
(ie: before setting the wakeup_vector field).
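Something like this (an untested sketch, based only on the quoted hunk):

	acpi_os_unmap_memory(facs, sizeof(*facs));

	acpi_sinfo.wakeup_vector = facs_pa +
		offsetof(struct acpi_table_facs, firmware_waking_vector);
	acpi_sinfo.vector_width = 32;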

Roger.


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 11:16:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 11:16:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59811.104865 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuCz8-0001Qc-5c; Tue, 29 Dec 2020 11:16:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59811.104865; Tue, 29 Dec 2020 11:16:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuCz8-0001QV-1U; Tue, 29 Dec 2020 11:16:14 +0000
Received: by outflank-mailman (input) for mailman id 59811;
 Tue, 29 Dec 2020 11:16:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dLv=GB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kuCz6-0001QP-Us
 for xen-devel@lists.xenproject.org; Tue, 29 Dec 2020 11:16:13 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2100c3a7-9cc8-4217-8d22-dc3db458986a;
 Tue, 29 Dec 2020 11:16:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2100c3a7-9cc8-4217-8d22-dc3db458986a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609240569;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=7R/b+arKlI0KjLbi4nVK1O43KtzOLCrLjdf8uZe5Vzw=;
  b=dE80+UD9e7FYB9pLiilP187j8aIcQ/8M12PQ6nEP4J+rakA0WeE47tRu
   V/LqaasPzqlCohHfR60UycUY7hxaiJ0MVAXxQ2clTJk9iUMmizPV03TTW
   kInjTOMYQNPkgWn0Is3OVMmJ6Dqt35qKi++U8k9Wwtw7ryoO3GuzqQ/hO
   k=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: Z+RkW+IrBADWnOu3hdcYOuRXbU1OBQDW8abGrfjWvqQn8rUqDbTMjMkhl8uKUYA0rB6g1TTbHL
 +a+iif6quwBkJOicfWR6/6AXWdzvr1vdoAbNCpAThQwE/VT+ilcc7uAV7R/oyuQeFNQa6Ac7r7
 eYgBqMIlcES3Ukvg5BUQocoC79yiWtCM5XwtGu/kggxt5ubaxmNyvd4YrC7Nk/BtpCLCP0mEgr
 OQ4aZCE+/R8M4IH1r2BzxwD3ZdrkGfecNH4OymZTwOEKDfgqRvl7EWiHM5utISyX3mCyHEj9JF
 LKk=
X-SBRS: 5.2
X-MesageID: 35336450
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,457,1599537600"; 
   d="scan'208";a="35336450"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=alWmzfk7b4711+hZk2vsCUtVBW7sPZN1iiqIBOpoJBeqnlHa8OdUtf22SbAKD3SJQq7toeOxdtn2qfW5L1n3n7PikOsbBArSKUaq80H2hfuPR+GxAIeanL9xHbT9dhv2GbtMgMuB5xD8qntvH5fYJCWmXfMQyXifutJwBH1IwBDQ2VYlTwaYhILUbnd/zcZTeLrdnf9+4SjncTbvD2oZdlNIB0grhBxM/hDxsNpfar9Rn5tYxvM3nvi7uTBk7+wf/QVeHOzzIDWynLw4CjgvfUhSd9dPXbXDeW/9VGAUeRpBiTU1LuNUDBDi7WXfspaKuo3QsqB1fjZxyWoQOHK8Fg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HUu4gJqmQhIfsr00LsR7GNefx0PASAC/mDprSMGGMM4=;
 b=VfVkFKrVwBNridumPGE4aY/R6XazcwwKviSvO35/KzAWNzSrFr4IIsKUvkXD58kM3HS5ZMs4SbOpGff3iWu2LibHWlksOZHGTL+4BUE75aOKVAEvrMESKD7dP8H11NJ/QbCG+S4PsTTxmIyIfgQPnL/o7LXIKDmreCfrGZuw+J56SbSmCxSh3RmUwb9mJPNfw0p6+zFnxUCR5+7P+0/27RgG2TMtYmlD5AjKaNAkedOSASr7BzuewCiibJ95n20XfvW4cbq8sPWur1G9PEvgx9OmeOYkI5AwEDC6IDraiOKPWQoA5sV3ew72b7pq86OKmlzSf+XcnCc23eatwkDGGA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HUu4gJqmQhIfsr00LsR7GNefx0PASAC/mDprSMGGMM4=;
 b=ulnJDsiQTbBNegdmMOvBUcT8So2EvcgiyEvoV3yZ4Kj+ciKM/GP2i5WrIl7qE/jMSNZCb3Cym18s1inaXgbBbWiDEupkGdA3+WKUh2GIscRsZTixCZyfzFh5gDbGoNGHBjSvmtK/6TmcaWajyhfRZ+tfPTp1QoLGzOXrtuCe0Dw=
Date: Tue, 29 Dec 2020 12:16:01 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@netbsd.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 12/24] Implement gnttab on NetBSD
Message-ID: <20201229111601.x5gmbcai4d7ex5yd@Air-de-Roger>
References: <20201214163623.2127-1-bouyer@netbsd.org>
 <20201214163623.2127-13-bouyer@netbsd.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201214163623.2127-13-bouyer@netbsd.org>
X-ClientProxiedBy: PR0P264CA0230.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1e::26) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a6ad09d0-6b63-4387-2993-08d8abeb226f
X-MS-TrafficTypeDiagnostic: DM6PR03MB5339:
X-Microsoft-Antispam-PRVS: <DM6PR03MB5339DFD625C88DC619CF7CDB8FD80@DM6PR03MB5339.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: GeT8qCrXxNMTCjt3Gyv/ccOZCYuPxAHFgFg/npj/sRFZTVmEVmfDh6QUGhZWBxEJNOu4pHhzrDBXgQpDz6T04kdsTW937aR8vh0X25mVXg02dD5h81vL+XLXLRA2AQP3ebZ5c8l9+7XSUFnu4Q6eeeIsLkpxofXUrEv+I9sHhuO8QW+U1smil+zc9FpqJHNpDy/+tMwVUBxyWnPpeyVeg31o35HcpLFs5xUPnWpuZdGi4SDQOUlmW8g72fle11FEj0eqtRxeuB4EZbiRhMEIJbzj04jnnWIKvrTDdl7G8CW+Oo+vliZxP7pX3vZu6uMJVuwlWT3ZRNIK6506lm5dKMe3eGZ6Eh0nEm1l/kzL5chnJsmsvcXuxIW0N4q5jiYBcdD0PpLOn4cP1chvQRF0/w==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(346002)(136003)(39860400002)(376002)(366004)(396003)(85182001)(66476007)(1076003)(4326008)(478600001)(6486002)(83380400001)(16526019)(316002)(186003)(86362001)(9686003)(956004)(26005)(2906002)(6496006)(6916009)(8676002)(66946007)(8936002)(5660300002)(66556008)(6666004)(33716001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?TDZsaWxQd2pMV3lOaDBIRnlwYlRnaDJ5ZXQxZTVPbnhTVGpxTjMxOEpPcTlH?=
 =?utf-8?B?TlpVMXpVZVpPTHdkYjBKYS9DVnN1a2VqUmpFejlUK0hpdlkvQm92NnFNSWRs?=
 =?utf-8?B?d1hxclhXbFR4Tm5TR01wbFpLZzIvSmlwZS9OUnpyTmlPSzFpVVFTZEJtT2wr?=
 =?utf-8?B?aVFqa2lFU3VzMGQveW0vV2pHeDJTUXZGUmNud214S0lZOUFyNUE3M014ZW9m?=
 =?utf-8?B?d1BEK2NHSXNjemRZQ0dYRnRLcEZJNW80V2dnVWtLVS9yS0JxcFJPRy90U0d5?=
 =?utf-8?B?dDQxb0pBV01LSms0NkhyQlA2T1JwR2g0cXVpYjlTTUlvWjBlTld3SllsbEg5?=
 =?utf-8?B?NitYUFdReDlzMHdkSDNOTnhOWkk1NURTR2o0UDlyOExsYXVPYkFUdFFXaERp?=
 =?utf-8?B?K09Na2FVM1NJczlRU1oyRHlGSGFabjhtcGNYMCtGNmpEWFpJcHdNRGxIV1hD?=
 =?utf-8?B?Q1hCdVdwMGZPbG96eXRTWFVLZTZISGJvMXREaW83d0NkUVJHeWMxRXJ3Rklm?=
 =?utf-8?B?L3FzVkRHVVU0clFGNU51eUkvRVlTb1UraWV4Z05ObWF0cGI5czVCeUlmZzhj?=
 =?utf-8?B?L0J3QzFWeWMwc0tBRnl2R1F0QnA2aUdFcUhrcGx3NUdkY1pzbzVLMjZTVVM4?=
 =?utf-8?B?SkQ1VDdaVUk4Z2NpSDVNQ1IzU3QyNTArOU9pR0RmLzBvSWR2N3o2T1lpSFNl?=
 =?utf-8?B?ZkpNNlVMVSt3OU9kT29kQUY0MHl1eUcvMzNPOG5rUnF2OWp4R0MvS0RDcUV5?=
 =?utf-8?B?VUZldHoyQ1BDYTlTQm53UUxJN0NkM05uaFV0cXdnZTJ4SjlGL2ZQaWZWWlJx?=
 =?utf-8?B?L3NRbzVQc09FNkxldUFuZHhlNi9HNHlBaXAyZGNzTUg3eEp3M3NGRCtkRysy?=
 =?utf-8?B?OXcxK0NoUHVyd1dEeEVrN3RDSlJjbE9XS1EzMXorendKSFFxQndVK2ZueWZQ?=
 =?utf-8?B?RzQwRDZtUU4yNDB5TGlMY3JUMG52MnQ1WDdvVWRBa1NOVDNMYStRamtUU3hV?=
 =?utf-8?B?ajlpdUJWV1l6ZVRPbC8yMWd2ZE42K0tXWVJUSUZOOGpML2lQMHBSVEl3dGxj?=
 =?utf-8?B?ZjdPOVJ2TTkzZVU1c2JRQmtZejZnQ21xMjN4RnlZQW5kdzI1QStaNlh1R3Fv?=
 =?utf-8?B?aFNtZVQ2M0FXeHR4em1TU2xhVnkycE1PS1pDZUtPZFY4L05DMXpEbHFSeHRj?=
 =?utf-8?B?bDY0K0ZxQjJHNEVWa2RrcG5MY0JFQkRUU1J4WHREOHRtUzBhS2N6cFhWN1dq?=
 =?utf-8?B?aElVTURxbUtJM2swNXMxM1RBalcvOE1qR2hUamlScFluK0VHd3hQMkpaQ1FI?=
 =?utf-8?Q?fJx4m8WKdcjKyFbWczyTI4xyQ+39qHjbu2?=
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Dec 2020 11:16:05.5433
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: a6ad09d0-6b63-4387-2993-08d8abeb226f
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: xL9n9gipIyF8dK7/E3ofmQSDAXGDSKW0OzdGjlpdRF1aFHJOUxJfE1iobOq2WAowKobb7UnbKUQ+XWA5bbO6HQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5339
X-OriginatorOrg: citrix.com

Might need some kind of log message, and will also require your
Signed-off-by (or one from the original author of the patch).

On Mon, Dec 14, 2020 at 05:36:11PM +0100, Manuel Bouyer wrote:
> ---
>  tools/libs/gnttab/Makefile |   2 +-
>  tools/libs/gnttab/netbsd.c | 267 +++++++++++++++++++++++++++++++++++++
>  2 files changed, 268 insertions(+), 1 deletion(-)
>  create mode 100644 tools/libs/gnttab/netbsd.c
> 
> diff --git a/tools/libs/gnttab/Makefile b/tools/libs/gnttab/Makefile
> index d86c49d243..ae390ce60f 100644
> --- a/tools/libs/gnttab/Makefile
> +++ b/tools/libs/gnttab/Makefile
> @@ -10,7 +10,7 @@ SRCS-GNTSHR            += gntshr_core.c
>  SRCS-$(CONFIG_Linux)   += $(SRCS-GNTTAB) $(SRCS-GNTSHR) linux.c
>  SRCS-$(CONFIG_MiniOS)  += $(SRCS-GNTTAB) gntshr_unimp.c minios.c
>  SRCS-$(CONFIG_FreeBSD) += $(SRCS-GNTTAB) $(SRCS-GNTSHR) freebsd.c
> +SRCS-$(CONFIG_NetBSD)  += $(SRCS-GNTTAB) $(SRCS-GNTSHR) netbsd.c
>  SRCS-$(CONFIG_SunOS)   += gnttab_unimp.c gntshr_unimp.c
> -SRCS-$(CONFIG_NetBSD)  += gnttab_unimp.c gntshr_unimp.c
>  
>  include $(XEN_ROOT)/tools/libs/libs.mk
> diff --git a/tools/libs/gnttab/netbsd.c b/tools/libs/gnttab/netbsd.c
> new file mode 100644
> index 0000000000..2df7058cd7
> --- /dev/null
> +++ b/tools/libs/gnttab/netbsd.c

I think this is mostly identical (if not equal) to the FreeBSD version,
in which case we could rename freebsd.c to plain bsd.c and use it for
both FreeBSD and NetBSD?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 11:17:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 11:17:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59817.104876 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuD09-0001Yf-EW; Tue, 29 Dec 2020 11:17:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59817.104876; Tue, 29 Dec 2020 11:17:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuD09-0001YY-Bc; Tue, 29 Dec 2020 11:17:17 +0000
Received: by outflank-mailman (input) for mailman id 59817;
 Tue, 29 Dec 2020 11:17:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dLv=GB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kuD07-0001YS-La
 for xen-devel@lists.xenproject.org; Tue, 29 Dec 2020 11:17:15 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b2fb5dfb-6cb6-47e0-ba89-76567fa31b55;
 Tue, 29 Dec 2020 11:17:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b2fb5dfb-6cb6-47e0-ba89-76567fa31b55
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609240634;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=tKi6WVy8JGcWX7mELf68LwkylVDN5qGoRhGV6Rsq2G4=;
  b=Sbx4Jaxj2WzQzcCmcsJavJgRDjYheg2QxyMk+nz7QHMfsQORuFqPfvyo
   K/FxIrLnBcPl5s1vtIgMcj+33FW+tysU8QvpzM8q2/QULCozYj887CxbG
   NxWbSm3Q2fYxKVoyBD+Qotdv8CMAkQgO8L9lFS5vO4FyhCufhyP7YMC/P
   g=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: jDV2AChY3DbDSZT9Opbje4fij5JxFllxkduPIm6bB10XKrtXiEUlviSDM9KvOj64hdYP8YzWBq
 NVqxDJOiyeqr7puiI/Ffhrku4o1V4KOoj5+sToymsyGA8yVql2P6fh7MmmL1cwwrfLPag3hfr7
 yUQqySx0vw0qkMWG0ZSULti+kAXTDMoYrqQIlWRO0XHS/LVhStCraDJC4hZqK2iDi881jpAocC
 OVZjdRabcl6uGzMJfceYAGRg7r7TL8mQqh+8PomZ44C2ECLl8SUNrd2WkX14dNl1DrUCUqNY75
 IxY=
X-SBRS: 5.2
X-MesageID: 34446353
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,457,1599537600"; 
   d="scan'208";a="34446353"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Upr7IqdTUuZWutNfwPryNvW0gnrzceOaHiosVeoMI1ADAd6FxdMVUDtwj56GZwK8YrbgD3ampER2hrbBB7wunxhc9T+LoxQGetu9EBQBh9befQXiL+RFjyHNXbEysnVZFk8gxyV9/PKKGg/i1cMIzrIC5tKcInzW9i4r4ZPwxVrauKV1KuRXg4/+toDN1AXdRDuVgf8K7qfBlAjoNVT1zSRYIsLTT20oip5briIWRf1nKKQ51YhXC/i54Fg/e4KvTPtVRtVvtFxmdx+pZFn9E2l5XsEarXhOW8NMRwBG7bFQ2KAGV53c1/H3jMNmrbfwZZl7gzJ4X3eHKn36gdyWVg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ov1Q139frdOwLzVShFdI67UTQtd/Yv8O4Yq73lb9p9I=;
 b=DrXHsUAdBsek09GSJ3ysNbGDKH8sNVCKi5oxgXHLDGRU97DtSjaNVavQ+NON4LSbEZYnUpLH/JRl1nYGsfnM5+2hlH+1uEeK2UVmcZnMdJdaciDuTKQmXGcAC8nTjnx9tO0cV9hxoeVJvyOwkhfdTX8OAfvImbLAvePeeQgOhgocYCdVkd4dvC+7qQlrhho0sjhdQzxitb0OhyAGxh4Z9LnkbncYEip+K96SL16yV1SIqRgayXwEwLqFGuEfB8MaIGY1ZUMVL5Ryo1DzQjDlXwVdZDfUMzkzt8nyNDzA6GkZ6DT82+fs42As10d/Y6eZQl0t4kSZ2qxbTTf/C0774A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ov1Q139frdOwLzVShFdI67UTQtd/Yv8O4Yq73lb9p9I=;
 b=L8RaR3r1PIsqBElZZfmi35lc/2TBZoVwBPOpewfeLXb7S7ks0ezVGYloXGclwLAG/NJsqLuIxGrvzCfxmREbn2c4wLp86BhpJmCL5UsopFwg+1FMopovDGW2qeZozMSc5lGfPvfI99HN8H3OmtscoD/zI7j4lgcBgnXsN2Rvr5U=
Date: Tue, 29 Dec 2020 12:17:07 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@netbsd.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 02/24] NetBSD doens't need xenbackendd with xl toolstack
Message-ID: <20201229111707.x7dhxhifydc6rphk@Air-de-Roger>
References: <20201214163623.2127-1-bouyer@netbsd.org>
 <20201214163623.2127-3-bouyer@netbsd.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201214163623.2127-3-bouyer@netbsd.org>
X-ClientProxiedBy: PR0P264CA0154.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1b::22) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c466e9eb-a746-41c9-5d0f-08d8abeb4a26
X-MS-TrafficTypeDiagnostic: DM6PR03MB5339:
X-Microsoft-Antispam-PRVS: <DM6PR03MB533941C33E7DEC85CF96142B8FD80@DM6PR03MB5339.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: +U6o87tq2aGowEpHAPGXwmzBN+3LfZG8Y5LAvSW1pXINdm1fRYcj+s9igZwqpBdHn1oPU5fTatrpjWg4NmVEo8CpUUlyfYWhiKrV2x4Uh6Kcvrty48U9cCnBaeFWtrM+6RFKRRfjnTfgre/uRLFkjjIvit282ARe4jX80PP5pIJTMlozrwrU43bLrGrHtWx8QgsIPqkb2/7ozs/dJsMRXDGsGzlzZ9dja5J1LkEpT7Ee9cF5UEBB2l0ofuJ4XPYXJZOTRHYJMMVs4Lxovr5IWZCgjJV9ccuaSBXOFz6XOoFDbRL4dCVfGzx3ZO8SJeUY+KhEasKqgmrTK33zCw2eyMM0vXDC7tpkrv0v6kSH0wCBSJFZibzC2wRzKtgNT19nUX99qHhGVFrN+yY0EnCS5w==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(346002)(136003)(39860400002)(376002)(366004)(396003)(85182001)(66476007)(1076003)(4326008)(478600001)(6486002)(4744005)(83380400001)(16526019)(316002)(186003)(86362001)(9686003)(956004)(26005)(2906002)(6496006)(6916009)(8676002)(66946007)(8936002)(5660300002)(66556008)(6666004)(33716001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?Y1lOdG5LYytYVTlLU2hnSUxYTk44WENiUzV3ZHpicExZUVdkR1dvQ0dhZkJa?=
 =?utf-8?B?K09GMm90TDNTU1VwTW0zR2hqWkhOV1JOTlljS2FYL3JTdkpNaE1Ta2NyNksr?=
 =?utf-8?B?bFh2b2FTQVZpNnptUjJHdHV2NUo0VlNwZjEzaDJIQ29IeG9ZSkpvdVlzcVlT?=
 =?utf-8?B?N29uY29OTkl1MDMxcXY2ckdpK2F4NUJnU2R4S3JsUnNMQkV3Ukk2aUxsWmwv?=
 =?utf-8?B?RjNiNkRFWCtGZDYrNkZ1M2JHNHVYNzFvSUxlVjAwakFKSDhDSHh5UTV1cS9W?=
 =?utf-8?B?YjBxMEthRHM0NUxRV2hvVlJPS3FKTGZ5TkJDSjRReU9qcXcvZnA5WC9jRkZv?=
 =?utf-8?B?dnZNa3g2RlZ6dnp2RXhzOHdPbzVKR3o5Tkcxa2plRktra21qcFdDaURUTHZL?=
 =?utf-8?B?b1d1dlIvdkNGcGhCVDVZdjAxQjd6dFE5clB5RzFDNkNqZXc0Ry9CeEo5NTNj?=
 =?utf-8?B?VFNJdmRJMzZkZUd4RVlvdjhVN0k3NUNIdGg4S2h6N3paNVBNQ2tQNzZDaXR4?=
 =?utf-8?B?dXUzWDBab2FQdGJ0bUl1T3ZnN3hYU0tJT1ZPc1NUdDlHbHcyOUNJM3dIWTVr?=
 =?utf-8?B?TDFzRVdncG5XRDhIaGZUK2x5U1JMb2VTbVk3STRQVHA0L3Q1QWo3VWM0VEt4?=
 =?utf-8?B?SDdkMGFqODZ0L2dqWWl3SnFpam5FZVlhMDl2MHNmSW05Y2szU2dTcjZaS2Yv?=
 =?utf-8?B?UzBnRFNsVDZ0UWE1eExQV3F1S3N4cUN5N3hzM1BST2IralppQis2MzZTb2VN?=
 =?utf-8?B?dFhyaTJoVlpodk5XZElIc1VSQlNOVnhkaXhDQ25kMThoUFphZ3cra2xaM3lU?=
 =?utf-8?B?a3YxUGxUbzhnYUk3TlVzYStzQ1dXZ1VJSXZDMFJSMlRJL3lQRklBV0ZYRHhv?=
 =?utf-8?B?T1NENjVSOGJNV3gxMVdGSFAvSEEwZXl0ekdDb2dhekpNTnFtcCtPMG5zQ0RK?=
 =?utf-8?B?NGNFcmlNakNId3Z0MERoVDlyNkxpby9Sek9pR3Q1RWUvZDlYcEFYRmV0MGhE?=
 =?utf-8?B?R3pKK2l6MzJzdFFZd29pMFlXSVIrb2p2cWVvRmdHMmk0NTI4bklQT2tDMVZK?=
 =?utf-8?B?UXI4ZkdqSHBqSklIdnJTSEhuOUx1MTQ3cElDaklyWVNYTnFYWFdwV09zUElX?=
 =?utf-8?B?RDZPVjdQdzZVR3JPa0d2cTc2REtzM2p5VWpZam9NdHZOazVOK1NGWVdTbmdH?=
 =?utf-8?B?U3dNZkRKSzAvZHVRRTVUL1gxT3JqdjZFT2NUWlN5em56eUZaelRodTFENjNN?=
 =?utf-8?B?K3k0MGFTRUkzZ09Vd3E4M0MxaVUxVitKMTVTUVhLK29mZ0lRUVdXL21xTmFV?=
 =?utf-8?Q?lX/ypJYDwkIovhPZTxais0gwBO3FSXXM8y?=
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Dec 2020 11:17:12.1432
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: c466e9eb-a746-41c9-5d0f-08d8abeb4a26
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: FYNavMRagL72vPQ9bhlVCW0sq3pIGeslQzlhWNJ3/F/ithAgI3g0F3znQKPLz8kCmPxHqmUWevCiZO3xz4/b8w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5339
X-OriginatorOrg: citrix.com

On Mon, Dec 14, 2020 at 05:36:01PM +0100, Manuel Bouyer wrote:
> ---
>  tools/Makefile | 1 -
>  1 file changed, 1 deletion(-)
> 
> diff --git a/tools/Makefile b/tools/Makefile
> index ed71474421..757a560be0 100644
> --- a/tools/Makefile
> +++ b/tools/Makefile
> @@ -18,7 +18,6 @@ SUBDIRS-$(CONFIG_X86) += firmware
>  SUBDIRS-y += console
>  SUBDIRS-y += xenmon
>  SUBDIRS-y += xentop
> -SUBDIRS-$(CONFIG_NetBSD) += xenbackendd

I think we also want to remove the directory itself, as it would be
dead code at this point.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 11:24:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 11:24:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59823.104889 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuD7C-0002VB-7p; Tue, 29 Dec 2020 11:24:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59823.104889; Tue, 29 Dec 2020 11:24:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuD7C-0002V4-4Q; Tue, 29 Dec 2020 11:24:34 +0000
Received: by outflank-mailman (input) for mailman id 59823;
 Tue, 29 Dec 2020 11:24:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dLv=GB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kuD7B-0002Uz-9F
 for xen-devel@lists.xenproject.org; Tue, 29 Dec 2020 11:24:33 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a649a423-5c19-48cf-8dfa-b52c5471ab22;
 Tue, 29 Dec 2020 11:24:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a649a423-5c19-48cf-8dfa-b52c5471ab22
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609241072;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=SR9VL7Jlz5rAE339U02WTYfwknZAQBSMAIHIZDplq44=;
  b=PBAwKGdN7rgA2zN/i7NkzUZQx6cN/DX9X1wvv0xeDCpBY942/rsUq/PC
   xeRbDY7vk1wIUAK8Vmgkzp27jew6ltjMdtF4xN/OtTDAabicR2sMs6Z0M
   FlRf+YjOr6LyZMFeoJ3BCegM89Lb/82zQCfCrp16CB7Isonztg5eUx17M
   k=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 2ut/vmsHbbZRnf8bnMXTxUfmXSgK+kZ74OoFWrklYeYWZc63It7/bFOH+8YW4voelzSjenQb7q
 hpMCZObSfCQ2FApMAZPv73J8cxVPUF3IgnwPbigKD0Y9R4R6UlP/UMaYZOVARqUt0M7pg1FRMP
 AFSK1SJMPENu8JnpC6H8yC4BYmnioiCX4sKQDthUXb3bhENDCi26jQPzKUTGiFKCquLTdqkm7o
 aoJfl2ksxUXIYVds/6GwwUM4KhuMBgsLXKuY7it25spS6FPvXP9lvmD1vsTwyCXjBaxDpAXxzs
 xYU=
X-SBRS: 5.2
X-MesageID: 34119322
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,457,1599537600"; 
   d="scan'208";a="34119322"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IL82TmXYftJ5A+aDPUE6XaXYmE1JMKCinZ1WvfwMjeawXFhsUtfUrW1BP781oV7Oat4iltnUoLFgBu7auqXBIo4EvDdDDJXthZoU5HMdtYREmNH1+Mly0NMTe90pNY40sHd4ey6AdbrZku5P1tGqMPZ5HuWhuX0ucaPzpkqUWGRbWymdQjw1tRc5JqTp7fp+RwEzhGIyh7asFv0UxmlB0g5u8m0ApylZ4+TdcGB/ASMmlQE7jcZRx8/Nk8h544/xtQxaJTHP82HhrtFLwISvQE723hoQIyUMQdmbilkfPASImGyjGZc0NEC9he9Hwqfq10TJQ/rzfPnktZqBSNKdkw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JdzG1yA1Ck/ufHwExhr4v1cwidMEQC3Xp9Yn3XSmBcQ=;
 b=J1T1BKY19FOJd6b/Vr8wdA5euZrb26KjfzFsSndQQGj1wpWX1Wn4V0olWrC5BQkUUY4y1ciOqbzqeiGh07G2gz8h5dJE34rrtpFmFFbwrYBKOX/MmwK4diUq2kJLtzwrVVzeTrTRm6t/op8+psnUTa4ssz8R2KaPeppajFlaXQxnHulIpyqMq5Qnph4WQg5D6Rvgsd8+NhO6wn4AONrjtv2eOzllzt8+GivBC1I5sWwEBls9Xd00YyHsLuDLZCrETYItCMvhHqECML15mopDNc1RYO8SSUQ1dbmk6Rl5IztD12/nMK//BXMk5BYuaRcKGWOofZ0F9EbJvLYg3rhJNg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JdzG1yA1Ck/ufHwExhr4v1cwidMEQC3Xp9Yn3XSmBcQ=;
 b=O8iWkV6Ml/0rYAXpKNeAiGLF7bLimmKcAADBLrTvWPZnjKW94YlFfcfQeRc1hTRbjWH6f40s056KVv8u4XZl+tiF0XCPu5hCwcCGA82ErzYIkTMIdsqZ+vDd4ZXwCjc0b1rneqnhtlkX+IyOIjN/yvlBw6nnowCzcsvVFmvr5Hs=
Date: Tue, 29 Dec 2020 12:24:23 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@netbsd.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 04/24] Make xg_main.c  build on NetBSD
Message-ID: <20201229112423.xwp5n7euy3w7ejge@Air-de-Roger>
References: <20201214163623.2127-1-bouyer@netbsd.org>
 <20201214163623.2127-5-bouyer@netbsd.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201214163623.2127-5-bouyer@netbsd.org>
X-ClientProxiedBy: PR2P264CA0030.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:101:1::18) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 038212e8-0a30-4803-8bd5-08d8abec4dfc
X-MS-TrafficTypeDiagnostic: DM6PR03MB4761:
X-Microsoft-Antispam-PRVS: <DM6PR03MB476116E07315DEAA3DED0CE28FD80@DM6PR03MB4761.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5797;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: S65+Kc1wXiuMxBb/NBfKxgPneq7bBEHZwCT20o+74VfkIJaugO1QJmif6XnrBRkAGrLCpHNAd+6wNdj6B+QU9/7B8J8H7ZKiRJnj4sAkSIeoXxoPa8ifA3JWtzX54NFUxkPBL20Sw2Z3s0EsTCHiRw/OSQYSD3hOMNeSOQ007kzTHzy5JLy00DO0Lgiq1BS4jPXqQSvEozw8uVGYprOtauqDL3iV76ByqXe+1VIBDtF47bQBqfW234kj2uIn8ACRQmfEfo+qi3RhPMwrvIdQgsvY2U5wO6+yIuXlqsEVbTgBBsl9iL9THMLvS9BL0qXx5iwtuXuPimk8apxo+SDMaE2VtYa6L8W4uwaGMq2rZTHSapGHstKtrKPK0ioVAltItjMrPr6vEPC1GOQ+W7gzeA==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(136003)(396003)(346002)(39860400002)(376002)(366004)(8676002)(5660300002)(478600001)(6496006)(66556008)(33716001)(8936002)(6916009)(316002)(2906002)(186003)(16526019)(4326008)(26005)(1076003)(956004)(6486002)(9686003)(66476007)(6666004)(86362001)(66946007)(85182001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?QS81aHozS3dYQWU5YUxLaWdEQ251bTJlTENKb1VlRXJDemtLYk5lOXdhR1dH?=
 =?utf-8?B?U1NDa0ZuL1k4dW1GWEk1Vklpc3dSaE0wK0xBZEdQRTlJVXhDVDJKQ2V5WnJy?=
 =?utf-8?B?aGVOeXlRc2h3MXBMTWU5YXNZV0lqb3JrdWJROWhseTFTN05JZ1V0OXMwVE5z?=
 =?utf-8?B?QjhxRGhaNk9TWWVoQldtUW1icU9zZXpoZmN5MXJZbXpRQXdJa2Z6UDY5OXBC?=
 =?utf-8?B?ZW90bnM0S0l2eTZwYm94V2t5R3lIL1dvMFRkUzdBY3FmY2s2VjJ6NmZka0Nw?=
 =?utf-8?B?T1NhU0VWRkNkMlRickR0bHlBVUQyeEdXUUNUWGpPNXh5SExXWHVpdVAzSTAz?=
 =?utf-8?B?WnVlUlBCdjl1Z1NKYUdITVdDekQ5VU5HdWd3aGFuZk5nMWJaVEdPcW5HN3pw?=
 =?utf-8?B?OG81cG9Wd0N0TzVpMWNUYW5wTEJ5dkUzOE1zOExaQjJCS1VmK1ZpTS9vQzVE?=
 =?utf-8?B?Mzh6eVdNNWdHT0ZoVkNZZkJsUFF2WG96UzZXdWlEVXpYQXNjY2o2cGVoWXRC?=
 =?utf-8?B?M2ROaGRvVUh2cDIxa1AxQWM5ektwRkJRSHFTT2MvZThLY3kvRHNNWnVGbk5I?=
 =?utf-8?B?T1o4elZIc2M3T2VEeVM5Zy9TNmUyY09ldThZTGRHY2ZzaWJscnBMcGVJbURF?=
 =?utf-8?B?a0ZSWTVDcVRXYzM5cmtrbmw4YjdGOVUzSUNhSzZGb3owa0lkd1dHdHY3ZnZW?=
 =?utf-8?B?Z1gyL0FkMGJJQWt3ZUY5VHBhQ0RzZEdkWWRPZmVsNEFXRVRFanVaRXdUZlJY?=
 =?utf-8?B?dlVzUmRKN2RRZDltbmVSaTJRUHkvS0JrL1lKSWxGNTVMR3A2T3V4blhRcTRO?=
 =?utf-8?B?NWZSNVh2akNRVHZWaUNQNGo2L3h3dnZkU0Q5dHMwVVh6MHFzcnpPZlJEMGxF?=
 =?utf-8?B?bmJOZVJuVnhwSlRSZVFFeldJSWh2SU5IYlhxc1c5dEpXOGxiSmJVdG5rblhn?=
 =?utf-8?B?dWMyNDRmZyt4VVc0d2VJcmNIN3RQTzlpTlUwSkJUNnA5RzFGbkIrOW1kWkR6?=
 =?utf-8?B?NlM4cFRoZ1FLdVFXUXQyMXJURk11N2RVRldkWG1IdytocExLN0VBVXpMWmZC?=
 =?utf-8?B?Kzc3V2orUWs1bmZLWFdmM0JTaC9jcjNjYmt1NDd3NGJudGlNVWZ0NFFXUjJs?=
 =?utf-8?B?TTFtTzV0RFZUZE9SZHRnb2RsTVhSUVdEc0Vja2JEdng0QkNheUFGOFFhUlo2?=
 =?utf-8?B?MGdqb0VkZlAzaTBhY0I4ZGNjbGJUNU9GWGlXMTdDMEFiU3oxc0ExdVdCMXJp?=
 =?utf-8?B?MEdCTjh2Z1QzbThhaUN3Zm5wVnVtUUVnNndaN1RDOHl4c0hmck1CSnd6ZG5N?=
 =?utf-8?Q?Qw6ZKC1X9MrPTP/x7h0uE50qvKpnxXBebY?=
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Dec 2020 11:24:28.0444
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: 038212e8-0a30-4803-8bd5-08d8abec4dfc
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 0f0ZKx4d/XiFtvf8JJ4Ojl6InMPzE3e/R7FoyTDzhWDrCx3tFaWP5iz6djayaA+4KGP7f4a3cL3gNRm0S4/gEA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4761
X-OriginatorOrg: citrix.com

On Mon, Dec 14, 2020 at 05:36:03PM +0100, Manuel Bouyer wrote:
> ---
>  tools/debugger/gdbsx/xg/xg_main.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/tools/debugger/gdbsx/xg/xg_main.c b/tools/debugger/gdbsx/xg/xg_main.c
> index a4e8653168..fa2741ccf8 100644
> --- a/tools/debugger/gdbsx/xg/xg_main.c
> +++ b/tools/debugger/gdbsx/xg/xg_main.c
> @@ -49,7 +49,11 @@
>  #include "xg_public.h"
>  #include <xen/version.h>
>  #include <xen/domctl.h>
> +#ifdef __NetBSD__
> +#include <xen/xenio.h>
> +#else
>  #include <xen/sys/privcmd.h>
> +#endif
>  #include <xen/foreign/x86_32.h>
>  #include <xen/foreign/x86_64.h>
>  
> @@ -126,12 +130,19 @@ xg_init()
>      int flags, saved_errno;
>  
>      XGTRC("E\n");
> +#ifdef __NetBSD__
> +    if ((_dom0_fd=open("/kern/xen/privcmd", O_RDWR)) == -1) {
> +        perror("Failed to open /kern/xen/privcmd\n");
> +        return -1;
> +    }
> +#else
>      if ((_dom0_fd=open("/dev/xen/privcmd", O_RDWR)) == -1) {
>          if ((_dom0_fd=open("/proc/xen/privcmd", O_RDWR)) == -1) {
>              perror("Failed to open /dev/xen/privcmd or /proc/xen/privcmd\n");
>              return -1;
>          }
>      }
> +#endif

I don't think you need the #ifdef here; instead just add the new path to
the existing if, ie:

    if ((_dom0_fd=open("/dev/xen/privcmd", O_RDWR)) == -1 &&
        (_dom0_fd=open("/proc/xen/privcmd", O_RDWR)) == -1 &&
        (_dom0_fd=open("/kern/xen/privcmd", O_RDWR)) == -1) {
        perror("Failed to open /dev/xen/privcmd, /proc/xen/privcmd or /kern/xen/privcmd\n");
        return -1;
    }

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 11:29:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 11:29:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59829.104900 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuDBq-0002jD-UD; Tue, 29 Dec 2020 11:29:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59829.104900; Tue, 29 Dec 2020 11:29:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuDBq-0002j6-R6; Tue, 29 Dec 2020 11:29:22 +0000
Received: by outflank-mailman (input) for mailman id 59829;
 Tue, 29 Dec 2020 11:29:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dLv=GB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kuDBp-0002j0-A1
 for xen-devel@lists.xenproject.org; Tue, 29 Dec 2020 11:29:21 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 099a4efd-01cb-4ac8-bff5-e0e7c5d9e988;
 Tue, 29 Dec 2020 11:29:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 099a4efd-01cb-4ac8-bff5-e0e7c5d9e988
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609241360;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=a8Ztkz1UMWCZmzS61OzW+QNF9alrjdw8n54XBEYSpuE=;
  b=RiWmYWdcesbyG9bWjvMR96rXdAT4bVm5bu2AkylKSaj8sNIx8EdE8XAs
   lzNH4VvJU7250c4S5wwBCMHKx+FpJYhzL5ahCYFytAyuPPrYUAlGZzGLj
   FbPQbAmDb0pzWt+lEkfe1QirbGYAjfgZNHX0As3oN+3D5iCkjwrgVqSqu
   c=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: AWrU4gucyRXPfYMFKxFUt6VzKCsSxD95JZ9SRL0PDhA1MXBRi5IlY7H+2XGYT8glSBVVmvb1jY
 S7V4biJ/ueVjDftBT6nA1h/0f7ytyxpd0H0onGriCgXL9RplCgbSnkTTcYeqrROb21HYl7hblD
 LbtZceC76PzNm7yDJ7c4scEiBgUdPq48zltSsnOjMjsuGi+Rif2N23j7mxcwhfFbNNuKdZDn9P
 sfjDoTH56YpMjncF40J0Gil6mInbRqJADBUF+YDlA8qXcYll6BB9L0XLWN7628sN4mUdX2Z+Xc
 2PM=
X-SBRS: 5.2
X-MesageID: 34317430
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,457,1599537600"; 
   d="scan'208";a="34317430"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hmVLPEuxXrARCumGYLXRyhIINntzSjEHfWbG5XxcfMK7/4M28sBTi7EKtQEs6w3a9DmNYCoY5O4h5Kgx3/TVMq4RH6Hd+0WUysmwR5pcf/sSMcYJR9MLuoQ1EvlKQgFDc5KlYnNmsl6yte4IIYd9TP5l8ylk6HnHo6MIOA9Q4rlS6YnlDxu3S3oNBaAYStrJF0WduiQheuO6Q8ibKOLENZpMVfE1RlCNpujU/vUqdYJVPa1/x35tO0n3Pgx6JijXWIoFZlNgt6uxqLKSPW0W/GnsHS7toMNJ52Ovzl/v+NIIlspwKGH1etHgwT8fLfUOFey47q3xDwNWs89av1AHhA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kgca5Fza70s/EspM7qClainEzlG2M9d2xMJMtTqb8zg=;
 b=P0Q9YNvpo6y1GyV/e39s605g1oKRWkCMgd03D0i0cV6mv6JdxTmnVv0wdCLLZbiEBDHetAt0rLwpaCKEU2sTOp1z+Nr66Tybp7bKJzxHNxE9bNhiEwX4IH0i6kBkaGrNOtXYPdd3LqPRCX/DaA3Z+Z6zbbCT2exWsHyGgb3+bbhRmLewy1LhTYSj+FHDZJ6E64RZDepHpes0ziaOVkT0cQsd5b71R7S0BMzu5dUv0Ws3Hx2GDSgPzjED1vBBTa/8O7oBUZs3SnTkjBaSydj80Zxs6RMlCti4nTRdLf1vfazCs2AwGKOBfGREW7bvLS4lzs0yXKIpOCwNjy6jhb1xaA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kgca5Fza70s/EspM7qClainEzlG2M9d2xMJMtTqb8zg=;
 b=YDKonI0z/h3J0ggCfZpz98TBp4gM7NTpszLWxWLBT3k04/ySZXdzFwSEk5HPQ7H6IeyU2SibB6dPx46TbTwLkNFwWozLG3eULk+iR3WXh7ylLggsgDfNYTqYOuJn0WNeXU/4nOwsQGaNT1WIEfiKOGeCuGCfUuFSbc0To99uGb8=
Date: Tue, 29 Dec 2020 12:29:09 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@netbsd.org>
CC: <xen-devel@lists.xenproject.org>, Ian Jackson <ian.jackson@eu.citrix.com>
Subject: Re: [PATCH 05/24] Introduce locking functions for block device setup
 on NetBSD
Message-ID: <20201229112909.kprjtysxkg4p6y2i@Air-de-Roger>
References: <20201214163623.2127-1-bouyer@netbsd.org>
 <20201214163623.2127-6-bouyer@netbsd.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201214163623.2127-6-bouyer@netbsd.org>
X-ClientProxiedBy: MR2P264CA0100.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:33::16) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c70ef843-7227-4f77-08f2-08d8abecf900
X-MS-TrafficTypeDiagnostic: DM6PR03MB4761:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4761BF5E9A7E1304F6F4CEB18FD80@DM6PR03MB4761.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: mO4K0ZeZYa01Textb7AMR7EPnruSdHvubFaOJjOB22leDgdfbpVgdarFjiKf32N+lXqxYh0SthvKdBLU/3yhpo56q6mDZtnRYJOYKiGxkl0cPgKWR4MCqAu9MIT24NC/qcQTwk0+qwxM2bebBsYqmMnYxuNOQqwS6axiTAt/jf7mxU7xVGKOKQzuwZFgm67bfp7eowFI20LwY2V3IYGL83NmS+Y0cYgRW+H/yLU6KJidzWq++J8eiiOnPXiY+5hbNyMBA2k2XpPWNdCBecEommWDVhiFshalm4leV3yfV5Q4ZMQscrhDm9zd2b3JCc1679qDwss3USlCNM7H/PnH6XgV8447U9z+cTANrNyzpaLNbAgbPk9gaAc6LswNav8HoGGEkBc8xAQw4k1bVtlrrw==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(136003)(396003)(346002)(39860400002)(376002)(366004)(8676002)(5660300002)(478600001)(6496006)(66556008)(33716001)(8936002)(6916009)(316002)(83380400001)(2906002)(186003)(16526019)(4326008)(26005)(1076003)(956004)(107886003)(6486002)(9686003)(66476007)(6666004)(86362001)(66946007)(85182001)(4744005);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?T2xPZld1V3N3RHBHaUY4UXRSa2lydGtwaUJURUR1UmtpbkN2b3BPTk5pRzZm?=
 =?utf-8?B?K25PT29FdysyNktiM2NIM2ZPWUljckgyNWF1V2F4RzRlOWpJU3N0MXdubW4x?=
 =?utf-8?B?V3E1bW1yTGNEclcrL2ZhRFhneW9VWW5FRVo1WUNDVUI3anRIRDBlb1lUK0tt?=
 =?utf-8?B?NDVTaFNFY2VCQ3l2WFBxTXlEUHFsVWV6Zk5Fb2ZUSDdRN2hOUU1TSktlaWZu?=
 =?utf-8?B?a1JKMEg2MWh6L3FlcUFDWTJ5NEN2Zjl4K0x1QW1kMjRYNW1YdG9sYXQ0cmxk?=
 =?utf-8?B?WnRwUmZMVVFUR3QyWGFOL0pESjNaaDQ3VzlqUDdXdXBxdmRGcGdTb3E4Qklu?=
 =?utf-8?B?UnRqdENtTUE0T3ExTGZRdU1aclRraEJtVUdpRFJnenhacm80OGk5KzRBYitE?=
 =?utf-8?B?NEEzd3BvWUV6RFAwSi9kTnRkaFdqTERXWVpySU0rUTVQTUdXdGZvSm9DWnZ2?=
 =?utf-8?B?K1FBZGErdGE1N3pUeGljalMrOUdkWHB0RThSSmM4TitmSGhkeS8vcU5ONkdp?=
 =?utf-8?B?Ylp1cWt3aDNJeklJMHZhV3o2bWplN1djWmJ5L3VZdXd3bGZ4SFhjcUFnR2kz?=
 =?utf-8?B?YnZqOVFaVUZTb1dCSjk2d0NmT0hNWEVpcVU4TFhPaEdHaFRVbFhLMXJVTW1w?=
 =?utf-8?B?RmdXV2ZEY20vQnorWGZnam5aTGVTSlpBdzJwNmtFSEV2Skl3c1lwanZMQUYr?=
 =?utf-8?B?QXNwS3VYT1BSWnhBY3JoR2RXTjhQWDBNY0dRNm9IaDZmRFpZM0dNT1hQVnRU?=
 =?utf-8?B?ZkRNeFZFVHlNd000bFMwdnlWVGU5TEZjZmdXaGV2cEY5UFJhMkxCa0hNNm9h?=
 =?utf-8?B?V29KQnRUb2dOTHp0ZlB2SEd6eFplaHhKK0Q4bUFZOXMzT29UNW1TNFlkVDI2?=
 =?utf-8?B?WG1qcjVNYjFZNlQzUDJwUE9GbCs3M1FxcEd1RkFpZW9SQWxBVURxdjQrUUxW?=
 =?utf-8?B?Rlh2QWIrQ2NQOERHZzFpMG5YZEUyZExTK3hGcmljeEQvY2ZMTVV0OGdOdEhh?=
 =?utf-8?B?N0QwSGNGMUN2ZWZ6elY4VUFIdGNrTzR0S1ExZEtkdzJUcjY1bS9xeDd2Yk81?=
 =?utf-8?B?TmdtNVpzNVVGTkd5VWNEMnlFVG50R2lxRkdoY1ZzZ2hJWUphMko1ZEFBL0ZX?=
 =?utf-8?B?MHBjempHWXpaTFVOZm1NK2tYVVd1dHZSV0JsV0Rvb1NHTmg2QnhZZkRpclcx?=
 =?utf-8?B?ZU1DcDZzMC90WkpxYWtpSk1TL2hQVXkvdUc3RGZoNkE4Zmc4aGxrbHdKb1pB?=
 =?utf-8?B?V09rb2g4Z3pQMHpHMFdaWFFSdUdLNkVnS1ZXdTFSTTlycHJpSUN6T044OXln?=
 =?utf-8?Q?WMEpGMJRZGS5RTezdsruiFRCU80AnpyZkO?=
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Dec 2020 11:29:14.9841
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: c70ef843-7227-4f77-08f2-08d8abecf900
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: B0iv+zim3jpfcrcV/JVC3jzWBzifmrzirrqf5/S1gc0NkgZSZr0+PWy6uCX9eHNVZmNapV98tWSnISfjUzl5FA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4761
X-OriginatorOrg: citrix.com

I think you want to CC the tools devs on this one, especially Ian, who
knows how the Linux one is implemented and can likely give valuable
input.

On Mon, Dec 14, 2020 at 05:36:04PM +0100, Manuel Bouyer wrote:
> ---
>  tools/hotplug/NetBSD/Makefile   |  1 +
>  tools/hotplug/NetBSD/block      |  5 ++-
>  tools/hotplug/NetBSD/locking.sh | 72 +++++++++++++++++++++++++++++++++

Seeing the file itself, I don't think there's any NetBSD-specific
stuff, so we might want to consider putting it in BSD/ instead, so it
can be used by FreeBSD also?

I'm also not sure if it would be useful to Linux folks?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 11:43:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 11:43:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59835.104913 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuDPG-0004SL-6O; Tue, 29 Dec 2020 11:43:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59835.104913; Tue, 29 Dec 2020 11:43:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuDPG-0004SE-3C; Tue, 29 Dec 2020 11:43:14 +0000
Received: by outflank-mailman (input) for mailman id 59835;
 Tue, 29 Dec 2020 11:43:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dLv=GB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kuDPF-0004S9-4E
 for xen-devel@lists.xenproject.org; Tue, 29 Dec 2020 11:43:13 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a7b6efa7-d8a3-4950-b859-32128417ae2f;
 Tue, 29 Dec 2020 11:43:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a7b6efa7-d8a3-4950-b859-32128417ae2f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609242191;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=KWPFtFxBXePA4cAG+03PfTXpNYIv2vY7JHqQyKx9W20=;
  b=Dvcz7HyQE9v2kEJM4RINu369+YocaUuAQL/BU14EqoZF5kTjZRe7USq9
   X5t2wbo+nDALuO9P/vRs27MY6++S58n/oNrGSdLa4JXWa+XKB+ZX+i9QS
   A/xlg8XYuPQREJmSooMMvUaBUGSHwe7KJdyj148cMD84RGtkV4pFWpgMf
   g=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: RsFMhJsbLLJXnyKrtM0SyOgVoLe39n9O3UQg8jPh1BZ8kXeiYnwaQmumw/ZgQCUjjTwqQx8UoH
 aadCuDPdF6nEQ0Xo4l42/MQtPIDoNoy86FFWQWvtozzBGPlXxipk4Y/KOvMbEqsrd11IW30pwa
 FySwOvJang5mnLUuRRfcxZxDF108Z6oLgzDkfZOZmEpSN4JfO1fG0OhLneZZWT6xyMZjzFXXP4
 mzQ69WrtYA2Z7llHHJVzyjEmuoAZy2ecOare5C4guz+JQBHJQctgIHqp5v+Kkmrm50ecCFD8MQ
 fgM=
X-SBRS: 5.2
X-MesageID: 34317885
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,457,1599537600"; 
   d="scan'208";a="34317885"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GU9tc0i8QasKwo2yBlgev/a1M6oCyAp6XTaJ1wXjbKOGPLMFDLZsmzkLDewd8zvXhXereifQLMb+EFAp3qpw3ycUb4Zqz5irMdUbgJibUsmgNXTv9U40UBBIxJqqXInqyBtiDbwkbU2x+rUI5fPy0V1fsuS4ywGmumj4c1RD2jfYHeDkwUW/BvkxN6R2Bv1EU2ufYceYgb0/UKCosozsO1k/xsKR9B879eNGaZk1AdMUcA+oRDcix72py0qJBJQjnxBsXNhcBDVQDAOvwLUqV9qn21aug/GVu04eBEGeikq1a6rtjsrDyCwrPks2dg6Zk8IES+hQSIFDCLIP6uoVng==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KWPFtFxBXePA4cAG+03PfTXpNYIv2vY7JHqQyKx9W20=;
 b=LKqY7SnObjV9IKBpjX9cUNcIkjxnfmtG6nvxHZSlpAh/6aXm4u0cZ4Jj8Hn8V14vZdbNHRkcINIw/xJBjmAj5dVPND2RQaObCevC3xCvx1aRIuMv6IUazrbaaaLqsa+YKe50CUCVYwHaMXocIHrBp6ofTofE1O64hUjF2i9b4ePJ0kYzzKAb+RiLqrYCflyJeNVtwJTM5BP2W5BemUyiEex9+RU0Utt/K3eTSeBCZkgxcvjcqOiCG+EdFPaJUCokL+aUE6tDSQA9cpjXDDoZoxcl869UpevhsVLPTJYvBXusdOvMd/03FMNWx3eJXX8YLWDOaEoTBR5X3A2JFzNHCQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KWPFtFxBXePA4cAG+03PfTXpNYIv2vY7JHqQyKx9W20=;
 b=XE9sqbEkg7mLxU7uf/FhR+zrYMT+l1wwmZLfVITErYVeRSynF1LZ+0lYztsAuv9/9lt4GE8Ec6ZUARWCZrsAC37fAwOpgivKojjOG604Mq26Nk0z09iJrrlr+yhe3lYW+30KvLzk3JYiGusPqwrzEPH4sAWfaaYeokhq0zThlpQ=
Date: Tue, 29 Dec 2020 12:43:02 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@netbsd.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 06/24] Handle the case where vifname is not present in
 xenstore.
Message-ID: <20201229114302.kqszcnb7ynk7enin@Air-de-Roger>
References: <20201214163623.2127-1-bouyer@netbsd.org>
 <20201214163623.2127-7-bouyer@netbsd.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201214163623.2127-7-bouyer@netbsd.org>
X-ClientProxiedBy: MR2P264CA0158.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:501:1::21) To SA0PR03MB5610.namprd03.prod.outlook.com
 (2603:10b6:806:b2::9)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: bc041d90-1242-42ae-dca9-08d8abeee9b9
X-MS-TrafficTypeDiagnostic: SA0PR03MB5417:
X-Microsoft-Antispam-PRVS: <SA0PR03MB5417C1F2D6CF95A057A586488FD80@SA0PR03MB5417.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: k85zy4EJOX5pxAAWtQNjailGb7AhJVXsfYYkJ1m6Js8spKbRDV2ia9+c5kNBryuU5VHPkuN8JmaNVpo0yJn5evbaISAUMq79XsfZeBC8HdC7N0BbKUUw2wE8to0PiMSXePh5NMh3t0l4A3hF+f0djC1JWEtSq+YT00ZSNGPDJAEGh7xrkFPtciOryAfZClMGfjrHpLQIUHiPB52HWGKubabP3BvLcpeG5mJcHtsfvT+lWEECe7QwBbECV15An6qMqeURx+S5rybUH15d/+VcYGSi30fLLYlrfgmcMFLytPTO3HQINBEKaSBztWiugLkDWe+abgre4ijBXMsqcHhgs4Htgdw7VuEI3SbiC9L9NsDxs5QAkRdVCOJYE808a49a95tPRb0r2ACWyDAulPKA9A==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SA0PR03MB5610.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(346002)(136003)(396003)(376002)(39860400002)(366004)(4326008)(956004)(9686003)(66946007)(86362001)(85182001)(16526019)(186003)(6496006)(26005)(6486002)(5660300002)(6666004)(33716001)(8936002)(1076003)(8676002)(316002)(66556008)(66476007)(558084003)(478600001)(6916009)(2906002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?Z3hvV3dvMFlEL3VZWWg2aTR6VnA1UVZFcDNCcW9YaUViVGd2UXZiT1dxTUZh?=
 =?utf-8?B?L1dyV1lFT0RuMGE0ZCtNMGlEZXRmYTQwL01iM21sOGJqWElIUy94MVlmbmpV?=
 =?utf-8?B?Vm84NFlKSGE0VDBxSERlczFsZ05oZG5XKzlDQXBmV29HNHNERTNRQ2NKTUZq?=
 =?utf-8?B?TDEvZGlINTQzc1k3M0NoNWxLVmtjaXhXK05UQzUyUjJhQWc3NUlsKytuUy9C?=
 =?utf-8?B?SC8vSXFMNlVVOGd1dDIyRE1VWC81MlhCQ1NpejdLc1hiaUJYZDNremluNjNh?=
 =?utf-8?B?blI0ZHBqakNCc2phdUF5TTEvQTlxTmZTZWMvdFNUbG9qVDREaW1hOXhGSjlu?=
 =?utf-8?B?VmR1SlNFbGx4RjgrNXBZTkdibkZlZDNIcmZjb244b2pGblZqZlhrYkVHaGtu?=
 =?utf-8?B?azNnV1hIN2JkKzR5ak5MUkhaYXFudmJ2WXJDcHQzV0JnYzh3cXZlWFAwcmZr?=
 =?utf-8?B?MXdRUEM3S2FQZkpvZlc0TE9pV3BkN3d5NWVqRk1QRWhmNGd5WGlrSlBURkti?=
 =?utf-8?B?Ymh4NnFXL3BJd1AxTC8zSm82emQ0QTBkV2FVR0JEWHB3Wit3bUFDaWVrYXR1?=
 =?utf-8?B?U0FHUnFTcWYxMUQxTjZFVkZtamdRd3lGNUZhazJTQmEvMm5UOUdlWjY2R3BH?=
 =?utf-8?B?TUF5a01aU1VSTzhTWlpPQ0NuaGpJSjlRTlFaMmFVc3dVMk12MGJGOHd2Snl5?=
 =?utf-8?B?dm52YkdtRjZqMVduS3ROWUM5eTF3RjlrK2VkQVRzOWRNZitNU0hIUkl3eEZm?=
 =?utf-8?B?NDQ2SnRSaysvd2V3WjJkQnk1bXVqdTcvSCtsYXdZd1FmUVR2akJkMUZoTGd0?=
 =?utf-8?B?VGd4UEV3TGFjc2w2L3BGKzQwR1JxWmM2RHNRWTN1ckh1S29BdWxCaTVDdEVY?=
 =?utf-8?B?cEp6YzdOaW1GS1UyS3JkQys1L2ZhaGFHUzUzUHJ3TGFPUWhNU05MOTNjMlly?=
 =?utf-8?B?dVkxWFpwcGNhbmx3UGVLOTN0dmxSVW9qaFl1em5tWEI0Wm9jaFBQdDdRb1Fp?=
 =?utf-8?B?MkFQZVp5c1hWSkR4MUdObVp5SzVIbVp6M2ZBeTB1c2QwUlhPb3o5UHk5d3hi?=
 =?utf-8?B?UlNFY1c5Z2VMRFpvb0p0ZFZwMmU4bE5qL09tdS9yQlAxcEdvWWVKMmNVVnZK?=
 =?utf-8?B?a3VJYUZDUUt1MUhaNTJNUlJ4U2d3N2hEbDFsQjlFNXZKYWd1OWhucUxLVWhS?=
 =?utf-8?B?ODJpdXl2dE5OQzJKcUlwZUExVitDNEV6WXJSTUZnUGlvSG4vYmpZN0xpS3Vw?=
 =?utf-8?B?WVJqUFVhUVh4OW0rWm9ENURJcW4rOTdkZWlIRHNoZitXRkRYdDA2YndJTkYz?=
 =?utf-8?Q?5PUBksyxlSmvgenp6DYjCTpkEbRbWhtQci?=
X-MS-Exchange-CrossTenant-AuthSource: SA0PR03MB5610.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Dec 2020 11:43:08.3787
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: bc041d90-1242-42ae-dca9-08d8abeee9b9
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Up05MB1eubOQtQzmHjIUdh3YHGFhsSWtGF5AM90Gxuhw4Kdf/sCrz8MaPdnN5u9TNx+cIV3zpRzNI43GlF4vKg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR03MB5417
X-OriginatorOrg: citrix.com

Maybe it would be easier to just fix libxl to always set the vifname
in xenstore?

FWIW, on FreeBSD I'm passing the vifname as an environment parameter
to the script.
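
For illustration, a hotplug script could prefer an environment-provided
name and fall back to xenstore. A minimal sketch; the $vifname and
$XENBUS_PATH names are assumptions for this example, not necessarily
what libxl actually exports:

```shell
# Hypothetical helper: prefer a vifname passed in the environment,
# fall back to xenstore, then to a generated name.  $vifname and
# $XENBUS_PATH are assumed names, not the actual libxl interface.
get_vifname() {
    devid="$1"
    if [ -n "$vifname" ]; then
        echo "$vifname"
    elif name=$(xenstore-read "$XENBUS_PATH/vifname" 2>/dev/null); then
        echo "$name"
    else
        echo "xvif$devid"
    fi
}
```

With that shape the script works whether or not the toolstack wrote the
vifname node, which is the compatibility concern above.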

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 11:46:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 11:46:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59840.104925 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuDSh-0004bm-Mq; Tue, 29 Dec 2020 11:46:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59840.104925; Tue, 29 Dec 2020 11:46:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuDSh-0004bf-Jr; Tue, 29 Dec 2020 11:46:47 +0000
Received: by outflank-mailman (input) for mailman id 59840;
 Tue, 29 Dec 2020 11:46:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dLv=GB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kuDSg-0004bZ-7b
 for xen-devel@lists.xenproject.org; Tue, 29 Dec 2020 11:46:46 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fec66728-40a1-4582-85e0-36161d8c38e2;
 Tue, 29 Dec 2020 11:46:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fec66728-40a1-4582-85e0-36161d8c38e2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609242405;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=Wt1KwkCOTf6TgPktf4Nm8IkBmAd3wAm7/Kr12JadmJk=;
  b=I4OrbJIjrkdCLurXdr5rzgQwKLbmz+1gNXeA5NMu5W515jQRlpdTaJhi
   cKRCmNPiNjQ3kezzrSGHEk67eILtwFNJA7lnY1fbG+TdJQdrzl9bel/uW
   deTygmz3rEE0kyOOXAgycbbT+IaO/fd55WkMWVCvBHzAptvULEOs8uENJ
   s=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: vX3MnGvdH8wW30MBaXLhZ80PyVrl96swx3n9Uoys0TP59EZ9EFRiqeUy01d5ANLDbbp0U2eqAU
 S0exuNIcvValUXpcpeIq1Y5Mz7wYxMjGZ5+01T1WD00wpr24kCHd2zW46FxcaJfDnOluXSAzwO
 fz8GlvYCGUVBho6Txm4P0funQME520C+C9d3w/Gaf2VjiaNASXc/eCI73+zkgL3QcOueVEs/t5
 HCkbvKGL6tSzjI7G2Qes4wd61CPEUFxznGJbWN6M19m/s3i+JuP6FUSCNwx3jvHvYlqovmTcq2
 FpA=
X-SBRS: 5.2
X-MesageID: 34317999
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,457,1599537600"; 
   d="scan'208";a="34317999"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Scyw1AViVPAsGQr3YeFXWuJ+yBe770i6tFyRc1eH5N4Cn4MndZJw4XJjJkaxeobl0iM570xK8wr+qw0zG4KmN8+ARA9H1ZdQlwcrLjg2vJPTNQQjgEWe1R1VmFclInPc5YpmNY1jjKxXiXx/aF2qqVFnQIew3QJIhrTzZ00GdhvuezDQELWsTIyDKuzy3eJjtX5ZAjZAiC280EecwanEx48AJCAn+NXyyUJe4iDYWn9SKUwTD+wZQmxVj0oL1TcKilORjQh5gtjyg4Bn4Yb23O2SA7a9Lz/O96sftat9mX+yvUtuD4NjR6niEjVwbytmgbG0aSddqjINVwzN9VrtGA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Wt1KwkCOTf6TgPktf4Nm8IkBmAd3wAm7/Kr12JadmJk=;
 b=GPFpLbk7CLeAW2FSWT58EA7xKQM4X80duyDduCy/M5ZbjhY5eZaJAs59L8pTWmp0eDAXFi1p01cNtreGjqJkdm/bBvx6RvXB0USafFlo/T2P8XBzpCO1zjsa5KspqkSdBcwBvXe1f0boaUpxKFL6rcPz6AJiLJaHnM5apSX5B+enCX55UyDUHMhJanl+FO7mEpD7M7sovnTHVJhw9c/Xrb8IusV40gFljH5Y7rNyi0GLzIk+MTVtTcWuOqPnwBZ/g804H0Ucp2IqywAHpSPBvLlJ+jGEauWlaAeK0jxYoR50CWHZqv0RVZ9OgHrDO3dhRZfeTnzOXE96Sc5WD85PXA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Wt1KwkCOTf6TgPktf4Nm8IkBmAd3wAm7/Kr12JadmJk=;
 b=U9u0LrsGritspG9F7a9YzxM/pxacg9GsabcRPDXsFkftsS2VziPM/VKsLPQCm8+3UlYIdr3c2cLQLUv6YTwbtRDjmfCjhhuBpi8liNWjjjz5jTrQ2Xe+SByKl3lmgR538fGj2k8zCXNoVlWzLSRHcby9kDoopw0TKjsX+xPD0Cg=
Date: Tue, 29 Dec 2020 12:46:38 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@netbsd.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 07/24] Remove NetBSD's system headers. We'll use the
 system-provided ones, which are up to date.
Message-ID: <20201229114638.yegfswyqzhz7tj25@Air-de-Roger>
References: <20201214163623.2127-1-bouyer@netbsd.org>
 <20201214163623.2127-8-bouyer@netbsd.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201214163623.2127-8-bouyer@netbsd.org>
X-ClientProxiedBy: LNXP265CA0073.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:76::13) To SA0PR03MB5610.namprd03.prod.outlook.com
 (2603:10b6:806:b2::9)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 82ff1185-5ee9-4a74-36b7-08d8abef6994
X-MS-TrafficTypeDiagnostic: SN6PR03MB4590:
X-Microsoft-Antispam-PRVS: <SN6PR03MB45903D4DAB5A7D666AEE10668FD80@SN6PR03MB4590.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6790;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: lTr2aOShnahO9/3Yf/XTVr2rdZWxbIDIQ/UdSpLXcbBPNAMwqtpwj8hNrfxY97Q9wOyI6w0jSy9z8R13oV6l/o1Humxb9vgbEjIa6XFGb2H5oYvvcHkzEY2cOS0rKoZmlZuEC/qqWAo8W65nHTIta/yvq748QVQLN5pI7KTU7cF3MiZkVRc2gZm40E3KkEvH3e4MUrfNkoIebZQga0xApsCEGbvZCEzfQpfAauJhUHf+42ifUdVSQfvaTLWxYzMTP/x5EqF8Kxm1YDmpunlXfJVlW4PmC6qpAj9tvJ3dqwBCL6gyeDyi8PsgzT0adbH9o5XZ+UnBf4iOZ0NmPAEBq/FhoULBSzc+X+8eR5cBDYRx4rHMRdvurG2rgWVM03gwBaDdUiMi3y6wVVoYDV+yeQ==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SA0PR03MB5610.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(136003)(376002)(39860400002)(366004)(346002)(396003)(6916009)(33716001)(66476007)(5660300002)(6666004)(16526019)(66946007)(8936002)(6486002)(85182001)(26005)(956004)(478600001)(558084003)(86362001)(2906002)(186003)(8676002)(316002)(6496006)(66556008)(1076003)(4326008)(9686003);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?Zk9OTlNUQzhWV3U5VG0yenhCSWRyb0QwRjJhTW1EWU9YUGtiaW04ajZseTE3?=
 =?utf-8?B?TEIrdlNQVHVwa3dQM0l2M1pkSGFwUzJEV3NJelZSQXJPODRpbGc5VnZHS1dR?=
 =?utf-8?B?MnBMSHJCVmsrZ1IyM2gzZ3kvZThEdEEwNVNzS0s4YlB6MkU3bWdlSzg3THBr?=
 =?utf-8?B?b1lHajNPNlZMUld2V3FGV0xSUWpoaXhqMEwwejlZcUpOdVJjRFd4bjUwMGJE?=
 =?utf-8?B?aDhJSGNkVTZLMEx1UEVEdVNtNWlkZVZjeWdoYldJYnpjSUs0RkNUVEpsRmd6?=
 =?utf-8?B?VUdOZlpXR01uWTExMEJ2blNZc0tCbi9Fb3Z5cjVkdHh6R0R4NU55M1ZrMHN6?=
 =?utf-8?B?bnZycVFxdm1zcXNkMWV0VlcrZHFnTTVERUREQXJtR1BJTVFjQ0NjUW1hVWtx?=
 =?utf-8?B?RFU4RjNjY2lCTkw4dEMrdFUvZXBYWXNwa2RpaVlEY1NWSTZONldNS1EydlZm?=
 =?utf-8?B?SGwxeU4vWHIyRnF3WVlzMVkwS1oxOVhBTVF6cU9JOHhKSXlXR3hYVFMwZXcx?=
 =?utf-8?B?aGIrYkJKWmlUemZWOWZFdFd4WDI3QjNpMTRPWmdRbDI5cmNLSU9GSlZqbHp2?=
 =?utf-8?B?QjVVbExuOFZ6SjU1clpUTEpHZCtybGlVRldnN3R2WVRrRjRHblJLR25GUG1h?=
 =?utf-8?B?bjB3d3dlSHRhOXFhVk1rMHVuK2VncWV4MXBUOGR0K01SZHp3cHlGcnAvNmZW?=
 =?utf-8?B?Z2tzOHhMakw0cmNqbE5ja2dUeFpUUnZMdXZ4czlmVVdGVmNLM2JmNzJJR2hM?=
 =?utf-8?B?eTM2dFdLaFpCNWpTei9nVUVjNU9zUExvR0l4TFBRZGZrSURMK05TU1RRK3Mw?=
 =?utf-8?B?eFBrT1JUczc1MnJyekFhVjBOUVFLSEtnY3QwREpmVVBjNFVRdU5PTVpzMDVW?=
 =?utf-8?B?Nit2QXRNajl0ckhqbFZIekd2Y0R3K2lKdnNscDkrb29kK2hiR1M2NEVzRjZn?=
 =?utf-8?B?QnpiS1lLeEJMTXVOWWFBa0tKOXFtQ3l6eXBOaFJEZUEyaUs3cXY3MHUzQnpP?=
 =?utf-8?B?U01aOTYyZmRMWVU2SzhObm1yc3oxR01PWGE4SmxLYXRiSU4rVDR3NFZkZVFL?=
 =?utf-8?B?ZG9CUlN4Z05yTE1vNXhpZkJuaFo2aHpzMnNRQWIzLzZmb2tuR2hZSjliUVhX?=
 =?utf-8?B?ckluTUZ4TFZqYWxRRjFGQWhQUnJ5TlBVRHh2MHByS1hnbkc5L3RUcldSNmxn?=
 =?utf-8?B?VTFMVnZ0TXNUS3hKSGF2RUNYTkpmbGxVbTYweW1BR1Z1TmR5UVZOUUkwc1RO?=
 =?utf-8?B?blpmeDArRHZnbFY0cnN5TVZUSGJvNTBmQ09DcWhNQ3diQ21XNG8yZXo1R1M3?=
 =?utf-8?Q?SoQ3y3eC6VTkBnLTWxdg6BwsIg94m4m7/x?=
X-MS-Exchange-CrossTenant-AuthSource: SA0PR03MB5610.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Dec 2020 11:46:42.8893
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: 82ff1185-5ee9-4a74-36b7-08d8abef6994
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 2JLWxTrhEq9w4T5uO7cPwBqyYtM72ba+yZYLFSQn9+xGbjTkiHb6xYx8L9H1RTx0k7ilZ/aFmjsWiPhOaZsdoA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN6PR03MB4590
X-OriginatorOrg: citrix.com

What would happen when a new device (or an ioctl to an existing one) is
added?

You would then run into issues of newer versions of Xen not building on
older NetBSD systems, or would have to appropriately gate the newly
added code to only be built when the headers are available.
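
To illustrate that gating: the newer call would be compiled only when
the system headers define it. A sketch only; PRIVCMD_NEW_IOCTL is a
made-up macro name used for illustration, not a real NetBSD definition:

```c
#include <errno.h>

/* Sketch of gating on header availability: the ioctl call is compiled
 * only when the privcmd header actually defines it, so the tree still
 * builds against older NetBSD headers.  PRIVCMD_NEW_IOCTL is an
 * illustrative name, not a real definition. */
static int try_new_ioctl(int fd)
{
#ifdef PRIVCMD_NEW_IOCTL
    return ioctl(fd, PRIVCMD_NEW_IOCTL, NULL);
#else
    (void)fd;
    errno = ENOSYS;     /* headers too old: report "not supported" */
    return -1;
#endif
}
```

The cost is exactly what's described above: every new interface needs
its own guard, and callers must handle the ENOSYS path.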

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 11:49:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 11:49:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59847.104936 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuDVS-0004no-5l; Tue, 29 Dec 2020 11:49:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59847.104936; Tue, 29 Dec 2020 11:49:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuDVS-0004nh-2N; Tue, 29 Dec 2020 11:49:38 +0000
Received: by outflank-mailman (input) for mailman id 59847;
 Tue, 29 Dec 2020 11:49:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuDVR-0004nX-4m; Tue, 29 Dec 2020 11:49:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuDVQ-0008DI-US; Tue, 29 Dec 2020 11:49:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuDVQ-0002Nz-Mk; Tue, 29 Dec 2020 11:49:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kuDVQ-0006Bw-MD; Tue, 29 Dec 2020 11:49:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CGtpa94ExWgjGgVkiPbgGwoHVFHsqEwWN8ABheOl8/4=; b=Z3wX3UdcaQHRf5qEdV4KxaJCxk
	hm375Ow4if/3PmrhJsqSSHCSgQSSlQ9EDQUzr+cwHo5B1jmVgfqB+3LvNeAVZTbAPWz2HhW6FyK3M
	rBkj20zIV3pqpF2FeOm+5UyMp1XLqJLHEp62yD8WTpRfrH+F+ocYSXgbLUoDLqvJlSrk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157953-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157953: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:build-armhf-libvirt:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    libvirt:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    libvirt=bed50bcbbb2796aa88f1c85a2424fe9bd7944f89
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 29 Dec 2020 11:49:36 +0000

flight 157953 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157953/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 libvirt              bed50bcbbb2796aa88f1c85a2424fe9bd7944f89
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  172 days
Failing since        151818  2020-07-11 04:18:52 Z  171 days  166 attempts
Testing same since   157715  2020-12-19 04:19:22 Z   10 days   11 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu<tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 33734 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 11:50:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 11:50:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59853.104952 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuDVp-0004u4-KL; Tue, 29 Dec 2020 11:50:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59853.104952; Tue, 29 Dec 2020 11:50:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuDVp-0004to-FH; Tue, 29 Dec 2020 11:50:01 +0000
Received: by outflank-mailman (input) for mailman id 59853;
 Tue, 29 Dec 2020 11:49:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dLv=GB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kuDVn-0004tZ-Rs
 for xen-devel@lists.xenproject.org; Tue, 29 Dec 2020 11:49:59 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 059e13af-ac05-40aa-89ec-e6652bc9280e;
 Tue, 29 Dec 2020 11:49:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 059e13af-ac05-40aa-89ec-e6652bc9280e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609242598;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=xudkZ242BlNMwHow26v3H+HO9JNjHcCvzsX08PG9X2A=;
  b=Ts2f5cmj0yX8BmrffPgVvgj3Awy6rYwF2i48urcSLu/ipj7/28fgJwaW
   NkBaCiYJbyIqS0bpwG4TpWbsl3X3xr0URSrfqHq8FwAii0IP+aKSuqC88
   UjC/WeyXdfLO/Ffsa22Ks/yhCBcvePDN1GF7W3dAQb5M70vJStcLfdPAR
   k=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: cmBGtkSMbYjqxWc061Aam89ByexO76D+mHWT5P3fiGwNU99N2W4rto4E3Gw4CK32Z2PSI8uo+l
 vZPJ/sRIWAkTKkGzrR6fq9lWYwmdxWUeaA4feQquHM3ANeBjyDEm7Mtr/eO9qQLfEHoL7+yJtf
 jPcI1WD8GdpmKnrA91h2kx2IdmPJjW45S6gk6RG0rGAjcSlq33tjK/sOql7wB3X+Ces5IHZhip
 TtwKN+7TjvF+q0FOxeAAB7Qkvzb9Y2gAyaBdnOY9DL1SoNwfUkcWLHcj3FX6NivauQhMXG205S
 VtM=
X-SBRS: 5.2
X-MesageID: 34447266
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,457,1599537600"; 
   d="scan'208";a="34447266"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SZlZpmRsOz5kAenm8c/dSWC9QHScEqweH6bwlBYkcsdz4uU0+u+2ZRpzled+eNDtZxkTOfHC3E/d274vOTBGGPunwSWhKGl1ebZ6pf1xDCk3i48m/QEfEVRE6zhuZ506xeJXcZmkkv/tWzqCaLoDieYnHKHadHIfHC/O8PgjMd8XLkJ+d2mBkPZjtj1DYLsIEV/cjQfVTLWIUTWD1dFowiqDxSOX2P2vmGwYeqWwlZ4fJxBepfq1P3nPmLcwlUuwu7RsmnPFTyxsDCduoQFMBd2rxMInaDf9j9i2aJreEXOnKSLjh48vWUhsAp9z/4OWlhZZUoe+3S+jD/es7S3Myg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wYbrHp9iKJiWxaQHqD53D1XUQOZyrtsxasMWiJ7vbE0=;
 b=VBBx71DORQpugel38fbDayr4a0mlZX1p84oK41G9ub2ad5NqKGct8N5c91gZdB+EvP1lvhQM+NaTNMcJWruDN0MSD+ewfiBSwKHdyxa7OjNMTXayK1gfTpoXSPzR3Db9TgyO3LUn6Tn/XrFqv2A+TXHxmkWodtCE4PSeK/49k/v5ewlKdbLYnb0gzjncTVyStnhL08kUBxuXwSWEs1aRYbqYoFtV8ZEFlr/o+MOAarXTtbSoJoVJvLMXc1sKnjp2XLPjmEZ0LTPsDWY/rq+ZxS+ykM+oZYH9D7UxmZt38l0Osm7txIHuKYymfsoSCJOq3oHXpyJReQjmM/ECBJxLHg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wYbrHp9iKJiWxaQHqD53D1XUQOZyrtsxasMWiJ7vbE0=;
 b=DSnFPip3usn6YHc2U8Bo2XCoJDOwsiR63XKXuDXEFTWsgHWzvaOlVcgFon+N3N4wTae/ORMg0F8bAxaYJWQuNg2gRgzHwtR/tUWRUqy+agu0S00kDeQxqBG9bEY6t1quwTtdwUVTteFMcp/wfaBetgwkmB6gfBXOuvcqGT8R2jU=
Date: Tue, 29 Dec 2020 12:49:50 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@netbsd.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 08/24] Make libs/call build on NetBSD
Message-ID: <20201229114950.6b3zrtrmqhaqxft7@Air-de-Roger>
References: <20201214163623.2127-1-bouyer@netbsd.org>
 <20201214163623.2127-9-bouyer@netbsd.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201214163623.2127-9-bouyer@netbsd.org>
X-ClientProxiedBy: LO4P123CA0083.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:190::16) To SA0PR03MB5610.namprd03.prod.outlook.com
 (2603:10b6:806:b2::9)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 11b36d17-9c04-4244-20f5-08d8abefdbfd
X-MS-TrafficTypeDiagnostic: SN6PR03MB4590:
X-Microsoft-Antispam-PRVS: <SN6PR03MB4590F873194D1F228F4801458FD80@SN6PR03MB4590.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6790;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: aVT/PMoqwFiydX3PTBks+cE/FVJgZ2V7q+3f85ZhWi9jKBzF3Mvkl6EVjDW8670TyMXuU2xi1odZk2kOh/FbDCUf57dKhaGGwkVVtnD3D3PmGsgWJAlxc7yuJbr4XSLVU92ZXctfXbM5rkwsNbquWJ/ZjN/UvcEBNLckIi2BpfN5zEI5DxrqDoxBOZJSN9She2y48rR+YNCHOOH9LbX3mZkbaP9mhD842jZGYcEMDvTZRywmg8oWI4bqt7n7srZ9T1boM1EqJpDMXC/XjnX+MWGugKfLOKnjWt9Ov4WQ+533nlSPzopUG1KAvTjc6xYqrQzWBXHzXyauGGe+jZAVbIiSRmzJ9UcrOlwrCopeWLFGgxL9Yc3kaDrGCOzNbQQtijcteprgxkBUG2dGSJ1lBw==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SA0PR03MB5610.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(39860400002)(396003)(346002)(366004)(136003)(376002)(316002)(83380400001)(8676002)(86362001)(2906002)(186003)(66556008)(1076003)(6496006)(9686003)(4326008)(5660300002)(16526019)(6666004)(66946007)(33716001)(66476007)(6916009)(26005)(478600001)(956004)(85182001)(8936002)(6486002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?dVN4R0M0Q0VuOHgrcW9FVkNOdFo4MC9pZnJXTTlZMkQrYzhwYTdER3NtK2FL?=
 =?utf-8?B?ZGllYm96aFM5NVVPOEUzL3UwaTdKbVdyUDZJNUlVVktUN0RqbHhGU2JTalRI?=
 =?utf-8?B?Y09KOGdJZVJPUFhSQzJmYm01TjNPSzRaZFNrenVZMmQ1TlRMdHdYaDFwYVZk?=
 =?utf-8?B?OHJ1aU9wMVB3b2MwTVBEdDhyTmJENkpIZlVnL1RteFo4NllBc3gwbnlaY3g0?=
 =?utf-8?B?LzJZd3FBK0ZOOTJkbXRnK3lxV2JZbmNMZ3VJbStGRVl1MHlhSTRpNnpaK0l5?=
 =?utf-8?B?aHpuZlNTQUpMVERmdDlQZGlXYVJIb2FPcWFWRStKNktBWUlHY2VIcUxqSm1M?=
 =?utf-8?B?a2hZZ29ib0NJYU4wRk43Zy9Ta2lCQVNBdDJqMFIxMlpPaUk5TjAzQmUxT0Qw?=
 =?utf-8?B?SGF5ZHFISkxUZXhScFZ3TkRxUk83czlmN3BNVFdBZ3EwMEd5dnh1RTErVzQr?=
 =?utf-8?B?SXg5Nm8vR0pZeks4bUdpZXB2R05URVFSWGJoZ2N5QlFuVkZ4WHN4bDExR3E0?=
 =?utf-8?B?UkkyUXF3Wkdqa3FOU1hSU2xrenJkZEVuQWFEL0N1UTJnenI0UFZMZWluMVNo?=
 =?utf-8?B?ZWpBZllUdmV0amtna1U5MXNvWlRpWm9QTjkzRkNzVW9ZUjFnby8xY1d0TWNC?=
 =?utf-8?B?KzEyd0srVWJUOCswMmNUMWkvOWFDZThLQmg3amZzRzNkYVpydTNZT1I3Zi92?=
 =?utf-8?B?RW9jZnlsanB5bnU4QlFXZEYrK2NzYVpQQWhmZENNQXFVRlZNTS9lMHJ2cGZy?=
 =?utf-8?B?TmpPTElHMnlZL3NTelJGbGZXd2JxVCtFUmV1Tnk5YzlGRG9YOVVlK09vdXRP?=
 =?utf-8?B?TGFZdzRzYUQrTTVleitMdDhDYW4rSDdnUjB1ZU5XZ09JN2p6eHl1elplZ2Zz?=
 =?utf-8?B?N25DUy9GRUVOZW1iRDRBTE5JV01yL1pEL2tvbEY3UG12eWVWMzVGL0MwRXBR?=
 =?utf-8?B?MHc4QWRJWG5keDZpNm1ycm9JNjVvUHdPb2JDYkQ5WGJ2Rlg4akkwSWdLbElE?=
 =?utf-8?B?Q2VDZWdMak8wVjNSeEs4eUc4dlVCL054a09kamF0NjVXNEhHRjlTL0duVVd0?=
 =?utf-8?B?YTNaaGxMeUVLc0NSSlZIY2s2aFRMK0tPUWlZQVhTdnpCTDZsMlVva2FOT0Fz?=
 =?utf-8?B?WWUvTFhHbnRoZUVmU2dDM3N1WjhPUGh6ZDhsd1lVQ09SOGUwTGpJbXFvV0tr?=
 =?utf-8?B?andDQjBqSzRjRndXazZXbkozVTBwQkM0czRnK3d3V2RCZnBoOWpOeDlRY2dT?=
 =?utf-8?B?TGQ4UThxWUF3NDdPZ3U1TXVRRmZCNVVEQ1p1d2hubHVDcHlUMHQ2ZnV4d2cv?=
 =?utf-8?Q?A+VN+oyDMMzfLnsURVJkRzRl+c2nSRdljW?=
X-MS-Exchange-CrossTenant-AuthSource: SA0PR03MB5610.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Dec 2020 11:49:54.8839
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: 11b36d17-9c04-4244-20f5-08d8abefdbfd
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: VFkTjOO92Egab88Xq0uR96b2bdJ8ivDDxKUgIoBZlXIAg3J90EYccB1WWS1FomN2RrpvpT9AYfkQ0aNUZvLXpw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN6PR03MB4590
X-OriginatorOrg: citrix.com

LGTM.

On Mon, Dec 14, 2020 at 05:36:07PM +0100, Manuel Bouyer wrote:
> ---
>  tools/libs/call/netbsd.c | 18 ++++++++++--------
>  1 file changed, 10 insertions(+), 8 deletions(-)
> 
> diff --git a/tools/libs/call/netbsd.c b/tools/libs/call/netbsd.c
> index a5502da377..1a771e9928 100644
> --- a/tools/libs/call/netbsd.c
> +++ b/tools/libs/call/netbsd.c
> @@ -19,12 +19,14 @@
>   * Split from xc_netbsd.c
>   */
>  
> -#include "xc_private.h"
>  
>  #include <unistd.h>
>  #include <fcntl.h>
>  #include <malloc.h>
> +#include <errno.h>
>  #include <sys/mman.h>
> +#include <sys/ioctl.h>
> +#include "private.h"

Please leave a newline before including private.h.

>  
>  int osdep_xencall_open(xencall_handle *xcall)
>  {
> @@ -69,12 +71,13 @@ int osdep_xencall_close(xencall_handle *xcall)
>      return close(fd);
>  }
>  
> -void *osdep_alloc_hypercall_buffer(xencall_handle *xcall, size_t npages)
> +void *osdep_alloc_pages(xencall_handle *xcall, size_t npages)
>  {
> -    size_t size = npages * XC_PAGE_SIZE;
> +    size_t size = npages * PAGE_SIZE;
>      void *p;
> +    int ret;
>  
> -    ret = posix_memalign(&p, XC_PAGE_SIZE, size);
> +    ret = posix_memalign(&p, PAGE_SIZE, size);
>      if ( ret != 0 || !p )
>          return NULL;
>  
> @@ -86,14 +89,13 @@ void *osdep_alloc_hypercall_buffer(xencall_handle *xcall, size_t npages)
>      return p;
>  }
>  
> -void osdep_free_hypercall_buffer(xencall_handle *xcall, void *ptr,
> -                                 size_t npages)
> +void osdep_free_pages(xencall_handle *xcall, void *ptr, size_t npages)
>  {
> -    (void) munlock(ptr, npages * XC_PAGE_SIZE);
> +    (void) munlock(ptr, npages * PAGE_SIZE);

I think you can drop the (void) cast here.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 11:52:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 11:52:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59862.104963 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuDYd-0005mu-0f; Tue, 29 Dec 2020 11:52:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59862.104963; Tue, 29 Dec 2020 11:52:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuDYc-0005mn-Tu; Tue, 29 Dec 2020 11:52:54 +0000
Received: by outflank-mailman (input) for mailman id 59862;
 Tue, 29 Dec 2020 11:52:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dLv=GB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kuDYb-0005mh-Ao
 for xen-devel@lists.xenproject.org; Tue, 29 Dec 2020 11:52:53 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e378c5d6-1381-4acf-9556-9319c9dc62cc;
 Tue, 29 Dec 2020 11:52:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e378c5d6-1381-4acf-9556-9319c9dc62cc
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609242772;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=Ff6MNuGfvvGg+Zi0RA9AH6v8CtB3DMHchWF5cgvCwb0=;
  b=CEoPHzBa25ZMfuHRSgqcJacIPlMGjPxPp098dCIMvuRq2MB5L5i2pz/s
   oJhV6m1zmGFEemVc2MgTMqWYiVJXzJdJaladQnpGw/BB4yu2pW8iniLEp
   LKnyS9MaHmec3dsTxDrNn6RXQk3NBAzC+rXo3pXe8rrm9k6o4VewoGBdY
   s=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: x4XN6Jd1ePoZpnXy0Aa7J28kPzIeULdzQEqrrwkpSmGuVZb4clME6ABAS/bInvhOh+l9LMCNb7
 2F1dGZvVXafScPTJReu0hzPdk1POtKOz53x6Gqq4TIvNKeMoDFS8Qn4BFqj0SrbY0UcaROmQgW
 hjC7Qxzukn0Cqtddz1XX7S6EV6QSK5yRGyiyRoEo8wRaJ7RsLCBhUtVBkWxFICJfD9Nh0/aov1
 oaNIzeJbTL5/vlBzsVREQ1oSbU96WKRJsALyljuw70FPwEEHoE0IeyWliDqbbOx2KQlPYsMEni
 /7k=
X-SBRS: 5.2
X-MesageID: 35337718
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,457,1599537600"; 
   d="scan'208";a="35337718"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=k/8h129szrsmbb4CoSgjJ1EdQ77/SdeXPTWMWxNUneXDf0OGZHKde2mfZbv/vScp50pUr/NZmKp+XAqK03EM2cEf8dxCag1+Ip/aJy410ELwQEMmRkfp2SP1mu4nogesYKHOxezhQ8LlnQlbG/kNzUz7XlKUhjR4TjjLIwLGe38nAGRfoMd0TLyv2R3Fsi+ILkOhD0/Qx+Babfkvh3ViWgIC+WTUzjhjz5F9bQD3SD1dHrWEjPSJT6WvECzsy/6Wq88exFmuWFnlRVWMZVtjxfhfHaz7ZUxFmWR9zGE6PkqLALMCoYBSE72vbbAthaKAP2cUXi2Tnl8auQQjs2I1cw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FMO+j9fnXhMTz0TphLYNM7Lq3Z2h/FnovwmDaT5f9T0=;
 b=BbzbMKdKBCSKkS4myCieCus/i9vnK8BN0nkztjaAV3d87lPGSLx5kt1IGDucTBG5M/cYTGKOIOUBB/03dwzSvJbFv/dHxujdiot+TuXH+Zk3z2PuYkNstRimnAFVValKhWWfi5KaX5wa3Uuqm+ypMaakrkJO+jW9lHDCppaooVW1P5cYz+l1IfDz9QUYk3pAcnCnXagHt1ioDrux1nP8721f8wviMOMEJpVqNt7sNxliRFYbNIh0wXxJGD5au3gwUUyi9j0TYNh+SAlT4J3XufP++xWBRxxcbZZZ0iQdr2V+ryZRhuTOzyTSJh1WCdwgGEwdkIsDw//rSvVCoTIZJA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FMO+j9fnXhMTz0TphLYNM7Lq3Z2h/FnovwmDaT5f9T0=;
 b=DfOtYjXTDAEmtVWQyDoqZINNuMTHBbVn6nzPIqeCrg8Fbgj0ebyNLPi+4jAsizU/iHdUlKMHTWlJVA9yOXmaBU5YPUB3StTeBX84645ICV+BVRdwh4T0+1mXa4lgqcFIsXIkvtwltAvNRQNhxqYUsSRhdz86I/9eyP/oFEq7Qfg=
Date: Tue, 29 Dec 2020 12:52:43 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@netbsd.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 10/24] Make libs/evtchn build on NetBSD
Message-ID: <20201229115243.itpzsuriclqiljs7@Air-de-Roger>
References: <20201214163623.2127-1-bouyer@netbsd.org>
 <20201214163623.2127-11-bouyer@netbsd.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201214163623.2127-11-bouyer@netbsd.org>
X-ClientProxiedBy: MR2P264CA0164.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:501:1::27) To SA0PR03MB5610.namprd03.prod.outlook.com
 (2603:10b6:806:b2::9)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 3ff57643-8720-4e5f-daf8-08d8abf043b8
X-MS-TrafficTypeDiagnostic: SN6PR03MB4590:
X-Microsoft-Antispam-PRVS: <SN6PR03MB4590ABCCD5D60E35FE547E518FD80@SN6PR03MB4590.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:4941;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: e8mSRC+oU2wOZj2FdNiYLd2613AVyToDeR9y85sf3m98nZDQkd5WVHG2amn1czNOHvp3JF+CS14GY6+0Md6w9zpI/o/Aj0CLL6rpQ0zK9vSeNAXULpHeX8UNDtTRQ0qFGq1WLMHhX9prrkYCbKOWikdwN6lAJCMvq8x9b4k1AWXaXLjH1OXnmkSBJyn4ptP/dEtXwgWyV+sYJHUw/YoaemrRBCmfRnaSycsrmFqrBk3ZqQR8sp5imSfxBtRdQlXEsBV7fgRQFL9PXXvTChyfDbqjzSoQ2JMW+IueFlNgWAKbOkmJnMs1dtimnLHRIGSAS3LVKHgz2y3Tdho9QC+W5hQ3JD5T24p5HFNhv4BRi1m7VxudXbC7YqS1ED8sdRoy/sAE3R+vP65CqYWNEvy1Ig==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SA0PR03MB5610.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(39860400002)(396003)(346002)(366004)(136003)(376002)(316002)(83380400001)(8676002)(86362001)(2906002)(186003)(66556008)(1076003)(6496006)(9686003)(4326008)(5660300002)(16526019)(6666004)(66946007)(33716001)(66476007)(6916009)(26005)(478600001)(956004)(85182001)(8936002)(6486002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?Z09MRTlOTW8rbngwRTliSTBnM2Y5R3UxOHZlQlVQWFpzM0Z4dS84WVlSTERu?=
 =?utf-8?B?eXBLV3EvcWVUMlhIWWovL2hsMmdRV3VLejB1bTBNRENzcjhWekpvWklWT1E2?=
 =?utf-8?B?ZmVmc3RSZ1Y2TVdPVnBFaVBFRTB5RmVhVStmTGs5Qjh2UStUZ2xHT2tDSGky?=
 =?utf-8?B?SE5Fa0NqZ3c1OE5sc1VVdGxERU45ellzZnFMakNVSmlFT04rdlhKcEpEMjJV?=
 =?utf-8?B?dkRUbEk0anp1VVRxM0tiSTVkaitsdWY1WUw3QVJqKzVMWlQyYmtQcDdIRit3?=
 =?utf-8?B?K1lYZk9QVndZQ0ZtL1JndXlrR1NQUnd1MFk0SXFwdjNEc2NVd1BrZFppNENR?=
 =?utf-8?B?S1FpSDdiNDhuSndJTUI4WTd0cHlJSlQ0Nlh1SEprSUdyREVGTkdqUE8weXUz?=
 =?utf-8?B?MEVlSXBoRkIwTEpLWDdDMnpKSlo5SWdFYjdXR0JqVlZCTHFva0o2aXhJWjB6?=
 =?utf-8?B?b1ZNY0d1ZzFUVVcvZi9aZUdNeUk5bm5aSlN4UGFlSjRId2d2V3ZkV3l1M0xs?=
 =?utf-8?B?MWdSbml4em1UMDZycHVaWEJuUUQyZmNXWHp1Q3BlUlc1UU45SEZJeFpUaS83?=
 =?utf-8?B?Zi9iTzN3R1BTZWNHS2sxTlRlcm1qTzB5clEwNkRHZy9lWDY3RU1NNjNyY2tT?=
 =?utf-8?B?ZVpVMy9ERGVsVGlGQnRuZmZzRnR3UG5WRG13aWpjZ3FwZzYrbm1WcjdKenow?=
 =?utf-8?B?cStNbUVacmRlV2tNK0QrSGVnUzRtQlFSUEhBZjdSdFBTS3Eya1pxSTNLemNU?=
 =?utf-8?B?WDVqSkd3QXhUSVZFMFBvUFh5ZlpyQSs1bXdzMi81QjB6MWVYUVJVV05wVmNP?=
 =?utf-8?B?RzlhdlV5WUYzNC9MVFpYNjdrYkNQZHdpdzRUUkJEUGxLTVRBT3Nzc054ekxO?=
 =?utf-8?B?Wk11emM0bEtCd295OFVNN2xWVHpKNnhENnA5eTlkR1M5MHk5RUliSWt5QXVW?=
 =?utf-8?B?SUpVSFRVRitjM0N0Z2xSVitVUXZwb3pVRHovODlCNkg2MEQ0akZXUVJDRFlU?=
 =?utf-8?B?M0JUeXJNV21tNzVjdFVkT1JwNDJ5YThoQVozbVcycTlmMHBKSWdoTVBMb2ly?=
 =?utf-8?B?OHdTWE9lb2w1K2pUcXNWOHRmNllHQVo2b3FyTkJPRmNFc2lHOURYNkhpMWM0?=
 =?utf-8?B?VVcvTEZ1WnBRZCtnanFrNnhpNmFlMEJOM3hSaE9RVWZsVGZUOTF5azIrKy9J?=
 =?utf-8?B?c2wzUkRwRW5aeUp2VmR6SGEvTHkrWmN3RFMyeFdUM0JQelJ6SUN3MXZncmQ5?=
 =?utf-8?B?ZVNGN1hKWlQzbFN5TzZDdHNNVnFoUzZUU1VtVHVoUUdaL3k1T0FPa0l4TVpo?=
 =?utf-8?Q?MnDRnkd6Co2it3XHI10cCVtxDR88pJTIyu?=
X-MS-Exchange-CrossTenant-AuthSource: SA0PR03MB5610.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Dec 2020 11:52:48.9156
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: 3ff57643-8720-4e5f-daf8-08d8abf043b8
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: fZAaMybeY6GRXktXs/6VG4Ufs7TEiNy6F5P3eMW7B7UtHWQB3TKf1atTOMQWiYFPUw6nP7X/3G8zd5J/u9oANA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN6PR03MB4590
X-OriginatorOrg: citrix.com

On Mon, Dec 14, 2020 at 05:36:09PM +0100, Manuel Bouyer wrote:
> ---
>  tools/libs/evtchn/netbsd.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/tools/libs/evtchn/netbsd.c b/tools/libs/evtchn/netbsd.c
> index 8b8545d2f9..6d4ce28011 100644
> --- a/tools/libs/evtchn/netbsd.c
> +++ b/tools/libs/evtchn/netbsd.c
> @@ -25,10 +25,10 @@
>  
>  #include <sys/ioctl.h>
>  
> -#include <xen/sys/evtchn.h>
> -
>  #include "private.h"
>  
> +#include <xen/xenio3.h>
> +
>  #define EVTCHN_DEV_NAME  "/dev/xenevt"
>  
>  int osdep_evtchn_open(xenevtchn_handle *xce)
> @@ -131,7 +131,7 @@ xenevtchn_port_or_error_t xenevtchn_pending(xenevtchn_handle *xce)
>      int fd = xce->fd;
>      evtchn_port_t port;
>  
> -    if ( read_exact(fd, (char *)&port, sizeof(port)) == -1 )
> +    if ( read(fd, (char *)&port, sizeof(port)) == -1 )
>          return -1;
>  
>      return port;
> @@ -140,7 +140,7 @@ xenevtchn_port_or_error_t xenevtchn_pending(xenevtchn_handle *xce)
>  int xenevtchn_unmask(xenevtchn_handle *xce, evtchn_port_t port)
>  {
>      int fd = xce->fd;
> -    return write_exact(fd, (char *)&port, sizeof(port));
> +    return write(fd, (char *)&port, sizeof(port));

I'm afraid we will need some context as to why {read/write}_exact
doesn't work here.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 12:25:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 12:25:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59874.104976 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuE3x-0000Bc-U1; Tue, 29 Dec 2020 12:25:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59874.104976; Tue, 29 Dec 2020 12:25:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuE3x-0000BV-R3; Tue, 29 Dec 2020 12:25:17 +0000
Received: by outflank-mailman (input) for mailman id 59874;
 Tue, 29 Dec 2020 12:25:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuE3w-0000BK-D7; Tue, 29 Dec 2020 12:25:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuE3w-0000Zq-6H; Tue, 29 Dec 2020 12:25:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuE3v-0003gc-UW; Tue, 29 Dec 2020 12:25:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kuE3v-0008Og-U0; Tue, 29 Dec 2020 12:25:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=TS2E5Yq0X8OlPglPS/ZKEioTnMsEQKFvqiMlAnGH20E=; b=Ifxft1UJVGgRXUBjTSP5MIqdFZ
	wdqCxxjbP1CikRV1+zDkfESPHhxCIoy1hbW1Ts5KWg5j2fsLWv/DXmeFEzocmN6YkE/8cmXuj4HgB
	KDVwZVu2OFdRwHENi0ujQ0yBikT0rlhP/fMRQ3vaDbouOgAOq49Bu72JUwsUptbLfvF8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157950-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157950: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
X-Osstest-Versions-That:
    xen=98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 29 Dec 2020 12:25:15 +0000

flight 157950 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157950/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157931
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157931
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157931
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157931
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157931
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157931
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157931
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157931
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157931
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157931
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157931
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
baseline version:
 xen                  98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc

Last test of basis   157950  2020-12-29 01:52:37 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue Dec 29 12:46:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 12:46:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59896.105009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuEOk-0002IV-3o; Tue, 29 Dec 2020 12:46:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59896.105009; Tue, 29 Dec 2020 12:46:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuEOk-0002IO-0C; Tue, 29 Dec 2020 12:46:46 +0000
Received: by outflank-mailman (input) for mailman id 59896;
 Tue, 29 Dec 2020 12:46:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dLv=GB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kuEOh-0002IJ-MR
 for xen-devel@lists.xenproject.org; Tue, 29 Dec 2020 12:46:43 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 19c30821-e404-4b18-a7f6-f065cd23babf;
 Tue, 29 Dec 2020 12:46:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 19c30821-e404-4b18-a7f6-f065cd23babf
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609246001;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=pIFfLWQD1Lk55/mT5rZuatiaDexhSJyHLZDDYYgr1fE=;
  b=TxEBBbAhAx+0Hqk7w/kq/rlRqn8Emn0JpSG5rQzzEb4K39kH2VCIewS8
   MwcTSdtwSr7ipVG2aQCzX+epSSR1G9K0CUcyd2J9fdKWUEcvNR5dFcTtb
   R4qdgqabpS9MQw36iLp6T4KhFZUfDIhiwt69/WPwUcGCyb7RBEgsnwccp
   w=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 34449888
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,458,1599537600"; 
   d="scan'208";a="34449888"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=akeRgcXbGm8I3lkU1y8VMRBOJ4z1Qhp0jxOwdOprAs0=;
 b=pW2DuOlFxZrrFb8O08oApbOVIqvChAcY13iSSy89Khmi5jZbP57+Hvr7Ov3AWkMQSqaQyXKvdRB10NNNuumg++Cn1FEbFGmWiPBuD6CsYpvW26ni+zF3iEk0Fq75s2XwbgkCLhZfjNaUW4pexVrZGRojj4iZU68HZwQmg9J4wKA=
Date: Tue, 29 Dec 2020 13:46:30 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@netbsd.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 11/24] Implement foreignmemory on NetBSD
Message-ID: <20201229124630.5ld2dt5o6awa53db@Air-de-Roger>
References: <20201214163623.2127-1-bouyer@netbsd.org>
 <20201214163623.2127-12-bouyer@netbsd.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201214163623.2127-12-bouyer@netbsd.org>
X-ClientProxiedBy: MR2P264CA0101.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:33::17) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 77494516-3c2d-4500-e6b3-08d8abf7c7b4
X-MS-TrafficTypeDiagnostic: DM6PR03MB3673:
X-Microsoft-Antispam-PRVS: <DM6PR03MB36733E69AF842276B0B3952E8FD80@DM6PR03MB3673.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Dec 2020 12:46:36.6528
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: 77494516-3c2d-4500-e6b3-08d8abf7c7b4
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3673
X-OriginatorOrg: citrix.com

On Mon, Dec 14, 2020 at 05:36:10PM +0100, Manuel Bouyer wrote:
> ---
>  tools/libs/foreignmemory/Makefile  |  2 +-
>  tools/libs/foreignmemory/netbsd.c  | 76 ++++++++++++++++++++++++++----
>  tools/libs/foreignmemory/private.h | 10 +++-
>  3 files changed, 75 insertions(+), 13 deletions(-)
> 
> diff --git a/tools/libs/foreignmemory/Makefile b/tools/libs/foreignmemory/Makefile
> index 13850f7988..f191cdbed0 100644
> --- a/tools/libs/foreignmemory/Makefile
> +++ b/tools/libs/foreignmemory/Makefile
> @@ -8,7 +8,7 @@ SRCS-y                 += core.c
>  SRCS-$(CONFIG_Linux)   += linux.c
>  SRCS-$(CONFIG_FreeBSD) += freebsd.c
>  SRCS-$(CONFIG_SunOS)   += compat.c solaris.c
> -SRCS-$(CONFIG_NetBSD)  += compat.c netbsd.c
> +SRCS-$(CONFIG_NetBSD)  += netbsd.c
>  SRCS-$(CONFIG_MiniOS)  += minios.c
>  
>  include $(XEN_ROOT)/tools/libs/libs.mk
> diff --git a/tools/libs/foreignmemory/netbsd.c b/tools/libs/foreignmemory/netbsd.c
> index 54a418ebd6..6d740ec2a3 100644
> --- a/tools/libs/foreignmemory/netbsd.c
> +++ b/tools/libs/foreignmemory/netbsd.c
> @@ -19,7 +19,9 @@
>  
>  #include <unistd.h>
>  #include <fcntl.h>
> +#include <errno.h>
>  #include <sys/mman.h>
> +#include <sys/ioctl.h>
>  
>  #include "private.h"
>  
> @@ -66,15 +68,17 @@ int osdep_xenforeignmemory_close(xenforeignmemory_handle *fmem)
>      return close(fd);
>  }
>  
> -void *osdep_map_foreign_batch(xenforeignmem_handle *fmem, uint32_t dom,
> -                              void *addr, int prot, int flags,
> -                              xen_pfn_t *arr, int num)
> +void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
> +                                 uint32_t dom, void *addr,
> +				 int prot, int flags, size_t num,
> +				 const xen_pfn_t arr[/*num*/], int err[/*num*/])
> +
>  {
>      int fd = fmem->fd;
> -    privcmd_mmapbatch_t ioctlx;
> -    addr = mmap(addr, num*XC_PAGE_SIZE, prot, flags | MAP_ANON | MAP_SHARED, -1, 0);
> +    privcmd_mmapbatch_v2_t ioctlx;
> +    addr = mmap(addr, num*PAGE_SIZE, prot, flags | MAP_ANON | MAP_SHARED, -1, 0);
>      if ( addr == MAP_FAILED ) {
> -        PERROR("osdep_map_foreign_batch: mmap failed");
> +        PERROR("osdep_xenforeignmemory_map: mmap failed");
>          return NULL;
>      }
>  
> @@ -82,11 +86,12 @@ void *osdep_map_foreign_batch(xenforeignmem_handle *fmem, uint32_t dom,
>      ioctlx.dom=dom;
>      ioctlx.addr=(unsigned long)addr;
>      ioctlx.arr=arr;
> -    if ( ioctl(fd, IOCTL_PRIVCMD_MMAPBATCH, &ioctlx) < 0 )
> +    ioctlx.err=err;
> +    if ( ioctl(fd, IOCTL_PRIVCMD_MMAPBATCH_V2, &ioctlx) < 0 )
>      {
>          int saved_errno = errno;
> -        PERROR("osdep_map_foreign_batch: ioctl failed");
> -        (void)munmap(addr, num*XC_PAGE_SIZE);
> +        PERROR("osdep_xenforeignmemory_map: ioctl failed");
> +        (void)munmap(addr, num*PAGE_SIZE);
>          errno = saved_errno;
>          return NULL;
>      }
> @@ -97,7 +102,58 @@ void *osdep_map_foreign_batch(xenforeignmem_handle *fmem, uint32_t dom,
>  int osdep_xenforeignmemory_unmap(xenforeignmemory_handle *fmem,
>                                   void *addr, size_t num)
>  {
> -    return munmap(addr, num*XC_PAGE_SIZE);
> +    return munmap(addr, num*PAGE_SIZE);
> +}
> +
> +int osdep_xenforeignmemory_restrict(xenforeignmemory_handle *fmem,
> +                                    domid_t domid)
> +{
> +    return 0;

Returning 0 here is wrong, since you are not implementing it. You
should return -1 and set errno to EOPNOTSUPP.
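i.e. something along these lines (untested sketch; the stand-in typedefs are
only there so the snippet is self-contained, in-tree the real types come from
private.h):

```c
#include <errno.h>

/* Stand-ins so the sketch compiles on its own; the real definitions
 * come from private.h and the Xen public headers. */
typedef struct xenforeignmemory_handle { int fd; } xenforeignmemory_handle;
typedef unsigned short domid_t;

int osdep_xenforeignmemory_restrict(xenforeignmemory_handle *fmem,
                                    domid_t domid)
{
    (void)fmem;
    (void)domid;

    /* Not implemented on NetBSD: report that to the caller instead
     * of claiming success. */
    errno = EOPNOTSUPP;
    return -1;
}
```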

> +}
> +
> +int osdep_xenforeignmemory_unmap_resource(
> +    xenforeignmemory_handle *fmem, xenforeignmemory_resource_handle *fres)
> +{
> +    return fres ? munmap(fres->addr, fres->nr_frames << PAGE_SHIFT) : 0;
> +}
> +
> +int osdep_xenforeignmemory_map_resource(
> +    xenforeignmemory_handle *fmem, xenforeignmemory_resource_handle *fres)
> +{
> +    privcmd_mmap_resource_t mr = {
> +        .dom = fres->domid,
> +        .type = fres->type,
> +        .id = fres->id,
> +        .idx = fres->frame,
> +        .num = fres->nr_frames,
> +    };
> +    int rc;
> +
> +    fres->addr = mmap(fres->addr, fres->nr_frames << PAGE_SHIFT,
> +                      fres->prot, fres->flags | MAP_ANON | MAP_SHARED, -1, 0);
> +    if ( fres->addr == MAP_FAILED )
> +        return -1;
> +
> +    mr.addr = (uintptr_t)fres->addr;
> +
> +    rc = ioctl(fmem->fd, IOCTL_PRIVCMD_MMAP_RESOURCE, &mr);
> +    if ( rc )
> +    {
> +        int saved_errno;
> +
> +        if ( errno != fmem->unimpl_errno && errno != EOPNOTSUPP )

Maybe this is set in another patch, but I don't seem to spot where
fmem->unimpl_errno is set on NetBSD (seems to be set only for Linux
AFAICT).

> +            PERROR("ioctl failed");
> +        else
> +            errno = EOPNOTSUPP;
> +
> +        saved_errno = errno;
> +        (void)osdep_xenforeignmemory_unmap_resource(fmem, fres);
> +        errno = saved_errno;
> +
> +        return -1;
> +    }
> +
> +    return 0;
>  }
>  
>  /*
> diff --git a/tools/libs/foreignmemory/private.h b/tools/libs/foreignmemory/private.h
> index 8f1bf081ed..abeceb8720 100644
> --- a/tools/libs/foreignmemory/private.h
> +++ b/tools/libs/foreignmemory/private.h
> @@ -8,7 +8,13 @@
>  #include <xentoolcore_internal.h>
>  
>  #include <xen/xen.h>
> +
> +#ifdef __NetBSD__
> +#include <xen/xen.h>
> +#include <xen/xenio.h>
> +#else
>  #include <xen/sys/privcmd.h>
> +#endif
>  
>  #ifndef PAGE_SHIFT /* Mini-os, Yukk */
>  #define PAGE_SHIFT           12
> @@ -38,7 +44,7 @@ int osdep_xenforeignmemory_unmap(xenforeignmemory_handle *fmem,
>  
>  #if defined(__NetBSD__) || defined(__sun__)
>  /* Strictly compat for those two only only */
> -void *compat_mapforeign_batch(xenforeignmem_handle *fmem, uint32_t dom,
> +void *osdep_map_foreign_batch(xenforeignmemory_handle *fmem, uint32_t dom,
>                                void *addr, int prot, int flags,
>                                xen_pfn_t *arr, int num);

You will have to split this into NetBSD and Solaris variants now if
this is really required, but AFAICT you replace
osdep_map_foreign_batch with osdep_xenforeignmemory_map, so this
prototype is stale?

Also a little bit below these prototypes are the dummy implementations
of osdep_xenforeignmemory_restrict,
osdep_xenforeignmemory_map_resource and
osdep_xenforeignmemory_unmap_resource. I think you at least need to
modify the condition below so that on NetBSD the dummy inlines are not
used?
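To illustrate, the guard around the dummy inlines would need to become
something like the below (hypothetical shape only, the actual condition in
private.h may differ):

```c
/* private.h: keep the dummy fallbacks only where no real
 * implementation exists; NetBSD now has one in netbsd.c. */
#if !defined(__linux__) && !defined(__FreeBSD__) && !defined(__NetBSD__)
static inline int osdep_xenforeignmemory_restrict(xenforeignmemory_handle *fmem,
                                                  domid_t domid)
{
    errno = EOPNOTSUPP;
    return -1;
}
#endif
```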

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 14:02:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 14:02:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59918.105045 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuFa2-0001Ad-Jc; Tue, 29 Dec 2020 14:02:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59918.105045; Tue, 29 Dec 2020 14:02:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuFa2-0001AW-Ft; Tue, 29 Dec 2020 14:02:30 +0000
Received: by outflank-mailman (input) for mailman id 59918;
 Tue, 29 Dec 2020 14:02:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dLv=GB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kuFa0-0001AQ-Cm
 for xen-devel@lists.xenproject.org; Tue, 29 Dec 2020 14:02:28 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f7af179a-c968-472e-bea4-9b2032cabf27;
 Tue, 29 Dec 2020 14:02:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f7af179a-c968-472e-bea4-9b2032cabf27
Date: Tue, 29 Dec 2020 15:02:15 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@netbsd.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 13/24] Don't assume tv_sec is a unsigned long (for NetBSD)
Message-ID: <20201229140215.zl5yju6hm7wcacht@Air-de-Roger>
References: <20201214163623.2127-1-bouyer@netbsd.org>
 <20201214163623.2127-14-bouyer@netbsd.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201214163623.2127-14-bouyer@netbsd.org>
MIME-Version: 1.0
X-OriginatorOrg: citrix.com

On Mon, Dec 14, 2020 at 05:36:12PM +0100, Manuel Bouyer wrote:
> ---
>  tools/libs/light/libxl_create.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
> index 321a13e519..44691010bc 100644
> --- a/tools/libs/light/libxl_create.c
> +++ b/tools/libs/light/libxl_create.c
> @@ -496,7 +496,7 @@ int libxl__domain_build(libxl__gc *gc,
>          vments[2] = "image/ostype";
>          vments[3] = "hvm";
>          vments[4] = "start_time";
> -        vments[5] = GCSPRINTF("%lu.%02d", start_time.tv_sec,(int)start_time.tv_usec/10000);
> +        vments[5] = GCSPRINTF("%jd.%02d", start_time.tv_sec,(int)start_time.tv_usec/10000);

You don't cast tv_sec to intmax_t here...

>  
>          localents = libxl__calloc(gc, 13, sizeof(char *));
>          i = 0;
> @@ -535,7 +535,7 @@ int libxl__domain_build(libxl__gc *gc,
>          vments[i++] = "image/kernel";
>          vments[i++] = (char *) state->pv_kernel.path;
>          vments[i++] = "start_time";
> -        vments[i++] = GCSPRINTF("%lu.%02d", start_time.tv_sec,(int)start_time.tv_usec/10000);
> +        vments[i++] = GCSPRINTF("%jd.%02d", (intmax_t)start_time.tv_sec,(int)start_time.tv_usec/10000);

... yet you do it here. I think the first occurrence is missing the
cast?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 14:16:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 14:16:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59924.105057 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuFnD-0002Ci-RP; Tue, 29 Dec 2020 14:16:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59924.105057; Tue, 29 Dec 2020 14:16:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuFnD-0002Cb-OP; Tue, 29 Dec 2020 14:16:07 +0000
Received: by outflank-mailman (input) for mailman id 59924;
 Tue, 29 Dec 2020 14:16:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dLv=GB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kuFnC-0002CW-MV
 for xen-devel@lists.xenproject.org; Tue, 29 Dec 2020 14:16:06 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 150f1c53-8f02-407c-b0c7-a6e6ee218dd1;
 Tue, 29 Dec 2020 14:16:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 150f1c53-8f02-407c-b0c7-a6e6ee218dd1
Date: Tue, 29 Dec 2020 15:15:50 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@netbsd.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 15/24] Make libs/light build on NetBSD
Message-ID: <20201229141550.gxnhnu3xxxunjest@Air-de-Roger>
References: <20201214163623.2127-1-bouyer@netbsd.org>
 <20201214163623.2127-16-bouyer@netbsd.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201214163623.2127-16-bouyer@netbsd.org>
MIME-Version: 1.0
X-OriginatorOrg: citrix.com

On Mon, Dec 14, 2020 at 05:36:14PM +0100, Manuel Bouyer wrote:
> ---
>  tools/libs/light/libxl_dm.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
> index 5948ace60d..c93bdf2cc9 100644
> --- a/tools/libs/light/libxl_dm.c
> +++ b/tools/libs/light/libxl_dm.c
> @@ -3659,6 +3659,14 @@ static int kill_device_model_uid_child(libxl__destroy_devicemodel_state *ddms,
>  
>      LOGD(DEBUG, domid, "DM reaper: calling setresuid(%d, %d, 0)",

For correctness you should also change this log message on NetBSD, since it
will no longer match the call actually being made there.

>           reaper_uid, dm_kill_uid);
> +#ifdef __NetBSD__
> +    r = setuid(dm_kill_uid);
> +    if (r) {
> +        LOGED(ERROR, domid, "setuid to %d", dm_kill_uid);
> +        rc = rc ?: ERROR_FAIL;
> +        goto out;
> +    }
> +#else /* __NetBSD__ */
>      r = setresuid(reaper_uid, dm_kill_uid, 0);
>      if (r) {
>          LOGED(ERROR, domid, "setresuid to (%d, %d, 0)",
> @@ -3666,6 +3674,7 @@ static int kill_device_model_uid_child(libxl__destroy_devicemodel_state *ddms,
>          rc = rc ?: ERROR_FAIL;
>          goto out;
>      }
> +#endif /* __NetBSD__ */

Instead of adding this NetBSD-specific bodge here I would add a check
for setresuid in tools/configure.ac using AC_CHECK_FUNCS and use the
result of that. Then if/when NetBSD implements setresuid the switch
will happen transparently.
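i.e. (sketch; HAVE_SETRESUID is the macro that AC_CHECK_FUNCS([setresuid])
would define in the generated config header):

```c
/* tools/configure.ac would gain: AC_CHECK_FUNCS([setresuid]) */
#ifdef HAVE_SETRESUID
    r = setresuid(reaper_uid, dm_kill_uid, 0);
#else
    /* No setresuid (e.g. NetBSD today): drop privileges with setuid only. */
    r = setuid(dm_kill_uid);
#endif
```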

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 14:19:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 14:19:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59928.105069 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuFqO-0002OX-A8; Tue, 29 Dec 2020 14:19:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59928.105069; Tue, 29 Dec 2020 14:19:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuFqO-0002OQ-74; Tue, 29 Dec 2020 14:19:24 +0000
Received: by outflank-mailman (input) for mailman id 59928;
 Tue, 29 Dec 2020 14:19:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dLv=GB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kuFqN-0002OL-Dz
 for xen-devel@lists.xenproject.org; Tue, 29 Dec 2020 14:19:23 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 01550ac2-3a6d-4bcd-a2f0-20164a1945cd;
 Tue, 29 Dec 2020 14:19:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 01550ac2-3a6d-4bcd-a2f0-20164a1945cd
Date: Tue, 29 Dec 2020 15:19:14 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@netbsd.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 16/24] Switch NetBSD to QEMU_XEN (!traditional)
Message-ID: <20201229141914.wqj2h5ber7vgdxbk@Air-de-Roger>
References: <20201214163623.2127-1-bouyer@netbsd.org>
 <20201214163623.2127-17-bouyer@netbsd.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201214163623.2127-17-bouyer@netbsd.org>
MIME-Version: 1.0
 =?utf-8?B?NkU5SWRmSi9lcG1VMlZ5OTJpcm5nMXJ6Rm02RENWSDRuMElldzRSRUU0dzJk?=
 =?utf-8?Q?Br15e5y+TvDc/JSn/YcAC5nVp0k2o3AD+R?=
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Dec 2020 14:19:19.0431
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: 67e17277-cf7f-4e13-0e5c-08d8ac04bb16
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: A/J9GjwGi66bnFi7MUNziuS5aVcJ/jHbYaus3WmwYmPnidM9LjB3XXV4nnKLTYDRkio5z5zoVRXnhhesbU8A3A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB2554
X-OriginatorOrg: citrix.com

On Mon, Dec 14, 2020 at 05:36:15PM +0100, Manuel Bouyer wrote:
> ---
>  tools/libs/light/libxl_netbsd.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/tools/libs/light/libxl_netbsd.c b/tools/libs/light/libxl_netbsd.c
> index e66a393d7f..31334f932c 100644
> --- a/tools/libs/light/libxl_netbsd.c
> +++ b/tools/libs/light/libxl_netbsd.c
> @@ -110,7 +110,7 @@ out:
>  
>  libxl_device_model_version libxl__default_device_model(libxl__gc *gc)
>  {
> -    return LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL;
> +    return LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN;
>  }

All libxl-supported OSes will now use upstream QEMU as the default, so
it may be best to just move libxl__default_device_model into
libxl_dm.c instead of keeping three identical copies in the
OS-specific files.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 14:28:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 14:28:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59936.105081 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuFyu-0003LQ-6s; Tue, 29 Dec 2020 14:28:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59936.105081; Tue, 29 Dec 2020 14:28:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuFyu-0003LJ-3p; Tue, 29 Dec 2020 14:28:12 +0000
Received: by outflank-mailman (input) for mailman id 59936;
 Tue, 29 Dec 2020 14:28:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dLv=GB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kuFys-0003Kx-W5
 for xen-devel@lists.xenproject.org; Tue, 29 Dec 2020 14:28:11 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 09feba26-6164-42e2-9667-5039cc306ea8;
 Tue, 29 Dec 2020 14:28:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 09feba26-6164-42e2-9667-5039cc306ea8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609252088;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=tvFMsrt8KXHoYipeywmWXNk2w170txRF+V318AOynyk=;
  b=XPNKPrPKdlbyQgUQTXQD62f1zQk69iBfvMl7XQclgkkoJ/mV7H2xSXib
   v5lESc0XzLdhGgLX8ayyetbIZ8iJG/T29ZhbkZl0900gziUCwd6xd25tG
   KYA+vIv1dTBp/7KDPUNvOocigP/zo4oGlHYyH/Yu8KtVRoxZOYglheDQx
   0=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 3/Y7gZN6tR6QgyMRjAQTA8kjB1n5KGE0mcrVWOQMR2i7BLD+kr5tgR+pRugEexH6HC9qPvJN8Z
 9Ux/Udyjx9yQP2n9VjQ2cC1wF1ZbOwi1JsY0U3KVf1r6dUoDBGZ9+MUbmeeNBaJqvO+CJVhy4y
 i8dEm55tD31mnjFCZ5rXsP9ohw2AQGGBMpHH9rt7ixtoxpgJw4GuG3rwInB07qFmkTRGYK9DfF
 l5vHTeUUGUKuJHFjdEyG3hbwZGJ+SRrW16Xeaq+JoTaktVwCesg89Z6A3PRX2KX5NglIPm2ncm
 3bc=
X-SBRS: 5.2
X-MesageID: 34091956
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,458,1599537600"; 
   d="scan'208";a="34091956"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KXjaXBnK/4iJxYF1IsGTy02VvNFOHt3vtxOEmEWXKzzSgTOTJQ1AvN7Wc7LiQ2D3JMH9mry4kvQ1pHP2hhbFLM6qCUSSFXMuUm3kpEcbBGgTxG0o5jR21Cd7lV7+r0IUl4Pn7CY0nMsK+3pypx9q/BE7Znz4FpNdkiDVsMXJKM7b4ZQKbjdP8WDXBtFnS6k/sL23A/rQsS59PpXg99SwhVokFKPk2q49OKxN4BsWxT8XazOrTF5kqfpN10xIA/3F2JG+lVFCjCgWcTT/zvZcOOwT8Wo26ExmGqM00hmyqdqJFPMiNPi4USGmjMq1ZuFXeM8bj67cqfwiU0p0nfFpmQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3SwvUQp001dK63uoLpS9nHeRnhOPlo3FkXiMtSA5P98=;
 b=PIGhmnfXxPGWYZbqcoNBotmiFDxdambNJCIiWaMrFdOhjp0gNK/55/KsUJY/cdYF2UgnHv39eP4ANgMSY+S2rpb/Kf/43CsEQmvoi6gd0fI19V5YGBxHsbMzCVBK1MaRViz5VRsjQpZxwizgdBZYkWrl4m+QumWJDpsPVV8nQfOEUqfmOe/sej/NYLSxSsBdQq4GbiIsBbQWKwmCS4fC0eQp/Pm+08EFlP+IJe167peG+HBBUt0jazf05+4VLn/lUInFlLZ1GFe912VXk1W4O06c4EWxGdl7WAT550OqKogUKjLZd+niTqtQEZchajl5RKYqRpQowFiSc8GjZKwa9Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3SwvUQp001dK63uoLpS9nHeRnhOPlo3FkXiMtSA5P98=;
 b=ZKyRxzwPNVWMMFbjsXmMSMlf3UP9peeiUTU7LPCqoLs7KMjfoqsq+UwtSPry+S2mIPpaSIwpNOME/NRAjzOljNDG+S+r9W/jwsySYywOsoA7/NXloU2Uujyr00KG3+TycpxLMnXPW0uzBxIVbFrfgzNhP/p0Wto0PSLpGecUN9M=
Date: Tue, 29 Dec 2020 15:28:01 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@netbsd.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 17/24] Make libs/light build on NetBSD
Message-ID: <20201229142801.6tcgu7seg2opgsrt@Air-de-Roger>
References: <20201214163623.2127-1-bouyer@netbsd.org>
 <20201214163623.2127-18-bouyer@netbsd.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201214163623.2127-18-bouyer@netbsd.org>
X-ClientProxiedBy: MR2P264CA0068.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:31::32) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 29c7ad52-60b4-4f22-f17e-08d8ac05f504
X-MS-TrafficTypeDiagnostic: DM6PR03MB3948:
X-Microsoft-Antispam-PRVS: <DM6PR03MB39480C3C0BA95991EDEAE6DB8FD80@DM6PR03MB3948.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: qAdkUMqqs4fk4AAIHMRjI+IWDX07bdiES+sCzZWzXK5dgCb4dgzgEV6wsf3JFrehORRd7PWw1rxW5+bngWUGL0O8a0/+VfzKokL5KR56HAmTDgM4LKtz4SHw6P20a7b5k5Mqtsz7b/ofEDMrBrRKAyRj6CU8gZekmaR+rTJgTtEgrhrLp3VleHxaenQzdILMoOlfygQSm7QtgaOpXVyCeunj2Ms4msGO8sKpYkhx3eTk6RLwMniFNtanNE0dCPaA6dCOG1G6tjQfPaOBvHGi5m2vsrBkGSuroPkmvudoXGlT28madwWVFO5pSbiDHnXFQCqbKEa5funobHgOxaYedZNTZ/Q5uJCUO2sn1n5htToVYkQJwBBbec3rScuZneWmOAFw637qVYzGwz0P1n6MJg==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(136003)(346002)(366004)(396003)(376002)(39860400002)(6666004)(66476007)(6486002)(33716001)(83380400001)(478600001)(6496006)(8936002)(26005)(16526019)(6916009)(316002)(956004)(186003)(4326008)(2906002)(1076003)(5660300002)(86362001)(9686003)(8676002)(66556008)(85182001)(66946007);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?bFMwSFNzZjdTVEJNREFwcEMra1BvN0J3ek1NOHhneTRRMGt5d2FtMXhEdkx4?=
 =?utf-8?B?NUx3ejNxOSsvZllkeEJhQjhEOXFTeWV4V0pOMGUrTTNJeFk0djg3L2ZucUxD?=
 =?utf-8?B?dFNFU2o0RDRham50anc1QkZWUGgzUnp0enRjbnZJYk5GVnhIL3pGRmhxVWVs?=
 =?utf-8?B?clZ0bzByTHAydDkwaFFiRkMxYWc5TlBtOEpINkdwOEdXRWVTZW5vWVBuMXpF?=
 =?utf-8?B?MW5DR0phNjhrd3VveVhYb3JlR1ppSmNsUGpMYkR0dUpjR3lrZkdwTjlMK1hQ?=
 =?utf-8?B?ZVk0ZlNWTkJEcnpxbDVzRWRYbnY2Y2VxZUhzZGxLVTJKK29aOVI3SmhEU0F6?=
 =?utf-8?B?UDhBVVZGL0xieVJ5VEFQTHpzWUZqbkZmS21wN1ZERFMwR0NmTjUyeStHbGFX?=
 =?utf-8?B?cTV3Mm8vRmFDQ0JZTHFFM2Jab3N4d3JCNjFrNStGQTFtWXZOdGFvUmJCUmtn?=
 =?utf-8?B?NjQ4WUQ1L0JyUWVvazRaWFZNbzJEZ1RFamx3TndvYnpMUHltYkE0MDVZVUw3?=
 =?utf-8?B?TEVuaXkwUXVkVldrS0N4ckhMRGNtZ1hFNTlzMFVIZnp4ZkI4bE9MYWFBSlho?=
 =?utf-8?B?VU5UQ2ZZVk53akpHRFRXTmkreGRqaUtWWWZGOFhWekN0ZkRUVlYxeEphZnlH?=
 =?utf-8?B?c3RkcGFmd01FYTQ0d1Fma1BnM2ZYbkxkNDlQaHJyUGw1N3JDQlBjeENOUm42?=
 =?utf-8?B?MjUwSW9ZQ2hlT3hqS1VhQVJZNHhjdHRpZ3JTOUlhSC9oNndqV2JweEsvSStW?=
 =?utf-8?B?d0ZsSmZmdmVqWUcxWmhWakg5QnFWcVhBOC9Hd2RlZHNpNlRJVzhJdzZJYjF6?=
 =?utf-8?B?aVNLMGlkOEt2djVWU1NLd0VrQ2Y3MUhLT3F0eUdjNUdmR0lhNnZGLytSbTl4?=
 =?utf-8?B?Q0RMZzYvdkFXc21HYm5wZXZQamNsR1RHWXZqUmFUWTNOSGhMclZsTTN6QUk1?=
 =?utf-8?B?dGovOGNtdEE1emd4eFFIUGxYNm02WDEvRGI2V3gyMS9RSFVjKzNaNVV3cFQw?=
 =?utf-8?B?SWd3bU4wUHVDd3N4VTV6K1luK3FMVUR2T3JJT0VvUHVmS0hPMHhmN0txRzhB?=
 =?utf-8?B?Wi9FVCs2dzJKaDJnR1hWRjdhMDg2SXJJbFhrZ1JvZVBtSzh0QThsVDQyZTU1?=
 =?utf-8?B?bE1maU1pWExRVW5UU3FKbTYrcHo1SUtrZnB4Vnh2a3hLajBwYTRleGRhdk9L?=
 =?utf-8?B?WWVTNEVtLytTQmFRUVRCYnJtZlRsVVBYdkRMaE05R3k5aHpqa1dZZzEweUF4?=
 =?utf-8?B?ak1Kd2IzT1BjQS91cUpKT3lnd1EvdCtkR2RLb2pEVkpZajUwaXlyeGZ6ZzJN?=
 =?utf-8?Q?X0tm/t329GXMT4+iz8sEWcbU05T3CiiT2U?=
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Dec 2020 14:28:05.7296
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: 29c7ad52-60b4-4f22-f17e-08d8ac05f504
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 2PjDtE0lS8R6FkhfG9rXNy8KBs+TDx8yJFQwGEC0Zt5221i0YsZ1ScTqBYtxBiNrMEsKcqngFXcm2XkMGooffA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3948
X-OriginatorOrg: citrix.com

There's already a patch with the same subject in the series. I would
recommend being a bit more specific about the fixes, especially when
there's no log message. This one, for example, would be better titled:

tools/libxl: fix uuid build on NetBSD

On Mon, Dec 14, 2020 at 05:36:16PM +0100, Manuel Bouyer wrote:
> ---
>  tools/libs/light/libxl_uuid.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/libs/light/libxl_uuid.c b/tools/libs/light/libxl_uuid.c
> index dadb79bad8..a8ee5f253e 100644
> --- a/tools/libs/light/libxl_uuid.c
> +++ b/tools/libs/light/libxl_uuid.c
> @@ -82,7 +82,7 @@ void libxl_uuid_generate(libxl_uuid *uuid)
>      uuid_enc_be(uuid->uuid, &nat_uuid);
>  }
>  
> -#ifdef __FreeBSD__
> +#if defined(__FreeBSD__) || defined(__NetBSD__)
>  int libxl_uuid_from_string(libxl_uuid *uuid, const char *in)
>  {
>      uint32_t status;
> @@ -120,7 +120,7 @@ void libxl_uuid_clear(libxl_uuid *uuid)
>      memset(&uuid->uuid, 0, sizeof(uuid->uuid));
>  }
>  
> -#ifdef __FreeBSD__
> +#if defined(__FreeBSD__) || defined(__NetBSD__)

There's no need to add NetBSD here; just remove the #ifdef altogether,
along with the contents of the #else branch, since this section is
already compiled only for FreeBSD and NetBSD (the #else variant was
only used by NetBSD; see the #elif defined(__FreeBSD__) ||
defined(__NetBSD__) further up in the file).

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 14:30:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 14:30:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59942.105093 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuG0y-0004AW-Nv; Tue, 29 Dec 2020 14:30:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59942.105093; Tue, 29 Dec 2020 14:30:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuG0y-0004AP-KX; Tue, 29 Dec 2020 14:30:20 +0000
Received: by outflank-mailman (input) for mailman id 59942;
 Tue, 29 Dec 2020 14:30:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dLv=GB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kuG0w-0004AE-K9
 for xen-devel@lists.xenproject.org; Tue, 29 Dec 2020 14:30:18 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4e4d34b4-6ac2-420d-9dd5-266e5163737c;
 Tue, 29 Dec 2020 14:30:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4e4d34b4-6ac2-420d-9dd5-266e5163737c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609252217;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=f98emElyreMu1VZJ7shMs6d1SeR7iDsoR6szrCGi3pM=;
  b=Z7DyhK8fnpLECjiYb4BCCz0d72nImuFOcPspmvtGwiAreWN3VISO9VsV
   XODShDhHNAH4Zh0Au0k4OYaracOFAw/5kZZtpQB21DQ3OqAfJ+Pa7T2Ui
   P1ueFV4+pJX7ikOs5bR9wNqdIWEt18V+K3E1YYm0drNGseMv6mcgWhM8N
   8=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: UFs6JzTl5EUxWvmRN1EtCHOxJb9nQ21eGhgqc2iNmBvWVxxPhzTzmZlBzqzyX3ldAiJGlKzdlp
 1BGHTpOnJRsQCTd7ttRgDWuceIHd3ytfFlmqb0f9GAEMRMVBvKzHmPM4/cSyxy2GNm8dBeR59P
 Glwq7gleQNTxNuIi2i8DsHEtINVlIx7mSFoFOXi6G45FSoYCCSc5KAoJ2mLl+p5BZ2OArk1tWH
 eNoSx7YuyLvRppiBeeA5v7/2zl1+9DwjQD+9J1pDVPIepkQdzqWQba6X/DrmEXzgjwhEcCzbLm
 vcA=
X-SBRS: 5.2
X-MesageID: 34092057
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,458,1599537600"; 
   d="scan'208";a="34092057"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=giFKgx6bduvXfdtPTmC6NGhWyw8Z0Ruw5Fn7bh16d3fnjHTI3mwMcRHgdLSuoAw6ii+Dyy24onkkpKB+l5HBvqJvRimRH0v7VDAVf1BV7A80SYqFfdsndoslk7w1xwerP4l+ZB6jpQ8HR1Evsz2bpaZzZ14AqRMviwPYFSWWyWic6sbh/J23doIkIjRtH1bzMSzgHupDqSQfVKqsxwyz0ueWm3GGyzbCdU6CInPSzLxM7ZAPZuHwILgM9KudI6qiJ/hj51ixb0I2C96SkOBsUYvu9u77E5CGL1Bv0sOYk5xtKk6Nb+fkIa8+tOBWyXCNX6/43JxvnL1J6KPLNPnBpA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=f98emElyreMu1VZJ7shMs6d1SeR7iDsoR6szrCGi3pM=;
 b=mSlu05E81vqxkopoFv1FzYiO1CZiKsdGqgRQb7I60pWVj49hBqki2VE2RiP9EitT22LvdEkOVab2EmtuJrSsOrp/ZCJWiwDQWVfHbxdMzVHiqAeGmzFbVYWTypedPUVuC1dSIzbarZYHUyCnCQtMPH/+Hf4I93Zk6f1lIPHXcU4r4lWr0u5r3ZSz9V++Zgrs/VCiVB3/svk5Q38Cb294hnS+WB+HAQLlCWi7BGQMxOJvUptp4QqXJZvUHJiRgzeq7neKZAlGbET6DydACiMxoNXa5YmZTlaCZjOI3j0XKD4OGqdHydRemJpVgX5rY5GPV8hjIQh4GMbcpz+jH3ZzAg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=f98emElyreMu1VZJ7shMs6d1SeR7iDsoR6szrCGi3pM=;
 b=b9sQtkXoOEFeVIKCHRtRZtFfAS+wpC09gCfJhEAM/S16oMgQDxyzQ/lGRy5Vl4siHUJKUqWWeJYVwiIsktOqm3wopqzuuQBCDXB4HF1hpFN7QS3j6nfm3DYKg266vMQ28iHavJqKPwIEAbGgw5XfLoMtyfPuuCyBzeZvjbnvdHw=
Date: Tue, 29 Dec 2020 15:30:09 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@netbsd.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 18/24] xeneventchn_stubs.c doens't need xen/sys/evtchn.h
 (NetBSD fix)
Message-ID: <20201229143009.tvoidmmmcz6uoe5x@Air-de-Roger>
References: <20201214163623.2127-1-bouyer@netbsd.org>
 <20201214163623.2127-19-bouyer@netbsd.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201214163623.2127-19-bouyer@netbsd.org>
X-ClientProxiedBy: LO2P123CA0054.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1::18) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4b845371-e3bf-4991-3266-08d8ac0641ab
X-MS-TrafficTypeDiagnostic: DM6PR03MB3948:
X-Microsoft-Antispam-PRVS: <DM6PR03MB3948FC3EF352ACAE3DA8F3D38FD80@DM6PR03MB3948.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:4941;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 6Qqb8FeXyDOczn4G1YabPXkf8KJP+Stu0+WmkTHwj7uBLDtVSBLRP+htwNSEm+L3+eHvkYeGS09Qvc57e3EoaoALRL7rwXr9CNPqXrZfshaT+z+4J7VRPYa8lV4j1WB1kqtPkb4abpsAqkB0rcCS/vV8i4dkC7boBX3H2agW0JIxe6Zt48wpOHPceFlBPrtS2T633VrSSMGjNpO9/b0zkgsLreftxW56umAUxg//wxIO72IdmveASwrLMAtTbO5tkDbmu6u7R/cSfa7QaXfZRQWUYliYvZUDsUepMm9Qv7B631MXD/ekFEc5m12PPrq6Jfub0n9uD5PHdOLgNK/qttFnluldUIIeWOxaqNMtZl1UiXvRmV4WoGn8KjU4daOKeEgjW7HiKsgDaO90xTFkDg==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(136003)(346002)(366004)(396003)(376002)(39860400002)(6666004)(66476007)(6486002)(33716001)(478600001)(6496006)(8936002)(26005)(16526019)(6916009)(316002)(956004)(186003)(4326008)(2906002)(1076003)(5660300002)(558084003)(86362001)(9686003)(8676002)(66556008)(85182001)(66946007);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?NSt4Vi9VQjhiZVIzTzRWRHVNZTFocHZoS1FDWkVQc3NCZlFMK1hPVlBZN0Js?=
 =?utf-8?B?d1BtdUxRbUd6YXQ2dHRJV3BHNzAxSkEyNGlaS3RwYnYxNHdweENpMnM1RW9p?=
 =?utf-8?B?M2xFRGZ2Mmcvb2lKdFhmb2swOS9CaDdKWGNIeDNZL0pxeDAvam0wczVyVUNp?=
 =?utf-8?B?clFnUzdkTVYvZ1pQcVVDTHBOY0hacWRhelRsZ0tPMmlDM2thdDJaalFlVit4?=
 =?utf-8?B?a2FZVCtDSzQyNkxhTFQ5eUs3eUV3VUJIUUlKV1ZRczh0NG5nY2IxSzJGN2Nu?=
 =?utf-8?B?Z1FuN29UNFhOWHNLV1o1TGZ0anpQRTlJMUZCVCtsZkpZVmNoYzRsMUVvTmVu?=
 =?utf-8?B?MEhUREFzY3VwN0VBbDBFRlFEUXNTYmkxRitiSFY0b3Z5QUQ3UC9nUHlzcG00?=
 =?utf-8?B?aWl1SExYYTFjOWVNcnB3RmJGNFVrM2RMRGg2MFcxWkticndobDl5bG1tQjZQ?=
 =?utf-8?B?cW9Sa29FNm1TemlRZWhoYlpuN2FIUTNrTldCb0Y4ZFRGdy9aUmphbTI2cnhM?=
 =?utf-8?B?VU9jUitEU3JSeUxLZUFESGlzN3JkdlY5OU9Rb203amM0M3hEOXk1Qm9HZVhy?=
 =?utf-8?B?d2gvRGZocEtsd0cxSDBHL2xLSmNQblJXVkVCZVYvbWsyVnE4N0JiTGxYcS9L?=
 =?utf-8?B?TDhVTWNkQkQyN1hLUUxmNjFDZnQ5Rzdib2pVaEFId2Y1enZ0NTFqbnlURjE1?=
 =?utf-8?B?dlcrUm1vWk02cUs4U0gwTEQwR1FHVVZMcVUrZmRKNGdnb2YzcGNSQkU1R1NR?=
 =?utf-8?B?bnVOdnREZG1sRWtGVzc4TU9pdGZtSjk3UmlrQ1RRL0dJcUI5MERKb2lLSnBm?=
 =?utf-8?B?NjFJNUxSQzN1MHpOTlhIMUtIMlRFU3B6VG9rNmQwcnFTZEMvQm4vL3R6ZTVm?=
 =?utf-8?B?YmRyMVNJd2ZRRzgwZTFjV0x1UXdOaDZaQ1VIdHVTaWM1aTkwc3NCQUhBUERI?=
 =?utf-8?B?Rk9hK0RURTVtNGRHYzNOYk84cXNKdFpBWC9DNDZZS0pYWmR1aG0vcDJ2Slc5?=
 =?utf-8?B?dkxxVm9sQzVOOUFvZFNxSkt3eDQ0Q2NMSkxVOW5wSWY0QnNNYXhscGVSV0hY?=
 =?utf-8?B?Zm9kR3dsdnU1Qkhub0w2VGo0dW5kMGE2Wm1DY1FvblB2RFdmWE10R1J1Tis0?=
 =?utf-8?B?elFTSzA2RnJjZGJLWVdKRS9xWWxzV1ZYWmtZRlhzYVFRKzVpdkkva253YjRY?=
 =?utf-8?B?WVhvMENBcFhNcTJGK2xSR1V6bUVkR3NYVXpWVWxGTUtKOWFZMmk0NzRZdnFW?=
 =?utf-8?B?VnUvbzhrQ1B1RWtMMmxIaHlscjErbUhRNHdpYjgzSCtiWTgrcEZiNnlnNlpv?=
 =?utf-8?Q?ubUvuIaS5sAB4qpbK9W4BAul6Ark9U2ebu?=
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Dec 2020 14:30:14.3768
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: 4b845371-e3bf-4991-3266-08d8ac0641ab
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: LwFzvtwrcBzhowHtEaSytgdqzQqvipLgN2MG1j21vIMUbYRPa2MToGHvCYXqRouvKpNJUHTmrk7n0JW8pmbKNA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3948
X-OriginatorOrg: citrix.com

If it builds on other OSes:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

A better subject would be:

tools/evtchn: remove unneeded evtchn.h include

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 14:39:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 14:39:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59948.105104 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuG9Q-0004S1-KA; Tue, 29 Dec 2020 14:39:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59948.105104; Tue, 29 Dec 2020 14:39:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuG9Q-0004Ru-H5; Tue, 29 Dec 2020 14:39:04 +0000
Received: by outflank-mailman (input) for mailman id 59948;
 Tue, 29 Dec 2020 14:39:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dLv=GB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kuG9P-0004Ro-60
 for xen-devel@lists.xenproject.org; Tue, 29 Dec 2020 14:39:03 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 97e85082-6abf-4025-960c-c4ae3571b933;
 Tue, 29 Dec 2020 14:39:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 97e85082-6abf-4025-960c-c4ae3571b933
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609252740;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=Jpkvvpr4UbjBSUX1inlCeEdCLXhgBN/uebvLKKmOLTE=;
  b=QC9r8oYyI3zIYITKtU6Yukgao7w4RiBQuxuFVIe6Z25xvcVA35Kp3jFd
   ryqlBaAriiB8x9upeNUKZTBgpraYE36u+I+63yTf9U3IR/xsk87t/Hrem
   qquE434qerY6vrl02uV1puMsRNrI157xYJcL04egeeFTWuH/ms2USJFFq
   8=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: VKXMCre+LlgVv0A37TuQSpkqWPSNxPFPrn3oVYNaIM6YshKhdvUiWcAnouao2/1tX2gjcz8OT/
 CyPno8ubjrRzat7c4VWoYaxu+jkrSiPIYWkPsh+UJYf7fyiuGMaKXQ/FI3/YdALNyWOugdHM0Y
 lKdikBDBqiu2oYEAUK/8YsI/70y+OWeTAduLN/2WDzRs3RQFpFZnhp7FvWDDTTQWcOwD2QiRzX
 GZ4WUYuCUF3LDXUk/ZB6WqgUpmwp3H/4n6xSyRowQBKVZoIQah3zCrYotnY+pUGcjEXi0+5AGL
 Mdw=
X-SBRS: 5.2
X-MesageID: 34128732
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,458,1599537600"; 
   d="scan'208";a="34128732"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SJiWc/ipKCpVzoeNK2NYBOLWUAOc6GkLlFDzyxuC9Qep7HcWuVFwI/0YV/LYTThNH8TEZEzUoKxUnjzoiRxTYIWqMxKwhA/7Itage1mU+YIxbKq+J3uIbakAGpst87IRwDYtqRo7nYEZKsLTAXuwj3SyxCrxDv7KTIf+hFGCqzOJHNkNM4FoR6OyTHnKLkMdjyiAdKlKqHgGvghSx7i9UZpWYtrMnxAJvWdWKYBwxI67e0/sTGCEZDik3u4SQYgDuaw2syIV9rMPo/ozeOcBJm0EMjBjFWhCaBxXF/znAAz4jCh/74TOgIUwR+N/dGtDbntFXgVSfraE6XSsM0yNeg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=brc3dBMjzeYf2pITdRlp4aYD72Io64fv2Yz3H0YC2dQ=;
 b=lj3y+w+EzzjncSvPrAGNC0W99aHx1uATWgLDYhcH9g40pF8/2nhShiSHziWJj490/o/tmfoDQJFQGu0Ouf3BKzANx0ssa98/oDMTQOMzrfIsrib3bFj6YwCtwkKNc6cgMcwv641ReXxiLkkMNTE+FkSWEA6HXzWdIyi4fmIH/KkQDu2XlgXI5IsOW4UAB3wdWKcVgyp7MtHTsNgrrq633ZNWQmKAKQzoaW6CyvfnzlLDCnDp6lOMO+JUD/4qqlrA6qct6ccpkEIEkr5q6Xb7XD1xTJBc8Sm1nPUnjabQRQvItDf+FGl0JzD3frZssn7Gk8NxtOFUErgO5+kuRkALVw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=brc3dBMjzeYf2pITdRlp4aYD72Io64fv2Yz3H0YC2dQ=;
 b=NBcL/9BVRsnss80dppMXcJPkRtsiQVlfosCm+WpndBqCbG3UmRuqCk5ZJv5JTiXNiWV6qwJ86mkXVjxTuwiLSRflpVLJlJEJTwz9UVLSEn6so5EHnYyMq8wGvyoIZIQHFzYRplc83NdNKXAM2nSeBPO/4SZATtODPPcovWc9DU4=
Date: Tue, 29 Dec 2020 15:38:53 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@netbsd.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 19/24] errno may not be a gobal R/W variable, use a local
 variable instead (fix build on NetBSD)
Message-ID: <20201229143853.gikc7xqeqw6jjlvy@Air-de-Roger>
References: <20201214163623.2127-1-bouyer@netbsd.org>
 <20201214163623.2127-20-bouyer@netbsd.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201214163623.2127-20-bouyer@netbsd.org>
X-ClientProxiedBy: MRXP264CA0024.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:15::36) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 1638882e-cb55-4b35-5a77-08d8ac077979
X-MS-TrafficTypeDiagnostic: DM6PR03MB3483:
X-Microsoft-Antispam-PRVS: <DM6PR03MB3483F491B97736183A6CAD4D8FD80@DM6PR03MB3483.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 5bLgVgVnuhXGDv0zBUD9MfGtoTK6U9GTUmJwr8QDm4QphAs4kv02aPoxYrnVFthXhLLr+sub86QyrcaIikfNkxrLVPegyhSkQT4ody4gTJEayE9lVLCb31kBVKA/NeUg1Zb7pE1DJp7TIQbtPCVgyi3bz8+eshv1DBKJfe0YdMJ5luIKxQpZTe39BiYFni5AB+sxlr/6W1SmOq3BwjIqcABNMGkggo7SiJkS/GxUy7c57HSnPkulS8LGnpnTtQCQBmM0TEry3hhbeMz/3+U0O+0jgndKvIe7eKuJRvU6y5B/5A9nMmo7+Zzkj/h++TXIJQL1GaJ5b9+2mamsF78gkqOqGsxaW7oMPI4pDJ3c9xTBxWEwb192L9gnLa+jYh3iyF6p/JAREOdi57/7/WI6fg==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(39860400002)(366004)(396003)(376002)(346002)(136003)(6666004)(86362001)(83380400001)(2906002)(956004)(8676002)(6486002)(85182001)(6496006)(5660300002)(9686003)(33716001)(316002)(1076003)(478600001)(6916009)(8936002)(186003)(66946007)(4326008)(26005)(16526019)(66556008)(66476007);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?eXgxVVVVU29KekQvYldMbC9DcmE1NUhmOVRxRmFDVDNBOXozOE5CYWV1Y0dU?=
 =?utf-8?B?OXJFY0htdGxEZ044bWZrTXhxalViU0tpa3V1bkEvL2pSaEVYZ0NxUkdHWFBj?=
 =?utf-8?B?NUZEb01qQmFvL0YyRzlWN0FPZ01EVy9BUTVKY2tLY3h0UlhHUytvK3daaUM0?=
 =?utf-8?B?S2NRZkdQY29Rc3o2S3pvZDFzeEt1NHRub1pOMVVPajJIVEFmdFczMXc4d0l0?=
 =?utf-8?B?N0c5WHlTY1JyYlFUZTBURy9xNUtHYy9ocC9haEFMbDBRMTFlTloyaGpGK2Qy?=
 =?utf-8?B?ckwwTTlEUkRKQWMwaWdNSnlybUVMRHM1LzBMVWxHTDQwSytSb3lOdnNLTTlH?=
 =?utf-8?B?TkxWeDRkdkxOd0c3bVpkUUxvcnJYVVJiYnIyQWxDMjlqcjEwNmhUeGFJaWxN?=
 =?utf-8?B?dWcvbWkra3Y2SnRIejRhRmRZL3lsWndBT0prbXZPdk5XVjdlWmZSZkJSOVN6?=
 =?utf-8?B?V1MvOENpcHR5Y2kvMW9kUzdhbFdqSlQxeG44eXA5OVc5TVl4RmJjOUk0VHJj?=
 =?utf-8?B?UnVXd2tEc3BVL3F4dVM4RCtpSS94SVRreWxiYmd6T0ZZQk95VmVHY3d3TjNm?=
 =?utf-8?B?NWhyUE4yL0p6bE93TUVyQW01MFJnU3NsaE9PUWpQVHY4UmE0RWtUZGlnSXFP?=
 =?utf-8?B?bnNjZXZ4cU5Cc2tIVzdpK2Q5NDRuWTVVSWsvUEJGWkV4VnZlRlNNaVF3VXFG?=
 =?utf-8?B?cGtKYVR3TGdnaExReXdESnp4c2tzREVyTTBnMkx2c1FOYzFKYWtYQnFGRGRS?=
 =?utf-8?B?VVJ5QTlSNXBZMjFtU2xzWnpUQk9IL1EySThWUUFvU1E5UFZQVGlrOVIwczF1?=
 =?utf-8?B?Ymk0aXdXZ2ZyUWlDeWFWM2JpL2dmeWl2bHRjTmpFMEwvREFSTVg1MzZLOVRn?=
 =?utf-8?B?Mk5zcEhOUExYbUlMUS9zN2tHWDBFVU0rbWswTkk3SmhxdEJaRFRXWkZHQmlt?=
 =?utf-8?B?ME1HK1VtUHoyRTdKSDFMYngxMUhRTXhTM05GUzFYZUpUaGQrVi9RTTNKRkRr?=
 =?utf-8?B?MWp6L3VFRWI1clRhbS9LOVpkSXBING9ZQnExVW9oVFpsMTJGcE9heUt6N1ZI?=
 =?utf-8?B?MzhYOXNxZityNnFIeGYzRXlHc04vZVd3NmVaMGdUcHJ4R05VUVdoeEY0aC9W?=
 =?utf-8?B?NGJoVFlpdVV6dTlaWWVuL3psYW96cjdib3RXekVwNGFVVEJnODZ4MXpTUHlU?=
 =?utf-8?B?WkFubyt4bTVBZlU3NXJrWit3SVJWdmd0TXVZQmVVZG9RSE42NlllbFl1cFNJ?=
 =?utf-8?B?QjhlYW5MVnprc0JKKzBCeXFVamhtcU1EcW14dWpTcXVHeVpDaitaQ2FkbkVQ?=
 =?utf-8?Q?BlNFNyPmgi6MD5Z+zuafMQ/sRG4BCtRJ74?=
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Dec 2020 14:38:57.4276
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: 1638882e-cb55-4b35-5a77-08d8ac077979
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: FgGZ5Y/twJJjXaMrjKoOGo/3yXQWSequBbwBisymn5jc+XkOjd1mgnndOTvg69DJezBom3Tssr1KBwRYz6FyAA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3483
X-OriginatorOrg: citrix.com

On Mon, Dec 14, 2020 at 05:36:18PM +0100, Manuel Bouyer wrote:
> ---
>  tools/xenpaging/xenpaging.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/xenpaging/xenpaging.c b/tools/xenpaging/xenpaging.c
> index 33098046c2..39c8c83b4b 100644
> --- a/tools/xenpaging/xenpaging.c
> +++ b/tools/xenpaging/xenpaging.c
> @@ -180,10 +180,11 @@ static int xenpaging_get_tot_pages(struct xenpaging *paging)
>  static void *init_page(void)
>  {
>      void *buffer;
> +    int rc;
>  
>      /* Allocated page memory */
> -    errno = posix_memalign(&buffer, XC_PAGE_SIZE, XC_PAGE_SIZE);
> -    if ( errno != 0 )
> +    rc = posix_memalign(&buffer, XC_PAGE_SIZE, XC_PAGE_SIZE);
> +    if ( rc != 0 )

I think the point of setting errno here is that posix_memalign
doesn't set it and instead returns an error code. The caller of
init_page uses PERROR to print the error, which is expected to
be in errno.
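
To make the concern concrete, here is a minimal standalone sketch of the pattern under discussion. `init_page_sketch` and `SKETCH_PAGE_SIZE` are hypothetical stand-ins for xenpaging's `init_page` and `XC_PAGE_SIZE`; the point is only that posix_memalign's error code must be copied into errno for errno-based reporting (such as PERROR) to work:

```c
#include <errno.h>
#include <stdlib.h>
#include <string.h>

#define SKETCH_PAGE_SIZE 4096   /* stand-in for XC_PAGE_SIZE */

/* posix_memalign() reports failure through its return value and leaves
 * errno untouched, so a caller relying on errno needs the result
 * copied into errno explicitly before returning. */
static void *init_page_sketch(void)
{
    void *buffer;
    int rc = posix_memalign(&buffer, SKETCH_PAGE_SIZE, SKETCH_PAGE_SIZE);

    if ( rc != 0 )
    {
        errno = rc;   /* preserve errno-based error reporting */
        return NULL;
    }

    memset(buffer, 0, SKETCH_PAGE_SIZE);
    return buffer;
}
```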

I don't think this is the only place in the Xen code where errno is
set this way; why are the others fine but not this instance?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 14:52:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 14:52:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59954.105116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuGM1-0006Ba-Pi; Tue, 29 Dec 2020 14:52:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59954.105116; Tue, 29 Dec 2020 14:52:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuGM1-0006BT-Mf; Tue, 29 Dec 2020 14:52:05 +0000
Received: by outflank-mailman (input) for mailman id 59954;
 Tue, 29 Dec 2020 14:52:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dLv=GB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kuGM0-0006BL-QB
 for xen-devel@lists.xenproject.org; Tue, 29 Dec 2020 14:52:04 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 19e7d749-2c96-4894-a514-bde4fa320967;
 Tue, 29 Dec 2020 14:52:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 19e7d749-2c96-4894-a514-bde4fa320967
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609253523;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=45Txw/1OeEfF/CSE3TqbrpfP8FkmV+Y4xdDV1jj+5v4=;
  b=I6iXWPy9Br93g0VxAquKetYbbla9WXoz0KyNj0mROV1QXYLUR2n9JWXl
   rqT/dNwD3JtM9oFd+oxS+jEv1ZIzdVqZ8S1qIq87GpJxSbS3nboexV3gT
   ikEq3IFrbiFUhEDOZYZR3NqB3yqJzD+qIDtRdIcEDXEoBu20NTSMssEfN
   M=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: Lc4hhdAmPuGEJZmxk1ka6/1vrU2i5pf7A9UINy3dFKFh2kjbMpz42+zgVzW7b/3eoQYP2qomI3
 a+FqvPARf0XcpVXEKzTraFb4HEInx57ds/KtCxhUOj4VF5B88eDoLea3tQXsIZQBs4IODDeA4Z
 TIxQZ97SplDVInmukxDvXcnAJvZUgvi2rByfMM8zKbK/u5oMW6RRnNKymxHPm4zZRlvSAFBUMr
 ix2JG9wkLofLKWBbVqE108YpoqWEdtI5olaSfVPBcjxqWzZQjnB8LL0yhJUpH37arFkkxAArM3
 Yds=
X-SBRS: 5.2
X-MesageID: 34093730
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,458,1599537600"; 
   d="scan'208";a="34093730"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CYlKEAMiy5nc/HDhEeQF6G7IFSaDt3G6JbPMe+NjoSOuRkBd+aIol+MjEMB/rN8wPY3L9Boko2BpXhjNSw0Dp37q+SlVWOWyi9VHuydJoU03aFkrhgNQr77ftW6LLflM9zw5pEKMgxuWKjb0zz0Hk5Hr66BnV1bz1wPTuTktmW+eyDK+4KIGtdz/bCtEZiFyBmrMAaW6X65pI0cQH7T9AsOBs3UafpKCB8//nGo/Y7XRitYmt/I1IkvtfUgS4llZyptg++F5T14VUIeCsd90FX3+GN4f1n0s1UxRCm+5xZofSWOXzMrZxttbDs3yGQiQSqiyx6fO7Kh9B3Rd36UCNQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zczBwuZiu0pOW8u1wPMKkqUHJLefuJuMntu7TSVTNho=;
 b=FgsO/ms39OaPbO4IoY13fvUq1+Ao2MlzlK3BhRhOPT12caCCB/FeK8eaewUT96HL1adwUBOgTxg1esukHV5aQLAmk4MX1jLE76OEB5c2IiCmrvd57E1fFeRXF6hN4U6PBJcRHXzB0N7zONWT/7ILSL6awhpkVixcU/ITBMqGmaw9dw5cxLmBx1lxgQN9A1jMgCx5IGN2S29yKRgE17Bdu/3VTtjYxK4O/lMMoByHwjFakEmIvSxXAvP6mo7lX0yFSU/CPnANxPgx2jRcrpkTRDXRDm+A4py/Amgp8wcUE7EzbNb8wyezA6YOMYCGfIDl3ua8/GCGGdy3rx6VvJBc6w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zczBwuZiu0pOW8u1wPMKkqUHJLefuJuMntu7TSVTNho=;
 b=ZdR19TNZYnLC+5GhsXDTlly59TzhaICMfgLRQMXgk/21pRWZcuUzeQuGcC3eF1hz44FVEcXws/CMwPg4WL5qXDfNvW46r0QC9qC19ChJZ/R3vL0wMZGyTQQYKAAKDh4Jamyt2qQcVuECVU0Y1QHC81UVPshM/CEng0EshhsgXSM=
Date: Tue, 29 Dec 2020 15:51:55 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@netbsd.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 20/24] If FILENAME_MAX is defined, use it instead of
 arbitrary value (fix format-truncation errors with GCC >= 7)
Message-ID: <20201229145155.5gwyyrb47p45ak3c@Air-de-Roger>
References: <20201214163623.2127-1-bouyer@netbsd.org>
 <20201214163623.2127-21-bouyer@netbsd.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201214163623.2127-21-bouyer@netbsd.org>
X-ClientProxiedBy: MRXP264CA0001.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:15::13) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 9b2e55f5-69d7-49f2-c95f-08d8ac094bda
X-MS-TrafficTypeDiagnostic: DM5PR03MB3291:
X-Microsoft-Antispam-PRVS: <DM5PR03MB3291580E0DAB12E4772C6BE98FD80@DM5PR03MB3291.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: o+OhZ2ZSgqTM6dWLIJTBJKtLGfYfO2WfcfcX2wWO+YvI/2Nf2JrwWj5BD8VJuvKAPGs7mvDvFvxxte/VybOaU98AV61fuRjGx8JkWafu2kG9IaiBXO0irRXw/uxA2mlky16PCm1LefSXxVoeOuk6501/1z97uMGoXbVLNW9MjsbaHASsP0DszdX8E24gQNZcKRBFoSHLf6DgkOSOK64WD6FZ+eu10szHky2+yXzgH1VueUvqTXICUINAIkwnuZpl8VDWIuUthbKz70cGHyTZDsE1+k34BDSpjuo5gSnus8OBwtj7iVT3PtjWxuHduhhpNzvwc6MvNO+EbrJ/4joNMVoK9mf/c8CEMOBGEk5mpagR1110UdQf8gqL5vJ6hvrY5lEHhIy0cB11CftyechzTg==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(366004)(39860400002)(376002)(346002)(136003)(396003)(6486002)(16526019)(66946007)(8936002)(8676002)(66476007)(6666004)(478600001)(9686003)(33716001)(5660300002)(66556008)(956004)(4326008)(26005)(86362001)(186003)(6916009)(2906002)(6496006)(85182001)(1076003)(316002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?QXRxTkdDU21CQzQwU21IaW9qN0d1N2kxZTBUWnZaNGdrU0NwS2xEUEZqQUYv?=
 =?utf-8?B?M2JIMmNDM0ROU0ZKWXdRTC85MlBoSTBWSU5JUVExYzJEVytndnVnd1hFQ3c1?=
 =?utf-8?B?NWFKUEYvaHUxM2YwbUZUd1NyWmJaZjM2bnU3Q2s4eUtzL0pPdU1jTlpIb3FE?=
 =?utf-8?B?MHcvVm9lWFVHREFTN1FKVkNDUTUyOS9iYksvdiswdGNZT3BQTmgzbUpJbXgv?=
 =?utf-8?B?cWNSMS95V0I2QzR1WTBiaE9uZG1kZ2trYU9qbGJ6RGJzUC94bVRRdTArVHkx?=
 =?utf-8?B?OThPaXo2cWdKTStJYkNXRUY3RDJ6K2Z3NHRkdlJDdFVDRnYrWWRMUTZpV1Ra?=
 =?utf-8?B?YkFKaFlpb3FzditWY1V6Mk80b3pUVW1uNW42ZjRWcFRmOExBWG80N3lJS1Bn?=
 =?utf-8?B?SFFwVDZYd2NoN1psWEdtSk82RlhMaDNRSlhxQ0FSdlNlUWtEeWYvRnJtK0tv?=
 =?utf-8?B?dXJQWEdvYXBDVkkvOWFXNXF5U1cvQXhHN0Rldmo0Y2FrY0RNVlROMk1RNmZj?=
 =?utf-8?B?b2F3Ymh6cGJkTVJKMUoyRFo4eTFWV25NdGU2TzhndFhnaC9zNWEyRjZUa2Ra?=
 =?utf-8?B?dENCSVd6QkN0ZG1KWFJUaEZjbHNGT0JJaENoZEl1ekNDQjE3SmE0Q3A3Zkhy?=
 =?utf-8?B?SGpxSHVsQ3hmeXRDMEdkNXVseGFCcTNHSzlBR3lnNGlSV0NHcTNQYTJwOGlL?=
 =?utf-8?B?VmNySm1YRkVKeXQ4VTI1ZkVtUjNqd0dkZ3BKWFQ5WXJuSjg0T0dVWW5sK0xn?=
 =?utf-8?B?UDBOeTFyb1hHNWFORUxqTGZFVjBzNzlJVE9CSFM0ekdUcFBHSTNCTUxUdllB?=
 =?utf-8?B?dlBLZW1GSWJPRUd0Z1hzSEhwaUdvYTltbTNWQk1vNGI5RTl5U3NjR05QUkZn?=
 =?utf-8?B?dzZza1h1ZC9ST29xSG9ibHpCUmxFdHM5YmRlVG40MW4wVFhMSWY1dUdnMkVK?=
 =?utf-8?B?TWE3cUVTNG4vNEtVWCt3QXZ6QUlQbjVCVElISkpQdkFxVFFhd0ttRGR3QjVm?=
 =?utf-8?B?aDdCbXBEUUtQTkM0ZVcxSXRTQXllS3dVTks3dXRkZitsZW5XWkxPTm00M1dR?=
 =?utf-8?B?Ymt2V1lBRVBFVE10U3dubld0U2NEZTdTeFVvZXBjSHFZYktvdWVQZ2pRd3NL?=
 =?utf-8?B?S1M3SlF5Yytma05RenFFZ2VZbnNvUGYrd1RPaTBvUXhjb0tkdWxJRnhBdnZO?=
 =?utf-8?B?dTRJRXNTTUF0dGpHZUFCUG1WVGROU2lSRzFCS0NmeGNUemZuREl2MzQvdFF6?=
 =?utf-8?B?bVR6cUNJbVR3VXBmdzd6YTJFR2xFcXNtTmNrVWJYMXFBOHF3Q0UxOXY1T2RB?=
 =?utf-8?Q?TYzOlxtFxbFeqH3veBEQ+rh6/V0bKqz5nv?=
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Dec 2020 14:51:59.9132
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: 9b2e55f5-69d7-49f2-c95f-08d8ac094bda
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: SpNg/spkwqk99rrp1mdfNFHQ5hZ4atXb9Vq25V53AVNDgBX7mY3ssESHK/cQ/aKbXwuoNfexTPRp+vn0hmP36Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3291
X-OriginatorOrg: citrix.com

On Mon, Dec 14, 2020 at 05:36:19PM +0100, Manuel Bouyer wrote:
> ---
>  tools/xenpmd/xenpmd.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/tools/xenpmd/xenpmd.c b/tools/xenpmd/xenpmd.c
> index 12b82cf43e..cfd22e64e3 100644
> --- a/tools/xenpmd/xenpmd.c
> +++ b/tools/xenpmd/xenpmd.c
> @@ -101,7 +101,11 @@ FILE *get_next_battery_file(DIR *battery_dir,
>  {
>      FILE *file = 0;
>      struct dirent *dir_entries;
> +#ifdef FILENAME_MAX
> +    char file_name[FILENAME_MAX];
> +#else
>      char file_name[284];
> +#endif
>      int ret;

I think it's dangerous to do this, especially on the stack. The GNU
libc manual states:

Usage Note: Don’t use FILENAME_MAX as the size of an array in which to
store a file name! You can’t possibly make an array that big! Use
dynamic allocation (see Memory Allocation) instead.

I think it would be better to replace the snprintf calls with asprintf
and free the buffer afterwards. Keeping file_name at 284 should be
fine, however, as d_name is 256 bytes max and the paths above are 26
characters maximum, I think (27 with the NUL terminator).
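
As a rough sketch of the asprintf alternative (assuming glibc/BSD, where asprintf is available): `open_battery_file` and its arguments are illustrative names, not the actual xenpmd code, but they show the shape of sizing the buffer to the real path length and freeing it afterwards:

```c
#define _GNU_SOURCE   /* asprintf() is a GNU/BSD extension */
#include <stdio.h>
#include <stdlib.h>

/* Instead of a fixed char file_name[N] on the stack, let asprintf()
 * allocate a buffer exactly as long as the composed path. */
static FILE *open_battery_file(const char *dir, const char *d_name)
{
    char *file_name = NULL;
    FILE *file;

    if ( asprintf(&file_name, "%s/%s", dir, d_name) < 0 )
        return NULL;

    file = fopen(file_name, "r");
    free(file_name);   /* buffer no longer needed once the file is open */
    return file;
}
```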

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 14:57:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 14:57:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59960.105129 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuGQz-0006OJ-Ex; Tue, 29 Dec 2020 14:57:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59960.105129; Tue, 29 Dec 2020 14:57:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuGQz-0006OC-Ag; Tue, 29 Dec 2020 14:57:13 +0000
Received: by outflank-mailman (input) for mailman id 59960;
 Tue, 29 Dec 2020 14:57:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dLv=GB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kuGQx-0006No-HC
 for xen-devel@lists.xenproject.org; Tue, 29 Dec 2020 14:57:11 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9e062b61-a496-44fa-b46e-e2f217855d0e;
 Tue, 29 Dec 2020 14:57:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e062b61-a496-44fa-b46e-e2f217855d0e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609253830;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=2uimnoncgjqTx1zKH3TF44gVonlV6rys2NRMrk2z8YI=;
  b=ZKvK7kN+Xv+/ZFzf3ln67tv2LcFlj3VlZCrroZ+FVlh0fny2N/7A2HOX
   I+jHGwhWwsE0Zu8D31z9/ZobHVTLvYLS/68qS5R7Y6adAaco29ko6eGvh
   /3Gba/SLmW0ln5cJ0Y6FJmMLFLOxEOYr3F4Yt4wzx2QFzT0fNakpijNR2
   s=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: l5EndpRZJpWKPMs89znV2PSNW2xR3l5mphKHdpwxGpPEx5dno6Zel3oLgryDKGBrNg+KySbzF5
 2DbUZ/dIV7VnLLRN+cKngJJEPyWV0zJUnd6PwT0uqn6XpGJA1ODCoEY+y3PeGsWid+wqpIB8sS
 dGpm8U6BNiCgogh3v/ZwRQkD+9QY/2HOkUBeo5habQYEh/OW+CoWhY9G+D3eBOWTCGLUBY5UIE
 oqkJaNFG2ZFfWy9/3kn3P0eHnvL9oIOPdNTTqA8hQDmffM7+ufUPNukRN4chVWxJmZCWIN2Wep
 nLQ=
X-SBRS: 5.2
X-MesageID: 35347451
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,458,1599537600"; 
   d="scan'208";a="35347451"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aLdifREpNrXk8aOLCstGykRtwDmmgESv+33FwUs6pulF5NiQbll5Xna272TW8l15qnR4I4067vgJeF3qSPLfUZ0IBGx8K/KVe9tlazKnce7/EFOuk/2XhunPgdU5E/13gGYlfjZc4x33OoAtF3yuSBnO2qlvlR3NZoQ99Rs006Oa8JdFeazWLbGUFz9Xqi9V2niajRUnyplboeLNIwrYIqyIC8pscHMC9+mx0nFASMlkbwNBH8J5Q4tqf6ccrm1r0NbpAY48HC/QLimIJNmD4ijcKu49etPFdtjExmi5brSPIGrdwRgAjHIJ0lQuixPIGjpSZ2Fgzi/umdP2BHoYcA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2uimnoncgjqTx1zKH3TF44gVonlV6rys2NRMrk2z8YI=;
 b=IHJ/uXFNyi+0jaiLMFTbkyLpK7IR9AM2v45vXhs6iat8aMuDho0urcp9W+tYTIbhE+A7UWeafgP5qa/DZ70HyGzEPb8GjROjXZK4MsoaoIEhhUI9GZgAajA4D1JuC0tZyShLPduDthaMJ9FoyKS2BB+1j94VtoiUWysTrKu/saFRP29KPsOY1/n7N0u+8A/4tW5GToVfWIHidFpTJ8rA8C8+bPRlx7UxCl8ZZ4jkIshAlRMOz0lLdzzS8ASMSvknC4GedaPigEN6+B04a4n2fjx7NKGvulCbGAbU2w94H/fDLwjriLVfidDQvgtMSjMdiDnQQsrdFxGfiXQ7nILT5w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2uimnoncgjqTx1zKH3TF44gVonlV6rys2NRMrk2z8YI=;
 b=szB1R5JjNOhR60KerF2aw3l9o06yG06Jpudb70wBnxYzr2yuI/7jkQ8seVz75JCHmiNheZ6H4ZVnQuGp129G7CLvcM2t8KAaw4zZrCBLxFs+OxlRI2QM9wHNz01DSd4Y0O3VE/9ausvn26q7K8jPZNMjgzV51887GoilMlLYn1w=
Date: Tue, 29 Dec 2020 15:57:01 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@netbsd.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 22/24] If PTHREAD_STACK_MIN is not defined, use
 DEFAULT_THREAD_STACKSIZE
Message-ID: <20201229145701.rfbfou5oupwuf5ei@Air-de-Roger>
References: <20201214163623.2127-1-bouyer@netbsd.org>
 <20201214163623.2127-23-bouyer@netbsd.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201214163623.2127-23-bouyer@netbsd.org>
X-ClientProxiedBy: MR2P264CA0115.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:33::31) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d75093a8-1c66-43ef-ed7f-08d8ac0a0329
X-MS-TrafficTypeDiagnostic: DM5PR03MB3145:
X-Microsoft-Antispam-PRVS: <DM5PR03MB314539CA92E8832BB65FC2478FD80@DM5PR03MB3145.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:2657;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: OF9yt9/iL7A0PJwW7NbKXl6B5+Y6j+ZovTkUtutgRxrHjp+Ez8aPgepqn3zlEzktXHKZNHdilZVXDXnImhPX+uVv32UFUWQ5t9Z+bx+Rc+XqmZvalcDVmaz9TuZhoMeZyP0TyT6WVwoMGWUaWSqSvRacK1NUc4t9pcZ/mbRBAq13rUUad+hDcL0UIiw3QugnDiUOPJlWBXit6QFXswVZrX+Mrp/BDVszLKbiJDDtxhxMJDmpnF9/F/Y1Pv4OlHk7Bodvfpx7j4PUF3INF6/p5LzJ9uAFHsUA1HiHNIazkC89K5cmSwd5wbtSQaDf5VshfFaE7JjB2Cci9KNjCkw7+AyiM1Bs+XhLPW76+7ZqaPiQLXKSvTzNaRIb3LyU/YSj3inOHJX4etG+SwshwguCpKN9Je+A1sK3mfxKQetiYSHSJjKzHsdy+PSbMJltNpYRZoai4gIjCPjAQG46ebCrmg==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(396003)(346002)(366004)(376002)(136003)(39860400002)(5660300002)(9686003)(66946007)(6916009)(8936002)(478600001)(1076003)(66556008)(6666004)(16526019)(186003)(66476007)(4744005)(8676002)(956004)(26005)(966005)(86362001)(2906002)(33716001)(4326008)(85182001)(6496006)(316002)(6486002)(83380400001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?Nkl5S2ovMzlnNkljWVNZRE1wQnFjdXZrUkV2V3JpR3hYR3NKSDRwRHNQL2Y0?=
 =?utf-8?B?NDQzeWhGNjZnczQzWElsL1BQK1FvZGVKNzY2VnFWU1krQ0dxMU5WT1dMTmZz?=
 =?utf-8?B?VHNzTmxSVVgvR1R2bktXTzFEYWxGVVdTUzI0KzJkSGVJNVRmUjd5SCszcUdC?=
 =?utf-8?B?RFIybGQvQUI2VjJ0TUJPVFJvbWpmNzAzNEt0QXRyUFZNMk9oWEp2OUdiNTZx?=
 =?utf-8?B?Z0N6NjRUWk9IK0dqR3hKNHRBWmpWWm1qUmdFNjZZRWd1RzgwdXlsalhzYmhP?=
 =?utf-8?B?L1Zzb3M4b1A0NXp3WDcwdzlid0VXWjdZeWhTbXdLNVl1QVhZd1FveG5SbXdw?=
 =?utf-8?B?ek10T285bjZNMzRSY3BIa1pQTndlQlJmbTJyZkxENG0zVE9JNGRzZDg0YTEw?=
 =?utf-8?B?TnBlS1RLWi80R3JPTUd4enNrUXhOVjg2UGhEYS9qTUdqbFpiajRBQzVkL0pG?=
 =?utf-8?B?cFNmVjIyR3JNV0kxbFpERm54SFhwQytNYzNIMDhETlNUKzV2OEg1eUY3amxD?=
 =?utf-8?B?Rm5ZcFhnejBtdmVadmdvSFFjdjgxYVg1c1FrMVczQUNWby95dVU1dTFWV25W?=
 =?utf-8?B?amJDRENMMWlYWVRyMjIzSG44cGN6U3pyMTNjL1AxSXlBMUttcWJsdmM5S1VJ?=
 =?utf-8?B?c1JrREF2MGgxVEJGK0FPYnk4WUVyMGdUWmo3V251TEQ5Kyt6cG9ybUFEV1k4?=
 =?utf-8?B?M21ZWmFhTUVPMVdMSFUrRFUvQmxjcTlIQzV3bElkeWZrTUVJckcvNGY3MEJO?=
 =?utf-8?B?QTBhOWxyTHozYitJRS9ISkk5anNBME1lbjlzenZFdEdNWTYzN0tmRHhIMFhs?=
 =?utf-8?B?RzliYUgrbHM0S1pXNjRFZVFnai9ST2FyZ0ZDWDRDVThuMUJSNTdISUVBclhV?=
 =?utf-8?B?YzlXaWJyZ3ROTWlQU3NHcXV2cFlXNEhsNXdETWJLZTY1NE8yb25mdDJlM2pU?=
 =?utf-8?B?V0R6L2krZ0VCZkVsUUdac3hvdDZ0WGlWTWpjeHhBaUQ5L1J4WmVzellrbFl1?=
 =?utf-8?B?Y2VJSG0wd1l5S2F4S0xFZnRFOHdxNnVoNHVhOUZSditmQjFHN3pJWW5JY1JJ?=
 =?utf-8?B?M3NpT1M0d0VjaDU5RXJ2cVYwYXNISW5VZVNVVHhySHlMWkVWMWl2UnhGNytx?=
 =?utf-8?B?bTJ2QVdYVFgzbWs5UkZhdnNYNXU0ZG16bFlpOGdFMVFvS2QxbmtIME1YaVdr?=
 =?utf-8?B?V1NPUVk5dW8vdnVVZnJycVNibFBVNHFkZ29lcE1NaUVTcC9ZSFQzRGlwVjJx?=
 =?utf-8?B?QXA0THdWdmFMV3VGWnk0NmU4ZTlwNk1DTjB0bVQrS2RDMmVwNElIdDdaZ0RX?=
 =?utf-8?Q?iFkCXKKOe7tQvMNyLEYrNt1otu8RGOWxwr?=
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Dec 2020 14:57:07.4908
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: d75093a8-1c66-43ef-ed7f-08d8ac0a0329
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 3TG14aoZTv4QSNPvcl/havytAKcGa1LmD5uEHobzCuNTo2kXpL/kOkSeq8DQ0LT8wHi/3vrIy5YcFzRIuZMalg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3145
X-OriginatorOrg: citrix.com

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

I've been told that PTHREAD_STACK_MIN is not designed to be used like
this anyway, and that what we do could be dangerous:

https://lists.freebsd.org/pipermail/freebsd-current/2014-March/048885.html

Please adjust the subject line to mention this applies to
tools/xenstore, and add a short commit message saying it's not
available on NetBSD.
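
For reference, the fallback being reviewed has roughly this shape; this is a standalone sketch, not the xenstore code, and the DEFAULT_THREAD_STACKSIZE value here is purely illustrative. As the linked thread notes, PTHREAD_STACK_MIN is only a floor, so it is used as a lower bound rather than as the requested size:

```c
#include <limits.h>   /* provides PTHREAD_STACK_MIN on some systems */
#include <stddef.h>

/* Illustrative fallback for platforms (e.g. NetBSD) where
 * PTHREAD_STACK_MIN is not defined. */
#define DEFAULT_THREAD_STACKSIZE (16 * 1024)

#ifndef PTHREAD_STACK_MIN
#define PTHREAD_STACK_MIN DEFAULT_THREAD_STACKSIZE
#endif

static size_t read_thread_stacksize(void)
{
    size_t sz = 256 * 1024;   /* the size the daemon actually wants */

    /* PTHREAD_STACK_MIN is a floor, not a sensible size: clamp up,
     * never request less than the platform minimum. */
    if ( sz < PTHREAD_STACK_MIN )
        sz = PTHREAD_STACK_MIN;
    return sz;
}
```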

Thanks.


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 15:20:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 15:20:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59966.105140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuGnE-0000ZU-FE; Tue, 29 Dec 2020 15:20:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59966.105140; Tue, 29 Dec 2020 15:20:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuGnE-0000ZN-C6; Tue, 29 Dec 2020 15:20:12 +0000
Received: by outflank-mailman (input) for mailman id 59966;
 Tue, 29 Dec 2020 15:20:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dLv=GB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kuGnC-0000Z2-4t
 for xen-devel@lists.xenproject.org; Tue, 29 Dec 2020 15:20:10 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cf01b7c1-9722-4162-b05f-bb797fa70c4e;
 Tue, 29 Dec 2020 15:20:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cf01b7c1-9722-4162-b05f-bb797fa70c4e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609255208;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=phJUXKEsuBnUAbed3mQNpsnR9fAqyfE9ogI7sOyE9Sg=;
  b=RaJe4iBG7+QcMiB9nE36Rpz+UmH8flBlJoN0iZpygbODgyAQ+xneEK5D
   uCzc5SkUJMOxT7+pY+J/tzEm8n06BhVfyMFtrJqNEN3ZQChYZsR2HL3vx
   db3eoNXRMpVP5pOVDGf9s7Z5NtxA/zk+RRs4NBvam8p18MhbqqBzAlhpN
   o=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: s6u5yk4SXnBRwldhM5dt/Tt9vNs3Wgo8N5qP4wP+HC/YhOyygjMEqrFVNAhsOtOoL2WEcyhWsE
 tSW+XHB4SalOf6jolpjULTFOVQgroZdvZKILe8Wwh7ndABuyC8U2QwaOuouGo/hEW8QB/JM5JI
 veTCWFKJ2bHfHbXP0BPxz3pbOBoj3Pm7gP/oQUUwsna2kvLZZmUHJ9CPY35sgUa2G3lfNrWIjR
 1SD3u2Bp5VPBwxFJ6BsNavMXbVyUtOoevdDWGheB6x3rbi7j4ochlUdfGZJGxFtVrHJUzrMJEj
 Bno=
X-SBRS: 5.2
X-MesageID: 34329278
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,458,1599537600"; 
   d="scan'208";a="34329278"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jLBYNodBKehb1Aty/LyaBO66uJnkOye+ZiPGZCgQkVGkM3lfFmnd/E7ILKFL5tS+ccB284S/R6bU9zWiB/V3tvZR0e9L6/6+5hfvfTSgUGmAKjO4IQ8tt4V+r7J07EXO+h/C2/131qU1EmlApyjtNPlvNpSZVJVA1YWOjTSWyzBTNvtPN3xF0zlvLBI7n+aAIJAzR7QuFUdJqGMjgpQcqXVuME/5MqS5CHiG72Qjr45eP19Xc3K2LceFcdJexblMPmmIhb5kxMPuuWzekgq1XQkU0zUIgoy/SOEmGkDvcix3RnVvuaG1X98ectN+c8Fe0jL/IKVK1tjgCvG3M+dxVA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eQXKmy6xWG9sSsKlTZYW/3ACpR6I1k7AGY2g8ru6yKU=;
 b=KFkjKdr7HxA61kHgM0GxdXv3dAI2eFD/VzSOTXdej4I+wLObBN8HVozWhxFen7572l3NvN74Jr9uavm8l8zNgSzIU7dAzKhpbII8bmVEjsgdK8cH8Jx/fQMi9g2J8rRUQ6hq/XsWrvs+mwptW7fBO/a+0Oz0LHW/hToPqNJTHe+aELv65yGgvVWjHC+sbKIbI8l8iX8Xp8A5khE3MynsznPNASh8mWyeekax9pEd3dYwCC9M3dtbEtBDW1mKCLpnaCjiaTzPco94rRC3dY+aJVPZjs9IF1VECf0DLJZA131zzYfk8acsoj/E2eIlslCBCYcHcLCtU73oDv8/ft/piw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eQXKmy6xWG9sSsKlTZYW/3ACpR6I1k7AGY2g8ru6yKU=;
 b=PtnWckh8tuyMuxCAnVRkQeppefdTX8dRDsx/3rBwEvn45r9C5FeKD+iiDIA675+bVkvwmf21OXqhMrbCkoVfdWEARwND0OoO5so/sP+mQ6nnOIvZn8e4V2ukwWAHNdtBYfLPVstnilQMYzz15l/3Ji9/4ODUt1og1X8cPtbst+0=
Date: Tue, 29 Dec 2020 16:19:58 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@netbsd.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 14/24] Pass bridge name to qemu and set XEN_DOMAIN_ID
Message-ID: <20201229151958.ungp5efzj75zszzm@Air-de-Roger>
References: <20201214163623.2127-1-bouyer@netbsd.org>
 <20201214163623.2127-15-bouyer@netbsd.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201214163623.2127-15-bouyer@netbsd.org>
X-ClientProxiedBy: MRXP264CA0031.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:14::19) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d08ebca4-e23b-44d2-5953-08d8ac0d37c5
X-MS-TrafficTypeDiagnostic: DM6PR03MB4217:
X-Microsoft-Antispam-PRVS: <DM6PR03MB4217A258792D04592D5F4B098FD80@DM6PR03MB4217.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6790;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: Xf1eQV0+MmuTRBFB3V3hhfWMUbzM++JMFHi6sgMJcA4+eyTwNul9uydjb6DG7SsG9nv/enZ6PMPsI2EbVzm2JM5aifDjOJNW7izHx/5A9T5iOPIm7ENcTzOVMIzXLmxj5Y3pMUgXwAYCz3L1UeOKAed7b6BNtmlPptznzPH5lswxBThYA6TGgclNk8ecT36Z9o7QLhMC6odsXyqeKTScSnru0NF/D7+vx7lKAV1U68r0Eo889uZkOVfz4TZ6m5w1tqMcz4On5gjsSkVrS2aZ3BqyjmwmzFmR7fXSOcEZQqqYtjYhZDEnLxDZ9ICFCFHcNYVkXoAco6QahiGuh7ycFkGZ6pxJnV9yGL2bp90/37M/wfL13eRjCkKrskXXkeKL1i1RZD8izjm6MBKpIVM6Yw==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(366004)(39860400002)(396003)(376002)(136003)(346002)(4326008)(9686003)(33716001)(6486002)(2906002)(6496006)(86362001)(6666004)(478600001)(5660300002)(66556008)(66476007)(1076003)(8936002)(26005)(186003)(16526019)(6916009)(316002)(956004)(66946007)(8676002)(85182001)(4744005);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?WHc0WFFSV2w0bkw4cTVkcHYwSm1RcTJJbzhOWjhsbi9wMjNpbWNUUnNlSW5a?=
 =?utf-8?B?MVE4VUVPa1BkSUhpTVlPTWl0b1FGM2tHK0E1Zmo5WEFvRDJDNlRpa05Dc3cx?=
 =?utf-8?B?YUMwOFlXalJLZFNjRE1xeVp0Mm45a2hCNlVsYWhEVWRaMjRqbnBTM25iYlQ1?=
 =?utf-8?B?T0tWZjc5RkZiV25aaDVqTTl4Ry9DbzFoTmJ2OHVZMDloeFZjQm4wdGNnUzhW?=
 =?utf-8?B?TEc0Zk82L1pBZjRxcTdGUjF3UVppQXBWVzNkMW9xV1A0ZEQwSGhVUEV2TDIx?=
 =?utf-8?B?djF2SmNrSm8yMkJIQnVkUWh0V040N2pKY3MzUkw4NWFPam1mZ0VudGhLUWRE?=
 =?utf-8?B?MzlzZnByekVPTSs3T05zQ0NWM01BQ3R4VEVJeFlrRk1EZ25NY0s4TWpLUnhv?=
 =?utf-8?B?OFJIak9iSjNNdXRZQTQ4a1hEYVB2RUtGdVU3RFIwUy9zalhQU25FZ0JsQm1y?=
 =?utf-8?B?Sm51bFhvU3FMaVM1RDZvNWFoQjRCOG5Ga1lXV3hnVXltTE1wS0VxL29aTHNz?=
 =?utf-8?B?ZVpVT0pUdkFhWFJDTEV1UkdtazFTYWFJd3hFWjNDbzdENG0xZ1JIR2JTOWpZ?=
 =?utf-8?B?NjkyUW5TUkoxMGZ2RTFDRmYzQWxEaTMzcWJGdG9IcVU0Y3dmQlNiak1NYk03?=
 =?utf-8?B?Yi8wWWpweGNPd2tSaG5sNGw1YU1ocFJJVmtDdnpDRTJyNjV4TkhueHVHeTFS?=
 =?utf-8?B?YXlnRTRBZzFENEJTVHJ2VHJHM0l6STBKQTludGk5aGVtUjhCclNhN0JrM1RK?=
 =?utf-8?B?RHhFRW9yNWpYUkFpVEVibGhxU3VLOTBhZFdITDJDWk9ZelFDelJ0NGUrQml6?=
 =?utf-8?B?bWtDTmR4VlRTNDVPQVMwSjB6R09nbERkT0ZiSi9McFdoSG5sdlFQdExWRnYy?=
 =?utf-8?B?Vi96VG5VczlPbStEZ0w2U04rUG45VTl5c3N5ei9zVnNoZTB2RVlOb0phM29j?=
 =?utf-8?B?UW02N3l6TGFOZzZQQmV1MnNSM0JpeWZjZHhiajErYXVpanE3aUd5UWZ4M1JS?=
 =?utf-8?B?RmhHanFxLzZJZjJlTjF6bEJkcEV5dlNoSHhuM3ppbHlDSWdnOGF0N3hkZ2VM?=
 =?utf-8?B?TUIrOVNacFlRelhML2pYeER2T0p0QWlxbURDNmlFQTJLZ3hYTkFCMHF5MEdY?=
 =?utf-8?B?T1hrQzh3c2haVFZ0bEtaNjNEcG5aNVZHeDkvODJMbEtnNHFMUm5ObGNOTDVK?=
 =?utf-8?B?Qm9zVnYrRERqS25CaHBHOStCWHE0cWhZMFgrMWVkRmdIa3ByL0NvR0JSWEZQ?=
 =?utf-8?B?U2ZGZkZUTVpQbCtBNlJvOXh0QzBMNnVRNVR6elV6TnYyREJ4VEowZTFCM21P?=
 =?utf-8?Q?2NP+gcLAqHIan+SZFC6HJ4MWerMVFLWq3c?=
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Dec 2020 15:20:04.2072
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: d08ebca4-e23b-44d2-5953-08d8ac0d37c5
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ljIoqrA6Yhow3yruiP7pSfVfDSk0OVqyrKD2Zo1lyxsU/7PKaNn3elQr6DFGUCum1thYGK6d5//EjqUuTYDwug==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4217
X-OriginatorOrg: citrix.com

On Mon, Dec 14, 2020 at 05:36:13PM +0100, Manuel Bouyer wrote:
> Pass bridge name to qemu
> When starting qemu, set an environnement variable XEN_DOMAIN_ID,
> to be used by qemu helper scripts

NetBSD is the only one to use QEMU NIC scripts; neither FreeBSD nor
Linux uses up/down scripts with QEMU.

> ---
>  tools/libs/light/libxl_dm.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
> index 3da83259c0..5948ace60d 100644
> --- a/tools/libs/light/libxl_dm.c
> +++ b/tools/libs/light/libxl_dm.c
> @@ -761,6 +761,10 @@ static int libxl__build_device_model_args_old(libxl__gc *gc,
>          int nr_set_cpus = 0;
>          char *s;
>  
> +	static char buf[12];
> +	snprintf(buf, sizeof(buf), "%d", domid);
> +        flexarray_append_pair(dm_envs, "XEN_DOMAIN_ID", buf);

Indentation, here and below.

Also just use:

flexarray_append_pair(dm_envs, "XEN_DOMAIN_ID",
                      GCSPRINTF("%d", domid));

Here and below.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 15:23:46 2020
Date: Tue, 29 Dec 2020 16:23:34 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@netbsd.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 00/24] NetBSD fixes
Message-ID: <20201229152334.mu6i4kg5tltrf7ky@Air-de-Roger>
References: <20201214163623.2127-1-bouyer@netbsd.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201214163623.2127-1-bouyer@netbsd.org>

On Mon, Dec 14, 2020 at 05:35:59PM +0100, Manuel Bouyer wrote:
> Hello,
> here is a set of 24 patches, which are needed to build and run the
> tools on NetBSD. They are extracted from NetBSD's pkgsrc repository for
> Xen 4.13, and ported to 4.15. 

Thanks! I think they are mostly fine. All of them are, however, missing
a Signed-off-by tag, which is mandatory for getting them accepted.
Also, you should Cc the maintainers on patches so they can review
them. There's a script in the tree to do that; run it against the
patch files and it will append the appropriate Cc's:

% ./scripts/add_maintainers.pl -d patch/directory

Optionally, use --reroll-count when sending new versions of the
series (v2, v3, ...).
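
As an illustration of the versioning step (the repo and commit below are
made up purely for demonstration; only git itself is assumed, and in a
real Xen tree you would then run add_maintainers.pl on the output
directory):

```shell
# Throwaway repo just to demonstrate the flags; nothing Xen-specific here.
set -e
work=$(mktemp -d)
git init -q "$work/demo"
cd "$work/demo"
git -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "libxl: example change"
# --reroll-count (short form: -v2) prefixes the files and subjects with
# the series version, e.g. v2-0001-...patch / [PATCH v2 1/1].
git format-patch -1 --reroll-count=2 -o "$work/patches" >/dev/null
ls "$work/patches"
# In a real Xen tree the next step would be:
#   ./scripts/add_maintainers.pl -d "$work/patches"
```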

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 15:32:39 2020
Date: Tue, 29 Dec 2020 16:32:21 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
To: Alex Bennée <alex.bennee@linaro.org>
CC: Oleksandr Tyshchenko <olekstysh@gmail.com>,
	<xen-devel@lists.xenproject.org>, Oleksandr Tyshchenko
	<oleksandr_tyshchenko@epam.com>, Paul Durrant <paul@xen.org>, Jan Beulich
	<jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu
	<wl@xen.org>, Julien Grall <julien.grall@arm.com>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Tim Deegan
	<tim@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Jun Nakajima <jun.nakajima@intel.com>, "Kevin
 Tian" <kevin.tian@intel.com>, Anthony PERARD <anthony.perard@citrix.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	Kaly Xin <Kaly.Xin@arm.com>, Artem Mygaiev <joculator@gmail.com>
Subject: Re:
Message-ID: <20201229153221.lnj2mzei3s5q5xzp@Air-de-Roger>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <87h7p6u860.fsf@linaro.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <87h7p6u860.fsf@linaro.org>

On Mon, Nov 30, 2020 at 04:21:59PM +0000, Alex Bennée wrote:
> 
> Oleksandr Tyshchenko <olekstysh@gmail.com> writes:
> 
> > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> >
> >
> > Date: Sat, 28 Nov 2020 22:33:51 +0200
> > Subject: [PATCH V3 00/23] IOREQ feature (+ virtio-mmio) on Arm
> >
> > Hello all.
> >
> > The purpose of this patch series is to add IOREQ/DM support to Xen on Arm.
> > You can find an initial discussion at [1] and the RFC/V1/V2 series at [2]/[3]/[4].
> > Xen on Arm requires an implementation to forward guest MMIO accesses to a device
> > model in order to implement a virtio-mmio backend, or even a mediator outside of
> > the hypervisor. As Xen on x86 already contains the required support, this series
> > tries to make it common and introduces the Arm-specific bits plus some new
> > functionality. The patch series is based on Julien's PoC "xen/arm: Add support
> > for Guest IO forwarding to a device emulator".
> > Besides splitting the existing IOREQ/DM support and introducing the Arm side,
> > the series also includes virtio-mmio related changes (the last 2 patches, for
> > the toolstack) so that reviewers can see what the whole picture could look like.
> 
> Thanks for posting the latest version.
> 
> >
> > According to the initial discussion there are a few open questions/concerns
> > regarding security and performance in the VirtIO solution:
> > 1. virtio-mmio vs virtio-pci, SPI vs MSI; different use-cases require different
> >    transports...
> 
> I think I'm repeating things here I've said in various ephemeral video
> chats over the last few weeks but I should probably put things down on
> the record.
> 
> I think the original intention of the virtio framers was that advanced
> features would build on virtio-pci, because you get a bunch of things
> "for free" - notably enumeration and MSI support. There is an assumption
> that by the time you add these features to virtio-mmio you end up
> re-creating your own less well tested version of virtio-pci. I've not
> been terribly convinced by the argument that the guest implementation of
> PCI presents a sufficiently large blob of code to make the simpler MMIO
> desirable. My attempts to build two virtio kernels (PCI/MMIO) with
> otherwise the same devices weren't terribly conclusive either way.
> 
> That said, virtio-mmio still has life in it: slimmed-down cloud guests
> moved to using it because PCI enumeration is a roadblock to their fast
> boot-up requirements. I'm sure they would also appreciate an MSI
> implementation to reduce the overhead that handling notifications
> currently has under trap-and-emulate.
> 
> AIUI, for Xen the other downside to PCI is that you would have to emulate
> it in the hypervisor, which would be additional code at the most
> privileged level.

Xen already emulates (or maybe it would be better to say decodes) PCI
accesses in the hypervisor and forwards them to the appropriate device
model using the IOREQ interface, so that's not something new. It's
not really emulating the PCI config space, but just detecting accesses
and forwarding them to the device model that should handle them.

You can register different emulators in user space that handle
accesses to different PCI devices from a guest.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 15:50:52 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157952-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157952: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate/x10:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:build-arm64-pvops:kernel-build:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=dea8dcf2a9fa8cc540136a6cd885c3beece16ec3
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 29 Dec 2020 15:50:42 +0000

flight 157952 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157952/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 19 guest-localmigrate/x10  fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                dea8dcf2a9fa8cc540136a6cd885c3beece16ec3
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  150 days
Failing since        152366  2020-08-01 20:49:34 Z  149 days  263 attempts
Testing same since   157952  2020-12-29 04:11:49 Z    0 days    1 attempts

------------------------------------------------------------
4329 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 976238 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 16:48:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 16:48:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.59996.105197 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuI9t-0000CW-3c; Tue, 29 Dec 2020 16:47:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 59996.105197; Tue, 29 Dec 2020 16:47:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuI9t-0000CP-0j; Tue, 29 Dec 2020 16:47:41 +0000
Received: by outflank-mailman (input) for mailman id 59996;
 Tue, 29 Dec 2020 16:47:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dLv=GB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kuI9r-0000CJ-Uz
 for xen-devel@lists.xenproject.org; Tue, 29 Dec 2020 16:47:40 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8dd56975-d545-4cca-ae91-c659e8192e0d;
 Tue, 29 Dec 2020 16:47:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8dd56975-d545-4cca-ae91-c659e8192e0d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609260458;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=cmTPdKGUwWx3NY6QNn0UlcMK4WGx6V/356wiqtW7AF4=;
  b=CSzf3YtDdHp2F7zbr+96Kp7EDtPseF/KuACWUV7WcWRs4TnkRUImW6c2
   xrSijiI6o/T+FD2HlizYJtCoimkxcv8iK8F99Pt1g8s7HemTTJe62lAHg
   qquZTbCKcKP8HNERMdS53S2Q8fkQZJDHBw2+QsUyxHK5ymfTfzX/VeMrA
   k=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 34137495
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,458,1599537600"; 
   d="scan'208";a="34137495"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
Date: Tue, 29 Dec 2020 17:47:20 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] x86/msr: don't inject #GP when trying to read
 FEATURE_CONTROL
Message-ID: <20201229164720.ss45re57jjip57ls@Air-de-Roger>
References: <20201127104614.71933-1-roger.pau@citrix.com>
 <c1f686e2-dcc3-233a-c241-edf997d2cef7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <c1f686e2-dcc3-233a-c241-edf997d2cef7@suse.com>
X-ClientProxiedBy: MR2P264CA0038.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500::26)
 To DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 38316f42-c92c-4084-10d3-08d8ac196c9f
X-MS-TrafficTypeDiagnostic: DM5PR03MB2780:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB278034B60C8C3FA18BFD9E0F8FD80@DM5PR03MB2780.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Dec 2020 16:47:26.8483
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: 38316f42-c92c-4084-10d3-08d8ac196c9f
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: lCnG0dppHabvUqbOI+D014Z5HB8AORT5pd8zc9Ts9CDMHgtCbncH2BKO4UCeP56Y21lVJNxc7SECEzvEhmGWzw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB2780
X-OriginatorOrg: citrix.com

On Fri, Nov 27, 2020 at 11:56:25AM +0100, Jan Beulich wrote:
> On 27.11.2020 11:46, Roger Pau Monne wrote:
> > Windows 10 will triple fault if #GP is injected when attempting to
> > read the FEATURE_CONTROL MSR on Intel or compatible hardware. Fix this
> > by injecting a #GP only when the vendor doesn't support the MSR, even
> > if there are no features to expose.
> > 
> > Fixes: 39ab598c50a2 ('x86/pv: allow reading FEATURE_CONTROL MSR')
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> In principle
> Acked-by: Jan Beulich <jbeulich@suse.com>
> 
> However, iirc it was Andrew who had suggested the conditional you
> now replace, so I'd like to wait for him to voice a view.
> 
> > --- a/xen/arch/x86/msr.c
> > +++ b/xen/arch/x86/msr.c
> > @@ -176,7 +176,7 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
> >      switch ( msr )
> >      {
> >      case MSR_IA32_FEATURE_CONTROL:
> > -        if ( !cp->basic.vmx && !vmce_has_lmce(v) )
> > +        if ( !(cp->x86_vendor & (X86_VENDOR_INTEL | X86_VENDOR_CENTAUR)) )
> 
> What about Shanghai? init_shanghai() calling init_intel_cacheinfo()
> suggests to me it's at least as Intel-like as Centaur/VIA.

Right, and it also has VMX AFAICT. I'm not sure whether we could also
gate on the presence of VMX and LMCE on the physical CPU. I will send
an updated version with Shanghai added and will keep your Ack.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 16:58:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 29 Dec 2020 16:58:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60002.105210 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuIKB-0001Dj-8o; Tue, 29 Dec 2020 16:58:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60002.105210; Tue, 29 Dec 2020 16:58:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuIKB-0001Dc-5p; Tue, 29 Dec 2020 16:58:19 +0000
Received: by outflank-mailman (input) for mailman id 60002;
 Tue, 29 Dec 2020 16:58:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dLv=GB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kuIK9-0001DX-WA
 for xen-devel@lists.xenproject.org; Tue, 29 Dec 2020 16:58:18 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2ede19ca-830c-474f-af7a-93466bef7b9c;
 Tue, 29 Dec 2020 16:58:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2ede19ca-830c-474f-af7a-93466bef7b9c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609261095;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=2zmcgJjKaTdi/cAbh11PIRkDu33l8MCD0dWuhhlpJjI=;
  b=KRwsdckbW0BcwNBdqZ1H+PYtL6HjuACpsUgflsk/P5oBeBZ/mM8lu8by
   z1u7GZK87YfUjwOaIjAafMd/c9qf6AMAeN7RlYYNDzm6YESDc7eTYJMvd
   oepNhGe8laXhtxJOa77/lxEAPXB6Sv+UbPFoyyDRQi1ySyGblTfIjbWJS
   c=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 34102674
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,458,1599537600"; 
   d="scan'208";a="34102674"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=m3BZcI9gaizMHYSRrJ9AyF8b+D1NwrAVV/ClloW6kUI=;
 b=WVVz8vuW+c+7IMtaIfNSNLIZzM/sxmvnsO0ZKVMYWEw2Quzm+CgJDF30U9cOmdeD3qv5yCEmIyMwlGrqbWibXdJQ79BMQV7KBdLe5awaoLM7SlrfKDoxYc/Jft/E/X+dXdFfgLrYNd2X+J2/CjPPZmLVJUrYmpjRAFscUYDqs4w=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v2] x86/msr: don't inject #GP when trying to read FEATURE_CONTROL
Date: Tue, 29 Dec 2020 17:58:01 +0100
Message-ID: <20201229165801.89974-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.29.2
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Windows 10 will triple fault if a #GP is injected when it attempts to
read the FEATURE_CONTROL MSR on Intel or compatible hardware. Fix this
by only injecting a #GP when the vendor doesn't support the MSR, even
if there are no features to expose.

Fixes: 39ab598c50a2 ("x86/pv: allow reading FEATURE_CONTROL MSR")
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
Changes since v1:
 - Allow Shanghai CPUs to access FEATURE_CONTROL without #GP.
---
 xen/arch/x86/msr.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index be8e363862..6dfd3d5f97 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -176,7 +176,8 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
     switch ( msr )
     {
     case MSR_IA32_FEATURE_CONTROL:
-        if ( !cp->basic.vmx && !vmce_has_lmce(v) )
+        if ( !(cp->x86_vendor & (X86_VENDOR_INTEL | X86_VENDOR_CENTAUR |
+                                 X86_VENDOR_SHANGHAI)) )
             goto gp_fault;
 
         *val = IA32_FEATURE_CONTROL_LOCK;
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Dec 29 17:02:00 2020
Date: Tue, 29 Dec 2020 17:01:50 +0000 (UTC)
From: tosher 1 <akm2tosher@yahoo.com>
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>
Message-ID: <88957249.5342211.1609261310806@mail.yahoo.com>
In-Reply-To: <20201229081943.ifaiwrqyj5ojwufn@Air-de-Roger>
References: <943136031.5051796.1609179068383.ref@mail.yahoo.com> <943136031.5051796.1609179068383@mail.yahoo.com> <20201229081943.ifaiwrqyj5ojwufn@Air-de-Roger>
Subject: Re: PVH mode PCI passthrough status
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Hi Roger,

> I think you meant PVH mode in the sentence above instead of PVM?

Sorry, that was a typo. I meant PVH.

> Arm folks are working on using vPCI for domUs, which could easily be picked up by x86 once ready. There's also the option to import xenpt [0] from Paul Durrant and use it with PVH, but it will likely require some work.

Thanks for your response. Do you have any timeline in mind on when support for x86 will be available? A rough estimate would help me with planning something.

Thanks,
Mehrab



From xen-devel-bounces@lists.xenproject.org Tue Dec 29 18:33:01 2020
Date: Tue, 29 Dec 2020 19:32:29 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
To: tosher 1 <akm2tosher@yahoo.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Oleksandr Andrushchenko
	<andr2000@gmail.com>, Paul Durrant <xadimgnik@gmail.com>
Subject: Re: PVH mode PCI passthrough status
Message-ID: <20201229183229.okhtyskjylm2bhx4@Air-de-Roger>
References: <943136031.5051796.1609179068383.ref@mail.yahoo.com>
 <943136031.5051796.1609179068383@mail.yahoo.com>
 <20201229081943.ifaiwrqyj5ojwufn@Air-de-Roger>
 <88957249.5342211.1609261310806@mail.yahoo.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <88957249.5342211.1609261310806@mail.yahoo.com>
MIME-Version: 1.0

On Tue, Dec 29, 2020 at 05:01:50PM +0000, tosher 1 wrote:
> Hi Roger,
> 
> > I think you meant PVH mode in the sentence above instead of PVM?
> 
> Sorry, that was a typo. I meant PVH.
> 
> > Arm folks are working on using vPCI for domUs, which could easily be picked up by x86 once ready. There's also the option to import xenpt [0] from Paul Durrant and use it with PVH, but it will likely require some work.
> 
> Thanks for your response. Do you have any timeline in mind on when support for x86 will be available? A rough estimate would help me with planning something.

I'm adding the relevant people working on this, Oleksandr for the vPCI
Arm work and Paul for xenpt.

The xenpt part shouldn't be complicated (Paul can correct me here), as
it would mostly involve importing xenpt into the xen.git repository and
then wiring it up so the toolstack can use it, at least to get an
initial version working.

Roger.


From xen-devel-bounces@lists.xenproject.org Tue Dec 29 21:45:08 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157957-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157957: regressions - FAIL
X-Osstest-Versions-This:
    qemuu=a05f8ecd88f15273d033b6f044b850a8af84a5b8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 29 Dec 2020 21:44:48 +0000

flight 157957 qemu-mainline real [real]
flight 157963 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157957/
http://logs.test-lab.xenproject.org/osstest/logs/157963/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  131 days
Failing since        152659  2020-08-21 14:07:39 Z  130 days  269 attempts
Testing same since   157670  2020-12-18 13:57:58 Z   11 days   22 attempts

------------------------------------------------------------
323 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 79306 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 30 00:18:41 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157960-mainreport@xen.org>
Subject: [linux-linus test] 157960: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=dea8dcf2a9fa8cc540136a6cd885c3beece16ec3
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 30 Dec 2020 00:18:22 +0000

flight 157960 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157960/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                dea8dcf2a9fa8cc540136a6cd885c3beece16ec3
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  151 days
Failing since        152366  2020-08-01 20:49:34 Z  150 days  264 attempts
Testing same since   157952  2020-12-29 04:11:49 Z    0 days    2 attempts

------------------------------------------------------------
4329 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 976238 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 30 01:33:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Dec 2020 01:33:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60060.105312 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuQMb-0001uE-4D; Wed, 30 Dec 2020 01:33:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60060.105312; Wed, 30 Dec 2020 01:33:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuQMa-0001u6-TD; Wed, 30 Dec 2020 01:33:20 +0000
Received: by outflank-mailman (input) for mailman id 60060;
 Wed, 30 Dec 2020 01:33:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e+AK=GC=gmail.com=christopher.w.clark@srs-us1.protection.inumbo.net>)
 id 1kuQMZ-0001ti-0Y
 for xen-devel@lists.xenproject.org; Wed, 30 Dec 2020 01:33:19 +0000
Received: from mail-oi1-x234.google.com (unknown [2607:f8b0:4864:20::234])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6578238a-08e7-423b-8cf2-8e3b8c136071;
 Wed, 30 Dec 2020 01:33:17 +0000 (UTC)
Received: by mail-oi1-x234.google.com with SMTP id 15so17167756oix.8
 for <xen-devel@lists.xenproject.org>; Tue, 29 Dec 2020 17:33:17 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6578238a-08e7-423b-8cf2-8e3b8c136071
MIME-Version: 1.0
References: <CACMJ4GbQ4ZDB0RVbK+EU0+9yyGi0hTtVNxmBnNWDgY56QeDfyg@mail.gmail.com>
 <CABQWM_AXce2OJep8u0c1hL0s2ukb9LGwLtiX5e7315qQJeMgaQ@mail.gmail.com>
In-Reply-To: <CABQWM_AXce2OJep8u0c1hL0s2ukb9LGwLtiX5e7315qQJeMgaQ@mail.gmail.com>
From: Christopher Clark <christopher.w.clark@gmail.com>
Date: Tue, 29 Dec 2020 17:33:05 -0800
Message-ID: <CACMJ4GbKyvt9-ii4jmhb7TgrXxZicKkAx0BOxM+N0zEqT+4r+w@mail.gmail.com>
Subject: Re: [openxt-dev] VirtIO-Argo initial development proposal
To: Jean-Philippe Ouellet <jpo@vt.edu>
Cc: openxt <openxt@googlegroups.com>, Rich Persaud <persaur@gmail.com>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	xen-devel <xen-devel@lists.xenproject.org>, Oleksandr Tyshchenko <olekstysh@gmail.com>, 
	Julien Grall <jgrall@amazon.com>, James McKenzie <james@bromium.com>, 
	Andrew Cooper <Andrew.Cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.co.uk>
Content-Type: text/plain; charset="UTF-8"

On Thu, Dec 17, 2020 at 4:13 AM Jean-Philippe Ouellet <jpo@vt.edu> wrote:
>
> On Wed, Dec 16, 2020 at 2:37 PM Christopher Clark
> <christopher.w.clark@gmail.com> wrote:
> > Hi all,
> >
> > I have written a page for the OpenXT wiki describing a proposal for
> > initial development towards the VirtIO-Argo transport driver, and the
> > related system components to support it, destined for OpenXT and
> > upstream projects:
> >
> > https://openxt.atlassian.net/wiki/spaces/~cclark/pages/1696169985/VirtIO-Argo+Development+Phase+1
> >
> > Please review ahead of tomorrow's OpenXT Community Call.
> >
> > I would draw your attention to the Comparison of Argo interface options section:
> >
> > https://openxt.atlassian.net/wiki/spaces/~cclark/pages/1696169985/VirtIO-Argo+Development+Phase+1#Comparison-of-Argo-interface-options
> >
> > where further input to the table would be valuable;
> > and would also appreciate input on the IOREQ project section:
> >
> > https://openxt.atlassian.net/wiki/spaces/~cclark/pages/1696169985/VirtIO-Argo+Development+Phase+1#Project:-IOREQ-for-VirtIO-Argo
> >
> > in particular, whether an IOREQ implementation to support the
> > provision of devices to the frontends can replace the need for any
> > userspace software to interact with an Argo kernel interface for the
> > VirtIO-Argo implementation.
> >
> > thanks,
> > Christopher
>
> Hi,
>
> Really excited to see this happening, and disappointed that I'm not
> able to contribute at this time. I don't think I'll be able to join
> the call, but wanted to share some initial thoughts from my
> middle-of-the-night review anyway.

Thanks for the review and positive feedback - appreciated.

> Super rough notes in raw unedited notes-to-self form:
>
> main point of feedback is: I love the desire to get a non-shared-mem
> transport backend for virtio standardized. It moves us closer to an
> HMX-only world. BUT: virtio is relevant to many hypervisors beyond
> Xen, not all of which have the same views on how policy enforcement
> should be done, namely some have a preference for capability-oriented
> models over type-enforcement / MAC models. It would be nice if any
> labeling encoded into the actual specs / guest-boundary protocols
> would be strictly a mechanism, and be policy-agnostic, in particular
> not making implicit assumptions about XSM / SELinux / similar. I don't
> have specific suggestions at this point, but would love to discuss.

That is an interesting point; thanks. It is more about the features
and specification of Argo itself and its interfaces than the use of it
to implement a VirtIO transport, but it is good to consider. We have
an OpenXT wiki page for Argo development, with a related item
described there about having the hypervisor and remote guest kernel
provide message context about the communication source to the
receiver, to support policy decisions:

https://openxt.atlassian.net/wiki/spaces/DC/pages/737345538/Argo+Hypervisor-Mediated+data+eXchange+Development

> thoughts on how to handle device enumeration? hotplug notifications?
> - can't rely on xenstore
> - need some internal argo messaging for this?
> - name service w/ well-known names? starts to look like xenstore
> pretty quickly...

I don't think we have a firm decision on this. We have been
considering using ACPI tables and/or Device Tree for device
enumeration, which is viable for devices that are statically assigned,
and hotplug is an additional case to design for. We'll be looking at
the existing VirtIO transports too.

Handling notifications on a well-known Argo port is a reasonable
direction to go and fits with applying XSM policy to govern Argo port
connectivity between domains.

https://openxt.atlassian.net/wiki/spaces/DC/pages/1333428225/Analysis+of+Argo+as+a+transport+medium+for+VirtIO#Argo:-Device-discovery-and-driver-registration-with-Virtio-Argo-transport

> - granular disaggregation of backend device-model providers desirable

agreed

> how does resource accounting work? each side pays for their own delivery ring?
> - init in already-guest-mapped mem & simply register?

Yes: rings are registered with a domain's own memory for receiving messages.

> - how does it compare to grant tables?

The grant tables are the Xen mechanism for a VM to instruct the
hypervisor to grant another VM permission to establish shared memory
mappings, or to copy data between domains. Argo is an alternative
mechanism for communicating between VMs that does not share memory
between them and provides different properties that are supportive of
isolation and access control.

There's a presentation with an overview of Argo from the 2019 Xen
Design and Developer Summit:
https://static.sched.com/hosted_files/xensummit19/92/Argo%20and%20HMX%20-%20OpenXT%20-%20Christopher%20Clark%20-%20Xen%20Summit%202019.pdf
https://www.youtube.com/watch?v=cnC0Tg3jqJQ&list=PLYyw7IQjL-zHmP6CuqwuqfXNK5QLmU7Ur&index=15

>   - do you need to go through linux driver to alloc (e.g. xengntalloc)
> or has way to share arbitrary otherwise not-special userspace pages
> (e.g. u2mfn, with all its issues (pinning, reloc, etc.))?

In the current Argo device driver implementations, userspace does not
have direct access to Argo message rings. Instead, the kernel provides
device nodes through which data can be sent and received with familiar
I/O primitives.

For the VirtIO-Argo transport, userspace would not need to be aware of
the use of Argo - the VirtIO virtual devices will present themselves
to userspace with the same VirtIO device interfaces as when they use
any other transport.

> ioreq is tangled with grant refs, evt chans, generic vmexit
> dispatcher, instruction decoder, etc. none of which seems desirable if
> trying to move towards world with strictly safer guest interfaces
> exposed (e.g. HMX-only)

ack

> - there's no io to trap/decode here, it's explicitly exclusively via
> hypercall to HMX, no?

Yes; as Roger noted in his reply in this thread, the interest in IOREQ
has been motivated by other recent VirtIO activity in the Xen
Community, and whether some potential might exist for alignment with
that work.

> - also, do we want argo sendv hypercall to be always blocking & synchronous?
>   - or perhaps async notify & background copy to other vm addr space?
>   - possibly better scaling?
>   - accounting of in-flight io requests to handle gets complicated
> (see recent XSA)
>   - PCI-like completion request semantics? (argo as cross-domain
> software dma engine w/ some basic protocol enforcement?)

I think implementation of an asynchronous delivery primitive for Argo
is worth exploring, given its potential for different performance
characteristics that could enable additional use cases.
It is likely beyond the scope of the initial VirtIO-Argo driver
development, but enabling VirtIO guest drivers to use Argo will allow
testing to determine which uses of it could benefit from further
investment.

> "port" v4v driver => argo:
> - yes please! something without all the confidence-inspiring
> DEBUG_{APPLE,ORANGE,BANANA} indicators of production-worthy code would
> be great ;)
> - seems like you may want to redo argo hypercall interface too?

The Xen community has plans to remove all the uses of virtual
addresses from the hypervisor interface, and the Argo interface will
need to be updated as part of that work. In addition, work to
incorporate further features from v4v, and some updates to Argo per
items on the OpenXT Argo development wiki page, will also involve some
updates to the interface.

> (at least the syscall interface...)

Yes: a new Argo Linux driver will likely have quite a different
userspace interface from the current one; it's been discussed in the
OpenXT community and the notes from the discussion are here:

https://openxt.atlassian.net/wiki/spaces/DC/pages/775389197/New+Linux+Driver+for+Argo

There is motivation to support both a networking and non-networking
interface, so that network-enabled guest OSes can use familiar
primitives and software, and non-network-enabled guests are still able
to use Argo communication.

>   - targeting synchronous blocking sendv()?
>   - or some async queue/completion thing too? (like PF_RING, but with
> *iov entries?)
>   - both could count as HMX, both could enforce no double-write racing
> games at dest ring, etc.

The immediate focus is on building a modern, hopefully simple, driver
that unblocks the immediate use cases we have, allowing us to retire
the existing driver, and is suitable for submission to, and
maintenance in, the upstream kernel.

> re v4vchar & doing similar for argo:
> - we may prefer "can write N bytes? -> yes/no" or "how many bytes can
> write? -> N" over "try to write N bytes -> only wrote M, EAGAIN"
> - the latter can be implemented over the former, but not the other way around
> - starts to matter when you want to be able to implement in userspace
> & provide backpressure to peer userspace without additional buffering
> & potential lying about durability of writes
> - breaks cross-domain EPIPE boundary correctness
> - Qubes ran into same issues when porting vchan from Xen to KVM
> initially via vsock

Thanks - that's helpful and will look at that when the driver work proceeds.

> some virtio drivers explicitly use shared mem for more than just
> communication rings:
> - e.g. virtio-fs, which can map pages as DAX-like fs backing to share page cache
> - e.g. virtio-gpu, virtio-wayland, virtio-video, which deal in framebuffers
> - needs thought about how best to map semantics to (or at least
> interoperate cleanly & safely with) HMX-{only,mostly} world
>   - the performance of shared mem actually can meaningfully matter for
> e.g. large framebuffers in particular due to fundamental memory
> bandwidth constraints

This is an important point. Given the clear utility of these drivers,
it will be worth exploring what can be done to meet their performance
requirements and satisfy the semantics they need to function. Shared
memory regions may prove necessary for some classes of driver; some
investigation is required.
Along the lines of the research that Rich included in his reply, it
would be interesting to see whether modern hardware provides
primitives that can support efficient cross-domain data transport that
could be used for this. Thanks for raising it.

Christopher


From xen-devel-bounces@lists.xenproject.org Wed Dec 30 01:36:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Dec 2020 01:36:02 +0000
MIME-Version: 1.0
References: <DBCC8190-7228-483E-AE8A-09880B28F516@gmail.com> <20201229091730.owgpdeekb7pcex7t@Air-de-Roger>
In-Reply-To: <20201229091730.owgpdeekb7pcex7t@Air-de-Roger>
From: Christopher Clark <christopher.w.clark@gmail.com>
Date: Tue, 29 Dec 2020 17:35:46 -0800
Message-ID: <CACMJ4GbT8w_ndH4ULhD9Eq3y+s1vB5n2u=Wk4pRPQLBO-TwS+A@mail.gmail.com>
Subject: Re: [openxt-dev] VirtIO-Argo initial development proposal
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: Rich Persaud <persaur@gmail.com>, Jean-Philippe Ouellet <jpo@vt.edu>, openxt <openxt@googlegroups.com>, 
	xen-devel <xen-devel@lists.xenproject.org>, Oleksandr Tyshchenko <olekstysh@gmail.com>, 
	Julien Grall <jgrall@amazon.com>, James McKenzie <james@bromium.com>, 
	Andrew Cooper <Andrew.Cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.co.uk>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, Dec 29, 2020 at 1:17 AM Roger Pau Monné <roger.pau@citrix.com> wrote:
>
> On Wed, Dec 23, 2020 at 04:32:01PM -0500, Rich Persaud wrote:
> > On Dec 17, 2020, at 07:13, Jean-Philippe Ouellet <jpo@vt.edu> wrote:
> > > On Wed, Dec 16, 2020 at 2:37 PM Christopher Clark
> > > <christopher.w.clark@gmail.com> wrote:
> > >> Hi all,
> > >>
> > >> I have written a page for the OpenXT wiki describing a proposal for
> > >> initial development towards the VirtIO-Argo transport driver, and the
> > >> related system components to support it, destined for OpenXT and
> > >> upstream projects:
> > >>
> > >> https://openxt.atlassian.net/wiki/spaces/~cclark/pages/1696169985/VirtIO-Argo+Development+Phase+1
>
> Thanks for the detailed document, I've taken a look and there's indeed
> a lot of work to do listed there :). I have some suggestions and
> questions.
>
> Overall I think it would be easier for VirtIO to take a new transport
> if it's not tied to a specific hypervisor. The way Argo is implemented
> right now is using hypercalls, which is a mechanism specific to Xen.
> IMO it might be easier to start by having an Argo interface using
> MSRs, that all hypervisors can implement, and then base the VirtIO
> implementation on top of that interface. It could be presented as a
> hypervisor agnostic mediated interface for inter-domain communication
> or some such.

Thanks - that is an interesting option for a new interface and it
would definitely be advantageous to be able to extend the benefits of
this approach beyond the Xen hypervisor. I have added it to our
planning document to investigate.

> That kind of links to a question, has any of this been discussed with
> the VirtIO folks, either at OASIS or the Linux kernel?

We identified a need within the Automotive Grade Linux community for
the ability to enforce access control. They want to use VirtIO for the
usual reasons of standardization and access to the existing pool of
available drivers, but there is currently no good answer for having
both, so we put Argo forward in a presentation to the AGL
Virtualization Experts group in August, and they are discussing it.

The slides are available here:
https://lists.automotivelinux.org/g/agl-dev-community/attachment/8595/0/Argo%20and%20VirtIO.pdf

If you think there's anyone we should invite to the upcoming call on
the 14th of January, please let me know off-list.

> The document mentions: "Destination: mainline Linux kernel, via the
> Xen community" regarding the upstreamability of the VirtIO-Argo
> transport driver, but I think this would have to go through the VirtIO
> maintainers and not the Xen ones, hence you might want their feedback
> quite early to make sure they are OK with the approach taken, and in
> turn this might also require OASIS to agree to have a new transport
> documented.

We're aiming to get requirements within the Xen community first, since
there are multiple approaches to VirtIO with Xen ongoing at the
moment, but you are right that a design review by the VirtIO community
in the near term is important. I think it would be helpful to that
process if the Xen community has tried to reach a consensus on the
design beforehand.

> > > thoughts on how to handle device enumeration? hotplug notifications?
> > > - can't rely on xenstore
> > > - need some internal argo messaging for this?
> > > - name service w/ well-known names? starts to look like xenstore
> > > pretty quickly...
> > > - granular disaggregation of backend device-model providers desirable
>
> I'm also curious about this part and I was assuming this would be
> done using some kind of Argo messages, but there's no mention in the
> document. Would be nice to elaborate a little more about this in the
> document.

Ack, noted: some further design work is needed on this.

> > > how does resource accounting work? each side pays for their own delivery ring?
> > > - init in already-guest-mapped mem & simply register?
> > > - how does it compare to grant tables?
> > >  - do you need to go through linux driver to alloc (e.g. xengntalloc)
> > > or has way to share arbitrary otherwise not-special userspace pages
> > > (e.g. u2mfn, with all its issues (pinning, reloc, etc.))?
> > >
> > > ioreq is tangled with grant refs, evt chans, generic vmexit
> > > dispatcher, instruction decoder, etc. none of which seems desirable if
> > > trying to move towards world with strictly safer guest interfaces
> > > exposed (e.g. HMX-only)
>
> I think this needs Christopher's clarification, but it's my
> understanding that the Argo transport wouldn't need IOREQs at all,
> since all data exchange would be done using the Argo interfaces, there
> would be no MMIO emulation or anything similar. The mention about
> IOREQs is because the Arm folks are working on using IOREQs in Arm to
> enable virtio-mmio on Xen.

Yes, that is correct.

> From my reading of the document, it seems Argo VirtIO would still rely
> on event channels, it would IMO be better if instead interrupts are
> delivered using a native mechanism, something like MSI delivery by
> using a destination APIC ID, vector, delivery mode and trigger mode.

Yes, Argo could deliver interrupts via another mechanism rather than
event channels; have added this to our planning doc for investigation.
https://openxt.atlassian.net/wiki/spaces/DC/pages/1696169985/VirtIO-Argo+Development+Phase+1

thanks,

Christopher


From xen-devel-bounces@lists.xenproject.org Wed Dec 30 06:23:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Dec 2020 06:23:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60075.105342 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuUsk-0001pL-Bw; Wed, 30 Dec 2020 06:22:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60075.105342; Wed, 30 Dec 2020 06:22:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuUsk-0001pE-7r; Wed, 30 Dec 2020 06:22:50 +0000
Received: by outflank-mailman (input) for mailman id 60075;
 Wed, 30 Dec 2020 06:22:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=441D=GC=epam.com=prvs=1633713edb=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kuUsi-0001p9-Jz
 for xen-devel@lists.xenproject.org; Wed, 30 Dec 2020 06:22:48 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5a011cbf-99a8-4f79-9e69-afb7224a6be4;
 Wed, 30 Dec 2020 06:22:47 +0000 (UTC)
Received: from pps.filterd (m0174682.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 0BU6C5fT032760; Wed, 30 Dec 2020 06:22:45 GMT
Received: from eur03-am5-obe.outbound.protection.outlook.com
 (mail-am5eur03lp2059.outbound.protection.outlook.com [104.47.8.59])
 by mx0b-0039f301.pphosted.com with ESMTP id 35nw3sy7b3-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 30 Dec 2020 06:22:45 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB3714.eurprd03.prod.outlook.com (2603:10a6:208:44::25)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3700.28; Wed, 30 Dec
 2020 06:22:39 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::410a:a547:9838:afb8]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::410a:a547:9838:afb8%6]) with mapi id 15.20.3721.019; Wed, 30 Dec 2020
 06:22:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a011cbf-99a8-4f79-9e69-afb7224a6be4
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: tosher 1 <akm2tosher@yahoo.com>
CC: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>,
        Xen-devel
	<xen-devel@lists.xenproject.org>,
        Paul Durrant <xadimgnik@gmail.com>
Subject: Re: PVH mode PCI passthrough status
Thread-Topic: PVH mode PCI passthrough status
Thread-Index: AQHW3nQr/op/89E1c0G2q24tiQpaZw==
Date: Wed, 30 Dec 2020 06:22:38 +0000
Message-ID: <44a39b4d-8255-420f-005f-73bef4b765e3@epam.com>
References: <943136031.5051796.1609179068383.ref@mail.yahoo.com>
 <943136031.5051796.1609179068383@mail.yahoo.com>
 <20201229081943.ifaiwrqyj5ojwufn@Air-de-Roger>
 <88957249.5342211.1609261310806@mail.yahoo.com>
 <20201229183229.okhtyskjylm2bhx4@Air-de-Roger>
In-Reply-To: <20201229183229.okhtyskjylm2bhx4@Air-de-Roger>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: yahoo.com; dkim=none (message not signed)
 header.d=none;yahoo.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <0C5EE2909E50F84CB85B6B22A6693472@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 138e99a0-c7c8-4a63-2eaf-08d8ac8b4e92
X-MS-Exchange-CrossTenant-originalarrivaltime: 30 Dec 2020 06:22:39.0345
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB3714

Hi, all

On 12/29/20 8:32 PM, Roger Pau Monné wrote:
> On Tue, Dec 29, 2020 at 05:01:50PM +0000, tosher 1 wrote:
>> Hi Roger,
>>
>>> I think you meant PVH mode in the sentence above instead of PVM?
>> Sorry, that was a typo. I meant PVH.
>>
>>> Arm folks are working on using vPCI for domUs, which could easily be picked up by x86 once ready. There's also the option to import xenpt [0] from Paul Durrant and use it with PVH, but it will likely require some work.
>> Thanks for your response. Do you have any timeline in mind on when support for x86 will be available? A rough estimate would help me with planning something.
> I'm adding the relevant people working on this, Oleksandr for the vPCI
> Arm work and Paul for xenpt.

This is indeed WIP and the thing is being developed by many parties: ARM, EPAM, Xilinx...

And unfortunately I cannot give you even a rough estimate when this is going to happen, as there are so many topics to cover for ARM yet. Of course, we try to make the code fit x86 as well, so we share the code between the architectures...

Thank you,

Oleksandr

>
> The xenpt stuff shouldn't be complicated (Paul can correct me), as it
> would mostly involve importing xenpt into the xen.git repository and
> then wiring it up for the toolstack to use it, at least to get an
> initial version working.
>
> Roger.


From xen-devel-bounces@lists.xenproject.org Wed Dec 30 06:48:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Dec 2020 06:48:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60084.105356 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuVHc-0003fu-LH; Wed, 30 Dec 2020 06:48:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60084.105356; Wed, 30 Dec 2020 06:48:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuVHc-0003fn-IR; Wed, 30 Dec 2020 06:48:32 +0000
Received: by outflank-mailman (input) for mailman id 60084;
 Wed, 30 Dec 2020 06:48:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuVHb-0003ff-4Z; Wed, 30 Dec 2020 06:48:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuVHa-0000DR-Ud; Wed, 30 Dec 2020 06:48:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuVHa-0000Wi-JQ; Wed, 30 Dec 2020 06:48:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kuVHa-0007WL-Iv; Wed, 30 Dec 2020 06:48:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157964-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157964: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a05f8ecd88f15273d033b6f044b850a8af84a5b8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 30 Dec 2020 06:48:30 +0000

flight 157964 qemu-mainline real [real]
flight 157971 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157964/
http://logs.test-lab.xenproject.org/osstest/logs/157971/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  131 days
Failing since        152659  2020-08-21 14:07:39 Z  130 days  270 attempts
Testing same since   157670  2020-12-18 13:57:58 Z   11 days   23 attempts

------------------------------------------------------------
323 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 79306 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 30 06:49:21 2020
From: YANG LI <abaci-bugfix@linux.alibaba.com>
To: boris.ostrovsky@oracle.com
Cc: jgross@suse.com,
	sstabellini@kernel.org,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	YANG LI <abaci-bugfix@linux.alibaba.com>
Subject: [PATCH] xen: fix: use WARN_ON instead of if condition followed by BUG.
Date: Wed, 30 Dec 2020 14:38:06 +0800
Message-Id: <1609310286-77985-1-git-send-email-abaci-bugfix@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1

Use WARN_ON() instead of an if condition followed by BUG() in
gnttab_batch_map() and gnttab_batch_copy().

This issue was detected with the help of coccicheck.

Signed-off-by: YANG LI <abaci-bugfix@linux.alibaba.com>
Reported-by: Abaci <abaci@linux.alibaba.com>
---
 drivers/xen/grant-table.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 3729bea..db1770c 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -1080,8 +1080,8 @@ void gnttab_batch_map(struct gnttab_map_grant_ref *batch, unsigned count)
 {
 	struct gnttab_map_grant_ref *op;
 
-	if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, batch, count))
-		BUG();
+	WARN_ON(HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, batch, count));
+
 	for (op = batch; op < batch + count; op++)
 		if (op->status == GNTST_eagain)
 			gnttab_retry_eagain_gop(GNTTABOP_map_grant_ref, op,
@@ -1093,8 +1093,8 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
 {
 	struct gnttab_copy *op;
 
-	if (HYPERVISOR_grant_table_op(GNTTABOP_copy, batch, count))
-		BUG();
+	WARN_ON(HYPERVISOR_grant_table_op(GNTTABOP_copy, batch, count));
+
 	for (op = batch; op < batch + count; op++)
 		if (op->status == GNTST_eagain)
 			gnttab_retry_eagain_gop(GNTTABOP_copy, op,
-- 
1.8.3.1
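
The commit message says the issue was found with coccicheck. A Coccinelle
semantic patch matching this transformation would look roughly like the
following sketch (hypothetical: the kernel's stock rules rewrite this
pattern to BUG_ON(), so the exact rule that fired here is an assumption):

```
@@
expression e;
@@
- if (e)
-     BUG();
+ WARN_ON(e);
```

Coccinelle matches the `if (e) BUG();` control-flow pattern regardless of
whitespace and replaces it in place, which is why such patches tend to
arrive as mechanical one-liners like the diff above.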



From xen-devel-bounces@lists.xenproject.org Wed Dec 30 08:34:34 2020
Date: Wed, 30 Dec 2020 09:34:08 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
To: YANG LI <abaci-bugfix@linux.alibaba.com>
CC: <boris.ostrovsky@oracle.com>, <jgross@suse.com>, <sstabellini@kernel.org>,
	<xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] xen: fix: use WARN_ON instead of if condition followed
 by BUG.
Message-ID: <20201230083408.b3p6hrk3fuyc332z@Air-de-Roger>
References: <1609310286-77985-1-git-send-email-abaci-bugfix@linux.alibaba.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <1609310286-77985-1-git-send-email-abaci-bugfix@linux.alibaba.com>

On Wed, Dec 30, 2020 at 02:38:06PM +0800, YANG LI wrote:
> Use WARN_ON instead of if condition followed by BUG in
> gnttab_batch_map() and gnttab_batch_copy().

But those are not equivalent as far as I'm aware. BUG will stop
execution, while WARN_ON will print a splat and continue executing.

If switching to WARN_ON is indeed fine, the commit message needs to
explain that returning to the caller(s) with HYPERVISOR_grant_table_op
having returned an error code is safe, and that it will not create
other issues, like memory corruption or leaks.

Thanks, Roger.
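
The distinction Roger raises can be made concrete with a small userspace
model. These are hypothetical stand-ins, not the real <linux/bug.h>
macros: in the kernel, BUG() halts the task outright, whereas WARN_ON()
logs a splat and lets execution fall through to the batch loop below it.

```c
#include <stdio.h>

/* Userspace model of WARN_ON(): report the condition and keep going.
 * Like the real macro, it evaluates to the boolean condition so a
 * caller could still test the result. */
static int model_warn_on(int cond, const char *expr)
{
	if (cond)
		fprintf(stderr, "WARNING: %s\n", expr);
	return !!cond;		/* execution continues in the caller */
}

/* The original "if (rc) BUG();" shape: on failure the function never
 * proceeds past the check (here we flag the halt instead of aborting). */
static int batch_op_bug_style(int hypercall_rc, int *halted)
{
	*halted = 0;
	if (hypercall_rc) {
		*halted = 1;	/* BUG() would stop execution here */
		return -1;
	}
	return 0;		/* only reached on success */
}

/* The patched shape: the failure is logged, but control still reaches
 * the per-op retry loop that follows in gnttab_batch_map()/copy(). */
static int batch_op_warn_style(int hypercall_rc)
{
	model_warn_on(hypercall_rc, "hypercall_rc");
	return 0;		/* reached even when hypercall_rc != 0 */
}
```

This is exactly why the two are not drop-in equivalents: after WARN_ON
the subsequent loop walks a batch whose entries may never have been
processed by the hypervisor.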


From xen-devel-bounces@lists.xenproject.org Wed Dec 30 09:31:50 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157967-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157967: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 30 Dec 2020 09:31:36 +0000

flight 157967 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157967/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                139711f033f636cc78b6aaf7363252241b9698ef
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  151 days
Failing since        152366  2020-08-01 20:49:34 Z  150 days  265 attempts
Testing same since   157967  2020-12-30 00:39:27 Z    0 days    1 attempts

------------------------------------------------------------
4330 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 976766 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 30 11:30:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Dec 2020 11:30:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60133.105438 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuZg9-0003cu-SJ; Wed, 30 Dec 2020 11:30:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60133.105438; Wed, 30 Dec 2020 11:30:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuZg9-0003cn-PK; Wed, 30 Dec 2020 11:30:09 +0000
Received: by outflank-mailman (input) for mailman id 60133;
 Wed, 30 Dec 2020 11:30:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kuZg7-0003Yw-MN
 for xen-devel@lists.xenproject.org; Wed, 30 Dec 2020 11:30:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kuZg6-0005UV-6X; Wed, 30 Dec 2020 11:30:06 +0000
Received: from gw1.octic.net ([81.187.162.82] helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kuZg5-000524-V1; Wed, 30 Dec 2020 11:30:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=paGfClgLfeY6QdBNt5V1TXaOr1iZj8mkYe/1tA0SSpA=; b=X/Qu7mGQxh0miuqFHOuRqFiMKx
	+CaecYmXbh7rdfdhe/VvTWgloyLdgh/QmuCpOVpq/Y2TD0GnZkjLXEnE3gHdPVOj4EyH79Kles2UK
	RqoeZ4LPEN1Q9TvTzKU2lGmmYSDYtjCkTiUV0cGHUQQjH9hGCItOX6KOlv1YohCH+AGY=;
Subject: Re: [openxt-dev] VirtIO-Argo initial development proposal
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Rich Persaud <persaur@gmail.com>
Cc: Jean-Philippe Ouellet <jpo@vt.edu>,
 Christopher Clark <christopher.w.clark@gmail.com>,
 openxt <openxt@googlegroups.com>, xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <olekstysh@gmail.com>, Julien Grall
 <jgrall@amazon.com>, James McKenzie <james@bromium.com>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.co.uk>
References: <DBCC8190-7228-483E-AE8A-09880B28F516@gmail.com>
 <20201229091730.owgpdeekb7pcex7t@Air-de-Roger>
From: Julien Grall <julien@xen.org>
Message-ID: <eac811f4-51fd-9198-446a-230dc6915f62@xen.org>
Date: Wed, 30 Dec 2020 11:30:03 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <20201229091730.owgpdeekb7pcex7t@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Roger,

On 29/12/2020 09:17, Roger Pau Monné wrote:
> On Wed, Dec 23, 2020 at 04:32:01PM -0500, Rich Persaud wrote:
>> On Dec 17, 2020, at 07:13, Jean-Philippe Ouellet <jpo@vt.edu> wrote:
>>> On Wed, Dec 16, 2020 at 2:37 PM Christopher Clark
>>> <christopher.w.clark@gmail.com> wrote:
>>>> Hi all,
>>>>
>>>> I have written a page for the OpenXT wiki describing a proposal for
>>>> initial development towards the VirtIO-Argo transport driver, and the
>>>> related system components to support it, destined for OpenXT and
>>>> upstream projects:
>>>>
>>>> https://openxt.atlassian.net/wiki/spaces/~cclark/pages/1696169985/VirtIO-Argo+Development+Phase+1
> 
> Thanks for the detailed document, I've taken a look and there's indeed
> a lot of work to do listed there :). I have some suggestions and
> questions.
> 
> Overall I think it would be easier for VirtIO to take a new transport
> if it's not tied to a specific hypervisor. The way Argo is implemented
> right now is using hypercalls, which is a mechanism specific to Xen.
The concept of a hypervisor call is not Xen-specific; any other hypervisor 
can easily implement one. At least this is the case on Arm, because we 
have an instruction, 'hvc', that acts the same way as a syscall but traps 
to the hypervisor.

What we would need to do is reserve a range of function IDs for Argo. It 
should be possible to do that on Arm thanks to the SMCCC (see [1]).

I am not sure whether you have something similar on x86.

> IMO it might be easier to start by having an Argo interface using
> MSRs, that all hypervisors can implement, and then base the VirtIO
> implementation on top of that interface.
My concern is that the interface would then need to be arch-specific. Would 
you mind explaining what the problem is with implementing something based on 
vmcall?

Cheers,

[1] 
https://developer.arm.com/architectures/system-architectures/software-standards/smccc

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Dec 30 12:18:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Dec 2020 12:18:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60161.105468 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuaQi-0007TQ-Dv; Wed, 30 Dec 2020 12:18:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60161.105468; Wed, 30 Dec 2020 12:18:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuaQi-0007TJ-Aq; Wed, 30 Dec 2020 12:18:16 +0000
Received: by outflank-mailman (input) for mailman id 60161;
 Wed, 30 Dec 2020 12:18:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuaQh-0007TB-A3; Wed, 30 Dec 2020 12:18:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuaQh-0006Hf-08; Wed, 30 Dec 2020 12:18:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuaQg-0000zj-Nn; Wed, 30 Dec 2020 12:18:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kuaQg-00030J-NL; Wed, 30 Dec 2020 12:18:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=T0tnTtdL2SebUz2OOTpVVnh2d6kLZ0DQd0mNVnHiEew=; b=nPSX5UietSZqZd1pzBihjVmHlc
	1B5aTJ6shcnDUk+vZ9uy6L4HrOSjEDvErZGGjUi8ID2QLsRZgJ6Uxvx7Q1uAoP9u0qspiYkWxqTu7
	jAkloDBsGujc1BMuGoBvnLe00QKeti7prPKYmyxpmQNYrsPO+SEBLXBd3RZABQr3tF5g=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157970-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157970: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=bed50bcbbb2796aa88f1c85a2424fe9bd7944f89
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 30 Dec 2020 12:18:14 +0000

flight 157970 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157970/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              bed50bcbbb2796aa88f1c85a2424fe9bd7944f89
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  173 days
Failing since        151818  2020-07-11 04:18:52 Z  172 days  167 attempts
Testing same since   157715  2020-12-19 04:19:22 Z   11 days   12 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu<tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 33734 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 30 13:29:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Dec 2020 13:29:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60195.105524 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kubXs-0005Wr-Qn; Wed, 30 Dec 2020 13:29:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60195.105524; Wed, 30 Dec 2020 13:29:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kubXs-0005Wk-NN; Wed, 30 Dec 2020 13:29:44 +0000
Received: by outflank-mailman (input) for mailman id 60195;
 Wed, 30 Dec 2020 13:29:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kubXr-0005Wc-7r; Wed, 30 Dec 2020 13:29:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kubXr-0007QE-0y; Wed, 30 Dec 2020 13:29:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kubXq-00047K-MN; Wed, 30 Dec 2020 13:29:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kubXq-0005XN-Ji; Wed, 30 Dec 2020 13:29:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=AMRSptsEHzTqD9VTUan42gmcjm2aSwow+bT5t99BcRA=; b=pHhDD7vcBji6xWT5P1pR3zqo1p
	pFlz4gfvdYWQRY0FhHPrfeDpsgxVFAUlOGBBxdY8FjR3Iyn7WTnSpnsoIutYJT8CXaVA6UZrc7EPR
	tc0W1l31sqz7qdlhBfOk4BmXmKylK+OJcjgLNQoT69hzTOZ7UyWvZDSnbHs/VQithltU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157968-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157968: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
X-Osstest-Versions-That:
    xen=98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 30 Dec 2020 13:29:42 +0000

flight 157968 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157968/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start                fail pass in 157950

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds    15 migrate-support-check fail in 157950 never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail in 157950 never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157950
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157950
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157950
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157950
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157950
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157950
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157950
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157950
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157950
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157950
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157950
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
baseline version:
 xen                  98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc

Last test of basis   157968  2020-12-30 01:51:28 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Wed Dec 30 16:02:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Dec 2020 16:02:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60206.105542 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kudvn-0002bW-4W; Wed, 30 Dec 2020 16:02:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60206.105542; Wed, 30 Dec 2020 16:02:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kudvn-0002bP-16; Wed, 30 Dec 2020 16:02:35 +0000
Received: by outflank-mailman (input) for mailman id 60206;
 Wed, 30 Dec 2020 16:02:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F8On=GC=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kudvm-0002bK-EM
 for xen-devel@lists.xenproject.org; Wed, 30 Dec 2020 16:02:34 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 801d120e-f136-4e6a-b161-8962f49d89ce;
 Wed, 30 Dec 2020 16:02:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 801d120e-f136-4e6a-b161-8962f49d89ce
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609344153;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=QS8M7hzLf9VUE1e7vMISP3JN5tbqz8IX1YFxd9Ez8tc=;
  b=iKh0m5SoTJjOaCbp0OIjnNJgFHZDN1pEue6CtkDmaSjCoozW/PhcgE+z
   +IFCFxQf6Fg+6saDtqMa0X7uBVDvhxT0roE4GklGvQADBz+XlSsG73RMV
   vzaO3rk4GhBJj2iog9FBdCjQyU5c14BWhEQyh49l2irVFKLnGm4K5fveW
   Y=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: jZmTVB76mOvNaaW4XBec8r96iUpZ1zC7j581YAQLCkQdv8pYrlu8lN/rIX2dldROJ0d1CDwV35
 XHe8ErerO2zkkQ4yna1QJKY0MTZg+GX0XaZTPp4jS59oXEeHIle6m9teIhD0+U9n8KWTlYMYLy
 mATh1H/lV9IawV7cTrq1+VMSq4HCL0D6+8LmqZYyfN3vndWKM5UXlGRsi8iXKbrk75M32p7lXe
 fzWtEb0aaHByzmEEvBsJ5QkCTU1X1IiKnJKMrELakRAKFeD31zyS50+AwYntr+LA8qIlP6YxGO
 wf0=
X-SBRS: 5.2
X-MesageID: 34166451
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,461,1599537600"; 
   d="scan'208";a="34166451"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/hpet: Fix return value of hpet_setup()
Date: Wed, 30 Dec 2020 16:02:08 +0000
Message-ID: <20201230160208.18877-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

hpet_setup() is intended to be idempotent: once the rate has been calculated,
subsequent calls return the cached value.  However, this only works correctly
if both return paths yield the same value, and the first call applied a
rounding adjustment which was never stored in the cache.

Use a sensibly named local variable, rather than a dead one with a bad name.

Fixes: a60bb68219 ("x86/time: reduce rounding errors in calculations")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/hpet.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hpet.c b/xen/arch/x86/hpet.c
index a55e68e6f7..e6fab8acd8 100644
--- a/xen/arch/x86/hpet.c
+++ b/xen/arch/x86/hpet.c
@@ -759,7 +759,7 @@ u64 __init hpet_setup(void)
 {
     static u64 __initdata hpet_rate;
     u32 hpet_id, hpet_period;
-    unsigned int last;
+    unsigned int last, rem;
 
     if ( hpet_rate )
         return hpet_rate;
@@ -789,9 +789,11 @@ u64 __init hpet_setup(void)
     hpet_resume(hpet_boot_cfg);
 
     hpet_rate = 1000000000000000ULL; /* 10^15 */
-    last = do_div(hpet_rate, hpet_period);
+    rem = do_div(hpet_rate, hpet_period);
+    if ( (rem * 2) > hpet_period )
+        hpet_rate++;
 
-    return hpet_rate + (last * 2 > hpet_period);
+    return hpet_rate;
 }
 
 void hpet_resume(u32 *boot_cfg)
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Dec 30 17:14:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Dec 2020 17:14:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60213.105558 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuf2k-0008VU-Fy; Wed, 30 Dec 2020 17:13:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60213.105558; Wed, 30 Dec 2020 17:13:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuf2k-0008VN-Bj; Wed, 30 Dec 2020 17:13:50 +0000
Received: by outflank-mailman (input) for mailman id 60213;
 Wed, 30 Dec 2020 17:13:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuf2i-0008VF-S1; Wed, 30 Dec 2020 17:13:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuf2i-0003R6-NQ; Wed, 30 Dec 2020 17:13:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuf2i-0002NY-BN; Wed, 30 Dec 2020 17:13:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kuf2i-0005Lc-Ar; Wed, 30 Dec 2020 17:13:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BlKrwqmd2WqTTzhw58DsIS4YQC30zL3jhp2FXx9IxKY=; b=a9aGr+kMBkjNGZGg6zTtJwAVmj
	iGEa9QvK5WLd7dPL6lonE1lH+0FPDzXSrf2lknrlr/X1s4HBPsC7osS6y6io7fpQB71/ffbZ/TDfM
	+2F/Jym9LfEt9PK8SEHCgPfyi3j0/OJb45o+OBkrv5PRzU55sjbzogiedpOJu35lXEbQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157973-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157973: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a05f8ecd88f15273d033b6f044b850a8af84a5b8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 30 Dec 2020 17:13:48 +0000

flight 157973 qemu-mainline real [real]
flight 157978 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157973/
http://logs.test-lab.xenproject.org/osstest/logs/157978/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  132 days
Failing since        152659  2020-08-21 14:07:39 Z  131 days  271 attempts
Testing same since   157670  2020-12-18 13:57:58 Z   12 days   24 attempts

------------------------------------------------------------
323 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 79306 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 30 17:35:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Dec 2020 17:35:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60221.105573 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kufNF-0001qB-7r; Wed, 30 Dec 2020 17:35:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60221.105573; Wed, 30 Dec 2020 17:35:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kufNF-0001q4-3e; Wed, 30 Dec 2020 17:35:01 +0000
Received: by outflank-mailman (input) for mailman id 60221;
 Wed, 30 Dec 2020 17:35:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=E+2v=GC=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kufND-0001py-KJ
 for xen-devel@lists.xenproject.org; Wed, 30 Dec 2020 17:34:59 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d72affe2-fb6a-4a4b-ba28-3d375a673333;
 Wed, 30 Dec 2020 17:34:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d72affe2-fb6a-4a4b-ba28-3d375a673333
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609349697;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=jRuINnmYOa/5KoolQNaundeVmJwj+VDfPCONJ7LQkOw=;
  b=Wq+1rMz5ac/9+Hc2Ulu9G0KXhvyKqxF8mj+R+GVFTuCT/Ao4MYo8Dd6n
   tPy7Joh8XqOXZTT42aJD+yxrL7ZIAT3C/dQyZtnn0ePEL3mqLlnZVhTUa
   i3/etzowGDV8IkZfhXul0/XnyJ3QtxUyDFP6HHnBk/dAkP5JbbBGvnOEu
   U=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: g4SHdFR1EJYBKynpKjircudJ3rxNuCC6XtfFi4L7+ZJ0w1U6P2F27r6jJDRSm8g572rmH2IPG0
 lfiOxwQTfqT+/ewMaBXOG+I53tUeCEEkPF0Orfrwa26MN5Li37RDXYTwG2lVy6AfR8TI+TBTbq
 PGLkhBJIx9KkXuu+4x5wMyyzCE+pxUePIwRLRimdipf3aM00N8DXYApOAnHHRQd3wW8RBF5J9N
 XXaHlql/UjZkamxKzjyyl/5+F6NYoLnBFqyjCNdEFyG6PEU2Px5UJpU6diZeeB5R2zbAwaUYiH
 r3E=
X-SBRS: 5.2
X-MesageID: 34172589
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,461,1599537600"; 
   d="scan'208";a="34172589"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MRXBLbGzadKYv2x1D9smHgeef8YB3zw8eqSUS97J/UYHpygWbsl8twyX02KuxT77Tuaw2U2jJx2ltxxMn1i7ylwef4dmk/2JT7GV+3hPJ2eQb6Svy29AhyjhMgnFd9KguCLLASJhrftSzDbUQCarsRSDnTn01JZG6QvhSrgXXEM+sFvyDgF2WbwFsClXT8EN8B5R6qne0u+CQCIYG5CMCaRnLXeyropsupNiy5P0iTjLJrhHDrspXF9xoeePQHFce0cieu2m3tI2Dl09d6ybQrHyHSTcC9f3iNwIR+EW3ikaQVO7Z3yKY+6Nxhe11rmBAdx38lzMiylUG371JtjGtg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5rD97J0qexyqUc5qGNFGzyy8YntChTzAmA5ZqLrTlyQ=;
 b=Mn1uw6z95eKzCJJu8z6XTGr0se+x7QY0Cde8tBzzcIlA6xQ6HjHjnaGXaPIUysoNV2q8NDS9x3YMvZjTC90aI3lepuA0V4DnnV1etjwheU7Clxy0kt7Ozn3MXwmt4KWsMQEo+J8T5GdioafN+UIh6AmWRY7BocW1Cs+OwTvppk8v+sAKMYdNwp7NXwZWdPuAVzNiaF39ZR1CMW380Kay0j/ZO6npOojRD+UABBHVi9neQ8wbLwLtFta65In6CrZKlDoTowGWnSBFL8aTECXQAKEYpotlFcFl4oDIJMw0i7g144QY35+ZooTv7EskC8he4z0cbsFG9p0qhx+PGzViug==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5rD97J0qexyqUc5qGNFGzyy8YntChTzAmA5ZqLrTlyQ=;
 b=Q9OVB0fG1vbKYliTHmjpvEJbGhmClWJ9tWdzhI84z1Bs1KT3MBGH/46J4wZlj6/SHbTze8Meul7FINP+/HHG8VU3Tddqp5dInRjWpY0+Bd3ufiectpBkfvW7vAqYAGA5cV3QHfV3b8Nm6efsiBRpUVcTrX1CM7rV0p0AVy9Kgvk=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Ian
 Jackson" <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: [PATCH] xen: remove the usage of the P ar option
Date: Wed, 30 Dec 2020 18:34:46 +0100
Message-ID: <20201230173446.1768-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.29.2
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0341.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18c::22) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c79897df-e76a-4c8c-b459-08d8ace93735
X-MS-TrafficTypeDiagnostic: DM6PR03MB4972:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4972C1DEF142BF35F638C1A38FD70@DM6PR03MB4972.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5797;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: v2Vp/MBuOx/x3SeKzkF9Plm7wNVG0T5ZuyXPUAqwaHJRLHlUNA9DjSh9NxjFyE53qoWQk3IYsucLsDDfC2Y7lwIglstM9PU05v9ME+AcGs8dyT0ZgxVJ2CSPHJXVb9Zh8uf//34wIW9/QdVgeB2nr/ko+UPAHkwhRchgrJHBZZrjOV1HhpOux0t6Sn7YUZtoY7Uht5ZoD2upo3+J6p+UThFbNsF5QqO29MV/jrRB61Um95OjQGYVta9YQeTuex/L44PrPA0oX1A0sjIXyHQ1wsqS3Ibq8z2DIeg3xlwCxpTfZdtW9RulabjzuISGVgYzLrO8NP9x5vF2sl5ZGie3vF9SZFiMGcZFnMj+rrE5iUHx5ZZ+YYWzljPJ3bB/SlBLiR+st/EWO/7zIHnGDLFV7p8mWw9agWnapEirU2ktEZVEQMjU62S9hnv2fNfKBLOj8oxKXmXO9DjI3qslSMlFGg==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(396003)(346002)(136003)(39860400002)(366004)(83380400001)(66476007)(66556008)(36756003)(6486002)(6916009)(66946007)(5660300002)(316002)(86362001)(4326008)(54906003)(16526019)(1076003)(6496006)(26005)(8936002)(6666004)(8676002)(966005)(956004)(2906002)(478600001)(186003)(2616005);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?empQM0M2SnFWazczRTUvUzJoeFlJcG4yMDZ6alMyY0xsYSszeStnMm9hajBr?=
 =?utf-8?B?ZlVSTGd0Ymx5Si9mdUtqR2RycHJsZHlia2l2K0VnUkVkTmRpN1dNd3A3SUxa?=
 =?utf-8?B?SHdQWUNSRy9lU1h0aXV2TVZXL2VETWVaSFlLa1BUTlZQemduYW1LaVZrYytm?=
 =?utf-8?B?cVdFTFJRWWwvWGVzUGU1anNZbGM4N1ppQzF1aGhJUDcweGJNQk45aVhOSXJx?=
 =?utf-8?B?Qlg0dEo5Mjc3V2ZoRmxSZGRuQmhscDU2MnN3ZWZYRVoyUnhZalBvZzBQRXVF?=
 =?utf-8?B?T2ZJRnhVVW50ZHlTWWU2WnVmNG5pZXo2emNHb0tsS1VZUHVnU2VBazlTS3pJ?=
 =?utf-8?B?ekpQdStyOHR1amNLOWVzQ0NsSmNEUFRsWkhYQk5jbFZuaG43azRIbTZJdkNh?=
 =?utf-8?B?SE9kS0VvdjZlL0RnVG14ZUhYOTFZR2VINkxIK3dhZDZrVGhJRWFhVnJZU25U?=
 =?utf-8?B?UHlHKzZ6ejFYYUxzUVlWYXN5ZVNDdy9qQXBsZlpzZmRxbzZWcy9nOFJmRU5y?=
 =?utf-8?B?d05CK1JoL0x4TVczRHh5SGMxVlVZZGd2SDl1M0VNcDRhamo5WXNmT0NDaklG?=
 =?utf-8?B?bHYrN1FyRURpMFNOcWIvTVZWU1UyNTZJM1RzTE9VY1BlUXVYcHVoSTdUWFNL?=
 =?utf-8?B?U0xuMDRFZGlraHlyVWVaeVRIclNFcGhnelE2eXg4QTFoL3ZCcHE4YlNaalBw?=
 =?utf-8?B?Y0kybU9yb0IxM3grellRN0VqdUZZbkExUW43WEhVdCtKK3Z2YnpYK3ZpNlV6?=
 =?utf-8?B?WmJoODJVQnVxVHNwU081dkViSFplTXI3WEF1WkRveFRkdW9jZWdnR2N2R05a?=
 =?utf-8?B?M1cvNTFJRE5oK1pqaHluVE9abng2WS81bTVmWStRWlIrM2V5cEVXV2VmWTd4?=
 =?utf-8?B?RnFpWHB2SE5sOExaby84eGRiclc3ODFrenZQLzZLdDU5dmxlbGdVTmpwNEhB?=
 =?utf-8?B?eDg4TEpKR0ZVcEJTVjFEMnQ1aEpUMTI5TVI2QUdMdG1mZnlxbWxGNUk5dmxt?=
 =?utf-8?B?dExWSCtWRHNkdmVOVW9GMVdLcGlUeDBMV3JvS0syVDRxWitESnBqZnI5VmdN?=
 =?utf-8?B?RzkwekFKc2hZZlplc0twa2NTaHpCMXA5UFdOclZQNTJ2NFdSbzRseGkvdDJh?=
 =?utf-8?B?dHJpZVdEQUFLVGx6YWRNZ0JFREcxdUVjRlovQ0dEb0o4VEZNTDZ0T2xUdlBN?=
 =?utf-8?B?MWVpSTNPVUVuOG5QOVl2T0g1aXdKbE1TbUMybFlab3NsVTVPSmxPYTlxcTlU?=
 =?utf-8?B?eDRMRC9OQ1FEeGtXb1YyeWxPTXFSSEJXeHk4RDg1UWhrUm4zNFJ3a3N2SW9r?=
 =?utf-8?Q?1LC0WMValBm65qneNiLQJePSSdCSLhgWdW?=
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Dec 2020 17:34:52.5523
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: c79897df-e76a-4c8c-b459-08d8ace93735
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: UvLeRGVw8Y32ldKVd3CMK9ct31ufRz7IOs2rMqhd7qdOy+Y4BSRFYxXb+NEgL4NoTjvcilS9XF51PHvdvTqJeQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4972
X-OriginatorOrg: citrix.com

It's not part of the POSIX standard [0], and as such non-GNU ar
implementations don't usually have it.

It's not relevant for the use case here anyway, as the archive file is
recreated every time due to the rm invocation before the ar call. No
file name matching should happen, so matching using the full path name
or a relative one should yield the same result.

This fixes the build on FreeBSD.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

[0] https://pubs.opengroup.org/onlinepubs/9699919799/utilities/ar.html
---
I'm unsure whether the r and s options are also needed, since they
seem to only be relevant when updating a library, and the Xen build
system always removes the old library prior to any ar call.
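For illustration only (not part of the patch), the rm-then-ar pattern can
be sketched with throwaway text files, since ar will archive any file type;
the file names below are made up for the example:

```shell
# Because the archive is removed first, every ar invocation creates it
# from scratch, so the POSIX options c (create), r (insert member) and
# s (write symbol index) suffice; the GNU-only P option, which affects
# matching of existing members by full path, never comes into play.
printf 'member one\n' > a.txt
printf 'member two\n' > b.txt
rm -f demo.a
ar crs demo.a a.txt b.txt
ar t demo.a    # lists the members in insertion order: a.txt, b.txt
```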
---
 xen/Rules.mk | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/Rules.mk b/xen/Rules.mk
index aba6ca2a90..8fcffffc98 100644
--- a/xen/Rules.mk
+++ b/xen/Rules.mk
@@ -71,7 +71,7 @@ cmd_ld = $(LD) $(XEN_LDFLAGS) -r -o $@ $(filter-out %.a,$(real-prereqs)) \
 # ---------------------------------------------------------------------------
 
 quiet_cmd_ar = AR      $@
-cmd_ar = rm -f $@; $(AR) cPrs $@ $(real-prereqs)
+cmd_ar = rm -f $@; $(AR) crs $@ $(real-prereqs)
 
 # Objcopy
 # ---------------------------------------------------------------------------
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Wed Dec 30 18:33:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Dec 2020 18:33:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60228.105585 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kugHS-0006y5-Ra; Wed, 30 Dec 2020 18:33:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60228.105585; Wed, 30 Dec 2020 18:33:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kugHS-0006xy-OV; Wed, 30 Dec 2020 18:33:06 +0000
Received: by outflank-mailman (input) for mailman id 60228;
 Wed, 30 Dec 2020 18:33:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F8On=GC=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kugHS-0006xt-7x
 for xen-devel@lists.xenproject.org; Wed, 30 Dec 2020 18:33:06 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5eb58d59-54f5-4d15-aa00-c77d33ccc4b6;
 Wed, 30 Dec 2020 18:33:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5eb58d59-54f5-4d15-aa00-c77d33ccc4b6
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609353185;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=tvDOwpqolb6BKcp6j3iX/knSiXz2HNVjIGOali1aDK4=;
  b=XIwMbfrGzVVX4j/QMfXQRteOlmNI7+NbfFAzHt9Us+Kt6KK+BZwDmtzm
   x46pG1EcVobVEybwhcdAVHlXywVabaBs8DPA1k76sse9TLfGlA7vD0nAN
   iR/PlYusINpJ5OtUEZxobqioqoJzP0whzhPvP3S1h+i8zT4/ZDyG9Ze8B
   Q=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: QSCED7dQwKsNAqNXWvjaCgE0ZogHY/Sakr8J4F/HVg/P0Fek6w1Z0ZlQhUkcpeVoVTbp3WKeBu
 hp8JhN31VPULb02Qqw9kmqzYqjt4KnbCtJoBUx3Ii1qIbNw8w1PJvOefQLeDJ0+nlP2ocrN7X5
 u9CLJTXYooEQslqx8OR/QrwFuGjh1eOsW2nn46B0l3KeXcCOed93OrL4FR8vdeRRbd5VOX1Jt5
 N2vgrCXpsL1RTJ0tRTb9oqZC8oTGRpnbWbiN46gcPt4SqtNejv1wSotknpWeEZ+YR1oKZfYLXh
 OXc=
X-SBRS: 5.2
X-MesageID: 34165435
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,461,1599537600"; 
   d="scan'208";a="34165435"
Subject: Re: [PATCH] xen: remove the usage of the P ar option
To: Roger Pau Monne <roger.pau@citrix.com>, <xen-devel@lists.xenproject.org>
CC: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
References: <20201230173446.1768-1-roger.pau@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <b90b93d0-ea83-bc00-6dc0-cbe9e7cfa1ce@citrix.com>
Date: Wed, 30 Dec 2020 18:32:58 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201230173446.1768-1-roger.pau@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 30/12/2020 17:34, Roger Pau Monne wrote:
> It's not part of the POSIX standard [0], and as such non-GNU ar
> implementations don't usually have it.
>
> It's not relevant for the use case here anyway, as the archive file is
> recreated every time due to the rm invocation before the ar call. No
> file name matching should happen, so matching using the full path name
> or a relative one should yield the same result.
>
> This fixes the build on FreeBSD.
>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>, although...

We really need some kind of BSD build in CI.  This kind of breakage
shouldn't get into master to begin with.

>
> [0] https://pubs.opengroup.org/onlinepubs/9699919799/utilities/ar.html
> ---
> I'm unsure whether the r and s options are also needed, since they
> seem to only be relevant when updating a library, and the Xen build
> system always removes the old library prior to any ar call.

... I think r should be dropped, because we're not replacing any files. 
However, I expect the index file is still relevant, because without it,
you've got to perform an O(n) search through the archive to find a file.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Dec 30 19:21:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Dec 2020 19:21:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60236.105602 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuh21-0002le-Fd; Wed, 30 Dec 2020 19:21:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60236.105602; Wed, 30 Dec 2020 19:21:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuh21-0002lX-CZ; Wed, 30 Dec 2020 19:21:13 +0000
Received: by outflank-mailman (input) for mailman id 60236;
 Wed, 30 Dec 2020 19:21:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuh20-0002lP-Ah; Wed, 30 Dec 2020 19:21:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuh20-0005Ya-0h; Wed, 30 Dec 2020 19:21:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuh1z-0007YX-N8; Wed, 30 Dec 2020 19:21:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kuh1z-0002dn-Md; Wed, 30 Dec 2020 19:21:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=uQMHEQAts1QoailfICl14+gvUNjox3H2NqjNljb/pyc=; b=Lp8tTFtWB71ut0ETJT7uqaigrV
	nsPLZoprccyKO75+FTLfQXj5UGj5vlIrjhuPEaO18eAleqxEgiLWzczQiojpeeABMflYP3IVSmaXK
	OhQ7q07U+jeB8dtCC6q7jJ9HVIda7JqnmeebzEXeH9Qw/GsqjhSmQnwL3OXCMOEBXo5Q=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157974-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157974: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=139711f033f636cc78b6aaf7363252241b9698ef
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 30 Dec 2020 19:21:11 +0000

flight 157974 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157974/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl     10 host-ping-check-xen fail in 157967 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-seattle 10 host-ping-check-xen fail in 157967 pass in 157974
 test-arm64-arm64-xl           8 xen-boot                   fail pass in 157967
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 157967
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 157967

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-xsm 11 leak-check/basis(11) fail in 157967 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                139711f033f636cc78b6aaf7363252241b9698ef
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  151 days
Failing since        152366  2020-08-01 20:49:34 Z  150 days  266 attempts
Testing same since   157967  2020-12-30 00:39:27 Z    0 days    2 attempts

------------------------------------------------------------
4330 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 976766 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Dec 30 19:36:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Dec 2020 19:36:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60245.105617 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuhGG-0003mV-Nw; Wed, 30 Dec 2020 19:35:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60245.105617; Wed, 30 Dec 2020 19:35:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuhGG-0003mO-L6; Wed, 30 Dec 2020 19:35:56 +0000
Received: by outflank-mailman (input) for mailman id 60245;
 Wed, 30 Dec 2020 19:35:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F8On=GC=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kuhGF-0003mJ-5b
 for xen-devel@lists.xenproject.org; Wed, 30 Dec 2020 19:35:55 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e0c166cc-5119-43c1-9d9b-53adb90db94d;
 Wed, 30 Dec 2020 19:35:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0c166cc-5119-43c1-9d9b-53adb90db94d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609356953;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=aVkhF2nw9dU6WVxFGut4OlYQbrrCDmVSzhxlhz6QfRA=;
  b=bA3ngfEtBx77W/w9dKM4QQPEjoHVaAe4KQD9HxGR/HdJucaTHmded/3k
   4gJXjKOBqePDHm21qbg6YhMtfLkls8a44sXYghtLW/J35kHRzsbZGFGA1
   tbmyn14dQ2uWh43zQR0gdBXKUgfVlV99dn2U9o+Tj1A2mvJah+Sol/+lq
   0=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: mYKhCpP1c5YOjRU5DGgtJMG8PaD52xHH5JLEAaVMJUNXaDg1af3EuF9o1s6+xkYghEgHTZv3FY
 PMq9lYWVQC3gSUwLIOpGDlXimb1wuPuWomdWDrAMDEvglP+9sS6tdGI3U1XniI/iN8uqVxhqFd
 8j0yc0DvoZuwcq3KGqm110e3B5JE/wFmC4DoiYtYE83T6m1gwPilG3Wq6pgG55//rGYpKa/Qa7
 pcSgluZsLz8RfQacz20MT1vUhNdky/ULQ2bs2Mdfnu6wgBukCm2/poX89s6f6pALN4A+nWdlg1
 fxU=
X-SBRS: 5.2
X-MesageID: 34532738
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,462,1599537600"; 
   d="scan'208";a="34532738"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/svm: Clean up MSR_K8_VM_CR definitions
Date: Wed, 30 Dec 2020 19:35:25 +0000
Message-ID: <20201230193525.12290-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Drop the unused shift number, and reposition the constants into the cleaned-up
section.  Rename K8_VMCR_SVME_DISABLE to VM_CR_SVM_DISABLE, which is closer to
its APM definition.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

This is cleanup to help a forthcoming Trenchboot change, which will use more
bits in the MSR.
---
 xen/arch/x86/hvm/svm/svm.c      | 2 +-
 xen/include/asm-x86/msr-index.h | 8 +++-----
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 0854fcfc14..b819897a4a 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1586,7 +1586,7 @@ static int _svm_cpu_up(bool bsp)
 
     /* Check whether SVM feature is disabled in BIOS */
     rdmsrl(MSR_K8_VM_CR, msr_content);
-    if ( msr_content & K8_VMCR_SVME_DISABLE )
+    if ( msr_content & VM_CR_SVM_DISABLE )
     {
         printk("CPU%d: AMD SVM Extension is disabled in BIOS.\n", cpu);
         return -EINVAL;
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index 3e0c6c8476..ff583cf0ed 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -116,6 +116,9 @@
 #define  PASID_PASID_MASK                   0x000fffff
 #define  PASID_VALID                        (_AC(1, ULL) << 31)
 
+#define MSR_K8_VM_CR                        0xc0010114
+#define  VM_CR_SVM_DISABLE                  (_AC(1, ULL) <<  4)
+
 /*
  * Legacy MSR constants in need of cleanup.  No new MSRs below this comment.
  */
@@ -297,7 +300,6 @@
 #define MSR_K8_PSTATE6			0xc001006A
 #define MSR_K8_PSTATE7			0xc001006B
 #define MSR_K8_ENABLE_C1E		0xc0010055
-#define MSR_K8_VM_CR			0xc0010114
 #define MSR_K8_VM_HSAVE_PA		0xc0010117
 
 #define MSR_AMD_FAM15H_EVNTSEL0		0xc0010200
@@ -318,10 +320,6 @@
 #define MSR_K8_FEATURE_MASK		0xc0011004
 #define MSR_K8_EXT_FEATURE_MASK		0xc0011005
 
-/* MSR_K8_VM_CR bits: */
-#define _K8_VMCR_SVME_DISABLE		4
-#define K8_VMCR_SVME_DISABLE		(1 << _K8_VMCR_SVME_DISABLE)
-
 /* AMD64 MSRs */
 #define MSR_AMD64_NB_CFG		0xc001001f
 #define AMD64_NB_CFG_CF8_EXT_ENABLE_BIT	46
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Dec 30 19:47:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 30 Dec 2020 19:47:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60252.105630 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuhRc-0004mg-V2; Wed, 30 Dec 2020 19:47:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60252.105630; Wed, 30 Dec 2020 19:47:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuhRc-0004mZ-RZ; Wed, 30 Dec 2020 19:47:40 +0000
Received: by outflank-mailman (input) for mailman id 60252;
 Wed, 30 Dec 2020 19:47:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuhRb-0004mR-DY; Wed, 30 Dec 2020 19:47:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuhRb-0005yD-85; Wed, 30 Dec 2020 19:47:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuhRa-0000qB-Rp; Wed, 30 Dec 2020 19:47:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kuhRa-0008SO-RL; Wed, 30 Dec 2020 19:47:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sVn4UzDk6Div3l0whbJvKqU9tMFoBD1ZJ0k6SDxiOOg=; b=j3xTQNbr/7Pl8fHiBaVCNdRoQG
	M4ohUqyuc9+/abu8l1zvN8Ar0h39RN7TsnKU7QKVqZIEISIipTvBveN9fimOxjQPFG7qs0g01mhkx
	9NV6T0ddhUg1VF5/ZqkivjvMyHKhckPh+rKYQC9PYGWsvCRwDY1bJFNDLdN2grIyQ9lw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157976-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 157976: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-start/debianhvm.repeat:fail:regression
    linux-5.4:build-arm64-pvops:kernel-build:fail:regression
    linux-5.4:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-5.4:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=dfce803cd87dc139cfe4da1a68a5b3585e9e47e7
X-Osstest-Versions-That:
    linux=19d1c763e849fb78ddf2afe0e2625d79ed4c8a11
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 30 Dec 2020 19:47:38 +0000

flight 157976 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157976/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 157757
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 20 guest-start/debianhvm.repeat fail REGR. vs. 157757
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 157757

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 157757

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157757
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157757
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157757
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157757
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157757
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157757
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157757
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157757
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157757
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157757
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157757
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                dfce803cd87dc139cfe4da1a68a5b3585e9e47e7
baseline version:
 linux                19d1c763e849fb78ddf2afe0e2625d79ed4c8a11

Last test of basis   157757  2020-12-21 12:40:27 Z    9 days
Testing same since   157976  2020-12-30 11:09:59 Z    0 days    1 attempts

------------------------------------------------------------
382 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 11467 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 31 02:01:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 31 Dec 2020 02:01:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60352.105858 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kunH7-0002qi-Vx; Thu, 31 Dec 2020 02:01:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60352.105858; Thu, 31 Dec 2020 02:01:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kunH7-0002qZ-OS; Thu, 31 Dec 2020 02:01:13 +0000
Received: by outflank-mailman (input) for mailman id 60352;
 Thu, 31 Dec 2020 02:01:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kunH6-0002qR-HW; Thu, 31 Dec 2020 02:01:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kunH6-0002fa-71; Thu, 31 Dec 2020 02:01:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kunH5-0002JF-U7; Thu, 31 Dec 2020 02:01:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kunH5-0004Ul-Te; Thu, 31 Dec 2020 02:01:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=aGSmo4oHQ1SV/1sTzs0CLPtuOKG/dtNkPrjqAblRJR8=; b=WtV9TSdEqLOpJHBafyCC1n9apx
	cXE7+iq5S5gZK4y63FfkXDcd6QDmWfeepCR+KQR5/kTwum5RitcejLXisJyBXSaRBdX8euu33tB+P
	2jZxqUJTOqDMye7egGoI0SKg5gWUOdOBl0SavY0QgDkKOWOfcuoND7AwU5B3DUFlUb+I=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157980-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157980: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    qemu-mainline:test-amd64-coresched-i386-xl:guest-localmigrate/x10:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a05f8ecd88f15273d033b6f044b850a8af84a5b8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 31 Dec 2020 02:01:11 +0000

flight 157980 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157980/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10     fail pass in 157973
 test-amd64-coresched-i386-xl 20 guest-localmigrate/x10     fail pass in 157973

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  132 days
Failing since        152659  2020-08-21 14:07:39 Z  131 days  272 attempts
Testing same since   157670  2020-12-18 13:57:58 Z   12 days   25 attempts

------------------------------------------------------------
323 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 79306 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 31 04:33:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 31 Dec 2020 04:33:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60386.105941 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kupeV-0007LQ-9H; Thu, 31 Dec 2020 04:33:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60386.105941; Thu, 31 Dec 2020 04:33:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kupeV-0007LJ-6M; Thu, 31 Dec 2020 04:33:31 +0000
Received: by outflank-mailman (input) for mailman id 60386;
 Thu, 31 Dec 2020 04:33:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kupeU-0007LB-2a; Thu, 31 Dec 2020 04:33:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kupeT-0005FQ-NH; Thu, 31 Dec 2020 04:33:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kupeT-0002qB-F7; Thu, 31 Dec 2020 04:33:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kupeT-0002OZ-Ed; Thu, 31 Dec 2020 04:33:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4hH+dnFPlO8I+vxqkpn4Uh/dfqWNj+h4r9JDpYpxgGs=; b=D1I7EaXFTo4DSiObQ5v8uWDlk7
	IZFChtcFhLB//ogBDaCGJJjZUs4f5EWjTcXUqpt3wGirA3ZDi5v4KDOTiReate5ibP8Yn0LqFxAMF
	VHUusza/+98Cq+gSRfrTwaDpfAGya1fsjO7vRdE/7r7OHjvpSutDQvRRDfG7z5wwQwhU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157982-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157982: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=139711f033f636cc78b6aaf7363252241b9698ef
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 31 Dec 2020 04:33:29 +0000

flight 157982 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157982/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl     10 host-ping-check-xen fail in 157967 REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu  fail in 157967 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-seattle 10 host-ping-check-xen fail in 157967 pass in 157974
 test-arm64-arm64-libvirt-xsm  8 xen-boot         fail in 157967 pass in 157982
 test-amd64-amd64-examine    4 memdisk-try-append fail in 157974 pass in 157982
 test-arm64-arm64-xl-xsm       8 xen-boot         fail in 157974 pass in 157982
 test-arm64-arm64-xl-xsm      10 host-ping-check-xen        fail pass in 157967
 test-arm64-arm64-xl           8 xen-boot                   fail pass in 157967
 test-arm64-arm64-examine      8 reboot                     fail pass in 157974
 test-arm64-arm64-xl-seattle   8 xen-boot                   fail pass in 157974

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-xsm 11 leak-check/basis(11) fail in 157967 blocked in 152332
 test-arm64-arm64-xl-seattle 11 leak-check/basis(11) fail in 157974 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                139711f033f636cc78b6aaf7363252241b9698ef
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  152 days
Failing since        152366  2020-08-01 20:49:34 Z  151 days  267 attempts
Testing same since   157967  2020-12-30 00:39:27 Z    1 days    3 attempts

------------------------------------------------------------
4330 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 976766 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 31 08:46:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 31 Dec 2020 08:46:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60435.106062 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kutb2-0003yq-N4; Thu, 31 Dec 2020 08:46:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60435.106062; Thu, 31 Dec 2020 08:46:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kutb2-0003yj-Jb; Thu, 31 Dec 2020 08:46:12 +0000
Received: by outflank-mailman (input) for mailman id 60435;
 Thu, 31 Dec 2020 08:46:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eIAU=GD=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kutb0-0003ye-QN
 for xen-devel@lists.xenproject.org; Thu, 31 Dec 2020 08:46:11 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eda73e53-34a1-4ace-b8aa-89b19f390615;
 Thu, 31 Dec 2020 08:46:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eda73e53-34a1-4ace-b8aa-89b19f390615
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609404367;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=gRxrrKXZn0KdLmk6xPEJsIwzSBfYwF762ol0b1f21L0=;
  b=d2YF4h0iIMPU4uhv6iwWTvL+sbMSmfYjnOHhOy4CMrKUC+UM0jpCMeEt
   LRQoshw10wrSOc8ooNk4kQyrl/yIe4Js5Imx1mQpcb7c4uhSQBOhvswVM
   SMLllAYSoySzz+QUANOv8g0RNhnwm5s90K6IyaCL3iRwB3/6KP8ZxU6w7
   A=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: sGoeYkk4ypAAPSwWpVlb4YuwXYl13dpXSWNzPGw5Azv9QW2YY5hGNugNqDm5qV8lWyXq3V6z8i
 bNBwM7955uRmigA5aKZdSvxI3KikTC55dUhIkp8JfbqGYWE4s1fXdSYNjtiDVVUd6YdqBAWJNv
 egJCaAkN4AGOJSynNGwTf7z3EZ4JCnu4e5NWqggf9S2UHglFIgdrIPUOAXbiTHdSZXRq6GG5yJ
 cWXR8IsQzm4pbeQPIy5+Zycf9mZJfkt9GZe6narOFxFrDWfCUKNwGGQUQ6daQiHt4Sww+rFvo0
 WX4=
X-SBRS: 5.2
X-MesageID: 34427108
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,463,1599537600"; 
   d="scan'208";a="34427108"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XNfXFBU5xj2ccMb+lO7iM9JZbEyAar2lPoYb2lhh2klM5JuSWH3jghryOp5dCgYdIlIEiymJQWWwul9nVT5XolpJ/Tqm6cQ/ApknESD8uqfLjx2nH+bzjkEbHJB2kheoouxJW0yK8QG00l74h5c6YOh8ASVPOvVjlZZV9nxmI9+foVjA14XnfNFoPfcNthMg/khaClAhH87cf2yt0lxPSvNnJajzve8RbIJ1pNoEbV0qu8Q6r3j3igUq8PZQJ3DjrlagPwZZ8JFr9bddZa8WchMtkss6TRiWMDJLMolUr60cvrg0Zlfq0ojDWZZ+z94wYYmBkKL7J2YTT5X/XgLG2Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4fUtm5oHVl9pVLu6B6OLdeRXtRiaWfgxkme+DWGgI54=;
 b=nAtOVonR4FrQYtCHgZHwpcJOumCQyPB4/AQiqtxW6Mmz6gkP416UkNNfs8r61bmmjjCvtyZU4J3Qrg21hyoJLB1w3u5JR8XCAkceQnMs7oTH/8v1rYacCsfzF7kDhuzGcgyyEnHussqoef6XZ6jMORZzslvI9SUPbuXIeS/SortW2e3OhD6+TRSuuPc47u68NWukyqlJ5AvW/LI5LrELyCop0bncHFTeC4X1I/AO0KhRBDHMrsAWVlzHiVmXF5tUEM+ITdEFKjfThJITXjxHkOh5D4W4PHSS49aLcYtrV2kZFle3ZjJ9+x2gaj3fL9oYzD9JdQ57oxNIWW8MMPd/Hg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4fUtm5oHVl9pVLu6B6OLdeRXtRiaWfgxkme+DWGgI54=;
 b=vora2xGzH8GcLrpxDcyh4ckY03veUhzRXNCql8CD4gx34bwmtFVJ/wKgJ3oMqiz1avGZiryl3plom6HVYK6I2Z41hlWOhwvK7DAfBITYv0i2GQyOseGNae9wgxlcSiB7cnRJekLL9M1OH1oXlKpfU2mq4R9x0PKFd3FFdpOgkkY=
Date: Thu, 31 Dec 2020 09:45:56 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Julien Grall <julien@xen.org>
CC: Rich Persaud <persaur@gmail.com>, Jean-Philippe Ouellet <jpo@vt.edu>,
	Christopher Clark <christopher.w.clark@gmail.com>, openxt
	<openxt@googlegroups.com>, <xen-devel@lists.xenproject.org>, Oleksandr
 Tyshchenko <olekstysh@gmail.com>, Julien Grall <jgrall@amazon.com>, James
 McKenzie <james@bromium.com>, Andrew Cooper <Andrew.Cooper3@citrix.com>, Paul
 Durrant <pdurrant@amazon.co.uk>
Subject: Re: [openxt-dev] VirtIO-Argo initial development proposal
Message-ID: <20201231084556.ogvltixgd6ovlja2@Air-de-Roger>
References: <DBCC8190-7228-483E-AE8A-09880B28F516@gmail.com>
 <20201229091730.owgpdeekb7pcex7t@Air-de-Roger>
 <eac811f4-51fd-9198-446a-230dc6915f62@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <eac811f4-51fd-9198-446a-230dc6915f62@xen.org>
X-ClientProxiedBy: PR2P264CA0031.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:101:1::19) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 61b14778-8f39-4a77-ea0e-08d8ad68810f
X-MS-TrafficTypeDiagnostic: DM5PR03MB3290:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB3290BEF9A2656D424A8F298C8FD60@DM5PR03MB3290.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 1ub2iFZO4KO9MmaI9Xen7JbWqz6UQBrjz4N73+YA0b1WCjD6vhmCwBZjSYaGfe+FboyRPyB56eFhKluKrOWxYA0UvuNZ/aUXxkCAqrysPcAlVagoGiq3KxPSPjPq3ktgr9BHfGaU3jbJng6zOFtQVtHbxR87uYrBWDkmsiPFUjSEepJYzM45Td+yuJj8AliYtuvWnXeWaY68X09m8zrGrjwNRQ008dqjXe8ACSmtWScyOLW1GsbJZx6QT2s3C90/UN27qjzK2ZxnDP6JKM4X/rvfO9PoNLimomhwoXCFjYQ5OttcxhZNDLuR8dqMy2uIT5TXZU3NQNq21Dh5a7bfbFM2KaZS7P3OCYZuu+ds1ciOk5WXRBkc9l6o92pV49kTrtfMw3wUN3cz6I++FzGOeBtJ2X8uvAwYbSE/3omMPoFoxZ4mJp4iTnwlYUV0lJOriVB1AJ0QkqFYkhLc2NKFfQ==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(136003)(366004)(39860400002)(346002)(396003)(376002)(186003)(2906002)(6916009)(53546011)(16526019)(8676002)(6486002)(5660300002)(9686003)(7416002)(4326008)(83380400001)(26005)(478600001)(85182001)(54906003)(966005)(86362001)(316002)(66476007)(66556008)(6666004)(1076003)(6496006)(956004)(8936002)(33716001)(66946007);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?bGZVSkhsOVVwRlJpb1NnMnlWWkFONXRBb3EzdFF3bUlSLzEwaUN4dnFhbnNZ?=
 =?utf-8?B?c2NJZ2RxTFp5S3JyVjFuMUJtQVZKeG04ME0wSDVlQzByQzE1RTBYeGR1YXpR?=
 =?utf-8?B?WjY0VjMwbFd1MTlIUkJnaXo4Wlp6MTE5dWtOVjNBZXVMSFpkWkRyaEFZNmVN?=
 =?utf-8?B?WXM2Z0FCdzMxOHdRNU1icXYrN1N3OEM3OEtwRnFtWGFicklVUlRLRk8xN0M5?=
 =?utf-8?B?ZmlWSyszWUxWUDQ1Nk84M3lKNXM1bStnZ0RyNWQvR1Ara3FKYktRMURZRGJ2?=
 =?utf-8?B?SG5qYlpSbDIzVVZpR2VwUkVzaVR3elR2Q1dpQmR6M3dZV0hQekp0Si8rQ2ZN?=
 =?utf-8?B?eUlxM0JOV0NkbG5lVm5lbTVhTEVSdzVzUGQvbHlBbGduSk9GOGYzZkVqeHpq?=
 =?utf-8?B?ZStvZkFVVk0rN0pXQjRLamZxY1l3SGZRSkVBdHBwSEdxQkZ5cnh6d3BEYlZw?=
 =?utf-8?B?NjBYQmhLZUs5ZFdSNXV4d2MwT3pURGJhVWFiWFAwQlFpeHh4b1lGTkxxR1lr?=
 =?utf-8?B?dElaaFdINlRwemt1V0c4dldtd0pHb1lWdm5sc2RvUFJiQzBTeFlNMzA4SW1m?=
 =?utf-8?B?Q01KdWFkZ0ErUXNtbk1sNlBzbDhGaEpvNXh6WFNMU1h5dWtuak0xd0NhbWE1?=
 =?utf-8?B?V2YwVVljbDlhSm40YWIwYk1xMUVuVWo2ZEhTdU93aFpFRW8wZm9hZm94Rmdk?=
 =?utf-8?B?M2l1SDFISExXbzRITXVIa05uQUlCbkgxQldaTndXeis0TkozWWlSM29mU2w3?=
 =?utf-8?B?VzZVUUJLaHo3WFIzNGc4NkJ6dEJ1ZExzTVdoY2pSbTRCVm5IbjhIOFgwVm5P?=
 =?utf-8?B?QzdiQ2lrRGU2T2tBWVBBMjg1ZGhhSU9mQncvZFB3dENzN0hwSk8wakt0cStG?=
 =?utf-8?B?Sy8vZDI5NGhJT1pXUVlvU3J0WHI2MzBxcXIrS01sWGpVUXU1dXZmSkxvOXZl?=
 =?utf-8?B?K0c0a0V5WnhudUtYNnZNNHFacVNrT0JxdVIwK3NTRDVnTnZPVEVaM0pRRTVV?=
 =?utf-8?B?ZTNnVitoY25Fb0lIMXhwU1hGeVNqMnFSRGV2MEF2RHRIekJZWjhmM0VQWnR5?=
 =?utf-8?B?ZDZNbSs1blUydGd2Q3Vxd2lGb0ZOblF3U0RTV1A4cllaaVhaS3g2emlRcWpv?=
 =?utf-8?B?b3N3Smk5NW9nOWRoVzk1dTN0R01uQmRwblZxaCtUV0c3cldteVhjUXJ5RVhF?=
 =?utf-8?B?bDJRWWwvWDFyNGgzdWVQUFh0cWd2OU9rbi93V1I0eUlaNDZ3SFdsMitWSWlr?=
 =?utf-8?B?Qks4eDVqRmRqQWhid2JYTkxaMW54WnJOY2pQcEMxYU1zYU5mQTYzOXkyTVlP?=
 =?utf-8?Q?AasT2jY9vrsiCyy298qAlKLe7Q3cHsP3FZ?=
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Dec 2020 08:46:02.5805
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: 61b14778-8f39-4a77-ea0e-08d8ad68810f
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 3KKpDKzTz/HUpL+W57MpZjoxkcEzQ51wQfwxbsPS3DNU5Gn/OjTOlU//GT25N0biENaypwC0+vTTioZzczwr1g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3290
X-OriginatorOrg: citrix.com

On Wed, Dec 30, 2020 at 11:30:03AM +0000, Julien Grall wrote:
> Hi Roger,
> 
> On 29/12/2020 09:17, Roger Pau Monné wrote:
> > On Wed, Dec 23, 2020 at 04:32:01PM -0500, Rich Persaud wrote:
> > > On Dec 17, 2020, at 07:13, Jean-Philippe Ouellet <jpo@vt.edu> wrote:
> > > > On Wed, Dec 16, 2020 at 2:37 PM Christopher Clark
> > > > <christopher.w.clark@gmail.com> wrote:
> > > > > Hi all,
> > > > > 
> > > > > I have written a page for the OpenXT wiki describing a proposal for
> > > > > initial development towards the VirtIO-Argo transport driver, and the
> > > > > related system components to support it, destined for OpenXT and
> > > > > upstream projects:
> > > > > 
> > > > > https://openxt.atlassian.net/wiki/spaces/~cclark/pages/1696169985/VirtIO-Argo+Development+Phase+1
> > 
> > Thanks for the detailed document, I've taken a look and there's indeed
> > a lot of work to do listed there :). I have some suggestions and
> > questions.
> > 
> > Overall I think it would be easier for VirtIO to take a new transport
> > if it's not tied to a specific hypervisor. The way Argo is implemented
> > right now is using hypercalls, which is a mechanism specific to Xen.
> The concept of a hypervisor call is not Xen-specific; any other hypervisor
> can easily implement one. At least this is the case on Arm, because we have
> an instruction, 'hvc', that acts the same way as a syscall but for the
> hypervisor.
> 
> What we would need to do is reserve a range for Argo. It should be
> possible to do it on Arm thanks to the SMCCC (see [1]).
> 
> I am not sure whether you have something similar on x86.

On x86 Intel has vmcall and AMD vmmcall, but those are only available
to HVM guests.

> > IMO it might be easier to start by having an Argo interface using
> > MSRs, that all hypervisors can implement, and then base the VirtIO
> > implementation on top of that interface.
> My concern is that the interface would need to be arch-specific. Would you
> mind explaining what the problem is with implementing something based on
> vmcall?

More of a recommendation than a concern, but I think it would be more
attractive for upstream if we could provide an interface to Argo (or
hypervisor-mediated data exchange) that's not too tied to Xen
specifics. Using a defined vmcall/vmmcall interface (and leaving PV out
of the picture?) could be one option.

I suggested using MSRs because I think every x86 hypervisor must already
have the logic to trap and handle some of those, every OS has helpers to
read and write MSRs, and those instructions are not vendor-specific.
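
For illustration only, a rough sketch of what such an MSR-based interface
might look like from the hypervisor side. All the MSR numbers, structure
and function names below are invented for this example; no such range is
defined by Xen, the Argo implementation, or any VirtIO spec:

```c
#include <stdint.h>

/* Assumed synthetic-MSR range for Argo requests (hypothetical). */
#define ARGO_MSR_BASE   0x40000100u
#define ARGO_MSR_SENDV  (ARGO_MSR_BASE + 0)

struct argo_req {
    uint32_t msr;   /* which MSR the guest wrote */
    uint64_t gpa;   /* guest-physical address of the request payload */
};

/*
 * What a hypervisor's wrmsr-trap handler might do with a write to the
 * Argo range: recognize the MSR, then act on the payload the guest
 * placed at the given guest-physical address.
 */
static int argo_handle_wrmsr(const struct argo_req *req)
{
    if (req->msr != ARGO_MSR_SENDV)
        return -1;  /* not ours: a real VMM would re-inject #GP */

    /* ... look up the frame at req->gpa and deliver the message ... */
    return 0;
}
```

The point of the sketch is only that trapping MSR accesses is machinery
every x86 hypervisor already has, so the guest-visible interface would not
depend on vmcall vs. vmmcall.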

Roger.


From xen-devel-bounces@lists.xenproject.org Thu Dec 31 08:56:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 31 Dec 2020 08:56:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60446.106086 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kutkU-0004xn-0L; Thu, 31 Dec 2020 08:55:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60446.106086; Thu, 31 Dec 2020 08:55:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kutkT-0004xg-Sp; Thu, 31 Dec 2020 08:55:57 +0000
Received: by outflank-mailman (input) for mailman id 60446;
 Thu, 31 Dec 2020 08:55:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eIAU=GD=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kutkS-0004xb-If
 for xen-devel@lists.xenproject.org; Thu, 31 Dec 2020 08:55:56 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9d0e74a9-22de-4f30-82d9-45ecb4b1ac94;
 Thu, 31 Dec 2020 08:55:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9d0e74a9-22de-4f30-82d9-45ecb4b1ac94
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609404955;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=xhLgeIVcD+GN7j0M7BgfUFKaMTqrZa+pW7RhS4eDTwc=;
  b=PBg72bP07unixMRqvJrHN7XIo7r5EzvtdDZucxFicAX9w0cUy0PNH65l
   dguPWN2KAt28t7esQ5BapeK/N+wM/4y9Zt7jJJEGKclN0KpNJs3W6EX5p
   Sb6odLX4TriSIr4Qp6xZV+R9BXL6f169b4BTZ6GrIwTLbA5AtOZGYQK+N
   0=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: qowIPzAUlAFrcXSxV8cDoFcU7KZNnxap9yJx41ler5EQL9tf5Lk7Pll7bRKYFCrYCakQ9alf9X
 d5D7753YIDoTQbVCPjPxlyJlKhMEFifg5rFaK0MjWzBAH1bxIMGzcP0lGXV4WB9Dj5s6+20PiZ
 ukNPc3pmJZI3q3vyXoWL0jS4epAIksOi0jXaOmbapSeRSW5d0k6GELlhw77PiIGO1oEIJB7QDo
 +9Aq7nMXroPtc8Ha7y/VrYMc2QF/9VU7a7fSQAUSQzoSDw8sWwyAMYo1zBlIaTSWWW3SoooICx
 +IM=
X-SBRS: 5.2
X-MesageID: 34560215
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,463,1599537600"; 
   d="scan'208";a="34560215"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dl5f38yyLg8T6Wt8JyFM4LRQrl/RXVL9nqs5Ky9S0VIOiUC1WsaKQ1+FbzGqzJK1cL5mRHfGnWkzWSTidB3Eqg8g1Zviae1FXPJF2bgJg7pH9mxhATIMKVvpOS1mYcKpG3Lg3AP4wrxWHfvnf4u8oObGaz8A2SbamKc+yUwDzic2rt9/Ong6JTVA2hfFcVXCojVwdEpJED5CaS/aQOf2hlX1BHR4piI/FY6oE9/wALBbmX1cZkpMktM10kaI/Cs/clUeC3Eg1vKHuBhiEFc4Vk4G30fbDgj/9jjIbLB9rF+bngOwtFCJSMKQYRjPACsmap+JMkSc9up5ovTYNaxNkg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zjLV/lVtDp/RXbWyIND1cVbX3sYXc0cfPL24c74fCjc=;
 b=djSq4l6H43RK6kC9jHnOJ0ey5CKvna/N2jakC4PHLVYURv66ZTt5RBuneFFFtKXqCSssyBAdaM3/qIUYED3/4EooMrrfIXIsNad+eft9v5wmONxIqQ6QpCEH4otyRLc9HioMB6XCUFpLRmVMwLyFQMam5aCFBOnMTyJZ3DKSr/Ef3JQdVoWrmF442VsqLmwC7Ia/7FZYxbo/v34Zx4yxHxfF5RyoC/rA2FJNUJAmDyppKV4V49QGkUvqM60d6uxLkd+yZJXjAuAUrMoV9VV6+dvyD5h/oqZN0Gd1ZZehxfEtw4SdoDpKsszomyoBR1xifXwR7sE2cmjV/vIBFHEdyg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zjLV/lVtDp/RXbWyIND1cVbX3sYXc0cfPL24c74fCjc=;
 b=X72FbqHv5o5wjOKK5zjE86+3+K0aCVJCwNeFGkvANI8+DK8fcxpw3HiGNy+VJZ5Sf0KdzDoaUvg82b9KXJ67CSsiJC3KKHwrZ9kAfoYwO2d8+zwSB1d490un1LRGglxpF3r7yPcbWA0TCGaI7JM1Mlt1WvV4ugvLTRfTL0A8TVQ=
Date: Thu, 31 Dec 2020 09:55:43 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/hpet: Fix return value of hpet_setup()
Message-ID: <20201231085543.jwggru4pqt75372n@Air-de-Roger>
References: <20201230160208.18877-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201230160208.18877-1-andrew.cooper3@citrix.com>
X-ClientProxiedBy: MR2P264CA0108.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:33::24) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 75908442-ae08-4e82-ecea-08d8ad69def7
X-MS-TrafficTypeDiagnostic: DM6PR03MB3737:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB37375511E07B871E8D5919418FD60@DM6PR03MB3737.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: dpoMWujtvEFEEeSTsYqcETnHAGd5uTHvG+dyfX57XltTQKcn8yMuSG+/1WxwwlppsR/+MctWIdt6P09bF7Xh+keuEmVsMpRw/moWghhjSNGWJIinK9e96f4/Iv864vm/5mXm/SVwpP3uknWQSnN0hVgK4JMGzZw1YTfHyjpkHpBdW2dNAUuFBmU/5GB4zVUEjpYyjvbrltgQHlUOKYiRlldCZnx6KbnnxhinZkcfGRxJOKXZoM2ZTPsgsc6dsA8JoLI6kW+u1ukLvMIzksNqF1SQQQGPuAoTifgjEYzMs9gJuYJdzZtnN6Up6f6hOfw4l+duR4A5yIKN9Gz3z5qqpAgO+lak631ymPZMRyMgGIjQyklLjVGnOaLj3SUehsM/
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(39860400002)(366004)(136003)(346002)(376002)(396003)(6636002)(4326008)(33716001)(6486002)(6862004)(8936002)(8676002)(9686003)(85182001)(6496006)(956004)(4744005)(86362001)(1076003)(66946007)(66476007)(16526019)(478600001)(6666004)(186003)(2906002)(26005)(66556008)(5660300002)(316002)(54906003);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?cVlXUjRnUDI4RGFPT0lCUmxFQW01K3p4a1BweC9CRHJ6QnkvT0ZYRkROUXoz?=
 =?utf-8?B?SXY4N01ESysyaWVJVXNSYldzWWc3UWpMcTRUUWNHa0M3bnNVc3FyVnhodHE3?=
 =?utf-8?B?SjFWQmU0a1J5U0hnditHcTV2VUNHblNpM2xiVDU1Wng4NkZ6ZS9PSWlkK0pI?=
 =?utf-8?B?cXRudVRZNVoxOXJGVEdiZ3dBemVTTWxBbG1NanUwZ2JEczJONzdRbEVFdGhS?=
 =?utf-8?B?U0xSdjBvOVFWKzY0eWFZSnlIeTRJeG9vSEx4c0J4c1ZsTWJsYThHU3htaHR5?=
 =?utf-8?B?VDJObHhiUGZJV0lzSWxSNnVXU3ZVdGZNTFZkVTEzeGE4ZXJWNE4yYlRCaU8z?=
 =?utf-8?B?WXVGazhtZ3Q3VkhaMGtzQ2FGTVF4dGJzRTZOQWhNV3JvM2pSNW14T2dob3NZ?=
 =?utf-8?B?bnc3NmhmNWJzcWFpUzV1ekJIblgxR1hNWkVTRTdMdktTdEs5bDhqZTJsb0lh?=
 =?utf-8?B?ZTFEU2QvYThFa2dQOC9KYVgwbi96V2Y2aWRydUZvN2duRDM0UWxYVWhhMGVV?=
 =?utf-8?B?MERsMVNtRElkakhvMXlQeEVLYmpGL3lxL0IvOHBCa2o0c1lkWGFYSzYvVXU1?=
 =?utf-8?B?cFlabUczQkJZWFVSYjYyL2xFS1d2SDIxSmIxRjM3djNiT3FSbERlOWI5T1Fn?=
 =?utf-8?B?UHFtYzBZQmw5ZWFZS3BtVllKRGprQnZXRjNaVHhYeWtpTWhDZmtHNWJLeU01?=
 =?utf-8?B?eGlCaWhWWmJPVWFmOE56WExoeXV1dzhhaS9VUDgvajF4Tkx3WkJrZ21pMmpm?=
 =?utf-8?B?TUxMVFdieUJPMXozenhxdVBUK3cxYjkzZUJKTWFHbkZoU0pFdXNZbzNITW82?=
 =?utf-8?B?WGZpM3h4MkpWa1EvUWRxY0RMemd0M3pSTWZFSDNSL3pKMm5NU09CbDVkcndv?=
 =?utf-8?B?MUtVbzRsTVVvbVlSWGdWeVNqNVRXd1MzZzlyUTdNR3hOSm1rNVRzWThvZDJn?=
 =?utf-8?B?SVRsR1Z1NWJtM01KRlRxZVRYY2dYQWZZQWtmY1RPaW51cXg3L0VUTEFuQkty?=
 =?utf-8?B?V0xiNVdPL29qcDVGK08zMk96Y2RNbjdZWUljOWZsRURUUzRDVEU5cktTbEpG?=
 =?utf-8?B?TmphY1ppbUhzRHAzVjJaMDRNalpHWnlTSEd1N2wxL3REZzdmZC9LdWo5cXdY?=
 =?utf-8?B?UHFiQ0ZFbUR5NnM5MkpKMW9MaTlhRU9pZnJhU0xqZ2RpVmhLamZRUDVTc1dQ?=
 =?utf-8?B?anpkZyttejNQcWluZE1MMUZvTmFjS2VsTmdwNEoycGJaR2o3aEVQQjdQUEw2?=
 =?utf-8?B?c0ZHS0ErbmRSS0JuTVF2d2xpbVdKWDhia3J6K2FyRks5dFpXZjRCa29pd3J5?=
 =?utf-8?Q?ieeA5LAnJB4bXRCAcxUcwUUuJQwG5uYkai?=
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Dec 2020 08:55:49.6259
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: 75908442-ae08-4e82-ecea-08d8ad69def7
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: j3RAk2zbQJgLgL5VYASnBIHYxeJZpR0ZjmFLU6PctQtHXOr8cplMWJUE7hGGNp2IsJbz9689me5i2s6if1QETQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3737
X-OriginatorOrg: citrix.com

On Wed, Dec 30, 2020 at 04:02:08PM +0000, Andrew Cooper wrote:
> hpet_setup() is idempotent if the rate has already been calculated, and
> returns the cached value.  However, this only works correctly when the return
> statements are identical.
> 
> Use a sensibly named local variable, rather than a dead one with a bad name.
> 
> Fixes: a60bb68219 ("x86/time: reduce rounding errors in calculations")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
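
As an aside, the caching pattern the commit message describes can be
sketched as follows. This is a simplified, hypothetical example; the
names and the rate constant are illustrative, not the actual Xen code:

```c
#include <stdint.h>

static uint64_t cached_rate;  /* 0 until first calculated */

/*
 * Idempotent setup: the rate is calculated once and cached, and every
 * later call must return exactly the cached value.  Using one sensibly
 * named local for both the calculation and the return value keeps the
 * two return statements consistent.
 */
uint64_t hpet_setup_sketch(void)
{
    uint64_t hpet_rate;

    if (cached_rate)
        return cached_rate;   /* already set up: return the cache */

    hpet_rate = 14318180;     /* stand-in for the real calibration */
    cached_rate = hpet_rate;

    return hpet_rate;         /* identical to what later calls return */
}
```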

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Dec 31 09:20:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 31 Dec 2020 09:20:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60456.106106 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuu8D-0007W2-4y; Thu, 31 Dec 2020 09:20:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60456.106106; Thu, 31 Dec 2020 09:20:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuu8D-0007Vv-1h; Thu, 31 Dec 2020 09:20:29 +0000
Received: by outflank-mailman (input) for mailman id 60456;
 Thu, 31 Dec 2020 09:20:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eIAU=GD=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kuu8B-0007Vq-Tk
 for xen-devel@lists.xenproject.org; Thu, 31 Dec 2020 09:20:27 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 23639c43-f2fe-4376-9ab8-e5250c2a7422;
 Thu, 31 Dec 2020 09:20:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 23639c43-f2fe-4376-9ab8-e5250c2a7422
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609406425;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=TwKZvhaWdoloaatJwqQSOY86w/qSW1E7cgF73NYgEQI=;
  b=bfSyjZ0bDBmJV63ug/FHqiLq1AMsbLlmoWt9X4GFJFHnBZkVADKKJB6H
   bwADXXV9KqhwC+AFX7cqWrWpmrjrsacdr+ESiCBeHZ0/3Z/fPrJyE2dtR
   5dfwypcdtvnHxs80ihfbJviEmV/lr58PSKtXRGPBRBx0WP1US4CPrJ5e6
   A=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: y/QRrfxr3OiAkFFQNis31YGruPcxEiROiOz+SnKJzpZYuZo+h/NoKiXOT/gcyCpPYNZ6R4XwSh
 +ggq7srKfuNaAQ4bnhkCYDuQmRMZgB+6OS9ZdqH3RahTIH7dzH2sb202iF7VoRAQJnyoEvzEW/
 djxY1TdgOYIQH3eFuQTVTubNnsJicH54jkwiP78h9j+29shTWtjbX+zg2wCcwKr4wwIWf/rBp4
 wziNIIX9mNZQcTVExkga0b5yXMz9nwAa/diy5pUP4XWrmKdLL7sYKKEnkMImsRErgX26p1qE5g
 UR0=
X-SBRS: 5.2
X-MesageID: 35456442
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,463,1599537600"; 
   d="scan'208";a="35456442"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XC0JzoMQEa10fcDz2fj2zA3dw4/A5XpnuNkpTlqRDPhe+B42RGucC3+vMv5ZYELmjDMR3L5bivo93M/YIUiywEA67XQhzPcxkfJ2q15tIB3eAgd6mtRB/pN2VroGbPaWfZ1XhxzguIPgpGTIwZNg2TZMWEPjyVDr2nBA+J01YylHs+9ROXAb8XoXkumS/8ZedrpqT8Y49nUVt6+Kc2kynxzlG9eTgEWgFSoi4AVWqy3Hs9meNRqmY+ys4H8LAUtaXcEAtSjgAsIm6tSn2UZPpet5icku5cKHXgpwxyECulguIJzaGHhEZ5HNl/wHqGjwLhLsa2VT7QzN75UxuFqKOw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=R3mxOnLO74MPNjud4xKFP0mDDrraeYSt6AZo5I4xlSQ=;
 b=ISza6vBwz+QiVSyHIKJjZFGHePbM6qiIlXxofItzWqrncz3g/045CasDj79TQjcs+X2S1zjQUvb2gkzCpqiEGVuowFWxRq7tL89MtvmKTWjYRZXpGyhbdpskS3UOk2hVjAMscIzGIql7KJLco14Gmh2dSgCfqR/xnnyMqAiJCSD5vXJfL6nWZMZIsK82pojHHMNzuLbQqi3GIJwUmrYOucr5ltRkbhzdU4h97SN3+vEorxk+RRWv37B8dIL12yJTuGQB3hhwrHaTDd5SMaRWGI5BD2O9Lvnreif+QWwYg5e5dDG1DzZacPOWTwUem33iFWfSz3W5nSBCiNZuA/NWjw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=R3mxOnLO74MPNjud4xKFP0mDDrraeYSt6AZo5I4xlSQ=;
 b=CZSJeYJsql/B+UwJ0btZaMpZsHiboJCw5nV2sFsJsaxtDPU9ytX1tukjP58oZSfbUu/Pp/3hx3gZMNXVqCxWcFKPnwmxNbwD9CFfDbvSeSgSnDiGF0VbeTucf9BuCfChef/GOaUn/0N56jSsJqOPBGgmA74kLQAUUdpPbO7n0b8=
Date: Thu, 31 Dec 2020 10:20:15 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: <xen-devel@lists.xenproject.org>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] xen: remove the usage of the P ar option
Message-ID: <20201231092015.ojmlwfvqky6uqyig@Air-de-Roger>
References: <20201230173446.1768-1-roger.pau@citrix.com>
 <b90b93d0-ea83-bc00-6dc0-cbe9e7cfa1ce@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <b90b93d0-ea83-bc00-6dc0-cbe9e7cfa1ce@citrix.com>
X-ClientProxiedBy: MR2P264CA0019.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:1::31) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c0170727-a61a-4f00-9892-08d8ad6d4b59
X-MS-TrafficTypeDiagnostic: DM6PR03MB5356:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB53569B344C0E1EFC8FF9D8C98FD60@DM6PR03MB5356.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6790;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: HEG7sAD1Q71IoCXGQjlGXrdUZumJsMTYjcVjJaQ0Xypp8Yck6FYD9X1YC+AQpWE+dSbpBUAA10JDJD0NqMvSH2uRpBQmHPnbuk5s4m6qlRI6yv9bsLFQf6M/zDFeqPdXzCjNJ7wqDtbfu4/jyvXo9ZmYQpLsFk3zy337tGk4oDXDIgwKiJLYdhj/N8d7EMNnawY9olMa3cJKTPRUt564WWcMXClidfOMO7vw0WDX5yQmu+p/PmCHlSDR/uPb9Nzm7gDfEDTytDbY6doxEkG52aYuZensCTC1OsQuvIVUDI/YGLS8FRSbIJkXIyB8vaYexDHn31eqkfnqkgbuxBdnU8mG3v/Y7juDK7jN1jXcJ9KmwQWvWIhth5Ms4aah2W2SLp+htFc2Eu6piKzBmW2eF4jTle5MtGnPEMXgJxkrGBx+37EEY5IB4hUg97WF84pgx+Zg5Ar5E1BATsFV5fPXAg==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(136003)(346002)(366004)(39860400002)(396003)(376002)(54906003)(5660300002)(316002)(2906002)(6636002)(66946007)(53546011)(66476007)(6496006)(16526019)(1076003)(86362001)(8936002)(26005)(66556008)(186003)(6666004)(8676002)(6486002)(9686003)(33716001)(85182001)(478600001)(4326008)(6862004)(966005)(83380400001)(956004);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?NzlmbWFMMVVkclNwd0hZeTdaU1k1ZWMyM3RSaHJYdjdaUUFweDlENXlyUXJz?=
 =?utf-8?B?ckRIQkw4WndVQWlaWWZEMkl3Nk1Na1lXeDFLRVh5cXkvU3dXWTVJMXlDOXAy?=
 =?utf-8?B?aGtXUHQ5ZytzOFhQWldwMUdqUjA3dnR5bWVXVEpzZDlZMUtqeXZRT3hwa0JQ?=
 =?utf-8?B?WlVWemd0dFh0elpVTkd5bDNEOWQ1M0VWZXJEZE9YcHdUWjhNdE04ZWFnUENx?=
 =?utf-8?B?QzdDanhaUDFIbmxVTVNyYS96VU5tSklPcDNHVk5HRXBNd2dtRXFzc3FVZllO?=
 =?utf-8?B?ODRyS2ZUaDBXdy9yRkU3ZlVPZlU3Q05wUEVqWVRtMCtqVDNuUXNXVjF4Unk2?=
 =?utf-8?B?TU4razg3Z2JYeUIxb1dEU2Z5WVBtdDZrdzRMQUdWMTdLMU5OZENlNHNta2sx?=
 =?utf-8?B?eGJMVTFZZlVOejZPKzgwK0JrL0JVK1Y0NUZDZ1M5aVJ2MHErSkRpaTE1NzRZ?=
 =?utf-8?B?ZTRyTHJUZ3UrWEpqRE9WMjV2cnh2bFJzWEV6M3c0OWt0Ty96NHJSNmFSTzg5?=
 =?utf-8?B?YWtmMTdQaC9Ic3ZqOFJGQTlxdVpkK0lwS2NMU1NwT1R1eEtaUjRYSHp3RmhX?=
 =?utf-8?B?bE1WRHY1RXZvUkVHZXVPa0FuZUtPaHNzbnhleWhYdXN3cnVVMkZlM3lYLytM?=
 =?utf-8?B?YTBQNG15Mkl2Z0NPRkdJZDFybUVlTUwzYkM4Umt4RVYrZGNhZDdmMnFTd3Jy?=
 =?utf-8?B?eEVVeFJJekZjdEpXWDBzL1VoOEhVQkJzYTZtWTcvQk1BZ2hJeTB6MjlmRE91?=
 =?utf-8?B?b2Fab3RXOUFwb2JBK1ZjSEJCNm1nWXBXZnpDOU5qd0dQbVE0bkk5dU1aU1d3?=
 =?utf-8?B?Uzh3V2tQdWtKcnVhLys5ZjcvbGdzajN4Mk1GZGJwYkxkNEJkbVE4R1VKV1Nk?=
 =?utf-8?B?R3NvZ3dic2RjZUYxUUMwNm45Y0NVUW0rRjFudTR4NGtqM1FSdThKVVl5SDZU?=
 =?utf-8?B?emRwUHVwUUMzM3NtWGFKUG11ZEFSRk8rbjdQanpzWmwwSHpwcnBNMlpZaFVC?=
 =?utf-8?B?cU5JeGVZb3Nxb2l2eEhGN3pTSU5wYmxOWXh0V2JyMWhTem1VamNMS3NCc1dT?=
 =?utf-8?B?M1BVSHFoYmVMTWhlL0pzWjJmYVRvS0VBb0pQOWVmSkJKbWw1N0pJc3ljZ3Zo?=
 =?utf-8?B?NXR5QytZN2IrcUU5anMwbnVWaHpqdFlzTlZ3TlBEOVNnSktkUmFpaFhzYnJw?=
 =?utf-8?B?aGErOEYwekVRenhzZ09XdUZGZ2pPL2tMT2tKWTlPUyt5UHAvSGVpNWkwMW9k?=
 =?utf-8?B?alhycXE3N002ZG16Skx5SFkrdVpZbXBZZEVFSGhpVWZ6d1pUNm43bWR2ZVEz?=
 =?utf-8?Q?JfKsmetCxxta4FbwX/f5kmtX7wp8tjxz5k?=
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Dec 2020 09:20:19.8515
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: c0170727-a61a-4f00-9892-08d8ad6d4b59
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: duPZv+VNmeXn6ps/+z1Ph++0XPlR+BhrvFUTox9RCIrAaAnBZbuB/LHdrW3qztuigPpjW7qpESVcLNvqauRHMA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5356
X-OriginatorOrg: citrix.com

On Wed, Dec 30, 2020 at 06:32:58PM +0000, Andrew Cooper wrote:
> On 30/12/2020 17:34, Roger Pau Monne wrote:
> > It's not part of the POSIX standard [0], and as such non-GNU ar
> > implementations don't usually have it.
> >
> > It's not relevant for the use case here anyway, as the archive file is
> > recreated every time due to the rm invocation before the ar call. No
> > file-name matching should happen, so matching using the full path name
> > or a relative one would yield the same result.
> >
> > This fixes the build on FreeBSD.
> >
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>, although...
> 
> We really need some kind of BSD build in CI.  This kind of breakage
> shouldn't get into master to begin with.

Fully agree. I'm not that familiar with GitLab CI, but since we have
our own runners there, could we also launch VMs instead of just using
containers?

It might be easier to integrate with osstest in the future, since
FreeBSD has now switched to git.

> >
> > [0] https://pubs.opengroup.org/onlinepubs/9699919799/utilities/ar.html
> > ---
> > I'm unsure whether the r and s options are also needed, since they
> > seem to only be relevant when updating a library, and Xen build system
> > always removes the old library prior to any ar call.
> 
> ... I think r should be dropped, because we're not replacing any files. 
> However, I expect the index file is still relevant, because without it,
> you've got to perform an O(n) search through the archive to find a file.

Right, the descriptions of 's' in the Open Group spec and the Linux man
page seem to differ slightly. From the Open Group description it seems
like 's' should be used to force the generation of a symbol table when
no changes are made to the archive; otherwise ar should already
generate such a symbol table.
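
A minimal sketch of the portable invocation being discussed: remove the
old archive first, then create it fresh, so no member-name matching (and
hence no 'P' option) is needed. The file names here are made up for the
example and are not from the Xen build system:

```shell
# Recreate the archive from scratch, as the Xen build does with rm + ar.
rm -f demo.a
printf 'placeholder member\n' > member.txt
ar rcs demo.a member.txt   # 'r' insert, 'c' create quietly, 's' symbol index
ar t demo.a                # lists the member: prints "member.txt"
```

With the archive always created fresh, 'r' never actually replaces
anything, which is why dropping it (and possibly 's') is being debated
above.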

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Dec 31 09:23:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 31 Dec 2020 09:23:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60461.106118 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuuAu-0007fg-Hy; Thu, 31 Dec 2020 09:23:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60461.106118; Thu, 31 Dec 2020 09:23:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuuAu-0007fZ-Eq; Thu, 31 Dec 2020 09:23:16 +0000
Received: by outflank-mailman (input) for mailman id 60461;
 Thu, 31 Dec 2020 09:23:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuuAt-0007fR-2i; Thu, 31 Dec 2020 09:23:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuuAs-0002cH-To; Thu, 31 Dec 2020 09:23:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuuAs-00032c-MU; Thu, 31 Dec 2020 09:23:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kuuAs-000395-Lx; Thu, 31 Dec 2020 09:23:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sN92Aw9eL+JL0EidFRZAjJYafvpKlVvEuv5Qm7F6Tg4=; b=iTuqBd7LrswuxZITA47+CA3Qig
	PlrK+pahzqQohwjoI8mI0V9+qdoruTiUQMgX+Vps+UpD75xxbxqqtcLMPTScpHQbgxmgtvykr+yBO
	CVfktNBqC3R7M691D2rZ1tgtnDFkW4g2aVVOmOjjBuFgBWJPceP84m4HMgdUSo6VuxhM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158023-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 158023: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    libvirt:build-armhf-libvirt:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    libvirt=bed50bcbbb2796aa88f1c85a2424fe9bd7944f89
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 31 Dec 2020 09:23:14 +0000

flight 158023 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158023/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 build-armhf-libvirt           2 hosts-allocate               starved  n/a

version targeted for testing:
 libvirt              bed50bcbbb2796aa88f1c85a2424fe9bd7944f89
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  174 days
Failing since        151818  2020-07-11 04:18:52 Z  173 days  168 attempts
Testing same since   157715  2020-12-19 04:19:22 Z   12 days   13 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 33734 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 31 09:25:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 31 Dec 2020 09:25:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60470.106134 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuuCg-0007p4-4a; Thu, 31 Dec 2020 09:25:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60470.106134; Thu, 31 Dec 2020 09:25:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuuCg-0007ox-1W; Thu, 31 Dec 2020 09:25:06 +0000
Received: by outflank-mailman (input) for mailman id 60470;
 Thu, 31 Dec 2020 09:25:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eIAU=GD=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kuuCf-0007os-16
 for xen-devel@lists.xenproject.org; Thu, 31 Dec 2020 09:25:05 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4706a326-2606-465b-b0e5-6320295c3935;
 Thu, 31 Dec 2020 09:25:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4706a326-2606-465b-b0e5-6320295c3935
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609406704;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=lfUvN8zfXKHeukRdDnKKp4SYu6Fn833IN8FOshjS9Po=;
  b=UveZVyPq82tengpnI3Ny/D9k3i14G1er2AAWZISkYmDEP6z3F8x48iUD
   PYRgwN/WI68sw9RsKi8YR9jtfZe0csVxDyqGUYaq/DeHJNFxssG2nSMVO
   isQ0OWnqFMvNooReeeR7yZoNXaFZ52nI+qUlFN0KrKXFEkVNYOVizq9mX
   o=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: q7turZOLq4N746f+4E4aueEbCt92DdKlySgMRekIjfkLaAhXRivvLiuwW7cnqLuYcQQW18fU7U
 fAjV3Y7POkqa7Y41+nNPOxWAwzhKs6OxUHOjPv/16B+STlC0Yi3loYa8D6zjGyHYhXlIihDJpz
 HhdiI1UctP3QVjcN/XvbPQdepSJfBcsiTdmAhDofoTL0Q/IrD78X9yekU0UJ4xk3NJ4qJfge37
 q7S2M93cOoALvln16O+ntQZz+pCatZWsxHrZUiC1+LpXXjm3DLkcGJX8A8k8TArJUyWegA2gqm
 xZQ=
X-SBRS: 5.2
X-MesageID: 34561420
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,463,1599537600"; 
   d="scan'208";a="34561420"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RtcyqIPEAJgQ0Fo2Xsl7pjVEzLE57fNFEF9wZb7FGNlS1CjP42oDLWRuioqOAe5uKIZv7pRuLWww0jTlzOtQhCXwdsr/yerCS7iJNTn/HB5d48bsXg9rPsoeoIqb7ftpFgXKVuyMPv4gomGfAMyA4RusPXShd+Q1FIYikzWvFvzX6UlBSVip84JTNWvFUaS7zbAr2wsMQlqkEzdQH49cQxmavypopAH5cIsNRQafyzRqrjWcDYHfSIrH7DBDZiZciA+s140PPQo61fEd6QKda8AhONqiJJHwVIbhmZN5VgBmRmq8+94DlH+ZACCeeorqwvq4Z8BU26vB6gM1gI6oJA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+wRRaUbk1QHoHY+VjKm8jZXNML0hYvvjyVmvCx/fOgo=;
 b=E2pxV2QHcjzpSvXtm47bBE2ZTBHhzWRkOYjojw1JHuiPZKd3oZmfJfjpK2I84P3xea/vpy5JevKZUlk3l6YHdBAl/eMePyDrsVKj26GkDb5PiANzi097iVty6Pf1aiQwE6GrjDU8kCaxTc21hu94n0qMXiOGwQrV2s3pggPwR2O2NJgI6GViEzIGA6zAT6gDG4GsABMPEOtJhs7+nlZerxrwxP+L9EME3CQ6rKr+DBKN+47GGelt62VLP15bIyHODIbgkBt+dfBLA5rHBSpbI01K+59gtBbUxQ44n5ZHp6j4QC7K4gnR8+3wowLWYb7BbA6vI8b7lgXyAi+cLrZ0sA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+wRRaUbk1QHoHY+VjKm8jZXNML0hYvvjyVmvCx/fOgo=;
 b=TikMOYIRttZy2oAQZwfG4NvL+ESMb0o4IHR+bJGUzPrD9Ne1OKwTGkf4m8KwgD2rf+E9QxmNSbmlz1GQ5JZTrRA7v9eoAi7IEpN4l4fjMvWfrwekMFzn/ciOa8Brbzvvcek4vYn6CL0Ffh2+yEnSXOH4PHBcSvrVMP+XxGN/WlE=
Date: Thu, 31 Dec 2020 10:24:54 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/svm: Clean up MSR_K8_VM_CR definitions
Message-ID: <20201231092454.awdbppj4hkqpedtu@Air-de-Roger>
References: <20201230193525.12290-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201230193525.12290-1-andrew.cooper3@citrix.com>
X-ClientProxiedBy: MRXP264CA0031.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:14::19) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4328ef6e-43e5-4fd6-6ef5-08d8ad6df29b
X-MS-TrafficTypeDiagnostic: DM6PR03MB5356:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB5356CCDACA311F9BECB863B78FD60@DM6PR03MB5356.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:3631;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: x+WKcEnR2ne0763N6bG6QfEldk48XOAGVY+vK7G9EOkYE9+Sq5mIqB8mZ77LmO4yA0hQCEkJWNOQPB5xIhSuOxPKWN1QPkxPZmP7p8ONJV6k+csWoxv5dctEaK/LtrXY1ZNr6VnPmJmyN6LKlJHwUpGMYqPxVb3brSyIyint6S47pKDFYo80Z7Bb/xYDTeAGlm8qc07ElH8fumX4oQF9QC5MImu+pOSxFOqZibxmIpbv281PbnKF/TVBqnyQmxZCEQoegUUSrOwX/67XJ8jOZ6xcZ4dKFj956vrJ6fmKrCaJ6VTMxkqM17aoe5zoW98QJJ/LLKQlT2Llcys87RSb9vLpBX23irnmhzBTvV926SV8Xhh3rTxvsVnmTD5wFCu2Cqi7vyE8JNYnDoAWJYePcQ==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(136003)(346002)(366004)(39860400002)(396003)(376002)(54906003)(5660300002)(316002)(2906002)(6636002)(66946007)(66476007)(6496006)(16526019)(1076003)(86362001)(8936002)(26005)(66556008)(186003)(4744005)(6666004)(8676002)(6486002)(9686003)(33716001)(85182001)(478600001)(4326008)(6862004)(83380400001)(956004);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?SnJVa2xFaUVUckpDTHJVM0xKQjFZSWI5bzNNOHRXK3pzN3o1WGNUV0pMek5i?=
 =?utf-8?B?citmMDRWSS8zZjdKU0xsUDBpVVN3eGxtRVJPRW9pZGorVU8zMGlYWmlhaENk?=
 =?utf-8?B?NHZNaEdPc2NyQVdXUDZHRktPSG53TStZUlM1cUhFZWV1aHRYUS9VZHdrL0tK?=
 =?utf-8?B?Vlp2R0poT2JEbkE2UEVwSWhMZEdyWDQwRG1OVWQyb2tUMVBYOHhPZ2owZS8z?=
 =?utf-8?B?MkNOclB5VHFwUHd0dFFKa2ZqVThVazBmRzRXajlWaTNDaWNjclkzMU9FMlFq?=
 =?utf-8?B?RnpYK3NicUVJMVd2SlNIRE9rMGVrSU5ubm9MYmJYQW1lcSt3UjBlTlQyZ2t6?=
 =?utf-8?B?bXNQaldINDB1Wnl6cFl6R3hSR3dDZmR6aCtVOENaWDc0UXlydWxqOWhuWTdW?=
 =?utf-8?B?czFGUlZLZGI3TncwRkM3TGU0a05uQTB2N1JNQU1iK1ZPdGZPZDBUSktCRUMw?=
 =?utf-8?B?dytDbVlYMkVQdHJEUUNmT3JiWkFNMXgxZnNDc2pWR2s5U255b25EcEdlNnd6?=
 =?utf-8?B?dkFuOCtDUmNiTlE3c2ZoMUtsTmJxclllQk1iTEE2YkNLOGdqSVBRZnJvWTN3?=
 =?utf-8?B?akZ0ZTFCcERZUWhmelpTMW54QUpGVXNFK3k1TVRNOTJad0g4c0Q3TGVKYU52?=
 =?utf-8?B?MURscklvKytleXRKazBPWDZnbkxYb0RXR0xVMzlWMktOY2dTRGJibjcvMkIw?=
 =?utf-8?B?OFR4N1FKMnJQMExPMENjSnNzd2VKekxxVXJBU2ZWR3JJcTdQTEhDc0IwZHNn?=
 =?utf-8?B?QnFic1Fmc200c1hDOU9CYjJ0VUptN1JCcFRZVWhodytZMXQ1Y3NUbDQrSVNm?=
 =?utf-8?B?L0hLVVlVQUhldjgrV0dkbEZwL1NaWVplbXA4bDNlR1VOeUZHSlBVUngrLzZk?=
 =?utf-8?B?M0lkRXR3T1VDbHFhWituclVwOEs3ZHZBNDUyQkRReWIyYXZ4VDY0UHAvRWp1?=
 =?utf-8?B?TjFCZnhNWVk4NVZvTzk5WWhWYTNpL0o1UTNYYVVDUnNMdHBQRHJ4bllOd2ow?=
 =?utf-8?B?VEFJNEtXSEd0Zlh4dUNSSU4vbDlZMStWNlFZa3FNTnY0dmhVUlBLbVVuZ1BB?=
 =?utf-8?B?Y0FHcEFvaytpR3VzWHMvTjRYMTZFaUlTU29WZFEzZlpEYnVjK2licXZhY0ND?=
 =?utf-8?B?cys2SlpvU3JDRjBZZktXSDZ2aXN2ektyNk1hZmdQdUs5b1JLWVBIQmJzM1Fs?=
 =?utf-8?B?ZU5IakdGOUswc0YvZmxTUFo0Z2Z0eTRJaGVPMWQ3SUVFSXVsNlhYZGFWMXNI?=
 =?utf-8?B?aC9RRk5OaWp6TExkQ1RZOWlYMGVNWFRrYkR3Zzh2dmtST283NzlzeXh0b0Nz?=
 =?utf-8?Q?93yM1DxtEF+NRNgb5IQoz/+j19E3gp9Rwd?=
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Dec 2020 09:25:00.4625
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: 4328ef6e-43e5-4fd6-6ef5-08d8ad6df29b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 0in6rO0I+ye/WMK+Js/8yGLqdS1jVWP8x4bI87AsZXR84F5PkHzVeRRorDtiK0bFITZnP/+1XVGtGYvA2FWTJQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5356
X-OriginatorOrg: citrix.com

On Wed, Dec 30, 2020 at 07:35:25PM +0000, Andrew Cooper wrote:
> Drop the unused shift number, and reposition the constants into the cleaned-up
> section.  Rename VM_CR_SVM_DISABLE to be closer to its APM definition.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Dec 31 10:18:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 31 Dec 2020 10:18:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60486.106169 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuv1g-0003p5-Ii; Thu, 31 Dec 2020 10:17:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60486.106169; Thu, 31 Dec 2020 10:17:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuv1g-0003oy-Fb; Thu, 31 Dec 2020 10:17:48 +0000
Received: by outflank-mailman (input) for mailman id 60486;
 Thu, 31 Dec 2020 10:17:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuv1f-0003oq-9i; Thu, 31 Dec 2020 10:17:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuv1f-0003an-3w; Thu, 31 Dec 2020 10:17:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kuv1e-0005rg-TE; Thu, 31 Dec 2020 10:17:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kuv1e-0002zg-Sk; Thu, 31 Dec 2020 10:17:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mnceq7IRR8bjQwu29B6mY7iL8+7NjKLe0jxUcrGbqks=; b=s3SuJUHRR+bW948mbOU+PHGaSg
	WS7EN1gNlfurXgbr7/T42rYC/zigdpkxkYRRThbN9XcKX2v39l55LzfH/lIbaFSbumZ8U2zIIZ9d5
	bLC0UoWuS53c1Qx656eLR/hxFJSaBaz7L7/iGXuonor3NFRhBgwpMvA6oSuKW8wTSNyg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157984-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 157984: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=dfce803cd87dc139cfe4da1a68a5b3585e9e47e7
X-Osstest-Versions-That:
    linux=19d1c763e849fb78ddf2afe0e2625d79ed4c8a11
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 31 Dec 2020 10:17:46 +0000

flight 157984 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157984/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157757
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157757
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157757
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157757
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157757
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157757
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157757
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157757
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157757
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157757
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157757
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                dfce803cd87dc139cfe4da1a68a5b3585e9e47e7
baseline version:
 linux                19d1c763e849fb78ddf2afe0e2625d79ed4c8a11

Last test of basis   157757  2020-12-21 12:40:27 Z    9 days
Testing same since   157976  2020-12-30 11:09:59 Z    0 days    2 attempts

------------------------------------------------------------
382 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   19d1c763e849..dfce803cd87d  dfce803cd87dc139cfe4da1a68a5b3585e9e47e7 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Thu Dec 31 11:03:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 31 Dec 2020 11:03:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60511.106209 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuvjK-0008Fb-Jy; Thu, 31 Dec 2020 11:02:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60511.106209; Thu, 31 Dec 2020 11:02:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuvjK-0008FU-GG; Thu, 31 Dec 2020 11:02:54 +0000
Received: by outflank-mailman (input) for mailman id 60511;
 Thu, 31 Dec 2020 11:02:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dAqn=GD=gmail.com=julien.grall.oss@srs-us1.protection.inumbo.net>)
 id 1kuvjJ-0008FP-MH
 for xen-devel@lists.xenproject.org; Thu, 31 Dec 2020 11:02:53 +0000
Received: from mail-wr1-x42c.google.com (unknown [2a00:1450:4864:20::42c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ebe8e422-2334-4798-9250-f90471252faf;
 Thu, 31 Dec 2020 11:02:52 +0000 (UTC)
Received: by mail-wr1-x42c.google.com with SMTP id i9so19830315wrc.4
 for <xen-devel@lists.xenproject.org>; Thu, 31 Dec 2020 03:02:52 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ebe8e422-2334-4798-9250-f90471252faf
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=xvYCiRm40T+IOndfILisAAlIvqeZAGx2ftumdhEjG58=;
        b=liXJ/Hhfcmaymy/3WHUZRo3qF9uEUdg1iPeGW+uSCirhhPUeuq7DjMoiSQ8ulwhEul
         H8/Q6aR5diKgAHj2sNu/F0sr7SJIvvdyRA4cyaanntdOM0z0/61M9qe4kX4Jj30SGINQ
         s8JG3TR2PYu4R0nBDdMAr/4KP6WQdIK2uJZERGSkmHs8KkXYDC7P9ZH/sqBaMn/rlxUw
         ycQdw29byDJwBPMXLOaDAi9BiQJf66MCY2Tvz8gtvkdFqq3tp+ms59V/KZvCBr6jWAYo
         mTkeK8VDsTEeHe4Y4qGaWtD0zsyQsTp4Pn/J+g5kf/8kQj2tUAQzDueMEZDqJ/m7zvQA
         sz1A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=xvYCiRm40T+IOndfILisAAlIvqeZAGx2ftumdhEjG58=;
        b=LiJ42MtCvr4Ml6h6HxVmGgbCajShhnaOBlRiiV43f6gfFDSNVRZBezwFDYWyTJK8S7
         NZVLXVQib8MIFl/2YxiVEO54m2lrcUk/U5NjPpP8v2EjFx8Zp4HHsnjZ8VuFmnJn5Mlj
         49Md1Hv+5abO2hP0y5pBjVEJYugSA4Ymk04zme4EY1cDv+mi1SOfR7DydRNcrXwLBvxX
         SGtoXpSWhK52hUuRjQTbD/WoQXn12seEbktaf+pS63YDRnsh9n7v/FYCIlmIMCbEZx0j
         ORqR/tIR8iSLrvp4UADY7AMTOmphSQ+M1EPurCd3jeYS/G5SSuCUMIE9w7vO4r1Em6hT
         5+ZQ==
X-Gm-Message-State: AOAM5333ArASD4O6aThxIwaKU0FI+kIxchIXyKGIljsUIfTaoSJSkVRV
	/WcvuSniSpWnDdA1JDsy8np0qSm3qP7+Na7LSTk=
X-Google-Smtp-Source: ABdhPJwzgh3ckWmJJYQkSbenwffHePG+RSFmUgjh2OCViwgB7Y03yquEOlAC8V11OBDBgUpyBFXIJtsDS1WL5r9v1Jg=
X-Received: by 2002:a5d:43d2:: with SMTP id v18mr64418085wrr.326.1609412571831;
 Thu, 31 Dec 2020 03:02:51 -0800 (PST)
MIME-Version: 1.0
References: <DBCC8190-7228-483E-AE8A-09880B28F516@gmail.com>
 <20201229091730.owgpdeekb7pcex7t@Air-de-Roger> <eac811f4-51fd-9198-446a-230dc6915f62@xen.org>
 <20201231084556.ogvltixgd6ovlja2@Air-de-Roger>
In-Reply-To: <20201231084556.ogvltixgd6ovlja2@Air-de-Roger>
From: Julien Grall <julien.grall.oss@gmail.com>
Date: Thu, 31 Dec 2020 11:02:40 +0000
Message-ID: <CAJ=z9a2xJ2g_UL2oMzyQBGB1cA3nqdOrDYMPg2urHesHs0Dk5A@mail.gmail.com>
Subject: Re: [openxt-dev] VirtIO-Argo initial development proposal
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: Rich Persaud <persaur@gmail.com>, Jean-Philippe Ouellet <jpo@vt.edu>, 
	Christopher Clark <christopher.w.clark@gmail.com>, openxt <openxt@googlegroups.com>, 
	xen-devel <xen-devel@lists.xenproject.org>, Oleksandr Tyshchenko <olekstysh@gmail.com>, 
	Julien Grall <jgrall@amazon.com>, James McKenzie <james@bromium.com>, 
	Andrew Cooper <Andrew.Cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.co.uk>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

On Thu, 31 Dec 2020 at 08:46, Roger Pau Monné <roger.pau@citrix.com> wrote:
>
> On Wed, Dec 30, 2020 at 11:30:03AM +0000, Julien Grall wrote:
> > Hi Roger,
> >
> > On 29/12/2020 09:17, Roger Pau Monné wrote:
> > > On Wed, Dec 23, 2020 at 04:32:01PM -0500, Rich Persaud wrote:
> > > > On Dec 17, 2020, at 07:13, Jean-Philippe Ouellet <jpo@vt.edu> wrote:
> > > > > On Wed, Dec 16, 2020 at 2:37 PM Christopher Clark
> > > > > <christopher.w.clark@gmail.com> wrote:
> > > > > > Hi all,
> > > > > >
> > > > > > I have written a page for the OpenXT wiki describing a proposal for
> > > > > > initial development towards the VirtIO-Argo transport driver, and the
> > > > > > related system components to support it, destined for OpenXT and
> > > > > > upstream projects:
> > > > > >
> > > > > > https://openxt.atlassian.net/wiki/spaces/~cclark/pages/1696169985/VirtIO-Argo+Development+Phase+1
> > >
> > > Thanks for the detailed document, I've taken a look and there's indeed
> > > a lot of work to do listed there :). I have some suggestions and
> > > questions.
> > >
> > > Overall I think it would be easier for VirtIO to take a new transport
> > > if it's not tied to a specific hypervisor. The way Argo is implemented
> > > right now is using hypercalls, which is a mechanism specific to Xen.
> > The concept of a hypervisor call is not Xen-specific. Any other hypervisor can
> > easily implement them. At least this is the case on Arm because we have an
> > instruction 'hvc' that acts the same way as a syscall but for the
> > hypervisor.
> >
> > What we would need to do is reserve a range for Argo. It should be
> > possible to do it on Arm thanks to the SMCCC (see [1]).
> >
> > I am not sure whether you have something similar on x86.
>
> On x86 Intel has vmcall and AMD vmmcall, but those are only available
> to HVM guests.

Right, as it would be for other architectures if one decided to implement
PV (or binary-translated) guests.
While it may be possible to use a different way to communicate on x86
(see more below), I am not sure this would be the case for other
architectures.
The closest thing to MSRs on Arm would be the System Registers, but I
am not aware of a range of IDs there that could be used by software.

>
> > > IMO it might be easier to start by having an Argo interface using
> > > MSRs, that all hypervisors can implement, and then base the VirtIO
> > > implementation on top of that interface.
> > My concern is that the interface would need to be arch-specific. Would you mind
> > explaining what the problem is with implementing something based on vmcall?
>
> More of a recommendation than a concern, but I think it would be more
> attractive for upstream if we could provide an interface to Argo (or
> hypervisor-mediated data exchange) that's not too tied into Xen
> specifics.

I agree with this statement. We also need an interface that is ideally
not too arch-specific, otherwise it will be more complicated to get it
adopted on other architectures.
For instance, at the moment, I don't really see what else could be used
on Arm (other than MMIO and PCI) if you want to support PV (or
binary-translated) guests.

> My suggestion for using MSRs was because I think every x86 hypervisor
> must have the logic to trap and handle some of those, and also every
> OS has the helpers to read/write MSRs, and those instructions are not
> vendor-specific.

In order to use MSRs, you would need to reserve a range of IDs. Does x86
have a range of IDs that can be used for software purposes (i.e. one that
current and future processors will not use)?

Cheers,

From xen-devel-bounces@lists.xenproject.org Thu Dec 31 11:39:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 31 Dec 2020 11:39:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60517.106221 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuwIN-0002VD-Kp; Thu, 31 Dec 2020 11:39:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60517.106221; Thu, 31 Dec 2020 11:39:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kuwIN-0002V6-Gu; Thu, 31 Dec 2020 11:39:07 +0000
Received: by outflank-mailman (input) for mailman id 60517;
 Thu, 31 Dec 2020 11:39:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eIAU=GD=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kuwIL-0002V1-N5
 for xen-devel@lists.xenproject.org; Thu, 31 Dec 2020 11:39:06 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 568f5b3b-7f69-4cb4-953b-14cc742f739d;
 Thu, 31 Dec 2020 11:39:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 568f5b3b-7f69-4cb4-953b-14cc742f739d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609414742;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=aInut7JyCpiAzvfaDakqSiCf6uQ1lNKSdZHQbTE92wg=;
  b=Xq7aJbqofjuPtNbt9m5uKHFJUwU2YYppGDRvpxMU8QxdewQLxi7LEDcX
   uprbsi3LpsJDbCmEp9ip9fPXQEiOl8Onmvn27o2l3LpwdfzGul3rMnZNM
   dEmygvF0Pnqxs/CJwrbHLj3suxPX1gEVmksuk7Hwjaparz+FD7vTehDl/
   0=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: oOnwT6W/RrcwnMG6MKptGOw0WkJqDyddswjM1Gw1tqxIELwS3ipwlmddzUjqP7g7GfNmv/9bN6
 IzOdtbyhTAWbdB5FHlWahZOwwkbWNGAA5toDyYwy7yl4YS1T4rLWPT/bq8/XEb5Bf0UBUdegTR
 Q/srjy8C5I0akqD+HZafvBEUxSIyJBKDQ9BYxNVqhnfdFh7gK650Zvug1m0q2hIhro01ki20h/
 GYkFNAUAGVs3EtnmqecVXHa1P2y2pxCdEcq71388vd9zkMDul8HitBQFgzPlumDZRd2xL9XKRP
 wSY=
X-SBRS: 5.2
X-MesageID: 34565937
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,464,1599537600"; 
   d="scan'208";a="34565937"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cnYr/kZEz39dyb9POLb0L/NbdjunkQIIaIi8GrFLYc2Yo4geY9rPqopzNnaG7DR7z2jWW2nVCqoXdT5kor/FJQt8fu+7u3cA7MtxhnvP9bU/bW7k5Np+7qr9IpOsKIDHMhAIj9IzfZ+ljqn0GP2s7YSLvPnzBdigpN8fPy/GUDLE8JaUassBtj80iQCP8YmFvxICUiPPWmt0/fhz4a8Qkkk3JmGhDZY9oeXHHURFOGV6Cs64+QCE/oGccPWSuqVBS404YQAA0yl/pjO0h1AGKJ6T26i3/7qZKpbWDiTzEM06w2+L8TjzQ8ox0LNm94lf82oorPrc7q+VIxU7fS3HJg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ErATkNMeYpagVSj1neMVSCRFlx5oN0uzbtN5CjR2NEY=;
 b=cz6pNqolLogKuIG89k8mYjlQnrAF/EwZu5xC3COyUlWl+8N0dNRIV5ckDn3MayRxs/L0tJltDBrmFeXSt5mfoH49OcWtU4P/zjNK3QbiLSgVe37cJgfoDvDWVHrhgQk24/eUNp8DRlfTTt/aHj6MpTogv3s/27JzsljLuRP/tbb406D51lcMKCbWWZPJfiovJdNCN29fup1dzRU6I6mhs6F26GaxrB3LrSQBLniDWVjKenqrvEd0HfpkMCmi44wi9BqK1U5tRsdKTjfgN986mRlkAMf91vt4cI/ri/bi4dolgsahKbimqyNX4lCZL3vkYiQwquOQft8+6l1/lNjwDQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ErATkNMeYpagVSj1neMVSCRFlx5oN0uzbtN5CjR2NEY=;
 b=ilFj6BixARfvjRyHlU5RKvFvOiO6kR6LQeVA/TGzoF6t5h01v0Ym5tO2p26edkPcV3F5bMjPf//KDURXSQVpU1Z0kGAw7XJ9BomTMJGtHF3pkR+JMUxT6uq29t2l8tgN7OJX/2Yu5CU2Ls4DS5MnuDbsFtnUKXqZbDreI+qQuIo=
Date: Thu, 31 Dec 2020 12:38:52 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Julien Grall <julien.grall.oss@gmail.com>
CC: Rich Persaud <persaur@gmail.com>, Jean-Philippe Ouellet <jpo@vt.edu>,
	Christopher Clark <christopher.w.clark@gmail.com>, openxt
	<openxt@googlegroups.com>, xen-devel <xen-devel@lists.xenproject.org>,
	Oleksandr Tyshchenko <olekstysh@gmail.com>, Julien Grall <jgrall@amazon.com>,
	James McKenzie <james@bromium.com>, Andrew Cooper
	<Andrew.Cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.co.uk>
Subject: Re: [openxt-dev] VirtIO-Argo initial development proposal
Message-ID: <20201231113852.f7rgmqrpfdayfzfj@Air-de-Roger>
References: <DBCC8190-7228-483E-AE8A-09880B28F516@gmail.com>
 <20201229091730.owgpdeekb7pcex7t@Air-de-Roger>
 <eac811f4-51fd-9198-446a-230dc6915f62@xen.org>
 <20201231084556.ogvltixgd6ovlja2@Air-de-Roger>
 <CAJ=z9a2xJ2g_UL2oMzyQBGB1cA3nqdOrDYMPg2urHesHs0Dk5A@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <CAJ=z9a2xJ2g_UL2oMzyQBGB1cA3nqdOrDYMPg2urHesHs0Dk5A@mail.gmail.com>
X-ClientProxiedBy: MR2P264CA0118.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:33::34) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b58633b6-33bb-4ade-2351-08d8ad80a9dc
X-MS-TrafficTypeDiagnostic: DM5PR03MB2635:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB2635A2F6303A9B2DB3298F4E8FD60@DM5PR03MB2635.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: SPvNi6hGadKEe50oA1FQNU6dpxHmnL5JroLMhq9icxHh5VrXfWHyKKaltG0fYr1W7Xwb2Wf6b4Am5phBjmYTk092jOAIhpX6buezJWF5AVMWCwJQlyA8Y7H5Hca1CwUMS38EXMg4L1G+TV2qdgvFx01gPH+/q3QbtPMkGC0EzTZcw1cfb232LeYshRyWJnIg0Wk2JweR6KZhfst6+JyG76/ou7cTRFhCEDAl/qU/OG01e9ZReLWDZVXAzjCDdtrxOb3V6CMkz3KE5XpbZVNUEjpoOiuRoEwYN2Clpa91b4aUQWko7FUZLH80JhKmaIvW1i8oqWgi9f9KU7HaBU6ACU7PT2kSUtxgdNj9V9laP/2Q+4649MmDbdZCgcVzuuX2EZ+mRbT8uOAzBm5sruhIdg35JmbU31z/PswiHeVT9WBqh42PsDdshwscEjGYSekVKFhwen/CUy/TZep+ll9BWw==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(396003)(39860400002)(136003)(346002)(376002)(366004)(5660300002)(6486002)(16526019)(478600001)(8936002)(33716001)(26005)(7416002)(966005)(85182001)(54906003)(6666004)(86362001)(6916009)(4326008)(66556008)(2906002)(83380400001)(66476007)(53546011)(8676002)(186003)(956004)(1076003)(66946007)(9686003)(316002)(6496006);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?UEtLdFJ5ODNzS1F0MkdzQ0xWVkFDSEZsMlI1LzJqeGRNSFB2TWpnM2ZRVlhC?=
 =?utf-8?B?QjRMZCsxRWxBVHdKdWxDcXBQVkFKSGZlYUVGY0NVcFJiZHdQLzZUZi9jSnV4?=
 =?utf-8?B?VE9wNGFYVUxneDhNaFk2MWNhbjUxSzU4ODBFeTUyQyttK01Xd2lPRTNjQk8y?=
 =?utf-8?B?dFozblhIcTd6RG4yUi95Q0NGK2RTNXdXdmhHdGVyKzliVnVmanVmaUtMWk9v?=
 =?utf-8?B?dmRtUjhFSjRWZk96OUtpUm5Xc0xOSEt2T0c4N01zdnI5K1M2TElhai9JSTNm?=
 =?utf-8?B?WFJUZFhiNTY2clRqVnFYRmZZdEpFYnFkamkrUTluUjZzZDhsc2hKYXhnMWNm?=
 =?utf-8?B?QTNvTVpuZlpGR2dNUisxZE5Na0F4OG9LclVGYURjSmRicFdKRS83TWhRdVc1?=
 =?utf-8?B?dW9oc1FOaDhLcHR4eE5sSXRaRFljTlZGNW1VL2lSOXYxUXNDSklEaTJCY0k4?=
 =?utf-8?B?ZysvUXR5SlVtbEdLbS9NMlpZT0R5aFlIenFQckdCanBqMEpGZTVjdENHV2hm?=
 =?utf-8?B?UmdveWRCbkNNTEhzYjBBTENPaFlaa3lramQ4eE5CblZhN24wYjE1aTZIUjdj?=
 =?utf-8?B?Q3NuSEtNVlNaM2lQemFQYkc2N2tKczgvZERwU1I0aUQrSld5ZDV2ZGYweWxC?=
 =?utf-8?B?VU1YSUFNMjRKWjdnc1BxZCtKWC9oT1h4ODNlU2pmZjFCLzQyemxud2RVUmtH?=
 =?utf-8?B?MEFTRnQwL2IvRDBxU3ZBTGhMQVMzb3RyZ2E3NlhySDZuQUtqbDQ2amQyRkVQ?=
 =?utf-8?B?VjVOSmVsekFOa24zZldGNHlJMWpDV2tPS0RoY1JlYjBYSDBJNDRTaWFGTExx?=
 =?utf-8?B?RXQ5QVhQaDlKUlBSUEVmWjVMSlJpTFVDRVl1UU1Rc08rVHh0VzFBRXdmd3Vw?=
 =?utf-8?B?bmVZMWJ3VWlJLzFjaDdzNVRocW5VaGpoMGFjYnY3ZGlCK3RPLzJVZmN3V3pw?=
 =?utf-8?B?elUwTjBkRjBqNGhTTWZSZzhWSGJLWEdoYzJ6ZWVsK3VZbTlmTlIxcURwaDR0?=
 =?utf-8?B?Rnp2a3VMY2p0UkxpY2JXWkNSL3cvZE9SdDVOdlBGRzU2Sm1xVE51ZzFHM0pz?=
 =?utf-8?B?TzlSM3JMOGFyUmFIOE5mR0h3d015QWl5QzN0b1VtT3NMbFNhZjNQczZpSmF5?=
 =?utf-8?B?TG92OHZBQ2l4NUpaZjYwK2UwR2tCRjd3N1FSVWR3enAxOEVQY1o0TERwL3hS?=
 =?utf-8?B?UDY0RjdRc0ZuR1lSdlBmcDdMVHIxbFhPSjN6c2thMTVLYkdMeEsvdExnWXJG?=
 =?utf-8?B?RERPamJzYkhFcElkTG0rUURsQitPV2V3Ri9HK1VZYTRmU3p0RlllenNXNzlW?=
 =?utf-8?Q?8z2PGxgiwo8ppiPhVAvKclh7yIQnGqJdvV?=
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Dec 2020 11:38:58.9174
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: b58633b6-33bb-4ade-2351-08d8ad80a9dc
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: lBsaNe5qUyMsaPGPoadDOesVndJpp7gKPfcv4iu+TQEg5FKFph/ufrUCW/pbUvyvLi8NaiYKV/swEqK270hwKg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB2635
X-OriginatorOrg: citrix.com

On Thu, Dec 31, 2020 at 11:02:40AM +0000, Julien Grall wrote:
> On Thu, 31 Dec 2020 at 08:46, Roger Pau Monné <roger.pau@citrix.com> wrote:
> >
> > On Wed, Dec 30, 2020 at 11:30:03AM +0000, Julien Grall wrote:
> > > Hi Roger,
> > >
> > > On 29/12/2020 09:17, Roger Pau Monné wrote:
> > > > On Wed, Dec 23, 2020 at 04:32:01PM -0500, Rich Persaud wrote:
> > > > > On Dec 17, 2020, at 07:13, Jean-Philippe Ouellet <jpo@vt.edu> wrote:
> > > > > > On Wed, Dec 16, 2020 at 2:37 PM Christopher Clark
> > > > > > <christopher.w.clark@gmail.com> wrote:
> > > > > > > Hi all,
> > > > > > >
> > > > > > > I have written a page for the OpenXT wiki describing a proposal for
> > > > > > > initial development towards the VirtIO-Argo transport driver, and the
> > > > > > > related system components to support it, destined for OpenXT and
> > > > > > > upstream projects:
> > > > > > >
> > > > > > > https://openxt.atlassian.net/wiki/spaces/~cclark/pages/1696169985/VirtIO-Argo+Development+Phase+1
> > > >
> > > > Thanks for the detailed document, I've taken a look and there's indeed
> > > > a lot of work to do listed there :). I have some suggestions and
> > > > questions.
> > > >
> > > > Overall I think it would be easier for VirtIO to take a new transport
> > > > if it's not tied to a specific hypervisor. The way Argo is implemented
> > > > right now is using hypercalls, which is a mechanism specific to Xen.
> > > The concept of a hypervisor call is not Xen-specific. Any other hypervisor can
> > > easily implement them. At least this is the case on Arm because we have an
> > > instruction 'hvc' that acts the same way as a syscall but for the
> > > hypervisor.
> > >
> > > What we would need to do is reserve a range for Argo. It should be
> > > possible to do it on Arm thanks to the SMCCC (see [1]).
> > >
> > > I am not sure whether you have something similar on x86.
> >
> > On x86 Intel has vmcall and AMD vmmcall, but those are only available
> > to HVM guests.
> 
> Right, as it would be for other architectures if one decided to implement
> PV (or binary-translated) guests.
> While it may be possible to use a different way to communicate on x86
> (see more below), I am not sure this would be the case for other
> architectures.
> The closest thing to MSRs on Arm would be the System Registers, but I
> am not aware of a range of IDs there that could be used by software.

I don't know enough about Arm to make a helpful statement here. My
point was to keep in mind that it might be interesting to try to
define a hypervisor-agnostic mediated data exchange interface, so
that whatever is implemented in the VirtIO transport layer could also
be used by other hypervisors without having to modify the transport
layer itself. If that's better done using a vmcall interface, that's
fine, as long as we provide a sane interface that other hypervisors
can (easily) implement.

> >
> > > > IMO it might be easier to start by having an Argo interface using
> > > > MSRs, that all hypervisors can implement, and then base the VirtIO
> > > > implementation on top of that interface.
> > > My concern is that the interface would need to be arch-specific. Would you mind
> > > explaining what the problem is with implementing something based on vmcall?
> >
> > More of a recommendation than a concern, but I think it would be more
> > attractive for upstream if we could provide an interface to Argo (or
> > hypervisor-mediated data exchange) that's not too tied into Xen
> > specifics.
> 
> I agree with this statement. We also need an interface that is ideally
> not too arch-specific, otherwise it will be more complicated to get it
> adopted on other architectures.
> For instance, at the moment, I don't really see what else could be used
> on Arm (other than MMIO and PCI) if you want to support PV (or
> binary-translated) guests.

My recommendation was mostly to make Argo easier to propose as a
hypervisor-agnostic mediated data exchange, which in turn could make
adding a VirtIO transport layer based on it easier. If that's not the
case, let's just forget about this.

> > My suggestion for using MSRs was because I think every x86 hypervisor
> > must have the logic to trap and handle some of those, every OS has
> > helpers to read/write MSRs, and those instructions are not
> > vendor-specific.
> 
> In order to use MSRs, you would need to reserve a range of IDs. Does
> x86 have a range of IDs that can be used for software purposes (i.e.
> that current and future processors will not use)?

There's a range of MSRs (0x40000000-0x400000FF) that Intel guarantees
will always be invalid on bare metal, so VMware, Hyper-V and others
have started using this range to add virtualization-specific MSRs.
You can grep for the HV_X64_MSR_* defines in Xen for some of the
Hyper-V ones.

Roger.


From xen-devel-bounces@lists.xenproject.org Thu Dec 31 13:28:52 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158010-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 158010: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
X-Osstest-Versions-That:
    xen=98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 31 Dec 2020 13:28:38 +0000

flight 158010 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158010/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 157865
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157968
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157968
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157968
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157968
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157968
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157968
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157968
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157968
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157968
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157968
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157968
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
baseline version:
 xen                  98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc

Last test of basis   158010  2020-12-31 01:51:34 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Thu Dec 31 15:29:34 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158012-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 158012: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a05f8ecd88f15273d033b6f044b850a8af84a5b8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 31 Dec 2020 15:29:16 +0000

flight 158012 qemu-mainline real [real]
flight 158053 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/158012/
http://logs.test-lab.xenproject.org/osstest/logs/158053/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  133 days
Failing since        152659  2020-08-21 14:07:39 Z  132 days  273 attempts
Testing same since   157670  2020-12-18 13:57:58 Z   13 days   26 attempts

------------------------------------------------------------
323 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 79306 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 31 15:47:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 31 Dec 2020 15:47:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60568.106286 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kv0AH-00078W-8b; Thu, 31 Dec 2020 15:47:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60568.106286; Thu, 31 Dec 2020 15:47:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kv0AH-00078P-5H; Thu, 31 Dec 2020 15:47:01 +0000
Received: by outflank-mailman (input) for mailman id 60568;
 Thu, 31 Dec 2020 15:46:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pjRP=GD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kv0AF-00078K-Iw
 for xen-devel@lists.xenproject.org; Thu, 31 Dec 2020 15:46:59 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id eec76065-8527-461c-a60d-83c4b878d8cd;
 Thu, 31 Dec 2020 15:46:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eec76065-8527-461c-a60d-83c4b878d8cd
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609429617;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=nrPjC/pKfEsU8MgX7FHzMHRBNM7Eu799QxtAXqkwuPM=;
  b=NdOqd3EPfOaKVZqz9Q6EJ265Es2QQoWYkD87aH9vvgJREHJqh+LBbFCJ
   6uu2TaUelWBQErBXt139v+RQlDG2Q1PmDEghSQ8NtiCshl2eqBycfOoqo
   e8JPuz5NfPyl3pAEZkOS23gnOdVWucWu5VAVr8qrhC8rcU08PEnn8y5XQ
   A=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.2
X-MesageID: 34214217
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,464,1599537600"; 
   d="scan'208";a="34214217"
Subject: Re: [PATCH] xen: remove the usage of the P ar option
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
CC: <xen-devel@lists.xenproject.org>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201230173446.1768-1-roger.pau@citrix.com>
 <b90b93d0-ea83-bc00-6dc0-cbe9e7cfa1ce@citrix.com>
 <20201231092015.ojmlwfvqky6uqyig@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <6a761459-7823-077b-2cee-b6d685eb10ee@citrix.com>
Date: Thu, 31 Dec 2020 15:46:50 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201231092015.ojmlwfvqky6uqyig@Air-de-Roger>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 31/12/2020 09:20, Roger Pau Monné wrote:
> On Wed, Dec 30, 2020 at 06:32:58PM +0000, Andrew Cooper wrote:
>> On 30/12/2020 17:34, Roger Pau Monne wrote:
>>> It's not part of the POSIX standard [0] and as such non GNU ar
>>> implementations don't usually have it.
>>>
>>> It's not relevant for the use case here anyway, as the archive file is
>>> recreated every time due to the rm invocation before the ar call. No
>>> file name matching should happen so matching using the full path name
>>> or a relative one should yield the same result.
>>>
>>> This fixes the build on FreeBSD.
>>>
>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>, although...
>>
>> We really need some kind of BSD build in CI.  This kind of breakage
>> shouldn't get into master to begin with.
> Fully agree. I'm not that familiar with gitlab CI, but since we have
> our own runners there, could we also launch VMs instead of just using
> containers?
>
> It might be easier to integrate with osstest in the future, since
> FreeBSD has now switched to git.
>
>>> [0] https://pubs.opengroup.org/onlinepubs/9699919799/utilities/ar.html
>>> ---
>>> I'm unsure whether the r and s options are also needed, since they
>>> seem to only be relevant when updating a library, and Xen build system
>>> always removes the old library prior to any ar call.
>> ... I think r should be dropped, because we're not replacing any files. 
>> However, I expect the index file is still relevant, because without it,
>> you've got to perform an O(n) search through the archive to find a file.
> Right, the description of 's' between opengroup and the Linux man page
> seems to be slightly different. From the opengroup description it seems
> like s should be used to force the generation of a symbol table when
> no changes are made to the archive, but otherwise ar should already
> generate such symbol table.

Ok - are you happy for me to commit with dropping the r and s?

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Dec 31 16:05:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 31 Dec 2020 16:05:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60574.106299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kv0Rz-0000xa-RA; Thu, 31 Dec 2020 16:05:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60574.106299; Thu, 31 Dec 2020 16:05:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kv0Rz-0000xT-N5; Thu, 31 Dec 2020 16:05:19 +0000
Received: by outflank-mailman (input) for mailman id 60574;
 Thu, 31 Dec 2020 16:05:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eIAU=GD=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kv0Rz-0000xL-08
 for xen-devel@lists.xenproject.org; Thu, 31 Dec 2020 16:05:19 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 930196ce-d2ee-4062-bdbb-f62984638c9a;
 Thu, 31 Dec 2020 16:05:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 930196ce-d2ee-4062-bdbb-f62984638c9a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609430717;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=4uDATtCqWB5GcwLc2lQU7nn/jQWnDygbWXQ4SgblC20=;
  b=NDHq3TFubw0woQuyF2oufmFL6tluz07LjaIeOfL0YM+d42JJLcwiv0qM
   G/dT891pzAnZTHvgj2V6YUo6qEvWyjeta+PuVJmTyFPYxO8kkZBWNj6uF
   7yPxZzQFvljd+z+PCy5iOEXBKscUWqS2hAqMARL/9DUbMpcd/NZ5xM/oF
   E=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.2
X-MesageID: 34252489
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,464,1599537600"; 
   d="scan'208";a="34252489"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ocPwi+vkVZuIUceVT9eof+2hSuosuCE0FhHhKbN7xGg=;
 b=aVm0dy2qP628tAZLt3i+mH+S85mvAb7DUwVGLcFKt92nxygR8AL6lhQjggBthkH9I2918qXL0nvOjKXmhGyVEqZoAnZAZRkS2FHfQQhOYOF6nMPalrNOXQsDPzBid0svXW85zoKBA9HAHU+qI79f3cW/L4qSNLlxZSZfRVjneZs=
Date: Thu, 31 Dec 2020 17:05:06 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: <xen-devel@lists.xenproject.org>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] xen: remove the usage of the P ar option
Message-ID: <20201231160506.247ctn2mnfdgcddx@Air-de-Roger>
References: <20201230173446.1768-1-roger.pau@citrix.com>
 <b90b93d0-ea83-bc00-6dc0-cbe9e7cfa1ce@citrix.com>
 <20201231092015.ojmlwfvqky6uqyig@Air-de-Roger>
 <6a761459-7823-077b-2cee-b6d685eb10ee@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <6a761459-7823-077b-2cee-b6d685eb10ee@citrix.com>
X-ClientProxiedBy: LO2P265CA0331.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a4::31) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f25044c1-bbc6-41ab-7eed-08d8ada5db0e
X-MS-TrafficTypeDiagnostic: DM6PR03MB4476:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB447641850AB59581DBFC437B8FD60@DM6PR03MB4476.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Dec 2020 16:05:12.8069
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-Network-Message-Id: f25044c1-bbc6-41ab-7eed-08d8ada5db0e
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: MbVn0Kw851/938iZ4g+tQQE5K+dVCBiaGEk0KSUfXo1d8YbprHQlaGrmwm/Tif2F7iW4QjifK8HeTvkQe/b+Gw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4476
X-OriginatorOrg: citrix.com

On Thu, Dec 31, 2020 at 03:46:50PM +0000, Andrew Cooper wrote:
> On 31/12/2020 09:20, Roger Pau Monné wrote:
> > On Wed, Dec 30, 2020 at 06:32:58PM +0000, Andrew Cooper wrote:
> >> On 30/12/2020 17:34, Roger Pau Monne wrote:
> >>> It's not part of the POSIX standard [0] and as such non GNU ar
> >>> implementations don't usually have it.
> >>>
> >>> It's not relevant for the use case here anyway, as the archive file is
> >>> recreated every time due to the rm invocation before the ar call. No
> >>> file name matching should happen so matching using the full path name
> >>> or a relative one should yield the same result.
> >>>
> >>> This fixes the build on FreeBSD.
> >>>
> >>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> >> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>, although...
> >>
> >> We really need some kind of BSD build in CI.  This kind of breakage
> >> shouldn't get into master to begin with.
> > Fully agree. I'm not that familiar with gitlab CI, but since we have
> > our own runners there, could we also launch VMs instead of just using
> > containers?
> >
> > It might be easier to integrate with osstest in the future, since
> > FreeBSD has now switched to git.
> >
> >>> [0] https://pubs.opengroup.org/onlinepubs/9699919799/utilities/ar.html
> >>> ---
> >>> I'm unsure whether the r and s options are also needed, since they
> >>> seem to only be relevant when updating a library, and Xen build system
> >>> always removes the old library prior to any ar call.
> >> ... I think r should be dropped, because we're not replacing any files. 
> >> However, I expect the index file is still relevant, because without it,
> >> you've got to perform an O(n) search through the archive to find a file.
> > Right, the description of 's' between opengroup and the Linux man page
> > seems to be slightly different. From the opengroup description it seems
> > like s should be used to force the generation of a symbol table when
> > no changes are made to the archive, but otherwise ar should already
> > generate such symbol table.
> 
> Ok - are you happy for me to commit with dropping the r and s?

So after testing this, I think we need at least the r option, as we
want to add new files to the archive (regardless of whether the
archive already exists). That seems to be mandatory on FreeBSD, as
calling 'ar c' on its own is not valid.

I think s can be dropped, as ar will generate the symbol table by
default unless S is specified. AFAICT, s should only be needed to
force regeneration of the symbol table when no files are being added
and the archive has been stripped.
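For reference, the pattern we end up with boils down to something like
the sketch below (libfoo.a and the member names are placeholders, not
the actual Xen build targets):

```shell
# Stand-ins for real object files; ar archives arbitrary files, so
# dummy members are enough to demonstrate the invocation.
printf 'dummy' > foo.o
printf 'dummy' > bar.o

# The build system removes the old archive before every ar call, so
# plain insertion ("r") suffices: no member replacement or path
# matching ("P") can occur, and the symbol table is generated by
# default when creating the archive, making "s" redundant.
rm -f libfoo.a
ar cr libfoo.a foo.o bar.o   # portable: accepted by GNU ar and BSD ar

# List the archive members to confirm both files were added.
ar t libfoo.a
```

Note that 'ar c' without 'r' is exactly the form FreeBSD's ar rejects,
which is why the r modifier has to stay.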

If so would you mind adding:

"While there also drop the s option, as ar will already generate a
symbol table by default when creating the archive."

to the commit message, if you end up dropping s?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Dec 31 16:15:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 31 Dec 2020 16:15:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60580.106310 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kv0bN-0001tU-Oq; Thu, 31 Dec 2020 16:15:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60580.106310; Thu, 31 Dec 2020 16:15:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kv0bN-0001tN-Lj; Thu, 31 Dec 2020 16:15:01 +0000
Received: by outflank-mailman (input) for mailman id 60580;
 Thu, 31 Dec 2020 16:15:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pjRP=GD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kv0bM-0001tI-Cl
 for xen-devel@lists.xenproject.org; Thu, 31 Dec 2020 16:15:00 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 08e21236-71f5-4eca-80b8-6480282968ef;
 Thu, 31 Dec 2020 16:14:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 08e21236-71f5-4eca-80b8-6480282968ef
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609431299;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=V4zfVjWL8SLawwLXjOlKOXTnvYY9uxVxfzA9v+xAbtM=;
  b=VvclyvpqJZFdVhQo3UemXnMpQg8CcPtbdkPyMF+C6sG+AehPDh2JDOb9
   8/BqLGMEbGRaqbQdYJCuYx2kXwW6ZMQarvc1PLrzjVTgP2VYTn3i0x3w8
   CADGMBDxD6ljuKrru01moqUqTOnCndlpUETBJRzjXAZ7XnBEOpe+6CHJt
   o=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.2
X-MesageID: 34446950
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,464,1599537600"; 
   d="scan'208";a="34446950"
Subject: Re: [PATCH] xen: remove the usage of the P ar option
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
CC: <xen-devel@lists.xenproject.org>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201230173446.1768-1-roger.pau@citrix.com>
 <b90b93d0-ea83-bc00-6dc0-cbe9e7cfa1ce@citrix.com>
 <20201231092015.ojmlwfvqky6uqyig@Air-de-Roger>
 <6a761459-7823-077b-2cee-b6d685eb10ee@citrix.com>
 <20201231160506.247ctn2mnfdgcddx@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <ab442464-8c04-d4e1-13fc-cc774b2ca09b@citrix.com>
Date: Thu, 31 Dec 2020 16:14:52 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201231160506.247ctn2mnfdgcddx@Air-de-Roger>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL03.citrite.net (10.13.108.165)

On 31/12/2020 16:05, Roger Pau Monné wrote:
> On Thu, Dec 31, 2020 at 03:46:50PM +0000, Andrew Cooper wrote:
>> On 31/12/2020 09:20, Roger Pau Monné wrote:
>>> On Wed, Dec 30, 2020 at 06:32:58PM +0000, Andrew Cooper wrote:
>>>> On 30/12/2020 17:34, Roger Pau Monne wrote:
>>>>> It's not part of the POSIX standard [0] and as such non-GNU ar
>>>>> implementations don't usually have it.
>>>>>
>>>>> It's not relevant for the use case here anyway, as the archive file is
>>>>> recreated every time due to the rm invocation before the ar call. No
>>>>> file name matching should happen, so matching using the full path name
>>>>> or a relative one should yield the same result.
>>>>>
>>>>> This fixes the build on FreeBSD.
>>>>>
>>>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>>> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>, although...
>>>>
>>>> We really need some kind of BSD build in CI.  This kind of breakage
>>>> shouldn't get into master to begin with.
>>> Fully agree. I'm not that familiar with gitlab CI, but since we have
>>> our own runners there, could we also launch VMs instead of just using
>>> containers?
>>>
>>> It might be easier to integrate with osstest in the future, since
>>> FreeBSD has now switched to git.
>>>
>>>>> [0] https://pubs.opengroup.org/onlinepubs/9699919799/utilities/ar.html
>>>>> ---
>>>>> I'm unsure whether the r and s options are also needed, since they
>>>>> seem to only be relevant when updating a library, and Xen build system
>>>>> always removes the old library prior to any ar call.
>>>> ... I think r should be dropped, because we're not replacing any files. 
>>>> However, I expect the index file is still relevant, because without it,
>>>> you've got to perform an O(n) search through the archive to find a file.
>>> Right, the descriptions of 's' in opengroup and the Linux man page
>>> seem to differ slightly. From the opengroup description, it seems
>>> like s should be used to force the generation of a symbol table when
>>> no changes are made to the archive, but otherwise ar should already
>>> generate such a symbol table.
>> Ok - are you happy for me to commit with dropping the r and s?
> So after testing this, I think we need at least the r option, as we
> want to add new files to the archive (regardless of whether it exists
> or not). That seems to be mandatory on FreeBSD, as calling 'ar c' is
> not valid.
>
> I think s can be dropped, as ar will generate the symbol table by
> default unless S is specified. s should just be used to force the
> generation of a symbol table when not adding files and the archive has
> been stripped AFAICT.
>
> If so would you mind adding:
>
> "While there also drop the s option, as ar will already generate a
> symbol table by default when creating the archive."
>
> To the commit message if you end up dropping s?

Will do.

~Andrew
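[Editor's note: the ar invocation pattern discussed above can be sketched as follows. This is a quick illustration only, using plain text files in place of the build's .o files; the archive name libdemo.a and the member names are made up for the example.]

```shell
# Create two dummy members; a real build would use object files.
echo 'int a;' > a.txt
echo 'int b;' > b.txt

# The build pattern under discussion: delete the archive, then recreate
# it from scratch.  'r' inserts (or replaces) members and 'c' suppresses
# the "creating archive" diagnostic -- both are POSIX, unlike the
# GNU-only 'P' modifier, which only changes how existing member names
# are matched and so is irrelevant when the archive is always recreated.
rm -f libdemo.a
ar rc libdemo.a a.txt b.txt

# List the members that were stored.
ar t libdemo.a
```

Since the archive is created fresh each time, `ar` also builds the symbol table by default (unless `S` is passed), which is why the explicit `s` modifier is redundant here.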


From xen-devel-bounces@lists.xenproject.org Thu Dec 31 17:11:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 31 Dec 2020 17:11:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60589.106329 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kv1TN-0006v9-C5; Thu, 31 Dec 2020 17:10:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60589.106329; Thu, 31 Dec 2020 17:10:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kv1TN-0006v2-8i; Thu, 31 Dec 2020 17:10:49 +0000
Received: by outflank-mailman (input) for mailman id 60589;
 Thu, 31 Dec 2020 17:10:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pjRP=GD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kv1TL-0006ux-Hz
 for xen-devel@lists.xenproject.org; Thu, 31 Dec 2020 17:10:47 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1d2de5a5-3251-4924-a231-1831b0c02e29;
 Thu, 31 Dec 2020 17:10:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d2de5a5-3251-4924-a231-1831b0c02e29
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1609434646;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=NxwI/z0UNJxAx6ac/8QROsOGpOgOIjR2m928kxeN3v8=;
  b=JtcTPD3pEHowAguYbJFxCxhHH0tL9wwrULHij9HVdIx9zH8KfC/gkadj
   L97Dfwj/3qJpIeiKsHyj10+iol1nMRfZHEDvwfBy9oWko891ELYWMsRKM
   NeG8Ag5meIsajxjMOfxvTUQ2H1Tqj69LaYH7noLn6+mcEipJln0C2dk5B
   k=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 6P6FuERh7nPjVoQ1Yv0U73E7l7jt8DzwO8aLfj6bTnUjXNaBuO+k46eqAFRQWFvNq457nVrE7h
 qBpTLHKu+8xUvQipuQL/SqAD4iDl7TCz4DgmGC1/M39LW18B67D8Db3AhgIaXgc8U5ZL0/YSA+
 4uEYt7i+SMJKdMz5DRfmuKqxImbucGRsLQSyCirBWdfE6WItaC1vA0Lei8jhhdLD/nbqc7ILXx
 LbcwYOnrejaeHJaRa0+AgC5ouoHDWpOhUAkPoEp3kTMS8cmKLw94aKHOrFh/XgHcogvuHDPmY4
 ziY=
X-SBRS: 5.2
X-MesageID: 35481531
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,464,1599537600"; 
   d="scan'208";a="35481531"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Tamas K Lengyel
	<tamas@tklengyel.com>
Subject: [PATCH] x86/p2m: Fix paging_gva_to_gfn() for nested virt
Date: Thu, 31 Dec 2020 17:10:21 +0000
Message-ID: <20201231171021.10361-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

nestedhap_walk_L1_p2m() takes guest physical addresses, not frame numbers.
This means the l2 input is off-by-PAGE_SHIFT, as is the l1 value eventually
returned to the caller.

Delete the misleading comment as well.

Fixes: bab2bd8e222de ("xen/nested_p2m: Don't walk EPT tables with a regular PT walker")
Reported-by: Tamas K Lengyel <tamas@tklengyel.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Tamas K Lengyel <tamas@tklengyel.com>
---
 xen/arch/x86/mm/p2m.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 487959b121..89a2b55c66 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1784,6 +1784,7 @@ unsigned long paging_gva_to_gfn(struct vcpu *v,
     if ( is_hvm_vcpu(v) && paging_mode_hap(v->domain) && nestedhvm_is_n2(v) )
     {
         unsigned long l2_gfn, l1_gfn;
+        paddr_t l1_gpa;
         struct p2m_domain *p2m;
         const struct paging_mode *mode;
         uint8_t l1_p2ma;
@@ -1798,8 +1799,8 @@ unsigned long paging_gva_to_gfn(struct vcpu *v,
         if ( l2_gfn == gfn_x(INVALID_GFN) )
             return gfn_x(INVALID_GFN);
 
-        /* translate l2 guest gfn into l1 guest gfn */
-        rv = nestedhap_walk_L1_p2m(v, l2_gfn, &l1_gfn, &l1_page_order, &l1_p2ma,
+        rv = nestedhap_walk_L1_p2m(v, pfn_to_paddr(l2_gfn), &l1_gpa,
+                                   &l1_page_order, &l1_p2ma,
                                    1,
                                    !!(*pfec & PFEC_write_access),
                                    !!(*pfec & PFEC_insn_fetch));
@@ -1807,6 +1808,8 @@ unsigned long paging_gva_to_gfn(struct vcpu *v,
         if ( rv != NESTEDHVM_PAGEFAULT_DONE )
             return gfn_x(INVALID_GFN);
 
+        l1_gfn = paddr_to_pfn(l1_gpa);
+
         /*
          * Sanity check that l1_gfn can be used properly as a 4K mapping, even
          * if it mapped by a nested superpage.
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Thu Dec 31 18:40:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 31 Dec 2020 18:40:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60598.106353 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kv2rg-0005Pu-7i; Thu, 31 Dec 2020 18:40:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60598.106353; Thu, 31 Dec 2020 18:40:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kv2rg-0005Pn-4b; Thu, 31 Dec 2020 18:40:00 +0000
Received: by outflank-mailman (input) for mailman id 60598;
 Thu, 31 Dec 2020 18:39:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kv2re-0005Pe-Uq; Thu, 31 Dec 2020 18:39:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kv2re-0004A8-M9; Thu, 31 Dec 2020 18:39:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kv2re-0001vm-CE; Thu, 31 Dec 2020 18:39:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kv2re-0006t3-Bh; Thu, 31 Dec 2020 18:39:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2q5aAwDTp1Jkco7yPQQHl5rt94/SqJwLs2QVcIKoMkg=; b=kLqepGizzBIgwI2/SjeCWfOMjZ
	8wyXfd4cDL+AcFWH24b0EcW1jbw21OIDKcRIl8ZlFucO8cTPGVQhWsngS6oh5SjPZ8yAP92mKB55I
	jZz8lpK+CMbzmn7Lcn+HBcDgVxWCad4G2yI0cuTkFYqREkKhQFp9X45GzuG5pJcBxiuU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158025-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 158025: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate/x10:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f6e1ea19649216156576aeafa784e3b4cee45549
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 31 Dec 2020 18:39:58 +0000

flight 158025 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158025/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 152332
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 19 guest-localmigrate/x10  fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl          11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-xsm      11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                f6e1ea19649216156576aeafa784e3b4cee45549
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  152 days
Failing since        152366  2020-08-01 20:49:34 Z  151 days  268 attempts
Testing same since   158025  2020-12-31 04:36:23 Z    0 days    1 attempts

------------------------------------------------------------
4330 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 976851 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Dec 31 19:52:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 31 Dec 2020 19:52:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60608.106368 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kv3zw-0003dN-QL; Thu, 31 Dec 2020 19:52:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60608.106368; Thu, 31 Dec 2020 19:52:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kv3zw-0003dG-MH; Thu, 31 Dec 2020 19:52:36 +0000
Received: by outflank-mailman (input) for mailman id 60608;
 Thu, 31 Dec 2020 19:52:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kv3zv-0003d8-5m; Thu, 31 Dec 2020 19:52:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kv3zu-0005PK-VG; Thu, 31 Dec 2020 19:52:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kv3zu-0005ZL-Ns; Thu, 31 Dec 2020 19:52:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kv3zu-0003od-MX; Thu, 31 Dec 2020 19:52:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-158056-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 158056: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=1516ecd6f55fe3608f374f4f2548491472d1c9a1
X-Osstest-Versions-That:
    xen=98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 31 Dec 2020 19:52:34 +0000

flight 158056 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/158056/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1516ecd6f55fe3608f374f4f2548491472d1c9a1
baseline version:
 xen                  98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc

Last test of basis   157813  2020-12-22 16:00:27 Z    9 days
Testing same since   158056  2020-12-31 18:00:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images



Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   98d4d6d8a6..1516ecd6f5  1516ecd6f55fe3608f374f4f2548491472d1c9a1 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Dec 31 20:00:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 31 Dec 2020 20:00:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.60617.106383 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kv47F-0004c7-Jx; Thu, 31 Dec 2020 20:00:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 60617.106383; Thu, 31 Dec 2020 20:00:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kv47F-0004c0-Gd; Thu, 31 Dec 2020 20:00:09 +0000
Received: by outflank-mailman (input) for mailman id 60617;
 Thu, 31 Dec 2020 20:00:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=i6T7=GD=tklengyel.com=tamas@srs-us1.protection.inumbo.net>)
 id 1kv47E-0004Y9-AK
 for xen-devel@lists.xenproject.org; Thu, 31 Dec 2020 20:00:08 +0000
Received: from MTA-05-4.privateemail.com (unknown [68.65.122.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d45a2668-e911-40ec-9fc7-3dc6ca1f9412;
 Thu, 31 Dec 2020 20:00:06 +0000 (UTC)
Received: from MTA-05.privateemail.com (localhost [127.0.0.1])
 by MTA-05.privateemail.com (Postfix) with ESMTP id 4718760236
 for <xen-devel@lists.xenproject.org>; Thu, 31 Dec 2020 15:00:05 -0500 (EST)
Received: from mail-wr1-f47.google.com (unknown [10.20.151.221])
 by MTA-05.privateemail.com (Postfix) with ESMTPA id 10A7E6018A
 for <xen-devel@lists.xenproject.org>; Thu, 31 Dec 2020 20:00:05 +0000 (UTC)
Received: by mail-wr1-f47.google.com with SMTP id d13so20781114wrc.13
 for <xen-devel@lists.xenproject.org>; Thu, 31 Dec 2020 12:00:04 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d45a2668-e911-40ec-9fc7-3dc6ca1f9412
MIME-Version: 1.0
References: <20201231171021.10361-1-andrew.cooper3@citrix.com>
In-Reply-To: <20201231171021.10361-1-andrew.cooper3@citrix.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Thu, 31 Dec 2020 14:59:34 -0500
X-Gmail-Original-Message-ID: <CABfawh=cnYpFDGP89=VfJ34fPeVufi7LixeNaTHEMRWHsxSSAw@mail.gmail.com>
Message-ID: <CABfawh=cnYpFDGP89=VfJ34fPeVufi7LixeNaTHEMRWHsxSSAw@mail.gmail.com>
Subject: Re: [PATCH] x86/p2m: Fix paging_gva_to_gfn() for nested virt
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <JBeulich@suse.com>, 
	Roger Pau Monné <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Virus-Scanned: ClamAV using ClamSMTP

On Thu, Dec 31, 2020 at 12:11 PM Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
>
> nestedhap_walk_L1_p2m() takes guest physical addresses, not frame numbers.
> This means the l2 input is off-by-PAGE_SHIFT, as is the l1 value eventually
> returned to the caller.
>
> Delete the misleading comment as well.
>
> Fixes: bab2bd8e222de ("xen/nested_p2m: Don't walk EPT tables with a regular PT walker")
> Reported-by: Tamas K Lengyel <tamas@tklengyel.com>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> CC: Tamas K Lengyel <tamas@tklengyel.com>

Thanks, the issue is resolved with this patch applied.

Tested-by: Tamas K Lengyel <tamas@tklengyel.com>


